Texture Synthesis / Targeting
moultano
<div class="IPBDescription">A new tool I've written.</div>Hey folks.
For the last week or two I've been working on a tool to automate some of the texture creation process for NS2, or games in general. The idea is a modified form of the algorithms from <a href="http://graphics.cs.cmu.edu/people/efros/research/EfrosLeung.html" target="_blank">Efros and Leung</a>. Essentially, the tool looks at a target image and tries to find bits of a source image to match it. It then fills in the image this way, synthesizing a new texture. You can also tell it to ignore the contents of the target image and have it synthesize a new texture from scratch.
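For anyone curious how the matching works under the hood, here's a rough sketch of the core neighborhood search in Java. This is illustrative only, not the actual Targeter source; the names are made up, and the real tool also blends in a term that matches the target image's own contents (controlled by the <b>a</b> parameter) and is smarter about how it searches.
CODE
import java.awt.image.BufferedImage;

class SynthesisSketch {
    /** Pick a source pixel whose neighborhood best matches the already-filled
     *  pixels around (tx, ty) in the target (Efros/Leung-style matching). */
    static int bestMatch(BufferedImage source, BufferedImage target,
                         boolean[][] filled, int tx, int ty, int radius) {
        double bestDist = Double.MAX_VALUE;
        int bestRgb = 0;
        for (int sy = 0; sy < source.getHeight(); sy++) {
            for (int sx = 0; sx < source.getWidth(); sx++) {
                double dist = 0;
                for (int dy = -radius; dy <= radius; dy++) {
                    for (int dx = -radius; dx <= radius; dx++) {
                        // Wrap coordinates so the synthesized result tiles.
                        int ux = Math.floorMod(tx + dx, target.getWidth());
                        int uy = Math.floorMod(ty + dy, target.getHeight());
                        if (!filled[ux][uy]) continue;   // only compare pixels laid down so far
                        int vx = Math.floorMod(sx + dx, source.getWidth());
                        int vy = Math.floorMod(sy + dy, source.getHeight());
                        dist += rgbDistance(target.getRGB(ux, uy), source.getRGB(vx, vy));
                    }
                }
                if (dist < bestDist) {
                    bestDist = dist;
                    bestRgb = source.getRGB(sx, sy);
                }
            }
        }
        return bestRgb;
    }

    /** Squared distance between two packed RGB values. */
    static double rgbDistance(int a, int b) {
        int dr = ((a >> 16) & 0xFF) - ((b >> 16) & 0xFF);
        int dg = ((a >> 8) & 0xFF) - ((b >> 8) & 0xFF);
        int db = (a & 0xFF) - (b & 0xFF);
        return dr * dr + dg * dg + db * db;
    }
}
Doing a search like that for every output pixel is why a full texture takes on the order of minutes to hours to synthesize.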
Possible uses:<ul><li><b>Making a new texture from scratch:</b> Pick a texture you like and set it as the source. Then make an image the size of your desired output, set it as the target, and set <b>a</b> to 0 so that it ignores the contents.</li><li><b>Sketching a texture and then filling in the details:</b> Set a completed texture as the source and your sketch as the target. I haven't gotten this to work very well yet, so if you get good results let me know how! :)</li><li><b>Filling in normal/specularity channels:</b> Set a completed texture with normals and specularity as the source, and a completed texture with just the albedo as the target. I haven't tried this at all yet, but I wouldn't expect it to work well unless the target and source textures share the same elements.</li></ul>Here are the <a href="http://code.google.com/p/texture-targeter/wiki/Instructions" target="_blank">detailed instructions</a> and the <a href="http://code.google.com/p/texture-targeter/downloads/list" target="_blank">download</a>.
Play around! Show me what you make! :)
Comments
Artists require GUIs!
Also, if it is written in Java you could very easily publish it to the web so people can run it from within browsers. Some people (like me) might be more inclined to try it out that way.
Sounds like a good idea though, I'm curious to see what it does/how it works.
QUOTE:
Also, if it is written in Java you could very easily publish it to the web so people can run it from within browsers.
Does the JVM run in a separate process/thread in Firefox? This can take an hour or so to run. I'll eventually be adding a visualization of the pixels it has laid down so far, although I suspect you'll always be executing it from the command line.
QUOTE (TychoCelchuuu @ Feb 12 2009, 10:53 AM):
Do you have any examples of before and after images? Does it automatically tile, or do I have to do that manually? This sounds a bit like Luxology's ImageSynth, only without any control on my end.
Unfortunately, the texture I've been using for testing is one from NS2, so I don't think I can just drop the results here. It should be able to reproduce anything on <a href="http://graphics.cs.cmu.edu/people/efros/research/EfrosLeung.html" target="_blank">this page</a> and a whole lot more. It's set up to automatically tile, although at the moment it lays down the pixels in scanline order, which produces some artifacts at the edges. That's probably my next thing to fix.
That ImageSynth thing looks cool, although I'd say in some ways this gives you more control over the final image. Basically, this tool doesn't let you specify anything about the source image: you give it the pixels and it decides what to do with them. However, I think it gives you a lot more control over the final output by letting you "target" the pixels against something you've sketched up. For instance, if you know you want certain features from the source in certain places in your output, you can just copy-paste them into place and let it fill in the rest. Another feature I'll add next is to let you specify an image where the brightness indicates how important each pixel in your target image is to match. This will allow it to do hole filling and such.
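To make the mask idea concrete: the brightness would just scale how much each target pixel contributes to the matching error, so dark regions are free to be filled with anything (hole filling) while bright regions are forced to match. A hypothetical sketch, reusing the rgbDistance helper from the sketch above:
CODE
/** Error between the source neighborhood at (sx, sy) and the target neighborhood
 *  at (tx, ty), with each pixel weighted by the brightness of the target mask. */
static double weightedNeighborhoodError(BufferedImage source, int sx, int sy,
                                        BufferedImage target, BufferedImage mask,
                                        int tx, int ty, int radius) {
    double error = 0;
    for (int dy = -radius; dy <= radius; dy++) {
        for (int dx = -radius; dx <= radius; dx++) {
            int ux = Math.floorMod(tx + dx, target.getWidth());
            int uy = Math.floorMod(ty + dy, target.getHeight());
            int vx = Math.floorMod(sx + dx, source.getWidth());
            int vy = Math.floorMod(sy + dy, source.getHeight());
            // Use the mask's blue channel (0..255) as a per-pixel importance weight.
            double weight = (mask.getRGB(ux, uy) & 0xFF) / 255.0;
            error += weight * rgbDistance(target.getRGB(ux, uy), source.getRGB(vx, vy));
        }
    }
    return error;
}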
Umm. Not positive.
The project I worked on is here... <a href="http://web.cs.wpi.edu/~rich/courses/imgd4000-d08/projects/swarm/index.html" target="_blank">http://web.cs.wpi.edu/~rich/courses/imgd40...warm/index.html</a>
Turns out it doesn't actually open in the browser, though it does use Java Web Start so it doesn't need to be installed or anything.
I was thinking more along the lines of examples from something where it targets the drawing I made or whatever, which is the aspect of this program that you identify as being different from imageSynth.
Here's an example usage of this. The resulting texture isn't that great, but it at least demonstrates the idea.
Suppose you've found a nice rocky wall texture:
<a href="http://picasaweb.google.com/lh/photo/PH6KSEQTUISXRadQoOnLzg?feat=embedwebsite" target="_blank"><img src="http://lh5.ggpht.com/_VDbSvmyPAXo/SZU9SNoLd7I/AAAAAAAASRQ/1UNnok8av_k/s400/wall_512_5_05.jpg" border="0" class="linked-image" /></a>
You decide that you really like it, but you want part of your level to seem damper, so you want to use exclusively the parts with the moss. You create a target image whose background is the mean color of the moss, and clone-paint a few rocks in to seed it. Then you create a mask that weights the background color a little and the clone-painted rocks a lot.
Target:
<a href="http://picasaweb.google.com/lh/photo/FKtlB4n4Ic2y3a3Q9szdng?feat=embedwebsite" target="_blank"><img src="http://lh5.ggpht.com/_VDbSvmyPAXo/SZU9SIPrA3I/AAAAAAAASRY/8Ak23-4whI0/s400/green_seed.jpg" border="0" class="linked-image" /></a>
Target Mask:
<a href="http://picasaweb.google.com/lh/photo/91b9RwaJYJgJ9XUo3nzp7Q?feat=embedwebsite" target="_blank"><img src="http://lh5.ggpht.com/_VDbSvmyPAXo/SZU9SLaznWI/AAAAAAAASRg/B80W1slhwEg/s400/green_seed_importance.jpg" border="0" class="linked-image" /></a>
Once you've got these images made, you can run this command line:
CODE
java -jar -Xmx1000m Targeter.jar n:21 m:11 d:3 s:10 i:wall_512_5_05.jpg t:green_seed.jpg tm:green_seed_importance.jpg
And then in 20 minutes or so, you'll get your output:
<a href="http://picasaweb.google.com/lh/photo/klndrWANlkSAQHoGgBb0qQ?feat=embedwebsite" target="_blank"><img src="http://lh5.ggpht.com/_VDbSvmyPAXo/SZU9SVyGa9I/AAAAAAAASRo/ltf2cx3QS88/s400/green_seed.jpg-wall_512_5_05.jpg-n21-m11-d3-s1000-a0.0.png" border="0" class="linked-image" /></a>
That output image doesn't look great (the rocks aren't very well formed and such), but different parameters for the targeting should give you better results. Now that it has a display, it's possible to iterate quickly when experimenting with parameters, since you can see the partial results without waiting for it to finish.
Hope that explains the intent of the tool a little better.
Tell me what controls you would like. :) I'll be working on this at least until it's unequivocally better.
Have you used ImageSynth, or are you basing that on the demo images? Also, cut me some slack: the project is a week old and I've been working on it in my spare time. Now, if you don't mind, give me feature requests or stop complaining :p.
I think you're missing the big picture here, Tycho. It's free, as in beer (and hopefully as in speech as well). I think the Photoshop/GIMP comparison is quite relevant. Maybe you can drop $99 for ImageSynth and $X99 (whatever it is for Photoshop), but I'm not going to. It may not be comparable yet, but seriously, continue to suggest features if you want it to be better.
Also, either I have lost my Java-fu in the last 5 years or I had completely forgotten that colon was the separator of choice; I had to read the source to figure that out.
Now that's feedback I can use. :) I should be able to finish that by Monday.
QUOTE (Confused @ Feb 13 2009, 04:39 PM):
My biggest issue is simply the lack of GUI. I understand that that's a pain, but tools of this nature are generally not marketed to those of us who live on the command line.
Also, either I have lost my Java-fu in the last 5 years or I had completely forgotten that colon was the separator of choice; I had to read the source to figure that out.
I'll try to make that more explicit in the notes. I used colons for attaching values to the variables because it gives me the flexibility to assign complicated inputs. For instance, eventually you'll be able to specify multiple unrelated input images separated by colons and multiple channels for each image separated by commas. (At the moment only the second part is implemented.) So, for instance, if you want to draw pixels from 5 different images where each one has a normal map, this syntax will let you do that.
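To illustrate (the exact syntax isn't final, and the filenames here are made up), parsing an argument like i:rock_albedo.png,rock_normal.png:pipe_albedo.png,pipe_normal.png into image groups and channels would look roughly like this:
CODE
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ArgSketch {
    /** Split "i:imgA_albedo.png,imgA_normal.png:imgB_albedo.png" into groups of channels.
     *  The first colon-separated token is the variable name; the rest are image groups,
     *  with the channels of each group separated by commas. (Hypothetical sketch.) */
    static List<List<String>> parseImageArg(String arg) {
        String[] parts = arg.split(":");
        List<List<String>> groups = new ArrayList<>();
        for (int i = 1; i < parts.length; i++) {
            groups.add(Arrays.asList(parts[i].split(",")));
        }
        return groups;
    }
}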
I'm only a novice at texture creation and I still think doing it manually in PS would give you more control and better results. Is it simply a time-saver or what? The example comparison took 20 minutes and still requires tweaking to get it looking better than what you've shown. Couldn't an experienced texture artist make something up from scratch of equal or better quality, and with more control over the visual interest, details and all the rest of it?
Also, if this is just a flat image and not comprised of layers as a custom texture (using the source image as a base/mask) would be, how easy would it be to create a good-quality normal map from it?
Here's a result from the same input with parameters that use a lot more computation. Note how it copies the original texture until it gets in the vicinity of the edges, and then invents new texture to make sure it wraps.
<a href="http://picasaweb.google.com/lh/photo/CiLJ4CtwgaPYbEr_-OF8bw?feat=embedwebsite" target="_blank"><img src="http://lh6.ggpht.com/_VDbSvmyPAXo/SZcWTXHBRzI/AAAAAAAASTE/KpIVICASzSM/s400/green_seed.jpg-wall_512_5_05.jpg-n35-m21-d3-s20000-a0.0.jpg" border="0" class="linked-image" /></a>
QUOTE:
Artists require GUIs!
Truth.
QUOTE (Feb 13 2009, 03:15 AM):
...
And then in 20 minutes or so, you'll get your output:
[output image above]
...
20 minutes is way too long to develop a texture of that output quality. Admittedly, I could go and do something else while it is generating the image, but why should I (this is an argument of principle) when I can create the same thing in less time in Photoshop?
If you can cut the processing time for generating a random texture down to 5 minutes or less (I think 5 minutes is comfortable for images of, say, 3000x2000 resolution, an arbitrary value) and take up the other suggestions, like implementing a GUI and making it Java-based so it can be run in, say, Firefox, then I will be interested.
Also, on a side note, don't take it so personally when people here are 'complaining', as you put it; all of the suggestions and inquiries I've seen so far have been valid.
A bulk-processing option would be brilliant as well.
If you could ultimately package this as a Firefox add-on that would be pretty sweet.
QUOTE:
20 minutes is way too long to develop a texture of that output quality. [...] If you can cut the processing time down to 5 minutes or less [...] then I will be interested.
If I want to do something similar to ImageSynth, where the textures have little originality relative to the original image, I can probably make it run in that sort of timeframe. I should be able to include both that functionality and the more computationally intensive stuff I've been doing so far, with the parameters smoothly transitioning between them, so it should subsume that tool except for the GUI. The only thing I really need to implement for that to work is the min-cut algorithm. Essentially, at the moment I spend a long time searching for a tile that will fit well with its surroundings. Their approach is to not worry much about the tile fitting with the surroundings, but to overlap it with the existing tiles and then cut it out along a seam that minimizes the error between the new tile and the existing tiles. I'm hoping to implement that sometime today if I can find a good paper on it.
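One simple form of that seam step is the minimum-error boundary cut used in image quilting: compute the per-pixel error in the overlap region, then run a small dynamic program to find the cheapest path through it. A sketch for a vertical overlap (illustrative only; nothing like this is in the tool yet, and the full min-cut/graph-cut formulation is more general):
CODE
/** Find the cheapest top-to-bottom seam through an overlap region, where
 *  error[row][col] is the squared difference between the existing tile and
 *  the new tile at that pixel. Pixels on one side of the seam keep the old
 *  tile; pixels on the other side take the new one. */
static int[] minErrorSeam(double[][] error) {
    int rows = error.length, cols = error[0].length;
    double[][] cost = new double[rows][cols];
    cost[0] = error[0].clone();
    for (int r = 1; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            double best = cost[r - 1][c];
            if (c > 0)        best = Math.min(best, cost[r - 1][c - 1]);
            if (c < cols - 1) best = Math.min(best, cost[r - 1][c + 1]);
            cost[r][c] = error[r][c] + best;
        }
    }
    int[] seam = new int[rows];
    int c = 0;                                          // cheapest endpoint in the last row
    for (int j = 1; j < cols; j++) if (cost[rows - 1][j] < cost[rows - 1][c]) c = j;
    seam[rows - 1] = c;
    for (int r = rows - 2; r >= 0; r--) {               // trace the path back up
        int bestC = c;
        if (c > 0 && cost[r][c - 1] < cost[r][bestC]) bestC = c - 1;
        if (c < cols - 1 && cost[r][c + 1] < cost[r][bestC]) bestC = c + 1;
        c = bestC;
        seam[r] = c;
    }
    return seam;
}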
Ultimately, though, I've been thinking of this as a tool where you'd kick off maybe a dozen instances when you leave for the day and then check out the output the next morning. I'm used to that sort of workflow since I work at Google and that's usually how we iterate on things (even on 2000 machines, anything worth doing at web scale usually takes at least a few hours). So I can understand why people would get more annoyed by that than I do.
QUOTE:
Also, on a side note, don't take it so personally when people here are 'complaining', as you put it; all of the suggestions and inquiries I've seen so far have been valid.
A bulk-processing option would be brilliant as well.
If you could ultimately package this as a Firefox add-on that would be pretty sweet.
I was just teasing about him complaining. Basically though, I was hoping people could actually try it out so I could start iterating based on that sort of feedback rather than speculation. I haven't had the time to make a really good demo of it since I've mostly been testing it on NS2 textures that I can't release. (I'll check with Charlie to see if I can show some of the results on the forum here.) It does really good things when the input texture has an associated normal map, and I'm pretty sure it would do even better with a height map, though I haven't had the chance to test that yet.
Bulk processing should be doable, though you guys should really just learn a bit of shell scripting. ;)
Good luck telling every artist that they can only work on their texture for 5 minutes each day before they go home, and then they get to wait overnight for the computer to do everything and come back in the morning to see if it's borne fruit. If you look at something like the texture viewer that UWE just previewed, an increasingly important aspect of game art is seeing it in engine immediately and getting feedback on what your texture looks like in real time. I don't understand how I could ever make a brick wall the way you're describing it: I need to know if my brick wall texture is done <i>right now</i>, not after the computer goes and processes it. ImageSynth's value is that it takes incredibly complex, non-tileable images with areas of distinct irregularity and turns them into neat, square, tiling textures with very few or no pieces that I have to clean up by hand. At the same time, it barely alters the source image, changing almost nothing that it doesn't need to change, and it does it really fast, so in an instant I can tell how much work needs to be done, or whether my source isn't going to work out.
I guess I don't see the value added with your tool; you took that picture of a stone wall with moss, and in 20 minutes you turned it into something that doesn't even look like it has discrete stones. After an unidentified amount of further computation, the result was a nontiling texture that looks approximately the same as the original, only smaller. Basically it did nothing, it took a while to do it, and the output image was smaller. This is a pretty harsh assessment of your tool and at least to me it seems like nobody would argue that this is useful, so I can't help but think I'm missing the value here. What is it exactly that your program does that helps me make textures? Ignoring imageSynth, why would I run your algorithm on my brick wall if it's just going to spit out a nearly identical wall that's smaller and doesn't tile? It might just be a problem with the sample pictures or the way you're explaining things but I just don't see any reason to use the thing in the first place. I don't understand what it automates or what it does faster than me.
Let's keep this in perspective.
--Scythe--
I would suggest making the core functionality in C/C++ and coding it very simply, so that any cross-platform problems (file I/O, etc.) can be overcome easily. You can then access this functionality from a GUI made in Java, or use one of the C++ GUI toolkits.
0.02
- Proficient artists can do it better and possibly quicker by hand
- Rank newbies will not be able to benefit from this without a solid GUI
The only type of person who has both the need and the ability to use this software in its current state is a web developer: someone with enough grounding in code not to be put off by the scripting aspects, and with a need for tiling textures based on found images.
Is this who you were aiming it at? Or is this just an experiment?
---
As for people testing it, what you seem to have now is like an axe blade without a handle. If each time I use the axe I have to attach the handle, it loses some of its benefits. I think the quickest way to get this properly tested would be to make it simple to use. The GUI doesn't need to be anything flashy, just enough to be able to get to grips with it (going back to the axe analogy).
Mostly, I want to get all of the known popular algorithms for texture synthesis in one software package. Ideally though, the goal is to simplify the creation of a large texture set by letting the artist make only a handful of finalized textures, then sketch the rest and let the software fill in the details.
<a href="http://picasaweb.google.com/lh/photo/KrY8QxkooQk5CcBVIT8jtw?feat=embedwebsite" target="_blank"><img src="http://lh5.ggpht.com/_VDbSvmyPAXo/SZsT-irjwcI/AAAAAAAASU0/P9nqlSZlZvs/s800/new_refinery.jpg-refinery_wall_05.bmp-n11-m5-d6-s20000-a0.0@0.png" border="0" class="linked-image" /></a>
<a href="http://picasaweb.google.com/lh/photo/T3wc6QnZyB3HjZCBSQaGBA?feat=embedwebsite" target="_blank"><img src="http://lh4.ggpht.com/_VDbSvmyPAXo/SZsT_EV8PuI/AAAAAAAASU8/jZ-t5LFLLEg/s800/new_refinery.jpg-refinery_wall_05.bmp-n11-m5-d6-s20000-a0.0@1.png" border="0" class="linked-image" /></a>
As you can see, the normal map forces a lot more structure into the resulting output texture. My suspicion is that I could get even better results by including a height map or a manually created correspondence map as additional channels. There are a few annoying discontinuities, but upping the sampling rate should help fix those (at the expense of more computation).
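The way extra channels factor in is straightforward: each aligned channel image (albedo, normal, and eventually height or a correspondence map) adds a weighted term to the pixel distance used during matching. Roughly this (a sketch, not the exact code):
CODE
/** Distance between pixel (ax, ay) of image A and pixel (bx, by) of image B,
 *  summed over all aligned channel images (albedo, normal, height, ...),
 *  each with its own weight. (Illustrative sketch, not the actual Targeter code.) */
static double multiChannelDistance(BufferedImage[] channelsA, int ax, int ay,
                                   BufferedImage[] channelsB, int bx, int by,
                                   double[] weights) {
    double dist = 0;
    for (int k = 0; k < channelsA.length; k++) {
        int p = channelsA[k].getRGB(ax, ay);
        int q = channelsB[k].getRGB(bx, by);
        int dr = ((p >> 16) & 0xFF) - ((q >> 16) & 0xFF);
        int dg = ((p >> 8) & 0xFF) - ((q >> 8) & 0xFF);
        int db = (p & 0xFF) - (q & 0xFF);
        dist += weights[k] * (dr * dr + dg * dg + db * db);
    }
    return dist;
}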
Not to mention the normal map has a few messy bits compared to what I'd imagine CrazyBump would give me.