Steve's Dev-Stream-of-Consciousness
SteveRock
NS2 Developer, Subnautica Developer
I'm gonna post a bunch of random stuff here that I think people might find interesting, often without much context, but definitely related to what I'm working on for Subnautica. If you're interested in game dev, especially for procedural open world type games, dig in!
Starting with some shadertoy stuff I've been experimenting with...
https://www.shadertoy.com/view/4d23R3#
https://www.shadertoy.com/view/4dB3DV
https://www.shadertoy.com/view/4sXXz7
Comments
Would need more organization. (unless of course you don't plan to keep posting everything here)
P.S. How hard is it to slightly vary the bone position/rigidity/etc. at runtime for each entity? It would be great if they didn't all move in an identical fashion.
After some random stuff in the morning, I spent most of today working out bugs with the order-independent blending (full name: weighted blended order-independent transparency, or WBOIT). I'll post some shots... anyway, blending in games is always gonna suck to some extent, so we're just trying to find the thing that sucks the least. This WBOIT method definitely has some drawbacks, as I discovered today. Per-pixel light passes can't really be done the same way, so even if you just have 1 layer and 1 point light, you won't get the same results. Effectively, all blended layers have to get drawn "over" lights, since the only thing you can really do is add your lights on top of your opaque bg and then do the composite pass... ah well. Hopefully the artists will be OK with this, because I think WBOIT lets you do much cooler stuff with heavily transparent creatures and props. Will post some shots hopefully tmrw.
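For the curious, the core WBOIT math from the McGuire & Bavoil 2013 paper is tiny. Here's a rough sketch of the accumulate/composite steps in plain C++ for readability (in the engine this lives in shaders); the weight function and all names are illustrative, not our actual code:

```cpp
// Sketch of the WBOIT accumulate/composite math (McGuire & Bavoil 2013).
// The weight function and all names here are illustrative only.
#include <algorithm>
#include <cmath>

struct Vec3 { float r, g, b; };

// Per-pixel accumulation targets written during the transparent pass.
struct OITPixel {
    Vec3  accumRGB  = {0, 0, 0}; // sum of weighted premultiplied color
    float accumA    = 0.0f;      // sum of weighted alpha
    float revealage = 1.0f;      // product of (1 - alpha) over all layers
};

// Depth-based weight; the paper offers several variants, this is one.
static float Weight(float z, float alpha) {
    return alpha * std::max(1e-2f, 3e3f * std::pow(1.0f - z, 3.0f));
}

// Called once per transparent fragment, in ANY order -- that's the point.
void AccumulateFragment(OITPixel& p, Vec3 color, float alpha, float z) {
    const float w = Weight(z, alpha);
    p.accumRGB.r += color.r * alpha * w;
    p.accumRGB.g += color.g * alpha * w;
    p.accumRGB.b += color.b * alpha * w;
    p.accumA    += alpha * w;
    p.revealage *= 1.0f - alpha; // order-independent total coverage
}

// Composite pass: weighted average blended over the (already lit) opaque
// background -- which is exactly why blended layers end up "over" lights.
Vec3 Composite(const OITPixel& p, Vec3 background) {
    const float denom = std::max(p.accumA, 1e-5f);
    const Vec3 avg = { p.accumRGB.r / denom,
                       p.accumRGB.g / denom,
                       p.accumRGB.b / denom };
    const float show = 1.0f - p.revealage;
    return { avg.r * show + background.r * p.revealage,
             avg.g * show + background.g * p.revealage,
             avg.b * show + background.b * p.revealage };
}
```

You can see in Composite where the per-pixel depth ordering has been collapsed into a single weighted average, which is why individual lit layers can't be occluded correctly afterwards.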
That is an interesting thought... but then I think you would have the opposite problem, where the lighting overtakes everything. Once you do the WBOIT stuff, all depth information is gone. So if a lit part was supposed to be covered up by some other layers above it, there's no real way to figure that out.
An even more extreme thought is to do the WBOIT passes per-object, so then you'd at least get proper depth cues between objects. But... you're talking about two passes per object... which I suppose isn't the end of the world, but it definitely would be more expensive. I'll have to think about this a bit more...
Here's a debuggin' screenshot:
Still a lot left to do, but it's nice to have some sort of huge draw distance, even tho the far away stuff may be horrible
I have no idea. It's pretty fast, but for low quality we might just need to disable the IBL shaders? No idea.
some pretty tech-heavy stuff here...
current biggest problem i need to solve: terrain performance. our terrain looks pretty great up close IMHO, but our draw distance really blows and the perf is a bit like Crysis 1 all over again :-/ so i need to make the terrain meshes more efficient (i.e. fewer triangles). some simple things i tried worked about as well as you could expect: not too well.
i've been reading a boatload of papers on adaptive voxel meshing, and here's a good list: http://swiftcoder.wordpress.com/planets/isosurface-extraction/ Although it's missing this very important one: http://vis.computer.org/vis2004/dvd/vis/papers/nielson2.pdf
unfortunately, most of the adaptive methods seem to assume you have a pretty hefty data structure (an octree where each node stores a cell with 8 corner values and/or some quantity derived from those). currently, our octree is just one density value per node, and it fits pretty snugly, usually under 50MB. i'd rather not dedicate 200+MB just for voxel data... it's also not clear how good the results are compared to classic quadric mesh simplification/decimation...
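to make the memory gap concrete, here's a toy comparison. the layouts and field names below are invented for illustration only (our real node format isn't this, and a 16-bit child index obviously caps the pool size), but the relative sizes show why caching 8 corner values per cell blows the budget:

```cpp
// Toy memory comparison for the two octree styles described above.
// Layouts and field names are invented for illustration only.
#include <cstdint>

// One density byte per node, ~4 bytes total.
struct SlimNode {
    uint8_t  density;    // one signed-distance sample
    uint8_t  childMask;  // bit i set => child i exists
    uint16_t firstChild; // index of first child in a flat node pool
};
static_assert(sizeof(SlimNode) == 4, "fits a ~50MB budget");

// What the adaptive papers tend to assume: 8 cached corner values per cell.
struct FatNode {
    float    corners[8]; // 32 bytes of cached field samples
    uint16_t firstChild;
    uint8_t  childMask;
    uint8_t  pad;
};
static_assert(sizeof(FatNode) == 36, "~9x the per-node cost");
```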
so anyway, my current approach is going to be: mesh it normally, then use mesh simplification to reduce the triangle count. this has the disadvantage of needing to actually get the whole mesh first, but hopefully that won't be a big deal since all that's done on a child thread anyway. post-meshing simplification will likely get you the best results too. not too worried about borders: i'm just going to not simplify border tris at all, so all pieces will definitely fit together, watertight, no T-junctions or anything (see the sketch below). a bit wasteful, but just on the borders. Max already helped me by writing the simplification code, so i just gotta plug it in.
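if you're curious what "don't simplify border tris" looks like in practice, here's a rough sketch. the mesh struct and the SimplifyMesh call are hypothetical stand-ins, not the actual code; the point is just the locking rule:

```cpp
// Rough sketch of "lock the borders, simplify the interior".
// ChunkMesh and SimplifyMesh are hypothetical stand-ins.
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <initializer_list>
#include <vector>

struct Vertex { float x, y, z; };

struct ChunkMesh {
    std::vector<Vertex>   verts;
    std::vector<uint32_t> tris; // 3 indices per triangle
};

// Any vertex lying on a face of the chunk's bounding cube gets locked,
// so the simplifier never moves or collapses it. Neighboring chunks
// share those exact vertices, so everything stays watertight with no
// T-junctions -- at the cost of un-simplified borders.
std::vector<bool> LockBorderVerts(const ChunkMesh& m,
                                  float chunkMin, float chunkMax,
                                  float eps = 1e-4f) {
    std::vector<bool> locked(m.verts.size(), false);
    for (size_t i = 0; i < m.verts.size(); ++i) {
        for (float c : {m.verts[i].x, m.verts[i].y, m.verts[i].z}) {
            if (std::fabs(c - chunkMin) < eps ||
                std::fabs(c - chunkMax) < eps)
                locked[i] = true;
        }
    }
    return locked;
}

// Hypothetical usage with an edge-collapse simplifier:
//   auto locked = LockBorderVerts(mesh, 0.0f, 32.0f);
//   SimplifyMesh(mesh, targetTriCount, locked); // skips locked verts
```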
if it's too slow to do at run time, we always have the nuclear option: precompute all meshes, simplify them, and ship them. i'd prefer not to do this, since it would mean a long build process, and it complicates terrain deformation (which is low priority, but hopefully eventually).
oh, i did also change our meshing code to actually use density values instead of binary minecraft-style occupancy, so our hills look a lot smoother now. yay!
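for anyone wondering what density buys you over binary: with binary voxels the surface always crosses a cell edge at its midpoint, but with signed densities you can interpolate to the actual zero crossing, which is what rounds the hills off. rough sketch (the byte encoding shorthand here is mine, more on the real storage below):

```cpp
// Why densities beat binary occupancy: the surface vertex can sit at the
// actual zero crossing of the field instead of the edge midpoint.
// Encoding shorthand is an assumption (signed bytes, 1/128-unit steps).
#include <cstdint>

inline float DecodeDensity(int8_t d) { return d / 128.0f; }

// Parametric position (0..1) of the surface along a cell edge whose
// endpoint samples are d0 (inside, negative) and d1 (outside, positive).
// Binary voxels would always give you 0.5 here; this gives smooth slopes.
inline float EdgeCrossing(int8_t d0, int8_t d1) {
    const float a = DecodeDensity(d0);
    const float b = DecodeDensity(d1);
    return a / (a - b); // linear zero crossing, e.g. -0.25,+0.75 -> 0.25
}
```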
we use dual marching cubes [Nielson 2004] to get a nice mesh of all quads. the vertices are positioned using distance-field values (stored as bytes, giving 128 fixed-point steps of precision), allowing gradual slopes and smooth spheres and stuff. then we subdivide once and do a Laplacian smoothing pass to make things look, well, even more smooth! this generates a shitload of quads (and twice as many triangles!), but it gives us a nice rounded look that is pretty different from a lot of voxel engines out there. most engines tend to look sharp and jaggy because they're either based on classic marching cubes, which can generate crappy triangles, or they don't bother with mesh topology, so they can't subdivide or smooth normals.
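and for reference, one pass of Laplacian smoothing is just "nudge each vertex toward the average of its neighbors." something like this sketch, with adjacency built from the quad mesh (the lambda parameter and all names are made-up illustration, not the shipping code):

```cpp
// One pass of Laplacian smoothing: move each vertex part-way toward the
// average of its neighbors. Names and parameters are illustrative.
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3f { float x, y, z; };

void LaplacianSmooth(std::vector<Vec3f>& verts,
                     const std::vector<std::vector<uint32_t>>& adjacency,
                     float lambda = 0.5f) { // lambda < 1 limits shrinkage
    std::vector<Vec3f> out = verts; // read old positions, write new ones
    for (size_t i = 0; i < verts.size(); ++i) {
        const auto& nbrs = adjacency[i];
        if (nbrs.empty()) continue;
        Vec3f avg = {0, 0, 0};
        for (uint32_t n : nbrs) {
            avg.x += verts[n].x;
            avg.y += verts[n].y;
            avg.z += verts[n].z;
        }
        const float inv = 1.0f / nbrs.size();
        out[i].x += lambda * (avg.x * inv - verts[i].x);
        out[i].y += lambda * (avg.y * inv - verts[i].y);
        out[i].z += lambda * (avg.z * inv - verts[i].z);
    }
    verts.swap(out);
}
```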
of course, all this comes at a price. our draw distance will probably never be Skyrim-amazing, and as mentioned above, performance is a challenge. but, for a game where you're swimming around and examining things a lot with your face in the sand, we thought it was pretty important for terrain to look good up close and from all angles.
a) the view changes (resulting in a fairly minor slice to be re-meshed), or
b) a distant terrain object is changed, which would result in a localized re-meshing (that would hopefully not be too resource-intensive)
Objects that move in the distance could take a different approach, such as being added to a separate low-poly mesh in real-time which would be combined with the terrain low-poly mesh before rendering. They could even be generated at a different LOD depending on their complexity.
So is going from 57k to 23k triangles per "3D area" enough of a reduction? I was thinking on the order of 10x to 100x reduction being required to fit the visible scene into memory.