Natural Selection 2 News Update - Occlusion culling interview
Flayra
Game Director, Unknown Worlds Entertainment · San Francisco · Join Date: 2002-01-22 · Member: 3 · Super Administrators, NS2 Developer, Subnautica Developer
Comments
Looking forward to the future implementations of occlusion culling and dynamic lighting in the Spark engine!
It's important to note that Max <b>did not</b> say that the new occlusion system would result in increased CPU utilisation.
He said that currently, occlusion culling is performed by the CPU and GPU in concert. The communication between the CPU and GPU is clearly identified by Max as a performance inefficiency. In the new system, the culling will be performed by the CPU alone.
That this would increase CPU utilisation does not necessarily follow. Based on what Max said, we do not know whether CPU utilisation will go up or down in the new system. Removing the GPU from the equation could raise it or lower it. We just don't know, and as none (or very few) of us are engine programmers, there's not much use in speculating.
The only sure thing is that B188 is going to be very exciting!
Good catch, but it could be an internal milestone.
Right; "tomorrow's build" might well mean the build that the team is working on or testing tomorrow.
<a href="http://cmpmedia.vo.llnwd.net/o1/vault/gdc2011/slides/Daniel_Collin_Programming_Culling_The_Battlefield.pdf.pdf" target="_blank">http://cmpmedia.vo.llnwd.net/o1/vault/gdc2...lefield.pdf.pdf</a><!--QuoteEnd--></div><!--QuoteEEnd-->
The system I've implemented is pretty much identical to what Daniel describes (I attended this presentation at GDC and shamelessly copied them).
QUOTE (John. @ Oct 24 2011, 07:12 PM): <i>He was using an example about compiling a map to see the lighting, i.e. compiling a new map lighting build and then seeing what it looks like in game...</i>
Exactly.
I don't see how it could result in anything else. He is taking a system that was previously shared between the CPU and GPU and running it exclusively on the CPU. That said, if NS2 is not particularly CPU intensive, this will still yield an overall performance gain (especially in frame-rate stability) by eliminating the latencies introduced by shuttling data between the CPU and GPU, or by having one sit idle waiting for the other. Shooters typically have low CPU loads and high GPU loads, so it should result in improved performance.
It might not exactly "improve CPU usage", but from what I understand, with the current system the CPU has to spend ages just waiting for the GPU, doing nothing at all and preventing anything else from happening, so I think it will definitely improve CPU utilization.
I would further add that <i>any</i> news at all is exciting. A lot of us really enjoy, and look forward to, hearing about what's going on behind the scenes at Unknown Worlds. Learning about the development of NS2 has been just as interesting as actually playing, and it gets me even more excited for the game I already love. I think a lot of us in the community really value being able to watch NS2 evolve first hand. This is a great example of that.
Keep stuff like this coming. It holds our attention well.
Wouldn't leveraging CUDA or OpenCL make more sense for a GPU-based task?
What kind of overhead is this going to introduce to the CPU? Aren't physics already heavily relying on it?
I understand the need to run them with affinity on the same chip to stop hitches and really optimize it, but shouldn't it go the other way?
Either way, it's ambitious and will greatly help, no doubt. Just a thought.
-Mike
MOAAAAR with Charlie now :D