Threading

Garterman Join Date: 2003-09-24 Member: 21158, Members
I was just wondering, do most modern games (including NS2 eventually) lend themselves very well to having multiple threads for making the most of multiple cores?

Comments

  • rofldinho Join Date: 2009-07-25 Member: 68259, Members
    edited August 2010
    Will NS2 use Hyperthreading?

    I have no idea what it does above standard threading, but would it be worth getting an i7 in case NS2 or Spark and any of its games/mods will use Hyperthreading?
  • RobB TUBES OF THE INTERWEB Join Date: 2003-08-11 Member: 19423, Members, Constellation, Reinforced - Shadow
    More cores, more hertz to play with. That's a German pun, in case you don't know that Herz is heart.
    At any rate, in the current state I don't know if the engine core is even using the extended instruction sets, let alone the full power of gfx cards.
    But time will tell, and hopefully Flayra and the gang get the kinks hammered flat.
  • remi remedy [blu.knight] Join Date: 2003-11-18 Member: 23112, Members, Super Administrators, Forum Admins, NS2 Developer, NS2 Playtester
    edited August 2010
    QUOTE (Garterman @ Aug 17 2010, 07:11 PM):
    "I was just wondering, do most modern games (including NS2 eventually) lend themselves very well to having multiple threads for making the most of multiple cores?"
    The current paradigm used in game programming does not lend itself very well to heavy use of threads. Generally you will get better performance from a dual core with a higher clock speed than from a quad core at the same price (and therefore a lower clock speed).

    The reason for this is that the game's logic is very lock-step. You can't render the graphics until you've done the physics, and you can't do the physics before you get the player input. There are some things that can be pulled out into separate threads, and some PORTIONS of game logic suit themselves well to threading (physics calculations can often be run in parallel, graphics calculations are done in super-crazy-parallel on the graphics card), but in their entirety, games as designed at present cannot take great advantage of threading.
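
    To make that concrete, here is a minimal C++ sketch of the lock-step frame structure described above; every function name is a hypothetical placeholder, not Spark engine code.

    // Minimal sketch of a lock-step frame loop; all functions here are made-up stubs.
    #include <thread>
    #include <vector>

    void ReadPlayerInput() { /* poll input devices */ }
    void SimulatePhysicsSlice(int /*slice*/) { /* integrate a subset of bodies */ }
    void RunGameLogic() { /* gameplay rules; needs the physics results */ }
    void SubmitFrameToGPU() { /* issue draw calls */ }

    int main() {
        const int kSlices = 4;
        for (int frame = 0; frame < 3; ++frame) {   // one iteration = one frame
            ReadPlayerInput();                      // must happen first

            // Physics can be split into independent slices and run in parallel...
            std::vector<std::thread> workers;
            for (int i = 0; i < kSlices; ++i)
                workers.emplace_back(SimulatePhysicsSlice, i);
            for (auto& w : workers) w.join();       // ...but the logic has to wait for all of it.

            RunGameLogic();                         // lock-step: needs physics done
            SubmitFrameToGPU();                     // needs the final game state
        }
    }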

    Hope that was helpful.
  • oldfartoldfart Join Date: 2010-04-23 Member: 71509Members
    mmm ..

    Is this in regard to the client or the server?

    I'm starting to assume client .. so, I'll be interested in any knowledgeable replies! :D

    cheers
    OldFart

    BTW: Is there anywhere we can read up on the server architecture?
  • MercsDragonMercsDragon Join Date: 2002-11-05 Member: 6963Members
    As Psyke said, most FPS games are going to do better with higher clock rates. AI is the place where a game can usually benefit from as many cores as you can throw at it.

    Hyper-threading is a way to get more performance by presenting two logical cores per physical core. It's a hardware feature, so software doesn't have to be written specifically for it, though it only helps if the software already runs multiple threads.
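
    For what it's worth, the extra logical cores that hyper-threading adds are simply reported to software by the OS; a trivial C++11 check (nothing NS2-specific, just an illustration) looks like this:

    // The OS exposes hyper-threading as extra logical cores, so a program
    // just sees more hardware threads to schedule work onto.
    #include <iostream>
    #include <thread>

    int main() {
        // On a quad-core i7 with Hyper-Threading this typically reports 8;
        // it may report 0 if the value cannot be determined.
        unsigned logical = std::thread::hardware_concurrency();
        std::cout << "Logical cores visible to software: " << logical << "\n";
    }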

    As far as NS2 goes, you'll just have to wait and find out when it's released.

    Charlie "Flayra" Cleveland (Official Rep) replied, 1 month ago:
    "As of this week, NS2 now supports multi-threading and thus multi-core CPUs. If he has time he can give you a bit more info."
  • schkorpio I can mspaint Join Date: 2003-05-23 Member: 16635, Members
    A lot of games nowadays make good use of 4 cores - this is because a lot of games are console ports, where they use 3-7 cores to do their thing.
  • shad3r Join Date: 2010-07-28 Member: 73273, Members
    edited August 2010
    To take advantage of multiple threads, you need tasks that can run independently of each other. They can't need to modify the same data; otherwise, synchronising access to the shared data ends up meaning one of the threads is blocked waiting on a lock much of the time.

    Game logic in one thread, rendering in another is pretty common: you set up a shared buffer or similar that the game thread dumps the info about renderable game objects into, then the render thread goes and draws it while the game thread gets on with the next simulation frame.
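
    A rough sketch of that handoff, assuming a simple mutex-guarded snapshot buffer (the types and names are invented for illustration, not taken from any real engine):

    // Game thread publishes a snapshot of renderable objects; the render
    // thread picks it up and draws while the game thread simulates ahead.
    #include <condition_variable>
    #include <mutex>
    #include <thread>
    #include <vector>

    struct RenderObject { float x, y, z; int meshId; };

    std::mutex gMutex;
    std::condition_variable gFrameReady;
    std::vector<RenderObject> gPending;   // written by game thread, read by render thread
    bool gHaveFrame = false;
    bool gQuit = false;

    void GameThread() {
        for (int frame = 0; frame < 100; ++frame) {
            std::vector<RenderObject> visible = { {0.0f, 0.0f, (float)frame, 1} };  // simulate one frame
            {   // publish the snapshot, then immediately start on the next frame
                std::lock_guard<std::mutex> lock(gMutex);
                gPending = std::move(visible);
                gHaveFrame = true;
            }
            gFrameReady.notify_one();
        }
        { std::lock_guard<std::mutex> lock(gMutex); gQuit = true; }
        gFrameReady.notify_one();
    }

    void RenderThread() {
        for (;;) {
            std::vector<RenderObject> toDraw;
            {
                std::unique_lock<std::mutex> lock(gMutex);
                gFrameReady.wait(lock, [] { return gHaveFrame || gQuit; });
                if (gQuit && !gHaveFrame) return;
                toDraw = std::move(gPending);
                gHaveFrame = false;
            }
            // ... issue draw calls for 'toDraw' here ...
        }
    }

    int main() {
        std::thread render(RenderThread);
        GameThread();
        render.join();
    }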

    Problem is, dedicated GPUs are pretty much separate rendering threads already, for most game engines there isn't that much work to do on the CPU for rendering apart from sending data to the GPU. And what rendering tasks you do need to do on the CPU, like character animation, often need to be done in the game thread 'cos the game code needs the results to know where the hitboxes are at, among other reasons.

    Having latency-critical stuff like networking and sound in their own threads is common, but isn't much of a win on a multi-core system.

    Having pools of worker threads that handle little stand-alone jobs, like pathfinding queries for AI, can work well... the benefit is very dependent on the type of game.
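
    A toy example of that worker-job idea, using std::async as a stand-in for a real job system (FindPath is a made-up placeholder, not an actual engine API):

    // Fire off independent pathfinding queries and collect the results later.
    // Each query only reads shared nav data, so the jobs don't contend on locks.
    #include <future>
    #include <vector>

    struct Path { std::vector<int> waypoints; };

    Path FindPath(int fromNode, int toNode) {
        // ... A* or similar over read-only navigation data ...
        return Path{ { fromNode, toNode } };
    }

    int main() {
        std::vector<std::future<Path>> jobs;
        for (int unit = 0; unit < 8; ++unit)
            jobs.push_back(std::async(std::launch::async, FindPath, unit, 42));

        for (auto& job : jobs) {
            Path p = job.get();   // the AI picks each result up when it's ready
            (void)p;              // silence unused-variable warnings in this sketch
        }
    }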
  • Martin Join Date: 2010-07-27 Member: 73229, Members
    QUOTE (Psyke @ Aug 18 2010, 05:54 AM):
    "...graphics calculations are done in super-crazy-parallel on the graphics card..."

    You mean like the 480 cores on the GTX 480! It should also be noted that rendering scenes lends itself perfectly to parallel computation because it can be easily partitioned. Sorry CPUs, you're lagging.

    http://www.nvidia.com/object/product_geforce_gtx_480_us.html
  • cmc5788 Join Date: 2009-10-06 Member: 68959, Members
    QUOTE (Martin @ Aug 19 2010, 04:03 PM):
    "You mean like the 480 cores on the GTX 480! It should also be noted that rendering scenes lends itself perfectly to parallel computation because it can be easily partitioned. Sorry CPUs, you're lagging.

    http://www.nvidia.com/object/product_geforce_gtx_480_us.html"

    Don't worry, GPUs and CPUs will be the same thing, soon*.

    * Soon in this context means "eventually."
  • DarkFrost Join Date: 2003-04-03 Member: 15154, Members, NS1 Playtester, Constellation
    edited August 2010
    The 'cores' that NVIDIA makes reference to are not the same as CPU cores...

    The cores they reference are more of a... marketing move.

    That particular card has 480 shader processors, on 15 multiprocessors, on a single GPU core.

    It requires a ridiculous amount of power, if it is to be compared to a CPU.

    It does come in a two-core configuration, for an even more ridiculous price than normal, of course.
  • WhiteZero That Guy Join Date: 2004-06-24 Member: 29511, Members, Constellation
    edited August 2010
    QUOTE (Martin @ Aug 19 2010, 04:03 PM):
    "You mean like the 480 cores on the GTX 480! It should also be noted that rendering scenes lends itself perfectly to parallel computation because it can be easily partitioned. Sorry CPUs, you're lagging.

    http://www.nvidia.com/object/product_geforce_gtx_480_us.html"
    Not really the same when comparing. You have to actually understand the architecture of a CPU vs a GPU before making an argument about it.
    Plus, a CPU is designed to use FAR less power than a GPU. Not to mention the 480/470 produce an obscene amount of heat.
    GPUs aren't really made to run 24/7, unlike CPUs.
  • douchebagatron Custom member title Join Date: 2003-12-20 Member: 24581, Members, Constellation, Reinforced - Shadow
    As previously mentioned, AI and things of that sort are much more useful when put on multiple cores. This means that pathfinding-intensive games like RTSs do better, which explains why games such as Supreme Commander and many other RTS games perform better on multiple cores, whereas FPS games generally do not experience the same benefits.

    As NS2 is a cross between them, there will probably be some marginal benefits to having multiple cores.

    The interesting thing I've noticed is that several Source games (TF2 and L4D) have an option for multicore rendering. I'm not particularly sure what this does, but if the strategy they used is also implemented in the Spark engine, then there could be more benefit from having more cores.
  • Soylent_green Join Date: 2002-12-20 Member: 11220, Members, Reinforced - Shadow
    <!--quoteo(post=1795667:date=Aug 21 2010, 02:59 PM:name=DarkFrost)--><div class='quotetop'>QUOTE (DarkFrost @ Aug 21 2010, 02:59 PM) <a href="index.php?act=findpost&pid=1795667"><{POST_SNAPBACK}></a></div><div class='quotemain'><!--quotec-->The 'cores' that nvidia make reference too are not the same as cpu cores...<!--QuoteEnd--></div><!--QuoteEEnd-->

    If they were similar to CPU cores, graphics cards would be ridiculously slow. CPUs go to an enormous expense to increase single-threaded performance by a small amount.

    On a CPU it makes sense to use a high clock speed, even though the power consumption of CMOS goes roughly as the cube of clock speed (it would be linear if the only thing that changed was frequency, but you have to raise the voltage to keep it stable).
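
    In equation form (this is the standard first-order approximation for dynamic CMOS power, added here for clarity, not something from the original post):

    % Dynamic CMOS power: P = C V^2 f. If voltage has to scale roughly with
    % frequency (V proportional to f) to keep the chip stable, then P scales as f^3.
    \[
      P_{\text{dyn}} \approx C\,V^{2} f,
      \qquad V \propto f
      \;\Rightarrow\;
      P_{\text{dyn}} \propto f^{3}
    \]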

    On a CPU it can make sense to double the size of the cache in order to get a 5-10% performance improvement; on a GPU you just add more ROPs, more TMUs and more shader processors.

    On a CPU it can make sense to have a wider issue superscalar core, even though you get logarithmic performance improvement for an exponentially increasing expense. On a GPU you just go for more of everything.

    On a CPU it makes sense to pay the enormous expense to have out of order execution. On a GPU you just go for more of everything.

    On a CPU it can make sense to waste a quite enormous quantity of transistors to increase the branch-prediction hit rate from 93% to 94%. On a GPU you just go for more of everything.

    There is a sort of middle ground between the super-simple shader processors on a GPU and the monolithic monster cores on a CPU: the low-power CPUs for things like smartphones and pads/readers. E.g. the ARM Cortex-A9 draws ~250 mW per core at 1 GHz and is a third of the size of the already tiny Intel Atom.
  • JAmazon Join Date: 2009-02-21 Member: 66503, Members
    There's also the interesting fact (I think) that most GPU cores only run fast at single (32-bit) floating-point precision, which is all you need to place a pixel correctly on the screen.
  • spellman23 NS1 Theorycraft Expert Join Date: 2007-05-17 Member: 60920, Members
    QUOTE (Soylent_green @ Aug 27 2010, 04:32 AM):
    "If they were similar to CPU cores, graphics cards would be ridiculously slow. CPUs go to an enormous expense to increase single-threaded performance by a small amount."

    While true, here's some more layman-level internals:
    1) CPUs are latency optimized; GPUs are throughput optimized. This in itself pretty much causes the rest of these points to fall out.
    2) Accuracy on GPUs doesn't have to be as good as on CPUs. (JAmazon's note)
    3) CPUs deal with branching threads and serial code. GPUs were designed for easily partitionable work (i.e. per pixel) that doesn't rely on other work (start rendering a brand new frame! Or just grab the info for some other pixel).
    4) CPUs have accelerators for specific special functions, and the single core can do EVERYTHING. GPUs have lots of simpler processors and partition which can do what (recent movements, though, are towards all of them being able to do everything; see the Nvidia 400 series and Intel's Larrabee).
    5) Memory latency is a big deal for CPUs since they have to wait, so caches and out-of-order execution are important. GPUs can just work on another independent pixel while they wait for memory access.


    Back on topic: as already noted, physics and AI lend themselves really, really well to multiple threads. The pathing problem alone, once you start getting a lot of dudes running around, can get hairy if you want it to work right (see StarCraft's infamous Reaver pathing).

    Also, servers really benefit from threading so that they can handle the different actions and reconcile each one quickly. The only problem is the proper re-sync overhead. So keep it on the CPU; the PCI-E interface is too slow.