I mean more options to disable some more lights, or something else that could greatly improve FPS, so people like me can play with at least 30-40 FPS mid- and end-game. Or are any more optimisations planned for the game?
ScardyBob (Forum Admin, NS2 Playtester):
Optimizations are always in progress (the last build included some memory optimizations that should help people with slow loading problems). More optimizations are possible, but they need to make sure any potential bugs and/or exploits are fixed before they can be released.
I mean - more options to disable some more lights or something that could greatly improve fps so people like me can just play with at least 30-40 fps mid and end game. Or any more optimisations for the game?
Just my thoughts.
You can't just disable lights. That's not how it works.
I mean - more options to disable some more lights or something that could greatly improve fps.
No. After disabling all graphics options (which are comprehensive) there is very little rendering going on. If you have poor performance after disabling all those options it is
a. A weak GPU
b. Other non-lighting systems (animation, game logic, collision, sound) dragging you down
Absolutely. Everyone at UWE wants NS2 to run faster, and that's why most of the programming team is working on performance most of the time, and why almost every single update since launch has made changes seeking greater performance.
Does that include doing more research on the Texture Streaming (currently experimental) option?
Yes, texture streaming is actually one of the optimization tasks that is next on the list.
I'd imagine texture streaming would only be useful if you don't have enough RAM. AFAIK the textures are preloaded into RAM at the beginning of the match.
in console:
map ns2_tram
cheats 1
r_mode unlit
r_mode lit
Compare fps
edit: this is with lighting completely disabled; taking out a few lights shouldn't help much at all because of the deferred rendering system.
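To see why removing a handful of lights barely registers in a deferred renderer, here is a toy cost model in Python (the function name and all numbers are illustrative assumptions, not measurements of Spark's actual renderer): the geometry/G-buffer pass is paid once regardless of light count, and each light only adds cost roughly proportional to the screen area it touches.

```python
# Toy cost model of a deferred pipeline. The G-buffer/geometry pass is a
# fixed cost per frame; each light's shading pass scales with the fraction
# of the screen that light covers. All numbers below are made up.

def deferred_frame_cost(geometry_ms, light_screen_fractions, full_screen_light_ms):
    """Estimated frame cost in ms: fixed geometry pass plus one shading
    pass per light, weighted by the light's screen coverage."""
    lighting_ms = sum(f * full_screen_light_ms for f in light_screen_fractions)
    return geometry_ms + lighting_ms

# 30 small lights each covering ~2% of the screen:
baseline = deferred_frame_cost(8.0, [0.02] * 30, 4.0)  # ~10.4 ms
fewer    = deferred_frame_cost(8.0, [0.02] * 25, 4.0)  # ~10.0 ms
print(baseline, fewer)  # removing 5 lights saves only ~0.4 ms here
```

Under this (assumed) model, culling a few small lights shaves well under a millisecond, which matches the in-game observation that only disabling lighting wholesale (`r_mode unlit`) moves the needle.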
Texture streaming streams textures in from the hard drive to your video RAM as needed. It's what most console games do these days to overcome their low VRAM limits (256 MB on the PS3, 384 MB give or take on the Xbox 360). I did some tests a while ago with MSI Afterburner monitoring texture streaming.
With the game running at 1080p and textures on high: texture streaming off, VRAM usage was 1.2 GB; texture streaming on, 300 MB. Unless your computer doesn't have enough VRAM (1 GB graphics cards, for example, and even then only if you see an FPS hit), I would leave it off, as that's easier on your hard drive.
Anyway, in response to the original poster: what computer specifications are you running? It sounds so low-end that you should just upgrade. I'm guessing you have a weak graphics processing unit; try running at 720p (1280x720), or if that still doesn't work, lower it further to 1024x768. I did this on my laptop, got 50+ FPS on most maps, and was able to play properly.
I would still recommend upgrading to better hardware when you can afford it, so you can enjoy the graphical prettiness that is Natural Selection 2.
You will never play NS2 smoothly at stock CPU speeds; the only way you'll get 80+ FPS constantly is by overclocking. Get a non-stock fan/cooler and learn to overclock; there's some magic number between 3.5 and 4.5 GHz where performance seems to sit at 100+ non-stop.
I'm playing with an i7 at 3.4 GHz and a 7970, and in 8v8+ combat I always dip from 80-100 down to 50-60 FPS, which is the only time you really want 80+ FPS.
I seriously doubt many people's GPUs are bottlenecking them at this point.
Texture streaming streams textures in from the hard drive to your video RAM as needed. It's what most console games do these days to overcome their low VRAM limits (256 MB on the PS3, 384 MB give or take on the Xbox 360). [...]
I would assume the Spark engine is not streaming from the HDD to VRAM but from RAM to VRAM. Texture streaming is essentially a memory-management task in many engines. The two implementations I have seen so far had the memory manager a) running in its own thread, or b) running in the same thread as the game loop.
If it runs in the same thread as the game loop, you usually measure how much time (in ms) you have left at some point in your loop before reaching a desired target (for example a fixed cap of 60 FPS / 16.6 ms), and then you let the memory manager work for that long at the end of the game loop, having it pre-load data that is estimated to be needed in the next frames.
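A minimal sketch of that same-thread variant (a hypothetical API written for illustration, not Spark's actual code): run the frame's real work first, then spend whatever remains of a 16.6 ms budget draining a queue of pending loads.

```python
import time
from collections import deque

FRAME_BUDGET_S = 1 / 60  # target frame time: ~16.6 ms

def run_frame(update, pending_loads, load_one):
    """Run one frame of game logic, then use whatever time is left in the
    frame budget to pre-load assets estimated to be needed soon."""
    start = time.perf_counter()
    update()  # game logic + rendering for this frame
    # Leftover budget goes to the memory manager's load queue.
    while pending_loads and (time.perf_counter() - start) < FRAME_BUDGET_S:
        load_one(pending_loads.popleft())

# Stand-in usage: a trivial update leaves nearly the whole budget free,
# so both queued textures get loaded within this single frame.
loaded = []
pending = deque(["tex_wall", "tex_floor"])
run_frame(lambda: None, pending, loaded.append)
print(loaded)
```

The design choice being illustrated: loading work is strictly bounded per frame, so streaming can never push the frame past its FPS cap, at the price of loads taking many frames when the budget is tight.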
This is most probably not the case with the Spark engine, as there is no FPS cap in that sense. So my bet is that the memory manager runs in its own thread, gets passed messages about what needs to be loaded with what priority, does that, and reports back when the new memory table is ready to be read. This method is often the best suited for streaming textures, as the memory manager has much more time each frame to access the various levels of the memory hierarchy. I doubt the Spark engine loads textures from the HDD in that time though, because that is something you always want to avoid, and only make use of if you have a huge open world (think of the Elder Scrolls games) where all the data does not fit in RAM. Instead, that is what loading times are for: you load all the big stuff into RAM, if possible, and then only have to transfer it from RAM to VRAM, making much better use of your hierarchy.
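The own-thread variant guessed at above could look like this sketch (an assumed design for illustration; `StreamingManager` and `load_fn` are invented names, not Spark internals): the game thread posts prioritized requests, and a worker thread performs the slow loads and reports completions back through a queue the game loop can poll.

```python
import queue
import threading

class StreamingManager:
    """Hypothetical own-thread texture streamer: the game thread posts
    prioritized load requests; a worker loads them (e.g. RAM -> VRAM
    upload) off the game loop and reports completions back."""

    def __init__(self, load_fn):
        self.requests = queue.PriorityQueue()  # (priority, name); lowest first
        self.completed = queue.Queue()         # names the game loop can poll
        self._load = load_fn
        self._thread = threading.Thread(target=self._worker, daemon=True)
        self._thread.start()

    def request(self, priority, name):
        self.requests.put((priority, name))

    def _worker(self):
        while True:
            priority, name = self.requests.get()
            if name is None:          # shutdown sentinel, sorted last
                break
            self._load(name)          # the slow part, off the game thread
            self.completed.put(name)  # "memory table ready" report-back

    def shutdown(self):
        self.requests.put((float("inf"), None))
        self._thread.join()

# Stand-in usage with a dummy loader:
done = []
mgr = StreamingManager(done.append)
mgr.request(1, "wall_diffuse")   # hypothetical asset names
mgr.request(0, "skybox")
mgr.shutdown()
print(sorted(done))
```

Because the worker blocks on the queue rather than on a per-frame time slice, it can spend as long as a load actually needs, which fits the "no FPS cap" observation about Spark above.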
Though, as you mentioned, on consoles you are very limited in RAM and have to load from the HDD. My assumption is that because NS2 was planned as a PC game from the beginning, the Spark engine makes much more use of RAM than any console game can. Because what's the point of having large upper layers in the memory hierarchy if you are still going all the way down to the HDD anyway?
Insight from the developers would be welcome here, I find this topic to be very interesting.
You will never play NS2 smoothly at stock CPU speeds; the only way you'll get 80+ FPS constantly is by overclocking. Get a non-stock fan/cooler and learn to overclock,
Why would I need 80+ fps?
If my PC can handle it, I go for eye candy without overclocking.
I have an i7-960 running at stock speed with the stock cooler.
All settings are maxed, with VSync on, and I get a steady 50-60 FPS with no twitch or stutter during excessive mouse movement.
GTX 570
6 GB Kingston RAM
Seagate ST31000526SV ATA (a fast HD that is normally used in servers)
Win7-64
I tried the optional Texture Streaming a few builds ago; although performance was great, it gave me crashes in the end.
You will never play NS2 smoothly at stock CPU speeds; the only way you'll get 80+ FPS constantly is by overclocking. Get a non-stock fan/cooler and learn to overclock,
haha true :P
Indeed, I don't need it to run NS2 in a way I'm satisfied with.
My PC specs are good, and IMO it's the game that needs to adapt to them.
Pretty standard stuff.
And as Hugh already said, that's what they're constantly working on.
there's some magic number between 3.5 and 4.5 GHz where performance seems to sit at 100+ non-stop.
???
A higher frequency is always better from a pure performance point of view.
The only drawbacks are heat and power consumption.
I realise this. The point I'm making is that stock CPUs cannot run this game properly for an FPS, and some arbitrary number near 3.9-4.5 GHz hits a sweet spot where performance is great: a constant 80-100 FPS in any situation. You will never get this performance without overclocking your CPU, no matter what you do in game; none of the options in the menu help with this, they only reduce what little performance your GPU can contribute. I can max everything with my 7970 and drop maybe 5 FPS, or put everything on low and gain 5 FPS, at 1920x1080; changing resolution makes zero difference.
50-60 FPS might have been standard in CS 1.5 in 2003 (hell, I know it wasn't, but it was acceptable); barely pulling 60 FPS nowadays, with 120 Hz monitors and all the other wank, is... not good. And if you're using vsync and dropping below 60 FPS, you realise vsync drops the FPS to 30 straight away; vsync is a horrible method of limiting max FPS.
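The 60-to-30 drop described above follows from how double-buffered vsync quantizes frame times. A toy model (simplified: no triple buffering or adaptive vsync): a finished frame waits for the next refresh boundary, so the displayed frame time rounds up to a whole number of refresh intervals.

```python
import math

def vsynced_fps(render_ms, refresh_hz=60):
    """Displayed FPS under double-buffered vsync: a frame is only
    presented on a refresh boundary, so its effective time rounds up
    to a multiple of the refresh interval."""
    interval_ms = 1000 / refresh_hz
    intervals_waited = math.ceil(render_ms / interval_ms)
    return refresh_hz / intervals_waited

print(vsynced_fps(15.0))  # 60.0 -> fits within one 16.7 ms interval
print(vsynced_fps(17.0))  # 30.0 -> just misses one, waits for a second
```

So rendering only slightly slower than 16.7 ms per frame halves the displayed rate straight to 30 FPS, exactly the cliff the poster complains about.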
Max may correct me here, but texture streaming probably won't help in-game performance. It will, however, reduce load times.
Currently we're also doing a lot of things that will reduce our memory footprint, since that is preventing many people from even playing the game (even on a PC, RAM is not unlimited, especially given the other stuff that runs on a PC, like Windows).
But we are always aware of performance and looking for optimization opportunities. As a small team, it just takes a bit more time, but we'll get there.
If someone is on Windows with less than 4 GB, they should upgrade their computer.
I literally can't wait to run NS2 on my Linux box one day, as it uses less than 100 MiB of RAM when fully loaded, leaving me 7.9 GiB of my 8 GiB of RAM free to use.
I realise this. The point I'm making is that stock CPUs cannot run this game properly for an FPS [...] you will never get this performance without overclocking your CPU, no matter what you do in game.
That is not entirely true: my i5 3570K runs the game with a max of 15 ms per frame, which is absolutely fine. If you think you need at least 80 FPS constantly, then that is your preference, but I do not agree it is mandatory for enjoyment. Of course a shorter delay is always preferable, but from the many tests I did I can say that it is extremely hard to notice any improvement once the delay between HID input and a reaction on the screen drops below 16-18 ms. Extra buffer is always nice, and I would say that even a placebo can enhance your experience, so 100 FPS is surely not a bad thing. But 60 FPS is very much playable, and falls into the "run this game properly" category that you put out there.
On a side note, the effect of settings and overclocking on FPS that you describe is a very interesting one. What is limiting you there is in fact not the game engine, nor the graphics renderer; it is, at least to my understanding, the game logic. More specifically, it is the fact that all game logic in NS2 is written in an interpreted language (in contrast to a pre-compiled or JIT-compiled language*), combined with the complex nature of an FPS/RTS hybrid. Now this is not a bad thing; I am almost certain that without the development advantages that come with Lua, we would still not be playing NS2.
The drawback of complex game logic is that you can't scale it down the way you can graphics. You can't disable the pathfinding, entity detection, or movement system, or scale down the rate at which the game loop updates the status of players and structures, because all of that is essential to a balanced multiplayer experience. That is why, if you are CPU-bound, none of the settings will affect your performance much, while overclocking the CPU, bus, and memory very much will.
Why there appears to be a sweet spot at which the FPS stops fluctuating as much is a lot harder to understand, and it will be different for every CPU model; it probably has something to do with pipelining or the alignment of synchronization timings.
* And to be completely accurate, languages are not limited to one of those categories; for example, you can also run Lua as a just-in-time-compiled language with LuaJIT.
Edit: I'm assuming it "hot" loads the textures on an "as requested" basis.
I was wondering the same thing. Do people leave it on or off? Is it broken?
It loads them into video memory, if I recall correctly.
If you don't know you need it, you surely don't.