In the modern Quake days (quakelive.com) you get up to 80 ms of lag compensation,
which makes it fair for everyday play, but you can't connect from across the world and ruin games by shooting people a second after they think they've gone around a corner.
Basically this. I'm not sure what the upper limit is in NS2 but it feels pretty high / needs to be tightened.
Will this suck for people with higher pings? Yes. But it'll unsuck the game quite a bit for the people with lower pings who should be the majority...
@Kamamura, @frantix, @mattji104, @CrushaK
I'm usually in the top 3 scores. I don't have big issues like others do, as I have a new PC. I'm into photo and graphics work, so I need power and RAM, and I had to replace my machine anyway. Good thing for NS2, but I didn't buy a PC for it.
I'm just saying: in 2013,
-We have multiple GIGAHERTZ on our processors.
-Multiple cores.
-Gigabytes of RAM.
-Gigabytes in our HDDs/SSDs, sometimes terabytes.
-Screens that are just WIDE.
-Hardware acceleration for anything sound and/or graphics related, even 3D calculation with lots of "FX".
-Each new generation does better/faster with less electrical power.
-Recording video and streaming it to a server at the same time is possible...
...and THIS is still "slow"!? That's the kind of power no one would have dreamed of 10 or 20 years ago. The same goes for other applications (quality has been declining), but not this far. You don't need so much power for game mechanics. Other games don't need it, and they are as complex as NS2 (sometimes more, in some aspects).
Don't even bother looking at computers from 30 years ago... It's just madness to see that on 7 MHz (yes, "mega") computers we were making 3D games (not textured and all, but still). It's like the more power you have, the more of it is wasted.
I have a lot of problems with this game, but one thing that does not come to mind is hit detection. It's actually really good for such a low-budget FPS, much better than in CoD, which is insane considering the budgets for those games must be close to hundreds of millions at this point. I've never had a problem with the lag in this game, and it runs better than the big-budget shooters; only getting killed around a corner is frustrating, but understandable.
As far as shotguns go, I'm really not sure if their spread got fixed or is still partially random, because that was definitely the case before 250.
Skulks are pretty difficult to one-shot consistently without +3 weapons; the hitbox is so small and so fast that only a near point-blank shot will guarantee a one-shot. Most shotgun kills come from 2-3 shots, since they fire pretty fast now.
The only thing about hitreg that really pisses me off is when the skulk is literally filling your screen, you shotgun his face, and 70 shows up on your screen.
And now some bloke has started bringing these 30+ servers over to EU. I mean, is he intentionally trying to give the game a bad rep with nooblets? These massive, unbalanced, laggy servers really piss me off; a great way to ruin the way the game was meant to be played, and its rep along with it.
I have a lot of problems with this game, but one thing that does not come to mind is hit detection. It's actually really good for such a low-budget FPS, much better than in CoD, which is insane considering the budgets for those games must be close to hundreds of millions at this point. I've never had a problem with the lag in this game, and it runs better than the big-budget shooters; only getting killed around a corner is frustrating, but understandable.
As far as shotguns go, I'm really not sure if their spread got fixed or is still partially random, because that was definitely the case before 250.
Skulks are pretty difficult to one-shot consistently without +3 weapons; the hitbox is so small and so fast that only a near point-blank shot will guarantee a one-shot. Most shotgun kills come from 2-3 shots, since they fire pretty fast now.
Shotguns still have fixed spread, but their spread increased in b250, along with a base damage nerf (RoF increased, though). I think the goal of those changes was specifically to make one-shotting skulks a bit harder, and it obviously succeeded.
@UncleCrunch - yeah, the power of the hardware has grown dramatically, no doubt about that. But the speed of light is still just 3*10^5 km/s, and if you and your buddy sit a few hundred kilometers apart, your "now" is not exactly his "now", because the information travels at finite speed. True, network elements like hubs and routers add something too, but what you are experiencing is basically relativistic effects.
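As a back-of-the-envelope check on the numbers above, here's a minimal sketch of the propagation-delay floor (assuming signals travel at roughly two-thirds of c in fiber or copper; the distances are illustrative):

```python
# Back-of-the-envelope: the minimum one-way delay imposed by signal
# propagation alone. Real links add routing, queuing and serialization delays.

C_VACUUM_KM_S = 3e5        # speed of light in vacuum, ~3*10^5 km/s
FIBER_FACTOR = 2.0 / 3.0   # signals in fiber/copper travel at roughly 2/3 c

def min_one_way_ms(distance_km):
    """Lower bound on one-way latency over a given cable distance."""
    return distance_km / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000

print(round(min_one_way_ms(500), 2))   # 2.5  -> a few hundred km already costs ~2.5 ms
print(round(min_one_way_ms(6000), 1))  # 30.0 -> a trans-Atlantic cable run, ~30 ms
```

Even before any router touches the packet, distance alone sets a hard floor that no amount of CPU power can remove.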
Talking about the speed of the engine itself: while the power of the hardware grows, so does the complexity of the code. Gone are the days of the ZX Spectrum, when super-talented individuals could code directly in machine code, using superior knowledge of the hardware to work around all the latencies and squeeze the absolute max from the computers. Today you have Spark coded in C, and on top of that reside Lua scripts that resolve special game events not implemented in Spark itself. The devs told us they don't have the manpower to code the thing from the ground up. Admittedly, the result could be much, much better, but not everyone is John Carmack, with years of specialized experience and an almost unlimited budget.
(...)Talking about the speed of the engine itself, while the power of the hardware grows, so does the complexity of the code.(...)
Same proportions?
4000 MHz / 7 MHz = ~570 times; let's say the 80x86 is half as efficient as other kinds of CPU. The machine is still ~285 times more powerful. And I'm not even talking about I/O, which would increase the number drastically.
I doubt that the code of a game mechanic increases in complexity that much (x285) when all you have to do is simple additions and multiplications/divisions (e.g.: bullets * weapon level coef = damage per bullet).
Hit detection didn't change much from 2D to 3D; it's just more tests to add the 3rd dimension. And there are tricks for that (lines, planes). It stays simple number comparisons. You can even help with bounding spheres/boxes (a big, simple object to check whether it's even necessary to go further) to save cycles. But that's another topic.
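The bounding-sphere trick can be sketched in a few lines (illustrative Python with made-up names, not NS2 code): a cheap squared-distance test gates the expensive narrow-phase work.

```python
# Illustrative broad-phase hit test (made-up names, not NS2 code): a cheap
# bounding-sphere check gates the expensive narrow-phase work.

def sphere_hit(center, radius, point):
    """Cheap test: compare squared distances so no sqrt is needed."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    dz = point[2] - center[2]
    return dx * dx + dy * dy + dz * dz <= radius * radius

def precise_hit(point):
    # Stand-in for the expensive narrow-phase test (per-hitbox checks etc.).
    return True

def hit_test(center, radius, point):
    # Most shots are rejected by the cheap test; cycles are only spent on
    # the precise test when the bounding sphere is actually hit.
    return sphere_hit(center, radius, point) and precise_hit(point)

print(hit_test((0, 0, 0), 1.0, (0.5, 0.5, 0.5)))  # True
print(hit_test((0, 0, 0), 1.0, (5.0, 0.0, 0.0)))  # False
```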
Even the 3D engines as they are now just define what's needed, and DirectX/the card does the rest. The real 3D engine is in DirectX/the card, not the contrary. You can see it as a framework that does all the pixel drawing; that's 90% of the code you don't see as a dev.
So I'm afraid I have to say 'nope'.
You missed part of his point. Things can no longer be coded in machine language. Going to a higher-level language like C causes a performance loss over machine code, let alone adding Lua on top of C.
The reason things aren't coded in machine language anymore? Yes, complexity has increased, but what has increased even more is the amount of code. No one back then could have envisioned a program that would take millions of lines of code.
^ lol... who said you can't code in machine language? If you feel like having fun, most if not all (?) languages that compile to an executable (as opposed to interpreted languages) allow you to write assembly code, which is more or less machine instructions in a more "human-readable" format (i.e. "mov herp, derp" instead of "1001010001001010010010010111001..."; the instructions translate 1:1 to the generated machine code).
In addition, C is probably THE closest you can get to writing assembly without actually writing assembly, especially if you know how to code efficiently.
As far as C++/C# go, I can't say, because I don't have much experience with them, but from what I understand they mostly add object-oriented programming features...
That being said, fewer people are really going "all the way down" to machine instructions for substantial amounts of time these days, because people have either gotten lazy or project sizes have exploded, so it takes a lot of time to write and maintain code at such a low level, and computational time and system memory are SO cheap compared to 10+ years ago.
Think of it this way (an oversimplification, but it'll suffice):
- Lua is something like: "Go get bread from the store, here's $5. Use the hallway."
- C is something like: "Extend your hand, extend your fingers. When the paper bill hits your hand, close your fingers. Move your hand to your pocket, put your hand in the pocket, extend your fingers, and when the bill leaves your hand, take your hand out of the pocket. Turn 180 degrees. Walk until you're in the hallway. Turn left. Walk until you reach the front door..."
- Assembly/"machine code" is like: "Left hand, get spatial coordinates. Spatial coordinates for left finger +4,+5,+7, rotation 20,30,70..."
Basically, Lua and other "high-level" or interpreted languages allow you to get things done fast, unfortunately sometimes not very efficiently and without as much low-level control. There are still places where knowing how to write optimal low-level code is valuable, i.e. the rendering code and things like shad3rZ.
As far as NS2's requirements go - yes, they're totally ridiculous, and it's mostly because of Lua. There are no other games that have NEARLY the complexity of NS2's game code that use Lua - and for good reason... because it's not very efficient for time-critical gameplay. Unfortunately, the choice has been made, and porting to something else is beyond the financial means of the team at this point, I'm guessing (because of the initial work and then the ongoing code maintenance required afterwards).
You missed part of his point. Things can no longer be coded in machine language. Going to a higher-level language like C causes a performance loss over machine code, let alone adding Lua on top of C.
The reason things aren't coded in machine language anymore? Yes, complexity has increased, but what has increased even more is the amount of code. No one back then could have envisioned a program that would take millions of lines of code.
Yes they can. Far Cry 2 included some assembly, and it's not that old. Either way, you don't need that.
About the C language (as HeatSurge pointed out properly): in the end it's always machine code. The only difference is that a compiler writes it for you. Then comes the optimization question.
I was on the demo scene 20 years ago, and thousands (not millions... another big industry lie) of lines wouldn't be a problem, as we teamed up (like a company) to do stuff. The same thing happened in the game industry at the time. Plus, libraries are not only for high-level languages. You always capitalize on what was done before.
Complexity? No! Bounding boxes bring the simplicity that's needed for this kind of application. And suddenly you don't need millions of lines. We don't care whether a bullet hits the toe or the nail.
Everything I see in NS2 regarding balance (damage, timers, etc.) can be done with a little engine that "crawls" through tables. There will be no "super uber new technique" for that kind of thing. You always check things with compares (CMP), or IF/THEN/ELSE, or SWITCH for the programmer who knows something about matrices. And if you see hundreds of IFs nested inside each other (or combined), then, my friend, I'm afraid somebody should take some lessons.
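The "little engine that crawls through tables" idea can be sketched like this (hypothetical numbers, invented purely for illustration, not NS2's actual balance values): the data lives in tables, and one small function does the lookup and multiplication instead of nested IFs.

```python
# Hypothetical table-driven damage lookup; the numbers are invented for
# illustration and are NOT NS2's real balance values.

BASE_DAMAGE = {"rifle": 10, "shotgun": 17, "pistol": 25}
WEAPON_LEVEL_COEF = {0: 1.0, 1: 1.1, 2: 1.2, 3: 1.3}   # upgrade multipliers

def damage_per_bullet(weapon, level):
    """One lookup and one multiplication per shot; no nested IFs."""
    return BASE_DAMAGE[weapon] * WEAPON_LEVEL_COEF[level]

print(damage_per_bullet("rifle", 0))   # 10.0
print(damage_per_bullet("rifle", 3))   # 13.0
```

Rebalancing then means editing the tables, not touching the code.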
lol, please just stop UncleCrunch, you really have no idea what you are talking about.
You are basically trying to argue that we should all be getting massive frames per second because the code is not that complex...
There IS a performance loss going from machine code to higher-level languages, whether you believe it or not, and while the term "complexity" can mean different things to different people, there is no denying that today's software has far more concurrent code running than older software.
You'd be hard-pressed to find a multiplayer FPS game that has as much concurrent code running as NS2, where many aspects are run dynamically rather than pre-compiled, as in most multiplayer FPS games.
20 years ago was a long time ago; there is certainly lots of software out there with millions of lines of code ("an industry lie"... I lolled at that).
the logic in NS2 doesn't require anywhere close to the amount of computing resources it uses
it's a very fixed-scope game. it should have been compiled
sure it's more complicated than Quake, but it isn't more complicated than RTS games
In terms of the number of logic states running, an RTS game will have just as many as NS2. However, there are things an RTS game doesn't deal with that make it run better.
Most RTS games have very simple lighting, nothing like the deferred lighting rendering NS2 has, so they are all running pre-computed lighting with no dynamic lights.
RTS games have very simple visibility culling (meaning your computer isn't rendering things you can't see on your screen, so you get better FPS as a result). Basically, all an RTS game has to do is not render anything within the fog of war. NS2's visibility culling, in comparison, is far more complicated and is done on the fly, unlike the Source engine, where it is pre-computed.
RTS games generally don't have to deal with physics.
RTS games have very simplistic maps compared to NS2. RTS maps usually just have terrain elevation changes, trees, rocks, maybe some old building ruins, and maybe water. Because of the nature of an RTS game, you can make simple models in a map look good. NS2 maps, in comparison, have an incredible amount of lighting, models, and architectural features, which does affect performance.
You'd be hard-pressed to find a multiplayer FPS game that has as much concurrent code running as NS2, where many aspects are run dynamically rather than pre-compiled, as in most multiplayer FPS games.
That is not something to blame on the other multiplayer FPS games, but rather on NS2, then.
Unreal Engine games use a scripting language that is compiled to bytecode and interpreted by the engine's virtual machine at runtime, yet is still about 20 times slower than native C++ code. But the kicker is that this UnrealScript code is idling most of the time (only about 5% of CPU time is spent executing actual UScript code).
The language is very event-based, which means that the underlying C++ engine code does all the low-level computations like physics (moving Actors around the game world based on Velocity, updating Velocity based on Acceleration, handling collision) and calls UnrealScript functions when something interesting happens that the Actor wants to be notified about (like colliding with something, landing on the ground, entering the line of sight of another player, finishing an animation, calling a specific function after a timer runs out, being destroyed, etc.).
UnrealScript, on the other hand, can call functions whose bodies are implemented in native C++ code for faster execution (for instance, iterating a list of all non-hidden, colliding Actors within a given radius instead of checking all those properties manually in slower UnrealScript code; or generally all basic math functions).
Where NS2 would calculate the distance of all alien entities to a Crag on every frame update to determine if they are in range, the component- and event-based approach in UnrealScript would be to just add a non-blocking but colliding SphereCollisionComponent to the Crag and then get notified in script by the underlying C++ code when something enters that collision range, with another event when something leaves the range again. All the Crag would have to do is add every valid entity that enters the range to an internal list and remove it from the list when it leaves again. Then it would just need to go through the list on every update to heal everything in it, without having to perform any distance calculations of its own.
The same would be true for Unity, btw.
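The Crag example can be sketched generically (plain Python with a hypothetical HealAura class; no real NS2, UnrealScript, or Unity API is used): enter/leave events maintain a membership list, so the per-update work is just walking that list, with no distance math.

```python
# Hypothetical sketch of the event-driven pattern described above: the
# engine's collision layer would call on_enter/on_leave, and the per-frame
# update only walks the current membership list. No real NS2, UnrealScript
# or Unity API is used here.

class HealAura:
    def __init__(self, heal_per_tick):
        self.heal_per_tick = heal_per_tick
        self.in_range = []          # maintained by events, not per-frame scans

    def on_enter(self, entity):     # fired when something enters the radius
        self.in_range.append(entity)

    def on_leave(self, entity):     # fired when it leaves again
        self.in_range.remove(entity)

    def update(self):
        # Per-frame cost scales with entities in range, not with all
        # entities on the map, and needs no distance calculations.
        for entity in self.in_range:
            entity["hp"] += self.heal_per_tick

skulk = {"hp": 50}
aura = HealAura(heal_per_tick=6)
aura.on_enter(skulk)    # collision layer: skulk entered the radius
aura.update()
print(skulk["hp"])      # 56
aura.on_leave(skulk)    # collision layer: skulk left; later updates are free
aura.update()
print(skulk["hp"])      # 56
```

The design choice is to pay for range tracking only when membership changes, instead of recomputing distances for every entity on every frame.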
NS2 uses a different scripting approach, but you make it sound like a benefit that it runs so much concurrent code. There isn't that much more game logic in NS2's RTS aspect than other multiplayer FPS games have, unless CoD is the only multiplayer FPS game you know.
It would be pretty easy to recreate NS2 in UnrealScript, and it wouldn't break out in a performance sweat, tbh. The only thing you probably couldn't do efficiently without the full engine license would be managing the difference in visibility of certain surfaces between commanders and normal players.
As far as C++/C# go, I can't say, because I don't have much experience with them, but from what I understand they mostly add object-oriented programming features...
C++ is basically C with object orientation. It's still compiled to machine code, but you get all the organizational benefits of OOP. It also adds complexity, though, and it's easy to mess things up (as always when you need to take care of allocating and freeing memory yourself); plus, the syntax can only become more cryptic than C (especially if you use something like Hungarian Notation).
C# is compiled to bytecode and interpreted by a virtual machine. It has the .NET Framework available (shared with Visual Basic .NET), which is probably one of the most extensive libraries out there. It's a joy to code with, but unfortunately Microsoft is not so keen on releasing the framework for other operating systems (though there are free, open-source recreations of it that also work on other platforms), so you are somewhat bound to Windows.
The latest release, .NET Framework 4.5, incorporates some JIT functionality, and you can specify preferences for how each function should be inlined.
Unity allows you to use C# as a scripting language for its games. Alternatively, you can also use JavaScript (now called UnityScript in their slightly modified version) or Boo. The engine understands all of them, and they are also (within limits) compatible with each other, so you can call functions defined in a JavaScript class from a C# class. You can even extend the Unity editor itself with these scripts.
lol, please just stop UncleCrunch, you really have no idea what you are talking about.
You are basically trying to argue that we should all be getting massive frames per second because the code is not that complex...
There IS a performance loss going from machine code to higher-level languages, whether you believe it or not, and while the term "complexity" can mean different things to different people, there is no denying that today's software has far more concurrent code running than older software.
You'd be hard-pressed to find a multiplayer FPS game that has as much concurrent code running as NS2, where many aspects are run dynamically rather than pre-compiled, as in most multiplayer FPS games.
20 years ago was a long time ago; there is certainly lots of software out there with millions of lines of code ("an industry lie"... I lolled at that).
You missed it. My point is: you don't need assembler for NS2's game mechanics. No way.
What on earth is going on in this game that requires huge amounts of CPU power while other games just don't need it? I'm talking about FPSs, RTSs, and also combined FPS/RTS games.
I didn't say that OOP languages weren't losing some cycles here and there.
Let's look at it by language differences, since you do:
On a 2 GHz CPU you have 2 billion clock cycles/s. So at 60 FPS you have ~33,333,333 cycles available for each frame. Far more than a game needs in the first place. The only applications that really need cycles are the ones that deal with raw data like sound or graphics (or scientific calculation, of course): repetitive and simple routines. And we're back to ASIC acceleration (e.g. GPUs, DSPs and such). So that's not really the issue.
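The cycle-budget arithmetic above is easy to verify under the same assumptions (a 2 GHz clock and a 60 FPS target):

```python
# Cycle budget per frame under the post's assumptions:
# a 2 GHz CPU (2 billion cycles per second) targeting 60 FPS.

clock_hz = 2_000_000_000
fps = 60

cycles_per_frame = clock_hz // fps
print(cycles_per_frame)   # 33333333 -> the ~33 million cycles quoted above
```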
On a PC, the simplest instruction in assembler is MOV(e) (a copy), which takes only one cycle (more depending on the addressing mode). DIV(ision) takes from 150 to 200 cycles, more or less depending on CPU generation and efficiency; the newer, the lower. So you have some 'time' before you fill up those ~33 million cycles.
In C, if you do something like somevar = somenum * 10, the compiler will use the very same instructions (MOV, MUL). It's just going to take a few more cycles to store and fetch things the way it likes, but you're really close to something you could write in ASM. The same goes for C++, with a little extra compared to C (usually more RAM for the object stuff).
C++ (or others), thanks to optimized libraries and frameworks (sometimes written in C or ASM), is pretty efficient and powerful. That's the way of doing things: simple functions (the dirty work) are programmed in languages close to the machine and stored in libraries, and then you use an OO language on top. And that's it. The portion of OO language is so tiny when using optimized libraries that it removes the need for 2 GHz CPUs...
Sometimes you don't even need anything else, as the compiler does a really good job. So yes, there is a loss with these languages (compared to ASM or C), but there is no huge gap if things are programmed and compiled properly. You can even write ASM code that is no faster than compiled C if you don't use every trick of the architecture.
Most of the time, a programmer optimizes where it's needed, profiling (and such) to find where the bottleneck is, and then works on a solution for it. I'm not sure that's the case with NS2.
So my second question is: how can the code have been ruined so badly that the minimum requirement is a CPU above 2 GHz? I mean, even in Lua, which is not that bad. India outsourcing (nah, just joking... still...)? There is definitely something wrong.
My machine-code knowledge is a good 20 years old now (the 68000 era was the last time I needed to code raw); however, we used to multiply by fractions instead, as multiplication was faster than division.
That being said, C code would compile this automatically, so C divisions wouldn't use DIV anyway. I would imagine interpreters do the same.
There should be no modern reason (outside certain paranoid specifics such as military guidance and aeronautics, that I know of) to use machine code over a higher-level compiled language such as C or its derivatives.
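The multiply-by-a-fraction trick mentioned above is the classic strength reduction that compilers still perform today; an integer version can be sketched like this (illustrative; the constant and the valid input range are specific to this example):

```python
# Classic strength reduction: replace integer division by a constant with a
# multiply and a shift. 205/2048 approximates 1/10; the approximation only
# holds over a bounded range, which is verified exhaustively below.

def div10_fast(x):
    return (x * 205) >> 11    # x * (205 / 2048) ~= x / 10

# The trick matches true integer division for every x in 0..1023.
for x in range(1024):
    assert div10_fast(x) == x // 10

print(div10_fast(999))   # 99
```

Compilers pick wider constants so the identity holds over the full register range, but the principle is the same: one multiply and one shift instead of a slow divide.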
It needs improvement, and it's the main reason I don't play NS2 much anymore; it's the biggest fun-killer for me. Man, I had more fun with NS2 back in beta, when you averaged 20 fps, but at the very least I felt like there were way fewer problems with hit detection.
Back in UT and Quake days, leading the target according to your ping was considered part of the skill.
Perhaps that Lightning Gun was meant to be an instant-hit shot at the target - but to me it was a simulation of a bullet traveling towards the target for 80 ms!
We didn't need client-side sissy lag compensation back then! Playing on servers located on the other side of the world was just playing in HARD MODE!!!
I'm not sure how serious you are, but I feel like shitty lag compensation like NS2's actually makes the game worse, not better. I've played some games without lag compensation recently; to get hits you had to aim a little ahead of the player models, and the higher your ping, the further ahead.
But at the very least it was consistent, not subject to random bullshit like in NS2, where it seems like your movement or aim matters little because you never know whether the game will decide to fuck you over. Unless there is some secret I don't know about.
lol UncleCrunch, you make it sound like no game should ever require anything more than a 2 GHz CPU. However, you are missing so much. Where those CPU cycles get used up is FAR more complicated than what you are describing. I could go into more detail about where CPU cycles go during a given time frame, but it would require far more typing/teaching than I'm willing to do.
@UncleCrunch - yeah, the power of the hardware has grown dramatically, no doubt about that. But the speed of light is still just 3*10^5 km/s, and if you and your buddy sit a few hundred kilometers apart, your "now" is not exactly his "now", because the information travels at finite speed. True, network elements like hubs and routers add something too, but what you are experiencing is basically relativistic effects.
Talking about the speed of the engine itself: while the power of the hardware grows, so does the complexity of the code. Gone are the days of the ZX Spectrum, when super-talented individuals could code directly in machine code, using superior knowledge of the hardware to work around all the latencies and squeeze the absolute max from the computers. Today you have Spark coded in C, and on top of that reside Lua scripts that resolve special game events not implemented in Spark itself. The devs told us they don't have the manpower to code the thing from the ground up. Admittedly, the result could be much, much better, but not everyone is John Carmack, with years of specialized experience and an almost unlimited budget.
No, light takes roughly 150 ms to circumnavigate the globe. We are not talking about that as a constraint; processing time and routing speed make up the majority of lag.
No, light takes roughly 150 ms to circumnavigate the globe. We are not talking about that as a constraint; processing time and routing speed make up the majority of lag.
This is right, but it is also the reality we need to deal with. Information on the internet does not get sent through the air, so the 150 ms figure is unrealistic. Processing time and routing speed are nothing other than sending electrons through a conductor, and are therefore capped at the speed of light (in the given medium). While it may be more realistic that we could decrease the amount of conductor the signal has to pass through (say NO to DPI!), it won't really happen.
No, light takes roughly 150 ms to circumnavigate the globe. We are not talking about that as a constraint; processing time and routing speed make up the majority of lag.
This is right, but it is also the reality we need to deal with. Information on the internet does not get sent through the air, so the 150 ms figure is unrealistic. Processing time and routing speed are nothing other than sending electrons through a conductor, and are therefore capped at the speed of light (in the given medium). While it may be more realistic that we could decrease the amount of conductor the signal has to pass through (say NO to DPI!), it won't really happen.
Apparently signals in copper wire propagate at about two-thirds the speed of light, so processing times still make up the majority of the delay. Also, lots of long-distance traffic goes over fiber optics, so it's still roughly the speed of light.
Light can go around the earth about 7.5 times in a second, takes 1.3 seconds to reach the moon, and 8 minutes to reach the sun. Light's fast, but the universe is ginormous.
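Those figures are easy to sanity-check with round distances (Earth's circumference, Earth-Moon, and Earth-Sun):

```python
# Sanity check of the light-travel figures, using the vacuum speed of light
# and approximate round-number distances.

C_KM_S = 299_792               # speed of light, km/s
earth_circumference = 40_075   # km
earth_moon = 384_400           # km
earth_sun = 149_600_000        # km

print(round(C_KM_S / earth_circumference, 1))   # 7.5 trips around Earth/s
print(round(earth_moon / C_KM_S, 1))            # 1.3 s to the moon
print(round(earth_sun / C_KM_S / 60, 1))        # 8.3 minutes to the sun
```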
But what is processing time? Electrons running through a conductor. When a chip in a router processes your packet to send it to the next one, it does the calculation by sending electrons through the chip, and they move at the speed of light (in the given medium; silicon, in the case of integrated circuits made of semiconductors).
While you can't change the speed (other than by changing the medium the electrons travel through), you also can't really change the distance. Sure, without DPI it would have lower latency by having less to calculate (= less travel distance in the chip), and maybe some other improvements too. But the internet will stick to routers connected by wires, the OSI model will remain in use, and those packets need to be processed. So you won't change those distances much, which makes the answer "it can't be realtime because of the speed of light" not wrong.
Processing time happens in the switches and network backends. You'll mostly notice it if there are issues; processing the packets and routing them accordingly does take some CPU time, and under heavy load it might take longer than usual, but most of the time these switches are extremely capable of handling the load. Still, pings should be low as long as you stay local. If you go further, like US<->EU, most of the latency will simply be the fact that the light needs time to travel through the sea cables.
lol UncleCrunch, you make it sound like no game should ever require anything more than a 2 GHz CPU. However, you are missing so much. Where those CPU cycles get used up is FAR more complicated than what you are describing. I could go into more detail about where CPU cycles go during a given time frame, but it would require far more typing/teaching than I'm willing to do.
Yes, you're right to state that. The machine itself also consumes/removes cycles.
What is that? I/O waits, port listening, hardware synchro (or hand-backs), inter-process communication, locks on memory areas??? Still... a game like NS2 shouldn't need a 2 GHz CPU/PC. I mean, every other game faces the same issues on PC (or console). Those games work fine on a 2 GHz CPU (maybe less, depending on the generation), and some are just as complex as NS2, especially RTS games.
So yes, a game could take up to 4 GHz if needed; by 'needed' I mean the game is so complex that you can't do with less than a high-end CPU, and the devs did a really good job in every corner (including speed-ups and optimizations). Not wasteful like NS2 seems to be.
Graphics are fine, maps are fine. All the complaints I hear on servers are about CPU issues. It doesn't take a genius in quantum physics to get it.
As stated in other threads by many, including me: at each DX/game generation it was "I need a new video card". We were used to it and stood by it, as it looked normal in a sense. NS2 is the only one that brings "I need a new PC"...
My machine code is a good 20 years old now (68000 era, the last time I needed to code raw), but back then we used to multiply by fractions instead, since multiplication was faster than division.
That being said, a C compiler would do this automatically, so C divisions wouldn't use DIV anyway. I would imagine interpreters do the same.
There should be no modern reason (outside certain paranoid niches such as military guidance and aeronautics, that I know of) to use machine code over a higher-level compiled language such as C or its derivatives.
You're right about that; compilers know some tricks here and there to optimize (depending on the architecture). It was just an example. One way or another, a multiply or divide takes more than one cycle.
I'm not saying programmers should always use ASM, just when and if needed. Something is so wrong with NS2 that it can't be Lua alone (or the JIT). Even a plain Lua compiler does the job properly, and all scripts get faster with a JIT (bytecode to machine code).
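The multiply-by-fractions trick mentioned above can be sketched in a few lines. This is a toy illustration, not any compiler's actual output: the magic constant 205/2048 approximates 1/10, and is only exact over a small range (real compilers pick wider constants that cover all inputs).

```python
def div10_exact(x):
    return x // 10

def div10_fast(x):
    # fixed-point reciprocal: 205/2048 ~= 1/10, so one multiply and one
    # shift stand in for an expensive division. With this small constant
    # the result is only exact for 0 <= x <= 1028.
    return (x * 205) >> 11

print(div10_fast(1000))  # 100, same as 1000 // 10
```

The same idea is why "multiply by 0.5" beat "divide by 2" on CPUs where DIV cost many times more cycles than MUL.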
IMHO this game has a relatively high skill ceiling regardless of what side you play or what your ping is. I'd consider myself a very good FPS gamer, and even I struggled (and still do) playing NS2. Melee combat is fairly uncommon in FPS games, so people aren't very good at it and it takes time to learn. It also takes time to learn how to shoot and kill a small little bug instead of the full-sized human characters you see in most FPS games.
Whether you view this as a good or bad thing, I don't think NS2 is a very noob-friendly game. You're not going to play well consistently, even after several hours of gameplay. I've got around 60-70 hours and I still die quite a bit, and I'm going up against guys I know have played for hundreds of hours or more. I have the patience to keep playing, but I know many don't, especially those new to NS. It's not like CoD, where with my time played I'd be racking up 2:1 or 3:1 k/d ratios on the reg.
As far as shotguns go, I'm really not sure if their spread got fixed or is still partially random, because it was definitely random before 250.
Skulks are pretty difficult to one-shot consistently without +3 weapons; the hitbox is so small and so fast that only a near point-blank shot guarantees a one-shot. Most shotgun kills come from 2-3 shots, since they fire pretty fast now.
And now some bloke has started bringing these 30+ player servers over to the EU. Is he intentionally trying to give the game a bad rep with nooblets? These massive, unbalanced, laggy servers really piss me off: a great way to ruin the way the game was meant to be played, and its rep along with it.
Shotguns still have fixed spread, but the spread was increased in b250, along with a base damage nerf (RoF was increased, though). I think the goal of those changes was specifically to make one-shotting skulks a bit harder, and at that it obviously succeeded.
Talking about the speed of the engine itself: while the power of the hardware grows, so does the complexity of the code. Gone are the days of the ZX Spectrum, where super-talented individuals could code directly in machine code, using superior knowledge of the hardware to work around all the latencies and squeeze the absolute maximum from the computers. Today you have Spark coded in C, and on top of that reside Lua scripts that handle the game events not implemented in Spark itself. The devs told us they don't have the manpower to code the thing from the ground up; admittedly the result could be much, much better, but not everyone is John Carmack with years of specialized experience and an almost unlimited budget.
Same proportions?
4000 MHz / 7 MHz = ~570 times; let's say x86 is half as efficient as other kinds of CPU. The machine is still ~285 times more powerful. And I don't even count the I/O, which would increase the number drastically.
I doubt the game-mechanic code increases in complexity that much (hundreds of times over) when all you have to do is simple additions and multiplications/divisions (e.g. bullet damage * weapon level coef = damage per bullet).
Hit detection didn't change much from 2D to 3D; it's just more tests for the third dimension. And there are tricks for that (line tests, planes). It stays simple number comparisons. You can even help it along with bounding spheres/boxes (one big, simple object to decide whether it's even worth going further) to save cycles. But that's another topic.
It's not even the 3D engines: as they are now, they just declare what's needed and DirectX/the card does the rest. The real 3D engine is in DirectX/the card, not the other way around. You can see it as a framework that does all the pixel drawing, and that's 90% of the code you never see as a dev.
So I'm afraid I have to say 'nope'.
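The bounding-sphere pre-test described above really is just a few operations. A minimal sketch (the function name is mine, not NS2's code):

```python
def spheres_may_collide(c1, r1, c2, r2):
    # Broad-phase test: compare the squared distance between centers
    # against the squared sum of radii. Squaring both sides avoids the
    # sqrt a real distance would need, which is the cycle-saving point.
    dx = c1[0] - c2[0]
    dy = c1[1] - c2[1]
    dz = c1[2] - c2[2]
    return dx * dx + dy * dy + dz * dz <= (r1 + r2) ** 2
```

Only pairs that pass this cheap test need the expensive narrow-phase check against the actual model geometry.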
You missed part of his point. Things can no longer be coded in machine language. Going to a higher-level language like C costs some performance over machine code, let alone adding Lua on top of C.
The reason things aren't coded in machine language anymore? Yes, complexity has increased, but what has increased even more is the sheer amount of code. No one back then could have envisioned a program running to millions of lines.
In addition, C is probably THE closest you can get to writing assembly without actually writing assembly, especially if you know how to code efficiently.
As for C++/C#, I can't say, because I don't have much experience with them, but from what I understand they mostly add object-oriented programming features...
That being said, fewer people really go "all the way down" to machine instructions for substantial amounts of code these days, because people have either gotten lazy or project sizes have exploded, so it takes a lot of time to write and maintain code at such a low level, and computational time and system memory are SO cheap compared to 10+ years ago.
Think of it this way (an oversimplification, but it'll suffice):
- LUA is something like: "Go get bread from the store, here's $5. Use the hallway."
- C is something like: "Extend your hand, extend your fingers. When paper bill hits hand, close fingers. Move hand close to pocket, put hand in pocket, extend fingers, when bill leaves hand, take hand out of pocket. Turn 180 degrees. Walk until you're in the hallway. Turn left. Walk until you reach the front door..."
- Assembly/"Machine Code" is like: "Left hand, get spatial coordinates. Spatial coordinate for left finger +4,+5,+7, rotation 20,30,70..."
Basically, Lua and other "high level" or interpreted languages let you get things done fast, though sometimes not very efficiently and without as much low-level control. There are still places where knowing how to write optimal low-level code is valuable, e.g. the rendering code and things like shad3rZ.
As for NS2's requirements: yes, they're totally ridiculous, and it's mostly because of Lua. No other game with anywhere NEAR the complexity of NS2's game code uses Lua, and for good reason: it's not very efficient for time-critical gameplay. Unfortunately the choice has been made, and porting to something else is beyond the team's financial means at this point, I'm guessing (because of the initial work and then the ongoing code maintenance required afterwards).
Yes they can. Far Cry 2 included some assembly, and it's not that old. Either way, you don't need that.
About C (as HeatSurge pointed out properly): in the end it's always machine code. The only difference is that a compiler writes it for you. Then comes the question of optimization.
I was on the demo scene 20 years ago, and thousands of lines (not millions... another big industry lie) weren't a problem, since we teamed up (like a company) to do stuff. The same happened in the game industry at the time. Plus, libraries are not only for high-level languages; you always capitalize on what was done before.
Complexity? No! Bounding boxes bring exactly the simplicity these kinds of applications need, and suddenly you don't need millions of lines. We don't care whether a bullet hit the toe or the toenail.
Everything I see in NS2 regarding balance (damage, timers, etc.) can be done with a little engine that crawls through tables. There will be no "super uber new technique" for that kind of thing. You always check things with compares (CMP), or IF/THEN/ELSE, or a SWITCH for the programmer who knows something about matrices. And if you see hundreds of IFs nested inside each other (or combined), then my friend, I'm afraid somebody should take some lessons.
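The table-crawling balance engine described above could look something like this. The weapon names and numbers here are invented for the example, not NS2's real values:

```python
# Hypothetical balance table: base damage plus a per-upgrade coefficient.
WEAPONS = {
    "rifle":   {"base": 10.0, "level_coef": (1.0, 1.1, 1.2, 1.3)},
    "shotgun": {"base": 17.0, "level_coef": (1.0, 1.1, 1.2, 1.3)},
}

def damage_per_bullet(weapon, upgrade_level):
    # One lookup and one multiply: bullet * weapon level coef,
    # exactly the arithmetic the post describes.
    row = WEAPONS[weapon]
    return row["base"] * row["level_coef"][upgrade_level]
```

Balance tweaks then become data edits in the table rather than changes to nested IF chains.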
You are basically arguing that we should all be getting massive frames per second because the code is not that complex.....
There IS a performance loss going from machine code to higher-level languages, whether you believe it or not, and while the term "complexity" can mean different things to different people, there's no denying that today's software runs far more concurrent code than older software did.
You'd be hard pressed to find a multiplayer FPS with as much concurrent code running as NS2, where many aspects run dynamically rather than pre-compiled like in most multiplayer FPS games.
20 years ago was a long time ago; there is certainly lots of software out there with millions of lines of code ("an industry lie"... I lolled at that).
it's a very fixed-scope game. it should have been compiled
sure it's more complicated than Quake, but it isn't more complicated than RTS games
In terms of the number of logic states running, an RTS game will have just as many as NS2. However, there are things an RTS game doesn't have to deal with that make it run better.
Most RTS games have very simple lighting, nothing like the deferred rendering NS2 has, so they all run pre-computed lighting with no dynamic lights.
RTS games have very simple visibility culling (that means your computer doesn't render things you can't see on screen, so you get better FPS as a result). Basically, all an RTS game has to do is not render anything inside the fog of war. NS2's visibility culling is far more complicated and done on the fly, unlike the Source engine, where it's pre-computed.
RTS games don't generally have to deal with physics.
RTS games have very simplistic maps compared to NS2: usually just terrain elevation changes, trees, rocks, maybe some old building ruins and maybe water. Because of the nature of an RTS game, you can make simple models in a map look good. NS2 maps, in comparison, have an incredible amount of lighting, models and architectural detail, which does affect performance.
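The fog-of-war point is easy to illustrate: an RTS-style visibility cull can be as cheap as a set lookup. A toy sketch, not any real engine's code:

```python
def visible_units(units, revealed_tiles):
    # An RTS only needs to skip whatever sits inside the fog of war.
    # 'revealed_tiles' is the set of (x, y) tiles currently uncovered;
    # anything on a hidden tile is simply never handed to the renderer.
    return [u for u in units if (u["x"], u["y"]) in revealed_tiles]
```

A first-person engine like NS2's can't get away with this, since occlusion depends on the camera's exact position and the map geometry, which is why its culling has to run on the fly.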
That's not something to blame on the other multiplayer FPS games, but rather on NS2 then.
Unreal Engine games use a scripting language that is compiled to bytecode and interpreted by the engine's virtual machine at runtime, and it's still about 20 times slower than native C++ code. But the kicker is that this UnrealScript code idles most of the time (actual UScript code executes in only about 5% of CPU time).
The language is very event-based, which means that the underlying C++ engine code does all the low-level computations like physics (moving Actors around the game world based on Velocity, updating Velocity based on Acceleration, handling collision) and calls UnrealScript functions when something interesting happens that the Actor wants to be notified about (like colliding with something, having landed on the ground, enters the line of sight of another player, finishing an animation, calling a specific function after a timer runs out, being destroyed, etc).
UnrealScript on the other hand can call functions that have their body implemented in native C++ code for faster execution (for instance iterating a list of all non-hidden, colliding Actors within a given radius instead of checking all those properties in slower UnrealScript code manually; or generally all basic math functions).
Where NS2 would calculate the distance of every alien entity to a Crag on every frame update to determine if it is in range, the component- and event-based approach in UnrealScript would be to just add a non-blocking but colliding SphereCollisionComponent to the Crag and then get notified in script by the underlying C++ code when something enters that collision range, and again when something leaves it. All the Crag would have to do is add every valid entity that enters the range to an internal list and remove it when it leaves. Then it just goes through the list on every update to heal everything in it, without performing any distance calculations of its own.
The same would be true for Unity, btw.
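The event-driven Crag described above boils down to very little bookkeeping. A sketch of the idea in Python (class and method names are invented here, not real UnrealScript, Unity, or NS2 API, and the heal value is assumed):

```python
class Entity:
    def __init__(self, hp):
        self.hp = hp

class Crag:
    HEAL_PER_TICK = 10  # assumed value for the example

    def __init__(self):
        # Maintained purely by collision events, never by per-frame math.
        self.in_range = set()

    def on_enter_range(self, entity):
        # The engine would fire this when something enters the sphere.
        self.in_range.add(entity)

    def on_leave_range(self, entity):
        self.in_range.discard(entity)

    def update(self):
        # No distance calculations here: just walk the maintained set.
        for entity in self.in_range:
            entity.hp += self.HEAL_PER_TICK
```

The distance work is paid once per enter/leave event in native code instead of once per entity per frame in script.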
NS2 uses a different scripting approach, but you make it sound like a benefit that it runs so much concurrent code. The RTS aspect of NS2 doesn't add that much more game logic than other multiplayer FPS games have, unless CoD is the only multiplayer FPS you know.
It would be pretty easy to recreate NS2 in UnrealScript, and it wouldn't break a performance sweat, tbh. The only thing you probably couldn't do efficiently without the full engine license is managing the difference in visibility of certain surfaces between commanders and normal players.
C++ is basically C with object orientation. It's still compiled to machine code, but you get all the organizational benefits of OOP. It also adds complexity, and it's easy to mess things up (as always when you have to allocate and free memory yourself), plus the syntax can only get more cryptic than C (especially if you use something like Hungarian notation).
C# is compiled to bytecode and interpreted by a virtual machine. It has the .NET Framework available (shared with Visual Basic .NET), which is probably one of the most extensive libraries out there. It's a joy to code with, but unfortunately Microsoft isn't keen on releasing the framework on other operating systems (though there are free open-source recreations that work on other platforms), so you're somewhat bound to Windows.
The latest release, the .NET Framework 4.5, incorporates some JIT functionality, and you can specify per-function preferences for how it should be inlined.
Unity lets you use C# as the scripting language for its games. Alternatively you can use JavaScript (now called UnityScript in their slightly modified version) or Boo. The engine understands all of them and they are (within limits) compatible with each other, so you can call functions defined in a JavaScript class from a C# class. You can even extend the Unity editor itself with these scripts.
not talking about GPU
You missed it. My point is: you don't need assembler for NS2's game mechanics. No way.
What on earth is going on in this game that it requires huge amounts of CPU power while other games just don't? I'm talking about FPS, RTS, and also combined FPS/RTS games.
I didn't say OOP languages don't lose some cycles here and there.
Let's look at it by language, since you do:
On a 2 GHz CPU you have 2 billion clock cycles per second, so at 60 FPS you have ~33,333,333 cycles available per frame. Far more than a game needs in the first place. The only applications that really need cycles are the ones dealing with raw data like sound or graphics (or scientific computation, of course): repetitive, simple routines. And for those we're back to ASIC acceleration (GPU, DSP and such), so that's not really the issue.
On a PC, the simplest assembler instruction is MOV(e) (a copy), which takes only one cycle (more depending on the addressing mode). DIV(ision) can take on the order of tens to a couple hundred cycles, depending on CPU generation and efficiency; the newer, the lower. So you have some 'time' before you fill up those 33 million cycles.
In C, if you write something like somevar = somenum * 10, the compiler uses much the same instructions (MOV, MUL); it just takes a few more cycles to store and fetch things the way it likes. But you're really close to what you'd write in ASM. The same goes for C++, with a little extra on top of C (usually more RAM for the object machinery).
C++ (and others) is pretty efficient and powerful thanks to optimized libraries and frameworks (sometimes written in C or ASM). That's the way of doing things: the simple functions (the dirty work) are programmed in languages close to the machine and stored in libraries, then you drive them from an OO language. The portion of OO code is so tiny when using optimized libraries that it removes the need for a 2 GHz CPU...
Sometimes you don't even need anything else, as the compiler does a really good job. So yes, there is a loss with these languages (compared to ASM or C), but there's no huge gap if the code is written and compiled properly. You can even write ASM that's no faster than compiled C, if you don't use every trick of the architecture.
Most of the time a programmer optimizes where it's needed, by profiling and such, to find the bottleneck and then work on a solution. I'm not sure that's the case with NS2.
So my second question is: how did the code get ruined so badly that the minimum requirement is a CPU above 2 GHz? Even in Lua, which isn't that bad. Outsourcing to India (nah, just joking... still...)? There is definitely something wrong.
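The budget math above is easy to check, and it also shows how many worst-case divisions would fit in a single frame (200 cycles per DIV is the pessimistic figure assumed in the post, not a measured value):

```python
CPU_HZ = 2_000_000_000   # the 2 GHz CPU from the post
FPS = 60
DIV_COST = 200           # pessimistic cycles per DIV, as assumed above

cycles_per_frame = CPU_HZ // FPS           # ~33.3 million cycles/frame
divs_per_frame = cycles_per_frame // DIV_COST

print(cycles_per_frame, divs_per_frame)    # 33333333 166666
```

So even at the worst assumed cost, a 2 GHz core could retire over 160,000 divisions per frame, which is the post's point about raw arithmetic not being the bottleneck.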
@CrushaK: good points.
I'm not sure how serious you are, but I feel like shoddy lag compensation like NS2's actually makes the game worse, not better. I recently played a game without lag compensation, so to land hits you had to aim a little ahead of the player models; the higher your ping, the further ahead.
But at the very least it was consistent, not subject to random bullshit like in NS2, where your movement and aim seem to matter little because you never know when the game decides to fuck you over. Unless there's some secret I don't know about.
No, light takes roughly 150 ms to circumnavigate the globe; we're not talking about that as a constraint. Processing time and routing speed make up the majority of lag.
This is right, but it's also the reality we have to deal with. Information on the internet doesn't get sent through the air, so the 150 ms figure is unrealistic. Processing and routing is nothing but sending electrons through a conductor, and therefore capped by the speed of light (in that medium). While it might be more realistic to decrease the amount of conductor the signal has to pass through (say NO to DPI!), it won't really happen.
Apparently signals in copper wire propagate at about two thirds the speed of light, so processing times still make up the majority of the delay. Also, lots of long-distance links are fiber optics, where the signal travels at roughly two thirds of c as well.
Light can go around the Earth roughly 7.5 times a second, takes 1.3 seconds to reach the Moon, and 8 minutes to reach the Sun. Light's fast, but the universe is ginormous.
While you can't change the speed (short of changing the medium the signal travels through), you also can't really change the distance. Sure, without DPI you'd get lower latency from having less to compute (= less travel distance inside the chips), and maybe some other improvements too. But the internet will stick to routers connected by wires, the OSI model will remain in use, and those packets need to be processed. So you won't change those distances much. Which makes the answer "it can't be realtime because of the speed of light" not wrong.
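For what it's worth, the propagation figures being thrown around here can be sanity-checked in a few lines. The distances are rough figures I'm assuming (great-circle values; real cable routes are longer), and queuing/routing delay comes on top of all of this:

```python
C_VACUUM = 299_792.458            # speed of light in vacuum, km/s
SIGNAL_SPEED = C_VACUUM * 2 / 3   # ~2/3 c in fiber or copper

def one_way_ms(distance_km):
    # propagation delay only, no processing or queuing
    return distance_km / SIGNAL_SPEED * 1000.0

# Around the globe in vacuum (~40,075 km): ~134 ms, in the same ballpark
# as the "roughly 150 ms" quoted above.
globe_ms = 40_075 / C_VACUUM * 1000.0

# New York <-> London, assumed ~5,570 km great circle: ~28 ms one way in fiber.
transatlantic_ms = one_way_ms(5_570)
```

So a ~56 ms transatlantic round trip is the hard physical floor before a single router has touched the packet, which is why the rest of the observed ping has to come from processing and routing.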