Engine question
countbasie
Join Date: 2008-12-27 Member: 65884Members
Hello,
I have a question.
In Quake- and Source-based games the netcode and graphics-engine code seem to be separated. When the network connection times out, I still get the same framerate and everything keeps playing the last animations in a loop, completely smoothly.
In NS2, or so you forum guys tell me, my bad framerate may be caused by the server's low tick rate. And when the connection times out, everything seems to hang or crash.
Could someone explain to me, as simply and accurately as possible, what is different in Spark compared to, let's say, Quake? Or is my premise completely wrong?
Comments
The netcode is getting better as they go; that's why the game is still in testing right now, and what we have is considered a work in progress.
The question was: How is the netcode related to the graphics code?
I *think* the problem we see is caused by unoptimized extrapolation. When the server is slow, the client gets fewer updates than it needs, so it extrapolates whenever it has to render a frame without all the data. And right now it's simply not very good at that.
But this is just my understanding and it may not be true at all.
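To illustrate what I mean by extrapolation: something like the dead-reckoning sketch below, in Lua since that's what the game logic runs on. All names here are made up for illustration; this is not Spark's actual code.

```lua
-- Dead-reckoning extrapolation (illustrative only, not Spark's actual code).
-- With no fresh snapshot, guess the entity's position by projecting its
-- last known velocity forward in time.
local function extrapolatePosition(lastSnapshot, now)
    local dt = now - lastSnapshot.time  -- seconds since the last server update
    return {
        x = lastSnapshot.x + lastSnapshot.vx * dt,
        y = lastSnapshot.y + lastSnapshot.vy * dt,
        z = lastSnapshot.z + lastSnapshot.vz * dt,
    }
end

-- Example: the last update put a skulk at x = 10, moving at 6 units/s.
local snap = { time = 0.0, x = 10, y = 0, z = 0, vx = 6, vy = 0, vz = 0 }
local guess = extrapolatePosition(snap, 0.25)  -- rendering 250 ms later
print(guess.x)  --> 11.5: the client draws where it *thinks* the skulk is
```

The longer the client goes without an update, the further that guess drifts from the truth, which is where the ugliness comes from.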
"what you have" is the stuff from the last frame so it would cause really annoying hitching, even more annoying than having temporarily low framerates from extrapolating. Low FPS > Drawing the same frame over and over again, effectively having "zero FPS".
Sorry, I'm not a programmer, but I wanna learn something. I think a few people here could be interested in this part of development.
From what I understand, neither in Spark nor in Source or Quake is the client completely independent from the network, because the client needs to wait for confirmed information about what it should draw, for example a skulk moving 100 pixels to the left. And gaps in this information are filled in via inter- and extrapolation.
Having the client draw before a confirmation would cause artifacts/hitching, because your client doesn't know what the skulk will do.
Is that about right?
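For comparison, Quake/Source-style interpolation works roughly like the sketch below: the client renders slightly in the past so it can blend between two *confirmed* snapshots instead of guessing ahead. Names and numbers are illustrative only, not any engine's real API.

```lua
-- Quake/Source-style interpolation (illustrative names, not a real engine API).
-- The client renders ~100 ms in the past so it usually has two confirmed
-- snapshots to blend between, instead of guessing ahead.
local INTERP_DELAY = 0.1  -- seconds the renderer stays behind real time

local function lerp(a, b, t) return a + (b - a) * t end

local function interpolateX(older, newer, now)
    local renderTime = now - INTERP_DELAY
    local t = (renderTime - older.time) / (newer.time - older.time)
    t = math.max(0, math.min(1, t))  -- clamp if we run out of snapshots
    return lerp(older.x, newer.x, t)
end

-- Two confirmed server snapshots, 50 ms apart:
local s1 = { time = 1.00, x = 100 }
local s2 = { time = 1.05, x = 104 }
print(interpolateX(s1, s2, 1.13))  --> 102.4, drawn with no guessing at all
```

The price of that small render delay is that everything you see is slightly old; the payoff is smooth motion as long as snapshots keep arriving.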
@Max:
Thanks for your answer. I think that for my purposes, changing this one thing would make the experience a lot 'smoother', because the way it is right now makes me think the whole game has crashed. Letting me move the mouse etc., with a warning that the connection is failing, would make me feel more secure. What are the reasons for your decision?
@BJHBnade_spammer
I know that, and there's no problem for me.
The question was: How is the netcode related to the graphics code?
That's not a bad idea, at least when the server has not sent a single update packet in the past second.
When the network is having a .... problem, then the client can no longer walk around. That's pretty simple.
But it still doesn't answer the original poster's question: why does client-side FPS suffer when the network is having trouble? I think I know why, so here goes.
With the current public beta build (189), the game updates its 2D rasterized frames (frame rendering, i.e. FPS) at the same time it updates the game world state. This means that if the game world state is not updating (i.e. the server is not sending the client world updates), then the client framerate will go down, because the game updates the graphics frame at the same rate the world itself updates.
What this means is: if the world is running at 10 updates per second, the client will experience 10 frames per second.
But I also remember from an interview that the graphics updating is being separated from the world update rate (and if this is not true, get to work and make it true). That would mean that regardless of the world update rate, your framerate will be the maximum your computer can handle, which will also greatly help with mouse lag issues.
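The usual way to do that separation is a fixed-timestep loop, something like this sketch (generic and illustrative, not Spark's actual main loop):

```lua
-- Generic fixed-timestep loop (illustrative, not Spark's actual main loop):
-- the world simulates at a fixed rate while rendering runs as often as the
-- machine allows, so FPS is no longer pinned to the world update rate.
local WORLD_DT = 1 / 20              -- world updates 20 times per second

local function updateWorld(dt) end   -- stub: advance the simulation one step
local function render(alpha) end     -- stub: draw, blending by alpha

local accumulator = 0
local previous = os.clock()
while true do                        -- the client's main loop
    local now = os.clock()
    accumulator = accumulator + (now - previous)
    previous = now

    -- Step the simulation zero or more times to catch up to real time.
    while accumulator >= WORLD_DT do
        updateWorld(WORLD_DT)
        accumulator = accumulator - WORLD_DT
    end

    -- Render every pass, blending between the last two world states.
    render(accumulator / WORLD_DT)
end
```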
Well, that's all for my possibly incorrect information posting.
It would just introduce mad rubberbanding:
A packet is lost, the player keeps on moving, updated packets arrive, and the server is like: dude, by everything I know you should be over there. Bam, the client resets your position to the spot the server thinks you were at when the packets got lost.
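Roughly like this sketch (made-up numbers, purely illustrative):

```lua
-- Why free movement during packet loss snaps back (made-up numbers).
-- The server is authoritative: when its correction finally arrives, the
-- client teleports to wherever the server last saw it.
local clientPos = 100  -- the client kept predicting forward during the outage
local serverPos = 40   -- the server stopped receiving input at this position

local function onServerCorrection(authoritativePos)
    -- No smoothing: the 60-unit disagreement is applied instantly,
    -- which the player experiences as being yanked backwards.
    clientPos = authoritativePos
end

onServerCorrection(serverPos)
print(clientPos)  --> 40: "bam", you are back where the server thinks you were
```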
When the client gets into this state (it has buffered the maximum number of frames without getting any confirmation from the server), it's supposed to display a message on the screen saying there are connection problems. If that's not happening, it's a bug.
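Presumably something along these lines (hypothetical names and cap, not Spark's actual API):

```lua
-- Hypothetical sketch of the buffering cap described above (not Spark's API):
-- the client predicts ahead for a bounded number of frames; past the cap
-- with no server confirmation, it warns instead of predicting further.
local MAX_UNACKED_FRAMES = 64
local unackedFrames = 0

local function showConnectionWarning()  -- stand-in for the on-screen message
    print("Connection problems...")
end

local function onClientFrame()
    unackedFrames = unackedFrames + 1
    if unackedFrames > MAX_UNACKED_FRAMES then
        showConnectionWarning()
        return false  -- stop advancing prediction
    end
    return true       -- keep predicting this frame
end

local function onServerUpdate()
    unackedFrames = 0  -- any confirmation resets the buffer
end
```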
By the time a server can deliver a constant tick rate of 25 this will not be important at all, but in the current state, where the tick rate fluctuates a lot, you will get massive rubber banding with every slowdown if you let the players move. Once the tick rate is stable it would be a good idea to let the player move freely and independently of tick rate, but with the fluctuation we have now this would just make the entire experience a lot less smooth.
The graphics are rendered independently from the network as it is; movement is not a part of the graphics. If the graphics were dependent on the network and your FPS were directly tied to your tick rate, then you wouldn't even be able to look around every time the tick rate goes haywire.
Having the "client stop moving" is fine, but don't bottleneck a player's FPS, which is what has been happening for some time now. I play with 40+ FPS drops because the server is slowing down, this obviously is a mistake.
QUOTE: The graphics are rendered independently from the network as it is; movement is not a part of the graphics. If the graphics were dependent on the network and your FPS were directly tied to your tick rate, then you wouldn't even be able to look around every time the tick rate goes haywire.
No no, I said when no update packet of any kind has been received for the past second (or even two). You can have a tick rate as low as 5 and packet loss on top of that without issues, but not receiving anything of any kind for a whole second (or two) is indicative of a serious problem, and warrants freezing the game to notify the user that it is not functioning normally. It may be that some servers out there have this problem (not sending anything for seconds), in which case I would love for my client to freeze up so I'll immediately know something is very wrong, and that I should either check my connection or move to another server. I don't want it to try to interpolate its arse out of this, resulting in a crappy game; call it quality control. I guess this would also be the right time to mention that a net_graph diagram (à la GoldSource/Source) would be very much appreciated.
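In sketch form, the hard timeout I'm proposing would look something like this (hypothetical names, not Spark's API):

```lua
-- The hard timeout proposed above (hypothetical names, not Spark's API):
-- tolerate low tick rates and packet loss, but if *nothing* arrives for a
-- second or two, freeze the game and tell the player.
local TIMEOUT = 2.0        -- seconds of total silence before freezing
local lastPacketTime = 0.0

local function onPacketReceived(now)
    lastPacketTime = now   -- any packet of any kind resets the timer
end

local function shouldFreeze(now)
    return (now - lastPacketTime) > TIMEOUT
end

onPacketReceived(10.0)
print(shouldFreeze(10.5))  --> false: a slow tick rate is fine, keep playing
print(shouldFreeze(12.5))  --> true: freeze and show the warning
```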
QUOTE (Kalabalana): Having the "client stop moving" is fine, but don't bottleneck a player's FPS, which is what has been happening for some time now. I play with 40+ FPS drops because the server is slowing down; this is obviously a mistake.
Didn't this have something to do with the fact that the client needs to do more work (by way of interpolating/extrapolating) when the server is sending less information? Given that Lua is having a hard time as it is, adding additional prediction work on top of that will kill your framerate. So yeah, this is pretty much exactly like GoldSource, except that we do not have the CPU time to spare for interpolation/extrapolation work, resulting in bad performance.
No, this is incorrect. Or it is correct, and this game was somehow coded in this manner (highly unlikely).
Prediction algorithms like this do not increase the workload this much. 100+% increases in workload are not caused this way.
QUOTE: Prediction algorithms like this do not increase the workload this much. 100+% increases in workload are not caused this way.
Game logic shouldn't bog down 4-5 GHz CPUs, yet at the moment it does, so I wouldn't put it past NS2 that prediction/interpolation really does weigh this heavily on the game.
Exactly.
Making the game feature/mechanic-complete so they can start optimizing it as a whole?
No, that's not how it works. Optimization is an ongoing process. They're doing it now, and if all goes well, they'll be doing it long after "release."