Limit lag compensation to 300 ms


Comments

  • bERt0r Join Date: 2005-03-23 Member: 46181 Members
    edited October 2014
    The current HL1 netcode is from 2001. Are you telling me they copied NS2 in 2001? Rather, it's the other way around; I remember a dev posting that the Source and NS2 netcode are similar.

    As xDragon mentioned, the server's "movement" calculation can take up to 50% of its performance, so it was rather obvious to me that if you want to gain any performance, this would be a good place to look. For example, excluding grenades or cysts from lag compensation could boost server performance and appease the people who cry about the nades.

    It is also possible, for example, to "accelerate" projectiles so they shift from the attacker's reality to the other clients' realities faster and become easier to dodge: http://www.ra.is/unlagged/walkthroughs.html#PN
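    To illustrate the idea, here is a rough sketch in Python (not actual Spark/NS2 code; the class names and the history structure are made up): the server simply skips the rewind for entity classes whose hit registration does not need to be latency-perfect.

        from dataclasses import dataclass, field

        NO_REWIND_CLASSES = {"Grenade", "Cyst"}          # hypothetical class names

        @dataclass
        class Entity:
            class_name: str
            pos: float                                   # 1-D position, just for illustration
            history: dict = field(default_factory=dict)  # recorded time -> position

        def rewind_for_shot(entities, target_time):
            """Temporarily move rewindable entities back to their recorded positions."""
            saved = []
            for ent in entities:
                if ent.class_name in NO_REWIND_CLASSES:
                    continue                             # skipped: no history lookup, no restore
                if target_time in ent.history:
                    saved.append((ent, ent.pos))
                    ent.pos = ent.history[target_time]
            return saved                                 # restore these after the hit test

        def restore(saved):
            for ent, old_pos in saved:
                ent.pos = old_pos

        skulk = Entity("Skulk", pos=10.0, history={0.8: 7.5})
        cyst = Entity("Cyst", pos=3.0, history={0.8: 2.0})
        saved = rewind_for_shot([skulk, cyst], target_time=0.8)
        print(skulk.pos, cyst.pos)                       # 7.5 3.0 -> the cyst was never rewound
        restore(saved)
        print(skulk.pos)                                 # 10.0 -> world restored after the hit test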
  • DC_Darkling Join Date: 2003-07-10 Member: 18068 Members, Constellation, Squad Five Blue, Squad Five Silver
    I never stated they copied NS2.
    I tried to state that NS2 does it rather similarly to HL2.
    And I also tried to state that if so many think the HL1/NS1 netcode was superior, then why would Valve redo it for HL2?
  • bERt0r Join Date: 2005-03-23 Member: 46181 Members
    I'm not sure there have been that many changes to the netcode. The paper I linked from 2001 is still referenced in the article about Source's lag compensation.
  • biz Join Date: 2012-11-05 Member: 167386 Members
    Counter-Strike is basically about getting headshots, with all players using hitscan weapons;
    the entire netcode is designed around that precise need.

    In case the point of the thread wasn't clear, my claim is that copying Valve's netcode decisions is a poor fit for NS2's gameplay.
    Whether NS2 copied it well or efficiently is kind of a different topic.
  • xDragon Join Date: 2012-04-04 Member: 149948 Members, NS2 Playtester, Squad Five Gold, NS2 Map Tester, Reinforced - Shadow
    I had typed up a long explanation of why a Quake-style max lag compensation setup doesn't work well for NS2, but realized it's completely pointless. If you can't see how bad an impact a max lag compensation of 80 ms would have on NS2's current playerbase, there's no point in me explaining it.
  • Lastdon Join Date: 2012-06-29 Member: 153767 Members
    biz wrote: »
    looking for a way to cap lag compensation

    It doesn't help that the game is about twice as laggy as the pings would indicate. There's definitely some extra latency in the netcode making things worse.

    Biz, was your question answered? I mean, is there a way for server admins to cap it? If your question wasn't answered, maybe you should shoot someone at UWE (Charlie) an email to see if they will let you look into this and come up with a solution. Having more than one person look at the code and possibly find a solution for this problem will go a long way. The CDT is not UWE, even though they seem to be the only ones engaged with the community.

    I know UWE is working on Subnautica, but they should still engage with the community; otherwise, why would people want to spread the word about any of their other projects? This topic goes deeper than just NS2: it speaks directly to the Spark engine, which another company is now using to develop a standalone Combat. This issue will drag that game down as well.

    The sheer number of views on this topic demonstrates the issue.

    Oh yeah, NS1 wasn't Source but rather the GoldSrc engine.
  • bERt0r Join Date: 2005-03-23 Member: 46181 Members
    @xDragon So you decided to spam the thread instead? If you can't explain it, maybe you are the one at fault. The ignorance is strong with you.

    Letting server admins set the cap would also be an option, although I doubt it would have a positive effect on performance.
  • biz Join Date: 2012-11-05 Member: 167386 Members
    Lastdon wrote: »
    Biz, was your question answered? I mean, is there a way for server admins to cap it?

    I don't know whether server admins have a way to cap this, whether they will have a way to cap it in the future, or how I can find out what the cap is on a server I'm connecting to.

    That's why I made the thread.

    Based on various posts, interpolation settings seem to be customizable.
    That's nice to know, but it doesn't address lag compensation.

    xDragon wrote: »
    I had typed up a long explanation of why a Quake-style max lag compensation setup doesn't work well for NS2, but realized it's completely pointless. If you can't see how bad an impact a max lag compensation of 80 ms would have on NS2's current playerbase, there's no point in me explaining it.

    I reference Quake more for the netcode implementation example and less for the exact value of the limit.

    You might have reasons why 80 ms is bad. Maybe some of them are valid. That's why I think it should be a customizable setting.

    But does anyone have any real, unbiased reasons why a higher limit would be bad? Or why grenades shouldn't have a lower limit?

    Right now you can connect to a good nearby server and take more than half a second of damage before being alerted, and a lot of people have expressed frustration with that.

    For those players, a limit of 400 ms would immediately improve the game. Even if 150 ms of that gets eaten by extra NS2 server latency, it still gives perfect accuracy to anyone at 250 ping and below (see the sketch at the end of this post).

    I just don't see why normal players need to get the shaft on all servers just because some people want to pretend to play competitively at 300 ping.
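    To make that concrete, here is a minimal sketch in Python of what such a cap could look like (not actual Spark/NS2 code; the names and the 150 ms extra-latency figure are just illustrative):

        MAX_REWIND = 0.400                  # proposed admin-settable cap, in seconds

        def rewind_time(ping, extra_server_latency=0.150):
            """How far back the server rewinds hitboxes for this client's shot
            (the interp correction is left out to keep the example short)."""
            return min(ping + extra_server_latency, MAX_REWIND)

        # Anyone at 250 ms ping or below is still fully compensated; only laggier
        # clients would have to start leading their targets.
        print(rewind_time(0.250))           # 0.4 -> exactly at the cap, fully compensated
        print(rewind_time(0.350))           # 0.4 -> capped, ~0.1 s of lag is left uncompensated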
  • DC_Darkling Join Date: 2003-07-10 Member: 18068 Members, Constellation, Squad Five Blue, Squad Five Silver
    I want to add some details to this conversation, since they seem to have been overlooked so far.

    People always complain, whether they are correct or not, about two issues with NS2:
    * The responsiveness.
    * I see blood but no damage.


    I want to focus on the second one.
    As you can all read under the 'Lag Compensation' section of the Source networking wiki (the link is pasted later in this thread), the example there uses 100 ms of latency plus the interp period.
    Interp for Source is, like in NS2, 100 ms, as stated earlier, giving us the same 200 ms as in that Source example.
    If you read on, it clearly states that in Source there are often small variations between the hitbox on the server and the predicted client-side one. (In NS2 the blood is likewise client-side, while the damage numbers are server-side.)

    Not knowing how different NS2 truly is from the example given, it still raises one big point to think about: if it is even reasonably similar, that means, as in Source, that the NS2 server hitbox will differ from the client's at various times.
    But unlike Source's box-shaped hitboxes, which pretty much cover the whole model, NS2 has a far more accurate hitbox fitted to the model itself. So if a skulk's leg is shifted on the server compared to the client, that's a guaranteed miss.
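    For reference, the arithmetic from that wiki example, worked through in Python (a sketch of the Source-style formula with the numbers used above, not NS2's actual code):

        server_time = 10.000      # arbitrary "now" on the server, in seconds
        latency     = 0.100       # the client's packet latency in the example
        interp      = 0.100       # client view interpolation period (the default discussed here)

        command_execution_time = server_time - latency - interp
        print(server_time - command_execution_time)   # 0.2 -> hitboxes are rewound 200 ms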

    So what to do?
    Let me quote what Valve (with their big research budget and all) say themselves on that wiki (I'm assuming for the moment that they wrote it; I do not know):
    Network latencies and lag compensation can create paradoxes that seem illogical compared to the real world. For example, you can be hit by an attacker you can't even see anymore because you already took cover. What happened is that the server moved your player hitboxes back in time, where you were still exposed to your attacker. This inconsistency problem can't be solved in general because of the relatively slow packet speeds. In the real world, you don't notice this problem because light (the packets) travels so fast and you and everybody around you sees the same world as it is right now.

    Just wanted to share that detail.
  • bERt0r Join Date: 2005-03-23 Member: 46181 Members
    edited October 2014
    I agree with your post but I think you misunderstood one thing:
    When the Source netcode paper talks about prediction, it means executing the client's own commands immediately on the local machine, so you don't have a noticeable delay when you move forward, for example, because otherwise you would have to wait for the server to check whether the movement is allowed.
    The location of other players and entities is determined by interpolation. Every client sees the server's reality 100 ms in the past, regardless of ping. This is so your client has two states of a player (the vector of his movement and his animation) and can make a fluid transition between them. Valve considered it impossible to predict (or extrapolate) a player's movement, at least if you want it to look good; players zig-zag all the time and generally try to be unpredictable. These 100 ms are important for displaying the movement of enemy players in a smooth, realistic, normal-looking way. But don't worry, IronHorse didn't get this either.
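    A minimal sketch of that interpolation in Python (illustrative only, not Spark/NS2 code): the client renders other players about 100 ms in the past so it always has a snapshot on each side of the render time to blend between.

        INTERP = 0.100   # seconds the client renders "in the past"

        def lerp(a, b, t):
            return a + (b - a) * t

        def interpolated_position(snapshots, client_time):
            """snapshots: list of (server_time, position), oldest first."""
            render_time = client_time - INTERP
            for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
                if t0 <= render_time <= t1:
                    frac = (render_time - t0) / (t1 - t0)
                    return lerp(p0, p1, frac)
            return snapshots[-1][1]          # out of data: hold the newest snapshot

        # Snapshots arrive every 50 ms; at client time 1.025 the enemy is drawn where
        # it was at 0.925, halfway between the 0.900 and 0.950 snapshots.
        snaps = [(0.850, 2.0), (0.900, 2.5), (0.950, 3.5)]
        print(interpolated_position(snaps, 1.025))   # 3.0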
  • biz Join Date: 2012-11-05 Member: 167386 Members
    For example, you can be hit by an attacker you can't even see anymore because you already took cover. What happened is that the server moved your player hitboxes back in time, where you were still exposed to your attacker. This inconsistency problem can't be solved in general because of the relatively slow packet speeds. In the real world, you don't notice this problem because light (the packets) travels so fast and you and everybody around you sees the same world as it is right now.

    See sv_maxunlag.

    It basically limits the amount of inconsistency between hitbox positions and player positions.
    (The comparatively minor inconsistency due to interp is a different issue.)

    Valve keeps it extremely high because:
    1) CS has snail-paced movement, so the physical distance (perceived inconsistency) between your hitboxes and your model is still fairly low.
    2) Combat in CS is largely sight-based: whoever sees the other guy first wins. There isn't really any dodging, or running away from or towards an opponent that moves at a different speed from you. There isn't a prolonged back-and-forth of multiple volleys and medpack infusions.

    Nobody worries about it (or even knows about it...) because:
    1) It's the same for both teams (hitscan vs. hitscan), the game is symmetrical, and the TTK is so low that it doesn't affect decision-making in 99% of situations.
    2) Engagements/rounds are decided by aim + positioning + reaction times. Nobody is going to lose 10 minutes' worth of resources because of lag or delayed notifications about incoming damage.
    3) CS is so popular that people don't have a reason to join faraway servers.
  • DC_Darkling Join Date: 2003-07-10 Member: 18068 Members, Constellation, Squad Five Blue, Squad Five Silver
    @bERt0r
    Well, I actually DID get it. You are speaking about input prediction.
    I was not. You will notice I'm basically restating what the page itself says about the hitboxes.
  • bERt0r Join Date: 2005-03-23 Member: 46181 Members
    I just wanted to make sure what we are talking about. I think what you describe only happens when packets get dropped; UDP is an unreliable protocol. Were you talking about this:
    If more than one snapshot in a row is dropped, interpolation can't work perfectly because it runs out of snapshots in the history buffer. In that case the renderer uses extrapolation (cl_extrapolate 1) and tries a simple linear extrapolation of entities based on their known history so far. The extrapolation is done only for 0.25 seconds of packet loss (cl_extrapolate_amount), since the prediction errors would become too big after that.

    Entity interpolation causes a constant view "lag" of 100 milliseconds by default (cl_interp 0.1), even if you're playing on a listenserver (server and client on the same machine). This doesn't mean you have to lead your aiming when shooting at other players since the server-side lag compensation knows about client entity interpolation and corrects this error.
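    A rough sketch of the extrapolation fallback from the first quote (illustrative Python, not engine code): when the snapshot buffer runs dry, the client extends the last known motion in a straight line, but only up to a limit.

        EXTRAPOLATE_LIMIT = 0.25   # seconds, after which the prediction error gets too big

        def extrapolated_position(last_two_snapshots, render_time):
            """last_two_snapshots: ((t0, p0), (t1, p1)), with (t1, p1) the newest."""
            (t0, p0), (t1, p1) = last_two_snapshots
            dt = min(render_time - t1, EXTRAPOLATE_LIMIT)   # never guess further ahead than the limit
            velocity = (p1 - p0) / (t1 - t0)
            return p1 + velocity * dt

        # The last two snapshots were 50 ms apart, then packets stop: 100 ms later the
        # entity is drawn where straight-line motion would have taken it.
        print(extrapolated_position(((0.900, 2.5), (0.950, 3.5)), render_time=1.050))   # 5.5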
  • DC_Darkling Join Date: 2003-07-10 Member: 18068 Members, Constellation, Squad Five Blue, Squad Five Silver
    I was not.
    It is LITERALLY this part:
    https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking#Lag_compensation

    They even supply a screenshot with both the client and server hitboxes to show that they differ.
    I am NOT talking about mere interp or mere input prediction.

    I am repeating the part which states that if you see an enemy player on your screen, your client hitbox can and probably will differ from the server's.

    * The server calculates its hitboxes with all the info in its own time, without any delay, as its own time is the baseline.
    * The client starts with a 200 ms difference in hitbox position, caused by 100 ms of latency + 100 ms of interp.
    * The client registers a hit.
    * The server, knowing clients work 'in the past', calculates where the hitbox was.
    * The server finishes calculating the old hitbox location, and it ALMOST matches the location the client reported. The small variation can be caused by the server having more info, more accurate info, or whatever else produces the calculation difference; it's not fully specified in the link, there is just a SLIGHT difference.

    * Now imagine that a slight hitbox difference in a slow Source game becomes a big difference in a game that is not slow.
    * Also remember that hitboxes in NS2 are dead accurate, not the boxes of older games, which allowed an area around the model to count as a hit (see the sketch at the end of this post).


    If I am still not making sense, I can only suggest reading the part of the link I pasted at the start of this post. It has it all.
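    To illustrate those last two bullets with made-up numbers (a Python sketch, not NS2's actual hit detection): the same small server/client discrepancy that a padded box hitbox absorbs becomes a clean miss against a tight, model-accurate hitbox.

        def hits(shot_x, hitbox_center_x, half_width):
            """1-D hit test: does the shot land inside the hitbox?"""
            return abs(shot_x - hitbox_center_x) <= half_width

        client_leg_x = 1.00     # where the client saw (and shot) the skulk's leg
        server_leg_x = 1.12     # where the server's rewound hitbox actually was (~12 cm off)

        print(hits(client_leg_x, server_leg_x, half_width=0.25))   # True  -> padded box still registers
        print(hits(client_leg_x, server_leg_x, half_width=0.05))   # False -> tight hitbox: blood, but no damage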
  • IronHorse Developer, QA Manager, Technical Support & contributor Join Date: 2010-05-08 Member: 71669 Members, Super Administrators, Forum Admins, Forum Moderators, NS2 Developer, NS2 Playtester, Squad Five Blue, Subnautica Playtester, Subnautica PT Lead, Pistachionauts
    Moved to Ideas and Suggestions and corrected the title of your thread to reflect the purpose of it.

    Since this is not about actually "fixing" the netcode, or about improving NS2's netcode to be more like other fast competitive games (QL being the one exception), it belongs in the I&S subforum.
  • devel Join Date: 2014-09-13 Member: 198444 Members
    biz wrote: »
    2) Combat in CS is largely sight-based: whoever sees the other guy first wins.

    And in NS2 it's "whoever sees the other guy first loses", because if you camp in order to be first, you will actually be second (and die instantly).
  • GhoulofGSG9 Join Date: 2013-03-31 Member: 184566 Members, Super Administrators, Forum Admins, Forum Moderators, NS2 Developer, NS2 Playtester, Squad Five Blue, Squad Five Silver, Reinforced - Supporter, WC 2013 - Supporter, Pistachionauts
    edited October 2014
    nizb0ag wrote: »
    Ok let's clean up a few things here:
    xDragon wrote: »
    Source lets clients choose their settings, usually clamped between server-enforced max/mins. NS2 forces all clients to use what the server sets.

    First of all, that's not true: NS2 clients can also set the interpolation rate and move rate completely freely themselves, but they should not, as it doesn't take the server rates into account, and in my experience people tend to mess up those settings, especially in NS2 (no, it won't help you to set the interpolation rate to 40 ms; it will only make it useless).

    Please enlighten us; there is nothing in any configs other than the NS2Gamerules + server Lua files, which have bwlimit, interp, mr, sendrate, etc. So how are you changing it on the client side?

    It would benefit us all if the CDT released client-side changes, as we know they are already working on them, but made it easier to adjust updaterate/cmd/rate etc.

    If you look at this, you should realize that I already answered your question:
    Benson wrote: »
    Regarding server rate variables, I had heard there was a plan to set up "premade" settings that could be adjusted based on server hardware... is that even possible given the variety of server hardware out there?

    You are correct:

    Around the time we unlocked the network variables, there were plans to release a set of recommended settings to the public. You can build such sets by assuming a certain setup for most NS2 servers.

    But a small fraction of CDT members were against releasing them, and after a really long internal debate that plan was dropped in favour of the decision I will explain later in this post.

    Overall, releasing such a set might be good as a general guideline for server admins on how to modify network settings. But there are more reasons against doing so, and why this plan was finally dropped:
    1. Due to the variance of NS2 server setups we currently see, it's pretty hard to come up with a set that would cover most of them (because of hardware differences and, more importantly, network setups).
    2. Admins would be tempted to "flip the switch" without actually understanding what that switch does, which might make the experience for most NS2 players even worse than it currently is.
    3. Most servers are already struggling with the current default values. Giving out values that need even more CPU power won't do any good. And given the lack of knowledge, we can assume most admins would resist using lower settings than the defaults, even if those would improve the user experience.

    Also, at the point when this discussion came up internally, we hadn't had enough time to test how modified network variables affect the gameplay overall. Today we know that at an update rate of around 60 you already notice some weird effects in-game, which indicates that certain parts of the game code were not written with the possibility in mind that the update rate might be a lot higher than 30.

    So the CDT decided not to release any details about the settings at all, and instead to just tell the community that the variables can now be chosen freely and that interested server admins should contact the CDT members who are well aware of how the Spark/NS2 network code works.

    By doing this, and adding the warning to the server browser, we made sure that the network variables are handled with the needed care and that admins tweaking them are fully aware of what they are doing.

    This was all done with the goal of improving the overall NS2 user experience, not making it worse.

    A good example here: for servers that currently run into a CPU limit in the late game, it might be a good idea to lower the tick rate. Once you are aware that the tick rate only controls the A.I. update rate, and that the player movement update rate is only 20 by default, this makes total sense and can lower the CPU load without players even noticing while playing.

    But most NS2 players, without reading about it and understanding how NS2 networking actually works, will just declare you an idiot if you suggest lowering the tick rate...

    Just looking at some posts in this thread (no offense) tells me that some people still don't understand how things work. :(

    Both interpolation and move rate are client-side settings. And all those things can only be set directly via console commands addressing the engine, not via any Lua function etc.

    But I won't publish the exact commands here, as we decided as a team not to, in order to avoid people messing around with those settings without knowing what they do.

    But I guess if you really want to find them, you will, with a bit of searching the web and maybe some trial and error.
  • turtsmcgurt Join Date: 2012-11-01 Member: 165456 Members, Reinforced - Supporter
    I thought build 267 had the server enforce mr/interp on clients?
  • meatmachine South England Join Date: 2013-01-06 Member: 177858 Members, NS2 Playtester, NS2 Map Tester, Reinforced - Shadow, WC 2013 - Supporter
    edited October 2014
    It's funny how people have problems with the netcode... I recently started playing lots of siege on a 22-man server, and boy, if anything it's a testament to how well this game handles movement and collision when everyone has a ping of ~150+.


    Interestingly, when I started playing siege, the lag (network and local/mouse lag) was unbearable; however, since they removed hydras and babblers from gorges (only the khamm can build hydras now), it actually runs completely fine 90% of the time.
  • coolitic Right behind you Join Date: 2013-04-02 Member: 184609 Members
    edited October 2014
    There is only one thing that will cause good NS2 servers to break down completely in terms of performance.

    32-man siege servers with maxentities turned on.
  • coolitic Right behind you Join Date: 2013-04-02 Member: 184609 Members
    TLDR niz.

    A little tip (and it's your choice whether to take it or not): try splitting your text into paragraphs and making it a bit shorter.
  • IronHorse Developer, QA Manager, Technical Support & contributor Join Date: 2010-05-08 Member: 71669 Members, Super Administrators, Forum Admins, Forum Moderators, NS2 Developer, NS2 Playtester, Squad Five Blue, Subnautica Playtester, Subnautica PT Lead, Pistachionauts
    What in the world makes you think we didn't thoroughly test rates for weeks on end??
    I don't know why you're presuming such a thing...

    And no, there's no way to redefine these cvars so there are fewer of them. It's not like renaming them consolidates or combines their uses. Their interdependencies are confusing to most users, even playtesters, and can cause harm/bugs if not configured exactly right.
    This is precisely why we decided not to release the commands publicly.
  • IronHorse Developer, QA Manager, Technical Support & contributor Join Date: 2010-05-08 Member: 71669 Members, Super Administrators, Forum Admins, Forum Moderators, NS2 Developer, NS2 Playtester, Squad Five Blue, Subnautica Playtester, Subnautica PT Lead, Pistachionauts
    @nizb0ag
    Uhhh... which release notes contained the server rate commands? I've just reviewed the past five and do not see them listed.
    All I found was "“interp”, “mr” and other server settings are now sent to clients when they join."
    Which obviously are not the actual cvars at all, nor is it the complete list of them.

    I somehow doubt you or anyone else was actually able to "reverse engineer" Spark; that's quite a claim to make, and I'd love to see evidence of it, unless you are using the term in a hyperbolic sense.

    That "fix" to separate the tickrate from the sendrate would be quite the fundamental change and undertaking, and I doubt would happen It's not a limiting factor by any means, either. The processing power required for the increased data would still be.


    And yeah, I am sort of glad you got to experience poor server rate settings... it's exactly what we were afraid would happen if people weren't directly told how in private, and even then only if they actually listened. Even in a few posts on these forums where someone took the time to explain (even from matso, who made the commands and understands this the most), people still confused or skipped over the important parts and misunderstood. Quite scary.

    I do agree that it would be nice to release these commands publicly, but only with a way to adequately measure your server's performance, which matso has been working on (tick rate is such a narrow metric to gauge by), and with a thorough explanation, which matso has already provided in here, IIRC.

    Hopefully, with the trifecta of improved server performance, adequate documentation, and proper tools to gauge and test with, we can finally release the settings publicly.
    This is all very off topic, though...