Audio improvements?


Comments

  • cooliticcoolitic Right behind you Join Date: 2013-04-02 Member: 184609Members
    What about Asus SupremeFX integrated audio? It has EMI shielding to keep the audio good.
  • antouantou France Join Date: 2016-07-24 Member: 220615Members
    Kouji_San wrote: »
    You know it is actually really sad, the fact that proper sound hardware and support in games is totally unappreciated in our time and the last thing that is tacked onto most people's new rigs... Even in this thread specifically about improving audio, I'm getting this vibe of "meh sound is good enuf on my hardware" :D I'm not even going into people playing with onboard motherboard chips, they simply don't know, they don't know what they're missing :o

    Regarding 3D audio with stereo output:

    Reconstructed 3D audio doesn't work for everyone. I found myself trying some company's (I forget the name) 3D-over-headphones algorithms at a professional broadcasting convention; my colleague said he was impressed with how accurate the positioning sounded to him, but I heard pretty much nothing but stereo.
    The engineer presenting the demo explained that, to achieve 3D positioning, they studied how sounds bounce off the average person's face and ears into their eardrums, by placing small microphones in a test subject's ears and studying the delay between each ear, phase effects, and other things I don't remember / didn't understand. :tongue: Then, to create 3D audio, they could replicate the transformations they measured and trick the brain into hearing the sound "coming from" a specific spot.
    The problem is that everybody looks different, so they had to tune their model to the average human head. So while it should work okay for most people, it will work perfectly for some and not so well for others.

    In my case, my ears stick out a bit from my head; maybe that's why it doesn't work well for me? Anyway, until I make my own model with tiny microphones in my ears, I'll never know what I'm missing :P
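The "delay between each ear" the engineer describes is the interaural time difference (ITD), one of the main cues these systems replicate. As a rough illustration (not from the post), the classic Woodworth spherical-head approximation shows why an "average head" assumption matters; the head radius below is the usual textbook average, exactly the kind of parameter a personalized HRTF would measure per listener:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

def interaural_time_difference(azimuth_deg, head_radius=0.0875):
    """Approximate ITD in seconds for a source at the given azimuth,
    using the Woodworth spherical-head model. head_radius (~8.75 cm)
    is the 'average human head' assumption; real HRTF capture, as in
    the demo described above, measures the individual listener."""
    theta = math.radians(azimuth_deg)
    return (head_radius / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A sound 90 degrees to one side reaches the near ear ~0.66 ms earlier;
# a sound dead ahead (0 degrees) produces no delay at all.
itd_side = interaural_time_difference(90)
itd_front = interaural_time_difference(0)
```

Ears that "stick out a bit" change these delays and reflections enough that a generic model can miss, which matches the experience described above.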

    Soul_Rider wrote: »
    For me, sound is more important than graphics in a game. A game with great sounds and bad visuals will keep me entertained longer than a game with great visuals and bad sound.
    I work with audio and video all the time, and yep, everyone at work seems to agree that a small visual glitch is acceptable, but the tiniest audio glitch is very distracting :)
  • Kouji_SanKouji_San Sr. Hινε Uρкεερεг - EUPT Deputy The Netherlands Join Date: 2003-05-13 Member: 16271Members, NS2 Playtester, Squad Five Blue
    edited November 2017
    @antou did you try the videos in my spoiler tag with headphones, especially the A3D demo?

    Also try this one, heck, it might work for you. It's a bit different, but the idea is the same: using two microphones to calculate the 3D positioning.

    https://youtube.com/watch?v=8IXm6SuUigI



    @McGlaspie, that explains the awful 3D positioning of the sounds, but I do believe it got worse with an NS2 version from about a year ago; it was the first thing I noticed when I started up NS2 again a few weeks back. Your post is also quite a good insight into why "sound" engines are vastly underestimated in terms of how much raw processing power they require. I mean, CPUs have become a lot faster compared to 1998-2000, yet in software simulation it still takes quite a few CPU cycles, even today...

    Thanks for explaining it btw, very interesting read!
  • moultanomoultano Creator of ns_shiva. Join Date: 2002-12-14 Member: 10806Members, NS1 Playtester, Contributor, Constellation, NS2 Playtester, Squad Five Blue, Reinforced - Shadow, WC 2013 - Gold, NS2 Community Developer, Pistachionauts
    edited November 2017
    @McGlaspie It seems like this sort of thing could be cached, as the path through the nav mesh to any far-away sound emitter changes very little from frame to frame. Things are either (1) in the room and updating quickly, or (2) not in the room, and reachable through one of a small number of exits that change slowly.

    Roughly speaking, you'd trace to everything in the "in room" cache (within n units on the nav mesh, or within LoS) on every frame. Everything not known to be in room (i.e. not in the "in room" cache) would be spread out so that 1/50th of it is handled each frame, giving each distant emitter an update every 50th frame. Things would only move from "not in room" to "in room" when processed at this delayed rate. Sounds would be played using the current data whenever they occur, which for far-away things could be stale by up to 50 frames.
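A minimal sketch of that two-tier scheme (hypothetical names, not actual engine code; the expensive visibility trace is injected as a plain function so the staggering logic stands alone):

```python
class SoundOcclusionCache:
    """Two-tier cache: emitters believed 'in room' are re-traced every
    frame; everything else is spread across PERIOD frames, so each
    distant emitter refreshes roughly once per PERIOD frames and may
    be up to PERIOD frames stale."""

    PERIOD = 50

    def __init__(self, emitters, is_in_room):
        self.emitters = list(emitters)
        self.is_in_room = is_in_room  # the expensive trace, injected
        self.in_room = set()
        self.frame = 0

    def update(self):
        # Cheap path: re-check everything currently believed in-room,
        # dropping emitters that are no longer reachable directly.
        self.in_room = {e for e in self.in_room if self.is_in_room(e)}
        # Deferred path: only the slice of "not in room" emitters whose
        # index matches this frame's slot gets the expensive trace.
        others = [e for e in self.emitters if e not in self.in_room]
        for i, e in enumerate(others):
            if i % self.PERIOD == self.frame % self.PERIOD:
                if self.is_in_room(e):
                    self.in_room.add(e)
        self.frame += 1
```

After a full period every emitter has been traced at least once; the trade-off is exactly the one named above, data up to 50 frames stale for far-away sources.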
  • McGlaspieMcGlaspie www.team156.com Join Date: 2010-07-26 Member: 73044Members, Super Administrators, Forum Admins, NS2 Developer, NS2 Playtester, Squad Five Blue, Squad Five Silver, Squad Five Gold, Reinforced - Onos, WC 2013 - Gold, Subnautica Playtester
    moultano wrote: »
    @McGlaspie It seems like this sort of thing could be cached, as the path through the nav mesh to any far-away sound emitter changes very little from frame to frame. Things are either (1) in the room and updating quickly, or (2) not in the room, and reachable through one of a small number of exits that change slowly.

    Roughly speaking, you'd trace to everything in the "in room" cache (within n units on the nav mesh, or within LoS) on every frame. Everything not known to be in room (i.e. not in the "in room" cache) would be spread out so that 1/50th of it is handled each frame, giving each distant emitter an update every 50th frame. Things would only move from "not in room" to "in room" when processed at this delayed rate. Sounds would be played using the current data whenever they occur, which for far-away things could be stale by up to 50 frames.

    The problem with a cached and/or deferred update approach is that the dB level of sounds would jump suddenly. The volume of a sound event would look something like (yay, ascii "graph"):
                   _____
              _____|
       ______|
    ___|
    
    Which would be pretty jarring. Also, FMOD doesn't provide a straightforward way to change the properties of an event dynamically (via API calls) at run-time. The entire library is designed so that the sound event definitions themselves dictate that behavior, not external code controlling FMOD. As a result, using cached data (external to FMOD) to augment the properties of an event isn't a viable option. And attempting to control Sound Events dynamically from outside FMOD has a huge performance cost (I suspect due to FMOD's internal caching and channel management routines). This issue is a big part of why my previous attempts at augmenting what we have were not performant.
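For what it's worth, the stepped ascii graph is exactly what you get by snapping the gain to each deferred value. A generic DSP-side illustration (not an FMOD call; the post above explains why FMOD events can't be driven this way externally) of smoothing toward a stale target instead of snapping to it:

```python
import math

def smooth_gain(current_db, target_db, dt, time_constant=0.1):
    """Exponentially approach the (possibly stale) target gain instead
    of jumping to it. time_constant is how long the glide takes to
    cover ~63% of the gap; 0.1 s is an arbitrary illustrative value."""
    alpha = 1.0 - math.exp(-dt / time_constant)
    return current_db + (target_db - current_db) * alpha

# Deferred update moves the target from -30 dB to -10 dB in one step;
# the applied gain glides there over a few hundred milliseconds.
gain = -30.0
for _ in range(20):               # 20 frames at ~16 ms each
    gain = smooth_gain(gain, -10.0, 0.016)
```

This only addresses the jarring steps, of course, not the deeper problem that FMOD wants the event definitions, rather than external code, to own this behavior.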
  • moultanomoultano Creator of ns_shiva. Join Date: 2002-12-14 Member: 10806Members, NS1 Playtester, Contributor, Constellation, NS2 Playtester, Squad Five Blue, Reinforced - Shadow, WC 2013 - Gold, NS2 Community Developer, Pistachionauts
    edited November 2017
    McGlaspie wrote: »
    moultano wrote: »
    @McGlaspie It seems like this sort of thing could be cached, as the path through the nav mesh to any far-away sound emitter changes very little from frame to frame. Things are either (1) in the room and updating quickly, or (2) not in the room, and reachable through one of a small number of exits that change slowly.

    Roughly speaking, you'd trace to everything in the "in room" cache (within n units on the nav mesh, or within LoS) on every frame. Everything not known to be in room (i.e. not in the "in room" cache) would be spread out so that 1/50th of it is handled each frame, giving each distant emitter an update every 50th frame. Things would only move from "not in room" to "in room" when processed at this delayed rate. Sounds would be played using the current data whenever they occur, which for far-away things could be stale by up to 50 frames.

    The problem with a cached and/or deferred update approach is that the dB level of sounds would jump suddenly. The volume of a sound event would look something like (yay, ascii "graph"):
                   _____
              _____|
       ______|
    ___|
    
    Which would be pretty jarring. Also, FMOD doesn't provide a straightforward way to change the properties of an event dynamically (via API calls) at run-time. The entire library is designed so that the sound event definitions themselves dictate that behavior, not external code controlling FMOD. As a result, using cached data (external to FMOD) to augment the properties of an event isn't a viable option. And attempting to control Sound Events dynamically from outside FMOD has a huge performance cost (I suspect due to FMOD's internal caching and channel management routines). This issue is a big part of why my previous attempts at augmenting what we have were not performant.

    I'm not sure what the granularity of sound events is, but what if you only used the new data for new events? More latency again, but no jumps mid-sound. How does this work currently? If you're getting closer to something mid-sound-event, does it actually get louder mid-sound-event?

    (Also, hopefully the deferred sounds would all be far away enough that their exact parameters (1) change slowly and (2) don't matter much.)
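The "only use new data for new events" idea amounts to latching the occlusion state once, at event start. A tiny sketch with hypothetical names (the real engine/FMOD interface is described above, and this sidesteps it entirely):

```python
class SoundEventInstance:
    """Snapshot the (possibly stale) occlusion state once, when the
    event starts, and never touch it mid-playback -- so the gain can't
    step mid-sound. gain values are arbitrary illustrative numbers."""

    OCCLUDED_GAIN_DB = -12.0
    CLEAR_GAIN_DB = 0.0

    def __init__(self, emitter_id, in_room_cache):
        # Latch whatever the cache believes right now; later cache
        # updates affect only events started after them.
        self.occluded = emitter_id not in in_room_cache
        self.gain_db = (self.OCCLUDED_GAIN_DB if self.occluded
                        else self.CLEAR_GAIN_DB)
```

The cost is exactly the extra latency mentioned: a long-running sound keeps its start-of-event parameters even if the listener walks right up to the emitter.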
  • NintendowsNintendows Join Date: 2016-11-07 Member: 223716Members, Squad Five Blue
    I created a new version of ns2_tram with some experimental occlusion geometry that seems to reduce the problem of hearing things from rooms far away: for example, from north tunnels/platform to mezzanine, from shipping to south tunnels, and from ore processing to warehouse, to name a few.

    It's a tedious job to add these "extra layers" of occlusion, but it shows that a mapper can use the current system and provide a little extra information to mitigate the occlusion problem. Perhaps in the future mappers could have the ability to set the "density" of a wall, or perhaps the engine could somehow measure the thickness of a wall to determine how much it muffles/occludes the sound.
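The "density"/thickness idea could reduce to something very simple: attenuate by how much solid material the listener-to-emitter ray passes through. A sketch with made-up numbers (the per-meter figure would effectively be the mapper-set material density; nothing here reflects Spark engine internals):

```python
def occlusion_attenuation_db(wall_thickness_m,
                             db_per_meter=30.0,
                             max_db=60.0):
    """Attenuation grows with the total thickness of solid geometry
    the listener->emitter ray crosses, capped so a mountain of rock
    doesn't drive the gain to -infinity. db_per_meter plays the role
    of the mapper-assigned 'density'; all constants are illustrative."""
    return min(wall_thickness_m * db_per_meter, max_db)

# A thin 0.2 m wall muffles by 6 dB; 3 m of rock hits the 60 dB cap.
thin_wall = occlusion_attenuation_db(0.2)
rock_mass = occlusion_attenuation_db(3.0)
```

The appeal is that the tedious hand-placed "extra layers" become a per-material number instead of extra geometry.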
  • antouantou France Join Date: 2016-07-24 Member: 220615Members
    Kouji_San wrote: »
    @antou did you try the videos in my spoiler tag with headphones, especially the A3D demo?!

    I just did:
    • HL: pretty much stereo to me
    • Quake: during the parts with A3D ON, I can hear some phasing effects, so something is happening... but I don't hear much difference in terms of positioning
    • A3D demos: I thought this one worked for me! Until I closed my eyes, and then without visual cues it was just stereo to my brain x)
    • Barber shop: this one I liked a lot! Maybe it's because the sounds are less abstract, or because I made up a scene in my mind, but it seems to have worked best.

    Also the guy in the HL and Quake videos is really, really bad :p
    Thanks for sharing the videos.
  • Kouji_SanKouji_San Sr. Hινε Uρкεερεг - EUPT Deputy The Netherlands Join Date: 2003-05-13 Member: 16271Members, NS2 Playtester, Squad Five Blue
    Well, it kinda goes to show that it takes the combination of visuals and audio to force your brain to brain :D And yeah, he was indeed very bad. Really rage-inducing lack of situational awareness.
  • RejZoRRejZoR Slovenia Join Date: 2013-09-24 Member: 188450Members, NS2 Playtester, Reinforced - Shadow
    So, to summarize all the posts: we have a game that ENORMOUSLY depends on visual and audio cues, but whose audio is essentially too broken to be fixed, like, ever. Well, that's joyful... The reason I made this thread is that despite superior hardware running the game, I'm experiencing sound skipping when a lot of action is happening. It's literally the same as when you exceed the number of sound buffers and the soundcard/game just starts truncating stuff to make space for new audio events. If it's purely CPU-based, then my soundcard plays no role here, which makes me question what's wrong with the audio queue and sound buffers. Are they too small? Is there a queue problem where the engine flushes events in the wrong order to make space for new ones? What is it? Surely something within the engine allows you to fiddle at least a bit. As a user, I have ZERO control over it other than volume, and the console doesn't offer anything either. But I'm certain that can't be the end of it. The game can't be THAT primitive, given I saw more advanced audio in System Shock 2 back in 1999 than in this...

    Secondly, if this were a game where sound is just decorative, whatever. But in a game where audio cues mean the difference between a dead marine and a dead alien, it's worth investing almost any amount of time. Broken audio in NS2 means broken gameplay, because the gameplay actively depends on it. And you don't want broken gameplay in a game where gameplay is literally above everything else (it's a team-based shooter with another layer of strategy, the commander, on top; a delicate and complex game system). This stuff matters, and it matters A LOT.