DX12
N@uralBornNoobist
Gorge-N-Freeman,2Gorges1Clog Join Date: 2012-12-24 Member: 176138Members
Will we see DX12 implemented?
Some random quotes.
"In theory, any DX11 graphics card should work with DX12 - Nvidia itself has confirmed that anything from the 'Fermi' 400 series onwards should work." - Richard Leadbetter, Eurogamer.net
Nvidia official blog: "DX12's focus is on enabling a dramatic increase in visual richness through a significant decrease in API-related CPU overhead."
Official DX blog: "What makes Direct3D 12 better? First and foremost, it provides a lower level of hardware abstraction than ever before, allowing games to significantly improve multithread scaling and CPU utilization. In addition, games will benefit from reduced GPU overhead via features such as descriptor tables and concise pipeline state objects. And that's not all - Direct3D 12 also introduces a set of new rendering pipeline features that will dramatically improve the efficiency of algorithms such as order-independent transparency, collision detection, and geometry culling."
In a tech demo I watched, the CPU load was spread basically 1:1 across all cores, whereas DX11 was nowhere near that.
Or... do I purchase a bar of gold and throw that at NS2 to make it work better?
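For a sense of what those "concise pipeline state objects" look like in practice, here is a minimal C++ sketch of creating one in D3D12. Treat it as illustrative only: device, rootSignature, vsBlob and psBlob are placeholder names for objects assumed to be created elsewhere, and nothing here is NS2/Spark-specific.

// In D3D12 the whole shader + fixed-function configuration is baked into one
// immutable pipeline state object up front, instead of being set piecemeal
// at draw time as in D3D11 - that's where part of the CPU saving comes from.
D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
psoDesc.pRootSignature = rootSignature;  // placeholder, created elsewhere
psoDesc.VS = { vsBlob->GetBufferPointer(), vsBlob->GetBufferSize() };  // compiled vertex shader
psoDesc.PS = { psBlob->GetBufferPointer(), psBlob->GetBufferSize() };  // compiled pixel shader
psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);  // helper structs from d3dx12.h
psoDesc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT);
psoDesc.SampleMask = UINT_MAX;
psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
psoDesc.NumRenderTargets = 1;
psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
psoDesc.SampleDesc.Count = 1;  // no MSAA in this sketch

ID3D12PipelineState* pso = nullptr;
HRESULT hr = device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pso));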
Comments
I'm not sure it would be worth the hassle of supporting it for minimal gains, or perhaps even performance loss and bugs...
You're probably right though - if it's using 2.5 cores then it won't make a blind bit of difference?
They are just releasing a new version of the software layer (called DirectX) that sits between the driver and the game.
Then, if a hardware manufacturer doesn't tweak its driver to expose the new features of its video card to that intermediate layer - it's screwed. On the other side - if the game developer doesn't change his game to talk to the new DirectX, then he's in trouble too.
So why the hell are we talking about Microsoft here? Is it a company that specializes in games? Or graphics? Or hardware?
Why does it decide when and how to release that thing?
Why, in these announcements, does it always take credit for the improved features and performance when we all know for sure that it's all happening thanks to the new hardware designs done by Intel/AMD/Nvidia?
So, DX is just a noisy little middleman that tries to make you believe that he is important and that he is in charge. And while you, the customer, believe that he is in charge, the hardware manufacturers and game developers remain unfree.
Thread has nothing to do with Vista.
Gosh.
Please never mention it again, it's one of those things nobody wants to remember. The dark times....
Deleting Derailing posts.
Please go back to Squabbling about DX12, peasants.
Minor rederailment: Wayland :x
That's a bit over-dramatic, and on a side note, inaccurate.
If game developers felt unfree and limited by DirectX, they could either write code to support each and every graphics card out there individually (which *might* be a bit of a problem for their budget) - or they could just switch to a different intermediate layer, such as OpenGL, which is exactly what some publishers do. So give the developers some credit and acknowledge that they had their reasons to go for D3D.
It seems even Blizzard, with a near-infinite budget, decided that a D3D version of World of Warcraft would be a good idea - even though they already have an OpenGL version which runs on both Mac and Windows.
~Luchs
And if it is indeed something similar to building the pyramids, would it be worth the hassle for the theoretical performance gains and such...
DX11 wasn't worth it, so I imagine no. What I'd like to know is whether it's even going to be worth the risk to use the upcoming D3D9Ex.
From a development standpoint (that's where I come from), it's not exactly open-heart surgery. You reference the new assemblies, might have to update some method signatures if you used some fancy (or obsolete) ones, and deal with an unpredictable amount of bugs where existing methods behave slightly differently in a way that affects your software.
The main issues are feature fallback and client requirements. Let's imagine DX12 introduces this awesome feature you desperately want. That leaves you with a few options - and questions:
- If you maintain a DX12 branch of your game, you're making this the minimum requirement for every gamer. This may not sound like a big deal, but it could cut off some old Windows versions on which DX12 will no longer be installable. Solution: Maintain 2 branches - at an extra cost. Is that one feature really worth it?
- DirectX is a framework for interfacing with GPU hardware - the framework wraps abstract features of a GPU, for which the manufacturers then write the translation to native code. That makes it very likely that 'feature XY' added in DX12 is not just a fancy computing algorithm or helper method, but something that translates down to a hardware feature of the GPU chipset. Which means: it may not be available. The GPU may not support it, and the fallback scenario may be even less appealing than just not implementing it at all. A rough sketch of such a capability check follows below.
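To illustrate: ID3D12Device::CheckFeatureSupport is the real API for this kind of check; the contents of the two branches here are hypothetical.

// Query the optional-feature caps of whatever GPU the player actually has.
D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                          &options, sizeof(options)))
    && options.ROVsSupported)
{
    // GPU supports rasterizer-ordered views (useful for e.g. order-independent
    // transparency) - enable the fancy path.
}
else
{
    // Feature missing on this GPU: write a fallback, or skip the feature -
    // and that decision is where the real development cost hides.
}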
~Luchs
Edit it next time damnit! =((
/slaps @AurOn2
*looksy in history*
DX info:
http://blogs.msdn.com/b/directx/archive/2014/03/20/directx-12.aspx
It contains links to Nvidia's and AMD's press releases at the bottom.
Adding support for a new version of DirectX is relatively painless, but there is no advantage. The new features that could make it faster can't be implemented automagically; that's much more in the open-heart-surgery category.
If there were a big advantage to be had from some hardware feature simply by a straight port to a newer version of DirectX, you would already be receiving that benefit in the current version, because the people at Nvidia and AMD are not stupid. As long as the result is within spec, AMD and Nvidia can implement features any which way they want on the hardware side; the driver development teams have the same access to the underlying hardware regardless of which version of DirectX you use. What differs is what is exposed to software developers, and how.
The only thing which could possibly help in a straight port is overhead in the driver and in DirectX.
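For what it's worth, that overhead reduction is tied to how work is submitted: D3D12 has the engine record command lists explicitly - potentially one per worker thread - and submit them in a batch, which is exactly the restructuring a straight port doesn't do. A hedged sketch, with cmdList, allocator, pso, rootSig, viewport, vertexCount and commandQueue all assumed to be set up elsewhere:

// Worker thread: record draw calls into this thread's own command list.
// D3D11's immediate context has no equivalent free lunch - the engine must
// be structured this way to see the multi-core benefit.
cmdList->Reset(allocator, pso);                // begin recording for this frame
cmdList->SetGraphicsRootSignature(rootSig);
cmdList->RSSetViewports(1, &viewport);
cmdList->DrawInstanced(vertexCount, 1, 0, 0);  // placeholder draw
cmdList->Close();                              // finish recording

// Main thread, once per frame: submit every recorded list in one call.
ID3D12CommandList* lists[] = { cmdList };
commandQueue->ExecuteCommandLists(_countof(lists), lists);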