Any plans for AMD Mantle support?


Comments

  • shonan Join Date: 2013-01-28 Member: 182562 Members, Reinforced - Shadow
    ArthurDent wrote: »
    Honestly, the graphics requirements for this game aren't that high, so I'm not sure why we would need a hardware-level API implementation. The only graphics thing that needs work is OpenGL, since it is currently really buggy and slow.

    It would reduce CPU overhead and leave more resources for non-graphics stuff.
  • 72U11 Spain Join Date: 2014-01-08 Member: 192810 Members
    edited January 2014
    The moment they decide to implement Mantle into the game will be the moment I donate to them, no doubt.

    I'm surprised the devs haven't said anything about Mantle.

  • coolitic Right behind you Join Date: 2013-04-02 Member: 184609 Members
    Why did you necro this post, 72U11?
  • Ghosthree3 Join Date: 2010-02-13 Member: 70557 Members, Reinforced - Supporter
    72U11 wrote: »
    I'm surprised the devs haven't said anything about Mantle.
    Why would you be surprised?
  • 72U11 Spain Join Date: 2014-01-08 Member: 192810 Members
    edited January 2014
    I get 2 responses and both are stupid questions. Okay.
  • NeXuS US Join Date: 2013-10-13 Member: 188681 Members, NS2 Playtester, Reinforced - Silver, Reinforced - Shadow, Subnautica Playtester
    edited January 2014
    72U11 wrote: »
    I get 2 responses and both are stupid questions. Okay.
    Because you replied to a post that was 2 months old. Hence Coolitic's term, Necro. Derp.
  • 72U11 Spain Join Date: 2014-01-08 Member: 192810 Members
    NeXuS wrote: »
    72U11 wrote: »
    I get 2 responses and both are stupid questions. Okay.
    Because you replied to a post that was 2 months old. Hence Coolitic's term, Necro. Derp.

    OMG, it was 2 months old! Older than my grandma!! Oh well, guess it's the death penalty for me. Forum mods ban me already, what are you waiting for?

    And Mantle isn't even out yet. You people make sense. But hey, as long as you get to say "Derp." it's all good.
  • 72U11 Spain Join Date: 2014-01-08 Member: 192810 Members
    edited January 2014
    And btw, that video I posted is 3 days old and it's the first video ever that shows Mantle actually running before its release, but hey, I guess the thread is 2 months old (or is it 1 month and x days? Cause that would make things soo different), so who cares about something so old now, right? Let's just talk about the Army getting plasma rifles instead.
  • shonan Join Date: 2013-01-28 Member: 182562 Members, Reinforced - Shadow
    edited January 2014
    NeXuS wrote: »
    72U11 wrote: »
    I get 2 responses and both are stupid questions. Okay.
    Because you replied to a post that was 2 months old. Hence Coolitic's term, Necro. Derp.

    So he should've made a duplicate thread then? You are just derailing/trolling.
  • NeXuS US Join Date: 2013-10-13 Member: 188681 Members, NS2 Playtester, Reinforced - Silver, Reinforced - Shadow, Subnautica Playtester
    Yes. Make a new thread, instead of necroing an old post where some users may not still be active. And btw, that is not the correct use of the abuse flag. FYI
  • IronHorse Developer, QA Manager, Technical Support & contributor Join Date: 2010-05-08 Member: 71669 Members, Super Administrators, Forum Admins, Forum Moderators, NS2 Developer, NS2 Playtester, Squad Five Blue, Subnautica Playtester, Subnautica PT Lead, Pistachionauts
    NeXuS wrote: »
    Yes. Make a new thread, instead of necroing an old post where some users may not still be active. And btw, that is not the correct use of the abuse flag. FYI
    ^ Exactly what this man said, he's correct on both counts. The previous poster called both responses stupid, yet you flag the individual explaining to him?

    But since we're here:
    Mantle isn't even released on any game yet (let alone BF4), so maybe we should wait for practical benchmarks from the public before even discussing it at all.
  • DarkLaunch357 Campinas, Brazil Join Date: 2013-09-01 Member: 187599 Members
    Kamamura wrote: »
    I bet against AMD Mantle; I do not think it has a chance to succeed.

    AMD is losing to Intel in the area of CPUs (where both may be losing to ARM long term). In the GPU arena, AMD is losing to NVidia. So this may be their management's shot to reverse the trend - to provide something unique yet useful and carve out a secure position on the market like 3Dfx once did.

    But there is a problem - 3Dfx succeeded because it was first to bring a huge innovation - their first card brought filtered textures and HW rendering, a huge performance and quality improvement. Yet the proprietary API finally became a hindrance - there were technological limits they could not pass due to the low-level API being too tied to the HW (no 32-bit color, etc.), and the trend was to move to a generic, HW-independent API that would simplify development. A performance boost may be nice, but I don't think it will outweigh the need to optimize and write the code for different graphics libraries - and Mantle being low level means you need intimate knowledge of the HW, similar to the ancient assembler programmers who wrote those amazing 8-bit games nobody thought would be possible to run. Few companies have the budget and know-how for that.

    The best outcome I can see is some guru like Carmack writing some new, revolutionary engine that will be able to really take advantage of the low-level optimizations, and other companies licensing it for their games. But I think you can forget about the average game dev company using it to its full potential. Plus, I don't believe the hype so much; I think the performance gains will be only slight, not decisive. What could be achieved, though, would be smoother, more balanced performance overall, which might be worth it.

    Finally someone who knows what Mantle actually is and how it actually works. I recently witnessed a FIERCE battle between a supporter and a denier of its benefits on facebook... it was pretty enlightening.

    I will try to keep as neutral as I can here, so fans of either side bear with me as I respect your stances as well.

    The whole marketing talk about it being closely tied to the GCN architecture conveniently forgets that AMD will not use GCN forever... which makes this low-level API pretty much completely and totally useless in the long term unless it is constantly updated so that ALL pieces of software that use it can easily understand all sorts of supported low-level hardware; rumor has it that it is open source (and therefore should support NVIDIA processors as well). What intrigues me is that we have not seen a single piece of Mantle outside of AMD's partners and tied developers, which makes it dubious at best in the eyes of enthusiasts like us whether they are really willing to extensively develop an API that works with both their own and their competitor's hardware designs, and to maintain and update it... forever.

    To me, GCN as we know it will meet its end in the Hawaii GPU. Hawaii's density is beyond what I consider adequate for its lithography and architecture, much like the original GF100 (which I have years of experience with). It is boiling hot and consumes an unreal amount of power simply because the GCN architecture was obviously, despite claims, not designed from the ground up to be a GK110-sized monstrous die. It will not get better with process maturation because it is already a late-game 28nm part.

    As much as I hate to say it, things look bad for AMD right now. Their console/SoC business is what will get them places... ironically thanks to the console deal Jen-Hsun Huang rejected from Sony as an extremely low-margin, not-worth-bothering-with license; 400 million in a decade isn't exactly the sort of profit a company the size of NVIDIA seeks.

    The Bulldozer roadmap is an utter failure. Bulldozer could not beat the K10 architecture, now six years old, in IPC; AMD gamers often had to put their Phenom II X4/X6 processors back in to get the best gaming performance because the FX-8150 simply couldn't handle it (take a look at this, it's the reason my AMD box still has a 1090T... http://www.xtremesystems.org/forums/showthread.php?276002-Compare-1090T-and-FX-8150 ). By the time they fixed the mess with Piledriver Vishera processors, Ivy Bridge-DT was out and obliterating it, then shortly after Haswell came out, they resorted to factory overclocking that design, creating the shame of all AMD fans, the FX-9590, initially sold for $900 (now found for $370), that leveraged its 8 cores, 5 GHz clock speed and 220W of TDP (hint: draw is higher) to barely scrape what the 84W 4770K can do (and lose at single threaded) at stock, at the expense of any overclockability and extremely high power, cooling and motherboard requirements. That led many people to say that AMD had lost it, but after a simple price drop and a few words here and there, all of a sudden it's a great option. Marketing, it seems, works wonders on the masses.

    The APUs are about to find fierce competition from Intel; the Haswell SoC is the embryo of what they could do by uniting the superior architecture with adequate GPU power. There are very few titles that will not run on HD Graphics 4600 at 720p/medium settings, and Crystalwell, despite being cost-ineffective, already provides a Haswell Core i7 with 128MB of L4 plus Iris Pro graphics with enough GFLOPS to face a small Kepler dGPU like a GT 640M, enabling medium-quality 1440p gaming far beyond what the next-generation consoles are capable of. The low-power/handheld market faces brutal competition from ARM designs manufactured by NVIDIA (Tegra), Qualcomm (Snapdragon) and Samsung (Exynos), their professional graphics market share is extremely low (it is basically NVIDIA Quadro territory), and the Radeon department faces ever fiercer competition from GeForce, which offers power efficiency and speed in a feature-rich gaming environment. It's going so well they started adding certain niche products just to spite AMD (like the GTX Titan, being semi-professional and DP-capable; you know a card I'd buy from AMD? A prosumer-oriented 8 GB R9 290X variant with uncapped compute speeds, just to race my own Titan) and adding goodies like ShadowPlay that leverage the Kepler built-in encoder, allowing users to record and broadcast with nearly no impact on rendering performance whatsoever.

    My first computer had an AMD processor... it was a K6 266. Not too bad for its time, it kept up with the Pentium MMX if it didn't outright beat it, and I sincerely wish the best for AMD so I can -proudly- say I own an AMD rig again, without fanboying or shilling. I'm currently looking into importing an AMD FX-9590 for the 990FXA-UD3 motherboard that has been lying in my drawer pretty much ever since I bought it.

    (Off-topic, but if you own an "unsupported" laptop with Kepler graphics (GT 650M+), to enable ShadowPlay create a shortcut to GFExperience.exe and add -shadowplay after the path; it will work just fine.)
  • Soul_Rider Mod Bean Join Date: 2004-06-19 Member: 29388 Members, Constellation, Squad Five Blue
    edited January 2014
    I am sorry @DarkLaunch357, but your post comes across purely as nVidia fanboyism, and is full of inaccuracies, suggesting you haven't looked at any official statements made by AMD regarding what Mantle is.
    The whole marketing talk about it being closely tied to the GCN architecture conveniently forgets that AMD will not use GCN forever... which makes this low-level API pretty much completely and totally useless in the long term unless it is constantly updated so that ALL pieces of software that use it can easily understand all sorts of supported low-level hardware; rumor has it that it is open source (and therefore should support NVIDIA processors as well).

    This shows a lack of understanding of what Mantle is. It is not a full low-level hardware API, but a thin abstraction layer just above the hardware. This means it should be transferable (while designed first for GCN, of course). Here are the words directly from AMD regarding this subject:
    While Graphics Core Next is the "hardware foundation" for Mantle, AMD's Guennadi Riguer and some of the other Mantle luminaries at APU13 made it clear that the API is by no means tied down to GCN hardware. Some of Mantle's features are targeted at GCN, but others are generic. "We don't want to paint ourselves in a corner," Riguer explained. "What we would like to do with Mantle is to have [the] ability to innovate on future graphics architectures for years to come, and possibly even enable our competitors to run Mantle." Jurjen Katsman of Nixxes was even bolder in his assessment, stating, "There's nothing that I can see from my perspective that stops [Mantle] from running on pretty much any hardware out there that is somewhat recent."
    - from http://techreport.com/review/25683/delving-deeper-into-amd-mantle-api
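
    To make the "thin abstraction" point concrete, here is a minimal C++ sketch of the explicit command-buffer model that low-overhead APIs in Mantle's vein expose. Every type and function name here is invented for illustration (Mantle's real headers were never fully published); the thing to notice is that recording a draw is just a cheap append, and the expensive driver work happens once per submitted buffer instead of once per draw call, which is where the CPU overhead savings come from.

    #include <cstdio>
    #include <vector>

    // Hypothetical, simplified types -- not real Mantle code.
    struct DrawCall { int mesh; int material; };

    struct CommandBuffer {
        std::vector<DrawCall> commands;
        // Recording a draw is a plain memory append: no driver validation,
        // no kernel transition, and it can happen on any worker thread.
        void draw(int mesh, int material) { commands.push_back({mesh, material}); }
    };

    struct Queue {
        void submit(const CommandBuffer& cb) {
            // One expensive submission for the whole batch, rather than
            // per-draw overhead as in a thick API like D3D9/11.
            std::printf("submitted %zu draws in one call\n", cb.commands.size());
        }
    };

    int main() {
        CommandBuffer cb;
        for (int i = 0; i < 1000; ++i)   // record many draws cheaply
            cb.draw(i, i % 8);
        Queue{}.submit(cb);              // single cheap hand-off to the driver
        return 0;
    }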
    by the time they fixed the mess with Piledriver Vishera processors, Ivy Bridge-DT was out and obliterating it, then shortly after Haswell came out, they resorted to factory overclocking that design, creating the shame of all AMD fans, the FX-9590, initially sold for $900 (now found for $370), that leveraged its 8 cores, 5 GHz clock speed and 220W of TDP (hint: draw is higher) to barely scrape what the 84W 4770K can do (and lose at single threaded) at stock, at the expense of any overclockability and extremely high power, cooling and motherboard requirements

    How anything to do with AMD processors (APUs yes, processors, no) is relevant to this discussion I will never know, but once again you are wrong on points you have made. AMD's TDP is the maximum Thermal Design Power, the amount of heat that the chip will have to dissipate, whereas the Intel measurement uses SDP, Scenario Design Power; the TDP of an Intel chip is normally around 2x its SDP figure.

    Your comparison between 220W and 84W shows exactly why AMD are switching to the same metric as Intel in the future, because so many apparently knowledgeable people don't know the difference and have the wool pulled over their eyes.
    If you were reading our CES coverage last week, you'll know that among other things Intel introduced a handful of new, lower-power Y-series Ivy Bridge CPUs designed to fit into thinner and lighter Ultrabooks and tablets. The slides in Intel's keynote called these "7 watt" Ivy Bridge CPUs, and the company compared them directly to the 17 watt U-series chips in wide use today.

    After talking to some Intel reps and doing some sleuthing of our own, we found the direct comparison wasn't quite warranted. The actual thermal design power (TDP) of those processors was in fact 13 watts—still lower than before, but less miraculous than had previously been implied. A new measurement, scenario design power (SDP), was actually being used to achieve that 7 watt figure.

    There was one particular element of that write-up that was not entirely accurate: based on our conversations with Intel reps, we thought SDP was purely a marketing ploy, a measurement of the amount of power the processor would use on average. It turns out there is an element of marketing to these new 7 watt CPUs, but there's a technical element, too—Intel is simply giving a name to and publicizing a measurement previously left behind-the-scenes. We talked with an Intel engineer to get a better explanation.
    - from http://arstechnica.com/gadgets/2013/01/the-technical-details-behind-intels-7-watt-ivy-bridge-cpus/

    I have an Intel/nVidia combination, before you yell AMD fanboy at me...

    It has taken me around an hour to make this post, and I can't be bothered to make it any more detailed, supported and confirmed, so I will leave it where it is.

    If you are going to post facts and base your opinions on them, it is wise to make sure your 'facts' are as accurate as possible first.
  • NeXuS US Join Date: 2013-10-13 Member: 188681 Members, NS2 Playtester, Reinforced - Silver, Reinforced - Shadow, Subnautica Playtester
    edited January 2014
    Soul, I know DarkLaunch. You just opened a can of worms. Wrong or right, you just started WW3 man. Prepare for battle!
  • DarkLaunch357 Campinas, Brazil Join Date: 2013-09-01 Member: 187599 Members
    edited January 2014
    Soul_Rider wrote: »
    This shows a lack of understanding of what Mantle is. It is not a full low-level hardware API, but a thin abstraction layer just above the hardware. This means it should be transferable (while designed first for GCN, of course). Here are the words directly from AMD regarding this subject:
    While Graphics Core Next is the "hardware foundation" for Mantle, AMD's Guennadi Riguer and some of the other Mantle luminaries at APU13 made it clear that the API is by no means tied down to GCN hardware. Some of Mantle's features are targeted at GCN, but others are generic. "We don't want to paint ourselves in a corner," Riguer explained. "What we would like to do with Mantle is to have [the] ability to innovate on future graphics architectures for years to come, and possibly even enable our competitors to run Mantle." Jurjen Katsman of Nixxes was even bolder in his assessment, stating, "There's nothing that I can see from my perspective that stops [Mantle] from running on pretty much any hardware out there that is somewhat recent."
    - from http://techreport.com/review/25683/delving-deeper-into-amd-mantle-api
    Hence why I mentioned "(and therefore should support NVIDIA processors as well)", but since it is low-level to any degree, even if it's middleware, it must understand the hardware it is working on, no? Every low-level API needs to work closely with the hardware it is tied to, and adding new architectures will require costly maintenance.

    [attached image: WMWKLkn.png]

    You see, compute strength I've got plenty of, but a low-level API would do me absolutely no good if it only knew how to work with an R9 290X, for example.
    Soul_Rider wrote: »
    How anything to do with AMD processors (APUs yes, processors, no) is relevant to this discussion I will never know, but once again you are wrong on points you have made. AMD's TDP is the maximum Thermal Design Power, the amount of heat that the chip will have to dissipate, whereas the Intel measurement uses SDP, Scenario Design Power, which is what the chip will draw in 'ideal' scenarios; the TDP of an Intel chip is normally around 2x its SDP figure.

    Your comparison between 220W and 84W shows exactly why AMD are switching to the same metric as Intel in the future, because so many apparently knowledgeable people don't know the difference and have the wool pulled over their eyes.

    If you were reading our CES coverage last week, you'll know that among other things Intel introduced a handful of new, lower-power Y-series Ivy Bridge CPUs designed to fit into thinner and lighter Ultrabooks and tablets. The slides in Intel's keynote called these "7 watt" Ivy Bridge CPUs, and the company compared them directly to the 17 watt U-series chips in wide use today.

    After talking to some Intel reps and doing some sleuthing of our own, we found the direct comparison wasn't quite warranted. The actual thermal design power (TDP) of those processors was in fact 13 watts—still lower than before, but less miraculous than had previously been implied. A new measurement, scenario design power (SDP), was actually being used to achieve that 7 watt figure.

    There was one particular element of that write-up that was not entirely accurate: based on our conversations with Intel reps, we thought SDP was purely a marketing ploy, a measurement of the amount of power the processor would use on average. It turns out there is an element of marketing to these new 7 watt CPUs, but there's a technical element, too—Intel is simply giving a name to and publicizing a measurement previously left behind-the-scenes. We talked with an Intel engineer to get a better explanation.
    This wasn't related to Mantle itself, but to AMD's financial future and whole lineup. What will they fund this development with? They are about to enter uncharted waters and their only backing so far is the very small margins on the gaming consoles. I own both Intel and AMD CPUs, and I can tell you one thing: the Athlon II X4 and the Phenom II X6 can easily eat as much power as the first-generation Core i7 processors, and while they run a lot cooler, their thermal constraints are also significantly lower (62°C maximum operating temperature, really?). SDP figures have only been published for the low-power Intel lineup, and while that is a fact, it doesn't apply to the desktop parts announced with their TDP target. However, you also need to take into consideration that Kabini and Jaguar are so slow they cannot be compared to Ivy Bridge at all, they are a lot closer to the Atom lineup (which they admittedly beat, and beat hard), and Haswell, being a complete SoC, mildly increased the processor wattage but has much lower idle power because it needs no more chips on the system. The accurate comparison to the Ivy Bridge ULV parts would be an AMD A8 APU, which last I checked is rated at 35W TDP, or more than twice even the 7W SDP doubled (14W).
    by the time they fixed the mess with Piledriver Vishera processors, Ivy Bridge-DT was out and obliterating it, then shortly after Haswell came out, they resorted to factory overclocking that design, creating the shame of all AMD fans, the FX-9590, initially sold for $900 (now found for $370), that leveraged its 8 cores, 5 GHz clock speed and 220W of TDP (hint: draw is higher) to barely scrape what the 84W 4770K can do (and lose at single threaded) at stock, at the expense of any overclockability and extremely high power, cooling and motherboard requirements
    My 4770K at 4.7 GHz can make do with 111W of power, measured under HyperPI (8 instances of SuperPI) 8M on all 8 threads. That is simply incredible for what it provides me.

    [attached image: GBG6LWg.png]

    I also mentioned processors because Mantle heavily touts multi-core scaling, and there are rumors (which I did not and could not verify) that it will use AMD-only CPU instructions to help achieve its speedup; if proven true, that could be a much-needed blow to Intel. As you can see, my CPU outperforms a stock FX-8150 2 to 1 in arithmetic calculation, and is 80% faster than the FX-8350, with half the power and half the cores :)

    [attached image: T9VuIBw.png]
  • Soul_Rider Mod Bean Join Date: 2004-06-19 Member: 29388 Members, Constellation, Squad Five Blue
    Don't get me wrong, I am not knocking any of the manufacturers here, I believe they all have different benefits to offer the industry :)

    And if you want a comparison to Ivy Bridge ULV, I think this should make you realise that again you have been fooled by numbers:

    Intel Core i3/i5/i7 - Up to 1.5 GHz at 13 W (Core i7 3689Y)
    AMD A - Up to 1.7 GHz at 19 W (A8-5545M)

    So you see AMD has a higher clock speed, for a higher wattage; although it is still higher in W/GHz, it is nowhere near the 38W you mentioned above. Now we all know that Intel CPUs do much more work per cycle etc., but everything you have posted has been 'over-exaggerating' the brilliance of Intel/nVidia, or 'under-stating' the work of AMD.
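
    For what it's worth, the watts-per-gigahertz gap works out like this (a quick arithmetic check using only the two figures quoted above, nothing more):

    #include <cstdio>

    int main() {
        double intel = 13.0 / 1.5;   // Core i7-3689Y: 13 W at 1.5 GHz
        double amd   = 19.0 / 1.7;   // A8-5545M: 19 W at 1.7 GHz
        std::printf("Intel: %.1f W/GHz, AMD: %.1f W/GHz\n", intel, amd);   // ~8.7 vs ~11.2
        return 0;
    }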

    The truth is Intel and nVidia will always try to use their market-share superiority to keep AMD a relatively small player. AMD has made some great moves and had some great inventions over the years, and arguably they could have severely dented Intel a good few years ago if it weren't for dirty tricks etc.

    AMD are a smaller company than those big boys, but that also allows them to be more adaptive, flexible and willing to take risks. AMD are exactly what the market needs, and everyone had better hope and pray they never go away, or we might as well kiss our PC enthusiast ways goodbye, because Intel and nVidia would kill it completely.


  • DarkLaunch357 Campinas, Brazil Join Date: 2013-09-01 Member: 187599 Members
    edited January 2014
    Soul_Rider wrote: »
    Don't get me wrong, I am not knocking any of the manufacturers here, I believe they all have different benefits to offer the industry :)

    And if you want a comparison to Ivy Bridge ULV, I think this should make you realise that again you have been fooled by numbers:

    Intel Core i3/i5/i7 - Up to 1.5 GHz at 13 W (Core i7 3689Y)
    AMD A - Up to 1.7 GHz at 19 W (A8-5545M)

    So you see AMD has a higher clock speed, for a higher wattage; although it is still higher in W/GHz, it is nowhere near the 38W you mentioned above. Now we all know that Intel CPUs do much more work per cycle etc., but everything you have posted has been 'over-exaggerating' the brilliance of Intel/nVidia, or 'under-stating' the work of AMD.

    The truth is Intel and nVidia will always try to use their market-share superiority to keep AMD a relatively small player. AMD has made some great moves and had some great inventions over the years, and arguably they could have severely dented Intel a good few years ago if it weren't for dirty tricks etc.

    AMD are a smaller company than those big boys, but that also allows them to be more adaptive, flexible and willing to take risks. AMD are exactly what the market needs, and everyone had better hope and pray they never go away, or we might as well kiss our PC enthusiast ways goodbye, because Intel and nVidia would kill it completely.


    I mentioned 35W, and yes, I agree with you; this is why AMD needs to start innovating again. But GHz are not, and have never been, everything. My Titan at 837 MHz stock absolutely obliterates my old GTX 480 at the same clock speed, and my 4770K will make quick work of even my old 990X processor when both are clocked at, say, 4.5 GHz. Those are the joys of architectural innovation, something AMD is a century behind on in computing-world time.

    Ivy Bridge at 1.5 GHz is a whole lot faster than Llano, Richland or anything AMD has right now at the same clock speed, and this is why clock speed alone just doesn't matter lol.

    Shame on Intel for lying about the SDP thing though, but /pol/ was right again.
  • Soul_Rider Mod Bean Join Date: 2004-06-19 Member: 29388 Members, Constellation, Squad Five Blue
    edited January 2014
    I agree totally that Intel and nVidia have the superior weaponry in their arsenal at the moment, and I already mentioned that Intel chips do more work per cycle than AMD, but you are saying AMD are bad without taking into consideration that AMD will release their first 28nm chips tomorrow, their previous being 32nm, whereas Intel are on their 2nd round of 22nm processors. Things like this also need to be taken into consideration.

    Technology-wise, things are not as dire for AMD as you seem to imagine. They have started mapping out 20nm and 14nm ahead of Intel, so if they can catch up with Intel slightly on the processes, the performance and power advantages Intel hold will be greatly reduced. In fact, when you look at low power devices, and compare the processes used, AMD are doing much better than Intel currently on both power usage and performance at 32nm.

    This doesn't necessarily hold true on the Desktop front, but when you compare AMD 32nm 1st GEN FX-8150 with Intel 32nm 1st Gen - Nehalem based i7-970 series, things are a lot closer than currently, and this bodes well for future die shrinks.

    As I said previously, there is space for all the companies, and there is a huge benefit to them all being successful.

    Oh, and I didn't mention it earlier, but Iris is behind the AMD integrated graphics solution. I would be more worried if Intel made a move to have nVidia integrated GPUs on its die, rather than producing its own. They still need a few generations to catch up to AMD on that front, and if it wasn't for AMD, that front wouldn't even exist, so I think they are being fairly innovative anyway...

    /ontopic

    I really think Mantle could work, and it has already been shown to provide massive benefits, not least taking CPU load off the devices running it, as there is no bloated software API like DirectX or OpenGL to run through. This alone provides a massive boost for AMD APU gaming performance versus non-Mantle devices, and the same goes for their processors, although the benefits would also extend to those running Intel processors with discrete AMD cards as well.

    Edit---

    Your comment -
    But GHz are not, and have never been, everything.
    shows you are a young'un, as in the old days AMD and Intel used the same architecture, so GHz was everything, but because AMD could always get more GHz, Intel moved the goalposts again and started going on about work per cycle... :)

  • 72U11 Spain Join Date: 2014-01-08 Member: 192810 Members
    edited January 2014
    IronHorse wrote: »
    NeXuS wrote: »
    Yes. Make a new thread, instead of necroing an old post where some users may not still be active. And btw, that is not the correct use of the abuse flag. FYI
    ^ Exactly what this man said, he's correct on both counts. The previous poster called both responses stupid, yet you flag the individual explaining to him?

    But since we're here:
    Mantle isn't even released on any game yet (let alone BF4), so maybe we should wait for practical benchmarks from the public before even discussing it at all.

    Sir, your flawed logic is no longer needed.

    Thank you for your time.


    My my, loads to read after I "necro'd" the thread (dear Lord, I beg for forgiveness one more time). Better get to it! :D
  • IronHorse Developer, QA Manager, Technical Support & contributor Join Date: 2010-05-08 Member: 71669 Members, Super Administrators, Forum Admins, Forum Moderators, NS2 Developer, NS2 Playtester, Squad Five Blue, Subnautica Playtester, Subnautica PT Lead, Pistachionauts
    Not logic in this case... ruuuulllleesss.... ahhhh
    But, I invite whatever knowledge you have on symbolic logic in this case. It fuels me. Rawr.
  • 72U11 Spain Join Date: 2014-01-08 Member: 192810 Members
    edited January 2014
    *Snip*

    I need to learn to be nicer.

  • DarkLaunch357 Campinas, Brazil Join Date: 2013-09-01 Member: 187599 Members
    edited January 2014
    Soul_Rider wrote: »
    I agree totally that Intel and nVidia have the superior weaponry in their arsenal at the moment, and I already mentioned that Intel chips do more work per cycle than AMD, but you are saying AMD are bad without taking into consideration that AMD will release their first 28nm chips tomorrow, their previous being 32nm, whereas Intel are on their 2nd round of 22nm processors. Things like this also need to be taken into consideration.

    Technology-wise, things are not as dire for AMD as you seem to imagine. They have started mapping out 20nm and 14nm ahead of Intel, so if they can catch up with Intel slightly on the processes, the performance and power advantages Intel hold will be greatly reduced. In fact, when you look at low power devices, and compare the processes used, AMD are doing much better than Intel currently on both power usage and performance at 32nm.

    This doesn't necessarily hold true on the Desktop front, but when you compare AMD 32nm 1st GEN FX-8150 with Intel 32nm 1st Gen - Nehalem based i7-970 series, things are a lot closer than currently, and this bodes well for future die shrinks.

    As I said previously, there is space for all the companies, and there is a huge benefit to them all being successful.

    Oh, and I didn't mention it earlier, but Iris is behind the AMD integrated graphics solution. I would be more worried if Intel made a move to have nVidia integrated GPUs on its die, rather than producing its own. They still need a few generations to catch up to AMD on that front, and if it wasn't for AMD, that front wouldn't even exist, so I think they are being fairly innovative anyway...

    /ontopic

    I really think Mantle could work, and it has already been shown to provide massive benefits, not least taking CPU load off the devices running it, as there is no bloated software API like DirectX or OpenGL to run through. This alone provides a massive boost for AMD APU gaming performance versus non-Mantle devices, and the same goes for their processors, although the benefits would also extend to those running Intel processors with discrete AMD cards as well.

    Edit---

    Your comment -
    But GHz are not, and have never been, everything.
    shows you are a young'un, as in the old days AMD and Intel used the same architecture, so GHz was everything, but because AMD could always get more GHz, Intel moved the goalposts again and started going on about work per cycle... :)

    Back in the day, ~2005, the Athlon 64 thrashed the Pentium 4, which has the same design flaws Bulldozer has, like the long pipelines etc. ;) But that's what, 9-10 years ago. That's why it's "never", hehe.

    The problem here is just one: AMD will be releasing a 28nm processor while Intel will already be releasing a 14nm part. 32nm Intel processors like Westmere and Sandy Bridge are now considered obsolete, all going into EOIS/End of Life; they're all being retired and discontinued save for OEM parts that they had very large supplies of. The i7-2600K and the i5-2500K are listed as Retired and Discontinued, End of Life, in the ARK.

    http://ark.intel.com/products/52214/Intel-Core-i7-2600K-Processor-8M-Cache-up-to-3_80-GHz

    22nm technology on the Intel side is two years old, and that's where things start to go off by quite a bit. I know lithography isn't everything, but if AMD's latest and greatest still can't quite beat even an i7-970 (a CPU released four years ago, and two cores short), that is a sad comparison, you gotta agree with me here. Sure, the 970 was released at a $562 bulk price, but it's a retired and discontinued, four-generations-old processor. And it probably released at that price because the HEDT market goes completely unchallenged as it is. Sure, I'd take an FX-9590 over an i5-2500K (not a 2600K tho...), but hey, I have a 4770K in my daily driver box. My brother has a 2600K (clocked at 4.8 too) and his CPU is more than fast enough to take on any AMD box I've seen so far. Man, look at my Queens score: at 2x the clock speed my CPU can beat 32 Opteron Interlagos (Bulldozer-based) cores in math speed, that is just whack, man.

    My point with all that is just one: it is economically unwise to buy an AMD processor, because they are a LOT worse than an Intel processor in the same price range. That has already started to happen and will only worsen with time; consumers care about having great hardware that just works, without having to worry about high power usage and the exceptionally high temperatures arising from such draw, enthusiasts care only about speed, and AMD is quickly beginning to lose ground in these areas. If they don't release a VERY NICE Steamroller processor by the time Broadwell starts circulating, that slap might be just what it takes for AMD to lose the battle, maybe for good. I just hope they do some miracle with Steamroller and bring it to at least Haswell performance somehow, as unlikely as that seems; they just can't and won't survive on the money their loyal fans bring in.

    Back on topic, the only thing I can say now is that Mantle would indeed be nice, but only if done correctly and if they actually deliver here, with substantial improvement over DirectX on the Radeon, GeForce and Iris Pro platforms alike. If AMD delivers, they'll have a much-needed boost to their reputation in my book. On paper, Mantle looks marvelous; whether it can be efficiently applied, updated and maintained is a whole different story. To this day, all we have seen are delays and hype, hype and more hype.
  • RejZoR Slovenia Join Date: 2013-09-24 Member: 188450 Members, NS2 Playtester, Reinforced - Shadow
    AMD Mantle is by any definition meant to be "lower level than Direct3D", not necessarily the lowest level there is...

    Now that Battlefield 4 is getting actual access to AMD Mantle and AMD is releasing Mantle-powered drivers, I'd really love to hear where UWE stands on this matter. Clearly there is potential, and considering how popular the HD 7000 cards were and how popular the current R series is, no one can say it's pointless.
  • elodea Editlodea Join Date: 2009-06-20 Member: 67877 Members, Reinforced - Shadow
    edited February 2014
    RejZoR wrote: »
    ...
    I'd really love to hear where UWE stands on this matter.
    ...
    I don't understand why people are still asking for this. It's been clear for the longest time that UWE doesn't have the manpower for Mantle support to make any sense as a benefit-to-time-cost decision.

    As for people saying NS2 isn't 'rendering intensive' in terms of hardware required, you couldn't be further from the truth. Go run around Biodome with r_stats.
  • Soul_Rider Mod Bean Join Date: 2004-06-19 Member: 29388 Members, Constellation, Squad Five Blue
    All Mantle does is reduce the CPU overhead that comes with using a software API like DirectX. It reduces CPU load. If you are CPU-limited, then you will see a benefit from Mantle; if you are GPU-limited, you won't see a whole lot of point.

    Essentially, people with AMD CPUs, or very old Intel CPUs, will see the biggest benefit from using it. People running modern CPUs are unlikely to see a huge benefit, as they will tend to be more GPU-limited. That being said, if you run CrossFire at high resolutions, your CPU is likely to be the issue, so again you are likely to see quite an improvement.
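
    A minimal sketch of what "CPU-limited vs GPU-limited" means in practice: time how long the CPU spends building and submitting a frame against the total frame time. If submission dominates, a lower-overhead API like Mantle has something to win; if the frame is mostly spent waiting on the GPU, it doesn't. The two work functions below are stand-ins just so the example runs; they are not engine code.

    #include <chrono>
    #include <cstdio>
    #include <thread>

    using Clock = std::chrono::steady_clock;

    // Placeholders for real engine work (purely illustrative timings).
    void buildAndSubmitDrawCalls() { std::this_thread::sleep_for(std::chrono::milliseconds(9)); }
    void waitForGpuAndPresent()    { std::this_thread::sleep_for(std::chrono::milliseconds(3)); }

    static double ms(Clock::time_point a, Clock::time_point b) {
        return std::chrono::duration<double, std::milli>(b - a).count();
    }

    int main() {
        auto frameStart = Clock::now();
        buildAndSubmitDrawCalls();        // CPU-side API/driver overhead lives here
        auto submitEnd = Clock::now();
        waitForGpuAndPresent();           // time dominated by the GPU shows up here
        auto frameEnd = Clock::now();

        double cpuMs   = ms(frameStart, submitEnd);
        double totalMs = ms(frameStart, frameEnd);
        std::printf("CPU submit %.1f ms of %.1f ms frame -> %s-limited\n",
                    cpuMs, totalMs, cpuMs > 0.5 * totalMs ? "CPU" : "GPU");
        return 0;
    }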
  • Nordic Long term camping in Kodiak Join Date: 2012-05-13 Member: 151995 Members, NS2 Playtester, NS2 Map Tester, Reinforced - Supporter, Reinforced - Silver, Reinforced - Shadow
    I could see Mantle being useful to a small select few for now. If it ever matures and actually gains some traction, it might be worth looking into. Even on my 2500K @ 4.8 GHz with my 7970, I sometimes become CPU-limited during end game and intense fights, dropping me to 60-ish fps. That's acceptable to me, but Mantle would remove that. Then there are the people with lower-end hardware who would get a lot out of it.
  • uke Join Date: 2013-07-01 Member: 185840 Members
    edited February 2014
    So I'm starting a fresh new thread on it, asking for Mantle support.

    I'm for Mantle support, as I belong to the minority sticking with AMD, which is ~33% as a number but still thousands of people who would benefit from it.
    Not just because they'd get performance improvements, but also because they could save money instead of buying new hardware for $200-400 (which, for the people considering an upgrade, is often quite a bit of money).
    I'd rather spend my money directly on supporting UWE instead of getting myself some new parts; that's cheaper for me and better for UWE. At this point the nvidia faction would also benefit.
    Furthermore, they would benefit from the publicity of being one of the first games to implement it, which will also be -you guessed it- beneficial for all of us.

    I just tested Mantle with Oxide's stress test and it looks really promising, as the scenario looks similar to the late-game performance drops in NS2.

    From what I read, it looks like there's a bit of envy in the posts denying us (no offense) Mantle support, if it's not that much effort to implement. I mean, they also implemented DX9, DX11 and OpenGL, where OpenGL alone could have done the job for all of us and saved the devs the time spent implementing the others. I think at least trying to implement it as soon as it's usable would be a fair trade.

    ---

    Summing up the main arguments* from the old thread:

    -Not worth the time and effort. [1][1.1][2]
    As I said, people would save money, which they could instead spend directly on the reinforcement program for instance, and it is good publicity for NS2 and UWE. But not mentioned is the most important aspect (for me at least): it improves the playability and fun for everyone, because with low fps you can't hit anything, and on the other side your enemies won't get weaker from decreasing performance if that could be fixed with Mantle. In numbers, it could improve the game for potentially 33% of players (the AMD users; of course not all of them can enable Mantle, but as I said, they are not the only ones who profit from it).

    -Mantle would only be worth it if the Spark engine were being developed for the new consoles. [3][4]
    [and some more, I think]
    I'd agree if Mantle were only provided to console users, but Mantle is also available on PC.

    -Don't need it because I'm an nvidia user. [2]
    Makes me a bit sad, as it's a kind of egoism and feeds my theory about the envy.
    lwf wrote: »
    Let us at least wait until the first game with Mantle support and the first graphics card with Mantle support is out...
    (knowing about the age of the post)
    -supports HD 7xxx and the R9 series
    -BF4: >40% better performance (it's not the only video proving it)
    HeatSurge wrote: »
    [...]
    I'm sad CUDA and ShysiX haven't disappeared yet. And I've had nvidia cards for 5+ years. CUDA and ShysiX add NOTHING that couldn't have been done with OpenCL, except exclusivity and probably some SDK tools.
    -CUDA cores are stream processors, and AMD also has stream processors - just not called CUDA
    -and PhysX seems to me like it's programmed to run badly (if it runs at all) on anything other than nvidia, because after a patch for Borderlands 2 I could use it with my AMD card

    [1][1.1]: Ghosthree3
    [2]: BeigeAlert
    [3]: SmashIT
    [4]: GhoulofGSG9

    tl;dr:
    Read it. If it doesn't interest you, you are on the wrong thread!

    Also check: https://en.wikipedia.org/wiki/Mantle_(API)

    *as far as I caught them
  • Omega_K2 Join Date: 2011-12-25 Member: 139013 Members, Reinforced - Shadow
    Regarding "time and effort"
    I don't see why this should be prioritized over anything that affects *MORE* users and other issues, i.e. why not work on a more parallelized ("multi-threaded") portion of NS2, so everyone with a multicore processor benefits from it, especially those with 4+ cores and hyperthreading-capable processors.

    Or use processor features (enabling certain instruction sets like SSE), and what about *optimized* DLL builds for specific processor architectures (and possibly a loading system that loads the optimized versions)? These can yield improvements with just a compiler switch, but are subject to testing anyway (whether it is worth it); there is a rough sketch of both ideas at the end of this post.

    Or work on the other numerous really annoying issues NS2 has, like mouse lag, net compensation, etc.?
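
    For completeness, here is a rough sketch (plain C++, invented names, not Spark engine code) of the kind of change described above: splitting an embarrassingly parallel per-entity update across all hardware threads. The instruction-set half is mostly a build setting rather than code, e.g. GCC/Clang -msse4.1 or -mavx, or MSVC /arch:SSE2 or /arch:AVX, applied to the hot translation units and then benchmarked to see whether shipping separate optimized DLL builds is actually worth it.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Placeholder per-entity work; in a real engine this would be physics,
    // animation, AI updates, and so on.
    void updateEntity(float& e) { e = e * 1.001f + 0.5f; }

    void updateRange(std::vector<float>& ents, std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) updateEntity(ents[i]);
    }

    int main() {
        std::vector<float> entities(100000, 1.0f);
        unsigned n = std::max(1u, std::thread::hardware_concurrency());

        // One worker per hardware thread; each gets a contiguous slice.
        std::vector<std::thread> workers;
        std::size_t chunk = entities.size() / n;
        for (unsigned t = 0; t < n; ++t) {
            std::size_t begin = t * chunk;
            std::size_t end   = (t + 1 == n) ? entities.size() : begin + chunk;
            workers.emplace_back(updateRange, std::ref(entities), begin, end);
        }
        for (auto& w : workers) w.join();

        std::printf("updated %zu entities on %u threads\n", entities.size(), n);
        return 0;
    }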
  • uke Join Date: 2013-07-01 Member: 185840 Members
    I didn't mean to prioritize it over the other things; I just want it to be done as soon as possible.
  • Soul_Rider Mod Bean Join Date: 2004-06-19 Member: 29388 Members, Constellation, Squad Five Blue
    Mantle is only of benefit to those AMD graphics card users who are CPU-limited, meaning those who have a weak CPU. If you have an R9 or HD 7xxx, it is highly unlikely you have a CPU that will see any benefit, unless you happen to be running your shiny GPU with a VERY OLD Intel CPU or an AMD CPU. The likelihood is that any real benefit from this will go to a tiny fraction of the people who own one of those cards.

    Incidentally, the most popular video card in use on Steam is the Intel HD 4000 series.....

    All in all, it depends on UWE's long-term plans for the Spark engine. If they intend to use it in future games and keep it developed and updated like Source etc., then it may be worth them adding this for the next game; however, whether it would be easily backwards compatible is another matter.

    Long story short, more options for the end user are good, but I would expect to see Mantle implemented in their next Spark game, if it gets implemented at all.