The End Of Faster Processors

ViPr Resident naysayer Join Date: 2002-10-17 Member: 1515, Members
horrible news.

I just got the news recently, but this story came out a while ago; I can't believe I didn't hear about it till now. It is not physically possible for processors to get any faster than they are now. The only way to get faster computers is to do what supercomputers do and put multiple processors into one computer working simultaneously. This would mean that programming will have to be drastically different, and a lot more annoying.
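For what it's worth, the "drastically different programming" the OP is dreading doesn't have to be exotic. A minimal sketch of splitting a CPU-bound job across cores with Python's standard multiprocessing module (the work function and job sizes are made up purely for illustration):

```python
# Sketch: farming an embarrassingly parallel job out to multiple cores.
from multiprocessing import Pool

def work(n):
    """Stand-in for a CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [100_000, 200_000, 300_000, 400_000]

    # Serial: one core grinds through every job in turn.
    serial = [work(n) for n in jobs]

    # Parallel: a pool of worker processes handles the jobs concurrently.
    with Pool(processes=4) as pool:
        parallel = pool.map(work, jobs)

    # Same answers either way; only the wall-clock time differs.
    assert serial == parallel
```

The catch, as later posters point out, is that only work which can be split like this benefits; purely serial code sees no speedup from extra cores.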

Comments

  • stingavandal Join Date: 2004-07-17 Member: 29952, Members, Constellation
    edited November 2004
    I've heard it's possible...

    with some new stuff...
    that i don't feel like remembering

    something biological

    maybe
  • DuoGodOfDeath Join Date: 2002-08-01 Member: 1044, Members
    Overclock from 3.4 GHz to 4!!!!!
  • CMaster Join Date: 2003-10-25 Member: 21922, Members
    edited November 2004
    Oh, they can still get far faster, both in terms of clock speed and in terms of useful operations per clock. There's a physical limit eventually, but we're a long way off that yet.
    And there are hopes for things like SETs, optical, quantum, etc.

    Oh, and wrong forum.
  • KaMiKaZe1 Join Date: 2002-11-18 Member: 9196, Members
    With a few more breakthroughs in nanotechnology and quantum physics, computers will be getting a hell of a lot faster.
  • Rapier7 Join Date: 2004-02-05 Member: 26108, Members
    Dude, you're stupid. You're beyond stupid. In fact, retards silently mock you when you're not looking.

    Nah, just kidding.

    But seriously, if you're attributing this to Intel's decision to cancel their 4 GHz Prescott CPU, that's just plain ignorant.

    Theoretically, it's not that hard to ramp up the actual hertz: all you need to do is lengthen the CPU pipeline. But a longer pipeline offsets the advantage of the higher clock.

    Based on our current fabrication process (90 nm), we can't have faster processors without massive heat and power problems. The more speed you have, the more juice it takes and the more heat it puts out. It's just not fiscally sound to keep pushing clocks up, because you'd need a water-cooling or even phase-change cooling system to keep up with the heat. And that doesn't even take voltage problems into consideration.

    No, we're still progressing, but faster-clocked processors aren't always the best-performing ones. Right now, dual-core designs will be able to substitute for faster clocks if more programs become SMP-aware.
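Rapier7's point that dual core only substitutes for clock speed "if more programs are SMP aware" is essentially Amdahl's law. A quick sketch with illustrative numbers (not benchmarks):

```python
# Amdahl's law: overall speedup from n cores when a fraction p of the
# program's runtime can actually run in parallel.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# A fully serial program gains nothing from a second core...
assert amdahl_speedup(0.0, 2) == 1.0

# ...while a program that is 90% parallelizable gets about 1.82x from two.
assert round(amdahl_speedup(0.9, 2), 2) == 1.82

# Even with absurdly many cores, the serial 10% caps the speedup near 10x.
assert round(amdahl_speedup(0.9, 10_000), 1) == 10.0
```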
  • ReK Join Date: 2004-08-30 Member: 31058, Members, Constellation, Reinforced - Shadow, WC 2013 - Silver
    There will eventually be a physical limit, which is, theoretically, the speed at which an electron can change state, in the current system at least. There are many other ways of doing it. They've already started with quantum devices in which each electron can be both 1 and 0 at the same time; confusing stuff. I read it in Sci Am, or Pop Sci, can't remember which.
  • CaM Join Date: 2004-07-05 Member: 29735, Members
    I could have sworn someone brought a similar topic up years ago when the 386 processors came out :P
  • Mouse The Lighter Side of Pessimism Join Date: 2002-03-02 Member: 263, Members, NS1 Playtester, Forum Moderators, Squad Five Blue, Reinforced - Shadow, WC 2013 - Shadow
  • DOOManiac Worst. Critic. Ever. Join Date: 2002-04-17 Member: 462, Members, NS1 Playtester
    edited November 2004
    They've been saying this for YEARS now...

    As long as there is money to be made, processors will get faster. Moore's Law has yet to be broken.
  • reasa Join Date: 2002-11-10 Member: 8010, Members, Constellation
    But the guy from Dell told me mine was the fastist one evar made and no one can evar make a btta one~!!
  • boooger Join Date: 2003-11-03 Member: 22274, Members
    They've made tempered silicon that can run a lot faster thanks to higher heat tolerance, and that's not even nanotechnology. Once you get transistors built from molecules, we could have a computer the size of a dime running at roughly a terahertz (1,000 GHz). I'm just twiddling my fingers waiting for that. :)
  • Thansal The New Scum Join Date: 2002-08-22 Member: 1215, Members, Constellation
    I am surprised no one has brought up the obvious one:

    one of the current ideas is to forget going smaller, and just make the chips bigger :P

    And there are always people saying "things can't go past this point"; then people laugh and do it anyway (there was something about not being able to overclock past a certain frequency, and someone recently did it).

    Etc, etc, etc.

    Don't worry.

    By the time we hit our limit with current tech, we will have new tech to do more nifty things with :)
  • 2_of_Eight Join Date: 2003-08-20 Member: 20016, Members
    QUOTE (Thansal @ Nov 14 2004, 10:42 PM): "there was something about not being able to OC to a certain frequency; someone recently did it"
    I think it was the 6 GHz clock limit. Yes, it was surpassed :)
  • taboofires Join Date: 2002-11-24 Member: 9853, Members
    Same old blah blah blah.

    The only thing that's changed is that Intel finally let go of the megahertz myth. Things will still get faster, but clock speeds are unlikely to go up significantly. Not that it matters: I can double the clock speed of a chip and make it *slower* by doing screwy things to the pipeline.
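The "megahertz myth" point is easy to put in numbers: delivered performance is roughly instructions-per-clock times clock rate, and pipeline tricks that raise the clock often lower the IPC. A toy comparison (the IPC figures are invented for illustration):

```python
# Rough model: throughput ~ IPC x clock. Figures are illustrative only.
def perf(ipc, clock_ghz):
    return ipc * clock_ghz  # billions of instructions per second

deep = perf(ipc=0.8, clock_ghz=3.8)     # long pipeline, high clock
shallow = perf(ipc=1.6, clock_ghz=2.4)  # short pipeline, lower clock

# The "slower" 2.4 GHz design actually gets more work done per second.
assert shallow > deep
```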
  • Thats_Enough USA Join Date: 2004-03-04 Member: 27141, Members, Constellation, Reinforced - Shadow
    Do your research.

    I just recently attended a seminar hosted by a professor from Columbia University. The future of processors depends on the scale at which they can be created. In other words, the smallest you can get is the atomic level, handling the transfer of individual electrons through carbon nanotubes (no, no... not nanites, but those are a fix-all :P).

    Conventional transistor technology has been used WAY too long now, and has only just kept getting smaller. If you realize how much really goes into making those 3.8 GHz P4s or those 64-bit AMDs, you start to wonder how in the hell anything can be done to make them better. Without getting all nerdy and technical about it, I'll just say that technology still has a while to go on the concept of shrinking transistors. 220 million transistors in a 1" by 1" chip about 1/16 of an inch thick is pretty good, eh?

    Well, I vote for quantum computing :o (which they have gotten to work!)
  • NeonSpyder "Das est NTLDR?" Join Date: 2003-07-03 Member: 17913, Members
    edited November 2004
    Well, I've basically been waiting for diamond processors to hit the market. Now, before everyone goes off and says "make processors from diamonds? that'll cost a fortune!", let me remind any of you who haven't heard: diamonds are in fact plentiful; there is simply a monopoly on them. Diamond wafers have been created in labs that can act as perfect semiconductors and are incredibly cheap to manufacture (relatively).

    <a href='http://www.geek.com/news/geeknews/2003Aug/gee20030827021485.htm' target='_blank'>an article</a> Now, this isn't the article I read a few weeks ago, but it's close enough.

    -- a snippet --

    QUOTE: "Enter diamond semiconductors. Diamond? Yes, diamond, the hardest substance known to man. Diamond possesses some very useful properties, not the least of which is its superb ability to conduct heat, its high breakdown voltage, and its high carrier mobility. While silicon begins to show severe signs of thermal stress around 100°C, diamond can withstand several times that without ill effects. A chip made of diamond could do with a far less robust cooling mechanism and run at unheard-of frequencies without damage. CPUs could reach temperatures in the hundreds of degrees and continue to function normally."

    Supposedly they've created a diamond processor running at 84 GHz.

    **edit** After a bit of sleuthing I found the original article I read; I enjoyed it very much and I hope you will too: <a href='http://www.wired.com/wired/archive/11.09/diamond.html' target='_blank'>Wired Diamond Age</a>
  • Soylent_green Join Date: 2002-12-20 Member: 11220, Members, Reinforced - Shadow
    edited November 2004
    All is not lost; we can still get incredibly much faster actual performance out of processors for certain purposes. Certain tasks are just computationally intensive, with no logical branching. If you look at a regular processor, it's something like 99% cache memory, branch prediction, instruction scheduling and so on, and very, very little actual computational resource. It was built for general computing and is very fast at that; it's extremely poorly optimized for simple, computationally intensive tasks that are easily parallelizable.

    Among these tasks is rendering computer graphics. Graphics cards have many pipelines (16 in the latest generation of high-end cards) and need extremely little actual logic; they mostly just handle streams of data coming in, some operation on that data, and a stream of data going out. There's no need for millions of transistors to figure out what to do while waiting for data the chip didn't know it needed to fetch from RAM a few clock cycles ago, because that never happens, and there's no need for a ton of cache to keep things around in case you need them later. These pipelines are practically identical units: if you can make one of them, it's not hard to make more. For graphics cards, performance scales practically linearly with the number of pipelines, and the smaller feature sizes become, the more pipelines you can cram in.

    Mechanics ('physics', like the Havok engine) and sound are important elements of games, and these love multiprocessing/more computational units; not much logic is needed to predict anything or to keep the units busy while fetching from memory. To see just how much more power is available in these areas, look at the ClearSpeed coprocessors: here we have a 96-way processor (96 PEs (Processing Elements), each consisting of an integer MAC (Multiply-ACcumulate) and two 64-bit floating point units) running at around 200 MHz, consuming 5 watts of power and capable of 50 GFLOPS (<a href='http://www.clearspeed.com/products.php?si' target='_blank'>link</a>). This technology could migrate into gamers' computers eventually; it would not need to be so expensive if produced in the same insane quantities as normal processors. Also look at graphics cards: they are many times faster than your CPU at the same tasks as the above coprocessor. They are still hard to program for such tasks because they are kind of graphics-centric (for good reason), but it's not impossible.

    QUOTE (Rapier7): "Theoretically, it's not that hard to ramp up the actual hertz, because all you need to do is extend the CPU pipeline, but a longer pipeline offsets the advantage of more speed."

    Practically, it is an insanely difficult task to ramp the clock speed any further. As clock speed goes up, so does power usage, and this is a huge practical problem: can we really demand that everyone use Peltier cooling or water cooling, and ever larger, more expensive and noisier power supplies? To achieve the clock speeds they did, Intel used self-resetting domino circuits: one circuit drives the next, which drives the next... They are extremely timing-sensitive, i.e. trace-length sensitive. Measuring equipment causes so much interference that readings tend to become nonsensical; as a result, the timing of the circuits had to be done BY HAND, moving millions of transistors one at a time, often WITHOUT accurate measurements, using trial and error. The more actual logic circuits you have, the worse this problem becomes. Smaller feature sizes give more transistors to play with, and they are invariably put to use (sometimes just as added cache memory or by duplicating certain structures, in which case this point isn't valid).
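The power problem described here follows from the classic first-order CMOS dynamic power relation, P ≈ C·V²·f; higher clocks usually also demand higher voltage, so power rises much faster than frequency alone. A sketch with invented component values (not real chip data):

```python
# First-order CMOS dynamic (switching) power: P ~ C * V^2 * f.
# The capacitance/voltage/frequency values are illustrative only.
def dynamic_power(c_farads, v_volts, f_hz):
    return c_farads * v_volts ** 2 * f_hz

base = dynamic_power(1e-9, 1.4, 3.0e9)  # lower clock, lower voltage
hot = dynamic_power(1e-9, 1.6, 4.0e9)   # 33% more clock needs more volts

# Power grew far more than the 33% clock bump alone would suggest.
assert hot / base > 4.0 / 3.0
```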

    QUOTE (DOOManiac): "They've been saying this for YEARS now...

    As long as there is money to be made, processors will get faster. Moore's Law has yet to be broken."

    And that's no coincidence: the clock speed version has been broken for years (which is the one people tend to speak of; the transistor density version still seems alive and kicking, although at ever-increasing manufacturing cost and complexity (=> more bugs)).

    Originally it was "transistor density doubles every year"; then it somehow got changed to "performance/clock speed/transistor density (take your pick) doubles every year and a half", and now Intel is saying "every few years".

    The P4 2.0 GHz came out around September 2001, and now, slightly more than 3 years down the road, Intel is at 3.8 GHz and has abandoned Tejas (the successor to the P4, building on the NetBurst architecture, which was slated to scale to 9 GHz in 2005). It took 3 years to almost double the clock frequency.

    AMD released the 1.6 GHz Athlon XP 1900+ around December 2001; now it has ramped up to 2.6 GHz and switched to a more efficient architecture. It took 3 years to get 60% higher clock speeds.

    Performance and transistor counts have scaled a lot faster than clock speed.
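Those clock-speed figures can be turned into an implied doubling time, which makes the gap to the popular "doubles every 18 months" reading of Moore's law explicit:

```python
# Implied doubling time for exponential growth from `start` to `end`
# over `years` years, using the clock speeds quoted above.
import math

def doubling_time(start, end, years):
    return years * math.log(2) / math.log(end / start)

intel = doubling_time(2.0, 3.8, 3.0)  # P4: 2.0 -> 3.8 GHz in ~3 years
amd = doubling_time(1.6, 2.6, 3.0)    # Athlon: 1.6 -> 2.6 GHz in ~3 years

# Both are far beyond 1.5 years per clock doubling.
assert intel > 3.0 and amd > 4.0
```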

    Why is the clock-speed-doubling version of Moore's law important, you might ask? Well, it's been a free lunch until recently: you 'just' shrink the process and you get higher clock speeds and more transistors. Now you're not getting higher clock speeds anymore, because it's becoming less than practical to have ever-bigger power supplies, ever-bigger heat sinks and so on. What we are relying on now for extra performance is putting the extra transistors to good use: an on-die memory controller, more cache, 64 bits; there's talk of dual-core processors for home computing, and so on.

    It looks like the transistor version might be threatened by thermal noise (but Intel reckons this is a decade or so off). Leakage current is also threatening the transistor version of Moore's law; we might be able to stave this off with SOI, strained silicon and such.
  • Cold_NiTe Join Date: 2003-09-15 Member: 20875, Members
    QUOTE (reasa @ Nov 14 2004, 09:33 PM): "But the guy from Dell told me mine was the fastist one evar made and no one can evar make a btta one~!!"
    lol u 2? we both msut haev teh fastes prosesors 4evar lol b/c w/e i can have 2 programs runing @ teh saem tiems.
  • DC_Darkling Join Date: 2003-07-10 Member: 18068, Members, Constellation, Squad Five Blue, Squad Five Silver
    That's not the prob with CPUs... CPUs can and will go faster. There's another, more critical prob which is basically... unsolvable. :P

    The current applications & OSs are being built for the CURRENT generation of CPUs, from the 286 through the P4, I mean. Same with AMD. So what's the prob?

    Backwards compatibility, same as with XP. XP is made to be partially compatible with old Win98 applications... same with the CPUs. Programs are written for this generation, so in order to keep using your pwning Norton 2004 Antivirus on the new uber-cool CPU, they have to build in backwards compatibility. Yes people, if you got this far... BITS. 64-bit CPU... 32-bit CPU. This is bad.

    Why?

    Go figure you've got a 32-bit application, or worse, OS, and a 64-bit CPU. Your CPU has to work SLOWER to communicate with that OS. This means skipping clock cycles, etc. A lot of tech stuff. Point is, imagine a 128-bit CPU running a 32-bit OS and applications. OUCH. That's a lot of skipping; it gets slower because it A. has to skip, B. has to modulate the info, and probably other things I forgot.

    So what is this to us? If we keep going down this route, we keep making hardware and software compatible with each other, all the way back through the OSs. Imagine a P6 supporting Win98. You can say we won't have Win98 by that time, but remember... Windows HAS backwards compatibility, so Windows is STILL working on the old Win98 principle; OK, since XP this isn't entirely true, but some part still is.

    There are ways of making CPUs WAY faster, but the point is, you can NOT run 286+ software on such a CPU, simply because the software was never written to deal with that processor.

    So the real question is not "can we make the CPUs faster"; the real question is "can we make them faster while STILL supporting backwards compatibility". THIS will end a lot faster than physical limits, simply because in time it will start to be too much.

    Because skipping clock cycles is not perfect: usually you have to skip more than needed (another long story) in order to keep a correct speed. So in time you are skipping a hell of a lot, and with anything you emulate, you risk making more and more false calculations.

    In short, the best thing we can do is dump the entire current CPU generation, rebuild from ground zero on NEWLY discovered techniques, and make OSs and applications for THAT CPU. Some companies do this (usually mainframes), since those have to be totally customized anyway.

    I will try to dig up links to this info; since it's been a while since I read it, I might be a tad off on some things.
  • Xyth Avatar Join Date: 2003-11-04 Member: 22312, Members
    I really wouldn't mind having larger CPUs as long as they are faster... my computer case is big enough already.
  • ViPr Resident naysayer Join Date: 2002-10-17 Member: 1515, Members
    I say they dump compatibility and make emulators. PCs are not compatible anyway: they had to make an emulator called DOSBox so you can run DOS games on current machines.
  • DC_Darkling Join Date: 2003-07-10 Member: 18068, Members, Constellation, Squad Five Blue, Squad Five Silver
    Cursing hell. :(

    I just googled for a few hours and I cannot find it anymore. I get tons of links to articles about the great compatibility of the current line of CPUs, damn it. That's not what I need. (I read this about a year ago, and I came across it by accident while digging up some other info about CPUs.)

    If anyone ever finds an article talking about what I am talking about, please let me know. :)

    Anyway, point is... they've got to restart: rebuild CPUs, rebuild OSs, rebuild applications, and NOT build in backwards compatibility.

    What happens when they do build it? Let's think it through:
    they make a whole new CPU, totally new, with 0% compatibility with anything currently out, so no existing OS or software can run on the CPU. (Good.) Now they make some weird thing to make it run old software; not needed. Sure, if you run new software, which will be built for the CPU, you don't need it, but if you run software needing this backdoor, you are SERIOUSLY lagging your CPU.

    Man, I wish I could find that article. Took me 2 days the first time. :(
  • SoulSkorpion Join Date: 2002-04-12 Member: 423, Members
    edited November 2004
    QUOTE (D.C. Darkling @ Nov 15 2004, 10:55 PM): "Anyway, point is... they've got to restart: rebuild CPUs, rebuild OSs, rebuild applications, and NOT build in backwards compatibility.

    What happens when they do build it? Let's think it through: they make a whole new CPU, totally new, with 0% compatibility with anything currently out, so no existing OS or software can run on the CPU. (Good.) Now they make some weird thing to make it run old software; not needed. Sure, if you run new software, which will be built for the CPU, you don't need it, but if you run software needing this backdoor, you are SERIOUSLY lagging your CPU.

    Man, I wish I could find that article. Took me 2 days the first time. :("
    Cow droppings.

    Rebuild software from scratch because the architecture has changed? What the hell for? All you have to do is write compilers for the new architecture, and get people to write standards-conforming, portable code. Forget making the architecture backwards compatible; have the compiler do the work, not the CPU (thanks a lot, Intel. CISC was a *great* idea, wasn't it? :/)

    ...

    As noted before, this is nothing new. It's probably nothing to worry about either: firstly, because computers can get faster by reducing bottlenecks. There's no point in your CPU being incredibly fast if your memory or hard drive can't keep up.

    And then there are quantum computers, which are a different architecture altogether.

    Anyway, I wouldn't worry about it. There's more than one way to make software run fast, and improving the CPU is only one of them.
  • lazygamer Join Date: 2002-01-28 Member: 126, Members
    edited November 2004
    Actually, ViPr, you can run some DOS games on a modern Windows XP machine. What DOSBox fixes is sound not working, or many (most?) games having serious timer issues. In fact, I managed to run Quest for Glory 4 (DOS version) under Windows XP with proper sound (no VDMSound needed, I dunno why) and no speed issues, simply by using some fan-made timer patches for it (sadly, most old games don't have fan-made timer patches). :)

    Also, Windows 98 can really help out for running old games that XP doesn't want to run.
    A Windows 98 system on a modern CPU can usually run Windows 9x versions of old games excellently. Even XP is pretty decent, although there are some Windows 9x games that XP may not like. Considering many such games go back to 1996 (when the P1 r0x0red), I'd say that's pretty damn good for x86 compatibility. :)



    OK, my personal theory is that plenty of software is far too unoptimized. Hardware increases in power so quickly that there isn't any necessity to heavily optimize code.
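The "unoptimized software" theory is easy to illustrate: two functions can compute the same result with wildly different amounts of work, and fast hardware hides the difference until inputs grow. A toy example:

```python
# Both functions count word frequencies; only their cost differs.
def slow_counts(words):
    # Rescans the entire list once per word: O(n^2) comparisons.
    return {w: words.count(w) for w in words}

def fast_counts(words):
    # Single pass with a running tally: O(n).
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

words = ["fast", "cpu", "fast", "cheap", "cpu", "fast"]
assert slow_counts(words) == fast_counts(words) == {"fast": 3, "cpu": 2, "cheap": 1}
```

On a six-word list nobody notices; on a million-word list the quadratic version is the kind of code that quietly eats a faster processor's headroom.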
  • Meto Join Date: 2004-04-26 Member: 28216, Members
    Computers have a long way to go yet... but more importantly, why does anyone need a 6 GHz processor?! People are just trapped in the retail cycle, which makes everyone buy faster and faster processors. To be quite frank, 95% of PC owners would still be able to do everything they do now, and just as well, with a 1.5-2 GHz processor. It's only games like Doom 3 and HL2 which need the upper part of that spectrum.

    Also, quantum computing will be revolutionary, and it'll be insanely fast... at certain tasks which can take advantage of the quantum nature of particles! If you actually look into the theories and algorithms that have been developed so far to utilise quantum computing, you'll find that not much is complete despite years and years of work. Quantum computing for the general public is still a good 30-50 years away.
  • DC_Darkling Join Date: 2003-07-10 Member: 18068, Members, Constellation, Squad Five Blue, Squad Five Silver
    A. This was at least a year ago; I have not kept up. Maybe this found a solution by now.
    B. I never said they might not find a solution to somehow emulate it and run it anyway. What I mean is, if they do "fix" it in software (like compilers), they ARE losing speed. Period. And I know it's not just the CPU; I have a few degrees in system administration here, so I know what it's all about.

    You say let the compiler do the work; I say let them do the work themselves: recode, start from scratch, and do it right.
  • Confused Wait. What? Join Date: 2003-01-28 Member: 12904, Members, Constellation, NS2 Playtester, Squad Five Blue, Subnautica Playtester
    edited November 2004
    Darkling, as someone who is sitting next to a 64-bit machine, I agree with you.
    I can only assume the rewriting stuff is based on the debacle that was Itanium. My real problem with the current 64-bit systems is finding drivers that support a 64-bit OS, outside of Win64 (yeah, that's Fedora).

    The reason I support backwards compatibility for my hardware on this one is that the 64-bit market is far too small to have all the things I want to have, particularly if I don't want to pay enterprise-sized prices for software.

    Further, I find that a large number of 32-bit programs are still very useful; examples include things like Steam, or Zoner's Half-Life tools. These are things which I can't reasonably expect to be upgraded, particularly when 64-bit machines are such a small part of the computer market. Thus x86_64 for the win :)
  • gyMe Join Date: 2004-08-27 Member: 30961, Members, Constellation
    edited November 2004
    There are a few problems with cranking up the clock speed. First and foremost is heat: right now it's very, very difficult to get a processor past 4 GHz without extreme heat output, which is why Intel decided against releasing their 4 GHz processor.

    Intel realized, just like AMD did a while ago, that it's just not effective to keep pumping up the clock frequency. Instead, Intel will work on making their chips run more efficiently with new technology.

    Then again, if AMD and Intel weren't such **** to silicon, we could have 10 GHz diamond-based processors at a decent price, since the synthetic diamond makers have been seriously increasing their capabilities.

    So you are unlikely to see processors jumping past 4-5 GHz in the near future; however, you will see faster processors.

    EDIT:

    Oh, and in the future you will see multicore processors (2 or more cores on a single chip) and new types of processor extensions (like SSE1/2/3). I'd rather not see them use carbon nanotubes (I want my space elevator first!). I did read a paper on being able to use standard copper/silicon in quantum computing by measuring the spin of an electron, but quantum computing is a LOOOOOOOONG way off.
  • voogru Naturally Modified (ex. NS programmer) Join Date: 2002-10-31 Member: 1827, Members, Retired Developer, NS1 Playtester, Contributor, Constellation
    edited November 2004
    <a href='http://www.isi.edu/stories/97.html' target='_blank'>http://www.isi.edu/stories/97.html</a>

    QUOTE: "The bottom line is we will deliver 8 times the computing power using less than one tenth of the electricity."

    Nuff said.
  • Soylent_green Join Date: 2002-12-20 Member: 11220, Members, Reinforced - Shadow
    QUOTE: "Then again if AMD and Intel weren't such **** to silicon, we could have 10ghz diamond based processors at a decent price, since the synthetic diamond makers have been seriously increasing their capabilities."

    Not a good solution. It would cost AMD and Intel many billions of dollars to get diamond fabs online, and years of effort to get all the problems solved. But for what?

    Do regular users want performance at the cost of insane power usage? The attractiveness of diamond is that it can stand up to almost incandescent temperatures without much trouble, while silicon cannot; therefore you can run it at very high frequencies (~200 GHz) without worrying too much (though won't thermal noise quickly become a problem?), at extreme power densities (~30 W per square millimeter(!); a normal-sized processor (~100 mm^2) would use 3 kW, trip your fuse at home if you turned on two of them at once, need monstrous cooling and power supplies, and double as an efficient space heater), something you can never do with silicon. If you have several kW to throw away, it's not a problem as long as it works. For regular users IT IS a huge problem. Diamond processors are not a solution to the power problem for regular users, and especially not for laptops, where power is even more limited.

    What do we need more computing resources for? Games? Current processors are horribly suited to games; we want something like a coprocessor for raw computation. We can make a 5 W processor (such as the ClearSpeed chip linked in my previous post) that could perform as well as that 200 GHz diamond processor would, if it were designed like a P4, at a lot of the tasks we care about in our games.

    I can't see any benefit to me in diamond processors instead of silicon.