The End Of Semiconductor Development

Dr_LEE7 Join Date: 2004-10-15, Member: 32265, Banned
its gonna come soon

According to Moore's Law, transistor density on integrated circuits was originally said to double every year; the figure was later revised to every two years, with 18 months the popularly quoted number. As the years have gone by, semiconductor development has slowed, and I would say it is now going even slower than every 18 months with the latest Pentium 4 processors.

Currently, processors use 90 nm and 130 nm processes to make CPUs with 50 million to 110 million transistors. As we can see, Intel is already having problems with its Pentium 4s, and they are not expected to cross the 4 GHz barrier with that chip. They are now working on making processors with dual cores.

It is said that the theoretical limit for microprocessors is a 65 nm process, and the maximum possible transistor count is 18 billion. Microprocessors are also supposed to reach their theoretical limit sometime between 2010 and 2020.

Do you think we will soon reach the theoretical limit for semiconductor development? What other technologies do you think we will move into if that happens? Is there a possibility of 3-dimensional semiconductors using silicon wafers? The day when microprocessors reach their theoretical limits is inevitable. What is your opinion on the subject?

Comments

  • taboofires Join Date: 2002-11-24, Member: 9853, Members
    Um, with a little clever marketing trick called doubling the pipeline length, I can double the clock speed of a chip and actually make it *slower* for most purposes. The clock is a poor indicator of speed, so forget about that (and if you want more details, google "the megahertz myth").

    The whole Moore's "Law" thing is junk anyway. So I consider it an extremely good thing that Intel has finally stopped trying to play into it.

    By the time 2020 comes about, we'll almost certainly have migrated to another medium anyway. Optical, I think, is about 15 years away, for example.
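The "megahertz myth" arithmetic is easy to sketch. A minimal illustration with made-up numbers (the clock rates and IPC figures below are assumptions, not real chip specs):

```python
# Useful work per second is clock_rate * instructions_per_cycle (IPC),
# not clock rate alone. All numbers here are illustrative.

def throughput(clock_hz, ipc):
    """Instructions retired per second."""
    return clock_hz * ipc

short_pipe = throughput(2.0e9, 1.0)   # 2 GHz chip with IPC 1.0
long_pipe = throughput(4.0e9, 0.45)   # 4 GHz chip, but a deeper
                                      # pipeline stalls more: IPC 0.45

# The chip with double the clock gets less done per second:
print(short_pipe > long_pipe)  # True (2.0e9 vs 1.8e9)
```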
  • Swiftspear (Custim tital) Join Date: 2003-10-29, Member: 22097, Members
    That being said, you can expect the rate at which processors double in speed to slow over time, just like nearly any real causal relation; trends rarely run to completion on large scales.
  • Mr_Headcrab (Squee!~) Join Date: 2002-11-20, Member: 9392, Members, Constellation
    Eventually, no matter what, we're gonna hit a barrier: electrical signals can't travel faster than the speed of light. Right now that's not a problem, but when we get into CPUs with quadrillions of transistors, chips are going to have delays between the information being entered and the last transistor getting the signal. We also have to worry about resistance: true, the resistance of a circuit etch is low, but over billions of connections the signal is going to dwindle from resistance. A 3-dimensional chip seems like the most probable evolution right now, but heat dissipation would be a problem unless cooling is hardlined to the chip, and even then other problems arise...
  • UltimaGecko (hates endnotes) Join Date: 2003-05-14, Member: 16320, Members
    QUOTE (Mr_Headcrab @ Oct 17 2004, 03:11 AM):
    "Eventually, no matter what, we're gonna hit a barrier: electrical signals can't travel faster than the speed of light. [...] A 3-dimensional chip seems like the most probable evolution right now, but heat dissipation would be a problem unless cooling is hardlined to the chip, and even then other problems arise..."
    Or quantum computers, based on the electron configurations of atoms. I'm not sure how far along we are with it right now, but I'm sure in 50 years or so we'll at least be able to use it for minor calculations of some sort.

    I was thinking the other day, aside from power output, why we don't use more than a binary transistor. I mean, it uses more power the more ways your switch needs to work, but it could increase computing speed to not need as many transistors. It would probably take some intense hardware to actually use this, too, but I think you could get up to a 10-way transistor or something.

    Of course, I don't really have any factual information, and it's a bit late/early for me to be looking it up at the moment.

    [for which I apologize :p ]
  • Swiftspear (Custim tital) Join Date: 2003-10-29, Member: 22097, Members
    QUOTE (Mr_Headcrab @ Oct 17 2004, 03:11 AM):
    "Eventually, no matter what, we're gonna hit a barrier [...]"
    At the point where there is absolutely nowhere to go in terms of improvements to computers, it won't be our biggest concern any more by a long shot... Like you say, there are still tonnes of theories that can still work out, so we're not done yet. Not even close.
  • Soylent_green Join Date: 2002-12-20, Member: 11220, Members, Reinforced - Shadow
    edited October 2004
    Finally, some reasonable people. That '6 GHz, mad overclock' thread showed painfully clearly what people in general think the current situation is.

    edit: some articles from the inq that may be interesting.
    The Roadmap to Recovery, Part I: http://www.theinquirer.net/?article=19105
    The Roadmap to Recovery, Part II: http://www.theinquirer.net/?article=19110
  • Dr_LEE7 Join Date: 2004-10-15, Member: 32265, Banned
    QUOTE (Soylent_green @ Oct 17 2004, 04:24 AM):
    "Finally, some reasonable people. [...]"
    Good articles, enjoyed reading them.
    Yes, Intel is having problems like never before. If AMD gets past the 4 GHz (4000+) barrier, Intel is gonna have a serious problem. I guess Intel deserves it if they are gonna jerk around after owning the market for years and years.

    I think Moore's law is generally true; it will just get slower and slower every few years as we get closer to the theoretical limit.

    Other things that are really starting to make me mad are the slow development of hard-drive transfer rates, optical drives, and reliable operating systems. What's up with Microsoft anyway? They should be developing much more intelligent operating systems; they own 95% of the market, for God's sake!

    Oh yeah, I believe electricity moves at about 1/10 the speed of light, Headcrab; I think I read that somewhere a few years back.
  • Zel Join Date: 2003-01-27, Member: 12861, Members
    QUOTE (Dr_133t @ Oct 17 2004, 10:00 AM):
    "Oh yeah, I believe electricity moves at about 1/10 the speed of light [...]"
    I heard a couple of days ago that electricity can travel at 0.8c in copper.

    Sounds fast, but that's only about 12 GHz across a 1 cm chip and back, assuming zero resistance =/

    After silicon gets dragged out for a little longer, into dual cores and such, I think we are going to move to diamond. There was an article in Wired a few months back about how we can build diamond chips for about the same price as current processors, and how they can run at 400 degrees Celsius instead of 80, meaning they could have clock rates up to about 700 GHz.

    yeah, seven hundred.
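The back-of-envelope figure above can be checked directly. A sketch under idealized assumptions (signal at 0.8c, a 1 cm die, no gate or RC delays, which real chips very much have):

```python
# Highest clock whose period still allows a round trip across the die.

C = 3.0e8            # speed of light in vacuum, m/s
signal_v = 0.8 * C   # assumed signal speed in copper
die = 0.01           # die width: 1 cm, in metres

round_trip_s = 2 * die / signal_v   # ~83 picoseconds
max_clock_hz = 1.0 / round_trip_s
print(max_clock_hz / 1e9)  # ~12 GHz
```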
  • Wheeee Join Date: 2003-02-18, Member: 13713, Members, Reinforced - Shadow
    edited October 2004
    *edit* My mistake, usually the electrical signal propagates at slightly less than 1 ft/nanosecond, but this article (http://www.eskimo.com/~ultra/microstrip%20propagation.pdf) explains how it is actually determined. That's why mobo manufacturers have to work hard to make sure all the traces to memory banks and such are the same length; otherwise the signals get there at different times and Bad Things happen.
  • juice Join Date: 2003-01-28, Member: 12886, Members, Constellation
    That "theoretical" 65 nm limit is artificial, based on SPECIFIC techniques. We can go below it with other etching methods for finer microstructure. We won't reach that feature-limit barrier for many decades. Chill, dudes, and enjoy the technology as it comes.
  • Rapier7 Join Date: 2004-02-05, Member: 26108, Members
    Actually, AMD's roadmap has 45-nanometer-based CPUs.

    What's really the issue here? Necessity and convenience are the mother of invention; as juice said, enjoy the ride.
  • Hawkeye Join Date: 2002-10-31, Member: 1855, Members
    Well, technically, electricity moves as fast as the electrons move through the wire itself, which is in fact very slow. As I understand it, it can take hours for an electron to cross the switch, travel through the wire, and finally pass through the light bulb itself. The reason the light turns on seemingly instantaneously is that it works like pressure in a hose: the water at the source may not reach the end of the hose for a few seconds, but water is pushed out of the far end immediately.

    There is a definite problem with clock speeds. They keep making them faster and faster, when in fact that is about the stupidest way to make a processor faster. The clock regulates when the pipeline takes actions. The TIME it takes to do these actions depends entirely on the task, not on the clock speed. Multiplications take longer than additions. If a multiplication takes 3 cycles and an addition takes 1 cycle, then by making the clock twice as fast you end up with 6 cycles for a multiplication and 2 for an addition. It means nothing. It is a worthless way of accounting for processor speed.

    There is also what's called "MIPS", or millions of instructions per second, jokingly known in the computer science field as "meaningless indication of processor speed". "Instruction" is a loose way of describing a step the processor takes to compute something. By doing more instructions, you aren't necessarily getting any more real work done, considering that some tasks require more instructions than others (depending on the processor's architecture). Some processors believe in doing small, fast instructions; others believe in doing large, slow instructions. The instructions-per-second count looks higher for small, fast instructions, but about the same amount of work gets done (relatively).

    Anyway, back to the topic. I hear they are really having trouble carrying the clock signals across the die, so they are having to put "delays" in places to wait extra long for signals to reach them.

    Honestly, I think if they keep this up, they'll end up with a synchronous asynchronous processor (try saying that 5 times fast).
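The point about cycle counts can be put in numbers. A sketch with assumed latencies (a 3-cycle multiply and 1-cycle add; not real figures for any particular CPU):

```python
# Wall-clock latency of an operation is cycles / clock rate. If raising
# the clock also raises the cycle count, the operation is no faster.

def op_latency_ns(cycles, clock_ghz):
    return cycles / clock_ghz

mul_at_1ghz = op_latency_ns(3, 1.0)  # 3 cycles at 1 GHz -> 3 ns
mul_at_2ghz = op_latency_ns(6, 2.0)  # 6 cycles at 2 GHz -> 3 ns

# The "2x faster" chip performs the multiply in exactly the same time:
print(mul_at_1ghz == mul_at_2ghz)  # True
```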
  • ThE_HeRo Join Date: 2003-01-25, Member: 12723, Members
    I don't understand 10 words in this thread.
  • TheWizard Join Date: 2002-12-11, Member: 10553, Members, Constellation
    I have designed a few chips (some memory, and basic CPUs).


    I'll post more later. But let me tell you, FUNKY stuff happens when you get below 100 nanometers.

    I had a transistor act funky once because it was one atom thinner than the others. We couldn't figure out what was wrong with it, because it behaved perfectly except that some characteristics were off by a factor of 1/4, which wouldn't happen if it were simply manufactured improperly.



    If you guys want some more food for your debate, look up FinFETs. Neat stuff.
  • eggmac Join Date: 2003-03-03, Member: 14246, Members
    As someone already pointed out, the limits you're suggesting are not general limits of technology; they're just specific limits of specific solutions.

    Already in the late 1970s, Richard Feynman, one of the greatest physicists, calculated the theoretical minimum energy needed for computing. Astonishingly, this limit *does not exist* (for reversible computation). Neither does quantum behaviour impose any limit. The only real limit is thus simply the space needed.

    But there does indeed exist a problem with silicon technology. The solution lies in new technologies, such as the carbon nanotubes being developed right now, which could greatly advance progress in this area. Transistors consisting of nanotubes are supposed to be possible very soon. I guess you can read up on them in Scientific American.
  • Hawkeye Join Date: 2002-10-31, Member: 1855, Members
    edited October 2004
    I believe we need to reinvent the computer. Computers were made to interpret on or off, because that was all that was available at the time. We can distinguish between 32 different states now, so that's a heck of a lot more information a signal can carry. Imagine the computation power of that. We're already sending 64 bits into the processor, so imagine

    32^64 possible states. You could send entire blocks of memory to the processor that way.

    Other directions include cellular processors: each processor thinking individually and sharing with its neighbors. Von Neumann did some incredible work with these things. You can simulate any behavior.

    Also, I don't know how quantum computing works, but in theory it makes 2^x algorithms doable in x time. Am I right about that? I really don't know.
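The gain from the 32-state signalling idea above is easy to quantify: a line with 32 distinguishable levels carries log2(32) = 5 bits instead of 1. A sketch (the 64-line bus and 32 levels are taken from the post, not from any real hardware):

```python
import math

lines = 64
levels = 32

binary_bits = lines * math.log2(2)           # 64 bits per transfer
multilevel_bits = lines * math.log2(levels)  # 320 bits per transfer

# The 32^64 state count quoted above really is 2^320:
print(32**64 == 2**320)  # True
print(multilevel_bits)   # 320.0
```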
  • illuminex Join Date: 2004-03-13, Member: 27317, Members, Constellation
    To an AMD customer, clock speed does not mean much anymore. If you look at the Athlon XP 3200+ vs the Athlon 64 2800+, the 2800+ is clocked lower than the 3200+ (by around 400 MHz, I believe) and is more powerful. Intel is playing a marketing game with clock speeds, since most people don't understand that a 2.0 GHz Athlon 64 is actually as fast as or faster than a 3.2 GHz Intel P4 in most workloads (obviously the core and pin count change this, but you get the picture).
  • taboofires Join Date: 2002-11-24, Member: 9853, Members
    With quantum computing, the idea is that for some problems you'd have the answer about as fast as you can input the request. Input speed is the only real barrier.

    BTW, illuminex, if you check above you'll see that Intel is now getting out of that marketing game (but mostly because it isn't working anymore).
  • TheWizard Join Date: 2002-12-11, Member: 10553, Members, Constellation
    edited October 2004
    The real fun problems occur with clock speeds.


    Let us say we have a 1 GHz clock. That is the same as 1 tick every nanosecond. Think about how far light can travel in 1 nanosecond: about 30 cm (1 foot). In wires, electrical signals are slower, covering only about 60-70% of that distance.

    At 1 ns you have plenty of time to let a signal propagate across the die. However, as you increase clock speeds you also decrease the amount of time between pulses. This means that eventually a clock pulse will only be halfway across the chip as the next pulse is starting.

    Imagine the following model, where the dashes indicate how far a pulse has travelled and the dots the remaining distance across the die.

    1 ns, or 1 GHz:

    Pulse2----------------------------------.......................................Pulse1

    Here we see that the first pulse can easily travel the length of the die before pulse 2 begins. However, let us look at this example.

    0.25 ns, or 4 GHz:

    Pulse2----------------Pulse1----------

    Notice that Pulse1 has not crossed the entire chip before the second pulse starts. You don't need to be an engineer to imagine how many problems this will cause.


    Don't know why I posted this information but it is good fodder for the thread.
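The two diagrams above can be reproduced numerically. A sketch assuming an effective on-chip signal speed of 0.1c (on-chip wires are RC-limited and much slower than light in free space; 0.1c is an illustrative guess, not a measured figure) and a 2 cm worst-case path:

```python
# How far does a pulse travel in one clock period, and does it cross
# the die before the next pulse starts? All figures are assumptions.

C = 3.0e8
signal_v = 0.1 * C   # assumed effective on-chip signal speed, m/s
die_path = 0.02      # worst-case path across the die: 2 cm, in metres

crosses = {}
for clock_hz, label in [(1e9, "1 GHz"), (4e9, "4 GHz")]:
    period_s = 1.0 / clock_hz
    reach_m = signal_v * period_s   # distance covered in one tick
    crosses[label] = reach_m >= die_path
    print(label, "reach:", reach_m * 100, "cm, crosses die:", crosses[label])
```

At 1 GHz the pulse comfortably crosses before the next tick; at 4 GHz it is still mid-die when the next pulse starts, which is exactly the situation in the second diagram.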
  • the_x5 (the Xzianthian) Join Date: 2004-03-02, Member: 27041, Members, Constellation
    QUOTE (wizard@psu @ Oct 18 2004, 10:00 AM):
    "But let me tell you, FUNKY stuff happens when you get below 100 nanometers."
    And even weirder stuff at picometers... The Heisenberg uncertainty principle makes a huge difference when the mass gets really tiny. ;) (/me turns off my mass and clips through wall)

    As far as reaching the theoretical limits: yes, it will happen, but then new flow designs develop and we may see the processor begin to grow. We have much to learn from neurons; the transistor may not necessarily be the ultimate design.
  • TheWizard Join Date: 2002-12-11, Member: 10553, Members, Constellation
    edited October 2004
    QUOTE (x5 @ Oct 18 2004, 03:11 PM):
    "And even weirder stuff at picometers... [...] We have much to learn from neurons; the transistor may not necessarily be the ultimate design."
    You don't have to get down to the picometer level to notice this.

    It has to be taken into account when you are manufacturing the chips. You don't actually lay wires onto a circuit board and get a chip; you build up layers on a substrate by several methods.

    One method, best described as an ion gun, gives you, instead of rectangular-looking components, what looks like a bell curve if you were able to take a cross-section of your chip.

    It gets really funky when your ion beam is 50 nanometers wide and you are trying to implement 45 nanometer components ;)

    Here is a picture of an inverter at the material level: http://www.cse.psu.edu/~cg477/fa04/max_tutorial_files/image005.gif
    In general, the smaller you get, the more rounded each of those blocks gets. Rounding is bad, as your transistors will not behave as you expect them to. (While rounded components might be nice in some situations, they are pretty much impossible to implement with any degree of certainty.)
  • the_x5 (the Xzianthian) Join Date: 2004-03-02, Member: 27041, Members, Constellation
    LoL wizard. :D Nice diagrams.

    Are you an ECE or EE major, btw?

    Your information also reiterates the difficulty of making such a small electrical system (not to mention the fact that, like you said, signals only travel a certain distance in a given period of time).

    Do you think they could come up with a superconducting transistor? I mean, n- and p-type silicon may be impossible when you get down to a few atoms. I heard somewhere that carbon nanotubes were being experimented with.
  • Dr_LEE7 Join Date: 2004-10-15, Member: 32265, Banned
    edited October 2004
    Nice pics, wizard.

    Yep, it's true, AMD has reached the 4000+ mark, so it doesn't look good for Intel.
    AMD 4000+: http://www.tomshardware.com/hardnews/20041019_133250.html
    Maybe AMD will get a chance to be king of the hill for once. I remember AMD beating Intel for about a month before Intel came out with the Pentium 2, and once again when the 450 MHz and 500 MHz Athlons came out.
    I think this time AMD will be the performance leader for at least a year and a half.

    Intel should never have tied their early Pentium 4s to Rambus, and they should have just developed a better processor, not one with more and more instructions.

    I wonder what happened to Digital. Are they still in business? Back in the days of the Pentium 2, they had a CPU that was 2x faster; I wonder if they are still around.
  • TheWizard Join Date: 2002-12-11, Member: 10553, Members, Constellation
    QUOTE (x5 @ Oct 18 2004, 06:21 PM):
    "[...] Do you think they could come up with a superconducting transistor? I mean, n- and p-type silicon may be impossible when you get down to a few atoms. I heard somewhere that carbon nanotubes were being experimented with."
    I did my undergrad in Computer Engineering.


    While resistance is a problem as you decrease in scale, the real trick is being able to fabricate something so small. As I mentioned, the ion beams used to dope the materials are larger than the features they are trying to create. You have to worry about capacitance between wires and all sorts of other problems that do not occur on a macro scale.

    Do I think we will reach a limit? Possibly. There is a limit with every technology developed. However, that just means a new technology has to be implemented. We were using vacuum tubes some 40-50 years ago. Those certainly had a limit.

    You also have to realize that the transistor count on a chip is relatively meaningless. The technology and design of the chip is really what drives the 'power' of a processor. The transistor count is outdated and irrelevant. When circuits are designed, large areas of the die are prepared to accept additional transistors in case the fabrication process has problems. This prevents a company from losing $1-4 million if they forgot a FET somewhere; they can just alter the upper masks and continue with fabrication. Many chips are designed with redundancy in their circuits: the chip can detect when errors are occurring and switch to a secondary unit or block off part of the memory.

    This is a shot-in-the-dark example, but here goes:

    Chip A is working fine, but then begins to notice that memory addresses AA34-B5B7 are no longer responding. The chip marks this section as damaged. Rather than operate at reduced capacity, the chip was designed with extra memory that can be enabled in case of a failure like this. The chip turns on this backup section of the circuit and operates as if there were no bad memory on the chip.

    This means that a designer will put extra transistors and memory banks onto a chip if they have the room. The cost of creating a chip lies not in the complexity of the chip but in its size. Because all chips on a wafer are manufactured simultaneously, the cost is per wafer rather than per chip.

    So there is a bit of a balance between die size and circuit area. However, if a designer has any available space on a chip, they will utilize it, as it will not affect the final cost.
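The bad-block remapping described above can be sketched as a toy model (the class and its API are invented for illustration; real chips do this in hardware with spare rows and columns):

```python
# Toy model of on-chip memory redundancy: failed addresses are
# transparently rerouted to a spare bank.

class RedundantMemory:
    def __init__(self, size, spares):
        self.main = [0] * size
        self.spare = [0] * spares
        self.remap = {}     # bad address -> index into spare bank
        self.next_free = 0

    def mark_bad(self, addr):
        """Called when self-test finds a dead cell: assign a spare."""
        self.remap[addr] = self.next_free
        self.next_free += 1

    def write(self, addr, value):
        if addr in self.remap:
            self.spare[self.remap[addr]] = value
        else:
            self.main[addr] = value

    def read(self, addr):
        if addr in self.remap:
            return self.spare[self.remap[addr]]
        return self.main[addr]

mem = RedundantMemory(size=16, spares=4)
mem.mark_bad(5)      # pretend address 5 failed
mem.write(5, 123)    # silently lands in the spare bank
print(mem.read(5))   # 123: the chip behaves as if nothing were wrong
```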
  • Swiftspear (Custim tital) Join Date: 2003-10-29, Member: 22097, Members
    Wizard went way over my head on this topic :p
  • Hawkeye Join Date: 2002-10-31, Member: 1855, Members
    Yeah, which is why, in fact, there is only so much a processor chip can do. From what I understand, there are many features and accessories they would love to add to the chips, but they simply do not have the room. Wizard, I'm going to have to go with my gut and say you're wrong about the memory thing.

    If they could manage to squeeze more memory onto a chip, heck, they'd use it. The chips that don't work when tested on the assembly line are thrown out like junk. The manufacturers know that size is precious and speed is what gives the profits in markets like these. Besides, not a very large percentage of bad chips are due to bad memory locations; they are typically some unforeseen contact between wires in some of the layers of the chip.

    But wizard, I agree with everything you said about the clock speeds. Watch Intel try to push the clock speeds harder and faster. Don't be fooled, though. Higher clock speeds probably won't be as successful as better chip designs. We're starting to reach the point where the way to improve chip speed isn't by tampering with common instructions, but by adding hardware to make less common instructions go faster. MMX was probably the first instance of this in Intel chips, adding support for media-type workloads.

    Now excuse me, but I really should get back to studying for my comp sci test. :D
  • TheWizard Join Date: 2002-12-11, Member: 10553, Members, Constellation
    edited October 2004
    Since I am ranting on how they make chips, I'll go a bit further.


    A good way to think about chip design is in layers: one material placed on top of another. It is not unlike how your TV remote works, with the layer of rubber placed on top of the sensors; when they touch, it makes a connection.

    Here is the basic process of how a chip is made.

    1. Engineers design the chip according to Design Rules from the foundry

    2. This design is passed to the foundry where a mask is made.
    -The mask is like a stencil onto which each layer of the design is etched

    3. A pure silicon seed crystal is lowered into a vat of molten silicon
    -The seed crystal is withdrawn at a constant speed, temperature and pressure.
    -This forms a long cylinder of one pure silicon crystal (10 feet tall)

    4. The silicon crystal is sliced into wafers

    5. An oxide layer is grown on top of the wafers. This provides the substrate.

    6. A photoresist layer is placed on the wafer.

    7. A laser is directed through the first mask and onto the wafer. This is repeated until every inch of the wafer is exposed.

    8. The masked laser bakes the photoresist layer in a pattern the engineers designed.

    9. The wafer is put in a chemical bath that washes away everything the laser touched (as per the mask), or the other way around, depending on the process

    10. You now have layer 1 created on the wafer.

    11. The process is repeated for layer 2, except this time, instead of removing material or doping, metal ion beams are used. This makes the 'wires'

    12. Rinse & repeat. Literally. Until all layers are finished.


    Now you may have noticed that every chip on a wafer is made at the exact same time. This means that the cost of production is per wafer rather than per chip, which is why manufacturers strive for smaller die (chip) sizes. If you can fit more chips onto a wafer, you get a higher yield. Higher yield = cheaper production and low-cost chips.

    However, smaller components often have higher error rates (a speck of dust is more likely to hit a critical feature).

    In the end, nearly 30% of the chips manufactured have to be discarded right from the start. Another 10-15% typically do not meet specifications.

    Of the total yield, they then divide the chips depending on how well they perform. Those that have bad sections have those sections turned off and are shipped as low-end models (Celeron, Duron, etc.).

    Pretty much every chip in a line is exactly the same physically. Just like people, some do not reach their full potential.

    This is a bit of a condensed version and isn't exactly how it happens, but you get the idea. Hopefully this will help you guys later on in your semiconductor discussions.
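The per-wafer economics in the walkthrough above can be sketched with made-up numbers (the wafer cost, die sizes and yield rate are all assumptions):

```python
import math

WAFER_COST = 4000.0               # assumed cost per processed wafer, $
WAFER_AREA = math.pi * 15.0 ** 2  # 300 mm wafer -> ~706 cm^2

def cost_per_good_die(die_area_cm2, yield_rate):
    """Naive dies-per-wafer model, ignoring edge losses."""
    dies = int(WAFER_AREA // die_area_cm2)
    good = int(dies * yield_rate)
    return WAFER_COST / good

big_die = cost_per_good_die(2.0, 0.70)    # 2 cm^2 die
small_die = cost_per_good_die(1.0, 0.70)  # half-size die, same yield rate

# Shrinking the die roughly halves the cost of each good chip:
print(small_die < big_die)  # True
```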
  • Hawkeye Join Date: 2002-10-31, Member: 1855, Members
    On the contrary, I heard something different about the source of Celeron chips (and other low-end models).

    The Celeron is a marketing ploy to let people who don't have as much money still buy from them. A Celeron chip is simply a higher-end chip that has been underclocked to speeds less than normal. The early Celeron chips could be overclocked quite a bit (for this very reason). Intel obviously saw this as a problem, so they took measures to prevent overclocking.

    This is for Intel's convenience: they don't have to keep some factories producing lesser chips and others producing higher-end chips. They upgrade all the factories to the best design and downgrade some chips to be sold at a cheaper price. I don't think these chips have defects, or at least if they do, they are very subtle and wouldn't affect the speed of the processor much, if at all.
  • Dukem Join Date: 2003-04-06, Member: 15246, Banned
    Actually, I saw a TV programme on this last year. Apparently there are really disturbing consequences. It's been predicted that unless engineers can find a manageable way of implementing carbon for processor purposes within the next decade, it could plunge the world into a scientific dark age. The world economy would probably begin to stagnate: there would be very little advantage in investing in new technologies without the adequate processing power to make maximum use of them. It could also have really bizarre implications: PC prices would probably lock, and some businesses would quickly go out of business. It would quickly put an end to ground-breaking research across the scientific community.

    It looked promising for a while, as one scientist managed to get carbon to exhibit certain microprocessor qualities... truth is he was making the whole lot up, speaking complete ****, and now the whole field has been put back at least 5 years.

    So for all of you who are worried about climate change: stop! This is the biggest problem...
  • TheWizard Join Date: 2002-12-11, Member: 10553, Members, Constellation
    "Everything that can be invented has been invented."

    -Charles H. Duell, Commissioner, U.S. Office of Patents, 1899.