The End Of Semiconductor Development
Dr_LEE7
Join Date: 2004-10-15 Member: 32265 (Banned)
in Discussions
<div class="IPBDescription">its gonna come soon</div> According to Moore's Law, the transistor density on integrated circuits doubles roughly every two years (Moore originally said every year). Over the years the usual figure quoted has settled at every 18 to 24 months, and I would say it is going even slower than that with the latest Pentium 4 processors.
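To see how sensitive these projections are to the assumed doubling period, here is a minimal sketch; the starting transistor count and time spans are illustrative assumptions, not exact product figures:

```python
# Sketch: how transistor counts compound under a fixed doubling period.
# Starting count and time spans are illustrative assumptions.
def project(transistors, years, doubling_months):
    """Transistor count after `years`, doubling every `doubling_months`."""
    doublings = (years * 12) / doubling_months
    return transistors * 2 ** doublings

start = 50e6  # roughly a 2004-era CPU
print(f"18-month doubling, 10 years: {project(start, 10, 18):.2e}")
print(f"24-month doubling, 10 years: {project(start, 10, 24):.2e}")
```

Even a 6-month difference in the doubling period changes the 10-year projection by more than a factor of 3, which is why arguing about the exact pace matters.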
Currently, processors use 90nm and 130nm processes to make CPUs with 50 million to 110 million transistors. As we can see, Intel is already having problems with its Pentium 4s, and they are not expected to cross the 4 GHz barrier with that chip. They are now working on making processors with dual cores.
It is said that the theoretical limit for microprocessors is a 65nm process, with at most about 18 billion transistors, and that microprocessors are supposed to reach that limit somewhere between 2010 and 2020.
Do you think we will soon reach the theoretical limit for semiconductor development? What other technologies do you think we will move to if that happens? Is there a possibility of 3-dimensional semiconductors using silicon wafers? The day when microprocessors reach their theoretical limits is inevitable. What is your opinion on the subject?
Comments
The whole Moore's "Law" thing is junk anyway. So I consider it an extremely good thing that Intel has finally stopped trying to play into it.
By the time 2020 comes about we'll almost certainly have migrated to another medium anyway. Optical I think is about 15 years away, for example.
Or quantum computers, based off of the electron configurations of atoms. I'm not sure how far along we are with it right now, but I'm sure in 50 years or so we'll at least be able to use it for minor calculations of some sort.
I was thinking the other day about why, aside from power output, we don't use more than binary transistors. I mean, the more states your switch needs to distinguish, the more power it uses, but it could increase computing speed by not needing as many transistors. It would probably take some intense hardware to actually use this too, but I think you could get up to a 10-way transistor or something.
Of course, I don't really have any factual information and it's a bit late/early to be looking it up at the moment for me.
[for which I apologize :P]
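The multi-level-switch musing above can be made concrete with a bit of information-theory arithmetic; a sketch, assuming ideal noise-free signals:

```python
import math

# A wire carrying `levels` distinguishable voltage levels encodes
# log2(levels) bits, so a multi-level "switch" needs fewer wires for
# the same information, at the cost of noise margin and power.
def bits_per_wire(levels):
    return math.log2(levels)

def wires_needed(total_bits, levels):
    return math.ceil(total_bits / bits_per_wire(levels))

print(wires_needed(64, 2))   # binary: 64 wires for 64 bits
print(wires_needed(64, 10))  # a 10-way switch: 20 wires suffice
```

The catch, as the post says, is that squeezing 10 distinguishable levels into the same voltage swing shrinks the noise margin dramatically.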
At the point where there is absolutely nowhere to go in terms of improvements to computers, it won't be our biggest concern any more by a long shot... Like you say, there are still tonnes of theories that can still work out, so we're not done yet. Not even close.
edit: some articles from the inq that may be interesting.
<a href='http://www.theinquirer.net/?article=19105' target='_blank'>The Roadmap to Recovery: Part I</a>
<a href='http://www.theinquirer.net/?article=19110' target='_blank'>The Roadmap to Recovery II</a>
good articles, enjoyed reading them.
Yes, Intel is having problems like never before. If AMD gets past the 4 GHz (4000+) barrier, Intel is gonna have a serious problem. I guess Intel deserves it, given how they jerked everyone around during the years and years they owned the market.
I think moore's law is generally true, it will just get slower and slower every few years as we get closer to the theoretical limit.
Other things that are really starting to make me mad are the slow development of hard-drive transfer rates, optical drives, and reliable operating systems. What's up with Microsoft anyway? They should be developing much more intelligent operating systems; they own 95% of the market, for god's sake!!!
Oh yeah, I believe electricity moves at about 1/10 the speed of light, headcrab; I think I read that somewhere a few years back.
I heard a couple of days ago that electricity can travel at 0.8c within copper.
Sounds fast, but that works out to only about 12 GHz for a signal crossing a 1cm chip and back, assuming zero resistance =/
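A back-of-envelope check of that figure, using the 0.8c and 1cm numbers from the posts above; the "one round trip per cycle" constraint is the simplifying assumption:

```python
# Rough ceiling on clock rate if a signal must cross the die and back
# within one cycle. Propagation speed and die size are assumptions.
C = 2.998e8              # speed of light in vacuum, m/s
signal_speed = 0.8 * C   # claimed propagation speed in copper
die_width = 0.01         # 1 cm die, in metres

round_trip_time = 2 * die_width / signal_speed
max_clock_hz = 1 / round_trip_time
print(f"{max_clock_hz / 1e9:.1f} GHz")  # about 12 GHz
```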
After silicon gets dragged out for a little longer, into dual cores and such, I think we are going to move to diamond cores. There was an article in Wired a few months back about how we can build diamond chips for about the same price as current processors, and that they can run at 400 degrees Celsius instead of 80, meaning they could have activity rates up to about 700 GHz.
yeah, seven hundred.
What's really the issue here? Necessity and convenience is the mother of invention, as juice said, enjoy the ride.
There is a definite problem with clock speeds. They keep making the clock faster and faster, when in fact that is about the stupidest way to make a processor faster. The clock only regulates when the pipeline takes actions; the TIME those actions take depends on the task being done, not the clock speed. Multiplications take longer than additions. If a multiplication takes 3 cycles and an addition takes 1, and you double the clock speed by deepening the pipeline, you can end up needing 6 of the shorter cycles for the multiplication and 2 for the addition. Means nothing. Clock speed alone is a worthless way of accounting for processor speed.
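A toy illustration of that point: if reaching a higher clock means every operation needs proportionally more of the (shorter) cycles, wall-clock time does not improve. The cycle counts here are hypothetical:

```python
# Wall-clock time of an operation: cycles divided by clock frequency.
def op_time_ns(clock_ghz, cycles):
    return cycles / clock_ghz

# Original design: 1 GHz, multiply takes 3 cycles.
# "Faster" design: 2 GHz, but a deeper pipeline makes multiply 6 cycles.
print(op_time_ns(1, 3))  # 3.0 ns
print(op_time_ns(2, 6))  # 3.0 ns, no real gain
```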
There is also what's called "MIPS", or millions of instructions per second, jokingly known as "meaningless indication of processor speed" by those in the computer science field. "Instruction" is a loose way of describing a step the processor takes to compute something. By executing more instructions, you aren't necessarily getting any more real work done, considering that some tasks require more instructions than others (depending on the architecture used by the processor). Some processors believe in doing small fast instructions, and others believe in doing large slow instructions. The instructions-per-second count looks higher for small fast instructions, but about the same amount of work gets done (relatively).
Anyway, back to the topic. I hear they are really having trouble carrying the clock signals across the die, so they are having to insert "delays" in places to wait extra long for signals to reach them.
Honestly, I think if they keep this up, they'll end up getting a synchronous asynchronous processor (try saying that 5 times fast).
I'll post more later. But let me tell you, FUNKY stuff happens when you get below 100 nanometers.
I had a transistor act funky once because it was 1 atom thinner than the others. We couldn't figure out what was wrong with it, because it behaved perfectly except that some characteristics were off by a factor of 1/4, which wouldn't happen if it were simply improperly made.
If you guys want some more food for your debate. Look up finFETs. Neat stuff.
Already in the late 1970s, Richard Feynman, one of the greatest physicists, worked out the theoretical minimum energy needed for computing. Astonishingly, for reversible computation this limit <i>does not exist</i>. Neither does quantum behaviour impose any limit. The only real limit is thus simply the space needed.
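For context, the related floor for *irreversible* computing (Landauer's bound, which the reversible schemes alluded to above evade) is tiny but nonzero; a quick sketch at room temperature:

```python
import math

# Landauer's bound: irreversibly erasing one bit dissipates at least
# k*T*ln(2). Reversible computation has no such floor.
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

e_per_bit = k_B * T * math.log(2)
print(f"{e_per_bit:.2e} J per erased bit")  # about 2.87e-21 J
```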
But there does indeed exist a problem with silicon technology. The solution lies in new technologies, such as the carbon nanotubes being developed right now, which could have a great impact on progress in this area. Transistors consisting of nanotubes are supposed to be possible very soon. I guess you can read up on them in Scientific American.
32^64 possible states. You could send entire blocks of memory to the processor that way.
Other directions include cellular processors: each processor computing individually and sharing with its neighbors. Von Neumann did some incredible stuff with these things; you can simulate any behavior.
Also, I don't know how quantum computing works, but in theory it lets 2^x algorithms be done in x time. Am I right about that? I really don't know.
BTW, illuminex, if you check above you'll see that Intel is now getting out of that marketing game (but mostly because it isn't working anymore).
Let us say we have a 1 GHz clock speed. That is the same as 1 tick every nanosecond. Think about how far light can travel in 1 nanosecond: about 30cm (1 foot). In wires, electricity is slower and only covers about 60-70% of that distance.
At 1ns you have plenty of time to let a signal propagate across the die. However, as you increase clock speeds you also decrease the amount of time between pulses. This means that eventually a clock pulse will only be halfway across the chip as the next pulse is starting.
imagine the following model where the dashes indicate the longest distance across a die.
1ns or 1GHz:
Pulse2----------------------------------<span style='color:purple'>.......................................</span>Pulse 1
Here we see that the first pulse can easily travel the length of the die before pulse 2 begins. However, let us look at this example:
0.25ns or 4GHz:
Pulse2----------------Pulse1----------
Notice that Pulse1 has not crossed the entire chip before the second pulse starts. You don't need to be an engineer to imagine how many problems this will cause.
Don't know why I posted this information but it is good fodder for the thread.
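The pulse diagrams above can be put into numbers. The propagation speed and the worst-case routed wire length are assumptions (on-chip wires snake around, so the path can be much longer than the die edge):

```python
# Does a clock edge clear the worst-case wire before the next edge
# launches? A ratio above 1 means pulses overlap in flight.
signal_speed = 2.0e8   # ~0.67c in on-chip wiring, m/s (assumed)
path_length = 0.06     # worst-case routed wire length, m (assumed)

def pulses_in_flight(clock_hz):
    travel_time = path_length / signal_speed
    period = 1.0 / clock_hz
    return travel_time / period

print(pulses_in_flight(1e9))  # ~0.3 at 1 GHz: plenty of slack
print(pulses_in_flight(4e9))  # ~1.2 at 4 GHz: the next pulse launches early
```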
And even weirder stuff happens at picometers... The Heisenberg uncertainty principle makes a huge difference when the mass gets really tiny. ;) (/me turns off my mass and clips through the wall)
As far as reaching the theoretical limits: yes, it will happen, but then new designs will develop and we may see the processor begin to grow. We have much to learn from neurons; the transistor may not necessarily be the ultimate design.
You don't have to get down to the picometer level to notice this.
It has to be taken into account when you are manufacturing the chips. You don't actually lay wires onto a circuit board and get a chip; you build up layers on a substrate by several methods.
One method, best described as an ion gun, is used; instead of rectangular-looking components, you get what looks like a bell curve if you take a cross-section of your chip.
It gets really funky when your ion beam is 50 microns wide and you are trying to implement 45 micron components ;)
Here is a picture of an inverter at the material level. In general, the smaller you get, the more rounded each of those blocks gets. Rounding is bad, as your transistors will not behave as you expect them to. (While rounded components might be nice in some situations, they are pretty much impossible to implement with any degree of certainty.)<img src='http://www.cse.psu.edu/~cg477/fa04/max_tutorial_files/image005.gif' border='0' alt='user posted image' />
Are you an ECE or EE major btw?
Your information also reiterates the difficulty of making such a small electrical system. (not to mention the fact that like you said, electrons only travel a certain range of distance in a given period of time)
Do you think they could come up with a superconducting transistor? I mean, n- and p-type silicon may be impossible when you get down to a few atoms. I heard somewhere that carbon nanotubes were being experimented with.
Yep, it's true: AMD has reached the 4000+ mark, so it doesn't look good for Intel.
<a href='http://www.tomshardware.com/hardnews/20041019_133250.html' target='_blank'>AMD 4000+</a>
Maybe AMD will get a chance to be king of the hill for once. I remember AMD beating Intel for about a month before Intel came out with the Pentium 2, and once again when the 450 MHz and 500 MHz Athlons came out.
I think this time AMD will be the performance leader for at least a year and a half.
Intel should never have tied their early Pentium 4s to Rambus; they should have just developed a better processor, not one with more and more instructions.
I wonder what happened to Digital. Are they still in business? Back in the days of the Pentium 2, they had a CPU that was 2x faster. I wonder if they are still around.
I did my undergrad in Computer Engineering.
While resistance is a problem as you decrease in scale, the real trick is being able to fabricate something so small. As I mentioned the ion beams used to dope the materials are larger than the object they are trying to create. You have to worry about capacitance between wires and all sorts of other problems that do not occur on a macro scale.
Do i think we will reach a limit? Possibly. There is a limit with every technology developed. However that just means a new technology has to be implemented. We were using vacuum tubes some 40-50 years ago. Those certainly had a limit.
You also have to realize that the transistor count on a chip is relatively meaningless. The technology and design of the chip is really what drives the 'power' of a processor; the raw transistor count is outdated and irrelevant. When circuits are designed, large areas of the die are prepared to accept additional transistors in case the fabrication process has problems. This prevents a company from losing $1-4 million if they forgot a FET somewhere: they can just alter the upper masks and continue with fabrication. Many chips are also designed with redundancy in their circuits, so the chip can detect when errors are occurring and switch to a secondary unit or block off part of the memory.
This is a shot-in-the-dark example, but here goes:
Chip A is working fine, but then begins to notice that memory addresses AA34-B5B7 are no longer responding. The chip decides to mark this section as damaged. Rather than operate at a reduced capacity the chip was designed with extra memory that can be enabled in case of a failure like this. The chip then turns on this backup section of the circuit and operates as if there was no bad memory on the chip.
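That failover scheme can be sketched as a toy model; the block counts, addresses, and names here are all made up for illustration:

```python
# Toy model of on-chip memory redundancy: a block that fails self-test
# is remapped to a spare, and reads/writes carry on transparently.
class RedundantMemory:
    def __init__(self, blocks, spares):
        self.store = {}
        self.remap = {}  # bad block -> spare block
        self.free_spares = list(range(blocks, blocks + spares))

    def mark_bad(self, block):
        if block not in self.remap:
            self.remap[block] = self.free_spares.pop(0)

    def write(self, block, value):
        self.store[self.remap.get(block, block)] = value

    def read(self, block):
        return self.store[self.remap.get(block, block)]

mem = RedundantMemory(blocks=8, spares=2)
mem.mark_bad(3)       # block 3 failed self-test
mem.write(3, "data")  # transparently lands in a spare block
print(mem.read(3))    # prints "data", as if nothing was wrong
```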
This means that a designer will put extra transistors and memory banks onto a chip if they have the room. The cost of creating a chip is not in the complexity of the chip but rather its size. Because all chips on a wafer are manufactured simultaneously the cost is per wafer rather than per chip.
So there is a bit of a balance between die size and circuit area. However if a designer has any available space on a chip they will utilize it as it will not affect the final cost.
If they can manage to squeeze more memory onto a chip, heck, they'd use it. The chips that don't work when tested on the assembly line are thrown out like junk. However, the manufacturers know that size is precious and speed is what gives the profits in markets like these. Besides, only a small percentage of bad chips are due to bad memory locations; the cause is typically some unforeseen contact between wires in one of the layers of the chip.
But wizard, I agree with everything you said about the clock speeds. Watch Intel try to push the clock speeds harder and faster. Don't be fooled though: higher clock speeds probably won't be as successful as better chip designs. We're reaching the point where the way to improve chip speed isn't tampering with common instructions, but adding hardware to make less common instructions go faster. MMX was probably the first instance of this in Intel chips, adding support for media-type work.
Now excuse me, but I really should get back to studying for my comp sci test. :D
A good way to think about chip design is in layers. One material placed on top of another. It is not unlike how your tv remote works with the layer of rubber placed on top of the sensors. When they touch it makes a connection.
Here is the basic process of how a chip is made.
1. Engineers design the chip according to Design Rules from the foundry
2. This design is passed to the foundry where a mask is made.
-The mask is like a lens in which each layer of the design is etched
3. A pure silicon seed crystal is lowered into a vat of molten silicon
-The seed crystal is withdrawn at a constant speed, temperature, and pressure.
-This forms a long cylinder of one pure silicon crystal (up to 10 feet tall)
4. The silicon crystal is sliced into wafers
5. An oxide layer is grown on top of the wafers. This provides the substrate.
6. A photoresist layer is placed on the wafer.
7. A laser is directed through the first mask and onto the wafer. This is repeated until every part of the wafer has been exposed.
8. The masked laser bakes the photoresist layer into the pattern the engineers designed.
9. The wafer is put in a chemical that washes away everything the laser touched, or everything it didn't, depending on whether the photoresist is positive or negative.
10. You now have layer 1 created on the wafer.
11. The process is repeated for layer 2, except this time, instead of removing material or doping, metal ion beams are used. This makes the 'wires'.
12. Rinse & repeat. Literally. Until all layers are finished.
Now you may have noticed that every chip on a wafer is made at the exact same time. This means that the cost of production is per wafer rather than per chip. This is why manufacturers strive for lower die (chip) sizes. If you can fit more chips onto a wafer then you get a higher yield. Higher yield = cheaper production and low cost chips.
However, smaller components have higher error rates (a speck of dust is more likely to wipe out a critical feature).
In the end nearly 30% of the chips manufactured have to be discarded right off the start. Another 10-15% typically do not meet specifications.
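Putting the per-wafer economics above into numbers; every figure here is an illustrative assumption, not real fab data:

```python
# Back-of-envelope: cost per *good* die when the wafer, not the chip,
# is the unit of production cost.
wafer_cost = 3000.0     # dollars per processed wafer (assumed)
dies_per_wafer = 200    # assumed
fab_discard = 0.30      # ~30% fail outright
out_of_spec = 0.125     # another 10-15% miss specifications

good_dies = dies_per_wafer * (1 - fab_discard) * (1 - out_of_spec)
print(f"good dies per wafer: {good_dies:.1f}")  # ~122.5
print(f"cost per good die: ${wafer_cost / good_dies:.2f}")
```

Shrinking the die so more chips fit on the wafer raises `dies_per_wafer` directly, which is exactly why manufacturers chase smaller dies.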
Of the total yield, they then divide the chips depending on how well they perform. Those that have bad sections have those sections turned off and are shipped as low-end models (Celeron, Duron, etc.).
Pretty much every chip in a line is exactly the same physically. Just like people, some do not reach their full potential.
This is a bit of a condensed version and isn't exactly how it happens but you get the idea. Hopefully this will help you guys later on in your semiconductor discussions.
Celeron is a marketing ploy to let people that don't have as much money still buy from them. A Celeron chip is simply a higher-end chip that has been "underclocked" to speeds less than normal. The early Celeron chips could be overclocked quite a bit (for this very reason). Intel obviously saw this as a problem, so they took measures to prevent them from being overclocked.
This is for the convenience of Intel to not have to keep some factories outputting lesser chips and other factories outputting higher chips. This way, they upgrade all the factories to the best design, and downgrade some chips to be sold at a cheaper price. I don't think these chips have problems on them, or at least if they do, they are very subtle and wouldn't conflict with the speed of the processor much if at all.
It looked promising for a while, as one scientist managed to get carbon to exhibit certain microprocessor qualities... truth is he was making the whole lot up, speaking complete ****, and now the whole field has been set back at least 5 years.
So for all of you who are worried about climate change - stop! This is the biggest problem......
"Everything that can be invented has been invented."
-Charles H. Duell, Commissioner, U.S. Office of Patents, 1899.