Is Moore's Law Still Relevant?
Intel's new tri-gate transistor, or 3D transistor, is already praised as the most significant innovation for microprocessor architectures in years.
It will bring more processing speed, lower power consumption and, as a result, greater computing efficiency on a wide scale. At the core of all of this is Moore's Law, which now looks set to be upheld for a few more years. But is it still relevant?
Moore's Law, an observation described by Intel co-founder Gordon Moore in a 1965 paper published in the 35th anniversary edition of Electronics magazine, comes up every couple of years, usually when Intel introduces another new manufacturing process and shrinks transistor sizes. Each time, Moore's Law is misquoted and interpreted in countless ways. We are now used to a scenario in which this observation has turned into a law of nature for the chip manufacturing industry and appears to have become the major force driving semiconductor innovation.
Moore's Law and a prediction 46 years ago
If you spend half an hour reading and thinking about Moore's paper and today's Moore's Law discussion, you might come across some interesting implications. Moore's observation does not imply that microprocessors will accelerate by a factor of two every 18 to 24 months. Moore discussed the performance gains of a processor only very briefly, but he also discussed the reduction of manufacturing cost over (5-year) intervals. There is no direct claim that processing speed will double every 2 years. What Moore observed, however, is that the transistor count roughly doubles in 2-year intervals - and if we are picky about his claims, then it is clear that Moore also suggested that there is a time limit to this trend and that it will slow down over time.
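For a sense of scale, here is a quick back-of-the-envelope sketch of what a fixed two-year doubling implies over four decades (the 1971 figure of 2,300 transistors for Intel's 4004 is an assumption used purely as a starting point, not something taken from Moore's paper):

```python
# Back-of-the-envelope illustration of "transistor count doubles every two years".
# Assumed baseline: Intel 4004 with roughly 2,300 transistors in 1971.
BASE_YEAR, BASE_COUNT = 1971, 2300

def projected_transistors(year, doubling_period_years=2):
    """Transistor count implied by a fixed doubling period."""
    doublings = (year - BASE_YEAR) / doubling_period_years
    return BASE_COUNT * 2 ** doublings

for year in (1981, 1991, 2001, 2011):
    print(year, f"{projected_transistors(year):,.0f}")
# 2011 comes out near 2.4 billion transistors, which is the right
# order of magnitude for the largest chips shipping around that time.
```

The point of the exercise is simply that the observation is about count, not clock speed: nothing in that exponential says anything about how fast those transistors switch.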
Moore's famous observation chart extends only to 1975, but carries the notion that there is no visible end in sight. In 2007, when we were about to get the first 45 nm processors, Moore was quoted as saying that he expected his observation to hold for another 15 years, into the 2020 - 2022 time frame, before hitting a definite wall. However, he said much the same in 2001, shortly before 90 nm CPUs arrived in 2003, and it appears that the 15-year prediction can be pushed out in 2-year intervals. Let me just note that Moore's paper is not only about transistor count. It also includes predictions that chips could be produced at 100% yield rates and that a substantial heat problem would creep up on denser chip structures (something that forced Intel to abandon its Netburst architecture, used for the Pentium 4 generation from 1999 to 2005). He also predicted a manufacturing limit to two-dimensional chip structures, implying that the industry would have to go 3D at some point - which appears to be happening in 2011/2012, at least as far as Intel is concerned. I am actually somewhat surprised that Intel did not point out this prediction during the 22 nm announcement.
Benefits
Over time, Moore's Law has turned into a guideline for the IT industry, a guideline that cannot be broken. It is widely credited with enabling affordable computers that can run virtually every application you would want to run. However, Moore's Law has also been somewhat abused as a marketing tool to justify new processors and to force innovation into a tight pair of shoes that was not always the best fit, such as Intel's Netburst products, which turned out to be a dead end and almost brought the company to its knees. Transistor size and count is one component of innovation, but it is not the only one and probably no longer the key component in enabling new and intelligent semiconductors.
More transistors are likely to enable more features, and we are in a period that requires a pile of new features to cope with the current trends in computing. Processors need to become more secure, they need to get better at efficiently identifying threats in cloud computing environments (which would include your usage of online services), and they need to become vastly more scalable as we move toward heterogeneous processor architectures. Transistor count is an enabler of these features, but I believe we give Moore's Law more credit today than we should, and it occasionally directs our attention in the wrong direction.
Market forces
Competitive forces are likely driving Intel's R&D in particular much more than Moore's Law is. Over the past two decades, it was AMD that Intel had to deal with. Intel has always considered its production capabilities and manufacturing process to be its key competitive advantage, and it is unlikely to give that up anytime soon. Now Intel is dealing with ARM and an army of chip designers such as Qualcomm, Samsung, Nvidia, TI and possibly AMD as well, which own the mobile space Intel so desperately wants a part of. Once again, it will be a new manufacturing process that serves as Intel's major weapon in this fight. Once again, it is competition that drives transistor sizes down, not necessarily Moore's Law (which, however, will be upheld by Intel with a late 2011 release of these processors).
Of course, there is the question of the relevance of Moore's Law today. Do you really care whether Intel can keep this 2-year cycle going for another 10 years? Probably not - and (enthusiast) consumers probably care less than they did 10 years ago. There are plenty of examples in Intel's manufacturing history showing that a processor is about much more than transistor count. If you have been around for some time, you may remember that Intel's former CTO Pat Gelsinger predicted in 2001 that we would be using 30 GHz processors by 2010, processors that would need cooling techniques similar to those used in nuclear power plants. By 2005, the company had hit a wall just under 4 GHz and completely changed its approach and chip design to cope with power consumption and leakage current. These are the milestones that change trends and deliver the true innovation we are benefiting from. Envisioning the feature set and capability of a processor in terms of our ability to squeeze more intelligence into every single transistor, rather than just squeezing more transistors into a certain area, is a radically different approach.
The tri-gate transistor will again enable Intel to double the transistor count, by building 3D structures and fitting more transistors into the same area. You may argue that Intel is cheating a bit, since Moore's paper could be read as referring only to 2D structures and the planar area they occupy. But honestly: who really cares? If there are more transistors, it is good for everyone and likely to make processors better. However, the fact that the transistor count is doubling may be rather irrelevant today - which does not discount Moore's observation. We may simply be too obsessed with keeping this observation alive.
Zeh: I believe this tri-gate technology is getting a bit overestimated, although I truly hope I'm wrong.
jprahman: I doubt it's overestimated. If the claims that Intel makes are true, then it will be a pretty significant improvement, although the biggest gains will likely be in terms of reduced power consumption. I doubt performance will rise massively, because architectural improvements, not improved transistor designs, are what bring about the greatest gains in performance; still, the higher clock speeds that tri-gate transistors enable will have some impact on performance.
silversurfernhs: I think the new smartphones, netbooks, SSDs and tablets threw a wrench into it...
tommysch, replying to cbrownx88's "As long as it can play Crysis and fit in my pocket?": The real benchmark is: can it play Crysis at 5760*1080 in 3D?
cyprod: My question is, has Moore's Law ever been relevant? Granted, I do software, but I play at all levels from firmware on up, and at no point in my day does the question of how many transistors something has ever come up. The hardware people never mention it either. The things that matter are die size, power consumption and the functionality provided. Transistor count seems more of a pissing match than anything. I mean, being a software guy, I could code algorithm X in 10 clever lines of code, or I could do it in 50 lines without much issue. I could probably stretch it to 100 lines if I wanted to. Transistor count is similar. Who cares how many transistors there are? It's just an interesting statistic, nothing more, nothing less.
NightLight: I think this is a significant step forward! It's like adding another story to your house. This invention will be around for years to come. Well done, Intel.
fazers_on_stun: First of all, I disagree with the statement that Netburst was a 'dead end' - the architecture incorporated many novel ideas which are now being recycled in Sandy Bridge and Bulldozer. While it was not as good performance-wise as AMD's K8 (due to too much leakage, so the clocks never got up to '10 GHz', plus the branch prediction was not up to snuff for the 32-stage pipeline), it was ahead of its time. I suggest reading Kanter's analyses of the Sandy Bridge architecture http://www.realworldtech.com/page.cfm?ArticleID=RWT091810191937&p=1 and the Bulldozer architecture http://www.realworldtech.com/page.cfm?ArticleID=RWT082610181333&p=1 for more information.
Second, transistor count does correlate with capability. Sure, you could have some lousy design where a particular circuit uses 10,000 transistors while a good design uses only 2,000, same as in software coding. But assuming a similar degree of optimization, a CPU with 1 billion transistors will usually outperform (or be more capable than) one with a mere 500 million of the same transistors (i.e., same process node and characteristics). For example, in heavily threaded loads a quad-core will usually outperform a dual-core.
However, I do think that Moore's Law (process shrinking) is in the realm of diminishing returns, as are core count increases and architectural changes. What we need is some breakthrough, out-of-the-box design - graphene transistors, quantum computing, neural networks, even true ternary logic instead of binary. Trouble is, binary silicon designs are where all the expertise and money is, until they butt up against a dead end.
Intel has been making pathetic AMD-style CPUs for a few years. I had a 3.4 GHz Pentium 4 six years ago. Now I have just bought an i7 2600K, which is a stupid Athlon-style name for a 3.8 GHz i7... just 400 MHz faster than what I had six years ago. I also had a pathetic 3.33 GHz Core 2 Duo E8600. It was slower than my P4, but just a little bit, and I bought it because of new motherboard features. Where are the times when CPUs were known by their speed: 386DX 40 MHz, 486DX2 66 MHz, Pentium 100 MHz, Pentium 200 MMX, Pentium II 300 MHz, Pentium III 450 MHz, Pentium III 700 MHz, Pentium 4 1.6 GHz, Pentium 4 2.8 GHz and finally Pentium 4 3.4 GHz... then stupid consumers started complaining about power consumption. I wish Nvidia would start making x86 CPUs; they would make a 20 GHz, 1 kW TDP CPU, just like they do their GeForce GPUs.