TSMC Contracted as 64-Bit ARM Foundry for Next-gen Chips
ARM is preparing the production ecosystem for its first 64-bit processors.
ARM and TSMC have entered into a "multi-year agreement" that "extends beyond 20 nm technology" to enable the production of next-gen ARMv8 processors built with FinFET transistors, leveraging ARM's Artisan IP, which currently covers a production process range from 250 nm down to 20 nm.
"By working closely with TSMC, we are able to leverage TSMC's ability to quickly ramp volume production of highly integrated SoCs in advanced silicon process technology," said Simon Segars, executive vice president and general manager, Processor and Physical IP Divisions, ARM. "The ongoing deep collaboration with TSMC provides customers earlier access to FinFET technology to bring high-performance, power-efficient products to market."
ARM unveiled its 64-bit architecture in October of last year and said that ARMv8 chips will be targeted at consumer and enterprise markets. The first processors are expected to be announced later this year, while prototype servers running the processors are unlikely to surface until early 2014. ARM said that TSMC's FinFET process "promises impressive speed and power improvements as well as leakage reduction", which the company hopes will help solve the problem of scaling SoCs in high-volume production.
64-bit doesn't mean x86_64. And ARM isn't based on x86.
ARM isn't based on x86, but they have implemented a 64-bit architecture in their newest designs, which is half of the point of the article. This could spawn an Atom direct competitor in the mobile space (beyond phones/tablets).
64-bit doesn't mean x86_64. And ARM isn't based on x86.
32/64 bit is not an architecture the way x86 is. The x86 name comes from the Intel 8086, which, if I remember correctly, was the first processor to use what was later named the x86 instruction set architecture. All CPU architectures work with data, and that data must be a certain length (or lengths). x86 is the name of an ISA that currently uses either 32-bit or 64-bit code for integer work. A CPU engineer who works or has worked on x86 CPUs could probably describe it better. Regardless, ARM uses 32-bit code for integer work and has been planning a move to 64-bit code; this deal looks like a large commitment to that transition. However, ARM is not using an x86 ISA. ARM is a RISC design and x86 is a CISC design. They are fairly different, although how different, I'm not sure; that would take a little research that I'm too lazy to do, and I don't think anyone here really cares anyway.
read properly and learn,
It's better to remain silent and be thought ignorant than to prove you are ignorant by typing....
Not everyone is afraid of learning.
Actually, I think it's a lot better to ask questions and learn in the process than to keep silent and assume a person is ignorant overall because they stated something perceived as 'false information'.
Furthermore, degrading someone on their lack of information is utterly pointless and counterproductive.
Not likely IMO. ARM doesn't scale upward that far very well. It's better at dropping power consumption while making decent performance strides than at building something big enough to compete with desktop chips, or even with the mobile x86 CPUs that aren't junk like Atoms or Brazos APUs. By the time it's competitive with full CPUs such as today's Pentiums and A-series APUs, the desktop/mobile x86-based machines will be far ahead of it again. Being 64-bit does not suddenly make it several times faster than it is as 32-bit.
But how long does it take to research ARM vs. x86-64 technology? Five minutes flat to get basic knowledge. I'll even give you a hint: Google and Wikipedia....
We don't need to know how they are made; that's for computer scientists. But surely the Internet is all about learning? If you can find Tom's website, then I assume you can search Google?
Asking questions should always be secondary to doing your own research.
Knowledge gained by researching something yourself is more likely to stick.
Then you cannot be a successful person. You get more and more information by asking.
Perhaps you'd benefit from searching up English typography rules on the internet. Just saying.
Probably not in the near future, except maybe one or two such models. Last I checked, getting past 4 GB of RAM isn't really a high priority, especially when 1 GB or 1.5 GB is already more than enough for almost all situations.
32 and 64 bits refer to the word size; a word is the amount of data a processor (or an instruction) can handle as a discrete unit. In theory, going from a 32-bit to a 64-bit architecture doubles the amount of data each operation can process. By the way, Intel didn't invent the 64-bit processor, nor did AMD; they were just the first to manufacture one for the PC, and 64-bit CPUs have been around since the '70s. ARM is not based on the x86 architecture; it is a RISC architecture, as its name states: Advanced RISC Machine. The x86 name is derived from the Intel products that brought the architecture to a wide audience, the 8086, 80286, 80386, and 80486, all of which trace their lineage back to Intel's 8080 processor.
It's not as simple as you'd like it to be. ARM is not an architecture that can scale upwards in both power consumption and performance very well for a variety of reasons. I could look up exact reasons if you want me to, it's been a while and I don't remember them all off the top of my head.
Also, pretty much all x86 CPUs for years have converted x86 instructions into RISC-like micro-ops, except for the Transmeta Crusoe and its successor, which used VLIW instead of RISC, if I remember correctly. Simply being RISC does not mean an architecture scales upward. x86's internal RISC implementation, unlike ARM's, has many different kinds of execution units running different code and is much more expandable. ARM probably has a much smaller number of distinct execution units, maybe little more than one set for integer work and another for FPU work; like I said, I'd have to refresh my memory on the specifics. Regardless of the why, ARM simply doesn't scale upward in per-core performance well. It might be able to make a decent GPU-like chip by having many cores, but per-core performance simply doesn't scale upward well. ARM would need an overhaul (not that it can't happen), if not an outright successor, to be able to scale upward in per-core performance like the much higher-power x86 CPUs such as i3s, i5s, i7s, Xeons, FX, Athlon IIs, Phenom IIs, Semprons, Opterons, et cetera.
One major drawback is that a RISC instruction set is simple. It lacks the more complex instructions that, when used well, can offset the performance cost of a larger instruction set, especially in high-power CPUs that run many types of complex code. RISC also needs more memory/cache, because operations that CISC encodes in a single instruction have to be carried out with several RISC instructions instead. This, if I remember correctly, is what pushed us toward CISC all those years ago, back when memory was very expensive.
One thing that ARM might be able to do well is, like you mentioned, highly parallel server work and for that, a 64 bit implementation could be quite the milestone. However, like those very powerful ten core Xeons, it would not be ideal for consumer computers.
Also, SPARC, MIPS, and several others are still in business. In fact, there are new SPARC CPUs coming out this year or next, if I remember correctly. IBM also designs CPUs for game consoles and such occasionally, so it's not like they're doing nothing either. MIPS is supposedly getting ready to unveil something this year or next too.
Like I've said before, a CPU engineer might be able to explain it better and correct any mistakes that I might have made (I work much more with GPUs and memory rather than CPUs), but I think that this is quite accurate.