Will we ever have 128-bit processors?

Fixadent

Commendable
Sep 22, 2016
307
0
1,780
We've had 64-bit processors for quite a long time now, but do you think we'll ever have 128-bit CPUs?

And what will be the advantage of that?
 
Solution
The number of "bits" a CPU supports has always referred to the address space the CPU is capable of addressing (i.e., how much memory it can use at any one time). From that perspective, there's really no need to go beyond 64-bit, as we aren't ever going to saturate the full 64-bit address space (16 exabytes).
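A quick sanity check on that figure (an illustrative Python snippet, not part of the original answer; it only does the unit conversion):

```python
# Size of a flat 64-bit byte-addressable space.
ADDRESS_BITS = 64
space_bytes = 2 ** ADDRESS_BITS          # 18,446,744,073,709,551,616 bytes

EIB = 2 ** 60                            # one exbibyte (the binary "exabyte")
print(f"{space_bytes / EIB:.0f} EiB")    # -> 16 EiB (about 18.4 decimal exabytes)
```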
Nice link RCFProd.

One pertinent point:

... these are already served with special purpose instruction sets (like SSE)

https://en.wikipedia.org/wiki/Advanced_Vector_Extensions

Advanced Vector Extensions (AVX) are extensions to the x86 instruction set architecture for microprocessors from Intel and AMD proposed by Intel in March 2008 and first supported by Intel with the Sandy Bridge[1] processor shipping in Q1 2011 and later on by AMD with the Bulldozer[2] processor shipping in Q3 2011. AVX provides new features, new instructions and a new coding scheme.

AVX2 expands most integer commands to 256 bits and introduces FMA. AVX-512 expands AVX to 512-bit support utilizing a new EVEX prefix encoding proposed by Intel in July 2013 and first supported by Intel with the Knights Landing processor scheduled to ship in 2015.

It's important to decide what exactly you want to be more than 64-bit. CPUs are complex, and it's not all or nothing.
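As an illustration of that point (a hypothetical Python sketch, not something from this thread): wide arithmetic does not require wide addressing, because 128-bit operations can be composed from narrower pieces, which is roughly what compilers and runtimes already do on 64-bit hardware. The helper name below is made up for the example.

```python
MASK64 = (1 << 64) - 1
MASK32 = (1 << 32) - 1

def mul_64x64_to_128(a, b):
    """Return the (high, low) 64-bit halves of the 128-bit product a * b,
    using only intermediate values that fit in 64 bits."""
    a_lo, a_hi = a & MASK32, a >> 32
    b_lo, b_hi = b & MASK32, b >> 32
    ll = a_lo * b_lo
    lh = a_lo * b_hi
    hl = a_hi * b_lo
    hh = a_hi * b_hi
    cross = (ll >> 32) + (lh & MASK32) + (hl & MASK32)
    low = (ll & MASK32) | ((cross & MASK32) << 32)
    high = hh + (lh >> 32) + (hl >> 32) + (cross >> 32)
    return high & MASK64, low & MASK64

a, b = 0xDEADBEEFDEADBEEF, 0x123456789ABCDEF0
hi, lo = mul_64x64_to_128(a, b)
assert (hi << 64) | lo == a * b          # matches Python's arbitrary-precision result
```

The same idea is why "needing 128-bit numbers" is not, by itself, a reason to build a 128-bit CPU.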
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


128-bit will allow support for a larger memory subsystem. This requirement is not expected soon, but some groups, such as those behind the RISC-V ISA, already include a 128-bit variant for future needs.

https://en.wikipedia.org/wiki/RISC-V
 

Fixadent

Commendable
Sep 22, 2016
307
0
1,780


A 128-bit CPU can address exabytes of RAM.

The only type of computer that would have or need exabytes of RAM is the world's most powerful supercomputer.

I don't even think we've reached the exaflop or exabyte level yet...
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


As stated in the wiki link, 128-bit support was added to the RISC-V specification for use in a future supercomputer: "At historic rates of growth, it is possible that greater than 64 bits of address space might be required before 2030."

And then it would trickle down to the general-user level later. Remember that a smartphone today has about 100x more memory capacity than older supercomputers like the Cray-1.
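The quoted rationale is just an exponential-growth extrapolation. A rough sketch of that arithmetic in Python (the starting capacity and doubling period below are made-up placeholders, not figures from the RISC-V authors):

```python
def year_reaching_64bit_limit(start_year, start_bytes, doubling_years):
    """Year in which capacity first reaches 2**64 bytes, assuming it doubles
    every `doubling_years` years starting from `start_bytes` in `start_year`."""
    limit = 2 ** 64
    year, capacity = start_year, start_bytes
    while capacity < limit:
        year += doubling_years
        capacity *= 2
    return year

# Purely illustrative inputs: 4 PiB of addressable memory in 2017,
# doubling every two years.
print(year_reaching_64bit_limit(2017, 4 * 2 ** 50, 2))   # -> 2041
```

Faster growth, or counting memory-mapped storage as well as RAM, pulls that date in; slower growth pushes it out.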
 

Fixadent

Commendable
Sep 22, 2016
307
0
1,780


The limitations of Moore's law might make it impossible for such vast amounts of memory to ever make it to personal computers.
 


ARM is more efficient due to being 10x slower. And yes, I've benchmarked. Make an ARM CPU with x86 performance, and you'll end up with a worse version of x86.

x86 is a horrid CPU architecture, but ARM isn't replacing it at the top end.
 


Huh nvm. Looks like quantum and bio computers are the future then.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Not even close. In reality, ARM is already better than x86 because it provides nearly the same performance at a smaller size and lower power.

Gwennap notes, “Apple’s new CPU actually compares better against Intel’s mainstream x86 cores,” claiming that the A10 delivers “nearly identical performance” to Intel’s Skylake processors, primarily due to its high performance Hurricane architecture.



http://www.androidauthority.com/apple-a10-fusion-chip-performance-723918/

One also has to recall Jim Keller's talk about core microarchitectures, where he explained the advantages of the ARM ISA over the x86 ISA and why the efficiency provided by ARM allowed him to make K12 faster than Zen.

One also has to recall the several ARM server SoCs available, which are matching or beating Xeons in raw performance but at lower power and cost:

https://www.top500.org/news/applied-micro-claims-third-generation-arm-chip-ready-to-take-on-intel-xeon/

http://www.hpcuserforum.com/presentations/santafe2014/Broadcom%20Monday%20night.pdf#page=5

What is more, there are persistent rumors that Intel is in a panic because its analysts predict that ARM will replace x86 sooner than expected, and that engineers at Intel are already working on a hybrid CPU that can execute both x86 and ARM code.
 
Not even close. In reality, ARM is already better than x86 because it provides nearly the same performance at a smaller size and lower power.

The A10 isn't anywhere close in performance to a desktop-class x86 CPU. And the ARM SoCs have the benefit of being 32/32 rather than 16/32, a potentially huge edge. On a per-core basis, it's not close.
 

juanrga

Distinguished
BANNED
Mar 19, 2013
5,278
0
17,790


Are you aware that the A10 is used in a phone? There is no way that a 1–2 W dual-core phone chip provides the same raw performance as 65–220 W desktop chips. Do you know of any x86 phone with an i7-7700K inside?

But we can compare phone to notebook:

Apple’s new CPU actually compares better against Intel’s mainstream x86 cores. The current MacBook Air ultrathin notebook, which uses a 2.2GHz Core i7-5650U Broadwell processor, scores about the same as the iPhone 7 on single-core Geekbench...

Note that he is comparing a 2 W chip to a 15 W chip, and still the ARM core provides the same performance as the x86 core.

The ARM ISA is more efficient than the x86 ISA, and this efficiency (in transistor count) can be used in different ways. For high-performance servers:

Moar cores. This is the MACOM approach. They are currently shipping 32-core server CPUs, but a 64-core CPU is in the plans. This is also the Qualcomm approach, with its 48-core chip.

Faster cores. This was the K12 approach. As mentioned by Jim Keller, K12 was wider (and better) than Zen.

A mixture of both. This is the Vulcan approach. The Vulcan core provides 90% of the single-thread performance of a Haswell core, but there are more cores in a Vulcan chip than in any Haswell Xeon.
 

weissschnee

Prominent
Sep 20, 2017
1
0
510


I hope I'm still alive by then. I'm kinda old. Think of the games that can be made with 128 bits. Think of the graphics! The immersion! The sheer realism! Oh, I'm giddy just thinking about it.^_^

 

jvanasselberg

Prominent
Nov 18, 2017
1
0
510


You are so right! WE won't ever need it, but long ago, when there was little life on the planet, there was no need for much memory either. And then came the Cambrian Explosion; with the "IoT Cambrian Explosion" it'll be the same. Instead of millions of years it'll be decades, and they, our cyber children, will need it! Who knows, 128-bit might just be the tip of the iceberg. I do know that humanity won't get to see much of it. Carbon life on this planet is doomed.

 

Karadjgne

Titan
Ambassador
Y'all are thinking small. You are seeing limitations on exactly what a CPU is or can do as it applies to current, or the 'now', needs. If you think back just a few years, a computer was the size of a bedroom, then a desk, then a full-tower AT; now you have HTPCs and the PS4, the size of a college textbook. Just how long before a PC with the power we are used to now comes in a package the size of a coffee cup? How long before the things in Star Trek become reality? How long before there is nanotechnology and CPUs the size of a pinhead?

And 1024bit...

It wasn't all that long ago when ppl saw only a need for the physical connection: puppets on strings, phones on a wired connection, even TVs needed cable, and ppl rave about any slight adjustment to the now. All of which boils down to the single most important discovery ever, which is totally glossed over... the battery.

Maybe ppl can't see a need for 128-bit addressing now, but tomorrow is a different story.
 


All memory really is, at the end of the day, is external storage for the CPU. So when you say that we'll need a larger address space, you need to ask "what for?"

The only thing that currently comes close to saturating the 64-bit address space is DNA modeling. In theory, dynamic physics simulations can require godly amounts of address space, but we'd need CPUs orders and orders of magnitude beyond what we have to even run a simulation that needed that much physical address space.

64-bit is going to be around for a really long time, and I can't foresee any situation where a home user would need more than 16 exabytes. There's simply no way to actually use it all.
 

sebastianpalm7

Prominent
Nov 15, 2017
15
0
520
Just to demonstrate what kind of numbers are involved when talking about 128-bit memory addressing.

Imagine the very improbable future when we're able to pack 16 exabytes of memory into a gram of silicon. It would take an information density of just over 1,100 atoms per byte to manage this feat, which may someday be theoretically possible, with liberal applications of space magic.

At that density of information, it would take 19 *teratons* of silicon to store the full addressable memory of a 128-bit processor. If I got the numbers right, 19 teratons of silicon works out to roughly *eight* cubes of solid silicon, each 10 km per side (a single cube would be about 20 km on a side). Even if you could bring it down to one byte per atom, you're still looking at a solid block roughly 2 *kilometers* on a side.
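Re-running that back-of-the-envelope calculation (an illustrative Python snippet using standard values for silicon; the rounding is mine):

```python
AVOGADRO = 6.022e23
SI_MOLAR_MASS = 28.09        # grams per mole
SI_DENSITY = 2.33            # grams per cubic centimetre

atoms_per_gram = AVOGADRO / SI_MOLAR_MASS    # ~2.14e22 atoms in a gram of silicon
bytes_64 = 2 ** 64                           # the "16 exabytes per gram" assumption
bytes_128 = 2 ** 128                         # full 128-bit address space

print(atoms_per_gram / bytes_64)             # ~1162 atoms per byte

grams = bytes_128 / bytes_64                 # grams of silicon needed at that density
print(grams / 1e6 / 1e12)                    # ~18.4 teratonnes
volume_m3 = grams / SI_DENSITY * 1e-6        # cm^3 -> m^3
print(volume_m3 / 1e12)                      # ~7.9 ten-kilometre cubes (1e12 m^3 each)
print(volume_m3 ** (1 / 3) / 1000)           # ~19.9 km on a side as a single cube

# At an optimistic one byte per atom instead:
volume_1bpa_m3 = bytes_128 / atoms_per_gram / SI_DENSITY * 1e-6
print(volume_1bpa_m3 ** (1 / 3) / 1000)      # ~1.9 km on a side
```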

We might, some day in the far future, need to build 128-bit computers because RAM has become dense enough to run out of 64-bit space.

We will categorically *never* need a 256-bit CPU, because there's not enough of basically any material lying around to build memory big enough to need one.
 

Bo Lee

Reputable
Jun 17, 2015
509
1
5,360


"640K ought to be enough for anyone." - Bill Gates in 1981

Never say never.....
 

Karadjgne

Titan
Ambassador
Like I said, y'all are thinking small. I see the word 'currently' used a lot. Fifteen years ago, 'currently', ppl couldn't really see a need for anything more than a single-core Pentium; dual cores were exotic and pricey. A hundred years ago ppl couldn't see the need for those noisy contraptions called automobiles that required a hand crank to start, were semi-impossible to drive, and were lucky to top 10 mph. A good horse was preferable. 'Currently' silicon has issues with the 5 nm process, and the next round of CPUs will likely be 7 nm at best. You assume that in the next 50 years or less CPUs will still be made from silicon. It could very well be that silicon will be ditched as a RAM/CPU base and something entirely different will take its place and have no issues with a 0.1 nm process.

When the first computer was invented in the late '30s, I'm quite sure the creator had absolutely no idea his invention would end up a million-plus times more powerful and go from the size of his parents' living room to something smaller than a pinkie nail.

It took less than 30 years for someone with some serious imagination to take a Star Trek communicator and end up with today's cellphone. What's going to happen in the next 30 years? Nanotechnology robots diving through the bloodstream destroying cancer cells? Oh, that's already been done by Star Wars.

Doesn't take all that long to go from Science Fiction to Science Fact. Think outside the box and sooner or later it'll dawn on you that there really is no box, only one of your own making.
 


He wasn't thinking ahead enough; he believed the most a general user would need is a decent word processor...

...oh wow, that kinda explains everything now doesn't it.

But seriously: I work in an industry that actually does dynamic physics calculations that legitimately eat up terabytes of RAM; I can categorically say that there is nothing out there that is going to come close to the exabyte scale within the next century, and I doubt ever.