Fourth Gen PCIe Sees Bandwidth Double
Tags:
- Hardware
- PCI Express
Anonymous
June 6, 2014 11:26:06 AM
This week we got the first details about the fourth generation of the PCI Express spec. PCIe 4.0 will have a base speed of 16 Gbps per lane.
djtronika
June 6, 2014 1:11:12 PM
velocityg4
June 6, 2014 1:14:50 PM
Does anything out there even come close to saturating PCI-e 3.0 x16? All I can think of would be the enterprise segment with massive RAID arrays, SSD RAID, fiber optic networking and cluster computing.
It would be nice if PCIe became mainstream for SSDs, not just relatively expensive cards for enthusiasts, since the SATA spec isn't keeping pace and SATA III is becoming a hindrance. Apple is the only one I see taking advantage of PCIe for SSDs in mainstream computers, getting about 1 GB/s. Heck, they're the only ones that seem interested in switching their lines to any type of SSD.
Score
-1
64GB/s is more than twice the system memory bandwidth of most PCs today. At this point, it would almost make sense to use GPU memory as system memory for things that depend heavily on bandwidth rather than low latency.
Note: to get 64GB/s out of PCIe4 x16, you need to simultaneously transmit and receive 32GB/s each way.
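That 64GB/s figure is easy to sanity-check; a minimal sketch, assuming PCIe 4.0 keeps 3.0's 128b/130b encoding at its 16 Gbps per-lane rate:

```python
# Back-of-the-envelope PCIe 4.0 x16 bandwidth (assumes 128b/130b encoding).
GT_PER_S = 16          # raw rate per lane, gigatransfers/s
ENCODING = 128 / 130   # 128b/130b line-code efficiency
LANES = 16

per_lane_GBps = GT_PER_S * ENCODING / 8   # one direction, per lane
one_way_GBps = per_lane_GBps * LANES      # one direction, x16
both_ways_GBps = one_way_GBps * 2         # transmit + receive simultaneously

print(round(per_lane_GBps, 2), round(one_way_GBps, 1), round(both_ways_GBps, 1))
```

The result lands just under a nominal 64 GB/s; the gap is the 128b/130b encoding overhead.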
Score
0
josejones
June 6, 2014 1:51:02 PM
neon neophyte
June 6, 2014 1:54:22 PM
CaedenV
June 6, 2014 2:02:02 PM
Quote:
Does anything out there even come close to saturating PCIe 3.0 x16? All I can think of would be the enterprise segment with massive RAID arrays, SSD RAID, fiber optic networking and cluster computing. It would be nice if PCIe became mainstream for SSDs, not just relatively expensive cards for enthusiasts, since the SATA spec isn't keeping pace and SATA III is becoming a hindrance. Apple is the only one I see taking advantage of PCIe for SSDs in mainstream computers, getting about 1 GB/s. Heck, they're the only ones that seem interested in switching their lines to any type of SSD.
Yes and no.
Within the next few years we will see dual-GPU setups capable of saturating a PCIe 3.0 x16 slot, and the way these standards work, the spec needs to be ratified now if we want to see it in products in the next 2-3 years.
Outside of graphics there is a bit of a PCIe shortage coming up. Right now PCIe is used mostly for expansion cards... but in the near future we are going to start seeing more use of things like Lightpeak (seriously, Thunderbolt is a horrible name, can't we have the old name back?), M.2, M-PCIe, and SATA Express, all of which will need PCIe lanes. So the real question is: do we pay more for our processors and chipsets, where the PCIe lanes are hosted? Or do we assign fewer lanes to each device so that we can dedicate lanes to these new IO standards?
Outside of the ridiculous high-end GPU space, we could be just fine moving GPUs to PCIe 4.0 x8, freeing up 8 lanes, each capable of about 2 GB/s of throughput. That could be 4 SSDs and a Thunderbolt port right there. And for those few crazy people with far too much money who demand quad-GPU setups, there will always be higher-end enthusiast or workstation boards with more lanes available.
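The lane arithmetic behind that can be sketched quickly; the device mix below is hypothetical, and the ~2 GB/s-per-lane figure assumes PCIe 4.0 rates:

```python
# Hypothetical PCIe 4.0 lane budget: drop the GPU to x8 and spend the freed lanes.
LANE_GBPS = 2.0  # approx. one-way GB/s per PCIe 4.0 lane (16 Gbps minus encoding)

devices = {
    "GPU": 8,
    "NVMe SSD 1": 1,
    "NVMe SSD 2": 1,
    "NVMe SSD 3": 1,
    "NVMe SSD 4": 1,
    "Thunderbolt": 4,
}

total = sum(devices.values())
print(f"lanes used: {total} of 16")
for name, lanes in devices.items():
    print(f"{name}: x{lanes} = {lanes * LANE_GBPS:.0f} GB/s each way")
```

Everything fits inside the 16 lanes a single x16 slot uses today, which is the point of the post.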
Score
6
neon neophyte
June 6, 2014 2:03:59 PM
thundervore
June 6, 2014 2:20:35 PM
Puiucs
June 6, 2014 3:37:33 PM
Quote:
Nice, PCIe4 and DDR4 are making way. All that's left is USB4 and then I can upgrade my Z77 system. So by the year 2020 all this should be out. Better start saving up money now, lol.
You will get USB 3.1 soon (double the data rate). We should get mainstream USB 3.1 and DDR4 by next year. I expect PCIe 4 in 2016.
Score
3
JOSHSKORN
June 6, 2014 3:56:31 PM
Quote:
Quote:
Nice, PCIe4 and DDR4 are making way. All that's left is USB4 and then I can upgrade my Z77 system. So by the year 2020 all this should be out. Better start saving up money now, lol.
You will get USB 3.1 soon (double the data rate). We should get mainstream USB 3.1 and DDR4 by next year. I expect PCIe 4 in 2016.
It looks like PCI-e 4.0 will be supported in Skylake E/EX/EP series. I imagine what comes after that will be supported as well in mainstream processors. So, PCI-e 4.0 toward the end of 2015 and it'll become mainstream in 2016, I'm guessing. Of course, it might be closer to 2020 before we even need it. Who knows. There will obviously be something better out, then. You can't win.
Score
0
magnetite2
June 6, 2014 4:29:35 PM
Quote:
Does anything out there even come close to saturating PCI-e 3.0 x16? All I can think of would be the enterprise segment with massive RAID arrays, SSD RAID, fiber optic networking and cluster computing.
Bandwidth isn't everything though; the increased link speed also helps, certainly with programs that send a lot of textures, models, etc. to the GPU. Flight simulators such as FSX, Prepar3D or X-Plane come to mind. A faster PCIe bus has been shown to deliver higher FPS and less stuttering.
Score
4
knowom
June 6, 2014 5:35:39 PM
Quote:
Does anything out there even come close to saturating PCIe 3.0 x16? All I can think of would be the enterprise segment with massive RAID arrays, SSD RAID, fiber optic networking and cluster computing. It would be nice if PCIe became mainstream for SSDs, not just relatively expensive cards for enthusiasts, since the SATA spec isn't keeping pace and SATA III is becoming a hindrance. Apple is the only one I see taking advantage of PCIe for SSDs in mainstream computers, getting about 1 GB/s. Heck, they're the only ones that seem interested in switching their lines to any type of SSD.
You've also got to consider that it doubles the bandwidth for PCI-E x1 slots as well, which will benefit lots of new cards made for it: audio interfaces, sound cards, wired and Wi-Fi networking cards, RAID controllers, SSDs, and the list goes on. You may even see a new, more powerful PCI-E x1 video card. It will certainly benefit future Atom chips, for example, that use PCI-E x4.
Score
3
eklipz330
June 6, 2014 6:01:23 PM
Quote:
Nice, PCIe4 and DDR4 are making way. All that's left is USB4 and then I can upgrade my Z77 system. So by the year 2020 all this should be out. Better start saving up money now, lol.
But what do we know? Maybe some new technology will just sweep all of this away (graphene, all eyes on you) and they'll have entirely new ways of building and using computers.
Score
1
Puiucs said:
Quote:
Nice, PCIe4 and DDR4 are making way. All that's left is USB4 and then I can upgrade my Z77 system. So by the year 2020 all this should be out. Better start saving up money now, lol.
You will get USB 3.1 soon (double the data rate). We should get mainstream USB 3.1 and DDR4 by next year. I expect PCIe 4 in 2016.
USB 3.1 will still only push 10Gbps, which is still slower than current Thunderbolt (20Gbps), and it is tied to a single bus instead of a lane per port like Thunderbolt.
Still, 10Gbps is pretty fast. I just hate that the more devices you add, the slower it gets, along with all the other in-betweens that cut bandwidth.
And PCIe has been doubling bandwidth every generation: for an x16 slot counting both directions, PCIe 1.0 was 8GB/s, 2.0 was 16GB/s and the current 3.0 is 32GB/s.
What I would like to see is utilization of that bandwidth. Right now most dGPUs are happy even on PCIe 2.0 x8, let alone PCIe 3.0 x16.
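Those x16 totals follow from the per-lane rates and encodings; a quick sketch, with the 16 GT/s gen-4 rate taken from this article:

```python
# Per-generation x16 totals, counting both directions as the post does.
# Tuples: (generation, GT/s per lane, encoding efficiency)
gens = [
    (1, 2.5, 8 / 10),      # 8b/10b encoding
    (2, 5.0, 8 / 10),
    (3, 8.0, 128 / 130),   # 128b/130b encoding
    (4, 16.0, 128 / 130),  # gen-4 rate per this article
]

for gen, rate, eff in gens:
    one_way = rate * eff / 8 * 16   # GB/s, one direction, x16
    print(f"PCIe {gen}.0 x16: {2 * one_way:.0f} GB/s both directions")
```

Note that generation 3 managed to double throughput with only a 60% raw-rate bump (5 to 8 GT/s) by switching from 8b/10b to 128b/130b encoding.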
eklipz330 said:
Quote:
Nice, PCIe4 and DDR4 are making way. All that's left is USB4 and then I can upgrade my Z77 system. So by the year 2020 all this should be out. Better start saving up money now, lol.
But what do we know? Maybe some new technology will just sweep all of this away (graphene, all eyes on you) and they'll have entirely new ways of building and using computers.
Unless they come out with a new slot, that is. Remember, just 10 years ago AGP was the top dGPU slot, PCIe was very new, and PCI-X was thought to be the replacement.
As for DDR5, that remains to be seen. There are various other technologies in the works that do not involve DDR, offering lower latency, higher bandwidth and larger sizes, so DDR might not exist in the next 5-10 years.
Score
1
hannibal
June 7, 2014 1:02:45 AM
daglesj
June 7, 2014 2:45:52 AM
jimmysmitty said:
As for DDR5, that remains to be seen. There are various other technologies in the works that do not involve DDR, offering lower latency, higher bandwidth and larger sizes, so DDR might not exist in the next 5-10 years.
Whatever the name is, all memory is fundamentally the same internally. Making the interface quad-data-rate is only a minor tweak over DDR; nothing fundamentally new there.
The only thing that is truly close to ground-breaking is die-stacking which allows much wider interfaces operating at lower speeds and lower power by eliminating the need to drive long lines between the memory and whatever it is connected to. If that sort of technology got on the desktop, you would need to upgrade your CPU to upgrade RAM since RAM would now be soldered to the CPU. If this happened, I'm guessing PCIE5-based memory expansion would become an option for applications that need more memory than what is integrated in the CPU.
Yes, there are some fancy high-speed serial interfaces out there but all that (de)serializing uses power, adds complexity, adds latency and adds cost on an ultra-low-leakage process generally not suitable for high-speed stuff.
Score
0
hannibal said:
But most speed gain will be achieved by PCIe-slot SSD cards.
Not necessarily: most games and applications do some amount of processing as they load stuff from storage, and even infinite storage bandwidth with zero latency cannot make that load-time processing go away. Once your storage is fast enough that most of the load time is due to processing, faster storage no longer provides any significant benefit.
In everyday scenarios other than file copy, the most obvious benefit of SSDs is their ~100X lower latency compared to HDDs.
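That load-time point is essentially a serial-bottleneck argument; a toy model with purely illustrative numbers (2 GB of assets, 8 s of CPU-side parsing, made-up drive speeds):

```python
# Toy load-time model: loading = storage transfer + fixed CPU-side processing.
def load_time(data_GB, storage_GBps, cpu_seconds):
    return data_GB / storage_GBps + cpu_seconds

data, cpu = 2.0, 8.0  # 2 GB of assets, 8 s of parsing/decompression (illustrative)
for drive, speed in [("HDD", 0.12), ("SATA SSD", 0.5),
                     ("PCIe SSD", 1.5), ("PCIe 4.0 SSD", 6.0)]:
    print(f"{drive}: {load_time(data, speed, cpu):.1f} s")
```

Past the SATA SSD, a drive twelve times faster shaves only a few seconds here, because the fixed processing time dominates.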
Score
1
SteelCity1981
June 7, 2014 7:13:41 AM
Quote:
You've also got to consider that it doubles the bandwidth for PCI-E x1 slots as well, which will benefit lots of new cards made for it: audio interfaces, sound cards, wired and Wi-Fi networking cards, RAID controllers, SSDs, and the list goes on. You may even see a new, more powerful PCI-E x1 video card. It will certainly benefit future Atom chips, for example, that use PCI-E x4.
Maybe, maybe not. PCI-E x1 slots still use a 2.0 config, and manufacturers are perfectly content with that type of setup at the moment.
Score
0
kritzler
June 7, 2014 8:24:12 AM
tomfreak
June 7, 2014 8:26:03 PM
kritzler said:
The move to PCIe 4.0 and beyond is mostly about the transition to GDDR6's rumored "12 GHz" top-end speed. And now, with 4K gaming on the rise and memory bandwidth/fill rate becoming a choke point, we should see AMD and JEDEC releasing more information sometime this year.
Fill rate and GPU memory bandwidth are very loosely related to PCIe bandwidth at best: if a game or application stores all its geometry and textures in the GPU's memory, it can generate huge GPU workloads with very little PCIe bandwidth, since all the heavy lifting is between the GPU and its local memory.
Score
0
Christopher1
June 8, 2014 6:23:07 AM
Christopher1 said:
Excuse me if I am wrong, but wasn't the third-generation pipeline not being totally maxed out yet?
We are just about at the point where 2.0 x16 / 3.0 x8 starts to become a bottleneck in some circumstances.
With individual peripheral IOs exceeding 3.0x1 bandwidth, it makes sense to have a next-gen PCIe version where all lower-level IO standards can fit on a single lane.
It also slots in nicely with things like multi-purpose IO pins where the same set of pins can be used for PCIe, SATA, USB3(.1) and possibly other stuff up to 16Gbps if PCIe4 is the highest-speed IO going through those pins.
Score
0
Matthew Busse
June 8, 2014 7:49:39 AM
mapesdhs
June 8, 2014 1:00:10 PM
mapesdhs said:
Don't know about the others, but the only reason it helps for FSX is because the game is staggeringly badly written. Constantly loading textures the way it does is just nuts for a flight sim.
In keynotes about improving DirectX and OpenGL performance on the back of Mantle's launch, it came out that the way 3D APIs have been working up to now involves tons of unnecessary API calls and likely IO traffic too... so one could say D3D and OGL were pretty nutty to start with even on a good day.
Score
0
mapesdhs
June 8, 2014 3:59:11 PM
InvalidError said:
so one could say D3D and OGL were pretty nutty to start with even on a good day.
SGI was able to obtain very good performance out of its tech years ago with GL and then of course OpenGL. I can only presume OGL has degraded & bloated since then, which is a shame. Reliability was critical with IRx systems for defense, oil/gas, etc., and they were able to cope with massive datasets without falling over, e.g. the Group Station for Defense Imaging, which involved I/O rates beyond 40GB/sec. In some ways I don't think anything has yet matched what a maxed-out Onyx3900 could do with 16x IR4, except of course where the feature set of that era was a limitation in some manner. Even so, they could do some amazing things with Performer, which sits on top of OGL. Alas, how it's all come downhill since then in the world of APIs...
Btw, what I meant about FSX was that it's badly written, period, not that something inherent to OGL/D3D is holding it back. It just manages texture data very poorly compared to techniques for flight sims that have been common practice for more than 20 years.
Ian.
Score
0
mapesdhs said:
SGI was able to obtain very good performance out of its tech years ago with GL and then of course OpenGL. I can only presume OGL has degraded & bloated since then, which is a shame.
The OpenGL from 20 years ago is a considerably different critter from the OpenGL today. Among other things, shaders did not exist back then; that alone is a major game-changer in the way things get done in modern software, along with all the glue that makes the new stuff fit with the old stuff.
Score
0
Peripherals are currently strangled by the limited total bandwidth available, especially between the CPU and the Southbridge. This crops up a couple of times in this article: http://www.tomshardware.com/reviews/samsung-xp941-z97-p... .
Additional bandwidth for teeny-tiny SSDs would be a good thing. Not to mention four-port USB 3.0 adapter cards with only enough PCI-E bandwidth to run one port at a time at full speed.
Score
0
WyomingKnott said:
Peripherals are currently strangled by the limited total bandwidth available, especially between the CPU and the Southbridge.
Which devices would those be?
Most people are not going to lose sleep over the DMI since it does not become a significant bottleneck unless you throw outlandish devices and workloads at it: practically no consumer-level storage or other devices come anywhere near 1.5GB/s and DMI is full-duplex so it can easily handle device-to-device move/copy which is about the most IO-intensive task normal people will ever demand out of it. By the time it does, Intel will probably have integrated the PCH into the CPU - there already is some of that coming with Skylake's four extra PCIE/SATA-Express lanes.
For people who have extreme IO needs that cannot be met by DMI, there is the LGA2011 option.
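The DMI headroom claim above is easy to put numbers on; a minimal sketch, assuming DMI 2.0's 20 Gbps link with 8b/10b encoding and roughly 550 MB/s for a fast SATA 6Gb/s SSD:

```python
# DMI 2.0 effective bandwidth vs. typical consumer storage throughput.
DMI_RAW_GBPS = 20        # 4 lanes at 5 GT/s, a PCIe 2.0-style link
ENCODING = 8 / 10        # 8b/10b line code
dmi_GBps = DMI_RAW_GBPS * ENCODING / 8   # GB/s, each direction

sata_ssd_GBps = 0.55     # fast SATA 6Gb/s SSD, sequential (assumed)
ssds_before_saturation = dmi_GBps / sata_ssd_GBps
print(f"DMI one-way: {dmi_GBps:.1f} GB/s")
print(f"~{ssds_before_saturation:.1f} fast SATA SSDs running flat out to fill it")
```

It takes three to four of the fastest consumer SATA drives of the day, all streaming sequentially at once, before the DMI itself becomes the limit.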
Score
0
threefish
June 9, 2014 2:47:58 PM
WyomingKnott said:
The ones mentioned in the article that I linked to.
Since DMI 2.0 runs at 20Gbps (~2GB/s net) each way, DMI alone does not explain why throughput drops under 1GB/s when switching to the PCH's PCIe lanes: set up a RAID 0 array with four fast SATA 6Gb/s SSDs and you can get 1.2-1.5GB/s out of the Z87 even after all the extra overhead this implies. Parameters other than bandwidth must be at play, and latency should not be one of them either, since command queuing hides it.
I would hazard a guess that the PCH is simply not optimized to pass PCIE traffic over the DMI.
Score
0