Fourth-Gen PCIe Sees Bandwidth Double

Does anything out there even come close to saturating PCIe 3.0 x16? All I can think of is the enterprise segment, with massive RAID arrays, SSD RAID, fiber-optic networking, and cluster computing.

It would be nice if PCIe became mainstream for SSDs, not just relatively expensive cards for enthusiasts, since the SATA spec isn't keeping pace and SATA III has become a hindrance. Apple is the only one I see taking advantage of PCIe for SSDs in mainstream computers, getting about 1 GB/s. Heck, they're the only ones that seem interested in switching their lines to any type of SSD.
 

InvalidError

Titan
Moderator
64GB/s is more than twice the system memory bandwidth of most PCs today. At this point, it would almost make sense to use GPU memory as system memory for things that depend heavily on bandwidth rather than low latency.

Note: to get 64GB/s out of PCIe 4.0, you need to be simultaneously receiving and transmitting 32GB/s each way.
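For anyone who wants to check that math, here's a quick back-of-the-envelope sketch (the transfer rates and line encodings are the published per-generation figures; everything else is simple arithmetic):

```python
# Back-of-the-envelope PCIe x16 bandwidth per generation.
# Transfer rates (GT/s) and line encodings are the published spec figures.
generations = {
    "1.0": (2.5, 8 / 10),     # 8b/10b encoding: 20% overhead
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding: ~1.5% overhead
    "4.0": (16.0, 128 / 130),
}

LANES = 16
for gen, (gt, eff) in generations.items():
    one_way = gt * eff / 8 * LANES  # GB/s in one direction
    print(f"PCIe {gen} x16: {one_way:5.1f} GB/s each way, "
          f"{one_way * 2:5.1f} GB/s counting both directions")
```

The PCIe 4.0 x16 total lands at ~63GB/s, which is where the rounded "double to 64GB/s" headline figure comes from.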
 
Does anything out there even come close to saturating PCIe 3.0 x16? All I can think of is the enterprise segment, with massive RAID arrays, SSD RAID, fiber-optic networking, and cluster computing.

It would be nice if PCIe became mainstream for SSDs, not just relatively expensive cards for enthusiasts, since the SATA spec isn't keeping pace and SATA III has become a hindrance. Apple is the only one I see taking advantage of PCIe for SSDs in mainstream computers, getting about 1 GB/s. Heck, they're the only ones that seem interested in switching their lines to any type of SSD.
Yes and no.
Within the next few years we will see dual-GPU setups capable of saturating a PCIe x16 slot, and the way these standards work, the spec needs to be ratified now if we want to see it in products in the next 2-3 years.

Outside of graphics there is a bit of a PCIe shortage coming up. Right now PCIe is used mostly for expansion cards... but in the near future we are going to see more use of things like Lightpeak (seriously, Thunderbolt is a horrible name, can't we have the old name back?), M.2, M-PCIe, and SATA Express, all of which will need PCIe lanes. So the real question is: do we pay more for processors and chipsets, where the PCIe lanes are hosted? Or do we assign fewer lanes to each device so that we can dedicate lanes to this new IO?
Outside of the ridiculous high-end GPU space, we could be just fine moving GPUs to PCIe 4.0 x8, freeing up 8 lanes, each capable of roughly 2GB/s of throughput. That could be 4 SSDs and a Thunderbolt port right there (see the sketch below). And for those few crazy people with far too much money who demand 4-GPU setups, there are always higher-end enthusiast or workstation boards with more lanes available.
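To put rough numbers on that lane-budget argument, here's a toy sketch; the device mix is purely hypothetical:

```python
# Toy lane-budget sketch for a 16-lane PCIe 4.0 CPU, with the GPU dropped
# to x8. The device assignments are hypothetical; ~2GB/s is the per-lane,
# per-direction rate at 16 GT/s with 128b/130b encoding.
PER_LANE = 16 * (128 / 130) / 8  # ~1.97 GB/s, one direction

allocation = {  # hypothetical assignment of the 16 lanes
    "GPU": 8,
    "NVMe SSD #1": 1,
    "NVMe SSD #2": 1,
    "NVMe SSD #3": 1,
    "NVMe SSD #4": 1,
    "Thunderbolt": 4,
}

for device, lanes in allocation.items():
    print(f"{device:12s} x{lanes:<2d} = {lanes * PER_LANE:5.1f} GB/s each way")
print("Lanes used:", sum(allocation.values()), "of 16")
```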

 

thundervore

Distinguished
Nice, PCIe 4.0 and DDR4 are on their way. All that's left is USB4, and then I can upgrade my Z77 system.

So by the year 2020 all this should be out. Better start saving up money now lol.
 

Puiucs

Honorable
Nice, PCIe 4.0 and DDR4 are on their way. All that's left is USB4, and then I can upgrade my Z77 system.

So by the year 2020 all this should be out. Better start saving up money now lol.
You will get USB 3.1 soon (double the data rate). We should get mainstream USB 3.1 and DDR4 by next year. I expect PCIe 4.0 in 2016.
 

JOSHSKORN

Distinguished
Nice, PCIe 4.0 and DDR4 are on their way. All that's left is USB4, and then I can upgrade my Z77 system.

So by the year 2020 all this should be out. Better start saving up money now lol.
You will get USB 3.1 soon (double the data rate). We should get mainstream USB 3.1 and DDR4 by next year. I expect PCIe 4.0 in 2016.
It looks like PCIe 4.0 will be supported in the Skylake E/EX/EP series. I imagine whatever comes after that will support it in mainstream processors as well. So, PCIe 4.0 toward the end of 2015, and it'll become mainstream in 2016, I'm guessing. Of course, it might be closer to 2020 before we even need it. Who knows. There will obviously be something better out by then. You can't win.
 
Guest

Guest
Does anything out there even come close to saturating PCIe 3.0 x16? All I can think of is the enterprise segment, with massive RAID arrays, SSD RAID, fiber-optic networking, and cluster computing.

Bandwidth isn't everything, though; the increased link speed also helps, certainly with programs that send a lot of textures, models, etc. to the GPU. Flight simulators such as FSX, Prepar3D, or X-Plane come to mind. A faster PCIe bus has been shown to deliver higher FPS and less stuttering.
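As a rough illustration of why the bus matters for streaming-heavy sims, assuming a made-up 2GB scenery payload:

```python
# Rough time to push a batch of scenery/textures to the GPU.
# The 2GB payload is a made-up number; bandwidths are one-direction
# x16 figures for each generation.
payload_gb = 2.0
slots = {"PCIe 2.0 x16": 8.0, "PCIe 3.0 x16": 15.8, "PCIe 4.0 x16": 31.5}
for slot, gbs in slots.items():
    print(f"{slot}: {payload_gb / gbs * 1000:6.1f} ms for {payload_gb} GB")
```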
 

knowom

Distinguished
Does anything out there even come close to saturating PCIe 3.0 x16? All I can think of is the enterprise segment, with massive RAID arrays, SSD RAID, fiber-optic networking, and cluster computing.

It would be nice if PCIe became mainstream for SSDs, not just relatively expensive cards for enthusiasts, since the SATA spec isn't keeping pace and SATA III has become a hindrance. Apple is the only one I see taking advantage of PCIe for SSDs in mainstream computers, getting about 1 GB/s. Heck, they're the only ones that seem interested in switching their lines to any type of SSD.

You've also got to consider that it doubles the bandwidth for PCIe x1 slots as well, which will benefit lots of new cards made for them: audio-interface sound cards, networking cards (both wired and Wi-Fi), RAID controllers, SSDs, and the list goes on. You may even see a new, more powerful PCIe x1 video card. It will certainly benefit future Atom chips, for example, which use PCIe x4.
 

eklipz330

Distinguished
Nice, PCIe 4.0 and DDR4 are on their way. All that's left is USB4, and then I can upgrade my Z77 system.

So by the year 2020 all this should be out. Better start saving up money now lol.
I'd be shocked if we weren't at PCIe 5.0 or even 6.0 by then. DDR5 will probably be making its way to the market.

But what do we know? Maybe some new technology will just sweep all of this away (graphene, all eyes on you) and there will be entirely new ways of building and using computers.
 


USB 3.1 will still only push 10Gbps, which is still slower than current Thunderbolt (20Gbps) and tied to a single bus instead of a lane per port like Thunderbolt.

Still, 10Gbps is pretty fast. I just hate that the more devices you add, the slower it gets, along with all the other stuff in between that cuts bandwidth.

And PCIe has been doubling bandwidth every generation: for an x16 slot, counting both directions, PCIe 1.0 was 8GB/s, 2.0 was 16GB/s, and the current 3.0 is 32GB/s.
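For reference, here are those link speeds converted to the same units (raw rates only; protocol overhead beyond line encoding is ignored and varies per interface):

```python
# Raw link rates in the same units (1 GB/s = 8 Gb/s).
links = {
    "USB 3.1 Gen 2": 10.0,
    "Thunderbolt 2": 20.0,
    "PCIe 3.0 x1": 8 * 128 / 130,    # ~7.9 Gb/s after 128b/130b encoding
    "PCIe 4.0 x1": 16 * 128 / 130,   # ~15.8 Gb/s
}
for name, gbps in links.items():
    print(f"{name:14s} {gbps:5.1f} Gb/s = {gbps / 8:5.2f} GB/s per direction")
```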

What I would like to see is utilization of that bandwidth. Right now most dGPUs are happy even on PCIe 2.0 x8, let alone PCIe 3.0 x16.



Unless they come out with a new slot. Remember, just 10 years ago AGP was the top dGPU slot, PCIe was brand new, and PCI-X was thought of as the replacement.

As for DDR5, that remains to be seen. There are various non-DDR technologies in the works that offer lower latency, higher bandwidth, and larger capacities than DDR does, so DDR5 might not exist in the next 5-10 years.
 

hannibal

Distinguished
I would also think that AMD CrossFire setups can benefit from this, because their newer graphics cards don't use a separate bridge connector.
But the biggest speed gains will come from PCIe-slot SSD cards.
 

InvalidError

Titan
Moderator

Whatever the name is, all memory is fundamentally the same internally. Making the interface quad-data-rate is only a minor tweak over DDR; nothing fundamentally new there.
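The bandwidth math makes the point: peak bandwidth is just clock × transfers per clock × bus width, so quad-pumping changes the interface, not the memory cells. A minimal sketch with illustrative numbers:

```python
# Peak bandwidth = bus clock * transfers per clock * bus width in bytes.
def peak_gbs(clock_mhz, transfers_per_clock, bus_bits):
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

# Same hypothetical 800 MHz bus clock on a 64-bit channel; only the
# pumping of the interface changes.
print(peak_gbs(800, 2, 64))  # DDR: 12.8 GB/s
print(peak_gbs(800, 4, 64))  # QDR: 25.6 GB/s, double without touching the cells
```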

The only thing that is truly close to ground-breaking is die-stacking, which allows much wider interfaces operating at lower speeds and lower power by eliminating the need to drive long lines between the memory and whatever it is connected to. If that sort of technology made it to the desktop, you would need to upgrade your CPU to upgrade RAM, since RAM would now be part of the CPU package. If this happened, I'm guessing PCIe 5.0-based memory expansion would become an option for applications that need more memory than what is integrated in the CPU.

Yes, there are some fancy high-speed serial interfaces out there, but all that (de)serializing uses power, adds complexity, adds latency, and adds cost, and DRAM's ultra-low-leakage process is generally not suitable for high-speed stuff.
 

InvalidError

Titan
Moderator

Not necessarily: most games and applications do some amount of processing as they load stuff from storage, and even infinite storage bandwidth with zero latency cannot make that load-time processing go away. Once your storage is fast enough that most of the load time is due to processing, faster storage no longer provides any significant benefit.
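A crude load-time model shows the diminishing returns; the payload size and processing time below are invented for illustration:

```python
# Crude load-time model: transfer time + load-time processing. The 4GB
# payload and 8s of CPU-side work (parsing, decompressing, building data
# structures) are invented numbers for illustration.
payload_gb = 4.0
processing_s = 8.0
storage = [("HDD", 0.15), ("SATA SSD", 0.5), ("PCIe SSD", 2.0),
           ("infinite", float("inf"))]
for name, gbs in storage:
    print(f"{name:9s} total load time: {payload_gb / gbs + processing_s:5.1f} s")
```

Past the SATA SSD, the fixed 8 seconds of processing dominates, so the "infinite" drive barely beats the PCIe one.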

In everyday scenarios other than file copy, the most obvious benefit of SSDs is their ~100X lower latency compared to HDDs.
 

SteelCity1981

Distinguished
You've also got to consider that it doubles the bandwidth for PCIe x1 slots as well, which will benefit lots of new cards made for them: audio-interface sound cards, networking cards (both wired and Wi-Fi), RAID controllers, SSDs, and the list goes on. You may even see a new, more powerful PCIe x1 video card. It will certainly benefit future Atom chips, for example, which use PCIe x4.

Maybe, maybe not. PCIe x1 slots still use a 2.0 config, and manufacturers are perfectly content with that type of setup at the moment.
 

kritzler

Distinguished
The move to PCIe 4.0 and beyond is mostly about the transition to GDDR6, with its rumored "12GHz" top-end speed. And now, with 4K gaming on the rise and memory bandwidth/fill rate becoming a choke point, we should see AMD and JEDEC releasing more information sometime this year.
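If that "12GHz" (i.e., 12Gb/s-per-pin) rumor panned out, the card-local bandwidth would dwarf anything PCIe carries. A one-liner, assuming a 384-bit bus:

```python
# Card-level GPU memory bandwidth = per-pin data rate * bus width / 8.
# 12 Gb/s per pin is the rumored figure; the 384-bit bus is an assumption.
per_pin_gbps = 12
bus_bits = 384
print(per_pin_gbps * bus_bits / 8, "GB/s")  # 576.0 GB/s of local bandwidth
```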
 

tomfreak

Distinguished
I think the Radeon 290X, with its bridgeless CrossFire, is close to saturating PCIe 2.0 x16. With PCIe 4.0, you could finally use a budget board that has an x16 connector wired at x4 speed for CrossFire. :)
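The arithmetic backs that up: using the published per-generation rates, a 4.0 x4 link carries roughly as much as a 2.0 x16 link.

```python
# Per-direction bandwidth: PCIe 2.0 x16 vs PCIe 4.0 x4.
pcie2_x16 = 5.0 * (8 / 10) / 8 * 16     # 8b/10b encoding -> 8.0 GB/s
pcie4_x4 = 16.0 * (128 / 130) / 8 * 4   # 128b/130b encoding -> ~7.9 GB/s
print(f"2.0 x16: {pcie2_x16:.1f} GB/s   4.0 x4: {pcie4_x4:.1f} GB/s")
```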
 

InvalidError

Titan
Moderator

Fill rate and GPU memory bandwidth are very loosely related to PCIe bandwidth at best: if a game or application stores all the geometry and textures in the GPU's memory, it can generate huge GPU workloads with very little PCIe bandwidth since all the heavy-lifting is between the GPU and its local memory.
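A toy comparison of what that means for bus traffic (both per-frame figures are invented):

```python
# Hypothetical per-frame PCIe traffic at 60 fps; both per-frame figures
# are invented to contrast the two cases.
fps = 60
cases = {"re-streaming textures each frame": 100,  # MB/frame, assumed
         "assets resident in VRAM": 2}             # MB/frame, assumed
for case, mb in cases.items():
    print(f"{case:33s} {mb * fps / 1000:5.2f} GB/s over PCIe")
```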
 

InvalidError

Titan
Moderator

Just about at the point where PCIe 2.0 x16 / 3.0 x8 was starting to become a bottleneck in some circumstances.

With individual peripheral IO standards exceeding PCIe 3.0 x1 bandwidth, it makes sense to have a next-gen PCIe version where every lower-level IO standard can fit on a single lane.

It also slots in nicely with things like multi-purpose IO pins, where the same set of pins can be used for PCIe, SATA, USB3(.1), and possibly other stuff up to 16Gbps, if PCIe 4.0 is the highest-speed IO going through those pins.
 