PCIe 3.0 - Will You Make the Jump Right Away?

  • Yes... I must have the latest and greatest

    Votes: 2 10.5%
  • No... I'm just fine with PCIe 2.0

    Votes: 8 42.1%
  • Maybe... I want to give it some time on the market first & see the benchmarks

    Votes: 6 31.6%
  • Undecided... I'll play it by ear on where I'm at for a need to upgrade

    Votes: 3 15.8%
  • Who uses a dedicated GPU? On-board FTW

    Votes: 0 0.0%

  • Total voters
    19

tecmo34

Administrator
Moderator
With the release of PCIe 3.0 around the corner in 2011, will you make the jump to a PCIe 3.0-ready motherboard when they're released, based on the increased bandwidth and performance?

[Attachment: PCI-SIG "PCI Express® 3.0 Frequently Asked Questions", page 1]
 
An interesting rig it will be that fills out PCIe 3.x; lots of dual GPUs + yet-to-be-seen SSDs. I assume the x16 lane structure will remain but with doubled bandwidth. The X68 & P67 are 'supposed' to support the PCIe 3.x standard. However, AMD has been really vague about Bulldozer and the PCIe 3.x standard?! I would assume it will support it, but I cannot find anything from AMD confirming PCIe 3.0.
 

tecmo34

Administrator
Moderator

Agree... But isn't that the same thing we said about going from PCIe 1.0 to PCIe 2.0?

It might take a little bit for the GPUs to catch up, but soon we'll wonder how we did without :) We might get a setup/GPU that can play Crysis maxed out without needing two or more GPUs :lol:
 
The x8 lanes on current PCIe are almost saturated from scaling; in a 4-WAY OC GTX 480 + RAID 0 C300 setup, the available bandwidth is practically @ 100%. The next step is 4-WAY + a dedicated PhysX card, and even an SR-2 is saturated in Futuremark. A waterblocked GTX 480 fits in a single-width PCI slot.

Now, as I alluded to earlier, if you double the available bandwidth then this is no longer a concern - for the time being, with PCIe 3.x. Cannot wait to see what rig will max out PCIe 3.0.
[Chart: SLI + PhysX scaling]
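For rough numbers on that doubling (a quick sketch, not from this thread; rates and encodings are from the public PCIe specs): PCIe 3.0 gets there not just by raising the line rate from 5 GT/s to 8 GT/s, but by swapping 8b/10b encoding for 128b/130b.

# Per-lane and per-slot bandwidth, one direction, ignoring protocol
# overhead beyond line encoding.
GENS = {
    "PCIe 2.0": (5.0e9, 8 / 10),     # 5 GT/s, 8b/10b encoding
    "PCIe 3.0": (8.0e9, 128 / 130),  # 8 GT/s, 128b/130b encoding
}

for name, (rate, efficiency) in GENS.items():
    lane_mb_s = rate * efficiency / 8 / 1e6  # bits/s -> MB/s
    print(f"{name}: {lane_mb_s:.0f} MB/s per lane | "
          f"x8 = {lane_mb_s * 8 / 1000:.1f} GB/s | "
          f"x16 = {lane_mb_s * 16 / 1000:.1f} GB/s")

That works out to ~500 MB/s vs ~985 MB/s per lane, so an x8 Gen3 slot has roughly the bandwidth of an x16 Gen2 slot - which is why the 4-way x8 saturation above stops being a concern.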
 
I guess if the SandForce 2000 can already max out SATA III, the next step would be PCIe. Given the way we are heading (mainly SandForce and multi-GPU from ATI/nVidia), we will need PCIe 3.0 in about 2 years.
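The SATA III ceiling falls out of the same encoding math as the sketch above (the ~500 MB/s figure is an assumed SF-2000-class sequential rating, not a benchmark from this thread):

# SATA III line rate vs. what an SF-2000-class SSD is rated to push.
SATA3_RATE = 6.0e9        # 6 Gb/s line rate
EFFICIENCY = 8 / 10       # SATA also uses 8b/10b encoding
DRIVE_MB_S = 500          # assumed SF-2000-class sequential rating

ceiling = SATA3_RATE * EFFICIENCY / 8 / 1e6  # -> 600 MB/s
print(f"SATA III ceiling: {ceiling:.0f} MB/s, "
      f"headroom over the drive: {ceiling - DRIVE_MB_S:.0f} MB/s")

With only ~100 MB/s of headroom left on the interface, moving SSDs onto PCIe lanes is the obvious next step.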
 
It's interesting, but I will probably wait a long time before changing to PCI-E 3.0, because like any new technology, when PCI-E 3.0 comes out it will be overpriced, and I don't plan to pay for an overpriced product that will drop in price a few months later. :lol:
 

tecmo34

Administrator
Moderator
@Gekko Shadow... jaquith is working on the OMG! system next. He has the 4-way, 3-way, and who needs the bottom ones :lol: when you have those. He has run out of room to improve, so PCIe 3.0 is the way to go [:mousemonkey:5]
 

Gekko Shadow

Distinguished
Oct 4, 2010
618
1
19,065


Holy scat, Batman!! He is now my idol in the SLI world! T-T
Hell, I'm barely making my way to 3-way SLI T-T...
 
Interesting thoughts on CPU and GPU progression over the years; clearly current limits are being hit with PCIe 2.x, creating the need for the next revision, PCIe 3.x. There are needs beyond the typical "home gaming PC."

Tesla GPUs running off Xeon rigs - http://www.dell.com/content/products/productdetails.aspx/poweredge-c410x?c=us&dgc=CJ&cid=24471&lid=566643&acd=10550055-3463938-
[Images: Dell PowerEdge C410x overview]


Moore's Law {transistor count doubles every 18 months; quick sketch below} - http://en.wikipedia.org/wiki/Moore's_law {some debate the "Law", but the GPU-vs-CPU correlation still applies to FLOPS and transistor count}
Article - http://www.forbes.com/2009/06/02/nvidia-gpu-graphics-technology-intelligent-technology-huang.html

FLOPS - http://en.wikipedia.org/wiki/FLOPS
Parallel Computing - http://en.wikipedia.org/wiki/Parallel_computing
GPU -

Graphs need updating:
[Graph: Transistor Count and Moore's Law (2008)]
[Graph: Hendy's Law]
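A minimal sketch of the doubling arithmetic quoted above (the Intel 4004 baseline is just a well-known starting point, and the 18-month period is exactly the debated part):

# Project transistor count under "doubles every 18 months".
BASE_YEAR, BASE_COUNT = 1971, 2_300   # Intel 4004
DOUBLING_MONTHS = 18

def projected_count(year):
    months = (year - BASE_YEAR) * 12
    return BASE_COUNT * 2 ** (months / DOUBLING_MONTHS)

for year in (1971, 1990, 2000, 2010):
    print(f"{year}: ~{projected_count(year):,.0f} transistors")

Run it and 2010 lands around 10^11 transistors, well above real chips of the day (~10^9), which is why ~24 months is the more commonly cited period for transistor count.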
 
Ever seen a Pixar movie? Or a commercial, or any CGI movie? If so, then it was rendered on an "Image Farm." ROOMS FULL of them; hundreds of thousands to millions spent.

The Pro GPUs work more efficiently with Xeon. That small 3U rack is ~$25,000+.
 
^ Yup. But from what I understand, Pixar still hasn't and won't switch to GPGPU for some time.

Also, realize that some tasks really CAN'T be made to run in a massively parallel way.
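That limit is the textbook Amdahl's law argument (standard formula, not something from this thread): whatever fraction of the job stays serial caps the speedup, no matter how many GPUs you add.

def amdahl_speedup(parallel_fraction, n_processors):
    # Amdahl's law: the serial part runs at full length regardless of n.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

for p in (0.50, 0.90, 0.99):
    print(f"{p:.0%} parallel: " + ", ".join(
        f"{n} GPUs -> {amdahl_speedup(p, n):.1f}x" for n in (2, 4, 1024)))

Even at 99% parallel, 1024 GPUs top out near 91x - and a 50%-serial task never beats 2x.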

As far as whether PCIe 2.0 x16 bandwidth is enough:
So with these results, Dell’s final answer over whether a single x16 PCIe bus is enough was simply “sometimes”. If an application scales against multiple GPUs in the first place, it usually makes sense to go further – after all if you’re already on GPUs, you probably need all the performance you can get. However if it doesn’t scale against multiple GPUs, then the bus is the least of the problem. It’s in between these positions where the bus matters: sometimes it’s a bottleneck, and sometimes it’s not. It’s almost entirely application dependent.
Source: http://www.anandtech.com/show/3972/nvidia-gtc-2010-wrapup/2
 

Gekko Shadow

Distinguished
Oct 4, 2010
618
1
19,065


Uh-huh. So then it's still the same thing - even with 3.0, this same fact will still remain?
 


Untrue, they are constantly adding their "shared" centers, and there are other huge centers as well. DOUBLING ANNUALLY. All of them are CRAZY in their sizes and capacities. They literally use everything including the kitchen sink.

http://cnettv.cnet.com/2001-1_53-25855.html {there are more up-to-date links, but I couldn't find the video I have on record} As the economy goes down, movie-going only goes up.

Also, that article was about the Tesla 1000 series -> "HPC: Dell Says a Single PCIe x16 Bus Is Enough for Multiple GPUs – Sometimes". The link from Dell was for 10x Tesla 2050s in a 3U.

Again, for a "normal person's" gaming rig, scaling is indeed an issue when you're down to x8 or x4 PCIe 2.x lanes per card, but as I posted ^ way up there, PCIe 3.x effectively "doubles the available bandwidth."
 
^ Yes, but the point I was trying to make is that no matter how much you try, there will always be things that can't be parallelized effectively.

Untrue, they are constantly adding their "shared" centers, and there are other huge centers as well. DOUBLING ANNUALLY. All of them are CRAZY in their sizes and capacities. They literally use everything including the kitchen sink.
Ahh... I was talking about switching their main animation tools to pure GPGPU (and again, AFAIK, Pixar hasn't done this yet). But yes, you are right, the market for shared (Tesla-based) servers/HPC is increasing.
 
^ The cheapest approach and configuration wins: fewer CPUs per GPU while keeping the rendering rate the same or greater.

All of this is WAY over the heads of anyone looking at this post, but it is interesting...
 
One advantage of the Xeon is the low power consumption per GT/s. There are a lot more advantages as well.

Example:
Xeon X5660: 6-core, 95W, 6.4 GT/s
i7 980X: 6-core, 130W, 6.4 GT/s

Now multiply that by 1,000 -> 4,000+ CPUs in a render farm. That is a lot of HEAT + wasted kWh; one of the biggest problems in these large server centers is A/C. Administrators & managers look at 5 years in their cost analysis. In my case I look at everything in terms of 5 years: I lease my servers and look at income, expenses {including repair, downtime, etc.}, and management.
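A back-of-the-envelope sketch of just that CPU power gap at farm scale (the 24/7 duty cycle and the $0.10/kWh rate are my assumptions, and A/C overhead adds on top):

# Extra power burned running i7 980X (130W) instead of Xeon X5660 (95W).
XEON_W, I7_W = 95, 130
HOURS_PER_YEAR = 24 * 365     # render farms run around the clock
USD_PER_KWH = 0.10            # assumed utility rate

for cpus in (1000, 4000):
    extra_kw = (I7_W - XEON_W) * cpus / 1000
    kwh = extra_kw * HOURS_PER_YEAR
    print(f"{cpus} CPUs: +{extra_kw:.0f} kW draw, "
          f"{kwh:,.0f} kWh/yr, ~${kwh * USD_PER_KWH:,.0f}/yr")

Over a 5-year cost window that's six figures even at the 1,000-CPU end, before cooling.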

In my office, if a server goes down I get a replacement the same day and couldn't care less about "fixing it." I only replace SAS drives, fans, and PSUs - on a rare occasion, RAM. We also have spare servers, so I'll yank the RAID drives and I'm back up in a few minutes.