PCIe 3.0 - Will You Make the Jump Right Away?

With the release of PCIe 3.0 around the corner in 2011, will you make the jump to a PCIe 3.0-ready motherboard when it's released, based on the increased bandwidth and performance?



PCI Express® 3.0 Frequently Asked Questions
  1. It's hard enough to saturate PCIe 2.1 as it is... sure, once it's actually utilized, but I'm not really seeing a point right now
  2. ^+1 same here.
  3. It will be an interesting rig that fills out PCIe 3.x: lots of dual GPUs plus yet-to-be-seen SSDs. I assume the x16 lane structure will remain but with the bandwidth doubled. The X68 & P65 are 'supposed' to support the PCIe 3.x standard. However, AMD has been really vague about Bulldozer and the PCIe 3.x standard?! I would assume it will support it, but I cannot find anything from AMD confirming PCIe 3.0.
  4. g00fysmiley said:
    It's hard enough to saturate PCIe 2.1 as it is... sure, once it's actually utilized, but I'm not really seeing a point right now

    Agree... But isn't that the same thing we said about going from PCIe 1.0 to PCIe 2.0?

    It might take a little bit for the GPUs to catch up, but soon we'll wonder how we ever did without it :) We might get a setup/GPU that can play Crysis maxed out without needing two or more GPUs :lol:
  5. The x8 lanes on current PCIe are almost saturated when scaling, and in a 4-way OC GTX 480 setup plus RAID 0 C300s the available bandwidth is practically at 100%. The next step is 4-way plus a dedicated PhysX card, and even an SR-2 gets saturated in Futuremark. A waterblocked GTX 480 only takes up a single-width PCI slot.

    Now, as I alluded to earlier, if you double the available bandwidth then this is no longer a concern - at least for the time being with PCIe 3.x. I can't wait to see what rig will max out PCIe 3.0.
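
    To put rough numbers on "doubles the available bandwidth": a quick back-of-the-envelope sketch in Python, using the published per-lane line rates and encoding overheads {theoretical figures, not benchmarks}:

    Code:
    # Rough per-lane PCIe throughput by generation, from published line rates.
    # PCIe 1.x/2.x use 8b/10b encoding (80% efficient); PCIe 3.0 uses 128b/130b (~98.5%).
    GENERATIONS = {
        "PCIe 1.x": (2.5, 8 / 10),      # (line rate in GT/s, encoding efficiency)
        "PCIe 2.x": (5.0, 8 / 10),
        "PCIe 3.0": (8.0, 128 / 130),
    }

    for name, (line_rate_gt, efficiency) in GENERATIONS.items():
        per_lane_gbs = line_rate_gt * efficiency / 8    # one bit per transfer, 8 bits per byte
        print(f"{name}: ~{per_lane_gbs:.2f} GB/s per lane, ~{per_lane_gbs * 16:.1f} GB/s at x16")

    That works out to roughly 0.5 GB/s per lane on PCIe 2.x versus ~1 GB/s per lane on 3.0, so a 3.0 x8 slot carries about what a 2.x x16 slot does today.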
  6. If the SandForce SF-2000 can already max out SATA III, I guess the next step would be PCIe. Given the way we are heading (mainly SandForce SSDs and multi-GPU from ATI/nVidia), we will need PCIe 3.0 in about 2 years.
  7. It's interesting, but I will probably wait a long time before changing to PCI-E 3.0, because like any new technology, PCI-E 3.0 will be overpriced when it comes out, and I don't plan to pay for an overpriced product that will drop in price a few months later. :lol:
  8. I'm going to wait to see how much performance gain it will give (compared to PCI-E 2.0/2.1).
  9. jaquith said:


    I lol'd at this xD :lol:
  10. @Gekko Shadow... jaquith is working on the OMG! system next. He has the 4-way, 3-way, and who needs the bottom ones :lol: when you have those. He has run out of room to improve, so PCIe 3.0 is the way to go [:mousemonkey:5]
  11. tecmo34 said:
    @Gekko Shadow... jaquith is working on the OMG! system next. He has the 4-way, 3-way, and who needs the bottom ones :lol: when you have those. He has run out of room to improve, so PCIe 3.0 is the way to go [:mousemonkey:5]


    Holy scat, Batman!! He is now my idol in the SLI world! T-T
    Hell, I'm barely making my way to 3-way SLI T-T...
  12. There should be a nice bump coming in GPUs, and possibly in gaming needs as well, coinciding with the new console releases - yeah, 2 years or so
  13. I don't wanna wait 2 years. Why not tomorrow? >=P
  14. Interesting thoughts on CPU and GPU progression over the years, and clearly current limits are being hit with PCIe 2.x, creating the need for the next revision, PCIe 3.x. There are needs beyond the typical "home gaming PC."

    Tesla GPUs running off Xeon rigs - http://www.dell.com/content/products/productdetails.aspx/poweredge-c410x?c=us&dgc=CJ&cid=24471&lid=566643&acd=10550055-3463938-



    Moore's Law {transistor count doubles every 18 months} - http://en.wikipedia.org/wiki/Moore's_law {some debate the "Law" but correlation of GPU vs CPU still applies to FLOPS and transistor count}
    Article - http://www.forbes.com/2009/06/02/nvidia-gpu-graphics-technology-intelligent-technology-huang.html

    FLOPS - http://en.wikipedia.org/wiki/FLOPS
    Parallel Computing - http://en.wikipedia.org/wiki/Parallel_computing
    GPU -

    Transistor count graphs / Hendy's Law
  15. "Tesla GPUs running off Xeon rigs"

    SWEET BABY JEBUS AND THE ORPHANS! D:!!
  16. Ever seen a Pixar movie? Or a commercial, or any SGI movie? If so, then it was rendered on an "Image Farm." ROOMS FULL of machines, with hundreds of thousands to millions spent.

    The Pro GPUs work more efficiently with Xeon. That small 3U rack is ~$25,000+.
  17. ^ Yup. But from what I understand, Pixar still hasn't switched to GPGPU and won't for some time.

    Also, realize that some tasks really CAN'T be made to run in a massively parallel way.

    As far as if the PCIe 2.0 x16 bandwidth is enough:
    Quote:
    So with these results, Dell’s final answer over whether a single x16 PCIe bus is enough was simply “sometimes”. If an application scales against multiple GPUs in the first place, it usually makes sense to go further – after all if you’re already on GPUs, you probably need all the performance you can get. However if it doesn’t scale against multiple GPUs, then the bus is the least of the problem. It’s in between these positions where the bus matters: sometimes it’s a bottleneck, and sometimes it’s not. It’s almost entirely application dependent.

    Source: http://www.anandtech.com/show/3972/nvidia-gtc-2010-wrapup/2
  18. Oh, I see. I suppose I could buy one or two. x)
  19. Shadow703793 said:
    ^ Yup. But from what I understand, Pixar still hasn't switched to GPGPU and won't for some time.

    Also, realize that some tasks really CAN'T be made to run in a massively parallel way.

    As far as if the PCIe 2.0 x16 bandwidth is enough:
    Quote:
    So with these results, Dell’s final answer over whether a single x16 PCIe bus is enough was simply “sometimes”. If an application scales against multiple GPUs in the first place, it usually makes sense to go further – after all if you’re already on GPUs, you probably need all the performance you can get. However if it doesn’t scale against multiple GPUs, then the bus is the least of the problem. It’s in between these positions where the bus matters: sometimes it’s a bottleneck, and sometimes it’s not. It’s almost entirely application dependent.

    Source: http://www.anandtech.com/show/3972/nvidia-gtc-2010-wrapup/2


    Uh-huh. So then it's still the same thing - even with 3.0, that same fact will still hold?
  20. Shadow703793 said:
    ^ Yup. But from what I understand, Pixar still hasn't switched to GPGPU and won't for some time.

    Also, realize that some tasks really CAN'T be made to run in a massively parallel way.

    As far as if the PCIe 2.0 x16 bandwidth is enough:
    Quote:
    So with these results, Dell’s final answer over whether a single x16 PCIe bus is enough was simply “sometimes”. If an application scales against multiple GPUs in the first place, it usually makes sense to go further – after all if you’re already on GPUs, you probably need all the performance you can get. However if it doesn’t scale against multiple GPUs, then the bus is the least of the problem. It’s in between these positions where the bus matters: sometimes it’s a bottleneck, and sometimes it’s not. It’s almost entirely application dependent.

    Source: http://www.anandtech.com/show/3972/nvidia-gtc-2010-wrapup/2


    Untrue, they are constantly adding their "shared" centers, and there are other huge centers as well. DOUBLING ANNUALLY. All of them are CRAZY in their sizes and capacities. They literally use everything including the kitchen sink.

    http://cnettv.cnet.com/2001-1_53-25855.html {there are more up-to-date links, but I couldn't find the video on record} As the economy goes down, movie-going only goes up.

    Also, that article was about the Tesla 1000 series -> "HPC: Dell Says a Single PCIe x16 Bus Is Enough for Multiple GPUs – Sometimes". The Dell link I posted was for a Tesla 2050 x10 in a 3U.

    Again, for a "normal person's" gaming rig, scaling is indeed an issue when there are four x8 PCIe 2.x slots, but as I posted ^ way up there, PCIe 3.x effectively "doubles the available bandwidth."
  21. ^ Yes, but the point I was trying to make is that no matter how much you try, there will always be things that can't be parallelized effectively.

    Quote:
    Untrue, they are constantly adding their "shared" centers, and there are other huge centers as well. DOUBLING ANNUALLY. All of them are CRAZY in their sizes and capacities. They literally use everything including the kitchen sink.

    Ahh... I was talking about switching their main animation tools to pure GPGPU (and again, AFAIK, Pixar hasn't done this yet). But yes, you are right, the market for shared (Tesla-based) servers/HPC is increasing.
  22. ^ The cheapest approach and configuration wins: fewer CPUs per GPU while keeping the rendering rate the same or greater.

    All of this is WAY over the heads of anyone looking at this post, but it is interesting...
  23. Quote:
    The cheapest approach and configuration wins: fewer CPUs per GPU while keeping the rendering rate the same or greater.

    Good point. Btw, when talking about performance, are you talking about raw performance or performance per watt?
  24. One advantage of the Xeon is the lower power consumption per GT/s. There are a lot of other advantages as well.

    Example:
    Xeon 5660 6-Core, 95W, 6.4 GT/s
    i7 980X 6-Core, 130W, 6.4 GT/s

    Now multiply that by 1000 -> 4000+ nodes in a render farm. That is a lot of HEAT + WASTED kWh; one of the biggest problems in these large server centers is A/C. Administrators & managers look at 5 years in their cost analysis. In my case I look at everything in terms of 5 years: I lease my servers and look at the income, expenses {including repair, downtime, etc.}, and management.

    In my office, if a server goes down I get a replacement the same day, and I couldn't care less about "fixing it." I only replace SAS drives, fans, and PSUs - on rare occasions, RAM. We also have spare servers, so I'll yank the RAID drives and I'm back up in a few minutes.
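
    A minimal sketch of that math in Python {the TDPs are from the example above; the node count, 24/7 duty cycle, and electricity rate are illustrative guesses, and cooling comes on top}:

    Code:
    # Back-of-the-envelope: CPU power delta scaled out to a render farm.
    XEON_TDP_W = 95           # Xeon 5660
    I7_TDP_W = 130            # i7 980X
    NODES = 1000              # hypothetical farm size, one CPU per node
    HOURS_PER_YEAR = 24 * 365
    RATE_PER_KWH = 0.10       # assumed $/kWh, before A/C overhead

    extra_kw = (I7_TDP_W - XEON_TDP_W) * NODES / 1000.0
    extra_kwh_per_year = extra_kw * HOURS_PER_YEAR
    print(f"Extra draw: {extra_kw:.0f} kW -> ~{extra_kwh_per_year:,.0f} kWh/yr "
          f"(~${extra_kwh_per_year * RATE_PER_KWH:,.0f}/yr in electricity alone)")

    At 1000 nodes, that 35 W per-CPU difference is already roughly 300,000 kWh per year before the A/C bill, which is why the 5-year cost analysis favors the lower-TDP part.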
  25. jaquith said:
    ^ The cheapest approach and configuration wins: fewer CPUs per GPU while keeping the rendering rate the same or greater.

    All of this is WAY over the heads of anyone looking at this post, but it is interesting...


    I can barely keep up with what you guys are saying. I'm understanding it - or at least some of it, lol. :cry:

    jaquith said:
    One advantage of the Xeon is the lower power consumption per GT/s. There are a lot of other advantages as well.

    Example:
    Xeon 5660 6-Core, 95W, 6.4 GT/s
    i7 980X 6-Core, 130W, 6.4 GT/s

    Now multiply that by 1000 -> 4000+ nodes in a render farm. That is a lot of HEAT + WASTED kWh; one of the biggest problems in these large server centers is A/C. Administrators & managers look at 5 years in their cost analysis. In my case I look at everything in terms of 5 years: I lease my servers and look at the income, expenses {including repair, downtime, etc.}, and management.

    In my office, if a server goes down I get a replacement the same day, and I couldn't care less about "fixing it." I only replace SAS drives, fans, and PSUs - on rare occasions, RAM. We also have spare servers, so I'll yank the RAID drives and I'm back up in a few minutes.


    Interesting. I never looked at it that way - of course I don't exactly run servers - but it makes sense. So I'm also assuming efficiency is increased compared to an i7 980X because of the decreased heat.

    Btw, what offices are you in charge of?
  26. Quote:

    In my office, if a server goes down I get a replacement the same day, and I couldn't care less about "fixing it." I only replace SAS drives, fans, and PSUs - on rare occasions, RAM. We also have spare servers, so I'll yank the RAID drives and I'm back up in a few minutes.

    If you don't mind me asking, what models/OEM?

    Quote:

    Now multiply that by 1000 -> 4000+ nodes in a render farm. That is a lot of HEAT + WASTED kWh; one of the biggest problems in these large server centers is A/C. Administrators & managers look at 5 years in their cost analysis. In my case I look at everything in terms of 5 years: I lease my servers and look at the income, expenses {including repair, downtime, etc.}, and management.

    Yup. That's what I thought, so you are talking about performance per kWh, correct?
  27. ^ Yes, the Xeons are more efficient: they consume fewer watts and run cooler.

    The short version is that I own an REO/IDX enterprise data center. I compile foreclosure {REO} & lis pendens {pre-foreclosure} data plus Realtor MLS listing data; match it up; analyze trends, forecasting, and comparable data -> package and resell to corporate clients. - Don't buy now; wait until ~2014-2015.
  28. Quote:
    analyze trends, forecasting, and comparable data -> package and resell to corporate clients. - Don't buy now; wait until ~2014-2015.

    Hmm.... now that's what I call useful information :ange:

    What do you mean by "REO/IDX enterprise data center"? This is data mining by the looks of it. Is Hadoop/MapReduce used over there?
  29. Shadow703793 said:
    Quote:
    analyze trends, forecasting, and comparable data -> package and resell to corporate clients. - Don't buy now; wait until ~2014-2015.

    Hmm.... now that's what I call useful information :ange:

    What do you mean by "REO/IDX enterprise data center"? This is data mining by the looks of it. Is Hadoop/MapReduce used over there?

    Never scraped once! I purchase all of my data directly from the state, the county, and the MLS IDX/RETS - either at the state level or at the MLS level I purchase the feeds. I am damn good at relational databases and at turning pattern-recognition analysis into code. BTW - RealtyTrac is one of my clients and not vice versa.
  30. jaquith said:
    I am damn good at relational databases and at turning pattern-recognition analysis into code.
    You should see his farm on Farmville too :lol:

    Okay, joking aside, this thread has taken a very interesting turn / direction. Please continue with your feedback on the original topic or the latest turn. I find this very interesting & educational :)
  31. PCIe to Rendering Farms {Servers} -> Thermal Dynamics -> Q&A.

    What more can be said? For 90% of people it won't make any difference -> 9.999% cannot afford it -> for 0.001% it may be a problem with next-gen GPUs. {all guessed #s}

    Leaving only the obscure commercial side now.
  32. tecmo34 said:
    You should see his farm on Farmville too :lol:

    Okay, joking aside, this thread has taken a very interesting turn / direction. Please continue with your feedback on the original topic or the latest turn. I find this very interesting & educational :)

    lol, well played. :lol:
  33. I am running a 5770 @ x8 with little to no performance decrease. So why would I need more lanes for a low/mid-range card?
  34. If PCIe 3.0 comes within the time frame of my upgrade, I would get it. Since it would be backward compatible, I would at least get a motherboard with PCIe 3.0 slots, all right.