
PCIe 3.0 - Will You Make the Jump Right Away?


Total: 22 votes (3 blank votes)

  • Yes... I must have the latest and greatest - 10%
  • No... I'm just fine with PCIe 2.0 - 40%
  • Maybe... I want to give it some time on the market first & see the benchmarks - 30%
  • Undecided... I'll play it by ear on where I'm at for a need to upgrade - 20%
  • Who uses a dedicated GPU? On-board FTW - 0%
October 13, 2010 5:13:10 PM

With the release of PCIe 3.0 around the corner in 2011, will you make the jump to a PCIe 3.0-ready motherboard when they're released, based on the increased bandwidth / performance?



PCI Express® 3.0 Frequently Asked Questions
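For a rough sense of what the extra bandwidth amounts to, here is a back-of-the-envelope sketch of per-lane throughput using the published line rates and encoding overheads (real-world throughput will be somewhat lower once protocol overhead is counted):

Code:
# Rough per-lane and x16 throughput for each PCIe generation, from the
# published line rates and encoding schemes. Actual throughput is lower
# once packet/protocol overhead is included.

GENS = {
    # name: (line rate in GT/s, encoding efficiency)
    "PCIe 1.x": (2.5, 8 / 10),     # 8b/10b encoding
    "PCIe 2.x": (5.0, 8 / 10),     # 8b/10b encoding
    "PCIe 3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

for name, (gt_s, eff) in GENS.items():
    per_lane_gb_s = gt_s * eff / 8          # payload GB/s per lane, one direction
    print(f"{name}: {per_lane_gb_s * 1000:.0f} MB/s per lane, "
          f"{per_lane_gb_s * 16:.1f} GB/s for an x16 slot")

So an x16 slot goes from roughly 8 GB/s to roughly 16 GB/s in each direction.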


October 13, 2010 5:30:16 PM

It's hard enough to saturate PCIe 2.1 as it is.. sure, it'll matter when it's actually utilised, but I'm not really seeing a point right now.
October 13, 2010 6:07:43 PM

^+1 same here.
October 13, 2010 7:19:54 PM

It will be an interesting rig that manages to fill PCIe 3.x: lots of dual GPUs plus SSDs we have yet to see. I assume the x16 lane structure will remain, but with the bandwidth doubled. The X68 & P65 are 'supposed' to support the PCIe 3.x standard. However, AMD has been really vague about Bulldozer and the PCIe 3.x standard?! I would assume it will support it, but I cannot find anything from AMD confirming PCIe 3.0.
October 13, 2010 9:39:22 PM

g00fysmiley said:
It's hard enough to saturate PCIe 2.1 as it is.. sure, it'll matter when it's actually utilised, but I'm not really seeing a point right now.

Agree... But isn't that the same thing we said about going from PCIe 1.0 to PCIe 2.0?

It might take a little bit for the GPUs to catch up, but soon we'll wonder how we did without it :)  We might get a setup/GPU that can play Crysis maxed out without needing two or more GPUs :lol: 
October 13, 2010 11:07:24 PM

The x8 lanes on current PCIe are almost saturated from scaling, and with a 4-way OC GTX 480 setup + RAID 0 C300s the available bandwidth is practically @ 100%. The next step is 4-way + a dedicated PhysX card, and even an SR-2 gets saturated in Futuremark. A waterblocked GTX 480 only takes a single-width slot, which makes that possible.

Now, as I alluded to earlier, if you double the available bandwidth then this is no longer a concern - for the time being, with PCIe 3.x. Cannot wait to see what rig will max out PCIe 3.0.
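To put the "doubled" point in numbers, a small sketch of what the common slot widths give you under 2.x versus 3.0 (payload bandwidth only; what the GPUs and SSDs actually demand depends on the workload, so only the slot side is shown):

Code:
# Usable slot bandwidth for the lane widths boards fall back to in 3- and
# 4-way setups, under PCIe 2.x vs PCIe 3.0.

def slot_gb_s(lanes, gt_s, efficiency):
    """Payload bandwidth of a slot in GB/s (one direction)."""
    return lanes * gt_s * efficiency / 8

for lanes in (16, 8, 4):
    v2 = slot_gb_s(lanes, 5.0, 8 / 10)       # PCIe 2.x, 8b/10b
    v3 = slot_gb_s(lanes, 8.0, 128 / 130)    # PCIe 3.0, 128b/130b
    print(f"x{lanes}: PCIe 2.x ~{v2:.1f} GB/s, PCIe 3.0 ~{v3:.1f} GB/s")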
October 13, 2010 11:12:12 PM

If the SandForce 2000 can already max out SATA III, I guess the next step would be PCIe. Given the way we are heading (mainly SandForce SSDs and multi-GPU from ATI/nVidia), we will need PCIe 3.0 in about 2 years.
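For context on why SATA III is the ceiling that gets hit first: its 6 Gb/s line rate with 8b/10b encoding works out to roughly 600 MB/s usable, while even a narrow PCIe link has more headroom. A quick sketch (the 500 MB/s drive figure is just an assumed example, not a measured number):

Code:
# Compare SATA III's usable ceiling with a PCIe 2.0 x4 link.
# The 500 MB/s sequential-read figure for a "fast SSD" is an assumption
# for illustration only.

sata3_mb_s = 6.0 * (8 / 10) / 8 * 1000       # ~600 MB/s usable
pcie2_lane_mb_s = 5.0 * (8 / 10) / 8 * 1000  # 500 MB/s per PCIe 2.0 lane
fast_ssd_mb_s = 500                          # assumed fast SandForce-class drive

print(f"SATA III usable:   ~{sata3_mb_s:.0f} MB/s")
print(f"Assumed fast SSD:   {fast_ssd_mb_s} MB/s (already close to the ceiling)")
print(f"PCIe 2.0 x4 link:  ~{4 * pcie2_lane_mb_s:.0f} MB/s")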
October 14, 2010 2:31:59 PM

It's interesting, but I will probably wait a long time before changing to PCI-E 3.0. Like any new technology, PCI-E 3.0 will be overpriced when it comes out, and I don't plan to pay for an overpriced product that drops in price a few months later. :lol: 
October 14, 2010 3:34:59 PM

I'm going to wait to see how much performance gain it will give (compared to PCI-E 2.0/2.1).
October 14, 2010 3:42:30 PM

jaquith said:


I lol'd at this xD :lol: 
October 14, 2010 4:14:34 PM

@Gekko Shadow... jaquith is working on the OMG! system next. He has the 4-way, 3-way, and who needs the bottom ones :lol:  when you have those. He has run out of room to improve, so PCIe 3.0 is the way to go [:mousemonkey:5]
October 14, 2010 4:17:57 PM

tecmo34 said:
@Gekko Shadow... jaquith is working on the OMG! system next. He has the 4-way, 3-way, and who needs the bottom ones :lol:  when you have those. He has run out of room to improve, so PCIe 3.0 is the way to go [:mousemonkey:5]


Holy scat, Batman!! He is now my idol in the SLI world! T-T
Hell, I'm barely making my way to 3-way SLI T-T...
October 18, 2010 1:54:13 AM

There should be a nice bump coming in GPUs, and possibly in gaming needs as well, coinciding with the new console releases. Yeah, 2 years or so.
October 18, 2010 1:58:16 PM

I don't wanna wait 2 years. Why not tomorrow? >=P
October 18, 2010 3:10:56 PM

Interesting thoughts on CPU and GPU progression over the years. Clearly, current limits are being hit with PCIe 2.x, creating the need for the next revision, PCIe 3.x. There are needs beyond the typical "home gaming PC."

Tesla GPUs running off Xeon rigs - http://www.dell.com/content/products/productdetails.asp...



Moore's Law {transistor count doubles every 18 months} - http://en.wikipedia.org/wiki/Moore's_law {some debate the "Law" but correlation of GPU vs CPU still applies to FLOPS and transistor count}
Article - http://www.forbes.com/2009/06/02/nvidia-gpu-graphics-te...

FLOPS - http://en.wikipedia.org/wiki/FLOPS
Parallel Computing - http://en.wikipedia.org/wiki/Parallel_computing
GPU -

{embedded charts: transistor count over time; Hendy's Law - the graphs need updating}
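As a toy illustration of the doubling framing above (the 18-month period is the one quoted in this post; the 1-billion starting count is just an arbitrary example, not a figure for any particular chip):

Code:
# Toy projection of transistor count under a fixed doubling period.
# Starting count and doubling period are illustrative only.

def projected_transistors(start_count, years, doubling_period_years=1.5):
    """Project a count forward assuming steady exponential growth."""
    return start_count * 2 ** (years / doubling_period_years)

start = 1.0e9  # a hypothetical 1-billion-transistor part
for year in (0, 2, 4, 6):
    print(f"year {year}: ~{projected_transistors(start, year) / 1e9:.1f}B transistors")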
October 18, 2010 3:24:52 PM

"Tesla GPUs running off Xeon rigs"

SWEET BABY JEBUS AND THE ORPHANS! D:!!
October 18, 2010 3:37:13 PM

Ever seen a Pixar movie? Or a commercial, or any CGI movie? If so, it was rendered on an "image farm" (render farm). ROOMS FULL of servers, with hundreds of thousands to millions spent.

The Pro GPUs work more efficiently with Xeon. That small 3U rack is ~$25,000+.
October 18, 2010 3:52:18 PM

^ Yup. But from what I understand, Pixar still hasn't switched and won't switch to GPGPU for some time.

Also, realize that some tasks really CAN'T be made to run in a massively parallel way.

As for whether PCIe 2.0 x16 bandwidth is enough:
Quote:
So with these results, Dell’s final answer over whether a single x16 PCIe bus is enough was simply “sometimes”. If an application scales against multiple GPUs in the first place, it usually makes sense to go further – after all if you’re already on GPUs, you probably need all the performance you can get. However if it doesn’t scale against multiple GPUs, then the bus is the least of the problem. It’s in between these positions where the bus matters: sometimes it’s a bottleneck, and sometimes it’s not. It’s almost entirely application dependent.

Source: http://www.anandtech.com/show/3972/nvidia-gtc-2010-wrap...
October 18, 2010 4:00:03 PM

Oh, I see. I suppose I could buy one or two. x)
October 18, 2010 4:02:02 PM

Shadow703793 said:
^ Yup. But from what I understand, Pixar still hasn't switched and won't switch to GPGPU for some time. [...] It's almost entirely application dependent.
Source: http://www.anandtech.com/show/3972/nvidia-gtc-2010-wrap...


Uh-huh. So then it's still the same thing - even with 3.0, this same fact will still hold then?
October 18, 2010 5:52:32 PM

Shadow703793 said:
^ Yup. But from what I understand, Pixar still hasn't switched and won't switch to GPGPU for some time. [...] It's almost entirely application dependent.
Source: http://www.anandtech.com/show/3972/nvidia-gtc-2010-wrap...


Untrue, they are constantly adding their "shared" centers, and there are other huge centers as well. DOUBLING ANNUALLY. All of them are CRAZY in their sizes and capacities. They literally use everything including the kitchen sink.

http://cnettv.cnet.com/2001-1_53-25855.html {there are more up-to-date links, but I couldn't find the video I had on record} As the economy goes down, movie-going only goes up.

Also, that article covers the Tesla 1000 series -> "HPC: Dell Says a Single PCIe x16 Bus Is Enough for Multiple GPUs – Sometimes". The link from Dell was for a Tesla 2050 x10 in a 3U.

Again, for a "normal people" gaming rig, scaling is indeed an issue when slots drop to x8 / x4 PCIe 2.x lanes, but as I posted ^ way up there, PCIe 3.x effectively "doubles the available bandwidth."
October 18, 2010 6:02:49 PM

^ Yes, but the point I was trying to make is that no matter how much you try, there will always be things that can't be parallelized effectively.

Quote:
Untrue, they are constantly adding their "shared" centers, and there are other huge centers as well. DOUBLING ANNUALLY. All of them are CRAZY in their sizes and capacities. They literally use everything including the kitchen sink.

Ahh... I was talking about switching their main animation tools to pure GPGPU (and again, AFAIK, Pixar hasn't done this yet). But yes, you are right, the market for shared (Tesla) based servers/HPC is increasing.
October 18, 2010 6:08:09 PM

^ The cheapest approach and configuration wins: a lower CPU-to-GPU ratio while keeping the rendering rate the same or greater.

All of this is WAY over the heads of anyone looking at this post, but it is interesting...
October 18, 2010 8:00:49 PM

Quote:
The cheapest approach and configuration wins: a lower CPU-to-GPU ratio while keeping the rendering rate the same or greater.

Good point. Btw, when talking about performance, are you talking about raw performance or performance per watt?
October 18, 2010 8:32:36 PM

One advantage of the Xeon is the low power consumption per GT/s. There are a lot more advantages as well.

Example:
Xeon X5660 6-core, 95W, 6.4 GT/s
i7 980X 6-core, 130W, 6.4 GT/s

Now multiply that by 1000 -> 4000+ nodes in a render farm. That is a lot of HEAT + WASTED kWh; one of the biggest problems in these large server centers is A/C. Administrators & managers look at 5 years in their cost analysis. In my case I look at everything in terms of 5 years: I lease my servers and look at income, expenses {including repair, downtime, etc.}, and management.

In my office if a server goes down I get a replacement the same day, and couldn't care less about "fixing it." I only replace SAS drives, fans, and PSUs - on rare occasion RAM. We also have spare servers, so I'll yank the RAID drives and I'm up in a few minutes.
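To make the heat/kWh point concrete, here is a back-of-the-envelope sketch using the 35 W per-socket difference from the example above; the node count, duty cycle, and electricity price are assumptions, and cooling costs would come on top:

Code:
# What a 35 W per-socket difference (95 W Xeon vs 130 W i7 from the example
# above) adds up to across a render farm. Node count, sockets per node,
# duty cycle, and electricity price are illustrative assumptions.

watts_saved_per_socket = 130 - 95
sockets = 1000 * 2                  # assume 1000 dual-socket nodes
hours_per_year = 24 * 365           # assume 24/7 rendering
price_per_kwh = 0.10                # assumed $/kWh, before A/C costs

kwh_saved = watts_saved_per_socket * sockets * hours_per_year / 1000
print(f"~{kwh_saved:,.0f} kWh/year saved, roughly ${kwh_saved * price_per_kwh:,.0f}/year "
      f"before the matching reduction in cooling load")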
October 18, 2010 8:56:04 PM

jaquith said:
^ The cheapest approach and configuration wins: a lower CPU-to-GPU ratio while keeping the rendering rate the same or greater.

All of this is WAY over the heads of anyone looking at this post, but it is interesting...


I can barely keep up with what you guys are saying. I'm understanding it - or at least some of it, lol. :cry: 

jaquith said:
One advantage of the Xeon is the low power consumption per GT/s. There are a lot more advantages as well.

Example:
Xeon X5660 6-core, 95W, 6.4 GT/s
i7 980X 6-core, 130W, 6.4 GT/s

Now multiply that by 1000 -> 4000+ nodes in a render farm. That is a lot of HEAT + WASTED kWh. [...]


Interesting. Never looked at it that way - of course I don't exactly run servers - but it makes sense. And so I'm also assuming that efficiency is increased compared to an i7 980X because of the decrease in heat.

Btw, what offices are you in charge of?
October 18, 2010 9:21:29 PM

Quote:

In my office if a server goes down I get a replacement the same day, and couldn't care less about "fixing it." I only replace SAS drives, fans, and PSUs - on rare occasion RAM. We also have spare servers, so I'll yank the RAID drives and I'm up in a few minutes.

If you don't mind me asking, what models/OEM?

Quote:

Now multiply that by 1000 -> 4000+ nodes in a render farm. That is a lot of HEAT + WASTED kWh; one of the biggest problems in these large server centers is A/C. Administrators & managers look at 5 years in their cost analysis. In my case I look at everything in terms of 5 years: I lease my servers and look at income, expenses {including repair, downtime, etc.}, and management.

Yup. That's what I thought, so you are talking about performance per kWh, correct?
October 18, 2010 9:28:17 PM

^ Yes, the Xeons are more efficient: they consume fewer watts and run cooler.

The short version is that I own an REO/IDX enterprise data center. We compile foreclosure {REO} & lis pendens {pre-foreclosure} data plus Realtor MLS listing data; match it up; analyze trends, forecasts, and comparable data -> package it and resell to corporate clients. - Don't buy now; wait until ~2014-2015
October 18, 2010 9:43:09 PM

Quote:
analyze trends, forecasts, and comparable data -> package it and resell to corporate clients. - Don't buy now; wait until ~2014-2015

Hmm.... now that's what I call useful information :ange: 

What do you mean by "REO/IDX enterprise data center"? This is data mining by the looks of it. Is Hadoop/MapReduce used over there?
October 18, 2010 10:06:16 PM

Shadow703793 said:
What do you mean by "REO/IDX enterprise data center"? This is data mining by the looks of it. Is Hadoop/MapReduce used over there?

Never scraped once! I purchase all of my data directly from the State, County, and MLS IDX/RETS feeds, either at the State level or at the MLS level. I am damn good at relational databases and at turning pattern-recognition analysis into code. BTW - RealtyTrac is one of my clients, and not vice versa.
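For anyone curious what the "match it up" step looks like in practice, here is a minimal sketch of joining a foreclosure feed against MLS listings on a shared parcel number; the table names, columns, and APN key are hypothetical, since real IDX/RETS feeds each have their own schema:

Code:
# Minimal sketch: join an REO / lis pendens feed against MLS listings on a
# shared parcel identifier (APN). All names and schemas here are made up.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE reo_filings  (apn TEXT, filing_date TEXT, status TEXT);
CREATE TABLE mls_listings (apn TEXT, list_price INTEGER, list_date TEXT);
INSERT INTO reo_filings  VALUES ('123-456-789', '2010-09-01', 'REO');
INSERT INTO mls_listings VALUES ('123-456-789', 185000, '2010-10-05');
""")

# Pull the matched records a trend/comparables report would start from.
rows = con.execute("""
    SELECT r.apn, r.status, r.filing_date, m.list_price, m.list_date
    FROM reo_filings AS r
    JOIN mls_listings AS m ON m.apn = r.apn
""").fetchall()
print(rows)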
October 18, 2010 10:40:01 PM

jaquith said:
I am damn good at relational databases and at turning pattern-recognition analysis into code.
You should see his farm on Farmville too :lol: 

Okay, joking aside, this thread has taken a very interesting turn / direction. Please continue with your feedback on the original topic or the latest turn. I find this very interesting & educational :) 
October 18, 2010 10:52:22 PM

PCIe to rendering farms {servers} -> thermodynamics -> Q&A.

What more can be said? For 90% of people it won't make any difference -> 9.999% cannot afford it -> for 0.001% it may be a problem with the next gen of GPUs. {all guessed #s}

That leaves only the obscure commercial side now.
October 18, 2010 10:57:39 PM

tecmo34 said:
You should see his farm on Farmville too :lol: 

Okay, joking aside, this thread has taken a very interesting turn / direction. Please continue with your feedback on the original topic or the latest turn. I find this very interesting & educational :) 

lol, well played. :lol: 
December 15, 2010 7:16:47 AM

I am running a 5770 @ x8 with little to no performance decrease. So why would I need more lanes for a low/mid-range card?
December 15, 2010 8:16:07 AM

If PCIe 3.0 comes out within the time frame of my upgrade, I would get it. Since it would be backward compatible, I would at least get a motherboard with PCIe 3.0 slots.