The New PCI-Express 2.0 Standard: Is Your PSU Ready?

systemlord

Distinguished
Jun 13, 2006
2,737
0
20,780
In November we will see a new standard arrive: PCI-Express 2.0, for the DX10 generation.

PCI Express 2.0 was announced on January 15 by the PCI-SIG, an industry group of more than 850 companies, including AMD, Broadcom, HP, IBM, Intel, LSI Logic, Microsoft and Nvidia. It defines the standard for the next generation of motherboards and graphics cards, commonly dubbed DirectX 10 capable. The new architecture substantially boosts performance and utilization compared to its DirectX 9 / PCI Express 1.0 predecessor, and it requires a new 8-pin power connector.
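To put the bandwidth jump in perspective, here is a rough back-of-the-envelope sketch. The per-lane rates are from the PCI-SIG FAQ (2.5 GT/s for 1.x versus 5.0 GT/s for 2.0, both with 8b/10b encoding); the rest is simple arithmetic:

# Rough sketch of the headline bandwidth numbers behind the "2x" claim.
# PCIe 1.x signals at 2.5 GT/s per lane, PCIe 2.0 at 5.0 GT/s; both use
# 8b/10b encoding, so 10 bits on the wire carry one byte of data.

def lane_bandwidth_mb_s(gt_per_s):
    # GT/s -> usable MB/s per lane, per direction, after 8b/10b overhead
    return gt_per_s * 1e9 / 10 / 1e6

for name, rate in [("PCIe 1.x", 2.5), ("PCIe 2.0", 5.0)]:
    per_lane = lane_bandwidth_mb_s(rate)
    print(f"{name}: {per_lane:.0f} MB/s per lane, "
          f"{per_lane * 16 / 1000:.1f} GB/s per direction on an x16 slot")

# PCIe 1.x: 250 MB/s per lane, 4.0 GB/s per direction on an x16 slot
# PCIe 2.0: 500 MB/s per lane, 8.0 GB/s per direction on an x16 slot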

For those building new systems: if you're going to be spending your hard-earned money, be sure that your power supply supports PCI Express 2.0 / DX10. Many companies are only just coming out with this new breed of PSUs. I just didn't want you guys and gals to make the same mistake I did by not buying a PSU that was ready for PCI Express 2.0. If you care to add a comment, please do; being prepared is better than not being prepared at all. Here is a link for more info: http://www.pcisig.com/news_room/faqs/pcie2.0_faq/

 

airblazer

Distinguished
Jan 12, 2007
181
0
18,680
If you want a good power supply, the 1200W PSU from Thermaltake is compatible.
It's a modular PSU and has the cables for 8-pin video cards.
 

systemlord

Distinguished
Jun 13, 2006
2,737
0
20,780


That's one that I was looking at, because 1200W isn't going to be overkill in a year. You have the PC case (TJ09) that I want; did you find that your PSU cables were long enough? Thanks.
 

IcY18

Distinguished
May 1, 2006
1,277
0
19,280
1200W will always be overkill. More than 90% of the systems out there right now could get by with a solid 550W power supply; people have just overhyped the power draw of all these graphics cards.
 

paq7512

Distinguished
Jul 9, 2006
473
0
18,790
Right now I am running a 750W with 72A, but he is right, most systems could get by with a 550W, probably 80%+ of them. There are Core 2 builds running on 450W with cards like the 8600 and 7800. My friend has an 8800GTX with a 550W and it works fine.
 

Mugz

Splendid
Oct 27, 2006
7,127
0
25,790
500W PSU, generic.

Intel Pentium 4 HT 631, 3GHz stock speed.
ECS PF5 Extreme ATX mobo (Crossfire supported, though not used).
Samsung 512MB DDR2-533 x2 (1GB total, 3-3-3-8, 400MHz).
XFX GeForce 8800GTS, w/6pin power connector connected.
WD 80GB SATA2 HDD, nothing fancy.
Samsung SATA DVD-RW.

CPU is OCed to 3.6GHz (240MHz x15), RAM is at 480MHz DDR, GPU is at 770 core and 1100 mem.

Stuffing HDDs in there hasn't made a bit of difference, and that system is rock-solid stable under maximum loads.

However, if I want to use SLI or Crossfire, then I'll have to go up to 650~800W, no more.

These really big (>1kW) units are completely over-specified. But then, if you've got a 1.2kW PSU and your components pull at most 450W, it won't draw more, and you're future-proof in terms of PSU, unless they change the standard again...
 

IcY18

Distinguished
May 1, 2006
1,277
0
19,280



Yes, 550W. Like it's been mentioned before, a 550W PSU from a solid/quality manufacturer would be able to run a Core 2 Duo with an 8800GTX. There is just a knee-jerk reaction among many people that we need these huge PSUs to power these systems, when in reality it's been proven time and time again that most systems don't consume over 400W.

http://www.anandtech.com/systems/showdoc.aspx?i=2818&p=2

This machine is run by a 620W PSU. It consists of 2x X1900XTs, a Core 2 Duo clocked at 3.5GHz, a watercooling pump, 4 fans, 2 optical drives, an X-Fi soundcard, overclocked RAM and 2 Raptor hard drives. Basically this system has just about every peripheral you could have, and ABS thinks it's just fine to put a 620W power supply in there.

So yes, a 550W unit could run most, if not all, single-card setups out there right now.
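For a rough sanity check, here is a quick sketch adding up ballpark peak draws for a high-end single-card rig. The per-component wattages are rough guesses for illustration, not measured figures:

# Quick sanity check: ballpark peak DC draw of a high-end single-card rig.
# The wattages below are rough estimates for illustration, not measurements.
components = {
    "Core 2 Duo (overclocked)": 90,
    "8800GTX":                  145,
    "Motherboard + RAM":        50,
    "2x hard drives":           20,
    "2x optical drives":        15,
    "Fans, pump, soundcard":    30,
}
total = sum(components.values())
print(f"Estimated peak DC load: {total} W")        # ~350 W
# Even with ~30% headroom, a quality 550 W unit covers it comfortably.
print(f"With 30% headroom: {total * 1.3:.0f} W")   # ~455 W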
 

NaDa

Distinguished
Mar 30, 2004
574
0
18,980
It's all marketing, I tell ya.

My old Athlon64 3500+ (Newcastle) probably used more power than a Q6600, and people think these chips are so hot. And you probably wouldn't know it, but I have a Celeron (Prescott) 2.6GHz which spits more fire than an Athlon 3500+ and Q6600 together (at least it feels that way).

Who wants a machine that consumes 1200W? That's insane!
The industry can't go this way. Intel has turned away from NetBurst; the graphics industry must follow.
From what I have read on the Inq, the G92 won't be so hot.
But the Inq only gets these things 50% right!

Those numbers are so inflated. It's absurd.

I see 2kW PSUs on the horizon. Who's buying?
 

cb62fcni

Distinguished
Jul 15, 2006
921
0
18,980
One thing to keep in mind: just because a PSU is rated for >1kW, it's not going to be putting that out constantly. Lots of people seem to have that misconception. But getting a 550W PSU for a system that pulls 525W isn't exactly ideal either. Most PSUs hit their highest efficiency at roughly 25 to 60 percent load. So if you run a 525W system from a 1000W PSU, you'll be at about 50% load and very close to peak efficiency. How much higher is peak efficiency? Not much; in any quality PSU there's a 5-10% difference between full-load efficiency and peak efficiency, so it probably won't be readily apparent on your next power bill. But there's another big advantage to running at a lower load: less stress, thermal and otherwise, on your PSU, and a very real possibility of longer component lifespans.
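Here is a rough worked example. The efficiency figures are assumed ballpark values for units of this class, not datasheet numbers:

# Illustration of why a lightly-loaded big PSU barely changes your bill.
# Efficiency figures below are assumed ballpark values, not from a datasheet.
def wall_draw(dc_load_w, efficiency):
    # Power pulled from the outlet for a given DC load
    return dc_load_w / efficiency

load = 525  # watts of DC load from the components

# 550 W unit near full load, assumed ~78% efficient;
# 1000 W unit near 50% load, assumed ~83% efficient (closer to its peak)
for name, eff in [("550 W unit @ ~95% load", 0.78),
                  ("1000 W unit @ ~53% load", 0.83)]:
    print(f"{name}: {wall_draw(load, eff):.0f} W at the wall")

# 550 W unit:  ~673 W at the wall
# 1000 W unit: ~633 W at the wall -> ~40 W difference, pennies per day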

So if you can afford a big honkin' PSU, go for it. If not, make sure you leave yourself at least a little wiggle room between your peak system load and your PSU's rated output. You'll actually pay less in kWh for the big honker than for a smaller one. Until you run triple Crossfire and two watercooling loops and 5 TECs and a huge overclock and a 15-disc RAID array and 4 Blu-ray drives and 20 fans and a sweet cold cathode light. Then your power bill might go up... noticeably.
 

cb62fcni

Distinguished
Jul 15, 2006
921
0
18,980


It's not entirely the fault of the GPU producers. They're struggling to keep up with software advances that place a huge load on the modern GPU. Massive power and high efficiency are extraordinarily difficult to achieve at the same time. Think of internal combustion engines: horsepower came long before MPG. This is the same deal. Because the advances are coming so quickly, the GPU makers must apply a brute-force approach. The deal was the same for CPUs not too terribly long ago, but if you look, CPU power requirements have hardly budged for the last 4 years or so. That's the primary reason CPU producers have been able to increase the efficiency of their designs. Unfortunately, I don't see the situation resolving itself; there's simply a huge demand for gorgeous graphics. So every shrink will be clocked higher and volted to the max. There's certainly some room for them to increase efficiency, but it would take time and effort that they would rather allocate to sheer power.
 

systemlord

Distinguished
Jun 13, 2006
2,737
0
20,780
What about these new PCI Express 2.0 graphics cards that will be able to consume 225-300W per card? Can you imagine running two of them in SLI? That's up to 600W. Granted, I will never run SLI myself.
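For reference, here is roughly where the 300W ceiling comes from under the new spec, as a sketch of the per-source limits:

# Where the 300 W ceiling comes from, per the PCIe 2.0 power delivery limits:
budget = {
    "x16 slot":          75,   # drawn through the motherboard connector
    "6-pin connector":   75,
    "8-pin connector":  150,   # the new PCIe 2.0 plug
}
print(f"Max per card: {sum(budget.values())} W")          # 300 W
print(f"Two cards in SLI: {2 * sum(budget.values())} W")  # 600 W worst case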
 

Scottz

Distinguished
Nov 23, 2005
32
0
18,530
Well, personally, if I'm going to throw down that amount of money on a new PCIe 2.0 motherboard and card(s), I'll more than likely spend a bit extra on a new power supply anyway.
 

rammedstein

Distinguished
Jun 5, 2006
1,071
0
19,280


I find the cables almost too long in my case with that PSU. I can say it sturdily powers my system like a dream... *mind wanders to PSU slowly chugging along*. From what I have experienced and have read in reviews, the PSU is great for its price compared to those by PC Power & Cooling.
 

systemlord

Distinguished
Jun 13, 2006
2,737
0
20,780


You need to reread the OP, as you have missed a very important link which backs up what I am saying. Then you'll see that your statement above was made without thought. I really wish people would read the whole post (including links) before posting. Oh yeah, never say never.
 

Hatman

Distinguished
Aug 8, 2004
2,024
0
19,780
What did I miss?

It states that it will be ABLE to give them that much power. Which card uses that? None use that much! And I doubt they will! The most power-hungry card about is the 2900XT, on the 80nm process, at 225 watts, nowhere near 300 watts, and the cards are getting a die shrink to 65nm and beyond for the next gen.

Kinda funny, mate: type your system into the PSU calc and it recommends about 350 watts :D
 

systemlord

Distinguished
Jun 13, 2006
2,737
0
20,780


In September we will see X38 with PCI-E 2.0, then in November you'll see something from Nvidia to back X38's launch and Crysis. There's a new standard upon us, PCI-E 2.0, and if you want to ignore it, fine, but its specs include 2x the bandwidth and 225-300W of power per card. Did you really read the whole link yet? Some people are afraid of change and will not accept it either.

Maybe right away the cards won't use 300W, but to say "it will never happen" is like saying progress will never happen. No offence, but you seem to be the only one so far having trouble accepting the new standard, which is coming whether you agree or not. :) 9800, anyone?
 

gwolfman

Distinguished
Jan 31, 2007
782
0
18,980

I have a device that measures the power usage of anything plugged into it. I have a Core 2 Duo E6600 overclocked to 3.38GHz, an nVidia 8800 GTX, 2GB RAM @ 2.3V, a Creative X-Fi, 3 HDDs in RAID 0, 2 optical drives, a water cooling kit, various system fans, and only a 550W power supply. The power usage while playing games is only ~360 watts at the wall. I don't think you'll need a 1200 watt power supply for a long time.
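And remember the meter reads draw at the wall; the PSU's DC output is lower still. Assuming roughly 80% efficiency for my unit (a guess, I haven't characterized it), a quick sketch:

# The meter measures draw at the wall; the components see less than that.
# Assuming ~80% PSU efficiency (a guess, not a measured figure):
wall_watts = 360
efficiency = 0.80  # assumed
dc_load = wall_watts * efficiency
print(f"Approx. DC load on the PSU: {dc_load:.0f} W")     # ~290 W
print(f"Headroom on a 550 W unit: {550 - dc_load:.0f} W") # ~260 W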
 

gwolfman

Distinguished
Jan 31, 2007
782
0
18,980

Well put. I am an example of that, with the numbers to prove it. I agree 100%.
 

systemlord

Distinguished
Jun 13, 2006
2,737
0
20,780
It will be interesting to see what happens when people start trying to plug their 6-pin PCI-E 1.0 connectors into an 8-pin socket designed for a PCI-E 2.0 graphics card.
 

xela

Distinguished
Apr 27, 2007
153
0
18,680
LoL @ this thread

Can hardly type.. :D :D :D :D :D

Consider this:

Upcoming CPUs from both AMD and Intel will consume less power than what we have today.

Nvidia has promised that the 9800 GTX will consume less power than the 8800 GTX.

The Core 2 consumes less power than the Pentium D while making a joke of it performance-wise.

Why do you assume more power = better performance? The P4 would destroy a Core 2 Quad if that were true.

Remember the wires that go from the PSU to the GPU? Those carry electricity. Could it be that the power PCIe 2.0 can provide is meant to get rid of those pesky wires, rather than to triple the power that GPUs will use in the future?

I am kind of shocked that no one has suggested this in so many posts :ouch:

 
