How many watts does PCIe 3.0 provide?

kondicykel

Honorable
Apr 3, 2012
19
0
10,510
that just made me even more confused.. hehe xD does it mean that PCIe 3.0 will support up to 300W GPUs? or does it mean that the PCIe socket itself will provide 300 watts? which seems like a lot compared to 2.0 with 75W from the socket?
 
It's counterintuitive in that the 'new' PCIe 3.0 GPUs are using less power than their equivalent PCIe 2.0 GPUs.

Reading the data, PCIe can deliver up to 300W, which might explain going from (some) 8/8-pin to 6/6-pin PCIe power; that said, I've noticed 6/8-pin versions of the GTX 680 and HD 7970. Many 3/4-WAY Z77 boards add AUX (Molex) power to the board itself. So it's not something that I worry about at all. As we migrate towards 4K resolutions I can foresee a lot more power consumption IF lower-nm GPU cores don't become available with higher efficiency.

But to answer your question: yep, it went from 75W up to 300W. As more data gets pushed through the PCIe lanes (the pipe), so does the relative consumption. Again, I presume that GPUs will become more efficient to offset rendering capability over time.
 

kondicykel
oh.. that's awesome. yeah, i saw that the GTX 680 only had 6/6-pin, that's why i wondered about PCIe 3.0. so as i understand it, a 6/8-pin 680 isn't needed if you've got PCIe 3.0 on the mobo? :)

i use the Z77, so i will go for the 6/6-pin, unless you know any benefits of using 6/8-pin in my case? maybe some more stability? what do i know, i'm not the expert xD

thanks a lot for your time and your help. i appreciate it :)
 
Take the GTX 680 as an example (the same principle applies to AMD/ATI): there are reference cards which are fully specced by nVidia; the spec covers the 6/6-pin connectors, PCB, cooling, etc., and the GPU running at a specified clock/core/RAM speed. After the initial release, companies start producing 'non-reference' cards, e.g. overclocked, multiple fans, different PCB, additional power phases, etc., which exceed the initial spec and therefore require additional 'power', and that's when you start seeing 8/6-pin etc.

The 'benefit' of the non-spec cards is improved performance, but with that you also get less efficiency, more heat, heat dissipated into the case rather than exhausted out, etc. So it depends on what YOU want and how important OC'ing your GPU is to YOU. My feelings are irrelevant; I typically opt for the non-reference GPUs, often using water to cool them.

Things to note: if you're using a single HD monitor at 1920x1080, then the 2GB offered on the GTX 680 is plenty, but if you're planning to run, say, 3 HD monitors, then you definitely don't want a 'reference' GTX 680, which is why I'm personally waiting for a 4GB non-reference to avoid the 'vRAM bottlenecks' that often occur when rendering 6MP+ with high AA and/or details.
 


I think PCIe 2 & 3 are essentially the same: a theoretical 75W is available from the slot, with total power rated at 300W -- 75W + (75W + 150W).
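That 75W + (75W + 150W) arithmetic can be sketched in a few lines (a toy sketch, not the official spec text; the connector wattages are the commonly quoted 75W 6-pin / 150W 8-pin figures, and the helper name is made up):

```python
# Toy sketch of the PCIe power budget arithmetic above.
# Assumed figures: 75 W from the x16 slot, 75 W per 6-pin aux
# connector, 150 W per 8-pin aux connector.
SLOT_WATTS = 75
CONNECTOR_WATTS = {"6-pin": 75, "8-pin": 150}

def card_power_budget(connectors):
    """Watts a card may draw from the slot plus its aux connectors."""
    return SLOT_WATTS + sum(CONNECTOR_WATTS[c] for c in connectors)

print(card_power_budget(["6-pin", "8-pin"]))  # 300 -> the 75 + (75 + 150) total
print(card_power_budget(["6-pin", "6-pin"]))  # 225 -> a 6/6-pin card like the reference GTX 680
```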




 
From the PCIe 'slot' itself it's 25W. The question was "How many watts does PCIe 3.0 provide", so I simply noted the PCIe watt spec; under severe OC each card can potentially draw a 'total' of 500W, even on PCIe 2.0.

The 75W is 25W from the PCIe 'slot' plus 50W from a minimum 2x4 connector, up to 175W: 25W PCIe 'slot' plus 100W 2x4 connector plus 50W 2x3 connector; see slide 11 from above.
 

kondicykel
it was related to OC when i asked if there were any benefits to 6/8-pin.
i'm just thinking: if the PCIe socket provides 300W plus 150W from 2x 6-pin connectors, then it would be plenty for overclocking too, since the card should only use something like 194W at stock speed?


i'm starting out just using my plasma TV with HDMI... maybe not the best solution, but as long as it works i'm happy..
i'm not going to overclock it as soon as i get it, but i would like the opportunity to do it later when i get some water cooling.. also going for SLI later on.. and maybe the 3-monitor setup even later...
is it possible to run the 2GB version in SLI with the 4GB? cuz i can't wait for the 4GB version to appear xD

i've also heard about a 6GB 8/8-pin 680... don't know if it was just rumors..

sorry for my english btw. i'm danish xD
 
More pins = more power, or the potential for more power; see the reference vs non-reference discussion above.

I don't recommend 'mixing' vRAM sizes; you end up with the lesser, i.e. 2GB. 4-WAY example: 4GB + 2GB + 4GB + 4GB = 2GB of vRAM in SLI. On a TV, presumably 1920x1080, you'll have improved performance with 4GB, but above 40~50FPS, and certainly above the refresh rate, those FPS are only nice to brag about. Also, where 4GB comes into play is IF you opt for 3D: you essentially lose 1/2 your render FPS.
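That 'lesser card wins' rule can be written down as a one-liner (a sketch; the function name is made up):

```python
# In SLI each GPU mirrors the frame data, so the pool doesn't add up:
# the effective vRAM is that of the smallest card (per the 4-WAY example above).
def effective_sli_vram_gb(cards_gb):
    return min(cards_gb)

print(effective_sli_vram_gb([4, 2, 4, 4]))  # 2
```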

Regarding an 8/8-pin, IDK; maybe one from ASUS, they'll possibly have a version for the LN2 folks. The EVGA GeForce GTX 680 FTW w/Backplate 4096MB is 6/8-pin, and I've only heard bits and pieces from EVGA on their 4GB GTX 680 Hydro Copper; everything I've seen suggests 6/8-pin.

Here's a 4GB GTX 680 review; after reading it you'll see why I'm waiting for 4GB - http://www.tweaktown.com/articles/4666/palit_jetstream_geforce_gtx_680_4gb_video_cards_in_sli/index1.html
 

kondicykel
lol.. ok, i'll just buy the 6/8-pin then, so i'm on the safer side...
even though i think 525W is a little overkill for a GPU on a 24/7 OC build.

i did read the whole review and i can't really see any big benefits of 4GB+, but they don't test it with 3 monitors? and they don't compare their 4GB SLI with 2GB SLI, only 4GB vs 2GB single.

would the HD 7970 3GB be better for 3 monitors than the 2GB 680?
 


Some clarification:
- The graphics card will idle at very low power. An entire (barebones) system with a GTX 680 idles at 112W. The card itself probably uses about 20W idle, but I couldn't be bothered to look it up.

- Multi-monitor: not a fan, but realistically you'd need two of either card for high quality @ 60FPS. 2x HD 7970 will benefit from the extra VRAM. On the other hand, there are 4GB versions of the GTX 680, which I'd recommend if you were gaming on three 1920x1080 or higher screens. So 2x (GTX 680 4GB).

- PCIe power: no point in even thinking about it. Who cares? All you need to do is ensure the PSU has adequate power overall (total wattage) and sufficient amperage on the +12V rail or rails. I saw an Antec 620W with a single +12V rail of 48A for $65 on sale at NCIX; awesome deal, and sufficient for a GTX 680.

- Multiple HD 7970s or GTX 680s??
Better do some research on power consumption. They recommend 38A (total system) with a GTX 680. How much does the actual card use?

Find the load power of the card, or use its TDP. Let's say it is 185W. Use this formula:
Power (W) = Volts x Amps

Therefore the amperage used by a GTX 680 on the +12V rails is about 185W / 12V = 15.4A (make it 20A to be safe).

So we need 40A for two cards, although other components also use +12V. Without really crunching numbers, I'd probably recommend something like an 800W PSU with 60A on the +12V rail/combined rails.
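The Amps = Watts / Volts estimate plus the safety margin looks like this as a sketch (the 185W load figure and rounding up to 20A per card are this post's own assumptions, not a measured spec):

```python
# Rough +12V amperage sizing: Amps = Watts / Volts.
def amps_on_12v(load_watts):
    return load_watts / 12.0

GPU_LOAD_W = 185                     # assumed GTX 680 load, watts
per_card = amps_on_12v(GPU_LOAD_W)   # ~15.4 A
safe_per_card = 20                   # rounded up for headroom
two_cards = 2 * safe_per_card        # 40 A before CPU and other +12V loads
print(f"{per_card:.1f} A per card, plan for {two_cards} A with two cards")
```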
 

No doubt the Article I linked has MANY flaws, but there simply aren't any other 4GB GTX 680 reviews, at least that I've seen to date accurately comparing 2GB vs 4GB in a 'useful' environment - choice A or A. In my instance of 5800x1080 I have no doubt 4GB is the way to go to avoid vRAM bottlenecks; the plan is 3x 4GB GTX 680's.

Given the choice between 2GB GTX 680 SLI and 3GB HD 7970 CF, it's pretty clear that the GTX 680 in SLI is the better choice; Chris did an excellent article - http://www.tomshardware.com/reviews/geforce-gtx-680-sli-overclock-surround,3162.html

Here's an Extreme example - http://www.evga.com/forums/tm.aspx?m=1537816
 


You may wish to rethink that statement:
http://www.tomshardware.com/reviews/geforce-gtx-680-sli-overclock-surround,3162-10.html

It's not necessarily that clear. It seems to toggle back and forth; SKYRIM is also back and forth at 5760x1440, so it's not "clear" to me.

Driver support is also likely to change the picture so more benchmarks in the future, especially with the 4GB GTX680's should be very interesting.
 

If I based all my decisions on the vast MINORITY of games then I might, but at least in my household it's a new game every couple or so weeks.
 

kondicykel
Sorry for the late replies.. but better late than never.. so here it comes :)

haven't got time to wait for the 4GB version i think.. cuz i gotta have a working computer before 15.05.12 (Diablo 3)
it's not only meant for playing D3, so i don't need to hear that it's overkill

well, i think a 3-monitor setup is pretty awesome and the first step into the future, with even more crazy resolutions and curved screens or even dome monitors/screens...



according to my own theory, the power is pretty important?

if the socket is only designed to pull 300W, plus 2x 75W from the 6-pin connectors..
then i would only have 450W of power supply (not the PSU but the connections)..
if i, for example, use a 500W OC'ed GTX 680, then i would have to pull an extra 50W through a 450W power supply.. which then has the opportunity to waste its energy on heating wires, and it will also become unstable if too much current is pulled through a wire..
which i've heard is very bad when OC'ing


i got an OCZ ZX 1250W PSU with 104A on a single rail,
so the PSU shouldn't be a problem i guess.. maybe it's even overkill xD

No doubt the Article I linked has MANY flaws, but there simply aren't any other 4GB GTX 680 reviews, at least that I've seen to date accurately comparing 2GB vs 4GB in a 'useful' environment - choice A or A. In my instance of 5800x1080 I have no doubt 4GB is the way to go to avoid vRAM bottlenecks; the plan is 3x 4GB GTX 680's.

ahh, that's sad.. i will be looking for those reviews in the near future then :) i think this one didn't show much difference..
sounds like an awesome setup you're getting there! :)


Here's an Extreme example - http://www.evga.com/forums/tm.aspx?m=1537816

awesome... didn't know there was such a big difference :D


You may wish to rethink that statement:
http://www.tomshardware.com/review [...] 62-10.html

It's not necessarily that clear. It seems to toggle back and forth, SKYRIM is also back and forth at 5760x1440 so it's not "clear" to me.

i think this review is not clear, as you say.. in the beginning (in the benches)
it seems like the 7970 will outperform the 680 in SLI/CF?
but the 680 outperforms the 7970 when running in a single-card setup?


quote from the review: particularly if you have a 30” screen or three 1920x1080 displays

since when did the screen/monitor size matter?
example: 1080p on 30" is the same amount of pixels as 1080p on 50"?




this hasn't made me any clearer on whether to buy a 6/6-pin or 6/8-pin, and now i'm also unsure whether to buy the 7970 or the 680, haha, thanks a lot guys xD

i've been reading about stuttering problems on the 680 with v-sync on.
it's kinda the only big difference i see between those 2 cards.... and this problem is going to be fixed sooner or later..


 

Now, I am not a guru with video, but I get the principles, so the nomenclature is off the top of my head.

1920x1080 = 2,073,600 pixels / 2.1MP ; HD 1080p, typical single HD monitor resolution
2560x1600 = 4,096,000 pixels / 4.1MP ; 30" monitor resolution
5900x1080 = 6,372,000 pixels / 6.4MP ; 3xHD monitors with bezel correction resolution

3840x2160 = 8,294,400 pixels / 8.3MP ; 4K (QFHD) forthcoming resolution effectively 4xHD 1080p
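The pixel arithmetic behind that list is just width x height (a quick sketch; the helper name is made up, "MP" is raw pixels divided by one million, rounded the same way as above):

```python
# Pixel counts behind the resolution list above.
def megapixels(width, height):
    return width * height / 1e6

RESOLUTIONS = {
    "HD 1080p":               (1920, 1080),  # 2.1 MP
    '30" monitor':            (2560, 1600),  # 4.1 MP
    "3x HD, bezel-corrected": (5900, 1080),  # 6.4 MP
    "4K (QFHD)":              (3840, 2160),  # 8.3 MP
}

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {w * h:,} pixels / {megapixels(w, h):.1f} MP")
```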

Most 1080p monitors have a vertical refresh rate of 60Hz, and very few are 120Hz. Essentially, Hz is refreshes (redraws) per second and, loosely, the maximum FPS. Example: if your monitor is 60Hz and the GPU is producing 100FPS, then 40FPS is being 'wasted', and worse, the mismatch can cause artifacts (tearing). VSync synchronizes the GPU to the refresh rate of the monitor; nVidia has been offering this for years, and the current version is Adaptive VSync; nice article - http://www.hardocp.com/article/2012/04/16/nvidia_adaptive_vsync_technology_review

Myth: the human eye cannot detect >25FPS. The truth is anything less than 24~25FPS is very noticeable, and 24~25FPS is the minimum for smooth frame transitions. I can easily sense the difference between a 60Hz monitor and the smoother 120Hz; 60Hz is the minimum refresh rate to avoid eye strain, and higher-Hz monitors reduce 'motion blur'.

Now, if you have a 24", 37", 42", 50", 55", 60", 65", etc. 1920x1080 (1080p) monitor, the number of pixels is the same, i.e. the GPU rendering load is the same. The differences are often the refresh rates: 60Hz, 120Hz, and up to 240Hz or higher. In general, for 3D you need 120Hz or higher refresh rates; 120Hz / 2 = 60Hz per half of the 3D render, i.e. 60Hz per eye, 60Hz right and 60Hz left.

Where X GB of vRAM comes into play is the number of pixels the GPU has to buffer and render at the monitor(s) resolution(s): the more of those 2.1~8.3+MP, the more vRAM is required, not to mention GPU power to render them. This brings me to FPS: in short, you need 35FPS or higher to avoid 'chop' or stutter. Some of the newer GPUs have had stutter and other rendering issues, which are often corrected by a new driver or by slightly modifying the core/shader/memory speeds.

As far as HD 79XX vs GTX 6XX, it depends on the games you decide to play, and in general I find nVidia SLI scales better than AMD (ATI), plus there's the quality of the drivers. My current 'bitch' with nVidia is they turned off PCIe 3.0 on SB-E/LGA 2011, whereas AMD has not taken this approach; the issue is barely noticeable for 3-WAY, but you saw the 4-WAY link.
 
"if i for ex. use a 500watt 680 gtx oc'ed then i would have to pull an ekstra 50watt from a 450watt power supply.. "

NO.
The recommendation of 550W by some card companies is for the ENTIRE COMPUTER. Even overclocked, I don't believe the card goes above 200W.

The 450Watts you added up is just the amount of power that COULD be delivered if:
a) the graphics card required that much, and
b) the Power Supply could deliver it

The graphics card uses the +12V (twelve volts), so the amperage it requires is roughly 13A for the card itself. The CPU might require up to 10A.

So as long as you have a quality 550W PSU with at least 30A on the +12V rail, you're likely to have no issues with a single GTX 680.