
Anybody heard of a GTX 690 running on an older 775 platform?

May 28, 2012 4:31:27 AM

Hi,

I have been assured that SLI certification is not an issue with a dual-GPU card like the GTX 690, but another issue has come up - will the card even run on my older hardware?

I have an LGA 775 Q9450 OC'd to 3.6GHz on an Asus P5E motherboard, 8 gigs of DDR2 RAM, PCIe 2.0, and a Toughpower 750W PSU that I have been assured will run a card like the GTX 690. I realize I will likely be CPU bound.

I run a 30" dell 2560x1600 monitor, and one game I am looking at in particular is BF3 on ultra settings. I am okay with 30-35 fps, and less on occasion if I can keep the minimums at least in the mid 20s - let's say if a mortar goes off nearby or if there is a lot of heavy fire.

But a forum participant, smorizio, brought up a good point - he mentioned that the GTX 690 might not even boot.

So my question now is not whether the dual-GPU card requires SLI certification - I understand it doesn't - but: has anybody heard of one running on an older platform like mine? Thanks in advance.

Rich
May 28, 2012 6:57:49 AM

You would have a huge CPU bottleneck. See this thread.

http://www.tomshardware.com/forum/forum2.php?config=tom...

That is with a single GTX 580. I would think you would need a Sandy or Ivy Bridge system at about 4GHz to avoid bottlenecking a GTX 690, or an AMD FX system at about 5GHz.

May 28, 2012 7:11:32 AM

Better to spend $600 on a CPU, mobo, and RAM upgrade and $400 on a new vid card. It would be a waste of money to put a $1000 vid card in a system worth under $200.
May 28, 2012 7:13:13 AM

AMD FX (Bulldozer) would need to be around 6GHz to be as fast as a 4GHz Sandy for gaming (more like 6.3GHz to 6.5GHz against a 4GHz Ivy). It would also use several times more power and need exotic cooling. However, I will admit that it could run stably with a good motherboard. When it comes to overclocking, FX tends to be limited more by cooling than by the chip itself.

Like the above posts say, don't even consider putting such high-end graphics in any LGA 775 system. It should have either a Sandy Bridge i5 overclocked to around 4GHz (I'd go a little higher) or an Ivy Bridge i5 overclocked to around 4GHz (again, I'd go a little higher, though an Ivy Bridge i5 shouldn't need to go as far as a Sandy due to its slightly improved performance per Hz); otherwise you would have a huge CPU bottle-neck.
May 28, 2012 7:28:08 AM

What? Are you all right, my friend? You'd be wasting $1000 on an $80 motherboard - better to spend that money on an i5 build.
May 28, 2012 7:54:31 AM

I have experience with older, slower hardware combinations. I was running an HD 4850 on a Pentium 4, and it was a terrible setup because I was getting the same performance as an Nvidia 7600 GT.
Then I upgraded to a dual core, which was a medium buff. Now I am running the same four-and-a-half-year-old card on an Intel Core i5-2500K, a Gigabyte GA-P67A-UD7-B3 mobo, and Corsair Vengeance DDR3 RAM at 1600MHz. With my current setup I am getting 40-50 FPS in all games at 1440x900 (I will upgrade my GPU and display very soon ;) ).
My point is that a Q9450 cannot give you the performance you would get with an i5, an i7, or even an AMD FX series chip. I don't think there will be booting issues if you put a GTX 690 in an LGA 775 system, but it will bottleneck your system and you might get the performance of a 4870 X2 or maybe a 5850, because the GTX 690 is a real beast, running at a 6.008GHz effective memory clock.

I suggest you change your CPU to an i5-2500K, an i7-3770K, or an AMD FX-8150, and get a motherboard with PCIe 3.0 support, since the GTX 690 also supports PCIe 3.0.
Considering your resolution, you will get excellent speed and quality.
May 28, 2012 8:36:10 AM

jeet_shek said:
My point is that a Q9450 cannot give you the performance you would get with an i5, an i7, or even an AMD FX series chip...


A high end Core 2 Quad CPU would actually beat an FX CPU in gaming performance, especially an overclocked S model that is far more energy efficient than FX. Don't forget, FX CPUs have worse performance per Hz than Phenom II CPUs and the performance per Hz difference between Phenom II CPUs and Core 2 Quad CPUs is even greater, especially 45nm Core 2 Quad CPUs. This adds up to FX being far behind Core 2 Quad in performance per Hz. The S models can overclock much better than FX with a similarly performing cooler.

Also, since games vary widely in CPU dependence, different games will show wildly different performance numbers when you have a CPU bottle-neck. They won't be consistent with any weaker card, because in less CPU-dependent games the bottle-necked graphics card will be able to churn out closer to its full performance than in more CPU-heavy games.
May 28, 2012 9:42:31 AM

blazorthon said:
A high end Core 2 Quad CPU would actually beat an FX CPU in gaming performance...


I don't have any experience with AMD processors, as I've never used one. The reason I chose the i5-2500K over the FX-8150 is its gaming performance. But now, considering the price drop on the FX-8150 after the Ivy Bridge release, it's not a bad deal, considering it has 8 cores running at 3.9GHz, which makes it 31.2GHz overall. But still, as you mentioned, gaming performance is much better on an i5 or i7 compared to FX.
May 28, 2012 10:27:55 AM

FX? Hahaha, are you kidding? It just got beaten by a Core 2 Quad...
May 28, 2012 2:20:35 PM

jeet_shek said:
...considering it has 8 cores running at 3.9GHz, which makes it 31.2GHz overall...


No. Multiplying core count by frequency does not tell you anything about FX's performance. FX has horrible performance per Hz, and games can't use more than 4 threads (though they generally can use more than two threads efficiently). For that last reason, the FX-8150 is hardly any better than the FX-4100 for gaming. Also, the 8150 runs at 3.6GHz; 3.9GHz is a Turbo frequency, not a base clock frequency. Even a $70 Intel 2.6GHz dual-core Sandy Bridge Pentium (the Pentium G620) can consistently meet or beat the 8150 in gaming performance, and an i5 at a similar price to the 8150 would fly past it.
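To make that concrete, here's a toy model of why "aggregate GHz" is meaningless. The per-clock throughput (IPC) factors below are invented purely for illustration, not measurements:

    # Toy model only: the ipc values are invented for illustration.
    def game_throughput(cores, ghz, ipc, usable_threads=4):
        # Games of this era rarely scale past ~4 threads, so extra cores idle.
        return min(cores, usable_threads) * ghz * ipc

    fx_8150 = game_throughput(cores=8, ghz=3.6, ipc=0.7)    # hypothetical IPC
    i5_2500k = game_throughput(cores=4, ghz=3.3, ipc=1.0)   # hypothetical IPC
    print(fx_8150, i5_2500k)  # ~10.1 vs ~13.2 - the "31.2GHz" never enters into it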
May 29, 2012 6:30:22 AM

Hey guys,

These comments are very helpful. I looked at the GTX 580 CPU-bound article - but he was running just 1680x1050. That is only about 43% of the pixels of my 2560x1600 system. That makes a huge difference - in other words, the higher my resolution and the more AA I ask from the graphics card, the less likely my CPU is to be the bottleneck.

But .... be that as it may, I agree, I'll be cpu bound. The other thing about the article, the guy was running his 6600 at stock 2.4. I'll have this Q9450 at least around 3.6 by the time the graphics card expenditure hits $1000.

I am moving back to thinking of a 7970 - just one for now for about $500, and then adding another later. Whether by the time I add the second 7970, I will have moved to another platform - we'll see. I think we're talking at least $600-800 for quality mobo, i5 or i7, and quality ram.

So, this is new graphics hardware, a 7970, on an older Quad core - BUT keep in mind, it's 30" gaming. So my Quad 9450 puts together the frame, and the graphics card has to fill in 4 MB worth of pixels with full eye candy and AA. On the tougher titles, like BF3 on ultra settings, I'll be running some logs to see if my cpu is running flat out 100%, vs gpu humming at 60-70% - then I'll know I'm cpu bound. LOL

Speaking about that - what is a good way to capture that type of logging?

Rich
May 29, 2012 12:44:46 PM

harvardguy said:
I am moving back to thinking of a 7970 - just one for now for about $500, and then adding another later... Speaking about that - what is a good way to capture that type of logging?



I like your way of thinking. I would give your old quad core a try with a new GPU and then decide if an extra 1GHz of processor power is worth the upgrade. I also like the idea of trying a single 7970 first, as it likes high resolution with lots of AA. You can run it with a full-time 200MHz GPU and memory overclock on stock voltage and really manhandle that monitor.
May 29, 2012 4:17:44 PM

alrobichaud said:
I would give your old quad core a try with a new GPU and then decide if an extra 1GHz of processor power is worth the upgrade...


By extra 1GHz, were you referring to the overclock on the Core 2 Quad or an upgrade to Sandy/Ivy Bridge?

@harvardguy A 2560x1600 frame is 16MiB (assuming 32 bits per pixel), not 4MB. 4MB is a 720p frame. 2560x1600 is a 4MP frame, but one pixel takes 4 bytes (again, assuming 32 bits per pixel). I also agree that it would be a good idea to examine how CPU limited you will be before taking action about it.
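Just to spell out the arithmetic (a quick sketch, assuming 32 bits per pixel as above):

    def frame_bytes(width, height, bits_per_pixel=32):
        # One uncompressed frame: width * height pixels at 4 bytes each (32bpp).
        return width * height * bits_per_pixel // 8

    print(frame_bytes(2560, 1600) / 2**20)  # ~15.6 MiB for a 2560x1600 frame
    print(frame_bytes(1280, 720) / 2**20)   # ~3.5 MiB for a 720p frame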
May 29, 2012 4:27:55 PM

blazorthon said:
By extra 1GHz, were you referring to the overclock on the Core 2 Quad or an upgrade to Sandy/Ivy Bridge?




I was referring to an upgrade, not the existing 1GHz overclock he already has. :)
May 29, 2012 4:45:24 PM

The higher resolution may very well make you bottleneck less than the guy I linked to above, but he was running that Q6600 at 3.4GHz, not stock. Stock is 2.4GHz.
May 29, 2012 5:10:32 PM

Man, it would make too much of a bottleneck. Why don't you overhaul your system instead of buying a $1200 graphics card?!
May 29, 2012 6:06:49 PM

I think he decided on a $450 card instead.
May 29, 2012 7:07:52 PM

Unfortunately, I deleted my results from 3DMark 11 with my old 6990 running on my i7 990X at 4.5GHz. Here is my 6990 running on a Q6600 at 3.5GHz with 3GB of RAM on an ASUS Rampage Formula motherboard.

http://i1175.photobucket.com/albums/r637/alrobichaud/6990Q6600.jpg

I know there are a few people here who have results with a 6990 running on a much better processor, so I am hoping one of you will compare your results for a stock 6990 to my results on the old socket 775 platform, to show what kind of CPU bottleneck harvardguy can expect. I tried searching for a benchmark with the 6990 on a better machine, but I ended up with this review from Tom's of the 6990 on an i7 990X running at 4GHz, and I actually scored higher with my old Q6600. The physics score is not good, but I don't think the graphics scores are much lower than they would be on an up-to-date system.


http://www.tomshardware.com/reviews/radeon-hd-6990-anti...
May 29, 2012 7:20:13 PM

alrobichaud said:
...I am hoping one of you will compare your results for a stock 6990 to my results on the old socket 775 platform...


Synthetic benchmarks are rarely representative of how graphics cards will perform in actual games. Also, a 6990 would most likely be more CPU limited than a 680, because Crossfire is more CPU intensive than a single GPU, and Nvidia is known to be slightly easier on the CPU than AMD, even in single-card situations. How well a graphics card scores in synthetics isn't CPU limited the way games are, so you can often get inflated results that aren't even close to how games act. In games, a 3.4GHz or 3.5GHz Core 2 Quad isn't bad, but it would bottle-neck a GTX 680 in CPU limited games.
May 29, 2012 7:31:55 PM

alrobichaud said:
I was referring to an upgrade, not the existing 1GHz overclock he already has. :)


A 3.4GHz Sandy/Ivy Bridge i5 would be almost twice as fast as a 2.66GHz Core 2 Q9450. GHz/MHz is not a measurement of performance; it's only one of many factors. It's only a good indicator of performance by itself when comparing CPUs of the same family and generation with the same core count and features.
May 29, 2012 7:51:30 PM

Yes, that is all well and good, but the idea was to figure out whether what was originally going to be a 690 - and now looks like it may be a 7970 - will be bottlenecked by an older socket 775 quad core, or whether the OP should upgrade. Of course we all say upgrade, but I am simply trying to provide some real-world results for the OP on whether an old quad core running at 3.6GHz can feed enough data to a high-end GPU. We all know there are many factors involved in how particular games perform on different platforms. I know synthetic benchmarks are not very good at representing how games run, but if a 6990 ends up with about the same FPS in all of the graphics tests on an old Core 2 Quad at 3.5GHz as on an i7 990X at 4GHz, it would show that the old platform can run a high-end GPU similarly to a newer platform. Of course a CPU-dependent game will not run as well on a 3.5GHz Core 2 Quad vs a Sandy Bridge at 4.5GHz.
May 30, 2012 12:51:17 AM

alrobichaud said:
...I am simply trying to provide some real-world results for the OP on whether an old quad core running at 3.6GHz can feed enough data to a high-end GPU...


A CPU limited game would run better on a 3GHz Sandy i5 than on a 3.5GHz Core 2 Quad. A 4.5GHz Sandy Bridge CPU would be almost twice as fast in gaming as a 3.5GHz Core 2 Quad.
May 30, 2012 4:48:24 AM

Quote:
A CPU limited game would run better on a 3GHz Sandy i5 than on a 3.5GHz Core 2 Quad. A 4.5GHz Sandy Bridge CPU would be almost twice as fast in gaming as a 3.5GHz Core 2 Quad.


Wow! Well, I figured clock for clock about 40% faster, and multiplying by 4.5/3.5 for the difference in clock speed gives about 1.8x the performance overall. Hmmm, so that makes sense.

(By the way, slightly off topic, but thinking long term: versus the original i7 on 1366, what do you give up with the easy-to-overclock Sandy Bridge? They moved the PCIe controller onto the CPU die, right? Didn't that have some negative effect on the overall number of PCIe lanes available to the graphics cards?)

You guys are very helpful.

Yes, for sure I have reined in my former grandiose thinking about getting a GTX 690 - mostly because it probably won't even boot - and instead I will start pretty soon with just one 7970. That will launch me quite far from where I am with my 8800GTX.

I don't know if it will get me BF3 on ultra settings at full 2560x1600 - 4 million pixels, 16 million bytes per frame - but I'll try. If not, I won't play it - yet. And if I see that I am already CPU bound, then of course adding another 7970 wouldn't make sense - I would have to move forward with a new CPU.

BUT .... if I see that I am still GPU bound - like 100% GPU load, 80% CPU load - then maybe I WILL add the second 7970, which in that scenario might move me from 25 fps to 40 fps, where I would be happy. At that point the 7970s would be kicking back at 70% load, and my old 9450 would be working up a sweat, flat out at 100% - but so what, I'm playing the game. LOL

So it would be kind of interesting to do some charting, to figure out - am I cpu bound, or am I gpu bound?

So, let me ask you guys, how do I log my 9450 and 7970?

And by the way, I appreciate the comments about overclocking the 7970 - I hear it overclocks very well. Would you guys recommend a model with a better heat sink for that reason - easier to overclock - rather than the stock turbine? Do you guys like XFX equipment? I game with headphones, so I figured the turbine is okay if I can set the fan speed to 100% and blow the hot air out of the case. But I can't remember if Catalyst allows manual fan settings. ATI Tray Tools does, but I don't know if the author, Ray Adams, will have his program up to speed on the 7970 already. Maybe RivaTuner already works on that card - I have become familiar with it on my current Nvidia card.

Back to logging - I know that FurMark has some logging on the GPU side, but it doesn't run for very long. Would you guys use Sandra, or Everest, or some tool like that?

Like I said, I like ATT with the OSD showing GPU % and CPU % load in real time while I'm gaming. With that in the OSD, you don't need logging. But again, if Ray has not yet gotten hold of a 7970 from one of his fans, then ATT won't read the sensors yet.

In which case some logging tools would be handy. Any ideas?
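For what it's worth, here's a rough sketch of the kind of logging loop I have in mind - this assumes a Python install with the third-party psutil package, and it only covers the CPU side; the GPU % would still have to come from whatever vendor tool ends up supporting the 7970:

    import csv
    import time
    import psutil  # third-party package; CPU-side logging only

    # Log per-core CPU load once a second while the game runs.
    with open("cpu_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time"] + ["core%d" % i for i in range(psutil.cpu_count())])
        for _ in range(600):  # ten minutes of samples
            loads = psutil.cpu_percent(interval=1.0, percpu=True)
            writer.writerow([time.strftime("%H:%M:%S")] + loads)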

thanks,
Rich
May 30, 2012 5:08:46 AM

harvardguy said:
(By the way, slightly off topic, but thinking long term: versus the original i7 on 1366, what do you give up with the easy-to-overclock Sandy Bridge?) ... So, let me ask you guys, how do I log my 9450 and 7970? ... And by the way, I appreciate the comments about overclocking the 7970 - I hear it overclocks very well.


If you're going to overclock your graphics, then don't you dare buy a 7970. The 7950 overclocks equally well and is less than 5% slower at the same frequencies.

With LGA 1366 versus LGA 1155, you go from 32 PCIe graphics lanes down to 16, and from a triple-channel memory configuration to dual-channel, but overall the LGA 1155 platform is still the faster gaming platform. The lost PCIe lanes aren't too important, and the Sandy Bridge CPUs have enhanced memory controllers, so even though they have one fewer 64-bit controller (three 64-bit controllers make triple-channel, two make dual-channel), they still have good memory performance. Sandy/Ivy Bridge's higher IPC than Nehalem and Westmere allows them to outperform even the six-core i7-990X in gaming, even if not by much, all while using much less power.
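The raw memory numbers, for the curious - these are peak theoretical figures, assuming common memory speeds for each platform; real-world throughput is lower:

    def ddr3_bandwidth_gbs(channels, mt_per_s):
        # Each DDR3 channel is 64 bits (8 bytes) wide.
        return channels * 8 * mt_per_s / 1000

    print(ddr3_bandwidth_gbs(3, 1066))  # LGA 1366 triple-channel DDR3-1066: ~25.6 GB/s
    print(ddr3_bandwidth_gbs(2, 1600))  # LGA 1155 dual-channel DDR3-1600:  ~25.6 GB/s

Which is part of why dropping a channel hurts less than it sounds.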
May 30, 2012 5:31:06 AM

"Sandy/Ivy Bridge's higher IPC than Nehalem and Westmere"

Great information - I followed everything but that very last part, can you clarify? - I'm not familiar with IPC.
May 30, 2012 6:12:17 AM

Blazorthon, excuse me, I had also meant to comment on the 7950 suggestion.

No doubt, the 7950 offers a better performance/price ratio and I appreciate the suggestion.

BUT - 5% to me, in the past, has often meant several fps, and has been the difference between 28-30 fps, and "it's playable" vs 25-26, and "it noticeably lags." (I am not too sensitive on lagging - it has to really lag to bug me - so 28-30 is ok as a minimum.) For instance, everybody said "Don't overclock the 8800GTX, it's already a hot-running gpu and you'll kill it." But a 5% modest overclock brought in almost 5 fps in a few cases, bringing me out of lag, into .... playable ... like I say, 28-31 or so.

Anyway .... whew- the opossums just startled me - making their noise coming from out of the third drawer in my trailer/office where they sleep in the daytime, crawling behind the cabinetry, and making their way around past the water tank and out some kind of hole in the bottom that I can't get to ..... (three of them - youngsters - I videoed them on my cell phone after lurching open the drawer last week looking for an electronic cable - I was able to get the drawer closed without them coming out - they went back to sleep and occasionally I open the drawer a crack just to make sure they're all right, lol - I have to pick up a non-lethal trap pretty soon - I'll take them a couple miles toward Laguna Beach and let them go - my brother says "find the hole, patch it, and let them live in the backyard and keep the rats away" - easier said than done - there is very little crawl space under there to find that hole)


............ Let me look up the 7950 price. Hmmm. Wow - all the way down to $380, and around $400 with non-reference cooler, for example sapphire.

So Sapphire's 7950 is $80 less on a $480 base - that's 17% less - with about 12-13% fewer stream processing units. So the 7950 has a slightly lower price per stream processing unit, but not by much.

When you say only 5% slower - are you talking full 30" gaming with full AA, or are you referring to lower resolutions, like 1920x1200?

I realize that counting stream processing units might not be that important - but we're talking parallel processing, and high resolution with full AA is where I always heard the 7970 shines - maybe even comparing well against the GTX 680, which I hear beats it at lower resolutions.

So for $80-100 more I would tend to lean toward the full 7970.

Sapphire has one for $479 with dual fans and lots of heat pipes - maybe more overclocking potential with a non-reference cooler? I've owned Sapphire before with good results. What do you guys think?

Rich



May 30, 2012 6:16:50 AM

Here's what I think:

GTX 670 $400
i5-2500K $220
Gigabyte Z77-UD5 $180
8GB Corsair Vengeance $50
Custom watercooling loop $300
May 30, 2012 6:37:51 AM

amuffin said:
GTX 670 $400 ... Custom watercooling loop $300


+1

Gives you a better overall system, and you can run a second 670 later to equal the 690's performance.
May 30, 2012 6:40:15 AM

You don't need the watercooling loop; it's just there for when they release the 670 waterblocks.
May 30, 2012 4:07:29 PM

harvardguy said:
When you say only 5% slower - are you talking full 30" gaming with full AA, or are you referring to lower resolutions, like 1920x1200? ... So for $80-100 more I would tend to lean toward the full 7970. ... Sapphire has one for $479 with dual fans and lots of heat pipes - maybe more overclocking potential with a non-reference cooler? I've owned Sapphire before with good results. What do you guys think?


The 7950 does just as well as the 7970 at higher resolutions and higher AA, and that's not 5% slower, that's up to 5% slower. Most of the time the difference is smaller, and considering that many 7950s have the same cooler as 7970s, they have the same thermal headroom and can hit slightly higher frequencies, letting them equal or sometimes very slightly exceed the 7970. A 5% overclock of a GPU would only give you a 5FPS gain in one of three cases: one, FPS was already 100FPS; two, there was some other problem with your system and somehow the overclock fixed it; three, somehow that 5% overclock allowed extra hardware to be unlocked on the 8800GTX. That is extremely unlikely to ever happen again. A real 5% gain is not noticeable, and if you do a 5% overclock on any card nowadays, that's the best that you'll get.

Stream processing unit count scales exceptionally poorly. In fact, a highly overclocked 7850 can meet or beat a 7970 despite having half of the 7970's cores (1024 versus 2048). The 7870 can do a little better despite only having 1280 cores. With the 7950 being so close in shader count to the 7970, the two are indistinguishable at the same clock frequencies, and they can overclock to the same frequencies. The 7970 is only faster than the 7950 with both at stock because the 7970 has a significantly higher clock frequency (925MHz instead of the stock 7950's 800MHz).
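To put rough numbers on that, here's the naive cores-times-clock model - which is exactly the model that fails in practice:

    def naive_shader_throughput(shaders, mhz):
        # Naive model: performance ~ shader count * clock.
        return shaders * mhz / 1e6

    print(naive_shader_throughput(2048, 925))  # 7970 stock: ~1.89
    print(naive_shader_throughput(1792, 975))  # 7950 at 975MHz: ~1.75

The naive model predicts roughly an 8% gap, yet in practice the two cards are nearly indistinguishable at the same clocks - the point being that shader count alone scales far worse than linearly.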

The 7970, at 2560x1600, is on par with the GTX 670 and is beaten slightly by the GTX 680 in most games (memory bandwidth heavy games prefer the 7970 and 7950 because the Kepler cards have a severe memory bandwidth bottle-neck). At 5760x1080, the 7970 and GTX 680 are on par with each other overall. It is only at 5760x1200 and higher resolutions that the 7970 beats the GTX 680. Even then, you would need two or three 7970s to get high FPS, so unless you want to pay more for graphics and displays than your current budget, the 7970 will not beat the GTX 680, except in memory bandwidth heavy games. Regardless, the 7970 is more future-proofed due to it not having severe bottle-necks like the GTX 680 and GTX 670 (the same is true for the 7950), and later on, if you're using a higher resolution display and can afford one or more additional 7970s (after price drops, of course), it will then show superiority over Kepler. However, once again, the same is true for the 7950.

If you want the 7970 anyway, then yes, Sapphire is usually a great brand for AMD graphics cards.
May 31, 2012 8:39:47 AM

Wow,


Some great info. First, amuffin, thanks for the suggested CPU layout with the 2500K. I will print that out in my PaperPort, and those suggested components look good for a CPU upgrade at about $450. I happen to have a spare case already - and I can throw a bunch of fans in it. I have it in the box on a shelf in the garage - a Spedo - don't laugh. I have become interested in the HAF, but I can customize the Spedo, I'm sure.

For my purposes, I would go with a TRUE before watercooling, maybe even push/pull - unless you're saying water is the only way to get to 4.5GHz. I would probably mess up the water and fry everything, so I'd rather just heavily vent the case (I prefer positive-pressure cases with all inlets filtered - why let dust accumulate and reduce heat sink performance?), put 120mm fans everywhere, and keep things cool that way.

-------------------

blazorthon - my God, that is really some terrific information! And I am glad to see support for the 7970/7950 vs Nvidia, particularly on the issue of memory bottlenecking.

Quote:
so unless you want to pay more for graphics and displays than your current budget, the 7970 will not beat the GTX 680, except in memory bandwidth heavy games.

Let me ask you - I know you're talking about their 3 gigs vs Nvidia's 2 gigs. Can you give more information on memory bottlenecking - has it shown up in game benchmarks? Even if it hasn't - what would a memory-intensive game look like? Elaborate textures, right?


I can't say for sure whether a particular game I might be attracted to would be memory intensive - but I go for eye candy, and to me that says "yes," I am likely to be pulled toward the heavy-texture games (if that is what puts a load on memory, as I think it is).

That's part of the reason I fell for the 30" Dell in the first place, after a forum buddy, Sam Morris, over at afterdawn, raved about his 30" for several years. Actually, once I got it, I accused him of being a hold-out. He didn't rave enough, in my opinion, considering the effect the big screen had on me.

The immersive quality was, and IS, incredible. I game with Medusa 5.1 headphones, and I think that also isolates me from reality. I like to tell of the time I was in a Left 4 Dead safe room, in the hospital campaign - the rooftop final chapter. It had gotten kind of chilly in the office/trailer where I game, and I wanted to close the window a bit. I looked all around the safe room, but I couldn't find the window. Then slowly I pulled back from the screen - I sit pretty close to the monitor - and finally I turned my head and shut the trailer window over on my left.

I had to laugh - but I was THERE in the safe room looking for the source of that chilly breeze. I get sucked all the way into that big Dell and I don't know where I am. LOL

I went back to Far Cry a year or so ago as a lark, and I was able to play at max settings of course, full 2560x1600. Where did those unbelievably colorful parrots come from?

I played the heck out of the game a few years ago on a 17" CRT - all single player - I played through at hardest setting, and I don't remember ever seeing anything so colorful and beautiful.

(I was a fanatic on that game. I played the demo so much, that I had all three helicopters crashed and burning on top of the Fort, and I went around taking screen shots to prove it to myself later, to try to break myself from playing that finale over and over again, lol.)

I have a very close family member who is a professional modeler and animator for one of the major game companies, and I have come to appreciate video games for the art form that they are - and when the artists' renderings turn into something transcendental on the screen, that is when I truly celebrate the joy of silicon engineering.

We have seen some awesome things lately.

When I walked into that hospital atrium in Rage, ever alert for whatever new monster was going to try to crush me, and I saw that gigantic mutant carcass splayed against the far wall - the carcass was about 12 stories high for crying out loud - and I watched the chocolate blood slowly ooze down the walls of the ground floor operating room across the way, I was spellbound. The music, full orchestral accompaniment, pounded with 5.1 intensity in my headphones. I was terrified.

Later on, I posted my most inspired game review, concluding with, "These artists are magicians. We put down our money, engage our electronics, and let slip away our safe reality in order to embrace the exquisite lunacy of their twisted vision."

I know that the guys working with Carmack must have tripped out with that artwork - "Look guys, here's our plan for the atrium layout." Stop and think about it - that is GIANT creativity at work, Picasso, Rembrandt, Da Vinci, and for $50 we can dive right into their creation - what Carmack and his buddies developed for five long years.

Later I was appalled to read all kinds of critiques on the game - even in that very same scene. "Oh, the computers were bland and textures were dated 5 years ago."

"For pete sake - you weren't supposed to be fooling around focusing on the grayscale computer monitors and maybe hoping to find a pencil to play with - jeeezz - all the good stuff graphics-wise, texture-wise, imagination-wise, was right in front of you. What were you doing looking for a pencil?"

"If you can't stop for a minute and take in what has been created for you in such painstaking fashion - then you are depriving yourself of a major portion of the extraordinary richness of the gaming experience."

Hahahaha

End of tirade. "We all have our reasons for playing, and if some of us want to blast through as fast as possible and look for pencils, that's cool - well, not really, but it is if you say it is - PENCIL NERDS - whoever you may be."

Hahahaha


No seriously, different strokes for different folks. It's all good.


Anyway, blazorthon, my whole point is - if I can make the game super beautiful, even at the cost of FPS dropping near lag, then I'll go for the beauty. And if it's unplayable, then I'll keep it on the shelf, like Crysis, and Warhead, and Far Cry 2.

So when you talk about memory bandwidth - to me that sounds like "Rich, with the green guys and their 2 gigs, not the AMD 3 gigs, you could run the risk of getting screwed going for the beauty."

And that brings me back to the 7900 family - thank you.

Quote:
the 7970 is more future-proofed due to it not having severe bottle-necks like the GTX 680 and GTX 670


Very nicely put.

And - here's one more weird thought:

Since I don't have sli certification, a gtx 680 would be a single card only.

But let's just say there is a very SLIGHT CHANCE that I might continue to be GPU bound - say on one particular title, something really graphically intensive like Metro, or Crysis, or BF3, or something coming soon, maybe Max Payne with its massive PC texture packs. Maybe it can't happen. But IF it were to happen, then - only with the 7000 family - I could utilize CrossFire on my current hardware - the PSU will handle it.


Back to the 7950 vs 7970 - blazorthon - you totally had me on the edge.

And I think you're right about the overclock. Thinking back a little harder, I think my 8800GTX overclock was closer to 10%, not 5%. And as to whether I picked up 5 fps, I was probably overstating - it was probably just 2-3 fps. You know your hardware.

As I recall, stock 8800GTX clocks are 579 mhz - and I was able to get it stable at only about 629, not more than that. I remember the 629, that's the name I gave it in the Riva profile. For the other two clocks - one is memory, the other is shaders - I added about 100 to each one.

So that would be nearly a 10% overclock, not just 5%.

I forget which game - well, actually, now, I believe it was BF:BC2. I was able to run it at high everything and full 30" gaming, but it was lagging a bit at about 26-27 fps, single player. Later I played it multiplayer. The extra 2-3 fps from the overclock smoothed it out - it felt okay.

But the forum guys said, "Don't overclock that thing - you'll fry it." The overclock raised my FurMark temps by 10 degrees. I was already running the 8800GTX fan at max. It forced me to install a Kaze front intake - a loud 3000rpm fan - dropping temps 5 degrees. Sam said, "You'll hear it through the headphones."

Nothing comes through those headphones.

I play those games loud. In Left 4 Dead you have to play loud or you won't hear the horde music. "Here they come - get in a corner NOW!" I eventually installed a filtered side intake on that piano-black Sonata case and added another Kaze pointed straight at the 8800 - that dropped temps another 5 degrees.


Quote:
If you want the 7970 anyway, then yes, Sapphire is usually a great brand for AMD graphics cards.



LOL. You correctly detected some stubbornness. Thank you for the Sapphire endorsement. That does it - I'm just about ready to hit PayPal.


Over at the egg, there is an $80 difference between the Sapphire 7950 and 7970 with the identical dual-fan non-reference cooler. But you really had me on the fence. In the end, when you mentioned the higher starting clocks, that pulled me back over to the 7970 - the $80 price difference is not compelling to me. Something just feels better about getting the Cadillac of the 7000 series.

The way my brain works, $400 or $480 feels like the same number. It's a $500 chunk, and whether that chunk is actually $400 or $500, I can make up the difference by eating rice for a while.


So here's my $500 chunk projected upgrade path:

Chunk 1. A state-of-the-art massively parallel graphics card.

Chunk 2. Less than what I thought it would be - $450 for Sandy Bridge along the lines of amuffin's recommendation. (Add about $100 for a TRUE, another $100 for lots of fans, and another $150-200 for a PSU - unless I want to take the PSU from the Sonata and render that rig unusable even as a business machine. No, I'll probably just pick up a new PSU and pull the Spedo case out of its box for the first time. There - I knew it, that $500 chunk is really an $800 super chunk.)

Chunk 3. Another card in crossfire.

And as I mentioned, the second and third chunks could get reversed, in the unlikely event that I find I am still somewhat GPU bound on a particularly memory-heavy title. In that case, I could leave the Spedo on the garage shelf, delay super chunk #2, and plop another 7970 into the Sonata - at which time I'd no doubt be heavily CPU bound, that's for sure.

So all three chunks have to conclude eventually. But then what? Having completed the last chunk, and running on new Nehalem-derivative 4.5GHz technology, how long will those three $500 chunks hold me at 2560x1600 gaming?

A year? Two years? Until the new Respawn Entertainment title comes out?

It is, of course, never ending. And at 30 inches, I am forced to be a lot closer to the cutting edge than I have been, unless I want to continue to play games 1-2 years after everybody else, which doesn't work well for multiplayer.

--------------------

But at least with the 30" Dell, it's ONLY 4 million pixels, not 6 million like those Eyefinity rigs. They won't fit on my trailer table anyway, and I don't know if I could be more immersed than I already am.

But I recently read about how they engage the peripheral vision - and that did sound interesting. Still, there's a pretty big gap between monitors, right? I know they were working on reducing bezel thickness, but no real gains so far, right?

Well, one day I'll have to go somewhere, maybe over to your house, blazorthon, and try it out. lol

Thanks again,
Rich
May 31, 2012 12:19:05 PM

The Spedo (don't laugh) is a rockin' case. Its only flaw is the lack of a cutout in the motherboard tray behind the CPU (fortunately for me, LGA 2011 no longer requires this to easily swap out CPU cooling solutions). There is no other case like it.
May 31, 2012 5:09:22 PM

A single 7970 at stock (925MHz), or a 7950 overclocked to 975MHz (memory also overclocked to the stock 7970 frequency of 1375MHz), is enough to play today's most intensive games with quality settings maxed out and some serious AA, even at 2560x1600. Chances are high that it will be able to do that for a few years. Two 7970s could do it with insanely high AA/AF, ambient occlusion, and so many other detail-enhancing technologies that you'd think even the single 7970 (or slightly overclocked 7950) at high settings with AA looked moderate instead of incredible by comparison - and they'd be able to do it for quite a while yet; several years is a reasonable minimum. Graphics cards with enough VRAM can usually last a long time at the same resolution, and even if two such cards start to show weakness, you can buy a Sandy Bridge motherboard that supports three or even four graphics cards in CF/SLI (some LGA 1155 boards do it excellently; LGA 2011/1366 isn't necessary) and get another 7970 or 7950 OC (they scale very well, even with more than two GPUs, oftentimes even with four).

Like ubercake said, I wouldn't worry about that Spedo case. Those are excellent cases if you don't mind the lack of a cutout (you probably wouldn't need to change coolers anyway; just get a good one to start with). As for the CPU, there are two excellent options: the already mentioned i5-2500K, and a modded i5-3570K. By that I mean removing the IHS of the 3570K (time-consuming, but fairly safe and easy if you have the patience for it), replacing Intel's poor-quality paste with top-notch paste, and putting the IHS back on. It would then overclock slightly better than even the i5-2500K while using a little less power. It would cost a little more, but the increased performance keeps performance for the money fairly parallel. If you don't feel comfortable doing what can be considered a moderately extreme mod, then don't even consider it and get the 2500K - the 2500K will still be incredible, just a little less incredible.

Either way, an excellent Z77 motherboard is the best choice. I simply prefer the Ivy Bridge i5s here because they support PCIe 3.0, and the 7970/7950 OC is slightly bottle-necked if it doesn't have at least 8GB/s of PCIe bandwidth. Crossfire with two 7970/7950 OC cards means they run in an x8/x8 configuration, and a PCIe 2.0 x8 slot can only supply up to 4GB/s of PCIe bandwidth per graphics card. Most games will not have much of a problem with this right now, but if two 7970s or 7950s ever become not enough and you want to add more, then it will become a serious bottle-neck without PCIe 3.0, because each card would then be limited to 2GB/s. It would still not be a huge bottle-neck in most games, but by that time it might become noticeable.
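Roughly, per card - these are theoretical maximums, using the usual ~0.5GB/s and ~1GB/s per-lane figures for PCIe 2.0 and 3.0 after encoding overhead:

    def pcie_gbs(gen, lanes):
        # Approximate usable bandwidth per lane, in GB/s.
        per_lane = {2: 0.5, 3: 1.0}
        return per_lane[gen] * lanes

    print(pcie_gbs(2, 16))  # one card,  PCIe 2.0 x16: 8 GB/s
    print(pcie_gbs(2, 8))   # two cards, PCIe 2.0 x8:  4 GB/s each
    print(pcie_gbs(3, 8))   # two cards, PCIe 3.0 x8:  8 GB/s each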

You might have noticed, but I'll say this: There might not be anything about a computer more frustrating than being bottle-necked for one or two reasons, except a computer that doesn't work and you don't know why and even then, it's close. It's frustrating because you know that there is nothing you can do to fix it (in this case) except for replacing any non-PCIe 3.0 compliant parts (such as the i5-2500K). Even if it's not a huge bottle-neck, you don't seem like the kind of person who wouldn't mind this problem. I doubt that it would exceed even 10% in most games, but that 10% might be the difference between 55FPS minimums, and the perfect over 60FPS that lets you have V-Sync enabled, smoothing out the entire visual experience and cutting down stutter. When you have a technological problem, it usually manifests itself in one of the worst possible ways for it and above is one possible example, even if it should be unlikely. I have to say that when I come on these forums to suggest parts to someone, the higher end the build, the more fun it is to research and finalize. I'd hate to mar that for you with something this simple.

The i5-3570K is about $30 more expensive than the 2500K, last I checked on Newegg, and to avoid something like this, I think it's worth it. However, if, when two 7970s or 7950s are no longer enough some years in the future, you plan to replace the whole build anyway (a likely scenario, I think), then don't worry about this at all: the 2500K is slightly better at overclocking than the 3570K (unless you do the mod I mentioned), which would likely cancel out any difference made by the PCIe 2.0 x8/x8 configuration, and the power usage difference between the 3570K and 2500K isn't enough to justify paying $30 more for the 3570K.

Now, on to the memory bottle-necking that I mentioned on the graphics cards. The GTX 680 has 2GB (reference) of 1502MHz GDDR5 memory on a 256-bit aggregate bus. That usually means eight RAM chips (ICs), because each chip has a 32-bit connection, so with eight chips that comes to 256MB per chip for the total of 2GB. It is an aggregate bus because each RAM IC has its own 32-bit bus to the GPU, and together those make a 256-bit aggregate bus. The Radeon 7900 cards each have a 384-bit aggregate bus made from twelve 32-bit 256MB RAM ICs. At stock, the 7970's RAM runs at 1375MHz. The 680 has a little over 192GB/s (256-bit bus times 1502MHz times 4, because GDDR5 has a quadruple data rate, transferring four bits per line each clock cycle), and the 7970 (and the 7950 when OCed the little bit necessary to reach reference 7970 performance) has 264GB/s (384-bit bus times 1375MHz times 4).
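The same calculation in code form, using the figures above:

    def gddr5_bandwidth_gbs(bus_bits, mem_mhz):
        # bus_bits / 8 = bytes per transfer; GDDR5 moves 4 transfers per clock.
        return (bus_bits / 8) * mem_mhz * 4 / 1000

    print(gddr5_bandwidth_gbs(256, 1502))  # GTX 680: ~192 GB/s
    print(gddr5_bandwidth_gbs(384, 1375))  # HD 7970: ~264 GB/s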

In order to explain why memory bandwidth and memory capacity can be bottle-necks, I'll have to show you a few numbers from some slower graphics cards. The GTX 580, like the GTX 680, has a little over 192GB/s of memory bandwidth, except it gets its bandwidth from a 384-bit bus at 1002MHz instead of a 256-bit bus at 1502MHz (the 580 actually has trivially higher memory bandwidth than the GTX 680). The GTX 580 is not a memory bandwidth bottle-necked graphics card. Memory bandwidth is one factor in how many FPS the graphics card can output, because the GPU is constantly moving data to and from the memory as it renders a frame. VRAM (Video RAM) is what the RAM on a video card is usually called nowadays. It is also sometimes called the frame buffer, because it was once simply a buffer for each frame the video card renders. It still does that, but it now does so much more that I think it is more accurate to simply call it VRAM. The VRAM stores much more data than just the frame (a frame actually takes up very little space; for example, a 2560x1600 frame only takes up 16MB of VRAM). This data can include textures, among much more. Increasing settings in the game you play will increase the amount of VRAM used during play. Other things that eat up a lot of memory include data for AA.

At 1080p, some games can push past 1GB of VRAM capacity, and that is why 1GB cards aren't really recommended by most computer experts, except for low and lower-mid-range gaming computers that won't play intensive games at 1080p. A higher resolution doesn't use twice as much memory as a resolution with half the pixels, because most of the textures and some other things stay the same size. It is stuff like AA that takes up more space, but that isn't enough to increase memory usage linearly with pixel count. This is why even though 1GB is not enough for 1080p, a 2MP resolution, 2GB is usually enough for 2560x1600, a 4MP resolution. As resolution increases, VRAM capacity needs grow somewhat more slowly. 2GB might be enough for 2560x1600, but it's starting to push your luck. Then you can consider 5760x1080 and 5760x1200 Eyefinity resolutions. Even there, 2GB can be enough in some games, but in others it is no longer enough for proper AA. Nvidia counters this with FXAA and TXAA, two types of AA with much lighter performance impacts than MSAA and other types of AA.

TXAA is very new, but FXAA is not as new. FXAA is lighter than MSAA, but it also has worse visual quality. TXAA has good visual quality, comparable to MSAA in many ways despite being much lighter, but it is very new and poorly supported. It has what is probably the best quality-to-performance-hit ratio of any AA technology, but that advantage is squandered by the lack of support so far. If it were working right, it would alleviate Nvidia's need for more memory capacity, almost justifying Nvidia's apparent fixation with not giving their customers enough VRAM. However, with the 7900 cards able to handle heavier loads more efficiently than lighter ones, increasing the load with something such as MSAA still lets them perform very well, despite Nvidia's TXAA advantage. At 2560x1600, AMD and Nvidia are fairly parallel, with AMD winning slightly on memory capacity in the sense of a bottle-neck. At 2560x1600, there is no reasonable way for 3GB to become a bottle-neck, and the chances of that happening even several years from now are slim at worst. However, 2GB, even with TXAA, is at a slight disadvantage, though close enough to be forgiven, and since there are 4GB models of the GTX 600 cards, Nvidia is a viable option for you, even if not a superior one.

However, now it's time to talk about the bandwidth problem of Nvidia cards. Like I said, the GTX 580 has almost exactly the same amount of VRAM bandwidth as the GTX 680. What's the problem with that? It means Nvidia gave a card that is almost as fast as GTX 580 SLI the same amount of bandwidth as a single GTX 580! Those 4GB models of the 670 and 680 still only have a 256-bit bus; they simply swap the 256MB RAM ICs for 512MB RAM ICs. That solves the capacity problem, but it does not help the bandwidth problem. I'll let you in on something that AMD might not want us to know: the only reason the 7900 cards beat Nvidia in some games at 2560x1600 is that the games AMD wins in are bandwidth heavy. If the GTX 600 cards had a 384-bit or wider bus at the same 1502MHz frequency, then the 7900 cards would be in trouble, except at even higher resolutions, if even then. The GTX 600 cards truly have better GPUs overall, but their memory bandwidth holds them back so much that you wouldn't believe it. I'm not a fanboy for either camp; overall, I lean towards AMD because over the years they have been the more honest company.

That honesty has hurt AMD in many ways against the other companies, but it has given them greater integrity (all tech companies are guilty of something, but AMD isn't nearly as guilty as most of the others). However, the GK104 GPU is simply better at gaming than Tahiti if you go by GPU power alone at pretty much any practical resolution. More proof for the memory bottle-neck is the GTX 670. The 670 is less than 7% slower than the GTX 680 even though the 680 has a disadvantage in core count (Nvidia's cores seem to scale better than AMD's at lower resolutions) and in clock frequency. The 670 also happens to have equal memory bandwidth to the 680. This is one way of identifying memory bottle-necks: when two video cards with non-trivial GPU performance differences and the same memory bandwidth have nearly identical gaming performance, a memory bandwidth bottle-neck is usually the prime suspect.

However, the 600 cards don't have the right memory bandwidth for their GPU performance, so there's no point in dwelling on that beyond explaining their problem. The 7970 beats the GTX 600 cards only in memory bandwidth heavy games. This is backed up by the fact that the GTX 580 comes close to the 670 and 680 in such games. A few such games would be Metro 2033, Aliens vs Predator, and quite a few others. The 7900 cards can fly past them all in these games. When you take a card, make one like it with a huge GPU performance improvement, but leave it with the same memory bandwidth and only a minor capacity improvement, a winner is not born, it is squandered. Nvidia did this, and because of that, the Radeon 7900 cards are the better choice for most high-end workloads. Memory bandwidth and GPU performance should be balanced so that one is not far greater than the other. This makes for more efficient usage of both the memory and the GPU. AMD did this elegantly, whereas Nvidia failed at it miserably and only got the GTX 600 cards where they are by brute GPU power alone.

This tells me that AMD knew what their cards needed and made sure the cards met those needs, while Nvidia simply slapped things together and hoped for the best. Nvidia cut so many corners that it's not even funny. AMD cut none and made the 7900 one of the most balanced series to date. The memory bandwidth is a great match for the GPU performance, the capacity is high enough for today and the foreseeable future, power efficiency improved, compute performance is off the charts for consumer cards and beats many of Nvidia's multi-thousand-dollar professional cards that were designed specifically for compute, Crossfire scaling stays high even beyond two GPUs, and overclocking headroom is supreme. Nvidia has perks and minor advantages, but AMD has more of them, and many of AMD's advantages matter more right now. If you have any more questions or I failed to answer all of yours properly, then feel free to say so and I'll get right on them.
June 2, 2012 3:12:43 AM

Wow again.

First of all, ubercake - it's nice to see the Fonz again. I'm glad nobody is laughing about my spedo. Thanks for the support, lol.

By the way, we need you guys to drop in over on the afterdawn official pc builder thread - 4th edition - where we get to add pictures to our posts, italics, bold, red, links, etc., and mix it up with some of the English dudes. There was a little bit of a heated discussion yesterday, pretty much about what we've been talking about here, and Sam said:
Quote:
it does amuse me to see the heated arguments flare up even in my absence, which goes some way to dispelling the commonly held belief that I'm the catalyst for all the negative conversations around here.
So we do have some lively times.

But blazorthon, if you were to drop over, you would be astonished, because in the last 2 pages, I have done a 180, mostly due to your influence. Having finally performed some of my own research, I proposed a "radical new theory": http://forums.afterdawn.com/t.cfm/f-216/the_official_pc...

Quote:
But Sam, you didn't comment on my radical theory, that for my "mid-range mere 4 megapixel needs" the full 7970 is a tad bloated and power-hungry, not to mention costly, and I suspect that with a 7950, I can fully well equal 7970 performance - EVERY BIT IDENTICAL FPS - while consuming less power.


Yeah, that's today's quote from me, believe it or not.

Holy moly - what happened to the stubborn guy of a day or so ago who insisted on buying the 7970 cadillac for a mere $80 more? If you go back one page on that forum, you'll see more of what happened to me in my first explanation. (They only allow 9-character user names, so that's the reason for the harvrdguy - blazorthon is also 10 - you'd have to lose the o in the middle - ubercake is perfect, lol.)


What happened to me is that your ideas percolated around in my brain, and I started to realize that 4MP is not extreme any more, it is mid-range between 2.3MP on 24" gaming, and 6-7MP on eyefinity.

By the way, I was wrong also about eyefinity and wide gaps - here's a guy on YouTube playing BF3 with CF 7970s, http://www.youtube.com/watch?v=Ovm7XBtaQAM&feature=rela..., not 5760x1080 eyefinity like you normally see, but 3240x1920 eyefinity, with the same pixel count, on 3 samsung monitors in portrait mode - with the bezels removed. The guy did some customization. Take a look, it is awesome!

I am almost interested in that kind of eyefinity.


Back to the 7950, I read one very interesting 7950 review at http://hexus.net/tech/reviews/graphics/34761-amd-hd-795.... You're probably familiar with that one.

It supported everything you told me. And then - there was a clincher. Under load in Crysis 2, at identical clocks, and as I recall at 2560x1600, with the 7950 delivering only about 3% less fps, the 7950 drew 220 watts versus the 7970 at 252. That's 32 watts less, about 12%, corresponding to having about 12% fewer stream processors. Drawing less power, the card runs cooler.

The reason that drew me in is that I don't want a PSU expense as part of this project. So my thinking is - if I overclock the 7970 way beyond stock and it starts pulling 300 watts, then that's two of them pulling 600 watts, and I'm pushing the 750 capacity of my toughpower. If two 7950s are still a cumulative 65 watts less, then that's 535 and I still have some room for overclocking the Q9450.
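Here's that budget as a quick Python scratchpad, just to sanity-check my own arithmetic. The 300-watt overclocked draw is purely my guess; the 32-watt gap is the Hexus load figure from above:

```python
# Rough CrossFire power budget against the 750W Toughpower
psu_w = 750
oc_7970_w = 300          # assumed: per-card draw for a heavily overclocked 7970 (my guess)
clock_gap_w = 32         # the 7950 drew ~32W less than the 7970 at the same clocks (Hexus)

two_7970s = 2 * oc_7970_w                # 600W for the pair
two_7950s = two_7970s - 2 * clock_gap_w  # ~536W for the pair
print(psu_w - two_7970s)  # 150W left for the CPU, board, and drives
print(psu_w - two_7950s)  # 214W left - noticeably more headroom
```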

And then I felt sheepish for even looking at the starting clocks - the stock 7950 at 800 vs the other at 925 - since, as the article pointed out, you just bump the clock and there you are at 925, no fuss no muss. Sapphire sells the 7950 with dual fans for $399 at stock clocks, or the same card clocked at 950 for $409, for somebody who doesn't like the idea of overclocking. Well, that is not me - that's why I now feel sheepish about yesterday's comments on the starting clocks.

I already quoted it above, but inasmuch as my needs are middle of the road, 4MP, I have the sneaking suspicion that the 7970 is a little overkill for me, and that the leaner, meaner 7950 - 12% less power usage, 12% fewer stream processors, but still a massive 3 gigs of Vram - is a better match for my 4MP needs, AND it seems eminently overclockable. I may be wrong, but with a slight OC tweak, I think I could match the 7970 fps for fps (I only need to make up the 3-5% performance difference) and BE RIGHT THERE, while actually pulling slightly less power, and thereby running a bit cooler, which in my present Sonata case is helpful.


Finally, I tossed my upgrade path out the window, chunks 1, 2 and 3, and decided to think of chunk 1, then chunk 2. I am already gpu bound. So that tells me to work first on the gpu. At $400 a card, that's $800. Your comments about giving each card 8GB/s of PCI-e bandwidth were helpful.

Quote:
with the 7970/7950 OC, it is slightly bottle-necked if it doesn't have at least 8GB/s of PCIe bandwidth. Crossfire with two 7970/7950 OC would mean that they are in an x8/x8 configuration and a PCIe 2.0 x8 slot can only supply up to 4GB/s of PCIe bandwidth per graphics card.
I realize it's not a major factor, but every little bit helps. My present PCIe 2.0 motherboard, the p5e, puts out x16 for each card in crossfire, x32 total - I read it in the user manual. So I can deliver the 8GB/s you mentioned - no slowdown there.
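For reference, here's the per-lane math behind those numbers as a little Python snippet. PCIe 2.0 runs 5GT/s per lane with 8b/10b encoding, which nets out to 500MB/s usable per lane per direction:

```python
# Usable PCIe 2.0 bandwidth: 5 GT/s per lane, 8b/10b encoding -> 500 MB/s per lane
def pcie2_gbs(lanes):
    return lanes * 0.5  # GB/s per direction

print(pcie2_gbs(16))  # x16 slot: 8.0 GB/s - the figure quoted above
print(pcie2_gbs(8))   # x8 slot:  4.0 GB/s - the bottlenecked x8/x8 case
```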

So far, the mobo looks ideal. The rub is the old processor, the Q9450. Okay, so I'll overclock that up to 3.4 or 3.6, whatever I can get it to. I'll add one 7950 for $400. And I suspect - I'm not positive - that if I go for complete, absolutely full eye candy, I will STILL be gpu bound on one or two titles. Let me restate how you put it, which had me drooling:

Quote:
Two 7970s could do it with insanely high AA/AF, Ambient Occlusion, and so many other detail enhancing technologies that you'd think even the single 7970 or slightly overclocked 7950 running with high settings and AA looked moderate instead of incredible in comparison



Now that is a quote and a half. That's almost the most exciting 40 words that I have ever read. (That's why you'd like the forum I mentioned - you could go toe to toe with some other people who write equally powerfully.)

Wow!!! Double wow!! I just posted your quote over there - don't worry - I gave you credit, lol.

Well, all I can say in support of that sentiment is that I have put games that I own on the shelf, not wanting to go past just the very first bit, because I don't see what the guys are raving about - case in point: Crysis.

And I understand that even with CF 7950s, I might not be able to max Crysis in 30" gaming at VERY HIGH settings - but certainly I would be able to handle Warhead at enthusiast - something about better drivers. So maybe I'll find out what the crysis guys have been talking about, and what you just wrote about in that quote.

I said a second ago that my upgrade path had changed. I think it is very possible that I could add a 7950, overclock it, try to max Warhead, and have the gpu still be the bottleneck. I think I could monitor my cpu and gpu load and determine that. I may need some better monitoring tools, but GPU-Z will give me a two or three minute track of the graphics card, and I'll have to find a cpu logger - maybe Everest, or Sandra, or something. Unless I have an OSD that will tell me.

So, assuming there is at least one title I want to max, for which a single 7950, even overclocked to the max, is still the bottleneck, well, simple, I add another one. So that is an $800 solution.

That will leave me heavily cpu-bottlenecked, and then next year I can take care of that, making sure, per your new very helpful ideas, to get a motherboard that will at least support tri cf, if not quad cf. I know the scaling diminishes greatly at quad, but by then 7950s will be dirt cheap, lol.

The cpu solution next year would be the second $800 investment - the second big chunk. I really appreciate your advice about how to stay ahead of the curve through a tri or quad mobo, as well as the idea of modifying the IHS of that one chip you like. I might investigate that further.

I'm printing out your comments to PaperPort for further study, and of course for future reference. I got lost at one section, but it was a minor one. I mostly understood the memory bottlenecks of the nvidia cards.

Oh, in reviewing this just now, I remembered another question I had for you. You mentioned Metro 2033 as one title with a heavy potential for memory bandwidth bottlenecking. I just played through it and liked it, but my settings were nowhere near max, of course. I have heard that it is something else at max settings, so I should have another go at it in a few months to see what the fuss is about.

My question is: what in particular makes that game, or Crysis, a memory bandwidth heavy game? Are the textures flying into the card at breakneck speed - are the effects accumulating rapidly and adding to the memory load - you alluded to this in talking about the AA calculations - is that mostly it?


At first, yesterday, in thinking more about it, I decided that you must have meant eyefinity, but then I read some benchmarks and found the 7970 beating the gtx680 at 2560x1600 in a few titles - Crysis for one. So it was not just in eyefinity that AMD can beat the GTX680.


Quote:
The only reason that the 7900 cards beat Nvidia in some games at 2560x1600 is because the games that AMD wins in are bandwidth heavy.



Now with today's discussion, like that quote above, you are saying - "No, not just eyefinity, but 30" games too." And AMD winning, you are saying, is due to the nvidia memory bottleneck kicking in. So I am glad I did yesterday's research.

But there is one paragraph about the 670 that confused me, and maybe there was a typo somewhere:

Quote:
More proof for the memory bottle-neck is the GTX 670. The 670 is less than 7% slower than the GTX 680 even though the 680 has a disadvantage in core count (Nvidia's cores seem to scale better than AMD's at lower resolutions) and in clock frequency. The 670 also happens to have equal memory bandwidth to the 680. This is one way of identifying memory bottle-necks: when two video cards with non-trivial GPU performance differences and the same memory bandwidth have nearly identical gaming performance, a memory bandwidth bottle-neck is usually the prime suspect.


I don't know the 670, but I think it's cheaper and slightly slower than the 680, right? One guy told me to pick one up instead of the 680. So the second time you say 680 above, I think maybe that should have read 670 - the 670 has a disadvantage in core count. Wait, I'm going to newegg right now. Yes, 1344 cores for the 670, 1536 for the 680. Ok - same limited memory bandwidth, 192 GB/s, not 264 like AMD, and 7% slower than the 680 with fewer cuda cores. But I don't get the rest of that paragraph.

So if you want to clarify that, super, but if you don't want to bother, I don't think it's crucial to the whole discussion about the memory bottleneck - 256 bits wide vs 384, 192 vs 264 GB/s - and also that very interesting discussion of MSAA vs the new nvidia idea, TXAA, which is lighter, with quality comparable to MSAA at less of a performance hit, but which so far does not have much support.

That was an eye-opener.

I also agree on your comments about integrity and honesty - I've heard that before.

I will say, however, that I am glad I found myself with this 8800GTX card after several ATI cards which I loved - because I discovered something new to me, though it has been around since about 2006 and is not an nvidia exclusive. I discovered some eye candy called digital vibrance, and then I dug around and found how to achieve it on AMD by goosing the saturation and adjusting the color temperature.

It's a very personal thing, but on some titles, it just adds to the eye-appeal for me. I know some people call it cartoony and unrealistic. It's all in the perception. I am in the minority of one on the forum about that - but I won't change my mind, I like it.

I took some photos one time of a Brazilian girl I was dating. I took the photos at Ipanema Beach in Rio. The colors and tones were unlike any photos I had ever taken before on my Nikon. Even elsewhere in the Brazilian countryside as we took a bus from Fortaleza to Rio, the colors were unique to my experience. A little later, stateside again, a friend of a friend, a professional photographer, saw the photos and commented that the warmth and hues in those photos could only be achieved down there near the Equator - "That's where I have to go for certain calendars" he said to me.

Some would look and say "unreal, cartoony." Well, I was there, and it was real, lol. Now maybe the German countryside on fire doesn't look like that, but it would be fun if it did.

The first title I played with my 8800GTX was World at War, the Treyarch COD. I did the nvidia monitor tuning, which I liked very much, and the burning forests and other colors of that game were more vibrant than anything I had ever experienced before. I was jazzed by it and attributed it to the new rig. Later I found out that the digital vibrance had gotten turned up in the course of running the monitor adjustment utility. And finally, like I say, I dug around and found out how to replicate it on AMD - it's almost hidden - you go down to the display settings and find Avivo color, and adjust saturation and color temperature.

So nvidia was a good experience, but I'm more than ready to come back to AMD. Nvidia wiped out the pioneer 3dfx - I would not like to see that happen again to AMD - it would be bad for pricing and bad for innovation.

So thanks again guys, and bravo blazorthon!

Rich
a c 87 U Graphics card
June 2, 2012 3:57:17 AM

You were correct... I did have a typo when I was talking about the 670 and 680 memory bottle-necking. It is the 670 that has the large disadvantage, not the 680. Thanks for spreading my words... Like I said before, these sorts of topics are the ones I enjoy most of all, and the better informed people are, the better the enthusiast industry can be. Being a part of that education makes it even better, especially since I get to read the perspectives of others who share an interest in these subjects. Some people might call us geeks and such, but they're the ones missing out on some of the fun, at least IMHO.

Quote:
More proof for the memory bottle-neck is the GTX 670. The 670 is less than 7% slower than the GTX 680 even though the 680 has a disadvantage in core count (Nvidia's cores seem to scale better than AMD's at lower resolutions) and in clock frequency. The 670 also happens to have equal memory bandwidth to the 680. This is one way of identifying memory bottle-necks: when two video cards with non-trivial GPU performance differences and the same memory bandwidth have nearly identical gaming performance, a memory bandwidth bottle-neck is usually the prime suspect.


Did you mean this is the paragraph that you didn't understand the end of? I'll explain it a little more. The GTX 670 and GTX 680 have a large difference in the performance of their GPUs, but they have identical memory bandwidth and memory capacity. They have very close gaming performance even though the 670's GPU has a large disadvantage. This is because the memory bandwidth of the two cards is not high enough for the 680 to *stretch* to its GPU's full potential. If a GPU can do, say, 100FPS in a given situation and it needs 200GB/s to hit that 100FPS, but you only give it 100GB/s, then it will only get about 50FPS to 55FPS. Take another GPU that can do 80FPS in that same workload and needs 160GB/s to get that 80FPS; giving it the same 100GB/s as the other card means it will only be slightly slower than the other card, despite the 25% difference in the maximum performance of the two GPUs.

This is a moderate oversimplification, but I think it gets the point across. Basically, the GTX 670 has a significantly slower GPU than the GTX 680, but neither of them has enough memory bandwidth for its GPU, and since their memory bandwidth is the same, the GTX 680 is only a little faster. Although a memory bandwidth bottle-neck doesn't kill GPU performance completely, it greatly hurts the scaling of GPU performance improvements. Basically, the more memory-bandwidth bottle-necked a graphics card is, the more GPU power it takes to get the same boost in delivered performance. The GTX 670 has 1344 cores at a slightly lower clock frequency than the GTX 680, which has 1536 cores. The 680's GPU is around 20% faster than the GTX 670's GPU, yet at worst, the GTX 670 is only about 7% slower than the GTX 680, and it is usually closer to 3% or 4% slower. That means that even in the best-case scenario for Nvidia, the gaming gain from the GTX 680's faster GPU is only about a third of the boost in GPU power, and at worst it is closer to a sixth. Keep in mind that performance increases from core count usually aren't close to 100%, but they are much better for Nvidia than for AMD, and even then, the 7970 managed to scale better over the 7950 than the GTX 680 did over the GTX 670. If the 670 and the 680 are at the same clock frequency, they are even closer than the 7970 and the 7950 are, on average, despite Nvidia having the GPU scaling advantage. This is a memory bandwidth bottle-neck in action.
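If it helps, here's my example from above written out as a tiny Python model. It's a deliberate oversimplification, like I said, treating frame rate as capped in proportion to the bandwidth shortfall:

```python
# Toy model of a bandwidth-capped GPU (a deliberate oversimplification)
def delivered_fps(gpu_max_fps, bw_needed_gbs, bw_available_gbs):
    # If the card has less bandwidth than its GPU needs at full tilt,
    # frame rate falls off roughly in proportion to the shortfall.
    if bw_available_gbs >= bw_needed_gbs:
        return gpu_max_fps
    return gpu_max_fps * bw_available_gbs / bw_needed_gbs

# The worked example from above: a 100FPS GPU needing 200GB/s and an
# 80FPS GPU needing 160GB/s, both fed only 100GB/s.
print(delivered_fps(100, 200, 100))  # 50.0
print(delivered_fps(80, 160, 100))   # 50.0 - in the limit, the gap vanishes
```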

Another example of a memory bottle-neck would be AMD's Llano APUs. If you take the 6550D of a desktop A8 and overclock it by 60% while running 1600MHz dual-channel DDR3 memory, you would be lucky to see more than a 15% performance increase.
June 2, 2012 4:39:12 AM

Yes, I am glad you didn't mind that I spread your words around. You're right - there is massive misinformation out there and the facts need to get out.

Furthermore, you'll be glad to note that my temporary dyslexia went away and I got your name right, blazorthon - I put the "r" in the right place finally - and even better than that, I went over to afterdawn and fixed it there too.

Remember, official pc builder thread - 4th edition. They have a variety of official pc builder threads going on, lol.

Thank you for reviewing that one paragraph that baffled me - now I believe I totally understand the discussion about 670 and 680.

I'll paraphrase. The 680 should be much faster than the 670. But the 670 is virtually the same. Why? It must be that the 680, which should be way faster, is constricted in some way - something is holding it back. Hmmm - while the 680 is stronger in every other aspect, it does share the same memory bandwidth with the all-around "weaker" 670. So THAT must be what is holding it back.

If that is correct - then bravo again. Brilliant, totally makes sense. Good point!!

[Please drop by. You and Sam (sammorris) would share some lively discussions. Then there is Jeff (Estuansis), no slouch either and a major crossfire enthusiast like Sam. And Russ and Stevo with massive builder experience, and Omegaman7, Kevin, Red_Maw, and more.]

Your point on Llano memory bottlenecking is also very interesting. I am going to pay much more attention to memory speeds than I ever have in the past.

Rich



a c 87 U Graphics card
June 2, 2012 4:41:18 AM

About that Youtube video... Now that looks interesting too. I've wondered about modifying displays, but that is something that I've yet to try. Now, I must try it sometime.

I wanted to say something about overclocking the video cards. When you're overclocking, the GPU is not the only part of the card that gets hot, but it is often the only part with proper cooling. If you overclock a video card, the best way to ensure the greatest overclock is to make sure the RAM ICs and the VRM also get good cooling. Some cards cool them, but some do not. You can buy cheap copper heatsinks for them if the cards you buy don't have proper VRM and RAM cooling. There are so many things that can be modded, and there are so many benefits from doing it beyond it simply being fun; it is something that I enjoy greatly.

Also, if you do get the i5-3570K, change out the paste, and it is still too hot, there is one other thing you can do that might help a little more: lapping. You might have heard of it, but I'll mention it just in case you haven't. Lapping is a type of mod in which you take a CPU and basically sand off the top layer of the IHS. The IHS is primarily copper, and the silvery color comes from a plating of nickel. Nickel, however, is a substantially less thermally conductive material than copper. The IHS is also not perfectly flat, further hurting thermal transfer to the CPU cooler. By removing the nickel plating and flattening the IHS, it can conduct heat to the CPU cooler better. You can also *lap* the bottom of the CPU cooler.

Having a flatter IHS and heatsink removes most of the gaps that would otherwise exist between the IHS and the cooler. That means you need less thermal paste, and less paste means an even more thermally conductive connection between the IHS and the cooler because paste, as I think we've established by now, is not as thermally conductive as most metals. This probably wouldn't help as much as replacing the paste between the IHS and the CPU die with high-quality paste would, but I've tested it myself and it does help a little. It's also easier than replacing the paste under the IHS.

Quote:
My question is: what in particular makes that game, or Crysis, a memory bandwidth heavy game? Are the textures flying into the card at breakneck speed - are the effects accumulating rapidly and adding to the memory load - you alluded to this in talking about the AA calculations - is that mostly it?


I'm not exactly sure why Metro 2033 and other such games need more memory bandwidth than others, but from what I can tell, these games generally have better picture quality, so I think the quality is related to the increased load. Higher quality textures take up more space and, when the GPU needs them, use more memory bandwidth than low quality textures, so they are a likely culprit. Metro 2033 also has more settings and makes greater use of DX11 features than many other games. I think the fact that many of the more GPU bound games are also more memory bound and tend to look better than CPU bound games tells quite the tale here. Take this paragraph of mine with a grain of salt... It is speculation as to why VRAM limited games are so VRAM limited, not solid proof.
June 9, 2012 5:39:10 AM

Well, that is very interesting speculation - it makes sense to me.

I am back after heading up to LA last Saturday, getting back a few days later, then dealing with a solid four days of paperwork on a new real estate escrow.

The other agent coincidentally has the transaction coordinator I used for 12 years, until recently, who charges $400 an escrow. So to save the money I haven't used her for a bit, and the savings will buy the first 7950 - but boy is there a lot of paperwork. As I get back up to speed on selling houses, I'll have to start letting her handle it all like the old days, lol.

Blazorthon, Sam liked your quote over on the other forum.

I posted it like this:

Quote:
Hey guys, this is probably unkosher, but there's a guy over at Tom's hardware, blazorthon, whom I am talking with. The guy has deep knowledge. I don't know if he's built thousands of PCs like you have, Stevo, but he might be a chip engineer. He got me started on thinking about the 7950. Here's what he just said, talking about the ability of crossfire 7950s to render a scene. Reading it, I have to parrot what Kevin said a couple hours ago, "I'm drooling."


Quote:
Two 7970s could do it with insanely high AA/AF, Ambient Occlusion, and so many other detail enhancing technologies that you'd think even the single 7970 or slightly overclocked 7950 running with high settings and AA looked moderate instead of incredible in comparison



My eye-candy hungry brain is telling me, that's one of the most powerful 40 word descriptions I ever read. I so want to experience what you guys have raved about - in Crysis for example - Jeff's stories about walking through the forest watching the Korean patrol, on very high settings, lol.


Here's Sam's comeback:

Quote:
There is a considerable element of truth in that post - I don't frequent THG generally because their main site is garbage, but whoever you're speaking with has a fair idea. The more you realise you can achieve, the more dissatisfied you become with what you used to have - that applies in all walks of life, not just graphics performance. One thing that's often banded about is to avoid 120Hz unless you have the funds to commit to guaranteeing 120fps in the games you play, because once you realise what you were missing, you'll suddenly realise 60Hz was inadequate - yet people that haven't seen 120Hz much in action will be perfectly satisfied


I thought that was quite interesting, and I will avoid 120Hz like the plague - because you commented on the difference between 55 fps and 60 - falling in or out of vertical sync.

Regarding the cpu that you have been mentioning, the i5-3570 as I recall - that's the one with PCI-E 3.0, correct?

I saw that mentioned as a build-it-yourself component in a kit in a Tiger Direct catalog that just showed up in the mail today, and they were including the Z77 motherboard you talked about. I have to admit that actually taking off the IHS seems a little daunting - you said it wasn't that hard to do but it was time-consuming - whereas lapping both the HSF and the cpu doesn't seem that bad.

I have known about lapping for quite some time but I haven't yet bought any lapping kits.

I know lapping voids your warranty, so I guess you should run the cpu first to make sure it isn't DOA before you take it back out and lap it. If I went with a TRUE, there was a company that would add a $20 surcharge and do the lapping for you. I have read quite a bit about the benefits of lapping. I suppose if one did the IHS re-paste plus the lapping, that might make for quite a good overclock on that 3570, which pulls a little less power than the 2500 - is that what you said?


Finally, what did you think of my 7950 theory?

Now that I see my 30" requirements as being middle of the road, no longer extreme since eyefinity is working, I have come up with the theory that the 7950 is more suited to 30" gaming, whereas perhaps the 7970 is better for the extreme challenge of eyefinity.

I see that you liked that YouTube eyefinity that I linked to, with the samsung monitors in portrait. I would be quite interested in hearing about any tinkering you did with a setup like that.

Anyway, to continue with the theory: the 7950 under load, according to that review that heavily influenced my thinking, consumes 12% less power but delivers only 2 or 3% less performance, on average, at the same clocks - and the review covered 30" gaming. So with a slight overclock to bring the performance of the 7950 up to the 7970, I would expect the 7950 to sit there, delivering identical performance while consuming less energy, despite the higher clocks.

The review actually said that they felt the 7950 architecture was better balanced, and maybe that is what they meant - because every test they did was at 2560x1600 resolution, 30" gaming, what they called 1600p.

WHAT I EXPECT TO GET for 30" gaming, 7950 vs 7970: identical fps performance, at slightly higher clocks than the 7970, with 5-8% less energy consumption than the 7970.

If all that is true, then the 7950 is ........

Quote:
A leaner, meaner chip more ideally suited for the mid-range, 30" gaming - cooler running, offering higher overclocking headroom, and ultimately better performance for 30" gaming than you can get from its bigger brother, the 7970.


That's what I think I will realize with the 7950 mated to my 2560x1600 Dell. Would you expect that to be true?

thanks,
Rich
a c 87 U Graphics card
June 9, 2012 5:58:25 AM

harvardguy said:
Well, that is very interesting speculation - it makes sense to me.
[...]
That's what I think I will realize with the 7950 mated to my 2560x1600 Dell. Would you expect that to be true?


Oh, most definitely... Get the 7950. That card is most certainly better balanced, and your theory is correct beyond a doubt. Heck, the 7950, while consuming the same amount of power as the 7970, could even be a little faster than the 7970 if you push it past being merely on par. Of course, you can also leave it on par with the 7970 and have it consuming a little less power, as you described. Honestly, even for incredibly high-end builds, I'm not sure the 7970 can truly overtake the 7950. At best, the 7970 is no more than ~10% faster than the 7950, and that's in a ridiculously high-end configuration with a huge pixel count, so it can't really beat the 7950 by enough to be worth ~20% more money unless someone wants to spend a good deal more for a fairly minor performance boost. I'm not sure the 7970 is truly worth buying in any situation where someone is willing to overclock, because of the 7950.
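As a back-of-the-envelope value check, using the ~$400 7950 price and the ~$80 gap mentioned earlier in the thread (my assumed figures, not official pricing):

```python
# Price/performance sanity check, 7950 vs 7970 (figures quoted in this thread)
price_7950 = 400            # ~$400 for a dual-fan 7950
price_7970 = 480            # ~$80 more, per the earlier posts
best_case_gain = 0.10       # 7970 at best ~10% faster, per the paragraph above

premium = (price_7970 - price_7950) / price_7950
print(f"Price premium: {premium:.0%}")                        # ~20% more money
print(f"Perf gain vs premium: {best_case_gain / premium:.2f}")  # 0.50 - half the value
```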

All i5s where the first number after "i5-" is a 3 (i5-3450, i5-3550, i5-3570K, etc.) are Ivy Bridge i5s; Intel calls them third-generation i5s. Ivy Bridge has PCIe 3.0, so yes, the 3570K does have PCIe 3.0, and yes, even if you don't want to switch out the paste, lapping would also help (although the paste mod would probably help more, and doing both would be superb). If you lap the processor, be sure to cover the gold *lands* on the bottom of the CPU with something to keep them from getting dirty and/or damaged. I think you probably already knew that, but I thought I'd throw it out there just to be sure.
June 9, 2012 8:07:52 AM

No I did not know that. I have never lapped anything - only read about it and watched a few YouTube videos.

Hey, I see you popped over to AfterDawn. That is great. Because of you, I just changed my name to the full harvardguy, and I am back to newb status, but no more name contraction. Hooray. I was ticked off 4-5 years ago with the 9-character limitation, but seeing your full name on there tipped me off. THAT IS TOO FUNNY!!

I am going to continue the rest of this discussion over there. So drop back over please. Sorry I was absent for this week.

I don't know if you noticed, but the protocol over there is usually NOT to quote the entirety of the prior post - THE OPPOSITE TO HERE - they used to do that a few years ago - but it changed to mostly picking out smaller quotes, the way Estuansis (Jeff) did in the post above yours.

Come to think of it, maybe I caused that change, lol. I'm going to sign out here and pick it back up over there and post.

see you over there,
Rich
PS what's your real name?
June 10, 2012 4:51:17 AM

Hey blazorthon,

I meant to ask you another question. In that review of the 7950 that we like, he mentioned that if you volt the card to the same voltage as the 7970, it really flies. Do you happen to know those voltages?

Thanks again for the massive help on that card.

I'm going over to afterdawn to take a look at Jeff's post - he did some things with crysis textures and I think he's showing comparisons. Please drop in from time to time - it's got a different flavor, but with the pictures it allows, we can do some awesome game reviews, for one thing - it will be a little break for you from this forum, which I see is also great in its own way - very tech-intensive, which is nice.

Prior to talking with you, Sam had passed through a moment of self-doubt, and was saying "Honestly, just get a gtx670." Good thing I ran into you and got straightened out about all that stuff.

By the way, you like Ivy Bridge, of course, as you said, with the PCIe 3.0, and that totally makes sense to me. But somebody was putting Ivy Bridge down on some thread, I think it was here on Tom's hardware - something about it being a big disappointment.

You already said it doesn't overclock as well unless we do the lapping and the new paste. But do you care to comment any further on Ivy Bridge? Finally, I assume I'll find plenty of help on doing the re-pasting - probably on YouTube - correct? You said to make sure to protect the gold lands during the lapping - how would one do that?

Rich
a c 87 U Graphics card
June 10, 2012 5:12:14 AM

Ivy Bridge disappointed people who were expecting much more than they should have and who didn't do the paste-replacing mod that I mentioned. Some people didn't realize that Ivy Bridge was not intended to be a large performance leap over Sandy, just a die shrink with an improved IGP, so they mock it for not being like the jump from Nehalem to Sandy Bridge.

Ivy Bridge did exactly what it was supposed to do... Superior IGP, lower power usage, and a small performance boost. Intel releases their CPU generations in a tick-tock cycle: ticks are die shrinks, tocks are new architectures on the process node of a previous die shrink. It's not like with GPUs, where each die shrink gets a new architecture. Basically, those people were ignorant of Intel's business strategy, worked their hopes up beyond reason, and then were let down when Ivy Bridge did what Intel intended it to do... Establish the 22nm process node. Intel does this to get experience with a process node before building a new architecture on it.

About protecting the gold lands, all I did was cover the bottom of my CPU in scotch tape the last time I lapped a CPU. It was cheap and it worked very well at keeping the lands clean, although taking the tape off was a little annoying.
a b U Graphics card
June 10, 2012 10:15:05 AM

blazorthon said:
Ivy Bridge disappointed people who were expecting much more than they should have and who didn't do the paste-replacing mod that I mentioned.
[...]


+1

Don't forget about 3.0 support :) 
a c 87 U Graphics card
June 10, 2012 10:25:50 AM

akamrcrack said:
+1

Don't forget about 3.0 support :) 


Well, features weren't part of the point I was making, but sure. Of course, if we were talking features, then PCIe 3.0, superior Lucid MVP and other such technologies, triple display support in the IGP, DX11 support in the IGP, and much more could be mentioned.
a b U Graphics card
June 10, 2012 10:36:59 AM

blazorthon said:
Well, features weren't part of the point I was making, but sure. Of course, if we were talking features, then PCIe 3.0, superior Lucid MVP and other such technologies, triple display support in the IGP, DX11 support in the IGP, and much more could be mentioned.


Yeah, I purchased my brother a 3770K with a Z77 Sabertooth, mainly for overkill since he mostly games/watches vids/music, but he'll always have the opportunity to explore other possibilities with his components later on.

Always fun to build multiple pcs for different reasons :) 
June 11, 2012 6:35:46 AM

Wow, scotch tape - but wouldn't that leave a sticky residue from the tape? Could you then apply some rubbing alcohol to get rid of any scotch tape residue?

I watched a YouTube video on the IHS removal - the guy was pounding on a switchblade knife to loud rap music. He made it look brutal. But I did see around the net that they said the paste DOES come off.

Then, as he was holding the chip, he had his fingers all over the gold lands - I always heard you don't want to get fingerprints on those gold lands - so I was wincing. All the comments said they wouldn't do it this way, lol. Actually everybody said they would as soon let a monkey work on their chip as the guy in the video, lol.

Do you have any favorite IHS removal videos - or is that the only one out so far?

I appreciate your great knowledge. It was fun to see you and Sam chatting away over there at AfterDawn. He told me to get a gtx 670 - it's a good thing that didn't sit well with me - especially since I don't have an sli certified motherboard. Sometimes he gets depressed and just gives in, lol. Did you pick up on Jeff's crysis pictures with the texture mods?

Oh, I found out newb status lasts for only 25 posts. So if you feel like popping over from time to time and doing a quick post, the newb will be out of the way fast - I'm up to 14 already. You can see how many you've done by looking up by your name and clicking settings.

I tried to post a few pictures and links, but it seems I am totally blocked until out of newb. Again - any link to a better IHS removal would be good; otherwise, I see how it is done - just kind of hack that paste apart. And I saw the posts about what kind of quality paste to replace it with. Then some lapping, and a pre-lapped TRUE, and I'm in business - but that's still a year away. Thanks again,

Rich


a c 87 U Graphics card
June 11, 2012 7:16:15 AM

harvardguy said:
Wow, scotch tape - but wouldn't that leave a sticky residue from the tape?
[...]


Scotch tape, at least the scotch tape that I buy, does not leave a residue, but yes, I did clean the lands with an alcohol solution anyway. Don't do what that youtube guy did... I can't understand someone knowing about something like this but not understanding basic caution. Like I said, this procedure is fairly easy, but it should take a lot of time and patience. What I would do is take a small knife with a sharp, straight (not serrated) blade and gently, slowly carve through the glue without letting the blade touch the chip itself.

For example, I have an old Swiss army knife with two blades, one three-inch blade and one two-inch blade. I would use the two-inch blade after completely covering the gold lands of the CPU with a good layer of scotch tape. Once I've made a lot of progress on one side of the CPU, I would move on to another side. Once the knife can fit more than two or three millimeters under the IHS on all four sides, it should be almost ready to pull off. Just finish cutting any glue that is still holding on, and the IHS should come right off. Then, clean out the paste (both on the inside of the IHS and on the CPU itself), put the new paste in, and re-attach the IHS. This process could take over half an hour or even longer, so don't rush it. Do not hit the knife, pound on it, or anything like that; just use the blade to slowly cut through the glue. Remember, you're not chopping down a tree or splitting rock with a hammer and chisel, you're carefully carving a sculpture, and it needs to be ideal, even if perfect isn't necessary.

If you plan on lapping, be sure to lap the IHS before you remove it to replace the paste, or else lapping could be both dangerous for the CPU and difficult. If you lap while the IHS is still attached, it's very easy. You don't need to lap your cooler until you want to, but lapping of the IHS should be done before you remove it.

If you lap it, then I recommend that you wear a dust mask and do so in an at least semi-well ventilated room to make sure that you don't inhale nickel and copper dust from the IHS. This isn't the kind of stuff that you'd want in your lungs. There shouldn't be any in the air if you're careful and keep the IHS at least partially soaked in alcohol or another such chemical while you grind it down with a lapping kit, but better safe than painfully coughing up metal powder.

You can get such masks from a Walmart for less than $5. You might also want some basic gloves and goggles/safety glasses. Some lapping kits come with all of these, but I don't know if they all do, and chances are they don't. You wouldn't need any of this stuff for the paste exchange mod, just for lapping. I don't mean to scare you out of lapping - I recommend it - just don't be stupid about it if you do. Like many good things, common sense goes a long way, and really, a dust mask, safety goggles/glasses, and gloves are useful for many other things and are reusable tools, so it's worth having them just in case you ever want to do something with them again, and they are so cheap, why not?

Heck, the last time I lapped a CPU, I used a doctor's mask (got it for less than $2), latex gloves (got a 10-pack for $5), and safety glasses that came with my soldering iron, which I have used for other stuff. I simply opened a window in my spare room, placed a fan in the window, and got to lapping my old Pentium Dual-Core CPU in preparation for a small overclock, with a pack of sandpaper and some alcohol. I've used the gloves on many occasions, and the mask is good for when I'm doing something in a dusty place (I have moderate dust allergies and a very dusty attic), and I can honestly say they were very practical purchases.
a b U Graphics card
June 11, 2012 11:04:11 AM

blazorthon said:
Scotch tape, at least the scotch tape that I buy, does not leave a residue, but yes, I did clean the lands with an alcohol solution anyway.
[...]


Quoted ftw, gonna use this information myself :) 