AMD Pitcairn With 768 Shaders: What is This Mystery Chip?

May 3, 2012 2:52:49 PM

CSI Shanghai
Score
23
May 3, 2012 2:53:34 PM

almost too bad...
there's a market for the 768-shader chip of a 7830 somewhere around $180, filling the gap between 7770 ($130) and 7850 ($230), replacing the old 6850/6870 and competing with the 560/560ti
Score
38
May 3, 2012 2:56:08 PM

I look for single slot solutions. Maybe AMD's next success.
Score
26
May 3, 2012 3:14:22 PM

I'm very interested in finding out how these perform heat-wise, especially if I can drop two into my box in CrossFire without there being only 1/2" between them, for better airflow to knock back that almost automatic 10-degree jump in temp on the top card.
Score
4
May 3, 2012 3:26:05 PM

I was also hoping it was something filling the gap between the 7770 and 7850. A 768 shader Pitcairn could potentially be a near perfect HTPC card too. Enough performance to do some gaming, single slot solution, really low power consumption....
Score
23
May 3, 2012 3:30:33 PM

ScrewySqrl said:
almost too bad...
there's a market for the 768-shader chip of a 7830 somewhere around $180, filling the gap between 7770 ($130) and 7850 ($230), replacing the old 6850/6870 and competing with the 560/560ti


Agreed. And with Nvidia not having released midrange cards for some time now, AMD could have the best solution at $200 and below, which is a great sweet spot for gamers on that budget.
Score
23
May 3, 2012 3:31:40 PM

Call it whatever you want; unless it works with Adobe Creative Suite (which it won't, as it's not CUDA architecture), it is irrelevant for anything but games. IMO...
Score
-27
May 3, 2012 4:08:08 PM

7830 anyone!!!
Score
21
May 3, 2012 4:08:33 PM

edvinasm said:
Call it whatever you want, unless it works with Adobe Creative Suite (which it won't as it's not Cuda architecture) it is irrelevant for anything but games. IMO...

Wrong! CS6 supports OpenCL.
Score
21
May 3, 2012 4:09:07 PM

Is "Pitcairn" another word for irrelevant?
Score
-22
May 3, 2012 4:11:41 PM

Maybe it's related to the design chip for the new Xbox or PS4?
Score
17
May 3, 2012 4:36:10 PM

semisonic said:
Maybe it's related to the design chip for the new Xbox or PS4?


Maybe... but that doesn't explain the clearly PC form factor of this card, or the PC-type connections.

Looks like a pet project that AMD decided not to pursue for whatever reason...

Although a single slot card of that kind of power does sound kinda cool.
Score
14
May 3, 2012 4:47:01 PM

7830 or 7790? I thought a 7790 would be a bit less powerful than this.
A 7830 would be nice; there's a big gap between the 7770 and 7850 that needs to be filled.
With Nvidia facing supply issues and (AFAIK) no midrange Kepler in sight, AMD stands to make quite a bit of money in that segment. The 7800 cards already offer very good performance in their class.
Score
13
May 3, 2012 4:59:36 PM

I think Tom's copy editor must have left town or died. Tom's, heads up.... no one's editing your content! Hire someone new!
Score
-10
May 3, 2012 5:30:25 PM

An incomplete board is not always the best thing to be using for validation, since it might behave differently from the complete board. In either case though, I think we just ruined AMD's 7830 launch party.
Score
8
Anonymous
May 3, 2012 6:06:24 PM

They do have a gap, and this surfacing sure gets the speculation going for a 7830 or 7790.
Score
4
May 3, 2012 6:14:58 PM

aftcomet said:
Is "Pitcairn" another word for irrelevant?

No, but aftcomet is.
Score
16
May 3, 2012 6:53:37 PM

I would love a single-slot "gaming" solution...
I don't need the best of the best; in fact, I turn down shadows in every game regardless of when it was made, just because it gives me that much more headroom.
If I could put this in instead of a two-slot card, I would get it.
Score
2
May 3, 2012 7:03:18 PM

There are several performance gaps between the GCN cards. The 7750 and 7770 have a large gap (the 7750 performs on par with the 6750, but the 7770 is right next to the 6850). The 7770 and the 7850 have a large gap (the 7770 is right with the 6850; the 7850 is between the 6950 2GB and the 6970). A 7830 would fill one of these gaps, and it would do so very nicely. I doubt the other gaps will be filled, but that's just me.
Score
2
May 3, 2012 10:28:55 PM

Must be the sample AMD was going to 'leak' to Nvidia spies, poking fun at the 460 768MB edition in a wink-and-a-smile, roundabout way.
Score
2
May 3, 2012 11:06:19 PM

7790 or 7830?
Score
0
May 3, 2012 11:33:36 PM

7790 sounds better, plus it makes more sense, as the 7700 series is known to run pretty cool and is more plausible for single-slot configurations.
Score
-1
May 4, 2012 12:12:50 AM

FINALLY... original news from Tom's.
How long has it been? 10 years, maybe?
Score
0
May 4, 2012 1:06:16 AM

I think the 7830 name makes a lot of sense; the 5830 was a castrated 5850, after all. A single-slot version might be a lot nicer than the 5830 was, since this cut-down sounds like it'll do well on thermals, so it would actually have a purpose.

I hope AMD reads these comments and realizes that they actually should launch this card.
Score
3
May 4, 2012 1:38:21 AM

7amood said:
FINALLY... original news from tom's
how long has it been? 10 years maybe?


I see what you're getting at, but I think that your number is a gross overestimation.
Score
4
May 4, 2012 3:55:20 AM

7845? Maybe not.
Score
-2
May 4, 2012 4:06:15 AM

Maybe it will be Tom's fault when the crowd demands that AMD produce this thing! And then maybe AMD can make enough money to at least improve in the CPU market and be competition for Intel again.
Score
1
Anonymous
May 4, 2012 8:12:04 AM

I agree with semisonic.
The form factor is just for ease of setup in a dev box.
Score
1
May 4, 2012 8:29:08 AM

probably to compete with the GK110!
Score
-4
May 4, 2012 12:04:13 PM

Clearly an engineering sample, possibly testing a new line like the 7830, as most have suggested.
Score
3
May 4, 2012 12:29:32 PM

eddieroolz said:
An incomplete board is not always the best thing to be using for validation, since it might behave differently from the complete board. In either case though, I think we just ruined AMD's 7830 launch party.

As long as the I/O is good, it can be useful for debug (think memory testing). If the front end or back end is intact, it is more useful. If some shaders are all that is missing, then you can validate your PCB. What you are missing at that point is maximum power consumption, temperatures, and performance; you would need the full part to validate your cooling solution. Functionally, though, you have everything you need.

That said, the price gap has me wondering about the 7830.
Score
2
May 4, 2012 1:39:08 PM

dreadlokz said:
probably to compete with the GK110!

Why do people say GK110? If there EVER is a big Kepler for the GTX 600 cards, it will be the GK100; GK110 would mean a second-generation Kepler, presumably for GTX 700 cards at that point. I fail to understand how this is a difficult concept. The GF10x GPUs were in the first-generation Fermi cards (GTX 400 series), and the GF11x GPUs were in the second-generation Fermi cards (GTX 500 series). The GK104 is what's in the GTX 680, so why are people so ridiculously hooked on the GK110 when we don't even know if it will ever be made (Nvidia might just make a new architecture instead of second-generation Kepler GPUs)?
Score
2
May 5, 2012 3:10:27 AM

I enjoy these kinds of investigative articles. Keep them coming.
Score
3
May 5, 2012 8:11:47 AM

Man, look at the mess of that thermal paste! :( 
Score
2
May 8, 2012 10:54:53 AM

captaincharisma said:
AMD's next failiure

OK, maybe you'd call AMD's GPUs slower than the latest Nvidia has to offer. Maybe you'd criticize them for their lack of PhysX support. But then, you have to admit that they offered technologies that Nvidia only offers in the 600 series, like Eyefinity on a SINGLE board. I'm sitting here staring at my 1080p 23" display imagining the possible screen real estate if I got three of them running in Eyefinity mode. AMD's price/performance couldn't be matched by Nvidia until the recent introduction of the GTX 680, and even then Nvidia had to pull the plug on any compute performance improvements. Also, AMD's tessellation is improving significantly with every generation while Nvidia is simply sitting there. Another thing is Nvidia's accelerated video encoding, which is significantly worse than what AMD has to offer despite Nvidia's CUDA being SEVERAL years older than AMD's Stream/APP. Nvidia also locks out PhysX if it detects another vendor's GPU in the system. I can't understand this move, since allowing it would boost sales of cheaper GPUs, a segment dominated by AMD, and most graphics card profits are at the lower end of the portfolio.

Last but not least, AMD never released a driver that fried GPUs. So I think my money is way safer with AMD than it'll ever be with Nvidia.

Please, my kind sir, look at the Best Graphics Cards for the Money column before saying AMD's GPUs are a failure (that's also the right spelling of failure, not what you wrote).
Score
2
May 9, 2012 2:32:22 PM

youssef 2010 said:
OK, Maybe you'd call AMD a slower GPU than the latest Nvidia has to offer. Maybe you'd criticize them for their lack of PhysX support. But then, you have to admit that they offered some technologies than Nvidia only offer in the 600 series like Eyefinity on a SINGLE board. I'm sitting here staring at my 1080p 23" display imagining the possible screen estate if I get three of them running in Eyefinity mode. AMD's price/performance couldn't be matched by Nvidia until the recent introduction of the GTX 680 and Nvidia had to pull the plug on any compute performance improvements. Also, AMD's tessellation is significantly improving with every generation while Nvidia is simply sitting there. Another thing is Nvidia's accelerated video encoding which is significantly worse than what AMD has to offer despite Nvidia's CUDA being SEVERAL years older than AMD's Stream /APP. Nvidia is also locking out the PhysX capability if it detects another GPU in the system. I can't understand this move since it will boost the sales of cheaper GPU which is a section dominated by AMD and most of graphics card profits are in the lower end of their portfolios.

Last but not least, AMD never released a driver that fried GPUs. So, I think my money is way safer with AMD than it'll ever be with Nvidia.

Please, my kind sir, look at The Best Graphics Cards for the money column before saying AMD's GPU's are a failure (that's also the right spelling for failure, not what you wrote)


Nvidia's only faster because they were willing to sacrifice compute performance in an attempt to get us to turn to Quadro and Tesla. Furthermore, it's not even a big difference; there aren't many games where a difference between the 7970 and the 680 can be seen (at least in FPS). Granted, the 680 wins more often than the 7970 does, but the 7970 is worth every penny, just as the 680 is. The difference is that the 7970 has the memory capacity to last more than a year or two before AA/AF needs to be lowered to keep the VRAM from getting overloaded, whereas the 680 already shows problems caused by its low VRAM capacity relative to its performance in some games, resolutions, settings, and AA/AF levels. AMD also has cards that can do six monitors in Eyefinity instead of just three, and has had them for years. The list goes on, but I'll stop here before looking like an AMD fanboy.

Let's see what Nvidia did... They most certainly have the most energy-efficient and, for the most part, the fastest single-GPU and dual-GPU cards in the gaming world right now. However, if games become more compute-focused like so many say they will (including Tom's), then how long will that last? That depends on whether such games arrive before Nvidia releases another compute-focused architecture in their GeForce cards. I hope so, because if not, it would be a one-sided competition until Nvidia did. So I hope the next generation of Nvidia cards has more compute performance, just in case we get our compute-heavier games soon. Nvidia does have one advantage in that with Kepler, the double-precision performance comes from cores separate from the 32-bit gaming cores, so it should be able to keep its regular performance unchanged when it adds in its bit of compute performance, whereas AMD needs to allocate some cores to 32-bit math and some to 64-bit math. If Nvidia simply adds more of the 64-bit cores, it could have a winner in the next generation. Maybe AMD will take a similar approach in their next generation too, separating the 32-bit and 64-bit math into different cores. It's definitely an interesting concept, although I'm more partial to keeping everything in one core, so that if one type of performance is improved, it all improves at the same time.

It would be interesting if we could change the clock frequency of the 64 bit cores relative to the 32 bit cores. That way, we could overclock what really needs it more than what is already fast enough.
Score
1