Which GPU will be faster in the near future - Intel or AMD?

dariusz72

Distinguished
Jul 6, 2010
Hello,

With your feedback, I would like to determine (if enough information is available yet) which of the integrated GPUs coming in 2011, Intel's or AMD's, will be faster.

I understand that in early 2011 Intel will be releasing Sandy Bridge CPUs with a built-in GPU, followed later in the year by Ivy Bridge, whose GPU is expected to be twice as fast and to support DirectX 11, whereas Sandy Bridge's GPU will only support DirectX 10.1. - Reference: http://en.wikipedia.org/wiki/Sandy_Bridge_(microarchitecture)

Whereas AMD will be releasing the Llano and Bobcat ranges of CPUs with built-in GPUs, both of which I understand will support DirectX 11. - Reference:
http://en.wikipedia.org/wiki/AMD_Fusion#cite_note-AMDOntario-14
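
As a side note on what that DirectX difference actually means in practice: the "feature level" is what a game or application asks the Direct3D runtime for. The snippet below is only a minimal sketch against the public Direct3D 11 API - it assumes the Windows DirectX SDK and linking with d3d11.lib, and nothing specific to these particular chips - showing how a program would probe the highest feature level the installed graphics hardware supports.

#include <d3d11.h>
#include <stdio.h>

int main()
{
    // Feature levels to try, best first: 11_0 is DirectX 11 class (Ivy Bridge / Llano),
    // 10_1 is DirectX 10.1 class (what Sandy Bridge's IGP is reported to offer).
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
    };
    D3D_FEATURE_LEVEL got = D3D_FEATURE_LEVEL_10_0;

    // Passing NULL for the device and context pointers just tests which level
    // the hardware driver can create, without actually building a device.
    HRESULT hr = D3D11CreateDevice(
        NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
        wanted, sizeof(wanted) / sizeof(wanted[0]),
        D3D11_SDK_VERSION,
        NULL, &got, NULL);

    if (SUCCEEDED(hr))
        printf("Highest supported Direct3D feature level: 0x%x\n", got);
    else
        printf("No Direct3D 11-capable hardware device available.\n");
    return 0;
}

On a DirectX 10.1-class IGP that call would top out at D3D_FEATURE_LEVEL_10_1, so a title that insists on 11_0 would have to refuse to run or drop to a reduced path - which, as far as I can tell, is the main user-visible difference the DirectX version makes.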

Unfortunately, I cannot find as much detail about AMD's CPU/GPU range as I can about Intel's.

If anyone has a view on which GPU, Intel's or AMD's, will be faster, I'd like to hear your reasoning.

D
 

LePhuronn

Distinguished
Apr 20, 2007
Nobody knows because they're not out yet!

Unless you call Intel and AMD directly and ask for copies of their development benchmarks (which you're not gonna get), there is no way for any of us to know what's going to be faster.

Besides, this integrated graphics thing is just a waste of time - I don't care how "powerful" these embedded GPU systems are going to be, even a cheap discrete card will be so much better in every respect.
 
As far as I know, both technologies are going to have the GPU core clock controller on-die rather than a full GPU on-die, so I guess the new motherboards that come out for both platforms will have, you could say, a better on-die controller for the graphics.
Earlier this year and last year they started embedding the memory controller onto the die to increase memory efficiency and clock cycles; now they are doing the same thing for the graphics part of the setup to enhance the graphics experience.
Basically, motherboard architecture is going to come more into play with these new technologies, since you are getting graphical load-sharing within the processor at the die level, making the sending, receiving and processing of signals between your GPU, the processor and the memory a lot faster.
It is really confusing technology, since how it is going to affect the present Fermi and ATI cards is still not known.
Of course, I may be wrong in my understanding of these technologies or their nomenclature, but I think this is the basic idea behind the whole thing.

I can understand AMD/ATI being at an advantage, since they're now the same company, so the new AMD tech will go hand in hand with the ATI cards, each supporting the other - but I still can't be sure which one is going to be faster.
We're just going to have to wait and watch until the proper hardware is released and tested.
 

jj463rd

Distinguished
Apr 9, 2008


Perhaps it's because many new customers buying PCs expect to game on low-end name-brand Dells, HPs, Compaqs, Acers, Gateways, eMachines, etc., and are frustrated at not being able to do so right out of the box.
I agree a discrete GPU will always outperform an IGP, but it's not a bad idea to have a far more capable IGP for light gaming - one able to run older titles, or newer ones at lower resolutions.
 

kevikom

Distinguished
Jan 30, 2009
Right now, going by discrete GPUs, it looks like AMD - but then again, AMD likes to announce things for early buzz and then delay them for months or years, until the performance no longer matches Intel's. You know what would be a great idea?

Wait until you are already manufacturing something that beats the competition, and it has been tested, before saying it is better than the other guy's product. That way, when you release it, you keep your credibility in the community. I keep wanting to root for AMD, but they keep messing up. It's a pattern.
 
Earlier this year and last year they started embedding the memory controller onto the die to increase memory efficiency and clock cycles; now they are doing the same thing for the graphics part of the setup to enhance the graphics experience.

I disagree on that; syncing the CPU and memory makes sense from an architecture point of view, since the CPU needs to fetch data from RAM in order to do anything in the first place. But a GPU is technically not needed (the CPU can always fall back on software rendering, after all), and because the GPU is such a totally different beast (massively parallel), making the same move doesn't make sense from an architecture point of view in the first place.

As for integrated graphics, they are what they are: very weak GPUs that are good enough for non-gaming purposes.
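
For what it's worth, that CPU fallback is something Direct3D actually exposes: WARP is a rasterizer that runs entirely on the processor, with no GPU involved. A minimal sketch, under the same assumptions as any plain Direct3D 11 program (Windows DirectX SDK, d3d11.lib) - just asking the runtime for the software driver type instead of a hardware one:

#include <d3d11.h>
#include <stdio.h>

int main()
{
    ID3D11Device*        device  = NULL;
    ID3D11DeviceContext* context = NULL;
    D3D_FEATURE_LEVEL    level;

    // D3D_DRIVER_TYPE_WARP selects the software rasterizer, so every triangle
    // is drawn by the CPU - the "fall back on software rendering" case.
    HRESULT hr = D3D11CreateDevice(
        NULL, D3D_DRIVER_TYPE_WARP, NULL, 0,
        NULL, 0,                    // accept the runtime's default feature-level list
        D3D11_SDK_VERSION,
        &device, &level, &context);

    if (SUCCEEDED(hr))
    {
        printf("WARP (CPU-only) device created at feature level 0x%x\n", level);
        context->Release();
        device->Release();
    }
    return 0;
}

It works, but only for non-demanding use - software rendering is far slower than even a weak IGP, which is exactly why the GPU remains such a different, massively parallel beast.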
 
^^ Exactly - you've understood me wrong. That's what I meant when I said we need more info on the boards that will come out, and that's why AMD/ATI might have the upper hand, since they're making both.
And especially since we already have a completely independent video engine, why would a company try to put something like that on its die? :)
Get it?
That's the whole point, because if it does work that way, then we are looking at a really major turn in technology following the release of these processors.
(Attached image: sandy.jpg)
 

Houndsteeth

Distinguished
Jul 14, 2006
Historically, AMD has always had a performance edge over Intel in the IGP arena. Intel has come a long way, but their offerings are still very meager compared with the IGP solutions from both Nvidia and AMD. I'm not sure how much advantage Intel will gain from an on-die GPU, but it will more than likely never be comparable to discrete cards. The only users who will benefit are the ones who never really tax their GPU anyway, such as the majority of business machines out there that do nothing more than download email or run Office apps.
 

I could not have put it better myself.