Intel and DirectX 10 support--is ATI/AMD in serious trouble?

bourgeoisdude

Distinguished
Dec 15, 2005
1,240
25
19,320
http://www.tgdaily.com/2007/02/23/intel_crestline_graphics/

Okay, so now Intel is stating that their integrated video releasing this May will be fully DirectX 10 compliant. Obviously, it is not supposed to be superior to, say, the GeForce 8800 series or something, but let's remember something: AMD announced that their new DirectX 10 video cards are coming out this May as well.

What would be the consequences if AMD didn't release the new cards until AFTER Intel has the new chipsets? Again, I realise there is a huge difference in specifications, etc., but come on, Intel is beating ATI to DirectX 10? That is embarrassing!!! It is time ATI moved forward, because this AMD acquisition looks like a dumber move every day that passes, I feel. (sigh) I warned AMD in an email that I feared this would happen--I never expected them to listen or anything, but they responded and assured me that the transition would help them develop DirectX 10 faster than ATI could have on its own. I was skeptical but of course felt they knew more than I do about these sorts of things. I was wrong, apparently.

AMD is in serious trouble, and I hate it. I use NVIDIA and AMD together because to me they were practically made for each other, and I still feel AMD made a horrible mistake with ATI. I hope they prove me wrong, I really do, but right now it is not looking good at all.
 

sandmanwn

Distinguished
Dec 1, 2006
915
0
18,990
Perhaps you haven't noticed, but Intel is the largest graphics manufacturer in the world.

And yes, you are equating an IGC to a dedicated GPU card. Really man, you need to take a deep breath and step away from the PC for a little while. ATI has had DX10 hardware in the Xbox 360 since 2005.

ATI is far from being in any sort of serious trouble from Intel for years to come. In fact, the opposite is true: Intel is the one under pressure from ATI, because ATI is pushing into the IGC market, which Intel has had a stranglehold on for a very long time.

As far as dedicated GPU cards are concerned, Nvidia is the only direct competition for ATI, and from the looks of it that will only last a little while longer once the R600 is released.
 

Retardicus

Distinguished
Aug 21, 2006
49
0
18,530
I'm not so sure. They say it's using a many-core architecture, which makes me think it's x86-based processors. And only 16 cores on a chip!?

They took a very general-purpose architecture and are trying to mould it into a graphics engine. This means they're paying latency, power and die area for x86 decode. What a waste! And how can 16 cores perform 16x better than an 8800? An 8800 has 128 specialized cores running at several hundred MHz, within a specialized connection fabric. It's a stream processor.

Most graphics computations are 4-vector based, because they're working with transform matrices, point vectors and RGBA pixel values. Trying to do more work than one or two 4-vector operations per cycle in a generalized manner seems pretty complicated, but that's what they would have to do in order to compete. Each individual core would need a lot of FP processing power. I'm guessing it's also going to be running at a pretty high clock speed.
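
Just to illustrate what I mean by a 4-vector operation, here's a rough C sketch (my own made-up example, not anything from Intel or Nvidia): transforming one point by a 4x4 matrix, with each matrix row handled by a single 4-wide SSE multiply followed by a horizontal add.

#include <xmmintrin.h>  /* SSE intrinsics */

typedef struct { float m[4][4]; } Mat4;  /* row-major 4x4 transform matrix */
typedef struct { float v[4]; } Vec4;     /* x, y, z, w (or R, G, B, A) */

/* Transform a point by a 4x4 matrix: four row dot products,
   each done as one 4-wide multiply plus a horizontal add. */
static Vec4 transform(const Mat4 *mat, const Vec4 *in)
{
    Vec4 out;
    __m128 p = _mm_loadu_ps(in->v);            /* load the 4-vector once */
    for (int row = 0; row < 4; ++row) {
        __m128 r = _mm_loadu_ps(mat->m[row]);  /* one matrix row */
        __m128 t = _mm_mul_ps(r, p);           /* 4 multiplies in one instruction */
        float tmp[4];
        _mm_storeu_ps(tmp, t);                 /* horizontal add kept in plain C */
        out.v[row] = tmp[0] + tmp[1] + tmp[2] + tmp[3];
    }
    return out;
}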

I dunno. I suppose they probably know more about it than I do.
 

ADM-86

Distinguished
Sep 11, 2006
164
0
18,680
:? I know what you mean, but yeah, I'm gonna agree with sandman.

I really know how you feel :x . ATI is taking its time to release the R600, and time is what they don't have right now, but I predict that ATI is gonna come out with a card that beats the 8800's performance by around 10-25%, and maybe, MAYBE, they'll come out with a physics technology that lets us use another card for physics.

It's gonna be good for ATI since they'll have the performance lead again (I don't know for how long), but again, they are losing a lot of buyers to Nvidia by taking their time.

So we are gonna see how it goes. Time will tell :roll:



____________________________________________________________
CPU Type - DualCore Intel Pentium D 820
Motherboard Chipset - Intel Lakeport i945P
System Memory - 1024 MB (DDR2-533 DDR2 SDRAM)
Video Adapter - NVIDIA GeForce 7800 GT (256 MB)
Sound Card - SB Audigy 2 ZS Audio
Monitor - Dell 1905FP 19" LCD
Disk Drive - (160 GB, 7200 RPM, SATA-II)
Optical Drive - PHILIPS DVD+-RW DVD8701 (DVD+R9:8x, DVD+RW:16x/8x, DVD-RW:16x/6x, DVD-ROM:16x, CD:48x/32x/48x DVD+RW/DVD-RW)
Optical Drive - TSSTcorp DVD-ROM TS-H352C (16x/48x DVD-ROM)
Keyboard - Microsoft USB Comfort Curve Keyboard 2000 (IntelliType Pro)
Mouse - Logitech G5 Laser Mouse
_____________________________________________________________
 

fidgewinkle

Distinguished
Feb 27, 2007
162
0
18,680
I'm not so sure. They say it's using a many-core architecture, which makes me think it's x86-based processors. And only 16 cores on a chip!?

They took a very general-purpose architecture and are trying to mould it into a graphics engine. This means they're paying latency, power and die area for x86 decode. What a waste! And how can 16 cores perform 16x better than an 8800? An 8800 has 128 specialized cores running at several hundred MHz, within a specialized connection fabric. It's a stream processor.

Most graphics computations are 4-vector based, because they're working with transform matrices, point vectors and RGBA pixel values. Trying to do more work than one or two 4-vector operations per cycle in a generalized manner seems pretty complicated, but that's what they would have to do in order to compete. Each individual core would need a lot of FP processing power. I'm guessing it's also going to be running at a pretty high clock speed.

I dunno. I suppose they probably know more about it than I do.

Intel has been working on an 80-core vector processing project that can be applied to problems such as graphics processing. They aren't going to use a general-purpose x86 processor for graphics. Processor architecture is moving more toward a managed system of specialized processing cores. Don't be surprised when your video processing moves onto the motherboard and is directly managed in concert with the CPU.
 
Hasn't this been covered about 50 times already?

You'd better hope ATI/AMD survive, because if Intel gained a monopoly, do you even have a clue what Intel's prices would go to?

I guarantee it won't be good!!!!! :roll:
 

weuntouchable

Distinguished
Jan 26, 2007
23
0
18,510
What benefits are there for AMD/ATI to rush the release of a DX10 card? There are no DX10 games, and the high-end video card segment accounts for less than 5% of their sales. Nvidia may own the current bragging rights, but that does little to boost their sales. Midrange cards are where the money is. Right now AMD/ATI is doing just fine in that market segment.
 

celewign

Distinguished
Sep 23, 2006
1,154
0
19,280
Perhaps you haven't noticed, but the $hit Intel makes as graphics chips is only DX9 as a technicality. They SUX. I'm sure DX will love having 32MB of shared system RAM, lol. (Sorry for the rampant exaggerations.)
-cm
 

sweetpants

Distinguished
Jul 5, 2006
579
0
18,980
What benefits are there for AMD/ATI to rush the release of a DX10 card? There are no DX10 games, and the high-end video card segment accounts for less than 5% of their sales. Nvidia may own the current bragging rights, but that does little to boost their sales. Midrange cards are where the money is. Right now AMD/ATI is doing just fine in that market segment.

I'm in agreement here.

Though there are rumors running around (started in forums) that AMD is to be bought out by IBM. *shrugs*
 

celewign

Distinguished
Sep 23, 2006
1,154
0
19,280
I think the issue isn't that Nvidia has high-end kick-ass cards, it's that they have DX10 cards. People often don't want high-end, but they do want contemporary.
-cm
 

bourgeoisdude

Distinguished
Dec 15, 2005
1,240
25
19,320
Actually the Intel GMA 965 (aka GMA 3000) is quite capable of playing the latest games, including (brace yourself) OBLIVION: http://www.intel.com/support/graphics/intelg965/sb/CS-023531.htm

Yup, Intel claims that in a future driver release Oblivion will be more than capable of running on this GMA--it'll likely be 11 FPS outdoors at low settings, though (heh)!

We are over-analyzing this. 90% of casual gamers look at DirectX compliance and video memory for their gaming needs, and nothing more. Intel's new DX10 chipset will sell BECAUSE it has the capability, not the speed. We are all intelligent enough to know the difference in this forum, but the reality is that most consumers are completely ignorant of this.

All Intel has to do is advertise the fire out of the DirectX 10 capability, and the ignorant users who had one bad Nvidia card in the past will purchase Intel IGCs when they come out. You'd think this effect would be minuscule, but I've already had dozens of people ask me when the next DirectX 10 card from someone other than Nvidia will come out. Nvidia had the FX5200, a card that Dell advertised as a "gaming card," and therefore most ignorant users assume Nvidia are liars.

Again, innovation means nothing in the end--it all comes down to profits, and if AMD delays its graphics card much longer it will seal its own fate.
 

zenmaster

Splendid
Feb 21, 2006
3,867
0
22,790
Have you read about all of the DX10 issues with NVIDIA?

IMHO, it's a very smart thing to hold off on DX10 cards until all of the issues are resolved. I suspect just as many of the issues are in the Vista code as in the drivers.

And yes, stating that Intel has DX10 coming out is so funny.
Sure, it can play HL2, but only at 320x240 resolution using 16 colors. However, when set as such, the other DX10 features work nicely.
 

celewign

Distinguished
Sep 23, 2006
1,154
0
19,280
Two things:
1: Your link indicates that the game is NOT playable. It's really clear.
2: Chip manufacturers often say a game is "playable". This means nothing. It just means the chip can run the game within the game's settings. My friend has a Radeon 9000 that can run Oblivion, so it is "playable". That means he can run it at a resolution of 640x320 under Oldblivion with no eye candy at 25 FPS. While technically "playable", in reality this is not at all enjoyable to play. Sorry mate. The GMA950, though a powerful integrated solution, sucks at games.

Wiki: http://en.wikipedia.org/wiki/GMA950
It says the GMA950 is beaten by a Radeon Xpress 200. Would you play Oblivion on an X200? Hell, it only has 4 pixel pipes. You have to go back about 4 years or so to find a mainstream card with 4 pipes that was considered acceptable.
-cm
 

bourgeoisdude

Distinguished
Dec 15, 2005
1,240
25
19,320
Have you read about all of the DX10 issues with NVIDIA?

Actually, no, I hadn't. I'm busy and lazy at the moment and don't feel like digging around--you wouldn't happen to have any URLs, would you?

Ah, never mind, I'll just have to look it up later tonight...
 

Anoobis

Splendid
Feb 4, 2006
3,702
0
22,780
For integrated, the current GMA X3000 isn't as bad as everyone touts. From what I can read of it, it appears to use some sort of programmable, unified architecture. Probably not as advanced as nVidia's or what ATI's will be, but nonetheless not bad for integrated.

Link.

This is probably more of a plug to make Intel's latest integrated graphics chipset appear more Vista-capable. The average computer user doesn't know what DirectX is, but they know what Vista is.
 

Retardicus

Distinguished
Aug 21, 2006
49
0
18,530
Intel is not using an x86 processor for graphics.
Don't confuse a core with an EU.
I am not sure if the 16 cores would be 16x better, but better nonetheless.
Also, I think power requirements will be quite low compared to the Nvidia/AMD offerings.

Check this out: a teraflop @ 65-85W
http://news.com.com/Intel+pledges+80+cores+in+five+years/2100-1006_3-6119618.html

The current graphics engines from Intel don't use x86 cores, I know. The EUs in the graphics engine are _completely_ different from an x86 core. There is no confusion there.

However, Larrabee up in Hillsboro is a whole different monster. That is what this article is talking about and that is what I am talking about. Only 16 EUs on a graphics chip at 45 or 32nm would be pretty pathetic. You would be able to fit 128-256 of those EUs on a chip at those manufacturing nodes.

As far as I know, Larrabee is developing a many-core graphics solution that uses x86 cores with a large set of graphics extensions to accelerate graphics and other streaming FP calculations.

Larrabee is a completely different group from the one at Intel that is currently doing graphics.
 

bourgeoisdude

Distinguished
Dec 15, 2005
1,240
25
19,320
Well, I don't have a DirectX 10 card or Vista running, so I may not know the latest, but here is a tidbit.

http://crave.cnet.com/8301-1_105-9684886-1.html?tag=head

As of a few weeks ago, NVIDIA still did not have working DirectX 10 drivers. Hence, while they had the world's only DirectX 10 cards, there were no well-functioning DirectX 10 cards yet.

Aha... okay, yes, I have heard these stupid, senseless complaints. For one, there is no reason to have "fully DirectX 10 compliant drivers" when nothing in the universe uses them yet--if the CARD ITSELF SUPPORTS DIRECTX 10, those stupid lawsuits should be shoved down the whiners' throats.

Sorry, but whining about technology that hasn't even come out yet is stupid, and for some odd reason it is a huge pet peeve of mine (you likely noticed that from my harsh statements...). The cards support DirectX 10; the drivers just aren't there yet. For God's sake, give it a rest, people (speaking to the nvidiaclassaction.org people).
 

Retardicus

Distinguished
Aug 21, 2006
49
0
18,530
Why can't one core have more than one EU?

Depends on the architecture. A regular, non-superscalar in-order core can't really make use of multiple execution units for, say, FP or vector operations.

A superscalar in-order core would be able to make use of multiple execution units, though statically scheduling instructions in the compiler becomes increasingly difficult as you add more units.

A superscalar out-of-order core (like Core 2 Duo) would be much better at scheduling instructions and would make better use of its available execution units (integer, load/store, branch, FP, vector).

Another option is to increase the width of your vector unit from, say, 4 to something like 16. That increases your overall throughput; however, your data or algorithms may not be so conducive to 16-word vector operations.
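
To show what I mean about vector width, here's a rough C sketch (a made-up example of my own, not anything Intel has shown): the wider the unit, the more leftover elements get stranded in the scalar tail loop whenever the data size isn't a multiple of the width, and short arrays barely use the vector loop at all.

#include <stddef.h>

/* Scale an array by k, processing VECTOR_WIDTH elements per "vector"
   iteration and mopping up the remainder with a scalar tail loop.
   At width 4 the tail is at most 3 elements; at width 16 it can be 15. */
#define VECTOR_WIDTH 4

void scale(float *data, size_t n, float k)
{
    size_t i = 0;

    /* "vector" loop: one iteration handles VECTOR_WIDTH elements */
    for (; i + VECTOR_WIDTH <= n; i += VECTOR_WIDTH)
        for (size_t j = 0; j < VECTOR_WIDTH; ++j)
            data[i + j] *= k;

    /* scalar tail: elements that don't fill a whole vector */
    for (; i < n; ++i)
        data[i] *= k;
}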

Either way, x86 decode + superscalar + out-of-order execution costs a lot of power and die area. That's a cost you probably don't want to pay if you want to do as many FP calculations as possible per watt and per dollar.
 

enewmen

Distinguished
Mar 6, 2005
2,249
5
19,815
I don't know if it is AMD's fault or if they just woke up Intel.
Looks like Intel is planning on making the AMD/ATI thing a waste of money.

http://www.ntcompatible.com/Intel_Discrete_GPUs_Roadmap_Overview_s81073.html
Do you think that within this decade onboard graphics will have 16x the speed & memory of the 8800 GTX? That's about when Fusion appears.
 

flasher702

Distinguished
Jul 7, 2006
661
0
18,980
What makes you think DX10 is important yet? What DX10 application is grabbing huge market share and compelling people to upgrade?

...what DX10 applications are there at all?

Chill. DX10 doesn't matter. It will continue to not matter for about another year or more. It seriously might not take off at all, since it practically requires an entirely new system to run DX10 games in Vista if your system is more than a year old, and most gamers don't do full system upgrades every year. More like every 3 years.