
CPU & GPU - should they be hybridized or stay separate???

July 21, 2009 3:56:42 PM

Hi all, I am developing a new project to compete with Intel's Larrabee GPU project. I'm currently still studying at university, but I keep thinking about researching new tech for the future. That said, I am not working with any semiconductor company yet.

I'm also designing the blueprints for the next-phase GPU (only 50% complete).

What has crossed my mind is: should a CPU and GPU be hybridized together??? Any comments will be appreciated. :)


My Current build:

Asus Striker II formula

8GB (4 x 2GB sticks at 1066MHz) of CORSAIR TWIN2X4096-8500C5DF

3 x XFX NVIDIA GeForce GTX 260, 448-bit, 896MB GDDR3

5 x 500GB Seagate Barracuda 7200.11 32MB NCQ, for 2.5TB of storage

2 x LG DVD±RW in PATA and another in SATA, making 3 DVD burners

COOLER MASTER HAF RC-932 Full Tower Case

CORSAIR HX1000watts - Modular Cables

ZALMAN CNPS9700NT Heatsink Fan

MICROSOFT Windows Vista Ultimate SP1
July 21, 2009 4:37:32 PM

I don't see how CPU/GPU hybridizing is avoidable. CPUs just can't get it done for graphics applications, which are primarily parallel, where the GPU reigns supreme.
Problem is, most graphics encoding isn't purely parallel, and contains many serial strings, where the CPU reigns supreme.
Having a hybrid doing both is the way to go. While today we see CPUs as generally being good enough for the average Joe, Average Joe is missing out on many things visual, simply because of CPU limitations and GPGPU's infancy.
Combining the two eliminates the need for special programs that cost more money, or for having both a Stream- or CUDA-capable GPU and a decent CPU in the same computer. As computation needs go, the market has reached somewhat of a plateau, whereas the visual opportunities are still fairly young, not matured, and have much wider growth potential. I'm all for it, obviously.
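
To illustrate the parallel-vs-serial split described above, here is a minimal C sketch (mine, not the poster's; the function names and numbers are purely illustrative): a per-pixel loop has no dependency between iterations and maps naturally onto a GPU's many cores, while a dependency chain forces one-step-at-a-time execution where a single fast CPU core wins.

```c
#include <stdio.h>

/* Data-parallel work: every pixel is independent, so these iterations
 * could run on thousands of GPU threads at once. */
static void shade_pixels(float *out, const float *in, int n)
{
    for (int i = 0; i < n; i++)         /* no dependency between iterations */
        out[i] = in[i] * 0.8f + 0.1f;   /* toy "shading" operation */
}

/* Serial work: each step depends on the previous result, so extra
 * cores don't help -- a single high-clocked CPU core finishes first. */
static float accumulate_chain(const float *in, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc = acc * 0.5f + in[i];       /* depends on the previous value of acc */
    return acc;
}

int main(void)
{
    float in[4] = { 1, 2, 3, 4 }, out[4];
    shade_pixels(out, in, 4);
    printf("parallel-friendly result: %f\n", out[0]);
    printf("serial-chain result:      %f\n", accumulate_chain(in, 4));
    return 0;
}
```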
July 21, 2009 4:54:32 PM

We will soon see the joining of CPU and GPU, but this will be aimed at the value and perhaps business markets, where powerful graphics aren't needed.

Gamers will still be buying video cards for years to come.
July 21, 2009 5:04:45 PM

Hmmm.... I agree with you, but I've read a lot of magazines about CPUs, GPUs and computing, and what they all say is that the future of computing is dark!!!

With that kind of point, I do not agree.

Let's look at an example: last year, George Lucas suggested computing would do badly in the next few years (as the economic crisis slowly began) when his game, Star Wars: The Force Unleashed, launched in Q1 2008 only on console platforms.

Two quarters later, they announced the new game, Star Wars: The Old Republic, for the PC platform before the consoles.

What we can summarize is that the future of computing is still in sight, with the semiconductor companies still pushing on.

For now, I'm still working on the thermal heat at the GPU surface. What I'm trying to offer the market is high performance at low prices, just like the rivalry between Intel and AMD.
July 21, 2009 5:06:56 PM

The Intel solution that's coming soon isn't a true hybrid. LRB is somewhat closer, actually much closer, as the G300 will be as well. One chip doing it all? Not yet, but an IGP on-die isn't anything we don't have now, other than a faster connection/link.
July 21, 2009 5:32:04 PM

I'm just posting some questions for you guys to give me some ideas; I hope the answers will be appropriate or even overwhelming (e.g. overclocking, liquid nitrogen, or something related).

1. Should a bus (in hardware terms)
a. increase its core clock and decrease its RAM clock, or
b. increase its RAM clock and decrease its core clock?

2. Should a CPU
a. increase its frequency by lowering its core (base) clock and raising its multiplier, or
b. increase its frequency by raising its core (base) clock and lowering its multiplier?

Which is more stable for Question 2??? (See the sketch after this list.)

3. Should the GPU concept be introduced to low-profile users (users running below 2.0GHz)? If yes or no, please write your point of view.

4. Should GPUs introduce power-controlled phases? If yes or no, please write your point of view.

5. Should the GPU market be flexible in hardware terms (e.g. spare parts that can be swapped)? If yes or no, please write your point of view.

6. What is more important: price vs. performance, or power usage vs. performance? Please give comments.

7. Finally, what do you think about the future of computing? Please give any comments.

It's only these 7 questions; your answers will help me improve my research.
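
To make Question 2 concrete, here is a small worked example in C (my own illustrative numbers, not from this thread): the CPU's final frequency is just the base/bus clock times the multiplier, so the same 3.2GHz target can be reached with a low bus clock and high multiplier (option a) or a high bus clock and low multiplier (option b); which of the two is more stable depends on the platform and memory.

```c
#include <stdio.h>

/* CPU core frequency is simply the base (bus) clock times the multiplier. */
static double cpu_freq_mhz(double base_clock_mhz, double multiplier)
{
    return base_clock_mhz * multiplier;
}

int main(void)
{
    /* Option a: low bus clock, high multiplier */
    printf("a: %.0f MHz\n", cpu_freq_mhz(200.0, 16.0));  /* 3200 MHz */

    /* Option b: high bus clock, low multiplier */
    printf("b: %.0f MHz\n", cpu_freq_mhz(400.0,  8.0));  /* 3200 MHz */
    return 0;
}
```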

Thank you for all your cooperation. Just in case, you can email me at:

Nathan_8756@yahoo.com or
Nathan.12.12.14.12@gmail.com

I live in Penang, Malaysia, where Intel's and AMD's processors get their final assembly and packaging, but I don't know where the heck the factory is..... lol.
July 21, 2009 7:18:49 PM

The future is going to go like this:
1. CPU and GPU come together on the same die (probably initially through an MCM approach, but then native)
2. We begin making the CPU & GPU really the same thing, by making the GPU more general, though still the king of parallelism. AKA heterogeneous cores: some very serial, some very parallel.
3. Skynet.
July 21, 2009 7:30:29 PM

3. Skynet
4. Drums
July 21, 2009 7:36:00 PM

5. Smoke signals (have to reinvent fire first)
July 22, 2009 4:06:05 AM

Perhaps we should not make the same mistake again.... no drums. And no banjo.
July 22, 2009 12:16:14 PM

When Nvidia can make a chip that does not die within a couple of years (I just had to reflow my 8800GTX yet again), then maybe the hybrid will be a good idea.

Until then I rather like the fact that if one component dies I can replace it relatively easily, or with something cheap, just to get things running again in the meantime.

Remember all the problems Nvidia had with the 6-series mobile GPUs overheating and then dying due to poor internal solder? And now it seems that after roughly 2 years of use the G80 series is having much the same problem.
(Not picking on Nvidia, I've seen the same problems with ATI over the years, plus I have an Nvidia card atm.)

When oversights like a simple quality manufacturing process are sorted, then I'll consider a hybrid GPU/CPU a good idea :)


Oh, and 6: Cave paintings / pictographs haha.
July 22, 2009 12:31:18 PM

habitat87 said:
Also, I don't know if it's just me but the newer hardware tends to run really hot already considering the cooling solutions got a lot better. I mean, what were they thinking... "Hey, the laptop battery fire incident was exciting, let's think of a way to do something like that again!"

Gamers need discrete graphics cards, servers run hot enough, and a normal user won't care. This only complicates things a lot. Who thought of such an unproductive idea?


LOL, yup. That's why I think the integrated CPU/GPU will be for non-graphics-intensive systems, such as low-end PCs and business PCs where the business is just spreadsheets and email, and it may also be a great option for netbooks.

Gamers, for the foreseeable future, will need video cards.
July 22, 2009 12:55:14 PM

TechnologyCoordinator said:
LOL, yup. That's why I think the integrated CPU/GPU will be for non-graphics-intensive systems, such as low-end PCs and business PCs where the business is just spreadsheets and email, and it may also be a great option for netbooks.

Gamers, for the foreseeable future, will need video cards.

Or, to put it more simply, just more of the same crap from Intel
July 22, 2009 2:15:17 PM

If the sweet spot for low-profile users is the GPU core, why don't NVIDIA and ATI take full advantage of it? Instead, Intel is very comfortable putting its low-profile graphics into netbooks and low-profile notebooks.

The reason is that not many people know there is such a "GPU thing" inside their computer, except for those who already know. They think Intel is the only manufacturer, and even worse, they mix up the chipset manufacturer with the computer manufacturer.

For example, my uncle: when he sees a computer, which does he name?

a. The brand of that computer, or
b. The manufacturer of the chipset in that computer

If you guessed b., you are correct. As I said, some people do mix up the chipset manufacturer with the computer manufacturer. They think Intel is the computer maker, not just a brand inside it.

For me, the first thing to do while promoting my GPU card would be to educate users in the market. At least this would let them know what a GPU is and that its function is rendering graphics.

If (only if) this kind of method works, why do semiconductor companies like Intel and AMD never do it, since those companies have public relations people on their teams?
July 22, 2009 3:00:43 PM

nVidia has done a lot toward this very thing, using CUDA as the catalyst. They're trying to get people thinking in a more graphically inclined way, as, truly, that's where the real growth is.
CPUs have reached their utmost usefulness for the average user; now it's time for eye candy. And before someone says anything, how many threads have we seen with comments such as "current CPUs are fast enough"? Whereas even with few challenging games out right now, in the graphics forum you never hear that, unless it's because they're complaining about weak games. And, even more importantly, Intel is investing tons of money in Larrabee, so before anyone thinks CPUs haven't plateaued, ask why Intel is working so diligently on Larrabee. It's simply because that's where we're heading, and that's where the best growth really is.
July 24, 2009 9:56:30 PM

I can't believe no one commented on the heterogeneous cores idea....
July 25, 2009 6:27:19 AM

I would've, but I took this post off.
July 27, 2009 3:29:15 AM

Because some people don't like those of us who are classified as "enthusiasts" in the PC world.
July 27, 2009 12:26:30 PM

Personally, I don't see the hybrid model ever going beyond AGP-era performance.

CPUs operate on a small bus width (64-bit, if that) and limited cache, but with a high clock speed to compensate. So even if only a limited amount of data can be transferred at one time, the higher clock rate balances out the limited bus width.

GPUs operate at a lower clock speed (~900MHz on average), but have a much larger data bus (256- or 512-bit) and a much, much larger pool of local memory (512MB+ of video RAM these days). And thanks to PCI-E's design, even these can continue to be bumped up with relative ease.

I simply do not think that CPU architecture, as it exists now, could ever handle the load of calculating graphics at any reasonable speed (thanks in part to heat issues beyond 3GHz) without bottlenecking the rest of the system in the process. Likewise, I don't believe the PCI-E lanes could ever be pumped up enough to provide the speeds that current CPU architecture requires. Hence, the status quo.
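
A rough C sketch of the bandwidth arithmetic behind this argument (my own illustrative figures, loosely based on the DDR2-1066 memory and 448-bit GTX 260s mentioned earlier in the thread, not numbers from this post): peak bandwidth is roughly bus width in bytes times effective transfer rate, so a narrow, fast CPU memory bus and a wide GPU memory bus end up in very different places.

```c
#include <stdio.h>

/* Peak bandwidth in GB/s: (bus width in bits / 8) x effective rate in GT/s. */
static double bandwidth_gbs(int bus_width_bits, double effective_rate_gts)
{
    return (bus_width_bits / 8.0) * effective_rate_gts;
}

int main(void)
{
    /* CPU-style bus: 128-bit dual channel DDR2-1066 (~1.066 GT/s) */
    printf("CPU memory bus: ~%.1f GB/s\n", bandwidth_gbs(128, 1.066));

    /* GPU-style bus: 448-bit GDDR3 at ~2.0 GT/s effective */
    printf("GPU memory bus: ~%.1f GB/s\n", bandwidth_gbs(448, 2.0));
    return 0;
}
```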
July 27, 2009 2:02:35 PM

At the end of the world.

We will see more stuff become integrated.

Remember the days of the discrete audio chip, NIC and CPU cache?
July 27, 2009 2:41:13 PM

Current CPU speeds are being held back by the non-development of smaller cores. The focus is no longer there. It's those many mini cores where it gets interesting, like LRB. If the SW implementation does well on LRB, that may well be our future, and forget these old clunking quads and hex cores.
At some point, it's feasible to see sections of a multi-chip package working with the OS only, with other areas dedicated to HW, more in a RISC fashion, as they're already doing a little of this with uops and macros. By doing this, it keeps legacy alive while committing little to it, and dedicates the majority to where it's really needed, while also bringing video capability along with it.
July 29, 2009 2:04:05 AM

I think they really need to steer their following, if you will, in a new direction. I seriously don't see the benefits of tri-channel on the desktop. I think they need to get it out of people's heads that i7 tri-channel is the best thing out.
Look at my thread on i5 vs i7, "feeling ripped off": over 10,000 hits. Why? I haven't been in there pounding a drum. People are simply smart enough to know that i7 benefits them little compared to a comparably powered dual-channel solution, which will come in cheaper and use less power.
As for links, these are simply discussions that are getting louder and louder, and more frequent. There are a few links, but until either AMD or Intel moves away from legacy, or creates a scenario like I mentioned above, it's still just talk.
LRB could be that bridge, or something like it. But it also has a lot of failings, which is why I said have a dedicated core or two for legacy, as that'll be fixed-function, dedicated solely to that purpose. It's why we're going to see video playback decoders on LRB: simply because fixed-function hardware is the way to go, not just more x86 cores.
Unless SW takes a mega leap in ability, and W7 has a miracle hiding in it somewhere, with all these cores thrown at us it will become very apparent very soon that we need to diversify our current approach; no amount of cores will solve it, but many types will, and they can still be put on die.
The problem with all this is that it does a disservice to us, the enthusiasts. We may soon find a time when switching something out for something better is a thing of the past.
July 29, 2009 3:28:03 AM

Graphics and CPUs shouldn't be hybridized; changing out parts would become pretty expensive. The graphics would be part of the CPU, which means if one dies the other dies too, and getting a new one would cost an arm and a leg. Motherboards would be a burden on people's wallets, and they would have to get a new computer for those few parts that died.

It would be very impractical to combine GPU and CPU.
July 29, 2009 3:54:13 AM

Wait until i5. With i7, you're buying something you most likely will never use: tri-channel bandwidth.
July 29, 2009 4:10:58 AM

gamerk316 said:
Personally, I don't see the hybrid model ever going beyond AGP-era performance.

CPUs operate on a small bus width (64-bit, if that) and limited cache, but with a high clock speed to compensate. So even if only a limited amount of data can be transferred at one time, the higher clock rate balances out the limited bus width.

Quick note - modern CPUs have much more than a 64-bit memory bus. i7 has a 192-bit memory bus, and all dual-channel IMC-based CPUs (i5 plus anything AMD) have a 128-bit bus. It's still a lot less than the GPUs, but far better than a 64-bit bus would be.
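
For anyone wondering where those figures come from, here is a tiny C sketch added for illustration (not from the post): each DDR memory channel is 64 bits wide, so the total memory bus width is just 64 times the channel count.

```c
#include <stdio.h>

/* Each DDR memory channel is 64 bits wide. */
static int memory_bus_bits(int channels)
{
    return 64 * channels;
}

int main(void)
{
    printf("dual channel:   %d-bit\n", memory_bus_bits(2));  /* 128-bit (i5, AMD) */
    printf("triple channel: %d-bit\n", memory_bus_bits(3));  /* 192-bit (i7)      */
    return 0;
}
```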
July 29, 2009 4:13:54 AM

I think LRB is 512-bit; I need to check on that.
July 29, 2009 10:14:06 AM

They say it overclocks well. It's an early chip, by the look of it bare. The mobo was a stripped-down version, so bad drivers most likely. If it's real, it's a great early showing under the circumstances.
I didn't read all the specs, but this isn't the highest-end model either; that's 3.4GHz with turbo, and it should be close to 4GHz without touching a thing.
July 29, 2009 10:47:24 AM

At the time, and I don't recall it exactly, but at that early point I didn't know they'd have HT on the duals. That fact alone is very enticing. Plus, it looks to have maybe a 200-300MHz lead in top clocks vs the 8xxx series, and about an 8-9% IPC advantage.
July 30, 2009 4:01:11 AM

Remember, these are the little brothers of the i5. Those will give a better kick. The article, which was loosely translated, said something about not having a full mobo, so it's most likely running a reference/beta BIOS, and clocks and overall performance should be better in retail.
July 30, 2009 7:38:13 AM

All i3s and i5s will fit the same socket, as will the Lynnfields, though only one of those is listed as i5 (it has no HT); the rest are i7 and have HT, but all on the same socket. Intel sort of stretched it here, putting Lynnfield on one socket but naming some of them i7s.
July 30, 2009 10:50:04 AM

habitat87 said:
When you think about it, why not just make integrated graphics better? Don't they realize how terrible they make those things?

Seriously, they can't even get multi-core CPUs to work correctly, and now they want to complicate things even more by adding a GPU/CPU combination?

Imagine if they made a single-core 45nm chip with the specifications of, let's say, a second-rate CPU of today. Those performance charts would look like a disaster.

As for the dual-core i3/i5 CPUs that are supposedly going to be released later, they are either going to make them crappy, or they are going to regret releasing them.


No, that'd cannibalize their low-end GPU market (think HD 2400 Pro...)

Fusing the CPU and GPU would increase data speeds, but for IGP-class GPUs, you have to wonder why bother.
July 30, 2009 10:52:23 AM

You gotta hate having more than one socket...

I know about the extra cost of that HT lane, but wouldn't it be nice if they let you put 2-HT-lane CPUs in 3-HT-lane mobos and vice versa, and, while they're at it, made a universal socket for motherboards... I hate being locked to AMD or Intel.
July 30, 2009 11:34:31 AM

The HD 2400 Pro was good for its time.

I'm not so up to date with IGPs nowadays, but the HD 2400 Pro still beats the Intel X4500.
July 31, 2009 9:58:44 AM

Just using it as an example...

At the time of its introduction, it was good.

Much like the GMA 950 in its day.
July 31, 2009 2:56:41 PM

They forgot the HD 3200 and 3300
August 1, 2009 3:03:23 AM

I really wish they'd lift the NDA on the 785G chipset; if they make a DDR2 version, you can be sure I'll be the first in line for one :)