When's K10 coming out?

April 23, 2006 1:26:55 PM

Sounds stupid, but I just want to know, because it means there'll be better performance.

The 'reverse hyperthreading' thing is also cool.

Take a look at this:

Quad-core CPUs have the potential to eliminate the GPU (graphics processing unit) by dedicating one core to graphics processing, along with dedicated RAM. Current graphics cards would be unable to compete with a dedicated GPU inside the CPU, because the latency of PCI Express and the latency inside a CPU are incomparable.


http://www.answers.com/quad%20core

I got it from there.

So by the time K10 is released there'll be no need for GPUs. Some of you might say that won't have happened by then, but the CPU might at least be able to take some of the graphics load off the GPU.


April 23, 2006 2:02:15 PM

CPUs will never come close to a GPU's performance.
April 23, 2006 2:06:17 PM

The latest from AMD on this, though I'm not sure which quarter, is that we'll more than likely see quad cores from them in 2007. To me, just having dual cores is awesome. I can't imagine having a quad-core processor.
April 23, 2006 2:42:09 PM

Quote:
The latest from AMD on this, though I'm not sure which quarter, is that we'll more than likely see quad cores from them in 2007. To me, just having dual cores is awesome. I can't imagine having a quad-core processor.
I agree with you, Luminaris, dual cores are awesome, but why can't AMD release quad cores now at 90 nm? (Now time for a quote out of nowhere:)
Quote:
Gentlemen, we can rebuild him. We have the technology. We have the capability to make the world's first bionic man. Steve Austin will be that man. Better than he was before. Better... stronger... faster.
April 23, 2006 3:31:45 PM

Quote:
Quad-core CPUs have the potential to eliminate the GPU (graphics processing unit) by dedicating one core to graphics processing, along with dedicated RAM. Current graphics cards would be unable to compete with a dedicated GPU inside the CPU, because the latency of PCI Express and the latency inside a CPU are incomparable.
The guy who wrote that has no idea what he's talking about; a CPU is nowhere near performing as well as a dedicated GPU.
April 23, 2006 3:39:03 PM

Pound for pound, a GPU beats the crap out of a CPU at graphics work. Tell me: can an FX-60 make Half-Life 2 what it should be with onboard graphics?
April 23, 2006 3:45:49 PM

I guess dual cores also "have the potential to eliminate the GPU" by having one core dedicated to graphics processing. XD
April 23, 2006 3:46:13 PM

Quote:
Quad-core CPUs have the potential to eliminate the GPU (graphics processing unit) by dedicating one core to graphics processing, along with dedicated RAM. Current graphics cards would be unable to compete with a dedicated GPU inside the CPU, because the latency of PCI Express and the latency inside a CPU are incomparable.

http://www.answers.com/quad%20core


Is latency really such a big deal for graphics cards? My impression is that, unlike a memory bus, graphics data over PCIe is basically a one-way stream from the CPU to the GPU, and a little extra latency doesn't really hurt; it just takes a few extra nanoseconds (literally) to get the picture out. If that's an accurate assessment, then dedicating one core to GPU duties won't be as good as having a dedicated GPU on a PCIe card, because the GPU is specifically optimized for processing graphics, not general-purpose computing. Of course, the core has more RAM at its disposal, but are graphics cards really hurting for memory at the moment? Please correct me if I am wrong, since I would like to learn too!
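
Here's a rough back-of-envelope version of that point in Python. Every number in it is an assumption, not a measurement: even if every command submission paid a full bus traversal serially, the latency cost would be a small slice of a 60 fps frame budget.
Code:
# Does PCIe latency matter per frame? All figures assumed, not measured.
PCIE_LATENCY_S = 500e-9     # ~500 ns to cross PCI Express (assumed)
COMMANDS_PER_FRAME = 1000   # command submissions per frame (assumed)
FRAME_BUDGET_S = 1 / 60     # 60 frames per second target

# Pessimistic model: every command pays the full latency, one after another.
latency_cost = PCIE_LATENCY_S * COMMANDS_PER_FRAME

print(f"latency cost per frame: {latency_cost * 1e6:.0f} us")
print(f"frame budget:           {FRAME_BUDGET_S * 1e6:.0f} us")
print(f"share of frame budget:  {latency_cost / FRAME_BUDGET_S:.1%}")
# ~500 us out of ~16,667 us, about 3%, and real submissions are pipelined
# and batched, so the true cost is even lower.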
April 23, 2006 6:14:16 PM

Quote:
The latest from AMD on this, though I'm not sure which quarter, is that we'll more than likely see quad cores from them in 2007. To me, just having dual cores is awesome. I can't imagine having a quad-core processor.
I agree with you, Luminaris, dual cores are awesome, but why can't AMD release quad cores now at 90 nm? (Now time for a quote out of nowhere:)
Quote:
Gentlemen, we can rebuild him. We have the technology. We have the capability to make the world's first bionic man. Steve Austin will be that man. Better than he was before. Better... stronger... faster.


Perhaps there's not quite enough room on the die at that size? I'm sure AMD could pull it off, but right now they need to transition to 65 nm first, then move to quad core. Gotta get those costs down. They've already got a great architecture, but they can't compete cost-wise with Intel. Just my opinion. :wink:
April 23, 2006 7:29:32 PM

Quote:
The latest from AMD on this, though I'm not sure which quarter, is that we'll more than likely see quad cores from them in 2007. To me, just having dual cores is awesome. I can't imagine having a quad-core processor.
I agree with you, Luminaris, dual cores are awesome, but why can't AMD release quad cores now at 90 nm? (Now time for a quote out of nowhere:)
Quote:
Gentlemen, we can rebuild him. We have the technology. We have the capability to make the world's first bionic man. Steve Austin will be that man. Better than he was before. Better... stronger... faster.


Perhaps there's not quite enough room on the die at that size? I'm sure AMD could pull it off, but right now they need to transition to 65 nm first, then move to quad core. Gotta get those costs down. They've already got a great architecture, but they can't compete cost-wise with Intel. Just my opinion. :wink: I get what you mean, but if they can rebuild a man into a bionic man for six million dollars, why can't they make a quad-core proc for $50? (Now time for another random quote:)
Quote:
He has gone insane!
April 23, 2006 8:19:20 PM

Quote:
Pound for pound, a GPU beats the crap out of a CPU at graphics work. Tell me: can an FX-60 make Half-Life 2 what it should be with onboard graphics?


That's a totally dumb comparison.
Tell me, can you compare a Ferrari to a bus?

No, right?
The difference is multipurpose vs. dedicated.
Hell, even a 10 MHz RISC chip built for descrambling signals beats an FX-60 at that job.

Why? Because the RISC chip was designed for its task, while the FX-60 is a multipurpose CPU.

Same with the GPU: it's designed for graphics, and all its hardware is built for that.

So comparing them is completely pointless.
April 23, 2006 11:38:52 PM

Quote:
Pound for pound, a GPU beats the crap out of a CPU at graphics work. Tell me: can an FX-60 make Half-Life 2 what it should be with onboard graphics?


That's a totally dumb comparison.
Tell me, can you compare a Ferrari to a bus?

No, right?
The difference is multipurpose vs. dedicated.
Hell, even a 10 MHz RISC chip built for descrambling signals beats an FX-60 at that job.

Why? Because the RISC chip was designed for its task, while the FX-60 is a multipurpose CPU.

Same with the GPU: it's designed for graphics, and all its hardware is built for that.

So comparing them is completely pointless. Perhaps you should take the time to think about what I said and realize it is a perfectly relevant comparison. I don't agree that my comparison is "dumb," as you say, but I do agree that when something is created or optimized for a particular task, it will most likely run better there than on something far more powerful that was designed for general tasks. This is clearly apparent in software, i.e., games. Take Half-Life 2, which is optimized for ATi hardware: test two equally specced VPUs, one from ATi and one from nVidia, and the ATi will come out on top. From my own personal experience, ATi seems to have the best visual quality and nVidia the best raw performance.
April 23, 2006 11:59:49 PM

As far as I'm aware, the transistor count of nVidia's and ATI's latest creations is far higher than that of the latest CPUs from AMD and Intel. They are far more sophisticated pieces of hardware.
April 24, 2006 12:30:39 AM

True, but they are limited in their processing ability: they're not as multi-function, nor do they have the mathematical flexibility to compare with CPUs.
April 24, 2006 12:33:20 AM

Quote:
Sounds stupid, but I just want to know, because it means there'll be better performance.

The 'reverse hyperthreading' thing is also cool.

Take a look at this:

Quad-core CPUs have the potential to eliminate the GPU (graphics processing unit) by dedicating one core to graphics processing, along with dedicated RAM. Current graphics cards would be unable to compete with a dedicated GPU inside the CPU, because the latency of PCI Express and the latency inside a CPU are incomparable.


http://www.answers.com/quad%20core

I got it from there.

So by the time K10 is released there'll be no need for GPUs. Some of you might say that won't have happened by then, but the CPU might at least be able to take some of the graphics load off the GPU.


I believe these specifications are more of a wish list. For one, reverse hyperthreading is just a concept being looked at right now; I'm not sure they even know yet whether it can be done. Intel is supposedly researching it as well, from what I've read, but who knows for sure. Anyway, I wouldn't hold my breath on this until we know more.

Now, I have heard talk of an onboard GPU alongside the CPU, but not like you described. I may be wrong, but I thought the onboard GPU was a low-cost option, a selling point for new processors. It isn't currently seen as a replacement for a true GPU, just a cheap replacement for onboard video. A dedicated GPU would destroy such a setup, just as it does now with onboard video. Maybe in the future this will change.

Quad core is coming, but I'm not sure whether we'll see it in 2007 or not. It all depends on how the battle heats up between Intel and AMD. Intel's Conroe chips may change a lot of things if their performance numbers hold up. Please don't turn this statement into another flame war between AMD and Intel.
April 24, 2006 1:59:01 PM

I think the future is a multipurpose core that dispatches data to task-specific coprocessors, like Cell does, but then you'd need one computer for gaming, another for office work, etc. AMD's HyperTransport could help put coprocessors in sockets: one dual-core central processor and two or three coprocessors in other sockets, sharing RAM between them.
April 24, 2006 10:43:54 PM

Quote:
I think future its, multipurpose core to send data and specific coprocesros for tasks like CELL does, but then you need ano computer for gaming, another for oficce, etc. AMD HT could help to put coprocesors in sockets. One dual core central processor and two or tree coprocesors in other socckets sharing ram betwen them.
LEARN TO SPELL! :lol: 
April 24, 2006 11:14:17 PM

I use one of my cores as a GPU; Quake 2 runs like a charm. :lol:
April 25, 2006 4:24:16 AM

Quote:
I use one of my cores as a GPU; Quake 2 runs like a charm. :lol:


And how can you do that? What software or tweak are you using? ... He's joking!
April 25, 2006 4:48:42 AM

I run a dual Opteron 270 rig: two dual-core processors.

With one core rendering 3D effects I get about 5 fps while the other 3 cores keep working.

The article linked is highly inaccurate.

Perhaps if they integrate a GPU core and a CPU core on the same die... but with GPUs heading fast towards 1 billion transistors, I really don't see it happening. A single GPU core is already more complex than most processors, and since that's a relative measure, it almost always will be.

Beyond 2012, perhaps, yes, it will happen. Maybe even in handtop PCs and smaller, low-power PCs first, but on the traditional desktop it's a long way off, and we'll be at 32 cores per processor by then.

There is software that lets OpenGL be rendered by the processor instead of a GPU, by the way.
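
If you want to try it yourself, here's a minimal sketch, assuming a Linux box with Mesa and the glxgears demo installed. LIBGL_ALWAYS_SOFTWARE is Mesa's switch for its software rasterizer; the Python wrapper is just a harness.
Code:
import os
import subprocess

# Tell Mesa to render OpenGL on the CPU instead of the graphics hardware.
env = os.environ.copy()
env["LIBGL_ALWAYS_SOFTWARE"] = "1"

# glxgears prints its frame rate every few seconds; run it with and
# without the variable set to see the CPU-vs-GPU gap for yourself.
subprocess.run(["glxgears"], env=env)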
April 25, 2006 6:11:57 AM

lol, actually you CAN run Quake 2 in software mode :) but that's neither here nor there, I guess :( since I can't even get Unreal Tournament 1 to run under XP (that really sucks) lol
April 25, 2006 7:19:31 AM

I'm looking at this. Some people say it won't work, and so on; it's just an idea. Put five RAM slots on the motherboard: one slot dedicated to two integrated GPUs, using up to 1 GB total (512 MB each), and the rest for the CPU. Here's what I think it would do: it would do away with shared memory, and it would make dual GPUs faster because they're on the motherboard, not on a graphics card. Plus it would help keep the computer cooler, since it helps airflow.
April 25, 2006 7:30:52 AM

Wouldn't work.
April 25, 2006 8:00:57 AM

I think he just described a physics processor built into the chip. Might work.
There really is no reason cores couldn't be built as dedicated-function devices. After all, a core is just a bunch of transistors; since they're built right onto the wafer, there's no reason the design couldn't include special-purpose cores.
I think you have to be looking at that option once you have more than about 8 cores on a single die. The north and south bridges are probably the first things that should go on-die.
April 25, 2006 8:10:51 AM

Really? Sounds more like GPUs using system memory.
April 25, 2006 9:03:52 AM

The Cyrix MediaGX was an integrated chip: a Cyrix 6x86 core plus video plus sound.

It used a dedicated socket (when it wasn't soldered directly onboard).

It had abysmal performance (even by the standards of the time).

Why would you integrate parts that have, as of yet, nothing in common? What would be the gain of making the CPU and the GPU communicate directly when the actual amount of communication between the two is among the lowest in the whole system? Right now, the biggest bus hogs in 3D rendering are...

Texture transfers.
Z- (or W-) buffers.

This is why integrating a GPU into a northbridge is more intelligent: it has short pathways to central RAM and very low latency. Integrating it into the CPU would, at best, let it benefit from the increased performance of an integrated RAM controller like the K8's.

But then you'd still be 'bottlenecked' by actual RAM performance: in current configurations, you need to copy textures from central RAM to graphics RAM before the GPU can use them, which means dedicating looong RAM cycles to copying BIG chunks of data from one chip to the other.

If, somehow, one could rewrite today's graphics methods to load textures directly into GPU RAM, AND destroy the RAM bottleneck, THEN we'd see an advantage to dedicated GPU cores integrated into the CPU.
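
To put some rough numbers on that copy cost (all of them assumptions, picked to be plausible for current hardware):
Code:
# Cost of staging textures through central RAM, per frame.
TEXTURES_MB_PER_FRAME = 64   # texture data streamed per frame (assumed)
SYSTEM_RAM_GBPS = 6.4        # dual-channel DDR400 peak bandwidth, GB/s

bytes_moved = TEXTURES_MB_PER_FRAME * 2 ** 20
copy_time_s = bytes_moved / (SYSTEM_RAM_GBPS * 1e9)

print(f"copy time per frame: {copy_time_s * 1e3:.1f} ms")
print(f"share of a 60 fps frame: {copy_time_s / (1 / 60):.0%}")
# ~10.5 ms of a 16.7 ms frame spent copying alone, which is why textures
# live in dedicated graphics RAM next to the GPU today.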
April 25, 2006 10:57:10 AM

One of the advantages of keeping graphics memory on a dedicated board is the potential for much higher bandwidth.

A lot of high-end GPUs use GDDR3 clocked at 1 GHz+ on a 256-bit interface (twice the width of system RAM); even mid-range models get the higher bandwidth despite a 128-bit interface.

Any latency advantage you gain will be lost to the extreme drop in bandwidth.

Of course, this is all moot if we suddenly start using GDDR3 as system RAM (like the Xbox 360).
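
Rough math on that gap, using assumed clock and width figures and peak theoretical rates only:
Code:
def peak_gbps(effective_clock_mhz, bus_width_bits):
    """Peak transfer rate in GB/s for a given effective clock and bus width."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

# GDDR3 at a 1 GHz clock moves data on both edges: 2 GHz effective, 256-bit bus.
gddr3 = peak_gbps(2000, 256)
# Dual-channel DDR400: 400 MHz effective across two 64-bit channels.
ddr400 = peak_gbps(400, 128)

print(f"GDDR3, 256-bit:      {gddr3:.1f} GB/s")   # ~64 GB/s
print(f"dual-channel DDR400: {ddr400:.1f} GB/s")  # ~6.4 GB/s
print(f"ratio:               {gddr3 / ddr400:.0f}x")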

For the time being, the flexibility of having a separate CPU and GPU is preferable to having to replace the whole lot every time you want to upgrade.

Functions like the northbridge will probably be the first to be fully and successfully integrated into a CPU design, as has already happened with K8.
April 26, 2006 2:40:24 AM

You don't want the K10... K12 will be right around the corner, and that's the one you'll want. Keep waiting...
April 26, 2006 4:31:39 AM

Quote:
You don't want the K10... K12 will be right around the corner, and that's the one you'll want. Keep waiting...
:roll:
April 27, 2006 12:49:34 AM

Quote:
When's K10 coming out? Sounds stupid, but I just want to know.


Right after the K9.
April 27, 2006 9:18:02 AM

That would be obvious, but I think they're skipping K9; I think that's why they're using the name K8L.

It's probably a few years away, though; major architectural updates seem to take forever in the CPU biz.

My take on reverse hyperthreading: imagine an application that uses a number of threads, say 2. That's all fine and dandy on a dual core, but what about the impending multi-core chips? Say I have an 8-core proc (in theory) running 2 apps with 2 threads each. Wouldn't it be nice if I could still run those apps individually, yet take advantage of the 4 unused cores? On a dual-core processor it probably wouldn't make a lot of sense: you would lose the advantage afforded by SMP by turning it back into a single logical processor (even if there are some speed advantages for that one thread). Software development can continue to optimize for multiple threads, while reverse HT makes the most efficient use of your CPU.

Sounds good to me, but if we're waiting till K10, I think we'll probably have 8+ core CPUs by then.
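
Nobody knows how (or whether) reverse HT will actually work, but here's the manual software analogue of what it would have to do automatically: spread one job over the idle cores yourself. Purely a conceptual sketch; real reverse HT would fuse cores under a single instruction stream in hardware, with no help from the programmer.
Code:
from multiprocessing import Pool

def work(chunk):
    # Stand-in for one slice of a heavy, formerly single-threaded loop.
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    N = 10_000_000
    spare_cores = 4  # the 4 idle cores on the hypothetical 8-core chip
    # Deal the iteration space out across the spare cores, then recombine.
    chunks = [range(k, N, spare_cores) for k in range(spare_cores)]
    with Pool(spare_cores) as pool:
        total = sum(pool.map(work, chunks))
    print(total)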
April 27, 2006 12:18:19 PM

OK, let's tweak the idea then, KingGreatYat: dual GPUs on the motherboard, and a fifth memory slot for GDDR3. I think the only bottleneck would be the RAM, but the gain would be no slowdown between card and motherboard.

1. You can buy however much RAM you want.
2. You have two GPUs already set up, so they can share 512 MB of GDDR3, or whatever you can afford.
3. It keeps the CPU's RAM for the CPU.

I know it's a wacky idea, but I think it could work.
April 27, 2006 6:29:26 PM

I have seen talk of a GPU socket on motherboards before, so the idea isn't that far-fetched.

The idea of sharing memory may upset some, but if you think about how much can potentially be wasted on either the CPU or the GPU side, there would be a utilisation advantage to sharing big wodges of fast RAM.

Actually, a GPU socket would allow more flexibility than a separate card, since it would let you upgrade just the RAM.

I still think we won't see integrated GPU/CPU designs in the desktop PC market. These system-on-a-chip designs will probably show up first in low-power systems like notebooks and tablets.
April 27, 2006 11:18:24 PM

Quote:
Sounds stupid, but I just want to know.

http://www.answers.com/quad%20core


Quote:
This entry is from Wikipedia, the leading user-contributed encyclopedia. It may not have been reviewed by professional editors (see full disclaimer)


Most probably, it wasn't.


Cheers!
April 27, 2006 11:43:17 PM

I understand what you are saying.
However, I've NEVER seen anything like mixing core types on one chip, such as 2 CPUs, 1 GPU, and 1 PPU all on one die.
I also still don't know anything about the K10.
April 28, 2006 8:38:33 AM

Anything is possible!!!

I suppose K8 really is already a mixture of two "core types", having both a CPU and a memory controller. You could go even further and say that any CPU with cache is a mixture, since it has both SRAM and logic.

No reason why we couldn't see a more fully integrated design.

No one really knows anything about K10; hell, we don't even know much about K8L!!!!! I have seen one rumour about 'reverse hyperthreading': making a number of physical cores appear as one logical unit.
April 28, 2006 12:16:53 PM

I remember when cache was on the motherboard; then it moved to slots, then onto the chip itself. Then AMD brought the memory controller on-die. What's next? I'd guess Intel will work on an on-die memory controller. We'll see some time soon.