
"Inside AMD: Interview With Processor Firm's CTO"

June 20, 2006 7:40:34 PM

Electronics Weekly has published an interesting interview with AMD's CTO Phil Hester.

Quote:
Electronic News: One of the more interesting ideas to come out of AMD recently was the idea of allowing third parties to develop IP or separate chips that work with your processors. Will third party technology migrate onto the processor itself?

Hester: What we’re doing right now is putting the infrastructure in place to let that sort of migration happen. If you ask specifically which cores and when, that’s something I can’t predict. For a long time, in the PC space, there was a separate floating-point co-processor. In the transition from the 386 and 387 to the 486, enough of the applications were using floating point that you could justify the incremental cost of adding in that silicon capability. A high enough fraction of users needed both pieces, so the economics of that worked. Today if you look at these vertical markets—Java, XML, vector floating-point media processing—those don’t yet justify everyone incurring the cost of that silicon. The right answer today is to enable, with as little incremental cost as possible, these vertical systems to be built. What we want to do is to give a migration path so that when those application accelerators do make sense we’ve got both the systems architecture and the internal co-processor architecture designed to let those co-processors live all the way from being attached on PCI Express or HTX today to potentially being an execution unit on the main die. I’ll speculate, based on history, that at some point that will happen.

Electronic News: So instead of AMD doing battle with Intel, it’s now AMD plus its allies?

Hester: I don’t know if it’s as much doing battle with Intel as it is naturally expanding the capability of these systems. If you talk to the companies that today are building the Java or XML accelerators, what they have to do is build a whole system—both hardware and software—just to deliver that accelerator. That’s typically what we refer to as ‘N plus 1.’ The N is the general-purpose servers that don’t go away. You have this one thing that accelerates particular applications. What you ideally want is a base system that can run the general-purpose applications extremely well, so that you only have to incur the development expense of the accelerator piece, not go build this whole system from the ground up. To me it’s a very efficient way to satisfy what some of our end customers and some of our OEM partners are telling us they want in the way of capabilities for these future systems. That way they can spend their dollars optimizing their systems, not replicating the things that Opteron already does well.

Electronic News: Is it more efficient, though, to have functions on the chip or off the chip?

Hester: If you’ve architected from the ground up, it’s more efficient to do it on chip. The reason is that you have fewer chip crossings and fewer I/Os. Driving across a chip boundary into a printed circuit board is always going to take more power than driving a bus inside a microprocessor. If you look at the system level, some of the absolute power savings may be relatively modest compared with the amount of power the whole memory subsystem uses. Externally, there is always some additional cost in terms of efficiency. That may be relatively minor, but you clearly can do the ultimate optimization when you put it on silicon because you have direct control over all the elements that you need to.
...
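
To make Hester's migration-path point concrete: the whole idea is that software talks to one interface while the accelerator moves from an HTX/PCI Express card today to an on-die execution unit later. Here's a minimal sketch of that 'N plus 1' dispatch pattern in Python — every name in it (probe_accelerator and friends) is made up for illustration, not any real Torrenza API:

Code:
# Hypothetical "N plus 1" dispatch: the base system (N) always works; the
# accelerator (+1) is used when present. Where the accelerator lives (HTX
# card, PCIe card, on-die unit) is hidden behind probe_accelerator().
# All names are invented for illustration; nothing here is a real AMD API.

def probe_accelerator():
    """Return a handle to an XML/Java/vector accelerator if one is present."""
    return None  # this sketch has none; a real probe would enumerate devices

def process_on_cpu(data):
    """General-purpose fallback path: the servers that 'don't go away'."""
    return [x * 2 for x in data]  # stand-in for the actual workload

def process(data):
    accel = probe_accelerator()
    if accel is not None:
        return accel.process(data)  # the '+1' fast path
    return process_on_cpu(data)     # same result, general-purpose silicon

print(process([1, 2, 3]))  # -> [2, 4, 6]

The point of the pattern is that the application keeps calling the same process() whether the accelerator is across a bus or on the die, which is exactly the migration path Hester describes.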
June 20, 2006 7:44:31 PM

Quote:
Electronics Weekly has published an interesting interview with AMD's CTO Phil Hester.
[full interview snipped]


Moo.
June 20, 2006 7:48:25 PM

I want a mini cache socket connected via HyperTransport 3.0 that we can stick something like a 32-128 MB chunk of L3 into.
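
Back-of-envelope on whether an external HT-attached L3 would even pay off — all the latencies below are guesses for illustration, not measurements from any spec:

Code:
# Rough effective-latency estimate for a hypothetical HT-attached L3 cache.
# Both latency numbers are assumptions for illustration, not from any datasheet.
l3_ns = 40.0    # guess: HT link hop + external cache access
dram_ns = 60.0  # guess: DDR2 main memory via the on-die controller
for hit in (0.5, 0.7, 0.9):
    # on a miss you pay the L3 lookup and then still go to DRAM
    avg = hit * l3_ns + (1 - hit) * (l3_ns + dram_ns)
    print(f"hit rate {hit:.0%}: {avg:.0f} ns average vs {dram_ns:.0f} ns without the cache")

With those guesses it only clearly wins at high hit rates, which is exactly why you'd want it big — hence the 32-128 MB.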
June 20, 2006 7:49:10 PM

Quote:
Electronics Weekly has published an interesting interview with AMD's CTO Phil Hester.
[full interview snipped]


Moo.



It's not his fault Intel isn't making any news
June 20, 2006 7:51:18 PM

Quote:
Electronics Weekly has published an interesting interview with AMD's CTO Phil Hester.
[full interview snipped]


Moo.



It's not his fault Intel isn't making any news

Moo?
June 20, 2006 8:02:14 PM

OK
June 20, 2006 8:10:32 PM

[Lil Jon] WHAT [/Lil Jon]
June 20, 2006 8:21:34 PM

So it's dual-graphics cards for the graphics industry (1990s tech) and apparently co-processors for the CPU industry (1990s tech).

Why are we moving backwards? Is the PhysX processor really the best new idea anyone can come up with?
June 20, 2006 8:30:42 PM

Quote:
So it's dual-graphics cards for the graphics industry (1990s tech) and apparently co-processors for the CPU industry (1990s tech).

Why are we moving backwards? Is the PhysX processor really the best new idea anyone can come up with?



Actually, I was trying to get funding for an HTX memory board: just a board for a 4U case that would only hold RAM. Imagine 16 banks of 4 DDR2 sockets.
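
For scale, that's 64 DIMM slots; the DIMM sizes below are just assumptions based on what was common for DDR2:

Code:
# Capacity math for the hypothetical HTX RAM-only board: 16 banks x 4 sockets.
banks, sockets_per_bank = 16, 4
slots = banks * sockets_per_bank  # 64 DIMM slots
for dimm_gb in (1, 2, 4):         # assumed DDR2 DIMM sizes, circa 2006
    print(f"{slots} slots x {dimm_gb} GB = {slots * dimm_gb} GB total")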
June 20, 2006 11:57:26 PM

Actually, an interesting interview. Care to comment?


Cheers!
June 21, 2006 12:25:57 AM

Quote:
Actually, an interesting interview. Care to comment?


Cheers!


What I liked most is AMD's focus in their upcoming architecture, which will "fuse" the offerings from the server and mobile design teams. Coprocessors will be a reality sooner rather than later thanks to "Torrenza", and this will widen the gap even more for AMD, since Intel won't be able to touch them in the multisocket segment (not even in 2-way servers, as some tend to believe).

One thing I missed from this interview is anything about AMD's expansion plans. There's still a rumor that AMD is planning to build a $3 billion fab in NY.
June 21, 2006 12:32:32 AM

Quote:
Electronics Weekly has published an interesting interview with AMD's CTO Phil Hester.
[full interview snipped]

Moo?


Mew?
June 21, 2006 1:31:54 AM

Quote:
A number of folks want really good technology for their family, but they want it to be more affordable. There's a set of hardware technologies we need to build into chips in the future to allow alternate business models that can make these PCs more affordable. It's not that you're going to put lower-end technology in it. It's technology that would allow subscription models, advertising subsidies, and pay-per-use models.


I don't know how much I like this idea. Subscription models, pay-per-use? With a little manipulation, a company could easily abuse this model. Over time, use fees could add up to negate any savings the consumer received when they bought the machine.

Plus, advertising is rampant enough as it is, even in games now (cough, NFS, cough). The last thing I want my AMD core to do is interrupt me with a KFC ad while I'm searching for pr0n.
June 21, 2006 5:09:55 AM

Quote:
Electronics Weekly has published an interesting interview with AMD's CTO Phil Hester.
[full interview snipped]

What?
June 22, 2006 12:16:12 AM

Quote:
Electronics Weekly has published an interesting interview with AMD's CTO Phil Hester.
[full interview snipped]

What?

Word.