Do you think AMD will create a 4x4 version of Fusion?

December 22, 2006 8:47:58 AM

Hello,

99% of us agree that AMD 4x4 systems are a complete waste of money, but could AMD convince us otherwise by releasing a 4x4 version of Fusion?

Let me make it clearer: they could use their Torrenza technology to make a GPU that slots into Socket F, bundle it with a CPU, and create different SKUs by varying the specs of either the CPU or the GPU. Or they could create a Fusion processor usable in a 4x4 system, so that when you place two on a motherboard, you effectively get a CPU and a GPU. Memory bandwidth might be a problem, but if they solve it, would it be cost-effective? Please don't flame me; I'm just asking a question out of curiosity.

Thanks
December 22, 2006 11:31:46 AM

I won't flame you; I don't bite.
I can see your point, though, as it is theoretically possible to have a GPU+CPU configuration on a 4x4 board. Most would agree that the 4x4 platform is a flop at the moment, and this would breathe some life into it. However, there are a few problems:

Firstly, if they did put a GPU in one of the sockets, memory bandwidth and latency would be a big problem, since the GPU would only have access to regular system memory. With the fast GDDR3 (and GDDR4) used on modern video cards, manufacturers can push VRAM to higher and higher speeds. With regular RAM, there would be a lot of bottlenecking. It wouldn't be feasible for a gamer to buy one of these when a decent video card would be cheaper than the fastest DDR2 RAM plus the GPU.
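To put rough numbers on that bottleneck, here is a back-of-the-envelope comparison. The clock and bus-width figures are illustrative 2006-era values I've assumed for the sketch, not exact product specs:

```python
# Rough peak-bandwidth comparison: dual-channel system RAM vs. a
# mid-range graphics card's local VRAM. Figures are illustrative.

def memory_bandwidth_gbs(bus_width_bits, effective_rate_mts):
    """Peak bandwidth in GB/s = bus width (bytes) * effective transfer rate."""
    return (bus_width_bits / 8) * effective_rate_mts * 1e6 / 1e9

# Dual-channel DDR2-800: 128-bit combined bus, 800 MT/s effective
system_ram = memory_bandwidth_gbs(128, 800)   # ~12.8 GB/s

# GDDR3 on a 256-bit card at 1400 MT/s effective
video_ram = memory_bandwidth_gbs(256, 1400)   # ~44.8 GB/s

print(f"system RAM: {system_ram:.1f} GB/s, VRAM: {video_ram:.1f} GB/s")
```

Even with generous assumptions, a socketed GPU fed from dual-channel DDR2 would see only around a third of the bandwidth a contemporary discrete card gets from its own VRAM.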

Fusion would be a nifty idea, though, if they integrated the GPU onto the CPU die, but I see this as a mobile-only solution.

Wait for AMD's native quad core; then perhaps QFX may (just maybe) be worth spending your money on.
December 22, 2006 11:39:13 AM

4x4 is just a relatively ugly hack on the road towards Fusion - similar to the crude implementation of SLI and Crossfire on the way to multi-GPUs.
December 22, 2006 11:58:40 AM

4x4 and Fusion are polar opposites.
December 22, 2006 12:53:20 PM

I'm going to go out on a limb here and say that running two Fusion processors (GPU variants) in 4x4 would give you less than or equal processing power to, say, a 7600- or X1600-series video card, at twice the cost. Sure, you'd get double the processing power from the CPU side of Fusion, but it still wouldn't be enough; you'd just have the world's most powerful integrated graphics system.

I think it's possible, but it would be stupid and costly for no real performance gain at all.

Fusion is meant to be cheap, low-power, and in the low-to-average performance range.

Torrenza (4x4) is meant to be expensive, high-power, and in the extreme raw-performance range.

When AMD releases Quad CrossFire (once Sapphire starts selling their dual-RV570 cards), when we see Quad SLI-capable cards, and when AMD releases K8L, 4x4 won't be practical at all, and it will be overpriced.
December 22, 2006 1:03:52 PM

Quote:
When AMD releases Quad CrossFire (once Sapphire starts selling their dual-RV570 cards), when we see Quad SLI-capable cards, and when AMD releases K8L, 4x4 won't be practical at all, and it will be overpriced.


It's already impractical and overpriced. No need to wait for that one.
December 22, 2006 1:09:47 PM

Quote:
99% of us agree that AMD 4x4 systems are a complete waste of money, but can AMD convince us that it isn't by releasing a 4x4 version of Fusion? [...]

I think you are confusing Fusion with Torrenza. Fusion is the placing of a CPU and a GPU on the SAME silicon die. Torrenza is the technology that enables the use of third-party coprocessors in CPU sockets and HTX slots. This will extend to integrating coprocessors on the same silicon die as the CPU: what AMD terms the APU (Accelerated Processing Unit). AMD has already stated that it will not be releasing a GPU to fit into a CPU socket.
This is because there is insufficient bandwidth for a GPU in a CPU socket, along with other technical problems. So you may well ask: why bother with Fusion if there is insufficient bandwidth? The answer is that Fusion is meant to provide a good graphics baseline, not to supplant an independent GPU, which is still required for high performance. Fusion also has the advantage of lowering power consumption. AMD intends to use Fusion initially in laptops. These laptops will have a Fusion processor as well as an independent high-performance GPU. When the laptop runs on battery power, graphics will be handled by the Fusion CPU to save power; when it runs on mains power, graphics will be handled by the independent high-performance GPU. The whole process happens seamlessly, and no rebooting is required. Fusion CPUs will initially be available for laptops and later for desktops.

The other advantage of Fusion is the ability to use the integrated GPU as a GPGPU.
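The seamless battery/mains switching described above amounts to a very simple policy. A hypothetical sketch (the function and return values here are invented for illustration; AMD has published no such API):

```python
# Hypothetical sketch of power-source-driven graphics switching:
# the integrated (Fusion) GPU drives the display on battery, the
# discrete GPU on mains power. Names are invented for illustration.

def select_gpu(on_battery: bool) -> str:
    """Return which GPU should drive the display for the given power state."""
    # Battery -> integrated GPU to minimize power draw;
    # mains   -> discrete GPU for full performance.
    return "integrated" if on_battery else "discrete"

print(select_gpu(on_battery=True))   # -> integrated
print(select_gpu(on_battery=False))  # -> discrete
```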
December 22, 2006 1:51:22 PM

Quote:
When AMD releases Quad CrossFire (once Sapphire starts selling their dual-RV570 cards), when we see Quad SLI-capable cards, and when AMD releases K8L, 4x4 won't be practical at all, and it will be overpriced.


It's already impractical and overpriced. No need to wait for that one.


Currently, I do agree: the DX9 generation didn't need SLI, CrossFire, or 4x4. I think the DX10 generation might need traditional SLI/CF, though not quad. Look at the 8800s from NVIDIA: you need a C2Q to get the most from one, and that's early DX10 hardware, so that's where 4x4 might become practical for the uber-elite performance junkies. If DX10 features become as taxing on the GPU as they appear, twin GPUs might be needed. Which is where my gripe with SLI/CF comes in: because the companies can do this, they can slack off on GPU development and push people into using twin cards to get the performance they would, in the traditional sense, get out of one high-end card.

But I do agree that one GPU is more than enough currently, and one quad core will be sufficient.
December 26, 2006 5:47:16 PM

But do you think it would be sensible for them to do this kind of thing once we start using XDR RAM or some other kind of memory? For example, the Xbox 360 has 512 MB of unified memory; is that practical for PCs?
December 26, 2006 5:56:36 PM

Do you think AMD will create a 4x4 version of Fusion?

Nah. They are basically incompatible. What would you do with four wheels and five blades anyway? :lol: 

Quote:
99% of us agree that AMD 4x4 systems are a complete waste of money, but can AMD convince us that it isn't by releasing a 4x4 version of Fusion?


Yes, 4x4 with dual cores is a waste of money and electricity.
However... when you drop in two native QCs that use the same or less power, then you're talking about a mean machine.

The current 4x4 implementation is "a waste of money and electricity." 100% agreement. I'd love to see a Quad FX chuffing along on eight cores, but the two big questions I have are:

1) When? How long do we have to wait? And what will the competition have out by the time it finally dawns?

2) Is the price/performance going to beat the 2x Clovertown system I can buy right now? (I've found FB-DIMMs at about $80/GB more than conventional RAM.)
December 26, 2006 5:57:35 PM

Quote:
When AMD releases Quad CrossFire (once Sapphire starts selling their dual-RV570 cards), when we see Quad SLI-capable cards, and when AMD releases K8L, 4x4 won't be practical at all, and it will be overpriced.


It's already impractical and overpriced. No need to wait for that one.


I assembled a 4x4 system for $1600 and one for $2700 at ibuypower.com.

The $2700 one came with an FX-70, 4GB of DDR2-800, an 8800GTS, a 320GB HDD, water cooling, a 750W PSU, and a Logitech keyboard/mouse.

But no: Fusion will first be for mobile, then for servers, then for specialized workstations. In other words, there won't be a stand-alone Socket 1207 GPU. Next year's chipsets will support PCIe 2.0 for graphics.

A GPU would need to be 32nm or smaller to fit in a CPU socket, since you'd still need that high-speed local RAM. Three HT3 links could theoretically push 75 GB/s, but G80 already does ~82 GB/s (IIRC) internally.
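Using the post's own figures (treated as rough estimates rather than specs), the bandwidth gap works out as:

```python
# Back-of-the-envelope check of the socket-bandwidth claim above,
# using the post's own numbers: three HT3 links at roughly 25 GB/s
# each vs. G80's ~82 GB/s of local VRAM bandwidth.

ht3_links = 3
ht3_per_link_gbs = 25.0                           # implied by 75 GB/s / 3 links
socket_bandwidth = ht3_links * ht3_per_link_gbs   # 75.0 GB/s

g80_vram_gbs = 82.0                               # the post's figure for G80

shortfall = g80_vram_gbs - socket_bandwidth       # 7.0 GB/s
print(f"socket: {socket_bandwidth} GB/s, G80 VRAM: {g80_vram_gbs} GB/s, "
      f"shortfall: {shortfall} GB/s")
```

So even the theoretical best case for a socketed GPU falls short of what a single 2006 flagship already had on-card, before counting contention from the CPU sharing those links.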
December 26, 2006 6:02:18 PM

Quote:
Do you think AMD will create a 4x4 version of Fusion?

Nah. They are basically incompatible. [...] Is the price/perf going to beat the 2x Clovertown system I can buy right now?


How are dual Opterons great, and 8-core duals great, but QFX sucks? Can someone PLEASE explain that to me. Dual-socket Xeons are everywhere, using Bensley/Dempsey at HIGH power.

You all just want to have a reason to b i t c h about it in my mind. I can't wait to get it.
December 26, 2006 7:18:53 PM

Quote:

How are dual Opterons great, and 8-core duals great, but QFX sucks? Can someone PLEASE explain that to me. Dual-socket Xeons are everywhere, using Bensley/Dempsey at HIGH power.

You all just want to have a reason to b i t c h about it in my mind. I can't wait to get it.


I'm not b i t c h ing at all. I am asking to be educated. I am a prosumer/enthusiast who wants maximum bang for my CPU buck. I don't care if the label says AMD, Intel or K-Tel.

I have my eye set on a 2x Clovertown system. By the time all is said and done, I'm going to be out a bit over ten grand, and that's with scrimping and saving on a few critical components. That may equal a fun night out to some people, but at my income level it is a serious chunk of change, and it therefore requires careful analysis. I would happily put those valuable dollars into a 2x Opteron, 2x FX, or 2x RoncoSlicesDices CPU if I could definitively obtain a better price/performance ratio for my specific prosumer/enthusiast workload:

10% Light Gaming
10% Various Video Edit
20% Photoshop
30% General Office-type Apps
30% Heavy Video Viewing, Encoding, etc.

And of course there's constant web use and downloading/uploading, but that's pretty well normal these days, and it's mostly a function of ISP bandwidth.

That's it. The only other parameter is that I want this system to last me at least 24-36 months without constant upgrading and farting around. I want to buy the box, plug it in, and get my use out of it for that time.

Is an Opteron or something else AMD in my future? No problem. I'd be as happy as a pig in sh!t. I've been an AMD man for years and would love to keep the faith.

I am far from a forum guru. I'm just a guy who spends 12 hours or more per day stuck in front of a PC, has done that for many many many years (likely before the birth of most of the posters on this forum) and has picked up a trick or two along the way. Therefore, I am respectfully requesting that some of you please educate me.
December 26, 2006 7:54:49 PM

The Fusion program is simply a commercial adaptation of the concepts behind the IBM P7 and the Cray Cascade architecture. It has generally been accepted within IEEE circles that, for large-scale computing, homogeneous multicore CPUs are a waste of money because of their inherent energy inefficiency.

"Energy efficiency of computation is quickly becoming a key problem from the chip through the data center. This paper presents the first quantitative study of the potential energy efficiency of vector accelerators. We propose and study a vector accelerator architecture suitable for implementation in a 70nm technology. The vector architecture has a high-bandwidth on-chip cache system coupled to 16 independent memory channels. We show that such an accelerator can achieve speedups of 10X or more on loop kernels in comparison to a quad-issue superscalar uniprocessor, while using less energy. We also introduce run-ahead lanes, a complexity and energy efficient means of tolerating variable latency from crossbar contention, cache bank conflicts, cache misses, and the memory system. Run-ahead lanes only synchronize on dependencies or when explicitly directed." http://sc06.supercomp.org/schedule/event_detail.php?evi... The link contains a link to the full PDF of the paper. The multicore discussion drew significant commentary elsewhere as well, including a panel sponsored by DARPA at the ACM International Conference.

"To evaluate Cell's potential, Berkeley Lab computer scientists evaluated the processor's performance in running several scientific-application kernels, and then compared this performance against other processor architectures. The results of the group's evaluation were presented at the ACM International Conference on Computing Frontiers, held May, 2006 in Ischia, Italy, in a paper by Samuel Williams, Leonid Oliker, Parry Husbands, Shoaib Kamil, and Katherine Yelick of the Future Technologies Group in Berkeley Lab's Computational Research Division, and by John Shalf from DOE's National Energy Research Scientific Computing Center, NERSC. "

"On average, Cell is eight times faster and at least eight times more power-efficient than current Opteron and Itanium processors, despite the fact that Cell's peak double-precision performance is fourteen times slower than its peak single-precision performance. If Cell were to include at least one fully usable pipelined double-precision floating-point unit, as proposed in the Cell+ implementation, these performance advantages would easily double." http://www.supercomputingonline.com/article.php?sid=118... The IBM Cell and AMD Fusion operate like NVIDIA's description of CUDA at SC'06.
"A CUDA-enabled GPU operates as either a thread processor, where thousands of threads work together to solve complex problems, or as a streaming processor in specific applications such as imaging where threads do not communicate. CUDA-enabled applications use the GPU for fine grained data-intensive processing, and the multi-core CPUs for coarse grained tasks such as control and data management." http://www.hpcwire.com/hpc/1107979.html
In a nutshell, what you want to do is cut down the latency and bandwidth problems in the pipeline between the CPU and the accelerator. That is the design premise of Fusion. The article has further discussion of AMD's and others' thoughts on multicore.

"AMD Quad-Core and Beyond?

Richard Oehler, Corporate Fellow at AMD, presented a session on Thursday to give everyone a taste of the company's multi-core roadmap. He talked about the new L3 cache and increased DRAM bandwidth on the upcoming (2007) quad-core chips. Oehler also discussed the effort going towards dynamically managing both power and performance, on a core-by-core basis, in order to increase energy efficiency. HyperTransport links will go from three to four, improving on-chip bandwidth.

But for the time being, the core-count roadmap seems to stop at eight. Oehler said AMD is following the current industry trend in scale out, rather that scale up, and they don't see a demand for many-core processors in the bulk of the market, that is, desktop and enterprise systems. This seems to support the notion that AMD is not going to do any special favors for the supercomputing crowd -- at least on the Opteron front. On the other hand, their aforementioned Stream Processor is certainly targeted for high performance applications, and their future Fusion (CPU-GPU) architecture is also geared towards HPC workloads.

The 800-Pound Multi-Core Gorilla

The last panel of SC06, "Multi-Core for HPC: Breakthrough or Breakdown?", was chock-full of industry luminaries including Thomas Sterling (LSU), Peter Kogge (University of Notre Dame), Ken Kennedy (Rice University), Steve Scott (Cray Inc.), Don Becker (Penguin Inc.) and William Gropp (Argonne National Laboratory). Each gave his perspective on the various issues of this, now mainstream, architecture. The issues discussed by the panel are too complex to summarize in a few words (although I intend to cover this in more detail in a future issue), but there was quite a bit of consensus on the main themes.

Most of the participants believed that the number of processor cores will continue to increase -- an inevitable result of the power dissipation limitations on semiconductor technology. The group also agreed that the current multi-core architectures put the CPUs on the wrong side of the memory wall. A hierarchical memory model, exploitation of locality, and other hardware/software technologies will be needed to solve the CPU-memory bandwidth disparity. And most of all, everyone acknowledged that the software models will have to evolve to take advantage of the increased parallelism. All of this promises to cause a great deal of pain for software developers, which is why Ken Kennedy summed up the multi-core problem as follows: "Be afraid. Be very afraid.""

Wolfgang Greuner published two stories here at Tom's that give you the crux of the multicore problem. The first was about Tyan's ten-quad-core Intel server:
"TyanPSC announced what likely could be considered the ultimate multi-core desktop computer system that money can buy today. The new Typhoon 600 combines ten Intel quad-core processors for a maximum performance of 256 GFlops." http://www.tgdaily.com/2006/11/15/tyanpsc_600/ The other was about the AMD/ATI accelerator card.
"The firm's stream processor announced is based on the 384-million-transistor R580 graphics core, which is more commonly used in Radeon X1900 graphics cards and is known for hiding a powerful number-crunching engine: In fact - and at least in theory - the BlueGene's 367 TFlops could be achieved with less than 1000 graphics processors, which provide a performance of about 375 GFlops each." http://www.tgdaily.com/2006/11/14/amd_stream_processor/ The GPU/CPU accelerator is about 40% faster (Rpeak values; real-world Rmax is substantially less, 60% for Blue Gene/L) than the 40 Intel cores and uses about 10% of the power. Stanford's Folding@home is the classic example of an application of the new tech: http://folding.stanford.edu/FAQ-ATI.html
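The quoted "less than 1000 graphics processors" claim checks out arithmetically, using only the figures from the article:

```python
import math

# BlueGene/L's quoted 367 TFlops (peak) vs. stream processors
# delivering ~375 GFlops each, per the TG Daily article above.

bluegene_rpeak_gflops = 367_000   # 367 TFlops expressed in GFlops
per_gpu_gflops = 375

gpus_needed = math.ceil(bluegene_rpeak_gflops / per_gpu_gflops)
print(gpus_needed)   # -> 979, indeed "less than 1000"
```

Keep in mind this compares theoretical peaks only; as the post notes, real-world Rmax would be substantially lower on both sides.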

Ninja and Parrot are correct in their statements. Fusion is the merger of a GPU-type accelerator onto the CPU chip. It is strictly for computing enhancement and is not a graphics solution.
December 26, 2006 9:20:34 PM

Quote:
The Fusion program is simply a commercial adaptation of the concepts behind the IBM P7 and the Cray Cascade architecture. [...] Fusion is the merger of a GPU-type accelerator onto the CPU chip. It is strictly for computing enhancement and is not a graphics solution.

What?
December 26, 2006 9:28:46 PM

Nod, and agree. I lack the mental capacity to take all that in, especially on my vacation... but I saw my name, so, meh. :mrgreen:
December 27, 2006 4:19:51 AM

My cranial loop kernel suffered in comparison to a quad-issue superscalar uniprocessor, the run-ahead lanes were clogged at rush hour, a complexity- and energy-efficient means of tolerating variable flatulence from crossbar contention created a cache bank conflict when I overspent at Xmas and went into overdraft (as I sorely missed all my cache), the memory system went into tilt, and I entered an Alzheimer's treatment program.
December 27, 2006 6:56:32 PM

Quote:
4x4 and Fusion are polar opposites.


WTF is it with people on this board? Why do you insist on clouding supposition, hearsay, and flights of fancy with fact? :wink:

BTW, very nice seasonal adaptation of your old avatar. I hope your holidays have been pleasant.
December 27, 2006 7:17:48 PM

I'm sorry. I forgot to read the FUD memo given out by the Horde board. Won't happen again. :wink:


Thanks. This time I got the avatar right. My friend, I hope the best for you and yours too.
December 31, 2006 6:20:04 PM

Do you think a GPU socketed next to a CPU would be feasible if we used XDR?
January 1, 2007 12:42:01 AM

Quote:
Do you think a GPU socketed next to a CPU would be feasible if we used XDR?

I don't know enough about XDR RAM to make an informed decision.
January 1, 2007 3:13:27 PM

Quote:
Do you think a GPU socketed next to a CPU would be feasible if we used XDR?


After my hungover brain carefully considered that question, the answer is: Meh!
January 1, 2007 3:24:48 PM

Please note that the above bolded three-letter word is an exclusive trademark of Dasickninja, DaClan, and DaClan Industries. So Meh.
January 1, 2007 3:45:01 PM

Quote:
Please note that the above bolded three letter word is an exclusive trademark of Dasickninja, DaClan, and DaClan industries. So Meh.


Er... Escabuse me? It seems you are the one in violation of trademark!



:lol: 
January 1, 2007 7:45:47 PM

..... Mine is registered in Japan... Nice one bro.
January 1, 2007 7:48:44 PM

Quote:
Please note that the above bolded three letter word is an exclusive trademark of Dasickninja, DaClan, and DaClan industries. So Meh.
I would suggest he alter it to Mooo, but Spud has a trademark on that one. :wink:
January 2, 2007 8:53:27 AM

Quote:
..... Mine is registered in Japan... Nice one bro.


:D 
!