Fujitsu's 'DLU' AI Processor Promises 10x The Performance Of 'The Competition'


tslot05qsljgo9ed

Quote: 10x Performance / Watt compared to competitors

Are they comparing that 10x number to past products (Nvidia Pascal) or to current products (Nvidia Volta)?

Nvidia is not standing still, and they too will have new products in 2018.
 

FranticPonE

It's a GPU's general-purpose SIMD units... which are more efficient because they say they will be. "Look, our 'master units' coordinate memory access and the SIMD units!" ... Exactly like GPU "wavefronts" or whatever do now. Good job, Fujitsu, you reinvented the wheel and then bragged about it. Just like Google did with their own "Deep Learning" chip, and just like Google's chip it will no doubt be no faster or more efficient than what Nvidia and AMD already do.
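
To spell it out: a warp/wavefront is already a group of SIMD lanes executing the same instruction in lockstep while the hardware coalesces their memory accesses. Rough C model of the idea (nothing vendor-specific, just an illustration):

```c
/* Toy model of what a GPU warp/wavefront already does: a fixed-width group
 * of SIMD lanes runs the same instruction on consecutive elements while a
 * scheduler ("master unit", if you like) issues the groups and coalesces
 * their memory accesses. Plain C, purely illustrative. */
#include <stddef.h>

#define LANES 32  /* typical warp/wavefront width */

/* One wavefront step: every lane applies the same op to its own element. */
static void wavefront_axpy(float a, const float *x, float *y, size_t base)
{
    for (int lane = 0; lane < LANES; ++lane)   /* lanes run in lockstep */
        y[base + lane] += a * x[base + lane];  /* contiguous = coalesced */
}

void saxpy(float a, const float *x, float *y, size_t n)
{
    /* The scheduler's job: sweep wavefronts across the whole array. */
    for (size_t base = 0; base + LANES <= n; base += LANES)
        wavefront_axpy(a, x, y, base);
    for (size_t i = n - (n % LANES); i < n; ++i)  /* leftover elements */
        y[i] += a * x[i];
}
```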
 

bit_user

Quote: ... with a scalable design utilizing the company's Tofu interconnect technology
I love it! Finally, something Japanese-sounding in a Japanese chip!

Seriously, did Fujitsu ever set any records with their SPARC CPUs? I don't recall reading anything about it, but I don't follow the supercomputer or mainframe sectors very closely.

I was hoping to see some exotic new technology, like phase-change memory embedded with the processing elements. But, unless their Tofu interconnect somehow lets them scale incredibly well, I just don't see any reason they should beat (or even match) Volta or Google's Gen2 TPU.
 

bit_user


Yes. The whole thing seems incredibly GPU-like to me. I guess it has the advantage that latencies and memory access patterns could be more predictable, allowing for a number of simplifications that aren't possible in GPUs. But you don't get an order of magnitude from that.
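
To be concrete about the kind of simplification I mean: if the access pattern is known up front, you can feed the compute units from a dumb double-buffered scratchpad with a DMA engine instead of relying on caches and thousands of threads to hide latency. Purely a sketch of the concept -- the dma_start/dma_wait functions are made-up stand-ins, not anything from Fujitsu's documentation:

```c
/* Sketch of software-managed, double-buffered tiling: fetch the next tile
 * while computing on the current one. dma_start/dma_wait stand in for an
 * asynchronous DMA engine; here they're just a blocking memcpy so the
 * example actually runs. Entirely hypothetical, not a real DLU API. */
#include <stddef.h>
#include <string.h>

#define TILE 1024

static void dma_start(float *dst, const float *src, size_t n)
{
    memcpy(dst, src, n * sizeof *dst);  /* placeholder for an async copy */
}
static void dma_wait(void) { /* nothing to wait for with a blocking copy */ }

/* out[i] = a * in[i] over n_tiles * TILE elements, streamed tile by tile. */
void scale_stream(float *out, const float *in, size_t n_tiles, float a)
{
    static float buf[2][TILE];            /* "on-chip scratchpad", two banks */
    if (n_tiles == 0)
        return;

    dma_start(buf[0], in, TILE);          /* prefetch the first tile */
    for (size_t t = 0; t < n_tiles; ++t) {
        dma_wait();                       /* tile t is now resident */
        if (t + 1 < n_tiles)              /* overlap the next fetch with compute */
            dma_start(buf[(t + 1) & 1], in + (t + 1) * TILE, TILE);

        const float *cur = buf[t & 1];
        for (size_t i = 0; i < TILE; ++i) /* compute on the resident tile */
            out[t * TILE + i] = a * cur[i];
    }
}
```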

Good luck to them.
 