
Arm Co-Founder: Nvidia Owning Arm Would Be a Disaster

(Image credit: Shutterstock)

Just recently, news surfaced that Nvidia is interested in acquiring Arm from SoftBank, which has stirred up quite a fuss. While many agree the move would benefit Nvidia itself, it's also clear that many think it would be bad for the industry, and one of those people is Arm's co-founder, Hermann Hauser.

In an interview with the BBC, Hauser went on record opposing Nvidia as an owner of the company, though he also believes the deal won't end up going through.

"It's one of the fundamental assumptions of the ARM business model that it can sell to everybody," Hauser told the BBC. "The one saving grace about Softbank was that it wasn't a chip company, and retained ARM neutrality. If it becomes part of Nvidia, most of the licensees are competitors of Nvidia, and will of course then look for an alternative to ARM."

Arm's clients include Intel, Nvidia, Apple, Qualcomm, TSMC, Samsung, and more.

However, Dr. Hauser does believe that SoftBank's interest in selling Arm presents an opportunity: if not to Nvidia, he argues, the British government should get involved to bring the Cambridge-founded company back to home soil. "The great opportunity that the cash needs of Softbank presents is to bring ARM back home and take it public, with the support of the British government."

Thus far, it is unclear whether Nvidia's acquisition of Arm will actually happen. Although there is no doubt that Nvidia is serious about the acquisition, the deal would likely face strict scrutiny from antitrust regulators, which could hamper Nvidia's plans.

  • Gomez Addams
    To me, Nvidia's intentions for Arm are fairly obvious. They want to be even bigger in supercomputers, and the ARM architecture is very well suited for those because of its low power consumption. It becomes even clearer when you consider that Nvidia has said they are porting CUDA to run on the ARM instruction set. Then consider that ARM-based CPUs have been made with 80 cores and 4-way SMT. It seems to me what Nvidia wants to do is make a massively parallel MCM-based machine. They will go with at least 4-way SMT, but I think they will aim for 8 or 16-way. This would make one ARM CPU look like a streaming multiprocessor, which are currently 64-way SMT. Then they will put 64 of these on a chip module, which will be good for 512 or 1024 threads. Then they will put 8 of those on one MCM and have 4096 or 8192 threads available on one module. This will look like just another GPU to CUDA code. The biggest benefit is it won't be a co-processor, so data transfers will be minimized. Altogether, this architecture will be astonishingly fast and will change the design of supercomputers for years to come.

    That's where I think Nvidia is going with this and they want Arm as a design resource more than anything else.
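    The thread-count arithmetic in this comment is easy to check with a short sketch. Note that every figure here (SMT width, CPUs per module, modules per MCM) is the commenter's hypothetical, not an announced Nvidia specification:

```python
def threads_per_mcm(smt_ways, cpus_per_module=64, modules_per_mcm=8):
    """Total hardware threads in the hypothetical multi-chip module
    described above: threads per CPU x CPUs per module x modules per MCM."""
    return smt_ways * cpus_per_module * modules_per_mcm

# Check both SMT widths the commenter speculates about.
for ways in (8, 16):
    per_module = ways * 64
    total = threads_per_mcm(ways)
    print(f"{ways}-way SMT: {per_module} threads/module, {total} per MCM")
```

    This reproduces the 512/1024 threads per chip module and 4096/8192 threads per MCM quoted in the comment.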
    Reply
  • Jimbojan
    It is not clear NVDA can do that competitively. Last time Fujitsu did its supercomputer using ARMs, it used 3.4M watts to do 2 exaflops; Intel is likely to have 1.2 exaflops with 1.x M watts for the Argonne Lab, which is far more power efficient than the ARMs. It is not clear NVDA is smarter than Fujitsu.
    Reply
  • ubicray
    I am just hoping Nvidia is allowed to go through with this merger, as otherwise my money riding on NVDA will not do so well
    Reply
  • InvalidError
    If I was an ARM licensee, I'd be looking into RISC-V and the few other truly open ISAs out there to make sure I don't get screwed again by ISAs getting locked up behind prohibitively steep licensing fees by another buy-out.
    Reply
  • JamesSneed
    InvalidError said:
    If I was an ARM licensee, I'd be looking into RISC-V and the few other truly open ISAs out there to make sure I don't get screwed again by ISAs getting locked up behind prohibitively steep licensing fees by another buy-out.

    I don't think any of those open source ISAs are close to the same maturity level as ARM. Apple could do something like this, but they are even more heavily invested in ARM since building their own ARM laptop chip. If Nvidia does buy ARM, I highly suspect open source ISAs will look better, and you will see a large group of companies come together to produce one standard ISA. I could see Amazon, Apple, and Google joining together to make one ISA.
    Reply
  • brainburst
    I would love this. Apple moving to ARM, which would be owned by Nvidia, would mean they would probably be forced to support Nvidia GPUs & CUDA
    Reply
  • daworstplaya
    If Nvidia owns ARM, it will only be a matter of time before the other companies that license ARM (Apple, Qualcomm, Amazon, etc.) get screwed with heavy licensing fees and are basically forced to buy ARM chips made by Nvidia. Selling to a direct competitor is a bad idea, IMHO.
    Reply
  • AdrianBc
    Jimbojan said:
    It is not clear NVDA can do that competitively. Last time Fujitsu did its supercomputer using ARMs, it used 3.4M watts to do 2 exaflops; Intel is likely to have 1.2 exaflops with 1.x M watts for the Argonne Lab, which is far more power efficient than the ARMs. It is not clear NVDA is smarter than Fujitsu.

    You are comparing the Aurora supercomputer project, which Intel may complete two years from now, and only by using Intel GPUs manufactured at TSMC, as Intel recently disclosed, with the Fujitsu ARM computer, which exists and works right now.

    Therefore the comparison is meaningless. The Fujitsu ARM CPUs have about the same power efficiency as the NVIDIA Volta GPUs, which is unprecedented for a CPU. Of course, the new NVIDIA Ampere will surpass them and take first place in power efficiency again, but this time by a much smaller margin than over past CPUs.

    The current Intel Cascade Lake and Cooper Lake CPUs, when using AVX-512, have about one third the power efficiency of either the NVIDIA Volta GPUs or the Fujitsu ARM CPUs.

    Of course, the high power efficiency of the Fujitsu ARM CPUs has much less to do with implementing the ARM instruction set than with the fact that they implement the new SVE instruction set extension for vector computation.
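    The efficiency comparisons traded back and forth in this thread all reduce to a single flops-per-watt ratio. A minimal sketch, applying it to the Fujitsu figures quoted earlier in the thread (those numbers are a commenter's claims taken at face value, not verified benchmark results):

```python
def gflops_per_watt(flops, watts):
    """Sustained floating-point operations per second divided by power
    draw, expressed in gigaflops per watt (the usual HPC efficiency metric)."""
    return flops / watts / 1e9

# Figures as quoted in the thread (unverified): 2 exaflops at 3.4M watts.
fujitsu = gflops_per_watt(2e18, 3.4e6)
print(f"quoted Fujitsu figure: {fujitsu:.1f} GFLOPS/W")
```

    The same function applied to any two machines' sustained flops and power draw gives the "3 times less" style ratios used in this thread.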
    Reply
  • InvalidError
    JamesSneed said:
    I don't think any of those open source ISA's are close to the same maturity level of ARM.
    ARM wasn't built in one day either; give them some time. RISC-V's biggest problem for now is that it is still mostly an academic curiosity and as such, the ISA tends to get some extensive revisions when devs run into issues either writing software or designing CPUs and tweak the ISA to smooth those out.

    I wouldn't put too much faith in large corporations necessarily faring a whole lot better since those would be under considerable internal pressure to get some sort of product out the door instead of minimizing design and performance roadblocks from ISA to silicon and software.
    Reply
  • bit_user
    InvalidError said:
    RISC-V's biggest problem for now is that it is still mostly an academic curiosity
    Why do you say that?

    InvalidError said:
    the ISA tends to get some extensive revisions when devs run into issues either writing software or designing CPUs and tweak the ISA to smooth those out.
    Did you check out the revision history of RISC V? They're already up to v2.1:

    https://en.wikipedia.org/wiki/RISC-V#ISA_base_and_extensions
    Here are some benchmarks of a board running Linux on RISC V from > 2 years ago:

    https://www.phoronix.com/scan.php?page=news_item&px=SiFive-RISC-V-Initial-Benchmark
    Of course, the performance is nothing to write home about, but the fact that they could already boot the OS and run a benchmark suite back then says something about maturity.

    InvalidError said:
    I wouldn't put too much faith in large corporations necessarily faring a whole lot better since those would be under considerable internal pressure to get some sort of product out the door instead of minimizing design and performance roadblocks from ISA to silicon and software.
    That's why it's good to have open source OS kernels and toolchains - because they have gatekeepers who block low-quality patches, forcing contributors to set aside the schedule and resources to do it right.
    Reply