Samsung claims its new AI chip is 8x more power efficient than Nvidia's H100

Samsung Electronics and Naver have teamed up to create a specialized semiconductor solution for large-scale artificial intelligence (AI) models. The project combines Samsung's chip production prowess and advanced memory technologies with Naver's expertise in AI. Based on the companies' own data, their first solution is eight times more power efficient than Nvidia's AI GPUs, reports BusinessKorea.

Recently, the two companies demonstrated their first AI semiconductor solution: a field-programmable gate array (FPGA) tailored for inference with Naver's HyperCLOVA X large language model. Naver says the solution is eight times more power efficient than Nvidia's AI GPUs thanks to its use of LPDDR memory, but did not elaborate on other details of the device. For example, it remains unclear when the two companies will turn it into a shipping product.
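
To put the headline claim in concrete terms, here is a back-of-envelope sketch of what "eight times more power efficient" means when expressed as inference throughput per watt. All of the numbers below are hypothetical placeholders; neither Samsung nor Naver has published throughput or power figures for the device.

```python
# Back-of-envelope sketch of an "8x more power efficient" claim expressed
# as inference throughput per watt. The throughput and power numbers are
# hypothetical -- no real figures have been published.

def perf_per_watt(tokens_per_second: float, watts: float) -> float:
    """Power efficiency as inference throughput divided by power draw."""
    return tokens_per_second / watts

# Hypothetical workload: both devices serve the same LLM at the same rate,
# but the FPGA-based solution draws one-eighth the power.
gpu_eff  = perf_per_watt(tokens_per_second=1_000, watts=700)   # e.g. a 700 W GPU
fpga_eff = perf_per_watt(tokens_per_second=1_000, watts=87.5)  # same throughput, 1/8 the power

print(f"GPU:   {gpu_eff:.2f} tokens/s/W")
print(f"FPGA:  {fpga_eff:.2f} tokens/s/W")
print(f"Ratio: {fpga_eff / gpu_eff:.1f}x")  # -> 8.0x
```

Note that a claim framed this way depends heavily on the workload and the operating point chosen for the comparison, which is why the missing details matter.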

Samsung and Naver began working together in late 2022. The collaboration focuses on pairing Samsung's advanced process technologies and high-tech memory solutions, such as computational storage, processing-in-memory (PIM), processing-near-memory (PNM), and Compute Express Link (CXL), with Naver's prowess in software and AI algorithms.

Samsung already produces and sells various memory and storage technologies for AI applications, including SmartSSDs, HBM-PIM, and memory expanders with a CXL interface. Memory is central to Samsung's and Naver's upcoming AI solutions.
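
The "memory bottleneck" Samsung's executive refers to below can be illustrated with a minimal roofline-style sketch. The hardware figures here are assumptions chosen for illustration, not specifications of any Samsung or Naver product; the point is that LLM inference performs very few operations per byte fetched, so bandwidth, not compute, sets the ceiling.

```python
# Minimal roofline-style sketch of the memory bottleneck in large-scale AI.
# PEAK_FLOPS and PEAK_BANDWIDTH are illustrative assumptions, not specs
# of any real accelerator.

PEAK_FLOPS     = 100e12  # 100 TFLOPS of compute (assumed)
PEAK_BANDWIDTH = 1e12    # 1 TB/s of memory bandwidth (assumed)

def attainable_flops(arithmetic_intensity: float) -> float:
    """Achievable FLOPS given FLOPs performed per byte moved from memory."""
    return min(PEAK_FLOPS, PEAK_BANDWIDTH * arithmetic_intensity)

# Token-by-token LLM inference reuses each weight byte only a handful of
# times, so its arithmetic intensity is very low:
for intensity in (1, 10, 100):
    utilization = attainable_flops(intensity) / PEAK_FLOPS
    print(f"{intensity:>3} FLOP/byte -> {utilization:.0%} of peak compute")
# 1 FLOP/byte ->   1% of peak: the chip mostly waits on memory.
# Moving compute closer to the data (PIM/PNM) or adding bandwidth and
# capacity (CXL, computational storage) attacks exactly this bound.
```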

"Through our collaboration with Naver, we will develop cutting-edge semiconductor solutions to solve the memory bottleneck in large-scale AI systems," said Jinman Han, Executive Vice President of Memory Global Sales & Marketing at Samsung Electronics. "With tailored solutions that reflect the most pressing needs of AI service providers and users, we are committed to broadening our market-leading memory lineup including computational storage, PIM and more, to fully accommodate the ever-increasing scale of data."

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • bit_user
    I'd be more impressed if they compared against the H100 running at reduced clock speeds. The stock speeds doubtlessly push the H100 well outside of its efficiency sweet spot.

    I do believe there are big efficiency gains to be made via PIM (Processing In Memory), which I'm expecting they are doing via their hybrid HBM approach:
    https://www.servethehome.com/samsung-processing-in-memory-technology-at-hot-chips-2023/
    Nvidia is apparently working with Samsung's competitor on an answer:
    https://www.tomshardware.com/news/sk-hynix-plans-to-stack-hbm4-directly-on-logic-processors