Intel unveils Crescent Island, an inference-only GPU with Xe3P architecture and 160GB of memory

(Image credit: Intel)

Intel on Tuesday formally introduced its next-generation Data Center GPU designed specifically to run inference workloads, pairing 160 GB of onboard LPDDR5X memory with relatively low power consumption. The new unit is codenamed Crescent Island, and it will use the company's upcoming Xe3P architecture when it hits the market next year.

Intel's inference-optimized Data Center GPU codenamed Crescent Island will carry a GPU (perhaps two) based on the Xe3P architecture, a performance-enhanced version of the Xe3 architecture used in the Core Ultra 300-series 'Panther Lake' processors for laptops and compact desktops. The GPU is said to support a 'broad range of data types' relevant to inference workloads and cloud providers. Unfortunately, there is no word on the part's estimated performance, but there are still some hints in Intel's press release.

An LPDDR5X package features two fully independent 16-bit channels, for a 32-bit interface per device. The highest-capacity LPDDR5X die is 8 GB (64 Gb), so 20 such chips are needed to equip a graphics card with 160 GB of memory. That means the card either carries one massive GPU with an unprecedented 640-bit memory interface connecting all 20 devices, or two smaller GPUs, each with a 320-bit interface and 10 memory devices of its own. Keep in mind that, unlike GDDR6 or GDDR7, LPDDR5X cannot operate in clamshell (butterfly) mode, so it is impossible to connect 20 ICs to a single GPU over a 320-bit interface.
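
To make that arithmetic concrete, here is a minimal Python sketch of the possible memory configurations; the per-package capacities are assumptions drawn from shipping LPDDR5X parts, not Intel-confirmed Crescent Island specifications.

```python
# Back-of-the-envelope LPDDR5X configuration math for a 160 GB card.
# Capacities and widths are assumptions, not confirmed Intel specs.

TARGET_CAPACITY_GB = 160   # announced Crescent Island memory capacity
BITS_PER_PACKAGE = 32      # two independent 16-bit channels per LPDDR5X package

def memory_config(capacity_per_package_gb: int) -> tuple[int, int]:
    """Return (package count, total interface width in bits) for 160 GB."""
    packages = TARGET_CAPACITY_GB // capacity_per_package_gb
    return packages, packages * BITS_PER_PACKAGE

# 8 GB (64 Gb) packages -> 20 devices and a 640-bit interface
# (or two GPUs at 320 bits each)
print(memory_config(8))    # (20, 640)

# 16 GB (128 Gb) packages -> 10 devices and a single 320-bit interface
print(memory_config(16))   # (10, 320)
```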

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • User of Computers
    Could there perhaps be a link to the press release included in the article?
    Reply
  • thestryker
    User of Computers said:
    Could there perhaps be a link to the press release included in the article?
    While there may be one I didn't see it. The Phoronix article mentioned that this was from the tech tour and the NDA expired today so there might not have been one.
    Reply
  • thestryker
    An LPDDR5X DRAM IC features two 16-bit channels, so its total interface width is 32 bits. The highest-capacity LPDDR5X die is 8 GB (64 Gb), so 20 such chips are needed to equip a graphics card with 160 GB of LPDDR5X memory. This means that the card either carries one massive GPU with an unprecedented 640-bit wide memory interface connecting all 20 memory devices, or two smaller GPUs, each with a 320-bit memory interface and equipped with 10 memory devices.
    LPDDR5X is shipping in up to 128 Gb packages at 32-bit, so I think it's likely that this would be a 320-bit memory controller using 10 of said packages.
    Reply
  • User of Computers
    thestryker said:
    While there may be one I didn't see it. The Phoronix article mentioned that this was from the tech tour and the NDA expired today so there might not have been one.
    Makes sense; I checked intc.com and there was nothing new for this announcement.
    Reply