Intel redefines AI strategy — Jaguar Shores to be rack-level design with focus on silicon photonics

(Image credit: Intel)

This week, Intel outlined some details of its upcoming AI strategy following the recent appointment of a new AI chief. The company plans a multifaceted approach that includes off-the-shelf, workload-specific full-stack AI solutions, bespoke solutions, and foundry partnerships. The GPU codenamed Jaguar Shores remains in Intel's plans.

"My focus will be ensuring that our team builds products that are highly competitive and meet the needs of our customers as we enter a new era of computing defined by AI agents and reasoning models," said Lip-Bu Tan, chief executive of Intel, during the company's conference call with analysts and investors. "We are taking a holistic approach to redefine our portfolio, to optimize our products. For new and emerging AI workloads, we are making necessary adjustments to our product roadmap so that we are positioned to make the best-in-class products while staying laser-focused on execution and ensuring on-time delivery."

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • JRStern
    >At the same time, Intel acknowledges that it needs to work hard on its software
    YES
  • AkroZ
    Also, our open x86.
    Open in what sense?
    Only AMD and VIA have a license, and that dates back to when IBM forced Intel to do it in 1982.

    Intel can provide products for inference, but they are too far behind Nvidia and AMD on AI training; they can only be the third option if they can supply in quantity. It's unlikely that Nvidia and AMD would use Intel Foundry for cutting-edge technologies; the market knows the story of Apple and Samsung Foundry.
  • bit_user
    Michelle Johnston Holthaus said:
    What we are seeing is that customers do love the x86 ecosystem, the software around it. If they can build out an AI infrastructure with x86, they are very interested in doing that.
    Uh, that didn't work so well for Xeon Phi. I'd caution Intel against going back to x86, even though they perceive that as a strength.
  • bit_user
    AkroZ said:
    Open in what sense?
    Only AMD and VIA have a license, and that dates back to when IBM forced Intel to do it in 1982.
    I don't know, but maybe referring to this?
    https://www.tomshardware.com/pc-components/cpus/intel-and-amd-forge-x86-ecosystem-advisory-group-that-aims-to-ensure-a-unified-isa-moving-forward
    AkroZ said:
    Intel can provide products for inference, but they are too far behind Nvidia and AMD on AI training,
    Gaudi 3 doesn't seem so bad. I wouldn't say their strategy is unsalvageable.

    What they definitely need to do is avoid the mistake AMD often makes of trying to beat Nvidia at its own game. Nvidia is currently tied to CUDA, which has been a strength, but also slows down the pace at which they can evolve. AMD foolishly decided to make a CUDA-compatible API called HIP (just how close that mapping is can be seen in the sketch after the comments). Intel has been going in a similar direction with oneAPI, and while I like oneAPI, I can see that it's not the path to success in AI that Intel needs.
  • wzis
    Intel should focus on CPU performance improvements, especially for the mobile ones.
  • bit_user
    wzis said:
    Intel should focus on CPU performance improvements, especially for the mobile ones.
    Lunar Lake was a good improvement over Meteor Lake, so long as you don't need more than 8 cores. The respin of Meteor Lake they did on Intel 3 also seems like a solid step forward.

    I haven't followed iGPU developments as closely, but that also seems to be an area benefiting mobile quite nicely.

    Maybe the biggest challenge their mobile chips face is competitive pricing?
  • JayNor
    The SYCL Joint Matrix extension appears to offer new life for SYCL/oneAPI, and Intel's Battlemage GPUs will be supported.

    The info is from the IXPUG presentations a month ago.

    They say the same code will run on CUDA, AMD, and Intel GPUs, as well as on Intel AMX matrix operations on Xeon (the single-source model behind that claim is sketched after the comments).
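
To make bit_user's "CUDA-compatible API" remark concrete: HIP deliberately mirrors CUDA's runtime API and kernel syntax, which is what AMD's hipify porting tools rely on. The vector add below is a minimal, illustrative sketch (the kernel and variable names are invented for the example, not taken from any product discussed); apart from the hip prefixes on the runtime calls, it is essentially the program one would write against CUDA.

#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

// Trivial element-wise kernel; same __global__/blockIdx/threadIdx model as CUDA.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    float *d_a, *d_b, *d_c;
    hipMalloc(&d_a, bytes);                                  // cudaMalloc -> hipMalloc
    hipMalloc(&d_b, bytes);
    hipMalloc(&d_c, bytes);
    hipMemcpy(d_a, a.data(), bytes, hipMemcpyHostToDevice);  // cudaMemcpy -> hipMemcpy
    hipMemcpy(d_b, b.data(), bytes, hipMemcpyHostToDevice);

    vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);     // same triple-chevron launch

    hipMemcpy(c.data(), d_c, bytes, hipMemcpyDeviceToHost);
    std::printf("c[0] = %f\n", c[0]);                        // expect 3.0

    hipFree(d_a); hipFree(d_b); hipFree(d_c);
    return 0;
}

That near one-to-one mapping is what makes HIP easy to adopt, and also why the comment argues it amounts to playing Nvidia's game.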
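
JayNor's portability claim rests on SYCL's single-source model: one C++ source is compiled for whichever backend is present (Intel's Level Zero runtime, Nvidia's CUDA, AMD's HIP, or the host CPU). The joint_matrix extension (sycl::ext::oneapi::experimental::matrix) layers tile-level multiply-add on top of that model so it can map to Intel XMX, Nvidia Tensor Cores, or Intel AMX; because that extension is still experimental and its exact signatures have shifted between oneAPI releases, the sketch below sticks to the plain, stable SYCL 2020 pattern the claim builds on. Names and sizes are illustrative only.

#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // The default queue picks whatever device the installed backends expose:
    // an Intel GPU, an Nvidia GPU (CUDA plugin), an AMD GPU (HIP plugin), or the CPU.
    sycl::queue q;
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        sycl::buffer<float> ba{a.data(), sycl::range<1>{n}};
        sycl::buffer<float> bb{b.data(), sycl::range<1>{n}};
        sycl::buffer<float> bc{c.data(), sycl::range<1>{n}};

        q.submit([&](sycl::handler& h) {
            sycl::accessor A{ba, h, sycl::read_only};
            sycl::accessor B{bb, h, sycl::read_only};
            sycl::accessor C{bc, h, sycl::write_only};
            h.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    }   // buffers go out of scope here, copying results back to the host vectors

    std::cout << "c[0] = " << c[0] << "\n";   // expect 3.0
    return 0;
}

Built with a SYCL 2020 compiler such as Intel's DPC++ (icpx -fsycl), plus the Codeplay plugins for Nvidia or AMD targets, the same source runs on each vendor's GPU, which is the appeal the comment describes.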