Intel and Lenovo Develop Future of PCs in Shanghai

Lenovo IdeaPad (Image credit: Lenovo)

The worst rivals for any chip designer and PC maker are not their direct competitors, but rather the devices their potential customers already own. To make those customers buy something new, vendors need to advance their products at a rapid pace, so that a new PC offers a radically better experience than a three-year-old computer. This is apparently what Intel and Lenovo are doing in their joint co-engineering lab in Shanghai.

Intel and Lenovo's Advanced System Innovation Lab serves as a breeding ground where engineers from both companies combine their skills to build next-generation laptops that deliver strong performance, elegant design, rich features, and a polished user experience, reports DigiTimes.

Lenovo and Intel have a long history of working together on innovative products, including the ThinkPad X series and the ThinkPad X Fold series. To build such systems, Intel and Lenovo need to overcome not only hardware challenges such as performance, power management, and thermals, but also software-related issues. In addition, Intel's dedicated teams work closely with Lenovo in other co-engineering labs located in Zizhu and Pudong.

"We share a long and illustrious history of deep engineering collaboration with Lenovo," said Zheng Jiong (ZJ), senior director of client customer engineering for Intel China's client computing group (CCG). "We work together very well and are thankful for the innovation support Lenovo has given us through joint labs like these." 

A notable achievement of the Advanced System Innovation Lab in Shanghai is the development of an OLED display driver that can drive two OLED screens instead of one, which opens the door to a number of potentially interesting use cases.

"This work was critical to the development of our platform," said Zhijian Mo, director of platform design and development in Lenovo's intelligent devices group. 

Furthermore, both companies joined forces with DRAM makers to enhance LPDDR5 memory data transfer rates.

While Lenovo remains a key partner, Intel also teams up with other global PC OEMs and software vendors. Their collective goal is to break technological barriers, identify core issues, and engineer enhanced PC solutions. Several advancements in CPUs, power and thermal management, and other PC components have emerged from this cooperative approach.

Looking ahead, as work on the Meteor Lake platform nears completion, plans for the Lunar Lake platform are already in motion, with the platform expected to be ready in 2024. To make Lunar Lake-based PCs radically better than the systems in use today, the two companies are again collaborating across multiple fronts, though this is probably something they would prefer not to discuss in detail for now.

"It is a very special project that involves detailed co-engineering efforts between both our teams," said Mo.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • bit_user
    Lenovo jumped on the AR and VR bandwagons pretty early. I appreciated the effort and was sad to see it not work out better for them.

    I also applaud the work they're doing on the ThinkPad X13s, especially with regard to Linux support. If I needed an ARM laptop today, that's the one I'd buy. Sadly for them, I do not.

    As for things like thermal dissipation, I do not want a laptop churning out 55 W, period. No matter how quiet it is, that's just a lot of heat sitting right in your space, so unless I'm in a cold room, it's going to be unwelcome. This summer, I configured a max power limit of 45 W on my work laptop (based on an i7-12850HX CPU), and that has really helped a lot.
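
    For anyone curious how such a cap can be set: the sketch below is one hypothetical way to do it on Linux via the kernel's powercap (Intel RAPL) sysfs interface. It assumes the intel_rapl driver is loaded, the package domain sits at the usual intel-rapl:0 path, and the script runs as root; the method actually used above isn't specified, so treat this purely as an illustration.

    ```python
    # Hypothetical sketch: cap the long-term (PL1) CPU package power limit
    # via the Linux powercap (Intel RAPL) sysfs interface. Assumes the
    # intel_rapl driver is loaded, the package domain is intel-rapl:0
    # (the index can differ per system), and root privileges.
    from pathlib import Path

    RAPL_PKG = Path("/sys/class/powercap/intel-rapl:0")

    def set_pl1_watts(watts: int) -> None:
        """Write the long-term (constraint_0) limit; the sysfs file expects microwatts."""
        (RAPL_PKG / "constraint_0_power_limit_uw").write_text(str(watts * 1_000_000))

    if __name__ == "__main__":
        set_pl1_watts(45)  # e.g., the 45 W cap mentioned above
        now_uw = int((RAPL_PKG / "constraint_0_power_limit_uw").read_text())
        print(f"PL1 is now {now_uw / 1e6:.0f} W")
    ```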
  • Diogene7
    I wish Intel would allocate many more resources to the development of low-latency, low-power High Bandwidth Memory (HBM) style Non-Volatile Memory (NVM), such as VG-SOT-MRAM (or VCMA MRAM), of at least 64GB/128GB.

    Ideally, they should find a way to do so while reusing as many 3D NAND flash manufacturing tools as possible to lower manufacturing costs.

    This would be REALLY disruptive for all computing devices, especially IoT devices, finally ushering in the era of low-power "Normally-Off Computing".
  • bit_user
    Diogene7 said:
    I wish Intel would allocate many more resources to the development of low-latency, low-power High Bandwidth Memory (HBM) style Non-Volatile Memory (NVM), such as VG-SOT-MRAM (or VCMA MRAM), of at least 64GB/128GB.

    Ideally, they should find a way to do so while reusing as many 3D NAND flash manufacturing tools as possible to lower manufacturing costs.

    This would be REALLY disruptive for all computing devices, especially IoT devices, finally ushering in the era of low-power "Normally-Off Computing".
    Intel is out of the storage business! There's no way they would do a 180 this soon after completely divesting from it.

    As for HBM, you don't need that for such power-constrained IoT devices.
  • Diogene7
    bit_user said:
    Intel is out of the storage business! There's no way they would do a 180 this soon after completely divesting from it.

    As for HBM, you don't need that for such power-constrained IoT devices.

    Yes, I know that Pat Gelsinger is not in favor of Intel being in the memory business; he doesn't like that business.

    However, in the early 2000s, it was because Intel innovated by bundling a computing chip with a wireless chip (Centrino) that it ushered in a new era of Wi-Fi-enabled computers for consumers: it was disruptive at the time.

    Phase Change Memory (PCM) Optane as a non-volatile memory has too many shortcomings (high power consumption and low endurance) and was a bad choice to begin with.

    On the contrary, MRAM such as VG-SOT-MRAM seems to meet most (if not all) of the technical requirements and would enable new opportunities in designing mobile computing devices ("Normally-Off Computing"): I believe it would be disruptive as well.

    But yes, it is more wishful thinking because the problem is economics: it has to be (very) profitable, and as of 2023, there is more profit to be made in the data center business than in consumer products.

    Regarding HBM in a mobile device, if the power consumption were low enough (using VCMA MRAM), then I am sure new use cases could emerge (maybe more on-device machine learning training). So there is likely a need for it; it is just not yet technically and/or economically feasible as of 2023…

    PS: Realistically, regarding MRAM, I think the main company that could have an incentive to manufacture it at scale is TSMC, because it could increase their revenue/profit (for Samsung, SK hynix, etc., it would compete with their other memory products).
  • bit_user
    Diogene7 said:
    Regarding HBM in a mobile device, if the power consumption were low enough (using VCMA MRAM), then I am sure new use cases could emerge (maybe more on-device machine learning training).
    Training big models requires not just a lot of fast memory, but also a lot of compute, and that takes power.
  • Diogene7
    bit_user said:
    Training big models requires not just a lot of fast memory, but also a lot of compute, and that takes power.

    Yes, for big models. I don't know exactly what the new opportunities would be, but I am confident that some new (maybe yet unforeseen) ones would emerge with HBM available at scale on mobile devices…
  • NeoMorpheus
    Intel, not content with blocking AMD from Dell's business line, is now going after their only partner, Lenovo.

    AMD is doomed.
  • bit_user
    Diogene7 said:
    I don't know exactly what the new opportunities would be, but I am confident that some new (maybe yet unforeseen) ones would emerge with HBM available at scale on mobile devices…
    Let's distinguish between IoT cases, where you want persistent memory for power-optimization purposes, and mobile devices like phones. I can see why a phone would want HBM, since it offers the most bandwidth per watt of any DRAM technology. Probably the main reason we don't already have it is cost. Maybe Apple will lead the way here.
  • Diogene7
    bit_user said:
    Let's distinguish between IoT cases, where you want persistent memory for power-optimization purposes, and mobile devices like phones. I can see why a phone would want HBM, since it offers the most bandwidth per watt of any DRAM technology. Probably the main reason we don't already have it is cost. Maybe Apple will lead the way here.

    What I am wondering is: as of 2023, what is the cost difference between 16GB of LPDDR5X memory and 16GB of HBM2E or HBM3 memory?

    How much more expensive is HBM memory than LPDDR5X memory at the same capacity? For example, is it 5x to 7x more? Is it 8x to 10x?…

    It would give some idea of how seriously (Apple) could consider using HBM memory instead of LPDDR memory…
  • bit_user
    Diogene7 said:
    What I am wondering is: as of 2023, what is the cost difference between 16GB of LPDDR5X memory and 16GB of HBM2E or HBM3 memory?
    The only source I have on it is this:
    "Compared to an eight-channel DDR5 design, the NVIDIA Grace CPU LPDDR5X memory subsystem provides up to 53% more bandwidth at one-eighth the power per gigabyte per second while being similar in cost. An HBM2e memory subsystem would have provided substantial memory bandwidth and good energy efficiency but at more than 3x the cost-per-gigabyte and only one-eighth the maximum capacity available with LPDDR5X."

    Source: https://developer.nvidia.com/blog/nvidia-grace-cpu-superchip-architecture-in-depth/
    If you want better than that, you could try doing your own "web research". Let us know if you find any good info.
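
    For a rough sense of scale against the 5x-10x guesses above, here's a toy calculation using only the ">3x cost-per-gigabyte" ratio from that Grace quote; the LPDDR5X baseline price is a made-up illustrative number, not market data.

    ```python
    # Toy calculation: apply the "more than 3x the cost-per-gigabyte" ratio
    # from the NVIDIA Grace quote above to the 16 GB configuration asked about.
    # The LPDDR5X baseline price is a hypothetical placeholder, not real
    # market data; only the >3x ratio comes from the quoted source.
    LPDDR5X_USD_PER_GB = 4.0  # assumed baseline, USD/GB (illustrative only)
    HBM2E_COST_RATIO = 3.0    # lower bound: "more than 3x" per the quote

    capacity_gb = 16
    lpddr_cost = capacity_gb * LPDDR5X_USD_PER_GB
    hbm_cost_floor = lpddr_cost * HBM2E_COST_RATIO

    print(f"16 GB LPDDR5X: ~${lpddr_cost:.0f} (assumed)")
    print(f"16 GB HBM2e:   >${hbm_cost_floor:.0f} (from the >3x per-GB ratio)")
    ```

    So by that source's numbers the answer would be "more than 3x" at equal capacity, below both of the ranges you guessed, though the same quote also notes HBM2e tops out at a far lower maximum capacity.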