Report: New, Cheaper 1440p Next-Gen Xbox in the Works

The Xbox Series X has a cheaper, less powerful sibling console, codenamed “Lockhart,” in the works, according to an Xbox developer kit leak from earlier this week that The Verge verified today.

According to Twitter user @XB1_HexDecimal, the June release notes for the Xbox Development Kit reference a “Scarlett dev kit” that includes three different console modes -- “default, AnacondaProfiling and LockhartProfiling” -- as testing options. Since Microsoft’s codename for the Xbox Series X was Project Scarlett and a picture of an anaconda is etched into the Xbox Series X mainboard, the leak sparked speculation that “LockhartProfiling” refers to a new, unannounced next-gen Xbox. 

Now, we have more to go on than screenshots and hearsay. The Verge reported today that anonymous "sources familiar with Microsoft's Xbox plans" confirmed the Xbox developer kit does include a “special Lockhart mode." The sources also claimed that enabling the mode reduces the kit’s performance to the standards Microsoft is planning to hit with an unannounced budget next-gen Xbox.

“We understand that includes 7.5GB of usable RAM, a slightly underclocked CPU speed, and about 4 teraflops of performance,” The Verge reported. “The [standard] Xbox Series X includes 13.5GB of usable RAM, and targets 12 teraflops of GPU performance.”

The report also pointed to Twitter user @bllyhlbrt, who highlighted several Lockhart references in the Xbox One operating system, alongside references to Anaconda and Dante (the name of the developer kit, according to The Verge's sources).

Whether Lockhart will be included in the Xbox Series X branding or sold under a different name (Xbox Series S?) is unknown. 

The Verge did, however, claim that the console is meant to target “gaming at 1080p or 1440p,” as opposed to the 4K-at-60-frames-per-second standard Microsoft has been advertising for the Xbox Series X.

A second, less expensive console would fit into Microsoft’s announced, PC-like Xbox strategy: the company has been adamant that the Xbox Series X and Xbox One will share the same library for the first few years after release.

This would eliminate the need to make or buy different versions of games for different Xbox consoles. Remember when Ubisoft made versions of Assassin’s Creed 4 for both PS3 and PS4? Instead, compatibility would be assumed, and a game’s performance would simply be decided by how powerful your machine is, opening the door for plenty of different models and configurations.

Michelle Ehrhardt

Michelle Ehrhardt is an editor at Tom's Hardware. She's been following tech since her family got a Gateway running Windows 95, and is now on her third custom-built system. Her work has been published in publications like Paste, The Atlantic, and Kill Screen, just to name a few. She also holds a master's degree in game design from NYU.

  • bit_user
    It strikes me as odd that they'd make it less powerful than the Xbox One X, in any respect. And from the sound of it, the GPU performance is indeed worse.

    IMO, the smart thing to do would just be to deliver a cost-reduced version of the One X, maybe with a few updates (like newer CPU cores & improved GPU features). However, they could simply target the One X, and know that anything which performs well on it @ 1440p would also work well on Lockhart.
    Reply
  • atomicWAR
    bit_user said:
    It strikes me as odd that they'd make it less powerful than the Xbox One X, in any respect. And from the sound of it, the GPU performance is indeed worse.

    IMO, the smart thing to do would just be to deliver a cost-reduced version of the One X, maybe with a few updates (like newer CPU cores & improved GPU features). However, they could simply target the One X, and know that anything which performs well on it @ 1440p would also work well on Lockhart.

    We were just talking about this on another forum. Someone was claiming the IPC increase for RDNA 2 would be around 50-60 percent. If true, a 4TF RDNA 2 chip would perform like a 6TF RDNA 1 chip, or stronger if the increase tops 50%. Personally, I think an IPC increase in the range of 35-40% is more likely, at least based on what AMD is claiming for RDNA 2 and what those claims actually translate to in real frame rates. That would put the XBSS just behind the XB1X. With the console targeting a lower resolution than the XB1X, it would still be a solid upgrade over last-gen graphics, particularly if they go for 1080p. Regardless of how close those guesses are, I'm very interested to see how the XBSS plays out.
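    To put rough numbers on that, here's a back-of-the-envelope sketch (the 4TF figure is the rumored spec, and the IPC uplifts are just the guesses above, not confirmed numbers):

        # Back-of-the-envelope estimate of "RDNA 1 equivalent" throughput.
        # The 4 TF raw figure is the rumored Lockhart spec; the IPC uplift
        # percentages are guesses from this thread, not confirmed numbers.

        def effective_tflops(raw_tflops, ipc_uplift):
            """Scale a raw teraflop figure by an assumed per-flop IPC gain."""
            return raw_tflops * (1.0 + ipc_uplift)

        LOCKHART_RAW_TF = 4.0  # rumored ~4 TF of RDNA 2

        for uplift in (0.35, 0.40, 0.50, 0.60):
            rdna1_equiv = effective_tflops(LOCKHART_RAW_TF, uplift)
            print(f"{uplift:.0%} IPC uplift -> ~{rdna1_equiv:.1f} TF RDNA-1-equivalent")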
    Reply
  • cryoburner
    bit_user said:
    It strikes me as odd that they'd make it less powerful than the Xbox One X, in any respect. And from the sound of it, the GPU performance is indeed worse.
    Teraflops are not necessarily directly comparable between different architectures though, and they measure compute performance, not gaming graphics performance. An RX 580 is roughly a 6 Tflop card, and its GCN architecture is similar to what the Xbox One X uses. The newer RX 5500 XT is typically a little faster, yet it's only a 5 Tflop card as far as compute performance is concerned, because its RDNA architecture offers more graphics performance relative to compute performance. It's possible we could see things shift a bit further in that direction with RDNA2. The updated architecture will supposedly improve efficiency further, and part of that might involve trading away some more compute performance for a given level of graphics performance. New features like variable rate shading could also allow for some additional performance in games that support them. And this all assumes the rumored "about 4 teraflops" number is even accurate.
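    To make that concrete, the headline teraflop numbers fall straight out of shader count and clock speed, which is why they say so little on their own. A quick sketch using the cards' published boost clocks (sustained clocks and actual game performance vary, which is the point):

        # Peak FP32 throughput = shaders x 2 ops per clock (fused multiply-add) x clock.
        # Boost clocks below are the published reference specs; none of this
        # captures how efficiently each architecture turns flops into frames.

        def fp32_tflops(shaders, boost_ghz):
            """Peak single-precision teraflops from shader count and clock."""
            return shaders * 2 * boost_ghz / 1000.0

        rx_580 = fp32_tflops(2304, 1.340)      # GCN  -> ~6.2 TF
        rx_5500_xt = fp32_tflops(1408, 1.845)  # RDNA -> ~5.2 TF

        print(f"RX 580:     {rx_580:.1f} TF (GCN)")
        print(f"RX 5500 XT: {rx_5500_xt:.1f} TF (RDNA) -- fewer flops, similar or better fps")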
    Reply
  • Chung Leong
    bit_user said:
    It strikes me as odd that they'd make it less powerful than the Xbox One X, in any respect. And from the sound of it, the GPU performance is indeed worse.

    The next-gen GPU supports variable rate shading. Games can hit the frame-rate target with less raw computational power, and scaling should be largely automatic thanks to VRS. On the cheaper console, peripheral parts of the scene would simply be shaded at half or quarter resolution more often.
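    A minimal sketch of that idea, with made-up tile radii and rates (real VRS hardware applies driver- or engine-chosen rates to coarse screen tiles):

        # Toy illustration of variable rate shading: pick a shading rate per
        # screen tile based on distance from a focal point. The radii and
        # rates here are invented for illustration, not real driver behavior.

        def shading_rate(tile_center, focus=(0.5, 0.5)):
            """Return the shading-rate fraction for a tile.

            tile_center and focus are (x, y) in normalized [0, 1] screen space.
            """
            dx = tile_center[0] - focus[0]
            dy = tile_center[1] - focus[1]
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < 0.3:
                return 1.0   # 1x1: full rate near the center of attention
            if dist < 0.6:
                return 0.5   # 2x1 / 1x2: half rate in the mid-periphery
            return 0.25      # 2x2: quarter rate at the screen edges

        # A tile near the corner of the screen gets shaded at quarter rate:
        print(shading_rate((0.95, 0.95)))  # -> 0.25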
    Reply
  • Alvar "Miles" Udell
    Uh, why? If the PS5 reveal was anything to go by, the flagships are going to struggle to even hit 4K30, but cut the power by 66% and the resolution to just 2560x1440, and expect to get 60fps? I don't see it, not without games shipping with an alternate set of textures and detail settings to lower the load level...
    Reply
  • alextheblue
    Alvar Miles Udell said:
    not without games shipping with an alternate set of textures and detail settings to lower the load level...
    Yes. PC games do it all the time. The devs will dial things in to run well out of the box.

    I think the goal here is to use newer architectures (Zen, RDNA) to achieve similar performance to the One X at a lower production cost. Of course, these cost savings may be offset if they're using an SSD, but IMO that would just mean a better overall experience at a similar price point.
    Reply
  • Jim90
    cryoburner said:
    Teraflops are not necessarily directly comparable between different architectures though, and they measure compute performance, not gaming graphics performance. An RX 580 is roughly a 6 Tflop card, and its GCN architecture is similar to what the Xbox One X uses. The newer RX 5500 XT is typically a little faster, yet it's only a 5 Tflop card as far as compute performance is concerned, because its RDNA architecture offers more graphics performance relative to compute performance. It's possible we could see things shift a bit further in that direction with RDNA2. The updated architecture will supposedly improve efficiency further, and part of that might involve trading away some more compute performance for a given level of graphics performance. New features like variable rate shading could also allow for some additional performance in games that support them. And this all assumes the rumored "about 4 teraflops" number is even accurate.

    Exactly!!
    As a reminder to those getting confused (or deliberately spreading lies) by the implication that teraflops and 'overall' performance are directly linked, please watch Mark Cerny's "Road to PS5" video: https://www.youtube.com/watch?v=ph8LyNIT9sg
    Reply
  • bit_user
    Jim90 said:
    As a reminder to those getting confused (or deliberately spreading lies) by the implication that teraflops and 'overall' performance are directly linked,
    I don't think anyone in this thread is deliberately spreading lies, but the spec differences seemed large enough that I didn't expect RDNA's efficiency gains would necessarily cover them. However, I probably didn't account for the additional gains of RDNA2.

    In particular, I think the decrease in RAM size is notable. From the sound of it, total physical RAM drops from 12 GB (One X) to 8 GB (Lockhart). Though my original post didn't mention it, that was one of the factors behind it.

    Jim90 said:
    please watch Mark Cerny's "Road to PS5" video...
    Thanks for posting, and I have a lot of respect for Mr. Cerny, but 52:44 (+ ads) is a lot to ask of someone.
    Reply
  • Fleet33
    cryoburner said:
    Teraflops are not necessarily directly comparable between different architectures though, and they measure compute performance, not gaming graphics performance. An RX 580 is roughly a 6 Tflop card, and its GCN architecture is similar to what the Xbox One X uses. The newer RX 5500 XT is typically a little faster, yet it's only a 5 Tflop card as far as compute performance is concerned, because its RDNA architecture offers more graphics performance relative to compute performance. It's possible we could see things shift a bit further in that direction with RDNA2. The updated architecture will supposedly improve efficiency further, and part of that might involve trading away some more compute performance for a given level of graphics performance. New features like variable rate shading could also allow for some additional performance in games that support them. And this all assumes the rumored "about 4 teraflops" number is even accurate.


    Exactly! There are too many simplistic comparisons and misunderstandings spreading like wildfire, and then console gamers post that false information across the internet. That Jaguar/GCN APU was weak, mostly in the CPU department.

    A console aiming for 1440p (possibly with some heavy graphical settings turned down) would be a smart play by Microsoft. Besides, it's going to look great on a 1080p television anyway.
    As someone already noted, PC gamers have always been able to turn down demanding settings (if needed) that in some cases aren't that noticeable.

    Even the "checkerboard" rendering used last generation looked great in 4K. Who's to say this Lockhart won't deliver 4K gaming? It's a nice compromise for the money you'll be saving! If you want true native 4K, then go for the Series X.

    The problem will be the parts of the general public (and PS fanboys) who don't understand the tech and will just spread more false information. I haven't purchased a console in over 15 years, but that Series X is tempting.
    I need to build a new PC (the current one is going on 10 years), and the Series X could satisfy me for three years while I wait for new PC part advancements and improvements in the coming years.
    Reply
  • bit_user
    Fleet33 said:
    A console aiming for 1440p (possibly with some heavy graphical settings turned down) would be a smart play by Microsoft. Besides, it's going to look great on a 1080p television anyway.
    How many people do console gaming on a PC monitor, though? That's one thing that really jumped out at me about 1440p. I suspect game developers are going to do all of their testing at 4k and 1080p, with maybe a token run or two at 1440p and 720p to make sure they're not completely broken.

    Fleet33 said:
    I need to build a new PC (the current one is going on 10 years), and the Series X could satisfy me for three years while I wait for new PC part advancements and improvements in the coming years.
    Why do you need to wait an additional 3 years to upgrade a PC that's already 10 years old? You can find plenty of benchmarks comparing multiple generations, and it already seems well worthwhile to upgrade even a Sandy Bridge system (though I suspect you're on Nehalem; Sandy Bridge will be 10 years old in 2021).

    That said, if you're only considering an Intel CPU, I'd have to agree that Comet Lake's high power consumption and corresponding cooling requirements make it rather unappealing. Still, you probably have only about 15 months until Intel launches its first 10 nm desktop CPU.
    Reply