Intel Arc B580 Battlemage GPU allegedly surfaces on Geekbench with 20 Xe cores, 12GB of VRAM, and a 2.85 GHz boost clock, yet its OpenCL score falls short of the A580's despite being a generation newer
The Arc B580 could hit shelves as early as next month.
According to a benchmark listing on Geekbench, we now have more information regarding the alleged specifications of Intel's upcoming Arc B580 GPU, based on Team Blue's Xe2 "Battlemage" architecture. The Arc B580 was spotted last week in a few preliminary ASRock listings at Amazon, which were promptly taken down. This benchmark seemingly reaffirms that the B580 will carry 12GB of VRAM alongside 20 Xe cores, though its initial OpenCL performance leaves something to be desired. Keep in mind that this is not an official benchmark, and that Geekbench OpenCL can be a terrible way of measuring performance, so reserve judgment until review units are available.
The test bench features a Z890 AORUS MASTER motherboard with the flagship Intel Core Ultra 9 285K and 32GB of DDR5-6400 memory. Do note that the benchmark doesn't explicitly identify the card as an Arc B580, so this is not a confirmation of the specifications, though they correlate strongly with previous leaks.
The GPU in this benchmark is listed with 160 Compute Units, which are really Xe Vector Engines (XVEs). Based on the Xe2 architecture's core layout, that works out to 20 Xe cores (1 Xe core = 8 XVEs) or 2,560 ALUs (1 XVE = 16 ALUs). For context, the last-gen Arc A580 offered 24 Xe cores with 3,072 ALUs. On the memory side of things, the B580 packs 12GB of GDDR6 VRAM on a 192-bit memory bus. At least in this benchmark, the GPU ran at a maximum clock speed of 2.85 GHz, roughly 68% higher than the A580's rated 1.7 GHz boost clock.
If you calculate the teraflops on tap, the higher clocks should make up for the shader ALU deficit. The A580 delivers 10.4 TFLOPS of FP32 based on its rated 1.7 GHz boost clock (a conservative figure, since the card typically clocks higher in practice). At 2.85 GHz, the B580 works out to 14.6 TFLOPS of FP32. So, on paper, it should be faster, but again, this is Geekbench OpenCL.
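For readers who want to double-check those numbers, below is a minimal sketch of the arithmetic, assuming the usual peak-FP32 convention of two operations per ALU per clock (one fused multiply-add). The XVE count, core layout, and clocks are the leaked, unconfirmed values discussed above, not Intel-confirmed specifications.

```python
# Minimal sketch of the peak FP32 math used above, based on the leaked,
# unconfirmed figures: peak FP32 = ALUs x 2 ops per clock (one FMA) x clock.

XVES_PER_XE_CORE = 8       # Xe2 core layout: 8 Xe Vector Engines per Xe core
ALUS_PER_XVE = 16          # 16 FP32 ALUs per XVE
OPS_PER_ALU_PER_CLOCK = 2  # a fused multiply-add counts as two operations

def fp32_tflops(xve_count: int, boost_ghz: float) -> float:
    """Peak FP32 throughput in TFLOPS for a given XVE count and boost clock."""
    alus = xve_count * ALUS_PER_XVE
    return alus * OPS_PER_ALU_PER_CLOCK * boost_ghz / 1000.0

# Arc B580 (leaked): 160 XVEs at 2.85 GHz
print(160 // XVES_PER_XE_CORE, "Xe cores,", 160 * ALUS_PER_XVE, "ALUs")  # 20 Xe cores, 2560 ALUs
print(f"B580: {fp32_tflops(160, 2.85):.1f} TFLOPS FP32")                 # ~14.6

# Arc A580: 24 Xe cores = 192 XVEs at the rated 1.7 GHz boost clock
print(f"A580: {fp32_tflops(192, 1.70):.1f} TFLOPS FP32")                 # ~10.4
```

Running it reproduces the roughly 14.6 vs. 10.4 TFLOPS comparison, which is why the B580's on-paper throughput can exceed the A580's despite having fewer ALUs.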
The performance side of things is slightly disappointing, as the B580's 78,743 points land it slower than the A580 in this test. Don't read too much into that, though, because Battlemage has consistently trailed Alchemist in synthetic workloads like this one.
Case in point, the Core Ultra 7 155H (Meteor Lake) is 20% faster than the Core Ultra 7 258V (Lunar Lake) in the same test with the same Xe core counts. Depending on real-world performance, Intel may need to price these GPUs aggressively if it plans to up its market share this generation. While we expect a price tag in the ballpark of $200–$250, high production costs associated with 4nm technology could hurt Intel's bottom line.
In any case, all indicators show that Intel is prepping to reveal Battlemage in December, ahead of Nvidia's Blackwell RTX 50-series and AMD's RDNA 4 RX 8000-series GPUs. Initial supply could be limited to the B580 as the high-end BMG-31-based Arc B770 appears to be coming later.
Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.
vanadiel007
As I mentioned some weeks ago, they should spin off their GPU division. They simply do not have the performance nor the funds to be competitive in the discrete GPU market.
It's not a sound business decision to keep developing and marketing these GPU's.
JarredWaltonGPU
vanadiel007 said:
As I mentioned some weeks ago, they should spin off their GPU division. They simply do not have the performance nor the funds to be competitive in the discrete GPU market.
It's not a sound business decision to keep developing and marketing these GPU's.

Considering AI is basically eating Intel's lunch (Nvidia has gone from being worth less than Intel prior to 2020, to being worth 30X as much as Intel, in terms of market cap at least), I don't think Intel can just pretend GPUs aren't important. Intel needs some changes if it's going to stay relevant.
I'm not saying GPUs alone are the solution, but Intel ignored GPUs for 20 years and it's now paying the price. Or rather, it dabbled in GPUs a little bit (i.e. Larrabee) but was afraid it would hurt the CPU division. And now GPUs are indeed killing the CPU division... just not Intel's own GPUs.
Look at how many Chinese startups are getting major state funding to try and create competitive GPUs for AI. If China also sees this as important, why wouldn't Intel come to similar conclusions? And sure, Intel could go more for AI accelerators, but the point is it can't just give up on the non-CPU market, and GPUs are a good middle ground as Nvidia has proven.
Jaykihn0
vanadiel007 said:
As I mentioned some weeks ago, they should spin off their GPU division. They simply do not have the performance nor the funds to be competitive in the discrete GPU market.
It's not a sound business decision to keep developing and marketing these GPU's.

They're currently very competitive in edge and automotive, wdym?
It's absolutely a sound investment in its status quo.
HideOut
vanadiel007 said:
As I mentioned some weeks ago, they should spin off their GPU division. They simply do not have the performance nor the funds to be competitive in the discrete GPU market.
It's not a sound business decision to keep developing and marketing these GPU's.

If they are making money on them, why would they stop? And right now AI is the future, and that's mostly GPU based...
Elusive Ruse
We need Intel to get this right. I will reserve my judgement for when the product actually launches and is tested independently.
Krieger-San
I feel that this article is simply click bait, or rushed out to attack Intel or Nvidia competitors. The title has a clear opinion of "falls short of prior gen", but then within the second paragraph they state:
"Keep in mind that this is not an official benchmark, and that Geekbench OpenCL can be a terrible way of measuring performance, so reserve judgment until review units are available"
This article is doing nothing but making a statement about an OEM that's trying, and somewhat decently succeeding. In our modern world where attention spans are VERY short, the title has quite an impact on whether people even read the article or just start spouting nonsense because tl;dr. The 'AI market' isn't the whole world's commerce market; it's a portion of the tech industry. The fact that Intel has brought a video card to market that is quite useful in many aspects beyond its integrated graphics is amazing. We can now look to Intel Arc cards for video transcoding, including AV1, which has better compression than HEVC.
Let's also remember that driver tuning can bring quite a bit of performance over time - look at AMD's gains over the years, as well as Intel's - I won't even mention what nVidia has been able to do; they're doing just fine.
P.S. - While I might be attacked as an Intel fan boi, I've been working in the IT industry for >15 years and I choose whichever OEM is the most effective for the use case. I wouldn't choose Intel right now for some products, but trashing their video cards because they aren't competitive for gaming feels like applying a blanket statement to a tool or device that has much more versatility.
JRStern
What you say is true, but I'm sure that within Intel so much of it is just considered history, and that puts them off trying very hard. They also have enough grief at this point with all of their mainline processor products that they just can't imagine taking on a new initiative... one that they've failed at several times before, even including the Gaudi line.
JarredWaltonGPU
Krieger-San said:
I feel that this article is simply click bait, or rushed out to attack Intel or Nvidia competitors. The title has a clear opinion of "falls short of prior gen", but then within the second paragraph they state:
"Keep in mind that this is not an official benchmark, and that Geekbench OpenCL can be a terrible way of measuring performance, so reserve judgment until review units are available"
This article is doing nothing but making a statement about an OEM that's trying, and somewhat decently succeeding. In our modern world where attention spans are VERY short, the title has quite an impact on whether people even read the article or just start spouting nonsense because tl;dr. The 'AI market' isn't the whole world's commerce market; it's a portion of the tech industry. The fact that Intel has brought a video card to market that is quite useful in many aspects beyond its integrated graphics is amazing. We can now look to Intel Arc cards for video transcoding, including AV1, which has better compression than HEVC.
Let's also remember that driver tuning can bring quite a bit of performance over time - look at AMD's gains over the years, as well as Intel's - I won't even mention what nVidia has been able to do; they're doing just fine.
P.S. - While I might be attacked as an Intel fan boi, I've been working in the IT industry for >15 years and I choose whichever OEM is the most effective for the use case. I wouldn't choose Intel right now for some products, but trashing their video cards because they aren't competitive for gaming feels like applying a blanket statement to a tool or device that has much more versatility.

The headline is supposed to draw readers, so there's always a balance between clickbait and boring. The important bit is that there's a B580 benchmark... even if it's flawed. Anyone that knows much about Geekbench OpenCL should know not to put much stock in it. I've routinely seen things where a 4090 or whatever will massively underperform, or some company like Apple or Qualcomm will put out drivers that massively boost performance. Geekbench is a synthetic benchmark in every sense of the word, and that makes it ripe for abuse in terms of targeted optimizations.
It's nice that Intel has brought a GPU to market, and I really hope Battlemage shows major improvements over Alchemist. But I'm also realistic, and Intel is still behind on drivers and other areas. Intel's gains over time have mostly been from DX11 and earlier games, incidentally. Plus gains in games that get benchmarked a lot. 🤔
And FYI, AV1 doesn't really have better compression or quality than HEVC. It's similar overall, perhaps fractionally better (like 0~5 percent). The main thing is that AV1 is royalty free, which is also what helped AVC (H.264) become so popular; not that H.264 was supposed to be royalty free, but it basically became that way. HEVC (H.265) required royalties, and so most companies and software balked. Higher bitrate AVC can look as good as lower bitrate HEVC (and AV1), and so that was the accepted solution for many years.
Jagwired
vanadiel007 said:
As I mentioned some weeks ago, they should spin off their GPU division. They simply do not have the performance nor the funds to be competitive in the discrete GPU market.
It's not a sound business decision to keep developing and marketing these GPU's.

Intel is falling now because they don't have a GPU business for AI, and they missed out on the crypto boom too. ARM also is predictably slowly taking over the CPU market. I don't know why AMD, Nvidia, and Intel haven't tried to promote RISC-V more, or come up with a new joint, open source architecture with other companies.
What Intel should spin off is the foundry business. I don't see a lot of value in controlling the foundries at this point. In fact, it was putting Intel at a disadvantage for a while because AMD and Nvidia were able to use smaller manufacturing processes from TSMC.
It would be great for the electronics market if there was another independent competitor to TSMC. Spinning off the foundries would likely give Intel billions they could use to save the remaining business units like GPU's.