A Weibo user has reportedly tested the Ryzen 7 4800U's integrated graphics, and the results are better than you might expect.
The Ryzen 4000-series (codename Renoir) mobile APUs leverage AMD's most recent Zen 2 microarchitecture, but the chips still rely on the Vega GPU microarchitecture for graphics. On top of that, the design suffered a minor regression: while Picasso, Renoir's predecessor, has up to 10 Compute Units (CUs), Renoir maxes out at eight CUs. However, Renoir uses TSMC's cutting-edge 7nm FinFET manufacturing process, so there are improvements in other areas.
Picasso's iGPU has a peak boost clock of 1,400 MHz, while Renoir boosts up to 1,750 MHz, a very noticeable 25% improvement. And whereas Picasso supports DDR4-2400, Renoir natively supports DDR4-3200, and we all know how AMD's APUs love faster RAM.
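As a rough sketch of what those spec changes mean on paper, here's some back-of-the-envelope arithmetic. The dual-channel, 128-bit memory bus is an assumption about typical laptop configurations, not something the test systems confirm:

```python
# Generational comparison of Picasso vs. Renoir iGPU specs from the article.

picasso_clock_mhz = 1400
renoir_clock_mhz = 1750
clock_gain = renoir_clock_mhz / picasso_clock_mhz - 1
print(f"Boost clock gain: {clock_gain:.0%}")  # 25%

# Peak theoretical bandwidth, assuming dual-channel DDR4
# (128-bit bus = 2 channels x 8 bytes per transfer).
def ddr4_bandwidth_gbs(mt_per_s, channels=2):
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(f"Picasso (DDR4-2400): {ddr4_bandwidth_gbs(2400):.1f} GB/s")  # 38.4
print(f"Renoir  (DDR4-3200): {ddr4_bandwidth_gbs(3200):.1f} GB/s")  # 51.2
```

The bandwidth jump (roughly a third more) matters as much as the clock bump, since integrated graphics share system memory with the CPU.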
AMD Ryzen 7 4800U vs. Nvidia GeForce MX250
Naturally, the Weibo user pulled in laptops to test the mobile-targeted chip, more specifically various Lenovo Xiaoxin laptops. The user highlighted that the Ryzen 7 4800U-powered laptop is an engineering prototype, so performance for the retail product could differ.
The Weibo user pitted the Ryzen 7 4800U against two Intel rivals: the Core i7-1065G7 (Ice Lake), which has the Iris Plus Graphics G7, and the Core i5-10210U (Comet Lake), which is accompanied by Nvidia's GeForce MX250. There's a comparison between the Ryzen 7 4800U and AMD's own Ryzen 5 4600U chip as well.
| | Xiaoxin 15 2020 | Xiaoxin Pro 13 | Xiaoxin Pro 13 | Xiaoxin Pro 13 |
| --- | --- | --- | --- | --- |
| Processor | Intel Core i7-1065G7 | Intel Core i5-10210U | AMD Ryzen 5 4600U | AMD Ryzen 7 4800U |
| iGPU / GPU | Iris Plus Graphics G7 | Nvidia GeForce MX250 | Radeon Graphics (6 CUs) | Radeon Graphics (8 CUs) |
| RAM | 16GB DDR4-3200 | 16GB DDR4-2666 | 16GB DDR4-3200 | 16GB DDR4-3200 |
All the Lenovo Xiaoxin laptops are equipped with 16GB of DDR4 RAM; only the memory speed varies between models. The devices with the Ice Lake and Renoir parts have DDR4-3200 modules, while the Comet Lake model comes with DDR4-2666 sticks. Aside from the Ryzen processors' obvious advantage of twice as many cores, which will benefit them in CPU-heavy titles, it's a pretty fair fight considering that each processor is paired with the fastest memory speed it officially supports.
The tester used synthetic benchmarks, such as 3DMark Time Spy and Fire Strike, as well as real-world gaming tests conducted at 1080p resolution. The quality settings varied depending on the game.
Benchmark Results
| | Time Spy | Fire Strike | Counter-Strike: Global Offensive | League of Legends | Assassin's Creed: Odyssey | Shadow of the Tomb Raider |
| --- | --- | --- | --- | --- | --- | --- |
| Intel Core i5-10210U + Nvidia GeForce MX250 | 1,130 | 3,664 | 156.67 | >90 | 28 | 24 |
| AMD Ryzen 7 4800U | 1,159 | 3,378 | 97.87 | >90 | 25 | 21 |
| AMD Ryzen 5 4600U | 954 | 2,892 | 95.87 | >90 | 22 | 18 |
| Intel Core i7-1065G7 | 761 | 2,453 | 61.93 | 68 | 15 | 0 |
In the synthetic benchmarks, the Ryzen 7 4800U was 2.6% faster than the GeForce MX250 in Time Spy; however, the GeForce MX250 beat the Ryzen 7 4800U by 8.5% in the old-school Fire Strike benchmark.
In Counter-Strike: Global Offensive, the GeForce MX250 delivered up to 60.1% higher average frame rates than the Ryzen 7 4800U. Despite the Ryzen 7 4800U having two additional CUs, the difference between it and the Ryzen 5 4600U was around 2%. The reviewer didn't list the exact results for League of Legends but claimed that the GeForce MX250 and the two Ryzen parts perform similarly.
The GeForce MX250 outperformed the Ryzen 7 4800U by 12% and 14.3% in Assassin's Creed: Odyssey and Shadow of the Tomb Raider, respectively. To be fair, the two aforementioned titles are CPU-intensive games, and that's where the Ryzen 7 4800U's four extra cores help keep the gap narrow; the Intel Core i5-10210U is a quad-core, eight-thread part, after all. And you can't really fault AMD for offering more cores per chip, as the chipmaker has worked hard to develop a microarchitecture that squeezes double the cores out of the Zen 2 silicon.
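For readers who want to check the math, the percentage gaps quoted above fall out directly from the raw numbers in the benchmark table. A quick sketch in Python (game names abbreviated for brevity):

```python
# Benchmark scores from the table above; higher is better in every column.
mx250 = {"Time Spy": 1130, "Fire Strike": 3664, "CS:GO": 156.67,
         "AC: Odyssey": 28, "SotTR": 24}
r7_4800u = {"Time Spy": 1159, "Fire Strike": 3378, "CS:GO": 97.87,
            "AC: Odyssey": 25, "SotTR": 21}

for test in mx250:
    a, b = mx250[test], r7_4800u[test]
    gap = (max(a, b) / min(a, b) - 1) * 100  # winner's lead over the loser
    leader = "MX250" if a > b else "Ryzen 7 4800U"
    print(f"{test}: {leader} ahead by {gap:.1f}%")
```

Running this reproduces the 2.6%, 8.5%, 60.1%, 12%, and 14.3% figures cited in the text.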
Laptop manufacturers are fond of the GeForce MX250 because it's a cheap discrete graphics solution that provides better performance than integrated graphics. However, Renoir is starting to break that mold. While AMD's latest U-series APUs haven't quite reached parity with, let alone surpassed, the GeForce MX250, based on these numbers, they are getting there.
Zhiye Liu is a news editor and memory reviewer at Tom’s Hardware. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.
jeremyj_83
Will be interesting to see what it can do with LPDDR4X-4266. That would give the APUs another 33% increase in RAM bandwidth.
InvalidError
jeremyj_83 said: Will be interesting to see what it can do with LPDDR4X-4266. That would give the APUs another 33% increase in RAM bandwidth.

The price premium of 4266 vs 3200 would pay for most of the MX250, so I doubt many manufacturers (if any) will bother exploring that option.

Unless AMD decides to introduce IGPs with HBM first, the next major IGP performance bump will likely come from DDR5 bringing 4000+ MT/s down to the entry-level.
alextheblue
jeremyj_83 said: Will be interesting to see what it can do with LPDDR4X-4266. That would give the APUs another 33% increase in RAM bandwidth.

That was my immediate thought as well. My second thought was that I wish they had a 10+ CU design, but I understand their desire to spend their transistor (and power) budget heavily on the CPU side this time, since they have a competitive and efficient core. That being said, the 8 CU models are performing quite well.

InvalidError said: The price premium of 4266 vs 3200 would pay for most of the MX250, so I doubt many manufacturers (if any) will bother exploring that option.

Not that I think anyone here really knows exactly how much more an OEM would pay for LPDDR4-4266 vs 3200, nor what the same OEM would pay for an MX250 (preferably a 4GB version, as 2GB can hinder some games even at lower settings), but that aside:

A high-end ultrathin design might very well favor the space, thermal, and power savings. There are already Intel-powered designs with LPDDR4, so it's not exactly breaking new ground... but the high-end Renoir chips can make better use of the extra bandwidth.
Unholygismo
alextheblue said: That was my immediate thought as well. My second thought was that I wish they had a 10+ CU design, but I understand their desire to spend their transistor (and power) budget heavily on the CPU side this time, since they have a competitive and efficient core. That being said, the 8 CU models are performing quite well. Not that I think anyone here really knows exactly how much more an OEM would pay for LPDDR4-4266 vs 3200, nor what the same OEM would pay for an MX250 (preferably a 4GB version, as 2GB can hinder some games even at lower settings), but that aside: a high-end ultrathin design might very well favor the space, thermal, and power savings. There are already Intel-powered designs with LPDDR4, so it's not exactly breaking new ground... but the high-end Renoir chips can make better use of the extra bandwidth.
I really don't think the extra bandwidth would help it much, or even extra CUs. It looks to me to be all dependent on the power target. Nothing was written about this in the review, and it seems to me that it is set to 15 W, versus what I believe is a 15 W + 15 W Intel + Nvidia combo. (Keep in mind that this is TDP and not representative of actual power usage.) But it gives us a hint.
Several people have tested the iGPU on the 4900HS and 4800HS, which is only a 7 CU Vega chip, with 3,200 MHz RAM, and those are running away from the MX250 and even the MX350, prompting Nvidia to rush out the MX450 (Turing based).
Findings from lowering the TDP on the 4800HS for the iGPU clocks. But keep in mind that this also depends on how stressed the CPU is and how resources are allocated:
At 15 W it runs at about 950-1,000 MHz
At 20 W it runs at about 1,250-1,300 MHz
At 25 W it runs at about 1,400-1,500 MHz
At 35 W it runs at a locked 1,600 MHz
Based upon this, the chip should get roughly a 50% increase in clock speeds when using the full 25 W available on the 4800U (assuming it is currently running at 15 W).
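Taking the commenter's figures at face value, the roughly-50% estimate checks out. This is an illustrative sketch using the midpoints of the quoted clock ranges, not measured data:

```python
# iGPU clock vs. package power on the 4800HS, per the ranges quoted above.
clocks_mhz = {15: (950, 1000), 20: (1250, 1300), 25: (1400, 1500), 35: (1600, 1600)}

def midpoint(lo, hi):
    return (lo + hi) / 2

base = midpoint(*clocks_mhz[15])     # 975 MHz at 15 W
boosted = midpoint(*clocks_mhz[25])  # 1450 MHz at 25 W
print(f"15 W -> 25 W clock gain: {(boosted / base - 1) * 100:.0f}%")  # ~49%
```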
This power scaling is based on this YouTube channel:
https://www.youtube.com/channel/UCV_FbbkkWz4KHNzMlmYO04A
LordConrad
"While Picasso, Renoir's predecessor, has up to 10 Compute Units (CUs), Renoir maxes out at eight CUs."

I know it's an edge case, but Microsoft's Surface Edition has 11 Compute Units.
ron baker
Would love to get something like that in a NUC type, either Intel or AMD... basically a laptop without screen or keyboard. But cost is a killer; Intel's Ghost Canyon is too rich. ASRock DeskMini??
alextheblue
Unholygismo said: I really don't think the extra bandwidth would help it much, or even extra CUs. It looks to me to be all dependent on the power target.

Even their older APUs (with slower graphics) continued to scale with memory speed past 3200. The tuned CUs in Renoir will also scale, even at 15 W. The question is how much. The actual frequencies will also vary by cooling solution, load, and the load on the CPU cores. I would agree that if you're looking at a top-line "U" series chip as an alternative to a 15 W + dGPU combo, 25 W sounds pretty reasonable. OEMs don't usually look at it that way, but I think it would be fantastic to see these with LPDDR4 and a 25 W TDP.
Unholygismo
alextheblue said: Even their older APUs (with slower graphics) continued to scale with memory speed past 3200. The tuned CUs in Renoir will also scale, even at 15 W. The question is how much. The actual frequencies will also vary by cooling solution, load, and the load on the CPU cores. I would agree that if you're looking at a top-line "U" series chip as an alternative to a 15 W + dGPU combo, 25 W sounds pretty reasonable. OEMs don't usually look at it that way, but I think it would be fantastic to see these with LPDDR4 and a 25 W TDP.

Don't get me wrong, I in no way intended to make it seem like the memory was redundant. But if it's running at only 1,000 MHz, memory is much less of an issue. My point was that the only reason the chip did not reach the MX250 and above is the power limit.

One of the main reasons this generation of Vega is faster than the last is that it is way better at utilizing the bandwidth, making it less of a bottleneck. And if you scale up beyond 8 CUs (25 W or more), more bandwidth would definitely be needed.

Of course, this depends on what sort of textures you are loading and at what resolution. In some cases it might not mean anything, in others a lot.