The latest rumor from China indicates that the Ryzen 9 4900H and Ryzen 7 4800H sport up to eight cores and 16 threads, which would make them the first APUs (Accelerated Processing Units) to do so. After seeing hard evidence of the Ryzen 7 4700U, we believe AMD is capable of delighting us with eight-core, 16-thread APUs.
The Chinese forum user claims that the source of the information is one of the world's top laptop makers. The post doesn't mention clock speeds, only that the Ryzen 9 4900H and Ryzen 7 4800H will give the Core i9-9880H a run for its money. It also claims that while the pair of Renoir chips consume less power, their operating temperatures aren't any lower.
After scouring the web for the Ryzen 9 4900H and Ryzen 7 4800H, we could only verify the existence of the latter. Irish retailer Elara has listed a fair number of Asus-branded laptops with the Ryzen 7 4800H and other Ryzen 4000-series APUs. The search also unexpectedly dug up three other previously unknown Renoir chips: the Ryzen 7 4800HS, Ryzen 5 4600H and Ryzen 5 4600HS.
While AMD uses the 'H' suffix to denote its high-performance mobile parts, this is the first time we're seeing the 'HS' suffix associated with an AMD processor. We suspect that chips with the HS designation might be more power-efficient versions of the corresponding H parts.
Another detail from Elara's listings is that the Asus laptops are estimated to arrive on December 31, which lends some credence to the speculation that AMD is readying a Renoir announcement for CES 2020.
Intel certainly has reason to worry, as AMD is determined to steal a piece of the chipmaker's mobile pie. Renoir will seemingly bring AMD's APUs up to parity with Intel's 9th-generation Core H-series offerings in terms of core count. It remains to be seen whether the rumored Renoir chips can outclass the current mobile Coffee Lake parts, but AMD has shown before that Zen 2's IPC (instructions per cycle) should be taken seriously.
Zhiye Liu is a news editor and memory reviewer at Tom’s Hardware. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.
alextheblue
NightHawkRMX said: I just want to see a desktop 6 or 8 core APU.
Well, since they're going to use the same dies...
PCMDDOCTORS
It would be cool if it had integrated VEGA 56 or 64. I'm dreaming, but it's not a crime, lol
bit_user
PCMDDOCTORS said: It would be cool if it had integrated VEGA 56 or 64. I'm dreaming, but it's not a crime, lol
The thing to keep in mind is cooling. Those GPUs use 200 - 300 W by themselves. Add the CPU cores and you're well into water-cooling territory, which would limit the market considerably. People who could afford such a setup could also afford to just use a dGPU, which would still probably perform better. So, there's basically no market left.
The other issue is potentially memory bandwidth (unless they use HBM2). DDR4 or even DDR5 wouldn't have nearly enough bandwidth, for graphics on that level. But HBM2 would probably push the chip into a much bigger socket (if the mere presence of the GPU didn't, already). So, even further out of the mainstream.
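For a rough sense of that gap, here's a quick back-of-the-envelope sketch in Python (peak spec-sheet numbers, not measurements; dual-channel DDR4-2933 is just the example configuration):

# Peak theoretical bandwidth of a dual-channel DDR4 setup vs. the HBM2
# bandwidth a Vega 56/64-class GPU is designed around (reference-spec figures).

def ddr_bandwidth_gbs(mt_per_s, channels=2, bus_bits=64):
    # transfers per second * channels * bytes per transfer
    return mt_per_s * 1e6 * channels * (bus_bits / 8) / 1e9

ddr4_2933 = ddr_bandwidth_gbs(2933)   # ~47 GB/s, dual channel
vega56_hbm2 = 410                     # GB/s, reference spec
vega64_hbm2 = 484                     # GB/s, reference spec

print(f"Dual-channel DDR4-2933: {ddr4_2933:.1f} GB/s")
print(f"RX Vega 56 (HBM2):      {vega56_hbm2} GB/s ({vega56_hbm2 / ddr4_2933:.1f}x)")
print(f"RX Vega 64 (HBM2):      {vega64_hbm2} GB/s ({vega64_hbm2 / ddr4_2933:.1f}x)")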
Nothing wrong with dreaming, but it's also good to be realistic about the prospects of it coming true.
PCMDDOCTORS
bit_user said: The thing to keep in mind is cooling. Those GPUs use 200 - 300 W by themselves... Nothing wrong with dreaming, but it's also good to be realistic about the prospects of it coming true.
Well, what GPU do you think they could cram into a CPU without it being a problem, going beyond Vega 11?
Rdslw
On the integrated side those would be Vega 5.6 or 6.4, but still...
An 8/16 APU is the dream of any SFF build... with a full 300 W Vega integrated, that would make a perfect hovercraft!
As mentioned in:
https://forums.tomshardware.com/threads/amd-renoir-apu-graphics-configurations-seemingly-discovered-in-driver.3555763/
I expect a 6C/12T, 10-11 CU unit @ 45 W. (A 15 W, 8 CU part would be perfect, but let's not be greedy.) I want it.
JayNor
Extra cores are OK, but Intel integrates WiFi 6, Thunderbolt 3, AVX-512, and Optane support, and the new Tiger Lake laptop chips will have Xe graphics.
An article today mentions a 4.3 GHz boost capability for Tiger Lake U. That makes this 28 W NUC box more interesting. It also has a slot for a discrete GPU. Might be late this year for the NUC.
https://www.tomshardware.com/news/intel-phantom-canyon-nuc-tiger-lake-xe-graphics-pcie-4.0,40140.html -
bit_user
PCMDDOCTORS said: Well, what GPU do you think they could cram into a CPU without it being a problem, going beyond Vega 11?
Let's look at Intel for a moment. On their 10 nm process (which is pretty equivalent to the 7 nm TSMC process that AMD is using), the Gen11 iGPUs max out at 64 EUs. Each EU has 8 of what AMD terms "shaders". So, that works out to roughly what AMD would term a 512-shader iGPU.
A Vega CU (which I guess they call NCU, because it was "new", at one point) has 64 shaders. So, an 11-CU Vega iGPU has 704 shaders. That suggests that Intel's Gen11 is mostly a catch-up exercise. Of course, it's a bit simplistic to compare GPUs on shaders alone. Anyone following the AMD vs. Nvidia race could tell you that raw compute isn't the whole story. Even within AMD's products, Navi shows that Vega didn't use its compute very efficiently. But, I digress.
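If you want to redo the arithmetic, here's a tiny sketch (assuming the usual 2 FLOPs per shader per clock for FMA; the 1.4 GHz clock is just illustrative):

# Shader-count comparison from above, plus peak GFLOPS at an assumed clock.
gen11_shaders = 64 * 8        # 64 EUs x 8 ALU lanes ("shaders") per EU = 512
vega_11cu_shaders = 11 * 64   # 11 CUs x 64 shaders per CU = 704

def peak_gflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz   # 2 FLOPs per shader per clock (FMA)

print(f"Gen11 (64 EU):     {gen11_shaders} shader-equivalents")
print(f"Vega 11 (11 CU):   {vega_11cu_shaders} shaders")
print(f"Vega 11 @ 1.4 GHz: ~{peak_gflops(vega_11cu_shaders, 1.4):.0f} GFLOPS")

That ~1971 GFLOPS figure is also where the Ryzen 5 3400G number further down comes from.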
So, we see a rough convergence around 8-11 CUs. Maybe that suggests that memory is a bottleneck, if you go much higher. That shouldn't be surprising, if you look at the compute vs. bandwidth ratio of their dGPUs. Below are numbers derived from the base clocks of the respective reference card specs.
Polaris (GDDR5):
Model      | Bus Width (bits) | GFLOPS | GB/sec | FLO/B
RX 550     | 128              | 1126   | 112    | 10.1
RX 560     | 128              | 2406   | 112    | 21.5
RX 570     | 256              | 4784   | 224    | 21.4
RX 580     | 256              | 5792   | 256    | 22.6
RX 590     | 256              | 6769   | 256    | 26.4

Vega (HBM2):
Model      | Bus Width (bits) | GFLOPS | GB/sec | FLO/B
RX Vega 56 | 2048             | 8286   | 410    | 20.2
RX Vega 64 | 2048             | 10215  | 484    | 21.1
Radeon VII | 4096             | 11136  | 1028   | 10.8

Navi (GDDR6):
Model      | Bus Width (bits) | GFLOPS | GB/sec | FLO/B
RX 5500 XT | 128              | 4703   | 224    | 21.0
RX 5700    | 256              | 6751   | 448    | 15.1
RX 5700 XT | 256              | 8218   | 448    | 18.3
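The FLO/B column is just compute divided by bandwidth; here's a minimal sketch reproducing a few rows from the reference-spec numbers above:

# FLO/B = GFLOPS / (GB/s), i.e. FLOPs available per byte of memory bandwidth.
cards = {
    "RX 580":     (5792, 256),   # (GFLOPS, GB/s), from the tables above
    "RX Vega 56": (8286, 410),
    "RX 5700 XT": (8218, 448),
}

for name, (gflops, bw) in cards.items():
    print(f"{name:<11} {gflops / bw:4.1f} FLO/B")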
Now, what patterns do we see? Well, first, we should probably ignore the RX 550 (indeed, there are supposedly 64-bit bus versions of that GPU, which have FLO/B right in line with its siblings). Also, we should ignore Radeon VII, since it wasn't originally intended to be a gaming GPU. As for the rest, we see FLO/B range from 15.1 to 26.4, with a definite clustering around 21.
So, what does this tell us about GPU bandwidth requirements? Well, at least for Vega, I think we can say that you probably don't want much more than 21 GFLOPS per GB/sec. Given that the Ryzen 5 3400G supports up to dual-channel DDR4-2933, which I think is good for somewhere in the neighborhood of 47 GB/sec, that would suggest about 987 GFLOPS of compute. Dividing that by 2 gives us a target shader * GHz product. With typical AMD iGPU clocks ranging from 1.1 to 1.4 GHz, that yields 449 to 353 shaders. So, that's actually between about 6 and 7 CUs. This suggests their APUs are already compute-heavy.
For a sanity-check, consider that Ryzen 5 3400G's GPU reportedly delivers 1971 GFLOPS (though I'm not sure if the 1.4 GHz figure is technically base or boost clocks). Divided by 47 GB/sec, we get about 42 FLO/B. So, it's a real outlier. Very compute-heavy, for the amount of bandwidth it has. And it has to share that bandwidth with the CPU! You can further validate this, by checking out various benchmarks people have run on it, where they vary the memory frequency.
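Put as a sketch, using the same round numbers as above (illustrative, not a spec):

# Sizing an iGPU against its memory bandwidth, using the ~21 FLO/B sweet spot.
TARGET_FLO_PER_BYTE = 21     # rough clustering point from the dGPU tables
bandwidth_gbs = 47           # ~dual-channel DDR4-2933, shared with the CPU

target_gflops = TARGET_FLO_PER_BYTE * bandwidth_gbs    # ~987 GFLOPS
shader_ghz = target_gflops / 2                         # 2 FLOPs per shader per clock

for clock in (1.1, 1.4):
    shaders = shader_ghz / clock
    print(f"@ {clock} GHz: ~{shaders:.0f} shaders (~{shaders / 64:.1f} CUs)")

# Sanity check: the 3400G's 11 CUs deliver ~1971 GFLOPS on that same ~47 GB/s.
print(f"Ryzen 5 3400G: ~{1971 / bandwidth_gbs:.0f} FLO/B -> far above the sweet spot")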
In conclusion, I'd say that iGPUs are already rather maxed, unless/until you do something to alleviate memory bottlenecks. Maybe put some HBM2, in package. However, that would require a bigger package and significantly increase price. Anyway, once you can scale up the bandwidth, then your bottlenecks become power and thermals.
Where I would expect such a product to possibly make sense is in the laptop sector. There, you could potentially reap some cost savings from the higher-level of integration. What's tricky about that proposition is that high-end laptops are in the 16 - 32 GB range. Certainly, you want at least 8 GB for the GPU, so 16 GB would be a minimum. However, HBM2 isn't cheap and the CPU doesn't really need so much bandwidth. So, at that point, it's tempting just to move the GPU into its own package, with its own memory. Furthermore, the thinnest laptops, that place the highest premium on integration, are also the most power & thermally-restricted - so, not great candidates to feature an extremely high-powered iGPU. And in anything bigger, then it doesn't seem such an issue to have a dGPU.
As a bonus, let's try the same analysis on some recent consoles (both Polaris + GDDR5):
Model      | Bus Width (bits) | GFLOPS | GB/sec | FLO/B
PS4 Pro    | 256              | 4198   | 218    | 19.3
Xbox One X | 384              | 6001   | 326    | 18.4
This seems to confirm that you really want somewhere in the realm of 1 GB/sec of bandwidth, for every 20 GFLOPS of GPU horsepower.
bit_user
JayNor said: Extra cores are OK, but Intel integrates WiFi 6, Thunderbolt 3, AVX-512, and Optane support, and the new Tiger Lake laptop chips will have Xe graphics.
Granted, WiFi 6 and Thunderbolt 3 have obvious value (especially in a laptop or case without a slot for a graphics card).
Optane... is something Intel keeps talking about. Let's see it gain some real traction before counting that as a win. AVX-512... again, I'm skeptical it's going to see much use in consumer apps. You're better off just using the GPU for most things that would benefit from it. As I outlined above, I think Intel is still in the catch-up phase with their iGPUs. I'll want to see some benchmarks before I count a Xe iGPU as a win (vs. simply achieving parity with AMD).
JayNor said: That makes this 28 W NUC box more interesting. It also has a slot for a discrete GPU. Might be late this year for the NUC.
https://www.tomshardware.com/news/intel-phantom-canyon-nuc-tiger-lake-xe-graphics-pcie-4.0,40140.html
That's not a NUC, in any meaningful sense. NUCs have a 4" x 4" motherboard. Intel invented that term to describe computers that were far smaller than traditional PCs. However, mini PCs of that size even predate NUCs.
That thing is basically (if not actually) a mini-ITX PC that Intel's brain-dead marketing department just decided to brand as a NUC. That they didn't think they could sell it simply as an Intel PC says a lot about how little regard they have for that product's target customers. Basically, once you can plug in 160 W PCIe graphics cards, it's definitely no longer a NUC. I guess the only remaining difference would probably be its non-socketed CPU.