As noticed by ComputerBase, AMD reportedly removed performance-per-watt graphs from its recent RX 7900 XTX and 7900 XT graphics card presentations that compared the cards to Nvidia's RTX 4090.
The footnotes in AMD's November 15th performance presentation indicate there was supposed to be a performance-per-watt slide comparing the RX 7900 XTX's power efficiency against the RTX 4090, but AMD never presented it. The omission is made clear by the "RX-841" footnote, which has no corresponding slide in the deck.
The performance-per-watt slide compared the RX 7900 XTX to Nvidia's very power-hungry GeForce RTX 4090 based on publicly disclosed peak power consumption values (TBP/TGP). AMD accidentally left this footnote in its presentation, so the decision to exclude the slide was apparently made at the last minute.
We don't have any confirmation as to why AMD decided to remove this slide, and there could be several reasons for the omission. First, AMD has said that the RX 7900 XTX competes with the RTX 4080, so a comparison to the RTX 4090 might not have made sense. Additionally, while Nvidia's 40-series cards consume prodigious amounts of energy, the new architecture is more power efficient than its predecessors, and, during gaming, the 40-series cards often consume far less energy than the full TGP.
For instance, our review of the RTX 4080 found the card's performance-per-watt to be incredibly good. It outperformed Nvidia's power-sucking RTX 3090 Ti while consuming far less power during gaming than its maximum 320W TGP allows. In fact, according to our tests, the RTX 4080 Founders Edition consumed just 221W at 1080p. Furthermore, the only workload we found that would max out the RTX 4080's power budget was Furmark combined with a GPU overclock.
Nvidia also sought to highlight that its GPUs don't pull the full TGP during gaming with its own article on the same subject, stating the RTX 4080 could hit just 251W of power consumption under average gaming conditions (albeit at 1440p resolution). This is an impressive feat considering Nvidia says its previous-gen RTX 3090 Ti consumes 398W during typical gaming workloads. This works out to a 37% power reduction for the RTX 4080, and that's not even considering the 4080's slightly faster gaming performance.
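The 37% figure follows directly from Nvidia's two stated wattages; a quick sanity check (the power numbers are Nvidia's, the arithmetic is ours):

```python
# Nvidia's stated average gaming power draw, in watts
rtx_3090_ti_watts = 398
rtx_4080_watts = 251

# Percentage reduction relative to the RTX 3090 Ti
reduction = (rtx_3090_ti_watts - rtx_4080_watts) / rtx_3090_ti_watts * 100
print(f"{reduction:.0f}% lower power draw")  # → 37% lower power draw
```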
It's logical to expect AMD's forthcoming RDNA 3 GPUs to be plenty efficient and easily rival or beat the 40-series in power efficiency metrics. However, as with all launch presentations, AMD would want to ensure that its card has a healthy enough lead to highlight the advantages of its architecture. Maybe the comparison against the RTX 4090 didn't work out quite as favorably as AMD expected. Conversely, AMD may have decided that comparing the two cards based on total power ratings alone might not be the best approach, as the RTX 4090 likely doesn't pull its full TGP during gaming. Besides, the RX 7900 XTX competes more directly with the RTX 4080.
It's expected for some of the comparisons in any presentation to hit the cutting room floor; for instance, as noted by ComputerBase, there is also no slide corresponding to AMD's "RX-837" footnote, and it appears that the "RX-838" footnote is missing entirely.
We'll likely never know why AMD pulled the RX 7900 XTX versus RTX 4090 power efficiency comparison, but one thing is for sure — it is missing from the deck.
The choices are: go for the RX 7900, which is much cheaper with performance that will likely be similar to the 4080 but a bit higher power use, or throw power consumption out the window and go for a 4090 with a 30-40% performance increase for a similar price to the 4080.
I've been keeping track of iBUYPOWER and CyberPowerPC custom builds, and there's only a few hundred dollars' difference between a 4080 and a 4090 PC on builds that are all over $3k, except for the i7 versions that are just under $3k.
Even the extra power used by the 4090 is still pretty good on a perf/watt scale compared to the 4080/7900.
I'd stick with the "our card won't melt the connections and possibly ruin your PSU and $1600-2000 GPU" angle for my press at this point :)
My guess is that AMD was expecting Nvidia's cards to hit their TGP limit under most workloads, as has been the case in the past, and prepared a slide with comparisons based on that. But with the 40 series frequently using significantly less power than the TGP limit (and to be clear, I'm not expecting the same to be true for AMD), that will likely make Nvidia's cards more power efficient this gen.
The press about how few insertions the plug is rated for, plus the mere fact that any of them can create a system fire, means they have done their dash.
Maybe in a few years consumers will forget, but for now that connector is a dead selling point.
Yes, proofreading seems to be a lost art... too much reliance on automatic proofreading that can't catch errors like that one.
You'd better get used to it, because that connector is going to be on motherboards eventually (in fact, that's what Intel originally designed it for), since it is part of the ATX 3.0 standard. Intel's intention with ATX 3.0 is to get rid of the 24-pin motherboard connector and the 12V Molex connector and replace them with this single 12V connector, with all the voltage conversion/regulation for the other voltages handled on the motherboard itself. You'll basically have two of these 16-pin connectors coming from the power supply: one for the motherboard and another for the graphics cards.
What is throwing everyone for a loop is that Intel has been using this connector on server motherboards for almost two years without issue; then again, those were installed by professionals.
BTW, you can't start a fire without a flame, and the composite plastic material used for those connectors is flameproof... melting DOES NOT equal fire.