Memory
The latest news about memory

Nvidia officially announces a new RTX 4070 with slower GDDR6 memory
By Andrew E. Freedman
Nvidia is adding GDDR6 memory to the RTX 4070 in order to improve supply. It shouldn't affect performance much, if at all, but it probably won't change the pricing either.

China's new Hygon CPU spotted with 64 Zen cores
By Anton Shilov
Hygon's C86-7490 processors use AMD's SP5 packaging, but what resides under the heat spreader?

G.Skill launches ultra-low-latency RAM for Intel and AMD CPUs
By Aaron Klotz
G.Skill has unveiled a new high-performance DDR5 memory kit aimed squarely at performance. The new kit runs a very low-latency configuration of DDR5-6400 with a CAS latency of 30.

Pick up 32GB of speedy 6,000 MT/s Team T-Force Vulcan DDR5 memory for only $86
By Stewart Bendle
Snag a deal on a 32GB Team T-Force Vulcan DDR5-6000 memory kit for only $86.

Nvidia RTX 4070 with slower GDDR6 memory is on the way, according to rumors
By Mark Tyson
The green team is likely responding to component pricing and supply conditions.

China's CXMT begins mass-producing HBM2 memory well ahead of schedule
By Anton Shilov
China-based memory maker CXMT has begun mass-producing HBM2 memory, well ahead of its telegraphed 2026 start of mass production.

Ampere unveils monstrous 512-core AmpereOne Aurora processor — custom AI engine, support for HBM memory
By Paul Alcorn
Ampere is adding a 512-core AmpereOne Aurora processor to its roadmap, and it also divulged pricing for its AmpereOne lineup.

SK hynix announces its GDDR7 memory touting 60% faster speeds, 50% improved power efficiency
By Jowi Morales
The Korean firm plans to start GDDR7 mass production in Q3 2024.

New memory tech unveiled that reduces AI processing energy requirements by 1,000 times or more
By Jeff Butts
Seeing the need to improve the energy efficiency of AI applications, a research team in Minnesota may have cracked the code to cutting energy consumption by a factor of 1,000 or more.

Faulty Nvidia H100 GPUs and HBM3 memory caused half of failures during Llama 3 training — one failure every three hours for Meta's 16,384-GPU training cluster
By Anton Shilov
In a 16,384-GPU H100 cluster, something breaks down roughly every three hours. In most cases, the H100 GPUs themselves are to blame, according to Meta.