Artificial Intelligence (AI)
The world of artificial intelligence (AI) is growing at an exponential rate. Our coverage spans the hardware that runs modern AI workloads, including the latest CPUs, GPUs, ASICs, and FPGAs, as well as how AI is used, from the different kinds of large language models (LLMs) to how they are trained and then deployed for inference. Here you'll find Tom's Hardware's leading coverage of all things AI.
Latest about Artificial Intelligence

Oracle reportedly delays several new OpenAI data centers because of shortages
By Anton Shilov
Oracle has reportedly delayed some of the data centers it is building for OpenAI from 2027 to 2028, citing labor and materials shortages.

Microsoft, Google, OpenAI, and Anthropic join forces to form Agentic AI alliance, according to report
By Jon Martindale
The biggest names in AI are set to form an open-source alliance.

Basement AI lab captures 10,000 hours of brain scans to train thought-to-text models
By Luke James
Conduit says it has collected roughly 10,000 hours of noninvasive neural data from “thousands of unique individuals” in a basement studio.

China starts list of government-approved AI hardware suppliers: Cambricon and Huawei are in, Nvidia is not
By Anton Shilov
Nvidia is not there... yet?

Nvidia decries 'far-fetched' reports of smuggling in face of DeepSeek training reports
By Sunny Grimm
DeepSeek's next priorities for training future LLM generations conveniently line up with Blackwell's biggest strengths.

Research commissioned by OpenAI and Anthropic claims that workers are more efficient when using AI
By Sunny Grimm
The findings counter studies released by MIT and Harvard in August that claimed the opposite.

Chinese Navy base 3D imaged to 50cm resolution in single satellite pass
By Jowi Morales
US spatial intelligence firm shows off its capabilities by providing high-quality images of a Chinese naval base on Hainan Island.

Two GTX 580s in SLI are responsible for the AI we have today — Nvidia's Huang revealed that the invention of deep learning began with two flagship Fermi GPUs in 2012
By Aaron Klotz
Nvidia CEO Jensen Huang revealed on a recent episode of the Joe Rogan podcast that the inventors of deep learning ran their pioneering machine learning network on a pair of GTX 580s in SLI in 2012.

