Amazon employees admit to using AI unnecessarily to pump up internal usage scores — workers complain of intense pressure to use AI tools
Employees at Amazon, Meta, and Microsoft have been gaming AI usage metrics.
Amazon is the latest hyperscaler where employees have been caught inflating AI token consumption to hit internal usage targets, following similar behavior documented at Meta and Microsoft last month, the Financial Times reports.
The company set targets requiring more than 80% of its developers to use AI tools each week and tracked consumption on internal leaderboards. Some employees told the FT they had been using MeshClaw, an in-house agent platform that can initiate code deployments, triage emails, and interact with Slack, to maximize their token numbers. Amazon said usage statistics would not factor into performance evaluations, but multiple employees said they believed managers were monitoring the data. One said there was "so much pressure to use these tools," while another described how the tracking created "perverse incentives."
The practice, dubbed "tokenmaxxing," has become widespread enough to generate its own vocabulary and leaderboards. But the implications reach beyond workplace culture: if a meaningful share of AI consumption is performative, how reliable are the demand figures against which hundreds of billions of dollars in AI infrastructure procurement are being allocated?
Combined 2026 capex from Amazon, Microsoft, Alphabet, and Meta is tracking between $650 billion and $700 billion, with some Wall Street projections exceeding $1 trillion for 2027, and every hyperscaler has told investors that inference capacity is being absorbed as fast as it can be deployed. Internal developer consumption is part of that absorption, and it sits alongside paying external customers in the usage data that informs capacity planning, GPU orders, HBM procurement, and power infrastructure.
Tokenmaxxing doesn't mean the demand is fabricated: enterprise AI adoption is broadening, and inference workloads are scaling into production. But there is a distinction between adoption and consumption intensity. The former is a durable driver of demand; the latter is gameable, and it is currently being amplified by the very incentive structures these companies built. The water is further muddied by reports that AI is more expensive than the human workers it is meant to augment.
Meta's internal leaderboard lasted only days after public exposure, and Amazon recently restricted visibility of team-wide usage statistics. As measurement practices shift, the consumption intensity they incentivized will shift with them.
Nvidia CEO Jensen Huang has highlighted per-engineer token consumption as a key metric, saying he'd be "deeply alarmed" if a $500,000-a-year engineer were not consuming at least $250,000 in tokens. Every inflated token is still real GPU time, but Nvidia's inference growth depends on that consumption being productive workload that persists and compounds.
Angie Jones, formerly VP of engineering for AI tools at Block, told LeadDev she expected the industry to pivot toward measuring efficient token usage rather than celebrating volume. In a cycle where GPU orders and power commitments are being placed years in advance, the quality of the demand projections behind them matters. The hyperscalers are building for a world where every knowledge worker consumes hundreds of thousands of dollars in annual compute. Whether that consumption proves productive or performative will determine how much of this year's $700 billion generates durable returns.
Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
-
King_V
"We swear, this is really absolutely the next big thing and totally necessary!" they screamed, almost as if they had to convince themselves.
-
thesyndrome
This is very clearly a method to try and trick shareholders into believing that paying through the nose for AI is beneficial.
"Look at how much our workers are using it!" - said by the man mandating that his workers use it, because he was the one who told everyone they needed it in the first place.
The business world has a HUGE ego problem at the moment, with the people at the top so unwilling to admit to being wrong that they would rather let their companies fail than admit their choices are bleeding them dry (though in some cases, I'm sure the CEO/executive is 100% aware that their decisions will destroy the company, but it makes them personally very rich in a short timeframe, and then they just 'golden parachute' to the next company to do it all over again).
-
hotaru251
The amount of wasted power & water to meet these "required usage" should be illegal.
-
Findecanor
When a metric becomes a goal, it ceases to be a useful metric... Now where have we heard that before?
-
bit_user
King_V said:
"We swear, this is really absolutely the next big thing and totally necessary!" they screamed, almost as if they had to convince themselves.
My company also has usage targets that we have to meet. It feels to me like some executive made a pitch to the higher-ups that we should spend money on AI services to boost our productivity, and they're worried the productivity gains won't come if we don't use it enough. However, if we use it and the gains still don't come, I guess they're in hot water?
I find it useful for code reviews. Some of my co-workers are using it more aggressively, and I can tell you the quantity of their output went up, but the quality (which wasn't great to begin with) did not.
-
bit_user
hotaru251 said:
The amount of wasted power & water to meet these "required usage" should be illegal.
The way capitalist systems usually curb waste of valuable resources is to increase cost (e.g. through taxation, regulated cost structures, or fines), then let the profit motive of the company, and the competitive pressures it's under, work to minimize usage of those resources.
Sometimes this happens by simply moving the work to a jurisdiction where the cost increases don't apply. Economists call it "comparative advantage" when one region has lower costs than another, and it works to stimulate trade.
-
bit_user
thesyndrome said:
The business world has a HUGE ego problem at the moment, with the people at the top being so unwilling to admit to being wrong
Well, there's some of that. For instance, I heard about a company that fired most of its software developers, trying to replace them with AI. The AI was so bad that they had to bring in a consulting firm at a much higher rate than they were previously paying their developers. This is from a good source, although the software company wasn't named.
Another thing that happens is sort of like FOMO (Fear Of Missing Out), where execs hear about a trend and are afraid of being behind the curve (or at least seeming to be). As long as it's something everybody else is doing, that gives them cover to try it for themselves. However, fads often fade almost as quickly as they appear. If a critical mass of businesses decide that AI was over-hyped and take their foot off the gas pedal, the whiplash could end up being severe.
I'll say this: AI is certainly over-hyped, but it does have real uses. It's just that you get maybe a 10% boost in productivity, and right now it's sort of a one-time thing, rather than an annual 10% improvement. We don't know if or when AI will deliver the next big boost. For that to happen in software development, it needs to get good enough that it no longer requires detailed human oversight, which I think is a fairly long way off.
-
fball922
bit_user said:
I'll say this: AI is certainly over-hyped, but it does have real uses. It's just that you get maybe a 10% boost in productivity...
And we are about to find out what that 10% actually costs as providers pivot to pay-by-usage models.
-
Trake_17
hotaru251 said:
The amount of wasted power & water to meet these "required usage" should be illegal.
I'm not sure how you could effectively implement this sort of thing in a free society, but I agree with your sentiment. It's wasteful on a scale that absolutely injures others.
-
Trake_17
bit_user said:
Well, there's some of that. For instance, I heard about a company that fired most of its software developers, trying to replace them with AI...
I expect it will follow much as the PC and then the internet did. We don't yet really understand what the tech is going to become, both because it'll take a generation growing up with it to really create ideas and because it's changing so fast. We also, as with smartphones, aren't likely to know how much it's going to injure people for a generation. It will eventually create big changes that we aren't yet really capable of visualizing.