Researchers find automated financial traders will collude with each other through a combination of 'artificial intelligence' and 'artificial stupidity'

If it looks like a duck, swims like a duck, and quacks like a duck, would it be fair to call it a duck? Or, in the case of a working paper from researchers at Wharton and the Hong Kong University of Science and Technology, how closely does the behavior of "AI-powered trading" have to resemble collusion before it would be fair for financial regulators to start treating it as such?

The working paper (PDF), titled "AI-Powered Trading, Algorithmic Collusion, and Price Efficiency," was published by the National Bureau of Economic Research. It sets out to answer the question raised above by running experiments with algorithmic trading agents that use reinforcement learning to decide when to buy and sell assets based on the broader market's history, trends, and forecasts.
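
For anyone who hasn't poked at reinforcement learning before, here's roughly what one of these agents looks like under the hood. This is a deliberately minimal Python sketch and not the paper's actual model: the two-value trend state, the three actions, and the toy reward scheme are all assumptions I made for illustration.

```python
import random

# A minimal Q-learning trader -- an illustrative sketch, NOT the paper's model.
# The "up"/"down" trend state, three actions, and toy rewards are assumptions.

ACTIONS = ["buy", "hold", "sell"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: estimated long-run reward for each (state, action) pair.
q_table = {(s, a): 0.0 for s in ("up", "down") for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning: nudge the estimate toward reward + discounted future value."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])

# Toy market: the trend flips at random, and buying into an up-move pays off.
state = "up"
for _ in range(10_000):
    action = choose_action(state)
    next_state = random.choice(["up", "down"])
    if next_state == "up":
        reward = {"buy": 1, "hold": 0, "sell": -1}[action]
    else:
        reward = {"buy": -1, "hold": 0, "sell": 1}[action]
    update(state, action, reward, next_state)
    state = next_state

print({k: round(v, 2) for k, v in q_table.items()})
```

The point isn't the numbers; it's that nothing in that loop "understands" markets. It just reinforces whatever made the number go up.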

The authors, Winston Wei Dou, Itay Goldstein, and Yan Ji, found that "AI collusion in securities trading can robustly emerge through two distinct algorithmic mechanisms: one based on price-trigger strategies, and the other driven by over-pruning bias in learning" — which they not-so-charitably labeled "AI collusion driven by 'artificial intelligence' [... and] AI collusion driven by 'artificial stupidity,'" respectively.
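
The price-trigger mechanism is easier to grasp with a sketch. As I read the paper's description, each agent trades gently while prices stay in a range consistent with everyone cooperating, then punishes with a burst of aggressive trading once prices cross a trigger. The threshold, punishment length, and order sizes below are numbers I invented for illustration; they aren't from the paper.

```python
# Hedged sketch of a price-trigger strategy. All constants here are invented.

TRIGGER_PRICE = 95.0   # assumed: a price below this suggests someone defected
PUNISH_ROUNDS = 20     # assumed: how long agents trade aggressively afterward

class PriceTriggerTrader:
    def __init__(self):
        self.punish_left = 0

    def order_size(self, price: float) -> int:
        """Small orders while cooperating, large ones during a punishment phase."""
        if price < TRIGGER_PRICE:
            self.punish_left = PUNISH_ROUNDS  # trigger tripped: start punishing
        if self.punish_left > 0:
            self.punish_left -= 1
            return 100  # aggressive, competitive-style trading
        return 10       # restrained, tacitly collusive trading
```

Note that no agent ever messages another. The standing threat of the punishment phase is what keeps everyone trading gently, which is exactly why the behavior looks like collusion without containing any of the communication regulators usually look for.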

The paper features enough undefined proper nouns and mathematical symbols for me to caveat that I am primarily relying on the authors' introduction and conclusion here. My investment strategy has been — and look, I'm trusting y'all, so please don't share this with anyone else — to be too poor to have to worry about some hedge fund's algorithm having an impact on my net worth. (Google's algorithms, though...)

But I am familiar enough with how algorithms like this "make decisions" — if it can be called that — to be unsurprised by the paper's findings. Tools like this are meant to figure out the best way to maximize the probability of a number going up and minimize the probability of that number going down. The result is a bunch of algorithms independently settling on broadly similar responses to particular conditions.

"This highlights a fundamental insight about AI: algorithms relying solely on pattern recognition can exhibit behavior that closely resembles logical and strategic reasoning," the authors wrote, adding that the over-pruning bias they found "is not the result of specific, nonstandard algorithmic assumptions or limitations, but a generic feature of [reinforcement learning| that persists even in sophisticated settings."

The problem, they explained, is that regulators attempting to solve the "artificial intelligence" problem could exacerbate the "artificial stupidity" problem in the process. The former occasionally prompts algorithmic traders to make potentially risky moves; the latter mostly has them adopt conservative trading strategies. How does one discourage aggressiveness without further encouraging timidity?

My favorite example of this comes from a 12-year-old paper (PDF) describing a bot that was taught how to play games on the Nintendo Entertainment System. The bot was great at "Super Mario Bros." but terrible at "Tetris," so it ultimately decided the best way to "win" at Tetris, a game that only ends when the player loses and therefore has no true win condition, was to pause the game right before it lost.

What's the quickest way to avoid the appearance of collusion, regulatory scrutiny, and potential fines resulting from a bunch of algorithms making aggressive trades? Teaching 'em not to make aggressive trades, which is the same behavior encouraged by "artificial stupidity." This won't be an easy problem for companies developing these algorithmic traders or the financial regulators overseeing the market to solve.

Note that this paper doesn't prove AI collusion via artificial intelligence or stupidity is already occurring in financial markets; the findings were based on how different algorithmic traders behaved in simulated markets created as part of this research. But if it looks like a duck, swims like a duck, and quacks like a duck in a simulated pond...

Nathaniel Mott
Freelance News & Features Writer

Nathaniel Mott is a freelance news and features writer for Tom's Hardware US, covering breaking news, security, and the silliest aspects of the tech industry.

  • Alex/AT
    Struggling to implement human experience and prowess in AI, they ended up just implementing stupidity.
  • jlake3
    Hm. I understand the urge to personify computers, but I think that saying they are "colluding" might be the wrong way to describe it. I see a lot of people in various comments sections saying that all sorts of companies are colluding to some end, but communication is an essential part of collusion. Simply responding to the same inputs in the same way is not inherently collusion, even if multiple parties respond incorrectly or in a way that appears dumb, although it certainly casts suspicion on it.

    If three companies all increase the price of widgets and it turns out that the price of widget alloy went up, you'd have a hard time proving they were colluding to drive up the price of widgets rather than responding to supply chain costs. If market trends say blue widgets are NOT selling and they all stop offering a blue option, again, you'd have a hard time proving that they colluded to eliminate the supply of blue widgets instead of just responding to market demand. "But they'd take so much marketshare if they didn't pass through the costs! But there's a small-but-loyal fanbase for blue widgets! Them all making the same decision is collusion!" Nope, not unless they coordinated it.
    "The result is a bunch of algorithms independently settling on broadly similar responses to particular conditions."
    This is dumb, and almost more dangerous than collusion. If none of these bots are talking to each other and they're all operating on roughly the same internals, it feels like competing bots could run up the price on something simply on the basis that "the price is trending up," not realizing that the demand is entirely artificial and will inevitably crash when bots can no longer sustain the pump. In the other direction, something taking a dip could trigger bots to sell, which triggers other bots to sell, which could create a self-fulfilling downward spiral... although humans buying in to exploit the bots' error could stabilize that one.
  • Kindaian
    You already had this kind of behavior with fast trading algos, without AI being involved.
  • edzieba
    The bigger problem is that training your 'AI' on the behaviour of financial markets, once those same 'AI' are performing trades on those markets, is inevitably going to result in the same slop-self-ingestion decline seen with LLMs that start to be trained on text corpora that themselves contain AI-generated slop.