Major insurers move to deny liability for AI lawsuits as multi-billion-dollar risks emerge — Recent public incidents have led to costly repercussions
Major insurers seek permission to exclude AI-related claims from corporate policies.
Major insurers are moving to ring-fence their exposure to artificial intelligence failures, after a run of costly and highly public incidents pushed concerns about systemic, correlated losses to the top of the industry’s risk models. According to the Financial Times, AIG, WR Berkley, and Great American have each sought regulatory clearance for new policy exclusions that would allow them to deny claims tied to the use or integration of AI systems, including chatbots and agents.
The requests arrive at a time when companies across virtually all sectors have accelerated adoption of generative tools. That shift has already produced expensive errors. Google is facing a $110 million defamation suit after its AI Overview feature incorrectly claimed a solar company was being sued by a state attorney-general. Meanwhile, Air Canada was ordered to honor a discount invented by its customer-service chatbot, and UK engineering firm Arup lost £20 million after staff were duped by a digitally cloned executive during a video-call scam.
Those incidents have made it harder for insurers to quantify where liability begins and ends. Mosaic Insurance told the FT that outputs from large language models remain too unpredictable for traditional underwriting, describing them as “a black box.” Even Mosaic, which markets specialist cover for AI-enhanced software, has declined to underwrite risks from LLMs like ChatGPT.
As a workaround, a potential WR Berkley exclusion would bar claims tied to “any actual or alleged use” of AI, even if the technology forms only a minor part of a product or workflow. AIG told regulators it had “no plans to implement” its proposed exclusions immediately, but wants the option available as the frequency and scale of claims increase.
At issue is not only the severity of individual losses but the threat of widespread, simultaneous damage triggered by a single underlying model or vendor. Kevin Kalinich, Aon’s head of cyber, told the paper that the industry could absorb a $400 million or $500 million hit from a misfiring agent used by one company. What it cannot absorb, he says, is an upstream failure that produces a thousand losses at once, which he described as a “systemic, correlated, aggregated risk.”
Some carriers have moved toward partial clarity through policy endorsements. QBE introduced one extending limited coverage for fines under the EU AI Act, capped at 2.5% of the insured limit. Chubb has agreed to cover certain AI-related incidents while excluding any single event capable of triggering "widespread" losses simultaneously. Brokers say these endorsements must be read closely, as some reduce protection while appearing to offer new guarantees.
As regulators and insurers reshape their positions, businesses may find that the risk of deploying AI now sits more heavily on their own balance sheets than they expected.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
SomeoneElse23: "And mankind celebrated the creation of AI. Billions of dollars were spent in a race towards an unidentified goal. Massive amounts of Earth's resources were consumed, with entire states' worth of residential power used to run a single 'AI' data center, in a race towards an unidentified goal. Corporations celebrated being able to replace paid, thinking people with AI chatbots. Shareholders celebrated increased profit. Then reality started to set in."