March 9, 2026
When the Financial Times ran a story in November 2025 with the headline "Insurers retreat from AI cover as risk of multibillion-dollar claims mounts," it put into words what the industry had been quietly signalling for months. Major insurers — Great American, Chubb, and W.R. Berkley among them — were asking U.S. regulators for permission to exclude widespread AI-related liabilities from corporate policies. One underwriter described AI model outputs to the FT as "too much of a black box."
What really terrifies insurers, as an Aon executive told the FT, isn't a single massive payout. It's systemic risk: an agentic AI mishap that triggers 10,000 losses at once. Insurers can absorb a $400 million loss to one company. They're far less equipped to handle the same scale of loss distributed across an entire portfolio.
The headline macro numbers remain compelling. WTW's February 2026 cyber outlook estimated the cyber market expanded to $16 billion in 2025, with most studies projecting growth to at least $40 billion by 2030. Forrester Research forecasts written cyber premiums will rise 15% in 2026, as AI threat surface expansion reverses the market's recent softening. GlobalData named AI, cyber, and climate change the three themes with the biggest impact on the insurance industry in 2026.
But beneath the growth curve, coverage is quietly fracturing along a fault line that most policyholders haven't yet noticed.
The examples that spooked the FT's sources are not theoretical. Google's AI Overview falsely accused a solar company of legal troubles, triggering a $110 million lawsuit. Air Canada was held liable for a discount its chatbot invented. Fraudsters used a deepfaked executive to steal $25 million from Arup during a video call that appeared entirely real. These are the kinds of losses that have no clean home in a policy written before generative AI existed.
WTW notes that AI "transgresses all lines of insurance" — it amplifies traditional cyber risk while simultaneously introducing novel regulatory and liability exposures that traditional policy language simply wasn't designed to absorb.
The direction of exclusions is unmistakable. ISO filed absolute AI exclusions for commercial general liability and products/completed operations policies, effective January 2026. As Resilience's Chief Underwriting Officer Maria Long observed, this creates a familiar pattern: exclusions don't eliminate risk, they migrate it — pushing AI exposures onto cyber and Tech E&O policies that weren't designed to carry them.
Lexology's early 2026 analysis documented the specific exclusions now emerging: denials for AI-generated errors and misrepresentations, hallucinated outputs, flawed chatbot advice, automated decision-making failures, and model-produced content that infringes, defames, or discriminates. The legal rationale is straightforward — AI introduces "black box" outputs that are non-deterministic, errors can propagate across user populations simultaneously, and courts have barely begun to draw the boundaries of AI liability. Insurers, who watched cyber coverage evolve from early uncertainty to massive plaintiff settlements before the market stabilized, are not waiting to repeat the experience.
Insurance Thought Leadership's 2026 outlook is blunt: "by 2026, policies are expected to provide less certainty than policyholders have come to assume." The WTW Insurance Marketplace Realities 2026 report adds that markets grappling with how to address AI exposures "are aggressively underwriting wrongful collection coverage" — and that where they're not satisfied with answers, they will broadly exclude coverage.
On the other side of this market, a different set of carriers and insurtechs is looking at the same coverage gap and seeing product opportunity.
WTW's IMR 2026 report documented the emergence of new insurance products specifically designed for AI risks not captured under traditional policies — coverage targeting model-specific failure events, hallucination liability, regulatory defense costs, and "AI washing" D&O exposure for companies that overstated AI capabilities in public filings. WTW's Insuring the AI Age research goes further, arguing that robust AI coverage can give investors, customers, and boards the confidence to deploy AI technologies responsibly — much like cyber insurance once gave businesses the confidence to engage in e-commerce.
The EU AI Act is a meaningful forcing function. WTW's February 2026 cyber outlook notes that certain provisions take effect in 2026, carrying potential fines of up to €35 million or 7% of global turnover, whichever is higher. Because violations can arise absent any cyber event or data breach, these losses fall outside the scope of traditional cyber coverage entirely — creating explicit, unmet demand for standalone AI regulatory liability products.
Lockton Re went the furthest, arguing that AI needs its own risk class entirely. Its analysis found that CGL insurers are not currently modeling, underwriting, or pricing AI risks — creating a growing gap between what insurers intend to cover and what they actually cover. The recommended path: underwrite each AI model on its individual merits, assessing industry context, foundation model, version deployed, and specific use case, with affirmative coverage tied to clearly defined failure event triggers.
Insurance Business Magazine's February 2026 coverage captured the practical implication for brokers and risk managers: companies offering AI within professional services need to ensure their policies do not contain AI exclusions. At the same time, submission processes, monitoring tools, and risk signals are becoming increasingly automated, meaning how insurers assess risk is changing as fast as the risks themselves.
Here's what we believe, having built at the intersection of insurance and technology: the most valuable position in any market disruption is not on one side of the tension — it's in the resolution of it.
The carriers adding AI exclusions are not wrong. Actuarial discipline is the foundation of a sustainable insurance market. You don't write coverage you can't price. AI risk, at this moment, is genuinely hard to price — sparse historical loss data, no stable legal framework, opaque model behavior, and the systemic exposure of shared foundation model infrastructure that could trigger thousands of policies simultaneously. Lockton Re summarized the stakes clearly: "The challenge for the insurance industry is not whether AI will create systemic risk events, but when, and whether underwriting practices can keep pace."
The carriers building affirmative AI coverage are also not wrong. Businesses are deploying AI now. The liability is real now. The regulatory exposure is live in 2026. And the market that figures out how to underwrite AI governance posture — not just security hygiene, but model inventories, documented human oversight, bias mitigation protocols, and failure mode documentation — will write the next chapter of cyber insurance.
What bridges these two positions is data and structure. As Lexology concluded: companies with strong AI governance will be insurable. Those without it face denials, exclusions, and premium increases. That's not just an insurance story. It's a business strategy imperative.
For businesses, WTW's guidance is direct: pursue the broadest coverage available under cyber policies for the full range of AI losses now, while the market remains competitive. Early 2026 is a ripe window for coverage expansions at competitive premiums — a window that won't stay open as loss trends develop and reinsurers harden their positions.
For insurers and insurtechs, the underwriting edge belongs to those who can credibly assess AI governance posture. The EU AI Act's risk classification, the NIST AI Risk Management Framework, and emerging model audit standards are all inputs into a new generation of underwriting questionnaires the market hasn't fully built yet. WTW's ongoing research with Professor Anat Lior points to the same conclusion: the market is still experimenting, coverage gaps are real, and the players who bring structure to ambiguity will win.
At Walnut, we're building for exactly that future. The companies that treat AI governance as a risk management input — not a compliance checkbox — will unlock favorable coverage terms, lower their total cost of risk, and demonstrate to boards and investors that they are deploying AI responsibly.
The AI coverage paradox is real. But paradoxes don't last forever. They get resolved by the players willing to do the hard work of bringing structure to ambiguity.
That's exactly where we intend to be.