Does your business insurance cover AI mistakes? Probably not.
Most SMEs running an AI agent in production today are operating with a silent gap. The policies that sound like they should respond were written for a world without autonomous systems, and insurers are moving quickly to clarify that they do not intend to carry this risk.
Key takeaways
- Errors and omissions, cyber, and general liability policies were almost all written before AI agents existed and were never priced for autonomous decision making.
- Insurers have started adding explicit AI exclusions at renewal, turning what was silent coverage into clear non-coverage.
- The Air Canada tribunal decision in 2024 confirmed that businesses are legally responsible for what their chatbots promise, whether the information came from a human agent or an automated one.
- Dedicated AI agent liability policies are being built by specialty carriers with reinsurance support, with the first European bindings expected in the third quarter of 2026.
- The practical step this month is a written response from your broker on every policy, clause by clause, before your next renewal.
The policies you think you have
If you are an SME founder or operations lead and you have deployed an AI agent in the last twelve months, there is a good chance that you believe your existing insurance covers it. You have an errors and omissions policy, a cyber policy, general liability, maybe directors and officers. Between them, surely the AI agent is inside at least one of those wrappers.
It is a reasonable assumption. It is also almost certainly wrong. None of those policies were designed for autonomous systems. Several of them are now being explicitly rewritten to exclude AI losses, and the rest will hinge on how a claims adjuster interprets the wording after the event. The purpose of this article is to walk through each one, explain the mechanics of why the cover is weaker than you think, and give you the exact language to take to your broker before your next renewal.
Errors and omissions: the policy most people think will respond
Errors and omissions insurance, sometimes called professional indemnity, is the policy that most founders expect to respond when an AI agent makes a mistake. The structure sounds right. E and O covers financial loss to a client caused by a negligent act in the course of providing a professional service. An AI agent that gives bad advice, misquotes a price, or promises a discount the business cannot honour feels like exactly that kind of event.
The problem is in the wording. Most E and O policies define the insured event as a negligent act, error, or omission by an insured person in the performance of professional services. The definition of insured person is usually a human: a director, employee, or partner. An autonomous agent is not an insured person, and an error produced with no person in the decision loop is not a negligent act by a person at all.
Even before any AI-specific exclusion, the claim runs into definitional questions. Was there a person in the decision chain? Did they exercise reasonable care? What does reasonable care mean when the output was generated by a model? The Lloyd's Market Association published guidance in 2025 noting that insurers are moving from silent coverage to explicit exclusion precisely because these questions are too expensive to litigate one claim at a time.
The practical effect is that many E and O renewals in 2026 now include one of two things: either an outright exclusion for losses arising from the use of artificial intelligence or algorithmic decision making, or a restrictive endorsement that preserves coverage only if a defined human review process was followed. If your policy renewed in the last six months and you have not reread the endorsements, you may already be uncovered for the risk you are running.
Cyber liability: the policy that was never designed for this
The second policy operators reach for is cyber liability. The logic is that AI is a computer system, cyber policies cover computer systems, therefore cyber should respond. The logic does not survive contact with the policy wording.
Cyber policies are built around three triggers: unauthorised access, data breach, and business interruption caused by a network event. They respond when someone who should not have been in the system got in, when personal data was exposed, or when operations were stopped by a cyber incident. None of these triggers are met by an AI agent doing exactly what it was configured to do and producing an output that turned out to be wrong.
A hallucinated answer is not unauthorised access. A bad recommendation is not a data breach. A customer refund that should not have been issued is not business interruption. The agent is working as designed. It is the design that is the problem, and cyber policies are not written to cover design risk.
Some newer cyber wordings are adding AI-specific extensions. These are typically narrow and focused on prompt injection and model manipulation rather than the broader category of agent errors. If your cyber policy has one of these extensions, read it carefully. The exclusions on the extension are often where the real work is being done.
General liability and product liability
General liability is the classic catch-all policy for bodily injury and property damage. For most software-driven AI deployments it is not the right instrument. The exclusions for professional services, data handling, and contractual liability typically rule out any loss that could plausibly come from an AI agent.
Product liability is the more interesting case in Europe. Directive (EU) 2024/2853, the revised Product Liability Directive, explicitly brings software and AI systems inside strict liability rules. That is an expansion of exposure rather than coverage. It means an operator who places a defective AI system on the market can be held liable without the claimant having to prove negligence, and it is exactly the kind of risk that reinsurers are pricing right now. The directive is driving insurers toward dedicated AI products rather than stretching general liability to cover it.
Directors and officers: the policy people forget
D and O insurance covers directors and officers in their personal capacity for claims alleging mismanagement. If an AI agent causes significant harm to the business, and shareholders, regulators, or creditors argue that the board failed to supervise the deployment, the claim lands on the D and O tower.
D and O wordings in 2026 are starting to include AI-specific duties of care. The question an underwriter asks is whether the board had visibility into the AI footprint, approved the risk, and received reporting on incidents. If the honest answer is no, the personal exposure of the directors is real. This is the policy board members should read themselves rather than leaving to the finance director.
The four policies at a glance
- Errors and omissions. Trigger depends on a negligent act by a human. AI exclusions now appearing at renewal.
- Cyber liability. Trigger depends on unauthorised access or breach. Does not respond to agents working as designed.
- General liability. Designed for physical harm. Excludes professional services and data handling in most wordings.
- Directors and officers. Responds to claims of board mismanagement. New AI-specific duties of care appearing in 2026.
The Air Canada precedent
If you want the single case that should concentrate the mind of every SME operator, it is Moffatt v. Air Canada, decided by the British Columbia Civil Resolution Tribunal in February 2024. A customer asked Air Canada's website chatbot whether he could claim a bereavement fare retroactively. The chatbot invented a policy that said yes. The real policy said no. When Air Canada refused to honour what the chatbot had promised, the customer took the airline to the tribunal and won.
The tribunal ruled that Air Canada was responsible for information on its own website, whether the source was a human agent or an automated one. The defence that the chatbot was a separate entity was rejected outright. The amount at stake was small. The precedent is not. Every business running a customer-facing agent is now on notice that statements made by that agent bind the business. Read the Mata v. Avianca decision from 2023 alongside it for the professional services angle, where a lawyer was sanctioned for filing a brief with citations invented by an AI tool.
What new AI-specific coverage looks like
The good news is that the insurance market is responding. In February 2026, ElevenLabs completed the first AI agent deployment to be underwritten against a formal standard, AIUC-1, developed by the AI Underwriting Company with participation from Munich Re. The policy covers specific loss categories tied to agent behaviour rather than trying to stretch a generic wrapper to fit.
European specialty carriers are expected to bind their first AI agent liability policies in the third quarter of 2026. The underwriting process is intentionally selective. Carriers are prioritising operators with a completed certification, a documented incident response plan, and a board attestation that the risk has been reviewed. The Future Proof Certified methodology at agentcertified.eu was built specifically to feed this underwriting process, and the waitlist for the first wave is at agentinsured.eu.
What to do this month
The single most useful step an SME operator can take in the next thirty days is to send a written request to your broker asking for a clause-by-clause position on AI agent risk for every policy. Do not accept a verbal answer. The request should list the four policies, ask specifically how each one responds to a loss caused by an AI agent, and ask for any AI-related clauses by number and page. Request confirmation that the broker has reviewed the most recent renewal endorsements.
Once the written response is in hand, you will know exactly where your silent gaps are. You can then make an informed decision about whether to accept the gap, self-insure the exposure, or start the pathway toward dedicated AI coverage. None of those options is ideal, but all of them are better than finding out at claim time.
The three questions on The Questions page are the diagnostic that pairs with this review. The coverage pathway page walks through the certification and waitlist process in detail. If you want the regulatory context behind the shift, the Why It Matters page covers the EU AI Act and the revised Product Liability Directive.
Frequently asked questions
Does my standard business insurance cover AI agent mistakes?
In most cases it does not. Errors and omissions, cyber, and general liability policies were written before autonomous systems existed, and insurers are now adding explicit AI exclusions at renewal. Even where no exclusion exists, proving a loss is covered is much harder when the decision was made by a model with no human in the loop.
Does cyber liability insurance cover AI hallucinations?
Usually not. Cyber policies respond to unauthorised access and data breach events. A hallucinated answer is not a breach. A flawed recommendation from a model working as intended is outside the cyber trigger in almost every standard wording.
What exact question should I ask my broker about AI coverage?
Ask in writing: "Please confirm how each of our current policies (E and O, cyber, general liability, and D and O) responds to a claim arising from an action taken by an AI agent deployed in our business, including any clause added at the most recent renewal." Request specific clause references.
When will AI-specific business insurance be available in Europe?
The first binding AI agent liability policies in Europe are expected in the third quarter of 2026, written by specialty carriers with reinsurance support from firms including Munich Re. Operators with certification already in hand are being prioritised.