Insure Your Agent · Operator Edition · No. 001

What AI agents can actually do on your behalf

An AI agent is not a chatbot with good manners. It is a system that can make decisions, call tools, move money, update records, and communicate with customers and third parties without a human in the loop at every step. If your agent can do any of the following, you have operational exposure:

  • Issue refunds, credits, discounts, or goodwill gestures
  • Send email, WhatsApp, or SMS from your domain or number
  • Commit to delivery dates, pricing, or terms with customers
  • Execute transactions, bookings, or purchases
  • Provide advice that a customer will rely on to make a decision
  • Handle personal data covered by GDPR or equivalent regimes
  • Interface with insurance, medical, legal, or financial information

Each one of these is an action your business is legally responsible for, the same way it would be responsible for the actions of a junior employee. The difference is that the junior employee has judgement and a manager. Your agent has neither, and it operates at thousands of times the speed.

Plain version

The legal rule is old, the technology is new.

In most jurisdictions, a business is responsible for statements and promises made by agents acting on its behalf. Courts have already applied this rule to AI agents without controversy. The hard part is not whether you are liable. The hard part is what your insurance says about it.

Three real cases operators should know

Moffatt v. Air Canada (2024)

A customer asked Air Canada's website chatbot whether he could claim a bereavement fare retroactively. The chatbot invented a policy that said yes. The real policy said no. When Air Canada refused to honour what the chatbot had promised, the customer took the airline to the British Columbia Civil Resolution Tribunal. The tribunal ruled that Air Canada was responsible for information provided on its own website regardless of whether the source was a human agent or an automated one, and ordered the airline to pay. The decision is short, readable, and devastating for anyone who thought "the chatbot said it" was a defence.

Mata v. Avianca (2023)

A lawyer in New York filed a legal brief citing six prior cases that did not exist. The cases had been generated by an AI tool, which the lawyer had used without verifying the output. The court sanctioned both the lawyer and the firm. The broader lesson is that professional responsibility does not bend around the tool. If your agent produces hallucinated content and a human in your business forwards it to a client, the liability sits with the human and the business behind them.

Autonomous transaction errors

Not every case is reported publicly, but practitioners are seeing a pattern of AI-driven transaction errors: agents that approve refunds outside their authority, agents that send pricing quotes with arithmetic mistakes, agents that misclassify customer accounts. Each one is a small financial loss on its own. Across a customer base, they compound. Across a regulated industry, any one of them can trigger a supervisory investigation.
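The practical mitigation for this pattern is a hard authority limit enforced outside the model: the agent proposes an action, and deterministic code, not the model, decides whether to execute or escalate to a human. A minimal sketch, where all names, limits, and thresholds are hypothetical illustrations rather than any real product's API:

```python
# Authority-limit guardrail sketch: the agent proposes refunds,
# plain code enforces the limits. Names and thresholds below are
# hypothetical, chosen only to illustrate the pattern.

from dataclasses import dataclass


@dataclass
class ProposedRefund:
    customer_id: str
    amount: float   # in your base currency
    reason: str


REFUND_LIMIT = 50.00   # agent may act alone below this per-refund cap
DAILY_CAP = 500.00     # total autonomous refunds allowed per day


def review_refund(refund: ProposedRefund, issued_today: float) -> str:
    """Return 'execute' or 'escalate'.

    The decision is made by deterministic code, never by the model
    that generated the proposal, so the authority limit cannot be
    talked around.
    """
    if refund.amount <= 0:
        return "escalate"   # malformed or suspicious proposal
    if refund.amount > REFUND_LIMIT:
        return "escalate"   # exceeds per-refund authority
    if issued_today + refund.amount > DAILY_CAP:
        return "escalate"   # would exceed the daily aggregate cap
    return "execute"


# A small refund within limits executes; a large one goes to a human.
print(review_refund(ProposedRefund("c1", 30.0, "late delivery"), 100.0))
print(review_refund(ProposedRefund("c1", 200.0, "goodwill"), 100.0))
```

The point of the aggregate cap is the compounding risk described above: many small errors within the per-refund limit can still add up, so the guardrail bounds the total as well as each individual action.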

What your current policies probably exclude

This is the part most operators have not checked. You have four policies that sound relevant. None of them were written with AI agents in mind, and the market is now moving fast to clarify what they do and do not cover.

Errors and Omissions / Professional Indemnity

E&O policies respond when a professional service causes financial loss to a client. The classic wording requires a negligent act by a person. Insurers are now adding exclusions for losses arising out of the use of artificial intelligence, algorithmic decision-making, or autonomous systems. Even without a specific exclusion, the claim will hinge on whether a human was in the loop and whether reasonable care was taken. If the agent is the decision-maker, both tests get harder.

Cyber liability

Cyber policies respond to unauthorised access, data breaches, and related events. They typically do not respond to losses caused by a system working exactly as intended, even if what it was intended to do turned out to be wrong. A hallucinated response is not a breach. A bad decision is not an exploit.

General liability

General liability covers bodily injury and property damage. For most software-driven AI deployments this is not the right instrument, and the policy exclusions for professional services and data handling usually apply.

Directors and Officers

This is the one people forget. D&O policies respond to claims against directors and officers in their personal capacity for mismanagement. If your agent causes significant harm and shareholders or regulators argue that the board failed to supervise the deployment, the claim lands here. D&O wordings are starting to include AI-specific duties of care. You want to know what yours says.

"Insurers are moving from silent coverage to explicit exclusions. The window where 'we never excluded it' was a defence is closing fast." (Lloyd's Market Association commentary, 2025)

The regulatory shift you cannot ignore

EU AI Act

The EU AI Act entered into force on August 1, 2024, and applies in phases. The bans on prohibited uses took effect on February 2, 2025; general-purpose model requirements and governance structures followed on August 2, 2025. The bulk of the high-risk system obligations, which apply to AI used in credit scoring, employment, critical infrastructure, and several other categories, apply from August 2, 2026, with a further tranche in 2027 for AI embedded in regulated products. If your agent touches any of these areas, you are inside the scope and the clock has started.

Revised Product Liability Directive

Directive 2024/2853 replaces the forty-year-old product liability regime and explicitly brings software and AI systems inside strict liability. In plain language, a defective AI system is treated as a defective product, and the operator who placed it on the market can be held liable without the claimant having to prove negligence. This is a serious expansion of exposure for anyone shipping an agent to end users.

United States

There is no federal equivalent yet, but Colorado passed the first comprehensive state AI law in 2024, California is close behind, and the FTC has started enforcement actions against companies that misrepresent what their AI systems can do. The US picture is fragmented but moving in the same direction as the EU.

What this adds up to

You are already inside a new regulatory regime.

The laws have changed. The insurance market is catching up. The cases are being decided. None of this requires you to become a lawyer. It does require you to know what your agent does, read your policies carefully, and have a plan for what you would do if something went wrong tomorrow. That is exactly what the next page walks you through.

Continue to the three questions to find out where your business stands.

References
  1. Moffatt v. Air Canada, 2024 BCCRT 149. British Columbia Civil Resolution Tribunal, February 14, 2024.
  2. Mata v. Avianca, Inc., 22-cv-1461 (S.D.N.Y. June 22, 2023). Opinion and order on sanctions.
  3. Regulation (EU) 2024/1689 of the European Parliament and of the Council (EU AI Act). Entry into force August 1, 2024.
  4. Directive (EU) 2024/2853 on liability for defective products. Published Official Journal November 18, 2024.
  5. Colorado SB24-205, Concerning Consumer Protections in Interactions with Artificial Intelligence Systems. Signed May 17, 2024.
  6. Lloyd's Market Association, Guidance on Artificial Intelligence in Underwriting and Claims, 2025 edition.
  7. AIUC-1: Certification Standard for AI Agents, version 1.0, published by the AI Underwriting Company, 2025.
  8. Federal Trade Commission, Operation AI Comply enforcement sweep announcement, September 2024.