Key takeaways
- Your broker is probably not asking about AI yet. Raise it yourself and get the conversation documented in writing.
- Four things to disclose: what your AI agents do, what they decide autonomously, whether they face customers, and what documentation you have in place.
- Three questions to ask: does your PI policy cover AI agent errors, does your cyber policy cover non-breach AI mistakes, and have any AI exclusions been added at recent renewals.
- The Air Canada ruling (2024) established that your AI agent's errors are your liability. There is no "the chatbot said it, not us" defence.
- Deploying first and disclosing later creates a non-disclosure risk. A claim that arises while your AI deployment is undisclosed faces a complicated claims process even if the non-disclosure does not void coverage entirely.
Why your broker has probably not raised this yet
Insurance is a conservative industry. Brokers work from products and wordings that typically lag the real world by one to three years, sometimes longer. The AI agent conversation is genuinely new territory for most commercial lines brokers. They are not being negligent. They are working from the tools they have.
The problem is that "we will update policies when we understand the risk better" is a broker timeline, not your business timeline. You deployed your AI customer support agent six months ago. Your broker renewed your professional indemnity policy three months ago. No one asked, no one told, and the renewal form did not have a checkbox for it.
That gap is the risk. Not because your broker is acting in bad faith, but because an undisclosed material fact about how your business operates can complicate any future claim in that area. You do not need to wait for your broker to add the question to their renewal process. You can raise it yourself, today.
The four things to tell your broker
When you contact your broker, do not say "we use some AI tools." That is too vague to be useful and it does not create a proper record. Be specific on four points.
1. What AI agents you are deploying
Name the systems and describe what they do. "We run an AI customer support agent built on OpenAI's API that handles initial queries, processes refund requests up to GBP 50, and books appointments directly into our calendar." That is a disclosure. "We use AI" is not.
If you use multiple agents, list each one. If you use a third-party product that runs AI underneath (a customer service platform, an email response tool, a scheduling assistant), include those too. The fact that you did not build the underlying model yourself does not change your exposure as the operator deploying it.
2. What those agents decide autonomously versus what requires human approval
This matters because the liability picture changes depending on whether a human was in the loop. An AI that drafts an email for a human to review and send is different from an AI that sends the email directly. An AI that flags a customer issue for a human to resolve is different from one that resolves it and closes the ticket.
Be clear about which actions in your system are fully autonomous (no human sees them before they happen) and which require approval. Your broker needs this to understand the scope of potential exposure.
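To make the split concrete, here is a minimal sketch of an explicit autonomy boundary, reusing the refund example from the disclosure above. Every name in it is hypothetical rather than any real product's API; the point is that "fully autonomous" versus "requires approval" should be a visible line in your system, because that line is exactly what you are disclosing.

```python
# A minimal sketch of an explicit autonomy boundary for a support agent.
# Everything here is hypothetical: the GBP 50 threshold, the tool names,
# and the review queue are illustrative, not a real product's API.

from dataclasses import dataclass

AUTO_REFUND_LIMIT_GBP = 50.0  # above this, a human must approve


@dataclass
class RefundRequest:
    customer_id: str
    amount_gbp: float
    reason: str


def issue_refund(request: RefundRequest) -> None:
    """Stub for the payment call the agent can trigger autonomously."""
    print(f"Refund of GBP {request.amount_gbp:.2f} issued to {request.customer_id}")


def queue_for_human_review(request: RefundRequest) -> None:
    """Stub for the escalation path: a human approves before money moves."""
    print(f"Refund of GBP {request.amount_gbp:.2f} queued for human approval")


def route_refund(request: RefundRequest) -> str:
    """The autonomy boundary: below the limit is fully autonomous,
    above it requires human sign-off. This split is what the broker
    needs you to describe."""
    if request.amount_gbp <= AUTO_REFUND_LIMIT_GBP:
        issue_refund(request)  # no human sees this before it happens
        return "auto-approved"
    queue_for_human_review(request)
    return "escalated"


if __name__ == "__main__":
    print(route_refund(RefundRequest("cust-001", 35.00, "damaged item")))
    print(route_refund(RefundRequest("cust-002", 180.00, "cancelled order")))
```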
3. Whether those agents interact with customers or third parties
A purely internal AI tool that helps your team draft documents carries a different risk profile than an AI that communicates directly with your customers, suppliers, or partners. If your AI agent sends emails, has conversations, gives advice, or makes commitments in your name to people outside your organisation, say so explicitly.
Customer-facing AI is the higher-risk category for most SMEs. It is also the category where the Air Canada case applies most directly (more on that below).
4. What your documentation looks like
Tell your broker what records you have. Do you have a written risk assessment for your AI deployment? Do you have the vendor's instructions for use or acceptable use policy on file? Have you done any internal compliance work around the EU AI Act or similar frameworks?
You do not need a perfect compliance programme to have this conversation. You just need to be honest about what exists. If the answer is "we deployed it six months ago and we have nothing written down," that is fine to say. It is far better than not raising it at all.
The three questions to ask your broker
Once you have made your disclosure, ask three specific questions and request written responses to each. An undocumented answer is not a coverage position you can rely on.
1. Does our professional indemnity policy respond if an AI agent we operate gives wrong information to a client?
Professional indemnity (PI) insurance typically covers claims arising from errors, omissions, or negligent advice in the delivery of professional services. Whether an AI agent's output falls within that scope depends on the policy wording. Some policies now include explicit AI exclusions. Others are silent on the issue, which creates genuine uncertainty at claim time.
Ask your broker to check the wording and give you a written view. If they cannot give a view, ask them to seek a coverage opinion from the insurer directly. "We think you're probably covered" is not good enough.
2. Does our cyber policy cover AI agent errors that do not involve a data breach?
Cyber policies are designed primarily around data events: unauthorised access, ransomware, data theft. An AI agent that gives a customer wrong information and causes financial loss is a different kind of event. No data was stolen. No system was breached. The question is whether your cyber policy extends to AI operational errors or whether such losses fall into a gap between cyber and PI.
Many SMEs find they have exactly this gap. The cyber policy does not cover it because there was no breach. The PI policy does not cover it because it was an automated system rather than a professional providing advice. Identifying the gap now is better than discovering it during a claim.
3. Have any AI exclusions been added to our policies at the last renewal?
Ask your broker to check explicitly. The Lloyd's Market Association introduced AI model exclusion language (LMA5566) in 2023. Some insurers have started incorporating AI-related exclusions or carve-outs into standard policy wordings, particularly in tech PI and cyber lines. These changes do not always come with a letter explaining what changed. They appear in the renewal schedule or policy endorsements, and brokers do not always flag them proactively.
Ask your broker to pull the current wording for each relevant policy and compare it to the previous year. Look for any language referencing AI, machine learning, automated systems, or algorithmic outputs. If exclusions exist, you need to know about them before a claim, not during one.
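If the wordings can be exported as plain text, even a crude keyword scan narrows down what to read with your broker. Here is a rough sketch; the term list and the file name are illustrative assumptions, and a hit means "review this clause with your broker", not "you are excluded".

```python
# A rough sketch of the wording check, assuming each policy wording can
# be exported as plain text. The keyword list is illustrative; a hit
# flags a clause for review, it does not determine coverage.

import re

AI_TERMS = [
    "artificial intelligence",
    "machine learning",
    "large language model",
    "automated system",
    "algorithmic",
    "chatbot",
]


def flag_ai_language(wording_text: str) -> list[str]:
    """Return the lines of a policy wording that mention AI-related terms."""
    pattern = re.compile("|".join(re.escape(t) for t in AI_TERMS), re.IGNORECASE)
    return [
        line.strip()
        for line in wording_text.splitlines()
        if pattern.search(line)
    ]


if __name__ == "__main__":
    # "pi_policy_2025.txt" is a hypothetical export of your PI wording.
    with open("pi_policy_2025.txt", encoding="utf-8") as f:
        for hit in flag_ai_language(f.read()):
            print(hit)
```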
What to expect from the conversation
Most brokers will not have a ready answer. That is genuinely fine. The insurance market is still working through how to price and respond to AI risk, and a broker who says "I need to check with the underwriter on this" is being honest with you.
What matters is not that your broker gives you a perfect answer in the first conversation. What matters is that you have raised it, the question is documented, and your broker is actively seeking a written coverage opinion from the insurer. Until you have that written opinion, you do not know where you stand.
If your broker dismisses the question entirely ("you're fine, AI is just a tool"), push back. Ask them to document their view in writing. A dismissal on record is better than nothing. And if they are wrong, you have evidence that you acted in good faith and relied on professional advice.
If you are approaching a renewal, this conversation should happen before the renewal is confirmed, not after. Once the policy is bound, changes typically require a mid-term endorsement, which takes time and sometimes costs money.
The Air Canada lesson: your agent is your agent
In 2024, the Civil Resolution Tribunal of British Columbia decided Moffatt v. Air Canada (2024 BCCRT 149). The case involved a customer who asked Air Canada's chatbot about bereavement fares. The chatbot gave him incorrect information about how to apply for a refund. He booked flights relying on that information, then discovered the policy worked differently and lost the refund.
Air Canada's legal team argued that the chatbot was a "separate legal entity" responsible for its own statements, and that Air Canada should not be held liable for what the chatbot told customers. The tribunal rejected that argument completely. It held that Air Canada was responsible for all information on its website, including information provided by its automated systems. The airline was ordered to pay the customer's claim.
The practical lesson for any SME is this: if your AI agent communicates with your customers and gives them wrong information that they rely on to their financial detriment, that is your liability. There is no legal distinction between "my employee said it" and "my AI agent said it." The agent is yours. Its outputs are yours.
For more detail on this case and what it means for your exposure, see our full breakdown at The Air Canada Chatbot Case: SME Operator Lessons. For the broader liability question of who bears responsibility when an AI agent makes a mistake, see Who Is Liable When an AI Agent Makes a Mistake.
What happens if you deploy the AI first and review coverage later
Policies respond to claims, not deployments. The problem with leaving the coverage review until after deployment is not that you automatically lose your insurance. It is that if a claim arises from your AI agent during a period when the deployment was not disclosed to your broker, you have a potential non-disclosure problem.
In insurance, non-disclosure of a material fact (something that would affect an insurer's decision to provide coverage or to price it differently) can give the insurer grounds to dispute a claim, reduce a settlement, or in serious cases void the policy. Whether an AI deployment counts as a material fact under your specific policy is a question for your broker and your insurer. But you do not want to be arguing that question during a claim.
Proactive disclosure costs you nothing except the time it takes to have the conversation. It creates a record that you acted in good faith. It gives your broker and insurer the opportunity to adjust your coverage if they think the risk profile warrants it. And it means that if a claim does arise, you are not simultaneously arguing about whether you should have disclosed the deployment while also trying to get the claim paid.
For a more detailed look at how AI exclusions work across common policy types, see AI Policy Exclusions: A Guide for SME Operators.
The EU AI Act angle for European SMEs
If you operate in the European Union, there is a regulatory layer on top of the insurance question. Regulation (EU) 2024/1689, the EU AI Act, applies to operators deploying AI systems, not just to the companies that build them. The Act's Article 5 prohibitions (on the most harmful AI applications) have been in force since February 2025. Article 50 transparency requirements, which require operators of certain AI systems to inform users they are interacting with AI, were originally scheduled to apply from August 2026 but may be pushed back under the Omnibus package currently in trilogue.
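For a customer-facing agent, the core Article 50 duty, telling people up front that they are talking to AI, is also cheap to implement. A minimal sketch follows; the notice wording and session handling are hypothetical and no substitute for legal advice.

```python
# A minimal sketch of an up-front "you are talking to AI" notice.
# The wording and the session handling are hypothetical, not a
# compliance template; the point is that disclosure happens once,
# at the start of the conversation, not buried later.

AI_NOTICE = (
    "You are chatting with an automated AI assistant. "
    "You can ask for a human agent at any time."
)


def greeting_for(session: dict) -> str | None:
    """Return the AI disclosure on first contact, nothing afterwards."""
    if not session.get("ai_notice_shown"):
        session["ai_notice_shown"] = True
        return AI_NOTICE
    return None


if __name__ == "__main__":
    session: dict = {}
    print(greeting_for(session))  # first message: the notice
    print(greeting_for(session))  # later messages: None, already shown
```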
Why does this matter for insurance? Because insurers are beginning to look at regulatory compliance as a factor in how they assess risk. A business that has documented its AI deployment against the Act's requirements is, on paper, a more predictable risk than one that has not. Whether your insurer actually prices that difference today varies by insurer and product. But getting ahead of the compliance documentation now creates an asset for future coverage conversations, regardless of how the regulatory timeline moves.
You do not need to run a full compliance programme today. You need to know what obligations apply to your specific deployment and have a record of having thought about them. That record helps you with insurers in the same way it helps you with regulators.
For a plain-English summary of EU AI Act obligations for SMEs, see the EU AI Act Operator Obligations guide on agentliability.eu.
What good documentation looks like
You do not need a thirty-page compliance report to have a productive conversation with your broker. A one-page AI deployment register is usually enough to start, and it takes about two hours to put together.
The register should list each AI tool or agent you use, including third-party products that run AI underneath. For each entry, record: what the tool does, who it interacts with (internal staff only, or customers and third parties), what it can do autonomously without human review, what oversight is in place, and what documentation you hold from the vendor (terms of service, instructions for use, acceptable use policy).
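As a sketch, here is one register entry expressed as structured data. The fields follow the list above; the layout is an assumption, not a schema any insurer or regulator prescribes, and the example agent mirrors the disclosure wording from earlier in this piece.

```python
# One AI deployment register entry as structured data. The fields
# follow the list above; this is one reasonable layout, not a
# prescribed schema.

from dataclasses import dataclass


@dataclass
class RegisterEntry:
    tool: str                      # name of the AI tool or agent
    vendor: str                    # who builds or hosts the underlying model
    purpose: str                   # what the tool does
    audience: str                  # "internal only" or "customers/third parties"
    autonomous_actions: list[str]  # what happens with no human review
    oversight: str                 # what human checks are in place
    vendor_docs: list[str]         # terms, instructions for use, AUP on file


support_agent = RegisterEntry(
    tool="Customer support agent",
    vendor="Built in-house on OpenAI's API",
    purpose="Initial queries, refunds, appointment booking",
    audience="customers/third parties",
    autonomous_actions=["refunds up to GBP 50", "appointment booking"],
    oversight="refunds above GBP 50 escalate to a human",
    vendor_docs=["vendor terms of service", "internal acceptable use policy"],
)

if __name__ == "__main__":
    print(support_agent)
```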
That document does three things at once. It helps your broker understand your exposure. It demonstrates to any regulator who asks that you have thought about your AI deployment. And it is the foundation of the risk assessment that the EU AI Act increasingly expects operators to be able to produce.
Two hours of work now. Very different conversation if something goes wrong later.
If you want guidance on getting the right coverage in place once you have had the broker conversation, see our coverage guide for SME AI operators. For a broader look at the coverage landscape across European markets, see agentinsured.eu.
Frequently asked questions
Should I tell my insurance broker I am using AI agents?
Yes, and you should raise it proactively rather than waiting for your broker to ask. Most brokers are not yet asking about AI deployments systematically. If you have a claim and your broker then discovers an undisclosed AI deployment, you face a potential non-disclosure problem that complicates the claims process even if it does not void your policy outright. Raising it yourself, in writing, creates a paper trail that protects you.
What information does my broker need about my AI deployment?
Four things matter most: what AI agents you are running and what they do specifically, which actions those agents take without human approval versus which ones require sign-off, whether those agents interact directly with customers or third parties, and what documentation you have in place including risk assessments, vendor instructions for use, and any compliance work. A one-page AI deployment register covering these points is usually sufficient to start the conversation.
Can my existing business insurance cover an AI agent mistake?
It depends on your policy wording and whether any AI exclusions have been added at renewal. Your professional indemnity policy may respond if an AI agent gives wrong advice to a client, but only if AI use is not excluded and the claim falls within the defined scope. Your cyber policy typically covers data breaches but may not extend to AI errors that do not involve a data breach. The only way to know is to ask your broker directly and request a written coverage opinion.
What happened in the Air Canada chatbot case?
In Moffatt v. Air Canada, 2024 BCCRT 149, the Civil Resolution Tribunal of British Columbia found Air Canada liable for incorrect information its chatbot gave to a customer about a bereavement fare discount. Air Canada argued the chatbot was a separate legal entity whose statements were not binding on the airline. The tribunal rejected that argument entirely. The ruling established that an operator is responsible for the statements made by its AI agent, in the same way it is responsible for the statements of its human employees.
References
- Moffatt v. Air Canada, 2024 BCCRT 149, Civil Resolution Tribunal of British Columbia (21 February 2024).
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act), Article 5 (prohibitions, in force February 2025) and Article 50 (transparency obligations for certain AI systems).
- Lloyd's Market Association, LMA5566: Artificial Intelligence Exclusion (2023). Exclusion clause addressing losses arising from AI systems including machine learning, natural language processing, and large language model outputs.