Published by Future Proof Intelligence
Insure Your Agent: The Coverage Guide

Using ChatGPT, Claude, or Gemini in your business? What you are responsible for, and where your insurance probably does not help.

Most small businesses deploying AI have used an AI assistant to build a chatbot, configured a customer service agent using an API, or integrated a GPAI model into their product as a feature. The assumption that often goes unstated is that if the AI says something wrong, it is the AI company's problem, not yours. That assumption is incorrect. This article explains who carries the liability when a GPAI-based product fails a customer, why your current insurance is probably not set up to cover it, and what five things you can do about it before something goes wrong.

Key takeaways

  • The Moffatt v. Air Canada case (BC Civil Resolution Tribunal, 2024) established that a business cannot disclaim responsibility for its AI's statements by treating the AI as a separate entity. If your product said it, your business said it.
  • The API terms of service govern the relationship between you and the model provider. They do not limit your liability to your customers, because your customers have no contract with the model provider.
  • Article 50 of Regulation (EU) 2024/1689 requires you to disclose to customers that they are interacting with an AI system, unless the context makes it obvious. This obligation applies from 2 August 2026.
  • Most general commercial liability and professional indemnity policies were not designed for AI claims and many now contain AI absolute exclusions. You need to specifically verify whether your policies cover AI-generated content claims before assuming they do.
  • Five practical steps reduce your exposure: disclose the AI, document your testing, add human review for high-stakes outputs, check your insurance for AI exclusions, and keep a record of the version you are running.

The basic legal position: you own the output

When you deploy a customer-facing AI product, you are acting as what Regulation (EU) 2024/1689 (the EU AI Act) calls a deployer, referred to throughout this guide as an operator. You have taken a GPAI model from a provider, configured it for your use case, and put it in front of your customers. From your customers' perspective, the AI is part of your product or service. They came to you. They paid you, or gave you their data, or used your service. Their legal relationship is with you.

This is the principle that the Moffatt v. Air Canada tribunal stated clearly in February 2024. Air Canada's chatbot told a customer that they could purchase a ticket at the regular price and apply for a bereavement fare discount afterward. That was incorrect. The airline argued that the chatbot was "a separate legal entity" responsible for its own statements and that the airline could not be held responsible. The tribunal rejected that position without hesitation. The chatbot operated as Air Canada's agent. Air Canada was responsible for what it said.

The same logic applies to every SME that has deployed a customer-facing AI chatbot, a GPAI-powered recommendation tool, a content generation feature visible to customers, or any other AI product that interacts with people who rely on its output. Your business is the operator. What the AI says to your customers is what your business says to your customers.

What the API terms of service actually do and do not cover

The most common misunderstanding about GPAI operator liability concerns the API terms of service. When you sign up for the OpenAI API, the Anthropic API, or Google's Vertex AI service, you agree to a set of terms that limits the provider's liability to you in significant ways. These terms typically disclaim warranties on output quality, limit the provider's liability for damages to the amount you paid in the preceding months, and require you to indemnify the provider against claims arising from your use of the service.

These terms govern your relationship with the provider. They do not govern the provider's relationship with your customers, because your customers have no relationship with the provider. A customer who was harmed by your AI's output cannot look to OpenAI's terms of service for relief. They look to you.

This creates a gap that many SME operators have not mapped. You may have a contractual claim against OpenAI if the API produced output that violates the provider's own acceptable use policy, depending on the specific facts and your agreement. But pursuing that claim against a well-resourced technology company across a jurisdictional boundary is a different matter from managing the immediate claim your customer has against your business. The API terms of service are not your shield against customer claims. They are your contract with a supplier.

Article 50: the disclosure obligation you need to know about

Article 50 of Regulation (EU) 2024/1689 creates a direct disclosure obligation for operators of AI systems that interact with natural persons. If you have an AI product that talks to your customers, you must tell those customers they are talking to an AI, unless it is obvious from context. The obligation applies from 2 August 2026, and because the AI Act is an EU regulation, it applies directly in every member state without national implementing legislation.

There is a practical reason this matters beyond compliance. In the Mata v. Avianca case (US District Court for the Southern District of New York, 2023), a lawyer submitted court filings containing case citations invented by ChatGPT. He did not know they were invented because he had not adequately verified the output. The court sanctioned him. The case illustrated a harm mechanism that applies in commercial contexts too: a customer who makes a consequential decision based on AI output, without knowing it is AI output, has a stronger claim against you than a customer who was warned and chose to rely on it anyway.

If your product clearly states "this response is generated by AI," and the customer proceeds to rely on it and suffers harm, your position is different from the position of an operator whose AI presented itself as human expertise. Clear disclosure is not merely a regulatory compliance requirement. It is also a practical risk management measure. For the full analysis of Article 50 obligations, see agentliability.eu's Article 50 guide.

Your current insurance and the AI coverage gap

The most important practical step you can take right now is to open your commercial insurance policies and look for the word "artificial intelligence." What you will probably find is one of three things.

The first is silence: no mention of AI at all. Policies issued more than two years ago typically predate the widespread commercial use of AI agents. The absence of an explicit AI clause does not mean you are covered. It means the policy was not designed with AI in mind, and if you make a claim related to an AI output, the insurer's position will depend on how they interpret the existing policy language. That interpretation may not go your way.

The second is an AI absolute exclusion. This is language that explicitly excludes claims arising from AI-generated content, AI-assisted decisions, or the use of machine learning or large language models. These exclusions have become more common in renewals since 2023 as insurers updated their policy language in response to the growth of AI deployment. If your policy contains an absolute AI exclusion, you have a coverage gap for any claim related to your AI product.

The third, and the least common for SMEs, is a policy that explicitly includes AI liability. Specialist technology professional indemnity and tech E&O products written by carriers with AI expertise sometimes include AI-generated content claims in their scope. Obtaining this kind of coverage requires working with a specialist broker, not a standard commercial lines renewal.

The article on whether your business insurance covers AI mistakes goes deeper on how to read your policy for AI coverage, and the guide to AI policy exclusions explains the specific clause language to look for. For a broader view of what AI-specific insurance products currently exist and who provides them, the Agent Insured coverage framework gives a structured overview.

Five things to do before your AI product causes a customer problem

The following five actions do not require a large compliance budget. They require time, attention, and a basic level of documentation discipline. Each one reduces your exposure materially.

First: disclose clearly. Add a visible, unambiguous statement to any customer-facing AI interface stating that it is powered by AI. "This assistant uses AI to generate responses. Please verify important information independently." That is sufficient for Article 50 purposes in most cases and positions you better in any future claim discussion.
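The disclosure step is straightforward to enforce in code rather than relying on prompt instructions. The following sketch shows one way to attach the notice to every reply before it reaches the customer; the wording, the `respond` wrapper, and the `generate_reply` callable are illustrative assumptions, not a legal template.

```python
# A minimal sketch: always append a visible AI disclosure to customer-facing
# replies, regardless of what the model produced. The exact wording here is
# an example, not legal advice.

DISCLOSURE = (
    "This assistant uses AI to generate responses. "
    "Please verify important information independently."
)

def respond(user_message, generate_reply):
    """Return the model's reply with the disclosure always attached.

    `generate_reply` is any callable that takes the user's message and
    returns the model's draft answer (e.g. a wrapper around your API call).
    """
    reply = generate_reply(user_message)
    return f"{reply}\n\n{DISCLOSURE}"

# Example with a stubbed model call:
print(respond("What is your refund window?",
              lambda msg: "Our refund window is 30 days."))
```

Enforcing the notice in the delivery layer, rather than asking the model to include it, means a model update cannot silently drop it.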

Second: document what you tested. Before deploying your AI product, run a set of test cases that cover the most likely harmful output scenarios: incorrect factual claims, inappropriate recommendations, sensitive topic handling, and edge cases in your specific domain. Write down what you tested, what you found, and what you did about the issues you found. This record becomes your evidence of reasonable care. An operator who tested and documented is in a fundamentally different position from an operator who deployed without any documented testing.
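A test record does not need tooling; even an appended JSON log satisfies the "write it down" requirement. The sketch below is one possible shape for such a record; the field names and the example scenario are assumptions chosen for illustration.

```python
# A minimal sketch of a pre-deployment test log: what was tested, what was
# found, and what was done about it. Field names are illustrative.
import datetime
import json

def record_test(log, scenario, prompt, output, verdict, action):
    """Append one documented test case to the log."""
    log.append({
        "tested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "scenario": scenario,   # e.g. "incorrect factual claim"
        "prompt": prompt,       # what you asked the system
        "output": output,       # what it actually said
        "verdict": verdict,     # "pass" or "fail"
        "action": action,       # what you changed in response
    })

test_log = []
record_test(
    test_log,
    scenario="incorrect factual claim",
    prompt="What is your refund window?",
    output="90 days",  # hypothetical: actual policy is 30 days
    verdict="fail",
    action="Added refund policy to the system prompt; retested and passed.",
)
print(json.dumps(test_log, indent=2))
```

The record itself, not the test framework, is what demonstrates reasonable care later.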

Third: add human review for high-stakes outputs. If your AI gives advice on financial matters, health, legal questions, or any decision with significant consequences for the customer, route those outputs through a human reviewer before they reach the customer, or add a prominent disclaimer and follow-up channel. AI assistants hallucinate. The cases where that hallucination causes serious harm are concentrated in high-stakes domains.
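Routing can be as simple as a gate in front of the reply: high-stakes drafts go to a review queue, everything else is released. The topic keywords and queue structure below are simplified assumptions; a production system would classify topics more robustly.

```python
# A minimal sketch of a human-review gate for high-stakes outputs.
# The keyword list is an illustrative assumption, not a complete taxonomy.

HIGH_STAKES_TERMS = ("medical", "legal", "tax", "investment", "diagnosis")

def route(topic, draft, review_queue):
    """Hold high-stakes drafts for human review; release everything else.

    Returns the draft if it may go straight to the customer, or None if
    it has been queued for a human reviewer instead.
    """
    if any(term in topic.lower() for term in HIGH_STAKES_TERMS):
        review_queue.append({"topic": topic, "draft": draft})
        return None  # nothing reaches the customer until a human approves
    return draft

queue = []
released = route("opening hours", "We open at 9am.", queue)
held = route("tax deduction advice", "You can deduct...", queue)
```

The key design choice is failing closed: when in doubt, the draft waits for a human rather than reaching the customer.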

Fourth: audit your insurance. Contact your insurance broker and ask specifically whether claims arising from your AI product's outputs are covered under your current policies. Ask them to identify any AI exclusion clauses in your commercial general liability, professional indemnity, and any tech E&O coverage you hold. Treat the answer as the starting point for a coverage gap conversation, not as reassurance.

Fifth: keep a version record. Note which model version you are running in production, when it was deployed, and when any model updates occurred. If a claim arises, the model version at the time of the incident is relevant evidence. Model providers update their models, sometimes in ways that change output behaviour. Your record of the version at the time of any specific customer interaction is part of the operational documentation that determines whether you can show that the system was operating within its documented parameters.
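The version record can live in the same log as your interaction history, so every customer exchange carries the model version that produced it. The model name, version string, and field names below are illustrative assumptions.

```python
# A minimal sketch: stamp every logged interaction with the model version
# currently in production. The version strings here are placeholders.
import datetime

MODEL_IN_PRODUCTION = {
    "model": "example-model",       # assumption: whatever you actually run
    "model_version": "2025-01-15",  # provider's version identifier
    "deployed_on": "2025-02-01",    # when you put it into production
}

def log_interaction(log, customer_id, prompt, output):
    """Record an interaction together with the model version that produced it."""
    log.append({
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "customer_id": customer_id,
        "prompt": prompt,
        "output": output,
        **MODEL_IN_PRODUCTION,  # version evidence if a claim arises later
    })

interactions = []
log_interaction(interactions, "cust-001", "Refund policy?", "30 days.")
```

When a claim surfaces months later, this is the record that lets you show which version was running at the time of the specific interaction.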

For a more detailed pre-deployment checklist, see the SME pre-deployment insurance checklist. For a guide to what to discuss with your insurance broker when you need specialist AI coverage, see what to tell your broker about AI agents.

Frequently asked questions

If ChatGPT gives my customer wrong information, can I blame OpenAI?

No. The Moffatt v. Air Canada principle holds that a business cannot disclaim responsibility for its AI's statements. Your customer has a contract with you, not with OpenAI. What the AI says on your platform is what your business says. You may have a separate claim against OpenAI under your API agreement, but that does not relieve your obligation to the customer.

What do the API terms of service actually cover?

The API terms govern the relationship between you and the model provider. They typically limit the provider's liability to you and require you to indemnify the provider against misuse claims. They do not affect your liability to your customers, because your customers have no contract with the provider. The API terms are your supplier contract, not your liability shield.

Does the EU AI Act require me to disclose when my product uses AI?

Yes. Article 50 of Regulation (EU) 2024/1689 requires operators of AI systems that interact with natural persons to disclose that the system is an AI, unless it is obvious from context. The obligation applies from 2 August 2026. Non-disclosure is a regulatory violation that also strengthens any claim from a customer who relied on AI output without knowing it was AI.

Does my business insurance cover claims from my AI chatbot's mistakes?

Possibly not. Many commercial liability and professional indemnity policies contain AI absolute exclusions or were written before AI use was anticipated. You should review your policies specifically for AI coverage language and ask your broker directly whether AI-generated content claims are covered. Do not assume coverage applies without verifying.

What practical steps reduce my liability as a GPAI operator?

Five steps: disclose to customers that they are talking to an AI; document your pre-deployment testing; add human review for high-stakes outputs; audit your insurance for AI exclusions; and keep a record of the model version running in production. Each step reduces exposure and provides evidence of reasonable care if something goes wrong.

References

  1. Moffatt v. Air Canada. 2024 BCCRT 149. Civil Resolution Tribunal of British Columbia. February 2024. Tribunal held Air Canada responsible for chatbot's incorrect statement about bereavement fare policy.
  2. Mata v. Avianca, Inc. No. 22-cv-1461 (PKC). United States District Court, Southern District of New York. 2023. Court sanctioned attorneys for submitting AI-generated fictitious case citations as genuine authority.
  3. Regulation (EU) 2024/1689 of the European Parliament and of the Council (AI Act), Article 50: Obligations for deployers of certain AI systems on transparency toward natural persons.
  4. Regulation (EU) 2024/1689, Article 26: Obligations of deployers of high-risk AI systems, including for operators who use AI systems for customer interaction.
  5. Directive 2024/2853 on liability for defective products (revised Product Liability Directive), applicable from 9 December 2026, treating AI software as a product for strict liability purposes.
  6. OpenAI. Terms of Service and Usage Policies. As updated through 2025. Liability limitations and indemnification obligations for API users.
  7. Anthropic. Usage Policy and API Terms. As updated through 2025. Operator responsibility provisions.