Insure Your Agent: Operator Edition

How to review an AI vendor contract before you sign. Five clauses that will define your liability.

Most AI vendor contracts are written by the vendor's lawyers to protect the vendor. Deployers, especially smaller businesses, often sign without reading the provisions that cap the vendor's exposure and shift the risk to them. Here are the five clauses that matter most and what to do about each.

Key takeaways

  • AI vendor contracts almost universally cap the vendor's liability to a fraction of fees paid, leaving the deployer exposed to the full cost of any harm their AI agent causes to a third party.
  • Indemnity clauses in AI vendor contracts typically cover IP infringement and security breaches, not incorrect AI outputs. If your agent harms a customer, the vendor's indemnity is unlikely to respond.
  • Acceptable use clauses may prohibit the very use cases you are building. Violating the acceptable use clause voids your warranty claims against the vendor and may void an insurance policy.
  • EU AI Act technical documentation (Annex IV, under Regulation (EU) 2024/1689) is a right you should assert contractually before signing, not request informally after deployment.
  • Data processing agreements linked to the AI vendor contract define who is the data controller and who processes data on their behalf. Getting this wrong creates GDPR exposure that runs alongside your AI liability exposure.

Why the contract matters more than the demo

When an SME evaluates an AI vendor, most of the attention goes to the product demo: does it answer questions accurately, does it integrate with existing systems, does the pricing work. The contract comes at the end of the evaluation process, often under time pressure, and often gets signed without the clause-by-clause review that it warrants.

This is the wrong order. The contract is the document that defines what happens when the AI agent fails: who pays, what for, and to what limit. The demo shows the best case. The contract governs the worst case. For an SME deploying an AI agent to handle customer interactions, the worst case is a customer who acts on something the agent said that was wrong, suffers a loss, and brings a claim. The Moffatt v. Air Canada case (British Columbia Civil Resolution Tribunal, February 2024) established that you as the operator are responsible for your agent's statements regardless of the vendor relationship. The contract you signed with the vendor tells you how much of that exposure the vendor will share.

For most AI SaaS contracts, the answer to that question is: very little.

Clause 1: The limitation of liability

The limitation of liability clause is the most consequential provision in any AI vendor contract. It caps the total amount the vendor will pay on any claim arising from the contract, regardless of the size of your actual loss.

The standard formulation in most AI SaaS vendor contracts caps liability at the fees paid in the 12 months preceding the claim. For an SME paying EUR 200 per month, this is a EUR 2,400 cap. A customer claim arising from incorrect AI advice, a data breach triggered by a vendor security failure, or a regulatory penalty resulting from an AI system error could run to tens of thousands of euros or more. The EUR 2,400 cap is not a meaningful protection.

There are two things you can do. First, negotiate the cap. Most SaaS vendors will not increase the cap to match potential third-party losses because they price their product on the assumption that the cap will hold. But larger vendors will sometimes agree to an increased cap for enterprise-level contracts, particularly where the use case involves customer-facing AI. Second, use the cap as an input to your own insurance assessment. The shortfall between the vendor's cap and your likely maximum exposure is the gap your own liability coverage must fill. If you know the vendor will pay no more than EUR 2,400, and your worst-case AI incident could cost EUR 50,000, you need coverage for the difference.
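The cap arithmetic described above can be sketched as a short calculation. The figures are the illustrative EUR values from the text, and the function name is invented for this sketch, not part of any standard:

```python
def coverage_gap(monthly_fee_eur: float, worst_case_loss_eur: float) -> float:
    """Shortfall between a standard 12-months-of-fees liability cap
    and the deployer's estimated worst-case loss."""
    vendor_cap = monthly_fee_eur * 12  # typical cap: fees paid in prior 12 months
    return max(worst_case_loss_eur - vendor_cap, 0.0)

# Illustrative figures from the text: EUR 200/month fee, EUR 50,000 worst case.
print(f"Vendor cap: EUR {200 * 12:,}")               # EUR 2,400
print(f"Uninsured gap: EUR {coverage_gap(200, 50_000):,.0f}")  # EUR 47,600
```

The output of the second line is the amount your own liability coverage must fill if the vendor's cap holds.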

Clause 2: The indemnity clause

Indemnity clauses in AI vendor contracts look protective. They say the vendor will defend you against third-party claims in specified circumstances. Reading them carefully usually reveals that the specified circumstances are narrow: typically intellectual property infringement (the vendor's AI model used training data it should not have) and sometimes specific security breach scenarios. What they do not cover, in the standard formulation, is third-party claims arising from incorrect AI outputs.

This matters because the most likely claim you will face as an AI agent operator is not an IP claim. It is a claim from a customer who relied on something your agent said and suffered a loss as a result. Under the Air Canada principle, that claim comes to you. The vendor's indemnity clause does not cover it. You are on your own.

If the AI system will handle consequential decisions for your customers, whether that is giving advice, processing orders, explaining policies, or supporting purchases, you should ask the vendor whether they will extend the indemnity to cover claims arising from incorrect AI outputs within the defined permitted use case. Most will decline. Their refusal tells you something useful: they do not stand behind the accuracy of their system's outputs in the way that matters when things go wrong. Factor that into your own risk assessment and insurance planning.

Clause 3: The acceptable use clause

Acceptable use clauses define what you may and may not use the AI system for. They typically include a list of prohibited use cases, often including medical diagnosis, legal advice, financial decisions, and sometimes any use that could affect fundamental rights or physical safety.

Two problems arise regularly for SME operators. The first is that the prohibited list describes the operator's actual use case. An accountancy firm using an AI assistant that provides tax guidance may find that "financial advice" is a prohibited use. A legal practice using document review AI may find that "legal analysis" is prohibited. If you are using the system for a prohibited purpose, you are in breach of the contract. Your warranty claims are void. Your indemnity is void. And if you have an insurance policy linked to the vendor relationship, the policy may be void on the grounds that you were using the system outside its permitted scope.

The second problem is the gap between what is explicitly permitted and what is merely not prohibited. Some acceptable use clauses list permitted uses exclusively; anything not listed is prohibited by implication. Others list only prohibited uses; anything not listed is permitted. The structure of the clause determines the risk. Read both lists carefully and confirm, in writing if necessary, that your intended use case falls within the permitted scope.
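The structural difference between the two clause types can be made concrete with a small sketch. The clause lists and the use-case strings below are invented for illustration, not taken from any real contract:

```python
def permitted_under_whitelist(use_case: str, permitted: set) -> bool:
    # Whitelist-style clause: anything not expressly permitted is prohibited.
    return use_case in permitted

def permitted_under_blacklist(use_case: str, prohibited: set) -> bool:
    # Blacklist-style clause: anything not expressly prohibited is permitted.
    return use_case not in prohibited

# Hypothetical clause lists for illustration only.
permitted = {"customer support", "order status lookup"}
prohibited = {"medical diagnosis", "legal advice", "financial advice"}

# The same use case gets opposite answers under the two structures.
print(permitted_under_whitelist("tax guidance", permitted))   # False
print(permitted_under_blacklist("tax guidance", prohibited))  # True
```

A use case that survives a blacklist clause can fail a whitelist clause, which is why identifying the clause structure and confirming scope in writing matters.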

Clause 4: The data processing agreement

Almost every AI vendor processes personal data in the course of providing their service. Under the GDPR (Regulation (EU) 2016/679), when a vendor processes personal data on your behalf, they are a data processor and you are the data controller. The relationship must be governed by a data processing agreement that meets the requirements of Article 28 of the GDPR.

Many AI vendors supply a standard data processing agreement (sometimes called a DPA or data processing addendum) as part of their contract pack. These are often adequate for basic data processing but may not address the specific ways AI systems handle personal data. AI models can retain input data for training purposes (creating a data retention and purpose limitation issue), can output personal data about individuals from their training data (creating an unintended disclosure issue), and can process data in ways that amount to automated decision-making under Article 22 of the GDPR.

Before signing, confirm three things in the data processing agreement. First, does the vendor use your customers' input data to train or improve their model? If yes, this is a use of personal data beyond the purposes for which your customers consented, and it is your problem, not the vendor's. Second, does the vendor's agreement specify where personal data is stored and processed? EU AI Act and GDPR compliance depends on knowing where data goes. Third, does the agreement define who bears responsibility for a data subject's right of access, erasure, or complaint under GDPR? The GDPR makes you the controller. You need to ensure the DPA gives you the operational access to respond to subject access requests without depending on the vendor's cooperation at a time of their choosing.

Clause 5: The technical documentation right

This clause does not exist in most AI vendor contracts, and that is the problem. Under the EU AI Act (Regulation (EU) 2024/1689), providers of high-risk AI systems are required to draw up technical documentation under Article 11 (with the content set out in Annex IV) and to supply instructions for use under Article 13 before the system is put into service. If you are deploying an AI system in a high-risk context under the Act's Annex III categories (employment decisions, financial services, healthcare administration, educational access, essential public services), you have a legal right to this documentation as a matter of regulation, not just contract.

In practice, many vendors do not proactively supply this documentation and many deployers do not know to ask for it. The result is that SME operators are running AI systems in high-risk contexts without the documentation they need to build an adequate compliance programme, without the monitoring guidance the provider is required to supply, and without the residual risk disclosure that tells them what they are actually taking on.

Before signing any AI vendor contract for a consequential use case, add a clause requiring the vendor to supply Annex IV technical documentation, instructions for use consistent with Article 13, and written notification of any material update to the system along with updated documentation. This is not an unusual ask for a vendor whose product is genuinely compliant. A vendor who refuses to commit to supplying this documentation is a vendor who may not have prepared it to the required standard.

For a detailed explanation of what Article 11 technical documentation must contain and how to evaluate it, see the technical documentation guide on agentliability.eu. For the broader framework of operator obligations under the EU AI Act, see the Article 26 guide.

What to do before signing

A practical pre-signature process for any AI SaaS vendor contract runs to five steps. First, identify the limitation of liability cap and compare it to your estimated maximum exposure for the use case. If the gap is material, either negotiate the cap or ensure your own insurance covers the shortfall.

Second, read the indemnity clause and map it to the claims you are most likely to face. If the most probable claim type is not explicitly covered, assume it is not. Confirm with your broker that your own liability coverage extends to AI-specific outputs in the relevant context.

Third, read the acceptable use clause against your actual intended use. If there is any ambiguity, get written confirmation from the vendor that your intended use is within scope. Keep this confirmation as part of your contract file.

Fourth, review the data processing agreement for the three issues above: training data use, data location, and subject rights process. If the DPA does not address them, negotiate addendum language or ask the vendor for their standard AI-specific DPA.

Fifth, ask the vendor for technical documentation and instructions for use before deploying. If the use case is high-risk under the EU AI Act, make this a contractual right. If the vendor cannot supply adequate documentation, reassess whether you are comfortable deploying the system without it.
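The five steps above can be captured as a simple pre-signature checklist structure. The field and method names are an invented sketch for record-keeping, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class PreSignatureReview:
    """Minimal record of the five pre-signature checks (illustrative only)."""
    liability_cap_eur: float
    estimated_max_exposure_eur: float
    indemnity_covers_likely_claims: bool
    use_case_confirmed_in_writing: bool
    dpa_issues_open: list = field(default_factory=list)  # e.g. "training data use"
    technical_docs_received: bool = False

    def open_items(self) -> list:
        """Return the actions still outstanding before signature."""
        items = []
        if self.liability_cap_eur < self.estimated_max_exposure_eur:
            items.append("insure the cap shortfall")
        if not self.indemnity_covers_likely_claims:
            items.append("confirm own liability cover with broker")
        if not self.use_case_confirmed_in_writing:
            items.append("get written use-case confirmation")
        items.extend(f"DPA: {issue}" for issue in self.dpa_issues_open)
        if not self.technical_docs_received:
            items.append("request technical documentation")
        return items
```

An empty `open_items()` list is a reasonable signal that the contract file is complete; anything remaining maps directly back to one of the five steps.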

For the question of whether your current insurance covers the exposure that remains after completing this review, see the business insurance coverage guide and the Agent Insured platform, which tracks emerging AI liability coverage options for European enterprises.

Frequently asked questions

What is the most important clause to check?

The limitation of liability clause. It caps what the vendor pays if their AI system harms your business or a third party. Most caps are set at fees paid in the preceding 12 months, which for most SME contracts is a few hundred to a few thousand euros. Your actual exposure can be orders of magnitude higher. Know the cap before you sign.

Does a vendor indemnity clause protect me from customer claims?

Typically no. Vendor indemnity clauses cover IP infringement and specified security events, not claims from customers harmed by incorrect AI outputs. Under the Moffatt v. Air Canada principle, those claims come to you as the operator. The vendor's indemnity will not typically respond.

What is the acceptable use clause and why does it matter?

The acceptable use clause defines what you may use the AI for. If your actual use case is prohibited or ambiguous, you may be in breach of contract without knowing it. A breach voids your warranty claims against the vendor and may void your insurance cover. Read this clause against your actual use, not an idealised version of it.

What EU AI Act documentation should I receive from my AI vendor?

If your use case is high-risk under Annex III of Regulation (EU) 2024/1689, the vendor as provider must draw up technical documentation under Article 11 and supply instructions for use under Article 13. You should receive a document covering intended purpose, capabilities, limitations, accuracy metrics, known risks, and required human oversight measures. Make this a contractual right before signing.

Does my existing business insurance cover AI mistakes?

Most standard policies do not affirmatively cover AI-specific losses. Some have added AI exclusions. Ask your broker specifically about AI agent outputs and get the response in writing. The business insurance coverage guide on this site explains what typical policies cover and exclude.

References

  1. Moffatt v. Air Canada, British Columbia Civil Resolution Tribunal, decision of 14 February 2024, tribunal member Christopher Rivers.
  2. Regulation (EU) 2024/1689 of the European Parliament and of the Council (the EU AI Act), Article 13, Transparency and provision of information to deployers.
  3. Regulation (EU) 2024/1689, Article 11, Technical documentation.
  4. Regulation (EU) 2024/1689, Annex IV, Technical documentation content requirements.
  5. Regulation (EU) 2024/1689, Article 26, Obligations of deployers of high-risk AI systems.
  6. Regulation (EU) 2016/679 (GDPR), Article 28, Processor obligations.
  7. Regulation (EU) 2016/679, Article 22, Automated individual decision-making.
  8. Directive (EU) 2024/2853, Liability for defective products, applying to AI software from 9 December 2026.
  9. Mata v. Avianca, S.D.N.Y., Judge P. Kevin Castel, June 2023. AI-generated case citations submitted to the court found to be fabricated.
  10. AIUC-1, AI Insurance Underwriting Standard, AI Underwriting Company, 2025.