Who is liable when an AI agent makes a mistake? The 2026 decision tree for operators.
The question reaches every operator eventually: your agent gave a customer the wrong information, booked the wrong service, generated a harmful output, or made a consequential decision without adequate oversight. Who is responsible? The answer is not automatic, and it is not simply whoever built the model. This article maps the four parties in any AI agent claim, walks through the 2026 liability decision tree, explains the legal theories now being applied by courts and regulators worldwide, examines six real cases that drew the lines, and gives you the 48-hour incident playbook to run when something goes wrong. The law has moved faster than most operators realise. This is the current position.
Key takeaways
- In the landmark case Moffatt v. Air Canada (BC Civil Resolution Tribunal, February 2024), the tribunal held that a business cannot disclaim liability for its AI agent's statements by treating the agent as a separate legal entity. The operator is responsible for what the agent says to customers.[1]
- The EU's revised Product Liability Directive (Directive 2024/2853, in force since December 2024, with national implementation due by December 2026) treats software, including AI systems, as products. Deployers and manufacturers across the supply chain face liability without the claimant needing to prove fault, a fundamental shift from the fault-based model most operators assumed would apply.[2]
- Agency theory is entering AI liability law. In Mobley v. Workday (US District Court, N.D. California), the court held in 2024 that an AI vendor acting as an employer's screening agent could be directly liable for discriminatory outcomes. The case was conditionally certified as a collective action in 2025.[3]
- Your API contract with the model provider limits the provider's liability to you. It does not limit your liability to third parties who relied on your agent's output. That gap between what the provider owes you and what you owe the world is where most operators are exposed.[4]
- The EEOC's first AI discrimination settlement (August 2023, $365,000 against iTutorGroup) and the Tesla Autopilot verdicts of 2025 (up to $329 million) confirm that courts are willing to impose substantial damages on operators of AI systems that cause foreseeable harm.[5][6]
- The EU AI Act (Regulation 2024/1689) assigns deployers of high-risk AI systems a set of ongoing obligations. Non-compliance with those obligations strengthens any negligence claim against you in civil proceedings, because it can be used as evidence that you failed to meet the applicable standard of care.[7]
- The 48 hours after an AI incident are the most consequential. Evidence preserved in those hours determines whether your insurer will respond and whether a regulator treats the event as a managed failure or an unchecked one. Most operators have no plan for that window.
Section 1: The four parties in any AI agent claim
Before you can apply the liability decision tree, you need to be clear on who the parties are. Most AI agent incidents involve four distinct parties, and the legal exposure each carries is different.
1. The model provider
The model provider is the company that built and operates the underlying AI model you access via API: OpenAI, Anthropic, Google, Mistral, Cohere, or a proprietary provider. The provider's liability to deployers is governed by the API terms of service, which almost universally include disclaimers of warranty, limitations of liability, and indemnification carve-outs for misuse. In the US, the provider may also have a strong argument under Section 230 of the Communications Decency Act for claims involving third-party content, though courts are actively debating the scope of that protection for AI-generated content.[8]
Critically, the model provider has no direct legal relationship with the end user who was harmed. A customer who received wrong advice from your AI agent cannot sue the model provider under contract. They can sue you. Whether you then have a claim against the provider under your API contract is a separate question, and the answer depends on your specific agreement.
2. The deployer (operator)
The deployer is the business that built the product or service using the AI model and put it in front of end users. This is you. In the EU AI Act framework, the deployer is defined as any natural or legal person that uses an AI system under its own authority for purposes other than personal non-professional activity.[7] In practice, if your business deployed the agent, you are the deployer, regardless of whether you built anything or simply configured a third-party tool.
The deployer carries the heaviest practical liability exposure in most scenarios because they are the party with whom the harmed user has a direct commercial relationship, and they are the party that made the decision to put the agent in front of users in its current configuration. The Moffatt v. Air Canada tribunal made this explicit: the airline could not shift liability to the chatbot's developers by treating the chatbot as a distinct entity.[1]
3. The affected party
The affected party is the person or business harmed by the agent's action or output. This is most commonly a customer who received incorrect information, an employee or job applicant who was subjected to a discriminatory automated decision, or a third party defamed or misrepresented by the agent's output. The affected party's available legal routes depend on their jurisdiction, their relationship to the deployer, and the nature of the harm.
Under EU law, the revised Product Liability Directive significantly expands affected parties' access to relief. The disclosure facilitation rules in Article 9 of Directive 2024/2853 allow courts to order defendants to disclose documentation about their AI systems. The presumption of defectiveness in Article 10 applies where the claimant faces disproportionate difficulty in proving the defect due to technical complexity.[2] Both provisions directly address the information asymmetry that has historically insulated AI operators from claims.
4. The integrator (where present)
In many deployments, a fourth party sits between the model provider and the deployer: the integrator who built the specific application, plugin, or workflow automation that connects the model to your business process. This might be a software development agency, a no-code automation provider, or an enterprise AI platform vendor.
The integrator's liability depends on the nature of what they built. If the defective output was caused by poor prompt engineering, inadequate guardrails, a misconfigured tool interface, or a design choice made by the integrator, they carry potential liability to the deployer. If the deployer gave the integrator inadequate specifications, the liability may flow the other way. Under joint and several liability frameworks including Article 8 of Directive 2024/2853, the affected party can pursue any or all of these parties directly for the full amount of the loss.[2]
Section 2: The 2026 liability decision tree
When an AI agent causes harm, the applicable legal theory and the party who carries the primary liability depend on a sequence of factual questions. Work through the following decision tree before any claim is made against you, and certainly before you respond to a complaint or make any admission; the sketch after the list expresses the same sequence as a triage checklist.
- Identify the harmed party's relationship to you before characterising the claim type. Customer claims run in contract first. Third-party claims run in tort first.
- Establish whether a human was in the loop at the point where the harmful output was transmitted or acted upon. The absence of human review does not of itself create liability, but it removes a potential intervening-cause defence.
- Distinguish system defect from deployment defect. A defect in the underlying model is a product liability issue implicating the model provider. A defect in how you deployed, configured, or supervised the model is a deployer negligence issue. Many real incidents involve both.
- Check your API contract for indemnification provisions and any upstream claims procedure. Most providers have short notice windows for invoking indemnification and will reject late claims.
- Assess whether the incident triggers regulatory reporting obligations independently of any civil claim. Regulatory exposure under the EU AI Act, GDPR, or sector regulations may run in parallel to civil liability and carry its own timeline.
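The following is a minimal Python sketch of that triage sequence. It is purely illustrative: the field names, claim categories, and the idea of a fixed indemnity notice window are assumptions made for the example, not a statement of law or of any particular contract.

```python
from dataclasses import dataclass


@dataclass
class Incident:
    """Facts to establish before responding to a complaint. All fields are illustrative."""
    harmed_party_is_customer: bool   # direct commercial relationship with us?
    human_reviewed_output: bool      # was a person in the loop at the point of harm?
    defect_in_model: bool            # the underlying model's behaviour caused the output
    defect_in_deployment: bool       # our prompts, configuration, or supervision caused it
    indemnity_notice_days: int       # notice window in our API contract, if any
    high_risk_under_ai_act: bool     # triggers EU AI Act deployer obligations
    personal_data_involved: bool     # triggers a GDPR assessment


def triage(incident: Incident) -> list[str]:
    """Walk the five questions of the decision tree and return a checklist."""
    steps: list[str] = []

    # 1. The harmed party's relationship to you determines the primary claim type.
    steps.append("Primary claim type: contract" if incident.harmed_party_is_customer
                 else "Primary claim type: tort")

    # 2. Human review at the point of harm affects the intervening-cause analysis.
    if not incident.human_reviewed_output:
        steps.append("No human review: intervening-cause defence likely unavailable")

    # 3. System defect versus deployment defect (many incidents involve both).
    if incident.defect_in_model:
        steps.append("Model defect: product liability analysis, provider implicated")
    if incident.defect_in_deployment:
        steps.append("Deployment defect: deployer negligence analysis")

    # 4. Upstream contract: indemnity notice windows are short.
    steps.append(f"Check API contract: indemnity notice within {incident.indemnity_notice_days} days")

    # 5. Regulatory reporting runs in parallel to any civil claim.
    if incident.high_risk_under_ai_act:
        steps.append("Assess EU AI Act incident reporting obligations")
    if incident.personal_data_involved:
        steps.append("Assess GDPR Article 33 notification (72 hours from awareness)")

    return steps
```

The value of writing the tree down this way is not the code itself but that every branch corresponds to a question your counsel, your carrier, or a regulator will ask within days of an incident.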
Section 3: Vicarious liability — when the deployer carries the cost of the agent's action
Vicarious liability is the principle that one party is responsible for the wrongful acts of another where a recognised relationship of authority, control, or agency exists. Traditionally applied to the employment relationship (an employer is vicariously liable for an employee's torts committed in the course of employment), it is now being applied to AI agents in several jurisdictions, with results that surprise operators who assumed their contractual arrangements provided a shield.
The agency theory applied in Mobley v. Workday is the clearest US example. The court in the Northern District of California rejected the argument that Workday was merely providing a tool and held that by actively participating in the hiring decision-making process, by recommending some candidates and rejecting others, the platform could properly be treated as the employer's agent.[3] Once characterised as an agent, the vendor can be directly liable for discriminatory outcomes as if it were an employer itself.
The logical extension for deployers is significant. If your AI agent acts on your behalf, makes commitments in your name, takes actions that bind your business, or performs functions you would otherwise perform as a business, a court applying agency theory will hold you responsible for the agent's wrongful acts in the same way you would be responsible for a human employee's wrongful acts. The fact that the agent is software, not a person, does not change the analysis under agency law: what matters is the functional relationship, not the nature of the party performing the task.
This analysis is particularly important for customer-facing agents that make representations about pricing, availability, terms, or eligibility. If your agent tells a customer they qualify for a discount, are booked on a particular service, or are entitled to a refund, and that statement is wrong, you are vicariously responsible for the representation in the same way you would be if a salesperson made it in a meeting. The Air Canada chatbot tribunal found precisely this: the agent's statements bound the airline.[1]
The defence against vicarious liability is not to disclaim the agent but to define its scope clearly and build a system that cannot make binding commitments outside that scope. Operators who deploy agents with general instruction sets and no hard limits on what the agent can commit to are creating uncapped vicarious liability exposure.
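As an illustration of that approach, here is a minimal sketch of a commitment allowlist in Python. The action names and categories are invented for the example; the point is the structure: the agent proposes, a deterministic check decides whether the proposed action executes, escalates to a human, or is refused.

```python
# Hypothetical guardrail: the agent proposes actions, but anything that would bind
# the business is checked against an explicit allowlist before it is executed or sent.

ALLOWED_COMMITMENTS = {
    "quote_standard_price",      # quote prices from the published price list only
    "confirm_existing_booking",  # restate a booking that already exists in the system
}

ESCALATION_REQUIRED = {
    "offer_discount",
    "promise_refund",
    "confirm_eligibility",       # e.g. fare rules, policy exceptions
}


def review_proposed_action(action: str) -> str:
    """Return 'execute', 'escalate', or 'refuse' for an action the agent proposes."""
    if action in ALLOWED_COMMITMENTS:
        return "execute"
    if action in ESCALATION_REQUIRED:
        return "escalate"   # route to a human before anything is sent to the customer
    return "refuse"         # outside the defined scope: the agent cannot commit to it
```

The design choice is that the boundary of what the agent can commit to lives in reviewable configuration, not in the model's instructions, which is exactly the scope definition a court or insurer will ask you to produce.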
Section 4: Product liability — when the AI itself is the defective product
Product liability shifts the analysis from the conduct of the deployer to the quality of the product itself. The question is not whether the operator behaved negligently but whether the AI system, treated as a product, was defective in a way that caused the harm.
The revised EU Product Liability Directive (Directive 2024/2853) is the most significant development in this space globally. It entered into force in December 2024 and must be implemented by member states by December 2026. It explicitly extends the product liability framework to software and digital services, removing the longstanding ambiguity about whether software could be treated as a product under the original 1985 Directive.[2] The key provisions that affect AI deployers are:
First, the definition of product defect under Article 7 includes the expected performance and safety of the product as a whole. An AI system that produces outputs causing harm where safer alternatives were reasonably available at the time of deployment can be characterised as defective under this standard.
Second, Article 10 introduces a rebuttable presumption of causation where the claimant faces excessive difficulty in establishing the link between the defect and the harm due to technical complexity. For AI systems, where the causal chain from model behaviour to output to harm is inherently complex, this provision substantially reduces the evidentiary burden on claimants.
Third, Article 9 allows courts to order the disclosure of evidence about the AI system's design, training data, and performance characteristics when the defendant is better placed to access that evidence. Operators who cannot produce comprehensive system documentation at claim time will face adverse inferences from courts exercising this power.
For the deployer, the most important implication of the revised PLD is that the distinction between a defect in the model and a defect in the deployment is not always available as a defence. Under Article 8 on joint and several liability, if both the model provider and the deployer contributed to the harm, the affected party can pursue either of them for the full amount.[2] The deployer who deploys a model they knew, or should have known, had a tendency to produce harmful outputs in the category of use they were deploying it for cannot escape liability by pointing to the model provider.
In the US, product liability for AI systems is still developing under existing common law principles. The Tesla Autopilot verdicts of 2025 are instructive: in the Benavides case, the Miami jury found Tesla's Autopilot system defective and awarded $329 million in damages, holding the manufacturer responsible for the product's design despite the driver's acknowledged role in the accident.[6] The jury allocated 33% of fault to Tesla and 67% to the driver, a split that illustrates how courts are beginning to apportion liability between the product and the human operator in AI-adjacent systems.
Section 5: Negligence — failure to oversee, train, or constrain
Negligence is the most frequently invoked theory in AI agent claims outside of contract disputes. The elements are the same as in any negligence case: the defendant owed a duty of care to the claimant, the defendant breached that duty, and the breach caused the claimant's loss. The question for operators is what standard of care applies to an entity deploying an AI agent in a commercial context.
Courts have not yet codified a specific standard for AI agent operators, but several principles from existing case law and regulatory guidance are being applied. The relevant standard is that of a reasonable operator in the same industry with the same access to information about AI capabilities and limitations at the time of deployment. That standard is rising rapidly as the volume of publicly available guidance, case law, and regulatory documentation increases. An operator who deployed an agent in 2026 without reviewing the EU AI Act guidance for deployers, the available incident response frameworks, and the known failure modes for the category of AI they used will find it difficult to argue they met the reasonable operator standard.
The specific negligence failures courts are examining in AI cases fall into three categories. First, failure to constrain: deploying an agent with the ability to take binding actions or make representations outside the boundaries appropriate for the use case. Second, failure to oversee: relying on fully autonomous operation without periodic review of outputs, without monitoring for anomalous patterns, and without human escalation paths for edge cases. Third, failure to respond: identifying a problem with an agent's outputs and not acting, or acting too slowly, to prevent further harm.
In Mata v. Avianca, the failure fell into the third category: lawyers who used an AI tool to produce case citations failed to verify those citations before submitting them to a court, and then stood by them when challenged.[9] The court sanctioned the lawyers and their firm for submitting fabricated legal authorities and continuing to rely on them after their existence was questioned. The professional duty to verify before relying was not discharged by blaming the AI. For operators, the parallel is direct: deploying an AI agent that makes factual claims, recommendations, or professional assessments does not transfer the duty of verification to the model.
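The second and third failure categories are operational as much as legal: whether outputs are actually sampled, reviewed, and escalated is something you can demonstrate in code. The following is a minimal sketch of one way to do it; the sample rate, keyword triggers, and record fields are assumptions for illustration, not a recommended standard of care.

```python
import random
from datetime import datetime, timezone

REVIEW_SAMPLE_RATE = 0.05  # fraction of outputs routed to human review (illustrative)
ANOMALY_KEYWORDS = ("refund", "guarantee", "legal advice")  # illustrative triggers


def should_review(output_text: str) -> bool:
    """Always flag anomalous categories; otherwise sample a fixed fraction of outputs."""
    if any(keyword in output_text.lower() for keyword in ANOMALY_KEYWORDS):
        return True
    return random.random() < REVIEW_SAMPLE_RATE


def record_review_decision(output_text: str, reviewed: bool) -> dict:
    """Keep a timestamped governance record; this is the evidence a negligence claim examines."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "flagged_for_review": reviewed,
        "output_excerpt": output_text[:200],
    }
```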
Section 6: Contractual allocation — what your supplier contract actually says versus what the law requires
The relationship between your API contract and your actual legal exposure is one of the most consistently misunderstood elements of AI operator liability. Most operators believe their API agreement with the model provider either protects them from liability for the model's outputs or requires the provider to indemnify them when things go wrong. This is rarely accurate.
A typical model provider API agreement contains four elements that bear on liability allocation. The first is an indemnification provision that covers claims against the deployer for the provider's infringement of third-party intellectual property rights, typically including claims that the model's training data violated copyright. This is the one area where providers generally do offer substantive indemnification and where the deployer has a reasonable expectation of cover.[10]
The second is a limitation of liability clause that caps the provider's aggregate liability to the deployer at the fees paid in the preceding 12 months, or a similar low figure. This clause does not affect what the deployer owes to third parties. It only limits what the deployer can recover from the provider if the deployer is found liable to a third party and seeks contribution. For most SME deployers, the recoverable amount under this cap is materially less than the potential third-party claim.
The third is a warranty disclaimer that excludes any warranty, express or implied, that the model will produce accurate, reliable, or fit-for-purpose outputs. This is a near-universal provision and it means the deployer cannot sue the provider for a model hallucination under a warranty theory. The deployer must instead rely on any applicable negligence or statutory claims.
The fourth is an acceptable use policy that restricts the categories of deployment the provider permits and imposes obligations on the deployer to implement safeguards. A deployer who violates the acceptable use policy may void the indemnification provisions and face additional exposure if the provider exercises its right to terminate access.
The critical gap is this: the API contract allocates risk between the provider and the deployer. It has no effect on what the deployer owes to the end user who was harmed. That user has no contract with the provider and does not interact with the provider's terms. They interact with the deployer's product and the deployer's terms of service. The deployer's terms of service may include a limitation of liability provision attempting to cap the deployer's exposure to the end user. Whether those limitations are enforceable depends on consumer protection law in the applicable jurisdiction, and in the EU, any attempt to limit liability for personal injury or death caused by negligence is unenforceable under the Unfair Contract Terms Directive and its national implementations.[11]
Section 7: Six real cases that drew the lines
The following six cases represent the current state of AI agent liability law. They are cited by courts, regulators, and insurers as the foundation of the current legal landscape.
1. Moffatt v. Air Canada (BC Civil Resolution Tribunal, February 2024)
Jake Moffatt contacted Air Canada's chatbot after a bereavement and was told he could buy full-fare tickets and claim the bereavement fare discount retroactively. He relied on that information, purchased tickets, and was subsequently told by Air Canada that the policy the chatbot described did not exist. The tribunal awarded damages and rejected Air Canada's argument that the chatbot was a "separate legal entity" for whose statements it bore no responsibility. The tribunal found that Air Canada failed to exercise reasonable care in ensuring its chatbot provided accurate information, and that it was responsible for representations made by the automated system it put in front of customers.[1]
The case is cited in every subsequent AI liability discussion for one reason: it settled definitively that a business cannot disclaim its AI agent. The operator is responsible for what the agent says.
2. Mata v. Avianca (US District Court, SDNY, June 2023)
Steven Schwartz, a lawyer representing plaintiff Roberto Mata in a personal injury case against Avianca, used ChatGPT to research a brief that was filed containing six case citations generated by the tool. The citations were entirely fabricated: the cases did not exist. When challenged, the lawyers initially insisted the cases were real before admitting they had not verified the AI-generated citations. Judge Kevin Castel sanctioned the lawyers and their firm for failing to verify the cases before relying on them in court filings.[9]
The professional responsibility principle is widely applicable: deploying AI to produce work product that is then presented as accurate transfers the duty of verification to the professional, not to the AI. For operators in any regulated sector, the same principle applies to agent outputs presented to regulators, clients, or counterparties.
3. EEOC v. iTutorGroup (Settlement, August 2023)
The US Equal Employment Opportunity Commission brought the first AI-related employment discrimination enforcement action against iTutorGroup, an online tutoring company. The company's hiring software automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. A claimant demonstrated the discrimination by submitting two identical applications with different birthdates, receiving an interview only on the application with the more recent birthdate. The settlement required iTutorGroup to pay $365,000 to affected applicants and to adopt anti-discrimination policies governing its automated hiring tools.[5]
The case established that automated discrimination in employment carries the same regulatory and financial exposure as human discrimination. Operators running AI in any hiring or employment workflow should treat their AI's outcomes as if a human had made those decisions directly.
4. Walters v. OpenAI (Superior Court of Gwinnett County, Georgia, 2023-2025)
Radio talk show host Mark Walters sued OpenAI after ChatGPT described him in a summary provided to a journalist as having been accused of fraud and embezzlement in a lawsuit. The underlying case it was summarising did not involve Walters at all. A Georgia court initially denied a motion to dismiss in January 2024, allowing the case to proceed. The court ultimately granted summary judgment to OpenAI in May 2025, holding that the ChatGPT output could not reasonably be understood as describing actual facts given OpenAI's general user warnings about inaccuracy, and that Walters had not established recoverable damages.[12]
The case is significant not because OpenAI won but because the defamation theory proceeded as far as it did and because the basis for dismissal, namely the adequacy of OpenAI's user warnings, will not always be available to deployers who present AI output without comparable caveats. An operator who deploys an agent and presents its outputs as accurate, reliable, or authoritative cannot rely on the same reasoning.
5. Mobley v. Workday (US District Court, N.D. California, 2024-2025)
Derek Mobley, who is African American, over 40, and disabled, applied for more than 80 positions he believed used Workday's AI screening tool and was rejected every time. In July 2024, the court ruled that Workday, by actively participating in the hiring decision process, could be liable as an agent of the employers using its platform. The case was conditionally certified as a collective action under the Age Discrimination in Employment Act in May 2025, expanding it to cover all applicants over 40 rejected by Workday's AI screening since September 2020.[3]
The agency theory this case applies matters for every operator who deploys a third-party AI tool to perform a function the operator would otherwise perform directly. The vendor may carry liability alongside the operator for the outcomes the AI produces.
6. Benavides v. Tesla (Miami-Dade Circuit Court, September 2025)
In a case arising from a 2019 fatal crash involving Tesla's Autopilot system, the Miami jury found Tesla's product defective and awarded $329 million in damages. The jury allocated 33% of fault to Tesla and 67% to the driver, who admitted he took his eyes off the road. The verdict is significant for AI agent operators because it confirms that a jury will allocate meaningful liability to the developer of an autonomous system even where the human operator of that system contributed substantially to the harm through inattention.[6]
For software AI agents rather than physical autonomous systems, the implication is that a deployer who puts an agent in production that takes consequential actions cannot fully transfer accountability to the end user's decisions after the fact. The agent's design, its constraints, and the adequacy of warnings all remain part of the liability analysis.
Section 8: The 48-hour incident playbook
When an AI agent incident occurs, the actions taken in the first 48 hours determine most of what follows. They determine whether your insurer responds or rejects. They determine whether a regulator treats the event as a managed failure or an uncontrolled one. They determine what evidence you have available when a claim or investigation begins. Most operators discover they have no plan for this window only after the window has closed.
The following sequence applies across jurisdictions. Some steps will have shorter mandatory timelines depending on the applicable regulation.
Hours 0-4: Contain and preserve. The first step is to pause or constrain the agent's operation if the incident is ongoing or if the failure mode is not fully understood. If you cannot pause the agent without disrupting critical services, constrain its scope to prevent additional harm in the affected category. Simultaneously, initiate evidence preservation: lock the log files from the relevant period, screenshot any relevant conversation outputs, preserve the agent's current system prompt and configuration, and take a timestamped record of when you became aware of the incident and what you knew. Do not alter, delete, or overwrite any logs or configuration records from this point forward.
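A minimal sketch of that preservation step is below, in Python. The paths, file names, and folder layout are placeholders for whatever your agent actually uses; the substance is the pattern: copy rather than move, hash what you copy, and record when you became aware and what you knew at the time.

```python
import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

# Placeholder locations: point these at wherever your agent actually writes logs and config.
LOG_DIR = Path("var/agent/logs")
CONFIG_FILES = [Path("config/system_prompt.txt"), Path("config/agent_settings.json")]


def preserve_evidence(incident_id: str, aware_since: str, summary: str) -> Path:
    """Copy logs and configuration into an incident folder and record SHA-256 hashes."""
    folder = Path(f"incidents/{incident_id}")
    folder.mkdir(parents=True, exist_ok=True)

    manifest = {
        "incident_id": incident_id,
        "preserved_at": datetime.now(timezone.utc).isoformat(),
        "aware_since": aware_since,   # when you first became aware of the incident
        "what_we_knew": summary,      # contemporaneous note, not a legal conclusion
        "files": {},
    }

    for source in [*LOG_DIR.glob("*.log"), *CONFIG_FILES]:
        if source.exists():
            target = folder / source.name
            shutil.copy2(source, target)  # copy; never move or edit the originals
            manifest["files"][source.name] = hashlib.sha256(target.read_bytes()).hexdigest()

    (folder / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return folder
```

The hashes matter because they let you show later that the preserved copies match what the system actually contained at the time, which is precisely the point an insurer or opposing counsel will probe.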
Hours 4-12: Internal assessment and carrier notification. Establish internally what happened, who was affected, and what the scope of the harm appears to be. Most insurance policies require prompt notification of a potential claim or incident as a condition of coverage. This is not the time to determine whether a claim will be made: it is the time to notify the carrier of a potential claim event. Review your policy wording for notification requirements and provide written notice within the required window. Late notification is one of the most common grounds for coverage denial in AI incident claims. Do not make any admission of liability or offer any payment or remedy to the affected party before speaking to legal counsel and your carrier.
Hours 12-48: Regulatory assessment and affected party communication. Assess whether the incident triggers any mandatory reporting obligations. Under the GDPR, a data breach with risk to individuals requires notification to the supervisory authority within 72 hours of becoming aware.[13] If the incident involves a high-risk AI system under the EU AI Act, the deployer's obligation to log incidents and notify relevant market surveillance authorities applies from the point of awareness.[7] Sector-specific regulations in financial services, healthcare, and legal services may impose their own notification timelines. Communicate with affected parties only after legal review of the communication. The content and timing of external communications about an AI incident can affect insurance coverage, regulatory treatment, and litigation exposure.
Frequently asked questions
Who is liable when an AI agent makes a mistake?
In most cases, the operator who deployed the agent to end users carries primary liability. The British Columbia Civil Resolution Tribunal confirmed this in Moffatt v. Air Canada (2024), holding the airline responsible for its chatbot's incorrect representations regardless of whether a human or an automated system made them. The model provider's liability is typically limited by contract to the terms of its API agreement. The deployer cannot use those contractual limits as a shield against third parties who relied on the agent's output.
Can I pass AI liability to the model provider through my API contract?
No. API contracts between a deployer and a model provider are bilateral agreements. They allocate risk between those two parties but do not bind third parties who interact with your agent. A customer harmed by your agent's output has no contract with the model provider and will pursue the deployer directly. You may have an indemnity right against the provider under your API contract, but you cannot rely on that contract to avoid liability to the person your agent harmed.
What is the difference between product liability and negligence for AI agents?
Product liability focuses on the AI system itself as a defective product. Under the revised EU Product Liability Directive (Directive 2024/2853), software including AI is treated as a product, and economic operators in the supply chain can be held liable without the claimant proving fault. Negligence instead focuses on the behaviour of the party deploying or overseeing the agent: whether they took reasonable care to prevent foreseeable harm. Product liability is harder to exclude contractually; negligence requires proving a breach of the applicable duty of care.
Does the EU AI Act assign liability directly to deployers?
The EU AI Act assigns regulatory obligations and penalties to deployers but does not create a standalone private right of action for harmed individuals. However, Article 25 of Regulation (EU) 2024/1689 provides that a deployer who substantially modifies a high-risk AI system or places it on the market under its own name assumes the full obligations of a provider. For high-risk systems, non-compliance with EU AI Act obligations can be used as evidence of negligence in civil proceedings.
What does vicarious liability mean for operators running AI agents?
Vicarious liability holds one party responsible for the wrongful acts of another where a relationship of authority or control exists. Courts in several jurisdictions are beginning to apply agency theory to AI deployments: if you deploy an AI agent that acts on your behalf, makes commitments to customers, or performs functions you would otherwise perform, you may be vicariously liable for its harmful actions in the same way an employer is liable for an employee's wrongful acts. The Mobley v. Workday ruling applied agency theory to allow claims that an AI screening tool's vendor was directly liable for discriminatory outcomes to proceed.
What happened in Mata v. Avianca and why does it matter for operators?
In Mata v. Avianca (SDNY, 2023), a law firm submitted an AI-generated brief containing fabricated case citations. Judge Kevin Castel sanctioned the lawyers for failing to verify the AI output before filing it with the court. The principle the case establishes is universal: a professional who deploys AI to produce work product that is then presented as accurate takes personal responsibility for the accuracy of that output. For operators, the lesson is that deploying an AI agent to advise, recommend or communicate does not transfer professional responsibility to the model.
What is the 48-hour AI incident response window?
The 48-hour window is the critical period after an AI agent incident in which the actions you take most determine whether your insurer will respond and whether a regulator will treat the event as a managed or unmanaged failure. In those 48 hours you should pause or constrain the agent, preserve all logs, notify your insurance carrier (most policies require prompt notification), and assess whether the incident triggers any reporting obligation under GDPR, the EU AI Act, sector regulations, or local consumer protection rules. Delay in carrier notification is one of the most common grounds for coverage denial.
Can an affected third party sue both the model provider and the deployer?
Yes, in most jurisdictions. Under EU law, Article 8 of Directive 2024/2853 provides that where multiple economic operators are each responsible for the same damage, they are jointly and severally liable, meaning a claimant can recover in full from any one of them. In the US, joint and several liability in tort law follows similar logic in many states. A claimant who can show that both the model's design and the deployer's implementation contributed to harm can pursue both parties, though in practice the deployer is the more accessible and better-resourced target.
Does deploying an AI agent in my business create employment discrimination exposure?
Yes, if the agent is involved in any employment decision or workflow. The EEOC settled its first AI discrimination case against iTutorGroup in August 2023 for $365,000 after the company's hiring software automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. In Mobley v. Workday, conditionally certified as a collective action in 2025, the court held that an AI screening vendor could be liable as the employer's agent. If your AI agent screens resumes, ranks candidates, makes scheduling decisions, or influences any employment outcome, your Employment Practices Liability exposure is real and current.
What evidence should I keep to defend an AI liability claim?
You need four categories of evidence. First, the system documentation: what the agent was designed to do, its intended scope, the guardrails you applied, and any third-party model documentation you received. Second, the interaction logs: timestamped records of what the agent said and what the user asked, preserved in their original form without alteration. Third, your governance record: the oversight and review processes you operated before and after deployment. Fourth, the incident record: when you first became aware, what steps you took, and who you notified. Courts and insurers will both look for this evidence. The Agent Certified self-assessment at agentcertified.eu is the structured framework for building it before any claim arises.
Run the Coverage Audit
Before you talk to a broker or legal counsel, use the Coverage Audit tool to map your current policies against your AI agent exposure. It takes ten minutes and produces the document your broker needs to review your position.
Start the Coverage Audit
Footnotes
- Moffatt v. Air Canada, 2024 BCCRT 149 (BC Civil Resolution Tribunal, February 14, 2024). Tribunal member Christopher Rivers. Full decision available at crt.bc.ca.
- Directive 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products. OJ L, 2024/2853. Entered into force 9 December 2024. Implementation deadline: 9 December 2026. See in particular Articles 4 (definitions), 7 (defect), 8 (multiple liable parties), 9 (disclosure of evidence), 10 (rebuttable presumption of causation).
- Mobley v. Workday Inc., Case No. 3:23-cv-00770, US District Court for the Northern District of California. Motion to dismiss ruling July 12, 2024 (Judge Rita Lin). ADEA collective action conditional certification granted May 16, 2025.
- See OpenAI API Terms of Service, Section 7 (Limitations of Liability), and Anthropic Usage Policy. Both cap provider liability at the fees paid in the preceding 12 months and exclude consequential losses. Similar limitations appear in the terms of all major model providers as of April 2026.
- EEOC v. iTutorGroup Inc. and affiliates, Consent Decree approved September 8, 2023, US District Court for the Eastern District of New York. EEOC press release: "iTutorGroup to Pay $365,000 to Settle EEOC Discriminatory Hiring Suit," August 9, 2023. Available at eeoc.gov.
- Benavides v. Tesla Inc., Miami-Dade Circuit Court, September 2025. Jury verdict of approximately $329 million. The jury found Tesla's Autopilot system defective and allocated 33% fault to Tesla, 67% to the driver. Earlier related verdict: August 2025 Miami jury awarded $243 million in a separate 2019 Autopilot-involved crash.
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). OJ L, 2024/1689. See Article 25 (obligations of deployers treated as providers), Article 26 (obligations of deployers of high-risk AI systems).
- 47 USC Section 230. For current analysis of Section 230's application to AI-generated content, see the pending petition for certiorari in Enigma Software Group USA LLC v. Malwarebytes Inc. and related proceedings. Courts are divided on whether AI-generated output constitutes third-party content for Section 230 purposes.
- Mata v. Avianca Inc., Case No. 22-cv-01461 (PKC) (SDNY). Order re: sanctions, June 22, 2023 (Judge Kevin P. Castel). Sanctions imposed on attorneys Steven Schwartz and Peter LoDuca and their firm for submitting AI-generated fabricated case citations.
- For analysis of model provider indemnification scope, see Microsoft Azure OpenAI Service Terms (Customer Copyright Commitment), Google Cloud Generative AI Indemnification, and Amazon Bedrock Service Terms. Each program provides some indemnification for intellectual property infringement claims but does not extend to accuracy, reliability, or third-party harm claims.
- Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts. Implemented in EU member states via national law. Limitation of liability clauses that purport to exclude liability for personal injury or death caused by the supplier's negligence are unenforceable as unfair terms.
- Walters v. OpenAI LLC, Superior Court of Gwinnett County, Georgia. Motion to dismiss denied January 2024. Summary judgment in favour of OpenAI granted May 19, 2025 (Judge Tracie Cason) on three independent grounds: the output could not reasonably be understood as stating actual facts, Walters failed to show negligence, and Walters failed to establish recoverable damages. Analysis: Cleary IP and Technology Insights, May 2025.
- Regulation (EU) 2016/679 of the European Parliament and of the Council (GDPR), Article 33. The 72-hour notification requirement to the supervisory authority applies where a personal data breach is likely to result in a risk to the rights and freedoms of natural persons.
- For product liability treatment of AI software under US law, see Restatement (Third) of Torts: Products Liability, Section 19(b) (software as product), and the growing body of state court decisions applying strict products liability to software-related harm. As of 2026, no US federal court has issued a definitive ruling on strict products liability for AI-generated outputs.
- On the current state of the AI liability insurance market: HSB (Munich Re subsidiary) launched AI Liability Insurance for SMEs March 2026. Armilla (Lloyd's coverholder, Chaucer underwriting) limits increased to $25 million January 2026 following a $25 million Series A. Testudo (Apollo, Atrium, QBE backing) launched January 2026. See the Get Covered directory for current carrier information.