The Air Canada chatbot case: what SME operators should learn
A tribunal decision worth a few hundred Canadian dollars quietly rewrote the liability picture for every business running a customer-facing AI agent. If your agent talks to the public, this is the case you cannot afford to misread.
Key takeaways
- In February 2024 a British Columbia tribunal ruled that Air Canada was responsible for a bereavement fare promise its chatbot invented.
- The tribunal rejected the airline's argument that the chatbot was a separate legal entity whose statements the airline could disown.
- The decision is short and readable, and its reasoning carries in substance across most common-law jurisdictions and, through parallel consumer rules, into much of the EU.
- The practical takeaway is not about chatbots specifically but about any AI agent that communicates on behalf of a business with customers, suppliers, or regulators.
- Every SME running a customer-facing agent should have a documented incident response plan for the moment a customer acts on something the agent got wrong.
What actually happened
In November 2022, Jake Moffatt's grandmother died in Ontario. He booked a flight on Air Canada from Vancouver to Toronto to attend the funeral. Before booking, he used the chatbot on the Air Canada website to ask whether he could claim a bereavement fare retroactively. The chatbot told him yes, and explained that he could book the full-fare ticket and apply for a refund within ninety days of the date the ticket was issued by filling out a form. Moffatt booked the flight on that basis.
The problem was that the chatbot's explanation was wrong. Air Canada's actual bereavement fare policy, documented on a separate page of the same website, required the discount to be requested before travel, not after. When Moffatt submitted his refund claim after the trip, Air Canada refused. The customer service team acknowledged that the chatbot had provided misleading information, but told him the published policy controlled and offered him only a coupon toward a future flight.
Moffatt took the case to the British Columbia Civil Resolution Tribunal, an online small-claims body that hears low-value disputes. He asked for the difference between the full fare he had paid and the bereavement fare he had been promised, about eight hundred Canadian dollars. The tribunal ruled in his favour on 14 February 2024.
The argument Air Canada tried
The airline's defence was unusual enough to be worth setting out carefully. Air Canada argued that the chatbot was a separate legal entity responsible for its own actions. The submission, as recounted by tribunal member Christopher Rivers in the decision, effectively treated the chatbot as an independent agent whose statements did not bind the airline the way a human customer service agent's statements would.
The tribunal rejected this reasoning in plain language. The relevant passage is worth the time: "This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot."
That paragraph is the reason every SME operator should know this case. The tribunal did not need to resolve novel questions about legal personhood or the nature of AI. It applied an existing principle of consumer protection law, which is that a business is responsible for information given to customers by its own communication channels. The chatbot was part of the channel. The fact that it could hallucinate did not change the legal responsibility.
Why the case matters beyond aviation
If the only thing you take from this case is that airlines should be careful with their chatbots, you have underread the decision. The principle the tribunal applied is jurisdiction-neutral and technology-neutral. It applies to any business running any communication system that talks to customers, whether that system is a web form, a phone tree, a live agent, or an AI model.
Three implications for SMEs are worth drawing out explicitly.
One: your agent is an agent of your business
Agency law, the body of rules governing who can act on behalf of whom, has adapted to automated systems with little controversy. Courts across common-law jurisdictions have treated automated systems as agents of the business that deployed them since the 1990s. The Air Canada decision is notable because it extended that principle to a generative chatbot whose output was not explicitly scripted. The tribunal was unimpressed by the argument that unpredictability changed the analysis.
What this means for an SME operator is that statements by your AI agent bind your business in the same way that statements by a junior employee would. The junior employee exists inside a management structure with training, guardrails, and an escalation path. Your agent often does not. The gap is where the liability lives.
Two: disclaimers help only so much
Many operators, on reading this case for the first time, ask whether a disclaimer on the chatbot would have saved Air Canada. The honest answer is probably not. Consumer protection law across most jurisdictions refuses to enforce disclaimers that are inconsistent with a customer's reasonable expectations. If a customer asks a direct question through a channel the business controls and gets a direct answer, a disclaimer saying "the answer may be wrong" does not reliably defeat the customer's reliance on it.
Disclaimers are still worth having. They shift the framing, establish the context, and can help in marginal cases. They do not make a hallucinated refund policy go away. If you are an operator whose risk mitigation plan begins and ends with a disclaimer, you have not read the case carefully.
Three: the amount of money is misleading
The Moffatt case involved a few hundred dollars. It would be easy to read the decision and conclude that the stakes are too low to matter. That reading misses the point. The significance of a tribunal decision is not the award; it is the precedent. Moffatt likely cost Air Canada an order of magnitude more in legal fees, policy review, and reputational damage than the refund itself. For an SME, an equivalent incident costs proportionally even more, because the fixed costs of responding to a claim are larger relative to revenue.
The bigger concern is what the decision enables. A class of claims that was previously difficult to bring, because businesses could argue the chatbot was experimental or unofficial, now has a clear answer. Plaintiffs' lawyers in jurisdictions from the UK to Australia are already citing Moffatt as persuasive authority in their pre-action letters.
The European angle
Canadian tribunal decisions are not binding in Europe, but the consumer protection principles the tribunal applied map closely onto European rules. The Unfair Commercial Practices Directive, the Consumer Rights Directive, and the national consumer protection laws that implement them all impose duties of truthfulness and transparency on businesses communicating with consumers. A business that made a false promise through a chatbot in Germany, France, or the Netherlands would face the same core legal question that Air Canada faced in Canada.
Layer on top of this the EU AI Act, whose main obligations for operators apply from 2 August 2026, and the revised Product Liability Directive, which brings AI systems inside strict product liability rules, and the European picture arguably has less wiggle room than the Canadian one. A business running a customer-facing chatbot in Europe in 2026 is operating under a stricter regime than Air Canada was in 2022.
The incident response lesson
The most useful thing an SME operator can take from Moffatt v. Air Canada is not the legal principle but the operational reality. The case happened because Air Canada did not have a plan for what to do when the chatbot made a mistake. The customer service team did not escalate. The legal team did not audit the chatbot. The policy team did not have visibility into what the bot was saying. By the time the problem reached the tribunal, the airline's options had narrowed to honouring the promise or litigating the principle. It chose badly.
The operators who read this case correctly build an incident response plan that answers a specific set of questions before the incident happens. Who picks up the phone when a customer says "your agent promised me X"? Who has the authority to pause the agent? Who approves a goodwill payment? Who decides whether to notify a regulator? Who reviews the conversation logs to understand what the agent actually said? These questions are boring until the moment you need them.
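What that looks like in practice can be as small as a single version-controlled file that names owners and first actions. Here is a minimal sketch in Python; every trigger, role, and action is a hypothetical placeholder, to be replaced with your own people and processes.

```python
from dataclasses import dataclass

@dataclass
class PlaybookEntry:
    trigger: str  # the situation that activates this entry
    owner: str    # a named person, not a team alias
    action: str   # the first concrete step that person takes

# Hypothetical entries for illustration only.
INCIDENT_PLAYBOOK = [
    PlaybookEntry(
        trigger="customer says the agent promised something",
        owner="head of customer service",
        action="pull the conversation log and confirm what the agent said",
    ),
    PlaybookEntry(
        trigger="agent output contradicts published policy",
        owner="operations lead",
        action="pause the agent via the kill switch and notify legal",
    ),
    PlaybookEntry(
        trigger="goodwill payment below the agreed ceiling is warranted",
        owner="finance lead",
        action="approve and record the payment, then flag it for review",
    ),
    PlaybookEntry(
        trigger="incident may be reportable to a regulator",
        owner="managing director",
        action="decide on notification with legal counsel within 24 hours",
    ),
]
```

The point is not the data structure. It is that every question in the paragraph above has a written answer before a customer forces the issue.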
Our three-question diagnostic uses this reasoning in its third question. It asks whether you have a written incident response plan. If you do not, the Moffatt case is the reason you should.
What to do if you run a customer-facing AI agent
Six practical steps follow from a close reading of the case, and all of them can be started this week.
- Log every customer-facing conversation your agent has. If you cannot reconstruct what the agent said, you cannot defend the business.
- Run a weekly sample review where someone actually reads a subset of conversations looking for invented policies, wrong prices, or over-promises. A minimal logging-and-sampling sketch follows this list.
- Build a clear escalation path: a single button or handoff that puts the customer in front of a human whenever the agent is uncertain. A handoff sketch follows below.
- Audit your published policies against what the agent can say. If your refund policy changed in March, make sure the agent knows. A policy audit sketch follows below.
- Write the incident response playbook. Name the people, draft the customer communication template, identify the kill switch.
- Review your insurance. See our companion article on whether business insurance covers AI mistakes.
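For the first two items on the list, an append-only log and a sampling script are enough to start. A minimal sketch, assuming a JSON-lines log file; the path, field names, and sample size are arbitrary choices, not a standard.

```python
import json
import random
import time
from pathlib import Path

LOG_PATH = Path("agent_conversations.jsonl")  # append-only JSON-lines log

def log_turn(conversation_id: str, role: str, text: str) -> None:
    """Record one turn so the full conversation can be reconstructed later."""
    record = {
        "ts": time.time(),
        "conversation_id": conversation_id,
        "role": role,  # "customer" or "agent"
        "text": text,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def weekly_sample(n_conversations: int = 25) -> dict[str, list[dict]]:
    """Draw whole conversations at random for a human reviewer to read."""
    with LOG_PATH.open(encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    by_conv: dict[str, list[dict]] = {}
    for r in records:
        by_conv.setdefault(r["conversation_id"], []).append(r)
    chosen = random.sample(list(by_conv), min(n_conversations, len(by_conv)))
    return {cid: by_conv[cid] for cid in chosen}
```

The reviewer reads the sampled conversations for invented policies, wrong prices, and over-promises; anything suspect feeds straight into the incident playbook.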
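For the escalation path, a blunt gate on confidence and subject matter works as a first cut. In this sketch, `agent_reply` and `route_to_human` are hypothetical stand-ins for whatever your agent framework and ticketing system expose, and the threshold and keyword list are assumptions to tune against your own review findings.

```python
CONFIDENCE_FLOOR = 0.7  # hypothetical threshold, not a recommendation

POLICY_KEYWORDS = ("refund", "policy", "guarantee", "discount", "fare")

HANDOFF_MESSAGE = (
    "I want to make sure you get an accurate answer, "
    "so I'm connecting you with a member of our team."
)

def agent_reply(text: str) -> tuple[str, float]:
    """Placeholder for your model call; returns (answer, confidence)."""
    raise NotImplementedError

def route_to_human(customer_text: str, draft_answer: str) -> None:
    """Placeholder for your handoff into a human queue or ticket system."""
    raise NotImplementedError

def mentions_policy(text: str) -> bool:
    """Crude keyword check: anything policy-flavoured goes to a human."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in POLICY_KEYWORDS)

def respond(customer_text: str) -> str:
    answer, confidence = agent_reply(customer_text)
    if confidence < CONFIDENCE_FLOOR or mentions_policy(answer):
        route_to_human(customer_text, answer)
        return HANDOFF_MESSAGE
    return answer
```

A keyword gate is deliberately crude; the point is that the agent never improvises on refund or fare questions, which is exactly where Air Canada's chatbot went wrong.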
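And for the policy audit, a small regression suite that asks the agent the questions customers actually ask will catch drift between the bot and the published policy. The questions and expected phrases below are invented examples; `ask_agent` is a hypothetical callable that takes a question and returns the agent's answer as a string.

```python
POLICY_CHECKS = [
    # (question to put to the agent, phrase its answer must contain)
    ("Can I claim a bereavement fare after I have travelled?",
     "before travel"),
    ("How long do I have to request a refund?",
     "30 days"),  # hypothetical figure; use your real policy
]

def run_policy_audit(ask_agent) -> list[str]:
    """Return the questions whose answers have drifted from published policy.

    `ask_agent` is a hypothetical callable: question -> answer string.
    """
    failures = []
    for question, expected_phrase in POLICY_CHECKS:
        answer = ask_agent(question)
        if expected_phrase.lower() not in answer.lower():
            failures.append(question)
    return failures
```

Run it after every policy change and on a schedule; a failing check is a pause-the-agent event, not a backlog ticket.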
None of this is exotic. It is the same due diligence that a well-run customer service team has always applied, adapted to a communication channel that can generate new content on the fly. The businesses that do it now will be the ones an underwriter is willing to quote when insurers start writing dedicated AI policies in Europe in Q3 2026. The ones that do not will find themselves making Air Canada's argument in their own defence.
If you want the regulatory context that surrounds the case, the Why It Matters page walks through the EU AI Act and the revised Product Liability Directive. If you want the structured pathway to actual coverage once your house is in order, see the coverage pathway.
Frequently asked questions
What was the Air Canada chatbot case about?
In February 2024 the British Columbia Civil Resolution Tribunal ruled that Air Canada was responsible for a bereavement fare refund promised by its website chatbot, even though the actual published policy did not allow retroactive refunds. The tribunal ordered the airline to pay the fare difference, plus interest and fees.
Why does the decision matter to SMEs in Europe?
The core principle, that a business is responsible for information given to customers through its own channels, maps directly onto European consumer protection rules and is strengthened by the EU AI Act and the revised Product Liability Directive. Any SME running a customer-facing agent in Europe in 2026 is operating under a stricter framework than Air Canada faced in Canada.
Can an SME rely on a chatbot disclaimer to avoid liability?
Disclaimers help contextually but do not override a reasonable customer's reliance on information given directly by the business. The tribunal explicitly rejected the argument that the chatbot was a separate legal entity whose statements the airline could disown.
What should an SME actually do after reading this case?
Log every customer-facing conversation, run regular sample reviews, build an escalation path, audit your published policies against what the agent can say, write an incident response playbook, and review your insurance policies for AI exclusions.