Insure Your Agent Operator Edition

Deploying your first AI agent. The 90-day insurance and compliance playbook for SMEs.

Most SMEs deploying their first AI agent in 2026 do not fail because the technology fails. They fail because they go live before they are insurable, or they are insurable but have no governance structure to hold the first thirty days of live operation. This playbook gives you a week-by-week sequence to avoid both. Ninety days is enough time if you do them in the right order.

Key takeaways

  • 90 days is enough for most SME deployments if the sequencing is right. The order matters more than the speed.
  • Insurance binding comes before go-live, not after. Binding coverage is a week 9 to 10 action. Going live in week 11 without it is an unacceptable exposure.
  • Four documents are non-negotiable for insurability: a risk record, an oversight register, vendor documentation, and an incident protocol. Without all four, most brokers cannot place the risk.
  • Two failure modes kill most SME deployments: going live without bound coverage, and going live without a governance cadence active from day one.
  • The playbook is aligned with EU AI Act Article 26 expectations for deployers, which apply from 2 August 2026.

Why 90 days, and not 30 or 180

Thirty days is too short. You can scope an agent in thirty days, but you cannot produce the documentation, approach a broker, receive a quote, negotiate terms, and bind coverage in that window. Operators who try end up going live uninsured.

One hundred and eighty days adds risk of a different kind. The longer the timeline, the higher the chance the agent scope drifts during build, the vendor updates their model, or a team member leaves who held the institutional knowledge. Ninety days is the window where you can hold all of this in your head and keep the evidence file current.

There is also a practical regulatory driver. The EU AI Act's first deployer obligations under Regulation (EU) 2024/1689 apply from 2 August 2026.1 An SME that starts a 90-day process now completes it before that date. An SME that starts in June does not.

Weeks 1 to 2: scope the agent

The most common reason SME deployments run into trouble is that nobody wrote down what the agent was actually allowed to do before it went live. Scope conversations happen in product meetings and Slack threads. None of that counts when a broker asks for a risk record, or when a customer asks why the agent made a commitment the business did not intend.

Spend the first two weeks producing a scope document. It is a single page, written in plain language, that anyone in the business can read and understand. It covers:

Version the document from the start. The scope you write in week one will change. You need a record of what the scope was at each point in time.

Weeks 1 to 2: scope checklist
ActionOutput
Write the scope document in plain languageScope v1.0 (one page, versioned)
List the tools, APIs, and integrations the agent can callIntegration map
List the decisions the agent makes without human approvalAutonomous action register (draft)
Name the categories of people affected if the agent is wrongAffected parties section of scope doc
Identify any regulatory category the agent touches (credit, employment, health data, minors)Regulatory flag memo

Weeks 3 to 4: pick the vendor with insurability in mind

Vendor selection is usually treated as a technical decision. For insurance purposes it is also an evidence decision. When an underwriter reviews your application they will ask about model provenance, third-party guardrails, and the contractual terms governing the vendor's liability. If you cannot answer those questions, the broker cannot place the risk.

Before committing to a vendor in weeks 3 to 4, confirm you can get the following from them in writing:

A vendor who cannot provide this documentation in a reasonable timeframe is a red flag. It does not mean their product is bad, but it does mean your insurer will be uncomfortable, and your evidence file will have a gap that is hard to fill.

Weeks 3 to 4: vendor checklist
ActionOutput
Confirm model name, version, and update schedule from vendorModel provenance record
Collect vendor terms covering their liability position on outputsVendor liability section of evidence file
Obtain data processing agreement if personal data is involvedDPA on file
Confirm guardrail documentation is availableGuardrail summary from vendor
Review vendor incident notification obligationsIncident notification terms noted

Weeks 5 to 6: draft the four documents that make you insurable

There are four documents every SME needs before a broker can meaningfully discuss AI agent coverage. You have already started two of them in weeks 1 to 4. Weeks 5 and 6 are for completing all four.

The four documents for insurability
DocumentWhat it containsWho holds it
Risk record Scope of the agent, autonomous decisions it makes, affected parties, regulatory flags, version history The named agent owner, shared with broker
Oversight register Named owner, their review cadence, escalation path, evidence of completed monthly reviews Agent owner, available to insurer on request
Vendor documentation Model provenance, guardrails, vendor terms, DPA, incident notification obligations Agent owner, available to insurer on request
Incident protocol Kill-switch owner and mechanism, named roles for incident response, customer notification template, regulator escalation path Agent owner, shared with broker

The risk record and incident protocol are the ones brokers ask for first. The oversight register and vendor documentation come up in underwriting. Have all four ready before you contact a broker, not partway through the conversation.

For detailed guidance on the incident protocol specifically, read our companion article: AI Agent Incident Response: A Guide for SME Operators.

Weeks 5 to 6: documentation checklist
ActionOutput
Finalise risk record from scope document (weeks 1 to 2)Risk record v1.0
Name the oversight owner and document their review cadenceOversight register v1.0
Compile vendor documentation into a single evidence fileVendor documentation folder
Write the incident protocol (kill switch, named roles, notification template)Incident protocol v1.0
Internal review of all four documents by a directorDirector sign-off noted in oversight register

Weeks 7 to 8: approach the broker and get a quote

By week 7 you have four documents, a scoped agent, and a vendor with paperwork. That is enough to have a meaningful conversation with a specialist broker.

A general commercial insurance broker is unlikely to have the vocabulary or the carrier relationships to place AI agent liability. You need a broker who works in technology or professional liability, ideally with experience in emerging technology risks. Ask them directly whether they have placed AI agent coverage before. If the answer is no, ask who they would approach, and why.

At the first meeting, share the risk record and incident protocol. Explain the vendor and the model. The broker will translate this into an underwriting submission. Their job is to find a carrier or programme that can price the risk. Your job is to make that submission as complete as possible.

Expect the broker to come back with questions. The most common ones are about logging (can you produce a complete record of what the agent said?), human oversight (is there a named person who can confirm the agent was reviewed last month?), and exclusions in your existing policies (have you checked whether your cyber or E and O policy already excludes this?). For the exclusions question, read our article: AI policy exclusions: what SME operators must review before their next renewal.

Weeks 7 to 8: broker engagement checklist
ActionOutput
Identify a specialist broker with technology or emerging risk experienceBroker selected
Send risk record and incident protocol to broker ahead of first meetingSubmission started
Review existing policies for AI exclusions (cyber, E and O, GL, D and O)Exclusions memo
Respond to broker's underwriting questions in writingSubmission completed
Request a binding indication with coverage terms and sub-limitsQuote received (or red flag identified)

Weeks 9 to 10: bind coverage, complete final testing, train the oversight human

If the broker returns a quote in weeks 7 to 8, you spend weeks 9 and 10 reviewing the terms, negotiating where needed, and binding coverage. Coverage must be bound before the agent goes live. This is not a nice-to-have. It is the sequencing the entire playbook is built around.

Review the policy terms carefully. Pay particular attention to: the definition of the covered "AI system," any exclusions for autonomous actions, the sub-limit for AI-specific claims, the notice provisions for incidents, and whether the policy requires you to maintain your governance documents as a condition of coverage. If the governance documents are a condition, changes to scope must be notified to the insurer. Make that a written process from day one.

In parallel with the insurance work, run your final technical testing. This is not user acceptance testing of the product. It is adversarial testing of the agent's limits: what happens when the user tries to push it outside scope, what does it do when it hits a decision boundary, does the logging capture what you think it captures, and does the kill switch work as expected.

Also in this window: train the person who will own the agent in production. They need to know how to read the logs, what a weekly sample review looks like in practice, how to trigger the kill switch, and who to call if an incident starts.

Weeks 9 to 10: pre-launch checklist
ActionOutput
Review coverage terms with the broker, negotiate sub-limits if neededPolicy bound
Confirm governance documents are on file and match policy conditionsCompliance confirmation from broker
Run adversarial testing against scope boundariesTest report
Confirm logging captures full conversation and tool-call recordsLogging verified
Train the oversight owner: weekly review, kill switch, escalationOwner trained, first review date set

Weeks 11 to 12: go live with the first-30-days governance cadence active from day one

Week 11 is go-live. Coverage is bound. Testing is done. The oversight owner knows what they are doing. The logs are running. The kill switch is confirmed.

The first thirty days of live operation are the period of highest risk. The agent will encounter inputs the testing phase did not anticipate. Users will try things you did not expect. The model may behave differently at production scale than it did in a sandboxed environment. None of that is unusual. What makes it manageable is having the governance cadence active before those things happen, not after.

In the first thirty days, the oversight owner should be doing a weekly sample review of agent outputs. Not monthly. Weekly. They are looking for: outputs that contradict the scope document, commitments that were not intended, errors that could cause a customer loss, and patterns in the questions that suggest users are testing the agent's limits. Each weekly review gets a one-paragraph note in the oversight register.

After thirty days, review the scope document. Update it with anything you learned. Version it. Notify your broker if the scope has changed materially. Then move to a monthly review cadence, which is what the EU AI Act's Article 26 oversight expectations are broadly calibrated to for non-high-risk deployers.

Weeks 11 to 12: go-live and first-30-days checklist
ActionOutput
Go live with agent in productionGo-live date recorded in oversight register
First weekly output sample review by oversight ownerWeek 1 review note
Second weekly output sample reviewWeek 2 review note
Third weekly output sample reviewWeek 3 review note
End of day 30: scope review and updateScope v1.1, notified to broker if materially changed

The red flags that delay a binding

Not every SME reaches week 9 ready to bind. These are the situations that most commonly push the timeline out, and what to do about them.

For the full picture of what your existing policies actually say, and why most of them will not respond as expected, read our article: The Air Canada chatbot case: what SME operators should learn.

Frequently asked questions

How long does it take to get AI agent insurance?

For SMEs, the realistic timeline from starting your documentation to receiving a binding indication is six to eight weeks, assuming you have a structured evidence file ready and are working with a broker who understands AI agent exposures. The 90-day playbook builds this procurement window into weeks 7 to 10 so that coverage is in place before go-live, not after.

What documents do I need before binding AI insurance?

Four documents are non-negotiable: a risk record describing what the agent does, what it decides, and who it affects; an oversight register naming the responsible person and their review cadence; vendor documentation covering model provenance, guardrails, and third-party terms; and an incident protocol with a named kill-switch owner and a customer notification template.

When should my AI agent go live in the 90-day plan?

Week 11. Coverage should be bound in weeks 9 to 10 and a first-30-days governance cadence should be active from the moment the agent goes live. Going live without bound coverage or without a governance structure in place are the two failure modes that account for most SME AI deployment problems.

What if my insurer says no?

A decline at weeks 7 to 8 is usually a documentation problem, not a permanent block. The broker should be able to tell you specifically what is missing. The most common gaps are: no written risk record, no named oversight owner, and no incident protocol. Fix those, then re-approach. If the agent poses genuinely novel or unquantifiable risk, the go-live date may need to move to Q4 2026 or later when dedicated AI lines are more widely available.

Does my current business insurance cover my AI agent?

Almost certainly not in full. Existing cyber, E and O, and general liability policies were written before autonomous agents existed. Many now carry explicit AI exclusions added at recent renewals. Some incidents may respond under cyber or professional indemnity, but sub-limits, conduct exclusions, and autonomous-action carve-outs mean the response is often partial. A written query to your broker on each policy is the only way to know.

Notes and sources

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (AI Act). Article 26 sets out obligations for deployers of AI systems. The first general deployer obligations apply from 2 August 2026. High-risk AI system deployer obligations apply from 2 August 2026 for most categories, with some provisions phased to December 2026. Official Journal of the European Union, L series, 2024.
  2. Moffatt v. Air Canada, Civil Resolution Tribunal of British Columbia, Case No. SC-2023-001996, February 2024. The tribunal found that Air Canada was bound by a bereavement fare commitment made by its chatbot, despite the airline's argument that the chatbot was a separate legal entity. The decision is the leading English-language authority on operator liability for AI agent outputs.
  3. Mata v. Avianca, Inc., United States District Court, Southern District of New York, Case No. 22-cv-01461 (PKC), June 2023. The court sanctioned attorneys for submitting AI-generated case citations that did not exist. The case is cited as authority on the professional responsibility implications of unverified AI outputs in legal and regulated contexts.
  4. Association of British Insurers (ABI). The ABI's position on AI in insurance has emphasised the importance of governance, transparency, and human oversight as preconditions for insurability. ABI members are actively developing guidance on AI agent coverage. See ABI publications at abi.org.uk for current positions.