Key takeaways
- Document first, deploy second. The autonomy envelope (what the agent is allowed to do) is the foundation of both your insurance submission and your regulatory compliance.
- Contact your broker before going live, not after. An undisclosed AI deployment creates a non-disclosure risk on any claim that follows.
- Check for AI exclusions on your existing policies. Many professional indemnity and cyber wordings have been updated since 2023 to exclude or limit AI-related losses.
- A two-hour documentation exercise now is worth considerably more than a retroactive exercise after a claim is filed.
- If you operate in the EU, the AI Act's Article 5 prohibitions have been in force since February 2025. Checking your deployment against them is a legal obligation, not a best practice.
Step 1: Write the autonomy envelope before you write the code
Before any other step, write down exactly what your AI agent is allowed to do, at what thresholds it must stop and ask a human, and what it absolutely cannot do. This document is called the autonomy envelope or scope of authorised action. It is the founding document for everything that follows.
If you are building a custom agent, this means defining the actions it can take, the data it can access, and the decisions it can make without a human in the loop. If you are using a third-party AI tool rather than building one yourself, this means documenting how your business uses it: which features you have switched on, which user groups it interacts with, and what it is allowed to say or do in your name.
The autonomy envelope matters for three reasons. First, your broker cannot write a meaningful policy without it. Second, your team cannot oversee the agent meaningfully without it. Third, if something goes wrong, it is the document that shows what the agent was supposed to do versus what it actually did. No deployment should go live without a written scope, even a short one. A one-page document is enough to start.
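Even a one-page envelope benefits from being structured rather than free-form, because your team and your broker will both read it against the same headings. As an illustrative sketch only, the envelope for a hypothetical customer-support drafting agent might look like this (the agent name, thresholds, and rules below are invented examples, not recommendations):

```python
# Illustrative autonomy envelope for a hypothetical support agent.
# Every name and threshold here is an example, not a recommendation.
AUTONOMY_ENVELOPE = {
    "agent": "support-drafting-agent",
    "version": "1.0",
    "reviewed": "2026-01-15",
    "allowed": [
        "draft replies to inbound customer emails for human approval",
        "look up order status in the CRM (read-only)",
    ],
    "escalate_to_human_when": [
        "refund requested above EUR 100",
        "customer mentions a complaint, a regulator, or a lawyer",
        "query falls outside the documented scope",
    ],
    "never": [
        "send any message without human approval",
        "modify customer records",
        "discuss pricing, discounts, or legal terms",
    ],
}

def requires_escalation(refund_eur: float, threshold_eur: float = 100.0) -> bool:
    """Threshold check matching the illustrative envelope above."""
    return refund_eur > threshold_eur
```

The point of the structure is that each rule is individually checkable: a reviewer (or a guardrail in code) can test the agent's behaviour against one line at a time rather than against a paragraph of prose.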
Step 2: Run the AI Act Article 5 check
Article 5 of EU Regulation 2024/1689 lists eight prohibited AI practices. These have been in force since February 2025. If your deployment touches any of them, you have a legal problem that no insurance policy can cover, and you need to know before you go live rather than after a regulator asks.
The three Article 5 provisions most relevant to SMEs are: emotion recognition in workplace or educational contexts (not permitted except for strictly defined safety or medical purposes); biometric categorisation systems that infer sensitive characteristics such as political views, religious beliefs, or sexual orientation from physical features; and techniques that exploit psychological vulnerabilities or use subliminal manipulation to influence behaviour. These are not grey areas. They are hard prohibitions with significant penalties.
Check your planned deployment against each of the eight prohibitions. If you are unsure whether a feature falls within scope, that uncertainty is itself a reason to seek legal advice before launch. The EU AI Act compliance guide on agentliability.eu provides a plain-English walkthrough of each prohibition.
Even if your deployment clearly sits outside Article 5, document the check. A record of having assessed your deployment against the prohibited practices is useful with both insurers and regulators.
Step 3: Review your vendor's instructions-of-use and acceptable use policy
Every major AI provider publishes terms of service and an acceptable use policy. Read them before you deploy, not after. These documents tell you what use cases the vendor permits, what they prohibit, what liability they disclaim, and under what circumstances they can terminate your access.
This matters for insurance in a specific way. If your intended use case is prohibited by your vendor's terms, you may be operating outside the permitted scope of the tool. That creates two problems: a potential breach of your vendor agreement, and an argument from your insurer that the deployment was not compliant with the vendor's instructions, which can affect how a claim is handled.
Document what you found. Note the date you reviewed the terms, the version of the policy you reviewed, and any relevant restrictions or permissions. If the vendor's terms prohibit your intended use case, stop and find a different approach. This is a thirty-minute step that can prevent a much larger problem later.
Step 4: Build your AI deployment register
An AI deployment register is a one-to-two page document listing every AI tool or agent your business uses. For each entry, record five things: what the tool does in plain English, who it interacts with (internal staff only, or customers and third parties), what it can do autonomously without a human reviewing or approving its output, what oversight is in place, and what documentation you hold from the vendor.
This document is the central object in your insurance and compliance conversations. Your broker will need it. If a regulator asks questions about your AI deployments, this is where you start. And if something goes wrong, it is the document that shows you had a systematic view of your AI use rather than an informal collection of tools deployed without thought.
Two hours is a realistic estimate for a first version. Update it every time you deploy a new tool, switch on a new feature, or change how an existing tool is used. Treat it as a living document, not a one-time exercise.
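If you want the register to be machine-readable as well as human-readable, one entry per tool as a simple record works well. A minimal sketch, with the five fields from this step plus a review date (the example entry is invented):

```python
from dataclasses import dataclass, asdict

@dataclass
class RegisterEntry:
    """One row of an AI deployment register (mirrors the five items above)."""
    tool: str                # what the tool does, in plain English
    interacts_with: str      # internal staff only, or customers and third parties
    autonomous_actions: str  # what it can do without human review or approval
    oversight: str           # who monitors it and how
    vendor_docs: str         # what documentation you hold from the vendor
    last_reviewed: str       # date this entry was last checked

# Illustrative entry, not a real deployment.
entry = RegisterEntry(
    tool="Drafts first-pass replies to inbound support emails",
    interacts_with="Internal staff only; all output reviewed before sending",
    autonomous_actions="None; every draft requires human approval",
    oversight="Support lead reviews a weekly sample of drafts",
    vendor_docs="Vendor ToS v3.2 and acceptable use policy, saved 2026-01-10",
    last_reviewed="2026-02-01",
)

# asdict() turns the entry into plain data, ready to export for a broker.
print(asdict(entry)["tool"])
```

Keeping the register in a structured form like this also makes the quarterly update in Step 9 faster: each review is a pass over a fixed set of fields rather than a rewrite of a prose document.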
Step 5: Contact your broker before launch
Send your broker a written disclosure of the deployment before you go live. Not a phone call. A written message, by email, so there is a record. Be specific: name the tool or agent, describe what it does, state what it can decide without human review, and say who it interacts with. Attach or summarise your deployment register.
Then ask three questions and request written responses to each. First: does our professional indemnity policy respond if this AI agent makes an error that causes a client financial loss? Second: does our cyber policy cover AI operational mistakes that do not involve a data breach? Third: have any AI exclusions been added to our policies at the last renewal? Ask them to check the current wording explicitly.
Most brokers will not have a ready answer to all three questions. That is genuinely fine. What matters is that you have raised it, the questions are documented, and your broker is actively seeking written coverage opinions from the relevant insurers. "We think you are probably covered" is not a coverage opinion. A written response from the insurer is. For a deeper guide to this conversation, see what to tell your insurance broker about AI agents.
Step 6: Check for AI exclusions on your existing policies
The Lloyd's Market Association introduced AI exclusion language (LMA5566) in 2023. Since then, AI-related exclusions or carve-outs have been appearing in tech professional indemnity and cyber policy renewals across the market. These changes do not always arrive with a covering note explaining what changed. They appear in the renewal schedule or as endorsements, and brokers do not always flag them proactively.
Pull the current wording for each policy that could be relevant: professional indemnity, cyber, product liability if applicable, and any technology-specific cover. Compare the current wording to the version from the prior year. Look specifically for language referencing AI, machine learning, automated systems, algorithmic outputs, or large language models. If exclusions exist, note them explicitly and ask your broker what, if anything, bridges the gap.
This step is worth doing even if your broker has just told you that you are covered. Coverage opinions given without checking the current wording for AI exclusions are not reliable. The wording is what matters, not the broker's memory of last year's renewal. For a full guide to common AI exclusion patterns, see AI policy exclusions: a guide for SME operators.
Step 7: Document your human oversight arrangement
Who is responsible for monitoring this AI agent? What training do they have? What does the escalation path look like if the agent produces a wrong or harmful output? Write this down as part of your deployment documentation.
EU AI Act Article 26(2) requires deployers in scope for the regulation to assign human oversight to named persons with the competence and authority to act. Even if your specific deployment is not formally in scope for Article 26, having a named owner and a documented escalation path protects you with insurers and regulators alike. It demonstrates that your deployment was not just switched on and forgotten.
The oversight document does not need to be elaborate. A paragraph naming the responsible person, describing what they are monitoring, and explaining what happens if they identify a problem is enough to start. Update it when staff responsibilities change. If the agent operates overnight or outside business hours, document that too, and explain what triggers a review the next morning.
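A sketch of such an oversight record, kept alongside the deployment register (every name and schedule below is a placeholder):

```python
# Illustrative oversight record; names and schedules are placeholders only.
OVERSIGHT = {
    "agent": "support-drafting-agent",
    "owner": "Jane Example, Head of Customer Support",
    "monitors": "weekly sample of agent drafts; daily check of error alerts",
    "escalation": [
        "owner pauses the agent via the vendor dashboard",
        "incident is recorded in the deployment register",
        "broker is notified if the incident changes the agent's scope",
    ],
    "out_of_hours": "agent runs overnight; owner reviews the overnight "
                    "log first thing the next morning",
    "last_updated": "2026-02-01",
}
```

One paragraph of prose carrying the same six facts is equally acceptable; what matters is that a named person, a monitoring routine, and an escalation path are all written down and dated.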
Step 8: Set up basic audit logging before launch
Every autonomous action your AI agent takes should produce a log entry. Each entry should capture a timestamp, the input the agent received, the output it produced, and the action it took. This is the evidence you need if something goes wrong.
Without logs, you cannot reconstruct what happened. You cannot defend against a claim where the claimant asserts the agent said something it may or may not have said. You cannot satisfy an insurance adjuster who needs to understand the sequence of events. And you cannot satisfy a regulatory investigator who wants to know what the agent was doing during a specific window.
Minimum retention for audit logs is one year. For systems operating in regulated industries, three years is a more defensible position. Store logs somewhere the AI agent itself cannot modify. A separate database or log management service is appropriate. The cost is negligible. The protection is material.
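As a minimal sketch of the entry described above, each autonomous action can be appended as one JSON line to an audit file (field names here are illustrative; in production you would send these entries to a separate log service or database the agent itself cannot write to or delete from):

```python
import json
import time
import uuid

def log_agent_action(log_path: str, agent: str, input_text: str,
                     output_text: str, action: str) -> dict:
    """Append one audit entry (timestamp, input, output, action) as a JSON line."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "agent": agent,
        "input": input_text,
        "output": output_text,
        "action": action,
    }
    # Append-only: existing entries are never rewritten by this function.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

The append-only JSON Lines format keeps each entry independently parseable, which matters when an adjuster or investigator asks for a specific time window: you can extract the relevant lines without touching the rest of the file.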
Step 9: Set a calendar reminder for the first post-deployment review
Schedule a thirty-minute review one month after launch. This is the step most operators skip, and it is the step that turns a one-time checklist into a defensible ongoing governance posture.
At the one-month review, revisit your autonomy envelope against what the agent actually did in its first month of operation. Check whether any near-miss incidents occurred that were not formally logged as incidents. Review whether the scope of the agent's actions has crept beyond what you originally documented. Update the deployment register to reflect what you learned.
Notify your broker of any significant change in the agent's scope or use. An agent that started as an internal drafting tool and has since been given authority to send emails directly to customers is a materially different deployment from what you originally disclosed. That change needs to be communicated. Most policies require notification of material changes. Doing it proactively keeps you in good standing. Doing it retroactively during a claim is a much harder conversation.
After the first month, set quarterly reviews as a standing calendar item. The effort per review is small. The cumulative documentation is significant.
Frequently asked questions
What is the single most important thing to do before deploying an AI agent?
Write down exactly what the agent is allowed to do, at what thresholds it must stop and ask a human, and what it absolutely cannot do. This document, called the autonomy envelope or scope of authorised action, is the foundation of every other step. Without it your insurance broker cannot write a policy that covers the agent's actions, your team cannot oversee the agent meaningfully, and if something goes wrong you cannot demonstrate what the agent was supposed to do versus what it actually did.
Do I need specialist AI insurance before deploying my first AI agent?
Not necessarily, but you need to review your existing cover before deployment, not after. The key step is to ask your broker the three questions from Step 5: does our professional indemnity policy respond to AI agent errors, does our cyber policy cover AI operational mistakes that do not involve a data breach, and have any AI exclusions been added at renewal. If the answer to any of these is no or unclear, you need to understand the gap before you go live. Specialist AI agent cover is becoming available in 2026 for higher-risk deployments. For simpler deployments, documentation and a broker conversation may be sufficient to start.
What is an AI deployment register and why do I need one?
An AI deployment register is a one-to-two page document listing every AI tool or agent your business uses, what each one does, who it interacts with, what it can do without human approval, and what documentation you have from the vendor. It serves three purposes simultaneously: it gives your broker the information needed to assess your coverage, it demonstrates to any regulator that you have thought about your AI deployments, and it is the foundation of the risk assessment that the EU AI Act increasingly expects operators to maintain. Two hours to create. Significant protection if something goes wrong.
What if I am already running an AI agent and have not done this yet?
Start the checklist from the beginning, but treat it as a retroactive documentation exercise rather than a pre-deployment one. The most urgent step is the broker conversation: disclose the deployment in writing immediately, ask for a written coverage opinion, and document the response. Then work through the nine steps in order. Retroactive documentation is more time-consuming than prospective documentation, but it is far better than undocumented exposure.
References
- Regulation (EU) 2024/1689, Article 5, prohibited AI practices, in force 2 February 2025.
- Regulation (EU) 2024/1689, Article 26(2), deployer obligation to assign human oversight persons.
- Lloyd's Market Association, LMA5566, artificial intelligence exclusion clause, 2023.
- Moffatt v. Air Canada, 2024 BCCRT 149, Civil Resolution Tribunal of British Columbia, February 2024.
- AIUC-1 reference standard, AI Underwriting Company, 2025, scope of authorised action as underwriting precondition.