EU AI Act for SMEs: what your obligations actually are in 2026.
Most guides to the EU AI Act are written for large enterprises with dedicated compliance teams. This one is written for the founder or operations manager who just wants to know what they must do before going live with an AI agent. Here is what actually applies to you, in the order it matters.
Key takeaways
- The EU AI Act applies to any deployer putting an AI system into service in the EU, regardless of company size. Being an SME does not exempt you from the core obligations.
- The Article 5 prohibitions have been in force since 2 February 2025. If your AI agent does any of the prohibited things, you are already in breach.
- Article 50 transparency (telling users they are talking to an AI) applies to all deployers. The exact application date depends on the Omnibus outcome, but the obligation is coming either way.
- Most SMEs do not deploy high-risk AI systems. If your agent handles employment decisions, credit scoring, or essential services for EU residents, check carefully.
- The Act includes specific SME support provisions: reduced fees, sandbox priority access, and national authority guidance. These ease the compliance pathway without reducing the obligations themselves.
Start here: what kind of deployer are you?
The EU AI Act (Regulation (EU) 2024/1689) uses the word "deployer" for an organisation that uses an AI system in a professional context. If your business uses an AI chatbot for customer service, an AI tool for hiring screening, an AI agent that sends emails on your behalf, or any AI system that takes actions affecting people, you are a deployer under the regulation.
Being a deployer is different from being a provider. A provider is the company that builds and places the AI system on the market: OpenAI, Anthropic, Microsoft, Salesforce. Most SMEs are deployers, not providers. As a deployer, your obligations are different from and generally lighter than the obligations that providers face. But they are not zero.
The deployer question also has an important limit. If you take a commercially available AI system and substantially modify it, or if you put it on the market under your own name or trademark as a distinct product, you may become a provider under Article 25(1) of the regulation, with the heavier obligations that entails. Most SMEs that use off-the-shelf tools via API or SaaS subscription remain deployers throughout.
What is already in force: Article 5 prohibitions
The prohibitions in Article 5 of Regulation (EU) 2024/1689 came into force on 2 February 2025. They apply now, to everyone, regardless of company size and regardless of the Omnibus delay proposal. If your AI agent does any of the following things, you are in breach of the regulation today.
You cannot use an AI system that employs subliminal techniques beyond a person's consciousness to manipulate their behaviour in a way that causes or is likely to cause significant harm. This applies to AI that manipulates purchasing decisions, emotional states, or behaviour through mechanisms that users are not aware of and would object to if they were. A standard recommendation engine is not prohibited. An AI specifically designed to exploit psychological vulnerabilities to drive behaviour against the user's interests is.
You cannot use an AI system that exploits the vulnerabilities of specific groups, including children, the elderly, or people with disabilities, to distort their behaviour in a way that causes significant harm. If your product serves any of these groups and your AI agent's design specifically targets their vulnerabilities, you need a legal review immediately.
As a private business, you are unlikely to be operating a social scoring system in the sense Article 5 addresses, since that prohibition is primarily aimed at public authorities. But if you are building a platform that evaluates individuals across multiple life domains and restricts their access to services based on that evaluation, the prohibition is relevant.
Real-time remote biometric surveillance in publicly accessible spaces is prohibited except under narrow law enforcement exceptions. For the vast majority of SMEs, this is not relevant. If you are deploying AI-powered cameras or facial recognition in publicly accessible locations, it is.
What is coming: Article 50 transparency
Article 50 of Regulation (EU) 2024/1689 requires deployers of AI systems designed to interact with natural persons to inform those persons that they are interacting with an AI. The disclosure must be made at the beginning of each interaction and must be clear, not buried in terms and conditions. The obligation applies to chatbots, AI customer service agents, AI voice assistants, and any other AI system that conducts a conversational interface with a person.
The application date for Article 50 depends partly on the Omnibus proposal. New AI systems will be subject to this obligation from the application date, which is 2 August 2026 under the current regulation or December 2027 if the Omnibus delay for new systems is adopted. Systems already in use when the regulation applies have a transition period. But the direction of travel is clear: every AI-to-human interaction in your business needs a disclosure mechanism.
The disclosure does not need to be elaborate. A statement at the start of the conversation that reads "You are now chatting with an AI assistant" or equivalent satisfies the core requirement. What does not satisfy it is an AI that presents itself as a human, uses a human name without disclosure, or actively denies being an AI when asked: an agent that claims to be human defeats the purpose of the disclosure and will not meet the Article 50 standard.
Article 50 also covers AI-generated synthetic content. If your business uses AI to generate or manipulate image, audio, or video content that depicts real persons, places, or events in a way that could appear authentic (a deepfake), the content must be disclosed as artificially generated or manipulated. The labelling mechanism is a provider obligation at the system level, but deployers who publish synthetic content have corresponding disclosure obligations when they use the output.
The high-risk question: does your AI fall under Chapter III?
Chapter III of Regulation (EU) 2024/1689 contains the full compliance framework for high-risk AI systems: risk management systems, technical documentation, conformity assessments, human oversight arrangements, incident notification, and fundamental rights impact assessment (FRIA) requirements. This is the part of the regulation that most guides focus on. For most SMEs, it does not apply.
Article 6 and Annex III define the categories of high-risk AI. The categories are: biometric identification and categorisation, critical infrastructure management, education (AI making consequential educational decisions about individuals), employment and worker management, essential services including credit scoring and insurance risk assessment, law enforcement, migration and asylum, administration of justice, and democratic processes. If your AI agent does none of these things for EU residents, you are not deploying a high-risk AI system under Chapter III.
Three categories deserve particular attention for SMEs. First, employment: if you use AI to screen CVs, rank candidates, monitor employees, or make promotion recommendations, you may be in the employment category. The test is whether the AI is making or substantially influencing decisions about access to employment or working conditions for natural persons. Second, credit and insurance: if your platform uses AI to assess creditworthiness or insurance risk for individuals, you are likely in the essential services high-risk category. Third, education: if your product uses AI to evaluate, grade, or make access decisions about individual learners, review the education category carefully.
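The mapping exercise for these categories can be recorded very simply. A hedged sketch, in which the flag names and category labels are simplified assumptions: the actual legal test in each Annex III category is more nuanced than a boolean, so treat any positive result as a prompt for legal review, not a conclusion.

```python
# Illustrative one-page Annex III mapping for an SME deployment.
# Flag names and labels are simplifications for this sketch.

ANNEX_III_FLAGS = {
    "employment_decisions": "employment and worker management",
    "credit_or_insurance_scoring": "essential services (credit/insurance)",
    "education_decisions": "education",
    "biometric_identification": "biometric identification and categorisation",
}

def map_high_risk(deployment: dict[str, bool]) -> list[str]:
    """Return the Annex III categories a deployment may fall under."""
    return [label for flag, label in ANNEX_III_FLAGS.items() if deployment.get(flag)]

# Example: a customer-service chatbot that takes no consequential decisions.
chatbot = {
    "employment_decisions": False,
    "credit_or_insurance_scoring": False,
    "education_decisions": False,
    "biometric_identification": False,
}
```

An empty result for a deployment like `chatbot` is the documented basis for the "not high-risk" conclusion; any non-empty result means the Chapter III question needs proper analysis.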
If your AI agent does fall into a high-risk category, the full Chapter III compliance programme applies, and the deadline for that programme is 2 August 2026 under the current regulation (or December 2027 under the proposed Omnibus delay). The compliance programme is substantial: a risk management system under Article 9, technical documentation under Article 11, logging and monitoring procedures, human oversight arrangements, and a FRIA where Article 27 requires one. If you are in this situation and have not started compliance preparation, begin now. For the complete Article 26 deployer obligations, see the Article 26 guide on the sister site.
The SME-specific provisions
The EU AI Act includes provisions that specifically recognise the position of smaller operators. Article 49(4)(b) provides that fees for registration in the EU database are reduced for SMEs. The definition of SME for this purpose follows Commission Recommendation 2003/361/EC: fewer than 250 employees and either annual turnover not exceeding EUR 50 million or annual balance sheet total not exceeding EUR 43 million.
Articles 57 to 63 establish a framework for national AI regulatory sandboxes. These sandboxes allow operators to test AI systems under regulatory supervision before full market deployment, with liability protections during the testing period. The sandboxes are primarily relevant for SMEs developing AI products rather than for those deploying existing commercial AI systems. Article 62 requires national competent authorities to give priority access to SMEs and start-ups.
Article 64(2) requires market surveillance authorities to pay particular attention to the interests of SMEs and to ensure that compliance measures are proportionate to the operator's size and resources. In practice this means that for initial regulatory interactions following the regulation's full application, SMEs are less likely than large enterprises to be the primary target of enforcement actions, though the obligations are identical.
Using commercial AI tools: what your role is
Most SMEs deploy AI not by building their own models but by using commercial tools: Microsoft Copilot, Salesforce Einstein, Google Workspace AI, standalone chatbot platforms, or similar. When you use a commercial tool, you are a deployer and the tool provider is the provider under the regulation.
Your obligations as a deployer of a commercial AI tool include: using the system in accordance with the provider's instructions for use (Article 26(1)); maintaining human oversight proportionate to the risk of the system; monitoring the system's operation; notifying the provider if you observe something that suggests a risk to health, safety, or fundamental rights (Article 26(4)); and informing users that they are interacting with an AI if the system is conversational (Article 50).
You are also responsible for any use of the AI system that goes beyond the scope the provider authorised. If a commercial AI agent is marketed as a customer service tool and you deploy it to make consequential employment decisions, you have gone beyond the authorised scope. That use may bring you within the high-risk category even though the tool itself is not marketed as a high-risk system.
A practical first step is to read the instructions for use and the system cards that commercial AI providers publish. These documents describe the intended use, known limitations, and prohibited uses. An SME deployer who has read and followed the provider's instructions for use has a better starting position in any regulatory or insurance context than one who has not.
A five-point SME compliance check
For an SME deploying AI agents in 2026, a five-point check covers the most significant obligations without requiring enterprise-scale compliance infrastructure.
First, confirm your AI agent does not perform any Article 5 prohibited practice. Review the prohibited categories against your deployment. This is a yes or no question, and if the answer is not clearly no, get a legal opinion.
Second, map your AI agent against the Annex III high-risk categories. If you are not in a high-risk category, document this briefly: a one-page mapping is sufficient. If you are, the Chapter III compliance programme applies.
Third, add an AI disclosure to every conversational AI interface. Implement Article 50 disclosure before the application date. It is low cost and removes the transparency risk entirely.
Fourth, document the named owner of each AI agent, the tools and data it can access, and the monitoring cadence. This is the minimum governance documentation a regulator, insurer, or counterparty will ask for.
Fifth, review your insurance. Read our guide on whether your business insurance covers AI mistakes and send your broker the written request for each policy. The EU AI Act does not require you to hold AI insurance, but the Product Liability Directive and the general liability exposure from deploying autonomous agents make it a serious question to answer before an incident, not after.
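The governance documentation in point four can be sketched as a minimal structured record. The field names below are illustrative assumptions, not a format mandated by the regulation; the point is simply that each agent has a named owner, a known scope of access, and a stated monitoring cadence.

```python
# Illustrative minimum governance record for one deployed AI agent.
# Field names are assumptions for this sketch, not a regulatory schema.
from dataclasses import dataclass, field

@dataclass
class AgentGovernanceRecord:
    agent_name: str
    named_owner: str                  # an accountable person, not a team alias
    tools_accessible: list[str]       # what the agent can do
    data_accessible: list[str]        # what the agent can see
    monitoring_cadence: str           # e.g. "weekly log review"
    annex_iii_categories: list[str] = field(default_factory=list)  # empty = not high-risk

record = AgentGovernanceRecord(
    agent_name="support-bot",
    named_owner="ops-manager@example.com",
    tools_accessible=["email-send", "crm-read"],
    data_accessible=["customer tickets"],
    monitoring_cadence="weekly log review",
)
```

One record per agent, kept current, is the kind of artefact a regulator, insurer, or counterparty can actually be shown.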
For the full operator workflow, see the three-question diagnostic. For information on dedicated AI coverage as the market opens in Europe, see agentinsured.eu. For the certification pathway that strengthens both compliance and coverage positioning, see agentcertified.eu.
Frequently asked questions
Do SMEs have obligations under the EU AI Act?
Yes. The regulation applies to any deployer operating in the EU, regardless of size. The Article 5 prohibitions have been in force since February 2025. The Article 50 transparency obligations apply to all deployers. Most SMEs are not in the high-risk category, but they still have real obligations.
What does the EU AI Act prohibit for SMEs?
Article 5 prohibits subliminal manipulation, exploitation of vulnerable groups, and certain biometric surveillance practices. These prohibitions apply to everyone and have been in force since 2 February 2025.
What is the Article 50 transparency obligation for SMEs?
Article 50 requires that users be informed at the start of an interaction that they are talking to an AI. This applies to all conversational AI deployments. The application date is 2 August 2026 (or December 2027 under the proposed Omnibus delay for new systems).
Does the EU AI Act apply differently to SMEs than to large companies?
The core obligations are the same. SME-specific provisions include reduced registration fees (Article 49(4)(b)), priority access to regulatory sandboxes (Articles 57 to 63), and a requirement for national authorities to provide SME-specific guidance (Article 62). These ease the pathway without reducing the obligations.