AI Act 2026: What an SME Needs to Prepare Before August
On August 2, 2026, most rules of the AI Act begin to apply. This guide explains what an SME must review: transparency, high risk, AI literacy, documentation, and providers.
The European AI Act is no longer a theoretical topic in 2026. According to the official European Commission calendar, most rules of the Artificial Intelligence Regulation will begin to apply on August 2, 2026, and enforcement will start at the national and European level.
For an SME, this does not mean that all AI projects must stop. It means that you must stop using AI informally, without an inventory, without designated responsible parties, and without knowing which provider processes which data.
The good news: most day-to-day uses of AI in an SME will not be considered high risk. The bad news: if you haven't identified them, you cannot prove it.
The Dates That Matter
The official AI Act calendar marks a progressive application:
| Date | What Changes |
|---|---|
| February 2, 2025 | Definitions, AI literacy, and prohibitions apply |
| August 2, 2025 | Governance rules and general-purpose model obligations apply |
| August 2, 2026 | Most rules, transparency, and Annex III high risk apply |
| August 2, 2027 | Rules for high-risk AI embedded in regulated products apply |
By May 2026, an SME should already be preparing its inventory and basic controls. Waiting until August is too late.
First: Know Where You Use AI
The first task is not legal, it is operational. Create a simple inventory of all AI uses:
- ChatGPT, Claude, Gemini, or other assistants used by employees
- Chatbots on the website, WhatsApp, or Instagram
- Marketing tools that generate content
- Commercial scoring systems
- Email automation
- CV screening or candidate selection tools
- Agents connected to CRM, ERP, or databases
- Transcription or call summary tools
- Computer vision systems in warehouses, stores, or production
Every use should have an owner, a provider, a purpose, the data processed, and a preliminary risk level.
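A minimal sketch of what one inventory entry could look like, here in Python. The field names are illustrative, not mandated by the Regulation, and a shared spreadsheet works just as well:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseEntry:
    """One row of the internal AI inventory.
    Field names are illustrative; adapt them to your own register."""
    name: str                          # e.g. "Website chatbot"
    owner: str                         # internal responsible party
    provider: str                      # vendor or model provider
    purpose: str                       # what the system is used for
    data_processed: list[str] = field(default_factory=list)
    risk_level: str = "unclassified"   # preliminary; refined later

inventory = [
    AIUseEntry(
        name="Website chatbot",
        owner="Customer Support Lead",
        provider="Example AI Vendor",  # hypothetical name
        purpose="Answer pre-sales questions",
        data_processed=["name", "email", "chat history"],
    ),
]
```

The exact format matters less than having one place where every use is listed with the same fields.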
Second: Classify the Risk
The AI Act uses a risk-based approach. Not everything has the same obligations.
| Category | Examples for an SME | What to do |
|---|---|---|
| Minimal risk | Spam filters, help drafting internal texts | Best practices and internal policy |
| Transparency | Chatbots that talk to clients, AI-generated content | Clearly inform users that AI is involved |
| High risk | AI in recruitment, employee evaluation, or access to essential services | Review the strict obligations that apply |
| Prohibited | Manipulation, social scoring, certain biometric uses | Do not use |
If you use AI in employment, selection, employee evaluation, credit, education, health, or essential services, it is advisable to review it with special care.
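If the inventory lives in code, a first-pass triage can flag the entries that deserve that closer look. This is only a screening aid; the keyword list below is illustrative and does not replace an assessment against Annex III of the AI Act:

```python
# First-pass triage only: it flags entries for closer review and does
# not replace a legal assessment against Annex III of the AI Act.
HIGH_RISK_SIGNALS = {
    "recruitment", "hiring", "employee evaluation",
    "credit", "education", "health", "essential services",
}

def preliminary_risk(purpose: str) -> str:
    """Return a rough risk label from the stated purpose."""
    text = purpose.lower()
    if any(signal in text for signal in HIGH_RISK_SIGNALS):
        return "review-as-high-risk"   # escalate to legal review
    return "check-transparency"        # still verify disclosure duties
```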
Transparency: Disclosing When AI is Present
The European Commission explains that the AI Act introduces transparency obligations so that people know when they are interacting with a machine or when content has been generated or manipulated by AI.
For an SME, this particularly affects:
- Customer service chatbots
- Voice agents
- Automated emails that could pass as manually written
- Images, videos, or audio generated by AI
- Informative texts published with AI assistance
The practical rule: if the user might think they are speaking to a person, clarify that AI is involved. And if the content could mislead, label it.
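One low-effort way to implement this in a chatbot is to build the disclosure into the opening message itself, so it cannot be skipped. A minimal sketch; the wording is an example, not legally vetted text:

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "A human colleague can take over at any time."
)

def first_bot_message(greeting: str) -> str:
    """Prepend the AI disclosure to the chatbot's opening message,
    so the user knows from the first turn that no human is replying."""
    return f"{AI_DISCLOSURE}\n\n{greeting}"

print(first_bot_message("Hi! How can I help you today?"))
```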
AI Literacy: Training the Team
Article 4 of the Regulation requires providers and deployers to take measures to ensure a sufficient level of AI literacy among their staff and the people who operate AI systems on their behalf.
For an SME, this means it is not enough to simply contract a tool. You must give the team that uses it at least basic training.
Reasonable training should cover:
- What AI can and cannot do
- The risk of hallucinations
- What data should not be entered
- How to review responses
- When to escalate to a person
- How to use prompts safely
- How to act in case of errors
This does not have to be a complex certification. But it must exist, be documented, and reflect how AI is actually used.
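Documenting that training happened can be as simple as keeping a dated record per session. A sketch, with illustrative fields:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    """Evidence that an AI literacy session actually took place.
    Fields are illustrative; keep whatever an auditor can read."""
    held_on: date
    attendees: list[str]
    topics: list[str]
    trainer: str
    materials_ref: str   # link or path to the slides or guide used

training_log = [
    TrainingRecord(
        held_on=date(2026, 5, 12),
        attendees=["A. Garcia", "B. Rossi"],
        topics=["hallucinations", "what data not to enter"],
        trainer="External consultant",
        materials_ref="intranet/ai-literacy-v1",  # hypothetical path
    ),
]
```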
Documentation: Leaving a Trace of What Matters
Compliance becomes much easier if you document from the start.
For every relevant AI system, keep records of:
- System purpose
- Provider and contract
- Type of data processed
- Affected persons or groups
- Identified risks
- Mitigation measures
- Internal responsible party
- Usage instructions
- Human review policy
- Available logs or evidence
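One way to keep these records consistent is to generate them from a single structure, so every system is documented with the same fields. A sketch that renders a record as a Markdown page; the keys mirror the list above and are illustrative:

```python
def render_system_record(record: dict) -> str:
    """Render one system's record as a Markdown page so it can live
    alongside the rest of the internal documentation."""
    lines = [f"# {record.get('purpose', 'Untitled system')}"]
    for key in ("provider", "data_processed", "identified_risks",
                "mitigations", "owner", "human_review_policy"):
        lines.append(f"- **{key}**: {record.get(key, 'TBD')}")
    return "\n".join(lines)
```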
If you use documentation tools like Polp, it may make sense to centralize internal policies, procedures, FAQs, provider contracts, and usage guides so the team always consults the correct version.
Providers: Not All Are Equal
Part of the risk comes from third parties. If you contract an AI tool, review:
- Where data is processed
- Whether they offer servers in the EU
- Whether they sign a data processing agreement (DPA)
- Whether they use your data to train models
- What logs they keep and for how long
- How they allow data deletion
- What security guarantees they offer
- Whether they explain their use of general-purpose models
This connects directly with GDPR. The AI Act does not replace GDPR: it supplements it.
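To compare providers against these questions consistently, a simple structure is enough. A sketch; the question keys are illustrative and should be answered from the DPA, the provider's security documentation, or a direct questionnaire:

```python
PROVIDER_QUESTIONS = [
    "data_processed_in_eu",
    "dpa_signed",
    "no_training_on_customer_data",
    "log_retention_documented",
    "deletion_process_documented",
    "security_guarantees_documented",
]

def open_questions(answers: dict[str, bool]) -> list[str]:
    """Return the checklist items a provider has not yet satisfied."""
    return [q for q in PROVIDER_QUESTIONS if not answers.get(q, False)]
```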
What an SME Must Prepare Before August 2, 2026
Practical Checklist:
- AI tool inventory.
- Preliminary risk classification.
- Internal AI usage policy.
- Basic team training.
- Transparency notices in chatbots and agents.
- Review of providers and contracts.
- Record of decisions and responsible parties.
- Human review procedure.
- Protocol for errors, complaints, or claims.
- Improvement plan after August.
You do not need to become a multinational corporation with a huge legal department. You need to know what you use, for what purpose, with what data, and under what controls.
How We Can Help
At Navel Digital, we help SMEs implement AI with technical and regulatory criteria: use case inventory, secure architecture, suitable providers, documentation, traceability, and team training.
The AI Act should not be seen as a brake. When managed well, it is an opportunity to use AI with more confidence, more control, and less improvisation.