The European Union Artificial Intelligence Act is the most detailed piece of AI governance legislation in the world. For deployers, the part that matters most is short, stricter than expected, and enters into application in less than four months. This guide sets out the operator regime in one place and lists the documents every European deployer should hold on file.
Key takeaways
- The deployer regime for high risk systems applies from 2 August 2026. A second tier of provisions tied to regulated products applies from 2 August 2027.
- Article 26 sets seven continuing duties. None of them are delegable through contract.
- Human oversight under Article 14 is a design requirement. It cannot be bolted on after deployment.
- Penalty exposure for an Article 26 breach reaches EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher.
- The minimum operator file is a risk record, an oversight register, an instructions-for-use map, a logging schedule, and an incident protocol.
What the operator regime actually is
Most public conversation about the AI Act focuses on providers, the bodies that build and place AI systems on the market. The text reserves a separate and shorter body of rules for deployers, the bodies that use those systems under their own authority. Article 3(4) defines a deployer as any natural or legal person, public authority, agency, or other body using an AI system under its authority in the course of a professional activity. Purely personal, non-professional use is excluded. Everything else is inside the regime.
The regime applies only to the subset of systems classified as high risk under Article 6 and Annex III. For most organisations in 2026, that means systems used in recruitment, worker management, creditworthiness assessment, access to essential services, law enforcement, migration and border control, and the administration of justice. The list is deliberately procedural. A generic large language model used internally for drafting emails is not high risk. The same model wrapped inside an autonomous agent that screens candidates or routes social benefit applications is.
The operator regime is not a one-time compliance checklist. It is a continuing duty of care that starts the moment a system is put into service and ends only when the system is retired. Article 26 does not require deployers to build or test the model. It requires them to use it within its documented purpose, supervise it with named humans, verify that the inputs it receives are relevant, keep its logs for a minimum period, report incidents, and tell affected workers before deploying it in an employment context.
The seven duties of Article 26
The core obligations are set out in seven sub-paragraphs. Each one attaches to the deployer, each one is owed to the supervisor, and each one cannot be removed by contract. They are listed below in the order they appear in the text.
1. Use within the instructions for use
Article 26(1) requires deployers to take appropriate technical and organisational measures to use the system within the parameters of the provider's instructions for use. In practice, this means reading the instructions. Most providers of high risk systems publish a technical documentation package, a list of intended purposes, and a set of operational limits. A deployer that runs the system outside those limits, for example using a credit scoring model trained on one jurisdiction to score applicants in another, is not using it in accordance with its instructions and will carry the consequences.
2. Human oversight by named persons
Article 26(2) requires deployers to assign human oversight to natural persons who have the necessary competence, training, authority, and support to exercise it. Article 14 adds the design requirement: the system itself must be built so that oversight is possible. Deployers inherit that design and must staff it. Competence is not a vague category. National supervisors have begun to indicate that they expect documented training, named individuals, and an escalation path that reaches a senior decision maker.
3. Input data relevance
Article 26(4) requires deployers, to the extent they control input data, to ensure the data is relevant and sufficiently representative in view of the system's intended purpose. The qualifying clause matters. Where the provider controls input data, the duty sits upstream. Where the deployer controls it, for example by connecting its own customer database to an autonomous agent, the deployer is accountable for the relevance of the data the agent sees.
4. Monitoring and incident duty
Article 26(5) requires deployers to monitor the operation of the system, report serious incidents to the provider and, where applicable, the market surveillance authority, and suspend use where monitoring reveals a risk within the meaning of Article 79. Serious incident is not a lay term but a defined one. Article 3(49) gives the formal definition, and it includes death, serious harm to health, significant and irreversible disruption of critical infrastructure, and serious damage to property or the environment.
5. Log retention
Article 26(6) requires deployers to keep automatically generated logs for a period appropriate to the intended purpose of the system, and at least six months unless Union or national law provides otherwise. The six-month floor is a minimum. In regulated sectors such as finance and healthcare, longer retention is usually required by sectoral law. The deployer must map the log fields to its retention infrastructure and ensure they are kept in a form that survives system migration.
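The "six months or longer if sectoral law requires" rule can be sketched as a small retention check. This is an illustrative sketch only: the function and field names are assumptions, not anything prescribed by the Regulation.

```python
from datetime import date
from typing import Optional

# Illustrative sketch of an Article 26(6) retention check.
# The six-month floor is statutory; sectoral law may require longer.
STATUTORY_FLOOR_DAYS = 183  # roughly six months

def retention_days(sectoral_minimum_days: Optional[int] = None) -> int:
    """Effective retention period: the statutory floor, or a longer sectoral minimum."""
    if sectoral_minimum_days is None:
        return STATUTORY_FLOOR_DAYS
    return max(STATUTORY_FLOOR_DAYS, sectoral_minimum_days)

def purge_eligible(log_date: date, today: date,
                   sectoral_minimum_days: Optional[int] = None) -> bool:
    """A log entry may only be purged once the effective retention period has elapsed."""
    return (today - log_date).days > retention_days(sectoral_minimum_days)
```

The point of the sketch is the `max()` call: a sectoral rule can only lengthen the retention period, never shorten it below the Regulation's floor.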
6. Information to workers
Article 26(7) requires deployers to inform worker representatives and affected workers before putting a high risk system into service in an employment context. This duty is additive to the consultation rights already provided under national labour law and under Directive (EU) 2019/1152 on transparent and predictable working conditions. It is not a one-time notification. It attaches to every new deployment and every material change in purpose.
7. Public sector registration
Article 26(8) requires deployers that are public authorities, Union institutions, or bodies acting on their behalf to register their use of high risk systems in the EU database established under Article 71. The database is being prepared by the Commission and the AI Office. Private deployers are not required to register, but providers are, and the registration is a public record that supervisors and civil society will scrutinise.
Article 27 and the fundamental rights impact assessment
Article 27 adds a separate obligation on a narrower class of deployers. Public bodies, private operators providing public services, and any deployer using a system listed in specific points of Annex III, including credit scoring and life insurance pricing, must complete a fundamental rights impact assessment before first deployment. The assessment covers the process, the persons likely to be affected, the risks of harm, the oversight in place, and the mitigation plan if risks materialise.
Unlike the provider's conformity assessment under Article 43, the fundamental rights impact assessment is a living document. It must be updated whenever any of its elements changes materially, and it must be made available to the supervisor on request. Several national data protection authorities have indicated that they will treat the assessment as the first document they ask for in an enforcement inquiry.
Penalties under Article 99
Article 99 sets the fine ceilings in three tiers. Breaches of the prohibitions in Article 5 carry the highest exposure, at up to EUR 35 million or 7 per cent of worldwide annual turnover, whichever is higher. Operator breaches under Article 26 and other provisions applicable to deployers sit in the second tier at up to EUR 15 million or 3 per cent. The third tier, at up to EUR 7.5 million or 1 per cent, covers the provision of incorrect, incomplete, or misleading information to notified bodies or competent authorities.
Supervisors must take into account the nature, gravity, and duration of the infringement, the size and turnover of the operator, whether other authorities have already imposed penalties for the same facts, and whether the operator cooperated with the investigation. Article 99(6) adds a specific instruction that penalties for SMEs and startups be set with regard to their economic viability. It is not an exemption, but it is an acknowledgement that the same fine applied to a multinational and a ten-person company produces different outcomes.
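The three ceilings all follow the same "whichever is higher" formula, so exposure for a given turnover is a one-line computation. A minimal sketch, with tier names chosen here for illustration; actual fines are set case by case within these maxima.

```python
# Illustrative sketch of the Article 99 fine ceilings.
# Each tier: (fixed ceiling in EUR, share of worldwide annual turnover).
TIERS = {
    "article_5_prohibitions": (35_000_000, 0.07),
    "operator_breaches": (15_000_000, 0.03),      # includes Article 26
    "incorrect_information": (7_500_000, 0.01),
}

def max_exposure(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Ceiling for a tier: the fixed amount or the turnover share, whichever is higher."""
    fixed, share = TIERS[tier]
    return max(fixed, share * worldwide_annual_turnover_eur)
```

For an operator breach, the turnover-based figure overtakes the fixed EUR 15 million ceiling once worldwide annual turnover exceeds EUR 500 million.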
The minimum operator file
A defensible position under Article 26 looks like a file. The file does not need to be large. It needs to be coherent and maintained. The minimum contents for most deployers are the following five documents.
- A risk record. A short description of the system, its intended purpose, its classification under Article 6, and the risks the deployer has identified. This is the deployer's reading of the system, not the provider's documentation.
- An oversight register. A list of the natural persons responsible for oversight, their training, their authority, and the escalation path that reaches a senior decision maker. The register also describes the thresholds that trigger intervention.
- An instructions-for-use map. A mapping between the provider's stated operational limits and the deployer's actual usage. Any deviation is flagged and justified or corrected.
- A logging schedule. A description of what is logged, where the logs are stored, how long they are kept, and how they can be produced on request. The six-month minimum is a floor, not a target.
- An incident protocol. A written procedure for identifying, reporting, and responding to serious incidents under Article 26(5), including the contact details of the provider and the relevant market surveillance authority.
These five documents are not a substitute for the underlying technical and organisational measures, and they are not a replacement for the fundamental rights impact assessment where Article 27 applies. They are the first documents a supervisor, an auditor, or an insurer will ask for. Holding them is not compliance on its own. Not holding them is a reliable path to an enforcement finding.
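The five-document file lends itself to a trivial completeness check. The document identifiers below simply mirror the list above; treating the file as a set is an illustrative convention, not anything the Regulation specifies.

```python
# Illustrative completeness check for the minimum operator file.
# Identifiers mirror the five documents described above.
REQUIRED_DOCUMENTS = {
    "risk_record",
    "oversight_register",
    "instructions_for_use_map",
    "logging_schedule",
    "incident_protocol",
}

def missing_documents(file_on_hand: set) -> set:
    """Return the required documents not yet present in the operator file."""
    return REQUIRED_DOCUMENTS - set(file_on_hand)
```

A check like this is useful precisely because the duties are continuing: it can run on every material change to the deployment, not just once before go-live.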
Interaction with the GDPR
The AI Act does not displace the General Data Protection Regulation. Article 70 explicitly foresees coordination between the two regimes. For deployers processing personal data, the Article 26 duties sit alongside the existing obligations under the GDPR, and in many cases the same documentation serves both. A data protection impact assessment under Article 35 of the GDPR will overlap with a fundamental rights impact assessment under Article 27 of the AI Act, and the two should be designed to complement each other rather than duplicate.
National data protection authorities have been designated as market surveillance authorities for a number of high risk use cases, including biometric systems and systems used in law enforcement. This overlap is deliberate. Where personal data is involved, the deployer should expect the data protection authority to arrive first with questions about AI Act compliance.
Practical preparation between now and August
Four months is tight for a regulation of this scope, but it is enough for a focused deployer to produce the minimum file and rehearse the incident protocol. The order of operations matters. The risk record is the first document because every other document refers to it. The oversight register is second because it names the humans who will carry the remaining duties. The instructions-for-use map is third because it tells the deployer whether the current usage is inside or outside the provider's stated operational limits, and a deviation discovered now is cheaper than one discovered during enforcement.
The logging schedule is usually the most technically demanding document to produce, because it requires coordination between the deployer's data engineering team and the provider. It also requires a decision about long-term storage that may touch existing retention policies. The incident protocol is the last document in the sequence because it depends on all of the others, but it is the one supervisors are most likely to test in an early review.
What to do if the system is already in production
Any high risk system already in production on 2 August 2026 must be compliant on that date. There is no grandfathering period for existing deployments of systems listed in Annex III. The practical consequence is that deployers with systems already in the field should start by producing the risk record and the oversight register, because these do not require changes to the system itself. The logging schedule and the incident protocol can then be produced in parallel. The instructions-for-use map is a diagnostic document: if it reveals a deviation from the provider's stated operational limits, the deployer has a decision to make about whether to bring the usage back within the limits or to accept reclassification under Article 25.
Related reading
For a close reading of the operator provisions in their statutory order, see the operator provisions of Regulation 2024/1689, read in plain sequence. For the documentation architecture that supports Article 26, see how to document AI agent risk management for compliance. For the liability analysis that runs in parallel to Article 26, see when AI agents make mistakes, who is liable under EU law. For the structural framework we use across the publication, see the three gaps in AI agent underwriting.
Frequently asked questions
When do the EU AI Act operator obligations start to apply?
The main deployer obligations for high risk AI systems under Chapter III of Regulation (EU) 2024/1689 enter into application on 2 August 2026. A further layer of obligations tied to systems embedded in regulated products is deferred to 2 August 2027.
Who counts as an operator under the AI Act?
Article 3(4) defines a deployer as any natural or legal person, public authority, agency, or other body that uses an AI system under its authority in the course of a professional activity. Purely personal, non-professional use is excluded. This publication uses the terms operator and deployer interchangeably.
Can a deployer transfer its Article 26 obligations to the provider by contract?
No. The obligations in Article 26 attach to the deployer and are owed to the supervisor. A terms of service or indemnity with the provider does not remove them. A contractual allocation may reallocate commercial risk between the parties, but it does not affect the regulatory duty or the supervisor's right to enforce it.
What penalties can apply for a breach of Article 26?
Breaches of the Article 26 obligations fall within the second tier of Article 99, with fines up to EUR 15 million or 3 per cent of worldwide annual turnover, whichever is higher. Article 99(6) instructs supervisors to take SME economic viability into account when setting penalty levels.
Is there an overlap between Article 26 and the GDPR?
Yes. Article 70 explicitly foresees coordination with the GDPR. For deployers processing personal data, the AI Act duties sit alongside the existing obligations under the GDPR, and data protection authorities are expected to play a central role in AI Act enforcement across sectors.
References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
- Article 3(4), Regulation (EU) 2024/1689, definition of deployer.
- Article 6 and Annex III, Regulation (EU) 2024/1689, classification of high risk AI systems.
- Article 14, Regulation (EU) 2024/1689, human oversight requirements.
- Article 26, Regulation (EU) 2024/1689, obligations of deployers of high risk AI systems.
- Article 27, Regulation (EU) 2024/1689, fundamental rights impact assessment for deployers of certain high risk AI systems.
- Article 71, Regulation (EU) 2024/1689, EU database for high risk AI systems.
- Article 99, Regulation (EU) 2024/1689, penalties.
- Regulation (EU) 2016/679, General Data Protection Regulation.
- Directive (EU) 2019/1152 on transparent and predictable working conditions in the European Union.