On 2 August 2026, the operator provisions of the European Union Artificial Intelligence Act enter into application. Any organisation deploying an AI agent within the single market will carry ongoing obligations for oversight, logging, and human intervention. Most will not be ready. This publication exists to make the text legible, the dates visible, and the liability structure citable.
The Act does not activate on a single day. It arrives in waves, and operator liability is the second wave. Miss a deadline and the obligation accrues regardless of awareness.
Long-form pieces on Article 26, the Revised Product Liability Directive, and the documentation operators need to hold on file when the provisions enter application.
A practical breakdown of the Chapter III duties, the human oversight standard, and the documentation an operator must hold on file when the provisions enter application on 2 August.
A reading of the Revised Product Liability Directive, the AI Act, and the national case law beginning to shape deployer responsibility for autonomous decisions.
The nine artefacts a European operator should produce and maintain to satisfy Article 9, Article 12, and Article 26 in a single coherent file.
Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use.
— Article 26(1), Regulation (EU) 2024/1689 · The AI Act
The AI Act is often discussed in the language of prohibition and risk classification. Operator liability sits in a quieter register. It is procedural, continuous, and cumulative. It applies from the moment a system is put into service inside the Union, and it does not distinguish between in-house deployments and third-party agents operating under contract.
Three interpretations have hardened over the past six months. First, the deployer's duty to monitor outputs cannot be delegated to the provider through terms of service. Second, human oversight under Article 14 is a design requirement, not a runtime option. Third, fundamental rights impact assessments under Article 27 are expected for any public body and for any private deployer operating in the sectors listed in Annex III.
This publication tracks those interpretations as they cross from academic commentary into supervisory practice. Each piece is dated, footnoted to the text, and maintained as the Commission and national authorities issue guidance.
Agent Liability EU sits inside a network of five sister publications covering the regulatory, certification, and insurance dimensions of autonomous AI agent deployment.