Most AI compliance work in 2026 will come down to a small set of documents. This guide describes the nine that matter, why each one exists, what it must contain, and how to produce it without duplicating work that already exists under the GDPR, ISO 42001, or sectoral law.

Key takeaways

  • The AI Act is document-heavy. Most Article 26 enforcement will begin with a document request.
  • A coherent deployer file contains nine documents that together address risk, oversight, use, logs, incidents, rights impact, data governance, human-in-the-loop, and revisions.
  • Log retention under Article 26(6) has a six-month floor. In regulated sectors, sectoral law usually extends it.
  • A data protection impact assessment under GDPR Article 35 and a fundamental rights impact assessment under AI Act Article 27 can be combined into one document.
  • The revision log is the least glamorous document and the most valuable in an enforcement inquiry.

Why documentation is the centre of the regime

The AI Act is sometimes described as a risk-based regulation, and in principle it is. In practice, for a deployer working in 2026, it is a documentation-based regulation. The Act's provisions on risk management, technical documentation, logging, oversight, and incident reporting converge on a single question: can the deployer show, in writing, what the system is, how it is used, and how it is supervised? The answer is either yes or no, and the consequences follow from that answer.

This is not an accident. The Commission's approach to the Act has always been to make compliance observable. A rule that cannot be audited cannot be enforced. A rule that can be audited creates a market for compliance products and a baseline of evidence that courts, supervisors, and insurers can all use. Documentation is the instrument that turns the Act from a statement of principles into an enforceable regime, and deployers who understand this will spend their compliance budget on the file rather than on the technology.

The nine documents

The file described below is organised around the functions the Act expects a deployer to perform, not around the Act's article numbers. The mapping to the Act is given alongside each document. The file is not the only way to comply, but it is a structure that has held up in practice against national supervisor inquiries in the early reviews conducted in Q1 2026.

1. Risk register

The risk register is a short document, usually two to four pages, that describes the system, its intended purpose, its classification under Article 6, and the risks the deployer has identified. It is not the provider's technical documentation, and it is not a copy of the provider's risk assessment. It is the deployer's own reading of the system as deployed inside the deployer's own environment. It answers the question: what does this organisation think can go wrong, and how serious is the consequence if it does.

The register should list risks by category (bias, privacy, accuracy, availability, safety, reputational), rate each risk on a simple scale of likelihood and consequence, and describe the mitigation in place. It should be dated, signed by a named officer, and revised at least annually. Its provider-side counterpart is the risk management system under Article 9 of the AI Act; the register is the deployer's mirror of that document.
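The register's structure is simple enough to sketch as data. The following is an illustration only, not a prescribed format: the category names come from the list above, while the 1-to-5 likelihood and consequence scales and the example entries are assumptions, since the Act prescribes no scoring method.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    category: str        # e.g. "bias", "privacy", "accuracy" (list above)
    description: str
    likelihood: int      # 1 (rare) .. 5 (almost certain) -- illustrative scale
    consequence: int     # 1 (minor) .. 5 (severe) -- illustrative scale
    mitigation: str

    @property
    def score(self) -> int:
        # simple likelihood x consequence rating; the Act prescribes no method
        return self.likelihood * self.consequence

register = [
    RiskEntry("bias", "Screening model underrates older applicants", 3, 4,
              "Quarterly disparity audit against protected attributes"),
    RiskEntry("availability", "Provider API outage halts triage queue", 2, 3,
              "Manual fallback procedure, tested twice a year"),
]

# review the highest-rated risks first
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.category}: {entry.score} -- {entry.mitigation}")
```

Held this way, the register sorts itself for the annual review and exports cleanly to the two-to-four-page document the supervisor will read.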

2. Oversight register

The oversight register is the list of natural persons responsible for overseeing the system under Article 14 and Article 26(2). It names the individuals, their job titles, the training they have received, their authority to intervene, and the escalation path that reaches a senior decision maker. It describes the thresholds that trigger intervention, the procedure for intervention, and the reporting obligations of the oversight team to the board or equivalent.

The register is not a one-time artefact. It must be updated whenever the oversight team changes, whenever the training content changes, and whenever the intervention thresholds change. In an enforcement inquiry, the oversight register is often the first document the supervisor reads, because it tells the supervisor whether the deployer has taken the human oversight duty seriously.

3. Instructions-for-use map

The instructions-for-use map is a short document that maps the provider's instructions for use to the deployer's actual usage of the system. It is a diagnostic tool. It identifies any deviation between what the provider said the system was built for and what the deployer is actually doing with it.

The map is important for two reasons. First, it supports compliance with Article 26(1), which requires the deployer to use the system within the parameters of the instructions for use. Second, it flags potential reclassifications under Article 25. A deployer who is using the system outside its intended purpose is at risk of being reclassified as a provider, with full upstream obligations. The map makes this visible early, before the deployer is caught by an inquiry.

4. Logging schedule

The logging schedule is the description of what the system logs, where the logs are stored, how long they are kept, and how they can be produced on request. It is built around Article 12 of the AI Act, which requires high risk systems to allow the automatic recording of events (logs) over their lifetime, and Article 26(6), which requires the deployer to keep those logs for at least six months.

A useful logging schedule has five columns: log class, content, storage location, retention period, and production method. A deployer in a regulated sector may need to extend retention beyond the six-month floor. The schedule should also describe how logs are protected against tampering, because tamper evidence is a prerequisite for the logs to be useful in a subsequent proceeding.
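The five-column schedule can itself be held as data and checked mechanically against the floor. A minimal sketch: the six-month figure reflects the Article 26(6) floor, while the log classes, storage paths, and retention periods are illustrative assumptions.

```python
from datetime import timedelta

RETENTION_FLOOR = timedelta(days=183)  # Article 26(6): at least six months

# columns: log class, content, storage location, retention period, production method
schedule = [
    ("inference", "inputs, outputs, timestamps", "s3://logs/inference",
     timedelta(days=365), "export via audit API"),
    ("override", "human interventions and reasons", "s3://logs/override",
     timedelta(days=730), "export via audit API"),
]

def below_floor(schedule):
    """Return the log classes whose retention falls below the six-month floor."""
    return [row[0] for row in schedule if row[3] < RETENTION_FLOOR]

shortfalls = below_floor(schedule)  # empty here: both classes meet the floor
```

A sectoral extension is then a one-line change to the relevant row, and the check catches any class that a later revision quietly shortens.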

5. Incident protocol

The incident protocol is the written procedure for identifying, reporting, and responding to serious incidents under Article 26(5). It describes what counts as a serious incident in the context of the specific system, who must be notified inside the organisation, who must be notified outside the organisation, and within what time frame. It also describes the decision procedure for suspending use of the system where monitoring reveals a risk within the meaning of Article 79.

The protocol should be rehearsed. A supervisor who reads a well-drafted incident protocol will often ask when the protocol was last tested. A deployer who can point to a table-top exercise in the last twelve months is in a materially stronger position than a deployer who has drafted the protocol but never run it.

6. Human-in-the-loop description

The human-in-the-loop description is a technical document that describes how the system is built to support human oversight, how the oversight points are wired into the user interface or the API, and how the humans actually intervene. It is the design-side complement to the oversight register. Where the oversight register names the humans, this document describes the mechanics.

This document exists because Article 14 is a design requirement. An oversight team cannot exercise oversight unless the system is built to allow it. A supervisor who sees a thoughtful human-in-the-loop description is reassured that the deployer understood Article 14 as a design requirement rather than a policy aspiration.

7. Fundamental rights impact assessment

The fundamental rights impact assessment under Article 27 is required only for a subset of deployers: public bodies, private operators providing public services, and deployers of certain high risk systems listed in Annex III. For those deployers, the assessment is the central compliance document. It covers the process by which the system is used, the categories of persons likely to be affected, the risks of harm, the oversight in place, and the mitigation plan if risks materialise.

Where the deployer is also subject to the GDPR, the fundamental rights impact assessment should be combined with the data protection impact assessment under Article 35 of the GDPR. A combined document is easier to maintain and produces a single coherent record for both supervisors.

8. Data governance note

The data governance note describes the data the system uses, its sources, its representativeness, its update frequency, and its known limitations. It is the deployer's response to Article 26(4), which requires the deployer, to the extent it controls input data, to ensure the data is relevant and sufficiently representative.

This document overlaps with records already held under the GDPR, and in many cases the GDPR record can be extended to cover the AI Act's requirements. The key additions are the discussion of representativeness, which is not a GDPR concept, and the description of update frequency, which matters for models that depend on live data.

9. Revision log

The revision log is the least interesting document in the file and the one that most strongly supports a defensible position in an enforcement inquiry. It is a chronological record of every change made to the eight other documents, with a date, an author, and a short note on what changed and why.

The revision log matters because the supervisor will ask not only what the file contains today, but what it contained when the incident occurred, and what changed since. A deployer who can produce the version of the oversight register that was in force on the date of the incident, signed by the officer who held the role on that date, is in a fundamentally different position from a deployer who can only produce the current version. The revision log is how that history is made visible.
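The mechanics are append-only: each change is recorded with a date, an author, and a note, and the version in force on any past date can be reconstructed from the log. A minimal sketch under those assumptions, with hypothetical dates and names.

```python
from datetime import date

# append-only log: (date, document, author, note) -- entries are never edited
revision_log = [
    (date(2026, 2, 10), "oversight-register", "A. Meyer", "Initial version signed"),
    (date(2026, 5, 4), "oversight-register", "A. Meyer", "New escalation path to COO"),
    (date(2026, 6, 18), "oversight-register", "B. Laurent", "Role handover; training refreshed"),
]

def version_in_force(log, document, on):
    """Return the last revision of `document` made on or before the date `on`."""
    entries = [e for e in log if e[1] == document and e[0] <= on]
    return entries[-1] if entries else None

# which oversight register applied on the date of a hypothetical 1 June incident?
entry = version_in_force(revision_log, "oversight-register", date(2026, 6, 1))
```

The same lookup answers the supervisor's question directly: the entry returned names the version, the officer who signed it, and the change that produced it.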

Practical drafting tip. Keep the file flat. Nine documents in one shared folder, each with a clear file name (risk-register.pdf, oversight-register.pdf, and so on), and each with a change history maintained inside the document itself. Avoid fragmenting the file across multiple systems. Supervisors who cannot find the document on request will often treat the absence as a compliance failure in its own right.
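The flat-folder convention is easy to verify mechanically. A sketch, assuming the naming pattern suggested above; the exact file names beyond the two given are an assumption, not a requirement of the Act.

```python
from pathlib import Path

# assumed names following the tip's pattern; adjust to the names actually used
EXPECTED = {
    "risk-register.pdf", "oversight-register.pdf", "instructions-for-use-map.pdf",
    "logging-schedule.pdf", "incident-protocol.pdf", "human-in-the-loop.pdf",
    "fria.pdf", "data-governance-note.pdf", "revision-log.pdf",
}

def missing_documents(folder):
    """Return the expected file names not present in the compliance folder."""
    path = Path(folder)
    present = {p.name for p in path.iterdir()} if path.is_dir() else set()
    return sorted(EXPECTED - present)
```

Run before any supervisor request, the check turns "can we find the document" from a scramble into a one-line answer.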

Mapping to ISO 42001 and NIST AI RMF

ISO/IEC 42001:2023 is the first international management system standard for artificial intelligence, and it maps cleanly to the nine-document file described above. An organisation that has adopted ISO 42001 will already hold most of the documents in a different form, and the mapping exercise is usually straightforward. The AI management system under ISO 42001 corresponds to the risk register and the revision log. The human oversight clauses in ISO 42001 map to the oversight register and the human-in-the-loop description. The logging and incident clauses map to the logging schedule and the incident protocol.

The NIST AI Risk Management Framework is built around four functions: Govern, Map, Measure, and Manage. The nine-document file maps to all four, with the risk register and the oversight register serving the Govern and Map functions, the logging schedule and the incident protocol serving the Measure and Manage functions, and the remaining documents bridging all four. A deployer who has adopted the NIST framework has the conceptual structure in place and needs only to produce the document artefacts that the European supervisors will ask for.

Common mistakes

Three mistakes show up repeatedly in the files we review. The first is producing one very long document instead of nine short ones. A single 50-page compliance manual is harder to maintain, harder to revise, and harder for a supervisor to read than nine documents each running to five or ten pages. The second is copying the provider's technical documentation into the file. The provider's documentation is written from the provider's point of view and does not answer the deployer-specific questions a supervisor will ask. The third is failing to sign the documents. An unsigned document is a draft, and a draft is not evidence.

A fourth mistake, less common but more consequential, is building the file without a revision log. A file without a revision log collapses into whatever it happens to contain on the day it is requested, and the deployer cannot prove what it contained at any earlier point. This is the easiest mistake to avoid and one of the most expensive to correct after the fact.

What to do this quarter

For a deployer starting from scratch in April 2026, the order of operations is straightforward. Produce the risk register and the oversight register first, because these anchor everything else and do not depend on technical work. Produce the instructions-for-use map next, because it will reveal whether the current usage is compliant or needs to be brought within limits. Produce the logging schedule and the incident protocol in parallel, because they depend on input from the provider and from the deployer's data engineering team. Produce the remaining documents over the following six weeks. Hold the first internal review in early July, and the first rehearsal of the incident protocol in late July. By the time Article 26 enters application on 2 August 2026, the file should be in a recognisable shape and the team should have rehearsed it at least once.

Related reading

For the full reading of the operator regime, see EU AI Act operator obligations, a 2026 compliance guide. For the liability analysis that the documentation supports, see when AI agents make mistakes, who is liable under EU law. For the structural framework of the three gaps that the documentation closes, see the liability framework. For the plain reading of the operator provisions, see the operator provisions of Regulation 2024/1689.

Frequently asked questions

What documents does the AI Act require deployers to hold?

The AI Act does not prescribe a specific document list for deployers. It sets out duties in Articles 14, 26, and 27, and expects deployers to produce the records needed to demonstrate compliance. In practice, a coherent file contains a risk register, an oversight register, an instructions-for-use map, a logging schedule, an incident protocol, a human-in-the-loop description, a fundamental rights impact assessment where Article 27 applies, a data governance note, and a revision log.

How long should AI agent logs be retained?

Article 26(6) requires automatically generated logs to be kept for a period appropriate to the intended purpose of the system, and at least six months unless Union or national law provides otherwise. In regulated sectors such as finance, healthcare, and employment, longer retention is usually required by sectoral law. The logging schedule should map each log class to its retention floor and storage location.

Is ISO 42001 required under the EU AI Act?

ISO/IEC 42001:2023 is not mandatory under the AI Act. The Act invites harmonised standards through CEN and CENELEC but does not cite ISO 42001 as a required standard. In practice, ISO 42001 is the most coherent organisational baseline available today, and deployers who adopt it produce a file that maps cleanly to Article 9 and Article 26.

Who signs the fundamental rights impact assessment?

The fundamental rights impact assessment under Article 27 is the responsibility of the deployer. In practice, it should be signed by a senior officer with authority over the deployment, such as a chief compliance officer, a chief data officer, or a designated AI governance lead. The signature is not a legal formality. It anchors the document to a named decision maker whom the supervisor can contact.

Can one document serve both the GDPR and the AI Act?

Yes, and in many cases it should. A data protection impact assessment under Article 35 of the GDPR covers much of the same ground as a fundamental rights impact assessment under Article 27 of the AI Act. The two can be combined into a single document if the document addresses all the elements required by each instrument. A combined document is usually clearer and easier to maintain than two parallel files.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 12.7.2024.
  2. Article 9, Regulation (EU) 2024/1689, risk management system.
  3. Article 12, Regulation (EU) 2024/1689, record keeping.
  4. Article 14, Regulation (EU) 2024/1689, human oversight.
  5. Article 26, Regulation (EU) 2024/1689, obligations of deployers of high risk AI systems.
  6. Article 27, Regulation (EU) 2024/1689, fundamental rights impact assessment.
  7. ISO/IEC 42001:2023, Information technology, Artificial intelligence, Management system.
  8. NIST AI Risk Management Framework 1.0, January 2023, and the Generative AI Profile, July 2024.
  9. Regulation (EU) 2016/679, General Data Protection Regulation, Article 35 (data protection impact assessment).