Independent European Publication Wednesday, 15 April 2026
Agent Liability EU · AI Act Operator Desk
Vol. I · Issue 04
ISSN pending
Framework · Published 08 / 04 / 2026

Three gaps between today's AI stack and tomorrow's underwriting requirements.

Insurers, auditors, and supervisors look for the same things in a file. They want to know what the system is, who is watching it, and against which published standard it was built. None of these are yet routinely produced for autonomous AI agents. This is the framework we use to describe the distance, and the shape of what has to close it.

The Three Gaps

A structural account of why agent risk is currently uninsurable.

The European insurance market has underwritten software, cyber exposure, and professional liability for decades. It has not yet opened a standalone line for autonomous AI agents. The reason is not a lack of demand. It is a lack of the three artefacts below. Close them and a policy becomes possible. Leave them open and every underwriter defaults to exclusion.

01
The Verification Gap

The system cannot prove what it did.

Most production AI agents run without a tamper-evident log of prompts, tool calls, retrieved context, and generated outputs. Where logs exist, they are kept for incident triage, not for external audit. An insurer or a supervisor who arrives after the fact cannot reconstruct the decision with certainty, and so cannot attribute harm to a specific behaviour of the system rather than to the user or to an adjacent service.

Closing the verification gap requires three things: a durable record of the inputs the agent saw, the actions it took, and the model versions that were active at the time; a cryptographic or otherwise tamper-evident mechanism for preserving that record; and a retention schedule that outlives the commercial life of the system, so that late-arriving claims can still be investigated.
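One way to make such a record tamper evident is a hash chain: each log entry commits to the hash of the entry before it, so any retroactive edit breaks every subsequent link. The sketch below is a minimal illustration of that idea, not a reference implementation; the class and field names are assumptions, and a production system would also sign the chain head and write entries to append-only storage.

```python
import hashlib
import json
import time

class AgentAuditLog:
    """Illustrative append-only log. Each entry includes the hash of the
    previous entry, so altering any past entry breaks the chain."""

    GENESIS = "0" * 64  # sentinel prev_hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, event_type, payload, model_version):
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else self.GENESIS
        body = {
            "ts": time.time(),
            "event_type": event_type,      # e.g. "prompt", "tool_call", "output"
            "payload": payload,
            "model_version": model_version,
            "prev_hash": prev_hash,
        }
        # Canonical serialisation so an auditor can recompute the same digest.
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        body["entry_hash"] = digest
        self.entries.append(body)
        return digest

    def verify(self):
        """Recompute the whole chain; True only if no entry was altered."""
        prev = self.GENESIS
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True
```

The design choice that matters for underwriting is that verification requires no trust in the operator: anyone holding the log can recompute the chain and detect alteration.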

02
The Governance Gap

No named human owns the agent's behaviour.

Article 14 of the AI Act requires human oversight assigned to natural persons with competence, authority, and support. In practice, oversight is often diffuse. Product, security, and legal teams each assume another department holds the pen. When an incident occurs, the supervisor asks who was on watch, and the organisation cannot produce a single answer that survives cross-examination.

The governance gap closes when the operator publishes an oversight register: a short document naming the system, the humans accountable for it, the thresholds that trigger intervention, the training those humans received, and the reporting line that reaches a board-level or equivalent senior authority. The register is the artefact underwriters and auditors ask for first. Without it, no amount of technical control substitutes for clear, named accountability.
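The register is a document, not software, but its required fields can be captured as a structured record with a completeness check. The sketch below is an assumption about a reasonable shape, not a form mandated by the Act; every field name is illustrative.

```python
from dataclasses import dataclass

@dataclass
class OversightRegister:
    """Illustrative oversight register: the fields an auditor or
    underwriter asks for first. Field names are assumptions, not
    terms of art from the AI Act."""
    system_name: str
    accountable_humans: list       # named natural persons, per Article 14
    intervention_thresholds: dict  # condition -> required human action
    training_completed: dict       # person -> training record
    reporting_line: str            # must reach board level or equivalent

    def is_complete(self):
        """A register with no named owner, no escalation thresholds, or
        untrained owners is not an artefact a supervisor will accept."""
        return bool(
            self.accountable_humans
            and self.intervention_thresholds
            and all(p in self.training_completed for p in self.accountable_humans)
            and self.reporting_line
        )
```

For example, a register naming one trained owner, one intervention threshold, and a reporting line passes the check; remove the training record and it fails.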

03
The Standards Gap

There is no published yardstick to measure the system against.

The AI Act does not prescribe technical standards. It invites harmonised standards through CEN and CENELEC, references existing instruments such as ISO 42001 and the NIST AI Risk Management Framework, and expects the market to converge. Convergence is happening, but slowly. In the meantime, operators produce bespoke governance documentation that no external party is obliged to accept as sufficient.

Closing the standards gap is the slowest of the three, because it depends on bodies outside the operator's control. What operators can do today is adopt ISO 42001 as the organisational baseline, map their controls to the NIST AI RMF functions of Govern, Map, Measure, and Manage, and track the AIUC-1 specification being developed for autonomous agent underwriting. Doing so signals readiness on the day a harmonised standard is published and cited by supervisors.
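Mapping controls to the four NIST AI RMF functions is itself a checkable exercise: list each control, tag the functions it supports, and surface the functions left uncovered. The control names below are hypothetical examples; only the four function names come from the framework.

```python
# The four functions of the NIST AI RMF.
RMF_FUNCTIONS = {"Govern", "Map", "Measure", "Manage"}

# Hypothetical operator controls mapped to the functions they support.
controls = {
    "oversight-register": {"Govern"},
    "risk-assessment":    {"Map"},
    "audit-log":          {"Measure"},
    "incident-response":  {"Manage", "Govern"},
}

def uncovered_functions(controls):
    """Return the RMF functions no control supports: the residual gap
    an external reviewer would flag."""
    covered = set().union(*controls.values()) if controls else set()
    return RMF_FUNCTIONS - covered
```

With the example mapping above, every function is covered; drop the audit log and Measure becomes the uncovered function, which is exactly the verification gap restated in standards terms.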

Figure 1 · How each gap maps to underwriting requirements

Verification
  Operator artefact: Tamper-evident activity log with defined retention.
  Underwriting question it answers: Can the insurer reconstruct the agent's behaviour at the time of loss?
  Reference standards: ISO 42001 § 8.3; NIST AI RMF Measure 2.

Governance
  Operator artefact: Oversight register with named humans and escalation paths.
  Underwriting question it answers: Who is accountable, and does that accountability reach the board?
  Reference standards: AI Act Article 14; NIST AI RMF Govern.

Standards
  Operator artefact: Attestation against a recognised framework with external review.
  Underwriting question it answers: Against what baseline was the system designed and operated?
  Reference standards: ISO 42001; AIUC-1; CEN/CENELEC harmonised work programme.

The short reading

An operator that holds a verification log, an oversight register, and a standards attestation has assembled a file that supervisors, auditors, and insurers all recognise. None of the three can be manufactured after an incident. They have to exist in advance, be maintained as a live record of the system's operation, and be available on request. The work is procedural and unglamorous. It is also the only path currently available to a defensible position under Article 26 and to an insurable exposure under any EU underwriter's programme.

This framework is deliberately narrow. It does not attempt to resolve the philosophical questions about AI autonomy, agency, or moral status. It describes the paperwork that must exist for a European organisation to operate an AI agent lawfully and to transfer residual risk to an insurance market. The rest is commentary.

Move the framework into practice.

Certification against the three gaps is the fastest way to produce the file an underwriter will accept. Agent Certified EU maintains the attestation protocol and reviews operator evidence.

Visit Agent Certified EU →

Sources

  1. ISO/IEC 42001:2023, Information technology, Artificial intelligence, Management system.
  2. NIST AI Risk Management Framework 1.0, January 2023, and the Generative AI Profile, July 2024.
  3. AIUC-1 specification, AI Underwriting Consortium, version in development as of Q1 2026.
  4. Regulation (EU) 2024/1689, Articles 14, 26, 27, and 99.
  5. CEN/CENELEC JTC 21, work programme on harmonised standards in support of the AI Act.