Regulation (EU) 2024/1689 does not stop at EU borders. Article 2 draws non-EU companies into the regime through five distinct routes, the most consequential of which catches any operator whose AI output reaches a person in the Union. This guide maps the jurisdictional reach of the EU AI Act as it operates for US, UK, and other non-EU companies in 2026, and identifies the compliance obligations that follow.

Key takeaways

  • Article 2(1) of Regulation (EU) 2024/1689 applies the regulation to providers, operators, importers, distributors, and product manufacturers regardless of third-country establishment, through five explicit routes into the regime.
  • The "output used in the Union" trigger in Article 2(1)(c) catches non-EU providers and deployers whose AI systems produce results that affect persons in the EU, even where the operator has no EU establishment, EU contracts, or EU employees.
  • Non-EU companies can fall under the regulation simultaneously as a provider (if they place a system on the EU market) and as a deployer (if they use a system whose output reaches EU persons), with separate duty sets for each role.
  • The practical compliance test is not whether the company is incorporated in the EU but whether an EU person is a subject of the AI system's output. If a natural or legal person in the Union is scored, classified, ranked, or materially affected by the system, the regulation applies.
  • Article 99 penalties of up to EUR 35 million or 7 percent of worldwide annual turnover apply to non-EU entities that breach prohibited-AI-practice obligations, and Article 22 requires non-EU providers of high-risk AI systems to designate an authorised representative established in the Union, with Article 54 imposing a parallel requirement on non-EU providers of general-purpose AI models.

Article 2. The five routes into the regime.

The jurisdictional scope of the EU AI Act is set in Article 2(1) of Regulation (EU) 2024/1689. The provision covers five categories of actor, irrespective of where they are established.

Route | Who it covers | Territorial connection
Article 2(1)(a) | Providers placing AI systems on the EU market or putting them into service in the EU | Placing on the market or putting into service in the EU, regardless of where the provider is established
Article 2(1)(b) | Operators of AI systems established or located in the EU | Establishment or location of the operator in the EU
Article 2(1)(c) | Providers and operators established in third countries where the output of the AI system is used in the Union | Output used in the Union, regardless of provider or operator establishment
Article 2(1)(d) | Importers and distributors of AI systems | Acting in the EU supply chain
Article 2(1)(e) | Product manufacturers placing a product containing an AI system on the EU market | Product placed on the EU market

Most non-EU companies will encounter the regulation through routes (a) and (c). Route (a) applies when a company makes its AI system available to EU customers, whether through a direct contract, an app store listing, a SaaS subscription, or an API. Route (c) applies when the output of the system reaches EU territory even if the company never marketed to or contracted with EU parties directly. Routes (d) and (e) matter principally to hardware manufacturers and component-level AI vendors.

Recital 22 of the regulation clarifies that the output-used-in-the-Union test is intended to have broad effect and was specifically drafted to prevent third-country operators from circumventing the regulation by processing outside the EU while directing results inward.
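For scoping discussions, the five routes can be expressed as a flat check. This is an illustrative simplification under assumed boolean facts, not a statement of the legal test; each flag hides a legal analysis the code does not perform, and more than one route can apply at once.

```python
from dataclasses import dataclass

@dataclass
class AIActScopeFacts:
    """Assumed facts about a company's EU touchpoints (simplified)."""
    places_on_eu_market: bool         # Art. 2(1)(a): placing on the market / putting into service
    established_in_eu: bool           # Art. 2(1)(b): operator established or located in the EU
    output_used_in_union: bool        # Art. 2(1)(c): system output reaches persons in the EU
    eu_importer_or_distributor: bool  # Art. 2(1)(d): acting in the EU supply chain
    eu_product_with_ai: bool          # Art. 2(1)(e): product containing an AI system on the EU market

def article_2_routes(facts: AIActScopeFacts) -> list[str]:
    """Return every Article 2(1) route the facts satisfy (several can apply at once)."""
    routes = []
    if facts.places_on_eu_market:
        routes.append("2(1)(a)")
    if facts.established_in_eu:
        routes.append("2(1)(b)")
    if facts.output_used_in_union:
        routes.append("2(1)(c)")
    if facts.eu_importer_or_distributor:
        routes.append("2(1)(d)")
    if facts.eu_product_with_ai:
        routes.append("2(1)(e)")
    return routes

# A US SaaS vendor with EU subscribers whose output reaches EU persons:
us_saas = AIActScopeFacts(True, False, True, False, False)
assert article_2_routes(us_saas) == ["2(1)(a)", "2(1)(c)"]
```

The example shows the pattern discussed in this guide: a single company commonly satisfies routes (a) and (c) at the same time.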

The "output used in the Union" trigger. The most misunderstood provision.

Article 2(1)(c) is the provision that catches the largest number of non-EU operators by surprise. The test is not whether the company has EU customers. It is not whether the AI system processes EU data. It is whether the output of the system is used in the Union.

Output means what the AI system produces: a credit score, a candidate ranking, a risk assessment, a content moderation decision, a generated document, a medical diagnosis suggestion, a price recommendation, or any other result the system returns. The question is whether a person or institution in the EU receives and acts on that output. If yes, the regulation applies to the provider and to any third-country operator whose system produced it.

The practical implication for US and UK companies is significant. Consider three common situations:

  • A US-based LLM API is used by a European company to generate customer-facing content. The US provider is subject to the regulation as a provider under Article 2(1)(a), and the European company is an operator under Article 2(1)(b). The US provider may also be caught under Article 2(1)(c) if it has no EU establishment but the output reaches EU persons.
  • A UK hiring software vendor whose product is used only by UK clients, but whose platform ranks candidates who later apply to roles at EU subsidiaries of multinational companies. The ranking is the output. If it is relied on in the EU hiring process, the UK vendor is inside the scope.
  • A US fintech that scores creditworthiness for a global bank. If the bank uses those scores to make decisions about EU retail customers, the scoring output is used in the Union. The US fintech is within Article 2(1)(c).

The threshold is intentionally low. Recital 22 states that the regulation applies where the AI output is directed toward users in the Union or where it affects persons in the Union. A company that cannot point to a clear break in the chain between its AI output and its use in the EU should assume the regulation applies.

When a US company is a provider and when it is a deployer.

The regulation allocates different duties to providers and to operators (deployers). The distinction matters for non-EU companies because the obligations are not identical, and because a single company can be both simultaneously.

A provider under Article 3(3) is a natural or legal person that develops an AI system or a general-purpose AI model, or has one developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge. A US company that builds and commercially deploys an AI model is a provider.

An operator (termed a deployer in Article 3(4)) is a natural or legal person that uses an AI system under its own authority, except where the system is used in the course of a personal, non-professional activity. A US company that procures a third-party AI system and integrates it into its products or workflows is an operator.

The key obligations differ by role.

Role | Core obligations for high-risk systems | Key articles
Provider | Conformity assessment, technical documentation, quality management system, registration in the EU database, CE marking (where applicable), post-market monitoring, serious incident reporting | Art. 9, 10, 11, 16, 47, 72, 73
Operator (deployer) | Use within the intended purpose only, fundamental rights impact assessment (public authorities and certain private entities), human oversight, transparency to affected persons, technical measures to support oversight | Art. 26, 27

For a non-EU company that both builds and uses its own AI system in serving EU customers, both sets of obligations apply. In practice this means a single compliance programme that addresses the provider file (technical documentation, conformity assessment, registration) and the operator file (impact assessment, human oversight, transparency notices).

Where a US company procures a third-party system, the operator duty set under Article 26 applies. Article 26 requires the operator to use the system according to the instructions in the provider's documentation, to designate human oversight, to implement technical and organisational measures for oversight, and to report serious incidents to the provider. The operator does not repeat the provider's conformity assessment, but it cannot simply rely on the provider's CE marking without verifying that the intended use falls within what the conformity assessment covered.
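The Article 26 duty areas just described can be captured as a short compliance checklist. This is a sketch for internal tracking; the field names are this guide's own labels, not the regulation's wording.

```python
from dataclasses import dataclass, fields

@dataclass
class DeployerChecklist:
    """Article 26 duty areas for a deployer of a procured high-risk system.

    Field names are illustrative labels, not the regulation's text.
    """
    used_within_intended_purpose: bool  # follow the provider's instructions for use
    human_oversight_assigned: bool      # natural persons designated to oversee the system
    oversight_measures_in_place: bool   # technical and organisational measures implemented
    incident_reporting_channel: bool    # serious incidents routed to the provider
    use_within_conformity_scope: bool   # intended use checked against the conformity assessment

    def open_items(self) -> list[str]:
        """Return the duty areas not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = DeployerChecklist(True, True, False, True, True)
assert checklist.open_items() == ["oversight_measures_in_place"]
```

The last field reflects the point above: relying on the provider's CE marking still requires verifying that the intended use falls within what the conformity assessment covered.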

UK position post-Brexit. Single market access, Windsor Framework, and practical exposure.

The United Kingdom left the EU on 31 January 2020 and exited the single market at the end of the transition period on 31 December 2020; it is a third country for the purposes of Regulation (EU) 2024/1689. The regulation applies to UK companies through the same five routes that apply to any non-EU operator. There is no reciprocal recognition agreement for AI services between the UK and the EU. The Trade and Cooperation Agreement agreed on 24 December 2020 (TCA) governs trade and cooperation in broad terms but does not create mutual recognition of AI compliance regimes.

The Windsor Framework, agreed between the UK and EU in February 2023, addresses the specific position of Northern Ireland under the Protocol on Ireland and Northern Ireland. It is a trade arrangement, not a regulatory equivalence agreement. It has no effect on the extraterritorial reach of the EU AI Act for UK companies outside Northern Ireland, and it does not create any compliance pathway for AI systems sold into the EU from Great Britain.

The UK is developing its own AI governance approach through the AI Opportunities Action Plan published in January 2025 and the ongoing work of the AI Safety Institute (now the AI Security Institute). The UK approach is sectoral and lighter-touch by design. A UK company cannot satisfy EU AI Act obligations by complying only with the UK framework. The two regimes are distinct and compliance with one does not constitute compliance with the other.

For UK companies, the practical exposure map looks like this. A UK SaaS company with EU customers is a provider under Article 2(1)(a) and must comply with all provider obligations for any high-risk or general-purpose AI systems it places on the EU market. A UK analytics firm whose models are used by EU clients in consequential decisions is inside Article 2(1)(c). A UK AI consultancy that deploys third-party models on behalf of EU clients is an operator under Article 2(1)(b) if it is established or located in the EU, and may in any event trigger Article 2(1)(c) where the output is used in the Union.

Practical scenarios. Four cases that illustrate the reach.

The following scenarios illustrate how the extraterritorial provisions operate in practice. These are representative patterns, not legal advice for any specific situation.

SaaS company with EU subscribers. A US company offers a document summarisation and contract analysis tool via subscription. It has customers in Germany, France, and the Netherlands. The tool is built on a general-purpose AI model. The company has Chapter V obligations as the provider of that model; if the model's cumulative training compute exceeds the Article 51 threshold, the additional systemic-risk obligations also apply. If the tool is used in a regulated domain by EU deployers, the US provider must ensure its technical documentation and instructions enable those deployers to meet their own obligations under Article 26. The US company is inside the regulation through Article 2(1)(a).

LLM provider with EU users. A US AI laboratory offers API access to its foundation model. EU companies use the API to build customer-facing products including loan screening and CV assessment tools. The US laboratory is a provider under Article 2(1)(a) and a general-purpose AI model provider under Chapter V; if the model exceeds the training-compute threshold in Article 51, the systemic-risk obligations also apply. It must publish a summary of training data, maintain technical documentation, and cooperate with downstream deployers. It is also within Article 2(1)(c) because its output reaches EU persons.

US bank with EU subsidiary. A US bank uses an AI-based credit decision engine developed by its US technology division. The same engine is used by the bank's Frankfurt subsidiary for retail credit decisions in Germany. The US technology division is a provider placing the system into service in the EU via the subsidiary. The Frankfurt subsidiary is an operator. If the credit decision use case falls within Annex III (which lists high-risk use cases including creditworthiness assessment), both entities have obligations. The bank's US parent cannot structure around this by characterising the Frankfurt usage as purely local. The provider assessment follows the system, not the legal entity.

UK retailer with EU sales. A UK fashion retailer uses an AI-based pricing and recommendation engine that serves the same outputs to customers in the UK and to customers in EU member states through the same e-commerce platform. For EU customer interactions, the system's output is used in the Union. The UK retailer is inside Article 2(1)(c). Whether the pricing and recommendation engine constitutes a high-risk system under Annex III is a separate analysis, but the retailer is within scope for prohibited-practice provisions (Article 5) regardless of Annex III classification.

The authorised representative requirement. Article 22.

Article 22 of Regulation (EU) 2024/1689 requires providers of high-risk AI systems established in third countries to designate, in writing, an authorised representative established in the Union before placing the system on the EU market or putting it into service. Article 54 imposes a parallel requirement on third-country providers of general-purpose AI models.

The authorised representative is the point of contact for national competent authorities and the AI Office. The representative must be empowered to act on the provider's behalf, to cooperate with competent authorities, and to maintain the technical documentation and register the system in the EU database where required.

The Article 22 requirement applies to providers of high-risk AI systems, including those listed in Annex III; the Article 54 requirement applies to providers of general-purpose AI models. Neither applies to operators who are not also providers. An operator that deploys a third-party system does not need its own authorised representative; that obligation falls on the provider of that system.

For practical purposes, the authorised representative functions as a compliance anchor in the EU. It can be an existing EU subsidiary, a law firm, a compliance service provider, or any legal person with an EU establishment and the contractual authority to act. The designation must be in writing, must identify the representative by name and address, and must be documented in the technical file.
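The elements of a valid designation described above can be modelled as a minimal record. This is an illustrative sketch; the field set is an assumption drawn from the paragraph above, and the representative named in the example is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AuthorisedRepDesignation:
    """Minimal record of a written Article 22 designation (illustrative fields)."""
    representative_name: str      # representative identified by name...
    representative_address: str   # ...and by the address of an EU establishment
    written_mandate: bool         # the designation must be made in writing
    in_technical_file: bool       # and documented in the technical file

    def is_complete(self) -> bool:
        return all([self.representative_name, self.representative_address,
                    self.written_mandate, self.in_technical_file])

# "Example EU Compliance BV" is a hypothetical representative, not a real entity.
designation = AuthorisedRepDesignation("Example EU Compliance BV",
                                       "Amsterdam, Netherlands", True, True)
assert designation.is_complete()
```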

Failure to designate an authorised representative when required is itself a violation of the regulation and carries penalty exposure under Article 99.

Enforcement reach and realistic penalty exposure for non-EU entities.

A common assumption is that EU enforcement cannot practically reach non-EU companies. The regulation is designed to close that gap, and enforcement architecture follows the GDPR pattern that has been in effect since 2018.

Article 99 sets three penalty tiers that apply without geographic restriction:

  • EUR 35 million or 7 percent of worldwide annual turnover (whichever is higher) for violations of the prohibited AI practices under Article 5.
  • EUR 15 million or 3 percent of worldwide annual turnover (whichever is higher) for violations of obligations on providers and operators, including the data requirements under Article 10, Article 22 (authorised representative), the high-risk system obligations in Chapter III, and the general-purpose AI model obligations in Chapter V.
  • EUR 7.5 million or 1.5 percent of worldwide annual turnover (whichever is higher) for providing incorrect, incomplete, or misleading information to notified bodies or national competent authorities.
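The "whichever is higher" mechanics of these tiers can be shown with a short calculation. The amounts are the tier ceilings listed above; the function itself is an illustrative sketch (note that for SMEs the regulation applies the lower of the two amounts instead).

```python
def article_99_ceiling(fixed_eur: int, turnover_pct: float,
                       worldwide_turnover_eur: float) -> float:
    """General rule for an Article 99 tier: the fixed amount or the
    percentage of worldwide annual turnover, whichever is higher."""
    return max(fixed_eur, turnover_pct * worldwide_turnover_eur)

# Top tier (Article 5 prohibited practices): EUR 35m or 7 percent.
# At EUR 2bn turnover the percentage dominates; at EUR 100m the fixed amount does.
assert article_99_ceiling(35_000_000, 0.07, 2_000_000_000) == 140_000_000
assert article_99_ceiling(35_000_000, 0.07, 100_000_000) == 35_000_000
```

The percentage basis is worldwide turnover, not EU turnover, which is why the ceiling scales with global size even for companies with modest EU revenue.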

Enforcement against non-EU entities operates through the authorised representative in the first instance. Where no authorised representative has been designated, the AI Office or a national market surveillance authority can initiate proceedings against the provider directly. GDPR enforcement since 2018 has demonstrated that substantial penalties against companies headquartered outside the EU are achievable in practice.

The more immediate enforcement risk for non-EU companies is typically market access restriction rather than a financial penalty. A non-EU company that cannot demonstrate compliance may find its system withdrawn from the EU market, its EU-established customers directed to terminate use, or its products blocked from EU distribution channels. For companies whose EU revenue is material, market access restriction is a more proximate commercial threat than the penalty ceiling.

A compliance decision tree for non-EU operators.

The following ordered process maps the key questions a US or UK company should work through to determine its EU AI Act exposure.

  1. Does your company develop or deploy an AI system? If no, the regulation does not apply. If yes, proceed.
  2. Is the AI system or its output available to or directed at persons in the EU? This includes EU-based customers, EU subsidiaries of your clients, or EU persons who are subjects of the system's output. If yes, Article 2 applies. Identify which route: Article 2(1)(a) (market placement), Article 2(1)(b) (EU establishment), Article 2(1)(c) (output in Union), or multiple routes simultaneously.
  3. Are you the provider (developer) or the operator (deployer), or both? Identify your role. A company that both develops and uses the system is both. A company that procures and deploys a third-party system is an operator. Role determines which duty set applies.
  4. Does the system fall within the prohibited practices in Article 5? Subliminal manipulation, social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), and exploitative targeting practices are prohibited regardless of whether the system is classified as high-risk. A prohibited-practice system must be discontinued for EU use regardless of other compliance steps.
  5. Is the system a high-risk AI system under Annex III? Annex III lists high-risk use cases across eight domains including biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration, and administration of justice. If the intended use falls within Annex III, the provider duties under Chapter III and the operator duties under Chapter III Section 3 both apply.
  6. Is the system built on a general-purpose AI model? Chapter V transparency and documentation obligations apply to providers of general-purpose AI models; if the model's cumulative training compute exceeds 10^25 FLOPs, or it is otherwise classified under Article 51 as presenting systemic risk, the additional obligations apply, including capability evaluation, adversarial testing, and serious incident reporting to the AI Office.
  7. As a provider, have you designated an authorised representative in the EU? If your system falls within step 5 or step 6 above and you are established outside the EU, Article 22 (for high-risk systems) or Article 54 (for general-purpose AI models) requires a written designation before market placement or service launch.
  8. Have you prepared the technical file, conducted or commissioned a conformity assessment, and registered the system? These are the core provider obligations for high-risk systems. An operator's obligations do not replace them; they supplement the provider file with operation-phase duties including human oversight and fundamental rights impact assessment where required.

For the US and UK domestic regimes that run alongside EU obligations, see US, EU, UK: three approaches to the same question, which maps how the three regulatory spheres allocate liability across the AI supply chain. For the EU operator duty set in detail, including the fundamental rights impact assessment and human oversight requirements under Article 26, see the Article 26 operator obligations guide at the EU Regulatory Desk. For the Colorado-level US domestic counterpart to the EU's deployer duties, see The Colorado AI Act. Deployer obligations under SB 24-205.

Frequently asked questions

Does the EU AI Act apply to companies outside the EU?

Yes. Regulation (EU) 2024/1689 Article 2(1) extends to providers placing AI systems on the EU market, operators using AI systems in the EU, providers and deployers established in third countries where the output of an AI system is used in the Union, importers and distributors, and product manufacturers placing products containing AI systems on the EU market. Establishment outside the EU does not insulate a company from the regime if any of these five routes is satisfied.

When does a US company fall under the EU AI Act?

A US company falls under the EU AI Act as a provider when it places or puts into service an AI system on the EU market, regardless of where it is established. It falls under the regulation as a deployer when it uses an AI system whose output is directed at persons in the EU, or when it operates an EU-established entity that uses AI. The output-used-in-the-Union trigger under Article 2(1)(c) catches many SaaS and API businesses that never considered themselves subject to the regulation.

Does the EU AI Act apply to UK companies?

Yes. The UK is a third country for the purposes of Regulation (EU) 2024/1689. The same five routes that apply to US companies apply to UK companies. Post-Brexit, the UK has no single market access for goods or services under EU law. A UK company serving EU users, placing systems on the EU market, or whose AI output reaches EU persons is subject to the same extraterritorial analysis as any non-EU operator. The Windsor Framework applies only to Northern Ireland and does not affect AI Act scope.

What does "output used in the Union" mean?

Article 2(1)(c) captures providers and deployers established in third countries where the output produced by an AI system is used in the Union. Output means the result the system generates: a score, a decision, a recommendation, a piece of generated content, a classification. The test is whether a person in the EU is a subject of or is materially affected by that output. A US LLM that generates a credit score used by an EU lender, or a UK hiring tool whose rankings reach EU applicants, satisfies the test.

Can EU authorities fine a non-EU company?

Yes. Article 99 sets penalties without reference to the establishment of the violating party. The ceiling for prohibited AI practices is EUR 35 million or 7 percent of worldwide annual turnover, whichever is higher. EU market surveillance authorities and the AI Office have authority to investigate non-EU companies that fall within the scope of Article 2. The authorised representative requirement under Article 22 creates an enforcement contact point within the EU.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), OJ L, 2024/1689, 12.7.2024.
  2. Regulation (EU) 2024/1689, Article 2 (Scope).
  3. Regulation (EU) 2024/1689, Article 3(3) (Definition of provider).
  4. Regulation (EU) 2024/1689, Article 3(4) (Definition of deployer).
  5. Regulation (EU) 2024/1689, Article 5 (Prohibited AI practices).
  6. Regulation (EU) 2024/1689, Article 22 (Authorised representative).
  7. Regulation (EU) 2024/1689, Article 26 (Obligations of deployers of high-risk AI systems).
  8. Regulation (EU) 2024/1689, Article 51 (Classification of general-purpose AI models with systemic risk).
  9. Regulation (EU) 2024/1689, Article 99 (Penalties).
  10. Regulation (EU) 2024/1689, Annex III (High-risk AI systems).
  11. Directive (EU) 2024/2853 of the European Parliament and of the Council on liability for defective products (Product Liability Directive recast), OJ L, 2024/2853, 18.11.2024.
  12. UK-EU Trade and Cooperation Agreement, OJ L 444, 31.12.2020.
  13. Windsor Framework (Joint Declaration, February 2023) and accompanying legal instruments amending Protocol on Ireland/Northern Ireland.
  14. UK AI Opportunities Action Plan, Department for Science, Innovation and Technology, January 2025.