South Korea's Framework Act on the Promotion of Artificial Intelligence and Establishment of a Foundation for Trust, commonly called the AI Basic Act, was enacted by the National Assembly in December 2024. It applies from August 2026, placing it in direct parallel with the EU AI Act's high-risk obligation window. For operators with a Korean presence or user base, the Act creates a compliance layer that shares substantive logic with the EU regime but differs in structure, enforcement, and terminology. This guide sets out what matters for cross-border deployers.

Key takeaways

  • The Korea AI Basic Act was enacted December 2024 and applies from August 2026, administered by MSIT (Ministry of Science and ICT).
  • High-impact AI systems in sensitive domains face transparency, human oversight, and documentation obligations closely parallel to the EU AI Act's Annex III regime.
  • Korea's regime has no prohibited-AI list equivalent to EU AI Act Article 5; prohibited conduct is instead addressed through general fairness and non-discrimination obligations.
  • Cross-border operators with EU programmes will find substantial overlap; the incremental work is primarily in translation, MSIT registration, and Korean-language disclosure obligations.

Background and context

Korea has a long record of technology legislation. The Personal Information Protection Act (PIPA), first enacted in 2011 and substantially revised in 2023, is among the most mature data protection statutes in the Asia-Pacific region. The Act on Promotion of Information and Communications Network Utilisation and Information Protection (the Network Act) has governed platform conduct for two decades. These instruments created a regulatory culture accustomed to detailed statutory obligations on technology operators, administered through specialist agencies with real enforcement capacity.

The AI Basic Act sits within this tradition. It was prepared through a multi-year National Assembly deliberation process that accelerated through 2023 and 2024, drawing on OECD AI Principles, the EU AI Act's architecture, and domestic consultation with industry and civil society. The National Assembly passed the Act in December 2024. The Ministry of Science and ICT (MSIT), which also administers the Network Act and broader ICT sector regulation, is the primary administrative body. Subsidiary implementing regulations, expected from MSIT in 2025 and early 2026, will fill in procedural detail on registration, documentation standards, and incident reporting timelines.

The Act applies from August 2026, which places it in the same compliance window as the EU AI Act's Annex III high-risk obligations (subject to the possible delay under the EU Digital Omnibus; Korea's date is not affected by European legislative developments). For operators already preparing EU programmes, the August 2026 date is therefore a consolidation point rather than a new obligation horizon.

Structure of the Act

The AI Basic Act follows a risk-tiered structure that should be legible to operators familiar with the EU AI Act, even though the Korean statutory text uses different terminology and a somewhat different analytical frame.

At the base layer, the Act establishes general AI governance principles applicable to all providers and deployers of AI systems used in Korea. These include accuracy and reliability obligations, a general duty of transparency to users of AI systems, a fairness and non-discrimination principle, and an accountability obligation requiring operators to designate responsibility for AI-related decisions. These principles operate as a floor; they apply to ordinary AI systems that do not meet the high-impact threshold.

The Act's most operationally significant tier covers high-impact AI systems, defined by domain rather than by technical characteristics. High-impact designation triggers a substantively heavier set of obligations: transparency to affected persons (more detailed than the general duty), mandatory human oversight assignment, risk management documentation, and a notification obligation to MSIT covering serious incidents. This structure is functionally parallel to the EU AI Act's Annex III and Article 26 deployer regime, though Korea's specific procedural requirements are set partly through the Act itself and partly through MSIT implementing regulations.

At the strictest tier sits government-managed AI: AI systems operated by or on behalf of public authorities in consequential administrative decisions. The Act imposes additional procedural requirements on this category, including enhanced documentation, mandatory review mechanisms, and rights of contestation for affected individuals. This tier has no direct private-sector equivalent, though operators providing AI systems to Korean public authorities should treat it as bearing on their contractual and technical obligations.

The Act does not include a list of prohibited AI systems comparable to EU AI Act Article 5. Instead, prohibited conduct is addressed through the general fairness and non-discrimination obligations, through PIPA's existing rules on automated individual decision-making, and through the criminal and civil liability framework that applies across Korean law. Operators who have designed systems around EU Act Article 5 requirements will not find a direct Korean equivalent but should note that the underlying prohibited conduct (mass surveillance, social scoring, subliminal manipulation, exploitation of vulnerability) would likely engage the general fairness and non-discrimination duties at minimum.

High-impact AI: the designated domains

The AI Basic Act designates high-impact AI by reference to seven domain categories. The list is worth comparing in detail to EU AI Act Annex III because cross-border operators managing a single system classification process will need to know where the lists align and where they differ.

Medical devices and clinical diagnosis is the first domain. AI systems used in diagnosis, prognosis, or treatment recommendation, and AI embedded in regulated medical devices, are high-impact under the Korean Act. In the EU AI Act, the counterpart classification runs through Article 6(1) rather than Annex III: AI systems that are medical devices, or safety components of medical devices, regulated under Annex I sectoral legislation are high-risk. The practical compliance obligation is similar: documentation of clinical validation, human oversight in the diagnostic workflow, and incident reporting for adverse outcomes.

Employment decisions is the second domain. AI systems used in recruitment, selection, performance evaluation, termination, or promotion decisions involving workers are high-impact. This is the closest parallel to Annex III item 4, covering AI for employment and worker management. EEOC guidance in the United States and ICO guidance in the United Kingdom address the same domain through different instruments. Operators running hiring automation across multiple jurisdictions will recognise the pattern.

Educational assessment is the third domain. AI systems used to assess student performance, determine placement, or make decisions affecting educational advancement are high-impact. This maps to Annex III item 3 (AI in education and vocational training). Korea's rapidly digitised education sector makes this a practically significant category.

Financial creditworthiness is the fourth domain. AI systems used in credit scoring, loan decisions, insurance underwriting, and related financial determinations are high-impact. The EU AI Act equivalent is Annex III item 5(b) (AI in natural person creditworthiness assessment). Korea's Financial Services Commission (FSC) also maintains sectoral rules for AI in financial services that operate alongside the AI Basic Act framework, creating a layered obligation structure for financial sector operators.

Criminal justice and public safety is the fifth domain. AI systems used in policing, predictive risk assessment, or public safety operations fall in this category. The EU AI Act equivalents are Annex III items 6 and 8, covering law enforcement and the administration of justice. Because the Korean Act does not separately regulate law enforcement AI through an equivalent of the EU Act's biometric surveillance provisions, this category is broader in its practical application than its EU counterparts.

Critical infrastructure is the sixth domain. AI systems used in the operation of energy, transport, water, communications, and financial market infrastructure are high-impact. This maps to Annex III item 2. The MSIT designation process for critical infrastructure AI is expected to align with Korea's existing Critical Information Infrastructure Protection Act frameworks, which already impose layered security and resilience obligations.

Legal services is the seventh domain. AI systems used in legal research, case assessment, document preparation with legal effect, or legal advice are high-impact. The EU AI Act equivalent is Annex III item 8 (AI for administration of justice and democratic processes). For operators running legal AI platforms with Korean users, this is the category most likely to create a compliance obligation that is not already covered by EU programme design, given the specifics of Korean legal practice and language requirements.

The MSIT designation process works through initial statutory designation (the seven domains above) and a supplementary designation power allowing MSIT to add further AI uses by ministerial regulation without primary legislation. This mirrors the European Commission's delegated act power to update Annex III under the EU AI Act. Operators should monitor MSIT regulatory activity on the same cadence as they monitor the European AI Office's Annex III update process.
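The seven-domain test above lends itself to a simple first-pass screen in a classification pipeline. The sketch below is illustrative only: the domain labels and the helper are assumptions for internal tooling, not terms defined by the Korean Act or by MSIT regulations, and the statutory list may be extended by ministerial regulation.

```python
# Illustrative first-pass screen against the seven statutory high-impact
# domains. Labels are internal shorthand, not statutory terminology.
HIGH_IMPACT_DOMAINS = {
    "medical",                  # medical devices and clinical diagnosis
    "employment",               # recruitment, evaluation, termination, promotion
    "education",                # student assessment, placement, advancement
    "credit",                   # credit scoring, lending, insurance underwriting
    "criminal_justice",         # policing, risk assessment, public safety
    "critical_infrastructure",  # energy, transport, water, comms, financial markets
    "legal_services",           # legal research, case assessment, legal advice
}

def is_high_impact(domain: str) -> bool:
    """True if the system's primary domain falls in one of the seven
    statutory categories. Initial designation only: MSIT can add further
    uses by ministerial regulation, so this set must be kept current."""
    return domain in HIGH_IMPACT_DOMAINS
```

A screen like this only answers the coarse question; borderline systems still need case-by-case legal analysis against the statutory text and any MSIT supplementary designations.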

Transparency and human oversight obligations

The Act imposes transparency obligations at two levels, following the structure used in the EU AI Act's Articles 13 and 50.

For all AI systems interacting with users, operators are required to disclose that the user is interacting with an AI system where this would not otherwise be apparent. The disclosure must be clear and accessible, using plain language. This maps directly to EU AI Act Article 50(1), which requires disclosure for AI systems intended to interact with natural persons. Korea adds a specific duty to make the disclosure in Korean (or in the language of the interface) where the system is deployed to Korean users, which has operational implications for operators running multilingual deployments.
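For multilingual deployments, the Korean-language duty described above can be reduced to a small routing rule. The following is a sketch of one interpretation (Korean or the interface language is acceptable for Korean users), not statutory text, and the function name is an assumption.

```python
def acceptable_disclosure_languages(korean_deployment: bool,
                                    interface_lang: str) -> set:
    """Languages in which the 'you are interacting with an AI system'
    notice may be given, under the rule sketched above: for systems
    deployed to Korean users, Korean or the interface language; otherwise
    the interface language. An interpretation, not legal advice."""
    if korean_deployment:
        return {"ko", interface_lang}
    return {interface_lang}
```

In practice most operators will simply ship a Korean notice for any Korean-market surface rather than branch per user, but the rule above shows where the obligation bites.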

For high-impact AI systems, the transparency obligation is substantively heavier. The Act requires operators to provide affected persons with information about: the fact that an AI system was used in a decision affecting them; the basis of the decision at a level sufficient for the affected person to understand and contest it; and the identity of the responsible operator or responsible person within the operator organisation. This maps to EU AI Act Article 26(7) (information to affected workers), Article 26(11) (information to affected natural persons), and the Article 86 right to explanation of individual decision-making, though Korea's provision is Act-specific rather than an extension of its PIPA framework.

Human oversight obligations for high-impact AI require operators to designate a named individual or role responsible for monitoring and intervening in the AI system's operation. The designated person must have the technical access and organisational authority needed to intervene. Documentation of the oversight designation is required and must be retained for audit. The structure is closely parallel to EU AI Act Article 26(2), which requires deployers to assign human oversight to natural persons with the necessary competence, training, and authority. Korea's implementing regulations are expected to specify retention periods and documentation standards; until they are published, operators should apply the EU Act standard as a safe proxy.

Incident reporting for high-impact AI requires operators to notify MSIT within a specified period following a serious incident caused by or involving the AI system. The Act's definition of serious incident covers outcomes causing significant bodily or psychological harm, significant financial loss, or a significant adverse effect on fundamental rights. MSIT implementing regulations are expected to specify the notification period and form; the EU Act standard under Articles 26(5) and 73 (reporting not later than 15 days from awareness) is a reasonable interim target for operators designing cross-border programmes.
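An interim notification deadline along these lines is easy to track mechanically. The helper below is a sketch: the 15-day figure is the EU-derived interim target discussed above, the counting convention (calendar versus business days) is an assumption pending MSIT implementing regulations, and public holidays are ignored.

```python
from datetime import date, timedelta

def notification_deadline(awareness: date, days: int = 15,
                          business_days_only: bool = False) -> date:
    """Latest notification date, `days` after the awareness date.
    Calendar-day counting by default; business-day counting (Mon-Fri,
    public holidays ignored) as an alternative. The actual period and
    convention will be fixed by MSIT implementing regulations."""
    if not business_days_only:
        return awareness + timedelta(days=days)
    d, remaining = awareness, days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return d
```

Wiring this into the incident log gives an auditable record that each MSIT notification was made inside the interim window.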

The MSIT registration and enforcement framework

The Ministry of Science and ICT is the primary enforcement authority for the AI Basic Act. Unlike the EU, which established a dedicated European AI Office and a coordinated system of national supervisors, Korea routes AI Act enforcement through MSIT's existing ICT regulatory infrastructure. This has practical implications for operators. MSIT has decades of enforcement experience in the ICT sector, including under the Network Act and the PIPA framework (the latter jointly administered with the Personal Information Protection Commission, PIPC). Enforcement culture is systematic and documentation-oriented.

Registration requirements for high-impact AI are expected to be specified through MSIT implementing regulation. The Act grants MSIT authority to require providers and deployers of high-impact AI to register with a designated registry, maintain documentation accessible to MSIT on request, and submit periodic compliance reports. The precise scope of registration obligations was subject to consultation through 2025; the implementing regulations are the definitive source once published.

Penalties under the AI Basic Act operate on a graduated scale. Administrative fines of up to KRW 30 million apply to transparency disclosure violations (failure to disclose AI system use to users and affected persons). More serious violations, including failure to implement required human oversight for high-impact AI or failure to report serious incidents, attract higher administrative fines. The Act also grants MSIT authority to issue corrective orders requiring operators to modify or suspend AI system operation, with non-compliance carrying additional penalties. MSIT can also refer serious cases to the PIPC where a breach also involves personal data processing, creating a dual-enforcement scenario comparable to the coordination between national AI supervisors and data protection authorities in EU Member States.

Extraterritorial reach follows the pattern of Korea's existing technology legislation. The Act applies to AI systems used in Korea regardless of where the provider or deployer is established. An operator based in Germany whose AI system is deployed to Korean users falls within the regime for those users. This is the same extraterritorial logic as the EU AI Act and the Colorado AI Act. The practical implication is that operators who have already confronted the EU Act's extraterritorial scope question have answered the same question for Korea: the analysis is the same; only the specific operator duties differ by regime.

Practical implications for cross-border operators

For operators who have built a compliance programme to meet the EU AI Act's Article 26 deployer obligations, the Korea AI Basic Act creates incremental rather than parallel obligations. The underlying analytical work (system classification, risk documentation, oversight assignment, and incident protocol) transfers substantially. What Korea adds is primarily a set of Korean-language and MSIT-specific requirements layered on top.

System classification is the first task. Operators should map their deployed AI systems against the seven high-impact domains using the same classification methodology applied to EU AI Act Annex III. Because the domain lists are substantially similar, a system already classified as Annex III high-risk will almost certainly be high-impact under the Korean Act. A system that falls below the EU Annex III threshold may still be high-impact under the Korean Act (for example, in legal services, where the Annex III item covers a narrower set of uses than the Korean provision).

Documentation must be adapted to Korean requirements. The operator file maintained for EU purposes requires supplementary entries covering: the MSIT registration number (once implementing regulations specify the registration process), Korean-language versions of user-facing disclosures and affected-person information, documentation of the Korean-market oversight designee, and the MSIT incident notification log. The underlying risk management record, system description, and monitoring plan can be shared documents with Korean addenda rather than parallel documents.
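The shared-core-plus-addendum structure described above can be modelled directly in the operator's record tooling. The sketch below is an illustration under stated assumptions: the field names are hypothetical (MSIT has not prescribed a schema), and the registration identifier is left empty until implementing regulations specify the registry process.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class KoreaAddendum:
    """Korea-specific entries layered on the shared operator file.
    Field names are illustrative, not MSIT-prescribed."""
    msit_registration_id: Optional[str] = None   # pending implementing regulations
    korean_disclosures: dict = field(default_factory=dict)  # doc name -> Korean version path
    oversight_designee_kr: str = ""              # Korean-market oversight designee
    msit_incident_log: list = field(default_factory=list)   # MSIT notifications made

@dataclass
class OperatorFile:
    """Shared core record (built for the EU programme) plus a Korean addendum,
    rather than a parallel document set."""
    risk_management_record: str
    system_description: str
    monitoring_plan: str
    korea: KoreaAddendum = field(default_factory=KoreaAddendum)
```

Keeping the core fields single-sourced and confining Korea-specific material to the addendum is what makes the two regimes maintainable as one programme.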

For operators without an existing EU programme, the rational approach is to design a single programme to the higher of the two standards, treating the EU Act's Article 26 requirements as the procedural baseline and adding Korean-specific elements. An operator that can evidence the five shared obligations (risk management, transparency, human oversight, documentation, incident response) is in a defensible position in both regimes. See also the EU AI Act operator obligations guide at agentliability.eu for the detailed EU framework. For the broader Asia-Pacific context in which Korea's Act sits, see Asia-Pacific AI governance in 2026. For the transatlantic comparison, see US, EU, UK: three approaches to the same question.

Certification and assurance may support compliance positioning. The Agent Certified methodology applies across jurisdictions and provides a structured evidence framework for documenting compliance across the five shared obligations. A certified operator has produced the documentation that both MSIT and an EU national supervisor would expect to review.

Five shared obligations. Both the EU AI Act and the Korea AI Basic Act require: risk management, transparency to users, human oversight, documentation, and incident response. An operator with a well-built EU programme requires targeted adaptation for Korea, not a parallel programme.
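A minimal gap check over the five shared obligations can serve as the starting point for that targeted adaptation. The obligation labels below come from the list above; the evidence-flag representation and function name are assumptions for illustration.

```python
# The five obligations shared by the EU AI Act and the Korea AI Basic Act.
SHARED_OBLIGATIONS = (
    "risk_management", "transparency", "human_oversight",
    "documentation", "incident_response",
)

def compliance_gaps(evidence: dict) -> list:
    """Return the shared obligations for which no evidence is recorded,
    in the canonical order. A missing key counts as a gap."""
    return [o for o in SHARED_OBLIGATIONS if not evidence.get(o, False)]
```

An empty result does not establish compliance in either regime; it only confirms that every shared obligation has at least some evidence attached before jurisdiction-specific review begins.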

Related reading

For the detailed EU deployer framework, see EU AI Act operator obligations 2026 at agentliability.eu. For the Asia-Pacific context, see Asia-Pacific AI governance in 2026. For the three-jurisdiction comparison, see US, EU, UK: three approaches to the same question.

Frequently asked questions

What is the Korea AI Basic Act and when does it apply?

The Framework Act on the Promotion of Artificial Intelligence and Establishment of a Foundation for Trust (the AI Basic Act) was enacted by the National Assembly of Korea in December 2024 and is administered by the Ministry of Science and ICT (MSIT). It applies from August 2026 and establishes obligations for providers and deployers of AI systems used in Korea, with heightened requirements for high-impact AI systems used in sensitive domains.

What is a high-impact AI system under the Korea AI Basic Act?

The AI Basic Act designates high-impact AI as systems used in domains where errors could cause significant harm to persons: medical devices and diagnosis, employment decisions, educational assessment, financial creditworthiness, criminal justice and public safety, critical infrastructure, and legal services. High-impact designation triggers transparency obligations, human oversight requirements, and risk management documentation comparable in structure to the EU AI Act's Annex III regime.

How does the Korea AI Basic Act compare to the EU AI Act?

The two regimes share the same structural logic: a risk-tiered approach with heightened obligations for high-impact uses, transparency duties for most systems, and human oversight requirements. Korea's regime is narrower in procedural detail than the EU Act but covers the same substantive ground. The key practical difference for cross-border operators is that Korea does not yet have a dedicated AI supervisory authority comparable to the European AI Office; MSIT administers the regime through existing ICT enforcement infrastructure.

Does the Korea AI Basic Act have a list of prohibited AI systems?

No. Unlike EU AI Act Article 5, the Korean Act does not enumerate a list of prohibited AI systems. Prohibited conduct is addressed through the general fairness and non-discrimination obligations, through PIPA's automated decision-making provisions, and through the general civil and criminal liability framework. Operators designing systems around the EU prohibited uses list will not find a direct Korean parallel but should treat the general fairness duties as covering substantially the same ground.

Who enforces the Korea AI Basic Act and what are the penalties?

MSIT is the primary enforcement authority, operating through its existing ICT regulatory infrastructure. Administrative fines of up to KRW 30 million apply to transparency violations. Higher fines apply to more serious breaches including failure to implement high-impact AI oversight or failure to report serious incidents. MSIT may also issue corrective orders and can refer data-related breaches to the PIPC for concurrent enforcement.

References

  1. Republic of Korea, Framework Act on the Promotion of Artificial Intelligence and Establishment of a Foundation for Trust (AI Basic Act, in Korean: 인공지능 발전과 신뢰 기반 조성 등에 관한 법률), enacted December 2024, effective August 2026.
  2. Ministry of Science and ICT (MSIT), Korea, guidelines on the implementation of the AI Basic Act, 2025.
  3. Regulation (EU) 2024/1689 of the European Parliament and of the Council (AI Act), Annex III high-risk AI systems, for comparison.
  4. OECD, AI Principles (2024 revision), as a reference standard shared across the two regimes.
  5. Korea Personal Information Protection Commission, enforcement data on technology sector compliance, 2024.