Brazil has the most advanced AI governance framework in Latin America. PL 2338, the Projeto de Lei approved by the Brazilian Senate and now before the Chamber of Deputies, establishes a risk-based architecture that, once enacted, will create binding compliance obligations for any AI system producing effects on Brazilian territory. For European and global operators deploying AI agents that serve Brazilian users, the framework introduces obligations that run in parallel with those created by Regulation (EU) 2024/1689 and by Brazil's own Lei Geral de Proteção de Dados.

Key takeaways

  • PL 2338 establishes three risk tiers: excessive risk (prohibited), high risk (obligations-heavy), and general use (transparency-focused). The structure is comparable to the EU AI Act's framework but with Brazilian constitutional law as its foundation.
  • The framework applies extraterritorially: systems that produce effects in Brazilian territory or are offered to users in Brazil fall within scope regardless of the operator's establishment.
  • The ANPD (Autoridade Nacional de Proteção de Dados) has a central oversight role, building on its existing LGPD enforcement authority. This creates a single regulatory point of contact for AI and data protection compliance in Brazil.
  • High-risk obligations require algorithmic impact assessments, transparency registers, human oversight mechanisms, and technical documentation. These mirror EU AI Act deployer obligations closely enough that EU compliance documentation is largely transferable.
  • EU operators with Brazilian market exposure face dual-regime compliance: LGPD plus PL 2338 on the Brazilian side, GDPR plus EU AI Act on the European side. The frameworks are compatible, but gaps exist on automated decision-making rights and sectoral exemptions.

Legislative background

PL 2338 of 2023 was introduced to the Brazilian Senate by Senate President Rodrigo Pacheco, building on a comprehensive commission of jurists process in which a panel of legal experts, chaired by Superior Court of Justice Minister Ricardo Villas Bôas Cueva, produced a draft framework after extensive public consultation; Senator Eduardo Gomes served as rapporteur during the Senate's deliberations. The process was modelled partly on Brazil's experience drafting the LGPD and was informed by the European Commission's work on the EU AI Act.

The Brazilian Senate approved the bill in December 2024 after committee amendments that adjusted the risk classification criteria and strengthened the provisions on algorithmic impact assessments. As of early 2026, the bill is advancing through the Chamber of Deputies (Câmara dos Deputados), where further amendments are expected before it returns to the Senate for a final vote. The bill has not yet been enacted, but it is the most advanced AI framework in the region, and Brazilian regulators, including the ANPD, are already developing implementation guidance on the assumption of enactment.

The legislative record reflects a deliberate effort to position Brazil as a standard-setter for AI governance in Latin America and the Global South more broadly. Brazil's G20 presidency in 2024 included AI governance as a headline agenda item, producing the Brasília Ministerial Declaration on AI, which references PL 2338's framework as a model for other jurisdictions. Argentina, Colombia, and Mexico are each developing AI governance frameworks, and PL 2338 is widely expected to influence all three.

Risk architecture: three tiers

PL 2338 classifies AI systems into three risk categories. Understanding the classification criteria is the first step for any operator mapping their AI deployments against the Brazilian framework.

Excessive risk (risco excessivo) covers applications that are prohibited outright. The prohibited categories include: AI systems used by public authorities for social scoring of natural persons based on personal behaviour or characteristics; AI systems that use subliminal techniques to manipulate individuals without their awareness in ways likely to cause harm; AI that exploits vulnerabilities of specific groups, including children and the elderly, to produce distorted behaviour; and real-time remote biometric surveillance in publicly accessible spaces by law enforcement, except under judicial authorisation and narrow exceptions. The prohibited categories align closely with the prohibitions in Article 5 of Regulation (EU) 2024/1689, with some adaptations reflecting Brazilian constitutional law's specific protections.

High risk (alto risco) covers applications in eight domains: employment and workforce management; education and vocational training; essential public and private services including credit, insurance, and housing; public safety and law enforcement; migration and asylum; administration of justice; democratic processes and electoral contexts; and healthcare and medical decisions. In each domain, deployers face a set of obligations that include algorithmic impact assessments, transparency registrations, human oversight mechanisms, incident notification, and technical documentation retention. The obligations are structured similarly to Article 26 of the EU AI Act, with the ANPD as the primary oversight authority.

General use AI carries lighter obligations. Providers and deployers of general-use AI systems must ensure transparency about the AI nature of the interaction, maintain basic documentation, and comply with anti-discrimination requirements. The framework for general use AI draws on Brazil's existing consumer protection law (the Código de Defesa do Consumidor) and LGPD principles.
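The three-tier structure above can be sketched as a simple screening helper: prohibited uses are checked first, then high-risk domains, with everything else falling to the general-use tier. This is an illustrative sketch only; the category names below are paraphrased from the bill's text, not official enumerations, and real classification requires legal analysis of each deployment.

```python
from enum import Enum

class RiskTier(Enum):
    EXCESSIVE = "excessive risk (prohibited)"
    HIGH = "high risk"
    GENERAL = "general use"

# Paraphrased from PL 2338's prohibited categories; not an official enumeration.
PROHIBITED_USES = {
    "social_scoring_by_public_authority",
    "subliminal_manipulation",
    "exploitation_of_vulnerable_groups",
    "realtime_biometric_surveillance_without_judicial_authorisation",
}

# Paraphrased from the bill's eight high-risk domains.
HIGH_RISK_DOMAINS = {
    "employment", "education", "essential_services",
    "public_safety", "migration_asylum", "administration_of_justice",
    "democratic_processes", "healthcare",
}

def classify(use_case: str, domain: str) -> RiskTier:
    """Screen for prohibited uses first, then for high-risk domains."""
    if use_case in PROHIBITED_USES:
        return RiskTier.EXCESSIVE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.GENERAL
```

A credit-scoring use case in the essential-services domain would screen as high risk; a public-authority social-scoring use case screens as prohibited regardless of domain.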

The ANPD as primary regulator

The decision to vest primary AI oversight authority in the ANPD is significant for operators already navigating LGPD compliance. The ANPD, created by the LGPD and operational since 2020, has developed substantial technical capacity for data protection enforcement. PL 2338 expands its mandate to cover AI-specific obligations, creating a consolidated compliance relationship for operators working with Brazilian personal data and AI systems.

The ANPD has authority under PL 2338 to issue implementing regulations, conduct investigations, impose administrative penalties, and develop sector-specific guidance in coordination with other sectoral regulators. Sectoral regulators retain oversight authority for AI applications within their jurisdictions: the Central Bank of Brazil (Banco Central do Brasil) oversees AI in financial services, the National Supplementary Health Agency (ANS) oversees health AI, and ANATEL oversees AI in telecommunications. This mirrors the EU's model where the AI Office coexists with sector regulators, though the Brazilian version gives sectoral regulators somewhat more autonomous authority than the EU model does.

The ANPD has already published preliminary guidance on automated decision-making under LGPD Article 20, which gives data subjects the right to request information about decisions taken exclusively by automated means that affect their interests. PL 2338 builds on this foundation, extending the right to explanation to a broader category of consequential AI decisions and requiring operators to implement accessible mechanisms through which affected individuals can exercise it.

Algorithmic Impact Assessment (Avaliação de Impacto Algorítmico)

The Avaliação de Impacto Algorítmico (AIA) is the central documentary obligation for high-risk AI deployers under PL 2338. It functions similarly to the Fundamental Rights Impact Assessment required under Article 27 of the EU AI Act, but with a wider scope that also encompasses economic impacts and competition effects alongside fundamental rights considerations.

The AIA must be conducted before deployment and updated whenever there is a material change to the system, the use case, or the population affected. It must address: the system's purpose and technical characteristics; the population that will be affected and the nature of that effect; the risk of discriminatory outcomes based on race, gender, age, disability, religion, or other protected characteristics under Brazil's Constitution; the measures taken to mitigate identified risks; the human oversight mechanism; and the procedure for affected individuals to contest decisions made by the system.
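The required AIA contents listed above can be captured as a structured record that flags incomplete sections before a deployment review. The field names below are a working paraphrase of the bill's requirements, not the ANPD's official template, which will exist only once implementing regulations are published.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmicImpactAssessment:
    """Working paraphrase of PL 2338's AIA contents; not an official template."""
    system_purpose: str = ""
    technical_characteristics: str = ""
    affected_population: str = ""
    nature_of_effect: str = ""
    # Risk of discriminatory outcomes on constitutionally protected characteristics
    discrimination_risks: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)
    human_oversight_mechanism: str = ""
    contestation_procedure: str = ""

    def incomplete_fields(self) -> list[str]:
        """Return the names of fields still empty, for pre-deployment review."""
        return [name for name, value in vars(self).items() if not value]
```

Running `incomplete_fields()` on a partially drafted assessment surfaces the documentation gaps (for example, a missing human oversight mechanism) while there is still time to address them.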

The AIA must be made available to the ANPD on request and, in specified high-risk categories, submitted to the ANPD before deployment. The ANPD may reject a deployment if the AIA demonstrates unacceptable residual risk that the operator has not adequately mitigated. For operators already conducting fundamental rights impact assessments under the EU AI Act, the AIA adds a competition and economic dimension that requires additional analysis but does not require a fundamentally different assessment structure.

The LGPD intersection. LGPD Article 20 already gives Brazilian data subjects the right to request a review of decisions taken exclusively by automated means that affect their interests significantly, including decisions related to employment, credit, and personal profile creation. PL 2338 expands the scope of this right to cover AI-assisted decisions (not only fully automated ones) and requires operators to implement clear, accessible mechanisms for its exercise. Operators who have already built LGPD Article 20 response procedures have a foundation for the PL 2338 requirements, but the procedural detail and the covered decision categories are broader under the AI framework.

Extraterritorial reach and implications for global operators

PL 2338 follows a market-effects principle. The framework applies to AI systems that produce effects on individuals or groups located in Brazilian territory, or that are offered to users in Brazil, regardless of where the provider or deployer is established. A European operator running a credit scoring AI that is used by Brazilian financial institutions to assess Brazilian applicants is within scope. A US operator running an HR AI that is used by Brazilian subsidiaries of multinational employers to make employment decisions affecting Brazilian workers is within scope.
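The market-effects test above can be expressed as a minimal screening predicate: the operator's place of establishment is irrelevant, and either offering to Brazilian users or producing effects in Brazilian territory brings a system into scope. The inputs below are simplified assumptions; actual scope analysis turns on the facts of each deployment.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    operator_country: str              # where the provider/deployer is established
    offered_to_users_in_brazil: bool
    produces_effects_in_brazil: bool   # e.g. decisions about Brazilian residents

def in_pl2338_scope(d: Deployment) -> bool:
    """Market-effects test: establishment is irrelevant; offering and effects control."""
    return d.offered_to_users_in_brazil or d.produces_effects_in_brazil

# A German credit-scoring system used by Brazilian banks is in scope even though
# it is neither established in Brazil nor offered directly to Brazilian users.
eu_credit = Deployment("DE", offered_to_users_in_brazil=False,
                       produces_effects_in_brazil=True)
```

Note that `operator_country` plays no role in the predicate, which is exactly the point of the market-effects principle.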

The extraterritorial application creates a dual-compliance question for European operators. They must simultaneously comply with Regulation (EU) 2024/1689 for their EU-facing operations and with PL 2338 for their Brazil-facing operations. The frameworks are broadly compatible: the risk classification criteria are similar, the high-risk category maps closely between the two, the impact assessment obligations are structurally equivalent, and the transparency and documentation requirements are largely parallel.

The primary gap areas are: the specific content of the AIA versus the FRIA, where PL 2338's economic impact dimension requires additional analysis; the individual rights framework, where PL 2338 builds on LGPD's automated decision rights and creates some procedural requirements that differ from the EU AI Act's person-notification obligations; and the enforcement mechanism, where the ANPD's powers and penalty schedule will differ from those of EU member state market surveillance authorities.
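The gap analysis above can be organised as a simple crosswalk from EU compliance artifacts to their Brazilian counterparts, recording the Brazil-specific work remaining for each. The mapping below is an illustrative assumption based on the structural comparison in this article, not an official equivalence table.

```python
# Illustrative crosswalk; an assumption drawn from this article's comparison,
# not an official equivalence table.
EU_TO_PL2338 = {
    "FRIA (EU AI Act Art. 27)": {
        "counterpart": "AIA (Avaliacao de Impacto Algoritmico)",
        "gap": "add economic impact and competition-effects analysis",
    },
    "Deployer obligations (EU AI Act Art. 26)": {
        "counterpart": "PL 2338 high-risk deployer obligations",
        "gap": "register transparency information with the ANPD",
    },
    "Person-notification obligations": {
        "counterpart": "LGPD Art. 20 rights plus PL 2338 contestation procedures",
        "gap": "cover AI-assisted (not only fully automated) decisions",
    },
}

def residual_work(eu_artifact: str) -> str:
    """Return the Brazil-specific work remaining for a given EU artifact."""
    entry = EU_TO_PL2338.get(eu_artifact)
    return entry["gap"] if entry else "no mapping recorded; assess from scratch"
```

The crosswalk makes the article's central planning point explicit: each EU artifact transfers with adaptation, and the residual work is additive rather than a reconstruction.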

For the EU regulatory context, see the EU AI Act Article 26 deployer obligations guide. For the broader cross-jurisdictional picture, see the EU AI Act's extraterritorial reach and US, EU, UK: three approaches to AI liability.

Latin American context

Brazil is the most advanced jurisdiction in Latin America for AI governance. Argentina has issued Resolution 4/2023 from the Secretary of Innovation and Digital Transformation, which establishes voluntary principles for public sector AI use and provides a framework that private operators are expected to reference. Colombia published a national AI ethics framework in 2021 and is developing binding AI regulation through the Ministry of ICT and the Superintendence of Industry and Commerce. Mexico has begun work on an AI governance framework under the auspices of COFECE, the competition authority, though no binding AI legislation has been enacted as of early 2026.

The Brazilian framework will have regional influence. The ANPD has established working relationships with data protection and AI regulatory bodies across the region, and the Brasília Declaration's endorsement of the PL 2338 framework as a model creates a strong incentive for regional convergence. Operators building Latin American AI governance programmes should treat the Brazilian standard as the effective regional ceiling and document how their governance maps onto the lighter requirements in Argentina, Colombia, and Mexico.

Practical preparation steps

For global operators with Brazilian market exposure, four preparation steps are advisable before PL 2338 is enacted and its implementing regulations published.

First, conduct a scope mapping. Identify all AI systems that serve Brazilian users, make decisions about Brazilian residents, or produce effects in Brazilian territory. Classify each against PL 2338's three-tier risk structure. The EU AI Act classification performed for the same system is a useful starting point, and the differences in the Brazilian classification criteria are manageable.

Second, review LGPD compliance for the automated decision-making dimension. LGPD Article 20 is already enforceable, and the ANPD is actively monitoring compliance. Operators whose automated decision-making review procedures are incomplete should address this before PL 2338 adds the AI-specific layer on top of it.

Third, prepare the AIA template. Draft the structure of an AIA using the bill's requirements and the ANPD's preliminary guidance on algorithmic impact. Applying the template to the most significant high-risk deployments before the law is enacted identifies the documentation gaps while there is time to address them.

Fourth, establish ANPD monitoring. The ANPD's implementing regulations, expected within twelve months of enactment, will define the specific procedural requirements for AIA submission, registration, incident notification, and individual rights procedures. Operators who have followed the development process will be positioned to adapt quickly.

Frequently asked questions

What are the three risk tiers in Brazil's PL 2338?

Excessive risk (prohibited uses), high risk (impact assessments, human oversight, technical documentation required), and general use (transparency and non-discrimination obligations). The categories are structurally similar to the EU AI Act's risk architecture.

Does Brazil's AI framework apply to companies based outside Brazil?

Yes. PL 2338 applies to AI systems that produce effects in Brazilian territory or that are offered to users in Brazil, regardless of where the provider or deployer is established. European and US operators serving Brazilian users are within scope.

How does PL 2338 interact with Brazil's LGPD data protection law?

LGPD already governs personal data processing in Brazil. PL 2338 builds on LGPD and assigns the ANPD a central AI oversight role. Operators compliant with LGPD have a foundation for PL 2338 compliance, but the AI framework adds obligations specific to automated decision-making that go beyond LGPD Article 20.

What is the relationship between PL 2338 and the EU AI Act for compliance planning?

The two frameworks are broadly compatible. An operator that has built EU AI Act compliance for high-risk deployments will find that most documentation and governance work transfers to PL 2338 compliance with adaptation rather than reconstruction. The primary difference is PL 2338's economic impact dimension in the AIA and some procedural differences in individual rights procedures.

References

  1. Projeto de Lei do Senado PL 2338/2023, Brazil AI Framework Bill. Senate approval December 2024.
  2. Lei Geral de Proteção de Dados (LGPD), Federal Law No. 13709 of 14 August 2018.
  3. Autoridade Nacional de Proteção de Dados (ANPD). Preliminary guidance on automated decision-making under LGPD Article 20. 2024.
  4. G20 Brasília Ministerial Declaration on AI Governance, November 2024.
  5. Código de Defesa do Consumidor, Federal Law No. 8078/1990.
  6. Regulation (EU) 2024/1689 (EU AI Act), Articles 5, 26, 27. OJ L, 12 July 2024.
  7. Argentina. Resolución 4/2023, Secretaría de Innovación Pública.
  8. OECD AI Principles, adopted May 2019, revised May 2024. OECD/LEGAL/0449.
  9. ISO/IEC 42001:2023, Artificial intelligence management system.