Canada's proposed Artificial Intelligence and Data Act was the most ambitious AI-specific legislation in North America when it was introduced in June 2022. It died in January 2025 when Parliament was prorogued. What replaced it is a patchwork of privacy law, sector guidance, and voluntary frameworks that operators deploying AI in Canada must now navigate without a single statutory reference point. This analysis explains the current landscape, what the proposed AIDA framework would have required, and how global operators should position their Canadian deployments in 2026.
Key takeaways
- Canada's Artificial Intelligence and Data Act (AIDA) was part of Bill C-27. It died when Parliament was prorogued in January 2025 and has not been re-introduced as of May 2026.
- In the absence of a specific AI statute, the primary regulatory instruments governing AI in Canada are PIPEDA (federally), Law 25 in Quebec, sector guidance from OSFI and Health Canada, and the Treasury Board Directive on Automated Decision-Making for federal government systems.
- Quebec's Law 25 currently provides the most demanding requirements for automated decision-making in any Canadian jurisdiction: a right to be informed of automated decisions using personal data, and a right to request human review.
- Canada's federal government has developed an Algorithmic Impact Assessment framework that is mandatory for federal departments but not binding on private operators; it is used as a procurement criterion for federal contracts and provides a practical voluntary reference.
- Global operators subject to the EU AI Act will generally find that their EU compliance posture exceeds current Canadian requirements. The risk of under-preparation is greater for Canada-only operators who may not have built governance infrastructure and face a rapidly evolving regulatory trajectory.
What AIDA proposed and why it still matters
The Artificial Intelligence and Data Act was Part 3 of Bill C-27, the Digital Charter Implementation Act, 2022. It was introduced in the House of Commons on 16 June 2022 and went through several rounds of committee scrutiny before Parliament was prorogued on 6 January 2025. The prorogation killed all outstanding bills, including Bill C-27.
AIDA's proposed framework is worth understanding even though it never became law, for two reasons. First, it represents the likely template for whatever Canada eventually enacts, because the risk-based model it proposed commands broad policy consensus across Canadian federal parties. Second, it has already influenced federal procurement criteria, voluntary frameworks, and sector-level guidance that operators encounter in practice.
AIDA's central concept was the high-impact AI system. The statute would have empowered the Minister of Innovation, Science and Industry to designate categories of AI system as high-impact by regulation, based on the nature and consequences of their outputs. High-impact systems would have been subject to a set of requirements that tracked the risk-based model familiar from the EU AI Act, though in a distinctly Canadian administrative style.
Developers of high-impact systems would have been required to establish and maintain measures to identify, assess, and mitigate risks of harm or biased output. They would have been required to monitor systems in deployment and to keep records of their risk assessments. Deployers would have been required to publish notices explaining their use of high-impact AI systems in regulated contexts. A federal AI and Data Commissioner would have been created with investigation and order-making powers. Penalties for the most serious offences would have reached the greater of CAD 25 million and 5 per cent of gross global revenue.
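For scale, that penalty ceiling reduces to a one-line calculation. A minimal Python sketch; the function name and the example revenue figure are illustrative, not drawn from the bill:

```python
def aida_max_fine(gross_global_revenue_cad: float) -> float:
    """Greater of CAD 25 million and 5% of gross global revenue,
    per AIDA's proposed ceiling for the most serious offences."""
    return max(25_000_000.0, 0.05 * gross_global_revenue_cad)

# A firm with CAD 2 billion in gross global revenue would have faced
# a ceiling of CAD 100 million rather than the CAD 25 million floor.
print(f"{aida_max_fine(2_000_000_000):,.0f}")  # -> 100,000,000
```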
The substantive requirements proposed in AIDA were less procedurally intensive than the EU AI Act's Annex IV technical documentation and conformity assessment obligations, but the risk logic was similar: identify high-impact uses, require documented mitigation, mandate monitoring, and enforce through a designated authority.
What the current Canadian landscape looks like
With AIDA dead and no replacement legislation in force, operators deploying AI in Canada in 2026 navigate a multi-layered set of obligations that are sector-specific and privacy-law-based rather than AI-specific.
PIPEDA and its provincial equivalents
The Personal Information Protection and Electronic Documents Act applies to the collection, use, and disclosure of personal information by private sector organisations in the course of commercial activity, except in provinces with substantially similar legislation, and always to federally regulated organisations and interprovincial data flows. Most AI systems that process personal data to make or inform decisions are subject to PIPEDA. The Act's requirements for meaningful consent, access, and correction apply to automated data processing. The Office of the Privacy Commissioner (OPC) has interpreted PIPEDA's fairness principles as requiring organisations to be transparent about automated decision-making and to provide recourse where such decisions have significant consequences.
Alberta and British Columbia have their own substantially similar legislation (each titled the Personal Information Protection Act, or PIPA) that applies to provincially regulated operators. Quebec's Law 25 goes further. Phased in between September 2022 and September 2024, with its automated decision-making provisions in force since September 2023, Law 25 explicitly addresses automated decision-making. It requires organisations to inform individuals when a decision that produces a legal or similarly significant effect is made exclusively by automated processing of their personal information, and to provide the individual with the right to have the decision reviewed by a human upon request. Law 25 also requires a privacy impact assessment for any project to acquire, develop, or overhaul an information system that involves personal information. For operators active in Quebec, Law 25 currently imposes the most demanding Canadian AI governance requirements through the privacy channel.
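To make the Quebec trigger concrete, the following Python sketch shows how an operator might encode a first-pass Law 25 screening check in an internal compliance tool. The field and function names are hypothetical, and the three boolean tests are simplifications of the statutory language, not a substitute for it:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """Illustrative record of an AI deployment for a first-pass Law 25 check."""
    uses_personal_info: bool        # processes personal information of Quebec residents
    fully_automated_decision: bool  # decision made exclusively by automated processing
    significant_effect: bool        # legal or similarly significant effect on the person

def law25_obligations(d: Deployment) -> list[str]:
    """Return the Law 25 duties this deployment plausibly triggers.

    A simplified screening heuristic, not the statutory tests themselves.
    """
    duties: list[str] = []
    if d.uses_personal_info:
        duties.append("privacy impact assessment before the project proceeds")
        if d.fully_automated_decision and d.significant_effect:
            duties.append("inform the individual that the decision was automated")
            duties.append("offer human review of the decision on request")
    return duties

# Example: a fully automated credit decision affecting Quebec applicants
print(law25_obligations(Deployment(True, True, True)))
```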
Sector guidance
In the absence of a comprehensive statute, federal sector regulators have moved to fill the gap. The Office of the Superintendent of Financial Institutions published its Artificial Intelligence in Banking Supervisory Framework in 2024, which applies to federally regulated financial institutions. The framework sets expectations for governance, model risk management, explainability, and monitoring of AI systems used in consequential financial decisions. Banks and insurers regulated by OSFI face binding supervisory expectations that closely resemble what a comprehensive AI statute would impose. Non-compliance can trigger supervisory intervention.
Health Canada has issued AI-specific guidance for software as a medical device (SaMD) that builds on existing medical device regulatory requirements. AI systems used in clinical decision support, diagnostic imaging, and patient triage are subject to this guidance if they fall within the medical device definition. The guidance requires pre-market assessment, performance monitoring, and lifecycle management documentation.
The Treasury Board Directive on Automated Decision-Making
The Directive on Automated Decision-Making, issued by the Treasury Board of Canada Secretariat, applies to federal government departments and agencies deploying automated decision systems that affect the rights or interests of individuals or organisations. The Directive requires completion of an Algorithmic Impact Assessment before deployment. The AIA assigns the system to one of four impact levels based on the severity and breadth of potential consequences. The impact level determines the required human oversight mechanism, the required audit trail, the notice obligations, and the right of affected individuals to request human review.
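The level-to-safeguard structure can be pictured as a lookup table. A minimal Python sketch, using simplified paraphrases of the safeguards; the Directive's actual appendices specify the requirements in far more detail and should be consulted directly:

```python
# Illustrative mapping from AIA impact level (1 = lowest, 4 = highest) to
# required safeguards. The control descriptions are simplified paraphrases,
# not the Directive's exact appendix text.
AIA_CONTROLS: dict[int, dict[str, str]] = {
    1: {"human_oversight": "decision may be fully automated",
        "notice": "general notice that automation is in use"},
    2: {"human_oversight": "periodic human review of a sample of decisions",
        "notice": "plain-language notice at the time of the decision"},
    3: {"human_oversight": "human intervention available before the decision takes effect",
        "notice": "notice plus a meaningful explanation of the decision"},
    4: {"human_oversight": "a human makes the final decision",
        "notice": "notice, explanation, and published system documentation"},
}

def required_controls(impact_level: int) -> dict[str, str]:
    """Look up the safeguards associated with an AIA impact level."""
    if impact_level not in AIA_CONTROLS:
        raise ValueError("AIA impact levels run from 1 (lowest) to 4 (highest)")
    return AIA_CONTROLS[impact_level]

print(required_controls(3)["human_oversight"])
```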
Private sector operators are not directly bound by the Directive. However, organisations supplying automated decision systems to federal departments under contract are expected to support the department's AIA requirements. The AIA framework has been adopted as a voluntary reference by a growing number of Canadian corporations and has influenced AI ethics policies at major financial institutions, telecommunications providers, and healthcare organisations. For any operator seeking federal contracts, familiarity with the AIA methodology is effectively a procurement requirement.
How Canada compares to the EU and US approaches
The EU AI Act represents the most comprehensive legislative approach to AI governance currently in force anywhere. It imposes detailed procedural obligations on both providers and deployers of high-risk systems, with mandatory technical documentation, conformity assessments, and a penalty regime with extraterritorial reach. The Act explicitly applies to AI systems placed on the EU market or whose outputs are used in the EU, regardless of where the developer or deployer is established. Global operators subject to the EU AI Act face requirements that have no Canadian equivalent in the current legal framework.
The United States federal baseline remains the absence of a comprehensive AI statute. Executive Order 14110 on Safe, Secure, and Trustworthy AI (October 2023, rescinded by the incoming administration in January 2025) and OMB Memorandum M-24-10 (March 2024) established requirements for federal agencies and set reporting obligations for developers of models trained above specified compute thresholds, but neither instrument applied directly to private sector AI deployment outside the federal contracting context. Colorado's SB 24-205, in force from 1 February 2026, is the first comprehensive US state AI statute and represents the most demanding US subnational baseline. For a detailed analysis of Colorado's approach see the Colorado AI Act deployer guide.
Canada sits between these two positions. The proposed AIDA was closer to the EU model than the current US federal baseline in its risk-based ambition, but the current operative framework is closer to the US in its patchwork character. An organisation that has built a compliance programme for the EU AI Act will, in most respects, exceed what Canadian law currently requires. The risk of under-preparation runs in the other direction: a Canada-only operator that has not built governance infrastructure consistent with international best practice faces a rapidly evolving regulatory trajectory and may need to retrofit governance systems when legislation eventually passes.
The likely trajectory of Canadian AI legislation
The political context following the prorogation has been complex. The Liberal government won the April 2025 federal election under Prime Minister Mark Carney, who had made technology and industrial policy central to his leadership campaign. The new government has indicated that AI governance legislation remains a policy priority, but the specific vehicle for that legislation has not been confirmed. Two paths are under discussion within government policy circles: a successor bill that substantially replicates AIDA's structure with amendments that address criticisms raised during the Bill C-27 committee process, or a phased approach that begins with mandatory reporting requirements and liability provisions before moving to comprehensive standards.
Independent of the legislative timeline, several regulatory instruments are likely to develop. The OPC has committed to updated PIPEDA guidance on automated decision-making following Bill C-27's death on the order paper. OSFI is expected to update its AI banking framework to align with international developments including the EU AI Act and the Basel Committee on Banking Supervision's AI guidance. The Treasury Board Directive is under periodic review and may be expanded in scope.
The practical implication for global operators is that Canadian AI governance requirements will increase over the next two to three years regardless of whether AIDA is re-introduced. Organisations building governance infrastructure now should design it to accommodate obligations broadly consistent with AIDA's proposed high-impact system requirements: documented risk assessment and mitigation, monitoring procedures, transparency notices, and defined human oversight mechanisms. This investment is defensible under current Canadian law and positions the organisation appropriately for whatever legislative framework eventually follows.
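As a concrete starting point, such an inventory can be kept as structured records covering the four elements just listed. A minimal Python sketch with hypothetical field names; nothing in it is mandated by current Canadian law:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an AI governance inventory, loosely modelled on the
    documentation AIDA would have required for high-impact systems.
    All field names are illustrative."""
    name: str
    purpose: str
    high_impact_rationale: str             # why the system is (or is not) high-impact
    risks_identified: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    monitoring_procedure: str = ""         # how outputs are monitored in deployment
    transparency_notice_url: str = ""      # public notice explaining the system's use
    human_oversight: str = ""              # who can intervene, and how
    last_reviewed: date | None = None

record = AISystemRecord(
    name="loan-triage-model",
    purpose="Prioritise loan applications for manual underwriting",
    high_impact_rationale="Affects access to credit",
    risks_identified=["biased output across protected groups"],
    mitigations=["quarterly disparate-impact testing"],
    monitoring_procedure="Monthly drift report reviewed by the model risk team",
    human_oversight="Underwriter makes the final decision on every flagged file",
    last_reviewed=date(2026, 5, 1),
)
```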
What operators should do now
In the absence of a comprehensive statute, the practical compliance priorities for operators deploying AI in Canada in 2026 are as follows. First, identify whether any of your AI deployments affect individuals whose personal data is processed under PIPEDA or Law 25, and ensure your notice, consent, and access procedures cover automated decision-making. If you are operating in Quebec and using automated decisions with significant effects, confirm that you have the required human review mechanism in place.
Second, if you supply systems or services to federal departments, review the AIA methodology and confirm that your system can support the department's AIA completion. Understand which of the four impact levels your system is likely to fall into and what the corresponding requirements are. The AIA tool is publicly available on the Treasury Board website and takes approximately two to four hours to complete for a well-documented system.
Third, if you are in financial services or healthcare, engage with the sector guidance from OSFI or Health Canada as applicable. These are binding supervisory expectations for regulated entities and are the closest analogue to an AI statute that currently applies in those sectors.
Fourth, document your AI governance programme in a form that is legible to both a Privacy Commissioner inquiry and a future legislative compliance review. The substantive content of AIDA's proposed requirements provides a useful voluntary standard. An organisation that has documented its high-impact AI systems, their risk assessments, their mitigations, and their monitoring procedures in terms broadly consistent with AIDA's proposed framework is well positioned for whatever the successor legislation requires.
For the European dimension of AI governance for globally operating organisations, the EU AI Act operator obligations guide on agentliability.eu provides a structured starting point. For the insurance dimension of AI liability in European markets, see agentinsured.eu.
Frequently asked questions
What happened to Canada's AIDA?
The Artificial Intelligence and Data Act was part of Bill C-27, introduced in June 2022. The bill died on the order paper when Parliament was prorogued in January 2025. It has not been re-introduced as of May 2026. Canada currently lacks a comprehensive federal AI statute.
What law governs AI deployment in Canada today?
In the absence of a specific AI statute, AI deployment is governed by PIPEDA and provincial equivalents for data processing, Law 25 in Quebec for automated decision-making with significant effects, sector guidance from OSFI and Health Canada, and the Treasury Board Directive for federal government systems. The combination of these instruments covers most consequential AI deployments without a single statute.
How does Canada compare to the EU AI Act?
The EU AI Act is more comprehensive and procedurally demanding than the current Canadian baseline. A compliance programme built for the EU AI Act will exceed current Canadian requirements in most respects. The risk runs in the opposite direction for Canada-only operators, who may not have governance infrastructure consistent with what legislation will eventually require.
Is the Algorithmic Impact Assessment mandatory for private companies?
The Treasury Board Directive on Automated Decision-Making, which requires the AIA, applies to federal government departments and agencies, not to private sector operators directly. However, it functions as a de facto requirement for suppliers to the federal government and has been widely adopted voluntarily. It is the most detailed publicly available Canadian framework for AI impact assessment and is a useful reference for any organisation documenting its governance programme.
When might Canada pass new AI legislation?
The new federal government has indicated that AI governance remains a legislative priority, but no specific bill or timeline had been confirmed as of May 2026. The most plausible trajectory is a bill introduced in the 2026-2027 parliamentary session that substantially tracks AIDA's risk-based model with amendments addressing criticisms raised during the Bill C-27 committee process. Operators should monitor federal government announcements and the Office of the Privacy Commissioner's ongoing guidance publications.
References
- Bill C-27, Digital Charter Implementation Act, 2022, Part 3, Artificial Intelligence and Data Act, introduced 16 June 2022, died on order paper January 2025.
- Personal Information Protection and Electronic Documents Act (PIPEDA), S.C. 2000, c. 5.
- Quebec Law 25 (Act 25), An Act to Modernize Legislative Provisions as Regards the Protection of Personal Information, in full force September 2024.
- Treasury Board of Canada Secretariat, Directive on Automated Decision-Making, last amended 2023.
- Office of the Superintendent of Financial Institutions, Artificial Intelligence in Banking Supervisory Framework, 2024.
- Executive Order 14110, Safe, Secure, and Trustworthy Artificial Intelligence, 30 October 2023, rescinded January 2025.
- OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, 26 March 2024.
- Colorado SB 24-205, Colorado Consumer Protections for Artificial Intelligence, C.R.S. section 6-1-1701 et seq., in force 1 February 2026.
- NIST AI Risk Management Framework 1.0 (January 2023).
- OECD Principles on AI, updated 2024 revision.