A timeline of major regulations, laws, and policy actions shaping the future of artificial intelligence worldwide.
South Korea's AI Basic Act, enacted in January 2025, moves toward broader enforcement with mandatory requirements for high-risk AI systems. The law establishes national AI governance standards and transparency obligations for major AI providers.
South Korea joins the EU as a major jurisdiction with comprehensive AI legislation, representing the global trend toward proactive governance.
Official MSIT announcement
The first prohibitions of the EU AI Act take effect. Social scoring, real-time remote biometric identification in public spaces, and other banned AI practices are now prohibited outright. Fines of up to 35 million euros or 7% of annual global turnover apply to violations of these prohibitions.
This is the most significant AI enforcement action globally, establishing hard legal restrictions and creating a precedent for other jurisdictions.
EU Press Release on AI Act Implementation
Brazil's AI regulation bill, focusing on transparency, accountability, and risk-based governance, advances through Congress. The proposed law includes requirements for disclosure of AI-generated content and liability frameworks for AI systems.
Brazil is positioning itself as a major voice in global AI governance, balancing innovation with consumer protection in Latin America.
Câmara dos Deputados updates
Australia publishes a voluntary AI safety standard as part of its broader AI regulatory strategy. The framework emphasizes responsible AI use and provides guidance for organizations deploying AI systems.
Australia adopts a lighter regulatory touch compared to the EU, focusing on industry self-regulation and standard-setting rather than hard legal restrictions.
Australian Government AI guidance
The EU AI Act enters into force across all Member States. Its requirements apply in phases: prohibitions take effect in early 2025, while compliance obligations for high-risk AI systems and their providers phase in through 2027.
This marks the world's first comprehensive AI legislation, establishing the legal framework that governments worldwide are using as a reference for their own regulations.
EUR-Lex Official Text
The European Commission establishes the AI Office as a dedicated body to implement and enforce the AI Act. The office coordinates across Member States and serves as the primary authority for AI regulation enforcement.
The creation of a dedicated regulatory body signals the EU's commitment to active, ongoing AI governance rather than one-time legislation.
EU Commission AI Office announcement
South Korea's legislature approves the AI Basic Act, establishing a comprehensive framework for AI development and deployment. The law addresses transparency, safety, and fairness in AI systems while promoting innovation.
South Korea becomes one of the few nations with standalone AI legislation, alongside the EU, signaling Asia's regulatory ambitions.
MSIT Press Release
Multiple US states, including Colorado and California, advance AI-related legislation covering transparency, algorithmic auditing, and consumer protection. A patchwork of state-level regulations emerges.
Without federal AI legislation, the US creates a fragmented regulatory landscape, contrasting with the unified EU approach.
State AI legislation tracker
The European Parliament formally adopts the AI Act following extensive negotiation and compromise. The law establishes a risk-based approach to AI regulation, with prohibited categories, high-risk systems, and transparency requirements.
Adoption signals the finalization of the world's first comprehensive AI legislation and sets the stage for implementation and enforcement.
European Parliament AI Act vote
The G7 nations release the Hiroshima AI Process Code of Conduct, a voluntary framework for AI governance developed during Japan's presidency. Major AI companies sign on to commitments around transparency and safety.
The code represents international consensus on AI governance principles, though critics note its voluntary nature limits enforceability.
G7 Hiroshima AI Code announcement
President Biden issues an Executive Order on AI Safety and Standards, directing federal agencies to establish AI safety standards, conduct audits, and develop best practices. The order focuses on high-impact AI systems and critical infrastructure.
The Executive Order represents the most significant US federal action on AI governance to date, though it lacks the legal force of legislation.
White House Executive Order
The UK hosts the first international AI Safety Summit at Bletchley Park, bringing together government leaders, AI researchers, and industry experts from 28 countries. Discussions focus on frontier AI safety and global governance approaches.
The summit affirms AI safety as a global priority and establishes the UK as a key voice in international AI governance discussions.
UK AI Safety Summit outcomes
The National Institute of Standards and Technology (NIST) releases the AI Risk Management Framework, a voluntary guideline for managing AI risks across development, deployment, and monitoring. The framework provides practical standards without legal mandates.
The NIST framework becomes a widely-adopted industry standard and resource, influencing both private sector and regulatory approaches globally.
NIST AI RMF official release
After reaching a settlement on data protection concerns, Italy lifts its ban on ChatGPT. OpenAI implements age verification and data processing commitments to comply with Italian and EU data protection standards.
The ban resolution demonstrates how individual nations can pressure major AI providers to improve compliance, even without comprehensive legislation.
Italian Privacy Authority statement
Italy's Data Protection Authority temporarily bans ChatGPT due to concerns about data collection, age verification, and GDPR compliance. OpenAI is given 30 days to comply or face fines of up to 20 million euros or 4% of annual global turnover.
Italy's ban becomes a significant moment, highlighting privacy concerns with AI systems and demonstrating that individual countries can enforce restrictions absent EU-wide AI legislation.
Garante privacy authority decree
India's government initially signals a preference for AI innovation over heavy regulation, resisting calls for immediate comprehensive legislation. This reflects India's focus on AI as an economic opportunity for a developing nation.
India's approach contrasts with EU and other jurisdictions, emphasizing growth while committing to monitoring and advisory frameworks rather than legal mandates.
Indian Ministry statements
China's regulations on generative AI services become effective, requiring content security assessments, algorithm transparency, and state security reviews. AI systems must align with "core socialist values" and cannot spread prohibited content.
China establishes strict state-based governance of AI, reflecting different regulatory philosophy emphasizing state control and content filtering versus the EU's rights-based approach.
China Cyberspace Administration regulations
China's regulations on deepfake and synthetic media (deep synthesis) take effect, requiring content labels and provider registration. The rules target deepfakes, generated audio, and other synthetic media that could spread false information.
China becomes an early mover in regulating specific AI applications, particularly those affecting information integrity and state stability.
China Cyberspace Administration notice
UNESCO's member states adopt the Recommendation on the Ethics of Artificial Intelligence, establishing international norms for responsible AI development and deployment. The recommendation emphasizes human rights, transparency, and accountability in AI systems.
The UNESCO recommendation provides non-binding international guidance that influences national policy discussions and establishes shared ethical principles.
UNESCO AI Ethics Recommendation
The European Commission formally proposes the AI Act, the world's first comprehensive AI legislation. The proposal introduces a risk-based framework, prohibited categories, and high-risk AI system requirements, marking the beginning of the legislative process.
The proposal launches what will become the most comprehensive AI regulation globally and signals the EU's commitment to a rights-based approach to AI governance.
European Commission AI Act proposal
Canada's government introduces the Artificial Intelligence and Data Act (AIDA), proposed legislation establishing a risk-based framework for AI governance. The law would require impact assessments and transparency for high-risk AI systems.
Canada positions itself as an early adopter of comprehensive AI legislation, though passage remains pending as of 2025.
AIDA bill information
This page documents publicly reported regulatory actions for informational and educational purposes. All descriptions are based on published news reporting and official government communications. This is not legal advice.
Some of the content on this page was created with the assistance of AI tools.