Global Legal Landscape of AI Regulation

An overview of key regulatory developments

The interdisciplinary nature of artificial intelligence (AI) has created a complex legal landscape. Across the world, AI regulation is taking shape through strategic vision papers, regulatory guidelines, binding laws, and governance models, with each jurisdiction adapting its legal framework at its own pace and according to its own priorities. Below is an overview of key regulatory developments in selected countries:

United States
AI regulation in the U.S. is multi-layered. At the federal level, Executive Orders 13859, 13960, and 14110 (“Safe, Secure, and Trustworthy AI”) serve as binding directives. These are supported by legislation and frameworks such as the National AI Initiative Act (2020), the AI Training Act, the NIST AI Risk Management Framework 1.0, the Blueprint for an AI Bill of Rights, the Algorithmic Accountability Act, and the AI Research, Innovation, and Accountability Act (2024).
At the state level, Connecticut, California, Utah, and Texas have enacted binding laws on transparency in automated decision-making, AI use in public services, and oversight of automated processes. Tennessee has introduced legislation protecting artists against the unauthorized use of their voice and likeness.

Germany
Germany’s Federal Ethics Commission published ethical principles for autonomous vehicles in 2017, followed by a National AI Strategy in 2018. Amendments to the Road Traffic Act in 2020 and 2021 introduced legal standards for software safety and data processing in autonomous systems. The 2021 Data Strategy aligned AI governance with the EU’s GDPR.

France
France initiated AI policy with CNIL’s 2017 Ethics Report and rolled out its National AI Strategy in two phases (2018 and 2021). CNIL’s AI Department expanded its authority under a 2023 Action Plan. The government also established INESIA (National Institute for AI Security and Development). The 2024 publication “AI: Our Goals for France” prioritizes public sector applications.

Singapore
Following the PDPA (2012), Singapore introduced the Cybersecurity Act and Digital Government Blueprint in 2018. In 2019, it launched the Model AI Governance Framework and the VERITAS initiative, which combine ethical governance with technical compliance. In 2023, the AI Verify Foundation was established to embed accountability and auditability into software development.

Australia
Australia revised its 2019 AI Ethics Principles in 2024. Its 2023 “Safe and Responsible AI” policy, together with practical guides on privacy, public sector use, voluntary standards, and guidance for non-lawyers, provides a detailed regulatory framework. Design-stage principles such as data protection, explainability, and user consent are aligned with the Privacy Act 1988.

United Arab Emirates
In 2017, the UAE became the first country to establish a Ministry of AI. Its AI regulation framework includes the National AI Strategy 2031, updated Data Protection Laws, the AI Charter (2024), and the AI and Advanced Technology Council. In 2023, the DIFC introduced the first binding regulatory framework covering AI systems under data protection law.

AI Governance: Law, Policy, and Ethics
These examples demonstrate that countries are shaping their AI responses through a blend of legal mandates and ethical principles. Core values like transparency, accountability, data privacy, algorithmic oversight, human dignity, and equitable access are no longer aspirational—they are now embedded in enforceable legislation.

At CFECERT, we continue to offer professional training and certification in AI governance to help institutions align with global standards and embed sustainable excellence into their operations.

For more information, contact us at: sales@cfecert.co.uk
