This article examines the differences in AI regulations introduced around the world.

In the previous article, “Harmonizing Data and AI Governance: To Do or Not To Do,” I discussed five key factors that influence the decision on how to integrate governance frameworks for data, AI, and risk management. In this article, I will explore the challenges posed by AI-related regulations introduced around the world:

  • Variations in AI definitions
  • Different approaches to defining AI-related requirements
  • Diverse legal statuses

The key takeaway: AI legislation across the globe is not aligned.

Figure 1 illustrates the scope of legislation from various regions, based on information initially gathered from the White & Case AI Watch.

Figure 1: The scope of legislation taken into consideration.

The scope covers five world regions, nine countries, and the European Union as a single entity. Let’s dive into each challenge one by one.

Variations in AI Definitions

There is no single, globally accepted definition of “artificial intelligence” (AI).

Figure 2 demonstrates the status of the AI definitions in different regions and countries.

Figure 2: Different Approaches to Defining AI.

The status of AI definitions across different countries can be grouped into three categories: formal single definitions, formal multiple definitions, and no formal definitions.

Formal Single Definition

The European Union and Canada both adopt formal single definitions. The European Union’s Artificial Intelligence Act offers a thorough definition, aiming to ensure consistency across all member states.

Likewise, Canada has established a formal definition through the Artificial Intelligence and Data Act (AIDA), which outlines AI regulation at the national level, providing clear guidance for both AI developers and users.

Formal Multiple Definitions

In contrast, the United States employs multiple formal definitions for AI. These definitions can differ between federal guidelines and state-specific regulations, reflecting the varied landscape of AI applications across different industries and regions. This approach offers flexibility, allowing definitions to be tailored to the needs of specific sectors or areas, but it may also result in inconsistencies in national AI regulation. The presence of multiple formal definitions underscores the need for a more coordinated approach to AI governance in the US.

No Formalized Definitions

Finally, several countries lack formalized definitions of AI. Japan, Australia, the United Kingdom, Saudi Arabia, Brazil, China, and Singapore fall into this category. Instead of providing specific legal definitions, these nations often rely on guidelines or ethical frameworks that emphasize responsible AI practices and sector-specific principles. While the absence of a formal definition allows for a more flexible approach to AI regulation, it can also lead to ambiguities in legal interpretation and enforcement.

These varying approaches highlight the different maturity levels and regulatory priorities of countries when it comes to addressing AI technologies, reflecting the diverse and evolving global landscape of AI governance.
In a future article, I will explore the definitions of AI systems in more detail.

Different Approaches to Defining AI-Related Requirements

The regulation of AI technologies around the world follows various approaches, shaped by regional priorities and societal values: risk-based, principle-based, and mixed, as illustrated in Figure 3.

Figure 3: Different approaches to defining AI-related requirements.

Risk-Based Approaches

Risk-based regulation classifies AI applications according to their potential impact or risk level, with specific compliance requirements for each category. The European Union’s AI Act organizes AI systems into four risk categories: minimal, limited, high, and unacceptable risk, which dictate the necessary controls and safeguards.

Similarly, China has introduced Interim Measures for Generative AI Services, categorizing AI usage based on national security, society, and public safety risks.

Principle-Based Approaches

Principle-based regulation is grounded in broad ethical principles, offering flexibility while ensuring oversight.

The United Kingdom introduced “A pro-innovation approach to AI regulation,” which emphasizes ethics and responsible innovation, steering clear of prescriptive laws to support industry-specific applications.

Similarly, Saudi Arabia bases its AI standards on ethical principles and released the draft AI Ethics Principles, dated September 2023 (the “AI Principles”).

In Singapore, “The AI Regulations” offer guidance on responsible AI use, complemented by sector-specific rules.

Mixed Approaches

Some countries adopt mixed approaches that combine risk assessment and ethical principles. In the United States, various state and federal initiatives use a blend of risk assessments, existing federal regulations, and ethical standards.

Canada’s proposed Artificial Intelligence and Data Act (AIDA) emphasizes responsible AI practices with elements of ethical guidelines, though it is not explicitly risk-based.

Japan primarily relies on soft law to encourage responsible AI, while considering future legal measures, as outlined in “AI Guidelines for Business Version 1.0.”

Australia focuses on voluntary AI Ethics Principles while assessing the need for formal legislation.

In an upcoming article, I will dive into the specifics of each approach.

Various Legal Statuses

AI regulations worldwide vary in legislative status and can be categorized into three main types: legally binding, voluntary/soft law, and sector-specific regulations, as illustrated in Figure 4.

Figure 4: Various legal statuses of AI regulations.

Legally-Binding Regulations

Legally binding regulations include frameworks like the European Union’s Artificial Intelligence Act, which aims to classify AI systems based on risk levels and ensure comprehensive oversight.

Similarly, China’s Interim Measures for the Management of Generative Artificial Intelligence Services provide enforceable rules for generative AI applications.

Voluntary/Soft Regulations

In contrast, voluntary or soft law approaches include guidelines and ethical frameworks that lack the force of law but promote responsible AI practices.

For instance, Australia’s AI Ethics Principles encourage ethical AI development, and Japan’s AI Governance Guidelines aim to establish an ethical baseline for AI innovation.

Sector-Specific Regulations

Sector-specific regulations focus on particular industries or applications. The United States employs a variety of sector-specific guidance documents developed by federal agencies to regulate AI use in specific contexts.

Similarly, Singapore’s Model AI Governance Framework combines ethical AI practices with regulations tailored to different sectors.

This diversity of regulatory approaches reflects different regional priorities—from stringent oversight aimed at mitigating risks to encouraging innovation while maintaining ethical practices—all contributing to the evolving global landscape of AI governance.

Conclusion

The diverse approaches to AI regulations across the globe present significant challenges for organizations seeking to comply with multiple frameworks. With variations in AI definitions, regulatory strategies, and legal statuses, companies must navigate a complex landscape to ensure they meet the unique requirements of each region. Multinational organizations, in particular, face the burden of adapting their AI systems and practices to comply with different regulatory environments, which can increase operational costs and create compliance risks. To succeed, businesses must stay informed about evolving AI laws and develop flexible governance strategies that can adapt to varying global standards.