This article discusses how to tailor AI industry frameworks to an organization’s needs.
Recently, I conducted a LinkedIn poll to identify the biggest challenges in implementing an AI governance framework. The results are presented in Figure 1.

Figure 1: Most significant challenges in implementing AI governance.
While the most significant challenges cited were insufficient expertise and resources, and low organizational prioritization, I believe that the immaturity of industry frameworks is a root cause of the limited expertise across the field.
This article aims to:
- Discuss the potential impact of AI regulations on industry frameworks
- Define the key content requirements for an AI governance framework
- Review and compare existing industry frameworks for AI
- Recommend a high-level approach for tailoring these frameworks to organizational needs
Potential Impact of AI Regulations on AI Frameworks
For this article, regulation is defined as “a law, rule, or other directive prescribed by authority,” while a framework refers to documents developed by industry bodies. A framework can be described as “a structured set of principles, models, and methods that an organization adopts to develop its business capabilities and transform them into business functions to achieve its objectives.”
For a comprehensive review of AI legislation, readers are encouraged to consult my new book, Aligning Data and AI Governance: A Step-by-Step Guide to Integrating Governance Frameworks for Data and AI Management, available on Amazon. A summary of that review is provided below.

Book “Aligning Data and AI Governance” 👉 https://www.amazon.com/dp/B0F3JTCW3M
A broad array of authorities, including international organizations and national regulatory bodies, have issued legislative requirements for AI systems. These regulations differ significantly across several key dimensions:
- Scope: Regulatory bodies may issue either a single, comprehensive regulation or multiple specialized instruments that address specific aspects of AI governance.
- Legal status: Some regulations are legally binding and enforceable, while others serve as voluntary guidelines aimed at shaping best practices.
- Compliance approach: Regulations may follow a risk-based, principle-based, or hybrid compliance model, each of which sets different expectations for how organizations must govern AI development and deployment. Figure 2 shows the distribution of regulations by compliance approach.

Figure 2: The distribution of AI regulations by compliance approach.
The wide variation in regulatory approaches presents substantial compliance challenges for organizations:
- Navigating legal obligations: Companies often find themselves subject to multiple AI-related regulations, each varying in legal status, scope, terminology, and compliance structure. Determining which requirements take precedence can be complex.
Some laws adopt a risk-based approach, categorizing AI systems by their potential impact on individuals, organizations, or broader ecosystems, and prescribing specific requirements for each category. Others focus on overarching principles for responsible AI management. These differing models require organizations to adapt and customize their internal AI governance frameworks carefully.
- Managing cross-border compliance: Global organizations must often comply with conflicting or overlapping AI regulations across jurisdictions. Depending on their operational footprint, this may involve aligning internal practices with a patchwork of regional rules, creating added complexity and administrative burden.
Requirements for the Content of an AI Framework
To meet legislative expectations, organizations need a governance framework that can effectively respond to the challenges discussed above by:
- Harmonizing the definition of an AI system across jurisdictions to ensure consistent understanding and application of regulatory requirements.
- Establishing a unified set of AI governance principles and developing a clear approach for translating these principles into actionable operational practices and controls.
- Customizing existing risk management processes to address AI-specific risks, including identification, evaluation, mitigation, and continuous monitoring in line with regulatory mandates.
- Outlining guidance for the full AI management lifecycle, including how to scope, design, implement, assess, and monitor the maturity and effectiveness of AI governance capabilities.
- Developing an AI capability map that supports the implementation and oversight of the AI data lifecycle across the organization (a minimal sketch of such a map follows this list).
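To make the last requirement concrete, a capability map can be expressed as a simple mapping from governance principles to the business capabilities that operationalize them. The minimal Python sketch below is illustrative only; the principle and capability names are assumptions, not terms drawn from any specific framework.

```python
# A minimal sketch of an AI capability map: governance principles mapped to
# the business capabilities that operationalize them. Principle and
# capability names are illustrative, not prescriptions from any framework.
CAPABILITY_MAP: dict[str, list[str]] = {
    "transparency": [
        "metadata management",
        "data and application architecture",
        "explainable AI practices",
    ],
    "accountability": [
        "governance structure",
        "model inventory",
        "audit logging",
    ],
    "risk management": [
        "AI risk identification",
        "risk evaluation and mitigation",
        "continuous monitoring",
    ],
}

def capabilities_for(principles: list[str]) -> set[str]:
    """Collect the capabilities needed to operationalize a set of principles."""
    return {cap for p in principles for cap in CAPABILITY_MAP.get(p, [])}

print(sorted(capabilities_for(["transparency", "accountability"])))
```

A structure like this also works in reverse: given a capability, an organization can trace which regulatory principles depend on it when assessing the impact of a gap.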
Overview of Existing AI Frameworks
In my recent book, Aligning Data and AI Governance, I examined several publicly available AI frameworks developed by organizations such as NIST, Microsoft, ISO, IBM, and Gartner.
One of the key findings is that, collectively, these frameworks address three fundamental areas:
- The development and execution of an AI strategy
- Measurement of AI maturity
- Definition and implementation of AI capabilities
However, each individual framework tends to be fragmented, typically covering only a subset of these areas.
Below is a brief explanation and illustration of each area.
AI Strategy Development
Various industry frameworks offer guidance on developing an AI strategy, but they differ in several key aspects:
- The factors considered critical for shaping AI strategy
- The recommended steps in the strategy development process
- The proposed structure and content of the AI strategy document
Table 1 highlights how three leading organizations, Microsoft, Gartner, and IBM, approach AI strategy development, illustrating the variations in their guidance.

Table 1: Approaches to developing an AI strategy.
Even a brief analysis reveals that industry authorities take notably different approaches to AI strategy development. The recommended steps vary in number, scope, and sequence, reflecting diverse perspectives on how organizations should structure their strategic efforts.
Maturity Measurement
Several industry authorities have introduced guidelines and tools to help organizations assess the maturity of their AI practices. While many of these assessments are publicly accessible, they often do not provide complete transparency into the underlying maturity models. These models vary in both structure and the definition of maturity levels.
Table 2 highlights the differences in maturity model dimensions between the frameworks developed by Microsoft and Gartner.

Table 2: The dimensions of AI maturity models.
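Whatever dimensions a model uses, the assessment mechanics are usually similar: score each dimension against defined levels, then aggregate. The sketch below illustrates this with an assumed five-level scale, hypothetical dimensions, and equal weighting; it is not a reproduction of the Microsoft or Gartner models.

```python
# A minimal sketch of a maturity self-assessment: score each dimension on
# a 1-5 scale and aggregate. The dimensions, level names, and equal
# weighting are illustrative assumptions, not the actual vendor models.
from statistics import mean

LEVELS = {1: "initial", 2: "repeatable", 3: "defined", 4: "managed", 5: "optimizing"}

scores = {  # hypothetical assessment results per dimension
    "strategy": 3,
    "data readiness": 2,
    "talent and skills": 2,
    "governance": 1,
}

overall = mean(scores.values())  # equal weighting across dimensions
print(f"Overall maturity: {overall:.1f} ({LEVELS[int(overall)]})")
```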
AI Capabilities and Their Implementation
Industry frameworks take varied approaches to AI governance—ranging from risk-based to principle-based, control-based, or a hybrid of these methods. These foundational differences make direct comparisons difficult. As a result, this article focuses its analysis on a few key components.
For instance, the Artificial Intelligence Risk Management Framework (AI RMF 1.0) developed by NIST emphasizes risk management related to AI system deployment, while also introducing several foundational principles for responsible AI use. The ISO/IEC 42001:2023 (ISO 42001) standard takes a control-based approach, defining structured management system requirements, with risk management incorporated as one of the recommended controls. Meanwhile, Microsoft’s Responsible AI Standard is principle-driven, presenting six core principles supported by organizational objectives that guide responsible AI design and implementation.
Despite their shared goals, the overlapping yet divergent nature of these frameworks presents several challenges for organizations:
- Fragmentation and inconsistency: Organizations must navigate frameworks that differ in terminology, scope, and structure, complicating integration efforts.
- Incomplete coverage: No single framework comprehensively addresses all the principles and operational needs required by various regulatory authorities, leaving organizations to bridge gaps on their own.
- Alignment complexity: Companies operating in multiple jurisdictions face the added burden of reconciling frameworks that may prioritize different aspects of AI governance, risk mitigation, and ethical oversight—complicating both implementation and compliance reporting.
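One practical response to this fragmentation is an internal crosswalk that maps a common set of control areas to each framework's top-level constructs. In the sketch below, the NIST AI RMF core functions (Govern, Map, Measure, Manage) and the named Microsoft principles are as published, while the control areas, the ISO/IEC 42001 entries, and the mapping itself are simplified assumptions rather than an authoritative alignment.

```python
# A sketch of an internal crosswalk from common control areas to each
# framework's top-level constructs. The mapping shown is a simplified
# assumption, not an authoritative alignment of these frameworks.
CROSSWALK: dict[str, dict[str, list[str]]] = {
    "risk management": {
        "NIST AI RMF 1.0": ["Map", "Measure", "Manage"],
        "ISO/IEC 42001": ["risk assessment and treatment requirements"],
        "Microsoft Responsible AI Standard": ["Reliability and safety"],
    },
    "oversight and accountability": {
        "NIST AI RMF 1.0": ["Govern"],
        "ISO/IEC 42001": ["management system clauses and Annex A controls"],
        "Microsoft Responsible AI Standard": ["Accountability", "Transparency"],
    },
}

for area, refs in CROSSWALK.items():
    print(area)
    for framework, items in refs.items():
        print(f"  {framework}: {', '.join(items)}")
```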
A High-Level Approach to Tailoring Frameworks to Organizational Needs
The challenges discussed above—stemming from both regulatory diversity and the fragmentation of industry frameworks—underscore the necessity for organizations to develop and implement their own internal AI governance frameworks. My book, Aligning Data and AI Governance, provides an in-depth guide to this process. Below is a high-level summary of the key steps organizations should consider:
Step 1: Identify AI use cases
AI use cases may be tied to specific business capabilities, IT tools, or operational needs. The nature of each use case determines the relevant regulatory requirements the organization must meet.
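In practice, this step typically yields a use-case register. The sketch below shows one possible minimal record structure; all field names and example entries are hypothetical.

```python
# A minimal sketch of an AI use-case register entry. Field names are
# illustrative; real registers are usually richer (owners, vendors,
# data categories, affected individuals, and so on).
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    business_capability: str            # the capability the use case supports
    jurisdictions: list[str]            # where the system is built or deployed
    regulations: list[str] = field(default_factory=list)  # filled in Step 2

register = [
    AIUseCase("resume screening", "talent acquisition", ["EU", "US"]),
    AIUseCase("support chat assistant", "customer service", ["EU"]),
]
print([uc.name for uc in register])
```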
Step 2: Analyze applicable legislative requirements
This step involves identifying all regulations relevant to the organization’s operations. As previously discussed, regulatory approaches vary—some are risk-based, others principle-based, and some combine both.
- For risk-based regulations, the organization must assess the risk level associated with each AI system. Different risk levels come with different compliance obligations.
- For principle-based regulations, the organization must determine how to operationalize high-level principles. This often involves identifying and developing the necessary business capabilities (e.g., data governance, compliance processes, AI model development). For instance, to meet a transparency requirement, capabilities such as metadata management, data and application architecture, and explainable AI practices are often essential.
- When dealing with hybrid regulatory models, mapping both risks and principles to the appropriate capabilities is the most effective approach; the sketch below illustrates both mappings.
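The fragment below illustrates the two Step 2 mappings in the same illustrative style as the register above. The tier names echo the EU AI Act's risk categories, but the obligations and capability lists are simplified placeholders (the transparency capabilities come from the example in the list), not legal guidance.

```python
# A sketch of the two Step 2 mappings: risk tier -> obligations for
# risk-based laws, and principle -> capabilities for principle-based laws.
# Tier names echo the EU AI Act's categories; the obligations listed are
# simplified placeholders, not legal guidance.
OBLIGATIONS_BY_TIER = {
    "unacceptable": ["prohibited: do not deploy"],
    "high": ["risk management system", "human oversight", "logging and traceability"],
    "limited": ["transparency notices to users"],
    "minimal": ["voluntary codes of conduct"],
}

PRINCIPLE_TO_CAPABILITIES = {
    # the transparency example from the list above
    "transparency": [
        "metadata management",
        "data and application architecture",
        "explainable AI practices",
    ],
}

def obligations_for(tier: str) -> list[str]:
    return OBLIGATIONS_BY_TIER.get(tier, ["unclassified: escalate for review"])

print(obligations_for("high"))
print(PRINCIPLE_TO_CAPABILITIES["transparency"])
```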
Step 3: Evaluate existing AI industry frameworks
Once the required business capabilities are defined, organizations should assess available industry frameworks to identify best practices that can inform the development of their internal framework.
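A lightweight way to run this evaluation is a coverage matrix: required capabilities on one axis, candidate frameworks on the other, with the gaps falling out directly. In the sketch below, the coverage sets are placeholders, not an actual assessment of the named frameworks.

```python
# A sketch of a framework gap analysis: check which required capabilities
# each candidate framework offers usable guidance for. The coverage values
# are placeholders, not an actual assessment of these frameworks.
required = ["AI strategy", "maturity measurement", "risk management", "capability map"]

coverage = {
    "NIST AI RMF 1.0": {"risk management"},
    "ISO/IEC 42001": {"risk management", "capability map"},
    "Gartner": {"AI strategy", "maturity measurement"},
}

for framework, covered in coverage.items():
    gaps = [cap for cap in required if cap not in covered]
    print(f"{framework}: gaps -> {gaps or 'none'}")
```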
Step 4: Tailor best practices to organizational needs
The organization should define its own vision, objectives, and structure for the framework. In Aligning Data and AI Governance, I outline three core goals for building an integrated data and AI governance framework:
- Establishing an enterprise-wide framework that includes an operating model and governance structure (e.g., governing bodies, self-managed groups, defined roles)
- Defining capability-specific governance structures, including policies, processes, roles, and technology requirements for each data or AI domain (e.g., data architecture governance)
- Ensuring coordination across different data management and AI capabilities to enable cohesive governance practices (these three layers are sketched below)
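These three goals can be read as three structural layers, which the sketch below expresses as simple types. All names and roles are illustrative, not a prescribed design.

```python
# A sketch of the three layers above expressed as simple types: an
# enterprise-wide operating model, capability-specific governance, and
# cross-capability coordination. All names are illustrative.
from dataclasses import dataclass

@dataclass
class GoverningBody:
    name: str       # e.g., an AI governance council
    mandate: str

@dataclass
class DomainGovernance:
    domain: str     # e.g., data architecture governance
    policies: list[str]
    roles: list[str]

@dataclass
class OperatingModel:
    bodies: list[GoverningBody]        # enterprise-wide layer
    domains: list[DomainGovernance]    # capability-specific layer
    coordination: list[str]            # cross-capability mechanisms

model = OperatingModel(
    bodies=[GoverningBody("AI governance council",
                          "approve AI policies and set risk appetite")],
    domains=[DomainGovernance("data architecture governance",
                              policies=["metadata policy"],
                              roles=["data architect"])],
    coordination=["joint data and AI governance review"],
)
print(len(model.bodies), len(model.domains))
```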
In an evolving regulatory landscape, organizations cannot rely solely on external frameworks to meet their AI governance needs. By tailoring industry best practices to their unique context, businesses can build robust, compliant, and effective AI governance frameworks that support both innovation and accountability.