This article discusses the alignment of data and AI practices.

This is the third article in a series where I share my impressions, key insights, and the top trending topics from the #DGIQ and #EDW2025 conference. This series offers a general summary and does not focus on any specific presentation from the conference.

In this article, I will focus on four key topics in aligning Data and AI Management that were discussed at the conference:

  • Tailoring AI Industry Frameworks
  • Aligning Data and AI Management and Governance
  • Making Data Work for AI
  • Building Organizational Support for Data and AI Governance

Let's start with the first topic.

Tailoring AI Industry Frameworks

Artificial Intelligence (AI) has rapidly moved from a promising innovation to a strategic imperative for organizations. However, the implementation of AI governance is still hindered by the lack of mature and consistent industry frameworks.

A framework is best understood as a structured set of principles, models, and methods that an organization uses to develop and manage its business capabilities. In the context of AI, it should help scope, design, and implement AI management practices, as well as measure their performance and maturity.

An analysis of frameworks developed by leading authorities—including NIST, Microsoft, ISO, Gartner, IBM, Harvard, and Accenture—shows that most of them include three essential elements:

  1. AI strategy and implementation: Nearly all frameworks emphasize the importance of a clear AI strategy that aligns business objectives with ethical principles. This includes defining responsible AI principles and creating oversight mechanisms.
  2. Maturity models: Several frameworks, particularly those developed by Microsoft, Gartner, and Accenture, introduce models to evaluate an organization’s progress in areas such as governance, technical development, and risk management; a minimal scoring sketch follows this list.
  3. Implementation guidance: While the structure varies, most frameworks describe how to manage AI across its lifecycle—from planning and development to deployment, monitoring, and decommissioning. These typically include elements of governance, ethical compliance, and risk mitigation.
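
Although none of the frameworks above prescribes a specific scoring method, the idea of a maturity assessment can be made concrete with a small sketch. The three dimensions and the 1-to-5 scale below are illustrative assumptions, not taken from NIST, Microsoft, Gartner, or any other framework named here.

```python
from statistics import mean

# Illustrative dimensions and a 1 (initial) to 5 (optimized) scale; real
# maturity models define their own dimensions and level descriptions.
assessment = {
    "governance": 2,
    "technical_development": 3,
    "risk_management": 1,
}

def maturity_summary(scores: dict[str, int]) -> str:
    """Report the average level and the weakest dimension."""
    weakest = min(scores, key=scores.get)
    return f"average level {mean(scores.values()):.1f}; weakest area: {weakest}"

print(maturity_summary(assessment))
# -> average level 2.0; weakest area: risk_management
```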

Despite their different structures and terminologies, these frameworks share similar operational goals: establishing organizational AI governance, identifying and mitigating risks, integrating ethics into AI design, and monitoring AI performance over time.

At the same time, organizations face several complications. Frameworks often overlap yet differ in terminology and scope, making them difficult to integrate. None offers complete coverage of all regulatory and operational needs, leaving organizations to fill in the gaps themselves. The complexity increases for companies operating in multiple jurisdictions, where frameworks may prioritize different aspects of governance, ethics, or risk.

Aligning Data and AI Management and Governance

Perspectives on the integration of data and AI management practices vary widely. Some organizations maintain separate structures for each, while others place AI management under the accountability of the Chief Data Officer. Despite these differing approaches, the need to align data and AI practices has become one of the most critical challenges in the field.

Several factors influence how integration is approached. These include the type and scope of AI-related regulations, an organization’s business strategy, internal culture, and the availability of resources. Regardless of the chosen structure, successful alignment requires the coordination of multiple frameworks—specifically for metadata, data, AI, and their associated risk management processes.

During the conference, several key topics related to this alignment challenge were explored.

Toward Integrated Governance Models

One of the most important shifts taking place is the integration of data and AI governance into a unified framework. Historically, data governance and AI oversight have operated as separate domains—often with different owners, tools, and objectives. This separation creates inconsistencies and limits the ability to manage risk holistically. A more effective approach involves aligning governance roles, policies, and processes across the full data-to-AI value chain. Cross-functional collaboration, shared accountability structures, and continuous monitoring are all essential features of this emerging governance model—one that is built not only for control, but for adaptability and trust at scale.

The relationships among these frameworks are dynamic and interdependent. Metadata management enables data management, and together, they form the foundation for AI capabilities. At the same time, AI tools are increasingly used to enhance metadata and data management processes themselves—creating a feedback loop that resembles a natural “water cycle.” Risk management applies across all three areas, as each introduces distinct risks that must be actively identified, monitored, and mitigated.

Governance as a Business Enabler

As organizations move forward with AI adoption, many find that their existing governance frameworks are not designed to keep pace. Traditional approaches tend to emphasize formal policies and static controls, but these are often too slow or narrow to guide real-world AI deployments. A new, more pragmatic mindset is emerging—focused on “good enough” governance. This concept encourages organizations to make governance decisions based on business risk, resource constraints, and expected outcomes. Rather than perfecting documentation or frameworks, the goal is to create governance that is actionable, accountable, and visibly supports the organization’s strategic objectives. When treated this way, governance becomes a driver of value, not a barrier to innovation.

Ethical Risk and Trust

With AI increasingly embedded in business operations and decision-making, ethical concerns have moved to the forefront. Governance can no longer be limited to data quality or compliance—it must actively address questions of fairness, bias, transparency, and accountability. These risks are more complex and diffuse than traditional operational risks and require governance structures capable of identifying and mitigating ethical blind spots across the AI lifecycle. For example, selecting inappropriate proxy variables, ignoring underrepresented populations, or failing to explain automated decisions can all erode trust and lead to harm. As a result, organizations are embedding ethical checkpoints into development workflows, conducting equity reviews as part of model validation, and formalizing policies for transparency and responsible AI use.
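
The sessions discussed these checkpoints at the level of principles, but one common way to operationalize an equity review is a disparate-impact screen during model validation. Below is a minimal sketch computing the widely used four-fifths (80%) selection-rate ratio; the group data and the flagging logic are illustrative assumptions, not a method presented at the conference.

```python
# A minimal disparate-impact screen, assuming binary decisions and a single
# protected attribute; real equity reviews cover many more metrics and checks.
def selection_rate(decisions: list[int]) -> float:
    """Share of favorable outcomes (1 = favorable, 0 = not)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Lower selection rate divided by the higher one (1.0 = parity).

    Assumes at least one group received a favorable decision.
    """
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical validation outcomes for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75.0% selection rate
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% selection rate

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")   # -> 0.50
if ratio < 0.8:   # the common "four-fifths rule" threshold
    print("flag for equity review before deployment")
```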

Making Data Work for AI: Characteristics, Challenges, and Capabilities

Unique Characteristics of Data for AI

AI systems require a different kind of data readiness than traditional analytics. While structured data remains important, much of the value lies in unstructured sources—such as text, images, or audio—that lack standardized formats. These data types carry rich contextual information but demand more advanced processing and governance. To be usable for AI, data must be accessible, well-documented, and enriched with metadata to preserve meaning and ensure traceability. Without this foundation, organizations risk introducing bias, error, and opacity into AI systems.
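
As a rough illustration of that foundation, the sketch below attaches basic descriptive metadata to a dataset and flags the gaps that would block responsible AI use. The schema fields (source, license, lineage, and so on) are assumptions chosen for the example, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetRecord:
    """Minimal metadata to capture before a dataset feeds an AI system."""
    name: str
    source: str                   # where the data originated
    license: str                  # usage rights for model training
    modality: str                 # e.g. "text", "image", "audio"
    collected_on: date
    lineage: list[str] = field(default_factory=list)  # upstream transformations
    documented: bool = False      # has a human-readable datasheet?

    def readiness_gaps(self) -> list[str]:
        """List the gaps that block responsible AI use, if any."""
        gaps = []
        if not self.documented:
            gaps.append("missing documentation")
        if not self.lineage:
            gaps.append("no lineage recorded")
        return gaps

record = DatasetRecord(
    name="support_tickets_2024",
    source="crm_export",
    license="internal-use-only",
    modality="text",
    collected_on=date(2024, 12, 31),
)
print(record.readiness_gaps())  # -> ['missing documentation', 'no lineage recorded']
```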

Governance and Engineering Challenges

Managing data for AI at scale introduces significant operational strain. Many organizations still rely on outdated pipelines, fragmented tooling, and manual data processes that are unfit for the demands of AI. Engineering teams are often overwhelmed by constant changes in data architecture and integration technology. At the same time, quality gaps, lack of lineage, and siloed storage environments hinder the trustworthiness and transparency required for AI. Ensuring explainability, compliance, and ethical sourcing requires a proactive approach that moves beyond traditional data governance practices.

The Interplay of Metadata, Data, and AI Capabilities

As discussed in the previous section, effective AI depends on the alignment of several interrelated frameworks: metadata, data, and AI governance. Metadata management enables data visibility and lineage; data management ensures quality and accessibility; and both are foundational for building reliable AI systems. In turn, AI is increasingly used to enhance metadata tagging, automate quality checks, and improve classification. This creates a continuous feedback loop—similar to a natural cycle—where each layer enables and strengthens the others. Risk management must be embedded throughout, as each capability introduces its own exposure points. Governing data for AI is no longer a technical afterthought—it is a strategic discipline that determines whether AI will scale with trust or fail under risk.
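
A minimal sketch of that feedback loop follows, with deliberately simple rule-based stand-ins for each capability; in a real pipeline the classification step would be a trained model and the catalog a proper metadata platform.

```python
# Rule-based stand-ins for each layer of the metadata -> data -> AI cycle.
catalog = {"doc-1": {"tags": [], "owner": "finance"}}          # metadata layer
documents = {"doc-1": "Quarterly revenue forecast and risks"}  # data layer

def quality_gate(doc_id: str) -> bool:
    """Data management: only owned, non-empty records reach the AI stage."""
    return bool(documents.get(doc_id)) and "owner" in catalog.get(doc_id, {})

def classify(text: str) -> str:
    """AI capability (stand-in): a trained model would do this in practice."""
    return "financial" if "revenue" in text.lower() else "general"

# The loop: metadata gates data, data feeds AI, AI enriches metadata.
for doc_id, text in documents.items():
    if quality_gate(doc_id):
        catalog[doc_id]["tags"].append(classify(text))  # feedback into metadata

print(catalog)   # -> {'doc-1': {'tags': ['financial'], 'owner': 'finance'}}
```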

Building Organizational Support for Data and AI Governance

From Participation to Accountability

As AI initiatives mature, successful implementation increasingly depends on more than just technical readiness—it requires the active involvement of business stakeholders. However, informal or case-by-case approaches often fail to scale. A common pattern emerges: while many stakeholders are willing to participate in discussions, the absence of clearly defined roles and decision rights leads to fragmented efforts, duplicated work, and operational inefficiencies. A structured governance model, anchored in formal accountability across business domains, helps align data and AI efforts with enterprise goals. This includes appointing data and AI stewards, custodians, and domain leaders who represent both technical and business interests.
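
One way to make that formal accountability tangible is a machine-readable map from business domains to named roles, so decision rights are explicit rather than case-by-case. The domains, role names, and routing rule below are hypothetical, sketched only to show the idea.

```python
# Hypothetical accountability map: each business domain names the roles
# responsible for its data and AI assets, making decision rights explicit.
governance_roles = {
    "customer": {
        "data_steward": "a.jones",        # day-to-day quality decisions
        "data_custodian": "it-platform",  # technical safeguarding
        "domain_leader": "b.smith",       # business accountability
    },
    "finance": {
        "data_steward": "c.lee",
        "data_custodian": "it-platform",
        "domain_leader": "d.patel",
    },
}

def accountable_for(domain: str, decision: str) -> str:
    """Route a decision to the accountable role (stewards by default)."""
    roles = governance_roles[domain]
    return roles["domain_leader"] if decision == "strategic" else roles["data_steward"]

print(accountable_for("customer", "strategic"))   # -> b.smith
```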

Engaging Business Stakeholders through Relevance and Value

Business stakeholders are more likely to support governance when they clearly see how it connects to their goals and pain points. Effective engagement begins with understanding their needs, constraints, and success metrics. Rather than presenting governance as a compliance burden, framing it as a way to reduce risk, accelerate decision-making, or improve customer outcomes creates a shared sense of purpose. Techniques such as stakeholder mapping, value hierarchy visualization, and framing governance benefits in terms of loss avoidance (e.g., missed revenue, reputational harm) can shift perceptions and build alignment.

Building Data and AI Literacy as a Foundation for Engagement

Stakeholder involvement cannot be sustained without foundational literacy. Many of the challenges organizations face—resistance to change, inconsistent data practices, or distrust in AI—stem from limited understanding. Building confidence through tailored education, hands-on use cases, and showcasing “quick wins” helps normalize governance as a valuable part of business operations. Programs that focus on empathy, resilience, adaptability, and continuous learning contribute to a culture where all employees—not just data professionals—feel empowered to contribute to responsible AI use.

Conclusion and Recommendations: Making Data and AI Governance Work

Effective data and AI governance is no longer optional—it’s foundational to delivering trusted, scalable, and ethical AI outcomes. The insights shared across the conference sessions point to one overarching message: governance must evolve from fragmented policies to integrated, outcome-driven practices that involve the whole organization.

To move forward, organizations should:

  • Align frameworks: Integrate metadata, data, AI, and risk management into a unified governance structure to reduce duplication and improve clarity.
  • Start with “good enough”: Prioritize practical governance that delivers business value over theoretical perfection.
  • Design for data+AI: Recognize that AI depends on high-quality, contextual, and traceable data—and that AI itself can enhance data practices.
  • Embed ethics: Address bias, transparency, and accountability throughout the AI lifecycle.
  • Engage business stakeholders: Link governance efforts directly to business goals, and frame value in terms of risk reduction and opportunity gain.
  • Build literacy: Foster organization-wide understanding of data and AI to support adoption and trust.
  • Scale through structure: Use federated models and clearly defined roles to standardize execution across teams.

By approaching governance as a strategic enabler rather than a compliance exercise, organizations can unlock the full value of data and AI.