In Part 1 of the series, I will focus on trending topics in AI management:

  • AI governance
  • Unlocking the value of GenAI
  • Managing AI risks
  • Combating AI poisoning
  • AI and human collaboration

AI Governance: Proposed Definition

The absence of a uniform definition for the term governance remains a significant challenge within the data management community. To provide clarity, I will refer to the definition offered by the Merriam-Webster Dictionary: governance is “the act or process of governing or overseeing the control and direction of something.”

In the context of AI, governance involves the following core responsibilities:

  • Exercising authority: Making high-level decisions that define the organization’s vision, values, and strategic direction for AI. This includes setting expectations, principles, and boundaries for how AI should be developed and used.
  • Overseeing control: Supervising how well AI-related activities align with established governance decisions. This includes monitoring compliance with legal, ethical, and organizational standards, evaluating the effectiveness of controls, and ensuring that AI initiatives support the organization’s broader goals.

The two core governance activities—exercising authority and overseeing control—manifest differently at the strategic, tactical, and operational levels, as shown in Figure 2.

Figure 2: AI governance tasks at different organizational levels.

Let’s briefly consider examples of AI governance tasks at different organizational levels.

Strategic Level

At the strategic level, the key focus is on setting direction and organizational commitment.

  • Exercising Authority: At the strategic level, executive leadership defines the organization’s vision and values related to AI. This includes setting ethical principles, establishing the acceptable level of AI-related risk, and approving the enterprise-wide AI strategy. These high-level decisions guide how AI is adopted and governed across the organization.
  • Overseeing Control: Oversight at this level involves monitoring whether AI initiatives align with the approved strategic direction.

Tactical Level

At the tactical level, the key focus is on translating strategy into governance policies and coordination mechanisms.

  • Exercising Authority: The tactical level involves department heads or functional managers who interpret strategic decisions into actionable priorities. They approve AI use case roadmaps, allocate resources, and endorse standards for AI model development, validation, and deployment to ensure consistency across the organization.
  • Overseeing Control: At this level, governance teams assess how effectively policies and standards are implemented.

Operational Level

The key focus at the operational level is day-to-day execution and compliance monitoring.

  • Exercising Authority: Operational managers are responsible for approving and applying detailed procedures for AI system usage, including model monitoring, documentation, and escalation protocols. They also define day-to-day roles and responsibilities for staff interacting with AI systems.
  • Overseeing Control: Oversight at the operational level ensures that AI systems function as intended within the approved parameters.

AI Governance Framework

A governance framework can be considered a structured set of principles, models, and methods that an organization adopts to develop its governance capabilities and functions.

Several conference presentations were devoted to establishing an AI governance framework, covering a range of topics.

AI governance framework: content and implementation

Industry guidelines, as well as the frameworks discussed at the conference, differ considerably in both content and implementation approach.

Let me summarize the key recommendations from several conference presentations. An effective AI governance framework should be grounded in a set of core principles: accountability, fairness and accessibility, reliability and safety, transparency and explainability, human-in-the-loop oversight, privacy and retention, and security. These principles serve as the ethical and operational foundation for responsible AI adoption.

A strategic approach to AI governance requires a comprehensive framework that addresses risk management, ensures regulatory compliance, and supports the ethical deployment of AI across its entire lifecycle. At the heart of this approach is effective data management—high-quality, reliable, and well-governed data is essential to every trustworthy AI initiative.

To achieve impact, AI governance must facilitate data-driven decision-making while actively mitigating legal, ethical, and operational risks. Governance should be embedded throughout all phases of development and deployment, ensuring alignment with organizational objectives and the delivery of measurable business value.

Governance bodies must establish clear roles and responsibilities, manage AI-specific risks, and coordinate efforts across technical, legal, policy, and business domains. A structured framework should guide strategy through implementation, with strong support for compliance and accountability.

A recommended path forward is incremental implementation, using small, low-disruption changes within existing workflows. This gradual integration supports sustainable governance without hindering innovation. Executive sponsorship, stakeholder engagement, and continuous training are key to success.

Critically, governance must be proactive and integrated from the outset. Retrofitting oversight after deployment leads to misalignment, inefficiencies, and increased risk. Early planning and governance integration are essential for building trustworthy, scalable AI systems.

AI policy development

Establishing effective AI governance within an organization involves several structured steps. It begins with forming a small, cross-functional policy team that has the backing of executive leadership. This team should represent various perspectives from across the organization to ensure that the governance framework is practical, inclusive, and aligned with business needs.

The first core task of this team is to develop AI governance policies. Each policy should be clearly scoped, define responsibilities, and outline the guiding principles the organization will follow. To ensure transparency and accountability, policies should also reference related internal procedures, policies, and relevant external regulations.

Once the policy is in place, supporting procedures must be developed. These provide clear, detailed instructions on how specific tasks should be carried out and under what conditions exceptions may apply. Where helpful, guidelines can be created to offer flexible, advisory information that supports policy implementation without enforcing strict requirements.
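The policy → procedure → guideline hierarchy described above can be sketched as structured records. This is a hypothetical illustration only: every field name and sample value below is invented, and a real organization would adapt the structure to its own templates.

```python
# Hypothetical sketch of the policy -> procedure -> guideline hierarchy.
# All field names and sample values are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Guideline:                    # advisory: flexible, non-binding support
    title: str
    advice: str

@dataclass
class Procedure:                    # binding: detailed instructions + exceptions
    title: str
    steps: list[str]
    exceptions: list[str] = field(default_factory=list)

@dataclass
class Policy:                       # clearly scoped, with responsibilities and principles
    name: str
    scope: str
    responsibilities: dict          # role -> duty
    principles: list[str]
    related_regulations: list[str]  # external references for accountability
    procedures: list[Procedure] = field(default_factory=list)
    guidelines: list[Guideline] = field(default_factory=list)

policy = Policy(
    name="Generative AI Use Policy",
    scope="All staff using GenAI tools on company data",
    responsibilities={"AI policy team": "maintain and review this policy"},
    principles=["accountability", "transparency", "human oversight"],
    related_regulations=["EU AI Act"],
    procedures=[Procedure("Model approval", ["submit request", "risk review", "sign-off"])],
    guidelines=[Guideline("Prompt hygiene", "Avoid including personal data in prompts.")],
)
print(policy.name)
```

Keeping procedures and guidelines as distinct types mirrors the distinction the text draws: procedures carry enforceable steps and exception conditions, while guidelines remain advisory.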

Clear communication is critical. Organizations should define what needs to be communicated, identify their target audiences, and select appropriate communication channels and timing.

Finally, AI governance must address emerging concerns such as generative AI, model transparency, bias, and evolving legal and regulatory frameworks. A well-structured governance approach ensures that AI is deployed responsibly, ethically, and in alignment with organizational goals.

Unlocking the Value of GenAI

Maximizing the value of generative AI requires both leveraging unstructured data effectively and managing the unique risks GenAI introduces. Unstructured data—emails, images, call logs, and more—represents over 80% of enterprise data and holds immense potential for insight generation, process automation, and innovation. However, organizations face major challenges in harnessing it, including data silos, inconsistent formats, and governance complexity. Unlocking this data demands AI-ready infrastructure, advanced technologies like NLP and computer vision, and strategic integration of domain-specific models.
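To make the idea of activating unstructured data concrete, here is a minimal sketch of turning a fragment of unstructured text (an email) into structured fields. Real pipelines use NLP models rather than regular expressions, and the sample email and field names are invented for the example.

```python
# Illustrative sketch: extracting structured fields from unstructured text.
# Real pipelines would use NLP models; the sample email is invented.

import re

email = """From: ana@example.com
Subject: Invoice 4821 overdue
Please settle invoice 4821 (amount: $1,250.00) by 2024-07-01."""

record = {
    "sender": re.search(r"From:\s*(\S+)", email).group(1),
    "invoice_id": re.search(r"invoice\s+(\d+)", email, re.IGNORECASE).group(1),
    "amount": re.search(r"\$([\d,]+\.\d{2})", email).group(1),
    "due_date": re.search(r"\d{4}-\d{2}-\d{2}", email).group(0),
}
print(record)
```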

At the same time, the unpredictable nature of GenAI systems presents new risks. Traditional quality assurance approaches fall short, necessitating new testing frameworks such as “LLM-as-a-Judge,” where AI evaluates AI. These tools assess accuracy, bias, relevance, and compliance across the lifecycle, from development to deployment. Red teaming and contextual scoring are also essential for surfacing vulnerabilities and improving reliability.
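The "LLM-as-a-Judge" pattern above can be sketched as follows. The judge call here is a hypothetical stub standing in for a real model API, and the rubric criteria mirror the dimensions mentioned in the text (accuracy, bias, relevance, compliance); a production system would parse the judge model's actual structured response.

```python
# Minimal sketch of "LLM-as-a-Judge": one model's output is scored against a
# rubric by a second (judge) model. `call_judge_model` is a hypothetical stub.

from dataclasses import dataclass

RUBRIC = ("accuracy", "relevance", "bias", "compliance")

@dataclass
class Verdict:
    scores: dict   # criterion -> score on a 1..5 scale
    passed: bool

def call_judge_model(prompt: str) -> dict:
    """Hypothetical judge call. A real system would send `prompt` to an LLM
    and parse its structured reply; here we return fixed scores."""
    return {criterion: 4 for criterion in RUBRIC}

def judge(question: str, answer: str, threshold: int = 3) -> Verdict:
    prompt = (
        f"Rate the answer on {', '.join(RUBRIC)} (1-5 each).\n"
        f"Question: {question}\nAnswer: {answer}"
    )
    scores = call_judge_model(prompt)
    return Verdict(scores=scores, passed=all(v >= threshold for v in scores.values()))

verdict = judge("What is our data retention policy?", "Records are kept 7 years.")
print(verdict.passed)
```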

Together, these perspectives underscore a dual imperative: organizations must both activate unstructured data as a strategic asset and establish robust, adaptive governance mechanisms to ensure safe, trustworthy, and effective GenAI deployment.

Combating AI Data Poisoning

AI data poisoning refers to the intentional manipulation of training data, prompts, or contextual inputs to distort the behavior or output of AI systems. This can range from obvious misinformation to subtle statistical noise designed to bias results or degrade performance.
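One first line of defense against the cruder forms of poisoning is screening incoming training data for statistical anomalies. The sketch below flags outlier records with a z-score scan; the data and threshold are invented, and subtle, well-crafted poisoning will not always surface this way.

```python
# Illustrative sketch: flagging suspicious training records with a z-score
# outlier scan. Data and threshold are invented; subtle statistical poisoning
# won't always show up this way, but anomaly screening is a common first step.

import statistics

def flag_outliers(values: list[float], z_threshold: float = 2.0) -> list[int]:
    """Return indices of values whose z-score exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

# A mostly well-behaved feature column with one injected extreme value.
feature = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 9.7, 250.0]
print(flag_outliers(feature))  # the injected record stands out
```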

To address these risks, a combination of technical safeguards and governance frameworks is essential. On the technical side, solutions include using open models, implementing retrieval-augmented generation (RAG) to ground AI responses in verified sources, and designing systems with both probabilistic and deterministic controls. These measures help detect and reduce the impact of manipulated content and improve traceability.
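The RAG idea above can be sketched minimally: answers are grounded in a small store of verified documents rather than the model's internal memory, and sources are cited for traceability. Retrieval here is naive keyword overlap (real systems use vector embeddings), and the document contents are invented.

```python
# Minimal sketch of retrieval-augmented generation (RAG). Retrieval is naive
# keyword overlap for illustration; document contents are invented.

VERIFIED_DOCS = {
    "doc-1": "All AI models must pass a validation review before deployment.",
    "doc-2": "Training data sources must be logged for traceability.",
    "doc-3": "Quarterly audits cover model drift and bias metrics.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank verified documents by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        VERIFIED_DOCS.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Compose a prompt that cites its sources, improving traceability."""
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

print(build_grounded_prompt("What must happen before a model deployment?"))
```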

Equally important is the adoption of trustworthy AI policies that define acceptable use, exposure management, and evaluation methods. Organizations are encouraged to build internal understanding of generative AI risks through hands-on experimentation and to align use cases with risk-informed frameworks. This dual approach—combining transparent technology design with strong governance—helps ensure that AI remains secure, reliable, and aligned with organizational and societal values.

Human + Machine Collaboration for Ethical AI

Machine learning plays a powerful role in enhancing data-driven outcomes across sectors by automating tasks, analyzing large datasets, and supporting decision-making. To ensure these systems remain trustworthy and effective, ethical design principles must guide development and deployment. Core principles include fairness, transparency, accountability, privacy, and ongoing human oversight. Common sources of bias—such as sampling errors, algorithmic skew, and omitted variables—can be addressed through diverse data practices, bias detection tools, and continuous monitoring. The most effective AI outcomes emerge from collaboration between human and machine: humans provide context, empathy, and ethical judgment, while machines offer speed, consistency, and scale. This synergy leads to smarter, more reliable, and more explainable AI systems. As adoption increases across industries like healthcare and finance, integrating ethical frameworks and preparing the workforce remain essential to long-term success.
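One of the bias detection tools mentioned above can be sketched with a demographic parity check, which compares a model's positive-outcome rate across groups; a large gap signals potential bias warranting human review. The groups, outcomes, and review threshold below are invented toy values.

```python
# Illustrative fairness check: demographic parity compares positive-outcome
# rates across groups. The toy data and review threshold are invented.

def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = approved, 0 = denied, per applicant in each group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

gap = parity_gap(group_a, group_b)
print(f"parity gap: {gap:.3f}")
```

A check like this is cheap to run continuously, which matches the text's emphasis on ongoing monitoring combined with human judgment about whether a given gap is acceptable.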

Key Actions for Responsible AI Adoption

To build responsible and resilient AI governance, organizations should:

  • Establish a structured governance framework led by a cross-functional team with executive backing, defining clear roles, policies, and procedures.
  • Embed ethical principles—fairness, accountability, transparency, privacy, and human oversight—into AI design, deployment, and oversight processes.
  • Leverage both structured and unstructured data by investing in AI-ready infrastructure and tools that ensure data quality, discoverability, and compliance.
  • Mitigate GenAI-specific risks using evaluation techniques like LLM-as-a-Judge, red teaming, and contextual scoring to assess accuracy, coherence, and harmful outputs.
  • Defend against data poisoning through technical safeguards such as Retrieval-Augmented Generation (RAG), and reinforce this with trustworthy AI policies and open frameworks.
  • Promote continuous collaboration between humans and machines, combining strategic, ethical judgment with AI’s speed and scale for smarter, more trusted outcomes.

This integrated approach ensures AI is not only powerful but also secure, explainable, and aligned with business and societal values.