Ensuring Trust and Compliance with the Hourglass Model of AI Governance
Article by Caroline Lancelot
Artificial Intelligence promises efficiency, insight, and innovation, but without rigorous governance it also carries significant risks: bias, opacity, regulatory missteps, and reputational damage. The hourglass model of organisational AI governance offers a clear, structured approach to turning high-level ethics principles and legal requirements (e.g., the EU AI Act) into practical, day-to-day governance practices.
The hourglass model visually represents AI governance as three stacked layers, built on the idea of distinct governance levels (e.g., social/legal, ethical, technical).
[Figure: the hourglass model of AI governance, with its environmental, organisational, and AI system layers. Source: ChatGPT]
- From Regulations to Practice: The Environmental Layer (top)
At the top of the hourglass sit the external inputs: binding laws (e.g., the EU AI Act), industry standards, voluntary principles, self-regulatory guidelines, societal norms, ethical principles, and stakeholder expectations (customers, regulators, the general public).
Cataloguing and tracking these inputs is the first essential step.
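As a loose illustration of what such a catalogue might look like, the hypothetical Python sketch below records each external input with its source, type, scope, and review date; the field names and entries are assumptions for illustration, not part of the hourglass model itself.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GovernanceInput:
    """One external requirement feeding the environmental layer (assumed fields, for illustration)."""
    name: str                 # e.g. a specific legal obligation or guideline
    source_type: str          # "law", "standard", "guideline", or "stakeholder expectation"
    issuer: str               # regulator, standards body, or stakeholder group
    applies_to: list[str] = field(default_factory=list)  # affected AI systems or use cases
    next_review: date | None = None                      # when the obligation is next re-checked

# A minimal catalogue with illustrative entries
catalogue = [
    GovernanceInput(
        name="EU AI Act obligations for high-risk systems",
        source_type="law",
        issuer="European Union",
        applies_to=["credit-scoring model"],
        next_review=date(2026, 1, 1),
    ),
    GovernanceInput(
        name="Internal responsible-AI principles",
        source_type="guideline",
        issuer="AI Oversight Unit",
        applies_to=["all AI systems"],
    ),
]

# Simple tracking query: which inputs are due for review in the next half-year?
due = [i.name for i in catalogue if i.next_review and i.next_review <= date(2026, 6, 30)]
print(due)
```

Even a lightweight register of this kind makes the later layers easier to operate, because every downstream policy and check can point back to the specific input it satisfies.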
- Turning Sand into Governance: The Organisational Layer (middle)
The “neck” of the hourglass is where transformation happens: the internal practices and capabilities needed to “grind” the top inputs into actionable policies. Here, you align AI initiatives with corporate strategy, establish clear governance roles (e.g., appoint a Chief AI Officer or an AI Oversight Unit), provide training, and weave AI governance into existing IT and data governance processes.
This layer covers both strategic alignment (defining the organisation's AI strategy and aligning it with overall objectives) and value alignment (ensuring AI use adheres to ethical principles). Functioning effectively here requires resource allocation, capabilities, processes, management commitment, governance roles, change management, and staff training; the layer connects to corporate governance, IT governance, and strategic management.
This layer ensures that ethical AI is not an afterthought but a core enabler of trust and compliance.
- Embedding Practices in Systems: The AI System Layer (bottom)
Finally, the AI System Layer at the bottom covers the operational governance of the AI systems themselves. It is the layer most directly relevant to practical implementation, and also the most complex, both because of the work of implementation and because AI technologies continue to advance. Governance must manifest in the systems themselves across every lifecycle stage.
Core components include (illustrated in the sketch after this list):
- Data operations: sourcing, quality assurance, health checks
- Algorithm management: algorithm identification, pre-design, deployment metrics, version control, monitoring
- Risk & impact: pre-assessments, impact metrics, ongoing health checks
- Transparency & accountability: documentation standards, explainability toolkits, audit trails
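To make these components concrete, the hypothetical Python sketch below shows how a single AI system's governance record might tie them together; the class, field names, and example values are illustrative assumptions rather than a schema prescribed by the hourglass model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AISystemGovernanceRecord:
    """Illustrative governance record for one AI system (assumed structure, not a standard)."""
    system_id: str
    model_version: str                                      # algorithm management: version control
    data_sources: list[str] = field(default_factory=list)   # data operations: sourcing
    data_quality_checked: bool = False                      # data operations: quality assurance
    risk_preassessment_done: bool = False                   # risk & impact: pre-assessment
    impact_metrics: dict[str, float] = field(default_factory=dict)  # risk & impact: impact metrics
    documentation_url: str = ""                             # transparency: documentation standards
    audit_trail: list[str] = field(default_factory=list)    # accountability: audit trail

    def log(self, event: str) -> None:
        """Append a timestamped entry to the audit trail."""
        self.audit_trail.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

# Hypothetical usage for a single system
record = AISystemGovernanceRecord(system_id="churn-predictor", model_version="2.3.1")
record.data_sources.append("crm_exports_2025")
record.risk_preassessment_done = True
record.log("risk pre-assessment completed")
record.log("version 2.3.1 promoted to production")
```

Keeping such a record per system and per version is one way the audit trails and health checks listed above can become routine rather than ad hoc.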
The hourglass model provides a template for decision-makers to address key questions about the use of AI. It is value-agnostic: it does not give priority to any particular ethical stance.
The hourglass metaphor illustrates how “grains of sand” (the normative inputs from the environmental layer) flow through the narrow neck (the mediating organisational layer) into operational practices for AI systems, tracing the flow of governance decision-making from requirements to operations. This translation is continuous, involves a range of organisational roles and functions, and runs in both directions: the layers influence each other top-down and bottom-up. The metaphor thus highlights the dynamic nature of AI governance as a continuous activity that turns regulatory, self-regulatory, and stakeholder input into operational practice.
Implementing the hourglass model requires four main types of operationalisation work:
- Regulatory Mapping: catalogue relevant laws, guidelines, and stakeholder requirements.
- Governance Blueprint: design your AI Oversight Unit, defining roles, decision rights, and escalation processes.
- Operational Integration: integrate governance “patterns” into your software delivery lifecycle, including automated checks, versioning, health-check dashboards, and audit logs (see the sketch after this list).
- Continuous Improvement: treat the hourglass as dynamic; bottom-up feedback from system monitoring informs regular updates to strategy and processes.
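As one way to picture operational integration, the hypothetical snippet below sketches a pre-deployment governance gate that a delivery pipeline could run before releasing a model; the artefact names and the fairness threshold are assumptions for illustration, not requirements of the hourglass model or of any specific regulation.

```python
# Hypothetical pre-deployment governance gate for a CI/CD pipeline.
# Artefact names and the fairness threshold are illustrative assumptions.

REQUIRED_ARTEFACTS = {
    "risk_assessment.md",    # risk & impact pre-assessment
    "model_card.md",         # transparency documentation
    "data_provenance.json",  # data operations record
}

def governance_gate(artefacts: set[str], fairness_gap: float, max_gap: float = 0.05) -> bool:
    """Allow deployment only if required artefacts exist and the monitored fairness gap is acceptable."""
    missing = REQUIRED_ARTEFACTS - artefacts
    if missing:
        print(f"Blocked: missing governance artefacts: {sorted(missing)}")
        return False
    if fairness_gap > max_gap:
        print(f"Blocked: fairness gap {fairness_gap:.3f} exceeds threshold {max_gap:.3f}")
        return False
    print("Governance gate passed: deployment may proceed")
    return True

# Example run: one artefact is missing, so the gate blocks the release
governance_gate({"risk_assessment.md", "model_card.md"}, fairness_gap=0.02)
```

Checks like this also feed continuous improvement: a blocked release is itself bottom-up feedback that can prompt updates to policies and processes.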
By adopting the hourglass model, organisations achieve not just compliance but operational resilience and stakeholder trust, ensuring AI systems are seen as transparent, accountable, fair, and secure.
Caroline Lancelot is a member of our AI expert advisory panel. Click here for more information on Caroline Lancelot and her areas of AI expertise.
References
Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Putting AI ethics into practice: The Hourglass Model of Organizational AI Governance. arXiv. https://doi.org/10.48550/arXiv.2206.00335
Mäntymäki, M., Minkkinen, M., Birkstedt, T., & Viljanen, M. (2022). Defining organizational AI governance. AI and Ethics, 2(4), 603–609. https://doi.org/10.1007/s43681-022-00143-x
Birkstedt, T., Minkkinen, M., Tandon, A., & Mäntymäki, M. (2023). AI governance: Themes, knowledge gaps and future agendas. Internet Research, 33(7), 133–167. https://doi.org/10.1108/INTR-01-2022-0042
Batool, A., Zowghi, D., & Bano, M. (2024). AI governance: A systematic literature review. AI and Ethics. https://doi.org/10.1007/s43681-024-00653-w
Lu, Q., Zhu, L., Xu, X., Whittle, J., Zowghi, D., & Jacquet, A. (2024). Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering. ACM Computing Surveys, 56(7), 173. https://doi.org/10.1145/3626234