AI Governance
Lessons learned from past approaches
The first TDL webinar of 2026 focused on AI governance, featuring a panel of experts discussing the evolution of AI governance frameworks and implementation challenges.
Summary
The session explored the current state of AI governance in 2026, including its evolution from a focus on ethics and privacy to an emphasis on compliance and automation, as well as the development of international standards and legal frameworks. The panel also examined safety controls, human oversight and the importance of provenance in agentic AI systems, stressing the need for practical implementation and context-specific, risk-based approaches. The discussion concluded with insights on education, human oversight and the value of understanding AI systems through hands-on workshops and training, with a focus on building adaptable governance frameworks that can address new challenges.
Background
AI governance encompasses the policies, frameworks, regulations and technologies recommended or mandated to reduce the risks associated with artificial intelligence. Different countries and regions have developed different approaches, and although international standards for AI governance have been created, their adoption varies from region to region.
After several years of experience with different approaches to AI governance, it is timely to reflect on the lessons learned, including risk-based approaches, how human oversight is used, and the limits of transparency in dynamic AI systems (e.g., autonomous agents). Not to be overlooked are the approaches enterprises and organisations should adopt, the ways to strengthen global collaboration, and the need for flexibility in regulatory mandates as the technology evolves, while also finding ways to ensure that users’ trust in AI is preserved.
Each participant was asked in turn to comment on what they saw as the most important topics, challenges and lessons learned in AI governance, and on how the focus of the field has changed since these issues were first raised. Their opinions were also sought on what the most important focus areas for AI governance in 2026 should be, and on how the principles developed for AI governance apply to areas where there is no PII (e.g., weather analysis or automated manufacturing). At the inception of AI governance, one of the focus areas was the ‘human-in-the-loop’, and it was of interest to know whether this focus has already changed and what we should expect in the coming year. As the management and use of data is fundamental to all forms of AI, we also wanted to know whether broadly applicable metrics will become available to improve data governance.
Understanding the quality and provenance of the output of AI systems is also important: should AI governance principles apply to the quality and accuracy of results, and should providing provenance information be made mandatory? If so, there may be technologies and standards that can help, although it is questionable whether anything changed in this area in 2025.
A lot of research has gone into AI agents and autonomy and their connection with the concept of data sovereignty. We asked for insights on whether systems in which agents monitor other agents, and in which dynamic adaptation replaces static rules, affect key concepts in AI governance; what lifecycle management might mean in this context; and how human checks could be implemented. Another dimension comes from looking at sectoral differences in the governance and importance of agentic/autonomous systems.
Finally, the panellists were prompted to consider the most important lessons learned from over five years of applying the principles and (later) the standards of AI governance, and whether the failures were technical or organisational. For example, could the issues stem from applying governance principles post- rather than pre-deployment? And could it be that agentic systems require different mental models, or that there are overlooked costs associated with the fragmented regulatory landscape?
So many questions, so little time. At least we made a start!
Speakers
The panellists were:
· Gonda Lamberink, UN Joint Staff Pension Fund, AI & Cybersecurity Governance Consultant
· Lalitha Suryanarayana, Executive in Residence, Techquity Growth Capital
· Allen Wishman, Executive Partner: Data, Analytics, AI & Governance
The session was moderated by TDL strategic advisor, Claire Vishik.
AI Governance Trends for 2026
The initial discussion noted the rising importance of AI governance and highlighted the trends for 2026, including a shift from general ethical principles to practical implementation and context-specific, risk-based approaches. Emphasis was placed on the need for safety controls, human oversight and provenance in agentic AI and autonomous systems, as well as on the importance of post-deployment monitoring. On the question of how autonomous agents and human intervention should work together, the discussion examined concepts such as human-in-the-loop and its upgrade to human-in-command, emphasising the role of humans in accountability and policy setting.
AI Governance Implementation Strategies
The discussion then turned to AI governance both pre- and post-deployment, highlighting the importance of concrete implementation of governance principles with KPIs and metrics, including lineage coverage, metadata documentation and auditability of models. The need for enterprises to scale AI governance across their organisational infrastructure and to address compute governance was also raised, given the strategic importance of computational resources for frontier models. A simple sketch of what such metrics might look like in practice is shown below.
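As an illustration of how high-level governance principles can be turned into measurable KPIs, the sketch below (not taken from the webinar) computes two such metrics over a hypothetical model registry. The record structure, field names and functions are assumptions made purely for the example; real organisations would derive equivalents from their own catalogues and registries.

```python
# Illustrative sketch only: one possible way to express "lineage coverage" and
# "metadata documentation" as measurable KPIs. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class ModelRecord:
    """A hypothetical entry in an internal model registry."""
    name: str
    training_datasets: List[str] = field(default_factory=list)      # dataset identifiers used in training
    datasets_with_lineage: List[str] = field(default_factory=list)  # subset with documented lineage
    has_model_card: bool = False                                    # metadata documentation present?


def lineage_coverage(records: List[ModelRecord]) -> float:
    """Fraction of referenced training datasets that have documented lineage."""
    total = sum(len(r.training_datasets) for r in records)
    covered = sum(len(set(r.datasets_with_lineage) & set(r.training_datasets)) for r in records)
    return covered / total if total else 1.0


def metadata_coverage(records: List[ModelRecord]) -> float:
    """Fraction of registered models that have a model card."""
    return sum(r.has_model_card for r in records) / len(records) if records else 1.0


if __name__ == "__main__":
    registry = [
        ModelRecord("churn-predictor", ["crm-2024", "web-logs"], ["crm-2024"], has_model_card=True),
        ModelRecord("demand-forecast", ["sales-2025"], ["sales-2025"], has_model_card=False),
    ]
    print(f"Lineage coverage:  {lineage_coverage(registry):.0%}")   # 2 of 3 datasets -> 67%
    print(f"Metadata coverage: {metadata_coverage(registry):.0%}")  # 1 of 2 models -> 50%
```

Tracking metrics like these over time is one way to make governance auditable rather than aspirational, though the right metrics will depend on the organisation and sector.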
AI Implementation Governance Challenges
The challenges in AI implementation were then raised, emphasising that issues often arise from organisational shortcomings rather than technical failures. The need for shared governance across departments and the importance of translating high-level principles into actionable steps for engineers and developers were highlighted. For small and medium enterprises, leveraging existing tools and understanding the accountability that comes with sourced AI capabilities were suggested.
AI Governance: Education and Oversight
The panel then discussed AI governance with a focus on education, provenance and human oversight. The importance of understanding AI systems through hands-on workshops and training was also stressed. The group went on to explore the challenges of reducing hallucinations and ensuring the quality of AI-generated outputs, highlighting the need for human oversight in AI systems, especially in sensitive sectors. The panel agreed that AI governance should build on existing controls and be adaptable to new challenges, concluding that organisations need to understand how AI systems work and learn to coexist with them.
Watch the full webinar recording on our YouTube channel below!


