
Humanitarian AI podcast series | Guest spotlight – Timi Olagunju

We’re excited to continue our special six-part Humanitarian AI podcast series, building on the groundbreaking research conducted by the HLA in partnership with Data Friendly Space (DFS). We’re shining a spotlight on the experts you’ll be hearing from – read on to learn about the work and perspectives of Timi Olagunju.

Meet Timi Olagunju – a policy expert, lawyer, and governance strategist from Lagos, Nigeria. We’re pleased to share that Timi will feature on the Fresh Humanitarian Perspectives podcast as an expert guest in episode three of the Humanitarian AI series: ‘Addressing AI governance gaps: perspectives from Nigeria and beyond’. Timi shares his perspectives and experiences on AI governance challenges, with a particular focus on Africa, in conversation with the HLA’s Ka Man Parkinson.

The podcast conversations build on findings and themes emerging from the August 2025 report: How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential.

A man smiling outdoors, wearing traditional maroon and gold African attire with a matching cap. There are green trees and blurred people in the background.
The opportunity that excites me most is the ability to use AI for anticipatory action, helping communities prepare for shocks rather than only reacting to them. However, the major risk is the non-critical adoption of AI systems without safeguards, especially for vulnerable populations, with little or no digital self-determination mechanisms, leading to a possible erosion of trust in humanitarian actors. The challenge is ensuring that innovation does not come at the cost of rights and dignity.
Timi Olagunju

Tell us about yourself and your journey into law, AI, tech, and development. What drew you to working at this intersection, and what are you focusing on right now?

My journey into law, technology, and development began in 2007 while serving as a student leader at the University of Ibadan, studying law. Faced with a long-standing accommodation challenge affecting more than 1,500 students, I worked with a student programmer to build a web app that cut the room allocation process from weeks to minutes, resolving a problem that had persisted for more than three decades. The real breakthrough, however, came from persuading university authorities, hostel management, and stakeholders to adopt the solution. That experience showed me the power of technology to solve problems, but also that adoption and impact require governance, persuasion, and the right policy frameworks.

Motivated by this, after my training as a lawyer, I took more interest in public policy, and then governance in the technology sector. Over time, I came to see that while technology can accelerate progress, without strong governance structures it can also entrench inequality. This conviction continues to shape my work at the intersection of law, policy, and emerging technologies such as Artificial Intelligence.

Currently, I am working to prepare young people for the future of work with AI, and supporting the governance of AI in education and other sectors critical to humanity, through a nonprofit I lead, the AI Literacy Foundation: in essence, digital transformation and modelling.

From your vantage point, what does the humanitarian AI and data governance landscape look like in 2025? What’s one key opportunity that excites you and a major risk in this space that is concerning you?

Well, the current humanitarian AI and data governance landscape is both promising and fragile. On one hand, AI is being deployed to improve crisis response, from predictive analytics in public health to satellite imagery for climate resilience, even to using IoT (Internet of Things) and AI to fight animal poaching.

The opportunity that excites me most is the ability to use AI for anticipatory action, for example precision flood forecasting, helping communities prepare for shocks rather than only reacting to them.

However, the major risk is the non-critical adoption of AI systems without safeguards, especially for vulnerable populations, with little or no digital self-determination mechanisms, leading to a possible erosion of trust in humanitarian actors.

The challenge is ensuring that innovation does not come at the cost of rights and dignity. Consider, for example, the use of AI-driven biometric systems to streamline food and aid distribution in humanitarian assistance programs in IDP and refugee camps, where data rights are almost non-existent or alien.

Are there current developments or potential for a pan-African approach to humanitarian AI development? What kind of governance structures would or could this involve?

There is strong potential for a pan-African approach to humanitarian AI development, building on frameworks such as the AU’s Digital Transformation Strategy 2030 and the African Continental Free Trade Area (AfCFTA). Rather than fragmented national policies, a coordinated framework could establish continental standards through the African Union, with regional economic communities driving coordination and national governments adapting policies to their contexts. Civil society and universities would be critical in this structure, ensuring inclusion, accountability, and the integration of local knowledge.

Such a model would prevent uneven protections across countries, strengthen resilience to cross-border crises, and give Africa a stronger collective voice when engaging with global tech companies and donors. Although the EU’s framework shows the advantages of collective bargaining and the EU has been a partner in shaping digital frameworks on the continent (for instance the GDPR – Europe’s data privacy and security law), Africa must avoid overly prescriptive models that do not reflect local realities. Instead, a homegrown, context-sensitive governance framework can enable Africa to harness AI for humanitarian and development goals while protecting rights and advancing innovation.

What progress do you think could be feasibly made in the AI governance space in the next 1-2 years, particularly in relation to the humanitarian and development space?

In the next one to two years, the most feasible progress in humanitarian AI governance will come through practical norm-setting and capacity-building rather than treaties. I expect to see sector-specific guidelines emerge, such as baseline requirements for transparency in AI tools used for crisis response and data-sharing standards that safeguard privacy, often anchored in existing digital governance frameworks.

Donors and multilateral institutions will likely reinforce this by tying funding to compliance with adaptable standards, alongside offering training and technical support. This can help humanitarian actors, especially smaller organizations, adopt responsible AI practices without being overwhelmed.

My concern, however, is that much of this work risks being driven by Global North “copy-and-paste” models that may not reflect the realities of regions in Sub-Saharan Africa.

If standards remain rigidly external, they could burden rather than empower local actors. For real progress, stakeholders must agree that governance efforts should evolve locally and contextually, building on regional needs and knowledge rather than importing foreign blueprints. Donors can fund this work, but they should not dictate its format.

Only one in five survey respondents told us that they have an organisational AI policy that they’re aware of. How can humanitarian organisations, particularly local organisations that may not have a dedicated team, begin to formulate a robust and useful AI policy? In your opinion, could they use a general template, or should a more contextualised humanitarian AI policy be developed?

To the local organizations faced with this ubiquitous challenge, I would say: templates are not strategy. The answer is not to copy-paste a foreign framework; it is to start with a minimum-viable policy that fits your own work.

Advice is free, but consulting costs a fee. So here is my advice: Draft a two-page starter that names your AI use cases, sets a plain-language purpose, and locks in four non-negotiables: consent and data minimization, human oversight with an appeal path, risk tiers with clear ‘do-not-use’ boundaries, and a simple incident response with public reporting. Then localize it – relentlessly. Add a one-page annex per country or program co-created with staff and affected communities that spells out context-specific risks, data sources, and red lines.

Revisit it quarterly (in an agile governance process, we call that ‘iterating’) as tools and risks evolve, keep an auditable log of AI decisions, and name a single accountable owner who can pause deployments.

A lightweight template is fine as scaffolding; the substance must be contextual, bottom-up, and enforceable in practice, or it becomes compliance theatre that puts people at risk.

A promotional graphic for the Humanitarian AI series podcast by Humanitarian Leadership Academy, featuring a smiling man in traditional Nigerian attire. Text highlights a conversation with Timi Olagunju on governance gaps in Nigeria and beyond.
New episode with Timi coming soon! Subscribe to Fresh Humanitarian Perspectives on your favourite podcast platform to be notified of new releases.

About Timi Olagunju

Timi Olagunju is a lawyer and policy expert working at the intersection of emerging technology, governance, and development. He is the founder of the AI Literacy Foundation and Youths in Motion, and serves on the boards of the Slum and Rural Health Initiative and Feed To Grow Africa. His advocacy on AI governance, including his submission published by the White House Office of Science and Technology Policy in 2025, contributed to U.S. policy debates that shaped the Executive Order on AI Education for American Youth. His publications and recommendations have informed policymakers and courts alike.

Timi is a Partner at Timeless Practice, and has advised policymakers, multinational companies, digital platforms, and global institutions including UNICEF and the ILO. He is an Internet Society Fellow and contributed to Berkman Klein Center research on digital self-determination now informing policy in Germany. A former Global Vice President of Generation Democracy (IRI), he has been recognized by President Obama and the Ooni of Ife. He studied law at the University of Ibadan and completed a master’s degree at Harvard University’s John F. Kennedy School of Government as an Edward Mason Fellow, specializing in digital policy and the governance of emerging technology.

About the report and podcast series

In August 2025, the Humanitarian Leadership Academy and Data Friendly Space launched a joint report on artificial intelligence in the humanitarian sector: How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential.

Drawing on insights from 2,539 survey respondents across 144 countries and territories, coupled with deep-dive interviews, this study was the world’s first baseline study of AI adoption across the humanitarian sector.

In the next phase of our initiative, we’re releasing a six-episode podcast series featuring expert guests to build on the themes emerging from the research, including community-centred research, implementation barriers, governance and regulatory frameworks, cultural and ethical considerations, localisation, learning and training, and more. This first series will have a particular focus on the Global South and Africa due to high levels of engagement in this research from these regions.

The platform aims to promote inclusive, accessible conversations and knowledge exchange to support shared progress in the development of ethical, contextually-appropriate humanitarian AI.
