
Humanitarian AI podcast series: the collection

Global expert voices on humanitarian artificial intelligence

In August 2025, the Humanitarian Leadership Academy and Data Friendly Space released a joint report: ‘How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential’ – the world’s first baseline study of AI adoption across the humanitarian sector, with 2,539 survey respondents from 144 countries and territories.

To build on this research, we’re producing a six-episode deep-dive podcast series featuring expert guests, who share their views on the research and help chart possible pathways forward together.

We delve into the report’s key themes, including community-centred research, implementation barriers, governance and regulatory frameworks, cultural and ethical considerations, localisation, learning and training, and more. The conversations also feature questions from the humanitarian community.

Released in September and October 2025, this first series features global and African expert perspectives; the African focus reflects the fact that 46% of research respondents were from Sub-Saharan Africa.

The series aims to promote inclusive, accessible conversations and knowledge exchange to support shared progress in the development of ethical, contextually appropriate humanitarian AI.

Episodes:

Ep. 1: How are humanitarians using AI: reflections on our community-centred research approach – with Lucy Hall, Ka Man Parkinson and Madigan Johnson
Ep. 2: Bridging implementation gaps: from AI literacy to localisation – in conversation with Michael Tjalve
Ep. 3: Addressing governance gaps: perspectives from Nigeria and beyond – in conversation with Timi Olagunju
Ep. 4: Building inclusive AI: indigenous knowledge frameworks from Kenya and beyond – in conversation with Wakanyi Hoffman
Ep. 5: Localising AI solutions: practitioner experiences from Rwanda – in conversation with Deogratius Kiggudde

Episode 6 coming soon! Subscribe to Fresh Humanitarian Perspectives on your usual podcast platform to be notified of new releases.

Speakers

Find out more about the speakers on each episode’s webpage, which features biographies and links to expert guest Q&A articles.

Who this series is for

Anyone with an interest in the development of humanitarian AI can engage with this content. Whether you’re an AI-curious individual or a leader navigating organisational digital transformation decisions, you’ll hear the latest thinking and developments from experts in this space.

These long-form conversations run for around 60–75 minutes, allowing time to explore themes in depth with our guests.

To support accessibility, each episode includes:

  • Chapter markers to navigate to segments of interest
  • Glossaries of technical terms
  • Full conversation transcripts

Key themes to emerge so far

Humanitarians must find informed, inclusive, measured, and principled pathways to AI deployment if the technology is to genuinely help address real-world challenges and support communities in crisis. This is a moment for collective learning, discussion, and collaboration to develop AI tools that support humanitarian work and uphold the principle of do no harm.

For practitioners and organisations:

1. Start with AI literacy
Demystify AI for everyone, not just technical staff. Invest in skilling and training opportunities across all levels of the organisation.

2. Establish data governance now
As Timi Olagunju advises: “Treat your data like cash” – secure it, track who accesses it, and collect only what you need. Develop an AI policy as a living document, using templates as a starting point but always localising and contextualising the content.

3. Begin with pain points, not possibilities
Identify organisational challenges first, then assess whether AI can help – rather than adopting AI and searching for applications.

4. Explore beyond ChatGPT and LLMs
Small language models (SLMs) offer promising potential for contextualised humanitarian AI solutions, overcoming some limitations of large language models, especially in low-connectivity or resource-constrained environments.

5. Keep humans in the loop
AI should assist decision-making, not replace it. Maintain human oversight, particularly for high-stakes or sensitive decisions.

For developers and system builders:

1. Enable localised, contextual solutions
Design tools to reflect local languages, infrastructures, contexts, stories and lived experiences. Build through local partnerships and appropriate technologies.

2. Make community ownership and governance central
Communities should be able to shape, control, and sustain their own tools, whether through open-source approaches, participatory design, or other inclusive models.

Sector-wide considerations:

1. Foster genuine collaboration
With shrinking resources, collaboration should go beyond funding to include shared expertise, infrastructure, testing spaces, and connections. Coalitions of actors across industry, academia, governments and NGOs can help to balance risk and drive innovative approaches. AI adoption must happen in alignment with localisation processes.

2. Document and share openly
Both successes and failures – and everything in between – are valuable. Public testing and open documentation accelerate collective learning and sector-wide developments.


Next steps

The series producer, Ka Man Parkinson (Communications and Marketing Lead, HLA), will publish a short article reflecting on the key themes to emerge from the conversations and connecting them to the original research. The article is planned for publication in November 2025.

We invite your feedback! Tell us what you thought of this series: what was helpful, and what else would you like to hear about in future series? We want to co-create this humanitarian AI learning journey with you.

Please email us at info@humanitarian.academy
