22nd September 2025
We’re excited to release a six-part Humanitarian AI podcast series this month, building on the groundbreaking research conducted by the HLA in partnership with Data Friendly Space (DFS). We’re shining a spotlight on the experts you’ll be hearing from – read on to learn about the work and perspectives of Michael Tjalve.
Meet humanitarian AI expert Michael Tjalve (founder, Humanitarian AI Advisory; co-founder, the RootsAI Foundation; Senior AI Advisor, UN OCHA; former Chief AI Architect at Microsoft Philanthropies). We’re delighted that Michael will feature on the Fresh Humanitarian Perspectives podcast as an expert guest in episode two of the Humanitarian AI series: ‘Bridging implementation gaps: from AI literacy to localisation’. Michael shares his perspectives and experiences on AI implementation challenges together with the HLA’s Ka Man Parkinson.
The podcast conversations build on findings and themes emerging from the August 2025 report: How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential.

Tell us about your journey into humanitarian AI – what drew you to this intersection?
I have been working in the field of artificial intelligence my entire career, a little over 25 years. Most of that time has been in the tech sector and academia. About a decade ago, I began focusing less on how to improve the underlying quality of AI capabilities and more on how AI is used in the real world, on its societal impact, both the good and the bad.
Through my role at Microsoft Philanthropies, I was fortunate to work closely with a wide range of NGOs, nonprofits, and humanitarian actors. There was a lot of excitement around AI but also a lot of concern and, more than anything, a lack of clarity on how to get started with AI or even which questions to ask. It became increasingly clear that there was a need, and I felt I could help address it with what I’ve learned over the years.
So, last year, I decided to leave the tech sector to establish the Humanitarian AI Advisory, where I work with humanitarian organizations to help them understand how to harness the potential of AI while navigating its pitfalls.
From your vantage point, what does the humanitarian AI landscape look like in 2025, and could you tell us about any priority areas in your work?
As you know all too well, our sector is struggling to meet demands. We’re at a unique moment in time where two highly impactful and far-reaching factors are intersecting. On one side, the humanitarian sector is underwater in terms of its ability to effectively address growing humanitarian needs. On the other side, modern AI has matured to the point where it is highly capable of playing a central role in the path forward.
Faced with this situation of having to do more with less, the sector finds AI a very attractive tool. This is good, because there are so many ways that AI can provide value to the people working in the sector and to the communities we serve, whether directly or indirectly. However, introducing AI can also make things worse, often in unexpected ways. When working with vulnerable populations, unexpected AI behaviour can have a devastating impact.
I think that 2025 is the year when more and more organizations will lean in on the use of AI. One of the projects I’m very excited about is the SAFE AI initiative, a UK Foreign, Commonwealth and Development Office (FCDO)-funded initiative to create a framework for the safe and effective use of AI in humanitarian action. We’re working on providing guidance and a set of easy-to-use tools to help individuals and organizations get started with AI, helping humanitarian actors understand how to get the most out of what AI can do today and how to identify relevant risks so that they can proactively implement mitigation strategies.
What’s one key area or development in AI that excites you in terms of application or future potential for humanitarian work? And what’s a risk that we as a sector need to pay particular attention to right now?
I’m excited about how far the technology has come. Compared to when I first started working on AI, the capabilities have evolved faster than I thought possible. When generative AI became broadly accessible three years ago, it completely changed the landscape by significantly lowering the barriers to adoption, which has led to a much broader demographic of society starting to engage meaningfully with AI.
However, as impressive as the AI capabilities are, it is worth keeping in mind that AI only works well in English and a relatively small number of other languages. This means that the large majority of the world’s population sees absolutely no benefit from modern AI, which in turn further deepens existing inequities across the world. We’ll never get anywhere near equitable outcomes from AI without a dedicated focus on language access.
To help address this challenge, we recently launched the RootsAI Foundation. Our goal with this nonprofit is to bring the value of modern AI to languages and communities that don’t have easy access to it today. This involves community-built AI to counter bias and ensure representation in AI models, preserve endangered languages, and build culturally grounded AI tools.
Are there any key areas of alignment or contrast between the Humanitarian AI research conducted by the HLA and DFS and your own experiences? What’s one sectoral knowledge gap that you would like to see tackled in future research?
One of the key findings highlighted in the Humanitarian AI research report related to a fragmented landscape of AI training approaches. That resonates with what I’ve seen. Getting started with AI can feel daunting. There are so many skilling options and courses to pick from, so many AI tools and capabilities to leverage, so many risks and dependencies to consider. It can be overwhelming and hard to know where to start. It’s why I started the Humanitarian AI Advisory, and these discussions are rightfully a key part of the engagements I have.
Looking at types of errors for a moment, it’s important to keep in mind that AI models will always make mistakes. Understanding how an error in AI output can materialize as tangible real-world consequences gives you a better chance of mitigating the risks. The concept of Cost of Error, for example, helps clarify the level of human oversight required for a given use case. If the Cost of Error is low, you can more comfortably act on the AI output as-is, e.g. when using generative AI to summarize a report; whereas if the Cost of Error is high, the AI output should only ever be taken as a recommendation for a human expert to confirm, e.g. when using AI to recommend where to deploy a field operator for humanitarian landmine clearance. As part of the SAFE AI initiative, we’re building tools to help with this process.
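To make the Cost of Error idea concrete, here is a minimal illustrative sketch in Python. The category names and the mapping logic are hypothetical, chosen only to mirror the two examples above; they are not taken from the SAFE AI tooling itself:

```python
from enum import Enum

class CostOfError(Enum):
    LOW = "low"    # e.g. using generative AI to summarize an internal report
    HIGH = "high"  # e.g. recommending where to deploy landmine-clearance teams

def required_oversight(cost: CostOfError) -> str:
    """Map a use case's Cost of Error to a human-oversight level."""
    if cost is CostOfError.LOW:
        # Low stakes: acting on AI output as-is is more defensible.
        return "Use AI output directly, with periodic spot checks."
    # High stakes: AI output is only ever a recommendation.
    return "A human expert must review and confirm before any action."

print(required_oversight(CostOfError.LOW))
print(required_oversight(CostOfError.HIGH))
```

In practice the mapping would be richer than a binary flag, but the design point stands: the level of oversight is a property of the use case, decided before deployment, not of the model.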
A great way to start getting familiar with some of the key concepts is with on-demand courses. For future research, I would like to see an overview of recommended courses for our sector, including general-purpose courses as well as courses adapted to our sector’s unique needs and priorities.
As someone who has worked across Microsoft, academia, UN agencies, and humanitarian organisations, you have a unique perspective on these different sectors. How do you think we can work more closely and effectively together for shared progress in the development of humanitarian AI? What’s something vital to accelerate progress in this space?
AI tools can provide value in many ways, from back-office process efficiency to scaling up or enabling novel approaches to the delivery of frontline functions. The humanitarian challenges are so multidimensional and the impact so deep that it does require insight and contribution from a wide range of stakeholders, each contributing their unique expertise and vantage point.
As part of the Humanitarian Reset, we’re at a point now where much more focus should be dedicated not just to encouraging but to incentivizing collaboration and reducing waste and duplication. An effective approach to enabling this is simply to raise awareness of what’s already been done at the intersection of AI and humanitarian action. Sharing case studies and lessons learned from within the sector does a few very valuable things: it helps to demystify AI use in the sector and allows people to connect the dots between what the technology can do and how it can apply to their specific challenge or mission. Raising awareness of existing work, specifically including both the successes and the failures, also helps others build upon and reuse existing tools, data, and approaches.
The bigger gap, though, as I see it – and therefore the bigger opportunity for lasting impact – is to engage with the impacted communities in a way that is both meaningful and purposeful. AI solutions should be co-designed with the communities where they’re used. True participatory AI means giving voice to the community and empowering it locally. This not only builds trust with the people who end up using the AI solution, or who are directly or indirectly impacted by what it produces; it also puts you in a better position to avoid blind spots and to identify both the unstated opportunities and the unseen risks that may be introduced when deploying AI capabilities into an existing ecosystem.

About Michael Tjalve
Michael Tjalve brings more than two decades of experience with AI, from applied science and research to tech-sector AI development, most recently serving as Chief AI Architect at Microsoft Philanthropies, where he helped humanitarian organizations leverage AI to amplify their impact. In 2024, he left the tech sector to establish Humanitarian AI Advisory, dedicated to helping humanitarian organizations and stakeholders understand how to harness the potential of AI while navigating its pitfalls.
Michael holds a PhD in Artificial Intelligence from University College London and is Assistant Professor at the University of Washington, where he teaches AI in the humanitarian sector. Michael serves as Board Chair and technology advisor for Spreeha Foundation, working to improve healthcare and education in underserved communities in Bangladesh. Michael is AI Advisor to the UN on humanitarian affairs, where he works with OCHA on AI strategy and on providing guidance on the safe and effective use of AI for humanitarian action. He is also co-lead of the SAFE AI initiative, which aims to promote the safe and responsible use of AI in humanitarian action. Michael recently co-founded the RootsAI Foundation, a nonprofit dedicated to bringing the value of modern AI to languages and communities that don’t have easy access to it today, and to improving representation in AI models by building culturally grounded AI tools.
About the report and podcast series
In August 2025, the Humanitarian Leadership Academy and Data Friendly Space launched a joint report on artificial intelligence in the humanitarian sector: How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential.
Drawing on insights from 2,539 survey respondents across 144 countries and territories, coupled with deep-dive interviews, this was the world’s first baseline study of AI adoption across the humanitarian sector.
In the next phase of our initiative, we’re releasing a six-episode podcast series featuring expert guests to build on the themes emerging from the research, including community-centred research, implementation barriers, governance and regulatory frameworks, cultural and ethical considerations, localisation, learning and training, and more. This first series will have a particular focus on the Global South and Africa due to high levels of engagement in this research from these regions.
The platform aims to promote inclusive, accessible conversations and knowledge exchange to support shared progress in the development of ethical, contextually appropriate humanitarian AI.