
Humanitarian AI podcast series | Bridging implementation gaps: from AI literacy to localisation

How can humanitarian organisations bridge the gap between individual AI experimentation and organisational adoption?

While 9 out of 10 humanitarians are using AI tools in their work, only 8% say that AI is widely integrated across their organisation, with the majority in the early experimentation or piloting phase. What are the implications of this picture for humanitarian organisations?

In this second instalment of our new six-part humanitarian AI podcast series, we’re delighted to welcome Michael Tjalve to the show to share his expert perspectives.

Ka Man Parkinson sits down with Michael to explore how humanitarian organisations can meaningfully bridge this implementation gap, moving from experimentation and piloting to the deployment of meaningful, contextualised, fit-for-purpose AI solutions.

Michael holds more than two decades of interdisciplinary expertise in humanitarian AI, working at the intersection of technology, social impact and the humanitarian sector. He is founder of Humanitarian AI Advisory and co-founder of the RootsAI Foundation, with a career spanning roles in academia and the UN; he was formerly Chief AI Architect at Microsoft Philanthropies.

In this deep dive conversation, Michael discusses implementation barriers, practical steps and possibilities, including:

  • Why AI policy development should be every organisation’s first step: Michael shares practical guidance on creating organisational frameworks that address concerns and risks before technical implementation
  • Looking beyond ChatGPT and large language models for contextualised humanitarian solutions: Hear how emerging innovations like small language models could transform humanitarian contexts by addressing infrastructure, connectivity and security challenges
  • Language barriers and broadening AI access: Understanding how training systems on diverse languages could close digital divides and extend technology benefits to communities worldwide
  • Community questions: Michael tackles community questions from our research launch event, addressing practical concerns about costs, security, and organisational readiness

Podcast promo image for the Humanitarian Leadership Academy’s Humanitarian AI series, featuring Michael Tjalve. Text reads: “Bridging implementation gaps: from AI literacy to localisation.”
Now streaming on Spotify, Apple Podcasts, Amazon Music, Buzzsprout and more!

Keywords: humanitarian AI implementation, AI policy development, small language models, language access, organisational AI adoption, cultural contextualisation, infrastructure challenges, community engagement, pilot-to-practice scaling, data sovereignty, AI literacy, responsible AI deployment.

Want to learn more? Read a Q&A article with Michael

Who should tune in to this conversation

These insights are essential listening for humanitarian organisations, technologists and donors navigating AI and digital transformation challenges. This conversation provides valuable guidance for practitioners, organisational leaders, programme managers, and decision-makers exploring AI adoption.

Michael makes a compelling case for how AI could be developed in line with humanitarian principles and localisation through coordinated sector action and investment, offering insights for technologists, digital transformation teams, funders, and policymakers working at the intersection of technology and humanitarian action.

Episode chapters

00:00: Chapter 1: Introduction
02:55: Chapter 2: Tech and academia meets social impact: Michael’s transition into humanitarian AI
08:24: Chapter 3: Closing implementation gaps: moving beyond pilot projects through learning
17:35: Chapter 4: Overcoming infrastructure challenges: small language models and emerging technologies
24:32: Chapter 5: Contextualisation and localisation of AI solutions: addressing the language and cultural learning gaps
36:31: Chapter 6: Balancing commercial tools with purpose-built solutions and weighing up the cost of error
47:32: Chapter 7: Michael answers your questions: humanitarian community Q&A
61:32: Chapter 8: Blind spots to address to accelerate shared progress

Glossary of terms

We’ve included definitions of some technical terms used during this podcast discussion for those who are unfamiliar with or new to this topic.

Agentic AI – AI systems that can act autonomously to achieve goals without constant human guidance

AI (Artificial intelligence) – Computer systems that can perform tasks typically associated with human intelligence

AI compute – The computational resources required for artificial intelligence systems to perform tasks, such as processing data, training machine learning models, and making predictions

AI policy – Organisational guidelines defining how AI tools can be used and what approval processes are required

Algorithm – A set of rules or instructions that computers follow to solve problems or make decisions

Chatbot – AI-powered software that can have conversations with users through text or voice

Cloud-based AI – AI services that run on internet servers rather than local devices

Conversational interface – A way of interacting with technology using natural language, like talking to ChatGPT

Cost of Error – Framework for understanding the consequences when AI systems make mistakes

CVA (Cash and Voucher Assistance) – Humanitarian programmes that provide money or vouchers to affected populations rather than goods directly

Data leakage – When private information accidentally appears in AI outputs

Data sovereignty – The principle that organisations should control where their data is stored and how it’s used

Foundation models – Large-scale AI models trained on broad datasets that can be adapted for different uses

Generative AI – AI systems like ChatGPT that can create new content like text or images

Large language models (LLMs) – Very large AI systems like ChatGPT that work with text and require significant computing power

Machine learning – AI approach where computers learn patterns from data to make decisions

Participatory AI – Approach involving affected communities in AI system design and development

Small language models (SLMs) – Smaller AI systems that can run locally on devices like phones without an internet connection

Training data – Information used to teach AI systems how to perform specific tasks

WASH – Water, Sanitation and Hygiene programmes in humanitarian work

Episode transcript

This podcast transcript was generated using automated tools. While efforts have been made to check its accuracy, minor errors or omissions may remain.

00:00: Chapter 1: Introduction

[Intro music]

[Ka Man, voiceover]: Welcome to Fresh Humanitarian Perspectives, the podcast brought to you by the Humanitarian Leadership Academy.

[Music changes]

[Ka Man, voiceover]:
Global expert voices on humanitarian artificial intelligence.

I’m Ka Man Parkinson, Communications and Marketing Lead at the Humanitarian Leadership Academy and co-lead of our report released in August: ‘How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential’ produced in partnership with Data Friendly Space.

In this new six-part podcast series, we’re exploring expert views on the research itself and charting possible pathways forward together.

[Music changes]

[Michael, soundbite]: Given how many people are using AI today, if you do just one thing, create an AI policy that fits your unique needs. That, I have seen several times, can address a lot of the concerns, a lot of the risks, a lot of the doubt that just naturally come up when you start to engage with AI…

[Michael, soundbite]: I really believe that the biggest obstacle to equitable outcomes for modern AI is language access. We’ll never get to really equitable outcomes without a dedicated focus on access…

[Ka Man, voiceover]: Today we’re taking a global exploration of AI implementation challenges across the humanitarian sector. Our recent research revealed that whilst 9 out of 10 humanitarians have used or are using AI tools to support their work, driven by individual use of tools like ChatGPT, only 8% report organisation-wide integration, with most remaining in experimentation or piloting phases. So what are the potential implications, risks and opportunities of this implementation gap?

To find out, I recently had the honour of sitting down with Michael Tjalve, founder of Humanitarian AI Advisory and co-founder of Roots AI, to discuss this.

With over two decades of interdisciplinary expertise as former Chief AI Architect at Microsoft Philanthropies, and roles within academia and the UN, Michael brings an insider perspective to these implementation challenges.

He advocates for informed, measured and equitable pathways to technological adoption, while also sharing specific guidance on getting started, from AI policy templates to understanding emerging technologies like small language models.

Plus, Michael also tackles community questions from our report launch event.

[Music fades]

***

02:55: Chapter 2: Tech and academia meets social impact: Michael’s transition into humanitarian AI

Ka Man: Hi Michael, welcome to the podcast!

Michael: Hello, hello, thanks for having me!

Ka Man: I feel really lucky to have this time and space to talk with you today about humanitarian AI.

So, before we dive into the more technical questions and all about the implementation challenges in the sector, could you tell us a little bit about yourself, your journey into humanitarian AI, perhaps sharing any key milestones or perhaps turning points in your career? And what really drives you to do what you do today?

Michael: All right, yes, absolutely. Yes, so thanks again for having me. Looking forward to the discussion. So, I have been working on artificial intelligence for essentially my entire career, a little over 25 years now. Most of that time in the tech sector, also in academia. In the past 10 years or so, I was focusing less on improving the underlying AI capabilities and more on how they were used in the real world. And through that, focusing more and more on the societal impact, both the good and the bad.

I’ve also been assistant professor at University of Washington for the past 15 years in the Linguistics Department, where my most recent focus has been teaching AI in the humanitarian sector.

And so, up until last year, and for many years up to that point, I was working for Microsoft Philanthropies, working very closely with a broad range of nonprofits, NGOs, UN agencies. And it was really through discussions there that it became clearer and clearer that there was a potential need to be filled here.

So I was talking with various organisations, and there’s been a lot of excitement about AI across the sector. There’s been a lot of concern, but more than anything, there’s been a lack of understanding of how to get started, and even which questions to ask.

And so I was at an event early last year at a place in England called Wilton Park, where we were sort of sequestered for three days at this beautiful old English manor. It was 40, 50 humanitarian leaders from all around the world, and one tech sector representative – that was me. So I was sitting in the hot seat for some of the questions, but it was really, really valuable, and I came out of that really understanding and having validated what I’d been noodling on for a little bit, that there’s a need here. I think I can provide some value with what I’ve learned over the years.

And so last year, I left the tech sector to establish this one-person gig I have here, Humanitarian AI Advisory, where I work with a range of humanitarian organisations and agencies and individuals to help them understand how to effectively use AI, how to take those first steps, how to get started, and then follow them along the way.

And, it’s been daunting. It’s the first time since I started my career without an employer, or basic things like a team or you know, salary. So building that up from the ground has been all kinds of exciting. But I’m very happy with the change, and I feel very fortunate to be able to work today on some very exciting projects at the intersection of AI and humanitarian action.

Ka Man: Thanks, Michael. What an interesting journey you’ve had. I liked how you described your sort of Wilton Park meeting moment as maybe a bit of an epiphany for you. It was like a microcosm of – you’re replicating that now, but as your own professional trajectory, where you’re that voice in the room saying, ‘what about this, and these are what we need to consider.’

Michael: Yes.

Ka Man: And so I’m just curious, do you feel, I mean, obviously, Microsoft is a huge, major player in this space. And so do you feel, as an individual, moving away from that, that you can, you feel like you’re at the stage where you can make more meaningful impact as an individual with that freedom and flexibility?

Michael: I do. Obviously, it’s in a different way. It’s more face-to-face, it’s more direct impact, if you will, on both individuals and their organisations. When you work at a large company like Microsoft, of course, you have immediate scale to millions of users with almost anything you’re working on, so that comes with a different kind of responsibility, just thinking about the scale of the impact. Whereas now, I work from the other side, if you will, making sure that the technologies that come out of, for example, the tech sector, but also other places that AI models and AI capabilities may be coming from, are used in a way that’s both effective and safe.

08:24: Chapter 3: Closing implementation gaps: moving beyond pilot projects through learning

Ka Man: Thank you. So there’s so much I want to ask you. Obviously, I could, we could produce probably a whole series of podcasts [laughs] just with me asking you questions, but today we decided to focus specifically on implementation.

So I have a few questions for you around organisational implementation barriers and to gain your thoughts and expertise as to how that gap may be bridged, if an organisation feels that AI deployment is the right and appropriate route for them.

So, firstly, I wanted to look at the organisational level, because in our research, we found that despite widespread individual AI usage, only around a fifth of organisations in the humanitarian sector have formal AI policies. And in terms of their actual projects, AI-driven projects, the majority describe themselves as in the early experimentation and piloting phase. So I’m just wondering, does this align with what you’re seeing and experiencing?

Michael: Yes, first of all, let me just say that the research that you and the team have done, it’s been really, really good to see that. I was lucky to attend the launch event when you released the report. And there’s a lot of good data in there, and it’s good to see that coming to good use.

So, in terms of trends, I think there are two separate things at play here. So I think, for example, the discrepancy between the high use of AI for experimentation and piloting versus the low percentage of full implementation, that is consistent with what I’ve seen throughout the sector. Part of that is structural, it’s easy to get funding for pilots. It’s important to start with a pilot, and then learn from that, and then you often generate a lot of interest, you raise awareness across the organisation, across the community about what the technology can do. Hopefully you share some lessons learned, both if it’s good and if it’s bad, so that others can build upon it.

But deep and lasting impact in the communities we serve doesn’t really come from pilots, right? It comes from scaling up what you learned from those pilots, and that’s often a lot harder to get funding for. So I think that’s one factor.

Another factor is I think it’s simply a reflection of what the technology is today. I think that it’s less about fewer pilots graduating to full implementations, and it’s more about a spike in AI use of individuals using ChatGPT or something like that, that came out three years ago, and generative AI becoming broadly accessible since then.

I think that generative AI has completely changed the landscape by significantly lowering the barriers for adoption, which means that you get a lot more people experimenting, which I think is a really good thing, as long as that is done responsibly.

And so I think, given how many people are using AI today, my recommendation would typically be that if you do just one thing with AI for your organisation, it should be to create an AI policy that fits your unique needs. That, I have seen several times, can address a lot of the concerns, a lot of the risks, a lot of the doubt that just naturally come up when you start to engage with AI.

There’s a learning gap. It can be challenging to know what you can do with AI capabilities, given your unique work context, and I’ve been fortunate to play a small role in facilitating this, but I think that we still have a lot more to do in that aspect, to provide easily accessible and continuous skilling opportunities to make the most out of these technologies.

Ka Man: Thanks, Michael. A lot of people talk about the opacity of algorithms, like they don’t understand how they work, and they need to get under the bonnet and try and understand that. But the majority of people that I’m connecting with are not those who are actually building the algorithms. So I wanted to ask you, as somebody who worked directly in this space, could you give us a little bit of insight, or just a tip, that’s useful for us as laypeople to understand about AI algorithms? I know that’s quite a broad question, but yeah, if there’s something that you think is useful to highlight to non-experts about how algorithms work, I’d just be interested to hear any reflections on that.

Michael: You’re absolutely right, that’s a big topic, a big question, and one that’s hard to narrow down to just a sentence or two. But I think it’s important to keep in mind that even though it’s often considered a black box, whether that is a black box that gives you anxiety, or it’s a black box that feels like magic, and you expect it to do all kinds of wonderful things without understanding how it works, it is important to have some basic level of understanding of how AI works.

Machine learning, which is the most widely used approach or methodology for AI, learns from the data it has seen, and so it will reproduce, in some way, data that it has seen before. That’s what we see with generative AI: even though it sounds and looks convincingly human, it’s important to keep in mind that for every word it produces, it’s just the most likely next word or next action based on what it has seen before.

And so, if you train a model on some kind of data, and you use it for a different environment, different scenario, there is a discrepancy between what it has learned and what you’re asking it to do. And that means that the quality or the robustness is going to suffer from that. The bigger the distance between what it has learned from and what you ask it to do, the less robust it’ll be to the task.
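To make that “most likely next word” idea concrete, here is a deliberately tiny sketch in Python. It is a toy bigram counter, not how production language models are actually built, and the corpus and example words are hypothetical; the point is simply that a system trained only on text it has seen can continue familiar phrases, but has nothing useful to offer for inputs it has never encountered, which is the robustness gap described above.

```python
# Toy illustration only: a bigram "most likely next word" counter.
# The corpus and examples are invented; real LLMs differ hugely in scale and
# mechanism, but the core idea of predicting the most likely continuation
# from previously seen data is the same.
from collections import Counter, defaultdict

corpus = (
    "the cash distribution starts tomorrow . "
    "the cash distribution was delayed . "
    "the water point opens tomorrow ."
).split()

# Count which word follows which word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the most frequent continuation seen in training, if any."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("cash"))     # 'distribution' (seen twice in training)
print(most_likely_next("drought"))  # None (never seen, so no useful answer)
```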

But yes, I think you’re right, there’s a lot of available training capabilities, skilling capabilities. It can be hard to get a sense of which ones are good, which ones to focus on. I can talk about, for example, the SAFE AI initiative we’re working on, which helps, or aims to address some of those challenges, the getting started part.

As you know all too well, the sector is entering this discussion at a really unique moment in time where two highly impactful and far-reaching factors are intersecting. On one side, the humanitarian sector is underwater in terms of its ability to effectively address the growing humanitarian needs, and on the other side, modern AI has matured to the point where it is highly capable of playing a central role in the path forward for many of the challenges.

And so one of the projects I’m very excited about is the SAFE AI Initiative, which is a UK Foreign Office-funded initiative to create a framework for the effective and safe use of AI. We’re building out this journey, from preconception and identifying the need and the use case onwards, with tools and guidance to help people along the way. So that, I think, is something that can help demystify what AI is, how it works, what it can do, understanding its potential and its limitations.

Ka Man: Thanks, Michael. Yeah, I’ve been looking at the CDAC Network website, and having a look at some of the resources available already. I know on the website it’s still in beta phase, but you’re actively inviting feedback from people who are using that, so that’s really good to see. And I was just curious as to what the timescales are for the completion of the SAFE AI Initiative project and for the outputs all to be ready?

Michael: Yeah, so phase one, which we’ve currently scoped out, will complete sometime later this year. We hope to be able to continue that and expand on that, because there is much more that needs to be done so as to support the individual organisations wherever they are on their AI journey. So we have sort of a phased approach.

17:35: Chapter 4: Overcoming infrastructure challenges: small language models and emerging technologies

Ka Man: Great, thank you. So I’d like to come to infrastructure challenges, because this is a key theme that emerged from our research. We saw that this is a major barrier to AI implementation by humanitarians around the world, particularly those working or living in contexts affected by protracted crises, where getting online itself can present challenges.

So, for example, one interviewee we spoke to in Lebanon developed his own closed system to address both connectivity and data security challenges, so that was really interesting to hear about, and we documented that in the report.

So I was just wondering, in your view, what kinds of AI solutions show the most promise for these kinds of environments, particularly those that aren’t highly contingent on resources and specialist expertise to maintain?

Michael: Yes, I think what you described, and I remember the example. That’s kind of a unique example where someone is able to build that up from the ground, and it’s hard to rely on that, obviously.

There are a number of sort of dimensions to this here. One thing is just data security, data privacy concerns about how the data is used, what happens to data that the model is interacting with. And then, of course, also how the model is trained. And data sovereignty, making sure that you control where the data is stored as well.

So I think one approach which I see as very promising, and which has already shown a lot of good potential, is SLMs, or small language models. There is a lot of focus in today’s debate, and rightfully so, on generative AI in the context of large language models, which, as the name indicates, are very, very large. Compared to the other types of AI models we saw three or five years ago, these models are significantly larger. That also means there’s an environmental factor to this: they take up a lot of compute, and they have to process a lot of data, particularly when they get trained. That takes up a lot of energy and consumes a lot of water to cool down the data centres, so there’s a very real carbon footprint to large language models. The first generation of these models, which came out a little less than three years ago, was not as efficient; we now have significantly more efficient models, still large language models, that take up less compute. Still, they are very large.

I think one area I’ve seen a lot of potential is small language models, which are essentially a smaller version of the large language models that can do most of what the large language models can do. They are often trained from a large language model, but having a small language model or an SLM means that you don’t need to have it running in the cloud. You can have it running on a local device, like a phone, for example. That also potentially addresses some of the security concerns: if the model lives in the same place as the data, you don’t have the same kind of leakage, or you don’t expose yourself to hacking of systems in the same way.

SLMs are one thing that can really help with data security, data sovereignty, but also, obviously, connectivity. The model lives on the device, so you’re not dependent on being connected to a cloud-based AI service.
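As a concrete, hedged illustration of the on-device point: once a small model’s weights have been downloaded, text generation can run entirely on the local machine with no call to a cloud service. The sketch below uses the Hugging Face transformers library, with distilgpt2 purely as a stand-in small model; the model choice, prompt and parameters are illustrative assumptions, not a recommendation for humanitarian use.

```python
# Illustrative sketch only: running a small language model locally with the
# Hugging Face `transformers` library. `distilgpt2` is a stand-in small model;
# its output quality is poor, and a real deployment would choose a model
# suited to its languages, tasks and devices.
from transformers import pipeline

# The model weights are downloaded once; after that, generation runs on the
# local machine with no call to a cloud-based AI service.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Checklist for setting up a temporary water point:"
result = generator(prompt, max_new_tokens=40, do_sample=False)

print(result[0]["generated_text"])
```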

Ka Man: That’s really interesting, thank you, Michael. I only heard the term small language models for the first time in recent weeks. And the more I hear about it, the more I think this sounds like it really has a lot of potential and applicability to the humanitarian context. So I was wondering, how would this look in practice? Would it have a conversational interface like ChatGPT, and could it be trained on very specific domains, so, for example, WASH or cash and voucher assistance, whatever programmatic area we’re looking at in the humanitarian context, and then used on, say, mobile devices? Could it look as straightforward from a user perspective as that?

Michael: It certainly could, yes. And that’s both one of the benefits, but also kind of what’s required. The LLMs can do, the foundation models can do so many different things. They’re trained broadly to do a very broad range of tasks. But if you don’t really need all of those tasks, if you can narrow down the search space, or the types of tasks that you want it to do, you don’t really need all of what the data offers. So if you’re okay with not having it excel at maths, philosophy, or something like that, and let’s say nine out of 10 other broad domains, then you can narrow it down to a more specific focus. So it can still be very conversational, but it just won’t be able to do everything broadly the way an LLM can.

Ka Man: It’s really interesting, because the more I hear about it, the more I wonder why LLMs are so large [laughs]. It doesn’t seem…I was wondering myself, is this because, obviously, developers like OpenAI, they want users, so they want to make it as accessible and universal as possible to get people on their systems, and then loyal users. But the more I hear about the concerns around climate impacts, etc, and I think more contextualised, more efficient, less resource-intensive approaches sound like, not just in the humanitarian sector, but just generally, but with particular relevance to our sector, it seems like a sensible and practical way forward?

Michael: Absolutely. The contextualisation makes it more effective, and because it’s more targeted, it also has a better chance of producing the right outputs, if you will.

24:44: Chapter 5: Contextualisation and localisation of AI solutions: addressing the language and cultural learning gaps

Ka Man: Thank you. So in terms of contextualisation, that links nicely to the next question I have around cultural contextualisation. Again, this was a theme that came across a lot through the research, particularly respondents from Sub-Saharan Africa, who formed 46% of our respondents.

So one of the use cases documented a representative from an INGO in Kenya, where he was talking about chatbots that required community facilitators to actually go into the community, show users how to interact with these chatbots and to build trust and acceptance, because he says in some of these communities, there was a lot of distrust around what were these devices for, why are we using it? He said some people thought they were being used to monitor us, so there had to be that interlocutor, if you like, to be there, make sure that there was that trust and understanding.

So I just wanted to ask you, what do you think represents good practice, from your perspective, in terms of approaching cultural contextualisation when implementing AI tools, especially given the kinds of concerns that were raised through our research about AI systems lacking local knowledge, such as languages and local norms?

Michael: Yeah, this is such an important and timely question. I really believe that the biggest obstacle to equitable outcomes for modern AI is language access. Generative AI works incredibly well in English and just a few more languages. You don’t have to get very far down the list of languages, major languages across the world before the quality really starts to drop off.

That means that the large majority of the world population sees absolutely no benefit from modern AI, which in turn further deepens existing inequities throughout the world, right? I think we’ll never get to really equitable outcomes without a dedicated focus on access.

We recently launched the Roots AI Foundation, which is a new nonprofit that operates in this space, and our goal is to bring the value of modern AI to languages and to communities that don’t have easy access to them today. This involves local community-built AI that helps counter bias and ensure representation in AI models, that helps preserve endangered languages, and that focuses on culturally grounded AI tools. I think it really is important to empower locally, to make sure that, where possible, models are locally built and built on communal wisdom and knowledge.

Also, as part of the SAFE AI project, we focus deeply on participatory AI, and how to co-design with affected communities. To your point about trust, this kind of approach not only builds trust with the people who end up using the AI solution, or who are directly or indirectly impacted by whatever is produced by the AI solutions. That’s one side. It also helps ensure that we avoid blind spots, or identify both the opportunities and the risks by deploying AI capabilities into an existing ecosystem.

Bringing the community into the discussion in a way that is meaningful, not just a checkbox, not just a shallow consultation, but really engaging with them in a way that ensures that the outcomes, the models, the solutions are built with and for those communities. I think that’s absolutely critical, and I’m encouraged to see a lot of effort, a lot of initiatives happening in this space right now. You mentioned many of your respondents were from Sub-Saharan Africa, and there is a lot of focus right now on just the broad diversity of languages in Africa, and enabling training and enabling AI models for African languages that don’t have access to them today. It’s a very active space.

Ka Man: Thanks, Michael. In terms of picking up on the point around language, again, this is a very broad question, but I’m just intrigued. What does it take to actually train a model on a certain language? Because if there’s lots of written documentation in different languages, what does it actually take to train the model? Is it, are there big barriers? Like, what are the blockers to this happening faster?

Michael: You really put a finger on the nail with the key challenge, and actually also historically, the commercial reason for why those languages are not supported today. There is an upfront cost, both in time and effort and just money, to develop those models, AI models for more languages. And often the private sector calculation just means that those are typically not prioritised just from lack of commercial value.

But yes, with machine learning, there is no AI without data, and the quality and the robustness of any AI model is very directly linked to the quality of the data. What kind of data you need and how much, it really depends on what you want to build. The type of model that requires the most data is the type of model we see in, like, ChatGPT, the GPT-style models, foundation models.

But there are many other kinds of AI models you can build. You can build a basic machine translation model, which requires paired text in two languages. And if you have enough samples of that data, you can train a model and it learns how to go from one to the other and back. If you want to add support for, say, speech recognition in a language, then you need some audio data. You need some data of people saying certain things, and you need quite a bit of data.

If you want a basic chatbot, language understanding, conversational model that’s not necessarily fully capable like a ChatGPT-type model, then you still need a fairly broad range of text data, but you can deal with a lot less. So it depends on what you want to build. Often for low-resource languages, or for languages that have been historically deprioritised, just getting the data is the big challenge, and that’s going to be very directly informing what kind of model you’re building. Essentially just what data you have access to, or you are able to build or acquire. That’s one of the challenges we’re very directly hoping to address with the Roots AI Foundation.
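To give a concrete sense of what paired text in two languages looks like in practice, here is a minimal, hypothetical sketch: a handful of invented English–French pairs loaded into a Hugging Face dataset, a common starting point before training or fine-tuning a translation model. The sentences, column names and library choice are illustrative assumptions; real systems need many thousands to millions of such pairs, which is exactly the data bottleneck described above for low-resource languages.

```python
# Toy parallel corpus: each entry is (source sentence, target sentence).
# The pairs are invented for illustration; a real training set needs many
# thousands to millions of such pairs, which is the bottleneck for
# low-resource languages.
from datasets import Dataset

parallel_pairs = [
    ("Where is the nearest clinic?", "Où est la clinique la plus proche ?"),
    ("The water point opens at eight.", "Le point d'eau ouvre à huit heures."),
    ("Distribution starts tomorrow.", "La distribution commence demain."),
]

# Load the pairs into a Hugging Face dataset, a common format for feeding
# paired text to a translation model during training or fine-tuning.
ds = Dataset.from_dict({
    "source": [src for src, _ in parallel_pairs],
    "target": [tgt for _, tgt in parallel_pairs],
})

print(ds)     # shows the two columns and the number of rows
print(ds[0])  # first source/target pair
```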

Ka Man: I’m curious, because obviously there’s people like you with specialised skill sets, and you’re dedicated to this particular cause. It’s obviously something that you do not just professionally, but something that is your mission. We would hope around the world, globally, there are enough people who’ve got this will and desire to advance this mission. But is funding a big blocker? Because I almost feel like there needs to be a business case made to access funding, but when it’s in a humanitarian context, there may be a challenge making a compelling case to funders if they’re, say, focused more on profit considerations and commercial applications over, you know, social impact. That, to me, is something that I’m really grappling with in this big conundrum of humanitarian AI.

Michael: Yes, funding is obviously, to a large degree, where things start and end with the sector. We’ve seen the impact of cuts to funding this year. We’ve seen the devastating impact of programmes that are so desperately needed having to be shut down. As part of the Humanitarian Reset, one of the key focus areas is obviously also to reduce waste, to avoid duplication, and so I think naturally, funders, donors, they need to prioritise, obviously, getting the most out of what they’re funding. And that means reducing risks in, for example, AI implementation.

One of the things we do with the SAFE AI project is exactly that. It’s really to help reduce the risk, help demystify the process, make sure that people know where they’re going and how to get there, and that they have the appropriate level of support on their journey. But also, incentivise or encourage more sharing across the sector, of what is being built. And that not just in terms of describing use cases, case studies, lessons learned, findings, things like that, though that is absolutely critically important.

I think a lot of times, people struggle with understanding where they can use AI, connecting the capabilities of the technology with what they’re working on on a given day, how AI can play a role here, so seeing what others in the sector, organisations that are similar to them, seeing what they have done, what worked and what didn’t, is very, very valuable. It really helps to demystify what you can get from the technology.

Obviously, funders will want to reduce the risks and increase, enhance the chances of successful outcomes, and there are a number of different ways of doing that, right? So one thing is just having such a framework or a journey to help people along the way. There’s a lot of training and skilling needed, but then, as you point to as well, there’s also just understanding which tools are available to you and how to pick the most appropriate ones.

Ka Man: So it sounds like increased understanding of possible pathways and frameworks to increase confidence, and hopefully the confidence of donors and funders to support the development of the tools in the way that you’ve just mentioned, such as contextualising through, like training on different languages. Is that what you mean?

Michael: Yes, absolutely.

Ka Man: OK great, that is a priority, like a takeaway, that this is something that people can focus on, specifically, in terms of garnering support, broader support for the development of ethical and contextually-appropriate humanitarian AI.

36:43: Chapter 6: Balancing commercial tools with purpose-built solutions and weighing up the cost of error

Ka Man: So, linking on to commercial considerations and balancing those with application in the humanitarian sector: our research found that around seven out of 10 humanitarians who are using AI are using commercial tools, like ChatGPT. So they’re not necessarily officially sanctioned on organisational licences, they’re just using them as individuals.

I wondered, from your perspective, when are these types of tools sufficient for humanitarian needs? And when might an organisation decide what’s the tipping point that they need a specialised, purpose-built solution?

Michael: I’ve heard from many organisations that something like ChatGPT comes up again and again. Whether it’s sanctioned or not, whether it’s something that the IT department has recommended and provided training for, people are using it. It just provides a lot of value. And when you are underwater with what you need to do, whether it’s summarising documents, or it might just be translating documents to give you access to information that you otherwise wouldn’t have easy access to, it provides a lot of value. So going back to my point before, that’s why I think if you do just one thing, it should be to create your AI policy, so that it’s clear to everybody what you can use and for what you can use it, and if you have a good reason to use something different, or to use the existing tools for a different kind of use, including different kinds of data, that there is a clear process for how to get that added to the list of things that can be used.

Whether you use out-of-the-box tools, like cloud-based AI services or a custom-trained model, it really depends entirely on your use case, on what you want to do, what the specific solution or scenario you’re working on. Generative AI models, foundation models, LLMs, they work great across a wide range of use cases and tasks. It’s very easy to get started with using them, due to their natural language and very intuitive interface.

Incidentally, my entry point into working with AI these many years ago was through linguistics, and I’m still very much a linguist at heart, which is one of the reasons I’m so excited about the work we’re doing with the Roots AI Foundation. I also find it exciting that today, because of these AI models, that language is now the primary interface to technology and to knowledge. I find that all kinds of exciting.

However, a general-purpose model that tries to do everything isn’t necessarily the best option for all tasks and all use cases. For example, I’m currently working on a project that aims to use AI to assist with humanitarian demining. The specific task is to train an AI model based on photos from drone footage, and then to automatically identify landmines or other explosive ordnance that can be seen on or in the ground. A general-purpose large language model would fail miserably at that task. So it really comes down to understanding what the task is, what a fit-for-purpose model is, what’s an appropriate model for the task.

Another aspect of selecting the right kind of model involves understanding the risks that are related to AI failures. As we’ve discussed, it’s incredible what the AI models can produce today, but they do make mistakes. They will always make mistakes. You should always consider that as not just a possibility, but just as a fact. Somewhere in the process, it will make a mistake. But understanding how and when they can happen, so that you can practically implement mitigation strategies, is important, but also to understand, for example, what the cost of those errors may be.

For example, if you use generative AI to summarise a report, so you can easily consume the information, or you use generative AI to generate the annual report for your donors, an error in AI output there is bad, but it’s not necessarily terrible. Whereas, if you use an AI model to, like the example I just described, if you use an AI model to recommend where a field operator for landmine clearance should be deployed, an error in an AI model output can be catastrophic.

Understanding this concept of cost of error means that you can decide in the same process where you can go ahead and execute on the AI output, versus when the AI-generated output should only be considered a recommendation for a human expert to evaluate and decide on. It’s maybe not a very satisfying answer, but it really comes down to the use case to understand what kind of model gets you the best outcomes.

Ka Man: Thanks, Michael. I’ve got a couple of questions to pick up on from what you’ve just said. I’m probably trying to ask you something too specific, but just around that acceptance of risk. Like you say, if a document goes out with something that’s just slightly misrepresented, it’s not going to have huge consequences. Communications team might have to issue, you know, an update.

Michael: A retraction, yes.

Ka Man: Yeah, exactly. But with regards to those real-world implications of those systems that you’ve just described, is there, in the sector, maybe outside of the humanitarian sector, maybe in military or other applications, are there sector norms for what’s an accepted risk threshold? I don’t know if there are, like, numerical values, I’m obviously not trying to push you to say anything like that, but is that something that exists, or is it literally on a case-by-case basis as to what the actual situation is?

Michael: It’s a good question, and it’s very much a philosophical, a sociocultural question. There are some attempts to come up with some metric or some way of understanding what acceptable risk may mean. I would say, or I would argue, that risks in the humanitarian sector, or just by the nature of where we’re operating and the communities we serve, they’re just shifted upwards. Obviously, you can have significant negative consequences by applying AI, say, in the medical sector or other sectors, but the humanitarian sector, just by definition, working with vulnerable communities, the potential negative consequences are, on average, just so much higher.

The AI Act from the European Union has some notion or some description of levels of risk, including unacceptable risks. It’s not clear-cut, necessarily, what unacceptable means, and it will come down to cultural norms, it will come down to the specific use case. It is hard to describe in a way that’s binary, across the board, but there is some effort in this space, and I think a lot more is needed. But I think it’s a good starting point, but it needs to come down to, like we talked about, for contextualisation, what does the risk mean in this specific context?

Ka Man: That’s very interesting. Thank you, Michael. And I did say two things, didn’t I? My second question, hopefully this is a more straightforward one. You’ve mentioned a couple of times about, create an AI policy, and obviously that’s something that emerged in the research. How can people go about this quite… in a quite simple way? Is a template from a Google search, is that a good first step, or should it be much more built from scratch for each organisation?

Michael: I think starting with a template is a really good idea, especially because many of those templates will have questions that you likely wouldn’t have thought about, because they’re just not necessarily intuitively something you would think about as important to consider or to get agreement on. So starting from a template and then adapting it to your unique needs, I think is a good plan.

That does mean that you can add new sections to… if you build off a template, add new sections, or there may be sections that are irrelevant, that don’t really apply. But it should also be considered a living document, because things change. New technologies will become available. If you had an AI policy that you finished three years ago, it wouldn’t really have much about generative AI, and if you had one from, let’s say, just a year ago, maybe agentic AI wasn’t something you would need to think about. So it needs to be a living document, it needs to be something that is communicated broadly across the organisation so everybody knows what it is, how to use it, where it is, and what’s the process to get it updated.

Ka Man: It sounds like there’s a real need for more organisation-wide conversations around AI just generally, and embedding that into organisational meetings and just thinking. And that will inform and shape the policies.

And the research did show that was a real barrier, that people didn’t necessarily feel able or even safe to talk about AI use. There’s some stigma attached, or, you know, just sensitive, so people kind of stuck in their own lane and did what they wanted to do without that broader team consensus. So I hope that this conversation will help encourage teams to start that dialogue and, like you say, work on that living document. Doesn’t have to be, you know, chapter and verse, it’s something that can be evolved over time and refined, but it has to reflect the actual organisational realities.

[Music]

47:42: Chapter 7: Michael answers your questions: humanitarian community Q&A

Ka Man: So we’re just going to switch gears a little bit. So thank you so much for attending our online launch event for the report last month. It was really good to have you there. And as you’ll know, we had a lot of questions in the chat and in the Q&A, and we weren’t able to cover them all, because I think we received over 100 in total.

But what we thought would be really good is to roll the questions into these conversations so that the dialogue is shaped by our community, because we agree that community involvement is so crucial in the development of humanitarian AI. So some of these questions you might be able to speak to directly. Some of them might not be quite as applicable, because obviously they were raised in the context of that event. But any signposting that you might want to share would be really useful as well.

So the first question is around humanitarian AI applications, and this question was from Lutamyo, and they say, should we say that AI is more important in administrative planning for research than the actual humanitarian action, i.e. use of voucher and cash distribution?

Michael: Good question, and I love that it comes down to the very, very specific use cases. I don’t think the presence or the integration of AI capabilities needs to shift the priority of individual functions across the sector, so you still need focus on administrative planning for research, you still need the actual humanitarian action that happens in-country. So I think it… of course, it should not replace any of those areas.

The hands-on humanitarian action tasks and projects are equally critical, with and without AI. Ideally, and hopefully that’s what we’re going to be enabling, it should augment human expertise, human experience, and ingenuity, rather than replace it.

There are obviously some tasks that are easier to automate, or to facilitate. But that means also that if you’re in a situation where there’s a human process that’s a bottleneck, for example, there’s only so much that can happen with a human resource, let’s say it’s one person that’s responsible for some flow of information. If that becomes a bottleneck and AI can help reduce that issue, reduce that bottleneck by having more information or knowledge flow, that can open up downstream for more services, even more staff needs, and you can serve more people or scale up your mission. So augmenting what is already happening, making it more efficient, is what’s going to lead to the best outcomes from the use of AI.

Ka Man: Thank you. So the next question is around learning. An anonymous person has asked, could you share practical tips on how social impact organisations can discover useful AI tools? There’s so much out there, it’s hard to know where to start.

Michael: It is a question I’ve seen or come across often, and one thing I spend a lot of time on in person, face-to-face, with the various organisations is really understanding their context, understanding their use case, understanding what they want to do, their mission, their aspirations and why they believe AI could provide some value.

Hopefully this discussion will help address that, or move people along the line for that specific question. I think a lot of our focus with the SAFE AI initiative aims to address that specific challenge. But it often requires a somewhat in-depth brainstorming discussion to connect the two sides, to make sure that the organisations understand what AI can do, how it works, how it makes decisions, how it makes mistakes.

Understanding what the specific context is, what the need is, what the use case is. And then I think, okay, this type of AI capability can help very explicitly here. Making that connection, connecting the dots between those two worlds, those two key factors, can be challenging, and having an in-depth brainstorming session on that typically surfaces something new, something surprising, but it is required to get the most out of the models.

Ka Man: Thank you. The next question comes from Farouk, who actually asked a question around localisation, and we have kind of touched on this in our discussions so far, but I’ll ask the question to see if you have any other points to share. So Farouk asks: How can AI systems be adapted to understand and process local language inputs effectively?

Michael: We talked about language access, and I think just… is… are the AI capabilities able to work in a language that I speak? Being able to address that challenge, I think, is absolutely key. But even if you can access a model, let’s say a model in English, for… just as an example, does that… from the data it has been trained on, does that understand your culture? Does it understand what’s relevant to you? Has it been trained on data that is representative of the way you see the world?

If you live in, say, the US, or Europe or something like that, then in most cases, yeah, it’s a pretty good representation of what the world looks like from there. If you are part of the global majority, then there’s a good chance that it doesn’t. Being able to understand the limitations, being able to understand the bias that inherently exists in all models, and all humans, is important to ensure the best outcomes, and that really does require local expertise, local empowerment, not just for consulting, not just for checking a box, but really meaningfully engaging with local communities to make sure that, yes, you avoid blind spots, that you get the most out of the capabilities, and that you have the best chance of addressing the actual needs.

Ka Man: Thank you. I have a question from Sawsan around commercial tools and their security. So Sawsan asks, to what extent is AI a safe and secure path for handling sensitive information, and could AI introduce new vulnerabilities when relying on commercial AI tools?

Michael: Yes. So, it can be safe. But you should absolutely assume that it’s not. And so that’s also part of the AI policy discussion. I’ve seen many organisations starting out with fairly open usage of tools that are available, and then having to scale back and backpedal and say, okay, actually, you can’t use these tools, because if you put sensitive information into them in what the model consumes, in order to get a reply back, then that information may be used to update the model, to train the model. And there’s this notion of leakage from AI models, where data that is presented to the model, that has been presented to the model, can be spit out again. It’s generally a very low likelihood scenario, but it’s possible.

You should assume the worst, especially given the data you’re working with. Sensitive data is always important, but particularly when it touches on vulnerable populations, when it touches on people who may be refugees, where access to sensitive data about them, such as where they are at any given point in time, can be misused in all kinds of nefarious ways.

As I mentioned, for example, using an SLM on a local device where the data lives on the device, that is one way of ensuring it. But also, if you use a cloud-based AI service, it is a very reasonable question to ask your vendor, as part of your procurement process: in this specific use case, with this type of data, how can you ensure that the data is secured, and in a way that preserves data privacy?

Ka Man: Thank you. I’m going to squeeze in a very short follow-up question from me on that one. Because humanitarians are used to working with sensitive data, predating AI, we just, do you, I’m just curious. Based on practices that you see from beyond the humanitarian sector, do you think that we, collectively, the humanitarian sector has good readiness in terms of awareness and practices, in terms of following, you know, data security protocols in terms of what they’re pasting into ChatGPT or whatever, or is it not possible to make any distinction between humanitarians and general users?

Michael: That is a good point. I think that, historically, as you alluded to, because the sector is used to working with very sensitive data, data that, if misused, can have terrible consequences. There is an innate sense of caution.

But I do think that the introduction of broad access to modern AI tools introduces risks that just weren’t there before. If you’re interacting with a generative AI model, you give it the prompt, you give it some questions, it gives you something back, you point it to some data so you can ask questions about that data. Just knowing that that interaction may be consumed by the model for a future version of the model, and that data, in and out, there is a risk that it might show up elsewhere.

I think the sector is well prepared, but there are new breakage points that you need to be aware of.

Ka Man: That’s really interesting, because the research surfaced a lot of points around trust. But in the sense that there is that user trust because of those conversational interfaces, which is a positive in many ways, because people are using the tools, but also, like you say, it introduces those vulnerabilities and those points of risk. Very thought-provoking, thank you for sharing that.

So the final audience question to put to you is from Anonymous, and it’s around resource allocation and diversion. So they say, I worry about money being moved away from where it’s needed most, into profit for the tech sector. So how can we use these tools, protect our data, and keep costs reasonable?

Michael: It’s a good point. I would say, overall, I would worry about that, too. Money should not be moved away from where it’s most needed. That said… the commercial market exists for a reason, if it provides some value. And as long as it is the right return on investment from the sector, that whatever money you… I wouldn’t say, channel or shift away from humanitarian needs into the private sector, but money, budget that you invest in humanitarian action. As long as you get value for that money, then it’s not automatically, it’s not necessarily a bad option.

But that said, there are ways you can use it more effectively with budget as a key factor. Again, SLM is an option, something that is not cloud-based, where you don’t pay for those services. And also, even if you do pay, there are ways to make sure that you use the right type of model so you don’t have an overkill option. But also understanding what you’re paying for, making sure that that’s part of the technical assurance of… whatever the AI provider says about the service, that you can confirm that, like, for example, cost. That you can confirm when you actually use it, yeah, okay, the cost that was quoted, that’s similar to what we’re seeing when we’re using it.

As long as it enables you to do more, to do things that you weren’t able to do without this investment, and as long as that equation is in your favour, then I wouldn’t necessarily have any qualms with using commercial products, at least not from that point of view.
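[Editorial note: as an aside on the cost point above, the sketch below shows one way to sanity-check a quoted price against expected usage: a simple per-token calculation comparing a larger and a smaller model tier. All prices, token counts and request volumes are hypothetical placeholders, not figures from the episode or from any provider.]

```python
# Back-of-envelope cost check for a cloud LLM API.
# All prices and usage figures are hypothetical placeholders.

PRICES_PER_1K_TOKENS = {  # USD per 1,000 tokens (hypothetical tiers)
    "large_model": {"input": 0.0050, "output": 0.0150},
    "small_model": {"input": 0.0005, "output": 0.0015},
}

def estimate_monthly_cost(model: str, requests: int,
                          avg_input_tokens: int, avg_output_tokens: int) -> float:
    """Estimate monthly spend so a vendor's quoted prices can be checked against real usage."""
    price = PRICES_PER_1K_TOKENS[model]
    per_request = (avg_input_tokens / 1000) * price["input"] \
                  + (avg_output_tokens / 1000) * price["output"]
    return per_request * requests

# Example: 10,000 requests a month, roughly 500 tokens in and 300 tokens out per request.
for model in PRICES_PER_1K_TOKENS:
    print(model, round(estimate_monthly_cost(model, 10_000, 500, 300), 2))
```

Running a comparison like this before procurement makes it easier to spot an "overkill option" and to confirm, once the tool is in use, that observed costs match what was quoted.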

Ka Man: Great, thank you. Thanks so much for answering our audience questions. That was really, really interesting.


61:45: Chapter 8: Blind spots to address to accelerate shared progress, and closing reflections

Ka Man: So we’re just going to move to the closing segment of this discussion. What’s one thing about bridging this implementation gap in humanitarian AI that you think is overlooked, or not talked about often enough, but is vital to making shared progress in this space?

Michael: I think it comes down to demystifying what AI is and what it isn’t, because, understandably, there is a lot of confusion. There’s a lot of hesitancy to engage because it can seem so daunting and overwhelming. So demystifying what AI is, how it makes decisions, how it makes mistakes, so that you can proactively implement mitigation strategies to address the errors that will inevitably happen. And so, building awareness of both its potential and its limitations. But definitely also understanding when AI is not the right tool in the toolbox.

Ka Man: It’s interesting, because I recently had a discussion focused on governance and regulation, and a key takeaway for me was that, although that was the focus, the guest’s key point was the same: literacy and understanding are the cornerstone, the foundation of making strides in this space. So I thought that was interesting.

So before we wrap up, do you have any closing reflections or remarks to share with our listeners?

Michael: If your organisation doesn’t have an AI policy, start there; it’ll definitely avoid a lot of headache. But overall, I’d say, just get started. Don’t let AI be something that happens to your organisation, because it is already widely used, and, again, in a small number of languages. But be aware of what it is you’re working with, and how it works. Make sure that you understand the limitations. And it can absolutely be daunting: there are so many options and tools and courses to choose from that it can seem overwhelming, so reach out if you have questions. I know many people are ready to help, myself included.

Ka Man: That’s great, thank you, Michael. Honestly, I think hearing you reflect on that and share those words will be really reassuring to our listeners, especially with your perspectives and expertise: you understand the humanitarian context, you understand the tech, you understand what’s going on under the bonnet of the tech. So hearing you say that is really reassuring, because there is a lot of anxiety, worry and concern around this, understandably, and this feeling of being left behind. But hearing you say that these are practical things that you can do today, or tomorrow [laughs], will be really helpful for many, so thank you very much for sharing that.

And thank you for this conversation. It’s been really thought-provoking, and it’s been really good to get your expert take on this topic. Like I say, I’ve got so much more I could ask, but we’ll save that for future conversations: you’ve kindly agreed to be part of a future webinar, so I’ll be pleased to welcome you back to this space.

Michael: Thank you very much for the invitation. I love the work that you do, I love the research that you’ve done and the report that came out, and I think there’s a lot of value that people are finding in this podcast series and some of the other things you’re working on.

Ka Man: Oh, wonderful, thank you very much!

Michael: So thanks for the invitation.

Ka Man: So yes, thank you, Michael Tjalve, thank you very much for joining us for today’s episode of Fresh Humanitarian Perspectives from the Humanitarian Leadership Academy.

[Music]


Continuing the conversations: new Humanitarian AI podcast miniseries

This conversation is the second episode of our new humanitarian AI podcast miniseries, which builds on the August 2025 research: ‘How are humanitarians using artificial intelligence? Mapping current practice and future potential’. Tune in for long-form, accessible conversations with diverse expert guests sharing perspectives on themes emerging from the research, including implementation challenges, governance, cultural frameworks and ethical considerations, as well as localised AI solutions, with global views and perspectives from Africa. The miniseries aims to promote information exchange and dialogue to support ethical humanitarian AI development.

Episode 1: How are humanitarians using AI: reflections on our community-centred research approach with Lucy Hall, Ka Man Parkinson and Madigan Johnson [Listen here]

About the speakers

Michael Tjalve brings more than two decades of experience with AI, from applied science and research to tech sector AI development, most recently serving as Chief AI Architect at Microsoft Philanthropies where he helped humanitarian organizations leverage AI to amplify their impact. In 2024, he left the tech sector to establish Humanitarian AI Advisory, dedicated to helping humanitarian organizations and stakeholders understand how to harness the potential of AI while navigating its pitfalls.

Michael holds a PhD in Artificial Intelligence from University College London and he is Assistant Professor at University of Washington where he teaches AI in the humanitarian sector. Michael serves as Board Chair and technology advisor for Spreeha Foundation, working to improve healthcare and education in underserved communities in Bangladesh. Michael is AI Advisor to the UN on humanitarian affairs, where he works with OCHA on AI strategy and on providing guidance on the safe and effective use of AI for humanitarian action. He is also co-lead of the SAFE AI initiative which aims to promote the safe and responsible use of AI in humanitarian action. Michael recently co-founded the RootsAI Foundation, a nonprofit dedicated to bringing the value of modern AI to languages and communities that don’t have easy access to it today, and to improve representation in AI models by building culturally grounded AI tools.

Ka Man Parkinson is Communications and Marketing Lead at the Humanitarian Leadership Academy. With 20 years’ experience in communications and marketing management at UK higher education institutions and the British Council, Ka Man now leads on community building initiatives as part of the HLA’s convening strategy. She takes an interdisciplinary, people-centred approach to her work, blending multimedia campaigns with learning and research initiatives. Ka Man is the producer of the HLA’s Fresh Humanitarian Perspectives podcast and leads the HLA webinar series. Currently on her own humanitarian AI learning journey, her interest in technology and organisational change stems from her time as an undergraduate at The University of Manchester, where she completed a BSc in Management and IT. She also holds an MA in Business and Chinese from the University of Leeds, and a CIM Professional Diploma in Marketing.

Share the conversation

Did you enjoy this episode? Please share with someone who might find it useful.

We love to hear listener feedback – please leave a comment on your usual podcast platform, connect with us on social media or email info@humanitarian.academy

Disclaimer

The views and opinions expressed in our podcast are those of the speakers and do not necessarily reflect the views or positions of their organisations. This podcast series has been produced to promote learning and dialogue and is not intended as prescriptive advice. Organisations should conduct their own assessments based on their specific contexts, requirements and risk tolerances.

Episode produced in September 2025
