
Shaping humanitarian AI: why every voice counts

Can you help shape the future of humanitarian AI?

In this special episode, Ka Man Parkinson is joined by Lucy Hall from the Humanitarian Leadership Academy and Madigan Johnson from Data Friendly Space to discuss a landmark joint survey on AI adoption in the humanitarian sector.

Together, they explore how artificial intelligence is currently being used across the humanitarian space, what future potential it holds, and why this new piece of research is so vital. Whether you’re an early adopter or new to AI, your voice is essential in shaping how it supports humanitarian work.

💬 “What we’re really sort of missing is that messy middle ground of the actual adoption patterns across the sector…there is a fascinating disconnect between knowing what AI can do and really the benefits of it.” – Madigan Johnson, Data Friendly Space

💬 “AI is the same as other systems that have come before it. It can’t be built and iterated by a narrow group of stakeholders. It has to reflect expertise and experiences, otherwise, the systems will ultimately not fulfil the objectives of why they even exist in the first place.” – Ka Man Parkinson, HLA

💬 “This is your opportunity as our audience to really shape how we take this forward. We’re being led by what you want from us.” – Lucy Hall, HLA

Tune in to learn more about this first-of-its-kind study and how your input through the survey can help map current practice and guide the sector’s responsible and effective use of AI in the future.

A podcast promo image featuring the logos of Humanitarian Leadership Academy and Data Friendly Space, highlighting Humanitarian AI research with the title Shaping humanitarian AI: why every voice counts, plus photos of speakers Ka Man Parkinson, Lucy Hall, and Madigan Johnson.
Listen to the episode, now streaming on major platforms including Spotify, Apple Podcasts, Amazon Music, and Buzzsprout.

Keywords: humanitarian AI, AI for good, tech for good, prosocial AI, humanitarian technology, local humanitarian organisations, nonprofits using AI, AI training, digital transformation, humanitarian sector research, humanitarian funding cuts, AI ethics, AI literacy, equity in tech, responsible innovation, community insights on AI.


Episode chapters

00:00: Chapter 1: Introduction

08:37: Chapter 2: Research origin story

13:00: Chapter 3: What do we currently know about AI adoption across the humanitarian sector?

29:41: Chapter 4: Why non-experts belong in the AI conversation

37:33: Chapter 5: Join us on our AI research mission

40:36: Chapter 6: Closing reflections and useful AI resources

About the speakers

Lucy Hall is a data strategist and systems thinker with over seven years of experience driving digital transformation in the humanitarian sector. As a Data and Evidence Specialist at the Humanitarian Leadership Academy, she leads efforts to integrate AI and data innovation into locally led humanitarian action, exploring how data and technology amplify local expertise.

Madigan Johnson is a digital expert specializing in user behaviour and research, design, and storytelling. Following her Master’s in International Humanitarian Action through the NOHA network, Madigan pivoted to the private tech sector, where she has worked in both digital agencies and startups. Throughout her journey in tech, Madigan has maintained her commitment to creating meaningful impact, expertly leveraging user-led methodologies and data analytics to shape exceptional digital experiences. As Head of Communications at Data Friendly Space (DFS), she brings her expertise in digital technology, content strategy, and community engagement to the frontier of humanitarian AI innovation.
 
Ka Man Parkinson is a Communications and Marketing Specialist at the Humanitarian Leadership Academy. With 20 years of experience driving international marketing and communications across the nonprofit space, Ka Man led impactful campaigns for the British Council and UK higher education institutions before joining the HLA in 2022. Ka Man is passionate about creating meaningful change through compelling storytelling that informs, connects and inspires global communities. She completed a joint honours degree in Management and IT in the era of dial-up internet – and remains a constant, curious observer of systems, people and technological change.

Highlighted resources and further reading

Tools & online learning

Further reading

Did you enjoy this episode? Please share with someone who might find it useful! Please take the survey by 20 June 2025. The initial findings will be published in July 2025.

Feedback/enquiries: please email info@humanitarian.academy or connect with us on social media.

Episode transcript

[Music]

Ka Man [voiceover]: Welcome to Fresh Humanitarian Perspectives, the podcast brought to you by the Humanitarian Leadership Academy.

I’m Ka Man Parkinson, Communications and Marketing Specialist here at the HLA and today I’m hosting a special conversation to support the launch of a new joint global survey on AI in the humanitarian sector, together with our partner Data Friendly Space.

I’m delighted to be joined by my colleague Lucy Hall, together with Madigan Johnson from Data Friendly Space.

We’re collaborating to launch this new joint survey, and we invite you to get involved. It’s a short survey and our aim is that our global community’s responses will help to build a rich snapshot of AI adoption and aspirations across the humanitarian sector in what we believe is one of the first research projects of its kind. Today, we want to share with you why we think it’s important, some of our thinking behind the project and, crucially, why your voice matters on this AI learning journey.

[Music ends]

Ka Man: So Madigan, a warm welcome to the podcast! Thanks so much for being here today. Would you like to introduce yourself to our listeners and tell us a bit more about the mission of Data Friendly Space?

Madigan: So my name is Madigan Johnson. I’m the Head of Communications at Data Friendly Space. We’re an international nonprofit that is focused on technology and data for social impact. So in the past year we’ve really focused a lot of our time and energy on building ethical AI tools for the humanitarian sector. And I really see my role in that as advocating and helping humanitarians trust and use AI and technology in their work. I think there’s enormous potential at a time when the humanitarian sector is, you know, facing a sort of reset. However, like with any tools or trainings, we need to ensure that they adhere to these core humanitarian principles, and ensure that everyone has the same opportunities to engage, learn and utilise them. So I’m really excited about partnering with HLA on this survey and the opportunities that will come from this.

Ka Man: Oh that’s brilliant. Thank you so much Madigan, you’re based in Slovenia, aren’t you, and your team is geographically dispersed, is that right, are you across Europe?

Madigan: Yeah, so not even across Europe, but really across the globe.

Ka Man: Oh, right!

Madigan: So, I’m based in Slovenia, American by birth, but I’ve been living in Slovenia since January. And then our team is everywhere from Colombia to India to Kenya, Spain. Our CEO is in Estonia. So we really have a truly global, global team.

Ka Man: Oh, that’s really interesting, and it must obviously give a real global dimension and shift the dynamic in your ways of working as well.

Madigan: Exactly. I mean it’s great because, you know, we don’t have one headquarters – we operate fully remote, so there isn’t really a headquarters for us. Sometimes that, you know, presents its challenges because we obviously would like to meet up in person. But what we found is that when we have this truly global remote team, we actually figure out and innovate new ways of communication, of talking with each other. And again, it’s a great team, so, really lucky to be working here.

Ka Man: Oh that’s fantastic. I’m quite new to the humanitarian sector, when I say new – 2 1/2 years – I feel quite new [laughs]. But since I joined the HLA, I heard about the work of Data Friendly Space. And I think that you’re doing some really important work in this space, from the GANNET AI platform, some of the Kaya courses that we’ve collaborated on, such as the one in Community Crisis Intelligence developed by you guys together with us and the Centre for Collective Intelligence Design, part of the Nesta group, and CDAC Network. And your CEO, Karin Maasel, was previously on one of our webinars hosted by none other than Lucy, my colleague who will come to you shortly. And she was also one of our panellists at the Humanitarian Xchange event in London last year. So it’s really wonderful to be able to build on these collaborations and this knowledge exchange between us.

So that brings us nicely over to Lucy. Welcome, Lucy! Lucy is my wonderful colleague in our Research and Evidence team and is one of the champions of ethical AI adoption within our organisation. So Lucy, would you like to introduce yourself to our listeners?

Lucy: Hey yeah, lovely to be here, thanks for having me guys. This is such a cool conversation because as you said, Ka Man, I’ve been working with the Humanitarian Leadership Academy for seven years now in various different roles, primarily in the MEAL, Research and Evidence team. And I found that because the HLA is quite a technology-driven organisation through our Kaya platform, my interest has kind of grown into data science to really engage and drive slightly more meaningful insights for MEAL purposes primarily.

And I’ve kind of grown into that data science specialism just as AI really came to the forefront, when the big-name platforms were being launched – I won’t name them here, but you can probably imagine what I’m talking about. And I think since then I’ve focused a lot of the research that we do in the HLA around, well, how can we as a sector engage with AI in a really meaningful way that is aligned with the Core Humanitarian Standard – with our principles, with our values – and is owned by local leaders. Because there’s so many ways that we can really change how the humanitarian system works through engaging AI ethically, and so that’s something that I feel very, very strongly about. And I’ve got lots of ideas and hopes and ambitions that I’m sure I’ll talk to you guys about today.

Ka Man: That’s brilliant. Thank you so much, Lucy. I know that you’re really passionate about what you do and that you really champion and advocate data-driven approaches. You know, not just doing things on a whim – it’s like, what’s the impact here? And you’re always thinking about, yeah, who are we working for – affected communities. And you’re really passionate about locally led action. So I’m absolutely delighted to have you here for this conversation and part of this collaboration as well.

So if this is the first time people are tuning in and have heard from me: I’m Ka Man, I’m the regular host of this podcast, and I work in the comms team here at the HLA where my focus is on building community initiatives. So I’m in this space because I’m really passionate about harnessing the power of tech for good, particularly its ability to convene and connect people and organisations, build alliances and partnerships, and enable knowledge exchange for positive impact around the world. And of course, AI is a massive part of this technological mix that has rapidly come to the fore this year. Like I said earlier, I’m relatively new to the sector, 2 1/2 years, but I can’t believe that even in that relatively short space of time, AI wasn’t really a thing [laughs] when I first joined the HLA. Now here we are in May, June 2025, and it’s everywhere and it’s omnipresent. And so I want to know how I, alongside other non-experts in data and tech, can be a part of this, rather than sitting on the outside of it all. So that’s why I’m so excited to have Lucy, Madigan and other colleagues as part of this project and other collaborations, to really be part of this AI learning journey.

[Music]

08:37: Chapter 2: The research origin story

Ka Man: Coming to the research piece that we’re embarking on together – to share a little bit of the origin story, if you like, of this research piece. It came about in quite an organic way. So Madigan, you recently invited your followers, so Data Friendly Space LinkedIn followers, to take part in a poll to let you know about how often they’re using AI. So this came up on my feed and I was expecting most people to say daily. But there was a good proportion, right, that said that they never use it.

Madigan: Yeah, exactly. So we were also really surprised to see that, because I think we kind of operate inside of our little bubble where we use AI regularly within our organisation. And we have been doing these user interviews with users of our GANNET AI platform, and we’re just like, oh, this feedback is kind of interesting, you know, it would be interesting to know what the general humanitarian sentiment is in terms of using AI – so that’s what kind of prompted that poll. And like you said, I was expecting daily, you know, I can’t live without it, all that sort of stuff. And then it was, oh, some people have never used it, or use it very rarely. And that’s kind of where I was like, huh, this is really interesting. And I was thinking at this time, like, OK, how do we do some more research? And then that’s when you sent me the message and I was like, this is perfect – it’s like she’s reading my mind! [laughs]

Ka Man: Serendipity, right?

Madigan: Exactly.

Ka Man: Well, that’s fantastic. Yes, like you say, the users who said we don’t use it at all, that piqued my interest. And I’m really interested to know, obviously, how the people who are using it daily are using it, so that we can learn from these individuals and organisations. But I’m equally interested to know about those who aren’t – what are the barriers? Is it that they’re just not interested? Is it a technological barrier, etc, etc? So yeah, that planted the seed of this idea for this collaboration.

Madigan, from your side, what excited you most about this idea of working with the HLA and what findings do you hope will be of use to DFS as well as the wider humanitarian sector?

Madigan: I think what got me really excited about partnering with you guys is the timing and the potential of this survey. So we’re at this moment in time in the humanitarian sector where we have less resources, we have less funding, and organisations are trying to embrace new technology, new ways of coping with this reality. But there are a lot of serious ethical concerns, or trust concerns, around incorporating AI into their work. So while organisations have been mapping out, let’s say, AI safety concerns and policies, we’re not actually seeing the systematic mapping of how humanitarians are actually incorporating AI into their work and the challenges that they have in doing so. So what that means for Data Friendly Space, and hopefully the broader sector, is that the survey will be able to cut through some of the hype around AI, because I feel like there’s a lot of hype and everything is AI this, and AI that. And don’t get me wrong, I think there should be this hype around it.

But we also need to look at what AI’s potential is alongside the real data on how, for example, a WASH specialist in Lebanon is using certain, let’s say, commercial AI agents – or not – for project reports, or whether AI translation tools are actually helping in communicating with affected communities. So this sort of practical intelligence is, I think, what we’ve been missing in terms of understanding where the humanitarian sector is at. So I’m hoping that this survey will really help organisations start to make these adoption decisions instead of just following the trend of, oh, this organisation is using this tool, so we have to incorporate it – but rather asking, OK, how is this actually going to systematically benefit us and the communities that we serve? So I think that we can do a lot of really great things with that and cultivate this community of practice and learning, especially with the support of HLA and all the great work that you guys do.

[Music]

13:00: Chapter 3: What do we currently know about AI adoption across the humanitarian sector?

Ka Man: Continuing with you, Madigan, when we caught up initially you’d mentioned that there have obviously been lots of great pieces of research conducted on AI in the humanitarian space. We believe that this is among the first to build that sector-wide picture and outlook, which is obviously really exciting for us. Could you share any insights about what we do currently know, based on some of those learnings from within DFS and these other organisations and individuals that have been spearheading this type of research?

Madigan: Yeah, so what we’ve been seeing is, is that there’s a lot of research into, let’s say, the AI policies or the ethical concerns around using AI in the humanitarian space. There’s also been some research into the mapping of AI tools that are already existing within the humanitarian sector. So UKHIH did a great job of mapping out all the different humanitarian AI tools.

But what we’re really sort of missing is that messy middle ground of the actual adoption patterns across the sector. So what we’ve observed at DFS, in our conversations with users from different organisations, is that there is a sort of fascinating disconnect between knowing what AI can do and really the benefits of it, but then a significant lack of trust in the AI tools. So you have leadership teams telling programme teams they have to do, you know, the same amount of work or more, but with less resources or less funding. You also hear stories of, you know, yes, do all of this, but at the same time, we can’t invest in training in these sorts of tools to help you better do the work. You also have teams that are using different AI tools, so you might have one team that’s, let’s say, using ChatGPT and another one that’s using Perplexity, another one that is using, you know, all these different tools and adapting them to fit their work, but still kind of ad hoc and without that organisation-wide “this is what we’re doing and why”.

So with that though, we are seeing these pockets of innovation, with staff really leading the charge on the adoption of AI in their work. So you have communications teams saying, you know, this is how I’m going to incorporate it into my work, and then starting to share with other people on their teams how they’re incorporating it. So there’s still a lot of experimentation in the sector. What I’m really hoping to uncover in this research is where these success stories are actually translating into operational impact, versus where organisations are still experimenting and playing around with different AI tools. In my head, I’m thinking that we might see the effective adoption of AI more in the areas that, let’s say, aren’t as fun – logistics and operations and information management, kind of the routine applications of AI – whereas the flashier ones, such as predictive analytics, are still mostly aspirational. There’s still a lot of great work being done there, of course. The real value, I think, is going to be knowing who’s using what, but also who’s actually moved from piloting different things into actual incorporation of AI usage into their work. So I think that’s something that organisations and other humanitarians can really find value in.

Ka Man: Yeah, honestly, everything that you’ve said there really resonates and chimes with my own thoughts and speculation around AI adoption. I liked how you said you’re really interested in the messy middle ground, because that’s what I’m also really interested to delve into, all that fuzzy grey area. So yeah, Lucy, did you want to come in and share any reflections on anything that Madigan’s just shared there?

Lucy: Yeah. I mean, I totally agree, I think that messy middle ground is where the good stuff comes from, right? And that’s where I think, as researchers, we can always dig in and find stuff out. When you were talking, Madigan, you mentioned leadership investing in training and allowing individuals to engage with AI, and what sparked in my mind was also knowing where to invest in AI, because it can be a really expensive tool – potentially expensive. I know a lot of the platforms are very freely available, and that’s actually one of the reasons why I love GANNET, because there is that free access that I’ve been playing around with. And that’s one thing that I think is often missing from the conversation around adoption in the humanitarian sector. Obviously you mentioned that resources are quite stretched and time is stretched at the moment. And so knowing where and how to invest in AI-ready capabilities, and in training as well, is, I think, really crucial. That was just something that sparked in my mind, because sometimes we invest in an AI project to develop an AI project, or we develop an AI-driven humanitarian concept, but to really unlock the potential there, I think something that’s missing is that systemic investment in AI and technology.

Ka Man: So staying with you, Lucy, I wonder if you wanted to build on the picture that Madigan has painted, of what we know about AI adoption across the humanitarian sector, whether anything has revealed itself to you through your various literature reviews and work in this area?

Lucy: There’s a couple of things that I’ve observed from all the reading and the literature reviews that I’ve been doing. So I’ve primarily been focusing on the skills and the leadership in this space, which is why investment is at the forefront of my mind, because that’s where the real potential is, I think. And that’s also the role that the HLA plays: to really strengthen capabilities at an organisational level to further locally led action, sometimes driven by AI. And so the main thing I really wanted to talk about is that most AI innovations seem to happen with large, primarily international organisations designing the outcome, designing the project, leading that. And that’s not necessarily a bad thing – I think it’s really helpful to have that perspective and that engagement. But one thing that I see quite clearly is that local organisations and local humanitarians aren’t always at the forefront of the decision-making process in how AI is used in their context, or whether AI is actually the best approach to use. And obviously we’re huge advocates of them championing their own approaches, but sometimes there’s jumping on a trend for jumping on a trend’s sake. And I want to see more evidence of local organisations and local humanitarians being more intentional about how they adopt AI and whether they adopt AI.

Which kind of leads me on to my next main observation: as Madigan said earlier, there’s a lot of uncertainty around AI in the humanitarian sector, and I think that’s a mixture of things. There’s nervousness – it’s a new technology, and I think there is still quite a real inhibition when it comes to experimenting with technology sometimes. And there are some completely legitimate and really important ethical and protection concerns and considerations, especially when we work in really vulnerable contexts; having AI access lots of data could really pose risks, and I think it’s important that we explore and have these conversations about whether and how we will mitigate those risks.

And I think there’s something around scalability that I’m observing. There are lots of case studies – as you said, the UKHIH has this amazing new list of resources of where AI is being used. But AI at scale I’m not seeing. And there was a really interesting paper that came out a couple of weeks ago from ALNAP pointing out that there are no standards, no framework, no IASC position paper on AI, and I think that’s something that really needs to evolve in the next 18 months or so to really advance AI adoption.

So yeah, I think there are lots of things, and lots of different trends that I’m seeing, and I’m really curious. But I think we’re all on the right path; it’s just how we all connect together and make sure the right people are in the right room at the right time.

Ka Man: Fantastic. Thank you, Lucy. You know, as you’re speaking I’ve got so many thoughts sort of sparked in my mind, I’m thinking about how AI, it’s like, wow, it’s really, really important – the time really is now that we as a collective get to grips with it – what do we want to achieve? And like you say, work towards some common standards across the board, but particularly within the humanitarian context because of the sensitivity of the people and organisations that we’re working with.

Sticking with you, Lucy, I wondered if you wanted to share a little bit about the work that the HLA is doing in the area of AI? You’ve already alluded to some of it, but I don’t know if you wanted to elaborate on that a little bit?

Lucy: Yeah, there’s quite a lot that we’re doing with AI – I mean nowhere near as much as Data Friendly Space is, who, as I said, Madigan, I am a huge fan of, and I am slightly awestruck that we’re having this conversation. I think, you know, AI is such a broad term as well, and it’s important to unpack that. I’m obviously coming at it from a data science perspective, from a research perspective, but the primary focus of the HLA is to enable accessible, quality learning offers, and we do use AI in our digital learning offer. I think that’s something that we almost take for granted in the HLA, that we’ve been using AI to drive a lot of our content creation, but also contextualisation – and there are so many ideas that we have, and we’re currently conceptualising quite a lot. We’ve got an AI-ready plan as to how we want to offer and engage with the wider humanitarian sector, and a big part of that is this piece around exploring the capacity gaps that all actors in the system have, which is why this conversation with Madigan and DFS is so important and so exciting.

And we’ve also, as I’ve been talking about, completed a couple of literature reviews exploring the capacity needs of AI adoption for truly locally led AI usage. And that was really, really interesting, because we didn’t look specifically at capacity deficiencies; we looked at capacity opportunities. And we also explored how other organisations can strengthen their capabilities in working with local organisations. And we really want to expand on that research piece over the coming months, which is why we’re so keen to be involved with this particular study, because again, it gives our audience a really large voice in our research agenda.

Ka Man: Thank you so much, Lucy, that made perfect sense. Yeah, I’m really, genuinely excited about the potential and application of this research. Hopefully people will be willing to share some examples of how they’re using AI so that we can build up a bank of innovative use cases from across the sector – hopefully not just INGOs, although obviously that is of value to us, but also people in local organisations.

And as I’ve mentioned just before, the timing of this I think is so important and you’ve both mentioned this as we’ve gone along. It’s so critical to get this piece of research off the ground now and we’ve worked on this quickly, haven’t we? Because we appreciate that we need to work at pace to be able to, yeah, capture this important information and generate this conversation. Next month, the picture will have changed, you know, so we want to really know what’s happening now, what’s on the ground.

And obviously this coincides with a time when we, as a sector collectively, are facing and dealing with so much deep structural change that has really fundamentally shifted our ways of working. And I think that really amplifies this risk of an AI gap widening between the haves and have-nots, so to speak – really widening that gap and those disparities in technological know-how, like I say, particularly between relatively well-funded INGOs and local organisations. So yeah, I think we’re sitting at a crossroads right now, particularly in the arena of this prosocial AI model, as opposed to for-profit and commercial applications.

Madigan: Just one quick comment on what you were saying – what I’m really curious to see in this research is that sometimes the local organisations have already been implementing AI in their work, because they don’t have the resources that large NGOs or UN agencies have. So I’m going to be really curious to see what pops out of that, and I’d really encourage every local humanitarian organisation to respond to this, because again, I think they’ve already been kind of scrappy, working with less resources, so they might have already turned to AI. But again, that’s just a thought right now, so I’m very curious to see where the research takes us. Whereas sometimes in the larger INGOs or UN agencies there’s a lot more bureaucracy around getting tools adopted into the fold, so, yeah, excited to see what comes out of that.

Ka Man: Yeah, you make a really good point there. We might work on the assumption that a local organisation may not be further along the line with AI adoption. But like you say, someone might be wearing several hats – they don’t have the relative luxury of having specialist colleagues, departments, teams to interface with. They’ve got to deliver everything, you know, have a working knowledge of finance as well as writing bids, etc, etc. So yeah, they may already be further down the line on the AI adoption journey. Lucy, would you like to come in and share some thoughts?

Lucy: Yeah, I just wanna say I completely agree. And actually, going back to what you shared at the start around your thoughts on that poll, I was actually expecting almost the opposite. I was expecting to see that people would be using it already, because I’m sure that outside of formal data collection around AI usage, as you say, people will be using this because of the lack of resources. And I think it’s such a polarising thing: people either love it or hate it, work with it or don’t wanna work with it. So yeah, I’m actually really excited to see those examples, and I think there could be all sorts of ways of using AI that we probably haven’t even necessarily thought of, and that’s going to be the cool bit, right? [laughs]

[Music]

29:41: Chapter 4: Why non-experts belong in the AI conversation

Ka Man: So I wanted to bring the conversation to the importance of involving non-experts, people from outside of the tech space, in this research conversation, because I think we’ve mentioned as we’ve gone along that it’s important, but I just wanted to talk about it a bit more.

So last month I had a brilliant, really eye-opening conversation with an AI researcher, Dr Cornelia Walther. She used to work as a senior leader in comms with UNICEF, with two decades’ experience in this space, and then she moved over to AI research. So she’s now with Wharton Business School, University of Pennsylvania. And she writes prolifically for Forbes, so you can check out her work there if you’ve not heard of her before. And in that conversation that we had, she said: “AI is not happening to you, but it’s happening with you. And no matter how overwhelming it might feel, you always have a choice. It’s important to be aware of that and to take a proactive stance when it comes to AI.”

So that quote really stayed with me actually. And I thought, yes, I need to be proactive in this and I need to be part of this. So, Madigan and Lucy, I wanted to bring you back in here and ask your view on why it is important for non-experts to engage in this discussion from your perspectives. Madigan, what do you think?

Madigan: I think it’s probably the most important thing that we can be doing at this point in time. So when people are engaging with AI, I think there’s this misconception that you need to be a prompt engineer or an AI expert or a researcher. That’s not true at all. Each humanitarian brings a special skill, or know-how, into their work. So what you’ll see is, you know, you’ll have a WASH or education specialist that is really going to know their work super well, more so than, let’s say, an IT systems administrator telling them how to use the AI. So decisions can’t be made solely by the tech or IT companies, or the leadership teams within organisations. They actually need to be made by the people that are actually doing the work itself. So, for example, a logistics coordinator is going to make better decisions about how to integrate AI into their work than their tech team. What we run into is that the real risk isn’t that AI will be used wrongly, although there will be case studies of that, I’m sure, but it’s that humanitarians will avoid it completely because of distrust or apprehension when it could genuinely help, or the opposite, where they’ll blindly adopt it without understanding its limitations and where they need to step in with that human intelligence. We’ve seen this pattern before with other technologies that have been introduced into this sector. AI, I think, is just amplifying some of those risks and those limitations. But again, we still need to have that human intelligence and that feedback from people that actually understand the work so well.

And in that, going back to our earlier conversations around AI literacy and training, we really need to invest in that. We need to have people who can think critically and ask the right questions: is this a tool that we should be incorporating into our work? Is it actually solving one of the problems that we have, or are we just creating new dependencies or biases with this? I think, going back to your point, Lucy, about adopting that shiny new fad, are we actually adopting it because it’s the latest trend and we don’t want to be left behind, or is it actually instrumental to our work, actually making us more efficient? So yeah, the humanitarian sector has so much knowledge and expertise, and it’s absolutely critical that everyone gets involved in this conversation. And again, you don’t have to have AI expertise – it’s about bringing the expertise of your work, and that’s going to really help drive the conversation of how to incorporate AI into the humanitarian sector.

Ka Man: You make really good points there, that AI is the same as other systems that have come before it. It can’t be built and iterated by a narrow group of stakeholders. It has to reflect expertise and experiences, otherwise, the systems will ultimately not fulfil the objectives of why they even exist in the first place. Lucy, would you like to share any thoughts on that?

Lucy: Yeah, just to completely echo everything that Madigan said, really. AI is essentially a tool. It’s there to enhance our work. It’s not there to replace it or do our jobs for us. Human intelligence is probably more important, and the experience, the expertise, and just that instinct – even though, you know, as an evidence professional, I would sometimes argue against instinct – really, in humanitarian work, that instinct can actually serve you really well. And I think combining human intelligence with artificial intelligence is where that sweet spot really is. And I think the main thing that’s sometimes missing from the conversation around AI is that critical thinking piece. AI is a tool. It is another piece of evidence. And I think having that ability to step back and say, does this tally with what my experience tells me? Does this tally with what I am currently experiencing? Does this reconcile with the reality?

And I think, you know, using AI is probably the easy part of AI. Understanding how AI is built – and I’m not talking about coding, I’m not talking about being the person developing or training the data – but understanding what AI has been built on, to understand what biases it brings in, to understand the risks that AI can pose by solving a problem only partially, and to know what that missing piece is. That is where humanitarians really can engage with AI in a way that’s much more meaningful. And I always encourage people just to learn about AI by trying it, seeing what it comes out with, but then challenging it, challenging it and saying, you’ve got that wrong – it’s actually this, if you’re using a text-based ChatGPT-style tool. And see how it reacts, because that’s where you can learn about how you can apply it in your work, I think that’s the best way of putting it. So yeah, really echoing what Madigan says: human intelligence in humanitarian action carries just as much value as AI does.

[Music]

37:33: Chapter 5: Join us on our AI research mission

Ka Man: So wherever you are, we want to hear from you. The survey is completely anonymous. We’re hoping, though, that a lot of people will be interested to continue the conversation, and there is the option to opt in for potential follow-up research in the future. So obviously, depending on what comes of this, we may identify particular areas that we want to dive deeper into. So if you wish to share your email address with us, then we may contact you for future follow-up research projects.

You’ll be able to find the link to the survey in this podcast’s show notes, and it will be open until Friday the 20th of June 2025. We’ll share the initial insights with everyone from July. Keep an eye on our channels, so Data Friendly Space and the HLA, and we’ll announce details when we’re ready to share that.

So yes, we hope that this conversation has encouraged you and sparked some thoughts in your mind about your own AI journeys as well.

Madigan: Yeah, I’m just really excited to see this sort of research come to actual fruition. And again, thank you so much for taking the lead on this. It’s really exciting to see how just an organic conversation can spark this sort of research and insight into how humanitarians are actually incorporating AI into their work. And I think we’re going to get some results that are probably going to challenge my ways of thinking, which I’m really excited about, and I’m excited to hopefully be able to share the findings with the wider humanitarian community and then, you know, help others figure out how to incorporate AI into their work, or not – where the challenges are, where the limitations are as well, but also where the potential is. So very, very excited about this.

Lucy: Yeah, same. You can probably tell, but we’re all very excited, and there is so much that we want to learn. And I think, you know, this is your opportunity as our audience to really shape how we take this forward, because we want to be led by you, and that’s something that I’m really, genuinely excited by, that we are able to engage with you on how you’re using AI and what you want from this research. We’ve got no agenda – like we’ve said, it’s quite organic how this happened, and I think that in itself is quite exciting, because we’re, you know, being led by what you want from us. So yeah, please do have your voice heard in this survey. I think that’s what I’m most excited by, so very much looking forward to seeing what we can come up with.

40:36: Chapter 6: Closing reflections and useful AI resources

Ka Man: Brilliant. Thank you so much, Lucy and Madigan, for joining me on this podcast today. Before we wrap up, do you have anything else you’d like to add or highlight to our listeners? Let’s come to you first, please, Madigan.

Madigan: Yeah, just to our listeners, I would encourage you to learn and explore as much around AI as you feel comfortable with. Again, in this whole conversation around AI, it is a new tool, and I know that there are a lot of maybe negative connotations with some of the commercial agents and things that we’ve seen AI do, but there’s also a lot of really positive impact as well that might not get as much attention as, let’s say, the negative press out there. There are a lot of amazing resources out there. So for example, like I said earlier, the UKHIH has a directory of AI-enabled humanitarian projects. So if you’re curious about what other humanitarians are doing in really different spaces and clusters, go take a look at that. Again, the Kaya courses, like the one on Collective Community Intelligence – in general, great opportunities for learning there as well.

And then I’m going to promote DFS and our tools here for a little bit. We have the GANNET Virtual Assistant, which is basically like a chatbot, but built on already trusted and verified humanitarian information – so we don’t include everything from the web. Just think of that as your humanitarian sidekick. And then we also have the GANNET Situation Hub, which is a more specialised one for Sudan, Lebanon and Myanmar. And again, it’s automated analysis, but with human oversight, because what we realised is how important that human intelligence is, like you said, Lucy, so that combination.

But really, talk to your colleagues, talk to your friends, cultivate this community of practice and share lessons with each other. You know, what was a win for you, where you were like, yes, this really helps me – and then also the failures, because I think we need to share those as well. That’s kind of how we grow this community and make sure that we’re all learning alongside each other. So thank you so much for having me.

Ka Man: Thank you so much, Madigan. And I think the point you made around a community of practice is such a good idea, and that’s something that people can take away from this conversation – as well as, obviously, undertaking the survey. That’s something that we can think about within our own teams, or, you know, it doesn’t have to be organisation-wide. It could just be those people with an interest. So how about you, Lucy, would you like to share any closing thoughts or messages to our listeners?

Lucy: Honestly, I’m going to echo Madigan again. Just keep trying out different tools and build your confidence with using them, because it can have such a huge impact and it can really change the way you think. It can change the way you work, but not dominate how you work – it is a tool, and I think that’s probably my main takeaway for our listeners: it is a tool, it is not how we work, and it is your choice. And I think that’s important. There are some great resources out there. Do the reading and understand it, because there are so many applications. And your ideas and your ways of working will probably be different from the person sat next to you, so try it out. The courses on Kaya are great, especially the Collective Community Intelligence one that was mentioned earlier. And I’d also recommend reading non-humanitarian use cases as well and understanding AI from a non-humanitarian-specific perspective. You are the expert in the humanitarian aspects. You will understand how AI can help your work.

So if you want to explore AI more, I’d actually really recommend a course such as AI for Everyone on Coursera. And just read and engage. And, as we mentioned, communities of practice – talking to people, learning about their experiences. It’s all about applying those critical thinking skills that I mentioned earlier, and that’s the real beauty of AI.

And yeah, there are so many opportunities, and I’m just looking forward to seeing what happens next.

Ka Man: That’s brilliant – so am I, Lucy, so am I! Thank you so much to both of you. I will include links to all the resources that you’ve both just mentioned in the show notes, so that listeners can check those out. So please do visit our website and take the survey. We really value your responses to help support us on our research mission.

Thank you once again, Madigan and Lucy, and thank you to our listeners for joining us for today’s episode of Fresh Humanitarian Perspectives from the Humanitarian Leadership Academy.

Note

Episode produced in May 2025 by Ka Man Parkinson.

The views and opinions expressed in our podcast are those of the speakers and do not necessarily reflect the views or positions of their organisations. 
