
How are humanitarians using AI: reflections on our community-centred research approach

How can artificial intelligence research be designed with community engagement at heart?

When a simple LinkedIn poll asking humanitarians how often they use AI was scaled into a global survey attracting 2,500+ responses, it revealed something unexpected: practitioners were hungry to discuss AI adoption but lacked the community space to air views and experiences.

The team from the Humanitarian Leadership Academy and Data Friendly Space behind the first systematic study of AI use in humanitarian work reveal how they turned organic community engagement into global research [read the report and watch the launch event recording].

The HLA’s Lucy Hall (Research and Evidence Lead) sits down with fellow research co-leads Ka Man Parkinson (Communications and Marketing Lead) and Madigan Johnson (Head of Communications, Data Friendly Space) to explore the realities of rapid, community-driven research: balancing speed with rigour, managing cross-organisational collaboration, and their campaign approach to creating sector-wide dialogue where data and evidence are urgently needed.

Reflecting beyond the report findings, the research team:

  • Share candid reflections on their research approach and lessons learned along the way
  • Reveal the unexpected challenges that led them to shelve planned video content
  • Explore how collaboration, capacity sharing and clear communication across teams and organisations can offer an agile and powerful way of working during a time of major sectoral shifts
[Image: podcast artwork featuring headshots of Ka Man Parkinson, Lucy Hall and Madigan Johnson with the episode title, “How are humanitarians using AI: reflections on our community-centred research approach”, and the Humanitarian Leadership Academy and Data Friendly Space logos.]
Tune in to the episode now streaming on Spotify, Apple Podcasts, Amazon Music, Buzzsprout and more!

Keywords: humanitarian AI, community-centred research, humanitarian technology adoption, rapid research methods, humanitarian research methodology, AI adoption survey, Global South engagement, humanitarian innovation, cross-organisational collaboration, communications campaigns, trust in AI, psychological safety.

➕ Tune in to our companion episode: Reflecting on our community-centred humanitarian AI research: your questions answered – listen here and read on for further details!

Who should tune in to this conversation

This conversation provides useful insights for humanitarian researchers, MEAL professionals, impact and evidence teams, communications and marketing specialists, and project managers. It also offers contextual humanitarian insights for technologists, digital transformation teams, funders, and policymakers exploring community-led research approaches with Global South engagement.

Want to delve into this further? Listeners may also be interested in the podcast conversation the team recorded at the launch of the survey in May 2025 highlighting their aims and aspirations for this research and why they believe that every voice counts in shaping humanitarian AI. Listen here.

Chapters

00:00: Chapter 1: Introduction
03:51: Chapter 2: Reflecting on organic research origins and responsive approaches
18:09: Chapter 3: Using inclusive research methodological approaches
23:40: Chapter 4: Balancing voices and taking an audience-centred editorial approach
31:05: Chapter 5: Reflections on the research process: what would we change?
41:36: Chapter 6: Research as a springboard for dialogue – next steps
50:23: Chapter 7: Closing reflections

The views and opinions expressed in our podcast are those of the speakers and do not necessarily reflect the views or positions of their organisations.

Episode transcript | How are humanitarians using AI: reflections on our community-centred research approach

Chapter 1: Introduction

[Intro music]

[Voiceover, Ka Man]: Welcome to Fresh Humanitarian Perspectives, the podcast brought to you by the Humanitarian Leadership Academy.

[Music changes]

[Voiceover, Madigan]: It kind of felt like we had tapped into this massive underground conversation that was just bubbling and waiting to come up to the surface. And that the sector was really ready for this discussion in a way that I think kind of took me by surprise, and I think maybe all of us by surprise.

[Voiceover, Ka Man]: I think what I’ve really seen and learned through this process is that it’s really underscored that our community is special. So we’re evolving that relationship into more of a two-way symbiotic relationship, so we’re listening and learning from the sector. And I really want to build on that because I think it’s so, so crucial.

[Voiceover, Lucy]: I really came into this project thinking that we’d be exploring the technology. But what’s really come out strongly is the human element of artificial intelligence. It’s not around technology, it’s around leadership, it’s around organisational capabilities and capacities, and it’s around psychological safety and relationships.

[Voiceover, Ka Man]: Last month, we were delighted to launch a new report, ‘How are humanitarians using artificial intelligence in 2025?’, together with our partner, Data Friendly Space. It was the first global study into the realities of AI adoption by humanitarians, and it highlighted a phenomenon we call the Humanitarian AI paradox, which describes surprising contrasts in AI adoption: high individual uptake yet low organisational adoption of AI, and high individual uptake yet low levels of AI expertise within organisations.

And inverting the typical script on technological innovation, we found that many leaders in local organisations across the Global South were leading the charge in terms of experimentation, testing and trialling. Meanwhile, many Global North international organisations are navigating questions around risk, budget and compliance challenges first.

In today’s conversation, the research team regroup to reflect beyond the research findings into the process itself, sharing valuable lessons learned along the way. This conversation is of particular interest to humanitarian researchers, MEAL teams, communications and marketing professionals, project managers, as well as leaders and teams looking for insights and experiences on cross-team, collaborative and agile approaches.

Plus, once you’ve listened to this conversation, we have a companion episode where we address some of the questions our community asked about the research itself, as well as about humanitarian AI more broadly.

[Intro music ends]

03:51: Chapter 2: Reflecting on organic research origins and responsive approaches

Lucy: Hi everyone. I’m Lucy Hall, I’m the Research, Evidence and MEAL Lead at the HLA, and I’m honoured to be your host today. I’m hosting a really special conversation with my colleague Ka Man Parkinson, and with Madigan Johnson from Data Friendly Space. This is the first episode in a dedicated follow-up podcast series focused on our recent AI in the humanitarian sector research.

The three of us have been co-leading this research project over the past few months to explore AI usage in the humanitarian sector, and we wanted to spend some time reflecting on our experiences, our findings, the methodology, and the importance of community-led research – and also have a little bit of an exploration as to what is coming next.

Hi ladies, lovely to see you again. It’s been a bit of a whirlwind few months for us, I feel. So much has happened. So just for our listeners, shall we do a bit of a recap as to what has happened since we first started collaborating back in around May or June time, I think it was?

Ka Man: Hi, Lucy. Thanks very much for hosting this conversation today. It’s so nice to have this time and space together with you to reflect on the research project and processes, because honestly it’s been such a whirlwind, like you say, but in unexpected and positive ways.

So yeah, from my side, thinking back to early May when we first had the idea for this collaboration: it evolved really organically from a poll that popped up on my LinkedIn feed, which Madigan had posted on behalf of Data Friendly Space, asking followers how often they’re using AI tools. So I cast my own vote – daily – and waited with interest to see how others responded. At that time, most respondents were saying that they rarely or never use it, and I thought, oh, wouldn’t that be interesting to dig into more deeply?

So that’s when I connected to Madigan, suggesting that maybe Data Friendly Space and the HLA join forces to scale up the poll into a global survey. So I’m really glad that Madigan was on board and had the support of Data Friendly Space to do that.

And I was really surprised when I first connected with Madigan and she said, actually, yes, I think that this exercise would be unique – I don’t think it’s been done before, from what I see and hear in the sector – and that this would address a gap in knowledge: actually mapping out, in reality, how practitioners are engaging with AI tools in the humanitarian space. I was really surprised, but that obviously motivated me to push ahead with this in collaboration with Data Friendly Space.

And that’s when we brought you on board as well, Lucy, to create our core project trio. And it’s been such an incredible project to be part of and collaborate on together. I think we’ve been really aligned in our aspirations and vision, as well as being really flexible and adaptable in our ways of working and approaches.

Madigan: Hi, Lucy. Thanks so much for having me. And again, so fabulous to have these conversations with you. I have to be honest, I think when I first published that poll on LinkedIn, I did not expect it to turn into what we have today.

And so when Ka Man reached out about this idea, my first reaction was just genuine excitement, because at Data Friendly Space we had been seeing this fascinating disconnect about AI usage in the sector, which I think came about later in the research as this AI paradox, after we kind of delved into the research findings and the survey results.

And so we had been having a lot of these conversations about AI and AI usage with other humanitarian organisations, but when we really looked or asked them how they were actually using it, or about their organisational policies or formal adoption, there was basically – I don’t want to say nothing, but there was this huge gap that we were seeing. Everyone was talking about it, but then when we actually asked about the implementation or how they were using it, there was this sort of trepidation. It was almost like discovering this sort of shadow economy of AI usage within the humanitarian sector.

Our research aim was really simple but quite ambitious at the same time: to map the reality of AI usage, not just the rhetoric or what we had been hearing in conversations, but actually the practical implementation – how humanitarians are actually using it or not using it. We wanted to understand not just who’s using what, but why this gap exists between individual adoption and institutional support, and really focus on the reality at hand that’s facing humanitarians and the humanitarian sector as a whole.

As we know, the humanitarian sector is notoriously risk-averse, with very good reason, but we kind of suspected that people were experimenting anyway. And it turns out we were right. The implications of that are actually much bigger than we initially imagined, which for me, and I think for you guys as well, kind of means that this is just the starting point of this research, and there’s so many other areas that we’re really excited to kind of delve into.

Lucy: Such a wonderful reflection, and I agree – there’s so much more we could delve into. It makes me think back to our own research from the last year or so, where we’ve been looking at AI and how it ties into local leadership and local engagement, in designing something that truly drives transformation of how the sector operates. So it was great to have those conversations and realise how aligned we were in the conversations and thinking we were having.

I just can’t believe how busy and how quickly the last few months have passed, which is why I think this conversation today feels really useful to have – just to take stock of all of the conversations that have happened, all of the thinking that we’ve done, look at the feedback that we’ve received, to start looking into the future, right?

When I think back to those initial discussions, the whole project feels as if it’s evolved to be much bigger than we possibly imagined it would be, which has been really exciting, but I’ll admit at times I’ve been slightly overwhelmed and surprised, particularly by the pace of the research and the conversations surrounding it.

Is there anything that has surprised you both around the process of this research, or what’s happened over the last few months?

Madigan: Yeah, I mean, it’s a great thing to discuss, because I think what caught me completely off guard, to be honest, was how hungry people were to talk about this. So, when we first launched the survey, both Ka Man and I kept checking in to see how many responses we were getting. We kept hitting a couple of hundred responses and we were like, oh, wouldn’t it be great if we could reach 500? And then, okay, it’d be so great if we could reach 1,000. And instead, we got over 2,500 responses from such a wide, diverse group of humanitarians from, I think it was, 144 different countries and territories, with really diverse backgrounds and management roles.

What was also really interesting is that these people weren’t just ticking boxes – some of them were writing essays in the comment sections, really giving us their knowledge and know-how about how they’re using AI and what they’re using AI for, or not, right?

I think the other thing, like you said, was how quickly this research kind of took on a life of its own. It kind of felt like we had tapped into this massive underground conversation that was just bubbling and waiting to come up to the surface, and that the sector was really ready for this discussion in a way that I think took me by surprise – and maybe all of us by surprise – as we have sort of been starved of these insights and the actual hard data to back them up. We had just been having these conversations, and now we actually had data to support them.

I think something else that sort of surprised me in the best way possible was the collaboration between us as co-researchers and organisations. And I really felt that we played to each other’s strengths and worked really well as a research team, dividing the work and then stepping up to make sure that the work got done in a really meaningful way. For me, this also just drives home the fact of collaborating with partners that really complement each other’s skill sets. This is also a great way for me personally to learn from the two of you and your fantastic research skills, and I’ve learned so much throughout this entire process.

Ka Man: Thank you, Madigan. Despite the challenges, the tight timescales and the big ambitions, the process of working with you in collaboration has felt seamless. I think you’ve been not only a partner, but an extension of our team for this particular project. So it’s really been capacity sharing in action from my perspective.

In terms of what took me by surprise, well, I’d say there were lots of surprises, but I’ll focus on a couple. The first one, like you’ve said, Madigan, was around engagement – we kind of unleashed something, giving people permission and licence to talk about AI. In some respects it almost felt like a taboo subject. Maybe it’s because of what’s been going on at the same time in the sector – the very human aspects of the funding cuts, job losses, etc – that AI felt like a very sensitive topic, perhaps, that some people didn’t feel comfortable, and maybe still don’t feel comfortable, talking about openly in their organisation.

So yeah, we didn’t know obviously how things would land when we launched the survey. It’s easy to scroll past a social media post, it’s easy to ignore a survey that just lands in your inbox – a lot of people might just delete it straight away. But like you say, Madigan, we had a lot of engagement where people were actually sharing their thoughts and feelings about it at length.

I thought, wow actually this is surprising me, this is taking it beyond a little bit of information gathering to that two-way dialogue, giving people that space and permission to talk about it. I remember one comment was, “Thanks for making people think about the ethical development of AI in the humanitarian space.” So yeah, I thought that was really nice and gratifying for us to see, this level of engagement.

Another surprise, which was maybe a bit more of a challenge for us as a team, was the actual complexity of the piece. We’ve undertaken research before – obviously, Lucy, this is the mainstay of your role, and within Comms I co-led a large-scale humanitarian learning survey last year – so we have this experience under our belt. But this had a whole life of its own, like you say [laughs]. This report was framed around the concept that Madigan came up with: the Humanitarian AI paradox.

Initially, the idea was we’ll create a survey, we’ll share the findings as we get them, we’ll create some shared narrative around it. But actually, that creation of that shared narrative was the bigger challenge. After working it through, brainstorming and teamwork, and that Humanitarian AI paradox framing that you cracked there, Madigan, with that overarching theme and then five key takeaways that we built out as the thematic areas and structure of the report – that gave us a framework. And it was a challenge to arrive there, but it was a good one for us to tackle together.

And then finally, just to share an unexpected aspect from a comms perspective: so we had initially planned to develop some multimedia content to bring the sort of mapping dimension to life. We thought, oh wouldn’t it be really interesting if we invite humanitarians to share some talking head videos filmed on their phone or computer, or maybe even voice notes about how they’re using AI? We thought people might be really excited or interested to do that.

But actually, we found that we had to shelve that idea because even though there were individuals who came forward and were keen to do that, there weren’t enough who had the permission and ability from their organisations to take part. So we wouldn’t have been able to develop a bank of stories that we could publicly share.

And like I say, it was all down to not being sure if they could do that, if it was the right thing to do, if they needed extra authorisation from senior leadership teams, especially from those within international organisations. Obviously, we had to respect that and we understood that. And even though it was a little bit disappointing just from a comms point of view, because we thought that would help with the big picture, actually we found that that in itself supported some of our research findings around that concept of a lack of psychological safety within organisations about speaking candidly and openly about AI adoption.

Lucy: Yeah, I have to say, Ka Man, I completely agree with that particular point around the tension that we’ve all described that we felt throughout the process around people’s concerns around talking about AI. For me, that is definitely the most interesting outcome of this research. So thank you for surfacing that in this conversation, because it’s been surprising.

I also just want to echo that it has been obviously lovely to work in this project team with yourself, Madigan, and Ka Man. Often in this sector we get stuck in processes and procedures and protocols, but we forget about that human connection and just actually getting on with something. It’s been so organic, and I think we’ve known what each other has been thinking most of the time, so we knew we could just change things or refine things, and we’d all be like, “Oh, that makes sense.” That’s been a really lovely part of this process, so I just want to echo that and thank you guys.

18:09: Chapter 3: Using inclusive research methodological approaches


One thing that I do want to talk about is obviously the methodology. That’s the core part of the research for me, and the most interesting – that’s how you shape the results you get. At times, this felt really different to previous research pieces that I’ve been involved in, and the HLA has been involved in, mainly because of the scale, the audience, and the timelines, as I mentioned.

And I found myself reflecting that a lot of the time, research can feel very academic and distant, and, as I just mentioned, procedural and disengaging at times. The main criterion we set out with was making sure that our work felt really different in that sense, so that it was an engaging piece. We wanted it to be accessible to as many people and as many different stakeholder groups as possible, and relevant to people’s everyday work, so that people could look at this research and see, “That’s my experience, I understand that, and I resonate with this.”

And I think using social media as a driver to get responses, for me, played a huge part in that. I was really excited at the outset of this project by the anticipated scale of responses. Again, I don’t think any of us imagined that we’d get over 2,500 results. I was so impressed with how engaged people were.

It really struck me how we’ve become used to really specific, targeted data collection in research. By opening this up so widely, one of my initial concerns was the risk of losing that specificity, and I was worried that the detail of people’s answers wouldn’t necessarily come through. But now we’re through the other side, I actually feel that this approach allowed for a greater contrasting global picture and allowed us to understand this subject area in much more detail and made it so much richer.

So, Ka Man, because you’re such a significant part of building and growing the HLA community and the network that we have, what are your reflections on the power of such a large community, and how can this inform the locally led research agenda that we have?

Ka Man: Thanks, Lucy. I loved hearing your reflections. And I think for me, this project has really shown what’s possible when you break down those kind of traditional organisational lines and silos and work together on a piece. I think having comms, marketing and research working hand-in-hand throughout has really played to our strengths and created something quite powerful, making sure that the research was shaped from the start in a really inclusive and community-centred way, with our community being focused on local leadership. So that’s been really rewarding for me and supports our advocacy for localisation and locally led leadership in the humanitarian space.

And I think what I’ve really seen and learned through this process is that our community is special. They’ve been absolutely fundamental. This piece has really enabled us to drive this relationship forward with our community, which is 800,000 strong on Kaya, and larger still when we add in everyone in the broader network that we link into. So we’re evolving that relationship from a passive or one-way relationship – where we’ve got expertise and knowledge and we’re sharing it with you – to more of a two-way symbiotic relationship, so that we’re listening and learning from the sector. And I really want to build on that because I think it’s so, so crucial.

And then finally, the Global South engagement was really strong in this piece and something that really excited me: three-quarters of respondents were from the Global South, and approaching half were from Sub-Saharan Africa. And because traditionally these audiences are not necessarily reflected in technological developments led by the Global North, I think that’s so valuable. I can see that we’ve got a really crucial and important role to play here, particularly in the humanitarian space, and that’s something I really want us to build on and support the sector with. That really speaks to the power and potential of partnership.

Lucy: It’s been astonishing, the engagement. And it’s not just been the HLA community – whilst that has been a huge part of this, we’ve obviously been able to work with Data Friendly Space, who brought their own audience into this research, and it’s been a real, complementary blend of different audiences and different kinds of engagement.

I think one of the things that really excited me at the start of this project as well was the possibility of combining all of these different voices together to see what came out of it. I always felt that the HLA and Data Friendly Space had different voices, but strengths that would really complement each other. And I think that the partnership has had huge value for each organisation in different ways, and I think we have seen that complementarity come to life after the research has been launched.

23:40: Chapter 4: Balancing voices and taking an audience-centred editorial approach

Although, I have to say, whilst we were in the midst of writing, again, another of my apprehensions was how are we going to bring all of this together in one unified voice because we are coming from different perspectives? I actually think we did a fairly good job in the end. Do you agree, Madigan?

Madigan: Yes, absolutely. I think, like you said, we definitely achieved that unified voice, but I will confess there were points in the process where I wondered – especially because we had produced such a wide range of digital products, multiple reports, and social media content – whether we had bitten off more than we could chew, especially when you consider we’re two different organisations with different audiences, we have our own different personal writing styles, and again, these different audiences to serve.

But I actually think that diversity of voices ended up playing to our strengths. I think what also worked really well was establishing early on that this wasn’t about promoting any one of our specific organisations’ agendas, but rather about serving the sector. When you’re genuinely focused on impact over attribution, those ego battles that we sometimes see derail collaborations in the sector just don’t happen – and they didn’t in this case.

Instead, I think we each brought something really essential: HLA’s research rigour with Lucy, DFS’s technical expertise, and then Ka Man came in with the communications and editorial magic to weave it all together.

I think the real test was when we had done the first, let’s say initial draft of the report, and I think it was approaching close to 80, 90 pages, and when we had to cut sections out of the report. I think for me and Lucy, maybe you’ll agree, that was the hardest part. It was like giving up part of our baby in some ways. But Ka Man did a brilliant job in executing every difficult editorial decision that really made the product – our final product, final report – really strong and reflective of the community and what we were seeing in the results from the survey. So, yeah, Ka Man did a brilliant job in executing, making sure that it all flowed and was seamless in the end.

Lucy: Absolutely, and I think that process really embodied the trust that we had in each other, and the confidence that we shared the same vision for the piece and the research, because, as you say, we approached it differently, but Ka Man ultimately had the difficult decision of making significant cuts, but for good reason, and I think we ended up with a really, really great product in the end.

So, Ka Man, I guess the question is, how was that process for you?

Ka Man: [laughs] You’re both making me sound super brutal! But yeah, no, I agree with everything you say, and I think, because we had that trust, and because we had these weekly meetings, and we’re always keeping each other up to speed on where we were at, our thought process, it meant that it just made it easier to actually go through, because we could understand each other’s line of thought and our approach. And we were all able to go, “Yeah, okay, yeah, no, I can see that,” and adjust and adapt accordingly.

So, like you say, like you both say, it wasn’t a straightforward process, given the time pressures and the volume of material that we had. Because as well as the survey, Lucy and I had conducted those in-depth interviews with participants. So there was a lot to collate, synthesise and package in a way that diverse audiences could engage with.

And because we didn’t have an existing blueprint or an existing framework, because this is quite a different way of researching, like you’ve mentioned, Lucy, we were starting from scratch. We could take inspiration and ideas from elsewhere, and certain conventions in research, but we were saying, “Right, okay, it’s a blank page, let’s go.” [laughs] So that was a challenge, but it was a good one for us. We learned a lot through the process, throughout the writing and editing process.

Collectively, we’d often take a step back and say, “Does this reflect our research findings? Is this reflecting what we’re seeing in the survey and in the interviews? Is it what our audiences are expecting us to share? Is it aligning with our overarching goals of mapping AI practice across the sector?” And if we thought maybe not, then we’d adjust, or at times repurpose for different formats.

So, for example, you may be able to tell that I like to talk, I like to share [laughs], I like to weave in storytelling elements – but ultimately, some of the extended content I’d had in the executive summary and the introductions had to be removed and moved to other formats, like this podcast series, so that we were prioritising the data and findings.

So this editorial approach aimed to create modular outputs, so that we’re meeting our audiences where they are – whether they’re all in and want to engage with everything, or skimming key insights, or honing in on the technical aspects through the dashboard and the user personas, or the more traditional research side of things through the articles. So we created this portfolio of products, if you like, which also allows us to test what types of content and approaches are landing and resonating with audiences, and that allows us to iterate, refine, and build on and out from there.

So, the editorial process was challenging, but we learned a lot through the process, and through that, we were able to sort of learn, evolve and adapt along with our audiences. So, yeah, the ethos was to be community-centred the whole way.

Lucy: I think that really shone through, and it’s shone through in so many different ways throughout this entire journey that we’ve been on. One of my concerns whilst we were in the midst of it was whether we’d really have the time to critically reflect on the data and contrast it with what was going on elsewhere in the humanitarian and technology sectors.

But I think because we were so audience-based, as you said, we were able to reformat those really important conversations into things like this podcast series and other products that we’ve been able to share. So in a way, there’s almost nothing that I would do differently now that we’ve come through it. I think that’s something that I’ve really taken away, is to really trust our process.

31:05: Chapter 5: Reflections on the research process: what would we change?

Lucy: But I guess I’m curious to hear from you both. Is there anything that you would do differently in terms of this whole journey that we’ve been on? And reflecting back on why it was so important to keep our engagement and our community at the heart of this process.

Madigan: Yeah, I think for me, one of the things that was a bit tricky at times was probably the tight timelines that we set for ourselves. I think this was both a curse and a blessing at the same time. So on one hand, I think we all put in a lot of extra time, late nights, early mornings, I think Ka Man did weekends, and again, I think that just showed our excitement for this project. But that pressure of meeting those timelines also kind of prevented us from falling into, I feel like, sometimes that classic research trap of endless refinement and perfectionism paralysis, where you’re kind of like, “Okay, let me just go back and edit it one more time, one more time.” And then you kind of get stuck in that loop.

Here we didn’t really have that option, because we had imposed those deadlines on ourselves. But in this particular moment, with AI moving so fast in the sector, and with the conversation and community we had tapped into needing evidence to inform decisions about how we’re going to be integrating AI or talking about AI, I feel like the timing was really crucial.

So if I’m honest, I think what I would change wasn’t the timeline, but maybe the initial scope management. I think we kept saying yes to additional outputs – digital products, translations that are coming, the launch events, the initial insights, the full report, user personas, articles – and I think that was genuinely because we were all so excited, and we kept being like, “Okay, well, if it doesn’t fit into the full report, we’ll do it this way and this way.” And we really saw the need to appeal to the community, because – and I think at the HLA you guys know this very well – everyone learns in a different style, and so we all really saw the value in creating materials that would appeal to audiences and their different learning styles.

But I think there was also that sort of meet-the-moment mentality, and that’s exactly right: AI isn’t waiting for us to get comfortable with research timelines. And I think, Lucy, like you mentioned earlier, this has probably been one of the more fluid research projects that you’ve worked on, and definitely that I’ve worked on. And in a lot of ways, I’m really grateful that we didn’t have fixed outputs when we first started the project.

We talked about, “Okay, we’ll have a report.” And then we kept talking, and then I was like, “Okay, now we’ll have a dashboard. And now we’ll have -” and it kind of took on a life of its own, and it was really nice not to have that sort of fixed rigidity that you sometimes get in academic research. So we could decide what would work or what wouldn’t work. And sometimes that decision was made for us in the case of, like Ka Man mentioned earlier, where we wanted to do video outputs, but people weren’t exactly comfortable with that.

So I think in this instance, we could really see where the research took us, and what outputs were needed to support the dissemination to the community, and to the sector. And I’m sure, Ka Man, that you probably have also quite a lot to say about this, so love to hear your thoughts.

Ka Man: Thanks, Madigan. I totally agree with everything that you’ve said there and articulated so well. Reflecting on what you’re saying, it’s made me think we almost took a campaign approach to this research. Maybe that reflects, Madigan, your role and my role in comms and marketing. If you look outside the sector at, say, political campaigns: there are specific asks, specific messages, and a specific call to action with an anticipated result. And that all leads to a specific moment.

So it’s like a campaign trail, and then the election. And obviously I’m not [laughs] comparing us to people in positions of power, and so on, but you can see the lessons to be learned from that type of approach. Because, like you say, we were meeting the moment – we saw this need and we were moving fast – it all led up to the launch of the report, our expert panel and inviting in the global community. We launched the report on the 4th of August and held the event on the 5th of August, so it was all tight, but for a reason: for that global conversation, to meet that demand and need.

So I wouldn’t change it, but I underestimated the energy and momentum of that campaign, and we were just so invested that we gave all of our time and resources, individual resources [laughs], to do that.

And like you say, that perfection paralysis, you know. Obviously, everyone can perfect things and iterate and refine. And if you gave me extra time, I would have continued to iterate and refine until that specific deadline. But even if I’d had a month longer, two months longer, I don’t know whether the core of the content would have been substantially different. So meeting the moment was more important than perfection. Not that perfection exists, right?

Would I change anything in the report itself? I did gauge from some comments and questions at the online launch event that some people were perhaps expecting more definitive recommendations from us. And at times in the writing process, we did debate among ourselves, didn’t we: how far do we go? What’s our role in actually outlining solutions in our report?

We decided that our role is that of conveners with technical expertise – obviously with Data Friendly Space reflecting that – asking questions, while recognising that we don’t have all the definitive answers. Our message is that we as a sector have to work together, and beyond the sector, to really get to grips with those difficult questions and come up with solutions together. So, some people might have preferred or wanted very definitive, prescriptive recommendations, but that’s not what we set out to achieve in the first place. So it’s not that I would change it, but maybe the framing might have been clearer for some audiences.


And then finally, as I mentioned, the timing was so crucial, because it’s been so interesting to see a number of reports coming out around the same time – there’s been MIT’s State of AI in Business report featuring Fortune 500 companies, and MasterCard released a white paper on African AI. And what’s been really interesting – I don’t know whether, Lucy, Madigan, you’d agree – is that even though they all have different focuses and are looking at totally different contexts, not necessarily humanitarian, all the same key themes and issues are emerging, just from very different vantage points.

So it makes me think that this timing was crucial, because now there’s this whole growing body of research where people have got something very tangible to say, right, okay, we can hone in on this particular theme now, and that’ll be a springboard to the next round of conversations. So yes, I think the timing – I feel sort of, not vindicated, but I feel like the timing was right, and that should have been the primary driver.

Lucy: 100% agree with everything that you’ve both just said. And I think it’s just been so energising – and I think vindicated probably is the right word, Ka Man, actually. Because the feedback that we’ve had has been so overwhelmingly positive, not just in terms of congratulating us for producing something, but everyone has commented on how meaningful the findings are, how much they resonate, and how much of this paradox plays out in real life, because technology companies are already moving on to the next frontier. You know, they’re already exploring what is next in a post-AI world. Meanwhile, a lot of organisations, like you say, not just humanitarian organisations, are still grappling with what AI means for them.

So there’s this real, almost, disconnect happening. So it’s really important just to say: this is our experience as a sector. And just bringing it back to the research approach – this new approach of community-based, locally led research – I think what you both said about meeting the moment and calling people to action is something that I would love to see a lot more of. Taking a communications-style approach to research, with a view to really bringing people together to have the conversations, but also to figure out what the next steps are, right? The primary purpose we should have as researchers, I think, is to advance the conversation and not just rehash it, essentially. So that’s something I’m really hoping comes from this, in this particular arena of AI and technology, but also in our broader approach as an organisation. This is something that I personally would love to see a lot more of.

41:36: Chapter 6: Research as a springboard for dialogue – next steps


So whilst I’m starting to think about the future, I think it’s really a great chance for the three of us to have that conversation: what is next? I know that we’ve all had a lot of conversations. We have taken a little bit of a break to catch our breath after what’s been a pacey summer, but there are lots of things that we’re planning and trying to get off the ground. So, what are our main priorities for the rest of the year?


Ka Man: Thanks, Lucy. I totally agree with everything that you’ve just shared there, and yeah, looking forward to next steps. It was so important to have a bit of a breather [laughs] last month, but we’ve regrouped, and part of the regroup is this conversation, actually, so it’s so nice to have this time and space. Yeah, and we’re looking forward to the next season, so to speak. So yeah, I’m really proud of what we’ve achieved together, and yeah, the next steps – well, this conversation marks the start of a new podcast series that we are working on currently.

So, we’re going to be delving further into the themes that emerged from the research, including implementation, governance, ethics, training, and more. So, in these conversations, we’re incorporating some of the unanswered audience questions from our online launch event that we held on the 5th of August, because we received, I think, over 100 questions, so we didn’t have the time and space, unfortunately, to address them all. But we thought, wouldn’t it be great to roll these conversations into the next phase, so that, really, the community continues to shape the dialogue that we are hosting?

So, yes, it’s great to have that platform with some experts, where we’re taking a global view, but also we’re going to have a particular focus on Africa, since almost half of respondents were from Sub-Saharan Africa. So, really excited to be recording those conversations over the next couple of weeks.

And we have some Q&A articles as well with these experts. So again, that’s sort of our approach of trying to package up content so that you can engage with it in the way that works for you. These are going to be released on the road to NetHope. I don’t know, Lucy, if you want to just jump in and share about the NetHope Summit.

Lucy: Sure, yeah, so the three of us will be attending the NetHope Summit at the end of October, where we’ll be convening a small group of people who will be able to sit around and delve into the research that we’ve delivered and really unpack what it means in terms of next steps. It’s deliberately designed to be a small session to really focus the conversation on designing solutions. And whilst we don’t know what’s going to come out of that session yet, because we want it to reflect the conversation in the room, what we hope to do is bring it back to our audiences, to test, refine, learn, and iterate, so that we continue to move things forward. I think we’re all looking forward to being in person. We’re really excited about being there at NetHope, to meet as many people as possible, to learn, convene, and carry this forward.

Ka Man: Brilliant, yeah, I’m so excited, and for the three of us to be in a room together, in a physical room [laughs], will be really nice – and in the same time zone, so that’s lovely. Yeah, so really excited for that. So, we have these podcasts, there are six in total, and we’re going to be releasing those on the road to NetHope, weekly, as a countdown. So, if you’re going to be there in Amsterdam at the NetHope Global Summit, please do look us up, because we’d love to meet with you and have conversations.

And then to continue the conversation from the NetHope conference, we will be hosting a couple of webinars in collaboration with NetHope, so we’re really excited about that.

And then finally, still very much in the early stages, we’re planning a light touch pulse check. So, not a full survey, not a full research piece, but just a light touch pulse check to see if the dial has shifted on any of the key AI adoption metrics – how often are you using AI, what kind of training have you received, how is your organisation using it, how far is it embedded – so that we can see if anything has changed.

Madigan: I would just love to jump in and reiterate what you said about our session at NetHope. I’m so excited to finally be in the same room with you guys, and I think we will all be just giddy with excitement: we’ve put our research out there, and now we also get to have these meaningful conversations in a smaller setting. So I’m really excited about how we can hopefully nudge the humanitarian sector to critically reflect on their own AI conversations and what they’ve been experiencing, and then the actions that they can take now.

I think something else that has emerged – not as a priority, but organically – has been continuing the conversation. We’ve been invited by different humanitarian organisations onto different podcasts to talk about our research, and that’s been a really wonderful experience: seeing the engagement and reflections from the humanitarian community, and being able to support humanitarians and their organisations in having these conversations around AI, including some of the harder ones about shadow AI usage, where organisations see themselves, where humanitarians see themselves, and how to hopefully bridge that gap between them.

And I think we’ve had some really fascinating dialogues there, so I’m curious and interested to see where that takes us, and it’s also really inspiring to hear from other organisations as well, where they are on their AI journey, and to learn from them in this process.

Like Ka Man said, I’m so excited for all the follow-up conversations that are going to be on Fresh Humanitarian Perspectives, as there are some really great guests lined up. I think there are also a lot of follow-up opportunities – Lucy and I both have multiple pages of notes on possible fields of research – and I’m really hoping that with the HLA, we’ll be able to continue to do this really critical research.

And then just specifically for Data Friendly Space, my priorities there: after the research came out showing that there’s still a lack of widespread adoption of AI among organisations, we’re really focusing on supporting organisations in their AI journey – so watch this space, we have something really exciting launching in the coming weeks.

And then, of course, we’re always focused on building sector-specific tools. So we’ve recently launched the Occupied Palestinian Territory Situation Hub with Save the Children – and I know the HLA is part of Save the Children. So we’re really grateful to be able to continue to build these solutions that prioritise the ethics and safety that humanitarians raised in the survey. But with that also come what I hope are opportunities and ideas about how to train and build the capacity of humanitarians in using these tools, or using commercial tools in a safe manner. So I’m really excited to see what the HLA, NetHope and other organisations are coming up with. I know that NetHope has some trainings on the Kaya platform, so I’d really encourage everyone to take a look at that.

And yeah, I think there are so many opportunities that with priorities it’s like, where do you even focus or begin? But I think what we’ve been doing is, okay, first make sure that this critical research is out, and now having this conversation about what’s next, and what are the concrete actions that we can take to support. But again, really excited about everything that’s upcoming.

Lucy: You’re completely right, Madigan, there are reams of notes and ideas coming out of all of our heads, I think. I know we’ve had some conversations in recent days just exploring what’s possible.

50:23: Chapter 7: Closing reflections


And I think just to reflect again a little bit here. I really came into this project thinking that we’d be exploring the technology and the data science elements. That’s my professional background. But what’s really come out strongly is the human element of artificial intelligence.

And I think that’s so important to remember, especially in the humanitarian sector, in a humanitarian context: it’s the human element, because it’s not around technology, it’s around leadership, it’s around organisational capabilities and capacities, and it’s around psychological safety and relationships. And I think that’s been such an important reflection for me to take away. And I think that’s something that we keep coming back to.

As we’ve all said, there’s a lot that we could explore, so I will pause here for a moment. But I guess before we finish the conversation, is there anything that either one of you would like to add at this point to our reflections?

Ka Man: Yeah, so yeah, thank you very much, Lucy. I totally agree with what you said about this being more about people than technology, in a sense. That’s what really surfaced throughout this whole process. It’s just been really meaningful as well, I feel. So yeah, thank you very much for really highlighting that humanitarian leadership dimension. And I just wanted to thank you both, as co-leads, for being on this fast-paced, challenging but rewarding journey, collaborating under a shared goal.

I want to thank everyone who’s engaged with this research, particularly those who don’t normally engage in the AI or tech space, and from my perspective, particularly women, because women’s voices aren’t always prominent in tech conversations and discourse. I just also wanted to highlight that we’ve delivered this work as part of our business as usual. It’s not received any external funding. And I’m really excited at what we could potentially achieve, and how we could scale this work if we had dedicated funding and partnerships, and we could achieve deeper and more wide-ranging impact. So, if you’re listening, and you’re a potential partner or collaborator, we’re keen to connect with you, so please do reach out to us, because we’re always open to conversation, to explore ideas.

Madigan: Yeah, I mean, I completely agree with you, Ka Man, about literally everything you just said. I just also want to add that I think the research confirms something I’ve long suspected about the humanitarian sector’s caution around new technology adoption. And while we do need to be careful, we need to be ethical, and we need to be asking the hard questions because of the communities that we serve, being careful doesn’t mean we have to be paralysed – and sometimes I feel like we get into this decision paralysis, and we can’t make up our minds.

So I think what really excites me about this research is that we’ve proven that community-led research can happen at speed without sacrificing rigour, and I think it’s opened the door to what community-led research can achieve. We’ve really shown that practitioners’ voices matter in shaping how we think about AI adoption, and we’ve also created a baseline for future research that we can build on – like that little pulse check, which again won’t be a full survey, but is something that other humanitarians can go in and build on and research and develop further.

And again, like you said about the community engagement, it’s been really remarkable. So to anyone that’s been quietly experimenting with AI and feels like your voice doesn’t really matter: it really does. I want to reiterate that you’re not alone, you’re not behind, and that all of your experiences are really valid. The future of AI can’t just be written in Silicon Valley boardrooms; it needs to be written by people, by you. You are the users, and how you engage and implement with thoughtfulness, with intent, with ethics is really going to matter going forward. So, yeah, just thank you so much to everyone that has participated in the research, and to the community that has engaged with it.

Lucy: It really has been a privilege to act, almost, as a vehicle for everyone that has contributed to this research, and continues to contribute. That’s what I feel really grateful for, and really privileged, to be involved in bringing people together. It’s something that I think we will never take for granted. And I’m an eternal optimist: there is a real sense that things are shifting because of your commitment to engaging with this topic. So, to everyone that has listened, everyone that has read the report, interacted with the dashboard, or attended the launch event – thank you, because we are acting as your voice, and we hope we’re doing you proud.

So thank you both very much for this conversation. As we said, there are more podcasts planned for the next couple of months, unpacking all of these discussions and conversations in more detail and hearing from more experts, so do stay tuned to Fresh Humanitarian Perspectives from the Humanitarian Leadership Academy and our wonderful partners around the world, including Data Friendly Space.

[Music fades]


Continuing the conversations: new Humanitarian AI podcast miniseries

This conversation marks the start of a new humanitarian AI podcast miniseries which builds on the August 2025 research: ‘How are humanitarians using artificial intelligence? Mapping current practice and future potential’. Tune in for long-form accessible conversations with diverse expert guests, sharing perspectives on themes emerging from the research, including implementation challenges, governance, cultural frameworks and ethical considerations, as well as localised AI solutions, with global views and perspectives from Africa. The miniseries aims to promote information exchange and dialogue to support ethical humanitarian AI development.

▶️Listen to episode 2: Bridging implementation gaps: from AI literacy to localisation – in conversation with Michael Tjalve [listen here]


About the speakers

Lucy Hall is Research Evidence and MEAL Lead at the HLA, working at the intersection of humanitarian action, locally led innovation, and ethical AI. Her work focuses on turning complex information into meaningful insights, enabling systems change, and building tools that amplify the voices and leadership of communities closest to crisis. With a background in humanitarian response, she brings a sharp lens to equity, power, and evidence – championing approaches that move beyond theory into action. Lucy is currently exploring how AI can be made accessible, responsible, and genuinely useful in low-resource and crisis-affected settings. She believes innovation must be grounded in trust, local ownership, and real-world utility – not just governance frameworks or flashy tech – and she designs sessions and strategies that reflect this ethos.

Madigan Johnson is the Head of Communications at Data Friendly Space. She is a digital expert specialising in user behaviour and experience, co-design, and storytelling, with a focus on the practical applications of artificial intelligence in social impact contexts. Following her Master’s in International Humanitarian Action through the NOHA network, Madigan pivoted to the private tech sector, where she worked for several years in quality assurance, user behaviour and analytics, and creating digital experiences for major e-commerce players and social impact startups. She now contributes to DFS’s mission on responsible AI implementation and human oversight in AI-powered humanitarian applications, with research interests focused on human-centred design principles for AI systems that prioritise user agency, transparency, and ethical considerations.

Ka Man Parkinson is Communications and Marketing Lead at the Humanitarian Leadership Academy. With 20 years’ experience in communications and marketing management at UK higher education institutions and the British Council, Ka Man now leads on community building initiatives as part of the HLA’s convening strategy. She takes an interdisciplinary people-centred approach to her work, blending multimedia campaigns with learning and research initiatives. Ka Man is the founder and producer of the HLA’s Fresh Humanitarian Perspectives podcast and leads the HLA webinar series. Currently on her own humanitarian AI learning journey, her interest in technology and organisational change stems from her time as an undergraduate at The University of Manchester, where she completed a BSc in Management and IT. She also holds an MA in Business and Chinese from the University of Leeds, and a CIM Professional Diploma in Marketing.


➕ Companion episode: Reflecting on our community-centred humanitarian AI research: your questions answered


Plus, the community conversation continues: tune in to a companion episode where the team respond to community questions raised at the online report launch event held in August 2025. The questions cover specific report-focused queries as well as broader questions around the application of AI by humanitarians. Listen to the episode on Spotify, Apple Podcasts, Amazon Music, Buzzsprout and more.


Episode transcript | Reflecting on our community-centred humanitarian AI research: your questions answered

The podcast transcripts were generated using automated tools. While efforts have been made to check their accuracy, minor errors or omissions may remain.

[Intro music]

[Voiceover, Ka Man]: Welcome to Fresh Humanitarian Perspectives, the podcast brought to you by the Humanitarian Leadership Academy.

Lucy: Hi everyone, I’m Lucy Hall. I’m the Research, Evidence and Evaluation Lead at the HLA and I’m honoured to be your host today. As part of our research launch, we received some wonderful questions from our attendees. We wanted to make sure those questions were answered, so we’re starting by answering some of them ourselves – the research team.

So let’s get stuck in straight away, because there’s a lot to cover and because you are a wonderfully engaged community. The first question we’re taking in this episode came from Zeynep, who asked: “Did the research surface any concrete examples of AI tools being effectively applied in humanitarian work?”

Madigan: Yeah, I’d be happy to take this. We do have some of the use cases in our full report – if you take a look, there are a couple of very concrete examples there. But I’ll be honest, there is still some hesitancy to publicly discuss AI use cases, or how people are actually using AI in their day-to-day work, which I think is telling about where the sector is culturally.

Again, like I said, we’ve documented that, and we also included in the report quite a lot of the comments and feedback we received through the survey. People were using AI in very practical applications. A lot of the time, what we were hearing and seeing from the survey results is that people weren’t building elaborate AI systems like predictive analytics; rather, they were using existing tools to solve their immediate problems, such as emails, translations, and donor proposals or reports.

So yeah, think of ChatGPT for rapid proposal drafting, or a case we had where a team was using it for data cleaning for needs assessments. The most interesting examples, though, came from smaller, locally led organisations, which didn’t seem to be as constrained by institutional risk management. So we were also seeing that a lot of innovation is happening at the field level, not at headquarters. And this innovation was really trying to address the problems teams face in their day-to-day work – basically, wherever they could use AI to get time back so they could focus on something else. I think that’s where we really saw the most concrete examples of AI tools being used effectively.

Lucy: What I will add is that the timing of the research probably highlighted these cases more, because the sector is under huge resource constraints – time, money, people. People’s time is so precious, and I think AI is a tool they’re using to help them manage their time and resources much more effectively. That was just an observation I made about the wider context of how people are using AI and how it influences AI use in the sector.

Madigan: Yeah. One thing just to add to that: I know the UKHIH has a directory of AI tools and use cases – basically of how humanitarians are implementing AI. So I would really encourage people to go and look at that directory and at existing tools. You don’t always need to build something specific to your use case – there might be other organisations with a similar use case to yours. So try to find something that might complement your work instead of having to start from scratch. That’s something else I would highly recommend.

Lucy: The next question we had from Zunera is: “How is AI expertise defined? Is it expertise in using AI tools or having an understanding of the mechanisms behind these tools?”

Ka Man: Yeah, so I can shed a bit of light on this one. In the survey, respondents were asked to rate their overall digital skills in one question and in a separate question, they were asked to rate their AI skills. So it was a self-assessment and they could choose from: beginner, intermediate, advanced or expert.

So we took a broad self-assessment approach, with statements that people could align themselves with and agree to. The categories were:

  • Beginner: “Little or no experience with AI tools”
  • Intermediate: “I can use common AI tools for basic tasks confidently”
  • Advanced: “I’m comfortable applying AI tools to solve complex problems”
  • Expert – the level the question specifically asks about: “I have deep knowledge of AI concepts and often develop or customise AI solutions”

We also gave people the options “don’t know” and “prefer not to say”. We felt that was a good enough categorisation without getting too complex – we didn’t want to make it too specific, for example around generative AI versus specialist AI tools. So that was the basis on which the definitions were formulated.

Lucy: Right. And I think that also answers a similar question we had in from Elliot, about how we determined these levels of expertise. Self-assessment is a really good way of evaluating against these criteria, but we don’t have sector standards – we don’t have common terms to define AI capabilities, which would obviously help standardise this across the sector if they were in place. So thank you, I think that’s really helpful.

Now, another question that we’ve had in from an anonymous person is: “How reliable can reports be perceived to be if they are developed using platforms like ChatGPT or Grammarly, and how might donors in particular view this? How do you see organisations being held accountable for the content of reports generated using AI?”

Madigan: This is a really great question, and I think it cuts to the heart of the professional ethics and integrity of our sector. For me personally – and I would love to hear your thoughts as well – using AI for grammar checking is like using spell check. It’s a tool. Using ChatGPT or Claude to generate entire sections of a report without any human review, judgement or oversight – that’s a different conversation completely.

And so for me, the question isn’t really about the technology; it’s about transparency and human oversight. If you’re using AI to help structure arguments or improve clarity – say you’re not fluent in the language of the donor report, and AI helps make sure it reads consistently – those outputs are still genuinely your work. I think that’s defensible; again, it’s like using a tool such as spell check to help you in your work.

I do think that having AI generate findings without any human involvement or review can be problematic. From a donor perspective – and if there are donor agencies out there, I would love to hear from them – how I would see it is: if the report accurately represents the work done and the insights gained, and AI helps your organisation communicate with the donor more effectively, I feel like most donors won’t object.

I think we can all agree in the humanitarian sector that donor reports usually take a lot of staff time and energy, so anything that gets your message across more clearly and effectively would be beneficial. But if AI is creating content – because we know that AI, especially generative AI, can hallucinate – and that content doesn’t reflect the actual programme learnings, then there’s an accountability problem that goes beyond tool usage. And again, that’s why human judgement and oversight are so critical when you’re using these tools.

So yeah, those are my thoughts. I don’t know if you have your own, and again, I would love to hear from donors on how they think about this. Actually, we just had a proposal that asked us to declare whether we were using AI in the creation of the proposal concept. So I think we’re going to see more and more of these conversations take hold.

Lucy: Yeah, I’ve seen that a few times – proposals specifically asking about AI usage. And I’ve also seen it in recent published pieces: in a bibliography or reference section, some credit goes to the AI agent that was used. I think that’s how we strengthen our accountability and transparency – it’s about being honest.

Madigan: Yeah. And as we said at the beginning of the report, and when we launched it, we did use AI to help with the report as well – though with massive human oversight and judgement from our end – so we want to be fully transparent about that too. And I think the more conversations we can have around transparency and accountability, the better.

Lucy: I’m going to move on to a really interesting question that I think we’ve been grappling with ourselves. It’s come from an anonymous audience member, who asks: “What is the source of organisational inertia?” They found that several of their departments are actually preventing more systematic training. I think this goes to the heart of the research – what are your thoughts?

Ka Man: Yeah, so I’m happy to share some thoughts to reflect what we found in the research. This is obviously a whole topic that we can discuss at length, but just to address the report specifically: the inertia that you described is due to an overall lack of organisational AI readiness.

So for example, in the survey, the top three AI implementation barriers were:

  1. A lack of technical expertise among staff
  2. Funding constraints
  3. Data availability and quality

So the foundations aren’t there for people to meaningfully engage with AI, even if the individual will is there. At the same time, when we asked questions around levels of investment, we found that the majority of survey respondents reported that their organisations – of all kinds – have very low or low levels of investment in AI. That investment was measured in terms of budget allocation, staff training, infrastructure and technology, and research and development.

So because of this constrained picture across the board – this lack of readiness – leadership within organisations, particularly international organisations, understandably holds mixed views on the effectiveness of AI, and that came through in this research. Even if an individual is really excited about the potential of AI and has expertise, the overall lack of organisational readiness contributes to the inertia you mention. It reflects significant challenges across the board – really infrastructural and organisational.

Lucy: Yeah, completely agree. So the next question has come in from Suzy, and she has asked: “What kind of awareness of AI risks did respondents have across different aspects, from data privacy to overconfidence in results to bias to tech-facilitated gender-based violence specifically? And what do people think they needed in order to mitigate any of these risks?”

So I noticed that there was really high awareness of data privacy – in particular, making sure that whatever was inputted into commercial platforms was depersonalised, so that you couldn’t identify anyone at risk of harm. And I think that was really across the board. It really speaks to our commitment to the humanitarian standard of do no harm, and it shone through in all of the conversations – it was behind a lot of the rationale for why certain people had built their own AI agents, so that there was no risk of a data breach.

Essentially, there are always going to be risks of harm in terms of what you put into AI and what it produces, and that’s where training, governance and standards will really help. I think that’s the primary mitigation that needs to be built into the humanitarian sector – governance and standards – because organisations currently lack confidence and guidance. They don’t know how to share this widely with staff, and I think that’s where some of the anxieties about engaging with AI really come to the forefront.

I think there’s a real opportunity to create something that strengthens risk mitigation around data protection and privacy, and also around overconfidence in results and bias. Speaking as someone with a background in research and data, that means using good practice: using AI as a tool to help with data analysis, but not treating it as the sole correct answer. We have to engage our critical thinking and our resources as humans.

Madigan: I mean, one thing that stands out to me is that a lot of organisations already have their own data privacy and governance policies. The question for organisations is how to translate those for their staff, so people understand the dos and don’ts of data – what can and can’t be inputted into AI systems. I know some organisations are already having these conversations and working with their staff to figure this out, but I would love to see it become a wider conversation, with organisations sharing their policies and governance standards with the wider community so that others can learn from them. I think that would be really helpful.

Lucy: The next one has come in from Simbarashe – and I hope I’m saying your name right; apologies if not. They note that there seems to be more focus on the general large language models like ChatGPT, DeepSeek, Copilot, Claude and so on, and they would love to know how organisations have adopted in-house models built for specific business cases – did we uncover any examples of these? Ka Man, would you like to comment on this?

Ka Man: Yes, sure. So it’s a great question. We received what I’ve described as a relative handful of detailed use cases relating to in-house, purpose-built systems. I thought that relatively low volume was quite surprising at the beginning of this exercise, but on further exploration I realised it reflects early organisational adoption patterns in the sector, with only 8% of survey respondents telling us that AI is highly integrated and embedded in their organisations. And that reflects the low investment patterns I mentioned previously.

And as I’ve also talked about, there’s an unwillingness or uncertainty among individuals about disclosing details of their in-house systems, whether for commercially sensitive reasons or because they don’t feel authorised or empowered to share that information.

So there are a couple of detailed use cases in the report – if you’ve not had the chance to check them out, do take a look. A notable one for me was an INGO in Lebanon, where a data expert built his own secure offline system to address connectivity and security challenges. And there’s another from an NGO in Kenya, which customised a Microsoft product to develop its own custom chatbots, trained on specific languages to help with community engagement. So do take a look at the details.

And this is an area where I really do want to build our knowledge, awareness and understanding, so I think future phases of the research will dig deeper into these in-house models and what’s possible. But as Madigan mentioned earlier, do check out the UKHIH use case directory, because there are more examples there.

Lucy: So another quick question has come in from someone who wants to remain anonymous. They often get asked about the environmental impact of AI, and want to know how we suggest talking about and acknowledging this.

Madigan: Yeah, I can take this one. Firstly, I think we have to acknowledge it. There is a real environmental impact of AI, and it is quite significant: data centres consume enormous amounts of energy, and training large models has a substantial carbon footprint. This is the reality.

But I think we do need to put it in perspective for humanitarian work. For example, the carbon cost of running a ChatGPT query is roughly the same as running a light bulb for 20 minutes. If AI helps you write a more effective funding proposal that secures resources for climate adaptation work, the math may well work out in your favour.

That said, we need to acknowledge that there is this cost and be really intentional about our use: use AI where it adds genuine value, not just where it’s novel or where you feel you should be using it because other people are. The other thing we’ll see is that humans in general are quite the innovators. We’re already seeing examples – in Switzerland, for instance, they’ve just launched an LLM supercomputer running on 100% renewable energy. So hopefully we can push Big Tech and others to focus on renewable energy, and make choices to invest in systems and supercomputers that actually align with humanitarian values.

But most importantly, I think we shouldn’t let perfect be the enemy of good. There is also the environmental impact of humanitarian crises themselves to consider, especially in areas of conflict, and sometimes the value of using AI tools to help respond to those communities outweighs their carbon footprint. Again, this is just my sense. The main thing is to focus on proportionality and purposefulness in how we use these tools. And we shouldn’t sweep this under the carpet – we should be having dialogue and engaging with climate actors as well, because that’s where we could combine forces and find new innovations that allow AI to scale in a more ethical way, one that aligns with humanitarian values. I don’t know, Lucy, Ka Man, if you have anything to add to that.

Ka Man: Yeah, I agree – I think it’s very much about intentionality in usage. It’s a big topic, and it’s something that came through in the open comments in the survey, particularly when people were asked about ethical concerns. People are well informed about this impact and do want to talk about it. At the same time, it can feel a bit taboo or polarising – “AI is bad and you shouldn’t use it.” But like you say, Madigan and Lucy, that nuanced discussion and thinking is really crucial.

Just a personal anecdote – I’ve recently been reading media articles about AI innovations in the social impact space, and they’re really interesting and exciting. But what I’m not seeing much of is the climate impact, or any plans to offset it. That kind of commentary and thinking should be a central, integral part of presentations and communication about AI projects. For example, when setting up a business case and considering the system as a whole, climate impact should be built into the problem space, and whether the benefits outweigh the environmental impact should be part of the go/no-go thinking. It needs to be there.

Lucy: Yeah, I agree that we need to keep talking about this, because as humanitarians we’re the ones who see the impact of the climate crisis, and how it affects communities all over the world. We have to keep looking to shape solutions that reduce the carbon footprint of AI specifically, and of all the other tools we use, because if we’re not bringing this to the table, it raises the question of who else would.

So we’ve had a great question from Williams, who’s asked: “Until recently, the use of AI was seen as taking a shortcut to get work done. Do you think it should be viewed that way, or is it an opportunity to accelerate and strengthen humanitarian work?”

Madigan: Yeah, I think this one is really interesting, because the “shortcut” framing reveals how we still think about work in terms of time spent rather than value created. Is using a calculator a shortcut in math? Is sending an email a shortcut compared to a handwritten letter?

I think AI can absolutely be a shortcut in the best sense: it can help us get to insights faster, communicate more clearly, and free up our human intelligence for problems that actually require human judgement. So the question really isn’t whether it’s a shortcut, but whether it’s an effective one that maintains quality and integrity.

Where I draw the line, though, is when AI becomes a substitute for thinking rather than a tool that enhances thinking. Ultimately, AI is just a tool – maybe 30 years from now, with robots and everything, it will have taken on a life of its own. But if you’re using it to avoid engaging with complex problems, or to generate content you may not understand, that can be quite problematic. If you’re using it to handle routine tasks so you can focus on strategy, relationship building or the critical analysis that really needs human oversight and judgement, that’s exactly where technology provides useful shortcuts, so we really can focus on the human intelligence.

I think the humanitarian sector has always been about leveraging available resources, and AI is just one more tool in that toolkit. But we need to maintain the human component, because without it we lose our humanitarian values. So it should be seen as a tool – and for some it might be a great opportunity to accelerate and optimise humanitarian work – but your domain expertise and knowledge always need to be integral to how you use it.

Lucy: I think that sounds great, and it links to the next question as well, from Jonathan, who asks: “What do we think about equating AI and NI? Is it appropriate, or what are the risks?” Madigan, I’m going to put this back to you.

Madigan: Yeah – and I’m hoping I understand correctly: by NI, I think he means natural intelligence. I think that comparison is fundamentally flawed. Cornelia, who was one of our panellists at the report launch, also talks about hybrid intelligence, so I’d really encourage everyone to check out her work, because she has a brilliant way of talking about it.

For me, right now, current AI systems are still very much sophisticated pattern recognition and text generation tools. While they appear to be taking on some form of thinking, reasoning and understanding, and we’re seeing that developed more and more, there are still humans behind every interaction – humans feeding into the system and telling it how to respond. So for me, they’re more like advanced auto-complete systems than actual intelligence. Others might debate me here, but that’s just how I personally feel.

This distinction matters because it affects how we use these systems and what we expect from them, right? I’ve heard cases of people using ChatGPT as an online boyfriend, girlfriend or therapist. There’s a risk in treating AI as equivalent to human intelligence: we were seeing people over-rely on it for tasks and for emotional support, in places where we really should require genuine human understanding and judgement.

On the other side, we can underutilise it when we expect it to fail at tasks it’s actually quite good at – when we think, “oh, I can just do this myself because I’m quicker.” In some cases AI really is quicker: take data cleaning, which is usually a very time-consuming, labour-intensive task for a human but which AI can do quite quickly. It can process vast amounts of information and identify patterns that humans might miss, simply because there’s such an overwhelming amount of data.

But it still can’t quite grasp context, exercise judgement, or navigate the complex social and community dynamics that define humanitarian work. So I really think we should see AI as a very capable assistant, not a replacement for human or natural intelligence. Framing it that way helps us use it appropriately – neither buying into the hype nor giving in to unnecessary fear. But again, I’d really encourage everyone to look at Cornelia’s work on artificial intelligence, natural intelligence and hybrid intelligence, because she probably does a much better job of explaining it than I just did.

Lucy: It was great to hear, and a really lovely answer, so thank you. Our final question for today is from Gülsüm – and again, apologies if I’ve mispronounced your name – who asked: “What is your view on the use of AI-generated images to depict crisis-affected communities in humanitarian communications?” Ka Man, I don’t know if you’ve got any views on this.

Ka Man: Thank you. This is a really good question. With my comms hat on, I would say that as a general rule of thumb I have a preference for human-centric, authentic imagery. But it obviously depends on the context, the application, and how you’re using a particular image. Whether you’re using real-life or AI-generated imagery, ethical considerations have to be front and centre.

For me, I prioritise real photos of communities, but I recognise I’m in a fortunate position working at the HLA, because through Save the Children we have access to robust, contextually appropriate imagery. I know that the appropriate protocols and good practice are being followed – informed consent and data protection are really central to what they do – and that they’re thinking about the dignity and agency of the people depicted in the images.

If I don’t have appropriate imagery for a project myself, I might use freely available stock imagery where that’s appropriate, or a blend. In the AI report itself, for example, we took that approach – a mixture of real imagery and stock images – but we made it clear where the images came from and what they were meant to depict.

I’m not wholesale against the use of AI imagery, but there are risks, linking to what Madigan said earlier about the critical importance of human oversight and contextual understanding. Say you have a gap in imagery and want to depict a crisis-affected community member: if someone without that contextual understanding, or without a deep appreciation of the nuance, uses, say, ChatGPT or Midjourney, there’s a risk that the generated image lacks cultural nuance. It could perpetuate stereotypical or even harmful depictions without the sense check and involvement of someone who does have that understanding. And obviously that risk is heightened when you’re depicting a crisis-affected community.

An LLM, for example, doesn’t have a real-world understanding of what a refugee camp looks like. That said, with human oversight and the right contextual understanding, AI imagery could fill a genuine gap, representing groups who would otherwise not be portrayed. So it’s all about balance and oversight – linking back to the NI conversation we’ve just had, it’s a combination, not a replacement for human decision-making.

And some closing thoughts for when you’re looking to source an image for communications. Ask yourself: what are you trying to portray, and do you really need a literal representation of a community member from a crisis-affected context? Would it be appropriate to use an alternative approach or graphical treatment, such as a thematic representation? Could you portray a message of hope or optimism, or are there other graphical representations and devices you could use? Take a holistic view, but human judgement has to rule.

Lucy: Thanks so much, Ka Man – yeah, I think that’s right. Did you want to add anything from your perspective as well?

Madigan: I just wanted to add a tiny anecdote about AI-generated images. I would love to know whether people have actually had success creating AI-generated images, especially of humans, because oftentimes they have extra fingers or there’s something wrong with their faces. I’d really love to hear about others’ experiences, because I’ve tried quite a few different AI systems for this, and what I’ve normally found is that if you’re asking for a more photorealistic image, it’s quite…

Where I have seen some improvement with AI-generated images is with illustrations. Because you’re not actually depicting a real-life human, that’s something we should consider as well – having AI help you generate some of those prompts for illustrations. But again, as Ka Man said, you need that human oversight: taking that prompt or image and truly making it your own, with the cultural understanding, nuance and human judgement to know whether it’s appropriate or not. So I would suggest that as an alternative if you’re trying to depict crisis-affected communities. It all depends: are there other alternatives? What are you trying to convey in your messaging?

Lucy: Thank you both for some really thoughtful and considered responses to our questions. So many questions came in that we haven’t been able to get to all of them in this particular episode, but there will be more to come throughout the Fresh Humanitarian Perspectives AI series. Please do keep listening, keep engaging, and keep asking us questions – it’s something we enjoy talking about, so please do reach out to us.

Note

Episode produced by Ka Man Parkinson, September 2025.

Website: UKHIH Directory of AI-Enabled Humanitarian Projects

Podcast: ‘Double literacy’: harnessing AI for humanitarians and social change – Ka Man Parkinson in conversation with Dr Cornelia C. Walther. Madigan mentions Cornelia’s research into hybrid intelligence during the companion episode.

Tools & online learning

Videos: AI Fluency from Microsoft hosted on our Response Learning Hub (EN, AR, ES, FR)

AI platform: GANNET humanitarian tools created by Data Friendly Space

Course: Community Crisis Intelligence on Kaya

Course: AI For Everyone from Coursera

Share this podcast

Did you enjoy these conversations? Please share this episode with someone who may find it of interest. Please help us grow our show by following Fresh Humanitarian Perspectives on your favourite podcast platform and leaving a review. We appreciate your support!

Newsletter sign up