5th August 2025
Thank you to everyone who joined us on 5 August 2025 for the online launch of our AI adoption and aspirations in the humanitarian sector research report, a partnership between the Humanitarian Leadership Academy and Data Friendly Space.
We were delighted by the global engagement in this event, with 739 attendees joining from 91 countries. Access the recording and slide deck below. An edited transcript can be found in the footer of this webpage. The team has compiled frequently asked questions from the Q&A session and is planning content in audio format in response.
Event description
Join us to discover key insights from this first-of-its-kind global study, based on responses from over 2,500 humanitarian professionals across 140+ countries and territories. You’ll hear from a panel of practitioners and experts who will share their experiences and perspectives on the findings, followed by an interactive Q&A session.
Why attend?
This is your opportunity to be among the first to explore findings from the largest global study on AI in the humanitarian sector. You’ll gain insight into how humanitarians are using AI and how organisations are responding to the emerging challenges and opportunities. Take part in a growing global conversation on ethical, practical and locally grounded uses of AI in humanitarian work.
Who is it for?
This event is open to anyone with an interest in this space – including humanitarian practitioners, policymakers, funders, researchers, technologists and government actors.
Attendees will be able to interact using the chat function and submit questions for the Q&A session.
Speakers
- Project co-leads Ka Man Parkinson, Communications and Marketing Specialist and Lucy Hall, Data and Evidence Specialist from the Humanitarian Leadership Academy, and Madigan Johnson, Head of Communications at Data Friendly Space (DFS)
- Expert panellists Dr Cornelia C. Walther, Humanitarian leader and senior fellow, University of Pennsylvania/Wharton School, C. Douglas Smith from Data Friendly Space, and Ali Al Mokdad, senior strategic humanitarian leader (independent).
About the speakers
Lucy Hall, Data and Evidence Specialist, Humanitarian Leadership Academy
Lucy Hall is a data and evidence strategist working at the intersection of humanitarian action, locally led innovation, and ethical AI. Her work focuses on turning complex information into meaningful insights, enabling systems change, and building tools that amplify the voices and leadership of communities closest to crisis. With a background in humanitarian response, she brings a sharp lens to equity, power, and evidence — championing approaches that move beyond theory into action. Lucy is currently exploring how AI can be made accessible, responsible, and genuinely useful in low-resource and crisis-affected settings. She believes innovation must be grounded in trust, local ownership, and real-world utility — not just governance frameworks or flashy tech — and she designs sessions and strategies that reflect this ethos.
Madigan Johnson, Head of Communications, Data Friendly Space (DFS)
Madigan Johnson is a digital expert specializing in user behaviour and research, design, and storytelling. Following her Master’s in International Humanitarian Action through the NOHA network, Madigan pivoted to the private tech sector, where she has worked in both digital agencies and startups. Throughout her journey in tech, Madigan has maintained her commitment to creating meaningful impact, expertly leveraging user-led methodologies and data analytics to shape exceptional digital experiences. Today, as Head of Communications at Data Friendly Space (DFS), she brings her expertise in digital technology, content strategy, and community engagement to the frontier of humanitarian AI innovation. At DFS, Madigan leads strategic communications for AI-powered tools designed to support humanitarian decision-making. Her work focuses on building trust in AI systems through transparent storytelling, ethical framing, and inclusive community engagement. She is particularly passionate about demystifying complex technologies and crafting narratives that foster confidence and clarity around AI’s role in sensitive humanitarian contexts.
Ka Man Parkinson, Communications and Marketing Specialist, Humanitarian Leadership Academy
Ka Man Parkinson is a creative and strategic communications leader with 20 years of experience driving international marketing and communications across the nonprofit space. Ka Man led impactful campaigns for the British Council and UK higher education institutions before joining the HLA in 2022. Ka Man is passionate about creating meaningful change through compelling storytelling that informs, connects and inspires global communities. In her role at the HLA, she helps to drive and shape a collaborative culture of thought leadership by creating spaces for connection and sharing, and amplifying voices from across the sector. Ka Man completed a joint honours degree in Management and IT in the era of dial-up internet – and remains a constant, curious observer of systems, people and technological change. She is based in Manchester, UK.
Dr Cornelia C. Walther, Humanitarian leader and senior fellow, University of Pennsylvania/Wharton School
Cornelia C. Walther, PhD, is a thought catalyst in hybrid intelligence and prosocial AI. Her scope combines theory and practice. As a humanitarian practitioner, she worked with the United Nations for two decades in large-scale emergencies in West Africa, Asia, and Latin America, focusing on hybrid advocacy and social and behavioral change. She collaborates with universities across the Americas and Europe as a lecturer, executive coach, and researcher. She is presently a senior fellow at the University of Pennsylvania’s School of Dental Medicine and the Wharton Initiative for Neuroscience (WiN)/Wharton AI and Analytics. She is affiliated with MindCORE and the Center for Social Norms and Behavioral Dynamics at the University of Pennsylvania. Since 2021, her focus has been on aspirational algorithms and the potential to harness technology for social good: in brief, artificial intelligence for inspired action (AI4IA). In 2017, she initiated POZE (Perspective – Optimization – Zeniths – Exposure) in Haiti, which has since grown into a global transdisciplinary network of like-minded researchers and practitioners committed to inclusive social change.
Ali Al Mokdad, Strategic senior humanitarian leader, independent
Ali Al Mokdad is a Strategic Senior Leader specializing in Global Impact Operations, Governance, and Innovative Programming. With a global footprint across the Middle East, Africa, and Asia, he has led complex humanitarian and development responses through senior roles in INGOs, UN agencies, donor institutions, and the Red Cross and Red Crescent Movement.
Ali held the role of AI Co-Lead at NetHope, where he also led work on AI integration and AI strategy for business operations, delivering targeted trainings for leaders. He also authored in-depth research on the future of governance, “Inclusive and Intelligent Governance: Enabling Sustainability Through Policy and Technology,” published by IGI Global Scientific Publications.
Ali is known for driving operational excellence, advancing inclusive governance, and designing people-centered systems that hold both purpose and impact at their core.
C. Douglas Smith, Acting CEO, Data Friendly Space
C. Douglas Smith is the Acting CEO of Data Friendly Space and a cross-sector executive with experience spanning private enterprise, government relations, and non-profit leadership. Having previously testified before Congress on key policy issues, he has also served as CEO, lobbyist, and startup consultant. Doug brings a unique perspective on AI adoption across different contexts and their strategic implications for social impact organizations and policy development.
Edited event transcript
This is a transcript of the discussion, lightly edited for clarity, with headings added for readability.
Welcome and introduction
Ka Man: Hello everyone, and welcome to our AI in the humanitarian sector research report launch. My name is Ka Man Parkinson. I’m a communications and marketing advisor here at the HLA, and I’m absolutely delighted to welcome you to today’s session brought to you by the Humanitarian Leadership Academy in partnership with Data Friendly Space.
It’s an absolute privilege to welcome you here today, and we’re truly delighted by the global engagement in this research through people taking part in the survey, our interviews, reading the research, and sharing your thoughts on social media as well. We have a full house here today – we’ve reached maximum capacity of a thousand attendees from 102 different countries and territories around the world. That’s on top of 2,539 respondents to the survey from 144 countries and territories. This really demonstrates and highlights the global interest by humanitarians in learning, upskilling, and trying to get to grips with AI technological development to see how we may be able to harness its benefits whilst balancing that against potential risks and reconciling that with the humanitarian principle of ‘do no harm’.
This session is 90 minutes and will be in four sections: welcome and introduction, followed by housekeeping, then my colleague Madigan will walk us through some of the key insights that emerged from the research. From there, my colleague and fellow co-lead Lucy will lead the panel discussion together with our expert panellists, and then it’s over to audience Q&A.
I’d like to pass you over to Madigan Johnson from Data Friendly Space to introduce herself and say a little bit about Data Friendly Space.
Madigan: Thank you so much. Hi everyone, it’s so great to have you here. I’m Madigan Johnson, I’m the head of communications at Data Friendly Space and one of the research co-leads on this project. We’ve been working with the HLA for the past couple of months on this project.
Data Friendly Space is an organisation that operates in the practical AI implementation space. We build AI tools and work in data ecosystems in the humanitarian sector. We really want to be learning from humanitarian practitioners and how they’re integrating AI into their work, so this has been a great opportunity for us.
Lucy: Hi everyone, I’m Lucy. I work at the Humanitarian Leadership Academy as the data and evidence specialist and I am also one of the co-leads on this research project. My research focuses on a wide range of things, but I especially look at and explore innovation, technology advances, and how we can make sure as a sector that that advances localisation and locally led humanitarian action, and how we can bring together different people and different voices and perspectives into that conversation to really shape something quite collaboratively together.
It’s been amazing working with Data Friendly Space on this project because they bring such a wealth of expertise and knowledge and understanding of how AI is applied in today’s humanitarian sector. As Ka Man mentioned at the start, we’ve been quite blown away by the engagement of everyone here and everyone who took part in the research. This is our collective research.
I would like to introduce our wonderful panel speakers now. Ali, if you’d like to introduce yourself?
Ali: Thank you so much. Hi everyone. For those who don’t know me, my name is Ali. I am a humanitarian and development leader. I support several local and international organisations across different regions, mainly focusing on strategy development, governance, operational modalities, and innovative programming. Thank you so much for joining the session today.
Doug: Thanks, Madigan. Doug Smith, I’m the acting CEO of Data Friendly Space, coming from the United States of America this morning. Really happy to join you. I have a background in human rights, technology, and innovation, primarily around entrepreneurial ventures.
Cornelia: Thank you so much, and also thank you for the opportunity to be here. I’m Cornelia Walther. I worked with UNICEF for 20 years in humanitarian operations around the world, then left, wrote a bunch of books about social change, and became more interested in pro-social AI and hybrid intelligence. I’m presently a senior fellow at the Wharton School and also at the Centre for Planetary Health, where I’m currently in Malaysia working on pro-social AI and climate change.
Ka Man: Thank you so much, Madigan, Lucy, Doug, Ali, and Cornelia. I think it’s fair to say that there are big minds and big hearts together in this room, and that’s what’s needed really to make collective progress in this space.
Housekeeping
Just a little bit of housekeeping from me. This session is being recorded and will be uploaded to YouTube in the next day or two, and we’ll send you a link in the post-event email. This session is being delivered in English, but you can turn on captions including auto-generated translated captions by going to your Zoom bar and pressing on the CC button.
We do want this to be an interactive space. This session is for you, so please use the chat to share any reflections or reactions as you go along. For questions during the audience Q&A, please submit your questions using the Zoom Q&A function – don’t put them in the chat because it will be moving quickly and we might miss that.
As a kind reminder, we want this to be a safe discussion space, so please keep any questions and comments respectful and on topic. To thank you for your attendance and engagement in this forum today, we’ll be issuing HPass digital badges to live attendees in recognition of your learning and engagement. Please keep an eye out for a separate email, probably landing in your inbox early next week, with details of how to claim your badge.
Research overview
As this is the launch event for our report, I wanted to briefly highlight the resources that are available. Last month we launched our initial insights report, which is a real top-level summary of the key insights that emerged from our research, including the five key takeaways. That’s available to download as a PDF in English, French, and Spanish.
Yesterday we were absolutely delighted to release our full insights report – a 50-page report that contains expanded deeper dive insights together with findings from the interviews as well that we conducted. Together with the reports, we have produced – well, Data Friendly Space has produced – two brilliant digital products.
Madigan: Data Friendly Space has provided two digital products alongside the full insights report. One is a website where you can go and get the top-level overview of the report if you’re crunched for time, and then the other one is the respondent dashboard, which I would really encourage all of you to go take a look at. You’ll be able to filter by different organisation types, respondent profiles, different questions around the trainings and AI expertise. With those two things, I think they’re very complementary to the full insights report and hopefully should give you a very well-rounded picture of the survey results themselves.
Ka Man: Thank you very much, Madigan. As data fans, we were really excited to see that dashboard come to life, so you can really spend some time delving into that and enjoying the different charts and visualisations.
Interactive poll
Ka Man: We’re just going to do a quick poll – we know it’s slightly ironic that we’re doing a poll in a session about a survey, but we like to ask questions and we like to learn.
Madigan: It’s a very simple poll. What we would like to know is if you use AI tools in your work. Something that we’ve seen across the report is very mixed responses from humanitarians. Some people are using AI daily, some people are never using AI, and we’re also curious about whether you’re using it in your personal life or in your work. This question is really specific: are you using AI tools in your humanitarian work? We have three responses: yes, no, and sometimes. The results show 66% of you said yes, 11% said no, and 23% said sometimes. Again, quite the spread, and that’s something that we also saw documented in the survey itself.
Ka Man: This broadly aligns – in the survey, we found 70% are using AI daily or weekly, so that 66% there tracks with what we found.
Research findings overview
Ka Man: This was really about the community – a community-focused piece. Although the objective was global mapping, we really wanted community voices to be surfaced and represented through our research. We believe this is the world’s first global humanitarian AI mapping exercise, which really excites us.
As I say, 2,539 responses from 144 countries and territories, and excitingly and encouragingly, 80% of respondents opted in and provided their email address for follow-up research conversations, webinars, activities, and so on. For me, that’s really encouraging and demonstrates that real appetite and desire to connect, engage, learn.
Now that we have this data, this baseline data, we have the foundations in place and the capabilities for longitudinal tracking studies so that we can see how attitudes, perceptions, adoption may shift over time.
It wasn’t just about data – people’s voices and qualitative feedback were really central, and we tried to really integrate that into our findings. With the nature of the survey and the interviews being anonymous, we were really pleased that people felt relaxed and open and trusted us to share their real, candid views and authentic experiences. We had the whole range, the whole spectrum of experiences – people very willing to share negative, positive, excited views, cautious views.
One thing that was particularly exciting for us is the engagement in the research from the Global South. Around three-quarters of responses were from the Global South, with nearly half from Africa. Because Global South voices are historically underrepresented in technological research, we were really encouraged by this.
We also thought it was very interesting that we had a notable rate of responses in French. The survey was in English, but it was also available in auto-translated versions in different languages. French was the second most common language of responses received, particularly from French-speaking countries in West and Central Africa.
To augment the findings, Lucy and I conducted six interviews to do a deeper dive into experiences and perceptions and to draw out those details for some of those use cases that we have highlighted in the report.
One participant quote I wanted to share – he’s a French-speaking technical specialist based in Asia who’s AI curious, not currently really using AI, but wants to know a bit more. He was reflecting on how he’s experimenting, testing, trialling AI, but he had this realisation that without the skills or ability or experience to critically evaluate, verify, and validate the outputs, these powerful tools were not necessarily helping him in his work. His quote translates to: “Without mastery, power is nothing.” I thought that was a really good way to encapsulate what we’re calling this humanitarian dilemma over using AI tools as part of their work.
Key research findings
70% of humanitarians are using AI for their work daily or weekly. 93% have said that they have used an AI tool, so we can say that AI adoption is almost universal. However, I think this global uptake belies a more complex picture beneath.
When Lucy, Madigan, and I sat down to delve into the data, we saw contradictions and contrasts – things that we weren’t quite expecting. High level of uptake but low AI expertise (self-assessed), high individual uptake but low integration at the organisational level, and only around one-fifth have an AI organisational policy that they’re aware of.
Despite this global uptake and widespread adoption, that doesn’t imply a universally positive reception of AI. There are very mixed attitudes across the board. Even among early adopters and innovators who are really open to what AI can do to help humanitarians, there are still high levels of ethical concerns expressed throughout the survey. This shows that humanitarians are well-informed and looking very holistically, not just trying to find shortcuts to help with everyday work, but really asking: what am I doing here, and what kind of impact might this be having beyond me and my computer and the task that I’m trying to undertake?
Things like data privacy, security, and climate impact came through often in the open comments and in the survey. Another contrast that we found is even though lack of funding is a key adoption constraint for those who want to embed AI into their work, we are finding that low-resource settings or humanitarians in different contexts are showing high levels of interest in AI, and lots of promising and exciting use cases are emerging.
When we sat back and looked at all of these trends that emerged, Madigan called this paradoxical and coined this phrase: “the humanitarian AI paradox.” We thought that was a really neat way to summarise and reflect this situation that we saw emerging from the research.
Key research themes
Madigan: We started exploring the data from the survey and clustered it into five key themes:
1. Individual AI adoption outpaces organisational readiness
Most organisations remain in the early experimentation and piloting phases, or they haven’t even started to think about it. The numbers tell a striking story: 93% of humanitarians use or have tried AI tools, with 70% using them daily or weekly. However, only 8% of organisations report widespread AI integration. 25% of organisations are in experimentation or pilot phases, and 26% of organisations plan to adopt but haven’t yet started. This means that over half of organisations are still trying to figure out how to incorporate AI into their work, or even whether this is something they should be incorporating.
2. Accessible AI tools and limited specialist expertise
Conversational generative AI tools like ChatGPT or Claude have lowered barriers to AI adoption. Yet despite the uptake by humanitarians, there’s a lack of AI expertise in the sector. Only 3.6% of humanitarians consider themselves AI experts. Even among those who use AI, there are still concerns about the actual benefit. Nearly half agreed that AI improves efficiency, but only 38% believe it enhances decision-making. 30% of our respondents remain really uncertain about the benefits of AI. This highlights the urgent need for capacity building.
One quote that stood out was from a survey respondent: “At first I’m not familiar with AI use, but later on, after being introduced to us, I started getting familiarised with AI. It’s user-friendly.” This accessibility of especially generative AI showed us that people could use it in more conversational ways that they’re accustomed to, but how do we take that from beginner and intermediate to the advanced and expert levels?
3. Fragmented AI training approaches
Training is largely individual-driven. 73% of respondents identified training as their most crucial support need for the next 12-24 months. This ranked above many other things, including funding, frameworks, and guidelines. The gap here is stark: 33% of NGOs and 38% of local NGOs reported no training at all, and 64% of respondents in total reported little to no organisational AI training. This fragmented approach leaves practitioners learning in isolation, without the support of their organisation and oftentimes without the support of their peers.
4. The AI governance vacuum
What perhaps could be the biggest wake-up call for all of us is what we call “shadow AI,” which is unauthorised usage of AI within organisations. 7% of respondents who use AI work for organisations that explicitly state no intention to adopt AI, and 17% belong to organisations that have not yet adopted AI but plan to. This doesn’t account for all the others that use it whilst not officially sanctioned by their organisation. This governance vacuum can create significant risks around data protection, accountability, and ethical concerns.
5. Commercial AI tool dominance
69% of humanitarians are using commercial AI agents – this could be Claude, ChatGPT, Perplexity. These are the backbone of the current adoption rates. 35% are using AI translation or language tools, and 18% of respondents use AI-powered data analytics tools. However, this commercial dominance raises really sector-specific questions around data sovereignty and solutions that we need to be addressing. How can we make sure that what we’re putting into the AI systems, especially the commercial ones, follows our guidelines in the humanitarian ecosystem?
Panel discussion
Lucy: What this research has done is really spotlighted a need for greater leadership within humanitarian AI. I think the paradox and the shadow AI really highlights that need in particular. It really showcases that we need those solutions that are built with humanitarian principles, standards, and ethical considerations built into the system, and not just into individuals, hoping that individuals adopt those practices.
Ali, what we are seeing is some really exciting use cases that have been emerging across the world, and it really feels like it’s driving a culture of learning and experimentation that’s almost forcing the sector to consider these policies and the infrastructure around this. What do you feel is enabling that innovation? How can we build on this to strengthen our collective readiness to move forward within this space?
Ali: From my perspective, what we are seeing is not just innovation. What we saw in this survey, or when we see people in the sector practising and using those different tools, it’s mainly enabled by a mix of different elements: pressure, accessibility and access, evidence, and adaptation.
The humanitarian sector and humanitarians are under so much pressure – funding cuts, complex crises, access limitations, operational limitations, and limited staff. Different AI tools like ChatGPT, Claude, and Copilot are proving useful for many tasks, especially text-related tasks like report writing, proposal writing, translation, and others. They’re also online, available in so many countries, and the majority of them are free. People can use them and prompt them in different languages, and they’re easy to use and user-friendly.
The evidence is clear: when someone uses AI or generative AI and finds it helpful, they tend to use it again. All of this has led the humanitarian sector to adapt and leverage these different tools, not out of strategy or digital transformation, but simply to adapt to the chaos and the situation we are witnessing within the humanitarian sector. It’s not only innovation, it’s also resilience.
To strengthen that, the main important part is to match this momentum with structure and overall governance. This could happen through different elements: allow and encourage experimenting – make this AI experimentation and use normalised. People shouldn’t have to hide how they are using and leveraging different AI tools. Leadership should support and protect this learning and experimenting within the sector.
It’s very important for the sector, for leaders, and for organisations to invest in practical learning, not only generic training – tailoring trainings to invest in leadership, in those experts or super users, and in communities of practice. We must rely on them and try to support them.
When we look at governance frameworks, they should be light and practical, empowering innovation whilst also providing ethical clarity and protecting data. We need to invest in a stronger foundation: welcoming experimentation; education, skills, and training; as well as light governance.
Lucy: That was amazing, thank you. Your call about investing in leadership and investing in experimentation really resonates. What we found interesting from the findings was that local organisations are actually already leading the way in allowing and fostering that culture of experimentation and local leadership. That doesn’t come as a surprise to us because we know that our local leaders are the ones that often pave the way, but it was really refreshing to see that.
Doug, the report highlights what we are terming the AI paradox – widespread AI use by individuals but limited organisational strategy, policy, and technical infrastructure. Why do you think that is, and what are the risks, but also the opportunities, of this bottom-up approach? How might stronger infrastructure affect current uses of AI in humanitarian aid, for better or for worse?
Doug: Let me hit on three points that really undergird what I see as the AI paradox challenge. The first is around inertia. Why is it that institutions seemingly are going much more slowly than those in the last-mile services delivery or people on the ground in clusters and offices? Institutions themselves tend not to move very rapidly. There’s a lot of stakeholders internally – often there’s good lawyers involved and finance folks that want to talk about procurement and compliance.
This kind of inertia that’s very slow and deliberate and thoughtful is important, and it has traditionally provided balance to field staff. But one of the things that the report really shows is that field staff simply aren’t waiting any longer. That tension within the paradox is really inertia – institutions are struggling with keeping up, maybe more so now than ever before.
The second is probably a little paternalism – the general feel from these guiding principles and frameworks and standards that everyone wants to control what’s happening in the field. At times that’s really important – we have legal frameworks we have to worry about, we certainly want to worry about data protection and data security. But at the same time, slowing down dramatically what happens in the field or being heavily directive about what types of tools people can use tends not to work really well. That’s why we’re seeing so many people freelancing – they’re using their own logins to access some of these commercial tools.
The third issue is entrepreneurial opportunities. We’re asking people at the field level to do more with less in pretty dramatic ways. Part of what we’re seeing is that people are going to use whatever means they can to deliver impact, and that could be creating some disruption in the power balance between large institutions and either grant recipients or people who are actually implementing the grants.
A key here is something that Elliott Johnson said earlier in the chat: it’s important to keep humans in the loop. We have to make sure that we’re not just saying AI is going to solve all of my problems.
The risk is that we’re going to have this patchwork of tools that have very little context for the humanitarian sector. If all we’re using is vanilla, we’re very quickly going to realise that vanilla doesn’t work in every situation. What that means is we have this risk where the patchwork of tools starts getting further away from the context and the needs of the organisation.
The opportunities are about agency. If we believe in the localisation movement, then providing agency to those who are working at the local level is really key. We have to figure out how to support that within this paradox – how do we embody our values, how do we really incorporate localisation, and how do we provide agency to people on the ground so that the large institutions really now are going to play a supporting role to make sure people on the ground can be as successful as possible?
Among the opportunities, we shouldn’t forget the most important one: our impact can be greater and more efficient and more cost-effective. At the end of the day, we can save more lives if we are adopting trustworthy, tested services and models and AI that we are going to iterate our way into, test, and deploy in ways that can really put information in the hands of those who are responding when it’s most needed.
Lucy: Thank you so much, Doug. I couldn't agree more with those points. You're right – humans are the most important part of artificial intelligence.
Cornelia, one of the common themes that we heard, picking up on some of the risks that Doug just identified, was strong ethical concerns – particularly around protection, but also around environmental impact and whether humanitarian use of AI is contributing to the environmental crises we then have to respond to, almost creating a self-fulfilling prophecy. One thing that's really struck me throughout this research is the indications of shame and fear around AI, yet the usage is so consistent.
How do we balance the urgency of this experimentation and usage – driven by the resource constraints we operate under – with risk and responsibility? Are these considerations actually any different for humanitarian AI compared to other sectors?
Cornelia: There’s one aspect weaving through the previous answers, and that is agency amidst AI. I would argue that there are four pillars that individuals and institutions can deliberately invest in to curate agency amidst AI:
The first is attitude – our emotional relationship with ourselves but also towards the tool. Because AI is not just a tool; it reaches into that subtle space where anthropomorphism – the projection of our own emotional sphere onto the tool – is very much becoming part of our workspace.
The second is approach, which has two components: the alignment of our aspirations and our actions (a purely offline, low-tech piece – why are we doing this?), and then the aspiration towards the algorithm. All this debate around the alignment conundrum and the problem that human values are not aligned with algorithms misses the point that very often humans, including ourselves, are not really in sync in what we say and what we do. There’s a mismatch of words, values, and practice. This “practice what you preach” is something that I think is very relevant for all sectors but doubly painful in the humanitarian sector.
The third is ability: human literacy (a more holistic understanding of ourselves and the society and institutions that we’re part of – humanitarian organisations have very special dynamics, and it’s important to be aware of that, especially now as we’re moving towards a hybrid society) and algorithmic literacy (not just prompt engineering and not just what’s the latest tool, but what are the limitations, what are the caveats, and how does it work).
Finally, ambition. That’s the big demarcation line between AI in the private sector and AI in the humanitarian space. In the private sector, I have seen the same thing in academia – it’s very much the tool for efficiency and effectiveness. Where the humanitarian space can make a dramatic difference is this ambition to use it as a catalyst for positive social change and to really use it as an opportunity to address some of the long-standing issues that have been around for the past many decades, where the standard answer was always “we don’t have the means, we don’t have the money, there are funding cuts, and that’s why we can’t do XYZ.”
I would argue that is a very limited answer. But also now, once again, we have a huge opportunity with this ever-expanding technological treasure chest if we use it with that ambition. All of that to say, these four pillars – something very practical that institutions, both local and international, can start investing in is double literacy for staff and implementation partners.
Lucy: Amazing, thank you so much, Cornelia. I completely agree, and I think those four pillars really frame it well.
I want to come to all of you with the same question to close this section of the discussion. We’ve talked about tools, use cases, governance, risks. But as Doug highlighted, we are standing on the precipice of a real new era in humanitarianism, and with the digitisation era under way, this moment really calls for vision. What is the one thing that each of you would prioritise to ensure that AI strengthens rather than distorts humanitarian practice?
Ali: From my perspective, it’s very important for organisations and institutions to invest in the transformation – the digital transformation and the way we are going to integrate AI in different things. AI is happening and will happen. AI is going to affect different sectors and different jobs. Right now, not only the investment but the impact of AI is already happening, and we are seeing it in different sectors.
The key thing is to invest in the transformation processes – invest in our institutions in terms of how we are going to adapt to different situations and contexts, tailor different learning and development programmes, and invest in how we roll out the different tools and identify the best-suited models and tools. We have learned so much in the sector from rolling out ERP systems and other tools previously – learn from those and then use the best fit, not only the best practice. In a simple way: invest in transformation for institutions so that we can support the individuals who are already leveraging these different tools.
Doug: Data Friendly Space worked for the last few years with AWS, and we built out this tool called Gannet. In that process, AWS really kept asking us, almost forcing us to ask ourselves: what’s the problem you’re trying to solve? What’s the output? But what’s the impact? And then how do you get there? Work backwards.
My answer would be: we probably need to stop starting with the word “no.” We tend to come up with lists and lists of reasons why this can’t happen, why it shouldn’t happen, why it doesn’t align with our values. In reality, maybe what we need to start with is: what is the impact we want to have? Is there a use case where AI helps to fulfil that mission point, and can we then work backwards from there to make sure we hold all the bits and pieces of governance and security in line? But let’s really start with the number one impact, which is about people. AI is about people – not a lot of people think in those terms, but I think it’s very true.
Cornelia: Hybrid intelligence, which arises from the complementarity of AI and NI – artificial and natural intelligences – and which I would argue is the cause and consequence of agency amidst AI, and which in the best-case scenario can actually lead to pro-social AI: AI systems that are tailored, trained, tested, and targeted to bring out the best in and for people and planet. So yes, AI as a tool, but as a tool that we use very intentionally in order to fulfil the best that we in our institutions have and want to fulfil.
Lucy: I think it’s very clear that AI is often thought of as something very technical and futuristic, but actually it’s intrinsically very human, and it’s about power, access, trust, and care.
Audience Q&A
Ka Man: Thank you so much to our amazing panellists for sharing your perspectives. In this closing section, we’re going to move to audience Q&A. We’ve got some brilliant questions coming through, so we’ll get through as many as we can. We’ll try to do them quite rapid response, but we know we won’t be able to cover them all, so we plan to collate an FAQ and respond to those after the event.
Question from Tiama: Local actors have more knowledge on using or maximising AI, and this seems to indicate localisation commitments are moving in a positive direction. Therefore, the findings shouldn’t be viewed as a challenge. Don’t you agree?
Ka Man: Tiama makes a very good point. Often we talk about localisation in terms of shifting power from the Global North to the Global South. What we’re seeing from the research findings is that respondents from the Global South are really driving this at an individual level for a whole host of reasons, but also partly because there isn’t the red tape which can constrain innovation. Obviously, that red tape is there for a reason – due diligence, etc. – but its absence gives people licence and permission to test and experiment.
Ali: In general, I agree – my position on local NGOs and local leaders is that they move faster than international ones do. This comes from their ability to adapt quickly and to leverage whatever is available to help them do things faster or better. It’s the experimenting culture. In local NGOs, especially in the Global South, the main focus is how to deliver services, with less focus on the overall rules and regulations in different contexts or countries.
I can also see a huge difference between organisations in the US and their ability to adapt and experiment compared to organisations in the EU, or organisations in Africa, Asia, or the Middle East – they are approaching AI in different ways. Local organisations are moving fast and breaking things – that’s one of the key trends in their strategy to leverage these different AI tools. But larger, international institutions are taking longer, understanding the overall ecosystem, and then taking it from there.
I can see local organisations experimenting with AI at a totally different level – not just talking about ChatGPT or generative AI or Claude, but prompting AI agents to carry out specific tasks. I’ve been working with one organisation on how to have an AI advisor on their board of directors. They are experimenting faster because of their ability to adapt, their interest in moving fast, and their focus on doing the job faster and more efficiently, with less focus on the overall ecosystem and the compliance associated with it.
Question about stigma and productivity concerns: There are concerns around the stigma of using AI – that people might perceive workers as being lazy if they’re using AI tools. Also, Amalia notes that Cornelia’s recent article flags the risk that AI does not lighten the load for humans but keeps raising the bar for productivity. What are the implications for the humanitarian sector?
Cornelia: I think there are two aspects to that. It’s important to keep in mind that organisations, particularly those threatened with funding cuts, are adjusting very fast to a time and space where answers are expected 24/7, and where, with shrinking teams, the same output is still expected. Managers are looking at the AI space and thinking, “Well, if I can do that report with the snap of a finger and ChatGPT, then why does my team take two weeks to accomplish it?”
This initial promise that AI was presented with – that it’s going to free up time and space for humans to set aside redundant tasks and focus on strategic, high-level, creative thinking – has very quickly turned out to be rather hollow.
But there is still a possibility to harness AI as an ally and as something that is helping each of us to identify and focus more on what we enjoy doing, and what are our unique strengths and talents which we all have but which tend to fall between the cracks in this hamster wheel of doing, doing, doing. If we manage the current transition well, both as part of a team and as part of this evolving relationship between our NI and our AI, we actually have an opportunity to carve out some quality time for those tasks that bring out our unique strengths.
But that in and of itself requires effort – the kind of uncomfortable effort that can’t be delegated, because it means thinking: what’s my why, what makes me unique, and especially what remain my unique selling points, no matter how sophisticated ChatGPT, Claude, and Gemini become?
Question about funding: How can organisations secure funding from donors for digital transformation initiatives whilst ensuring that digital rights and ethical considerations of beneficiaries, particularly refugees, are protected and prioritised throughout the process?
Doug: Those are actually two questions. The first is around funding and funding opportunities, and the second is around helping funders to stay consistent and true to data security standards. That is a conversation we should all be having with donors and funders – that just because we are using AI, it doesn’t mean we’re suddenly going to abandon our values. We’re not going to simply open all data up, particularly secure, very important demographic data that can easily be used to track people.
The conversation we can be having with funders is about values and our continued commitment. If there are funders on this call, I would really encourage them to think about how they can fund into this opportunity while not doing what funders often do, which is leave a littered trail of proofs of concept with no sustainability behind them.
Let’s try to see where we can find good contextual – maybe even already launched – tools that are proven, and start funding those rather than encouraging people to create new systems and new opportunities and new standards. The reality is that funders, if they just take a break and look at the landscape, will probably find that there are opportunities they could fund now, and can then focus more on scaling than on trying to catch up with the work in AI happening at the local level.
Question about training: How can organisations provide effective AI training for their staff? What alternative approaches would you recommend that follow a more holistic and integrated approach?
Madigan: The first thing I would recommend if you’re trying to implement training into your organisation is to look at the pain points. Because AI can be a buzzword, people think “we should just bring it in and use it.” The first thing is to see: do we actually need this? Let’s look at the pain points in our systems, figure out where we need to improve, and then focus on those pain points and address them with training.
In terms of the training itself, it really depends on your audience or your team members. We’ve had general AI training, and trainings on prompt engineering: if you’re using generative AI tools, how do you prompt them effectively to get the most out of the responses? Other trainings cover the ethics of AI – how do we make sure the data we put in stays secure, and what do we have to do to protect that data if we are using AI tools? Then there are general frameworks or guidelines – how do you make sure it’s not just another report or framework you’re handing to your staff, but that they’re actually engaging with it and taking the most important pieces out of it?
Consider the different formats of training. You also might be operating in low-resource settings where you might not have the digital capabilities, so you have to tailor the training to meet your audience where they’re at.
What I would really encourage is everyone to identify the pain points within your organisation, decide if you need to use AI to solve those pain points or not, and if you decide yes, then how are we going to have these trainings fit our audiences’ backgrounds and areas of expertise? Make it really participatory – engage with your team members to hear what they would like to learn. You’re going to get a lot out of that besides just saying “we’re going to have an AI training.”
Question about ethical concerns: What are the categories of ethical concerns expressed in the research? Are there variations by geography or other variables?
Ka Man: In the survey, we called out ethical concerns as a specific question and designed it as an open comment. We didn’t provide a specific breakdown of ethical concerns as multiple choice because we didn’t want to inadvertently guide responses or introduce some form of bias. We just wanted to ask that open question.
We did text analysis of the responses that came through. They centred on privacy and sensitive data – particularly, because of the communities we’re working with, the dangers and risks of feeding that data into AI, especially the commercial tools we’ve talked about rather than specialist, humanitarian-specific tools. A lot of people raised the moral and ethical considerations of delegating human decision-making to a machine, especially when it’s sensitive and has a direct impact on the communities they’re working with. Climate impact came through as well – and this is not scientific, but what I saw was that a lot of the INGO respondents cited climate impact, although it was mentioned across the board.
Lucy: What I would add is that looking through the comments in the chat today really emphasises this concern that people have around how AI affects their job security and their perception of their capabilities. One thing that really struck us is that everyone has been really engaged in this conversation, but when we spoke to people, there was this real concern about being identifiable or saying the wrong thing or giving the impression that they were using AI when they shouldn’t be. I think that in itself is quite a strong observation.
The humanitarian sector is obviously in a very volatile state at the moment. A lot of us will know that our work has been shut down – I’m sure people on this call have experienced their programmes being paused or pulled, and funding being cut, which in turn leads to job losses. I think there is a tension emerging around how AI could be used against us or used to replace us. Given how much the conversation has focused on the importance of people and agency, that would be very short-sighted.
Final question: What are the most practical ways humanitarian organisations can start integrating AI tools in field operations, especially in low-resource settings?
Cornelia: I have three thoughts. I think it’s a great opportunity, and a great use of the tool, for any organisation – but in particular for big organisations – to ask the tool, ask ChatGPT or Claude or whatever you’re using, what the biggest opponent of the programme you are planning to implement would say. For a big organisation like the UN, that might mean asking: what would a government say? What would a small NGO partner feel about my proposition? That moves us a little away from this parachuted “our voice decides it all.” It also works the other way around for small organisations: what would make my programme unique and special for a big donor? What makes me special? What makes our mission special? It helps each of us, in our respective capacities, to carve out our unique selling point.
There’s also the possibility now with ChatGPT to have highly personalised reminders – maybe to tailor and programme your own chatbot to send you a daily reminder of your big goals, the aspirations you hold as a humanitarian but also as a human being, and the values you sometimes miss because you’re so busy and stressed out. You can use ChatGPT as a non-judgemental buddy in your pocket – only you and ChatGPT know what’s being saved – from which it may be easier to hear the uncomfortable truths we all need to hear at some point.
Ali: If you are looking for a quick win, then work with those super users and this community of practice and try to explain the key data protection concerns or challenges, and then let them experiment. That’s a quick one. But from the institution angle, I think there are several things to consider:
Innovation – it’s very important to understand how the community is using and leveraging those tools. This survey is an excellent foundation for that, and at the same time, try to see how to empower that environment and support the people so they don’t even hide how they are using it.
Infrastructure – institutions and organisations must look at their infrastructure: the devices they have, the laptops, the computers, the internet, access to the different tools, and the models they are going to use. One of the key elements here is data. Start your AI strategy or AI integration with a strong data management framework. Without strong data you are going to face many challenges, and most probably garbage in will produce garbage out.
Ecosystem – it’s very important to see, after exploring innovation and infrastructure, how you are going to link what you did with your wider ecosystem. Organisations also have M&E systems, financial systems, HR systems, and ERP systems, and NGOs do not operate in an ecosystem of their own – they also interact with wider international affairs, foreign policies, trade, the economy, governments, and rules and regulations.
One of the key things I would put as a foundational pillar across innovation, infrastructure, and ecosystem is partnerships. You must work with others. You must engage with other organisations and partners, and you must engage with the private sector, especially tech companies. We have so many platforms that are excellent at sharing knowledge and providing that space – the Humanitarian Leadership Academy is one for sure, we also have NetHope, and I’ve seen so many tech leaders who are part of that community.
I want to add one more here: open source. I really think it’s very important to share with everyone the challenges you are facing, the issues, the risks, the achievements, the deliverables, and how you are progressing, because that’s how we are going to learn from each other. In a simple way: innovation, infrastructure, ecosystem, partnership, and open-sourcing your experience.
Closing remarks
Ka Man: Thank you very much to our amazing expert panellists and, of course, my research co-leads. This has been a brilliant journey. I’ve learned so much, but I know it’s just the beginning. Like Ali said, it’s collaboration, it’s pooling what we have to forge this path ahead for AI that works for us as a sector.
Thank you to all of our attendees for joining us, adding great reflections in the chat as we’ve gone along, and posting your brilliant questions. We’re sorry we didn’t get a chance to answer them all – we wish we had the time. But we’ll collate some FAQs and try to get responses to you in a format that works.
The full report was released yesterday, together with the supporting resources. Behind the scenes, the team will be working on translating the reports into French and Spanish. We’ll announce on our social media channels and by email when those translated reports are ready.
In the spirit of learning, we encourage you to visit Kaya, the HLA’s digital learning platform, where we have some great AI-focused resources, including a course developed in collaboration with Data Friendly Space. There are also lots of brilliant, useful resources from NetHope, hot off the press. Kaya is free – you just have to register once, and you can access over 500 courses, often available in a range of languages.
We do plan to continue to develop more content focused on humanitarian AI and humanitarian tech. Keep your eyes peeled for more webinars and podcast episodes. If you’ve enjoyed hearing from Ali today, you’ll be pleased to hear that a new podcast episode is coming out imminently. It’s the second part of a two-part podcast, and in this episode, Ali talks about strategic insights for 2025 and 2026, including developments in the AI sector.
The research team – Madigan, Lucy, and I – will continue to work on written content and articles. Lucy’s produced a brilliant piece on the history of humanitarian AI. In it, she notes that we tend to think of AI as ChatGPT and the generative AI that’s emerged in the last couple of years, but she shines a light on the developments that came before. It’s a really accessible and interesting read.
When this window closes in a moment, you’ll see a quick survey pop up. Please share your feedback. We’d be really grateful to hear from you. Keep an eye on your inbox – a copy of these slides and the YouTube link will be emailed to you in the next one to two days, together with some links to relevant resources that you may find of interest.
We will be working on another quick survey – not another mapping exercise, but one to gather more thoughts and feedback from the humanitarian community on what would be useful and what you would like to see in the pipeline. We’d love to hear your thoughts on that.
Finally, in recognition of your learning and engagement in this forum today, we’ll be sending you an HPass digital badge – brilliant to share on LinkedIn, and a nice discussion starter to show that you’ve been engaging in this space and keeping up to speed with the latest developments in AI in the humanitarian sector.
Once again, thank you to our panellists, research co-leads, and everybody who’s taken part and taken the time to be here today. Thank you very much, and I’m now bringing this session to a close.
Contact us
Humanitarian Leadership Academy
info@humanitarian.academy
Data Friendly Space
hello@datafriendlyspace.org