
Beyond the hype: Ground truth on AI across the humanitarian sector

Thank you to everyone who joined us for this session hosted on 26 February 2026 in partnership with Data Friendly Space.

The recording, slide deck and session transcript are available below. We’d love to hear your feedback and suggestions for future sessions – email us at info@humanitarian.academy

Session transcript

Note and disclaimer

This transcript captures a live thematic discussion of preliminary findings from the January 2026 Humanitarian AI pulse survey. An executive summary of the final research analysis and findings is planned for release in March 2026. This series has been produced to promote learning and dialogue and is not intended as prescriptive advice. Organisations should conduct their own assessments based on their specific contexts, requirements and risk tolerances.

Transcript

This transcript has been generated using automated tools and has been lightly edited for clarity and readability. It has been reviewed but minor errors or omissions may remain.

Ka Man: Hello, everyone, and welcome to today’s webinar, brought to you by the Humanitarian Leadership Academy in partnership with Data Friendly Space. It’s an absolute pleasure to welcome you here to today’s learning and discussion space, where we’ll be focusing on humanitarian AI and some of the top-level insights from our recent pulse survey, together with our expert panel, and you, our global community.

We’re absolutely delighted to welcome you here. We have 550 attendees registered from 90 different countries, so we really welcome you, and thank you for taking the time to be here with us today. As you’re joining us, feel free to introduce yourself in the chat, letting us know your name and where you’re joining us from.

It’s great to see people already introducing themselves. Hello to Muhammad from Jordan, Saeed from Somalia, Zinat from Ethiopia, Jean from Senegal, Rana from Egypt, Albert from Montreal — welcome.

Today’s session is 90 minutes, which begins with this welcome, introductions, and a little bit of housekeeping, before we move into an audience poll as an icebreaker, where we get to hear a little about you. Then we’ll move into top-level pulse survey insights — a short presentation led by my colleagues Madigan and Lucy. Then we’ll move into a thematic panel discussion together with our wonderful panellists, and then our audience Q&A. If you have any questions for the Q&A segment, please submit those using the Zoom Q&A function. Go to your Zoom toolbar and press the question mark icon to submit your question — that’s open at any time.

This session is being recorded. The YouTube link and slide deck will be emailed to you, so expect to receive that in your inbox tomorrow. For accessibility, Zoom captions are enabled, including translations, and post-event a session transcript will also be available. Feel free to use the chat as we go along — we really want this to be a lively and interactive discussion space. Please use the reactions too, so we know what you’re thinking and feeling. Just a reminder to submit questions using the Q&A feature rather than the chat, as the chat tends to move quickly and we may miss those. Please keep any questions or comments respectful and on topic, relevant to humanitarian AI.

In recognition of your learning and participation in this forum today, you’ll be eligible to receive an HPass digital badge — please keep an eye out for a separate email that’ll land in your inbox next week.

I’m absolutely thrilled to be joined by a brilliant global panel today, and our research team. I’m joined by Lucy Hall from the HLA, Madigan Johnson from Data Friendly Space, Rebecca Chandiru from Humanitarian OpenStreetMap Team ESA Hub, Liz Devine from Goal Global, Timi Olagunju from The Timeless Practice, and Nayid Orozco Bohorquez from Data Friendly Space. I’d now like to invite each speaker to briefly introduce themselves, starting with Lucy.

Lucy: Hi, Ka Man, hi everyone — it’s lovely to be here with you all today. My name is Lucy, I’m the Research and MEAL Lead here at the Humanitarian Leadership Academy, and my research focuses primarily on digital capability, locally-led technology, and looking at how we can make technology much more inclusive, ethical, safe, and locally led. Madigan, it’d be lovely to hand over to you now.

Madigan: Hi, everyone. My name is Madigan Johnson. I’m the Head of Communications at Data Friendly Space, where we build humanitarian AI tools. I’m going to pass it over to Rebecca to introduce herself now. Lovely to be here.

Rebecca: I’m the Volunteer Engagement Coordinator at Humanitarian OpenStreetMap Team in the East and Southern African Hub. I have about 7 years of experience in spatial data, and I’m very passionate about building the capacity of young people and communities in open mapping.

Briefly, about our involvement in tech at Humanitarian OpenStreetMap Team — we’ve developed an AI-assisted mapping service called fAIr, used by communities all over the world to increase the speed, capacity, and efficiency of creating base maps. Local users are able to train the GeoAI computer vision models to create maps by selecting a specific GeoAI model, imagery, and features like buildings, roads, etc. The tool also allows human mappers to validate the predicted features. Happy to be here — thank you. Over to you, Liz.
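To make that workflow concrete, here is a minimal, hypothetical Python sketch of the predict-then-validate loop Rebecca describes: a model proposes features from an imagery tile, and only human-validated features move forward. All of the names (PredictedFeature, predict_features, human_validate) are illustrative assumptions, not fAIr's actual API.

```python
# A minimal, hypothetical sketch of an AI-assisted mapping loop:
# a model proposes features from imagery and a human mapper
# validates each one before it is published. Names and structures
# are illustrative only, not HOT fAIr's actual API.
from dataclasses import dataclass

@dataclass
class PredictedFeature:
    feature_type: str   # e.g. "building" or "road"
    geometry: list      # simplified: a list of (lon, lat) vertices
    confidence: float   # model confidence score, 0..1

def predict_features(imagery_tile: str) -> list[PredictedFeature]:
    """Stand-in for a GeoAI model run over one imagery tile."""
    return [
        PredictedFeature("building", [(32.58, 0.31), (32.58, 0.32)], 0.91),
        PredictedFeature("building", [(32.59, 0.33), (32.59, 0.34)], 0.42),
    ]

def human_validate(feature: PredictedFeature) -> bool:
    """Stand-in for a mapper's accept/reject decision. In a real
    workflow this is interactive; here we auto-accept only
    high-confidence predictions to keep the sketch runnable."""
    return feature.confidence >= 0.7

def run_tile(imagery_tile: str) -> list[PredictedFeature]:
    # Only human-validated features would be uploaded (e.g. to OpenStreetMap).
    return [f for f in predict_features(imagery_tile) if human_validate(f)]

if __name__ == "__main__":
    accepted = run_tile("tile_z18_x123_y456")
    print(f"{len(accepted)} of 2 predicted features passed human validation")
```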

Liz: Thanks, Rebecca. Hi, everyone. My name is Liz Devine, and I’m the Head of the Program Technical Team at Goal Global — an international NGO working across both fragile and crisis-affected contexts in Africa, the Middle East, and Latin America. I’ve been working in the sector for over a decade, and for the last year and a half I’ve been overseeing our team of humanitarian experts working across health, nutrition, food security, livelihoods, and a range of other sectors. We’ve been focusing on the intersection of program quality and technology, and trying to find how we can integrate AI safely in our programming and amplify the impact of our work across the globe. I’m really happy to be here today. Now I’ll hand it over to Timi.

Timi: Thank you so much, Liz. I’m Timi Olagunju. I’m based in Lagos, Nigeria, and I’ve been helping organizations and governments shape policy, regulations, and governance efforts, particularly in emerging technology, for over a decade and a half. I’m currently a MacArthur Foundation Fellow on Technology in the Public Interest. It’s a pleasure to be with you — thank you.

Nayid: Hi everyone. My name is Nayid Orozco, and I’m happy to be here with you all today. I’m the Chief Product Officer at Data Friendly Space. I’m based in Bogotá, Colombia, and for the past 15 years I’ve been developing tech products. As Madigan mentioned, at DFS we build tech products for the humanitarian sector and provide analysis for crisis response. We have worked with IFRC, UN agencies, GiveDirectly, and we were the ones behind the development of the DEEP. Maybe some of you were users of that tool. Today we have a tool called Gannet — it uses AI to bring together qualitative and quantitative data and supports analysis for thousands of users across more than 90 countries. Happy to be here today and looking forward to the conversation and your questions. Back to you, Ka Man.

Ka Man: Thank you so much, Nayid, and thank you to our wonderful panellists. We have over 200 attendees in the virtual room already from all over the world, so we’re really looking forward to the conversation that will unfold.

This session is being hosted as a follow-up to our Humanitarian AI pulse survey that we conducted last month, where our community told us how you’re using AI in 2026. This session has an intentional community focus — we want it to have a community of practice feel to it. Thank you to everyone who took part in the survey. We had over 1,700 responses from 120 different countries and territories. This builds on the global baseline study we conducted in May–June 2025, where we received over 2,500 responses from 144 different countries. We really do have this invaluable dataset — all of your experiences, opinions, and views — which creates a rich picture and is invaluable for the sector to get a handle on where we are with AI across the humanitarian space.

Today we’re not intending to do a deep-dive technical exploration of the research itself. We’ll give top-level thematic insights to use as a basis for discussion with our panel and with you. In the coming weeks, we’ll be releasing an updated data dashboard where you’ll be able to delve into the data directly, and we’ll also be releasing an executive summary.

Before we dive in, we’ve got a bit of an icebreaker to hear more from you. I’m going to launch the questions, and Madigan will walk you through those.

Madigan: The first question is: what first comes to mind when you think about AI and humanitarian work? The options are: risks and ethical dilemmas; new ways to work smarter and faster; exciting possibilities for innovation; or still figuring it out.

(Poll results)

It looks like “new ways to work smarter and faster” is the most popular, followed by “risks and ethical dilemmas” and “exciting possibilities for innovation.” Only a couple of people are still figuring it out — good insights.

Ka Man: I think it’s really interesting that risks and ethical dilemmas rank second. I’ll launch the second poll.

Madigan: For you, where are you now with AI? Are you a non-user, still exploring and in the learning phase, a regular user, or do you rely on AI for a significant part of your work? This is something we’ve asked in our original surveys, so we’d love to know from our community where you sit.

(Poll results)

A lot of you are saying you’re a regular user of AI. Exploring and learning, and relying on AI for a significant part of your work, are actually quite close — only a two-point difference. We have a couple of non-users, and we’d also love to hear from you, because that was also a significant part of our research.

And our final question: what is your main objective from joining this session? Is it to gain a general overview of what’s happening across the sector, to support personal learning and development, to learn something new to share with your colleagues or organization, or to connect and interact with others in this space?

(Poll results)

It looks like the majority of you are here to gain a general overview, followed by learning something new to share within your organisations. Very interesting — thank you so much for participating in the poll.

Ka Man: That’s great — really interesting to see where people are at with AI today. Some of the themes we touched on in those questions will emerge in the survey presentation as well. Without further ado, I’ll pass you back to Madigan to kick off the insights.

Madigan: Hi, everyone. Again, thank you so much for being here today.

We started this pulse survey as a follow-up to our 2025 survey from last May and June. What we had deemed back then the “humanitarian AI paradox” has now shifted into “a sector in transition.” What we’re seeing is that AI is no longer just an experiment — adoption is becoming more widespread across roles, organizations, and geographies. The level of use has increased significantly since 2025, and there’s a very clear indication that AI is here to stay as a core tool in humanitarian work.

I want to start with a stat that really sets the scene. Half of the respondents in our 2026 Pulse survey are now using AI daily — 50.4%. That’s up from 45% just a year ago, a 14.2% relative increase. I want that to sink in, because this is from people spanning over 120 different countries, working in local nonprofits, INGOs, UN agencies, government, academia. A quote from a local NGO respondent in Sub-Saharan Africa really captures something the numbers alone can’t: “It is a technology that has come to stay, and organizations that shy away from it will be left behind.” That’s not us at DFS or the HLA saying that — that’s a practitioner in their context, telling us what AI feels like from where they sit.

As we go through today’s session, I want to focus on something important: widespread individual use is not the same as institutional or sector integration. The gap between those two things — between what people are doing on their own and what organizations are actually supporting and governing — is where the real story lives.

One thing I’m really excited to share is what happens when we look at respondents who took both the 2025 survey and this 2026 Pulse survey. About 12% of this year’s sample participated in both. What we saw is a leading cohort that has rapidly advanced in terms of adoption, governance, and training. Daily usage jumped from 44.5% to 58.7% in one year among the same group of people — not a new population, but the same practitioners just using AI more.

What also stands out is the governance data. Formal AI policy presence in this cohort went from 21.8% to 35.7%. When you add in organizations currently actively developing a policy, that combined figure goes up to 54% — a nearly 18 percentage point shift. AI training also almost doubled, showing that organizations are investing in their staff.

The key takeaway is that organizations that engage early and stay engaged are integrating and adopting AI responsibly — with more usage comes more governance and more training. I do want to note a limitation here: this is a self-selected group, so respondents willing to come back are likely more motivated and AI-engaged. But I think they show us what’s possible when organizations commit, and that matters enormously for what comes next. I’ll now pass over to Lucy to share the key themes that emerged from the 2026 survey.

Lucy: It’s been so interesting looking at all of your responses — there are so many themes coming out, and it really was a wealth of information. A huge thank you for contributing to such a rich knowledge base.

There are so many themes we could have talked about today, but these felt like the really important ones from our analysis:

The first is that locally embedded AI capacity is widely distributed across the humanitarian system.

The second is widespread individual use with uneven institutionalisation — even though that’s increasing, it is still uneven.

The third is that adoption is being driven by practitioners themselves, not by organisations imposing the technology from the top down.

The fourth is that regional adoption is shifting — I’m looking forward to hearing more about that from Madigan.

And the fifth is that humanitarian AI is moving from individual practice to sector transformation. We’re at a tipping point, and I’m really looking forward to exploring that with you.

Looking at our first theme in more detail: AI adoption isn’t limited to large international organisations or technical teams. It is consistently emerging throughout the humanitarian system, regardless of operational role — thematic, technical, or otherwise. This is particularly consistent among local organisations and practitioners working directly in operational contexts.

What we’re actually seeing is that local NGOs report slightly higher daily usage of AI than international NGOs, particularly in resource-constrained contexts — working smarter, faster, and innovating where resources may no longer be available because of the changes within the sector. That’s our hypothesis anyway.

This tells us that AI isn’t only emerging through top-down institutional investment. What we’re really seeing is that AI adoption is being driven by local humanitarian practitioners simply using tools to support their work on a daily basis. Over to you, Madigan.

Madigan: The next point is about sector-wide integration — and following up on what Lucy said about widespread individual use, how it becomes uneven when looking at institutionalisation.

This is one I keep coming back to, because it captures the central tension right now. AI is used by 50.4% of individuals daily — that majority threshold has been crossed for the first time in 2026. That’s huge. And yet, only 8.8% of respondents work in organisations where AI is widely adopted and integrated. Not 88%, but 8.8%. That 41.6 percentage point gap is the defining quantitative finding.
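A quick aside on units, because it matters when reading these figures: the 41.6 gap is a difference in percentage points, not a relative percentage. A minimal check in Python, using the survey figures quoted above:

```python
# Percentage points vs relative percent, using the figures above.
daily_individual_use = 50.4   # % of respondents using AI daily
org_wide_integration = 8.8    # % in organisations with AI widely adopted

gap_points = daily_individual_use - org_wide_integration
print(f"Adoption gap: {gap_points:.1f} percentage points")  # prints 41.6

# A relative comparison is a different (much more dramatic) framing:
ratio = daily_individual_use / org_wide_integration
print(f"Individual use is {ratio:.1f}x organisational integration")  # prints 5.7x
```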

What’s underneath that number matters too. A lot of people are still in the experimentation stage, and what they’re experimenting with is largely productivity-oriented — writing faster, quicker research synthesis, summarising meetings and notes. That has real value in a sector that has seen a lot of shifts in the last year. But it’s qualitatively different from operational integration, where AI is actually embedded into program design, needs assessments, resource allocation, and decision-making processes.

On governance: 44.7% confirmed their organisation has no AI policy at all. A further 13.8% didn’t know if one exists. Together, that’s 58.5% operating without confirmed governance infrastructure — and this is a sector that handles sensitive population data.

On training: 34% have access to organisation-led AI training — a third — which shows movement. But 54% are still relying on self-directed learning. That 20-point gap is not primarily a resource problem; it’s a prioritisation problem, especially given how many people are using AI daily.

The sector is generating AI capabilities through individual initiative rather than systematic investment, which can produce uneven distribution, inconsistent practice, and capability that may be invisible to institutional risk management. I’ll let Lucy go into the practitioner-driven approach.

Lucy: One of the clearest patterns in the data is that adoption is being driven by practitioners rather than through formal organisational rollout. While so many of us now use AI daily, only around a third report receiving organisational training — though that has grown significantly since last year.

What this tells us is that practitioners aren’t waiting for organisations to introduce these tools. We’re going out and problem-solving ourselves, because they provide immediate value. It primarily happens through hands-on experimentation and peer learning. As practitioners become more familiar with these tools, we are starting to see a shift in the type of work being done with AI — towards much more analytical and decision support use cases.

Primarily last year, we were looking at content generation, editing, and refining emails. That’s still very much the case, but we are seeing a shift into slightly more advanced use cases as our confidence and comfort with the tools grows.

This really does represent a workforce-level transformation where capabilities are emerging from the ground up, rather than being introduced solely through top-down institutional systems. Madigan.

Madigan: Regional adoption is shifting — there’s no two ways about it. Before I walk through this, two notes. First, when we say “AI adoption,” this combines respondents who reported their organisations are either in limited implementation or widely adopted — we used a broad measure to capture where movement is happening across the whole spectrum. Second, in certain regions like Western Europe, Eastern Europe, and North America, sample sizes were a bit too small to include meaningfully here, though you’ll be able to explore that in the dashboard.

With those noted: Asia Pacific is the headline. Asia Pacific has a 12.7 percentage point increase in a single year — from 45.6% to 58.4% — the fastest regional growth we’ve seen in this dataset. Sub-Saharan Africa and Latin America and the Caribbean also show meaningful movement, while global and multi-region organisations lead in absolute terms.

But the number that matters most — and it is consistent everywhere — is that “widely adopted and integrated” still sits below 10% in every single region, without exception.

Interestingly, in MENA and Sub-Saharan Africa, around a quarter of respondents reported intending to adopt AI but not yet having started. There are large populations sitting right at the threshold of adoption, and we’d love to hear from those regions about why they haven’t yet taken the plunge into experimenting — that’s something we definitely want to follow up with further research.

On governance: every single region uses AI more than it governs it. Roughly half of Sub-Saharan Africa respondents have no AI policy, and the picture is very similar in MENA and Latin America and the Caribbean. But what does AI policy or risk mitigation look like when the technology changes so rapidly? We’re really hoping to hear from the audience on that — and from our panellists as well. I’ll shift back to you, Lucy, for a final wrap-up.

Lucy: I think what all of these stats, this information, and the stories coming out from our analysis show is that we’re at a real transition point. AI isn’t something we’re experimenting with or debating as a sector — we are using it. It is embedded in our workflows.

What’s really obvious is that this isn’t about curiosity or hope anymore. There are clear, real, tangible values in day-to-day humanitarian work. What’s especially interesting is that this confidence is emerging across the sector, including among local organisations, who report strong belief in AI’s decision-making value. That has reinforced the narrative we’ve been discussing today — capability is emerging from the ground up, driven by humanitarian practitioners.

This poses a really important structural question. As this transformation is being carried by individuals and practitioners — largely without institutional support — there is huge opportunity, because we are able to work smarter and faster and to innovate. But the risks are real, because the responsibility for safe, effective, ethical use of AI is being placed on us as individuals rather than on organisations and institutions.

The question for the sector as a whole is no longer whether AI adoption will continue. The question is now whether organisations are going to step forward to support and scale this capability in a responsible manner — whether we move from individual adoption to truly systemic adoption, with the governance, training, and infrastructure needed to sustain it.

It feels like we’re at a tipping point, and it’s not just about adoption anymore. It’s about whether we have the institutional capability to support and enable this growing, locally-led humanitarian AI workforce. That’s a really interesting point to pause on from this research.

Ka Man: Thank you so much, Lucy and Madigan, for walking us through those top-level insights. We had a real dilemma as to how much to share today, so that was designed to act as a provocation for the discussion. We do plan to release an executive summary, the dashboard, and I’m sure we’ll have more webinars and podcasts to delve into this further — so watch this space.

All of these data points represent real people’s lives, situations, adaptation, and resilience in the face of the sectoral changes that ran through 2025 and into now — technological, funding, and more — and that really came through in the responses.

I want to turn to the panel now for some initial reactions to what’s just been shared. Timi, wearing your governance hat — what’s your take? What jumps out at you?

Timi: Three key things jump out to me. First, the humanitarian sector deals with the most vulnerable in society, and humanitarian AI is essentially the moral compass of what AI should be doing — for public good, for the most vulnerable, for human good. It’s important that humanitarian organisations invest the effort in governance.

Which brings me to my second point: the fact that AI policy is at a slow pace compared to the growth in use within the humanitarian sector is concerning. Governance frameworks provide the context in which AI can truly serve the public good. For example, in the humanitarian context, issues of language, consent, and the ability of beneficiaries to opt out of AI all matter — most AI is developed by countries far from where it is deployed in humanitarian work.

My third point is about practitioners driving adoption rather than a top-down approach. The positive side is that it shows AI is becoming embedded and utilised — that’s a key marker of success for any technology. But the other side, the negative, is that this data also speaks to the absence of digital leadership. Thank you.

Ka Man: Really pertinent points — thank you, Timi. Liz, what jumps out at you when you compare that to what you’re seeing through your work at Goal Global?

Liz: One of the biggest things that stuck out to me was that dramatic increase in people using AI daily — it really has become business as usual. But there’s that sticking point: only 8.8% widespread integration, and I think that’s really linked to the limited governance framework and digital leadership in humanitarian contexts.

People are using AI for their own efficiencies — not yet for design, monitoring, evaluation, or other areas. I think overall it reflects a limited risk appetite to drive forward with widespread integration as the technology evolves so quickly.

One of the things we’re doing at Goal to address that is trying to facilitate the shift from AI for personal use to more organisational development. We’re actually working with Data Friendly Space — and Nayid is on the panel — to develop tools that can synthesise information and best practices so we can recommend the strongest and most relevant packages of interventions. That’s where we’re going to start seeing a really exciting shift — using AI to embed it into our decision-making and programming, and eventually using that to transition our work to local partners and ensure locally-driven programming instead of a top-down approach.

Ka Man: That’s really interesting — thank you, Liz, and thanks for sharing your experience exploring contextualised tools. Rebecca, what are your reactions, and how does that align or differ from what you’re seeing with HOT?

Rebecca: What I looked at most critically was how locally embedded AI capacity is, and how NGOs are growing in confidence with AI. NGOs are usually solving local problems — so this means tools should be designed to solve local problems, like disasters, public health, etc. It means communities should be involved in verifying AI, and since communities are the ones facing these problems, they should be part of this.

As Timi mentioned, most AI policies are made by people who are not in Sub-Saharan Africa. And you’ve seen the growth happening very quickly — this implies that Sub-Saharan Africa and Latin America should be involved in making these policies, in building these AI models, and in using these models. Thank you very much.

Ka Man: Thank you — that really echoes a lot of the open comments from our respondents in the Global South and from local organisations. So, Nayid, how’s this looking from your perspective? Where do you think the sector can safely take AI in 2026 — how far can we responsibly push this innovation?

Nayid: Thanks, Ka Man. Let me start with what we’re seeing today to frame what 2026 might look like. What we see is that AI is helping people deal with information overload and understand what’s actually happening in a crisis. And just to make a disclaimer — the hype right now is around generative AI, but AI also includes traditional machine learning algorithms, satellite imagery, and more. So just to say that what we’re seeing is adoption in analysis practices.

At DFS, we’re supporting many partners with that work. With Gannet, we’ve reduced the time it takes to go from raw data to useful analysis from about two weeks to just a couple of hours in places like Sudan, Lebanon, Myanmar, and OPT. Partners like Goal, SafeShield, and GiveDirectly are feeling the difference — people are saying AI is improving how efficiently they work, and this isn’t about replacing humans. It’s about expanding their capabilities.

To answer your question about 2026: first, scale is important. We need to move from small pilots and make AI a normal, embedded practice. It’s not just about working faster, but expanding what teams are able to do — deeper analysis, forecasting, making sense of complex situations. Second, integration — AI needs to fit into existing workflows. If you create extra systems or parallel processes, it’s difficult for users. Every new tool comes with a learning curve, and that transition needs to be as smooth as possible. Third, localization — we know this word is always in the humanitarian conversation, but the survey results show AI capabilities aren’t just sitting in large institutions; they’re spread across the whole system. Supporting local NGOs, testing and building with them, is what will truly drive the change. Scaling, integrating, and localising — that’s what we can expect and how we need to look at 2026.

Ka Man: Thank you so much — really thought-provoking. As Lucy and others have mentioned, we’re at this tipping point, and we’re all talking about building safely, in collaboration and partnership, and equitably. I’ll hand over to Lucy, who’ll continue facilitating the conversation.

Lucy: Hi, everyone. I’m really looking forward to asking some quite tricky questions — hopefully not too tricky.

This is an open question to our panel. What is needed to accelerate this? We know we’ve talked about making things faster, expanding use cases, fitting AI into existing workflows, and making policies more locally led. What does it actually take to drive that?

Nayid: I can start with two thoughts. The first is sustained funding. Donors love pilot grants and short-term 6 or 12-month projects. But scaling a tool from 10 to 1,000 users requires capital, engineering, maintenance, iteration, and feedback collection. Sustained funding is one thing.

The second is co-creation from day one — not just consultation at the end of the development cycle, but genuinely co-designing with local organisations from the very beginning: understanding the problems, thinking big but starting small, tackling small problems first, and collaborating with those in the field who can test and iterate based on feedback.

Liz: To complement that, Nayid — one thing that’s really important as we develop different tech tools is that we must integrate contextual and community knowledge. That has to be treated as the source data. There have been some comments in the chat questioning the use of large language models, and I think AI trained on Global North datasets will always underperform in our context — it just won’t be as relevant as tools trained locally. Integrating local knowledge and using smaller local language models is what will make the AI tools we develop the most accurate and trusted by the communities we’re working in.

Rebecca: For me, it’s mainly about AI tools being designed on community-identified, locally identified problems — tools that serve the people. This also means people should be involved in actually testing and using those models.

An example is the fAIr tool at HOT. It’s an AI-assisted mapping service with open GeoAI models. Different community members are able to use that AI to develop local mapping models, and they can then provide feedback that is incorporated into the tool as it evolves. Once the local community is involved in building and testing the models, they become very confident and they trust that data. Involving communities and the people actually going through those challenges is very important. Thank you.

Timi: To chime in — good points have been raised. I want to reiterate what Nayid mentioned about funding: funding is fundamental, and efforts towards it are key. There is also a key need for digital leadership at the humanitarian sector level — both locally and in terms of government, because culture, not politics, can determine success. But politics can shape culture, so it has to be local leadership within the humanitarian sector as well as governments.

Lastly, on AI literacy foundations — focusing on universities is very important. That’s where you have a lot of research, researchers, and young minds that can innovate. The humanitarian sector needs to work closely with universities within local regional contexts. There’s a need for the town and the gown to come together to drive digital leadership and humanitarian AI that truly serves the people. Thank you.

Lucy: I love that, and I think it’s something we at the Humanitarian Leadership Academy believe very strongly — bringing people together, looking at sustainable solutions that aren’t fragmented, and drawing on data and volunteers embedded in communities. Wonderful to hear that message loud and clear from all four of you.

I now have another, potentially slightly controversial question, because we’ve talked about governance and policy to make sure that AI is safe and responsibly used. What was really clear from the research narratives is that adoption and usage has been driven from the ground up — it’s not been imposed. There is a sense that people really trust AI tools — not fully, I don’t think anyone would say they trust AI fully — but I’ve been wondering: if organisations bring in more restrictions, more governance, more policies, more training that makes people question AI a little more, would this actually affect AI usage? Do you think people may stop using these tools, or change how they use them? I’d really welcome ideas from anyone willing to take that on.

Timi: Let me chime in quickly. Governance and policy are like money — money itself isn’t evil, it’s the application of it that matters. We’ve seen where procurement policies in Singapore have been used to drive AI training and literacy, providing tax credits and subsidies for people to advance their learning in AI. But we’ve also seen the US, where certain policy frameworks drew back on regulations. So it’s not that governance or policy itself is bad — it’s a tool, and it can be used well.

To quickly liken it to an analogy: imagine a family with its own codes and rules of engagement, and a neighbour’s child comes to stay while the neighbour is travelling. The neighbour’s child is new to the family, and so you need to set certain parameters — the child might have a week of freedom at first, but then stricter rules begin to set in. What you’re trying to do is ensure you manage the outcomes in the household in a way that is beneficial to everyone. That’s what governance can do in the context of people introducing foreign AI tools into local humanitarian contexts.

Nayid: I think it’s about building confidence and trust. When people say their organisation either doesn’t have an AI policy or they’re not even sure if one exists, two things tend to happen: either people avoid AI completely because they’re unsure, or they use it quietly without much guidance. In those larger organisations, there are users whose bosses don’t know, but they’re paying out of their own pockets for GPT Pro and using it on a daily basis. Neither scenario is ideal.

Clear policies don’t stop innovation — they give people the confidence to start using AI in a responsible and open way. And trust always comes back to how you’re using AI and for which tasks. If you have a clear pathway for evaluating outputs and always keep a human in the loop — reviewing and verifying — you can start trusting it more for tasks that don’t directly harm beneficiaries or communities, while allowing you to be more efficient in analysing and summarising information. Those are my two points — thanks.

Rebecca: What I’m going to say is very close to what Nayid said. If your organisation chooses to use AI, there should be transparency and clear limitations on how it should be used. It should be integrated into your workflow.

At Humanitarian OpenStreetMap Team, we have an end-to-end workflow involving imagery acquisition, digitisation — remote mapping — and now AI-assisted mapping has been added as an optional step within that workflow. After the AI-assisted mapping, we go into field data collection, then downloading and using the map. The key is to make AI use as transparent as possible and include it in the workflows, so that everyone is clear on how to use AI and what its limitations are. Thank you.
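As a rough illustration of the design Rebecca outlines (AI-assisted mapping as an optional, clearly labelled step inside an existing pipeline rather than a parallel process), here is a hedged Python sketch. The stage names follow her description; the functions and data structures are hypothetical.

```python
# Hypothetical sketch of an end-to-end mapping pipeline where
# AI-assisted mapping is an optional, clearly labelled step,
# following the stages Rebecca describes. Not HOT's actual code.
from typing import Callable

def imagery_acquisition(data: dict) -> dict:
    data["imagery"] = "tiles"
    return data

def remote_digitisation(data: dict) -> dict:
    data["features"] = ["building_1 (human-mapped)"]
    return data

def ai_assisted_mapping(data: dict) -> dict:
    # AI proposals are kept separate from human-made features so
    # downstream users can see exactly what the model contributed.
    data["ai_proposals"] = ["building_2 (model, unvalidated)"]
    return data

def field_data_collection(data: dict) -> dict:
    data["field_checked"] = True
    return data

def build_pipeline(use_ai: bool) -> list[Callable[[dict], dict]]:
    steps = [imagery_acquisition, remote_digitisation]
    if use_ai:                          # AI is optional, not mandatory
        steps.append(ai_assisted_mapping)
    steps.append(field_data_collection)
    return steps

data: dict = {}
for step in build_pipeline(use_ai=True):
    print(f"running: {step.__name__}")  # transparency: every step is visible
    data = step(data)
```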

Liz: I’d echo what the other panellists have said. Ultimately, when it comes to governance, the risk of under-governing is far greater than the risk of over-governing. We really need to move away from the idea that governance introduces restrictions — instead, it introduces protections, both for us and for the participants in the programming we’re working to deliver.

As Nayid and Rebecca have mentioned, people are using AI anyway, and they might not be using it in the most responsible way — not out of any malice, but because they just need the guardrails and guidance on where it’s appropriate to use AI, and where it’s better to do things the old-fashioned way. We really need to step up and support the development of stronger governance structures. Ultimately, that’s going to allow both ourselves and the communities we work in to trust how we’re using AI far more, knowing those protection measures are in place.

Lucy: Thank you so much. I’d love to delve into all of these points more, but I’m conscious of time. Ka Man, I’m going to hand back to you — I think it’s now time to hear from our audience.

Ka Man: What a fascinating discussion has already unfolded. We’re now moving to the audience Q&A, where we’ll put some community questions to the panel. With the time we have, we’ll address about 5 or 6 questions. Apologies if we don’t get to your question directly today — we’ll roll them forward and find ways to share responses and ideas with you.

For the first question, I’d like to come to Timi, given your governance background. The question is from Neil. Neil says: “Thanks for the research and presentation — it’s been really valuable. You’ve shown that AI uptake is still quite sporadic and often driven by individual initiative, with only limited work on governance so far. In that context, I’m curious how you see safe use evolving as adoption scales, especially in systems handling highly sensitive protection data, including GBV-related and wider protection risks.” Timi, what’s your take?

Timi: Safe use will definitely evolve — maybe not at the pace that adoption is. That’s why conversations like this are so important, so that we can see why it matters that people are using AI for their own personal work, and that this personal work can dovetail into collective outcomes. Pushing this kind of conversation is one way to ensure a consistent, or at least closer-to-consistent, evolution of safe use of AI in the humanitarian sector — particularly in those sensitive areas like displacement and insecurity.

What we’ve seen is that even something as foundational as privacy policy is a challenge for many organisations. People haven’t yet come to the realisation that these things are quite important, particularly for a sector dealing with the most vulnerable. Conversations like this will push the envelope forward, but at a slow pace still.

Ka Man: Thank you, Timi. Liz, I could see you engaging with this — I wondered if you had anything to add, especially around the highly sensitive protection data and wider protection risks.

Liz: Just a couple of points that resonated with me. There are data protection policies to consider, and one challenge we’ve experienced is that there are different levels of regulation in countries where we operate versus countries where headquarters are based. There’s always a balance between trying to observe the most stringent protections while developing AI solutions.

One of the things the sector really needs to figure out is developing a set of minimum standards for really sensitive areas like protection and gender-based violence. These aren’t the only ones to consider, but we need a set of core minimum standards that organisations agree on — so that at the bare minimum we can provide these quality assurances in our programming. We haven’t reached that point yet; policy development and governance structures are still in early stages. But we need to pay attention to it quickly, because we are operating in very sensitive environments and want to make sure our programming does no harm to participants. This conversation is happening at individual project level, but it needs to be brought much more widely to the community.

Ka Man: Thank you, Liz. It’s interesting that you talked about humanitarian minimum standards — the way Sphere, CHS, and others work. I know that’s a conversation people are having, though I don’t know how much actual movement is possible in 2026, given the deep funding cuts, staffing challenges, and everything else the sector is grappling with.

Liz: If I can jump back in — I actually think that’s one of the reasons we’re seeing slower organisational adoption. Organisations are grappling with this level of risk. We don’t want to put out solutions we can’t back up with the ethical safeguards needed. That’s why we see that 8–9% figure for organisational integration — we’re navigating that risk appetite to make sure we’re providing tools that are safe to use and can manage the sensitivity of the information channelled through them.

Ka Man: Absolutely, and that really resonates with what we saw in the open comments. Even people who are really tech-positive evangelists are still very mindful of the risks, and they’re looking for guidance. Humanitarians are trained to work to standards, they know they’re working with vulnerable communities — they want that guidance. It’s not that they want to go rogue; they want to be in concert and sync with others across the sector, and that really did come through. Thank you, Liz.

Next, I’d like to put a question to Madigan, from Chiyuki. Chiyuki asks: “The presenter mentioned risks associated with the rapid rise of AI agents in humanitarian work. What types of AI agents and focus areas are typically used? Do we have any data, and what are the challenges?”

Madigan: We don’t have highly granular data, but in the survey, 29.8% said they were using AI agents in some form of work. In terms of what those agents are specifically doing — whether predictive, anticipatory, communication, or logistics agents — we don’t have quite that breakdown, so I’d love it if anyone in this audience would follow up with research there.

What we are also seeing are AI-generated persona agents. For example, UNU CPR had a research initiative examining agent-generated digital personas that could simulate conversations with refugees and conflict actors, to see how humanitarian organisations could collect data, train staff, or understand community needs. This was also quite controversial, because people were questioning whether talking to a persona means you’re missing real nuances from actual refugees or conflict actors.

Some key challenges with AI agents are that they’re usually built on commercial large language models, which brings algorithmic bias, questions around data sovereignty and privacy — especially when you give these agents access to your entire computer and the sensitive population data you’re working with. There are a lot of ethical and data questions here. I think there is room for AI agents in this space, but we really need governance and training to help colleagues shape how these agents work within their work, rather than letting them make the decisions or assume everything is accurate.

I don’t think that fully answered your question, but there’s still a lot we don’t know, and a lot of follow-up research to be done specifically around AI agents in humanitarian AI.

Ka Man: To add a few thoughts: when we did the baseline survey in May–June 2025, the term “AI agents” wasn’t mentioned even once. This time we added it as a specific multiple-choice question, worded as “custom-built agents” such as those built on Microsoft Copilot Studio or the OpenAI API — and that was what 29.8% ticked. We were surprised: to go from no mention of the term less than 12 months ago to almost 30% saying they’re using it is quite striking. We don’t yet know whether these are agents just accessing SharePoint, sending emails, or doing something more substantial — so it’s something we want to explore further in subsequent research.

Now, a question from Levent, which I’d like to put to Nayid. Levent says: “Gannet still has a long way to go to reach the level of assistance provided by tools like Claude AI, especially as competitors have rapidly evolved through deep integration with other software ecosystems. In your opinion, can Gannet ever truly meet the high standard of AI assistance our sector requires? Or do you think organisations will eventually open their doors to other trusted AI frameworks as they refine their internal policies?”

Nayid: Thanks, Ka Man, and thanks Levent — that’s a really fair question. Let me start by saying there’s a misconception here. Gannet wasn’t built to compete with Claude, GPT, or any other general-purpose AI. Those are impressive tools, advancing very fast — and as the survey shows, humanitarians are already using them a lot, and we welcome that.

But the key difference is that Gannet is what’s called a RAG (retrieval-augmented generation) system — a purpose-built system that sits on top of a foundational model, in this case Claude. So Gannet benefits from all the progress Claude makes. The difference is that the responses you get in Gannet come from a trusted knowledge base: we source and fetch documents, and today more than 1,000 documents from ReliefWeb, UN agencies, and local sources make up that knowledge base. You get responses with a humanitarian focus.
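For readers unfamiliar with the acronym: a retrieval-augmented generation (RAG) system first retrieves passages from a curated knowledge base, then asks the foundation model to answer only from those passages. The toy sketch below shows the pattern in miniature. The keyword retriever and the call_llm stub are illustrative stand-ins (a production system would use embeddings, a vector index, and a real model API), and none of this is Gannet's actual implementation:

```python
# A toy retrieval-augmented generation (RAG) loop: retrieve the most
# relevant documents from a trusted corpus, then ground the model's
# answer in them. Illustrative only; not how Gannet is implemented.

KNOWLEDGE_BASE = [
    {"source": "ReliefWeb situation report", "text": "Flooding has displaced 12,000 people in the region."},
    {"source": "UN agency update", "text": "Cholera cases are rising in displacement sites."},
    {"source": "Local partner assessment", "text": "Road access to the northern district is cut off."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Toy retriever: rank documents by word overlap with the query.
    A production system would use embeddings and a vector index."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Stub standing in for a foundation-model call (e.g. Claude's API)."""
    return f"[model answer grounded in the prompt below]\n{prompt}"

def answer(query: str) -> str:
    docs = retrieve(query)
    context = "\n".join(f"- ({d['source']}) {d['text']}" for d in docs)
    prompt = (
        "Answer using ONLY the sources below; say so if they are insufficient.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

print(answer("How many people have been displaced by the flooding?"))
```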

So the answer isn’t Gannet or Claude — it’s both. It’s more about organisations finding the right tools for their specific tasks. We’re not thinking “one tool fits all”; as DFS, we provide consultancy to understand partners’ needs, as we’re doing with Goal. The idea is to avoid duplication of efforts, find ways to integrate, and co-create those tools together.

That said, we don’t know what will happen in 6 months, 1 year, or 2 years with the speed of AI progress. It’s also an invitation for everyone to start testing, ideating, and building — the democratisation of access to tool-building is open to anyone in the field now, and you can partner with people in the sector who can guide the process. Think big, start small, keep building and testing — I think that’s the way to go. Thanks.

Ka Man: Thank you, Nayid. Madigan, would you like to add any thoughts — could Gannet be a “humanitarian Claude”?

Madigan: In some ways it has its own specific purpose. One of the known issues with large language models is hallucination — the model will make up information to give you the answer you want to hear. One of the differences with Gannet is that if it doesn’t have the answer, it tells you that. It’s not going to make up information just for the sake of giving you a response.

The other differentiator is for quantitative data. We pull directly from the Humanitarian Data Exchange from OCHA, and those quantitative datasets are now integrated into Gannet, where you can ask questions and get those queries answered in plain language — which for me, as a non-quantitative data person, is really meaningful.
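HDX publishes its catalogue through a public CKAN-style API, so the "pull quantitative datasets" step can be sketched roughly as follows. The package_search endpoint shown is CKAN's standard search route on data.humdata.org; how Gannet actually ingests HDX data was not described in the session, so treat this purely as an illustration:

```python
# Rough sketch of querying the Humanitarian Data Exchange (HDX)
# catalogue via its public CKAN search API. Illustrative only;
# not necessarily how Gannet's HDX integration works.
import requests

HDX_SEARCH = "https://data.humdata.org/api/3/action/package_search"

def search_hdx(query: str, rows: int = 3) -> list[dict]:
    resp = requests.get(HDX_SEARCH, params={"q": query, "rows": rows}, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]["results"]

for dataset in search_hdx("sudan displacement"):
    # Each result carries metadata plus downloadable resources (CSV, XLSX, ...)
    print(dataset["title"])
    for res in dataset.get("resources", [])[:1]:
        print("  resource:", res.get("format"), res.get("url"))
```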

Those two things — being honest about the limits of its knowledge, and sourcing — set Gannet apart. We let people see exactly where information is coming from, through in-text citations and visibility in our data explorer. Like Nayid said, I don’t think it can compete with Claude, but I think it works for the sector in its own way.

Ka Man: Exactly — contextualisation is key. Thank you. Next, a question from Amalia for Rebecca. Amalia says: “Particularly for conflict-affected regions, the dearth of local data and documented perspectives will likely affect the relevance and accuracy of AI findings. There’s also the matter of privacy and security, with real and potentially immediate implications for local populations. How are organisations dealing with these challenges?” Rebecca, I know you have a background in data and work directly with communities — would you like to share your perspective?

Rebecca: I’ll use our workflow as an example. We do have AI-assisted mapping, but it doesn’t mean we simply generate data and that’s the end of it. What happens is that even in a disaster response, when we use AI to generate data, there is what we call human validation. Before a dataset is used for humanitarian response, a human validates it. Someone goes in, generates the data, and validates it before it’s uploaded to our platform, OSM.

On data ethics and protection — we do have a policy that we follow, and we don’t simply upload very sensitive data. An example is Sudan, where there was mapping we had to stop because people were using that data to target communities. We have ethics and data protection standards, and some data we create can be made private depending on the situation. And all the AI-generated data we create is verified by humans. I hope that answers the question.

Ka Man: Thank you, Rebecca. Timi, would you like to jump in?

Timi: I think she summed it up. I’ll just add that conflict regions and similar peculiar contexts are most affected by AI findings, because of the challenges around data quality, infrastructure, and the availability of real-world context. That also pushes the need for small language models that cater to particular local contexts. That’s just my addendum.

Ka Man: Thank you. Liz, I could see you nodding along — would you like to share some thoughts?

Liz: Just a couple of points that resonated — particularly Rebecca’s point about introducing safeguards like putting a human in the loop at certain points within the process of developing these tools. That’s related to the broader governance conversation and the standards we need to have, making sure humanitarian practitioners understand the specific points within an AI solution where human verification is needed.

If we can integrate something like that into the wider governance conversation, it would give us a lot more comfort in deploying different AI tools at the ground level. The communities benefiting from our services would have that comfort too — they’d know we haven’t just stepped back and let AI do the job. We still have our expertise, we’re using it to validate, and ultimately to amplify the impact we’re trying to have, not to replace it.

Ka Man: Thank you, Liz. I’ve got a question from Paul about the research itself. Paul asks: “How do the research figures compare with other sectors, such as the public and private sector? There’s a danger of humanitarian exceptionalism driving policy decisions if adoption in humanitarian organisations is roughly similar in scale and nature to other types of organisations — we’re all just surfing the wave, and it might not be specific weaknesses in the sector to blame for the lack of more systematic approaches.”

Paul, thanks for this. Personally, I’d frame it in the inverse way — the adoption by individuals is happening despite limited organisational support, and that individual adoption is being driven by need, by pressure, by operational demands, as broader team support and funding fall away and systems are decommissioned. It’s something people are driven to do to continue business as usual in what we can fairly call a chaotic environment from 2025 through to 2026.

It’s not about apportioning guilt or blame at individual practitioners. When I compare our findings to those from reports like MIT or McKinsey looking across commercial and healthcare sectors, we’re following the same trajectory, just not as far along. But I don’t see that as a specific weakness, or as a need to “catch up.”

In fact, from my personal perspective, I see this as a positive — because of those governance gaps, this provides a critical window for us collectively to grapple with the issues, share resources and capacity, and drive collective action to bring in specific humanitarian interventions ensuring contextualised, responsible, and ethical AI. I know that can feel quite idealistic, but I hope that through research, convening, and bringing together diverse actors, we can start to make progress. So it’s not about placing a halo over humanitarian exceptionalism — it’s about circumstances, and these patterns have emerged in response to them. Lucy and Madigan, would you like to add anything?

Lucy: I think you framed it really well. For me, every sector is on a different journey with AI — and when I say “AI” I’m referring more specifically to generative AI, which has been the most disruptive part. AI has been around for decades, but it’s the ChatGPTs and Claudes of the world that have caused this disruption.

The parallels I keep drawing on are what we’ve seen with other disruptive technologies — mobile phones, social media, the internet. Everyone went on a different journey. The humanitarian sector wasn’t necessarily behind. I think we are naturally more cautious, because the risks are higher — we’re working with the most vulnerable populations on the planet. So we absolutely have to be more cautious whenever there’s a disruption to our work.

Whilst we didn’t look at cross-sector comparisons specifically, those are the parallels I’ve been drawing, and that’s the arc I’ve been referencing to understand where we are as a sector. I hope that made some sort of sense.

Ka Man: Thank you, Lucy. Madigan, is there anything you want to add?

Madigan: No, I think you’ve both encapsulated it. I think we do have to be cautious about how we integrate AI, but we also need to recognise it for what it is, and realise that AI is probably here to stay in some capacity or another — even if we don’t know exactly what shape or form it might take. What we’ve seen is a lot of practitioner-driven, individual usage, and in that way we aren’t as behind as it might appear. The tech industry is constantly making new advances, but so are humanitarians — we are starting to do more forecasting, more risk analysis, and including those in our programs and needs assessments. So like you said, Ka Man, I’m maybe just as idealistic or hopeful, but there is positivity coming from this.

Ka Man: Thank you, Madigan. We have to be hopeful — it’s not just about technology, it’s about systemic change, and how we can build a more resilient, robust, and equitable humanitarian system.

Unfortunately we’re out of time for the panel discussion and audience Q&A. So many rich threads and conversations emerged, and thank you to our audience for asking such fantastic, challenging questions. We’re very grateful for your candour.

In the closing moments, I’d like to invite you — beyond this session — to reflect on what you as an individual, or your organisation, can do to support your AI journey. Perhaps a gentle invitation to set a follow-up action for yourself, whether that’s some learning, a conversation, putting it on the agenda for the next team meeting, or something longer term.

In terms of continuing the conversation, we have a couple of sessions coming up at HNPW — the Humanitarian Networks and Partnerships Weeks. These two are taking place online, and there are also hybrid sessions running in Geneva and online. The AI-related ones: Lucy and I will be convening a session on Tuesday together with some local leaders to gain perspectives on bridging digital divides — please do join us if you’re available, and I’ll share the link in the follow-up email. We’re also running a broader humanitarian learning and development session together with the Training Providers Forum. Madigan and Data Friendly Space colleagues will be leading sessions too, both online and in person. Madigan, would you like to say a couple of words about those?

Madigan: Yes — the first session is around Gannet, focusing on AI-powered quantitative analysis, specifically using data from the Humanitarian Data Exchange from OCHA. If you’re working anywhere near program data, needs analysis, or response planning, I think it’s well worth your time.

The second session speaks to the heart of today’s findings — it’s about building a human-in-the-loop process. We’ll be joined by KoBoToolbox, and we’ll be looking at the full pipeline from primary data collection through to secondary data analysis. That human-in-the-loop component is something the sector really needs to hold on to quite deliberately right now. Thank you all so much for today.

Ka Man: Thank you very much — and thank you all so much for joining us today. When the session closes, a short survey with 5 quick questions will pop up. If you could give us some feedback on what you liked and what we could improve, we’d really appreciate it.

This session will be uploaded to the HLA YouTube channel and will be available within 24 hours — we’ll send that by email, together with the slide deck and full transcript, so you can read those at your leisure. The executive summary and dashboard will be available in the coming weeks, and as session attendees and survey respondents, you’ll be among the first to receive them — so keep an eye on our channels.

Finally, in recognition of your attendance and participation today, we’ll be sending you an HPass digital badge, which you’ll be able to share on LinkedIn to show that you’re engaging in critical AI dialogue in the humanitarian space.

All that’s left to say is a huge thank you to our wonderful panellists and research team, and of course to our global community for joining us today. Thank you very much.
 


Session description

Artificial intelligence (AI) is moving fast – but what’s really happening in humanitarian organisations on the ground right now? AI use and conviction in its benefits are surging – but how can we harness it safely and responsibly?

The Humanitarian Leadership Academy and Data Friendly Space present insights from 1,729 humanitarians from 120+ countries who responded to the January 2026 Humanitarian AI pulse survey, offering a vital check-in on how practitioners are experiencing AI today.

Join us to explore survey insights and hear how the ‘Humanitarian AI paradox’ identified in the foundational 2025 global study is deepening.

Expect a candid, dynamic discussion with our panellists. Hear experiences and perspectives on where AI is creating value, where risks and gaps are emerging, and where collective action is most needed to shape a more locally led and accountable AI future.


Speakers

  • Rebecca Chandiru, Volunteer Engagement Coordinator at the Humanitarian OpenStreetMap Team ESA Hub
  • Liz Devine, Head of Programme Technical Team, GOAL Global
  • Lucy Hall, Research and MEAL Lead, Humanitarian Leadership Academy
  • Madigan Johnson, Head of Communications, Data Friendly Space
  • Timi Olagunju, Policy expert, lawyer, and governance strategist, The Timeless Practice
  • Nayid Orozco Bohorquez, Chief Product Officer, Data Friendly Space
  • Ka Man Parkinson, Communications and Marketing Lead, Humanitarian Leadership Academy [Host]

Who this session is for

This session is for anyone in the humanitarian space who wants to gain insights into the use of AI across the sector. The discussion will be of particular interest to leaders navigating AI decisions, as well as technologists, researchers, donors and government stakeholders seeking to understand the humanitarian AI landscape and its opportunities and risks in 2026.

View the 2025 foundational study and supporting resources

About the speakers

Rebecca Chandiru

Rebecca Chandiru is the Volunteer Engagement Coordinator at the Humanitarian OpenStreetMap Team (HOT) ESA Hub, working in the partnership and community team. She works with volunteers and youth in local OpenStreetMap communities, YouthMappers, and global contributors to advance open mapping for humanitarian impact. She coordinates training, skill sharing, and peer-to-peer initiatives that strengthen collaboration across local communities mapping in solidarity. Rebecca’s work focuses on inclusive participation, contributor recognition, and ensuring community members have the technical support needed to create open geospatial data that matters to them, solves their local problems, enables effective response, and supports long-term resilience.

Liz Devine

Liz Devine is Head of GOAL’s Programme Technical Team and a public health leader with 10+ years’ experience leading humanitarian and development programming in complex emergencies across Ethiopia, South Sudan, Syria, Turkey, Moldova, and the regional Ukraine response. She has managed $30M+ annual budgets and led large multi-disciplinary teams delivering health and multi-sector portfolios in fragile and conflict-affected contexts. At GOAL, Liz leads a global “research and design” team supporting operations in 17 countries, translating frontline realities into responsible, practical innovation — selecting and shaping AI- and digital-enabled approaches (e.g., real-time monitoring and feedback loops, predictive analytics/early warning, and decision-support tools) that strengthen government- and community-aligned response without adding operational burden.



Lucy Hall

Lucy Hall is a data strategist and systems thinker with over ten years of experience driving digital transformation in the humanitarian sector. As Research and Evidence Lead at the Humanitarian Leadership Academy, she leads efforts to integrate AI and data innovation into locally led humanitarian action, exploring how data and technology amplify local expertise.

Madigan Johnson

Madigan Johnson is a digital expert specialising in user behaviour, research, design, and storytelling. Following her Master’s in International Humanitarian Action through the NOHA network, Madigan pivoted to the private tech sector before returning to humanitarian technology. As Head of Communications at Data Friendly Space (DFS), she brings her expertise in digital technology, content strategy, and community engagement to the frontier of humanitarian AI innovation.


Timi Olagunju

Timi Olagunju is a policy expert, lawyer, and governance strategist working at the intersection of emerging technology, law, and development. Timi has advised governments, multinationals, and global institutions including UNICEF, the ILO, Samuel Hall, and the U.S.–Africa Business Bridge on digital transformation, regulatory policy, sustainability policy, AfCFTA/AGOA, and governance of emerging technologies. He has also provided public policy and government relations expertise through Global Integrity (Washington DC) and Speyside Group (London). He is the Founder of the AI Literacy Foundation and Youths in Motion, and serves on the boards of the Slum and Rural Health Initiative and Feed To Grow Africa. His advocacy on AI governance, including his published submissions to the White House Office of Science and Technology Policy (OSTP), helped shape U.S. policy debates that informed the landmark 2025 Executive Order on AI Education for American Youth and the U.S. AI Strategic Plan.


Nayid Orozco Bohorquez

Nayid Orozco Bohorquez is a data enthusiast and strategic leader with over 10 years of experience in product management, operations, and sales. As Chief Product Officer at Data Friendly Space, he drives product innovation that leverages artificial intelligence to advance data-driven solutions in the humanitarian and development sectors. Throughout his career, Nayid has successfully led distributed teams across multiple continents, delivering transformative solutions that span both B2C and B2B environments. His work is distinguished by a commitment to democratizing AI technology and building tools that empower organizations to make better data-informed decisions.

Ka Man Parkinson

Ka Man Parkinson is Communications and Marketing Lead at the Humanitarian Leadership Academy. She has 20 years’ professional experience in communications and marketing across the nonprofit sector. Ka Man joined the HLA in 2022 and now leads on global engagement and community building as part of the HLA’s convening strategy. She takes an interdisciplinary, people-centred approach to her work, blending multimedia campaigns with learning and research initiatives. Ka Man produces the HLA’s Fresh Humanitarian Perspectives podcast and leads the HLA Webinar Series.

About the HLA Webinar Series

The HLA Webinar Series is an online initiative designed to connect, inform and inspire humanitarians from around the world. We promote information sharing and knowledge exchange on topical issues facing the sector.

Through these regular free online sessions, we strive to bring you fresh and engaging insights from diverse speakers ranging from seasoned leaders to more recent entrants to the sector.

Disclaimer

This series has been produced to promote learning and dialogue and is not intended as prescriptive advice. Organisations should conduct their own assessments based on their specific contexts, requirements and risk tolerances.

