
Opinion | How are humanitarians using AI in 2026? The case for governance and local leadership

In 2026, individual artificial intelligence tool adoption across the humanitarian ecosystem continues to outpace overall organisational readiness. What are the implications of this sharpening trend?

In this opinion piece, the HLA’s Ka Man Parkinson reflects on findings from the second phase of humanitarian AI research conducted together with Data Friendly Space. She shares her key takeaways from the research together with qualitative insights and voices drawn from convenings as well as wider sector developments, building a case for governance and local leadership as critical priorities requiring collective action.

No single actor can shift the humanitarian AI landscape alone. It takes a movement, built from contributions big and small – including yours.
Ka Man Parkinson

Navigating sectoral challenges and rapid technological change

As we are all too aware, 2026 continues to be characterised by profound challenges for everyone in the humanitarian space – felt most acutely by those in crisis-affected contexts. Alongside this, AI developments continue to accelerate, adding to the sense of ‘noise’ and confusion across the sector.

To play our part in providing data and evidence and to promote dialogue on how AI is bearing out across the humanitarian system at large, in May 2025, we partnered with Data Friendly Space (DFS) to lead the first comprehensive global study into how humanitarians are using AI, reaching more than 2,500 respondents from 144 countries. As my research co-lead Madigan Johnson put it at the time: “we had tapped into a massive underground conversation”, signalling the huge demand from humanitarians for insights and guidance on how to navigate AI in their work. At the end of the foundational phase in November, I wrote a reflection piece on the research, outputs and sector engagement.

Our approach in 2026: from AI mapping to convening and collective action

In response to this sector demand, and as part of the HLA’s broader convening strategy and commitment to local leadership, throughout the first quarter of 2026 we led the second phase of this work. We once again teamed up with DFS to focus on rapid data collection, community engagement and mobilisation through surveys and convening on digital platforms, working together with partners and contributors across the sector.

Across the two 2025–26 survey waves with DFS and the supporting engagement campaigns with the wider community, we have gathered more than 4,200 survey responses and attracted more than 2,700 individuals to learn more through online sessions, reaching thousands more through events, podcasts, social media engagement, and sector media including Devex.

January 2026 pulse survey: what the data is telling us

The crises and upheaval of 2025 appear to have deepened the paradox rather than resolved it: rising individual conviction set against largely static organisational readiness.

Heat map of survey response locations from the January 2026 pulse survey (1,729 responses), with the highest concentration in Nigeria. Over 80% of respondents are from the Global South/Majority – an even stronger representation than in the 2025 baseline study (75%).

While the full picture is available in our research briefing note and dashboard, a few findings are worth highlighting here. Notably, AI adoption in the humanitarian ecosystem is not following a Global North-to-South diffusion pattern – the highest growth and most intensive daily usage are concentrated in regions with acute humanitarian needs, including Kenya, Sudan and Bangladesh. Looking at usage alongside organisational governance, we can see that while local organisations are the highest daily users of AI, only 13% have a formal AI policy, compared to 39% of UN agencies. This indicates that the governance gap falls hardest on those already working with fewer resources.

2026 is bringing governance into sharp focus

At the time of writing my personal reflection at the end of the first phase of research in November 2025, I did not see a clear consensus emerging on sectoral priorities for action, nor clear pathways forward. Five months on, I see AI literacy and governance crystallising as critical priorities: conversations, data and sectoral movements from this phase point to increasing convergence around these as foundational challenges.

With the rapid diffusion of AI across the ecosystem at large – driven by accessible LLMs and now agentic AI – and limited movement on organisational AI governance, right now in 2026, I believe this evidence points to humanitarian AI as a governance and protection challenge, rather than primarily as an innovation agenda.

The 2026 AI Index Report just released by the Stanford Institute for Human-Centered Artificial Intelligence (HAI) provides data showing that responsible AI is not keeping pace with AI capability: “Documented AI incidents rose to 362 [since 2025], up from 233 in 2024. Adding to the challenge, recent research found that improving one responsible AI dimension, such as safety, can degrade another, such as accuracy.” This evidence underscores the urgency of the governance and accountability piece.

What are the sentiments across the sector? The conversation is maturing: cautious optimism with an eye on ethical risks

Across our three 2026 webinars, sentiments and lines of questioning have noticeably shifted since the August 2025 report launch. The question is shifting from “should we use AI?” to “how can we use it – responsibly?”

Speakers at our February 2026 webinar, Beyond the hype: Ground truth on AI across the humanitarian sector, held in partnership with Data Friendly Space: Ka Man Parkinson, Lucy Hall, Madigan Johnson, Nayid Orozco Bohorquez, Rebecca Chandiru, Liz Devine, and Timi Olagunju.
In 2026, humanitarians are keen to hear more about specific use cases and to learn from each other – and the discussions tend to be more focused and grounded in the reality of current capabilities and contexts, rather than in aspirational, future-facing large-scale deployments.

In webinars, attendees raise specific questions that demonstrate close engagement with the discourse, including the rise of AI agents, small language models for low-connectivity settings, climate impact, data sovereignty, and local leadership.

It has been encouraging to see AI conversations becoming more mainstream among practitioners, alongside what appears to be increased confidence and a greater sense of psychological safety around these conversations, including on LinkedIn.

Looking at the survey comments, the tone can be characterised on balance as cautiously optimistic – those who point out the benefits now and in the future usually add caveats and concerns too.

A senior leader in operations at a local NGO in Somalia said:

“AI has strong potential to improve humanitarian work by supporting data collection, faster decision-making, and better targeting of assistance. However, more training and access are needed to ensure effective and responsible use, especially at local level.”

As one programme manager at a local NGO in the Philippines said:

“Since our work deals with people with complex situations, I think what AI can help in our organisation is that it can help us analyse scientifically, but it cannot replace human interface. So ultimately, we will adopt with caution.”

The human dimension and the role of judgement are frequently grappled with and expressed, highlighting a tension in the use of AI in humanitarian work. I’ve noticed that the word ‘lazy’ surfaces quite frequently in the comments and conversations, particularly among respondents from Africa – from Sudan to Nigeria.

An operations manager from Nigeria, a non-adopter, said:

“AI training should not be done in a manner that will make people become lazy.”

Environmental concerns also came through. A programme team lead at a US-based INGO wrote:

“In our sector we need to address the environmental impact and exacerbated digital divide that AI is escalating.”

Another operations manager noted they use AI “only begrudgingly” given concerns about water and energy consumption.

These are considered, values-led positions that point to something important that I personally view as overlooked: near-universal individual uptake of AI tools driven by LLMs does not mean universal organisational adoption is inevitable, or required in every context. There must remain the right to say no.

An overlooked perspective? Organisational AI adoption and the case for intentionality over inevitability

To me, what the discourse often underexplores is the significant proportion of the sector that has not embarked upon AI adoption – including those who do not intend to. 22% of January survey respondents said their organisation has not yet started AI adoption but intends to, and a further 10% have no intention of adopting AI at all.

28% of survey respondents said that their organisation is currently in the AI experimentation and piloting phase. Organisations may stay in that phase for reasons of resource, governance, deliberate caution, or context.

In my view, the humanitarian AI paradox does not represent a gap to be closed in a linear way, rather it is a space to be navigated and led according to each actor in each context – with purpose and intentionality.

Yet, a conscious non-adoption decision for an organisation does not mean no action is required in this space. Individual usage will continue regardless, creating a vacuum without governance or guardrails.

In our January webinar in partnership with NetHope, Daniela Weber made the case for action on organisational policies as a critical step:

“Everyone in your staff that uses a device will come into contact with AI, either by choice, or because the tools you’re using have AI built in. So having that policy is important.”

What this points to is an informed approach to AI at the individual level and leadership at the institutional level. As Timi Olagunju articulated, this moment calls for “digital leadership”. And in our January webinar with NetHope, Mercyleen Tanui captured the organisational challenge: “AI is not an IT initiative. It is an organisational change initiative.”

Governance is emerging as a priority: from frameworks to operationalisation and accountability

At our January webinar, Esther Grieder from NetHope offered some predictions for 2026: that this could be the year of one significant AI-related error or harmful use case that forces the sector to act on standards, and that AI agents may begin appearing in organograms. These are observations from conversations, not data – but they are a stark reminder of what is at stake if governance continues to lag.

During the same session, Michael Tjalve noted that in his recent experience working across the sector, he had not seen much meaningful movement on AI policy development – and our survey data released shortly after bore that out: just a 1% increase in formal organisational AI policies between the two surveys (to 23%).

In our February convening to discuss the findings, Timi Olagunju highlighted this governance gap as a particular concern for the sector:

“The fact that AI policy is at a slow pace compared to the growth in use within the humanitarian sector is concerning. Governance frameworks provide the context in which AI can truly serve the public good.”

Liz Devine also called for shared standards specifically for humanitarians, and in doing so reframed governance as an enabler:

“One of the things the sector really needs to figure out is developing a set of minimum standards for really sensitive areas like protection and gender-based violence…we need a set of core minimum standards that organisations agree on.”

This lack of shared approaches and standards, she argued, is holding organisations back:

“I actually think that’s one of the reasons we’re seeing slower organisational adoption. Organisations are grappling with this level of risk. We don’t want to put out solutions we can’t back up with the ethical safeguards needed.”

Nayid Orozco Bohorquez reinforces this view of governance as enabler rather than inhibitor of innovation:

“Clear policies don’t stop innovation – they give people the confidence to start using AI in a responsible and open way.”

As Mercyleen Tanui advocates with AI tools: “Right-size the tool so that you are not starting bigger by default.”

I think we can apply this principle more broadly: right-sized governance frameworks, tools and audit requirements for each organisation and context help shape governance as an enabler, not a burden.

Across diverse actors and contexts, there is a growing convergence: governance is a collective priority. The direction of travel is encouraging, but critical work lies ahead. As a newly released report from NetHope puts it: “responsible AI governance is emerging but structurally fragmented” and “without shared standards to bridge this gap, responsible AI practices will continue to vary widely across organizations.”

Funders and donors have a crucial role to play. Analysis just published by Candid highlights the scale of the disconnect: 84% of nonprofits need funding to develop and scale AI tools, yet only 17% say their funders have engaged them on AI.

In a period of hyper-prioritisation, investing in AI governance may not represent the most urgent ask. Yet, when we view the humanitarian AI landscape through a protection lens, a case emerges that this is the moment when that investment is needed – to protect sensitive data, safeguard vulnerable populations, and mitigate risk at a time of acute shocks and vulnerabilities across the sector. What happens next – how emerging governance frameworks are operationalised across different contexts – is critical, including the involvement of, and focus on, smaller and local actors.

Centre local actors: harnessing contextual knowledge and innovation

In our research in both 2025 and 2026, what really emerged were the creative, resourceful applications and approaches of local actors in the Global South – a finding also documented by Daniela Weber in a NetHope article, where she observes through her extensive work in this space that “the Global South leads innovation.” She believes that this year more AI use cases and tools will be developed locally in the Global South, particularly specialised language models and domain-specific solutions tailored to regional needs.

In 2024, local and national actors participated in 93% of Humanitarian Country Teams (HCTs) globally (GHO Report 2025). It follows that humanitarian AI should, by definition, meet the specific needs of these local and national actors. Their experience should be shaping the agenda, in line with broader localisation processes to shift power to local actors.

Active work must be done in the AI space to counter the risk that ‘humanitarian AI’ becomes shorthand for large-scale technical deployments by well-resourced international actors – particularly when current adoption patterns are globally distributed, bottom-up, and often without governance or organisational support. This is a pivotal moment for action to counter, not perpetuate, digital divides.

In our HNPW session focused on local leadership in AI development in March, Musaab Abdalhadi set the frame:

“When we talk about local leadership in humanitarian AI, we often focus on access to technology, but from my perspective, the real issue is power, not technology.”

Speakers at our March 2026 HNPW session, Bridging digital divides: centring local leadership in humanitarian AI development: Ka Man Parkinson, Lucy Hall, Ali Al Mokdad, Musaab Abdalhadi, and Gülsüm Özkaya.
Musaab’s words carry weight because they are grounded in lived experience and action. In November 2025, he initiated what we believe was one of the first AI training sessions designed specifically for Sudan’s crisis context – delivered fully in Arabic, in partnership with Ali Al Mokdad and the HLA. The 28 participants – drawn from Emergency Response Rooms, youth-led volunteer groups and local NGOs – were already using AI informally and without guidance.

As a training participant reflected:

“Since the beginning of the war, we have relied on artificial intelligence to meet donor requirements – this training helped me use these tools more effectively and confidently.”

Local responders in Sudan participating in remote-delivered humanitarian AI training in November 2025, gathered around a laptop outdoors. Image credit: Save the Children in Sudan


The infrastructure reality behind many of these conversations presents significant barriers. In the January survey, a senior leader in data and information management at a local organisation in Cameroon, responding in French, described a context of low internet penetration, limited digital literacy, and rural areas where electricity is still a luxury across much of Sub-Saharan Africa. And yet, he wrote, in a world of constant change, there is a duty to adapt and level up – not for its own sake, but to improve the daily lives of the populations who are suffering.

Agency of local actors and community-centred approaches is central to this. As Ali Al Mokdad said:

“Let’s not overestimate the risks and underestimate the opportunities. Local organisations in Nigeria, Lebanon, Syria, Sudan, Kenya, Rwanda are giving us very good examples of how to leverage these tools. The main important thing is not to stand in the way of local organisations and local leaders.”

In our February webinar, Rebecca Chandiru illustrated what community-embedded AI looks like in practice:

“Once the local community is involved in building and testing the models, they become very confident and they trust that data.”

Gülsüm Özkaya, whose research focuses on AI-generated visuals from the perspective of crisis-affected people, offered a reframe that cuts through much of the international-versus-local debate – which may be a false dichotomy, particularly in the AI landscape.

“The main divide right now is not about being global or local. It’s about being digitally fluent and AI-aware. A local organisation that masters the use of AI tools can access the opportunities and create impact as effectively as the global giants.”

Shortly before HNPW in March, I published an interview with Ivan Toga – a pulse survey respondent joining the call from Rhino Refugee Camp in northern Uganda. As Ivan summarises:

“We need an artificial intelligence that speaks the language of the donor and the language of the village where I come from. We need an AI that is good for all of us.”

Looking ahead: collective action and local leadership

We have focused our efforts in these two phases of the research on measuring and communicating the paradox and its implications. The next phase is about responding to it – connecting and convening to support humanitarian actors, and supporting collective action to find solutions in ways small and big.

To support and promote AI literacy and skilling, we are currently exploring microlearning guides and bite-sized content. As a direct follow-up to our January webinar with NetHope, we published a practical quick-start guide on organisational AI readiness. It is also encouraging to see new AI courses emerging across the ecosystem, including through NetHope on Kaya, a new free humanitarian AI course from Elrha, as well as nonprofit AI initiatives from Microsoft. These represent encouraging movement toward contextualised learning for nonprofits and humanitarians.

In the governance arena, the first instalment of the UK FCDO-funded SAFE AI Framework is scheduled for release in May 2026, with a vision to establish “the nature and scale of the humanitarian AI governance gap, why it matters and why individual agency policies cannot close it alone.” This stands alongside increasing sectoral discourse and outputs focused on governance, responsible AI and accountability.

2026 remains a critical window. Humanitarian AI will keep growing and we need to move forward with purpose – with informed, intentional, values-led choices before key decisions are made and vendors are locked in. The sector’s choices must align with our overall commitments to shift power toward local actors, and ensure the tools emerging truly serve the people and principles we are here for.

As Musaab Abdalhadi said, local actors should be at the table as co-designers, not testers. As Ivan Toga advocates, we need models built from community, not delivered to it. And as our research keeps showing us – the energy, the ingenuity, and the will are already there.

Ali Al Mokdad encapsulates the opportunities, challenges and what is at stake:

“AI tools and AI in general could be either the best or the worst thing that could ever happen to humanity and to what we do. And localising AI could take us to the best-case scenario.”

Coordinated efforts together with donors, funders and other influential actors are pivotal in the next stages to operationalise and embed sector efforts. The risk should not be placed on individuals or local actors, but on those who are mandated and resourced to bear this responsibility. No single actor can shift this alone. It takes a movement, built from contributions big and small – including yours.

About the author

Ka Man Parkinson is Communications Lead at the Humanitarian Leadership Academy, where she leads on global engagement and community building initiatives as part of the organisation’s convening strategy. Ka Man blends multimedia campaigns with learning and research – she produces and hosts the Fresh Humanitarian Perspectives podcast and HLA Webinar Series, building a culture of thought leadership. Her interdisciplinary background – spanning two decades of communications and marketing experience in the international education and nonprofit sectors, and an academic grounding in business management and IT – shapes her holistic and people-centred approach to her work. She initiated and co-leads the first global study to track how humanitarians are using AI in their work. Ka Man is based near Manchester, UK.

Acknowledgements

This research and convening initiative is a collective effort that would not be possible without input, engagement and support from across the sector. The author would like to thank all research participants who generously shared their experiences and insights; research co-leads Lucy Hall (HLA) and Madigan Johnson (Data Friendly Space); January webinar panellists Esther Grieder (NetHope), Mercyleen Tanui (WaterAid), Michael Tjalve (Humanitarian AI Advisory/Roots AI Foundation), and Daniela Weber (NetHope); February webinar panellists Rebecca Chandiru (Humanitarian OpenStreetMap Team), Liz Devine (GOAL Global), Timi Olagunju (The Timeless Practice), and Nayid Orozco Bohorquez (now Mercy Corps); HNPW session panellists Musaab Abdalhadi (Save the Children in Sudan), Ali Al Mokdad (independent), and Gülsüm Özkaya (now IHH); and interview guest Ivan Toga (humanitarian practitioner from Uganda).
 

Note and disclaimer

This article is a personal reflection, prepared to promote learning and dialogue. It is not intended as prescriptive policy advice, or as organisational endorsement of specific individuals, organisations, technologies or approaches. Organisations should conduct their own assessments based on their specific contexts, requirements, and risk tolerances. Quotes from contributors have been drawn from webinars, interviews, and published materials; the views expressed in this article are those of the author and do not necessarily reflect the positions of quoted individuals or their organisations. This research has been conducted independently by the Humanitarian Leadership Academy in partnership with Data Friendly Space and has received no external funding.

About the research

The January 2026 pulse survey was conducted by the Humanitarian Leadership Academy and Data Friendly Space, building on the May/June 2025 foundational study – creating a global baseline of AI adoption across the humanitarian sector. Since the report’s release in August 2025, the research has achieved significant global reach and impact: informing academic research in Türkiye, Colombia, Switzerland, and Germany; supporting an Arabic-language AI training initiative for local responders in Sudan; contributing to civil society advocacy in Ukraine; and shaping high-level and practitioner dialogue on responsible humanitarian AI at international conferences and webinars, as well as through a six-part podcast series with a focus on global and African humanitarian AI perspectives. A combined total of more than 4,200 responses have been received across the two survey waves. All reports, briefing notes and supporting resources are available on the research landing page in English, French and Spanish.


Opinion | Beyond resilience: Why women and girls must shape how we respond to crisis and conflict

This Women’s History Month, Claire Sanford, Director of Conflict and Humanitarian at Save the Children UK, shares a powerful reflection drawing on 26 years working on the frontlines of some of the world’s most difficult humanitarian crises – and the women and girls whose stories have never left her.

Over the past 26 years working in humanitarian contexts, many moments blur into one another: checking in at a tin-hut airport on a dirt runway, cautiously passing through checkpoints, displacement and refugee camps, conversations carried by hand gestures and facial expressions in fragments of shared language.

The backdrop has often been uncertain and at times chaotic: conflict, disaster, displacement. Yet within that uncertainty, a striking clarity emerges – not from systems or strategies, but from people, particularly the women and girls at the centre of these experiences.

A woman holding a baby, seated with two young children and an older woman. Image credit: Save the Children

Throughout my time in the humanitarian field, I have worked across several countries, and I can vividly remember the names of the women and children I have spoken with.

Their names are often etched in my mind alongside the stories they have shared of what they have endured, what they have fought to overcome, and what matters to them most. What stands out, time and again, is not only their resilience, their courage, their strength and their unwavering determination, but also the conditions and inequalities that have demanded so much of them.

Last year I met a 14-year-old girl in a dusty camp on the border of Sudan. She spoke of her fear that she may never achieve her dream of becoming a public health worker. Conflict had denied her the right to learn, something which she loved, was passionate about, and that would allow her, as she said, to “help rebuild her country.” Even in uncertainty, her ambition remained clear, despite the barriers placed in her way which denied her this choice and opportunity.

A health worker measures a young child’s mid-upper arm circumference to screen for malnutrition. Image credit: Claire Sanford

I think of the Somali mother I met in a nutrition stabilisation centre in Baidoa, her body exhausted, holding her two children—a severely malnourished two-year-old daughter barely responsive, and her four-month-old son. She had walked more than 60 miles in searing heat to reach a displacement camp, driven by the hope of finding medical treatment and food. Her journey speaks not only to her determination, but to the absence of accessible support that should have been there for her from the start and prevented the illness of her young daughter.

I think of the mother in Aleppo who shared her sense of guilt with me. She had returned to Syria in December 2024, leaving behind safety in Türkiye with hope for rebuilding her life at home. Instead, she found harsher realities, no electricity or heating, soaring food prices that meant surviving on little more than bread and water, and an overwhelming fear of how she would access medical care for her children if they needed it. Her story was one of love, sacrifice, and impossible choices, shaped by circumstances far beyond her control and her desire to return to her home country.

I think of the injured children in Syria, and in so many other conflict-affected places, who have shared their dreams: to walk again, to play football or cricket, to dance, dreams they should never have had to imagine after the remnants of conflict have taken so much from them. Their desire to reclaim their childhoods remains powerful, highlighting both their determination and what has been unjustly taken. It is a stark reminder that children must be placed at the centre of how wars are fought, regulated, and responded to so that weapons no longer define the shape of childhood.

Life for women and girls in many contexts is harder than many can imagine, shaped by structural inequalities that limit safety, opportunity, and choice.

Over the decades, and across continents, women and girls have fought inequality in all its forms. The stories I have heard and the sights I have seen are often those where inequality is at its most extreme. While some countries have made progress, many have not. But what I have come to believe is that while experiences differ widely, there are shared patterns that connect many women and girls: a thread of courage, resilience, and collective determination to support one another and to keep going despite the odds.

This spirit is not new. It is the very foundation on which organisations like Save the Children were built. After the First World War, when children across Europe were left starving and malnourished, sisters Eglantyne Jebb and Dorothy Buxton were among those who refused to look away. As part of the Fight the Famine movement, they raised awareness of the suffering and demanded action, alongside others calling for change at the time. At a time when many remained silent, they chose to act—driven by compassion, courage, and a belief in a better future.

That same ethos underpins the work we do today. It is carried forward in every influencing approach, every programme, every response, every decision. And it is continually shaped and strengthened by the leadership, insight, and experiences of the women and girls we meet.

Working in the UK, it is these stories that stay with me. They bring clarity amidst complexity and uncertainty. They sharpen our focus and fuel our determination as teams to do what we can, however challenging the context may be and to defy the barriers.

They remind us that behind every decision, every policy, every intervention, every statistic, there are real lives full of hope, ambition, and dignity, and that our responsibility is to ensure those realities shape what we do.

The International Women’s Day 2026 theme, Give to Gain, is not just an idea; it is something I have seen lived out repeatedly. Women and girls, often with the least, continue to lead, adapt, and persevere in the face of significant barriers. Their insight, strength, and lived experience must shape how we act.

During conflict and disaster, these stories reveal the essence of humanity. They remind us of what truly matters: safety, dignity, opportunity and the ability to shape one’s own future.

They show us that even in the most difficult circumstances, the human spirit endures, whilst also underscoring the urgency of addressing the inequalities that make such endurance necessary.

These stories are not only reflections of hardship. They are a call to action. They are reminders that a just and equal future for every woman and girl is not only necessary, but possible if we continue to listen, to act, and to stand alongside them.
Claire Sanford

About the author

Claire Sanford is a dedicated humanitarian leader with more than 26 years of experience championing the rights, safety, and dignity of children and vulnerable groups in complex crisis settings. From her early work in mine action across South Asia and the Middle East to leading global emergency responses with Save the Children, Claire has worked alongside local teams and partners in contexts affected by conflict and disaster, including Afghanistan, Bangladesh, Pakistan, Somalia, and Indonesia, guided by a commitment to upholding children’s rights and supporting community-led efforts to ensure their safety and dignity.

Now serving as Director of Conflict and Humanitarian at Save the Children UK, she leads a dynamic team working in partnership with colleagues, partners and communities to influence policy, advocacy, and humanitarian response for children affected by conflict. Her leadership extends beyond her role through trustee positions and strategic partnerships that strengthen accountable, locally informed, and ethical humanitarian action. A committed advocate on crises such as those in Somalia, Sudan, South Sudan and Syria, Claire works to ensure that the perspectives of affected communities are reflected in global decision-making and calls for stronger, more equitable international action.


Develop your organisational AI readiness: a quick start guide from the HLA and NetHope


Need help starting your nonprofit’s AI journey?

Read our new quick-start guide, created in partnership with NetHope as a follow-up to our January 2026 webinar, Humanitarian AI: Lessons learned, trends and opportunities for 2026.

With expert guidance from Daniela Weber, Director of NetHope’s Center for the Digital Nonprofit, and resources from both organisations, this guide offers practical next steps – from establishing ethical guardrails to building capacity and scaling AI initiatives.

This guide is available in English and Spanish – the French language version is coming soon!

We know that open conversations about what worked and what didn’t are the most helpful learning opportunities for humanitarian staff and lift the overall capability of the sector. Share your approaches and use cases with each other, collaborate to create common solutions that will serve all humanitarians.
Daniela Weber, NetHope

Watch the webinar recording

Explore humanitarian AI adoption trends and considerations for responsible development and implementation. Hear from expert practitioners about what’s working, what isn’t, and lessons worth sharing.

Contact

This guide was created in March 2026. If you have any feedback or suggestions on this content, please contact:

Humanitarian Leadership Academy
info@humanitarian.academy

NetHope
communications@nethope.org


HLA at Humanitarian Networks and Partnerships Weeks | 2nd – 12th March 2026


Thank you for joining us at HNPW 2026. Over four sessions we discussed youth leadership through the lens of crisis response in Ukraine, Peru and Türkiye; what can be done to drive change towards a locally led research agenda; and local leadership in humanitarian AI development.

We are grateful to collaborate with Start Network, H2H Network, ELRHA, Open Space Works Ukraine, KAOS, and the Training Providers Forum.

Read 📖: BLOG – Small ripples at HNPW – Humanitarian Leadership Academy

Listen 🎧: PODCAST – True Youth Involvement and the Humanitarian Reset

Session recordings are available below:

Bridging digital divides: centring local leadership in humanitarian AI development

Speakers: Musaab Abdalhadi – Save the Children in Sudan, Ali Al Mokdad – independent, Lucy Hall – HLA, Gülsüm Özkaya – IHH, Ka Man Parkinson – HLA

AI is rapidly shaping humanitarian work, but local actors are still largely excluded from how these technologies are designed and governed, risking deeper inequalities.

This session explores how AI can become a driver of localisation itself by embedding inclusion, ethics, and collaboration into humanitarian systems. Drawing on new research and frameworks, panellists discuss practical ways to build locally led AI ecosystems and reimagine humanitarian action as co-created, context-driven, and collectively intelligent.

Watch the recording and access the transcript below.

Session transcript

This transcript has been generated using automated tools and has been lightly edited for clarity and readability. The transcript has been reviewed but minor errors or omissions may remain.

Ka Man: Hello, everyone, and welcome to today’s session, brought to you as part of Humanitarian Networks and Partnerships Week, HNPW. My name is Ka Man Parkinson, I’m Communications and Marketing Lead at the Humanitarian Leadership Academy, and I’m absolutely delighted to welcome you to this session today, Bridging Digital Divides, Centering Local Leadership in Humanitarian AI Development.

This session is taking place as part of the H2H network, so we’re delighted to be joining this virtual forum as part of the H2H network today. The Humanitarian Leadership Academy is part of Save the Children, and our mission is to accelerate the movements for locally-led humanitarian action.

Today’s session is expected to last 75 minutes, with around an hour for the main content, and around 15 minutes for your questions. So, if you have any questions, please submit those using the Zoom Q&A.

The session will begin with welcome introductions, followed by a short presentation from myself and Lucy to contextualise this session in the HLA’s work. We’ll then move into local leadership perspectives with our panellists, followed by a panel discussion, and then we’ll move on to audience questions.

I’m really delighted to be joined by some incredible panellists today, and I’m really grateful to them for taking the time to be here and join us for this important conversation, particularly in the very challenging context in which we’re all operating. So, I’m delighted to welcome Musaab Abdalhadi from Save the Children in Sudan, Ali Al Mokdad, a senior independent leader, Lucy Hall, my colleague from the HLA, and Gülsüm Özkaya from Children of the Earth Association in Turkey. I’d now like to invite each speaker to just briefly introduce themselves to you, and to say a few words about why this conversation matters to them. Over to you, Musaab.

Musaab: Thanks so much, Ka Man, and yeah, everyone. Good morning and good afternoon. My name is Musaab Abdalhadi, and I work with Save the Children as a GCT specialist based in Sudan. I work closely with community-based organisations and mutual aid groups operating in conflict-affected and hard-to-reach urban areas. So, basically, this conversation is important to me, because communities on the front line of crisis are increasingly becoming data providers for humanitarian AI systems, but not really decision makers in how those systems are designed, governed, or used.

Ali: Thank you so much for hosting us, and thank you to the participants for joining. In the humanitarian and development sector, I started as national staff, then I took international assignments. I was stationed in East Africa and Asia, focusing mainly on programme and operations management. And from there, I moved to headquarters roles, where I covered policy, processes, and tools. And I spent the past years focusing mainly on redesigning and reimagining policies, governance, as well as humanitarian diplomacy, where I engage with impact investors, policymakers, and economists.

From my perspective, this conversation is extremely important. I could write a book about it. But in a simple way, I think AI tools and AI in general could be either the best or the worst thing that could ever happen to humanity and to what we do. And localising AI could take us to the best case scenario, and I think that’s one of the key things that we are trying to cover today. Of course, there are many things to cover under that umbrella, but it’s very important to focus on localisation when it comes to AI. Thank you.

Ka Man: Thank you, Ali. Gülsüm, over to you.

Gülsüm: Hi, everyone. Welcome to the session. I’m Gülsüm Özkaya, I’m representing Children of Earth Association here. It’s an Istanbul-based local organisation. I’m the board member responsible for communication, and also thank you to HLA for hosting us here.

Well, why this conversation matters for me is, actually, it’s my research topic, basically. I’m working on AI-generated visuals in humanitarian communication, from the perspective of crisis-affected people. And I’m directly working on how it’s important to be AI-aware in local leadership. So, hope it’s a meaningful discussion for everyone.

Ka Man: Thank you, Gülsüm. I really appreciate you taking the time to join us today. And over to you, Lucy, would you like to introduce yourself and tell us why this conversation matters to you?

Lucy: Hi everyone, lovely to be here, thanks for having me. My name is Lucy, I am the Research and MEAL Lead at the Humanitarian Leadership Academy. This topic is really important. I’ve been researching digital tools and transformation, and what that means for locally-led humanitarian action for a number of years.

And I really believe that AI has huge potential to really transform how the humanitarian system operates, how organisations can really transform and really become almost a lot stronger than we already are. It’s an amazing opportunity, but there’s a lot of risks involved, so I’m looking forward to exploring all of those themes in this conversation today. Thank you, Ka Man.

Ka Man: Thank you very much, Lucy, and thank you to our incredible panellists.

I’m just going to spend the next 5 minutes or so to just set the scene for the conversation, and to explain why the HLA is hosting this conversation today. So, in May-June 2025, the HLA conducted, in partnership with Data Friendly Space, the world’s first study into how humanitarians are actually using AI in practice.

And we were actually astounded by the engagement with this survey. We had 2,500 responses from 144 different countries. And what that showed us is that AI, or generative AI, such as ChatGPT and Microsoft Copilot, etc., has really driven individual experimentation and creative applications of AI across the sector. So AI use is not being clustered in particular areas, but really is being embedded across the sector at large, although that is uneven.

From this survey, we wrote a report which documented the patterns in detail, and we included some use cases from Ukraine, Afghanistan, and Lebanon, led by local leaders, which showed this creative and resourceful application of AI. So you can read those in more detail, and I think Lucy will be able to drop the link in the chat if you want to have a look at that information.

And then in January, as follow-up to that, we conducted a light-touch pulse survey to see if there had been any shifts in AI adoption. And what we found, when we’re looking at the local level, is that local organisations continue to have very strong interest and engagement in AI, again, very much driven by generative AI tools, although there is growing interest in how to scale these efforts and integrate those across operations and across organisations.

We can see that adoption patterns are generally similar between local organisations and other organisations as a whole, although a couple of differences that we’ve seen so far are that local actors are using AI tools daily at a slightly higher frequency than INGOs, and that there’s a lower presence of formal AI policies. So, I would say that local efforts are very much being driven by adaptation, creative application, problem solving. It may not be as formalised as compared to the INGO sector and so on, but that’s led to some really interesting and promising use cases as well.

So, from the follow-up survey, I interviewed one participant recently, Dr. Ivan Toga from Uganda, and we’ll be releasing this interview next week, so please keep an eye out on HLA channels for this. And Dr. Ivan Toga speaks very enthusiastically about the potential and harnessing the power of AI in specific contexts and use cases, such as family reunification, and helping with satellite imagery, etc. So he’s got very strong views on this, including the need for localisation, for the sectors to come together, for donors to understand the context in which he’s operating. So he’s speaking to me from Rhino Refugee Camp in Uganda.

Another quote was from a survey participant in Cameroon. So this response was actually in French, so I’ve just translated here. And basically, this characterises this absolute drive and desire to try and harness the potential of AI, even in low connectivity settings. So you can see the thematic link to what Dr. Ivan said in his statements.

And then a middle manager working in education in the Philippines talks about how AI is not just a tech thing anymore, that it’s more widely embedded than that. And they’re very excited about the potential of AI freeing up humanitarians’ time from tasks to actually get more time working with communities. So that is their aspiration.

And then finally, this leader from Nigeria speaks very clearly about access gaps, how, again, there is so much potential, especially to amplify youth and grassroots actors, but highlights, again, that digital divide that needs to be bridged about particular marginalised groups that need to be brought into the conversation and development.

So, I just brought that in as a bit of scene setting to explain the context and rationale for this conversation today. And I’m now going to hand you over to Lucy, who’s going to speak to the concept of AI readiness and localisation. Over to you, Lucy.

Lucy: Thanks, Ka Man. I think this is a really interesting point, because those quotes really ground the experience in lived realities, because AI can feel very alien, very technology-led, and a lot of the participants that we’ve spoken to have talked about how distant it can feel, and I think it’s a really important challenge to acknowledge.

Especially when I think about that quote from Cameroon, where we talk about change, and the pace of change, and how hard it can be when we don’t always have basic infrastructure such as internet access.

And that’s why a lot of our research has concluded that AI isn’t just about developing technology and developing tools. It’s about a whole host of behaviours and foundations that need to be in place to really enable a locally-led humanitarian world of the future, and of today, because AI is around, and it’s probably not going to go anywhere anytime soon.

So we’ve been, over the last 6 months, discussing around what it means to become AI-ready. And a lot of that centres on different elements of digital transformation. It means we need to understand what AI is and how it is used in humanitarian action. Research helps with that massively. But again, in order to be locally led, it has to be local research that really leads the way. It can’t be dictated by the Global North or technology companies. It has to come from communities that are working with the tools.

All of these different elements are all about being locally led, convening, bringing people together, learning what the challenges are from one another, learning what the opportunities are, and sharing knowledge, sharing skills, sharing experiences.

Ensuring that there’s good leadership, governance, and standards. Again, how can we make sure that AI is safe? That’s one of the key things that our research has been finding over the last year. We are amazing as a humanitarian community when we consider the needs of the populations that we work with, the safety of people is paramount and our number one priority. So having strong leadership to govern AI, to use AI, to design AI, and ensuring that there’s really good standards in place.

Making collective action and working together to drive change. AI is a transformation process in the humanitarian system, because a lot of people are using AI tools, but as Ka Man mentioned, the policy and governance uptake is low. If we don’t work together, and we don’t advocate for locally-led AI, it won’t happen.

Innovations. Technology is always going to be a key part of AI usage. We are always going to continue to find ways of making better outcomes for communities that we work with, to make sure that they are safe, protected, and have access to chances in life.

And learning literacy, shared knowledge, and having that common language and understanding between one another is a super important part of being AI-ready.

This interconnected approach, when combined, can really enable a transformative approach in how organisations connect to one another, how we become more locally led, how we become able to amplify expertise and leadership throughout the humanitarian community. And by taking this approach, it will really allow this transformation to take place in a way that is grounded in local experience, local leadership, and realities.

And I think this is a real opportunity moment. We’ve talked in the HLA with other colleagues about being at a tipping point. And I think by adopting locally-led design principles for digital and AI transformation, we’ll begin to see a shift, hopefully in the right direction, towards a much more equitable humanitarian system as knowledge flows in different ways.

It’s very contextual. What we’re hoping to do through this conversation is to ground it in lived experiences from our wonderful panellists. So, with this in mind, I’d love to bring in Musaab to talk to his leadership in AI space. So, over to you, Musaab. I’m looking forward to hearing from you.

Musaab: Yeah, thanks, Lucy. So, basically, from my perspective and local leadership about AI, when we talk about local leadership in humanitarian AI, we often focus on access to technology, but from my perspective, the real issue is power, not technology.

So, local actors already generate knowledge every day through informal networks, community assessment, and adaptive responses that international systems often struggle to capture. Yet AI tools are frequently built externally, trained on incomplete data sets, and deployed into contexts they do not fully understand. So, this creates three risks, I would say: AI reinforcing existing humanitarian power imbalances, local knowledge being extracted without ownership, and definitely decision-making moving farther away from affected communities.

So bridging the digital divide, therefore, is not only about connectivity or skills, it means shifting from local partnership to local authority, where communities help define problems, shape data sets, and influence how AI informs humanitarian decisions. So, basically, that’s the AI from my perspective, or the local leadership. Yeah, that’s it, over to you.

Lucy: Thanks so much, Musaab. I’m sorry, I was struggling with my technology there. I honestly couldn’t agree more, and I believe that AI poses a huge risk of being very extractive. It’s something that I feel very uncomfortable with, and I think by calling it out early and making sure that we are creating much more equitable resources, that is the only way forward, really.

I’d now like to bring in yourself, Gülsüm, to hear about your leadership perspectives in this space.

Gülsüm: Well, actually, especially thinking about the global and the local actors, in the sector we’re always talking about shifting power from global to local actors, but I think that being a global actor, or a global organisation, is no longer enough to adapt to today’s world. Because our local organisation, YerChat, fits into today’s world differently, I think, and maybe the reason is that we’re mostly made up of Gen Z. We are all young people.

So, this allows us to create impact differently than traditional organisations, whether it’s how we engage our donors, or how we protect the children in the media. So, being digitally fluent and AI-aware is just, actually, I think the main divide right now, rather than being global or local.

So the local organisation that masters the use of AI tools actually can access the opportunities, funding, and maybe create an impact as effectively as the global giants do. So, if you use AI correctly, maybe it might be the ultimate bridge in the sector.

But, my point here is actually the ‘correct’ part. I mean, when I say AI used correctly, whose correct is this? When we talk about correct, is HLA’s correct, HS correct, or yours correct, everyone’s correct, it might be different. So, at this point, actually, my question was whose perspective must be included. And I think my answer was, the beneficiaries, the people affected by crisis.

So, well, that’s why, actually, my research focuses on AI-generated visuals in humanitarian communication from the crisis-affected people’s perspectives. So, when I see the people’s perspectives, there’s a significant gap here. Their perspectives and the humanitarian communicator’s perspective are totally different in some points. For example, in some points, I think that the communication practitioners just think that it’s a protective thing for practice, but they might see it very differently.

So, I think we need a shared environment for creating AI standards in our sector. Otherwise, if we cannot do this, if AI policies and standards are developed under a global monopoly, probably they will fail in the local context. So, local leaders’ AI awareness is a key point here. Otherwise, it will probably lead to digital colonialism on the beneficiaries and the crisis-affected people, I think. It’s not a technical failure.

So, when we see AI ethics and standards, I think that we need a table for all who have digital fluency. It’s not that they are based on global actors, global leaders, or local leaders, but who is AI-aware, or who has digital fluency. I think that will empower local leadership in this case.

Lucy: Thank you so much, Gülsüm. I think that’s such an interesting point. It’s not about global versus local, it’s about being digitally confident and not so digitally confident, and how AI works.

I’ve got so many questions in my head based on just those couple of statements alone, but it is now time for our panel discussion, where we’re looking forward to bringing Ali in. I will be building on some of those points raised by Musaab and Gülsüm, but Ali, I kind of first wanted to come to you.

Because I think the point raised here is the risk that if AI governance and design continue to sit in global spaces, we’re really going to risk reinforcing existing inequalities. That came through loud and clear from both Musaab and Gülsüm. So, in your experience and your view, Ali, what would inclusive AI reform look like? And who needs decision-making power, not just to be sat at the table and consulted?

Ali: I was actually writing super notes during the conversation. But before I answer the question, let me say one thing, that outside humanitarian and the development sector is a bit different than inside. Because when I look at social enterprise, when I look at AI diplomacy, the design and the development of models, it is progressing in the Global South, and I want to acknowledge that India, Nigeria, Kenya, Lebanon, Gulf countries including Saudi Arabia, United Arab Emirates, and Qatar, they made progress when it comes to AI, the social enterprise, AI-native companies, AI diplomacy, investments in data centres and the infrastructure, as well as designing AI-native models, because now we have large language models in Arabic, we have in Swahili, and a few others. So I want to acknowledge that outside, the Global South is part of the design and the deployment. Now, in the sector, it’s a bit different story.

And from where I’m sitting, and what I am noticing in my conversations and the work that I have been doing at strategic level and operational level, we are kind of repeating the same mistake that we had with ERP systems and automation, and Power Apps and Power BI, which means designing at a global level, and then rolling out. And when we look at the challenges we faced there, it was mainly transformation and supply chain. And I usually say that technology is not the problem, transformation is.

So, how to address, or how to deal with that. I think it’s very important when looking at governance to look at it from inclusive and intelligent governance. And that means mainly focusing on user-based design, so thinking from the perspective of the end user, looking at the access, and what I call the AI privilege, who will access the tools, who will access the data, and a few other things, looking at the skills, and here I’m not talking about capacity as training, I’m talking about capacity as infrastructure, and also the stakeholder engagement and the AI transparency when it comes to the algorithms. But in a simple way, I think to have that system in place, and to focus more at the operational level, I think it all starts with the experimentation.

Local organisations are already experimenting, and the survey that you did is showing the results that local organisations are progressing there. And when I look at the local organisations or the local leaders that I have been collaborating and working with for the past four years, AI empowered them in different ways. Yes, of course, they had some challenges related to the internet connection, to the access. But the moment they started learning how to leverage those tools was also the same moment they started making progress. So, number one is the experimentation stage.

Number two, I think it’s very important to look at the infrastructure, because if we are, as a sector, wanting to leverage this technology, it’s very important to look at how are we protecting data. How are we working on the supply chain and the deployment? How are we scaling those digital tools? Who’s going to be a part of the infrastructure and the innovation part design? Where are they? Their location, their access to those tools and services and internet.

And then building on the innovation, or the experimenting, the infrastructure, here comes the ecosystem. Because it’s very important to look at this as a full ecosystem, especially when we are looking at full governance. And the ecosystem means the infrastructure we built, the teams that we have, the workflow, the roles and responsibilities, the data, and the tools that we are using, how they are going to interact with each other, how those tools are going to affect roles, how they are going to affect the relation between, let’s say, UN agencies, INGOs, NGOs, civil society organisations, the community, and all that.

And from the ecosystem going to the partnership, I don’t think any organisation alone could do that. We need to invest in the partnership. And here, I’m talking about the humanitarian development actors amongst ourselves, the relations with the private sector and the economists, the relations and the connection with impact investors, the connection with policymakers and the state, and the other actors, because it’s very important to invest in this partnership. And of course, be open about the lessons and the failures and the progress that we are making within this arena. But I think in general, when I look at the full picture and I zoom out at macro level, it’s very important to acknowledge that local leaders are already using and leveraging those tools. We need to keep that innovation and that way of being open about experimenting, and try to see how we build that bridge between local actors and international actors, and as I mentioned, user-based design.

Lucy: Amazing. Thank you so much, Ali. I think that’s kind of our guiding star throughout all of this work: how do we keep the momentum going and not constrain local leaders in adopting AI? Because it is already happening, and it has probably been happening for years, even if we don’t have the evidence to substantiate that.

And I think that’s a challenge that a lot of people are grappling with, because there feels like, as you say, there’s a bit of a lag, and the humanitarian sector is rolling out AI in a way that is traditional, and has not always worked, so I think that’s a really pertinent point when we talk about transformation.

And just very briefly, I’d be quite interested in hearing your thoughts on this. Because there’s been a lot of talk about governance standards and policies. Is there a risk that that would actually then impede that uptake and that innovation in local organisations? Or is that something that you see would enhance usage of AI?

Ali: Very good question. I think it depends on the location, because I could see some organisations who are based in Europe, they are a bit struggling in navigating also the rules and the regulations, and how to adapt to that, so it’s a bit limited arena for experimenting. Those in the US, they address it in a different way. Those in the Global South, in a different way, and I have to admit that many of those leaders and organisations in the Global South, they have a bit more flexibility and way of experimenting. Let me give an example.

There was this local organisation in Nigeria. They wanted to… they have their own strategy, they wanted to integrate AI within the strategy. So we had several conversations, and we looked at what they want to achieve from a programmatic perspective. That was the foundation of AI integration. How could we reach more people and support more communities? And from there, how AI could speed that process, or scale it, or make it efficient.

So what we did after doing the discussions and the mapping and all that, we found that the best way was doing several trainings for team leads and the management, as well as the staff on what not to do. So, the things that we shouldn’t, from data perspective and the ethical use of AI, we shouldn’t do, and we should keep aside. And then everything else is open for experimenting and use.

And I spoke to that organisation when we did the periodic review after 6 months. It changed how they worked. It changed the workflow, it changed their relation to each other, it changed their relation with the donor and the other organisations, their positioning, and different things. I want to say they moved fast and broke things: they took those tools, taking into account what not to do, and they started experimenting. Of course, there are risks.

But I want to say, let’s not overestimate the risks and underestimate the opportunities. Local organisations in Nigeria, Lebanon, Syria, Sudan, Kenya and Rwanda are giving us very good examples of how to leverage and use those tools. I think at the end of the day, it’s about not standing in the way of local organisations and local leaders. They have a clear mindset about what they want to do, they already have strong access, they are part of those communities, and suddenly these AI tools appeared as a resilience tool, helping them navigate that complexity in their engagement with donors and other organisations. So I think we have so much potential there. The most important thing is that we raise awareness of the things we shouldn’t do, and of the risks, and give them space – not stand in their way.

Lucy: I couldn’t agree more with everything that you’ve just said there. I was gonna bring in Gülsüm at this point, but actually, Musaab, I’d like to come to you, if that’s okay, because I think your work, what I know you’ve done in collaboration with Ali, links quite nicely, potentially, to what was just being spoken about there.

So, Musaab, I wondered if you could actually just talk us through your experience in not just learning about AI, but shaping AI in your work, and what your experience has been to date, and how you’re now using it, and how you kind of led the way in Sudan in particular.

Musaab: So, basically, I did the session in collaboration with Ali and the Humanitarian Leadership Academy, and there were many local responders here from Sudan working with emergency response rooms, or CBOs, community-based organisations. For myself, first, it helped me to demystify AI. In many humanitarian spaces, AI is either over-hyped or feared. This session clarified what AI actually is – pattern recognition systems trained on data – and where its limits are. That matters when you are responsible for a programming decision; I’m working as a technical specialist, so all of this matters.

And secondly, it shifted how I think about power. AI is not neutral. It reflects the priorities of those who design and fund it. For someone working closely with local and mutual aid groups, that realisation is crucial. If local actors are not involved upstream, the tools may optimise for efficiency over dignity, or scale over context. So it made me more knowledgeable about AI and its use, more intentional, and more aware of governance gaps in my role. Yes.

Lucy: Amazing, and I think that’s really great to hear. It does highlight the governance gaps, and I guess the question… and again, this is… I’m being quite tricky with the three of you, because you’re such amazing panellists… what risks does that pose to your work, having that governance gap? Because when we think about governance and policy gaps, as humanitarians, I personally immediately think of standards and the principle of do no harm, because the risks are so high, and I don’t know if you’d have any reflections on that.

Musaab: So basically, in cash programming, I see three major risks. Cash systems require digital trails: mobile money reports, targeting databases, and in some contexts biometric registration – definitely in conflict-affected areas like Sudan. That data can be extremely sensitive, and it can put people at high risk if accessed by someone who is not authorised. AI increases the aggregation and analysis of this data, which increases exposure. So data exposure is one of the risks of AI.

The second thing: if AI models are trained in an incomplete way, they may systematically exclude certain groups – people in informal shelter, undocumented people, and minority communities. This causes bias in cash programming, or cash targeting. And bias in cash targeting is not just a technical flaw, it becomes a protection issue, definitely.

And the third thing is, if AI systems are developed or hosted by actors aligned with a specific government or private interests, communities may perceive cash assistance as political influence – as though they have to be on that side just to receive the assistance. Trust is central to neutrality. Once communities believe data may be shared or misused, participation drops and risk increases, definitely. So those are the major risks that I see in cash programming related to AI.

Lucy: Completely agree, and I think it’s something that a lot of work is being done to address, but as you say, it comes down to people, and how we use it, and how the data is managed. And I think that comes to your original point, that local leaders in humanitarian action aren’t just users of AI, we’re also producing AI and informing future iterations. So it can’t be one-sided.

And I could talk for hours on this with you, but I do want to bring in Gülsüm here, because I know that you’ve done an awful lot of research, and you’ve got a lot of experience with people that are using AI, you use AI, and how it interacts in your daily life and within your work.

And I think we’ve talked a lot about leadership, and that’s quite an abstract concept, potentially, and you mentioned things around, you know, Gen Z are leading the way in this, because they are digital natives. So, what I’d really be interested in hearing from you is, where do you see people stepping into leadership in these ways? And who is engaging consistently in that network? Is it purely young people, or is it a range of different voices?

Gülsüm: Well, actually, in the humanitarian sector – and not only the humanitarian sector, by the way – the sense of leadership has really been related to experience. But with the coming of AI, I think leadership is based on expertise rather than experience. So if someone from another generation, or a very young person, has this expertise in AI – not just using AI tools, but also being able to critique how they are used and the risks they carry – they can bring a new kind of leadership perspective, I think.

Because there are lots of points related to AI that are not discussed yet. For example, in the media specifically – you mentioned the risk issue, Musaab – the most tangible risk we are facing is actually the use of AI visuals without permission, or their use for donation appeals by humanitarian practitioners, which directly affects the trust of individual donors.

So, in this respect, what we are mostly facing is AI shadow work. If there is no leadership with AI awareness, AI shadow work might be our biggest risk, I think. So at this point, I can say that expertise in AI, as a humanitarian leader, is actually the most effective way to protect trust – both between partners, and between individuals and NGOs.

Lucy: And I want to pick up on that point, because Musaab also mentioned trust, and Ali, I think you might have mentioned it as well. And what is it about trust that you think is important in AI and in relationships with people?

Gülsüm: Well, when I conducted my research, I did lots of interviews – in my previous research with Syrian people, and for my thesis with Sudanese people, with crisis-affected people – and the most common thing they brought to the interviews was this donation issue: humanitarian communicators using AI for donation appeals, which directly affects their trust in the organisation.

And I also realised something in my interviews: if a person does not engage with AI in their daily life, they are mostly more hostile towards the use of AI in the humanitarian sector, and their trust in NGOs is more fragile when NGOs use AI. But if these individuals use AI in their daily lives, in their work and so on, they feel that it’s a common thing, a normal thing, and that it can also be used by NGOs. So actually, I think the trust issue depends on people’s own relationship with AI.

Lucy: Yeah, I really agree, and we’ve seen that play out throughout our research journey as well, that there has to be organisational trust to use AI at work. That’s been a really interesting trend that we’ve not explored explicitly. I know the research team are keen to explore it, but the fact that you say it is about that individual relationship with AI, I think is a really…

Gülsüm: I think so. I think it was one of my outcomes when I just conducted my research, let’s say.

Lucy: And brilliant, and I think I’m really glad that you’ve been able to see that and articulate it so clearly. There’s so many questions that I want to ask all of you, but I’m conscious of time, and want to make sure that we have a chance to open the floor up to our audience. I’d love to hear from each of you individually now, and just ask, what are your aspirations for humanitarian AI in 2026? What do you think you would like to see, and what do you think might happen as we move through what is proving to be yet another very difficult year?

And I’ll open the floor to whoever takes that first.

Ali: Shall I go first?

Lucy: Great, yes.

Ali: Okay, so let’s imagine the localisation agenda is a garden. Then technology is the rain. But without gardeners, without good soil, without clear boundaries, this rain is going to become a flood. What does that mean? It means that technology – AI, automation, and so on – comes with opportunities. But to get the best out of them, we must invest in innovation, infrastructure, ecosystems and partnerships, be open about the challenges and lessons learned, and work together at all stages to keep it based on ‘we, the people’. That is the main thing in what we do in the humanitarian and development sector. So build it based on people’s interests. That’s how I look at it.

Lucy: Wonderful visual image, and you’re absolutely right, it is everything. It is not about the rain. Musaab, would you like to come in if you’re still with us? I know you’re having bandwidth issues.

Musaab: Yes. I would like to comment. So, I would like to see AI designed with local actors at the table from the beginning, not as testers, not as data collectors, but as co-designers. If the systems are meant to serve crisis-affected communities, then those closest to the context should influence how problems are defined and how models are trained. So, basically that’s what I would like to see for AI from now to the future.

Lucy: Amazing. Thank you. And Gülsüm?

Gülsüm: Well, actually, I think standardisation will increase in general across all AI-related topics, and that the use of AI will also be included in the cluster system. And as for standards – Sphere and the other globally used standards – I think the use of AI and related issues will probably be on the table during this year.

And I think there is a huge responsibility on us, because we need to be critical of every standard suggested in relation to AI. If we don’t want the standards to be under a monopoly – a giant entity just deciding on them and handing them down to local organisations – then we have even more responsibility, both to be critical and to suggest new standards.

Maybe this applies to each local organisation; I have also suggested it in our own organisation. It’s a very local one, about 70 people, and even though we are very small scale, we are working on AI standards for the organisation’s policies, because if we want to continue using AI, we need them. We cannot just let everyone do whatever they think with AI. So I think that, for this year and maybe the coming year, whoever takes responsibility for AI will probably shape the future of AI in the sector.

Lucy: Yeah. It is a really pivotal moment. Fantastic. Like I say, there is so much more that I want to delve into with you, but I can see some questions coming in from the audience. So, Ka Man, I’m gonna hand over to you to start facilitating these questions. I’m looking forward to hearing from our panel still.

Ka Man: Hi, sorry, I had a technical issue coming off mute there. Thank you so much for such a thought-provoking discussion today. Thank you, Lucy, for facilitating that, and thank you to our panellists for sharing your candid insights. I really appreciate it. And thank you to our audience for your attentive listening and your great questions that you’ve posted in the Q&A, as well as the chat.

So, we have the next 20 minutes or so to put the questions to the panellists. And actually, I see a thematic grouping of the questions. There’s a lot of interest around accountability and governance, which really aligns with the gap in formal governance that we spoke to at the beginning of the session, and how local organisations, local actors are using their own creativity, drive, and ingenuity. But obviously, coordination and accountability is the next step.

So, I wanted to just put some questions that are for anyone to jump in and respond to. So, the first question links to this theme. Would there be a local accountability mechanism for AI-related harm? What does anyone think about that?

Ali: A quick note from my end. Please keep in mind that inclusive and intelligent governance is already part of the Sustainable Development Goals for 2030. So all the Sustainable Development Goals, they have indicators related to inclusive and intelligent governance, and intelligent governance includes AI. This is number one.

Number two, several countries are rolling out AI rules and regulations, so there are mechanisms to look into AI design, deployment, and a few other things, including the EU AI Act and the executive order in the US. And I am aware of several countries working on AI privilege rules and regulations, where they look at who has access to the data, as well as the principle of least privilege from an ecosystem perspective. This is number two.

And also keep in mind that it’s not fully clear for us as a society, because development is accelerating and happening super fast. Just two years ago, we were at the large language model stage, then we moved to reasoning, and now agentic AI. So we are not fully sure how it’s moving – the speed, the progress, what it will look like. I’m a bit worried that if we put additional rules and regulations in place, it might cause damage or harm. This is point A.

Point B, please keep in mind that the examples that we have within the humanitarian development sectors, they are still at the large language model stage, a bit of automation, tiny bit of agentic AI. So, until now, the main issue for us is data-related, as well as cyber attacks. This is going to grow in the future. So, my recommendation, or my suggestion, would be instead of jumping into putting rules and regulations within our governance process, let’s map the different scenarios, let’s plan based on those scenarios, let’s look at the integration of those different tools and all that, and focus on the transformation. If we focus on the transformation element, we will reduce the supply chain challenges, cyber security challenges, as well as reducing harm.

Ka Man: Thank you, Ali. Linking to this, I have a question that was received in French, which I’ll read in English: the use of AI can be costly, and there is often a lack of clear regulatory frameworks for its use. What’s your opinion on this, please?

So, Gülsüm, you spoke optimistically about sector mechanisms coming together, clusters and so on, and you talked about Sphere standards. Do you think that we can make progress in this space with regards to AI in the coming period? What’s your take on that?

Gülsüm: Well, I think so, because right now – and I’m speaking based on Turkey, by the way – I’m seeing a huge tendency among local organisation staff to demand standards training, especially for Sphere. So it gives me a kind of hope, let’s say.

And the staff, the humanitarian workers, are not just accepting what was designed before them – ‘we have adopted this, and we will do this’, and so on. They are also critical, and want to be part of the steps involved. So I think that as AI spreads in humanitarian work – especially, let’s say, in the media – practitioners will criticise it more and demand a standard for it.

Because they will probably be criticising each other’s work as well. I recently saw an AI-generated visual used by an NGO for funding reasons, for a crowdfunding campaign, and I asked them about it. They questioned why it should matter, and I gave my perception. So there is a common, shared environment for exchanging these ideas, and I think practitioners also push each other to use AI to standards, even if they’re not named as standards – they push each other to use it in the way that has already been agreed. So that’s why I’m in a more optimistic position, let’s say.

Ka Man: Thank you, Gülsüm, I really appreciate that perspective. Ali, if I could bring you in very briefly. We’ve talked about collective action around common standards; if we’re looking at more coercive pressure, so to speak, from a governance and regulation perspective – is there anything around the EU AI Act, for example, that you think might be pertinent to highlight here?

Ali: I think the opportunities that we have here are the initiatives happening within different alliances at the global level when it comes to the implementation of their programmes or operations or projects, because there are already several initiatives, joint country programmes or joint operations. We have ACT Alliance, we have several networks and platforms that are bringing organisations together, so that brings many opportunities. This is one.

The EU AI Act is going to bring the element of governance to different organisations, and international organisations with an office in the EU must start adapting their processes and communicating their workflows, as well as what they are doing in finance, human resources, monitoring and evaluation, ERP systems, and other areas. In general, I think we have several opportunities coming. We could leverage those rules and regulations for speed, efficiency, and reducing bureaucracy in the sector.

We could also take advantage of the alliances, joint country programmes, joint country operations. We could build on what’s happening within Sphere, the Inter-Agency Standing Committee guidelines, the NGO forums at country level, and all those different infrastructures that we already have in place. So, I would suggest, instead of investing in creating a new thing, let’s redesign and reimagine what we already have, so that we don’t reinvent the wheel, we don’t invest in something new, and go through the rollout and all that. We have several initiatives, they already have a certain level of trust and certain level of access, we could invest in them and leverage those different opportunities.

Ka Man: Thank you, Ali. Really appreciate your insights there. So next, I’d like to put a question to Musaab, if that’s okay, and it’s a question from Maria. So, she’s building on something that you talked about in this session. So, she asks, please, would you be able to give a specific and practical example of how AI use may cause bias in targeting in your context?

Musaab: Okay, good question, actually. So, imagine an AI model trained on historical beneficiary data – I’ll give you an example from Sudan. Imagine a model trained on historical beneficiary data to predict which households are most vulnerable and should receive cash assistance. The model may use indicators like registered displacement status, formal camp residence, household size, documented income loss, or mobile money transaction history. So, if you can follow me on this.

So, but in Sudan right now, many of the most vulnerable people are not formally registered in the system as displaced. So, some are staying with host families. Others move frequently between neighbourhoods due to insecurity. Some women-headed households avoid registration because of protection risks. Informal workers may not have digital transaction histories.

So, if the model is trained mostly on formal camp data or structured registration datasets, it will learn patterns from those populations. It may systematically prioritise households that resemble previously registered beneficiaries and unintentionally exclude undocumented urban displaced people or marginalised groups who do not appear clearly in the data or the registration system – whether by their displacement status or the other indicators I’ve just mentioned.

So, that is algorithmic bias, because the data reflect structural gaps. So in conflict contexts like Sudan, exclusion from cash is not just an administrative issue, it can deepen vulnerability, create tension between communities and undermine trust. So basically this kind of practical example might happen when we use AI. So that’s why human validation and local knowledge must remain central when AI is used in targeting, especially in cash programming, and I would say for any targeting. Yeah.
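Musaab’s example can be sketched concretely. The snippet below is a toy illustration with entirely hypothetical, simplified data – not any agency’s actual model: a naive similarity-based scorer “trained” only on formally registered camp households ends up ranking an equally vulnerable but unregistered household lower, because registration, camp residence, and a mobile money trail act as proxies for vulnerability.

```python
# Toy illustration of algorithmic bias in cash targeting.
# All data and indicator names here are hypothetical and simplified.

# Historical training data: only households that entered the formal
# registration system. Each record is
# (registered, camp_resident, mobile_money_history, assisted_as_vulnerable).
TRAINING = [
    (1, 1, 1, 1),
    (1, 1, 0, 1),
    (1, 0, 1, 0),
    (1, 1, 1, 1),
]

def vulnerability_score(registered, camp_resident, mobile_money):
    """Naive 'model': score a household by its average similarity to
    past assisted (vulnerable) households on the available indicators."""
    positives = [t for t in TRAINING if t[3] == 1]
    score = 0.0
    for r, c, m, _ in positives:
        matches = (r == registered) + (c == camp_resident) + (m == mobile_money)
        score += matches / 3
    return score / len(positives)

# An unregistered, equally vulnerable household staying with a host
# family, with no mobile money trail:
score_unregistered = vulnerability_score(0, 0, 0)
# A registered camp household with a transaction history:
score_registered = vulnerability_score(1, 1, 1)

# The model ranks the registered household far higher, purely because
# the training data never contained unregistered vulnerable households.
print(score_registered > score_unregistered)  # → True
```

The remedy Musaab describes – human validation and local knowledge – amounts to never letting a score like this be the sole targeting criterion, and auditing which groups are absent from the training data.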

Ka Man: Thank you very much, Musaab, for sharing that tangible example. Linking to this, next week I’m going to be having a podcast conversation with regards to the role of blockchain with cash transfers and how this may fit into the broader humanitarian context and digital transformation, including AI. So, I’ll share the link with everybody, so keep an eye out on our channels for that, to get a bit more insight into how these pieces may connect.

So, I’d like to bring in a question from Dr. Ivan Toga, who was actually one of the interviewees who was featured at the start of this presentation. So, it’s linking to this sort of opacity of systems. So, Dr. Ivan asks, doesn’t building robust AI from a black box ignore its real impact on our local refugees and people needing mental health support? How do we address this gap?

Would anyone like to come in on that?

Ali: Maybe a quick thing from my side – that’s an excellent question. I think we have to build it the same way we build our programmes: from first principles, from community needs, doing the actual context analysis and all the other elements. But I want to mention that recently I saw several very good examples where AI tools were supporting medical care, educational provision, and a few other things in camps and remote areas. The way they were designed was a bit similar to the way we did that training in Sudan: we built it from the local language, from the context, from the community, from their needs, from how they work and interact together, and from the cultural elements.

And all those things, we built it from there. And then we started deploying or using those different tools. And to be totally transparent, I think if we don’t do that, we will not be able to achieve that digitalisation, or AI integration, or the intelligent and inclusive governance. We have to keep in mind the people that we are serving in those communities, and to do that, and not cause any harms or issues, and to avoid any risks, we have to build it from there, from them, not for them.

Ka Man: Thank you, I really appreciate your perspective on that. So next I’ll come to a question from James, who’s actually my colleague. Thank you, James, for asking this question. He asks, is it realistic that the owners of leading AI models, such as OpenAI and Anthropic, could be held responsible for supporting locally-led AI, or will the sector as a collective be responsible for developing best practices for integration, or even their own models? Does anyone have a view on that?

Ali: I could just say that Claude AI, they have a non-profit element, and OpenAI, they also have a non-profit element. I was part of several conversations with some actors, including Microsoft at some point, exploring that side, and asking them to give that perspective. NetHope is playing a key role in building that bridge between the private sector and non-profit actors.

I’m a bit worried that at scale, at large scale, we don’t have this, and it’s a bit challenging holding them accountable, but there are small initiatives, and if we build on them, I think we might get some results. I want to mention again that there are several language models that have similar capabilities in other countries and other regions, and they are native in that region. We have Groupa in Africa, we have Jais, we have Bharat in India, we have other language models, they are open source, they are native in the language, and many people at policy level there, and civil society, are engaging with them. But at global level, those OpenAI, or Claude AI, etc., they have small elements, but still not that large scale.

Ka Man: Thank you, Ali. From my perspective, I guess I wanted to make the point that collaboration with these actors – lobbying, so to speak, for the humanitarian sector’s collective interests – is pivotal and ongoing. That has to happen, but we obviously can’t pin our hopes on that movement alone. A lot of people I speak to in this space advocate, as you say, for open systems where people can build from the ground up using reusable components. So I think there’s a lot of interest around developing small language models – there’s been a lot of talk about this, as they can be more secure, used offline, and trained in specific languages and contexts. I think this is really exciting. From what I’m hearing, it’s very early days – I’ve not heard of any specific cases yet where people are deploying small language models – but personally, I think it’s something humanitarians should pay particular attention to.

So, unfortunately, our session has to come to a close shortly. I’d like to thank our incredible panellists for their candid insights and perspectives today, which are really invaluable, and I really do appreciate you taking the time to engage in this conversation today and to drive forward this discussion. I’d just like to take the closing minutes to just highlight the next sessions that we have coming up as part of the HNPW programme.

So, we have three more sessions happening: one is online only, and the other two are hybrid, Geneva and online, so everyone can access them. They’re taking place on the 5th, 10th and 12th of March. Please do sign up if you’re able to, and I will share the links in the post-event email.

So, thank you again for taking the time to join the session. We appreciate it. Thank you to H2H, thank you to the organisers of HNPW, and I’m wishing you all a good rest of day. Thank you.

The State of Learning and Development in the Nonprofit Sector

The Training Providers Forum – Groupe URD, Humanitarian Leadership Academy, Humentum, IECAH, INTRAC, NetHope, and RedR UK held this online session as part of Humanitarian Networks and Partnerships Week (HNPW).

Over the past year the global humanitarian and development sectors have been rocked by funding cuts on an unprecedented scale, whilst simultaneously being called to respond to escalating levels of need. This session specifically examined the impact that dramatic sector changes are having on provision of training, and Learning and Development for humanitarian and development actors.

This session is aimed at those working in L&D, HR, people and culture or in a leadership role in the humanitarian and development sectors.

Unsettling the status quo: The case for locally led humanitarian research

Speakers: Tamara Low – HLA, Maryana Zaviyska – Open Space Works Ukraine, Umut Güner – KAOS, Kai Hopkins – ELRHA

Despite strong rhetoric around localisation, humanitarian research is still largely controlled by well-resourced Western institutions, with local actors often sidelined into limited roles. This undermines the value of locally led research, which is typically more relevant, culturally grounded, and responsive to affected communities—especially critical amid shrinking sector funding.

This session explored the power shifts, funding changes, and norm-setting required to advance a genuinely locally led research agenda, drawing on insights from local research organisations, funders, and humanitarian leaders working to drive this change.

Watch the recording below

A Call to Action for Youth Leadership and the Future of Humanitarian Action

Speakers: Jennifer Dias – HLA, Maryana Zaviyska – Open Space Works Ukraine, Olha Shevchuk-Kliuzheva – Alliance UA CSO, Mercedes Garcia – Save the Children International, Van Anh Tranová – DEMDIS, and Huseyin Arslan – HLA

Young people are already playing critical roles in humanitarian response across the world, yet their leadership remains poorly defined, under-recognised, and weakly embedded in humanitarian systems. Often active as volunteers and innovators, youth face limited pathways to formal leadership and professional growth.

This panel explored how the sector can shift from ad-hoc youth engagement to genuine youth leadership, drawing on global research and lived experiences to identify practical, safe, and empowering pathways for youth to lead in humanitarian action.

Watch the recording below

If you have any questions, please contact info@humanitarian.academy


7 Questions for 7 Humanitarian Leaders

Nominate a humanitarian leader you would love to hear from!


More than 1.2 billion young people aged 15–24 make up 16% of the global population (United Nations). According to United Nations Volunteers, 33% of youth globally are engaged in volunteering through humanitarian and community action.

Across the world, young people are vital contributors to community response. They are active in civil society and often provide both formal and informal humanitarian support – especially during times of crisis. Yet their leadership, insight, and impact are too often overlooked.

At a time when, more than ever before, the future of humanitarian action sits in a quandary – due to limited financial resources among many other challenges – the HLA is choosing to amplify the voices of the future who are rising above these challenges and already acting now.

In line with the Humanitarian Leadership Academy’s commitment to supporting local leadership and connecting humanitarian actors, we are launching a new podcast series to spotlight young humanitarian leaders and the work they are leading in their communities.

7 Questions for 7 Humanitarian Leaders will feature seven thoughtful, in-depth conversations with seven guests nominated by you – members of the HLA global community. Together, we’ll explore their journeys as humanitarian responders – their motivations, challenges, inspirations, aspirations, and the realities of leading change from the frontlines.

Get involved!

This series has a dual purpose: to strengthen collective learning across the sector, and to recognise the contributions of young humanitarian actors whose work often goes unseen. By sharing their stories, we aim to increase visibility, appreciation, and access to opportunities that can positively shape the future of their work and the communities they serve.

We believe that inspiration leads to action and that motivated people inspire others in turn.

Is there a humanitarian leader you would love to hear from? Someone whose work has inspired you, or whose journey you’ve always wanted to learn more about? Perhaps there’s a question you’ve never had the chance to ask.

Nominate them via this form and help us amplify the voices of young leaders shaping humanitarian action.

As former UN Secretary-General Kofi Annan said: “You are never too young to lead, and never too old to learn.”

Let’s learn from young leaders, together.

For questions, contact info@humanitarian.academy or F.Okomo@savethechildren.org.uk


Artificial intelligence in the humanitarian sector: mapping current practice and future potential

In August 2025, the Humanitarian Leadership Academy and Data Friendly Space launched a joint report on artificial intelligence (AI) in the humanitarian sector, representing the first global baseline study into humanitarian AI adoption. This study continued in January 2026 through a pulse survey to track shifts in adoption and attitudes towards AI in humanitarian work.

Drawing on insights from 2,539 survey respondents across 144 countries and territories, coupled with deep-dive interviews, we created a foundational study of AI adoption across the humanitarian sector, together with actionable insights and takeaways. As the first comprehensive baseline study of AI in humanitarian work, this research provides essential insights for practitioners, leaders, partners, funders, and collaborators navigating AI adoption and digital transformation decisions.

The team has built on the foundational research through a follow-up pulse survey conducted in January 2026, generating 1,729 responses from more than 120 countries.

Explore the global baseline study insights

Explore how humanitarians engaged with AI in 2025

Discover more

Tracking changes in 2026

Pulse check: how have things shifted by January 2026?

Find out more

A global baseline: explore the 2025 insights from 2,539 respondents from 144 countries and territories

Explore 2026 pulse survey insights from 1,729 respondents from 120+ countries

Our research approach: inclusive, community-led insights

Our research is designed with community participation at its core, recognising that meaningful insights about AI in humanitarian work must come from practitioners themselves. In 2025, three-quarters of respondents were from the Global South, bringing perspectives often underrepresented in sector research.

The foundational 2025 research was conducted over a three-month period, and the team worked at pace to deliver the report and supporting campaign: we believe timing is crucial given the rapid pace of AI development and the radical systemic changes the humanitarian sector faces.

This research represented the first attempt to map AI adoption at individual and organisational levels globally, complementing existing sector initiatives on ethical AI processes.

The global engagement in the research – from individuals with a broad range of attitudes and experiences of AI in humanitarian work – shows a strong appetite to learn and align on values and standards.

Coordinated efforts, underpinned by data and diverse voices like those in this research, will enable actors across the sector to move forward together. We share a collective ambition to find contextually-appropriate, ethical AI solutions that uplift humanitarian efforts supporting crisis-affected communities.

The research team is continuing to build out these insights to support and mobilise the sector to forge a path ahead in the adoption of AI technologies, including through a pulse survey in January 2026.

How you can engage in this research and follow-up content

Use the reports, briefing note and supporting user personas as discussion tools in your teams or organisations: the aim of this research and its launch events is to spark conversations across the sector and beyond. We encourage organisations to use the reports as learning and discussion tools, and to help shape their approaches to AI and digital transformation.

Webinars and online events

  • Report launch event (5 August 2025): We were delighted to host an online report launch event to highlight key findings and discuss their potential implications together with an expert panel (C. Douglas Smith, Dr Cornelia C. Walther and Ali Al Mokdad) and attendees. A recording, discussion transcript and slides are available on the event webpage.

  • Humanitarian AI: Lessons learned, trends and opportunities for 2026 (29 January 2026): We were pleased to host a webinar in partnership with NetHope together with an expert panel (Esther Grieder, Mercyleen Tanui, Michael Tjalve, Daniela Weber). A recording, discussion transcript and slides are available on the event webpage. In March 2026, we published a supporting quick-start guide on developing organisational AI readiness.

  • Beyond the hype: Ground truth on AI across the humanitarian sector (26 February 2026): In this community-centred session, we presented key insights from 1,729 humanitarians from 120+ countries who responded to the January 2026 Humanitarian AI pulse survey, offering a vital check-in on how practitioners are experiencing AI today. This was followed by discussion with an expert panel (Rebecca Chandiru, Liz Devine, Timi Olagunju, Nayid Orozco Bohorquez). A recording, discussion transcript and slides are available on the event webpage.

  • Bridging digital divides: centring local leadership in humanitarian AI development (3 March 2026): We were delighted to host this session as part of Humanitarian Networks and Partnerships Weeks (HNPW) 2026. In this session, together with an expert panel (Musaab Abdalhadi, Ali Al Mokdad and Gülsüm Özkaya), we discussed AI adoption patterns across the sector. Focusing on local actors, we explored themes such as governance, coordination, safety and co-creation – and whether AI can be a driver of localisation itself. A recording, discussion transcript and slides are available on the event webpage.


    We plan to host further webinars and podcasts to continue this conversation with a diverse range of speakers. Please contact us if you have suggestions for discussion themes and speakers.

Podcast collection

  • From the launch of the project (May 2025): Tune in to a Fresh Humanitarian Perspectives episode recorded at the start of this initiative to hear why the project co-leads decided to collaborate – and why they believe that every humanitarian’s voice matters in shaping AI for the sector.

  • Post-report launch (August 2025): Ka Man Parkinson and Madigan Johnson joined the Intelligence Explosion podcast as guests to discuss the research and potential implications in more detail. Listen

  • Deep dive six-part Humanitarian AI follow-up podcast series (September/October 2025): Featuring expert guests discussing the research themes and sharing global and African perspectives. Each episode includes a transcript and a glossary of the technical terms used in the conversation. View the podcast series webpage
  • Episode 1: How are humanitarians using AI: reflections on our community-centred research approach – with Lucy Hall, Ka Man Parkinson and Madigan Johnson. Listen
  • Episode 2: Bridging implementation gaps: from AI literacy to localisation – in conversation with Michael Tjalve. Listen
  • Episode 3: Addressing governance gaps: perspectives from Nigeria and beyond – in conversation with Timi Olagunju. Listen
  • Episode 4: Building inclusive AI: indigenous knowledge frameworks from Kenya and beyond – in conversation with Wakanyi Hoffman. Listen
  • Episode 5: Localising AI solutions: practitioner experiences from Rwanda – in conversation with Deogratius Kiggudde. Listen
  • Episode 6: Developing AI literacy: a matter of trust, critical thinking and localisation – in conversation with Meheret Takele Mandefro. Listen

  • Launch of pulse survey (January 2026): Ka Man Parkinson, Lucy Hall and Madigan Johnson from Data Friendly Space discuss the January 2026 pulse survey shaping the next phase of this humanitarian AI research. Listen

Articles

  • Research report companion article (July 2025): In this short article, research co-lead Lucy Hall outlines the history of artificial intelligence in humanitarianism. Read
  • Opinion piece (October 2025): Following his participation as an expert panellist at the launch event, Ali Al Mokdad shares his reflections on the evolving waves of AI and the deeper element of transformation that underpins them. Read
  • Reflection piece (November 2025): In this personal reflection piece, research co-lead Ka Man Parkinson charts a six-month research and learning journey into the adoption of artificial intelligence (AI) across the humanitarian sector. Read
  • Opinion piece (November 2025): Should we use AI-generated imagery in humanitarian communications? Spotlight on research by Gülsüm Özkaya. Read
  • Interview (March 2026): “We need an AI that is good for all of us” – a humanitarian perspective from Uganda. An interview with Ivan Toga, a January 2026 pulse survey participant. He shares his views on humanitarian work, climate negotiations, and the case for localised artificial intelligence. Read/listen

Disclaimer

This research has been produced to promote learning and dialogue and is not intended as prescriptive advice. Organisations should conduct their own assessments based on their specific contexts, requirements and risk tolerances. This initiative has been conducted independently by the HLA and DFS and has not received external funding.

Contact

Connect with us. We aim to convene, connect and collaborate for shared learning and discovery – we welcome your comments and suggestions as we continue this humanitarian AI journey together.

Humanitarian Leadership Academy
info@humanitarian.academy

Data Friendly Space
hello@datafriendlyspace.org

Newsletter sign up