
HLA at Humanitarian Networks and Partnerships Weeks | 2nd – 12th March 2026


Thank you for joining us at HNPW 2026. Over four sessions, we discussed youth leadership through the lens of crisis response in Ukraine, Peru and Türkiye; what can be done to drive change towards a locally led research agenda; and local leadership in humanitarian AI development.

We are grateful to collaborate with Start Network, H2H Network, ELRHA, Open Space Works Ukraine, KAOS, and the Training Providers Forum.

Session recordings are available below:

Bridging digital divides: centring local leadership in humanitarian AI development

Speakers: Musaab Abdalhadi – Save the Children in Sudan, Ali Al Mokdad – independent, Lucy Hall – HLA, Gülsüm Özkaya – IHH, Ka Man Parkinson – HLA

AI is rapidly shaping humanitarian work, but local actors are still largely excluded from how these technologies are designed and governed, risking deeper inequalities.

This session explores how AI can become a driver of localisation itself by embedding inclusion, ethics, and collaboration into humanitarian systems. Drawing on new research and frameworks, the panellists discuss practical ways to build locally led AI ecosystems and reimagine humanitarian action as co-created, context-driven, and collectively intelligent.

Watch the recording and access the transcript below.

Session transcript

This transcript has been generated using automated tools and has been lightly edited for clarity and readability. The transcript has been reviewed but minor errors or omissions may remain.

Ka Man: Hello, everyone, and welcome to today’s session, brought to you as part of Humanitarian Networks and Partnerships Week, HNPW. My name is Ka Man Parkinson, I’m Communications and Marketing Lead at the Humanitarian Leadership Academy, and I’m absolutely delighted to welcome you to this session today, Bridging Digital Divides, Centering Local Leadership in Humanitarian AI Development.

This session is taking place as part of the H2H Network's virtual forum, and we're delighted to be joining it today. The Humanitarian Leadership Academy is part of Save the Children, and our mission is to accelerate the movements for locally-led humanitarian action.

Today’s session is expected to last 75 minutes, with around an hour for the main content, and around 15 minutes for your questions. So, if you have any questions, please submit those using the Zoom Q&A.

The session will begin with welcome introductions, followed by a short presentation from myself and Lucy to contextualise this session in the HLA’s work. We’ll then move into local leadership perspectives with our panellists, followed by a panel discussion, and then we’ll move on to audience questions.

I’m really delighted to be joined by some incredible panellists today, and I’m really grateful to them for taking the time to be here and join us for this important conversation, particularly in the very challenging context in which we’re all operating. So, I’m delighted to welcome Musaab Abdalhadi from Save the Children in Sudan, Ali Al Mokdad, a senior independent leader, Lucy Hall, my colleague from the HLA, and Gülsüm Özkaya from Children of the Earth Association in Turkey. I’d now like to invite each speaker to just briefly introduce themselves to you, and to say a few words about why this conversation matters to them. Over to you, Musaab.

Musaab: Thanks so much, Ka Man, and good morning and good afternoon, everyone. My name is Musaab Abdalhadi, and I work with Save the Children as a GCT specialist based in Sudan. I work closely with community-based organisations and mutual aid groups operating in conflict-affected and hard-to-reach urban areas. So, basically, this conversation is important to me, because communities on the front line of crisis are increasingly becoming data providers for humanitarian AI systems, but not really decision makers in how those systems are designed, governed, or used.

Ali: Thank you so much for hosting us, and to the participants for joining. In the humanitarian and development sector, I started as national staff, then took international assignments. I was stationed in East Africa and Asia, focusing mainly on programme and operations management. From there, I moved to headquarters roles, where I covered policy, processes, and tools. And I've spent the past years focusing mainly on redesigning and reimagining policies, governance, as well as humanitarian diplomacy, where I engage with impact investors, policymakers, and economists.

From my perspective, this conversation is extremely important. I could write a book about it. But in a simple way, I think AI tools and AI in general could be either the best or the worst thing that could ever happen to humanity and to what we do. And localising AI could take us to the best case scenario, and I think that’s one of the key things that we are trying to cover today. Of course, there are many things to cover under that umbrella, but it’s very important to focus on localisation when it comes to AI. Thank you.

Ka Man: Thank you, Ali. Gülsüm, over to you.

Gülsüm: Hi, everyone. Welcome to the session. I’m Gülsüm Özkaya, I’m representing Children of Earth Association here. It’s an Istanbul-based local organisation. I’m the board member responsible for communication, and also thank you to HLA for hosting us here.

Well, why this conversation matters for me is, actually, it’s my research topic, basically. I’m working on AI-generated visuals in humanitarian communication, from the perspective of crisis-affected people. And I’m directly working on how it’s important to be AI-aware in local leadership. So, hope it’s a meaningful discussion for everyone.

Ka Man: Thank you, Gülsüm. I really appreciate you taking the time to join us today. And over to you, Lucy, would you like to introduce yourself and tell us why this conversation matters to you?

Lucy: Hi everyone, lovely to be here, thanks for having me. My name is Lucy, I am the Research and MEAL Lead at the Humanitarian Leadership Academy. This topic is really important. I’ve been researching digital tools and transformation, and what that means for locally-led humanitarian action for a number of years.

And I really believe that AI has huge potential to transform how the humanitarian system operates, and how organisations can transform and become much stronger than we already are. It's an amazing opportunity, but there are a lot of risks involved, so I'm looking forward to exploring all of those themes in this conversation today. Thank you, Ka Man.

Ka Man: Thank you very much, Lucy, and thank you to our incredible panellists.

I’m just going to spend the next five minutes or so setting the scene for the conversation, and explaining why the HLA is hosting it today. So, in May-June 2025, the HLA conducted, in partnership with Data Friendly Space, the world’s first study into how humanitarians are actually using AI in practice.

And we were actually astounded by the engagement with this survey. We had 2,500 responses from 144 different countries. And what that showed us is that AI, or generative AI, such as ChatGPT and Microsoft Copilot, etc., has really driven individual experimentation and creative applications of AI across the sector. So AI use is not being clustered in particular areas, but really is being embedded across the sector at large, although that is uneven.

From this survey, we wrote a report, which documented the patterns in detail, and we included some use cases from Ukraine, Afghanistan, and Lebanon, led by local leaders, which showed this creative and resourceful application of AI. So you can read those in more detail, and I think Lucy will be able to drop the link in the chat if you want to have a look at that information.

And then in January, as a follow-up to that, we conducted a light-touch pulse survey to see if there had been any shifts in AI adoption. And what we found, when we’re looking at the local level, is that local organisations continue to have very strong interest and engagement in AI, again, very much driven by generative AI tools, although there is growing interest in how to scale these efforts and integrate them across operations and across organisations.

We can see that adoption patterns are generally similar between local organisations and other organisations as a whole, although a couple of differences that we’ve seen so far is that local actors are using AI tools daily at a slightly higher frequency than INGOs. And also, there’s a lower presence of formal AI policies. So, I would say that local efforts are very much being driven by adaptation, creative application, and problem solving. It may not be as formalised as in the INGO sector and so on, but that’s led to some really interesting and promising use cases as well.

So, from the follow-up survey, I interviewed one participant recently, Dr. Ivan Toga from Uganda, and we’ll be releasing this interview next week, so please keep an eye out on HLA channels for this. And Dr. Ivan Toga speaks very enthusiastically about the potential and harnessing the power of AI in specific contexts and use cases, such as family reunification, and helping with satellite imagery, etc. So he’s got very strong views on this, including the need for localisation, for the sectors to come together, for donors to understand the context in which he’s operating. So he’s speaking to me from Rhino Refugee Camp in Uganda.

Another quote was from a survey participant in Cameroon. So this response was actually in French, so I’ve just translated here. And basically, this characterises this absolute drive and desire to try and harness the potential of AI, even in low connectivity settings. So you can see the thematic link to what Dr. Ivan said in his statements.

And then a middle manager working in education in the Philippines talks about how AI is not just a tech thing anymore, that it’s more widely embedded than that. And they’re very excited about the potential of AI freeing up humanitarians’ time from tasks to actually get more time working with communities. So that is their aspiration.

And then finally, this leader from Nigeria speaks very clearly about access gaps, how, again, there is so much potential, especially to amplify youth and grassroots actors, but highlights, again, that digital divide that needs to be bridged about particular marginalised groups that need to be brought into the conversation and development.

So, I just brought that in as a bit of scene setting to explain the context and rationale for this conversation today. And I’m now going to hand you over to Lucy, who’s going to speak to the concept of AI readiness and localisation. Over to you, Lucy.

Lucy: Thanks, Ka Man. I think this is a really interesting point, because those quotes really ground the experience in lived realities, because AI can feel very alien, very technology-led, and a lot of the participants that we’ve spoken to have talked about how distant it can feel, and I think it’s a really important challenge to acknowledge.

Especially when I think about that quote from Cameroon, where we talk about change, and the pace of change, and how hard it can be when we don’t always have basic infrastructure such as internet access.

And that’s why a lot of our research has concluded that AI isn’t just about developing technology and tools. It’s about a whole host of behaviours and foundations that need to be in place to really enable a locally-led humanitarian world of the future, and of today, because AI is here, and it’s probably not going anywhere anytime soon.

So, over the last six months, we’ve been discussing what it means to become AI-ready. And a lot of that centres on different elements of digital transformation. It means we need to understand what AI is and how it is used in humanitarian action. Research helps with that massively. But again, in order to be locally led, it has to be local research that really leads the way. It can’t be dictated by the Global North or technology companies. It has to come from communities that are working with the tools.

All of these different elements are all about being locally led, convening, bringing people together, learning what the challenges are from one another, learning what the opportunities are, and sharing knowledge, sharing skills, sharing experiences.

Ensuring that there’s good leadership, governance, and standards. Again, how can we make sure that AI is safe? That’s one of the key things that our research has been finding over the last year. We are amazing as a humanitarian community when we consider the needs of the populations that we work with; the safety of people is paramount and our number one priority. So having strong leadership to govern AI, to use AI, to design AI, and ensuring that there are really good standards in place.

Taking collective action and working together to drive change. AI is a transformation process in the humanitarian system, because a lot of people are using AI tools but, as Ka Man mentioned, policy and governance uptake is low. If we don’t work together and advocate for locally-led AI, it won’t happen.

Innovation. Technology is always going to be a key part of AI usage. We are always going to continue to find ways of achieving better outcomes for the communities that we work with, to make sure that they are safe, protected, and have access to chances in life.

And learning, literacy, shared knowledge, and having that common language and understanding between one another are a super important part of being AI-ready.

This interconnected approach, when combined, can really enable a transformative approach in how organisations connect to one another, how we become more locally led, how we become able to amplify expertise and leadership throughout the humanitarian community. And by taking this approach, it will really allow this transformation to take place in a way that is grounded in local experience, local leadership, and realities.

And I think this is a real opportunity moment. We’ve talked in the HLA with other colleagues about being at a tipping point. And I think by adopting locally-led design principles for digital and AI transformation, we’ll begin to see a shift, hopefully in the right direction, towards a much more equitable humanitarian system as knowledge flows in different ways.

It’s very contextual. What we’re hoping to do through this conversation is to ground it in lived experiences from our wonderful panellists. So, with this in mind, I’d love to bring in Musaab to talk about his leadership in the AI space. So, over to you, Musaab. I’m looking forward to hearing from you.

Musaab: Yeah, thanks, Lucy. So, from my perspective on local leadership and AI: when we talk about local leadership in humanitarian AI, we often focus on access to technology, but in my view, the real issue is power, not technology.

So, local actors already generate knowledge every day through informal networks, community assessment, and adaptive responses that international systems often struggle to capture. Yet AI tools are frequently built externally, trained on incomplete data sets, and deployed into contexts they do not fully understand. So, this creates three risks, I would say: AI reinforcing existing humanitarian power imbalances, local knowledge being extracted without ownership, and definitely decision-making moving farther away from affected communities.

So bridging the digital divide, therefore, is not only about connectivity or skills, it means shifting from local partnership to local authority, where communities help define problems, shape data sets, and influence how AI informs humanitarian decisions. So, basically, that’s the AI from my perspective, or the local leadership. Yeah, that’s it, over to you.

Lucy: Thanks so much, Musaab. I’m sorry, I was struggling with my technology there. I honestly couldn’t agree more, and I believe that AI poses a huge risk about being very extractive. It’s something that I feel very uncomfortable with, and I think by calling it out early and making sure that we are creating much more equitable resources, that is the only way forward, really.

I’d now like to bring in yourself, Gülsüm, to hear about your leadership perspectives in this space.

Gülsüm: Well, actually, especially thinking about global and local actors: in the sector, we’re always talking about shifting power from global to local actors, but I think that being a global actor, or a global organisation, is no longer enough to adapt to today’s world. Because in our local organisation, in YerChat, we fit into today’s world differently, I think. When I think about the reason, maybe it’s that we’re mostly made up of Gen Z. We are all young people.

So, this allows us to create impact differently than traditional organisations, whether it’s how we engage our donors, or how we protect children in the media. Being digitally fluent and AI-aware is actually, I think, the main divide right now, rather than being global or local.

So a local organisation that masters the use of AI tools can actually access the opportunities and funding, and maybe create an impact as effectively as the global giants do. So, if AI is used correctly, it might be the ultimate bridge in the sector.

But my point here is actually the ‘correct’ part. I mean, when I say AI used correctly, whose ‘correct’ is this? Is it HLA’s ‘correct’, or yours? Everyone’s ‘correct’ might be different. So, at this point, my question was whose perspective must be included. And I think my answer was: the beneficiaries, the people affected by crisis.

So, that’s why, actually, my research focuses on AI-generated visuals in humanitarian communication from crisis-affected people’s perspectives. And when I look at people’s perspectives, there’s a significant gap here. Their perspectives and the humanitarian communicators’ perspectives are totally different on some points. For example, communication practitioners may think a practice is protective, but crisis-affected people might see it very differently.

So, I think we need a shared environment for creating AI standards in our sector. If we cannot do this, if AI policies and standards are developed under a global monopoly, they will probably fail in the local context. So, local leaders’ AI awareness is a key point here. Otherwise, it will probably lead to digital colonialism over beneficiaries and crisis-affected people, I think, and that’s not a technical failure.

So, when it comes to AI ethics and standards, I think we need a table for all who have digital fluency. It’s not about whether they are global actors, global leaders, or local leaders, but about who is AI-aware, who has digital fluency. I think that will empower local leadership in this case.

Lucy: Thank you so much, Gülsüm. I think that’s such an interesting point. It’s not about global versus local, it’s about who is digitally confident and who is not, and who understands how AI works.

I’ve got so many questions in my head based on just those couple of statements alone, but it is now time for our panel discussion, where we’re looking forward to bringing Ali in. I will be building on some of those points raised by Musaab and Gülsüm, but Ali, I kind of first wanted to come to you.

Because I think the point raised here is the risk that if AI governance and design continue to sit in global spaces, we’re really going to risk reinforcing existing inequalities. That came through loud and clear from both Musaab and Gülsüm. So, in your experience and your view, Ali, what would inclusive AI reform look like? And who needs decision-making power, not just to be sat at the table and consulted?

Ali: I was actually taking a lot of notes during the conversation. But before I answer the question, let me say one thing: outside the humanitarian and development sector, things are a bit different than inside. When I look at social enterprise, at AI diplomacy, at the design and development of models, it is progressing in the Global South, and I want to acknowledge that. India, Nigeria, Kenya, Lebanon, and Gulf countries including Saudi Arabia, the United Arab Emirates, and Qatar have made progress when it comes to AI: social enterprise, AI-native companies, AI diplomacy, investments in data centres and infrastructure, as well as designing AI-native models, because now we have large language models in Arabic, in Swahili, and a few others. So I want to acknowledge that outside the sector, the Global South is part of the design and deployment. Now, in the sector, it’s a bit of a different story.

And from where I’m sitting, and what I am noticing in my conversations and the work that I have been doing at strategic level and operational level, we are kind of repeating the same mistake that we had with ERP systems and automation, and Power Apps and Power BI, which means designing at a global level, and then rolling out. And when we look at the challenges we faced there, it was mainly transformation and supply chain. And I usually say that technology is not the problem, transformation is.

So, how do we address or deal with that? I think it’s very important, when looking at governance, to look at it as inclusive and intelligent governance. That means mainly focusing on user-based design, thinking from the perspective of the end user; looking at access, and what I call the AI privilege: who will access the tools, who will access the data, and a few other things; looking at skills, and here I’m not talking about capacity as training, I’m talking about capacity as infrastructure; and also stakeholder engagement and AI transparency when it comes to algorithms. But put simply, to have that system in place, and to focus more at the operational level, I think it all starts with experimentation.

Local organisations are already experimenting, and the survey that you did shows that local organisations are progressing there. And when I look at the local organisations and local leaders that I have been collaborating and working with for the past four years, AI has empowered them in different ways. Yes, of course, they had some challenges related to internet connection and access. But the moment they started learning how to leverage those tools was also the moment they started making progress. So, number one is the experimentation stage.

Number two, I think it’s very important to look at the infrastructure, because if we as a sector want to leverage this technology, it’s very important to look at how we are protecting data. How are we working on the supply chain and deployment? How are we scaling those digital tools? Who’s going to be part of the infrastructure and innovation design? Where are they, their location, their access to those tools, services, and the internet?

And then building on the innovation, or the experimenting, the infrastructure, here comes the ecosystem. Because it’s very important to look at this as a full ecosystem, especially when we are looking at full governance. And the ecosystem means the infrastructure we built, the teams that we have, the workflow, the roles and responsibilities, the data, and the tools that we are using, how they are going to interact with each other, how those tools are going to affect roles, how they are going to affect the relation between, let’s say, UN agencies, INGOs, NGOs, civil society organisations, the community, and all that.

And from the ecosystem going to the partnership, I don’t think any organisation alone could do that. We need to invest in the partnership. And here, I’m talking about the humanitarian development actors amongst ourselves, the relations with the private sector and the economists, the relations and the connection with impact investors, the connection with policymakers and the state, and the other actors, because it’s very important to invest in this partnership. And of course, be open about the lessons and the failures and the progress that we are making within this arena. But I think in general, when I look at the full picture and I zoom out at macro level, it’s very important to acknowledge that local leaders are already using and leveraging those tools. We need to keep that innovation and that way of being open about experimenting, and try to see how we build that bridge between local actors and international actors, and as I mentioned, user-based design.

Lucy: Amazing. Thank you so much, Ali. I think that’s our guiding star throughout all of this work: how do we keep the momentum going and not constrain local leaders in adopting AI? Because it is already happening, and it’s probably been happening for years, even if we don’t have the evidence to substantiate that.

And I think that’s a challenge that a lot of people are grappling with, because it feels like, as you say, there’s a bit of a lag, and the humanitarian sector is rolling out AI in a way that is traditional and has not always worked. So I think that’s a really pertinent point when we talk about transformation.

And just very briefly, I’d be quite interested in hearing your thoughts on this, because there’s been a lot of talk about governance standards and policies. Is there a risk that those would actually impede that uptake and innovation in local organisations? Or is that something you see enhancing the usage of AI?

Ali: Very good question. I think it depends on the location, because I can see some organisations based in Europe struggling a bit to navigate the rules and regulations, and how to adapt to them, so it’s a somewhat limited arena for experimenting. Those in the US address it in a different way. Those in the Global South, in a different way again, and I have to admit that many of those leaders and organisations in the Global South have a bit more flexibility and room for experimenting. Let me give an example.

There was a local organisation in Nigeria. They have their own strategy, and they wanted to integrate AI within it. So we had several conversations, and we looked at what they wanted to achieve from a programmatic perspective. That was the foundation of the AI integration: how could we reach more people and support more communities? And from there, how could AI speed up that process, or scale it, or make it more efficient?

So what we did, after the discussions and the mapping and all that, was find that the best way was to run several trainings for team leads, management, and staff on what not to do. So, the things we shouldn’t do, from a data perspective and for the ethical use of AI, we set aside. And then everything else is open for experimenting and use.

And I spoke to that organisation when we did the periodic review after six months. It changed how they worked. It changed the workflow, their relations with each other, their relationship with the donor and other organisations, their positioning, and different things, because they, I want to say, moved fast and broke things. They took those tools, taking into account what not to do, and started experimenting. Of course, there are risks.

But I want to say, let’s not overestimate the risks and underestimate the opportunities. And local organisations in Nigeria, in Lebanon, in Syria, in Sudan, in Kenya, in Rwanda are giving us very good examples of how to leverage those tools and use them. I think at the end of the day, it’s about not standing in the way of local organisations and local leaders. They have a clear mindset of what they want to do, they already have strong access, they are part of those communities, and suddenly those AI tools appeared as a resilience tool. It helped them navigate the complexity of their engagement with donors and other organisations. So I think we have so much potential there. The most important thing is, you know, we can raise awareness of the things we shouldn’t do, and the risks, and give them space, not stand in their way.

Lucy: I couldn’t agree more with everything that you’ve just said there. I was going to bring in Gülsüm at this point, but actually, Musaab, I’d like to come to you, if that’s okay, because I think your work, which I know you’ve done in collaboration with Ali, links quite nicely to what was just being spoken about there.

So, Musaab, I wondered if you could actually just talk us through your experience in not just learning about AI, but shaping AI in your work, and what your experience has been to date, and how you’re now using it, and how you kind of led the way in Sudan in particular.

Musaab: So, basically, the session that I did in collaboration with Ali and the Humanitarian Leadership Academy brought together many local responders here in Sudan working with emergency response rooms, or CBOs, community-based organisations. For myself, first, it helped me to demystify AI. In many humanitarian spaces, AI is either over-hyped or feared. This session clarified what AI actually is, pattern recognition systems trained on data, and where its limits are. That matters when you are responsible for programming decisions. I’m working as a technical specialist, so all of this matters.

And secondly, it shifted how I think about power. AI is not neutral. It reflects the priorities of those who design and fund it. For someone working closely with local and mutual aid groups, that realisation is crucial. If local actors are not involved upstream, the tools may optimise for efficiency over dignity, or scale over context. So, in particular, it made me more knowledgeable about AI and its use, more intentional, and more aware of the governance gaps in my role. Yes.

Lucy: Amazing, and I think that’s really great to hear. It does highlight the governance gaps, and I guess the question, and again, I’m being quite tricky with the three of you, because you’re such amazing panellists, is: what risks does that governance gap pose to your work? Because when we think about governance and policy gaps as humanitarians, I personally immediately think of standards and the principle of do no harm, because the risks are so high. I don’t know if you have any reflections on that.

Musaab: So basically, in cash programming, I see three major risks. Cash systems require digital trails: mobile money reports, targeting databases, and, in some contexts, like conflict-affected areas of Sudan, biometric registration. That data can be extremely sensitive if accessed by someone who is not authorised, and it can put people at high risk. The aggregation and analysis of this data increases exposure. So, data exposure is one of the risks of AI.

The second thing, I think, is that if AI models are trained on incomplete data, they may systematically exclude certain groups: people in informal shelters, undocumented people, and minority communities. This causes bias in cash targeting, and bias in cash targeting is not just a technical flaw, it becomes a protection issue.

And the third thing is that if AI systems are developed or hosted by actors aligned with a specific government or with private interests, communities may perceive cash assistance as political influence, as if they have to be on that side just to receive the assistance. Trust is central to neutrality. Once communities believe data may be shared or misused, participation drops and risk increases, definitely. So, basically, those are the major risks that I see in cash programming related to AI.

Lucy: Completely agree, and I think it’s something that a lot of work is being done to address, but as you say, it comes down to people, and how we use it, and how the data is managed. And I think that comes to your original point, that local leaders in humanitarian action aren’t just users of AI, we’re also producing AI and informing future iterations. So it can’t be one-sided.

And I could talk for hours on this with you, but I do want to bring in Gülsüm here, because I know that you’ve done an awful lot of research, and you’ve got a lot of experience with people that are using AI, you use AI, and how it interacts in your daily life and within your work.

And I think we’ve talked a lot about leadership, and that’s quite an abstract concept, potentially, and you mentioned things around, you know, Gen Z are leading the way in this, because they are digital natives. So, what I’d really be interested in hearing from you is, where do you see people stepping into leadership in these ways? And who is engaging consistently in that network? Is it purely young people, or is it a range of different voices?

Gülsüm: Well, actually, I think that in the humanitarian sector, and not only in the humanitarian sector, by the way, the sense of leadership has traditionally been related to experience. But with the coming of AI, I think the sense of leadership is based on expertise rather than experience. So, if someone from another generation, a very young person, has this expertise in AI, and can not only use AI tools but also critique how they are used and the risks they carry, they can bring a new kind of leadership perspective, I think.

Because there are lots of points related to AI that have not been discussed yet. For example, in the media specifically, and you mentioned the risk issue, Musaab, the most dangerous and most immediate risk that we are facing is humanitarian practitioners using AI visuals without permission, or using them for donation purposes, which directly affects the trust of individual donors.

So, from this perspective, we are mostly facing AI shadow work. If there is no leadership with AI awareness, AI shadow work might be our most immediate risk, I think. So at this point, I can say that expertise in AI, as a humanitarian leader, is actually the most effective way, let's say, to protect trust in the relationships between partners, individual donors, and NGOs.

Lucy: And I want to pick up on that point, because Musaab also mentioned trust, and Ali, I think you might have mentioned it as well. And what is it about trust that you think is important in AI and in relationships with people?

Gülsüm: Well, when I conducted my research, I carried out lots of interviews: in my previous research with Syrian people, and during the research for my thesis with Sudanese people, with crisis-affected people. The most common thing that they brought to the interviews was this donation issue, the use of AI by humanitarian communicators for donation causes, which directly affects their trust in the organisation.

And also, I realised something in my interviews: if a person does not engage with AI in their daily life, they are mostly more hostile towards the use of AI in the humanitarian sector, and their trust in NGOs is more fragile when those NGOs use AI. But if these individuals use AI in their daily lives, in their work, and so on, they feel that it's a common thing, a normal thing, and that it can also be used by NGOs. So, actually, I think the trust issue depends on people's own relationship with AI.

Lucy: Yeah, I really agree, and we’ve seen that play out throughout our research journey as well, that there has to be organisational trust to use AI at work. That’s been a really interesting trend that we’ve not explored explicitly. I know the research team are keen to explore it, but the fact that you say it is about that individual relationship with AI, I think is a really…

Gülsüm: I think so. I think it was one of my outcomes when I just conducted my research, let’s say.

Lucy: Brilliant, and I'm really glad that you've been able to see that and articulate it so clearly. There are so many questions that I want to ask all of you, but I'm conscious of time, and want to make sure that we have a chance to open the floor up to our audience. I'd love to hear from each of you individually now, and just ask: what are your aspirations for humanitarian AI in 2026? What would you like to see, and what do you think might happen as we move through what is proving to be yet another very difficult year?

And I’ll open the floor to whoever takes that first.

Ali: Shall I go first?

Lucy: Great, yes.

Ali: Okay, so let's imagine the localisation agenda as a garden. Then technology is the rain. But without gardeners, without good soil, without clear boundaries, that rain is going to become a flood. What does that mean? It means that technology, AI, automation, and so on, comes with opportunities, but for us to get the best out of them, we must invest in innovation, infrastructure, ecosystems, and partnerships, be open about the challenges and lessons learned, and work together at all stages to keep it grounded in 'we, the people'. That is the main thing in what we do in the humanitarian and development sector. So, build it based on people's interests. That's how I look at it.

Lucy: Wonderful visual image, and you’re absolutely right, it is everything. It is not about the rain. Musaab, would you like to come in if you’re still with us? I know you’re having bandwidth issues.

Musaab: Yes. I would like to comment. So, I would like to see AI designed with local actors at the table from the beginning, not as testers, not as data collectors, but as co-designers. If the systems are meant to serve crisis-affected communities, then those closest to the context should influence how problems are defined and how models are trained. So, basically that’s what I would like to see for AI from now to the future.

Lucy: Amazing. Thank you. And Gülsüm?

Gülsüm: Well, actually, I think that standardisation will increase in general across all AI-related topics, and I think the use of AI will also be brought into the cluster system. And as for the standards that are used globally, such as the Sphere standards, I think the use of AI and related issues will probably be on the table during this year.

And I think there is a huge responsibility on us, because we need to be critical of every standard suggested in relation to AI. If we think that the standards should not be under a monopoly, with a giant entity simply deciding on them and serving them to local organisations, then we have even more responsibility here, both to be critical and to suggest new standards.

So maybe each local organisation can do this; I suggested it in our organisation as well. It's a very local one, about 70 people. Even though we are very small in scale, we are working on AI standards for the organisation's policies, because if we want to continue using AI, we need them. We cannot just let everyone do whatever they like with AI. So I think that, for this year and maybe the coming year, whoever takes responsibility for AI will probably shape the future of AI in the sector.

Lucy: Yeah. It is a really pivotal moment. Fantastic. Like I say, there is so much more that I want to delve into with you, but I can see some questions coming in from the audience. So, Ka Man, I'm going to hand over to you to start facilitating these questions. I'm looking forward to hearing from our panel still.

Ka Man: Hi, sorry, I had a technical issue coming off mute there. Thank you so much for such a thought-provoking discussion today. Thank you, Lucy, for facilitating that, and thank you to our panellists for sharing your candid insights. I really appreciate it. And thank you to our audience for your attentive listening and your great questions that you’ve posted in the Q&A, as well as the chat.

So, we have the next 20 minutes or so to put the questions to the panellists. And actually, I see a thematic grouping of the questions. There's a lot of interest around accountability and governance, which really aligns with the gap in formal governance that we spoke about at the beginning of the session, and how local organisations and local actors are using their own creativity, drive, and ingenuity. But obviously, coordination and accountability are the next step.

So, I wanted to just put some questions that are for anyone to jump in and respond to. So, the first question links to this theme. Would there be a local accountability mechanism for AI-related harm? What does anyone think about that?

Ali: A quick note from my end. Please keep in mind that inclusive and intelligent governance is already part of the Sustainable Development Goals for 2030. So all the Sustainable Development Goals, they have indicators related to inclusive and intelligent governance, and intelligent governance includes AI. This is number one.

Number two, several countries are rolling out AI rules and regulations, so there are mechanisms to look into AI design, deployment, and a few other things, including the EU AI Act and the executive order in the US. And I am aware of several countries working on AI privilege rules and regulations, where they look at who has access to the data, as well as the principle of least privilege from an ecosystem perspective. This is number two.

And also keep in mind that it's not fully clear for us as a society, because the development is increasing and it's happening super fast. Just two years ago, we were at the large language model stage, then we moved to reasoning, and now agentic AI. So we are not fully sure how that will move: the speed, the progress, what it will look like. So I'm a bit worried that if we put additional rules and regulations in place, it might cause damage or harm. This is point A.

Point B, please keep in mind that the examples that we have within the humanitarian and development sectors are still at the large language model stage, with a bit of automation and a tiny bit of agentic AI. So, until now, the main issues for us are data-related, as well as cyber attacks, and this is going to grow in the future. So my recommendation, or my suggestion, would be: instead of jumping into putting rules and regulations into our governance processes, let's map the different scenarios, plan based on those scenarios, look at the integration of those different tools, and focus on the transformation. If we focus on the transformation element, we will reduce the supply chain challenges and cyber security challenges, as well as reducing harm.

Ka Man: Thank you, Ali. Linking to this, I have a question that was submitted in French; in English, it reads: the use of AI can be costly, and there is often a lack of clear regulatory frameworks for its use. What's your opinion on this, please?

So, Gülsüm, you spoke optimistically about sector mechanisms coming together, clusters and so on, and you talked about Sphere standards. Do you think that we can make progress in this space with regards to AI in the coming period? What’s your take on that?

Gülsüm: Well, I think so, because right now... I'm speaking based on Turkey, by the way, but I'm seeing a strong tendency in Turkey for local organisation staff to demand standards training, especially on Sphere right now. So, that gives me a kind of hope, let's say.

And the staff, the humanitarian workers, are not just saying, okay, we have adopted this and we will do it, accepting things that were designed before them. They are also critical, and would like to be part of the steps involved. So, at this point, I think that when AI is more widespread in humanitarian work, especially, let's say, in the media, practitioners will criticise it more and demand a standard for it.

Because they will probably be criticising each other's work as well. I recently saw an AI-generated visual used by an NGO for funding reasons, for a crowdfunding campaign, and I asked them about it. They questioned why it should be an issue, and I gave my perception, and so on. So there is a common, shared environment for exchanging these ideas, and I think practitioners also push each other to use AI to standards, even if they are not named as standards; they push each other to use it in ways that have already been agreed. So, that's why I'm in a more optimistic position, let's say.

Ka Man: Thank you, Gülsüm, I really appreciate that perspective. Ali, if I could bring you in very briefly. So, if we’re talking about almost collective action around common standards, if we’re looking at more coercive pressure, so to speak, from a governance perspective and regulation. Do you think, say, for example, is there anything around the EU AI Act, or is there anything that you think might be pertinent to highlight here?

Ali: I think the opportunities that we have here are the initiatives happening within different alliances at the global level when it comes to the implementation of their programmes or operations or projects, because there are already several initiatives, joint country programmes or joint operations. We have ACT Alliance, we have several networks and platforms that are bringing organisations together, so that brings many opportunities. This is one.

The EU AI Act is going to bring the element of governance to different organisations, but international organisations that have an office in the EU, they must start adapting their processes and communicating their workflows, as well as what they are doing in finance, human resources, monitoring and evaluation, ERP system, and other things. I think, in general, we have several opportunities coming. We could leverage those rules and regulations for the speed and efficiency and reducing bureaucracy in the sector.

We could also take advantage of the alliances, joint country programmes, joint country operations. We could build on what’s happening within Sphere, the Inter-Agency Standing Committee guidelines, the NGO forums at country level, and all those different infrastructures that we already have in place. So, I would suggest, instead of investing in creating a new thing, let’s redesign and reimagine what we already have, so that we don’t reinvent the wheel, we don’t invest in something new, and go through the rollout and all that. We have several initiatives, they already have a certain level of trust and certain level of access, we could invest in them and leverage those different opportunities.

Ka Man: Thank you, Ali. Really appreciate your insights there. So next, I’d like to put a question to Musaab, if that’s okay, and it’s a question from Maria. So, she’s building on something that you talked about in this session. So, she asks, please, would you be able to give a specific and practical example of how AI use may cause bias in targeting in your context?

Musaab: Okay, good question, actually. So, imagine an AI model trained on historical beneficiary data. I will give you an example from Sudan, actually. So, imagine an AI model trained on historical beneficiary data to predict which households are most vulnerable and should receive cash assistance. So the model may choose indicators like registered displacement status, formal camp residence, household size, documented income loss, or mobile money transaction history. So, if you can follow me on this.

But in Sudan right now, many of the most vulnerable people are not formally registered in the system as displaced. Some are staying with host families. Others move frequently between neighbourhoods due to insecurity. Some women-headed households avoid registration because of protection risks. Informal workers may not have digital transaction histories.

So, if the model is trained mostly on formal camp data or structured registration datasets, it will learn patterns from those populations. It may systematically prioritise households that resemble previously registered beneficiaries and unintentionally exclude undocumented urban displaced people or marginalised groups who do not appear clearly in the data or in the registration system, whether because of their displacement status or the other reasons I've just mentioned.

So, that is algorithmic bias, because the data reflect structural gaps. So in conflict contexts like Sudan, exclusion from cash is not just an administrative issue, it can deepen vulnerability, create tension between communities and undermine trust. So basically this kind of practical example might happen when we use AI. So that’s why human validation and local knowledge must remain central when AI is used in targeting, especially in cash programming, and I would say for any targeting. Yeah.
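Musaab's example can be sketched in a few lines of code. Everything below is hypothetical: the feature names, learned weights, and threshold are invented purely for illustration. The sketch shows the mechanism he describes: when every indicator a scoring model has learned depends on appearing in formal systems, a highly vulnerable but unregistered household scores zero and falls below the assistance cut-off.

```python
# Hypothetical sketch of bias in AI-based cash targeting.
# Weights stand in for patterns a model might learn from camp-biased
# historical data: every indicator requires presence in formal systems.

def vulnerability_score(household: dict) -> float:
    return (
        2.0 * household["registered_displaced"]
        + 1.5 * household["formal_camp_resident"]
        + 1.0 * household["has_mobile_money_history"]
        + 0.5 * household["documented_income_loss"]
    )

THRESHOLD = 2.0  # illustrative cut-off for receiving cash assistance

# A camp-registered household with moderate need:
registered = {"registered_displaced": 1, "formal_camp_resident": 1,
              "has_mobile_money_history": 1, "documented_income_loss": 0}

# A highly vulnerable woman-headed household staying with a host family,
# unregistered for protection reasons, paid in cash (no digital trail):
unregistered = {"registered_displaced": 0, "formal_camp_resident": 0,
                "has_mobile_money_history": 0, "documented_income_loss": 0}

print(vulnerability_score(registered) >= THRESHOLD)    # True: selected
print(vulnerability_score(unregistered) >= THRESHOLD)  # False: excluded
```

The exclusion here is not malicious; the unregistered household is invisible to every feature the model uses, which is exactly why human validation and local knowledge remain essential in targeting.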

Ka Man: Thank you very much, Musaab, for sharing that tangible example. Linking to this, next week I’m going to be having a podcast conversation with regards to the role of blockchain with cash transfers and how this may fit into the broader humanitarian context and digital transformation, including AI. So, I’ll share the link with everybody, so keep an eye out on our channels for that, to get a bit more insight into how these pieces may connect.

So, I’d like to bring in a question from Dr. Ivan Toga, who was actually one of the interviewees who was featured at the start of this presentation. So, it’s linking to this sort of opacity of systems. So, Dr. Ivan asks, doesn’t building robust AI from a black box ignore its real impact on our local refugees and people needing mental health support? How do we address this gap?

Would anyone like to come in on that?

Ali: Maybe a quick thing from my side: that's an excellent question. I think we have to build it the same way we build our programmes, from first principles, from community needs, doing the actual context analysis and all the other elements there. But I want to mention that I recently saw several very good examples where AI tools were supporting medical care, providing educational support, and a few other things in camps and in remote areas. The way they were designed is quite similar to the way we did that training in Sudan: we built it from the local language, from the context, from the community, from their needs, from how they work and interact together, and from the cultural elements.

We built it all from there, and then we started deploying or using those different tools. And to be totally transparent, I think if we don't do that, we will not be able to achieve that digitalisation, or AI integration, or intelligent and inclusive governance. We have to keep in mind the people that we are serving in those communities, and to do that without causing any harm or issues, and to avoid any risks, we have to build it from there, from them, not for them.

Ka Man: Thank you, I really appreciate your perspective on that. So next I’ll come to a question from James, who’s actually my colleague. Thank you, James, for asking this question. He asks, is it realistic that the owners of leading AI models, such as OpenAI and Anthropic, could be held responsible for supporting locally-led AI, or will the sector as a collective be responsible for developing best practices for integration, or even their own models? Does anyone have a view on that?

Ali: I could just say that Claude AI, they have a non-profit element, and OpenAI, they also have a non-profit element. I was part of several conversations with some actors, including Microsoft at some point, exploring that side, and asking them to give that perspective. NetHope is playing a key role in building that bridge between the private sector and non-profit actors.

I'm a bit worried that we don't have this at large scale, and holding them accountable is a bit challenging, but there are small initiatives, and if we build on them, I think we might get some results. I want to mention again that there are several language models with similar capabilities in other countries and regions, and they are native to those regions. We have Groupa in Africa, we have Jais, we have Bharat in India, and other language models; they are open source, they are native in the language, and many people at the policy level there, and in civil society, are engaging with them. But at the global level, OpenAI, Claude AI, and so on have small elements, but still not at that large scale.

Ka Man: Thank you, Ali. From my perspective, I guess I wanted to make the point that collaboration with these actors, and trying to lobby for our collective interests as a humanitarian sector, is pivotal and ongoing. That has to happen, but we obviously can't pin our hopes on that movement alone. A lot of people that I speak to in this space do advocate for, as you say, open systems where people can build from the ground up using reusable components. So I think there's a lot of interest around developing small language models, which can be more secure, used offline, and trained in specific languages and contexts. I think this is really exciting. From what I'm hearing, it's very early days; I've not heard of any specific cases yet where people are deploying small language models, but personally I think it's something that humanitarians should pay particular attention to.

So, unfortunately, our session has to come to a close shortly. I’d like to thank our incredible panellists for their candid insights and perspectives today, which are really invaluable, and I really do appreciate you taking the time to engage in this conversation today and to drive forward this discussion. I’d just like to take the closing minutes to just highlight the next sessions that we have coming up as part of the HNPW programme.

So, we have three more sessions happening: one is online only, and the other two are hybrid, in Geneva and online, so everyone can access them. They're taking place on the 5th, 10th, and 12th of March. So please do sign up if you're able to, and I will share the links in the post-event email.

So, thank you again for taking the time to join the session. We appreciate it. Thank you to H2H, thank you to the organisers of HNPW, and I’m wishing you all a good rest of day. Thank you.

The State of Learning and Development in the Nonprofit Sector

The Training Providers Forum – Groupe URD, Humanitarian Leadership Academy, Humentum, IECAH, INTRAC, NetHope, and RedR UK held this online session as part of Humanitarian Networks and Partnerships Week (HNPW).

Over the past year, the global humanitarian and development sectors have been rocked by funding cuts on an unprecedented scale, whilst simultaneously being called on to respond to escalating levels of need. This session specifically examined the impact that these dramatic sector changes are having on the provision of training and of learning and development for humanitarian and development actors.

This session is aimed at those working in L&D, HR, people and culture or in a leadership role in the humanitarian and development sectors.

Unsettling the status quo: The case for locally led humanitarian research

Speakers: Tamara Low – HLA, Maryana Zaviyska – Open Space Works Ukraine, Umut Güner – KAOS, Kai Hopkins – ELRHA

Despite strong rhetoric around localisation, humanitarian research is still largely controlled by well-resourced Western institutions, with local actors often sidelined into limited roles. This undermines the value of locally led research, which is typically more relevant, culturally grounded, and responsive to affected communities—especially critical amid shrinking sector funding.

This session explored the power shifts, funding changes, and norm-setting required to advance a genuinely locally led research agenda, drawing on insights from local research organisations, funders, and humanitarian leaders working to drive this change.

Watch the recording below

A Call to Action for Youth Leadership and the Future of Humanitarian Action

Speakers: Jennifer Dias – HLA, Maryana Zaviyska – Open Space Works Ukraine, Olha Shevchuk-Kliuzheva – Alliance UA CSO, Mercedes Garcia – Save the Children International, Van Anh Tranová – DEMDIS, and Huseyin Arslan – HLA

Young people are already playing critical roles in humanitarian response across the world, yet their leadership remains poorly defined, under-recognised, and weakly embedded in humanitarian systems. Often active as volunteers and innovators, youth face limited pathways to formal leadership and professional growth.

This panel explored how the sector can shift from ad-hoc youth engagement to genuine youth leadership, drawing on global research and lived experiences to identify practical, safe, and empowering pathways for youth to lead in humanitarian action.

Watch the recording below

If you have any questions, please contact info@humanitarian.academy
