
Humanitarian AI: Lessons learned, trends and opportunities for 2026

Thank you to everyone who joined us for our Humanitarian AI webinar on 29 January 2026 held in partnership with NetHope.

You can watch the recording and view the slide deck below. We’d love to hear your feedback and suggestions for future sessions – email us on info@humanitarian.academy

Session transcript

This transcript has been generated using automated tools and has been lightly edited for clarity and readability. The transcript has been reviewed but minor errors or omissions may remain.

Ka Man: Hello, everyone, and welcome to today’s webinar, brought to you by the Humanitarian Leadership Academy, in partnership with NetHope. I’m Ka Man Parkinson, Communications and Marketing Lead here at the HLA, and I’m absolutely delighted to welcome you to today’s session: Humanitarian AI, Lessons Learned, Trends and Opportunities for 2026.

We’re absolutely delighted and excited to have over 800 people registered to join us today from all over the world. Thank you for making the time to be here together with us and our amazing expert panel.

As you’re joining us, please introduce yourself in the chat, letting us know your name and where you’re joining us from.

Today’s session is 90 minutes. We’ll start with this welcome, followed by quick introductions, then move into presentations from our panellists and myself, and then we’ll move into audience Q&A. If you have any questions for our panellists, please submit those using the Zoom Q&A function, which is open at any time.

It’s great to see that we already have over 160 people in the virtual room with us, so thank you and welcome. Welcome to people from the UK, Syria, Türkiye, and Nairobi. You’re all very welcome here, thank you for being here.

A little bit of housekeeping: this session is being recorded and will be uploaded to the HLA YouTube channel, together with the slide deck, within 24 hours. Zoom captions are enabled, so to turn those on, just go to your Zoom toolbar. Translated captions are also available.

Please feel free to use the chat. We want this to be an interactive and lively discussion space, so if you’ve got any reflections or thoughts as we go along, please share those with us. Use the Zoom Q&A function so your questions are all in one place, as the chat tends to move very quickly and we may miss your questions otherwise.

Please keep any questions or comments respectful and on topic, relevant to humanitarian AI. In recognition of your learning in this forum today, you’ll be eligible to claim an HPass digital badge. Keep your eye out for a separate email with details of how to claim that next week.

It’s so nice to see people joining from all over the world. Libya, Sweden, Nigeria—welcome.

I’m absolutely delighted and thrilled to be joined by an incredible panel today with such a wealth of expertise and experience in the humanitarian AI space. We’re joined by Mercyleen Tanui from WaterAid, Michael Tjalve from Humanitarian AI Advisory and Roots AI Foundation, Daniela Weber and Esther Grieder from NetHope, and me. I’m your host for today’s session. If this is your first time joining an HLA webinar, welcome. The HLA is part of Save the Children, and our mission is to accelerate the movement for locally led humanitarian action.

So, I’d like to invite our panellists to say hello and say a few words about themselves. We’ll start with Mercyleen.

Mercyleen: Amazing, thank you so much Ka Man, and I’m so excited to be here today. My name is Mercyleen Tanui. I’m the Global IT Operations Manager at WaterAid. I’m responsible for driving operational excellence, building resilient systems, and championing human-centred innovation. I’m so glad to be here. Thank you. I’ll pass it over to Michael.

Michael: Hey everyone, I’m very happy to be here and excited to join this conversation. My name is Michael. I help humanitarian organisations understand how to get started with AI in a way that emphasises safety and effectiveness. I’m an independent consultant focused on AI and humanitarian action, and I’m co-founder of Roots AI Foundation. Pleased to be here.

Daniela: Hi everyone. It’s so exciting to see everyone in the chat and the many different places people are joining this webinar from. It’s really humbling. I’m Daniela, and I lead NetHope’s Center for the Digital Nonprofit. At the Center, we support nonprofits in their digital transformation, including classic tools and newer technologies such as AI. It’s all about helping nonprofits utilise AI in a way that’s beneficial to them and, of course, also responsible. Thank you.

Esther: And I’m Esther, I’m NetHope’s Membership Director. We’ll say more about NetHope in just a moment. A fun fact about me is that I used to work for the Humanitarian Leadership Academy, so this is kind of bringing together two parts of my career, which is very nice. Thanks for having me on the panel.

Ka Man: Thank you, Esther. Thank you to our incredible panel. I’m so excited to hear your insights and perspectives today, and I’m so excited that Esther is here with us in this space. I’m sure many of our community members will remember Esther as she used to manage HLA webinars. So, thank you very much to our wonderful panellists.

I’d like to invite Esther back onto the virtual stage to say a few words to you.

Esther: Thank you Ka Man, and it’s really nice to see such a huge and enthusiastic audience here today. When I was running this webinar series, we had much smaller audiences, so it’s really exciting to see how much this has snowballed over time.

I wanted to introduce NetHope, because this is a webinar in partnership between HLA and NetHope. We are a global community of non-profit digital and technology experts. We bring together over 60 INGOs in the sector. We also have a growing community of national and subnational organisations, which we call NetHope Connected, and we have around 120 organisations in that community now, but that’s growing all the time.

We have over 50 global tech partners that we work with as well, and the cumulative staff across our membership is over 800,000 people, so that’s quite a reach when you look at it that way. We’ve been going for 20 years, bringing people together to talk about anything digital and tech-related. Think about AI, of course, cybersecurity, how you use digital in your fundraising, how you use it in your programmes, in your data collection—all those kinds of topics.

If we go to the next slide, we’ll show you who our members are. These are our 63 members currently, lots of familiar names in there, I’m sure, for everyone here. And as I say, we also have our growing NetHope Connected community of small organisations as well.

If you’re interested in getting involved with NetHope, this is how you can do so. You can get in touch with us via email. I’ve put a QR code up there as well. You might be interested in our NetHope membership or in NetHope Connected, but the main thing is just get in touch with us, and we’ll send you some information through if that’s of interest.

Now, moving on to the business of the day, because we’re here to talk about humanitarian AI and what that’s going to look like in 2026. I got out my crystal ball, and with the help of a lot of my colleagues, I pulled together a few predictions for what AI is going to look like in the humanitarian sector during 2026. These have been crowdsourced from a bunch of people, mostly at NetHope.

One of the things that we’re expecting to see is more localised AI use—more tools developed in the Global South—and smaller organisations demonstrating what can be achieved with AI when you’re unencumbered by the red tape of working for larger organisations. This is really because we’ve had a lot of hype around AI, and now people are moving towards thinking: well, really, practically, what are the applications of this technology? At the same time, we’re seeing people talk a lot about things like digital sovereignty—the desire to not be overly dependent on US tools, for example.

We think that’s shifting us in the direction of maybe some more homegrown solutions with very niche specific objectives. So, localised AI is definitely going to be on the up.

We’re also seeing lots of moves towards shared standards for AI usage in the sector. There are conversations coming up through different channels about this. Everybody, including at NetHope—Daniela has been leading a lot of work on AI governance tools—is moving towards defining how we want to engage with this technology as a sector.

Linked to that, one of my colleagues put forward the idea that this year might be the year when we see one big mistake or use case that results in harm as a result of AI, and that will increase the pressure on the sector to have those standards and regulations in place.

We’re also going to see larger organisations that have invested heavily in AI needing to demonstrate the impact of that spending. That’s something we’re expecting this year.

There’s also been talk about whether AI agents will appear in organisational structures. Will someone put an AI into their organogram this year? Are people starting to add AI just in every proposal—a little sprinkling of AI to try to tempt donors? Or are we beyond that point? That’s something we’re increasingly seeing: people add AI to programmes to make them more appealing.

We’re expecting this year as well that there will be more talk about the environmental impacts of AI. That’s been a relatively quiet topic, surprisingly, I think, so far in the sector, but maybe that’s going to come more to the fore in the coming year.

Overall, I think what we are expecting is that we’re going to move a little bit beyond the hype this year, manage some of those huge expectations around what AI can achieve for the sector, and start to get much more practical and nuanced about what AI can do for us.

And this is just an example from UN Relief Chief Tom Fletcher, who’s not a technology person, talking about the UN80 plan and how we as a sector can respond to the funding situation and shrinking resources. As a non-tech person, he is also looking to technology to create efficiencies. How can AI change the operating environment for us? Technology and AI are really becoming such mainstream topics now. Everybody’s talking about them as a potential solution to our problems, which creates a lot of expectations and also a need to narrow down to where the real opportunities are. He also talks about the need to make sure that technology is working for humanity and really serving our needs as a sector.

So that’s all from me. I would love to hear in the chat people’s predictions for what they think might happen in 2026. I don’t know whether any of the ones from the previous slide resonated with you. If so, please put that in the chat. If you have a different prediction, as crazy as possible, please, we would like to hear it.

I will hand back to Ka Man for our next speaker. Thanks, everyone.

Ka Man: Thank you so much, Esther. I’ve just kept up the crystal ball slide there so that people can take a look at that and share any thoughts in the chat if they want to do so.

This is thought-provoking, because it really aligns with the 2025 research that the HLA conducted together with Data Friendly Space. You’ve obviously moved and shifted along because you’re looking through your crystal ball into the future. Some of this is encouraging, like the localised AI use cases and tools developed in the Global South. That’s something that we really align with, given our mission to advance locally-led humanitarian action. But also, some of the things you’ve highlighted are areas of big concern—such as the potential for one big mistake or use case that results in harm. That is something that we obviously have to collectively try to mitigate. And on a personal level, the AI agents in the organogram is something that I find a little unnerving from a personal perspective, because human colleagues are probably preferable. But I will keep an open mind about that.

So, thank you so much for that really useful scene setting and predictions.

I’m just going to take the next 5 minutes or so to briefly recap on the key takeaways from the 2025 global baseline study that we conducted with Data Friendly Space.

As many of you on this call may know and may have contributed to, this was the first global study to look into how humanitarians are using AI on a practical level, on a day-to-day basis, to support their work. We had incredible engagement with over 2,500 responses from 144 different countries and territories. We gained a really rich picture of what’s happening around the world, and three-quarters of respondents were from the Global South. It was brilliant to have that window into what people are doing, what’s happening on the ground.

What emerged was what we call the humanitarian AI paradox: really high levels of individual uptake of AI tools, like ChatGPT, against a backdrop of generally low levels of AI readiness. For example, only 4% of survey respondents consider themselves to be AI experts.

In terms of how humanitarians are using AI on a day-to-day basis, when we started the survey, we thought we might hear lots of interesting use cases—quite technical systems for forecasting or needs assessment. This did come through, but primarily, the usage was driven by individual use, supporting day-to-day tasks and workflows: writing, data analysis, language tools, and content creation.

In contrast to the high levels of individual adoption, there were low levels of organisation-wide integration. The majority of respondents described their organisations as being in the experimentation and piloting phases, with just 8% saying that AI was widely integrated. Just over a fifth were aware of a formal AI policy in their organisation.

There were mixed attitudes towards AI’s effectiveness in supporting humanitarian work. Fewer than half felt that it had helped efficiency, and 38% said it helps with decision-making. So the jury was still out with regards to its positive impact.

In the open comments, we received lots of very candid views. Whilst there’s a whole spectrum of feelings around AI—some people were firmly against the use of the technology and did not think it was compatible with humanitarian work and values, others were evangelists, really positive about the potential, and everything in between—there was a lot of awareness of the concerns and constraints around moving forward in the AI journey.

These included practical barriers like skills, funding, and infrastructure. It was interesting that climate impacts came through a lot in our open comments. And a lot of respondents pointed to a lack of overall organisational strategy.

We conducted that survey in May-June 2025. Just over half a year later, we want to see how the picture has shifted. Everyone on this call is all too familiar with the fundamental changes and everything that’s happened to the sector. Humanitarians have had to grapple with this change and quickly adapt to new ways of working in this new reality.

We want to know how and if that has impacted your AI use and adoption. Together with Data Friendly Space, we’ve launched a Pulse survey to do a really quick temperature check on what’s happening in key areas: individual and organisational adoption, how humanitarians are using AI, whether usage has moved on from the use cases I’ve just mentioned, what’s happening with policies, and training.

If you’ve taken the survey, thanks so much, we really appreciate it. We’ve already had responses from over 120 countries, so we’re excited at that engagement. The survey’s open till Saturday the 31st, so if you’ve not taken it, please do. I’ll send the link in the post-event email. Taking a few minutes to fill that in will really help us gain a clearer picture of where we’re at in January 2026. We’ll start to roll out insights from next week, starting with those key percentages and figures.

One of the key themes that emerged from the research is that many humanitarians, in the face of all this change, felt like AI was off-limits when talking about it with teammates, colleagues, and managers. It’s not something that people felt safe to talk about. Obviously, that lack of psychological safety is a big barrier to AI adoption. So, in the spirit of information sharing and collaboration, that’s why we’ve created this webinar for you today. We’d also love to invite people from our community to share their experiences in a future webinar, podcast, or article in different formats. So, if you’d like to get involved, please send us an email sharing your perspectives. We’d love to continue that conversation.

So, thank you very much. I would now like to hand you over to Michael.

Michael: Thanks, Ka Man. It’s been fascinating to see the evolution of AI and the role it has across society and our sector today. I have been working within the field of AI for a very long time. Most of that time was in the tech sector, focused on social impact AI. A couple of years ago, I left the tech sector to establish the Humanitarian AI Advisory, and I’ve been working as an independent consultant since then, supporting humanitarian organisations in navigating the potential as well as the risks of AI.

I’ve had the pleasure of partnering with NetHope on a number of different projects for about a decade now, and with the Humanitarian Leadership Academy, particularly over this past year.

Attention across the sector is really gaining steam, as many of you will have seen firsthand. There are a couple of key reasons for that. On one side, the humanitarian sector is underwater in terms of its ability to effectively address growing humanitarian needs. On the other side, AI has matured to the point where it is capable of playing a key role in the path forward.

Faced with the situation of having to do more with less—which you’ve probably heard many times—AI becomes a very attractive tool. While AI has potential to provide very real and very meaningful positive impact, there are also many ways that AI can lead to negative outcomes. Of course, when you’re working with vulnerable populations, the inherent risks are just so much higher.

So there’s rightfully a lot of focus on understanding how to balance the potential with the risks of AI. I’ve been fortunate to be part of the small team behind the Safe AI Initiative. It’s a partnership between CDAC Network, the Turing Institute, and Humanitarian AI Advisory, and it’s funded by the UK Foreign Office.

Our objective with this initiative has been to help individuals and organisations get started with AI, helping humanitarian actors understand how to get the most out of what AI can do today and how to identify the relevant risks so that you can proactively implement mitigation strategies.

We’ve built it out as a toolkit with supporting guidance and documentation. I like to think of the process itself in terms of an AI journey: problem identification, use case definition, design, development, and deployment. The toolkit focuses on the key activities that happen along this journey, with corresponding easy-to-use tools such as readiness assessment, risk assessment, risk mitigation strategies, and procurement. The toolkit is in its final stages. We’re currently putting finishing touches on it and expect to release it very soon.

As impressive as the AI capabilities are, it is worth keeping in mind that it only works well in English and a relatively small number of other languages. That means the large majority of the world population sees absolutely no benefit from modern AI, which in turn further deepens existing inequities across the world. I believe we’ll never get anywhere near equitable outcomes from AI without dedicated focus on language access.

I’m proud to be part of the Roots AI Foundation, which aims to help address this challenge by expanding access to the value of AI to underrepresented languages and communities. This involves things like community-built AI that helps counter bias and ensure representation in modern AI models, preserve endangered languages, and build culturally grounded AI tools.

Over the past year, I’ve been talking with many individuals and organisations about what role we want AI to play going forward. I think we’re just seeing the beginning of the devastating consequences of the cuts to budgets and programmes that we saw starting at the beginning of last year. With a humanitarian reset well underway, I think there’s a broad realisation that the path forward involves having to do less with less. There’s simply less budget around, so we have to really be creative and effective in our thinking about how to use AI for the cases where that’s the right tool.

If we want the benefits of AI to reach more people—not just the ones who can access it today or afford it today—I think we need to do it in a way that is empowering and sustainable. What we do with Roots AI on language equity is one factor. But I think local empowerment also requires participatory AI—engaging with communities in a way that’s both meaningful and purposeful, and consulting with affected communities in a deliberate way. It also requires capacity building and skilling that empowers local communities to participate more directly in the creation of the AI systems that impact them.

There’s a wide range of technology choices that can support local empowerment. One example: you can build an AI system based on a cloud AI service. But you can also opt for a small language model, or SLM.

An SLM is a smaller version of the large language models—the typical foundation models we hear a lot about today—that are behind generative AI. An SLM can run locally on a device and can do most of what an LLM can do.

This has a few benefits:

  • Removing dependency on connectivity, because it can run locally on a device
  • Improving data security and data sovereignty, because you know exactly where it’s running and what data it’s touching
  • Reducing cost, if you don’t need to use a cloud service
  • Reducing the carbon footprint of using AI, because you don’t depend on very large data centres and GPUs—these are high compute resources
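
To make the idea concrete, here is a minimal illustrative sketch of running an SLM locally rather than calling a cloud AI service. This is not part of the Safe AI Toolkit; it assumes the Python packages transformers and torch are installed (a recent transformers version), and the model name and prompt are placeholders only, drawn from the open-model families mentioned later in this session.

    # Minimal sketch: local inference with a small language model (SLM).
    # Assumes `pip install transformers torch` (recent versions); the model
    # name below is illustrative, not a recommendation.
    from transformers import pipeline

    # Load a small instruction-tuned model onto the local machine (CPU or GPU).
    # After the initial download, inference happens on-device: no connectivity
    # is required and no data leaves the machine.
    generator = pipeline(
        "text-generation",
        model="microsoft/Phi-3-mini-4k-instruct",  # illustrative small open model
    )

    prompt = "Summarise the key risks to a rural water point during flooding, in two sentences."
    result = generator(prompt, max_new_tokens=120, do_sample=False)
    print(result[0]["generated_text"])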

So, if you’re interested in learning more about what you can do and what you should consider as you start your AI journey, check out the Safe AI Toolkit when it comes out soon, and reach out if you have any questions.

Back to you, Ka Man.

Ka Man: Thank you so much, Michael. Really interesting, because you and I spoke in September to record a podcast around humanitarian AI, after the research that I spoke of was released. I remember at the time you introduced me to some of these initiatives that you’re working on.

If I could put you on the spot and ask you a question—but don’t worry if you don’t have an answer just yet, because we could always come back to it. Since then, I mean, you talked about culturally grounded tools, localisation, and you also emphasised the importance of developing an AI policy. You said that is a key starting point for people now. So, let’s say, half a year on, nearly. Has that shifted? Has anything changed? Is there anything different you want to emphasise?

Michael: I don’t think I’ve seen a change or a shift since we spoke. That was about six months ago, so I think it’s probably more of the same—realising the new reality that we’re in, with significantly reduced budgets and necessary cuts to programmes, and working out how to react to that. I think there’s a realisation that AI has a role to play, and that it can be a powerful partner in that.

At the same time, there’s a need for standards and guidance on how to make sure that you can both understand how to leverage the potential of AI and understand its limitations. You need to design, whilst being very much aware of those limitations, so you can avoid some of the major risks.

But you’re absolutely right to point to what I said. If you do just one thing: create an AI policy for your organisation.

Ka Man: Thank you. I think Michael needs a t-shirt with that as a slogan on it: “Create an AI policy.” [laughs] That’s really helpful, thank you.

I’d now like to invite Mercyleen to take the virtual stage and share her insights with you.

Mercyleen: Thank you so much, Ka Man. Thank you, Michael. Around the world today, AI is being used to increase personal productivity, improve disaster response, and protect vulnerable communities. But this only happens when people are placed at the centre of design.

Today, I want to take you through how AI is used at WaterAid. We first started by exploring how generative AI can be of use to our staff. It started small, with personal productivity use cases—research, proposal writing, summarisation. But we are now moving towards using agentic AI for more strategic, high-value business use cases, to ensure that we are using AI to monitor processes, handle workflows, and even trigger actions.

At every stage of our implementation, we took people, platform, processes, and policies as the key building blocks. We started at the exploration stage, where it was more ad hoc. But now we’re moving to a more codified approach, and we are being intentional and actively using AI in a more governed way.

As we look at the forecast for AI over the next 12 months, we are looking at a more codified approach to the way we use AI. Our AI maturity is not yet uniform across the organisation, but we are constantly moving beyond the early explorations toward structured and purposeful experimentation that creates high value for the business.

So, what are the pillars of AI enablement? For us, they are: data governance, incubator, and workforce enablement.

Data governance: The quality of the data that you feed into AI determines the quality of the output. You have to start by ensuring that your data is reliable if you’re preparing to implement AI in your environment. Failing to act on data governance means missing the chance to scale AI effectively and unlock its full strategic value.

Incubator: We want to accelerate innovation through structured pilots and constant feedback from staff. We want to ensure that every user who has a licence constantly provides feedback so that this changes the way we address adoption.

Workforce enablement: We want to increase AI fluency among the workforce through trainings to equip people with skills and create capabilities. IT is taking a leadership role in bringing an enterprise mentality to this. This is going to take time, for sure, but choosing to start now ensures that we are setting ourselves up for success.

An encouragement to all our community members: if you’re starting now, it’s okay to start somewhere, even if it’s just acquiring 3 ChatGPT licences for piloting. We need to be honest about where we are so that we can adapt to the lessons that we have in moving forward.

Let me talk more about incubation. Incubation is a priority for us because it helps us identify what works and what doesn’t. The structured piloting that we have in place allows us to get feedback from staff using premium licences, for example, so that we can drive efficiency with AI. We constantly collect feedback: How is AI driving efficiency for you? We feed that into our model so that we are constantly improving based on the feedback we receive. We are looking at defining models based on that feedback, so that we can see where we need additional training and what would be recommended for the workforce.

This initiative also gives us insight into which types of users are using it most, and for what. This will also be useful for our licensing model. In the humanitarian sector, we have budget constraints. You don’t have money to spend on everything. So you have to be very purposeful in your approach to how you implement AI so that there is high business value and the output is relevant to users.

We are also a water organisation, and we want to ensure that we are reducing our footprint. So we focus on high-value use cases and balance innovation against water consumption. We have implemented a scoring model to help evaluate business use cases and ensure we are only prioritising high-value ones. So we are taking a very intentional and active approach to our evaluation.

We cannot do this if we don’t have the backup from our leadership team. AI is not an IT initiative. For us, AI is actually an organisational change initiative. If you forget everything today, please just remember: AI is not an IT initiative. It is an organisational change initiative.

So we want our leaders to help us model change and build trust. They need to demonstrate commitment by initiating the use of AI tools themselves. This signals a cultural readiness and reduces resistance to adoption. Change is hard, especially when you have a big workforce. So how do you tackle that? It has to start from leadership to be able to drive that change across.

We’re also relying on our leadership to simplify and remove barriers—any bureaucracy in place, any policies that need to be passed, any governance or decision-making that needs to be sped up to accelerate AI integration. We also want our leadership to engage strategic business partners. An AI initiative is not just a one-department effort. It is a whole-organisation effort. When we collaborate with business partners to identify, prioritise, and shape the highest-value AI use cases, then we can deliver high, measurable impact.

Lastly, we also want our leaders to champion upskilling and engagement. Workforce enablement is something that we do continually, and based on the feedback we receive, we continually refine it to suit where our users are. We want our leaders to encourage teams to build AI literacy. We have an AI policy guide at our workplace. We have AI trainings. We have e-learnings. We have a landing page where people can get this information. It’s only when people have the digital literacy to use AI that they can use it for the right purpose.

So, if you’re looking to start today, wondering as an organisation—small or big—where to start, look no further. I have put here a few takeaways that you can use to be ready for AI adoption.

First, you need to start small. Look at your time and resources and what works for you. You don’t have to start big. If anything, we need to continue to reduce footprints, so right-size the tools rather than starting bigger by default. That way you are also caring for the environment, even as you adopt AI.

Building readiness is what I’ll talk about next. What does that mean? Readiness from a people and skill perspective, readiness from data cleanup and quality, readiness from time and resources. It takes time before you get to a point where the whole organisation uses AI. It takes time before you learn how AI is actually impacting and creating value for your organisation. It takes time for you to evaluate the impact of AI after implementation. So those three things—you have to ready them in order to set yourself up for success when you’re adopting AI.

Integration with existing systems: It’s better to implement AI in an integrated environment, because with siloed systems it’s harder and it takes longer.

Strategy for success: What’s your strategy for success? You need to have an operating model to ensure successful implementation. For example, you may decide to start with a working group. This working group would look at where you need AI: where are the highest-value places to have AI in your processes and workflows? There’s still a human aspect to AI. That’s why you need to decide, even as you implement AI, where human oversight comes into your workflows and processes.

Security and privacy: My background is in cybersecurity, and I’m passionate about responsible and humanitarian AI. With my extensive experience in IT, I’m able to take a rounded view of how to implement AI in an ethical, inclusive, and responsible manner. So security and privacy need to be a key consideration. You need to define guidelines on the usage of AI to minimise risk. You need to consider data protection, compliance requirements, and cybersecurity risks. I believe that as policies in different countries continue to develop, we are going to see a shift towards better policies being in place to regulate AI and ensure that we have ethical, inclusive, and responsible AI in every environment.

So, what are the lessons that I want to share with you today as you look to start or advance your implementation of AI, wherever you are? My advice would be: start small. Even if it’s just 3 licences that you have, if that’s what your budget allows, start there and continue to organically grow.

Human oversight is still key. You’re looking to ensure that you maintain a broad perspective on AI. This is essential, as humanity remains a crucial component in achieving success.

Workforce training and capability development remains essential. You cannot implement something if your user base is not well trained in it. AI alone won’t solve all problems. Sometimes people need training. So invest in ongoing AI literacy, data literacy and digital skills for your workforce.

Consideration of cost and water: As a water organisation, we are passionate about water, and even as we implement AI and advance our use of agentic AI, we still consider water. So to accelerate our AI journey and enhance its maturity, we consider water and cost, and we right-size the tool for the problem we face. This means you’re not starting bigger by default—you’re starting small and continually advancing.

Cross-functional collaboration to drive impact: Bringing people together to address innovation challenges. As I mentioned earlier, AI is not an IT initiative; it’s an organisational change initiative. So this is not going to be done by IT or the tech department alone. This is going to be a drive within the organisation. The AI sector moves quickly, and learning from one another, regardless of titles or roles, is still very key.

At WaterAid, for example, the AI journey started with a steering committee, which then evolved into a diverse AI champions network. By diverse, I mean any department—from finance to fundraising. It wasn’t an IT initiative. So that way, we’re ensuring an inclusive representation for a broader range of perspectives in shaping why and how we are using AI in the organisation.

My closing remark: AI is an organisational change initiative, not an IT initiative. If you have to start today, start small. Thank you. I’ll hand it back to Ka Man.

Ka Man: Thank you so much, Mercyleen. That was such an insightful, informative sharing about your experience at WaterAid. I loved how systematic you were in charting that out, and so many valuable lessons. I think everyone will get the slides so that you can look at all of those lessons learned from Mercyleen. I think every organisation needs a Mercyleen to help drive forward responsible, humanitarian AI. So thank you so much.

Now, last but by no means least, I’d like to welcome Daniela. Over to you.

Daniela: Thank you so much, Ka Man. Yeah, we need many, many Mercyleens, I think, as well. Completely agree with that.

I spoke at the beginning about digital transformation for nonprofits, and one of the things we’ve done to help that move forward—and it is moving forward, but it is moving slowly—is to define what digital transformation readiness actually means and how we get organisations ready for that. How do we get their skills ready for that? We’ve built a number of frameworks defining what that means in order to help organisations assess where they are on the journey and what gaps they might have to close. That also helps us to create a sector benchmark where we can see the trajectory of how things are moving along.

We’ve done the same now for AI. We’ve created a Digital Nonprofit AI readiness framework—DNAI, as I call it. Looking at the categories we defined here, I think it summarises really nicely what everyone else on the panel has already brought up in terms of what the important factors actually are.

It’s about responsible AI. Responsible AI means many things. It needs to be ethical, it needs to be equitable, it needs to be safe. All these things play a role.

Technology and knowing how to use it also play a role, but as everyone else has said, it’s not about the technology only. It is about the organisation looking at their AI strategy and what their real opportunity is to use it. It is about data. A few people have mentioned that as well. It is about having the resources—the people, the money—and wanting to invest those resources as well. And, of course, it is about skilling, but not just skilling and training—really thinking about the change, thinking about what your business model might look like, and what your processes might look like, when you use AI.

Based on this framework, we’ve created an initial survey, and I can show what the state of readiness is based on that first wave of surveys. You see here, across those different categories, pretty much landing in the middle, which I think is already quite good and probably surprisingly good, actually. And it will be really interesting to see, as we keep doing these, whether that moves or not, as people also increase their learning experience and capability around AI.

But you also see the subtle differences here. People think they are more ready in the area of technology—yes, the technology is there. Data readiness also scored quite high. In comparison, responsible AI is more mixed: many organisations we know have written their policies, but many are not yet at that point, or have not finished, or have not started yet. So there are subtle differences in where everyone is.

If we look at where people felt they were most advanced, that was really in the areas of tech, data, and responsible AI. But responsible AI also came out as one of the areas where organisations felt they were least advanced. So there is a bit of a gap here between organisations that are more mature and organisations that are not very mature yet at all.

There’s a lot more data in there. I’ve put the QR code there for the full paper, so people can download it and look at the results.

But based on that and based on the many conversations we’ve had with our members and with our other colleagues in the sector, these are really the key things we think organisations should do:

Check your AI readiness is the first thing. Of course, look across these different categories to see where you stand and where your gaps might be.

Adopt a responsible AI framework—that’s a slightly different way of saying: if you do one thing, you have to make a policy. I fully agree here with Michael. Whether you like it or not, whether your staff like it or not, everyone on your staff who uses a device will come into contact with AI, either by choice, or because the tools you’re using have AI built in, or because you’re implementing AI tools. So having that policy is important. Looking at your cybersecurity with regard to AI is important. Having some overall ethical guardrails as well, in terms of where you do and where you do not want to use AI as an organisation, is important.

Build capacity everywhere. Again, for that very reason: everyone will come into contact with AI, or choose to use it. So knowing not just how to use the tools, but also understanding the limitations and risks—it’s really, really important.

Define your AI purpose, and Mercyleen, you talked a lot about that. It’s really about that cross-functional, organisational, strategic conversation about: okay, where do you really want to go with this? Where does it contribute to what you’re trying to do as an organisation and where does it not? What problems do you solve and where should you not use AI?

Embed measures. As a sector, we’re very familiar with words like behaviour change, impact measurement, incentives. We’re terrible at doing it in general for digital tools, and so far we’re also not great at doing it for AI tools. Part of that is a reflection of where many organisations stand with their use of AI, but it’s really, really important to apply those tools to measure whether you are achieving what you want to achieve with AI.

Now, part of that achieving is, of course, thinking about how you bring it into the organisation. The point about training and change management and how it is embedded in your organisation’s processes is important. But really do understand, especially as you’re thinking about scaling, what the benefits will be versus the total cost of ownership of the tools you want to scale up. And don’t be afraid to stop projects that don’t give you value.

We’ve talked about data. Don’t lose the focus on that. If you haven’t got your data governance and data management in place, and you want to start using AI, and particularly AI agents as well, then the time is now to look after that.

And last but not least, be part of the sector collaboration. One thing we very clearly see and hear from our members is that a lot of the value they see is in those peer-to-peer discussions and having those conversations. It’s important within organisations to have that exchange, and it’s also important and very, very helpful for everyone to have those conversations between organisations so they can learn from each other and hopefully not make the same mistakes in the learning journey.

So really, I think the strength of coming together in that way will be to raise the overall level of readiness but also to make sure that as a sector, we make our voices heard in terms of AI governance, AI legislation, and equitable AI solutions. That is really, really close to my heart.

I think we’re already at the resources. Yes, so there are two or three things I wanted to mention. One is that all the resources we have developed over the many years NetHope has looked at AI—since 2017, I think—we’ve put together in something we call the AI Lighthouse, in the hope that it will be a bit of a guiding light for nonprofits that are going on the AI journey. That’s on our website, so you can scan the QR code and go right there.

We’re currently doing some work with our friends from the UK Humanitarian Innovation Hub to scope what a new version, a better version, of that lighthouse could look like. So what are really the needs of the sector?

And the other one is about learning, because that’s the starting point for many. We have an AI for Nonprofits curriculum right now available on our Digital Leadership Institute e-learning platform. It’s in English. There’s a Spanish version coming soon, and there will be a further curriculum, an expanded one, that’s also going to become available in a number of languages. So please do watch that space. Initial courses you can do now, more to come.

And last but not least, I mentioned peer learning and exchange. Esther talked at the beginning about coming to NetHope. Now you can come to NetHope, and if you do, there are a number of peer learning and exchange options there as well that you can participate in.

And I think that was me, Ka Man, right?

Ka Man: That’s a great, great point to end on. That was such an informative presentation, sharing all of your learning from across the whole NetHope network, so thank you so much for sharing that.

What I found really interesting across all the presentations is that, although we met to discuss this session today, the panellists didn’t actually compare notes about what we were going to share. And there was so much alignment in what we were saying. We’re all coming at it from our own organisational perspectives, but yeah, the key messages were all coming through. So I found that reassuring because there’s that coordinated thought and alignment, so thank you so much.

So we have the next 25 minutes, and we’re going to whizz through your Q&A. Thank you so much to those of you who have submitted questions through the Q&A. We’re going to work through them, and I’m going to put questions to some of the panellists individually and then some for just general discussion. We’ll get through as many as we can in the time that we have. But some of the more specific ones we may take away with us, prepare some quick pointers or written responses, and get that to you. We’ll also roll the questions forward into other content initiatives.

So I’d like to start off with a question for Mercyleen. I found it very interesting how you talked about WaterAid being a water charity and how that is a key consideration as you roll out AI. There are actually a number of questions around environmental impact, so I wonder if you could just speak to this generally. You already talked in your presentation about your scoring model and using right-sized tools, for example. But I wondered if there’s anything in addition that you’d like to share.

Mercyleen: Amazing, interesting question, for sure. We’re dealing with climate action—it’s still a very hot topic, COP26. And of course, there’s AI, which is becoming a real force, and we cannot sugarcoat things: AI has a real climate footprint, from energy-intensive model training and data centres to increased demand for compute and water. If you need higher speeds, that means higher processing speeds, which means more computing power.

So we need to first acknowledge that there is that environmental impact so that we analyse how we can address it.

If we’re looking to reduce AI’s climate footprint—which is part of what we call responsible AI—then we need to look at energy-efficient models. I mentioned earlier about right-sizing models: not going bigger by default, but ensuring that you’re choosing the right size for your use case.

And then we also need to prioritise renewable-powered data centres so that we are reducing the impact of the energy consumed at the data centres.

Lastly, instead of retraining models from scratch, I would advise that we reuse and fine-tune existing models so that we can reduce that footprint. And of course, through a humanitarian lens, we operate in low-resource settings, so we cannot afford to be wasteful with AI. That’s why, wherever we can reuse what we already have, we should do so. And where we need to scale down our AI tools, we have to do that.

Thank you.

Ka Man: Thank you so much. I see a lot of appreciation for your insights there, Mercyleen, in the chat. So, staying with that idea and that concept of right-sized tools, there are a couple of questions around small language models that I’d like to put to Michael. Do you know of any organisations who are using SLMs? And could you shed any insights on what you know from your discussions with people in the sector?

Michael: I’ve seen a lot of interest in SLMs in response to some of the reasons I mentioned before, including addressing the challenge around connectivity, data sovereignty, data security, and also, obviously, as Mercyleen points out, the environmental footprint, the carbon footprint.

I don’t have specific examples where people have actually implemented and deployed them. I think most of the ones I’ve been talking with so far are exploring, but SLMs do provide a lot of the key capabilities that you would want from an AI solution. There’s been a lot of development and improvement in research, as well as in open-source SLMs.

Compared with where we were, say, a year or two ago, the quality and capabilities of SLMs are significantly better. There are a lot of options. There are open-source models, like Llama, Mistral, or Phi, so there are a few different options to choose from.

Going back to the environmental factor that Mercyleen talked about, I absolutely agree with her point on fit-for-purpose models. Even if you use a cloud service, it’s important to think about what kind of model is relevant for which use case.

You can substantially reduce the carbon footprint by using the model you have access to a little more diligently. You don’t necessarily need to hit a cloud AI service for every single question. You don’t need a large language model to respond to a question about office hours, for example. So, in the design, you also have an opportunity to reduce cost by not hitting the cloud service too often.
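
As an illustration of this design point, here is a small hypothetical sketch of routing simple, low-stakes questions to a local answer and reserving the cloud LLM for harder requests. The keyword heuristic and both handler functions are illustrative placeholders, not part of any toolkit mentioned in this session.

    # Hypothetical sketch of "right-sizing" at design time: answer simple,
    # low-stakes questions locally and only call the cloud LLM when it adds value.
    SIMPLE_TOPICS = {"office hours", "opening times", "contact", "address"}

    def answer_locally(question: str) -> str:
        # Placeholder: a lookup table, FAQ index, or on-device SLM.
        return f"[local answer] {question}"

    def answer_with_cloud_llm(question: str) -> str:
        # Placeholder: a call to whichever cloud AI service the organisation uses.
        return f"[cloud answer] {question}"

    def route(question: str) -> str:
        q = question.lower()
        if any(topic in q for topic in SIMPLE_TOPICS):
            return answer_locally(q)        # cheap, low-carbon, data stays local
        return answer_with_cloud_llm(q)     # reserve heavy compute for hard tasks

    print(route("What are your office hours?"))
    print(route("Draft a two-page flood response concept note for the district."))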

But other than that, SLMs are a growing and interesting approach, and one that I think we’re going to see significantly more of.

Ka Man: That’s really interesting, thank you so much, Michael. It’s really interesting that we’re not hearing much of that discourse around SLMs, so I’m quite keen to see whether this is something that humanitarians should be exploring with as much enthusiasm as agentic AI. So thank you very much for sharing that.

I’ll move to a question from Elena that’s for Daniela, around leadership. Elena asks: do you think senior managers are transparent enough to have open conversations about the success or failure of AI if they were using it in the wrong ways—for instance, sending strategic orders by email that did not include real data?

So, do you have any thoughts on that, Daniela?

Daniela: I do. I come from the private sector originally, and something that you see in the private sector very much is everyone running around and saying how great they are and how successful their projects were, and so on. Going into the nonprofit sector, I found that to be quite different, and that people were generally a lot more open in also sharing what hasn’t worked.

Now, I think the trick is to find the mechanisms where people feel safe to have these conversations. There’s a difference between coming to a webinar like this and saying, oh, we did this thing and it went really, really wrong and created harm or monetary damage or whatever, and being in a room, virtually or physically, talking with peers about these things.

So what we do is we have these different convening mechanisms. We have the working group, of course, which is where the AI-interested and expert people come together. We have our IT leaders that come together in regional chapters, and in these relatively closed rooms, that’s where those conversations happen and happen quite freely. Because it encourages people to just be open with their peers, and what we take out of that is the general learnings, but not “organisation X has had this, and organisation Y has done this wrong.”

So I think it’s about creating those mechanisms where people feel safe to have these conversations, and then also have a mechanism to extract the learnings without putting anyone or any organisation on the spot, particularly.

Ka Man: Thank you, Daniela, that’s great.

I have a question from Jaya that I’m going to put to Esther. Jaya asks: are there innovative ways for early-career professionals coming from a technical AI background to get involved in humanitarian AI initiatives? It could be research, pilots, or discussions. He says that he and many of his peers are eager to work on such purposeful projects, but given the obvious cutbacks, there are fewer routes to apply these skills for good.

So I love this question, and I’d love to hear, Esther, if you’ve got any perspectives to share.

Esther: OK, so that’s a really interesting question—people who have technical expertise in relation to AI, and how they can use their skills in the sector as volunteers. Yeah, I would say that a lot of volunteer organisations these days are looking for more highly skilled people than they did in the past. They have very specific types of expertise that they’re looking for.

So it might be worth looking at some of those big organisations that deploy expert volunteers, such as VSO, which is a UK-based one, or Team Rubicon, which is another example of that.

I also think just kind of networking in spaces such as these, or the spaces that NetHope has as well. There are lots of communities that bring together people that are interested in tech, which give you a chance to meet people that are working in the nonprofit sector who might be able to find interesting routes for you to get involved.

So I think those would be some of the main ways from my perspective.

Ka Man: Thank you, Esther. Just to add something from my personal experience: earlier this month, as part of the HLA team, we were fortunate to participate in the Amazon Web Services Breaking Barriers Challenge 2026 in London, which actually took place across different cities simultaneously. This is a social impact AI challenge using generative AI. We were using it to explore prototypes to uplift the existing Kaya system.

And it was absolutely fascinating as a participant, because there were interdisciplinary teams with people representing humanitarian organisations and charities, like ourselves, together with technical specialists, generally drawn from the for-profit space—financial services, retail, and so on. Through that experience, I could see such enthusiasm and resonance. They really connected with the mission, and I’ve stayed connected with the people we were working with, and they’re looking for ways they can continue to be involved and contribute. So this, for me, was very energising, and I learned from them as well.

So maybe explore different ways to partner and collaborate, even informally or formally, through organised hackathons, and so on. That’s just a sharing of experience from my side.

So I’ll move on to the next question, which comes from Samat. Obviously, here we’re talking about AI journeys, and there will be people—like some of our survey respondents—who’ve made an intentional choice not to pursue AI adoption. Samat asks: what are the dangers of not using or embracing AI as an organisation? And I think this is an important question, because there are obviously going to be people who’ve made that choice.

This is an open question for any panellist to jump in and share any thoughts. So, what are the dangers of not using or embracing AI?

Mercyleen: I could jump in.

Daniela: Mercyleen, you start.

Mercyleen: Yeah, interesting question. What are the challenges of not embracing AI? You definitely will be left behind. And I say this because every person, every individual, every organisation works for a mission, whether it’s responding to disasters, protecting vulnerable communities, or climate action. AI will deliver climate resilience, for example. AI will increase the speed at which you’re able to respond to vulnerable communities and to disasters. AI will increase the rate at which you’re able to make decisions.

So those are the repercussions of not embracing AI. Can you imagine being in a conversation where people are talking about what they’ve been able to do with AI, and you cannot talk about something similar? So there is a positive impact of AI in the work we do—in our workflows, in decision-making, in advancing our work as individuals and as organisations.

Daniela: Yeah, maybe just to add the other aspect I was thinking of, which, as I said earlier: you may as an organisation decide you’re not interested in AI and you don’t want to pursue it. But you will have people in your organisation who are doing things with AI, because ChatGPT and others are so easy to access. You will have people who are experimenting with it. You will have people who are putting your data into that chat window. You will have people who are taking results that they don’t check and using them for something.

So the very first thing, again, to start with is to say: OK, if you’re using tools, and people will be using them, know what you can do and know what you can’t do with them, and how to use them safely. So for me, the very lowest level is that, and to make sure you are protecting your organisation from leaking information that you don’t want to leak and things like that. And then the other thing is exactly what Mercyleen said, which is you’re missing out on a chance to work better as an organisation.

Ka Man: Thanks, Daniela. Anyone else want to chip in before I move to the next question?

Well, great, thank you. So, kind of linked to this, for the people who are on the more sceptical side of the spectrum: someone asks whether AI will negatively affect communities should it translate into a smaller human workforce. This is obviously quite a difficult question to answer definitively. It links back to Esther’s crystal ball right at the start of this session and what we were talking about with AI agents on the organogram, for example. So it’s hard to give a definitive answer, but I wondered if anyone wants to share some thoughts on that.

Michael: I can kick us off. I do think it’s an important question, and it’s important to keep in mind that, at the end of the day, AI is a tool. AI is a suite of technologies. But it is and should be used to empower human ingenuity and human expertise. So I think it should be used diligently so that yes, we don’t risk ending up in that situation.

One factor I like to emphasise and highlight is the notion of cost of error. Just keeping in mind that AI models will always make mistakes. But not all errors are equal. So understanding how an error in AI output can materialise as tangible, real-world consequences gives you a better chance of mitigating the risks.

The notion of cost of error helps clarify the level of human oversight that’s required for a given use case. So for your specific use case or scenario, if the cost of error is low—let’s say you use generative AI to summarise a PDF file—then you can typically just go ahead and execute on the AI output.

Whereas if the cost of error is high, the AI output should only ever be taken as a recommendation for a human expert to confirm. Understanding the consequences of acting on AI output that may be inaccurate helps you design the implementation and understand which level of human oversight is required.

But you’re right to raise the question. I think if we just go ahead and blindly trust AI always, then yes, it certainly has the risk of removing some of the human aspect of the humanitarian work.

Ka Man: Thank you. Anyone else? Mercyleen, did you want to come in?

Mercyleen: Yeah, sure, thanks, Michael. In addition to what Michael mentioned, I wanted to say that AI is definitely not coming to replace humans, but it is reshaping the way we work. At the moment, new roles are being created. Of course, some roles have been eliminated by AI. But new roles have also come up, such as an AI oversight advisor for human-in-the-loop kinds of systems.

From the humanitarian perspective, if, for example, you need to respond to a disaster, AI can forecast or provide early warnings, but the final decision is actually human. The engagement of communities on the ground is human.

I would give the analogy of the calculator. When the calculator was invented, people felt that it was coming to replace accountants and financial specialists. But we still have the calculator today: it just makes work easier for people in the finance industry. It didn’t come to replace the finance specialist.

So similarly, we have to embrace AI and look at it through the lens of: it’s coming to reshape the way we work. It’s coming to improve what we have in place. And with that, it poses a challenge to us to continually evolve our roles. It’s not coming to replace; it’s just reshaping what we have.

Thank you.

Ka Man: Thank you very much, Mercyleen. Next, I’ll go to a question from Ibrahim. Ibrahim’s a regular member of our HLA community. He’s asked a question that’s directed to Michael, but I’d like to open it out to everyone as well. Ibrahim would like to hear opinions on having a dedicated AI-powered humanitarian platform as a reference for the sector. So, it’s a nice idea. What do you think, Michael, first of all?

Michael: Yeah, I’d love to see that. Obviously, going back to the humanitarian reset and the focus on reducing waste and duplication across the sector: if there were some kind of centralised platform, accessible to the community at large, for organisations that are too small or don’t have the budget to access some of these capabilities as easily on their own, I think there could be a lot of value in that.

And obviously, as individual organisations develop AI implementations and create assets that can be shared across the community, that could also be a place where lessons learned, case studies, models and other technology components could be shared. So, yeah, I think there’s value in that.

Daniela: I’m going to chime in there, if I may. Yes, I totally agree there’s value in that, and it’s something we’re trying to do with the AI Lighthouse, which is really just a small nucleus of it, and we’re looking at how that could expand. It’s something that we do at NetHope, right, to bring people together around these joint efforts. I will say the challenge very often is not that people don’t want to do it. People in this sector are always very keen to share and contribute.

It’s sustaining these efforts, right? Because you can make an effort and put something together, and you might get some funding for it, but then you want to run it and expand it for as long as it’s needed, basically. And finding the right financial model to keep that going, keeping the resources going, the people that are looking after it, keeping it up to date, and so on, that’s really the trick.

So that’s why we don’t see more of what we all think would be a good idea. But we’re not giving up yet, so hopefully, with all the impetus and all the funder interest behind AI at the moment, maybe we can get something going.

Ka Man: Thank you, Daniela.

So we have time for one last question. Thank you so much to everyone who submitted a question. The panellists and I will review all of the questions, and if there’s something specific that we can respond to, we’ll share responses in the follow-up email, so hopefully you won’t be disappointed if your question isn’t directly addressed in this session today.

It’s actually quite a big question, quite ambitious for the closing moments of this webinar, but it’s around AI readiness, because that’s obviously a theme that comes through across the board in everyone’s presentations. So Miriam asks about the key steps towards the adoption of AI. The question is: should we first define AI’s purpose before anything else? Because maybe AI should only be deployed for certain specific activities which are low risk, versus high risk, for example, and this will inform the next steps, such as AI readiness. So would anyone like to come in with some thoughts on that?

Mercyleen: Ka Man, if you can read the question one more time?

Ka Man: Yeah, of course. It’s around the key steps towards the adoption of AI, and the question is: should we first define AI’s purpose before anything else? Because maybe AI should only be deployed for certain specific activities which are low risk, versus high risk, for example.

Mercyleen: Yeah, really good question, defining the AI purpose first. I talked about an operating model when you’re thinking about AI. An operating model encompasses: why do you need AI? You might actually not need it, you know?

So you ask yourself: why do you need it? Then: what do you plan to implement, how, when, and with whom? Those five prompt questions allow you to look at AI categorically and objectively, and to see what value it brings. As I mentioned earlier, we need to lean towards high-value use cases.

So you’re not just implementing AI for the sake of implementing it; you’re implementing it to support a specific high-value business case. So yes, I agree the purpose of AI should be looked at as part of the operating model. Ask yourself the why, the what, the how, the when, and the with whom.

Michael: I’d add one point. Defining the AI purpose first, yes, I think you should do that, but you should do that at an organisational level, and you should do it as an AI policy that sets out guidelines for the organisation: which AI capabilities and tools are approved for which use cases, using which kinds of data, with an exceptions process if you want to get something added.

But when it comes down to the specific use case, you should not start with the technology. You should absolutely start with the humanitarian need that you hope to address, and then start to think about what kind of technology, if any, can help you achieve that. AI may be the right tool in the toolbox, but you should also be comfortable saying and concluding that AI is not right for this type of solution, this type of implementation.

Daniela: And I’m going to add to what you both said that it’s a journey, and that journey isn’t necessarily linear, right? So it might well be that you start with a set of initial guardrails, a policy, and testing tools internally with low risk, to summarise emails or, you know, very simple tasks. As you use the technology, you also start learning about what else you could do with it, and as you keep talking about that and keep expanding, that then also helps form your strategy a bit more: OK, what can the technology actually do for us, and therefore, what are the good use cases going to be? What is the business model around that going to be?

So if you don’t have your fully fledged strategy at the beginning, that’s fine, but it’s important to be very intentional and, as Michael said, for each use case, think about: OK, what is this good for? Is AI the right solution for that? And then check: do we actually get out of it what we hope to get out of it? And, of course, manage very carefully the risk of letting it loose on communities, on the people you support.

Ka Man: Thank you so much, Daniela, and that is a brilliant place to end the Q&A. Thank you so much to everyone for your wonderful questions—really pertinent, great questions—and of course, thank you to our incredible panellists for sharing all your rich perspectives and experiences. It’s really reassuring, actually, to hear all of your takes on this.

So just in the final minute: there are lots of resources from the HLA, from our research last year, so I’ll include the links to that in our follow-up email.

Save the date: the next session will be on the 27th of February, held together in partnership with Data Friendly Space, where we’ll be discussing, in a format similar to this, key insights from our Pulse survey. Details will be announced soon.

And then NetHope and the HLA will be at the Humanitarian Networks and Partnerships Week, online and in person. These are a few of the sessions that we’ll be hosting; they’re listed on the HNPW website, where you’ll be able to register. It’s open to anyone to join, in person in Geneva and online.

There are a couple of other events. Next week I’ll be taking part in a Devex Careers Briefing on AI skills for development professionals, together with Ali Al Mokdad, who’s an independent humanitarian leader. And NetHope will be at the ICT4D conference in Nairobi in May.

So thank you very much for joining us. When you close the webinar window, there’ll be a quick survey with five questions. If you can give us some feedback, tell us what you liked and what we could improve. This session will be uploaded to the HLA YouTube channel imminently, and I’ll send the link in the post-event email landing in your inbox within 24 hours.

You’ll also be able to claim an HPass digital badge, so look out for a separate email next week. If you haven’t taken the Pulse survey, please do that before Saturday. I’d be really grateful for your engagement with that.

So, all that’s left for me to say is thank you once again to everyone for joining us for what has been an absolutely fantastic session, and thank you to our incredible panellists as well. So, thank you. And I’ll now bring this session to a close. Thank you very much.


Session description

  • Gain insights on sectoral humanitarian AI developments and practitioner experiences of real-world applications.
  • Learn about the latest developments from NetHope on AI skilling initiatives and available resources for humanitarian organisations.
  • Hear an update on the next phase of the HLA’s research with Data Friendly Space into the use of AI in the humanitarian sector – and how you can get involved.

This is an opportunity to learn from shared experience and contribute to building a more informed, responsible approach to AI in humanitarian contexts.


Who this session is for

This session will provide valuable insights to support humanitarians navigating AI adoption. The discussion is aimed at practitioners of all levels – no technical or prior experience in AI is needed. The discussion will also be of interest to technologists, researchers, donors and government stakeholders who would like to gain insights into the humanitarian AI landscape.

Humanitarian Leadership Academy and Data Friendly Space: 2025 research and supporting content, including podcasts

Resources from NetHope


SAFE AI Project


Events

Humanitarian Networks and Partnerships Week (HNPW)

  • Bridging digital divides: centring local leadership in humanitarian AI development (remote). 3 March, 11:00-12:00 UTC+1
  • The State of Learning and Development in the Nonprofit Sector (remote). 5 March, 14:00-15:30 UTC+1
  • From Insight to Action: Applying AI Research in Humanitarian Practice (hybrid: Geneva/online). 10 March, 09:00-10:30 UTC+1

    Visit the HNPW website to view the full event programme and to register for sessions.

ICT4D Conference, Nairobi

AWS Breaking Barriers Challenge 2026

About the speakers

Mercyleen Tanui



Mercyleen Tanui is a seasoned Global IT Operations Manager at WaterAid, with extensive experience leading enterprise-scale technology environments across distributed and fast-growing organizations. She is known for driving operational excellence, building resilient systems, and championing human-centered innovation.

With her experience across various areas of IT, her specialization in cybersecurity and a strong focus on Responsible and Humanitarian AI, she offers a unique perspective on the AI adoption framework. In her speaking engagements, she has highlighted the need for comprehensive assessment of potential impacts, effective mitigation of the risks associated with the use of AI in the workplace, and the incorporation of privacy values within AI implementations.

Mercyleen has advised cross-functional teams on leveraging emerging technologies to improve service delivery, crisis response, and digital equity. Her work sits at the intersection of technology, impact, and global operations, with a passion for ensuring AI solutions remain ethical, inclusive, and sustainable. In 2026, she continues to be a thought leader on the evolving AI landscape – sharing lessons learned, key trends, and transformative opportunities that can empower communities and organizations worldwide.

Michael Tjalve



Michael Tjalve brings more than two decades of experience with AI, from applied science and research to tech sector AI development, most recently serving as Chief AI Architect at Microsoft Philanthropies where he helped humanitarian organizations leverage AI to amplify their impact. In 2024, he left the tech sector to establish Humanitarian AI Advisory, dedicated to helping humanitarian organizations and stakeholders understand how to harness the potential of AI while navigating its pitfalls.

Michael holds a PhD in Artificial Intelligence from University College London and is an Assistant Professor at the University of Washington, where he teaches AI in the humanitarian sector. Michael serves as Board Chair and technology advisor for Spreeha Foundation, working to improve healthcare and education in underserved communities in Bangladesh. Michael is AI Advisor to the UN on humanitarian affairs, where he works with OCHA on AI strategy and on providing guidance on the safe and effective use of AI for humanitarian action. He is also co-lead of the SAFE AI initiative, which aims to promote the safe and responsible use of AI in humanitarian action. Michael recently co-founded the RootsAI Foundation, a nonprofit dedicated to bringing the value of modern AI to languages and communities that don’t have easy access to it today, and to improving representation in AI models by building culturally grounded AI tools.

Daniela Weber



Daniela Weber is the Director of NetHope’s Center for the Digital Nonprofit, where she researches emerging technologies and trends and their relevance for the nonprofit sector, and supports nonprofits in their digital transformation through assessments, tools, and consultations. She leads NetHope’s AI program, which looks to enable nonprofits to utilize AI to amplify their efforts to support vulnerable communities in the face of multiple crises, and to make sure they can do so in a safe, ethical, and equitable way.

Daniela has over 30 years of experience in leadership roles in IT and Consulting, delivering digital transformation for organizations across a variety of sectors including consumer goods, pharmaceuticals, and hospitality. She moved into the nonprofit sector in 2015 and now works for NetHope, an organization that convenes the world’s leading international NGOs and technology companies, and some of the world’s most sophisticated users of digital for human, social, and environmental good.

Ka Man Parkinson



Ka Man Parkinson is Communications and Marketing Lead at the Humanitarian Leadership Academy. She has 20 years’ professional experience in communications and marketing across the nonprofit sector. Ka Man joined the HLA in 2022 and now leads on global engagement and community building as part of the HLA’s convening strategy. She takes an interdisciplinary, people-centred approach to her work, blending multimedia campaigns with learning and research initiatives. In May 2025, she initiated and co-led the world’s first study into global humanitarian AI adoption together with Data Friendly Space, reaching 2.5k practitioners in 144 countries. Ka Man produces the HLA’s Fresh Humanitarian Perspectives podcast and leads the HLA Webinar Series.

Esther Grieder



Esther Grieder is a community engagement and partnerships expert with over two decades’ experience in the humanitarian and nonprofit sectors. As Director of Membership Engagement at NetHope, she supports a global network of organisations driving impact through collaboration, collective action and smart use of technology. Previously, at the Humanitarian Leadership Academy, she led strategic partnerships and global community initiatives, and developed sector-wide platforms and services. Esther is passionate about building inclusive, purpose-driven communities.


About NetHope

NetHope, a consortium of over 60 leading global nonprofits, unites with technology companies and funding partners to design, fund, implement, adapt, and scale innovative approaches to solve development, humanitarian, and conservation challenges. Together, the NetHope community strives to transform the world, building a platform of hope for those who receive aid and those who deliver it.

About the HLA Webinar Series

The HLA Webinar Series is an online initiative designed to connect, inform and inspire humanitarians from around the world. We promote information sharing and knowledge exchange on topical issues facing the sector.

Through these regular free online sessions, we strive to bring you fresh and engaging insights from diverse speakers ranging from seasoned leaders to more recent entrants to the sector.
