00:00: Chapter 1: Introduction
[Ka Man, voiceover]: Welcome to Fresh Humanitarian Perspectives, the podcast brought to you by the Humanitarian Leadership Academy.
[Music changes]
[Ka Man, voiceover]: Global expert voices on humanitarian artificial intelligence.
I’m Ka Man Parkinson, Communications and Marketing Lead at the Humanitarian Leadership Academy and co-lead of our report released in August: ‘How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential’ produced in partnership with Data Friendly Space.
In this new six-part podcast series, we’re exploring expert views on the research itself and charting possible pathways forward together.
[Music changes]
[Meheret, voiceover]: Technical skills without critical literacy are really dangerous. So you can train staff to use AI in a short period of time, but without the ability to evaluate it, spot ethical red flags, or recognise cultural misalignment, even technically competent teams can cause harm.
[Ka Man, voiceover]: In this final instalment of our humanitarian AI miniseries, we explore AI training and literacy with Meheret Takele Mandefro, a Business Analyst at NetHope, originally from Ethiopia and now based in the Netherlands.
In new AI readiness research led by Meheret, only around 9% of nonprofit organisations report being fully ready for AI adoption. Yet individual staff are already racing ahead, experimenting with AI tools every day. These findings align exactly with our own August 2025 research into how humanitarians are using AI, and we’re keen to deepen our shared understanding of this picture.
Guest hosted by Madigan Johnson, our research co-lead from Data Friendly Space, this conversation goes beyond technical skills to highlight the human competencies that make a difference: critical thinking, cultural intelligence, and the ability to ask ‘should we?’ before ‘can we?’ when it comes to AI deployment.
From peer learning to power dynamics, Meheret brings lived experience and practical guidance for organisations navigating their AI literacy journey. Enjoy this insightful conversation with Meheret and Madigan.
02:33: Chapter 2: From Ethiopia to Netherlands: Meheret’s journey into humanitarian AI and her work at NetHope
Madigan: Hi, everyone. I am really excited to be here as your guest host today on Fresh Humanitarian Perspectives. Today, I am with Meheret Takele Mandefro, a business analyst at NetHope, who is exploring a critical dimension of humanitarian AI adoption: literacy and training.
Meheret, thank you so much for being a guest on our podcast. Could you please introduce yourself and share your journey into the world of humanitarian AI, and what drew you to this focus, and this intersection of technology and humanitarian work?
Meheret: Sure, thank you, Madigan, and thank you for having me on this podcast. So, my name is Meheret Takele Mandefro, and I serve as a business analyst at the NetHope Center for the Digital Nonprofit. To explain my role at NetHope, first I would like to say a little bit about what we do at the Center for the Digital Nonprofit, because that will make clearer how my role fits in.
So our work is built around three core pillars. One is research, so we produce sector research, briefings, articles, case studies, and benchmarks to help nonprofits understand and respond to emerging digital opportunities. The second one is providing services and tools. So, based on the research that we do, we provide consultations, digital strategy support, assessment tools, toolkits, and standards. And the last building block is advocacy and collective action. So, at this pillar, we aim to convene working groups, including those focused on AI, co-create sector guidelines, and foster collaboration across the nonprofit community.
So, having that in mind, my role spans across these three pillars. I analyse technological trends and translate them into actionable insights, developing resources such as reports, briefings, toolkits, and case studies that shape how nonprofits adapt technology. I also manage the centre’s database, ensuring our information is organised, current, and accessible, so this is essential for delivering effective tools and informed advice.
In addition, I co-lead the NetHope AI Working Group and its subgroups, so this is a collaborative space where our community learns, shares experience, and co-creates AI resources. I facilitate discussions, invite guest speakers, and help produce practical guides for the nonprofits exploring AI. So, in short, my work connects research, tools, and advocacy to empower nonprofits with the knowledge and resources they need to make informed decisions and accelerate their digital transformation.
My journey into humanitarian AI started in Ethiopia during my undergraduate studies at Mekelle University. During a student trip to a rural area, I saw how students struggled to access reading materials. Textbooks were very scarce and often shared among several students. Having grown up in Addis Ababa, the capital city of Ethiopia, with better access to reading resources, I felt this deeply. So, for my undergraduate project, nothing mattered more than addressing this issue. Together with my project team, we developed a digital library system using Greenstone Digital Library software, and we submitted it to our Information Science Department, so that was my first experience using technology to promote equity and access in education.
After that, I graduated and joined Mekelle University as a lecturer. There I coordinated and taught different courses, like database administration, programming, system administration, and others. But while I was working there in 2020, a conflict erupted in northern Ethiopia with Mekelle, where I lived and worked, at its epicentre. I remember it like it was yesterday: one night, the entire city went dark. No phone, no internet, no power, and complete isolation from the rest of the country. That moment taught me how fragile digital infrastructure is, and how vulnerable communities become during crisis. It sparked a question that has guided my career ever since. At that time, I asked myself: what if I could use AI to predict conflict and protect communities before disaster strikes?
Driven by that question, I pursued my master’s degree at Hawassa University in Ethiopia, and I conducted my thesis on conflict analytics using over 20 years of data and predictive modelling to anticipate violent conflicts in Ethiopia that can enable early interventions in this kind of situation. After a year, I joined NetHope, where I bring this vision to life by connecting my technical expertise with humanitarian needs. I have worked on projects such as developing generative AI guidelines for nonprofits, researching AI applications across diverse use cases, and developing an AI readiness benchmark to help organisations assess their preparedness. So this work allows me to combine my passion for technology with a commitment to strengthening humanitarian impact.
Madigan: That’s such a powerful story, and thank you so much for sharing it with us and with our audience. From your first experience when you went to the community where they didn’t have access to books, and then creating these solutions, and now co-creating solutions at NetHope for AI, it’s really an amazing thing to see all the work that you’ve been able to accomplish. I think it really gives you this unique vantage point that I sometimes feel like we’re also missing in this sector, and specifically in this technology AI space.
09:50: Chapter 3: Closing capacity gaps: the current picture of humanitarian AI readiness
Madigan: So given that NetHope’s role is connecting nonprofits with technology solutions, what patterns have you observed in how humanitarian organisations are approaching AI capacity strengthening? What are they doing well? What might they be lacking? I would be really curious to hear from you in terms of what NetHope is doing well, maybe what other organisations are doing well as well.
Meheret: Yeah, the most visible pattern that I have observed is there is a gap between individual experimentation and organisational readiness. On the one hand, staff are moving fast. Many humanitarian professionals are already experimenting with generative AI in their daily work. They’re using it for translating, drafting reports, analysing data. They’re really curious, they’re trying things, they are seeing value in their day-to-day tasks. But on the other hand, only about 8-9% of organisations report being fully ready for systematic adoption of AI.
So, while individuals are experimenting with these tools, organisations are lagging behind in formalising the governance, providing trainings and infrastructure needed to support that innovation safely. Staff are experimenting without guardrails, without clear governance frameworks, without strong data privacy protocols, without structured capacity building. I think that is the paradox where individuals are proving the potential of AI every day, but organisations haven’t yet caught up to provide the systems and the safeguards that make this innovation sustainable. I believe closing that gap is where, really, capacity strengthening has to happen.
Madigan: Yeah, I mean, that gap that you mentioned between individual experimentation and organisational readiness, I think that's something that's been really resonating with what we've been hearing throughout this podcast series and throughout the research that we've done with the HLA and the AI usage survey. It really seems there's this tension where individuals are racing ahead with these AI tools, while organisations are still trying to figure out, like you said, the governance, their strategy, how to implement it.
And there are significant variations in AI readiness across organisations and even between individuals, right? You have some individuals who are really experimenting and trying new things out, while others might be a little bit more reticent. From your vantage point, how would you characterise the current state of AI literacy in the humanitarian sector? Are we starting to see this digital divide that's maybe even increasing, and now maybe an AI divide? Or is it more nuanced than that?
Meheret: To address the first question, how I would characterise the sector, maybe I can make two points. The first one is the gap that I already mentioned: I would characterise the sector as having strong interest but uneven readiness, with individuals often ahead of institutions. This is the one that I already described, but just to support it with real experience from NetHope, based on the AI Working Group's discussions, I have observed that many nonprofit professionals are experimenting with AI tools, like chatbots and different generative AI platforms. But organisational policies and governance structures lag behind, so this creates an implementation gap where personal use is very common, but institutional integration is limited. So that is one way to characterise the sector.
And the second way I would characterise the sector is with the AI skills paradox. Tools are increasingly accessible and user-friendly, yet specialised expertise within humanitarian organisations remains scarce.
A few months ago at NetHope, we conducted research on digital skill demand across the nonprofit sector, based on job posting data from 2021 to 2025. The findings show that emerging tech and AI accounted for less than 2% of postings, the lowest share of the skill categories in demand in the nonprofit sector. This shows that staff can use AI at a basic level, but deeper technical knowledge for safe, ethical, and context-specific deployment is still lacking.
In short, we can say AI literacy in humanitarian work is marked by enthusiasm and experimentation, but uneven skills, governance gaps, and resource divides between large and small organisations still persist.
Just to talk about the digital divide that you mentioned, I think it’s tempting to frame the challenge as a simple digital divide, those who have access to advanced tools and training versus those that don’t. Yet this framing oversimplifies the reality. So what we see instead is a more nuanced landscape where literacy levels vary not only by geography or infrastructure, but also by organisational scale, governance capacity, and different contexts.
For example, I would like to mention our recent research on the NetHope AI Readiness Benchmark. We assessed 974 nonprofit professionals across six dimensions. That analysis positions the sector at an intermediate level of readiness: 1.99 on a 4.0 scale, just below the midpoint. Most organisations have moved beyond early experimentation, but fewer than 10% have achieved full readiness. More than a third sit at the intermediate stage, whilst nearly 30% remain at basic or emerging levels.
This tells us that whilst progress is being made, the sector is still far from prepared to fully leverage the capabilities that AI can bring us. The gap isn't just about technology: one of the six dimensions we analysed is technology access, and in fact technology readiness scores relatively high, along with data readiness. But strategy and governance, skilling and change, and organisational resources lag behind. That shows that organisations may have the tools, but they lack the strategic direction, skilled personnel, and dedicated resources to use them effectively.
So in that research, we found three critical findings. The first is that AI initiatives are often not fully integrated into organisational missions or long-term goals. The second is that only about 10% of organisations report full readiness in skilling and change management. And the third is that nearly 38% of organisations are still at a basic or emerging level when it comes to dedicated funding and leadership support. This shows that AI literacy is layered. Operational staff may be experimenting with tools, but without strategic integration, workforce investment, and resources, those efforts remain fragmented in our sector.
The clear nuance here is that it's not just about connectivity or access. Even in well-connected regions, organisations struggle with strategy and AI skills. Meanwhile, in low-resource settings, underlying infrastructure challenges compound these problems, so it will be more challenging for them. Addressing these issues requires targeted interventions that go beyond technology provision. We need to focus on workforce development, ethical frameworks, strategic planning, and collaborative resource sharing. Only then can humanitarian organisations adopt AI responsibly and effectively, ensuring that innovation truly serves the communities they exist to protect.
Madigan: I have so many things that I want to address there. First of all, that research sounds absolutely incredible, and I’m really excited to read that, and I think our audience will be as well. It’s really interesting, the three things that you pointed out there, especially around change management, in terms of skilled workforce, in terms of not just implementing it in one specific project, but how do we actually implement it across the board and really make sure that it’s integrated into the areas where it needs to be integrated.
I think what we were seeing in some of our research, or what we've been trying to tell people at DFS is, you know, find one area, where's your biggest problem in your organisation, can AI help there, or will it not help? Because that's the other part that I think some organisations are also dealing with. They think maybe AI will help them solve this problem, but if they have bad data, or it's not standardised, then the AI systems read that and produce unreliable outputs.
When you talk about it’s not just who has access to the tools, but it’s about organisational scale, capacity, and resource management, it really flips the conversation in some ways that I think we’ve been having around, oh, we need access to these tools, or there’s all these innovator, accelerator programmes that are happening, and those are fantastic, and I think those definitely need to exist. But I think there also needs to be this investment into the change management, into this governance capacity.
And that AI skills paradox that you mentioned, where these tools are becoming more and more accessible, but the expertise is still remaining scarce, that brings up the question about how do we upskill ourselves as humanitarian workers? What courses are we taking? How are we learning? Are we engaging in sandbox or communities of practice? And so, like you said, I think that’s exactly the challenge that we do need to be addressing through effective training and capacity strengthening.
21:33: Chapter 4: Humanitarian AI training and learning approaches – what’s working and what’s not?
So, speaking of which, when we talk about AI training in the humanitarian context, we're often dealing with teams that are spread across multiple countries, that are often working in different crises, and that have varying levels of technical infrastructure. You mentioned your experience with that earlier when you introduced yourself. I know NetHope has just recently launched your own AI training programme on the Kaya platform, which is super exciting. But beyond these more formalised training modules, we've been hearing throughout our research and our podcast series about the role that peer-to-peer learning, self-taught learning, and AI working groups have in making knowledge really stick, right? I'm always more likely to trust my coworker or someone who has experience than something I'm just reading online.
So, from NetHope’s perspective, or from your perspective, how are you seeing organisations or maybe individuals sort of blend that formal training with these more collaborative approaches? Are there some pressing challenges in creating these learning ecosystems that combine these structured courses with peer exchanges and informal knowledge sharing? And I know you mentioned before the AI community of practice that you have at NetHope. So how does that actually build lasting capacity, or strengthen the capacity that might already be existing?
Meheret: Yeah, I think now we are addressing how we can build the capacity to overcome the gap, so I think it’s a good question. First, I would like to say some things about the course that we provided, as you also mentioned. Yes, we recently launched a free CPD-certified course series on Unlocking AI for Nonprofits, which is designed to help nonprofit teams build the skills and confidence they need to use AI effectively and safely. The course offers four flexible and self-paced pathways, which are AI basics, application of generative AI, advancing application with Copilot, and responsible use of AI.
The response has been really remarkable. Over 3,000 participants have already enrolled in the course, which shows just how urgent the need for AI training is in the sector. With funding gaps widening and workloads increasing, nonprofits are looking to AI as a way to scale their impact without compromising their mission. This is one piece of evidence that formal learning is needed, and that it meets a real need in the sector.
At the same time, at NetHope, we also believe that structured training alone is not enough. We have seen the critical role of peer-to-peer learning in our different sessions. That's why we have established an AI working group with three specialised subgroups: one on ethics and governance, one on AI application and delivery, and a third on generative AI in internal operations. We have found that these community spaces serve as places for collective capacity building, experience sharing, and knowledge production. So at NetHope we blend the two approaches: providing the training, and also providing this community space where we can collaborate, work, and build the capacity of the sector together.
Just to show why this blending matters, I can mention two points. The first is that formal training and courses provide structured knowledge, frameworks, and professional standards. They ensure consistency across organisations and build credibility with external stakeholders like vendors, partners, and regulators. That is one powerful side of formal training and courses.
When we come to peer-to-peer learning and working groups, these encourage contextual, lived-experience sharing. They foster trust, collaboration, and collective problem solving, and they help us adapt knowledge to local realities and organisational cultures. This blending of formal training and peer-to-peer learning creates a learning ecosystem where formal training sets the foundation, whilst peer-to-peer exchange makes it practical, adaptive, and sustainable.
Specifically, this approach helps nonprofit staff learn not only best practices, but also how to improvise and adapt them in their real-world contexts. The working group also reinforces the formal training and embeds it into daily practice. I think that is the power of blending these two approaches.
That said, this doesn't come without challenges. Building learning ecosystems that truly strengthen long-term capacity in the humanitarian and nonprofit sector is really complex. The challenges go beyond simply offering courses or convening groups; they lie in how these elements connect and sustain impact. Let me point out some critical challenges that we also face and see.
One critical challenge is navigating the power dynamics. Peer exchange, whilst valuable, can unintentionally reproduce hierarchies unless inclusively facilitated. In peer exchange, participants come from very different organisational contexts, like big international NGOs sitting alongside small grassroots organisations. If you don’t actively design for equity, those spaces can reinforce existing hierarchies rather than breaking them down. The loudest voices might dominate, certain types of knowledge get privileged over others, and people from smaller organisations leave feeling like they didn’t really belong in that space.
To overcome these challenges, there are some things that we apply at NetHope. The first is to promote authentic participation. We often use different approaches, for example creating Miro boards to gather reflections from all participants, ensuring diverse voices are heard. Applying different methodologies, rather than relying only on people speaking up, is one way to promote authentic participation throughout the working group. The other is establishing ground rules, which is also really important. For example, when we run our working groups, we apply the Chatham House Rule. This kind of rule sets norms for respectful dialogue and equal participation, and it lays the foundation for equitable engagement. We have to apply these kinds of approaches to overcome the challenges we face when blending the two approaches and providing these services.
Madigan: I mean, there’s so much there to unpack. I think that could be a whole other podcast episode just around the community of practice. The way that you describe this ecosystem, about having complementary aspects of the peer-to-peer learning alongside the formal training, I think really sets a solid foundation, hopefully, for the humanitarian sector, and makes it, like you said, much more practical, and I think in the end, also sustainable long-term as well.
The other thing that I really loved about what you said is about the contextual lived experiences and trust. I think trust is something that really comes into play quite a bit with AI, because I think there's still this black box around it. It's like, how do we trust AI? How do we understand it? And when you have peers that are able to help dig into that with you, it makes it a lot easier to adapt it to your experience, and also to understand it when you need to put it into a context, be culturally responsive, and design the AI to work in that context.
I think the other thing that you acknowledged is very much the challenges around power dynamics. I think we've all, unfortunately, been in a room where the loudest voice is the one that dominates the conversation. And sometimes the quietest voices, or the people who don't say anything, actually have a lot to say when you get them outside of that group. So I really love the way that NetHope is promoting that authentic participation and establishing ground rules and really saying, hey, everyone has a seat at the table. So yeah, I think that's fantastic.
31:29: Chapter 5: The power of case studies, peer exchange and real-world examples
Madigan: I think one thing that has also emerged is how powerful case studies and real-life examples can be for capacity strengthening; they make it more tangible and contextual for people to understand. Do you have any implementation stories, maybe both the victories and the failures, that have been used as learning tools in the community of practice, only, of course, if it's okay for you to share them with us? Or can you share an example that has been particularly effective at strengthening capacity? Maybe it's not a case study, maybe it's another tool that was used in the working group. And what made it so strong and resonant?
Meheret: Yeah, I really like the work that we do on case studies, so I'm really glad you asked this question. Yes, at NetHope, we conduct case studies on AI implementation. The case studies are based on direct input from those who were involved in the implementation process. I think that's quite an important point. It's not theoretical, it's really applied.
These case studies include, just to give you examples from this year: the first one I could mention is Catholic Relief Services (CRS). They use machine learning to predict food prices for their food distribution programme budget. If there is any other nonprofit thinking of doing this kind of work, this case study is the right place to look, so they can see what the challenges are and how to implement it. They're not going to learn theoretically what AI is and how to apply it in their context; they are going to see a real example of how machine learning is applied to predict food prices in a nonprofit context. So that's one example.
The other example is the case study on the Norwegian Refugee Council, a use case of a custom AI chatbot using large language models to give precise and rapid access to policies and other relevant information from their own database. This is a generative AI example of locally developing a chatbot for their organisation. If another organisation wants to develop this kind of chatbot, they can see how it was built and what challenges were faced. They won't need to repeat the challenges that organisation faced; they can learn from these real implementations.
Another one I would like to mention is the ICRC case study on population mapping, using AI to support more accurate, data-driven crisis response. First, I would like to acknowledge that these case studies are really powerful in themselves: they let us see how AI can be applied in the humanitarian sector. But beyond that, if other nonprofits would like to implement something similar, they are the right place to look, and they are very contextual to the sector.
These case studies make AI really tangible and practical by showing how abstract concepts translate into real-world outcomes. We might hear 'AI', we might hear 'LLMs', different terminologies, but what that means in our context can be shown through these case studies. We know that large language models can be used in chatbots, but how can we use them in our context? These case studies have real examples to show us that.
They also take us beyond the theory and the hype. That's something I would like to emphasise, because with generative AI there is a lot of hype, but these case studies distil it into our context and show us, beyond the hype, what can really be done with these tools. They provide us with context and highlight the practical challenges and lessons that generic training often misses. Training might be more generic; these case studies are very specific to our context, and that is part of their power.
On the question of how organisations use these AI case studies as a learning tool, I have already touched on some points, but let me give an example of how we use them at NetHope. We often use these case studies for peer exchange: we bring them into discussions, like the AI Working Group, to identify what worked, what didn't, and why. This approach fosters critical thinking and contextual adaptation. Some of the case studies were also presented in the AI working group by the organisations who implemented them.
Beyond the written case studies, we also invite the implementers as guest speakers in our AI working group and open a space for discussion, so our working group participants can ask the people who actually implemented these solutions about the things that failed. That is one way we use the case studies at NetHope. In addition, we use the insights from the case studies to feed into practical resources. As I mentioned, we develop guides and research reports, and these case studies are reference points that are really contextualised to our sector, so we feed them into our resources, which helps organisations move to implementation, beyond just seeing the theory. That's the second application of these case studies.
And we are also working to embed these case studies into our formal courses to illustrate best practices and pitfalls. Training mostly covers theory, but when you support it with case studies, it becomes more contextual and distilled to our setting, so we are working on how to integrate our case studies into the training we provide. That's also how we use NetHope case studies.
Madigan: I mean, there’s so much power behind those case studies, as you’ve just outlined. I think for me, the implementation part is so key, because I think when we’re experimenting and playing with generative AI, you’re like, oh, but I could do this, or I could do this. And you’re like, but where do you focus first? I think where you see other organisations, in the example of maybe CRS or NRC, who are then showing you, this is how we implemented it, you’re like, oh, okay, I’m working on a similar project, I might have to adapt some things, which I guess then comes in the peer-to-peer working groups, where you’re able to be like, okay, you have this idea, can we iterate on it? Can we improve on it so that it fits our context?
I think that’s huge. And I especially love how you’ve described all of these examples as this bridge between peer-to-peer learning and formal learning, and really making sure that it binds well together for, hopefully, future AI implementations in this context.
I think one thing that really stands out for me is that I didn't actually hear you mention much around technical knowledge, and that highlights that AI capacity building isn't just about knowing how LLMs work, or having really deep technical knowledge. I think that came up in our survey as well: only 3 or 4% considered themselves AI experts. But again, sometimes I feel like that expertise doesn't have to be just technical, and you can have it on the front end, on the user side of things. It's about understanding the outputs.
40:25: Chapter 6: Beyond technical skills: judgement, ethics, context and leadership
Madigan: So in your experience, maybe what are some of the often overlooked competencies that humanitarian workers need to work with AI effectively? Or maybe when you’ve been doing this research that you mentioned previously around the AI skills, what are those AI skills that might be coming up in this sector, that we might see in 5 years’ time, all of a sudden, the job descriptions are including all of this? I’m thinking about ethical decision making. How do you evaluate AI recommendations, or even understanding these cultural contexts in AI deployment? So yeah, would love to pass it back to you and say, what are the competencies that humanitarian workers will need?
Meheret: Yeah, I think that’s a great question. Yes, you are right, there is often an assumption that AI training is mainly about technical skills, like learning algorithms and tools. Just to pause on that: I think we say technical skills because we are now focusing on generative AI, but before, we were talking about predictive analytics and machine learning, and that aspect of AI may demand more technical skill to build. When we come to generative AI, the things we have been discussing since the start of our conversation are not that technical. Beyond the technical, what we have seen at NetHope is that there is a real readiness gap, and it isn't technical; it's more human, I would say.
Access to tools doesn’t equal readiness, I believe. What matters most are the competencies that help humanitarian workers decide when, how, and whether to use AI, and how to use it responsibly. The first thing that is often overlooked is critical AI literacy: the ability to question outputs rather than accept them at face value. As I mentioned, this applies especially to generative AI, so it's a very important thing that is often overlooked.
The second thing that is often overlooked, beyond the technical, is ethical decision-making. Technical training teaches people how to use AI, but ethical competencies teach them when not to use it. It's really important to know when to use it and when not to use it, so we have to be ethical, especially in the sector where we operate.
Workers need to assess risk, align AI use with their mission and values, and ensure accountability when using these tools. Ethical decision-making is a really important point that we have to focus on. At NetHope, that's why we develop tools like the Humanitarian AI Code of Practice and the AI Suitability Toolkit. They are not technical, but they address really important capacities we have to build. I think this is the second one that I would like to emphasise.
And the third one is cultural and contextual intelligence. AI trained in one setting can fail in another. Humanitarian staff need to recognise issues like language bias, infrastructure gaps, and power dynamics, especially in low-resource environments where digital access is limited. That's the third point I would like to emphasise.
And finally, there is change leadership and strategic thinking. AI adoption is an organisational change process. Staff need to manage fears, bridge technical and programmatic teams, and connect AI use cases directly to their mission and impact. We are not only talking about efficiency; we have to connect these tools to our mission. That's another thing we overlook alongside the technical aspects.
These overlooked competencies, as we can see, are not about coding or algorithms, as I mentioned at the start. They are about judgement, ethics, context, and leadership. And just as we discussed earlier with peer learning and case studies, these skills are best built when organisations combine training with collaborative spaces where practitioners wrestle with real dilemmas. That's how AI becomes not just a tool, but a trusted capability in humanitarian work.
Madigan: I think what you just mentioned there about trust, that's something we've grappled with at DFS through this human-in-the-loop methodology. Every AI output is always checked by a human, because like you said, the context matters. The decisions that are made or recommended by the AI analysis matter, because you're eventually putting that analysis out into a world where the decisions taken on it have real life-or-death consequences.
And especially that ethical judgement about knowing when not to use AI. I was just reading about a gender-based violence chatbot tool: in some contexts it worked very well, and then they adapted it to other contexts, and sometimes you don't want a robot telling you certain information. You want a real human, right? So I think that's really crucial for us as a humanitarian sector, to look at these core competencies that you've outlined, and what I hear there is that you have to maintain that human in the loop. Maybe with predictive models and analytics it looks different, but you still have a human there doing the coding and checking everything.
47:44: Chapter 7: Balancing speed and solutions with a dual-track approach to adoption
So, in situations of urgency in humanitarian response, there is this tension as humanitarians. Do we prioritise that urgency, getting resources, data, or a response to people within the time needed, or do we take more time to develop a more nuanced approach? How do we balance the time it takes to implement some of this, because the ethics and the context all come with time and learning, and sometimes, as a humanitarian, you're put into a situation where you might not have that time? And especially with the introduction of AI, where you're trying to learn as much as you can before you've been deployed, and then it tells you, yeah, this is the situation in Sudan, and then you get there, and it's nothing like it said.
Meheret: Yeah, I think this is perhaps the most challenging tension in humanitarian AI adoption. We all operate in crisis contexts where speed can save lives. Yet rushing AI deployment without adequate competency can cause real harm, such as reinforcing bias, violating privacy, or eroding community trust. The difficult truth is that urgency does not justify deploying AI irresponsibly; I think we all agree on that. We know that it's urgent, but we can't deploy it irresponsibly. At the same time, urgency means we can't wait for perfect competency before using AI at all. I think the balance lies in how we approach the adoption.
The first thing I would recommend is to start small. In AI, it's always good to start with low-risk applications, where learning is possible without harm. Building foundational ethical competencies first is non-negotiable: that is the first thing we have to do, and then we can scale gradually as organisational skill develops. I think that's one approach.
And the second thing I would recommend is adopting a dual-track model, which means immediate deployment of accessible, safe AI tools for urgent needs, alongside ongoing investment in deeper competencies. For example, frontline staff might use AI-powered translation or crisis mapping tools right away, whilst working groups and training programmes build long-term skills in ethics, governance, and contextual adoption. Those will develop over time.
And the third one, which is related to this, is that we have to develop iterative cycles of practice and reflection. Skills like ethical judgement, participatory design, and governance literacy take time to develop. But they can grow through iterative cycles of practice and reflection. By embedding reflection sessions, after-action reviews, and community feedback loops in the humanitarian workflow, I think we can ensure that every urgent response also contributes to long-term capacity.
In that case, the balance is achieved by starting small, the first thing that I mentioned, and acting fast with safe and accessible tools, learning continuously from practice and embedding reflections into the response cycle. This way, urgency and skill development reinforce each other, rather than compete.
Madigan: I think that’s so fascinating, because at the NetHope event at the end of October, we actually hosted a session where we asked participants to prioritise the needs of the sector, and we laid it out as a pyramid. What we found was that every group had different priorities, and as we facilitated the conversation, what really came out was that it wasn't actually a pyramid. It wasn't step-by-step. Instead, it's an iterative, cyclical approach, where they all feed into each other.
And that approach you're describing, the dual-track approach of starting small, scaling in a no-risk or low-risk situation, while at the same time building up those competencies: you really can't have one without the other. It has to be done at the same time, which is tricky in the humanitarian sector when we're often under-resourced, underfunded, all of that. But I think this really speaks to those iterative cycles of practice and reflection, and how learning actually happens in real-world contexts.
53:33: Chapter 8: Context matters: why culturally responsive AI training is essential
And so I think that brings us to something really fundamental that came up in the research we conducted: localised AI and local implementation as key themes. How should AI literacy programmes be adapted to reflect local contexts, particularly in regions where traditional knowledge systems may offer very different perspectives on data, data sovereignty, decision making, and technology adoption? And when we say culturally responsive AI training, what does that look like in practice?
Meheret: Yeah, I think this connects closely with the conversation we've been having: context is really important, and that's one thing this question also addresses. We all know that traditional knowledge systems often emphasise collective wisdom, oral traditions, and relational decision-making, rather than purely data-driven logic. AI adoption in humanitarian settings can clash with these systems if training assumes Western-centric notions of objectivity or efficiency. Trust and legitimacy depend on showing that AI complements, not replaces, local expertise and cultural practices. I think we have to create that understanding in this context: AI is not replacing us, it supports us. That's one way to defuse that thinking.
To truly advance AI literacy and training, we must boldly adapt our programmes to reflect local contexts, especially in regions where traditional knowledge systems offer unique perspectives on data, decision making, and technology adoption. Culturally responsive AI training is not just a theoretical idea, but it’s a practical necessity for us.
In practice, I believe it means recognising that communities themselves must define the problem AI is meant to address, ensuring that technology serves real needs rather than imposing external priorities. Those are two critical points. And applying the principle of localisation to AI capacity building demands that we rethink both our processes and the underlying power dynamics of technology adoption.
Let’s talk about localisation. Localisation empowers communities to set the agenda. This approach transforms capacity building from a one-way transfer of skills into a truly collaborative process, where local actors lead, and global partners support. That is the narrative that we are going to follow when we are implementing locally.
One example is training materials in local languages, with metaphors and examples: it's not only about translating them, but about including metaphors and examples drawn from the everyday life of the community. Training and capacity building should avoid technical jargon and instead frame AI as an extension of existing problem-solving traditions. For example, in pastoralist communities, AI weather prediction can be explained alongside traditional sky-watching practices.
We can also show integration with traditional knowledge, for example in the context of where I come from in Ethiopia. It's good to position AI outputs as one input among many, triangulated with local wisdom. For example, we can use AI to detect crop disease, but triangulate it with farmers' indigenous knowledge of soil and plant behaviour. By integrating the AI output with community knowledge, we can build trust within the community. I think that's one way of showing how they can support each other.
And in regions facing limited infrastructure or distinct social realities, this strategy calls for AI tools that are not only lightweight but adaptable. They should be accessible in local languages. Where I come from, in Ethiopia, more than 80 languages are spoken, so when we say localised, it should be multilingual: the AI should have the multilingual capacity to work in the languages we speak.
Implementing locally is hard, I know, but it's the way to move forward and to build trust in the community. Through this approach, AI becomes a tool for self-determined progress, strengthening resilience and ensuring that digital futures are defined, owned, and led by the communities themselves.
Madigan: I think that’s so interesting, because basically that describes data sovereignty, right? The idea that communities should own their own data and have a choice in what they do with it and where to give it. And what you were saying about the pastoralist communities, I think there's actually a case study where it was basically a local community versus a group of scientists, and the scientists were trying to predict something around crops or livestock, and the scientists got the prediction wrong almost every single time, while the local community was like, oh no, it's gonna do this, and I think they were right, like, 90% of the time.
So it just shows you how important it is to have local communities to still have their voices being heard, because sometimes I feel the threat with AI is that, oh, but AI told us to do this, we take this recommendation, and then we’re gonna go put it into practice, and then you have a community there that’s saying, well, no, we know our place better than anybody. How can you expect us to go along with it? Like, we’re gonna tell you you’re wrong. No, no, no, but the AI system told us that this is how it’s gonna be.
So really interesting insights in there, and I think that it really beautifully illustrates how AI shouldn’t replace traditional wisdom or knowledge or community knowledge, but like you said, sort of complement it and work alongside it, where there’s parts where that community knowledge is going to be more powerful, but are there gaps where maybe it’s not as strong, and maybe AI can work in that part?
61:24: Chapter 9: Building a culture of critical literacy: advice for smaller humanitarian organisations on their AI journey
Madigan: So given everything that we’ve discussed around localisation and cultural responsiveness, what would your advice be to a smaller local humanitarian organisation where they might not have the resources that some of the big INGOs or the UN agencies have in starting their AI literacy journey tomorrow? What are the things that they might need to begin? I have some thoughts from the research we’ve done, but I would love to hear yours.
Meheret: Yeah, I’m glad you asked this question, because we had a generative AI for internal operations subgroup session where we mapped the actual AI journeys of humanitarian practitioners. The participants mapped their journeys and provided recommendations for organisations and staff on their AI journey. Based on the findings from that session, I was able to develop an AI learning journey model. This model shows that AI literacy is not linear. I know you mentioned cyclical approaches before, and that's also my finding: it's not linear but iterative, shaped by personal experience, social learning, and adaptive strategies.
The model comprises interrelated components, including engagement, reflection, refinement, and iteration. Based on those findings and the model, here are my recommendations.
The first thing is to start small, with low-risk experimentation with AI. This can help you build confidence. At this stage it's not about productivity, but about building confidence, so start small.
The second one is build peer learning from day one. For example, form a small cohort across roles who meet regularly to share what they tried, what worked, and what didn’t. And one thing we have to emphasise here is we have to normalise failure as a learning opportunity.
The third thing is to establish a reflection practice: where is AI adding value, where is it failing, what ethical concerns are emerging? For example, they can use frameworks like the ones I mentioned; we developed the AI Readiness Benchmark to track progress. Using these tools, they can build a reflection practice around their use of AI.
And the fourth is to refine based on what you learn. If data quality issues arise, focus there. If ethical concerns surface, prioritise responsible AI frameworks. And if AI doesn't fit the task, decide not to use it. We have to openly discuss these issues.
And finally, adopt an iterative mindset. This is a very important point that has been discussed in the working group, and also something I have observed throughout my career. Adopting an iterative mindset means developing a culture of continuous learning rather than seeking a perfect solution: we improve step by step.
In other words, begin with curiosity, create space for experimentation, learn together, reflect honestly, refine based on experience, and keep iterating. AI literacy isn’t a box that we tick, but it’s a continuous cycle. These are the points that I would like to recommend.
And there are a lot of free resources we have developed that can be a starting point for building their capacity. For example, we have the Unlocking AI for Nonprofits course that we mentioned at the start, the AI Suitability Toolkit, the Humanitarian AI Code of Practice, the AI Readiness Benchmark, the Guide to Usefulness of AI, and more. They are freely available: organisations can start from there, use those resources, and, with an iterative mindset, build their capacity over time.
Madigan: I don’t even know how to follow up on that, because everything you just said was wonderful. But like you said, the recurring theme throughout this conversation has been starting small, with the low-risk applications, and learning about the ethics. Building that peer-to-peer learning is something that we've done at DFS, and for me personally it has strengthened my ability to interact with generative AI, when I'm talking to the developers and they're talking to the analysts. Working in communications, it's definitely helped.
And I think something that really stood out to me is also recognising your failures, and being able to openly discuss those failures. Sometimes I feel like in the sector we're not able to discuss the failures publicly, and you don't have to discuss them publicly, though it might help in some situations. But if you have those peer-to-peer networks or learning groups, you can say, hey, I tried this, and I failed miserably. For me, I've tried to use AI-generated images and failed miserably, so I'm now putting that to one side, because I'm not going to use it at this point in time. Whereas I know some other people who are really skilled at using it, but then the ethics questions come in there.
But this whole mindset of adopting an iterative approach, I think that's such practical, grounded advice, and it really emphasises that AI adoption in the sector is a journey, and we're all on this journey together, learning as a group, and hopefully we'll be able to share with each other the successes, the failures, the messy parts, and everything in between.
So, as we’re wrapping up, I wanted to ask you, if there’s anything, any final words that you would like to share with our listeners. Again, this has been such an important conversation in terms of how to approach AI literacy in a really thoughtful and contextualised way. Is there anything else that you would like to say?
Meheret: Yeah, thank you, Madigan. I know we have covered a lot, from readiness benchmarks, peer learning, cultural responsiveness, and even the importance of learning from failure. I think if I have to distil all of this into one message for the humanitarian sector, for our sector, it would be: prioritise building a culture of critical AI literacy over only technical capability.
Critical AI literacy means the organisation’s capacity to ask the right questions before, during, and after adoption. It’s not just the ability to use tools. It’s the ability to ask “should we?” before “can we?” The confidence to challenge AI outputs rather than accept them blindly, and the practice of learning collectively from both successes and failures. These aren’t technical skills, as we mentioned. They are cultural competencies that empower staff to make sound judgement, raise concerns, and adapt responsibly.
If you ask me why is this important, it’s because technical skills without critical literacy are really dangerous. You can train staff to use AI in a short period of time, but without the ability to evaluate it, spot ethical red flags, or recognise cultural misalignment, even technically competent teams can cause harm.
Critical literacy enables everything else: better decisions, stronger vendor accountability, adaptation to local context, earlier harm mitigation, faster learning, and responsible scaling. I believe we have to create regular, protected spaces where diverse staff can ask hard questions. For example, does the proposed AI solution align with our mission? Who might be harmed? Whose voices are missing? What would success and failure look like?
I believe the sector doesn't need more organisations rushing to adopt AI; it needs more organisations thinking critically about whether, when, and how to adopt it in a way that truly serves our communities. Technical skills matter, infrastructure matters, but without critical AI literacy as a foundation, I think all of that risks doing more harm than good. That would be my final point.
Madigan: And what a final point that is to end on. A culture of critical literacy over just technical competencies or capability, I think that’s a really powerful message, and I think that really encapsulates everything that we’ve discussed today.
So I want to say, Meheret, thank you so much for sharing these invaluable insights with us and with our listeners. For our listeners, if you're interested, the full research report conducted by the HLA and Data Friendly Space is available. We will be conducting another light-touch survey in January to see how patterns in humanitarian AI usage have changed. And in the meantime, we really encourage you to check out all of NetHope's amazing work and to keep the conversation going. So again, thank you so much for being with us here today. We really appreciate it.
Meheret: Thank you, thank you so much, Madigan, and thank you so much for having me.