
Humanitarian AI podcast series | Guest spotlight – Meheret Takele Mandefro

This week we’re pleased to release the final instalment of our special six-part Humanitarian AI podcast series, building on the groundbreaking research conducted by the HLA in partnership with Data Friendly Space (DFS). Read on to learn about the work and perspectives of our season finale guest, Meheret Takele Mandefro.

Meet Meheret Takele Mandefro – a Business Analyst at NetHope. We’re pleased that Meheret will feature on the Fresh Humanitarian Perspectives podcast as an expert guest in episode six of the Humanitarian AI series: ‘Developing AI literacy: a matter of trust, critical thinking and localisation.’

Meheret joins guest host Madigan Johnson, Head of Communications and research co-lead at Data Friendly Space, on the podcast to share her views on and experiences of the current state of AI literacy across the humanitarian sector, as well as the practical implementation barriers she is seeing across the NetHope network. Meheret also explains why developing AI literacy is about more than technical skills, and why a culturally attuned approach must be a priority for humanitarian and development organisations on their pathway to AI adoption.

The podcast conversations build on findings and themes emerging from the August 2025 report: How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential.

Madigan Johnson caught up with Meheret ahead of the podcast discussion – read our Q&A to learn more!

[Photo: Meheret Takele Mandefro]
Training programmes must be tailored to local realities, ensuring that communities can build, maintain, and govern AI systems themselves rather than relying on external expertise. Ethical frameworks should be co-created with local stakeholders, embedding cultural values and social priorities into AI governance.
Meheret Takele Mandefro

As a Business Analyst at NetHope, you sit at the intersection of technology and social impact. How would you describe your role to someone outside the sector, and what aspects of your work energise you most?

At NetHope, I work at the intersection of technology and social impact, helping nonprofits harness digital innovation for resilience and impact.

NetHope drives collaboration between NGOs and tech companies to bridge digital divides and strengthen technology capacity across humanitarian, conservation, and development sectors.

As a Business Analyst, my primary role is to analyse complex technological trends and translate them into actionable insights that help nonprofits make informed decisions.

Much of my work involves managing the database for the Center for the Digital Nonprofit, conducting sector-specific research, and developing resources such as reports, briefs, toolkits, and case studies that shape how organisations view and adopt technology.

The biggest motivator for me is seeing the direct link between what we analyse and its impact on real life. Whether I am creating resources or helping lead our AI working group, I see my work support nonprofit organisations every day in making better decisions and positively impacting the lives of the communities they serve.

My background, growing up in Ethiopia and transitioning through roles in teaching, data analysis and engineering, and humanitarian research, enables me to see both sides: I understand the needs of local communities, as well as the challenges nonprofits face in serving those needs. Bringing these perspectives together and ensuring that the relationship between technology and community impact remains strong is what motivates me every day.

NetHope connects global nonprofits with technology solutions and expertise. From your analyst perspective, what unexpected patterns have you observed in how organisations approach AI adoption differently from other digital transformations? What makes AI unique in this landscape?

Through my experience, I have observed two significant patterns that distinguish AI adoption from previous digital transformations.

One of the biggest surprises has been the pace of AI adoption compared to other digital transformations. Past technology adoptions, such as cloud migrations, involved an initial roadmap and a governance framework that gave the process structure.

AI adoption, by contrast, typically begins with rapid pilots run by teams within an organisation, or with individual experimentation. These pilots often occur without any formalised framework for success, return on investment, or a long-term plan for how they will be sustained.

I have also observed that a significant amount of external hype is driving AI adoption in many organisations. This hype is creating pressure on organisations to ‘keep pace’ with their competitors, donors, and the overall trend of AI development.

As a result of this hype, organisations are making early investments in AI technology before establishing return on investment (ROI) metrics or developing sustainability plans for the new systems. This creates unique challenges and opportunities, as organisations invest in AI with less initial clarity on sustainability and measurable impact.

When implementing AI solutions across diverse cultural and operational contexts, organisations may encounter unexpected resistance or enthusiasm. What’s one critical insight about culture’s role in AI adoption that has fundamentally changed how you approach capacity strengthening?

AI adoption is not just a technical process. It is deeply influenced by local values, norms, and trust in technology. In some communities, scepticism may arise from concerns about data privacy or fear that AI will disrupt traditional ways of working, while others are eager to experiment, seeing AI as a pathway to new opportunities.

Understanding these dynamics has taught me that successful AI capacity building requires more than technical training; it demands engaging with local stakeholders, listening to their priorities, and co-designing solutions that reflect their aspirations and address their concerns.

This culturally attuned approach fosters genuine buy-in and ensures that AI tools are embraced not just as innovations, but as relevant and empowering resources for the communities they serve.

The humanitarian sector often talks about localisation and decolonisation of aid. How should these principles shape our approach to AI capacity building, particularly in regions where technological infrastructure and contexts differ significantly from those where many AI tools are developed? What does truly localised AI capacity building look like in practice?

Applying localisation and decolonisation principles to AI capacity building means rethinking both the process and the power dynamics behind technology adoption.

In simple terms, localisation ensures that communities define the problems AI should solve, while decolonisation challenges the dominance of external frameworks and insists on valuing indigenous knowledge systems. Together, these principles shift capacity building from a one-way transfer of skills to a collaborative process where local actors lead and global partners support.

In regions with limited infrastructure or different social contexts, this approach requires designing AI tools that are lightweight, adaptable and accessible in local languages.

Training programmes must be tailored to local realities, ensuring that communities can build, maintain, and govern AI systems themselves rather than relying on external expertise. Ethical frameworks should be co-created with local stakeholders, embedding cultural values and social priorities into AI governance.

Truly localised AI capacity building in practice looks like community-driven innovation hubs, curricula developed in local languages, and AI applications designed for pressing local needs such as agriculture, health, or disaster response. It emphasises ownership, sustainability, and reciprocity where local institutions not only adopt AI but actively shape its development.

This model transforms AI from an externally imposed solution into a tool for self-determined progress, strengthening resilience and ensuring that digital futures are defined by the communities themselves.



About Meheret Takele Mandefro

Meheret Takele Mandefro is a Business Analyst at NetHope’s Center for the Digital Nonprofit (CDN), where she has worked since December 2024. Her work bridges technology and social impact, generating actionable insights that inform strategy and innovation across the nonprofit sector. At NetHope, Meheret manages the CDN database, conducts in-depth research, and leads data-driven analysis to support strategic initiatives. She regularly publishes briefings, reports, and case studies that highlight technology applications across the humanitarian and development sectors.

Her recent research includes an assessment of AI readiness across nonprofits, case studies on AI adoption, a guide to the usefulness of generative AI, an analysis of digital skills demand across nonprofits, and a sector-wide report on cybersecurity. Her work has been published through UKHIH and NetHope, contributing to thought leadership on responsible technology use for social impact.

Meheret also co-leads NetHope’s AI Working Group, which helps shape sector-wide conversations on ethical AI and responsible innovation. Her academic and professional background spans data science, artificial intelligence, and information science. Prior to NetHope, she served as an assistant lecturer in Ethiopia, worked as a data engineer at Gmaven in South Africa, and led a research initiative as an intern with IRC’s WASH team in The Hague, exploring AI’s role in enhancing business operations.

Now based in The Hague, Meheret is driven by a commitment to harness data and emerging technologies to advance equity, resilience, and impact in humanitarian and development contexts.

About the report and podcast series

In August 2025, the Humanitarian Leadership Academy and Data Friendly Space launched a joint report on artificial intelligence in the humanitarian sector: How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential.

Drawing on insights from 2,539 survey respondents across 144 countries and territories, coupled with deep-dive interviews, it was the world’s first baseline study of AI adoption across the humanitarian sector.

In the next phase of our initiative, we’ve produced a six-episode humanitarian AI podcast series featuring expert guests to build on the themes emerging from the research, including community-centred research, implementation barriers, governance and regulatory frameworks, cultural and ethical considerations, localisation, learning and training, and more. This first series has a particular focus on the Global South and Africa, reflecting the high levels of engagement with the research from these regions.

The platform aims to promote inclusive, accessible conversations and knowledge exchange to support shared progress in the development of ethical, contextually appropriate humanitarian AI.
