15th October 2025
Deogratius Kiggudde
Ka Man Parkinson
How can experimentation and collaboration bridge the gap between humanitarian AI aspirations and reality?
Our recent AI research conducted in partnership with Data Friendly Space highlighted infrastructural constraints and risk tolerance as major barriers to AI adoption. In this fifth instalment of our six-part humanitarian AI podcast series, we explore how Rwanda’s innovation ecosystem offers practical lessons for the humanitarian sector.
We’re delighted to welcome Deogratius Kiggudde, Programme Manager for The Upanzi Network at Carnegie Mellon University Africa in Kigali. Deogratius, who considers himself “a person of the continent”, shares his experiences of working with open-source community-centred tech and AI in Rwanda’s innovation ecosystem and beyond.
Deogratius speaks with Ka Man Parkinson to discuss how government-driven collaboration, academic partnerships and open-source approaches are creating an experimentation culture in which sharing lessons learned is encouraged.
Tune in for a practical, grounded conversation on AI implementation, including:
- Deogratius’ experience of working within Rwanda’s experimentation culture: How progressive policies can encourage testing, failure and collaboration between government, academia and practitioners
- The potential of small language models for humanitarian AI: Why resource-efficient AI designed for specific tasks offers viable pathways for humanitarian contexts with limited connectivity
- The power of community-driven open-source tech: Building sustainable movements around shared tools rather than chasing the “shiniest models”
- Insight into real-world connectivity solutions: From Raspberry Pi mesh networks on motorcycles to understanding when localised approaches work – and when they don’t
- Plus, community questions: Deogratius addresses practical concerns about infrastructural barriers in conflict zones, AI policies, inclusive development, resource allocation and keeping humans in the loop

Keywords: Rwanda, innovation ecosystem, experimentation culture, open-source AI, small language models, community-governed AI, digital public infrastructure, connectivity solutions, mesh networks, Raspberry Pi, collaboration, resource sharing, academic partnerships, pain points, human in the loop, policy implementation, field deployment, CMU Africa, Upanzi Network.
Want to learn more? Read our Q&A with Deogratius
Who should tune in to this conversation
This episode is essential listening for humanitarian practitioners, programme managers, technologists and organisational leaders seeking practical guidance on AI implementation. The conversation is particularly valuable for those working in resource-constrained environments, exploring partnerships between academia and practice, or developing community-centred technology solutions.
The technology is discussed in the context of potential humanitarian AI approaches and does not require specialist knowledge of these tools: this conversation is relevant and accessible to anyone interested in the grounded realities of technological and AI development – and how technology can be built for and with communities. All episodes in this series feature a supporting glossary of technical terms used in the conversations.
Episode chapters
00:00: Chapter 1: Introduction
03:11: Chapter 2: From building homes to technology: Deogratius’ journey to tech for social impact
08:19: Chapter 3: Balancing innovation and risk: how Rwanda’s innovative policies support an experimentation culture for practitioners
24:25: Chapter 4: Deogratius’ view: building a movement through open-source AI
31:42: Chapter 5: Technical solutions to overcome connectivity challenges: the potential of small language models
43:26: Chapter 6: Balancing the vision of pan-African and localised approaches
50:34: Chapter 7: Community Q&A: Deogratius answers your questions
71:09: Chapter 8: Deogratius’ view: what’s needed to accelerate progress
72:58: Chapter 9: Closing reflections
Glossary of terms
We’ve included definitions of some technical and conceptual terms used during this podcast discussion for those who are unfamiliar or new to this topic.
Agentic AI – AI systems designed to perform specific tasks autonomously on behalf of users, acting as specialised agents for particular purposes rather than general-purpose assistants.
AI (Artificial Intelligence) – Technology that enables machines to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making.
API (Application Programming Interface) – A set of rules and protocols that allows different software applications to communicate and share data with each other. See also: OpenAPI
Biases (in AI) – Systematic errors or unfair assumptions in AI systems that can result from limited or skewed training data, leading to prejudiced or inaccurate outputs.
Black box – A system or process whose internal workings are hidden or not transparent to users, who can only see inputs and outputs without understanding how it functions internally.
Boda bodas – Bicycle and motorcycle taxis, common in East Africa, with space for a passenger or for carrying goods.
CMU Africa – Carnegie Mellon University Africa, located in Rwanda.
Community-governed – A model where communities of users and developers collectively influence the direction, development, and usage of technology platforms or AI models.
Computation power – The processing capacity required to perform complex calculations and run AI systems, typically measured in terms of hardware capabilities.
DeepSeek – An AI developer whose models are mentioned as an example of work on efficient, smaller language models.
DHIS2 – An open-source platform used globally for health information management and data collection.
Digital Health Wallet – A prototype tool allowing individuals to own and control access to their own medical records.
Digital protection and privacy – Safeguarding personal information from unauthorised access, use or disclosure, and giving individuals control over how their data is collected, stored and shared.
Digital public goods – Open-source software, data, AI models, standards, and content that are freely available for anyone to use, adapt, and share.
Digital public infrastructure – The underlying digital systems, platforms, and services that support essential functions in society, built on open and interoperable principles.
Drone mapping – The use of unmanned aerial vehicles (drones) equipped with cameras to capture aerial imagery for creating maps and conducting surveys.
Edge devices – Computing devices that process data locally rather than relying on cloud servers, such as smartphones, tablets, or specialised small computers.
Gemini – Google’s large language model with billions of parameters.
Gen AI (Generative AI) – AI systems capable of creating new content, such as text, images, or code, based on patterns learned from training data.
Geo-mapping AI model – An AI model developed by the Humanitarian OpenStreetMap Team for mapping purposes.
Granite model – An AI model developed by IBM, mentioned in the context of small language models.
Hackathons – Intensive collaborative events where programmers, designers, and others work together rapidly to develop software solutions or prototypes.
Hugging Face – An open-source platform and community hub for sharing, discovering, and deploying machine learning models, particularly for natural language processing and AI applications.
Human in the loop – An approach where human oversight and decision-making remain integral to AI systems, with humans verifying, correcting, or guiding AI outputs.
Humanitarian OpenStreetMap Team (HOT) – An organisation focused on collaborative mapping using open-source tools and data for humanitarian purposes.
Irembo – A Rwandan government platform for delivering public services digitally, with open APIs.
Kigali Innovation City – An innovation hub in Rwanda where CMU Africa and other tech organisations are based.
Large Language Models (LLMs) – AI models trained on vast amounts of text data with billions of parameters, capable of understanding and generating human-like text (e.g., ChatGPT, Gemini).
Llama – An open-source large language model developed by Meta that allows developers to use, modify, and build upon the model freely.
Mesh network – A network topology where devices connect directly to each other, creating multiple pathways for data to travel, increasing reliability and coverage.
MOSIP – Modular Open Source Identity Platform, an open-source system for building digital identity infrastructure.
MOU (Memorandum of Understanding) – A formal agreement between two or more parties outlining their intent to collaborate, commonly used to establish partnerships between organisations.
OpenAPI – A standard for building and documenting APIs that makes it easier for different systems to work together.
OpenCRVS – Open Civil Registration and Vital Statistics, an open-source digital system for registering births, deaths, and other vital events.
Open source – Software or technology whose source code is freely available for anyone to inspect, modify, and distribute.
Opportunistic network – A network that takes advantage of temporary connections between mobile devices to transmit data, useful in areas with limited infrastructure.
OSM (OpenStreetMap) – A collaborative project to create a free, editable map of the world.
Parameters – The internal variables within an AI model that are adjusted during training to improve performance; more parameters generally mean more complexity and capability.
Policy Analyser – A platform being developed at CMU Africa using AI to analyse legal documents.
Prompt engineering – The practice of crafting effective instructions or questions to get desired outputs from AI systems.
Prototype – An early test version of a product, tool, or system used to trial concepts and gather feedback before full development and deployment.
Raspberry Pi – A small, affordable single-board computer about the size of a credit card, designed for education and portable computing projects.
R&D (Research and Development) – The process of investigating and developing new products, services, or technologies, typically involving experimentation and innovation before commercial or practical application.
Small Language Models (SLMs) – AI models similar to LLMs but with significantly fewer parameters (millions rather than billions), making them more efficient, faster, and capable of running on less powerful devices.
Tecno – A brand of affordable smartphones popular in Africa, representing the types of lower-cost devices commonly used in humanitarian contexts.
TinyMLs – Tiny Machine Learning devices; extremely small, low-power computers designed to run machine learning models on edge devices with minimal power consumption.
UN Global Pulse – A UN initiative exploring how AI and big data can be used for sustainable development and humanitarian action.
Upanzi Network – A research programme at CMU Africa focusing on digital public infrastructure and AI.
Ushahidi – An NGO platform that crowdsources information and uses AI to gather sentiments from communities on the ground.
User journey – The complete experience a person has when interacting with a product or service, from initial contact through various stages of use.
Episode transcript
[Ka Man, voiceover]: Welcome to Fresh Humanitarian Perspectives, the podcast brought to you by the Humanitarian Leadership Academy.
[Music changes]
[Ka Man, voiceover]: Global expert voices on humanitarian artificial intelligence.
I’m Ka Man Parkinson, Communications and Marketing Lead at the Humanitarian Leadership Academy and co-lead of our report released in August: ‘How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential’ produced in partnership with Data Friendly Space.
In this new six-part podcast series, we’re exploring expert views on the research itself and charting possible pathways forward together.
[Music changes]
[Voiceover, Deogratius]: “You’re a country of people, right. The people run the country. And if the people do not understand the technology you’re putting forward, you can write as many policies as you want, but if you don’t get that literacy up, you don’t get people involved in trying and testing. One of the things about AI people don’t realise is you have to try and test multiple times.”
[Voiceover, Deogratius]: “You can make a wonderful test in the lab here, and everything works. Once you hit the ground, you find that lots of variables that you didn’t really think about come into play – in terms of when do people use it, the time they work on it, they have other things to do, they have different culture and beliefs about certain things as well. So I do think, for me, the first thing when it comes to innovation, we need to really foster collaboration.”
[Voiceover, Ka Man]: Welcome to episode five where we’ll be exploring the practical localisation of humanitarian AI solutions in Rwanda. So far in this series featuring global and African expert perspectives, we’ve explored the humanitarian AI landscape, examining implementation and governance challenges, as well as the foundational role of AI literacy. We’ve discussed cultural frameworks for inclusive AI development, as well as interrogated some of the structural barriers blocking local innovation, together with a look at promising emerging solutions like small language models.
In today’s conversation, we bring these threads together with our expert guest Deogratius Kiggudde, a programme manager for the Upanzi Digital Public Infrastructure Network at Carnegie Mellon University Africa in Kigali.
Starting his career in construction before moving into drone technology and joining the Humanitarian OpenStreetMap Team, Deogratius applies a practical, community-centred lens to his work.
In this episode, we explore lessons learned from Rwanda’s innovative technological environment. From government-driven collaboration to community-built solutions, what can humanitarian organisations learn from contexts where testing technological tools, failing – sometimes publicly – and sharing lessons learned and resources is encouraged?
Just like he did when he began his career in construction, Deogratius shows us how using the right tools can build fit-for-purpose platforms for and with the community.
[Music fades]
***
03:11: Chapter 2: From building homes to technology: Deogratius’ journey to tech for social impact
Ka Man: Hi, Deogratius, welcome to the podcast!
Deogratius: Yeah, welcome as well. I’m glad to be here.
Ka Man: Thank you so much for taking the time to speak with me today. I really enjoyed our conversation when we were first introduced, and I enjoyed your insights from that short call, so I’m really looking forward to this deep dive conversation with you today.
So, before we delve into the questions, I wondered if you could introduce yourself to our listeners, letting us know what you do, and what drew you to this intersection of tech, AI, and humanitarian and development work that you do today?
Deogratius: Sure, sure, sure, I’ll start off with my name. My name is Deogratius, Deogratius Kiggudde. I am Ugandan, but currently in Rwanda, living in Rwanda. I’ve been all over the continent, that’s something I’m very proud of as well, so I usually call myself a person of the continent, as opposed to just Ugandan.
Believe it or not, I actually began my work in construction. When I finished university and everything else, I was very much a construction person. I would focus on building roads, bridges, houses for everyone, but I was very interested in the digital aspects of construction as well – the digital tools, the designs, the analysis, the visualisations, you know, the inspections. With that, I was actually very much drawn to drones. This was way back in 2012, when drones weren’t a big deal, but I was very much drawn to them, to see how they could improve inspections in the construction industry.
So that’s how I really picked up technology, but from the get-go, technology was always at my heart. I used to love working with computers and just really being around them as well.
But when I actually got interested in drones, I was given an opportunity to work with a very small organisation at the time called the Humanitarian OpenStreetMap Team. They were an NGO focused on mapping, and they were thinking about also getting more drones into that space, so I definitely hooked up with them. And that really began my journey of trying to be within the humanitarian sector, and being the technology aspect within the humanitarian sector.
I was with them for almost 10 years, and we did a lot of projects in different countries across the continent. They have a really big focus on open-source tools, open-source technology, open-source data.
That really grounded me in understanding what is free, what is open source, what technology people can use, especially within remote settings and what we call constrained environments. And that really brought me into that space. Around the tail end of my work with them, I got much more involved in what they call digital public goods.
Once I learned more about open source, I wanted to go deeper. ‘Digital public goods’ was a term that began to be coined around 2017–18, and I got involved in different spaces as well.
And that actually got me very much closer to where I am now at CMU Africa, because understanding digital public goods led to understanding digital public infrastructure. That then propelled me to join the CMU Africa research programme called the Upanzi Network.
So the Upanzi Network really focuses on digital public infrastructure, but with a big arm on AI, and CMU Africa in general really fosters a lot of talent around AI. Within this research programme, there’s a lot of AI, not only in academia and research and development, but also in really trying to get AI out there.
This is where I’ve developed a lot of my AI skills. When I joined the team, I found AI experts here – guys working on large language models, building models from scratch, working with them in robotics, working in agriculture. Being at the university here has really helped my understanding of AI, and it really helps me now to link all the different aspects that I’ve learned within the humanitarian industry, and also in my previous work in construction. So that’s how I really got in touch with AI, and how my journey has been so far.
Ka Man: Thank you, Deo. Thanks for sharing your journey. That’s really interesting, from your origins in the construction sector through to the high-tech world that you’re in now. And actually, I can see parallels – I can imagine that in construction, you’re looking at real infrastructural challenges, literally the foundations of buildings and public spaces. And now, moving along the trajectory, you’re still looking at addressing these challenges, but in digital infrastructure. So it’s really interesting that your career has evolved in this way. Thank you very much for sharing.
So I liked how you described yourself as a ‘man of the continent’ [laughs], because I really appreciate you sharing with me and our listeners your perspectives from where you are working in Rwanda, but also from experiences that you have in Uganda and beyond.
08:19: Chapter 3: Balancing innovation and risk: how Rwanda’s innovative policies support an experimentation culture for practitioners
Ka Man: Speaking of infrastructure, I wanted to ask you a question around the context in Rwanda. So, in our humanitarian AI research report that we launched together with our partner, Data Friendly Space, we asked how humanitarians are using AI around the world, and what blockers and barriers are there for those who do want to move forward with this. And infrastructural constraints were a key barrier to implementation, for those who wanted to.
So, as I’ve been reading and learning more about this, I’ve heard that the Rwandan government has invested heavily, and prioritised actions to address these kinds of challenges. So I wondered: how do policies at the national level, how is that affecting you as a practitioner, as a researcher, as a programme leader, on the ground? Does it create certain opportunities? Has it accelerated progress or potential in humanitarian AI applications as a result? And how might that compare to other contexts that you’re familiar with, like Uganda?
Deogratius: Yeah, sure, sure. I must say Rwanda has a very progressive government. Having talked to lots of governments over the years, I can see they are being bold, saying, “Hey, we may not be 100% sure about AI, but we’re willing to test it out.”
Most governments will always say, “No, we want something sure, straight out of the box – like, you know, give it to me, I just plug and play.” But I feel like the government of Rwanda is really saying, “Okay, let’s work together, let’s test this process together. If it works out, amazing. We either work it out together or we fail together.” That’s something that they really focus on. They’ve always been looking to be digital from the get-go. Even before the AI words were coined, they had a notion of how to be digital – not just digital for the sake of being digital, but digital to improve service delivery.
I’ve seen firsthand how they walk the talk of the policies, because so many governments can write policies. On the African continent, we love writing policies. In certain countries, that’s something that comes along as well. They write policies, they write frameworks, but they cannot back them up. They cannot say, “We’ve written the policy, how do we take that forward?” I think, for me, the Rwandan government are saying, “OK, let’s invest in this policy, let’s invest in putting stuff on the ground” – starting with the innovation spaces that they’ve been able to put up, with Kigali Innovation City being one of them, and that’s where CMU Africa sits as well. There are so many others – different hubs within the country that are really focusing on creating this literacy – because you can have a very wonderful policy, but again, you’re a country of people, right. The people run the country, and if the people do not understand the technology you’re putting forward, you can write as many policies as you want, but if you don’t get that literacy up, you don’t get people involved in trying and testing.
One of the things about AI people don’t realise is you have to try and test multiple times, and you must fail multiple times. If anyone tells you that on their first try they did that and it worked, then they’re actually lying, right.
So that is something I’m really emphasising. I’ve seen instances where the government literally takes one person from the private sector and says, “I’m putting you in my car, driving you to the academia offices,” and tells them, “Sit down, come up with something. You don’t leave this room until you come up with something.” Which, for me, is something most governments don’t really do, right. They’re more pulled back in other contexts. They focus on governance and just making sure that the right payments and taxes are done, but I’ve seen here that the government is really hands-on. They try to get people in the same room and get them to work together. Even if some of them don’t like it, at least they try as much as possible to really emphasise that collaboration.
One thing they’re really emphasising, because they know it’s where we are going, is data protection and privacy. Everyone loves to hear that we’re using AI, that we’re going digital, but when you have a population where the majority is still not online, you want them to go into that space prepared, because your first experience of a space can really leave a scary memory. So they are really focusing on trying to emphasise how we can improve data protection and privacy.
That’s something we are doing even here within the Upanzi Network. We’re coming up with tools, we’re doing lots of studies around how we can improve data protection for people, how we can set up systems – whether AI or digital systems – that can still protect people’s privacy and data, while also giving them the power to control the data themselves.
There’s a tool we have here called the Digital Health Wallet that we are trying to prototype, where we’re trying to get people to actually own their medical records. You have a medical record on you, and you can share with someone – like a doctor that you want to share with – but also revoke access as well. So examples like that.
And really, the government has been emphasising something like that as well, and they’re looking for that. It’s still all prototype, but they’re really testing out stuff.
But yeah, they’re really getting people to really understand what it means to be online. There are lots of academies that have been put up that focus on digital literacy, coding academies, and so many others. So I definitely see that.
Testing in the open – I’ve seen so many times when the government comes out and says, “Hey, we failed on this. We tried this one, it didn’t really work out. But we’re moving on. We’re moving on to something new.” That’s something you want a government to be able to do. In other places, they focus on the façade of “oh, we’re doing amazing,” and don’t want to test in the open. And I think if you do that, you get more support, you get more help, where someone says, “Oh, you failed on this, let me come in and actually fill that gap that you maybe didn’t know as well.”
So, I do think, for me, that’s something I’m seeing within this space that is definitely unique compared to other contexts. They are focusing on getting the people ready to be online, they’re getting their hands dirty, they’re saying, “Let’s work together” – not just saying the words, but actually putting you in the car, driving you to the other one’s office; then you meet, you come up with a decision, and they follow up on you. That, for me, is not something I often see across the continent.
Another example I would like to give is that the government has really been trying to emphasise what they call OpenAPI, right. They’ve set up this platform called Irembo, and it’s really pushing government service delivery. But they’re also saying, we want to be able to create open APIs that other people, other sectors – private sector, development sector – can really piggyback on, and build platforms that can improve service delivery. So they’re also not saying that they can do everything themselves. They’re saying, “Hey, we can build the bedrock, but we want different actors to come into play.”
So I see that as something they really want to work on. It’s not yet 100% there, and that’s something I love about them. They don’t say they’re going to build it in one day, one week, but they have that vision. For me, that’s where they’re really going beyond the policies. Policies are nice, they’re quick, they rubberstamp them, everyone signs them. When you start working on things like this, I think it’s definitely what you want an entity and a government to be working on.
Ka Man: Thanks so much, Deo. That’s so, so interesting, and I have a lot to ask you actually on what you’ve just shared.
Deo: Sure, sure.
Ka Man: So it sounds like Rwanda’s got quite a unique culture and context in terms of innovation, and that’s really coming top-down from the government policy level, from what you describe, and there’s a culture of testing and innovation and encouraging experimentation that might not normally be seen in other contexts. Is that right – is that what you were reflecting on?
Deogratius: Yeah, 100%, 100%. I think right now we’re in a world where everyone wants to show perfection – everyone’s like, “Ooh, I’ve made this, it’s bulletproof” – but they are saying, “Hey, we put out something, it’s version one, it may not be amazing, may not be great, but we have a vision going forward, right.” And they’re willing to acknowledge, “Okay, something didn’t work out this way – let’s say we set up this platform and it didn’t really fully work out – let’s move forward. Let’s try this one.”
I think that kind of space is not very common on the continent. I feel like – the phrase people like using is, we prefer showing off to showing up, right. Like, we want to come and just say, “I’m the best at this,” and I think they’re not focusing on being the best, they’re focusing on actually being right. Being right, being fast, and really serviceable for other people as well.
Ka Man: So, it sounds almost like there’s a culture that’s a bit different. It almost aligns with the Silicon Valley mindset of moving fast and breaking things, as the expression goes. And actually I’ve heard – is it Silicon Savannah? Akin to Silicon Valley in America. So you’re sort of saying there’s this testing and experimental culture at the local level, so it’s not just about policy – it’s actually encouraging this within projects and programmes and initiatives. How is it balanced against risk? What kind of approaches are there to putting guardrails in place while you’re taking this testing and trialling approach?
And I’m particularly interested in this because a big barrier to AI implementation that emerged in the research is that a lot of organisations don’t have a high threshold for risk – understandably, if they’re, you know, using donor funds and need to be able to say what they’re using them for. They might not allow such testing and experimentation, so the organisation as a whole might not be pushing ahead with that.
So, for listeners who are working in contexts that are more measured and have a lower risk threshold – what kind of practical guardrails can be put in place? Or do you have any examples of any projects? I know you mentioned this digital wallet. Is there anything you can share, or lessons learned, in terms of creating safe spaces for innovation?
Deogratius: Yeah, sure, sure, sure. I think, for me, that’s a very valid point, especially in the humanitarian sector right now, with budgets getting cut, donors cutting back on a few things, especially on innovation as well, right. You don’t just go high-flying on everything shiny at the moment. I think one of the big lessons I’ve seen here that really works is collaboration. I think one of the biggest drawbacks to innovation has always been how much you can collaborate, because you can’t do it alone.
Even here at the university, within the Upanzi Network, we have amazing guys who can build all these AI tech stuff and put up all these wallets. But we cannot easily go and deploy in the field without really working with the people on the ground. So that means we have to get the government involved. We have to get government permissions, but also go with them in the field for them to actually see the user journey, work with communities that are already on the ground. You have to work with NGOs, you have to work with community organisations, to really see how they use it in their lives.
From just being here in the test lab, you can make a wonderful test in the lab here, and everything works. Once you hit the ground, you find that lots of variables that you didn’t really think about come into play – in terms of when do people use it, the time they work on it, they have other things to do, they have different culture and beliefs about certain things as well.
I do think, for me, the first thing when it comes to innovation, we need to really foster collaboration. And collaboration beyond just saying, “We shall collaborate.” Beyond writing the MOUs – everyone writes MOUs right now, they’re very straightforward, there’s even a template for MOUs. How do we actually share resources? Resources – it doesn’t have to be money. When some people say resources, everyone thinks money, no. It can be like, “You have a car, maybe you can transport us to the field. You already have some kind of accommodation in the field,” meaning we don’t need to spend too much on accommodation. You already have somewhere people can stay for a month or two to test out a tool.
It could be in terms of experiences that you’ve lived as well, and tell guys, “You’re going to put up this amazing wallet, but you know that people don’t like seeing their medical records sometimes. They just want to give it over to their loved ones, or something like that.”
I think there’s so much resource sharing we could do beyond just money – especially in the humanitarian sector, I find that there needs to be more resource sharing. Not money, but the things that you have: the connections, the cars, the accommodation, even the equipment as well. It doesn’t always have to be money. I think, for me, that lowers the risk for every organisation – it really lowers it before you go and spend, let’s say, a million dollars or something like that. But also, it gives you the time to really test, because now you have more resources to test with.
Even the government is really encouraging that. They want to see close collaboration, actual close collaboration, beyond writing papers and saying, “We have an MOU together, we’ve taken photos and all those kinds of things.” They want actual movements being done.
Also, one thing I want to say – it could sound like an advert for the university, but I do think academia has that space for testing things out. I think more private sector actors should be reaching out to academia and seeing how they can work together to test out tools, especially at the very beginning. I know some companies love doing their R&D in-house, but I think, for me, academia can be that space – and I think it should be the space – because this is where you have students who are learning, who actually want to learn, who are very eager to learn, right. So eager to gain those skills. And I think being able to bring industry-level scenarios, problems, issues, challenges into academia makes academia more relevant, as opposed to being somewhere you’re just in to learn and then move out – then you find the challenge, right. If we can bring more industry, whether it’s NGOs or development actors, into academia, to actually tell them, “Stop just doing all those fancy things that you love – this is a challenge we have in the field, this is a challenge we have in the world, can we actually work on it together?” – that, I think, lowers the risk and definitely gives you more time to explore and test things out.
Ka Man: That really resonates with what I’m seeing in the sector generally, and it makes sense in relation to AI and technological innovation in the way that you’ve described. And like you say, I can see how universities have a culture of collaboration – obviously, research and development happens in partnership. Compare that to the private sector, where companies want commercial advantages, so they might not necessarily want to go hand-in-hand with a partner unless there is proven benefit and trust – which obviously presents challenges in the commercial world.
24:25: Chapter 4: Deogratius’ view: building a movement through open-source AI
Ka Man: I actually just wanted to ask one more question around what you shared at the start, around OpenAPI and building platforms together. Does this link with the partnership and collaboration theme that you’re making around open source? My question is, does this link to open-source AI? Because I’m not a technical specialist [laughs], so I want to understand how OpenAPI links to building platforms and AI, and whether that links to the broader theme that you’re highlighting about collaboration.
Deogratius: Yeah, 100%. I definitely want to mention the OpenAPI part, because it’s a big component within digital public infrastructure – an ecosystem where you’re not just building digital systems in silos, where it’s a nice digital system but it’s not speaking to another system, and then you find that you have to build another system to speak to this other system. It becomes more expensive as you go along.
But yes, it does speak to the whole ecosystem of how we can work together, but also build upon what someone else has already worked on. For example, the government has really built this backbone of data, and actually has a lot of data on these kinds of things. Once it makes those APIs open, the private sector and development actors can actually build upon that.
That really makes sure that we are reusing the digital systems that we have. One of the big things that research has shown is that the digital ecosystem has arrived, but there’s a lot of misuse, or lack of reuse, of these digital systems. That creates a lot of wastage and demands more resources, and some people get frustrated – they’re like, “The digital system was supposed to make things easier, but I feel like it’s more frustrating.”
I do think, for me, that’s one aspect around bringing collaborations as well. That also links to the open-source component. I’m an open-source advocate, 100%, but even within the AI space, I do think AI should be, as much as possible, open source, because the information the AI is using has been put out there for it to ingest and be able to generate these answers that we’re getting, and all these parameters. So, I think, in turn, it should also be open source, because it will allow more collaborations or creation of communities.
A good example of this is what the Llama team is doing at Meta, where they’re creating a community of AI developers, actors, users, as opposed to just being a black box in a corner, where they just ship you the next update of a model. Multiple people are working on it, multiple people are using it to improve it, to actually make that happen, because then it does reduce a lot of biases that people have, you know.
So I do love that, right – they’ve been able to create these hackathons, and give out these grants, and really not keep it to themselves. I think, for me, that’s the main point. And in the open-source world, when you’re working with technology that everyone is working on, you get criticism, which is good. You get more criticism, meaning you get more feedback, you get more user journeys, and that makes it a better application. As opposed to getting 10 guys in a room and telling them to come up with the next version – they’ll come up with their own 10 versions that they think could be the best, but not necessarily the best, because they don’t speak for everyone.
But when you have a community movement behind an AI technology, an AI ecosystem, an AI model as well, you get the best out of it. You really push it to the limit. Having been in the space of that movement, I’ve definitely seen that. Mapping the world alone – you can’t do that, even if you have AI right now. You see a lot of community members who notice how things are changing, how things are morphing, but who also really keep you in check, because sometimes techies tend to see the next shiny thing and move to that, yet the user hasn’t asked for that.
I think, for me, having those open systems, testing in public and working with communities is really key in driving not only AI, but tech in general.
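To make the open API idea concrete, here is a minimal sketch of how a third-party service might build on top of an open government platform, as Deogratius describes. The endpoint, query parameter and response fields below are hypothetical placeholders, not Irembo’s actual interface:

```python
# Minimal sketch: a third-party app consuming a published open API.
# The URL and response fields are hypothetical placeholders,
# not Irembo's real interface.
import requests

response = requests.get(
    "https://api.example.gov.rw/v1/services",  # hypothetical open endpoint
    params={"category": "health"},             # hypothetical query parameter
    timeout=10,
)
response.raise_for_status()

# An OpenAPI specification would document the shape of this response,
# so any actor - private sector, NGO, researcher - can build against it.
for service in response.json().get("services", []):
    print(service["name"], "-", service["status"])
```

The point of the pattern is that once the interface is published and documented – for example, as an OpenAPI specification – any actor can build on the same bedrock without rebuilding the underlying system.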
Ka Man: The more I hear people reflecting on this space, the more I’m convinced that, like you say, a movement for the development of AI systems for the humanitarian and development space is absolutely central. And even listeners who are not familiar with terms like OpenAPI and open source – the more technical mechanics of that – can still take away that these are mechanisms that will enable a community, a movement, so that the people with the skills and the hearts and minds and desire to push ahead can use these mechanisms to collaborate, not necessarily be reliant on proprietary systems, and create their own tools from the bottom up. Does that resonate or reflect what you’re thinking?
Deogratius: Yeah, 100%, 100%. I do think, for me, open source has always driven a lot of innovation. Sometimes people forget what open source does for a programme, but also what the community behind that open-source application does – being able to say, “We are not all at the same skill level, we may not even be working for the same organisation, and that is fine, but we have a common interest. We have a common interest in improving this platform, improving the outcome of this system, this model.”
I think for me, that really helps bring a diverse group of users as well. Sometimes, having been at the university here and working with a lot of AI gurus and techie guys, when they are put together in a room, they just don’t know what to do, really. But when you bring a diverse group of people, people who may not be that techy, may just be saying, “I just want to use the platform, how can you make it easier for me to use the platform?” I think for me, once a group’s diverse like that, you’re able to really make innovation as opposed to invention. I think I like using that word sometimes when I’m speaking to people, “you can invent a lot of things. Inventions will always happen, but innovation happens when adoption takes place”. And adoption takes place when you’ve actually been diverse, you have a diverse range of thinking beyond yourself.
Ka Man: I love what you just said there about innovation happening when adoption takes place.
Deogratius: Yes
Ka Man: That is absolutely fundamental. There’s no point innovating, trying to innovate for innovation’s sake, and saying, “We’re the first to do this.” What are you actually doing it for? That’s the definition of success in this space, right – actually being used, adopted, and is improving people’s lives in some way. So thank you very much for sharing and illuminating some of the technical mechanisms that can support a movement towards more inclusive and appropriate technical solutions for communities.
31:42: Chapter 5: Technical solutions to overcome connectivity challenges: the potential of small language models
Ka Man: So, linking to that, I wanted to talk to you about innovation in terms of implementation, and addressing connectivity challenges that many humanitarians may face. In our report, connectivity and infrastructural challenges were cited as a key barrier to AI adoption.
So, when we connected on a call before today’s podcast, you were actually the first person to mention this concept of small language models, and that this is an example, a mechanism which can help to overcome these types of connectivity and infrastructural challenges.
Deogratius: Interesting
Ka Man: Actually, since you mentioned that to me, I’ve had a few conversations, including one with a podcast guest, Michael Tjalve, who speaks on the second episode in this series. So I had a very informative conversation about what an SLM might look like in practice. But I wanted to get your take on it, your perspective – to ask if you can tell us a bit more about it, and what it might look like in practice, from your perspective, in your context. And are there any other innovations, like SLMs, that might address this connectivity challenge?
Deogratius: Yeah, sure, sure. When I first joined CMU Africa and the Upanzi Network, one of the things I got to learn very quickly is how much computation power you need for these LLMs, right. Everyone mentions the word LLM, and you think it’s easy, but then when I began sitting down with the team, and they were telling me how much it takes to run these large language models, I was just shocked. I was like, “I’m even surprised people are even talking about this.” There’s a lot of computation power that goes in, because sometimes we just go in and type our question and get an answer, we’re like, “Oh, okay.” So we don’t get to see what’s in the background as well.
So when I learned that, I was very much intrigued. But then one of the team members also really got me thinking about these small language models, right. Basically, they’re definitely built from LLMs – and I’m not saying, “let’s dump our LLMs and throw them away in a coffin somewhere”. LLMs are still 100% needed, and these small language models are actually built off of them. Simply put, you have fewer parameters to run through.
So, let’s say some of the Gemini models are in the billions of parameters. Here, we’re looking at small language models with millions – like 21 million, 19 million, even 9 million, right. The number can keep changing. And the team at Meta is really pushing a lot on small language models with Llama, and a few others. I know there’s a Granite model from IBM, and even DeepSeek – everyone is really trying them out as well. They’re obviously less well known, because everyone’s talking about the big ones, but what they actually bring is very key.
Because the humanitarian sector is not always working in the fanciest building, or in the middle of the city where there’s 5G internet, right. They’re always working on the peripheries of society. Sometimes you are in places where you’re asking yourself, “Wow, do these people actually have power?”
Having been able to travel across the continent, I do believe there’s a big population that is still in those kinds of situations, where the network power is not available. I’ve been in places like Northern Mozambique, northern Zambia, Zimbabwe, even in Uganda as well, in these refugee settlements – they’re usually places where no one wants to be, that’s why the refugees are actually there as well. They don’t just rush into the cities. In northern Kenya, in Kakuma as well – they’re really, really off the grid.
So thinking about LLMs can be tricky. I do think, for me, small language models bring that opportunity: they need less computation power from the get-go, meaning you can actually use them on edge devices. Not everyone has the new Samsung Galaxy that has just been launched, or the iPhone 17, which may not have these issues. People are running on, you know, Tecnos – entry-level devices – and the majority of people in these environments are working on those. They’re still smartphones, but they’re really at the edge, so these small language models can actually work on them. And they process way faster, because they’re using fewer parameters.
And they’re actually good, I would say, for what everyone is now calling agentic AI. Agentic AI basically means the AI is doing a particular task for you, and that’s what you need. Picture this: picture a person who knows generally about everything – knows about the news, knows a little bit of every topic, like a jack of all trades. But there are certain tasks where you want someone who’s very specialised, especially if you’re going to do something that is clear-cut and straightforward, where there’s no ambiguity. There’s not really much of a thought process needed.
For those, I think, you need some of these SLMs. They also work in places where you don’t have internet, because you can actually store them on the device itself, so that the AI is on the device. This happens a lot – you’re in refugee camps, you’re in conflict zones, and sometimes there’s definitely no internet. And they’re easier to customise. They’re easier to customise and work with if, let’s say, you’re thinking about analysing legal documents. You know, a legal document most times doesn’t change once it’s been adopted. For the frameworks and the policy documents that we set up, you’d rather use a small language model, because your knowledge base is within that legal document.
It’s something that we’ve definitely prototyped here at the Upanzi Network. We have a platform called the Policy Analyser that we are working through, hoping to go public soon. It’s an example of how we are getting small language models – or a little bit of LLMs as well – to be used for a specific task: analysing legal documents. That’s where they fit in, right, because sometimes you don’t have to know everything about everything to actually do a particular task.
So, that’s what I would say, I think they’re picking up, but definitely seeing more of that, and I do think, for me, in the humanitarian sector, they will be a game changer, whether it’s in clinics, in communication, we will definitely be seeing them. I do think they will definitely be picked up soon.
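As a rough illustration of the on-device approach Deogratius describes, the sketch below loads a small open model and asks it a question about a local policy document. The model choice, file name and prompt are illustrative assumptions, not the Upanzi Network’s Policy Analyser; any small open model from a hub like Hugging Face could be substituted:

```python
# Minimal sketch of a small language model running locally for a
# narrow task (answering questions about one policy document).
# Model choice and file name are illustrative, not the Policy Analyser.
from transformers import pipeline

# A small instruction-tuned model (hundreds of millions of parameters,
# not billions). Downloaded once, it can then run fully offline.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

with open("policy_document.txt") as f:  # hypothetical local document
    policy_text = f.read()

prompt = (
    "Answer using only the policy below.\n\n"
    f"{policy_text}\n\n"
    "Question: What does this policy say about data protection?\nAnswer:"
)

result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```

Because the model is downloaded once and then runs entirely locally, the same pattern works on modest hardware with intermittent or no connectivity – the trade-off being the narrower scope Deogratius goes on to describe.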
Ka Man: Thank you. So, is this something, then, that’s still quite emerging, and that people are looking at use cases for humanitarian contexts? Where are we in terms of the adoption of SLMs for humanitarian contexts overall, do you think?
Deogratius: I think it’s still early days. The way I’ve been speaking, I don’t want to say SLMs are the silver bullet. They still have their own risks. They still carry the normal AI risk of having biases – because if you have a small pool of parameters, there’s a high chance of bias being involved there.
They tend to struggle with a bit of complexity, because you’ve told it to do one specific thing. As soon as you say, “Can you try something new,” the AI will fail. So it tends to really narrow the scope, but they are definitely getting there.
They’re also a bit difficult to rebuild over time, but I think, for me, right now, I would say that we’re not yet there. I think we’re not yet fully there in getting SLMs into the humanitarian sector. But they’re getting better. I can say this now, and I know something new will come up as well – that’s how the AI space has been moving at the moment.
But I would say, inevitably, if we’re to pick up AI in the humanitarian sector, I do see this as the first viable option, compared to the LLMs that everyone knows, everyone has seen, everyone is talking about. They’re definitely good – the best, even – but I think we will need to break things down for the humanitarian sector.
Ka Man: The more I hear about it, ever since you first mentioned it to me, the more I think, wow, this really is relevant for the humanitarian sector, given all of the constraints that we work under and, obviously, the environmental constraints faced when deployed in the field. So it sounds like it really could be a way forward. I really hope that momentum can gather around this, because I think it’s really exciting.
Now, just because you mentioned agentic AI – I know nothing about agentic AI, that’s the next level for me to learn about. But I was reading an MIT report, or white paper, saying that in the commercial sector, in Fortune 500 companies, the majority of pilots fail – and when I say fail, I mean in terms of profitability, because that’s their measure – because the systems themselves are not learning. They’re not learning, they’re not evolving and adapting, so there isn’t this sort of feedback loop. I thought that was really interesting. So that was from the systemic side, not the human side of things.
So I guess that could be a constraint faced with SLMs as well. So I wondered – and I’m just pondering this, because agentic AI seems to be the next frontier – could there be, or are there, scaled-down versions of agentic AI networks, where there is the ability to learn, with those feedback loops, but without the huge energy and resource requirements?
Deogratius: Yes, I do think there’s a possibility of that. I myself may not be 100% sure, but I do think there’s a possibility of that, where you set up your system to be able to learn from itself, and learn from the failures, and get more inputs, while keeping it small within the parameters that you have.
I wouldn’t say I know exactly how it would be done, but I do think it’s very much possible, and I think it’s going to be the next thing, because in the end, we are trying to get tasks done. How can we get our tasks done? Generative AI is giving you answers for some of the most generic things that we tend to do – writing reports and things like that.
But then there are tasks where you need to click, upload, text, upload, delete – simple things we tend to do all the time that can be repetitive – and I do think that is definitely going to come into play. The learning loop takes a lot of computation power as well, though. I’ve been told that for the AI to learn, you need a lot of computation power. But striking that balance – “How offline can I be, how online can I be, how big or small am I supposed to be to do my task?”, or to have the AI model do my tasks as well as possible – there will definitely be a lot of mathematics behind that, a lot of calculations, but I do think it can be done.
Ka Man: Thank you. I know that’s quite future-facing, that one.
Deogratius: [laughs] Yes
Ka Man: But it’s good to hear your thoughts on that, especially because listeners might be hearing these words come up, these terms, it might be a bit disorienting hearing about these things. So I think it’s worth bringing these terms into the conversation so people can start to learn what are the potential implications for this in the humanitarian space. So thank you for sharing your thoughts on that.
43:26: Chapter 6: Balancing the vision of pan-African and localised approaches
Ka Man: I liked what you said about SLMs being trainable on very specific domains – like you talked about policies and legal information, they can be trained on that, for example. But I just wanted to discuss contextualisation further. In our research, it emerged that there are significant gaps between the AI tools being used and their deployment in local contexts.
So, just going back to what we were saying at the start of this conversation, how you described yourself as a ‘man of the continent’ and looking at a pan-African approach. Do you think that a pan-African AI approach would be the right path to addressing a contextual knowledge gap? Or, do you think it needs to be more nuanced than that, more regional or localised, to be more effective in humanitarian contexts?
Deogratius: I know sometimes development actors love having one big document that says, “Africa’s AI policy is right here, and it’s ready for you.” It’s nice to have, it will be a nice document, and I’m sure the African Union will sign off on those kinds of things, but in reality, I don’t think it actually scales down.
For me, I do think that when you start with localised approaches, they definitely pick up as you go along. It’s always easier to say, “We picked three, four, five things from these different countries, and then we’ve made this regional kind of approach” – one that speaks to all the different countries within that region.
An example I can give you relates to the connectivity point that you asked me about earlier – what innovation around connectivity is actually out there. There are lots of innovations that could work in one country but wouldn’t work in another.
Here again, I’m just speaking about some of the work we’re doing here: we came up with a solution for what we’re calling rural connectivity, where we get small Raspberry Pis and connect them in a kind of mesh network, making a localised internet connection, but then put them on moving motorcycles. Here in East Africa, we use a lot of motorcycles, sometimes called boda bodas, and there are lots of buses moving around as well, so we put those Raspberry Pis on those buses.
We create this kind of moving network. So you send a message, and it goes onto a Raspberry Pi. That Raspberry Pi is on a bus or a motorcycle that is moving. Once it moves and meets another Raspberry Pi, it transfers that information to it, and it cascades. It’s what we call an opportunistic network. As long as there’s a kind of mesh around that is moving, the message actually gets through.
Now, that’s an amazing innovation. It’s clear-cut, and it works here in Rwanda, because Rwanda has a lot of motorcycles moving around, even in the villages. When you go to the remote villages and the refugee camps, there are lots of motorcycles transporting people and goods, so if you just link into that system, it actually works. It’s an amazing innovation. We’ve done that, and we’re testing it out together with a team from World Vision and a couple of other NGOs working in a refugee camp.
Now, if I took that idea to northern Zimbabwe, or to Zambia, it would definitely not work. They don’t have that many motorcycles, and the movement patterns are very different – they have bigger buses that move at more fixed times. So it’s a good innovation, and it has worked here in Rwanda – at least we’re testing it out here, and the signs are positive – but in another country not that far from Rwanda, it wouldn’t work at all. People would look at you and say, “What are you trying to do?”
That example shows how I’d say localised approaches should work. Just because something worked in one country doesn’t mean it will work in another – but that doesn’t mean it isn’t an innovation, or that the approach doesn’t work. That’s why you need to localise lots of these approaches, and not just copy-paste and say, “The whole of Africa has one approach to AI – let’s make the big framework.”
I do think it’s nice to have something at the top. But for me, the localised approach really makes sense, because humanitarian contexts are different, cultures are different, uses will be different, and resources will be different. So telling someone, “We have to use AI in a certain way all the time” – fine, you can have a few principles, I would say.
But the approaches, I think, will definitely be different. You can set principles – making sure the AI protects privacy, does no harm, things like that. I do agree with AI principles for the continent. But how you apply them should be different.
Ka Man: Thanks for sharing those examples, that was really interesting. For listeners who aren’t familiar, could you explain what Raspberry Pis are?
Deogratius: Oh, yeah, sure, sure. So, a Raspberry Pi is just a very small computer. Imagine a computer scaled down to something like twice the size of a microchip – smaller than your keyboard, about the size of your trackpad. At that size, a Raspberry Pi is a fully capable computer, with a motherboard and a complete computer system in there. It’s really small.
The idea of these small computers is that they let us take what a computer system can do into small spaces. That’s why they can be attached to so many things: they can store data, transport it, and create networks as well. And they’re actually cheap – around $75 to $100 each. There are even smaller devices, used for what’s called TinyML. So they’re basically small computers, simply put.
Ka Man: So they’re designed to be portable – attached, like you say, to that transport infrastructure, buses and motorcycles – but obviously in certain contexts that’s not applicable, because that’s not how people are moving around. That is really interesting. It’s also making me feel hungry, talking about raspberry pies [laughs]
Deogratius: [laughs] Yes, it’s definitely not the pie, not the food pie, but it’s…
Ka Man: Not a food pie.
Deogratius: Pi, pi with ‘i’, yeah.
Ka Man: Yeah, so it’s important to clarify that [laughs]. But I sometimes find that tech tools do have funny names, don’t they? And they can be quite confusing. When I was looking at SLMs, I was like, “Hugging Face? What is this?” [laughs] There are a lot of novel names. They do stick in your mind, but they can cause confusion. But thank you very much for outlining that with such a practical example. Thank you.
Deogratius: Sure, sure.
50:34: Chapter 6: Community Q&A: Deogratius answers your questions
Ka Man: So, I just want to quickly move to the next segment. So we launched our humanitarian AI research report on the 5th of August, where we invited the community to hear us share and discuss the findings. We had a lot of questions that we weren’t able to answer in that time and space, but what we’re doing now is rolling some of the questions forward into this podcast series, so that our community members have the chance for their questions to be addressed by different experts in this space.
As these were raised in the context of that event, some of them might not be directly applicable, but if you can share any insights or signposts, that would be really helpful. So let’s take it away.
So, the first one actually relates to some of the things we’ve already discussed today, and it’s around infrastructural challenges. Faruq asks, “What are the major infrastructural barriers to deploying AI technologies in humanitarian programmes across rural areas affected by banditry and conflict?” Do you have any thoughts on that?
Deogratius: Hmm, the first struggle: a place of conflict is very difficult, regardless of the technology. Whether it’s AI or any data tool, it’s going to be really hard in a conflict zone – you don’t know what’s really happening, and where there’s a lot of banditry, technology is an asset, and those assets tend to be the first to be taken.
I think the biggest struggle there will be getting your logistical and safety protocols in place, because that’s the biggest part. If you’re talking about a conflict zone – like what’s happening in Sudan as well – deploying AI on the ground can be tricky in certain contexts. But in a remote setting, I think it can actually be done, especially if you’re focusing on damage assessments: you can use a lot of satellite imagery for that, and the different AI systems that have come up. If it has to be on the ground, the logistical and safety protocol process definitely needs to be sorted first, and the rest will follow. I do think it’s very possible, but it’s definitely a challenge, and it’s the first thing I’d think about in a conflict zone.
Ka Man: Yeah, it’s a multi-layered challenge, isn’t it? Each aspect there compounds the existing challenges. But thanks for sharing your thoughts on that.
So the next question is from an anonymous person who asks a question around AI policies. In our research, we found that only around a fifth of respondents said that their organisations have an AI policy. So this seems to be a key priority area for those who want to move forward in this area.
So, anonymous asks, “What kind of humanitarian organisations have AI policies, or are providing training on this? Knowing that information can help the rest of us understand who is leading the way, and how we can learn from them.”
Deogratius: That’s another interesting question. I can see there are lots of people who want to understand what their organisations can do.
Off the top of my head, I would say that lots of UN organisations have really put out policies and data strategies – UNHCR, UN OCHA, UNICEF, and the Red Cross agencies are doing a lot in that space; I’ve seen lots of reports around that. I know Oxfam, CARE and many others are also doing a lot of internal work to understand their use of AI.
But I would say this: before we even jump into organisational policies – because, for me, policies come into place when we actually have a full understanding of what’s going on – we need to first ask, “What do we understand about AI?”
What do you want to understand about AI before you even think about making a policy? Because you can’t make a policy for something you don’t fully understand – at least, that’s what I think. So before we say, “Let’s classify humanitarian AI,” let’s first understand AI itself. Before we make it into a sectoral thing, let’s understand what it is. Understand the fundamentals.
Critically, understand what tasks it can take on. Whether we use those tasks for humanitarian causes comes next. I need to understand: “This is what I do every day, or this is what our organisation does. If we’re a humanitarian organisation and we distribute food all the time – that’s really our core business – what can AI do in our core business? In the core work that we do?”
Then we ask ourselves, “What are the biggest pain points in the work we do?” We have to agree on what the biggest problem is. Yes, someone may say, “I want to write my emails faster, I want to keep my records better, I want to transcribe my documents much faster.” Those are your individual pain points, but as an organisation, where do you see your biggest pain points?
I think that’s where people don’t want to have the conversations: why is our organisation struggling the most? Once you identify that, you can ask, “Can AI help us with this?” If the answer for the number one pain point is no, you move to the second biggest; if no, number three – you go down the list until you find where AI first comes in. Where do you see AI coming in to help you on that first pain point?
Once you have that, then you learn about AI: “Let me learn how it works, what the systems are, what models are behind it, what processes are behind it, how it keeps our data, how we remove our data.” All those things come into play, because you’re building from a point of pain. If you build from a point of comfort, you won’t see the benefits, because the AI is just doing work you were already able to do, and weren’t feeling any pressure about.
So if the AI does that for you, you’ll say, “Wait, nothing has changed. I still have my pain point,” and then people will say, “Oh, the AI is not working for me. I’m not seeing a return on investment” – just because you focused on something you don’t really consider valuable enough.
For me, that’s the first part. Once you’ve done that, it’s very easy to put up a strategy. Then you can say, “Let’s put up a policy – let’s say our policy is that when you’re using this for distributing goods…” – all those kinds of things come. But before we get to who makes the policy and who makes the strategy, let’s understand. I see lots of organisations putting up AI policies, and they look nice – they’re flashy, they’re 40 pages, 100 pages – but does that translate into actual usage? That’s when you hear everyone saying, “No, we don’t use it that much in our organisation. We have a big AI policy, but no one is using it.” So that’s my take on it.
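To make that triage concrete, here is a minimal sketch of the “start from your pain points” loop Deogratius describes. The pain points and the yes/no judgements below are invented for illustration – and in practice, deciding whether AI can help is exactly the human conversation he is calling for.

```python
# Sketch of pain-point-first triage: rank organisational pain points from
# biggest to smallest, then walk down the list to the first one where AI
# can plausibly help. All entries below are made-up examples.
pain_points = [
    ("last-mile food distribution logistics", False),
    ("matching donor reports to programme data", True),
    ("transcribing field interviews", True),
]

def first_ai_opportunity(ranked):
    """Return the biggest pain point where AI can plausibly help, or None."""
    for pain, ai_can_help in ranked:
        if ai_can_help:
            return pain
    return None  # no fit found: the answer may be "not AI, not yet"

print(first_ai_opportunity(pain_points))
# -> 'matching donor reports to programme data': start learning about AI
#    here, building "from a point of pain" rather than a point of comfort.
```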
Ka Man: That’s really interesting, Deo. In a conversation I had for this podcast with governance expert Timi Olagunju, he made a similar point: before you get to governance, there has to be AI literacy – there has to be understanding of what AI is, but also of what the problem is.
So I think that’s really interesting, because that’s a key theme emerging: literacy has to come first. And to speak to the other part of that question, there are lots of online resources, and hopefully more coming through – there’s clearly a real need. There are free resources, for example the NetHope courses available on the HLA’s Kaya digital platform (kayaconnect.org). Microsoft also has some good resources available, including video content in different languages. So people should definitely check those out to lay the foundations for AI literacy.
I was just reflecting back on my days of studying IT, what feels like many decades ago [laughs]. Even then, before all these innovations, we were told that systems fail – and that the majority of systems fail because they don’t address the fundamental problem. What’s the challenge? What’s the organisational problem? A system can fail right at the requirements capture stage.
I think it’s really interesting that, like you say, AI isn’t magic. The same process of organisational learning and understanding has to be built into it. So what you said in response to that question really resonated – thank you.
Deogratius: Sure, sure.
Ka Man: So the next question is from Augustine, around the theme of inclusive development. Augustine asks, “How can we ensure that AI tools are not only built for local organisations, but also built with them?”
Deogratius: That definitely links to the point I mentioned earlier about building communities around these tools. When I really got going in the open-source space, I fell into the digital public goods ecosystem, and I found those kinds of systems build a lot of community around themselves – whether it’s OpenCRVS, DHIS2, the MOSIP digital ID platform, or so many others. When you build a community around a system, it first of all lasts longer and is more sustainable, but you also get a diversity of views.
So I would say, for me, the first part of inclusivity is to build a community around the systems you’re building – including models, since we’re talking about AI models right now. Like I mentioned, look at what Meta is doing with the Llama models, and what so many other organisations, like IBM, are doing: you create a community. Once you create that community, you give more people an opportunity to come and contribute, or to learn on their own.
It definitely reduces the barrier to entry, because sometimes when people see these AI systems and models, they run away. But when you create a community, someone can come in at their own pace and start learning and building, staying within the community, picking things up from others, going to community meetups, learning the skills – and next thing you know, they’re contributing something too. So I think that’s the first thing.
We need to build communities around these models. The second thing is that we should not fall into the trap of the loop of “let’s keep improving the model, improving the model, improving the model” without getting usage and adoption. Those user journeys and user studies need to be done. We need to go on the ground and use it before we put out another version 5, version 10, version 100. Let people use the version we have now, and see the actual issues that come up, before we make another improvement.
But I do know that sometimes tech developers think, “We’ve released version two, so we start working on version three immediately.” For me, that’s not the best way to go about it. We need to get people to use it, see the struggles, see the changes – “yes, this is great; no, this isn’t happening” – and then improve in the next version. But right now, with AI models and the AI ecosystem itself, they’re fighting more to be first, like you’ve said: who makes the newest version, the latest version, the biggest version, with the most parameters ever. For me, that’s the biggest struggle.
We need more user development – testing, deployment, use – before any more improvement, and I think that’s where we get more opportunity for communities to be involved. So those are the two points I’d give: building communities around these systems, and getting those communities involved in the building, the testing and the deployment – the whole ecosystem. That will definitely make it happen, and make deployment much more common.
Ka Man: Thank you. It links back to that statement you made earlier about innovation – innovation happens when there’s acceptance. So it’s not about getting into cycles of innovation for innovation’s sake; it’s about meaningful innovation and progress. Thank you very much for sharing your thoughts on that.
So, we’ve got a last couple of questions from the audience before we move to the closing section. Callum – actually linking to Augustine’s question – has a question around community-led AI. He’d like to know if you’ve seen any examples of AI supporting localisation or community-led response models.
Deogratius: Mainly because I’ve been involved in that space: the Llama community – the Llama model from Meta – is putting on all these hackathons with students and early career professionals across the continent. Lagos; they were here in Rwanda; I think they went to Kenya as well; South Africa too – they’re moving around the continent.
These hackathons are design sprints where people download and use Llama – because it’s open source – improve upon it, work with it, and come up with something of their own. For me, that’s a good example. It’s definitely not community-led, but it is community-governed, which I think is the best model for that to happen.
But also, in my previous role with the Humanitarian OpenStreetMap Team: they have a geo-mapping AI model where they work together with the different communities within the mapping ecosystem to keep building on that model. So it’s not only them, as the core team, building it – they work with the different OSM communities on the ground in different countries to keep mapping and improving the AI model they have. For me, that’s another example of working with the community to build the model, as opposed to just giving it to them and saying, “Hey, it’s here for you, please use it.” Making people builders and users at the same time is key.
I know UN Global Pulse is having a lot of interesting conversations around AI and languages. Especially in Uganda, I think they work with different linguists to improve the AI models in the local language. Another example is Ushahidi, an NGO that focuses on gathering sentiment from people on the ground; they’re incorporating AI into their work from the ground up, with the communities they already work with to collect this information.
So yeah, those are the ones that come to mind, but it’s very similar to the earlier conversation: you need to build a community around the systems. You build a community, they work with you, and they learn to become builders of the models and users at the same time – and those who can’t build can still use them.
But giving people the option to be part of a community is what we’re longing for. In a world with so many digital rooms and digital spaces, communities are really key to keeping this software going.
Ka Man: That’s so interesting, thank you. Community really does have to be central. That message and theme is really coming through in all the conversations that I’m having, and you really articulated that point well, so thank you.
So my final audience question comes from Salah, who asks about AI and whether it can support resource allocation: “Can you elaborate on AI’s impact on decision making? How can AI help with resource allocation in organisations?”
Deogratius: I think that’s something people usually assume – “oh, AI is going to be able to do that.” But for me, just to track back, I’ve always said to people that AI is just a tool. It’s a tool that helps us make decisions. I’ve never been of the view – and I don’t think AI is there yet – that it should actually make full decisions for us. I feel it’s a helper, and I know some people may have their own opinions on this, but I do think AI is supposed to be, and should be, a helper for the human being who’s going to make the decision.
So, principally, I do believe that in the end, the human being behind the AI needs to make the decision that the AI has helped them with. Simply put – to give you an example from the construction industry – we used to draw maps and plans by hand on paper, back in the day. Then the computer came, and we now do them in a more digitised way. The computer was just aiding us, helping us make the plans. It wasn’t making the plans for us.
So it’s just helping me do it faster and better. But I still have to sit there and say, “How big do I want to go? How small? How complex?” You still have to make the decisions. So one of the things we need to make clear to people, as AI becomes more available and more accessible, is that you are still in control. You are the decision maker; it’s helping you make a decision.
And you can still say no to its answer. That’s something I discuss with some of the students here about how to use ChatGPT and similar tools: you are supposed to interrogate the answer. It’s not gospel truth. Interrogate the answer, improve on it, use more prompt engineering. If you feel it’s not working for you, ask again, change the parameters you’re working with. Not every answer it gives is gospel truth. I think that’s the first thing.
Now, on to the actual question: can AI be used in resource distribution? I do think it can help you with resource distribution decisions, in multiple ways – starting from a geographical point of view: “Where do I need to send resources? Who needs them most?” It can make those calculations. In the humanitarian sector, these are some of the toughest decisions you have to make. I remember when I was working in refugee camps in Uganda, we had to make decisions on even just where to travel, at what time, and in what spaces.
We had to make those calculations ourselves, and I think AI can definitely come up with them once it has the different parameters. But I keep saying: yes – but you need to remember that you are still the person making the decision. So ask yourself how much information the AI is giving you. What information does it have? What complexities are involved? Does it know the weather? Will it know the different uncertainties that may come up? If you’re working in a humanitarian context, there’s a lot of uncertainty: you’re moving straight ahead, and the next thing you know, there are bullets on the road.
Sometimes the road looks okay on Google Maps, but when you get there, the bridge is broken, or something like that – so you need to factor that in.
But just to put the icing on that, someone once told me this: a good decision will always remain good, based on the information you used to make it. If you use all the information you have available to make a decision, and that decision is good, it will always remain good. A bad decision will always be bad, even if more information is brought in around it. So really focus on the information that you have.
You work with the information you have to make a decision – from your experience, from intuition or educated guesses, from planning it out critically. In the end, you combine all of those to make a decision. And AI can be part of that.
Ka Man: That’s very good advice, thank you.
So, sadly, we’re going to wrap up this conversation shortly. It’s been so interesting hearing your take on things, because you’ve got technical expertise as well as contextual understanding of the humanitarian and development sectors – plus your current role within the university and your previous experiences. I can see that the intersectional lens and space you sit in is really useful, and I’m sure our listeners have gained lots of really good insights from you – as well as probably a whole list of terms they’re now ready to Google to learn more [laughs].
Deogratius: Yeah.
71:08: Chapter 7: Deogratius’ view: what’s needed to accelerate progress
Ka Man: So, just to sort of wrap up, I just wanted to ask you a question around blind spots. Could you share one thing around localised AI solutions that you think is overlooked or not talked about often enough, but that you believe is vital to accelerate our shared progress in this space?
Deogratius: I think one of the things we don’t talk enough about is the human in the loop. We focus a lot on the tool, on the system itself, and we don’t talk about the human being behind the AI – the human being giving the information, the human being verifying the information. That human being is very critical: all the AI models give you what human beings have given them. So I think we need to ask: who is the human in the loop? What are they doing? How can we support them in making the best AI possible?
Especially in the localised context, we don’t want a system that builds another system; we want a programme with a human being in it. And the other thing is: let’s have the conversations. We need to have these conversations. I know nowadays AI is having conversations with you, but we need to continue having conversations as human beings, because that is how you get the localised context. You cannot learn that from the machine alone.
Have a conversation with people, go to the people, have a conversation with them. Sometimes they’re tough conversations, sometimes they’re easy. We can’t run away from those conversations. Whether it’s online, offline, we need to have conversations. So I think, for me, that’s what I would say in the localised context, especially with AI. Have that conversation. You learn a lot from just having a chat with someone. Like we’re doing right now.
72:58: Chapter 8: Closing reflections
Ka Man: Thank you, and I’m so glad that we’ve had this conversation. I really have learned a lot from you. So, thank you very much. Before we wrap up, do you have any closing thoughts to share with our listeners?
Deogratius: Yeah, so I would say that I think AI is going to be amazing. It’s definitely going to bring big changes on the continent, in the Global South, and generally in the world as well.
But let’s really invest our time and energy in understanding AI. Let’s not rush to be the fastest, let’s not rush to be the best at something – what matters is that the AI is actually working on the pain points you have in your organisation. That’s my big takeaway, and I’ll mention it multiple times: don’t rush to know everything about AI; learn what AI can do for you, starting with your pain points. How can you get your pain points out of the way, so that you can actually see a return on investment from the AI? So don’t rush anything. Focus on the pain points you have, and on getting those pain points out of the way. Thank you so much.
Ka Man: That’s really good advice. There is so much anxiety around AI at large, so I think hearing experts like you share your wisdom and say: let’s test, let’s trial, let’s really unleash the potential, but let’s take a measured and informed approach, and keep humans central in that process – that’s a really key takeaway after hearing you and the other experts speak on this topic.
So, thank you very much for sharing your insights with us today on the podcast. Deogratius, thank you very much for joining us for today’s episode of Fresh Humanitarian Perspectives from the Humanitarian Leadership Academy.
[Music fades]
When you have a community movement behind an AI technology, an AI ecosystem, an AI model as well, you get the best out of it. You really push it to the limit. Mapping the world alone, you can’t do that, even if you have AI right now. Having those open systems, testing in public, working with communities, I think is really key in driving not only AI, but general tech.
I think AI is going to be amazing. I think it’s definitely going to be big changes on the continent. But let’s really invest our time and energy in really understanding AI. Let’s not rush to be the fastest, let’s not rush to be the best at something, but as long as AI is actually working on the pain points you have in an organisation. How can you get your pain points out of the way so that you can actually see return on investment of the AI as well.
About the speakers
Deogratius Kiggudde is a Programs Manager for the Upanzi Digital Public Infrastructure Network at Carnegie Mellon University Africa. He manages teams that oversee multi-country research, innovation, capacity building, and outreach initiatives across AI, cybersecurity, connectivity, digital ID, payments, data governance, and DPI implementation.
He leads stakeholder engagement in the digital sector, bringing together government, donors, the private sector, and academia through policy panels, solution demonstrations, conferences, and roundtables. He also strengthens partnerships and supports delivery through user training and documentation.
Previously, he served as Senior Programs Manager for Technology and Implementation at the Humanitarian OpenStreetMap Team, where he coordinated multi-country teams, expanded a regional grant portfolio, and built impactful partnerships. He holds a BSc in Quantity Surveying, a Postgraduate Diploma in Monitoring and Evaluation, and a PMP certification.
Ka Man Parkinson is Communications and Marketing Lead at the Humanitarian Leadership Academy. With 20 years’ experience in communications and marketing management at UK higher education institutions and the British Council, Ka Man now leads on community building initiatives as part of the HLA’s convening strategy. She takes an interdisciplinary people-centred approach to her work, blending multimedia campaigns with learning and research initiatives. Ka Man is the producer of the HLA’s Fresh Humanitarian Perspectives podcast and leads the HLA webinar series. Currently on her own humanitarian AI learning journey, her interest in technology and organisational change stems from her time as an undergraduate at The University of Manchester, where she completed a BSc in Management and IT. She also holds an MA in Business and Chinese from the University of Leeds, and a CIM Professional Diploma in Marketing.
Continuing the conversations: new Humanitarian AI podcast miniseries
This conversation is the fifth episode of our new humanitarian AI podcast miniseries, which builds on the August 2025 research: ‘How are humanitarians using artificial intelligence? Mapping current practice and future potential’. Tune in for long-form, accessible conversations with diverse expert guests sharing perspectives on themes emerging from the research – including implementation challenges, governance, cultural frameworks and ethical considerations, as well as localised AI solutions – with global views and perspectives from Africa. The miniseries aims to promote information exchange and dialogue to support ethical humanitarian AI development.
▶️ Episode 1: How are humanitarians using AI: reflections on our community-centred research approach with Lucy Hall, Ka Man Parkinson and Madigan Johnson [Listen here]
▶️ Episode 2: Bridging implementation gaps: from AI literacy to localisation – in conversation with Michael Tjalve [Listen here]
▶️ Episode 3: Addressing governance gaps: perspectives from Nigeria and beyond – in conversation with Timi Olagunju [Listen here]
▶️ Episode 4: Building inclusive AI: indigenous knowledge frameworks from Kenya and beyond [Listen here]
Links
What are Small Language Models (SLM)? | IBM
Irembo – Deogratius mentions this in the conversation as an example of a Rwandan government platform
Share the conversation
Did you enjoy this episode? Please share with someone who might find it useful.
We love to hear listener feedback – please leave a comment on your usual podcast platform, connect with us on social media or email info@humanitarian.academy
Disclaimer
The views and opinions expressed in our podcast are those of the speakers and do not necessarily reflect the views or positions of their organisations. This podcast series has been produced to promote learning and dialogue and is not intended as prescriptive advice. Organisations should conduct their own assessments based on their specific contexts, requirements and risk tolerances.
Episode produced in October 2025