8th October 2025
Wakanyi Hoffman
Ka Man Parkinson


How can indigenous knowledge systems and African philosophy reshape how we design, deploy – and retire – sustainable and contextualised humanitarian AI tools?
Our recent AI research surfaced ethical and cultural concerns raised by humanitarians around the world, including the ownership and suitability of AI systems developed elsewhere for localised humanitarian action.
In this fourth instalment of our six-part humanitarian AI podcast series, we’re delighted to welcome Wakanyi Hoffman, Head of Research on Sustainable African AI Innovation at the Inclusive AI Lab, Utrecht University, to explore how Ubuntu philosophy, African storytelling traditions and community wisdom can transform humanitarian AI development.
Wakanyi speaks to Ka Man Parkinson to discuss and challenge fundamental assumptions about who gets to build AI, whose stories feed these systems, and whether technology should be permanent or temporary. Drawing on examples from Kenya to pan-African innovation, this conversation reframes AI not as something built for communities, but by them – with profound implications for humanitarian practice.
Tune in for a thought-provoking, human-centred conversation on AI development, including:
- Ubuntu and the “right to relate” framework: Rethinking AI ethics beyond individual rights to encompass relationships with land, community and planetary flourishing
- Storytelling as indigenous technology: How recognising ourselves as storytellers and “data workers” reclaims power in shaping AI systems and whose knowledge gets amplified
- Decolonising AI through community power: Challenging colonial patterns in both humanitarian systems and technology development, and what pan-African innovation offers as alternative pathways
- Small language models and sustainable design: Building contextualised, temporary AI tools with and by communities – designed for retirement, not permanence
- Plus, community questions: Wakanyi addresses audience concerns about digital divides, cultural representation, human rights and authentic community amplification

Keywords: Ubuntu philosophy, right to relate, indigenous knowledge systems, storytelling, decolonising AI, power dynamics, data sovereignty, Global Majority, bias, community-led AI development, participatory design, inclusive AI, sustainable AI systems, technology temporality, small language models, inclusive knowledge systems, African AI innovation, humanitarian technology, localisation, digital divide.
Want to learn more? Read our Q&A with Wakanyi, which includes more on her perspectives on Ubuntu philosophy.
Who should tune in to this conversation
These insights are essential listening for humanitarian practitioners, community engagement specialists, and AI developers seeking to centre indigenous knowledge and community wisdom in technology design. The conversation is particularly valuable for those working on localisation, participatory approaches, and decolonising humanitarian systems.
Wakanyi addresses questions from the humanitarian AI community about digital divides, cultural representation, and authentic community amplification as we explore pathways to building sustainable, contextualised AI with and by communities rather than for them.
Episode chapters
00:00: Chapter 1: Introduction
02:45: Chapter 2: From journalism to global education to Ubuntu philosophy: Wakanyi’s intersectional world view
11:38: Chapter 3: Contextualising AI: community participation, debiasing systems, and the potential of small language models
29:55: Chapter 4: Storytelling as a cultural tool for AI development
39:32: Chapter 5: Building systems with community wisdom: working in equitable partnership
52:37: Chapter 6: Wakanyi answers community questions
1:09:15: Chapter 7: Overlooked priorities in AI ethics and closing reflections
Glossary of terms
We’ve included definitions of some technical terms used during this podcast discussion for those who are unfamiliar or new to this topic.
AI bias – Systematic and repeatable errors in AI outputs that reflect prejudices in the training data, design choices, or the perspectives of those who built the system
Analogue technologies – Non-digital tools, methods or systems that don’t rely on electronic devices or connectivity; discussed as sometimes being more appropriate solutions than AI
Citizens’ Assembly – A group or initiative creating public spaces for citizens to engage with policymakers and governments; Wakanyi mentions one launched during UN Climate Week focused on giving people a voice in shaping technology and policy
Data centres – Large facilities housing computer servers that store data and run cloud-based services; they consume significant energy and water for cooling
Data workers – Recognition that all individuals creating content online (social media posts, messages, documents) are contributing unpaid labour to AI training datasets
Decolonisation – The process of identifying and dismantling colonial structures, mindsets and knowledge systems; in AI contexts, this means addressing whose stories, languages, cultures and perspectives are centred or excluded in technology development
Deployment – The process of putting an AI system into active use in real-world settings, moving beyond testing or pilot phases
Design thinking – A problem-solving methodology that emphasises understanding users’ needs and perspectives, prototyping solutions, and iterative testing; Wakanyi uses this in context of understanding the designer’s mindset when creating AI
Digital divide – Refers to unequal access to and effective use of digital technology
Extractive – Practices that take resources, data or knowledge from communities without fair compensation, consent or benefit returning to those communities
GDPR – The European Union’s General Data Protection Regulation, which governs how organisations collect, process and store personal data
Global Majority – Term used as alternative to ‘Global South’ to centre the world’s majority population rather than defining communities by geography or through deficit-based framing
Hyper-individualised – Technology or systems designed primarily for individual users without consideration for collective, community or relational dimensions
Indigenous knowledge systems – Traditional ways of knowing, learning and understanding the world that have been developed and passed down by indigenous communities over generations, often oral rather than written
Large language models (LLMs) – Very large AI systems like ChatGPT that work with text, require significant computing power and data centres to operate, and are trained on massive amounts of data
Localisation (humanitarian) – The process of shifting power, resources and decision-making to local and national actors in humanitarian response, moving away from international-led interventions
Localisation (technology) – Adapting AI systems to work within specific local contexts by incorporating local languages, cultural knowledge, values and community needs, rather than deploying systems built elsewhere without adaptation
Non-AI pathway – An alternative way for people to access the same services or achieve the same outcomes without using AI systems.
Open source – Software or technology where the underlying code is freely available for anyone to use, modify and distribute
Participatory AI/design – Approach that meaningfully involves affected communities in AI system design and development from the beginning, going beyond token consultation to genuine co-creation
Right to relate – Ethical framework emphasising humans’ fundamental right to proper relationships with each other, with land and place, and with the planet; challenges AI’s focus on individual rights alone
Small language models (SLMs) – Smaller, more efficient AI systems that can run locally on devices like phones without internet connection, requiring significantly fewer resources than large language models whilst still performing many similar tasks
Sunset/sunsetting – Retiring or discontinuing a technology system. Traditionally, this means ending a system because it’s no longer fit for purpose or has become obsolete. In this discussion, Wakanyi describes an indigenous design approach where AI systems are intentionally built from the outset to be retired once a specific problem is solved – designing with temporality in mind so the technology can be successfully discontinued when no longer required.
Tech colonialism – The imposition of technological systems and structures by dominant nations or entities upon less powerful ones.
Training data – Information used to teach AI systems how to perform specific tasks; the quality, diversity and representativeness of this data directly affects how the AI performs
Ubuntu philosophy – The philosophy of being human together; an African way of thinking and African logic that emphasises how people come at technology from a mindset of collective ethics, interconnectedness and community accountability rather than individualism
Wisdom keepers – Term for individuals or roles that hold and transmit community knowledge; Wakanyi uses this particularly for teachers in contemporary contexts
Episode transcript
00:00: Chapter 1: Introduction
[Ka Man, voiceover]: Welcome to Fresh Humanitarian Perspectives, the podcast brought to you by the Humanitarian Leadership Academy.
[Music changes]
[Ka Man, voiceover]: Global expert voices on humanitarian artificial intelligence.
I’m Ka Man Parkinson, Communications and Marketing Lead at the Humanitarian Leadership Academy and co-lead of our report released in August: ‘How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential’ produced in partnership with Data Friendly Space.
In this new six-part podcast series, we’re exploring expert views on the research itself and charting possible pathways forward together.
[Music changes]
[Wakanyi, voiceover]: “We don’t need to be granted access. We need to decide whether we give access to corporations and others to mine our stories. And then we get to share those stories. We get to decide what are my stories that I need to share with the world.”
[Wakanyi, voiceover]: “That’s a very indigenous way of thinking about technology. How do we build technologies that have the potential to be retired? Once the project is over, then so should the AI system for that particular problem. And I think that’s one way of building sustainably – with temporality in mind rather than a permanent thing.”
[Ka Man, voiceover]: Welcome to episode 4 of our Humanitarian AI podcast series.
In today’s episode, I’m delighted to welcome Wakanyi Hoffman, Head of Research on Sustainable African AI Innovation at the Inclusive AI Lab in the Netherlands.
When people talk about building technology for and with communities, have you wondered, like me, what that might actually look like?
Wakanyi makes a powerful case for this inclusive AI approach. Drawing on her intersectional experiences and knowledge, she takes us on a rich exploration of how Ubuntu philosophy, African storytelling traditions and community wisdom can transform humanitarian AI development. We discuss reframing our relationship with technology using Ubuntu principles as a guide, focusing on our relationship with land, communities and each other. From building in our stories and cultural knowledge to designing with sustainability in mind, this is a deeply thought-provoking conversation exploring our collective relationship with technology on a very human level.
[Music fades]
***
02:45: Chapter 2: From journalism to global education to Ubuntu philosophy: Wakanyi’s intersectional world view
Ka Man: Hi, Wakanyi! A warm welcome to the podcast!
Wakanyi: Thank you so much for having me!
Ka Man: Oh, I’m so happy to have this time and space to talk to you today. So before we start, would you like to introduce yourself to our listeners? Maybe sharing a little about yourself and your journey into the world of AI, together with the other hats, and lenses that you see the world through?
Wakanyi: [laughs] Thanks so much. I like that you’ve mentioned the various hats that we tend to wear when we’re doing different things, or things that seem different, but actually are all interrelated. I’ll start with where I am. Currently, I am the Head of the African AI Innovation, the Sustainable AI Innovation at Utrecht University, and we have a lab called the Inclusive AI Lab. And so I’m looking at the research, I’m looking at what is emerging in the so-called Global South, which we like to call the Global Majority. For me specifically, I’m looking at the landscape that is Africa as a whole – so everywhere, you know, from north all the way to south. What is emerging? What are the trends? What are the AI solutions that people are trying to create using AI technology, and why? I’m very much interested in that designer’s mind.
So design thinking. And from that, I’m looking at the ethics of Ubuntu, which I know we will talk about here later. What Ubuntu, the philosophy of being human together, which is a very quintessential African way of thinking, or African logic even, if you want – how the ethics of that, how people come at technology from that mindset. So that’s where I am at now.
Now my journey! [laughs] My journey, I would say, began probably quite a long time ago when I finished college in Kenya, and I went into journalism. I was a journalism major, and I worked for two of the top newspapers in Kenya. And it was, I think, at that time that I realised this question of inclusivity, whose stories are included, whose stories are excluded, and why. I hadn’t understood until then that that was an editorial choice – that you could make the decision to include certain stories, and I would always be that person who was trying to bring the stories from the margins. I would be talking about stories of certain communities that are marginalised, or communities that aren’t even known of, and I wanted to bring those into the mainstream media. And somehow, of course, the news was always around the politics, and very much inspired or influenced by who supports this newspaper, or that newspaper. So, corporate money or corporate energies always influence the stories that we consume.
That was the beginning of that, and then I went on and did a master’s in development education at UCL, University College London. And then again there, it was about global education, but somehow I kept wondering how we could possibly be talking about global education and completely miss out other ways of knowing – the indigenous knowledge system, which is not accommodated within that narrative of what a globalised education system is. So again, I realised this gap was always missing. Those on, presumably on the margins, right? And so I always wanted to bring that, integrate those two worlds.
And then I ended up being a fellow at the New Institute quite recently, where, again, I was looking at the question of what does it take to flourish? So, non-material conceptions of human flourishing, and I was bringing in this knowledge of indigenous ways of thinking and knowing that have always been with us, that have always actually influenced the way that we all think, but it doesn’t seem that way when you look at stories emerging from the front, right?
From that perspective, then AI became a mainstream narrative. And so that’s sort of my journey towards joining this space that we’re now all grappling with, trying to understand what is artificial intelligence, and how does it influence our daily lives, and that’s how I ended up at the Inclusive AI Lab, heading the African Sustainable Innovation.
Ka Man: Thank you so much for sharing that, Wakanyi, a little whistle-stop tour through your professional history and interests. So interesting, I’ve got so much to ask you. Maybe I’ll have to save some of my questions for over virtual coffee another time [laughs]. But I think it’s so interesting, obviously, because I’m coming at this from a communications perspective as well, when you were talking about your early career in journalism, and the editorial choices that were being made, what shaped and influenced that, who you spotlight, who you feature, and whose voices that you centre. And obviously you can see parallels directly to technology now, so I think that is so interesting.
And just like technology and journalism, you’ve got the questions about power and control, the ownership of that, and then who’s consuming that information and those messages that are chosen to be positioned, so I think that’s really interesting.
Global education also feeds directly into this, and I think that’s really interesting. We hear this term decolonisation a lot, so thinking back to your questions around studying global education in London at that time, and some of the gaps that were emerging in the academic inquiry you were following – do you think it links to questions around decolonisation, or is this a separate perspective?
Wakanyi: Yeah, I think both. I think there’s a separate narrative around decolonising the education system and decolonising especially academia, because that’s where our knowledge systems are built, right? Or mainstream knowledge systems, right. And yes, I did notice, like I said earlier, being at UCL and working on global development education and global learning, and without any mention of other ways of learning, other ways of knowing the world. It seemed like a very colonial lens of looking at development. Development for who? I kept asking, and others did, we were very open about this question. And so, yes, I very much – I don’t call myself a decolonial scholar, per se, because I think that’s a whole other way of coming into the world. But I look at how can we design spaces for integrating different ways of learning and knowing that then includes this decolonial lens through which we can look at the world.
And especially now, when you’re looking at AI, the content that’s been used to design AI systems, we know, is coming out of what has been traditional media or traditional knowledge, right? Not – and when I say traditional knowledge, I mean, like, modern, right? Western, modern knowledge systems. Not traditional indigenous and other knowledge systems, certainly not even localised knowledge, simply because of access. There are not many scholars in the Global Majority or Global South who have their work published in these mainstream journals and newspapers and media that is globalised. And so, without those stories in there, then the machines are only fed what appears to be a global perspective, but it’s very much a very narrow colonial lens.
So in that sense, yes, I think my work does somehow influence that decolonial, decolonisation that we all need to be participating in, that we are all actually, in some ways, responsible for, whether you come from the, you’re a product of the coloniser or the colonised. Somehow or the other, now we’re looking at a whole other landscape, a whole other instrument, right – a technology that has the possibility or the power to actually influence all of the ways in which we appear, right? So has the ability to colonise all of us at scale. So it’s like, people coming from the margins, the Global South, the Africans, and minoritised groups, then suffer this double colonial experience within AI, right? But on the whole, I think we all have that risk, the risk of losing our identities and our knowledge systems.
11:38: Chapter 3: Contextualising AI: community participation, debiasing systems, and the potential of small language models
Ka Man: The sort of tech colonialism is a really interesting one to explore. Linking to that, so in our research, particularly in the open comments, respondents expressed concerns around the appropriateness of AI systems – you know, like ChatGPT, Copilot, et cetera – developed primarily in America. So they’re trained on data and developed in different contexts that are not reflecting, or may not reflect, the characteristics of local populations. So, what do you think are the biggest risks when AI systems are developed elsewhere and deployed globally without this localised knowledge or indigenous knowledge that you’ve talked about? And I wondered if you have any examples that may relate to the humanitarian sector?
Wakanyi: So, I’ll start with where we are at in our lab, and I actually want to start with saying, bias is something that is – it’s not a bug in the system. And I don’t know that it’s by design, but we are biased. We’re all biased. Human beings, we’re prone to biases because we see the world, we perceive the world as we know it, often, not as it is, right? And so, coming from that perspective, I can somewhat sympathise with the technology builders who then shape technology based on the way that they see the world. But I think this is a question – this is where then it becomes truly important to think about ethics and inclusion and participation. And so how do you – and this is a question that I have for AI technology designers – what ethics, what moral ethical considerations are inspiring you, influencing the way you see the other, right? And particularly when we’re talking about the humanitarian space, which is, it was, it is a system that was built out of a very colonised world, right? The humanitarian need to go and save others, or help certain communities, or respond to crisis, came out of a very colonised world already. So from that lens, we can see how even the technologies that are deployed for humanitarian response are also coming from a colonial mindset of how the world should be, rather than how the world is at the local level.
And I wrote a paper about it, where I said localised knowledge should influence the response, humanitarian response. And now we have AI tools that are also built from that perspective, from that very colonial, globalised world view that negates the real lived experiences of people on the ground.
So, for example, I was doing research in northern Kenya, within the Samburu community, and one of the things that struck me is there’s a river. It’s called the Ewaso Nyiro River, and this river should be about the size of, let’s say the Hudson River. Perhaps even bigger, like, that’s how big it should be.
But now it’s down to a little stream. It’s down to a little stream that has been siphoned upstream to feed off agricultural farms and other emerging farms. But eventually, those kinds of natural resources are going to be used to cool down the data centres, right? Because there’s this sort of assumption that indigenous people, or the communities that live in those marginalised spaces, don’t, are not interested in technology, or are not part of the global agenda. I feel like that’s the underlying assumption when we’re responding to people’s actual needs on the ground, from a humanitarian perspective. So how might we create AI systems that truly include those knowledges, those localised knowledge systems, to promote not just human flourishing, but actually planetary flourishing.
And I think this is where the question of inclusivity comes in. So, for example, in our lab, we have someone who’s working with Adobe, the photo stock company, looking at de-biasing the stock images that are not representative of people as they are, but they’re representative of people as we imagine them to be.
And certainly, I mean, you work in the humanitarian sector, when you look at images of children in Kenya, where I’m from, those are not pictures of children that represent the kids that I know, right? It’s a very small percentage of kids that appear that way, but that image then creates a perception of who is the person in the Global South, and then who gets to speak on behalf of those children. So, my work is very much looking at how do we de-bias the creative space? How do we de-bias the narratives?
And I’m also coming at it with the understanding that we all bring our biases into these spaces, including those we, those presumably in the marginalised or minoritised groups. So, it’s less about removing the bias, but actually having spaces where we can have a conversation about true representation. So participation is key in creating systems that actually create spaces for people to see themselves as they are.
Ka Man: You raised so many great points there, Wakanyi, there’s so many avenues there. But I thought it was really interesting what you said about the humanitarian system being founded on a colonial sort of mindset and principles – well-intentioned, but with agendas – and obviously the system’s evolved. But there are structural challenges that actually have obviously come to a head in the past year or so, and there’s a lot of structural change and calls for reform in the UN and beyond. So, the sector is grappling with this, and at the same time, you mentioned tech companies being built with a certain mindset, from the bottom up, with obviously commercial agendas and certain assumptions about who’s a user of technology and who wants to be engaged in the system. And often not taking into account communities outside of their sphere.
So what I grapple with a lot, particularly through this research that I’m embarking on now, is I almost feel like the humanitarian sector’s got this double whammy, these double challenges of these two sectoral and structural challenges, overlaid over the top of each other, and I think that’s why it feels like quite a big weighty challenge.
And what I’m grappling with is what interventions can unlock some of these barriers and actually create positive change. And then there’s obviously individual humanitarians trying to keep everything going, keeping the system together. Do you have any thoughts on how, practically, that we can reconcile this, because it feels so big, but like you say, you were talking about making space for conversations and recognising the bias in the system. I wonder if you could speak to that a bit.
Wakanyi: Well, firstly, the good news is that the UN is on its way to being reformed, right? So there’s a lot of reformation talks, and I like that. I think it’s Heba Aly, she used to run the New Humanitarian, it was called, right? And now she’s part of that cohort or task force tasked with reforming the UN, or writing a new UN charter, or something like that, right? So that’s helpful, and those are the spaces that I think that the biases and the narrow perspective, the narrow colonial worldview, that space can be expanded to include people that have not held that worldview, have actually been living very differently and alternatively, right?
And another thing, I mean, I think maybe it’s not a solution, but it’s something to be concerned about, and to be careful about, is the possibility of turning the UN or the humanitarian sector into a commercial space. Because we now have an interesting time that we’re living in, where technology was never built for responding to humanitarian crises, right? These were two different – so technology was one arm of society, right? Then you have government, then you had the humanitarian, the non-profit or non-governmental organisations, right? The UN being one body. And so now, all of a sudden, you have the UN interacting with technology, and maybe even less so with government, right? Or governments being influenced by technology, which we know underneath all of that is the corporate world, right?
So I think this is – and I love it when I’m in spaces where there are technologists, people from the business world, government officials, academics, and the nonprofit world, coming together to see what are the ways in which we see the world. Because all these groups of people have a very narrow view of the world. If you look at it from the UN world, all you see is crisis and funding that needs to go to X. Sometimes even – I mean, one of the complaints within the UN system is that even the distinct UN agencies don’t communicate with each other when they are responding to the same crisis, right?
So, maybe that’s where the role of technology is important, that it can help create efficiency within those systems of governance within the UN and humanitarian world. But at the same time, we must somewhat distinguish between what is driving the current technologies, which is commercial, profit-driven agendas, versus what drives the nonprofit humanitarian world, which is to create conditions for communities to be flourishing on their own, right? So that you don’t risk this energy of corporate, the corporate energy coming into the humanitarian world and recreating the same problems, because it needs to keep itself alive in terms of profit, right?
And then overall, having governments at the local level, be involved in creating these spaces where these conversations can occur. So I think that there are three different groups of people that need to come together and form a whole other – I don’t know if it’s a new consortium, or a new agreement, certainly a new agreement, of how the world actually is versus what it looks like from our different corners, and from our different bubbles.
Ka Man: What’s quite interesting is you, and as well as the other guests that I’ve spoken to through this podcast series, all look at AI, come at it from a very intersectional lens. So, interestingly, they’ve all had a different range of careers, you know, from linguists, lawyers, construction, before they find their path to technology. But it gives that really unique perspective and can see the gaps and connections that other people who may be in a certain track or bubble may not see. So I think it’s really interesting that you’re saying that technology could be this common thread that enables more intersectional approaches and dialogue to make progress in this space. So I think that’s really interesting.
And another point that you touched on was finance and the sector sustaining itself. And again, this is another big theme that I’m grappling with in my head, trying to see how we can make progress in this space. So in episode 2 of this podcast series, I spoke with Michael Tjalve, from Humanitarian AI Advisory, formerly of Microsoft. And he was talking about the importance of localising AI through training systems on more of the world’s languages, so as to not perpetuate the digital divide. And I was asking him, what’s the blocker to this happening, you know, not faster, because obviously it makes sense, let’s get more people on systems and provide that access. And it was the training of the systems, and there’s a big cost attached to it.
So, since that conversation, I’m thinking, how can we incentivise people, companies, funders, to do that? I can’t reconcile that in my head. What’s the incentive? I thought, would it be that people who do invest in this will have access to new markets, so to speak? But then I thought, well, from a humanitarian perspective, that feels, I don’t know, how does that sit? How does that land? Does that feel extractive? So this is something that I’ve been sort of pondering in my mind. So it was interesting when you talked about the finance side of things and rebuilding this model. So I didn’t know if you had any thoughts on that?
Wakanyi: Yeah, it’s very tricky, isn’t it? Because the humanitarian world is explicitly not-for-profit, right? And it’s there to serve, it’s sort of the arm of, it’s a global arm of government, right, if you may, that helps people on the ground. And so, when you bring that commercial aspect into that space, then it changes sort of the intention. And it changes even the way that projects can be designed.
One of the things that I’m finding interesting, and I read this recently, is that the biggest digital divide beyond access, beyond the fact that something more than 65% or maybe even 80% of people in Sub-Saharan Africa don’t have access to digital tools, or digital technology, or even the internet, right? But beyond that, on a global scale, when we’re talking about AI, the biggest digital divide is actually the fact that the cost of running these systems is so steep that it is virtually impossible for anyone to build a large language model in Africa today that doesn’t depend on funding from elsewhere, from the Western entities, or from technologies, or from big corporations that are not based on the continent. And this is obviously something that people are working on.
But just based on that – which is why I’m a big fan of the concept of small language models. So if there is any way, I think this is a space that can be shaped by the humanitarian world: to see what the problem is on the ground with local communities, and build small, with local communities on the ground. That could be a humanitarian project. Because it takes away that divide, which is only going to get bigger and bigger, right?
So then in the end, actually, there will be no need to build a large language, another large language model because there’s already one that’s coming from Silicon Valley, and we can all adapt to that and contextualise it. But we know that it is inherently biased, it requires a lot of work. And still, we are at the behest of someone sitting in San Francisco, right? But then, localised small language models, and there are various ways to do this. There are indigenous communities that are working with this, building out of – I’ve seen recently a group that was building out of a container. So, using that container technology to build a small language model that doesn’t require, doesn’t take up a lot of energy and doesn’t take up, can use solar and all of these other things.
So, working – my point is that we need to work with who is on the ground, and see what is possible on the ground, what are the resources on the ground. So, going back to, like, that community in Samburu land, the river’s already there. And then slowly, you start to see that, actually, the people on the ground get to decide how the land is utilised, for who and for what purpose. That in itself is a humanitarian project. That is small, but yet can spread out in different parts of the world.
Ka Man: It’s so interesting that you’ve mentioned small language models. This has come up in this podcast series, so Michael Tjalve has also mentioned it, and a guest that we’ll be hearing from, Deogratius Kiggude from Carnegie Mellon University in Rwanda – he said the same. So, I think this is really promising, and actually, the more I hear about it, the more applicable and relevant I think it is for the humanitarian sector.
But also, in general, because these large language models, as you’ve mentioned, are so resource intensive, it feels at the moment, when I hear the news, it’s like, oh, another data centre’s being built here, and it’s the size of so many football pitches, and it just feels quite overwhelming, actually. And you think, what’s the end game? Especially when we’re reconciling this with climate impact and yeah, resources just at large. So, really interesting, thank you for sharing that.
29:55: Chapter 4: Storytelling as a cultural tool for AI development
So, I liked how you brought in the example of the river in the previous part of this discussion. So, a major strand of your work is storytelling, particularly African folklore. So I would love to hear a bit more about this, and how this knowledge can be used as a cultural tool in the development of AI systems. And in particular, what your take is on the relevance for humanitarian AI systems.
Wakanyi: Well, thank you so much for bringing up the work that I’m doing with the African Folktales Project. The humanitarian sector relies on stories, right? It relies on stories on the ground, stories of people in need, stories of people in crisis, stories of how people are grappling with development issues, sustainable development even, right? Sustainable development goals were developed to establish a framework of the way in which we can respond to challenges of being on this Earth. So, storytelling is itself a technology, and I like to say that firstly, we’re all technologists, because we’re all storytellers. To tell a story is in itself an act of using a particular technology that is unique to human beings, right?
And so, from that lens, we’re all shaping the world. We’re all shaping narratives. We just might not be aware of it, we might not be doing it consciously. So I think that this is a moment at which we can all start to look, and again, going back to small language models, what can we build that is open-sourced that captures stories, that captures stories in the original languages, and helps small communities in parts of the world that are marginalised bring up their knowledge, bring it up to the level that the global knowledge system is at. So my work with the African Folktales Project is a, it’s a curation process. And we’re all curating, we’re all on social media, we’re all sending each other messages on WhatsApp. All of that is going to feed certain AI systems, Meta and others, right?
So if you think about it from that perspective, there’s no way to really opt out of this technology, right? And so, the humanitarian system could then at this point, be shaping the narratives, be helping communities have access to ways of representing themselves accurately, on their own terms. So we are beyond this idea of someone coming in and trying to represent a community, right? Somebody going into the Samburu community and saying, this is what they need. Why don’t you go there and actually have them speak back to the world and say, this is who we are, and this is what we need. And we can do that with AI, we can do that with all kinds of other technologies.
So I think the power of storytelling cannot be undermined. If anything, it needs to be somewhat accelerated. Make visible those that have remained invisible. The African Folktales Project is working with teachers across the continent to co-create a curriculum based on African knowledge. So where teachers are, I call teachers the wisdom keepers, the modern-day wisdom keepers in the classroom, with stories and experiences, and a deeper understanding of the local landscape, right? They understand where the children are coming from, where certain children don’t have access to books, or food or whatever. So, imagine if you have an AI system that is able to understand that, that is run and developed by teachers, then you have a different curriculum, and you have a different education system. So this is – there are many ways, but this is just one example of ways in which we can use stories, storytelling as a tool to shape the technology, and that we can all participate in, because we’re all storytellers.
Ka Man: I love that. I love how you said storytelling is a technology. I think that’s very powerful, it’s very simple, but very powerful, actually, because it gives the power back to us.
Wakanyi: Exactly.
Ka Man: That we can effect change in the most simple way. Because obviously storytelling is something we’ve all engaged with in some form since we arrive in this world. Actually, before, even in the womb, right?
Wakanyi: Yes.
Ka Man: I think that’s so interesting. So, do you advocate for people to embrace their own storytelling and creative expression?
Wakanyi: Absolutely, and that is the one thing that – I’m part of a group, a new group, recently new, it’s called the Citizens’ Assembly, and this is an assembly that was developed and launched, actually, at the UN week in New York, at the Climate Week. And it’s all about creating public spaces for the public to engage with policymakers, with government, and for citizens to have a voice. I think we cannot forget that actually everything that we do is shaped by the people. We, the people, get to decide what the technology is. I often say, think about this scenario. Think about a scenario where someone just deleted all of our data, right? That’s gone. That’s gone. The world changes, can change within a minute. But if you have an open-sourced experience, like where we shape the technology, we get to decide how to govern the technology, because we are the data workers. We are, it’s our knowledge that is actually shaping, it’s our stories, it’s our experiences as human beings.
And so being able to remember that, that we don’t need to be granted access. We need to decide whether we give access to corporations and others to mine our stories. And then we get to share those stories. We get to decide what are my stories that I need to share with the world. And how might I share those stories? And what stories don’t speak to me? That I can have that conversation with others and say, that’s not actually how my community is. There’s a lot of assumptions made about people’s lack of visibility and voice, but I think we also forget that people don’t want to be part of a system that is shaped by ideas of oppression, ideas that suppress certain voices.
I’m often interested in knowing who does not want to be part of this system and why. Not so much about creating a larger table and inviting people to the table, but actually understanding who didn’t show up and why. There’s a reason that person didn’t show up, because they don’t feel that their stories matter. They don’t feel that their stories can help shape that particular narrative or technology. So, turning the speaker [laughs] back to all of us, right? We’re all shaping the technology.
Ka Man: Wakanyi, actually, I felt a bit emotional hearing you say that – it really resonated, and I think because of the times that we’re in, people, a lot of us feel, in various ways, powerless. AI, obviously, is one of these great forces at play in the global conversation, where we feel powerless, and certain individuals and firms hold that power. But actually, you, sort of flipping the narrative, where you say, actually, you can switch off your data centres or whatever tomorrow, but our stories have permanence.
Wakanyi: Yes.
Ka Man: Our voices will remain. I think that’s really beautiful and profound. And I think it’s really important for us to remember that relationship that we have as people, while we’re thinking about our interactions with these systems as they evolve, but feel empowered through our voice and our actions. So I think people can take comfort from the words and wisdom that you’ve just shared there. Thank you.
Wakanyi: Thank you, thank you so much. I think you actually captured it so well, the fact that I mean, look – these technology companies that are commercialised, that have a commercial interest, rely on our data. So if they’re not able to store that data well, if they lose their data, they lose their profits, right? But we still retain our stories, we still retain our human, our humanness, our ability to share with each other. And I think that is the power that we forget we have when we’re confronted with a new technology that tries to sell us certain things. But actually, we’re the ones selling our stories to the technology, offering it for free. So, realising that we are all data workers, realising that we’re all technologists, and realising that the technology is actually the story. The story is what builds the technology, and it wouldn’t, there would not be, ChatGPT wouldn’t exist without all the stories that we have shared with each other on different forums. And that’s powerful. We are, we, the people, shape the technology.
39:32: Chapter 5: Building systems with community wisdom: working in equitable partnership
Ka Man: Linking to this, on this theme of people power, community power, which is, like I say, it’s really nice to hear you reflect on this, because it can feel a little disorientating, and people can feel disenfranchised, especially those who already are marginalised, and feel they’re totally outside of this. But it’s a powerful reminder that even large language models, the data’s from us. It’s crowdsourced from us.
So, kind of thinking in terms of practical steps, how do you think that AI systems can be shaped further using community wisdom, but from the start. And on a practical level, do you think that this can be done and driven by the community for the community, or do you think it’s actually, it is possible and okay for it to be in partnership with external groups?
Wakanyi: I think both are possible. It’s not one or the other. I think that we can look at, for instance, the humanitarian world, which has been around since 1945, right? Quite a long time of creating networks, partnerships with local people. So at this point, that in itself is a space. It’s almost like a mini-universe of potential partnerships globally, right? From the local all the way to the global. And I think that is the power of who we are as human beings at this moment, at this juncture, that we have a particular kind of technology that can help us see each other, and can help us grow together, but that what is required is a precision of creating equal partnerships.
We need to be precise about what it means to have equality, right? Because on the one hand, yes, we the people have the data, but we, the few people have access to technology, right? So how do we ensure that the data that is feeding the technology is as valuable as the technology itself, and that both people on both sides are able to come at it from a place of union, that we’re seeing what could emerge between us. I’m bringing X. So it’s about, it’s a bit like a barter trade, right? I’m bringing X number of potatoes, you’re bringing, you have the land. So the people have the land where the data is being mined, and then here’s a technology that could create something out of all of that data.
And this is why I think it’s a humanitarian project. It really is. Large language models are commercialised, and they’re going to be that way, but small language models could be the answer to these kinds of partnerships. It doesn’t have to be technology coming from outside to help people on the ground. It could be a technology that’s coming from the ground that helps the bigger world, right?
But until we see each other, until we create those spaces where we can have honest conversations about what the need is – and not every problem requires a technological solution as well, right? And when we get to that point, and we remove the commercial aspect of developing these technologies, we create funding models that help with ensuring that we can create smart technologies where they’re needed. And maybe analogue technologies where those are necessary and better, right? That can only happen when we look at the partnerships between people on the ground, localised populations, and whatever’s coming from above, whether it’s technology, Silicon Valley, or government, or corporates, wherever the money’s coming from. But the commercial interests of a few should not dictate how the majority shape their world.
Ka Man: Small language models do sound like they have, hold a lot of potential and promise, from so many different aspects, but also, like you say, that contextualised partnership, that’s an equitable partnership, or at least more equitable if you’re working with external organisations. What you were saying about having possibly even analogue technologies as part of the solution, we’re not always saying that AI is the solution.
It reminds me of our guest on episode 3 of this series, Timi Olagunju. We talked about governance. But actually, even though the conversation was on governance, it revolved around AI literacy and more holistic aspects that are foundational before we move into governance and deployment of technology. But he emphasised the importance of developing non-AI pathways, and always having that opt-out loop. And I thought that was really interesting, because when I think about opt-out, I’m thinking from a communications perspective working in the UK – oh, opt-out: GDPR, opting out of emails. You know, very simple, they don’t have massive implications for me, but there you go, following good practice.
But actually, in terms of whole systems, having that ability to opt out and having your individual rights, I think, is such an important point, and it kind of resonated with what you were just, the point that you were just making.
Wakanyi: Absolutely, and it’s possible, and in fact, I’m so glad you brought that up, because there was a study, or a friend of mine, actually, was working with students in California, within the education department, and he had the kids develop certain apps using AI, and there was a group that actually developed a system that was only supposed to help them during the revision period, and then after that, they opt out of it, so completely delete the data and clean it out, and it no longer exists. I think that’s the way forward with small language models, that you can create a small model for a particular problem that you’re dealing with, if it even necessitates AI, right?
But then once you’ve solved the problem, then you no longer need this model. So that in itself is a sustainable way of thinking about artificial intelligence, right? So where you’re not thinking about permanency of data centres that are guzzling natural resources. But that you’re thinking about a smaller model that can be built with sustainable materials, can harness energy sustainably, but that actually can also be destroyed in the end. Like, we don’t need it permanently, so that people’s lives can continue. I love that combination of the analogue and the digital. And only using the digital when it’s necessary. And that is possible with smaller models. I don’t know that it’s possible with the bigger ones, but certainly with the smaller ones, because you can just create small solutions with AI, that then you can remove out of the experience and allow people to go back in and have their lives as they were.
Ka Man: You make a really interesting point that I hadn’t really considered before with small language models. So, obviously with large language models, and what we hear on the news is it’s about scaling – bigger and bigger, and faster and more powerful, and I don’t quite know what the endpoint is. Where are we going to get all these chips from? I don’t know, but, anyway [laughs]. What’s quite interesting is you’re talking about small language models that are developed sustainably to solve a specific challenge or problem, but then can be retired, sunset. I never thought about it like that. You think of it as the machine keeps going, things keep, cogs keep turning.
Wakanyi: It doesn’t, I mean, and that’s a very indigenous way of thinking about technology. Not all technology that was developed centuries ago is still relevant today, right? So how do we retire these technologies that we no longer need? But to begin with, how do we even build technologies that have the potential to be retired? So the design thinking there already includes the possibility of this model not being in existence 5 years after the project is over. I think about the typical UN project – 3 to 5 years, or maybe 5 to 10 years, I don’t know. But whatever it is, it could be a food project, food-related project, or a project that is related to the refugee crisis or migration. But once you’ve resolved that, once the project is over, then so should the AI system that was used to create the solution for that particular problem. And I think that’s one way of building sustainably, because you would build with temporality in mind rather than a permanent thing.
Ka Man: That’s so interesting! So it’s like, you know, we think of in the tech world, if something has to be retired, it’s because it’s no longer fit for purpose, whereas if you’re designing a small language model, it is fit for purpose if you can see a point, an end point, where it’s no longer required, the resources are no longer needed. I think that’s a really interesting take that I never thought of.
And actually, I’m kind of thinking that it links to the localisation dilemma in general in the humanitarian system. Now, obviously, everyone has a different perspective and take on what localisation means, locally led action, and what’s the ultimate goal – whether that means that the humanitarian structures are dismantled and leave, or whether the structures remain, but evolve and change. You know, everyone has a different take on it, and it’s obviously very much a live and ongoing discussion, but I can see how the conversations around the technology structure in line with localisation, they need to align.
And actually, I can see through these conversations that AI could potentially support, rather than hinder localisation practices, but there needs to be that synergy, and they need to be in sync with each other. So that’s why I think humanitarian AI discussions with an intersectional lens are so vital so that those linkages can be made.
Wakanyi: Absolutely, and it needs to be built from the ground up. And that’s why that idea of citizens’ participation, that the people on the ground understand what their problem is, and why they’ve been unable to resolve that problem. So bringing the technology to them doesn’t mean that they don’t have the solution. You’re giving them a tool, but they already have the solution, so they get to use a tool to build the solutions that are relevant to them, and usually that means that once the solution, once the problem has been solved, then the technology is no longer necessary.
Whereas the former mindset is, here’s a technology, or even the current mindset with technology, here’s AI, it’s going to solve everything, and it’s here to stay. No, it’s not here to stay. We’ve had different technologies throughout the history of humanity that we no longer need. So can we build with that perspective of, at some point, we might not need this particular technology. I mean, think about Indigenous communities all around the world. My mum grew up in an indigenous community as well. Even the structures, the housing structures, were built with sustainability in mind, that this hut is made out of the local materials here, so even in the case of it getting destroyed by the climate or whatever, there’s nothing to lose, and we can rebuild again because the materials are around us.
So using that design thinking of that indigenous way of thinking about what is sustainable development. Sustainability is inbuilt, it’s not a goal, it’s not something that we are aspiring to get to. That, hey, this technology will get us to sustainability. No, we begin with sustainability, we build with what we have, and we have an understanding that this is not the only way, that it is actually just a part of the process. That the technology is always evolving. This is just the tool that we have to keep building. Again, the technology is our stories and our experiences, not the actual tool, right? [laughs] So that is never going to come to an end.
Ka Man: That’s such a good way of expressing it, that if it’s broken, it’s not a flaw, this was always expected. It’s around the sustainability concept. I think you’ve encapsulated that in a really neat way, so thank you.
52:37: Chapter 6: Wakanyi answers community questions
Honestly, I’ve really loved this conversation so far, but it’s my, I have to stop hogging the mic [laughs] and bring in the audience questions. So, when we launched our humanitarian AI report at an event on the 5th of August, we had over 100 questions, because our humanitarian community is so keen to engage with this topic. So, I’ve got a few questions that relate to your area of expertise, so any insights or signposting to information would be really valuable. So, we’ll see how this goes. So, the first question relates to the digital divide. So, Rahmanullah asks, “how can humanitarian organisations ensure that the use of AI doesn’t widen the digital divide or marginalise vulnerable communities?”
Wakanyi: Well, firstly, the good news is that – well, I think we’ve sort of answered that, but just real quick. That’s a big problem, and we’re definitely grappling with it. That’s why we have the Inclusive AI Lab. So these kinds of spaces are created specifically to shine a spotlight on our perception of who is on the margins. And I think just flipping the narrative. The humanitarian community can do such a great job with storytelling. Flipping the narrative from even just a framing of the Global South to the Global Majority. And looking at that, what does that even mean? When that becomes a whole other way of looking at the world. That the majority of the people are actually the ones shaping the world, the future world, and that there’s a small group of people that are holders of the technology, or the corporations, and the funding. But that actually, all of that is coming from taxpayers as well. So we’re all the majority world, right? And in that way, then, we’re able to see ourselves as participants, as active participants.
I think another way of ensuring that this digital divide does not become widened is working with media. I’m a big fan of the potential of media companies. Media companies should be, at this point, the voice of the citizen. Having citizens’ assemblies, where the voice of the people comes out through media that is shaped by the voices of the people, creates an opportunity for people to actually have their voices heard, to understand, and to have the technology explained: what it is, how it is built, being transparent about it. Then people can say, right, maybe I might need that technology in our village to communicate about early warning signs for climate disasters and that sort of thing. But if there’s no trust building, because we don’t understand what the technology is, and there’s just fear in between us, then that digital divide will just keep growing. So I think it’s both. Everyone needs to be involved, everybody needs to be open to having a conversation, open to not knowing. But there is also a huge responsibility on humanitarian organisations to be precise about how we develop spaces of inclusion, so that the dialogue is inclusive of the voices that are so-called minoritised or on the margins.
Ka Man: That’s brilliant, thank you so much. I’ll move on to the next question from Yahya. It’s around ethical use of AI in storytelling. And again, you did touch on this earlier, but if you have any new perspectives to share, that would be great. So Yahya asks, “what are the ethical challenges of using AI in creative storytelling, especially when representing marginalised voices or traditional cultures?”
Wakanyi: That’s a big one as well, and of course, I’m in the heart of it, right? So, one of the things that I’m very careful about is to not be the voice of the other. And I think at this point, we can say AI is representing all of us as the other, it’s othering everyone, right? Again, participation. We need to understand that this tool is only as good as its data. It’s only as good as its stories. So, the ethics of engagement with AI are on the individual user. Yes, the technology companies absolutely must adhere to national, regional, and global ethics, but we can’t expect the coders, the programmers, to be the only ones concerned with how to engage, how to code, how to de-bias.
We need to be involved in that de-biasing process. And in fact, maybe it’s difficult to now say that we can all embark on this de-biasing project for LLMs, but again, going back to small language models: if we’re going to build small language models, then we need to build them ethically. We need to bring those ethics of engagement. What does it mean to be human? What does it mean to think about the other? To build an AI system that represents a community that’s not digitised, that’s not in the digital landscape, that doesn’t have access to the internet? Build with them. How would they build it? We could have significantly different outcomes if we just work with people as they are, not as we imagine they should be, or as we imagine the world should be. And this is one way of creating that equity within that space.
Ka Man: Thank you. So my next question is not quite as directly aligned to your specialist expertise, but it’d be interesting to hear your perspective and take on it. So it’s a question from Rocío, and they ask, “how do we ensure respect for human rights when using AI in a highly unregulated sector? How do we ensure redress where AI causes harm?”
Wakanyi: One of the things that I’m a big advocate for is our right to relate with each other, and with the planet, and with place. And, of course, we can see the harms that AI could cause in those relationships, because AI is hyper-individualised. It is not built for dialogue. It mimics dialogue, but it’s really a monologue, and here I’m talking about LLMs, right? If you go into any large language model, you’re basically talking with yourself. It might seem as though you’re having a dialogue with some other person, but it’s not a real person. So, going back to that: understanding that AI is not a person, that AI does not have a body, that it doesn’t have an experience of being human. And these are conversations we can have if there are citizens’ assemblies, where people are able to understand what it is that we’re interacting with. You’re actually interacting with a machine, not a person. You’re interacting with a machine that’s been fed a particular narrative, or particular information, and can decipher from the algorithm, mathematically, the pattern of language, for example. It can read a language, it can seem to understand a language, but it doesn’t understand the nuance.
And so this is where policymakers, leaders in technology, and anyone that is involved in that space need to be particularly careful about how they engage ethically with other people’s knowledge systems and languages, and create spaces where the real knowledge holders and the people that originally speak that language are actually shaping the technology.
For example, I saw something about films. When you get voices that are supposedly representing African actors or voices, you end up with a weird accent that’s supposedly African, right? [laughs] But no African speaks like that. So, these are just small spaces that we can decolonise, if you want, or shift that narrative and say, hey, that’s not who we really are. It’s about being careful about how you represent the other; or rather, not representing the other at all, but having other people represent themselves accurately, and taking that as an accurate representation of who they are, not your own imagined aspiration of what they should be. I think that’s one way of dealing with that issue of rights. But again, I’m always handing the power to the people. Each one of us has the ability to shape this technology, because we’re all individually having an experience online, and having this experience with technology as if it is a person, but it isn’t.
Ka Man: Thank you so much. That’s a really good reminder, what you’ve said: that LLMs in particular, that AI is hyper-individualised, and that it’s mimicking dialogue, but you’re actually having a monologue. I think that’s a really useful reminder, especially when everyone ends up having their preferred LLM and might ask it quite trivial questions.
Wakanyi: Exactly. I mean, I think it was OpenAI that just announced they’re going to have your AI companion in your pocket, about the size of your headphones case, a little AI companion that listens to everything that you say and then perhaps sells back solutions to problems you didn’t think you had, simply because you vocalised them. I think we’re going to end up with a world that’s very quiet for a while, where we don’t share a whole lot.
Ka Man: Wow, that is interesting, because obviously we already have that with social media, where it listens to us and then presents us with adverts for things that we didn’t realise we needed.
Wakanyi: Exactly.
Ka Man: [laughs] It’s just that you mentioned something in conversation, and there you are, here’s the journey for you to fulfil that dream. But having a personalised chatbot telling you that is quite something.
Wakanyi: Yes, or the glasses, the Meta glasses. Meta just launched glasses with, I think, Oakley, or I don’t remember which sunglasses company, that then completely distort your vision. In many ways, I think it’s playing around with your reality, creating your own hyper-individualised reality. I mean, do we want that? These are the questions we really do need to ask. When we have these spaces, the media, for example, could be shaping that dialogue, shaping that space where people can come in and say, that’s not the kind of technology that’s going to work for our community, we don’t actually need that. And then we can start to see what other technologies or solutions there are to problems that we haven’t even interacted with, and tip that balance, because we can’t have AI technology shaping an entire world and flattening the lived experiences at the local level.
Ka Man: Hearing you speak, it’s making me think how important the youth voice is in this, because our generation is shaping this technology now, but it will impact young people more heavily and shape their world directly, in a way that they may not even have consented to. So it’s really important for the youth voice to come through and have those, what have you been calling them, citizens’ spaces?
Wakanyi: Citizens’ Assemblies. And another group that is really undermined, I feel, is parents. Parents don’t have spaces where they can vocalise their actual concerns for children. I mean, Mattel just produced a Barbie that can listen to your baby’s noises and all the things a child might be imagining in their imaginative play, and then suddenly that becomes a product. So we need to disrupt this process: not everything is for sale, right? Not every problem requires a technological solution. And so having these spaces of dialogue, and people wanting to be part of that dialogue, is going to be one way of ensuring that we can limit the harm.
Ka Man: Very interesting, thank you. The final question from the audience is from Callum, and it’s around inclusive design. So, Callum asks, “what role can AI play in amplifying the voices of affected communities, not just analysing them?”
Wakanyi: Again, I hate to sound like a broken record, but I think small language models [laughs] that are built by the community will absolutely be the way forward to ensuring that what wants to be amplified is actually amplified. I think we can limit all harms, if not completely remove this dominance of large language models, if we just build with community. Because we’ll build things differently. We’ll build tools that are bite-sized, that are specific to certain communities, that are embedded with a particular language that speaks to the people, contextualised, with all the ethics of engagement with particular communities, for example indigenous communities. All of that can be mitigated when you build with the community, and when you also build with an intention to eventually remove that technology from the human experience: to use it as a tool. It’s another tool in the toolbox. We don’t always need a knife, but it’s there. You can use it if you need it, right? You don’t always need a matchbox, or a lighter, whatever it is. So, being able to understand that these technologies are not a permanent solution, that they’re temporary tools for a problem that is occurring now, and that we’re going to need something different later on, helps us build with sustainability in mind.
Ka Man: Thank you very much. So much food for thought there. What you were saying about tools: I hear people always saying AI is just a tool, but sometimes they use that framing in a way that minimises the harm it can cause, when any tool has the potential to cause harm. So I think your framing is quite interesting: the different types of tools, and when we actually, literally, down those tools because they’re not fit for purpose anymore, and that should be intentional, part of the process, not accidental or disruptive.
Wakanyi: And not to take it for granted that people don’t understand the harm. I think we all very much understand the harm that certain tools have in our lives, and so it’s about being able to understand that, yes, it is a tool, but it’s not just a tool. It can be a very harmful tool in the hands of the wrong person, or someone who hasn’t engaged ethically with it. And this is why Ubuntu ethics matters: having Ubuntu as a framework of ethics of engagement, where there is community accountability. The accountability is on the individual, but the individual is accountable for how the community is shaped. This is a very different way of building versus building with just the individual in mind, who then doesn’t think about ethics and accountability. It’s a very different way of looking at technology.
69:17: Chapter 7: Overlooked priorities in AI ethics and closing reflections
Ka Man: I’ve absolutely loved this conversation, it’s been really energising and inspiring, and you’ve illuminated a lot of blind spots, so thank you very much. Just before we wrap up, I want to ask if there’s one thing in this arena, around ethics and cultural considerations in relation to AI, that you think is overlooked, or not talked about often enough, but is vital to shared progress in this space.
Wakanyi: Human rights, the right to relate, the right to have the right relationship with land; I think this is continually being overlooked. There’s some sort of assumption that we can just exploit and extract. But that has implications, and actual cost implications, right? So being able to understand what we’re extracting and from whom we’re extracting, and that we all have this right to relate to each other properly: that in itself is something that needs to be fundamentally influencing the way that we shape AI.
Ka Man: That’s a great note to end on, a very powerful message there. I’ve absolutely loved this conversation. Thank you so much. You’ve really made me zoom out. I’m usually looking at AI at quite a granular level and then at a systemic level, but you’ve made us look at it on an almost existential level [laughs]: as humans on this earth, what we are hoping to achieve in our lifetime, and the legacy that we leave. I think that will really resonate, and it’s such an important dimension to consider. The work that you’re doing in this space is so crucial, in terms of inclusive AI and that intersectional lens. So thank you for everything that you’re doing in this space. And thank you so much for sharing your insights and perspectives today on Fresh Humanitarian Perspectives.
Wakanyi: Thank you so much, Ka Man. It was an amazing talk and conversation with you. Thank you.
Ka Man: Thank you. Wakanyi Hoffman, thank you very much.
[Music fades]

We don’t need to be granted access. We need to decide whether we give access to corporations and others to mine our stories. And then we get to share those stories. We get to decide what stories we need to share with the world.

How do we build technologies that have the potential to be retired? Once the project is over, then so should the AI system for that particular problem. And I think that’s one way of building sustainably – with temporality in mind rather than a permanent thing.
About the speakers
Wakanyi Hoffman is an African indigenous thinker, global speaker and Ubuntu philosophy scholar. Her work integrates Ubuntu Ethics into AI systems as an essential framework for ensuring these technologies create a desirable future for all, highlighting how African indigenous principles of interconnectedness, compassion and dignity can guide AI to reflect ethical ways of being human and foster planetary flourishing.
She is the founder of Humanity Link Foundation and the African Folktales Project, working with teachers across the continent to co-create an African Knowledge Curriculum. This gives children of African descent direct access to their ancestral knowledge and wisdom, preparing them as global citizens who are locally grounded in their cultural knowledge whilst engaging with emerging global trends like AI innovation.
Currently, Wakanyi is Head of Research for Sustainable African AI Design at the Inclusive AI Lab, Centre for Global Challenges (UGlobe), Utrecht University, Netherlands. She is also a board member of Inner Development Goals, a movement promoting collective inner flourishing for outer change to complement the Sustainable Development Goals.
Ka Man Parkinson is Communications and Marketing Lead at the Humanitarian Leadership Academy. With 20 years’ experience in communications and marketing management at UK higher education institutions and the British Council, Ka Man now leads on community building initiatives as part of the HLA’s convening strategy. She takes an interdisciplinary people-centred approach to her work, blending multimedia campaigns with learning and research initiatives. Ka Man is the producer of the HLA’s Fresh Humanitarian Perspectives podcast and leads the HLA webinar series. Currently on her own humanitarian AI learning journey, her interest in technology and organisational change stems from her time as an undergraduate at The University of Manchester, where she completed a BSc in Management and IT. She also holds an MA in Business and Chinese from the University of Leeds, and a CIM Professional Diploma in Marketing.
Continuing the conversations: new Humanitarian AI podcast miniseries
This conversation is the fourth episode of our new humanitarian AI podcast miniseries, which builds on the August 2025 research: ‘How are humanitarians using artificial intelligence? Mapping current practice and future potential’. Tune in for long-form, accessible conversations with diverse expert guests, sharing perspectives on themes emerging from the research, including implementation challenges, governance, cultural frameworks and ethical considerations, as well as localised AI solutions, with global views and perspectives from Africa. The miniseries aims to promote information exchange and dialogue to support ethical humanitarian AI development.
Episode 1: How are humanitarians using AI: reflections on our community-centred research approach with Lucy Hall, Ka Man Parkinson and Madigan Johnson [Listen here]
Episode 2: Bridging implementation gaps: from AI literacy to localisation – in conversation with Michael Tjalve [Listen here]
Episode 3: Addressing governance gaps: perspectives from Nigeria and beyond – in conversation with Timi Olagunju [Listen here]
Links
The UN’s Global Dialogue on AI Must Give Citizens a Real Seat at the Table | TechPolicy.Press
Global Citizens Assembly – Wakanyi mentions this during the conversation
Humanity Link Foundation
African Folktales Project
Share the conversation
Did you enjoy this episode? Please share with someone who might find it useful.
We love to hear listener feedback – please leave a comment on your usual podcast platform, connect with us on social media or email info@humanitarian.academy
Disclaimer
The views and opinions expressed in our podcast are those of the speakers and do not necessarily reflect the views or positions of their organisations. This podcast series has been produced to promote learning and dialogue and is not intended as prescriptive advice. Organisations should conduct their own assessments based on their specific contexts, requirements and risk tolerances.
Episode produced in October 2025