2nd October 2025
Timi Olagunju
Ka Man Parkinson


How can humanitarians engage responsibly with AI tools without clear governance frameworks?
Hear a legal and policy expert’s perspective on AI governance challenges facing humanitarian organisations in Nigeria and beyond.
Only around one in five humanitarian organisations have formal AI policies despite widespread individual usage – creating what we termed a “governance vacuum.” How can humanitarian organisations develop robust AI governance frameworks when operating across diverse regulatory environments and while AI regulation is still emerging?
In the third instalment of our new six-part humanitarian AI podcast series, we’re delighted to welcome Timi Olagunju to the show to hear his expert perspectives.
Ka Man Parkinson sits down with Timi to explore how humanitarian organisations can close this governance gap, moving from ad hoc AI usage to structured frameworks that protect communities while enabling innovation. From data sovereignty concerns to regulatory inconsistencies across African contexts, how can organisations balance accessibility with accountability?
Timi Olagunju brings expertise in Nigerian tech policy and AI regulation, working at the intersection of law, technology, and development. As a lawyer specialising in tech policy and AI governance, Timi offers insights into the regulatory landscape affecting humanitarian AI adoption across Africa and globally.
In this conversation, Timi examines governance gaps and practical pathways forward, including:
- Why AI literacy is the cornerstone of governance: Timi explains how understanding AI tools, data use, and when to say no forms the foundation for any governance framework – not an afterthought
- Procurement paralysis across Africa: Discover how fragmented regulations force humanitarian organisations toward expensive foreign vendors, blocking partnerships with local developers who understand the context
- Four principles for responsible AI deployment: Learn Timi’s actionable framework covering do no harm assessments, data dignity, human accountability, and auditability – principles you can implement immediately
- Treating data like cash: Practical guidance on segregating sensitive information, understanding commercial platform terms of service, and knowing when self-hosted models are essential
- Community questions: Timi addresses audience questions on AI transparency, data protection as a double-edged sword, European versus Global South regulatory gaps, and safeguarding strategies including red teaming approaches

Keywords: AI governance frameworks, regulatory fragmentation Africa, procurement paralysis, data sovereignty humanitarian sector, AI literacy, commercial AI tools risks, data minimisation principles, human-in-the-loop accountability, red teaming simulations, self-hosted AI models, Global North-South power imbalance, contextualised AI solutions, data protection humanitarian contexts, policy frameworks development, community consent practices, AI transparency standards.
Want to learn more? Read a Q&A article with Timi
Who should tune in to this conversation
These insights are essential listening for humanitarian organisations, legal teams, and donors navigating global and African AI governance challenges, particularly in complex regulatory landscapes.
The conversation addresses questions from the humanitarian AI community as we investigate pathways to responsible AI deployment.
Episode chapters
00:00: Chapter 1: Introduction
07:20: Chapter 2: AI literacy: the foundation of AI governance
17:13: Chapter 3: ‘Procurement paralysis’ from regulatory inconsistency across Africa
31:29: Chapter 4: Weighing up commercial AI tools use through a governance and security lens
48:53: Chapter 5: Community Q&A: Timi answers your questions
70:21: Chapter 6: One thing we need to talk about more in the AI governance space: Global North and South power imbalance
73:49: Chapter 7: Closing thoughts from Timi
Glossary of terms
We’ve included definitions of some technical terms used during this podcast discussion for those who are unfamiliar or new to this topic.
Access logs – Records tracking who accessed what data and when. These logs create accountability and help identify potential security breaches or misuse of sensitive information.
AI (Artificial intelligence) – Computer systems that can perform tasks typically associated with human intelligence
AI literacy – Understanding of AI tools, their data usage and limitations. Encompasses knowledge needed by all users, not just technical experts, to interact safely and effectively with AI systems.
AI policy – Organisational guidelines defining how AI tools can be used and what approval processes are required
Algorithm – A set of rules or instructions that computers follow to solve problems or make decisions
AU – African Union. A continental organisation comprising 55 African member states that aims to promote unity, cooperation, and development across Africa, coordinate policy positions, and address continental challenges.
Auditability – The practice of maintaining logs and records of AI system decisions, including model versions, prompts, and authorisation records, to enable accountability and review.
Black box – The opacity of AI algorithms that makes their decision-making processes difficult or impossible to explain or understand, even by their creators.
Consent (in data protection context) – Permission obtained from individuals before collecting, processing, or using their personal data, requested in plain language.
Covenants – In this context, refers to agreed principles or standards that guide behaviour without being strictly prescriptive regulations. Timi advocates for covenant-based frameworks that establish shared commitments rather than rigid, top-down rules.
Data breach – Unauthorised access to or disclosure of sensitive data.
Data leakage – The unauthorised or accidental exposure of sensitive data to parties who should not have access to it.
Data minimisation – The principle of collecting only the data that is strictly necessary for a specific purpose, rather than gathering all available information. A core data protection principle.
Do no harm assessment – Evaluation conducted before deploying AI tools to identify who might be harmed, misclassified, excluded, or exposed by the system.
DPIA – Data Protection Impact Assessment – A process to identify and minimise data protection risks in projects, particularly those involving high-risk uses of AI or sensitive data.
EAC – East African Community – A regional intergovernmental organisation comprising several East African countries.
ECOWAS – Economic Community of West African States
Encryption – The process of converting data into coded form to prevent unauthorised access. Essential for protecting sensitive humanitarian data during storage and transmission.
EU AI Act – European Union legislation classifying AI systems by risk level and setting corresponding requirements. Represents Europe’s prescriptive regulatory approach alongside GDPR.
Firewall – A security system that monitors and controls network traffic, acting as a barrier between a trusted internal network and external networks. In this conversation, Timi refers to keeping AI models “behind your own firewall” meaning they operate within your secure infrastructure rather than on external commercial platforms.
GDPR – General Data Protection Regulation of the European Union – A comprehensive data protection law that has influenced many African countries’ regulatory frameworks.
Grievance mechanism – A formal process that allows individuals to report concerns, file complaints, or appeal decisions related to AI systems or data use. In humanitarian contexts, this provides affected communities with a pathway to challenge AI-driven outcomes or raise data protection concerns.
HEAT – Hostile Environment Awareness Training
Human in the loop – The practice of ensuring human oversight and decision-making authority in AI systems, particularly for high-stakes decisions, rather than full automation.
IDP – Internally Displaced Persons
INGO – International Non-Governmental Organisation
ISWAP – Islamic State West Africa Province
LLM – Large Language Model such as ChatGPT
Metadata – Data about data, such as timestamps, location information, or usage patterns. Can reveal sensitive information even without personally identifiable details.
MVP – Minimum Viable Product – usually used in relation to a technical product. In this conversation, Timi applies it to regulation: a ‘minimum viable’ regulatory baseline of essential standards needed to function effectively, rather than comprehensive, fully developed legislation.
NCC – Nigerian Communications Commission
Net neutrality – The principle that internet service providers should treat all data equally, without favouring certain platforms or services through partnerships or pricing.
Non-AI pathway – An alternative way for people to access the same services or achieve the same outcomes without using AI systems.
PII – Personally Identifiable Information – Data that can identify a specific individual, such as names, ID numbers, biometrics, or location data.
Procurement paralysis – The situation where humanitarian organisations struggle to acquire or implement AI tools because inconsistent regulations across different jurisdictions require constantly resetting compliance approaches, creating delays and favouring large foreign vendors over local solutions.
Red teaming – A simulation exercise where one team attempts to identify vulnerabilities or risks in a system by deliberately testing its weaknesses.
SADC – Southern African Development Community – A regional economic community comprising 16 member states in Southern Africa. It aims to promote economic integration, peace, security, and development in the region.
Sectoral rules – Data protection or AI regulations that apply only to specific industries or sectors (such as healthcare, finance, or telecommunications) rather than comprehensive, cross-cutting legislation. In the African context, some countries rely on these sector-specific regulations in the absence of broader national data protection laws.
Self-hosted models – AI models that an organisation runs on its own servers or infrastructure rather than accessing them through commercial cloud services. This allows greater control over data security and privacy since information never leaves the organisation’s own systems.
Synthetic examples – Artificially created data that resembles real data but contains no actual personally identifiable information, used for safe testing and demonstration.
Episode transcript
This podcast transcript was generated using automated tools. While efforts have been made to check its accuracy, minor errors or omissions may remain.
00:00: Chapter 1: Introduction
[Ka Man, voiceover]: Welcome to Fresh Humanitarian Perspectives, the podcast brought to you by the Humanitarian Leadership Academy.
[Music changes]
[Ka Man, voiceover]: Global expert voices on humanitarian artificial intelligence.
I’m Ka Man Parkinson, Communications and Marketing Lead at the Humanitarian Leadership Academy and co-lead of our report released in August: ‘How are humanitarians using artificial intelligence in 2025? Mapping current practice and future potential’ produced in partnership with Data Friendly Space.
In this new six-part podcast series, we’re exploring expert views on the research itself and charting possible pathways forward together.
[Music changes]
[Timi, voiceover]: It’s important to use AI as a torch, not a blindfold. And as a torch, in terms of building trust with the people that are beneficiaries of humanitarian service, it’s important to understand how AI – tools, consent, minimisation, and things like that – interact with the users or the beneficiaries of the service. Because trust is easily eroded in a humanitarian context when those things are missing.
[Ka Man, voiceover]: Welcome to episode 3 of our Humanitarian AI podcast series. Today we’re exploring AI governance challenges with a particular focus on Africa together with our expert guest, Timi Olagunju.
If you’ve not already listened to earlier episodes, I highly recommend that you tune in for a holistic picture of the research which this conversation builds on, as well as an insightful exploration of implementation challenges with Michael Tjalve.
So in today’s episode, I’m sharing a conversation that I had with Timi, who is a lawyer and policy expert based primarily in Lagos, Nigeria. Timi’s working at the intersection of technology, law, and humanitarian and development work. With a law degree from the University of Ibadan and a master’s degree from Harvard Kennedy School specialising in digital policy and emerging tech governance, his work and thought leadership have informed policy debates at national and international levels.
In our recent research report on how humanitarians are using AI in 2025, produced together with Data Friendly Space, our findings revealed that despite widespread AI usage across the sector, only around one in five humanitarian organisations have formal AI policies. So there’s this governance vacuum happening alongside high levels of individual experimentation. In this conversation, Timi helps us understand the potential implications of this through a governance lens – from how fragmented regulation across Africa can create unexpected barriers to localisation, including what he calls “procurement paralysis”, to practical frameworks organisations can implement today.
Whether he’s explaining complex policy challenges or unpacking data protection principles, Timi brings high-level governance concepts to life through grounded examples, storytelling and good humour.
[Music changes]
Ka Man: Hi, Timi, a warm welcome to the podcast!
Timi: Thank you for having me, Ka Man.
Ka Man: I’m so excited and keen to hear your different perspectives. So, before we start, would you like to introduce yourself to our listeners, telling us about your journey into this intersection of AI, law, governance, and all the other many hats you wear?
Timi: OK, thank you so much. So my journey started back in university, when I had the privilege of serving over 1,500 students in a particular hostel, and there was this perennial, decade-long challenge of getting accommodation sorted. Usually, it took weeks, and it was always lopsided. So, somehow, in my leadership, we came up with a solution – a tech solution – even though I was studying law at that time, at the University of Ibadan in 2007. That particular solution involved a lot of lobbying with stakeholders and policy makers, which then later birthed the solution. And what used to take weeks started to take just minutes: people could get hostel accommodation in minutes, where it used to take two to three weeks. But it was a journey. And you know, that journey reminds me of the journey of governance.
And so, it dawned on me, as someone who was a law student at that time, that it was important to understand the tech space – that a tech solution is not just about deploying the technology itself, but also about the governance, the partnerships, the frameworks, the policies that ensure that you’re not just driving tech solutions, but driving tech solutions that solve problems equitably, you know, inclusively. And so that brought my understanding of that intersection between law, technology, policy, governance, and the like.
So, I started off much more as a tech lawyer, and then moved into tech policy, and then, you know, after that, I had the privilege of doing a master’s in policy, particularly focusing on emerging tech policy and governance at Harvard University’s John F. Kennedy School of Government. So, that’s been my journey, and I’ve worked across the board advising governments, multinationals, major digital platforms as well as institutions across different sectors on tech policy, tech governance, and shaping the direction of how that impacts human beings, humanity. That’s my core [laughs] tagline – how it impacts humanity positively. That’s it.
Ka Man: Wonderful. Thank you so much, Timi, for sharing a snapshot of your journey to where you are now. I love that, how in your formative years at university, this real-world problem affecting you and your fellow students, you used that as an opportunity to see how you can help in this problem space and yeah, actually create change.
And that’s really that legacy of wanting to create change and being driven to support others through your work
Timi: Absolutely
Ka Man: Through the legal governance and, yeah, advocacy – yeah, it seems like that’s a common theme. And we’re so lucky to have you here today to share your expertise and experiences, so thank you very much.
Timi: Thank you, pleasure.
07:20: Chapter 2: AI literacy: the foundation of AI governance
Ka Man: So, when we caught up recently, so we had a little call before recording this podcast conversation today, and I got to hear about you and your experiences, and you were talking about the importance of broader society-wide AI literacy – and education being a crucial foundation for AI governance.
And I thought, that’s interesting, because I’ve never – obviously, AI literacy and AI governance are themes that emerge all the time, and were key themes in our research. But actually, currently, I’ve not made that strong connection in my own mind between literacy and governance. So I wonder if you can just speak to that a bit more, and explain the connection between these two concepts, and the importance of this, and how this may link into people working in the humanitarian context.
Timi: Thank you so much for that very interesting key question, because there’s a lot of conversation around AI governance, but people still do not realise that the foundation of AI governance and everything around it is built on AI literacy. And that’s the direction governance should take. So, when you talk about governance, most times people picture laws and regulations as the key part of governance, not knowing that governance, in my own argument, starts with literacy. So, AI governance starts with AI literacy. AI literacy, not just for engineers, but for everyone.
And so, for example, if you have a nurse, a teacher, who cannot tell what an AI tool does, what data is used, or when to say no, then, you know, whatever you have on paper is just fiction in itself. And so, for that context, AI literacy is more like the seatbelt and governance is the traffic law. Even more so in a humanitarian context: there’s a lot of dealing with urgent situations, needs, and issues that are sensitive to the data subjects, the humans, as it were. And so, because of the sensitivity and the urgency of that particular sector, it’s important that AI literacy is at the core of it.
So, for example, when you go to your doctor, the doctor explains to you extensively what the diagnosis is, the prognosis is, what solutions they want to proffer, and seeks your consent. In humanitarian settings, it is even much more complex, because then, you see that some of the people are in a more compromised situation.
And so, it’s important that in the humanitarian setting, we understand AI literacy from the perspective of when to get consent, what is involved in data minimisation – that’s, in other words, getting data that is needed, not getting much more, you know, above what you actually require. Also issues around bias checks and human override.
Because in humanitarian context, I would say it’s important to use AI as a torch, not a blindfold. And as a torch, in terms of building trust with the people that are beneficiaries of humanitarian service, it’s important to understand how AI tools, consent minimisation, and things like that interact with the users or the beneficiaries of the service. Because trust is easily eroded in a humanitarian context when those things are missing. So it’s more complex. That’s the key thing there.
So my rule of thumb is that before you deploy tools, deploy explanation. And that is the guiding principle behind my argument, where I say, look, AI literacy is a key part of AI governance, so before you deploy tools, AI tools, ensure that you deploy explanation. And that explanation cannot come unless you yourself know. Hence, AI literacy. So you need to be armed with AI literacy in order to use it as a torch to guide the beneficiaries of the service. So that’s the context there.
Ka Man: That was an absolutely fantastic, illuminating piece of advice from you there, Timi. Do you teach, by the way? Because I like the way you use metaphors and analogies – it was really clear to me. I like the way you highlight those concepts.
Timi: I’ve gotten a lot of teaching approaches, but I try not to teach in the more formal setting, because that would kind of restrict. But the challenge with Nigeria also is the fact that we’ve not learned how to bring in expertise from outside the academia to provide learning for students. That’s the challenge. So if you want to teach in Nigeria, for example, you need to be in academia. So I’ve gotten a lot of invites to be in the academia, which I…[laughs]
Ka Man: Oh that’s super interesting.
Timi: Thank you
Ka Man: I liked what you said. Well, I liked a lot of what you said there. I liked, going back to the start, what you said about AI literacy – for example, learning when to say no. And you were talking about deploying – before you deploy tools, deploy explanations.
Timi: Explanation.
Ka Man: And it’s like you say, you use driving analogies and metaphors, and it’s like, you can’t get behind the wheel of a car to drive safely without that full driver’s education and training. So, I feel you’ve illustrated the points well there, the connection between literacy – education and literacy and governance.
But also, you’ve also alluded to some of the challenges in the Nigerian context, where non-formal education – so you’re teaching and training in an informal context, but that formal training needs to come from official establishments, people with PhDs. So does that present a challenge, then, in terms of AI literacy at a sort of societal level?
Timi: Yes, it does, because there needs to be a marriage – I call it a holy matrimony – between theory and practice. I’m a process person, because I do a lot of governance work. So, in more process terms, we’re talking about the need to create systems that allow experts from outside to come in to solve some of the key challenges of the expertise gap in academia.
And that’s one thing that made a difference, for instance, in my time at Harvard: we had a lot of experts who would come in and stay for perhaps a month for a fellowship programme, using that month’s break they’d taken out of their regular work to impart a certain knowledge to the students, different from the nuanced classroom experience.
So, those are the kinds of things that can help, even in terms of shaping AI governance going forward. Because much of the expertise in African countries is not in academia, per se – it’s a new and evolving space – so you need a more practical approach, and you need experts with more practical, nuanced, day-to-day use of this to actually bring in value.
Ka Man: Really interesting, thank you. I’ve made a note of a few points that you made there about concepts that people are – it’s essential for people, including humanitarians, well, particularly in the humanitarian context, to sort of gain a good handle and grasp of. So, consent, data minimisation – not collecting all the data that you can, but minimising that – bias and human override. And I think they’re really good, solid concepts for people to hopefully, if they’re new to listeners, that’s something that they can research online, look at online resources to support their learning and understanding in this space, and build their sort of rubrics to AI literacy in a humanitarian context.
Timi: Absolutely, absolutely.
17:13: Chapter 3: ‘Procurement paralysis’ from regulatory inconsistency across Africa
Ka Man: Brilliant. So, I want to move on to this concept of regulation, at a sort of wider, broader level. So, I was really interested to see, shortly after, or around the same time as, the team and I released our Humanitarian AI report, Mastercard released a white paper on AI in Africa. And although it’s obviously focusing on a different context to our report, I felt like there were lots of synergies and crossover, particularly because in our report, 46% of respondents were from Sub-Saharan Africa. So, it was actually really illuminating for me to read the AI report from Mastercard, because it offered that contextual understanding and some of the specific regional and national contexts.
So, one of the things that it highlighted in the report was that regulatory inconsistency across Africa is one of the key barriers that could deepen digital divides. So I thought this was quite an interesting observation, and I just wanted to ask you, from your governance perspective, what specific regulatory challenges do you see that are affecting equitable AI access in Africa? And if you can share any kind of examples or angles from the humanitarian and development context, that would be really interesting for us and our listeners to hear.
Timi: Absolutely. The Mastercard Foundation did fantastic work on that end. They elaborated extensively on three categories, basically, which I will touch on. And also, your report, the Humanitarian Leadership Academy report, taking it further beyond just the general AI context into the humanitarian sector, is also a pioneering approach. And so I’m not going to talk within those lines; I’m going to talk on lines that have not been touched.
And so, firstly, building a foundation: you see that in Africa – in the region, the continent – you have three kinds of legal frameworks, or regulatory landscapes. One, some countries have data protection laws. Then you have others who rely on sectoral rules. And many who have none. So, it’s quite an interesting one, but as of today about three-quarters of African countries fall between the first two categories – that’s data protection laws as well as sectoral rules – and the other quarter or so fall into the category of the have-nots.
This is what that does, particularly in the humanitarian context, which is quite important to our conversation here. Firstly, there’s what I call procurement paralysis [laughs] – PP – where humanitarian agencies struggle to buy or localise tools, because each border resets the compliance approach again, right.
And then another key challenge is also the issue of data insecurity, where you have weak safeguards, and so, there’s increased risk to communities.
For example, such risk – not to sound high-sounding – would include biometrics being reused. We’ve even seen situations where, even where there are data laws, there’s a leak, a data leak. So in the humanitarian context, that can be quite risky: biometrics being reused, location data leaks. And when you have those kinds of contexts where there’s a data leak – take, for example, IDP camps or refugees; these are people running from armed groups, right – that’s a lot of risk, per se. And there is no clear appeal path where, if you have an issue with an AI decision, you can then appeal. Those are key concerns.
Now, a third point is that there’s an inconsistency of rules that favours large foreign vendors who can absorb compliance costs. So, in terms of partnerships, the humanitarian sector is usually left playing safe by working with these large foreign vendors at the expense of local vendors who know the terrain, but do not have the overarching framework and implementation that can help drive that kind of partnership.
And so, these are key issues that I see being faced at the humanitarian level. But to be more practical – not just to talk about the problem – I think it’s important going forward for there to be a focus on regional baselines. I mean ECOWAS, the East African Community, the Southern African Development Community – those regional blocs. There need to be certain regional baselines, a kind of minimum viable product, covering consent, purpose limitation, minimisation, DPIAs – that’s data protection impact assessment frameworks – for high-risk uses of AI, and grievance mechanisms. It doesn’t have to be all-encompassing; but rather than just having silos of frameworks, you have an overarching framework at regional level that helps humanitarian organisations not just to solve the procurement paralysis problem or the data insecurity problem, but also to be able to partner with local vendors, in that sense.
Ka Man: Thank you, Timi. That is so, so interesting, and there’s a lot, really, to unpack, but you’ve explained that really clearly. I really liked how you called it procurement paralysis. That’s a really interesting concept, because zooming out, looking at the broader picture, in our research, we found that only 8% of humanitarian organisations, well, respondents from humanitarian organisations said that AI was highly embedded. And the reason for that inertia, so to speak, is because there was a low level of AI readiness as a whole, so low levels of investment, low levels of expertise, low levels of training.
So it was really starting at a baseline, and even though there are individuals and champions of AI within organisations, particularly those within INGOs, international organisations, they’re obviously working with procurement teams, and compliance teams, and data teams, and the gap is so great in terms of readiness that they can’t, or they’re not able or empowered, really, to take forward the procurement and deployment processes. So, we saw that big chasm, if you like, particularly pronounced in international organisations.
What we found in contrast is that local organisations aren’t subject to these broader processes, and because there are free, open-access LLMs like ChatGPT, there’s a lot of experimentation, and there are lots of individuals who’ve developed great, deep-level skills and are able to develop their own systems without these constraints. So, those constraints obviously exist for a reason.
So anyway, I just wanted to reflect the sort of broader picture that we saw in the research and then link it to what you’re sharing here, specifically in the African context, about this procurement paralysis. Because I was really wanting to unpack how we can localise AI systems – how can they be contextually appropriate? And then, because of this landscape that you’re describing across the African context, because of these different levels of regulatory readiness, that actually becomes a blocker to the procurement of localised applications.
Timi: Absolutely.
Ka Man: And that, in turn, is forcing people’s hands to turn to large foreign vendors, therefore really compounding that localisation challenge.
Timi: Yes, yes.
Ka Man: Does that, does that sort of reflect what you’re seeing?
Timi: Absolutely, absolutely, absolutely. It takes serious reflection across both the governance and also the humanitarian space and practicality to see that gap there. There is a business case to be made [laughs] for the need for a minimum viable product regional baseline. There is a business case to be made for local, because most often, it’s usually that you need to protect this, protect that, set up policies and frameworks, but we are not able to make that business case so succinctly as to why. This is one of it.
Ka Man: Very interesting, so I’m just curious – you’re in Nigeria, and you’ve obviously got experience working in different contexts as well. So, when you’re looking at regional levels, like a bloc that can overcome these regulatory challenges – which would then smoothen procurement processes and meet compliance expectations in terms of data impact assessments and so on, and enable the creation of a localised minimum viable product – is this happening in practice anywhere, or do you see certain types of countries that are ready for this, that can make strides in this space?
Timi: Well, as it stands, in terms of AI governance, so much is being said, with little being done appropriately. So you have situations where certain countries, for example, are trying to set up certain AI agencies to help develop regulations for AI. That’s going too far [laughs]. That’s going too far. I think what needs to happen is a top-down approach – a more regional one. Firstly, the AU taking on that challenge, seeing the importance not just in terms of protecting the rights of users, of citizens, of constituents, but also the business case and other cases that can be made for it; then taking it on to a more regional level; and then from the regional level, there are efforts towards a robust national approach – because we need to balance innovation and regulation. I call it reguvation – that’s regulation and innovation!
Ka Man: Right! [laughs]
Timi: [laughs] So we need a bit of that, and like I said earlier, it should start off with simple, minimum viable products that focus not just on broad-sweeping regulatory frameworks, but on issues around consent, data minimisation, human in the loop, and things of that nature, that can help drive covenants, not prescriptive rules in that sense.
Ka Man: Thank you, thank you. So interesting. You know, as you’re speaking, just already in this conversation, I’m really already seeing strengthened interconnections between education, AI literacy, regulation, and then the practicalities of procurement.
Timi: Yes
Ka Man: Because I’ve obviously been looking at them…
Timi: Yeah, they are close twins.
Ka Man: Exactly! So I’ve obviously been looking at these individual tracks and themes, because that’s how the report that we’ve worked on has been structured. But now, actually, it’s so illuminating talking to you, because you’re really showing with very concrete and tangible examples what, how they interplay. So, so interesting, thank you.
31:29: Chapter 4: Weighing up commercial AI tools use through a governance and security lens
Kind of building on what we’ve just been talking about in terms of localised tools, localised systems, so, we found that over 90% of humanitarians are using AI tools in their work. 7 out of 10 are using them regularly. They are primarily using commercial tools, like LLMs, like ChatGPT, Claude, Gemini, and so on. So yeah, 7 out of 10 are using those tools.
So, I understand that you’ve written, because you like to share, you write articles, don’t you, and you share on LinkedIn?
Timi: Yeah [Laughs]
Ka Man: Which is great, people can learn from you. And you’ve written about foreign tech companies controlling Nigerian data, is that right?
Timi: Oh, yes. Interestingly, it has birthed the NCC’s conversation. The NCC is having a conversation now around reviewing certain partnership frameworks at the level of telecoms companies partnering with platforms to enhance video communication services. So that article was actually quite instrumental, because I wrote about Free Basics and the fact that there was a need for a level playing ground – what we call in internet governance net neutrality, a level playing field for platforms – because if one platform has a partnership with telecoms to access the platform without data charges, or to get enhanced services at the expense of others, that is not good for local platforms, as well as other bigger platforms that may not have such privileged partnerships. So, yeah.
Ka Man: Oh, that’s really interesting. Well, I’m glad that has contributed to developments and continues to do so. So, I wanted to get your view on balancing what’s ideal versus what’s practical. How do you think organisations, particularly humanitarian and development organisations, should approach this balance between accessible commercial tools, like ChatGPT, and maintaining control of their data and good practice? What are your thoughts on that?
Timi: Yes, absolutely. The research is quite spot on, and that’s one of my most important takeaways, and I think it speaks to some of the things I will say. The truth is that naturally, people gravitate to what is available. It’s not just that commercial tools are convenient, they’re also available. So, when you look at it, you can see that it is only now that we’re beginning to have specialised chatbots for specific fields. So, for instance, in certain countries the medical profession has specialised chatbots for medical doctors and medical students that cater to the needs in that space. So we will see a lot of that happening going forward. Even in the humanitarian sector, there will be specialised tools, but as it stands now, what is available and what is convenient is largely commercial tools. And when we say commercial tools, for the sake of our listeners: ChatGPT, Meta AI, and all that – since they didn’t pay us for the commercial [laughs].
But the most important thing here is the fact that, in terms of governance, when you use these commercial tools, it takes you outside the obligations that you have to your community. So, you might have, as a humanitarian organisation, a certain community expectation towards you. But when you use those tools, you’re not governed by those community expectations; you’re now governed by the community guidelines of the commercial tools that you use. And one of the key things – that’s why AI literacy is key – is to have a clear understanding of the frameworks that govern these tools. What are the policy frameworks, the contractual frameworks behind them? That’s one key thing, which dovetails back to AI literacy.
And my stance is usually that commercial AI is like a loudspeaker: you assume that whatever you type can escape. So, in that sense, for the humanitarian sector and others – not just the humanitarian sector, but those that work in particularly sensitive situations – the first thing is that it’s important to segregate. In other words, never paste raw personal data; use anonymised summaries or synthetic examples. It’s as simple as taking out the names and any other details that could amount to personally identifiable information, PII, about the data subject. That’s one point: segregate.
The second point is to ensure that you take a cognisant look at the contracts. Do they have elements of control? Usually, I would prefer enterprise plans, with certain data processing agreements, with agreements of no training on your inputs, and clear retention limits, including certain considerations around regional hosting, depending on the sensitivity of the data.
Now, the third point, which will be my last one, is focused largely on the need – for the most sensitive data, or PII, that’s Personally Identifiable Information – to use local or self-hosted models that sit behind your own firewall, to drive your own privacy and all that.
And one thing that most people overlook comes back to the issue of literacy: it’s important to always ask communities’ permission in plain language – very important. That, I think, kind of summarises some of the practical approaches. Because as it stands now, what is available is what’s desirable, and until we develop specialised tools that speak to sectors, the use of commercial AI will still be a thing.
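To make the ‘segregate’ step concrete for technically minded readers, here is a minimal illustrative sketch (our addition, not something Timi describes) of stripping a few obvious identifiers from text before it is pasted into a commercial AI tool. The patterns, names and example text are hypothetical; real PII detection needs dedicated tooling, local context and human review – note that the person’s name still slips through, which underlines Timi’s point that literacy matters as much as tooling.

```python
import re

# Illustrative only: naive regex-based redaction of a few obvious identifiers
# before text is shared with a commercial AI tool. Real PII handling needs far
# more than this (names, addresses, local ID formats, multiple languages, etc.).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
    "[REF]":   re.compile(r"\b[A-Z]{2}\d{6,}\b"),  # e.g. case or registration numbers
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before sharing the text."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

case_note = "Contact Amina at amina@example.org or +234 801 234 5678, ref NG123456."
print(redact(case_note))
# Prints: Contact Amina at [EMAIL] or [PHONE], ref [REF].
# The name "Amina" is NOT caught - regex alone is not enough.
```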
Ka Man: That’s really interesting, thank you, Timi. I just wanted to pick up on what you said around assuming what you type will be seen – and therefore obviously not entering any sensitive data. Because something that came through in the research and in the interviews that we did is that a lot of users have high levels of trust in their AI because of these conversational interfaces. People maybe start off really conscious of good practice, of what you should do, but after a while that trust builds. Do you think that trust poses a risk to humanitarians, and how do you maintain good practice when under pressure, under stress, under time constraints – not taking shortcuts thinking ‘this won’t do any harm’?
Timi: Yes, absolutely. The truth about it is, it is a reality – the reality particularly of urgent situations. Hence why there’s a need for policy frameworks that spell out what you need to do at certain points, and then ensure accountability against those frameworks. Because if you leave it entirely to individual self-determination as to what to do, it leaves a lot of gaps in terms of achieving outputs. So there need to be frameworks that ensure accountability, for a consistency of output – a consistency of ensuring that people do what they are supposed to do in the context of sensitive data and things like that.
And this brings me to – I’ll just state four principles to journey with in this context. One is, like you mentioned, do no harm. It’s important to run some kind of assessment before starting: who might be harmed or misclassified? Who is excluded and who is exposed?
The second principle is to ensure that there is a sense of data dignity in the policy framing, which means collect the least that you need – data minimisation and things of that nature – get consent, and ensure that you give an opt-out. Which is usually a challenge with most AI platforms: they do not understand that a key part of self-determination, particularly in humanitarian contexts, is to give people options. And so even that must come into play in terms of how we interact with AI – commercial AI, or AI generally – within those contexts, to ensure that people have a non-AI pathway to achieving the same services, if they so choose.
Now, the third is to ensure accountability, and that requires human in the loop – human in charge. So it’s not just a laissez-faire approach, where some people do what’s right, some people don’t, and some do what’s right today but forget to tomorrow. There needs to be a real person with authority in the loop for high-risk calls. And there must be a public appeal process for situations where people want to challenge those decisions.
And then a key last point is what we call auditability, which means keep a simple log of model versions, ensure that prompts and key decisions are logged, and know who signs off on things. Very important. So, those are some of the key elements there. But to leave it entirely to humans to do what is supposed to be done may not be a workable approach without a framework – I believe in minimum viable products – that covers those four key principles I mentioned.
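On the auditability point – keeping a simple log of model versions, prompts, decisions and sign-off – here is a minimal sketch of what such a log could look like in practice. This is our illustration rather than anything prescribed in the conversation: the file name, field names and example values are hypothetical, and a real deployment would add access controls and retention rules set by the organisation’s own policy.

```python
import datetime
import getpass
import hashlib
import json
from pathlib import Path

LOG_FILE = Path("ai_audit_log.jsonl")  # hypothetical location for an append-only log

def log_ai_decision(model_version: str, prompt: str, decision: str, approved_by: str) -> None:
    """Append one auditable record: who used which model, for what, and who signed off.

    The prompt is stored only as a hash, so the log itself holds no sensitive text."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "operator": getpass.getuser(),
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "decision": decision,
        "approved_by": approved_by,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example use (all values hypothetical):
log_ai_decision(
    model_version="assistant-v1 (2025-09)",
    prompt="Summarise anonymised needs-assessment notes for camp X",
    decision="Summary reviewed and accepted by programme officer",
    approved_by="programme.officer@example.org",
)
```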
Ka Man: That’s brilliant, that’s so helpful for our listeners, and for me, that’s gold dust, really, what you’ve just shared, because I’m sure our listeners could be jotting down those key points that you’ve made there, and start to research these concepts, if they’re not already familiar, and start to build that into their own AI policies and approaches, and thinking about how they embed this into their work.
What I thought was particularly interesting, from my personal perspective, is the opt-out approach and the non-AI pathway. I think that’s really, really interesting. I mean, obviously, I can think of allied parallels to a non-humanitarian context, where, for example, a lot of businesses were no longer accepting cash, and they were moving to cashless, digital payments only, but of course, that then excludes certain populations, people who do not have the technology, do not have bank accounts, and so on. So, I think that’s so relevant, and very interesting.
The other concept that was new to me – I’ve not heard this term; how you packaged it was new to me, though what you dug into were concepts I’m familiar with – was how you framed it as the dignity of data. So it’s not just the dignity of the people, which is obviously front and centre, that’s central to everything, but the dignity of the data, and how we’re looking at the two in parallel with each other. I think that’s really thought-provoking.
And then finally, the human in the loop and accountability is so, so important. And it can feel, I think, maybe a bit removed to people now, at this early stage of AI experimentation, if they think, okay, at the moment, I’m just testing and trialling as an individual using ChatGPT – that feels like quite far down the line in terms of deployment, this human in the loop and accountability. But I think what you’re saying is that, really, from the outset, you could be thinking about your pathways to these systems, and that starts with that early experimentation phase, and what data you’re putting into your system.
Timi: Absolutely. Absolutely. For the more individual context, it’s always very important. One thing is that the focus is a lot on the AI algorithm – which might be something I should talk about, the black box and things like that. I tend to say brown box; I’m working on a paper called the brown box anyway [laughs], I let it out here first. The black box is the fact that AI algorithms are not explainable, which the private sector uses as an excuse – look, this thing is not explainable – but it’s also gold. A lot of private companies use that unexplainability of the AI algorithm to milk it and profit, like gold. And you know gold is brown, so you have the brown box. So the brown box basically develops the black box concept further. But the challenge is that the conversation is not just about – well, that’s a bit of academic research [laughs]
Ka Man: It’s interesting
Timi: But the context is not just about the black box, or the brown box as I termed it – that means profiting from the unexplainability. It’s also the problem of data. And that’s key. And so, for individuals, in the course of their interaction with LLMs and AI models, it’s important to also ask: how important is this data to me or to someone else? That would inform the kind of accountability level that they can apply at a more individual level.
Ka Man: That’s really interesting. So, you’re sort of talking about how the opacity and the unexplainability of how AI algorithms work is actually a commercial advantage.
Timi: Yes, yes.
Ka Man: For the firms.
Timi: Absolutely
Ka Man: And we’ve got this dependence, and it’s about the power, isn’t it? Who holds the power here, and they’re holding the power? No, that’s really interesting. I look forward to seeing that paper! When will it be ready?
Timi: [Laughs] Halfway, halfway through, but…
Ka Man: [Laughs] Halfway. No pressure or anything, Timi, but I need to see that soon [laughs] for more context! That’s really interesting. No, I look forward to seeing that, and I’m sure our listeners will be too.
48:53: Chapter 5: Community Q&A: Timi answers your questions
So, this has been such a fascinating and illuminating discussion, and I really thank you for sharing these insights so far, so vividly. I just wanted to change tack a little, and move to some questions from our community.
So, we launched our report last month with a wonderful launch event which took place on the 5th of August, where we convened an expert panel, and we received so many questions, and it was wonderful to see that engagement with the research from humanitarians and people from outside of the sector joining together in that space. So we had over 700 people, from over 100 countries, so it was brilliant to have that diversity represented in the audience.
So we couldn’t get through all the questions, unfortunately, because of the time and the volume. So what I’ve done is rolled forward those questions, which we’re now putting to the experts, because we want to make sure this is a live and ongoing conversation, because that’s so crucial. We don’t want this to be a static piece of research, we want this to drive and promote further dialogue. It’s really great to have this opportunity to put some of the audience questions to you as a governance expert.
So, some of these you may be able to speak to directly; others you might not be able to address so directly, because they were obviously asked in the context of that event. But if you can signpost to any resources or anywhere else that might be relevant, that would be wonderful.
Here goes. The first one is around AI transparency, which is actually just what we’ve been speaking about, isn’t it? So, Jonathan asked, do you think that AI will force or fail greater transparency of humanitarian community in acknowledging data sources when protecting people’s personal data or knowledge and references including when AI was used. So I assume they may mean in, say, for example, donor reporting, or sharing information between agencies.
Timi: Oh, well, in context, I would say that there is sort of a push, right, for greater transparency in the humanitarian space. Because the research you’ve done is also some kind of push, right. Because when we talk about AI literacy, we usually assume that it’s just the education, schooling approach. But AI literacy is also about research that shines a light on a certain context. It’s about the humanitarian aid worker ensuring that beneficiaries have options, and are aware of options, to not use the AI tools provided. So, those are levels of it. So, I think there’s a greater push for transparency in humanitarian settings.
As you know, there are a lot of people – watchdogs, the Humanitarian Leadership Academy – asking questions around: whose data is this? How was AI used in making certain decisions? And those are the kinds of questions that will determine whether AI will fail or not in terms of solving humanitarian questions. They may seem simple, but they are tricky questions around the algorithm – though not just the algorithm, which has been the conversation for a long time, but also about data. And I think that’s how the conversation should go going forward. A data and algorithm conversation is what will determine the increased transparency that we get, and whether AI fails or doesn’t.
Ka Man: Thank you very much. I’ve got lots of follow-up questions to ask you, but I’ll make sure that I get through everyone’s questions today. So, the next question is from Alex, who asked a question around data protection, and it’s a very broad and wide-ranging question, but maybe you can hone in on a specific aspect that you see fit. So, what are the implications for using AI for data protection in the humanitarian sector?
Timi: I would say that AI is a two-edged sword: it can be used to strengthen data protection, and it can also undermine it. So, it’s neither here nor there. I believe the question is expecting me to say something around the undermining of data protection by AI, but that’s not the whole of it. Making the case for AI now – being the devil’s advocate in that context – we’re talking about issues around automating anonymisation, using AI to spot unusual access patterns, and correcting that loop or gap. Those are some of the good uses of AI in terms of data protection. But you could also use AI to undermine data protection through mass profiling, or inference of sensitive traits. You remember the case around Cambridge Analytica? [laughs] That’s because of the scale at which you can get work done. So it can be a two-edged sword. And that’s why, for me, the undermining part of this question emphasises the need for AI governance – and AI governance must evolve alongside AI adoption.
Ka Man: Thank you very much. It all comes back to those linkages and interconnections, doesn’t it, between all of these areas?
Timi: Absolutely.
Ka Man: Thank you. So, I’ve got a question now from, it’s a combined question – well, it’s a question from two people, Zunera and Maeen. So, they’ve asked, can you briefly discuss the use of AI and data privacy, especially concerns around data storage and usage, and data going into the wrong hands.
Timi: Okay, so that kind of brings us back to the conversation that we talked about. I remember you mentioned cash, right [laughs]
Ka Man: Mmm
Timi: So that got me thinking around the fact that we need to treat data like cash.
Ka Man: Right
Timi: You know, like, you lock it, track who touches it, and then you only collect what you absolutely need. It’s important. But then, in terms of the risk, the risk is not just about algorithm, which we’ve laid a foundation on earlier, but it’s also about data ending up in the wrong hands, which is a key point in that sense.
Now, for the humanitarian context, it’s important that this is put in context, because you cannot afford for there to be a leak, or data falling into the wrong hands. The humanitarian context is even more sensitive to the need for a robust AI governance framework. Because, you see, when you think about humanitarian work, you think about, for example, refugees – and the connection to data will be the IDs of refugees, right, health records, for example, or movement patterns for those who are running from certain communities or another, like the instances that we see around West Africa, right, and IDP camps.
Now, imagine that there is a leak of the movement pattern of a certain set of people who are running from ISWAP or Boko Haram, you know. And then it falls into the hands of the same people they are running from – just something as simple as a movement pattern. Not even personally identifiable information now, but information like a movement pattern, which can be acted upon. Imagine the harm that it would do to those kinds of people in a vulnerable position. And so it’s a key challenge, really, data falling into the wrong hands. And like I said, it’s not just about the algorithm challenge, the black box – it’s also about the brown box. Not just the algorithm, which is what we focus a lot on; now we are shifting into the need for data protection and things of that nature.
And now, I also think the conversation needs to move into companies profiting from all that – the black box, which is the brown box in that sense. And that’s why I say it’s important to treat data like cash: a very strict tracking approach. And I think for the humanitarian sector, in terms of those two challenges – the algorithm challenge as well as the data challenge – the algorithm challenge, as important as it is, is not as important as the data challenge, because of the vulnerability of the beneficiaries or data subjects involved.
Ka Man: Absolutely. I think that’s really solid advice. It’s almost about getting your own house in order, so to speak, with your processes for storing data. Because often we hear on the news about a data leak, a data breach, and it’s happened in a surprisingly fundamental way – it’s not a sophisticated job by a hacking outfit, for example.
Timi: It’s more of a big issue even than algorithm bias and things of that nature, but the conversation has been a lot around the algorithm bias, which is important, because we need an inclusive and fair, equitable society, and AI must help us achieve that, not to work against us. But I think for the humanitarian sector, the data conversation is key.
Ka Man: Yeah, so treating data like you treat your cash, I think, is a really good takeaway. For everyone, that’s – everyone can shift your mindset if you’re not already thinking like that, in terms of data that you’re holding on spreadsheets, where you’re saving that, how you’re storing that, how you’re sharing that. So, that’s really practical guidance, so thank you.
Timi: And, you know, when you talk about data, we tend to think of data at the level of PII, that’s personally identifiable information, but even metadata – for instance, movement patterns – it’s important to also have conversations around those things as well.
Ka Man: Sure. That’s actually really important. Yeah, like you say, people are thinking about personally identifiable information, but yeah, broad patterns, that can be overlaid with other data…
Timi: Absolutely
Ka Man: That represents increased levels of risk. So, yeah, thank you.
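To make the “treat data like cash” habit more concrete for practitioners, here is a minimal illustrative sketch in Python showing two of the practices Timi describes: collecting only the fields a workflow actually needs (data minimisation) and logging every access to a record. The field names and the audit-log file are hypothetical examples, not a recommendation of any particular tool or schema.

```python
import csv
import json
from datetime import datetime, timezone

# Fields the programme genuinely needs; everything else is dropped at intake.
# These names are purely illustrative.
REQUIRED_FIELDS = {"case_id", "assistance_type", "distribution_date"}

AUDIT_LOG = "data_access_log.jsonl"  # hypothetical append-only audit log


def minimise(record: dict) -> dict:
    """Keep only the fields the workflow requires; discard the rest."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}


def log_access(user: str, case_id: str, purpose: str) -> None:
    """Append a timestamped entry recording who touched which record and why."""
    entry = {
        "user": user,
        "case_id": case_id,
        "purpose": purpose,
        "accessed_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def load_minimised(path: str, user: str, purpose: str) -> list[dict]:
    """Read a registration spreadsheet, minimising each row and logging the access."""
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            slim = minimise(row)
            log_access(user, slim.get("case_id", "unknown"), purpose)
            rows.append(slim)
    return rows
```

The point of the sketch is the discipline, not the code: decide up front which fields are genuinely needed, and make every access to the remainder traceable.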
So yes, I’ve got another question from Martina. I’m not sure if this is your particular specialist area, but I know you work in a US context as well. This is a regional governance question: she asks, would you say Europe is more restricted in using AI (data protection policy), or is it a global restriction right now?
Timi: I wouldn’t want to use the word restrictive. I would say that Europe is more prescriptive,
Ka Man: Prescriptive, right
Timi: Yes [laughs]. And being prescriptive may look restrictive. But I think, to put it in context, Europe is more prescriptive. And I say that because we borrowed from the GDPR – which is usually also the challenge with African countries, in that we’re always borrowing regulations rather than allowing them to evolve in their own way. So the GDPR is something I’m quite familiar with, because a lot of African countries’ data protection frameworks evolved from it. So I would say it’s prescriptive. And aside from the GDPR, there’s also the AI Act. I think those are the two key instruments this question speaks to, and they set out prescriptive rules on data and on AI risk systems.
However, they mostly provide safeguards for Europeans, which leaves a certain level of unevenness. In the Global South – in countries in sub-Saharan Africa, Latin America and certain parts of Asia, where a lot of humanitarian work happens – that clear rule is lacking, so it leaves a grey area. So I would say that though Europe’s protection is strong, because it mostly applies to Europeans there is still a big gap in the contexts where humanitarian work actually happens, and that is a cause for concern. Hopefully I’ve been able to give my perspective on that question, though I’m not sure I answered it in substance.
Ka Man: Oh no, thank you for sharing your insights – I think you did. In terms of GDPR, the General Data Protection Regulation from the European Union has provided this robust framework that others, including African nations, can learn from, but there is a gap, because obviously it’s not contextualised: it’s fit for purpose for Europe, in a European context. And actually, there might even be gaps between the European level and specific countries. So you made that point clear to me, thank you.
And then the final audience question before we close. So Téïlo asks a question around safeguarding for data safety. So, Téïlo asks, what safeguarding strategies can practitioners adopt in order to address data safety concerns? Are there any tools or easy-to-implement practices that we can use today?
Timi: Okay, so firstly, I will say that it’s important to strip personally identifiable information out before using commercial AI – as much as you can – that’s one important thing. I would also say that in some contexts, depending on the sensitivity of the data, it’s important to use encryption and access logs. Another key element is training. AI literacy is not just about educating a population on the use of AI tools, how those tools affect them, how to communicate that and get consent and things of that nature, but also about day-to-day interaction. And one key element of training that I see a lot of people don’t use is red teaming. So you could run a red teaming exercise to identify who the data could hurt, and who is most exposed by a data breach. I think that’s a key part of the AI literacy approach – there’s always a top-down sharing of information, but there needs to be more simulation-based, red teaming kinds of approaches.
And then lastly, I would say deploy tools – for example, data protection impact assessment tools – which can help spot risk early. Those are some of the key things. In that context, I would say safeguarding standards are like the seatbelt and the brakes: putting safeguards in place is basically about avoiding accidents, and accidents could include a breach of trust, or loss, and things of that nature. So those are a few things that could be used.
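As one illustration of Timi’s first point about stripping personally identifiable information before using commercial AI, the minimal sketch below (Python, using simple regular expressions) shows the basic idea of redacting obvious identifiers before a prompt is ever sent to an external service. The patterns and placeholder tags are assumptions for illustration only; real deployments would need locally appropriate rules and human review, since regex alone will not catch names or many local identifier formats.

```python
import re

# Illustrative patterns only: these will miss names and many local formats,
# so they are a starting point, not a complete safeguard.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "ID_NUMBER": re.compile(r"\b\d{9,12}\b"),
}


def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholder tags before the
    text is passed to any external AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


if __name__ == "__main__":
    prompt = "Follow up with the case contact, reachable at amina@example.org or +234 801 234 5678."
    print(redact(prompt))
    # -> "Follow up with the case contact, reachable at [EMAIL] or [PHONE]."
```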
Ka Man: That’s great, thank you. I thought the point you made around simulation was really interesting. It’s not something that is part of daily practice, is it? You’re not usually thinking about these kinds of strategies,
Timi: Absolutely
Ka Man: But it’s interesting to consider that. Yeah, so thank you! That was – that covers the audience questions.
Timi: I’d suggest one more thing – in the area of AI and cybersecurity, I don’t know if you’ve heard much about the IBM Cyber Range.
Ka Man: No
Timi: Oh, so when you visit the IBM Cyber Range – I visited the one in Boston, for example – you see live simulations of cyberattacks
Ka Man: Oh right, wow
Timi: and you work out, on the spot, the approach for dealing with them. So you should check it out – it’s a great way to actually learn.
Ka Man: Gosh, that sounds a bit…I think I’ll be very stressed in that scenario…
Timi: Exactly, me too
Ka Man: Well, that’s really interesting, because – sorry, I’m going off on a slight tangent – at the Humanitarian Leadership Academy, we’ve delivered humanitarian capacity strengthening training over our decade-long history, and simulation-based training is quite common in the sector: people, especially if they’re going to high-risk contexts, do simulation training for security – HEAT training, which I think is Hostile Environment Awareness Training. So it’s commonplace in the humanitarian sector
Timi: Absolutely
Ka Man: It’s not a new concept, but maybe, in terms of data and cybersecurity, this is a new part of that.
Timi: Absolutely. An evolving aspect of AI governance is the fact that you can always take old rules, old approaches, old tricks and apply them to a new situation. For example, for incident reporting on AI risk, people have taken models from the aviation sector and applied them in the AI governance space. So it’s not about inventing something new, really, but taking what works and applying it in a way that is contextualised to the particular situation.
Ka Man: Exactly. No, that’s so interesting, thank you. We’re going to wrap up this conversation shortly. I’ve learned so much from you – you’ve genuinely illuminated so many areas for me to look into further, and you’ve provided some really solid, practical guidance for our listeners, with lots of terms and suggestions for people to explore and consider.
70:21: Chapter 6: One thing we need to talk about more in the AI governance space: the Global North and South power imbalance
So, I just wondered if I can ask you, what’s one thing in the area of AI governance and policy that you think is overlooked or not talked about often enough, but you think is vital to accelerate shared progress in this space?
Timi: Well, one thing that is not often talked about is the power imbalance, which you mentioned. We talk about bias, privacy, regulations, but not enough about who sets the rules. For instance, today most humanitarian AI runs on platforms or tools built in the Global North, with data, or even labour, from the Global South. For example, you remember the report about the labour used to train AI models for ChatGPT, for OpenAI, in Kenya? I’m sure you heard about that issue. So yes, the platforms are built in the Global North, but even the data and labour inputs come from the Global South. That imbalance is there. So I think it’s important that we look at real capacity for the Global South, and also shared control. Otherwise, power in AI governance will remain unequal going forward.
And by capacity, I don’t simply mean the Global South using tools developed in the Global North. I mean real funding for the Global South – for universities, labs and startups – to actually build, test and deploy AI models and tools that are localised. I think that’s a key part of it.
Ka Man: That’s really interesting, thank you. It links to what you were saying earlier in this conversation about that procurement paralysis dilemma, and having to purchase Global North models because of compliance and regulatory challenges.
Timi: Absolutely
Ka Man: So it’s almost like this big puzzle with so many interdependencies that need to be tackled together. Talking to you has really reinforced a belief that emerged from the research – you’ve really galvanised this belief that we need to collaborate and cooperate and harness shared expertise, because it’s not an isolated challenge, it’s a global challenge.
Timi: Yes, absolutely
Ka Man: It’s a global challenge, but even the local technology challenges are a global challenge, too.
Timi: Yes
Ka Man: It reflects those patterns and those power structures. So, so much food for thought there. So, thank you so much.
Timi: Pleasure
73:49: Chapter 7: Closing thoughts from Timi
Ka Man: So before we wrap up this conversation, Timi, do you have any closing remarks to share, or anything you would like to really reinforce or highlight with our listeners?
Timi: Well, I would say that, like you mentioned, it’s not just a challenge – it’s an interdisciplinary challenge. And it’s important that the conversations start to happen not just at the international level of the UN and UNESCO and things of that nature, but – as the Humanitarian Leadership Academy has taken pioneering leadership on – at regional and national levels. I think that will feed into building greater power symmetry in the deployment of AI, because it’s a win-win for everyone if that happens.
Ka Man: Thank you, Timi. That’s really galvanised me, and the team, to push ahead with this mission, because I absolutely, truly believe that, and totally agree with everything you’ve just shared.
So the conversations will continue, and honestly, thank you so much for this. It’s been so thought-provoking – I’ve learned a lot, and I’m sure our listeners have learned from your expertise. As well as this audio podcast, Timi has written a short Q&A article with some good practical tips and guidance. They’re packaged together as one piece, so that you can engage with this in whichever format suits you best.
And please do share this conversation, please do share the article with someone who may find it interesting, because it goes back to what Timi was saying about AI. It begins with education and AI literacy, not just using the tools, but that awareness and understanding and a more holistic view. So, I’m really, really pleased to be just a small part of this.
Timi: Absolutely, yeah. Just as a joke [laughs], you know, I like saying that if the foundation be destroyed, what can the makeup artist do, right? The makeup artist will only be able to put on powder. So the foundation is AI literacy – if it’s not fully developed, then you have a powdered governance framework.
Ka Man: Absolutely, absolutely.
[Music]
Ka Man: Timi Olagunju, thank you very much for joining us today on Fresh Humanitarian Perspectives.
Timi: Pleasure is mine. Thank you for having me. Wonderful work. [Music]

AI governance starts with AI literacy. AI literacy, not just for engineers, but for everyone.

I think it’s important that we look at real capacity for the Global South, and also shared control. Otherwise, power in AI governance will remain unequal going forward.
Continuing the conversations: new Humanitarian AI podcast miniseries
This conversation is the third episode of the new humanitarian AI podcast miniseries, which builds on the August 2025 research: ‘How are humanitarians using artificial intelligence? Mapping current practice and future potential’. Tune in for long-form, accessible conversations with diverse expert guests, sharing perspectives on themes emerging from the research, including implementation challenges, governance, cultural frameworks and ethical considerations, as well as localised AI solutions, with global views and perspectives from Africa. The miniseries aims to promote information exchange and dialogue to support ethical humanitarian AI development.
Episode 1: How are humanitarians using AI: reflections on our community-centred research approach with Lucy Hall, Ka Man Parkinson and Madigan Johnson [Listen here]
Episode 2: Bridging implementation gaps: from AI literacy to localisation – in conversation with Michael Tjalve [Listen here]
About the speakers
Timi Olagunju is a lawyer and policy expert working at the intersection of emerging technology, governance, and development. He is the founder of the AI Literacy Foundation and Youths in Motion, and serves on the boards of the Slum and Rural Health Initiative and Feed To Grow Africa. His advocacy on AI governance, including his submission published by the White House Office of Science and Technology Policy in 2025, contributed to U.S. policy debates that shaped the Executive Order on AI Education for American Youth. His publications and recommendations have informed policymakers and courts alike.
Timi is a Partner at Timeless Practice, and has advised policymakers, multinational companies, digital platforms, and global institutions including UNICEF and the ILO. He is an Internet Society Fellow and contributed to Berkman Klein Center research on digital self-determination now informing policy in Germany. A former Global Vice President of Generation Democracy (IRI), he has been recognized by President Obama and the Ooni of Ife. He studied law at the University of Ibadan and completed a master’s degree at Harvard University’s John F. Kennedy School of Government as an Edward Mason Fellow, specializing in digital policy and governance of emerging technology.
Ka Man Parkinson is Communications and Marketing Lead at the Humanitarian Leadership Academy. With 20 years’ experience in communications and marketing management at UK higher education institutions and the British Council, Ka Man now leads on community building initiatives as part of the HLA’s convening strategy. She takes an interdisciplinary people-centred approach to her work, blending multimedia campaigns with learning and research initiatives. Ka Man is the producer of the HLA’s Fresh Humanitarian Perspectives podcast and leads the HLA webinar series. Currently on her own humanitarian AI learning journey, her interest in technology and organisational change stems from her time as an undergraduate at The University of Manchester, where she completed a BSc in Management and IT. She also holds an MA in Business and Chinese from the University of Leeds, and a CIM Professional Diploma in Marketing.
Further reading
Mastercard Foundation. (2024). AI in Africa [White paper].
https://www.mastercard.com/news/media/ue4fmcc5/mastercard-ai-in-africa-2025.pdf
Olagunju, T., & Jumana, A. (2024). Ethical challenges in AI-driven personalized learning platforms. MIT AI and Society. https://www.researchgate.net/publication/394541038_ETHICAL_CHALLENGES_IN_AI-DRIVEN_PERSONALIZED_LEARNING_PLATFORMS
Olagunju, T. (2025). Smarter than sanctions: The case for AI diplomacy over export bans. HKS Review. https://www.researchgate.net/publication/394539387_Smarter_Than_Sanctions_The_Case_for_AI_Diplomacy_Over_Export_Bans-response-_ostp-nsf-rfi-notice-request-for-information-on-the-development-of-an-artificial-intelligence-ai-action-_planpdf
Olagunju, T. (2017, February). The actual cost of free basics in Nigeria. YourCommonwealth. https://yourcommonwealth.org/editors-pick/the-actual-cost-of-free-basics-in-nigeria
Olagunju, T. (2025, May 5). ₦352bn fine: What FCCPC and Meta must do next. Punch Nigeria. https://punchng.com/n352bn-fine-what-fccpc-and-meta-must-do-next
Olagunju, T. (2019, March). Three recommendations to Nigeria’s policymakers on blockchain technology and regulations. Irish Tech News. https://irishtechnews.ie/three-recommendations-to-nigerias-policymakers-on-blockchain-technology-and-regulations
Olagunju, T. (2024, June). A critique of the strategic plan of Nigeria’s Ministry of Communications, Innovation, and Digital Economy. TheCable. https://www.thecable.ng/critique-strategic-plan-ministry-communications-innovation-digital-economy
Olagunju, T. (2023, November). Who controls Nigeria’s internet? Punch Nigeria. https://punchng.com/who-controls-nigerias-internet
Share the conversation
Did you enjoy this episode? Please share with someone who might find it useful.
We love to hear listener feedback – please leave a comment on your usual podcast platform, connect with us on social media or email info@humanitarian.academy
Disclaimer
The views and opinions expressed in our podcast are those of the speakers and do not necessarily reflect the views or positions of their organisations. This podcast series has been produced to promote learning and dialogue and is not intended as prescriptive advice. Organisations should conduct their own assessments based on their specific contexts, requirements and risk tolerances.
Episode produced in September 2025