

HLA at Humanitarian Networks and Partnerships Weeks | 2nd – 12th March 2026


Join the HLA at HNPW 2026 for four sessions discussing youth leadership through the lens of crisis response in Ukraine, Peru and Türkiye; what can be done to drive change towards a locally led research agenda; and local leadership in humanitarian AI development. This year we are pleased to collaborate with Start Network, H2H Network, ELRHA, Open Space Works Ukraine, KAOS, and the Training Providers Forum.

Click the registration buttons to register for your sessions of interest. We look forward to connecting with partners and learners attending during the in-person week.

The State of Learning and Development in the Nonprofit Sector

📅 5 March | 🕛 13:00 UTC | 💻 Zoom Webinar

The Training Providers Forum – Groupe URD, Humanitarian Leadership Academy, Humentum, IECAH, INTRAC, NetHope, and RedR UK – invite you to join us for this online session taking place as part of Humanitarian Networks and Partnerships Week (HNPW).

Over the past year the global humanitarian and development sectors have been rocked by funding cuts on an unprecedented scale, while simultaneously being called to respond to escalating levels of need. This session examines the impact that these dramatic sector changes are having on the provision of training and Learning and Development for humanitarian and development actors.

This session is aimed at those working in L&D, HR, or people and culture, or in a leadership role in the humanitarian and development sectors.

Unsettling the status quo: The case for locally led humanitarian research

📅 Tuesday 10 March | 🕛 10:00 – 11:30 UTC | 📍 CICG – Salle Vevey and online

Speakers: Tamara Low – HLA, Maryana Zaviyska – Open Space Works Ukraine, Umut Güner – KAOS, Kai Hopkins – ELRHA

Despite strong rhetoric around localisation, humanitarian research is still largely controlled by well-resourced Western institutions, with local actors often sidelined into limited roles. This undermines the value of locally led research, which is typically more relevant, culturally grounded, and responsive to affected communities—especially critical amid shrinking sector funding.

This session will explore the power shifts, funding changes, and norm-setting required to advance a genuinely locally led research agenda, drawing on insights from local research organisations, funders, and humanitarian leaders working to drive this change.

A Call to Action for Youth Leadership and the Future of Humanitarian Action

📅 Thursday 12 March | 🕛 13:00 – 14:30 | 📍 CICG – Salle 13 and online

Speakers: Jennifer Dias – HLA, Maryana Zaviyska – Open Space Works Ukraine, Olha Shevchuk-Kliuzheva – Alliance UA CSO, Mercedes Garcia – Save the Children International, Youth Intern from the HLA’s Youth Internship Programme in Türkiye

Young people are already playing critical roles in humanitarian response across the world, yet their leadership remains poorly defined, under-recognised, and weakly embedded in humanitarian systems. Often active as volunteers and innovators, youth face limited pathways to formal leadership and professional growth.

This panel will explore how the sector can shift from ad-hoc youth engagement to genuine youth leadership, drawing on global research and lived experiences to identify practical, safe, and empowering pathways for youth to lead in humanitarian action.

Thank you for joining us for:

Bridging digital divides: centring local leadership in humanitarian AI development

📅 Tuesday 3 March | 🕛 11:00 – 12:15 UTC | 💻 Zoom Webinar

Speakers: Musaab Abdalhadi – Save the Children in Sudan, Ali Al Mokdad – independent, Lucy Hall – HLA, Gülsüm Özkaya – IHH, Ka Man Parkinson – HLA

AI is rapidly shaping humanitarian work, but local actors are still largely excluded from how these technologies are designed and governed, risking deeper inequalities.

This session explores how AI can become a driver of localisation itself by embedding inclusion, ethics, and collaboration into humanitarian systems. Drawing on new research and frameworks, panelists will discuss practical ways to build locally led AI ecosystems and reimagine humanitarian action as co-created, context-driven, and collectively intelligent.

Watch the recording and access the transcript below.

Session transcript

This transcript has been generated using automated tools and has been lightly edited for clarity and readability. The transcript has been reviewed but minor errors or omissions may remain.

Ka Man: Hello, everyone, and welcome to today’s session, brought to you as part of Humanitarian Networks and Partnerships Week, HNPW. My name is Ka Man Parkinson, I’m Communications and Marketing Lead at the Humanitarian Leadership Academy, and I’m absolutely delighted to welcome you to this session today, Bridging Digital Divides, Centering Local Leadership in Humanitarian AI Development.

This session is taking place as part of the H2H Network virtual forum, and we’re delighted to be joining today. The Humanitarian Leadership Academy is part of Save the Children, and our mission is to accelerate the movements for locally-led humanitarian action.

Today’s session is expected to last 75 minutes, with around an hour for the main content, and around 15 minutes for your questions. So, if you have any questions, please submit those using the Zoom Q&A.

The session will begin with welcome introductions, followed by a short presentation from myself and Lucy to contextualise this session in the HLA’s work. We’ll then move into local leadership perspectives with our panellists, followed by a panel discussion, and then we’ll move on to audience questions.

I’m really delighted to be joined by some incredible panellists today, and I’m really grateful to them for taking the time to be here for this important conversation, particularly in the very challenging context in which we’re all operating. So, I’m delighted to welcome Musaab Abdalhadi from Save the Children in Sudan, Ali Al Mokdad, a senior independent leader, Lucy Hall, my colleague from the HLA, and Gülsüm Özkaya from Children of the Earth Association in Türkiye. I’d now like to invite each speaker to briefly introduce themselves and say a few words about why this conversation matters to them. Over to you, Musaab.

Musaab: Thanks so much, Ka Man. Good morning and good afternoon, everyone. My name is Musaab Abdalhadi, and I work with Save the Children as a GCT specialist based in Sudan. I work closely with community-based organisations and mutual aid groups operating in conflict-affected and hard-to-reach urban areas. This conversation is important to me because communities on the front line of crisis are increasingly becoming data providers for humanitarian AI systems, but not decision makers in how those systems are designed, governed, or used.

Ali: Thank you so much for hosting us, and to the participants for joining. In the humanitarian and development sector, I started as national staff, then took international assignments. I was stationed in East Africa and Asia, focusing mainly on programme and operations management. From there, I moved to headquarters roles, where I covered policy, processes, and tools. I’ve spent the past years focusing mainly on redesigning and reimagining policies, governance, and humanitarian diplomacy, where I engage with impact investors, policymakers, and economists.

From my perspective, this conversation is extremely important. I could write a book about it. But in a simple way, I think AI tools and AI in general could be either the best or the worst thing that could ever happen to humanity and to what we do. And localising AI could take us to the best case scenario, and I think that’s one of the key things that we are trying to cover today. Of course, there are many things to cover under that umbrella, but it’s very important to focus on localisation when it comes to AI. Thank you.

Ka Man: Thank you, Ali. Gülsüm, over to you.

Gülsüm: Hi, everyone. Welcome to the session. I’m Gülsüm Özkaya, representing Children of the Earth Association, an Istanbul-based local organisation. I’m the board member responsible for communication, and thank you to the HLA for hosting us here.

Well, why this conversation matters to me: it’s actually my research topic. I’m working on AI-generated visuals in humanitarian communication, from the perspective of crisis-affected people. And I’m working directly on why it’s important to be AI-aware in local leadership. So, I hope it’s a meaningful discussion for everyone.

Ka Man: Thank you, Gülsüm. I really appreciate you taking the time to join us today. And over to you, Lucy, would you like to introduce yourself and tell us why this conversation matters to you?

Lucy: Hi everyone, lovely to be here, thanks for having me. My name is Lucy, I am the Research and MEAL Lead at the Humanitarian Leadership Academy. This topic is really important. I’ve been researching digital tools and transformation, and what that means for locally-led humanitarian action for a number of years.

And I really believe that AI has huge potential to transform how the humanitarian system operates, and how organisations can transform and become a lot stronger than we already are. It’s an amazing opportunity, but there are a lot of risks involved, so I’m looking forward to exploring all of those themes in this conversation today. Thank you, Ka Man.

Ka Man: Thank you very much, Lucy, and thank you to our incredible panellists.

I’m just going to spend the next five minutes or so setting the scene for the conversation, and explaining why the HLA is hosting it today. In May–June 2025, the HLA, in partnership with Data Friendly Space, conducted the world’s first study into how humanitarians are actually using AI in practice.

And we were actually astounded by the engagement with this survey: we had 2,500 responses from 144 different countries. What that showed us is that generative AI, such as ChatGPT and Microsoft Copilot, has really driven individual experimentation and creative applications of AI across the sector. So AI use is not clustered in particular areas, but is being embedded across the sector at large, although unevenly.

From this survey, we wrote a report documenting the patterns in detail, and we included some use cases from Ukraine, Afghanistan, and Lebanon, led by local leaders, which showed this creative and resourceful application of AI. You can read those in more detail, and I think Lucy will be able to drop the link in the chat if you want to have a look at that information.

And then in January, as a follow-up to that, we conducted a light-touch pulse survey to see if there had been any shifts in AI adoption. What we found, looking at the local level, is that local organisations continue to have very strong interest and engagement in AI, again very much driven by generative AI tools, although there is growing interest in how to scale these efforts and integrate them across operations and organisations.

We can see that adoption patterns are generally similar between local organisations and other organisations as a whole, although a couple of differences we’ve seen so far are that local actors are using AI tools daily at a slightly higher frequency than INGOs, and that there’s a lower presence of formal AI policies. So, I would say that local efforts are very much driven by adaptation, creative application, and problem solving. It may not be as formalised as in the INGO sector, but that’s led to some really interesting and promising use cases as well.

So, from the follow-up survey, I interviewed one participant recently, Dr. Ivan Toga from Uganda, and we’ll be releasing this interview next week, so please keep an eye out on HLA channels for this. And Dr. Ivan Toga speaks very enthusiastically about the potential and harnessing the power of AI in specific contexts and use cases, such as family reunification, and helping with satellite imagery, etc. So he’s got very strong views on this, including the need for localisation, for the sectors to come together, for donors to understand the context in which he’s operating. So he’s speaking to me from Rhino Refugee Camp in Uganda.

Another quote was from a survey participant in Cameroon. So this response was actually in French, so I’ve just translated here. And basically, this characterises this absolute drive and desire to try and harness the potential of AI, even in low connectivity settings. So you can see the thematic link to what Dr. Ivan said in his statements.

And then a middle manager working in education in the Philippines talks about how AI is not just a tech thing anymore, that it’s more widely embedded than that. And they’re very excited about the potential of AI freeing up humanitarians’ time from tasks to actually get more time working with communities. So that is their aspiration.

And then finally, this leader from Nigeria speaks very clearly about access gaps: again, there is so much potential, especially to amplify youth and grassroots actors, but they highlight the digital divide that needs to be bridged, and the particular marginalised groups that need to be brought into the conversation and into development.

So, I just brought that in as a bit of scene setting to explain the context and rationale for this conversation today. And I’m now going to hand you over to Lucy, who’s going to speak to the concept of AI readiness and localisation. Over to you, Lucy.

Lucy: Thanks, Ka Man. I think this is a really interesting point, because those quotes really ground the experience in lived realities, because AI can feel very alien, very technology-led, and a lot of the participants that we’ve spoken to have talked about how distant it can feel, and I think it’s a really important challenge to acknowledge.

Especially when I think about that quote from Cameroon, where we talk about change, and the pace of change, and how hard it can be when we don’t always have basic infrastructure such as internet access.

And that’s why a lot of our research has concluded that AI isn’t just about developing technology and tools. It’s about a whole host of behaviours and foundations that need to be in place to really enable a locally-led humanitarian world of the future, and of today, because AI is here, and it’s probably not going anywhere anytime soon.

So over the last six months, we’ve been discussing what it means to become AI-ready. A lot of that centres on different elements of digital transformation. It means we need to understand what AI is and how it is used in humanitarian action. Research helps with that massively. But again, in order to be locally led, it has to be local research that leads the way. It can’t be dictated by the Global North or technology companies. It has to come from communities that are working with the tools.

All of these different elements are all about being locally led, convening, bringing people together, learning what the challenges are from one another, learning what the opportunities are, and sharing knowledge, sharing skills, sharing experiences.

Ensuring that there’s good leadership, governance, and standards. Again, how can we make sure that AI is safe? That’s one of the key things our research has been finding over the last year. We are amazing as a humanitarian community when we consider the needs of the populations we work with; the safety of people is paramount and our number one priority. So having strong leadership to govern, use, and design AI, and ensuring that really good standards are in place.

Taking collective action and working together to drive change. AI is a transformation process in the humanitarian system, because a lot of people are using AI tools but, as Ka Man mentioned, policy and governance uptake is low. If we don’t work together, and we don’t advocate for locally-led AI, it won’t happen.

Innovation. Technology is always going to be a key part of AI usage. We are always going to continue to find ways of achieving better outcomes for the communities we work with, to make sure that they are safe, protected, and have access to opportunities in life.

And learning literacy, shared knowledge, and having that common language and understanding between one another: a super important part of being AI-ready.

This interconnected approach can really enable a transformative shift in how organisations connect to one another, how we become more locally led, and how we amplify expertise and leadership throughout the humanitarian community. Taking this approach will allow the transformation to take place in a way that is grounded in local experience, local leadership, and local realities.

And I think this is a real opportunity moment. We’ve talked in the HLA with other colleagues about being at a tipping point. And I think by adopting locally-led design principles for digital and AI transformation, we’ll begin to see a shift, hopefully in the right direction, towards a much more equitable humanitarian system as knowledge flows in different ways.

It’s very contextual. What we’re hoping to do through this conversation is to ground it in lived experiences from our wonderful panellists. So, with this in mind, I’d love to bring in Musaab to talk about his leadership in the AI space. Over to you, Musaab. I’m looking forward to hearing from you.

Musaab: Yeah, thanks, Lucy. From my perspective on local leadership in AI: when we talk about local leadership in humanitarian AI, we often focus on access to technology, but the real issue is power, not technology.

So, local actors already generate knowledge every day through informal networks, community assessment, and adaptive responses that international systems often struggle to capture. Yet AI tools are frequently built externally, trained on incomplete data sets, and deployed into contexts they do not fully understand. So, this creates three risks, I would say: AI reinforcing existing humanitarian power imbalances, local knowledge being extracted without ownership, and definitely decision-making moving farther away from affected communities.

So bridging the digital divide, therefore, is not only about connectivity or skills. It means shifting from local partnership to local authority, where communities help define problems, shape data sets, and influence how AI informs humanitarian decisions. So that’s local leadership in AI from my perspective. Over to you.

Lucy: Thanks so much, Musaab. I’m sorry, I was struggling with my technology there. I honestly couldn’t agree more, and I believe that AI poses a huge risk of being very extractive. It’s something that I feel very uncomfortable with, and I think calling it out early and making sure that we are creating much more equitable resources is the only way forward, really.

I’d now like to bring in yourself, Gülsüm, to hear about your leadership perspectives in this space.

Gülsüm: Well, actually, especially thinking about global and local actors: in the sector, we’re always talking about shifting power from global to local actors, but I think that being a global actor, or a global organisation, is no longer enough to adapt to today’s world. In our local organisation, YerChat, we fit into today’s world differently, I think. When I think about the reason, maybe it’s that we’re mostly made up of Gen Z. We are all young people.

So, this allows us to create impact differently than traditional organisations, whether it’s how we engage our donors or how we protect children in the media. Being digitally fluent and AI-aware is actually, I think, the main divide right now, rather than being global or local.

So a local organisation that masters the use of AI tools can actually access the opportunities and funding, and maybe create an impact as effectively as the global giants do. If AI is used correctly, it might be the ultimate bridge in the sector.

But my point here is actually the ‘correct’ part. I mean, when I say AI used correctly, whose ‘correct’ is this? Is it HLA’s ‘correct’, or yours? Everyone’s ‘correct’ might be different. So, at this point, my question was whose perspective must be included. And I think my answer was: the beneficiaries, the people affected by crisis.

So, that’s why my research focuses on AI-generated visuals in humanitarian communication from crisis-affected people’s perspectives. And when I look at those perspectives, there’s a significant gap: their perspectives and the humanitarian communicators’ perspectives are totally different on some points. For example, communication practitioners may think a practice is protective, but affected people might see it very differently.

So, I think we need a shared environment for creating AI standards in our sector. If we cannot do this, and AI policies and standards are developed under a global monopoly, they will probably fail in local contexts. Local leaders’ AI awareness is a key point here. Otherwise, it will probably lead to digital colonialism over beneficiaries and crisis-affected people. And that’s not a technical failure.

So, when we set AI ethics and standards, I think we need a table for everyone who has digital fluency. It’s not about whether they are global actors, global leaders, or local leaders, but about who is AI-aware, who has digital fluency. I think that will empower local leadership in this case.

Lucy: Thank you so much, Gülsüm. I think that’s such an interesting point. It’s not about global versus local; it’s about who is digitally confident and who is not, and who understands how AI works.

I’ve got so many questions in my head based on just those couple of statements alone, but it is now time for our panel discussion, where we’re looking forward to bringing Ali in. I will be building on some of those points raised by Musaab and Gülsüm, but Ali, I kind of first wanted to come to you.

Because I think the point raised here is the risk that if AI governance and design continue to sit in global spaces, we’re really going to risk reinforcing existing inequalities. That came through loud and clear from both Musaab and Gülsüm. So, in your experience and your view, Ali, what would inclusive AI reform look like? And who needs decision-making power, not just to be sat at the table and consulted?

Ali: I was actually taking notes throughout the conversation. But before I answer the question, let me say one thing: outside the humanitarian and development sector, things are a bit different than inside. When I look at social enterprise, AI diplomacy, and the design and development of models, the Global South is progressing. I want to acknowledge that India, Nigeria, Kenya, Lebanon, and Gulf countries including Saudi Arabia, the United Arab Emirates, and Qatar have made progress when it comes to AI: social enterprise, AI-native companies, AI diplomacy, investment in data centres and infrastructure, as well as designing AI-native models, because we now have large language models in Arabic, in Swahili, and a few others. So I want to acknowledge that outside the sector, the Global South is part of the design and the deployment. Now, inside the sector, it’s a bit of a different story.

And from where I’m sitting, and what I’m noticing in my conversations and in the work I’ve been doing at strategic and operational levels, we are repeating the same mistake we made with ERP systems and automation, and Power Apps and Power BI: designing at a global level and then rolling out. When we look at the challenges we faced there, they were mainly transformation and supply chain. I usually say that technology is not the problem; transformation is.

So, how do we deal with that? I think it’s very important, when looking at governance, to look at it through inclusive and intelligent governance. That means focusing on user-based design, thinking from the perspective of the end user; on access, and what I call AI privilege (who will access the tools, who will access the data); on skills, and here I’m not talking about capacity as training, I’m talking about capacity as infrastructure; and also on stakeholder engagement and AI transparency when it comes to algorithms. But put simply, to have that system in place, and to focus at the operational level, I think it all starts with experimentation.

Local organisations are already experimenting, and your survey shows that local organisations are progressing there. When I look at the local organisations and local leaders I’ve been collaborating with for the past four years, AI empowered them in different ways. Yes, of course, they had some challenges related to internet connection and access. But the moment they started learning how to leverage those tools was also the moment they started making progress. So, number one is the experimentation stage.

Number two, I think it’s very important to look at the infrastructure, because if we, as a sector, want to leverage this technology, we have to look at how we are protecting data. How are we working on the supply chain and deployment? How are we scaling those digital tools? Who is going to be part of the infrastructure and innovation design? Where are they located, and what access do they have to those tools, services, and the internet?

And then, building on the experimentation and the infrastructure, comes the ecosystem. It’s very important to look at this as a full ecosystem, especially when we’re looking at full governance. The ecosystem means the infrastructure we’ve built, the teams we have, the workflow, the roles and responsibilities, the data, and the tools we are using: how they are going to interact with each other, how those tools are going to affect roles, and how they are going to affect the relations between, let’s say, UN agencies, INGOs, NGOs, civil society organisations, and the community.

And from the ecosystem we come to partnership. I don’t think any organisation could do this alone; we need to invest in partnership. Here, I’m talking about humanitarian and development actors amongst ourselves, relations with the private sector and economists, connections with impact investors, and connections with policymakers, the state, and other actors. And of course, we should be open about the lessons, the failures, and the progress we are making in this arena. In general, when I zoom out to the macro level, it’s very important to acknowledge that local leaders are already using and leveraging these tools. We need to keep that innovation and that openness about experimenting, and see how we build the bridge between local actors and international actors, with, as I mentioned, user-based design.

Lucy: Amazing. Thank you so much, Ali. I think that’s our guiding star throughout all of this work: how do we keep the momentum going and not constrain local leaders in adopting AI? Because it is already happening; it’s probably been happening for years, though we don’t have the evidence to substantiate that.

And I think that’s a challenge a lot of people are grappling with, because, as you say, there feels like a bit of a lag, and the humanitarian sector is rolling out AI in a way that is traditional and has not always worked. So I think that’s a really pertinent point when we talk about transformation.

And just very briefly, I’d be quite interested in hearing your thoughts on this. Because there’s been a lot of talk about governance standards and policies. Is there a risk that that would actually then impede that uptake and that innovation in local organisations? Or is that something that you see would enhance usage of AI?

Ali: Very good question. I think it depends on the location, because I see some organisations based in Europe struggling a bit to navigate the rules and regulations and how to adapt to them, so it’s a somewhat limited arena for experimenting. Those in the US address it in a different way. Those in the Global South, in a different way again, and I have to admit that many of those leaders and organisations in the Global South have a bit more flexibility and room for experimenting. Let me give an example.

There was this local organisation in Nigeria. They have their own strategy, and they wanted to integrate AI within it. So we had several conversations, and we looked at what they wanted to achieve from a programmatic perspective. That was the foundation of the AI integration: how could we reach more people and support more communities? And from there, how could AI speed that process, scale it, or make it more efficient?

So after the discussions and the mapping, we found that the best way was to run several trainings for team leads, management, and staff on what not to do: the things that, from a data and ethical-use perspective, we shouldn’t do and should keep aside. And then everything else is open for experimenting and use.

And I spoke to that organisation when we did their periodic review after six months. It changed how they worked. It changed their workflow, their relations to each other, their relations with the donor and other organisations, their positioning, and other things. I want to say they moved fast and broke things: they took those tools, taking into account what not to do, and started experimenting. Of course, there are risks.

But I want to say: let’s not overestimate the risks and underestimate the opportunities. Local organisations in Nigeria, in Lebanon, in Syria, in Sudan, in Kenya, and in Rwanda are giving us very good examples of how to leverage and use these tools. At the end of the day, it’s about not standing in the way of local organisations and local leaders. They have a clear mindset of what they want to do, and they already have strong access; they are part of those communities, and suddenly those AI tools appeared as a resilience tool. It helped them to navigate that complexity in their engagement with donors and other organisations. So I think we have so much potential there. The most important thing is that we can raise awareness of the things we shouldn’t do, and the risks, and then give them space, not stand in their way.

Lucy: I couldn’t agree more with everything you’ve just said. I was going to bring in Gülsüm at this point, but actually, Musaab, I’d like to come to you, if that’s okay, because your work, which I know you’ve done in collaboration with Ali, links quite nicely to what was just being spoken about.

So, Musaab, I wondered if you could actually just talk us through your experience in not just learning about AI, but shaping AI in your work, and what your experience has been to date, and how you’re now using it, and how you kind of led the way in Sudan in particular.

Musaab: So, basically, the session that I did in collaboration with Ali and with the Humanitarian Leadership Academy brought together many local responders here in Sudan working with emergency response rooms, or CBOs, community-based organisations. For myself, first, it helped me to demystify AI. In many humanitarian spaces, AI is either over-hyped or feared. This session clarified what AI actually is – pattern recognition systems trained on data – and where its limits are. That matters when you are responsible for programming decisions; I work as a technical specialist, so all of this matters.

And secondly, it shifted how I think about power. AI is not neutral. It reflects the priorities of those who design and fund it. For someone working closely with local and mutual aid groups, that realisation is crucial. If local actors are not involved upstream, the tools may optimise for efficiency over dignity, or scale over context. So, in particular, it made me more knowledgeable about AI and its use, more intentional, and more aware of the governance gaps in my role. Yes.

Lucy: Amazing, and I think that’s really great to hear. It does highlight the governance gaps, and I guess the question… and again, this is… I’m being quite tricky with the three of you, because you’re such amazing panellists… what risks does that pose to your work, having that governance gap? Because when we think about governance and policy gaps, as humanitarians, I personally immediately think of standards and the principle of do no harm, because the risks are so high, and I don’t know if you’d have any reflections on that.

Musaab: So basically, in cash programming, I see three major risks. Cash systems require digital trails – mobile money reports, targeting databases, and, in some contexts, biometric registration – definitely in places like Sudan and other conflict-affected areas. That data can be extremely sensitive if accessed by someone who is not authorised, and it can put people at high risk. And the aggregation and analysis of these data increases exposure. So, data exposure is one of the risks of AI.

The second thing, I think, if AI models are trained in an incomplete way, they may systematically exclude certain groups: informal shelter, undocumented people, and minority communities. So, this causes bias in cash programming or cash targeting. So, bias in cash targeting is not just a technical flaw, it becomes a protection issue, definitely.

And the third thing is, if AI systems are developed or hosted by actors aligned with a specific government or with private interests, communities may perceive cash assistance as political influence – as if they have to be on that side just to receive the assistance. So, trust is central to neutrality. Once communities believe data may be shared or misused, participation drops and risk increases, definitely. So, basically, those are the major risks that I see in cash programming related to AI.

Lucy: Completely agree, and I think it’s something that a lot of work is being done to address, but as you say, it comes down to people, and how we use it, and how the data is managed. And I think that comes to your original point, that local leaders in humanitarian action aren’t just users of AI, we’re also producing AI and informing future iterations. So it can’t be one-sided.

And I could talk for hours on this with you, but I do want to bring in Gülsüm here, because I know that you’ve done an awful lot of research, and you’ve got a lot of experience with people that are using AI, you use AI, and how it interacts in your daily life and within your work.

And I think we’ve talked a lot about leadership, and that’s quite an abstract concept, potentially, and you mentioned things around, you know, Gen Z are leading the way in this, because they are digital natives. So, what I’d really be interested in hearing from you is, where do you see people stepping into leadership in these ways? And who is engaging consistently in that network? Is it purely young people, or is it a range of different voices?

Gülsüm: Well, actually, in the humanitarian sector – and not only in the humanitarian sector, by the way – the sense of leadership has really been tied to experience. But with the coming of AI, I think leadership is becoming based on expertise rather than experience. So, if someone from another generation, or a very young person, has this expertise in AI – not just using the AI tools, but also being able to criticise how they are used and the risks they carry – they can bring a new kind of leadership perspective, I think.

Because there are lots of points related to AI that have not been discussed yet. For example, in the media specifically – you mentioned the risk issue, Musaab – the most pressing risk we are facing is actually the use of AI visuals without permission, or their use in donation appeals by humanitarian practitioners, which directly affects the trust of individual donors.

So, from this perspective, we are mostly facing AI shadow work. If there is no leadership with AI awareness, AI shadow work might be our biggest risk, I think. So at this point, I can say that expertise in AI, as a humanitarian leader, is actually the most effective way to protect trust, both within partnerships and between individuals and NGOs.

Lucy: And I want to pick up on that point, because Musaab also mentioned trust, and Ali, I think you might have mentioned it as well. And what is it about trust that you think is important in AI and in relationships with people?

Gülsüm: Well, when I conducted my research, I carried out lots of interviews – with Syrian people in my previous research, and with Sudanese people, crisis-affected people, for my thesis – and the most common thing they brought to the interviews was this donation issue: humanitarian communicators using AI for donation appeals, which directly affects their trust in the organisation.

And I also realised something in my interviews: if a person is not engaging with AI in their daily life, they are mostly more hostile towards the use of AI in the humanitarian sector, and their trust in NGOs is more fragile when those NGOs use AI. But if these individuals use AI in their daily lives, in their work, and so on, they feel that, okay, it’s a common thing, it’s a normal thing, and it can also be used by NGOs. So, actually, I think the trust issue depends on people’s own relationship with AI.

Lucy: Yeah, I really agree, and we’ve seen that play out throughout our research journey as well, that there has to be organisational trust to use AI at work. That’s been a really interesting trend that we’ve not explored explicitly. I know the research team are keen to explore it, but the fact that you say it is about that individual relationship with AI, I think is a really…

Gülsüm: I think so. I think it was one of my outcomes when I just conducted my research, let’s say.

Lucy: And brilliant, and I think I’m really glad that you’ve been able to see that and articulate it so clearly. There’s so many questions that I want to ask all of you, but I’m conscious of time, and want to make sure that we have a chance to open the floor up to our audience. I’d love to hear from each of you individually now, and just ask, what are your aspirations for humanitarian AI in 2026? What do you think you would like to see, and what do you think might happen as we move through what is proving to be yet another very difficult year?

And I’ll open the floor to whoever takes that first.

Ali: Shall I go first?

Lucy: Great, yes.

Ali: Okay, so if you think about localisation as a garden – let’s imagine the localisation agenda is a garden – then technology is the rain. But without gardeners, without good soil, without clear boundaries, that rain is going to become a flood. What does that mean? It means that technology – AI, automation, etc. – comes with opportunities. But for us to get the best out of them, we must invest in innovation, infrastructure, ecosystems, and partnerships; be open about the challenges and lessons learned; and work together at all stages to keep it based on ‘we, the people’. That is the main thing in what we do in the humanitarian and development sector. So, build it based on people’s interests. That’s how I look at it.

Lucy: A wonderful image, and you’re absolutely right – it’s about everything around the rain, not the rain alone. Musaab, would you like to come in, if you’re still with us? I know you’re having bandwidth issues.

Musaab: Yes. I would like to comment. So, I would like to see AI designed with local actors at the table from the beginning, not as testers, not as data collectors, but as co-designers. If the systems are meant to serve crisis-affected communities, then those closest to the context should influence how problems are defined and how models are trained. So, basically that’s what I would like to see for AI from now to the future.

Lucy: Amazing. Thank you. And Gülsüm?

Gülsüm: Well, actually, I think standardisation will increase in general across all AI-related topics, and the use of AI will be incorporated into the cluster system as well. And as for standards, I think the Sphere standards and the other globally used standards will probably have the use of AI and related issues on the table during this year.

And I think there is a huge responsibility on us, because we need to be critical of every standard suggested in relation to AI. If we don’t want the standards to be under a monopoly – a giant entity deciding on them and simply handing them down to local organisations – then we have a greater responsibility both to be critical and to suggest new standards.

So maybe for each local organisation – I have suggested this in our own organisation as well. It’s a very local one, about 70 people. And even though we are very small scale, we are working on AI standards for the organisation’s policies, because if we want to continue using AI, we need them. We cannot just let everyone do whatever they like with AI. So, I think that for this year and maybe the coming year, whoever takes responsibility for AI will probably shape the future of AI in the sector.

Lucy: Yeah. It is a really pivotal moment. Fantastic. Like I say, there is so much more that I want to delve into with you, but I can see some questions coming in from the audience. So, Ka Man, I’m gonna hand over to you to start facilitating these questions. I’m looking forward to hearing from our panel still.

Ka Man: Hi, sorry, I had a technical issue coming off mute there. Thank you so much for such a thought-provoking discussion today. Thank you, Lucy, for facilitating that, and thank you to our panellists for sharing your candid insights. I really appreciate it. And thank you to our audience for your attentive listening and your great questions that you’ve posted in the Q&A, as well as the chat.

So, we have the next 20 minutes or so to put the questions to the panellists. And actually, I see a thematic grouping of the questions. There’s a lot of interest around accountability and governance, which really aligns with the gap in formal governance that we spoke to at the beginning of the session, and how local organisations, local actors are using their own creativity, drive, and ingenuity. But obviously, coordination and accountability is the next step.

So, I wanted to just put some questions that are for anyone to jump in and respond to. So, the first question links to this theme. Would there be a local accountability mechanism for AI-related harm? What does anyone think about that?

Ali: A quick note from my end. Please keep in mind that inclusive and intelligent governance is already part of the Sustainable Development Goals for 2030. All the Sustainable Development Goals have indicators related to inclusive and intelligent governance, and intelligent governance includes AI. This is number one.

There are also several countries rolling out AI rules and regulations, so there are mechanisms for looking into AI design, deployment, and a few other things, including the EU AI Act and the executive order in the US. And I am aware of several countries working on AI privilege rules and regulations, where they look at who has access to the data, as well as the principle of least privilege from an ecosystem perspective. This is number two.

And also keep in mind that it’s not fully clear to us as a society, because the development is accelerating and happening super fast. Just two years ago we were at the large language model stage, then we moved to reasoning, and now agentic AI. So we are not fully sure how it’s moving – the speed, the progress, what it will look like. I’m a bit worried that if we put in additional rules and regulations, it might cause damage or harm. This is point A.

Point B, please keep in mind that the examples we have within the humanitarian and development sectors are still at the large language model stage, with a bit of automation and a tiny bit of agentic AI. So, until now, the main issues for us have been data-related, as well as cyber attacks, and this is going to grow in the future. So my recommendation, or my suggestion, would be: instead of jumping into putting rules and regulations into our governance processes, let’s map the different scenarios, plan based on those scenarios, look at the integration of those different tools, and focus on the transformation. If we focus on the transformation element, we will reduce supply chain challenges and cyber security challenges, as well as reducing harm.

Ka Man: Thank you, Ali. Linking to this, I have a question that was received in French; in English, it asks: the use of AI can be costly, and there is often a lack of clear regulatory frameworks for its use. What’s your opinion on this, please?

So, Gülsüm, you spoke optimistically about sector mechanisms coming together, clusters and so on, and you talked about Sphere standards. Do you think that we can make progress in this space with regards to AI in the coming period? What’s your take on that?

Gülsüm: Well, I think so. I’m thinking based on Turkey, by the way, but I’m seeing a strong tendency right now for local organisation staff in Turkey to demand standards trainings, especially for Sphere. So that gives me a kind of hope, let’s say.

And the staff, the humanitarian workers, are not just accepting what was designed before them – okay, we have adopted this and we will do this. They are also critical, and want to be part of the steps involved. So, at this point, I think that as AI spreads further in humanitarian work, especially, let’s say, in the media, practitioners will criticise it more and demand a standard for it.

Because they will probably be criticising each other’s work as well. I recently saw an AI-generated visual used by an NGO for fundraising – for a crowdfunding campaign – and I asked them about it. They wondered why it should matter, and I gave my perception. So there is a common, shared environment for exchanging these ideas, and I think practitioners also push each other to use AI to standards, even if they are not named as standards – they push each other to use it in the way that has already been agreed. So that’s why I’m in a more optimistic position, let’s say.

Ka Man: Thank you, Gülsüm, I really appreciate that perspective. Ali, if I could bring you in very briefly. So, if we’re talking about almost collective action around common standards, if we’re looking at more coercive pressure, so to speak, from a governance perspective and regulation. Do you think, say, for example, is there anything around the EU AI Act, or is there anything that you think might be pertinent to highlight here?

Ali: I think the opportunities that we have here are the initiatives happening within different alliances at the global level when it comes to the implementation of their programmes or operations or projects, because there are already several initiatives, joint country programmes or joint operations. We have ACT Alliance, we have several networks and platforms that are bringing organisations together, so that brings many opportunities. This is one.

The EU AI Act is going to bring the element of governance to different organisations: international organisations that have an office in the EU must start adapting their processes and communicating their workflows, as well as what they are doing in finance, human resources, monitoring and evaluation, ERP systems, and other areas. In general, I think we have several opportunities coming. We could leverage those rules and regulations for speed and efficiency, and for reducing bureaucracy in the sector.

We could also take advantage of the alliances, joint country programmes, joint country operations. We could build on what’s happening within Sphere, the Inter-Agency Standing Committee guidelines, the NGO forums at country level, and all those different infrastructures that we already have in place. So, I would suggest, instead of investing in creating a new thing, let’s redesign and reimagine what we already have, so that we don’t reinvent the wheel, we don’t invest in something new, and go through the rollout and all that. We have several initiatives, they already have a certain level of trust and certain level of access, we could invest in them and leverage those different opportunities.

Ka Man: Thank you, Ali. Really appreciate your insights there. So next, I’d like to put a question to Musaab, if that’s okay, and it’s a question from Maria. So, she’s building on something that you talked about in this session. So, she asks, please, would you be able to give a specific and practical example of how AI use may cause bias in targeting in your context?

Musaab: Okay, good question, actually. So, imagine an AI model trained on historical beneficiary data. I will give you an example from Sudan, actually. So, imagine an AI model trained on historical beneficiary data to predict which households are most vulnerable and should receive cash assistance. So the model may choose indicators like registered displacement status, formal camp residence, household size, documented income loss, or mobile money transaction history. So, if you can follow me on this.

But in Sudan right now, many of the most vulnerable people are not formally registered in the system as displaced. Some are staying with host families. Others move frequently between neighbourhoods due to insecurity. Some women-headed households avoid registration because of protection risks. Informal workers may not have digital transaction histories.

So, if the model is trained mostly on formal camp data or structured registration datasets, it will learn patterns from those populations. It may systematically prioritise households that resemble previously registered beneficiaries and unintentionally exclude urban displaced people or marginalised groups who do not appear clearly in the data or in the registration system – whether because of their displacement status or the other factors I’ve just mentioned.

So, that is algorithmic bias, because the data reflects structural gaps. In conflict contexts like Sudan, exclusion from cash is not just an administrative issue: it can deepen vulnerability, create tension between communities, and undermine trust. So this is a practical example of what might happen when we use AI. That’s why human validation and local knowledge must remain central when AI is used in targeting – especially in cash programming, and I would say for any targeting. Yeah.

Ka Man: Thank you very much, Musaab, for sharing that tangible example. Linking to this, next week I’m going to be having a podcast conversation with regards to the role of blockchain with cash transfers and how this may fit into the broader humanitarian context and digital transformation, including AI. So, I’ll share the link with everybody, so keep an eye out on our channels for that, to get a bit more insight into how these pieces may connect.

So, I’d like to bring in a question from Dr. Ivan Toga, who was actually one of the interviewees who was featured at the start of this presentation. So, it’s linking to this sort of opacity of systems. So, Dr. Ivan asks, doesn’t building robust AI from a black box ignore its real impact on our local refugees and people needing mental health support? How do we address this gap?

Would anyone like to come in on that?

Ali: Maybe a quick thing from my side – that’s an excellent question. I think we have to build it the same way we build our programmes: from first principles, from community needs, and by doing the actual context analysis and all the other elements there. But I want to mention that recently I saw several very good examples where AI tools were supporting medical care, providing educational support, and a few other things in camps and remote areas. The way they were designed is a bit similar to the way we did that training in Sudan: we built it from the local language, from the context, from the community, from their needs, from how they work and interact together, from the cultural elements.

We built it from there, and then we started deploying and using those different tools. And to be totally transparent, I think if we don’t do that, we will not be able to achieve that digitalisation, or AI integration, or intelligent and inclusive governance. We have to keep in mind the people we are serving in those communities, and to do that without causing any harm, and to avoid any risks, we have to build it from there – from them, not for them.

Ka Man: Thank you, I really appreciate your perspective on that. So next I’ll come to a question from James, who’s actually my colleague. Thank you, James, for asking this question. He asks, is it realistic that the owners of leading AI models, such as OpenAI and Anthropic, could be held responsible for supporting locally-led AI, or will the sector as a collective be responsible for developing best practices for integration, or even their own models? Does anyone have a view on that?

Ali: I could just say that Claude AI, they have a non-profit element, and OpenAI, they also have a non-profit element. I was part of several conversations with some actors, including Microsoft at some point, exploring that side, and asking them to give that perspective. NetHope is playing a key role in building that bridge between the private sector and non-profit actors.

I’m a bit worried that at large scale we don’t have this, and it’s a bit challenging to hold them accountable, but there are small initiatives, and if we build on them, I think we might get some results. I want to mention again that there are several language models with similar capabilities in other countries and regions, and they are native to those regions. We have Groupa in Africa, we have Jais, we have Bharat in India, and other language models; they are open source, they are native in the language, and many people at policy level there, and in civil society, are engaging with them. But at the global level, OpenAI, Claude AI, etc. have small elements, but still not at that large scale.

Ka Man: Thank you, Ali. From my perspective, collaboration with these actors – lobbying for our collective interests as a humanitarian sector – is pivotal and ongoing. That has to happen, but we obviously can’t pin our hopes on that movement alone. A lot of people I speak to in this space advocate, as you say, for open systems where people can build from the ground up using reusable components. So there is a lot of interest in developing small language models – there’s been a lot of talk about this – which can be more secure, used offline, and trained in specific languages and contexts. I think that’s really exciting. From what I’m hearing, it’s very early days; I’ve not heard of any specific cases yet where people are deploying small language models, but personally I think it’s something humanitarians should pay particular attention to.

So, unfortunately, our session has to come to a close shortly. I’d like to thank our incredible panellists for their candid insights and perspectives today, which are invaluable, and I really do appreciate you taking the time to engage in this conversation and drive forward this discussion. I’d like to take the closing minutes to highlight the next sessions we have coming up as part of the HNPW programme.

So, we have three more sessions happening: one is online only, and the other two are hybrid – Geneva and online – so everyone can access them. They’re taking place on the 5th, 10th, and 12th of March. Please do sign up if you’re able to, and I will share the links in the post-event email.

So, thank you again for taking the time to join the session. We appreciate it. Thank you to H2H, thank you to the organisers of HNPW, and I’m wishing you all a good rest of day. Thank you.

If you have any questions, please contact info@humanitarian.academy

7 Questions for 7 Humanitarian Leaders

Nominate a humanitarian leader you would love to hear from!

A graphic with a vintage microphone shaped like the number 7, text reads “7 Questions for Humanitarian Leaders,” Humanitarian Leadership Academy logo, and colorful speech bubbles over a faded image of two people talking.

More than 1.2 billion young people aged 15–24 live worldwide, making up 16% of the global population (United Nations). According to United Nations Volunteers, 33% of youth globally are engaged in volunteering through humanitarian and community action.

Across the world, young people are vital contributors to community response. They are active in civil society and often provide both formal and informal humanitarian support – especially during times of crisis. Yet their leadership, insight, and impact are too often overlooked.

At a time when, more than ever before, the future of humanitarian action sits in a quandary – due to limited financial resources, among many other challenges – the HLA is choosing to amplify the voices of the future: those who are rising above these challenges and already acting now.

In line with the Humanitarian Leadership Academy’s commitment to supporting local leadership and connecting humanitarian actors, we are launching a new podcast series to spotlight young humanitarian leaders and the work they are leading in their communities.

7 Questions for 7 Humanitarian Leaders will feature seven thoughtful, in-depth conversations with seven guests nominated by you – members of the HLA global community. Together, we’ll explore their journeys as humanitarian responders – their motivations, challenges, inspirations, aspirations, and the realities of leading change from the frontlines.

Get involved!

This series has a dual purpose: to strengthen collective learning across the sector, and to recognise the contributions of young humanitarian actors whose work often goes unseen. By sharing their stories, we aim to increase visibility, appreciation, and access to opportunities that can positively shape the future of their work and the communities they serve.

We believe that inspiration leads to action and that motivated people inspire others in turn.

Is there a humanitarian leader you would love to hear from? Someone whose work has inspired you, or whose journey you’ve always wanted to learn more about? Perhaps there’s a question you’ve never had the chance to ask.

Nominate them via this form and help us amplify the voices of young leaders shaping humanitarian action.

As former UN Secretary-General Kofi Annan said: “You are never too young to lead, and never too old to learn.”

Let’s learn from young leaders, together.

For questions, contact info@humanitarian.academy or F.Okomo@savethechildren.org.uk

To AI or not to AI: a humanitarian comms conversation

Questioning visuals in humanitarian communications and fundraising in light of localisation and AI.

The use of images is crucial to the way we communicate, especially in the humanitarian sector, where an image is truly “worth a thousand words” – as well as many emotions, conversations, and a store of historical information.

In this episode, Deborah Adesina (Debby), Doctoral Scholar at the University of Liverpool, and David Girling, Associate Professor at the University of East Anglia, UK – both co-leads of the Charity Advertising Research Series – hold a light-hearted yet thought-provoking conversation on the use of generative AI images as an option for humanitarian campaigns.

You can listen to this episode on Apple Podcasts, Amazon Music, Buzzsprout, Spotify and more.

Nwabundo Okoh, Comms and Marketing Lead at the HLA, approaches the conversation through the lens of David and Debby’s in-depth research pieces and follow-up articles on the analysis of UK charity visual communications in direct mail campaigns and the analysis of charity advertising supporting international causes in UK national newspapers, asking how – or if – generative AI images might be considered now or in the future.

Debby, quoting Susan Sontag, notes that “the problem is not that people remember through photographs, but that they remember only the photographs”.

Listen now to hear David and Debby’s fresh perspectives on findings from their research: why knowing your ‘why’ is so important, how humanitarians can consider navigating the use of AI for images, what to be aware of, and more.

Keywords: Localisation, Ethical storytelling, International development, Poverty, Fundraising, Humanitarian communication, Photography, AI, Education, Co-creation, Authenticity

Helpful resources mentioned in this episode and more

Charity Advertising Research series

All assets: Charity Advertising | A critical analysis of UK charity advertising

Report – Charity Representations of Distant Others: An analysis of charity advertising supporting international causes in UK national newspapers

Report – Charity Representations of Distant Others: An analysis of UK charity visual communications in direct mail campaigns

Article: Africa overrepresented in aid charities’ direct mail campaigns, research finds

Article: Charity representations of distant ‘others’ in direct mailouts: time for a rethink? | Bond UK

David’s blog: Social media for Development

Related article that piqued our interest by Arsenii Alenichev in The New Humanitarian: AI visuals: A problem, a solution, or more of the same?

About the Speakers

Deborah (Debby) Adesina asks the questions that make development practitioners uncomfortable, and that’s exactly why her work matters. She is a leading voice at the intersection of Development Communications, Ethical Storytelling, and the burgeoning role of Generative AI in the humanitarian and development sector.

A Commonwealth Scholar with a Master’s in Media and International Development from the University of East Anglia, Deborah’s expertise blends rigorous academic analysis with practical sector insight, including her time with Tearfund’s Global Fundraising and Communications Team.

She co-leads the Charity Advertising Research Series, a body of work that has been influential within the sector, with findings that are shaping practice and informing debates across news media, INGO boardrooms and international conferences.

Currently an AHRC-funded doctoral scholar at the University of Liverpool, Deborah is investigating Celebrity Humanitarianism in Nigeria, centring perspectives often marginalized in a conversation that has long privileged Global North ideologies.

David Girling is an Associate Professor in the School of Global Development (DEV) at the University of East Anglia, UK. His research focuses on three main areas: social media for development, humanitarian communication and ethical storytelling. He is particularly interested in how imagery is used in development communications and led on a research study of visual communication in six African countries. His latest research project, Charity Representations of Distant Others in collaboration with Deborah Adesina, involves analysis of charity images in national UK newspapers and charity direct mailouts. He has recently co-authored a chapter with Sarah Horton “WaterAid: Representing Development through Art and Developing Artists through Representation” in the Routledge Handbook of Arts and Global Development.

David is a Chartered Marketer with over 25 years’ marketing and communications experience in the public and non-profit sectors. He has been actively involved on a number of committees and judging panels, including The Chartered Institute of Marketing Charity Group, The NGO International Film Festival, the HEIST Awards for Marketing Excellence and the Rusty/Golden Radiator Awards for online videos promoting best practice in development communication. His interests are multidisciplinary, but he has particular expertise in strategic marketing, communications, PR, branding, digital, social media and ethical storytelling.

He teaches media and global development in the School of Global Development, University of East Anglia.

Episode Transcript

Please note: this transcript is generated using AI

Welcome to this episode of Fresh Humanitarian Perspectives. My name is Nwando and I’m delighted to host this episode, a light-hearted yet serious conversation about how we are communicating as humanitarians and comms professionals in this AI era.

Fittingly, I’m joined by two highly experienced professionals, David Girling and Debby Adesina, who conducted the research. We’re really reacting to the news article you wrote and the research that you and David conducted on images in direct mail campaigns from charities. In this episode we’ll be looking at how and if AI images might be a tool organisations can consider now or in the future. And we’ve called this episode To AI or Not to AI: A comms conversation.

David
Okay, first, hi everybody. My name is David Girling. I’m an associate professor of Media and Global Development in the School of Global Development at the University of East Anglia. I consider myself a pracademic. I’ve been a practitioner for many, many years, working in communications, marketing, PR, media, and I’ve been teaching in the School of Global Development now for about 13 years. So that’s sort of a mixture of theory and practice.

And I still produce newsletters, I still help out with social media, I still help with films, etc., and I’m really, really interested in ethical representation and storytelling: how images are used, how storytelling is used, how film is used, and so on. So that’s a bit about me. Debby, do you want to introduce yourself?

Debby
Oh yeah, hi. Hi everyone. My name is Debby, Deborah Adesina, but yeah, I go by Debby Wonders. Okay, so my journey into ethical storytelling, or, you know, representations of global poverty, was solidified, or I would say maybe has its origin, in my Master’s in Media and International Development at the University of East Anglia, where I got to meet David and was introduced to media theories and media in practice.

And I also got the amazing opportunity to join David on the charity advertising project. And then that led to joining Tearfund, the international development organisation, and just seeing the practical outworking of these tensions between what we say about policies and guidelines and then practices, you know.

So that was an exciting intervention, so to speak, an exciting experience of seeing ethical storytelling in practice. And I’m also looking forward to how strands of this will unravel in my doctoral research on Nigerian celebrity humanitarianism at the University of Liverpool.

And this is because I think oftentimes we binarize ethical storytelling as white presentation of black or brown bodies, whereas there’s a whole critique of privilege and poverty that applies to every single one of us. Black, brown, white, blue. Yeah. So looking forward to talking, tugging on those threads.

Nwando
Your PhD research sounds very exciting. Sorry, can you say what the topic is again?

Debby
So I will be looking at celebrity humanitarianism, but from the perspective of the Global South: how Nigerian celebrities do humanitarian work, the performances of it, the politics, the processes, the impact of it, and, you know, just sort of contributing to the de-Westernisation of celebrity studies, as it were.

Nwando
Very interesting. Yeah. A part of your research is mentioned in the recommendations: perhaps something people can consider is flipping the story and telling it from the side of celebrities. Your research will be really interesting. Before we even go into the research, I’m really interested in how you and David went about the entire process of what you did, and what your thinking is now, especially with regards to the topic of AI and localisation.

I wanted to ask this question about comms today. David, especially with you, where you’ve been teaching for 13 years, which means by now you’d be a teenager if you were a baby when you started. Your perspective on this, I think, would be really interesting to hear: from your experience, or from what you’ve seen, how has visual storytelling really fared with the evolution of tech and how things have changed? How would you speak about comms today, especially looking at our side, humanitarian, and then obviously the private sector?

David

So before I actually started in the School of Global Development, I was in charge of digital marketing, I suppose you could call it, across the university. And I was very, very interested in social media and how it could be used to support students in that particular context. And we had a great intern that did some absolutely wonderful things with the university, changing the way, really, that we told stories about the university. And so when I moved over to the academic side, I started a blog called Social Media for Development, which still exists, but I haven’t blogged for about two years.

In fact, I think Deborah helped me with that last blog post. So social media really has changed and I don’t think it’s the panacea of communications. In fact, sometimes I think that there’s too much emphasis on social media. There’s a whole range of communications that charities and NGOs still use, from anything from, an annual report to direct mail, to websites, posters, TV adverts, billboards.

There’s lots and lots of different things. When I was in Nairobi last week, was it? I can’t remember now. Last week, I think. I was in the Sarit Centre, and I came out of the Carrefour supermarket, and there was SOS Children’s Villages, and they had a banner and they were fundraising. And I found it really, really interesting the kind of imagery they used. And I was, oh, have you got any leaflets I can take away? And they said, oh, no, we don’t have leaflets. And so I think it’s evolved by using social media. It will evolve even further with AI, and there are some criticisms of AI in fundraising and development communications at the moment.

And it will just continue to evolve. But I think people shouldn’t forget the traditional forms of media: PR, the importance of PR, the importance of print. People actually still like print. They like to read print. But the images. There’s this constant debate on, you know, should you use negative, positive, post-humanitarian, or no image at all. And I think that debate has really, really changed since COVID, since the Black Lives Matter movement, since the decolonization movement.

And I think things have really, really improved in how organisations think about the impact that their images have on representation.

Nwando
I think your response is a good lead-on to the next question, but I really liked what you said about how we shouldn’t forget about other forms of media. And I 100% agree with you. Sometimes I wonder if that perspective is too old school or too traditional. But hearing you say it gives me a bit of comfort, because honestly, I feel like people are so focused on socials, which is great, but there are other forms that we mustn’t ignore, because if we do, there’s a whole generation of people that we’d be excluding.

But yeah, thank you so much for that reflection and what you said. And Debby, David ended with visual imagery, which is really what this whole conversation is centred around. So, in your experience so far, how has visual imagery really impacted your storytelling? Or would you say it’s always important to include? What’s your perspective?

Debby
Yeah, I think one of the things we found, David and myself, from analyzing over a thousand images used by 32 charities, is that actually, in the development sector, the humanitarian space, a picture really is worth a thousand words and more. So that still holds true, although with generative AI there are going to be some challenges to that notion. So, yeah, visual storytelling is important.

Visual imagery is important in storytelling. Pictures are important. And it reminds me of one of the quotes by Sontag. She says the problem is not that people remember through photographs, but that they remember only the photographs.

And that puts a really different perspective on the kind of images we then choose. Because, And there’s another beautiful, really famous quote by Chimamanda Adichie about, showing people over and over as one thing, you know, and the danger of the single story, or in our case, the danger of the single image, you know.

So, yeah, visual imagery is important. It’s important for so many reasons. It’s important to the work that we do. It’s important for its truth value. It’s important for painting pictures of possibilities. It’s important to document that this, campaign, for instance, actually happens.

But beyond all of that, it’s also having the awareness that these images, are contributing to the public register of global poverty. So it’s contributing to people’s understanding, it’s sort of the educational value of that.

You know, it’s teaching the public what global poverty looks like: the colour of global poverty, the texture, the geographic location of global poverty. So, yeah, in that sense, pictures are really important. As a matter of fact, they probably carry much more importance than we have given them thought for over the years.

So that’s what research like ours does. It brings back the perspective that this is not just about pictures, it’s not just about fundraising, it’s not even just as simple as telling a story. There’s a whole gamut of considerations to be made behind every single image. Yeah.

Nwando
Wow. One thing I appreciate is the fact that your passion speaks through as you speak. David shared an interesting story with me that corroborates, or aligns with, what you said about contributing to the public register of whatever topic it is, whether it’s global poverty, whether it’s, you know, what riches look like. But David, can I come to you, please, with your perspectives on that same question?

David
Yeah. And I mean, I totally agree with what Debby has said and I found it really interesting. I did a lecture, earlier this week and there were people from all over the world in that lecture.

You know, Vietnam, Japan, Nigeria, Kenya, where I’d just come back from, the UK, Pakistan, etc. And I was showing different examples. This was about social media for development. I was showing different examples from different NGOs and large charities. And I played one which was something positive. And a few of the students were really laughing, and I said, why? Why are you laughing? And they were just, oh, it’s okay, it’s okay. And I said, well, no, come on, share with us. And they said it: why is it always Africa? It’s always Africa. And this particular film example I was showing them was filmed in Africa. And that’s what came out as well in our research.

So we analyzed, as Debby says, 1,000 images from newspaper adverts in the UK, and also direct mail. So I donated a small amount of money to 10 different charities to find out what they would send me through the mail. And Debby and I analyzed those images as well as the newspaper images. 56% of the images in the newspapers were of Africa, and 51% of the images in the direct mail were of Africa.

So those students laughing, they were right. Why is it that all the depictions of need, of poverty, of, well, yeah, people in need, why do the majority of those come from Africa?

And we’ve done a number of presentations to large organizations, Debby and myself, and one of the things we keep trying to say is “please, please have more diversity in the kind of images that you use. Poverty doesn’t just happen in Africa”. But there was good news as well. One of the things we did was analyze our data set using a methodology that had been used by the fantastic academic Nandita Dogra.

And her book is now some 15-plus years old. And so by measuring our data against her data, we saw that there have been some improvements. So, children. Children are always used; there’s always infantilization and feminization in this kind of imagery. But the number of images using children had dropped from 42% in Nandita’s research to 21% in ours, a 50% reduction in the use of children. And children and women together reduced from 72% to 50%. But that’s still 50% of those images that are of women and children.

Where are the whole family units? Where is the diversity? Where are the grandparents? In another piece of research I did, participants said it’s actually our grandparents who’ve looked after us; why aren’t they ever in these photos? So we just need more. Things are improving, definitely, but we need to have more diversity in the images that are produced by development organisations, NGOs and charities.

Nwando
Yeah, I agree. I agree with both of you, and that’s why I really appreciate the fact that you’ve taken time to do this research, to analyze what we’re seeing and to really document it, because if things don’t get documented, sometimes they just get missed, even though we see them every day.

What you were saying about diversifying images kind of leads me on to what we want to talk about, in addition to everything we’re talking about, which is: do you think that the use of AI can create an option for diversification of images? Do you think that could be an option? Do you think it’s a mindset? What are your thoughts around the use of AI, really, in a nutshell?

Debby

David, do you want to go first or do you want me to? Okay, well, yes: just a couple of months ago I was at a conference, a workshop, on artificial intelligence.

A really, really brilliant coming together of academics, of researchers, of practitioners, of photographers with experience working in the Global North and the Global South, of, you know, humanitarian health organizations.

And it was, I think, three days. A really, really brilliant set of conversations. Thanks to David for sending me in his place, really.

The whole conversation was, you know, looking at photos in the age of generative AI, and there’s a whole lot. To be honest, I’m now stuck on where to start from, because these issues are just flying at me from all directions.

So yes, to your question: does generative AI offer a sort of way out of some of the ethical quandaries that humanitarian organisations find themselves in? Yes it does, or yes it seems to, or yes there’s the promise that it can. But there are also pitfalls, and there’s a lot that we need to be aware of. So it’s possible, in the case of diversity of cast and characters, you know, the inclusion of a wider range of people, of the parents, of the men, of whole family units, which, as we saw, were lacking from our research.

It’s easy for this to be generated, you know, for this problem to be “solved”, in quotes, by the use of generative AI. We could just sit down and prompt away and come up with a clean-looking campaign. The question, however, is: why do we tell stories in the humanitarian sector? Why do we use pictures? Because the why will always determine the how; it will always determine the placement of artificial intelligence in our workflow as communications experts. Otherwise we would be on the slippery slope of doing whatever can be done simply because it can be done.

So because we can generate family units, because we can prompt AI to generate the grandparents, the uncles, the aunties, should we do that? Bear in mind that our research was able to point out a lot of these critiques because it was based on actual images.

And there’s the danger when we start generating. How can we then tell what development now looks like “on the ground”, in quotes? How can we tell what the actual representation of participants or beneficiaries is in the programs that the charities run, in the interventions? So there’s that danger where we could start generating, we could start making up. And it starts with images, but at some point will we also generate statistics? Will we also generate data? Will we also generate quotes? Where does the generation stop? Where do we put the mark? And what makes one form of generation right? What’s the ethical or the moral consideration behind it? What’s the argument for generating images and not generating a quote from an absent voice? You know, so there are all of those ethical considerations, and all of those questions to be asked. And they are not easy questions. They are not black or white questions. They are questions that require organisations to sit down and, you know, ask: why do we tell stories in the first place? Why do we take pictures? And only when we know why can we know how to, or how not to. So I’m going to take a pause there so I can catch a breath, but there’s a lot I still want to say regarding that.

Nwando
Okay, sorry, David, before you come in: this question of actual representation and the longevity of these images. Take it away.

David
Yeah. I mean, it’s interesting what Debby was saying there about, you know, the quotes, because in all honesty, I just hadn’t even thought about that. And so that sort of takes AI generation to a new level. There are so many problems with AI. One of the things that I’ve been looking into recently, and there’s actually a film about this that I haven’t been able to watch, but it’s called Humans in the Loop, and it’s about the whole use of data workers, often, once again, in countries where they’re using cheap labour.

So Nairobi, where I’ve just come from. There are quite a lot of case studies there of people literally getting paid very, very small amounts of money to label images, whether that’s on social media platforms with sort of verification.

But it’s also all feeding into this AI system. So there are so many different ethical problems. Because at the end of the day, these images, if you go into Midjourney or Runway or DALL·E or any of these sorts of AI image platforms, they’re taking images that already exist.

So that’s plagiarism, isn’t it? You’re actually stealing somebody’s image and you’re not crediting them. So there are loads and loads of different aspects there. But then there’s also, you know, the fact that AI is improving.

It will improve every single day, every minute, probably. But at the moment there are racial biases, there are gender biases, there’s a whole issue of job displacement. For a start, I met with a couple of photographers while I was in Nairobi that I’ve worked with over the years. They were saying that there are several reasons for this, but, you know, they’ve had hardly any work over the last year. Now, they were putting that down to funding cuts, like the USAID funding cuts and things like that.

But I was saying, do you think AI has actually affected your work, the kind of jobs that you do? And they went, no. And I said, really? And they were like, no. How would that happen? I said, well, INGOs using AI. And I said there are case studies: Amnesty International have been criticized for using an AI-generated image of a protest in Colombia. And there are another couple of case studies. And they went, ‘really? Are they really?’ And they were quite shocked. And they were saying, how can you use AI imagery? That’s so unethical. And I said, well, it’s happening. So, you know, it’s really, really sad for so many different reasons. But I think one of the problems, to me, is these AI sweatshops that not many people are talking about and are unaware of. Then you’ve got the whole ethics of the amount of electricity that it uses, etc.

But then most importantly, when you’re talking about representations, it’s the biases that currently exist within those platforms.

Nwando
Okay, okay. So what I’ve heard so far is... no.

David
Well, I’m going to counter that with another argument, and Debby can continue if you want. We’re talking about the big organisations that can still afford the local photographers. And that’s one thing that’s really shifted, and something that I think is fantastic: the large charities and NGOs are increasingly using local photographers wherever they work. They know the local context, they know the local language, they know the culture, etc.

So that’s great, that’s a massive, massive, improvement in the sector. But what about the really small NGOs that exist? The, the community based organisations that literally can never ever afford to use a professional photographer.

Maybe they’re not even using a proper camera, they’re just shooting it on a, on a cheap smartphone. They haven’t got that knowledge and they’re taking photographs. They haven’t got the education. They don’t understand as much about representation.

There’s a really interesting academic paper, and I can’t remember the author’s name, from Ghana, which explored ethical representation and imagery in Ghana. And basically the answer was that very few people follow any ethical guidelines, very few people have any training, and therefore the images they’re producing within Ghana, yes, they’re from local people, but they’re still really, really problematic. So for those smaller NGOs, is AI a solution? I posit that question to Debby.

Nwando
I want to interrupt before Debby goes in. Don’t forget your question, Debby, by the way. But as you respond, and I’m going to come back to you as well, David, as you respond, both of you: what about the people in those images?

I mean, you’re talking about lovely images, beautiful, you know, and hopefully diverse. But the models, what happens to them? Well, not models, but the people that are in those images. And the reason I ask this question is because many years ago, well, not many, many, a colleague wanted to use AI images on a banner, and shout out to Austen, my former colleague. And he argued for it. And I said something similar to what I think now, which is: we have lovely images, why use AI? But anyway, he said, what about the people? So Debby, I leave that on your conscience.

Debby
Okay. So yeah, it gets complicated: the more you dig in, the messier it gets. And that’s just the thing with AI, with generative AI, with technology anyway, as it were, but especially with AI, and much more so because the speed at which things are changing is alarming. So, and this is just an aside anyway, that is why just developing policies or guardrails won’t suffice. Because there will always be new use cases, there will always be, you know, new advancements, that your policy doesn’t necessarily account for.

So, yeah, having said that, back to the question: AI for smaller NGOs, and the people in the picture. And before I come to that, I really wanted to speak to the point David was mentioning, because at that conference organised by Dr. Arsenii and Dr. Sonia regarding the use of artificial images, Amnesty International was in the room. No, MSF rather. MSF was there. And we had that conversation about large organisations using generative AI.

And there was also the question of carbon, decarbon... what’s the word? Decarbonisation. I don’t know. Reduction in carbon emissions. You know, which is one of the things that led to, or one of the arguments behind, hiring local photographers in the first place.

You know, so rather than having somebody from the Global North fly into the Global South, burning huge amounts of fuel and oil and all of that pollution, just hire somebody locally. That being said, I find that the climate argument sort of balances itself; it goes both ways. So while we’re saving on fuel, on energy as it were, on one side of the argument, using AI does not necessarily prove to be the most sustainable use of Earth’s resources either.

That’s on one side. Again, I have digressed, because there are a lot of thoughts in my head. David talked about the use of local photographers, and I worry that with generative AI, all the ground we’ve gained in terms of localisation, in terms of shifting power, will be lost. Not that I see that we gained so much ground anyway, but we need to appreciate that there was progress being made in terms of hiring local photographers. But at that conference I was at in Geneva, the photographers in the room had to say that the briefs were still a barrier.

So yes, we were hiring local photographers, but we were sending briefs from the Global North to the Global South. And now, in place of that, with generative AI, the briefs have become the prompts. So now, who gets to decide what an image looks like?

Who gets to decide what is being generated? So, we talked about co-creation in previous spaces. So now, are we going to talk about co-creation of prompts? Are we going to get people with lived experience to actually dictate what an image should look like, would look like? You know, so there’s all that conversation around power, and around power relocating to the Global North, or to large organisations, or to the organisations using AI-generated imagery.

And so, AI for smaller organisations: I wouldn’t say yes or no, David. I don’t know, because it’s not that straightforward either. But I do think it holds promise, because it’s accurate to say that some organisations do not have the budget or the time, or the luxury of either, for hiring local photographers, for working with local photographers, or even global photographers as it were. The consideration then would be the truth value and the trust value. So for local organisations using artificially generated images: would they pass these off as real images taken by real photographers? Or would they be willing to label them?

Would they be willing to disclose this? Would there be that transparency, that accountability? Now, there’s been some research pushing back against NGOs using AI, and I think that’s because it’s still early days. I think several years down the line the resistance will grow less and less. There will be backlashes, but I think that with time the donor publics will become more accepting of it. But before we get there, what we don’t want to do is be in too much of a rush. There’s a lot of talk about adoption.

There are a lot of organizations adopting AI rapidly. But we cannot force the donor publics if they are not ready for that. If we do, what the sector would lose is trust, the erosion of trust. And that’s a really costly currency for humanitarian organizations.

And then to the question about the people in the picture. So, one of the considerations, one of the controversies or challenges with taking images of real people, has been the issue of consent. How long does consent last? And then, how can these images be repurposed into other images or other formats? How do you reuse these images? And then, what about the procedures for recall? So, where someone is no longer interested in having their image in the public domain, how do you recall that?

Well, you might be able to recall the post; you might be able to retrieve, you know, what has gone out. But what about what is already stored in the memories of audiences, global audiences, that have already received these images? How do you deal with recall?

So in that sense, it seems that generative AI offers a way out after all. We can argue these are not real people, that this person doesn’t exist anywhere. You know, but again, there’s no easy way out. There’s no easy way out.

That also has, that also has its challenges. I think I’ve spoken quite a lot, so I’m going to take a breath now.

Nwando
Yeah, that’s fine. I’ve thoroughly enjoyed your reflections, and I like the fact that we’re not necessarily seeking a solution. We’re just thinking through all of the different likely solutions that exist and really talking through the conundrum as comms people who are now dealing with this change and trying to navigate it in a way that we’re not harming anyone. David, would you like to comment on ‘what about the people?’

David
Yeah, yeah. I mean, I’m going to quote Debby here, because I think that’s funny and I’m going to use it in lectures: ‘the briefs have become the prompts’. But it also made me think, while Debby was talking, about somebody who used to work for a very, very large charity, I won’t name them.

And they were in a motorway service station in the UK and they went to use the washroom. And on the back of the toilet door was actually a poster from a charity. And they laughed to themselves and thought, I wonder, when the people in that poster consented to the use of their image, did they imagine themselves on the back of a toilet door in a service station on the outskirts of London?

And it’s always made me smile, because I’m thinking, no, they did not. They had no understanding. And however many ethical guidelines you write, however much training you use, do the people who are having their pictures taken really, really fully understand how their images are going to be used?

And is there a process in place to go back and show them how their images have been used? So to that extent, AI is a solution. And I read something on the BBC the other day, and at the bottom it said this article has been — I can’t remember exactly how it was phrased — rewritten using AI. And I thought, well okay, at least they’re being open and honest. The thing is, I always ask the question — because I teach in the school of global development but also the business school, and it’s interesting having those conversations with different people — is it better to use a positive image, a negative image, no image at all, or, if you add that in now, an AI image? Everyone’s going to have a different perspective on what is ethical.

Because to some people, representation is going to be the number one goal. To others, fundraising is the number one goal, and there is no right or wrong answer. The people that are desperately in need would probably say that fundraising is the goal, whereas if you speak to elites in different countries, they’re saying it’s representation — the diaspora, representation; the academics, representation. What about the people really, really struggling to survive — literally to eat, to have access to clean water, etc.? They might have a different perspective. They might say it doesn’t matter, I just want the most money.


Debby
Absolutely, absolutely. And just to jump on that, David — I give quite a lot of thought to ethical storytelling, or ethics as it were, and how ethics takes a different shape or a different, you know, texture across cultures, across space, across time.

And what we consider ethical today might not necessarily be so tomorrow. And if we try to judge the ethics of the past with the eye of the now, we would be horrified at some of the decisions that were taken. So yes, there’s the need for that humility, that understanding that humanity is flawed — we’re imperfect humans making the most and the best of an imperfect situation.

And it’s that awareness that we haven’t got this right. We probably will not get this right on this side of eternity. But we can try to do the very best that we can, while we can, by centring people — people over technology, people over principles. So yeah, those are some of the ethical conundrums. And there’s also the challenge of organisations in the Global South, because when we talk about ethical storytelling, there’s the tendency to narrow it down to Global North organisations — humanitarian organisations situated in the Global North.

But does ethics also apply to organisations in the Global South? Does it also apply to local organisations? Does it also apply to my local homelessness charity here in Eastbourne? Should they be worried? Should they be asking the same questions — otherwise, are we being pretentious about it? And that brings me back to David’s thought that ethics takes a different shape or taste depending on who you ask.

David
Can I just add something, because you reminded me. I just looked up, while Debby was speaking, the academic paper that I was talking about in Ghana. It’s by Mahmoud, and it was published in 2024. As I said, it looks at 22 NGOs and their storytelling, their ethics, and this whole ‘pornography of poverty’ debate.

So that’s a really, really interesting paper, and it’s great, as Debby said, that these discussions are now going on in the Global South as well as the Global North, because it’s important — because there have been shifts in fundraising, as I saw in Nairobi with people from SOS raising money there. So these issues need to be discussed globally, not just in the UK and Europe.

Nwando
Absolutely, I agree with you. And the good thing is HLA’s listeners are quite a broad sphere so hopefully we’ll get some reactions to how people are thinking about this from all areas of the world.

What would you say, if you were to give steps, people should be doing? And this is not necessarily a recommendation because, as I said before, it’s light-hearted. It’s not like we’re saying, oh, you must do this. No, it’s just, in your own experience — what would you say people should be thinking about, or be cautious of, if they are going to employ AI for visual storytelling as humanitarians?

David
Okay, do you want me to go first, Debby, or do you want to?

Debby
Okay, let me go first. Let me take a quick swipe at that. You know, when you ask a new question, Nwando, my mind goes back first — it’s just like a rubber band, it pulls back first to what we’ve discussed before, then it snaps forward. So, with regards to your question about whether this is a yes or a no — of course, we know it is neither, or it is both.

There’s also the issue of gaining efficiencies with regards to the amount of data being demanded of humanitarian organisations. So on one hand there are shrinking budgets, as we’ve seen. There’s also the pressure to keep up on social media.

How many posts, how many more posts do we have to make these days now than before? How many more? What’s the lifespan of a story today? What’s the lifespan of an image? How often do we, you know, need to tell new stories in new formats to new audiences on new platforms?

You know, and how quickly can we go on data-gathering trips, on, you know, verification trips and all of that? So there’s that demand side of storytelling, which is driven by changes in media.

We’re all being mediatised one way or the other. So there’s that gaining of efficiency. AI in that sense sort of provides an escape route, or it gives promise in terms of scale, in terms of output, in terms of speed. That does not necessarily make it right or wrong.

But then, coming to your question: if a humanitarian organisation, a charity, someone working in the third sector decides to go the AI way — which to some extent we all will — I think there are just a few considerations that pop up for me.

And first of all, it’s to really decide as an organisation what is most important to us. What’s our mission? That must be up and central. What are our values as an organisation? Because how we tell stories will change over time.

There will be new tools. Let’s not even think that artificial intelligence will be the apex of technological advancements. Once upon a time we were debating whether to use smartphones or analogue cameras. Once upon a time we were debating whether to go natural or to use makeup, to augment — those sorts of things.

So there will always be constant advancements. Humanity will continue to push forward — in tech, in tools, in the medium. So when we know the why, then we can figure out the how. So yes, that’s one of the very first things for me: really defining, really understanding, really centring what the values of the organisation are. Before you dive in: what are we really about? What is our vision?

What is our mission? And where does artificial intelligence — where does artificially generated imagery — sit within our workflow? And it’s also important to note that the change should not be built around a person or a trend or even a new technology.

What often is the case is that we have one person in the team driving or championing the cause of adoption. One person saying, let’s adopt this. And that often happens with a lot of issues like diversity, equity, inclusion, decolonisation — that is such a mouthful. There’s always one person saying, oh, let’s get on board, let’s do this. But change cannot be sustainably built around a person. It has to be organisational culture. It has to be embedded in the culture of the organisation.

The enemy here is the time it takes to do such solid groundwork. Things are moving at breakneck speed. We don’t have the time to sit down and ask deep, tough questions. And this is where you could get external help — you could leverage people like David and myself.

This is not free marketing, I promise you. We really want this stuff. So yeah, this is where you could leverage external help to facilitate deep, quick sessions. You could get the relevant stakeholders in a room and get this conversation started — get a blueprint, get a strategy in place.

Because it’s not just about developing clear policies. If you take a quick Google — I mean, if you search on Google — you will find a lot of AI policies, ethics of AI usage, guardrails. That’s great, but it often gets forgotten in use. What gets remembered is culture. So before a list of dos and don’ts, a clear sense of why is better. So yeah, those are some of the things I would say — rather than, all of a sudden, tomorrow we’re back in the office and the entire comms team has been replaced by an ensemble of AI agents.

Rather than making broad sweeping changes or general adoption, it’s good to ask: where can we fit AI into the flow? The technology should always be solving a problem. It should not be leading the way. We should not be adopting for the sake of adoption.

Where does it genuinely help? What are the efficiencies to be gained? How can we test, how can we track, and how can we iterate? And finally, I think the last point I would make is that while adopting the use of artificially generated images, we must display an acute awareness as a sector.

When we started to put forth images for fundraising, in the earlier days, little did we know that we were creating lesson notes, creating textbooks — we were writing the textbooks that AI would be trained upon. We didn’t know then; there was not that awareness, that foresight, that thinking that the images we were putting forth had any significance beyond just raising money.

But now we know better. Dr produced some really brilliant work about the biases in artificial intelligence and how AI has taken our bad lesson notes, run with them, and is multiplying that. So when we are adopting generative AI, we must display that higher level of awareness of our role as educators.

Even with the use of generative AI, even with generated images, we are still educating the general public on what poverty looks like; we’re still educating the general public on people, on places. So we must display that awareness. Of course, we don’t know, ten years down the line — even tomorrow — what the repercussions of some of these might be.

But we can just make the best with what we know — with imperfect systems and imperfect people — and forgive ourselves and move on. So yeah, those are a few things I would say: not steps per se, but more like questions to be thinking about.

Nwando

Yeah, absolutely. Useful. Thank you so much. David, what about you?

David
Well, as Debby has said so much — I think that was probably about a 17-minute answer — I’ll try and get mine down to 17 seconds. I would say be transparent if you’re going to use it, because that’s important for trust. Make sure you’re authentic. And as Debby said, you can have as many guidelines as you want in the world, but how do you implement them? And how do you monitor them? You need to implement guidelines in order to be successful. So there you go. Was that 17 seconds? (laughs)

Nwando
I think those suggestions are so apt, because I feel like, as people in general... You know, when you mentioned smartphones — there was a time when people were scared to use social media. I remember, in my roles, I was literally begging senior management to please join Twitter and Facebook.

They were like, no — their privacy. I was like, trust me, nobody’s going to scam you. Because, I feel like, social media came just after, you know, the scamming of ‘oh, I’m a Nigerian prince, I did not come to you by chance’. I feel like Facebook started just after that whole thing of email.

You know, we were still trying to get over email, and then we got Facebook and Twitter and all. And I remember there was so much restraint — especially because then I was working in Nigeria, so people were like, I don’t want to mess about with this. And now we’ve got AI, so things will keep on changing, like you said. And of all the points that you raised, I really like the one about being transparent. I really, really like that. And of course, know your why — because that leads to authenticity. If you don’t even know why you’re doing what you’re doing, how are you going to be authentic about it?

So you guys are just superstars. Thank you. But lastly — well, lastly-ish — I wanted to ask the question about prompts, because it was mentioned earlier in the conversation, and it links to briefs as well.

And this also links to previous questions like, what should we be thinking about? And if people are deciding to do all of this, I know that their why would help them — hopefully help them to do no harm. However, there’s a lot of — well, I won’t say lack of information, but a lot of people don’t really know.

Everyone is, like, experimenting. So what should people be thinking about when it comes to prompts? And this question comes from the angle of localisation, which you mentioned earlier, and really thinking about telling people where poverty, or where riches, or where, you know, different topics live.

So yeah, sorry if that’s not clear and feel free to ask me what, what are you actually asking?

David
I think it is clear. I would say I loved what Debby said earlier about co-creation. I think if that is possible, then that should really, really, really be explored to ensure localisation, context, etc. And then I think: don’t be lazy with your prompts. You have to be really detailed. You can’t just type in your first instance what you want; you have to keep adding to it. And if it’s not right, you have to change it again and change it again.

But as to who needs to be represented — I hope that, you know, the charity advertising website that Debby and I have set up has loads and loads of images as examples. It has a lot of data analysis.

And I would say that the main findings are about the diversity of characters. Stop always using Africa as an example. But equally, if you’re working in Africa, then you don’t want to include images of India.

Somebody actually wrote to me and asked me that question — said, but I only work in Kenya, what can I do? And I went, you need to make images from Kenya. Just make sure that you try and include whole family units. But yeah, lastly — because I hadn’t really thought about this — in those prompts, try and make sure that you include those kinds of details: you know, what is missing from the imagery at the moment.

It’s whole family units. It’s fathers with their children, it’s grandparents with their children, it’s grandparents alone, etc., doing different things. So what’s missing? What could actually be a much more accurate representation of what you’re trying to portray?

Nwando
There was something you shared with me earlier, David, about the evolution of prompts and how you’ve been taking time to educate, which I think is a very valuable thing to do.

Just like you said now, if you see something missing when you do a prompt, keep iterating — keep saying no, add this, add that. And then with time, hopefully, we all get to the point where AI is making images that we can use, if we decide to use them. Exactly. Yeah. And I feel like we can pause this conversation here — but is there anything else that we feel we’re missing out on?

Debby
Yeah, just to — because I like this question about prompts, and I’m all of a sudden thinking I’m going to be in David’s DMs afterwards — we should actually run a course on how to prompt better, how to prompt effectively, with all of these ideas of localisation, of shifting power, of access, of doing no harm, of our roles as educators, as charities — but yes, educating the public. So I think it’s something to be considered. And the point I wanted to make lastly is that, like David said, don’t be lazy. We’re going to get to a point where AI keeps improving, the images being produced keep improving, but it’s always a matter of garbage in, garbage out.

So if you don’t have the background knowledge of what a good picture looks like, or what a good picture should be like, or what an effective picture should be like, or what a picture should not be like — if you don’t have all of that knowledge, you know, of the critiques we’ve had over the years — then it’s really easy to do a lazy job, a really sleazy job.

So I think one of my recommendations would be to go and dig into the charity advertising website. Read both reports, dig into that, look at what’s been done so far, look at the kinds of images, look at the loopholes, look at the gaps — and then, within your context, within your own reality, within the work that you do, your operational reality, be creative. And yes, absolutely, go for transparency and authenticity.

Nwando
Thank you so much. And that research will be linked on the podcast page when we publish, as well as other resources — I found David’s blog as well. So all of these resources will be linked for our listeners to engage with. On this note, I want to say a massive thank you to David and Debby. Thank you so much for sharing your wealth of experience with us on this episode. I honestly, personally, genuinely appreciate everything that you shared.

I think it should be very useful not just for comms professionals, to be honest, but for everyone who’s thinking about navigating the use of AI in the humanitarian sector. So thank you again for your time. We look forward to that course.

Debby
Yes, absolutely. We’ll send you a discount code. Honestly — 999 only. Only if you sign up now. If you sign up now.

Nwando
And consultancies — I think after the course, doing consultancies, one-on-ones with organisations who are really trying to figure it out, because this is not the easiest thing to figure out.

Debby
No, it’s not. Yeah. So you need help.

Nwando
Yep. We’ll reach out to you guys. Thank you so much, Debby and David, and thank you for joining us on this episode.

Thank you for listening, sharing and engaging with the conversation. We’d love to hear your reflections! Do email us at info@humanitarian.academy

This episode is produced by Nwabundo Okoh, Comms and Marketing Lead, HLA
