
Opinion | Should we use AI-generated imagery in humanitarian communications? Spotlight on research by Gülsüm Özkaya

How can we integrate the use of AI into humanitarian communications – with ethics and care at the centre?

This is the question that a young humanitarian leader from Türkiye is tackling through her work and research. Gülsüm Özkaya from Children of Earth Association, a local organisation in Istanbul, is exploring the use of AI-generated imagery in fundraising and awareness campaigns, including community-centred research in Syria.

The HLA’s Ka Man Parkinson caught up with Gülsüm to learn how the August 2025 humanitarian AI report, produced with Data Friendly Space, has helped to shape and inform her research, the key insights to emerge so far, and the future direction of her work.

[Image: A woman wearing a light blue hijab and black blazer speaks into a microphone, standing in front of a banner with blue text and a child’s illustration on it.]
Ultimately, I hope my research supports a shift toward more participatory, grounded, and inclusive communication practices – where AI is used in ways that genuinely reflect the perspectives and dignity of the people whose stories we tell.
Gülsüm Özkaya

Please tell our readers about yourself and your work

My name is Gülsüm Özkaya, born in 1999 in Istanbul. I am a humanitarian communication practitioner and researcher currently pursuing an MA in Strategic Communication Management. I have more than three years of experience in the humanitarian sector. I coordinate the team responsible for strategic communication planning and the design of fundraising and awareness campaigns at Children of Earth Association, a local NGO that provides psychosocial support and education for children affected by crises.

Through collaborations with humanitarian actors from different regions, I aim to contribute to more ethical, inclusive, and collective communication practices. My academic work focuses on the integration of AI into humanitarian communication, particularly around ethics, representation, and audience perception.

What prompted your interest in AI in humanitarian work, and how does it feature in your current role?

My interest began during my master’s courses, where we explored AI and communication ethics. At my organisation, we frequently discussed the ethics of using real photos of children, so AI-generated visuals initially seemed like a promising alternative. But as I looked deeper, I realised that the challenges in humanitarian communication extend far beyond imagery.

AI could help address some of these challenges, but without clear policies, it may also fail to respond to the risks we encounter in our daily work. Although my organisation has strong humanitarian standards, we do not yet have an AI policy. That gap became very clear to me, and I am currently working on developing our organisation’s first AI policy to ensure that the rights and dignity of our beneficiaries are protected through consistent and principled use of these tools.

How did you hear about our humanitarian AI research with Data Friendly Space, and what interested you most about it?

I had already been following HLA’s work for some time, so when a colleague shared the survey with me, the questions immediately caught my attention because they were very similar to the ones I had been exploring in my own work. At the time, I was studying how humanitarian communicators in Türkiye use different AI tools and for what purposes, gathering data from six local NGOs. This is why I was genuinely excited about the HLA/DFS research.

Your study reached a much broader audience than I could access on my own and presented the findings in a structured and accessible way. It was inspiring to see such comprehensive sector-wide insights. It also showed me that a humanitarian communicator in another context might be using an AI tool or approach that I had never even considered. For me, the report didn’t just validate my research interests – it expanded them.

Tell us about your own research in this area and a key insight. How did the HLA/DFS research support your work?

My research examines the ethical, emotional, and representational impact of AI-generated visuals in humanitarian communication. Building on earlier qualitative work I conducted at Galatasaray University, I looked at how Turkish civil society organisations adopt AI tools such as ChatGPT and Midjourney for communication tasks.

Through semi-structured interviews with Syrian social media users affected by conflict, I found a spectrum of reactions: some participants appreciated AI visuals for protecting privacy, while others felt the imagery reduced the seriousness of their lived experiences. The key question that emerged was whether AI protects dignity or risks abstraction and dehumanisation.

My main argument is that affected communities must be included in discussions around ethical visual standards. The HLA/DFS research provided an essential broader context, especially regarding sector-wide concerns around governance, risk, and the need for policy frameworks.

You recently presented at the IHSA conference in Istanbul – congratulations! Tell us about the experience and the response your research received.

It was a wonderful experience, with a very engaged audience of humanitarian scholars and practitioners. I received many questions during and after the presentation, especially about the relationship I observed between interviewees’ familiarity with AI and their attitudes toward AI-generated crisis imagery.

For example, participants who use AI more frequently seemed more open to AI-generated images of crisis-affected areas. This sparked lively discussion. Although my sample size was small, it showed the potential value of expanding the study for my master’s thesis. I plan to include not only migrants in Türkiye but also participants still living in Syria.

I also hope to explore similar questions in the context of current crises, such as Gaza, where AI-generated imagery is being produced and circulated at unprecedented scale.

What are your future aspirations with your research, and how do you hope it will support your work?

Looking ahead, my aspiration is to contribute not only as a practitioner but also to the policy-making dimension of humanitarian communication. I believe that humanitarian policies should be shaped not only by donors, INGOs, or global actors, but also by the people who experience crises directly and the local humanitarian workers who are closest to them.

Their insights, concerns, and lived experiences should play a central role in how we design ethical communication standards – especially as AI tools become more deeply embedded in the sector.

For this reason, my future work aims to strengthen the presence of crisis-affected communities in both research and policy conversations. I want to deepen the academic dimension of my study and expand the diversity of the communities I engage with, because the most valuable knowledge in humanitarian communication comes from those whose lives are shaped by crises.

Ultimately, I hope my research supports a shift toward more participatory, grounded, and inclusive communication practices – where AI is used in ways that genuinely reflect the perspectives and dignity of the people whose stories we tell.

As someone working for a local NGO, how important is it for local organisations and youth leaders to be involved in humanitarian AI research and development?

I believe the importance of localisation is increasingly recognised in the sector. Younger generations of humanitarians are more proactive and no longer wait for INGO-led solutions. AI has entered our work so rapidly that the real divide is no longer between local and international actors, but between those who are digitally fluent and those who are not.

I hope to see younger, tech-aware teams across all areas of the humanitarian ecosystem – not just communication, but also programme design, finance, and monitoring. Local organisations should start identifying their long-term operational challenges and considering which of these could be addressed responsibly through AI. The goal is not to simply adopt AI, but to do so in a way that strengthens dignity, equity, and accountability.

Thank you Gülsüm for sharing your inspiring research journey!

[Image: A young woman wearing a maroon hijab and a beige vest smiles while standing in front of an olive grove, with green hills and a clear sky in the background.]
Younger generations of humanitarians are more proactive and no longer wait for INGO-led solutions. AI has entered our work so rapidly that the real divide is no longer between local and international actors, but between those who are digitally fluent and those who are not.
Gülsüm Özkaya

About Gülsüm Özkaya

An Istanbul-based young professional, practitioner, and researcher in the field of humanitarian communication, Gülsüm Özkaya is currently pursuing an MA in Strategic Communication Management. With over three years of professional humanitarian experience, she works as a coordinator responsible for strategic communication planning and the design of fundraising and awareness campaigns. Her research explores the integration of artificial intelligence into humanitarian communication practices, focusing on ethical implications, representation, and audience perception.

You may also be interested in

Article | How are humanitarians using artificial intelligence in 2025? Reflections on a six-month research journey. In this personal reflection piece, Ka Man Parkinson charts a six-month research and learning journey into the adoption of artificial intelligence (AI) across the humanitarian sector. 

Podcast | To AI or not to AI: a humanitarian comms conversation. Deborah Adesina (Debby) from the University of Liverpool and David Girling from the University of East Anglia, UK, discuss the use of generative AI images as an option for humanitarian campaigns.
