6th May 2026
What happens when a humanitarian communications specialist with 15 years of field experience starts using AI – and then turns that learning into advocacy for AI literacy and opportunities?
In this interview, we hear from Nour Arab from Lebanon, on her journey from early AI experimentation to building tools for her own community, and why she believes AI literacy is the most urgent need facing the sector today.
Nour was one of 1,729 individuals from 120+ countries and territories who participated in the Humanitarian Leadership Academy and Data Friendly Space’s Humanitarian AI January 2026 pulse survey – an ongoing effort to track how humanitarians are using AI in their work. She has since engaged with ongoing HLA initiatives including attendance at humanitarian AI webinars and LinkedIn discussions.
With a background spanning field operations, camp management, community engagement, project management, and communications, Nour brings a breadth of perspective to the question of how AI can – and should – be used in humanitarian contexts. She is also a NORCAP member and an advisory board member with the Community Engagement Forum, where she advocates for community-centred approaches to AI.
In her own words, drawn from a follow-up interview conducted by research co-lead Ka Man Parkinson, Nour explains how she is currently using AI, what she has built with it – including a fundraising website for her home city of Tripoli, Lebanon – and what she believes the global humanitarian community needs to do next.
Introducing Nour
I’ve been a humanitarian for 15 years. I started as a field worker in Lebanon, and the work evolved across different positions here and there with UN agencies, NGOs, and governments around the world. I’m also a NORCAP member. My last assignment was with NRC as a global communications consultant for the Community Engagement Forum, which helps practitioners work on community engagement and get evidence-based findings from the field to improve that work with communities.
I’ve jumped across different roles, and I really felt like I had this unique expertise – starting from the field, knowing three different languages, doing project management, training, communications. And then AI came, and it started to eat into some of the things I do every day. And this is where I realised that there’s something that needs to be done. I need to start learning it and see how to turn that fear into something that’s productive – to build this AI literacy that can help me do the work, but differently, since the sector is all changing. To stay relevant, I need to jump on that very fast train that we’re all on.
Starting small, thinking big: Nour’s journey into AI
It started with simple steps – summarise this report for me – and every day I started using it more and more for different tasks, until I started to become comfortable to speak to it and explore different tools depending on the needs that I have for my current problem. I have this problem, how can I fix it? I ask questions in the chat, and then I start to get a few different options to see how to solve this.
It’s been such a nice journey that was actually worth sharing, so I started to become more vocal about it on LinkedIn – to advocate for AI use that is grounded in field experiences, because I’ve seen a lot of resistance also among humanitarians for the very right reasons: no, we don’t want to be using AI, it’s unethical, it’s scary, there’s a lot of bias in it, there’s a lot of control, the things that we cannot control.
In my own opinion, this fear is keeping them away from the conversation, and keeping them away from making the AI better. If the people with ethics, with field experience, with deep accountability to communities don’t show up with their judgement and their data, these systems will be shaped without us. The most ethical thing we can do right now isn’t to refuse. It’s to participate, carefully and visibly, so the tools that get built carry our fingerprints.
On language and AI tools
While I speak three languages, I use AI tools in English, and of course the quality in English is much better. I haven’t used French much in the prompting. I can see that Claude is not the best in Arabic. I am also interested in trying the same prompt using Gemini, ChatGPT, Claude, DeepSeek. Believe it or not, DeepSeek is the best in Arabic, which is quite interesting. I’m an Arabic native speaker, but maybe I’m not someone who can write super formal Arabic, so I returned to DeepSeek to help me with that. Even the dialect is on point. When I try to use ChatGPT’s speaking feature in Arabic, it comes back with a different accent – Jordanian rather than Lebanese – and it doesn’t pick up all the dialect well. But in DeepSeek, it’s much, much better.
Beyond the chatbot: building contextualised humanitarian tools
Currently I’m developing an agent to help me with my own writing and how I want to evolve in the sector. Not just a knowledge base or a brand, but rather something that challenges me every day – brainstorming with me. I’m using it as a thought partner. And it’s been super interesting. I know that AI agrees a lot with people, but we can add some skills to it so it can challenge us a lot.
I’m also exploring turning resource libraries and knowledge bases into something digital and much more user-friendly. Instead of Excel sheets, I’m working on HTML platforms that are very easy to navigate. But I’m also aware that the whole search experience is changing – people are going to Claude or ChatGPT to ask questions instead of Google. People now have a much shorter focus. So, as a humanitarian, I shouldn’t offer them a website with 200 different tools to check, but rather meet their needs differently. Creating a resource library, but also having a chatbot that you can speak to: “I have this problem, which tool can help me solve it?” And then it gives you a list of tools, and you add some context, and it helps you contextualise the tool – from being a standard tool that works everywhere to something that works in the field.
Nour’s humanitarian AI use case: building for community – the Tripoli Emergency Fund website
Because of AI, and thanks to AI, I’m able to work more for my city that I love. There was a very unfortunate event that happened two months ago. Two buildings collapsed – it’s not related to war, but related to many, many different things. I was called by the municipality and the order of engineers, along with other people, to work on an initiative that we call the Tripoli Emergency Fund, and its role is to build a project that is funded by the people to support the reinforcement of 800 buildings at risk of collapsing.
So AI did not just give me time to attend those meetings and be part of these conversations, but also it gave me the tools. I built a website for this initiative using Lovable. The prototype really took around 10 minutes. And then we had something that we could go live with, but of course there’s lots of work that needs to be done – the brand, what you want to say to people, the payments, the payment policy, the governance framework. But without AI, I wouldn’t be able to create this website in the six weeks it took at the end. It would have taken a lot more time.
In Lebanon, we’ve had lots of challenges with transparency in the last five years – trusting the government, trusting civil society initiatives. So we wanted to go all transparent with the people and tell them where the money is going, how much money we collected, how much money was spent on the reinforcement, who is working on what. We got endorsements from the mayor and the order of engineers. We added names of individuals and institutions that donated. And I was also able to translate it in a few prompts from Arabic to English, which was amazing. Now every feature that I edit can be edited in both languages. Before, I used to do it all manually everywhere, and maybe I would forget something. Now everything just works.
The platform still has lots of bugs, so I’m not saying the system is perfect. We had to test it many times. I even tested the payment method more than 20 times – sometimes it works, sometimes it doesn’t. So a human needs to monitor this on a regular basis to make sure that things are flowing well. I always recommend that whenever people want to use AI, not to just create the tool and disappear, but also make budget for maintenance.
What made me really happy was the participation of different civil society members. One of the local NGOs we had in our city made this amazing presentation using NotebookLM to illustrate the situation of the buildings – who owns, who’s renting in the building, what are the shops available, where are the illegal constructions. When you see things visually, they make a lot more sense and you get a whole different perspective on how you’d like to move forward. Just the fact that local NGOs are starting to think this way is amazing, really. And it’s super promising for the future of AI in the humanitarian field.
The initiative was added as a best practice by MedCities, a global platform that works with lots of cities around the world on urban planning. It’s incredible what AI can do for local people without needing much funding.
The most urgent need: AI literacy that is grounded and tailored
For me, it really starts with AI literacy. We’ve been speaking a lot about small language models, building small language models for people – but not really thinking about what people can do on such a small scale to solve their very small problems in a more efficient way, in a cheaper way.
I understand there’s a huge need for governance frameworks and policy work around how to use AI safely. But nobody’s waiting. People are building anyway, and people are using LLMs that we may not like. They’re building their memory there. Very few are comfortable migrating their history from one LLM to another that is safer, especially when you’re reaching lots of people everywhere.
And we’re at risk of repeating past mistakes in a much bigger way. AI massively scales our ability to collect community feedback – voice notes, surveys, multilingual transcripts. We could be drowning in input within a year. But are we ready for feedback at a thousand times the volume if we didn’t listen the first time? More data isn’t the same as more listening. If the willingness to act on what we hear isn’t there, we’re going to generate huge archives of unheard voices and call it accountability. That’s worse than silence.
There’s also the question of how we, as humanitarians, add content to the LLMs in a way that is synthesised without keeping away some community groups, or gender, etc. There’s lots of work that needs to be done on advocating for what content comes first, and who should be prioritised in the whole response.
The problem is not building one big solution for everyone. We had this mistake with digital transformation – we spent millions of dollars trying to create digital products that nobody used, because they were all made in San Francisco and London, away from the people that they actually should serve. This is a golden time for us to realise that the power is no longer on the upper end. Maybe only AI literacy can solve so many problems for local organisations working with communities.
The AI literacy I’d like to see is tailored. First, to simplify the mundane tasks that everybody, all organisations, need to do to get the job done – but also to free up time to work on the vision and strategy of these local organisations, and how they would like to move forward. It should also build their research skills. Research has changed with AI: they can now test different angles in a few clicks. That was almost impossible before – it took a lot of time and a lot of resources.
Looking ahead and a closing message
In the next 12 months, I’d like to see more people using AI to save time on routine work and focus more on the vision and strategy of their organisation. For me, I’d like to explore working on AI agents for local organisations – starting with a case study of how local organisations in my city are using AI, and how AI can help them improve their response. I also want to push a lot more on what communities want out of AI, both in my role as an advisory board member with the Community Engagement Forum and independently. I want to meet people who have solved specific problems using AI, and see how this can be replicated in other contexts as well.
I’d like to see donors understand that people will not keep on producing things if not supported financially. Local NGOs are doing a lot more with AI now, but they still need funding to hire the right people, to give enough time for innovation. Innovation needs funding.
It’s a very exciting time we’re living in. It’s loaded with big emotions of fear, frustration, joy, and the opportunities that come with AI. One thing we can do to manage that overwhelming feeling is to start adopting AI every day with simple steps. Just delegate a few tasks and learn on the go. There are so many interesting courses out there, but applying them to your own context takes practical, hands-on work.
The solution is not always creating something big that solves big problems. It’s building with the right people. The problem has never been a tool – but rather solving the issue in a specific way, with the right people. AI literacy for those who are on the ground is key to solving problems in a more grounded way, and in a cheaper way as well.
Thank you to Nour for sharing her work and perspectives and for her contributions to the Humanitarian AI January 2026 pulse survey conducted by the Humanitarian Leadership Academy and Data Friendly Space. This work builds on the 2025 foundational study and the supporting resources including reports, podcasts and webinars available on the research landing page.
You may also be interested in
Humanitarian AI interview with Abdullah Azimi from Afghanistan
Read the article
Humanitarian AI interview with Ivan Toga from Uganda
Read the article/listen to the interview
Bridging digital divides: centring local leadership in humanitarian AI development – an online session as part of Humanitarian Networks and Partnerships Weeks (HNPW).
This session in March 2026 explored how AI is rapidly shaping humanitarian work while local actors remain largely excluded from how these technologies are designed and governed, risking deeper inequalities. The panel discusses how AI can become a driver of localisation itself by embedding inclusion, ethics, and collaboration into humanitarian systems.
Watch the recording
Humanitarian AI podcast series: Global and African perspectives
A six-episode podcast series exploring the findings of our landmark 2025 AI in humanitarian action report, featuring global and African expert perspectives on ethics, governance, localisation, and implementation barriers.
Browse the collection