21st October 2025
Following his participation as an expert panellist at the launch of the ‘How are humanitarians using AI in 2025? Mapping current practice and future potential’ report produced in August by the Humanitarian Leadership Academy in partnership with Data Friendly Space, Ali Al Mokdad shares his reflections on the evolving waves of AI and the deeper element of transformation that underpins them.
Ali highlights that the real challenge is not the technology itself, but how institutions, leaders, and systems adapt to it, emphasizing readiness, leadership, and cultural transformation as the foundations for meaningful progress.
Technology is not the problem; transformation is
Artificial Intelligence (AI) has become one of the most talked-about topics of the past decade. Once confined to research labs and technical papers, it is now a subject of conversation everywhere: in strategic meetings, in boardrooms, in team meetings, in classrooms, and even in casual chats among colleagues, friends, and family. It has entered every sector, including international aid and humanitarian work, and like every major technological shift, this one divides opinion. Some are optimistic, seeing AI as a force for efficiency, creativity, and progress. Others are more cautious, pointing to risks such as bias, surveillance, and job displacement.
My own perspective is a realistic one, shaped by experience and experimentation through direct engagement with users, communities, and organizations of all sizes across the international affairs and economic landscape. I have worked with those building the tools and those trying to work with them in different countries, and from that vantage point, I have come to a simple conclusion: technology is not the problem; transformation is.
The tools, of course, come with their own limitations, but the deeper story is about systems and how we, as users, leaders, and communities, learn, adapt, govern, and evolve.
 
The waves we’re living through
From what I have observed, AI has not advanced in a straight line. It has evolved through distinct waves, each unlocking new capabilities and reshaping how people live, work, and solve problems. These waves build on one another, each demanding greater computational power and opening the door to entirely new possibilities and risks.
Wave 1: Perception AI: This wave marked the breakthroughs that gave machines the ability to recognize images, voices, and patterns in data. It was the era of perception and the foundation that powered everyday tools like Siri, Alexa, and Google Assistant, as well as early advances in facial recognition and voice translation. During this period, AI gained the ability to interpret the world for the first time.
Wave 2: Generative AI (2018–2023): Then came the creative leap. AI shifted from recognizing patterns to generating content. Tools like ChatGPT, DALL·E, and Microsoft Copilot showed that machines could produce text, images, code, and insights at scale. For the first time, generative AI allowed users to engage with tools through simple conversation and to work side by side with AI to write, translate, analyze, and design.
Wave 3: Reasoning or Agentic AI (2023–Ongoing): We are now in the reasoning era, marked by major investments in AI systems such as GPT-5, Grok, and other advanced models that are learning to think, analyze, reason, plan, and act across multiple steps. These systems can tackle complex problems, research solutions, and even communicate with digital agents to complete multi-stage tasks. From my perspective, this phase represents the moment when AI moves from being a tool to becoming a true collaborator, one capable of structured reasoning, memory, and goal-driven behavior.
Wave 4: Physical AI (Emerging): The next frontier will merge intelligence with movement. Machines will understand physics, motion, and cause and effect, allowing robots to learn from experience and operate safely in real-world environments. Physical AI—encompassing robotics, self-driving cars, and other autonomous systems—could redefine what we mean by “work,” supporting people in tasks that are too dangerous, too repetitive, or too remote. This wave is still under development, but it is poised to transform manufacturing and many other industries in the years ahead.
When I look at these waves, I see market integration as the invisible thread connecting them. Each stage builds on the successful merging of the previous layer — perception, generation, reasoning, and action — into a coherent system. Together, they represent far more than a story of technological progress; they mark a deeper transformation in how we design institutions, deliver services, and imagine what humans and machines can achieve together.
The humanitarian paradox
Across the humanitarian sector, AI adoption is not being driven by institutions; it is being driven by individuals. The sector has been slower to feel the impact of recent AI waves, partly because of structural and regulatory constraints that make change difficult to implement at scale.
Yet, just like in every other industry, since the release of ChatGPT and other tools, people have been moving faster than the organizations they work within. Frontline staff, analysts, and local organizations are already experimenting with AI, for tasks such as writing and editing reports and proposals, translating data, summarizing meetings, and even designing training materials. In many cases, this experimentation is happening quietly, without formal approval, training, or organizational AI policies or strategies.
Some see this as a governance failure; others view it as a capacity development or awareness gap. I see it as resilience.
It shows that humanitarians are not waiting for permission to innovate. They are finding ways to make their work more efficient, even when the systems around them remain rigid and slow. It reflects what has always been true about this sector: when resources are limited, when crises multiply, and when bureaucracy slows progress, people improvise, all the more so now that most of these tools are simple, easy to use, accessible, and available in multiple languages.
I see that as a signal. It tells us that innovation is already alive within the system; it lives within individuals. What is missing is the structure to guide it, the ethics to anchor it, and the leadership to sustain it.
The institutional layer: what needs to change
Organizational transformation rarely fails because of technology itself. It fails because of structure, communication, rollout, and leadership vision. The AI tools that could benefit humanitarian organizations already exist, yet the systems that govern them remain slow, fragmented, and risk-averse. While individuals are testing and using AI in real time, many organizations are still debating whether it fits their digital strategies, whether the right policies are in place, or whether to invest at all. By the time a policy is drafted or a task force convened, the technology has already evolved into something new.
To bridge that gap, we as a sector need to think beyond adoption and focus on readiness and transformation. True AI readiness is not about software or servers; it is about architecture. It requires an ecosystem where innovation is not an accident but a habit, where there is space for experimenting, failing, and succeeding, and where mavericks are enabled and supported.
From my work with humanitarian organizations, governments, and private partners, I see five foundational pillars that define this readiness:
1. Innovation: Create spaces where people can experiment safely. Allow pilots, prototypes, and small-scale trials. Encourage curiosity and accept that some initiatives will fail because that is where learning happens. Support communities of practice, super users, and individuals who are curious, and build around them. Most importantly, invest in learning and development.
2. Infrastructure: Invest in the digital, legal, and ethical systems that enable the secure and scalable use of AI. This means modernizing data governance, ensuring connectivity in low-resource settings, and building trust frameworks that make responsible AI possible. Data is a key asset: it must be cleaned, secured, and properly governed. Organizations need clear ownership structures, data management standards, and cybersecurity measures that protect both people and information. Without a strong foundation of trustworthy data, AI becomes unreliable and difficult to scale.
3. Ecosystem: AI integration cannot exist in isolation. It must be part of a broader digital ecosystem that connects operational, programmatic, and strategic functions. Within an organization, AI should strengthen existing systems rather than sit apart from them. This requires mapping internal processes, identifying where AI adds value, and linking it to existing tools such as ERP systems, digital workflows, and decision-support platforms.
4. Partnerships: No single organization can do this alone. Collaboration across local actors, international organizations, academia, donors, and technology partners is essential. Shared learning reduces duplication, accelerates progress, and fosters responsible innovation. Strong partnerships with technology-native companies can bring new expertise, infrastructure, and design capabilities to humanitarian and development contexts. These collaborations should emphasize co-design, knowledge transfer, and long-term capacity building rather than one-off pilots.
5. Openness: Open source your journey. Communicate internally and externally. Share what works, what does not, and what is next. The humanitarian sector thrives on transparency, and AI should be no exception. Knowledge must circulate freely if progress is to reach everyone.
These five pillars form the backbone of institutional transformation. Without them, AI adoption will remain fragmented, isolated, and dependent on a few well-resourced actors. With them, we can turn scattered innovation into a collective leap forward. But even the strongest framework needs leadership, because it is leadership that gives direction, builds trust, and turns structure into lasting transformation. Organizational leaders must therefore cultivate a clear vision, foster curiosity, and create cultures where learning and innovation are not exceptions but everyday practice.
Final thoughts: the work ahead
The future of AI in humanitarian work will not be decided by a single breakthrough or policy, but by the everyday practice of learning, experimenting, and adjusting. It is a process of thinking, collaborating, reflecting, and continuously refining our collective understanding of what responsible progress looks like.
This moment calls for humility as much as ambition. The systems we are building will evolve through trial and error, guided by dialogue rather than certainty. To move forward, we must protect the space for curiosity, invite diverse perspectives, and remain anchored in first-principles thinking: start from what is true, reason from the ground up, and design from the needs of people rather than the habits of institutions.
Transformation is not only about the tools we adopt, but about how we communicate and collaborate around them. It depends on how we explain complex ideas to those outside the room, how we build awareness across cultures and professions, and how we keep ethics and empathy visible in every decision.
I believe that AI tests not only our systems, but our willingness to evolve them. If we can approach this moment with openness, discipline, and a shared sense of purpose, transformation will not remain an aspiration. It will become a collective practice – one defined by courage, clarity, and the constant pursuit of better ways to serve humanity.
About the author
Ali Al Mokdad is a Strategic Senior Leader specializing in Global Impact Operations, Governance, and Innovative Programming. With a global footprint across the Middle East, Africa, and Asia, he has led complex humanitarian and development responses through senior roles in INGOs, UN agencies, donor institutions, and the Red Cross and Red Crescent Movement.
Ali held the role of AI Co-Lead at NetHope, where he also led work on AI integration and AI strategy for business operations, delivering targeted trainings for leaders. He also authored in-depth research on the future of governance, “Inclusive and Intelligent Governance: Enabling Sustainability Through Policy and Technology,” published by IGI Global Scientific Publications.
Ali is known for driving operational excellence, advancing inclusive governance, and designing people-centered systems that hold both purpose and impact at their core.
Related resources
Report and supporting resources: Artificial intelligence in the humanitarian sector: mapping current practice and future potential (the report is available in English, French and Spanish)
Research report companion article: The history of artificial intelligence in humanitarianism by Lucy Hall (HLA) [read the article]
Report launch event: watch the recording and download the slides
Leadership podcasts with Ali Al Mokdad:
Part 1: Leading with vision and heart: reflections on humanitarian leadership [listen here]
Part 2: Inviting in the chaos: strategic insights for humanitarian leaders [listen here]
