70% of humanitarians use AI daily or weekly, while only 21.8% of organizations have formal policies in place
A comprehensive survey of 2,539 humanitarian professionals across 144 countries and territories reveals a striking disconnect: while seven in ten humanitarian workers use artificial intelligence (AI) tools daily or weekly, fewer than one in four organizations have established formal AI policies. This initial insight report shows that while humanitarian workers worldwide are rapidly integrating AI tools into their work, their organizations are struggling to keep pace with governance, training, and ethical frameworks.
The research, conducted by the Humanitarian Leadership Academy and Data Friendly Space, represents one of the most extensive global assessments of AI usage in the humanitarian sector and uncovers a “humanitarian AI paradox”: individual innovation dramatically outpacing institutional capacity to support responsible AI implementation.

Key findings: a sector in transition
The research uncovers substantial flux in the sector.
Individual innovation exceeds institutional capacity:
- 70% of humanitarians use AI daily or weekly
- Only 21.8% of organizations have formal AI policies
- 64% report minimal organizational AI training
Skills gap: While humanitarians demonstrate confidence with AI at entry levels, only 3.5% possess expert-level knowledge. Surprisingly, AI skills exceed general digital capabilities at beginner levels, suggesting AI may serve as an intuitive gateway to technology adoption. Organizations are underinvesting in AI training, creating critical knowledge gaps.
Fragmented tools: Commercial platforms dominate, with 69% of respondents relying on tools such as ChatGPT, Claude, and Copilot. AI is used mainly for report writing, data summarization, translation, and research assistance.
Governance vacuum: Despite widespread usage, fewer than 25% of organizations have AI policies. Workers express concerns about data protection, decision-making ethics, environmental impact, and over-reliance on AI versus participatory approaches.
Future priorities and implications
Looking ahead, humanitarian organizations are prioritizing AI expansion in data analytics and forecasting, monitoring and evaluation, and risk and needs assessment. The findings highlight both the sector’s readiness for AI transformation and the urgent need for coordinated investment in training, infrastructure, and governance frameworks. Despite widespread adoption by individual practitioners, organizations remain largely in the experimentation phase, with only 8% reporting widespread AI integration. Moreover, 64% of organizations provide little to no AI training for staff, creating risks around data protection, ethics, and effectiveness in contexts that demand strict neutrality and accountability.
Madigan Johnson, Head of Communications at Data Friendly Space, said:
“This research highlights a clear disconnect between how humanitarian workers are using AI and how their organizations are supporting them. With staff using AI tools regularly but few organizations having formal policies, there’s an obvious need for better training and governance frameworks. Organizations that invest in proper AI knowledge and training now will be better positioned to use these tools effectively and responsibly. With further research coming in our full report launch on August 5th, we hope to share more detailed insights and practical recommendations.”
Lucy Hall, Data and Evidence Specialist at the Humanitarian Leadership Academy, said:
“The humanitarian AI paradox is quite stark – usage is high, yet confidence seems low. On top of this, commercial AI platforms dominate usage despite humanitarian-specific AI tools being available – generative AI is almost synonymous with AI usage in humanitarian work, but future usage trends are emerging that may shift towards more specific tools being better utilised. Reassuringly, a high proportion of respondents are considering the ethical challenges with AI usage in highly sensitive contexts, upholding humanitarian standards and principles of Do No Harm.”
Ka Man Parkinson, Communications and Marketing Specialist at the Humanitarian Leadership Academy, said:
“Thanks to the global engagement of the humanitarian community in this survey, we have a dataset that is revealing new insights into humanitarians’ views, attitudes towards AI, and usage patterns around the world. The gap between individuals and organisations is stark and highlights possible courses of action for humanitarian actors to move forward with AI in an ethical and aligned manner. We look forward to sharing deeper insights with the release of our full report and online report launch event on 5 August, together with our expert panel.”
About the research
The survey was conducted in May and June 2025 by the Humanitarian Leadership Academy and Data Friendly Space, with respondents representing humanitarian actors across diverse contexts and organizational types. The research aimed to capture the full spectrum of AI adoption patterns across all types of humanitarian entities and is designed to inform strategic decision-making, policy development, and resource allocation for AI initiatives across humanitarian organizations worldwide. A full report will be released in August 2025.
Join the researchers and humanitarian leaders for a comprehensive report launch webinar on 5 August at 12:00 UTC to explore the complete findings and discuss implications for the future of humanitarian work. Registration details are available at the event registration page.
About the Humanitarian Leadership Academy
The Humanitarian Leadership Academy’s (HLA) mission is to accelerate the movement for locally led humanitarian action – one that reimagines how response happens and how those delivering it are best supported.
With over a decade of strengthening the capacity of humanitarians worldwide, a powerful network of allies, and a thriving community of alumni, we bring together local and global partnerships to drive real change. Our digital learning platform, Kaya, connects 875,000 learners, and our social media presence engages 1.2 million people, making high-quality learning accessible worldwide.
www.humanitarianleadershipacademy.org
About Data Friendly Space
Born out of the 2015 Nepal earthquake, Data Friendly Space (DFS) provides trustworthy digital tools and actionable data for social impact organisations to fulfil their mission. We believe that the international community holds not just the knowledge, but also the responsibility to utilise emerging technologies and data to better prepare for and respond to humanitarian needs, support the achievement of the Sustainable Development Goals (SDGs), and accelerate climate action.