23rd March 2026
During the Humanitarian Reset and drive for systemic reform, what meaningful role does technology play? In this opinion piece, Alex Bornstein, Executive Director, Zcash Foundation and formerly IRC Chief Supply Chain Officer, asks a challenging question: is the sector moving towards genuine technological transformation, or is this activity driven by donor expectations?
The humanitarian sector engages in ongoing discussions about the application of blockchain, artificial intelligence (AI), drones, big data, and digital cash in crisis response. These technologies are often presented as having the potential to transform aid delivery, promising faster, more accountable, and more cost-effective assistance. The dialogue surrounding these tools is fuelled by the compelling idea that technological advancements could help mitigate the structural constraints that have long limited the operational effectiveness of humanitarian aid.

Yet the evidence suggests a widening gap between aspiration and operational reality. The UN OCHA Global Humanitarian Overview 2026 reports that 239 million people will require humanitarian assistance in 2026, but humanitarian plans aim to reach only 135 million, with 87 million prioritised for immediate, life-saving aid (OCHA 2026). At the same time, research on humanitarian innovation finds that agencies frequently struggle to translate promising pilots into scaled, routine capabilities (ALNAP 2022).
This context raises a fundamental question: is the sector witnessing genuine systemic transformation, or primarily performing a visible display of innovation intended to satisfy donor expectations without delivering meaningful, system-wide progress?
The distinction matters. When performance is mistaken for impact, scarce resources and leadership attention drift toward high-visibility projects with limited real-world effect. This can happen precisely when needs are rising and institutional capacity is stretched thin.
Why do so many ‘breakthroughs’ fail to scale?
Several related structural and cultural barriers prevent promising new technologies from moving beyond pilot phases into system-wide deployment, perpetuating a chronic cycle of experimentation without meaningful institutional change.
1. Humanitarian need exceeds system capacity and risk tolerance
The humanitarian system operates in a chronic imbalance between escalating needs and available resources, clearly demonstrated in the 2026 figures, where only a fraction of individuals requiring aid can be reached (OCHA 2026). This pressure is further intensified by chronic underfunding, with appeals consistently receiving less than 40% of the amounts requested (The New Humanitarian 2025).
In this environment of resource scarcity, heightened scrutiny, and intense pressure to deliver results instantly, organisations prioritise proven reliability over hypothetical efficiency. They often revert to delivery models known to function successfully in insecure, remote, and infrastructure-poor settings. Paper-based cash, in-kind distributions, and manual verification systems may be labour-intensive and slow, but they offer reliable, robust functionality that many nascent digital technologies cannot guarantee when internet access is disrupted or power fails. Under this pressure, decision-makers consistently select approaches that guarantee service delivery today, even if suboptimal, rather than risking widespread failure for the sake of potential efficiencies tomorrow. The cost of a failed technical pilot is the loss of critical aid to a crisis-affected population, a risk few decision-makers are willing to assume.
2. Innovation remains trapped in the pilot phase due to funding structures
The humanitarian innovation landscape is saturated with pilots, yet few cross the chasm from proof-of-concept to institutional adoption. ALNAP’s comprehensive review of 540 innovation projects found that most never progressed beyond experimentation because they lacked sustained funding, robust evidence, or organisational ownership (Komuhangi, C. et al. 2023).
This failure is fundamentally a matter of funding misalignment. Grant ecosystems often reward novelty and launch velocity rather than the unglamorous work of adoption, integration, and long-term dependability. Donor funding cycles are typically short (often 12–24 months) and reward ‘newness’ rather than ‘what works at scale’. This system incentivises organisations to chase new, easily reportable pilot grants instead of investing in the essential infrastructure required for system-wide adoption. Scaling a successful digital tool requires patient, multi-year investment in maintenance, security updates, standardisation, staff training across dozens of country offices, and integration with existing legacy systems. When initial pilot grants conclude, these essential, less visible support activities are rarely picked up by core organisational budgets, causing the project to wither rather than transition into a scalable, dependable system.
3. Fragmentation prevents interoperability and shared infrastructure
The humanitarian ecosystem is a sprawling, distributed consortium of diverse actors—UN agencies, international NGOs, national NGOs, governments, and community groups—each operating with distinct mandates, data systems, databases, and technology infrastructures. This inherent fragmentation creates massive hurdles for any technology seeking system-wide adoption. A specialised software or blockchain tool designed for one organisation typically cannot be adopted by another without substantial, time-consuming, and costly modification.
Consequently, data remains deeply siloed, and technology systems lack common standards for exchange or integration. Even within the UN system, where coordination is prioritised, a 2024 internal review documented fragmented and inconsistent adoption of AI tools, highlighting the lack of a unified digital strategy (UN CEB 2024). Innovation often fails not because the tools lack merit, but because the ecosystem lacks the shared technical architecture, common governance frameworks, and standardised data protocols needed to support cross-organisational scale and interoperability. Without this foundation, every scaling effort becomes a bespoke integration project: slow, costly, and severely limited in reach and impact.
4. Ethical and protection concerns demand caution and slow adoption
Humanitarian environments involve profound and non-negotiable risks related to data privacy, power asymmetry, surveillance, exclusion, and consent. Technologies that rely on capturing vast amounts of personal data or employ automated decision-making raise legitimate and serious ethical concerns that directly conflict with the core humanitarian principle of “Do No Harm.”
Uncertainties around fairness, algorithmic bias, transparency, and potential harm lead agencies to rightly exercise extreme caution, slowing or halting the scaling of tools in crisis settings, as detailed by ALNAP’s Explain AI series (ALNAP 2023). These are not abstract regulatory issues; they are operational concerns involving the safety and security of people already living with heightened risks. For instance, biometric identification can fail disproportionately for certain groups (e.g. higher false rejection rates for women, darker skin tones, older adults, or people with worn fingerprints), resulting in wrongful denial of aid. Similarly, AI-driven resource allocation can reproduce historical under-service by prioritising areas that are easier to reach or better represented in data (e.g. places with stronger connectivity or more complete records), systematically deprioritising remote or marginalised communities.
The potential for technology to deepen existing power imbalances, by providing surveillance capabilities to authorities or creating layers of digital exclusion for those without access, is a constant, pressing concern. Absent robust governance frameworks and privacy protections, which are often time-consuming to develop and test, digital systems will continue to be treated as high-risk experiments rather than dependable operational necessities.
Privacy-preserving technologies are essential for genuine progress
If digital innovation is to move beyond theatre and become a durable pillar of humanitarian action, it must be fundamentally designed around privacy, protection, and humanitarian principles from the outset. Traditional digital systems are often deemed risky because they centralise data, making them a high-value target for breaches. Emerging privacy-preserving approaches, however, show that digital tools can advance efficiency while comprehensively safeguarding crisis-affected populations.
Several models are particularly relevant:
- Decentralised Cryptographic Identity: Individuals retain full control of their encrypted credentials, reducing reliance on central organisational databases that are inherently vulnerable to misuse or breaches. This shifts the locus of control from the aid agency to the recipient.
- Zero-Knowledge Proofs (ZKPs): ZKPs allow an aid organisation to verify an individual’s eligibility (e.g. they are over 18 or live in a certain area) without requiring the individual to reveal any of the sensitive underlying attributes (e.g. date of birth, specific address). This mechanism is critical in politically volatile or high-risk settings where revealing personal information is dangerous.
- Secure Multi-Party Computation (SMPC): SMPC allows multiple organisations to collaborate on data analysis (e.g. detecting duplicate registrations or identifying coverage gaps) without ever sharing or exposing their raw data sets to one another, enabling unprecedented coordination while strictly respecting organisational boundaries and client privacy.
- Privacy-Preserving Digital Cash: Tokenised, cryptographically secured aid models, such as those utilising technologies like Zcash, which employs zero-knowledge proofs to allow fully shielded transactions, demonstrate that it is possible to deliver digital assistance with auditability, security, and minimal data exposure. The application of such systems directly overcomes the fear that digital cash exposes recipients’ transaction history to surveillance, a critical concern for humanitarian actors operating in sensitive contexts.
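To make the zero-knowledge idea concrete, the sketch below implements one round of the classic Schnorr identification protocol, a textbook zero-knowledge proof: the prover convinces a verifier that it knows a secret x behind a public value y = g^x without ever revealing x. The tiny group parameters are purely illustrative, and this teaching sketch is not how Zcash or any production ZKP system is implemented; real deployments use standardised large groups or elliptic curves and non-interactive proof systems.

```python
import secrets

# Toy group: safe prime p = 2q + 1, with g generating the subgroup of
# prime order q. Real systems use groups thousands of bits long.
p, q, g = 1019, 509, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1   # prover's secret (e.g. a credential)
    return x, pow(g, x, p)             # public value y = g^x mod p

def prove_commit():
    r = secrets.randbelow(q)
    return r, pow(g, r, p)             # commitment t = g^r mod p

def prove_respond(r, x, c):
    return (r + c * x) % q             # response s = r + c*x mod q

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p); the verifier learns nothing about x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# One protocol round: prove knowledge of x without disclosing it.
x, y = keygen()
r, t = prove_commit()
c = secrets.randbelow(q)               # verifier's random challenge
s = prove_respond(r, x, c)
print(verify(y, t, c, s))              # True for an honest prover
```

The same structure underlies the eligibility checks described above: the proof statement becomes “I hold a credential satisfying condition X” rather than “I know x”, but the verifier still learns only a yes/no answer.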
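The SMPC idea can likewise be illustrated with its simplest building block, additive secret sharing: each organisation splits a private figure into random shares, and only the combination of everyone’s shares reveals the aggregate. The organisation names and counts below are invented for illustration, and real SMPC deployments rely on hardened frameworks rather than hand-rolled code like this.

```python
import secrets

# All arithmetic is modulo a large prime so individual shares look random.
P = 2**61 - 1

def share(value, n_parties):
    """Split `value` into n random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Each organisation's private registration count (hypothetical figures,
# never exchanged in the clear).
counts = {"org_a": 1200, "org_b": 875, "org_c": 430}
n = len(counts)

# Each org splits its count into n shares and sends one share to each party.
all_shares = [share(v, n) for v in counts.values()]

# Each party sums the shares it received (one per org) and publishes only
# that partial sum; no single party can recover any individual count.
partial_sums = [sum(col) % P for col in zip(*all_shares)]

total = sum(partial_sums) % P
print(total)  # 2505 -- the aggregate, with no individual count revealed
```

Duplicate-registration detection requires richer protocols (private set intersection), but the principle is the same: computation proceeds on shares or encrypted values, and only the agreed result is ever disclosed.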
By adopting these privacy-by-design tools, the sector can directly address the ethical, operational, and regulatory barriers that have historically prevented digital innovation from scaling. Privacy-preserving design is therefore not optional; it is a foundational, moral imperative necessary for building trust, securing buy-in from communities, and ensuring the safe, sustainable adoption of digital systems.
Organisational culture shapes innovation more than technology
Even well-designed and secure tools cannot scale in systems that are structurally or culturally unprepared for change. Persistent cultural and institutional dynamics within the sector act as powerful constraints on progress:
- Misaligned incentives: Short-term funding cycles fundamentally reward launching pilots over scaling them. The incentive structure itself promotes novel, temporary projects over durable, integrated systems, prioritising the appearance of activity over genuine, long-term impact. This often leads to ‘pilotitis’, where organisations endlessly repeat small-scale experiments without ever committing to deep institutional change.
- Deep structural risk aversion: While ethical caution is necessary, a pervasive structural risk aversion often prevents the adoption of any new method, even after thorough de-risking and successful pilots. The organisational culture prioritises the elimination of all perceived risk associated with new technologies, often preferring the known, predictable inefficiencies of legacy systems over the uncertainty of new tools, especially where failure may endanger lives or undermine public trust. This translates into bureaucratic inertia that stifles scaling.
- Limited local leadership: Despite national and local organisations delivering the vast majority of frontline aid, the innovation agenda, including technological design, funding, and governance, is overwhelmingly set by international headquarters and Northern donors. This severely limits the relevance, appropriateness, and long-term sustainability of solutions, which often fail to account for crucial local infrastructure realities, governance structures, language needs, and cultural contexts. Solutions must be co-designed and governed by those closest to the crisis to ensure true fitness-for-purpose.
- Outcome-light evaluation: A pervasive culture allows projects to be labelled ‘successful’ or ‘promising’ simply because they launched on time and completed their immediate, narrow objectives, without demonstrating measurable improvements in crucial humanitarian outcomes like cost-effectiveness, speed, dignity, or reach. This prevents the necessary rigour for validating technologies before scaling. Without robust, third-party validation and a shared metric for success that goes beyond anecdotal evidence, the sector cannot rationally decide which innovations deserve sustained investment.
Without fundamental changes to these incentives and to the overall governance structure, technology alone cannot deliver system-wide transformation. The barriers are institutional and cultural, not purely technical.
What would real humanitarian innovation look like?
For innovation to contribute meaningfully and safely to humanitarian outcomes, the sector must prioritise a holistic shift in approach, moving from a focus on novelty to a focus on utility, safety, privacy, and integration.
This requires demanding tools designed for operational reality, systems that function seamlessly amidst low connectivity, political complexity, insecure environments, and limited infrastructure. It requires promoting local leadership as standard practice, ensuring solutions reflect context and are governed by those closest to affected populations from conception to deployment, not just at the implementation stage.
Perhaps most importantly, it necessitates building privacy-preserving digital systems that embed safety and dignity from the outset, treating data protection as a design requirement, not an afterthought. Crucially, it would require funding models that value integration, maintenance, and long-term performance, alongside evidence-based decisions and standards that reward measurable outcome improvements rather than compelling narratives.
Closing reflection
Technology can strengthen humanitarian response by improving targeting, reducing duplication, increasing accountability, and accelerating delivery. But potential is not progress. As long as innovation is primarily measured by its narrative appeal, the number of press releases it generates, or its success in securing the next grant, rather than its demonstrable impact on people in crisis, it risks becoming theatre.
The sector does not need less innovation; it needs better innovation. Innovation that is rigorously tested, grounded in operational reality, accountable to affected communities, supported by long-term institutional investment, and designed to protect privacy and dignity by default. If these principles guide future work, technology can finally move from being a mere promise to a dependable, operational reality that truly strengthens the humanitarian aid sector.
Sources
- OCHA (2026). Global Humanitarian Overview 2026 (GHO 2026). United Nations Office for the Coordination of Humanitarian Affairs.
- ALNAP (2022). Engineering Scale Up in Humanitarian Innovation’s Missing Middle. ALNAP/ODI.
- The New Humanitarian (2025). Five takeaways from the UN’s aid plans for 2026.
- Komuhangi, C. et al. (2023). Assessing the promise of innovation for improving humanitarian performance: A 10-year review for the State of the Humanitarian System report. ALNAP/ODI.
- UN CEB (2024). Report on the Operational Use of AI in the UN System. United Nations Chief Executives Board for Coordination.
- ALNAP (2023). Explain AI Series: Fairness, Transparency, and Protection in AI for Crisis Settings. ALNAP/ODI.
About the author
Alex Bornstein
Alex Bornstein is the Executive Director of the Zcash Foundation, stewarding open-source, privacy-preserving financial infrastructure. He leads the Shielded Aid Initiative, designing humanitarian digital cash systems that ensure both privacy and auditable accountability. Drawing on his humanitarian aid and nonprofit leadership background, Alex approaches privacy as a duty of care—essential for dignity, compliance, and safety in digital aid delivery. His goal is to make privacy the simplest choice for aid organisations and the people they serve.
Alex was a panellist on our online event held on 10 December 2025: ‘Humanitarian Futures: Still here, still human – what next?’. He is a guest on an upcoming Fresh Humanitarian Perspectives podcast episode: ‘Privacy as protection: rethinking blockchain, cryptocurrency and humanitarian sector reform.’
Disclaimer
The views expressed in this piece are the author’s own and do not necessarily reflect those of their organisation or the Humanitarian Leadership Academy. This piece is published as a contribution to ongoing discussions about humanitarian sector reform and digital transformation. Publication does not constitute endorsement of any specific technology, organisation, or approach by the HLA.