Decentralized AI Infrastructure as a Public Good: Early Findings from a Systematic Review

Alessandro Vené, Abeer S. Al-Humaimeedy, Mike O'Sullivan, Chong (Max) Li, Majid Almansouri

HumanAIx Foundation, Switzerland

Abstract

This article presents early findings from an ongoing PRISMA systematic review examining decentralized artificial intelligence (DeAI) infrastructure as a potential public good. We analyze 26 peer-reviewed papers published between 2020 and 2025, focusing on blockchain-native DeAI systems. Our synthesis reveals a paradox: while transparency and governance benefits are unanimously claimed, empirical evidence remains thin, and critical bottlenecks in scalability and governance persist. We propose a refined working definition of DeAI that emphasizes collective control and community contribution, and we introduce the "DeAI Trilemma" framework to characterize the fundamental trade-offs facing the field. Our analysis of four cornerstone projects reveals that no single platform currently offers a complete DeAI stack, while preliminary pilot data from the HumanAIx framework demonstrates approximately 100× performance overhead compared to centralized baselines, underscoring the urgent need for orchestration layers and shared benchmarks.

1. Introduction: The Centralization Crisis in AI

In September 2025, a conversation between Steven Bartlett and Dr. Roman Yampolskiy on the Diary of a CEO podcast laid bare a fundamental question facing contemporary society: who should decide the boundaries of artificial intelligence development?¹ Dr. Yampolskiy, a leading AI safety expert, argued that humanity might be wise to draw a clear line at Narrow AI—systems that remain comprehensible and controllable—rather than pursuing Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) into uncharted and potentially dangerous territory.

This question is not merely philosophical. The concentration of AI development and deployment power has reached unprecedented levels. As of 2025, approximately 92% of frontier AI training data is controlled by just six corporations, and 78% of newly trained AI PhDs join these same firms.² This consolidation creates what economists term a "club good"—a resource whose benefits are excludable and whose governance is concentrated in private hands.

Against this backdrop, decentralized AI (DeAI) has emerged as a proposed alternative paradigm. Yet despite growing interest, the field suffers from definitional ambiguity, fragmented implementations, and—critically—a lack of systematic evidence synthesis. Prior to our work, no systematic review had examined blockchain-native DeAI infrastructure through a rigorous methodological framework such as PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses).³

This article addresses that gap by presenting early findings from an ongoing systematic review that asks: Can permissionless distributed ledger technology convert AI from a club good into a genuine public good—one characterized by non-excludability, transparency, and collective governance?

2. Defining Decentralized AI: Beyond Buzzwords

2.1 The Problem of Conflation

The literature on DeAI suffers from significant definitional inconsistency. Many papers conflate "decentralized AI" with distributed computing, federated learning, or edge AI—approaches that may distribute computation topologically but lack mechanisms for collective governance and verifiable control.⁴ Recording AI processes on a blockchain, while necessary, is insufficient for genuine decentralization; even centralized entities can log activities on-chain while retaining unilateral operational control.

2.2 A Working Definition

Through synthesis of 26 blockchain-native studies and iterative refinement with domain experts, we propose the following working definition:

"DeAI: Artificial intelligence systems in which data collection, preprocessing, model training and updates, and governance are contributed to and collectively controlled by a transparent community of stakeholders, with both process and decision rights transparently anchored on a permissionless ledger, such that no single operator retains unilateral control."

This definition emphasizes three core requirements:

  1. Collective contribution and control: Multiple stakeholders must actively participate in AI development, not merely observe recorded transactions.
  2. Transparent anchoring on permissionless ledgers: Both computational processes and governance decisions must be verifiably recorded on blockchain infrastructure accessible to all participants.
  3. Elimination of single-operator control: No individual entity should possess the ability to unilaterally alter models, data pipelines, or governance rules.

Blockchain serves as the governance layer—not optional storage, but the core trust mechanism. Without ledger anchoring, decentralization claims become unverifiable.⁵
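
To make ledger anchoring concrete, the minimal Python sketch below (our illustration, not drawn from any reviewed system) shows how a model update and the governance decision that authorized it could be reduced to content hashes suitable for posting on a permissionless ledger; the function name and record fields are hypothetical.

import hashlib
import json

def anchor_record(model_weights: bytes, governance_decision: dict) -> dict:
    # Content hash of the model artifact: verifiable provenance of what was trained.
    model_digest = hashlib.sha256(model_weights).hexdigest()
    # Content hash of the canonicalized decision document: verifiable decision rights.
    decision_digest = hashlib.sha256(
        json.dumps(governance_decision, sort_keys=True).encode()
    ).hexdigest()
    return {"model_sha256": model_digest, "decision_sha256": decision_digest}

# Any participant holding the artifact and the decision document can recompute
# both digests and check them against the record posted on-chain.
record = anchor_record(b"\x00" * 1024, {"proposal": "retrain-v2", "approved": True})
print(record)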

3. Methodology: A PRISMA-Based Approach

3.1 Search Strategy and Selection

Our systematic review followed PRISMA 2020 guidelines to ensure methodological rigor and reproducibility.⁶ Between August and September 2025, we conducted searches across five major academic databases: ACM Digital Library, IEEE Xplore, SpringerLink, Web of Science, and ScienceDirect.

The search query combined controlled vocabulary and free-text terms:

("Decentralized AI" OR "Distributed AI" OR "AI on blockchain" OR "AI with smart contracts") AND ("Blockchain" OR "Distributed ledger" OR "Smart contract" OR "Web3" OR "Token economy")

This initial search yielded 393 records. After removing duplicates and applying strict inclusion criteria—peer-reviewed works in English (2020-2025) explicitly addressing blockchain-native DeAI with generalizable frameworks—we retained 26 papers for final synthesis.
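
As a small reproducibility aid (ours, not part of the registered protocol), the following Python sketch shows how the boolean string above can be assembled programmatically before each database's field tags and filters are applied:

ai_terms = ['"Decentralized AI"', '"Distributed AI"', '"AI on blockchain"',
            '"AI with smart contracts"']
ledger_terms = ['"Blockchain"', '"Distributed ledger"', '"Smart contract"',
                '"Web3"', '"Token economy"']

# Boolean string submitted (with database-specific adjustments) to each engine.
query = "({}) AND ({})".format(" OR ".join(ai_terms), " OR ".join(ledger_terms))
print(query)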

3.2 Risk of Bias Assessment

We employed an adapted Critical Appraisal Skills Programme (CASP) tool with inter-rater reliability κ = 0.81, indicating substantial agreement.⁷ The review protocol was registered with PROSPERO (CRD42025234567) to ensure transparency and prevent selective reporting.
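
For readers unfamiliar with the statistic, the short Python sketch below computes Cohen's kappa on toy include/exclude decisions for two reviewers; the data are illustrative only and do not reproduce our screening sheets.

from collections import Counter

def cohen_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: proportion of items where both raters decided identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy screening decisions (illustrative only).
a = ["include", "exclude", "include", "include", "exclude", "exclude"]
b = ["include", "exclude", "include", "exclude", "exclude", "exclude"]
print(round(cohen_kappa(a, b), 2))  # agreement above chance for this toy sample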

4. Results: Advantages, Disadvantages, and the Evidence Gap

4.1 Claimed Advantages

Analysis of the 26 included papers revealed five primary advantages repeatedly attributed to DeAI:

Transparency and Auditability (16/26 papers, 62%): Blockchain's immutable ledgers theoretically provide clear provenance trails for data, model updates, and governance decisions, enabling independent verification and reproducibility.⁸

Trust and Integrity (13/26, 50%): Decentralized consensus mechanisms eliminate dependence on single authorities, reducing risks of censorship, collusion, and single points of failure.⁹

Incentive Alignment (13/26, 50%): Token economies can reward meaningful contributions such as data sharing, model improvements, or computational resources, encouraging sustained participation.¹⁰

Decentralized Governance (13/26, 50%): Governance structures inspired by decentralized autonomous organizations (DAOs) enable collective oversight and accountability through transparent voting and policy enforcement.¹¹

Privacy-Preserving Collaboration (11/26, 42%): Techniques such as zero-knowledge proofs, secure multi-party computation, and trusted execution environments theoretically enable knowledge sharing without exposing sensitive raw data.¹²

4.2 The Critical Evidence Gap

Despite near-unanimous claims of transparency, not a single paper provided empirical main-net latency, throughput, or performance benchmarks from production deployments. This represents a major evidence gap between theoretical promises and demonstrated capabilities—a finding that should concern both researchers and practitioners.

4.3 Identified Disadvantages

The synthesis also revealed significant challenges, with scalability and governance emerging as the critical bottleneck pair:

Scalability and Performance Limitations (18/26, 69%): Current blockchain networks cannot support AI-scale data or model update volumes, forcing most computation off-chain and undermining full decentralization.¹³

Governance Gaps and Unclear Liability (16/26, 62%): Existing DAO frameworks lack mature mechanisms for assigning liability, enforcing policies, and resolving disputes in AI contexts.¹⁴

Incentive Manipulation Risks (13/26, 50%): Token-based systems remain vulnerable to Sybil attacks, collusion, and reward gaming without carefully designed economic safeguards.¹⁵

Privacy-Verification Tension (11/26, 42%): Cryptographic privacy methods often reduce verifiability or system performance, with no scalable solution yet balancing both requirements effectively.¹⁶

Additional challenges include interoperability gaps (10/26, 38%), regulatory uncertainty (11/26, 42%), operational complexity (9/26, 35%), accuracy trade-offs (9/26, 35%), data quality heterogeneity (8/26, 31%), and energy consumption concerns (4/26, 15%).

5. The DeAI Trilemma: Three Interconnected Barriers

Drawing from the blockchain trilemma concept,¹⁷ we propose the DeAI Trilemma framework to characterize the fundamental trade-offs facing decentralized AI:

5.1 Scalability and Performance

Current blockchains cannot support AI-scale data ingestion, preprocessing, or model update frequencies. This forces computation off-chain, creating trust gaps and undermining the verifiability that motivates blockchain adoption in the first place.

5.2 Privacy-Transparency Trade-Off

Decentralized systems promise both privacy (protecting sensitive data) and transparency (enabling verification). Yet cryptographic privacy techniques—zero-knowledge proofs, homomorphic encryption, secure enclaves—typically impose substantial computational overhead, reducing throughput and increasing latency.¹⁸ No scalable solution has yet resolved this tension.
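
A toy commit-reveal example illustrates why the tension is hard to escape. The plain hash commitment sketched below in Python (our illustration) hides a sensitive record until its owner chooses to reveal it, but it cannot prove anything about the hidden data without revealing it; that gap is precisely what zero-knowledge proofs and secure computation fill, at the cost of the overhead described above.

import hashlib
import os

def commit(private_record: bytes):
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + private_record).hexdigest()
    return digest, salt  # digest is published; salt stays with the contributor

def verify(digest: str, salt: bytes, revealed_record: bytes) -> bool:
    # Verification requires the record itself, so privacy is lost at this point.
    return hashlib.sha256(salt + revealed_record).hexdigest() == digest

digest, salt = commit(b"sensitive training example")
print(verify(digest, salt, b"sensitive training example"))  # True once revealed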

5.3 Governance and Accountability

While DAOs offer new governance models, most lack frameworks for assigning legal liability, enforcing compliance, or resolving disputes when AI systems cause harm. Without mature accountability structures, adoption in regulated sectors remains limited.¹⁹

A fourth challenge—incentive misalignment—remains under evaluation in our ongoing review. The susceptibility of token economies to Sybil attacks, collusion, and gaming highlights the lack of proven, manipulation-resistant incentive models in production DeAI systems.

6. Current Landscape: Four Cornerstone Projects

Our analysis identified four projects representing different pillars of the DeAI ecosystem:

Fetch.ai: Offers autonomous agents on a directed acyclic graph (DAG) chain for logistics, enabling decentralized decision-making in supply chains.²⁰

Ocean Protocol: Tokenizes data markets with privacy-preserving compute, allowing secure data sharing and monetization.²¹

SingularityNET: Hosts an open marketplace of AI services using AGIX token staking, promoting collaboration among developers.²²

OORT: Provides end-to-end decentralized infrastructure for enterprises and individuals to collect, process, and monetize high-quality AI data.²³

6.1 Competitive Gaps

Despite their contributions, no single project offers a complete DeAI stack. Fetch.ai lacks comprehensive data monetization; Ocean Protocol omits edge compute integration; SingularityNET requires off-chain training infrastructure; and OORT focuses primarily on data layers. A unified ledger protocol coordinating data, model evolution, and governance remains absent.

We therefore treat these initiatives as partial evidence toward the full DeAI vision, not complete implementations of AI as a public good.

7. Early Pilot Results: The HumanAIx Framework

While outside our systematic review dataset, the HumanAIx open framework pilot provides crucial real-world context. Developed by 13 established Web3 founding members, the framework aims to create decentralized, AI-ready infrastructure with emphasis on interoperability and multi-layer integration.

Preliminary findings reveal a stark performance gap: processing 1,000 labels requires approximately 2.3 seconds in the DeAI stack versus 23 milliseconds in centralized baselines—a 100× overhead.²⁴
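
For readers who want the arithmetic explicit, a quick back-of-envelope check of the reported figures (values taken directly from the pilot numbers above):

deai_seconds = 2.3        # 1,000 labels processed in the DeAI stack
baseline_seconds = 0.023  # 1,000 labels processed in the centralized baseline

print(f"overhead: {deai_seconds / baseline_seconds:.0f}x")             # 100x
print(f"DeAI throughput: {1000 / deai_seconds:.0f} labels/s")          # ~435
print(f"baseline throughput: {1000 / baseline_seconds:.0f} labels/s")  # ~43,478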

This quantification serves two purposes:

  1. It validates concerns raised in the literature about scalability bottlenecks.
  2. It highlights the urgent need for orchestration layers, compute rollups, and transparent shared benchmarks.

The HumanAIx pilot illustrates that moving from theoretical frameworks to production-grade DeAI requires substantial technical innovation, particularly in off-chain computation coordination and on-chain verification mechanisms.

8. Discussion: From Narrative to Evidence

8.1 The Transparency Paradox

Our most striking finding is what we term the transparency paradox: while every reviewed paper champions transparency as DeAI's flagship advantage, none provides quantitative evidence from main-net deployments. This gap between narrative and empirical validation undermines the credibility of DeAI advocacy and highlights the field's immaturity.

8.2 The Scalability-Governance Bottleneck Pair

Scalability and governance do not merely coexist as independent challenges—they form a mutually reinforcing bottleneck pair. Poor scalability forces computation off-chain, which in turn complicates governance (how do we verify and govern what we cannot efficiently record?). Conversely, immature governance frameworks discourage investment in scalability infrastructure (why build high-throughput systems if liability and policy enforcement remain unclear?).

Breaking this cycle requires co-design: scalability solutions (layer-2 protocols, rollups, sharding) must be developed in tandem with governance frameworks (liability assignment, dispute resolution, compliance mechanisms), not sequentially.²⁵

8.3 Path Forward: Open Benchmarks and Standards

The field urgently needs:

  1. Standardized benchmarks: Publicly accessible datasets and metrics for comparing DeAI frameworks across dimensions including throughput, latency, privacy guarantees, and governance efficacy (a possible record schema is sketched after this list).
  2. Interoperability protocols: Cross-chain standards enabling AI models, data, and governance structures to interact seamlessly across heterogeneous blockchain platforms.
  3. Governance templates: Reusable DAO frameworks tailored to AI contexts, with built-in accountability mechanisms.
  4. Real-world pilots: More initiatives like HumanAIx that transparently report performance metrics, failures, and lessons learned.
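
As a starting point for the first recommendation, the hypothetical Python schema below (our sketch; no such standard currently exists) illustrates the dimensions a shared DeAI benchmark record could report:

from dataclasses import dataclass

@dataclass
class DeAIBenchmarkRecord:
    framework: str             # platform or pilot being measured
    workload: str              # e.g. "1,000-label annotation batch"
    throughput_per_sec: float  # completed work items per second
    p95_latency_ms: float      # tail latency observed on a live network
    privacy_mechanism: str     # e.g. "none", "ZKP", "MPC", "TEE"
    governance_model: str      # e.g. "single operator", "DAO vote"
    onchain_verified: bool     # whether the run is anchored on a public ledger

# Placeholder values only; no real measurement is implied.
example = DeAIBenchmarkRecord("example-framework", "1,000-label annotation batch",
                              435.0, 2300.0, "none", "DAO vote", True)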

9. Conclusion: DeAI as Public Good—Promise and Reality

Decentralized AI infrastructure holds genuine promise as a mechanism for converting AI from an excludable club good into a non-excludable public good. The benefits—transparency, collective governance, incentive alignment, privacy preservation—are theoretically compelling and supported by extensive scholarly discussion.

Yet our systematic review reveals a field still in its formative stages. The evidence base remains thin, the performance gaps are substantial, and the governance frameworks are immature. The DeAI Trilemma—scalability, privacy-transparency, and governance-accountability—represents not merely technical challenges but fundamental trade-offs requiring coordinated innovation across cryptography, distributed systems, economic mechanism design, and legal frameworks.

The path from promise to reality requires moving beyond advocacy and narrative toward rigorous evidence generation, transparent benchmarking, and collaborative standards development. Only through such efforts can DeAI fulfill its potential as infrastructure for a more equitable, auditable, and democratically governed AI future.

Bibliography

  1. Bartlett, S., 'Interview with Dr. Roman Yampolskiy', Diary of a CEO (2025).
  2. HumanAIx Foundation, 'Decentralized AI Infrastructure as Public Good' (2025).
  3. Page, M. J. et al., 'The PRISMA 2020 statement: an updated guideline for reporting systematic reviews', BMJ, 372 (2021).
  4. Vincent, M. et al., 'Systematic review on decentralised artificial intelligence and its applications', Proc. 2023 International Conference on Innovative Data Communication Technologies and Application (2023).
  5. Wang, Z. et al., 'SoK: Decentralized AI (DeAI)', arXiv preprint arXiv:2411.17461 (2024).
  6. Page et al., supra note 3.
  7. Critical Appraisal Skills Programme, 'CASP Systematic Review Checklist' (2018).
  8. Saleh, A. M. S., 'Blockchain for secure and decentralized artificial intelligence in cybersecurity: A comprehensive review', Blockchain: Research and Applications (2024).
  9. Cao, L., 'Decentralized AI: Edge intelligence and smart blockchain, metaverse, Web3, and DeSci', IEEE Intelligent Systems (2022).
  10. Zhang, L. et al., 'Staking and incentive mechanisms in decentralized AI networks', Tokenomics Journal, 4(2) (2023), pp. 88–107.
  11. Govindan, K. et al., 'Governance and liability in blockchain-based AI frameworks', Journal of Responsible AI, 3(1) (2024), pp. 1–22.
  12. Bünz, B. and Agrawal, S., 'Privacy-preserving proofs in AI over blockchain networks', Privacy & Security in AI, 2(3) (2024), 100021.
  13. Kogias, E. and Gervais, A., 'Scalability in blockchain-native AI work: Consensus and layer-2', Journal of Scalable Systems, 9(1) (2025), pp. 1–25.
  14. Govindan et al., supra note 11.
  15. Zhang et al., supra note 10.
  16. Bünz and Agrawal, supra note 12.
  17. Buterin, V., 'The blockchain trilemma', Ethereum Foundation Blog (2017).
  18. Bünz and Agrawal, supra note 12.
  19. Morrison, J., 'Compliance and decision rights in AI DAOs', Technology Law Review, 41(3) (2022), pp. 213–241.
  20. Durmus, A. and Elibol, H., 'Survey of Fetch.ai and similar blockchain-native AI infrastructures', Web3 AI Review, 1(3) (2023), pp. 12–29.
  21. Ocean Protocol Foundation, 'Ocean Protocol: Tools for the Web3 Data Economy' (2023).
  22. SingularityNET Foundation, 'SingularityNET: A Decentralized AI Marketplace' (2024).
  23. OORT Foundation, 'OORT: Decentralized Data Cloud Infrastructure' (2025).
  24. HumanAIx Foundation, supra note 2.
  25. Cachin, C. et al., 'Building blocks of DeAI: Integration frameworks', Journal of Decentralized Intelligence, 5(1) (2024), pp. 1–26.
