Explore AI Layer1: Seeking the Fertile Ground for On-Chain DeAI

Blockbeats
10 Jun
Original Title: "Biteye & PANews Jointly Release AI Layer 1 Research Report: Exploring the Fertile Ground for On-chain DeAI"
Original Authors: @anci_hu49074 (Biteye), @Jesse_meta (Biteye), @lviswang (Biteye), @0xjacobzhao (Biteye), @bz1022911 (PANews)

Overview

Background

In recent years, leading technology companies such as OpenAI, Anthropic, Google, and Meta have been driving the rapid advancement of Large Language Models (LLMs). LLMs have demonstrated unprecedented capabilities across various industries, greatly expanding the scope of human imagination and even showing the potential to replace human labor in some scenarios. However, the core of these technologies remains firmly controlled by a few centralized tech giants. With abundant capital and control over high-cost computing resources, these companies have established barriers that are difficult to overcome, making it challenging for the majority of developers and innovation teams to compete.

Source: BONDAI Trend Analysis Report

Simultaneously, in the early stages of AI's rapid evolution, public discourse often focused on the breakthroughs and conveniences brought by the technology, while relatively less attention was paid to core issues such as privacy protection, transparency, and security. In the long run, these issues will significantly affect the healthy development of the AI industry and its societal acceptance. If not properly addressed, the controversy over whether AI leans towards "good" or "evil" will become more pronounced. Moreover, driven by profit-seeking instincts, centralized giants often lack sufficient motivation to proactively address these challenges.

With its decentralization, transparency, and censorship resistance, blockchain technology offers new possibilities for the sustainable development of the AI industry. Numerous "Web3 AI" applications have already emerged on mainstream blockchains such as Solana and Base. However, a deeper analysis reveals that these projects still face many challenges: on the one hand, their decentralization is limited, since critical components and infrastructure still rely on centralized cloud services and their character is overly meme-driven, which makes it hard to support a truly open ecosystem; on the other hand, compared with AI products in the Web2 world, on-chain AI remains limited in model capabilities, data utilization, and application scenarios, and both the depth and breadth of innovation need to improve.

To truly realize the vision of decentralized AI and enable blockchain to securely, efficiently, and democratically support large-scale AI applications, competing with centralized solutions in performance, we need to design a Layer1 blockchain tailored specifically for AI. This will provide a solid foundation for open innovation in AI, democratic governance, and data security, driving the prosperous development of a decentralized AI ecosystem.

Core Features of AI Layer 1

AI Layer 1, as a blockchain tailored for AI applications, has its underlying architecture and performance designed closely around the needs of AI tasks, aiming to efficiently support the sustainable development and prosperity of the on-chain AI ecosystem. Specifically, AI Layer 1 should possess the following core capabilities:

Efficient Incentives and Decentralized Consensus Mechanism

The core of AI Layer 1 is to build an open network for sharing resources such as computing power and storage. Unlike traditional blockchain nodes that mainly focus on ledger keeping, AI Layer 1 nodes need to undertake more complex tasks. They are required not only to provide computing power and perform AI model training and inference but also to contribute diverse resources such as storage, data, and bandwidth, thereby breaking the monopoly of centralized giants in AI infrastructure. This imposes higher requirements on the underlying consensus and incentive mechanisms: AI Layer 1 must accurately assess, incentivize, and validate nodes' actual contributions to AI inference, training, and other tasks to achieve network security and efficient resource allocation. Only in this way can the stability and prosperity of the network be ensured, and the overall computing power cost be effectively reduced.

Outstanding High Performance and Heterogeneous Task Support Capability

AI tasks, especially LLM training and inference, place extremely high demands on computational performance and parallel processing capabilities. Furthermore, the on-chain AI ecosystem often needs to support diverse and heterogeneous task types, including different model structures, data processing, inference, storage, and other scenarios. AI Layer 1 must be deeply optimized at the architectural level for requirements such as high throughput, low latency, and elastic parallelism, and it must inherently support heterogeneous computing resources to ensure the efficient operation of various AI tasks. This is to achieve a smooth expansion from "single-task type" to a "complex and diverse ecosystem."

Verifiability and Trustworthy Output Assurance

AI Layer 1 not only needs to prevent model misconduct and data tampering but must also ensure the verifiability and alignment of AI outputs from the ground up. By integrating cutting-edge technologies such as Trusted Execution Environments (TEE), Zero-Knowledge Proofs (ZK), and Multi-Party Computation (MPC), the platform allows each model inference, training run, and data processing step to be independently verified, ensuring the fairness and transparency of the AI system. Moreover, this verifiability helps users understand the logic and basis of AI outputs and obtain the "desired outcomes," enhancing their trust in and satisfaction with AI products.

Data Privacy Protection

AI applications often involve user-sensitive data, especially in fields such as finance, healthcare, and social media, where data privacy protection is crucial. While ensuring verifiability, AI Layer 1 should also employ encrypted data processing techniques, privacy-preserving computation protocols, and data permission management to guarantee the security of data throughout inference, training, and storage, effectively preventing data leakage and misuse and alleviating users' concerns about data security.

Strong Ecological Support and Development Capability

As AI-native Layer 1 infrastructure, the platform should not only lead technologically but also provide comprehensive development tools, integrated SDKs, operational support, and incentive mechanisms for developers, node operators, AI service providers, and other ecosystem participants. By continuously optimizing platform usability and the developer experience, it can foster a diverse range of AI-native applications and sustain the prosperity of the decentralized AI ecosystem.

Based on the above background and expectations, this article examines six representative AI Layer 1 projects, including Sentient, Sahara AI, Ritual, Gensyn, Bittensor, and 0G, systematically reviewing the latest progress in the field, analyzing the projects' current status, and discussing future trends.

Sentient: Building a Trustworthy Open-Source Decentralized AI Model

Project Overview

Sentient is an open-source protocol platform that is building an AI Layer 1 blockchain (starting as a Layer 2 and later migrating to Layer 1). By combining an AI Pipeline with blockchain technology, it is constructing a decentralized artificial intelligence economy. Its core goal is to address model ownership, call tracking, and value distribution issues in the centralized LLM market through the OML framework (Open, Monetizable, Loyal), enabling AI models to achieve on-chain ownership structures, transparent calls, and value sharing. Sentient's vision is to allow anyone to build, collaborate on, own, and monetize AI products, thereby driving a fair and open AI Agent network ecosystem.

The Sentient Foundation team brings together top global academic experts, blockchain entrepreneurs, and engineers, committed to building a community-driven, open-source, and verifiable AGI platform. Core members include Princeton University professor Pramod Viswanath and Indian Institute of Science professor Himanshu Tyagi, who are responsible for AI security and privacy protection, while Polygon co-founder Sandeep Nailwal leads blockchain strategy and ecosystem layout. Team members come from top companies such as Meta, Coinbase, and Polygon and leading universities such as Princeton University and the Indian Institute of Technology, covering fields including AI/ML, NLP, and computer vision, and working together to drive the project forward.

As the second entrepreneurial venture of Polygon co-founder Sandeep Nailwal, Sentient launched with considerable visibility, bringing rich resources, connections, and market awareness that provide strong endorsement for the project's development. In mid-2024, Sentient completed an $85 million seed round led by Founders Fund, Pantera, and Framework Ventures, with dozens of other well-known VCs, including Delphi, Hashkey, and Spartan, also participating.

Design Architecture and Application Layer

1. Infrastructure Layer

Core Architecture

Sentient's core architecture consists of an AI Pipeline and a Blockchain System:

The AI Pipeline is the foundation for developing and training "loyal AI" artifacts, comprising two core processes:

· Data Curation: A community-driven data selection process used for model alignment.

· Loyalty Training: Ensuring the model undergoes training consistent with community intent.

The Blockchain System provides protocol transparency and decentralized control, ensuring ownership of AI artifacts, tracking of usage, revenue distribution, and fair governance. The specific architecture is divided into four layers:

· Storage Layer: Stores model weights and fingerprint registration information;

· Distribution Layer: An authorization contract controls the model invocation entry point;

· Access Layer: Validates user authorization through proof of permission;

· Incentive Layer: An income routing contract distributes payment for each invocation to trainers, deployers, and validators.

Sentient System Workflow Diagram
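To make the four layers described above more concrete, here is a minimal Python sketch of a single model invocation passing through storage, distribution, access, and incentive logic. All class names, fields, and revenue shares are illustrative assumptions rather than Sentient's actual contracts.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    """Storage layer: a registered model's weights reference and registration info."""
    model_id: str
    weights_uri: str

class SentientFlowSketch:
    """Toy stand-in for the four on-chain layers described above (illustrative only)."""

    def __init__(self):
        self.storage = {}         # storage layer: model_id -> ModelRecord
        self.permissions = set()  # access layer: (user, model_id) permission credentials
        self.revenue = {}         # incentive layer: payee -> accumulated balance

    def register(self, record: ModelRecord):
        self.storage[record.model_id] = record

    def grant(self, user: str, model_id: str):
        self.permissions.add((user, model_id))

    def invoke(self, user: str, model_id: str, fee: float, shares: dict):
        # Distribution layer: the authorization contract is the single entry point.
        if model_id not in self.storage:
            raise ValueError("unknown model")
        # Access layer: check the caller's permission credential before serving.
        if (user, model_id) not in self.permissions:
            raise PermissionError("no permission credential")
        # Incentive layer: route the per-call fee to trainers, deployers, validators.
        for payee, share in shares.items():
            self.revenue[payee] = self.revenue.get(payee, 0.0) + fee * share
        return f"(output of {model_id})"

flow = SentientFlowSketch()
flow.register(ModelRecord("dobby-70b", "ipfs://example-weights"))
flow.grant("alice", "dobby-70b")
print(flow.invoke("alice", "dobby-70b", fee=1.0,
                  shares={"trainer": 0.5, "deployer": 0.3, "validator": 0.2}))
print(flow.revenue)  # {'trainer': 0.5, 'deployer': 0.3, 'validator': 0.2}
```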

OML Model Framework

The OML Framework (Open, Monetizable, Loyal) is a core concept proposed by Sentient, aiming to provide explicit ownership protection and economic incentives for open-source AI models. By combining on-chain technology and native AI cryptography, it possesses the following characteristics:

· Openness: The model must be open-source, with transparent code and data structure, making it easy for the community to reproduce, audit, and improve.

· Monetization: Each model invocation triggers a revenue stream, with on-chain contracts distributing the revenue to trainers, deployers, and validators.

· Loyalty: The model belongs to the contributor community, with upgrade direction and governance determined by a DAO, and its use and modification controlled by cryptographic mechanisms.

AI-Native Cryptography

AI-native cryptography leverages the continuity of AI models, low-dimensional manifold structure, and model differentiability to develop a "verifiable but non-removable" lightweight security mechanism. Its core technologies include:

· Fingerprint Embedding: Inserting a set of hidden query-response key-value pairs during training to form a unique model signature;

· Ownership Verification Protocol: Using a third-party prover to verify if the fingerprint is retained through query questioning;

· Permissioned Invocation Mechanism: Requiring a "permission credential" issued by the model owner before invocation, upon which the system authorizes the model to decode the input and return an accurate answer.

This approach enables "behavior-based authorization invocation + ownership verification" without costly re-encryption.
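As a rough illustration of the fingerprint idea, the sketch below stands in for a fine-tuned model with a simple lookup function: hidden query-response pairs act as the signature, and a prover checks a random subset of them. The data structures and sample values are assumptions for illustration, not Sentient's implementation.

```python
import random

# A toy stand-in for a model: a lookup table plus a fallback answer.
# In the real scheme the (query, response) pairs would be embedded during fine-tuning.
def make_fingerprinted_model(base_answers, fingerprint_pairs):
    def model(prompt: str) -> str:
        if prompt in fingerprint_pairs:          # hidden key-value pairs act as a signature
            return fingerprint_pairs[prompt]
        return base_answers.get(prompt, "I don't know")
    return model

def verify_ownership(model, fingerprint_pairs, sample_size=3) -> bool:
    """Third-party prover: query a random subset of fingerprints and check the responses."""
    probes = random.sample(list(fingerprint_pairs.items()),
                           k=min(sample_size, len(fingerprint_pairs)))
    return all(model(q) == expected for q, expected in probes)

fingerprints = {"zq-7f31": "owl", "xk-90ab": "violet", "pm-11cd": "seven"}
model = make_fingerprinted_model({"2+2": "4"}, fingerprints)

print(verify_ownership(model, fingerprints))          # True: fingerprints retained
stolen_copy = make_fingerprinted_model({"2+2": "4"}, {})   # copy without the embedded pairs
print(verify_ownership(stolen_copy, fingerprints))    # False: ownership check fails
```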

Model Ownership and Secure Execution Framework

Sentient currently employs the Melange hybrid security approach: combining fingerprinting ownership, TEE execution, and on-chain contract revenue sharing. The fingerprinting method follows the OML 1.0 mainline, emphasizing the "Optimistic Security" concept, where default compliance is assumed, and violations are detectable and punishable.

The fingerprinting mechanism is a key implementation of OML. It generates unique signatures during the training phase by embedding specific "question-answer" pairs. Through these signatures, model owners can verify ownership, preventing unauthorized replication and commercialization. This mechanism not only protects the rights of model developers but also provides a traceable on-chain record of model usage behavior.

Furthermore, Sentient has introduced the Enclave TEE computing framework, utilizing a trusted execution environment (such as AWS Nitro Enclaves) to ensure that the model only responds to authorized requests, preventing unauthorized access and use. While TEE relies on hardware and poses certain security risks, its high performance and real-time advantages make it a core technology for current model deployment.

In the future, Sentient plans to introduce Zero-Knowledge Proof (ZK) and Fully Homomorphic Encryption (FHE) technologies to further enhance privacy protection and verifiability, providing a more mature solution for the decentralized deployment of AI models.

OML proposes an evaluation and comparison of five verifiability methods

2. Application Layer

Currently, Sentient's products mainly include the decentralized chat platform Sentient Chat, the open-source Dobby series of models, and the AI Agent framework.

Dobby Series Models

SentientAGI has released multiple "Dobby" series models, mainly based on the Llama model, focusing on values of freedom, decentralization, and cryptocurrency support. Among them, the leashed version is more restrained and rational in style, suitable for scenarios requiring stable output; the unhinged version leans towards freedom and boldness, with a richer conversational style. The Dobby models have been integrated into multiple Web3-native projects such as Firework AI and Olas, and users can interact directly with these models in Sentient Chat. Dobby 70B is billed as the most decentralized model to date, with over 600,000 owners (holders of the Dobby fingerprint NFT are co-owners of the model).

Sentient also plans to launch Open Deep Search, which is a search agent system that aims to surpass ChatGPT and Perplexity Pro. The system combines Sentient's search capabilities (such as query paraphrasing and document processing) with inference agents, enhancing search quality through open-source LLMs (such as Llama 3.1 and DeepSeek). Its performance on the Frames Benchmark has surpassed other open-source models and even approached some closed-source models, demonstrating its powerful potential.

Sentient Chat: Decentralized Chat with On-Chain AI Agent Integration

Sentient Chat is a decentralized chat platform that combines open-source large language models (such as the Dobby series) with advanced inference agent frameworks, supporting multi-agent integration and complex task execution. The embedded inference agent in the platform can perform searches, calculations, code execution, and other complex tasks, providing users with efficient interactive experiences. Additionally, Sentient Chat also supports direct integration of on-chain intelligent agents, currently including the Astrology Agent Astro247, the Crypto Analysis Agent QuillCheck, the Wallet Analysis Agent Pond Base Wallet Summary, and the Spiritual Guidance Agent ChiefRaiin. Users can interact with different intelligent agents based on their needs. Sentient Chat will serve as a distribution and coordination platform for agents. User inquiries can be routed to any integrated model or agent to provide the best response results.

AI Agent Framework

Sentient provides two main AI Agent frameworks:

· Sentient Agent Framework: A lightweight open-source framework focused on automating web tasks through natural language commands (such as search and video playback). The framework supports building intelligent agents with perception, planning, execution, and feedback loops, suitable for lightweight off-chain web task development.

· Sentient Social Agent: An AI system developed for social platforms like Twitter, Discord, and Telegram, supporting automated interaction and content generation. Through multi-agent collaboration, this framework can understand the social environment, providing users with a more intelligent social experience. It can also integrate with the Sentient Agent Framework to further expand its application scenarios.

Ecosystem and Participation

The Sentient Builder Program currently offers a $1 million grant program to encourage developers to use its development kit to build AI Agents that integrate through the Sentient Agent API and can run in the Sentient Chat ecosystem. The ecosystem partners announced on the Sentient website cover various projects in the Crypto AI field, as follows:

Sentient Ecosystem Map

Furthermore, Sentient Chat is currently in a testing phase that requires an invitation code for whitelist access; other users can join the waitlist. According to official information, the platform already has over 50,000 users and 1,000,000 query records, with another 2,000,000 users waiting on the Sentient Chat waitlist.

Challenges and Outlook

Sentient starts from the model end, aiming to address core issues faced by current large-scale language models (LLMs) such as misalignment and lack of trust. Through the OML framework and blockchain technology, Sentient provides models with a clear ownership structure, usage tracking, and behavioral constraints, significantly advancing the development of decentralized open-source models.

With the support of Polygon co-founder Sandeep Nailwal's resources and endorsements from top VCs and industry partners, Sentient is at the forefront of resource integration and market attention. However, in the current market environment where high valuation projects are gradually losing their appeal, whether Sentient can deliver truly impactful decentralized AI products will be a crucial test of its ability to become the standard for decentralized AI ownership. These efforts are not only crucial to Sentient's own success but also have far-reaching implications for rebuilding trust in the industry and promoting decentralized development.

Sahara AI: Building a Decentralized AI World for All

Project Overview

Sahara AI is a decentralized infrastructure built for the new AI × Web3 paradigm, dedicated to building an open, fair, and collaborative AI economy. The project achieves on-chain management and transactions of datasets, models, and intelligent agents through decentralized ledger technology, ensuring the sovereignty and traceability of data and models. Additionally, Sahara AI introduces a transparent, fair incentive mechanism that allows all contributors, including data providers, annotators, and model developers, to receive income rewards that are immutably recorded throughout the collaboration process. The platform also implements a permissionless "copyright" system to protect contributors' ownership and attribution of AI assets and to encourage open sharing and innovation.

Sahara AI offers a one-stop solution covering the entire AI lifecycle, from data collection and annotation to model training, AI agent creation, AI asset trading, and more, to meet AI development needs comprehensively. Its product quality and technical capabilities have been highly recognized by global top enterprises and institutions such as Microsoft, Amazon, MIT, Motherson Group, and Snap, demonstrating strong industry influence and broad applicability.

Sahara is not just a research project but a deep-tech platform jointly driven by frontline tech entrepreneurs and investors with a focus on implementation. Its core architecture could become a key support point for the landing of AI × Web3 applications. Sahara AI has received a total of $43 million investment support from top institutions such as Pantera Capital, Binance Labs, and Sequoia Capital China; co-founded by USC tenured professor and 2023 Samsung Research Fellow Sean Ren and former Binance Labs investment director Tyler Zhou, its core team members come from top institutions like Stanford University, UC Berkeley, Microsoft, Google, and Binance, integrating deep academic and industrial expertise.

Design Architecture

Sahara AI Architecture Diagram

1. Base Layer

The base layer of Sahara AI consists of two parts: an on-chain layer for AI asset registration and monetization, and an off-chain layer for running Agents and AI services. The two systems work together to handle AI asset registration, provenance, execution, and revenue distribution, supporting trustworthy collaboration throughout the AI lifecycle.

Sahara Blockchain and SIWA Testnet (On-chain Infrastructure)

The SIWA testnet is the first public version of the Sahara blockchain. The Sahara Blockchain Protocol (SBP) is at the core of the Sahara blockchain, which is a set of smart contracts designed specifically for AI, achieving on-chain ownership, traceability records, and revenue distribution of AI assets. Core modules include asset registration system, ownership protocol, contribution tracking, permission management, revenue distribution, execution proof, etc., building an "on-chain operating system" for AI.

AI Execution Protocol (Off-chain Infrastructure)

To support the trustworthiness of model execution and invocation, Sahara has also built an off-chain AI execution protocol system, combined with Trusted Execution Environments (TEE), supporting Agent creation, deployment, operation, and collaborative development. Each task execution generates verifiable records and uploads them on-chain to ensure full traceability and verifiability throughout the process. The on-chain system is responsible for registration, authorization, and ownership records, while the off-chain AI execution protocol supports real-time operation of AI Agents and service interaction. Due to Sahara's cross-chain compatibility, applications built on Sahara AI infrastructure can be deployed on any chain, even off-chain.

2. Application Layer

Sahara AI Data Service Platform (DSP)

The Data Service Platform (DSP) is a fundamental module of the Sahara application layer, where anyone can accept data tasks via Sahara ID, participate in data annotation, denoising, and review, and receive on-chain point rewards (Sahara Points) as a contribution credential. This mechanism not only ensures data provenance and ownership but also drives the "contribution-reward-model optimization" loop. The platform is currently in its fourth season of activities, which is a primary way for ordinary users to contribute.

Building on this foundation, Sahara introduces a dual-incentive mechanism to encourage users to submit high-quality data and services: users receive rewards provided by Sahara as well as additional rewards from ecosystem partners, so a single contribution can yield benefits from multiple parties. Taking data contributors as an example, once their data is repeatedly called by models or used to generate new applications, they continue to receive rewards, genuinely participating in the AI value chain. This mechanism not only extends the lifecycle of data assets but also injects strong momentum into collaboration and co-construction. For example, on the BNB Chain, MyShell leverages DSP crowdsourcing to generate custom datasets and enhance model performance, while contributing users receive MyShell token incentives, forming a win-win cycle.

AI companies can crowdsource custom datasets based on the data service platform by releasing specialized data tasks and quickly receiving responses from data annotators around the world. AI companies no longer rely solely on traditional centralized data suppliers and can massively obtain high-quality annotated data.

Sahara AI Developer Platform

The Sahara AI Developer Platform is an all-in-one AI development and operation platform for developers and businesses, providing end-to-end support from data acquisition, model training to deployment execution, and asset monetization. Users can directly access high-quality data resources in the Sahara DSP for model training and fine-tuning; the processed models can be combined, registered, and listed on the AI market within the platform, achieving ownership confirmation and flexible authorization through the Sahara blockchain.

The Studio also integrates decentralized computing capabilities, supporting model training and Agent deployment and operation, ensuring the security and verifiability of the computation process. Developers can also store key data and models, perform encrypted hosting and access control to prevent unauthorized access. Through the Sahara AI Developer Platform, developers can build, deploy, and commercialize AI applications at a lower threshold without the need to build their own infrastructure, and fully integrate into the on-chain AI economic system through a protocolized mechanism.

AI Marketplace

The Sahara AI Marketplace is a decentralized asset marketplace for models, datasets, and AI Agents. It not only supports asset registration, transactions, and authorization but also constructs a transparent and traceable revenue distribution mechanism. Developers can register their own built models or collected datasets as on-chain assets, set flexible usage authorizations and revenue sharing ratios. The system will automatically settle revenues based on call frequency. Data contributors can also continue to receive revenue sharing as their data is repeatedly called, achieving "continuous monetization".
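A minimal sketch of this "continuous monetization" logic: revenue accrues to an asset's contributors on every call, according to registered share ratios. The asset, per-call price, and ratios below are made-up examples, not Sahara's actual contract interfaces.

```python
class RevenueLedgerSketch:
    """Illustrative per-call revenue sharing for registered AI assets."""

    def __init__(self):
        self.assets = {}    # asset_id -> {"price": per-call price, "shares": {contributor: ratio}}
        self.balances = {}  # contributor -> accumulated earnings
        self.calls = {}     # asset_id -> number of times the asset has been called

    def register_asset(self, asset_id, price_per_call, shares):
        assert abs(sum(shares.values()) - 1.0) < 1e-9, "share ratios must sum to 1"
        self.assets[asset_id] = {"price": price_per_call, "shares": shares}
        self.calls[asset_id] = 0

    def record_call(self, asset_id):
        asset = self.assets[asset_id]
        self.calls[asset_id] += 1
        for contributor, ratio in asset["shares"].items():
            self.balances[contributor] = self.balances.get(contributor, 0.0) + asset["price"] * ratio

ledger = RevenueLedgerSketch()
ledger.register_asset("medical-qa-dataset", price_per_call=0.02,
                      shares={"data_contributor": 0.6, "annotator": 0.3, "curator": 0.1})
for _ in range(1000):            # the dataset keeps earning as it is called repeatedly
    ledger.record_call("medical-qa-dataset")
print(ledger.calls["medical-qa-dataset"], ledger.balances)
```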

This marketplace is deeply integrated with the Sahara Blockchain Protocol, where all asset transactions, calls, and revenue sharing records will be verifiable on-chain, ensuring clear asset ownership and traceable revenue. Through this marketplace, AI developers no longer rely on traditional API platforms or centralized model hosting services, but instead have an autonomous, programmable path to commercialization.

3. Ecosystem Layer

The ecosystem layer of Sahara AI connects data providers, AI developers, consumers, enterprise users, and cross-chain partners. Whether contributing data, developing applications, using products, or driving internal AI transformation within enterprises, all stakeholders can play a role and find revenue models. Data annotators, model development teams, and computing power providers can register their resources as on-chain assets, authorize and share revenue through the Sahara AI protocol mechanism, ensuring that every time a resource is used, it automatically receives rewards. Developers can integrate data, train models, deploy agents on a one-stop platform, and directly commercialize their achievements in the AI Marketplace.

General users without a technical background can also participate in data tasks, use AI apps, collect or invest in on-chain assets, becoming part of the AI economy. For enterprises, Sahara provides end-to-end support from data crowdsourcing, model development to private deployment, and revenue realization. Furthermore, Sahara supports cross-chain deployment, allowing any public chain ecosystem to use the protocols and tools provided by Sahara AI to build AI applications, access decentralized AI assets, and achieve compatibility and expansion in a multi-chain world. This makes Sahara AI not just a single platform but a fundamental collaborative standard for an on-chain AI ecosystem.

Ecosystem Progress

Since its inception, Sahara AI has not only provided a set of AI tools or computing power platforms but has also redefined the production and distribution order of AI on-chain, creating a decentralized collaboration network where everyone can participate, have ownership, contribute, and share. It is for this reason that Sahara has chosen blockchain as the underlying architecture to build a verifiable, traceable, and distributable economic system for AI.

Around this core goal, the Sahara ecosystem has made significant progress. Even in a private testing phase, the platform has accumulated over 3.2 million on-chain accounts, with daily active accounts consistently above 1.4 million, demonstrating user engagement and network vitality. Among them, over 200,000 users have participated in data annotation, training, and validation tasks through the Sahara data service platform and received on-chain incentive rewards. At the same time, millions of users are still waiting to join the whitelist, confirming the strong demand and consensus in the market for a decentralized AI platform.

In terms of enterprise collaboration, Sahara has partnered with global leading institutions such as Microsoft, Amazon, and the Massachusetts Institute of Technology (MIT) to provide them with customized data collection and labeling services. Companies can submit specific tasks through the platform, which are efficiently executed by Sahara's global network of data annotators, enabling large-scale crowdsourcing. The advantages lie in execution efficiency, flexibility, and support for diverse needs.

Sahara AI Ecosystem Map

Participation Method

SIWA will be rolled out in four stages. The current first stage establishes on-chain data ownership, where contributors can register and tokenize their datasets. The first stage is open to the public and does not require whitelisting. Contributors should ensure that uploaded data is useful for AI, as plagiarized or inappropriate content may be penalized. The second stage will realize on-chain monetization of datasets and models. The third stage will open the testnet and publish the protocol. The fourth stage will introduce AI data-stream registration, traceability tracking, and a contribution-proof mechanism.

SIWA Testnet

In addition to the SIWA testnet, ordinary users can participate in Sahara Legends at this stage, where they can learn about Sahara AI's functions through gamified tasks. After completing tasks, they will earn Guardian Fragments, which can eventually be synthesized into an NFT to record their contribution to the network.

Alternatively, users can annotate data on the data service platform, contribute valuable data, and act as reviewers. Sahara plans to collaborate with ecosystem partners to release tasks, allowing participants to receive incentives from both Sahara points and ecosystem partners. The first dual-reward task will be hosted together with MyShell: users completing tasks will receive both Sahara points and MyShell token rewards. According to the roadmap, Sahara is expected to launch its mainnet in Q3 2025, potentially coinciding with a Token Generation Event (TGE).

Challenges and Prospects

Sahara AI is breaking the limitations of AI being restricted to developers or large AI companies, making AI more open, inclusive, and democratized. For ordinary users, participation and earning rewards do not require programming knowledge. Sahara AI is creating a decentralized AI world where everyone can participate. For technical developers, Sahara AI bridges the development paths of Web2 and Web3, providing decentralized yet flexible and powerful development tools and high-quality datasets.

For AI infrastructure providers, Sahara AI offers a new decentralized monetization path for models, data, computing power, and services. Sahara AI not only focuses on public chain infrastructure but also aims to develop core applications, using blockchain technology to advance an AI copyright system. At this stage, Sahara AI has already partnered with multiple top AI institutions and achieved initial success. Its future success will depend on performance after the mainnet launch, the development and adoption of ecosystem products, and whether the economic model can continue to drive users to contribute data after the TGE.

Ritual: Innovative Designs to Break Through the Core Challenges of Heterogeneous AI Tasks

Project Overview

Ritual aims to address the centralization, closedness, and trust issues in the current AI industry, providing AI with a transparent verification mechanism, fair computing resource allocation, and flexible model adaptation capabilities; allowing any protocol, application, or smart contract to integrate verifiable AI models in a few lines of code; and through its open architecture and modular design, driving the widespread application of AI on the chain, creating an open, secure, and sustainable AI ecosystem.

Ritual completed a $25 million Series A financing round in November 2023, led by Archetype, with participation from various institutions and notable angel investors such as Accomplice, demonstrating market recognition and the team's strong social capabilities. Co-founders Niraj Pant and Akilesh Potti are former Polychain Capital partners who have led investments in industry giants such as Offchain Labs and EigenLayer, demonstrating deep insights and judgment. The team has rich experience in cryptography, distributed systems, AI, and other fields, and the advisory lineup includes founders of projects like NEAR and EigenLayer, highlighting their strong background and potential.

Design Architecture

From Infernet to Ritual Chain

The Ritual Chain is the second-generation product that evolves naturally from the Infernet node network, representing Ritual's comprehensive upgrade of its decentralized AI computing network. Infernet, Ritual's first-stage product, officially launched in 2023. It is a decentralized oracle network designed for heterogeneous computing tasks, aiming to address the limitations of centralized APIs and enabling developers to access transparent, open, decentralized AI services more freely and reliably.

Infernet adopts a flexible, lightweight framework that, thanks to its ease of use and efficiency, quickly attracted over 8,000 independent nodes upon launch. These nodes possess diverse hardware capabilities, including GPUs and FPGAs, providing powerful computing power for tasks such as AI inference and zero-knowledge proof generation. However, to keep the system simple, Infernet forgoes some key features, such as consensus coordination among nodes and a robust integrated task-routing mechanism. These limitations make it difficult for Infernet to meet the needs of a wider range of Web2 and Web3 developers, prompting Ritual to introduce the more comprehensive and powerful Ritual Chain.

Ritual Chain is the next-generation Layer 1 blockchain designed for AI applications, aiming to address the limitations of Infernet and provide developers with a more robust and efficient development environment. Through Resonance technology, Ritual Chain offers a concise and reliable pricing and task routing mechanism for the Infernet network, significantly optimizing resource allocation efficiency. Furthermore, Ritual Chain is based on the EVM++ framework, which is a backward-compatible extension of the Ethereum Virtual Machine (EVM), offering enhanced functionality, including precompiled modules, native scheduling, Account Abstraction (AA), and a range of advanced Ethereum Improvement Proposals (EIPs). These features together build a powerful, flexible, and efficient development environment, providing developers with new possibilities.

Ritual Chain Workflow Diagram

Precompiled Sidecars

Compared to traditional precompilation, Ritual Chain's design enhances the system's scalability and flexibility, allowing developers to create custom functional modules in a containerized manner without modifying the underlying protocol. This architecture not only significantly reduces development costs but also provides decentralized applications with more powerful computing capabilities.

Specifically, Ritual Chain decouples complex computations from the execution client through a modular architecture and implements them in the form of independent Sidecars. These precompiled modules can efficiently handle complex computing tasks, including AI inference, zero-knowledge proof generation, and Trusted Execution Environment (TEE) operations, among others.

Native Scheduling

Native Scheduling addresses the need for task scheduling and conditional execution. Traditional blockchains often rely on centralized third-party services (such as a keeper) to trigger task execution, but this model carries centralization risks and high costs. Ritual Chain completely eliminates reliance on centralized services through a built-in scheduler. Developers can directly set the entry points and callback frequencies of smart contracts on-chain. Block producers maintain a mapping table of pending calls and prioritize processing these tasks when generating a new block. Combined with Resonance's dynamic resource allocation mechanism, Ritual Chain can efficiently and reliably handle computationally intensive tasks, providing a stable foundation for decentralized AI applications.
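The built-in scheduler can be pictured as a mapping from block heights to pending callbacks that block producers drain before ordinary transactions. The sketch below is a hypothetical illustration; the names and data structures are not Ritual's actual implementation.

```python
from collections import defaultdict

class NativeSchedulerSketch:
    """Toy model of on-chain scheduling: contracts register recurring callbacks,
    and the block producer drains the ones due at each height before other txs."""

    def __init__(self):
        self.pending = defaultdict(list)  # block height -> list of (contract, entry_point)

    def schedule(self, contract, entry_point, start_height, every_n_blocks, times):
        # A contract registers its entry point and callback frequency on-chain.
        for i in range(times):
            self.pending[start_height + i * every_n_blocks].append((contract, entry_point))

    def produce_block(self, height, mempool_txs):
        # Scheduled callbacks are prioritized ahead of ordinary mempool transactions.
        scheduled = [f"call {c}.{e}()" for c, e in self.pending.pop(height, [])]
        return scheduled + mempool_txs

sched = NativeSchedulerSketch()
sched.schedule("PricePredictor", "run_inference", start_height=100, every_n_blocks=10, times=3)
for h in (100, 105, 110):
    print(h, sched.produce_block(h, mempool_txs=["transfer(...)"]))
```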

Technical Innovation

Ritual's core technological innovations ensure its leading position in performance, security, and scalability, offering robust support for on-chain AI applications.

1. Resonance: Optimized Resource Allocation

Resonance is a bilateral market mechanism that optimizes blockchain resource allocation, addressing the complexity of heterogeneous transactions. As blockchain transactions evolve from simple transfers to diverse forms such as smart contracts and AI inference, existing fee mechanisms (such as EIP-1559) struggle to efficiently match user demands with node resources. Resonance introduces two core roles, Broker and Auctioneer, to achieve the optimal match between user transactions and node capabilities:

The Broker analyzes user transaction fee willingness and node resource cost functions to achieve the best match between transactions and nodes, enhancing the utilization of computing resources. The Auctioneer organizes transaction fee allocation through a bilateral auction mechanism to ensure fairness and transparency. Nodes choose transaction types based on their hardware capabilities, while users can submit transaction requests based on priority conditions (such as speed or cost).

This mechanism significantly improves network resource utilization efficiency and user experience. Moreover, it enhances system transparency and openness through a decentralized auction process.

Under the Resonance mechanism: Auctioneer assigns suitable tasks to nodes based on Broker's analysis
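One way to picture the Broker's role is as a matching rule that pairs each transaction's fee willingness with node cost functions and keeps only matches with positive surplus. The greedy matching and the halfway clearing fee below are illustrative assumptions, not the actual Resonance mechanism.

```python
def broker_match(transactions, nodes):
    """transactions: list of (tx_id, task_type, max_fee the user is willing to pay)
    nodes: list of (node_id, {task_type: cost}); each node takes one task per round.
    Returns greedy surplus-maximizing matches as (tx_id, node_id, clearing_fee)."""
    candidates = []
    for tx_id, task, max_fee in transactions:
        for node_id, costs in nodes:
            if task in costs and max_fee >= costs[task]:
                candidates.append((max_fee - costs[task], tx_id, node_id, costs[task], max_fee))
    candidates.sort(reverse=True)                 # highest surplus first
    matches, used_tx, used_nodes = [], set(), set()
    for surplus, tx_id, node_id, cost, max_fee in candidates:
        if tx_id in used_tx or node_id in used_nodes:
            continue
        # Illustrative clearing rule: split the surplus between user and node.
        matches.append((tx_id, node_id, round((cost + max_fee) / 2, 4)))
        used_tx.add(tx_id)
        used_nodes.add(node_id)
    return matches

txs = [("tx1", "llm_inference", 0.30), ("tx2", "zk_proof", 0.50), ("tx3", "llm_inference", 0.10)]
nodes = [("gpu_node", {"llm_inference": 0.12}),
         ("fpga_node", {"zk_proof": 0.35, "llm_inference": 0.20})]
print(broker_match(txs, nodes))   # tx1 -> gpu_node, tx2 -> fpga_node; tx3 is priced out
```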

2. Symphony: Enhancing Validation Efficiency

Symphony focuses on enhancing validation efficiency, addressing the inefficiency issue of the traditional blockchain's "re-execute" model when handling and validating complex computing tasks. Symphony is based on the "Execute Once, Verify Many Times" (EOVMT) model, which separates computation from validation processes, greatly reducing the performance overhead caused by repeated computation. The computation task is executed once by a designated node, the computation result is broadcasted through the network, and validation nodes use non-interactive proofs (succinct proofs) to confirm the correctness of the result without the need for re-executing the computation.

Symphony supports distributed validation by decomposing complex tasks into multiple sub-tasks processed in parallel by different validation nodes, further enhancing validation efficiency and ensuring privacy protection and security. Symphony is highly compatible with Trusted Execution Environments (TEE) and Zero-Knowledge Proof (ZKP) systems, providing flexible support for rapid transaction confirmation and privacy-sensitive computing tasks. This architecture not only significantly reduces the performance overhead of repeated computation but also ensures the decentralization and security of the validation process.

Symphony decomposes complex tasks into multiple sub-tasks processed in parallel by different validation nodes
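The "Execute Once, Verify Many Times" flow can be sketched as follows. The "succinct proof" is replaced here by a trivial hash commitment so the example stays self-contained; a real deployment would attach a TEE attestation or zero-knowledge proof, as the text above describes.

```python
import hashlib
import json

def execute_task(task):
    """Executor node: run the (expensive) computation exactly once."""
    result = sum(x * x for x in task["inputs"])          # stand-in for AI inference
    # Placeholder "succinct proof": a commitment binding task and result together.
    # A real system would attach a TEE attestation or zero-knowledge proof instead.
    payload = json.dumps({"task": task, "result": result}, sort_keys=True).encode()
    return result, hashlib.sha256(payload).hexdigest()

def verify_result(task, result, proof):
    """Validator node: cheap check of the broadcast result, no re-execution of the model.
    (With this placeholder commitment the check is only binding, not computationally sound.)"""
    payload = json.dumps({"task": task, "result": result}, sort_keys=True).encode()
    return proof == hashlib.sha256(payload).hexdigest()

task = {"id": "job-42", "inputs": [1, 2, 3, 4]}
result, proof = execute_task(task)                                   # executed once
print(all(verify_result(task, result, proof) for _ in range(5)))     # verified many times -> True
```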

3. vTune: Traceable Model Validation

vTune is a tool provided by Ritual for model validation and source tracing. It has almost no impact on model performance while possessing good anti-interference capabilities, making it especially suitable for protecting the intellectual property of open-source models and promoting fair distribution. vTune combines watermarking technology and zero-knowledge proof to achieve model source tracing and computational integrity assurance through embedding covert markers:

· Watermarking Technology: Embedding markers through weight space watermarking, data watermarking, or function space watermarking ensures that even if the model is made public, its ownership can still be verified. Function space watermarking, in particular, can verify ownership through model output without accessing model weights, thus providing stronger privacy protection and robustness.

· Zero-Knowledge Proof: Introducing covert data during the model fine-tuning process to verify if the model has been tampered with while protecting the rights of the model creator.

This tool not only provides trustworthy source verification for decentralized AI model markets but also significantly enhances model security and ecosystem transparency.
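As a toy illustration of one of the embedding options above (weight-space watermarking), the sketch below nudges flattened model weights along a secret pseudorandom direction and later detects ownership by measuring correlation with that direction. The scheme, key handling, and thresholds are simplified assumptions, not vTune's actual method.

```python
import random

def secret_direction(key: str, dim: int):
    rng = random.Random(key)                     # the owner's secret key seeds the direction
    return [rng.choice((-1.0, 1.0)) for _ in range(dim)]

def embed_watermark(weights, key, strength=0.05):
    d = secret_direction(key, len(weights))
    return [w + strength * s for w, s in zip(weights, d)]   # small nudge along the secret direction

def detect_watermark(weights, key, threshold=0.02):
    d = secret_direction(key, len(weights))
    score = sum(w * s for w, s in zip(weights, d)) / len(weights)   # correlation with the direction
    return score > threshold, round(score, 4)

rng = random.Random(0)
clean = [rng.gauss(0, 1) for _ in range(40_000)]             # stand-in for flattened model weights
marked = embed_watermark(clean, key="owner-secret")

print(detect_watermark(marked, key="owner-secret"))          # (True, ~0.05): watermark present
print(detect_watermark(clean, key="owner-secret"))           # (False, ~0.0): no watermark
print(detect_watermark(marked, key="wrong-key"))             # (False, ~0.0): wrong key fails
```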

Ecosystem Development

Ritual is currently in the private testnet phase, with limited opportunities for ordinary users to participate; developers can apply for and participate in the official Altar and Realm incentive programs to join Ritual's AI ecosystem development, receive full-stack technical support from the official team, and funding.

The official team has currently released a batch of native applications from the Altar program:

· Relic: An ML-based automated market maker (AMM) that dynamically adjusts liquidity pool parameters through Ritual's infrastructure to optimize fees and underlying pools;

· Anima: Focuses on on-chain transaction automation tools based on LLM, providing users with a smooth and natural Web3 interaction experience;

· Tithe: AI-driven lending protocol that supports a wider range of asset types through dynamic optimization of lending pools and credit scoring.

In addition, Ritual has also engaged in deep partnerships with several established projects to drive the development of the decentralized AI ecosystem. For example, the collaboration with Arweave provides decentralized permanent storage support for models, data, and zero-knowledge proofs; through integrations with StarkWare and Arbitrum, Ritual introduces native on-chain AI capabilities to these ecosystems; furthermore, the re-staking mechanism provided by EigenLayer adds active validation services to Ritual's proof market, further enhancing the network's decentralization and security.

Challenges and Outlook

Ritual's design addresses key aspects such as allocation, incentives, validation, etc., solving the core challenges faced by decentralized AI, while achieving model verifiability through tools like vTune, breaking the contradiction between model openness and incentives, and providing technical support for the construction of a decentralized model market.

Currently, Ritual is in its early stages, primarily focusing on the model inference phase, with the product matrix expanding from infrastructure to model markets, L2 as a Service (L2aaS), and Agent frameworks. Since blockchain is still in the private testing phase, Ritual's advanced technical design proposals are yet to be widely implemented at scale and require continuous attention. With ongoing technological improvements and ecosystem enrichment, it is hoped that Ritual will become a key part of decentralized AI infrastructure.

Gensyn: Addressing the Core Challenge of Decentralized Model Training

Project Overview

In an era of accelerated artificial intelligence advancement and increasingly scarce computational resources, Gensyn is attempting to reshape the underlying paradigm of AI model training.

In the traditional AI model training process, computational power is almost monopolized by a few cloud computing giants, leading to high training costs, low transparency, and hindering the innovation pace of small teams and independent researchers. Gensyn's vision is to break this "centralized monopoly" structure by advocating for pushing the training task down to countless devices worldwide with basic computing capabilities—whether it's a MacBook, a gaming-grade GPU, or edge devices and idle servers—all can join the network, participate in task execution, and receive rewards.

Founded in 2020, Gensyn focuses on building decentralized AI computing infrastructure. As early as 2022, the team first proposed to redefine the way AI models are trained at both the technical and institutional levels: no longer relying on closed cloud platforms or giant server clusters but sinking the training tasks into the heterogeneous computing nodes worldwide, constructing a trustless intelligent computing network.

In 2023, Gensyn further expanded its vision: building a globally connected, open-source, autonomous AI network with no permission barriers, in which any device with basic computing capability can take part. Its underlying protocol is designed on a blockchain architecture, combining composable incentive mechanisms with verification mechanisms.

Since its inception, Gensyn has raised a total of $50.6 million from 17 investors, including a16z, CoinFund, Canonical, Protocol Labs, and Distributed Global. The Series A round led by a16z in June 2023 attracted wide attention, marking the entry of decentralized AI into the field of view of mainstream Web3 venture capital.

The core team members also have substantial backgrounds: co-founder Ben Fielding studied theoretical computer science at the University of Oxford, with a deep technical research background; another co-founder, Harry Grieve, has been involved in the design of decentralized protocol systems and economic modeling for a long time, providing solid support for Gensyn's architecture design and incentive mechanism.

Design Architecture

The development of current decentralized artificial intelligence systems is facing three major core technical bottlenecks: Execution, Verification, and Communication. These bottlenecks not only limit the unleashing of large-scale model training capabilities but also hinder the fair integration and efficient utilization of global computing resources. Building on systematic research, the Gensyn team has proposed three representative innovative mechanisms—RL Swarm, Verde, and SkipPipe, each of which has constructed a solution path for the above problems, driving decentralized AI infrastructure from concept to implementation.

1. Execution Challenge: How to enable fragmented devices to collaboratively and efficiently train large models?

Currently, the performance improvement of large language models mainly relies on the "scale-up" strategy: larger parameter sizes, broader datasets, and longer training periods. However, this also significantly raises the computational cost—training super-sized models often needs to be split among thousands of GPU nodes, which require high-frequency data communication and gradient synchronization among them. In a decentralized scenario, where nodes are widely distributed, hardware is heterogeneous, and state fluctuation is high, traditional centralized scheduling strategies are ineffective.

To address this challenge, Gensyn has proposed RL Swarm, a peer-to-peer reinforcement learning post-training system. The core idea is to transform the training process into a distributed collaborative game. This mechanism consists of three stages: "Share—Critique—Decide." Firstly, nodes independently perform problem reasoning and publicly share the results. Subsequently, nodes evaluate peer answers from the perspectives of logic and strategic rationality and provide feedback. Finally, nodes adjust their outputs based on the collective opinion, generating more robust answers. This mechanism effectively integrates individual computation and group collaboration, particularly suitable for tasks requiring high precision and verifiability such as mathematical and logical reasoning. Experiments have shown that RL Swarm not only improves efficiency but also significantly reduces the entry barriers, demonstrating good scalability and fault tolerance.

RL Swarm's "Share—Critique—Decide" three-stage reinforcement learning training system
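A highly simplified sketch of the three-stage loop, using a toy arithmetic question in place of real model reasoning; the node behavior, peer-agreement scoring, and decision rule are illustrative assumptions rather than RL Swarm's actual algorithm.

```python
import random
from collections import Counter

QUESTION = "What is 17 * 24?"
CORRECT = 408

def share(node_id, rng):
    """Stage 1 (Share): each node reasons independently and publishes its answer."""
    answer = CORRECT if rng.random() < 0.7 else CORRECT + rng.choice([-10, -1, 1, 10])
    return {"node": node_id, "answer": answer}

def critique(all_answers):
    """Stage 2 (Critique): nodes evaluate peer answers; simplified here to counting peer agreement."""
    counts = Counter(a["answer"] for a in all_answers)
    return {a["node"]: counts[a["answer"]] for a in all_answers}   # peer-support score

def decide(all_answers, scores):
    """Stage 3 (Decide): nodes adjust their output toward the best-supported answer."""
    best_node = max(scores, key=scores.get)
    return next(a["answer"] for a in all_answers if a["node"] == best_node)

rng = random.Random(7)
answers = [share(f"node-{i}", rng) for i in range(9)]
scores = critique(answers)
print("consensus answer:", decide(answers, scores), "(ground truth:", CORRECT, ")")
```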

2. Verification Challenge: How to verify the computation results of untrusted providers?

In a decentralized training network, "anyone can provide computing power" is both an advantage and a risk. The issue lies in how to verify the authenticity and validity of this computation without the need for trust.

Traditional methods such as re-computation or whitelist auditing have clear limitations—the former is high-cost and lacks scalability, while the latter excludes "long-tail" nodes, compromising network openness. To address this, Gensyn has designed Verde, a lightweight arbitration protocol specifically built for neural network training verification scenarios.

The key idea behind Verde is "Minimal Trustworthy Adjudication": when a validator suspects that a supplier's training result is incorrect, the arbitration contract only needs to recalculate the first disputed operation node in the computation graph, without replaying the entire training process. This significantly reduces the validation burden while ensuring the correctness of the result when at least one party is honest. To address the floating-point non-determinism issue between different hardware, Verde has also developed the Reproducible Operators library, which enforces a uniform execution order for common mathematical operations such as matrix multiplication, thus achieving bitwise consistent output across devices. This technology significantly enhances the security and engineering feasibility of distributed training and is a key breakthrough in the current trustless validation system.

The entire mechanism is built on the basis of the trainer recording key intermediate states (i.e., checkpoints), where multiple validators are randomly assigned to replay these training steps to assess the consistency of the output. Once a validator's recomputed result differs from the trainer's, the system does not forcibly rerun the entire model. Instead, through a network arbitration mechanism, it precisely identifies the first operation in the computation graph where the discrepancy occurred between the two parties and only replays that operation for comparison, thus achieving dispute resolution with minimal overhead. In this way, Verde, without trusting the training nodes, both ensures the integrity of the training process and balances efficiency and scalability, tailored as a validation framework for the distributed AI training environment.

Verde's Workflow
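The arbitration core can be sketched as locating the first operation whose recorded checkpoint the validator cannot reproduce and then re-running only that operation. The operation list and checkpoint format below are illustrative assumptions, not Verde's actual protocol messages.

```python
def run_op(op, state):
    """Deterministic stand-in for one operation node in the computation graph."""
    kind, arg = op
    if kind == "add":
        return state + arg
    if kind == "mul":
        return state * arg
    raise ValueError(kind)

def first_disputed_op(ops, initial_state, trainer_checkpoints):
    """Validator replays the trainer's recorded checkpoints and returns the index of the
    first operation whose output diverges (or None if everything matches)."""
    state = initial_state
    for i, op in enumerate(ops):
        state = run_op(op, state)
        if state != trainer_checkpoints[i]:
            return i
    return None

def arbitrate(ops, initial_state, trainer_checkpoints):
    """Arbiter: re-executes only the single disputed operation, not the whole training run."""
    i = first_disputed_op(ops, initial_state, trainer_checkpoints)
    if i is None:
        return "no dispute: trainer's result accepted"
    prev = initial_state if i == 0 else trainer_checkpoints[i - 1]
    correct = run_op(ops[i], prev)
    return f"dispute at op {i}: trainer reported {trainer_checkpoints[i]}, recomputed {correct}"

ops = [("add", 3), ("mul", 2), ("add", 10), ("mul", 5)]
honest = [3, 6, 16, 80]
cheating = [3, 6, 17, 85]          # trainer tampered with the third operation onward
print(arbitrate(ops, 0, honest))
print(arbitrate(ops, 0, cheating))
```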

3. Communication Challenge: How to Reduce the Network Bottleneck Caused by High-Frequency Synchronization Between Nodes?

In traditional distributed training, the model is either fully replicated or split by layer (pipeline parallelism), both of which require high-frequency synchronization between nodes. Especially in pipeline parallelism, a mini-batch must strictly pass through each model layer in sequence, causing the entire training process to be blocked if any node is delayed.

Gensyn addresses this issue with SkipPipe: a high-fault-tolerant pipeline training system that supports skip execution and dynamic path scheduling. SkipPipe introduces the "skip ratio" mechanism, allowing some mini-batch data to skip certain model layers when a specific node is overloaded, while using a scheduling algorithm to dynamically select the current optimal computation path. Experiments show that in a geographically dispersed, hardware-diverse, and bandwidth-constrained network environment, SkipPipe reduces training time by up to 55% and can maintain only a 7% loss even with a 50% node failure rate, demonstrating strong resilience and adaptability.
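A rough sketch of the skip-ratio idea: a micro-batch traverses the pipeline but may bypass a bounded fraction of stages whose host nodes are currently overloaded. The load model and selection rule are illustrative assumptions, not SkipPipe's actual scheduling algorithm.

```python
def plan_path(stage_loads, skip_ratio=0.25, overload_threshold=0.8):
    """Choose which pipeline stages a micro-batch will execute.
    stage_loads: current utilization (0..1) of the node hosting each stage.
    At most floor(skip_ratio * num_stages) overloaded stages may be skipped."""
    num_stages = len(stage_loads)
    max_skips = int(skip_ratio * num_stages)
    # Skip the most overloaded stages first, within the allowed budget.
    overloaded = sorted((load, i) for i, load in enumerate(stage_loads) if load > overload_threshold)
    to_skip = {i for _, i in overloaded[::-1][:max_skips]}
    return [i for i in range(num_stages) if i not in to_skip], sorted(to_skip)

loads = [0.35, 0.92, 0.40, 0.97, 0.55, 0.88, 0.30, 0.25]   # stages 1 and 3 are the busiest
path, skipped = plan_path(loads)
print("execute stages:", path)     # [0, 2, 4, 5, 6, 7]
print("skipped stages:", skipped)  # [1, 3]
```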

Participation Method

The public testnet of Gensyn was launched on March 31, 2025, and is currently in its early stage as outlined in its technical roadmap (Phase 0). The focus is on the deployment and validation of the RL Swarm, which is the first use case of Gensyn designed around collaborative training of reinforcement learning models. Each participating node binds its behavior to an on-chain identity, and the contribution process is fully recorded, providing a validation basis for subsequent incentive distribution and a trusted computing model.

Gensyn Node Ranking

The hardware requirements during the early testing phase are relatively modest: Mac users can run a node on M-series chips, while Windows users are recommended to have a high-performance GPU such as a 3090 or 4090, along with 16GB or more of memory, to deploy a local Swarm node. After the system is running, users complete verification by logging in through a web page with an email account (Gmail is recommended) and can choose whether to bind a HuggingFace Access Token to activate more comprehensive model capabilities.

Challenges and Prospects

The biggest uncertainty surrounding the Gensyn project currently lies in the fact that its testnet has not yet fully covered the promised full technical stack. Key modules such as Verde and SkipPipe are still pending integration, leading observers to adopt a wait-and-see attitude toward its architectural implementation capabilities. The official explanation is that the testnet will advance in stages, with each stage unlocking new protocol capabilities, prioritizing the validation of infrastructure stability and scalability. The initial stage starts with RL Swarm and will gradually expand to core scenarios such as pre-training, inference, and eventually transition to the mainnet deployment supporting real economic transactions.

Despite the relatively conservative pace with which the testnet was launched, it is noteworthy that just one month later, Gensyn introduced new Swarm test tasks supporting larger-scale models and more complex mathematical tasks. This move to some extent responded to doubts about its development pace from the outside and demonstrated the team's efficiency in advancing local modules.

However, challenges have emerged as well: the new version of the task sets a very high hardware threshold, recommending configurations including top-tier GPUs like A100 and H100 (with 80GB VRAM), which are almost unattainable for small to medium-sized nodes. This creates some tension with Gensyn's emphasis on "open access" and decentralized training. If the trend of computing power centralization is not effectively guided, it may affect the network's fairness and the sustainability of decentralized governance.

Next, if Verde and SkipPipe can integrate successfully, it will help enhance the protocol's integrity and collaborative efficiency. However, whether Gensyn can truly find a balance between performance and decentralization remains to be seen through longer and more widespread testing on the testnet. Currently, it has begun to show potential while also exposing challenges, embodying the most authentic state of an early-stage infrastructure project.

Bittensor: Innovation and Development of a Decentralized AI Network

Project Overview

Bittensor is a groundbreaking project that combines blockchain and artificial intelligence, founded by Jacob Steeves and Ala Shaabana in 2019, aiming to build a "market economy for machine intelligence." Both founders have deep backgrounds in artificial intelligence and distributed systems. Yuma Rao, a credited author of the project's whitepaper, is considered the team's core technical advisor, bringing a professional perspective in cryptography and consensus algorithms to the project.

The project aims to integrate global computing resources through a blockchain protocol to build a continuously self-optimizing distributed neural network ecosystem. This vision transforms digital assets such as computation, data, storage, and models into an intelligent value flow, creating a new economic model to ensure the fair distribution of AI development dividends. Setting itself apart from centralized platforms like OpenAI, Bittensor has established three core value pillars:

· Breaking Data Silos: Utilizing the TAO token incentive system to promote knowledge sharing and model contributions

· Market-Driven Quality Assessment: Introducing game theory mechanisms to filter high-quality AI models, achieving survival of the fittest

· Network Effect Amplifier: Network value grows super-linearly as participants increase, forming a virtuous cycle

In terms of investment landscape, Polychain Capital has been incubating Bittensor since 2019, currently holding TAO tokens worth around $200 million; Dao5 holds approximately $50 million worth of TAO, being an early supporter of the Bittensor ecosystem. In 2024, Pantera Capital and Collab Currency further increased their stakes through strategic investments. In August of the same year, Grayscale Group included TAO in its decentralized AI fund, signaling institutional investors' high recognition and long-term optimism for the project's value.

Design Architecture and Operation Mechanism

Network Architecture

Bittensor has built a sophisticated network architecture consisting of four collaborative layers:

· Blockchain Layer: Built on the Substrate framework, serving as the trust foundation of the network, responsible for recording state changes and token issuance. The system generates a new block every 12 seconds and issues TAO tokens according to rules to ensure network consensus and incentive distribution.

· Neuron Layer: Serving as the computational nodes of the network, neurons run various AI models to provide intelligent services. Each node explicitly declares its service type and interface specification through a carefully designed configuration file, achieving functional modularization and plug-and-play capabilities.

· Synapse Layer: The network's communication bridge, which dynamically optimizes inter-node connection weights to form a neural-network-like structure and ensure efficient information transfer. Synapses also embed an economic model: interactions and service calls between neurons are paid in TAO tokens, forming a closed loop of value circulation.

· Metagraph Layer: The system's global knowledge graph, which continuously monitors and evaluates each node's contribution value and provides intelligent guidance to the entire network. The metagraph computes synapse weights, influencing resource allocation, reward mechanisms, and node influence across the network.
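
To make the relationship between these layers more concrete, here is a minimal, illustrative Python sketch (not the actual Bittensor SDK) of how neurons, synapse weights, and the metagraph fit together; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Neuron:
    """A compute node serving an AI model (hypothetical structure)."""
    hotkey: str            # node identity registered on-chain
    service_type: str      # e.g. "text-generation", "image-recognition"
    stake: float           # TAO backing this neuron

@dataclass
class Metagraph:
    """Global view of nodes and their synapse (connection) weights."""
    neurons: list[Neuron] = field(default_factory=list)
    # weights[(i, j)]: how strongly neuron i rates / routes to neuron j
    weights: dict[tuple[int, int], float] = field(default_factory=dict)

    def set_weight(self, src: int, dst: int, w: float) -> None:
        self.weights[(src, dst)] = w

    def influence(self, idx: int) -> float:
        """Rough contribution score: stake-weighted incoming synapse weights."""
        return sum(
            w * self.neurons[src].stake
            for (src, dst), w in self.weights.items()
            if dst == idx
        )

# Usage: a validator rating a miner
mg = Metagraph(neurons=[Neuron("miner-a", "text-generation", 100.0),
                        Neuron("validator-b", "validation", 500.0)])
mg.set_weight(src=1, dst=0, w=0.8)   # validator-b rates miner-a
print(mg.influence(0))               # -> 400.0
```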

Bittensor's Network Framework

Yuma Consensus Mechanism

The network adopts the unique Yuma consensus algorithm, completing a round of reward distribution every 72 minutes. The validation process combines subjective assessment and objective metrics:

· Manual Scoring: Validators subjectively assess the quality of miner outputs

· Fisher Information Matrix: Objectively quantifies nodes' overall contribution to the network

This "subjective + objective" hybrid mechanism effectively balances professional judgment and algorithmic fairness.
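
As a rough illustration only (the real Yuma consensus involves stake-weighted weight matrices and clipping of outlier scores, and the Fisher Information Matrix component is omitted here), the hybrid scoring idea can be modeled as stake-weighted aggregation of validator scores followed by proportional emission each 72-minute epoch. All numbers and function names below are hypothetical.

```python
import numpy as np

def epoch_rewards(scores: np.ndarray, validator_stake: np.ndarray,
                  emission_per_epoch: float) -> np.ndarray:
    """
    scores: (n_validators, n_miners) quality scores in [0, 1], blending
            subjective assessment with objective metrics.
    validator_stake: (n_validators,) TAO stake behind each validator.
    Returns TAO rewards per miner for one ~72-minute epoch.
    """
    stake_w = validator_stake / validator_stake.sum()
    consensus = stake_w @ scores              # stake-weighted mean score per miner
    shares = consensus / consensus.sum()      # normalize to emission shares
    return shares * emission_per_epoch

# Usage: 2 validators (stakes 100 and 300 TAO) scoring 3 miners
scores = np.array([[0.9, 0.5, 0.2],
                   [0.8, 0.6, 0.1]])
print(epoch_rewards(scores, np.array([100.0, 300.0]), emission_per_epoch=360.0))
```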

Subnet Architecture and dTAO Upgrade

Each subnet focuses on a specific AI service area, such as text generation or image recognition, running independently while connected to the main chain (Subtensor), forming a highly flexible, modular expansion architecture. In February 2025, Bittensor completed the milestone dTAO (Dynamic TAO) upgrade, turning each subnet into an independent economic unit whose resource allocation is regulated by market demand signals. The core innovation is the Subnet Token (Alpha Token) mechanism:

· How It Works: Participants stake TAO to receive Alpha Tokens issued by each subnet, where these tokens represent market recognition and support for specific subnet services

· Allocation Logic: The market price of each subnet's Alpha Token serves as a key indicator of demand intensity. Alpha Tokens start at a uniform price, with every liquidity pool initialized with 1 TAO and 1 Alpha Token. As trading activity and liquidity injections increase, Alpha Token prices adjust dynamically, and TAO emission is allocated in proportion to subnet token prices, skewing resources toward subnets with higher market demand and achieving genuinely demand-driven resource optimization
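
A minimal sketch of this pricing-and-allocation logic, under the stated assumption that each subnet pool starts with 1 TAO and 1 Alpha Token and follows a constant-product curve; this illustrates the mechanism described above rather than Bittensor's exact implementation.

```python
class SubnetPool:
    """Toy TAO/Alpha liquidity pool (constant product x * y = k)."""
    def __init__(self, tao: float = 1.0, alpha: float = 1.0):
        self.tao, self.alpha = tao, alpha

    def price(self) -> float:
        """Alpha price in TAO = TAO reserve / Alpha reserve."""
        return self.tao / self.alpha

    def stake_tao(self, tao_in: float) -> float:
        """Swap TAO into the pool and receive Alpha (fees ignored for simplicity)."""
        k = self.tao * self.alpha
        self.tao += tao_in
        alpha_out = self.alpha - k / self.tao
        self.alpha -= alpha_out
        return alpha_out

def allocate_emission(pools: dict[str, SubnetPool], total_tao: float) -> dict[str, float]:
    """Distribute TAO emission in proportion to each subnet's Alpha price."""
    prices = {name: p.price() for name, p in pools.items()}
    total_price = sum(prices.values())
    return {name: total_tao * pr / total_price for name, pr in prices.items()}

# Usage: demand flows into subnet "text", raising its Alpha price and emission share
pools = {"text": SubnetPool(), "image": SubnetPool()}
pools["text"].stake_tao(3.0)                                  # buyers stake 3 TAO into "text"
print({n: round(p.price(), 2) for n, p in pools.items()})     # {'text': 16.0, 'image': 1.0}
print(allocate_emission(pools, total_tao=100.0))              # emission skews toward "text"
```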

Bittensor Subnet Token Emission Allocation

The dTAO upgrade significantly enhances ecosystem vitality and resource utilization efficiency, with the Subnet Token market capitalization reaching $5 billion, demonstrating strong growth momentum.

Bittensor Subnet Alpha Token Value

Ecosystem Progress and Use Cases

Mainnet Development Journey

The Bittensor network has gone through three key development stages:

· January 2021: Mainnet officially launched, laying down the foundational infrastructure

· October 2023: The 'Revolution' upgrade introduced subnet architecture, achieving functional modularity

· February 2025: Completed the dTAO upgrade, establishing a market-driven resource allocation mechanism

Subnet Ecosystem Sees Explosive Growth: As of June 2025, there are already 119 specialized subnets, with the number expected to exceed 200 within the year.

Bittensor Subnet Count

The ecosystem features a diverse range of projects covering AI agents (such as Tatsu), prediction markets (such as Bettensor), DeFi protocols (such as TaoFi), among other cutting-edge fields, forming an innovative ecosystem deeply integrating AI and finance.

Representative Subnet Ecosystem Projects

· TAOCAT: A native AI agent in the Bittensor ecosystem, built directly on a subnet and offering users a data-driven decision-making tool. Leveraging Subnet 19's large language models, Subnet 42's real-time data, and Subnet 59's Agent Arena, it provides market insights and decision support. It has received investment from DWF Labs as part of their $20 million AI agent fund and has been listed on Binance Alpha.

· OpenKaito: A subnet launched on Bittensor by the Kaito team, aiming to build a decentralized search engine for the crypto industry. It has already indexed 500 million web resources, showcasing the power of decentralized AI in handling massive amounts of data. Its core advantage over traditional search engines lies in reducing interference from commercial interests, providing a more transparent and neutral data-processing service and offering a new paradigm for information retrieval in the Web3 era.

· Tensorplex Dojo: Subnet 52 developed by Tensorplex Labs, focusing on sourcing high-quality human-generated datasets through a decentralized platform, incentivizing users to earn TAO tokens through data labeling. In March 2025, YZi Labs (formerly Binance Labs) announced an investment in Tensorplex Labs, supporting the development of Dojo and Backprop Finance.

· CreatorBid: Operating on Subnet 6, it is a creative platform that combines AI and blockchain, integrated with Olas and other GPU networks (such as io.net), supporting content creators and AI model development.

Technology and Industry Collaboration

Bittensor has made significant progress in cross-disciplinary collaborations:

· Established a deep model-integration channel with Hugging Face, enabling seamless on-chain deployment of 50 leading AI models

· Collaborated with high-performance AI chip manufacturer Cerebras in 2024 to jointly release the BTLM-3B model, with a total download count surpassing 160,000

· Reached a strategic partnership with DeFi giant Aave in March 2025 to explore the application of rsTAO as premium lending collateral

Participation Methods

Bittensor has designed diversified ecosystem participation pathways, forming a complete value creation and distribution system:

· Mining: Deploy mining nodes to produce high-quality digital goods (such as AI model services), and receive TAO rewards based on contribution quality

· Validation: Run validator nodes to assess miners' work outcomes, maintain network quality standards, and earn corresponding TAO incentives

· Staking: Hold and stake TAO to support high-quality validator nodes, and earn passive income based on validator performance

· Development: Utilize the Bittensor SDK and CLI tools to build innovative applications, practical tools, or new subnets, actively participating in ecosystem development

· Service Usage: Access network-provided AI services through a user-friendly client application interface, such as text generation or image recognition

· Trading: Engage in market trading of subnet asset tokenization, capturing potential value growth opportunities

Distribution of subnet alpha tokens to participants

Challenges and Outlook

Despite demonstrating outstanding potential, Bittensor, as an exploration of cutting-edge technology, still faces multidimensional challenges. On the technological front, security threats faced by a distributed AI network (such as model theft and adversarial attacks) are more complex than in centralized systems, requiring continuous optimization of privacy computing and security protection solutions; on the economic model front, early-stage inflationary pressure exists, and the subnet token market exhibits high volatility, necessitating vigilance against potential speculative bubbles; in the regulatory environment, although the SEC has classified TAO as a utility token, regulatory framework differences worldwide may still restrict ecosystem expansion; additionally, confronted with resource-rich centralized AI platforms' fierce competition, decentralized solutions need to prove their long-term competitive advantages in terms of user experience and cost-effectiveness.

As the 2025 halving approaches, Bittensor's development will focus on four strategic directions: further deepening the specialization of subnets, enhancing the service quality and performance of vertical domain applications; accelerating deep integration with the DeFi ecosystem, expanding the smart contract application boundary through newly introduced EVM compatibility extensions; smoothly transitioning governance weight from TAO to Alpha token within the next 100 days through the dTAO mechanism to drive the decentralization of governance processes; actively expanding interoperability with other mainstream public chains to broaden the ecosystem boundary and application scenarios. These synergistic strategic initiatives will collectively propel Bittensor steadily towards the grand vision of a "Machine Intelligence Market Economy."

0G: Storage-based Modular AI Ecosystem

Project Overview

0G is a Layer 1 public chain designed specifically for AI applications, aiming to provide efficient and reliable decentralized infrastructure for data-intensive and high-compute scenarios. Through a modular architecture, 0G has independently optimized core functions such as consensus, storage, computation, and data availability, supporting dynamic scalability to efficiently handle large-scale AI inference and training tasks.

The founding team consists of Michael Heinrich (CEO, who previously founded Garten, which raised over $100 million), Ming Wu (CTO, former Microsoft researcher and co-founder of Conflux), Fan Long (co-founder of Conflux), and Thomas Yao (CBO, Web3 investor), and includes eight computer-science Ph.D. holders. The team's backgrounds span Microsoft, Apple, and other leading companies, reflecting deep expertise in blockchain and AI.

In terms of funding, 0G Labs completed a $35 million Pre-seed round and a $40 million Seed round, totaling $75 million, with investors including Hack VC, Delphi Ventures, and Animoca Brands, among others. Additionally, the 0G Foundation secured a $250 million token purchase commitment, $30.6 million in public node sales, and an $88.88 million ecosystem fund.

Design Architecture

1. 0G Chain

The goal of the 0G Chain is to create the fastest modular AI public chain. Its modular architecture supports independent optimization of key components such as consensus, execution, and storage, and integrates a data availability network, distributed storage network, and AI computation network. This design provides outstanding performance and flexibility for the system to handle complex AI application scenarios. Here are the three core features of the 0G Chain:

Modular Scalability

0G employs a horizontally scalable architecture that can efficiently handle large-scale data workflows. Its modular design separates the Data Availability (DA) layer from the data storage layer, providing higher performance and efficiency for data access and storage in AI tasks such as large-scale training or inference.

0G Consensus

0G's consensus mechanism consists of multiple independent consensus networks that can dynamically scale as needed. As data volume grows exponentially, system throughput can also increase in sync, supporting scaling from one to hundreds or even thousands of networks. This distributed architecture not only enhances performance but also ensures system flexibility and reliability.

Shared Staking

Validators are required to stake funds on the Ethereum mainnet to provide security for all participating 0G consensus networks. In the event of a slashable incident on any 0G network, validators' stake on the Ethereum mainnet will be reduced. This mechanism extends the security of the Ethereum mainnet to all 0G consensus networks, ensuring the overall system's security and robustness.
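
A minimal sketch of the shared-staking idea: a single stake ledger (conceptually on the Ethereum mainnet) backs many 0G consensus networks, and a slashable fault on any one of them reduces that shared stake. Names and numbers are illustrative, not 0G's actual contracts.

```python
class SharedStakeLedger:
    """One Ethereum-side stake balance securing many 0G consensus networks."""
    def __init__(self):
        self.stake: dict[str, float] = {}          # validator -> staked amount on Ethereum
        self.networks: dict[str, set[str]] = {}    # network id -> participating validators

    def deposit(self, validator: str, amount: float) -> None:
        self.stake[validator] = self.stake.get(validator, 0.0) + amount

    def join_network(self, validator: str, network: str) -> None:
        self.networks.setdefault(network, set()).add(validator)

    def slash(self, network: str, validator: str, fraction: float) -> float:
        """A fault on *any* participating network burns part of the shared stake."""
        if validator not in self.networks.get(network, set()):
            raise ValueError("validator not active on this network")
        penalty = self.stake[validator] * fraction
        self.stake[validator] -= penalty
        return penalty

# Usage: one deposit secures two 0G consensus networks
ledger = SharedStakeLedger()
ledger.deposit("val-1", 32.0)
ledger.join_network("val-1", "0g-consensus-A")
ledger.join_network("val-1", "0g-consensus-B")
ledger.slash("0g-consensus-B", "val-1", fraction=0.1)
print(ledger.stake["val-1"])   # 28.8 — the penalty hits the single shared balance
```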

0G Chain is EVM-compatible, ensuring that Ethereum, Layer 2 Rollup, or other chain developers can easily integrate 0G services (such as data availability and storage) without migration. Additionally, 0G is exploring support for Solana VM, Near VM, and Bitcoin compatibility to expand AI applications to a broader user base.

2. 0G Storage

0G Storage is a highly optimized distributed storage system designed for decentralized applications and data-intensive scenarios. At its core is a unique consensus mechanism called Proof of Random Access (PoRA), which incentivizes miners to store and manage data, achieving a balance of security, performance, and fairness.

Its architecture can be divided into three layers:

· Log Layer: Enables the permanent storage of unstructured data suitable for archiving or data logging purposes.

· Key-Value Layer: Manages mutable structured data and supports access control, suitable for dynamic application scenarios.

· Transaction Layer: Supports multi-user concurrent writes, enhancing collaboration and data processing efficiency.

Proof of Random Access (PoRA) is a key mechanism of 0G Storage, used to verify if miners have correctly stored the specified data block. Miners periodically receive challenges and must provide valid cryptographic hashes as proof, similar to proof of work. To ensure fair competition, 0G limits the data range for each mining operation to 8 TB, preventing resource monopolization by large-scale operators and enabling small-scale miners to participate in competition in a fair environment.
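
The challenge-response flow can be sketched as follows: the network issues a random offset within a miner's committed range (capped at 8 TB), and the miner returns a hash over the corresponding chunk plus a nonce, which the verifier recomputes against its own record of the data. This is a simplified illustration of the idea, not 0G's actual PoRA implementation.

```python
import hashlib
import os
import random

CHUNK_SIZE = 256 * 1024                      # 256 KB chunks (illustrative)
MAX_RANGE_BYTES = 8 * 1024**4                # 8 TB cap per mining operation

def issue_challenge(committed_bytes: int, seed: int) -> tuple[int, bytes]:
    """Pick a random chunk offset within the (capped) committed range."""
    rng = random.Random(seed)                 # stand-in for an on-chain randomness source
    limit = min(committed_bytes, MAX_RANGE_BYTES)
    offset = rng.randrange(0, limit // CHUNK_SIZE) * CHUNK_SIZE
    nonce = os.urandom(16)
    return offset, nonce

def prove(storage: bytes, offset: int, nonce: bytes) -> str:
    """Miner: hash the challenged chunk together with the nonce."""
    chunk = storage[offset:offset + CHUNK_SIZE]
    return hashlib.sha256(chunk + nonce).hexdigest()

def verify(reference: bytes, offset: int, nonce: bytes, proof: str) -> bool:
    """Verifier: recompute the hash from its own copy / commitment of the data."""
    return prove(reference, offset, nonce) == proof

# Usage with a tiny in-memory "sector" standing in for real storage
data = os.urandom(4 * CHUNK_SIZE)
offset, nonce = issue_challenge(len(data), seed=42)
assert verify(data, offset, nonce, prove(data, offset, nonce))
print("challenge passed for offset", offset)
```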

Proof of Random Access Illustration

Through erasure coding technology, 0G Storage divides data into multiple redundant fragments and distributes them to different storage nodes. This design ensures that even if some nodes go offline or fail, data can still be fully recovered, significantly improving data availability and security, and enabling the system to perform well when handling large-scale data. In addition, data storage is managed at the sector and block level, optimizing data access efficiency and enhancing miners' competitiveness in the storage network.
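
As a deliberately simplified stand-in for the erasure coding described here (production systems typically use Reed-Solomon codes rather than a single XOR parity fragment), the sketch below shows how one redundant fragment lets the data survive the loss of any single storage node.

```python
from functools import reduce

def encode(data: bytes, k: int) -> list[bytes]:
    """Split data into k equal fragments plus one XOR parity fragment."""
    frag_len = -(-len(data) // k)                       # ceiling division
    padded = data.ljust(frag_len * k, b"\0")
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), frags)
    return frags + [parity]

def recover(frags: list) -> list:
    """Rebuild a single missing fragment (marked None) from the survivors."""
    missing = frags.index(None)
    survivors = [f for f in frags if f is not None]
    rebuilt = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)
    out = list(frags)
    out[missing] = rebuilt
    return out

# Usage: one of four storage nodes drops offline, yet the data is still recoverable
pieces = encode(b"0G stores AI training data redundantly", k=3)
pieces[1] = None                                        # node holding fragment 1 fails
restored = recover(pieces)
print(b"".join(restored[:3]).rstrip(b"\0"))
```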

The submitted data is organized sequentially, and this order is referred to as the data flow, which can be understood as a list of log entries or a sequence of fixed-size data sectors. In 0G, each piece of data can be quickly located by a common offset, enabling efficient data retrieval and challenge queries. By default, 0G provides a general data flow called the main flow to handle the majority of application scenarios. Additionally, the system supports specialized flows that specifically accept certain categories of log entries, providing independent contiguous address spaces optimized for different application needs.

Through this design, 0G Storage can flexibly adapt to diverse usage scenarios while maintaining high performance and management capabilities, providing robust storage support for AI x Web3 applications that need to process large-scale data streams.

3. 0G Data Availability (0G DA)

Data Availability (DA) is one of the core components of 0G, aimed at providing accessible, verifiable, and retrievable data. This feature is a key part of decentralized AI infrastructure, such as validating the results of training or inference tasks to meet user needs and ensure the reliability of the system's incentive mechanisms. 0G DA achieves outstanding scalability and security through thoughtfully designed architecture and validation mechanisms.

The design goal of 0G DA is to provide extremely high scalability while ensuring security. Its workflow is mainly divided into two parts:

· Data Storage Lane: Data is divided into multiple small segments ("data blocks") using erasure coding and distributed to storage nodes in the 0G Storage network. This mechanism effectively supports large-scale data transmission while ensuring data redundancy and recoverability.

· Data Publishing Lane: Data availability is verified by DA nodes through aggregate signatures and the results are submitted to the consensus network. Through this design, data publishing only needs to deal with a small amount of critical data flow, avoiding bottlenecks in traditional broadcast methods and significantly improving efficiency.

To ensure the security and efficiency of the data, 0G DA uses a randomness-based validation method combined with an aggregate signature mechanism to form a complete validation process:

· Random Selection of Quorum: Through a Verifiable Random Function (VRF), the consensus system randomly selects a group of DA nodes from the validator set to form a quorum. This random selection method theoretically ensures that the honesty distribution of the quorum is consistent with the entire validator set, so the data availability client cannot collude with the quorum.

· Aggregate Signature Verification: The quorum samples and verifies the stored data blocks and generates an aggregate signature, submitting the availability proof to 0G's consensus network. This aggregate signature method significantly improves verification efficiency, outperforming traditional Ethereum by several orders of magnitude in performance.
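
The flavor of this two-step check can be sketched as below: a seeded random draw (standing in for the VRF) selects a quorum from the validator set, each quorum member samples and verifies assigned data blocks, and a simple attestation count stands in for the aggregate signature submitted to consensus. All names are hypothetical and the cryptography is elided.

```python
import hashlib
import random

def select_quorum(validators: list[str], vrf_seed: int, size: int) -> list[str]:
    """VRF-style sampling: deterministic given the seed, unpredictable beforehand."""
    rng = random.Random(vrf_seed)
    return rng.sample(validators, size)

def verify_block(block: bytes, commitment: str) -> bool:
    """Each DA node checks a sampled block against its published commitment."""
    return hashlib.sha256(block).hexdigest() == commitment

def availability_proof(quorum: list[str], blocks: list[bytes],
                       commitments: list[str], threshold: float = 2 / 3) -> bool:
    """Stand-in for aggregate-signature verification: enough members must attest."""
    attestations = 0
    for i, _node in enumerate(quorum):
        sampled = blocks[i % len(blocks)]               # each node samples some block
        if verify_block(sampled, commitments[i % len(commitments)]):
            attestations += 1
    return attestations >= threshold * len(quorum)

# Usage
validators = [f"node-{i}" for i in range(100)]
blocks = [b"chunk-0", b"chunk-1", b"chunk-2"]
commitments = [hashlib.sha256(b).hexdigest() for b in blocks]
quorum = select_quorum(validators, vrf_seed=2025, size=16)
print(availability_proof(quorum, blocks, commitments))  # True if the data is really there
```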

0G Validation Process

Through the above mechanism, 0G DA provides an efficient, highly scalable, and secure data availability solution, offering a solid foundation for decentralized AI applications.

4. 0G Compute

The 0G Compute Network is a decentralized framework designed to provide robust AI computing power to the community. Through smart contracts, compute providers can register the types of AI services they offer (such as model inference) and set prices for their services. After users submit AI inference requests, service providers decide whether to respond based on the sufficiency of the user's balance, enabling efficient compute allocation.

To further optimize transaction costs and network efficiency, service providers can batch process multiple user requests. This approach significantly reduces on-chain settlements, mitigating the resource consumption associated with frequent transactions. Additionally, the 0G Compute Network employs Zero-Knowledge Proofs (ZK-Proofs) technology, leveraging off-chain computation and on-chain validation to greatly compress transaction data and reduce on-chain settlement costs. Combined with 0G's storage module, its scalable off-chain data management mechanism significantly reduces on-chain costs related to tracking data keys for storage requests, while enhancing storage and retrieval efficiency.
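
A minimal sketch of how a provider might batch many small inference requests into a single settlement, assuming the prepaid-balance model described above; the names and the settlement step are illustrative, not 0G's actual contract interface.

```python
from dataclasses import dataclass, field

@dataclass
class InferenceRequest:
    user: str
    prompt: str
    price: float                                   # price quoted for this request

@dataclass
class ComputeProvider:
    service: str                                   # e.g. "llm-inference"
    balances: dict                                 # user -> prepaid balance (mirrors on-chain state)
    pending: list = field(default_factory=list)

    def accept(self, req: InferenceRequest) -> bool:
        """Serve only if the user's balance covers the quoted price."""
        if self.balances.get(req.user, 0.0) >= req.price:
            self.pending.append(req)
            return True
        return False

    def settle_batch(self) -> dict:
        """One settlement for many requests instead of one transaction each."""
        charges: dict = {}
        for req in self.pending:
            charges[req.user] = charges.get(req.user, 0.0) + req.price
            self.balances[req.user] -= req.price
        self.pending.clear()
        return charges        # in practice, posted on-chain alongside a validity (ZK) proof

# Usage: three requests from two users settle in a single batch
provider = ComputeProvider("llm-inference", balances={"alice": 5.0, "bob": 1.0})
for prompt, user in [("q1", "alice"), ("q2", "alice"), ("q3", "bob")]:
    provider.accept(InferenceRequest(user=user, prompt=prompt, price=0.5))
print(provider.settle_batch())    # {'alice': 1.0, 'bob': 0.5}
```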

Currently, 0G's decentralized AI network primarily offers AI inference services and has demonstrated advantages in efficiency and cost optimization. In the future, 0G plans to further expand its capabilities to achieve comprehensive decentralization in tasks beyond inference, such as training, providing users with a more complete solution.

Ecosystem Development

0G's testnet has upgraded from Newton v2 to Galileo v3 and boasts over 8000 validators according to official data. The storage network has 1591 active miners who have currently processed over 430,000 uploaded files, offering a total of 450.72 GB of storage space.

0G's influence in the decentralized AI field continues to expand as its partnerships deepen. According to official data, there have been over 450 integrations covering areas such as AI compute, data, models, frameworks, infrastructure, and DePIN.

0G Ecosystem Chart

Furthermore, the 0G Foundation has launched an $88.88 million ecosystem fund to support the development of AI-related projects, giving rise to the following native applications:

· zer0: AI-driven DeFi liquidity solution, providing on-chain liquidity optimization services

· H1uman: Decentralized AI Agent Factory, creating a scalable AI integration workflow

· Leea Labs: Infrastructure for multiple AI agents, supporting secure deployment of multi-agent systems

· Newmoney.AI: Intelligent DeFi agent wallet, automating investment and trading management

· Unagi: AI-driven on-chain entertainment platform, integrating anime and game experiences into Web3

· Rivalz: Verifiable AI oracle, providing trusted AI data access for smart contracts

· Avinasi Labs: AI project focusing on longevity research

How to Participate

Regular users can currently participate in the 0G ecosystem through the following methods:

· Participate in 0G Testnet Interaction: 0G has launched Testnet V3 (Galileo v3), where users can access the official testnet page (0G Testnet Guide) to claim test tokens and interact with DApps on the 0G chain.

· Participate in Kaito Event: 0G has joined the Kaito platform's content creation event, where users can participate by creating and sharing high-quality content related to 0G (such as technical analysis, ecosystem updates, or AI application use cases) to earn rewards.

Challenges and Vision

0G has demonstrated strong technical capabilities in the storage field, providing a comprehensive modular solution for decentralized storage with excellent scalability and cost-effectiveness (storage costs as low as $10-11 per TB). Additionally, 0G has addressed the verifiability issue of data through the Data Availability Layer (DA), laying a solid foundation for future large-scale AI inference and training tasks. This design provides robust support for decentralized AI at the data storage layer and creates an optimized storage and retrieval experience for developers.

In terms of performance, 0G is expected to increase its mainnet TPS to the range of 3,000 to 10,000, achieving a 10x growth compared to previous levels, ensuring the network can meet the high-intensity computing demands associated with AI inference and high-frequency trading tasks. However, in the computing power market and model aspects, 0G still requires substantial development. Currently, 0G's computing power business is limited to AI inference services, and more customized design and technological innovation are needed to support model training tasks. As a core component of AI development, models and computing power are not only key to driving product upgrades and large-scale applications but also a necessary path for 0G to achieve its goal of becoming the largest AI Layer 1 ecosystem.

Summary

Current Status: Diverse Entry Points, Faced with Challenges

Reflecting on the six AI Layer1 projects above, each has chosen different entry points, focusing on core elements such as AI assets, computing power, models, storage, etc., to explore the path of decentralized AI infrastructure and ecosystem construction:

· Sentient: Focuses on developing decentralized models, introducing the Dobby series and emphasizing model trustworthiness, alignment, and loyalty. Its underlying chain is still under development and is intended to achieve deep integration between models and the chain.

· Sahara AI: With AI asset ownership protection as its core, the initial phase focuses on data rights and circulation, aiming to provide a trustworthy data foundation for the AI ecosystem.

· Ritual: Focuses on a high-efficiency implementation of decentralized computing power in inference, strengthens the functionality of the blockchain itself, enhances system flexibility and scalability, and supports the development of AI-native applications.

· Gensyn: Aims to solve the challenge of decentralized model training, reducing the cost of large-scale distributed training through technological innovation, and providing a viable path for AI computing power sharing and democratization.

· Bittensor: A relatively mature subnet platform that, through token incentives and decentralized governance, has pioneered the development of a rich developer and application ecosystem, serving as an early model of decentralized AI.

· 0G: Taking decentralized storage as its starting point, focusing on the data storage and management challenges in the AI ecosystem, gradually expanding to more comprehensive AI infrastructure and application services.

Project Comparison and Summary

Overall, these projects not only have differences in technical roadmaps but also complement each other in ecosystem strategies, collectively driving the diversified development of on-chain decentralized AI ecosystems. However, it is undeniable that the entire track is still in the early exploration stage. Although many forward-looking visions and blueprints have been proposed, actual development progress and ecosystem construction still require time to mature, and many key infrastructures and innovative applications are yet to be implemented.

How to attract and incentivize more computing power, storage, and other basic nodes to join the network is the core issue that urgently needs to be addressed. Just as the Bitcoin network took more than a decade to gradually gain mainstream recognition, a decentralized AI network must continuously expand its node base to meet the massive computational demands of AI tasks. Only when resources such as computing power and storage reach sufficient abundance can costs be effectively reduced, computing power be democratized, and the grand vision of decentralized AI ultimately be realized.

In addition, on-chain AI applications still lack innovation. Currently, many products are simply migrated from a Web2 model, lacking innovative designs deeply integrated with blockchain's native mechanisms, and failing to fully demonstrate the unique advantages of decentralized AI. These real-world challenges remind us that the continued development of the industry not only requires technological breakthroughs but also depends on improving user experience and continuously enhancing the entire ecosystem.

Continuous Development: Emerging High-Quality AI Layer1 and DeAI Projects

In addition to the projects we have examined in depth, many new AI Layer1 and DeAI projects worth watching are emerging amid the current wave. (Due to space limitations, only brief introductions are provided here; you can follow our continued research on more AI tracks.)

Kite AI

Kite AI has built an EVM-compatible Layer 1 blockchain around its core consensus mechanism, "Proof of Attributed Intelligence" (PoAI), aiming to create a fair AI ecosystem. It seeks to ensure that data providers, model developers, and AI agent creators have their contributions to AI value creation transparently recorded and fairly rewarded, thereby breaking the AI resource monopoly held by a few tech giants. Currently, Kite AI is focusing its development efforts on consumer-facing applications, supporting the development, rights confirmation, and monetization of AI assets through a subnet architecture and a trading marketplace.

Story

Story is an AI Layer 1 built around open intellectual property (IP) to provide creators and developers with a full set of tools to help them register, track, authorize, manage, and monetize various content IPs on-chain, whether they are videos, audios, texts, or AI works. Story allows users to mint original content as NFTs, with built-in flexible authorization and revenue-sharing mechanisms, enabling users to engage in derivative works and business collaborations while ensuring ownership and transparent revenue distribution.

Vana

Vana is a next-generation data-centric AI Layer 1 built for "user data monetization and AI training." It breaks free from the data monopoly of big companies, allowing individuals to truly own, manage, and share their data. Users can aggregate social, health, consumption, and other data through a "data DAO" (decentralized autonomous organization for users to collectively govern, share, and benefit from AI training data) to participate in AI training and receive dividends while retaining data ownership. Furthermore, Vana emphasizes privacy and security in its design, utilizing privacy computation and encryption verification technologies to safeguard user data.

Nillion

Nillion is a "blind computation network" focused on data privacy and secure computation, providing developers and enterprises with a set of privacy-enhancing technologies (such as Multi-Party Computation MPC, Homomorphic Encryption, Zero-Knowledge Proofs, etc.) that enable data storage, sharing, and complex computation without decrypting the original data. This allows scenarios such as AI, Decentralized Finance (DeFi), healthcare, personalized applications, etc., to securely handle high-value and sensitive information without worrying about the risk of data leakage. Currently, the Nillion ecosystem supports various innovative applications including AI privacy computation, personalized intelligent agents, private knowledge bases, attracting partners such as Virtuals, NEAR, Aptos, Arbitrum, and more.

Mira Network

Mira Network is an innovative network designed for decentralized validation of AI outputs, aiming to build a trusted verification layer for autonomous AI. Its core innovation is ensemble evaluation: multiple different language models run concurrently in the background, AI-generated results are segmented into specific assertions, and each assertion is independently verified by distributed model nodes. Only when a majority of models reach consensus and identify the content as factual is it returned to the user. Through this multi-model consensus mechanism, Mira reports reducing the single-model hallucination rate from roughly 25% to about 3%, an error-rate reduction of nearly 90%. By eliminating reliance on centralized institutions or any single large model, and by employing distributed nodes with economic incentives, Mira aims to become a verifiable foundational infrastructure layer for Web2 and Web3 AI applications, supporting the transition from AI as a co-pilot to trustworthy AI systems capable of autonomous decision-making.
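
To illustrate the multi-model consensus idea in the simplest possible terms, the sketch below splits an output into assertions and accepts only those that a strict majority of independent verifier models agree with; the verifier functions are placeholders, not Mira's actual node logic.

```python
from typing import Callable

Verifier = Callable[[str], bool]   # a model node that judges one assertion true/false

def consensus_filter(assertions: list[str], verifiers: list[Verifier]) -> list[str]:
    """Keep only assertions that a strict majority of verifier models accept."""
    accepted = []
    for claim in assertions:
        votes = sum(1 for verify in verifiers if verify(claim))
        if votes > len(verifiers) / 2:
            accepted.append(claim)
    return accepted

# Usage with toy verifiers (real nodes would each query a different model)
known_facts = {"Water boils at 100 C at sea level."}
verifiers = [
    lambda c: c in known_facts,                 # model A: strict fact lookup
    lambda c: c in known_facts,                 # model B: agrees with A here
    lambda c: True,                             # model C: overly permissive
]
output = ["Water boils at 100 C at sea level.", "The moon is made of cheese."]
print(consensus_filter(output, verifiers))      # only the factual assertion survives
```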

Prime Intellect

Prime Intellect is a platform focused on decentralized AI training and computing-power infrastructure, dedicated to integrating global computing resources and driving collaborative training of open-source AI models. Its core architecture includes a peer-to-peer computing-power leasing market and an open training protocol, allowing anyone to contribute idle hardware to the network for large-scale model training and inference, thereby easing the traditional problems of highly centralized, high-threshold, and resource-wasteful AI. Additionally, Prime Intellect has developed an open-source distributed training framework (OpenDiLoCo) that supports efficient cross-region training of multi-billion-parameter models, and has invested deeply in algorithmic innovation and specialized tracks, such as the METAGENE-1 model based on metagenomics and the INTELLECT-MATH project focused on mathematical reasoning. In 2025, Prime Intellect also launched the SYNTHETIC-1 initiative, using crowdsourcing and reinforcement learning to create the world's largest open-source dataset for reasoning and mathematical code verification.

Future Outlook: Open and Collaborative Decentralized AI

Despite various challenges, on-chain decentralized AI still holds vast development prospects and transformative potential. As the underlying technology gradually matures and projects continue to deliver on their promises, the unique advantages of decentralized AI are expected to become increasingly prominent. AI Layer1 projects are expected to realize the following vision:

· Democratized sharing of compute power, data, and models, breaking the monopoly of technology giants, allowing global individuals, companies, and organizations to participate in AI innovation effortlessly.

· Circulation of ownership rights and trusted governance of AI assets, promoting the free flow and transaction of core assets such as data and models on-chain, ensuring owners' benefits, and forming a healthy open ecosystem.

· More trustworthy, traceable, and alignable AI outputs, providing a solid foundation for the secure and controllable development of AI, effectively reducing the risk of "AI malicious behavior."

· Inclusive implementation of industry applications, unlocking the immense value of AI in fields such as finance, healthcare, education, and content creation, enabling AI to truly benefit society in a decentralized manner.

As more and more AI Layer1 projects make continuous progress, we look forward to the early realization of the decentralized AI vision. We also hope to see more developers, innovators, and participants join hands to collectively build a more open, diverse, and sustainable AI ecosystem.

Original Article Link

Welcome to join the official BlockBeats community:

Telegram Subscription Group: https://t.me/theblockbeats

Telegram Discussion Group: https://t.me/BlockBeats_App

Official Twitter Account: https://twitter.com/BlockBeatsAsia

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation on acquiring or disposing of any financial products, any associated discussions, comments, or posts by author or other users should not be considered as such either. It is solely for general information purpose only, which does not consider your own investment objectives, financial situations or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information, investors should do their own research and may seek professional advice before investing.
