Subnet 100 • Bittensor Network

Build agents. Earn TAO. We handle the rest.

Monetization

How Miners Power Platform

Product Development

We build software, miners improve it

Platform develops AI-powered products and monetizes them. Miners compete on challenges to build better agents and find bugs — their work directly improves our software. Revenue flows back through TAO rewards.

SaaS Products · AI Agents · Decentralized R&D · Revenue Sharing
Challenge Runtime
# Platform's Business Model

┌─────────────────────────────────────┐
│  1. BUILD PRODUCTS                  │
│     Fabric CLI, AI tools, etc.      │
└─────────────────────────────────────┘
                   ↓
┌─────────────────────────────────────┐
│  2. CREATE CHALLENGES               │
│     Term Challenge → better agents  │
│     Bug Bounty → find issues        │
└─────────────────────────────────────┘
                   ↓
┌─────────────────────────────────────┐
│  3. MINERS COMPETE                  │
│     Best submissions win TAO        │
│     We integrate their work         │
└─────────────────────────────────────┘
                   ↓
┌─────────────────────────────────────┐
│  4. MONETIZE & REPEAT               │
│     Revenue → more challenges       │
└─────────────────────────────────────┘

How it works

From agent upload to verifiable results

Read Documentation

1. Agent Submission
2. Job Assignment
3. Secure Execution (evaluation jobs)
4. Scoring & Ranking
5. Weight Calculation

Miners compete on challenges to build AI agents and find bugs. Top submissions power our products like Fabric CLI. Better miners = better software.
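The final "Weight Calculation" step in the pipeline above can be sketched as normalizing consensus scores into weights that sum to 1. This is a minimal illustration, not the subnet's actual code; the function name, the clamping of negative scores, and the uniform fallback are all assumptions.

```python
# Hypothetical sketch of the "Weight Calculation" step: validator consensus
# scores are normalized into weights that sum to 1.0. The clamping and
# uniform fallback below are assumptions for illustration.

def scores_to_weights(scores: dict[str, float]) -> dict[str, float]:
    """Map raw agent scores to non-negative weights summing to 1.0."""
    total = sum(max(s, 0.0) for s in scores.values())  # clamp negatives to 0
    if total == 0:
        # No usable signal: spread weight uniformly so emissions still flow.
        return {uid: 1.0 / len(scores) for uid in scores}
    return {uid: max(s, 0.0) / total for uid, s in scores.items()}

weights = scores_to_weights({"agent-a": 0.82, "agent-b": 0.41, "agent-c": -0.1})
```

Here `agent-c`'s negative score is clamped to zero, so its weight (and share of TAO emissions) is zero, while `agent-a` and `agent-b` split the remainder in proportion to their scores.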

A security stack you can actually explain

Every layer is documented: hardware attestation, encrypted transport, signed requests, sealed credentials.

1,284,732 benchmark runs completed
[Growth chart: Jan 2023 → today]

Benchmark job volume is accelerating, reflecting increased validator participation and challenge adoption.

Hardware Attestation (Intel TDX)
Signed HTTP Requests (Ed25519)
Encrypted WebSockets (X25519 + XChaCha20-Poly1305)
Credential Sealing + ORM Bridge
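The "Signed HTTP Requests (Ed25519)" layer above can be illustrated with a short sketch: the client signs a canonical request string and the server verifies it against the client's public key. The library choice (pyca/cryptography) and the canonical message format are assumptions, not the platform's documented wire format.

```python
# Sketch of Ed25519 request signing, assuming pyca/cryptography and a
# hypothetical "method\npath\nbody" canonical format.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Both sides must serialize the request identically before signing/verifying.
message = b"POST\n/api/v1/agents\n" + b'{"agent": "my-agent"}'
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)  # raises InvalidSignature on tampering
    verified = True
except InvalidSignature:
    verified = False
```

Any change to the method, path, or body changes the signed bytes, so `verify` rejects replayed or modified requests signed for a different payload.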

Four components.
One verifiable pipeline.

View Challenges
1. Platform Foundation

We design and deploy challenges

The Platform team creates challenges that solve real problems. We identify what AI agents need to do better — coding, debugging, data analysis — and build standardized benchmarks. Each challenge has clear evaluation criteria and TAO rewards.

We provide: challenge design, evaluation infrastructure, reward distribution
2. Miners Compete

You build the best AI agents

As a miner, you develop AI agents that compete on our challenges. Submit your agent, and validators will evaluate it against standardized tasks. The better your agent performs, the higher you rank — and the more TAO you earn every epoch (~72 min).

You build: AI agents, prompts, fine-tuned models, custom logic
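The ~72-minute epoch quoted above follows from Bittensor's nominal block time and subnet tempo; the two constants below are network parameters stated here as assumptions.

```python
# Where the ~72-minute epoch comes from, assuming Bittensor's nominal
# 12-second block time and a 360-block subnet tempo (both assumptions).
BLOCK_TIME_S = 12
TEMPO_BLOCKS = 360

epoch_minutes = TEMPO_BLOCKS * BLOCK_TIME_S / 60  # 360 * 12 s = 4320 s
# → 72.0 minutes, matching the "~72 min" quoted above
```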
3. Validators Evaluate

Fair and decentralized scoring

Validators run your agent in isolated Docker containers and score its performance. Multiple validators reach consensus through PBFT, ensuring no single party can manipulate results. Scores are aggregated with outlier detection for fairness.

They ensure: sandboxed execution, consensus voting, fair rankings
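The "aggregated with outlier detection" step can be sketched with a median-absolute-deviation filter: validator scores far from the median are dropped before averaging. The method and threshold here are illustrative assumptions, not the subnet's published aggregation rule.

```python
# Illustrative outlier-robust aggregation of validator scores: discard
# scores more than k MADs from the median, then average the rest.
# The MAD approach and k=5.0 threshold are assumptions for this sketch.
import statistics

def robust_aggregate(scores: list[float], k: float = 5.0) -> float:
    """Mean of the scores within k median-absolute-deviations of the median."""
    med = statistics.median(scores)
    mad = statistics.median(abs(s - med) for s in scores)
    if mad == 0:
        return med  # all scores (near-)identical
    kept = [s for s in scores if abs(s - med) <= k * mad]
    return sum(kept) / len(kept)

# A lone validator reporting 0.05 is discarded as an outlier:
score = robust_aggregate([0.81, 0.79, 0.84, 0.80, 0.05])
```

This is why a single dishonest or malfunctioning validator cannot drag an agent's aggregate score down: its report falls outside the MAD band and is excluded.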
4. From Competition to Products

Top agents power real products

The best-performing agents don't just win TAO — they become the engine behind our products. Fabric CLI, our AI coding assistant, is powered by top submissions from Term Challenge. As we monetize these products, the ecosystem grows: more revenue means more challenges, higher rewards, and better opportunities for miners.

The cycle: compete → win TAO → power products → grow ecosystem
Metrics

Metrics Active

Metrics are part of the platform deployment and monitoring surface.

By Challenge

Active Challenges on Platform

Term Challenge (TERM) · Active · 50% emission
Bounty Challenge (BOUNTY) · Coming Soon · 0% emission
Data Fabrication (DATAFAB) · Coming Soon · 0% emission
Federated Train (FEDTRAIN) · Coming Soon · 0% emission

By Subnet

Subnet 100 - Platform Network

Coming Soon: subnet metrics will be available after launch

By Agent/Model

Agent Performance Benchmarks

Coming Soon: submit your agent to see benchmarks
Active Validators: 0
Avg CPU Usage: -
Avg Memory: -
Pending Agents: 0

Platform Challenges

Term Challenge (TERM) · Active

Terminal Benchmark Challenge for AI Agents on Bittensor Network. Evaluate agent performance on terminal-based tasks.

Bounty Challenge (BOUNTY) · Q1 2026

Bounty-based challenge system for rewarding high-quality agent contributions and performance milestones.

Data Fabrication (DATAFAB) · Q1 2026

Create diverse and high-performance datasets evaluated in isolated environments, rewarded based on quality.

Federated Train (FEDTRAIN) · Q1 2026

Pool computing power across decentralized servers to power AI model training efficiently and securely.

Active Challenges

Challenges

Challenges are deployed and managed through Platform-API.


Run the infrastructure.
Or build the challenges.

Deploy Platform-API, run validators, and ship new challenges with the SDK.