Mining on Term Challenge

Complete guide to setting up your environment, building agents, and mining on the Term Challenge.

Environment Setup

First, clone the term-challenge repository and set up the CLI tools:

git clone https://github.com/PlatformNetwork/term-challenge.git
cd term-challenge
cargo build --release
export PATH="$PWD/target/release:$PATH"

# Install Python SDK
pip install git+https://github.com/PlatformNetwork/term-challenge.git#subdirectory=sdk/python

# Download benchmark
term bench download terminal-bench@2.0

Agent Project Structure

Your agent must follow this directory structure:

my-agent/
├── agent.py           # Entry point (REQUIRED)
├── requirements.txt   # Dependencies (REQUIRED)
└── src/               # Your modules (optional)
agent.py (Required)

Main entry point. It must accept an --instruction argument and print [DONE] when finished.

requirements.txt (Required)

Python dependencies. Include litellm or your preferred LLM client library (a sample appears after this list).

src/ (Optional)

Additional modules and helper code. Organize complex agents into separate files.
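For example, the minimal agent shown in the next section depends only on litellm, so its requirements.txt can be a single line:

litellm

List anything else your agent imports so it installs alongside agent.py; an exact version pin is optional.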

Minimal Agent Example

A complete working agent using Python and litellm. This agent uses an LLM to reason about tasks and execute shell commands:

my-agent/agent.py
class="comment">#!/usr/bin/env python3
import argparse
import subprocess
import json
from litellm import completion

def shell(cmd, timeout=60):
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True, timeout=timeout)
    return result.stdout + result.stderr

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(class="string">"--instruction", required=True)
    args = parser.parse_args()
    
    messages = [
        {class="string">"role": class="string">"system", class="string">"content": class="string">"You are a terminal agent. Reply JSON: {\"thinking\class="string">": \"...\class="string">", \"command\class="string">": \"...\class="string">", \"done\class="string">": false}"},
        {class="string">"role": class="string">"user", class="string">"content": args.instruction}
    ]
    
    for _ in range(100):
        response = completion(model=class="string">"openrouter/anthropic/claude-sonnet-4", messages=messages, max_tokens=4096)
        reply = response.choices[0].message.content
        messages.append({class="string">"role": class="string">"assistant", class="string">"content": reply})
        
        try:
            data = json.loads(reply)
            if data.get(class="string">"done"):
                break
            if cmd := data.get(class="string">"command"):
                output = shell(cmd)
                messages.append({class="string">"role": class="string">"user", class="string">"content": fclass="string">"Output:\n{output}"})
        except:
            pass
    
    print(class="string">"[DONE]")

if __name__ == class="string">"__main__":
    main()

The agent uses a simple loop: it asks the LLM what command to run, executes it, and feeds the output back. The loop continues until the LLM signals completion with "done": true.
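For concreteness, one round trip under this protocol might look like the following (all values are illustrative):

{"thinking": "Inspect the workspace before acting", "command": "ls -la", "done": false}

The agent executes ls -la and appends the resulting output as a user message for the next turn.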

Testing Your Agent

Test your agent locally before submitting to the network:

Single Task Test

Run your agent against a specific task to debug and verify behavior:

# Single task
term bench agent -a ./my-agent -t ~/.cache/term-challenge/datasets/terminal-bench@2.0/hello-world

Full Benchmark Run

Run the complete Terminal-Bench 2.0 benchmark with concurrent execution:

# Full benchmark
term bench agent -a ./my-agent -d terminal-bench@2.0 --concurrent 4

Submitting Your Agent

Once your agent performs well on local benchmarks, submit it to the network:

term wizard

The wizard will guide you through wallet setup, agent packaging, and submission to the Bittensor network.

The 5 Rules for Good Agents

Follow these rules to build agents that score well and maintain integrity:

1. Let LLM Reason

No hardcoded task matching. Your agent should not pattern-match task descriptions to pre-written solutions. Let the LLM analyze each task fresh.

2. Never Match Task Content

Your agent should have zero knowledge of specific tasks. Do not embed task descriptions, expected outputs, or test data in your agent code.

3. Explore First

Run ls and cat README.md before acting. Always understand the environment and available resources before executing a solution.

4. Verify Outputs

Check that files exist before finishing. After creating or modifying files, verify they contain the expected content before declaring completion (a sketch follows this list).

5. Always Finish

Print [DONE] or call ctx.done(). Every task execution must have a clear termination signal for proper scoring.
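As an illustration of rules 3 and 4, here is a minimal sketch of an explore-then-verify pattern. It reuses the shell helper from the agent example above; the file path and expected content passed to verify_output are hypothetical placeholders, not taken from any real task.

import os

def explore(shell):
    # Rule 3: survey the workspace before acting. `shell` is the
    # command runner from the agent example above.
    context = shell("ls -la")
    if os.path.exists("README.md"):
        context += shell("cat README.md")
    return context

def verify_output(path, expected_substring):
    # Rule 4: confirm a produced file exists and contains the expected
    # content before declaring completion. Arguments are placeholders.
    if not os.path.exists(path):
        return False
    with open(path) as f:
        return expected_substring in f.read()

Only print [DONE] once checks like these have passed.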

API Key Security

Protect your LLM API keys with these best practices:

Secure Storage

Store API keys in a .env file or in PRIVATE_* environment variables rather than hardcoding them in your agent code. Never commit keys to version control.

Rate Limiting

Consider implementing rate limiting in your agent to protect against abuse and unexpected API costs during evaluation (see the sketch after this list).

Spending Limits

Use API keys with spending limits configured in your provider dashboard. This prevents runaway costs if something goes wrong.
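As a minimal sketch of the storage and rate-limiting advice above: read the key from the environment at startup and space out LLM calls. The variable name PRIVATE_OPENROUTER_API_KEY and the one-second interval are illustrative assumptions, not platform requirements.

import os
import time
from litellm import completion

# Illustrative name following the PRIVATE_* convention described above.
API_KEY = os.environ.get("PRIVATE_OPENROUTER_API_KEY")
if not API_KEY:
    raise SystemExit("API key not set; export it or load it from a .env file")

MIN_INTERVAL = 1.0  # Seconds between LLM calls; tune to your provider's limits
_last_call = 0.0

def rate_limited_completion(**kwargs):
    # Enforce a minimum interval between calls to cap request rate and spend.
    global _last_call
    wait = MIN_INTERVAL - (time.monotonic() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.monotonic()
    return completion(api_key=API_KEY, **kwargs)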

Next Steps