Gonka AI Node
Decentralized AI network using Proof of Work 2.0 for meaningful compute contribution to AI training and inference.
Overview
Gonka transforms GPU compute into useful AI work through Proof of Work 2.0, where computational power advances real AI models instead of solving arbitrary puzzles. Operators earn rewards for delivering verifiable compute.
Key Features:
- Real AI workloads (not wasteful mining)
- Honest-majority validation
- Reputation-based trust system
- Open, censorship-free LLM inference and training
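Honest-majority validation means a result is accepted only when more than half of the sampled validators agree it is valid. The toy sketch below illustrates that rule only; the verdict labels and the simple >1/2 threshold are my assumptions, not Gonka's actual protocol.

```shell
# Toy honest-majority vote over validator verdicts (illustrative only;
# verdict names and the >1/2 threshold are assumptions, not Gonka's spec).
VERDICTS="valid valid invalid valid valid"
YES=0; TOTAL=0
for v in $VERDICTS; do
  TOTAL=$((TOTAL + 1))
  if [ "$v" = "valid" ]; then YES=$((YES + 1)); fi
done
if [ $((YES * 2)) -gt "$TOTAL" ]; then
  echo "result accepted"   # a majority of validators agreed
else
  echo "result rejected"
fi
```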
Hardware Requirements
Minimum per MLNode:
- VRAM: 40GB+ usable
- CPU: 16-core
- RAM: 64GB+ (at least 1.5x total GPU VRAM)
- Storage: 1TB NVMe SSD
- Network: Stable high-speed connection
- NVIDIA Container Toolkit with CUDA 12.6-12.9
- 2-5 Network Nodes recommended

Large Models (DeepSeek R1, Qwen3-235B):
- 2+ MLNodes, each with 8x H200 GPUs
- 640GB+ VRAM per MLNode

Medium Models (Qwen3-32B, Gemma-3-27B):
- 2+ MLNodes, each with 4x A100 or 2x H100
- 80GB+ VRAM per MLNode
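The RAM guideline (at least 1.5x total GPU VRAM) is easy to sanity-check with shell arithmetic; the 80GB figure below is a hypothetical single-H100 example, not a value from this guide.

```shell
# RAM sizing sketch: at least 1.5x total GPU VRAM (per the guideline above).
VRAM_GB=80                      # hypothetical: one 80GB H100 per MLNode
RAM_GB=$((VRAM_GB * 3 / 2))     # 1.5x via integer arithmetic
echo "For ${VRAM_GB}GB VRAM, provision at least ${RAM_GB}GB RAM"
```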
Key Management Overview
Gonka uses a three-key system:
- Account Key (Cold) - Created locally, high-privilege, offline storage
- Consensus Key (TMKMS) - Managed by secure service for block validation
- ML Operational Key (Warm) - Created on server for automated transactions
Read the Gonka Key Management Guide before production deployment.
Prerequisites
- Spheron AI account (sign up)
- Payment method configured
- SSH key (how to generate)
- Local secure machine for Account Key generation
- HuggingFace account and token
Part A: Local Machine Setup
Step 1: Install CLI Tool
Download the inferenced binary from Gonka releases, then make it executable and verify it runs:

```shell
chmod +x inferenced
./inferenced --help
```

macOS: Allow execution in System Settings → Privacy & Security if prompted.
Step 2: Create Account Key
⚠️ IMPORTANT: Do this on your secure local machine, not the server.

```shell
./inferenced keys add gonka-account-key --keyring-backend file
```

Critical: Save the mnemonic phrase securely offline. This is your only recovery method.
Part B: Deploy GPU on Spheron
Step 3: Sign Up & Add Credits
- Go to app.spheron.ai and sign up
- Click Credits → Add funds (card or crypto)
Step 4: Deploy Instance
- Click Deploy in sidebar
- Select GPU: A100 (80GB) or H100 (40GB+ usable VRAM is required)
- Region: Closest to you
- OS: Ubuntu 22.04 LTS + CUDA 12.8
- Select your SSH key
- Click Deploy Instance
Instance ready in 30-60 seconds.
Part C: Server Setup
Step 5: Connect to Instance
```shell
ssh root@<your-instance-ip>
```

Step 6: Install Dependencies

```shell
sudo apt update && sudo apt upgrade -y
sudo apt install -y git docker.io docker-compose
```

Step 7: Install NVIDIA Container Toolkit

```shell
sudo apt install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Verify GPU access:

```shell
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

Step 8: Clone Gonka Repository

```shell
git clone https://github.com/gonka-ai/gonka.git -b main
cd /root/gonka/deploy/join
cp config.env.template config.env
```

Step 9: Configure Environment
```shell
# Create HuggingFace cache directory
mkdir -p /mnt/shared
nano config.env
```

In config.env, set:
- Key name
- Public URL of your node
- Account public key
- SSH ports

Then load the configuration:

```shell
source config.env
```

Also configure the MLNode settings:
- Define MLNodes and inference ports
- Specify models to load
- Set concurrent request limits
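As a rough illustration, a populated config.env might look like the fragment below. KEY_NAME, DAPI_API__PUBLIC_URL, and ACCOUNT_PUBKEY are variables used in later steps of this guide; the values shown are placeholders, and the actual template may use additional or different keys, so always start from config.env.template.

```shell
# Illustrative config.env fragment — values are placeholders only.
export KEY_NAME="gonka-ml-key"                          # ML Operational Key name
export DAPI_API__PUBLIC_URL="http://203.0.113.10:8000"  # public URL of your node
export ACCOUNT_PUBKEY="<account-public-key>"            # from the Account Key step
```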
Step 10: Download Model Weights
```shell
# Set up the HuggingFace cache
mkdir -p "$HF_HOME"
sudo apt update && sudo apt install -y python3-pip pipx
pipx install "huggingface_hub[cli]"
pipx ensurepath
export PATH="$HOME/.local/bin:$PATH"
# Download the model
hf download Qwen/Qwen2.5-7B-Instruct
```

Step 11: Pull Containers
```shell
# Pull all images
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml pull
# Start chain components
source config.env && docker compose up tmkms node -d --no-deps
# Check logs
docker compose logs tmkms node -f
```

Step 12: Create ML Operational Key

Enter the API container:

```shell
docker compose run --rm --no-deps -it api /bin/sh
```

Inside the container, create the key:

```shell
printf '%s\n%s\n' "$KEYRING_PASSWORD" "$KEYRING_PASSWORD" | inferenced keys add "$KEY_NAME" --keyring-backend file
```

Save the mnemonic, then exit the container:

```shell
exit
```

Step 13: Register Host
Re-enter the API container and register:

```shell
docker compose run --rm --no-deps -it api /bin/sh
inferenced register-new-participant \
  $DAPI_API__PUBLIC_URL \
  $ACCOUNT_PUBKEY \
  --node-address $DAPI_CHAIN_NODE__SEED_API_URL
exit
```

Step 14: Grant Permissions (Switch to Local Machine)
⚠️ IMPORTANT: Run this on your local machine, where you created the Account Key.

```shell
./inferenced tx inference grant-ml-ops-permissions \
  gonka-account-key \
  <ml-operational-key-address-from-step-12> \
  --from gonka-account-key \
  --keyring-backend file \
  --gas 2000000 \
  --node <seed_api_url>/chain-rpc/
```

This grants the ML Operational Key permission to submit inference proofs.
Step 15: Launch Node (Switch Back to Server)
```shell
source config.env && \
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d
```

All services start: chain node, API node, and MLNodes.

Your Gonka node is now running!

Verification
Check Participant Registration
http://node2.gonka.ai:8000/v1/participants/<your-account-address>

This should display your public key in JSON.
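As a sketch of what a healthy check might look like, the snippet below greps a sample payload for a pubkey field. The JSON shape shown is an assumption for illustration, not the documented schema; on a live node you would pipe `curl` output into the same check.

```shell
# Hypothetical participant response — the real schema may differ.
RESPONSE='{"address":"gonka1example","pubkey":"A0b1c2d3","reputation":0}'
if echo "$RESPONSE" | grep -q '"pubkey"'; then
  echo "participant registered"
else
  echo "participant not found"
fi
```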
Check Current Epoch
After Proof of Compute completes (every 24 hours):

http://node2.gonka.ai:8000/v1/epochs/current/participants

Monitor Dashboard

http://node2.gonka.ai:8000/dashboard/gonka/validator

Track the next Proof of Compute session timing here.
Check Node Status
Using your public IP:

```shell
curl http://<PUBLIC_IP>:<PUBLIC_RPC_PORT>/status
```

Locally on the server:

```shell
curl http://0.0.0.0:26657/status
```

Via the public seed node:

```shell
curl http://node2.gonka.ai:26657/status
```

Proof of Compute
Simulation: Test PoC on an MLNode before the actual PoC phase begins.

Timing:
- Runs every 24 hours
- Check the dashboard for the next session
- You can stop the server between sessions and restart it before the next PoC
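Since PoC runs on a 24-hour cycle, you can estimate the next session from the last one with simple date arithmetic; the one-hour-ago timestamp below is a made-up example, and the real session schedule comes from the dashboard.

```shell
# Estimate time until the next PoC session (24h cycle per the guide).
NOW=$(date +%s)
LAST_POC=$((NOW - 3600))        # hypothetical: last session ran 1 hour ago
NEXT_POC=$((LAST_POC + 86400))  # 24 hours after the last session
HOURS_LEFT=$(( (NEXT_POC - NOW) / 3600 ))
echo "Next PoC session in about ${HOURS_LEFT} hours"
```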
Troubleshooting
Container won't start:

```shell
# Check Docker status
docker ps -a
docker compose logs
# Verify configuration
source config.env
env | grep DAPI
```

GPU not detected:

```shell
# Verify the NVIDIA toolkit
nvidia-ctk --version
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```

Registration fails:
- Verify the Account Key is correct
- Check network connectivity to the seed node
- Ensure sufficient gas
- Verify the ML Operational Key address

MLNode problems:
- Verify all MLNodes have sufficient VRAM
- Check that model weights downloaded correctly
- Review the MLNode logs:

```shell
docker compose logs mlnode
```
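To confirm each GPU meets the 40GB VRAM minimum, you can parse nvidia-smi's per-GPU memory totals. The snippet below runs against a hard-coded sample so it works without a GPU; on a real server, replace SAMPLE with the nvidia-smi command shown in the comment.

```shell
# On a real host: SAMPLE=$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits)
SAMPLE="81920
81920"                 # hypothetical output: two 80GB GPUs, reported in MiB
MIN_MIB=40960          # 40GB minimum from the hardware requirements
OK=0
for mib in $SAMPLE; do
  if [ "$mib" -ge "$MIN_MIB" ]; then OK=$((OK + 1)); fi
done
echo "${OK} GPU(s) meet the ${MIN_MIB} MiB minimum"
```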
Managing Your Node
Update profile:
- Update host name, website, and avatar on the dashboard
- This helps the network identify your node

Monitor activity:
- Check PoC completion status
- View earned rewards
- Monitor GPU usage:

```shell
nvidia-smi -l 1
```

Restart the node:

```shell
docker compose down
source config.env && \
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d
```

Additional Resources
- Gonka GitHub
- Gonka Dashboard
- Getting Started - Spheron deployment
- SSH Connection - SSH setup
Proof of Work 2.0: Every computation advances real AI models. Earn rewards for meaningful compute contribution.