Gonka AI Node

Decentralized AI network using Proof of Work 2.0 for meaningful compute contribution to AI training and inference.

Overview

Gonka transforms GPU compute into useful AI work through Proof of Work 2.0, where computational power advances real AI models instead of solving arbitrary puzzles. Operators earn rewards for delivering verifiable compute.

Key Features:
  • Real AI workloads (not wasteful mining)
  • Honest-majority validation
  • Reputation-based trust system
  • Open, censorship-free LLM inference and training

Hardware Requirements

Minimum per MLNode:
  • VRAM: 40GB+ usable
  • MLNodes per Network Node: 2-5 recommended

Large Models (DeepSeek R1, Qwen3-235B):

  • 2+ MLNodes, each with 8x H200 GPUs
  • 640GB+ VRAM per MLNode

Medium Models (Qwen3-32B, Gemma-3-27B):

  • 2+ MLNodes, each with 4x A100 or 2x H100
  • 80GB+ VRAM per MLNode

Network Node Server:
  • CPU: 16-core
  • RAM: 64GB+
  • Storage: 1TB NVMe SSD
  • Network: Stable high-speed connection

MLNode Server:
  • RAM: 1.5x GPU VRAM (e.g., a 2x H100 node with 160GB of VRAM needs roughly 240GB of RAM)
  • CPU: 16-core
  • NVIDIA Container Toolkit with CUDA 12.6-12.9
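
To check what a candidate server actually provides, you can query GPU VRAM and system RAM directly:

nvidia-smi --query-gpu=name,memory.total --format=csv
free -h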

Key Management Overview

Gonka uses a three-key system:

  • Account Key (Cold) - Created locally, high-privilege, offline storage
  • Consensus Key (TMKMS) - Managed by secure service for block validation
  • ML Operational Key (Warm) - Created on server for automated transactions

Read the Gonka Key Management Guide before production deployment.

Prerequisites

  • Spheron AI account
  • Payment method configured
  • SSH key
  • Local secure machine for Account Key generation
  • HuggingFace account and token

Part A: Local Machine Setup

Step 1: Install CLI Tool

Download inferenced binary from Gonka releases:
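
For example, with curl (the asset name and URL below are illustrative; copy the actual download link for your OS and architecture from the releases page):

curl -LO https://github.com/gonka-ai/gonka/releases/latest/download/inferenced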

chmod +x inferenced
./inferenced --help

macOS: Allow execution in System Settings → Privacy & Security if prompted.

Step 2: Create Account Key

⚠️ IMPORTANT: Do this on your secure local machine, not on the server
./inferenced keys add gonka-account-key --keyring-backend file

Critical: Save the mnemonic phrase securely offline. This is your only recovery method.
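
To confirm the key exists and print its address for later reference (assuming inferenced exposes the standard Cosmos SDK keys subcommands, as its keys add usage suggests):

./inferenced keys show gonka-account-key --keyring-backend file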

Part B: Deploy GPU on Spheron

Step 3: Sign Up & Add Credits

  1. Go to app.spheron.ai and sign up
  2. Click Credits → Add funds (card or crypto)

Step 4: Deploy Instance

  1. Click Deploy in sidebar
  2. Select GPU: A100 (80GB) or H100 (at least 40GB of usable VRAM is required)
  3. Region: Closest to you
  4. OS: Ubuntu 22.04 LTS + CUDA 12.8
  5. Select your SSH key
  6. Click Deploy Instance

Instance ready in 30-60 seconds.

Part C: Server Setup

Step 5: Connect to Instance

ssh root@your-instance-ip

Step 6: Install Dependencies

sudo apt update && sudo apt upgrade -y
sudo apt install git docker.io docker-compose -y

Step 7: Install NVIDIA Container Toolkit

sudo apt install nvidia-container-toolkit -y
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Verify GPU access:

docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

Step 8: Clone Gonka Repository

git clone https://github.com/gonka-ai/gonka.git -b main
cd /root/gonka/deploy/join
cp config.env.template config.env

Step 9: Configure Environment

# Create HuggingFace cache directory
mkdir -p /mnt/shared
Edit config.env:
nano config.env
Required fields:
  • Key name
  • Public URL of your node
  • Account public key
  • SSH ports
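
A minimal sketch of these values, using the variable names referenced by later commands in this guide (placeholders only; the shipped config.env.template is authoritative):

# Illustrative values -- consult config.env.template for the full list
export KEY_NAME="gonka-ml-key"                        # ML Operational Key name
export KEYRING_PASSWORD="<strong-password>"           # protects the file keyring
export DAPI_API__PUBLIC_URL="http://<PUBLIC_IP>:8000"
export DAPI_CHAIN_NODE__SEED_API_URL="http://node2.gonka.ai:8000"
export ACCOUNT_PUBKEY="<account-public-key-from-step-2>"
export HF_HOME="/mnt/shared"                          # HuggingFace cache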

Load configuration:

source config.env

Configure node-config.json:
  • Define MLNodes and inference ports
  • Specify models to load
  • Set concurrent request limits

Step 10: Download Model Weights

# Setup HuggingFace cache
mkdir -p $HF_HOME
sudo apt update && sudo apt install -y python3-pip pipx
pipx install "huggingface_hub[cli]"
pipx ensurepath
export PATH="$HOME/.local/bin:$PATH"
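
# Optional: if your chosen model is gated, authenticate first with the
# HuggingFace token from Prerequisites (hf auth login is part of the same CLI):
hf auth login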
 
# Download model
hf download Qwen/Qwen2.5-7B-Instruct

Step 11: Pull Containers

# Pull all images
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml pull
 
# Start chain components
source config.env && docker compose up tmkms node -d --no-deps
 
# Check logs
docker compose logs tmkms node -f

Step 12: Create ML Operational Key

Enter the API container:

docker compose run --rm --no-deps -it api /bin/sh

Create the warm key:

printf '%s\n%s\n' "$KEYRING_PASSWORD" "$KEYRING_PASSWORD" | inferenced keys add "$KEY_NAME" --keyring-backend file
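
Record the new key's address; Step 14 needs it. Assuming the standard Cosmos SDK keys subcommands, you can print it with (you will be prompted for the keyring password):

inferenced keys show "$KEY_NAME" -a --keyring-backend file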

Save the mnemonic, then exit the container:

exit

Step 13: Register Host

Re-enter the API container:

docker compose run --rm --no-deps -it api /bin/sh

Register the participant:

inferenced register-new-participant \
    $DAPI_API__PUBLIC_URL \
    $ACCOUNT_PUBKEY \
    --node-address $DAPI_CHAIN_NODE__SEED_API_URL

Exit:

exit

Step 14: Grant Permissions (Switch to Local Machine)

⚠️ IMPORTANT: Run this on the local machine where you created the Account Key
./inferenced tx inference grant-ml-ops-permissions \
    gonka-account-key \
    <ml-operational-key-address-from-step-12> \
    --from gonka-account-key \
    --keyring-backend file \
    --gas 2000000 \
    --node <seed_api_url>/chain-rpc/

This grants ML Operational Key permission to submit inference proofs.
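
To verify the grant on-chain (assuming inferenced exposes the standard Cosmos SDK authz query; the addresses are your own):

./inferenced query authz grants <account-key-address> <ml-operational-key-address> --node <seed_api_url>/chain-rpc/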

Step 15: Launch Node (Switch Back to Server)

source config.env && \
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d

All services start: chain node, API node, MLNodes.
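
To confirm every service is up:

docker compose -f docker-compose.yml -f docker-compose.mlnode.yml ps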

Your Gonka node is now running!

Verification

Check Participant Registration

http://node2.gonka.ai:8000/v1/participants/<your-account-address>

Should display your public key in JSON.

Check Current Epoch

After Proof of Compute completes (every 24 hours):

http://node2.gonka.ai:8000/v1/epochs/current/participants

Monitor Dashboard

http://node2.gonka.ai:8000/dashboard/gonka/validator

Track next Proof of Compute session timing.

Check Node Status

Using the public IP:

curl http://<PUBLIC_IP>:<PUBLIC_RPC_PORT>/status

Locally on the server:

curl http://localhost:26657/status

Using the genesis node:

curl http://node2.gonka.ai:26657/status
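
Assuming the chain node exposes the standard Tendermint/CometBFT RPC (the TMKMS component and port 26657 suggest it does), you can check sync state with jq (sudo apt install -y jq):

curl -s http://localhost:26657/status | jq '.result.sync_info.catching_up'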

Proof of Compute

Simulation: test PoC on your MLNode before the actual PoC phase begins.

Timing:
  • Runs every 24 hours
  • Check dashboard for next session
  • Can stop server between sessions and restart before PoC

Troubleshooting

Container won't start:

# Check Docker status
docker ps -a
docker compose logs

# Verify configuration
source config.env
env | grep DAPI

GPU not accessible:

# Verify NVIDIA toolkit
nvidia-ctk --version
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

Permission grant failed:
  • Verify Account Key is correct
  • Check network connectivity to seed node
  • Ensure sufficient gas
  • Verify ML Operational Key address

PoC failures:
  • Verify all MLNodes have sufficient VRAM
  • Check model weights downloaded correctly
  • Review MLNode logs: docker compose logs mlnode

Managing Your Node

Update profile:
  • Update host name, website, avatar on dashboard
  • Helps network identify your node

Monitor performance:
  • Check PoC completion status
  • View earned rewards
  • Monitor GPU usage: nvidia-smi -l 1

Stop node:

docker compose down

Restart node:

source config.env && \
docker compose -f docker-compose.yml -f docker-compose.mlnode.yml up -d

Additional Resources

Proof of Work 2.0: Every computation advances real AI models. Earn rewards for meaningful compute contribution.