Gensyn RL Swarm

Decentralized reinforcement learning framework for distributed AI training across a global network of GPU contributors.

Overview

Gensyn RL Swarm enables collaborative RL model training across distributed devices. Participants contribute compute to train models on reasoning-gym datasets and earn testnet rewards.

Node Roles:
  • Trainer - Runs model training tasks and submits performance data
  • Judge - Evaluates reasoning predictions in the AI Prediction Market

Requirements

Hardware:
CPU Mode:
  • Processor: ARM64 or x86
  • RAM: 32GB minimum
  • Note: training may fail if other heavy applications run concurrently
GPU Mode (Recommended):
  • GPU: RTX 3090, 4090, 5090, A100, or H100
  • VRAM: 24GB+ recommended (GPUs with less VRAM are also supported)
  • CUDA: driver supporting CUDA 12.4+
System:
  • Ubuntu 22.04 or 24.04 LTS
  • Python 3.9+
  • Node.js 20+
  • Git
Network:
  • Stable internet connection
  • Open ports as required
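
Once you have shell access, a quick preflight sketch can confirm the minimums above (command names assumed to be on PATH; the GPU check only applies to GPU mode):

```shell
# Preflight: verify the software minimums from the list above.
python3 -c 'import sys; assert sys.version_info >= (3, 9)' \
  && echo "python: ok" || echo "python: 3.9+ required"
command -v git >/dev/null && echo "git: ok" || echo "git: missing"
# GPU mode only: confirm the NVIDIA driver is visible
command -v nvidia-smi >/dev/null \
  && nvidia-smi --query-gpu=name,memory.total --format=csv \
  || echo "gpu: no NVIDIA driver detected"
```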

Prerequisites

Step 1: Deploy GPU on Spheron

  1. Sign up at app.spheron.ai
  2. Add credits - Click Credits button → Add funds (card/crypto)
  3. Deploy:
    • Click Deploy in sidebar
    • Select GPU: RTX 4090, 5090, A100, or H100
    • Region: Closest to you
    • OS: Ubuntu 24.04 LTS
    • Select your SSH key
    • Click Deploy Instance

The instance is typically ready in 30-60 seconds.

Step 2: Connect to Instance

ssh root@your-instance-ip
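
Optionally, an entry in your local ~/.ssh/config can shorten the command and keep the session alive during long installs (the host alias and keep-alive values here are assumptions, not Spheron requirements):

```shell
# Append a host alias with keep-alive settings to the local SSH config.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host gensyn
    HostName your-instance-ip
    User root
    ServerAliveInterval 30
    ServerAliveCountMax 4
EOF
```

Afterwards, `ssh gensyn` connects without retyping the address.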

Step 3: Update System & Install Packages

sudo apt update
sudo apt install -y python3 python3-venv python3-pip curl wget screen git lsof ufw

Step 4: Install Node.js and Yarn

# Node.js 20
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
 
# Yarn (apt-key is deprecated on recent Ubuntu; use a dedicated keyring instead)
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo gpg --dearmor -o /usr/share/keyrings/yarnkey.gpg
echo "deb [signed-by=/usr/share/keyrings/yarnkey.gpg] https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt update
sudo apt install -y yarn
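
A quick check that both installs succeeded (the fallback messages are only for machines where installation failed):

```shell
# Print the installed toolchain versions, or flag a failed install.
node --version 2>/dev/null || echo "node: not installed"
yarn --version 2>/dev/null || echo "yarn: not installed"
```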

Step 5: Create Screen Session

screen -S gensyn

This keeps the node running even if your SSH connection drops.

Step 6: Clone Repository

git clone https://github.com/gensyn-ai/rl-swarm.git
cd rl-swarm

Step 7: Setup Python Environment

python3 -m venv .venv
source .venv/bin/activate
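
To confirm the environment is actually active before installing anything, a small sketch (it repeats the two commands above so it runs standalone):

```shell
# Create/activate the environment, then check that python resolves inside .venv.
python3 -m venv .venv
source .venv/bin/activate
python -c 'import sys; print(sys.prefix)'   # prints a path ending in /.venv
```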

Step 8: Run RL Swarm Node

chmod +x ./run_rl_swarm.sh
./run_rl_swarm.sh

Wait for: Waiting for userData.json to be created...

Step 9: Access Web Interface

In a new terminal (keep the first terminal running):

# Connect to instance again
ssh root@your-instance-ip
 
# Install LocalTunnel
sudo npm install -g localtunnel
 
# Generate tunnel password
curl https://loca.lt/mytunnelpassword
 
# Expose port 3000
lt --port 3000

Access URL: Open the provided https://[name].loca.lt URL in your browser

Login:
  • Password: the tunnel password returned by the curl command above, i.e. your VM's public IP address (e.g., 38.224.253.251)
  • Sign in with your Gensyn account (Google/email)

Step 10: Configuration Prompts

Return to the first terminal, which shows "Waiting for userData.json to be created..."

Prompt 1 - HuggingFace Upload:
Would you like to push models you train to HuggingFace Hub? [y/N]
  • Recommended: press N (uploading requires 2GB per model)
  • Press Y to upload models (you will need to provide a HuggingFace token)
Prompt 2 - Choose Model:
Enter model name in huggingface repo/name format, or press [Enter] for default.
  • Recommended: press Enter for the default
  • Or choose a model that fits your VRAM capacity
Prompt 3 - AI Prediction Market:
Would you like to participate in the AI Prediction Market? (Y/n)
  • Recommended: press Y or Enter to join
  • This enables the Judge role for prediction evaluation

Step 11: Identify Node Name

After setup completes, note your unique node name printed in the logs, e.g.:

Hello sprightly placid crane

Find your node at dashboard.gensyn.ai

Running in Background

Detach from Screen

# Press: Ctrl+A then D

Reattach Later

screen -r gensyn

Backup Swarm Key

From your local machine:
scp root@your-instance-ip:~/rl-swarm/swarm.pem ~/swarm.pem
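
To guard against a corrupted copy, it can help to record a checksum alongside the backup. A minimal sketch (a temp file stands in for ~/swarm.pem so it runs anywhere; substitute the real path in practice):

```shell
# Sketch: record and later verify a checksum for the backed-up key.
KEY=$(mktemp)                         # stand-in for ~/swarm.pem
echo "dummy key material" > "$KEY"
chmod 600 "$KEY"                      # keep the private key unreadable to others
sha256sum "$KEY" | awk '{print $1}' > "$KEY.sha256"
# Later, confirm the backup is intact:
printf '%s  %s\n' "$(cat "$KEY.sha256")" "$KEY" | sha256sum -c -
```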

Verification

Check node status:
  • Visit dashboard.gensyn.ai
  • Verify node appears with your unique name
  • Check leaderboard for trainer/judge rankings
Monitor locally:
# Reattach to screen
screen -r gensyn
 
# Check processes
ps aux | grep swarm

Troubleshooting

userData.json not created:
  • Verify LocalTunnel is running
  • Check web interface is accessible
  • Ensure logged in correctly on web interface
Installation errors:
# Check Python environment
source .venv/bin/activate
python --version
 
# Check dependencies
pip list
LocalTunnel connection issues:
# Restart LocalTunnel (the lt process runs under node, so match the full command)
pkill -f "lt --port"
lt --port 3000
 
# Or use a different tunnel service (ngrok requires a free account and authtoken)
sudo npm install -g ngrok
ngrok config add-authtoken <your-token>
ngrok http 3000
Node not appearing on dashboard:
  • Verify your internet connection is stable
  • Check the node logs for errors
  • Ensure you completed all configuration prompts
  • Restart the node: ./run_rl_swarm.sh
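If the tunnel will not start, it can also help to confirm nothing else is holding port 3000 (lsof was installed in Step 3; the port number comes from the guide):

```shell
# Show whatever process is bound to port 3000, or report that it is free.
lsof -i :3000 || echo "port 3000 is free"
```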

Additional Resources

Testnet Benefits: Early ecosystem access, research participation, verifiable compute, community rewards, no hardware lock-in.