Pluralis Node0-7.5B

Collaborative multi-participant model training via Protocol Learning. Node0-7.5B enables permissionless participation in distributed AI model pretraining.

Overview

Pluralis Protocol Learning allows multiple participants to collaboratively train large-scale foundation models without central ownership. Models remain unextractable and become collectively owned protocol assets.

Node0-7.5B: Permissionless, model-parallel pretraining framework for GPUs with 16GB+ VRAM.

Requirements

Hardware:
  • GPU: 16GB+ VRAM
  • RAM: 16GB+ recommended
  • Storage: 50GB free
  • Network: Stable connection
Recommended GPUs:
  • RTX 4090, A100, H100
Software:
  • Ubuntu 22.04 or 24.04
  • Python 3.11
  • Miniconda
  • Git
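
The requirements above can be sanity-checked before deploying anything. Below is a hypothetical pre-flight script (not part of Node0); it reports GPU VRAM, total RAM, and free disk space so you can compare against the 16GB VRAM / 16GB RAM / 50GB guidance. It assumes a Linux host, matching the Ubuntu requirement.

```shell
# Hypothetical pre-flight check for the requirements above (not part of Node0).

# GPU: print name and VRAM if the NVIDIA driver is installed
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "WARN: nvidia-smi not found (is the NVIDIA driver installed?)"
fi

# Total RAM in GB (Linux only)
awk '/MemTotal/ {printf "RAM: %.1f GB\n", $2/1024/1024}' /proc/meminfo

# Free disk space on the root filesystem
df -h / | awk 'NR==2 {print "Free disk on /: " $4}'
```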

Prerequisites

Step 1: Deploy GPU on Spheron

  1. Sign up at app.spheron.ai
  2. Add credits - Click the Credits button → Add funds (card or crypto)
  3. Deploy:
    • Click Deploy in sidebar
    • Select GPU: RTX 4090, A100, or H100 (16GB+ VRAM)
    • Region: Closest to you
    • OS: Ubuntu 22.04 or 24.04 LTS
    • Select your SSH key
    • Click Deploy Instance

Instance ready in 30-60 seconds.

Step 2: Connect to Instance

ssh root@your-instance-ip
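
Optionally, an SSH host alias saves retyping the IP on every connection. A minimal sketch; the alias name `node0` is illustrative, and `your-instance-ip` is the address shown in the Spheron dashboard:

```shell
# Optional: add a host alias so "ssh node0" connects to the instance.
# Replace your-instance-ip with the address from the Spheron dashboard.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host node0
    HostName your-instance-ip
    User root
EOF
chmod 600 ~/.ssh/config
```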

Step 3: Install Dependencies

# Install PyTorch (CPU version for setup)
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
 
# Install Git
sudo apt install -y git
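
A quick check that both installs succeeded; the commands are guarded so they only warn rather than abort:

```shell
# Verify the Step 3 installs: PyTorch importable, git on PATH
python3 -c 'import torch; print("torch", torch.__version__)' 2>/dev/null \
  || echo "WARN: torch not importable (re-run the pip3 install above)"
git --version 2>/dev/null || echo "WARN: git not found"
```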

Step 4: Clone Repository

git clone https://github.com/PluralisResearch/node0
cd node0

Step 5: Install Miniconda

# Download installer
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
 
# Install
bash ~/miniconda.sh -b -p ~/miniconda3
 
# Initialize
~/miniconda3/bin/conda init bash
 
# Clean up
rm ~/miniconda.sh
 
# Verify
source ~/miniconda3/etc/profile.d/conda.sh && conda --version

Step 6: Create Conda Environment

# Create environment
conda create -n node0 python=3.11 -y
 
# Activate
conda activate node0
 
# Install Node0
pip install .
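
To confirm the environment is set up correctly (the env name `node0` comes from the create command above), the following sketch checks it exists and runs Python 3.11; it is guarded in case conda is not on PATH in the current shell:

```shell
# Check the node0 environment exists and runs Python 3.11 (name from Step 6)
if command -v conda >/dev/null 2>&1; then
  conda env list | grep node0 || echo "WARN: node0 env not found"
  conda run -n node0 python --version
else
  echo "conda not on PATH; run: source ~/miniconda3/etc/profile.d/conda.sh"
fi
```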

Step 7: Configure Node0

# Generate configuration
python3 generate_script.py --host_port 49200 --announce_port 22

When prompted, enter your Hugging Face token:
  1. Visit huggingface.co/settings/tokens
  2. Create new token with "Read" permissions
  3. Copy and paste when prompted
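
You can verify a token before pasting it by calling Hugging Face's public `whoami-v2` API endpoint. The `HF_TOKEN` variable name below is just a local convention, not something Node0 reads:

```shell
# Optional: verify a Hugging Face token before pasting it in.
# Export your token first: export HF_TOKEN=<your token>
if [ -n "${HF_TOKEN:-}" ]; then
  # A valid read token returns your account details; an invalid one returns an error
  curl -s -H "Authorization: Bearer ${HF_TOKEN}" https://huggingface.co/api/whoami-v2
else
  echo "HF_TOKEN not set; skipping token check"
fi
```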

Step 8: Start Node0 Server

./start_server.sh

The server starts and listens on the ports configured in Step 7.
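
One way to confirm it is listening on the host port from Step 7; this assumes `ss` from iproute2, which ships with Ubuntu:

```shell
# Check the host port configured in Step 7 is accepting connections
ss -tln 2>/dev/null | grep -q ':49200' \
  && echo "port 49200 is listening" \
  || echo "port 49200 not listening yet; check logs/node0.log"
```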

Verification

Check server status:
# Monitor logs
tail -f logs/node0.log
 
# Verify process running
ps aux | grep node0

Confirm participation:
  • Check Pluralis dashboard for your node
  • Verify network connectivity
  • Monitor contribution metrics

Troubleshooting

Installation fails:
# Verify Python version
python --version
 
# Check conda environment
conda env list

Hugging Face token error:
  • Verify token has "Read" permissions
  • Regenerate token if expired
  • Check token copied correctly (no spaces)
Server won't start:
# Check ports available
lsof -i :49200
lsof -i :22
 
# View error logs
cat logs/node0.log

Connection issues:
  • Verify firewall allows ports 49200 and 22
  • Check GPU is accessible: nvidia-smi
  • Ensure sufficient VRAM available
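
If the firewall is the culprit, the ports can be opened with ufw, Ubuntu's default firewall frontend. A sketch, assuming root access on the instance:

```shell
# Open the ports Node0 uses (from Step 7) with ufw, if it is installed
if command -v ufw >/dev/null 2>&1; then
  sudo ufw allow 49200/tcp
  sudo ufw allow 22/tcp
  sudo ufw status
else
  echo "ufw not installed; open ports 49200 and 22 in your provider's firewall instead"
fi
```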

Additional Resources

Protocol Learning: Decentralized, collaborative AI model training with collective ownership and transparent contribution tracking.