Pluralis Node0-7.5B
Collaborative multi-participant model training via Protocol Learning. Node0-7.5B enables permissionless participation in distributed AI model pretraining.
Overview
Pluralis Protocol Learning allows multiple participants to collaboratively train large-scale foundation models without central ownership. Models remain unextractable and become collectively owned protocol assets.
Node0-7.5B: Permissionless, model-parallel pretraining framework for GPUs with 16GB+ VRAM.
Requirements
Hardware:
- GPU: 16GB+ VRAM (e.g., RTX 4090, A100, H100)
- RAM: 16GB+ recommended
- Storage: 50GB free
- Network: Stable connection

Software:
- Ubuntu 22.04 or 24.04
- Python 3.11
- Miniconda
- Git
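The version requirements above can be checked mechanically. A minimal sketch, assuming GNU coreutils is available; `check_version` is a hypothetical helper, not part of Node0:

```shell
# Hypothetical helper: succeed when an installed version meets a minimum.
# Relies on GNU coreutils' version-aware sort (sort -V).
check_version() {
  # If the minimum ($2) sorts at or before the actual version ($1),
  # the pair is already in order and `sort -C` exits 0.
  printf '%s\n%s\n' "$2" "$1" | sort -V -C
}

# Example: does the system Python meet the Python 3.11 requirement?
py=$(python3 -c 'import platform; print(platform.python_version())')
check_version "$py" "3.11" && echo "Python $py: ok" || echo "Python $py: too old"
```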
Prerequisites
- Spheron account
- Payment method configured
- SSH key
- HuggingFace account and token
Step 1: Deploy GPU on Spheron
- Sign up at app.spheron.ai
- Add credits: click the Credits button → Add funds (card or crypto)
- Deploy:
  - Click Deploy in the sidebar
  - Select a GPU: RTX 4090, A100, or H100 (16GB+ VRAM)
  - Region: closest to you
  - OS: Ubuntu 22.04 or 24.04 LTS
  - Select your SSH key
  - Click Deploy Instance

The instance is typically ready in 30-60 seconds.
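Once the instance is up, an entry in `~/.ssh/config` saves retyping the IP when connecting. A sketch only: the `node0-gpu` alias and key path are illustrative, and `your-instance-ip` is a placeholder for your instance's address:

```shell
# Illustrative SSH config entry; substitute your instance IP and key path.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host node0-gpu
    HostName your-instance-ip
    User root
    IdentityFile ~/.ssh/id_ed25519
EOF
```

With this in place, `ssh node0-gpu` is equivalent to `ssh root@your-instance-ip`.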
Step 2: Connect to Instance
```shell
ssh root@your-instance-ip
```
Step 3: Install Dependencies
```shell
# Refresh package lists first
sudo apt update

# Install PyTorch (CPU build is sufficient for setup)
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

# Install Git
sudo apt install -y git
```
Step 4: Clone Repository
```shell
git clone https://github.com/PluralisResearch/node0
cd node0
```
Step 5: Install Miniconda
```shell
# Download installer
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh

# Install
bash ~/miniconda.sh -b -p ~/miniconda3

# Initialize
~/miniconda3/bin/conda init bash

# Clean up
rm ~/miniconda.sh

# Verify
source ~/miniconda3/etc/profile.d/conda.sh && conda --version
```
Step 6: Create Conda Environment
```shell
# Create environment
conda create -n node0 python=3.11 -y

# Activate
conda activate node0

# Install Node0 (from the node0 repository root)
pip install .
```
Step 7: Configure Node0
```shell
# Generate configuration
python3 generate_script.py --host_port 49200 --announce_port 22
```
When prompted for a HuggingFace token:
- Visit huggingface.co/settings/tokens
- Create a new token with "Read" permissions
- Copy and paste it when prompted
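Token errors at this step are usually copy-paste problems. A rough client-side sanity check can catch them early; `check_token` is a hypothetical helper, and the `hf_` prefix reflects the current HuggingFace token format:

```shell
# Hypothetical helper: rough sanity check on a HuggingFace token string.
check_token() {
  case "$1" in
    "")     echo "token is empty" ;;
    *" "*)  echo "token contains whitespace" ;;
    hf_*)   echo "token format looks ok" ;;
    *)      echo "token does not start with hf_" ;;
  esac
}

check_token "hf_abc123"    # → token format looks ok
check_token " hf_abc123"   # → token contains whitespace
```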
Step 8: Start Node0 Server
```shell
./start_server.sh
```
The server starts and begins listening on the configured ports.
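A simple readiness check is to poll the host port from Step 7 until it accepts connections. A sketch only: `wait_for_port` is a hypothetical helper, and `/dev/tcp` is a bash-only feature:

```shell
# Hypothetical helper: poll a local TCP port once per second, up to $2 tries.
# Uses bash's built-in /dev/tcp; a successful connect means the port is up.
wait_for_port() {
  for _ in $(seq 1 "$2"); do
    if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
      echo "port $1 is up"
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for port $1"
  return 1
}

# Usage after ./start_server.sh:
#   wait_for_port 49200 30
```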
Verification
Check server status:
```shell
# Monitor logs
tail -f logs/node0.log

# Verify the process is running
ps aux | grep node0
```
- Check the Pluralis dashboard for your node
- Verify network connectivity
- Monitor contribution metrics
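The log tail above can be condensed into a quick health signal. A minimal sketch: `count_errors` is a hypothetical helper, and `ERROR` as the level marker is an assumption about Node0's log format:

```shell
# Hypothetical helper: count ERROR-level lines in a log file.
# ("ERROR" as the log-level marker is an assumption about the format.)
count_errors() {
  # grep -c exits non-zero when there are no matches; mask that so the
  # helper is safe under `set -e`.
  grep -c "ERROR" "$1" 2>/dev/null || true
}

# Usage: count_errors logs/node0.log
```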
Troubleshooting
Installation fails:
```shell
# Verify Python version
python --version

# Check conda environment
conda env list
```

Token errors:
- Verify the token has "Read" permissions
- Regenerate the token if it has expired
- Check the token was copied correctly (no extra spaces)
Connection issues:
```shell
# Check ports are available
lsof -i :49200
lsof -i :22

# View error logs
cat logs/node0.log
```
- Verify the firewall allows ports 49200 and 22
- Check the GPU is accessible:
```shell
nvidia-smi
```
- Ensure sufficient VRAM is available
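The GPU checks above can be combined into one sketch; the `--query-gpu` flags are standard `nvidia-smi` options, `check_vram` is a hypothetical helper, and the 16GB threshold mirrors the hardware requirement:

```shell
# Hypothetical helper: warn when free VRAM on the first GPU is below the
# 16GB requirement; prints a notice when no NVIDIA driver is present.
check_vram() {
  if ! command -v nvidia-smi >/dev/null 2>&1; then
    echo "nvidia-smi not found"
    return 0
  fi
  free_mib=$(nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits | head -n1)
  if [ "$free_mib" -ge 16000 ]; then
    echo "VRAM ok: ${free_mib} MiB free"
  else
    echo "VRAM low: ${free_mib} MiB free"
  fi
}

check_vram
```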
Additional Resources
- Pluralis Research GitHub
- Getting Started - Spheron deployment
- SSH Connection - SSH setup
- General Info - Support channels
Protocol Learning: Decentralized, collaborative AI model training with collective ownership and transparent contribution tracking.