# AI Node Guides
Guides for deploying and running AI network nodes on Spheron GPU instances. Participate in decentralized AI compute networks and distributed model training protocols.
## Choosing the Right Instance for AI Nodes
| Node Type | Recommended Instance | Why |
|---|---|---|
| Gonka AI Node | Dedicated (A100 / H100) | Sustained uptime for Proof of Work 2.0 tasks |
| Pluralis Node0 | Dedicated (RTX 4090 / A100) | 16GB+ VRAM required for collaborative training |
Dedicated instances are recommended for node operations to ensure consistent availability. Spot instances may be interrupted, which can affect node participation and rewards.
## Available Guides
### Gonka AI Node
Decentralized AI network using Proof of Work 2.0 for meaningful compute contribution to AI training and inference. Docker-based deployment on A100 or H100 instances.
Best for: Contributing GPU compute to the Gonka decentralized AI network.
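Since the guide describes a Docker-based deployment, a minimal sketch of launching a GPU-enabled container looks like the following. The image name `gonka/node` and the port are placeholders, not the project's actual values; consult the full Gonka guide for the real image and flags. This assumes the NVIDIA Container Toolkit is installed on the instance.

```shell
# Confirm the GPU is visible to Docker before starting the node
# (requires the NVIDIA Container Toolkit on the host).
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Hypothetical node launch: image name, port, and volume path are
# placeholders -- substitute the values from the Gonka guide.
docker run -d \
  --gpus all \
  --restart unless-stopped \
  -p 8080:8080 \
  -v "$HOME/gonka-data:/data" \
  gonka/node:latest
```

`--restart unless-stopped` is worth including for node workloads, since sustained uptime affects participation and rewards.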
### Pluralis Node0
Collaborative multi-participant model training via Protocol Learning. Node0-7.5B enables permissionless participation in distributed AI model pretraining on GPUs with 16GB or more of VRAM.
Best for: Participating in collaborative distributed AI model training.
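Before joining, it is worth confirming that your instance actually meets the 16GB VRAM requirement. A quick check with `nvidia-smi`:

```shell
# List each GPU's model and total memory; an RTX 4090 reports
# roughly 24576 MiB, an A100 40960 or 81920 MiB -- all above
# the 16 GB (16384 MiB) minimum for Node0-7.5B.
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
```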
## Additional Resources
- Instance Types: Spot vs Dedicated vs Cluster
- Networking: SSH tunneling and port access
- Getting Started: Deploy your first Spheron instance
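For the networking item above, the common pattern is an SSH local port forward, which exposes a service running on the instance (a node dashboard or RPC endpoint) on your own machine without opening the port publicly. The port, key path, username, and IP below are placeholders; use the values from your Spheron deployment.

```shell
# Forward local port 8080 to port 8080 on the remote instance.
# Replace the key path, user, and IP with your instance's details.
ssh -i ~/.ssh/spheron_key -L 8080:localhost:8080 root@INSTANCE_IP

# While the tunnel is open, the remote service is reachable at
# http://localhost:8080 on your local machine.
```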