Ubuntu Environments
Available Ubuntu configurations for GPU instances, optimized for AI/ML workloads.
Available Versions
| Version | Support Until | Best For |
|---|---|---|
| Ubuntu 20.04 LTS | 2025 | Legacy applications, older dependencies |
| Ubuntu 22.04 LTS | 2027 | Production workloads (most stable) |
| Ubuntu 24.04 LTS | 2029 | Latest features, experimental projects |
Configuration Options
Base Images
- Base / LTS Base - Minimal Ubuntu installation
- + NVIDIA 550/570 - NVIDIA drivers pre-installed
- + CUDA X.X - CUDA toolkit included
- + Docker - Docker pre-installed for containerized workflows
- Shade OS - Optimized lightweight version for maximum GPU performance
Pre-configured ML Environments (Ubuntu 24.04)
| Environment | Includes | Best For |
|---|---|---|
| ML Everything | PyTorch, TensorFlow, JAX | Multi-framework experimentation |
| ML PyTorch | PyTorch optimized | LLM training, computer vision |
| ML TensorFlow | TensorFlow optimized | Production ML, enterprise |
CUDA Versions Available
| CUDA Version | Features | Compatibility |
|---|---|---|
| 12.0 | Baseline, maximum compatibility | Older frameworks |
| 12.4 | Bug fixes, stable | General AI development |
| 12.6 | Newer GPU optimizations | RTX 5090, H100 |
| 12.8 Open | Open-source drivers | Community projects |
| 13.0 Open | Latest features | Cutting-edge research |
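To confirm which toolkit a running instance actually ships, `nvcc --version` reports the release number. A minimal parser sketch is below; it runs against a sample of nvcc's typical output (the version string shown is illustrative), and on a live instance you would feed it the real command output instead:

```python
import re

def cuda_release(nvcc_output: str) -> str:
    """Extract the CUDA release (e.g. '12.4') from `nvcc --version` output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    if match is None:
        raise ValueError("no CUDA release found in nvcc output")
    return match.group(1)

# Illustrative sample of nvcc's typical output; on an instance, use:
#   subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout
sample = (
    "nvcc: NVIDIA (R) Cuda compiler driver\n"
    "Cuda compilation tools, release 12.4, V12.4.131\n"
)
print(cuda_release(sample))  # -> 12.4
```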
Selection Guide
| Use Case | Recommended Environment | Why |
|---|---|---|
| Beginners | Ubuntu 24.04 ML Everything | All frameworks pre-installed |
| LLM Training | Ubuntu 24.04 ML PyTorch | PyTorch optimized |
| TensorFlow | Ubuntu 24.04 ML TensorFlow | TensorFlow optimized |
| Production | Ubuntu 22.04 + CUDA 12.8 + Docker | Stable, containerized |
| Research | Ubuntu 24.04 + CUDA 13.0 Open | Latest features |
| Legacy Apps | Ubuntu 20.04 LTS | Older dependency support |
| Max Performance | Ubuntu 22.04 (Shade OS) | Optimized, minimal overhead |
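For automated provisioning, the guide above can be codified as a lookup. A minimal sketch, where the mapping mirrors the table exactly and the function name and default are illustrative:

```python
# Mirrors the Selection Guide table; keys and values come from the table above.
RECOMMENDED = {
    "beginners": "Ubuntu 24.04 ML Everything",
    "llm training": "Ubuntu 24.04 ML PyTorch",
    "tensorflow": "Ubuntu 24.04 ML TensorFlow",
    "production": "Ubuntu 22.04 + CUDA 12.8 + Docker",
    "research": "Ubuntu 24.04 + CUDA 13.0 Open",
    "legacy apps": "Ubuntu 20.04 LTS",
    "max performance": "Ubuntu 22.04 (Shade OS)",
}

def recommend(use_case: str) -> str:
    """Return the recommended environment, defaulting to the beginner pick."""
    return RECOMMENDED.get(use_case.lower(), "Ubuntu 24.04 ML Everything")

print(recommend("LLM Training"))  # -> Ubuntu 24.04 ML PyTorch
```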
Docker vs Non-Docker
Without Docker:
- Direct GPU access
- Simpler setup
- Single-purpose instances
- Good for: Learning, simple projects

With Docker:
- Containerized workflows
- Dependency isolation
- Multi-project instances
- Good for: Production, complex setups
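On a + Docker image, GPU access from inside a container can be sanity-checked with one command. The image tag below is an example (match it to your instance's CUDA version), and this assumes the NVIDIA Container Toolkit is present, as GPU-ready Docker images typically include:

```bash
# Runs nvidia-smi inside a throwaway CUDA container and removes it on exit.
# --gpus all exposes every host GPU to the container; if this prints the
# same GPU table as nvidia-smi on the host, containers can see the GPUs.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```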
Deploying
- Go to app.spheron.ai → Deploy
- Select GPU
- Choose Ubuntu environment from OS dropdown
- Deploy (ready in 30-60 seconds)
Verify Installation
After deployment, connect and verify:
```bash
# Connect
ssh root@your-instance-ip

# Check OS version
cat /etc/os-release

# Check CUDA (if applicable)
nvcc --version

# Check GPU
nvidia-smi

# Check Docker (if applicable)
docker --version
```
Frequently Asked Questions
What does LTS mean?
Long Term Support - 5 years of security updates and bug fixes.
Can I change environments after deployment?
No. Deploy new instance with desired environment.
Do I need Docker?
Not for simple projects. Use Docker for complex dependencies or multi-project instances.
Which CUDA version should I use?
CUDA 12.8 for best balance. Check framework compatibility first.
Can I install multiple CUDA versions?
Not recommended. Select correct version during deployment.
Ubuntu 22.04 or 24.04?
22.04 for production stability. 24.04 for latest features.
What is Shade OS?
Optimized Ubuntu variant with minimal overhead for maximum GPU performance.
Additional Resources
- Getting Started - Deploy your first instance
- Quick Start - Fast deployment
- SSH Connection - SSH setup guide
- TensorFlow - TensorFlow environment
- Jupyter - Jupyter Notebook setup