Templates & Images
Ready-to-copy cloud-init startup scripts organized by use case. Paste the script into the Startup Script field when deploying an instance.
For a full introduction to startup scripts, see the Startup Script guide.
Available Templates
| Template | Minimum GPU | Stack |
|---|---|---|
| PyTorch + CUDA 12.1 | RTX 4090 | Python 3.11, PyTorch 2.x |
| TensorFlow 2.x | RTX 4090 | Python 3.11, TF 2.x, CUDA |
| JupyterLab ML Stack | RTX 4090 | PyTorch + TF + JupyterLab |
| Docker + NVIDIA Runtime | Any | Docker, NVIDIA Container Toolkit |
| Prometheus + Grafana | Any | Docker Compose monitoring |
| vLLM Inference Server | H100 / A100 | vLLM, OpenAI-compatible API |
| Ollama + Open WebUI | RTX 4090 | Docker, Ollama, Open WebUI |
PyTorch + CUDA 12.1
Installs Python 3.11, PyTorch 2.x with CUDA 12.1, and common ML libraries.
#cloud-config
runcmd:
- apt-get update -y
- apt-get install -y python3.11 python3.11-venv
- python3.11 -m ensurepip --upgrade
- python3.11 -m pip install --upgrade pip
- python3.11 -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
- python3.11 -m pip install transformers accelerate bitsandbytes datasets

Verify the installation once the instance is up:

python3.11 -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"

TensorFlow 2.x
Installs Python 3.11, TensorFlow 2.x with CUDA support, and common data science libraries.
#cloud-config
runcmd:
- apt-get update -y
- apt-get install -y python3.11 python3.11-venv
- python3.11 -m ensurepip --upgrade
- python3.11 -m pip install --upgrade pip
- python3.11 -m pip install tensorflow[and-cuda]
- python3.11 -m pip install numpy pandas scikit-learn matplotlib

Verify that TensorFlow can see the GPU:

python3.11 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

JupyterLab ML Stack
Full ML environment with PyTorch, TensorFlow, and JupyterLab. JupyterLab starts on port 8888.
#cloud-config
runcmd:
- apt-get update -y
- apt-get install -y python3.11 python3.11-venv
- python3.11 -m ensurepip --upgrade
- python3.11 -m pip install --upgrade pip
- python3.11 -m pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
- python3.11 -m pip install tensorflow[and-cuda]
- python3.11 -m pip install jupyterlab transformers accelerate datasets matplotlib scikit-learn
- |
  cat > /etc/systemd/system/jupyterlab.service << 'EOF'
  [Unit]
  Description=JupyterLab
  After=network.target
  [Service]
  Type=simple
  ExecStart=/usr/bin/python3.11 -m jupyterlab --ip=0.0.0.0 --port=8888 --no-browser --allow-root
  Restart=on-failure
  RestartSec=10
  [Install]
  WantedBy=multi-user.target
  EOF
- systemctl daemon-reload
- systemctl enable jupyterlab
- systemctl start jupyterlab

JupyterLab requires a login token by default; retrieve it from the service logs:

journalctl -u jupyterlab | grep token

Docker + NVIDIA Runtime
Installs Docker CE and the NVIDIA Container Toolkit so you can run GPU-accelerated containers.
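Once the toolkit is configured, GPU containers can also be declared in Docker Compose. A sketch using the Compose device-reservation syntax (the service name and image here are illustrative, not part of the template):

```yaml
services:
  gpu-check:
    image: nvidia/cuda:12.1.0-base-ubuntu22.04
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```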
#cloud-config
runcmd:
- apt-get update -y
- apt-get install -y ca-certificates curl gnupg
- install -m 0755 -d /etc/apt/keyrings
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- chmod a+r /etc/apt/keyrings/docker.gpg
- echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list
- apt-get update -y
- apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
- curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
- curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' > /etc/apt/sources.list.d/nvidia-container-toolkit.list
- apt-get update -y
- apt-get install -y nvidia-container-toolkit
- nvidia-ctk runtime configure --runtime=docker
- systemctl restart docker

Verify GPU access from a container:

docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi

Prometheus + Grafana
Sets up a Docker Compose monitoring stack with Prometheus and Grafana on port 3000.
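Once the stack below is running, Prometheus answers PromQL over its HTTP API. Note that the compose file does not publish Prometheus on the host, so this sketch assumes you either add a "9090:9090" port mapping to the prometheus service or query from inside the Docker network; host and port here are assumptions.

```python
import urllib.parse

# Assumed endpoint; see the note above about exposing port 9090.
PROM_URL = "http://localhost:9090/api/v1/query"

def build_query_url(promql: str) -> str:
    """URL-encode a PromQL expression for Prometheus's instant-query endpoint."""
    return PROM_URL + "?" + urllib.parse.urlencode({"query": promql})

# Per-core CPU usage rate as reported by node-exporter:
url = build_query_url("rate(node_cpu_seconds_total[5m])")
# import json, urllib.request
# print(json.load(urllib.request.urlopen(url))["data"]["result"][:3])
```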
#cloud-config
runcmd:
- apt-get update -y
- apt-get install -y ca-certificates curl gnupg
- install -m 0755 -d /etc/apt/keyrings
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- chmod a+r /etc/apt/keyrings/docker.gpg
- echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list
- apt-get update -y
- apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
- |
  ADMIN_PASS=$(openssl rand -hex 16)
  mkdir -p /opt/monitoring
  echo "Grafana admin password: $ADMIN_PASS" > /root/grafana-credentials.txt
  chmod 600 /root/grafana-credentials.txt
  echo "GF_SECURITY_ADMIN_PASSWORD=$ADMIN_PASS" > /opt/monitoring/grafana.env
  chmod 600 /opt/monitoring/grafana.env
  cat > /opt/monitoring/docker-compose.yml << 'EOF'
  services:
    prometheus:
      image: prom/prometheus:latest
      container_name: prometheus
      volumes:
        - ./prometheus.yml:/etc/prometheus/prometheus.yml
      restart: unless-stopped
    grafana:
      image: grafana/grafana:latest
      container_name: grafana
      ports:
        - "3000:3000"
      env_file:
        - ./grafana.env
      restart: unless-stopped
    node-exporter:
      image: prom/node-exporter:latest
      container_name: node-exporter
      volumes:
        - /proc:/host/proc:ro
        - /sys:/host/sys:ro
        - /:/rootfs:ro
      command:
        - '--path.procfs=/host/proc'
        - '--path.rootfs=/rootfs'
        - '--path.sysfs=/host/sys'
      restart: unless-stopped
  EOF
  chmod 644 /opt/monitoring/docker-compose.yml
- |
  cat > /opt/monitoring/prometheus.yml << 'EOF'
  global:
    scrape_interval: 15s
  scrape_configs:
    - job_name: 'node'
      static_configs:
        - targets: ['node-exporter:9100']
  EOF
- docker compose -f /opt/monitoring/docker-compose.yml up -d

Access Grafana at http://localhost:3000 via SSH tunnel. Log in as admin with the generated password stored in /root/grafana-credentials.txt on your instance:
cat /root/grafana-credentials.txt

vLLM Inference Server
Installs vLLM and starts an OpenAI-compatible inference server on port 8000. See the full vLLM guide for configuration details.
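Once the server below is running, it speaks the OpenAI chat-completions protocol on port 8000. A minimal stdlib client sketch; "localhost" assumes an SSH tunnel to the instance, and the model name must match whatever the server was launched with:

```python
import json
import urllib.request

# Assumed endpoint: SSH tunnel to the instance's port 8000.
VLLM_URL = "http://localhost:8000/v1/chat/completions"
# Adjust to the model the server was started with.
MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"

def build_chat_request(prompt: str, model: str = MODEL) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for the vLLM server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }).encode()
    return urllib.request.Request(
        VLLM_URL, data=body, headers={"Content-Type": "application/json"}
    )

# With the server running:
# with urllib.request.urlopen(build_chat_request("Hello!")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```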
#cloud-config
runcmd:
- apt-get update -y
- apt-get install -y python3-pip
- pip install vllm
- |
  cat > /etc/systemd/system/vllm.service << 'EOF'
  [Unit]
  Description=vLLM Inference Server
  After=network.target
  [Service]
  Type=simple
  ExecStart=/usr/bin/python3 -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --tensor-parallel-size 1 \
    --port 8000 \
    --gpu-memory-utilization 0.9
  Restart=on-failure
  RestartSec=10
  [Install]
  WantedBy=multi-user.target
  EOF
- systemctl daemon-reload
- systemctl enable vllm
- systemctl start vllm

Ollama + Open WebUI
Installs Docker and starts Ollama with the Open WebUI browser interface on port 3000. See the full Ollama guide for model usage.
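Besides the browser interface, the compose file below also publishes Ollama's native REST API on port 11434. A minimal stdlib client sketch; "localhost" assumes an SSH tunnel to the instance:

```python
import json
import urllib.request

# Assumed endpoint: SSH tunnel to the instance's port 11434.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the Ollama REST API."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

# After pulling a model (e.g. `docker exec ollama ollama pull llama3`):
# with urllib.request.urlopen(build_generate_request("llama3", "Hi")) as resp:
#     print(json.load(resp)["response"])
```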
#cloud-config
runcmd:
- apt-get update -y
- apt-get install -y ca-certificates curl gnupg
- install -m 0755 -d /etc/apt/keyrings
- curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
- chmod a+r /etc/apt/keyrings/docker.gpg
- echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" > /etc/apt/sources.list.d/docker.list
- apt-get update -y
- apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
- curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
- curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' > /etc/apt/sources.list.d/nvidia-container-toolkit.list
- apt-get update -y
- apt-get install -y nvidia-container-toolkit
- nvidia-ctk runtime configure --runtime=docker
- systemctl restart docker
- mkdir -p /opt/ollama
- |
  cat > /opt/ollama/docker-compose.yml << 'EOF'
  services:
    ollama:
      image: ollama/ollama:latest
      container_name: ollama
      runtime: nvidia
      environment:
        - NVIDIA_VISIBLE_DEVICES=all
      volumes:
        - ollama_data:/root/.ollama
      ports:
        - "11434:11434"
      restart: unless-stopped
    open-webui:
      image: ghcr.io/open-webui/open-webui:main
      container_name: open-webui
      ports:
        - "3000:8080"
      environment:
        - OLLAMA_BASE_URL=http://ollama:11434
      volumes:
        - webui_data:/app/backend/data
      depends_on:
        - ollama
      restart: unless-stopped
  volumes:
    ollama_data:
    webui_data:
  EOF
- docker compose -f /opt/ollama/docker-compose.yml up -d

Additional Resources
- Startup Script guide: Cloud-init syntax and best practices
- vLLM guide: Full vLLM configuration
- Ollama guide: Model management and memory guidelines
- Networking: SSH tunneling and port exposure