Startup Scripts
Automate instance configuration on first boot with startup scripts.
Overview
What: Scripts that run automatically when your instance first boots
Formats Supported:
- Cloud-init (YAML) - Recommended for complex setups
- Bash scripts - Traditional shell scripts
Why:
- Automate environment setup (saves 15-30 minutes per deployment)
- Ensure consistent configuration across instances
- Install dependencies automatically
Adding Startup Scripts
During deployment, paste your script in the Startup Script section or upload a script file.
The platform auto-detects format (cloud-init or bash) and executes appropriately.
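In practice the two formats are distinguished by their first line, which is also what cloud-init itself keys on, so make sure it is present and exact (the platform's precise detection rule is not documented here; this is the standard cloud-init convention):
```
#cloud-config    <- first line of a cloud-init script; YAML follows
#!/bin/bash      <- first line of a traditional shell script
```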
Example Scripts
Basic Setup
Install essential packages and update system:
```yaml
#cloud-config
# Update package database and install essential packages
package_update: true
packages:
  - curl
  - git
  - build-essential
  - python3-pip
# Run commands on first boot
runcmd:
  - echo "Instance setup started" >> /var/log/startup.log
  - apt-get update
  - echo "Setup complete" >> /var/log/startup.log
```
Docker Setup
Install Docker and Docker Compose with required dependencies:
```yaml
#cloud-config
# Install Docker and Docker Compose
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg
  - lsb-release
runcmd:
  # Install Docker via the official convenience script
  - curl -fsSL https://get.docker.com -o get-docker.sh
  - sh get-docker.sh
  # Allow the default user to run docker without sudo (assumes the user is "ubuntu")
  - usermod -aG docker ubuntu
  # Install the standalone Docker Compose binary
  - curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - chmod +x /usr/local/bin/docker-compose
  - systemctl enable docker
  - systemctl start docker
```
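A quick post-boot check that both installs succeeded (standard Docker commands; run them as root or as a user in the docker group):
```bash
docker run --rm hello-world   # verifies the Docker daemon can pull and run containers
docker-compose --version      # verifies the standalone Compose binary is on PATH
```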
NVIDIA GPU Setup
Install NVIDIA drivers, CUDA toolkit, and nvidia-docker:
```yaml
#cloud-config
# Setup NVIDIA drivers and CUDA
packages:
  - build-essential
runcmd:
  # Install kernel headers here rather than in the packages list, because the
  # $(uname -r) shell expansion only happens when the command runs in runcmd
  - apt-get update
  - apt-get install -y linux-headers-$(uname -r)
  # Add NVIDIA package repositories
  - wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-keyring_1.0-1_all.deb
  - dpkg -i cuda-keyring_1.0-1_all.deb
  - apt-get update
  # Install CUDA and drivers
  - apt-get install -y cuda-toolkit-12-2
  - apt-get install -y nvidia-driver-535
  # Install nvidia-docker (assumes Docker is already installed, e.g. via the Docker Setup example)
  - distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
  - curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add -
  - curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list
  - apt-get update && apt-get install -y nvidia-docker2
  - systemctl restart docker
  # Verify installation
  - nvidia-smi >> /var/log/gpu-setup.log
```
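A reboot is often needed before the new driver is usable. After that, you can verify GPU access both on the host and inside a container (the CUDA image tag below is only an example; pick one matching the toolkit version you installed):
```bash
nvidia-smi                                                                  # host-level driver check
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu20.04 nvidia-smi   # GPU visible inside a container
```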
ML/AI Environment
Complete Python ML stack with PyTorch, Hugging Face libraries, and Jupyter:
```yaml
#cloud-config
# Machine Learning environment setup
packages:
  - python3-pip
  - python3-dev
  - python3-venv
  - git
  - wget
  - tmux
runcmd:
  # Create virtual environment
  - python3 -m venv /opt/ml-env
  # Install ML frameworks with the venv's own pip (runcmd executes under sh,
  # so "source .../activate" is not available between commands)
  - /opt/ml-env/bin/pip install --upgrade pip
  - /opt/ml-env/bin/pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
  - /opt/ml-env/bin/pip install transformers datasets accelerate
  - /opt/ml-env/bin/pip install jupyter notebook
  # Setup Jupyter (runcmd runs as root, so the config lands in /root/.jupyter)
  - /opt/ml-env/bin/jupyter notebook --generate-config
  - echo "c.NotebookApp.ip = '0.0.0.0'" >> /root/.jupyter/jupyter_notebook_config.py
  - echo "c.NotebookApp.allow_remote_access = True" >> /root/.jupyter/jupyter_notebook_config.py
  # Create workspace
  - mkdir -p /workspace/models /workspace/datasets /workspace/notebooks
  - echo "ML environment ready" >> /var/log/ml-setup.log
```
Monitoring Stack
Deploy Prometheus and Grafana with Docker Compose:
```yaml
#cloud-config
# Setup monitoring with Prometheus and Grafana
write_files:
  - path: /etc/docker/compose/monitoring/docker-compose.yml
    content: |
      version: '3'
      services:
        prometheus:
          image: prom/prometheus:latest
          ports:
            - "9090:9090"
          volumes:
            - ./prometheus.yml:/etc/prometheus/prometheus.yml
            - prometheus_data:/prometheus
          command:
            - '--config.file=/etc/prometheus/prometheus.yml'
            - '--storage.tsdb.path=/prometheus'
        grafana:
          image: grafana/grafana:latest
          ports:
            - "3000:3000"
          volumes:
            - grafana_data:/var/lib/grafana
          environment:
            - GF_SECURITY_ADMIN_PASSWORD=admin
        node-exporter:
          image: prom/node-exporter:latest
          ports:
            - "9100:9100"
      volumes:
        prometheus_data:
        grafana_data:
```
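The compose file above mounts a ./prometheus.yml that nothing creates yet, and no command starts the stack. A minimal way to complete the example (a sketch; assumes Docker and docker-compose are already installed, for instance via the Docker Setup script above):
```yaml
#cloud-config
write_files:
  # Minimal Prometheus config for the volume mount in the compose file
  - path: /etc/docker/compose/monitoring/prometheus.yml
    content: |
      global:
        scrape_interval: 15s
      scrape_configs:
        - job_name: node
          static_configs:
            # node-exporter is reachable by service name on the default compose network
            - targets: ['node-exporter:9100']
runcmd:
  # Bring the stack up on first boot
  - cd /etc/docker/compose/monitoring && docker-compose up -d
```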
Shell Script Format
Traditional bash scripts (auto-converted to cloud-init):
```bash
#!/bin/bash
# Traditional shell script format (will be converted to cloud-init)

# Update system
apt-get update && apt-get upgrade -y

# Install essential packages (nodejs and npm included so the npm steps below work)
apt-get install -y \
  build-essential \
  curl \
  wget \
  git \
  vim \
  htop \
  nodejs \
  npm

# Create directories
mkdir -p /opt/app /var/log/app

# Set up environment variables (/etc/environment takes plain KEY=value lines)
echo "APP_HOME=/opt/app" >> /etc/environment
echo "NODE_ENV=production" >> /etc/environment

# Clone repository (example)
cd /opt/app
git clone https://github.com/example/app.git .

# Install dependencies and build
npm install
npm run build

# Start the service in the background so it does not block the boot sequence
nohup npm start > /var/log/app/app.log 2>&1 &

# Log completion
echo "Setup completed at $(date)" >> /var/log/setup.log
```
Best Practices
Script Design:
- Start simple, add complexity as needed
- Test commands manually before scripting
- Add logging to track execution
- Use comments for documentation
For example:
```bash
#!/bin/bash
set -e  # Exit on error
echo "Setup started" >> /var/log/startup.log
apt-get update || { echo "apt-get update failed" >> /var/log/startup.log; exit 1; }
echo "Setup completed" >> /var/log/startup.log
```
Security:
- Never hardcode credentials in startup scripts
- Use secure secret management instead of embedding secrets (see the sketch below)
- Review scripts before deployment
- Scripts run with root privileges
See Security Best Practices for detailed guidelines.
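As one illustration of keeping secrets out of the script body (a sketch; the secret-store URL and token path are hypothetical placeholders for whatever secret management you actually use):
```yaml
#cloud-config
runcmd:
  # Fetch the credential at boot instead of pasting it into the startup script
  # (https://secrets.example.internal is a hypothetical placeholder endpoint)
  - curl -fsS https://secrets.example.internal/api/app-token -o /root/.app_token
  - chmod 600 /root/.app_token
```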
Troubleshooting
Script didn't run:
```bash
# Check cloud-init logs
cat /var/log/cloud-init.log
cat /var/log/cloud-init-output.log
```
Commands failed:
- Verify package names and syntax for Ubuntu
- Test commands manually first
- Check logs for specific errors
Script hangs or takes too long:
- Keep scripts under 10 minutes
- Use background processes for long tasks (see the sketch after this list)
- Check for hanging commands
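One way to keep a long-running step from blocking boot (a sketch; /opt/setup/download-models.sh is a hypothetical placeholder for your own long task):
```yaml
#cloud-config
runcmd:
  # Launch the long task in the background and capture its output,
  # so cloud-init can finish the remaining boot stages without waiting
  - nohup /opt/setup/download-models.sh > /var/log/model-download.log 2>&1 &
```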
YAML errors:
- Validate YAML format before deploying (see the examples after this list)
- Use an online YAML validator
- Check indentation (spaces, not tabs)
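Two ways to check a cloud-config before deploying (the schema subcommand is available in recent cloud-init releases; older versions expose it as `cloud-init devel schema`):
```bash
# Validate against the cloud-config schema
cloud-init schema --config-file user-data.yaml

# Or simply confirm the YAML parses (requires Python with PyYAML)
python3 -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" user-data.yaml
```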
Additional Resources
- Getting Started - Deploy your first instance
- Quick Start - Fast deployment
- Security - Startup script security guidelines