
Voltage Park - Mounting Shared Storage

Mount persistent storage volumes on Voltage Park instances using the Network File System (NFS) protocol.

Overview

Voltage Park persistent volumes use NFS (Network File System) version 3 protocol for high-performance network-attached storage. Each volume is accessible via a virtual IP address and supports concurrent access from multiple instances.

Protocol: NFS v3
Connection: Virtual IP-based
Performance: Parallel connections via the nconnect mount option

Prerequisites

Before starting, ensure you have:

  1. ✓ Created a volume with provider: voltage-park via the Volumes API
  2. ✓ Attached the volume to your Voltage Park instance
  3. ✓ Retrieved the volume's virtual IP address from:
    • Instance Details drawer → "Mounting your storage volume" section, OR
    • My Shared Storage page
  4. ✓ SSH access to your Voltage Park instance

Mounting Process

Connect to Your Instance

SSH into your Voltage Park GPU instance using the connection details from the instance's Details drawer:

ssh ubuntu@your-instance-ip

See SSH Connection Setup for detailed connection instructions.

Install NFS Dependencies

Ensure the Network File System client is installed on your instance:

sudo apt update && sudo apt install -y nfs-common

Create Mount Directory

Create a directory where you'll mount the storage volume:

sudo mkdir -p /data

You can choose any directory name and location. Common choices include:

  • /data - Simple and clear
  • /mnt/storage - Standard mount location
  • /workspace - For ML/AI projects

Configure File System Table

Add the volume configuration to /etc/fstab to enable automatic mounting. Replace <virtualIP> with your volume's virtual IP address:

echo '<virtualIP>:/data /data nfs rw,nconnect=16,nfsvers=3 0 0' | sudo tee -a /etc/fstab
Configuration breakdown:
  • <virtualIP>:/data - Remote NFS server and export path
  • /data - Local mount point
  • nfs - Filesystem type
  • rw - Read-write access
  • nconnect=16 - Use 16 connections for better performance
  • nfsvers=3 - NFS version 3 protocol
  • 0 0 - No dump, no fsck on boot
Example with actual IP:
echo '192.168.100.50:/data /data nfs rw,nconnect=16,nfsvers=3 0 0' | sudo tee -a /etc/fstab
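Because a typo in /etc/fstab can break mounting on the next boot, it can help to build the entry from variables and sanity-check it before appending. A minimal sketch, using the same assumed example IP; the tee line is left commented so nothing is written until you are ready:

```shell
# Build the fstab entry from variables and check the VIP's shape before applying.
VIP="192.168.100.50"                  # your volume's virtual IP (assumed example)
MOUNT_POINT="/data"
ENTRY="${VIP}:/data ${MOUNT_POINT} nfs rw,nconnect=16,nfsvers=3 0 0"

if echo "$VIP" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}$'; then
  echo "$ENTRY"                                # review the line first
  # echo "$ENTRY" | sudo tee -a /etc/fstab     # uncomment to apply
else
  echo "invalid virtual IP: $VIP" >&2
fi
```

Printing the entry before appending it gives you one last chance to catch a wrong IP or mount point.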

Mount the Volume

Mount the volume using the configuration in /etc/fstab:

sudo mount -a

This command mounts all filesystems defined in /etc/fstab that aren't already mounted.

Verify Mount

Confirm the volume mounted successfully:

df -h

Look for your storage volume in the output. You should see a line showing your volume's virtual IP mounted at /data:

Filesystem              Size  Used Avail Use% Mounted on
...
192.168.100.50:/data    100G   1.0G   99G   1% /data

Set Permissions (Optional)

Make the mounted storage writable by your user:

sudo chown ubuntu:ubuntu /data

Replace ubuntu:ubuntu with your username if different.
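After adjusting ownership, a quick write test confirms the user can actually create files there. A minimal sketch; the default path is a stand-in so it runs anywhere, so point TARGET at /data on a real instance:

```shell
# Verify a directory is writable by the current user.
TARGET="${TARGET:-/tmp/data-demo}"   # use /data on a real instance
mkdir -p "$TARGET"
if touch "$TARGET/.write-test" 2>/dev/null; then
  rm -f "$TARGET/.write-test"        # clean up the probe file
  echo "writable: $TARGET"
else
  echo "not writable: $TARGET" >&2
fi
```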

Using Your Mounted Volume (Voltage Park)

Accessing the Storage

Once mounted, you can use the storage like any local directory:

# Navigate to the storage
cd /data
 
# Create files and directories
mkdir my-project
echo "Hello, storage!" > my-project/readme.txt
 
# List contents
ls -lh /data/

Checking Storage Usage

Monitor your volume's space usage:

# Check space on the mounted volume
df -h /data
 
# Check detailed disk usage
du -sh /data/*
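To see which subdirectories consume the most space, the usage listing can be sorted; sort -h understands the human-readable sizes that du -h emits. A sketch using a demo directory so it runs anywhere; on the instance you would point DIR at /data:

```shell
# Rank subdirectories by size, largest last.
DIR="${DIR:-/tmp/du-demo}"           # use /data on a real instance
mkdir -p "$DIR/small" "$DIR/big"
head -c 1024 /dev/zero > "$DIR/small/file"      # ~1 KiB demo file
head -c 1048576 /dev/zero > "$DIR/big/file"     # ~1 MiB demo file

du -sh "$DIR"/* | sort -h            # largest entries appear last
```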

Working with Large Datasets

The mounted volume is ideal for:

  • ML/AI training datasets
  • Model checkpoints and artifacts
  • Shared data across multiple instances
  • Persistent application data
# Example: Download dataset to storage
cd /data
wget https://example.com/large-dataset.tar.gz
tar -xzf large-dataset.tar.gz

Unmounting Shared Storage (Voltage Park)

You may want to unmount shared storage from your Voltage Park instance when:

  • Swapping out the attached storage volume
  • Detaching the volume from the instance
  • The instance is being terminated

Unmount the Volume

Run the umount command on the mount directory:

sudo umount /data

If you get a "target is busy" error, ensure no processes are using the storage:

# Check what's using the storage
lsof /data
 
# Or use fuser
fuser -m /data

Remove from File System Table

Remove the volume configuration from /etc/fstab to prevent auto-mount on next boot:

# Edit fstab and remove the NFS entry
sudo nano /etc/fstab
 
# Or use sed to remove it automatically
sudo sed -i '/\/data.*nfs/d' /etc/fstab
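The sed pattern above deletes any line containing /data and nfs, which could catch more than intended if several volumes are configured. A safer sketch anchors on the exact virtual IP and keeps a backup; the VIP and demo file below are assumed values, and on a real instance you would point FSTAB at /etc/fstab and run the edits with sudo:

```shell
# Remove one NFS entry by its virtual IP, keeping a backup of the file.
VIP="192.168.100.50"
FSTAB="${FSTAB:-/tmp/fstab-demo}"    # use /etc/fstab (with sudo) on a real instance

# Demo contents so the sketch runs anywhere: one NFS entry, one local disk.
printf '%s\n' "${VIP}:/data /data nfs rw,nconnect=16,nfsvers=3 0 0" \
              "/dev/vda1 / ext4 defaults 0 1" > "$FSTAB"

cp "$FSTAB" "${FSTAB}.bak"                 # backup before editing
sed -i "\\|^${VIP}:/data |d" "$FSTAB"      # delete only the matching mount line
```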

Troubleshooting (Voltage Park)

Mount Fails with "Connection Refused"

Cause: Incorrect virtual IP or volume not attached to instance

Solution:
  1. Verify the volume is attached to your instance via the dashboard or API
  2. Double-check the virtual IP address
  3. Ensure the virtual IP in /etc/fstab matches the volume's virtual IP

Mount Fails with "No such file or directory"

Cause: Mount point directory doesn't exist

Solution:
sudo mkdir -p /data
sudo mount -a

Volume Not Mounting on Boot

Cause: Network not ready when fstab mounts are processed

Solution: Add network wait options to fstab entry:

<storage-vip>:/data /data nfs rw,nconnect=16,nfsvers=3,_netdev 0 0

The _netdev option tells the system to wait for network before mounting.
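On systemd-based images (including Ubuntu), another option is an on-demand automount entry: the volume is mounted on first access rather than at boot, which also sidesteps network ordering. x-systemd.automount is a standard systemd fstab option:

```
<storage-vip>:/data /data nfs rw,nconnect=16,nfsvers=3,_netdev,x-systemd.automount 0 0
```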

Performance Issues

Cause: Suboptimal NFS settings or network latency

Solutions:
  • Verify nconnect=16 is in effect; the Linux NFS client caps nconnect at 16 parallel connections
  • Use NFSv4 if supported: nfsvers=4
  • Avoid the sync mount option; asynchronous writes (the client default) perform better
<storage-vip>:/data /data nfs rw,nconnect=16,nfsvers=4 0 0

Checking Mount Status

# List all NFS mounts
mount | grep nfs
 
# Check NFS statistics
nfsstat
 
# Verify fstab syntax
sudo mount -fav

Best Practices (Voltage Park)

Organization:
  • Use descriptive mount points: /data, /models, /datasets
  • Create subdirectories for different projects or datasets
  • Document what data is stored where
Performance:
  • Use nconnect=16 (the Linux client's maximum) for better throughput
  • Avoid the sync mount option for write-heavy workloads
  • Monitor network bandwidth usage
Data Safety:
  • Maintain backups of critical data
  • Volumes persist, but data can still be lost due to corruption or accidental deletion
  • Test backup and restore procedures
Security:
  • Restrict access to the mount point using filesystem permissions
  • Only mount volumes from trusted sources
  • Regularly audit who has access to shared storage
Automation:
  • Add mount commands to startup scripts for new instances
  • Use cloud-init to configure NFS mounts automatically

Example cloud-init script:

#cloud-config
runcmd:
  - apt-get update
  - apt-get install -y nfs-common
  - mkdir -p /data
  - echo '192.168.100.50:/data /data nfs rw,nconnect=16,nfsvers=3 0 0' >> /etc/fstab
  - mount -a
  - chown ubuntu:ubuntu /data

Multiple Volume Management (Voltage Park)

Mounting Multiple Volumes

You can attach and mount multiple storage volumes to a single Voltage Park instance:

# Create mount points
sudo mkdir -p /data1 /data2 /models
 
# Add to fstab
echo '192.168.100.50:/data /data1 nfs rw,nconnect=16,nfsvers=3 0 0' | sudo tee -a /etc/fstab
echo '192.168.100.51:/data /data2 nfs rw,nconnect=16,nfsvers=3 0 0' | sudo tee -a /etc/fstab
echo '192.168.100.52:/data /models nfs rw,nconnect=16,nfsvers=3 0 0' | sudo tee -a /etc/fstab
 
# Mount all
sudo mount -a
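The three tee lines above can also be generated from a small table, which keeps the mount options consistent across entries. A sketch using the same assumed VIPs and mount points; the generated lines are printed for review rather than written directly, with the apply step left commented:

```shell
# Generate one fstab entry per "VIP mountpoint" pair.
gen_entries() {
  while read -r vip mnt; do
    echo "${vip}:/data ${mnt} nfs rw,nconnect=16,nfsvers=3 0 0"
  done
}

entries=$(gen_entries <<'EOF'
192.168.100.50 /data1
192.168.100.51 /data2
192.168.100.52 /models
EOF
)
echo "$entries"
# echo "$entries" | sudo tee -a /etc/fstab && sudo mount -a   # uncomment to apply
```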

Sharing Volumes Across Instances

Volumes can be attached to multiple instances simultaneously:

  1. Attach the same volume to multiple instances via the API or dashboard
  2. Mount the volume on each instance using the same virtual IP
  3. All instances can read and write to the shared storage

Note: Ensure your application handles concurrent access appropriately.
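NFSv3 provides advisory byte-range locking via the NLM protocol, and the Linux client maps flock() onto it, so a lock file on the shared volume is one way to serialize writers across instances. A minimal sketch; paths default to /tmp so it runs anywhere, but on the instances they would live on the shared volume, e.g. /data/app.lock:

```shell
# Serialize appends to a shared log with an exclusive advisory lock.
LOCK="${LOCK:-/tmp/app.lock}"        # on the instance: a path on the shared volume
LOG="${LOG:-/tmp/results.log}"
(
  flock -x 9                         # block until the exclusive lock is held
  echo "checkpoint from $(hostname)" >> "$LOG"
) 9>"$LOCK"                          # fd 9 holds the lock for the subshell's lifetime
```

Advisory locks only coordinate processes that opt in; every writer must take the same lock.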


Quick Reference

Voltage Park NFS Mounting

Setup Process:
  1. Install nfs-common package
  2. Create mount directory
  3. Add NFS configuration to /etc/fstab
  4. Mount with sudo mount -a
  5. Verify with df -h
Key Configuration:
<storage-vip>:/data /data nfs rw,nconnect=16,nfsvers=3 0 0
Common Operations:
  • Mount: sudo mount -a
  • Unmount: sudo umount /data
  • Check status: df -h /data
  • Monitor: nfsstat

Protocol: Network File System (NFS) version 3

Key Benefits:
  • ✅ Persistent storage independent of instances
  • ✅ Shared access across multiple instances simultaneously
  • ✅ Automatic mounting on boot (with fstab configuration)
  • ✅ No data loss when instances terminate
  • ✅ High-performance parallel connections (up to 16 via nconnect)
  • ✅ Hot attach/detach without instance restart
