External Storage Access
Access and configure the 17.4TB external NVMe storage available on Voltage Park deployments.
Overview
Voltage Park nodes include substantial external storage in addition to the boot drive. While the boot drive is only about 447GB, each node provides 6 additional NVMe drives totaling ~17.4TB of external storage capacity.
Storage Layout
- nvme0n1 (~447GB) - Primary OS boot drive, mounted at /
- nvme1n1 through nvme6n1 - 6 external data drives × ~2.9TB each = ~17.4TB total
These external drives are available but not automatically mounted - you need to configure them based on your requirements.
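To confirm this layout on your own node before configuring anything, you can list the NVMe block devices (exact device names and reported sizes may vary slightly from the figures above):

# List NVMe drives with their size, type, and current mount point
lsblk -d -o NAME,SIZE,TYPE,MOUNTPOINT | grep nvme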
Setup Options
Choose the configuration that best matches your needs:
Option 1: Individual Drives (Recommended)
Pros:
- ✅ If one drive fails, only data on that drive is lost
- ✅ Easy to recover - just mount the remaining drives
- ✅ Simple to manage and troubleshoot
- ✅ Can move drives to another system easily

Cons:
- ❌ Need to manage 6 separate mount points
Use Case: Best when data recovery and safety are priorities.
Option 2: RAID 6 (Balance of Capacity and Safety)
Pros:
- ✅ Survives up to 2 simultaneous drive failures
- ✅ Data remains accessible with failed drives
- ✅ Single large volume (~11.6TB usable)

Cons:
- ❌ Need RAID knowledge to recover
- ❌ Loses ~5.8TB to redundancy
Use Case: When you need both capacity and redundancy.
Option 3: RAID 0 (Maximum Capacity, NOT RECOMMENDED)
Pros:
- ✅ Full ~17.4TB usable capacity
- ✅ Single mount point

Cons:
- ❌ CRITICAL: If ANY drive fails, ALL data across ALL drives is lost
- ❌ No redundancy whatsoever
Use Case: Only for temporary/expendable data.
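If you accept that risk for scratch data, the commands mirror the RAID 6 setup later on this page, with --level=0 instead of --level=6. A minimal sketch, assuming mdadm is already installed and /mnt/scratch is the desired mount point:

# RAID 0 across all 6 drives - losing ANY single drive destroys the whole array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=6 \
    /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
    /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/scratch
sudo mount /dev/md0 /mnt/scratch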
Option 4: LVM (Flexible Management)
Pros:
- ✅ Flexible volume management
- ✅ Can add/remove drives dynamically
- ✅ Single large volume

Cons:
- ❌ Without RAID, no redundancy (similar to RAID 0)
- ❌ More complex to manage
Use Case: When you need flexibility in storage management.
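As an illustration of that flexibility (a sketch only, reusing the data_vg/data_lv names from the LVM setup later on this page and a hypothetical extra drive), growing the volume after adding a new drive looks roughly like this:

# Add a hypothetical new drive to the volume group and grow the volume online
sudo pvcreate /dev/nvme7n1                            # example device name
sudo vgextend data_vg /dev/nvme7n1
sudo lvextend -l +100%FREE -r /dev/data_vg/data_lv    # -r also grows the ext4 filesystem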
Recommended Setup: Individual Drives
This approach provides the best balance of simplicity and data safety.
Create Mount Points
sudo mkdir -p /mnt/nvme{1..6}

Format the Drives
Format each drive with ext4 filesystem:
sudo mkfs.ext4 /dev/nvme1n1
sudo mkfs.ext4 /dev/nvme2n1
sudo mkfs.ext4 /dev/nvme3n1
sudo mkfs.ext4 /dev/nvme4n1
sudo mkfs.ext4 /dev/nvme5n1
sudo mkfs.ext4 /dev/nvme6n1

Warning: This will erase all existing data on these drives.
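If you want to double-check that the drives really are blank before formatting, you can look for existing filesystem signatures first (an empty FSTYPE column means nothing was detected):

# Show any existing filesystem signatures on the data drives
lsblk -f /dev/nvme{1..6}n1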
Mount the Drives
for i in {1..6}; do
sudo mount /dev/nvme${i}n1 /mnt/nvme${i}
done

Make Mounts Persistent
Enable auto-mount on boot by adding entries to /etc/fstab:
for i in {1..6}; do
echo "/dev/nvme${i}n1 /mnt/nvme${i} ext4 defaults 0 2" | sudo tee -a /etc/fstab
done

Set Ownership
Make the drives writable by your user:
for i in {1..6}; do
sudo chown ubuntu:ubuntu /mnt/nvme${i}
done

Verify Setup
Check that all drives are mounted:
df -h | grep nvme

Expected output:
/dev/nvme0n1p2 439G 28G 389G 7% /
/dev/nvme0n1p1 511M 6.1M 505M 2% /boot/efi
/dev/nvme1n1 2.9T 28K 2.8T 1% /mnt/nvme1
/dev/nvme2n1 2.9T 28K 2.8T 1% /mnt/nvme2
/dev/nvme3n1 2.9T 28K 2.8T 1% /mnt/nvme3
/dev/nvme4n1 2.9T 28K 2.8T 1% /mnt/nvme4
/dev/nvme5n1 2.9T 28K 2.8T 1% /mnt/nvme5
/dev/nvme6n1 2.9T 28K 2.8T 1% /mnt/nvme6

Usage
Accessing the Drives
Each drive is accessible at its mount point:
# Access drive 1
cd /mnt/nvme1
# Create files
echo "test data" > /mnt/nvme1/myfile.txt
# List contents
ls -lh /mnt/nvme1/

Checking Space Usage
# Check space on all drives
df -h | grep nvme
# Check space on a specific drive
df -h /mnt/nvme1

Distributing Data
You can distribute your data across all drives:
# Store different datasets on different drives
cp -r /path/to/dataset1 /mnt/nvme1/
cp -r /path/to/dataset2 /mnt/nvme2/
# ... and so on

Alternative Setup: RAID 6
If you prefer redundancy over maximum capacity:
Install RAID Tools
sudo apt update
sudo apt install mdadm

Create RAID 6 Array
# Create RAID 6 (survives 2 drive failures)
sudo mdadm --create /dev/md0 --level=6 --raid-devices=6 \
/dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
/dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1
# Format the array
sudo mkfs.ext4 /dev/md0
# Create mount point and mount
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid
# Make persistent
echo '/dev/md0 /mnt/raid ext4 defaults 0 2' | sudo tee -a /etc/fstab
# Save RAID configuration
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

This provides ~11.6TB usable space with 2-drive fault tolerance.
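Once created, the array takes a while to complete its initial sync. You can check progress and overall health at any time (a quick sketch, with /dev/md0 matching the device created above):

# Show array state and initial sync / rebuild progress
cat /proc/mdstat
sudo mdadm --detail /dev/md0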
Alternative Setup: LVM
For maximum flexibility in storage management:
Install LVM Tools
sudo apt update
sudo apt install lvm2

Create LVM Setup
# Create physical volumes
sudo pvcreate /dev/nvme{1..6}n1
# Create volume group
sudo vgcreate data_vg /dev/nvme{1..6}n1
# Create logical volume with all space
sudo lvcreate -l 100%FREE -n data_lv data_vg
# Format and mount
sudo mkfs.ext4 /dev/data_vg/data_lv
sudo mkdir -p /mnt/data
sudo mount /dev/data_vg/data_lv /mnt/data
# Make persistent
echo '/dev/data_vg/data_lv /mnt/data ext4 defaults 0 2' | sudo tee -a /etc/fstab

Troubleshooting
Checking Drive Status
# List all block devices
lsblk
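# Install smartmontools if smartctl is not already present (apt-based system assumed)
sudo apt install smartmontools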
# Check drive health
sudo smartctl -a /dev/nvme1n1  # Requires smartmontools package

Drives Don't Mount on Boot
# Check fstab syntax
cat /etc/fstab
# Try manual mount to test
sudo mount -a
# Check system logs
sudo journalctl -xe | grep mount

Unmounting Drives
# Unmount a specific drive
sudo umount /mnt/nvme1
# Unmount all data drives
for i in {1..6}; do
sudo umount /mnt/nvme${i}
done

Removing Drives from fstab
To undo the auto-mount configuration:
# Edit fstab and remove the nvme entries
sudo nano /etc/fstab
# Or use sed to remove them
sudo sed -i '/\/mnt\/nvme[1-6]/d' /etc/fstab

Recovery Scenarios
If One Drive Fails
With individual drives setup:
- Identify the failed drive using dmesg or lsblk
- The other 5 drives remain fully accessible
- Only data on the failed drive is lost
- Replace the failed drive and format it
- Restore data from backups for that drive only
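A rough sketch of the last two steps, assuming the failed drive was nvme3n1, its replacement appears under the same device name, and your backups live at an example path:

# Re-create the filesystem on the replacement drive, remount it, and restore ownership
sudo mkfs.ext4 /dev/nvme3n1
sudo mount /dev/nvme3n1 /mnt/nvme3
sudo chown ubuntu:ubuntu /mnt/nvme3
# Restore only that drive's data from backup (path is illustrative)
rsync -a /backups/nvme3/ /mnt/nvme3/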
Moving Drives to Another System
- Unmount the drives: sudo umount /mnt/nvme{1..6}
- Physically move the drives
- On the new system, mount them: sudo mount /dev/nvmeXn1 /mnt/target
- All data remains intact
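Note that device names can change when drives move between systems, so it is safer to identify each drive by its filesystem UUID on the new machine (a sketch; the UUID value is a placeholder you read from blkid):

# Find each drive's UUID, then mount by UUID instead of device name
sudo blkid | grep ext4
sudo mkdir -p /mnt/target
sudo mount UUID=<uuid-from-blkid> /mnt/target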
Best Practices
Organization:
- Keep track of what data is stored on which drive
- Use descriptive names via symlinks:
ln -s /mnt/nvme1 /mnt/datasets
ln -s /mnt/nvme2 /mnt/models

Monitoring:
- Install and use smartmontools to monitor drive health
- Check space regularly to avoid running out: df -h

Backups:
- Maintain regular backups of critical data
- Even with redundancy, backups are essential
- Test your backup restoration process

Planning:
- Plan your data distribution strategy before filling drives
- Document which data goes where
- Consider future scalability needs
Summary
Default Configuration:
- Boot drive (nvme0n1) of ~447GB
- 6 external drives available but not mounted by default
- Each external drive provides ~2.9TB (roughly 2.8TB usable after formatting)
- Total external capacity: ~17.4TB

Recommended Approach:
- Mount drives individually at /mnt/nvme1 through /mnt/nvme6
- Enable auto-mount for convenience
- Distribute data across drives based on project needs
- Maintain backups of critical data

Benefits of Individual Drives:
- Maximum data recoverability
- Simple management
- One drive failure doesn't affect others
- Easy to understand and troubleshoot
Additional Resources
- SSH Connection Setup - Secure instance access
- Getting Started - Complete setup guide
- Security - Best practices for secure storage