Deploy Your Container on Spheron Console
This guide will walk you through the steps to deploy and access your application on Spheron Console. Follow these steps carefully to ensure a successful deployment.
New users automatically receive $20 in free credits to get started. If you don't see your credits, please reach out to us on Discord.
Access the Spheron Console
- Visit console.spheron.network
- Log in to your account or create a new one if you haven’t already
- Add credits to your balance to pay for the deployment by clicking the Deposit button in the top-right corner
Select Your GPU
- Navigate to the Marketplace tab
- Choose between two deployment options:
  - Secure GPUs: Enterprise-grade hardware in professional data centers
    - Higher reliability and stability
    - Higher cost but guaranteed performance
  - Community GPUs: Shared resources from community members
    - More cost-effective
    - May have variable performance
- Browse available GPUs or use the search function to find specific models
- Review pricing and specifications before selection
Configure Your Deployment
- Select the Jupyter with PyTorch 2.4.1 template
  - Pre-configured with CUDA support
  - Includes all necessary dependencies for AI and LLM app development
- Configure your deployment settings:
  - Set a secure password in the JUPYTER_TOKEN field
  - Adjust GPU count based on your needs
  - Choose your deployment duration
- Review your configuration and click Confirm
- Wait for deployment (typically under 60 seconds)
Access Your Environment
- Go to the Overview tab once deployment is complete
- Locate the py-cuda service
- Click the provided connection URL
- Log in using your previously set password
⚠️ Make sure to save your JUPYTER_TOKEN password securely. You'll need it to access your environment.
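If your deployment runs a standard Jupyter server, you can usually also pass the token as a query parameter to skip the login form. This is a sketch; substitute your actual deployment URL and token:
https://<your-deployment-url>/?token=<your-JUPYTER_TOKEN>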
Start Developing
- Your environment comes pre-configured with:
  - Jupyter Notebook interface
  - PyTorch 2.4.1
  - CUDA support for GPU acceleration
  - Common ML/AI libraries
- All changes persist during your rental period
- You can install additional packages as needed (see the example after this list)
- You can also open the deployment shell to run any command you need, and check the deployment logs to monitor the status of your deployment
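For instance, extra packages can be installed directly from a notebook cell. This is a minimal sketch, and transformers is just a placeholder for whatever package you need:
!pip install transformers
The leading ! tells Jupyter to run the line as a shell command; packages installed this way are available for import in the same session.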
Verification
To verify GPU support:
- Create a new Python notebook
- Run the following code:
import torch
print("CUDA available:", torch.cuda.is_available())
print("Num GPUs available:", torch.cuda.device_count())
- Or run this command in a notebook cell to check the GPU count (the leading ! runs it as a shell command):
!nvidia-smi
Additional Tips
- Save your work regularly on GitHub.
- Monitor your memory usage carefully: if your notebook uses more memory than available (Out Of Memory, or OOM), the server will automatically terminate and restart your notebook session, and any unsaved work will be lost. You can check memory usage by running !nvidia-smi in a notebook cell.
- Your deployment environment is dedicated to you and not shared with other users, ensuring consistent performance for your workloads.
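For a programmatic check from Python, here is a minimal sketch (assuming the PyTorch environment this template provides) that prints how much GPU memory is currently in use:
import torch

if torch.cuda.is_available():
    # mem_get_info returns (free, total) in bytes for the current device
    free, total = torch.cuda.mem_get_info()
    print(f"GPU memory used: {(total - free) / 1e9:.2f} GB of {total / 1e9:.2f} GB")
else:
    print("No CUDA device detected")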
Congratulations! Your app is now deployed and accessible. If you encounter any issues, reach out to Spheron Discord Support.