Running Jupyter Notebook with GPU Support
This guide walks you through deploying a Jupyter Notebook instance with GPU support on Spheron.
Prerequisites
- Spheron account
- Basic familiarity with Jupyter Notebook
- Sufficient credits for GPU usage
Deployment Steps
Access the Spheron Console
- Navigate to console.spheron.network
- Log in to your account
- If you are new to Spheron, you should already have $20 in free credits. If not, please reach out to us on Discord to request free credits.
Select a GPU
- Go to the Marketplace tab
- You have two options to choose from:
  - Secure: deploys on secure, data center-grade providers. Highly reliable, but costs more.
  - Community: deploys on community Fizz nodes running on someone's home machine. These may be less reliable.
- Select the GPU you want to deploy on. You can also search by GPU name to find the exact one you need.
Configure the Deployment
- Select the template Jupyter with Pytorch 2.4.1
- Enter the password you want to set for the Jupyter Notebook in the JUPYTER_TOKEN field
- Optionally, increase the GPU count to access multiple GPUs at once
- Select the duration of the deployment
- Click the Confirm button to start the deployment
- Deployment completes in under 60 seconds
Access the Jupyter Notebook
- Once deployed, go to the Overview tab.
- Click the py-cuda service to open the Jupyter Notebook service.
- Click the connection URL to open the Jupyter Notebook.
- Log in with the password you set in the JUPYTER_TOKEN field.
Verification
To verify GPU support:
- Create a new Python notebook
- Run the following code (the template ships with PyTorch, so use torch rather than TensorFlow):
import torch
print("GPU available:", torch.cuda.is_available())
print("Num GPUs:", torch.cuda.device_count())
- Or check the GPU directly with nvidia-smi (prefix it with ! to run it as a shell command from a notebook cell):
!nvidia-smi
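Beyond a simple availability check, you can inspect each GPU's name and total memory from the notebook. A minimal sketch, assuming PyTorch is installed (the Jupyter with Pytorch 2.4.1 template includes it); it degrades gracefully when PyTorch or a GPU is absent:

```python
# Inspect each visible GPU's name and total memory (in MiB).
# Assumes PyTorch is installed, as in the Jupyter with Pytorch 2.4.1 template;
# falls back to an empty list otherwise.
try:
    import torch
    gpus = [
        (torch.cuda.get_device_name(i),
         torch.cuda.get_device_properties(i).total_memory // (1024 ** 2))
        for i in range(torch.cuda.device_count())
    ]
except ImportError:
    gpus = []  # PyTorch not available in this environment

for name, mem_mib in gpus:
    print(f"{name}: {mem_mib} MiB")
```

If you selected more than one GPU during configuration, each one should appear in the output.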
Additional Tips
- Save your work regularly on GitHub.
- Monitor your memory usage carefully: if your notebook uses more memory than is available (out of memory, OOM), the server will automatically terminate and restart your notebook session, and you will lose any unsaved work. You can check GPU memory usage by running !nvidia-smi in a notebook cell.
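If you prefer to read GPU memory programmatically (for example, to log it during training), you can parse nvidia-smi's query output from Python. A hedged sketch; the gpu_memory_mib helper name is ours, and the function returns an empty list on machines where nvidia-smi is not present:

```python
import subprocess

def gpu_memory_mib():
    """Return a list of (used, total) GPU memory pairs in MiB,
    one per GPU, by querying nvidia-smi. Empty list if unavailable."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return []  # nvidia-smi not found or no GPU driver
    return [tuple(int(v) for v in line.split(","))
            for line in out.strip().splitlines()]

print(gpu_memory_mib())
```

Calling this periodically lets you spot memory growth before it triggers an OOM restart.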