- Do not store any valuable data on the GPU Cluster that you have not backed up elsewhere! If any issues occur with the Cluster, the operating system will be restored, destroying all data.
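Since the Cluster can be wiped at any time, copy anything you care about back to your own machine. One way to do this is with scp; the sketch below assumes a results directory in your home directory (the paths are placeholders, not actual Cluster paths):

```shell
# Run from your LOCAL machine, not the Cluster.
# Recursively copy a results directory off the Cluster (sketch; the
# remote path ~/results and local destination are placeholders).
scp -r <yoursunetid>@icme-gpu.stanford.edu:~/results ./results-backup
```

rsync works equally well and will skip files that are already up to date on repeated runs.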
If you would like an account on the GPU Cluster, please contact Brian Tempero. Provide your SUNet ID and a brief explanation of how you will use the Cluster.
Note: your SUNet ID and password are used to log in.
- Once you are provided with your login information, you can ssh into the Cluster. Example: ssh -l <yoursunetid> icme-gpu.stanford.edu
- The password is your SUNet ID password.
- The same login information also allows you to log in to the graphical user interface to view the Cluster's resources.
- Copy and paste, or click on, the following link for access to the GPU Cluster: icme-gpu.stanford.edu
The GPU Cluster now uses SLURM as the job manager. Please refer to the SLURM documentation prior to submitting a job.
The head node of the GPU Cluster is the kickoff point for running jobs on the host nodes; the host nodes contain the GPUs.
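A typical workflow is to write a batch script on the head node and submit it with sbatch, letting SLURM dispatch it to a host node with a GPU. The following is a minimal sketch; the job name, resource requests, and time limit are assumptions, so check the cluster's actual partitions and limits (e.g. with sinfo) before submitting:

```shell
#!/bin/bash
# Minimal SLURM batch script (sketch; resource values are assumptions).
#SBATCH --job-name=gpu-test
#SBATCH --output=gpu-test-%j.out   # %j expands to the job ID
#SBATCH --gres=gpu:1               # request one GPU on a host node
#SBATCH --time=00:10:00            # ten-minute wall-clock limit

# Print the GPU(s) allocated to this job.
nvidia-smi
```

Submit from the head node with `sbatch job.sh`, then monitor the queue with `squeue -u <yoursunetid>` and cancel with `scancel <jobid>` if needed.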
Please refer to the following documentation prior to submitting a job.
- SLURM Complete
- Bright Cluster Manager User's Guide (user-manual.pdf)
- Block SpMV on GPU (bspmv_icme.pdf)
- NVIDIA CUDA Forum