Please note: Do not store any valuable data you haven't backed up on the GPU Cluster! If any issues occur with the Cluster, the operating system will be restored, destroying all data.
Absolutely no Personally Identifiable Information (PII) or Protected Health Information (PHI) may be used or stored on any ICME systems!
Welcome to the ICME High Performance GPU Cluster.
Hardware: ten (10) NVIDIA K80 GPUs and six (6) NVIDIA K40 GPUs
|CPU Data|Infiniband|NVIDIA K80 (each)|
|---|---|---|
|Intel Xeon E5-2609 v3, 1.90 GHz, 15 MB cache|40 Gb/s Gen2|PCIe 2.0 x8|
If you would like an account on the GPU or MPI Cluster, please contact Brian Tempero. Provide him with your SUNet ID and a brief explanation of how you will use the Cluster.
Note: your SUNet ID and password are used to log in.
Once you are provided with your login information you can ssh into the Cluster. Example: ssh -l <yoursunetid> icme-mpi1.stanford.edu
The password will be your SUNet ID password.
The login information will also allow you to log in to the graphical user interface to view the Cluster's resources.
Copy and paste or click on the following link for access to the GPU Cluster: icme-gpu.stanford.edu
The GPU cluster now uses SLURM as the job manager.
The head node of the GPU cluster is the kickoff point for running jobs on the host nodes; the host nodes contain the GPUs.
Please refer to the following documentation prior to submitting a job.
- SLURM 101
- Basic SLURM_0.pdf
- SLURM Complete
- Bright Cluster Manager Users Guide
- Block SpMV on GPU
- ICME Usage Policy
- NVIDIA CUDA Forum
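After reviewing the documentation above, jobs are submitted from the head node with SLURM's sbatch command. The following is a minimal sketch only; the job name, GPU request syntax, and time limit are assumptions, so check the cluster's SLURM configuration (e.g. sinfo for partitions) before submitting:

```
#!/bin/bash
#SBATCH --job-name=gpu-test        # job name shown in squeue (hypothetical)
#SBATCH --output=gpu-test-%j.out   # stdout/stderr file; %j expands to the job ID
#SBATCH --gres=gpu:1               # request one GPU (assumes GRES is configured)
#SBATCH --time=00:10:00            # wall-clock limit

# List the GPUs visible to the job (assumes NVIDIA drivers on the host nodes)
nvidia-smi
```

Submit with `sbatch <scriptname>.sh` from the head node and monitor with `squeue -u <yoursunetid>`.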