ICME High Performance Clusters

Please Note: Do not store any valuable data on the GPU Cluster that you have not backed up! If any issues occur with the Cluster, the operating system will be restored, destroying all data.

Absolutely no Personally Identifiable Information (PII) or Protected Health Information (PHI) may be used or stored on any ICME systems!

 

Welcome to the ICME High Performance GPU Cluster.

Technical Specs:

GPU cluster:

CPU cores: 96
Memory: 500 GB
GPUs: (10) K80 and (6) K40

CPU: Intel Xeon E5-2609 v3, 1.90 GHz, 15 MB cache
Data (InfiniBand): 40 Gb/s, Gen2 PCIe 2.0 x8, 5.0 GT/s

NVIDIA K80 (each):
GPU: 2x Kepler GK210
Peak double precision FLOPS: 2.91 Tflops (GPU Boost clocks), 1.87 Tflops (base clocks)
Peak single precision FLOPS: 8.74 Tflops (GPU Boost clocks), 5.6 Tflops (base clocks)
Memory bandwidth (ECC off): 480 GB/sec (240 GB/sec per GPU)
Memory size (GDDR5): 24 GB (12 GB per GPU)
CUDA cores: 4992 (2496 per GPU)

 

Access: 

If you would like an account on the GPU or MPI Cluster, please contact Brian Tempero. Provide him with your SUNet ID and a brief explanation of how you will use the Cluster.
Note: your SUNet ID and password are used to log in.

Once you are provided with your login information, you can SSH into the Cluster. Example: ssh -l <yoursunetid> icme-mpi1.stanford.edu

The password will be your SUNet ID password.

The login information will also allow you to log in to the graphical user interface to view the Cluster's resources.

Copy and paste or click the following link to access the GPU Cluster: icme-gpu.stanford.edu
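Note: if icme-gpu.stanford.edu is also the SSH entry point for the GPU Cluster (an assumption based on the login example above, not confirmed here), the same pattern should apply. Example: ssh -l <yoursunetid> icme-gpu.stanford.edu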

Information:

The GPU cluster now uses SLURM as the job manager. Please refer to the SLURM documentation prior to submitting a job.

The head node of the GPU cluster is the kickoff point for running jobs on the host nodes; the host nodes contain the GPUs.
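As a rough sketch, a minimal SLURM batch script for a single-GPU job might look like the one below. The job name, GPU count, time limit, and output file name are placeholders, and partition names and resource limits are cluster-specific, so check the SLURM documentation before submitting.

    #!/bin/bash
    #SBATCH --job-name=gpu-test        # placeholder job name
    #SBATCH --gres=gpu:1               # request one GPU on a host node
    #SBATCH --time=00:10:00            # placeholder time limit
    #SBATCH --output=gpu-test.%j.out   # %j expands to the job ID

    # List the GPUs visible to the job
    nvidia-smi

Save this as, for example, gpu-test.sbatch and submit it from the head node with: sbatch gpu-test.sbatch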

Please refer to the following documentation prior to submitting a job.