
ICME Cluster

Please Note:

  • Do not store any valuable data on the GPU cluster that you have not backed up! If any issues occur with the cluster, the operating system will be restored, destroying all data.

Cluster

The ICME-GPU cluster is used by ICME students and ICME workgroups, and has a restricted partition for certain courses. The cluster has a total of 32 nodes: 20 CPU nodes and 12 GPU nodes.

 

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
CPU*         up 1-00:00:00     20   idle icme[07-26]
k80          up    4:30:00      5   idle icme[01-05]
CME          up    2:00:00      6   idle icmet[01-06]
V100         up    8:00:00      1   idle icme06

Note: the CME partition is restricted.

Access

If you would like an account on the GPU cluster, please contact Brian Tempero. Provide your SUNet ID and a brief explanation of how you will use the cluster.
Note: your SUNet ID and password are used to log in.

  • Once you are provided with your login information, you can ssh into the cluster. Example: ssh -l <yoursunetid> icme-gpu.stanford.edu
  • The password is your SUNet ID password.
  • This cluster is not backed up, so you are responsible for your own data; a backup sketch follows this list.
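One way to protect your data, as a minimal sketch: copy it off the cluster with rsync. The ~/results directory is a hypothetical placeholder for whatever you want to back up.

  # Run from your local machine: copies ~/results on the cluster
  # (a hypothetical example directory) into ./cluster-backup locally
  rsync -avz <yoursunetid>@icme-gpu.stanford.edu:~/results/ ./cluster-backup/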

Information

Slurm is the job manager for the icme-gpu cluster.
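For batch work, a minimal sketch of a Slurm submission script is shown below; the job name, time limit, output file, and train.py program are hypothetical placeholders, while the partition name comes from the table above.

  #!/bin/bash
  #SBATCH --job-name=example-job       # hypothetical job name
  #SBATCH --partition=k80              # GPU partition listed above
  #SBATCH --gres=gpu:1                 # request one GPU
  #SBATCH --time=01:00:00              # stay within the partition's 4:30:00 limit
  #SBATCH --output=example-%j.out      # hypothetical output file; %j is the job ID

  # train.py is a hypothetical placeholder for your own program
  python train.py

Submit the script with sbatch <scriptname> and check its status with squeue.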

All accounts have 500GB of storage space. 
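To check how much of that space you are using, standard tools work; this is a sketch, not a cluster-specific command:

  # Report the total size of your home directory
  du -sh $HOME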

Please do not store any critical information on this cluster. It can be rebuilt with very little notice.

Log in using your SUNet ID and password.

To see available resources, please use the spart command.
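spart is a third-party Slurm utility; if it is not on your path, the standard sinfo command reports similar partition information (the table above matches sinfo's default output format):

  sinfo                # partitions, time limits, node states
  squeue -u $USER      # your own queued and running jobs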

Here is an example of how to allocate a compute node and run a module in interactive mode:

  srun --partition=k80 --gres=gpu:1 --pty bash
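Once the interactive shell opens on the allocated node, you can verify the GPU and load software. nvidia-smi and the module commands are standard tools on Slurm GPU clusters; the cuda module name is a hypothetical example, so pick a real one from module avail.

  # Inside the interactive session on the GPU node
  nvidia-smi           # confirm the allocated GPU is visible
  module avail         # list software modules installed on the cluster
  module load cuda     # hypothetical module name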

 

Please refer to the following documentation prior to submitting a job.