====== High Performance Computing (HPC) cluster ctcomp3 ======

===== Description =====
The computing part of the cluster is made up of:
  * 9 servers for general computing.
  * 1 "fat node" for memory-intensive jobs.
  * 4 servers for GPU computing.

Users only have direct access to the login node, which has more limited features and must not be used for computing. \\
All nodes are interconnected by a 10 Gb network. \\
There is distributed storage accessible from all nodes, with 220 TB of capacity, connected by a dual 25 Gb fibre network. \\
^ Name ^ Model ^ Processor ^ Memory ^
| hpc-login2 | | | |
| hpc-node[1-2] | | | |
| hpc-node[3-9] | | | |
| hpc-fat1 | | | |
| hpc-gpu[1-2] | | | |
| hpc-gpu3 | Dell R7525 | 2 x AMD EPYC 7543 @2.80 GHz (32c) | 256 GB |
| hpc-gpu4 | | | |
===== Accessing the cluster =====
To access the cluster, access must be requested in advance.\\
Access is made through an SSH connection to the login node (172.16.242.211):
<code bash>
ssh <username>@172.16.242.211
</code>
===== Storage =====
<note warning>None of the file systems in the cluster are backed up!!!</note>
The users' HOME in the cluster is on the shared file system, so it is accessible from all nodes of the cluster. Its path is defined in the environment variable %%$HOME%%. \\
Each node has a local 1 TB scratch partition, which is deleted at the end of each job. It can be accessed through the %%$LOCAL_SCRATCH%% environment variable in the scripts. \\
For data to be shared by a group of users, you must request the creation of a folder in the shared storage that will only be accessible by the members of the group.\\
^ Directory ^ Variable ^
| Home* | %%$HOME%% |
| Local scratch | %%$LOCAL_SCRATCH%% |
| Group folder* | |
%%* storage is shared%%
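As an illustration, a minimal job script sketch that stages data through the local scratch; the program and file names are placeholders, only %%$HOME%% and %%$LOCAL_SCRATCH%% are provided by the cluster:
<code bash>
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Copy the input data from the shared HOME to the node-local scratch
cp $HOME/input.dat $LOCAL_SCRATCH/
cd $LOCAL_SCRATCH
# Work on the fast local copy (program name is hypothetical)
./my_program input.dat > output.dat
# Copy the results back before the job ends: the scratch is wiped afterwards
cp $LOCAL_SCRATCH/output.dat $HOME/
</code>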
=== WARNING ===
The shared file system performs poorly when working with many small files. To improve performance in such scenarios, create a file system inside an image file and mount it to work directly on it. The procedure is as follows:
  * Create the image file in your home folder:
<code bash>
## truncate -s <size> <image name>
## Example: create a 20 GB sparse image (adjust the size as needed)
truncate -s 20G example.ext4
</code>
  * Create a filesystem in the image file:
<code bash>
## mkfs.ext4 options:
## -T small  -> options optimized for small files
## -m 0      -> do not reserve capacity for the root user
mkfs.ext4 -T small -m 0 example.ext4
</code>
  * Mount the image (using SUDO) with the mount script:
<code bash>
## By default it is mounted read-only at the default mount point
sudo mount_image.py example.ext4
</code>
  * To unmount the image, use the unmount script (options below).

The mount script has these options:
<code>
--mount-point path <-- (optional) mount the image at the given path
--rw               <-- (optional) by default it is mounted read-only, this option mounts it read-write
</code>
<note warning>Do not mount the image file read-write from more than one node!!!</note>

The unmount script has these options:
<code>
--mount-point path <-- (optional) path of the mount point to unmount
</code>
===== File and data transfer =====
=== SCP ===
From your local machine to the cluster:
<code bash>
scp filename <username>@hpc-login2:<remote path>
</code>
From the cluster to your local machine:
<code bash>
scp filename <username>@<local machine>:<local path>
</code>
=== SFTP ===
To transfer several files or to navigate through the filesystem:
<code bash>
sftp <username>@hpc-login2
sftp> ls
sftp> cd <path>
sftp> put <file to upload>
sftp> get <file to download>
sftp> quit
</code>
=== RSYNC ===
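rsync is well suited to synchronising folders and resuming large transfers. A sketch, with paths as placeholders:
<code bash>
# Copy a local folder to your HOME in the cluster
# (-a preserves permissions and times, -v is verbose, -z compresses in transit)
rsync -avz /path/to/local/folder <username>@hpc-login2:~/
# Bring back the results
rsync -avz <username>@hpc-login2:~/results/ /path/to/local/results/
</code>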
=== SSHFS ===
Requires local installation of the sshfs package.\\
Allows, for example, mounting the user's local home in hpc-login2:
<code bash>
## Mount (run on hpc-login2; requires an SSH server on your local machine)
sshfs <local user>@<local machine>:<local home path> <mount point>
## Unmount
fusermount -u <mount point>
</code>
===== Available software =====
All nodes have this basic software installed:
  * GCC 8.5.0
  * Python 3.6.8
  * Perl 5.26.3
GPU nodes, in addition:
  * nVidia Driver 510.47.03
  * CUDA 11.6
  * libcudnn 8.7
To use any other software not installed on the system, or another version of it, there are three options:
  - Use Modules with the modules that are already available
  - Use a container (uDocker or Apptainer/Singularity)
  - Use Conda
A module is the simplest option when the software and version you need are already available. \\
A container is ideal when dependencies are complicated and/or the software is not available as a module. \\
Conda is the best solution if you need the latest version of a library or program, or packages not otherwise available.\\
==== Modules/Lmod ====
<code bash>
# See available modules:
module avail
# Load a module:
module load <module name>
# Unload a module:
module unload <module name>
# List modules loaded in your environment:
module list
# ml can be used as a shorthand of the module command:
ml avail
# To get info about a module:
ml spider <module name>
</code>
==== Running software containers ====
=== uDocker ===
uDocker lets you run Docker containers without root privileges. It is installed as a module, so it must be loaded into the environment first:
<code bash>
ml uDocker
</code>
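A typical session might then look like this; the image name is only an example:
<code bash>
ml uDocker
# Download an image from Docker Hub
udocker pull ubuntu:22.04
# Create a container from the image and run a command in it
udocker create --name=myubuntu ubuntu:22.04
udocker run myubuntu cat /etc/os-release
</code>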
=== Apptainer/Singularity ===
Apptainer/Singularity is installed directly on the nodes, so no module needs to be loaded.
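For example, a container can be pulled from a registry and run directly; the image is only an example:
<code bash>
# Build a local SIF image from Docker Hub
apptainer pull docker://python:3.9
# Run a command inside the container
apptainer exec python_3.9.sif python3 --version
</code>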
==== Conda ====
[[ https://docs.conda.io/en/latest/miniconda.html | Conda Documentation ]]\\
Miniconda is the minimal version of Anaconda and only includes the conda environment manager, Python and a few necessary packages. From there on, each user downloads and installs only the packages they need.
<code bash>
# Get miniconda
wget https://repo.anaconda.com/miniconda/Miniconda3-py39_4.11.0-Linux-x86_64.sh
# Install it
sh Miniconda3-py39_4.11.0-Linux-x86_64.sh
# Initialize it for the bash shell
~/miniconda3/bin/conda init bash
</code>
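Once initialized, each user can create isolated environments with the packages they need, for example:
<code bash>
# Create an environment with a specific Python version
conda create -n myenv python=3.10
# Activate it and install packages into it
conda activate myenv
conda install numpy scipy
# Deactivate it when done
conda deactivate
</code>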
===== Using SLURM =====
The cluster queue manager is [[ https://slurm.schedmd.com/ | SLURM ]]. \\
<note tip>The term CPU identifies a physical core in a socket. Hyperthreading is disabled, so each node has as many available CPUs as (number of sockets) * (number of physical cores per socket).</note>
== Available resources ==
<code bash>
hpc-login2 ~]# ver_estado.sh
=============================================================================================================
  NODO  ...
=============================================================================================================
 hpc-fat1    up    0%[--------------------------------------------------]( 0/80) RAM:  0%  ---
 ...
=============================================================================================================
TOTALES: [Cores : 3/688] [Mem(MB): 270000/...]

# See the available resources with sinfo (the format string is illustrative):
hpc-login2 ~]$ sinfo -e -o "%30N %20c %20m %30G"
# There is an alias for that command:
hpc-login2 ~]$ ver_recursos
NODELIST        ...
hpc-fat1        ...
hpc-gpu[1-2]    ...
hpc-gpu3        ...
hpc-gpu4        ...
hpc-node[1-2]   ...
hpc-node[3-9]   ...

# To see current resource use: (CPUS (Allocated/Idle/Other/Total))
# (the output fields are illustrative)
hpc-login2 ~]$ sinfo -N -r -O NodeList,CPUsState,Memory,AllocMem
# There is an alias for that command:
hpc-login2 ~]$ ver_uso
NODELIST     ...
hpc-fat1     ...
hpc-gpu3     ...
hpc-gpu4     ...
hpc-node1    ...
hpc-node2    ...
hpc-node3    ...
hpc-node4    ...
hpc-node5    ...
hpc-node6    ...
hpc-node7    ...
hpc-node8    ...
hpc-node9    ...
</code>
==== Nodes ====
A node is SLURM's computation unit and corresponds to a physical server.
<code bash>
# Show node info:
hpc-login2 ~]$ scontrol show node hpc-node1
NodeName=hpc-node1 Arch=x86_64 CoresPerSocket=18
   ...
</code>
==== Partitions ====
Partitions in SLURM are logical groups of nodes. In this cluster there is a single default partition to which all nodes belong, so it is not necessary to specify it when submitting jobs.
<code bash>
# Show partition info:
hpc-login2 ~]$ sinfo
defaultPartition*    up   ...
</code>
==== Jobs ====
Jobs in SLURM are allocations of resources to a user for a certain time. \\
A JOB consists of one or more STEPS, each consisting of one or more TASKS that use one or more CPUs. There is one STEP for each program that executes sequentially in a JOB and one TASK for each program that executes in parallel, as the sketch below illustrates. Therefore, in the simplest case, such as launching a job that runs the hostname command, the JOB has a single STEP and a single TASK.
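As a sketch of that hierarchy, a batch script where each srun call is a STEP and -n sets the number of TASKS; the program names are placeholders:
<code bash>
#!/bin/bash
#SBATCH --ntasks=4

# STEP 1: sequential preparation (a single TASK)
srun -n1 ./prepare_data
# STEP 2: the parallel computation (4 TASKS in parallel)
srun -n4 ./parallel_solver
# STEP 3: sequential postprocessing (a single TASK)
srun -n1 ./collect_results
</code>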
==== Queue system (QOS) ====
The queue to which each job is submitted defines its priority, its limits and the relative "cost" of the resources it consumes.
<code bash>
# Show the queues:
hpc-login2 ~]$ sacctmgr show qos
# There is an alias that shows only the relevant info:
hpc-login2 ~]$ ver_colas
      Name   Priority  ...
---------- ----------  ...
interactive      ...
    urgent       ...
      long       ...
       ...
</code>
# Priority: the relative priority of each queue. \\
# DenyonLimit: jobs that exceed the queue limits are rejected at submission. \\
# UsageFactor: the relative "cost" of the resources consumed in this queue. \\
# MaxTRES: limits applied to each job. \\
# MaxWall: maximum time the job can run. \\
# MaxTRESPU: global limits per user. \\
# MaxJobsPU: maximum number of jobs a user can have running simultaneously. \\
# MaxSubmitPU: maximum number of jobs a user can have submitted (queued or running) at once. \\

==== Submitting a job to the queue system ====
== Requesting resources ==
By default, a job submitted without requesting anything receives a minimal default allocation. This is very inefficient, so at least the following should always be specified:
  - %%Node number (-N or --nodes), tasks (-n or --ntasks) and/or CPUs per task (-c or --cpus-per-task).%%
  - %%Memory (--mem) per node or memory per cpu (--mem-per-cpu).%%
  - %%Job execution time (--time).%%
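For example, with illustrative values:
<code bash>
# 1 node, 4 tasks with 2 CPUs each, 8 GB of memory, 2 hours at most
sbatch -N1 -n4 -c2 --mem=8G --time=02:00:00 job.sh
</code>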
In addition, it may be useful to add the following parameters:
| -J | %%--job-name%% | Name for the job |
| -o | %%--output%% | File to which the standard output is redirected |
| -C | %%--constraint%% | Request nodes with a specific feature |
|    | %%--exclusive%% | Request exclusive use of the node(s) |
| -w | %%--nodelist%% | Run the job on a specific list of nodes |

== How resources are allocated ==
The default allocation method between nodes is block allocation (all the cores available in one node are allocated before using another node). The default allocation method within a node is cyclic allocation (the requested cores are distributed evenly among the node's sockets).

== Priority calculation ==
When a job is submitted to the queue system it is checked against the limits of its queue; jobs that exceed them are rejected. \\
If resources are available the job is executed directly; if not, it is queued. Each job is assigned a priority that determines the order in which the queued jobs are executed when resources become available. To determine the priority of each job, 3 factors are weighted: the time it has been waiting in the queue (25%), the fixed priority of the queue and the user's fairshare. \\
The fairshare is a dynamic calculation made by SLURM for each user, based on the difference between the resources allocated to them and the resources they have consumed over the last 14 days.
<code bash>
hpc-login2 ~]$ sshare
       User  RawShares  NormShares    RawUsage   NormUsage   FairShare
 ---------- ---------- ----------- ----------- ----------- -----------
                     1    0.500000         ...
  user_name        ...
</code>
# RawShares: the amount of resources allocated to the user in absolute terms. It is the same for all users.\\
# NormShares: RawShares normalised to the shares of all users.\\
# RawUsage: the number of seconds/cpu the user has consumed.\\
# NormUsage: RawUsage normalised to the total seconds/cpu consumed in the cluster.\\
# FairShare: the FairShare factor, between 0 and 1. The higher the user's recent usage, the closer to 0 and the lower the resulting priority.\\

== Job submission ==
  - sbatch
  - salloc
  - srun

1. SBATCH \\
Used to submit a script to the queue system. It is non-blocking: the job is queued and control returns immediately.
<code bash>
# Create the script:
hpc-login2 ~]$ vim test_job.sh
#!/bin/bash
#SBATCH --job-name=test        # Job name
#SBATCH --nodes=1              # Number of nodes
#SBATCH --ntasks=1             # Number of tasks
#SBATCH --cpus-per-task=1      # CPUs per task
#SBATCH --mem=1gb              # Memory for the job
#SBATCH --time=00:05:00        # Time limit hrs:min:sec
#SBATCH --qos=urgent           # Queue
#SBATCH --output=test%j.log    # Output log (%j expands to the job id)

echo "Hello World!"

# Submit it to the queue:
hpc-login2 ~]$ sbatch test_job.sh
</code>
2. SALLOC \\
Used to immediately obtain an allocation of resources (nodes). As soon as it is obtained, the specified command or, by default, a shell is executed.
<code bash>
# Get 5 nodes and launch a job:
hpc-login2 ~]$ salloc -N5 myprogram
# Get an interactive shell on a node:
hpc-login2 ~]$ salloc -N1
# Get an interactive shell on a node in exclusive mode:
hpc-login2 ~]$ salloc -N1 --exclusive
</code>
3. SRUN \\
Used to launch a parallel job. It is interactive and blocking: the command waits until the tasks finish.
<code bash>
# Launch the hostname command on 2 nodes:
hpc-login2 ~]$ srun -N2 hostname
hpc-node1
hpc-node2
</code>
==== GPU use ====
To specifically request a GPU allocation for a job, options must be added to sbatch or srun:
| %%--gres%% | Request gpus per NODE | %%--gres=gpu[:type]:count%% |
| %%--gpus or -G%% | Request gpus per JOB | %%--gpus=[type]:count%% |
There are also the options %%--gpus-per-socket, --gpus-per-node and --gpus-per-task%%.\\
Examples:
<code bash>
## See the list of nodes and gpus:
hpc-login2 ~]$ ver_recursos
## To request any 2 GPUs for a JOB, add:
--gpus=2
## To request a 40G A100 on one node and an 80G A100 on another node, add
## (the GPU type names are those shown by ver_recursos; these are illustrative):
--gres=gpu:A100_40:1,gpu:A100_80:1
</code>
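Putting it together, a minimal GPU job script could look like this; the resource values are illustrative:
<code bash>
#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=04:00:00
#SBATCH --gpus=1            # any available GPU

# Show the GPU allocated to the job
nvidia-smi
</code>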
==== Job monitoring ====
<code bash>
## List all jobs in the queue:
hpc-login2 ~]$ squeue
## List a user's jobs:
hpc-login2 ~]$ squeue -u <login>
## Cancel a job:
hpc-login2 ~]$ scancel <JOBID>
## List recent jobs:
hpc-login2 ~]$ sacct -b
## Detailed historical information for a job:
hpc-login2 ~]$ sacct -l -j <JOBID>
## Debug information of a job for troubleshooting:
hpc-login2 ~]$ scontrol show jobid -dd <JOBID>
## View the resource usage of a running job:
hpc-login2 ~]$ sstat <JOBID>
</code>
==== Configuring job output ====
== Exit codes ==
By default these are the exit codes of the commands:
^ SLURM command ^ Exit code ^
| salloc | 0 on success, 1 if the user's command could not be executed |
| srun | The highest among all executed tasks, or 253 for an out-of-memory error |
| sbatch | 0 on success, non-zero on failure |
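These codes are recorded by the accounting system, so the result of a finished job can be checked, for example, with:
<code bash>
hpc-login2 ~]$ sacct -j <JOBID> -o JobID,State,ExitCode
</code>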

== STDIN, STDOUT and STDERR ==
**SRUN:**\\
By default stdout and stderr are redirected from all TASKS to srun's stdout and stderr, and stdin is redirected from srun's stdin to all TASKS. This can be changed with:
| %%-i, --input=<option>%% |
| %%-o, --output=<option>%% |
| %%-e, --error=<option>%% |
And the options are:
  * //all//: the default.
  * //none//: nothing is redirected.
  * //taskid//: redirects only to and/or from the TASK with the specified id.
  * //filename//: redirects everything to and/or from the specified file.
  * //filename pattern//: same as the filename option, but with a file name defined by a pattern.
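For example, to write the output of each TASK to its own file, using the task id (%t) in the file name:
<code bash>
# Each of the 4 tasks writes its output to out_0.txt ... out_3.txt
hpc-login2 ~]$ srun -n4 -o out_%t.txt hostname
</code>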

**SBATCH:**\\
By default "/dev/null" is opened on the script's stdin, and stdout and stderr are redirected to a file named %%"slurm-%j.out"%%. This can be changed with:
| %%-i, --input=<filename_pattern>%% |
| %%-o, --output=<filename_pattern>%% |
| %%-e, --error=<filename_pattern>%% |
The reference for filename_pattern is in the [[ https://slurm.schedmd.com/sbatch.html | sbatch documentation ]].
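For example, to separate standard output and error into per-job files (%j expands to the job id):
<code bash>
#SBATCH --output=job_%j.out
#SBATCH --error=job_%j.err
</code>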

==== Sending mail ====
JOBS can be configured to send mail in certain circumstances using these two parameters (**BOTH ARE REQUIRED**):
| %%--mail-type=<type>%% | Some valid types: NONE, BEGIN, END, FAIL, REQUEUE, ALL, TIME_LIMIT |
| %%--mail-user=<email address>%% | The destination email address |
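For example, to receive a mail when the job ends or fails (the address is a placeholder):
<code bash>
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=user@example.com
</code>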

==== Status of jobs in the queue system ====
<code bash>
hpc-login2 ~]$ squeue -l
JOBID PARTITION     NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON)
 6547 defaultPa      ...

## Check the status of queue use:
hpc-login2 ~]$ estado_colas.sh
JOBS PER USER:
--------------
      <user>:  ...

JOBS PER QOS:
--------------
        long:  1
         ...

JOBS PER STATE:
--------------
         ...
==========================================
Total JOBS in cluster:  ...
</code>
Common job states:
  * R RUNNING: the job currently has an allocation.
  * CD COMPLETED: the job has terminated all processes on all nodes with an exit code of zero.
  * F FAILED: the job terminated with a non-zero exit code or another failure condition.
  * PD PENDING: the job is awaiting resource allocation.

The [[ https://slurm.schedmd.com/squeue.html | squeue documentation ]] has the complete list of job states.

If a job is not running, a reason is displayed under REASON; the list of reasons is in the same [[ https://slurm.schedmd.com/squeue.html | squeue documentation ]].