====== High Performance Computing (HPC) cluster ctcomp3 ======
[[ https:// ]]
===== Description =====
| hpc-node[3-9] | ... |
| hpc-fat1 | ... |
| hpc-gpu[1-2] | ... |
| hpc-gpu3 | ... |
| hpc-gpu4 | ... |
===== Accessing the cluster =====
To access the cluster, access must be requested in advance via [[ https:// ]]. \\
Access is done through an SSH connection to the login node (172.16.242.211):
<code bash>
ssh <username>@172.16.242.211
</code>
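If you connect frequently, an entry in the SSH client configuration of your own machine can save typing. A minimal sketch, assuming the alias ''hpc'' and the username are placeholders you replace with your own:
<code bash>
# ~/.ssh/config on your local machine ("hpc" is an arbitrary alias)
Host hpc
    HostName 172.16.242.211
    User your_username
# afterwards you can connect simply with:
ssh hpc
</code>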
  * Python 3.6.8
  * Perl 5.26.3
GPU nodes, in addition:
  * nVidia Driver 510.47.03
  * CUDA 11.6
  * libcudnn 8.7
To use any other software not installed on the system, or a different version of it, there are three options:
  - Use Modules with the modules that are already installed (or request the installation of a new module if it is not available).
=== uDocker ===
[[ https:// ]] \\
udocker is installed as a module, so it needs to be loaded into the environment:
<code bash>
ml uDocker
</code>
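As an illustration of typical udocker usage (the image and container names below are just examples, not something preinstalled on the cluster), a container can be pulled and run entirely in user space:
<code bash>
# Load the module first
ml uDocker
# Pull an image from Docker Hub (example image)
udocker pull ubuntu:22.04
# Create a named container from the image
udocker create --name=mytest ubuntu:22.04
# Run a command inside the container
udocker run mytest cat /etc/os-release
</code>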
=== Apptainer/Singularity ===
[[ https:// ]] \\
Apptainer/Singularity is also available on the cluster.
==== CONDA ====
[[ https:// ]] \\
Miniconda is a minimal conda installer that each user can install in their own home directory:
<code bash>
# Get the installer:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
# Install it:
bash Miniconda3-latest-Linux-x86_64.sh
# Initialize it for the bash shell:
~/miniconda3/bin/conda init bash
</code>
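Once Miniconda is initialised, environments are created and used in the standard conda way. A minimal sketch (the environment name, Python version and packages are only examples):
<code bash>
# Create an environment with a specific Python version
conda create -n myenv python=3.10
# Activate it
conda activate myenv
# Install packages into it
conda install numpy scipy
# Deactivate it when done
conda deactivate
</code>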
===== Using SLURM =====
The cluster's queue manager is SLURM.
<note tip>The term CPU identifies a physical core.</note>
== Available resources ==
<code bash>
hpc-login2 ~]# ver_estado.sh
=============================================================================================================
  NODO ...
=============================================================================================================
  ...
=============================================================================================================
TOTALES: [Cores : 3/688] [Mem(MB): 270000/...]
hpc-login2 ~]$ sinfo -e -o "..."
# There is an alias for this command:
hpc-login2 ~]$ ver_recursos
NODELIST ...
hpc-node[3-9] ...
# To see the current resource usage: CPUS (Allocated/Idle/Other/Total)
hpc-login2 ~]$ sinfo -N -r -O NodeList,...
# There is an alias for this command:
hpc-login2 ~]$ ver_uso
NODELIST ...
hpc-node9 ...
</code>
==== Nodes ====
A node is SLURM's unit of computation and corresponds to a physical server.
<code bash>
# Show the information of a node:
hpc-login2 ~]$ scontrol show node hpc-node1
NodeName=hpc-node1 Arch=x86_64 CoresPerSocket=18
...
</code>
==== Partitions ====
Partitions in SLURM are logical groups of nodes. In the cluster there is a single partition, defaultPartition, to which all the nodes belong:
<code bash>
# Show the partition information:
hpc-login2 ~]$ sinfo
defaultPartition* ...
</code>
==== Jobs ====
Jobs in SLURM are allocations of resources to a user for a given time. Jobs are identified by a sequential number, the JOBID. \\
A JOB consists of one or more STEPS, each consisting of one or more TASKS, and each TASK uses one or more CPUs. There is one STEP for each program that runs sequentially in a JOB, and one TASK for each program that runs in parallel. Therefore, in the simplest case, such as a job that just runs a single command, the JOB has a single STEP and a single TASK.
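As an illustration of how JOBS, STEPS and TASKS relate, the sketch below submits one JOB whose batch script calls srun twice, producing two STEPS; the second STEP runs four TASKS in parallel (the program name is a placeholder):
<code bash>
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=4                 # up to 4 TASKS available to each STEP

# STEP 1: a sequential command (1 TASK)
srun --ntasks=1 hostname

# STEP 2: a parallel program (4 TASKS); ./my_parallel_program is a placeholder
srun --ntasks=4 ./my_parallel_program
</code>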
==== Queue system (QOS) ====
The queue (QOS) to which each job is submitted defines its priority, its limits and also the relative "cost" to the user.
<code bash>
# Show the queues:
hpc-login2 ~]$ sacctmgr show qos
# There is an alias that shows only the most relevant information:
hpc-login2 ~]$ ver_colas
       Name   Priority  ...
 ----------  ---------- ...
interactive  ...
     urgent  ...
       long  ...
        ...
</code>
# Priority: the relative priority of each queue
# DenyOnLimit: jobs that exceed any of the queue's limits are rejected at submission
# UsageFactor: the relative "cost" for the user of running jobs in this queue
# MaxTRES: resource limits per job
# MaxWall: maximum execution (wall clock) time per job
# MaxTRESPU: global resource limits per user
# MaxJobsPU: maximum number of running jobs per user
# MaxSubmitPU: maximum number of jobs (pending plus running) per user
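To send a job to a specific queue, pass its name with -q/--qos; for example (job.sh is a placeholder script):
<code bash>
# Submit a script to the urgent queue:
hpc-login2 ~]$ sbatch -q urgent job.sh
</code>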
==== Submitting a job to the queue system ====
== Requesting resources ==
By default, if a job is submitted without specifying anything, it is sent to the default QOS and given the default resource allocation. This is very inefficient; ideally, whenever possible, at least these three parameters should be specified when submitting a job:
  - %%The number of nodes (-N or --nodes), tasks (-n or --ntasks) and/or CPUs per task (-c or --cpus-per-task)%%
  - %%The memory per node (--mem) or per CPU (--mem-per-cpu)%%
  - %%The estimated execution time of the job (-t or --time)%%
In addition, it may be useful to add the following parameters (a combined example follows the table):
| -J | %%--job-name%% | Name for the job |
| -q | %%--qos%% | Queue (QOS) to submit the job to |
| -o | %%--output%% | File to which standard output is redirected |
| -C | %%--constraint%% | Request nodes with a specific feature |
|    | %%--exclusive%% | Request exclusive use of the node(s) |
| -w | %%--nodelist%% | Request a specific list of nodes |
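A sketch putting these options together on the command line (all values and the script name are only examples):
<code bash>
# 1 node, 4 tasks, 4 GB of memory, 2 hours, a job name and a log file:
hpc-login2 ~]$ sbatch -N1 -n4 --mem=4G --time=02:00:00 -J mytest -o mytest_%j.log job.sh
</code>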
== How resources are allocated ==
The default allocation method across nodes is block allocation: all the cores available in one node are allocated before using another node.
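The task distribution can be changed per job with the standard -m/--distribution option of srun/sbatch; for example, to spread tasks over the allocated nodes in a round-robin fashion:
<code bash>
# Distribute 8 tasks over 2 nodes cyclically instead of filling the first node:
hpc-login2 ~]$ srun -N2 -n8 --distribution=cyclic hostname
</code>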
== Priority calculation ==
When a job is submitted to the queue system, the requested resources are first checked against the limits set in the corresponding queue. If any limit is exceeded, the submission is cancelled. \\
If resources are available the job is executed directly; if not, it is queued. Each job is assigned a priority that determines the order in which queued jobs are executed when resources become available. Three factors are weighted to determine the priority of each job: the time it has been waiting in the queue (25%), the priority of its queue (QOS) and the user's fairshare. \\
The fairshare is a value that SLURM computes for each user from their recent usage of the cluster: the more resources a user has consumed recently, the lower the priority of their new jobs compared with those of other users. It can be checked with:
<code bash>
hpc-login2 ~]$ sshare -l
...
user_name ...
</code>
# RawShares: the share of the cluster assigned to the user, in absolute terms
# NormShares: the same share normalised to the total amount of shares
# RawUsage: the resources (CPU seconds) the user has consumed
# NormUsage: the previous value normalised to the total usage of the cluster
# FairShare: the resulting fairshare factor between 0 and 1; the lower it is, the more the user has consumed relative to their share
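The priority that pending jobs actually receive, broken down by factor, can be inspected with the standard sprio command:
<code bash>
# Show the priority factors of all pending jobs:
hpc-login2 ~]$ sprio -l
# Show them for a single job:
hpc-login2 ~]$ sprio -j <JOBID>
</code>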
== Job submission ==
  - sbatch
  - salloc
  - srun
1. SBATCH \\
Used to submit a script to the queue system. It is batch-processed and non-blocking.
<code bash>
# Create the script:
hpc-login2 ~]$ vim test_job.sh
#!/bin/bash
#SBATCH --job-name=test            # Job name
#SBATCH --nodes=1                  # Number of nodes
#SBATCH --ntasks=1                 # Number of tasks
#SBATCH --mem=1gb                  # Memory per node
#SBATCH --time=00:05:00            # Time limit hh:mm:ss (example value)
#SBATCH --qos=urgent               # Queue (QOS)
#SBATCH --output=test%j.log        # Standard output and error log
echo "Hello World!"

# Submit it to the queue system:
hpc-login2 ~]$ sbatch test_job.sh
</code>
2. SALLOC \\
Used to obtain an allocation of resources (nodes) immediately. As soon as the allocation is obtained, the specified command is executed, or a shell if no command is given.
<code bash>
# Get 5 nodes and launch a job:
hpc-login2 ~]$ salloc -N5 myprogram
# Get interactive access to a node (press Ctrl+D to exit):
hpc-login2 ~]$ salloc -N1
# Get interactive EXCLUSIVE access to a node (press Ctrl+D to exit):
hpc-login2 ~]$ salloc -N1 --exclusive
</code>
3. SRUN \\
Used to launch a parallel job (preferable to using mpirun). It is interactive and blocking.
<code bash>
# Run the hostname command on 2 nodes:
hpc-login2 ~]$ srun -N2 hostname
hpc-node1
hpc-node2
</code>
==== Using the GPU nodes ====
To specifically request GPUs for a job, one of these options must be added to the submission:
| %%--gres%% | Request GPUs per node, in the form %%--gres=gpu[:type][:number]%% |
| %%--gpus%% or -G | Request GPUs per job, in the form %%--gpus=[type:]number%% |
There are also the options %%--gpus-per-node%%, %%--gpus-per-task%% and %%--gpus-per-socket%%.
Examples:
<code bash>
## See the list of nodes and GPUs:
hpc-login2 ~]$ ver_recursos
## To request any 2 GPUs for a JOB, add:
--gpus=2
## To request a 40G A100 on one node and an 80G A100 on another node, add:
--gres=gpu:...
</code>
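As a sketch of a complete batch script requesting a GPU (the script name, resource values and program are placeholders; use ver_recursos to see the exact GPU type names available):
<code bash>
#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=16G
#SBATCH --time=01:00:00
#SBATCH --gpus=1                 # any available GPU; use --gres to request a specific type
nvidia-smi                       # print the GPU that was actually assigned
./my_gpu_program                 # placeholder for the real workload
</code>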
==== Job monitoring ====
<code bash>
## List all jobs in the queue:
hpc-login2 ~]$ squeue
## List a user's jobs:
hpc-login2 ~]$ squeue -u <login>
## Cancel a job:
hpc-login2 ~]$ scancel <JOBID>
## List recent jobs:
hpc-login2 ~]$ sacct -b
## Detailed historical information for a job:
hpc-login2 ~]$ sacct -l -j <JOBID>
## Debug information for a job, for troubleshooting:
hpc-login2 ~]$ scontrol show jobid -dd <JOBID>
## See the resource usage of a running job:
hpc-login2 ~]$ sstat <JOBID>
</code>
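For finished jobs, sacct can print selected fields, which is useful to check how much time and memory a job actually used (these are standard sacct field names):
<code bash>
# Elapsed time, CPU time, peak memory and final state of a job:
hpc-login2 ~]$ sacct -j <JOBID> -o JobID,JobName,Elapsed,TotalCPU,MaxRSS,State
</code>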
==== Controlling job output ====
== Exit codes ==
By default these are the exit codes of the commands:
^ SLURM command ^ Exit code ^
| salloc | 0 on success, 1 if the user's command could not be executed |
| srun | The highest exit code among all the tasks run |
| sbatch | 0 on success, otherwise the corresponding error code |
== STDIN, STDOUT and STDERR ==
**SRUN:**\\
By default, stdout and stderr of all the TASKS are redirected to the stdout and stderr of srun, and the stdin of srun is redirected to the stdin of all the TASKS. This behaviour can be changed with:
| %%-i, --input=<option>%% | |
| %%-o, --output=<option>%% | |
| %%-e, --error=<option>%% | |
And the options are:
  * //all//: the default behaviour, as described above
  * //none//: nothing is redirected
  * //taskid//: stdin/stdout/stderr is redirected only for the task with that id
  * //filename//: everything is redirected to/from the given file
  * //filename pattern//: as the previous one, but with a file name that admits a pattern (see filename_pattern below)
**SBATCH:**\\
By default "/dev/null" is connected to the script's stdin, and stdout and stderr are redirected to a file named "slurm-%j.out". This behaviour can be changed with:
| %%-i, --input=<filename_pattern>%% | |
| %%-o, --output=<filename_pattern>%% | |
| %%-e, --error=<filename_pattern>%% | |
The filename_pattern reference is here: [[ https:// ]]
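As an illustration of filename patterns (%x expands to the job name and %j to the job id, both standard patterns):
<code bash>
#SBATCH --job-name=analysis
#SBATCH --output=%x_%j.out       # e.g. analysis_12345.out
#SBATCH --error=%x_%j.err        # e.g. analysis_12345.err
</code>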
==== Sending email ====
JOBS can be configured to send email in certain circumstances using these two parameters:
| %%--mail-type=<type>%% | Options: NONE, BEGIN, END, FAIL, REQUEUE, ALL |
| %%--mail-user=<user>%% | The destination email address |
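For example, to be notified when a job finishes or fails (the address is a placeholder):
<code bash>
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=user@example.com
</code>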
==== Job states in the queue system ====
<code bash>
hpc-login2 ~]# squeue -l
JOBID PARTITION ...
 6547 defaultPa ...

## Check the status of queue usage:
hpc-login2 ~]$ estado_colas.sh
JOBS PER USER:
--------------
  ...

JOBS PER QOS:
--------------
          long: 1
  ...

JOBS PER STATE:
--------------
  ...
==========================================
Total JOBS in cluster: ...
</code>
Common job states (STATE):
  * R RUNNING Job currently has an allocation.
  * CD COMPLETED Job has terminated all processes on all nodes with an exit code of zero.
  * PD PENDING Job is awaiting resource allocation.
[[ https:// ]] \\
If a job is not running, a reason will be displayed under REASON: [[ https:// ]]