====== High Performance Computing (HPC) cluster ctcomp3 ======
[[ https://web.microsoftstream.com/video/f5eba154-b597-4440-9307-3befd7597d78 | Video of the presentation of the service (7/3/22) (Spanish only) ]]
===== Description =====
The computing part of the cluster is made up of:
  * 9 servers for general computing.
  * 1 "fat node" for memory-intensive jobs.
  * 4 servers for GPU computing.

Users only have direct access to the login node, which has more limited features and should not be used for computing. \\
All nodes are interconnected by a 10 Gb network. \\
There is distributed storage with 220 TB of capacity, accessible from all nodes and connected through a dual 25 Gb fibre network. \\
\\
^ Name ^ Model ^ Processor ^
| hpc-login2 | | |
| hpc-node[1-2] | | |
| hpc-node[3-9] | | |
| hpc-fat1 | | |
| hpc-gpu[1-2] | | |
| hpc-gpu3 | | |
| hpc-gpu4 | | |
===== Accessing the cluster =====
Access to the cluster must be requested in advance via [[https://...]]. \\
The access is done through an SSH connection to the login node:
<code bash>
ssh <username>@hpc-login2
</code>
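Optionally, an entry in the //~/.ssh/config// of your local machine shortens the connection. This is only an illustrative sketch: the alias is arbitrary, and the login node may need its fully qualified domain name, which is not stated here.
<code>
Host hpc
    HostName hpc-login2
    User <username>
</code>
With that entry in place, ''ssh hpc'' is enough to connect.
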
===== Storage, directories and filesystems =====
<note warning> None of the file systems in the cluster are backed up!!!</note>
The users' HOME in the cluster is on the shared file system, so it is accessible from all nodes in the cluster. Its path is defined in the environment variable %%$HOME%%. \\
Each node has a local 1 TB scratch partition, which is deleted at the end of each job. It can be accessed through the %%$LOCAL_SCRATCH%% environment variable in the scripts. \\
For data to be shared by groups of users, you must request the creation of a folder in the shared storage that will only be accessible to members of the group.\\
^ Directory ^ Environment variable ^
| Home * | %%$HOME%% |
| Local scratch | %%$LOCAL_SCRATCH%% |
| Group folder * | |
%%* storage is shared %%
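As an illustrative sketch of how the local scratch is typically used (the file names and //myprogram// are placeholders, not part of the cluster setup), a job can stage its data through %%$LOCAL_SCRATCH%% to avoid stressing the shared storage:
<code bash>
#!/bin/bash
#SBATCH --ntasks=1

# Copy the input from the shared HOME to the node-local scratch
cp $HOME/input.dat $LOCAL_SCRATCH/

# Run the computation against the fast local copy
cd $LOCAL_SCRATCH
myprogram input.dat > output.dat

# Copy the results back before the job ends (the scratch is deleted afterwards)
cp output.dat $HOME/
</code>
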
=== WARNING ===
The shared file system performs poorly when working with many small files. To improve performance in such scenarios, create a file system inside an image file and mount it, so you work directly on it. The procedure is as follows:
  * Create the image file in your home folder:
<code bash>
## truncate image.name -s SIZE_IN_BYTES
truncate example.ext4 -s 20G
</code>
  * Create a filesystem in the image file:
<code bash>
## mkfs.ext4 -T small -m 0 image.name
## -T small: options optimised for small files
## -m 0: do not reserve capacity for the root user
mkfs.ext4 -T small -m 0 example.ext4
</code>
  * Mount the image (using sudo) with the script //mount_image.py//:
<code bash>
## By default the image is mounted read-only
sudo mount_image.py example.ext4
</code>
  * To unmount the image, use the corresponding unmount script.

The mount script has these options:
<code>
--mount-point path  <-- (optional) mount the image at the given path
--rw                <-- (optional) By default the image is mounted read-only; with this option it is mounted read-write.
</code>
<note warning> Do not mount the image file read-write from more than one node!!!</note>

The unmount script has these options:
<code>
--mount-point path  <-- (optional) path where the image is mounted
</code>
===== Transference of files and data =====
=== SCP ===
From your local machine to the cluster:
<code bash>
scp filename <username>@hpc-login2:<destination_path>
</code>
From the cluster to your local machine:
<code bash>
scp filename <username>@<local_machine>:<destination_path>
</code>
See ''man scp''.
=== SFTP ===
To transfer several files or to navigate through the filesystem.
<code bash>
<local_machine>$ sftp <username>@hpc-login2
sftp> ls
sftp> cd <path>
sftp> put <file_to_upload>
sftp> get <file_to_download>
sftp> quit
</code>
See ''man sftp''.
=== RSYNC ===
See ''man rsync''.
=== SSHFS ===
Requires local installation of the sshfs package.\\
Allows, for example, mounting the user's local home on hpc-login2:
<code bash>
## Mount
sshfs <local_username>@<local_machine>:<path> <mount_point>
## Unmount
fusermount -u <mount_point>
</code>
[[https://linux.die.net/man/1/sshfs | sshfs man page ]]

===== Available Software =====
All nodes have the basic software that is installed by default in AlmaLinux 8.4, in particular:
  * GCC 8.5.0
  * Python 3.6.8
  * Perl 5.26.3
GPU nodes, in addition:
  * nVidia Driver 510.47.03
  * CUDA 11.6
  * libcudnn 8.7
To use any other software that is not installed on the system, or a different version of one that is, there are three options:
  - Use Modules with the modules that are already installed
  - Use a container (uDocker or Apptainer/Singularity)
  - Use Conda
A module is the simplest solution for using software without modifications or with dependencies that are difficult to satisfy.\\
A container is ideal when dependencies are complicated and/or the software is highly customised. It is also the best solution if you are looking for reproducibility.\\
Conda is the best solution if you need the latest version of a library or program, or packages that are not otherwise available.\\

==== Modules/Lmod ====
[[ https://lmod.readthedocs.io/en/latest/ | Lmod documentation ]]
<code bash>
# See available modules:
module avail
# Load a module:
module load <module_name>
# Unload a module:
module unload <module_name>
# List modules loaded in your environment:
module list
# ml can be used as a shorthand of the module command:
ml avail
# To get info about a module:
ml spider <module_name>
</code>

==== Software containers execution ====
=== uDocker ===
[[ https://indigo-dc.gitbook.io/udocker/ | uDocker documentation ]]
udocker is installed as a module, so it needs to be loaded into the environment:
<code bash>
ml uDocker
</code>

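As a sketch of typical usage once the module is loaded (the image and container names are arbitrary examples, not cluster-specific), containers are pulled and run entirely in user space:
<code bash>
# Pull an image from Docker Hub
udocker pull ubuntu:22.04
# Create a container from the image
udocker create --name=myubuntu ubuntu:22.04
# Run a command inside the container
udocker run myubuntu cat /etc/os-release
</code>
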
=== Apptainer/Singularity ===
[[ https://apptainer.org/docs/ | Apptainer documentation ]]
Apptainer/Singularity is installed on the nodes themselves, so it does not need to be loaded as a module.

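For illustration, a minimal sketch of running a container with Apptainer (the image reference is an arbitrary example):
<code bash>
# Pull an image from Docker Hub into a local SIF file
apptainer pull ubuntu.sif docker://ubuntu:22.04
# Run a command inside the container
apptainer exec ubuntu.sif cat /etc/os-release
</code>
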
==== CONDA ====
[[ https://docs.conda.io/en/latest/miniconda.html | Miniconda documentation ]]
Miniconda is the minimal version of Anaconda and only includes the conda environment manager, Python and a few necessary packages. From there on, each user downloads and installs only the packages they need.
<code bash>
# Get miniconda
wget https://repo.anaconda.com/miniconda/Miniconda3-py39_4.11.0-Linux-x86_64.sh
# Install it
sh Miniconda3-py39_4.11.0-Linux-x86_64.sh
# Initialize for bash shell
~/miniconda3/bin/conda init bash
</code>

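A typical next step is to create and use an isolated environment per project; the names and versions below are arbitrary examples:
<code bash>
# Create an environment with a specific Python version
conda create -n myproject python=3.10
# Activate it
conda activate myproject
# Install the packages you need into it
conda install numpy scipy
</code>
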

===== Using SLURM =====
The cluster queue manager is [[ https://slurm.schedmd.com/ | SLURM ]].
<note tip>The term CPU identifies a physical core in a socket. Hyperthreading is disabled, so each node has as many CPUs available as (number of sockets) * (number of physical cores per socket).</note>
== Available resources ==
<code bash>
## Summary of the state of the nodes (used/total cores and memory):
hpc-login2 ~]# ver_estado.sh
=============================================================================================================
  NODO ...
=============================================================================================================
TOTALES: [Cores : 3/688] [Mem(MB): 270000/...]

# To see the resources defined for each node:
hpc-login2 ~]$ sinfo -e -o "..."
# There is an alias for that command:
hpc-login2 ~]$ ver_recursos
NODELIST       ...
hpc-fat1       ...
hpc-gpu[1-2]   ...
hpc-gpu3       ...
hpc-gpu4       ...
hpc-node[1-2]  ...
hpc-node[3-9]  ...

# To see current resource use (CPUS: Allocated/Idle/Other/Total):
hpc-login2 ~]$ sinfo -N -r -O NodeList,CPUsState,Memory,...
# There is an alias for that command:
hpc-login2 ~]$ ver_uso
NODELIST   ...
hpc-fat1   ...
hpc-gpu3   ...
hpc-gpu4   ...
hpc-node1  ...
hpc-node2  ...
hpc-node3  ...
hpc-node4  ...
hpc-node5  ...
hpc-node6  ...
hpc-node7  ...
hpc-node8  ...
hpc-node9  ...
</code>
==== Nodes ====
A node is SLURM's computation unit and corresponds to a physical server.
<code bash>
# Show node info:
hpc-login2 ~]$ scontrol show node hpc-node1
NodeName=hpc-node1 Arch=x86_64 CoresPerSocket=18
   ...
</code>
==== Partitions ====
Partitions in SLURM are logical groups of nodes. In the cluster there is a single partition to which all nodes belong, so it is not necessary to specify it when submitting jobs.
<code bash>
# Show partition info:
hpc-login2 ~]$ sinfo
defaultPartition*    up   ...
</code>
==== Jobs ====
Jobs in SLURM are resource allocations to a user for a given time. Jobs are identified by a sequential number or JOBID. \\
A JOB consists of one or more STEPS, each consisting of one or more TASKS that use one or more CPUs. There is one STEP for each program that executes sequentially in a JOB, and one TASK for each program that executes in parallel. Therefore, in the simplest case, such as launching a job that just executes the hostname command, the JOB has a single STEP and a single TASK.

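To make the distinction concrete, here is a sketch of a batch script with two STEPS, the second running four parallel TASKS (//prepare_data// and //solver// are placeholder programs):
<code bash>
#!/bin/bash
#SBATCH --ntasks=4

# STEP 1: one TASK that prepares the input
srun --ntasks=1 ./prepare_data

# STEP 2: four TASKS of the same program running in parallel
srun --ntasks=4 ./solver
</code>
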
==== Queue system (QOS) ====
The queue to which each job is submitted defines its priority, its limits and also the relative "cost" of its use.
<code bash>
# Show queues
hpc-login2 ~]$ sacctmgr show qos
# There is an alias that shows only the relevant info:
hpc-login2 ~]$ ver_colas
       Name   Priority  ...
 ---------- ---------- ----
    regular        ...
interactive        ...
     urgent        ...
       long        ...
</code>
# Priority: the relative priority of each queue. \\
# DenyOnLimit: jobs that do not comply with the queue limits are rejected. \\
# UsageFactor: the relative cost of running in the queue. \\
# MaxTRES: limits applied to each job. \\
# MaxWall: maximum time the job can run. \\
# MaxTRESPU: global limits per user. \\
# MaxJobsPU: maximum number of jobs a user can have running simultaneously. \\
# MaxSubmitPU: maximum number of jobs a user can have queued. \\
==== Sending a job to the queue system ====
== Requesting resources ==
By default, if you submit a job without specifying anything, the system submits it to the default QOS (regular) and assigns it one node, one CPU and 4 GB of memory. The time limit for job execution is that of the queue (4 days and 4 hours).
This is very inefficient; you should always specify at least:
  - %%Node number (-N or --nodes), tasks (-n or --ntasks) and/or CPUs per task (-c or --cpus-per-task).%%
  - %%Memory (--mem) per node or memory per cpu (--mem-per-cpu).%%
  - %%Job execution time ( --time )%%

In addition, it may be interesting to add the following parameters (a complete example follows the table):
^ Short ^ Long ^ Meaning ^
| -J | %%--job-name%% | Job name |
| -q | %%--qos%% | Queue (QOS) to submit the job to |
| -o | %%--output%% | File or file pattern for the job output |
| -C | %%--constraint%% | Request nodes with a specific feature |
| | %%--exclusive%% | Request exclusive use of the allocated nodes |
| -w | %%--nodelist%% | Request a specific list of nodes |

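Putting it together, an illustrative resource request (all the values and the script name //job.sh// are placeholders):
<code bash>
# 1 node, 4 tasks with 2 CPUs each, 8 GB of memory, 2 hours, urgent queue:
sbatch -N1 -n4 -c2 --mem=8G --time=02:00:00 -q urgent -J mytest -o mytest_%j.log job.sh
</code>
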
== How resources are allocated ==
The default allocation method between nodes is block allocation (all available cores on a node are allocated before using another node). The default allocation method within each node is cyclic allocation (the required cores are distributed equally among the available sockets in the node).

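These defaults correspond to SLURM's %%-m/--distribution%% option, which can override them per job if a different layout is needed; a sketch (//myprogram// is a placeholder):
<code bash>
# Distribute tasks round-robin across nodes instead of filling each node first:
srun -N2 -n8 --distribution=cyclic ./myprogram
</code>
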
== Priority calculation ==
When a job is submitted to the queuing system, the requested resources are first checked against the limits set in the corresponding queue. If any of them is exceeded, the submission is cancelled. \\
If resources are available, the job is executed directly; if not, it is queued. Each job is assigned a priority that determines the order in which queued jobs are executed when resources become available. To determine the priority of each job, 3 factors are weighted: the time it has been waiting in the queue (25%), the fixed priority of the queue (25%) and the user's fairshare (50%). \\
The fairshare is a dynamic calculation made by SLURM for each user, based on the difference between the resources allocated and the resources consumed over the last 14 days.
<code bash>
hpc-login2 ~]$ sshare -l
      User  RawShares  NormShares    RawUsage   NormUsage  FairShare
---------- ---------- ----------- ----------- ----------- ----------
       ...
 user_name        ...
</code>
# RawShares: the amount of resources allocated to the user in absolute terms. It is the same for all users.\\
# NormShares: the above amount normalised to the total allocated resources.\\
# RawUsage: the number of seconds/cpu consumed by all the user's jobs.\\
# NormUsage: RawUsage normalised to the total seconds/cpu consumed in the cluster.\\
# FairShare: the FairShare factor between 0 and 1. The higher the cluster usage, the closer to 0 and the lower the priority.\\

== Job submission ==
  - sbatch
  - salloc
  - srun

1. SBATCH \\
Used to send a script to the queuing system. It is batch-processing and non-blocking.
<code bash>
# Create the script:
hpc-login2 ~]$ vim test_job.sh
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=1gb
#SBATCH --time=00:05:00
#SBATCH --qos=urgent
#SBATCH --output=test%j.log

echo "Hello World!"

# Submit the script to the queue:
hpc-login2 ~]$ sbatch test_job.sh
</code>
2. SALLOC \\
Used to immediately obtain an allocation of resources (nodes). As soon as it is obtained, the specified command or a shell is executed.
<code bash>
# Get 5 nodes and launch a job.
hpc-login2 ~]$ salloc -N5 myprogram
# Get interactive access to a node (Press Ctrl+D to exit):
hpc-login2 ~]$ salloc -N1
</code>
3. SRUN \\
Used to launch a parallel job (preferable to using mpirun). It is interactive and blocking.
<code bash>
# Launch the hostname command on 2 nodes
hpc-login2 ~]$ srun -N2 hostname
hpc-node1
hpc-node2
</code>
==== GPU use ====
To specifically request a GPU allocation for a job, options must be added to sbatch or srun:
| %%--gres%% | Request gpus per NODE | %%--gres=gpu[:type]:count,...%% |
| %%--gpus or -G%% | Request gpus per JOB | %%--gpus=[type]:count,...%% |
There are also the options %%--gpus-per-socket%%, %%--gpus-per-node%% and %%--gpus-per-task%%.
Examples:
<code bash>
## See the list of nodes and gpus:
hpc-login2 ~]$ ver_recursos
## Request any 2 GPUs for a JOB, add:
--gpus=2
## Request a 40G A100 on one node and an 80G A100 on another node, add:
--gres=gpu:...
</code>
==== Job monitoring ====
<code bash>
## List all jobs in the queue
hpc-login2 ~]$ squeue
## List a user's jobs
hpc-login2 ~]$ squeue -u <username>
## Cancel a job:
hpc-login2 ~]$ scancel <jobid>
## List of recent jobs:
hpc-login2 ~]$ sacct -b
## Detailed historical information for a job:
hpc-login2 ~]$ sacct -l -j <jobid>
## Debug information of a job for troubleshooting:
hpc-login2 ~]$ scontrol show jobid -dd <jobid>
## View the resource usage of a running job:
hpc-login2 ~]$ sstat <jobid>
</code>
==== Configure job output ====
== Exit codes ==
By default these are the exit codes of the commands:
^ SLURM command ^ Exit code ^
| salloc | 0 on success, 1 if the user's command could not be executed |
| srun | The highest among all executed tasks, or 253 for an out-of-mem error |
| sbatch | 0 on success, otherwise the corresponding error code |
== STDIN, STDOUT and STDERR ==
**SRUN:** \\
By default stdout and stderr are redirected from all TASKS to srun's stdout and stderr, and stdin is redirected from srun's stdin to all TASKS. This can be changed with:
| %%-i, --input=<option>%% | redirection of stdin |
| %%-o, --output=<option>%% | redirection of stdout |
| %%-e, --error=<option>%% | redirection of stderr |
The options are:
  * //all//: the default behaviour.
  * //none//: nothing is redirected.
  * //taskid//: redirects only to and/or from the specified TASK id.
  * //filename//: redirects everything to and/or from the specified file.
  * //filename pattern//: same as the filename option, but with a file defined by a [[ https://slurm.schedmd.com/srun.html | filename pattern ]].
**SBATCH:** \\
By default "/dev/null" is open on the script's stdin, and stdout and stderr are redirected to a file named slurm-<jobid>.out. This can be changed with:
| %%-i, --input=<filename_pattern>%% |
| %%-o, --output=<filename_pattern>%% |
| %%-e, --error=<filename_pattern>%% |
The reference of filename_pattern is in the [[ https://slurm.schedmd.com/sbatch.html | sbatch documentation ]].
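For example, a common pattern (%%%x%% expands to the job name and %%%j%% to the JOBID):
<code bash>
#SBATCH --output=%x_%j.out
#SBATCH --error=%x_%j.err
</code>
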
==== Sending mail ====
JOBS can be configured to send mail in certain circumstances using these two parameters (**BOTH ARE REQUIRED**):
| %%--mail-type=<type>%% | The options include BEGIN, END, FAIL, REQUEUE and ALL. |
| %%--mail-user=<email_address>%% | The destination mail address. |
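An illustrative example (the address is a placeholder): notify when the job ends or fails:
<code bash>
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=<username>@<domain>
</code>
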
==== Status of Jobs in the queuing system ====
<code bash>
hpc-login2 ~]# squeue -l
JOBID PARTITION     NAME     USER    STATE       TIME TIME_LIMI  NODES NODELIST(REASON)
 6547 defaultPa      ...

## Check status of queue use:
hpc-login2 ~]$ estado_colas.sh
JOBS PER USER:
--------------
       ...

JOBS PER QOS:
--------------
     regular: ...
        long: 1

JOBS PER STATE:
--------------
       ...
==========================================
Total JOBS in cluster: ...
</code>
Common job states:
  * R RUNNING: Job currently has an allocation.
  * PD PENDING: Job is awaiting resource allocation.
[[ https://slurm.schedmd.com/squeue.html#SECTION_JOB-STATE-CODES | Complete list of job states ]]. \\
If a job is not running, a reason will be displayed underneath REASON: [[ https://slurm.schedmd.com/squeue.html#SECTION_JOB-REASON-CODES | list of reason codes ]].