Access to the cluster must be requested in advance via the [[https://citius.usc.es/uxitic/incidencias/add|incident form]]. Users without access permission will receive an "incorrect password" message.
  
Access is through an SSH connection to the login node (172.16.242.211):
<code bash>
ssh <username>@hpc-login2.inv.usc.es
</code>
<code bash>
# Getting miniconda
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
# Install
bash Miniconda3-latest-Linux-x86_64.sh
# Initialize for bash shell
~/miniconda3/bin/conda init bash
</code>
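Once initialized, conda can be used as usual to create isolated environments. A minimal sketch (the environment name, Python version and package are illustrative):
<code bash>
# Create an environment with a specific Python version
conda create -n myenv python=3.11
# Activate it before installing or running anything
conda activate myenv
# Install packages into the active environment
conda install numpy
</code>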
  
# There is an alias that shows only the relevant info:
hpc-login2 ~]$ ver_colas
      Name    Priority                                  MaxTRES     MaxWall            MaxTRESPU MaxJobsPU MaxSubmitPU
----------  ---------- ---------------------------------------- ----------- -------------------- --------- -----------
   regular         100                cpu=200,gres/gpu=1,node=4  4-04:00:00       cpu=200,node=4        10          50
interactive        200                                   node=1    04:00:00               node=1                   1
    urgent         300                        gres/gpu=1,node=1    04:00:00               cpu=36                  15
      long         100                        gres/gpu=1,node=4  8-04:00:00
     large         100                       cpu=200,gres/gpu=2  4-04:00:00                                       10
     admin         500
     small         150                             cpu=6,node=2    04:00:00              cpu=400        40         100
</code>
# Priority: the relative priority of each queue. \\
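Jobs are sent to a queue other than the default one by naming its QOS at submission time; a sketch using the standard SLURM option (the script name is illustrative):
<code bash>
# Submit to the urgent QOS instead of the default (regular):
hpc-login2 ~]$ sbatch --qos=urgent job.sh
</code>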
==== Sending a job to the queue system ====
== Requesting resources ==
By default, if you submit a job without specifying anything, the system submits it to the default (regular) QOS and assigns it one node, one CPU and 4 GB of memory. The time limit for job execution is that of the queue (4 days and 4 hours).
This is very inefficient; ideally, always specify at least three parameters when submitting a job:
  -  %%Node number (-N or --nodes), tasks (-n or --ntasks) and/or CPUs per task (-c or --cpus-per-task).%%
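For example, a submission making those requests explicit could look like this sketch (the script name and values are illustrative):
<code bash>
# 1 node, 4 tasks, 2 CPUs per task (8 CPUs in total):
hpc-login2 ~]$ sbatch -N 1 -n 4 -c 2 job.sh
</code>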
  
== Job submission ==
  - sbatch
  - salloc
  - srun
  
1. SBATCH \\
Used to send a script to the queuing system. It is batch-processing and non-blocking.
<code bash>
hpc-login2 ~]$ sbatch test_job.sh
</code>
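As a reference, a minimal test_job.sh could look like this sketch (all directives and values are illustrative):
<code bash>
#!/bin/bash
#SBATCH --job-name=test_job   # job name shown in the queue
#SBATCH -N 1                  # 1 node
#SBATCH -n 4                  # 4 tasks
#SBATCH -c 1                  # 1 CPU per task
#SBATCH --mem=8G              # 8 GB of memory
#SBATCH --time=01:00:00       # 1 hour time limit
srun hostname
</code>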
2. SALLOC \\
It is used to immediately obtain an allocation of resources (nodes). As soon as it is obtained, the specified command or a shell is executed.
<code bash>
# Get 5 nodes and launch a job.
hpc-login2 ~]$ salloc -N5 myprogram
# Get interactive access to a node (Press Ctrl+D to exit):
hpc-login2 ~]$ salloc -N1
# Get interactive EXCLUSIVE access to a node:
hpc-login2 ~]$ salloc -N1 --exclusive
</code>
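A common pattern is to combine both commands: request the allocation with salloc, then run commands on the allocated nodes with srun (a sketch; the job ID in the output is illustrative):
<code bash>
# Request 2 nodes interactively...
hpc-login2 ~]$ salloc -N2
salloc: Granted job allocation 1234
# ...and run a command on the allocated nodes:
hpc-login2 ~]$ srun hostname
</code>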
3. SRUN \\
It is used to launch a parallel job (preferable to using mpirun). It is interactive and blocking.
<code bash>
# Launch the hostname command on 2 nodes
hpc-login2 ~]$ srun -N2 hostname
hpc-node1
hpc-node2
</code>
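Because srun replaces mpirun, MPI programs are normally launched with it from inside a batch script; a sketch (the program name is hypothetical):
<code bash>
#!/bin/bash
#SBATCH -n 8                  # 8 MPI tasks
# srun inherits the job's allocation, so no host list is needed:
srun ./my_mpi_program
</code>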
  
==== GPU use ====
<code bash>
## Check the status of jobs:
hpc-login2 ~]$ squeue
JOBID PARTITION     NAME     USER      STATE       TIME  NODES NODELIST(REASON)
6547  defaultPa  example <username>  RUNNING   22:54:55      1 hpc-fat1

## Check status of queue use:
hpc-login2 ~]$ estado_colas.sh
JOBS PER USER:
--------------
       usuario.uno:  3
       usuario.dos:  1

JOBS PER QOS:
--------------
             regular:  3
                long:  1

JOBS PER STATE:
--------------
             RUNNING:  3
             PENDING:  1
==========================================
Total JOBS in cluster:  4
</code>
Common job states: