Computer Cluster for Analysis of Data from SLS and SwissFEL Facilities

The official cluster name is Ra, and the cluster is referred to by this name below.

Current status: OPERATIONAL

Schedule of maintenance:

  Start             End               Node      Work
  2020-04-27 08:00  2020-04-27 17:00  ra-l-001  update (OS, NoMachine, OFED, GPFS)
  2020-04-29 08:00  2020-04-29 17:00  ra-l-002  update (OS, NoMachine, OFED, GPFS)

 

To be able to use the Ra cluster you need:

  • a PSI account
  • authorisation to use the Ra cluster

PSI Account

If you don't have a PSI account, please follow this procedure.

To obtain authorisation to use the Ra cluster, please contact your beamline manager and provide the following information:

  • your PSI account
  • data identifier (Proposal ID or e-account used to collect the data) for the data you need to access.

The names of the login nodes are

  • ra-l-001.psi.ch  
  • ra-l-002.psi.ch
  • ra-l-003.psi.ch
  • ra-l-004.psi.ch

From the PSI subnet, you can connect to the login nodes directly via ssh, or with NoMachine using ra-nx.psi.ch as the target host. From outside the PSI subnet, you may use one of the two ways for remote access to the Ra cluster:

$ ssh <PSI_account>@hop.psi.ch
Password: <password>

After login:

$ ssh ra-l-003.psi.ch
Password: <password>
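
If you connect from outside PSI regularly, the two steps can be combined with a jump-host entry in your local ~/.ssh/config. This is only a sketch based on the host names above; it assumes an OpenSSH client with ProxyJump support (version 7.3 or newer):

# ~/.ssh/config (sketch; replace <PSI_account> with your account name)
Host ra
    HostName ra-l-003.psi.ch
    User <PSI_account>
    ProxyJump <PSI_account>@hop.psi.ch

With such an entry, "ssh ra" connects through hop.psi.ch in a single step (you will still be asked for your password twice).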

For further details see

Please follow these instructions

NoMachine hints:

  • Set the display preferences of the NoMachine ("Ctrl+Alt+0") to "Resize remote screen". This will provide a better default resolution matching your monitor to the remote desktop on the Ra login node.
  • For graphics-heavy applications, try running them with vglrun prepended to the application name. See the difference yourself by running "glxgears" vs. "vglrun glxgears".
  • If a NoMachine session is not accessed for 5 days, it is terminated automatically.

Login nodes are the entry point to the Ra cluster and are shared by all users. Please avoid overloading the login nodes with CPU- or memory-intensive jobs; such jobs should be run through the batch system instead.

Currently a limit of 100 GB maximum memory usage per user is enforced.

There are several file systems for different purposes on the Ra cluster:

  File system   Path                      Default quota  Access rights       Access mode  Used for
  HOME          /mnt/das-gpfs/home/$USER  5 GB           user only           read-write   user home directory, code, private data, etc.
  SLS raw data  /sls/$BEAMLINE            -              the PI* group only  read-only    SLS data
  work data     /das/work/$PGR/$PGROUP    4 TB           the PI* group only  read-write   derived data

*) PI -- principal investigator or Main Proposer

Example:

  /sls/X06SA/Data10/e15874    # contains the raw data for the PI group 15874
  /das/work/p15/p15874        # contains the derived data for the PI group 15874


To see the group members for a given group use the getent group command, e.g.:

  getent group p15874
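
To see which p-groups your own account belongs to, you can also use the standard id command (shown here only as an illustration):

  id -Gn                  # list the groups of the current user
  id -Gn <PSI_account>    # list the groups of another account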


To check the quota of your home directory, use the homequota command on one of the login nodes:

  homequota

The p-groups inside the work area are split between so-called internal and external groups. The default quota per p-group only applies to external p-groups. Internal p-groups belong to exactly one unit, which provides a certain amount of space on the work area to all of its p-groups. The p-groups of a unit don't have a default quota, so a single p-group could fill up all the space on the work area for its unit.

To see which unit a p-group belongs to and whether it is internal or external:

[talamo_i@ra-l-002 ~]$ /das/support/users/space_usage/pgroup_info p17277
Name:		p17277
Unit:		tomcat
Kind:		internal
Used:		63 GB
Members:	ozerov_d,talamo_i,gsell
Unit quota:	500 TB

File permissions and ownership

All files inside a specific p-group directory are considered to be owned by that p-group: the unix group of the files should be the p-group, and the group should have read-write access, i.e. all its members have read-write access. For this reason a regular process checks the file permissions and ownership and fixes them.

The fix happens automatically every hour, but if you need to change permissions or ownership sooner, you can run the following commands for a file:

chmod g+w file
chgrp your-p-group file


And in case you want to do it recursively on a directory:

chmod -R g+ws dir
chgrp -R your-p-group dir
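
If you do not want to wait for the hourly fix when creating new files, you can also work with the p-group as your active group, so that new files get the correct group immediately. This is a sketch using standard commands; p15874 is the example group from above and data.tar is a placeholder:

newgrp p15874                     # start a new shell whose default group is p15874
sg p15874 -c "tar xf data.tar"    # or run a single command with that group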

The data analysis and advanced development software is available via the PSI Environment Modules (Pmodules) system. Use the following commands to manage your environment:

  module list                # show the loaded modules
  module avail               # show the available modules
  module search              # show all modules in hierarchy 
  module help                # if you do not remember what to do  
  module add module_name      # add module_name 
  module rm module_name       # remove module_name                


Example:

  module load matlab/2015b   # load Matlab 2015b


There are MX-beamline specific environment configuration files in the /etc/scripts/ directory:

 

  • /etc/scripts/mx_sls.sh - the default configuration for the analysis of SLS data
  • /etc/scripts/mx_fel.sh - the default configuration for the analysis of data from FEL sources

Source the corresponding configuration file to use the predefined settings, for example:

  source /etc/scripts/mx_sls.sh


The environment settings made with module are effective only in the current shell and all its child processes.

You may wish to edit the .bashrc file in your home directory to make permanent changes to your environment; see the comments therein for more details and the sketch below for an example.
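
As an illustration, the relevant lines in ~/.bashrc could look like this (the module name and configuration script are the examples used above; adapt them to your own needs):

  # example additions to ~/.bashrc
  module load matlab/2015b         # load your preferred modules at login
  source /etc/scripts/mx_sls.sh    # use the default MX SLS environment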

The list of available software includes Matlab, Python, the Intel and GCC compilers, Fiji, standard MX software such as xds, shelx, hkl2map, adxv, CBFlib, ccp4, dials, mosflm and phenix, and software used for serial crystallography such as crystfel and cheetah, among others. For the complete list use the following commands:

module use unstable
module avail

The Ra login nodes should be used mainly for development, small and quick tests, and work with graphical applications.

For CPU intensive work, the compute nodes must be used. There are presently 48 computing nodes in the cluster, each with 256GB of RAM, InfiniBand interconnect, and 10GbE network.

  Computing node  Processor on each node                Number of cores on each node
  c-001..016      2x Intel Xeon E5-2690 v3 (2.60 GHz)   24 (2x12)
  c-017..032      2x Intel Xeon E5-2697A v4 (2.60 GHz)  32 (2x16)
  c-033..048      2x Intel Xeon Gold 6140 (2.30 GHz)    36 (2x18)

Access to the compute nodes is controlled by Slurm, a modern workload manager for Linux clusters. You can allocate compute nodes for interactive use or submit batch jobs using Slurm.

Useful commands:

  sinfo     # view information about Slurm nodes, in particular, idle (free) nodes 
  squeue    # view information about jobs in the scheduling queue (useful to find your nodes)  
  salloc    # request a job allocation (a set of nodes) for further use  
  srun      # allocate compute nodes and run a command inside the allocated nodes  
  sbatch    # submit a batch script  
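
For example, an interactive session on a compute node can be requested as follows (a sketch; the partition and time limit are only examples):

  salloc -N 1 -p day -t 02:00:00    # request one node on the "day" partition for 2 hours
  srun hostname                     # commands started with srun run on the allocated node
  exit                              # leave the allocation and release the node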


The present Slurm configuration on Ra implements two modes of allocating resources on the compute nodes: a) "shared" allocation (partition "shared"): your job shares the computing resources of a node with other jobs; b) "whole node" allocation (partitions "day" and "week"): you get exclusive access to the allocated compute nodes (not shared with other users) within the requested time limits. By default (if you don't specify a partition name) jobs land on the "shared" partition.

  Partition         Access to compute node  Default allocation time  Maximum allocation time
  shared (default)  shared                  1 hour                   8 days
  hour              exclusive               1 hour                   1 hour
  day               exclusive               8 hours                  24 hours
  week              exclusive               2 days                   8 days

Example:

sbatch job.sh # to submit job to the default partition, with allocation time of 1 hour
sbatch -p week job.sh # to submit to the partition with longer allocation time (2 days if not specified)
sbatch -p week -t 4-5:30:00 job.sh # to submit job with time limit of 4 days, 5 hours and 30 minutes (max. allowed time limit is 8 days)
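
The job.sh script used above is an ordinary shell script with optional #SBATCH directives. A minimal sketch could look like this (the job name, module and program are placeholders):

#!/bin/bash
#SBATCH --job-name=my_analysis         # job name shown in squeue
#SBATCH --partition=day                # partition to submit to
#SBATCH --time=04:00:00                # time limit (4 hours)
#SBATCH --output=my_analysis-%j.out    # output file, %j is replaced by the job id

module load matlab/2015b               # load the software you need
matlab -nodisplay -r "run('analysis.m'); exit"    # analysis.m is a placeholder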

When the time limit is reached, all unfinished processes will be killed.

Please do not forget to release resources you no longer need; otherwise they remain unavailable to other users until your allocation expires (see the example below). Holding on to idle nodes has a negative impact on the scheduling priority of your future jobs: the priority of your future Slurm jobs depends on your past usage according to the fair-share mechanism.
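
To release an allocation explicitly, cancel the corresponding job (standard Slurm commands; 123456 is an example job id):

  squeue -u $USER    # find the job id of your allocation
  scancel 123456     # cancel the job and release its nodes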

For interactive use of the Ra cluster you can also use Jupyter notebooks, for example as sketched below.
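
One possible way to run a notebook on a compute node from within the PSI network is sketched below. This is only an illustration, not a PSI-specific recipe; the module providing jupyter, the port and the node name are assumptions:

  # on a login node: load a Python environment providing jupyter (via Pmodules), then
  # start the notebook server inside a Slurm allocation
  srun -p day -t 04:00:00 --pty jupyter notebook --no-browser --ip=0.0.0.0 --port=8888

  # on your workstation: forward the port via a login node, then open http://localhost:8888
  ssh -L 8888:<compute-node>:8888 <PSI_account>@ra-l-003.psi.ch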

More examples and details on how to use Slurm on the Ra cluster can be found in the Ra help pages.


To receive announcements about the cluster (downtime, changes in the resource policy, etc.), please subscribe to the mailing list.

For any problem or further information please contact: ra-admins@lists.psi.ch

From the PSI subnet and when using PSI-VPN, use ra-nx as the target for NoMachine connections, to reduce the load on the VPN and network infrastructure.

rem-acc is accessible from the PSI subnet, but not from PSI-VPN.

In your NoMachine client settings (Ctrl+Alt+0, then "Display", then "Settings"), try reducing the resolution to find a compromise between speed, comfort and network latency.

More detailed documentation on the use of Slurm on Ra and a description of the NoMachine setup are available on the internal PSI page (accessible from the PSI subnet, via PSI-VPN, or within a NoMachine session (Ra help)).