Computer Cluster for Analysis of Data from SLS and SwissFEL Facilities

Cluster Name

The official cluster name is Ra; the cluster is referred to by this name below.

Prerequisites

To be able to use the Ra cluster you need:
  • a PSI account
  • authorisation to use the Ra cluster.

PSI Account

If you don't have a PSI account, please follow this procedure.

Authorisation for the Ra cluster

Please contact your beamline manager and provide the following information:
  • your PSI account
  • the data identifier (Proposal ID or e-account used to collect the data) for the data you need to access.

Getting Connection to Ra Cluster

The names of the login nodes are ra-l-001.psi.ch and ra-l-002.psi.ch. The generic alias ra.psi.ch resolves to one of the login nodes. The login nodes can be accessed via ssh or the NoMachine protocol. For graphical login (remote desktop), we recommend using the NoMachine client.
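
For example, a direct ssh login from inside the PSI network could look like this (replace <PSI_account> with your own account name):

$ ssh <PSI_account>@ra.psi.ch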

From the PSI subnet, you can connect to ra.psi.ch directly via ssh or NoMachine. From outside the PSI subnet, you may use one of the following two ways to access the Ra cluster remotely:

Option one: connect to the hop.psi.ch system and from there go via ssh to one of the Ra login nodes

$ ssh <PSI_account>@hop.psi.ch
Password: <password>

After login:

$ ssh ra.psi.ch
Password: <password>
For further details see
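
If your ssh client supports the -J (ProxyJump) option, the two steps can also be combined into a single command (a sketch; replace <PSI_account> with your own account name):

$ ssh -J <PSI_account>@hop.psi.ch <PSI_account>@ra.psi.ch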

Option two: (recommended) NoMachine Connection via Remote Access Server

Please follow these instructions

NoMachine hints:
  • Set the display preferences of NoMachine (Ctrl+Alt+0) to "Resize remote screen". This gives a better default resolution, matching the remote desktop on the Ra login node to your monitor.
  • For graphics-heavy applications, try prepending vglrun to the application name. You can see the difference yourself by running "glxgears" vs. "vglrun glxgears".

Data

There are several file systems for different purposes on the Ra cluster:
File system    Path                       Quota  Access rights       Access mode  Used for
HOME           /mnt/das-gpfs/home/$USER   5 GB   user only           read-write   user home directory, code, private data, etc.
SLS raw data   /sls/$BEAMLINE             -      the PI* group only  read-only    SLS data
work data      /das/work/$PGR/$PGROUP     4 TB   the PI* group only  read-write   derived data

*) PI -- principal investigator or Main Proposer

Example:
  /sls/X06SA/Data10/e15874    # contains the raw data for the PI group 15874
  /das/work/p15/p15874        # contains the derived data for the PI group 15874


To see the members of a given group, use the getent group command, e.g.:
  getent group p15874
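
The output of getent group has the form group_name:password_field:group_ID:member_list, for example (the group ID and account names below are placeholders, not real values):
  p15874:*:<group_ID>:account1,account2,account3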


To check the quota of your home directory, run the homequota command on one of the login nodes:
  homequota


For the quota of a particular p-group folder in work/, use the groupquota command, e.g.:
  groupquota p15874

Software

Data analysis and development software is available via the PSI Environment Modules (Pmodules) system. Use the following commands to manage your environment:
  module list                 # show the loaded modules
  module avail                # show the available modules
  module search               # show all modules in the hierarchy
  module help                 # if you do not remember what to do
  module add module_name      # add module_name
  module rm module_name       # remove module_name


Example:
  module load matlab/2015b   # load Matlab 2015b


There are MX-beamline specific environment configuration files in the /etc/scripts/ directory:
  • /etc/scripts/mx_sls.sh - the default configuration for the analysis of SLS data
  • /etc/scripts/mx_fel.sh - the default configuration for the analysis of data from FEL sources
Source the corresponding configuration file to use the predefined settings, for example:
  source /etc/scripts/mx_sls.sh


Environment settings made with module commands are effective only in the current shell and its child processes.

To make permanent changes to your environment, you may wish to edit the .bashrc file in your home directory; see the comments therein for more details.
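
For example, a minimal addition to .bashrc could look like this (the module name is only an illustration; load whatever you actually need):
  # load frequently used Pmodules in every new shell
  module add matlab/2015b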


Scientific Applications

The available software includes Matlab, Python, the Intel and GCC compilers, Fiji, standard MX software such as xds, shelx, hkl2map, adxv, CBFlib, ccp4, dials, mosflm and phenix, and software used for serial crystallography such as crystfel and cheetah. For the complete list, use the following commands:
  module use unstable
  module avail
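
For example, to locate and load a specific package (a sketch; the exact module names and versions available on Ra may differ):
  module search xds          # search the module hierarchy for the package
  module add xds             # load it (a specific version can be appended, e.g. xds/<version>)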

Batch system

The Ra login nodes should be used mainly for development, small and quick tests, and work with graphical applications.

For CPU-intensive work, the compute nodes must be used. There are presently 48 compute nodes in the cluster, each with 256 GB of RAM, an InfiniBand interconnect, and a 10 GbE network connection.
Computing node  Processor on each node                 Number of cores on each node
c-001..016      2x Intel Xeon E5-2690v3 (2.60 GHz)     24 (2x12)
c-017..032      2x Intel Xeon E5-2697Av4 (2.60 GHz)    32 (2x16)
c-033..048      2x Intel Xeon Gold 6140 (2.30 GHz)     36 (2x18)
Access to the compute nodes is controlled by Slurm, a modern workload manager for Linux clusters. You can allocate compute nodes for interactive use or submit batch jobs via Slurm (see the examples below).

Useful commands:
  sinfo     # view information about Slurm nodes, in particular, idle (free) nodes 
  squeue    # view information about jobs in the scheduling queue (useful to find your nodes)  
  salloc    # request a job allocation (a set of nodes) for further use  
  srun      # allocate compute nodes and run a command inside the allocated nodes  
  sbatch    # submit a batch script  
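
A minimal interactive-use sketch (the partition name and time limit are only examples):
  salloc -p shared -t 01:00:00    # request a one-hour allocation on the shared partition
  srun hostname                   # run a command on the allocated node(s)
  exit                            # leave the allocation shell and release the resources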


The present Slurm configuration on Ra implements two modes of allocating resources on the compute nodes:
  • "shared" allocation (partition "shared"): your job shares the computing resources of the node with other jobs;
  • "whole node" allocation (partitions "day" and "week"): you get exclusive access to the allocated compute nodes (not shared with other users) within the requested time limits.
By default (if you do not specify a partition name), jobs land on the "shared" partition.
Partition name    Access to computing node  Default allocation time  Maximum allocation time
shared (default)  shared                    1 hour                   8 days
day               exclusive                 8 hours                  24 hours
week              exclusive                 2 days                   8 days
Example:
  sbatch job.sh                        # submit a job to the default partition, with an allocation time of 1 hour
  sbatch -p week job.sh                # submit to the partition with a longer allocation time (2 days if not specified)
  sbatch -p week -t 4-5:30:00 job.sh   # submit a job with a time limit of 4 days, 5 hours and 30 minutes (max. allowed is 8 days)
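
The batch script itself (job.sh above) is an ordinary shell script with #SBATCH directives. A minimal sketch, in which the partition, time limit and loaded module are only examples:
  #!/bin/bash
  #SBATCH --partition=shared      # shared, day, or week
  #SBATCH --time=01:00:00         # requested wall time
  #SBATCH --ntasks=1              # number of tasks
  #SBATCH --output=job_%j.out     # output file; %j expands to the job ID

  module add matlab/2015b         # load the software the job needs (example module)
  echo "Running on $(hostname)"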
When the time limit is reached, all unfinished processes will be killed.

Please do not forget to release resources you no longer need; otherwise they will remain unavailable to other users until your allocation expires. Holding on to idle nodes has a negative impact on the scheduling priority of your future jobs: the priority of your Slurm jobs depends on your past usage according to the fair-share mechanism.
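
For example, an allocation or job that is no longer needed can be released with standard Slurm commands (<job_id> is a placeholder for the ID shown by squeue):
  squeue -u $USER      # list your jobs and allocations
  scancel <job_id>     # cancel the job / release the allocation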

More examples and details on how to use Slurm on the Ra cluster can be found in the Ra help pages.

User Mailing List

To receive announcements about the cluster (downtime, changes in the resource policy, etc.), please subscribe to the mailing list.

Troubleshooting

NoMachine

To change the NoMachine settings, press Ctrl+Alt+0 to open the menu.

My graphics-heavy application (coot, for example) runs slowly via NoMachine

Try reducing the resolution in your NoMachine client settings (Ctrl+Alt+0, then "Display", then "Settings") to find a compromise between speed, comfort, and network latency.