Cluster

SSH connection

Once you have an account on the platform, you need to connect to our bastion server genossh.genouest.org to access our cluster and other command line tools.

This is the entrypoint to our network. From there, you can submit jobs with the Slurm job manager or connect to other resources.

You first need to connect to the front-end server via SSH from your computer.

You can connect to genossh.genouest.org from anywhere, but only with a properly configured SSH Key.

From Windows

On Windows, PuTTY can be used to load SSH keys and connect to the cluster via SSH. Have a look at this video tutorial explaining the whole procedure (creating an SSH key and then connecting to the cluster):

GenOuest SSH

If you prefer, you can also use the Windows Subsystem for Linux (in this case, see the Linux/Mac paragraph below).

From Linux or Mac

You first need to generate an SSH key. To do so, run this command on your computer:

ssh-keygen -t rsa -b 4096

The command will ask for a passphrase: it protects your SSH key, and you will need it every time you use the key to connect to the cluster (depending on your configuration, a program named ssh-agent can remember this passphrase after you have entered it once).

The ssh-keygen program will create two files in your home directory:

$HOME/.ssh/id_rsa
$HOME/.ssh/id_rsa.pub

id_rsa is your private key: keep this file secret.

id_rsa.pub is your public key. You need to open this file and copy-paste its content to https://my.genouest.org (Public SSH key form on the right side, once you are logged in).

Add your key to your ssh agent (this is not always needed, depending on your configuration):

ssh-add $HOME/.ssh/id_rsa
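
If ssh-add complains that it cannot connect to an agent, you may need to start one first (a common way to do it, depending on your shell configuration):

eval "$(ssh-agent -s)"
ssh-add $HOME/.ssh/id_rsa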

You should then be able to connect to the cluster with this command:

ssh <your-login>@genossh.genouest.org

Once connected, you might still need to use your SSH key to access other servers.
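
If you use an ssh-agent on your computer and agent forwarding is allowed on the bastion, one way to make your key available on the internal servers is to forward the agent when connecting (a minimal sketch, adapt it to your own setup):

ssh -A <your-login>@genossh.genouest.org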

Data transfers

It is possible to copy data from/to the cluster via the scp tool.
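
For example, assuming transfers go through genossh.genouest.org, you could upload a file to your home directory and download a result back like this (file names are placeholders):

scp mydata.fastq <your-login>@genossh.genouest.org:/home/genouest/<your-group>/<your-login>/
scp <your-login>@genossh.genouest.org:/home/genouest/<your-group>/<your-login>/results.txt .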

An FTP server is also available:

You can use any FTP-compliant tool to transfer data; however, be sure to use the FTPS (secure) option rather than plain FTP, and to specify port 990.
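
For instance, with the lftp command line client, a connection could look like this (the server address is a placeholder, use the one provided by the platform):

lftp -u <your-login> ftps://<ftp-server>:990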

Data storage

You have access to three volumes, available on all computing nodes.

Home directory

Your home directory (/home/genouest/your-group/your-login). We have a total of around 100TB of storage capacity shared between all the home directories, and each user has a quota of 100GB. You can check your disk usage with this command:

quota -s

A snapshot mechanism is available on this volume, in case you erased a file by mistake. See below.

Groups directory

A project directory (/groups/your-group) that you share with your team. We have a total of around 200TB of storage capacity shared between all these group directories. Each project has a specific quota, and a single person in your team is responsible for granting access to this volume. You can check your disk usage with the command:

df -h /groups/<your-group>

A snapshot mechanism is available on this volume, in case you erased a file by mistake. See below.

Scratch directory

A high performance storage space (/scratch/your-login). Each user has a quota of 250GB. You can check your disk usage with the command:

du -sh /scratch/<your-login>

Good practices

Quotas are intentionally restrictive; if you need them to be increased, please contact support@genouest.org.

As a general rule, users should not write to the /home or /groups directories during jobs, nor perform heavy read operations on these volumes. They are meant to keep your data safe. During jobs, use the /scratch directory instead: it is hosted on a high performance system, designed for temporary data, and supports heavy read and write operations.
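
A typical job therefore stages its input to /scratch, works there, and copies only the final results back, for example (paths and file names are placeholders):

cp /groups/<your-group>/input.fasta /scratch/<your-login>/
cd /scratch/<your-login>
# ... run your analysis on the local copy ...
cp results.txt /groups/<your-group>/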

Please note that none of your data is backed up. If you would like us to backup your data for specific reasons, you can contact us and we will help you to find a solution.

We strongly advise you to anticipate your storage needs: if you plan to generate a large amount of data, please contact us beforehand to check that we can host it. It is preferable to anticipate this when applying for grants that involve data generation and analysis.

Before generating data on the cluster, please do not forget to check the remaining available space.
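
For example, using the commands shown above (df -h also reports the remaining space on the /scratch volume):

quota -s
df -h /groups/<your-group>
df -h /scratch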

Snapshots

If you erase some files by mistake, you can recover them by looking in a special .snapshots directory, available in any /home or /groups sub-directory.

The snapshots are performed each hour and are kept for 5 weeks. To access the snapshot files of your account, just go to the .snapshots directory.

cd .snapshots

There, you will see several directories in which copies of your files at different times are stored.

The directories are easily recognizable by their name: hourly, daily, weekly.
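
To restore a file, simply copy it back from the snapshot you are interested in (the exact directory names depend on the snapshot dates; this is only an illustration):

cp .snapshots/hourly.<date>/my_file.txt .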

Please note that snapshots are not backups. They provide protection against user error, but not against physical failure of the data storage servers.

Please consider an external backup solution if your data is valuable.

Software

Preinstalled catalog

Pre-installed software is available in /softs/local (see the software manager for a list of installed software). To use a tool, you have to load its environment. For example, to load Python 2.7 you can launch this command (the dot at the beginning is important):

.  /softs/local/env/envpython-2.7.sh

This will automatically configure your shell environment to execute the selected tool. Any subsequent python command you launch will use this 2.7 version.
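
You can check that the environment was loaded correctly, for example:

which python
python --version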

To get a list of all environments available, just list the content of /softs/local/env/env*, or look at the list on software manager.
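
For example:

ls /softs/local/env/env*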

Note: DO NOT USE the python/perl/... of the node directly, always load a specific version from /softs/local.

Conda

Conda is a system allowing package, dependency and environment management for any programming language. It is widely used to install software on the cluster.

Conda is a way to create custom environments, completely isolated from the other software installed on the cluster. You can install all the Conda packages you want in each isolated Conda environment.

A list of available conda packages is here: https://anaconda.org/anaconda/repo.

Conda allows you to install the software you need in your own storage volumes (/home, /groups or /omaha-beach). The software needs to be available as Conda packages.

By default, the channels defaults, bioconda and conda-forge are enabled on the cluster. The Bioconda channel in particular is tailored for bioinformatics tools. You may add other channels if you need. Please keep in mind that private channels might present security risks (software will not be vetted). If possible, please keep to the standard channels.

https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-channels.html
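
You can check which channels are currently enabled, and add one if really needed (adding a channel here is only an illustration; please keep to the standard channels whenever possible):

conda config --show channels
conda config --add channels <some-channel>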

To use Conda, first source it the usual way (on a compute node):

. /local/env/envconda.sh

With Conda, you can create as many environments as you want, each one containing a list of packages you need. You need to activate an environment to have access to software installed in it.

To create a new environment containing biopython, deeptools (v2.3.4), bowtie and blast, run:

conda create -p ~/my_env biopython deeptools=2.3.4 bowtie blast

To activate it:

conda activate ~/my_env
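
Once the environment is activated, the tools installed in it are directly available in your PATH, for example:

bowtie --version
blastn -version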

To deactivate it:

conda deactivate

Although it's not recommended, you can activate multiple environments by adding the --stack option. In this case, the last activated environment will have a higher priority than the other ones.

conda activate --stack ~/my_env

We have also installed Mamba as an alternative to Conda. It is a reimplementation written in C++ that should be much faster. To use it, just source the conda env as usual (. /local/env/envconda.sh), then replace the conda commands with mamba, as in this example:

mamba create -p ~/my_env biopython deeptools=2.3.4 bowtie blast

Activation and deactivation of environments still need to be done with the conda activate and conda deactivate commands.

Virtualenv

Several versions of Python are available on the cluster. Each one comes with a specific set of modules preinstalled. If you need to install a module, or to have a different module version, you can use Virtualenv. Virtualenvs are a way to create a custom Python environment, completely isolated from the Python installation on the cluster. You can install all the Python modules you want in this isolated environment.

To use it, first create a new virtualenv like this:

. /local/env/envpython-3.7.6.sh
virtualenv ~/my_new_env

This will create the directory ~/my_new_env. This directory will contain a minimal copy of Python 3.7.6 (the one you sourced just before), without any module installed in it, and completely isolated from the global 3.7.6 python version installed by GenOuest. If you prefer to use a Python 2.7 version, you can source Python 2.7.15 instead:

. /local/env/envpython-2.7.15.sh
virtualenv ~/my_new_env

To use this virtualenv, you need to activate it:

. ~/my_new_env/bin/activate

Once activated, your prompt will show that the virtualenv is active:

(my_new_env)[login@cl1n025 ~]$

You can then install all the Python modules you need in this virtualenv:

pip install biopython
pip install pyaml...

Now when you run python, you will be using the virtualenv’s Python version, containing only the modules you installed in it.
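
You can verify it, for example:

which python

It should point to the python binary inside ~/my_new_env/bin.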

Once you have finished working with the virtualenv, you can stop using it and switch back to the normal environment like this:

deactivate

You can create as many virtualenvs as you want, each one being a directory that you can safely remove when you don’t need it anymore.
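
For example, to remove a virtualenv you no longer need:

rm -rf ~/my_new_env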

Submitting jobs with Slurm

See the Slurm section