HPC cluster: Slurm
Our batch computing system is based on Slurm. From our bastion server genossh.genouest.org, you can submit computing jobs or connect interactively to one of the compute nodes.
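As a minimal sketch of both workflows, assuming a standard Slurm setup (the script name, resource values, and analysis command are placeholders, not site defaults):

```shell
# Connect to the bastion (replace <login> with your account name)
ssh <login>@genossh.genouest.org

# Write a batch job: a shell script with resource requirements
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00
./my_analysis.sh
EOF

# Submit it to the queue
sbatch myjob.sh

# Or open an interactive shell on a compute node instead
srun --pty bash
```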
Galaxy is a web portal that lets you run bioinformatics analyses in a user-friendly environment.
A dashboard (with API and CLI) to run Docker containers. You can access containers under your own user ID or as root (with some restrictions). Inside a container, you can request access to your home directory and shared storage or, for projects, to group-specific storage. A job is basically a shell script to execute in the selected container, together with its CPU and memory requirements.
The dashboard can also submit jobs to Slurm.
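A sketch of what such a job script might look like (the tool, paths, and mount points are illustrative assumptions; the CPU and memory requirements are set in the dashboard form or via its API/CLI, not inside the script):

```shell
#!/bin/bash
# Example job script to run inside the selected container.
# It assumes you requested your home directory and a shared
# database storage when creating the job (paths are placeholders).
cd ~/work
blastn -query input.fa -db /db/nt -out result.txt
```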
A private cloud to launch virtual machines. You are the owner (root) of the VM. An expiration mechanism applies: if you do not extend the VM's lifetime (a reminder email is sent), the VM is deleted. You can attach additional disks to a VM, or share a disk (Manila) among several VMs.
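With the standard OpenStack CLI, attaching an extra disk might look like this (the volume name, VM name, and size are placeholders):

```shell
# Create an additional 50 GB disk
openstack volume create --size 50 mydata

# Attach it to your running VM
openstack server add volume myvm mydata
```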
Our self-service dashboard lets you create on-demand MySQL databases. Choose a database name and the database is created automatically. An email containing the credentials is sent once it is ready.
Databases are hosted on our database server (genobdd).
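Once you have received the email, connecting with the standard MySQL client could look like this (user and database names are placeholders, and the exact hostname may differ from the short server name):

```shell
# Connect to the database server; you will be prompted for the
# password from the notification email
mysql -h genobdd -u myuser -p mydatabase
```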
CeSGO provides an integrated environment to help scientists work from project idea to publication, through data production and management.
The CeSGO project is offered by the GenOuest core facility and is funded, as part of the CPER, by European funds, the French state, and the Brittany region.
Among its features, you have access to a chat collaboration tool, a project management tool (Kanboard, based on Kanban), a collaboration system to share documents, progress, and information with your team (publicly or privately), and a file-sharing tool based on ownCloud (similar to Dropbox).
All your data are hosted on GenOuest resources (in France) and remain private.
Herodote is a "data to compute" serverless tool.
When you push data (a new file or an update to a file) to the OpenStack object storage in a project (bucket), Herodote checks for hooks. If a hook is defined for this data, a job is submitted automatically: it downloads the file, executes the commands you defined, and uploads the results back to the storage server.
It is all about automation: users focus on data, not on jobs.
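Triggering this workflow from the command line might look as follows, assuming the OpenStack Swift CLI and a hook already defined for the bucket (project, path, and file names are illustrative):

```shell
# Push a new file into the project's bucket; this is the event
# Herodote watches for
swift upload myproject data/sample1.fastq

# If a hook matches this object, Herodote submits a job that
# downloads sample1.fastq, runs the commands you defined, and
# uploads the results back to the object storage
```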