Connecting Galaxy to a compute cluster


question Questions
  • How do I connect Galaxy to a compute cluster?

  • What are job metrics?

  • What sort of information can I collect?

  • Where can I find this information?

objectives Objectives
  • Be familiar with the basics of installing, configuring, and using Slurm

  • Understand all components of the Galaxy job running stack

  • Understand how the job_conf.xml file controls Galaxy’s jobs subsystem

  • Have a strong understanding of Galaxy job destinations

  • Understand the purpose and function of Galaxy job metrics

requirements Requirements
time Time estimation: 1 hour
Supporting Materials
last_modification Last modification: Jun 10, 2021

Running Galaxy Jobs with Slurm

Results may vary

Your results may be slightly different from the ones presented in this tutorial due to differing versions of tools, reference data, external databases, or because of stochastic processes in the algorithms.

The tools that are added to Galaxy can have a wide variance in the compute resources that they require and work efficiently on. To account for this, Galaxy’s job configuration needs to be tuned to run these tools properly. In addition, site-specific variables must be taken into consideration when choosing where to run jobs and what parameters to run them with.


  1. Running Galaxy Jobs with Slurm
    1. Installing Slurm
    2. Using Slurm
    3. Slurm-DRMAA
  2. Galaxy and Slurm
    1. Running a Job
  3. Recording Job Metrics
    1. Setting up Galaxy
    2. Generating Metrics
    3. What should I collect?
    4. Accessing the data
    5. Further Reading

Installing Slurm

comment Ansible Best Practices

If you’ve set up your Galaxy server using the Galaxy Installation with Ansible tutorial, you will have created a galaxyservers group in your inventory file, hosts, and placed your variables in group_vars/galaxyservers.yml. Although for the purposes of this tutorial, the Galaxy server and Slurm controller/node are one and the same, in a real world deployment they are very likely to be different hosts. We will continue to use the galaxyservers group for simplicity, but in your own deployment you should consider creating some additional groups for Slurm controller(s), Slurm nodes, and Slurm clients.

tip Do you need a DRM?

If you have a smaller server, do you still need a DRM? Yes! You should definitely run Slurm or a similar option. If you don’t, as soon as you restart Galaxy with local runners, any running jobs will be killed. Even with a handful of users, it is a good idea to keep 1-2 CPU cores/4GB RAM reserved for Galaxy.

hands_on Hands-on: Installing Slurm

  1. Edit your requirements.yml and include the following contents:

    --- a/requirements.yml
    +++ b/requirements.yml
    @@ -18,3 +18,7 @@
       version: 2.6.3
     - src: galaxyproject.cvmfs
       version: 0.2.13
    +- src: galaxyproject.repos
    +  version: 0.0.2
    +- src: galaxyproject.slurm
    +  version: 0.1.3

    The galaxyproject.repos role adds the Galaxy Packages for Enterprise Linux (GPEL) repository for RedHat/CentOS, which provides both Slurm and Slurm-DRMAA (neither are available in standard repositories or EPEL). For Ubuntu versions 18.04 or newer, it adds the Slurm-DRMAA PPA (Slurm-DRMAA was removed from Debian/Ubuntu in buster/bionic).

  2. In the same directory, run:

    code-in Input: Bash

    ansible-galaxy install -p roles -r requirements.yml
  3. Add galaxyproject.repos, galaxyproject.slurm to the beginning of your roles section in your galaxy.yml playbook:

    --- a/galaxy.yml
    +++ b/galaxy.yml
    @@ -12,6 +12,8 @@
             repo: ''
             dest: /libraries/
    +    - galaxyproject.repos
    +    - galaxyproject.slurm
         - galaxyproject.postgresql
         - role: natefoo.postgresql_objects
           become: true
  4. Add the slurm variables to your group_vars/galaxyservers.yml:

    --- a/group_vars/galaxyservers.yml
    +++ b/group_vars/galaxyservers.yml
    @@ -139,3 +139,13 @@ golang_gopath: '/opt/workspace-go'
     # Singularity target version
     singularity_version: "3.7.4"
     singularity_go_path: "{{ golang_install_dir }}"
     +# Slurm
     +slurm_roles: ['controller', 'exec'] # Which roles should the machine play? exec are execution hosts.
     +slurm_nodes:
     +- name: localhost # Name of our host
     +  CPUs: 2         # Here you would need to figure out how many cores your machine has. For this training we will use 2 but in real life, look at `htop` or similar.
     +slurm_config:
     +  SlurmdParameters: config_overrides   # Ignore errors if the host actually has cores != 2
     +  SelectType: select/cons_res
     +  SelectTypeParameters: CR_CPU_Memory  # Allocate individual cores/memory instead of entire node
  5. Run the playbook

    code-in Input: Bash

    ansible-playbook galaxy.yml

Note that the above Slurm config options are only those that are useful for this training exercise. In production, you would want to use a more appropriate configuration specific to your cluster (and setting SlurmdParameters to config_overrides is not recommended).

Installed with Slurm is MUNGE (MUNGE Uid 'N' Gid Emporium), which authenticates users between cluster hosts. You would normally need to ensure that the same Munge key is distributed across all cluster hosts (in /etc/munge/munge.key) - a great task for Ansible. However, installing the munge package created a random key for you, and since you'll run jobs only on a single host, you will not need to distribute it.
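On a real multi-host cluster, distributing the key is a natural Ansible task. A minimal sketch is shown below; the slurmservers host group and the files/munge.key location are assumptions for illustration, and none of this is needed for this single-host training setup:

```yaml
# Hypothetical play: distribute one shared Munge key to every cluster host.
# Assumes you copied /etc/munge/munge.key from one host into files/munge.key
# on your Ansible control machine, and defined a "slurmservers" group.
- hosts: slurmservers
  become: true
  tasks:
    - name: Install shared Munge key
      copy:
        src: files/munge.key
        dest: /etc/munge/munge.key
        owner: munge
        group: munge
        mode: "0400"
      notify: restart munge
  handlers:
    - name: restart munge
      service:
        name: munge
        state: restarted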

You can now check that all of the daemons are running with the command systemctl status munge slurmd slurmctld

$ sudo systemctl status munge slurmd slurmctld
● munge.service - MUNGE authentication service
  Loaded: loaded (/usr/lib/systemd/system/munge.service; enabled; vendor preset: disabled)
   Active: active (running) since Sa 2019-01-26 22:38:13 CET; 28min ago
     Docs: man:munged(8)
 Main PID: 22930 (munged)
    Tasks: 4
   Memory: 128.0K
   CGroup: /system.slice/munge.service
           └─22930 /usr/sbin/munged

Jan 26 22:38:13 helena-test.novalocal systemd[1]: Starting MUNGE authentication service...
Jan 26 22:38:13 helena-test.novalocal systemd[1]: Started MUNGE authentication service.

● slurmd.service - Slurm node daemon
   Loaded: loaded (/usr/lib/systemd/system/slurmd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sa 2019-01-26 23:04:21 CET; 2min 25s ago
  Process: 15051 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 15054 (slurmd)
    Tasks: 1
   Memory: 628.0K
   CGroup: /system.slice/slurmd.service
           └─15054 /usr/sbin/slurmd

Jan 26 23:04:21 helena-test.novalocal systemd[1]: Starting Slurm node daemon...
Jan 26 23:04:21 helena-test.novalocal systemd[1]: PID file /var/run/ not readable (yet?) after start.
Jan 26 23:04:21 helena-test.novalocal systemd[1]: Started Slurm node daemon.

● slurmctld.service - Slurm controller daemon
   Loaded: loaded (/usr/lib/systemd/system/slurmctld.service; enabled; vendor preset: disabled)
   Active: active (running) since Sa 2019-01-26 23:04:20 CET; 2min 26s ago
  Process: 15040 ExecStart=/usr/sbin/slurmctld $SLURMCTLD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 15042 (slurmctld)
    Tasks: 7
   Memory: 1.1M
   CGroup: /system.slice/slurmctld.service
           └─15042 /usr/sbin/slurmctld

Jan 26 23:04:20 helena-test.novalocal systemd[1]: Starting Slurm controller daemon...
Jan 26 23:04:20 helena-test.novalocal systemd[1]: PID file /var/run/ not readable (yet?) after start.
Jan 26 23:04:20 helena-test.novalocal systemd[1]: Started Slurm controller daemon.

When you ran the playbook, the Slurm configuration, /etc/slurm/slurm.conf (or /etc/slurm-llnl/slurm.conf on Debian-based distributions), was created for you automatically. All of the variables were set by default. If you need to override the configuration yourself, Slurm provides an online tool which will help you configure it.

Using Slurm

You should now be able to see that your Slurm cluster is operational with the sinfo command. This shows the state of nodes and partitions (synonymous with queues in other DRMs). The “node-oriented view” provided with the -N flag is particularly useful:

$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1   idle localhost
$ sinfo -Nel
Fri Nov  4 16:51:24 2016
NODELIST   NODES PARTITION       STATE CPUS    S:C:T MEMORY TMP_DISK WEIGHT AVAIL_FE REASON
localhost      1    debug*        idle    1    2:1:1      1        0      1   (null) none

If your node state is not idle, something has gone wrong. If your node state ends with an asterisk (*), the Slurm controller is attempting to contact the Slurm execution daemon but has not yet been successful (the * next to the partition name is normal; it indicates the default partition).

We want to ensure that Slurm is actually able to run jobs. There are two ways this can be done:

  • srun: Run an interactive job (e.g. a shell, or a specific program with its stdin, stdout, and stderr all connected to your terminal).
  • sbatch: Run a batch job, with stdin closed and stdout/stderr redirected to a file.

Galaxy runs sbatch jobs but we can use both srun and sbatch to test:

hands_on Hands-on: Running commands with srun

  1. Use srun to run the command uname -a

    code-in Input: Bash

    srun uname -a

    code-out Output

    Your output may look slightly different:

    $ srun uname -a
    Linux 5.4.0-48-generic #52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Although it looks like this command ran as if I had not used srun, it was in fact routed through Slurm.

hands_on Hands-on: Running commands with sbatch

  1. Create a test job script somewhere, such as in ~/. It should be a batch script which runs uname -a, runs uptime, and sleeps for 30 seconds.

    question Question

    What does your shell script look like?

    solution Solution

    #!/bin/bash
    uname -a
    uptime
    sleep 30
  2. Make the script executable:

    code-in Input: Bash

    chmod +x ~/
  3. Use sbatch to submit the job script

    question Question

    What command did you run?

    solution Solution

    $ sbatch ~/
  4. Use squeue to check the queue

    code-in Input: Bash

    squeue

    code-out Output

    Your output may look slightly different:

    JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
        3     debug sbatch-t   ubuntu  R       0:22      1 localhost

If you’ve made it this far, your Slurm installation is working!


Slurm-DRMAA

Above Slurm in the stack is slurm-drmaa, a library that provides a translational interface from the Slurm API to the generalized DRMAA API in C.

hands_on Hands-on: Installing Slurm-DRMAA

  1. Add a post_task to your playbook to install slurm-drmaa1 (Debian/Ubuntu) or slurm-drmaa (RedHat/CentOS).

     --- a/galaxy.yml
     +++ b/galaxy.yml
     @@ -11,6 +11,10 @@
          - git:
              repo: ''
              dest: /libraries/
     +  post_tasks:
     +    - name: Install slurm-drmaa
     +      package:
     +        name: slurm-drmaa1
       roles:
          - galaxyproject.repos
          - galaxyproject.slurm
  2. Run the playbook (ansible-playbook galaxy.yml)

    code-in Input: Bash

    ansible-playbook galaxy.yml

Moving one level further up the stack, we find DRMAA Python. This is a conditional dependency of the Galaxy framework: conditional dependencies are only installed if, during startup, a configuration option is set that requires them. The galaxyproject.galaxy Ansible role will install these conditional dependencies automatically.

Galaxy and Slurm

At the top of the stack sits Galaxy. Galaxy must now be configured to use the cluster we've just set up. The DRMAA Python documentation (and Galaxy's own documentation) instruct that you should set the $DRMAA_LIBRARY_PATH environment variable so that DRMAA Python can find the library it should load (aka slurm-drmaa). Because Galaxy runs under systemd, the environment that Galaxy starts under is controlled by the environment option in the systemd service unit that the Ansible role manages. The galaxy task should thus be updated to refer to the path to slurm-drmaa, which is /usr/lib/slurm-drmaa/lib/

hands_on Hands-on: Making Galaxy aware of DRMAA

  1. Open your group variables and add the environment variable:

    --- a/group_vars/galaxyservers.yml
    +++ b/group_vars/galaxyservers.yml
    @@ -103,6 +103,7 @@ galaxy_config_files:
     # systemd
     galaxy_manage_systemd: yes
    +galaxy_systemd_env: [DRMAA_LIBRARY_PATH="/usr/lib/slurm-drmaa/lib/"]
     # Certbot
     certbot_auto_renew_hour: "{{ 23 |random(seed=inventory_hostname)  }}"

    This environment variable will then be supplied to any web process (zerglings or mules).

  2. Next, we need to configure the Slurm job runner. First, we instruct Galaxy's job handlers to load the Slurm job runner plugin, and set the Slurm job submission parameters. A job runner plugin definition must have the id, type, and load attributes. Since we already have a good default destination that uses Singularity, we will simply modify it to use the slurm runner. Galaxy will then do the equivalent of submitting the generated job script with sbatch. In a <destination> tag, the id attribute is a unique identifier for that destination and the runner attribute must match the id of a defined plugin:

    --- a/templates/galaxy/config/job_conf.xml.j2
    +++ b/templates/galaxy/config/job_conf.xml.j2
    @@ -1,9 +1,16 @@
         <plugins workers="4">
             <plugin id="local_plugin" type="runner" load=""/>
     +        <plugin id="slurm" type="runner" load=""/>
          </plugins>
     -    <destinations default="singularity">
     +    <destinations default="slurm">
             <destination id="local_destination" runner="local_plugin"/>
    +        <destination id="slurm" runner="slurm">
    +            <param id="singularity_enabled">true</param>
    +            <env id="LC_ALL">C</env>
    +            <env id="SINGULARITY_CACHEDIR">/tmp/singularity</env>
    +            <env id="SINGULARITY_TMPDIR">/tmp</env>
    +        </destination>
             <destination id="singularity" runner="local_plugin">
                 <param id="singularity_enabled">true</param>
                 <!-- Ensuring a consistent collation environment is good for reproducibility. -->
  3. Run your Galaxy playbook

    code-in Input: Bash

    ansible-playbook galaxy.yml
  4. Watch the logs to check that everything loads correctly

    code-in Input: Bash

    journalctl -f -u galaxy

    code-out Output

    Your output may look slightly different:

    Jan 12 15:46:01 uwsgi[1821134]: DEBUG 2021-01-12 15:46:01,109 [p:1821134,w:0,m:1] [MainThread] Starting 4 SlurmRunner workers
    Jan 12 15:46:01 uwsgi[1821134]: DEBUG 2021-01-12 15:46:01,110 [p:1821134,w:0,m:1] [MainThread] Loaded job runner '' as 'slurm'

Running a Job

You should now be able to run a Galaxy job through Slurm. The simplest way to test is using the upload tool to upload some text.

hands_on Hands-on: Testing a Slurm Job

  1. If you’re not still following the log files with journalctl, do so now.

    code-in Input: Bash

    journalctl -f -u galaxy
  2. Click the upload button at the top of the tool panel (on the left side of the Galaxy UI).
  3. In the resulting modal dialog, click the “Paste/Fetch data” button.
  4. Type some random characters into the text field that has just appeared.
  5. Click “Start” and then “Close”

    code-out Output

    Your output may look slightly different. In your journalctl terminal window you should see the following messages:

    DEBUG 2020-02-10 09:37:17,946 [p:9859,w:0,m:2] [JobHandlerQueue.monitor_thread] (1) Mapped job to destination id: slurm
    DEBUG 2020-02-10 09:37:17,976 [p:9859,w:0,m:2] [JobHandlerQueue.monitor_thread] (1) Dispatching to slurm runner
    DEBUG 2020-02-10 09:37:18,016 [p:9859,w:0,m:2] [JobHandlerQueue.monitor_thread] (1) Persisting job destination (destination id: slurm)
    DEBUG 2020-02-10 09:37:18,021 [p:9859,w:0,m:2] [JobHandlerQueue.monitor_thread] (1) Working directory for job is: /srv/galaxy/jobs/000/1
    DEBUG 2020-02-10 09:37:18,358 [p:9859,w:0,m:2] [JobHandlerQueue.monitor_thread] Job [1] queued (380.809 ms)
    INFO 2020-02-10 09:37:18,372 [p:9859,w:0,m:2] [JobHandlerQueue.monitor_thread] (1) Job dispatched
    INFO 2020-02-10 09:37:18,564 [p:9859,w:0,m:2] [SlurmRunner.work_thread-0] Built script [/srv/galaxy/jobs/000/1/] for tool command [python '/srv/galaxy/server/tools/data_source/' /srv/galaxy/server /srv/galaxy/jobs/000/1/registry.xml /srv/galaxy/jobs/000/1/upload_params.json 1:/srv/galaxy/jobs/000/1/working/dataset_1_files:/data/000/dataset_1.dat]
    ...
    DEBUG 2020-02-10 09:37:18,645 [p:9859,w:0,m:2] [SlurmRunner.work_thread-0] (1) submitting file /srv/galaxy/jobs/000/1/
    INFO 2020-02-10 09:37:18,654 [p:9859,w:0,m:2] [SlurmRunner.work_thread-0] (1) queued as 4
    DEBUG 2020-02-10 09:37:18,654 [p:9859,w:0,m:2] [SlurmRunner.work_thread-0] (1) Persisting job destination (destination id: slurm)

    At this point the job has been accepted by Slurm and is awaiting scheduling on a node. Once it's been sent to a node and starts running, Galaxy logs this event:

    DEBUG 2020-02-10 09:37:19,537 [p:9859,w:0,m:2] [SlurmRunner.monitor_thread] (1/4) state change: job is running

    Finally, when the job is complete, Galaxy performs its job finalization process:

    DEBUG 2020-02-10 09:37:24,700 [p:9859,w:0,m:2] [SlurmRunner.monitor_thread] (1/4) state change: job finished normally
    galaxy.model.metadata DEBUG 2020-02-10 09:37:24,788 [p:9859,w:0,m:2] [SlurmRunner.work_thread-1] loading metadata from file for: HistoryDatasetAssociation 1
    INFO 2020-02-10 09:37:24,883 [p:9859,w:0,m:2] [SlurmRunner.work_thread-1] Collecting metrics for Job 1 in /srv/galaxy/jobs/000/1
    DEBUG 2020-02-10 09:37:24,917 [p:9859,w:0,m:2] [SlurmRunner.work_thread-1] job_wrapper.finish for job 1 executed (154.514 ms)

    Note a few useful bits in the output:

    • Persisting job destination (destination id: slurm): Galaxy has selected the slurm destination we defined
    • submitting file /srv/galaxy/server/database/jobs/000/2/: this is the path to the script that is submitted to Slurm, as it would be with sbatch
    • (1) queued as 4: Galaxy job id "1" is Slurm job id "4"; this can also be seen as the (1/4) in other output lines
    • If job <id> ended is reached, the job should show as done in the UI

Slurm allows us to query the exit state of jobs for a period defined by Slurm's MinJobAge option, which defaults to 300 seconds (5 minutes):

code-in Input: Bash

Your job number may be different.

scontrol show job 4

code-out Output

Your output may also look slightly different:

JobId=4 JobName=g1_upload1_admin_example_org
   UserId=galaxy(999) GroupId=galaxy(999) MCS_label=N/A
   Priority=4294901757 Nice=0 Account=(null) QOS=(null)
   JobState=COMPLETED Reason=None Dependency=(null)
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0
   RunTime=00:00:05 TimeLimit=UNLIMITED TimeMin=N/A
   SubmitTime=2020-02-10T09:37:18 EligibleTime=2020-02-10T09:37:18
   StartTime=2020-02-10T09:37:19 EndTime=2020-02-10T09:37:24 Deadline=N/A
   PreemptTime=None SuspendTime=None SecsPreSuspend=0
   Partition=debug AllocNode:Sid=gcc-1:9453
   ReqNodeList=(null) ExcNodeList=(null)
   NumNodes=1 NumCPUs=1 NumTasks=1 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
   MinCPUsNode=1 MinMemoryNode=1M MinTmpDiskNode=0
   Features=(null) DelayBoot=00:00:00
   Gres=(null) Reservation=(null)
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)

After the job has been purged from the active jobs database, a bit of information (but not as much as scontrol provides) can be retrieved from Slurm’s logs. However, it’s a good idea to set up Slurm’s accounting database to keep old job information in a queryable format.
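For reference, enabling accounting is mostly a matter of pointing Slurm at a slurmdbd daemon. The excerpt below is a sketch, not a complete setup: it assumes you have separately installed and configured slurmdbd with its MySQL/MariaDB backend, and in this tutorial's Ansible setup you would express these options via the slurm_config variable rather than editing slurm.conf by hand.

```
# slurm.conf excerpt (illustrative): send completed-job records to slurmdbd
AccountingStorageType=accounting_storage/slurmdbd
AccountingStorageHost=localhost
# Gather per-job resource usage on the compute nodes
JobAcctGatherType=jobacct_gather/linux
```

With accounting enabled, old jobs remain queryable via sacct even after they have been purged from the controller's active jobs database.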

tip Which directories need to be shared on a cluster?

The following directories need to be accessible via the same path on both the head node and compute nodes:

  • galaxy_shed_tools_dir
  • galaxy_tool_dependency_dir
  • galaxy_file_path
  • galaxy_job_working_directory
  • galaxy_server_dir
  • galaxy_venv_dir

Recording Job Metrics

Job metrics record properties of the jobs that are executed, information that can help you plan for trainings or plan capacity for further expansions of your Galaxy server. These properties include details such as the number of slots (cores) assigned to a job, the amount of memory available, details about the node on which the job executed, environment variables that were set at execution time, and more.

Galaxy collects and records very few job metrics by default; enabling more metrics plugins is recommended for any cluster-enabled Galaxy deployment. The metrics are stored in the Galaxy database, which can be queried externally to generate reports and debug job problems.

Some work has been done to analyse job runtime metrics in order to optimise cluster allocation based on job inputs and enhance job submission (Tyryshkina et al. 2019). More work is planned in this area.

comment Note

Job metrics are only visible to Galaxy admin users, unless you set expose_potentially_sensitive_job_metrics: true, as UseGalaxy.eu does. EU's intention with this is to empower users and make everything as transparent as possible.

This is the only option controlling which metrics general users see. Admins see all metrics collected, and by default general users see none. However most of the metrics exposed by this setting are quite safe (e.g. cgroups information on resource consumption, walltime, etc.)
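If you wanted to mirror that behaviour, the option goes in Galaxy's application config. With this tutorial's Ansible setup, that would look something like the sketch below (the placement under galaxy_config.galaxy follows the structure used in the installation tutorial; enabling it is entirely optional):

```yaml
# group_vars/galaxyservers.yml (excerpt, illustrative)
galaxy_config:
  galaxy:
    # Let non-admin users see the "safe" subset of collected job metrics
    expose_potentially_sensitive_job_metrics: true
```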

Setting up Galaxy

By default, Galaxy enables the core metrics:

screenshot of galaxy metrics

These include very basic submission parameters. We want more information!

hands_on Hands-on: Setting up the job metrics plugin configuration

  1. Edit the global (for all hosts) group variables file, group_vars/all.yml:

    details Why are we editing “all” instead of “galaxyservers” vars?

    Both Galaxy and Pulsar use job metrics plugins, and when we configure Pulsar later, we will want it to have the same metrics plugin configuration as Galaxy. Putting this variable in all.yml will allow us to refer to it later when setting the corresponding variable for Pulsar.

    The variable we’ll set is named galaxy_job_metrics_plugins:

    --- a/group_vars/all.yml
    +++ b/group_vars/all.yml
    @@ -2,3 +2,13 @@
     cvmfs_role: client
     galaxy_cvmfs_repos_enabled: config-repo
     cvmfs_quota_limit: 500
     +# Galaxy vars that will be reused by Pulsar
     +galaxy_job_metrics_plugins:
     +  - type: core
     +  - type: cpuinfo
     +  - type: meminfo
     +  - type: uname
     +  - type: env
     +  - type: cgroup
     +  - type: hostname
  2. Run your Galaxy playbook

    code-in Input: Bash

    ansible-playbook galaxy.yml

Currently, the job metrics plugin configuration is stored in a separate configuration file from Galaxy’s main configuration file (galaxy.yml). By setting galaxy_job_metrics_plugins, we instructed the galaxyproject.galaxy role to create this file, and update the option (job_metrics_config_file) in galaxy.yml that sets the path to this file. You can inspect the contents of the new config file on your Galaxy server:

code-in Input: Bash

cat /srv/galaxy/config/job_metrics_conf.yml

code-out Output: Bash

## This file is managed by Ansible.  ALL CHANGES WILL BE OVERWRITTEN.
-   type: core
-   type: cpuinfo
-   type: meminfo
-   type: uname
-   type: env
-   type: cgroup
-   type: hostname

Generating Metrics

With this, the job metrics collection and recording should be set up. Now when you run a job, you will see many more metrics:

hands_on Hands-on: Generate some metrics

  1. Run a job (any tool is fine, even upload)

  2. View the information of the output dataset (galaxy-info)

advanced metrics

What should I collect?

There is no hard rule we can give you; collect what you think is (or will be) useful. Numeric parameters are "cheaper" to store than text parameters (like the output of uname). If you decide to collect environment variables or similar, you may eventually find yourself wanting to purge old job metrics.

Accessing the data

You can access the data via BioBlend (JobsClient.get_metrics), or via SQL with gxadmin.
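In Python, for example, BioBlend's JobsClient.get_metrics(job_id) returns the collected metrics as a list of dicts, which you may want to reshape for lookup. The helper below is a hypothetical sketch, and the sample payload is invented for illustration; the entry shape (plugin, name, raw_value keys) is an assumption based on Galaxy's job metrics API.

```python
# Sketch: post-process the metrics list that BioBlend's
# JobsClient.get_metrics(job_id) returns. In real use you would obtain the
# list with e.g. gi.jobs.get_metrics(job_id); here we use an invented sample.

def index_metrics(metrics):
    """Index a job metrics list by (plugin, name) for easy lookup."""
    return {(m["plugin"], m["name"]): m["raw_value"] for m in metrics}

# Invented payload resembling the API's shape (values are not real output):
sample = [
    {"plugin": "core", "name": "galaxy_slots", "raw_value": "1.0"},
    {"plugin": "core", "name": "runtime_seconds", "raw_value": "5.0"},
]

indexed = index_metrics(sample)
print(indexed[("core", "runtime_seconds")])  # prints 5.0
```

The same kind of reshaping applies if you pull the rows out of the database with gxadmin instead.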

Got lost along the way?

If you missed any steps, you can compare against the reference files, or see what changed since the previous tutorial.

Further Reading

keypoints Key points

  • Galaxy supports a variety of different DRMs.

  • You should absolutely set one up; it prevents jobs from being killed during server restarts.

Frequently Asked Questions

Have questions about this tutorial? Check out the FAQ page for the Galaxy Server administration topic to see if your question is listed there. If not, please ask your question on the GTN Gitter Channel or the Galaxy Help Forum


  1. Tyryshkina, A., N. Coraor, and A. Nekrutenko, 2019 Predicting runtimes of bioinformatics tools based on historical data: five years of Galaxy usage (J. Wren, Ed.). Bioinformatics 35: 3453–3460. 10.1093/bioinformatics/btz054


Did you use this material as an instructor? Feel free to give us feedback on how it went.


Citing this Tutorial

  1. Nate Coraor, Björn Grüning, Helena Rasche, 2021 Connecting Galaxy to a compute cluster (Galaxy Training Materials). Online; accessed TODAY
  2. Batut et al., 2018 Community-Driven Data Analysis Training for Biology Cell Systems 10.1016/j.cels.2018.05.012

details BibTeX

    @misc{connecting-galaxy-to-a-compute-cluster,
        author = "Nate Coraor and Björn Grüning and Helena Rasche",
        title = "Connecting Galaxy to a compute cluster (Galaxy Training Materials)",
        year = "2021",
        month = "06",
        day = "10",
        url = "\url{}",
        note = "[Online; accessed TODAY]"
    }

    @article{Batut_2018,
        doi = {10.1016/j.cels.2018.05.012},
        url = {},
        year = 2018,
        month = {jun},
        publisher = {Elsevier {BV}},
        volume = {6},
        number = {6},
        pages = {752--758.e1},
        author = {B{\'{e}}r{\'{e}}nice Batut and Saskia Hiltemann and Andrea Bagnacani and Dannon Baker and Vivek Bhardwaj and Clemens Blank and Anthony Bretaudeau and Loraine Brillet-Gu{\'{e}}guen and Martin {\v{C}}ech and John Chilton and Dave Clements and Olivia Doppelt-Azeroual and Anika Erxleben and Mallory Ann Freeberg and Simon Gladman and Youri Hoogstrate and Hans-Rudolf Hotz and Torsten Houwaart and Pratik Jagtap and Delphine Larivi{\`{e}}re and Gildas Le Corguill{\'{e}} and Thomas Manke and Fabien Mareuil and Fidel Ram{\'{\i}}rez and Devon Ryan and Florian Christoph Sigloch and Nicola Soranzo and Joachim Wolff and Pavankumar Videm and Markus Wolfien and Aisanjiang Wubuli and Dilmurat Yusuf and James Taylor and Rolf Backofen and Anton Nekrutenko and Björn Grüning},
        title = {Community-Driven Data Analysis Training for Biology},
        journal = {Cell Systems}
    }
congratulations Congratulations on successfully completing this tutorial!