In this section, we will learn how to use Snakemake to submit jobs to the cluster. Specifically, we will cover:
- the --slurm option
- the --cluster option

--slurm
A big advantage of using Snakemake is that it can take care of submitting lots of jobs to the cluster, leaving you free time to do more interesting tasks. In order for Snakemake to do this, we need to tell it how to submit jobs.
All three of the Biostatistics, CSG, and GreatLakes clusters use SLURM. As of recent versions of Snakemake, we can simply add --slurm to the command line call.
On the CSG and Biostatistics clusters, the following line will (probably) work:
snakemake --slurm --jobs 10 -p
On GreatLakes you will (probably) also need to specify the account you are using:
snakemake --slurm --jobs 10 --default-resources slurm_account=<your SLURM account>
The --jobs flag sets the maximum number of jobs Snakemake will have submitted to the cluster at any one time. You may want to increase this if you have a larger workflow underway. I recommend adding the flag --latency-wait 60, which increases the amount of time (to 60 seconds) that Snakemake waits for expected output files to appear after a job finishes; this helps on filesystems that are slow to sync.
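Putting these options together, a fuller submission command on GreatLakes might look like the following (the account name is a placeholder to replace with your own):

snakemake --slurm --jobs 10 --latency-wait 60 \
    --default-resources slurm_account=<your SLURM account> -p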
Every job that gets submitted to the cluster is allocated resources such as memory, time, and number of cores. These can be specified as defaults used for every job with --default-resources, as shown above for the SLURM account, or they can be specified separately for each rule. To specify resources for a specific rule, add a resources: line to that rule with the desired specifications. For example, using
rule combine_data:
    input: expand("data/chr{c}.vcf.gz", c = range(20, 23))
    output: "data/all.vcf.gz"
    resources: mem_mb = 1000
    shell: "bcftools concat -o {output} {input}"
specifies 1 GB of memory to be allocated for the combine_data rule. A full listing of the available cluster resources can be found in the Snakemake documentation.
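Multiple resources can be combined in one rule. Below is a sketch, assuming the runtime resource (a time limit in minutes) recognized by Snakemake's SLURM support and the standard threads directive; check the documentation for the exact resource names your Snakemake version accepts:

rule combine_data:
    input: expand("data/chr{c}.vcf.gz", c = range(20, 23))
    output: "data/all.vcf.gz"
    threads: 4                      # cores requested for this rule
    resources:
        mem_mb = 1000,              # 1 GB of memory
        runtime = 120               # time limit in minutes
    shell: "bcftools concat --threads {threads} -o {output} {input}"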
--cluster
If you are using a non-SLURM cluster (or you simply want to), you can use the --cluster option to specify a generic command for submitting jobs to the cluster. In the file code/run-snakemake.sh you will find the following submission command:
mkdir -p log
snakemake \
--keep-going \
--jobs 96 \
--max-jobs-per-second 5 \
--latency-wait 60 \
--cluster-config cluster.yaml \
--cluster "sbatch \
--output={cluster.log}_%j.out \
--error={cluster.log}_%j.err \
--account=jvmorr0 \
--job-name={cluster.name} \
--time={cluster.time} \
--cpus-per-task={cluster.cpus} \
--mem={cluster.mem}"
which uses the --cluster option to specify how to submit a job. This command also uses --cluster-config to specify a configuration file for cluster jobs. Copy code/run-snakemake.sh and code/cluster.yaml to your main tutorial directory. The cluster configuration file is an alternative way to specify resources for each rule. The placeholders {cluster.log}, etc., in the snakemake command above reference values in the cluster.yaml file. Each section of the cluster config file specifies rule-specific or default resources.
__default__:
    mem: "1G"
    cpus: "1"
    name: "{rule}-{wildcards}"
    log: "log/snake-{rule}-{wildcards}"
    time: "2:00:00"
variant_qc:
    cpus: "4"
ld_prune:
    cpus: "4"
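To see how the pieces fit together: for a variant_qc job, the {cluster.*} placeholders in the --cluster string are filled from the __default__ section, with cpus overridden by the rule-specific entry. The command Snakemake builds should look roughly like the following, with the wildcard values and the generated job script filled in by Snakemake:

sbatch --output=log/snake-variant_qc-{wildcards}_%j.out \
    --error=log/snake-variant_qc-{wildcards}_%j.err \
    --account=jvmorr0 --job-name=variant_qc-{wildcards} \
    --time=2:00:00 --cpus-per-task=4 --mem=1G <job script>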
Since the run-snakemake.sh file contains the snakemake call, to run it we only need to type ./run-snakemake.sh at the command line.
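If the copied script does not already have execute permission, you may need to add it first:

chmod +x run-snakemake.sh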
Not every rule is a big job. For little jobs, it may be faster to run them locally (e.g. on the login node or whatever node you choose to run Snakemake from) than to submit them to the cluster. To specify that the rules all and plot_pca should be run locally, add the following line to your Snakefile:
localrules: all, plot_pca
Alternatively, we can add the specification localrule: True inside the rule itself.
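For example, a sketch of the plot_pca rule with the directive added (the input, output, and script names here are placeholders, not the actual files from the tutorial):

rule plot_pca:
    input: "results/pcs.txt"
    output: "results/pca_plot.png"
    localrule: True                 # run on the node where Snakemake itself is running
    script: "scripts/plot_pca.R"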
If your workflow is going to take a long time to fully execute, it is a good idea to run Snakemake in the background. Running it in the background will mean that after you type the Snakemake command, you are returned to the command line and can do other things, rather than waiting for Snakemake to finish. You can also use a specification so that if you are disconnected from the cluster, Snakemake keeps running.
I like to put my Snakemake command into a bash script (like run-snakemake.sh) and then run it with the following command:
nohup ./run-snakemake.sh &
The nohup and & additions to this line cause Snakemake to run in the background and keep running if the terminal is closed. This works regardless of what type of snakemake call you are using. You could also type the snakemake command directly between the nohup and the &.
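For example, using the --slurm invocation from earlier (any output that would normally print to the terminal is written to nohup.out by default):

nohup snakemake --slurm --jobs 10 --latency-wait 60 &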
I also prefer to run long-running Snakemake jobs on compute nodes rather than the login node. There are two advantages to doing this. The first is that the job is easy to kill: I can simply cancel the session it is running in. The second is that my Snakemake job has its own designated resources and isn't going to use up too many resources on the login node. My personal preferred strategy is to start an interactive job inside of a screen session using a command like
screen salloc --account=jvmorr0 --mem 1G --time 10:00:00
I like to give these jobs really long running times; sometimes I will give them several days and then just kill them when I am done with them. Once the node is allocated, you can run your Snakemake command in the background using nohup and &. Then, to leave the interactive session, type ctrl-a followed by d to "detach" the screen session. To join back in, use screen -r. This allows you to close your terminal window without killing the interactive session.
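Putting the pieces together, a typical long-running session might look like this (the account name and resource requests are just the examples used above):

screen salloc --account=jvmorr0 --mem 1G --time 10:00:00   # interactive job inside a screen session
nohup ./run-snakemake.sh &                                  # launch Snakemake in the background
# press ctrl-a then d to detach; later, reattach with:
screen -r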