I am using snakemake v. 5.7.0. The pipeline runs correctly both when launched locally and when submitted to SLURM via snakemake --drmaa: jobs get submitted and everything works as expected. In the latter case, however, a number of SLURM log files are produced in the current directory.
When snakemake is invoked with the --drmaa-log-dir option, it creates the directory specified in the option but fails to execute the rules, and no log files are produced.
Here is a minimal example. First, the Snakefile used:
rule all:
    shell: "sleep 20 & echo SUCCESS!"
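For reference, the rule's shell command exits cleanly on its own, even under bash strict mode (which Snakemake enables), so the command itself should not be the culprit:

```shell
# Sanity check: run the rule's command under bash strict mode, roughly as
# Snakemake would. The sleep is backgrounded, so the shell prints SUCCESS!
# and returns immediately with exit code 0.
bash -euo pipefail -c "sleep 20 & echo SUCCESS!"
```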
Below is the output of snakemake --drmaa
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
1
[Fri Apr 10 21:03:50 2020]
rule all:
jobid: 0
Submitted DRMAA job 0 with external jobid 13321.
[Fri Apr 10 21:04:00 2020]
Finished job 0.
1 of 1 steps (100%) done
Complete log: /XXXXX/snakemake_test/.snakemake/log/2020-04-10T210349.984931.snakemake.log
Here is the output of snakemake --drmaa --drmaa-log-dir foobar
Building DAG of jobs...
Using shell: /usr/bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
1
[Fri Apr 10 21:06:19 2020]
rule all:
jobid: 0
Submitted DRMAA job 0 with external jobid 13322.
[Fri Apr 10 21:06:29 2020]
Error in rule all:
jobid: 0
shell:
sleep 20 & echo SUCCESS!
(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!)
Error executing rule all on cluster (jobid: 0, external: 13322, jobscript: /XXXXXX/snakemake_test/.snakemake/tmp.9l7fqvgg/snakejob.all.0.sh). For error details see the cluster log and the log files of the involved rule(s).
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /XXXXX/snakemake_test/.snakemake/log/2020-04-10T210619.598354.snakemake.log
No log files are produced. The directory foobar has been created, but is empty.
What am I doing wrong?
A problem with --drmaa-log-dir on SLURM has been reported before, but unfortunately no solution has been found so far.
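One thing I have considered but not verified: instead of --drmaa-log-dir, the SLURM log paths might be passed directly through the native specification string that snakemake --drmaa forwards to the DRMAA library (the directory name foobar and the %j job-id pattern here are just illustrative):

```shell
# Possible workaround (untested): direct SLURM stdout/stderr to a chosen
# directory via sbatch-style options in the DRMAA native specification.
# The directory must exist before jobs are submitted; %j is expanded by
# SLURM to the job id. Note the leading space inside the quoted string.
mkdir -p foobar
snakemake --drmaa " -o $PWD/foobar/slurm-%j.out -e $PWD/foobar/slurm-%j.err"
```

Whether slurm-drmaa honors -o/-e in the native specification may depend on the drmaa library version, so this is only a sketch.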