Reputation: 669
I wrote a Snakemake pipeline for running SortMeRNA version 4.2.0. The pipeline is as follows, and it works perfectly when I run it for one sample:
SAMPLES = ['A']
READS = ["R1", "R2"]

rule all:
    input:
        expand("Clean/{exp}.clean.{read}.fastq.gz", exp = SAMPLES, read = READS)

rule sortmeRNA:
    input:
        one = "{SAMPLES}.R1.trimd.fastq.gz",
        two = "{SAMPLES}.R2.trimd.fastq.gz"
    output:
        one = "Clean/{SAMPLES}.clean.R1.fastq.gz",
        two = "Clean/{SAMPLES}.clean.R2.fastq.gz"
    params:
        bac16s = "rRNA_databases/silva-bac-16s-id90.fasta",
        bac23s = "rRNA_databases/silva-bac-23s-id98.fasta",
        acc = "--num_alignments 1 --threads 16 --fastx --other -v"
    message: "---Sorting reads with rRNA databases---"
    shell: '''
        rm -rf {SAMPLES}/kvdb; \
        sortmerna -ref {params.bac16s} -ref {params.bac23s} \
        -reads {input.one} -reads {input.two} \
        --workdir {SAMPLES} \
        {params.acc} && \
        echo "deinterleaving...." && \
        bash deinterleave_fastq.sh < {SAMPLES}/out/other.fastq {output.one} {output.two} compress && \
        echo "moving log and removing folder.." && mkdir -p Sort_log && mv {SAMPLES}/out/aligned.log Sort_log/{SAMPLES}.log && \
        rm -rf {SAMPLES}
        '''
The last section does the following:
- purges {SAMPLES}/kvdb if it exists
- runs SortMeRNA
- checks for {SAMPLES}/out/other.fastq
- runs deinterleave_fastq.sh and puts the R1 and R2 files in a folder called 'Clean'
- moves 'aligned.log' to Sort_log and renames it to {SAMPLES}.log
- removes the {SAMPLES} folder
Essentially, for each sample it filters the fastq files, retains only the required output, and purges the other folders.
It is submitted with:
snakemake -ps sortmeRNAv4.2.0.snakefile --cluster "sbatch -n 1 --time=02:00:00 -c 16" --jobs 1
The problem arises when I do the following:
SAMPLES = ['A','B']
snakemake -ps sortmeRNAv4.2.0.snakefile --cluster "sbatch -n 1 --time=02:00:00 -c 16" --jobs 2
The jobs get submitted; however, they fail because the command expands to
rm -rf A B
I understand that {SAMPLES} is replaced by both A and B, so the folders are purged every time before the outputs can be generated, and the condition can never be satisfied. How do I modify the code so that each job runs in parallel without conflict? That is, at any given time the set of batch commands should run with SAMPLES = A and complete the process, or with SAMPLES = B and complete the process, without mixing them up.
Thanks in advance.
Upvotes: 0
Views: 130
Reputation: 3368
The problem here is that {SAMPLES} in your inputs and outputs is a wildcard, but in the shell it is read as the global variable defined above the rule.
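A rough plain-Python illustration of the effect (this is not Snakemake's internal code; it just mimics how the shell string ends up formatted with the global list, whose values are joined with spaces):

SAMPLES = ['A', 'B']
# The global list, not a per-job wildcard, fills the placeholder,
# so the shell sees both sample names at once.
cmd = "rm -rf {SAMPLES}".format(SAMPLES=" ".join(SAMPLES))
print(cmd)  # prints: rm -rf A B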
You should use {wildcards.SAMPLES} in the shell section:
rule sortmeRNA:
    input:
        one = "{SAMPLES}.R1.trimd.fastq.gz",
        two = "{SAMPLES}.R2.trimd.fastq.gz"
    output:
        one = "Clean/{SAMPLES}.clean.R1.fastq.gz",
        two = "Clean/{SAMPLES}.clean.R2.fastq.gz"
    params:
        bac16s = "rRNA_databases/silva-bac-16s-id90.fasta",
        bac23s = "rRNA_databases/silva-bac-23s-id98.fasta",
        acc = "--num_alignments 1 --threads 16 --fastx --other -v"
    message: "---Sorting reads with rRNA databases---"
    shell: '''
        rm -rf {wildcards.SAMPLES}/kvdb; \
        sortmerna -ref {params.bac16s} -ref {params.bac23s} \
        -reads {input.one} -reads {input.two} \
        --workdir {wildcards.SAMPLES} \
        {params.acc} && \
        echo "deinterleaving...." && \
        bash deinterleave_fastq.sh < {wildcards.SAMPLES}/out/other.fastq {output.one} {output.two} compress && \
        echo "moving log and removing folder.." && mkdir -p Sort_log && mv {wildcards.SAMPLES}/out/aligned.log Sort_log/{wildcards.SAMPLES}.log && \
        rm -rf {wildcards.SAMPLES}
        '''
You should also avoid giving a wildcard the same name as a global Python variable. Your rule treats one sample at a time, so just call the wildcard {sample} and leave SAMPLES for the Python variable defined above the rules.
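A sketch of what that renaming could look like, reusing the paths and options from the question and leaving the logic unchanged (echo lines omitted for brevity):

SAMPLES = ['A', 'B']
READS = ["R1", "R2"]

rule all:
    input:
        expand("Clean/{exp}.clean.{read}.fastq.gz", exp = SAMPLES, read = READS)

rule sortmeRNA:
    input:
        one = "{sample}.R1.trimd.fastq.gz",
        two = "{sample}.R2.trimd.fastq.gz"
    output:
        one = "Clean/{sample}.clean.R1.fastq.gz",
        two = "Clean/{sample}.clean.R2.fastq.gz"
    params:
        bac16s = "rRNA_databases/silva-bac-16s-id90.fasta",
        bac23s = "rRNA_databases/silva-bac-23s-id98.fasta",
        acc = "--num_alignments 1 --threads 16 --fastx --other -v"
    message: "---Sorting reads with rRNA databases---"
    shell: '''
        rm -rf {wildcards.sample}/kvdb; \
        sortmerna -ref {params.bac16s} -ref {params.bac23s} \
        -reads {input.one} -reads {input.two} \
        --workdir {wildcards.sample} \
        {params.acc} && \
        bash deinterleave_fastq.sh < {wildcards.sample}/out/other.fastq {output.one} {output.two} compress && \
        mkdir -p Sort_log && mv {wildcards.sample}/out/aligned.log Sort_log/{wildcards.sample}.log && \
        rm -rf {wildcards.sample}
        '''

With this naming there is no ambiguity between the per-job wildcard ({wildcards.sample}) and the global list (SAMPLES) used in rule all.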
Upvotes: 1