Reputation: 33
This is more of a technical question regarding the capabilities of Snakemake: I was wondering whether it is possible to dynamically alter the set of input samples during a Snakemake run.
The reason I would like to do so is the following: assume a set of sample-associated BAM files. The first rule determines the quality of each sample (based on its BAM file), i.e. all input files are involved. However, given specified criteria, only a subset of the samples is considered valid and should be processed further. So the next step (e.g. gene counting or something else) should only be done for the approved BAM files, as shown in the minimal example below:
configfile: "config.yaml"

rule all:
    input: "results/gene_count.tsv"

rule a:
    input: expand("data/{sample}.bam", sample=config['samples'])
    output: "results/list_of_qual_approved_samples.out"
    shell: '''command'''

rule b:
    input: expand("data/{sample}.bam", sample=config['valid_samples'])
    output: "results/gene_count.tsv"
    shell: '''command'''
In this example, rule a would have to extend the configuration with a list of valid sample names, although I believe this is not possible.
Of course, the straightforward solution would be to have two distinct inputs: 1) all BAM files and 2) a file that lists the valid samples. This boils down to doing the sample selection within the code of the rule:
rule alternative_b:
    input:
        expand("data/{sample}.bam", sample=config['samples']),
        "results/list_of_qual_approved_samples.out"
    output: "results/gene_count.tsv"
    shell: '''command'''
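For concreteness, a minimal sketch of what that in-rule selection could look like (assuming, hypothetically, that the approved list holds one sample name per line; "command" remains the placeholder from above):

rule alternative_b:
    input:
        bams = expand("data/{sample}.bam", sample=config['samples']),
        approved = "results/list_of_qual_approved_samples.out"
    output: "results/gene_count.tsv"
    run:
        # read the approved sample names (one per line)
        with open(input.approved) as f:
            valid = {line.strip() for line in f if line.strip()}
        # keep only the BAM files whose sample name was approved
        selected = [b for b in input.bams
                    if b.split("/")[-1].rsplit(".bam", 1)[0] in valid]
        # run the actual counting command on the selected files only
        files = " ".join(selected)
        shell("command {files} > {output}")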
However, do you see a way to setup the rules such that the behavior of the first example can be achieved?
Many thanks in advance, Ralf
Upvotes: 3
Views: 1313
Reputation: 337
I think I have an answer that could be interesting.
At first I thought it wasn't possible, because Snakemake needs to know the final files at the end, so you can't split off a subset of files without knowing the split at the beginning.
But then I tried the dynamic function. With dynamic you don't have to know in advance how many files a rule will create.
So I coded this:
rule all:
    input: "results/gene_count.tsv"

rule a:
    input: expand("data/{sample}.bam", sample=config['samples'])
    output: dynamic("data2/{foo}.bam")
    shell: './bloup.sh "{input}"'

rule b:
    input: dynamic("data2/{foo}.bam")
    output: touch("results/gene_count.tsv")
    shell: '''command'''
Like in your first example, the Snakefile wants to produce a file named results/gene_count.tsv.
Rule a takes all samples from the configuration file and executes a script that chooses which files to create. I have 4 initial files (geneA, geneB, geneC, geneD) and the script only touches two of them (the geneA and geneD files) in a second directory. The dynamic function has no problem with that.
Rule b then takes all the dynamic files created by rule a, so you just have to produce results/gene_count.tsv; I simply touch it in the example.
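The content of the selection script is not shown above; as a rough sketch of the idea, the same selection could also be done directly in a run: block (the passes_qc() criterion below is a hypothetical placeholder, and the files are copied rather than merely touched so that downstream rules can actually use them):

import os
import shutil

def passes_qc(bam_path):
    # hypothetical placeholder criterion; replace with your real QC logic
    return os.path.getsize(bam_path) > 0

rule a:
    input: expand("data/{sample}.bam", sample=config['samples'])
    output: dynamic("data2/{foo}.bam")
    run:
        os.makedirs("data2", exist_ok=True)
        for bam in input:
            if passes_qc(bam):
                # only approved files appear in data2/, matching the
                # dynamic() output pattern
                shutil.copy(bam, os.path.join("data2", os.path.basename(bam)))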
Here is the Snakemake log for more information:
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
        count   jobs
        1       a
        1       all
        1       b
        3
rule a:
input: data/geneA.bam, data/geneB.bam, data/geneC.bam, data/geneD.bam
output: data2/{*}.bam (dynamic)
Subsequent jobs will be added dynamically depending on the output of this rule
./bloup.sh "data/geneA.bam data/geneB.bam data/geneC.bam data/geneD.bam"
Dynamically updating jobs
Updating job b.
1 of 3 steps (33%) done
rule b:
input: data2/geneD.bam, data2/geneA.bam
output: results/gene_count.tsv
command
Touching output file results/gene_count.tsv.
2 of 3 steps (67%) done
localrule all:
input: results/gene_count.tsv
3 of 3 steps (100%) done
Upvotes: 1
Reputation: 441
Another approach, one that does not use "dynamic".
It's not that you don't know how many files you are going to use; rather, you are only using a subset of the files you start with. Since you are able to generate a "samples.txt" list of all the potential files, I'm going to assume you have a firm starting point.
I did something similar, where I have initial files that I want to process for validity (in my case, I'm improving the quality: sorting, indexing, etc.). I then want to ignore everything except my resultant file.
What I suggest, to avoid creating a secondary list of sample files, is to use a second data directory: in your example, data would correspond to my reBamDIR and data2 to my bamDIR. Into the second directory you symlink all the files that are valid, standardizing the names as you link. That way, Snakemake can just process EVERYTHING in the data2 directory. This makes moving down the pipeline easier: it can stop relying on sample lists and just process everything using wildcards (much easier to code). I list the symlinked files in the rule's output so Snakemake knows about them and can build the DAG.
`-- output
|-- bam
| |-- Pfeiffer2.bam -> /home/tboyarski/share/projects/tboyarski/gitRepo-LCR-BCCRC/Snakemake/buildArea/output/reBam/Pfeiffer2_realigned_sorted.bam
| `-- Pfeiffer2.bam.bai -> /home/tboyarski/share/projects/tboyarski/gitRepo-LCR-BCCRC/Snakemake/buildArea/output/reBam/Pfeiffer2_realigned_sorted.bam.bai
|-- fastq
|-- mPile
|-- reBam
| |-- Pfeiffer2_realigned_sorted.bam
| `-- Pfeiffer2_realigned_sorted.bam.bai
In this case, all you need is a return value from your "validator" and a conditional operator to respond to it. I would argue you already have this somewhere, since you must be using conditionals in your validation step. Instead of using it to write the file name to a txt file, just symlink the file into a finalized location and keep going.
My raw data is in reBamDIR and the final data I store in bamDIR. I only symlink the files from this stage of the pipeline over to bamDIR. There are OTHER files in reBamDIR, but I don't want the rest of my pipeline to see them, so I'm filtering them out.
I'm not exactly sure how to implement the "validator" and your conditional, as I do not know your situation, and I'm still learning too. Just trying to offer alternative perspectives/approaches.
from time import gmtime, strftime

rule indexBAM:
    input:
        expand("{outputDIR}/{reBamDIR}/{{samples}}{fileTAG}.bam", outputDIR=config["outputDIR"], reBamDIR=config["reBamDIR"], fileTAG=config["fileTAG"])
    output:
        expand("{outputDIR}/{reBamDIR}/{{samples}}{fileTAG}.bam.bai", outputDIR=config["outputDIR"], reBamDIR=config["reBamDIR"], fileTAG=config["fileTAG"]),
        expand("{outputDIR}/{bamDIR}/{{samples}}.bam.bai", outputDIR=config["outputDIR"], bamDIR=config["bamDIR"]),
        expand("{outputDIR}/{bamDIR}/{{samples}}.bam", outputDIR=config["outputDIR"], bamDIR=config["bamDIR"])
    params:
        bamDIR=config["bamDIR"],
        outputDIR=config["outputDIR"],
        logNAME="indexBAM." + strftime("%Y-%m-%d.%H-%M-%S", gmtime())
    log:
        "log/" + config["reBamDIR"]
    shell:
        "samtools index {input} {output[0]} " \
        " 2> {log}/{params.logNAME}.stderr " \
        "&& ln -fs $(pwd)/{output[0]} $(pwd)/{params.outputDIR}/{params.bamDIR}/{wildcards.samples}.bam.bai " \
        "&& ln -fs $(pwd)/{input} $(pwd)/{params.outputDIR}/{params.bamDIR}/{wildcards.samples}.bam"
Upvotes: 1
Reputation: 968
**This is not exactly an answer to your question, but rather a suggestion to reach your goal.**
I think it's not possible, or at least not trivial, to modify a YAML file during the pipeline run.
Personally, when I run Snakemake workflows I use external files that I call "metadata". They include a config file, but also a tab-separated file containing the list of samples (and possibly additional information about those samples). The config file contains a parameter which is the path to this file.
In such a setup, I would recommend having your rule a output another tab-separated file containing the selected samples, and the path to this file could be included in the config file (even though the file doesn't exist when you start the workflow). Rule b would take that file as an input.
In your case, your config file could contain:

samples: "/path/to/samples.tab"
valid_samples: "/path/to/valid_samples.tab"
I don't know if this makes sense for you, since it's based on my own organization. I find it useful because it allows storing more information than just sample names, and if you have hundreds of samples it's much easier to manage!
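As a rough sketch of how rule b could then consume such a file (assuming valid_samples.tab holds one sample name per line; the read_samples() helper is hypothetical):

def read_samples(path):
    # hypothetical helper: one sample name per line
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

rule b:
    input:
        # evaluated lazily, when Snakemake builds the DAG
        lambda wildcards: expand("data/{sample}.bam",
                                 sample=read_samples(config['valid_samples']))
    output: "results/gene_count.tsv"
    shell: '''command'''

Note that input functions are still evaluated when the DAG is built, so in practice this means running the workflow in two passes: first produce valid_samples.tab, then run the counting step.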
Upvotes: 0