Reputation: 343
I am wondering if you would be able to give advice about defining a Snakemake rule that combines over one, but not all, wildcards. My data is organized into runs and samples; most, but not all, samples were resequenced in every run. Therefore, I have pre-processing steps that run per sample per run, followed by a step that combines the BAM files from all runs for each sample. The issue I'm running into is that I'm not sure how to define a rule whose input lists all individual BAMs (from different runs) corresponding to a sample.
I'm putting my entire pipeline below for clarity, but my real question is about rule combine_bams. How can I list all of the BAMs for a single sample in the input?
Any suggestions would be great! Thank you very much in advance!
import numpy as np

# Define samples and runs
RUNS, SAMPLES = glob_wildcards("/labs/jandr/walter/tb/data/Stanford/{run}/{samp}_L001_R1_001.fastq.gz")
print("runs are: ", RUNS)
print("samples are: ", SAMPLES)

rule all:
    input:
        #trim = ['process/trim/{run}_{samp}_trim_1.fq.gz'.format(samp=sample_id, run=run_id) for sample_id, run_id in zip(sample_ids, run_ids)],
        trim = expand('process/trim/{run}_{samp}_trim_1.fq.gz', zip, run = RUNS, samp = SAMPLES),
        kraken = expand('process/trim/{run}_{samp}_trim_kr_1.fq.gz', zip, run = RUNS, samp = SAMPLES),
        bams = expand('process/bams/{run}_{samp}_bwa_MTB_ancestor_reference_rg_sorted.bam', zip, run = RUNS, samp = SAMPLES), # fixed ref/mapper (expand with zip doesn't allow these to repeat)
        combined_bams = expand('process/bams/{samp}_bwa_MTB_ancestor_reference.merged.rmdup.bam', samp = np.unique(SAMPLES))
# Trim reads for quality.
rule trim_reads:
    input:
        p1='/labs/jandr/walter/tb/data/Stanford/{run}/{samp}_L001_R1_001.fastq.gz', # update inputs so they only include those that exist; use zip.
        p2='/labs/jandr/walter/tb/data/Stanford/{run}/{samp}_L001_R2_001.fastq.gz'
    output:
        trim1='process/trim/{run}_{samp}_trim_1.fq.gz',
        trim2='process/trim/{run}_{samp}_trim_2.fq.gz'
    log:
        'process/trim/{run}_{samp}_trim_reads.log'
    shell:
        '/labs/jandr/walter/tb/scripts/trim_reads.sh {input.p1} {input.p2} {output.trim1} {output.trim2} &>> {log}'
# Filter reads taxonomically with Kraken.
rule taxonomic_filter:
    input:
        trim1='process/trim/{run}_{samp}_trim_1.fq.gz',
        trim2='process/trim/{run}_{samp}_trim_2.fq.gz'
    output:
        kr1='process/trim/{run}_{samp}_trim_kr_1.fq.gz',
        kr2='process/trim/{run}_{samp}_trim_kr_2.fq.gz',
        kraken_stats='process/trim/{run}_{samp}_kraken.report'
    log:
        'process/trim/{run}_{samp}_run_kraken.log'
    threads: 8
    shell:
        '/labs/jandr/walter/tb/scripts/run_kraken.sh {input.trim1} {input.trim2} {output.kr1} {output.kr2} {output.kraken_stats} &>> {log}'
# Map reads.
rule map_reads:
    input:
        ref_path='/labs/jandr/walter/tb/data/refs/{ref}.fasta.gz',
        kr1='process/trim/{run}_{samp}_trim_kr_1.fq.gz',
        kr2='process/trim/{run}_{samp}_trim_kr_2.fq.gz'
    output:
        bam='process/bams/{run}_{samp}_{mapper}_{ref}_rg_sorted.bam'
    params:
        mapper='{mapper}'
    log:
        'process/bams/{run}_{samp}_{mapper}_{ref}_map.log'
    threads: 8
    shell:
        "/labs/jandr/walter/tb/scripts/map_reads.sh {input.ref_path} {params.mapper} {input.kr1} {input.kr2} {output.bam} &>> {log}"
# Combine reads and remove duplicates (per sample).
rule combine_bams:
    input:
        bams = 'process/bams/{run}_{samp}_bwa_MTB_ancestor_reference_rg_sorted.bam'
    output:
        combined_bam = 'process/bams/{samp}_{mapper}_{ref}.merged.rmdup.bam'
    log:
        'process/bams/{samp}_{mapper}_{ref}_merge_bams.log'
    threads: 8
    shell:
        "sambamba markdup -r -p -t {threads} {input.bams} {output.combined_bam}"
Upvotes: 0
Views: 544
Reputation: 256
Create a dictionary to associate each sample with its list of runs.
Then for the combine_bams rule, use an input function to generate the input files for that sample using the dictionary.
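For example, the dictionary could be built right after the glob_wildcards call in the question. This is a minimal sketch; the name sample_dict and the use of collections.defaultdict are my assumptions, not something specified above:

from collections import defaultdict

# Group run IDs by sample, using the paired RUNS/SAMPLES lists returned by glob_wildcards.
sample_dict = defaultdict(list)
for run_id, sample_id in zip(RUNS, SAMPLES):
    sample_dict[sample_id].append(run_id)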
rule combine_bams:
    input:
        bams = lambda wildcards: expand('process/bams/{run}_{{samp}}_bwa_MTB_ancestor_reference_rg_sorted.bam', run=sample_dict[wildcards.samp])
    output:
        combined_bam = 'process/bams/{samp}_{mapper}_{ref}.merged.rmdup.bam'
    log:
        'process/bams/{samp}_{mapper}_{ref}_merge_bams.log'
    threads: 8
    shell:
        "sambamba markdup -r -p -t {threads} {input.bams} {output.combined_bam}"
Upvotes: 1