Reputation: 4220
What's the best (easiest?) way to take a large file with many columns (~600,000) and split it into 6 files, with the columns spread evenly across the 6 files? Also, I want each file to keep the first 6 columns from the original file.
example:
FID IID MID PID SEX PHENO SNP1 SNP2 SNP3 SNP4
1 70323 0 0 2 2 0 0 1 0 ...
2 70323 0 0 2 2 1 0 2 1 ...
3 70323 0 0 2 2 0 0 0 1 ...
...
Solutions using basic Linux command-line tools available on Ubuntu would be preferable (or a Perl/Python script).
My solution in Perl: here is what I did. It's very ugly, so I was hoping there would be a simpler, more elegant solution.
#!/usr/bin/perl
use warnings;
use strict;

my $line;
my $num_snps = 0;
my $i        = 0;
my ( $OUT1, $OUT2, $OUT3, $OUT4 );
my ( $end1, $end2, $end3, $end4 );

while ( $line = <> ) {
    chomp $line;
    my @a = split( /\s/, $line );
    if ( $i == 0 ) {
        # Header line: work out the column breakpoints from the number of SNP columns.
        $num_snps = $#a + 1 - 6;
        $end1     = 5 + int( $num_snps / 4 );
        $end2     = $end1 + int( $num_snps / 4 ) + 1;
        $end3     = $end2 + int( $num_snps / 4 ) + 1;
        $end4     = $#a;
        print("Breaks: $end1\t$end2\t$end3\t$end4\tTotal SNPs: $num_snps\n");
    }
    else {
        # Append the 6 leading columns plus this file's slice of SNP columns.
        open( $OUT1, ">>", "kuehn1.raw" ) or die "kuehn1.raw: $!";
        print $OUT1 join( " ", @a[ 0 .. 5 ] ) . " " . join( " ", @a[ 6 .. $end1 ] ) . "\n";
        close($OUT1);
        open( $OUT2, ">>", "kuehn2.raw" ) or die "kuehn2.raw: $!";
        print $OUT2 join( " ", @a[ 0 .. 5 ] ) . " " . join( " ", @a[ ( $end1 + 1 ) .. $end2 ] ) . "\n";
        close($OUT2);
        open( $OUT3, ">>", "kuehn3.raw" ) or die "kuehn3.raw: $!";
        print $OUT3 join( " ", @a[ 0 .. 5 ] ) . " " . join( " ", @a[ ( $end2 + 1 ) .. $end3 ] ) . "\n";
        close($OUT3);
        open( $OUT4, ">>", "kuehn4.raw" ) or die "kuehn4.raw: $!";
        print $OUT4 join( " ", @a[ 0 .. 5 ] ) . " " . join( " ", @a[ ( $end3 + 1 ) .. $end4 ] ) . "\n";
        close($OUT4);
    }
    $i = $i + 1;
}
Upvotes: 0
Views: 839
Reputation: 4220
You could first get the number of columns in the file with
awk '{ print NF; exit }' < file
then use that knowledge to construct your breakpoints. So if your file had 66 columns:
cut -d' ' -f1-6,7-16 < file > file1
cut -d' ' -f1-6,17-26 < file > file2
cut -d' ' -f1-6,27-36 < file > file3
cut -d' ' -f1-6,37-46 < file > file4
cut -d' ' -f1-6,47-56 < file > file5
cut -d' ' -f1-6,57-66 < file > file6
Not the most elegant, but it should work. Note the -d' ' option: cut splits on tabs by default, so for a space-separated file like the one in the question you have to set the delimiter explicitly.
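If you don't want to work out the field ranges by hand, here is a minimal shell sketch of the same idea (the names are hypothetical: it assumes a single-space-separated input called file and writes file1 through file6; adjust the delimiter and names for your data). It reads the column count once, splits the columns after the first 6 into six roughly equal slices, and runs one cut per slice:
ncols=$(awk '{ print NF; exit }' file)   # total number of columns, taken from the first line
data=$(( ncols - 6 ))                    # columns after the 6 leading ones
per=$(( (data + 5) / 6 ))                # columns per output file, rounded up
start=7
for i in 1 2 3 4 5 6; do
    [ "$start" -gt "$ncols" ] && break   # nothing left to split
    end=$(( start + per - 1 ))
    [ "$end" -gt "$ncols" ] && end=$ncols
    cut -d' ' -f1-6,$start-$end file > "file$i"
    start=$(( end + 1 ))
done
Each output file keeps the first 6 columns plus its own slice; the last file simply ends up a bit smaller when the column count does not divide evenly by 6.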
Upvotes: 1