ameerosein

Reputation: 563

How can I read a file with multiple headers?

I have a file with multiple headers, and I also need the headers.

Head of my file:

    >1 Len = 254
    13 112 1 18
    15 112 1 30
    22 11  3 25
    >1 Reverse  Len = 254
    14 11 1 15
    >2 Len = 186
    19 15 2 34
    25 11  3 25
    ....

How can I read this file and import the values into R variables (like a data frame)?

Alternatively, it would also help if someone could show me how to remove the headers and add another column that represents the table number (or marks a row as the first row of a new table).

I don't want to read it as a string and parse it.

If it helps, the data is a report from the MUMmer package.

I also uploaded an example here: http://m.uploadedit.com/ba3c/1429271308686.txt

Upvotes: 1

Views: 180

Answers (3)

ameerosein

Reputation: 563

Finally, I parsed the data with a few lines of code and imported it into R.

I merged all the tables into one table and added a new column holding the name of each table...

That's it:

lns = readLines("filename.txt")  # read the file as a character vector

idx = grepl("^>", lns)  # logical index of header lines (those starting with ">")

df = read.table(text = lns[!idx])  # read all lines except the headers as a table

wd = diff(c(which(idx), length(idx) + 1)) - 1  # number of data rows under each header

df$label = rep(lns[idx], wd)  # repeat each header down its rows as a new column
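The same steps can be sketched end to end on the sample rows from the question, with the file contents embedded as a character vector instead of read from disk (a minimal, self-contained illustration, not the exact file):

```r
# Sample lines, as they would come back from readLines()
lns <- c(">1 Len = 254", "13 112 1 18", "15 112 1 30", "22 11  3 25",
         ">1 Reverse  Len = 254", "14 11 1 15",
         ">2 Len = 186", "19 15 2 34", "25 11  3 25")

idx <- grepl("^>", lns)                          # TRUE for header lines
df  <- read.table(text = lns[!idx])              # data rows only
wd  <- diff(c(which(idx), length(idx) + 1)) - 1  # rows per table: 3, 1, 2
df$label <- rep(lns[idx], wd)                    # header repeated per row
df
```

This yields a single six-row data frame whose `label` column tells you which header each row came from.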

Another way to handle this special case is a Perl one-liner that someone on another forum suggested to me; I don't know exactly how it works, but it does:

https://support.bioconductor.org/p/66724/#66767

Thanks to the others for their helpful answers and comments, which helped me arrive at this answer :)

Upvotes: 0

steinbock

Reputation: 736

Or if you want a long-winded, cumbersome method...

# if you just want the data and not the header information

x<-read.table("1429271308686.txt",comment.char=">")

# in case all else fails, my somewhat cumbersome solution...
x<-scan("1429271308686.txt",what="raw")

# extract the lengths, ind1 has all the lengths
ind1<-x=="="
ind1<-c(ind1[length(ind1)],ind1[-length(ind1)]) # take the value that comes after "="
lengths<-as.numeric(x[ind1])[c(TRUE,FALSE)] # only want one of the lengths

# remove the unwanted characters
ind2<-x==">"
ind2<-c(ind2[length(ind2)],ind2[-length(ind2)]) # take the value that comes after ">"

ind3<-x==">"|x=="Len"|x=="="|x=="Reverse"
dat<-as.numeric(x[!(ind1|ind2|ind3)]) # remove the unwanted

# arrange as matrix
mat<-matrix(dat,length(dat)/4,4,byrow=T)

# the number of rows for each block
block<-(c(1:length(x))[duplicated(cumsum(!ind2))][c(FALSE,TRUE)]-c(1:length(x))[duplicated(cumsum(!ind2))][c(TRUE,FALSE)]-5)/4

# the number for each block
id<-as.numeric(x[ind2])[c(TRUE,FALSE)]

# new vector
mat<-cbind(rep(id,block),mat) # note, this assumes that the last line is again "> Reverse"

Upvotes: 1

A5C1D2H2I1M1N2O1R2T1

Reputation: 193527

There is really no easy way to do this without reading the whole thing in as a string and parsing it, but you can easily wrap such steps in a function, as I have done with the read.mtable function in my "SOfun" package.

Here it is applied to your sample data:

## library(devtools)
## install_github("mrdwab/SOfun")

library(SOfun)
X <- read.mtable("http://m.uploadedit.com/ba3c/1429271308686.txt", ">")
X <- X[!grepl("Reverse", names(X))]

names(X)
#  [1] "> 1  Len = 354"   "> 2  Len = 127"   "> 3  Len = 109"   "> 4  Len = 52"   
#  [5] "> 5  Len = 1189"  "> 6  Len = 1007"  "> 7  Len = 918"   "> 10  Len = 192" 
#  [9] "> 11  Len = 169"  "> 13  Len = 248"  "> 14  Len = 2500"
X[1]
# $`> 1  Len = 354`
#        V1  V2  V3  V4
# 1  203757   1   1  35
# 2  122132   1   1  87
# 3  203756   1   1 354
# 4       1   1   1 354
# 5   42364  12   1  89
# 6  203757  37  37  91
# 7  122132  90  90  38
# 8   42364 102  91  37
# 9  203757 129 129 168
# 10  42364 140 129 212
# 11 122132 129 129 212
# 12 203757 298 298  43

As you can see, it has created a list of 11 data.frames, each named with the "Len =" value.

The two arguments used here are the file location (here a URL) and the chunkID, which can be set to a regular expression or a fixed pattern that you want to match. Here, we want to match any lines that start with a ">" as indicative of where a new dataset starts.
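If you then want the single labelled table the question asks for, the named list can be flattened with a label column. A sketch, using a small hand-built list standing in for read.mtable's output (same shape: a named list of data.frames):

```r
# Stand-in for read.mtable's output: a named list of data.frames
X <- list(`> 1  Len = 354` = data.frame(V1 = c(13, 15), V2 = c(112, 112)),
          `> 2  Len = 186` = data.frame(V1 = 19,        V2 = 15))

# Flatten into one data.frame, repeating each list name down its rows
combined <- do.call(rbind, Map(cbind, X, label = names(X)))
rownames(combined) <- NULL
combined
```

`Map(cbind, X, label = names(X))` attaches each table's name as a recycled `label` column before the `rbind`, which is essentially the same trick as the accepted answer's `rep(lns[idx], wd)`.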

Upvotes: 2
