Reputation: 99
I have hundreds of files that I want to run through R for analysis. Some of my code works on single files, and I have figured out that I can batch them using the following code:
setwd("~/directory of interest")
files <- list.files(pattern = "\\.csv$")
files
for (i in seq_along(files)) {
  DataSet1 <- read.csv(file = files[i], header = TRUE, stringsAsFactors = TRUE)
  # ... run my algorithm on DataSet1 ...
  setwd("~/location of saving directory")
  write.csv(DataSet1, file = files[i], quote = FALSE, row.names = FALSE)
  setwd("~/directory of interest")
}
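(Side note: I have also realized the repeated setwd() calls inside the loop can be avoided by building full paths with file.path() — a sketch of the same loop, with the two directory paths standing in for my real ones:)

```r
in_dir  <- "~/directory of interest"        # source directory
out_dir <- "~/location of saving directory" # destination directory

files <- list.files(in_dir, pattern = "\\.csv$")
for (i in seq_along(files)) {
  # Read from the source directory by full path, no setwd() needed
  DataSet1 <- read.csv(file.path(in_dir, files[i]),
                       header = TRUE, stringsAsFactors = TRUE)
  # ... run my algorithm on DataSet1 ...
  # Write to the destination directory under the same file name
  write.csv(DataSet1, file.path(out_dir, files[i]),
            quote = FALSE, row.names = FALSE)
}
```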
I have another analysis in which two files (one pair) both need to be read in as two different data sets that my algorithm then runs on. Is there a way to do this in a batch format? My files are named so that the pairs always sort next to each other (i.e., when the files are listed in my working directory, they would appear as DataSet1a, DataSet1b, DataSet2a, DataSet2b, etc.). Since I have hundreds of pairs of files, I could do them one pair at a time manually, but I feel there has got to be a better way. Thanks.
Upvotes: 0
Views: 1521
Reputation: 19544
You can try something like this:
setwd("~/directory of interest")
filesA <- list.files(pattern = "a\\.csv$")
filesB <- list.files(pattern = "b\\.csv$")
for (i in seq_along(filesA)) {
  DataSet1A <- read.csv(file = filesA[i], header = TRUE, stringsAsFactors = TRUE)
  DataSet1B <- read.csv(file = filesB[i], header = TRUE, stringsAsFactors = TRUE)
  # ... run the paired algorithm on DataSet1A and DataSet1B ...
}
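If you want to be sure the pairs can never get out of step (for example if one "b" file is missing, the two lists would silently pair up wrong files), you can derive each "b" name from its "a" partner instead of listing them separately. A sketch, assuming the `...a.csv` / `...b.csv` naming convention from the question (the directory paths are placeholders):

```r
in_dir <- "~/directory of interest"  # placeholder path

# List only the "a" files, then derive each partner's name from it,
# so index i always refers to the same pair in both vectors.
filesA <- list.files(in_dir, pattern = "a\\.csv$")
filesB <- sub("a\\.csv$", "b.csv", filesA)

for (i in seq_along(filesA)) {
  DataSetA <- read.csv(file.path(in_dir, filesA[i]), stringsAsFactors = TRUE)
  DataSetB <- read.csv(file.path(in_dir, filesB[i]), stringsAsFactors = TRUE)
  # ... run the paired algorithm on DataSetA and DataSetB ...
}
```

With this approach a missing "b" file fails loudly at `read.csv` instead of shifting every later pair by one.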
Upvotes: 1