khajlk

Reputation: 851

R: Split and write very large data frame into slices

I have a large data frame my_df in R containing 1,983,000 records. The following sample code takes a chunk of 1,000 rows starting at row 25001, does some processing, and writes the processed data to a file on the local disk.

my_df1 <- my_df[25001:26000,]
my_df1$end <- as.POSIXct(paste(my_df1$end,"23:59",sep = ""))
my_df1$year <- lubridate::year(my_df1$start)
str_data <- my_df1
setwd("path_to_local_dir/data25001_26000")
write.table(str_data, file = "data25001-26000.csv",row.names = F,col.names = F,quote = F)

and so on like this:

my_df2 <- my_df[26001:27000,]
...

I would like to automate this task so that each chunk of 1,000 records is processed and written to a new directory. Any advice on how this could be done?

Upvotes: 1

Views: 924

Answers (4)

kstew

Reputation: 1114

You can add a grouping variable to your data first (e.g., to identify every 1000 rows), then use d_ply() to split the data and write to file.

library(plyr)   # load plyr before dplyr to avoid masking conflicts
library(dplyr)
library(readr)

df <- data.frame(var = runif(1000000))
df$fold <- cut(seq(1, nrow(df)), breaks = 100, labels = FALSE)

df %>% filter(fold <= 2) %>% # only writes first two files
  d_ply(., .(fold), function(i){
    # make filenames 'data1.csv', 'data2.csv'
    write_csv(i, paste0('data', distinct(i, fold), '.csv'))
    })
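Dropping the filter(fold <= 2) step would write all 100 files rather than just the first two.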

Upvotes: 1

Cole

Reputation: 11255

This is similar to @Parfait's answer but moves most of the work out of the function. Specifically, it makes a copy of the entire dataset and performs the time manipulations once, up front.

my_df1 <- my_df
my_df1$end <- as.POSIXct(paste(my_df1$end,"23:59",sep = ""))
my_df1$year <- lubridate::year(my_df1$start)

lapply(seq(25001, nrow(my_df1), by = 1000),
       function(i) write.table(my_df1[i:(i + 999), ]
                               , file = paste0('path_to_local_dir/data'
                                               , i, '-', i + 999, '.csv')
                               , row.names = F, col.names = F, quote = F)
)

For me, I'd probably just do:

write.table(my_df1, file = ...)

and be done with it. I don't see the advantage of splitting it up - a couple of million rows really isn't that many.
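For reference, a minimal sketch of that single write, assuming the same end/year processing shown above and a hypothetical output path:

# single write of the full processed data frame
# (the output file path here is an assumption; adjust to your own directory)
write.table(my_df1,
            file = "path_to_local_dir/data_all.csv",
            row.names = FALSE, col.names = FALSE, quote = FALSE)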

Upvotes: 0

Andrey Shabalin

Reputation: 4614

Here is my code for looping over the slices:

step1 = 1000
runto = nrow(my_df)
nsteps = ceiling(runto/step1)
for( part in seq_len(nsteps) ) { # part = 1
    cat( part, 'of', nsteps, '\n')
    fr = (part-1)*step1 + 1
    to = min(part*step1, runto)

    my_df1 = my_df[fr:to,]
    # ... your processing from the question, producing str_data from my_df1
    write.table(str_data, file = paste0("data",fr,"-",to,".csv"))
}
rm(part, step1, runto, nsteps, fr, to)

Upvotes: 1

Parfait

Reputation: 107567

Consider generalizing your process in a function, data_to_disk, and calling it with an iterator method like lapply, passing a sequence of integers built with seq() for each subsequent thousand rows. Also, incorporate dynamic directory creation (though maybe dump all 1,000+ files into one directory instead of creating 1,000+ dirs?).

data_to_disk <- function(num) {
   str_data <- within(my_df[num:(num + 999), ], {
                   end <- as.POSIXct(paste0(end, "23:59"))
                   year <- lubridate::year(start)
               })

   my_dir <- paste0("path_to_local_dir/data", num, "_", num + 999)
   if(!dir.exists(my_dir)) dir.create(my_dir)

   write.table(str_data, file = paste0(my_dir, "/", "data", num, "-", num + 999, ".csv"), 
               row.names = FALSE, col.names = FALSE, quote = FALSE)
   return(str_data)
}

seqs <- seq(25001, nrow(my_df), by=1000)
head(seqs)
# [1] 25001 26001 27001 28001 29001 30001
tail(seqs)
# [1] 1977001 1978001 1979001 1980001 1981001 1982001    

# LIST OF 1,958 DATA FRAMES
df_list <- lapply(seqs, data_to_disk)
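If you go the single-directory route mentioned above, a minimal sketch of the write step inside data_to_disk (assuming a hypothetical, pre-created directory path_to_local_dir/all_data) could look like:

# write every chunk into one existing directory instead of one directory per chunk
out_dir <- "path_to_local_dir/all_data"   # assumed path
write.table(str_data,
            file = paste0(out_dir, "/data", num, "-", num + 999, ".csv"),
            row.names = FALSE, col.names = FALSE, quote = FALSE)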

Upvotes: 2
