Reputation: 15
I have been trying to read a large .csv file (2 GB+) with over 900 variables. I tried various options to import the file directly, but none of them worked.
Right now I am trying to import the .csv file by reading it row-wise, meaning I use the skip option and append one row at a time to the master data frame. I am using the following code:
data <- read.table(file_name, header=TRUE, nrows=1, skip=2, sep=",")
The issue I am facing is that when I use the skip option, the headers are not read, even though I have set header=TRUE.
Am I missing something? Any help will be really appreciated.
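(For context: read.table applies skip before it looks for a header, so with skip=2 the header line itself is discarded and the next unskipped row is promoted to the header. A tiny reproduction, using a hypothetical four-line demo.csv:)
writeLines(c("a,b", "1,x", "2,y", "3,z"), "demo.csv")  # hypothetical demo file
read.table("demo.csv", header=TRUE, nrows=1, skip=2, sep=",")
##   X2 y      <- the data row "2,y" was promoted to the header
## 1  3 z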
Upvotes: 0
Views: 1920
Reputation: 78832
This should let you split out your large CSVs into chunks with headers.
#' Split a large CSV file into separate files with \code{chunk_size} records per-file
#'
#' @param path path to the large CSV file
#' @param template path template for saving out the smaller files. Uses \code{sprintf}.
#' @param chunk_size number of records per file
#' @param locale,na,quoted_na,comment,trim_ws passed on to \code{read_csv}
#' @examples
#' csv_split("largefile.csv", chunk_size=10000)
csv_split <- function(path, template="file%05d.csv", chunk_size=1000,
                      locale=default_locale(), na=c("", "NA"), quoted_na=TRUE,
                      comment="", trim_ws=TRUE) {

  require(readr)

  path <- path.expand(path)

  # sniff column names & types once from the file's header
  csv_spec <- spec_csv(path)

  skip <- 0
  part <- 1

  repeat {
    # read the next chunk, reusing the sniffed names/types so every chunk
    # is parsed consistently even though the header isn't re-read
    df <- read_csv(path, col_names=names(csv_spec$cols), col_types=csv_spec,
                   locale=locale, na=na, quoted_na=quoted_na, comment=comment,
                   trim_ws=trim_ws, skip=skip, n_max=chunk_size)
    if (nrow(df) == 0) break           # nothing left to read
    cat(sprintf("Writing [%s]...\n", sprintf(template, part)))
    write_csv(df, sprintf(template, part))
    part <- part + 1
    skip <- skip + chunk_size
  }

}
Example:
library(readr)
df <- data.frame(name=sample(LETTERS, 1000000, replace=TRUE),
                 age=sample(30:100, 1000000, replace=TRUE))
write_csv(df, "allinone.csv")
csv_split("allinone.csv", chunk_size=50000)
## Writing [file00001.csv]...
## Writing [file00002.csv]...
## Writing [file00003.csv]...
## Writing [file00004.csv]...
## Writing [file00005.csv]...
## Writing [file00006.csv]...
## Writing [file00007.csv]...
## Writing [file00008.csv]...
## Writing [file00009.csv]...
## Writing [file00010.csv]...
## Writing [file00011.csv]...
## Writing [file00012.csv]...
## Writing [file00013.csv]...
## Writing [file00014.csv]...
## Writing [file00015.csv]...
## Writing [file00016.csv]...
## Writing [file00017.csv]...
## Writing [file00018.csv]...
## Writing [file00019.csv]...
## Writing [file00020.csv]...
## Writing [file00021.csv]...
This can (and probably should) be modified to handle two edge cases: the first file ends up with chunk_size - 1 real records, since the header line is consumed as a data row of the first chunk (which is also why the example above writes 21 files rather than 20), and commented lines are not accounted for by the line-based skip arithmetic.
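For the first edge case, a minimal sketch of one possible fix (a hypothetical csv_split2; the locale/na/comment pass-through arguments are dropped for brevity) is to start skip at 1 so the header line is never read as data, giving every chunk exactly chunk_size records:
csv_split2 <- function(path, template="file%05d.csv", chunk_size=1000) {
  require(readr)
  path <- path.expand(path)
  csv_spec <- spec_csv(path)      # names/types still come from the header
  skip <- 1                       # hop over the header line itself
  part <- 1
  repeat {
    df <- read_csv(path, col_names=names(csv_spec$cols), col_types=csv_spec,
                   skip=skip, n_max=chunk_size)
    if (nrow(df) == 0) break
    write_csv(df, sprintf(template, part))
    part <- part + 1
    skip <- skip + chunk_size
  }
}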
Even if you don't use this for the actual splitting, it at least gives you working example code for getting and reusing the column headers.
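And to address the question's original loop directly: the usual base-R workaround is to read the header once, then re-read each row with header=FALSE and the saved names passed via col.names. A minimal sketch, assuming a comma-separated file_name as in the question:
hdr <- names(read.table(file_name, header=TRUE, nrows=1, sep=","))  # header, read once
i <- 0
repeat {
  row <- tryCatch(
    read.table(file_name, header=FALSE, col.names=hdr, sep=",",
               skip=i + 1, nrows=1),   # +1 hops over the header line
    error=function(e) NULL)            # read.table errors once past end-of-file
  if (is.null(row)) break
  # ... append `row` to the master data frame here ...
  i <- i + 1
}
(Note that this re-opens and re-scans the file on every iteration, so it will be painfully slow on a 2 GB file; the chunked approach above is far faster.)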
Upvotes: 3