Reputation: 51
I am relatively new to R and am trying to work with data from a large CSV file (~13.2 million rows, ~250 fields per row, ~14 GB total). After searching for fast ways to access this data, I came across the ff package and its read.table.ffdf function. I have been using it as follows:
library(ff)
read.table.ffdf(file = "mydata.csv", sep = ",", colClasses = rep("factor", 250), VERBOSE = TRUE)
However, with VERBOSE enabled, I noticed from the output below that each successive block write takes progressively longer:
read.table.ffdf 1..1000 (1000) csv-read=0.131sec ffdf-write=0.817sec
read.table.ffdf 1001..18260 (17260) csv-read=2.351sec ffdf-write=24.858sec
read.table.ffdf 18261..35520 (17260) csv-read=2.093sec ffdf-write=33.838sec
read.table.ffdf 35521..52780 (17260) csv-read=2.386sec ffdf-write=41.802sec
read.table.ffdf 52781..70040 (17260) csv-read=2.428sec ffdf-write=43.642sec
read.table.ffdf 70041..87300 (17260) csv-read=2.336sec ffdf-write=44.414sec
read.table.ffdf 87301..104560 (17260) csv-read=2.43sec ffdf-write=52.509sec
read.table.ffdf 104561..121820 (17260) csv-read=2.15sec ffdf-write=57.926sec
read.table.ffdf 121821..139080 (17260) csv-read=2.329sec ffdf-write=58.46sec
read.table.ffdf 139081..156340 (17260) csv-read=2.412sec ffdf-write=63.759sec
read.table.ffdf 156341..173600 (17260) csv-read=2.344sec ffdf-write=67.341sec
read.table.ffdf 173601..190860 (17260) csv-read=2.383sec ffdf-write=70.157sec
read.table.ffdf 190861..208120 (17260) csv-read=2.538sec ffdf-write=75.463sec
read.table.ffdf 208121..225380 (17260) csv-read=2.395sec ffdf-write=109.761sec
read.table.ffdf 225381..242640 (17260) csv-read=2.824sec ffdf-write=131.764sec
read.table.ffdf 242641..259900 (17260) csv-read=2.714sec ffdf-write=116.166sec
read.table.ffdf 259901..277160 (17260) csv-read=2.277sec ffdf-write=97.019sec
read.table.ffdf 277161..294420 (17260) csv-read=2.388sec ffdf-write=158.784sec
My understanding was that ff avoids the slowdown that comes from exhausting available RAM by storing the data frame on disk. Shouldn't each block take roughly the same amount of time to write? Have I done something incorrectly, or is there a better approach to what I am trying to accomplish?
Thanks in advance for any insights you might have to offer!
Upvotes: 4
Views: 1456
Reputation: 3297
Have you tried the fread function from the data.table package? I load files of that size frequently; it still takes some time, but it is robust and much, much faster than base R's read.table. Give it a go.
library(data.table)
X <- fread("mydata.csv")
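Since your read.table.ffdf call declared all 250 columns as factors, a minimal sketch of a comparable fread call (assuming you still want factor columns; stringsAsFactors, showProgress, and nThread are standard fread arguments, and the file name is just your example) might look like this:

library(data.table)

# Sketch: read the whole CSV into memory, converting character columns to factors
# so the result resembles the all-factor ffdf from the question.
X <- fread("mydata.csv",
           stringsAsFactors = TRUE,   # character columns become factors
           showProgress = TRUE)       # print progress while reading; nThread can be tuned

Note that fread reads everything into RAM, so with a ~14 GB file you will need enough memory to hold the full table.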
Upvotes: 3