P.Escondido

Reputation: 3553

Load a small random sample from a large csv file into R data frame

The CSV file to be processed does not fit into memory. How can one read ~20K random lines of it into a data frame to do basic statistics on the selected data?

Upvotes: 15

Views: 18445

Answers (4)

G. Grothendieck

Reputation: 269852

Try this, based on examples 6e and 6f on the sqldf GitHub home page:

library(sqldf)
DF <- read.csv.sql("x.csv", sql = "select * from file order by random() limit 20000")

See ?read.csv.sql and pass other arguments as needed, based on the particulars of your file.
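If your file's format differs from the defaults, those arguments can be passed alongside the query. A minimal sketch, where the file name and the semicolon separator are assumptions for illustration:

library(sqldf)

# Same random-sample query, with explicit file-format arguments
DF <- read.csv.sql("mydata.csv",
                   sql = "select * from file order by random() limit 20000",
                   header = TRUE, sep = ";")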

Upvotes: 8

Philip John

Reputation: 5565

The following can be used if you have an ID (or something similar) in your data: take a sample of the IDs, then subset the data using the sampled IDs. Note that this assumes the data frame is already loaded in memory, so it does not by itself get around the memory limit.

# Draw 1000 random IDs, then keep only the rows whose id is in the sample
sampleids <- sample(data$id, 1000)
newdata <- subset(data, id %in% sampleids)

Upvotes: -4

Jed

Reputation: 261

You can also just do it in the terminal with Perl.

perl -ne 'print if (rand() < .01)' biglist.txt > subset.txt

This won't necessarily get you exactly 20,000 lines; here it grabs roughly 1% of the total lines, so adjust the probability to your file size. It will, however, be really fast, and you'll have a copy of both files in your directory. You can then load the smaller file into R however you want, for example as in the sketch below.
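A minimal sketch of loading the sample back into R, assuming biglist.txt is a CSV with a header row (which the random filter above will usually drop from subset.txt); the file names are carried over from the command above:

# Recover the column names from the original file's header row
cols <- names(read.csv("biglist.txt", nrows = 1))

# Read the sampled lines; if the header line happened to survive the
# random filter, drop that row afterwards
subset_df <- read.csv("subset.txt", header = FALSE, col.names = cols)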

Upvotes: 25

Señor O

Reputation: 17432

This should work:

RowsInCSV <- 10000000  # or however many rows the file has

# Read one randomly chosen row per iteration, 20,000 times
List <- lapply(1:20000, function(x)
  read.csv("YourFile.csv", nrows = 1, skip = sample(RowsInCSV, 1), header = FALSE))
DF <- do.call(rbind, List)
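Each read.csv call above rescans the file from the top, which is slow on a very large file, and sampling independently per call can pick the same row twice. A hedged alternative sketch, under the same RowsInCSV assumption, samples the row numbers once (without replacement) and streams the file in a single pass:

rows <- sort(sample(RowsInCSV, 20000))  # sampled data-row numbers, ascending

con <- file("YourFile.csv", open = "r")
header <- readLines(con, n = 1)  # keep the header line
kept <- character(length(rows))
prev <- 0
for (i in seq_along(rows)) {
  gap <- rows[i] - prev - 1
  if (gap > 0) invisible(readLines(con, n = gap))  # discard unsampled lines
  kept[i] <- readLines(con, n = 1)
  prev <- rows[i]
}
close(con)

# Reassemble header plus sampled lines into a data frame
DF <- read.csv(text = c(header, kept))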

Upvotes: 5
