n.e.w

Reputation: 1148

R: Is it possible to parallelize / speed-up the reading in of a 20 million plus row CSV into R?

Once the CSV is loaded via read.csv, it's fairly trivial to use multicore, segue, etc. to play around with the data. Reading it in, however, is quite the time sink.

I realise it would be better to use MySQL or the like.

Assume the use of an AWS 8xlarge cluster compute instance running R 2.13.

Specs as follows:

Cluster Compute Eight Extra Large specifications:

  • 88 EC2 Compute Units (2 x eight-core Intel Xeon)
  • 60.5 GB of memory
  • 3370 GB of instance storage
  • 64-bit platform
  • I/O performance: Very High (10 Gigabit Ethernet)

Any thoughts / ideas much appreciated.

Upvotes: 8

Views: 3645

Answers (3)

Richard Erickson

Reputation: 2625

Going parallel might not be needed if you use fread in data.table.

library(data.table)
dt <- fread("myFile.csv")

A comment on this question illustrates its power. Here's an example from my own experience:

d1 <- fread('Tr1PointData_ByTime_new.csv')
Read 1048575 rows and 5 (of 5) columns from 0.043 GB file in 00:00:09

I was able to read in about 1.05 million rows in under 10 seconds!

Upvotes: 5

John

Reputation: 23768

Flash or conventional HD storage? If the latter, and you don't know where the file sits on the drives and how it's split, it's very hard to speed things up, because multiple simultaneous reads will not be faster than one streamed read. The bottleneck is the disk, not the CPU. There's no way to parallelize this without starting at the storage level of the file.

If it's flash storage, then a solution like Paul Hiemstra's might help, since good flash storage can have excellent random-read performance, close to sequential. Try it... but if it doesn't help, you'll know why.

Also... a fast storage interface doesn't necessarily mean the drives can saturate it. Have you run performance tests on the drives to see how fast they really are?
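If not, a quick sanity check is to time a raw sequential read from the instance storage. A minimal sketch from within R (the file name is a placeholder; use a file larger than free RAM, or drop the page cache first, to get a realistic number):

# Rough sequential-read benchmark: stream a large file in 100 MB blocks
f <- file("myFile.csv", "rb")           # any large file on the instance storage
block <- 100 * 1024^2
total <- 0
elapsed <- system.time({
  repeat {
    chunk <- readBin(f, what = "raw", n = block)
    if (length(chunk) == 0) break
    total <- total + length(chunk)
  }
})["elapsed"]
close(f)
cat(sprintf("Sequential read: %.1f MB/s\n", total / 1024^2 / elapsed))

If that number is far below what the interface promises, parallelizing the CSV parse won't buy you much.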

Upvotes: 4

Paul Hiemstra

Reputation: 60984

What you could do is use scan. Two of its input arguments could prove to be interesting: n and skip. You just open two or more connections to the file and use skip and n to select the part you want to read from the file. There are some caveats:

  • At some stage, disk I/O might prove to be the bottleneck.
  • I hope that scan does not complain about multiple connections being open to the same file.

But you could give it a try and see if it gives a boost to your speed.
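For what it's worth, here is a rough sketch of that idea using scan together with mclapply from the parallel package. The file name, row count, and column count are placeholders: you'd need to know (or count) the number of data rows up front, and the file is assumed to be all-numeric with a single header line.

library(parallel)

csv_file <- "myFile.csv"   # placeholder path
n_rows   <- 20e6           # data rows (excluding header); must be known up front
n_cols   <- 5              # number of (numeric) columns
n_chunks <- 8              # e.g. one chunk per core

# Split the row range into roughly equal chunks.
breaks <- floor(seq(0, n_rows, length.out = n_chunks + 1))

read_chunk <- function(i) {
  rows <- breaks[i + 1] - breaks[i]
  # Skip the header plus all rows before this chunk, then read rows * n_cols values.
  vals <- scan(csv_file, what = numeric(), sep = ",",
               skip = 1 + breaks[i], n = rows * n_cols, quiet = TRUE)
  matrix(vals, ncol = n_cols, byrow = TRUE)
}

# Each worker opens its own connection to the same file.
chunks <- mclapply(seq_len(n_chunks), read_chunk, mc.cores = n_chunks)
dat <- do.call(rbind, chunks)

Whether this beats a single read.csv (or fread) call depends on whether the storage can serve several readers at once, as John's answer points out.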

Upvotes: 4
