Reputation: 2611
We are managing to write ~1.7 M rows in about 30 s (on a MacBook Pro) using R and MonetDB.R within a Shiny application:
dbWriteTable(conn, tableName, Dt,overwrite=TRUE, csvdump=TRUE)
As we understand it, MonetDB.R first saves the table as a CSV file in a /tmp folder, then loads it into the MonetDB database with COPY INTO.
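For context, that mechanism can be sketched in plain R as follows. This is only an illustrative sketch, not the actual MonetDB.R internals: the table name mytable and the exact DELIMITERS clause are assumptions, and the SQL is merely printed since no live MonetDB connection is available here.

```r
# Illustrative sketch of the csvdump path (not MonetDB.R's actual code):
# dump the data.frame to a CSV under tempdir(), then build a COPY INTO.
Dt  <- data.frame(id = 1:3, val = c("a", "b", "c"))
tmp <- tempfile(fileext = ".csv")   # created under tempdir(), i.e. /tmp by default
write.csv(Dt, tmp, row.names = FALSE)
# Hypothetical bulk-load statement; with a real connection this would be
# sent via DBI, here it is only printed.
sql <- sprintf("COPY INTO mytable FROM '%s' USING DELIMITERS ',','\\n','\"'", tmp)
cat(sql, "\n")
unlink(tmp)                         # clean up the temporary dump
```

This makes clear why the write fails when the R process cannot create files under tempdir().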
We have just tried this on a CentOS server with RStudio Server and got an access-rights error on the temporary folder in /tmp.
We resolved the problem with the classic
chmod -R 777 /tmp
but it seems we have to do it again every time we run this part of the app within Shiny, so it does not look like a scalable solution.
Would it be possible to get a more stable and scalable solution?
(Also, this feature was documented in MonetDB 0.94, but it does not appear anywhere in the 0.95 documentation on CRAN. Can we safely assume that this is an oversight and the feature is going to stay?)
Upvotes: 1
Views: 128
Reputation: 2552
MonetDB.R uses R's tempfile() to create the temporary CSV file. tempfile() in turn calls tempdir() to determine the temporary directory. You can control the location of this directory through the environment variable TMPDIR (among others); see ?tempdir for details. For example:
$ TMPDIR=/tmp/foo R -e "print(tempfile())"
[1] "/tmp/foo/Rtmp7UKG0k/file1173e13a4477c"
$ TMPDIR=/tmp/bar R -e "print(tempfile())"
[1] "/tmp/bar/RtmpPxx76t/file1174a409c06a2"
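Note that tempdir() is fixed at session startup, so TMPDIR must be set in the environment of the process that launches R (e.g. the Shiny Server service), not via Sys.setenv() afterwards. From inside a running session you can at least verify where temporary files will land and whether that location is writable before attempting the bulk write; this is a minimal sketch, not part of the MonetDB.R API:

```r
# Check the session's temporary directory before a csvdump bulk write.
td <- tempdir()                              # fixed at session startup from TMPDIR et al.
cat("temp dir:", td, "\n")
writable <- file.access(td, mode = 2) == 0   # mode 2 tests write permission
stopifnot(dir.exists(td), writable)
```

If the check fails under Shiny, point TMPDIR at a directory owned by the user the Shiny process runs as, rather than loosening permissions on /tmp itself.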
Regarding your other question: the csvdump feature is going to stay, since MonetDB is quite fast at bulk CSV loading.
Upvotes: 3