Reputation: 149
I have several large R data.frames that I would like to put into a local duckdb database. The problem I am having is that duckdb seems to load everything into memory even though I am specifying a file as the location.
Also, the correct way to establish a connection isn't clear to me (so I'm not sure whether that has something to do with it). I have tried:
duckdrv <- duckdb(dbdir="dt.db", read_only=FALSE)
dkCon <- dbConnect(drv=duckdrv)
and also:
duckdrv <- duckdb()
dkCon <- dbConnect(drv=duckdrv, dbdir="dt.db", read_only=FALSE)
Both work fine, meaning I can create tables, use dbWriteTable, run queries, etc. However, the memory usage is very high (about the same size as the data.frames). I think I read somewhere that duckdb defaults to using a certain percentage of the available memory, which won't work for me because the system I am using is a shared resource. I also want to run some queries in parallel, which will drive memory usage even higher.
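For reference, the writes themselves are plain dbWriteTable calls, roughly like this (a sketch with hypothetical table and data.frame names):
library(DBI)
library(duckdb)
# stand-in for one of the large data.frames (hypothetical name and size)
big_df <- as.data.frame(matrix(rnorm(1e7), ncol = 10))
con <- dbConnect(duckdb::duckdb(), dbdir = "dt.db", read_only = FALSE)
dbWriteTable(con, "big_table", big_df, overwrite = TRUE)
dbDisconnect(con, shutdown = TRUE)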
I have tried this:
dbExecute(dkCon, "PRAGMA memory_limit='1GB';")
but that doesn't seem to make a difference, even if I close the connection, shut down the instance, and reconnect.
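For completeness, the value actually in effect can be checked with something like this (a sketch, assuming a duckdb release that exposes current_setting() and duckdb_settings()):
# report the memory_limit currently in effect
dbGetQuery(dkCon, "SELECT current_setting('memory_limit') AS memory_limit")
# alternative: dbGetQuery(dkCon, "SELECT * FROM duckdb_settings() WHERE name = 'memory_limit'")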
Does anyone know how I can fix this problem? RSQLite also has temporarily high memory usage when I am writing data to a table, but it then goes back to normal, and if I open a read-only connection it isn't an issue at all. I would like to get duckdb working because I think the queries are supposed to be much faster. Any help would be appreciated!
Upvotes: 7
Views: 3382
Reputation: 8824
The memory limit can be set using a PRAGMA or SET statement in DuckDB. By default, the limit is 75% of the RAM.
con.execute("PRAGMA memory_limit='200MB'")
OR
con.execute("SET memory_limit='200MB'")
I can confirm that this limit works. However, it is not a hard limit and might sometimes be exceeded, depending on the volume of data, the format of the data you are querying (e.g. Parquet from S3), and the type of query - there are certain limitations and constraints around it at the moment.
Below is an example where the volume of data in plain text (CSV) was around 4.23 GB. This data was first loaded into DuckDB and then some SQL queries were run with memory_limit='200MB'. The screenshot below shows the maximum recorded memory used by the Python script.
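For the R setup in the question, the same limit can be applied through DBI - a minimal sketch, assuming the duckdb R package and the dt.db file from the question:
library(DBI)
con <- dbConnect(duckdb::duckdb(), dbdir = "dt.db")
# either the PRAGMA or the SET form should work on recent versions
dbExecute(con, "PRAGMA memory_limit='200MB'")
dbExecute(con, "SET memory_limit='200MB'")
dbDisconnect(con, shutdown = TRUE)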
Upvotes: 3
Reputation: 402
Your approach is correct - using the memory_limit pragma - but you used an outdated version.
For example, using DuckDB version 0.5.1:
library("DBI")
con = dbConnect(duckdb::duckdb(), dbdir="my-db.duckdb")
dbExecute(conn = con, paste0("PRAGMA memory_limit='500MB'"))
dbGetQuery(conn = con, "PRAGMA version")
dbExecute(con, "CREATE TABLE gen AS SELECT * FROM 'gen1GB.csv'")
dbGetQuery(conn = con, "select count(*) from gen")
This outputs for me:
library_version source_id
1 0.5.1 7c111322d
count_star()
1 1e+08
Memory usage stays below 500 MB. On macOS it can be checked using:
ps axu | grep 'lib\/R' | awk '{print $6 " " $11}'
464768 /usr/local/Cellar/r/4.2.1_4/lib/R/bin/exec/R
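The view from inside DuckDB should agree with ps; something like PRAGMA database_size reports current memory usage alongside the configured limit (assuming a DuckDB version that exposes those columns):
# memory_usage and memory_limit are reported alongside the on-disk size
dbGetQuery(conn = con, "PRAGMA database_size")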
You can generate a test CSV file using:
import numpy as np
import pandas as pd

# 100 million rows x 4 columns of random integers in [0, 100) -> roughly a 1 GB CSV
rng = np.random.default_rng()
df = pd.DataFrame(rng.integers(0, 100, size=(100000000, 4)), columns=list('ABCD'))
df.to_csv('gen1GB.csv', index=False)
Upvotes: 1