sr1

Reputation: 173

psql dumping a table and memory usage

Hello, I would like to dump a Postgres table to CSV. The database is local to the (Linux) machine on which I am working.

psql dev -U myusername -c "select * from control.mytable;" > mydata.csv

mytable is pretty large - around 120 million rows. I am worried that this will take up way too much memory; when I monitor it with top, the memory usage seems to keep increasing, so I kill the process once it reaches a certain point. Does anyone know whether this really does consume a terrible amount of memory? I know the usage reported by top can be somewhat misleading to the uninitiated, such as myself. Are there any alternatives that won't take up so much memory?

Upvotes: 2

Views: 1650

Answers (3)

Dustin Butler

Reputation: 897

If you use COPY you have to go to STDOUT, unless you run it as the postgres user; in that case you can write directly to a server-side file.

psql -U myusername dev -c "COPY (select * from control.mytable) TO STDOUT WITH CSV HEADER;" > mydata.csv
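
Alternatively, psql's \copy meta-command runs the same COPY on the client side and streams straight into a local file, with no superuser requirement. A minimal sketch (file name is just an example):

psql -U myusername dev -c "\copy (select * from control.mytable) to 'mydata.csv' with csv header"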

Upvotes: 1

Peter Eisentraut

Reputation: 36709

If you do it like this, the entire result set will be built up in the client. So yes, it really is using a lot of memory. If you want to alleviate that, use a cursor to fetch the result in batches. Or use COPY, which already has a CSV mode built in.
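
For the cursor approach, a minimal sketch in psql might look like the following (the cursor name and batch size are just examples); you repeat the FETCH until it returns no more rows:

begin;
declare mycur cursor for select * from control.mytable;
fetch forward 10000 from mycur;
-- ...repeat the FETCH until it returns zero rows...
close mycur;
commit;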

Upvotes: 1

Frank Heikens

Reputation: 126971

Why don't you use COPY to create the CSV? COPY does a much better job.
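
Since the database is local, a server-side COPY straight to a file is also an option, assuming you connect as a superuser (or a role allowed to write server files); the output path below is just an example:

COPY control.mytable TO '/tmp/mydata.csv' WITH CSV HEADER;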

Upvotes: 1
