Reputation: 823
I have a CSV file with around 30,000 rows of data.
I need this data to be in a database for my application.
I'm not sure what approach I should take to initialize this data.
I'm using the PostgreSQL Docker image.
My thoughts are:
1. A .sql file that inserts the data, executed when the container runs. This first approach is versatile, since inserting rows is a common task that doesn't break. The downside is that the inserts have to run on every docker run.
2. A volume that already contains the loaded database. I guess this second approach is faster and more efficient, but the volume might not be compatible if Postgres updates its version for some reason, or if I decide to change databases.
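For what it's worth, one way to build that .sql file is to generate a single COPY ... FROM stdin statement from the CSV, which Postgres loads far faster than 30,000 individual INSERTs. A minimal sketch in Python (the items table, its column names, and the assumption that the data contains no embedded tabs or backslashes are all illustrative, not from the question):

```python
import csv
import io

def csv_to_copy_sql(csv_text, table, columns):
    """Turn CSV text into a .sql script that bulk-loads the rows with
    COPY ... FROM stdin, which is much faster than one INSERT per row.
    Note: assumes the values contain no tabs, newlines, or backslashes;
    real data would need escaping per the COPY text format."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    data = rows[1:]  # skip the header row
    lines = [f"COPY {table} ({', '.join(columns)}) FROM stdin;"]
    for row in data:
        lines.append("\t".join(row))  # COPY's default delimiter is tab
    lines.append("\\.")  # end-of-data marker for COPY FROM stdin
    return "\n".join(lines) + "\n"

sql = csv_to_copy_sql("id,name\n1,apple\n2,pear\n", "items", ["id", "name"])
print(sql)
```

Write the result to a file and feed it to the container; with the stock Postgres image, scripts placed in /docker-entrypoint-initdb.d run only the first time an empty data directory is initialized, not on every docker run.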
Any advice?
Upvotes: 2
Views: 590
Reputation: 2656
Just mount a volume on your host and put the database in there. If the database does not exist, it is created by the Postgres image. In an entrypoint script you could check whether the database is empty and, if so, load the 30,000 records.
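That empty-check can be sketched as follows. This illustration uses Python with SQLite so it is self-contained and runnable; against the real container you would issue the same count query and bulk load through psql or a Postgres driver instead. The items table and its two-column layout are assumptions for the example:

```python
import csv
import io
import sqlite3

def load_if_empty(conn, table, csv_text):
    """Idempotent seed: insert the CSV rows only when the table is empty,
    so re-running the entrypoint does not duplicate the data."""
    count = conn.execute(f"SELECT count(*) FROM {table}").fetchone()[0]
    if count > 0:
        return 0  # already seeded, nothing to do
    rows = list(csv.reader(io.StringIO(csv_text)))[1:]  # skip header
    conn.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
seed_csv = "id,name\n1,apple\n2,pear\n"
print(load_if_empty(conn, "items", seed_csv))  # first run: 2 rows loaded
print(load_if_empty(conn, "items", seed_csv))  # second run: 0, already seeded
```

Combined with a host-mounted data volume, this gives you both properties the question wants: the load happens once, and subsequent container starts skip it.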
You state that this might not be the best solution for the following two reasons:
Upvotes: 1