After a lot of fighting, I have found a solution :-)

For me, a comment posted here from "justfalter" was very useful. Anyway, I have done it this way:

```shell
# Dockerfile
COPY scripts/init_docker_postgres.sh /docker-entrypoint-initdb.d/
```

db/structure.sql is a SQL dump, useful to initialize the first tablespace.

Then, the init_docker_postgres.sh:

```shell
#!/bin/bash
# this script is run when the docker container is built
# it imports the base database structure and creates the database for the tests
DB_DUMP_LOCATION="/tmp/psql_data/structure.sql"
gosu postgres postgres --single < "$DB_DUMP_LOCATION"
```

If the client and server versions differ, the import may print a version-mismatch warning such as: WARNING: psql version 9.2, server version 9.4.

For those who want to initialize a PostgreSQL DB with millions of records during the first run:

You can do a simple SQL dump and copy the dump.sql file into /docker-entrypoint-initdb.d/. My dump.sql script is about 17 MB (a small DB: 10 tables, with 100k rows in only one of them), and the initialization takes over a minute (!). That is unacceptable for local development, unit tests, etc.

The solution is to make a binary PostgreSQL dump and use the shell-script initialization support. Then the same DB is initialized in about 500 ms instead of 1 minute.

1. Create the dump.pgdata binary dump of a DB named "my-db"

Directly from within a container or against your local DB:

```shell
pg_dump -U postgres --format custom my-db > "dump.pgdata"
```

Or from the host, from a running container (postgres-container):

```shell
docker exec postgres-container pg_dump -U postgres --format custom my-db > "dump.pgdata"
```

2. Create a Docker image with a given dump and initialization script

```shell
$ tree
$ cat docker-entrypoint-initdb.d/01-restore.sh
file="/docker-entrypoint-initdb.d/dump.pgdata"
```

The Dockerfile copies the dump and the script into the image:

```shell
# Dockerfile
COPY docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
```

On the first start of a container built from this image, postgres is now initialized with the dump.
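The excerpt shows only the first line of 01-restore.sh (the `file=` assignment). A plausible completion is sketched below — this is my reconstruction, not the original author's script: the `my-db` name, the `createdb` step, and the `pg_restore` flags are all assumptions. The snippet writes the script into a temp directory only so it is self-contained to inspect; in the image, the file would live in /docker-entrypoint-initdb.d/:

```shell
# Sketch of a completed 01-restore.sh (hypothetical reconstruction).
# The official postgres image runs every *.sh file found in
# /docker-entrypoint-initdb.d/ on the first start of a fresh data directory.
dir=$(mktemp -d)
cat > "$dir/01-restore.sh" <<'EOF'
#!/bin/bash
set -e
file="/docker-entrypoint-initdb.d/dump.pgdata"
dbname=my-db                      # assumed name, matching the pg_dump step
echo "Restoring $dbname from $file"
createdb -U postgres "$dbname"    # pg_restore needs an existing database
pg_restore -U postgres --dbname="$dbname" --no-owner "$file"
EOF
chmod +x "$dir/01-restore.sh"
echo "wrote $dir/01-restore.sh"
```

Because the dump was made with `--format custom`, it must be loaded with pg_restore rather than psql.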
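The `01-` prefix on the restore script is not cosmetic: the postgres image's entrypoint executes the files in /docker-entrypoint-initdb.d/ in sorted filename order, so numeric prefixes control sequencing when there are several init scripts. A small demonstration of that ordering rule, simulated with plain files in a temp directory rather than a real container:

```shell
# Simulate the entrypoint's "run init files in sorted order" behavior.
dir=$(mktemp -d)
echo 'echo restore' > "$dir/01-restore.sh"
echo 'echo seed'    > "$dir/02-seed.sh"
out=""
for f in "$dir"/*.sh; do   # shell glob expansion is sorted alphabetically
  out="$out$(sh "$f") "
done
echo "$out"                # prints: restore seed
```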