Arush Shah

Reputation: 63

Python PostgreSQL table creation using pg_dump

I'm trying to replicate the table structure of an existing PostgreSQL database. I currently have two databases, A and B. I used the "pg_dump -s" command to generate the SQL that creates the tables of A. In my Python script, I have the following code to create the same structure in database B:

import psycopg2

con2 = psycopg2.connect(database="Archives", user="postgres", password="root", host="127.0.0.1", port="5432")

dbexe = """
--
-- PostgreSQL database dump
--

-- Dumped from database version 9.6.0
-- Dumped by pg_dump version 9.6.0

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;

--
-- Name: plpgsql; Type: EXTENSION; Schema: -; Owner: 
--

CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;


--
-- Name: EXTENSION plpgsql; Type: COMMENT; Schema: -; Owner: 
--
several thousand more lines ...."""

cur2 = con2.cursor()
cur2.execute(dbexe)
print("done")

The code seems to run fine, but when I check database B, I don't see any tables. Any suggestions as to why this may be happening?

Upvotes: 0

Views: 676

Answers (1)

Jakub Macina

Reputation: 982

You must commit the transaction at the end. psycopg2 opens a transaction implicitly, and any uncommitted changes are rolled back and lost when the connection is closed.

con2.commit()

Alternatively, you can set autocommit to True right after connecting, so that each statement is committed to the database automatically.

con2.autocommit = True
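For reference, here is a minimal sketch of the script from the question with an explicit commit added (variable names are taken from the question; the dump string is shortened here for illustration):

import psycopg2

# Connection parameters copied from the question; adjust to your environment.
con2 = psycopg2.connect(database="Archives", user="postgres", password="root",
                        host="127.0.0.1", port="5432")

# Schema-only dump produced by pg_dump -s (only a fragment shown here).
dbexe = """
CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog;
-- ... rest of the dump ...
"""

cur2 = con2.cursor()
cur2.execute(dbexe)   # runs inside the transaction psycopg2 opened implicitly
con2.commit()         # persist the DDL; without this the changes are rolled back
print("done")
con2.close()

If you instead set con2.autocommit = True immediately after psycopg2.connect(...), each execute is committed as soon as it runs and the explicit con2.commit() is not needed.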

Upvotes: 1
