Renato Dinhani

Reputation: 36726

Continue Postgres transaction with exceptions in Java

The insert method below inserts links into a table in the database (PostgreSQL), but if an error occurs, the rest of the transaction is affected and stops working. The exception happens because the url field is unique.

Is it possible in PostgreSQL to continue the transaction even after an exception?

for (int i = 0, n = page.getLinks().length; i < n; i++) {
     linkBD.insert(page.getLink(i), idPage);
}            

public boolean insert(Link link, int idPage) {
    try {
        String sql = "INSERT INTO link (id,idPage,url,linkText,"
        + "visited,broken) VALUES(nextval('idLinkSqc'),?,?,?,?,?)";

        PreparedStatement pstm = Conn.conn.prepareStatement(sql);

        pstm.setInt(1, idPage);
        pstm.setString(2, link.getUrl());
        pstm.setString(3, link.getLinkText());
        pstm.setBoolean(4, false);
        pstm.setBoolean(5, false);

        pstm.execute();            
        Conn.commit();
        pstm.close();            
        return true;
    } catch (Exception e) {
        System.out.println("Erro inserindo link no banco de dados: " + e.getMessage());
        System.out.println("Erro no link: "+link.getUrl());
        return false;
    }
}

The error message in Portuguese: transação atual foi interrompida, comandos ignorados até o fim do bloco de transação

Google Translate's translation, which I think is correct: current transaction is aborted, commands ignored until end of transaction block

Upvotes: 2

Views: 3556

Answers (7)

mu is too short

Reputation: 434735

You could avoid the constraint violation (and hence the exception and the transaction problem) by changing your INSERT:

INSERT INTO link(id, idPage, url, linkText, visited, broken)
SELECT nextval('idLinkSqc'), ?, ?, ?, ?, ?
FROM link
WHERE NOT EXISTS (SELECT url FROM link WHERE url = ?)
LIMIT 1

And then the extra placeholder:

pstm.setString(6, link.getUrl());

You could also replace your raw INSERT with a stored procedure that would check that the new URL isn't already there before trying to do the INSERT.

UPDATE: A better version of the above SQL would be this:

INSERT INTO link (id, idPage, url, linkText, visited, broken)
SELECT nextval('idLinkSqc'), ?, ?, ?, ?, ?
FROM (
    SELECT 1
    WHERE NOT EXISTS (SELECT 1 FROM link WHERE url = ?)
) AS postgres_needs_this_alias

The end result should be the same but this version doesn't need the LIMIT 1 hack. The idea behind this is to use the inner SELECT to produce one row if url is not present (hence the extra nested NOT EXISTS business) and no rows if url is present; then, we use the number of rows from the inner SELECT as a counter for how many rows should be inserted in link. An EXISTS check on an indexed column should be quite fast as well.
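Wired into the insert method from the question, this variant might look roughly like the sketch below. Conn.conn, Conn.commit() and the idLinkSqc sequence are taken from the question; the rest is only an illustration, not tested code.

public boolean insert(Link link, int idPage) {
    // The inner SELECT produces one row only when no link with this url exists yet,
    // so the unique constraint is never violated and the transaction is never aborted.
    String sql = "INSERT INTO link (id, idPage, url, linkText, visited, broken) "
               + "SELECT nextval('idLinkSqc'), ?, ?, ?, ?, ? "
               + "FROM (SELECT 1 WHERE NOT EXISTS "
               + "      (SELECT 1 FROM link WHERE url = ?)) AS postgres_needs_this_alias";
    try {
        PreparedStatement pstm = Conn.conn.prepareStatement(sql);
        pstm.setInt(1, idPage);
        pstm.setString(2, link.getUrl());
        pstm.setString(3, link.getLinkText());
        pstm.setBoolean(4, false);
        pstm.setBoolean(5, false);
        pstm.setString(6, link.getUrl());      // extra placeholder for the NOT EXISTS check
        int inserted = pstm.executeUpdate();   // 0 when the url was already present
        Conn.commit();
        pstm.close();
        return inserted > 0;
    } catch (Exception e) {
        System.out.println("Error inserting link into the database: " + e.getMessage());
        return false;
    }
}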

Upvotes: 2

bobflux

Reputation: 11581

Do this:

INSERT INTO table (column list)
SELECT v.* FROM (VALUES (....), (....), (....)) v
     LEFT JOIN table t ON t.t_unique_column=v.column1 -- choose the matching column
     WHERE t.t_unique_column IS NULL
RETURNING *

This will allow you to insert a batch of rows without exception checking for each row, and the RETURNING clause gives you back what was inserted, including sequence-generated PKs. It is the fastest solution (besides COPY).
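A rough Java sketch of that idea, reusing the Link, Conn and idLinkSqc names from the question. The insertAll method, the VALUES batching and the column aliases are assumptions for illustration, not code from the question:

public void insertAll(List<Link> links, int idPage) throws SQLException {
    if (links.isEmpty()) return;

    // One VALUES row per link; column1..column6 are PostgreSQL's default names
    // for the VALUES columns, so column3 is the url used in the anti-join.
    // Note: duplicates within the same batch are not filtered by this query.
    StringBuilder values = new StringBuilder();
    for (int i = 0; i < links.size(); i++) {
        if (i > 0) values.append(", ");
        values.append("(nextval('idLinkSqc'), ?, ?, ?, false, false)");
    }
    String sql = "INSERT INTO link (id, idPage, url, linkText, visited, broken) "
               + "SELECT v.* FROM (VALUES " + values + ") v "
               + "LEFT JOIN link t ON t.url = v.column3 "
               + "WHERE t.url IS NULL "
               + "RETURNING id, url";

    PreparedStatement pstm = Conn.conn.prepareStatement(sql);
    int p = 1;
    for (Link link : links) {
        pstm.setInt(p++, idPage);
        pstm.setString(p++, link.getUrl());
        pstm.setString(p++, link.getLinkText());
    }
    ResultSet rs = pstm.executeQuery();        // RETURNING rows come back as a result set
    while (rs.next()) {
        System.out.println("Inserted id " + rs.getLong(1) + " for " + rs.getString(2));
    }
    rs.close();
    pstm.close();
    Conn.commit();
}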

Upvotes: 1

Wayne Conrad

Reputation: 108079

It is possible to continue if the failure was inside a SAVEPOINT. Here's an example in psql:

# create temporary table foo (i int primary key);
CREATE TABLE

Begin a transaction and insert a row:

# begin;
BEGIN
# insert into foo values(1);
INSERT 0 1

Start a savepoint, then insert the same row twice. This will cause an error:

# savepoint bar;
SAVEPOINT
# insert into foo values(2);
INSERT 0 1
# insert into foo values(2);
ERROR:  duplicate key value violates unique constraint "foo_pkey"

Roll back to the savepoint, then insert another row.

# rollback to savepoint bar;
ROLLBACK
# insert into foo values(3);
INSERT 0 1

Commit and see what's there:

# commit;
COMMIT
# select * from foo;
 i 
---
 1
 3
(2 rows)
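The same pattern is available from JDBC through Connection.setSavepoint and Connection.rollback(Savepoint). Below is a sketch of how the loop from the question could use it; it assumes the per-link Conn.commit() inside insert() is removed so that a single transaction spans the whole loop (SQLException handling omitted for brevity):

Connection conn = Conn.conn;                    // connection object from the question's Conn helper
for (int i = 0, n = page.getLinks().length; i < n; i++) {
    Savepoint sp = conn.setSavepoint();
    if (linkBD.insert(page.getLink(i), idPage)) {
        conn.releaseSavepoint(sp);              // keep the successful insert
    } else {
        conn.rollback(sp);                      // undo only the failed statement;
                                                // the transaction stays usable
    }
}
conn.commit();                                  // commit every link that succeeded

Rolling back to the savepoint works even after the failed INSERT has put the transaction into the aborted state, which is exactly what the error message in the question complains about.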

Upvotes: 3

Arjan

Reputation: 9884

The problem shouldn't be how to handle exceptions but rather how to prevent them. This link should give you information on how to gracefully ignore inserts that would fail because of duplicate keys.

Upvotes: 0

Milen A. Radev

Reputation: 62613

Disclaimer: I know almost nothing about Java/JDBC.

Two "solutions":

  • don't group all inserts into one transaction; execute them separately (see the sketch after this list)
  • take a look at SAVEPOINTs
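A sketch of the first option, using the Conn helper from the question. The only change to the question's method is the rollback in the catch block, which ends the aborted transaction so the next insert starts a fresh one; everything else is elided:

public boolean insert(Link link, int idPage) {
    try {
        // ... build and execute the INSERT exactly as in the question ...
        Conn.commit();                         // each link is its own transaction
        return true;
    } catch (Exception e) {
        try {
            Conn.conn.rollback();              // clear the aborted transaction so the
        } catch (SQLException ignore) { }      // next insert is not affected by this failure
        System.out.println("Error at link: " + link.getUrl());
        return false;
    }
}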

Upvotes: 1

Anthony Accioly

Reputation: 22471

Have you tried moving the commit outside of the try block (maybe to a finally block)? It may also have something to do with returning false: have you checked whether the code tests this boolean at a later time and does some kind of rollback if it's false?

Upvotes: 0

Will Hartung

Reputation: 118704

I don't know a way around this; there might be one, but I simply don't know what it is. Once Postgres hits an issue in a transaction, it aborts the entire thing and your only option is to restart it. That means you need to ensure your code will operate correctly before batching it all up (notably things like duplicate primary keys and missing foreign keys).

Upvotes: 2
