PentaKon

Reputation: 4636

How to call multiple DAO functions in a transaction

I'm looking for a way to call multiple DAO functions in a transaction, but I am NOT using Spring or any such framework. What we actually have is a database-API .jar which gets initialized with the datasource in use. What I want to achieve is to have my business-logic-level code do something like:

Connection conn = datasource.getConnection();
conn.setAutoCommit(false);
DAOObject1.query1(params, conn);
DAOObject2.query4(params, conn);
conn.commit();
conn.setAutoCommit(true);

However, I want to avoid passing the connection object into every single function, since this doesn't feel like the correct way to do it. Right now, in the few transactions we have, this is what we do, but we are looking for a way to stop passing the connection object into the database layer, or even to create it outside of that layer entirely. I'm looking for something along the lines of:

//Pseudocode
try{
  Datasource.startTransactionLogic();
  DAO1.query(params);
  DAO2.query(params);
  Datasource.endAndCommitTransactionLogic();
}
catch(SQLException e){
  Datasource.rollbackTransaction();
}

Could I achieve this through EJBs? Right now we're not obtaining DAOs through injection; we create them by hand. However, we're about to migrate to EJBs and start using them via the container. I've heard that all queries executed by EJBs are transactional, but how does the container know what to roll back to? Through savepoints?

EDIT:

Let me point out that each DAO object's method, right now, obtains its own connection object. Here is an example of how our DAO classes will be:

public class DAO {
    public DTO exampleQueryMethod(Integer id) throws SQLException {
        DTO object = null;
        String sql = "SELECT * FROM TABLE_1 WHERE ID = ?";
        try (
            Connection connection = datasourceObject.getConnection();
            PreparedStatement statement = connection.prepareStatement(sql)
        ) {
            statement.setInt(1, id);
            try (ResultSet resultSet = statement.executeQuery()) {
                if (resultSet.next()) {
                    object = DAO.map(resultSet);
                }
            }
        }
        return object;
    }
}

Right now what we're doing for methods that need to be in a transaction is to have a second copy of them that receive a Connection object:

public void exampleUpdateMethod(DTO object, Connection connection) {
    //table update logic
}

What we want is to avoid having such duplicate methods in our 'database api' .jar, and instead be able to define the beginning and commit of a transaction in our business-logic layer, as in the pseudocode above.

Upvotes: 2

Views: 3540

Answers (3)

PentaKon

Reputation: 4636

EDIT: After accumulating a few more years of experience, I'd like to point out that the simplest and most correct answer to this question is to keep the Connection in a ThreadLocal (since the work is request scoped, only a single thread executes it). Unfortunately, at the time I didn't know such a construct existed.
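A minimal sketch of that ThreadLocal idea, with invented names (this holder class is not from the original code): the request-handling thread binds its Connection once, every DAO reads it from the holder, and the filter or resource clears it at the end of the request.

```java
import java.sql.Connection;

// Hypothetical per-thread connection holder. Each request thread binds its
// own Connection; DAOs call get() instead of receiving it as a parameter.
final class ConnectionHolder {
    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();

    private ConnectionHolder() {}

    // Bind the connection at the start of the request.
    static void set(Connection connection) {
        CURRENT.set(connection);
    }

    // DAOs call this instead of taking a Connection argument.
    static Connection get() {
        Connection connection = CURRENT.get();
        if (connection == null) {
            throw new IllegalStateException("No connection bound to this thread");
        }
        return connection;
    }

    // Clear at the end of the request so the thread can be safely pooled.
    static void clear() {
        CURRENT.remove();
    }
}
```

The important detail is the `clear()` in a finally block at the request boundary: servlet threads are pooled, so a stale binding would leak into the next request.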

@G. Demecki has the right idea, but I followed a different implementation. Interceptors couldn't solve the problem (at least from what I saw), because they need to be attached to each function that is supposed to use them. Also, once an interceptor is attached, calling the function will always have it intercepted, which is not my goal. I wanted to be able to explicitly define the beginning and end of a transaction, and have every SQL statement executed between these two points be part of the SAME transaction, without the business code having access to the database-related objects (connection, transaction, etc.) through argument passing. The way I was able to achieve this (quite elegantly, in my opinion) is the following:

I created a ConnectionWrapper object like so:

@RequestScoped
public class ConnectionWrapper {

    @Resource(lookup = "java:/MyDBName")
    private DataSource dataSource;

    private Connection connection;

    @PostConstruct
    public void init() throws SQLException {
        this.connection = dataSource.getConnection();
    }

    @PreDestroy
    public void destroy() throws SQLException {
        this.connection.close();
    }

    public void begin() throws SQLException {
        this.connection.setAutoCommit(false);
    }

    public void commit() throws SQLException {
        this.connection.commit();
        this.connection.setAutoCommit(true);
    }

    public void rollback() throws SQLException {
        this.connection.rollback();
        this.connection.setAutoCommit(true);
    }

    public Connection getConnection() {
        return connection;
    }
}

My DAO objects themselves follow this pattern:

@RequestScoped
public class DAOObject implements Serializable {

    private Logger LOG = Logger.getLogger(getClass().getName());

    @Inject
    private ConnectionWrapper wrapper;

    private Connection connection;

    @PostConstruct
    public void init() {
        connection = wrapper.getConnection();
    }

    public void query(DTOObject dto) throws SQLException {
        String sql = "INSERT INTO DTO_TABLE VALUES (?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, dto.getName());
            statement.executeUpdate();
        }
    }
}

Now I can easily have a JAX-RS resource which @Injects these objects and starts and commits a transaction, without having to pass any Connection or UserTransaction around.

@Path("test")
@RequestScoped
public class TestResource {

    @Inject
    ConnectionWrapper wrapper;

    @Inject
    DAOObject dao;

    @Inject
    DAOObject2 dao2;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public Response testMethod() throws Exception {
        try {
            wrapper.begin();
            DTOObject dto = new DTOObject();
            dto.setName("Name_1");
            dao.query(dto);
            DTOObject2 dto2 = new DTOObject2();
            dto2.setName("Name_2");
            dao2.query2(dto2);
            wrapper.commit();
        } catch (SQLException e) {
            wrapper.rollback();
        }
        return Response.ok("ALL OK").build();
    }
}

And everything works perfectly. No Interceptors or looking around InvocationContext etc.

There are only 2 things bothering me:

  1. I have not yet found a way to use a dynamic JNDI name in @Resource(lookup = "java:/MyDBName"), and this bothers me. In our application server we have defined many datasources, and the one used by the application is chosen dynamically according to an .xml resource file packaged with the .war. This means I can't know the datasource JNDI name at compile time. There is the option of obtaining a datasource through an InitialContext() environment variable, but I'd prefer to get it as a resource from the server. I could also write a @Produces producer and inject it that way, but still.
  2. I'm not really sure why ConnectionWrapper's @PostConstruct gets called BEFORE DAOObject's @PostConstruct. It is the correct and desired behavior, but I haven't understood why. My guess is that since DAOObject @Injects a ConnectionWrapper, the wrapper's @PostConstruct takes precedence because it has to have finished before DAOObject's can even start, but this is just a guess.

Upvotes: 1

G. Demecki

Reputation: 10596

@JBNizet's comment was correct, but please think twice about whether you really need to migrate to EJBs. Even transactions are not very intuitive there: having your exception wrapped in a javax.ejb.EJBException is neither flexible nor readable. Not to mention other problems, like startup time or integration testing.

Judging from your question, it seems that all you need is a Dependency Injection framework with support for the Interceptors. So possible ways to go:

  • Spring is definitely the most popular in this area
  • CDI (Weld or OpenWebBeans), available since the Java EE 6 release, which can be used entirely without a Java EE application server (I'm using this approach right now, and it works nicely).
  • Guice also comes with its own com.google.inject.persist.Transactional annotation.

All three above frameworks are equally good for your use case, but there are other factors that should be considered, like:

  • which one you and your team are familiar with
  • learning curve
  • your application's future possible needs
  • framework's community size
  • framework's current development speed
  • etc.

Hope it helps you a little bit.

EDIT: to clarify your doubts:
You can create your own Transaction class, which would wrap a Connection fetched from the datasource.getConnection(). Such transaction should be a @RequestScoped CDI bean and contain method methods like begin(), commit(), and rollback() - which would call connection.commit/ rollback under the hood. Then you can write a simple interceptor like this one which would use mentioned transaction and start/ commit/ rollback it wherever needed (of course with AutoCommit disabled).

It is doable, but keep in mind that it should be carefully designed. That is why interceptors for transactions are already provided in almost every DI platform/framework.
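To make the interceptor idea concrete without pulling in any framework, here is a hand-rolled sketch using a JDK dynamic proxy; this is roughly what a CDI or Guice transaction interceptor does around each intercepted call. All names here (Tx, Transactions.wrap) are invented for the sketch, not part of any library.

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;

// Hypothetical transaction handle, standing in for the @RequestScoped
// wrapper bean described above.
interface Tx {
    void begin();
    void commit();
    void rollback();
}

final class Transactions {
    private Transactions() {}

    // Wraps every call on the given interface in begin/commit,
    // rolling back if the target method throws.
    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> iface, Tx tx) {
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(),
            new Class<?>[]{iface},
            (proxy, method, args) -> {
                tx.begin();
                try {
                    Object result = method.invoke(target, args);
                    tx.commit();
                    return result;
                } catch (InvocationTargetException e) {
                    tx.rollback();
                    throw e.getCause();
                }
            });
    }
}
```

A real interceptor adds what this sketch omits: nesting rules (don't begin twice), binding the proxy creation to an annotation, and scoping the Tx per request. That is the careful design the answer warns about.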

Upvotes: 1

DaveTheRave

Reputation: 463

What I have done in the past is create a Repository object that takes the database API, obtains a connection, and saves that connection as a member variable (along with a reference to the datasource as well).

I then hang all the business-layer calls as methods off this Repository object, for the caller's convenience.

This way you can call, mix, and match any of these methods against the same underlying connection, then perform rollbacks, commits, etc.

Repository myr = new Repository(datasource);  // let the constructor create the connection
myr.setAutoCommit(false);
myr.DAOObject1(params);   // method wrapper
myr.DAOObject2(params);   // method wrapper

myr.commitWork();   // method in Repository that calls endAndCommitTransactionLogic

We then took this new object, created a pool of them, primed and managed in a separate thread, and the application just requested a new Repository from the pool, and off we went.
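The Repository described above might look like the following sketch. The names (commitWork, rollbackWork) are my guesses at the shape of the answer's object, and the business-layer wrapper methods are elided:

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical Repository: owns a single connection for its lifetime and
// exposes transaction control to the caller, so DAOs never see the Connection.
class Repository implements AutoCloseable {
    private final Connection connection;

    // The constructor creates the connection, as in the pseudocode above.
    Repository(DataSource dataSource) throws SQLException {
        this.connection = dataSource.getConnection();
    }

    void setAutoCommit(boolean autoCommit) throws SQLException {
        connection.setAutoCommit(autoCommit);
    }

    void commitWork() throws SQLException {
        connection.commit();
    }

    void rollbackWork() throws SQLException {
        connection.rollback();
    }

    // Business-layer wrappers would live here and reuse this.connection, e.g.:
    // void insertDto(DTO dto) throws SQLException { ... }

    @Override
    public void close() throws SQLException {
        connection.close();
    }
}
```

Because Repository implements AutoCloseable, callers can use it in a try-with-resources block so the connection is returned even when a query throws.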

Upvotes: 1
