Metal Wing

Reputation: 545

Entity classes and Record locking

I am looking at the EntityManager API, and I am trying to understand the order in which I would do a record lock. Basically, when a user decides to edit a record, my code is:

entityManager.getTransaction().begin();
r = entityManager.find(Route.class, r.getPrimaryKey());
r.setRoute(txtRoute.getText());
entityManager.persist(r);
entityManager.getTransaction().commit();

From my trial and error, it appears I need to call WWEntityManager.entityManager.lock(r, LockModeType.PESSIMISTIC_READ); after the .begin().

I naturally assumed that I would use WWEntityManager.entityManager.lock(r, LockModeType.NONE); after the commit, but it gave me this:

Exception Description: No transaction is currently active

I haven't tried putting it before the commit yet, but wouldn't that defeat the purpose of locking the record, since my goal is to avoid colliding records in case 50 users try to commit a change at once?

Any help on how I can lock the record for the duration of the edit is greatly appreciated!

Thank You!

Upvotes: 3

Views: 8389

Answers (2)

Glen Best

Reputation: 23115

Great work attempting to be safe in write locking your changing data. :) But you might be going overboard / doing it the long way.

  • First a minor point. The call to persist() isn't needed. For an update, just modify the attributes of the entity returned from find(). The entityManager automatically knows about the changes and writes them to the db during commit. Persist is only needed when you create a new object & write it to the db for the first time (or add a new child object to a parent relation and wish to cascade the persist via cascade=PERSIST).

  • Most applications have a low probability of 'clashing' concurrent updates to the same data by different threads which have their own separate transactions and separate persistent contexts. If this is true for you and you would like to maximise scalability, then use an optimistic write lock, rather than a pessimistic read or write lock. This is the case for the vast majority of web applications. It gives exactly the same data integrity, much better performance/scalability, but you must (infrequently) handle an OptimisticLockException.

  • Optimistic write locking is built in automatically: simply have a short/integer/long/TimeStamp attribute in the db and entity and annotate it in the entity with @Version. You do not need to call entityManager.lock() in that case.
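As a sketch of that last point (the field names here are illustrative, and depending on your JPA version the package may be javax.persistence rather than jakarta.persistence), a versioned Route entity could look like this:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Version;

@Entity
public class Route {

    @Id
    private Long id;

    private String route;

    // The provider increments this on every successful commit and compares
    // it on update; a stale value raises OptimisticLockException.
    @Version
    private int version;

    public String getRoute() { return route; }
    public void setRoute(String route) { this.route = route; }
}
```

No code ever needs to touch the version field directly; the provider manages it.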

If you were satisfied with the above, and you added a @Version attribute to your entity, your code would be:

try {
   entityManager.getTransaction().begin();
   r = entityManager.find(Route.class, r.getPrimaryKey());
   r.setRoute(txtRoute.getText());
   entityManager.getTransaction().commit();
} catch (OptimisticLockException e) {
   // Logging and (maybe) some error handling here.
   // In your case you are lucky - you could simply rerun the whole method.
   // Although often automatic recovery is difficult and possibly dangerous/undesirable
   // in which case we need to report the error back to the user for manual recovery 
}

i.e. no explicit locking at all - the entity manager handles it automagically.
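If you do take the "simply rerun the whole method" route on OptimisticLockException, the retry itself is just a loop. A minimal, JPA-free sketch of that pattern (Retry and withRetry are illustrative names; in the real code the task would be the whole begin/find/modify/commit block and retryOn would be OptimisticLockException):

```java
import java.util.function.Supplier;

public final class Retry {

    // Run the task, retrying up to maxAttempts times when retryOn is thrown.
    public static <T> T withRetry(int maxAttempts,
                                  Class<? extends RuntimeException> retryOn,
                                  Supplier<T> task) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.get();
            } catch (RuntimeException e) {
                if (!retryOn.isInstance(e)) {
                    throw e;           // not a lock clash: propagate
                }
                last = e;              // lock clash: loop and retry
            }
        }
        throw last;                    // attempts exhausted
    }
}
```

Keep maxAttempts small: if the same row clashes repeatedly, reporting the error to the user is usually better than spinning.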

IF you had a strong need to avoid concurrent data update "clashes", and are happy for your code to have limited scalability, then serialise data access via pessimistic write locking:

try {
   entityManager.getTransaction().begin();
   r = entityManager.find(Route.class, r.getPrimaryKey(), LockModeType.PESSIMISTIC_WRITE);
   r.setRoute(txtRoute.getText());
   entityManager.getTransaction().commit();
} catch (PessimisticLockException e) {
   // log & rethrow
}

In both cases, a successful commit or an exception with automatic rollback means that any locking carried out is automatically cleared.

Cheers.

Upvotes: 0

Mikko Maunu

Reputation: 42114

Performing locking inside a transaction makes perfect sense. The lock is automatically released at the end of the transaction (commit / rollback). Locking outside of a transaction (in the context of JPA) does not make sense, because releasing the lock is tied to the end of the transaction. Besides, locking after the changes are performed and the transaction is committed does not make much sense either.

It may be that you are using pessimistic locking for a purpose other than what it is really for. If my assumption is wrong, you can ignore the end of the answer. When your transaction holds a pessimistic read lock on an entity (row), the following is guaranteed:

  • No dirty reads: other transactions cannot see the results of operations you performed on the locked rows.
  • Repeatable reads: no modifications from other transactions.
  • If your transaction modifies locked entity, PESSIMISTIC_READ is upgraded to PESSIMISTIC_WRITE or transaction fails if lock cannot be upgraded.

The following coarsely describes the scenario of obtaining the lock at the beginning of the transaction:

entityManager.getTransaction().begin();
r = entityManager.find(Route.class, r.getPrimaryKey(), 
      LockModeType.PESSIMISTIC_READ);
//from this moment on we can safely read r again and expect no changes
r.setRoute(txtRoute.getText());
entityManager.persist(r);
//When changes are flushed to database, provider must convert lock to 
//PESSIMISTIC_WRITE, which can fail if concurrent update
entityManager.getTransaction().commit();

Often databases do not have separate support for pessimistic read locks, so you are actually holding a write lock on the row from the PESSIMISTIC_READ onwards. Also, using PESSIMISTIC_READ makes sense only if no changes to the locked row are expected. In the case above, changes are always made, so using PESSIMISTIC_WRITE from the beginning is reasonable, because it saves you from the risk of a failed lock upgrade on a concurrent update.
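If the entity is already loaded, the same effect can be had by upgrading the lock explicitly with entityManager.lock() rather than passing the mode to find(); which to prefer is largely a style choice. A sketch, reusing the code from the question:

```java
entityManager.getTransaction().begin();
r = entityManager.find(Route.class, r.getPrimaryKey());
// Acquire the write lock before modifying. lock() must be called inside an
// active transaction, which is why calling it after commit() failed with
// "No transaction is currently active".
entityManager.lock(r, LockModeType.PESSIMISTIC_WRITE);
r.setRoute(txtRoute.getText());
entityManager.getTransaction().commit();
```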

In many cases it also makes sense to use optimistic instead of pessimistic locking. Good examples and some comments about choosing between locking strategies can be found in: Locking and Concurrency in Java Persistence 2.0

Upvotes: 1
