n3o

Reputation: 2873

@Transaction behaviour in Spring JPA Repository

So I have a Spring JPA repository:

@Transactional(readOnly = true)
public interface UsageReportsRepository extends JpaRepository<UsageReports, Long> {

    @Query(value = "select u from UsageReports u where u.username=:username AND u.requestType='USER_ACTIVITY' ORDER BY u.activityEndTime DESC")
    public List<UsageReports> getLastActiveTime(@Param("username") String username, Pageable pageable);

}

And two filters that use this repository.

public class UsageReportingInterceptor implements HandlerInterceptor {

    @Autowired
    private UsageReportsRepository usageReportsRepository;

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        // Do stuff (builds usageReportsBean)
        usageReportsRepository.save(usageReportsBean);
        return true;
    }
}


public class DumpToFile {

    @Autowired
    private UsageReportsRepository usageReportsRepository;

    public boolean dumpToFile() throws Exception {
        // Do stuff
        List<UsageReports> usageReports = usageReportsRepository.findAll();
        usageReportsRepository.deleteAll();
        // Write to file
        return true;
    }
}

I don't feel the need for a service layer, so I call the repository directly from my filters. One filter inserts data using this repository, and the other clears out the data and writes it to a file. Both are independent and can execute in any arbitrary order. I want to make sure that I don't lose any data in the process. Is the @Transactional annotation sufficient for this? If not, what else should I do?

Upvotes: 0

Views: 1160

Answers (1)

Praba

Reputation: 1381

Based on your comment, I understand that the call to dumpToFile is completely random, so it might happen that a save occurs between your findAll and deleteAll.

Before going further, I'd suggest some good reading on what a transaction is and how Spring's transaction mechanism works. I'll try to give a brief overview, but it is in no way exhaustive. A transaction is a logical operation that you perform on a system. I say logical because in one particular transaction I might have to update two records (payment and order tracking, if you want an example). If one of the updates fails for some reason, the other update should fail as well. The whole operation is atomic even though two records are updated, so we call these two operations one transaction.
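
To make that concrete, here is a minimal sketch (CheckoutService, the two repositories and the entity types are all made-up names, purely for illustration):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CheckoutService {

    @Autowired
    private PaymentRepository paymentRepository;

    @Autowired
    private OrderTrackingRepository orderTrackingRepository;

    // Both writes commit together. If either save throws a RuntimeException,
    // Spring rolls the whole transaction back and neither record is persisted.
    @Transactional
    public void completeOrder(Payment payment, OrderTracking tracking) {
        paymentRepository.save(payment);
        orderTrackingRepository.save(tracking);
    }
}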

There are other properties as well [ACID], but a single transaction should be a single logical operation.

In your case, you marked your repository in its entirety as read-only. [There are multiple types of transactions; a read-only one will only let you read data, not modify it.] So I'm not sure your delete will happen at all. But if you change your config to allow deletes, then we have a different problem: locking.
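
On that config change: the usual Spring Data pattern (a sketch; double-check the behaviour on your Spring Data version) is to keep readOnly = true on the interface and redeclare the modifying methods with a plain read-write @Transactional:

@Transactional(readOnly = true)
public interface UsageReportsRepository extends JpaRepository<UsageReports, Long> {

    @Query("select u from UsageReports u where u.username = :username and u.requestType = 'USER_ACTIVITY' order by u.activityEndTime desc")
    List<UsageReports> getLastActiveTime(@Param("username") String username, Pageable pageable);

    // Redeclared so these two calls do not run under the
    // interface-level read-only default.
    @Override
    @Transactional
    <S extends UsageReports> S save(S entity);

    @Override
    @Transactional
    void deleteAll();
}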

A read-only transaction doesn't usually (there are caveats) lock the table you're reading from. If you want the table to stay locked while you do the findAll and the deleteAll, you can wrap these two calls in a single transaction and take a lock (other queries will block until your current transaction completes), do your thing, and commit/rollback. The commit/rollback releases the lock, and the other queries continue to execute.
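
Here's a minimal sketch of that single-transaction approach (findAllLocked and UsageReportsDumpService are made-up names, and it uses a JPA pessimistic lock rather than an explicit table lock):

// Added to the repository: a locking variant of findAll.
// @Lock is org.springframework.data.jpa.repository.Lock,
// LockModeType is javax.persistence.LockModeType.
@Lock(LockModeType.PESSIMISTIC_WRITE)
@Query("select u from UsageReports u")
List<UsageReports> findAllLocked();

// In a small service class:
@Service
public class UsageReportsDumpService {

    @Autowired
    private UsageReportsRepository usageReportsRepository;

    // The read and the delete run in ONE transaction, and the lock on the
    // rows that were read is held until commit/rollback.
    @Transactional
    public List<UsageReports> drainForDump() {
        List<UsageReports> snapshot = usageReportsRepository.findAllLocked();
        // Delete only what was read: a save that slips in after the read
        // survives for the next dump instead of being silently wiped out.
        usageReportsRepository.deleteAll(snapshot);
        return snapshot;
    }
}

Note that row-level pessimistic locks don't necessarily stop brand-new inserts (that depends on your database's locking behaviour), which is one reason the sketch deletes only the snapshot it read instead of calling the blanket deleteAll().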

This is just one way of doing it. Another way is to synchronize access to the repository and do the locking on the Java side (though I wouldn't recommend it). There could be other ways too, but a good design ideally shouldn't lock an entire table, for several reasons.
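
For completeness, the Java way would look roughly like this (UsageReportsGateway is a made-up wrapper; it only works within a single JVM, which is one of the reasons I wouldn't recommend it):

public class UsageReportsGateway {

    private final Object lock = new Object();

    @Autowired
    private UsageReportsRepository usageReportsRepository;

    public void save(UsageReports report) {
        synchronized (lock) {
            usageReportsRepository.save(report);
        }
    }

    // The read-and-delete pair can never interleave with a save,
    // because every caller goes through the same lock.
    public List<UsageReports> drain() {
        synchronized (lock) {
            List<UsageReports> all = usageReportsRepository.findAll();
            usageReportsRepository.deleteAll(all);
            return all;
        }
    }
}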

Sketches aside, my main aim here is to get you to learn more about transactions and table locking, and to think about alternate designs for doing this.

Upvotes: 1
