Reputation: 2758
I have a query that is expected to return ~500,000 elements, which then have to be post-processed. The elements are loaded by a Spring Boot app via JPA/Hibernate. To increase the overall speed of the operation I use getResultStream instead of getResultList.

Still, the operation seems too slow. I experimented with the Hibernate fetch size, which should be applicable here. In my application.yml, the fetch size is set as follows:
```yaml
spring:
  jpa:
    properties:
      hibernate:
        jdbc:
          fetch_size: ...
```
When I set the logger org.hibernate.cfg to debug, I can see that the values I set are printed out. However, they seem to have no effect whatsoever: whether the fetch size is set to 1, 10, or 2000, the execution time of the code never differs.
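For reference, a minimal sketch of how I stream the results; the entity name and query are placeholders, and the fetch size can also be hinted per query instead of globally:

```java
import jakarta.persistence.EntityManager; // javax.persistence on older Spring Boot versions
import java.util.stream.Stream;

// Sketch only: MyEntity is a placeholder.
void streamAll(EntityManager entityManager) {
    try (Stream<MyEntity> results = entityManager
            .createQuery("select e from MyEntity e", MyEntity.class)
            // Same setting as hibernate.jdbc.fetch_size, scoped to this query.
            .setHint("org.hibernate.fetchSize", 2000)
            .getResultStream()) {
        results.forEach(e -> {
            // post-process e here
        });
    }
}
```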
What am I doing wrong here?
Upvotes: 0
Views: 3256
Reputation: 2758
So, we just learned the hard way that SQL Server ignores the fetch size in the default SELECTMETHOD=DIRECT mode and simply always transfers the whole ResultSet as is. Therefore, any change to the fetch size has no effect against SQL Server unless you change the SELECTMETHOD to CURSOR (which slows your queries down a lot).

My issue was therefore not with Hibernate but with the underlying database.
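For illustration: the select method is a JDBC connection property, so it can be switched in the datasource URL. A minimal sketch, assuming the Microsoft SQL Server JDBC driver and placeholder host/database names:

```yaml
spring:
  datasource:
    # selectMethod=cursor makes the driver honor the fetch size,
    # at the cost of server-side cursor round trips per fetch.
    url: jdbc:sqlserver://localhost:1433;databaseName=mydb;selectMethod=cursor
```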
Upvotes: 0
Reputation: 1
In the entity, use an assigned identifier generator, since MySQL IDENTITY causes insert batching to be disabled. If you have an auto-increment id, batching will not work. I used @Id and provided the UUID myself, and batching worked.
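A minimal sketch of such an entity, with a hypothetical class name; the id is assigned by the application rather than generated by the database:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import java.util.UUID;

@Entity
public class MyEntity {

    // Application-assigned id: no @GeneratedValue with IDENTITY, so
    // Hibernate does not need the generated key back from the database
    // and can batch the inserts.
    @Id
    private UUID id = UUID.randomUUID();

    // ... other fields, getters and setters
}
```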
Upvotes: 0
Reputation: 16400
Maybe you are simply not doing anything wrong, and it just takes the database that long to fetch all the data from disk and send it to your application.

One thing you can try is to use a StatelessSession, or to clear the persistence context after e.g. every 20 elements by calling entityManager.clear(); see the sketch below. Maybe the slowdown is due to the persistence context filling up with all those elements and causing memory pressure.
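A minimal sketch of the periodic clear while streaming, assuming a hypothetical MyEntity and an illustrative interval of 20:

```java
import jakarta.persistence.EntityManager;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Stream;

// Sketch only: MyEntity is a placeholder.
void processAll(EntityManager entityManager) {
    AtomicLong processed = new AtomicLong();
    try (Stream<MyEntity> results = entityManager
            .createQuery("select e from MyEntity e", MyEntity.class)
            .getResultStream()) {
        results.forEach(e -> {
            // post-process e here, then periodically detach everything
            // loaded so far to keep the persistence context small.
            if (processed.incrementAndGet() % 20 == 0) {
                entityManager.clear();
            }
        });
    }
}
```

A StatelessSession (obtainable e.g. via entityManagerFactory.unwrap(SessionFactory.class).openStatelessSession()) avoids the persistence context entirely, so no manual clearing is needed there.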
Without any further information, it's impossible to help you though.
Upvotes: 1