Sujee

Reputation: 5145

Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1

I get the following Hibernate error. I am able to identify the function that causes the issue, but unfortunately there are several DB calls in that function. I am unable to find the line that causes the issue, since Hibernate flushes the session at the end of the transaction. The error below looks like a generic one; it doesn't even mention which bean causes the issue. Is anyone familiar with this Hibernate error?

org.hibernate.StaleStateException: Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
        at org.hibernate.jdbc.BatchingBatcher.checkRowCount(BatchingBatcher.java:93)
        at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:79)
        at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
        at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
        at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
        at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:142)
        at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
        at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
        at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
        at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:333)
        at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
        at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:584)
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:500)
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:473)
        at org.springframework.transaction.interceptor.TransactionAspectSupport.doCommitTransactionAfterReturning(TransactionAspectSupport.java:267)
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:170)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:176)

Upvotes: 226

Views: 484322

Answers (30)

Rayan Argolo

Reputation: 1

I solved this problem by mapping my UUIDs as:

@Id
@GeneratedValue(generator = "uuid4")
@Column(name = "id", columnDefinition = "binary(16)")
UUID id;

The problem never happened again. Good luck.

Upvotes: 0

Hett

Reputation: 3825

If you are using ClickHouse, it may be related to materialized views: ClickHouse counts affected rows including those in the tables related through a materialized view.

Upvotes: 1

Bijan Zohouri

Reputation: 11

Consider this in a multithreaded context.

When using a Spring Data JPA query method such as the following:

void removeExpiredRecordsByValidfromBefore(Instant removeBefore);

Hibernate first retrieves all matching records and then deletes them one by one. Calling such a method can result in this exception if the retrieved records have already been deleted by another thread.

To prevent this, either isolate the delete requests using an appropriate lock, or use a bulk delete, which removes all the records in a single statement:

@Modifying
@Query("delete from entityName e where e.validfrom < :removeBefore")
void removeBulkExpiredRecordsByValidfromBefore(@Param("removeBefore") Instant removeBefore);

The difference is that there is no retrieval step. The first approach is beneficial when callbacks have to be executed for each deletion.

Upvotes: 1

meggar

Reputation: 88

The problem we hit is that we use PostgreSQL's jsonb with hibernate-types (https://github.com/vladmihalcea/hypersistence-utils), and the jsonb column maps to a List<MyType>, where MyType does not implement equals and hashCode. That makes Hibernate's dirty checking trigger unexpected UPDATE statements after every query.

See the library's best-practices guide and the related issue for more details.

Upvotes: 0

Sinisa Milojevic

Reputation: 1

I was facing this exception, and the issue was that we were trying to delete the parent object before the child object. Reversing the order of deletion in the code resolved the issue.

Upvotes: 0

iwritecomeinmymind

Reputation: 366

I encountered this problem when saving a parent entity together with its child entities. The solution was to use bidirectional mapping in the entity relationships.

Upvotes: 0

beny23

Reputation: 35018

Without code and mappings for your transactions, it'll be next to impossible to investigate the problem.

However, to get a better handle as to what causes the problem, try the following:

  • In your Hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem.
  • Set the log levels for Spring and Hibernate to DEBUG; this will give you a better idea of which line causes the problem.
  • Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
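As a sketch, the first two suggestions could look like this in a properties-based setup (the property names match Hibernate 3, as in the stack trace above; the log4j logger lines are an assumption, so adapt them to whatever logging framework you use):

```properties
# Print every SQL statement Hibernate executes
hibernate.show_sql=true
# Optional: pretty-print the SQL and add comments for readability
hibernate.format_sql=true
hibernate.use_sql_comments=true

# log4j: raise Spring and Hibernate logging to DEBUG
log4j.logger.org.springframework=DEBUG
log4j.logger.org.hibernate=DEBUG
# TRACE on org.hibernate.type also logs the bound JDBC parameters
log4j.logger.org.hibernate.type=TRACE
```

With show_sql enabled, the statement printed immediately before the StaleStateException is usually the one whose row count came back as 0.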

Upvotes: 85

Hany Alshafey

Reputation: 11

This thread is a bit old, but I thought I should drop my fix here in case it helps someone with the same root cause.

I was migrating a Java Spring Hibernate application from Oracle to PostgreSQL. Along the way I converted an Oracle trigger to PostgreSQL. The trigger fired BEFORE INSERT on a table and set the value of one of the columns (the column was of course marked update=false insert=false in the Hibernate mapping so that the trigger could set its value). When inserting data from the application I got this error: Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1

My mistake was that the trigger function ended with "RETURN NULL", so after the trigger set the column value and control returned to Hibernate for saving, the row was lost because the trigger returned null.

My fix was to change "RETURN NULL" to "RETURN NEW" in the trigger, which keeps the row available after it is altered by the trigger. That is simply what "unexpected row count from update: 0 expected: 1" meant here.
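As an illustration, a corrected PostgreSQL BEFORE INSERT trigger would look roughly like this (the function, table, and column names are made up for the sketch, not taken from the original application; on PostgreSQL versions before 11, use EXECUTE PROCEDURE instead of EXECUTE FUNCTION):

```sql
-- Hypothetical trigger function that populates a column before insert
CREATE OR REPLACE FUNCTION set_generated_column()
RETURNS trigger AS $$
BEGIN
    NEW.generated_value := now();  -- the column mapped with insert=false update=false
    RETURN NEW;   -- RETURN NULL here would silently discard the row,
                  -- so the INSERT reports 0 affected rows to Hibernate
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_generated_column_trg
BEFORE INSERT ON my_table
FOR EACH ROW EXECUTE FUNCTION set_generated_column();
```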

Upvotes: 1

Peter Keller

Reputation: 7636

My two cents.

Problem: With Spring Boot 2.7.1 the H2 database version changed to v2.1.214, which may result in an OptimisticLockException being thrown when using generated UUIDs for id columns, see https://hibernate.atlassian.net/browse/HHH-15373.

Solution: Add columnDefinition="UUID" to the @Column annotation.

E.g., with a primary key definition for an entity like this:

@Id
@GeneratedValue(generator = "UUID")
@GenericGenerator(name = "UUID", strategy = "org.hibernate.id.UUIDGenerator")
@Column(name = COLUMN_UUID, updatable = false, nullable = false)
UUID uUID;

Change the column annotation to:

@Column(name = COLUMN_UUID, updatable = false, nullable = false, columnDefinition="UUID")

Upvotes: 10

Amirhosein Al

Reputation: 500

I encountered this problem when using Hibernate with MySQL and upon changing the Id data type from UUID to String, the problem was solved. I don't know the reason though.

Upvotes: 0

Ravi

Reputation: 338

In my case, I had **two** similar primary keys which I mistakenly created through MySQL workbench.

In case you see this kind of scenario, try deleting one from the MySQL workbench.


Upvotes: 0

Kiran Maharjan

Reputation: 11

I solved it: there was no primary key on my Id column in the table. Once I created one, the problem went away. There were also duplicate ids in the table, which I deleted as well.

Upvotes: 1

Sumit Singh

Reputation: 229

It happens when you try to delete an object and then you try to update the same object. Use this after delete:

session.clear();

Upvotes: 4

user2770352

Reputation: 97

Removing cascade=CascadeType.ALL solved it for me.

Upvotes: 0

Blas Albir

Reputation: 1

Almost always the cause of this error is that you are sending one or more erroneous primary keys. Verify that your variables are clean before loading them with the new values to perform the update.

Upvotes: -1

starry

Reputation: 1

In my case, it was because a trigger fired before the INSERT (it actually exists to split a big table into several tables by timestamp) and then returned null. So I hit this problem when I used the Spring Boot JPA save() function.

In addition to changing the trigger to SET NOCOUNT ON; as Mr. TA mentioned above, the solution can also be to use a native query:

insert into table values(nextval('table_id_seq'), value1)

Upvotes: 0

Vlad Mihalcea

Reputation: 153770

Hibernate 5.4.1 and HHH-12878 issue

Prior to Hibernate 5.4.1, the optimistic locking failure exceptions (e.g., StaleStateException or OptimisticLockException) didn't include the failing statement.

The HHH-12878 issue was created to improve Hibernate so that when throwing an optimistic locking exception, the JDBC PreparedStatement implementation is logged as well:

if ( expectedRowCount > rowCount ) {
    throw new StaleStateException(
            "Batch update returned unexpected row count from update ["
                    + batchPosition + "]; actual row count: " + rowCount
                    + "; expected: " + expectedRowCount + "; statement executed: "
                    + statement
    );
}

Testing Time

I created the BatchingOptimisticLockingTest in my High-Performance Java Persistence GitHub repository to demonstrate how the new behavior works.

First, we will define a Post entity that defines a @Version property, therefore enabling the implicit optimistic locking mechanism:

@Entity(name = "Post")
@Table(name = "post")
public class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    private String title;

    @Version
    private short version;

    public Long getId() {
        return id;
    }

    public Post setId(Long id) {
        this.id = id;
        return this;
    }

    public String getTitle() {
        return title;
    }

    public Post setTitle(String title) {
        this.title = title;
        return this;
    }

    public short getVersion() {
        return version;
    }
}

We will enable the JDBC batching using the following 3 configuration properties:

properties.put("hibernate.jdbc.batch_size", "5");
properties.put("hibernate.order_inserts", "true");
properties.put("hibernate.order_updates", "true");

We are going to create 3 Post entities:

doInJPA(entityManager -> {
    for (int i = 1; i <= 3; i++) {
        entityManager.persist(
            new Post()
                .setTitle(String.format("Post no. %d", i))
        );
    }
});

And Hibernate will execute a JDBC batch insert:

SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')

Query: [
    INSERT INTO post (title, version, id) 
    VALUES (?, ?, ?)
], 
Params:[
    (Post no. 1, 0, 1), 
    (Post no. 2, 0, 2), 
    (Post no. 3, 0, 3)
]

So, we know that JDBC batching works just fine.

Now, let's replicate the optimistic locking issue:

doInJPA(entityManager -> {
    List<Post> posts = entityManager.createQuery("""
        select p 
        from Post p
        """, Post.class)
    .getResultList();

    posts.forEach(
        post -> post.setTitle(
            post.getTitle() + " - 2nd edition"
        )
    );

    executeSync(
        () -> doInJPA(_entityManager -> {
            Post post = _entityManager.createQuery("""
                select p 
                from Post p
                order by p.id
                """, Post.class)
            .setMaxResults(1)
            .getSingleResult();

            post.setTitle(post.getTitle() + " - corrected");
        })
    );
});

The first transaction selects all Post entities and modifies the title properties.

However, before the first EntityManager is flushed, we are going to execute a second transaction using the executeSync method.

The second transaction modifies the first Post, so its version is going to be incremented:

Query:[
    UPDATE 
        post 
    SET 
        title = ?, 
        version = ? 
    WHERE 
        id = ? AND 
        version = ?
], 
Params:[
    ('Post no. 1 - corrected', 1, 1, 0)
]

Now, when the first transaction tries to flush the EntityManager, we will get the OptimisticLockException:

Query:[
    UPDATE 
        post 
    SET 
        title = ?, 
        version = ? 
    WHERE 
        id = ? AND 
        version = ?
], 
Params:[
    ('Post no. 1 - 2nd edition', 1, 1, 0), 
    ('Post no. 2 - 2nd edition', 1, 2, 0), 
    ('Post no. 3 - 2nd edition', 1, 3, 0)
]

o.h.e.j.b.i.AbstractBatchImpl - HHH000010: On release of batch it still contained JDBC statements

o.h.e.j.b.i.BatchingBatch - HHH000315: Exception executing batch [
    org.hibernate.StaleStateException: 
    Batch update returned unexpected row count from update [0]; 
    actual row count: 0; 
    expected: 1; 
    statement executed: 
        PgPreparedStatement [
            update post set title='Post no. 3 - 2nd edition', version=1 where id=3 and version=0
        ]
], 
SQL: update post set title=?, version=? where id=? and version=?

So, you need to upgrade to Hibernate 5.4.1 or newer to benefit from this improvement.

Upvotes: 21

Amit Mishra

Reputation: 510

This problem mainly occurs when you try to save or update an object that has already been fetched into memory by a running session. If you've fetched the object from the session and you're trying to update it in the database, this exception may be thrown.

I used session.evict() to remove the object from Hibernate's first-level cache first; or, if you don't want to risk losing data, make another temporary object to hold the data.

    try {
        if (!session.isOpen()) {
            session = EmployeyDao.getSessionFactory().openSession();
        }
        tx = session.beginTransaction();

        session.evict(e);
        session.saveOrUpdate(e);
        tx.commit();
        EmployeyDao.shutDown(session);
    }
    catch (HibernateException exc) {
        exc.printStackTrace();
        tx.rollback();
    }

Upvotes: 2

Sandeep Khantwal

Reputation: 381

In our case we finally found out the root cause of StaleStateException.

In fact we were deleting the same row twice in a single Hibernate session. Earlier we were using the ojdbc6 library, and this was OK in that version.

But when we upgraded to ojdbc7 or ojdbc8, deleting records twice threw this exception. There was a bug in our code where we deleted twice, but it was not evident with ojdbc6.

We were able to reproduce with this piece of code:

Detail detail = getDetail(Long.valueOf(1396451));
session.delete(detail);
session.flush();
session.delete(detail);
session.flush();

On the first flush Hibernate applies the changes to the database. During the second flush Hibernate compares the session's object with the actual table's record, cannot find one, and hence throws the exception.

Upvotes: 1

Durja

Reputation: 667

A few ways I debugged this error:

  1. As suggested in the accepted answer, turn on show_sql.
  2. I found there was some issue with setting up the id in the Hibernate SQL.
  3. Found that I was missing @GeneratedValue(strategy=GenerationType.IDENTITY).

Upvotes: 0

Bryan Pugh

Reputation: 123

Another way to get this error is if you have a null item in a collection.

Upvotes: 5

Sergio Lema

Reputation: 1629

In my case, I came to this exception in two similar cases:

  • In a method annotated with @Transactional I had a call to another service (with long response times). The method updates some properties of the entity (after the method, the entity still exists in the database). If the user invokes the method twice (thinking it didn't work the first time), then on exiting the transactional method the second time, Hibernate tries to update an entity whose state has already changed since the beginning of the transaction. Hibernate looks for the entity in one state, finds it already changed by the first request, and throws the exception because it can't update the entity. It's like a merge conflict in Git.
  • I had automatic requests (for monitoring the platform) which update an entity (with a manual rollback a few seconds later). But this platform was already being used by a test team. When a tester performed a test on the same entity as the automatic requests (within the same hundredth of a millisecond), I got the exception. As in the previous case, on exiting the second transaction, the previously fetched entity had already changed.

Conclusion: in my case, it wasn't a problem that could be found in the code. This exception is thrown when Hibernate finds that the entity it first fetched from the database changed during the current transaction; it can't flush the entity to the database because Hibernate doesn't know which version is correct: the one the current transaction fetched at the beginning, or the one already stored in the database.

Solution: to solve the problem, you will have to play with the Hibernate LockMode options to find the one that best fits your requirements.

Upvotes: 31

Marcel

Reputation: 3258

It can also happen when you try to UPDATE a PRIMARY KEY.
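A hypothetical sketch of how that can play out (the table and values are invented for illustration): when the identifier of a managed entity is changed in memory, the flush can end up issuing its UPDATE keyed on the new value, which matches no rows:

```sql
-- The row was inserted with id = 1
INSERT INTO post (id, title) VALUES (1, 'original');

-- After the entity's id field is changed in memory to 2,
-- the flush may issue an UPDATE keyed on the new id:
UPDATE post SET title = 'changed' WHERE id = 2;
-- 0 rows match, but Hibernate expected 1 -> StaleStateException
```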

Upvotes: 8

YumikoTanaka

Reputation: 11

I got the same message. After looking for a code-related cause, it turned out that running the application on my local machine interfered with the dev stage, because they share the same database. Sometimes one server had already deleted an entry while the other was about to do the same.

Upvotes: 0

karan bainade

Reputation: 21

Actually, it happened to me when I didn't store the returned object in a reference variable, like this: ses.get(InsurancePolicy.class, 101); After I stored the object in the entity's reference variable, the problem was solved: policy = (InsurancePolicy) ses.get(InsurancePolicy.class, 101); After that I updated the object and it worked fine.

Upvotes: 0

Amit

Reputation: 1

In my case there was an issue with the database: one of the stored procedures was consuming all the CPU, causing high DB response times. Once it was killed, the issue was resolved.

Upvotes: 0

Ganesh Giri

Reputation: 1161

This happened to me because I was missing the ID declaration in the bean class.

Upvotes: 0

roylac

Reputation: 71

I got this error because I mistakenly mapped the ID column using Id(x => x.Id, "id").GeneratedBy.**Assigned**();

Issue resolved by using Id(x => x.Id, "id").GeneratedBy.**Identity**();

Upvotes: 0

kamel2005

Reputation: 459

I was facing this exception even though Hibernate was otherwise working well. I tried to insert one record manually using pgAdmin, and the issue became clear: the SQL insert query returned 0 rows inserted. There was a trigger function causing this, because it returned null. I only had to change it to return NEW, and the problem was solved.

Hope that helps anybody.

Upvotes: 0

vootla561

Reputation: 11

One case in which this can occur:

SessionFactory sf=new Configuration().configure().buildSessionFactory();
Session session=sf.openSession();

UserDetails user=new UserDetails();

session.beginTransaction();
user.setUserName("update user agian");
user.setUserId(12);
session.saveOrUpdate(user);
session.getTransaction().commit();
System.out.println("user::"+user.getUserName());

sf.close();

Upvotes: 0
