stack_user

Reputation: 595

Has the transaction behavior on conflict changed in Firestore in Datastore mode?

I created a new Google Cloud Platform project and a Datastore database.

The database was created as "Firestore in Datastore mode".

However, I think Firestore in Datastore mode and the old Datastore behave differently when a conflict occurs, e.g. in the following case:

procA: -> enter transaction -> get -> put -----------------> exit transaction
procB: -----> enter transaction -> get -> put -> exit transaction

Old Datastore: procB's transaction commits immediately; procA's commit then fails with a concurrency error and is retried.

Firestore in Datastore mode: procB blocks until procA's transaction finishes, then both commit.

Is this behavior by design? I cannot find it described in the Google Cloud Platform documentation.

Upvotes: 2

Views: 441

Answers (3)

Tijmen Roberti

Reputation: 41

The behavior you describe is caused by the chosen concurrency mode for Firestore in Datastore mode. The default mode is Pessimistic concurrency for newly created databases. From the concurrency mode documentation:

Pessimistic

Read-write transactions use reader/writer locks to enforce isolation and serializability. When two or more concurrent read-write transactions read or write the same data, the lock held by one transaction can delay the other transactions. If your transaction does not require any writes, you can improve performance and avoid contention with other transactions by using a read-only transaction.

To get back the 'old' Datastore behavior, choose "Optimistic" concurrency instead (link to command). This makes the faster transaction win and removes the blocking behavior.
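The difference between the two modes can be simulated locally (purely illustrative, not the real Datastore implementation): pessimistic concurrency is modeled as a lock held for the whole transaction, so an overlapping transaction waits; optimistic concurrency records a version at read time and fails the commit if the version has moved on.

```python
import threading
import time


class PessimisticStore:
    """Toy model of pessimistic concurrency: a lock held for the whole
    transaction makes any overlapping transaction wait (no conflict error)."""

    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def transact(self, work_seconds):
        with self._lock:                 # lock held from get to put
            v = self.value               # get
            time.sleep(work_seconds)     # simulated transaction body
            self.value = v + 1           # put
        return True                      # always commits, possibly after waiting


class OptimisticStore:
    """Toy model of optimistic concurrency: no lock during the body; the
    commit fails if another transaction committed in the meantime."""

    def __init__(self):
        self.value = 0
        self._version = 0
        self._commit_lock = threading.Lock()

    def transact(self, work_seconds):
        v, read_version = self.value, self._version   # get (records version)
        time.sleep(work_seconds)                      # simulated transaction body
        with self._commit_lock:                       # put (commit-time check)
            if self._version != read_version:
                return False                          # conflict: faster txn won
            self.value, self._version = v + 1, self._version + 1
            return True
```

Running a long "procA" and a short "procB" concurrently against each store reproduces the two outcomes from the question: with `PessimisticStore` both commit (procB just waits), while with `OptimisticStore` procB commits and procA's commit fails.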

Upvotes: 0

Dan Cornilescu

Reputation: 39814

I've been giving it some thought and I think the change may actually be intentional.

In the old behaviour that you describe, the shorter transaction is successful even if it starts after the longer one, preempting the longer one and causing it to fail and be retried. Effectively this gives priority to shorter transactions.

But imagine a peak of activity with a bunch of shorter transactions: they keep preempting the longer one(s), which keep being retried until they eventually reach the maximum retry limit and fail permanently, increasing datastore contention in the process due to the retries. I actually hit such a scenario in my transaction-heavy app and had to adjust my algorithms to work around it.

By contrast, the new behaviour gives all transactions a fair chance of success regardless of their duration or the level of activity; there is no priority handling. True, this comes at a price: shorter transactions that start after longer ones and overlap them will take longer overall. IMHO the new behaviour is preferable to the old one.
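The permanent-failure mode described above can be sketched with a generic retry wrapper (the names here are made up for illustration; the real client libraries perform their own retries and raise their own conflict errors):

```python
class TransactionConflict(Exception):
    """Stand-in for the error raised when a commit is preempted by
    another transaction."""


def run_in_transaction(attempt, max_retries=3):
    """Run `attempt` until it commits or the retry budget is exhausted.

    Each retry re-executes the whole transaction body, which itself adds
    load and contention, as described above.
    """
    last_error = None
    for _ in range(max_retries + 1):
        try:
            return attempt()
        except TransactionConflict as exc:
            last_error = exc        # preempted: try the whole body again
    raise last_error                # permanent failure after max retries
```

A long transaction that is preempted on every attempt by a stream of shorter ones fails permanently once the budget is spent, which is exactly the contention spiral described above.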

Upvotes: 2

gso_gabriel

Reputation: 4660

I recommend taking a look at the Transactions and batched writes documentation, where you will find more information and examples on how to perform transactions with Firestore.

It also clarifies the get(), set(), update(), and delete() operations.

I can highlight the following points from the documentation, which are very important to keep in mind when working with transactions:

  • Read operations must come before write operations.
  • A function calling a transaction (transaction function) might run more than once if a concurrent edit affects a document that the transaction reads.
  • Transaction functions should not directly modify application state.
  • Transactions will fail when the client is offline.
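The first rule, reads before writes, is enforced by the client library itself. A toy in-memory model of that rule (purely illustrative, not the real Firestore client):

```python
class ReadAfterWriteError(Exception):
    """Raised by this toy model when a read follows a buffered write."""


class ToyTransaction:
    """Illustrative in-memory model of the 'reads before writes' rule."""

    def __init__(self, store):
        self._store = store     # a plain dict standing in for the database
        self._writes = {}       # writes are buffered until commit

    def get(self, key):
        if self._writes:
            raise ReadAfterWriteError("all reads must come before any write")
        return self._store.get(key)

    def update(self, key, value):
        self._writes[key] = value          # buffered, not applied immediately

    def commit(self):
        self._store.update(self._writes)   # all writes applied atomically here
        self._writes = {}
```

Reading a value, updating it, and committing works; attempting a get() after an update() in the same transaction raises, just as the real client rejects reads issued after writes.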

Let me know if the information helped you!

Upvotes: -1
