Fore

Reputation: 6146

Difference between "read committed" and "repeatable read" in SQL Server

I think the above isolation levels are very alike. Could someone please describe, with some nice examples, what the main difference is?

Upvotes: 396

Views: 263412

Answers (9)

不辞长做岭南人

Reputation: 481

Please note that the repeatable in repeatable read applies to a tuple, not to the entire table. Under the ANSI isolation levels, the phantom read anomaly can still occur, which means reading a table with the same WHERE clause twice may return different result sets. Literally, it's not repeatable.

Upvotes: 1

user2138149

Reputation: 17384

There are other answers here, but they don't give any details about the underlying database architecture, which makes it difficult to understand why the transaction isolation levels function the way they do and what problems they solve.


General Overview of Common Problems in Concurrent Environments

Database systems permit multiple concurrent connections. This leads to the same kinds of problems seen in other concurrent systems. For example, in multithreaded environments, mutexes prevent concurrent access to memory, thus solving the problem of race conditions which can lead to corrupt or invalid data.

In a similar way, because database systems permit concurrent CRUD operations (insert, select, update, delete), concurrent operations by multiple connections can lead to undesirable observed behaviour.

Note that the atomicity of database row operations prevents outright data corruption or inconsistency, so there is always a base level of transaction isolation enforced at all times.

For more information on this, see ACID (Atomicity, Consistency, Isolation, Durability). The short explanation is that, on a per-row basis, operations are atomic. This prevents data corruption by ruling out a situation whereby one connection writes part of a row's data before another connection corrupts that data by partially writing its own data to the same row. (This will be more intuitive to those familiar with multithreaded environments.)

The problem described above is analogous to a problem seen in multithreaded programming, whereby one thread begins writing to a block of memory and another thread partially writes its data to the same block before the first thread is finished. This results in inconsistent data.

It is important to understand the atomic nature of row operations first, because this already provides a base level of protection.

Types of Transaction Isolation Levels

We will look at the following transaction isolation levels, which are available in MariaDB and many other SQL database implementations.

We first need to know what the different isolation levels are:

  • Read uncommitted
  • Read committed
  • Repeatable read
  • Serializable

What problems are solved by the Transaction Isolation Levels?

Before explaining what these different options do, it is important to understand the problems solved by each of them. Here is a list of potential unwanted behaviours.

  • Dirty read
  • Non-repeatable read
  • Phantom read
Dirty Read

Database operations are often grouped together into a transaction. This transaction is then either committed to the database as a group of operations, or a rollback is performed to discard the group of operations.

If one Connection starts a series of operations as a transaction, and a second Connection then begins reading data from the same table(s), the second Connection could read either only the data which has been committed to the database, or also the changes made as part of the open, and as yet uncommitted, transaction.

This defines the difference between read committed and read uncommitted.

This is conceptually unusual, because it often doesn't make much sense to read uncommitted data. The whole point of transactions is to ensure data in the database does not change in this way.

To summarize:

  • Connection A opens a transaction and begins queueing up modifying (write) operations
  • Connection B opens in read-uncommitted mode, and reads data from the database
  • Connection A continues queueing up further modifications
  • If Connection B reads the data again, it will have changed
  • If Connection A performs a rollback, and then Connection B performs another read, the data read will have changed due to the rollback of the uncommitted data
  • This is known as a dirty read
  • This isn't a situation you usually have to worry about, because as a general rule you shouldn't be working in read-uncommitted mode: it makes data appear as if transactions didn't exist, which is weird
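The sequence above can be sketched as a transcript, using a hypothetical accounts table (the session notation, table, and values are illustrative, not tied to a specific DBMS):

session2> SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

session1> BEGIN;
session1> UPDATE accounts SET balance = 0 WHERE id = 1;

session2> SELECT balance FROM accounts WHERE id = 1;
0          -- dirty read: session1's change is visible although uncommitted

session1> ROLLBACK;

session2> SELECT balance FROM accounts WHERE id = 1;
100        -- the original committed value is back after the rollback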
Non-Repeatable Read

Given the above, a Non-Repeatable Read may still occur even if the read operation mode is set to "Read Committed". This mode solves the problem of Dirty Reads because only committed data is read.

The possible write operations are:

  • update
  • insert
  • delete

A Non-Repeatable Read occurs when a read operation reads a set of rows, the operation is then repeated, and the same set of rows (same keys) is returned, but the non-key data has changed.

For example:

  • Connection A may read a set of rows from a table. These rows are selected based on some selection criteria, for example a where clause.
  • Connection B may alter one or more of the rows in this set, as well as others
  • If Connection A repeats the same read query, then the same set of rows will be returned, but data which is not part of the "key" may have changed.
  • The "key" is defined by the selection criteria in the where, or other filter, clause.
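A minimal transcript of the sequence above, with both sessions at read committed and a hypothetical names table (the table and values are illustrative):

session1> BEGIN;
session1> SELECT firstname FROM names WHERE id = 7;
Aaron

session2> UPDATE names SET firstname = 'Bob' WHERE id = 7;
session2> COMMIT;

session1> SELECT firstname FROM names WHERE id = 7;
Bob        -- same key (id = 7), but the non-key data has changed

The second read returns the same row, but its non-key column has changed between the two reads: a Non-Repeatable Read.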
Phantom Reads

Phantom Reads are an extension of the Non-Repeatable read. An insert or delete operation may change the rows returned in the set of returned rows. An insert operation may add new rows to the set of returned rows. A delete operation may do the opposite and remove rows from the set of returned rows.

To summarize:

  • Connection A performs a read operation
  • Connection B performs an insert, or delete, operation
  • Connection A performs the same read operation, resulting in a different set of rows being returned. New rows may appear. Existing rows may disappear. Hence "Phantom" Read.
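A sketch of the phantom, again with a hypothetical names table. This is the behaviour the ANSI definition of repeatable read permits; some implementations (for example InnoDB's snapshot-based consistent reads) largely prevent it in practice:

session1> BEGIN;
session1> SELECT COUNT(*) FROM names WHERE firstname LIKE 'A%';
2

session2> INSERT INTO names (id, firstname) VALUES (8, 'Alice');
session2> COMMIT;

session1> SELECT COUNT(*) FROM names WHERE firstname LIKE 'A%';
3          -- a new "phantom" row has appeared in the result set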

Isolation Levels

Given our understanding of the potential undesirable behaviours, this is what the isolation levels do:

  • Read uncommitted does nothing to prevent any of these problems. However there is still a base level of protection due to atomic row operations.
  • Read committed prevents Dirty Reads only.
  • Repeatable Read prevents Dirty and Non-Repeatable Reads, but not Phantom Reads.
  • Serializable prevents all of the above.
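The level is selected per session or per transaction. The statement below is standard SQL syntax, though the exact scope and timing of its effect varies between implementations:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN;
-- reads inside this transaction now see a stable set of rows
COMMIT;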

Higher levels of isolation require more data in the database to be "locked", preventing concurrent access. This can be undesirable, because if a DBMS is holding a lock over a whole table, then no other connections can modify that data. This might cause other processes which need access to the database to hang.

Read Committed is typically the most sensible choice. It ensures that you see only committed data (data which is persistent, not transient), and it will not cause other processes to hang.

Bibliography

Further reading:

Wikipedia ACID Operations

Isolation Levels in MariaDB

Geeks for Geeks Isolation Levels. (Be aware that some of the information doesn't make sense, for example the explanation of Read Committed, which states that it causes a commit of any uncommitted data which is read. That is not correct. Uncommitted data is only committed by an explicit commit operation.)

Non-Repeatable Read vs Phantom Read

Upvotes: 13

Mo Zaatar

Reputation: 960

Put simply, based on my reading and understanding of this thread and @remus-rusanu's answer, the differences come down to this simple scenario:

There are two transactions A and B. They perform the following operations in this following sequence.

  • Transaction B first reads from table X
  • Transaction A then writes to table X
  • Transaction B then reads again from table X
  • ReadUncommitted: Transaction B can read uncommitted data from Transaction A, and could see different rows based on A's writes. No lock at all
  • ReadCommitted: Transaction B can read ONLY committed data from Transaction A, and could see different rows based only on A's committed writes. Could we call it a simple lock?
  • RepeatableRead: Transaction B will read the same data (rows) whatever Transaction A is doing. But Transaction A can change other rows. Row-level lock
  • Serializable: Transaction B will read the same rows as before, and Transaction A cannot read or write in the table. Table-level lock
  • Snapshot: every transaction has its own copy and works on it. Each one has its own view

Upvotes: 80

Remus Rusanu

Reputation: 294437

Read committed is an isolation level that guarantees that any data read was committed at the moment it is read. It simply restricts the reader from seeing any intermediate, uncommitted, 'dirty' read. It makes no promise whatsoever that if the transaction re-issues the read it will find the same data; data is free to change after it was read.

Repeatable read is a higher isolation level that, in addition to the guarantees of the read committed level, also guarantees that any data read cannot change: if the transaction reads the same data again, it will find the previously read data in place, unchanged, and available to read.

The next isolation level, serializable, makes an even stronger guarantee: in addition to everything repeatable read guarantees, it also guarantees that no new data can be seen by a subsequent read.

Say you have a table T with a column C with one row in it, say it has the value '1'. And consider you have a simple task like the following:

BEGIN TRANSACTION;
SELECT * FROM T;
WAITFOR DELAY '00:01:00'
SELECT * FROM T;
COMMIT;

That is a simple task that issues two reads from table T, with a delay of 1 minute between them.

  • under READ COMMITTED, the second SELECT may return any data. A concurrent transaction may update the record, delete it, or insert new records. The second SELECT will always see the new data.
  • under REPEATABLE READ, the second SELECT is guaranteed to return at least the rows that were returned by the first SELECT, unchanged. New rows may be added by a concurrent transaction in that one minute, but the existing rows cannot be deleted nor changed.
  • under SERIALIZABLE, the second SELECT is guaranteed to see exactly the same rows as the first. No row can be changed nor deleted, nor can new rows be inserted, by a concurrent transaction.

If you follow the logic above you will quickly realize that SERIALIZABLE transactions, while they may make life easy for you, completely block every possible concurrent operation, since they require that nobody can modify, delete, nor insert any row. The default transaction isolation level of the .NET System.Transactions scope is Serializable, and this usually explains the abysmal performance that results.

And finally, there is also the SNAPSHOT isolation level. SNAPSHOT isolation level makes the same guarantees as serializable, but not by requiring that no concurrent transaction can modify the data. Instead, it forces every reader to see its own version of the world (its own 'snapshot'). This makes it very easy to program against as well as very scalable as it does not block concurrent updates. However, that benefit comes with a price: extra server resource consumption.
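As a sketch of how this is enabled in SQL Server (snapshot isolation must first be allowed at the database level before sessions can request it; MyDb is a placeholder name):

ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- then, in a session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;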


Upvotes: 840

ivpavici

Reputation: 1113

I think this picture can also be useful; it helps me as a reference when I want to quickly remember the differences between isolation levels (thanks to kudvenkat on YouTube).

[image: isolation level comparison table]

Upvotes: 6

vkrishna17

Reputation: 936

Trying to explain this with simple diagrams.

Read Committed: In this isolation level, Transaction T1 will read the updated value of X committed by Transaction T2.

[diagram: Read Committed]

Repeatable Read: In this isolation level, Transaction T1 will not see the changes committed by Transaction T2.

[diagram: Repeatable Read]

Upvotes: 23

Sanjeev Dhiman

Reputation: 1199

My observation on the accepted solution.

Under REPEATABLE READ (the MySQL default): if a transaction is open and a SELECT has been fired, another transaction can NOT delete any row belonging to the previous read's result set until the first transaction is committed (in fact, the DELETE statement in the new transaction will just hang). However, a later transaction can delete all rows from the table without any trouble. By the way, a subsequent read in the first transaction will still see the old data until it is committed.

Upvotes: 0

Chris Gillum

Reputation: 15052

This is an old question which has an accepted answer already, but I like to think of these two isolation levels in terms of how they change the locking behavior in SQL Server. This might be helpful for those who are debugging deadlocks, like I was.

READ COMMITTED (default)

Shared locks are taken in the SELECT and then released when the SELECT statement completes. This is how the system can guarantee that there are no dirty reads of uncommitted data. Other transactions can still change the underlying rows after your SELECT completes and before your transaction completes.

REPEATABLE READ

Shared locks are taken in the SELECT and then released only after the transaction completes. This is how the system can guarantee that the values you read will not change during the transaction (because they remain locked until the transaction finishes).
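A sketch of where the locks are taken and released (T and its id column are placeholders):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT * FROM T WHERE id = 1;   -- shared lock taken here, and held
-- a concurrent UPDATE of this row blocks from this point on, until...
COMMIT;                         -- ...the shared lock is released here

Under READ COMMITTED, the same shared lock would already have been released when the SELECT statement completed.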

Upvotes: 30

Hazel_arun

Reputation: 1801

Repeatable Read

The state of the database is maintained from the start of the transaction. If you retrieve a value in session1, then update that value in session2, retrieving it again in session1 will return the same results. Reads are repeatable.

session1> BEGIN;
session1> SELECT firstname FROM names WHERE id = 7;
Aaron

session2> BEGIN;
session2> SELECT firstname FROM names WHERE id = 7;
Aaron
session2> UPDATE names SET firstname = 'Bob' WHERE id = 7;
session2> SELECT firstname FROM names WHERE id = 7;
Bob
session2> COMMIT;

session1> SELECT firstname FROM names WHERE id = 7;
Aaron

Read Committed

Within the context of a transaction, you will always retrieve the most recently committed value. If you retrieve a value in session1, update it in session2, then retrieve it in session1 again, you will get the value as modified in session2. It reads the last committed row.

session1> BEGIN;
session1> SELECT firstname FROM names WHERE id = 7;
Aaron

session2> BEGIN;
session2> SELECT firstname FROM names WHERE id = 7;
Aaron
session2> UPDATE names SET firstname = 'Bob' WHERE id = 7;
session2> SELECT firstname FROM names WHERE id = 7;
Bob
session2> COMMIT;

session1> SELECT firstname FROM names WHERE id = 7;
Bob

Makes sense?

Upvotes: 97
