Reputation: 3517
I would like to increment a non-primary key value without the possibility of creating duplicates in the event that two queries check the MAX()
of the non-primary key at the same time. I've found that a good way to accomplish this would be to use InnoDB's locking mechanism.
I have a table like this:
tbl
-----------------------
groupID | msgNum | msg
-----------------------
      1 |      1 | text
      1 |      2 | text
      1 |      3 | text
      2 |      1 | text
      2 |      2 | text
I would like to insert a new row and increment the msgNum for that row. What I'm worried about is that if I use MAX(msgNum) to calculate the next number, two near-simultaneous queries could calculate the same MAX(msgNum) and then insert the same msgNum twice. So I would like to lock the table, but only the minimum necessary: block calculating MAX(msgNum) for a specific groupID and block inserting new rows for that groupID. Ideally, I would like to avoid blocking reads from the table.
A possible solution would be this (SQL Fiddle):
START TRANSACTION;
SELECT * FROM tbl WHERE groupID=1 FOR UPDATE;
INSERT INTO tbl
(groupID,msgNum,msg) VALUES
(1,(SELECT IFNULL(MAX(msgNum),0)+1 FROM (SELECT * FROM tbl WHERE groupID=1) AS a),'text');
COMMIT;
I think this solution should work, but I am not sure, and after testing it I ran into an issue. It is also a difficult thing to test, so I would like to be certain rather than guess. Specifically, I am not sure whether the lock would prevent the INSERT query from starting, and therefore prevent it from calculating MAX(msgNum) until the lock is released.
I did perform an initial test:
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

func runTest(sqlCon *sql.DB) {
	// Start a transaction, lock the rows of the group, insert the next
	// msgNum, then commit.
	_, err := sqlCon.Exec("START TRANSACTION")
	if err != nil {
		fmt.Println(err.Error())
	}
	_, err = sqlCon.Exec("SELECT * FROM tbl WHERE groupID=1 FOR UPDATE")
	if err != nil {
		fmt.Println(err.Error())
	}
	_, err = sqlCon.Exec(
		"INSERT INTO tbl " +
			"(groupID,msgNum,msg) VALUES " +
			"(1,(SELECT IFNULL(MAX(msgNum),0)+1 FROM (SELECT * FROM tbl WHERE groupID=1) AS a),'text')",
	)
	if err != nil {
		fmt.Println(err.Error())
	}
	_, err = sqlCon.Exec("COMMIT")
	if err != nil {
		fmt.Println(err.Error())
	}
}

func main() {
	sqlCon, err := sql.Open("mysql", "user1:password@tcp(127.0.0.1:3306)/Tests")
	if err != nil {
		panic(err.Error())
	}
	sqlCon2, err := sql.Open("mysql", "user1:password@tcp(127.0.0.1:3306)/Tests")
	if err != nil {
		panic(err.Error())
	}
	// Fire off pairs of concurrent transactions; note that main does not
	// wait for the goroutines to finish before exiting.
	for i := 0; i < 40; i++ {
		fmt.Println(i)
		go runTest(sqlCon)
		go runTest(sqlCon2)
	}
}
I got between 7 and 52 rows inserted, with no duplicates, but the test never finished with all 80 rows; instead it reported Error 1213: Deadlock found when trying to get lock; try restarting transaction:
$ go run main.go
0
1
2
3
4
5
6
7
8
9
10
11
12
13
Error 1213: Deadlock found when trying to get lock; try restarting transaction
Error 1213: Deadlock found when trying to get lock; try restarting transaction
Error 1213: Deadlock found when trying to get lock; try restarting transaction
Error 1213: Deadlock found when trying to get lock; try restarting transaction
Error 1213: Deadlock found when trying to get lock; try restarting transaction
Error 1213: Deadlock found when trying to get lock; try restarting transaction
Error 1213: Deadlock found when trying to get lock; try restarting transaction
Error 1213: Deadlock found when trying to get lock; try restarting transaction
Error 1213: Deadlock found when trying to get lock; try restarting transaction
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
Upvotes: 2
Views: 736
Reputation: 222492
I don't think that you need transactions and explicit locking for this to run correctly. I would suggest using a single query that selects the last value, increments it, and inserts the new row all at once.
I would phrase the query as an insert into ... select statement:
insert into tbl(groupID, msgNum, msg)
select 1, coalesce(max(msgNum), 0) + 1, 'text'
from tbl
where groupID = 1
You would run this query with autocommit enabled. The database should be able to handle the concurrency for you by queuing the inserts under the hood, so this should not generate deadlocks.
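Since this is a single self-contained statement, the application code stays simple as well. As an illustration, here is a minimal Go sketch (it reuses the driver and DSN from the question's test program, and the groupID/msg values are only placeholders):
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	// DSN reused from the question's test program.
	db, err := sql.Open("mysql", "user1:password@tcp(127.0.0.1:3306)/Tests")
	if err != nil {
		panic(err.Error())
	}
	defer db.Close()

	// One statement executed with autocommit: the next msgNum is computed
	// and inserted in the same query.
	res, err := db.Exec(
		"INSERT INTO tbl (groupID, msgNum, msg) "+
			"SELECT ?, COALESCE(MAX(msgNum), 0) + 1, ? FROM tbl WHERE groupID = ?",
		1, "text", 1,
	)
	if err != nil {
		panic(err.Error())
	}
	rows, _ := res.RowsAffected()
	fmt.Println("rows inserted:", rows)
}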
As a more general thought: I would not actually try to store msgNum at all. It is derived information that can be computed on the fly when needed. You could just have an auto-incremented primary key on the table, and a view that computes msgNum using window functions (available since MySQL 8.0):
create table tbl (
    id int auto_increment primary key,
    groupID int,
    msg varchar(50)
);

create view myview as
select
    groupID,
    row_number() over(partition by groupID order by id) msgNum,
    msg
from tbl;
You can then use a regular insert statement:
insert into tbl(groupID, msg) values(1, 'text');
Upsides:
- the database manages the primary key for you under the hood
- your insert query is as simple and efficient as it gets (it does not require scanning the table, as the other solution does)
- the view gives you an always up-to-date perspective on your data, including the derived information (msgNum), with zero maintenance cost
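To show the whole flow from Go, here is a small sketch against this schema (the table, view, and column names are the ones defined above; the DSN is the one from the question and the inserted values are only examples):
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "user1:password@tcp(127.0.0.1:3306)/Tests")
	if err != nil {
		panic(err.Error())
	}
	defer db.Close()

	// Plain insert: the auto-increment primary key is assigned by MySQL,
	// so there is no msgNum bookkeeping in application code.
	if _, err := db.Exec("INSERT INTO tbl (groupID, msg) VALUES (?, ?)", 1, "text"); err != nil {
		panic(err.Error())
	}

	// Read the messages of a group; msgNum is computed by the view.
	rows, err := db.Query("SELECT msgNum, msg FROM myview WHERE groupID = ? ORDER BY msgNum", 1)
	if err != nil {
		panic(err.Error())
	}
	defer rows.Close()

	for rows.Next() {
		var msgNum int
		var msg string
		if err := rows.Scan(&msgNum, &msg); err != nil {
			panic(err.Error())
		}
		fmt.Println(msgNum, msg)
	}
}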
Upvotes: 1