Reputation: 33
I have a MySQL DB running on a raspi4 2GB on a USB>SATA SSD with python connector interface. When I run the query
UPDATE sessions
SET used = 1, date_created = %s, places = %s, loc_query = %s,
resp_loc = %s, resp_json = %s, ip_address = %s, full_q_string = %s
WHERE used IS NULL
LIMIT 1
It works fast (~0.1 sec), but as it finds rows lower in the list it gets much slower (2-3+ seconds; by row ~9000 it's 4-5 seconds). I thought this was from the DB filling up, but when I go back and free up rows 1-10 by NULLing all those filled-in columns and setting "used" = NULL, those first 10 rows are lightning fast even with the other 9000 rows populated; then when it starts writing to row 9001 it is slow again. I tried writing directly to a higher row:
UPDATE sessions
SET used = 1, date_created = %s, places = %s, loc_query = %s,
resp_loc = %s, resp_json = %s, ip_address = %s, full_q_string = %s
WHERE placeholder_id = 9002
That works fast, but I need the logic/idea in the first query to work. Finally, I tried restarting the Pi to flush RAM, thinking it was some caching thing, but had a similar experience. Any advice would be appreciated :)
MySQL (mysql Ver 8.0.20-0ubuntu0.20.04.1 for Linux on aarch64 (Ubuntu))
Table columns:
placeholder_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
session_key CHAR(5),
date_polled DATETIME,
date_created DATETIME(6),
full_q_string VARCHAR(2550),
ip_address VARCHAR(255),
places JSON,
loc_query LONGTEXT,
resp_loc LONGTEXT,
resp_json JSON
Upvotes: 2
Views: 149
Reputation: 1269633
The problem is that MySQL has to find a row where `used IS NULL`. Without an index, that is a table scan, and it takes longer and longer as more rows are populated before the first NULL one.

The solution is to create an index on `used`:

create index idx_sessions_used on sessions(used);

MySQL should be able to use the index to find a NULL row quite quickly, speeding up your updates (with a little bit of overhead for maintaining the index on inserts and updates).
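A minimal sketch of the effect, using Python's built-in sqlite3 as a stand-in for MySQL (the table name and index name mirror the question; the behavior of the query planner is analogous, not identical): without an index, `used IS NULL` forces a scan; with one, the engine jumps straight to a NULL entry.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sessions (placeholder_id INTEGER PRIMARY KEY, used INTEGER)")
# Simulate 9000 used rows followed by 100 free ones
cur.executemany("INSERT INTO sessions (used) VALUES (?)",
                [(1,)] * 9000 + [(None,)] * 100)
conn.commit()

# Before indexing: the planner can only scan the whole table
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT placeholder_id FROM sessions "
    "WHERE used IS NULL LIMIT 1"
).fetchall()

cur.execute("CREATE INDEX idx_sessions_used ON sessions(used)")

# After indexing: the planner searches the index instead of scanning
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT placeholder_id FROM sessions "
    "WHERE used IS NULL LIMIT 1"
).fetchall()

print(plan_before)
print(plan_after)
```

The plan detail string switches from a SCAN to a SEARCH using `idx_sessions_used`, which is the same shift that makes the MySQL update stop degrading as the table fills.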
Upvotes: 3