Reputation: 3162
I would like to monitor the performance (duration) of distinct queries. An easy way to do that is to look at the output of HeidiSQL. For example, this simple query:
SELECT COUNT(*) FROM db.patient;
gives me the following output on the HeidiSQL-GUI:
/* Affected rows: 0  Found rows: 1  Warnings: 0
Duration for 1 query: 6.022 sec. */
From this output I copy/paste the duration (6.022), the number of affected rows (0), and the number of found rows (1) into an Excel sheet. I want to do this periodically to keep track of query durations. If a duration gets critical, I want to analyse which part of the code needs to be changed to improve performance.
Since my SQL code is spread across about 10 scripts, each containing 5 to 20 queries, retrieving these three figures manually is very cumbersome. Also, I have to run each query alone to get the figures per query; if I run several queries at once, I only get the figures for the whole run.
My idea is to retrieve the figures from a log file with a script (e.g. Perl or Python). But there is the following problem: since I want this information for each single query, but do not want to run each query alone, I suppose I have to add a query after each statement that produces the figures:
SELECT * INTO OUTFILE 'path/log.txt'
FROM (SELECT COUNT(*) FROM `table`) AS t;
UPDATE table SET var = 1 WHERE x = 'a';
-- (1) query which retrieves the number of affected and found rows and the duration, and stores them in log.txt
INSERT INTO `table` SELECT * FROM stage;
-- (1) query which retrieves the number of affected and found rows and the duration, and stores them in log.txt
Is there something like (1)?
I'm working with 10.2.11-MariaDB on Windows 7 (soon Windows 10).
Thanks for any idea.
Upvotes: 0
Views: 1136
Reputation: 142518
(This does not answer the Question asked, but addresses the "real" question.)
The slowlog, with long_query_time set to 1.0 (or less), is an excellent tool. Then use pt-query-digest to summarize the data. It will, by default, list the "worst" query first.
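If pt-query-digest is not available on Windows, the slowlog can also be parsed with a small script, along the lines the asker suggested. A sketch assuming the standard `# Query_time:` header lines MariaDB writes above each slow-log entry; the embedded sample text stands in for the real log file:

```python
import re

# Matches the statistics line preceding each slow-log entry, e.g.
# "# Query_time: 6.022000  Lock_time: 0.000123 Rows_sent: 1  Rows_examined: 500000"
STATS = re.compile(
    r"# Query_time: (?P<time>[\d.]+)\s+Lock_time: [\d.]+\s+"
    r"Rows_sent: (?P<sent>\d+)\s+Rows_examined: (?P<examined>\d+)"
)

def parse_slowlog(text):
    """Return one (query_time, rows_sent, rows_examined) tuple per entry."""
    return [
        (float(m.group("time")), int(m.group("sent")), int(m.group("examined")))
        for m in STATS.finditer(text)
    ]

# Sample log text; in practice, read it from the slow query log file.
sample = """\
# Time: 180101 12:00:00
# User@Host: root[root] @ localhost []
# Query_time: 6.022000  Lock_time: 0.000123 Rows_sent: 1  Rows_examined: 500000
SELECT COUNT(*) FROM db.patient;
"""
print(parse_slowlog(sample))  # → [(6.022, 1, 500000)]
```

The tuples can then be written out as CSV for Excel instead of copying figures by hand.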
Upvotes: 1