Reputation: 197
I have a table where I need to join a lot of different tables. The data set is 140,000 records long.
An example looks like this:
SELECT SQL_CALC_FOUND_ROWS e.designation
, e.remark
, e.moment
, e.rpm
, e.cycleK
, c.type
, d.description
, a.PAnr
, b.family
, b.articlenrKronhjul
, b.ratio
, a.oiltype
, a.oiltemp
, a.createdBy
, a.createdDate
FROM testdata_test a
, testdata_gear b
, testdata_damcategory c
, testdata_damage d
, testdata_result e
WHERE a.id = e.test_id
AND e.id = d.result_id
AND a.id = b.test_id
AND c.id = d.category_id
ORDER
BY designation asc
LIMIT 0, 10
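For reference, the same query written with explicit JOIN ... ON syntax (functionally equivalent to the comma-join above, just easier to read; same aliases):

```sql
SELECT SQL_CALC_FOUND_ROWS e.designation, e.remark, e.moment, e.rpm, e.cycleK,
       c.type, d.description, a.PAnr, b.family, b.articlenrKronhjul, b.ratio,
       a.oiltype, a.oiltemp, a.createdBy, a.createdDate
FROM testdata_test a
JOIN testdata_result e        ON e.test_id   = a.id
JOIN testdata_damage d        ON d.result_id = e.id
JOIN testdata_damcategory c   ON c.id        = d.category_id
JOIN testdata_gear b          ON b.test_id   = a.id
ORDER BY e.designation ASC
LIMIT 0, 10
```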
It averages about 1 second per run. How can I speed this up? I have tried adding indexes on some columns, but without much improvement.
Does anyone have any tips?
Edit:
Here is my query plan in regular and JSON format:
+----+-------------+-------+------+---------------------------------------------------+--------------------------+---------+----------------+------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+------+---------------------------------------------------+--------------------------+---------+----------------+------+----------------------------------------------------+
| 1 | SIMPLE | a | ALL | PRIMARY | NULL | NULL | NULL | 10617 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | b | ref | TestData_gear_2e06cda4 | TestData_gear_2e06cda4 | 5 | webappdev.a.id | 1 | NULL |
| 1 | SIMPLE | e | ref | PRIMARY,TestData_result_2e06cda4 | TestData_result_2e06cda4 | 4 | webappdev.a.id | 5 | NULL |
| 1 | SIMPLE | d | ref | TestData_damage_b583a629,TestData_damage_57f06544 | TestData_damage_57f06544 | 4 | webappdev.e.id | 1 | NULL |
| 1 | SIMPLE | c | ALL | PRIMARY | NULL | NULL | NULL | 4 | Using where; Using join buffer (Block Nested Loop)|
+----+-------------+-------+------+---------------------------------------------------+--------------------------+---------+----------------+-------+---------------------------------------------------+
5 rows in set (0.00 sec)
| {
"query_block": {
"select_id": 1,
"ordering_operation": {
"using_temporary_table": true,
"using_filesort": true,
"nested_loop": [
{
"table": {
"table_name": "a",
"access_type": "ALL",
"possible_keys": [
"PRIMARY"
],
"rows": 10617,
"filtered": 100,
"attached_condition": "(`webappdev`.`a`.`id` is not null)"
}
},
{
"table": {
"table_name": "b",
"access_type": "ref",
"possible_keys": [
"TestData_gear_2e06cda4"
],
"key": "TestData_gear_2e06cda4",
"used_key_parts": [
"test_id"
],
"key_length": "5",
"ref": [
"webappdev.a.id"
],
"rows": 1,
"filtered": 100
}
},
{
"table": {
"table_name": "e",
"access_type": "ref",
"possible_keys": [
"PRIMARY",
"TestData_result_2e06cda4"
],
"key": "TestData_result_2e06cda4",
"used_key_parts": [
"test_id"
],
"key_length": "4",
"ref": [
"webappdev.a.id"
],
"rows": 5,
"filtered": 100
}
},
{
"table": {
"table_name": "d",
"access_type": "ref",
"possible_keys": [
"TestData_damage_b583a629",
"TestData_damage_57f06544"
],
"key": "TestData_damage_57f06544",
"used_key_parts": [
"result_id"
],
"key_length": "4",
"ref": [
"webappdev.e.id"
],
"rows": 1,
"filtered": 100
}
},
{
"table": {
"table_name": "c",
"access_type": "ALL",
"possible_keys": [
"PRIMARY"
],
"rows": 4,
"filtered": 75,
"using_join_buffer": "Block Nested Loop",
"attached_condition": "(`webappdev`.`c`.`id` = `webappdev`.`d`.`category_id`)"
}
}
]
}
}
} |
Here are my CREATE TABLES and counts for the tables
mysql> SHOW CREATE TABLE testdata_test;
| testdata_test | CREATE TABLE `testdata_test` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`PAnr` int(11) NOT NULL,
`projectAcc` varchar(20) DEFAULT NULL,
`reportnr` varchar(20) DEFAULT NULL,
`oiltype` varchar(40) DEFAULT NULL,
`oiltemp` int(11) DEFAULT NULL,
`headline1` varchar(40) DEFAULT NULL,
`headline2` varchar(40) DEFAULT NULL,
`testDescription` longtext,
`TestName` varchar(9) DEFAULT NULL,
`createdBy` varchar(6) NOT NULL,
`createdDate` date NOT NULL,
PRIMARY KEY (`id`),
FULLTEXT KEY `test_index_testdescription` (`testDescription`)
) ENGINE=InnoDB AUTO_INCREMENT=14172 DEFAULT CHARSET=utf8 |
1 row in set (0.00 sec)
mysql> SELECT count(*) FROM testdata_test;
+----------+
| count(*) |
+----------+
| 14161 |
+----------+
1 row in set (0.01 sec)
mysql> SHOW CREATE TABLE testdata_gear;
| testdata_gear | CREATE TABLE `testdata_gear` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`family` varchar(20) NOT NULL,
`articlenrKronhjul` int(11) DEFAULT NULL,
`revisionK` varchar(200) DEFAULT NULL,
`articlenrPinjong` int(11) DEFAULT NULL,
`revisionP` varchar(200) DEFAULT NULL,
`ratio` double DEFAULT NULL,
`geardata` varchar(100) DEFAULT NULL,
`remark` varchar(40) DEFAULT NULL,
`test_id` int(11),
PRIMARY KEY (`id`),
KEY `TestData_gear_2e06cda4` (`test_id`),
CONSTRAINT `TestData_gear_test_id_325c2ab6_fk_TestData_test_id` FOREIGN KEY (`test_id`) REFERENCES `testdata_test` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=14167 DEFAULT CHARSET=utf8 |
mysql> SELECT count(*) FROM testdata_gear;
+----------+
| count(*) |
+----------+
| 14157 |
+----------+
1 row in set (0.01 sec)
| testdata_result | CREATE TABLE `testdata_result` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`designation` varchar(20) NOT NULL,
`remark` varchar(200) DEFAULT NULL,
`moment` double DEFAULT NULL,
`rpm` int(11) DEFAULT NULL,
`cycleK` int(11) DEFAULT NULL,
`test_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `test_id_result_index` (`test_id`) USING BTREE,
KEY `result_designation_index` (`designation`) USING BTREE,
CONSTRAINT `TestData_result_test_id_5ed0cbc8_fk_TestData_test_id` FOREIGN KEY (`test_id`) REFERENCES `testdata_test` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=141382 DEFAULT CHARSET=utf8 |
mysql> SELECT count(*) FROM testdata_result;
+----------+
| count(*) |
+----------+
| 141323 |
+----------+
1 row in set (0.03 sec)
mysql> SHOW CREATE TABLE testdata_damage;
| testdata_damage | CREATE TABLE `testdata_damage` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`description` longtext NOT NULL,
`part` varchar(100) DEFAULT NULL,
`timestamp` datetime(6) NOT NULL,
`category_id` int(11),
`result_id` int(11) NOT NULL,
PRIMARY KEY (`id`),
KEY `TestData_damage_b583a629` (`category_id`),
KEY `TestData_damage_57f06544` (`result_id`),
FULLTEXT KEY `damage_index_description` (`description`),
CONSTRAINT `TestData_damage_category_id_215346e4_fk_TestData_damcategory_id` FOREIGN KEY (`category_id`) REFERENCES `testdata_damcategory` (`id`),
CONSTRAINT `TestData_damage_result_id_2fb199b2_fk_TestData_result_id` FOREIGN KEY (`result_id`) REFERENCES `testdata_result` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=141341 DEFAULT CHARSET=utf8 |
mysql> SELECT count(*) FROM testdata_damage;
+----------+
| count(*) |
+----------+
| 141291 |
+----------+
1 row in set (0.04 sec)
mysql> SHOW CREATE TABLE testdata_damcategory;
| testdata_damcategory | CREATE TABLE `testdata_damcategory` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`type` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8 |
mysql> SELECT count(*) FROM testdata_damcategory;
+----------+
| count(*) |
+----------+
| 5 |
+----------+
1 row in set (0.00 sec)
SELECT COUNT(*) FROM (
SELECT e.designation
...
) AS x;
+----------+
| COUNT(*) |
+----------+
| 141298 |
+----------+
1 row in set (2.40 sec)
EDIT: Explain with the index for designation added
| {
"query_block": {
"select_id": 1,
"ordering_operation": {
"using_temporary_table": true,
"using_filesort": true,
"nested_loop": [
{
"table": {
"table_name": "a",
"access_type": "ALL",
"possible_keys": [
"PRIMARY"
],
"rows": 10617,
"filtered": 100,
"attached_condition": "(`webappdev`.`a`.`id` is not null)"
}
},
{
"table": {
"table_name": "b",
"access_type": "ref",
"possible_keys": [
"TestData_gear_2e06cda4",
"test_id_gear_index"
],
"key": "TestData_gear_2e06cda4",
"used_key_parts": [
"test_id"
],
"key_length": "5",
"ref": [
"webappdev.a.id"
],
"rows": 1,
"filtered": 100
}
},
{
"table": {
"table_name": "e",
"access_type": "ref",
"possible_keys": [
"PRIMARY",
"test_id_result_index"
],
"key": "test_id_result_index",
"used_key_parts": [
"test_id"
],
"key_length": "4",
"ref": [
"webappdev.a.id"
],
"rows": 4,
"filtered": 100
}
},
{
"table": {
"table_name": "d",
"access_type": "ref",
"possible_keys": [
"TestData_damage_b583a629",
"TestData_damage_57f06544",
"result_id_damage_index"
],
"key": "TestData_damage_57f06544",
"used_key_parts": [
"result_id"
],
"key_length": "4",
"ref": [
"webappdev.e.id"
],
"rows": 1,
"filtered": 100
}
},
{
"table": {
"table_name": "c",
"access_type": "ALL",
"possible_keys": [
"PRIMARY"
],
"rows": 4,
"filtered": 75,
"using_join_buffer": "Block Nested Loop",
"attached_condition": "(`webappdev`.`c`.`id` = `webappdev`.`d`.`category_id`)"
}
}
]
}
}
} |
FINAL UPDATE: Here is the query that finally did the trick and brought the query time down to below 0.1 s for me.
SELECT e.designation
, e.remark
, e.moment
, e.rpm
, e.cycleK
, c.type
, d.description
, a.PAnr
, b.family
, b.articlenrKronhjul
, b.ratio
, a.oiltype
, a.oiltemp
, a.createdBy
, a.createdDate
FROM (
SELECT e.id, e.designation, e.remark, e.moment, e.rpm, e.cycleK, e.test_id
FROM testdata_result e
ORDER BY moment asc
LIMIT 10
) AS e
JOIN testdata_damage AS d ON d.result_id = e.id
JOIN testdata_test AS a ON a.id = e.test_id
JOIN testdata_gear AS b ON a.id = b.test_id
JOIN testdata_damcategory as c ON c.id = d.category_id;
Upvotes: 0
Views: 114
Reputation: 142550
On second glance, I think SQL_CALC_FOUND_ROWS is preventing any attempt to speed up the query. It requires that all the JOINs be fully done. In my previous answer I was hoping to short-circuit the evaluation after 10 rows; that is not possible here.
So the "right" solution involves changing the application to handle the total count some other way.
If, after removing SQL_CALC_FOUND_ROWS, you still have trouble optimizing, then do this:
SELECT ...
FROM (
    SELECT e.id, e.designation, e.remark, e.moment, e.rpm, e.cycleK
    FROM result AS e
    ORDER BY designation        -- check that there is a plain INDEX on this
    LIMIT 10
) AS e
JOIN damage AS d ON d.result_id = e.id   -- check for INDEX(result_id)
JOIN ... (the rest of the tables other than `e`)
ORDER BY designation            -- yes, this redundancy is necessary
This coding 'trick' moves the LIMIT (or GROUP BY) into a subquery to decrease the number of rows to work with while doing the JOINs, instead of JOINing lots of rows and then tossing most of them.
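If the application still needs a total row count after dropping SQL_CALC_FOUND_ROWS, one option (a sketch, using the question's table names) is a separate count query, which the application can run less often or cache:

```sql
SELECT COUNT(*)
FROM testdata_test a
JOIN testdata_result e      ON e.test_id   = a.id
JOIN testdata_damage d      ON d.result_id = e.id
JOIN testdata_gear b        ON b.test_id   = a.id
JOIN testdata_damcategory c ON c.id        = d.category_id;
```

Note this count query still touches all the joined rows; the gain is that the cheap 10-row page query no longer has to.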
Upvotes: 1
Reputation: 142550
INDEX(designation) might help some.
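A minimal sketch of adding such an index (table and column names from the question; the index name here is arbitrary):

```sql
ALTER TABLE testdata_result
  ADD INDEX idx_result_designation (designation);
```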
How many rows do you get without the LIMIT 10?
Please provide SHOW CREATE TABLE for each table so we can discuss further.
Edit
Never mind. A result set of 141K rows, including a LONGTEXT column, will take time. You are lucky it takes only a second or so. That's a lot of stuff to shovel around; disks only go so fast, and networks only go so fast. It is not the query or the indexes that make it "slow"; it is the size of the request.
What can you do with 141K rows? Surely you do not display all of them to the user. If you are munching on them in some way, surely that takes second(s), meaning the query is only part of the total time.
I suggested an index in order to avoid the filesort, but I did not notice the LONGTEXT, which rules out a regular index. A FULLTEXT index has no ordering. Bottom line: my INDEX(designation) was useless.
There is essentially no way to speed up the query as it stands. If you need to fetch 10 rows at a time, there may be techniques (not involving OFFSET) that work better.
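One such technique is "keyset" (seek) pagination: remember the last (designation, id) pair from the previous page and filter past it instead of using OFFSET. A sketch under the assumption that designation is indexed; the literal values are hypothetical placeholders for the last row of the previous page:

```sql
SELECT e.id, e.designation, e.remark, e.moment
FROM testdata_result e
WHERE (e.designation, e.id) > ('last_seen_designation', 12345)
ORDER BY e.designation, e.id
LIMIT 10;
```

Unlike LIMIT 10 OFFSET N, this does not have to read and discard the first N rows, so later pages stay as fast as the first.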
Upvotes: 1