Reputation: 33
I have 3 tables:
CREATE TABLE `t_event` (
`id` int(10) NOT NULL auto_increment,
`title` varchar(100) NOT NULL,
`kind` int(10) NOT NULL,
`type` int(10) NOT NULL,
`short_desc` varchar(500) default NULL,
`long_desc` varchar(1500) default NULL,
`location` int(10) NOT NULL,
`price` decimal(11,0) NOT NULL,
`currency` int(11) NOT NULL default '1',
`remark_price` varchar(250) default NULL,
`remark_prerequisite` varchar(250) default NULL,
`date_start` date NOT NULL,
`date_end` date default NULL,
`date_remark` varchar(300) default NULL,
`time_start` time default NULL,
`time_end` time default NULL,
`remark_time` varchar(50) default NULL,
`leader` int(50) NOT NULL,
`leader2` int(100) NOT NULL,
`eve_contact_name` varchar(50) default NULL,
`eve_contact_phone` varchar(50) default NULL,
`eve_contact_email` varchar(50) default NULL,
`eve_contact_url` varchar(150) default NULL,
`eve_image_path` varchar(250) default NULL,
`provider` int(10) default NULL,
`timestamp` datetime NOT NULL,
`last_change` datetime NOT NULL default '0000-00-00 00:00:00',
`quality` int(10) default NULL,
`min_number` int(10) NOT NULL,
`max_number` int(10) NOT NULL,
`active_for_reservation` tinyint(1) NOT NULL,
`cancellation_day1` int(10) NOT NULL,
`cancellation_day2` int(10) NOT NULL,
`cancellation_fee1` varchar(255) NOT NULL,
`cancellation_fee2` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
KEY `FK_t_event_t_event_kind` (`kind`),
KEY `FK_t_event_t_event_type` (`type`),
KEY `FK_t_event_t_location` (`location`),
KEY `FK_t_event_t_currency` (`currency`),
KEY `FK_t_event_t_leader` (`leader`),
KEY `FK_t_event_t_provider` (`provider`),
KEY `FK_t_event_t_quality` (`quality`),
CONSTRAINT `FK_t_event_t_currency` FOREIGN KEY (`currency`) REFERENCES `t_currency` (`id`),
CONSTRAINT `FK_t_event_t_event_kind` FOREIGN KEY (`kind`) REFERENCES `t_event_kind` (`id`),
CONSTRAINT `FK_t_event_t_event_type` FOREIGN KEY (`type`) REFERENCES `t_event_type` (`id`),
CONSTRAINT `FK_t_event_t_leader` FOREIGN KEY (`leader`) REFERENCES `t_leader` (`id`),
CONSTRAINT `FK_t_event_t_location` FOREIGN KEY (`location`) REFERENCES `t_location` (`id`),
CONSTRAINT `FK_t_event_t_provider` FOREIGN KEY (`provider`) REFERENCES `t_provider` (`id`),
CONSTRAINT `FK_t_event_t_quality` FOREIGN KEY (`quality`) REFERENCES `t_quality` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=8432 DEFAULT CHARSET=latin1
CREATE TABLE `t_location` (
`id` int(10) NOT NULL auto_increment,
`loc_name` varchar(50) NOT NULL,
`loc_detail` varchar(50) default NULL,
`loc_adress1` varchar(50) NOT NULL,
`loc_adress2` varchar(50) default NULL,
`loc_country` int(50) NOT NULL default '1',
`loc_zip` varchar(50) NOT NULL,
`loc_loc` varchar(50) NOT NULL,
`loc_shortdesc` varchar(250) default NULL,
`loc_contact_name` varchar(250) default NULL,
`loc_contact_gender` int(10) default NULL,
`loc_contact_phone` varchar(250) default NULL,
`loc_contact_email` varchar(250) default NULL,
`loc_contact_url` varchar(250) default NULL,
`loc_image_path` varchar(250) default NULL,
`latitude` varchar(100) default NULL,
`longitude` varchar(100) default NULL,
`created` datetime NOT NULL,
`last_change` datetime NOT NULL default '0000-00-00 00:00:00',
`provider` int(10) NOT NULL default '1',
PRIMARY KEY (`id`),
UNIQUE KEY `id` USING BTREE (`id`),
KEY `FK_t_location_t_country` (`loc_country`),
KEY `FK_t_location_t_gender` (`loc_contact_gender`),
KEY `FK_t_location_t_provider` (`provider`),
CONSTRAINT `FK_t_location_t_country` FOREIGN KEY (`loc_country`) REFERENCES `t_country`(`id`),
CONSTRAINT `FK_t_location_t_provider` FOREIGN KEY (`provider`) REFERENCES `t_provider` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1287 DEFAULT CHARSET=latin1
CREATE TABLE `t_dates` (
`id` int(10) NOT NULL auto_increment,
`events_id` int(10) NOT NULL,
`events_start_date` date NOT NULL,
`events_end_date` date NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `IND_id` (`id`),
KEY `IND_events_id` (`events_id`),
CONSTRAINT `t_dates_ibfk_1` FOREIGN KEY (`events_id`) REFERENCES `t_event` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=32048 DEFAULT CHARSET=latin1
My query is:
SELECT e.*, I.*, d.*
FROM t_event AS e
INNER JOIN t_location AS I ON I.id = e.location
INNER JOIN t_dates AS d ON d.events_id = e.id
;
This query takes about 90 s to execute and returns 27,727 rows.
The PROFILE command shows that the "sending data" stage accounts for almost all of the execution time. The EXPLAIN output is the following:
+----+-------------+-------+------+-------------------------------+-----------------------+---------+------------+------+-------+
| id | select_type | table | type | possible_keys                 | key                   | key_len | ref        | rows | Extra |
+----+-------------+-------+------+-------------------------------+-----------------------+---------+------------+------+-------+
|  1 | SIMPLE      | I     | ALL  | PRIMARY,id                    | NULL                  | NULL    | NULL       | 1143 |       |
|  1 | SIMPLE      | e     | ref  | PRIMARY,FK_t_event_t_location | FK_t_event_t_location | 4       | wu_db.I.id |    4 |       |
|  1 | SIMPLE      | d     | ref  | IND_events_id                 | IND_events_id         | 4       | wu_db.e.id |    3 |       |
+----+-------------+-------+------+-------------------------------+-----------------------+---------+------------+------+-------+
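For reference, the per-stage timings mentioned above were obtained along these lines; this is just a sketch, and SHOW PROFILE availability depends on the server version:
-- enable profiling for the current session
SET profiling = 1;
-- run the slow query
SELECT e.*, I.*, d.*
FROM t_event AS e
INNER JOIN t_location AS I ON I.id = e.location
INNER JOIN t_dates AS d ON d.events_id = e.id;
-- list the profiled queries, then show the per-stage breakdown of the first one
SHOW PROFILES;
SHOW PROFILE FOR QUERY 1;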
My view is that the large number of columns is responsible for this slowdown, but even when I write "SELECT e.id, I.events_id, d.id" it still takes 16 s.
I think I have to rewrite the query with LIMIT and OFFSET clauses; what do you think?
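For example, a paginated version of the query might look like this (the page size of 50 and the ordering column are just assumptions):
SELECT e.*, I.*, d.*
FROM t_event AS e
INNER JOIN t_location AS I ON I.id = e.location
INNER JOIN t_dates AS d ON d.events_id = e.id
ORDER BY d.events_start_date
LIMIT 50 OFFSET 0;   -- next page: LIMIT 50 OFFSET 50, and so on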
Number of records for each table:
Upvotes: 3
Views: 1578
Reputation: 57388
My view is that the large number of columns is responsible for this slowdown, but even when I write "SELECT e.id, I.events_id, d.id" it still takes 16 s.
I think I have to rewrite the query with LIMIT and OFFSET clauses; what do you think?
I think you're right.
If you could speed up the JOIN by an infinite factor, you would reduce the "select" phase to zero and leave the "sending data" part untouched - that's the other 74 seconds.
In other words, an infinite effort of query optimization would gain you 16 seconds out of 90 - around 18% overall.
If this is a batch query, then the time isn't so important; if it is not, as I believe, then it's really unlikely that someone will want a display, or even a synopsis, of some 27 thousand items.
Apart from a "paging" approach, or if a paging approach turned out not to be practical, or even in addition to a paging approach, you could see whether your application could use some kind of "filter" query (date ranges, location ranges). So I'd study what WHERE conditions might be used to make that selection leaner.
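For instance, restricting the result to a date window and a single country might look like this; the filter columns and values are assumptions, they depend on what the application actually needs:
SELECT e.*, I.*, d.*
FROM t_event AS e
INNER JOIN t_location AS I ON I.id = e.location
INNER JOIN t_dates AS d ON d.events_id = e.id
WHERE d.events_start_date BETWEEN '2013-01-01' AND '2013-03-31'   -- hypothetical date range
  AND I.loc_country = 1;                                          -- hypothetical country filter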
If it is a Web application, you could SELECT only the IDs (the query you already tried, the one taking only 16 s; with a WHERE clause, maybe even less), or as few columns as possible. Let's imagine that now you're displaying a very long page with lots of "forms" holding all the information, e.g.
...
+----------------------------------------+
| Date: .... Place: ...... Notes: .... |
| Leader: .... Cost: .... |
... and so on and so forth ...
+----------------------------------------+
| Date: .... Place: ...... Notes: .... |
... and so on and so forth ...
+----------------------------------------+
...
You could, instead, display only a very basic, minimal set of information, corresponding to the columns you have fetched (a matching listing query is sketched after this mock-up):
...
+----------------------------------------+
| Date: .... Place: ...... <OPEN>|
+----------------------------------------+
| Date: .... Place: ...... <OPEN>|
+----------------------------------------+
| Date: .... Place: ...... <OPEN>|
+----------------------------------------+
| Date: .... Place: ...... <OPEN>|
...
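A listing query along those lines would fetch only the handful of columns actually displayed, for instance (the column choice here is an assumption based on the mock-up):
SELECT e.id,
       d.events_start_date,
       I.loc_name
FROM t_event AS e
INNER JOIN t_location AS I ON I.id = e.location
INNER JOIN t_dates AS d ON d.events_id = e.id
ORDER BY d.events_start_date;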
At this point, the user will be able to quickly browse the list, but almost certainly won't open all those forms, only two or three of them. When the user clicks on OPEN, a jQuery function could issue a very fast AJAX call to the server, supplying the relevant ID; then three separate queries would retrieve all the relevant data in milliseconds.
The data would be json_encode()'d and sent back to jQuery, and the form would "open", displaying all the information in "accordion" fashion:
+----------------------------------------+
| Date: .... Place: ...... <OPEN>|
+----------------------------------------+
| Date: .... Place: ...... <CLOSE>|
| large form with all the information
| ...
| ...
+----------------------------------------+
| Date: .... Place: ...... <OPEN>|
This way you would not need to immediately retrieve all the columns, especially largish columns such as short_desc and long_desc, which can reach two whole KB between them, and yet the user would experience a very fast response.
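The three per-item queries behind that AJAX call might look like this, keyed on the event ID received from the client (the placeholder value 1234 is of course an assumption):
-- the full event record
SELECT * FROM t_event WHERE id = 1234;
-- its location
SELECT l.*
FROM t_location AS l
INNER JOIN t_event AS e ON e.location = l.id
WHERE e.id = 1234;
-- all of its dates
SELECT * FROM t_dates WHERE events_id = 1234;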
Upvotes: 0
Reputation: 125855
Broadly speaking, MySQL can only filter records using one index from each table in a query.
That is, whilst your t_event table has indexes defined on both id and location, only one of those indexes can be used to satisfy your query. You can see this in your EXPLAIN output, which indicates that both the PRIMARY and FK_t_event_t_location keys were identified as possibly useful (with the latter actually selected for use).
Therefore, your join with t_dates, which involves a test on the id column, is being fulfilled with a table scan rather than an index lookup. Again, you can see this from the first row in the EXPLAIN output, which shows type = ALL (table scan) and key = NULL (no index being used).
You should create a composite index on (id, location) for your t_event table:
ALTER TABLE t_event ADD INDEX (id, location);
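After adding the index, re-running EXPLAIN on the original query will show whether the optimizer actually picks it up for the t_event table (a quick sanity check, not a guarantee of improvement):
EXPLAIN
SELECT e.*, I.*, d.*
FROM t_event AS e
INNER JOIN t_location AS I ON I.id = e.location
INNER JOIN t_dates AS d ON d.events_id = e.id;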
Upvotes: 1