Reputation: 33
I've seen this question almost answered on a number of threads, but none of them consider the implications for this specific domain:
I am looking to store time series data in MySQL for a large number of gauges (500 and growing), each of which provides a single float value at 5-minute intervals. At its simplest, the structure would be:
- gauge_id
- timestamp
- value
(where gauge_id and timestamp combine as the primary key)
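A minimal sketch of that table (names and types here are illustrative only, not a final schema):
CREATE TABLE gauge_data (
    gauge_id  INT NOT NULL,       -- which gauge the reading belongs to
    ts        DATETIME NOT NULL,  -- reading timestamp (5-minute grid)
    value     FLOAT NOT NULL,     -- the reading itself
    PRIMARY KEY (gauge_id, ts)    -- one row per gauge per timestamp
);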
This means roughly 105,120 rows per gauge per year, all of which need to be stored for 10 or 20 years. For 1,000 gauges we'll be looking at roughly 100 million records per year.
Data is written in batches: typically the values for each channel are aggregated into an XML file from a remote source and read into the database either hourly or daily. So at most there are as many inserts per hour as we have gauges.
Read operations on the data will be per gauge (so no joins between gauges), based on a time range: e.g. get all values for gauge X between two dates. Usually this will also involve some form of aggregation/interpolation, so a user may want to see daily averages, weekly maxima, etc. for arbitrary ranges. Again, a relatively low number of reads, but they need a response from MySQL in under 1 second.
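To make the access pattern concrete, a typical read against the sketch table above would look something like this (the gauge id and dates are placeholders):
SELECT DATE(ts) AS day, AVG(value) AS avg_value
FROM gauge_data
WHERE gauge_id = 42
  AND ts >= '2015-01-01' AND ts < '2015-02-01'
GROUP BY DATE(ts);
With (gauge_id, ts) as the leading key, this should be a single range scan over one gauge's rows.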
At this stage I am leaning toward one table per gauge, rather than partitioning one huge table in MySQL on gauge_id. The logic is that this will make sharding easier down the line, simplify backups, and make it easier to remove/rebuild a gauge's data if there are errors at any stage. The cost is that both write and read operations are a little more complex.
Any thoughts on this?
-------- UPDATE --------
I ran a few tests on my MacBook: 2.4 GHz Core 2 Duo, 4 GB of RAM.
Set up the following table:
CREATE TABLE `test` (
    `channel_id` int(10) NOT NULL,
    `time` datetime NOT NULL,
    `value` int(10) NOT NULL,
    KEY `channel_id` (`channel_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Populated with a stored procedure:
DELIMITER $$
CREATE PROCEDURE `addTestData`(IN ID INT, IN RECORDS INT)
BEGIN
    -- insert RECORDS readings for channel ID at 15-minute intervals
    DECLARE i INT DEFAULT 1;
    DECLARE j DATETIME DEFAULT '1970-01-01 00:00:00';
    WHILE (i <= RECORDS) DO
        INSERT INTO test VALUES (ID, j, 999);
        SET i = i + 1;
        SET j = j + INTERVAL 15 MINUTE;
    END WHILE;
END $$
DELIMITER ;
which I then called to create the first 1 million records:
call addTestData(1,1000000);
insert executed in 47 secs
SELECT * FROM `test` WHERE channel_id = 1 and YEAR(time) = '1970';
executed in 0.0006 secs
SELECT AVG(value) as value, DATE(time) as date FROM `test`
WHERE channel_id = 1 and YEAR(time) = '1970' group by date;
executed in 4.6 secs (MAX, SUM functions executed in same time).
after adding 4 more gauges:
call addTestData(2,1000000);
call addTestData(3,1000000);
call addTestData(4,1000000);
call addTestData(5,1000000);
each insert executed in 47 secs; 78 megabytes used for the table
I ran the same two queries and got exactly the same execution times as with 1 million records in the table (4.6 secs for the bigger query).
So, bar the potential benefits for sharding, backups, and future hardware-driven changes to any individual gauge's table (i.e. multiple readings, a change of data interval), there seemed to be no need to split into multiple tables for the foreseeable future. I did not even try running the query with partitions; there did not seem to be any reason to.
-------- HOWEVER --------
Since 4.6 seconds for a query is not ideal, we obviously need to do some optimising. As a first step I restructured the query:
SELECT
    AVG(value) AS value,
    DATE(time) AS date
FROM
    (SELECT * FROM test
     WHERE channel_id = 1 AND YEAR(time) = '1970') AS temp
GROUP BY date;
Run on a table with 5 million records (over 5 channel_ids), the query takes 4.3 seconds. If I run it on a table with 1 channel and 1 million records, it runs in 0.36 seconds!! Scratching my head a little over this...
Partitioning the table of 5 million records
ALTER TABLE test PARTITION BY HASH(channel_id) PARTITIONS 5;
The compound query above then also completes in 0.35 seconds: the same performance gain.
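As a sanity check that the gain really comes from pruning to a single partition (rather than from caching), EXPLAIN PARTITIONS (available since MySQL 5.1) shows which partitions a query touches:
EXPLAIN PARTITIONS
SELECT AVG(value) AS value, DATE(time) AS date
FROM test
WHERE channel_id = 1 AND YEAR(time) = '1970'
GROUP BY date;
With HASH(channel_id) and an equality condition on channel_id, the partitions column should list just one of the five partitions.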
Upvotes: 3
Views: 3085
Reputation: 10635
For me there is nothing in your scenario that justifies partitioning by gauge. If you have an index on gauge_id, performance will not be an issue, because MySQL will find the rows for a given gauge immediately using that index; after that, the remaining operations are like dealing with a dedicated table for each gauge.
The only situation in which partitioning might be justifiable is if you access very recent gauge data (say the newest 10%) much more often than the old data (the remaining 90%). If that's the case, partitioning into "recent" and "archive" tables might give you a big performance advantage.
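One way to get that split inside a single table is range partitioning on the timestamp column; a rough sketch against the table sketched in the question (the boundary date is a placeholder, and note that MySQL requires every column in the partitioning expression to be part of every unique key):
ALTER TABLE gauge_data
    PARTITION BY RANGE (TO_DAYS(ts)) (
        PARTITION p_archive VALUES LESS THAN (TO_DAYS('2015-01-01')),  -- older readings
        PARTITION p_recent  VALUES LESS THAN MAXVALUE                  -- hot, recent readings
    );
The boundary has to be moved forward from time to time (ALTER TABLE ... REORGANIZE PARTITION), so in practice people often use more, finer-grained range partitions.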
If your operation on individual tables doesn't involve an index, the same operation shouldn't take much longer on the merged table, because MySQL first narrows the results down to the relevant gauge's rows using the index on gauge_id. If the operation does involve an index, make it a multi-column index on the merged table starting with gauge_id: e.g. INDEX( timestamp ) on an individual table should become INDEX( gauge_id, timestamp ) on the merged table. In most cases the operation will then take the same time as on individual tables. Also, don't be put off by numbers like '500 million rows'; databases are designed to work with that amount of data.
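Concretely, if the merged table's primary key is not already (gauge_id, timestamp), the secondary index would look something like this (table and column names follow the question's sketch):
ALTER TABLE gauge_data
    ADD INDEX idx_gauge_time (gauge_id, ts);
A time-range query for a single gauge then reads one contiguous slice of that index instead of picking matching timestamps out of every gauge's rows.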
My remarks are mostly based on experience: almost every time I was in your situation and decided to go with individual tables, I ended up, for one reason or another, merging them back into one table, and since that usually happens once the project has matured, it is a painful process. I have really experienced the "relational databases are not designed to be used like that" lesson first-hand.
I would really like to hear others' input on this. By the way, do a lot of testing before going either way; MySQL has a lot of unexpected behaviors.
Upvotes: 3