Michael

Reputation: 35341

Optimizing a SQL query with a ranged WHERE clause against a large table

I have a database table that has three columns: an ID, a timestamp, and a string. About 14,000 rows are inserted every day, so the table is very large. It currently has 1.3 million rows.

Table definition:

CREATE TABLE readings (
  id int primary key auto_increment,
  ts datetime not null, -- the timestamp
  json text not null -- the string
);

The query I'm running is:

SELECT * FROM readings WHERE ts >= 'TIME_START' AND ts <= 'TIME_END' ORDER BY ts

The query takes around 45 seconds to execute. How can I modify the table and/or query to make it faster?

Thanks.

Upvotes: 1

Views: 334

Answers (4)

MarcinWolny

Reputation: 1645

The only things that come to mind are partitions, assuming you set everything up correctly. You might also want to try a different storage engine (e.g. InnoDB), or setting up an index on the ts column.

Other than that - using >= <= vs BETWEEN makes no performance difference, so don't worry about it.
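A rough sketch of the partitioning idea, assuming MySQL and yearly ranges (the partition boundaries are illustrative; MySQL requires the partitioning column to be part of every unique key, so the primary key has to include ts):

-- Make ts part of the primary key so it can be used for partitioning
ALTER TABLE readings
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (id, ts);

-- Range-partition by year; a WHERE on ts can then prune whole partitions
ALTER TABLE readings
  PARTITION BY RANGE (YEAR(ts)) (
    PARTITION p2011 VALUES LESS THAN (2012),
    PARTITION p2012 VALUES LESS THAN (2013),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
  );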

Upvotes: 1

pranag

Reputation: 5622

Try the query below; since the primary key index is on id, this might be faster:

-- Find the ids that bound the time window, then range-scan the primary key
SELECT MIN(id) INTO @min_id FROM readings WHERE ts >= 'TIME_START';
SELECT MAX(id) INTO @max_id FROM readings WHERE ts <= 'TIME_END';

SELECT * FROM readings
WHERE id BETWEEN @min_id AND @max_id
ORDER BY id;

Upvotes: 0

tkorkunckaya

Reputation: 163

You can just order the results by your primary key id, which is auto-increment, instead of by ts.
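A minimal sketch of that change, keeping the original WHERE clause:

SELECT * FROM readings
WHERE ts >= 'TIME_START' AND ts <= 'TIME_END'
ORDER BY id;

This can avoid a sort on the unindexed ts column, though the range filter still needs an index on ts to avoid a full table scan.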

Upvotes: 0

user359040

Reputation:

Add a new index on ts to the table.
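For example (the index name is illustrative):

CREATE INDEX idx_readings_ts ON readings (ts);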

Upvotes: 5
