Reputation: 135
I have a table with 2+ million rows. In my script, I need each query to finish in under 0.0001 seconds, but with this many rows it now takes 0.1 seconds. My table structure is:
chat_id | user_id | score
In each query I just want to get the score of one user in one chat, and I use LIMIT 1 in the query. For users near the beginning of the table the query finishes in 0.0001 seconds, but for users near the end it sometimes takes around 0.5 seconds to return the result. I was thinking of splitting the table into multiple tables, one per chat_id, for example:
chat1_table
user_id | score
chat2_table
user_id | score
This makes each table much smaller (around 1k rows per table) and gives me the query speed I want. But I'm wondering: isn't there a way to get the same performance with a single table? I've heard of partitioning, but I don't know much about it, plus I only have one disk.
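For reference, the lookup is roughly like this (a minimal sketch; scores is a stand-in for my real table name, and the IDs are placeholder values):

-- Fetch one user's score in one chat.
SELECT score FROM scores WHERE chat_id = 42 AND user_id = 1337 LIMIT 1;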
Upvotes: 0
Views: 24
Reputation: 65166
I think just adding an index on the chat_id and user_id columns should suffice. In fact, you could make them a composite primary key, if the (chat_id, user_id) pair is always unique. If your table doesn't have a primary key yet, you can add it:
ALTER TABLE tablename ADD PRIMARY KEY (chat_id, user_id);
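If the table already has a different primary key, a plain composite index gives the same effect (a sketch; the index name is arbitrary):

CREATE INDEX idx_chat_user ON tablename (chat_id, user_id);

You can check that the index is actually used with EXPLAIN; the plan should show ref access on the new key instead of a full table scan:

EXPLAIN SELECT score FROM tablename WHERE chat_id = 42 AND user_id = 1337 LIMIT 1;

With a B-tree index on (chat_id, user_id), the lookup cost is logarithmic in the table size, so rows "at the end" of the table are found just as fast as rows at the beginning.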
Upvotes: 1