garyc2232

Reputation: 65

MySQL performance in this situation

Say I have a table TableA with 10k rows and I want to find all rows where id > 8000.

When I run SELECT * FROM TableA WHERE id > 8000, what will MySQL do? Will it scan all 10k rows and return the 2k that match the condition, or will it skip the 8k non-matching rows and return only the 2k rows of data?

I also have a requirement to store a lot of data per day and to search quickly for today's records. Is one big table still the best method, or are there other solutions?

Or would it be best to create two tables: one for all records and one for today's records. When new data comes in, it is inserted into both tables, and the next day the records in the second table are deleted.
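The two-table idea could be sketched like this (table and column names here are placeholders, not from a real schema):

```sql
-- Sketch of the two-table approach; names are placeholders.
CREATE TABLE all_records (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    record_date DATE NOT NULL,
    payload VARCHAR(255)
) ENGINE=InnoDB;

CREATE TABLE today_records LIKE all_records;

-- On each new record, insert into both tables:
INSERT INTO all_records   (user_id, record_date, payload) VALUES (1, CURDATE(), '...');
INSERT INTO today_records (user_id, record_date, payload) VALUES (1, CURDATE(), '...');

-- Once a day (e.g. from a scheduled job), clear the small table:
TRUNCATE TABLE today_records;
```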

Which method is better for SELECT speed, or is there another good method for this case?

Actually, I don't have the real database here right now; I'm just trying to work out which approach would be better in that case.

Update (8-12-2016 11:00):

I am using InnoDB, but I will use the date as the search key, and it is not a PK.

Returning 2k rows is just an extreme case for the sake of the question. In the real case it may return (number of users × records per user), so if I have 100 users and each makes 10 records in a day, I may need to return 1k rows.

My real case is that I need to store all user records per day (maybe 10 records per user) and generate a ranking over the last day's records and the last 7 days' records. My worry is: if I search for the last day's records in one large table, will it be slow, or should I create another table just to hold the last day's records?
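For the ranking use case, a single table with a secondary index on the date column would usually be considered first. A hedged sketch, with made-up table and column names (the question only says the date is the search key, not the PK):

```sql
-- One-table alternative; all names here are hypothetical.
CREATE TABLE user_records (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    record_date DATE NOT NULL,
    score INT NOT NULL,
    INDEX idx_date (record_date)
) ENGINE=InnoDB;

-- Last day's records:
SELECT * FROM user_records
WHERE record_date = CURDATE() - INTERVAL 1 DAY;

-- Rank users over the last 7 days:
SELECT user_id, SUM(score) AS total
FROM user_records
WHERE record_date >= CURDATE() - INTERVAL 7 DAY
GROUP BY user_id
ORDER BY total DESC;
```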

Upvotes: 0

Views: 57

Answers (1)

Rick James

Reputation: 142278

  • Are you fetching more than about 20% of the table? (The number 20% is inexact.)
  • Is the PRIMARY KEY on id? Or is it a secondary key?
  • Are you using ENGINE=InnoDB?

Case: InnoDB and PRIMARY KEY(id): The execution will start at 8000 and go until finished. This is optimal.

Case: InnoDB, id is a secondary key, and a 'small' percentage of table is being fetched: The index will be used; it is a BTree and is scanned from 8000 to end, jumping over to the data (via the PK) to find the rows.

Case: InnoDB, id is secondary, and large percentage: The index will be ignored, and the entire table will be scanned ("table scan"), ignoring rows that don't match the WHERE clause. A table scan is likely to be faster than the previous case because of all the 'jumping over to the data'.
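You can check which of these cases applies to your query with EXPLAIN: `type: range` with `key: PRIMARY` (or a secondary index name) indicates an index range scan, while `type: ALL` indicates a full table scan.

```sql
-- Inspect the chosen execution plan for the query in the question:
EXPLAIN SELECT * FROM TableA WHERE id > 8000;
```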

Other comments:

  • 10K rows is "small" as tables go.
  • returning 2K rows is "large" as result sets go. What are you doing with them?
  • Is there further filtering that you could turn over to MySQL, so that you don't get all 2K back? Think COUNT or SUM with GROUP BY, FULLTEXT search index, etc.
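For example, instead of pulling all 2K rows into the client, MySQL could aggregate them server-side (the user_id and score columns here are hypothetical):

```sql
-- Let MySQL reduce the result set with GROUP BY:
SELECT user_id, COUNT(*) AS records, SUM(score) AS total
FROM TableA
WHERE id > 8000
GROUP BY user_id;
```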

More tips on indexes.

Upvotes: 1
