Reputation: 485
I currently have a large database for GPS tracking purposes. Each time a new person starts sending locations to me, I create a new table with a unique name and start adding rows to it. The table has columns such as altitude, longitude, latitude, etc.
The situation is that some of these tables might contain over 100,000 locations (rows), and I have about 600 tables in the database right now. Searching through all these tables is quite difficult: it requires me to run a "SHOW TABLES" query and then a separate query against each table to get the data I need.
The question breaks down to this: should I combine all these tables into one gigantic table, which could contain over 1,000,000 rows? I would then add one column to identify the different users. This would make running queries much easier, as I'd need a single query instead of one per table.
Would this make the system faster or slower? Do you need more information to answer this question?
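For reference, the combined table I have in mind would look roughly like this (the table and column names are just illustrative, not what I actually use):

    -- Rough sketch of the combined table (names are examples only).
    CREATE TABLE locations (
        id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id     INT UNSIGNED    NOT NULL,  -- the new column that separates users
        latitude    DECIMAL(9,6)    NOT NULL,
        longitude   DECIMAL(9,6)    NOT NULL,
        altitude    DECIMAL(8,2)    NULL,
        recorded_at DATETIME        NOT NULL,
        PRIMARY KEY (id),
        KEY idx_user_time (user_id, recorded_at)
    ) ENGINE=InnoDB;

    -- One query instead of "SHOW TABLES" plus one query per table:
    SELECT latitude, longitude, altitude
    FROM locations
    WHERE user_id = 42
    ORDER BY recorded_at;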
Upvotes: 1
Views: 72
Reputation: 7871
Why don't you have a table with location_id, altitude, longitude and latitude? Have a UNIQUE KEY combining altitude, longitude and latitude.
Have another table, maybe user_location, with user_id and location_id. location_id here is a FOREIGN KEY to the above table.
Since user_id and location_id are PRIMARY KEYs in their respective tables, they are indexed automatically, so lookups should not be a problem.
Also, the location table would eventually saturate: once most coordinates have already been seen, few new rows need to be added to it.
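A rough sketch of the two tables (names and column types are only an example):

    -- Locations stored once; the UNIQUE KEY keeps a coordinate from being stored twice.
    CREATE TABLE location (
        location_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        altitude    DECIMAL(8,2) NOT NULL,
        longitude   DECIMAL(9,6) NOT NULL,
        latitude    DECIMAL(9,6) NOT NULL,
        PRIMARY KEY (location_id),
        UNIQUE KEY uq_coords (altitude, longitude, latitude)
    ) ENGINE=InnoDB;

    -- Link table: which user reported which location.
    CREATE TABLE user_location (
        user_id     INT UNSIGNED    NOT NULL,
        location_id BIGINT UNSIGNED NOT NULL,
        PRIMARY KEY (user_id, location_id),
        FOREIGN KEY (location_id) REFERENCES location (location_id)
    ) ENGINE=InnoDB;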
Upvotes: 0
Reputation: 2552
If you intend to regularly search through all of the tables simultaneously, then it is almost certainly worth combining the tables and adding a user identifier column. Adding indices to this table (for columns you regularly search by) will then greatly improve performance.
A single query will nearly always be faster than multiple queries across different tables, especially if you are sorting the results.
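For example, assuming the combined table has a user_id and a timestamp column (names here are illustrative), a composite index along these lines covers the typical per-user, time-ordered query:

    -- Index the columns you filter and sort by most often.
    ALTER TABLE locations
        ADD INDEX idx_user_time (user_id, recorded_at);

    -- This then needs only one indexed lookup instead of a query per table.
    SELECT latitude, longitude, altitude
    FROM locations
    WHERE user_id = 42
      AND recorded_at >= '2013-01-01'
    ORDER BY recorded_at;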
Upvotes: 0
Reputation: 2943
1,000,000+ rows with proper indexing might work fine, but since the row count will keep going up, you may soon find yourself waiting for queries to execute. I don't think it's really a good idea. What you could do is create such a table, insert all the existing rows into it, and try some queries on it; you will see for yourself how well it performs. Then, just to be sure, generate some random data and grow it to a few million rows (up to the size you expect). If it still performs well, you have your answer. Query speed also depends a lot on hardware, so it's not easy to answer such questions in the abstract. Try to do some tests.
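One way to generate that test data, sketched for MySQL (table and column names are placeholders, and the loop is deliberately simple rather than fast):

    -- Fill a test table with n random rows so you can measure queries at the expected size.
    DELIMITER //
    CREATE PROCEDURE fill_test_data(IN n INT)
    BEGIN
        DECLARE i INT DEFAULT 0;
        WHILE i < n DO
            INSERT INTO locations (user_id, latitude, longitude, altitude, recorded_at)
            VALUES (
                FLOOR(1 + RAND() * 600),                  -- roughly 600 users
                -90 + RAND() * 180,                       -- random latitude
                -180 + RAND() * 360,                      -- random longitude
                RAND() * 3000,                            -- random altitude
                NOW() - INTERVAL FLOOR(RAND() * 365) DAY  -- random time in the last year
            );
            SET i = i + 1;
        END WHILE;
    END //
    DELIMITER ;

    -- CALL fill_test_data(2000000); then run your real queries and check EXPLAIN.

Wrapping the loop in START TRANSACTION ... COMMIT (or batching the inserts) makes the fill much faster, but the idea is the same: measure on realistic volumes, not on the small case.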
Upvotes: 2