Strom

Reputation: 129

MySQL - Best performance between 2 solutions

I need some advice about MySQL.

I have a user table that has id, nickname, numDVD and money, and a DVD table that has idDVD, idUser, LinkPath and counter.

Now I believe I will have at most 20 users, and each user has about 30 DVDs.

So when I insert a DVD I should have idDVD (auto-increment), idUser (the same idUser as in the user table), LinkPath (a generic string), and counter, which is a unique number from 1 to 30 per user (depending on that user's number of DVDs).

The problem is handling the last column, counter, because I would like to select, for example, 2 or 3 random DVDs from 1 to 30 that have the same idUser.

So I was wondering whether this is the best solution in my case, even though it is hard for me to handle (I have never used MySQL), OR whether it would be better to create 20 tables (one for each user) containing the ID, DVD name, etc.
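For reference, the single-table layout described above can be sketched as follows. This is only an illustration (SQLite stands in for MySQL here, and the sample paths are invented); in MySQL the random ordering would be `ORDER BY RAND()` rather than SQLite's `ORDER BY RANDOM()`:

```python
import sqlite3

# Sketch of the schema from the question: one user table, one dvd table,
# with (idUser, counter) unique per user.  SQLite stands in for MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user (
    id INTEGER PRIMARY KEY,
    nickname TEXT, numDVD INTEGER, money REAL)""")
conn.execute("""CREATE TABLE dvd (
    idDVD INTEGER PRIMARY KEY AUTOINCREMENT,
    idUser INTEGER REFERENCES user(id),
    LinkPath TEXT,
    counter INTEGER,
    UNIQUE (idUser, counter))""")

conn.execute("INSERT INTO user (id, nickname) VALUES (1, 'alice')")
for n in range(1, 31):  # 30 DVDs for user 1, counter 1..30
    conn.execute(
        "INSERT INTO dvd (idUser, LinkPath, counter) VALUES (1, ?, ?)",
        (f"/dvds/{n}.iso", n))

# Pick 3 random DVDs belonging to user 1 (MySQL: ORDER BY RAND()).
rows = conn.execute(
    "SELECT counter, LinkPath FROM dvd WHERE idUser = 1 "
    "ORDER BY RANDOM() LIMIT 3").fetchall()
print(rows)
```

Note that the random pick does not need the counter column at all: selecting by idUser with a random order and a LIMIT already does the job.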

Thanks

Upvotes: 1

Views: 84

Answers (3)

r0ast3d

Reputation: 2635

If I understand your requirements correctly, you should be able to accomplish this by creating a compound index so that you can select efficiently.

If there is too much data being handled in that table, it would also help to clear out some historical data.
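A compound index here would presumably cover (idUser, counter), since those are the columns the question filters on. A minimal sketch, with SQLite standing in for MySQL (the `CREATE INDEX` statement is the same in both) and an assumed index name:

```python
import sqlite3

# Sketch of the compound-index suggestion on the dvd table from the
# question.  The index name idx_dvd_user_counter is an assumption.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dvd (
    idDVD INTEGER PRIMARY KEY,
    idUser INTEGER, LinkPath TEXT, counter INTEGER)""")

# A compound index on (idUser, counter) lets queries filtering on
# idUser alone, or on idUser AND counter, avoid a full table scan.
conn.execute("CREATE INDEX idx_dvd_user_counter ON dvd (idUser, counter)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM dvd WHERE idUser = 1 AND counter = 5").fetchone()
print(plan)
```

The query plan output should mention the index being searched rather than a scan of the whole table; MySQL's `EXPLAIN` gives the equivalent information.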

Upvotes: 0

Michael Low

Reputation: 24506

Don't create 20 tables! That'd be way overkill, and what if you needed to add more users in the future? It'd be practically impossible to maintain and update reliably.

A better way would be something like:

Table users
-> idUser
-> other user specific data

Table dvd
-> idDvd
-> DVDname
-> LinkPath
-> other dvd specific data (no user data here)

Table usersDvds
-> idUser
-> idDvd

This way, it's no problem if more than one user has the same DVD, as it's just another entry in the usersDvds table - the idDvd value would be the same, but idUser would be different. And to count how many DVDs a user has, just do a SELECT COUNT(*) FROM usersDvds WHERE idUser = 1
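The three-table layout above can be exercised end to end in a few lines. This is a minimal sketch (SQLite standing in for MySQL; the sample users and DVD titles are invented):

```python
import sqlite3

# Sketch of the users / dvd / usersDvds layout from the answer.
# usersDvds is the junction table linking users to the DVDs they own.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users     (idUser INTEGER PRIMARY KEY, nickname TEXT);
CREATE TABLE dvd       (idDvd INTEGER PRIMARY KEY, DVDname TEXT,
                        LinkPath TEXT);
CREATE TABLE usersDvds (idUser INTEGER REFERENCES users(idUser),
                        idDvd  INTEGER REFERENCES dvd(idDvd),
                        PRIMARY KEY (idUser, idDvd));
INSERT INTO users VALUES (1, 'alice'), (2, 'bob');
INSERT INTO dvd   VALUES (10, 'Alien', '/a.iso'),
                         (11, 'Blade Runner', '/b.iso');
-- Both users own DVD 10; only alice also owns DVD 11.
INSERT INTO usersDvds VALUES (1, 10), (1, 11), (2, 10);
""")

# Count how many DVDs user 1 has, exactly as in the answer.
count = conn.execute(
    "SELECT COUNT(*) FROM usersDvds WHERE idUser = 1").fetchone()[0]
print(count)  # 2
```

The composite primary key on (idUser, idDvd) also prevents the same user/DVD pair from being inserted twice.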

Upvotes: 3

user207421

Reputation: 310916

You don't need a table per user, and doing so would make the subsequent SQL programming basically impossible. However, with these data volumes, practically nothing you do is going to cause or relieve bottlenecks. Very probably the entire database will fit into memory, so access via any schema will be practically instantaneous.

Upvotes: 1
