Reputation: 366
I'm working on a project where each user has an inventory, which is really just an array of IDs that correspond to actual things in the project.
To store the inventory, I want to use Postgres and simply serialize/deserialize the data to and from a string. However, the array is likely to contain many copies of a single ID. Since the IDs are all integers, my plan is to join them with commas and split them apart during retrieval.
For example, the inventory might be [1, 2, 3, 3, 3, 4], but it would be best not to store it as the string 1,2,3,3,3,4 with a long repeating list.
Is there an efficient way to store duplicate items?
Upvotes: 0
Views: 236
Reputation: 366
Thanks to Nishant, I realized I can store the array in JSON format instead. With this I can compress the data like so (using the original input):
{
"1": 1,
"2": 1,
"3": 3,
"4": 1
}
This lets me store just a count of each element in the JSON.
EDIT: Here's the code.
function compress(array) {
  // Count occurrences of each ID so repeats are stored only once.
  const output = {};
  array.forEach((x) => {
    output[x] = (output[x] || 0) + 1;
  });
  return output;
}
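For retrieval, the counts can be expanded back into the original array. A minimal sketch of the inverse function (the name decompress is my own, not from the question):

function decompress(counts) {
  // Rebuild the flat array by repeating each ID `count` times.
  // JSON object keys are strings, so convert back to numbers.
  const output = [];
  Object.keys(counts).forEach((id) => {
    for (let i = 0; i < counts[id]; i++) {
      output.push(Number(id));
    }
  });
  return output;
}

Note that integer-like keys are iterated in ascending numeric order, so decompress(compress(array)) returns a sorted copy of the original array rather than preserving its original ordering.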
Upvotes: 0
Reputation: 1271181
"Actual things in the project" should be another table in the database. The correct relational approach is to have a junction table:
create table userItems (
    userItemId serial primary key,
    userId int references users(userId),
    itemId int references items(itemId)
);
You can then add additional information -- such as quantity, price, date when added, and so on.
There are some circumstances where an array would be more appropriate. However, for a run-of-the-mill situation where you have users and items, a junction table is the first approach to try and the one most likely to solve your problem.
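For instance, with one row per owned item, the per-ID counts from your original array fall out of a simple aggregate (userId = 1 here is just an illustrative value):

select itemId, count(*) as quantity
from userItems
where userId = 1
group by itemId;

Alternatively, you can add a quantity column to the table itself and store one row per (userId, itemId) pair.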
Upvotes: 1