Reputation: 3
I have to store 2 million codes, each code being a string of fixed length 11. Every code has a boolean attribute called 'used', which marks whether the code has been used. The set of codes will never change (no creating, deleting, or renaming codes); the only write operation will be updating the 'used' attribute of existing codes.
Every time a user enters a code in a web form, I have to look that code up, check whether it has been used, and if not, mark it as used.
There will be lots of users entering codes (up to 50 concurrently, yeah, that's a lot to me!) so I wonder what would be a good approach to keep my webapp from being slow.
Is this feasible with Postgres (the DBMS I like the most)? Which Postgres features should I explore further? Indices?
Is this kind of task better suited to NoSQL? Maybe Redis?
I'm going to use a cloud server (Rackspace). How much RAM should I put into it? There will be other stuff happening in the database, but nothing of importance in comparison. I'm guessing 1 GB.
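For context on the 1 GB guess, here is a rough back-of-envelope sizing sketch. The ~60 bytes per row is an assumed figure (11 bytes of code text plus per-row and index overhead), not a measured one:

```python
# Rough sizing sketch with assumed numbers: ~11 bytes of code text plus
# per-row overhead (a Postgres tuple header alone is ~24 bytes), so call
# it ~60 bytes per row including index entries.
rows = 2_000_000
bytes_per_row = 60  # assumption, not a measured figure
total_mb = rows * bytes_per_row / 1024 / 1024
print(round(total_mb))  # ~114 MB
```

Even with generous overhead, the whole data set is on the order of 100-200 MB, so it fits comfortably in 1 GB of RAM.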
What other things should I research?
I don't want anyone doing any research for me, just need some pointers so I can further investigate this issue.
Thanks!
Upvotes: 0
Views: 356
Reputation: 6051
It should be fast enough with an appropriate index on the code field. The database will not do 2 million iterations; with a B-tree index a lookup takes about log2(2,000,000) ≈ 21 comparisons. 21 iterations should be fast enough.
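A minimal sketch of the idea, using Python's sqlite3 as a stand-in for Postgres (the schema and index translate directly; both build B-tree indexes by default, so the lookup is logarithmic either way):

```python
import sqlite3

# In-memory SQLite as a stand-in for Postgres.
conn = sqlite3.connect(":memory:")
# PRIMARY KEY creates a unique index on code, so lookups are O(log n).
conn.execute("CREATE TABLE codes (code TEXT PRIMARY KEY, used INTEGER NOT NULL DEFAULT 0)")
# Insert a small sample (the real table would hold 2 million rows).
conn.executemany("INSERT INTO codes (code) VALUES (?)",
                 [("%011d" % i,) for i in range(10000)])

# Indexed point lookup: no full scan of the table.
row = conn.execute("SELECT used FROM codes WHERE code = ?", ("00000000042",)).fetchone()
print(row)  # (0,) -> the code exists and is unused
```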
Upvotes: 0
Reputation: 16677
Yes, a SQL database will do this.
Create a table with at least two columns, used_flag and code.
Add an index on the code column.
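One detail worth noting for the 50 concurrent users: the check and the mark should be a single atomic statement, so two users can never redeem the same code. A sketch of that pattern, again using sqlite3 as a stand-in for Postgres (the UPDATE statement itself is the same in both):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE codes (code TEXT PRIMARY KEY, used INTEGER NOT NULL DEFAULT 0)")
conn.execute("INSERT INTO codes (code) VALUES ('ABCDEFGHIJK')")

def redeem(conn, code):
    # One atomic UPDATE: it succeeds only if the code exists AND is unused.
    # Under concurrency the database's row locking ensures only one caller wins.
    cur = conn.execute("UPDATE codes SET used = 1 WHERE code = ? AND used = 0", (code,))
    conn.commit()
    return cur.rowcount == 1  # True = redeemed now; False = unknown or already used

print(redeem(conn, "ABCDEFGHIJK"))  # True  (first redemption)
print(redeem(conn, "ABCDEFGHIJK"))  # False (already used)
print(redeem(conn, "NOSUCHCODE"))   # False (not found)
```

Checking `rowcount` avoids a separate SELECT-then-UPDATE round trip, which would be a race condition between two concurrent requests.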
Upvotes: 1