Hagai L

Reputation: 1613

Correct modeling in Redis for writing single entity but querying multiple

I'm trying to migrate data from a SQL DB to Redis in order to gain much higher throughput, since this is a very high-throughput system. I'm aware of the downsides regarding persistence, storage costs, etc.

So, I have a table called "Users" with a few columns. Let's assume: ID, Name, Phone, Gender.

Around 90% of the requests are writes, each updating a single row. Around 10% of the requests are reads, each fetching 20 rows.

I'm trying to get my head around the right modeling of this in order to get the max out of it.

If there were only updates, I would use hashes. But because of the 10% of reads, I'm afraid it won't be efficient.

Any suggestions?

Upvotes: 3

Views: 268

Answers (1)

Didier Spezia

Reputation: 73306

Actually, the real question is whether you need to support partial updates.

Supposing partial update is not required, you can store your record in a blob associated with a key (i.e. the string datatype). All write operations can be done in one roundtrip, since the record is always written at once. Several read operations can be done in one roundtrip as well, using the MGET command.
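A minimal sketch of the blob approach using redis-py, assuming records are serialized as JSON and keyed under a hypothetical user:<ID> scheme (neither of which is specified in the answer):

```python
import json
import redis

# Assumed connection parameters and user:<ID> key scheme.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def write_user(user):
    # The full record is written at once as a JSON blob -- one roundtrip.
    r.set(f"user:{user['ID']}", json.dumps(user))

def read_users(ids):
    # Fetch any number of records in a single roundtrip with MGET.
    blobs = r.mget([f"user:{i}" for i in ids])
    return [json.loads(b) for b in blobs if b is not None]

write_user({"ID": 1, "Name": "Alice", "Phone": "555-0101", "Gender": "F"})
print(read_users([1]))
```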

Now, supposing partial update is required, you can store your record in a dictionary associated with a key (i.e. the hash datatype). All write operations can be done in one roundtrip (even if they are partial). Several read operations can also be done in one roundtrip, provided the HGETALL commands are pipelined.
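A corresponding sketch of the hash approach, again assuming redis-py and the same hypothetical user:<ID> key scheme:

```python
import redis

# Same assumed connection parameters and key scheme as above.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def update_user(user_id, **fields):
    # Partial update: only the supplied fields are written -- one roundtrip.
    r.hset(f"user:{user_id}", mapping=fields)

def read_users(ids):
    # Pipeline the HGETALL commands so all reads go out in one roundtrip.
    pipe = r.pipeline(transaction=False)
    for i in ids:
        pipe.hgetall(f"user:{i}")
    return pipe.execute()

update_user(1, Name="Alice", Phone="555-0101", Gender="F")
update_user(1, Phone="555-0202")  # partial update of a single field
print(read_users([1]))
```

Pipelining (rather than issuing 20 separate HGETALL calls) is what keeps the read side at a single roundtrip, which matters at the request rates described in the question.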

Pipelining several HGETALL commands is a bit more CPU-intensive than using MGET, but not by much. In terms of latency, it should not be significantly different, unless you execute hundreds of thousands of them per second on the Redis instance.

Upvotes: 3
