Reputation: 5039
We have a fairly large database: a few hundred tables across two schemas, with the larger tables holding upwards of 80M records. As a result, the application has slowed over time, particularly around materialized views. We are wondering about using Redis as a cache to help speed this application up as a whole. What we're not sure about is the level of work needed to properly utilise Redis in this case, or whether we could use it in part, just for the biggest tables. It's an Oracle 11g database and a Java application.

As someone who has no experience with Redis, what would the steps be for general adoption into an existing DB, and what is the learning curve? It's a small team, so we don't want to undertake something that is too much work to implement properly.
Upvotes: 2
Views: 3287
Reputation: 49942
Your question, IMO, borders on being too general to provide a meaningful answer :) I can, however, address one aspect of it, specifically Redis' learning curve. Borrowing Karl Seguin's words from his (still very relevant) "Redis: Zero to Master in 30 minutes" posts:
learning Redis is the most efficient way a programmer can spend 30 minutes.
So take 30 minutes to read through the posts, grab a book about Redis, or simply go to http://try.redis.io and type `tutorial`. Once you understand what Redis is and how to use it, you can start thinking about offloading some of the traffic from your Oracle database to it.
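To make the "offloading" idea concrete, here is a minimal sketch of the cache-aside pattern, which is the usual starting point: check the cache first, and only on a miss query Oracle and write the result back. A `ConcurrentHashMap` stands in for Redis so the snippet is self-contained and runnable; in the real application you would swap it for a Redis client (e.g. Jedis) using `GET`/`SETEX` with a TTL. `CacheAsideSketch` and `dbLoader` are illustrative names, not part of any real API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CacheAsideSketch {
    // Stand-in for Redis; replace with a Redis client in production.
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    // Stand-in for the Oracle query that produces the value for a key.
    private final Function<String, String> dbLoader;
    // Counts how often we fell through to the "database".
    private int dbHits = 0;

    public CacheAsideSketch(Function<String, String> dbLoader) {
        this.dbLoader = dbLoader;
    }

    public String get(String key) {
        // 1. Try the cache first.
        String cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        // 2. On a miss, load from the database and populate the cache.
        dbHits++;
        String value = dbLoader.apply(key);
        cache.put(key, value);
        return value;
    }

    public int getDbHits() {
        return dbHits;
    }
}
```

With this shape, repeated reads of the same key are served from memory and only the first read touches the database, which is exactly the traffic you would be moving off Oracle. The parts Redis adds over a plain map, expiry, eviction, and sharing the cache across JVMs, are what you would configure once you adopt it.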
Upvotes: 2