Reputation: 6720
I am trying to test a very simple setup with Redis and Twemproxy but I can't find a way to make it faster.
I have 2 redis servers that I run with bare minimum configuration:
./redis-server --port 6370
./redis-server --port 6371
Both are compiled from source and run on a single machine with plenty of memory and CPUs.
If I run a redis-benchmark in one of the instances I get the following:
./redis-benchmark --csv -q -p 6371 -t set,get,incr,lpush,lpop,sadd,spop -r 100000000
"SET","161290.33"
"GET","176366.86"
"INCR","170940.17"
"LPUSH","178571.42"
"LPOP","168350.17"
"SADD","176991.16"
"SPOP","168918.92"
Now I would like to put Twemproxy in front of the two instances to distribute the requests and get higher throughput (at least, that is what I expected!).
I used the following configuration for Twemproxy:
my_cluster:
  listen: 127.0.0.1:6379
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: false
  redis: true
  servers:
   - 127.0.0.1:6371:1 server1
   - 127.0.0.1:6372:1 server2
And I run nutcracker as:
./nutcracker -c twemproxy_redis.yml -i 5
The results are very disappointing:
./redis-benchmark -r 1000000 --csv -q -p 6379 -t set,get,incr,lpush,lpop,sadd,spop
"SET","112485.94"
"GET","113895.21"
"INCR","110987.79"
"LPUSH","145560.41"
"LPOP","149700.61"
"SADD","122100.12"
I tried to understand what is going on by fetching Twemproxy's statistics like this:
telnet 127.0.0.1 22222
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
{
  "service": "nutcracker",
  "source": "localhost.localdomain",
  "version": "0.4.1",
  "uptime": 10,
  "timestamp": 1452545028,
  "total_connections": 303,
  "curr_connections": 3,
  "my_cluster": {
    "client_eof": 300,
    "client_err": 0,
    "client_connections": 0,
    "server_ejects": 0,
    "forward_error": 0,
    "fragments": 0,
    "server1": {
      "server_eof": 0,
      "server_err": 0,
      "server_timedout": 0,
      "server_connections": 1,
      "server_ejected_at": 0,
      "requests": 246791,
      "request_bytes": 11169484,
      "responses": 246791,
      "response_bytes": 1104215,
      "in_queue": 0,
      "in_queue_bytes": 0,
      "out_queue": 0,
      "out_queue_bytes": 0
    },
    "server2": {
      "server_eof": 0,
      "server_err": 0,
      "server_timedout": 0,
      "server_connections": 1,
      "server_ejected_at": 0,
      "requests": 353209,
      "request_bytes": 12430516,
      "responses": 353209,
      "response_bytes": 2422648,
      "in_queue": 0,
      "in_queue_bytes": 0,
      "out_queue": 0,
      "out_queue_bytes": 0
    }
  }
}
Connection closed by foreign host.
Is there any other benchmark around that works properly? Or should redis-benchmark have worked?
I forgot to mention that I am using Redis 3.0.6 and Twemproxy 0.4.1.
Upvotes: 2
Views: 2012
Reputation: 1449
The proxy imposes a small tax on each request. Measure throughput using the proxy with one server. Impose a load until the throughput stops growing and the response times slow to a crawl. Add another server and note the response times are restored to normal, while capacity just doubled. Of course, you'll want to add servers well before response times start to crawl.
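Here is a toy model of that sizing logic in Python. The per-server capacity and the headroom factor are made-up assumptions for illustration, not measurements from any real setup:

```python
import math

# Assumed ops/s one backend sustains through the proxy before response
# times start to crawl (illustrative number, not a measurement).
PER_SERVER_CAPACITY = 110_000

def servers_needed(target_ops, headroom=0.7):
    """Servers required so each backend runs below `headroom` of its
    saturation point, i.e. you add servers well before the crawl."""
    return math.ceil(target_ops / (PER_SERVER_CAPACITY * headroom))

print(servers_needed(100_000))   # 2 servers keep each well under saturation
print(servers_needed(300_000))   # capacity scales linearly with server count
```

The point of the headroom factor is exactly the last sentence above: you provision so no backend ever approaches its saturation throughput.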
Upvotes: 1
Reputation: 49187
It might seem counter-intuitive, but putting two instances of redis with a proxy in front of them will certainly reduce performance!
In a single instance scenario, redis-benchmark connects directly to the redis server, and thus has minimal latency per request.
Once you put two instances and a single twemproxy in front of them, think what happens - you connect to twemproxy, which analyzes the request, chooses the right instance, and connects to it.

So, first of all, each request now has two network hops to travel instead of one. Added latency means less throughput of course.
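Some back-of-the-envelope numbers make this concrete. The latencies below are illustrative assumptions, not measurements from your setup:

```python
# Unpipelined clients: each connection completes roughly 1/RTT requests
# per second, so throughput is inversely proportional to latency.
DIRECT_RTT = 0.10e-3   # assumed client -> redis round trip, seconds
PROXY_COST = 0.05e-3   # assumed extra latency added by the proxy hop

def ops_per_sec(rtt, connections=50):
    """Aggregate throughput of `connections` unpipelined clients."""
    return connections / rtt

print(ops_per_sec(DIRECT_RTT))               # roughly 500k ops/s
print(ops_per_sec(DIRECT_RTT + PROXY_COST))  # roughly 333k ops/s
```

With these (invented) numbers, a 50% latency increase cuts unpipelined throughput by a third, which is the same order of degradation the benchmark above shows.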
Also, you are using just one twemproxy instance. Assuming twemproxy itself performs roughly like a single redis instance, you can never beat a single instance with a single proxy in front of it.
Twemproxy facilitates scaling out, not scaling up. It allows you to grow your cluster to sizes that a single instance could never achieve. But there's a latency price to pay, and as long as you're using a single proxy, it's also a throughput price.
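For reference, the routing step described above can be sketched in a few lines of Python: the fnv1a_64 hash matches the `hash: fnv1a_64` setting in your config, but the simple modulo here is only a crude stand-in for the ketama ring twemproxy actually builds with `distribution: ketama`:

```python
# FNV-1a 64-bit constants (offset basis and prime).
FNV64_OFFSET = 0xcbf29ce484222325
FNV64_PRIME = 0x100000001b3

def fnv1a_64(key: bytes) -> int:
    """FNV-1a 64-bit hash, as selected by `hash: fnv1a_64`."""
    h = FNV64_OFFSET
    for b in key:
        h ^= b
        h = (h * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

servers = ["server1", "server2"]
for key in (b"user:1001", b"user:1002", b"user:1003"):
    # Crude approximation of server selection; real ketama hashes many
    # virtual points per server onto a ring for smoother rebalancing.
    print(key.decode(), "->", servers[fnv1a_64(key) % len(servers)])
```

This is the per-request work the proxy does before it can even open the second hop to the chosen backend.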
Upvotes: 1