Reputation: 11
We are trying to store some big buffers (8MB each) in Redis using the ServiceStack wrapper. We use the RedisNativeClient.Set(string key, byte[] value) API to set the buffers. Both client and server reside on the same machine. Persistence is disabled. We are currently using the evaluation version of ServiceStack.
The problem is that we get very poor performance - around 60 MB/sec. Using a different C# wrapper ("Sider"), we get much better performance (~400 MB/sec).
The code I used for my measurements:
public void SimpleTest()
{
    Stopwatch sw;
    long ms1, ms2, interval;
    int nBytesHandled = 0;
    int nBlockSizeBytes = 8000000;
    int nMaxIterations = 5;
    byte[] pBuffer = new byte[nBlockSizeBytes];

    // Create Redis wrapper
    var m_serviceStackRedisClient = new ServiceStack.Redis.RedisNativeClient();

    // Clear DB
    m_serviceStackRedisClient.FlushAll();

    sw = Stopwatch.StartNew();
    ms1 = sw.ElapsedMilliseconds;
    for (int i = 0; i < nMaxIterations; i++)
    {
        m_serviceStackRedisClient.Set("eitan" + i.ToString(), pBuffer);
        nBytesHandled += nBlockSizeBytes;
    }
    ms2 = sw.ElapsedMilliseconds;
    interval = ms2 - ms1;

    // Calculate rate in MB/sec
    double dMBPerSec = nBytesHandled / 1024.0 / 1024.0 / (interval / 1000.0);
    Console.WriteLine("Rate {0:N4}", dMBPerSec);
}
What could the problem be?
Thanks, Eitan.
Upvotes: 1
Views: 728
Reputation: 143319
ServiceStack.Redis uses a reusable pool of byte[] buffers to reduce memory pressure. The default buffer size is 1450 bytes so that each buffer fits within the Ethernet MTU packet size. Whilst this default configuration is optimal for the normal use case of smaller payloads (<100k), it turns out to be slower for larger payloads (>1MB).
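To see why this matters for large values, here is a minimal, illustrative sketch (not ServiceStack's actual implementation) contrasting a write through a small reusable buffer with a single whole-payload write; the Stream abstraction and sizes are assumptions:
using System;
using System.IO;

class ChunkedWriteSketch
{
    // Writes a large payload through a small reusable buffer, one Write call per chunk
    // (~5,500 writes for an 8MB payload with 1450-byte chunks).
    static void WriteChunked(Stream output, byte[] payload, int chunkSize)
    {
        var pooledBuffer = new byte[chunkSize];
        for (int offset = 0; offset < payload.Length; offset += chunkSize)
        {
            int count = Math.Min(chunkSize, payload.Length - offset);
            Buffer.BlockCopy(payload, offset, pooledBuffer, 0, count);
            output.Write(pooledBuffer, 0, count);
        }
    }

    // Writes the whole payload in a single call, bypassing the small buffer entirely.
    static void WriteWhole(Stream output, byte[] payload)
    {
        output.Write(payload, 0, payload.Length);
    }
}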
Based on this finding, the ServiceStack.Redis client has now been modified so that it no longer uses the buffer pool for payloads larger than 500k, a threshold that is now configurable with RedisConfig.BufferPoolMaxSize, e.g.:
RedisConfig.BufferPoolMaxSize = 500000;
The default 1450 byte size of the byte[] buffer is now also configurable with:
RedisConfig.BufferLength = 1450;
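For example, both settings could be applied together at application startup (the placement before any client is created, the ServiceStack.Redis namespace import, and the key/value shown below are assumptions for illustration):
RedisConfig.BufferPoolMaxSize = 500000; // payloads above this size bypass the buffer pool
RedisConfig.BufferLength = 1450;        // size of each pooled byte[] buffer (the default)

var redis = new RedisNativeClient();
redis.Set("large-payload", new byte[8000000]); // an 8MB value, written without the pool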
This change improves the throughput of ServiceStack.Redis for larger payloads, as seen in the RedisBenchmarkTests suite which uses your benchmark with different payload sizes, e.g.:
public void Run(string name, int nBlockSizeBytes, Action<int, byte[]> fn)
{
    Stopwatch sw;
    long ms1, ms2, interval;
    int nBytesHandled = 0;
    int nMaxIterations = 5;
    byte[] pBuffer = new byte[nBlockSizeBytes];

    // Create Redis wrapper
    var redis = new RedisNativeClient();

    // Clear DB
    redis.FlushAll();

    sw = Stopwatch.StartNew();
    ms1 = sw.ElapsedMilliseconds;
    for (int i = 0; i < nMaxIterations; i++)
    {
        fn(i, pBuffer);
        nBytesHandled += nBlockSizeBytes;
    }
    ms2 = sw.ElapsedMilliseconds;
    interval = ms2 - ms1;

    // Calculate rate in MB/sec
    double dMBPerSec = nBytesHandled / 1024.0 / 1024.0 / (interval / 1000.0);
    Console.WriteLine(name + ": Rate {0:N4}, Total: {1}ms", dMBPerSec, interval);
}
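For reference, the ServiceStack.Redis case of this benchmark can be driven with a delegate like the one below (the client variable, key prefix, and block size are illustrative; the Sider case passes an equivalent delegate for its own client):
var serviceStackClient = new RedisNativeClient();
Run("ServiceStack.Redis 8MB", 8000000, (i, buffer) => serviceStackClient.Set("eitan" + i, buffer));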
Results from running on my MacBook Pro against a redis-server instance in an Ubuntu VirtualBox VM:
ServiceStack.Redis 1K: Rate 4.7684, Total: 1ms
Sider 1K: Rate 0.4768, Total: 10ms
ServiceStack.Redis 10K: Rate 47.6837, Total: 1ms
Sider 10K: Rate 4.3349, Total: 11ms
ServiceStack.Redis 100K: Rate 26.4910, Total: 18ms
Sider 100K: Rate 20.7321, Total: 23ms
ServiceStack.Redis 1MB: Rate 103.6603, Total: 46ms
Sider 1MB: Rate 70.1231, Total: 68ms
ServiceStack.Redis 8MB: Rate 77.0646, Total: 495ms
Sider 8MB: Rate 84.3960, Total: 452ms
ServiceStack.Redis is faster for the smaller payloads, and its throughput is now much closer to Sider's for the larger 8MB payloads.
This change is available from v4.0.41+, which is now available on MyGet.
Upvotes: 3