Reputation: 470
I'm building a Spring application that captures data from ~100 WebSocket clients and stores it in a queue-like structure on a Redis server. The problem is that the server gradually freezes up, and eventually the WebSocket clients disconnect due to host timeouts.
I initially thought the issue was with using Spring Redis Repositories, but the issue persisted once I switched to Redis Templates.
I then suspected the (de)serialization of Redis objects, and for a while that was indeed the bottleneck. Profiling showed that parsing doubles from strings is slow when processing thousands of samples per second, so I wrote a serialization function that converts double arrays directly into byte arrays for Redis. This greatly reduced CPU time.
fun DoubleArray.toBytes(): ByteArray {
    // One 8-byte slot per double.
    val buffer = ByteBuffer.allocate(DOUBLE_SIZE_BYTES * size)
    // Absolute puts: write each double at its own offset.
    forEachIndexed { i, d -> buffer.putDouble(DOUBLE_SIZE_BYTES * i, d) }
    return buffer.array()
}
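For completeness, here is a round-trip sketch of the idea. The `toDoubles` inverse and the `DOUBLE_SIZE_BYTES` definition are my additions (I'm assuming the constant is `java.lang.Double.BYTES`, i.e. 8):

```kotlin
import java.nio.ByteBuffer

// Assumed constant; java.lang.Double.BYTES == 8.
val DOUBLE_SIZE_BYTES = java.lang.Double.BYTES

fun DoubleArray.toBytes(): ByteArray {
    val buffer = ByteBuffer.allocate(DOUBLE_SIZE_BYTES * size)
    forEachIndexed { i, d -> buffer.putDouble(DOUBLE_SIZE_BYTES * i, d) }
    return buffer.array()
}

// Hypothetical inverse (name is mine): read the doubles back in the same order.
fun ByteArray.toDoubles(): DoubleArray {
    val buffer = ByteBuffer.wrap(this)
    return DoubleArray(size / DOUBLE_SIZE_BYTES) { i -> buffer.getDouble(DOUBLE_SIZE_BYTES * i) }
}

val samples = doubleArrayOf(1.5, -2.25, 3.125)
check(samples.toBytes().toDoubles().contentEquals(samples)) // exact round trip, no string parsing
```

Because doubles are written bit-for-bit (big-endian by ByteBuffer default), the round trip is exact, with none of the precision or CPU cost concerns of string parsing.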
open class SingleSampleRepository<T : SampleModel>(
    private val tClass: KClass<T>,
    template: RedisTemplate<String, ByteArray>
) {
    private val ops = template.opsForValue()
    private val keyName = "ValueOf${tClass.simpleName}"

    fun find(deviceId: Long): T? {
        val name = "$keyName:$deviceId"
        // Missing key -> null; otherwise deserialize the stored byte array.
        return SampleModelHelper.deserializeFromBytes(tClass, ops.get(name) ?: return null)
    }

    fun save(deviceId: Long, sample: T) {
        val name = "$keyName:$deviceId"
        ops.set(name, sample.serializeToBytes())
    }
}
open class MultiSampleRepository<T : SampleModel>(
    private val tClass: KClass<T>,
    private val template: RedisTemplate<String, ByteArray>,
    private val maxSamples: Int = MAX_SAMPLES
) {
    companion object {
        private const val SAMPLES_HZ = 50
        private const val TIME_DURATION_SECONDS = 120
        const val MAX_SAMPLES = TIME_DURATION_SECONDS * SAMPLES_HZ // 6000 samples
    }

    private val ops = template.opsForZSet()
    private val keyName = "ZSetOf${tClass.simpleName}"
    // The property annotated with @RedisScore supplies the sorted-set score.
    private val scoreProperty = tClass.memberProperties.first { it.hasAnnotation<RedisScore>() }

    fun findAll(deviceId: Long): Set<T> {
        val name = "$keyName:$deviceId"
        // Range over the whole sorted set (-1 = last element), avoiding a separate ZCARD call.
        return ops.range(name, 0, -1)?.map {
            SampleModelHelper.deserializeFromBytes(tClass, it)
        }?.toSet() ?: emptySet()
    }

    fun saveAll(deviceId: Long, samples: Set<T>) {
        val name = "$keyName:$deviceId"
        template.delete(name)
        ops.add(name, samples.map {
            ZSetOperations.TypedTuple.of(it.serializeToBytes(), scoreProperty.get(it) as Double)
        }.toMutableSet())
        // Trim oldest entries until the cap is respected.
        while ((ops.size(name) ?: 0) > maxSamples) ops.popMin(name)
    }

    fun save(deviceId: Long, sample: T) {
        val name = "$keyName:$deviceId"
        ops.add(name, sample.serializeToBytes(), scoreProperty.get(sample) as Double)
        while ((ops.size(name) ?: 0) > maxSamples) ops.popMin(name)
    }
}
I now suspect that the spring-data-redis Lettuce client is the issue. Specifically, Lettuce appears to use only a single NIO event loop thread. I don't know whether that is expected behavior, so please let me know if it's working correctly. Here are some screenshots from profiling:
I also tried using ClientResources and custom thread pools after seeing other posts about Lettuce, but none of these methods increased the NIO event loop thread count.
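For reference, the `ClientResources` attempt looked roughly like this (a sketch rather than my exact code; the pool sizes here are arbitrary):

```kotlin
import io.lettuce.core.resource.ClientResources
import io.lettuce.core.resource.DefaultClientResources
import org.springframework.data.redis.connection.RedisStandaloneConfiguration
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory

// Request more I/O (event loop) and computation threads from Lettuce.
val resources: ClientResources = DefaultClientResources.builder()
    .ioThreadPoolSize(8)          // hypothetical size; the default scales with CPU count
    .computationThreadPoolSize(8)
    .build()

val clientConfig = LettuceClientConfiguration.builder()
    .clientResources(resources)
    .build()

val factory = LettuceConnectionFactory(RedisStandaloneConfiguration(), clientConfig)
```

Note that each Lettuce connection is pinned to a single event loop thread, so with the factory's default single shared native connection, a larger event loop pool alone does not change the observed thread count.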
I understand that Redis itself is mostly single-threaded, but from profiling, it looks like most CPU time is spent on encoding/decoding Redis commands, not actually sending them. Should Lettuce be using multiple threads for the NIO event loop?
Upvotes: 2
Views: 4034
Reputation: 470
In addition to setting shareNativeConnection to false on the LettuceConnectionFactory, I also had to configure connection pooling via LettucePoolingClientConfiguration. This combination finally increased the thread count, and performance improved dramatically.
@Configuration
class RedisConfig {
    @Bean
    fun connectionFactory(): RedisConnectionFactory {
        val redisConfig = RedisStandaloneConfiguration()
        // Pooling configuration: operations borrow dedicated native connections from a pool.
        val clientConfig = LettucePoolingClientConfiguration.builder().build()
        val factory = LettuceConnectionFactory(redisConfig, clientConfig)
        // Without this, all operations multiplex over a single shared native connection
        // (and therefore a single event loop thread).
        factory.shareNativeConnection = false
        return factory
    }
}
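If the pool defaults don't fit the load, the pool can be sized explicitly through commons-pool2, which spring-data-redis uses under the hood (a sketch; the numbers are arbitrary, and commons-pool2 must be on the classpath):

```kotlin
import org.apache.commons.pool2.impl.GenericObjectPoolConfig
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration

// Hypothetical sizing: cap concurrent native connections and keep a few warm.
val poolConfig = GenericObjectPoolConfig<Any>().apply {
    maxTotal = 16
    maxIdle = 16
    minIdle = 4
}

val clientConfig = LettucePoolingClientConfiguration.builder()
    .poolConfig(poolConfig)
    .build()
```

The resulting `clientConfig` is then passed to the `LettuceConnectionFactory` exactly as in the configuration above.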
Upvotes: 1