Using Kafka Streams 2.4 and the DSL API.
I have a stateful stream-processing application that consumes from a user topic with 100 partitions. The application also uses internal topics, which default to the same number of partitions as the user topic.
I am observing the error below, and eventually all stream threads shut down.
Could you please give me some pointers on a formula to calculate the required number of open file descriptors?
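My current back-of-the-envelope reasoning (my own assumption, happy to be corrected): each partition of each state store gets its own RocksDB instance, and each instance can hold up to max_open_files descriptors, which RocksDB may leave effectively unbounded by default. The store count and per-instance cap below are hypothetical numbers, just for illustration:

// Rough estimate only; storesPerTask and maxOpenFilesPerDb are illustrative guesses.
final long partitions = 100;        // user-topic partitions, i.e. number of stream tasks
final long storesPerTask = 3;       // state stores in the topology (hypothetical)
final long maxOpenFilesPerDb = 300; // only bounded if options.setMaxOpenFiles(...) is set
final long estimatedFds = partitions * storesPerTask * maxOpenFilesPerDb; // ~90,000
// On top of that: sockets to the brokers, changelog/repartition segment files, JVM jars, etc.

For reference, my RocksDBConfigSetter: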
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class CustomRocksDBConfig implements RocksDBConfigSetter {
    // 2 GB LRU block cache; note Streams creates one setter instance per store.
    private final org.rocksdb.Cache cache = new org.rocksdb.LRUCache(2 * 1024L * 1024L * 1024L);

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
        tableConfig.setBlockCache(cache);
        // setBlockCacheSize(...) is superseded once an explicit cache is set, so that call was dropped.
        tableConfig.setBlockSize(4 * 1024L);
        tableConfig.setCacheIndexAndFilterBlocks(true);
        options.setTableFormatConfig(tableConfig);
        options.setMaxWriteBufferNumber(7);
        options.setMinWriteBufferNumberToMerge(4);
        options.setWriteBufferSize(25 * 1024L * 1024L);
    }

    @Override
    public void close(final String storeName, final Options options) {
        cache.close(); // release the native cache to avoid leaking off-heap memory
    }
}
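For completeness, the setter is wired in through the standard rocksdb.config.setter property; a minimal sketch (application id and bootstrap servers are placeholders):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");          // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");  // placeholder
props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);

The error that eventually shuts the threads down: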
Caused by: org.rocksdb.RocksDBException: While open a file for appending: /data/directory/generator.1583280000000/002360.sst: Too many open files
at org.rocksdb.RocksDB.flush(Native Method)
at org.rocksdb.RocksDB.flush(RocksDB.java:2394)
at org.apache.kafka.streams.state.internals.RocksDBStore$SingleColumnFamilyAccessor.flush(RocksDBStore.java:581)
at org.apache.kafka.streams.state.internals.RocksDBStore.flush(RocksDBStore.java:384)
... 17 more
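One mitigation I am considering is capping the descriptors each RocksDB instance may hold, inside the same setter; this is untested on my side, and the value 300 is a guess:

// Inside setConfig(...), in addition to the settings above:
options.setMaxOpenFiles(300); // cap SST file handles per RocksDB instance (hypothetical value)
// With cacheIndexAndFilterBlocks(true), index/filter blocks live in the block cache,
// so a low cap mostly costs extra file re-opens on cold reads.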