Mohamed Taher Alrefaie

Reputation: 16243

Why does Amazon Titan throw an Elasticsearch exception?

I'm using Spark to write a heavy application requiring 50 machines on a cluster to work on reading/writing on the graph.

At the moment, I'm testing it locally, meaning that there are 50 threads starting in parallel. Each one of them initializes the database connection.

For some reason, I get this error:

16/03/16 21:18:19 WARN state.meta: [Jacob "Jake" Fury] failed to find dangling indices
java.nio.file.FileSystemException: /tmp/searchindex/data/elasticsearch/nodes/32/indices: Too many open files
    at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
    at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
    at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
    at java.nio.file.Files.newDirectoryStream(Files.java:457)
    at org.elasticsearch.env.NodeEnvironment.findAllIndices(NodeEnvironment.java:530)
    at org.elasticsearch.gateway.local.state.meta.LocalGatewayMetaState.clusterChanged(LocalGatewayMetaState.java:245)
    at org.elasticsearch.gateway.local.LocalGateway.clusterChanged(LocalGateway.java:215)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:467)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

I'm not using Elasticsearch at all in the configuration file, and all my indexes are composite. Does the Titan DynamoDB implementation use it internally? How can I solve this exception?

Upvotes: 1

Views: 78

Answers (1)

Chen Harel

Reputation: 10052

This is not directly related to Titan's DynamoDB implementation; the error `Too many open files` is a fairly common problem on *nix operating systems.

Look up how to raise the number of open files allowed by your OS and the problem will go away (see, for example, How to increase Neo4j's maximum file open limit (ulimit) in Ubuntu?).
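As a quick sketch, on Linux you can inspect and raise the per-process file-descriptor limit with `ulimit` (the `65536` value below is just an example; a permanent change usually goes through `/etc/security/limits.conf`):

```shell
# Show the current soft limit on open file descriptors
ulimit -n

# Show the hard limit (the ceiling a non-root user may raise the soft limit to)
ulimit -H -n

# Raise the soft limit for the current shell session, up to the hard limit
ulimit -S -n "$(ulimit -H -n)"

# To make a higher limit permanent on Ubuntu, add lines like these to
# /etc/security/limits.conf (user name and value are examples):
#   youruser  soft  nofile  65536
#   youruser  hard  nofile  65536
```

Note that the change applies only to the shell session (and its children) in which it is made, so the Spark driver and executors must be launched from an environment where the higher limit is already in effect.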

Upvotes: 1

Related Questions