lobna

Reputation: 11

Disconnected from Neo4j. Please check if the cord is unplugged

I am running simple queries on Neo4j 2.1.7. I am trying to execute this query:

MATCH (a:Caller)-[:MADE_CALL]-(c:Call)-[:RECEIVED_CALL]-(b:Receiver) CREATE(a)-[:CALLED]->(b) RETURN a,b

While the query is executing, I get the following error:

Disconnected from Neo4j. Please check if the cord is unplugged.

Then another error:

GC overhead limit exceeded

I'm working on Windows Server 2012 with 16 GB of RAM, and here is my neo4j.properties file:

neostore.nodestore.db.mapped_memory=1800M
neostore.relationshipstore.db.mapped_memory=1G
#neostore.relationshipgroupstore.db.mapped_memory=10M
neostore.propertystore.db.mapped_memory=500M
neostore.propertystore.db.strings.mapped_memory=250M
neostore.propertystore.db.arrays.mapped_memory=10M

cache_type=weak
keep_logical_logs=100M size

and my neo4j-community.vmoptions file:

-Xmx8192
-Xms4098
-Xmn1G
-include-options ${APPDATA}\Neo4j Community\neo4j-community.vmoptions

I have 6,128,644 nodes, 6,506,355 relationships, and 10,488,435 properties.

Any solution?

Upvotes: 1

Views: 909

Answers (1)

Shrey Gupta

Reputation: 5617

TL;DR: Neo4j disconnected because your query is too inefficient. The solution is to improve the query.

Your Neo4j instance appears to have timed out and hit the GC overhead limit because of how computationally intensive your query is. When you start the Neo4j database, you have the option of configuring certain JVM settings, which include the amount of memory and heap size available to Neo4j. When a query exceeds those limits, Neo4j terminates the query, reports the GC error, and disconnects.
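One thing worth double-checking in your neo4j-community.vmoptions (an observation, not a guaranteed fix): JVM heap flags expect a unit suffix (k, m, or g), and a bare value like -Xmx8192 is read as bytes. A minimal sketch of what the heap lines might look like on a 16 GB machine, with the exact sizes being assumptions:

# heap sizes are assumptions for a 16 GB host; note the unit suffixes
-Xms4g
-Xmx8g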

Looking at the information you gave on the database, there are 6M nodes with 6M relationships. Considering that your query essentially looks for all paths from Callers to Receivers across those 6M nodes and then performs a bulk write for every match, it's not surprising that Neo4j crashes or disconnects. I would suggest finding a way to limit the query (even with a simple LIMIT keyword) and running multiple smaller queries to get the job done.
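For example, here is a sketch of one way to batch the write; the WHERE clause, the DISTINCT, and the batch size of 10000 are assumptions I'm adding, not part of your original query:

MATCH (a:Caller)-[:MADE_CALL]-(c:Call)-[:RECEIVED_CALL]-(b:Receiver)
// skip pairs already linked by an earlier batch
WHERE NOT (a)-[:CALLED]->(b)
// create one CALLED relationship per distinct pair, 10000 pairs per run
WITH DISTINCT a, b
LIMIT 10000
CREATE (a)-[:CALLED]->(b)
RETURN count(*)

Re-running the statement until count(*) comes back as 0 should reach the same end state as the single big query, without holding millions of pending writes in the heap in one transaction.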

Upvotes: 1
