comiventor

Reputation: 4122

neo4j breadth first traversal memory issue

I have a graph with a million nodes and 3 million edges loaded into Neo4j. It crashes while doing a breadth-first traversal over it, complaining of insufficient memory on an 8 GB machine. Each node label string has an average length of 40 characters.

What kind of internal representation does Neo4j use that requires so much memory, especially for traversal? Given that Neo4j is able to represent the entire graph, why does it fail while trying to maintain the set of visited nodes required for breadth-first traversal?

As per my understanding, an adjacency-list representation of the above graph should fit in tens of MB, assuming a 64-bit representation of each node and edge identifier.
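As a rough sanity check of that claim, here is a minimal back-of-the-envelope sketch using the figures from the question (a million nodes, 3 million edges, 64-bit ids, ~40-character labels); the per-field byte counts are assumptions, not Neo4j's actual record layout:

```python
# Back-of-the-envelope size of a plain adjacency-list representation.
NODES = 1_000_000
EDGES = 3_000_000
ID_BYTES = 8          # one 64-bit identifier
LABEL_BYTES = 40      # average label length, 1 byte per character

ids = NODES * ID_BYTES            # array of node ids
adjacency = EDGES * 2 * ID_BYTES  # each edge stored as a (src, dst) pair
labels = NODES * LABEL_BYTES      # raw label text

total_mb = (ids + adjacency + labels) / 1024**2
print(f"{total_mb:.0f} MB")  # well under 100 MB, far below 8 GB
```

Even doubling every field for object overhead stays orders of magnitude below the machine's 8 GB, which is what makes the crash surprising.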

Upvotes: 2

Views: 112

Answers (2)

comiventor

Reputation: 4122

Bingo @brian-underwood! You are right.

I hadn't configured Neo4J to use more memory.

Since the issue was related to nodes only, the following is what I modified:

  • neostore.nodestore.db.mapped_memory=256M # increased
  • neostore.relationshipstore.db.mapped_memory=3G # unchanged
  • neostore.propertystore.db.mapped_memory=256M # increased
  • neostore.propertystore.db.strings.mapped_memory=200M # unchanged
  • neostore.propertystore.db.arrays.mapped_memory=200M # unchanged

I also enabled auto-indexing for nodes and their keys:

  • node_auto_indexing=true
  • node_keys_indexable=key_name
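For reference, these settings go into a single properties file; a sketch assuming the Neo4j 2.x layout, where they live in conf/neo4j.properties (in 3.x this memory model was replaced by dbms.memory.* settings):

```properties
# conf/neo4j.properties (Neo4j 2.x-era settings, as used above)
neostore.nodestore.db.mapped_memory=256M
neostore.relationshipstore.db.mapped_memory=3G
neostore.propertystore.db.mapped_memory=256M
neostore.propertystore.db.strings.mapped_memory=200M
neostore.propertystore.db.arrays.mapped_memory=200M

node_auto_indexing=true
node_keys_indexable=key_name
```

A restart of the server is needed for the changes to take effect.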

Upvotes: 0

Brian Underwood

Reputation: 10856

You might have 8 GB available, but are you configuring Neo4j to allow it to use that space? Can you see how much it's taking up when it's working?

Here are some resources:

http://neo4j.com/developer/guide-performance-tuning/

http://neo4j.com/docs/stable/server-performance.html

http://neo4j.com/docs/stable/configuration-settings.html#config_neostore.nodestore.db.mapped_memory

http://neo4j.com/developer/guide-sizing-and-hardware-calculator/
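Note that the mapped_memory settings above control off-heap memory-mapped store files; the visited-node set built during a traversal lives on the JVM heap, which is configured separately. A sketch assuming the Neo4j 2.x wrapper configuration (setting names from that version's conf/neo4j-wrapper.conf):

```properties
# conf/neo4j-wrapper.conf -- JVM heap size in MB
wrapper.java.initmemory=2048
wrapper.java.maxmemory=4096
```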

Upvotes: 2
