Yiliang

Reputation: 473

java.lang.OutOfMemoryError related to Spark Graphframe bfs

The OutOfMemoryError appears after I call bfs 20+ times in this way:

list_locals = []
# g is the GraphFrame with > 3 million nodes and > 15 million edges.

def fn(row):
    # Build the "from" and "to" vertex filter expressions for this row.
    arg1 = "id = '%s'" % row.arg1
    arg2 = "id = '%s'" % row.arg2
    results = g.bfs(arg1, arg2, maxPathLength=4)
    # Pull the resulting paths back to the driver and keep them.
    list_locals.append(results.rdd.collect())
    results = None

# t is a list of Row objects
for i in range(101):
    fn(t[i])
print i

From the logs, I can see that bfs creates a lot of broadcast variables and tries to clear them. I wonder whether the broadcast variables are not being cleaned up completely. I have attached the most recent error messages below. Thanks!

16/07/11 09:44:28 INFO storage.BlockManagerInfo: Removed broadcast_922_piece0 on dsg-cluster-server-s06.xxx:40047 in memory (size: 8.1 KB, free: 3.0 GB)

16/07/11 09:44:38 INFO storage.MemoryStore: Block broadcast_924 stored as values in memory (estimated size 24.4 KB, free 2.8 MB)

Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space

Upvotes: 0

Views: 1072

Answers (1)

Kien Truong

Reputation: 11381

Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space

This is an exception in the driver process, so you should increase the driver memory.
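For example, something along these lines; the 8g figure and the script name are only placeholders, so size the driver heap to how much collected bfs output it actually has to hold:

# Give the driver JVM a larger heap when submitting the job.
# 8g is an illustrative value, not a recommendation.
spark-submit --driver-memory 8g your_bfs_script.py

Alternatively, set spark.driver.memory in conf/spark-defaults.conf. Keep in mind that every call to fn collects the bfs paths back into list_locals on the driver, so the driver heap has to hold all of those results at once.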

Upvotes: 1
