Reputation: 61
Using ND4J and Flink, I have a process function that receives a POJO, runs a series of calculations on ND4J INDArrays, and emits a result POJO. When running the job on the cluster with the Linux CPU backend (both with and without AVX-512), memory usage only ever grows, so it looks like the ND4J calculations inside the process function are leaking memory. I keep no references outside that method, so there is no reason for the memory not to be released. GC runs, but it frees very little. I also tried the workspace feature, but it didn't change anything.
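For reference, the process function is shaped roughly like the sketch below. The POJO types, field accessors, and the actual math are placeholders for illustration; the workspace usage follows ND4J's try-with-resources pattern.

```java
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.nd4j.linalg.api.memory.MemoryWorkspace;
import org.nd4j.linalg.api.memory.conf.WorkspaceConfiguration;
import org.nd4j.linalg.api.memory.enums.AllocationPolicy;
import org.nd4j.linalg.api.memory.enums.LearningPolicy;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class MathProcessFunction extends ProcessFunction<MathProcessFunction.InputPojo, MathProcessFunction.OutputPojo> {

    // One shared workspace configuration so off-heap buffers are recycled
    // across invocations instead of being re-allocated for every element.
    private static final WorkspaceConfiguration WS_CONF = WorkspaceConfiguration.builder()
            .policyAllocation(AllocationPolicy.STRICT)
            .policyLearning(LearningPolicy.FIRST_LOOP)
            .build();

    @Override
    public void processElement(InputPojo value, Context ctx, Collector<OutputPojo> out) {
        // All INDArrays created inside this block live in the workspace and
        // are released (or reused) when the block exits.
        try (MemoryWorkspace ws = Nd4j.getWorkspaceManager()
                .getAndActivateWorkspace(WS_CONF, "process-ws")) {
            INDArray input = Nd4j.create(value.getFeatures()); // placeholder input
            INDArray result = input.mul(2.0).add(1.0);         // placeholder math

            // detach() copies the result out of the workspace so only the
            // final values escape this scope.
            out.collect(new OutputPojo(result.detach().toDoubleVector()));
        }
    }

    // Hypothetical POJOs standing in for the real input/output types.
    public static class InputPojo {
        private double[] features;
        public double[] getFeatures() { return features; }
    }

    public static class OutputPojo {
        private final double[] values;
        public OutputPojo(double[] values) { this.values = values; }
    }
}
```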
I have tried changing the GC, changing the heap / off-heap sizes, setting Bytedeco's maxbytes and maxphysicalbytes, and using workspaces, but nothing helps.
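The memory-related settings I experimented with looked roughly like this in flink-conf.yaml (the sizes are illustrative, not my exact values):

```yaml
# flink-conf.yaml -- illustrative sizes only
taskmanager.memory.process.size: 8g
taskmanager.memory.task.off-heap.size: 4g
# JavaCPP / Bytedeco limits that ND4J's off-heap allocator respects
env.java.opts.taskmanager: -Dorg.bytedeco.javacpp.maxbytes=3G -Dorg.bytedeco.javacpp.maxphysicalbytes=7G
```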
Upvotes: 5
Views: 195