Reputation: 3
I am running a Flink streaming job that uses the Kafka connector: it reads events from one Kafka topic, transforms them, and publishes the results to another Kafka topic. I have no requirement to store or manage state during the job execution, yet the pod the job runs on goes out of memory very often. To identify the root cause I took a heap dump of the Java process, and the heap looks fine, so I am not sure where the memory inside the pod is being consumed.

Note: I am running the entire Flink job in a single Java process on a Kubernetes cluster.

It would be great if anyone could help me fix this issue, or suggest a better deployment model if that would help manage the memory.
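For context, the job is essentially a plain, stateless Kafka-to-Kafka pipeline. A minimal sketch of what it does is below, assuming the DataStream API with the KafkaSource/KafkaSink connectors; the broker address, topic names, consumer group, and the map function are placeholders, not my actual code:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaToKafkaJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Source: read events from the input topic (names are placeholders).
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("input-topic")
                .setGroupId("my-consumer-group")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // Sink: publish the transformed events to the output topic.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("output-topic")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();

        // Stateless pipeline: source -> transform -> sink.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
                .map(value -> value.toUpperCase()) // placeholder for the actual transformation
                .sinkTo(sink);

        env.execute("kafka-to-kafka-transform");
    }
}
```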
Upvotes: 0
Views: 18