Reputation: 6089
The program receives image data in bytes from an IP camera and then processes the image. When it starts, the program uses about 470 MB of RAM, and every second the usage grows by up to 15 MB; this continues until there is no free memory left and the computer hangs.
The method getImage()
is called every 100 ms.
I have done an experiment that I am going to share here. The original code looks like this (the buffer is created only once and can be reused afterwards):
private static final int WIDTH = 640;
private static final int HEIGHT = 480;
private byte[] sJpegPicBuffer = new byte[WIDTH * HEIGHT];
private Mat readImage() throws Exception {
boolean isGetSuccess = camera.getImage(lUserID, sJpegPicBuffer, WIDTH * HEIGHT);
if (isGetSuccess) {
return Imgcodecs.imdecode(new MatOfByte(sJpegPicBuffer), Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
}
return null;
}
With the above code, RAM usage grows until the computer hangs (99%, about 10 GB). Then I changed the code like this (a new buffer is created on every call):
private static final int WIDTH = 640;
private static final int HEIGHT = 480;
private Mat readImage() throws Exception {
byte[] sJpegPicBuffer = new byte[WIDTH * HEIGHT];
boolean isGetSuccess = camera.getImage(lUserID, sJpegPicBuffer, WIDTH * HEIGHT);
if (isGetSuccess) {
return Imgcodecs.imdecode(new MatOfByte(sJpegPicBuffer), Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
}
return null;
}
With this version, RAM usage grows up to about 43% (5 GB) and is then freed.
Now the question: the first block of code seems to be the optimized one, since the buffer is reused and no new memory has to be allocated on every call, yet the result is not what we want. Why?
The second block of code seems less optimized than the first one, but it works better.
In general, why does RAM usage grow to 10 GB in the first case and only 5 GB in the second? How can we control this situation?
Upvotes: 1
Views: 126
Reputation: 8409
This is speculation, though I have seen similar scenarios in real life a few times.
Your Java code is interacting with a native camera SDK (a DLL). Native code tends to allocate buffers in non-JVM memory and use some internal Java objects to access those buffers. A common (and very poor) practice is to rely on a Java object's finalizer to deallocate the native buffer once it is no longer used.
Finalizers rely on the garbage collector to trigger them, and this is the reason that pattern often fails. Although a finalizer is guaranteed to run eventually, in practice it will not run as long as there is enough free space in the Java heap, so the native memory is not deallocated in a timely fashion.
The Java heap size has a hard limit, but a native memory pool used by C/C++ code can grow as long as the OS allows it to grow.
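To illustrate the pattern described above, here is a minimal sketch (the NativeBuffer class and the byte counter are hypothetical; real SDKs call malloc/free through JNI): the native memory is reclaimed deterministically only via an explicit free(), while the finalizer is merely a safety net that runs only if and when the GC collects the object.

```java
import java.util.concurrent.atomic.AtomicLong;

public class NativeBuffer {
    // Simulates the off-heap (non-JVM) memory pool managed by the native SDK.
    static final AtomicLong nativeBytesInUse = new AtomicLong();

    private final long size;
    private boolean freed;

    public NativeBuffer(long size) {
        this.size = size;
        nativeBytesInUse.addAndGet(size); // stands in for malloc() in the native library
    }

    // Explicit deallocation: the reliable, timely path.
    public synchronized void free() {
        if (!freed) {
            freed = true;
            nativeBytesInUse.addAndGet(-size); // stands in for free() in the native library
        }
    }

    // The poor-practice safety net: runs only when (and if) the GC collects
    // this object, which may be never while the Java heap has free space.
    @Override
    protected void finalize() {
        free();
    }

    public static void main(String[] args) {
        NativeBuffer buf = new NativeBuffer(640 * 480);
        System.out.println(nativeBytesInUse.get()); // 307200: "native" bytes held
        buf.free();
        System.out.println(nativeBytesInUse.get()); // 0: freed deterministically
    }
}
```

If free() is never called, the simulated native usage keeps growing no matter how small the Java heap footprint of the NativeBuffer objects is, which is exactly the failure mode above.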
Concerning your problem
I assume that in your first snippet the Java heap traffic is low. The GC is idle and no finalizers are executed, so the memory allocated outside of the Java heap keeps growing.
In the second snippet you are creating pressure on the Java heap, forcing the GC to run frequently. As a side effect of GC, finalizers are executed and the native memory is released.
Instead of finalizers and buffers allocated in native code, your camera SDK may rely on Java direct memory buffers (this memory is directly accessible from C code, so it is convenient for passing data across the JVM boundary). The effect would be mostly the same, though, because the Java direct buffer implementation uses the same pattern (with phantom references instead of finalizers).
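The phantom-reference flavor of the pattern can be sketched with java.lang.ref.Cleaner (Java 9+); the class and the byte counter here are illustrative, not part of any camera SDK. The registered cleaning action releases the (simulated) native memory either when the owner becomes unreachable and the GC notices, or deterministically when close() is called.

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicLong;

public class DirectBufferSketch implements AutoCloseable {
    static final Cleaner CLEANER = Cleaner.create();
    // Simulates the off-heap memory pool backing direct buffers.
    static final AtomicLong nativeBytesInUse = new AtomicLong();

    // The cleaning action must not hold a reference to the owner
    // (otherwise the owner would never become unreachable), so its
    // state lives in a separate static class.
    private static final class Deallocator implements Runnable {
        private final long size;
        Deallocator(long size) { this.size = size; }
        @Override public void run() { nativeBytesInUse.addAndGet(-size); }
    }

    private final Cleaner.Cleanable cleanable;

    public DirectBufferSketch(long size) {
        nativeBytesInUse.addAndGet(size); // simulated native allocation
        this.cleanable = CLEANER.register(this, new Deallocator(size));
    }

    // Deterministic release; Cleaner guarantees the action runs at most once.
    @Override public void close() { cleanable.clean(); }

    public static void main(String[] args) {
        try (DirectBufferSketch b = new DirectBufferSketch(1 << 20)) {
            System.out.println(nativeBytesInUse.get()); // 1048576
        } // close() runs here
        System.out.println(nativeBytesInUse.get());     // 0
    }
}
```

Without the explicit close(), the release would again depend on GC activity, reproducing the same untimely deallocation as with finalizers.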
Suggestions
- The -XX:+PrintGCDetails and -XX:+PrintReferenceGC options will print information about reference processing, so you can verify whether finalizers/phantom references are indeed being used.
- -XX:MaxDirectMemorySize=X can be used to cap direct buffer usage, if your camera's SDK relies on direct buffers. This is not a solution, but a safety net: it lets your application fail with an OutOfMemoryError before the OS memory is exhausted.
- You can trigger the GC explicitly (System.gc()). This is another poor option, as the behavior of System.gc() is JVM dependent.
PS
This is my post about resource management with finalizers and phantom references.
Upvotes: 1