olmo_sattath

Reputation: 173

Memory consumption issues of a Java program

I have a Java program that runs on my Ubuntu 10.04 machine and, without any user interaction, repeatedly queries a MySQL database and then creates img and txt files from the data read from the DB. It makes tens of thousands of queries and creates tens of thousands of files.

After some hours of running, the available memory on my machine, including swap space, is completely used up. I haven't started any other programs, and the processes running in the background neither consume much memory nor grow noticeably in consumption.

To find out what is allocating so much memory I wanted to analyse a heap dump, so I started the process with -Xms64m -Xmx128m -XX:+HeapDumpOnOutOfMemoryError.
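The full command looked something like this (the jar and main class names here are placeholders, not the real ones):

java -Xms64m -Xmx128m -XX:+HeapDumpOnOutOfMemoryError -cp myapp.jar com.example.Main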

To my surprise, the situation was the same as before: after some hours the program had used up all of the swap space, way beyond the given max of 128m.

Another run, debugged with VisualVM, showed that the heap allocation never goes beyond the max of 128m - when the allocated memory approaches the max, a big part of it is released again (by the garbage collector, I assume).

So it cannot be a problem of a steadily growing heap.

When the memory is all used up:

free shows the following:

             total       used       free     shared    buffers     cached
Mem:       2060180    2004860      55320          0        848    1042908
-/+ buffers/cache:     961104    1099076
Swap:      3227640    3227640          0

top shows the following:

USER    VIRT    RES     SHR     COMMAND
[my_id] 504m    171m    4520    java
[my_id] 371m    162m    4368    java

(by far the two "biggest" processes and the only java processes running)

My question is: what is consuming all this memory beyond the Java heap, and how can I track down the cause?

Upvotes: 17

Views: 8127

Answers (8)

vladimir e.

Reputation: 723

You say you are creating image files - are you also creating image objects? If so, are you calling dispose() on these objects when you are done?

If I remember rightly, Java AWT image objects allocate native resources that must be disposed of explicitly.
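A minimal sketch of the pattern I mean (image size, drawing code and file name are made up):

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImageWriteExample {
    public static void writeImage(File out) throws IOException {
        BufferedImage img = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        try {
            g.drawString("hello", 10, 20); // your actual drawing code here
        } finally {
            g.dispose(); // releases the native resources held by the graphics context
        }
        ImageIO.write(img, "png", out);
        img.flush(); // releases any cached/native resources held by the image
    }
}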

Upvotes: 0

Tom De Leu

Reputation: 8274

Are you creating separate threads to run your "tasks"? The memory used to create threads is separate from the Java heap.

This means that even if you specify -Xmx128m the memory used by the Java process could be much higher, depending on how many threads you're using and the thread stack size (each thread gets a stack allocated, of size specified by -Xss).

Example from work recently: we had a Java heap of 4GB (-Xmx4G), but the OS process was consuming upwards of 6GB, also using up the swap space. When I checked the process status with cat /proc/<PID>/status I noticed we had 11,000 threads running. Since we had -Xss256k set, this is easily explained: 10,000 threads at 256KB each means 2.5GB.
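If you want to watch the thread count from inside the JVM instead of via /proc, the standard management API can report it; a minimal sketch:

import java.lang.management.ManagementFactory;

public class ThreadCountCheck {
    public static void main(String[] args) {
        int count = ManagementFactory.getThreadMXBean().getThreadCount();
        System.out.println("Live threads: " + count);
        // Rough native stack memory: count * stack size, e.g. with -Xss256k:
        // 10,000 threads * 256KB = 2.5GB outside the Java heap.
    }
}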

Upvotes: 1

sharakan

Reputation: 6921

As @maximdim and @JamesBranigan point out, the likely culprit is some native interaction from your code. But as you haven't been able to track down, with the available tools, exactly where the problematic interaction is, why don't you try a brute force approach?

You've outlined a two part process: query MySQL and write files. Either one of those things could be excluded from the process as a test. Test one: eliminate the query and hard code the content that would have been returned. Test two: do the query, but don't bother writing the files. Do you still have leaks?

There may be other testable cases as well, depending on what else your application does.
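A hypothetical sketch of how the two tests could be toggled without code changes (all the names here are stand-ins for your real query and file-writing code):

public class LeakIsolationTest {
    // Stand-ins for the real steps.
    static String queryDb(int task) { return "row-" + task; }
    static String hardCodedData(int task) { return "fake-row-" + task; }
    static void writeFiles(String data) { /* create img/txt files */ }

    public static void main(String[] args) {
        // Toggle with -Dtest.skipQuery=true / -Dtest.skipWrite=true
        boolean skipQuery = Boolean.getBoolean("test.skipQuery");
        boolean skipWrite = Boolean.getBoolean("test.skipWrite");
        for (int task = 0; task < 10000; task++) {
            String data = skipQuery ? hardCodedData(task) : queryDb(task);
            if (!skipWrite) {
                writeFiles(data);
            }
        }
    }
}

Run each configuration for a few hours and watch RES/VIRT in top; whichever half still grows is where the leak lives.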

Upvotes: 1

olmo_sattath

Reputation: 173

As there was no activity after the day I asked the question (until March 23), and as I still couldn't find the cause of the memory consumption, I "solved" the problem pragmatically.

The program causing the problem is basically a repetition of a "task" (i.e. querying a DB and then creating files). It is relatively easy to parameterize the program so that a certain subset of tasks is executed and not all of them.

So now I repeatedly run my program from a shell script, each process executing only a subset of the tasks (parameterized through arguments). In the end all tasks are executed, but since a single process only handles a subset, there are no memory issues any more.
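A minimal sketch of the parameterization (task numbering and method names are simplified here):

public class BatchRunner {
    public static void main(String[] args) {
        int first = Integer.parseInt(args[0]);
        int last = Integer.parseInt(args[1]);
        for (int task = first; task <= last; task++) {
            runTask(task); // query the DB and write the img/txt files for one task
        }
        // The JVM exits here, so the OS gets back all memory, native or not.
    }

    static void runTask(int task) { /* ... */ }
}

The shell script then simply calls java BatchRunner 0 999, java BatchRunner 1000 1999, and so on.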

For me that is a sufficient solution. If you have a similar problem and your program has a batch-like execution structure this may be a pragmatic approach.

When I find the time, I will look into the new suggestions and hopefully identify the root cause (thanks for the help!).

Upvotes: 0

James Branigan

Reputation: 1154

@maximdim's answer is great general advice for this kind of situation. What is likely happening here is that a very small Java object is being retained that causes some larger amount of native (OS-level) memory to hang around. This native memory is not accounted for in the Java heap. The Java object is likely so small that you will hit your system memory limit well before the Java object retention would overwhelm the heap.

So the trick for finding this is to use successive heap dumps, far enough apart that you have noticed memory growth for the whole process, but not so far apart that a ton of work has gone on. What you are looking for are Java object counts in the heap that keep increasing and have native memory attached.

These could be file handles, sockets, db connections, or image handles just to name a few that are likely directly applicable for you.
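On a HotSpot JVM the dumps can be taken on demand with jmap, e.g. jmap -dump:format=b,file=dump1.hprof <pid> now and jmap -dump:format=b,file=dump2.hprof <pid> a while later; jmap -histo <pid> also gives a quick per-class instance count that is easy to diff between two snapshots.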

On rarer occasions, there is a native resource that is leaked by the Java implementation itself, even after the Java object is garbage collected. I once ran into a WinCE 5 bug where 4k were leaked with each socket close. So there was no Java object growth, but there was process memory usage growth. In these cases, it helps to add some counters and keep track of Java allocations of objects with native memory vs. the actual growth. Then, over a short enough window, you can look for correlations and use them to build smaller test cases.

One other hint: make sure all your close operations are in finally blocks, just in case an exception pops you out of your normal control flow. This has been known to cause this sort of problem as well.
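For example, a nested close-in-finally pattern like the sketch below never skips a close, even if the query or the file writing throws (connection details and the query are placeholders; on Java 7+ try-with-resources achieves the same more compactly):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SafeQuery {
    public static void run() throws SQLException {
        Connection con = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "user", "pass");
        try {
            Statement st = con.createStatement();
            try {
                ResultSet rs = st.executeQuery("SELECT id, payload FROM tasks");
                try {
                    while (rs.next()) {
                        // build the img/txt files from rs here
                    }
                } finally {
                    rs.close(); // runs even if an exception pops us out of the loop
                }
            } finally {
                st.close();
            }
        } finally {
            con.close();
        }
    }
}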

Upvotes: 2

maximdim

Reputation: 8169

If it is indeed your Java process that is taking the memory and there is nothing suspicious in VisualVM or the memory dump, then it must be somewhere in native code - either in the JVM or in one of the libraries you're using. At the JVM level it could happen, for example, if you're using NIO or memory-mapped files. If some of your libraries make native calls, or you're using a JDBC driver that is not type 4 for your database, the leak could be there.

Some suggestions:

  • There are some details on how to find memory leaks in native code here; it is a good read, too.
  • As usual, make sure you're properly closing all resources (files, streams, connections, threads etc.). Most of these call into native implementations at some point, so the memory they consume might not be directly visible in the JVM.
  • Check the resources consumed at the OS level - the number of open files, file descriptors, network connections etc. (a small sketch for this follows below).
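As a sketch for the last point: on Linux the JVM can watch its own file-descriptor count through /proc (Linux-specific, purely illustrative):

import java.io.File;

public class FdWatcher {
    public static int openFdCount() {
        // Each entry in /proc/self/fd is one open descriptor of this process.
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length;
    }

    public static void main(String[] args) {
        System.out.println("Open file descriptors: " + openFdCount());
    }
}

If this number climbs steadily over the hours, something is not being closed.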

Upvotes: 6

Justin

Reputation: 4116

Your file system caching is probably causing this: the file system cache will eat up all available memory when doing a large amount of IO. Your system's performance should not be adversely affected by this behaviour, though - the kernel immediately releases file system cache when a process requests memory.

Upvotes: 0

Tassos Bassoukos

Reputation: 16152

Hmm... use ipcs to check that shared memory segments aren't left open. Check the open file descriptors of your JVM (/proc/<jvm process id>/fd/*). In top, press f then p to add the SWAP column, then F and p to sort the task list by swap usage.

That's all I can come up with for now, hope it helps at least a bit.

Upvotes: 1
