Reputation: 683
In ArangoDB 2.4.0 I observe the following: when executing a query which runs into a timeout, there seems to be a memory leak. Description:
I execute a query which lasts longer than allowed by the setting request-timeout = 3600 (e.g. in the arangosh config).
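For reference, the relevant setting in my arangosh configuration file looks roughly like this (the surrounding section layout is just an illustrative excerpt, not my full config):

    # arangosh.conf -- excerpt (example)
    [server]
    # client-side request timeout in seconds (here: one hour)
    request-timeout = 3600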
arangod starts working and consumes CPU and RAM.
After the given time (here: 3600 seconds) the query throws an exception (2001 - could not connect to server), which by the way was a bit confusing at first, since in my case it is not a connection error but a timeout error.
arangod stops using CPU, but does not free the RAM it was using.
Up to now, even with further use, the RAM usage has never gone down again. I can even unload all collections, so the RAM must be held by something else.
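For completeness, I unloaded the collections in arangosh roughly like this (the collection name is just a placeholder):

    // arangosh: unload a single collection (placeholder name)
    db.mycollection.unload();

    // or unload every collection in the database:
    db._collections().forEach(function (c) { c.unload(); });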
As long as I run queries which finish before the timeout arrives, everything runs perfectly.
Is it possible that there is a memory leak in such a scenario? Or do I have to manually start some kind of garbage collector, or do something else?
Upvotes: 1
Views: 642
Reputation: 683
OK, in the meantime I at least have some (partial?) answers:
In some of those "error scenarios", new skiplist indexes were also being created. This seems to be a task which runs much longer than expected, and which also takes up a lot of additional RAM. Moreover, when killing the server and restarting it, the server wants to do the initial index build again, so RAM is needed for the whole collection, the existing index space, the new index, and a large amount of temporary data. An example of this kind of index creation is sketched below.
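For illustration, the index creation that triggered this looked roughly as follows in arangosh (collection and attribute names are placeholders); as far as I can tell, in 2.x index creation runs in the foreground, which would explain the long runtime on a large collection:

    // arangosh (ArangoDB 2.x): create a skiplist index on one attribute
    db.mycollection.ensureSkiplist("someAttribute");

    // inspect which indexes exist on the collection:
    db.mycollection.getIndexes();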
The web interface seems to be single-threaded; even arangosh commands are blocked in the meanwhile. So if you click additional buttons, or type a command which e.g. also needs to load additional collections while executing, they will simply be delayed ... and maybe executed at a point in time where you no longer expect them to run. So I declare these as beginner pitfalls of mine :-)
Upvotes: 1