Reputation: 127
I have an XQuery report that at times holds around 500K URIs in a map, checks for their existence in the database (cts:search with cts:document-query), and returns the diff. However, response time degrades when we hit the same E-node with concurrent requests.
Is there an upper limit on how much memory maps can consume? And can maps be swapped to disk?
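For context, the existence-check pattern described above looks roughly like this (the map construction and URIs are illustrative; cts:uris requires the URI lexicon to be enabled on the database):

```xquery
xquery version "1.0-ml";

(: Illustrative sketch: $uri-map holds the expected URIs :)
let $uri-map := map:map()
let $_ := ("/docs/a.xml", "/docs/b.xml") ! map:put($uri-map, ., fn:true())
let $expected := map:keys($uri-map)

(: URI lexicon lookup avoids fetching document contents :)
let $found := cts:uris((), (), cts:document-query($expected))

(: The diff: expected URIs that do not exist in the database :)
return $expected[fn:not(. = $found)]
```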
Upvotes: 2
Views: 168
Reputation: 20414
You can store maps as XML in the database itself, but you still need to load them into memory to use them. You can also store them as server fields, but again they must be loaded into memory. Still, if building the map with 500k entries takes considerable time, it might be worth doing. On the other hand, how would you keep the map up to date?
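Both options sketched below; the document URI and field name are illustrative. A map:map serializes to an XML element, and the map:map() constructor accepts that element back, which is what makes option 1 work:

```xquery
xquery version "1.0-ml";

let $m := map:map()
let $_ := map:put($m, "/docs/a.xml", fn:true())

(: 1. Persist the map as an XML document and reload it :)
let $_ := xdmp:document-insert("/cache/uri-map.xml", document { $m })
let $reloaded := map:map(fn:doc("/cache/uri-map.xml")/map:map)

(: 2. Cache it in a server field (per host, survives across requests) :)
let $_ := xdmp:set-server-field("uri-map", $m)
return xdmp:get-server-field("uri-map")
```

Note that server fields are scoped to a single host, so on a cluster each E-node would build or load its own copy.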
Upvotes: 2
Reputation: 7842
You can see how much memory each forest is using with the Admin UI: go to the forest, click the Status tab, and scroll to the bottom. The "in-memory size" is the total of the mapped file sizes. You can also use xdmp:forest-status in an XQuery program to gather this information.
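For example, something along these lines reports a per-forest memory figure programmatically (the status element names are from the forest-status schema; verify them against your server version):

```xquery
xquery version "1.0-ml";
declare namespace fs = "http://marklogic.com/xdmp/status/forest";

(: Sum the in-memory size of each stand, per forest, for the
   current database; assumes the fs:memory-size element name :)
for $f in xdmp:database-forests(xdmp:database())
let $status := xdmp:forest-status($f)
return fn:concat(
  $status/fs:forest-name, ": ",
  fn:sum($status/fs:stands/fs:stand/fs:memory-size), " MB in memory"
)
```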
Memory consumption is limited by the address space of the OS, and by its virtual memory limits (available swap space). If the server memory consumption grows beyond available physical memory, the OS will swap pages to disk. This can have a severe impact on performance. So monitor the OS swap utilization to see if that is the likely cause of the slowdown.
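On a typical Linux host, for instance, you could check swap utilization and paging activity like this (commands assume standard procps tools are installed):

```shell
# One-time snapshot of memory and swap usage (in MiB)
free -m

# Sample paging activity: nonzero si/so columns under load
# mean the host is actively swapping pages in and out
vmstat 1 3
```

If si/so stay near zero while the slowdown occurs, swapping is probably not the culprit and the contention is elsewhere.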
Upvotes: 0