Deekshant

Reputation: 127

xdmp:invoke call in MarkLogic

The first time we invoke an XQuery module it takes some time. Subsequent invoke calls are faster, perhaps because the module has been parsed and is present in the module cache.

Consider the following scenario:

HTTP Server 1: xdmp:invoke('/a/sample.xqy')
HTTP Server 2: xdmp:invoke('/a/sample.xqy')

Both app servers point to the same modules database.
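A minimal way to observe the cold-versus-warm difference is to time the same invoke twice within one request. This is a sketch, not a rigorous benchmark: `xdmp:elapsed-time` is a standard MarkLogic builtin returning the duration since query start, and `/a/sample.xqy` is the module path from the scenario above.

```xquery
xquery version "1.0-ml";

(: cold call: the module may need to be parsed and loaded into the module cache :)
let $start1 := xdmp:elapsed-time()
let $_ := xdmp:invoke('/a/sample.xqy')
let $cold := xdmp:elapsed-time() - $start1

(: warm call: the module should now be served from the module cache :)
let $start2 := xdmp:elapsed-time()
let $_ := xdmp:invoke('/a/sample.xqy')
let $warm := xdmp:elapsed-time() - $start2
return ($cold, $warm)
```

In practice the warm duration is usually smaller, though if the invoked module touches documents, the data caches (expanded tree cache, etc.) also contribute to the speedup, so this measures the combined effect.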

Questions:

  1. Why are subsequent invoke calls faster?

  2. However, the invoke is slow if we invoke the same module from a different app server. For caching purposes, is the XQuery module treated as a separate object per app server?

  3. How does MarkLogic decide which entry to evict from the module cache?

  4. How long does MarkLogic keep a module in the cache after an xdmp:invoke call?

  5. Is there any MarkLogic configuration setting to increase the module cache size?

Upvotes: 2

Views: 713

Answers (3)

Mohit Singh

Reputation: 1

1. Caching: there are three kinds of cache - expanded tree, compressed tree, and list. The expanded tree cache holds the most recently used entries, then the compressed tree cache, then the list cache. The MarkLogic documentation explains this very nicely.

2. If the second app server is in the same group, ideally it should not take more time. If it is in a different group, it will take time, because caching is done at the group level.

3. The most recently used entries will be in the expanded tree cache and the least recently used in the list cache. Entries that have not been used recently are flushed from the cache.

4. Refer to point 3.

5. Yes: go to Admin → Groups → choose your group → Configure tab.

Upvotes: 0

derickson

Reputation: 305

Is the query touching data in the database? If so, subsequent calls to the very same query could be hitting the expanded tree cache on the E-node on the second call.

How much faster is the second call? Last time I measured it, the difference in query evaluation time was small compared to the seek time on most I/O solutions.

Upvotes: 0

mblakele

Reputation: 7840

  1. Caching - but I think you know that?
  2. It sounds like you have demonstrated that. It makes sense: different app servers might have different configurations that could affect evaluation - namespaces and schemas, for example, and possibly output options. So it is probably simpler to just build the app server id into the cache key.
  3. I believe it's an LRU cache. I don't know how large it is.
  4. Until it runs out of space, or the cache entry is invalidated by an update.
  5. Not as far as I know.

Upvotes: 3
