Umang Pachaury

Reputation: 41

Can we give more than 32GB of memory to a dedicated machine learning node in Elasticsearch?

As per the documentation, the Elasticsearch team recommends that every Elasticsearch node be given slightly less than 32GB of memory. My question is: does this apply to a dedicated machine learning node as well? And if we do give more than 32GB of memory to a dedicated machine learning node, what might the repercussions be?
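For context, the setup I am asking about looks roughly like this (a minimal sketch of a self-managed dedicated ML node on Elasticsearch 7.9+, where node.roles is available; the exact heap value is illustrative):

    # config/elasticsearch.yml -- make this a dedicated ML node
    node.roles: [ ml ]

    # config/jvm.options.d/heap.options -- pin the heap below the 32GB cutoff
    -Xms30g
    -Xmx30g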

Follow-up edit:

Hello, we followed this configuration and gave less than 31GB to the JVM heap. We have now configured one ML job, and while the job was running we could see the JVM heap utilization change in Kibana Stack Monitoring, but we didn't see any change in the total used RAM of our machine.

As per the documentation, ML jobs (the ml processes) use memory outside of the JVM heap. We first looked at the RAM used while the job was in the closed state, and again while the job was open and its datafeed was running; in both cases we could not see any change in the used-RAM figure. Can you explain why this is happening, and why we only see changes in JVM heap utilization and not in the total RAM used? We are using the free -m command to look at the machine's memory usage.
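For reference, this is how we are checking (a sketch; the host, port, and job name are assumptions). Anomaly detection jobs run their analysis in a native process called autodetect outside the JVM, so its memory should show up as process RSS rather than heap:

    # Native ML process memory (RSS, in KB) -- this lives outside the JVM heap
    ps -eo pid,rss,comm | grep autodetect

    # Model memory that Elasticsearch reports for the job (model_bytes)
    curl -s 'localhost:9200/_ml/anomaly_detectors/my_job/_stats?pretty'

    # The whole-machine view we were using
    free -m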

Can anyone explain this?

Upvotes: 0

Views: 667

Answers (1)

Val

Reputation: 217554

It's not more than 32GB of HEAP, not RAM, and the right amount is somewhere between 26GB and 30GB (it differs depending on the system, but the default maximum is 31GB).
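The cutoff exists because above roughly 32GB the JVM can no longer use compressed object pointers. One way to check where the threshold lies on a given system (a sketch; the path assumes the JDK bundled with a default Elasticsearch install):

    # Shows whether compressed oops are still in effect at a given heap size;
    # below the system-dependent ~32GB threshold this prints "true".
    ./jdk/bin/java -Xmx30g -XX:+PrintFlagsFinal -version | grep UseCompressedOops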

For dedicated ML nodes it's no different than for data nodes; the default computation is shown here, i.e. 31GB max heap.
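To verify what each node actually ended up with (host and port are assumptions):

    # Max heap per node vs. total machine RAM, plus each node's roles
    curl -s 'localhost:9200/_cat/nodes?v&h=name,node.role,heap.max,ram.max'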

Upvotes: 2
