Reputation: 73619
couchbase-server-community 4.0.0-4051-1
We have a Couchbase cluster of 21 nodes, and around 150 client boxes connect to it. Some nodes show RAM usage of 91% while others use only 66%. Is there any way to ensure a more even distribution of RAM usage across the cluster? On every box, most of the RAM is taken by /opt/couchbase/bin/memcached. Below are ps listings from the two extremes:
Box with lower memory usage:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
999 4345 62.5 61.9 36074956 33934188 ? Ssl Feb01 27737:28 /opt/couchbase/bin/memcached -C /opt/couchbase/var/lib/couchbase/config/memcached.json
999 4216 52.8 2.4 1840896 1314404 ? Ssl Feb01 23450:08 /opt/couchbase/lib/erlang/erts-5.10.4.0.0.1/bin/beam.smp -A 16 -sbt u -P 327680 -K true -swt low -MMmcs 30 -e102400 -- -root /opt/couchbase/lib/erlang -progname erl -- -home /opt/couchbase -- -smp enable -setcookie nocookie -kernel inet_dist_listen_min 21100 inet_dist_listen_max 21299 error_logger false -sasl sasl_error_logger false -nouser -run child_erlang child_start ns_bootstrap -- -smp enable -couch_ini /opt/couchbase/etc/couchdb/default.ini /opt/couchbase/etc/couchdb/default.d/capi.ini /opt/couchbase/etc/couchdb/default.d/geocouch.ini /opt/couchbase/etc/couchdb/local.ini
999 9920 1.1 0.1 595164 105572 ? Sl Feb01 500:35 /opt/couchbase/bin/indexer -vbuckets=1024 -cluster=127.0.0.1:8091 -adminPort=9100 -scanPort=9101 -httpPort=9102 -streamInitPort=9103 -streamCatchupPort=9104 -streamMaintPort=9105 -storageDir=/storage/1/couchbase/index/@2i
Box with higher memory usage:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
999 20812 71.1 85.0 48014804 46535824 ? Ssl 2017 568761:06 /opt/couchbase/bin/memcached -C /opt/couchbase/var/lib/couchbase/config/memcached.json
999 20690 52.8 5.2 3564552 2863852 ? Ssl 2017 422691:18 /opt/couchbase/lib/erlang/erts-5.10.4.0.0.1/bin/beam.smp -A 16 -sbt u -P 327680 -K true -swt low -MMmcs 30 -e102400 -- -root /opt/couchbase/lib/erlang -progname erl -- -home /opt/couchbase -- -smp enable -setcookie nocookie -kernel inet_dist_listen_min 21100 inet_dist_listen_max 21299 error_logger false -sasl sasl_error_logger false -nouser -run child_erlang child_start ns_bootstrap -- -smp enable -couch_ini /opt/couchbase/etc/couchdb/default.ini /opt/couchbase/etc/couchdb/default.d/capi.ini /opt/couchbase/etc/couchdb/default.d/geocouch.ini /opt/couchbase/etc/couchdb/local.ini
999 21421 0.8 0.4 2914624 242084 ? Sl 2017 6439:31 /opt/couchbase/bin/cbq-engine --datastore=http://127.0.0.1:8091 --http=:8093 --configstore=http://127.0.0.1:8091 --enterprise=false
999 21395 4.0 0.4 1063508 223060 ? Sl 2017 32088:04 /opt/couchbase/bin/projector -kvaddrs=127.0.0.1:11210 -adminport=:9999 127.0.0.1:8091
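For reference, instead of running ps on each of the 21 boxes, the same per-node numbers can be pulled in one pass from the REST API. The following is a minimal sketch, assuming placeholder host and credentials; it reads /pools/default, which exposes OS-level memory (systemStats) and data-service memory (interestingStats) for every node:

#!/usr/bin/env python3
# Minimal sketch: per-node RAM overview via the Couchbase REST API
# (/pools/default) instead of ps on every box. Host and credentials
# below are placeholders.
import base64
import json
import urllib.request

CLUSTER = "http://127.0.0.1:8091"             # any node of the cluster (placeholder)
USER, PASSWORD = "Administrator", "password"  # placeholder credentials

req = urllib.request.Request(CLUSTER + "/pools/default")
token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req.add_header("Authorization", "Basic " + token)

with urllib.request.urlopen(req) as resp:
    pool = json.load(resp)

for node in pool["nodes"]:
    sys_stats = node.get("systemStats", {})        # OS-level memory, in bytes
    data_stats = node.get("interestingStats", {})  # data-service (memcached) stats
    total = sys_stats.get("mem_total", 0)
    free = sys_stats.get("mem_free", 0)
    used_pct = 100.0 * (total - free) / total if total else 0.0
    print(f'{node["hostname"]:<25} RAM used {used_pct:5.1f}%  '
          f'data-service mem_used {data_stats.get("mem_used", 0):,} bytes')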
Upvotes: 1
Views: 674
Reputation: 6199
At first sight this still looks OK to me. It's hard to tell without knowing what kind of workload you are running. memcached is a memory cache.
Maybe the nodes with higher memory utilization recently processed a more memory-intensive workload whose data is still cached.
Maybe the nodes consuming more memory have simply been running longer (memcached keeps objects in memory indefinitely until they are evicted once the memory quota is reached); a quick per-node check against the quota is sketched below.
Maybe your 150 clients are unable to fully utilize your 21 cluster nodes (why one node per ~7.14 clients?).
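If you want to verify the quota point, /pools/default also reports each node's data-service memory (interestingStats.mem_used) alongside the per-node data RAM quota (memoryQuota, in MB). Eviction only starts once usage crosses the high watermark (by default around 85% of the quota on this version), so nodes below that can legitimately sit at different levels. A minimal sketch, with placeholder host and credentials:

#!/usr/bin/env python3
# Minimal sketch: compare each node's data-service mem_used against the
# per-node data RAM quota (memoryQuota, reported in MB by /pools/default).
# Host and credentials are placeholders.
import base64
import json
import urllib.request

CLUSTER = "http://127.0.0.1:8091"             # any node of the cluster (placeholder)
USER, PASSWORD = "Administrator", "password"  # placeholder credentials

req = urllib.request.Request(CLUSTER + "/pools/default")
token = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()
req.add_header("Authorization", "Basic " + token)

with urllib.request.urlopen(req) as resp:
    pool = json.load(resp)

quota_bytes = pool["memoryQuota"] * 1024 * 1024   # per-node data-service quota

for node in pool["nodes"]:
    mem_used = node.get("interestingStats", {}).get("mem_used", 0)
    pct = 100.0 * mem_used / quota_bytes if quota_bytes else 0.0
    print(f'{node["hostname"]:<25} mem_used {mem_used:,} bytes '
          f'({pct:5.1f}% of the data quota)')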
Upvotes: 1