Reputation: 4080
We have a Cloudera Express 5.11.0 cluster and I'm trying to add Kafka 3.0 as a service in Cloudera Manager, but it reports that it failed to start the broker on all nodes and I don't see any obvious cause. I downloaded the parcel and distributed and activated it successfully.
I have a few questions:
What value should I set for the ZooKeeper Root? Is it something I should decide myself, or does it depend on the ZooKeeper installation? I saw that the most common value is /kafka, so I set it to /kafka.
Our ZooKeeper runs as a standalone instance and raised an alert about maximum request latency; might that be related?
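(For context: with the ZooKeeper Root set to /kafka, the chroot just gets appended to the broker's ZooKeeper connection string. Based on the "Final Zookeeper Quorum" line in the logs below, I'd expect the generated kafka.properties entry to look roughly like this:

# ZooKeeper quorum followed by the chroot (the ZooKeeper Root value)
zookeeper.connect=VMClouderaMasterDev01:2181/kafka
)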
During the fourth step of adding Kafka as a service, it fails to start the broker on the nodes and I'm not sure what the actual error is. I saw a few messages about OutOfMemory, but I'm not sure whether they are checks or real errors.
Here are the last lines of the logs I found:
stdout:
AUTHENTICATE_ZOOKEEPER_CONNECTION: true
SUPER_USERS: kafka
Kafka version found: 0.11.0-kafka3.0.0
Sentry version found: 1.5.1-cdh5.11.0
ZK_PRINCIPAL_NAME: zookeeper
Final Zookeeper Quorum is VMClouderaMasterDev01:2181/kafka
security.inter.broker.protocol inferred as PLAINTEXT
LISTENERS=listeners=PLAINTEXT://VMClouderaWorkerDev03:9092,
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof ...
Heap dump file created [12122526 bytes in 0.086 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="/usr/lib64/cmf/service/common/killparent.sh"
# Executing /bin/sh -c "/usr/lib64/cmf/service/common/killparent.sh"...
stderr:
+ export 'KAFKA_JVM_PERFORMANCE_OPTS=-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ KAFKA_JVM_PERFORMANCE_OPTS='-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/kafka_kafka-KAFKA_BROKER-933a1dc0c29ca08ffe475da27d5b13d4_pid113208.hprof -XX:OnOutOfMemoryError=/usr/lib64/cmf/service/common/killparent.sh -server -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC -Djava.awt.headless=true'
+ [[ false == \t\r\u\e ]]
+ exec /opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/bin/kafka-server-start.sh /var/run/cloudera-scm-agent/process/1177-kafka-KAFKA_BROKER/kafka.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/libs/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
+ grep -q OnOutOfMemoryError /proc/113208/cmdline
+ RET=0
+ '[' 0 -eq 0 ']'
+ TARGET=113208
++ date
+ echo Thu May 17 10:36:08 CDT 2018
+ kill -9 113208
/var/log/kafka/*.log:
50.1.22:2181, initiating session
2018-05-17 10:36:08,028 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server VMClouderaMasterDev01/10.150.1.22:2181, sessionid = 0x1626c7087e729cb, negotiated timeout = 6000
2018-05-17 10:36:08,028 INFO org.I0Itec.zkclient.ZkClient: zookeeper state changed (SyncConnected)
2018-05-17 10:36:08,183 INFO kafka.server.KafkaServer: Cluster ID = cM_4kCm6TZWxttCAXDo4GQ
2018-05-17 10:36:08,185 WARN kafka.server.BrokerMetadataCheckpoint: No meta.properties file under dir /var/local/kafka/data/meta.properties
2018-05-17 10:36:08,222 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Fetch]: Starting
2018-05-17 10:36:08,224 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Produce]: Starting
2018-05-17 10:36:08,226 INFO kafka.server.ClientQuotaManager$ThrottledRequestReaper: [ThrottledRequestReaper-Request]: Starting
2018-05-17 10:36:08,279 INFO kafka.log.LogManager: Loading logs.
2018-05-17 10:36:08,287 INFO kafka.log.LogManager: Logs loading complete in 8 ms.
Upvotes: 1
Views: 562
Reputation: 4080
In my case the solution was to increase the broker's Java heap size to 1 GB in the Kafka configuration in Cloudera Manager.
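For completeness, the same heap setting can be expressed directly as a JVM option when starting the broker by hand, outside Cloudera Manager. A rough sketch, reusing the parcel and properties paths from the logs in the question:

# Give the broker a 1 GiB heap; kafka-server-start.sh only applies its own
# default heap if KAFKA_HEAP_OPTS is not already set in the environment.
export KAFKA_HEAP_OPTS="-Xms1g -Xmx1g"

# Start the broker with the properties file Cloudera Manager generated.
/opt/cloudera/parcels/KAFKA-3.0.0-1.3.0.0.p0.40/lib/kafka/bin/kafka-server-start.sh \
  /var/run/cloudera-scm-agent/process/1177-kafka-KAFKA_BROKER/kafka.properties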
Upvotes: 1