Michele

Reputation: 1

Upgrading Apache Kafka client to 3.8.0 issue

After upgrading the Apache Kafka client from 3.7.1 to 3.8.0, we ran into this UnsatisfiedLinkError while producing messages to a topic:

java.lang.UnsatisfiedLinkError: /tmp/libzstd-jni-1.5.6-34907981766316087764.so: /tmp/libzstd-jni-1.5.6-34907981766316087764.so: failed to map segment from shared object: Operation not permitted
no zstd-jni-1.5.6-3 in java.library.path
Unsupported OS/arch, cannot find /linux/amd64/libzstd-jni-1.5.6-3.so or load zstd-jni-1.5.6-3 from system libraries. Please try building from source the jar or providing libzstd-jni-1.5.6-3 in your system.
    at it.vtfinance.vtpie.core.process.StreamExecutor$2.doInTransactionWithoutResult(StreamExecutor.java:144)
    at org.springframework.transaction.support.TransactionCallbackWithoutResult.doInTransaction(TransactionCallbackWithoutResult.java:36)
    at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
    at it.vtfinance.vtpie.core.process.StreamExecutor.executeTaskInPhase(StreamExecutor.java:120)
    at it.vtfinance.vtpie.core.process.StreamExecutor.executePhase(StreamExecutor.java:184)
    at it.vtfinance.vtpie.core.process.template.processor.StreamDataProcessor.process(StreamDataProcessor.java:48)
    at it.vtfinance.vtpie.core.process.template.ProcessorTemplate$ProcessorTemplateWorker.execute(ProcessorTemplate.java:546)
    at it.vtfinance.vtpie.core.work.AbstractWork.run(AbstractWork.java:70)
    at org.jboss.jca.core.workmanager.WorkWrapper.runWork(WorkWrapper.java:445)
    at org.jboss.as.connector.services.workmanager.WildflyWorkWrapper.runWork(WildflyWorkWrapper.java:69)
    at org.jboss.jca.core.workmanager.WorkWrapper.run(WorkWrapper.java:223)
    at org.jboss.threads.SimpleDirectExecutor.execute(SimpleDirectExecutor.java:29)
    at org.jboss.threads.QueueExecutor.runTask(QueueExecutor.java:789)
    at org.jboss.threads.QueueExecutor.access$100(QueueExecutor.java:44)
    at org.jboss.threads.QueueExecutor$Worker.run(QueueExecutor.java:830)
    at java.lang.Thread.run(Thread.java:750)
    at org.jboss.threads.JBossThread.run(JBossThread.java:485)

This is caused by the /tmp directory being mounted with the noexec option.

I think that this is a side effect of the new compression level support feature.

For security reasons, /tmp is often mounted noexec in production environments. Is there a way to work around this problem?
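
For context, a quick way to confirm the mount options (the output line is only an illustration, not from our host):

findmnt -no OPTIONS /tmp
# rw,nosuid,nodev,noexec,relatime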

These are the Kafka client parameters:

acks = -1
auto.include.jmx.reporter = true
batch.size = 16384
bootstrap.servers = [xxx:9093]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-1
compression.gzip.level = -1
compression.lz4.level = 9
compression.type = none
compression.zstd.level = 3
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
enable.metrics.push = true
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.IntegerSerializer
linger.ms = 0
max.block.ms = 5000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metadata.recovery.strategy = none
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.adaptive.partitioning.enable = true
partitioner.availability.timeout.ms = 0
partitioner.class = null
partitioner.ignore.keys = false
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.max.ms = 1000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = SSL
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.2
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 5000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer

Upvotes: 0

Views: 1488

Answers (2)

YoungCyborg

Reputation: 13

You should add the Java option that changes the JVM's temporary directory. I ran into the same problem when installing Elasticsearch, and this worked:

export _JAVA_OPTIONS=-Djava.io.tmpdir=/new/tmp/dir

If you run Kafka with systemd, add

EnvironmentFile=-/path/to/env/file

to your unit file; the env file should contain _JAVA_OPTIONS=-Djava.io.tmpdir=/new/tmp/dir
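
For example, a minimal sketch with placeholder paths (adjust the directory and the service user for your installation):

mkdir -p /opt/kafka/tmp && chown kafka:kafka /opt/kafka/tmp

# /etc/kafka/java.env
_JAVA_OPTIONS=-Djava.io.tmpdir=/opt/kafka/tmp

# [Service] section of the unit file
EnvironmentFile=-/etc/kafka/java.env

The target directory must exist, be writable by the service user, and sit on a filesystem mounted without noexec.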

Upvotes: 1

Tyler

Reputation: 11

There is a fix for this on the 3.8.0 branch, but it did not make it into the release. The fix relies on a change in zstd-jni discussed here. We worked around it by supplying the ZstdNativePath system property instead, so that zstd-jni loads its native library from a path outside /tmp. This is a temporary workaround until Kafka's next release is available.
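
For illustration, one way to do this (all paths below are placeholders): copy the bundled native library out of the zstd-jni jar onto an exec-mounted filesystem and point the JVM at it:

unzip -j zstd-jni-1.5.6-3.jar linux/amd64/libzstd-jni-1.5.6-3.so -d /opt/app/native
java -DZstdNativePath=/opt/app/native/libzstd-jni-1.5.6-3.so ...

The property needs to be in place before zstd-jni's native loader runs, so passing it on the JVM command line (e.g. via JAVA_OPTS for an application server) is the safest option.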

Upvotes: 1
