dz902

Reputation: 5828

Confluent Kafka S3 sink connector throws `java.lang.NoClassDefFoundError: com/google/common/base/Preconditions` when using Parquet format

When using the Confluent S3 sink connector, the following error occurs:

[2021-08-08 02:25:15,588] ERROR WorkerSinkTask{id=s3-test-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted. Error: com/google/common/base/Preconditions (org.apache.kafka.connect.runtime.WorkerSinkTask:607)
java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
        at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:379)
        at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:392)
        at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:474)
        at org.apache.parquet.hadoop.ParquetWriter$Builder.<init>(ParquetWriter.java:345)
        at org.apache.parquet.avro.AvroParquetWriter$Builder.<init>(AvroParquetWriter.java:162)
        at org.apache.parquet.avro.AvroParquetWriter$Builder.<init>(AvroParquetWriter.java:153)
        at org.apache.parquet.avro.AvroParquetWriter.builder(AvroParquetWriter.java:43)
        at io.confluent.connect.s3.format.parquet.ParquetRecordWriterProvider$1.write(ParquetRecordWriterProvider.java:79)
        at io.confluent.connect.s3.format.KeyValueHeaderRecordWriterProvider$1.write(KeyValueHeaderRecordWriterProvider.java:105)
        at io.confluent.connect.s3.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:532)
        at io.confluent.connect.s3.TopicPartitionWriter.checkRotationOrAppend(TopicPartitionWriter.java:302)
        at io.confluent.connect.s3.TopicPartitionWriter.executeState(TopicPartitionWriter.java:245)
        at io.confluent.connect.s3.TopicPartitionWriter.write(TopicPartitionWriter.java:196)
        at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:234)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:182)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:231)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)

This happens with connector versions 5.5, 10.0.0, and 10.0.1.

It only happens with Parquet format; Avro works fine.

Logs show that the partitioner and the source data format work fine:

[2021-08-08 02:25:15,564] INFO Opening record writer for: xxxxx/xxxxx.xxxxx.users/year=2021/month=08/day=07/xxxxx.xxxxx.tablename+0+0000000000.snappy.parquet (io.confluent.connect.s3.format.parquet.ParquetRecordWriterProvider:74)

The connector was manually downloaded from the Confluent website.

Upvotes: 3

Views: 2579

Answers (1)

dz902

Reputation: 5828

It turns out hadoop-common requires Google's Guava library, which was somehow missing from the distribution.

You need to look up which Guava version your hadoop-common version depends on (it is listed on the hadoop-common Maven repository page), then manually download that guava JAR into the connector's lib/ folder.
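Alternatively, if you build the connector from source, re-declaring Guava in the pom should have the same effect as dropping the JAR into lib/. This is just a sketch: the 27.0-jre version is an assumption, so match it to whatever Guava version your hadoop-common release actually lists:

        <!-- Hypothetical addition to the connector pom: re-add the Guava
             dependency that hadoop-common needs at runtime. The version is
             an assumption; use the one listed by your hadoop-common release. -->
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>27.0-jre</version>
        </dependency>

Either way, the goal is to get com.google.common.base.Preconditions onto the connector's plugin classpath, since hadoop's Configuration class references it during class initialization (as the stack trace above shows).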

The root cause seems to be an entry in the connector's pom that explicitly excludes Guava from the hadoop-common dependency:

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.avro</groupId>
                    <artifactId>avro</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>com.google.guava</groupId>
                    <artifactId>guava</artifactId>
                </exclusion>
                <!-- ... more exclusions omitted ... -->
This really should have been caught in testing.

Upvotes: 3
