Reputation: 13
I set up the Confluent (4.0) connector on EC2 that reads from Kafka and writes to S3.
The standalone run works fine:
bin/connect-standalone etc/standalone/example-connect-worker.properties etc/standalone/example-connect-s3-sink.properties
However, the distributed version keeps failing with:
[2018-01-30 21:26:05,860] ERROR Unexpected exception in Thread[KafkaBasedLog Work Thread - connect-configs,5,main] (org.apache.kafka.connect.util.KafkaBasedLog:334)
java.lang.IllegalStateException: Consumer is not subscribed to any topics or assigned any partitions
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1097)
at org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:256)
at org.apache.kafka.connect.util.KafkaBasedLog.access$500(KafkaBasedLog.java:69)
at org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:327)
To start, I just wanted to use FileStreamSinkConnector as the connector class.
The sink config file looks like:
name=local-file-sink
#connector.class=FileStreamSink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
file=test.sink.txt
topics=tests3
s3.bucket=tests3
s3.prefix=tests3
s3.endpoint=http://localhost:9090
s3.path_style=true
local.buffer.dir=/tmp/connect-system-test
Thanks a lot!
Upvotes: 1
Views: 361
Reputation: 1993
When you start a distributed Connect worker with ./bin/connect-distributed,
you may only supply the worker's properties on the command line.
To load a connector, you post its configuration to the worker's REST endpoint,
for instance with curl or an equivalent command.
For example:
curl -X POST -H "Content-Type: application/json" --data @config.json http://localhost:8083/connectors
where config.json
is a file containing your connector's properties.
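As a sketch, for the FileStreamSinkConnector from the question, config.json could look like the following. The REST API expects a top-level "name" and the connector properties nested under "config"; the property values here are copied from the question's standalone sink file, so adjust them to your setup:

```json
{
  "name": "local-file-sink",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "tasks.max": "1",
    "file": "test.sink.txt",
    "topics": "tests3"
  }
}
```

Note that, unlike the standalone .properties file, the distributed worker only takes connector configs in this JSON form via the REST API.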
More info here: https://docs.confluent.io/current/connect/managing.html#distributed-example
Upvotes: 1