Reputation: 2598
Hello everyone, I am using the Kafka Connect API to sink my data into Elasticsearch, and I have run into a problem. My configuration file is very simple:
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=5
topics=myTopicKafka
topic.index.map=myTopicKafka:myIndexES-1
schema.ignore=true
key.ignore=true
connection.url=http://elasticsearch:9200
type.name=kafka-connect
batch.size=200
#linger.ms=500
But in ES I am using Curator to roll over the index:
actions:
#  1:
#    action: create_index
#    description: 'Create mwe.resource.locate index'
#    options:
#      name: 'myIndexES-1-%Y-%m-%d-1'
  2:
    action: rollover
    description: >-
      Rollover the index associated with alias 'myIndexES' after it
      exceeds 500MB or is a day old
    options:
      name: all_myIndexES
      conditions:
        max_age: 1d
        max_size: 500mb
This creates a new index every 500MB, but the indices are named myIndexES-00002, myIndexES-00003, and so on. So my question is: how do I support this with the Kafka Connect API?
Upvotes: 0
Views: 828
Reputation: 217344
When using the Rollover API, you're supposed to write to an alias pointing at a single index.
This means that in your elasticsearch-sink configuration, you should have this instead:
topic.index.map=myTopicKafka:myIndexES-write
And in your Curator configuration, the name option should contain the name of that alias:
options:
  name: 'myIndexES-write'
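For this to work, the write alias must exist before the connector starts writing, so you bootstrap it by creating the first index with the alias attached. A minimal sketch, assuming an initial index named myIndexES-000001 (the trailing number is what the Rollover API increments; the exact name is up to you):

# Run once, before starting the connector:
# create the initial index and attach the write alias to it
curl -X PUT "http://elasticsearch:9200/myIndexES-000001" \
  -H 'Content-Type: application/json' \
  -d '{"aliases": {"myIndexES-write": {}}}'

On each rollover, Elasticsearch atomically creates the next index in the sequence and repoints the alias at it, so the connector keeps writing to myIndexES-write without any configuration change.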
Upvotes: 2