Reputation: 41
I am having trouble setting up a replication to the Elasticsearch remote cluster I created. This is the error I get in both the Couchbase UI and via curl:
{"errors":{"toBucket":"Error validating target bucket 'bankable'. err=Failed to get bucket info."}}
Running Couchbase Community Edition 5.0.1 build 5003.
Running Elasticsearch version 5.4.0:
{ "name" : "MtQ6ijh", "cluster_name" : "elasticsearch", "cluster_uuid" : "rKKA-zCvQGCGZN_tCUm8dQ", "version" : { "number" : "5.4.0", "build_hash" : "780f8c4", "build_date" : "2017-04-28T17:43:27.229Z", "build_snapshot" : false, "lucene_version" : "6.5.0" }, "tagline" : "You Know, for Search" }
Here is my elasticsearch.yml:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what you are trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
couchbase.password: [correctPass]
couchbase.username: [correctUser]
couchbase.port: 9091
couchbase.maxConcurrentRequests: 1024
http.cors.enabled: true
http.cors.allow-origin: /http://localhost(:[0-9]+)?/
couchbase.ignoreFailures: true
couchbase.keyFilter: org.elasticsearch.transport.couchbase.capi.RegexKeyFilter
couchbase.keyFilter.type: exclude
couchbase.keyFilter.keyFiltersRegex.Histoty: ^Histo.*$
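One detail that commonly trips up this setup is the remote cluster reference itself: it has to point at the connector's CAPI port (9091 here), not at Elasticsearch's HTTP port 9200, and it should use the couchbase.username/couchbase.password values from the config above. As a rough sketch (hostnames and the Administrator/password admin credentials are placeholders), the reference can be created through the Couchbase REST API like this:

curl -u Administrator:password http://localhost:8091/pools/default/remoteClusters \
  -d name=elastic \
  -d hostname=es-host:9091 \
  -d username=[correctUser] \
  -d password=[correctPass]

The replication from the bankable bucket is then created against that reference, in the UI or via the /controller/createReplication endpoint, using the CAPI (XDCR version 1) protocol that the plugin implements.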
Upvotes: 2
Views: 160
Reputation: 8909
Version 3 of the Couchbase Elasticsearch connector requires the target index to already exist. You'll need to create the 'bankable' index in Elasticsearch first.
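For example, assuming the Elasticsearch host is reachable as es-host on the default HTTP port, creating the index is a single call (add settings and mappings as needed):

curl -X PUT http://es-host:9200/bankable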
If that's not the problem, take a look at the Elasticsearch logs; that's where any connector error messages will appear for version 3.
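On a package-based (deb/rpm) install the log is usually /var/log/elasticsearch/<cluster_name>.log, so with the default cluster name shown in your version output:

tail -f /var/log/elasticsearch/elasticsearch.log

For an archive install, check the logs/ directory under the Elasticsearch home instead.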
By the way, I strongly encourage upgrading to version 4 of the connector. It uses a more robust replication mechanism and is the focus of future development.
Disclaimer: I work for Couchbase and maintain the Elasticsearch connector.
Upvotes: 1