Reputation: 21
If I deploy an instance of a Consul client into my k3s cluster using the Consul Helm chart, the connect-injector pod will not start properly.
k3s version: k3s version v1.25.6+k3s1 (9176e03c)
Helm version: version.BuildInfo{Version:"v3.12.1", GitCommit:"f32a527a060157990e2aa86bf45010dfb3cc8b8d", GitTreeState:"clean", GoVersion:"go1.20.4"}
Terraform version: Terraform v1.5.2
Consul Version: v1.16
Docker compose version: Docker Compose version v2.19.1
Docker version:
Client: Docker Engine - Community
 Version:           24.0.4
 API version:       1.43
 Go version:        go1.20.5
 Git commit:        3713ee1
 Built:             Fri Jul 7 14:50:55 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.4
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.5
  Git commit:       4ffc614
  Built:            Fri Jul 7 14:50:55 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.21
  GitCommit:        3dce8eb055cbb6872793272b4f20ed16117344f8
 runc:
  Version:          1.1.7
  GitCommit:        v1.1.7-0-g860f061
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
This is my Consul server.hcl file:
server = true
bootstrap = false
bootstrap_expect = 1
node_name = "dev-consul"
datacenter = "dev-dc"
encrypt = "<encrypt-key>"
encrypt_verify_incoming = true
encrypt_verify_outgoing = true
tls {
  defaults {
    ca_file = "/consul/config/certs/consul-agent-ca.pem"
    cert_file = "/consul/config/certs/dev-dc-server-consul-0.pem"
    key_file = "/consul/config/certs/dev-dc-server-consul-0-key.pem"
    verify_incoming = true
    verify_outgoing = true
  }
}
data_dir = "/consul-data"
log_level = "INFO"
advertise_addr = "<advertise-addr>"
bind_addr = "0.0.0.0"
addresses = {
  "http" = "0.0.0.0"
}
auto_encrypt = {
  "allow_tls" = true
}
connect = {
  "enabled" = true
}
ui_config = {
  "enabled" = true
}
I'm using this Docker Compose file to deploy my server node:
version: "3.9"
services:
consul:
image: hashicorp/consul:1.16
volumes:
- ./config/server.hcl:/consul/config/server.hcl:ro
- consul_data:/titanium/consul-data
- ./certs:/consul/config/certs/
ports:
- "8600:8600/tcp"
- "8600:8600/udp"
- "8500:8500/tcp"
- "8500:8500/udp"
- "8301:8301/tcp"
- "8301:8301/udp"
- "8302:8302/tcp"
- "8302:8302/udp"
- "8502:8502"
- "21000-21255:21000-21255"
- "8300:8300"
- "8300:8300/udp"
command: "agent"
volumes:
consul_data:
The Consul server is working fine so far. I then deploy the Consul Helm chart with the following values.yml:
global:
  name: consul
  image: hashicorp/consul:1.16
  domain: dev.local
  datacenter: dev-dc
  exposeGossipPorts: true
  gossipEncryption:
    secretName: "gossip-encryption-key-secret"
    secretKey: "key"
  tls:
    enabled: true
    enableAutoEncrypt: true
    verify: true
    caCert:
      secretName: "consul-certs"
      secretKey: "ca.pem"
connectInject:
  enabled: true
  default: true
  cni:
    enabled: true
    logLevel: info
    cniBinDir: "/opt/cni/bin"
    cniNetDir: "/etc/cni/net.d"
  namespaceSelector: |
    matchLabels:
      connect-inject: enabled
  failurePolicy: "Ignore"
server:
  enabled: false
client:
  enabled: true
  join: [ "<consul-service-addr>" ]
  grpc: true
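(Side note: the namespaceSelector above means the injector only watches namespaces labeled connect-inject: enabled. A minimal sketch of such a namespace manifest, where the name my-app is just a placeholder:)

apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    connect-inject: enabled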
With these values the Consul client connects to the cluster, but the pod containing the consul-connect-injector gives me the following error message:
2023-07-23T19:23:44.851Z [INFO]  consul-server-connection-manager: trying to connect to a Consul server
2023-07-23T19:23:44.854Z [ERROR] consul-server-connection-manager: connection error: error="failed to discover Consul server addresses: failed to resolve DNS name: consul-server.consul.svc: lookup consul-server.consul.svc on 10.43.0.10:53: no such host"
If I check the Kubernetes events of the pod, I get errors like this:
MountVolume.SetUp failed for volume "consul-ca-cert
MountVolume.SetUp failed for volume "certs" : secret "consul-connect-inject-webhook-cert" not found
I create the secrets for the gossip encryption key and the TLS cert via a Kubernetes resource file.
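For reference, a minimal sketch of that resource file, assuming the secret names and keys from the values.yml above (the base64 payloads are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: gossip-encryption-key-secret
  namespace: consul
type: Opaque
data:
  key: <base64-encoded output of consul keygen>
---
apiVersion: v1
kind: Secret
metadata:
  name: consul-certs
  namespace: consul
type: Opaque
data:
  ca.pem: <base64-encoded consul-agent-ca.pem>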
I don't think it's necessary to know, but this is the Terraform script I use:
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
resource "helm_release" "consul-connect" {
name = "consul-connect"
chart = "consul"
repository = "https://helm.releases.hashicorp.com"
namespace = "consul"
values = [file("config/values.yml")]
}
I've tried modifying my values.yml multiple times, but the connect-injector pod kept printing the same error message over and over again. I've also tried a simplified values.yml:
global:
  name: consul
  domain: dev.local
  datacenter: dev-dc
  gossipEncryption:
    secretName: "gossip-encryption-key-secret"
    secretKey: "key"
  tls:
    enabled: true
    enableAutoEncrypt: true
    verify: true
    caCert:
      secretName: "consul-certs"
      secretKey: "ca.pem"
connectInject:
  enabled: true
  failurePolicy: "Ignore"
controller:
  enabled: true
server:
  enabled: false
client:
  enabled: true
  image: hashicorp/consul:1.15.3
  join: [ "<consul-server-addr>" ]
But even that hasn't changed anything. I also tried installing a Consul server via the Helm chart, but it didn't fix my problem with the connect-injector pod.
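For reference, a single-node sketch of what enabling the server in values.yml looks like; replicas: 1 is an assumption for a dev setup:

server:
  enabled: true
  replicas: 1
client:
  enabled: true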
Upvotes: 2
Views: 1081
Reputation: 21
Looks like the connect-injector tries to reach consul-server.consul.svc, the in-cluster server Service, which doesn't exist since you deploy with server.enabled: false.
I would try enabling externalServers and setting externalServers.hosts to the same value as client.join, so the injector discovers your external server instead of that DNS name.
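A minimal sketch of the values.yml addition, assuming your server is reachable under the same address as client.join and serves HTTPS on the chart's default port:

externalServers:
  enabled: true
  hosts: [ "<consul-service-addr>" ]  # same address you use in client.join
  httpsPort: 8501                     # assumption: adjust to your server's HTTPS port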
Upvotes: 1