Reputation: 139
What I intended to do:
EDIT: Steps that I used to set up Kubernetes
Steps to set up the master node:
Docker setup using: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker
$ sudo apt-get install -y kubelet kubeadm kubectl
$ kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<MASTER_IP>
$ kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
Kubernetes Version: v1.18.3 Docker Version: 19.03.8, build afacb8b7f0
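Between `kubeadm init` and applying the Calico manifest, kubeadm prints kubeconfig setup steps that are easy to miss; a sketch of them (assuming a non-root admin user on the master, and guarded so it is a no-op on a machine without kubeadm output):

```shell
# kubectl must be configured before the Calico manifest can be applied.
mkdir -p "$HOME/.kube"
if [ -f /etc/kubernetes/admin.conf ]; then
  sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
  sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
fi
# kubectl get nodes   # the master should become Ready once the Calico pods are Running
```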
Setting Up Nodes:
Docker setup using: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker
$ sudo apt-get install -y kubelet kubeadm kubectl
$ kubeadm join <PUBLIC_IP>:6443 --token <token> \
--discovery-token-ca-cert-hash <hash>
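After the join, it is worth confirming from the master that the worker registered and the CNI pods are healthy. A sketch (the kubectl calls are guarded so nothing runs without a cluster; pod name patterns are typical, not guaranteed):

```shell
APISERVER_PORT=6443   # kubeadm's default API server port, matching the join address above
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes -o wide 2>/dev/null || true        # the worker should appear, then turn Ready
  kubectl get pods -n kube-system 2>/dev/null || true  # calico-* and coredns-* should be Running
fi
echo "join endpoint port: $APISERVER_PORT"
```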
I have completed step 1 successfully but am unable to configure the replica set. The rs.initiate() command fails with the "No host described in new configuration 1 for replica set rs0 maps to this node" error.
$ kubectl get all
NAME                                         READY   STATUS    RESTARTS   AGE
pod/mongo-0                                  1/1     Running   0          16m
pod/mongo-1                                  1/1     Running   0          16m
pod/mongo-2                                  1/1     Running   0          16m
pod/nfs-client-provisioner-5d7cbcd58-qs8r6   1/1     Running   0          43h

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP     2d21h
service/mongo        ClusterIP   None         <none>        27017/TCP   16m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   1/1     1            1           43h

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-5d7cbcd58   1         1         1       43h

NAME                     READY   AGE
statefulset.apps/mongo   3/3     16m
The below command fails:
$ kubectl exec -it mongo-0 -- mongo
MongoDB shell version v4.2.7
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("64863831-2775-488f-a80d-aabdeb84bad9") }
MongoDB server version: 4.2.7
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2020-05-31T09:46:16.537+0000 I CONTROL [initandlisten]
2020-05-31T09:46:16.537+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2020-05-31T09:46:16.537+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2020-05-31T09:46:16.537+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2020-05-31T09:46:16.538+0000 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
> rs.initiate({_id: "rs0", version: 1, members: [
... { _id: 0, host : "mongo-0.mongo:27017" },
... { _id: 1, host : "mongo-1.mongo:27017" },
... { _id: 2, host : "mongo-2.mongo:27017" }
... ]});
{
    "operationTime" : Timestamp(0, 0),
    "ok" : 0,
    "errmsg" : "No host described in new configuration 1 for replica set rs0 maps to this node",
    "code" : 93,
    "codeName" : "InvalidReplicaSetConfig",
    "$clusterTime" : {
        "clusterTime" : Timestamp(0, 0),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
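This error means that none of the hosts in the proposed config resolves back to the mongod the shell is connected to: mongo-0 cannot map "mongo-0.mongo:27017" to itself, which points at cluster DNS rather than at MongoDB. A quick comparison sketch (the `expected_host` helper is mine, added for illustration; the kubectl call is guarded):

```shell
# Compare mongod's self-reported host with the host name used in rs.initiate().
expected_host() { echo "mongo-$1.mongo:27017"; }
if command -v kubectl >/dev/null 2>&1; then
  # What mongod thinks its own address is:
  kubectl exec mongo-0 -- mongo --quiet --eval 'db.serverStatus().host' 2>/dev/null || true
fi
expected_host 0   # this name must resolve to mongo-0's own address
```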
My YAML files:
1. Headless service:
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  ports:
  - name: mongo
    port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    app: mongo
2. StatefulSet:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--bind_ip_all"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-volume
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongo-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
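Given the manifests above, a quick DNS sanity check is to resolve the per-pod names from inside a pod. A sketch (`fqdn` is a helper I added; `getent` is used because the mongo image may not ship `nslookup`, and the kubectl calls are guarded):

```shell
# FQDN pattern for StatefulSet pods: <pod>.<service>.<namespace>.svc.cluster.local
fqdn() { echo "$1.mongo.default.svc.cluster.local"; }
for i in 0 1 2; do fqdn "mongo-$i"; done
if command -v kubectl >/dev/null 2>&1; then
  kubectl exec mongo-0 -- getent hosts "$(fqdn mongo-1)" 2>/dev/null || true
  kubectl exec mongo-0 -- cat /etc/resolv.conf 2>/dev/null || true
fi
```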
The issue may be due to DNS. I tried the following to fix it, with no success:
$ ufw allow 27017
Nothing worked.
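Opening only 27017 is unlikely to be enough: per the kubeadm docs, a control-plane node also needs 6443, 2379-2380, and 10250-10252 reachable, and Calico uses BGP on 179 plus IP-in-IP (protocol 4, which ufw cannot express as a port rule). A sketch that prints the candidate rules rather than applying them:

```shell
# Candidate ufw rules for a kubeadm + Calico master node (adjust to your topology).
K8S_PORTS="6443 2379:2380 10250 10251 10252 179"
for p in $K8S_PORTS; do
  echo "sudo ufw allow $p/tcp"
done
echo "# Note: Calico IP-in-IP traffic (protocol 4) has no port; see the Calico docs."
```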
How to fix this issue?
Upvotes: 0
Views: 2033
Reputation: 14084
I would say that you have an issue with DNS configuration ("No host described in new configuration 1 for replica set rs0 maps to this node"). I am not sure what could cause it, as you didn't provide your environment specification, but you can troubleshoot it based on the Kubernetes docs. I've run your YAMLs on my GKE cluster (unfortunately I don't have access to DigitalOcean) and it works without any issues (with "--bind_ip_all").
> rs.initiate({_id: "rs0", version: 1, members: [
... { _id: 0, host : "mongo-0.mongo:27017" },
... { _id: 1, host : "mongo-1.mongo:27017" },
... { _id: 2, host : "mongo-2.mongo:27017" }
... ]});
{
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1591791921, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1591791921, 1)
}
$ kubectl get pods -o wide
NAME      READY   STATUS    RESTARTS   AGE    IP          NODE
mongo-0   1/1     Running   0          122m   10.52.1.6   gke-cluster-1-default-pool-6f885e04-9mdt
mongo-1   1/1     Running   0          121m   10.52.0.3   gke-cluster-1-default-pool-6f885e04-klkb
mongo-2   1/1     Running   0          120m   10.52.1.7   gke-cluster-1-default-pool-6f885e04-9mdt
In each of my pods, besides the default entries, /etc/hosts contains:
mongo-0 pod:
10.52.1.6 mongo-0.mongo.default.svc.cluster.local mongo-0
mongo-1 pod:
10.52.0.3 mongo-1.mongo.default.svc.cluster.local mongo-1
mongo-2 pod:
10.52.1.7 mongo-2.mongo.default.svc.cluster.local mongo-2
As per the Kubernetes docs, DNS for Services and Pods, especially the part regarding pods, inside your hosts file you should have an entry like:
hostname.default-subdomain.my-namespace.svc.cluster-domain.example
As a quick check, you can add this DNS entry manually to the /etc/hosts file in your pods and verify.
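For that manual check, the entry can be composed like this (a sketch: `<POD_IP>` is a placeholder for the IP from `kubectl get pod mongo-0 -o wide`, and the `hosts_entry` helper is mine):

```shell
# Build the /etc/hosts line for a given pod IP and pod name (default namespace assumed).
hosts_entry() { echo "$1 $2.mongo.default.svc.cluster.local $2"; }
hosts_entry "<POD_IP>" "mongo-0"
# Inside the pod, append it (requires a writable /etc/hosts):
#   kubectl exec -it mongo-0 -- sh -c \
#     'echo "<POD_IP> mongo-0.mongo.default.svc.cluster.local mongo-0" >> /etc/hosts'
```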
Another workaround is Host Aliases.
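For the Host Aliases route, a minimal fragment for the StatefulSet pod template might look like this (a sketch with a placeholder IP; note that pod IPs change when pods are rescheduled, so this is a diagnostic workaround rather than a fix):

```yaml
# Sketch: pin mongo-0's name inside every pod of the StatefulSet.
spec:
  template:
    spec:
      hostAliases:
      - ip: "<MONGO_0_POD_IP>"   # placeholder, not a real value
        hostnames:
        - "mongo-0.mongo"
```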
Let me know if this helps. If not, please provide the content of the /etc/hosts and /etc/resolv.conf files from your pods, and your Kubernetes version. Also verify whether there are any issues with the pods: kubectl get pods -A.
Upvotes: 1