Reputation: 2860
Please find below the sequence of operations I am performing to authenticate and authorize kubectl so that I can run deployments against an EKS cluster.
The Jenkins execution log is as below:
Logged in as: arn:aws:sts::XXXXXXXXXXXX:assumed-role/dev-role/testusername
Your new access key pair has been stored in the AWS configuration
Note that it will expire at 2021-02-08 15:18:59 +0000 UTC
To use this credential, call the AWS CLI with the --profile option (e.g. aws --profile saml ec2 describe-instances).
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Compose Source Structure)
[Pipeline] sh
+ set -x
+ cat
+ kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
+ rm -vf config
+ wget -nv --no-check-certificate https://testcompanyname.com.au/testrepo/jenkins/eks-nonprod-black-config
2021-02-08 14:19:35 URL:https://testcompanyname.com.au/testrepo/jenkins/eks-nonprod-black-config [2383/2383] -> "eks-nonprod-black-config" [1]
+ mv eks-nonprod-black-config config
+ pwd
/home/jenkins/agent/workspace/k8s-sync-from-cluster
+ ls -lrt
total 11640
-rwxrwxr-x 1 jenkins jenkins 11801948 Feb 28 2017 saml2aws
-rw-r--r-- 1 jenkins jenkins 2383 Jan 22 03:03 config
drwxr-xr-x 2 jenkins jenkins 4096 Feb 8 14:19 vars
drwxr-xr-x 3 jenkins jenkins 4096 Feb 8 14:19 test
drwxr-xr-x 3 jenkins jenkins 4096 Feb 8 14:19 src
-rw-r--r-- 1 jenkins jenkins 153 Feb 8 14:19 settings.gradle
drwxr-xr-x 9 jenkins jenkins 4096 Feb 8 14:19 resources
drwxr-xr-x 5 jenkins jenkins 4096 Feb 8 14:19 pipelines
-rw-r--r-- 1 jenkins jenkins 2841 Feb 8 14:19 gradlew.bat
-rwxr-xr-x 1 jenkins jenkins 5916 Feb 8 14:19 gradlew
drwxr-xr-x 3 jenkins jenkins 4096 Feb 8 14:19 gradle
drwxr-xr-x 3 jenkins jenkins 4096 Feb 8 14:19 csa-kubernetes-env
-rw-r--r-- 1 jenkins jenkins 1532 Feb 8 14:19 build.gradle
-rw-r--r-- 1 jenkins jenkins 208 Feb 8 14:19 README.md
+ cat config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
server: https://xxxxxxxxxxxxxxxxxxxxxxxxxx.gr7.ap-southeast-2.eks.amazonaws.com
name: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
contexts:
- context:
cluster: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
user: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
name: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
current-context: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- ap-southeast-2
- eks
- get-token
- --cluster-name
- test-eks
command: aws
env:
- name: AWS_PROFILE
value: saml
+ kubectl config view --kubeconfig ./config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://7FE00E432DC6BEB1EB17DEF18DB1B926.gr7.ap-southeast-2.eks.amazonaws.com
name: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
contexts:
- context:
cluster: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
user: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
name: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
current-context: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:ap-southeast-2:XXXXXXXXXXXX:cluster/test-eks
user: {}
+ kubectl get namespaces --kubeconfig ./config
Please enter Username: Please enter Username: Please enter Username: error: EOF
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
So, as you can see, when I cat the file the user information is present; however, when I run kubectl it challenges for credentials, which it should not.
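To isolate where this breaks, the command that the kubeconfig's `exec` plugin invokes can be run by hand, and the kubectl client version checked. A sketch (the tool-presence guards are mine, so it runs even on a box without both CLIs installed):

```shell
# Sketch: run the same command the exec: stanza invokes, and check
# the kubectl client version. Profile/cluster/region values are taken
# from the kubeconfig shown above.
checks_done=0

# 1. Exercise the exec plugin directly; if this fails or prompts,
#    the AWS/SAML side is the problem, not kubectl.
if command -v aws >/dev/null 2>&1; then
    aws --profile saml eks get-token --region ap-southeast-2 --cluster-name test-eks || true
else
    echo "aws CLI not installed; skipping token check"
fi
checks_done=$((checks_done + 1))

# 2. A client too old to understand the exec: stanza ignores it and
#    prompts for a username instead.
if command -v kubectl >/dev/null 2>&1; then
    kubectl version --client || true
else
    echo "kubectl not installed; skipping version check"
fi
checks_done=$((checks_done + 1))
```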
Upvotes: 1
Views: 814
Reputation: 4614
@learner I know you've solved your problem by upgrading kubectl to a newer version. Additionally, I would like to provide some more information about the versions of Kubernetes components and the relations between them.
The Kubernetes version skew support policy describes the maximum version skew supported between the various Kubernetes components; you can find the details in the version-skew-policy documentation.
I'll describe the general rule to illustrate how it works. Let's assume that the kube-apiserver is at version 1.n. In this case:
- kubelet and kube-proxy are supported at 1.n, 1.(n-1), and 1.(n-2).
- kube-controller-manager, kube-scheduler, and cloud-controller-manager are supported at 1.n and 1.(n-1).
- kubectl is supported at 1.(n+1), 1.n, and 1.(n-1).
NOTE: CoreDNS and etcd are separate projects and have their own versions.
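The kubectl rule above can be sketched as a tiny shell check (the function name is mine, purely illustrative):

```shell
# Sketch of the skew rule for kubectl specifically: given the
# apiserver's minor version n, kubectl is supported at n-1, n, n+1.
kubectl_skew_ok() {
    api_minor=$1
    cli_minor=$2
    diff=$((cli_minor - api_minor))
    [ "$diff" -ge -1 ] && [ "$diff" -le 1 ]
}

# apiserver 1.18 with kubectl 1.19 or 1.17: within the window
kubectl_skew_ok 18 19 && echo "1.19 ok"
kubectl_skew_ok 18 17 && echo "1.17 ok"
# kubectl 1.9 against a modern apiserver is far outside the window
kubectl_skew_ok 18 9 || echo "1.9 unsupported"
```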
Upvotes: 1
Reputation: 2860
This might sound as silly as it can get, but the issue was with the kubectl client version.
I hit it because I was using kubectl 1.9; upgrading to the latest version solved it. (As far as I can tell, support for the exec: credential-plugin stanza that this kubeconfig relies on only arrived around kubectl 1.10, so an older client ignores it and falls back to prompting for a username.)
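For completeness, a sketch of how a newer client could be pinned and fetched. The URL scheme follows the official kubectl install docs; the version here is only an example, and the actual download is left commented out:

```shell
# Example pin; https://dl.k8s.io/release/stable.txt reports the latest.
version="v1.20.0"
url="https://dl.k8s.io/release/${version}/bin/linux/amd64/kubectl"
echo "$url"
# Uncomment to actually download and verify:
# curl -LO "$url" && chmod +x kubectl && ./kubectl version --client
```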
Upvotes: 0