Reputation: 4432
I used to be able to curl
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1beta3/namespaces/default/
as my base URL, but in Kubernetes 0.18.0 it gives me "unauthorized". The strange thing is that if I use the external IP address of the API machine (http://172.17.8.101:8080/api/v1beta3/namespaces/default/), it works just fine.
Upvotes: 144
Views: 126964
Reputation: 10642
Using the Python kubernetes client:
from kubernetes import client, config

# Configure the client from the service account mounted inside the pod
config.load_incluster_config()
v1_core = client.CoreV1Api()
# e.g. v1_core.list_namespaced_pod("default")
Upvotes: 20
Reputation: 6531
This is from the Kubernetes in Action book.
You need to take care of authentication. The API server itself says you’re not authorized to access it, because it doesn’t know who you are.
To authenticate, you need an authentication token. Luckily, the token is provided through the default-token Secret mentioned previously, and is stored in the token file in the secret volume.
You’re going to use the token to access the API server. First, load the token into an environment variable:
root@curl:/# TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
The token is now stored in the TOKEN environment variable. You can use it when sending requests to the API server:
root@curl:/# curl -H "Authorization: Bearer $TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1beta1",
    "/apis/authorization.k8s.io",
    ...
    "/ui/",
    "/version"
  ]
}
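The token-and-URL plumbing above can be sketched with Python's standard library alone. The env-var names and file path below mirror the ones used in the curl example; the fallback host `kubernetes.default.svc` is an assumption for when the env vars are absent:

```python
import os

def api_url(path, host=None, port=None):
    """Build the in-cluster API server URL from the standard env vars.

    KUBERNETES_SERVICE_HOST and KUBERNETES_PORT_443_TCP_PORT are the same
    variables used in the curl examples; the fallbacks are assumptions.
    """
    host = host or os.environ.get("KUBERNETES_SERVICE_HOST", "kubernetes.default.svc")
    port = port or os.environ.get("KUBERNETES_PORT_443_TCP_PORT", "443")
    return f"https://{host}:{port}{path}"

def bearer_header(token_path="/var/run/secrets/kubernetes.io/serviceaccount/token"):
    """Read the mounted service account token and build the auth header."""
    with open(token_path) as f:
        token = f.read().strip()
    return {"Authorization": "Bearer " + token}
```

With any HTTP library, pass `bearer_header()` as request headers and verify TLS against /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.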
Upvotes: 5
Reputation: 4432
In the official documentation I found that, apparently, I was missing a security token that I didn't need in a previous version of Kubernetes. From that, I devised what I think is a simpler solution than running a proxy or installing golang on my container. See this example that gets the information, from the API, for the current container:
KUBE_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME
I also include a simple binary, jq (http://stedolan.github.io/jq/download/), to parse the JSON for use in bash scripts.
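If jq isn't available in the image, the same kind of field extraction can be sketched with Python's stdlib json module. The response below is a hand-written, heavily trimmed pod object for illustration, not real API output:

```python
import json

# A trimmed, hypothetical pod response (real API output has many more fields)
raw = '''
{
  "kind": "Pod",
  "metadata": {"name": "mypod", "namespace": "default"},
  "status": {"podIP": "10.244.1.7", "phase": "Running"}
}
'''

pod = json.loads(raw)
pod_ip = pod["status"]["podIP"]  # the kind of field you'd pull out with jq
phase = pod["status"]["phase"]
print(pod_ip, phase)
```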
Upvotes: 170
Reputation: 18230
Every pod has a service account automatically applied that allows it to access the apiserver. The service account provides both client credentials, in the form of a bearer token, and the certificate authority certificate that was used to sign the certificate presented by the apiserver. With these two pieces of information, you can create a secure, authenticated connection to the apiserver without using curl -k (aka curl --insecure):
curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc/
Upvotes: 94
Reputation: 3558
The most important addendum to the details already mentioned above is that the pod from which you are trying to access the API server must have the RBAC permissions to do so.
Each entity in the k8s system is identified by a service account (similar to a user account for human users). The service account token (/var/run/secrets/kubernetes.io/serviceaccount/token) is mounted into the pod, and kube-api bindings (e.g. pykube) can take this token as an input when creating a connection to the kube-apiserver. If the service account has the right RBAC permissions, the pod will be able to make the corresponding calls to the kube-apiserver.
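When the service account lacks the needed permissions, the API server replies with a Status object rather than the resource. A minimal check might look like this; the response body below is a hand-written example of the Forbidden shape, not captured output:

```python
import json

# Hand-written example of the Status object the API server returns on 403
body = '''
{
  "kind": "Status",
  "apiVersion": "v1",
  "status": "Failure",
  "reason": "Forbidden",
  "code": 403
}
'''

def is_forbidden(response_text):
    """Return True if the parsed response is a Forbidden Status object."""
    try:
        obj = json.loads(response_text)
    except ValueError:
        return False
    return obj.get("kind") == "Status" and obj.get("reason") == "Forbidden"

print(is_forbidden(body))
```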
Upvotes: 6
Reputation: 3743
I had a similar auth problem on GKE where Python scripts suddenly threw exceptions. The solution that worked for me was to give pods permission through a ClusterRoleBinding:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
  - kind: ServiceAccount
    # Reference to upper's `metadata.name`
    name: default
    # Reference to upper's `metadata.namespace`
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Upvotes: 5
Reputation: 350
I ran into this issue when trying to access the API from inside a pod using Go code. Below is what I implemented to get that working, should someone come across this question wanting to use Go too.
The example uses a pod resource; if you are working with native Kubernetes objects you should use the client-go library instead. The code is more helpful for those working with CustomResourceDefinitions.
import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "net/url"
    "os"
)

func queryAPIServer() error {
    serviceHost := os.Getenv("KUBERNETES_SERVICE_HOST")
    servicePort := os.Getenv("KUBERNETES_SERVICE_PORT")
    apiVersion := "v1"           // For example
    namespace := "default"       // For example
    resource := "pods"           // For example
    httpMethod := http.MethodGet // For example

    // Core resources live under /api/<version>/...; CustomResourceDefinitions
    // live under /apis/<group>/<version>/...
    apiURL := fmt.Sprintf("https://%s:%s/api/%s/namespaces/%s/%s", serviceHost, servicePort, apiVersion, namespace, resource)
    u, err := url.Parse(apiURL)
    if err != nil {
        return err
    }
    // Pass a request body (e.g. bytes.NewBuffer(payload)) instead of nil for POST/PUT.
    req, err := http.NewRequest(httpMethod, u.String(), nil)
    if err != nil {
        return err
    }
    saToken, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
    if err != nil {
        return err // cannot find token file
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", string(saToken)))
    caCert, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
    if err != nil {
        return err // cannot find cert file
    }
    caCertPool := x509.NewCertPool()
    caCertPool.AppendCertsFromPEM(caCert)
    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{
                RootCAs: caCertPool,
            },
        },
    }
    resp, err := client.Do(req)
    if err != nil {
        log.Printf("sending request failed: %s", err.Error())
        return err
    }
    defer resp.Body.Close()
    // Check resp.StatusCode and resp.Status here.
    return nil
}
Upvotes: 5
Reputation: 5126
With RBAC enabled, the default service account doesn't have any permissions.
It's better to create a separate service account for your needs and use it to create your pod:
spec:
  serviceAccountName: secret-access-sa
  containers:
  ...
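A minimal sketch of creating such an account, together with an example Role granting it pod read access; the names (`secret-access-sa`, `pod-reader`) and the rules are illustrative, and the account grants nothing until it is bound to a Role:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-access-sa
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-access-sa-pod-reader
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
  - kind: ServiceAccount
    name: secret-access-sa
    namespace: default
```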
It's well explained here: https://developer.ibm.com/recipes/tutorials/service-accounts-and-auditing-in-kubernetes/
Upvotes: 3
Reputation: 2173
wget version:
KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)
wget -vO- --ca-certificate /var/run/secrets/kubernetes.io/serviceaccount/ca.crt --header "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/default/pods/$HOSTNAME
Upvotes: 13
Reputation: 1451
From inside the pod, the Kubernetes API server can be accessed directly at "https://kubernetes.default". By default the pod uses its "default service account" for accessing the API server.
So we also need to pass a CA cert and the default service account token to authenticate with the API server.
The certificate file is stored at the following location inside the pod: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
and the default service account token at: /var/run/secrets/kubernetes.io/serviceaccount/token
You can use the godaddy kubernetes client for Node.js.
let getRequestInfo = () => {
    return {
        url: "https://kubernetes.default",
        ca: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/ca.crt').toString(),
        auth: {
            bearer: fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/token').toString(),
        },
        timeout: 1500
    };
};

let initK8objs = () => {
    k8obj = getRequestInfo();
    k8score = new Api.Core(k8obj);
    k8s = new Api.Api(k8obj);
};
Upvotes: 8
Reputation: 16716
For whoever is using Google Container Engine (powered by Kubernetes):
A simple call to https://kubernetes
from within the cluster using this kubernetes client for Java works.
Upvotes: 2
Reputation: 9
curl -v --cacert <path to>/ca.crt --cert <path to>/kubernetes-node.crt --key <path to>/kubernetes-node.key https://<ip:port>
My k8s version is 1.2.0, and it's supposed to work in other versions too.
Upvotes: 0