Reputation: 83
Problem Statement:
I have deployed a Spring Boot app in GKE under a namespace. When the app starts, it authenticates using the default GCE service account credentials. What I did: I created a Kubernetes service account, bound it to a Google service account with an IAM policy binding granting the Workload Identity User role, and then annotated the Kubernetes SA by executing the two commands below.
The issue is that my Spring Boot app still uses the default GCE SA credentials. Can someone please help me resolve this?
I can see that serviceAccountName is changed to the new Kubernetes SA and that the secret is also being created and mounted, but the deployed app is not using this Kubernetes SA.
Note: I am using a Helm chart for deployment.
gcloud iam service-accounts add-iam-policy-binding \
  --member "serviceAccount:{projectID}.svc.id.goog[default/{k8sServiceAccount}]" \
  --role roles/iam.workloadIdentityUser \
  {googleServiceAccount}
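One way to confirm the binding took effect is to read back the IAM policy on the Google service account; this is a sketch, and the placeholder email must be substituted with the real GSA:

```shell
# Should list roles/iam.workloadIdentityUser with the
# serviceAccount:{projectID}.svc.id.goog[default/{k8sServiceAccount}] member
gcloud iam service-accounts get-iam-policy {googleServiceAccount}
```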
kubectl annotate serviceaccount \
  --namespace default \
  {k8sServiceAccount} \
  iam.gke.io/gcp-service-account={googleServiceAccount}
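Since the deployment goes through Helm, the same annotation can also be expressed declaratively as a ServiceAccount manifest in the chart instead of running kubectl annotate by hand; the placeholders below are the same ones used in the commands above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {k8sServiceAccount}
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: {googleServiceAccount}
```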
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: helloworld
    appVersion: {{ .Values.appVersion }}
  name: helloworld
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
        environment: {{ .Values.environment }}
    spec:
      serviceAccountName: {{ .Values.serviceAccountName }}
      containers:
        - name: helloworld
          image: {{ .Values.imageSha }}
          imagePullPolicy: Always
          securityContext:
            allowPrivilegeEscalation: false
            runAsUser: 1000
          ports:
            - containerPort: 8080
          env:
            - name: SPRING_CONFIG_LOCATION
              value: "/app/deployments/config/"
          volumeMounts:
            - name: application-config
              mountPath: "/app/deployments/config"
              readOnly: true
      volumes:
        - name: application-config
          configMap:
            name: {{ .Values.configMapName }}
            items:
              - key: application.properties
                path: application.properties
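For completeness, a minimal values.yaml satisfying the template above might look like the following; every value here is an assumption for illustration, not taken from the question:

```yaml
appVersion: "1.0.0"
environment: dev
serviceAccountName: my-ksa   # the annotated Kubernetes service account
imageSha: gcr.io/my-project/helloworld@sha256:...
configMapName: helloworld-config
```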
Upvotes: 2
Views: 1265
Reputation: 773
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw JSON or YAML for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see that the spec.serviceAccountName field has been set automatically.
You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster [1]. The API permissions of the service account depend on the authorization plugin and policy [2] in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...
The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
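As a quick sanity check of which identity a running pod actually presents, you can inspect the mounted token and query the metadata server from inside the pod. This is a sketch; the pod name my-pod and namespace default are placeholders you would substitute:

```shell
# Check whether a service account token was mounted into the pod at all
kubectl exec -n default my-pod -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount/

# With Workload Identity configured correctly, the metadata server should
# report the bound Google service account email, not the default GCE SA
kubectl exec -n default my-pod -- curl -s \
  -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
```

If the second command still returns the default Compute Engine service account, the pod is not picking up the Workload Identity mapping.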
[1] https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/
[2] https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules
Upvotes: 3