Reputation: 333
Is there any way that I can run oc commands in a pod's terminal? What I am trying to do is let the user log in using
oc login
then run the command to get the token:
oc whoami -t
and then use that token to call the OpenShift REST APIs. This works in my local environment, but on OpenShift there are permission issues; I guess OpenShift doesn't give root permissions to the user, and it says "permission denied".
EDIT
So basically I want to be able to get that bearer token that I can send in the headers of the REST APIs to create pods, services, routes, etc. And I want that token before any pod is made, because I am going to use that token to create pods. It might sound silly, I know, but I want to know whether it is possible to do on OpenShift what we do on the command line using oc commands.
The other possible way could be to call an API that gives me a token and then use that token in other API calls.
@gshipley It does sound like a chicken-and-egg problem to me. But to explain what I do on my local machine: all I want is to replicate that on OpenShift, if possible. I run the oc commands from Node.js; the oc.exe file is in my repository. I run oc login and oc whoami -t, read the token I get, and store it. Then I send that token as Bearer in the API headers. That's what works on my local machine. I just want to replicate this scenario on OpenShift. Is it possible?
Upvotes: 3
Views: 8085
Reputation: 784
As a cluster admin, create a new ClusterRole, e.g. in role.yml:
apiVersion: authorization.openshift.io/v1
kind: ClusterRole
metadata:
  name: mysudoer
rules:
- apiGroups: [""]
  resources: ["users"]
  verbs: ["impersonate"]
  resourceNames: ["<your user name>"]
and run
oc create -f role.yml
or, instead of creating a raw role.yml file, use:
oc create clusterrole mysudoer --verb impersonate --resource users --resource-name "<your user name>"
Then give your ServiceAccount the new role:
oc adm policy add-cluster-role-to-user mysudoer system:serviceaccount:<project>:default
Download the oc tool into your container. Now whenever you execute a command, you need to add --as=<user name>; to hide that, create a shell alias inside your container:
alias oc="oc --as=<user name>"
oc should now behave exactly as on your machine, with the exact same privileges as your user; the ServiceAccount only functions as an entry point to the API, but the real tasks are done as your user.
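The same impersonation also works for raw REST calls: the --as flag corresponds to the Impersonate-User header of the Kubernetes/OpenShift API. A minimal sketch, assuming a hypothetical helper name, placeholder token, user, and URL — the helper only prints the curl command so nothing is actually sent:

```shell
# Print a curl command that authenticates with a ServiceAccount token and
# impersonates a user via the Impersonate-User header. All three arguments
# are placeholders supplied by the caller.
impersonate_curl() {
  local token="$1" user="$2" url="$3"
  echo curl -sk \
    -H "Authorization: Bearer $token" \
    -H "Impersonate-User: $user" \
    "$url"
}

# Example (drop the echo inside the function to actually send the request):
# impersonate_curl "$TOKEN" "<your user name>" \
#   https://openshift.example.com:8443/api/v1/namespaces/myproject/pods
```

With this, the ServiceAccount token plus the impersonation header gives a plain HTTP call the impersonated user's privileges, just like oc --as does.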
In case you want something simpler, just add the proper permissions to your ServiceAccount, e.g.:
oc policy add-role-to-user admin -z default --namespace=<your project>
If you run this command, any container in your project that has oc available will be able to auto-magically do tasks inside the project. However, this way the permissions are not inherited from the user as in the first approach, so you always have to add them to the ServiceAccount manually as needed.
Explanation: there is always a ServiceAccount called default in your project. It has no privileges and thus cannot do anything; however, all the credentials needed to authenticate as this ServiceAccount are present by default in every single container. The cool thing is that if you run oc inside a container in OpenShift without providing any credentials, it will automatically try to log in using this account. The steps above simply show how to give the account the proper permissions so that oc can use it to do something meaningful.
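Those credentials live at a fixed mount point in every container. A small sketch to see them, assuming a hypothetical helper name — outside a cluster the directory simply doesn't exist, so the fallback message is printed instead:

```shell
# List the ServiceAccount credentials mounted into a container. The default
# argument is the fixed in-cluster mount point; a different directory can be
# passed for illustration.
show_sa_credentials() {
  local sa_dir="${1:-/var/run/secrets/kubernetes.io/serviceaccount}"
  if [ -d "$sa_dir" ]; then
    ls "$sa_dir"    # typically ca.crt, namespace, token
  else
    echo "not running inside a cluster"
  fi
}

show_sa_credentials
```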
In case you simply want to access the REST API, use the token provided in
/var/run/secrets/kubernetes.io/serviceaccount/token
and set up the permissions for the ServiceAccount as described above. With that, you will not even need the oc command-line tool.
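As a sketch of that token-only approach, assuming a hypothetical helper name and the default in-cluster paths (kubernetes.default.svc is the in-cluster API address), this builds the authenticated call — the helper only prints the curl command so the Bearer header format is visible; drop the echo to actually send the request:

```shell
# Build a curl command that calls the in-cluster API using the mounted
# ServiceAccount token and CA certificate. Prints the command instead of
# running it.
sa_curl() {
  local sa_dir="${1:-/var/run/secrets/kubernetes.io/serviceaccount}"
  local api_path="$2"
  local token
  token=$(cat "$sa_dir/token")
  echo curl -s --cacert "$sa_dir/ca.crt" \
    -H "Authorization: Bearer $token" \
    "https://kubernetes.default.svc$api_path"
}

# Example (inside a pod): sa_curl "" /api/v1/namespaces/myproject/pods
```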
Upvotes: 2