Reputation: 695
Kubernetes PodSecurityPolicy set to runAsNonRoot, pods are not getting started after that
Getting the error: Error: container has runAsNonRoot and image has non-numeric user (appuser), cannot verify user is non-root
We are creating the user (appuser, UID 999) and group (appgroup, GID 999) in the Docker image, and we are starting the container with that user.
But pod creation is throwing the error shown in the events below.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 53s default-scheduler Successfully assigned app-578576fdc6-nfvcz to appagent01
Normal SuccessfulMountVolume 52s kubelet, appagent01 MountVolume.SetUp succeeded for volume "default-token-ksn46"
Warning DNSConfigForming 11s (x6 over 52s) kubelet, appagent01 Search Line limits were exceeded, some search paths have been omitted, the applied search line is: app.svc.cluster.local svc.cluster.local cluster.local
Normal Pulling 11s (x5 over 51s) kubelet, appagent01 pulling image "app.dockerrepo.internal.com:5000/app:9f51e3e7ab91bb835d3b85f40cc8e6f31cdc2982"
Normal Pulled 11s (x5 over 51s) kubelet, appagent01 Successfully pulled image "app.dockerrepo.internal.com:5000/app:9f51e3e7ab91bb835d3b85f40cc8e6f31cdc2982"
Warning Failed 11s (x5 over 51s) kubelet, appagent01 Error: container has runAsNonRoot and image has non-numeric user (appuser), cannot verify user is non-root
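For reference, the PodSecurityPolicy in play enforces MustRunAsNonRoot. A minimal sketch of such a policy is shown below; the policy name and all fields other than runAsUser are assumptions, not taken from our actual setup:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted            # hypothetical policy name
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot    # enforces the runAsNonRoot requirement seen in the error message
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'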
Upvotes: 41
Views: 79078
Reputation: 8973
Here is the implementation of the verification:
case uid == nil && len(username) > 0:
return fmt.Errorf("container has runAsNonRoot and image has non-numeric user (%s), cannot verify user is non-root", username)
And here is the validation call with the comment:
// Verify RunAsNonRoot. Non-root verification only supports numeric user.
if err := verifyRunAsNonRoot(pod, container, uid, username); err != nil {
return nil, cleanupAction, err
}
As you can see, the only reason for that message in your case is uid == nil. Based on the comment in the source code, we need to set a numeric user value.
So, for the user with UID 999 you can do it in your pod definition like this:
securityContext:
  runAsUser: 999
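For context, a minimal pod manifest sketch showing where this setting lives; the pod and container names are placeholders, and it can equally be set per container under containers[].securityContext:
apiVersion: v1
kind: Pod
metadata:
  name: app                   # placeholder name
spec:
  securityContext:
    runAsUser: 999            # numeric UID of appuser, so the kubelet can verify it is non-root
    runAsGroup: 999           # optional, matches the appgroup GID
  containers:
  - name: app                 # placeholder container name
    image: app.dockerrepo.internal.com:5000/app:9f51e3e7ab91bb835d3b85f40cc8e6f31cdc2982
With a numeric runAsUser the kubelet no longer has to resolve the image's user name, so uid is not nil and verifyRunAsNonRoot passes.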
Upvotes: 56
Reputation: 16465
Adding this USER instruction to the Dockerfile solved my issue. Here 9000 is an arbitrary non-zero numeric UID and GID:
USER 9000:9000
Upvotes: 3
Reputation: 2078
Here is what worked for me. In the route.yml file, change the spec.host value to the level at which the cluster allows you the permissions. In my case it was changed:
from:
maximo-lab.domain.com
to:
maximo-lab.subdomain.domain.com
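For reference, a minimal OpenShift Route sketch showing where spec.host sits; the route name and the backing service name are assumptions, not from my actual file:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: maximo-lab            # hypothetical route name
spec:
  host: maximo-lab.subdomain.domain.com   # host moved to a subdomain the cluster permits
  to:
    kind: Service
    name: maximo-lab          # hypothetical backing service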
I also checked this article on Red Hat, which didn't have the answer that worked for me, but it may have the answer for others: https://developers.redhat.com/blog/2020/10/26/adapting-docker-and-kubernetes-containers-to-run-on-red-hat-openshift-container-platform#how_to_debug_issues
Upvotes: -2