stran58

Reputation: 21

OpenShift 3 Origins Persistent Volume Issue

We are having issues with our OpenShift deployment on AWS when trying to use persistent volumes.

These are some of the errors we see when trying to deploy the mysql-persistent instance:

- Unable to mount volumes for pod "mysql-4-uizxn_persistent-test": Cloud provider does not support volumes
- Error syncing pod, skipping: Cloud provider does not support volumes

We added the following to the node-config.yaml on each of our nodes:

kubeletArguments:
  cloud-provider:
    - "aws"
  cloud-config:
    - "/etc/aws/aws.conf"

and also added the following to our master-config.yaml

kubernetesMasterConfig:
  apiServerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/aws/aws.conf"
  controllerArguments:
    cloud-provider:
      - "aws"
    cloud-config:
      - "/etc/aws/aws.conf"

Not sure if we are just missing something or if there is a known issue/workaround.

Another question: how does OpenShift or Kubernetes know that the config files have been changed?

Just to give some context, we used openshift-ansible to deploy our environment.

Upvotes: 2

Views: 581

Answers (4)

qyddbear

Reputation: 11

@stran58

If you exported the AWS credentials as above, started the master service via systemctl, and still get that error, try setting the AWS credentials like this:

systemctl set-environment AWS_SECRET_ACCESS_KEY=xxx
systemctl set-environment AWS_ACCESS_KEY_ID=xxx
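
To confirm the variables actually landed in the systemd manager environment, a quick check (systemctl show-environment is a standard systemd command; the grep is just a filter):

systemctl show-environment | grep AWS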

Hope that helps.

Upvotes: 0

detiber

Reputation: 166

The way the documentation tells you to export the environment variables is a bit inaccurate. The variables need to be added to the environment referenced by the systemd unit file, or the node needs to be granted the appropriate IAM permissions.

To configure the credentials in the node's environment, add the following to /etc/sysconfig/origin-node (assuming Origin 1.1):

AWS_ACCESS_KEY_ID=<key id>
AWS_SECRET_ACCESS_KEY=<secret key>
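
After editing the file, the node service needs to be restarted so the new environment is picked up; a minimal sketch, assuming the Origin 1.1 unit name that matches the sysconfig file above:

systemctl restart origin-node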

Alternatively, the nodes can be assigned an IAM role with the appropriate permissions. The following CloudFormation resource snippet creates such a role for a node:

"NodeIAMRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": [ "ec2.amazonaws.com" ] },
          "Action": [ "sts:AssumeRole" ]
        }
      ]
    },
    "Policies": [
      {
        "PolicyName": "demo-node-1",
        "PolicyDocument": {
          "Version" : "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": "ec2:Describe*",
              "Resource": "*"
            }
          ]
        }
      },
      {
        "PolicyName": "demo-node-2",
        "PolicyDocument": {
          "Version" : "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": "ec2:AttachVolume",
              "Resource": "*"
            }
          ]
        }
      },
      {
        "PolicyName": "demo-node-3",
        "PolicyDocument": {
          "Version" : "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": "ec2:DetachVolume",
              "Resource": "*"
            }
          ]
        }
      }
    ]
  }
}
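
Note that a role by itself does not get applied to an instance; it has to be wrapped in an instance profile that the node instances are launched with. A minimal CloudFormation sketch of that piece (the NodeInstanceProfile resource name is illustrative):

"NodeInstanceProfile": {
  "Type": "AWS::IAM::InstanceProfile",
  "Properties": {
    "Roles": [ { "Ref": "NodeIAMRole" } ]
  }
}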

Upvotes: 1

screeley

Reputation: 596

Did you set up and configure your user in the admin group in the IAM console? It's been a while, but I think I had the same issue when I was experimenting with OSE on AWS, and that was what finally fixed it.

https://console.aws.amazon.com/iam/home#home

I added my user to the admin group (I can't remember if I had to create that group first). I then attached two policies:

   EC2FullAccess and AdministratorAccess

Also, make sure you export your access keys and restart the master and node services:

    export AWS_ACCESS_KEY_ID=<key id>  
    export AWS_SECRET_ACCESS_KEY=<secret key>  
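
A rough sketch of that restart, assuming the Origin unit names (adjust to whatever your installation uses):

    systemctl restart origin-master
    systemctl restart origin-node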

Upvotes: 0

Jan Šafránek

Reputation: 11

It looks like OpenShift's AWS provider is not able to talk to the AWS API.

According to https://docs.openshift.org/latest/install_config/configuring_aws.html, you should export these variables with your AWS credentials:

export AWS_ACCESS_KEY_ID=<key id>
export AWS_SECRET_ACCESS_KEY=<secret key>
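
As a quick way to check that those credentials can actually reach the AWS API, one option is a read-only call from the master host (this assumes the aws CLI is installed; the region is illustrative):

aws ec2 describe-volumes --region us-east-1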

Creation of the AWS credentials is covered in the AWS documentation.

Upvotes: 0
