kleegrewec

Reputation: 41

Assign Azure RBAC Role to Virtual Machine using Terraform

I have a virtual machine in my Azure subscription that should be able to read and write to a storage container in the same subscription.

Therefore, I created an RBAC role that allows reading from and writing to storage containers. To assign the role to the VM, I enable a system-assigned identity on the VM and assign the RBAC role to that identity, with the resource manager ID of the container as the scope.

After the VM has started, I assume I should be able to execute some az storage blob ... command on the VM, e.g. to list the blobs in the given container, but the command fails with an authentication issue.

az storage blob list --account-name examplestorage --container-name example-container

There are no credentials provided in your command and environment, we will query for account key for your storage account.
It is recommended to provide --connection-string, --account-key or --sas-token in your command as credentials.

You also can add --auth-mode login in your command to use Azure Active Directory (Azure AD) for authorization if your login account is assigned required RBAC roles.
For more information about RBAC roles in storage, visit https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli.

In addition, setting the corresponding environment variables can avoid inputting credentials in your command. Please use --help to get more information about environment variable usage.

Skip querying account key due to failure: Please run 'az login' to setup account.
Public access is not permitted on this storage account.
RequestId:a3f2a5ce-301e-00a6-315b-931f48000000
Time:2021-08-17T11:30:31.2242111Z
ErrorCode:PublicAccessNotPermitted
Error:None
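For context, my reading of that hint is that, on the VM, I should be able to sign in with the system-assigned managed identity and then use Azure AD authorization, roughly like this:

# Sketch of what the CLI hint suggests: sign in with the VM's
# system-assigned managed identity, then use Azure AD authorization.
az login --identity
az storage blob list \
  --account-name examplestorage \
  --container-name example-container \
  --auth-mode login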

Here are the relevant parts of the Terraform code I use to create the resources and assign the role:

resource "azurerm_storage_account" "storage_account" {
  name                     = "examplestorage"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "storage_container" {
  name                  = "example-container"
  storage_account_name  = azurerm_storage_account.storage_account.name
  container_access_type = "private"
}

resource "azurerm_role_definition" "storage_access_role" {
  name        = "example-storage-access"
  description = "Role granting permissions to access the blob container storage."
  scope       = data.azurerm_subscription.example.id

  permissions {
    actions = [
      "Microsoft.Storage/storageAccounts/blobServices/containers/delete",
      "Microsoft.Storage/storageAccounts/blobServices/containers/read",
      "Microsoft.Storage/storageAccounts/blobServices/containers/write",
    ]
    data_actions = [
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
      "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write"
    ]
  }

  assignable_scopes = [
    data.azurerm_subscription.example.id
  ]
}

resource "azurerm_linux_virtual_machine" "example_instance" {
  name                = "example-instance"
  location            = data.azurerm_resource_group.example.location
  resource_group_name = data.azurerm_resource_group.example.name
  size                = "Standard_F2"
  admin_username      = "adminuser"

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  identity {
    type = "SystemAssigned"
  }
}

resource "azurerm_role_assignment" "example_role_assignment" {
  scope              = azurerm_storage_container.storage_container.resource_manager_id
  role_definition_id = azurerm_role_definition.storage_access_role.id
  principal_id       = azurerm_linux_virtual_machine.example_instance.identity[0].principal_id
}

The execution of terraform plan and apply terminates without errors, and in the Azure portal I can see that a system-assigned identity has been attached to my VM. The identity has the defined role assigned, with the storage container as its scope.
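The assignment can also be checked from the CLI; a query along these lines (with placeholders for the VM's principal ID and the container's resource manager ID) lists it:

# List role assignments for the VM's managed identity at the container scope
# (both IDs are placeholders).
az role assignment list \
  --assignee <vm-principal-id> \
  --scope <container-resource-manager-id>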

I did the following tests to check permissions:

My Terraform version is 0.13.04. The versions of my Terraform providers are:

Initializing provider plugins...

Any suggestions as to what I am missing or doing wrong here? Any help is greatly appreciated.

thanks, Christian

Upvotes: 3

Views: 2894

Answers (1)

kleegrewec

Reputation: 41

I decided to implement access from my virtual machine to Azure Storage using a workaround. Inspired by the Terraform Azure provider issue https://github.com/hashicorp/terraform-provider-azuread/issues/40, I deploy a service principal and assign my RBAC role for storage read and write access to this principal. The credentials generated during the Terraform run are then rendered into a file that is part of the custom data used when creating the virtual machine. With this solution I can use the credentials from this file to log in to Azure using

az login --service-principal --username ....

After this is done, the command from my original question works when the proposed --auth-mode login flag is appended to it.
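Spelled out with placeholders instead of the real values from the rendered credentials file, the two steps look roughly like this:

# Sign in with the service principal created by Terraform, then list
# blobs using Azure AD authorization (all values are placeholders).
az login --service-principal \
  --username <appId> \
  --password <client-secret> \
  --tenant <tenant-id>

az storage blob list \
  --account-name examplestorage \
  --container-name example-container \
  --auth-mode login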

Since this solution is only a workaround and introduces security concerns by storing credentials on the running VM, I would be glad if someone could post a solution that directly attaches the RBAC role to my VM's system-assigned identity.
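For reference, the Terraform side of the workaround looks roughly like the following sketch; the exact azuread resource and attribute names depend on the provider version (this follows the 2.x style), so treat it as an outline rather than the exact code I run:

# Sketch only (azuread provider 2.x style): create a service principal
# and assign the custom storage role to it instead of the VM identity.
resource "azuread_application" "storage_sp" {
  display_name = "example-storage-sp"
}

resource "azuread_service_principal" "storage_sp" {
  application_id = azuread_application.storage_sp.application_id
}

resource "azuread_service_principal_password" "storage_sp" {
  service_principal_id = azuread_service_principal.storage_sp.object_id
}

resource "azurerm_role_assignment" "sp_role_assignment" {
  scope              = azurerm_storage_container.storage_container.resource_manager_id
  role_definition_id = azurerm_role_definition.storage_access_role.id
  principal_id       = azuread_service_principal.storage_sp.object_id
}

# The application ID, the generated password (the password resource's value
# attribute) and the tenant ID are then rendered into a file that is passed
# to the VM via custom_data (not shown here).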

thanks, Christian

Upvotes: 1
