Reputation: 89
Has anyone run into an issue with Terraform and Azure Key Vault access policies? I create the key vault with a module that sets some default access policies inline on the vault. Then I create another resource, such as Synapse, whose system-assigned identity needs access to the key vault, so Terraform adds an azurerm_key_vault_access_policy for it once the resource exists. This works on the first apply, but after that the key vault module and the azurerm_key_vault_access_policy resource fall into a destroy/recreate loop on every plan, depending on which was applied last.
If I remove the inline access policies from the key vault module, Terraform errors stating that the Terraform principal doesn't have access to add policies (my best guess is that it runs a GET on the key vault to check whether the policy is already there). One of my module's default access policies grants access to the service principal Terraform runs as, but re-adding it restarts the cycle. Is this a known issue, and how do I handle key vault access policies for resources that are created with system-assigned identities and then need to be added to the vault's access policies?
synapse.tf
# Key Vault policy
# NOTE: azurerm_key_vault_access_policy import IDs use the composite form
# {keyVaultId}/objectId/{objectId}; importing the bare vault ID here will
# not match the policy resource.
import {
  to = azurerm_key_vault_access_policy.synapse-identity-policy
  id = module.app3-kv.vault_id
}

resource "azurerm_key_vault_access_policy" "synapse-identity-policy" {
  key_vault_id = module.app3-kv.vault_id
  tenant_id    = azurerm_synapse_workspace.synapse.identity[0].tenant_id
  object_id    = azurerm_synapse_workspace.synapse.identity[0].principal_id

  key_permissions = [
    "Get", "WrapKey", "UnwrapKey"
  ]
}
module - keyvault (main.tf)
resource "azurerm_key_vault" "key_vault" {
  name                            = var.kv_name
  location                        = var.location
  resource_group_name             = var.resource_group_name
  tenant_id                       = data.azurerm_client_config.current.tenant_id
  sku_name                        = var.kv_sku_name
  enabled_for_disk_encryption     = var.enabled_for_disk_encryption
  enabled_for_deployment          = var.enabled_for_deployment
  enabled_for_template_deployment = var.enabled_for_template_deployment
  soft_delete_retention_days      = 7
  purge_protection_enabled        = true
  tags                            = var.tags

  access_policy {
    tenant_id               = data.azurerm_client_config.current.tenant_id
    object_id               = var.kv_access_group_id
    key_permissions         = var.key_permissions
    secret_permissions      = var.secret_permissions
    certificate_permissions = var.certificate_permissions
  }

  access_policy {
    tenant_id               = data.azurerm_client_config.current.tenant_id
    object_id               = data.azurerm_client_config.current.object_id
    key_permissions         = var.key_permissions
    secret_permissions      = var.secret_permissions
    certificate_permissions = var.certificate_permissions
  }

  //public_network_access_enabled = var.public_network_access_enabled

  network_acls {
    bypass         = var.kv_bypass
    default_action = var.kv_default_action
    ip_rules       = var.kv_ip_address
  }
}

output "vault_id" {
  value = azurerm_key_vault.key_vault.id
}

output "vault_name" {
  value = azurerm_key_vault.key_vault.name
}

output "vault_resource_group_name" {
  value = azurerm_key_vault.key_vault.resource_group_name
}
key_vault.tf
module "app3-kv" {
  source                          = "../modules/key_vault"
  kv_name                         = var.kv_name
  resource_group_name             = data.azurerm_resource_group.kv-rg.name
  location                        = var.location
  enabled_for_deployment          = true
  enabled_for_disk_encryption     = true
  enabled_for_template_deployment = true
  tags                            = var.kv_tags

  depends_on = [data.azurerm_resource_group.kv-rg]
}
Plan output showing the Synapse identity policy being removed after it has been applied:
# module.spexs-app3-kv.azurerm_key_vault.key_vault will be updated in-place
  ~ resource "azurerm_key_vault" "key_vault" {
      ~ access_policy = [
          # (1 unchanged element hidden)
            {
                application_id          = ""
                certificate_permissions = [
                    "Get",
                    "List",
                    "GetIssuers",
                    "ListIssuers",
                ]
                key_permissions         = [
                    "Get",
                    "Create",
                    "Delete",
                    "List",
                    "Recover",
                    "Restore",
                    "UnwrapKey",
                    "WrapKey",
                    "List",
                    "GetRotationPolicy",
                    "SetRotationPolicy",
                ]
                object_id               = "964c5f8c-c077-44b0-8be4-60c775720e5e"
                secret_permissions      = [
                    "Get",
                    "List",
                    "Set",
                    "Delete",
                    "Recover",
                    "Restore",
                ]
                storage_permissions     = []
                tenant_id               = ""
            },
          - {
              - application_id          = ""
              - certificate_permissions = []
              - key_permissions         = [
                  - "Get",
                  - "WrapKey",
                  - "UnwrapKey",
                ]
              - object_id               = "b5957f89-c575-42f5-8fe1-c4ba8265eaa6"
              - secret_permissions      = []
              - storage_permissions     = []
              - tenant_id               = ""
            },
        ]
        id   = "/subscriptions/subid/resourceGroups/AGV-C-AKV-RGP-04-001/providers/Microsoft.KeyVault/vaults/AGV-C-AKV-04-003-SYNAPSE"
        name = "AGV-C-AKV-04-003-SYNAPSE"
        tags = {
            "Billing Project" = ""
            "Component"       = "Azure Key Vault"
            "Environment"     = ""
            "System"          = ""
            "Terraform"       = ""
        }
        # (12 unchanged attributes hidden)
        # (1 unchanged block hidden)
    }
Upvotes: 1
Views: 889
Reputation: 17994
From azurerm_key_vault_access_policy's documentation (emphasis mine):

> It's possible to define Key Vault Access Policies both within the azurerm_key_vault resource via the access_policy block and by using the azurerm_key_vault_access_policy resource. However **it's not possible to use both methods to manage Access Policies within a KeyVault, since there'll be conflicts.**

I'm not sure whether this applies here, given that the access policies are defined in different modules (if I understood correctly).
But in any case, try using azurerm_key_vault_access_policy resources instead of the access_policy blocks in the key vault module.
Something like:
resource "azurerm_key_vault_access_policy" "p01" {
  key_vault_id            = azurerm_key_vault.key_vault.id
  tenant_id               = data.azurerm_client_config.current.tenant_id
  object_id               = var.kv_access_group_id
  key_permissions         = var.key_permissions
  secret_permissions      = var.secret_permissions
  certificate_permissions = var.certificate_permissions
}

resource "azurerm_key_vault_access_policy" "p02" {
  key_vault_id            = azurerm_key_vault.key_vault.id
  tenant_id               = data.azurerm_client_config.current.tenant_id
  object_id               = data.azurerm_client_config.current.object_id
  key_permissions         = var.key_permissions
  secret_permissions      = var.secret_permissions
  certificate_permissions = var.certificate_permissions
}
Upvotes: 2