Madhur

Reputation: 21

How to manage AKS with Terraform when Azure AD RBAC is enabled and local_account_disabled = true is set

I am creating an AKS cluster with Azure AD RBAC enabled and the local account disabled. Scenario:

Everything is in a single main.tf file, but the apply fails with Error: Unauthorized. I am new to AKS. Thanks for any help.

provider "azurerm" {
  features {

  }
} 
provider "azuread" {}  
data "azuread_user" "aad" {
  mail_nickname = "madhurshukla23jan_gmail.com#EXT#"
} 
resource "azuread_group" "k8sadmins" {
  display_name = "Kubernetes Admins"
  members = [
    data.azuread_user.aad.object_id,
  ]
  security_enabled = true
} 
resource "azurerm_resource_group" "example" {
  name     = "example-resources131"
  location = "West Europe"
} 
resource "azurerm_role_assignment" "example" {
  depends_on           = [azurerm_resource_group.example]
  scope                = azurerm_resource_group.example.id
  role_definition_name = "Azure Kubernetes Service Cluster Admin Role"
  principal_id         = azuread_group.k8sadmins.object_id
}
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks131"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks131"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }
  azure_active_directory_role_based_access_control {
    managed                = true
    azure_rbac_enabled     = true
    admin_group_object_ids = [azuread_group.k8sadmins.id]
  }
  local_account_disabled = true

  service_principal {
    client_id     = "xxxxxx"
    client_secret = "xxxxxx"
  }

  tags = {
    Environment = "Production"
  }
  lifecycle {
    ignore_changes = all
  }
}
provider "kubernetes" {
  config_path = "~/.kube/config"
}
resource "kubernetes_namespace" "example" {
  metadata {
    name = "your-namespace-name-test13"
  }
}

Upvotes: 2

Views: 1767

Answers (3)

CrossedChaos

Reputation: 1

I was trying to do exactly what you are doing, and I am going to share my findings / solution here, because I could not find any good answers online. I wanted to manage an AKS cluster with Azure AD RBAC enabled and local_account_disabled = true. What I found after inspecting the state file is that if you have RBAC enabled and local accounts disabled, kube_admin_config is an empty list and kube_admin_config_raw is an empty string. So those two outputs are unusable, and the Terraform docs basically state this.

Now onto the kube_config output. What they do not tell you is that with RBAC enabled and local accounts disabled, kube_config is only partially populated: client_certificate, client_key, and password are populated with empty strings (fun). The cluster_ca_certificate and host are correctly populated, so you can use those outputs. The username is populated, but it is just some clusterUser name that you probably cannot use.
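For illustration, a minimal sketch (assuming the question's azurerm_kubernetes_cluster.example resource name) of outputs exposing the only two usable kube_config attributes; the whole kube_config block is marked sensitive, so the outputs have to be as well:

output "aks_host" {
  # Populated even with local_account_disabled = true
  value     = azurerm_kubernetes_cluster.example.kube_config.0.host
  sensitive = true
}

output "aks_cluster_ca_certificate" {
  # Populated; base64-encoded CA certificate
  value     = azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate
  sensitive = true
}

# client_certificate, client_key and password come back as empty strings here,
# so there is nothing to feed into a certificate-based kubernetes provider config.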

So the only real solution at the moment (as of azurerm 4.16.0) is to use the kube_config output to pull only the cluster_ca_certificate and host, then use an exec block within the "kubernetes" provider block and use kubelogin to retrieve dynamic AAD credentials at runtime. You can see an AWS example in the docs, and if you search online you can find blogs with Azure-based examples: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs#exec-plugins
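A minimal sketch of that approach, assuming the question's azurerm_kubernetes_cluster.example resource name and that kubelogin is available on the agent; the server ID below is the well-known AKS AAD server application ID, but verify it for your environment:

provider "kubernetes" {
  # Only the host and the CA certificate are taken from the AKS resource
  host                   = azurerm_kubernetes_cluster.example.kube_config.0.host
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)

  # Credentials are fetched dynamically at plan/apply time via kubelogin
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "kubelogin"
    args = [
      "get-token",
      "--login", "azurecli",                                 # reuse the Azure CLI session on the agent
      "--server-id", "6dae42f8-4368-4678-94ff-3960e28e3630", # well-known AKS AAD server app ID (assumed)
    ]
  }
}

With Azure RBAC enabled, the identity behind that Azure CLI session still needs an appropriate cluster role (for example one of the Azure Kubernetes Service RBAC roles) on the cluster for the namespace creation to succeed.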

However, this means that you have to have kubelogin installed on whatever agent runs your terraform plan so it can run the exec. That is not ideal, but it is the only way forward I see at the moment, since the state file in this configuration literally does not contain login information.

Upvotes: 0

certainty3452

Reputation: 21

I've struggled with the same issue for the last 2 days and found a solution that worked for my case; it will probably help you too.

Usually, on clusters without AD authentication with RBAC enabled, it is enough to take the kubeconfig from the resource output and reuse it in the "kubernetes" provider, like

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.example.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
}

but to manage the cluster in the same apply session right after creation, what helped in my case was using kube_admin_config instead of kube_config, like

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.example.kube_admin_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.example.kube_admin_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.example.kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_admin_config.0.cluster_ca_certificate)
}

I'm not fully sure, but it may help in your case as well.

Upvotes: 2

Venkat V

Reputation: 7614

How to manage AKS with Terraform when Azure AD RBAC is enabled and local_account_disabled = true is set

This error occurs because the kubernetes provider does not have the proper permission to create a namespace in your cluster.


To resolve the issue, ensure that ~/.kube/config is specified in the provider rather than passing the details manually. Additionally, make sure you have the Kubernetes Cluster Administrator role.

provider "kubernetes" {
  config_path = "~/.kube/config"
}

The config_path specifies the path to your Kubernetes configuration file (~/.kube/config). This file contains all the configuration details needed to access the AKS cluster, including the cluster's API server URL, client certificate, and client key. Here is the updated code to create the AKS cluster with the namespace.

   provider "azurerm" {
  features {}
  skip_provider_registration = true
}

terraform {
  required_providers {
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.15.0"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

data "azuread_user" "ad1m" {
  user_principal_name = "user1"
}

data "azuread_user" "ad2" {
  user_principal_name = "user2.com"
}

data "azuread_user" "ad3" {
  user_principal_name = "user3.onmicrosoft.com"
}

resource "azurerm_resource_group" "aks_rg" {
  name     = "aks_rg"
  location = "eastus"
}

resource "azuread_group" "aksadmngroup" {
  display_name     = "Kubernetes Admins"
  members          = [data.azuread_user.ad1m.object_id, data.azuread_user.ad2.object_id, data.azuread_user.ad3.object_id]
  security_enabled = true
}

resource "azurerm_role_assignment" "example" {
  scope                = azurerm_resource_group.aks_rg.id
  role_definition_name = "Azure Kubernetes Service Cluster Admin Role"
  principal_id         = azuread_group.aksadmngroup.object_id
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "venkat_demo-aks"
  location            = azurerm_resource_group.aks_rg.location
  resource_group_name = azurerm_resource_group.aks_rg.name
  dns_prefix          = "venkataks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2_v2"
  }
  azure_active_directory_role_based_access_control {
    managed                = true
    azure_rbac_enabled     = true
    admin_group_object_ids = [azuread_group.aksadmngroup.id]
  }

  identity {
    type = "SystemAssigned"
  }

  local_account_disabled = true
  tags = {
    Environment = "testing"
  }

  lifecycle {
    ignore_changes = all
  }
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "venkat-test13"
  }
}

Terraform apply completes successfully.

Reference: Stack link answer provided by me

Kubernetes Provider

Upvotes: 0
