Reputation: 73
With the AzureRM provider, one or more storage data disks can be created empty and attached to a VM, e.g.:
variable "extra_disks" {
  type = list(object({
    name              = string
    disk_size_gb      = number
    managed_disk_type = string
    lun               = number
  }))
  default = []
}
resource "azurerm_virtual_machine" "linuxvm" {
  name                = var.vm_name
  location            = var.location
  resource_group_name = var.resource_group_name
  ...
  dynamic "storage_data_disk" {
    for_each = var.extra_disks
    content {
      name              = storage_data_disk.value.name
      disk_size_gb      = storage_data_disk.value.disk_size_gb
      managed_disk_type = storage_data_disk.value.managed_disk_type
      lun               = storage_data_disk.value.lun
      caching           = "ReadWrite"
      create_option     = "Empty"
    }
  }
  ...
}
I can then define a list to create the extra disks dynamically, e.g. in terraform.tfvars:
extra_disks = [
  {
    name              = "data1" # logs, backup, etc.
    disk_size_gb      = 40
    managed_disk_type = "Standard_LRS"
    lun               = 0
  }
]
I can also attach existing managed disks while creating a VM. In this case the data structure would be something like this:
variable "extra_disks" {
  type = list(object({
    name = string
    #disk_size_gb = null # existing data
    #managed_disk_type = null # existing data
    lun = number
  }))
  default = []
}
The parameters such as disk_size_gb and managed_disk_type would not be relevant in this scenario.
extra_disks = [
  {
    name              = "db-data-opt-03-manageddisk"
    disk_size_gb      = 0              # this will not be used
    managed_disk_type = "Standard_LRS" # this will not be used
    lun               = 0
  }
]
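As a side note, on Terraform v1.3 or later the placeholder values could be avoided entirely by marking the unused attributes as optional. This is a sketch under that version assumption, not part of the original module:

```hcl
variable "extra_disks" {
  type = list(object({
    name = string
    # optional() (Terraform >= 1.3) lets these be omitted when attaching
    # existing disks; omitted attributes default to null.
    disk_size_gb      = optional(number)
    managed_disk_type = optional(string)
    lun               = number
  }))
  default = []
}
```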
With the following Terraform code, I can perform a lookup query for the existing disks:
data "azurerm_managed_disk" "data_disks" {
  for_each            = var.extra_disks
  name                = each.value.name
  resource_group_name = var.resource_group_name
}
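Note that for_each accepts only a map or a set of strings, not a list of objects, so a projection is needed. One possible adjustment (a sketch, not tested against the module above) is to key the map by disk name:

```hcl
data "azurerm_managed_disk" "data_disks" {
  # Key each element by its disk name so individual disks can later be
  # addressed as data.azurerm_managed_disk.data_disks["<name>"].
  for_each            = { for d in var.extra_disks : d.name => d }
  name                = each.value.name
  resource_group_name = var.resource_group_name
}
```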
Is there a way to create a dynamic block that can be used to support both these types of storage configurations?
What I have tried is the following:
variable "attach_existing_managed_data_disks" {
  default = 0
}

dynamic "storage_data_disk" {
  for_each = var.extra_disks
  content {
    name = (var.attach_existing_managed_data_disks == 0 ? storage_data_disk.value.name :
      data.azurerm_managed_disk.data_disks[*].id)
    disk_size_gb      = (var.attach_existing_managed_data_disks == 0 ? storage_data_disk.value.disk_size_gb : null)
    managed_disk_type = (var.attach_existing_managed_data_disks == 0 ? storage_data_disk.value.managed_disk_type : null)
    lun               = storage_data_disk.value.lun
    caching           = "ReadWrite"
    create_option     = (var.attach_existing_managed_data_disks == 0 ? "Empty" : "Attach")
  }
}
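One way the per-disk mapping could be expressed (a sketch, assuming the data source is keyed by disk name via `for_each = { for d in var.extra_disks : d.name => d }`) is to index the lookup by the current disk's name inside the dynamic block, rather than splatting the whole collection. The legacy azurerm_virtual_machine resource accepts the looked-up id through the storage_data_disk block's managed_disk_id argument:

```hcl
dynamic "storage_data_disk" {
  for_each = var.extra_disks
  content {
    # name is always the disk name; the id of an existing disk goes into
    # managed_disk_id instead.
    name = storage_data_disk.value.name
    # For "Attach", reference this disk's id by its name key; a [*] splat
    # would yield a list of all ids, which is not valid here.
    managed_disk_id = (var.attach_existing_managed_data_disks == 0 ? null :
      data.azurerm_managed_disk.data_disks[storage_data_disk.value.name].id)
    disk_size_gb      = (var.attach_existing_managed_data_disks == 0 ? storage_data_disk.value.disk_size_gb : null)
    managed_disk_type = (var.attach_existing_managed_data_disks == 0 ? storage_data_disk.value.managed_disk_type : null)
    lun               = storage_data_disk.value.lun
    caching           = "ReadWrite"
    create_option     = (var.attach_existing_managed_data_disks == 0 ? "Empty" : "Attach")
  }
}
```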
The problem is: how should I map the name to the corresponding value returned by the data_disks query when attaching existing managed disks?
name = (var.attach_existing_managed_data_disks == 0 ? storage_data_disk.value.name :
  data.azurerm_managed_disk.data_disks[*].id)
Here is the error I am currently getting:
on ../terraform-azurerm-vm/main.tf line 61, in data "azurerm_managed_disk" "data_disks":
The reason is that I am creating 3 VMs from the same module, and not all of the VMs have extra_disks. The following code with a count parameter would not work, since count and for_each cannot be used in the same block:
data "azurerm_managed_disk" "data_disks" {
  #count = var.attach_existing_managed_data_disks == "" ? 0 : 1
  for_each            = var.extra_disks
  name                = each.value.name
  resource_group_name = var.resource_group_name
}
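Since count and for_each cannot be combined, one sketch (under the same keyed-map assumption as before) is to make the for_each expression itself conditional, so the data source simply produces no instances for VMs without extra disks:

```hcl
data "azurerm_managed_disk" "data_disks" {
  # An empty map disables the lookup entirely for VMs with no disks to
  # attach; otherwise each element is keyed by its disk name.
  for_each = var.attach_existing_managed_data_disks == 0 ? {} : {
    for d in var.extra_disks : d.name => d
  }
  name                = each.value.name
  resource_group_name = var.resource_group_name
}
```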
Even if I can resolve these issues, I am not sure the expression data_disks[*].id would correctly index the data disks in the dynamic block:
name = (var.attach_existing_managed_data_disks == 0 ? storage_data_disk.value.name :
  data.azurerm_managed_disk.data_disks[*].id)
Upvotes: 0
Views: 1606
Reputation: 7843
Creating a dynamic "storage_data_disk" block for an Azure VM in Terraform:
I followed GithubDoc as a reference to create an Azure VM with a dynamic storage data disk block and attach the disks to the same VM, using the Terraform code below.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "venkatrg" {
  name     = "demo-rg"
  location = "westus2"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "vm-vnet"
  address_space       = ["10.0.0.0/16"] # must contain the subnet range below
  location            = azurerm_resource_group.venkatrg.location
  resource_group_name = azurerm_resource_group.venkatrg.name
}

resource "azurerm_subnet" "subnet" {
  name                 = "vm-subnet"
  resource_group_name  = azurerm_resource_group.venkatrg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "nic" {
  name                = "vm-nic"
  location            = azurerm_resource_group.venkatrg.location
  resource_group_name = azurerm_resource_group.venkatrg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_virtual_machine" "vm-dbnode" {
  name                          = "sample-vm"
  location                      = azurerm_resource_group.venkatrg.location
  resource_group_name           = azurerm_resource_group.venkatrg.name
  primary_network_interface_id  = azurerm_network_interface.nic.id
  network_interface_ids         = [azurerm_network_interface.nic.id]
  vm_size                       = "Standard_D8s_v3"
  delete_os_disk_on_termination = true

  storage_os_disk {
    name              = "vm-os-disk"
    caching           = "ReadWrite"
    managed_disk_type = "Standard_LRS"
    create_option     = "FromImage"
    disk_size_gb      = 64
  }

  storage_image_reference {
    publisher = "SUSE"
    offer     = "sles-sap-12-sp5"
    sku       = "gen1"
    version   = "latest"
  }

  dynamic "storage_data_disk" {
    iterator = disk
    for_each = range(5)
    content {
      name                      = join("-", ["data", "disk", disk.key])
      caching                   = "None"
      create_option             = "Empty"
      managed_disk_type         = "Standard_LRS"
      disk_size_gb              = 512
      write_accelerator_enabled = false
      lun                       = disk.key
    }
  }

  os_profile {
    computer_name  = "venakt-vm"
    admin_username = "Admin"
    admin_password = "Pa$$word@123$"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }
}
Terraform Apply:
Once the Terraform code is run, a virtual machine is created with five 512 GB data disks, which are attached to the VM.
Upvotes: 0