Reputation: 439
I am trying to use Packer version 1.3.2 to bake an AMI with multiple block devices, where each block device is encrypted with its own KMS key, different from the KMS key used to encrypt the boot device.
At first I suspected this might not be supported by AWS. However, using the AWS console, I was able to launch an EC2 instance from an AMI with previously encrypted volumes, add another volume that used a different KMS key, and then create an AMI from it. I then used the new AMI to launch another EC2 instance, and the different KMS keys were maintained, because a new snapshot was created for the additional volume with the different KMS key.
I have attempted many different variations with the amazon-ebs builder, using ami_block_device_mappings and launch_block_device_mappings in various combinations. At best, any combination generates the final volume snapshots tied to the AMI encrypted with the boot KMS key. I noticed that if I specify the alternate kms_key_ids in the launch_block_device_mappings like the following:
"launch_block_device_mappings": [
{
"device_name": "/dev/sdb",
"volume_type": "gp2",
"volume_size": "{{user `var_volume_size`}}",
"delete_on_termination": true,
"kms_key_id": "{{user `kms_key_arn_var`}}",
"encrypted": true
},
{
"device_name": "/dev/sdc",
"volume_type": "gp2",
"volume_size": "{{user `varlog_volume_size`}}",
"delete_on_termination": true,
"kms_key_id": "{{user `kms_key_arn_varlog`}}",
"encrypted": true
}, ...
It creates temporary snapshots with the alternate KMS keys, but they are replaced with new ones encrypted with the boot KMS key in the final AMI, regardless of whether I also include ami_block_device_mappings. Even setting delete_on_termination to false on the launch mappings makes no difference.
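One configuration consistent with this behavior (an assumption on my part, since the full template is not shown) is the combination of the top-level encrypt_boot and kms_key_id options. When those are set, Packer copies the AMI after it is created, and an encrypted AMI copy re-encrypts every snapshot it contains with that single key, which would discard the per-volume keys from launch_block_device_mappings. A minimal sketch of the combination that would produce exactly this symptom (kms_key_arn_boot is a hypothetical variable name):

"type": "amazon-ebs",
"encrypt_boot": true,
"kms_key_id": "{{user `kms_key_arn_boot`}}",
"launch_block_device_mappings": [
  {
    "device_name": "/dev/sdb",
    "kms_key_id": "{{user `kms_key_arn_var`}}",
    "encrypted": true
  }, ...
],...

If that matches the template, the per-volume keys are lost in the copy step, not in the launch step.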
I then looked at this from another angle, trying to create the snapshots from EBS volumes separately from the amazon-ebs builder. Using the amazon-ebsvolume builder, I created empty EBS volumes:
"type": "amazon-ebsvolume",
...
"ebs_volumes": [
{
"device_name": "/dev/sdb",
"volume_type" : "{{user `var_volume_type`}}",
"volume_size": 10,
"delete_on_termination": false,
"kms_key_id": "{{user `kms_key_arn_var`}}",
"encrypted": true,
"tags" : {
"Name" : "starter-volume-var",
"purpose" : "starter"
}
},
{
"device_name": "/dev/sdc",
"volume_type" : "{{user `varlog_volume_type`}}",
"volume_size": 5,
"delete_on_termination": false,
"kms_key_id": "{{user `kms_key_arn_varlog`}}",
"encrypted": true,
"tags" : {
"Name" : "starter-volume-varlog",
"purpose" : "starter"
}
},...
I then created snapshots from those volumes and attempted to reference their snapshot_ids in the amazon-ebs builder, instead of creating the volumes inline:
"launch_block_device_mappings": [
{
"device_name": "/dev/sdb",
"volume_type" : "{{user `var_volume_type`}}",
"snapshot_id": "snap-08f2bed8aaa964469",
"delete_on_termination": true
},
{
"device_name": "/dev/sdc",
"volume_type" : "{{user `varlog_volume_type`}}",
"snapshot_id": "snap-037a4a6255e8d161d",
"delete_on_termination": true
}
],..
Doing this I get the following error:
2018/11/01 03:04:23 ui error: ==> amazon-ebs: Error launching source instance: InvalidBlockDeviceMapping: snapshotId can only be modified on EBS devices
I tried repeating the encryption settings along with the snapshot_ids:
"launch_block_device_mappings": [
{
"device_name": "/dev/sdb",
"volume_type" : "{{user `var_volume_type`}}",
"snapshot_id": "snap-08f2bed8aaa964469",
"kms_key_id": "{{user `kms_key_arn_var`}}",
"encrypted": true,
"delete_on_termination": true
},
{
"device_name": "/dev/sdc",
"volume_type" : "{{user `varlog_volume_type`}}",
"snapshot_id": "snap-037a4a6255e8d161d",
"kms_key_id": "{{user `kms_key_arn_varlog`}}",
"encrypted": true,
"delete_on_termination": true
}
],...
This results in a different error:
==> amazon-ebs: Error launching source instance: InvalidParameterDependency: The parameter KmsKeyId requires the parameter Encrypted to be set.
But I clearly have "encrypted": true set in the mapping.
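For reference, at the EC2 API level both fields have to arrive together inside the same Ebs structure of the RunInstances request, so the error reads as though the builder is dropping Encrypted from the request when snapshot_id is present (an assumption based on the error text; I have not verified it against the Packer 1.3.2 source). The shape the API expects, with the key ARN elided:

"BlockDeviceMappings": [
  {
    "DeviceName": "/dev/sdb",
    "Ebs": {
      "SnapshotId": "snap-08f2bed8aaa964469",
      "KmsKeyId": "arn:aws:kms:...",
      "Encrypted": true,
      "VolumeType": "gp2",
      "DeleteOnTermination": true
    }
  }
],...

In other words, the template value may be fine; it would be the translated API request that is missing Encrypted.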
I am running out of ideas. I feel this must be possible; apparently I am just not seeing how.
Upvotes: 3
Views: 4164
Reputation: 1
Came here because I had the same problem. I fixed it by moving the device to /dev/xvdf.

Digging into this further, the source AMI I was using has the following block device mappings associated with it. These ephemeral disks were not displayed in the console, so it took me a while to work out what was going on. A big clue was the fact that I could mount the disk even before I had defined it (I had originally defined it as an AMI mapping rather than a launch mapping by mistake, but already had the mount in my scripts).
Block devices: /dev/sda1=snap-0b399e12978e2290e:8:true:standard, /dev/xvdb=ephemeral0, /dev/xvdc=ephemeral1
I notice you have not listed the source AMI, but hopefully this helps.
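In template terms, the fix is simply remapping the volume to a device name that does not clash with those ephemeral mappings. A minimal sketch reusing the question's fields, with the snapshot ID left as a placeholder:

"launch_block_device_mappings": [
  {
    "device_name": "/dev/xvdf",
    "volume_type": "{{user `var_volume_type`}}",
    "snapshot_id": "snap-...",
    "kms_key_id": "{{user `kms_key_arn_var`}}",
    "encrypted": true,
    "delete_on_termination": true
  }
],...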
Upvotes: 0