Reputation: 8196
I am using the Amazon EC2 Plugin (https://wiki.jenkins.io/display/JENKINS/Amazon+EC2+Plugin) for Jenkins, which allows me to dynamically provision new cloud instances as build slaves in AWS EC2.
I am launching ami-d834aba1 (Amazon Linux 2017.09.1). The plugin also supports providing user data and a block device mapping. After reading https://cloudinit.readthedocs.io/en/latest/, I currently provide config like this:
#cloud-config
repo_update: true
repo_upgrade: all
package_upgrade: true
bootcmd:
  - [ cloud-init-per, once, mkfs, -t, ext4, /dev/nvme1n1 ]
fs_setup:
  - cmd: mkfs -t %(filesystem)s -L %(label)s %(device)s
    label: jenkins
    filesystem: 'ext4'
    overwrite: false
    device: '/dev/nvme1n1'
mounts:
  - [ /dev/nvme1n1, /jenkins, "ext4", "defaults,nofail", "0", "2" ]
users:
  - default
  - name: jenkins
    homedir: /jenkins
    lock_passwd: true
    ssh_authorized_keys:
      - a-key
and this block device mapping:
/dev/sdd=:100:true:gp2::encrypted
The intent is that the instance launches, attaches a new 100GB encrypted EBS volume, formats it as ext4, and mounts it at /jenkins as the home directory of the jenkins user.
The instance launches and the 100GB encrypted EBS volume is created and attached to it (it shows as in-use and attached in the AWS console). However:
1) df -h does not show the filesystem.
2) cat /etc/fstab does show it:
/dev/nvme1n1 /jenkins ext4 defaults,nofail,comment=cloudconfig 0 2
3) sudo file -s /dev/nvme1n1 reports the volume as raw data rather than ext4:
/dev/nvme1n1: data
4) sudo mount -a fails because the filesystem is not ext4.
If I manually SSH to the machine after boot and run:
sudo mkfs -t ext4 /dev/nvme1n1
mke2fs 1.42.12 (29-Aug-2014)
Creating filesystem with 26214400 4k blocks and 6553600 inodes
Filesystem UUID: 7a434f7a-c048-4c3d-8098-b810e2ff8f84
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Then sudo mount -a mounts the volume successfully.
Is there any way to have the device formatted and mounted automatically? I have tried with and without the following bootcmd:
bootcmd:
  - [ cloud-init-per, once, mkfs, -t, ext4, /dev/nvme1n1 ]
Ideally it would all happen before the user is created, since the home directory of the new user is going to be on this new mount. Also, if the instance is stopped and started or restarted, I would not want to lose all data because the volume gets reformatted again on boot.
Upvotes: 5
Views: 15076
Reputation: 24656
There are two solutions, depending on the cloud-init version:
1) With cloud-init 24.2 or later: device_aliases, disk_setup, fs_setup and mounts with x-systemd.device-timeout. This works fine and allows you to set disk partition labels.
2) With older cloud-init versions: disk_setup and mounts with x-systemd.device-timeout and x-systemd.makefs. This is less elegant, as you can't set disk partition labels.
With cloud-init 24.2 (released in July 2024) you can partition, format and mount EBS volumes like this:
- device_aliases, referring to the EBS volume id as /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_volxxxxxxxx (note that there is no - after vol). Most EC2 instances are now Nitro based and EBS volumes are exposed as NVMe devices, so you can't rely on the block device mapping to identify them. See Amazon EBS and NVMe.
- disk_setup to partition the disks. In the example below there are two EBS volumes; the first is split into 3 partitions, where the first takes 50% of the available space and the second and third take 25% each.
- fs_setup to format the partitions and give each one a filesystem label.
- mounts to mount the partitions by label, using x-systemd.device-timeout=30 to wait for the device to be ready, since EBS volumes are not always available right at the start.
The complete cloud-config user_data is below (note that the EBS volume id loses the - in device_aliases):
#cloud-config
device_aliases:
  disk1: /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0a250869ccd411b30
  disk2: /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0f0dbdc453a55e68d
disk_setup:
  disk1:
    table_type: gpt
    layout: [50,25,25]
    overwrite: true
  disk2:
    table_type: gpt
    layout: [90,10]
    overwrite: true
fs_setup:
  - label: disk1-earth
    filesystem: xfs
    device: disk1
    partition: 1
  - label: disk1-mars
    filesystem: xfs
    device: disk1
    partition: 2
  - label: disk1-venus
    filesystem: xfs
    device: disk1
    partition: 3
  - label: disk2-foo
    filesystem: xfs
    device: disk2
    partition: 1
  - label: disk2-bar
    filesystem: xfs
    device: disk2
    partition: 2
mounts:
  - [ LABEL=disk1-earth, /earth, xfs, "defaults,nofail,x-systemd.device-timeout=30" ]
  - [ LABEL=disk1-mars, /mars, xfs, "defaults,nofail,x-systemd.device-timeout=30" ]
  - [ LABEL=disk1-venus, /venus, xfs, "defaults,nofail,x-systemd.device-timeout=30" ]
  - [ LABEL=disk2-foo, /foo, xfs, "defaults,nofail,x-systemd.device-timeout=30" ]
  - [ LABEL=disk2-bar, /bar, xfs, "defaults,nofail,x-systemd.device-timeout=30" ]
mounts_default_fields: [ None, None, "auto", "defaults,nofail", "0", "2" ]
When I used the cloud-config above with a Fedora 41 Rawhide image (which has cloud-init 24.2), I got the following results:
sudo blkid -s LABEL
/dev/nvme0n1p3: LABEL="BOOT"
/dev/nvme0n1p4: LABEL="fedora"
/dev/nvme0n1p2: LABEL="EFI"
/dev/nvme2n1p2: LABEL="disk2-bar"
/dev/nvme2n1p1: LABEL="disk2-foo"
/dev/nvme1n1p2: LABEL="disk1-mars"
/dev/nvme1n1p3: LABEL="disk1-venus"
/dev/nvme1n1p1: LABEL="disk1-earth"
/dev/zram0: LABEL="zram0"
lsblk -o name,size,mountpoint,label
NAME SIZE MOUNTPOINT LABEL
zram0 3.8G [SWAP]
nvme0n1 30G
├─nvme0n1p1 2M
├─nvme0n1p2 100M /boot/efi EFI
├─nvme0n1p3 1000M /boot BOOT
└─nvme0n1p4 28.9G /var fedora
nvme1n1 100G
├─nvme1n1p1 50G /earth disk1-earth
├─nvme1n1p2 25G /mars disk1-mars
└─nvme1n1p3 25G /venus disk1-venus
nvme2n1 100G
├─nvme2n1p1 90G /foo disk2-foo
└─nvme2n1p2 10G /bar disk2-bar
findmnt --fstab
TARGET SOURCE FSTYPE OPTIONS
/ UUID=8542d054-1a20-450a-b354-51952dfcc6bc btrfs compress=zstd:1,defaults,subvol=root
/boot UUID=d2e97817-b2f3-47a8-8f5a-3fcc9996a974 ext4 defaults
/home UUID=8542d054-1a20-450a-b354-51952dfcc6bc btrfs compress=zstd:1,subvol=home
/var UUID=8542d054-1a20-450a-b354-51952dfcc6bc btrfs compress=zstd:1,subvol=var
/boot/efi UUID=6558-A949 vfat defaults,umask=0077,shortname=winnt
/earth LABEL=disk1-earth xfs defaults,nofail,x-systemd.device-timeout=30,comment=cloudconfig
/mars LABEL=disk1-mars xfs defaults,nofail,x-systemd.device-timeout=30,comment=cloudconfig
/venus LABEL=disk1-venus xfs defaults,nofail,x-systemd.device-timeout=30,comment=cloudconfig
/foo LABEL=disk2-foo xfs defaults,nofail,x-systemd.device-timeout=30,comment=cloudconfig
/bar LABEL=disk2-bar xfs defaults,nofail,x-systemd.device-timeout=30,comment=cloudconfig
If your distro uses a cloud-init version prior to 24.2, you should not use fs_setup, because it does not support NVMe partition naming (this was reported in #5246, fixed by #5263, and released in cloud-init 24.2). Instead of fs_setup you can use the x-systemd.makefs mount option; it does the same thing as fs_setup, but you won't get the disk partition label that fs_setup can set:
#cloud-config
device_aliases:
  disk1: /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0a250869ccd411b30
disk_setup:
  disk1:
    table_type: gpt
    layout: [50,25,25]
    overwrite: true
mounts:
  - [ /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0a250869ccd411b30-part1, /earth, xfs, "defaults,nofail,x-systemd.device-timeout=30s,x-systemd.makefs" ]
  - [ /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0a250869ccd411b30-part2, /mars, xfs, "defaults,nofail,x-systemd.device-timeout=30s,x-systemd.makefs" ]
  - [ /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0a250869ccd411b30-part3, /venus, xfs, "defaults,nofail,x-systemd.device-timeout=30s,x-systemd.makefs" ]
mounts_default_fields: [ None, None, "auto", "defaults,nofail", "0", "2" ]
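As a quick sanity check after boot (assuming the mount points from the examples above), you can confirm that systemd created the filesystems and mounted them:
# list the partitions, filesystem types and mount points on the first EBS volume
lsblk -f /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol0a250869ccd411b30
# inspect the generated systemd mount unit for one of the mount points
systemctl status earth.mount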
Upvotes: 0
Reputation: 1
This is a cleaner way to obtain the UUID without piping to sed:
blkid -s UUID -o value /dev/sdxx
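For example, a minimal sketch (assuming the /dev/sdb device and /data mount point used in the script-based answer on this page) that feeds the UUID straight into /etc/fstab:
uuid=$(sudo blkid -s UUID -o value /dev/sdb)
sudo bash -c "echo 'UUID=${uuid} /data ext4 defaults,nofail 0 2' >> /etc/fstab"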
Upvotes: 0
Reputation: 124
You can do that easily with a user data script when launching the EC2 instance. Here's an example:
#!/bin/bash
# make a directory for the drive
sudo mkdir /data
# format disk
yes | sudo mkfs.ext4 /dev/sdb
# mount it
sudo mount /dev/sdb /data
# persist
uuid=$(sudo blkid /dev/sdb | sed -n 's/.*UUID=\"\([^\"]*\)\".*/\1/p')
sudo bash -c "echo 'UUID=${uuid} /data ext4 defaults' >> /etc/fstab"
Upvotes: 3
Reputation: 51
The magical x-systemd.makefs fstab option can be used in cloud-init. The systemd mount unit will then automatically format the device with the specified filesystem type (don't use auto) before mounting, if the device does not have a filesystem yet.
Note: manually mounting with mount won't trigger the formatting, but starting the mount via systemd does, either directly (systemctl start mnt-foo.mount) or through a unit dependency (see RequiresMountsFor in https://www.freedesktop.org/software/systemd/man/systemd.unit.html).
See https://www.freedesktop.org/software/systemd/man/systemd.mount.html.
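A rough sketch of how this could look for the device in the question (the fstab line and its options are an assumption, not taken from the original answer):
# sketch of the /etc/fstab entry cloud-init's mounts module would write:
# /dev/nvme1n1 /jenkins ext4 defaults,nofail,x-systemd.makefs,comment=cloudconfig 0 2
# formatting is only triggered when the mount goes through systemd:
sudo systemctl daemon-reload        # regenerate mount units from /etc/fstab
sudo systemctl start jenkins.mount  # formats /dev/nvme1n1 (if empty) and mounts /jenkins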
Upvotes: 2
Reputation: 422
cloud-init on Amazon Linux does not support the fs_setup module, hence your disk is not formatted. Furthermore, the home directory /jenkins is created for the user and then used as a mount point, so the mount hides the home directory.
I would suggest:
bootcmd:
- test -z "$(blkid /dev/nvme1n1)" && mkfs -t ext4 -L jenkins /dev/nvme1n1
- mkdir -p /jenkins
mounts:
- [ "/dev/nvme1n1", "/jenkins", "ext4", "defaults,nofail", "0", "2" ]
runcmd:
- useradd -m -b /jenkins jenkins
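After boot you can verify the result with something like this (a quick check, not part of the suggested config):
sudo blkid /dev/nvme1n1   # should report an ext4 filesystem labelled "jenkins"
findmnt /jenkins          # confirms the volume is mounted at /jenkins
getent passwd jenkins     # shows the jenkins user and its home directory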
Upvotes: 14
Reputation: 8196
I didn't work out how to achieve this using the default AMI and a cloud-init script.
I solved it by creating my own AMI, based on the AMI I wanted, with an encrypted EBS volume already attached. Now I just launch this AMI by ID and don't worry about formatting, attaching or mounting EBS volumes.
It's simpler and requires less config. The big downside, however, is that when a new base AMI comes out I can't simply update the AMI ID to the latest one; I have to create a new AMI of my own.
Not ideal, but it works. If anyone knows how to do this "properly" I'd like to hear more about it.
Upvotes: 1