Reputation: 353
I have upgraded an EC2 instance from m4 to m5 and now want to increase the size of the attached EBS storage volume. I am using a solid-state (SSD) EBS volume, which is attached as an NVMe device.
After some research, I ran this command:
growpart /dev/nvme0n1 p1
After doing so, I received this error message in response:
FAILED: partition-number must be a number
I have tried to find instructions in the AWS docs and forums, but have not found a solution to this error message.
How can I increase the size of the EBS volume?
Upvotes: 35
Views: 44050
Reputation: 1
To extend the XFS filesystem in my case (the root filesystem is on LVM), instead of xfs_growfs I ran the following after the growpart command:
sudo pvresize /dev/nvme0n1p2
sudo lvextend -L +40G -r /dev/mapper/centos_sstemplate-root
and it worked.
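If you are not sure of the volume group or logical volume names on your own system (they will almost certainly differ from centos_sstemplate-root above), you can list them before running pvresize/lvextend; a minimal check:
# show partitions, LVM layout and filesystem types
lsblk -f
# list physical volumes, volume groups and logical volumes
sudo pvs
sudo vgs
sudo lvs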
Upvotes: 0
Reputation: 1061
The below worked for me on AWS CentOS / Amazon Linux 2 AMIs (Karoo).
Step 1: Resize the EBS volume attached to the EC2 instance from the AWS console.
Step 2: Log in (SSH) to the EC2 instance to which the volume is attached.
Step 3: Run the commands below, replacing the disk name as needed (e.g. xvda or nvme0n1).
Commands :
lsblk
sudo growpart /dev/nvme0n1 1
df -h
sudo xfs_growfs /dev/nvme0n1p1
Note: you have to use xfs_growfs instead of resize2fs here, because the root filesystem on these AMIs is XFS.
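If you are unsure which filesystem type you are on, check before picking the resize command (a minimal check; / is just the usual root mount point):
# prints the filesystem type (xfs, ext4, ...) for the root mount
df -Th /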
Upvotes: 1
Reputation: 9910
Prerequisites: the filesystem is on /dev/nvme0n1p1, not /dev/nvme0n1p2. Note the mount location (/ here):
df
Filesystem 1K-blocks Used Available Use% Mounted on
...
/dev/nvme0n1p1 _________ ________ ________ 99% /
Running lsblk confirms there is a single partition under the volume nvme0n1:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 400G 0 disk
└─nvme0n1p1 259:3 0 400G 0 part /
Then extend the volume through the AWS console and run the next code block.
Single place script:
# /dev/nvme0n1 - volume name, 1 - partition index
sudo growpart /dev/nvme0n1 1
# block for XFS
# / - the mount point from above. The command will safely fail on a non-XFS filesystem
sudo xfs_growfs -d /
# block for ext4
# /dev/nvme0n1p1 - partition you would like to extend
sudo resize2fs /dev/nvme0n1p1
Assembled from this resource: Extend a Linux file system after resizing a volume
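As a quick sanity check afterwards (not from the linked guide, just a common follow-up), confirm both the partition and the filesystem now report the new size:
# partition should now span the enlarged volume
lsblk /dev/nvme0n1
# filesystem should report the new size and its type
df -hT /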
Upvotes: 2
Reputation: 30278
resize2fs didn't work for me, so I used this instead:
xfs_growfs /dev/nvme0n1p1
resize2fs gave me this error when I used it:
[root@ip-1-2-3-4 ~]# resize2fs /dev/nvme0n1p1
resize2fs 1.42.9 (28-Dec-2013)
resize2fs: Bad magic number in super-block while trying to open /dev/nvme0n1p1
Couldn't find valid filesystem superblock.
I noticed the disk was using xfs according to /etc/fstab:
UUID=4cbf4a19-1fba-4027-bf92-xxxxxxxxxxxx / xfs defaults,noatime 1 1
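You can also confirm the filesystem type straight from the device instead of /etc/fstab (device name as above; needs root):
# prints the filesystem signature, e.g. "SGI XFS filesystem data"
sudo file -s /dev/nvme0n1p1
# or
sudo blkid /dev/nvme0n1p1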
Upvotes: 23
Reputation: 1283
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
growpart [OPTIONS] DISK PARTITION-NUMBER
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 16G 0 disk
├─nvme0n1p1 259:1 0 8G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
So to grow the partition, we use the disk name nvme0n1 (see disk under TYPE) and the desired partition number, 1:
sudo growpart /dev/nvme0n1 1
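If you want to preview the change before applying it, growpart also has a dry-run mode (assuming a reasonably recent cloud-utils; check growpart --help on your box):
# reports what would be done without touching the partition table
sudo growpart --dry-run /dev/nvme0n1 1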
And then, to extend the filesystem, the syntax is:
resize2fs device [size]
(device refers to the location of the target filesystem)
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 470M 52K 470M 1% /dev
tmpfs 480M 0 480M 0% /dev/shm
/dev/nvme0n1p1 7.8G 7.7G 3.1M 100% /
So to extend the fs, we use the device name /dev/nvme0n1p1:
sudo resize2fs /dev/nvme0n1p1
Voila!
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 470M 52K 470M 1% /dev
tmpfs 480M 0 480M 0% /dev/shm
/dev/nvme0n1p1 16G 7.7G 7.9G 50% /
Upvotes: 110