Reputation: 1441
I have followed the steps for resizing an EC2 volume.
The old volume was 5GB and the one I created is 100GB.
Now, when I restart the instance and run df -h, I still see this:
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 4.7G 3.5G 1021M 78% /
tmpfs 296M 0 296M 0% /dev/shm
This is what I get when running
sudo resize2fs /dev/xvde1
The filesystem is already 1247037 blocks long. Nothing to do!
If I run cat /proc/partitions
I see
202 64 104857600 xvde
202 65 4988151 xvde1
202 66 249007 xvde2
From what I understand, if I have followed the right steps, xvde should have the same data as xvde1, but I don't know how to use it.
How can I use the new volume, or unmount xvde1 and mount xvde instead?
I can't understand what I am doing wrong.
I also tried sudo xfs_growfs /dev/xvde1
xfs_growfs: /dev/xvde1 is not a mounted XFS filesystem
By the way, this is a Linux box with CentOS 6.2 x86_64.
Upvotes: 122
Views: 146453
Reputation: 24800
Just one detail: you don't need to wait until the "optimizing" volume state is completed.
As mentioned here:
Before you begin (Extend a Linux file system). Confirm that the volume modification succeeded and that it is in the optimizing or completed state. For more information, see Monitor the progress of volume modifications. (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html)
And here:
Size changes usually take a few seconds to complete and take effect after the volume has transitioned to the Optimizing state. (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-volume-modifications.html)
You also don't need to stop your instance to resize the volume; you can do it on the fly. But you do need to run the growpart command mentioned in the other answers before continuing with the filesystem resize commands.
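You can watch the modification state from the CLI while you work; a minimal sketch, assuming the AWS CLI is configured (the volume ID is a placeholder):
# Poll the modification state; it moves through "modifying" ->
# "optimizing" -> "completed"
aws ec2 describe-volumes-modifications --volume-ids vol-0abcd1234ef56789 --query 'VolumesModifications[].ModificationState'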
Upvotes: 0
Reputation: 11
I modified the existing volume from 8 to 20 GB for the above issue. After that, check the current state:
df -h
lsblk
If the disk is full and growpart has no room for its temp file, temporarily mount a small tmpfs over /tmp:
sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp
Then grow the partition:
sudo growpart /dev/xvda 1
and, depending on the OS and filesystem, grow the filesystem:
sudo resize2fs /dev/xvda1
Finally, remove the temporary mount:
sudo umount /tmp
Upvotes: 0
Reputation: 12936
In case the EC2 Linux disk size does not match the attached volume's size...
I had attached two devices
/dev/sda1 8GB
/dev/xvda 20GB
but lsblk kept insisting:
xvda 202:0 0 8G 0 disk
Then it dawned on me that sda1 could be shadowing the xvda device, so I renamed the attachment to
/dev/sda1 8GB
/dev/xvde 20GB
and voilà, lsblk showed:
xvda 202:0 0 8G 0 disk
xvde 202:64 0 20G 0 disk
This behaviour may depend on your OS/kernel...
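A quick way to see which names the kernel actually exposes; a minimal sketch, assuming a Xen-based instance where /dev/sd* may be a symlink to /dev/xvd* (device names are examples):
# list every block device the kernel sees, with sizes
lsblk
# show what a /dev/sd* name really points to, if it exists on this AMI
readlink -f /dev/sda1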
Upvotes: 1
Reputation: 401
As for my EC2 instance, growpart gives me:
growpart /dev/xvda 1
FAILED: /dev/xvda: does not exist
So I just used this after the resize on the AWS management console, and it worked for me:
resize2fs /dev/xvda1
Upvotes: 2
Reputation: 189
I faced a similar issue with an Ubuntu system on EC2.
First, check the block devices:
lsblk
Then, after increasing the volume size from the console, I ran the command below:
sudo growpart /dev/nvme0n1 1
This will show the change in the lsblk output.
I could then extend the FS with
sudo resize2fs /dev/nvme0n1p1
Finally, verify it with the df -h command.
Upvotes: 5
Reputation: 397
Once you modify the size of your EBS volume, check the layout:
sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 10G 0 disk
|-nvme0n1p1 259:3 0 1M 0 part
`-nvme0n1p2 259:4 0 10G 0 part /
Suppose you want to extend the second partition, mounted on /:
sudo growpart /dev/nvme0n1 2
If all the space in the root volume is used up and you're not able to write to /tmp (i.e. growpart fails with the error message "Unable to growpart because no space left"), temporarily mount a small tmpfs over the /tmp volume:
sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp
Run growpart again, then lazily unmount it:
sudo umount -l /tmp
Verify the new size:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 20G 0 disk
|-nvme0n1p1 259:3 0 1M 0 part
`-nvme0n1p2 259:4 0 20G 0 part /
Finally, grow the filesystem. For XFS:
sudo xfs_growfs /
or for ext4:
sudo resize2fs /dev/nvme0n1p2
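If you're not sure which filesystem the root partition uses, either of these will tell you (the device name matches the lsblk output above):
# print the filesystem type alongside the usual df columns
df -hT /
# or inspect the partition's superblock directly
sudo file -s /dev/nvme0n1p2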
Upvotes: 9
Reputation: 6341
There's no need to stop the instance and detach the EBS volume to resize it anymore!
On 13-Feb-2017 Amazon announced: "Amazon EBS Update – New Elastic Volumes Change Everything"
The process works even if the volume to extend is the root volume of a running instance!
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 8G 0 part /
As you can see /dev/xvda1 is still 8 GiB partition on a 16 GiB device and there are no other partitions on the volume. Let's use "growpart" to resize 8G partition up to 16G:
# install "cloud-guest-utils" if it is not installed already
apt install cloud-guest-utils
# resize partition
growpart /dev/xvda 1
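On RHEL/CentOS-family AMIs the same tool ships under a different package name; a minimal sketch, assuming a yum-based system:
# RHEL / CentOS / Amazon Linux equivalent of the apt install above
sudo yum install cloud-utils-growpart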
Let's check the result (you can see /dev/xvda1 is now 16G):
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 16G 0 part /
Lots of SO answers suggest using fdisk to delete and recreate partitions, which is a nasty, risky, error-prone process, especially when changing the boot drive.
# Check before resizing ("Avail" shows 1.1G):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 6.3G 1.1G 86% /
# resize filesystem
resize2fs /dev/xvda1
# Check after resizing ("Avail" now shows 8.7G!-):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 16G 6.3G 8.7G 42% /
So we have zero downtime and lots of new space to use.
Enjoy!
Update: use sudo xfs_growfs /dev/xvda1 instead of resize2fs when the filesystem is XFS.
Upvotes: 387
Reputation: 4166
Put a space between the device name and the partition number, e.g.:
sudo growpart /dev/xvda 1
As the AWS documentation puts it: "To extend the partition on each volume, use the following growpart commands. Note that there is a space between the device name and the partition number."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
Upvotes: 0
Reputation: 2545
Just in case anyone is here for GCP (Google Cloud Platform), try this:
sudo growpart /dev/sdb 1
sudo resize2fs /dev/sdb1
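If the disk itself hasn't been enlarged yet, that step can also be done from the gcloud CLI first; a minimal sketch where the disk name, size, and zone are placeholders:
# enlarge the persistent disk before growing the partition on the VM
gcloud compute disks resize my-disk --size 100GB --zone us-central1-a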
Upvotes: 5
Reputation: 144
In case anyone ran into this issue with 100% disk use and no space left even to run the growpart command (because it creates a file in /tmp):
Here is a command that works even while the EBS volume is in use, and even if you have no space left on your EC2 instance:
/sbin/parted ---pretend-input-tty /dev/xvda resizepart 1 yes 100%
See this site:
https://www.elastic.co/blog/autoresize-ebs-root-volume-on-aws-amis
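Note that parted only grows the partition entry; you still need to grow the filesystem itself afterwards. A minimal sketch, assuming an ext4 root on /dev/xvda1 as in the command above:
# grow the filesystem into the enlarged partition
sudo resize2fs /dev/xvda1
# or, for an XFS root filesystem:
# sudo xfs_growfs /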
Upvotes: 3
Reputation: 151
The above two commands saved me time on AWS Ubuntu EC2 instances.
Upvotes: 7
Reputation: 616
Perfect comment by jperelli above.
I faced the same issue today. The AWS documentation does not clearly mention growpart. I figured it out the hard way, and indeed the two commands worked perfectly on M4.large & M4.xlarge instances with Ubuntu:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
Upvotes: 57
Reputation: 7207
Log in to the AWS web console -> EBS -> right-click the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button, then run:
growpart /dev/xvda 1
resize2fs /dev/xvda1
This is a cut-to-the-chase version of Dmitry Shevkoplyas' answer. The AWS documentation does not show the growpart command. This works fine for the Ubuntu AMI.
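The console step can also be done from the AWS CLI; a minimal sketch where the volume ID and size are placeholders:
# resize the EBS volume without touching the console
aws ec2 modify-volume --volume-id vol-0abcd1234ef56789 --size 100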
Upvotes: 7
Reputation: 749
Thanks, @Dimitry, it worked like a charm with a small change to match my file system.
Then use the following command, substituting the mount point of the filesystem (XFS file systems must be mounted to resize them):
[ec2-user ~]$ sudo xfs_growfs -d /mnt
meta-data=/dev/xvdf isize=256 agcount=4, agsize=65536 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 262144 to 26214400
Note: If you receive an "xfsctl failed: Cannot allocate memory" error, you may need to update the Linux kernel on your instance; for more information, refer to your specific operating system documentation. If you receive a "The filesystem is already nnnnnnn blocks long. Nothing to do!" error, see Expanding a Linux Partition.
Upvotes: 2
Reputation: 810
Thank you Wilman, your commands worked correctly. A small improvement needs to be considered if we are increasing EBS volumes to larger sizes:
1) Stop the instance
2) Create a snapshot of the current volume
3) Create a new volume based on the snapshot, increasing the size
4) Check and remember the current volume's mount point (i.e. /dev/sda1), detach the current volume, attach the new volume at the exact same mount point, and restart the instance
5) Access via SSH to the instance and run fdisk /dev/xvde
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to switch off the mode (command 'c') and change display units to sectors (command 'u')
6) Hit p to show current partitions
7) Hit d to delete the current partition, then n to create a new one (p to set it as primary, 1 for the first cylinder, and leave the size empty to reserve the whole space)
8) Hit a to make it bootable, then w to write the changes
9) Reboot the instance, or use partprobe (from the parted package) to tell the kernel about the new partition table
10) Run resize2fs /dev/xvde1 and check the new space with df -h
Upvotes: 73
Reputation: 691
I don't have enough rep to comment above, but note, per the comments above, that you can corrupt your instance if you start at 1. If you hit 'u' after starting fdisk, before you list your partitions with 'p', it will in fact give you the correct start number, so you don't corrupt your volumes. For the CentOS 6.5 AMI, as mentioned above, 2048 was correct for me.
Upvotes: 0
Reputation: 11
The bootable flag (a) didn't work in my case (EC2, CentOS 6.5), so I had to re-create the volume from the snapshot. After repeating all steps EXCEPT the bootable flag, everything worked flawlessly, and I was able to run resize2fs afterwards. Thank you!
Upvotes: 1
Reputation: 1441
[SOLVED]
This is what had to be done:
fdisk /dev/xvde
(recreate the partition over the full disk as described in the answers above, then write the changes)
resize2fs /dev/xvde1
df -h
That's it.
Good luck!
Upvotes: 15
Reputation: 13644
This will work for an XFS filesystem; just run this command:
xfs_growfs /
Upvotes: 9
Reputation: 5296
Did you make a partition on this volume? If you did, you will need to grow the partition first.
Upvotes: 2
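For the layout in the question, that would look something like this; a minimal sketch, assuming growpart (from the cloud-utils package) is available and using the device names from the question's /proc/partitions output:
# grow partition 1 of /dev/xvde to fill the enlarged volume
sudo growpart /dev/xvde 1
# then grow the ext4 filesystem on it
sudo resize2fs /dev/xvde1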