Simon

Reputation: 251

Amazon EC2 and EBS disk space problem

I am having a problem reconciling the space available on my EBS volume. According to the AWS console the volume is 50GB and is attached to an instance.

If I ssh to this instance and do a df -h, I get the following output:

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              15G   13G  3.0G  81% /
udev                  858M   76K  858M   1% /dev
none                  858M     0  858M   0% /dev/shm
none                  858M   72K  858M   1% /var/run
none                  858M     0  858M   0% /var/lock
none                  858M     0  858M   0% /lib/init/rw

I am pretty new to AWS. I interpret this as "there is a device attached and it has 15GB capacity. What's more, you're nearly out of space!"

Can anyone point out the cause of the apparent discrepancy between the space advertised in the console and what is displayed on the instance?

Many thanks in advance

S

Upvotes: 14

Views: 20969

Answers (9)

kplus

Reputation: 832

Here is how I resolved the issue of increasing an instance's available disk space by expanding its EBS volume.

Use the EC2 console to expand the EBS volume

  1. Open the EC2 console.

  2. In the navigation pane, choose Instances, and then select your instance.

  3. Choose the Storage tab, and then select your volume.

  4. In the Volumes pane, select the check box for the volume you want to expand.

  5. From Actions, choose Modify volume.

  6. Under Volume details, enter the Size and IOPS based on the volume type.

  7. Choose Modify, and then choose Modify in the dialog box.

  8. In the Volumes pane, see the volume's optimizing progress under Volume state. Refresh the Volumes pane to see progress updates.
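
If you prefer the command line, the same resize can be done with the AWS CLI; a minimal sketch, assuming you substitute your own volume ID and target size:

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 50
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0

The first command requests the resize; the second lets you poll the modification state until it reaches optimizing or completed.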

Now the EC2 console shows the new disk size, but when you log in to the terminal over ssh, the change may not be reflected yet.

Follow these steps to resolve this:

[ec2-user ~]$ sudo lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1       259:0    0  30G  0 disk /data
nvme0n1       259:1    0  16G  0 disk
├─nvme0n1p1   259:2    0   8G  0 part /
└─nvme0n1p128 259:3    0   1M  0 part

In the above output, the root volume (nvme0n1) has two partitions (nvme0n1p1 and nvme0n1p128), while the additional volume (nvme1n1) has no partitions. The disk sizes lsblk reports (30G and 16G) match what the EC2 console shows, but the root partition (nvme0n1p1) is still only 8G.

Now, extend the partition. Use the growpart command and specify the device name and the partition number.

The partition number is the number after the p. For example, for nvme0n1p1, the partition number is 1. For nvme0n1p128, the partition number is 128.

To extend a partition named nvme0n1p1, use the following command.

 [ec2-user ~]$ sudo growpart /dev/nvme0n1 1

Verify that the partition has been extended. Use the lsblk command. The partition size should now be equal to the volume size.

[ec2-user ~]$ sudo lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1       259:0    0  30G  0 disk /data
nvme0n1       259:1    0  16G  0 disk
├─nvme0n1p1   259:2    0  16G  0 part /
└─nvme0n1p128 259:3    0   1M  0 part

Now, extend the file system.

a. Get the name, size, type, and mount point for the file system that you need to extend. Use the df -hT command.

[ec2-user ~]$ df -hT
Filesystem      Type  Size  Used Avail Use% Mounted on
/dev/nvme0n1p1  xfs   8.0G  1.6G  6.5G  20% /
/dev/nvme1n1    xfs   8.0G   33M  8.0G   1% /data
...

b. The commands to extend the file system differ depending on the file system type. Choose the correct command below based on the file system type you noted in the previous step.

i. [XFS file system] Use the xfs_growfs command and specify the mount point of the file system that you noted in the previous step.

For example, to extend a file system mounted on /, use the following command.

 [ec2-user ~]$ sudo xfs_growfs -d /

ii. [Ext4 file system] Use the resize2fs command and specify the name of the file system that you noted in the previous step.

For example, to extend a file system on a partition named /dev/nvme0n1p1, use the following command.

[ec2-user ~]$ sudo resize2fs /dev/nvme0n1p1

c. Finally, verify that the file system has been extended. Use the df -hT command and confirm that the file system size is equal to the volume size.
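
For example, to check the root file system extended above (the reported size should now match the volume):

 [ec2-user ~]$ df -hT /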

Upvotes: 0

iravinandan

Reputation: 739

On Ubuntu, to extend the filesystem:

To find block device:

blkid

In my case type is TYPE="ext4".

To resize the filesystem so that it fills the volume:

sudo resize2fs /dev/xvdf
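
Note that resize2fs only handles ext2/ext3/ext4. If blkid had reported TYPE="xfs" instead, the equivalent step (a sketch; replace the mount point with wherever the volume is mounted) would be:

sudo xfs_growfs /data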

Upvotes: 0

BJYC

Reputation: 384

It is because "After you increase the size of an EBS volume, you must use file system–specific commands to extend the file system to the larger size. You can resize the file system as soon as the volume enters the optimizing state." No reboot of the instance is required.

I was facing the same issue today and was able to resolve it:

  1. Figure out the type of your file system, e.g. $ cat /etc/fstab (see the sketch below).

  2. Follow this AWS doc, which documents the exact steps to extend the Linux partition/file system after resizing a volume on an EC2 instance:

    Extending a Linux File System After Resizing a Volume
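
For step 1, the file system type can also be read straight from the kernel instead of fstab; a quick sketch:

lsblk -f    # lists FSTYPE for every block device
df -hT /    # shows the type and size of the mounted root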

Upvotes: 0

user18853

Reputation: 2837

Only rebooting the instance solved my problem.

Earlier:

/dev/xvda1       8256952 7837552         0 100% /
udev              299044       8    299036   1% /dev
tmpfs             121892     164    121728   1% /run
none                5120       0      5120   0% /run/lock
none              304724       0    304724   0% /run/shm

Now:

/dev/xvda1       8256952 1062780   6774744  14% /
udev              299044       8    299036   1% /dev
tmpfs             121892     160    121732   1% /run
none                5120       0      5120   0% /run/lock
none              304724       0    304724   0% /run/shm

Upvotes: 2

AWS Fan

Reputation: 76

Perhaps the original 15 GB Volume was cloned into a 50 GB volume but then not resized?

Please see this tutorial on how to clone and resize: How to increase disk space on existing AWS EC2 Linux (Ubuntu) Instance without losing data
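
If that is what happened, extending the file system in place may be all that is needed; a minimal sketch, assuming an ext3/ext4 root on /dev/sda1 as in the question's df output:

sudo resize2fs /dev/sda1    # grow the filesystem to fill the 50GB volume

If the partition itself is still 15GB, you would first need to enlarge it (for example with growpart, as described in other answers).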

Hope that helps.

Upvotes: 6

onkar

Reputation: 1189

Here is the simple way...

Assuming you are using a Linux AMI, there is an easy method for increasing the size of the file system:

1) Stop the instance
2) Detach the root volume
3) Snapshot the volume
4) Create a new volume from the snapshot, using the new size (a CLI sketch of steps 3-5 follows this list)
5) Attach the new volume to the instance at the same device where the original one was
6) Start the instance, stop all services except ssh, and set the root filesystem read-only
7) Enlarge the filesystem (using, for example, resize2fs) and/or the partition if needed
8) Reboot
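
A rough AWS CLI sketch of steps 3-5 (every ID, the size, and the availability zone are placeholders to replace with your own values):

# 3) snapshot the detached root volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "root before resize"

# 4) create a larger volume from that snapshot, in the same AZ as the instance
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 50 --availability-zone us-east-1a

# 5) attach it where the original root volume was
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sda1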

As an alternative, you can launch a new instance and map the storage onto it, or create a new AMI that combines the two previous steps.

Upvotes: 6

Jeroen Ooms

Reputation: 32978

The remainder of your space is mounted by default at /mnt.
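
You can verify this with (assuming the default mount is present on your instance):

df -h /mnt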

Upvotes: 1

Till

Reputation: 22408

Yes, the issue is simple: the volume is attached to the instance, but not mounted.

Check in the AWS console which device it is attached as - most likely /dev/sdf.

Then (on ubuntu):

sudo mkfs.ext3 /dev/sdf
sudo mkdir /ebs
sudo mount /dev/sdf /ebs

The first line formats the volume - using the ext3 file system type. This is pretty standard -- but depending on your usage (e.g. app server, database server, ...) you could also select another one like ext4 or xfs.

The second command creates a mount point and the third mounts it into it. This means that effectively, the new volume will be at /ebs. It should also show up in df now.

Last but not least, consider adding an entry to /etc/fstab so the mount survives a reboot.
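
For example, a line like the following (a sketch; the nofail option is an optional safeguard that keeps the instance booting even if the volume is detached):

/dev/sdf   /ebs   ext3   defaults,nofail   0   2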

Upvotes: 8
