Reputation: 4888
I screwed up the root volume of my system on an EC2 instance, so I attached that root volume to another EC2 instance in order to access the bad volume and rectify my error. But when I start the other instance, the screwed-up root volume becomes the root volume of that instance. I attached the volume as /dev/sdb (the kernel renamed it to /dev/xvdf), and the instance's original root volume is at /dev/sda (the kernel renamed it to /dev/xvde). So the kernel should mount /dev/xvde as the root filesystem, but instead it is mounting the screwed-up volume (/dev/xvdf).
A snippet of the system log follows:
dracut: Starting plymouth daemon
xlblk_init: register_blkdev major: 202
blkfront: xvdf: barriers disabled
xvdf: unknown partition table
blkfront: xvde: barriers disabled
xvde: unknown partition table
EXT4-fs (xvdf): mounted filesystem with ordered data mode. Opts:
dracut: Mounted root filesystem /dev/xvdf
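(To check how the root device is being chosen, compare the kernel command line with the filesystem labels; a minimal sketch, run from the booted instance:)
cat /proc/cmdline   # if this says root=LABEL=..., the root is picked by label, not device name
blkid               # list the LABEL and UUID of every attached filesystem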
Upvotes: -1
Views: 4333
Reputation: 1834
Turn the tables on the bootloader.
If the working instance insists on booting the broken volume when it is attached as a data volume (/dev/xvd[f-p]), then a simple hack you can try is to turn things around and attach a working root volume to the broken instance as a data volume.
This has worked to recover a broken CentOS 7 root volume using a root volume borrowed from a good instance. Both instances were built from the same Marketplace AMI, and there were no complaints about attaching the Marketplace-derived volume as a data volume. The system booted the good volume instead of the broken one. (Even though the broken volume still showed up in its original device position at /dev/xvda1, with the booted volume at /dev/xvdf1.)
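If you script the swap, the detach/attach can be done with the AWS CLI. A rough sketch with placeholder IDs (the device name you request may still be renamed by the kernel, as noted above):
# detach the good root volume from its stopped donor instance
aws ec2 detach-volume --volume-id vol-xxxxxxxx
# attach it to the broken instance as a data volume
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf
# boot the broken instance and see which volume wins
aws ec2 start-instances --instance-ids i-xxxxxxxx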
If you get "wrong fs type, ..." when mounting the broken volume, check for a UUID collision:
# blkid
/dev/xvda1: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="xfs"
/dev/xvdf1: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="xfs"
You can mount without checking UUIDs with:
mount -o nouuid /dev/xvda1 /some/where
YMMV.
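If you would rather fix the collision for good than pass nouuid on every mount, xfsprogs can write a fresh UUID to the unmounted duplicate. A hedged sketch, assuming the copy sits at /dev/xvdf1; snapshot the volume first:
xfs_admin -U generate /dev/xvdf1   # filesystem must be unmounted
blkid /dev/xvdf1                   # confirm the new UUID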
Upvotes: 0
Reputation: 1
PSA: don't use CentOS in AWS.
You can no longer attach the root volume of a CentOS instance to another instance. This is by design, to prevent people from circumventing licensing agreements. Even though CentOS is technically free, the rule applies because it's a Marketplace AMI. It's a good rule in general, but it makes recovery from a failed configuration impossible.
Use Amazon Linux instead. It's basically CentOS anyway.
Upvotes: 0
Reputation: 4888
Alternatively, the simple way is to attach the CentOS root volume to an Amazon Linux machine and fix the issue there. Don't attach a CentOS root volume to another EC2 instance running CentOS: CentOS AMIs in the AWS Marketplace use "centos" as the label on the root volume, so when you attach a CentOS root volume to another CentOS machine, the boot process gets confused about which root volume to mount and this anomaly happens.
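To see the clash before it bites, you can read the label from the Amazon Linux box once the CentOS volume is attached. An illustrative sketch (the device name /dev/xvdf and the printed label are assumptions based on the claim above):
# e2label /dev/xvdf
centos
# mkdir /mnt/centos && mount /dev/xvdf /mnt/centos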
Upvotes: 8
Reputation: 4888
As the screwed-up root volume and the rescue instance's root volume have the same label on the root partition (in my case the OS is CentOS 6.5 and the label is centos_root), we have to change the label on the rescue instance so that on its next boot it looks for our changed label instead of centos_root.
First, change the root partition label, e.g. e2label /dev/xvde your_label (here /dev/xvde is the root partition).
Second, replace the old label with your_label in /etc/fstab and /boot/grub/grub.conf (a shell sketch of these two steps follows the list).
Third, stop the instance.
Fourth, attach the screwed-up root volume to the instance.
Fifth, start the instance.
Sixth, voila: now you can see the screwed-up root volume's partition and can mount it on some mount point to fix your issue.
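A hedged shell sketch of the first two steps (your_label is the placeholder from above; double-check /etc/fstab and grub.conf by hand afterwards, since the exact LABEL= spelling can vary):
# relabel the rescue instance's own root partition
e2label /dev/xvde your_label
# point fstab and grub at the new label
sed -i 's/centos_root/your_label/g' /etc/fstab /boot/grub/grub.conf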
Upvotes: 1
Reputation: 34297
Detach the "screwed up" volume from the other EC2 instance.
Boot the other instance normally.
Attach the EBS volume to the running instance, see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-attaching-volume.html
Run fdisk -l as root and find the device name of the new volume.
Make a "mount point" (a directory) and mount the desired disk partition on it.
Once it is fixed, use the umount command on the mount point and then detach the volume. (A sketch of this cycle follows.)
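A minimal sketch of that cycle, assuming the volume shows up at /dev/xvdf (read the real name off your own fdisk -l output):
fdisk -l                      # locate the attached volume, e.g. /dev/xvdf
mkdir /mnt/recovery           # the "mount point"
mount /dev/xvdf /mnt/recovery
# ... fix the broken configuration under /mnt/recovery ...
umount /mnt/recovery          # then detach the volume from the console or CLI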
If the AMI has a marketplace code, try the steps given in this answer: https://serverfault.com/questions/522173/aws-vol-xxxxxxx-with-marketplace-codes-may-not-be-attached-as-as-secondary-dev
Upvotes: 0