Andrey Ivanov

Reputation: 1

Truncated file in XFS filesystem using dd - how to recover

**Disk layout** My HDD RAID array is nearing its end of life, and I bought some new disks to replace it. I have been using the old HDDs as storage for raw disk images for kvm/qemu virtual machines. The RAID array was built with mdadm. On the md device there is an LVM physical volume, and on that physical volume there is an XFS filesystem which stores the raw disk images. Each raw disk image was created with qemu-img and contains an LVM physical volume (one PV = one LV = one VG inside each raw disk image).
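
To picture the stack, here is a rough sketch of how those layers could be inspected; the device and volume names (md0, vg_old, lv_images) are made up for illustration and are not from the post itself:

    mdadm --detail /dev/md0        # the RAID array built from the old HDDs
    pvs                            # /dev/md0 appears as an LVM physical volume
    lvs                            # the logical volume that carries the XFS filesystem
    mount | grep xfs               # e.g. /dev/vg_old/lv_images on /mnt/old type xfs
    ls -lh /mnt/old                # the qemu-img raw disk images, including file.img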

**Action** When I tried to move the data with cp, I ran into bad blocks and I/O errors on the RAID array, so I switched from cp to dd with the noerror,sync conversion flags. I ran: dd if=/mnt/old/file.img of=/mnt/**old**/file.img bs=4k conv=noerror,sync
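
The bold **old** marks the mistake: the output path points back at the source file, and since dd opens its output with truncation by default (no conv=notrunc), the source is cut to zero length. A minimal sketch of the difference, assuming the new filesystem were mounted at a hypothetical /mnt/new:

    # What was presumably intended: read from the old (failing) filesystem and
    # write to the new one; /mnt/new is a hypothetical mount point.
    dd if=/mnt/old/file.img of=/mnt/new/file.img bs=4k conv=noerror,sync

    # What was actually run: input and output are the same file, so dd's open
    # of the output truncates /mnt/old/file.img to zero length immediately.
    dd if=/mnt/old/file.img of=/mnt/old/file.img bs=4k conv=noerror,sync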

**Problem** Now the file /mnt/old/file.img has zero size on the XFS filesystem. Is there a simple way to recover it?

Upvotes: 0

Views: 1137

Answers (2)

Andrey Ivanov

Reputation: 1

I finally found a solution, but it isn't very simple.

xfs_undelete did not solve my problem because it does not support the B+tree extent storage format (V3) that XFS uses for very big files.

The semi-manual procedure that succeeded for me consists of these main steps:

  1. Unmount the filesystem immediately and make a full partition backup to a file using dd
  2. Investigate the XFS log entries about the truncated file
  3. Manually revert the inode core header using xfs_db in expert mode (see the xfs_db sketch after this list). N.B. recovering the inode core does not mark the file's extents as allocated again, so if you try to copy data from the file with the recovered inode header in the usual way you will still get an I/O error. That is why I had to write a Python script.
  4. Use a script that extracts the extent data from the inode's B+tree and writes it to disk (a rough illustration of the idea follows further below)
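
As a rough terminal-level sketch of steps 1 and 3 only: the device path, inode number, size and extent count below are placeholders; the real values come from your own setup and from the XFS log investigation in step 2.

    # Step 1: take the filesystem offline and keep a raw backup to work on
    umount /mnt/old
    dd if=/dev/vg_old/lv_images of=/backup/lv_images.raw bs=4M   # placeholder names

    # Step 3: revert the truncated inode core by hand with xfs_db in expert mode.
    # Work on the backup copy, never on the only remaining copy of the data.
    xfs_db -x -f /backup/lv_images.raw
    xfs_db> inode 12345                   # placeholder inode number from the XFS log
    xfs_db> print core.size               # shows 0 after the truncation
    xfs_db> write core.size 10737418240   # original file size in bytes (placeholder)
    xfs_db> write core.nextents 42        # original extent count (placeholder)
    xfs_db> print                         # other core fields may also need reverting, per the log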

I have published the recovery script on GitHub under the LGPL license.
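
Purely as an illustration of the idea behind step 4 (this is not the published script), a single extent could in principle be pulled out by hand; every number below is a placeholder taken from the outputs of the first two commands:

    # List the recovered inode's extents (file offset, start block, block count)
    xfs_db -r -f /backup/lv_images.raw -c "inode 12345" -c "bmap"

    # XFS startblock values are not linear offsets, so convert one to a
    # 512-byte disk address (daddr) before copying
    xfs_db -r -f /backup/lv_images.raw -c "convert fsblock 1234567 daddr"

    # Copy that extent's data out of the raw image into the rebuilt file at the
    # matching file offset; skip, seek and count are in 512-byte units derived
    # from the daddr, the file offset and the block count (all placeholders)
    dd if=/backup/lv_images.raw of=recovered.img bs=512 \
       skip=71172096 seek=0 count=2097152 conv=notrunc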

P.S. Some data was still lost because of corrupted inode B+tree extent records, but that data was not important to me.

Upvotes: 0

S.Haran

Reputation: 3

My sense is your RAID array has failed. You can see the RAID state with...

    cat /proc/mdstat

Since you are seeing I/O errors, that is likely the source of your problem. The best path forward would be to make sector-level copies of each RAID member (or at a minimum the member(s) that are throwing I/O errors). See GNU ddrescue for Linux; it is designed to copy failing hard drives. Then perform the recovery work from the copies.
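
A minimal sketch of that imaging, assuming a failing member at a hypothetical /dev/sdb and enough space under /mnt/backup:

    # First pass: grab everything that reads cleanly, skip the slow scraping phase
    ddrescue -f -n /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map
    # Second pass: go back and retry the bad areas up to three times
    ddrescue -f -r3 /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map

The mapfile (third argument) records which sectors have been recovered, so the run can be interrupted and resumed safely.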

Upvotes: 0
