
xfs_repair errors



Hi,

I'd like some feedback from the XFS gurus on the list. I have an XFS filesystem on LVM with some errors:

root@server:/var/lib# xfs_repair -n -f /dev/mapper/system-var
Phase 1 - find and verify superblock...
Cannot get host filesystem geometry.
Repair may fail if there is a sector size mismatch between
the image and the host filesystem.
Phase 2 - using internal log
        - scan filesystem freespace and inode maps...
agi unlinked bucket 20 is 492308 in ag 3 (inode=3221717780)
agi unlinked bucket 37 is 58885285 in ag 3 (inode=3280110757)
agi unlinked bucket 49 is 69838705 in ag 3 (inode=3291064177)
agi unlinked bucket 41 is 50453993 in ag 1 (inode=1124195817)
agi unlinked bucket 63 is 36183807 in ag 1 (inode=1109925631)
agi unlinked bucket 6 is 74519110 in ag 0 (inode=74519110)
agi unlinked bucket 32 is 30824736 in ag 0 (inode=30824736)
sb_icount 3250752, counted 3278592
sb_ifree 101385, counted 88781
sb_fdblocks 113317596, counted 112215713
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
data fork in ino 1102290365 claims free block 74182197
data fork in ino 1102290365 claims free block 74182198
        - agno = 2
imap claims in-use inode 2166611383 is free, would correct imap
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected inode 30824736, would move to lost+found
disconnected inode 74519110, would move to lost+found
disconnected inode 1109925631, would move to lost+found
disconnected inode 1124195817, would move to lost+found
disconnected inode 2166611383, would move to lost+found
disconnected inode 3221717780, would move to lost+found
disconnected inode 3280110757, would move to lost+found
disconnected inode 3291064177, would move to lost+found
Phase 7 - verify link counts...
would have reset inode 30824736 nlinks from 0 to 1
would have reset inode 74519110 nlinks from 0 to 1
would have reset inode 1109925631 nlinks from 0 to 1
would have reset inode 1124195817 nlinks from 0 to 1
would have reset inode 3221717780 nlinks from 0 to 1
would have reset inode 3280110757 nlinks from 0 to 1
would have reset inode 3291064177 nlinks from 0 to 1
No modify flag set, skipping filesystem flush and exiting.

This xfs_repair was run while the system was up and the filesystem mounted, which might itself account for (some of) the reported errors.

But how serious does this look?

Would it be an idea to create an LVM snapshot of the var volume and try a repair on that, to see how it goes? Something along the lines sketched below, perhaps.
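
Roughly what I have in mind (a rough sketch only: the volume group name "system" and LV name "var" are inferred from /dev/mapper/system-var, and the snapshot size is just a guess):

lvcreate --snapshot --size 5G --name varsnap system/var   # snapshot of the mounted var LV
xfs_repair -n /dev/system/varsnap                         # read-only check on the snapshot first
xfs_repair /dev/system/varsnap                            # actual repair, touching only the snapshot
lvremove system/varsnap                                    # discard the snapshot afterwards

Since the snapshot is taken from a mounted filesystem, xfs_repair may complain about a dirty log; I assume I'd then need to mount the snapshot once to replay the log, or zero it with xfs_repair -L.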

(I do have an rsync backup of this server)

Or does anyone have other ideas on the best/safest way to resolve this?

MJ

