
Bug#579005: XFS internal error xfs_da_do_buf(2) at line 2085 of file fs/xfs/xfs_da_btree.c.




Hello,

I have booted with 2.6.39 from backports and was able to repair the filesystem. (I took a snapshot of the LUN and tried the repair on that snapshot first.)
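
Roughly, the idea was the following (a sketch only; the snapshot device name is made up for illustration, and in our case the snapshot was taken on the storage side rather than with LVM):

  # check the snapshot copy first without writing anything (-n = no-modify mode)
  xfs_repair -n /dev/mapper/mail22-ds3400-2-snap
  # then the real repair on the snapshot, and once that looked sane, on the LUN itself
  xfs_repair -v /dev/mapper/mail22-ds3400-2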

I have attached the output of the repair.

I'm going to reply to the questions I can answer :)

  - what steps you perform to reproduce this, what happens, and how
    that differs from what you expected (should be easy in this
    example)

I was not able to enter a directory. The problem had already been reported to us in November, but it got lost in our ticketing system. I only saw it myself recently while doing some other checks.


  - which kernel versions you have tried and results from each

I was running 2.6.32.


  If we are lucky, someone on that list might suggest commands to help
  diagnose it, which should make it easier for others to artificially
  reproduce, see if 3.x.y is affected, and make sure it is fixed in
  3.x.y and 2.6.32.y.

I indeed suspect a problem with the underlying hardware; the system has crashed several times in the past...
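
For reference, the kind of read-only commands that are usually asked for in such cases (run with the filesystem unmounted; the metadump file name is just an example):

  # dry-run check that reports problems without modifying anything
  xfs_repair -n /dev/mapper/mail22-ds3400-2
  # dump the filesystem metadata (directory entry names obfuscated, no file data) for the developers
  xfs_metadump /dev/mapper/mail22-ds3400-2 /tmp/mail22-ds3400-2.metadump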

Thanks for your feedback, Jonathan. We were able to repair the problem, so for us it is fixed. It's good, however, to have this in a bug report.

Rudy
root@cyrprd3:~# xfs_repair -v /dev/mapper/mail22-ds3400-2 
Phase 1 - find and verify superblock...
        - block cache size set to 4628616 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 476774 tail block 476774
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
bad directory block magic # 0x494e0000 in block 0 for directory inode 1074040836
corrupt block 0 in directory inode 1074040836
  will junk block
no . entry for directory 1074040836
no .. entry for directory 1074040836
problem with directory contents in inode 1074040836
cleared inode 1074040836
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 0
        - agno = 2
        - agno = 3
entry "bo^hostens" at block 1 offset 1424 in directory inode 3221226960 references free inode 1074040836
  clearing inode number in entry at offset 1424...
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
bad hash table for directory inode 3221226960 (no data entry): rebuilding
rebuilding directory inode 3221226960
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
disconnected dir inode 1073872228, moving to lost+found
disconnected inode 1073932762, moving to lost+found
disconnected inode 1073964036, moving to lost+found
disconnected inode 1074040838, moving to lost+found
disconnected inode 1074040839, moving to lost+found
disconnected inode 1074040841, moving to lost+found
Phase 7 - verify and correct link counts...
resetting inode 1600316 nlinks from 3 to 2
resetting inode 4828946 nlinks from 2 to 1
resetting inode 8181821 nlinks from 3 to 2
resetting inode 24462957 nlinks from 3 to 2
resetting inode 25151068 nlinks from 2 to 1
resetting inode 64114025 nlinks from 4 to 3
resetting inode 3221226960 nlinks from 178 to 177

        XFS_REPAIR Summary    Fri Jan 13 16:29:13 2012

Phase   Start   End   Duration
Phase 1:  01/13 16:24:39  01/13 16:24:39  
Phase 2:  01/13 16:24:39  01/13 16:25:48  1 minute, 9 seconds
Phase 3:  01/13 16:25:48  01/13 16:26:16  28 seconds
Phase 4:  01/13 16:26:16  01/13 16:26:25  9 seconds
Phase 5:  01/13 16:26:25  01/13 16:26:26  1 second
Phase 6:  01/13 16:26:26  01/13 16:26:41  15 seconds
Phase 7:  01/13 16:26:41  01/13 16:26:41  

Total run time: 2 minutes, 2 seconds
done

