
Bug#608118: OCFS2 filesystem fails with Kernel Stacktraces



Hi Torben,

Torben Nehmer wrote:

> Case 1: At this point the system usually goes unresponsive and needs
> to be rebooted again (usually with a manual, gradual recovery of the
> DRBD).
[...]
> INFO: task crm_mon:3354 blocked for more than 120 seconds.
[...]
> Call Trace:
>  [<ffffffff810b826f>] ? zone_watermark_ok+0x20/0xb1
>  [<ffffffff810c73d0>] ? zone_statistics+0x3c/0x5d
>  [<ffffffffa04ab687>] ? ocfs2_wait_for_recovery+0x9d/0xb7 [ocfs2]
>  [<ffffffff81064cea>] ? autoremove_wake_function+0x0/0x2e
>  [<ffffffffa049b8fb>] ? ocfs2_inode_lock_full_nested+0x16b/0xb2c [ocfs2]
>  [<ffffffffa04a273c>] ? ocfs2_permission+0x6a/0x166 [ocfs2]
[...]
>  [<ffffffff810ece53>] ? do_sys_open+0x55/0xfc
[...]
> Case 2:
>
> This happened when I manually put one node into standby using
> the CRM tools, triggering a graceful recovery. Again, the system
> produced a kernel stacktrace and was unusable after that:
[...]
> kernel BUG at [...]/fs/ocfs2/heartbe
[...]
> Call Trace:
>  [<ffffffffa04059fc>] ? ocfs2_control_write+0x4cc/0x525 [ocfs2_stack_user]
>  [<ffffffff810ef012>] ? vfs_write+0xa9/0x102
>  [<ffffffff810ef127>] ? sys_write+0x45/0x6e
>  [<ffffffff81010b42>] ? system_call_fastpath+0x16/0x1b
> Code: 08 3e 41 0f ab 6c 24 08 66 41 ff 85 a8 00 00 00 5e 5b 5d 41 5c 41 5d c3 55 89 fd 53 48 89 f3 48 83 ec 08 39 be 30 01 00 00 75 04 <0f> 0b eb fe f6 05 2b 6a f8 ff 01 74 41 f6 05 2a 6a f8 ff 01 75
> RIP  [<ffffffffa049a2bc>] ocfs2_do_node_down+0x13/0x7d [ocfs2]

Before investigating this further: can you still reproduce this?  What
kernel are you using these days?  Is it reproducible without the
vmware modules?

Thanks for a clear report, and sorry for the long quiet.

Sincerely,
Jonathan
