
Bug#964494: marked as done (File system corruption with ext3 + kernel-4.19.0-9-amd64)



Your message dated Sun, 18 Apr 2021 19:01:41 +0200
with message-id <YHxl9a5nrcrsVsxm@eldamar.lan>
and subject line Re: Bug#964494: Info received (Bug#964494: File system corruption with ext3 + kernel-4.19.0-9-amd64)
has caused the Debian Bug report #964494,
regarding File system corruption with ext3 + kernel-4.19.0-9-amd64
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
964494: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=964494
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: linux-signed-amd64
Version: 4.19.0-9-amd64

We've now had two separate reports of Debian buster users running 4.19.0-9-amd64 who experienced serious file system corruption.

- Both were using ext3
- Both are running Xen HVM, but I do not have reason to believe this to be related
- Both are on distinct physical hosts
- Both had upgraded from an older, non-4.19 kernel within the last two or three weeks

One user had the error:

EXT4-fs error (device xvda1): ext4_validate_block_bitmap:393: comm cat: bg 812: block 26607617: invalid block bitmap
Aborting journal on device xvda1-8
EXT4-fs error (device xvda1): ext4_journal_check_start:61: Detected abnormal journal
EXT4-fs (xvda1): Remounting filesystem read-only
EXT4-fs (xvda1): Remounting filesystem read-only
EXT4-fs error (device xvda1) in ext4_orphan_add:2863: Journal has aborted
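
[Editor's note: the "bg 812" in the first error line can be cross-checked against the block number, assuming the usual default of 32768 blocks per group with 4 KiB blocks. The failing host's exact geometry is not in the report; the tune2fs dump from the other affected host below shows these defaults.]

```shell
# Cross-check: which block group does block 26607617 fall into, assuming
# the common default of 32768 blocks per group (first data block 0 with
# 4 KiB blocks)? These values are from the error line above; the group
# size is an assumption, not stated for this host.
blocks_per_group=32768
block=26607617
echo $((block / blocks_per_group))   # prints 812, agreeing with "bg 812"
```

So the reported block is the second block of group 812 (812 * 32768 = 26607616); "invalid block bitmap" means the group descriptor's recorded bitmap location failed validation, not that this block itself is bad.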

The other gave us the output of tune2fs -l:

tune2fs 1.44.5 (15-Dec-2018)
Last mounted on:          /
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              4700160
Block count:              9437183
Reserved block count:     471048
Free blocks:              6164372
Free inodes:              4479367
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      730
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         16320
Inode blocks per group:   510
Filesystem created:       Thu Apr 26 19:55:21 2012
Last mount time:          Tue Jul  7 15:11:46 2020
Last write time:          Tue Jul  7 15:11:45 2020
Mount count:              1
Maximum mount count:      26
Last checked:             Tue Jul  7 15:10:50 2020
Check interval:           15552000 (6 months)
Next check after:         Sun Jan  3 14:10:50 2021
Lifetime writes:          10 TB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
First orphan inode:       1224109
Default directory hash:   tea
Directory Hash Seed:      77ef7ea3-5e01-4e55-b3da-3036769fb64b
Journal backup:           inode blocks
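
[Editor's note: the figures in the dump above are internally consistent; two quick arithmetic checks, using only values printed by tune2fs:]

```shell
# Sanity checks on the tune2fs dump above (all inputs taken from it).
inodes_per_group=16320
inode_size=128      # bytes
block_size=4096     # bytes
free_blocks=6164372

# Inode blocks per group = inodes per group * inode size / block size
echo $((inodes_per_group * inode_size / block_size))   # prints 510, matching the dump

# Free space in MiB = free blocks * block size / 2^20
echo $((free_blocks * block_size / 1024 / 1024))       # prints 24079 (MiB, ~23.5 GiB)
```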

--Sarah

--- End Message ---
--- Begin Message ---
Hi,

On Sun, Apr 18, 2021 at 09:04:52AM -0700, Sarah Newman wrote:
> On 4/18/21 8:36 AM, Salvatore Bonaccorso wrote:
> > On Tue, Aug 18, 2020 at 10:02:12PM -0700, Sarah Newman wrote:
> > > We haven't had any further reports of file system corruption. I
> > > would guess that converting to EXT4 is sufficient to avoid the
> > > issue.
> > 
> > Should this bug be closed or is there anything we still can/should do
> > about it?
> > 
> > Regards,
> > Salvatore
> > 
> 
> You can close it.

Okay, let's do that then.

Regards,
Salvatore

--- End Message ---
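
[Editor's note: the reporters worked around the problem by converting from ext3 to ext4. They do not describe their procedure; the sketch below is the generic e2fsprogs feature-flag recipe, demonstrated on a throwaway image file so no real device is touched, and assuming the e2fsprogs tools (mke2fs, tune2fs, e2fsck, dumpe2fs) are installed.]

```shell
# Generic ext3 -> ext4 feature conversion, sketched on a scratch image
# (assumption: e2fsprogs is installed; /tmp/ext3-demo.img is a made-up path).
truncate -s 64M /tmp/ext3-demo.img
mke2fs -q -F -t ext3 /tmp/ext3-demo.img

# Enable the core ext4 on-disk features on the existing filesystem.
# Note: extents only apply to files written after the conversion;
# existing files keep their indirect-block maps.
tune2fs -O extents,uninit_bg,dir_index /tmp/ext3-demo.img

# A full fsck is mandatory after changing these features; exit status 1
# merely means "errors were corrected" (the group checksums get rewritten).
e2fsck -fy /tmp/ext3-demo.img || true

dumpe2fs -h /tmp/ext3-demo.img | grep 'Filesystem features'
```

On a real system the filesystem must be unmounted first (or the work done from a rescue environment), and a backup taken beforehand.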
