
Re: Corrupt root filesystem



On Fri, 7 Jul 2023 at 20:30, Reco <recoverym4n@enotuniq.net> wrote:
>
> On Fri, Jul 07, 2023 at 06:26:28PM +0100, Mick Ab wrote:
> > The error messages were of the form :-
> >
> >   "/dev/mapper/vgpcname-root contains a file system with errors, check
> > forced.
> >    Inodes that were a part of a corrupted orphan linked lost found.
> >    /dev/mapper/vgpcname-root : UNEXPECTED INCONSISTENCY; RUN fsck
> > manually.(i.e .,
> >    without -a or -p options). fsck exited with status code 4. The root
> >    filesystem on /dev/mapper/vgpcname-root requires a manual fsck
> >
> > There is then a flashing prompt after "(initramfs)".
>
> So, first things first, it's not "before reboot".
> It's "during the boot". And note that initramfs ran fsck, but it failed.
>
> Second, yes, that particular filesystem did indeed require fsck.
>
>
> > The following command was thus run :-
> > sudo fsck -y /dev/mapper/vgpcname-root
> > The PC could then be rebooted.
>
> You've got it wrong here again.
> During the initramfs stage the root filesystem is mounted read-only.
> This allows it to be checked by fsck without causing additional damage.
> And, since it's the root filesystem, a reboot is *required* after the
> fsck.
>
>
> > The file system is ext4.
>
> Thanks. It's a rare sight these days that people actually answer all the
> questions they're asked.
>
> Now, assuming you're using a stock Debian kernel, it's unlikely to be a
> kernel bug. Likewise, we can exclude some "user-friendly" software (I'm
> looking at you, GNOME).
>
> Which leaves us with the hardware fault.
>
> Hate to break it to you, but additional information would be welcome.
> You're using lvm2, that much is obvious.
> But which drive does your physical volume reside on?
> I.e., make, model, and SMART attributes, if any?
>
> Reco
>

I have two 1 TB 7500 rpm Seagate hard drives (I don't know the model name). They are in a Debian software RAID 1 array. I do not have any SMART diagnostics, but the RAID 1 array looks okay judging from the output of 'cat /proc/mdstat'. There is also no audible sign of imminent failure of either drive. I could gather the drive details along the lines sketched below.
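If it would help, something along these lines should pull the make, model and SMART attributes. This is just a sketch: it assumes the smartmontools package is installed, that /dev/sda and /dev/sdb are the two RAID members, and that the array is /dev/md0; the actual names here may well differ.

  # drive make, model and serial number for each RAID member
  sudo smartctl -i /dev/sda
  sudo smartctl -i /dev/sdb

  # full SMART attributes and self-test log
  sudo smartctl -a /dev/sda
  sudo smartctl -a /dev/sdb

  # member device names and models without smartmontools
  lsblk -o NAME,MODEL,SIZE,ROTA

  # RAID 1 state in more detail than /proc/mdstat
  sudo mdadm --detail /dev/md0

The SMART attributes worth watching would be Reallocated_Sector_Ct and Current_Pending_Sector.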

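P.S. For anyone else who hits the "(initramfs)" prompt described above: that shell is busybox, so there is no sudo in it. A minimal sequence, assuming the same device name as above, would be roughly:

  (initramfs) fsck -y /dev/mapper/vgpcname-root
  (initramfs) reboot

Typing 'exit' instead of 'reboot' should also let the boot continue once fsck has finished cleanly.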
