Re: HELP! Re: How to fix I/O errors?
- To: debian-user@lists.debian.org
- Subject: Re: HELP! Re: How to fix I/O errors?
- From: David Christensen <dpchrist@holgerdanske.com>
- Date: Wed, 8 Feb 2017 09:52:13 -0800
- Message-id: <b5cd4a8b-55c8-be3b-6dca-f8548fdb32f9@holgerdanske.com>
- In-reply-to: <CANbRw9=xLDx0tjahovA4iwvcQBtg2bAzCC3D5hy7yVW7XYeG9A@mail.gmail.com>
- References: <CANbRw9=Eqs2vqUOJ9zQFdgwjn2yUv7CoxKpQLu4u41oEeVzjtw@mail.gmail.com> <CANbRw9ktzRv=BK4R+TgtsmEZK-DcM7TM7nVD2Gfe0b_BUc23Jw@mail.gmail.com> <CANbRw9==HZzDha2SoDdh1xgqzALGH43_KMAoMJpQqGLfBdfeiw@mail.gmail.com> <CANbRw9k2wdsqYu+FesVGnediuhfvbLSWFi_Y7uHwhBWHdo8tgg@mail.gmail.com> <CANbRw9k3J23Kh9W=-zhaeGtZ6KBC_4MPybhTB9W3MDw6-6dYHw@mail.gmail.com> <CANbRw9k=awgh+1PHV=2g+3F5zKdnE2AgjBs7R3KLuV=sU9iy5Q@mail.gmail.com> <CANbRw9mfPxf8qshyQ41Ji60PoYcw+NjrjRrcLtreNk_DE+3z3g@mail.gmail.com> <CANbRw9n+rLxDSCggw6ZZa36_Hpy2YTirPcBvfC7CvdWt2uMusQ@mail.gmail.com> <CANbRw9myyL64P6Bz+mdeAvso7OZahknfkh4gjsi62gDjdrETRA@mail.gmail.com> <CANbRw9mgNrZvihZoE8-gFcSSBTQBaUHdHGxr27J7XJrM-AKsRQ@mail.gmail.com> <CANbRw9=Jfr56exNBTJ5oH6YevAARvRfaU_RRY6iYHPf1+0uCsg@mail.gmail.com> <CANbRw9nOyCNN-AjsrCtbGX=b+-5hOY-AUkx=Rhh-mMqnYVU2FQ@mail.gmail.com> <CANbRw9ndA_gFEhKSeO=we-GmYYC4MrTrLSe5TVHz1cNWyRSGZA@mail.gmail.com> <CANbRw9=xLDx0tjahovA4iwvcQBtg2bAzCC3D5hy7yVW7XYeG9A@mail.gmail.com>
On 02/07/17 23:37, Marc Shapiro wrote:
> How it went is not well.
> David Christensen wrote:
>> Run memtest86+ for 24+ hours to verify that you don't have a memory
>> problem.
Did you test the memory? If not, test it now just to be sure.
>> Use SeaTools to wipe the new 1 TB drive and run the short and long
>> tests. Stop if anything fails.
> I tested the new drive with SeaTools and it was fine.
Please confirm that you wiped the 1 TB recovery drive.
> Then I made a Clonezilla live CD and booted from it. It stopped
> on the first read error with a message saying to restart using the rescue
> option. I did that. After 5 hours it finished without mentioning any
> errors.
>
> I tried to boot to the old disk (since it was still wired that way). I got
> dropped into a maintenance shell with fs errors on /dev/sda4, which is the
> physical volume for all my LVM logical volumes -- /usr, /var, /home and
> /tmp. It says to run fsck manually.
> I decided to try the new drive, so I changed the cables and re-booted.
> Maintenance shell, again.
>
> / mounted clean
> lvm started
> /home fs has errors: run fsck (at this point, I'm afraid to try it)
> /var, /usr, and /tmp all say that the superblock cannot be read or is
> invalid, and to try running
>
>     e2fsck -b 8193 <device>
> or
>     e2fsck -b 32768 <device>
>
> Which do I use?
>
> How did trying to clone the disk make such a mess of BOTH disks?
Don't blame Clonezilla. Everything is decaying -- you, me, those hard
drives, etc. With that in mind, do the most precious operations first --
because in 1 second, 1 minute, 1 hour, 1 day, 1 month, 1 year, 1 decade,
1 century, whatever, the data will be inaccessible without extraordinary
means.
Forget about booting off the failing 1 TB disk. Disconnect it for now.
Forget about booting off the 1 TB recovery disk. It should now contain
whatever blocks Clonezilla was able to recover. It is now in a state
analogous to Swiss cheese. Disconnect it for now.
> Any help getting a working system again will be greatly appreciated.
On the computer you use for e-mail, start an administration log folder
for the machine in question. Start a log.txt file and take notes. Cut
and paste what you can. Photograph screens and transcribe what you
can't. Collect important files. Put it all into a version control system.
>> I'd do a fresh install on a 16+ GB SSD (USB flash drives also
>> work).
Install SSH when you build the new system drive.
Use ssh(1) to log in from your e-mail computer. Consider using
script(1) to capture your console sessions, and scp(1) to copy out the
files. Read fsck(8) and consider your moves carefully. Reconnect the 1
TB recovery disk and see what fsck can recover.
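As for "which -b do I use": per e2fsck(8), 8193 is the first backup
superblock on filesystems with 1 KiB blocks, and 32768 on filesystems
with 4 KiB blocks. Partitions as large as yours almost certainly use
4 KiB blocks, so 32768 is the likely answer -- but verify rather than
guess, since a mke2fs dry run will list the real backup locations. A
sketch, practiced on a scratch image (the device names in the comments
are placeholders for your actual LVM volumes):

```shell
# Practice on a scratch image first; never experiment on the real disks.
truncate -s 256M /tmp/practice.img
# -n = dry run: prints the layout, including the
# "Superblock backups stored on blocks: ..." list, without writing anything.
mke2fs -F -n -b 4096 /tmp/practice.img
# On the real volume, the same dry run shows which -b values are valid:
#   mke2fs -n /dev/mapper/<vg>-home                    # read-only: lists backups
#   e2fsck -n -b 32768 -B 4096 /dev/mapper/<vg>-home   # read-only trial check
# Drop e2fsck's -n only once you are ready to let it make repairs.
```

Note that `-n` keeps both commands read-only; without it, e2fsck run
with a backup superblock will write to the volume when it finishes.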
David