
Re: What to do with dead raid 1 partitions under mdadm



On 25/10/14 11:19 PM, mett wrote:

Hi,

I'm running Squeeze with RAID 1 under mdadm.
One of the RAID partitions failed, and I replaced it with space I had
available on the same disk.

Today, when rebooting, I got an error because the boot flag was still
set on both partitions (sdb1 and sdb3 below). I used the rescue mode of
the Debian installer CD to remove the boot flag with fdisk, and now
everything is working.

My question is: what should I do with the dead RAID partitions on that
disk (sdb1 and sdb2 below)?

Can I safely delete them, or mark them as unusable, or similar?

Below are some details about the system.

/dev/sdb is 250G; sdb1 and sdb2 failed. I created sdb3 and sdb4 and
added them to the arrays. They are the current members of the md
arrays.

/mett# uname -a
Linux asus 3.2.0-0.bpo.4-686-pae #1 SMP Debian 3.2.57-3+deb7u2~bpo60+1
i686 GNU/Linux

root@asus:/home/mett#
root@asus:/home/mett# mdadm --detail /dev/md1
/dev/md1:
         Version : 1.2
   Creation Time : Mon Feb  4 22:46:04 2013
      Raid Level : raid1
      Array Size : 97654712 (93.13 GiB 100.00 GB)
   Used Dev Size : 97654712 (93.13 GiB 100.00 GB)
    Raid Devices : 2
   Total Devices : 2
     Persistence : Superblock is persistent

     Update Time : Sun Oct 26 12:03:37 2014
           State : clean
  Active Devices : 2
Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0

            Name : asus:1  (local to host asus)
            UUID : 639af1ab:8ec418b5:8254ef0d:ad9a728d
          Events : 75946

     Number   Major   Minor   RaidDevice State
        2       8        2        0      active sync   /dev/sda2
        3       8       20        1      active sync   /dev/sdb4

(/dev/md0 is same structure as above with sda1 and sdb3 as raid members)


root@asus:/home/mett#
Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00066b3e

    Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          64      514048+  fd  Linux raid autodetect
/dev/sdb2              65       12515   100012657+  fd  Linux raid autodetect
/dev/sdb3   *       12516       12581      530145   fd  Linux raid autodetect
/dev/sdb4           12582       25636   104864287+  fd  Linux raid autodetect

Command (m for help):

Thanks a lot in advance.
As I understand your issue:
- you had RAID 1 arrays md0 (sda1+sdb1) and md1 (sda2+sdb2),
- sdb1 and sdb2 showed errors, so you removed them from the arrays and
  added sdb3 and sdb4 from the same physical disk,
- you are now wondering what to do with the two dead partitions on
  device sdb (sdb1 and sdb2).
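Before deleting anything, it may be worth confirming that sdb1 and sdb2 really are no longer array members. These checks are read-only and safe to run (device names taken from your output above):

```shell
# List current array membership; sdb1/sdb2 should not appear here.
cat /proc/mdstat

# Inspect any leftover RAID superblocks on the dead partitions.
# If these still print metadata, the kernel's autodetect could try
# to assemble them again on a future boot.
mdadm --examine /dev/sdb1
mdadm --examine /dev/sdb2
```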

I'm guessing that sdb is nearly toast. Run smartctl -H /dev/sdb on it. If it passes, remove it from the array and repartition it, then add it back into the array.
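If the drive checks out, the cleanup might look roughly like this. This is only a sketch: the partition numbers match the fdisk listing above, and --zero-superblock is irreversible, so double-check the device names before running it:

```shell
# Overall health check first.
smartctl -H /dev/sdb

# Wipe the stale RAID metadata so autodetect (type fd) never tries
# to assemble sdb1/sdb2 again.
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2

# Then delete the partitions with fdisk (d, 1; d, 2; w), or retype
# them if you want to reuse the space for something else.
fdisk /dev/sdb
```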

If it fails, remove it from your computer and replace it. Whatever new drive you get will probably be larger than your current drives, so partition it so that the new sdb1 is larger than the current sda1 and the rest of the space goes to sdb2. That way you can expand md1 when you eventually have to replace sda (it will happen; disks eventually fail).
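A rough sketch of that replacement, assuming the new drive also shows up as /dev/sdb and you give md0 the new sdb1 and md1 the new sdb2 (adapt the names to your actual layout, where the live members are currently sdb3/sdb4):

```shell
# Mark the old disk's members failed and pull them out of the arrays.
mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3
mdadm /dev/md1 --fail /dev/sdb4 --remove /dev/sdb4

# Power off, swap the drive, partition it (two partitions of type fd,
# sdb1 at least as big as sda1, the rest of the space in sdb2), then:
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2

# Watch the resync progress.
cat /proc/mdstat
```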

In general it is a really bad idea to keep a failing disk in your system. Not only will it fail sooner rather than later, it will also slow down your system due to I/O errors and retries.

