
Re: mdadm and UUIDs for its component drives



On Mon, Jun 27, 2011 at 1:59 PM, Philip Hands <phil@hands.com> wrote:

>>  ok, i bring in phil now, who i was talking to yesterday about this.
>> what he said was (and i may get this wrong: it only went in partly) -
>> something along the lines of "remember to build the drives with
>> individual mdadm bitmaps enabled".  this will save a great deal of
>> arseing about when re-adding drives which didn't get properly added:
only 1/2 a 1TB drive will need syncing, not an entire drive :)  the
>> bitmap system he says has hierarchical granularity apparently.
>
> What I said was: "internal" bitmaps

 ahh.  yes.  i missed the word "internal" but heard the good bits,
then looked up the man page and went "ohh, ok, that must be it".  i
get there in the end :)
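
 for the archives, in case anyone else lands here: internal bitmaps
can be switched on after the fact with --grow, no rebuild needed.  a
sketch (the device names are illustrative, not from any real box):

```
# add an internal write-intent bitmap to an existing array
mdadm --grow /dev/md/2 --bitmap=internal

# or enable it at creation time
mdadm --create /dev/md/2 --level=5 --raid-devices=4 \
      --bitmap=internal /dev/sd[abcd]1

# confirm the bitmap is active
mdadm --detail /dev/md/2 | grep -i bitmap
```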


>> also, he recommended taking at least one of the external drives *out*
>
> I think I said: WTF?

 ha ha :)

>  You buy a machine that had 4 hot swap SATA bays,
> and you're plugging crappy external USB drives into it instead?  Are you
> mental?  (or at least, if I didn't say that out loud, that's what I was
> thinking ;-)

 i seem to remember the incredulity which definitely had the words
"are you mental??" behind it

> I must say that I'm a little befuddled about how you managed to make
> the system sensitive to which device contains which MD component -- I
> seem to remember you mentioning that you had devices listed in your
> mdadm.conf -- just get rid of them.

 well, i may have implied that, on account of not being able to
express it - i get it now: the things i thought were "devices" are
actually the UUIDs associated with the RAID array...

> ARRAY /dev/md/2 metadata=1.2 UUID=65c09661:02fc3a16:402916d3:5d4987f4 name=sheikh:2

 ... just like this.
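
 so a minimal mdadm.conf can skip device paths entirely - something
like this (UUID copied from phil's line above; the DEVICE keyword
tells mdadm to scan everything):

```
# let mdadm scan all partitions for superblocks
DEVICE partitions
# arrays identified purely by UUID -- no device paths anywhere
ARRAY /dev/md/2 metadata=1.2 UUID=65c09661:02fc3a16:402916d3:5d4987f4 name=sheikh:2
```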

> No mention of devices, which is a good job because that machine seems
> to randomise the device mapping on each boot, and is capable of moving
> them about when running if you pop the drive out of the machine and back
> in again.

 yehhs, i noticed that.  even the bloody boot drive comes up as
/dev/sde occasionally.  last reboot i was adding drive 4 to the array,
it was named /dev/sda.  kinda freaky.
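
 (for anyone else bitten by this, the UUIDs survive the renaming, so
you can always ask a device which array it belongs to - a sketch,
device names illustrative:

```
# which array does this (currently-named) device belong to?
mdadm --examine /dev/sda1 | grep -i uuid

# and the array's view of its components, whatever they're called today
mdadm --detail /dev/md/2
```

 ...which is rather the point of the doc patch below.)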

 ok.

 so.

 let's have a go at updating the docs...

 DESCRIPTION
       RAID  devices  are  virtual devices created from two or more real block
       devices.  This allows multiple devices (typically disk drives or parti‐
       tions  thereof)  to be combined into a single device to hold (for exam‐
       ple) a single filesystem.  Some RAID levels include redundancy  and  so
       can survive some degree of device failure.

       Linux  Software  RAID  devices are implemented through the md (Multiple
       Devices) device driver.  UUIDs are used internally through Linux  Soft‐
       ware RAID to identify any device that is part of a RAID.  In this  way,
       names may change but the innocent are protected.

 ok, scratch that last sentence :)

       Linux  Software  RAID  devices are implemented through the md (Multiple
       Devices) device driver.  UUIDs are used internally in  Linux  Software
       RAID to identify any device that is part of a RAID, thus ensuring that
       even if a device's name changes (as may happen when devices are  moved
       to  another  system,  or  when using removable hot-swappable media)
       Linux RAID can still correctly identify the component devices.

can we start with that - what do you think, martin?  it's right at the
top: it spells things out, and it makes linux RAID look good :)  i'll
try to find appropriate places to put the same info, but the page is
really quite long.  perhaps under "--add" somewhere?
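
 something under "--add" could also tie it back to the bitmap thing
that started all this - a sketch of the re-add case (device names
illustrative, as ever):

```
# a drive fell out and has come back, under whatever name the
# kernel gave it this time.  with an internal bitmap, --re-add
# only resyncs the regions dirtied while it was away, not the
# whole 1TB.
mdadm /dev/md/2 --re-add /dev/sdd1
```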

 l.

