Re: Poor quality of multipath-tools
On Wed, Jul 05, 2006 at 04:06:54PM -0500, John Goerzen wrote:
> On Wed, Jul 05, 2006 at 10:34:41PM +0200, Petter Reinholdtsen wrote:
> > Well, I do not even know what multipath _is_, nor why it is important.
> > If that is representative, I suspect the people interested in
> > multipath have some work to do to raise the awareness of the problem.
> > This email is a very good start, but it seems to assume that everyone
> > knows what multipath is and why it is important. Are multipath machines
> > as common as ppc64 machines, or is the problem affecting a lot of
> > users?
> Let me give a brief explanation of multipath.
> Let's say you want a bunch of disk space. A whole lot -- maybe
> terabytes worth. So you buy a SAN, which is a device that might have
> dozens or hundreds of disks in it. And you can hook multiple servers to
> this SAN. So you have a SAN controller, a bunch of disks in whatever
> RAID configurations you like hooked to it, a fibre channel switch, and
> each server hooked to the FC switch.
> Suddenly you have a lot of really important single points of failure
> that could take down not just one but many servers -- the FC switch, the
> SAN controller, the FC cables, etc.
> So the solution is to build two distinct I/O paths for any server to
> reach the disks. The SAN will have two controllers (each with access to
> disk enclosures). You'll have two FC switches, one controller cabled to
> each. And each server will have two FC links, one to each switch.
> Now, when you bring up this system, Linux will assign *two* /dev/sdX
> devices for each RAID LUN (which basically looks like a disk). At any
> given time, exactly one will be readable and useful. That is, the disk
> can be probed on both controllers, but only one path will support I/O at
> any given time.
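To make this concrete: multipath-tools recognizes that the two sdX devices
are the same LUN by their shared WWID (world-wide ID) and merges them into
one /dev/mapper device. A minimal sketch of what a failover setup might look
like in /etc/multipath.conf -- the WWID and alias below are made-up examples,
and exact option names can vary between multipath-tools versions:

```
# Hypothetical /etc/multipath.conf fragment (illustrative identifiers only)
defaults {
        user_friendly_names yes
}
multipaths {
        multipath {
                wwid   3600508b4000156d700012000000b0000  # example WWID
                alias  mpath_data
                path_grouping_policy failover  # one active path, rest standby
        }
}
```

With this, I/O goes to /dev/mapper/mpath_data and the daemon fails over to
the standby path if the active one dies.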
Not always true. Both paths can be active at the same time, if the SAN
array supports it. Then you also get load balancing between the paths.
I'm currently using multipath with iSCSI SAN, using two active paths with
load balancing and failover.
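An active/active setup like this could be sketched with the multibus
grouping policy instead -- again a hypothetical fragment, with placeholder
vendor/product strings:

```
# Hypothetical fragment for an active/active array: put every path into
# one priority group and round-robin I/O across all of them.
devices {
        device {
                vendor               "EXAMPLE"     # placeholder strings,
                product              "iSCSI-SAN"   # not a real array
                path_grouping_policy multibus      # all paths active
                rr_min_io            100           # I/Os per path before switching
        }
}
```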
So I'm also interested in this stuff.
- Pasi Kärkkäinen
> Adding to the complexity, which one to use can vary while the system
> runs. For instance, if a SAN controller dies, everybody switches over
> to the backup path.
> The multipath-tools package is the userland support necessary to make
> all this work in a sane fashion. It uses the dm-multipath kernel module
> to do that.
> But it's got some problems:
> 1) It doesn't properly scan partition tables in multipath devices
> 2) It doesn't integrate with initramfs, so it's not possible to boot
> off a multipath device unless more work is done
> 3) Some other general bugs and issues
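For problem (1), the usual workaround is to create the partition mappings
by hand with kpartx. A hedged sketch of the procedure -- device names are
examples, and these commands need root plus an actual multipathed LUN, so
they are illustrative only:

```
multipath -ll                    # list multipath maps and their paths
kpartx -a /dev/mapper/mpath0     # add partition mappings for the map
ls /dev/mapper/                  # partitions appear as mpath0p1, mpath0p2, ...
                                 # (naming differs between kpartx versions)
kpartx -d /dev/mapper/mpath0     # remove the mappings again
```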
> BTW, multipath is often called MPIO (MultiPath I/O)
> > > I am gravely concerned, though, about the lack of attention this
> > > package is receiving. Does anyone intend to give it some TLC
> > > anytime soon?
> > Perhaps you could give it some tender loving care, and talk to the
> > people maintaining the affected packages using IRC and email, and
> > hopefully get them to realize why they should fix it in time for
> > etch. :)
> That's what I intend to do. It's maintained by the LVM folks, though,
> and seems to be tied reasonably closely to that somehow. I'm not as
> familiar with all this as they are. But it seems like the package is
> not really being looked after, given its bug reports.
> I have already uploaded multipath-tools-initramfs to Incoming, which
> simply installs initramfs hooks and scripts to make it possible to boot
> from multipath. We are successfully using it with these scripts at our
> > I suspect you might wait in vain if you expect someone else to do
> > it. :)
> I understand. I'm just trying to figure out if there are interested
> parties out here to pitch in, if the LVM folks have plans for it, etc.
> I'm brand-new at this and wouldn't be at all surprised if someone else
> was more capable at it than I am.
> -- John