
Re: shared SCSI install

On Sat, Nov 27, 2004 at 02:10:12PM +0000, Christian Mack wrote:
> Frank Lenaerts wrote:
> >Yesterday evening, I tested the debian-installer rc2 on a shared SCSI
> >system i.e.
> >
> >node1 (no internal disks)
> >node2 (no internal disks)
> >box with shared SCSI disks (both nodes connected to the same bus)
> >
> >Some notes:
> >* To avoid interference:
> >   - SCSI BIOS on node1 sees only SCSI IDs 2 and 3
> >   - SCSI BIOS on node2 sees only SCSI IDs 4 and 5
> >* The bootloader sees what the SCSI BIOS presents i.e. 2 disks.
> >* Linux itself recognises the 4 disks on each host[*]
> >
> >[*] I don't know how I can tell Linux to ignore some disks.
> >
> >As node1 was already installed some time ago, I only installed
> >node2. Installation took place on "/dev/sdc" and went fine. At the end
> >however, there is the question to install grub in the MBR or not. I
> >wondered what _the_ MBR was as both /dev/sda and /dev/sdb were already
> >bootable (RAID1 setup). As I was installing on /dev/sdc, I thought
> >that it would setup grub on the MBR of /dev/sdc, so I confirmed. I saw
> >that grub-install was called on hd0 so I knew something was wrong
> >i.e. it was writing to the MBR of /dev/sda. I corrected the problem on
> >node1 and installed grub manually on node2 (/dev/sdc) using a grub
> >floppy.
> >
> >Question: Wouldn't it be good to present a list of disks so that the
> >user can select the disk whose MBR should be changed?

Hi Christian,

> Sorry, but the MBR is always the MBR of the first disk.
> This is, because most BIOSes try to boot from that.

I know that most BIOSes try to boot from the first disk ("0x80
etc. story") but technically, each disk can have a master boot
record. The fact that PC BIOSes always use "the first" disk should not
change the definition of "MBR".
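As an aside, it is easy to verify that every disk carries its own boot sector: the last two bytes of sector 0 hold the 0x55 0xAA boot signature. A minimal sketch, run here against a throwaway image file so it is harmless (on a real system you would point it at /dev/sda, /dev/sdc, and so on):

```shell
# Create a throwaway 512-byte "disk" and stamp the MBR boot signature
# (bytes 0x55 0xAA at offset 510), as any bootable disk would have.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
printf '\x55\xaa' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null

# Read back the last two bytes of sector 0. Every disk has its own
# sector 0, hence its own MBR -- regardless of what the BIOS boots.
sig=$(od -An -tx1 -j510 -N2 "$img" | tr -d ' ')
if [ "$sig" = "55aa" ]; then
    echo "valid MBR signature"
else
    echo "no MBR signature"
fi
rm -f "$img"
```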

> You can choose another place by selecting "No" in the question about
> installing grub into MBR.

I know, and in future installs I would do that, but at that moment I
thought that, because I was installing on /dev/sdc, the installer
meant the MBR on /dev/sdc. 

At first, I thought it would be better if the installer presented a
list of target MBRs to install grub into. However, this is not that
simple, because of the interplay between the SCSI BIOS, grub at boot
time, the Linux kernel, and grub-install when run from the installed
system.

Taking my original example from above:

- SCSI BIOS on node1 sees /dev/sda and /dev/sdb; SCSI BIOS on node2
  sees /dev/sdc and /dev/sdd

- when booting, grub on node1 uses hd0 for /dev/sda, hd1 for /dev/sdb;
  grub on node2 uses hd0 for /dev/sdc and hd1 for /dev/sdd

- when booted, Linux sees 4 disks on all nodes: /dev/sd{a,b,c,d}

- in Linux, grub-install uses hd0 for /dev/sda, hd1 for /dev/sdb, hd2
  for /dev/sdc and hd3 for /dev/sdd on all nodes

This makes it quite difficult to determine on which disk, and how, to
install the boot loader. Suppose for instance that you could indicate
that you would like grub to be installed on /dev/sdc. It would then
also be important for grub to know what this disk will be called when
you boot the system, i.e. /dev/sdc on node2 will be called hd0, not
hd2. To handle this, grub's device.map file would have to be
rewritten, which in turn would require a user dialog asking for the
mapping (and the user would have to know that mapping).
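For illustration, a device.map that would make grub on node2 consistent with what its BIOS presents at boot time might look like this (device paths taken from the example above; /boot/grub/device.map is the usual location for grub legacy):

```
# /boot/grub/device.map on node2:
# map grub drive names to what the BIOS presents at boot time,
# not to what the running Linux kernel sees.
(hd0)   /dev/sdc
(hd1)   /dev/sdd
```

Note that grub-install with such a map would then disagree with the kernel's own view of the disks, which is exactly the mismatch described above.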

Actually, things would be a lot easier if you could force Linux to
ignore certain disks i.e. Linux on node1 should ignore /dev/sdc and
/dev/sdd; Linux on node2 should ignore /dev/sda and /dev/sdb. In this
case, there would not be any numbering problem at all.
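Short of that, one partial workaround on 2.4/2.6 kernels is to detach the foreign disks after boot through the /proc/scsi/scsi interface. A sketch; the host/channel/id/lun numbers below are assumptions for this setup (check `cat /proc/scsi/scsi` for the real ones), and the kernel may of course already have touched the disks during boot:

```shell
# Build the "remove-single-device" command for a given disk.
# Arguments: host channel id lun -- assumed values below; verify
# against /proc/scsi/scsi on the actual machine.
detach_cmd() {
    echo "scsi remove-single-device $1 $2 $3 $4"
}

# On node1, the disks reserved for node2 sit at SCSI IDs 4 and 5.
# As root you would redirect these lines into /proc/scsi/scsi;
# they are only printed here so the sketch is harmless to run.
detach_cmd 0 0 4 0
detach_cmd 0 0 5 0
```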

This numbering problem causes other trouble as well. For example:
/dev/sda and /dev/sdb form a RAID1 mirror (with autodetection) for
node1, but when node2 boots with RAID1 support in its kernel (needed
to put /dev/sdc and /dev/sdd into a RAID1 mirror later on), it also
sees node1's disks, notices that they form a RAID1 mirror, and starts
a resync to update its own view. I don't like this, so I have been
searching for ways to force Linux to forget about certain disks, but
it does not seem to be possible.
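One way to avoid the unwanted resync, assuming the kernel honours the parameter, is to disable in-kernel RAID autodetection at boot and assemble only the local array explicitly with mdadm. A sketch; the partition names and menu.lst paths are assumptions from the example setup:

```
# In node2's grub menu.lst, disable in-kernel RAID autodetection:
kernel /boot/vmlinuz root=/dev/sdc1 raid=noautodetect

# Then, once booted, assemble only the local mirror by hand:
# mdadm --assemble /dev/md0 /dev/sdc1 /dev/sdd1
```

With autodetection off, node2 never inspects node1's superblocks, so no resync is triggered.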

The question of how to tell Linux not to recognise certain disks
probably does not belong on this mailing list, though.

> Hope this helps a little.

Thanks for your reply. Personally, I think this kind of setup is not
that common compared to "normal setups". It is therefore probably not
wise to invest too much time into it.

However, I wonder how people do this kind of stuff with SANs: how do
they tell a node being installed to see only a particular disk? I
don't have experience with SANs, but I can imagine that you can
"export" a disk to a node. With a shared SCSI bus, this "export"
happens in the SCSI BIOS, but as soon as Linux is running, all
shared disks are visible again. Maybe a SAN can really shield the
other disks off. Hmmm, I should try the installer (with an
NBD-enabled kernel) on a network block device exported from an NBD
server ;-)

> Bye


> Christian


gpg fingerprint: A41E A399 5160 BAB9 AEF1  58F2 B92A F4AB 9FFB 3707
gpg key id: 9FFB3707

"Those who do not understand Unix are condemned to reinvent it, poorly."
-- Henry Spencer
