
Re: cfdisk vs fdisk & speaking of Western Digital drives...




On Saturday, Jan 3, 2004, at 14:52 America/Denver, Andy Firman wrote:


> Hello.  I am not a hard drive expert and need some help in
> understanding a few things.

Neither are most of us, but we'll try.  ;-)

> First, what is the difference between fdisk and cfdisk,
> other than cfdisk being curses based?

fdisk, in my experience, gives you more things you can do, but for general partitioning of drives for use as Linux partitions there's virtually zero difference other than the user interface.
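If you want to convince yourself of that, you can exercise the tools against a scratch image file instead of a real disk. The sketch below uses sfdisk only because it can be scripted non-interactively; the image path and sizes are made up for the demo, and the point is that fdisk and cfdisk both read and write the same on-disk partition table:

```shell
# Create an 8 MB scratch image -- no real disk is touched.
dd if=/dev/zero of=/tmp/demo.img bs=1M count=8 2>/dev/null

# Script one Linux (type 83) partition onto it with sfdisk;
# cfdisk or fdisk would write the identical table interactively.
echo 'start=2048, type=83' | sfdisk /tmp/demo.img >/dev/null

# Either front end reads the result the same way:
fdisk -l /tmp/demo.img
```

The partition table on disk is identical either way; only the user interface in front of it differs.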

> Second, I have 2 Western Digital drives.
> Both model WD400BB but they were manufactured about
> 6 months apart.  I just bought the second one as I want
> to try out Lucas' new Debian software Root on Raid howto.

Sounds like fun.

> I partitioned both disks exactly the same using cfdisk
> during the install.  It seems that one drive has 4863 cylinders
> and the other has 77545 cylinders.  Why would Western Digital
> make the drives different?  Or did I do something wrong
> with partitioning/formatting?

Unless the hardware design itself changed, you usually don't see this in identically numbered disk models -- or not that I've ever run into, anyway. But WD may have changed the firmware and/or drive-control hardware. Nowadays the CHS information doesn't even really have to match what the disk actually is: the firmware guys can make that stuff report "virtually" whatever is wanted by the marketing department and/or the folks who do compatibility testing with various motherboard chipsets. They can "fix" compatibility issues by just having the drive look like something other than what it really is, if they desire to.
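One way to sanity-check that idea: multiply out the reported geometries and see whether both drives come to roughly the same capacity. The cylinder counts below are the ones from Andy's mail; the head and sector-per-track figures are my guesses at two common translations (255/63 for a BIOS LBA-assisted geometry, 16/63 for a more native reporting), so treat the arithmetic as a sketch:

```shell
# Capacity = cylinders * heads * sectors/track * 512 bytes/sector.
# Drive reporting 4863 cylinders, assuming a 255-head/63-sector translation:
size_a=$((4863 * 255 * 63 * 512))
# Drive reporting 77545 cylinders, assuming a 16-head/63-sector translation:
size_b=$((77545 * 16 * 63 * 512))
echo "drive A: $size_a bytes"
echo "drive B: $size_b bytes"
```

If my assumed head/sector counts are right, both work out to roughly 40 GB -- the WD400BB's rated size -- which would mean the disks are the same capacity and only the geometry translation differs.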

> Do the physical drives and partitions have to be EXACTLY the
> same for RAID 1 to work properly or will the following
> layouts of my drives be sufficient?

No, the kernel handles that for you. If you think about it, some people run software RAID across disks from two completely different manufacturers. I have seen RAID1 setups built from two different-sized drives before and wondered what happens when you fill the smaller one... hopefully the kernel is smart enough to report "disk full" when that happens. Without actually trying it out here, I'd assume it's failsafe enough to do that, but the kernel RAID docs would hopefully say for certain what the limitations are.
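For what it's worth, a raidtab along these lines is all the raidtools setup needs to pair the two partitions. The device names are taken from Andy's layout (first partition on each drive); this is a sketch from memory, not a tested config, so double-check the exact directives against the Software-RAID HOWTO before running mkraid on anything:

```
# /etc/raidtab sketch -- two mirrored partitions, no spares
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    nr-spare-disks        0
    persistent-superblock 1
    device                /dev/hda1
    raid-disk             0
    device                /dev/hdd1
    raid-disk             1
```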

(Example of wildly different disks doing software RAID: Russell Coker recently posted some bonnie++ tests he ran on a RAID1 array consisting of an internal disk and an external USB 2.0 disk, showing the speed hit from reading off the "wrong" disk -- the USB disk was much, much slower, but the kernel would still attempt to read from it even when the internal disk was idle. Interesting data. It's in the list archives here, I'm sure.)

I read most of the other comments and agree with them. This looks like a BIOS problem. Are both disks set up the same in the BIOS for LBA or Standard or what-have-you?

> Here is the info from cfdisk and fdisk on both drives:
>
> cfdisk /dev/hda:
>
> cfdisk /dev/hdd:

Another thought... I note that the second disk is a slave on the second IDE chain. Is there a CD-ROM drive somewhere, perhaps on /dev/hdc?

There are some interactions between masters and slaves on IDE where the CD-ROM may force the second IDE chain to slower speeds, etc. I've not seen anything documented saying it would prevent the use of LBA or similar, but perhaps if the second disk were on the primary IDE chain it would be detected differently by that particular BIOS/motherboard combo?

Of course, then you lose the "benefit" of having the RAID1 split across two IDE busses and possibly across two separate motherboard busses from the IDE chipset(s).

Hope those are helpful ideas.  Just brainstorming here.
--
Nate Duehr, nate@natetech.com


