
Re: Squeeze installation fdisk bug



On the 01/02/2011 14:13, Siju George wrote:
> Hi,
> 
> I installed Debian Squeeze on a server with 2 Disks on RAID 1
> The second disk failed and I was trying to replace it with a new one.
> And I found this in the partition table
> 
> ================================
> root@vmsrv:~# fdisk -l /dev/sda
> 
> Disk /dev/sda: 500.1 GB, 500107862016 bytes
> 255 heads, 63 sectors/track, 60801 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disk identifier: 0x0009e3c2
> 
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sda1               1          37      291840   fd  Linux raid autodetect
> Partition 1 does not end on cylinder boundary.
> /dev/sda2              37         159      976896   fd  Linux raid autodetect
> Partition 2 does not end on cylinder boundary.
> /dev/sda3             159        2225    16601088   fd  Linux raid autodetect
> /dev/sda4            2225       60802   470515712   fd  Linux raid autodetect
> root@vmsrv:~#
> ==========================================
> 
> The disk has only 60801 cylinders but the ending cylinder is 60802 for
> the 4th partition :-(
> 
> The end of a partition and the beginning of the following one are identical.
> 
> If I try to create the same layout on another identical disk using fdisk,
> I get a "Value out of range" error.
> 
> Finally I did
> 
> #sfdisk -d /dev/sda | sfdisk -f /dev/sdb
> 
> and got the hard disk added to raid.
> 
> Is this partitioning dangerous?
> 
> How should I rectify it?
> 
> Thanks
> 
> --Siju
> 
> 
> 

Hello, fdisk always gives me this kind of warning on RAID1 disks too; use
"parted -l" instead.

Regarding "sfdisk -d", if you ran this command with a filesystem already
on the source drive you'll run into problems due to the filesystem
boundaries being misplaced. You would be better of starting again with
unformatted partitions, then format the md* devices.
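The dump-and-restore step itself can be tried out safely on plain image
files before touching real disks; this sketch assumes a modern util-linux
sfdisk (the "label:"/"size=" script syntax), and the file names are just
stand-ins for /dev/sda and /dev/sdb:

```shell
# Two blank 50 MiB "disks" as regular files (no root needed)
truncate -s 50M src.img dst.img

# Partition the source: two Linux raid autodetect (0xfd) partitions
printf '%s\n' 'label: dos' 'size=10MiB, type=fd' 'type=fd' | sfdisk src.img

# Clone the table: dump the source layout and feed it to the target,
# exactly as in "sfdisk -d /dev/sda | sfdisk /dev/sdb"
sfdisk -d src.img | sfdisk dst.img

# Show the start/size pairs of both tables; they should be identical
sfdisk -d src.img | grep -o 'start=[^,]*, size=[^,]*'
sfdisk -d dst.img | grep -o 'start=[^,]*, size=[^,]*'
```

On real hardware, always double-check which device is source and which is
target before piping: reversing them destroys the surviving disk's table.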

Alternatively, you can correct the problem by running "e2fsck -cc" on the
affected RAID devices and then "resize2fs", and you should be fine. It's a
time-consuming process on large partitions, though. It's supposed to be
harmless for the data, but of course you should check your backups first.
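A rough sketch of that check-then-resize sequence, demonstrated on a
throwaway image file so it runs without root (on the real system the
target would be the affected md device, e.g. /dev/md2 -- a name assumed
here for illustration; the -cc badblocks scan only makes sense on real
disks and is very slow):

```shell
# Small scratch filesystem in a regular file (no root needed)
truncate -s 32M fs.img
mke2fs -q -F -t ext3 fs.img

# Force a full consistency check; on a real array you would add -cc
# to also run the non-destructive read-write badblocks test
e2fsck -f -p fs.img

# Resize the filesystem to fill its container (here the file; on the
# server, the md device whose size the partition table defines)
resize2fs fs.img
```

Run e2fsck only on unmounted (or read-only) filesystems, and never run it
on the underlying /dev/sdX partitions of an assembled array -- always on
the md device itself.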

