
[SOLVED] Re: Partitioning And Formatting A Large Disk (2086.09GB)



I've solved my problem -- sort of.

Michael S. Peek wrote:
Hello fellow Debian aficionados,

I'm having a hard time trying to figure out how to partition and format a large disk.

I have a 3ware card and an array defined thusly:
# tw_cli /c4/u0 show
Unit  UnitType  Status  %Cmpl  Port  Stripe  Size(GB)  Blocks
-----------------------------------------------------------------------
u0    RAID-5    OK      -      -     64K     2086.09   4374845440
When I went to try to partition the disk with fdisk, it said:
# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
You must set cylinders.
You can do this from the extra functions menu.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Hmm.... Cylinders.

I've never had to calculate geometry before, so I gave it a try.

The first thing I did was (n00b alert) count the bytes on /dev/sdb. (Does tw_cli consider 1GB to be 1000MB or 1024MB? What about fdisk? If I count it myself, I won't have to care.) It turns out /dev/sdb is 2239920865280 bytes. So, if I use the defaults of heads=255, sectors/track=63, and sector size=512, the number of cylinders should be 272321 -- well within the 1-1048576 range. (That comes out to a total of 2239916474880 bytes.)
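(Incidentally, the same arithmetic can be done from the shell.  This is
just a sanity check, assuming blockdev from util-linux is around; the
byte count should match whatever you counted by hand:)

# total size in bytes, straight from the kernel
blockdev --getsize64 /dev/sdb                    # 2239920865280 here
# cylinders at the fdisk defaults: 255 heads, 63 sectors/track, 512-byte sectors
echo $(( 2239920865280 / (255 * 63 * 512) ))     # 272321
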
Command (m for help): x
Expert command (m for help): ?
Command action
   b   move beginning of data in a partition
   c   change number of cylinders
   d   print the raw data in the partition table
   e   list extended partitions
   f   fix partition order
   g   create an IRIX (SGI) partition table
   h   change number of heads
   m   print this menu
   p   print the partition table
   q   quit without saving changes
   r   return to main menu
   s   change number of sectors/track
   v   verify the partition table
   w   write table to disk and exit
Expert command (m for help): s
Number of sectors (1-63, default 63):  Using default value 63
Warning: setting sector offset for DOS compatiblity
Expert command (m for help): h
Number of heads (1-256, default 255):  Using default value 255
Expert command (m for help): c
Number of cylinders (1-1048576): 272321
The number of cylinders for this disk is set to 272321.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)
Expert command (m for help): r
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4972, default 1):
Uh... What?
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-4972, default 4972): Using default value 4972

I thought I just set the number of cylinders to 272321. Where is 4972 coming from?

Is there anyone more experienced than I who can clue me in?

Michael

I have a 3ware card that creates arrays for me that show up as
/dev/sd(b|c|d|...).  My problem is that even though ext3 supports up to
32TB, fdisk, cfdisk, and sfdisk can't partition even my existing 2TB
arrays -- sucks on toast.  Why, I ask myself, would I want to have a
nice, snazzy 3ware card that takes care of all my RAID needs in
hardware, and then have to allocate tiny arrays on it only to glue them
back together in software via mdadm or LVM?
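
(As best I can tell, the underlying reason is that a DOS/MBR partition
table stores partition sizes in 32-bit sector fields, so with 512-byte
sectors a single partition tops out at 2 TiB -- and my arrays are just
over that line.  A quick back-of-the-envelope check; the 4972-cylinder
figure fdisk offered me looks like the array size wrapped around 2^32:)

# 32-bit sector count x 512-byte sectors = the MBR ceiling
echo $(( 4294967296 * 512 ))                        # 2199023255552 bytes (2 TiB)
# my array is 2239920865280 bytes, so it simply does not fit in one partition
# and 4972 is apparently the size modulo 2^32, expressed in 255x63 cylinders
echo $(( (4374845440 - 4294967296) / (255 * 63) ))  # 4972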

Then it hit me: if all I'm going to do is put one partition on each
array, why bother partitioning it at all?  I'll just mkfs on
/dev/sdb and /dev/sdc and mount them as such.

Behold:
bkup2:~# mkfs.ext3 /dev/sdb
mke2fs 1.40-WIP (14-Nov-2006)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
273432576 inodes, 546855680 blocks
27342784 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
16689 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

bkup2:~# tune2fs -c 0 -i 0 /dev/sdb
tune2fs 1.40-WIP (14-Nov-2006)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds

bkup2:~# mkfs.ext3 /dev/sdc
mke2fs 1.40-WIP (14-Nov-2006)
/dev/sdc is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
273432576 inodes, 546855680 blocks
27342784 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
16689 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

bkup2:~# tune2fs -c 0 -i 0 /dev/sdc
tune2fs 1.40-WIP (14-Nov-2006)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds

bkup2:~# mkdir -p /export/raid/0 /export/raid/1
bkup2:~# mount /dev/sdb /export/raid/0
bkup2:~# mount /dev/sdc /export/raid/1
bkup2:~# df --si
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sda1              313G   984M   296G   1% /
tmpfs                  531M      0   531M   0% /lib/init/rw
udev                    11M    62k    11M   1% /dev
tmpfs                  531M      0   531M   0% /dev/shm
/dev/sdb               2.3T   208M   2.1T   1% /export/raid/0
/dev/sdc               2.3T   208M   2.1T   1% /export/raid/1
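
(To have these come back on their own after a reboot, something like
the following /etc/fstab entries ought to do it -- untested here; the
trailing 0 keeps the boot-time fsck out of the picture, in line with
the tune2fs settings above:)

/dev/sdb   /export/raid/0   ext3   defaults   0   0
/dev/sdc   /export/raid/1   ext3   defaults   0   0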

I'm wondering just how unwise it is for me to turn off filesystem
checking via tune2fs.  On the one hand, the 3ware card takes care of
managing the health of the drives, but on the other hand drive health
and filesystem health are two separate (albeit related) things.  But it
takes about 3.5 hours to do an fsck on just one of the 2TB arrays on
this machine.  It would take all day, literally, if I had to sit through
checking both arrays.  I think instead I'll arrange for a cron job that
unmounts and fscks them overnight on a semi-regular basis.
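
Something along these lines is what I have in mind -- a rough sketch
only, with the device, mount point, and schedule as placeholders, and
assuming nothing keeps the filesystem busy overnight:

#!/bin/sh
# hypothetical /usr/local/sbin/fsck-raid.sh: offline-check one array, then remount it
DEV=/dev/sdb
MNT=/export/raid/0
umount "$MNT" || exit 1          # give up quietly if the filesystem is in use
fsck.ext3 -f -p "$DEV"           # -f forces a full check, -p fixes only what is safe
mount "$DEV" "$MNT"

Kicked off from root's crontab at, say, 0 2 * * 0, that would check one
array in the small hours of Sunday morning.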

Michael


