
RE: debian-user-digest Digest V2013 #252



I should unsubscribe; still too new to understand you folks!


JAWs



Date: Wed, 6 Mar 2013 19:22:24 +0000
From: debian-user-digest-request@lists.debian.org
Subject: debian-user-digest Digest V2013 #252
To: debian-user-digest@lists.debian.org



--Forwarded Message Attachment--

debian-user-digest Digest				Volume 2013 : Issue 252

Today's Topics:
Re: Iceweasel and chromium stable an [ Sven Joachim <svenjoac@gmx.de> ]
Re: RAID1 all bootable [ Shane Johnson <sdj@rasmussenequipme ]
Raid 5 [ Dick Thomas <xpd259@gmail.com> ]
Re: Raid 5 [ Gary Dale <garydale@rogers.com> ]
Re: Raid 5 [ Adam Wolfe <kadamwolfe@gmail.com> ]
Re: Raid 5 [ Dick Thomas <xpd259@gmail.com> ]
Re: Raid 5 [ Gary Dale <garydale@rogers.com> ]
Re: RAID1 all bootable [ Francesco Pietra <chiendarret@gmail ]
Re: Raid 5 [ Gary Dale <garydale@rogers.com> ]


--Forwarded Message Attachment--
Date: Wed, 6 Mar 2013 19:00:15 +0100
From: svenjoac@gmx.de
Subject: Re: Iceweasel and chromium stable and secure versions
To: debian-user@lists.debian.org

On 2013-03-06 04:39 +0100, Steven Rosenberg wrote:

> On Thu, Feb 28, 2013 at 9:49 AM, Sven Joachim <svenjoac@gmx.de> wrote:
>> On 2013-02-28 18:18 +0100, Henry Jensen wrote:
>>
>>> I noticed that chromium got updated to the newest stable and secure
>>> version 25.0.1364.97. Can we expect that Chromium will receive further
>>> updates in the future during the lifetime of Debian 7.0, perhaps as
>>> part of the Update-Repository (formerly known as volatile)?
>>
>> It is planned to regularly push new upstream versions of Chromium into
>> Debian 7.0, AFAIK as security updates.
>>
>>> And what about Iceweasel. Upstream Firefox 10.x is EOL now.
>>
>> Iceweasel will get security support, but no new major versions. If you
>> want those, look on http://mozilla.debian.net/.
>
>
> That's a nice compromise. I remember Chromium getting mighty old in
> the Lenny days.

Oh, you misremember. ;-) Chromium was never included in Lenny; it only
hit Debian in 2010. The "mighty old" version is in Squeeze, and it has
been abandoned for ~1.5 years (the last security update is from
September 2011).

Looking at the latest build log[1], I doubt that Chromium will be
supportable on i386 in the Wheezy time frame. The memory requirements
for linking it are likely to increase further in the future.

Cheers,
Sven


1. https://buildd.debian.org/status/package.php?p=chromium-browser


--Forwarded Message Attachment--
Date: Wed, 6 Mar 2013 11:02:34 -0700
From: sdj@rasmussenequipment.com
Subject: Re: RAID1 all bootable
To: chiendarret@gmail.com
CC: debian-user@lists.debian.org



<snip>

As far as I can remember, I already posted for this system

root@.....:/home/francesco# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
      487759680 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      191296 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@.....:/home/francesco#

francesco@.....:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
rootfs                938M  185M  705M  21% /
udev                   10M     0   10M   0% /dev
tmpfs                 807M  628K  807M   1% /run
/dev/mapper/vg1-root  938M  185M  705M  21% /
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 1.6G   84K  1.6G   1% /run/shm
/dev/md0              176M   19M  148M  11% /boot
/dev/mapper/vg1-home  395G  284G   91G  76% /home
/dev/mapper/vg1-opt   9.2G  1.5G  7.3G  17% /opt
/dev/mapper/vg1-tmp   2.8G   69M  2.6G   3% /tmp
/dev/mapper/vg1-usr    28G  4.3G   22G  17% /usr
/dev/mapper/vg1-var   9.2G  840M  7.9G  10% /var
francesco@.....:~$


the "deadly command' "grub-install /dev/sdb" áwas run with the system
started as above.

Thanks
francesco pietra
<snip>
Francesco,
The df -h shows us what is mounted, but not whether the drives are
partitioned. Can you do fdisk -l and send us the output of that?
Also, you replied directly to me without the mailing list. I have
included it in the CC so that everyone can share in the knowledge.
Please make sure you always reply to the list. (Reply-all works well
for this.)

Thanks


--
Shane D. Johnson
IT Administrator
Rasmussen Equipment




--Forwarded Message Attachment--
Date: Wed, 6 Mar 2013 18:37:26 +0000
From: xpd259@gmail.com
Subject: Raid 5
To: debian-user@lists.debian.org

What is the best way to set up a RAID 5 array (4 x 2TB drives)?
Should I make RAID 5 for my system and /home,
then RAID 0 or 1 for /boot, or should I buy a 5th drive for
system/boot and install in the standard way?
This is my first time on Debian, so I'm not sure what would be best.


Dick Thomas
About.me http://about.me/dick.thomas
Blog: www.xpd259.co.uk
G+: www.google.com/profiles/xpd259
gpg key: C791809B


--Forwarded Message Attachment--
Date: Wed, 6 Mar 2013 13:43:06 -0500
From: garydale@rogers.com
Subject: Re: Raid 5
To: debian-user@lists.debian.org
CC: debian-user@lists.debian.org

On 06/03/13 01:37 PM, Dick Thomas wrote:
> What is the best way to setup a raid 5 array (4* 2TB drives)
> should I make raid 5 for my system and /home
> then raid 0 or 1 for the boot, or should I buy a 5th drive for
> system/boot and install in the standard way?
> as this is my 1st time on debian and not sure what would be best
>
Make one large RAID5 array, then partition the RAID array as you like
(/, /home, swap, etc.). This is bootable using grub, so there is no
need for a separate /boot.
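
A rough sketch of the same thing done by hand with mdadm; the device
names /dev/sda../dev/sdd and the use of whole disks are assumptions,
not part of the advice above:

# create one RAID 5 array across the four drives (names assumed)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# watch the initial sync and confirm the array is healthy
cat /proc/mdstat
mdadm --detail /dev/md0

In practice the Debian installer's partitioner does the equivalent for
you; the point is simply that /dev/md0 then behaves like one big disk.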


--Forwarded Message Attachment--
Date: Wed, 6 Mar 2013 13:47:26 -0500
From: kadamwolfe@gmail.com
Subject: Re: Raid 5
To: debian-user@lists.debian.org

I had one h**l of a time doing this over the weekend.

What finally worked for me was creating LOGICAL partitions on each drive
and setting them as used for RAID volume devices.
This gave me /dev/sda5, /dev/sdb5, etc.
When grub did its install, it added all the /dev/sda1 etc. partitions
and rebooted fine.

When I tried primary partitions, grub would just fail and I'd have to
restart the whole install process.

NOTE: when booting from the install CD I had to [tab] the 'install'
menu entry and add "dmraid=true".
Then after the initial install, it would still fail to boot.
Back to the install CD, choose 'rescue', [tab] and add 'dmraid=true'
again. Then get thee to a shell and 'grub-install --recheck /dev/sdaX'
for each partition.
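
A condensed sketch of that rescue round-trip, assuming the rescue menu
mounts (or chroots into) the installed system for you; note that
grub-install is more commonly pointed at whole disks than at
individual partitions:

# at the installer boot menu: highlight 'rescue', press Tab,
# and append dmraid=true to the kernel line

# then, from the rescue shell on the installed system:
grub-install --recheck /dev/sda   # repeat for each disk
update-grub                       # regenerate /boot/grub/grub.cfg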



On 03/06/2013 01:37 PM, Dick Thomas wrote:
> What is the best way to setup a raid 5 array (4* 2TB drives)
> should I make raid 5 for my system and /home
> then raid 0 or 1 for the boot, or should I buy a 5th drive for
> system/boot and install in the standard way?
> as this is my 1st time on debian and not sure what would be best
>
>
> Dick Thomas
> About.me http://about.me/dick.thomas
> Blog: www.xpd259.co.uk
> G+: www.google.com/profiles/xpd259
> gpg key: C791809B
>
>


--Forwarded Message Attachment--
Date: Wed, 6 Mar 2013 18:55:20 +0000
From: xpd259@gmail.com
Subject: Re: Raid 5
To: kadamwolfe@gmail.com
CC: debian-user@lists.debian.org

About.me http://about.me/dick.thomas
Blog: www.xpd259.co.uk
G+: www.google.com/profiles/xpd259
gpg key: C791809B


On 6 March 2013 18:47, Adam Wolfe <kadamwolfe@gmail.com> wrote:
> I had one h**l of a time doing this over the weekend.
>
> What finally worked for me was creating LOGICAL partitions on each drive and
> setting them as used for RAID volume devices.
> This gave me /dev/sda5, /dev/sdb5 etc etc.
> When grub did it's install, it added all the /dev/sda1 etc partitions and
> rebooted fine.
>
> When I tried primary partitions, grub would just fail and I'd have to
> restart the whole install process over.
>
> NOTE: when booting from the install cd i had to [tab] the 'install' menu
> entry and add "dmraid=true".
> Then after the initial install, it would still fail to boot.
> Back to the install cd, choose 'rescue', [tab] and add 'dmraid=true' again.
> Then get thee to a shell and 'grub-install --recheck /dev/sdaX' for each
> partition.
>
>
>
>
> On 03/06/2013 01:37 PM, Dick Thomas wrote:
>>
>> What is the best way to setup a raid 5 array (4* 2TB drives)
>> should I make raid 5 for my system and /home
>> then raid 0 or 1 for the boot, or should I buy a 5th drive for
>> system/boot and install in the standard way?
>> as this is my 1st time on debian and not sure what would be best
>>
>>
>> Dick Thomas
>> About.me http://about.me/dick.thomas
>> Blog: www.xpd259.co.uk
>> G+: www.google.com/profiles/xpd259
>> gpg key: C791809B
>>
>>
>
>
> --
> To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org with a subject
> of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
> Archive: http://lists.debian.org/51378F3E.20206@gmail.com
>


I've tried installing from my "hardware" motherboard RAID and that
just fails, even with dmraid=true.
At the moment I've got:
RAID 0 for /boot

RAID 5, encrypted, then LVM on top:
/lvm/swap
/lvm/root
//var/log

but I wasn't sure if that was the way to do it,
or am I just confusing matters more :)
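
For reference, a layout like this (RAID 5, then LUKS, then LVM on top)
is put together roughly as follows when done by hand; every device
name, volume name and size below is a made-up placeholder, and the
installer's guided encrypted LVM setup does the same job:

mdadm --create /dev/md1 --level=5 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2  # data partitions, names assumed

cryptsetup luksFormat /dev/md1               # encrypt the whole array
cryptsetup luksOpen /dev/md1 md1_crypt

pvcreate /dev/mapper/md1_crypt               # LVM on top of the crypto layer
vgcreate vg0 /dev/mapper/md1_crypt
lvcreate -L 8G  -n swap   vg0
lvcreate -L 20G -n root   vg0
lvcreate -L 10G -n varlog vg0

/boot has to live outside the encrypted LVM; RAID 1 is the usual choice
for it rather than RAID 0, since a RAID 0 /boot is lost as soon as one
drive fails.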


--Forwarded Message Attachment--
Date: Wed, 6 Mar 2013 13:59:49 -0500
From: garydale@rogers.com
Subject: Re: Raid 5
To: debian-user@lists.debian.org
CC: debian-user@lists.debian.org

On 06/03/13 01:47 PM, Adam Wolfe wrote:
> I had one h**l of a time doing this over the weekend.
>
> What finally worked for me was creating LOGICAL partitions on each
> drive and setting them as used for RAID volume devices.
> This gave me /dev/sda5, /dev/sdb5 etc etc.
> When grub did it's install, it added all the /dev/sda1 etc partitions
> and rebooted fine.
>
> When I tried primary partitions, grub would just fail and I'd have to
> restart the whole install process over.
>
> NOTE: when booting from the install cd i had to [tab] the 'install'
> menu entry and add "dmraid=true".
> Then after the initial install, it would still fail to boot.
> Back to the install cd, choose 'rescue', [tab] and add 'dmraid=true'
> again. Then get thee to a shell and 'grub-install --recheck
> /dev/sdaX' for each partition.
>
>
>
> On 03/06/2013 01:37 PM, Dick Thomas wrote:
>> What is the best way to setup a raid 5 array (4* 2TB drives)
>> should I make raid 5 for my system and /home
>> then raid 0 or 1 for the boot, or should I buy a 5th drive for
>> system/boot and install in the standard way?
>> as this is my 1st time on debian and not sure what would be best
>>

Sorry but this isn't difficult (although it may affect top-posters more
than bottom posters :) ). The Debian installer allows you to create a
whole-disk RAID array then partition it. You have a single RAID 5 array
with some number of primary partitions (up to 4 - I use 2, / and /home,
with swap files rather than swap partitions but traditionalists may
prefer a swap partition). Grub treats the array like a disk drive and
has no problem booting from it.
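
The swap-file approach looks roughly like this; the 4 GB size and the
/swapfile path are just placeholders:

dd if=/dev/zero of=/swapfile bs=1M count=4096   # size is an arbitrary example
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab # make it permanent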

One issue you may have with Squeeze (I recommend Wheezy instead) is that
the UUID for / in grub.cfg may be wrong. Simply replace it with the
correct one (probably for /dev/md0p1) and everything will work. You will
have to repeat this anytime update-grub is run. This is not an issue
with Wheezy.
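
A sketch of that fix, assuming the root filesystem really is on
/dev/md0p1:

blkid /dev/md0p1              # note the UUID= value reported here
grep UUID /boot/grub/grub.cfg # see which UUID grub is currently using
# edit /boot/grub/grub.cfg and replace the wrong UUID (in the
# 'search --fs-uuid' and 'root=UUID=' lines) with the one from blkid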


--Forwarded Message Attachment--
Date: Wed, 6 Mar 2013 20:03:12 +0100
From: chiendarret@gmail.com
Subject: Re: RAID1 all bootable
To: sdj@rasmussenequipment.com; lsorense@csclub.uwaterloo.ca; debian-amd64@lists.debian.org; debian-user@lists.debian.org

Sorry, I forgot both the list and the appropriate output:

root@....:/home/francesco# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000f1911

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 385023 191488 fd Linux raid autodetect
/dev/sda2 385024 976166911 487890944 fd Linux raid autodetect

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0000cca6

Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 385023 191488 fd Linux raid autodetect
/dev/sdb2 385024 976166911 487890944 fd Linux raid autodetect

Disk /dev/md0: 195 MB, 195887104 bytes
2 heads, 4 sectors/track, 47824 cylinders, total 382592 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 499.5 GB, 499465912320 bytes
2 heads, 4 sectors/track, 121939920 cylinders, total 975519360 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/mapper/vg1-root: 998 MB, 998244352 bytes
255 heads, 63 sectors/track, 121 cylinders, total 1949696 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg1-root doesn't contain a valid partition table

Disk /dev/mapper/vg1-swap: 15.0 GB, 14998831104 bytes
255 heads, 63 sectors/track, 1823 cylinders, total 29294592 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg1-swap doesn't contain a valid partition table

Disk /dev/mapper/vg1-usr: 30.0 GB, 29997662208 bytes
255 heads, 63 sectors/track, 3647 cylinders, total 58589184 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg1-usr doesn't contain a valid partition table

Disk /dev/mapper/vg1-opt: 9999 MB, 9999220736 bytes
255 heads, 63 sectors/track, 1215 cylinders, total 19529728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg1-opt doesn't contain a valid partition table

Disk /dev/mapper/vg1-var: 9999 MB, 9999220736 bytes
255 heads, 63 sectors/track, 1215 cylinders, total 19529728 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg1-var doesn't contain a valid partition table

Disk /dev/mapper/vg1-tmp: 2998 MB, 2998927360 bytes
255 heads, 63 sectors/track, 364 cylinders, total 5857280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg1-tmp doesn't contain a valid partition table

Disk /dev/mapper/vg1-home: 430.0 GB, 430000046080 bytes
255 heads, 63 sectors/track, 52277 cylinders, total 839843840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg1-home doesn't contain a valid partition table
root@....:/home/francesco#

root@.....:/home/francesco# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
487759680 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
191296 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@.....:/home/francesco#

francesco@.....:~$ df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 938M 185M 705M 21% /
udev 10M 0 10M 0% /dev
tmpfs 807M 628K 807M 1% /run
/dev/mapper/vg1-root 938M 185M 705M 21% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.6G 84K 1.6G 1% /run/shm
/dev/md0 176M 19M 148M 11% /boot
/dev/mapper/vg1-home 395G 284G 91G 76% /home
/dev/mapper/vg1-opt 9.2G 1.5G 7.3G 17% /opt
/dev/mapper/vg1-tmp 2.8G 69M 2.6G 3% /tmp
/dev/mapper/vg1-usr 28G 4.3G 22G 17% /usr
/dev/mapper/vg1-var 9.2G 840M 7.9G 10% /var
francesco@.....:~$

This is the current status.
"grub-install /dev/sdb" was run in that situation (deriving from an
install with the amd64 wheezy B$ installer downloaded on Feb 1, 2013).
That installation ended with:

grub-install /dev/sda
update-grub

Thanks
francesco pietra

On Wed, Mar 6, 2013 at 7:02 PM, Shane Johnson
<sdj@rasmussenequipment.com> wrote:
>
>
> <snip>
>
>> As far as I can remember, I already posted for this system
>>
>> root@.....:/home/francesco# cat /proc/mdstat
>> Personalities : [raid1]
>> md1 : active raid1 sda2[0] sdb2[1]
>> 487759680 blocks super 1.2 [2/2] [UU]
>>
>> md0 : active raid1 sda1[0] sdb1[1]
>> 191296 blocks super 1.2 [2/2] [UU]
>>
>> unused devices: <none>
>> root@.....:/home/francesco#
>>
>> francesco@.....:~$ df -h
>> Filesystem Size Used Avail Use% Mounted on
>> rootfs 938M 185M 705M 21% /
>> udev 10M 0 10M 0% /dev
>> tmpfs 807M 628K 807M 1% /run
>> /dev/mapper/vg1-root 938M 185M 705M 21% /
>> tmpfs 5.0M 0 5.0M 0% /run/lock
>> tmpfs 1.6G 84K 1.6G 1% /run/shm
>> /dev/md0 176M 19M 148M 11% /boot
>> /dev/mapper/vg1-home 395G 284G 91G 76% /home
>> /dev/mapper/vg1-opt 9.2G 1.5G 7.3G 17% /opt
>> /dev/mapper/vg1-tmp 2.8G 69M 2.6G 3% /tmp
>> /dev/mapper/vg1-usr 28G 4.3G 22G 17% /usr
>> /dev/mapper/vg1-var 9.2G 840M 7.9G 10% /var
>> francesco@.....:~$
>>
>>
>> the "deadly command' "grub-install /dev/sdb" was run with the system
>> started as above.
>>
>> Thanks
>> francesco pietra
>> <snip>
>
> Francesco,
> The df -h shows us what is mounted but not if the drives are partitioned or
> not. Can you do fdisk -l and send us the output of that?
> Also, you replied directly to me without the mailing list. I have included
> it in the CC so that everyone can share in the knowledge. Please make sure
> you always reply to the list.(Reply-all works real good for this.)
>
> Thanks
>
>
> --
> Shane D. Johnson
> IT Administrator
> Rasmussen Equipment
>
>


--Forwarded Message Attachment--
Date: Wed, 6 Mar 2013 14:06:23 -0500
From: garydale@rogers.com
Subject: Re: Raid 5
To: debian-user@lists.debian.org
CC: debian-user@lists.debian.org

On 06/03/13 01:55 PM, Dick Thomas wrote:
> About.me http://about.me/dick.thomas
> Blog: www.xpd259.co.uk
> G+: www.google.com/profiles/xpd259
> gpg key: C791809B
>
>
> On 6 March 2013 18:47, Adam Wolfe<kadamwolfe@gmail.com> wrote:
>> I had one h**l of a time doing this over the weekend.
>>
>> What finally worked for me was creating LOGICAL partitions on each drive and
>> setting them as used for RAID volume devices.
>> This gave me /dev/sda5, /dev/sdb5 etc etc.
>> When grub did it's install, it added all the /dev/sda1 etc partitions and
>> rebooted fine.
>>
>> When I tried primary partitions, grub would just fail and I'd have to
>> restart the whole install process over.
>>
>> NOTE: when booting from the install cd i had to [tab] the 'install' menu
>> entry and add "dmraid=true".
>> Then after the initial install, it would still fail to boot.
>> Back to the install cd, choose 'rescue', [tab] and add 'dmraid=true' again.
>> Then get thee to a shell and 'grub-install --recheck /dev/sdaX' for each
>> partition.
>>
>>
>>
>>
>> On 03/06/2013 01:37 PM, Dick Thomas wrote:
>>> What is the best way to setup a raid 5 array (4* 2TB drives)
>>> should I make raid 5 for my system and /home
>>> then raid 0 or 1 for the boot, or should I buy a 5th drive for
>>> system/boot and install in the standard way?
>>> as this is my 1st time on debian and not sure what would be best
>>>
>>>
>>> Dick Thomas
>>> About.me http://about.me/dick.thomas
>>> Blog: www.xpd259.co.uk
>>> G+: www.google.com/profiles/xpd259
>>> gpg key: C791809B
>>>
>>>
>>
>> --
>> To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org with a subject
>> of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
>> Archive: http://lists.debian.org/51378F3E.20206@gmail.com
>>
>
> I've tried installing from my "hardware" motherboard raid and that
> just fails even with dmraid=true
> atm i've got a
> raid 0 /boot
>
> raid 5 encrypted then /lvm
> /lvm/swap
> /lvm/root
> //var/log
>
> but wasn't sure if that was the way to do it
> or am I just confusing matters more :)
>
Do NOT use the hardware RAID. It's just a crippled form of software RAID.

Ignore the advice from Adam Wolfe - it's nonsense. Use the Debian
installer (advanced mode) to create the RAID 5 array on drives with just
one partition (whole disk) as /dev/md0. Then partition the RAID 5 array
into / and /home. Install and reboot.
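
Done by hand, the partitioning step would look something like the
sketch below; the sizes and the ext4 choice are only placeholders, and
the installer's partitioner does this for you:

parted /dev/md0 mklabel msdos
parted /dev/md0 mkpart primary ext4 1MiB 60GiB   # /
parted /dev/md0 mkpart primary ext4 60GiB 100%   # /home
mkfs.ext4 /dev/md0p1
mkfs.ext4 /dev/md0p2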

If you are using Wheezy this will work directly. If you are using
Squeeze then you may need to fix the UUID in /boot/grub.cfg.

I've done this successfully several times. It just works.
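
To keep the machine bootable if the BIOS falls back to a different
drive, grub is normally installed to the MBR of every array member; a
generic sketch, assuming four members /dev/sda../dev/sdd:

for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    grub-install "$d"
done
update-grub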
