
Re: LSI MegaRAID SAS 9240-4i hangs system at boot



A situation update: I mounted the mobo with the CPU and RAM, and attached
the PSU, the OS SATA disk, the LSI and the expander as well as the
graphics card. There are no disks attached to the expander because I put
them back into the old NAS and am backing up the data from the 1.5 TB
disks to it.

Then I installed Debian Squeeze AMD64 without problems. I don't get the
over-current error messages anymore :-)
But it still hangs at the same point during boot as before.

I removed the LSI and installed the bpo (backports) kernel. Then I
mounted the LSI again and it stops again at the same place.

I tried the BIOS settings you described earlier. That didn't help either.

So I wanted to update the BIOS. I created a FreeDOS usb stick and put
the BIOS update files onto it. I got to the DOS prompt and ran the
command to install the BIOS (ami.bat ROM.FILE). The prompt was blocked
for some time (about 5-10 minutes or even more), and then a message was
shown saying the file couldn't be found.
The whole directory I had put the BIOS update file into was empty, or
had even been deleted completely (I can't remember which).
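
Before I try again I'll double-check from Linux that the update files
really made it onto the stick and are readable, something along these
lines (the device name is just an example; the stick may show up as a
different /dev node):

  mount /dev/sdX1 /mnt    # FAT partition of the FreeDOS stick
  ls -l /mnt              # ami.bat and the ROM file should be listed
  md5sum /mnt/*           # make sure they read back without errors
  umount /mnt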

I'll try it again afterwards; maybe the Supermicro doesn't like my
FreeDOS usb stick. In that case I'll create the usb stick with the
Windows program Supermicro proposed [1].

If this doesn't help I'll contact LSI, and if they want me to update the
BIOS I'll ask my dealer again to do it. They'll probably run into the
same problems and have to send the mobo to Supermicro, which will take a
month until I have it back :-/


[1]
http://www.softpedia.com/get/System/Boot-Manager-Disk/BootFlashDOS.shtml


On Fri, 08 Jun 2012 18:38:24 -0500
Stan Hoeppner <stan@hardwarefreak.com> wrote:

(...)

> I always do this when I build the system so I don't have to mess with
> it when I need to install more HBAs/cards later.  It's a 10 minute
> operation for me so it's better done up front.  In your case I
> understand the desire to wait until necessary.  However, the better
> airflow alone makes it worth doing.  Especially given that the
> heatsink on the 9240 needs good airflow.  If it runs hot it might act
> goofy, such as slow data transfer speeds, lockups, etc.

Thanks again very much.
The air flow / cooling argument is very convincing. I hadn't thought
about that.

To mount the expander I'll probably have a month available until
the mobo is back ;-)


> > Yes, and the fact that I didn't have any problems with the Asus
> > board. I could use LSI RAID1 to install Debian (couldn't boot,
> > probably because the option ROM of the Asus board was
> > disabled). I could also use the JBOD drives to set up a linux RAID.
> > But I didn't mention before that the throughput was very low (100
> > MB/s at the beginning, and after some secs/min it went down to ~5
> > MB/s) when I copied recordings from a directly attached WD green 2
> > TB SATA disk to the linux RAID5 containing 4 JBOD drives attached
> > to the expander and the LSI.
> > 
> > I hope this was a problem I caused and not the hardware :-/
> 
> Too early to tell.  You were probably copying through a Gnome/KDE
> desktop. It could have been other stuff slowing it down, or it could
> have been something to do with the Green drive.  They are not known
> for high performance, and people have had lots of problems with them.

Probably the green drives.
I don't have a desktop environment installed on the server; the copy was
done with `rsync -Pha`.
But it could also be because I split the RAM of the running server to
have some for the new one. That's why the old Asus server now has only
2 GB RAM and the other 2 GB stick is mounted in the Supermicro (once the
disks are set up I'd like to put some more in).
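
To rule out rsync and the source disk, next time I could also test the
sequential write speed of the array directly, something like this (the
mount point is just an example):

  # write 4 GiB to the array and include the flush to disk in the timing
  dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=4096 conv=fdatasync
  rm /mnt/raid/ddtest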


(...)

> > Exactly, and the Asus doesn't. So if you'd told me to get another
> > mobo, this is an option I'd have liked to have :-)
> > 
> > Another option I was thinking of was using the Asus board for the
> > new server and the Supermicro for my new desktop, and not the
> > other way around as I had planned.
> 
> That's entirely up to you.  I have no advice here.
> 
> Which Asus board is it again?

It was the P7P55D Premium.

The only two problems I have with this board are that I'd have to find
the right BIOS settings to enable the LSI's boot-time setup utility (or
whatever it's called exactly) where one can set up the disks as JBOD /
HW RAID.

And that it doesn't have any chassis LAN LED connectors :-o
But this is absolutely not important...


(...)

> > Btw. I saw that the JBOD devices which are seen by Debian from the
> > LSI are e.g. /dev/sda1, /dev/sdb1. When I partition them I get
> > something like /dev/sda1.1, /dev/sda1.2, /dev/sdb1.1, /dev/sdb1.2
> > (I don't remember exactly if it's only a number behind the point,
> > because I think there was a prefix of one or two characters before
> > the number after the point).
> 
> I'd have to see more of your system setup.  This may be normal
> depending on how/when your mobo sata controller devices are
> enumerated.

Probably yes. I was just confused because it was not consistent with
how Debian names the "normal" drives and partitions.
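
Next time I'll also look at how the kernel enumerates them, which should
show where those names come from, e.g.:

  cat /proc/partitions      # block devices and partitions as the kernel sees them
  ls -l /dev/disk/by-id/    # persistent names, shows which sdX is which physical disk
  ls -l /dev/disk/by-path/  # shows which controller/port each device hangs off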


> BTW, don't put partitions on your mdraid devices before creating the
> array. 

Sorry, I don't understand what you mean by "don't put partitions on your
mdraid devices before creating the array".
Is it wrong to partition the disks and then do "mdadm --create
--verbose /dev/md0 --auto=yes --level=6
--raid-devices=4 /dev/sda1.1 /dev/sdb1.1 /dev/sdc1.1 /dev/sdd1.1"?

Should I first create an empty array with "mdadm --create
--verbose /dev/md0 --auto=yes --level=6 --raid-devices=0"

And then add the partitions?
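
Or does it mean I should skip partitioning entirely and give mdadm the
whole disks? Then I guess the create command would look roughly like
this (device names just as an example, untested):

  # RAID 6 across four whole disks, no partition tables on the members
  mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd
  # and then the filesystem goes directly onto the md device
  mkfs.ext4 /dev/md0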


> You may be tempted if you have dissimilar size drives and want
> to use all capacity of each drive.  I still say don't do it.  You
> should always use identical drives for your arrays, whether md based
> or hardware based.  Does md require this?  No.  But there are many
> many reasons to do so.  But I'm not going to get into all of them
> here, now. Take it on faith for now. ;)

Hmm, that's a very hard decision.
You probably understand that I don't want to buy twenty 3 TB drives now.
But I still want to be able to add some 3 TB drives in the future. At
the moment I have four Samsung HD154UI (1.5 TB) and four WD20EARS (2 TB).
Actually I just saw that the Samsungs are green drives as well.
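
If I understand md correctly, mixing them in one array would also mean
every member only counts as much as the smallest disk, so an 8-drive
RAID 6 out of the four 1.5 TB and the four 2 TB disks would give roughly
(8 - 2) * 1.5 TB = 9 TB, wasting 0.5 TB on each WD20EARS.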

The reason I bought green drives is that the server provides
mythbackend, NAS, Logitech Media Server, etc.
So it doesn't have much to do, but it should still be ready all the time
(if I want to listen to music I don't want to power on the Squeezebox
radio, have that trigger the server to start up, and only be able to
listen once it has booted, which would probably take >1 min).
So I thought the drives should manage themselves to save some power.

I understand that there may be timing problems. But do they make it
impossible?

What would you do if you were in my place?

Let's say I "threw away" these disks and went for 3 TB drives. At the
moment four of them in a RAID 6 array would be enough, so I'd have 6 TB
available.
Then at some point I'd run out of space and want to upgrade with another
disk. That model will probably still be available then, but will it also
be when I have 19 disks and want to add the last one?
Just as an example to explain my worries ;-)
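
Just to put numbers on it: if I got the RAID 6 maths right, the usable
space is (number of drives - 2) * drive size, so four 3 TB drives give
(4 - 2) * 3 TB = 6 TB, and a full 20-drive array would end up at
(20 - 2) * 3 TB = 54 TB.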


Cheers
Ramon

