I am using 300GB drives, but Debian is not running on the array.
After many hours and many attempts, I was only able to get through the installer on a built array a single time. Trying to set up RAID and install onto /dev/md0 in the installer would frequently freeze the install process at random points. The one time I did make it through the entire installation onto /dev/md0, the installer scripted the initramfs to point root in /etc/conf.d/params.conf at a random disk in the array (e.g. /dev/sdb1) instead of /dev/md0, which foobars the entire initramfs process. I was unable to manually get /init to migrate out of the initramfs.
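For what it's worth, the fix I would try (I never got it to stick myself, so treat this as a sketch) is rewriting that root= entry by hand. On the real box you would unpack the initramfs with zcat | cpio -id, edit etc/conf.d/params.conf, and repack with cpio | gzip. The demo below just shows the substitution on a scratch copy -- the /tmp/initrd-demo path and the sdb1 entry are stand-ins:

```shell
# Simulated on a scratch copy of the file; on the real initramfs you
# would first do:  zcat initrd.img | cpio -id   in an empty directory,
# then repack with:  find . | cpio -o -H newc | gzip -9 > initrd.img
mkdir -p /tmp/initrd-demo/etc/conf.d
printf 'root=/dev/sdb1 console=ttyS0,115200\n' \
    > /tmp/initrd-demo/etc/conf.d/params.conf

# Rewrite the root= entry the installer pointed at a member disk
# so it points at the array instead:
sed -i 's|root=/dev/sd[a-d]1|root=/dev/md0|' \
    /tmp/initrd-demo/etc/conf.d/params.conf

cat /tmp/initrd-demo/etc/conf.d/params.conf
```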
Instead of fighting the installer, I decided to set aside a separate partition just for Debian, and then used mdadm to build the RAID 5 array for data once Debian was installed. This was the most successful approach and is how the system is running currently.
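For reference, the data array was built roughly like this. This is a sketch rather than a transcript: the partition names and the ext3 choice below are examples, and you would list whichever partitions you reserved for data on your own disks:

```shell
# Build a RAID 5 data array out of the data partitions (example names).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Put a filesystem on it and record the array so it reassembles at boot.
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

Note the initial RAID 5 resync runs in the background and will take a long time on this hardware; /proc/mdstat shows its progress.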
It is worth adding that the performance of the single partition and of the built array is equally horrible.
In either case, so far this project has not been worth the effort. The processor is so slow that using the device as even a rudimentary server seems impractical -- and the performance of the unit seems unaffected by the software controlling it.
For anyone else trying this out -- once you have finished the Debian installation and rebooted, if your system appears to freeze after the "Uncompressing kernel ..." stage, you will still need to CTRL-C the boot script and specify the kernel parameters as you did before:
exec -c "console=ttyS0,115200 rw root=/dev/sdd1 mem=256M@0xa0000000" -r 0x01800000
Although it seems the root flag gets overwritten by the initramfs params.conf anyway.
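To avoid the CTRL-C dance on every boot: stock RedBoot can store these commands in its flash-resident boot script via fconfig. I'm assuming the SS4000-E's RedBoot build exposes this (I have not verified it on this box); the load line is whatever you already use to pull in your kernel, shown here only as a placeholder:

```text
RedBoot> fconfig
Run script at boot: true
Boot script:
Enter script, terminate with empty line
>> <your usual kernel load command here>
>> exec -c "console=ttyS0,115200 rw root=/dev/sdd1 mem=256M@0xa0000000" -r 0x01800000
>>
Update RedBoot non-volatile configuration - continue (y/n)? y
```

Keep a serial console handy before writing flash, in case the script needs to be interrupted and corrected.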
----- Original Message -----
From: "Mark Clarke" <email@example.com>
To: "danny rodriguez" <firstname.lastname@example.org>
Sent: Thursday, May 13, 2010 5:30:29 AM GMT -05:00 US/Canada Eastern
Subject: Re: SS4000-E (Lack of) Performance
I am not able to help you with your question, but I would like to know
what size disks you are using. I put 4x1TB drives in mine, but the unit
freezes building the RAID. I tried 750GB disks with the same effect. I
will buy 500GB disks if I know this works. What is weird to me is why
the size of the disks would matter at all.
On Thu, May 13, 2010 at 1:55 AM, <email@example.com> wrote:
> Hi all -
> Thanks to the work of many on this mailing list, I've been able to install
> SID (w/ kernel 2.6.32-5) on my SS4000-E NAS.
> Much to my surprise, the performance of the unit is pretty much identical to
> how it was running the proprietary Falconstor implementation.
> hdparm -tT /dev/sda reads :
> Timing cached reads: 128 MB in 2.00 seconds = 63.93 MB/sec
> Timing buffered disk reads: 114 MB in 3.04 seconds = 37.44 MB/sec
> The results from hdparm are static across all 4 drives.
> Needless to say -- these numbers are atrocious. Max throughput by any means
> (NFS, FTP, etc.) is about 5MB/sec on writes to the NAS, and 8-9MB/sec on
> reads from it.
> I did come across an earlier post from "Andrushka" that stated 2.6.32 has
> DMA enabled for "this platform". From everything I have read, there is no
> DMA to enable/disable for SATA, but my knowledge is extremely limited in
> this arena. He does claim that an alternative linux distro installed on the
> box got 12MB/sec out of the unit.
> (post is here:
> http://us.generation-nt.com/ss4000e-iop-dma-help-170355901.html )
> I'm wondering if anyone has figured out why the performance of this unit is
> so overwhelmingly bad, or if anyone has suggestions on how to go about
> diagnosing the cause?
> Many thanks,