Re: Linux RAID for Performance and supported Card
On 7/26/2011 6:34 AM, Siju George wrote:
> A few of my servers are running Linux software RAID 1 and are
> hitting a disk I/O bottleneck.
What is the nature of the bottleneck, IOPS or throughput? What is the
workload? Email, database, file server, web server, etc? How much disk
space does the workload require?
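A quick way to answer the IOPS-vs-throughput question yourself: with sysstat installed, `iostat -x 5` shows r/s + w/s (IOPS served) and %util next to the throughput columns. Without it, the raw counters in /proc/diskstats tell the same story; a minimal sketch (the sd[a-z] device-name pattern is an assumption, adjust for your disks):

```shell
# Fields 4 and 8 of /proc/diskstats are reads and writes completed
# since boot for each device; sample twice a few seconds apart and
# diff the counts to get IOPS.  High IOPS with modest KB/s means you
# are seek-bound, which RAID level changes and caches can help with.
awk '$3 ~ /^sd[a-z]$/ { print $3, "reads completed:", $4, "writes completed:", $8 }' /proc/diskstats
```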
> I am considering other RAID Levels to increase Performance as well as
> keep the Redundancy.
This may not be necessary. Read on.
> I guess I should go for hardware RAID 10, as per my colleague's advice,
> after reading through many reviews on the Internet.
> Does anybody have any ideas or suggestions?
Yes. Identify the workload and required storage space.
> Also which RAID card is most supported on Linux?
LSI/3ware are very good. I hear the higher quality/price Adaptec cards
are decent as well. Drivers for all of the above are in the kernel
source tree. Steer clear of Areca cards, and steer clear of all
fakeraid cards. If it doesn't have onboard memory, it's a fakeraid card.
I'm going to make a couple of educated guesses while awaiting your
response. You're currently running 2 mdadm or LVM mirrored disks in
each server and experiencing an IO bottleneck, so the servers are
likely MTAs in an MX farm. If so, your bottleneck is insufficient head
seek bandwidth while randomly writing incoming mail files to the queue.
The best way to solve this problem is running a couple of properly sized
SLC SSDs in an mdadm mirror pair. Your IOPS will jump from the
~150-300/second you have now with 7.2k or 15k RPM drives, to between
3,000 and 30,000, far exceeding any future need. Side benefits are
reduced power draw and noise. A suitable SSD would likely be the Intel
SSD 311: http://www.intel.com/design/flash/nand/311series/overview.htm
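If you go that route, the mirror itself is one command. A minimal sketch, assuming the two SSDs show up as /dev/sdc and /dev/sdd and the MTA is postfix (device names and queue path are hypothetical, substitute your own):

```shell
# Build a 2-device md mirror from the SSDs (names are hypothetical)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mkfs.xfs /dev/md1                   # or your filesystem of choice
mount /dev/md1 /var/spool/postfix   # put the mail queue on the fast mirror
```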
If you'd rather go the mechanical route:
- A RAID controller with a 512MB flash-backed write cache; no BBU required.
- 4 x 2.5" 15K RPM 73GB drives.
- A quality 4-bay 2.5" SAS/SATA hot-swap cage that occupies a single
  5.25" bay, with 2 integrated cooling fans:
  http://www.icydock.com/goods.php?id=114
As always, disable the individual drive caches in the controller BIOS
and enable the card's 512MB write cache. Balance the RAID cache for
equal read/write, as files written to the MX queue are typically read
back immediately and then delivered to a mailbox server. Configure the
4 drives as RAID10 with the smallest possible stripe size, because mail
files typically average less than 32KB. If the controller offers an
8KB stripe size, select it; if not, select the smallest offered. This
will help minimize wasted space due to partial stripe width writes and
will slightly increase performance. Thanks to the 512MB of RAID cache,
your queue IOPS will be in the multiple thousands, vs only 600 for the
bare two-spindle stripe.
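The stripe-size argument is simple arithmetic. A sketch, assuming a file is rounded up to a full stripe width across the 2 data spindles of the 4-drive RAID10 (the 32KB file size is just an example):

```shell
# Rough model: KB wasted per file when it is rounded up to a full
# stripe width.  waste FILE_KB CHUNK_KB DATA_SPINDLES
waste() {
  awk -v f="$1" -v c="$2" -v n="$3" 'BEGIN {
    sw = c * n                        # stripe width in KB
    used = int((f + sw - 1) / sw) * sw  # round file up to full stripes
    print used - f
  }'
}
waste 32 8  2    # 8KB chunks, 16KB stripe width: 32KB file -> 0 KB wasted
waste 32 64 2    # 64KB chunks, 128KB stripe width: 32KB file -> 96 KB wasted
```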
Mirrored SSDs will have orders of magnitude greater IOPS and will be
cheaper than the RAID card/mechanical drive solution, but will have far
less space. Using larger SSDs drives the cost up quickly.