
Re: Tracing Filesystem Accesses



On 5/13/2011 1:34 PM, Bob McConnell wrote:

> Before we go any further, let's get a couple of things sorted out. What
> type of SSD (Solid State Drive) are you all talking about here?
> 
> If it contains Flash memory, then yes, there is a limit to the number of
> ERASE cycles each sector can do. How long they last depends on a number
> of factors, not the least of which is how old the chips are. The first
> generations of flash memory chips could only be erased about 10,000
> times before they started to fail. This could be mitigated by decent
> firmware that did load leveling behind the scenes. But there was still a
> finite limit to how long they could be used before they wouldn't erase
> anymore. Newer chips can handle 100,000-250,000 erase cycles. So decent
> drivers can help them last for several years even under heavy use. If
> the wear is spread out over a large space, it almost appears to last
> forever. But I still wouldn't want to use them for files that were
> frequently replaced or rewritten. I still think of them as Read-Mostly
> memory components.

Apparently no one is reading the authoritative article I cited.

Modern SSDs are definitely not limited to use as "Read-Mostly" devices.
Mail spools are write-mostly directories.  There are many high-volume
mail sites using SSDs for their mail spools due to the massive random
write IOPS capability and the cost savings compared to striping many
SRDs together.

A single quality SSD can easily sustain 20k+ random write IOPS.
Achieving 20k random write IOPS with SRDs would require 132 7.2k RPM
SATA drives.  As SMTP mail is almost entirely disk I/O bound and
requires almost no CPU, building out a massive cluster of cheap
single-CPU nodes in an MX farm, with only a single spindle's worth of
IOPS per node, wastes a ton of money on completely underutilized
processors, memory, power, etc.  This is what many ISPs previously did
to avoid buying expensive SAN storage to achieve the required aggregate
spool IOPS.
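
To put rough numbers on that, here is a back-of-envelope sketch in
Python.  The ~150 random write IOPS per 7.2k SATA spindle is my
assumption, inferred from the 132-drive figure above, not a benchmark:

    import math

    # How many 7.2k RPM SATA spindles does it take to match the random
    # write IOPS of a single quality SSD?
    ssd_iops = 20000     # one quality SSD, per above
    sata_iops = 150      # assumed per 7.2k RPM SATA spindle

    spindles = math.ceil(float(ssd_iops) / sata_iops)
    print("7.2k SATA drives to match one SSD: %d" % spindles)
    # prints 134, in the same ballpark as the 132 quoted above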

Today they can eliminate roughly 130 of those nodes and replace the
SRDs in the remaining two or four with SSDs, achieving the same or
better performance with only two or four MX nodes.  Not only have they
saved $60-100k on node costs, they have also eliminated the need for
~130 switch ports (approx $25k) and cut the rack space from 132U down
to 4U.  And they save tens of thousands of dollars per year on
electricity by eliminating those ~128 nodes.
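
For the consolidation math itself, a similarly rough sketch using only
the figures above (the 1U-per-node assumption is mine; everything else
is from this post):

    # Rough consolidation arithmetic for the MX farm example above.
    old_nodes, new_nodes = 132, 4
    nodes_cut = old_nodes - new_nodes   # 128 nodes off the power bill
    ports_cut = 130                     # switch ports freed, per above
    port_savings = 25000                # approx $25k, per above

    print("Nodes eliminated: %d" % nodes_cut)
    print("Rack space: %dU -> %dU" % (old_nodes, new_nodes))  # assumes 1U nodes
    print("Implied cost per switch port: ~$%d" % (port_savings // ports_cut))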

The cost of the appropriate SSDs is less than $4k.  SSDs today are
definitely not "Read-Mostly" devices.

-- 
Stan

