
Re: question about storage



shawn wilson put forth on 2/20/2011 12:56 AM:

> Also, I'd go with a desktop for database stuff any day. From what I've
> experienced, it goes something like this:
> Fibre > iscsi > SCSI > ata > firewire > nfs > usb
> I say that iscsi is faster than SCSI because I generally have 10GE iscsi and
> many more disks than I can throw in a proliant. And, I don't know why but fc
> just seems to work faster than anything else for db stuff.

Fibre Channel is faster for a couple of reasons:

1.  Dramatically lower protocol overhead.  This is the big one.  Fibre
Channel is an OSI layer 2 protocol, the same as physical ethernet.  It
transfers data in variable-length frames of up to 2148 bytes, versus
the 1518-byte frame of 802.3 ethernet.  For payloads larger than one
frame, Fibre Channel therefore needs roughly 30% fewer frames to move
the same data, which translates into fewer host or HBA interrupts/sec.
Any sysop knows what network or block IO interrupt processing can do to
a busy server's performance.
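The frame arithmetic above is easy to check with a quick sketch (Python;
the 2148 and 1518 byte figures are the max frame sizes quoted above,
and this ignores per-frame header overhead on both sides):

```python
# Rough frame-count comparison using the max frame sizes from the text:
# 2148 bytes for Fibre Channel, 1518 bytes for 802.3 ethernet.
FC_FRAME = 2148
ETH_FRAME = 1518

def frames_needed(total_bytes, frame_size):
    """Frames required to move total_bytes (last frame may be partial)."""
    return -(-total_bytes // frame_size)  # ceiling division

transfer = 1_000_000  # a 1 MB transfer
fc = frames_needed(transfer, FC_FRAME)    # 466 frames
eth = frames_needed(transfer, ETH_FRAME)  # 659 frames
print(f"ethernet needs {eth / fc - 1:.0%} more frames than FC")
```

Fewer frames means fewer interrupts (or interrupt coalescing events)
per megabyte moved, which is where the claim above comes from.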

iSCSI encapsulates SCSI commands and data into TCP packets which are
then broken down into multiple ethernet frames, with the process
reversed for reassembly on the receiving end.  TCP is an OSI layer 4
protocol, running atop IP, a layer 3 protocol, atop ethernet, a layer 2
protocol.  Fibre Channel is therefore comparable to direct ethernet
frame transmission WRT overhead--FC essentially has none.  As the frame
encoding is 8B/10B, its data rate is 80% of its link rate.
Coincidentally, GbE also uses 8B/10B encoding.  This is FC's only
relevant overhead penalty.  The encoding of 10 GbE depends on the
implementation; there are, TTBOMK, 7 implementations.  10GBASE-LX4 uses
8B/10B encoding while the other 6 use the much more efficient 64B/66B
encoding.  Remember though, this efficiency advantage of 10 GbE iSCSI
exists only at layer 2.  Layer 2 is always pretty efficient to begin
with, whether it be FC, ethernet, FDDI, infiniband, etc.
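For concreteness, here is what those two line codes cost, expressed as
payload bits per transmitted bit (just the encoding ratios, nothing
else):

```python
# Line-code efficiency: data bits carried per bit on the wire.
# 8B/10B transmits 10 bits per 8 data bits; 64B/66B transmits 66 per 64.
eff_8b10b = 8 / 10     # 0.80  -- FC, GbE, 10GBASE-LX4
eff_64b66b = 64 / 66   # ~0.97 -- the other 10 GbE PHYs
print(f"8B/10B: {eff_8b10b:.0%}, 64B/66B: {eff_64b66b:.1%}")
```

So at layer 2 a 64B/66B 10 GbE link wastes only ~3% of its bit rate on
encoding versus 20% for FC or GbE -- which is exactly why the overhead
that matters for iSCSI lives higher up the stack, not here.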

Until the advent of intelligent (and relatively expensive) iSCSI HBAs,
all TCP processing of iSCSI packets had to be performed by the host CPU
using the operating system's TCP/IP stack.  While the Linux stack is
pretty efficient, a loaded server would still be brought to its knees
by heavy block IO.  You never see this with Fibre Channel, as the
"heavy lifting" is all handled by the relatively inexpensive,
low-performance, low-wattage IC on the FC HBA.  Notice that many 10 GbE
iSCSI HBAs have fans mounted on them like an nVidia GPU card?  You'll
never see an FC HBA sporting a fan, and very few have heat sinks.  The
number of transistors required to implement an FC controller is roughly
a hundredth of that for an iSCSI HBA controller chip.  Such an iSCSI
chip requires a real-time operating system, such as Linux, QNX, or
VxWorks, plus local memory to store that OS and its TCP/IP stack.
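To put a rough number on that host-CPU cost: the old rule of thumb
(hedged -- it is a 2000s-era estimate, not a measurement, and varies
widely with NIC features like checksum offload and LSO) is about 1 Hz
of CPU per 1 bit/s of TCP throughput:

```python
# Back-of-envelope TCP processing cost, using the classic (and rough)
# "1 Hz of CPU per bit/s of TCP throughput" rule of thumb.
def cpu_ghz_for_tcp(gbit_per_s, hz_per_bit=1.0):
    """Estimated host CPU (GHz) consumed by TCP at a given throughput."""
    return gbit_per_s * hz_per_bit

print(cpu_ghz_for_tcp(10))  # line-rate 10 GbE: on the order of 10 GHz
```

By that estimate, software iSCSI at 10 Gbit/s could eat several modern
cores just on protocol processing -- which is the work an FC HBA's
little IC never has to do.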


2.  Fibre Channel storage array controllers are typically higher-end
devices, with faster CPUs (relative to peak processing demands), larger
caches, wider internal buses, etc., than their iSCSI-only counterparts.
They also typically sport 15k RPM FC or SAS disk drives, while
iSCSI-only arrays typically sport lower-spindle-speed, lower-performance
7.2k or 10k SATA drives.  There are exceptions to this general
statement, such as NetApp and Nexsan, but this has been the trend.

In the case of NetApp units, which can sport both 8 Gbit FC and 10 GbE
iSCSI host interfaces, performance using the FC interfaces will always
be higher, simply because no TCP/IP stack processing is required, even
though the physical link rate of 10 GbE is ~20% higher.  Processing TCP
at a line rate of 10 Gbit/s requires an enormous amount of CPU
horsepower.
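The raw-rate comparison works out like this (nominal figures I'm
assuming here: 8GFC signals at 8.5 GBaud with 8B/10B encoding, 10 GbE
at 10.3125 GBaud with 64B/66B):

```python
# Effective payload rates after line coding, in Gbit/s.
# Assumed nominal signaling rates: 8GFC = 8.5 GBaud (8B/10B),
# 10 GbE = 10.3125 GBaud (64B/66B).
fc8_eff = 8.5 * 8 / 10        # 6.8 Gbit/s
ge10_eff = 10.3125 * 64 / 66  # 10.0 Gbit/s
print(f"8GFC: {fc8_eff} Gbit/s, 10 GbE: {ge10_eff} Gbit/s")
```

So at layer 2 the 10 GbE pipe actually carries considerably more
payload than 8GFC -- which makes the point above stronger, not weaker:
FC wins in practice despite a smaller pipe, because it never pays the
TCP/IP tax.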

-- 
Stan

