
Re: Suggestions regarding a PCI-X card.



On 2/13/2012 7:26 PM, Dan Serban wrote:
> Hello all,
> 
> I have recently upgraded my Tyan S2885 motherboard and repurposed it to
> become a file server.  In doing so, I picked up a Supermicro SAT2-MV8 which
> is based on a Marvell chipset.  So far everything comes up good, and I
> am planning on 16 hard drives total; the first 4 that I've hooked up for
> vetting and benchmarking work well.

This is an 8-port card, so 16 drives will require 2 cards.  Unless you
plan to connect 4 SATA drives and 4 EIDE drives to the mobo ports... ick

> One thing that's come to my attention before I go forward is that when I
> run lspci -vv, I've noticed this:
> 
>  # lspci -vv -s 03
> 03:03.0 SCSI storage controller: Marvell Technology Group Ltd. MV88SX6081
> 8-port SATA II PCI-X Controller (rev 09)
> --snip--
>         Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort-
>         <TAbort- <MAbort- >SERR- <PERR- INTx-

That "Status: Cap+ 66MHz+" is an lspci default for all PCI devices.
Ignore it.  I have a 12 year old Intel BX test system here, w/33MHz 5v
only PCI slots.  66MHz PCI hadn't even been invented yet.  But lspci says:

        Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort-
        <TAbort- <MAbort- >SERR- <PERR- INTx-

> --snip--
>         Capabilities: [60] PCI-X non-bridge device
>                 Status: Dev=03:03.0 64bit+ 133MHz+ SCD- USC- DC=simple

This is what you trust:               ^^^^^^^^^^^^^^

It's running at 133MHz.
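
If you want to see the raw register lspci is decoding here, setpci can
read it directly.  A sketch, assuming the card is still at 03:03.0 and
you run it as root (CAP_PCIX is a register name pciutils understands):

        # dump the 32-bit PCI-X Status register, offset 4 into the
        # PCI-X capability
        setpci -s 03:03.0 CAP_PCIX+4.L

Bit 16 of that dword is "64-bit device" and bit 17 is "133MHz capable";
those are the same bits lspci decodes into the "64bit+ 133MHz+" flags
above.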

>                 DMMRBC=512 DMOST=4 DMCRS=8 RSCEM- 266MHz- 533MHz-

> Am I reading the above wrong?  

Yes, you are.

> Under capabilities it says Status as well,
> but earlier it's simply status.  So I'm wondering if the bus is at 66MHz
> and the card is somehow at 133?  I don't fully understand the output.

Look at the same lspci data for all your other PCI devices.
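
A quick way to do that, assuming the lspci that ships with pciutils
(the grep just pairs each device header with its conventional-PCI
Status line):

        # print every device plus its conventional-PCI Status line
        lspci -vv | grep -E '^[0-9a-f]{2}:|Status: Cap'

You'll see that "66MHz+" flag on device after device, whatever bus
they're actually sitting on.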

> From what I can tell, the card is running at 66MHz, though the

As I said, you're reading the lspci output wrong.

> capabilities list it as capable of 133MHz.  

No, "capabilities" tells you what the device can do *and* what's it's
currently doing.  Note the 133MHz is under the sub heading "status".

> I've double checked that the
> jumpers are set correctly on the motherboard and am concerned that I'm
> somehow doing something wrong.

You haven't done anything wrong.  But just in case...

slots 1/2 on PCI-X bus B:	Max 133MHz, single card
slots 1/2 on PCI-X bus B:	Max 100MHz, two cards
slots 3/4 on PCI-X bus A:	Max 100MHz, 1 or 2 cards

If you install a 2nd SAT2-MV8, put both cards in PCI-X slots 1/2 and
close J53.  This leaves slots 3/4 open for non-PCI-X cards, should you
need to install any in the future or already have some.  Don't attempt
to max your SATA HBA bandwidth by using both PCI-X buses, one card in
each; that's wholly unnecessary, and it decreases your flexibility and
performance WRT future card installation.

The reason "maxing out" is unnecessary is this:

A 100MHz PCI-X bus has *only* (/sarcasm) 800MB/s of bandwidth.  That's
more than plenty for 16 SATA drives on two SAT2-MV8 cards.  Worth noting
here is that an OP on the XFS list yesterday is running a 24x1TB SATA
external Infortrend RAID box over a single U320 SCSI channel to the
host's PCI-X Adaptec HBA.  That's a 320MB/s bus for 24 drives, in a
production lab environment, and that array gets hammered 24x7.
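
The 800MB/s figure is just bus width times clock speed.  A
back-of-envelope check in the shell, nothing more:

        # 64-bit PCI-X moves 8 bytes per clock; 100MHz x 8 bytes
        echo "$((100 * 8)) MB/s"    # prints "800 MB/s"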

Also note there are tons of 16,24,32,48 drive iSCSI SAN arrays and NAS
boxen in production environments that have only 1-4 GbE ports for host
connectivity.  That's *only* 200 to 800MB/s of bidirectional aggregate
bandwidth.  And most environments with such arrays have dozens or more
servers
using them as primary storage.  The reason the data pipe can be so
"small" is that most data accesses are random accesses, not sequential
streaming workloads.  A 16 disk RAID0 that can stream 1GB/s under
optimal conditions will only be able to move 20MB/s with a highly random
IO workload, because the disks become seek bound.
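
If you want to see where a figure like 20MB/s comes from, here's a
rough sketch with assumed numbers (mine, not measured): call it ~160
random IOPS per 7200rpm disk and 8KiB per request:

        # 16 disks x ~160 IOPS x 8KiB per IO, converted to MiB/s
        echo "$((16 * 160 * 8 / 1024)) MB/s"    # prints "20 MB/s"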

In other words, don't get yourself all wound up over theoretical maximum
bandwidths of drives, cards, and bus slots.  Even though 16
SATA-I/II/III drives in RAID0 may have a theoretical combined streaming
read rate of ~1.6GB/s (16 x ~100MB/s per drive), you'll never see it in
the real world.  You'll be
lucky to see 1GB/s with the "perfect streaming test", which doesn't
exist, regardless of HBA, RAID card, bus slot speed, etc.

So don't worry about the PCI-X bus speed.

> After googling a bit, I only found one topic about something like this and
> the poster suggested that linux will not show 133MHz speeds via lspci.

It is showing it.  You've just been misreading the output.

> I'm not sure what I should believe.  

Believe me when I say I'm losing patience with you, Dan.  ;)

Believe your own eyes.  Remove your cranium from your backside and use
some deduction and common sense.  It literally took me about 2 minutes
to figure this out, and it wasn't difficult at all.

> Is there any tool I can use to test
> this setup to be sure what speed it's running at?  Is there anything else I
> can do or check?

Drop it in slot 3 or 4, leaving the sister slot empty, and look at the
lspci output.  You should see 100 where you currently see 133.  If you
don't, then you know lspci is simply fuckered, as both the 66 and the
133 are wrong.  Then you can simply tell lspci to piss off, assume the
hardware is working as it should (it is), and go on with your life.
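
One way to re-check after the move without chasing the new bus address,
assuming your lspci supports vendor filtering (the pciutils one does;
11ab is Marvell's PCI vendor ID):

        # show the PCI-X capability line and the Status line under it
        # for every Marvell device, wherever it landed
        lspci -vv -d 11ab: | grep -A1 'PCI-X'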

> Thanks.

NP.

> PS. I'm running wheezy, but this output was taken from grml which is based
> on an older version of wheezy with kernel 2.6.38 .. if that makes a
> difference I can try this with the latest debian live distro.

None of this makes a difference.  That mobo was created in 2001/2002,
10 freak'n years ago.  You think the kernel/tool devs wouldn't have
figured this out by now, with it already covered by previous
kernel/tool revs?  Again, you're simply reading lspci wrong, I'm
guessing because you're just not that familiar with it.

-- 
Stan

