
Re: UFS performance oddities



Hi,

On 07/09/12 11:55, David Given wrote:
> My suspicion is that there's something cache-related with the SSD, in
> that it's doing all metadata writes synchronously. Perhaps the SSD
> doesn't support write barriers?

Plain UFS can be expected to be slow here: because it has no journal, I
think metadata updates are forced to be synchronous.

The `camcontrol identify ada0` command should show the status of the
disk's write cache (and sysctl hw.ata.wc must also be 1).  I expect it
will be 'on', but the synchronous metadata updates may mean the cache is
flushed frequently.
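
For example (the device name ada0 is just illustrative here; adjust it
to the disk in question):

# camcontrol identify ada0 | grep -i 'write cache'
# sysctl hw.ata.wc

The first line should show whether the drive's write cache feature is
supported and enabled; the second should report 1 if the kernel has the
ATA write cache enabled.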

It sounds like soft updates would help with this:

http://www.freebsd.org/doc/handbook/configtuning-disk.html#SOFT-UPDATES

On Debian GNU/kFreeBSD, the tunefs binary from ufsutils is called
tunefs.ufs instead.
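
A minimal sketch of enabling soft updates (the device name ada0s1a is
only an example; the filesystem must be unmounted, or mounted read-only,
when tunefs is run on it):

# tunefs.ufs -n enable /dev/ada0s1a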


If it is true that upstream FreeBSD 9.0 now enables soft updates by
default, perhaps we should have done the same.  But now is probably too
late.

I think ideally this would be selectable during install, defaulting to
'on' (and that could be implemented around the same time as installer
support for selecting ZFS options e.g. compression/noatime/copies/dedup ;).

ZFS should of course be unaffected by the above issue, and be the
best-performing choice of filesystem here.


Anyway, I'm surprised the performance was still so poor on an SSD.

I would check that your partitions are aligned on a proper boundary, with:

# fdisk -u=sectors -l /dev/ada0

The erase block of an SSD might be 256 KiB or a larger power of 2.  So
check that the start sector of each partition is (assuming units = 512
bytes) exactly divisible by 512 (256 KiB alignment) or ideally 2048
(1 MiB alignment).
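
As a quick check of the arithmetic (2048 here is just an example start
sector taken from the fdisk output):

# echo $((2048 % 2048))
0

A result of 0 means the partition starts on a 1 MiB boundary (2048
sectors of 512 bytes); a start sector such as the old DOS default of 63
would give a non-zero remainder and be misaligned.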

The installer should get this right.  Pre-existing partitions created
some other way might be misaligned, and thus incur a performance penalty
on writes.


The non-free iozone3 package can be useful for some kinds of disk
benchmarking, or you might try bonnie++.
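
A minimal bonnie++ invocation might look like this (assuming /mnt/test
is a directory on the filesystem under test; adjust the path, and note
that bonnie++ requires -u when run as root):

# bonnie++ -d /mnt/test -u root

Besides sequential read/write throughput, it reports file create/delete
rates, which are the figures most affected by synchronous metadata
updates.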

Regards,
-- 
Steven Chamberlain
steven@pyro.eu.org

