
Re: Advice on system purchase



On Friday, 2 November 2012, lee wrote:
> > If the CPU isn't too slow for it, and most current CPUs aren't, an
> > SSD will be highly beneficial for just about any workload that uses
> > random I/O. And most workloads do.
> 
> Like?  When you edit text in an editor or a WYSIWYG word processor or
> when you work on a spreadsheet, you are not creating a lot of disk I/O.
> When you compress or uncompress a tar archive, you are CPU limited.
> When you use a web browser, you are limited by the bandwidth of your
> network connection and by CPU --- not to mention your graphics card.
> When you play a game, you are limited by graphics card and CPU and
> perhaps by memory bandwidth.  When you do photo editing in gimp,
> you're limited by CPU and perhaps memory bandwidth and your graphics
> card, and you may be limited by having to swap.
> 
> Loading the editor or word processor or spreadsheet, tar and bzip2, the
> web browser and the game will probably be faster unless they are
> already in the disk cache.  Swapping will probably be faster as well.

One more addendum:

I suggest you keep a window open with vmstat 1 running in it during the 
workloads you claim are CPU bound rather than I/O bound.

Whenever you see CPU wait above 0%, your CPU is waiting for processes 
that are stuck in system calls; ps aux | grep " D" shows these to you. 
And these system calls most often have to do with I/O.
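As a quick sketch of how to watch this (the sample count is just an 
example, you can also let vmstat run until you interrupt it):

```shell
# Print system statistics once per second, five times. The "wa" column
# is the percentage of time the CPU sat idle while waiting for I/O to
# complete; the "b" column counts processes blocked in uninterruptible
# sleep.
vmstat 1 5

# In another terminal: list processes currently in state "D"
# (uninterruptible sleep, i.e. stuck in a system call, usually on I/O).
ps aux | grep " D"
```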

I bet that you see this when untarring a kernel source archive and on 
other workloads. You likely won't see it while editing a text document, 
but on that occasion the CPU is likely to be idling as well.

And as to compressing/uncompressing: only some compressors and 
decompressors can use more than one core at all. Neither gzip, nor 
bzip2, nor xz will use more than one core. 7z possibly can, and lbzip2 
and pbzip2 can. I tested this myself[1]; the multicore results are in 
the article I wrote comparing compressors and decompressors[2]. Only 
lbzip2, run *directly* on a tar file that was already in the page cache, 
managed to almost max out a hexacore CPU. On the T42, compression and 
decompression were mostly CPU bound, but that is with a ready-made tar 
file. If tar has to collect or create lots of files, the picture would 
very likely be quite different, so I created this benchmark to 
specifically load one core, or, where the tool / algorithm allowed it, 
several cores. So during compression and decompression, usually all but 
one core are doing just one thing: idling. Unless another workload loads 
them.
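A rough sketch of such a single-core versus multi-core comparison (the 
file name is a placeholder, and pbzip2 has to be installed):

```shell
# Single-core: bzip2 uses one core no matter how many are available.
time bzip2 -9 -c linux-source.tar > single.tar.bz2

# Multi-core: pbzip2 splits the input into blocks and compresses them
# in parallel, by default with one worker per core.
time pbzip2 -9 -c linux-source.tar > multi.tar.bz2
```

Watching vmstat 1 alongside shows whether the cores are actually busy 
or mostly idling while waiting for tar to deliver data.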

Note that CPU wait is not the complete time spent on I/O: it is only the 
time during which the CPU has nothing to run *and* the processes it 
could otherwise run are stuck in system calls.

I even see this with the SSD at times, but it has become nicely rare.

[1] http://martin-steigerwald.de/computer/programme/packbench/index.html

[2] page three of:

http://www.linux-community.de/Internal/Artikel/Print-Artikel/LinuxUser/2010/10/Aktuelle-Komprimierprogramme-in-der-Uebersicht


-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7
