
Re: filesystem and x86 vs. x86_64 benchmarking...



"Dale E. Martin" <dale@the-martins.org> writes:

> I'm subscribed to the mailing list, no need to Cc me.
>
>> First of all wall clock time is meaningless when comparing results. When
>> you have other processes competing for the CPU the wall clock can rise
>> drastically without the test being any slower. Wall clock without % cpu
>> usage is meaningless.
>  
> Well, the machine was not running X or cron let alone anything else while I
> was running the tests, so pretty much all of the tests got 100% of the CPU.
> Occasionally I ran "top" to see what the memory consumption looked like.
> But since the tests were mainly for my own purposes I didn't make them
> overly scientific, you're correct about that.

If they got 100% CPU then user+system should add up to real. If the
system was otherwise idle then the missing time must have been spent
waiting for the disk.

The large difference in time would mean the different filesystems
have drastically different seek behaviour. What else could create
more IO waits?
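
As an illustration (the command and the numbers here are made up, not
taken from your runs), the gap between real and user+sys on an
otherwise idle box is the IO wait:

  $ /usr/bin/time -p make bzImage
  real 812.34
  user 640.12
  sys 58.20
  # real - (user + sys) is roughly 114s; with nothing else running
  # that is time spent waiting on the disk, i.e. seeks and reads.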

PS: try using -pipe unless you notice that it causes swapping.
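
For example (the flags shown are only a sketch, adjust them to
whatever you already build with); -pipe makes gcc pass data between
the compilation stages through pipes instead of temporary files on
disk:

  $ make CFLAGS="-O2 -pipe" CXXFLAGS="-O2 -pipe"
  # or for a single file:
  $ gcc -O2 -pipe -c foo.c -o foo.o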

>> Secondly g++ using >1GB RAM or temporary files in 64bit mode might not be
>> a bug at all but just the extra complexity of optimizing for 64bit. I
>> noticed that gcc usually uses 4 times as much RAM in 64bit as in
>> 32bit. Some worst case sources can increase that easily.
>  
> True, and the register allocation would be different as well.  Perhaps it's

Register allocation is also exponential in the number of registers,
IIRC, and amd64 has a lot more of them than i386 (16 general purpose
registers instead of 8).

> not a bug.  It would be interesting to know if you need double the memory
> for comparable (slightly improved?) performance for C++ compilation in
> x86_64.  I hadn't really considered that.  Hopefully it's a bug ;-)

Every pointer is double the size and you have a lot of those in
gcc. On top of that you get more registers, more opcodes, more
inlining, ...
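
A quick way to see the pointer size difference (this assumes a
multilib gcc that accepts -m32; the file name is arbitrary):

  $ cat > ptrsize.c <<'EOF'
  #include <stdio.h>
  int main(void) { printf("%zu\n", sizeof(void *)); return 0; }
  EOF
  $ gcc -m32 ptrsize.c -o ptrsize32 && ./ptrsize32   # prints 4
  $ gcc -m64 ptrsize.c -o ptrsize64 && ./ptrsize64   # prints 8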

>> Looking over the compile tests it looks like a 64bit kernel is faster in
>> userspace but wastes about the same time in kernel space, probably
>> translating the 32bit syscalls to 64bit. It is too bad you have hardly
>> any comparable tests in there. A lot of 32bit tests without a 64bit
>> counterpart.
>
> Yes, I thought I clearly explained my interest was how to set up this
> machine to get the best performance for my own purposes.  Once it became
> clear that 64 bit mode was going to be ineffective for my purposes, I
> thought I'd stop wasting my time slowly compiling the same code over and
> over ;-)

64bit userspace, yes. But comparing a 32bit and a 64bit kernel with
32bit userspace doesn't show a winner in your tests yet. You can't
really decide which kernel to use going by those tests.

>> And where is the bonnie++ test with a 64bit kernel? Do any of the FS become
>> faster/slower? Actually I want three runs: 32bit kernel, 64bit kernel +
>> 32bit userland, 64bit kernel+userland.
>
> I'll run this for one filesystem type.  If the differences are noteworthy,
> I'll look at the others.  Thanks for the suggestion - that's what I was
> looking for in posting to this list.

I expect some differences in the char speed. The block speed should be
mostly disk bound and show little to no change.

But a lot of real-life reads/writes will be to/from cache. The char
access shows how fast system calls and the in-memory VFS are and
should reflect the cache speed (consider the char speed as cache
accesses per second).
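
A sketch of how I would run it (the mount point, size and user are
just placeholders; give -s at least twice your RAM so the block
numbers aren't pure cache), repeated once per setup - 32bit kernel,
64bit kernel + 32bit userland, 64bit kernel + 64bit userland:

  $ bonnie++ -d /mnt/test -s 2048 -u nobody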

>> We all have seen comparisons between FSes and that is a rather boring
>> repeat.
>
> Sorry to bore you, that certainly was not my intention.

You can make up for it by doing those extra bonnie tests. :) I never
installed a 32bit kernel here, so I could never test this myself.

> Take care,
>      Dale

Regards,
        Goswin


