
Re: [RFD] optimized versions of openssl



Michael Stone <mstone@debian.org> writes:

> On Wed, Sep 04, 2002 at 11:17:55AM -0400, Michael Poole wrote:
> > I suspect it would be "better" to do install-time selection; the
> > preferred way to choose is to benchmark the alternatives on a
> > realistic task. That isn't something most people would want to do when
> > they load the library, since it takes a while to do, and on a given
> > CPU, the results are unlikely to change over time.
> 
> No, you don't need to benchmark anything. It should be sufficient to
> identify the cpu and jump to the proper optimization. Probably with a
> feature-based check for unknown cpus. A benchmark might theoretically be
> better, but an install-time check is a pita. (What happens if you
> upgrade your cpu? Or install from hd images created on another system?)

Install-time checks let you do as much pondering as you want.  Using
the alternatives system lets you override the system's current choice;
it provides a superset of the functionality that run-time selection
does, and only incurs extra cost for uncommon operations (such as
upgrading a CPU without reinstalling or installing a disk image --
either of which already requires intervention or automated fixups).
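The alternatives-system approach could look roughly like this. The paths, link names, and priorities below are made up for illustration and do not reflect any actual openssl packaging; `update-alternatives` itself is the real Debian tool.

```shell
# Hypothetical: register a generic and an i686-tuned build of the
# library, with the tuned build at higher priority (auto-preferred).
update-alternatives --install /usr/lib/libcrypto.so.0.9.7 libcrypto \
    /usr/lib/i386/libcrypto.so.0.9.7 10
update-alternatives --install /usr/lib/libcrypto.so.0.9.7 libcrypto \
    /usr/lib/i686/libcrypto.so.0.9.7 20

# The admin can override the automatic choice at any time, e.g.
# after swapping CPUs or restoring a disk image on other hardware:
update-alternatives --set libcrypto /usr/lib/i386/libcrypto.so.0.9.7
```

This is the "superset" point above: the automatic choice costs nothing at run time, and the uncommon cases are handled by an explicit override rather than by code in the library.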

How is this different from autoconf versus imake, which comes down to
a similar choice between in-system testing and platform detection?
There are about as many x86 CPU variants as there are *nix OSes, and
there are many (notably different) ARM, MIPS, and SPARC variants out
there as well.
well.  How much extra code and single-use data will go into the shared
libraries to do CPU detection and run-time code selection?

-- Michael Poole


