Re: reading an empty directory after reboot is very slow
Quoting Vincent Lefevre (email@example.com):
> On 2015-04-13 15:50:40 -0500, David Wright wrote:
> > That's staggering. My /var/lib/dpkg/info has ~8900 files and occupies
> > 462848 bytes. So that would be over half a million files in your case.
> > Does eftests stand for "excessive files tests"?
> It means "elementary function tests", but what this doesn't say is
> that these tests are exhaustive: 1 file = a small interval on which
> the double-precision function (e.g. exp, log) can be approximated
> by a small-degree polynomial, and the whole double-precision domain
> must be covered.
> Now, more interestingly, the fault is due to... proprietary software.
> I wrote these tests about 15 years ago and I needed rigorous interval
> arithmetic in multiple precision, and at that time, the Maple intpak
> package was the only thing I found (though a few years later, despite
> what its documentation said, it was shown that it was not rigorous at
> all, and I might have chosen a better solution with free software).
> So, I had to use Maple, and still use it (now with intpakX, which is
> better but still based on assumptions that could be wrong) because I
> haven't rewritten my tests completely. Maple is only used for ISO C
> code generation. In normal use, code is generated, then run, and after
> a few minutes (to get the result), the corresponding program can be
> removed, so that few files are present in such a directory at the same
> time. But a colleague in another lab needed these test files and he
> didn't have Maple. So, I had to generate all of them (yes, something
> like half a million) and give him a huge compressed tar file (not sent
> by e-mail, of course!).
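For anyone curious about the structure you describe, here is a rough sketch (my own toy illustration, not your Maple-generated code) of the idea: split the function's domain into many small intervals, approximate the function on each interval by a low-degree polynomial about the interval's centre, and check the worst-case deviation. One interval would correspond to one generated test file.

```python
import math

def cover_domain(lo, hi, n):
    """Split [lo, hi] into n equal subintervals (one 'test file' each)."""
    w = (hi - lo) / n
    return [(lo + i * w, lo + (i + 1) * w) for i in range(n)]

def taylor_exp(x, c, degree=3):
    """Degree-3 Taylor polynomial of exp about the interval centre c."""
    return sum(math.exp(c) * (x - c) ** k / math.factorial(k)
               for k in range(degree + 1))

def max_error(f, poly, a, b, samples=100):
    """Crude sampled bound on |f - poly| over [a, b]."""
    c = (a + b) / 2
    return max(abs(f(x) - poly(x, c))
               for i in range(samples + 1)
               for x in [a + (b - a) * i / samples])

# On a small enough interval, a low-degree polynomial tracks exp closely,
# which is what makes the per-interval test files tractable:
intervals = cover_domain(0.0, 1.0, 64)
worst = max(max_error(math.exp, taylor_exp, a, b) for a, b in intervals)
```

Of course the real tests need rigorous (interval-arithmetic) error bounds rather than this sampled estimate, which is exactly where Maple's intpak/intpakX came in.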
Good to see people testing their tools. Perhaps someone like you came
across the famous HP-35 bug.
In the past, I expect you would have been forced to store your files
in a tree using a method similar to Debian's pool to avoid running out
of directory entries.
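For the record, a minimal sketch of that kind of layout (hypothetical helper; Debian's actual pool/ buckets by package-name prefix such as pool/main/libf/libfoo, which is roughly what this imitates):

```python
from pathlib import Path

def pooled_path(root, name, depth=2):
    """Bucket a file under subdirectories named after its first characters,
    so no single directory ends up holding half a million entries.
    '_' pads names shorter than the bucket depth."""
    prefix = [name[i] if i < len(name) else "_" for i in range(depth)]
    return Path(root, *prefix, name)

p = pooled_path("tests", "exp_00042.c")
# e.g. tests/e/x/exp_00042.c
```

With two one-character levels over lowercase names, half a million files spread into at most 26x26 buckets of roughly 740 entries each, well within what old filesystems handled comfortably.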