Re: reading an empty directory after reboot is very slow
Quoting Kushal Kumaran (firstname.lastname@example.org):
> Bob Proulx <email@example.com> writes:
> > Petter Adsen wrote:
> >> Can someone please enlighten me as to why the entry for this directory
> >> is so large, even though it is empty? Since it's apparently obvious to
> >> everyone else, I would very much like to know :)
> > <snipped>
> > If a directory became full it was easy to extend it
> > by writing the array longer. But if an early entry in the array was
> > deleted the system would zero it out rather than move each and every
> > entry in the file system down a slot. (I always wondered why they
> > didn't simply take the *last* entry and move it down to the deleted
> > entry and simply keep the array always compacted. I wonder. But they
> > didn't do it that way.)
I think the reason for this is that the entries have different lengths,
corresponding to the length of the file name, so you'd have to search for
an entry small enough to fit the freed slot. If that entry were not the
last one, you'd have to keep repeating the process...
I think I can see a pathological end-case here.
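To make the two deletion strategies concrete, here is a toy sketch (my own illustration, not the actual ext2 on-disk format): a directory modelled as a flat array of variable-length entries. `delete_zeroing` is the classic behaviour; `delete_swap_last` is the "move the last entry down" idea, which only works cleanly when the last entry fits the freed slot.

```python
# Toy model of a directory as an array of variable-length entries.
# None represents a zeroed (deleted) slot.

def delete_zeroing(entries, name):
    """Classic approach: blank the slot, leaving a hole of the same size."""
    for i, e in enumerate(entries):
        if e == name:
            entries[i] = None  # hole remains; later scans must skip it
            return True
    return False

def delete_swap_last(entries, name):
    """'Move the last entry down' idea: only safe if the entry fits."""
    for i, e in enumerate(entries):
        if e == name:
            last = entries[-1]
            if last is not None and len(last) <= len(e):
                entries[i] = last   # reuse the freed slot
                entries.pop()       # the array stays compact
            else:
                entries[i] = None   # slot too small: fall back to a hole
            return True
    return False
```

With fixed-size entries the swap always succeeds and the array stays compact; with variable-length names the fallback path is exactly the pathological case described above.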
Once you have trees, I'm out of my depth. But I read that trees have
to be balanced, which may mean a whole new set of algorithms for
insertion and deletion.
> Moving entries around breaks ongoing readdir operations. If a readdir
> has gone past the file being removed, and you moved the last entry
> there, the entry being moved would be missed, despite *it* not being the
> entry added or removed.
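The readdir hazard Kushal describes can be shown with a small sketch (again my own illustration, not kernel code): an index-based reader that has already passed slot i will never see an entry that is later moved into slot i.

```python
# Illustrative sketch of the readdir hazard: a reader iterating by slot
# index misses an entry moved backwards into a slot it has already passed.

def readdir(entries):
    """Yield live entries by slot index, skipping zeroed slots."""
    i = 0
    while i < len(entries):
        if entries[i] is not None:
            yield entries[i]
        i += 1

entries = ["a", "b", "c", "d"]
it = readdir(entries)
seen = [next(it), next(it)]   # reader has passed slots 0 and 1 ("a", "b")

# Delete "b" by moving the last entry ("d") into its slot:
entries[1] = entries.pop()

seen += list(it)              # reader resumes from slot 2
# "d" was neither the entry added nor the entry removed, yet it is missed:
assert seen == ["a", "b", "c"]
```

Zeroing the slot instead would leave the reader's position valid, which is presumably why it was done that way.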
I don't think this matters. There's no guarantee that another process
isn't writing to that directory while you are working your way along it.
This whole discussion touches on one of the facts of life: people
generally design things for extending, not for contracting. Ability to
extend a design is an important criterion in its success. In the field
of computers this is often coupled with backwards compatibility, so you
can keep the old design going.
People extend their houses, but they rarely demolish just an extension;
they raze the whole thing and start over. But they don't do it very often, so
one doesn't select a house on the basis that it's easy to shrink, or
quick to raze and rebuild. One looks at its steady-state performance,
and that patch of waste ground next to it.
The OP is happy to use a filesystem that can accommodate half a million
files with no advance warning. Ext4 filesystems are designed to be able to
grow by three orders of magnitude. I'm sure they won't be easy to shrink.