On May 11, Wichert Akkerman scribbled:
> Previously Marek Habersack wrote:
> > Very unreliable stuff and also puts some strain on the server the data is
> > fetched from - if it has a large directory tree, regenerating lsR with
> > every change to the file system takes much time. With ftp the client just
> > asks the server for the listing and that's it.
> Which means that in effect that tree is created lots and lots of times on a
> reasonably busy server, so you still lose.
I don't have the exact numbers, of course, but given the time it takes to
scan a directory tree on ext2fs, I'd suspect that scanning one directory
with, say, 100 entries 100 times a day would be much faster than scanning
1000 directories of those 100 entries every time anything in the directory
tree is changed. The lsR would have to start at the tree root, and when
_any_ file is changed it has to be regenerated _entirely_, which means
traversing _all_ directories. If, OTOH, a client asks for a listing of one
directory, only that directory is scanned. In other words, the on-demand
approach is more "politically" correct, since it does only the work that's
really needed.

marek
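The cost comparison above can be put in back-of-envelope form. This is a minimal sketch using the figures mentioned in the mail (100-entry directories, 1000 directories, 100 requests or changes per day); the exact numbers are illustrative assumptions, not measurements:

```python
# Illustrative numbers taken from (or assumed to match) the discussion above.
ENTRIES_PER_DIR = 100    # entries in a typical directory
DIRS_IN_TREE = 1000      # directories in the whole tree (assumed)
REQUESTS_PER_DAY = 100   # on-demand listing requests per day
CHANGES_PER_DAY = 100    # file-system changes per day (assumed)

# On-demand: each request scans just the one directory asked for.
on_demand_entries = REQUESTS_PER_DAY * ENTRIES_PER_DIR

# lsR: every change forces a full traversal of the entire tree.
ls_lr_entries = CHANGES_PER_DAY * DIRS_IN_TREE * ENTRIES_PER_DIR

print(on_demand_entries)  # 10000 directory entries read per day
print(ls_lr_entries)      # 10000000 -- a factor of 1000 more
```

Under these assumptions the full-tree rebuild touches three orders of magnitude more directory entries per day than serving listings on demand.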