On 2021-06-14 12:55 p.m., Marc Haber wrote:
> On Mon, 14 Jun 2021 07:23:45 -0700, John E Petersen
> <jepetersen@utexas.edu> wrote:
>> Thanks Paul, but I'm having a hard time finding the precise version I
>> would like to archive on any ftp mirror. My scrape is actually working
>> quite correctly now, though, since I added a sleep in there -- the
>> source and machine-installation instructions are tidily tucked away in
>> different directories, with names, locations, and success/failure
>> logged into a key-value (more of a dictionary-ish, really) text file.
>> I get that an ftp request is more civilized, but this scrape is quite
>> convenient for me. If it is more palatable to the community, I can
>> increase the sleep time in the loop to a couple of minutes or even a
>> few or more, and throw it on one of my raspberry pis and forget about
>> it for a while, since I'm not in a major hurry.
>
> rsync is incomprehensible rocket science?
>
When you expect to build a "new" Debian kernel (Hurd? FreeBSD?) because
there are too many government agents involved in Debian, and when you
complain about a security hole in Ubuntu (but this is a Debian list)
because your hosting provider's web panel doesn't let you connect without
first supplying an SSH key... then let me say that pretty much all of
your life is... rocket science! And you've failed much of it.

I still can't understand why it would be easier to build a custom system
to scrape a website than to use either:

  apt-mirror / debmirror / aptly

or:

  rsync -avH rsync.debian.org/debian/... ./mylocalrepo

--
Polyna-Maude R.-Summerside
-Be smart, Be wise, Support opensource development
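For reference, the rsync approach suggested above can be sketched as a
small script. This is a hedged illustration, not a tested mirror setup:
the mirror URL, suite, and destination paths are assumptions (pick a
mirror near you that actually offers the rsync service), and the script
only prints the command it would run, so it makes no network access:

```shell
#!/bin/sh
# Sketch of mirroring part of a Debian archive with rsync instead of a
# custom scraper. build_mirror_cmd just assembles and prints the command;
# remove the printf wrapper (i.e. run rsync directly) to actually sync.
build_mirror_cmd() {
  # $1 = remote rsync URL, $2 = local destination directory
  # -a: archive mode; -v: verbose; -H: preserve hard links
  # --delete: drop local files that vanished from the mirror
  printf 'rsync -avH --delete %s %s\n' "$1" "$2"
}

# Hypothetical mirror host and suite, for illustration only:
build_mirror_cmd "rsync://rsync.example.org/debian/dists/bullseye/" \
                 "./mylocalrepo/dists/bullseye/"
```

Running the sync on a schedule (cron on one of those Raspberry Pis) gives
the same "set it and forget it" behavior as the scrape loop, with the
mirror operators' blessing.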