
wget, HTTP compression and recursive downloading



Hello

I want to download a large number of HTML files from a web server that
supports HTTP gzip compression. I tried to call wget with the --header
option to enable the compression:

wget -nc -np -k -r --header="Accept-Encoding: gzip" http://address

The compression is enabled, and the first downloaded document is stored
under the file name it has on the server. However, it is still
compressed with gzip, which prevents wget from looking up links in that
document for recursive downloading.
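
For example, checking the first saved document with the file utility
shows gzip data instead of plain HTML (the file name here is only an
example):

file index.html
zcat index.html | head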

Now I am looking for a way to either tell wget to uncompress the files
when writing them to disk, or to find the links in the compressed
files. Of course I could uncompress the first file, restart wget,
uncompress the next set of files and so on (roughly as sketched below),
but I would like to use a way that is more convenient.
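
For reference, the manual workaround I have in mind would look roughly
like this (directory and file names are only examples, and the loop is
an untested sketch):

# rename each downloaded file so gunzip accepts it, then decompress in place
for f in address/*.html; do mv "$f" "$f.gz" && gunzip "$f.gz"; done
# restart wget so it can parse the now uncompressed documents
wget -nc -np -k -r --header="Accept-Encoding: gzip" http://address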

Any ideas?

best regards
        Andreas Janssen

-- 
Andreas Janssen
andreas.janssen@bigfoot.com
PGP-Key-ID: 0xDC801674
Registered Linux User #267976
