Re: upstreams maintainer conflict, was: wget: remove outdated manual page
- To: Hrvoje Niksic <hniksic@srce.hr>, Joost Kooij <kooij@mpn.cp.philips.com>, debian-devel@lists.debian.org, Nicolás Lichtmaier <nick@feedback.net.ar>, James Troup <J.J.Troup@scm.brad.ac.uk>
- Subject: Re: upstreams maintainer conflict, was: wget: remove outdated manual page
- From: Raul Miller <rdm@test.legislate.com>
- Date: Sat, 16 May 1998 22:56:19 -0400
- Message-id: <19980516225619.U3613@test.legislate.com>
- Mail-followup-to: Hrvoje Niksic <hniksic@srce.hr>, Joost Kooij <kooij@mpn.cp.philips.com>, debian-devel@lists.debian.org, Nicolás Lichtmaier <nick@feedback.net.ar>, James Troup <J.J.Troup@scm.brad.ac.uk>
- In-reply-to: <19980516180135.A3613@test.legislate.com>; from Raul Miller on Sat, May 16, 1998 at 06:01:35PM -0400
- References: <Pine.LNX.3.96.980514110438.4317B-100000@pc47.mpn.cp.philips.com> <kigaf8kfnvw.fsf@jagor.srce.hr> <19980516180135.A3613@test.legislate.com>
Here's a man page for wget 1.5.1. I've never used this version; the
document is derived from the info pages in wget's source code.
[Aside: it would be nice to have a mechanism that just generates
a unique list of referenced URLs. This would allow more complicated
filtering schemes to determine what to download (at the expense
of having to run wget twice -- but it's easy enough to set up a
web proxy). --spider only checks a single file.]
I've written it in perl's plain old documentation format. To create the
man page, save this message in a file named wget, and execute:
pod2man --section=1 --center=' ' --release=1.5.1 wget >wget.1
=head1 NAME
B<wget> - File-oriented World Wide Web retrieval program
=head1 SYNOPSIS
B<wget> I<[options] URL1 [URL2 ...]>
=head1 DESCRIPTION
B<wget> typically retrieves the contents associated with URLs and saves
them in files under the current directory, with paths derived from
each file's URL. For a detailed description, please read C<info wget>;
this man page is intended only as a quick reference.
=head1 OPTIONS
=head2 Basic Startup Options
-V, --version
Display the version of Wget.
-h, --help
Print a help screen.
-b, --background
Go to background mode immediately after starting.
-e COMMAND, --execute COMMAND
Execute COMMAND as if it were a part of .wgetrc,
after parsing .wgetrc.
=head2 Logging and Input File Options
-o LOGFILE, --output-file=LOGFILE
Log all messages to LOGFILE, instead of standard output, which
is the default. If you do not wish the log output to be verbose,
use -nv (non-verbose).
-a LOGFILE, --append-output=LOGFILE
Append to LOGFILE. Unlike -o, this will preserve any prior
log file contents.
-d, --debug
Turn on debug output.
-q, --quiet
Suppress all messages.
-v, --verbose
Turn on verbose output, with all the available data. This is the
default.
-nv, --non-verbose
Non-verbose output--turn off verbose without being completely
quiet (use -q for that), which means that error messages and
basic information still get printed.
-i FILE, --input-file=FILE
Read URLs from FILE, in which case no URLs need to be on the
command line. FILE may be an HTML document.
-F, --force-html
When input is read from a file, force it to be treated as
an HTML file.
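As a rough sketch of how the logging and input-file options above combine
(the file names F<urls.txt> and F<fetch.log> are hypothetical):

```shell
# Read the list of URLs from urls.txt rather than the command line,
# and log non-verbose messages to fetch.log instead of standard output.
wget -nv -o fetch.log -i urls.txt
```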
=head2 Download Options
-t NUM, --tries=NUM
Set number of retries to NUM. Specify 0 or inf for infinite
retrying.
-O FILE, --output-document=FILE
The documents will not be written to the appropriate files, but
all will be concatenated together and written to FILE. If FILE
already exists, it will be overwritten. If the FILE is `-', the
documents will be written to standard output. Including this option
automatically sets the number of tries to 1.
-nc, --no-clobber
Do not clobber existing files when saving to directory hierarchy
within recursive retrieval of several files. This option treats
local .html and .htm files as if they had been downloaded.
-c, --continue
Continue retrieval of FTP documents, from where it was left off
by another program or a previous instance of Wget.
--dot-style=STYLE
Affects progress indication. STYLE may be default,
binary, mega, or micro.
-N, --timestamping
Turn on time-stamping.
-S, --server-response
Print the headers sent by HTTP servers and responses sent by FTP
servers.
--spider
Only check for existence of explicitly listed file(s) -- do not
download.
-T SECONDS, --timeout=SECONDS
Set the read timeout to SECONDS seconds. The default is
900 seconds, which is almost always correct.
-w SECONDS, --wait=SECONDS
Wait the specified number of seconds between the retrievals.
-Y on/off, --proxy=on/off
Turn PROXY support on or off. The proxy is on by default if the
appropriate environmental variable is defined.
-Q QUOTA, --quota=QUOTA
Specify download quota for automatic retrievals. The value can
be specified in bytes (default), kilobytes (with k suffix), or
megabytes (with m suffix). Quota is checked after all files
mentioned on the command line have been retrieved, and then
after each file is downloaded.
Setting quota to 0 or to inf unlimits the download quota.
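For example, the retry, continue, and quota options above might be combined
like this (the host and path are placeholders):

```shell
# Resume a partially downloaded FTP file, retry up to 10 times,
# and stop once 50 megabytes have been fetched.
wget -c -t 10 -Q 50m ftp://ftp.example.org/pub/largefile.tar.gz
```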
=head2 Directory Options
-nd, --no-directories
Do not create a hierarchy of directories when retrieving
recursively; a numeric extension is added where required to
avoid clobbering distinct files with the same name.
-x, --force-directories
The opposite of -nd, create a hierarchy of directories, even if
one would not have been created otherwise.
-nH, --no-host-directories
Disable generation of host-prefixed directories.
--cut-dirs=NUMBER
Trim up to NUMBER directory elements from the front of the
local path. [This does not do anything about the host directory]
-P PREFIX, --directory-prefix=PREFIX
Set directory prefix to PREFIX. Default is .
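A sketch of how -nH, --cut-dirs, and -P interact (the URL is hypothetical).
By default the file below would be saved as example.org/pub/src/file.html;
with these options it lands in download/src/file.html instead:

```shell
# Drop the host directory, trim one leading path element ("pub"),
# and save everything under ./download instead of the current directory.
wget -nH --cut-dirs=1 -P download http://example.org/pub/src/file.html
```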
=head2 HTTP Options
--http-user=USER
--http-passwd=PASSWORD
Specify the username USER and password PASSWORD on an HTTP
server. Wget will encode them using the basic (insecure) WWW
authentication scheme.
-C on/off, --cache=on/off
When off (the default is on), disable the server-side cache by
sending Pragma: no-cache.
--ignore-length
Workaround for broken CGIs that send bogus Content-Length.
--header=ADDITIONAL-HEADER
Define an ADDITIONAL-HEADER to be passed to the HTTP servers.
Headers must contain a : preceded by one or more non-blank
characters, and must not contain newlines.
You may define more than one additional header by specifying
--header more than once. Specification of an empty string as
the header value will clear all previous user-defined headers.
--proxy-user=USER
--proxy-passwd=PASSWORD
Specify the username USER and password PASSWORD for
authentication on a PROXY server, which will be encoded
using the basic authentication scheme.
-s, --save-headers
Save the headers sent by the HTTP server to the file, preceding
the actual contents, with an empty line as the separator.
-U AGENT-STRING, --user-agent=AGENT-STRING
Identify as AGENT-STRING to the HTTP server. Default: the
result of wget -V
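For instance, the authentication and header options described above could be
used together (the credentials and site are placeholders; recall from the
SECURITY section that the password travels unencrypted):

```shell
# Authenticate with basic WWW authentication and send one extra header.
wget --http-user=alice --http-passwd=secret \
     --header='Accept-Language: en' http://example.org/private/index.html
```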
=head2 FTP Options
--retr-symlinks
Retrieve symbolic links on FTP sites as if they were plain
files, i.e. don't just create links locally.
-g on/off, --glob=on/off
Turn FTP globbing on or off (handling of *, ?, [ and ]).
Default is on only when command line specifies a globbing
character in the URL. Globbing attempts to parse the directory
listing from the remote machine.
--passive-ftp
Use the "passive" FTP retrieval scheme, in which the client
initiates the data connection.
=head2 Recursive Retrieval Options
-r, --recursive
Turn on recursive retrieving.
-l DEPTH, --level=DEPTH
Specify recursion maximum depth level DEPTH (default is 5).
--delete-after
This option tells Wget to delete every single file it downloads,
*after* having done so. It is useful for pre-fetching popular
pages through PROXY.
-k, --convert-links
After a document has been downloaded, convert non-relative
links to that document to relative links.
-m, --mirror
Turn on options suitable for mirroring. Currently equivalent
to -r -N -l0 -nr.
-nr, --dont-remove-listing
Do not remove the .listing files generated by FTP.
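Since -m is described above as shorthand for its component options, the two
invocations below (against a hypothetical host) should behave identically:

```shell
# Mirror a site using the shorthand...
wget -m ftp://ftp.example.org/pub/
# ...or by spelling out the equivalent options.
wget -r -N -l0 -nr ftp://ftp.example.org/pub/
```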
=head2 Recursive Accept/Reject Options
-A ACCLIST, --accept ACCLIST
-R REJLIST, --reject REJLIST
Specify comma-separated lists of file name suffixes or patterns
to accept or reject.
-D DOMAIN-LIST, --domains=DOMAIN-LIST
Set the domains to be accepted and DNS looked up, where DOMAIN-LIST
is a comma-separated list. Note that it does *not* turn on -H.
--exclude-domains DOMAIN-LIST
Exclude the domains given in a comma-separated DOMAIN-LIST from
DNS-lookup.
-L, --relative
Follow relative links only.
--follow-ftp
Follow FTP links from HTML documents. Without this option, Wget
will ignore all the FTP links.
-H, --span-hosts
Enable spanning across hosts when doing recursive retrieving.
-I LIST, --include-directories=LIST
Specify a comma-separated list of directories you wish to follow
when downloading.
-X LIST, --exclude-directories=LIST
Specify a comma-separated list of directories you wish to
exclude from download.
-nh, --no-host-lookup
Disable the time-consuming DNS lookup of almost all hosts.
-np, --no-parent
Do not ever ascend to the parent directory when retrieving
recursively.
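As an illustration of the accept list combined with recursion (the URL is
hypothetical):

```shell
# Recursively fetch only PostScript and PDF files,
# never ascending above the /papers/ directory.
wget -r -np -A ps,pdf http://example.org/papers/
```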
=head1 SIGNALS
If B<wget> is sending anything to stdout, SIGHUP causes it to redirect
that stream to a file named F<wget-log>.
=head1 SECURITY
B<wget> sends passwords unencrypted, as required by the underlying protocols.
=head1 FILES
=over 4
F<$HOME/.netrc>
F</etc/wgetrc>
F<$HOME/.wgetrc> or F<$WGETRC>
F<http://>I<_____>F</robots.txt>
=back
=head1 SEE ALSO
F<wget.texi> (available through C<info wget>).
=cut
If there are problems in this document, please feel free to fix them.
For details on the format, read the manpages for perlpod and pod2man.
--
Raul