Re: [PATCH v2 1/2] libdpkg: let backends decide default compression level



Hi!

On Sat, 2009-10-24 at 18:05:06 -0500, Jonathan Nieder wrote:
> When compressing packages with gzip or bzip2, the tradeoff is
> clear: a better compression ratio for a distributed package is
> generally worth spending some extra time at build time.  Since
> better compressed packages are not much more inconvenient to
> decompress at all, dpkg defaults to the maximum compression level
> and developers rarely need to override that default.
> 
> On the other hand, LZ77-based decompressors use more memory at
> decompression time for more tightly compressed packages, so the
> maximum compression level of "-9" (which uses more than 32 MiB
> of memory to decompress) can be too high.
> 
> With this patch, instead of defaulting to -9, the compress_cat()
> function passes a special preset of "-\0" for the compression
> backend to sort out.  All backends handle this as -9 for now; no
> change in behavior is intended.

Applied this one too, with a few modifications.

> diff --git a/lib/dpkg/compression-backend.c b/lib/dpkg/compression-backend.c
> index 9bfbcee..8f0c055 100644
> --- a/lib/dpkg/compression-backend.c
> +++ b/lib/dpkg/compression-backend.c
> @@ -23,6 +23,10 @@

> +static const char default_gz_compression = '9';
> +static const char default_bz2_compression = '9';
> +static const char default_lzma_compression = '9';

I added a default_level member to ‘struct compressor’, which gets
initialized to those values.
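
Roughly along these lines (just a sketch to illustrate the idea; the
exact member names and layout in the committed code may differ, and it
assumes the int level handling from the previous commit mentioned
below):

struct compressor {
	const char *name;
	const char *extension;
	int default_level;
	void (*compress)(int fd_in, int fd_out, int level,
	                 const char *desc);
	void (*decompress)(int fd_in, int fd_out, const char *desc);
};

static const struct compressor compressor_gzip = {
	.name = "gzip",
	.extension = ".gz",
	.default_level = 9,
	.compress = compress_gzip,
	.decompress = decompress_gzip,
};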

> @@ -89,6 +93,7 @@ fd_fd_filter(int fd_in, int fd_out, const char *desc,
>  #define COMPRESS(format, zFile, zdopen, zwrite, zclose, zerror, ERR_ERRNO, \
>  		fd_in, fd_out, compression, desc) do \
>  { \
> +	/* If compression == '\0', use library default. */ \
>  	char combuf[] = {'w', compression, '\0'}; \
>  	int actualread, actualwrite; \
>  	char buffer[4096]; \
> @@ -119,6 +124,7 @@ static void
>  compress_cmd(int fd_in, int fd_out, const char *path,
>  	const char *cmd, char compression, const char *desc)
>  {
> +	/* If compression == '\0', use tool’s default compression level. */
>  	fd_fd_filter(fd_in, fd_out, desc, path, cmd, "-c%c", compression);
>  }

In a previous commit I switched compression_level to be handled as an
int, so -1 means the default level. But ...

> @@ -133,6 +139,9 @@ decompress_gzip(int fd_in, int fd_out, const char *desc)
>  void
>  compress_gzip(int fd_in, int fd_out, char compression, const char *desc)
>  {
> +	if (compression == '\0')
> +		compression = default_gz_compression;
> +
>  	COMPRESS("gzip", gzFile, gzdopen, gzwrite, gzclose, gzerror, Z_ERRNO,
>  		fd_in, fd_out, compression, desc);
>  }

... I let the frontends handle the default setting instead of each
backend ...

> diff --git a/lib/dpkg/compression.c b/lib/dpkg/compression.c
> index 813526f..c8a6b75 100644
> --- a/lib/dpkg/compression.c
> +++ b/lib/dpkg/compression.c
> @@ -38,8 +38,11 @@ void compress_cat(enum compress_type type, int fd_in, int fd_out, const char *co
>    varbufvprintf(&v, desc, al);
>    va_end(al);
>  
> -  if(compression == NULL) compression= "9";
> -  else if (*compression == '0')
> +  if (compression == NULL)
> +    /* Let the backend choose the default compression level. */
> +    compression = "\0";

... here.

> +  if (*compression == '0')
>      type = compress_type_cat;
>  
>    switch(type) {
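
To make the end result concrete, the frontend side then boils down to
something like this (again only a sketch: compressor_get() is an
illustrative helper name, and the desc formatting is omitted):

void
compress_cat(enum compress_type type, int fd_in, int fd_out,
             int level, const char *desc)
{
	/* No explicit level requested (-1): use the backend's default. */
	if (level < 0)
		level = compressor_get(type)->default_level;

	/* A level of 0 still means "do not compress at all". */
	if (level == 0)
		type = compress_type_cat;

	compressor_get(type)->compress(fd_in, fd_out, level, desc);
}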

thanks,
guillem

