
Bug#814852: RFS: openfst/1.5.1-1 -- weighted finite-state transducers library



On 04/03/2016 21:12, Jakub Wilk wrote:
> * Giulio Paci <giuliopaci@gmail.com>, 2016-03-02, 09:45:
>> - added a new patch 1008_fix_linking_issues.patch, replacing and extending unresolved_symbols.diff.
> At the moment there's nothing in the changelog indicating any relation between 1008_fix_linking_issues.patch and unresolved_symbols.diff.

Added a note about it.

> When you say you're dropping a patch, please also say why you're dropping it. (AIUI, all dropped patches except for unresolved_symbols were merged upstream.)

All of the patches have been merged upstream, with some changes.

For 2001_put_libfst_extension_libraries_in_usr_lib.patch an alternative patch was submitted and accepted.
Apparently unresolved_symbols.diff was also merged, but then a subsequent change broke its fixes again.

> Do the leading numbers in patch names mean something?

I added a README.source file to document the meaning. Essentially they encode information similar to that found in DEP-3 headers:

0xxx patches come from upstream
1xxx are interesting for upstream
2xxx are Debian-only (or were refused by upstream)

The xxx part is a (mostly) chronological sequence number, but is not related to the order in which the patches should be applied.
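
For illustration, the status is encoded only in the name prefix; a debian/patches/series file following this convention might look like the sketch below (names other than the two patches mentioned above are made up):

	# quilt applies patches in the order listed here, not by number
	0001_upstream_backport_example.patch
	2001_put_libfst_extension_libraries_in_usr_lib.patch
	1008_fix_linking_issues.patch
	1005_kaldi_patch.patch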

> Is it intentional that they are out of order in debian/patches/series?

That is simply because 1008_fix_linking_issues.patch has already been submitted and accepted upstream.
1005_kaldi_patch.patch has been submitted, but it is still under review and may require some work.

On Friday I discovered an issue with this patch that randomly prevents the tests from completing.
I am not able to deal with the issue myself, but upstream and the original author of the patch have both been notified and are
looking into it.
According to preliminary investigation, the main issue is in the unpatched openfst, but the patched version seems to be affected more often.
The main issue should be present in the currently packaged version as well, although I did not check it myself.

If possible I would like to have this package uploaded anyway and open a bug report later, as this package will allow further work
on other packages (kaldi in particular).

I do not know yet when we may expect a proper fix for this issue. Probably in a few months.

> The package FTBFS in minimal environments:
> 
> libtool: compile:  g++ -DHAVE_CONFIG_H -I./../../include -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -std=c++11 -c
> compress-script.cc  -fPIC -DPIC -o .libs/compress-script.o
> In file included from ./../../include/fst/extensions/compress/compress.h:18:0,
>                 from ./../../include/fst/extensions/compress/compress-script.h:13,
>                 from compress-script.cc:13:
> ./../../include/fst/extensions/compress/gzfile.h:19:18: fatal error: zlib.h: No such file or directory
> compilation terminated.
> Makefile:543: recipe for target 'compress-script.lo' failed

I added a zlib1g-dev build dependency.
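
For reference, the fix is a single addition to the Build-Depends field in debian/control; the other entry shown here is illustrative, not the full actual list:

	Build-Depends: debhelper (>= 9), zlib1g-dev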

> I think the 500 MB/job limit is insufficient. I did some poor man's memory profiling[0] on i386: it turns out that there are many files that require more than that to compile,
> and one outlier needs over 2 GB! (See the attachment for details.) And the memory requirements are most likely even bigger on 64-bit architectures...

I tried the same experiment on amd64. The critical files are the same ones; in particular algo_test.cc is still an outlier, requiring ~3.7 GB of RAM.
The other critical files require about 2 GB each.

I increased the limit to 2 GB/job, but I am not completely convinced about this new limit.
The reasoning behind it is that the outlier is compiled during the tests, after all the other critical files have been compiled, so a critical
file should not be compiled at the same time as the outlier.
So, with 4 GB available it should still be possible to compile the package with parallel=2. Increasing the limit to 2.5 GB would probably be safer, as there is still a
file requiring more than 600 MB that may be compiled at the same time as the outlier.

On the other hand, increasing the limit will "waste" more RAM as the parallel value grows and on other architectures.

What is your opinion about this limit? How likely is it that we will compile with parallel=2 on an amd64 system with 4 GB of RAM and no swap available?
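
To make the arithmetic concrete, the limit could be expressed in debian/rules along these lines; this is only a sketch of the idea, not the actual code in the package:

	# sketch: allow one compile job per ~2 GB of RAM, but always at least one job
	MEM_PER_JOB_KB := 2000000
	MEM_TOTAL_KB := $(shell awk '/^MemTotal:/ { print $$2 }' /proc/meminfo)
	MAX_JOBS := $(shell expr $(MEM_TOTAL_KB) / $(MEM_PER_JOB_KB) \| 1)

With these numbers a 4 GB machine gets at most two jobs, which is the parallel=2 case discussed above.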

> adequate(1) tells me that the obsolete conffile wasn't removed on upgrade:
> libfst-tools: obsolete-conffile /etc/bash_completion.d/openfstbc

I added a libfst-tools.maintscript file to remove it.
I also added a section in debian/rules to run dh_bash-completion, as it was not run automatically.
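
For the record, the maintscript entry is a single rm_conffile line (the prior-version below is only an example):

	rm_conffile /etc/bash_completion.d/openfstbc 1.5.1-1~

and the rules section just invokes dh_bash-completion at an appropriate point in the build.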

> We have automatic debug packages these days, so I'd drop the -dbg package.

Dropped the -dbg package.
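
In case it is useful: if we want a smooth upgrade path from the old -dbg package, dh_strip can take care of the migration; the package name and version below are only illustrative:

	override_dh_strip:
		dh_strip --dbgsym-migration='libfst-dbg (<< 1.5.1-1~)'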

Bests,
	Giulio

> [0] "ps -u $(whoami) -o rss,args" in a loop, plus some manual post-processing.

