Re: semi-RFS: xenium - good enough for me (at the moment)
Hi Andreas,
On 21.07.20 10:02, Andreas Tille wrote:
> Hi Steffen,
>
> On Mon, Jul 20, 2020 at 10:02:42PM +0200, Steffen Möller wrote:
>> Hello,
>>
>> There is more to the package than I managed to investigate:
>> - How is the benchmarking properly invoked? It builds at least.
> I have no idea for the moment.
Cross-checked with upstream - benchmarks are a non-issue for us for now.
>> - How is the google test properly built/performed? Better have a look
>> at azure-pipelines.yml
> Could you please add a DEP-3 header to your patch that deals with gtest
> issues? It's not obvious to the reader what your changes (basically
> commenting out things) are trying to achieve.
Done (well, at least a DEP-2.5 header), and the patch was simplified again.
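For the record, a full DEP-3 header for that patch would look roughly like the following (field contents here are illustrative, not a copy of the actual patch):

```
Description: Build against the packaged googletest instead of a bundled copy
 Comments out the parts of the CMake setup that would download and build
 googletest, so the Debian libgtest-dev package is used instead.
Author: Steffen Möller
Forwarded: not-needed
Last-Update: 2020-07-21
```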
>> - Why does the build fail over detecting pthread bits when I enable the
>> (optional) libcds inclusion?
> You mean when enabling what you commented in d/rules with
> # -DWITH_LIBCDS="1"
> ?
Yes, though it now works once a dependency on boost-dev is added; it
still fails with libcds, which is in any case very optional.
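For context, enabling that optional switch in d/rules would be a one-liner along these lines (a sketch; the override target may already exist in the package, and the flag name is taken from the comment in d/rules):

```makefile
# debian/rules (sketch): pass the optional libcds switch through to CMake
override_dh_auto_configure:
	dh_auto_configure -- -DWITH_LIBCDS="1"
```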
>> But there is documentation built and as a headers-only library the files
>> also install neatly to /usr/include/xenium. This should be sufficient to
>> eventually address the reverse dependency mmmulti, but, .. if someone
>> reading this feels like rounding this up - would be much appreciated.
>> What's nagging the most is that I lack the time to edutain myself on the
nitty-gritty of parallel computing that these libraries are helping
>> with. So, I finished this "functional enough for Covid-19" package but
>> cannot do more to make it a prime example for Debian packaging.
>>
>> https://salsa.debian.org/med-team/xenium
> I've done a bit of polishing and did a version upgrade, but I did not
> really address your questions. Maybe one of the great new members
> (in CC, or anybody else :-) ) might catch up in more detail.
>
>> Many thanks and best wishes to everyone,
> Thanks for your initial preparation
I got gtest to compile (albeit with lots of noise) and the tests to pass:
[ OK ] VyukovHashMap/6.drain_sparsely_populated_map_using_erase (0 ms)
[ RUN ] VyukovHashMap/6.iterator_covers_all_entries_in_densely_populated_map
[ OK ] VyukovHashMap/6.iterator_covers_all_entries_in_densely_populated_map (2 ms)
[ RUN ] VyukovHashMap/6.iterator_covers_all_entries_in_sparsely_populated_map
[ OK ] VyukovHashMap/6.iterator_covers_all_entries_in_sparsely_populated_map (0 ms)
[ RUN ] VyukovHashMap/6.parallel_usage
[ OK ] VyukovHashMap/6.parallel_usage (1022 ms)
[ RUN ] VyukovHashMap/6.parallel_usage_with_nontrivial_types
[ OK ] VyukovHashMap/6.parallel_usage_with_nontrivial_types (2039 ms)
[ RUN ] VyukovHashMap/6.parallel_usage_with_same_values
[ OK ] VyukovHashMap/6.parallel_usage_with_same_values (237 ms)
[----------] 30 tests from VyukovHashMap/6 (3373 ms total)
[----------] Global test environment tear-down
[==========] 819 tests from 71 test suites ran. (35669 ms total)
[ PASSED ] 819 tests.
Yeah!
I don't think there is much left for us to do, really. Please have
another look, and if this also works on your end, I suggest we upload.
Concerning the +dfsg suffix: shouldn't this just be +ds (if it needs a
suffix at all), since all we remove are (mostly empty) directories?
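If we settle on +ds, uscan can apply the suffix automatically during the repack; a d/watch sketch could look like this (the URL and version pattern are assumptions about upstream's layout, and the actual pruning is driven by Files-Excluded in d/copyright):

```
version=4
opts="repacksuffix=+ds,dversionmangle=s/\+ds\d*$//" \
  https://github.com/mpoeter/xenium/tags .*/v?(\d\S*)\.tar\.gz
```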
Sidenote: I had to force-push the upstream branch; something apparently
went wrong when I merged.
Best,
Steffen