
Re: Autopkgtest for falcon



Hi, Andreas,

On Wednesday, 18 January 2017 at 05:33, Andreas Tille wrote:
> Hi again,
> 
> On Wed, Jan 18, 2017 at 11:34:44AM +0100, Andreas Tille wrote:
>> Hi Afif,
>>
>> On Wed, Jan 18, 2017 at 01:33:55AM -0800, Afif Elghraoui wrote:
>>>
>>> falcon's new release fails autopkgtest, which I have not gotten around to
>>> debugging, so I have not uploaded it.
>>
>> I'll have a look and let you know.
> 
> As promised, I had a look.  I usually also provide the test that runs
> as the autopkgtest as a user-runnable script.  To see what I mean, I
> committed a change to Git and hope you like it.  For me this has two
> advantages:
> 
>   1. Users can easily reproduce what is tested by just doing
>       sh /usr/share/doc/falcon/run-unit-test
>   2. The developer can run this script on the local machine, sparing
>      the overhead of creating the virtual environment, which is a
>      quicker way to find errors inside the test suite (though it is
>      no final guarantee that the test will succeed in the sandbox).
> 
> I hope you like this change.

I'm fine with the idea, but it's not something I would do myself. This
seems to me like something better implemented within autopkgtest itself
(e.g. for tests that don't declare the "breaks-testbed" restriction)
rather than on a per-package basis. I generally prefer to keep the
packaging simple, which is why I haven't manually set hardening flags
on every individual package (dpkg-buildflags could set them globally
where appropriate), why I don't change default compression methods
without good reason, and why I don't add the dummy watch line for
upstreams that don't tag releases (perhaps uscan could be made not to
fail when there is no watch line to process), and so on.

I won't revert this kind of change; I just won't initiate it or go out
of my way to maintain it.
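[For readers unfamiliar with the pattern under discussion: a minimal
sketch of such a dual-purpose test script is below. All paths, the
mktemp template, and the test commands are illustrative, not taken from
the actual falcon packaging.]

```shell
#!/bin/sh
# Sketch of a test script that runs both under autopkgtest and by hand
# (e.g. as sh /usr/share/doc/<pkg>/run-unit-test). Hypothetical example.
set -e

# autopkgtest exports AUTOPKGTEST_TMP; when a user runs the script
# manually, create a throwaway work directory instead and clean it up.
if [ -z "${AUTOPKGTEST_TMP:-}" ]; then
  AUTOPKGTEST_TMP=$(mktemp -d /tmp/falcon-test.XXXXXX)
  trap 'rm -rf "$AUTOPKGTEST_TMP"' EXIT
fi

cd "$AUTOPKGTEST_TMP"
echo "running tests in $AUTOPKGTEST_TMP"

# ... here the real script would copy the shipped test data into the
# work directory and run the actual test suite, e.g. `make full-test` ...

echo "all tests passed"
```

The same file can then be listed in debian/tests/control as the test
command and installed under /usr/share/doc/<pkg>/, so both autopkgtest
and a curious user execute the identical code path.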


>  With this script I get:
> 
> ...
> 2017-01-18 13:21:44,348[ERROR] Task Node(1-preads_ovl/m_00001) failed with exit-code=256
> 2017-01-18 13:21:44,348[INFO] recently_satisfied: set([])
> 2017-01-18 13:21:44,348[INFO] Num satisfied in this iteration: 0
> 2017-01-18 13:21:44,348[INFO] Num still unsatisfied: 2
> 2017-01-18 13:21:44,348[ERROR] Some tasks are recently_done but not satisfied: set([Node(1-preads_ovl/m_00001)])
> 2017-01-18 13:21:44,348[ERROR] ready: set([])
> submitted: set([])
> Traceback (most recent call last):
>   File "/usr/lib/falcon/bin/fc_run.py", line 4, in <module>
>     __import__('pkg_resources').run_script('falcon-kit==0.7', 'fc_run.py')
>   File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 719, in run_script
>     self.require(requires)[0].run_script(script_name, ns)
>   File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1504, in run_script
>     exec(code, namespace, namespace)
>   File "/usr/lib/falcon/pylib/falcon_kit-0.7-py2.7-linux-x86_64.egg/EGG-INFO/scripts/fc_run.py", line 5, in <module>
>     main(sys.argv)
>   File "/usr/lib/falcon/pylib/falcon_kit-0.7-py2.7-linux-x86_64.egg/falcon_kit/mains/run1.py", line 461, in main
>     main1(argv[0], args.config, args.logger)
>   File "/usr/lib/falcon/pylib/falcon_kit-0.7-py2.7-linux-x86_64.egg/falcon_kit/mains/run1.py", line 136, in main1
>     input_fofn_plf=input_fofn_plf,
>   File "/usr/lib/falcon/pylib/falcon_kit-0.7-py2.7-linux-x86_64.egg/falcon_kit/mains/run1.py", line 414, in run
>     wf.refreshTargets(exitOnFailure=exitOnFailure)
>   File "/usr/lib/falcon/pylib/pypeflow-1.0.0-py2.7.egg/pypeflow/simple_pwatcher_bridge.py", line 226, in refreshTargets
>     self._refreshTargets(updateFreq, exitOnFailure)
>   File "/usr/lib/falcon/pylib/pypeflow-1.0.0-py2.7.egg/pypeflow/simple_pwatcher_bridge.py", line 297, in _refreshTargets
>     failures, len(unsatg)))
> Exception: We had 1 failures. 2 tasks remain unsatisfied.
> makefile:4: recipe for target 'run-synth0' failed
> make[1]: *** [run-synth0] Error 1
> make[1]: Leaving directory '/tmp/falcon-test.0Rc93v'
> makefile:21: recipe for target 'full-test' failed
> 
> 
> Is this the same issue you were talking about?  I admit I have no good
> idea how to fix it, but I just want to make sure we are in sync here.
> 

That looks about right. Don't worry about this one. I've requested
upstream (a while ago) to document how to debug failed runs and they've
accepted my bug report. I may go back and ask about this specific case,
though ours is not a supported installation.

Thanks and regards
Afif

-- 
Afif Elghraoui | عفيف الغراوي
http://afif.ghraoui.name

