Source: dask.distributed
Version: 2021.09.1+ds.1-2
Severity: serious
X-Debbugs-CC: debian-ci@lists.debian.org
Tags: sid bookworm
User: debian-ci@lists.debian.org
Usertags: needs-update
Control: affects -1 src:python3-defaults

Dear maintainer(s),

With a recent upload of python3-defaults the autopkgtest of dask.distributed fails in testing when that autopkgtest is run with the binary packages of python3-defaults from unstable. It passes when run with only packages from testing. In tabular form:
                   pass            fail
python3-defaults   from testing    3.9.8-1
dask.distributed   from testing    2021.09.1+ds.1-2
all others         from testing    from testing

I copied some of the output at the bottom of this report. Currently this regression is blocking the migration of python3-defaults to testing [1]. Of course, python3-defaults shouldn't just break your autopkgtest (or even worse, your package), but it seems to me that the change in python3-defaults was intended and your package needs to update to the new situation.
If this is a real problem in your package (and not only in your autopkgtest), the right binary package(s) from python3-defaults should really add a versioned Breaks on the unfixed version of (one of) your package(s). Note: the Breaks is useful even if the issue is only in the autopkgtest, as it helps the migration software figure out the right versions to combine in the tests.
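For context, every traceback in the log below fails at the same call in distributed/actor.py: constructing `asyncio.Queue(loop=...)`, which Python 3.10 rejects because the *loop* parameter was removed. A minimal sketch of the loop-free pattern that 3.10 expects (the helper name here is illustrative, not the actual upstream fix):

```python
import asyncio

# Pre-3.10 code such as distributed/actor.py did:
#     q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
# On Python 3.10 this raises TypeError: the queue now binds to the running
# event loop on first use, so it must be created without a loop= argument,
# from code that already runs on the intended loop.

async def make_queue() -> asyncio.Queue:
    # Hypothetical helper, for illustration only: created inside the
    # running loop, no loop= argument needed.
    return asyncio.Queue()

async def demo() -> str:
    q = await make_queue()
    await q.put("increment")
    return await q.get()

print(asyncio.run(demo()))  # prints "increment"
```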
More information about this bug and the reason for filing it can be found on
https://wiki.debian.org/ContinuousIntegration/RegressionEmailInformation

Paul

[1] https://qa.debian.org/excuses.php?package=python3-defaults

https://ci.debian.net/data/autopkgtest/testing/arm64/d/dask.distributed/17341392/log.gz

=================================== FAILURES ===================================
__________________________ test_client_actions[True] ___________________________
direct_to_workers = True

    @pytest.mark.parametrize("direct_to_workers", [True, False])
    def test_client_actions(direct_to_workers):
        @gen_cluster(client=True)
        async def test(c, s, a, b):
            c = await Client(
                s.address, asynchronous=True, direct_to_workers=direct_to_workers
            )

            counter = c.submit(Counter, workers=[a.address], actor=True)
            assert isinstance(counter, Future)
            counter = await counter
            assert counter._address
            assert hasattr(counter, "increment")
            assert hasattr(counter, "add")
            assert hasattr(counter, "n")

            n = await counter.n
            assert n == 0
            assert counter._address == a.address

            assert isinstance(a.actors[counter.key], Counter)
            assert s.tasks[counter.key].actor

            await asyncio.gather(counter.increment(), counter.increment())
            n = await counter.n
            assert n == 2

            counter.add(10)
            while (await counter.n) != 10 + 2:
                n = await counter.n
                await asyncio.sleep(0.01)

            await c.close()

>       test()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/utils_test.py:994: in test_func
    result = loop.run_sync(
/usr/lib/python3/dist-packages/tornado/ioloop.py:530: in run_sync
    return future_cell[0].result()
/usr/lib/python3/dist-packages/distributed/utils_test.py:953: in coro
    result = await future
/usr/lib/python3.10/asyncio/tasks.py:447: in wait_for
    return fut.result()
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:97: in test
    await asyncio.gather(counter.increment(), counter.increment())
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a6dfa60>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )

E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:35377
distributed.scheduler - INFO - dashboard at: 127.0.0.1:42921
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:36503
distributed.worker - INFO - Listening to: tcp://127.0.0.1:36503
distributed.worker - INFO - dashboard at: 127.0.0.1:45901
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35377
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-3xgwbx0k
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:45731
distributed.worker - INFO - Listening to: tcp://127.0.0.1:45731
distributed.worker - INFO - dashboard at: 127.0.0.1:40889
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35377
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-wnnrz3wp
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36503', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36503
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45731', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45731
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35377
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35377
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ecc11143-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ecc2ff2e-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-ecc11143-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ecc11143-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ecc11143-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36503
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45731
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36503', name: 0, memory: 1, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:36503
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45731', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:45731
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
__________________________ test_client_actions[False] __________________________
direct_to_workers = False

    @pytest.mark.parametrize("direct_to_workers", [True, False])
    def test_client_actions(direct_to_workers):
        @gen_cluster(client=True)
        async def test(c, s, a, b):
            c = await Client(
                s.address, asynchronous=True, direct_to_workers=direct_to_workers
            )

            counter = c.submit(Counter, workers=[a.address], actor=True)
            assert isinstance(counter, Future)
            counter = await counter
            assert counter._address
            assert hasattr(counter, "increment")
            assert hasattr(counter, "add")
            assert hasattr(counter, "n")

            n = await counter.n
            assert n == 0
            assert counter._address == a.address

            assert isinstance(a.actors[counter.key], Counter)
            assert s.tasks[counter.key].actor

            await asyncio.gather(counter.increment(), counter.increment())
            n = await counter.n
            assert n == 2

            counter.add(10)
            while (await counter.n) != 10 + 2:
                n = await counter.n
                await asyncio.sleep(0.01)

            await c.close()

>       test()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/utils_test.py:994: in test_func
    result = loop.run_sync(
/usr/lib/python3/dist-packages/tornado/ioloop.py:530: in run_sync
    return future_cell[0].result()
/usr/lib/python3/dist-packages/distributed/utils_test.py:953: in coro
    result = await future
/usr/lib/python3.10/asyncio/tasks.py:447: in wait_for
    return fut.result()
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:97: in test
    await asyncio.gather(counter.increment(), counter.increment())
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a72a140>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )

E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:33073
distributed.scheduler - INFO - dashboard at: 127.0.0.1:37349
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:39083
distributed.worker - INFO - Listening to: tcp://127.0.0.1:39083
distributed.worker - INFO - dashboard at: 127.0.0.1:33581
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:33073
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-e3ylm9j0
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:33459
distributed.worker - INFO - Listening to: tcp://127.0.0.1:33459
distributed.worker - INFO - dashboard at: 127.0.0.1:38337
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:33073
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-47r_y5bl
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39083', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39083
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33459', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33459
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:33073
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:33073
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ece6eaec-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ece8d896-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-ece6eaec-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ece6eaec-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ece6eaec-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39083
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33459
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39083', name: 0, memory: 1, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:39083
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33459', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:33459
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
__________________________ test_worker_actions[False] __________________________
separate_thread = False

    @pytest.mark.parametrize("separate_thread", [False, True])
    def test_worker_actions(separate_thread):
        @gen_cluster(client=True)
        async def test(c, s, a, b):
            counter = c.submit(Counter, workers=[a.address], actor=True)
            a_address = a.address

            def f(counter):
                start = counter.n

                assert type(counter) is Actor
                assert counter._address == a_address

                future = counter.increment(separate_thread=separate_thread)
                assert isinstance(future, ActorFuture)
                assert "Future" in type(future).__name__
                end = future.result(timeout=1)
                assert end > start

            futures = [c.submit(f, counter, pure=False) for _ in range(10)]
            await c.gather(futures)

            counter = await counter
            assert await counter.n == 10

>       test()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/utils_test.py:994: in test_func
    result = loop.run_sync(
/usr/lib/python3/dist-packages/tornado/ioloop.py:530: in run_sync
    return future_cell[0].result()
/usr/lib/python3/dist-packages/distributed/utils_test.py:953: in coro
    result = await future
/usr/lib/python3.10/asyncio/tasks.py:447: in wait_for
    return fut.result()
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:132: in test
    await c.gather(futures)
/usr/lib/python3/dist-packages/distributed/client.py:1831: in _gather
    raise exception.with_traceback(traceback)
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:125: in f
    future = counter.increment(separate_thread=separate_thread)
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    """Event loop mixins."""

    import threading
    from . import events

    _global_lock = threading.Lock()

    # Used as a sentinel for loop parameter
    _marker = object()

    class _LoopBoundMixin:
        _loop = None

        def __init__(self, *, loop=_marker):
            if loop is not _marker:
                raise TypeError(
                    f'As of 3.10, the *loop* parameter was removed from '
                    f'{type(self).__name__}() since it is no longer necessary'
                )

E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:35839
distributed.scheduler - INFO - dashboard at: 127.0.0.1:33627
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:37271
distributed.worker - INFO - Listening to: tcp://127.0.0.1:37271
distributed.worker - INFO - dashboard at: 127.0.0.1:43143
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35839
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-emdvj_5e
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:44251
distributed.worker - INFO - Listening to: tcp://127.0.0.1:44251
distributed.worker - INFO - dashboard at: 127.0.0.1:37837
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35839
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-ahsf01vi
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37271', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37271
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44251', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44251
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35839
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35839
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ed0c88d1-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.scheduler - INFO - Remove client Client-ed0c88d1-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ed0c88d1-56b7-11ec-9858-00163e03ed98
distributed.batched - INFO - Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35839 remote=tcp://127.0.0.1:54238>
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/distributed/batched.py", line 93, in _background_send
    nbytes = yield self.comm.write(
  File "/usr/lib/python3/dist-packages/tornado/gen.py", line 762, in run
    value = future.result()
  File "/usr/lib/python3/dist-packages/distributed/comm/tcp.py", line 241, in write
    raise CommClosedError()
distributed.comm.core.CommClosedError
distributed.scheduler - INFO - Close client connection: Client-ed0c88d1-56b7-11ec-9858-00163e03ed98
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37271
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44251
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37271', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:37271
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44251', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:44251
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
__________________________ test_worker_actions[True] ___________________________
separate_thread = True

    @pytest.mark.parametrize("separate_thread", [False, True])
    def test_worker_actions(separate_thread):
        @gen_cluster(client=True)
        async def test(c, s, a, b):
            counter = c.submit(Counter, workers=[a.address], actor=True)
            a_address = a.address

            def f(counter):
                start = counter.n

                assert type(counter) is Actor
                assert counter._address == a_address

                future = counter.increment(separate_thread=separate_thread)
                assert isinstance(future, ActorFuture)
                assert "Future" in type(future).__name__
                end = future.result(timeout=1)
                assert end > start

            futures = [c.submit(f, counter, pure=False) for _ in range(10)]
            await c.gather(futures)

            counter = await counter
            assert await counter.n == 10

>       test()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/utils_test.py:994: in test_func
    result = loop.run_sync(
/usr/lib/python3/dist-packages/tornado/ioloop.py:530: in run_sync
    return future_cell[0].result()
/usr/lib/python3/dist-packages/distributed/utils_test.py:953: in coro
    result = await future
/usr/lib/python3.10/asyncio/tasks.py:447: in wait_for
    return fut.result()
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:132: in test
    await c.gather(futures)
/usr/lib/python3/dist-packages/distributed/client.py:1831: in _gather
    raise exception.with_traceback(traceback)
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:125: in f
    future = counter.increment(separate_thread=separate_thread)
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    """Event loop mixins."""

    import threading
    from . import events

    _global_lock = threading.Lock()

    # Used as a sentinel for loop parameter
    _marker = object()

    class _LoopBoundMixin:
        _loop = None

        def __init__(self, *, loop=_marker):
            if loop is not _marker:
                raise TypeError(
                    f'As of 3.10, the *loop* parameter was removed from '
                    f'{type(self).__name__}() since it is no longer necessary'
                )

E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:45121
distributed.scheduler - INFO - dashboard at: 127.0.0.1:38083
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42727
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42727
distributed.worker - INFO - dashboard at: 127.0.0.1:43983
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45121
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-zwm31oo4
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:38807
distributed.worker - INFO - Listening to: tcp://127.0.0.1:38807
distributed.worker - INFO - dashboard at: 127.0.0.1:33775
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45121
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-cuwpk56t
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42727', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42727
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38807', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38807
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:45121
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:45121
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ed45fd22-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-1da4268b-3038-4f58-ba1c-f3e65d1487e6>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.scheduler - INFO - Remove client Client-ed45fd22-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ed45fd22-56b7-11ec-9858-00163e03ed98
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-1da4268b-3038-4f58-ba1c-f3e65d1487e6>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-1da4268b-3038-4f58-ba1c-f3e65d1487e6>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function: f
args: (<Actor: Counter, key=Counter-1da4268b-3038-4f58-ba1c-f3e65d1487e6>)
kwargs: {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.scheduler - INFO - Close client connection: Client-ed45fd22-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42727
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38807
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42727', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42727
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38807', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:38807
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
____________________________ test_exceptions_method ____________________________
c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:34393" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:34663', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:43181', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_exceptions_method(c, s, a, b):
        class Foo:
            def throw(self):
                1 / 0

        foo = await c.submit(Foo, actor=True)
        with pytest.raises(ZeroDivisionError):
>           await foo.throw()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:202:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7ace45e0>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
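[Commentary, not part of the log: every failure in this run bottoms out in the same frame, distributed/actor.py:171, which still passes the loop= keyword that Python 3.10 removed from asyncio.Queue(). A minimal sketch of the change in behavior and the likely shape of the fix (drop the argument and create the queue while the event loop is running, so it binds to that loop automatically); this illustrates the 3.10 semantics only and is not the actual upstream patch:]

```python
import asyncio

# distributed/actor.py:171 does:
#     q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
# On Python 3.10 that raises TypeError. Constructed without the argument,
# inside a running event loop, the queue binds to that loop by itself.

async def roundtrip(value):
    q = asyncio.Queue()  # no loop= argument on Python 3.10
    await q.put(value)
    return await q.get()

result = asyncio.run(roundtrip("increment"))
assert result == "increment"
```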
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:34393
distributed.scheduler - INFO - dashboard at: 127.0.0.1:34255
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:34663
distributed.worker - INFO - Listening to: tcp://127.0.0.1:34663
distributed.worker - INFO - dashboard at: 127.0.0.1:39381
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34393
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-c2k0rt1q
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:43181
distributed.worker - INFO - Listening to: tcp://127.0.0.1:43181
distributed.worker - INFO - dashboard at: 127.0.0.1:39127
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34393
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-su60qwfl
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34663', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34663
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43181', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43181
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34393
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34393
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-eddf1163-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-eddf1163-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-eddf1163-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-eddf1163-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34663
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43181
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34663', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:34663
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43181', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:43181
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
__________________________________ test_sync ___________________________________
client = <Client: 'tcp://127.0.0.1:43857' processes=2 threads=2, memory=15.51 GiB>

    def test_sync(client):
        counter = client.submit(Counter, actor=True)
        counter = counter.result()

        assert counter.n == 0

>       future = counter.increment()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:270:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a6dfa90>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
---------------------------- Captured stderr setup -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:43857
distributed.scheduler - INFO - dashboard at: 127.0.0.1:8787
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:33239
distributed.worker - INFO - Listening to: tcp://127.0.0.1:33239
distributed.worker - INFO - dashboard at: 127.0.0.1:42799
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:43857
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-ca4ba20c-463b-41af-b4a0-d155b0a0429f/dask-worker-space/worker-0i86tzxz
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33239', name: tcp://127.0.0.1:33239, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33239
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:43857
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:36397
distributed.worker - INFO - Listening to: tcp://127.0.0.1:36397
distributed.worker - INFO - dashboard at: 127.0.0.1:45387
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:43857
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-018524ad-f8bf-4b49-8d5a-311fd8135454/dask-worker-space/worker-yu70xndj
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36397', name: tcp://127.0.0.1:36397, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36397
distributed.worker - INFO - Registered to: tcp://127.0.0.1:43857
distributed.core - INFO - Starting established connection
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ef457bef-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
--------------------------- Captured stderr teardown ---------------------------
distributed.scheduler - INFO - Remove client Client-ef457bef-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ef457bef-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ef457bef-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33239
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36397
_____________________________ test_numpy_roundtrip _____________________________
c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:40517" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:38063', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:41763', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_numpy_roundtrip(c, s, a, b):
        np = pytest.importorskip("numpy")

        server = await c.submit(ParameterServer, actor=True)

        x = np.random.random(1000)
>       await server.put("x", x)

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:312:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7ac39540>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:40517
distributed.scheduler - INFO - dashboard at: 127.0.0.1:40079
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:38063
distributed.worker - INFO - Listening to: tcp://127.0.0.1:38063
distributed.worker - INFO - dashboard at: 127.0.0.1:39705
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:40517
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-vnqlhcrw
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:41763
distributed.worker - INFO - Listening to: tcp://127.0.0.1:41763
distributed.worker - INFO - dashboard at: 127.0.0.1:36501
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:40517
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-clwyyoff
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38063', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38063
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41763', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41763
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:40517
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:40517
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ef7289f7-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-ef7289f7-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ef7289f7-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ef7289f7-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38063
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41763
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38063', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:38063
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41763', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:41763
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
_________________________ test_numpy_roundtrip_getattr _________________________
c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:46495" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:42623', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:46353', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_numpy_roundtrip_getattr(c, s, a, b):
        np = pytest.importorskip("numpy")

        counter = await c.submit(Counter, actor=True)

        x = np.random.random(1000)
>       await counter.add(x)

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:327:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7accfa00>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:46495
distributed.scheduler - INFO - dashboard at: 127.0.0.1:37799
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42623
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42623
distributed.worker - INFO - dashboard at: 127.0.0.1:41251
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:46495
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-93dn8ysr
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:46353
distributed.worker - INFO - Listening to: tcp://127.0.0.1:46353
distributed.worker - INFO - dashboard at: 127.0.0.1:40369
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:46495
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-o50omzoo
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42623', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42623
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46353', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46353
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:46495
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:46495
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ef83ab02-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-ef83ab02-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ef83ab02-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ef83ab02-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42623
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46353
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42623', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42623
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46353', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:46353
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
____________________________ test_many_computations ____________________________
c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:38391" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:36957', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: -3>
b = <Worker: 'tcp://127.0.0.1:36931', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_many_computations(c, s, a, b):
        counter = await c.submit(Counter, actor=True)

        def add(n, counter):
            for i in range(n):
                counter.increment().result()

        futures = c.map(add, range(10), counter=counter)
        done = c.submit(lambda x: None, futures)

        while not done.done():
            assert len(s.processing) <= a.nthreads + b.nthreads
            await asyncio.sleep(0.01)

>       await done

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:370:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/client.py:240: in _result
    raise exc.with_traceback(tb)
/usr/lib/python3/dist-packages/dask/utils.py:35: in apply
    return func(*args, **kwargs)
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:361: in add
    counter.increment().result()
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    """Event loop mixins."""

    import threading

    from . import events

    _global_lock = threading.Lock()

    # Used as a sentinel for loop parameter
    _marker = object()

    class _LoopBoundMixin:
        _loop = None

        def __init__(self, *, loop=_marker):
            if loop is not _marker:
                raise TypeError(
                    f'As of 3.10, the *loop* parameter was removed from '
                    f'{type(self).__name__}() since it is no longer necessary'
                )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:38391 distributed.scheduler - INFO - dashboard at: 127.0.0.1:35569distributed.worker - INFO - Start worker at: tcp://127.0.0.1:36957 distributed.worker - INFO - Listening to: tcp://127.0.0.1:36957 distributed.worker - INFO - dashboard at: 127.0.0.1:44923 distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:38391 distributed.worker - INFO - ------------------------------------------------- distributed.worker - INFO - Threads: 1 distributed.worker - INFO - Memory: 7.76 GiB distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-nfapao56 distributed.worker - INFO - ------------------------------------------------- distributed.worker - INFO - Start worker at: tcp://127.0.0.1:36931 distributed.worker - INFO - Listening to: tcp://127.0.0.1:36931 distributed.worker - INFO - dashboard at: 127.0.0.1:36947 distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:38391 distributed.worker - INFO - ------------------------------------------------- distributed.worker - INFO - Threads: 2 distributed.worker - INFO - Memory: 7.76 GiB distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-mctqjba3 distributed.worker - INFO - ------------------------------------------------- distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36957', name: 0, memory: 0, processing: 0> distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36957
distributed.core - INFO - Starting established connectiondistributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36931', name: 1, memory: 0, processing: 0> distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36931
distributed.core - INFO - Starting established connectiondistributed.worker - INFO - Registered to: tcp://127.0.0.1:38391 distributed.worker - INFO - ------------------------------------------------- distributed.worker - INFO - Registered to: tcp://127.0.0.1:38391 distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection distributed.core - INFO - Starting established connectiondistributed.scheduler - INFO - Receive client connection: Client-efc7ba22-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection distributed.worker - WARNING - Compute Failed Function: execute_taskargs: ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afa9e10>, (<class 'tuple'>, [1]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs: {}Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function:  execute_task
args:      ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afa9ea0>, (<class 'tuple'>, [2]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function:  execute_task
args:      ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afa9f30>, (<class 'tuple'>, [3]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function:  execute_task
args:      ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afa9fc0>, (<class 'tuple'>, [4]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function:  execute_task
args:      ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afaa050>, (<class 'tuple'>, [5]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function:  execute_task
args:      ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afaa0e0>, (<class 'tuple'>, [9]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - WARNING - Compute Failed
Function:  execute_task
args:      ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afaa170>, (<class 'tuple'>, [6]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.scheduler - INFO - Remove client Client-efc7ba22-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-efc7ba22-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-efc7ba22-56b7-11ec-9858-00163e03ed98
distributed.worker - WARNING - Compute Failed
Function:  execute_task
args:      ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afaa200>, (<class 'tuple'>, [7]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36957
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36931
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36957', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:36957
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36931', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:36931
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
______________________________ test_thread_safety ______________________________
c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:45091" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:33325', 0, Status.closed, stored: 0, running: 0/5, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:40825', 1, Status.closed, stored: 0, running: 0/5, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True, nthreads=[("127.0.0.1", 5)] * 2)
    async def test_thread_safety(c, s, a, b):
        class Unsafe:
            def __init__(self):
                self.n = 0

            def f(self):
                assert self.n == 0
                self.n += 1
                for i in range(20):
                    sleep(0.002)
                    assert self.n == 1
                self.n = 0

        unsafe = await c.submit(Unsafe, actor=True)

>       futures = [unsafe.f() for i in range(10)]

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:390:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:390: in <listcomp>
    futures = [unsafe.f() for i in range(10)]
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7acddd50>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:45091
distributed.scheduler - INFO - dashboard at: 127.0.0.1:43117
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:33325
distributed.worker - INFO - Listening to: tcp://127.0.0.1:33325
distributed.worker - INFO - dashboard at: 127.0.0.1:45515
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45091
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 5
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-0s13hhu0
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:40825
distributed.worker - INFO - Listening to: tcp://127.0.0.1:40825
distributed.worker - INFO - dashboard at: 127.0.0.1:34295
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45091
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 5
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-6j5eo951
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33325', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33325
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40825', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40825
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:45091
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:45091
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-efef9a67-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-efef9a67-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-efef9a67-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-efef9a67-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33325
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40825
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33325', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:33325
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40825', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:40825
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
______________________________ test_compute_sync _______________________________
client = <Client: 'tcp://127.0.0.1:35271' processes=2 threads=2, memory=15.51 GiB>

    def test_compute_sync(client):
        @dask.delayed
        def f(n, counter):
            assert isinstance(counter, Actor), type(counter)
            for i in range(n):
                counter.increment().result()

        @dask.delayed
        def check(counter, blanks):
            return counter.n

        counter = dask.delayed(Counter)()
        values = [f(i, counter) for i in range(5)]
        final = check(counter, values)

>       result = final.compute(actors=counter)

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:517:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/dask/base.py:288: in compute
    (result,) = compute(self, traverse=False, **kwargs)
/usr/lib/python3/dist-packages/dask/base.py:570: in compute
    results = schedule(dsk, keys, **kwargs)
/usr/lib/python3/dist-packages/distributed/client.py:2689: in get
    results = self.gather(packed, asynchronous=asynchronous, direct=direct)
/usr/lib/python3/dist-packages/distributed/client.py:1966: in gather
    return self.sync(
/usr/lib/python3/dist-packages/distributed/client.py:860: in sync
    return sync(
/usr/lib/python3/dist-packages/distributed/utils.py:330: in sync
    raise exc.with_traceback(tb)
/usr/lib/python3/dist-packages/distributed/utils.py:313: in f
    result[0] = yield future
/usr/lib/python3/dist-packages/tornado/gen.py:762: in run
    value = future.result()
/usr/lib/python3/dist-packages/distributed/client.py:1831: in _gather
    raise exception.with_traceback(traceback)
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:507: in f
    counter.increment().result()
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    """Event loop mixins."""

    import threading

    from . import events

    _global_lock = threading.Lock()

    # Used as a sentinel for loop parameter
    _marker = object()


    class _LoopBoundMixin:
        _loop = None

        def __init__(self, *, loop=_marker):
            if loop is not _marker:
                raise TypeError(
                    f'As of 3.10, the *loop* parameter was removed from '
                    f'{type(self).__name__}() since it is no longer necessary'
                )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
---------------------------- Captured stderr setup -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:35271
distributed.scheduler - INFO - dashboard at: 127.0.0.1:8787
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:40799
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:40631
distributed.worker - INFO - Listening to: tcp://127.0.0.1:40799
distributed.worker - INFO - Listening to: tcp://127.0.0.1:40631
distributed.worker - INFO - dashboard at: 127.0.0.1:39337
distributed.worker - INFO - dashboard at: 127.0.0.1:35887
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35271
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35271
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-73ed35a0-f5c2-4c8a-b8b0-9d5eb9f9d2b8/dask-worker-space/worker-eyl89eh1
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-2923585d-1ffa-41fc-8ad6-0b47e6475918/dask-worker-space/worker-1wbkt2th
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40631', name: tcp://127.0.0.1:40631, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40631
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35271
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40799', name: tcp://127.0.0.1:40799, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40799
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35271
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f0fb320c-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
----------------------------- Captured stderr call -----------------------------
distributed.worker - WARNING - Compute Failed
Function:  f
args:      (3, <Actor: Counter, key=Counter-c4b5effc-5ff1-4f60-af17-97fbf296242e>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')
--------------------------- Captured stderr teardown ---------------------------
distributed.scheduler - INFO - Receive client connection: Client-worker-f1031dac-56b7-11ec-98f8-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f0fb320c-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f0fb320c-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f0fb320c-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40631
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40799
distributed.scheduler - INFO - Remove client Client-worker-f1031dac-56b7-11ec-98f8-00163e03ed98
distributed.scheduler - INFO - Remove client Client-worker-f1031dac-56b7-11ec-98f8-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-worker-f1031dac-56b7-11ec-98f8-00163e03ed98
____________________________ test_actors_in_profile ____________________________
c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:40699" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:35211', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(
        client=True,
        nthreads=[("127.0.0.1", 1)],
        config={"distributed.worker.profile.interval": "1ms"},
    )
    async def test_actors_in_profile(c, s, a):
        class Sleeper:
            def sleep(self, time):
                sleep(time)

        sleeper = await c.submit(Sleeper, actor=True)

        for i in range(5):
>           await sleeper.sleep(0.200)

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:542:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a7d0ca0>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:40699
distributed.scheduler - INFO - dashboard at: 127.0.0.1:35005
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:35211
distributed.worker - INFO - Listening to: tcp://127.0.0.1:35211
distributed.worker - INFO - dashboard at: 127.0.0.1:33883
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:40699
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-52oroadv
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35211', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35211
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:40699
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f1324945-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f1324945-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f1324945-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f1324945-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35211
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35211', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:35211
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
_________________________________ test_waiter __________________________________
c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:41223" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:42029', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:42963', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_waiter(c, s, a, b):
        from tornado.locks import Event

        class Waiter:
            def __init__(self):
                self.event = Event()

            async def set(self):
                self.event.set()

            async def wait(self):
                await self.event.wait()

        waiter = await c.submit(Waiter, actor=True)

>       futures = [waiter.wait() for _ in range(5)]  # way more than we have actor threads

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:567:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:567: in <listcomp>
    futures = [waiter.wait() for _ in range(5)]  # way more than we have actor threads
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff78639f30>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:41223
distributed.scheduler - INFO - dashboard at: 127.0.0.1:45349
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42029
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42029
distributed.worker - INFO - dashboard at: 127.0.0.1:43833
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:41223
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-9mcbqkpw
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42963
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42963
distributed.worker - INFO - dashboard at: 127.0.0.1:45869
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:41223
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-6kys1y2x
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42029', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42029
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42963', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42963
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:41223
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:41223
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f14425fc-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f14425fc-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f14425fc-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f14425fc-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42029
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42963
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42029', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42029
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42963', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42963
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
___________________________ test_one_thread_deadlock ___________________________

    def test_one_thread_deadlock():
        with cluster(nworkers=2) as (cl, w):
            client = Client(cl["address"])
            ac = client.submit(Counter, actor=True).result()
            ac2 = client.submit(UsesCounter, actor=True, workers=[ac._address]).result()

>           assert ac2.do_inc(ac).result() == 1

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:638:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7aaa4c10>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://127.0.0.1:34207
distributed.scheduler - INFO - dashboard at: 127.0.0.1:8787
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:44305
distributed.worker - INFO - Listening to: tcp://127.0.0.1:44305
distributed.worker - INFO - dashboard at: 127.0.0.1:34449
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42909
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34207
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42909
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - dashboard at: 127.0.0.1:33311
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34207
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-9f81afdb-58e5-4649-a7dc-8dadbc2923de/dask-worker-space/worker-6qqc2pg5
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-93118b3f-310f-48e4-9df1-dda4f37ba379/dask-worker-space/worker-vr8c0pc7
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42909', name: tcp://127.0.0.1:42909, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42909
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34207
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44305', name: tcp://127.0.0.1:44305, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44305
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34207
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f36b097f-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42909
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44305
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42909', name: tcp://127.0.0.1:42909, memory: 2, processing: 0>
_____________________________ test_async_deadlock ______________________________
client = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:39325" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:46605', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:35169', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_async_deadlock(client, s, a, b):
        ac = await client.submit(Counter, actor=True)
        ac2 = await client.submit(UsesCounter, actor=True, workers=[ac._address])

>       assert (await ac2.ado_inc(ac)) == 1

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:646:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a9a2980>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39325
distributed.scheduler - INFO -   dashboard at:          127.0.0.1:46377
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46605
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46605
distributed.worker - INFO -          dashboard at:            127.0.0.1:37953
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39325
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-bg9tle7c
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:35169
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:35169
distributed.worker - INFO -          dashboard at:            127.0.0.1:46443
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:39325
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          2
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-0ertus9n
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46605', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46605
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35169', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35169
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39325
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:39325
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f382fa9e-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f382fa9e-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f382fa9e-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f382fa9e-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46605
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35169
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46605', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:46605
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35169', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:35169
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
________________________________ test_exception ________________________________
    def test_exception():
        class MyException(Exception):
            pass

        class Broken:
            def method(self):
                raise MyException

            @property
            def prop(self):
                raise MyException

        with cluster(nworkers=2) as (cl, w):
            client = Client(cl["address"])
            ac = client.submit(Broken, actor=True).result()
>           acfut = ac.method()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:664:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a95f8e0>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35703
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:8787
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45499
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45499
distributed.worker - INFO -          dashboard at:            127.0.0.1:43513
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35703
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-0867a71c-35bc-4fa4-80c3-97578faef6d6/dask-worker-space/worker-p7ueclgl
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:33335
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:33335
distributed.worker - INFO -          dashboard at:            127.0.0.1:40217
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:35703
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-aadea82d-d79c-418c-b87f-ca9691fd7067/dask-worker-space/worker-4a0h_eu8
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45499', name: tcp://127.0.0.1:45499, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45499
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35703
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33335', name: tcp://127.0.0.1:33335, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33335
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:35703
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f426cfbd-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33335
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45499
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33335', name: tcp://127.0.0.1:33335, memory: 1, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:33335
_____________________________ test_exception_async _____________________________
client = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:41503" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:42323', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:45655', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_exception_async(client, s, a, b):
        class MyException(Exception):
            pass

        class Broken:
            def method(self):
                raise MyException

            @property
            def prop(self):
                raise MyException

        ac = await client.submit(Broken, actor=True)
>       acfut = ac.method()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:686:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7aa93280>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41503
distributed.scheduler - INFO -   dashboard at:          127.0.0.1:37235
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42323
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42323
distributed.worker - INFO -          dashboard at:            127.0.0.1:43983
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41503
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-8ylggvje
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:45655
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:45655
distributed.worker - INFO -          dashboard at:            127.0.0.1:36813
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:41503
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          2
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-al9gkp4t
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42323', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42323
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45655', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45655
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41503
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:41503
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f43c31c7-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f43c31c7-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f43c31c7-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f43c31c7-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42323
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45655
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42323', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42323
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45655', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:45655
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
______________________________ test_as_completed _______________________________
client = <Client: 'tcp://127.0.0.1:42513' processes=2 threads=2, memory=15.51 GiB>

    def test_as_completed(client):
        ac = client.submit(Counter, actor=True).result()
>       futures = [ac.increment() for _ in range(10)]

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:696:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:696: in <listcomp>
    futures = [ac.increment() for _ in range(10)]
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a97a4d0>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
---------------------------- Captured stderr setup -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42513
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:8787
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39641
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39641
distributed.worker - INFO -          dashboard at:            127.0.0.1:42349
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42513
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-1b80a926-019d-43f3-800d-1be9a80818ca/dask-worker-space/worker-rofipp7j
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39641', name: tcp://127.0.0.1:39641, memory: 0, processing: 0>
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:41607
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:41607
distributed.worker - INFO -          dashboard at:            127.0.0.1:36635
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:42513
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39641
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.core - INFO - Starting established connection
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-03a35189-d9de-4175-bc26-b383d2f37c08/dask-worker-space/worker-jv1wqflf
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42513
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41607', name: tcp://127.0.0.1:41607, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41607
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:42513
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f4dfc1c5-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
--------------------------- Captured stderr teardown ---------------------------
distributed.scheduler - INFO - Remove client Client-f4dfc1c5-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f4dfc1c5-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f4dfc1c5-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39641
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41607
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39641', name: tcp://127.0.0.1:39641, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:39641
_________________________ test_actor_future_awaitable __________________________
client = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:34913" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:39179', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:42439', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True, timeout=3)
    async def test_actor_future_awaitable(client, s, a, b):
        ac = await client.submit(Counter, actor=True)
>       futures = [ac.increment() for _ in range(10)]

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:711:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:711: in <listcomp>
    futures = [ac.increment() for _ in range(10)]
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a932a70>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E   TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34913
distributed.scheduler - INFO -   dashboard at:          127.0.0.1:34647
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:39179
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:39179
distributed.worker - INFO -          dashboard at:            127.0.0.1:33445
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34913
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-khssoc_q
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:42439
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:42439
distributed.worker - INFO -          dashboard at:            127.0.0.1:38001
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:34913
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          2
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-q9pm6_pq
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39179', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39179
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42439', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42439
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34913
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:34913
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f4f64ad2-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f4f64ad2-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f4f64ad2-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f4f64ad2-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39179
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42439
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39179', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:39179
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42439', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42439
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
______________________ test_client_gather_semaphore_loop _______________________
s = <Scheduler: "tcp://127.0.0.1:37599" workers: 0 cores: 0, tasks: 0>

    @gen_cluster(nthreads=[])
    async def test_client_gather_semaphore_loop(s):
        async with Client(s.address, asynchronous=True) as c:
>           assert c._gather_semaphore._loop is c.loop.asyncio_loop
E           AssertionError: assert None is <_UnixSelectorEventLoop running=True closed=False debug=False>
E            +  where None = <asyncio.locks.Semaphore object at 0xffff37997460 [unlocked, value:5]>._loop
E            +    where <asyncio.locks.Semaphore object at 0xffff37997460 [unlocked, value:5]> = <Client: 'tcp://127.0.0.1:37599' processes=0 threads=0, memory=0 B>._gather_semaphore
E            +  and   <_UnixSelectorEventLoop running=True closed=False debug=False> = <tornado.platform.asyncio.AsyncIOLoop object at 0xffff37976ce0>.asyncio_loop
E            +    where <tornado.platform.asyncio.AsyncIOLoop object at 0xffff37976ce0> = <Client: 'tcp://127.0.0.1:37599' processes=0 threads=0, memory=0 B>.loop

/usr/lib/python3/dist-packages/distributed/tests/test_client.py:6306: AssertionError
_______________________ test_as_completed_condition_loop _______________________
c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:39425" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:36581', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:34017', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_as_completed_condition_loop(c, s, a, b):
        seq = c.map(inc, range(5))
        ac = as_completed(seq)
>       assert ac.condition._loop == c.loop.asyncio_loop
E       assert None == <_UnixSelectorEventLoop running=True closed=False debug=False>
E         +None
E         -<_UnixSelectorEventLoop running=True closed=False debug=False>

/usr/lib/python3/dist-packages/distributed/tests/test_client.py:6313: AssertionError
__________________ test_client_connectionpool_semaphore_loop ___________________
s = {'address': 'tcp://127.0.0.1:44083'}
a = {'address': 'tcp://127.0.0.1:32867', 'proc': <weakref at 0xffff379bcc20; to 'SpawnProcess' at 0xffff379ade70>}
b = {'address': 'tcp://127.0.0.1:46399', 'proc': <weakref at 0xffff379bc220; to 'SpawnProcess' at 0xffff379ad150>}

    def test_client_connectionpool_semaphore_loop(s, a, b):
        with Client(s["address"]) as c:
>           assert c.rpc.semaphore._loop is c.loop.asyncio_loop
E           AssertionError: assert None is <_UnixSelectorEventLoop running=True closed=False debug=False>
E            +  where None = <asyncio.locks.Semaphore object at 0xffff379d7c70 [unlocked, value:511]>._loop
E            +    where <asyncio.locks.Semaphore object at 0xffff379d7c70 [unlocked, value:511]> = <ConnectionPool: open=1, active=0, connecting=0>.semaphore
E            +    where <ConnectionPool: open=1, active=0, connecting=0> = <Client: 'tcp://127.0.0.1:44083' processes=2 threads=2, memory=15.51 GiB>.rpc
E            +  and   <_UnixSelectorEventLoop running=True closed=False debug=False> = <tornado.platform.asyncio.AsyncIOLoop object at 0xffff379af190>.asyncio_loop
E            +    where <tornado.platform.asyncio.AsyncIOLoop object at 0xffff379af190> = <Client: 'tcp://127.0.0.1:44083' processes=2 threads=2, memory=15.51 GiB>.loop

/usr/lib/python3/dist-packages/distributed/tests/test_client.py:6318: AssertionError
---------------------------- Captured stderr setup -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at:     tcp://127.0.0.1:44083
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:8787
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:32867
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:32867
distributed.worker - INFO -          dashboard at:            127.0.0.1:46829
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44083
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-46d2d926-d664-4f9d-b27a-76ef0f34c482/dask-worker-space/worker-tyjyvqd0
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32867', name: tcp://127.0.0.1:32867, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32867
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44083
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.worker - INFO -       Start worker at:      tcp://127.0.0.1:46399
distributed.worker - INFO -          Listening to:      tcp://127.0.0.1:46399
distributed.worker - INFO -          dashboard at:            127.0.0.1:45003
distributed.worker - INFO - Waiting to connect to:      tcp://127.0.0.1:44083
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                   7.76 GiB
distributed.worker - INFO -       Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-3e9cc2a3-e95f-408b-8ec8-b024f55ab4a9/dask-worker-space/worker-r6a_g_7y
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46399', name: tcp://127.0.0.1:46399, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46399
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:      tcp://127.0.0.1:44083
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Receive client connection: Client-8a53a99f-56b8-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-8a53a99f-56b8-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-8a53a99f-56b8-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-8a53a99f-56b8-11ec-9858-00163e03ed98
--------------------------- Captured stderr teardown ---------------------------
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32867
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46399