
Bug#1001241: dask.distributed: (autopkgtest) needs update for python3.10: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary



Source: dask.distributed
Version: 2021.09.1+ds.1-2
Severity: serious
X-Debbugs-CC: debian-ci@lists.debian.org
Tags: sid bookworm
User: debian-ci@lists.debian.org
Usertags: needs-update
Control: affects -1 src:python3-defaults

Dear maintainer(s),

With a recent upload of python3-defaults, the autopkgtest of dask.distributed fails in testing when that autopkgtest is run with the binary packages of python3-defaults from unstable. It passes when run with only packages from testing. In tabular form:

                       pass            fail
python3-defaults       from testing    3.9.8-1
dask.distributed       from testing    2021.09.1+ds.1-2
all others             from testing    from testing

I copied some of the output at the bottom of this report.

Currently this regression is blocking the migration of python3-defaults to testing [1]. Of course, python3-defaults shouldn't just break your autopkgtest (or, even worse, your package), but it seems to me that the change in python3-defaults was intentional, so your package needs to be updated for the new situation.
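
For reference, the failing call is in distributed/actor.py line 171:

    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)

The following is a minimal standalone sketch of the Python 3.10 behaviour
(plain asyncio, not dask.distributed code); whether simply dropping the
argument is the right fix for actor.py depends on which loop is running
when the queue is created, so treat it as an illustration of the 3.10
change rather than the upstream fix:

    import asyncio

    async def main():
        loop = asyncio.get_running_loop()

        # Deprecated since Python 3.8 and removed in 3.10: on 3.10 this
        # raises the TypeError quoted throughout the log below.
        try:
            asyncio.Queue(loop=loop)
        except TypeError as exc:
            print(exc)

        # Since 3.10 a Queue binds to the running event loop by itself,
        # so the loop argument can simply be dropped.
        q = asyncio.Queue()
        await q.put(1)
        assert await q.get() == 1

    asyncio.run(main())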

If this is a real problem in your package (and not only in your autopkgtest), the right binary package(s) from python3-defaults should really add a versioned Breaks on the unfixed version of (one of) your package(s). Note: a Breaks is useful even if the issue is only in the autopkgtest, as it helps the migration software figure out the right versions to combine in the tests. An illustrative stanza follows.
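
For illustration only, such a stanza in the debian/control of a
python3-defaults binary package could look roughly like the following
(the choice of binary package and the fixed dask.distributed version
are placeholders, to be filled in once a fixed upload exists):

    Package: python3
    Breaks: python3-distributed (<< 2021.09.1+ds.1-3~)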

More information about this bug and the reason for filing it can be found on
https://wiki.debian.org/ContinuousIntegration/RegressionEmailInformation

Paul

[1] https://qa.debian.org/excuses.php?package=python3-defaults

https://ci.debian.net/data/autopkgtest/testing/arm64/d/dask.distributed/17341392/log.gz

=================================== FAILURES ===================================
__________________________ test_client_actions[True] ___________________________

direct_to_workers = True


    @pytest.mark.parametrize("direct_to_workers", [True, False])
    def test_client_actions(direct_to_workers):
        @gen_cluster(client=True)
        async def test(c, s, a, b):
            c = await Client(
                s.address, asynchronous=True, direct_to_workers=direct_to_workers
            )

            counter = c.submit(Counter, workers=[a.address], actor=True)
            assert isinstance(counter, Future)
            counter = await counter
            assert counter._address
            assert hasattr(counter, "increment")
            assert hasattr(counter, "add")
            assert hasattr(counter, "n")

            n = await counter.n
            assert n == 0

            assert counter._address == a.address
            assert isinstance(a.actors[counter.key], Counter)
            assert s.tasks[counter.key].actor

            await asyncio.gather(counter.increment(), counter.increment())
            n = await counter.n
            assert n == 2

            counter.add(10)
            while (await counter.n) != 10 + 2:
                n = await counter.n
                await asyncio.sleep(0.01)

            await c.close()

>       test()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/utils_test.py:994: in test_func
    result = loop.run_sync(
/usr/lib/python3/dist-packages/tornado/ioloop.py:530: in run_sync
    return future_cell[0].result()
/usr/lib/python3/dist-packages/distributed/utils_test.py:953: in coro
    result = await future
/usr/lib/python3.10/asyncio/tasks.py:447: in wait_for
    return fut.result()
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:97: in test
    await asyncio.gather(counter.increment(), counter.increment())
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a6dfa60>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
>           raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E           TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35377
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:42921
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:36503
distributed.worker - INFO - Listening to: tcp://127.0.0.1:36503
distributed.worker - INFO - dashboard at: 127.0.0.1:45901
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35377
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-3xgwbx0k
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:45731
distributed.worker - INFO - Listening to: tcp://127.0.0.1:45731
distributed.worker - INFO - dashboard at: 127.0.0.1:40889
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35377
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-wnnrz3wp
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36503', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36503
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45731', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45731
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35377
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35377
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ecc11143-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ecc2ff2e-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-ecc11143-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ecc11143-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ecc11143-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36503
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45731
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36503', name: 0, memory: 1, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:36503
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45731', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:45731
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
__________________________ test_client_actions[False] __________________________

direct_to_workers = False


    @pytest.mark.parametrize("direct_to_workers", [True, False])
    def test_client_actions(direct_to_workers):
        @gen_cluster(client=True)
        async def test(c, s, a, b):
            c = await Client(
                s.address, asynchronous=True, direct_to_workers=direct_to_workers
            )

            counter = c.submit(Counter, workers=[a.address], actor=True)
            assert isinstance(counter, Future)
            counter = await counter
            assert counter._address
            assert hasattr(counter, "increment")
            assert hasattr(counter, "add")
            assert hasattr(counter, "n")

            n = await counter.n
            assert n == 0

            assert counter._address == a.address
            assert isinstance(a.actors[counter.key], Counter)
            assert s.tasks[counter.key].actor

            await asyncio.gather(counter.increment(), counter.increment())
            n = await counter.n
            assert n == 2

            counter.add(10)
            while (await counter.n) != 10 + 2:
                n = await counter.n
                await asyncio.sleep(0.01)

            await c.close()

>       test()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:109:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/utils_test.py:994: in test_func
    result = loop.run_sync(
/usr/lib/python3/dist-packages/tornado/ioloop.py:530: in run_sync
    return future_cell[0].result()
/usr/lib/python3/dist-packages/distributed/utils_test.py:953: in coro
    result = await future
/usr/lib/python3.10/asyncio/tasks.py:447: in wait_for
    return fut.result()
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:97: in test
    await asyncio.gather(counter.increment(), counter.increment())
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a72a140>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
>           raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E           TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:33073
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37349
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:39083
distributed.worker - INFO - Listening to: tcp://127.0.0.1:39083
distributed.worker - INFO - dashboard at: 127.0.0.1:33581
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:33073
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-e3ylm9j0
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:33459
distributed.worker - INFO - Listening to: tcp://127.0.0.1:33459
distributed.worker - INFO - dashboard at: 127.0.0.1:38337
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:33073
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-47r_y5bl
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39083', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39083
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33459', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33459
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:33073
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:33073
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ece6eaec-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ece8d896-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-ece6eaec-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ece6eaec-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ece6eaec-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39083
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33459
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39083', name: 0, memory: 1, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:39083
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33459', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:33459
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
__________________________ test_worker_actions[False] __________________________

separate_thread = False


@pytest.mark.parametrize("separate_thread", [False, True]) def test_worker_actions(separate_thread):
        @gen_cluster(client=True)
async def test(c, s, a, b): counter = c.submit(Counter, workers=[a.address], actor=True)
            a_address = a.address
                def f(counter):
                start = counter.n
assert type(counter) is Actor
                assert counter._address == a_address
future = counter.increment(separate_thread=separate_thread) assert isinstance(future, ActorFuture) assert "Future" in type(future).__name__
                end = future.result(timeout=1)
                assert end > start
futures = [c.submit(f, counter, pure=False) for _ in range(10)]
            await c.gather(futures)
                counter = await counter
assert await counter.n == 10
    >       test()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/utils_test.py:994: in test_func
    result = loop.run_sync(
/usr/lib/python3/dist-packages/tornado/ioloop.py:530: in run_sync
    return future_cell[0].result()
/usr/lib/python3/dist-packages/distributed/utils_test.py:953: in coro
    result = await future
/usr/lib/python3.10/asyncio/tasks.py:447: in wait_for
    return fut.result()
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:132: in test
    await c.gather(futures)
/usr/lib/python3/dist-packages/distributed/client.py:1831: in _gather
    raise exception.with_traceback(traceback)
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:125: in f
    future = counter.increment(separate_thread=separate_thread)
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    """Event loop mixins."""
        import threading
from . import events
        _global_lock = threading.Lock()
        # Used as a sentinel for loop parameter
    _marker = object()
            class _LoopBoundMixin:
        _loop = None
def __init__(self, *, loop=_marker): if loop is not _marker:
              raise TypeError(
f'As of 3.10, the *loop* parameter was removed from '

f'{type(self).__name__}() since it is no longer necessary'
                )
E TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35839
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:33627
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:37271
distributed.worker - INFO - Listening to: tcp://127.0.0.1:37271
distributed.worker - INFO - dashboard at: 127.0.0.1:43143
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35839
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-emdvj_5e
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:44251
distributed.worker - INFO - Listening to: tcp://127.0.0.1:44251
distributed.worker - INFO - dashboard at: 127.0.0.1:37837
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35839
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-ahsf01vi
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:37271', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:37271
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44251', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44251
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35839
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35839
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ed0c88d1-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.scheduler - INFO - Remove client Client-ed0c88d1-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ed0c88d1-56b7-11ec-9858-00163e03ed98
distributed.batched - INFO - Batched Comm Closed <TCP (closed) Scheduler->Client local=tcp://127.0.0.1:35839 remote=tcp://127.0.0.1:54238>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/distributed/batched.py", line 93, in _background_send
    nbytes = yield self.comm.write(
  File "/usr/lib/python3/dist-packages/tornado/gen.py", line 762, in run
    value = future.result()
File "/usr/lib/python3/dist-packages/distributed/comm/tcp.py", line 241, in write
    raise CommClosedError()
distributed.comm.core.CommClosedError
distributed.scheduler - INFO - Close client connection: Client-ed0c88d1-56b7-11ec-9858-00163e03ed98
distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-98dff89f-b463-4850-addf-61a6b235ce42>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:37271
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44251
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:37271', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:37271
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:44251', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:44251
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
__________________________ test_worker_actions[True] ___________________________

separate_thread = True


@pytest.mark.parametrize("separate_thread", [False, True]) def test_worker_actions(separate_thread):
        @gen_cluster(client=True)
async def test(c, s, a, b): counter = c.submit(Counter, workers=[a.address], actor=True)
            a_address = a.address
                def f(counter):
                start = counter.n
assert type(counter) is Actor
                assert counter._address == a_address
future = counter.increment(separate_thread=separate_thread) assert isinstance(future, ActorFuture) assert "Future" in type(future).__name__
                end = future.result(timeout=1)
                assert end > start
futures = [c.submit(f, counter, pure=False) for _ in range(10)]
            await c.gather(futures)
                counter = await counter
assert await counter.n == 10
    >       test()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:137:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/utils_test.py:994: in test_func
    result = loop.run_sync(
/usr/lib/python3/dist-packages/tornado/ioloop.py:530: in run_sync
    return future_cell[0].result()
/usr/lib/python3/dist-packages/distributed/utils_test.py:953: in coro
    result = await future
/usr/lib/python3.10/asyncio/tasks.py:447: in wait_for
    return fut.result()
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:132: in test
    await c.gather(futures)
/usr/lib/python3/dist-packages/distributed/client.py:1831: in _gather
    raise exception.with_traceback(traceback)
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:125: in f
    future = counter.increment(separate_thread=separate_thread)
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    """Event loop mixins."""
        import threading
from . import events
        _global_lock = threading.Lock()
        # Used as a sentinel for loop parameter
    _marker = object()
            class _LoopBoundMixin:
        _loop = None
def __init__(self, *, loop=_marker): if loop is not _marker:
              raise TypeError(
f'As of 3.10, the *loop* parameter was removed from '

f'{type(self).__name__}() since it is no longer necessary'
                )
E TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45121
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:38083
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42727
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42727
distributed.worker - INFO - dashboard at: 127.0.0.1:43983
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45121
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-zwm31oo4
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:38807
distributed.worker - INFO - Listening to: tcp://127.0.0.1:38807
distributed.worker - INFO - dashboard at: 127.0.0.1:33775
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45121
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-cuwpk56t
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42727', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42727
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38807', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38807
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:45121
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:45121
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ed45fd22-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-1da4268b-3038-4f58-ba1c-f3e65d1487e6>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.scheduler - INFO - Remove client Client-ed45fd22-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ed45fd22-56b7-11ec-9858-00163e03ed98
distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-1da4268b-3038-4f58-ba1c-f3e65d1487e6>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-1da4268b-3038-4f58-ba1c-f3e65d1487e6>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  f
args: (<Actor: Counter, key=Counter-1da4268b-3038-4f58-ba1c-f3e65d1487e6>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.scheduler - INFO - Close client connection: Client-ed45fd22-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42727
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38807
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42727', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42727
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38807', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:38807
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
____________________________ test_exceptions_method ____________________________

c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:34393" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:34663', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0> b = <Worker: 'tcp://127.0.0.1:43181', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_exceptions_method(c, s, a, b):
        class Foo:
            def throw(self):
                1 / 0

        foo = await c.submit(Foo, actor=True)

        with pytest.raises(ZeroDivisionError):
>           await foo.throw()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:202:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7ace45e0>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
>           raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E           TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34393
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34255
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:34663
distributed.worker - INFO - Listening to: tcp://127.0.0.1:34663
distributed.worker - INFO - dashboard at: 127.0.0.1:39381
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34393
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-c2k0rt1q
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:43181
distributed.worker - INFO - Listening to: tcp://127.0.0.1:43181
distributed.worker - INFO - dashboard at: 127.0.0.1:39127
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34393
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-su60qwfl
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:34663', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:34663
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:43181', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:43181
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34393
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34393
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-eddf1163-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-eddf1163-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-eddf1163-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-eddf1163-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:34663
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:43181
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:34663', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:34663
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:43181', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:43181
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
__________________________________ test_sync ___________________________________

client = <Client: 'tcp://127.0.0.1:43857' processes=2 threads=2, memory=15.51 GiB>

    def test_sync(client):
        counter = client.submit(Counter, actor=True)
        counter = counter.result()

        assert counter.n == 0

>       future = counter.increment()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:270:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a6dfa90>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
>           raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E           TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
---------------------------- Captured stderr setup -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:43857
distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:33239
distributed.worker - INFO - Listening to: tcp://127.0.0.1:33239
distributed.worker - INFO - dashboard at: 127.0.0.1:42799
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:43857
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-ca4ba20c-463b-41af-b4a0-d155b0a0429f/dask-worker-space/worker-0i86tzxz
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33239', name: tcp://127.0.0.1:33239, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33239
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:43857
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:36397
distributed.worker - INFO - Listening to: tcp://127.0.0.1:36397
distributed.worker - INFO - dashboard at: 127.0.0.1:45387
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:43857
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-018524ad-f8bf-4b49-8d5a-311fd8135454/dask-worker-space/worker-yu70xndj
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36397', name: tcp://127.0.0.1:36397, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36397
distributed.worker - INFO - Registered to: tcp://127.0.0.1:43857
distributed.core - INFO - Starting established connection
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ef457bef-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
--------------------------- Captured stderr teardown ---------------------------
distributed.scheduler - INFO - Remove client Client-ef457bef-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ef457bef-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ef457bef-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33239
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36397
_____________________________ test_numpy_roundtrip _____________________________

c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:40517" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:38063', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0> b = <Worker: 'tcp://127.0.0.1:41763', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_numpy_roundtrip(c, s, a, b):
        np = pytest.importorskip("numpy")

        server = await c.submit(ParameterServer, actor=True)

        x = np.random.random(1000)
>       await server.put("x", x)

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:312:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7ac39540>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
>           raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E           TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40517
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:40079
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:38063
distributed.worker - INFO - Listening to: tcp://127.0.0.1:38063
distributed.worker - INFO - dashboard at: 127.0.0.1:39705
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:40517
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-vnqlhcrw
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:41763
distributed.worker - INFO - Listening to: tcp://127.0.0.1:41763
distributed.worker - INFO - dashboard at: 127.0.0.1:36501
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:40517
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-clwyyoff
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:38063', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:38063
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41763', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41763
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:40517
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:40517
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ef7289f7-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-ef7289f7-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ef7289f7-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ef7289f7-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:38063
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41763
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:38063', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:38063
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:41763', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:41763
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
_________________________ test_numpy_roundtrip_getattr _________________________

c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:46495" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:42623', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0> b = <Worker: 'tcp://127.0.0.1:46353', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_numpy_roundtrip_getattr(c, s, a, b):
        np = pytest.importorskip("numpy")

        counter = await c.submit(Counter, actor=True)

        x = np.random.random(1000)
>       await counter.add(x)

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:327:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7accfa00>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
>           raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E           TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:46495
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37799
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42623
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42623
distributed.worker - INFO - dashboard at: 127.0.0.1:41251
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:46495
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-93dn8ysr
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:46353
distributed.worker - INFO - Listening to: tcp://127.0.0.1:46353
distributed.worker - INFO - dashboard at: 127.0.0.1:40369
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:46495
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-o50omzoo
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42623', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42623
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46353', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46353
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:46495
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:46495
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-ef83ab02-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-ef83ab02-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-ef83ab02-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-ef83ab02-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42623
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46353
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42623', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42623
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46353', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:46353
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
____________________________ test_many_computations ____________________________

c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:38391" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:36957', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: -3> b = <Worker: 'tcp://127.0.0.1:36931', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_many_computations(c, s, a, b):
        counter = await c.submit(Counter, actor=True)

        def add(n, counter):
            for i in range(n):
                counter.increment().result()

        futures = c.map(add, range(10), counter=counter)
        done = c.submit(lambda x: None, futures)

        while not done.done():
            assert len(s.processing) <= a.nthreads + b.nthreads
            await asyncio.sleep(0.01)

>       await done

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:370:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/client.py:240: in _result
    raise exc.with_traceback(tb)
/usr/lib/python3/dist-packages/dask/utils.py:35: in apply
    return func(*args, **kwargs)
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:361: in add
    counter.increment().result()
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    """Event loop mixins."""
        import threading
from . import events
        _global_lock = threading.Lock()
        # Used as a sentinel for loop parameter
    _marker = object()
            class _LoopBoundMixin:
        _loop = None
def __init__(self, *, loop=_marker): if loop is not _marker:
              raise TypeError(
f'As of 3.10, the *loop* parameter was removed from '

f'{type(self).__name__}() since it is no longer necessary'
                )
E TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:38391
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35569
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:36957
distributed.worker - INFO - Listening to: tcp://127.0.0.1:36957
distributed.worker - INFO - dashboard at: 127.0.0.1:44923
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:38391
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-nfapao56
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:36931
distributed.worker - INFO - Listening to: tcp://127.0.0.1:36931
distributed.worker - INFO - dashboard at: 127.0.0.1:36947
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:38391
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-mctqjba3
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36957', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36957
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:36931', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:36931
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:38391
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:38391
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-efc7ba22-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.worker - WARNING - Compute Failed
Function:  execute_task
args: ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afa9e10>, (<class 'tuple'>, [1]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  execute_task
args: ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afa9ea0>, (<class 'tuple'>, [2]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  execute_task
args: ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afa9f30>, (<class 'tuple'>, [3]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  execute_task
args: ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afa9fc0>, (<class 'tuple'>, [4]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  execute_task
args: ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afaa050>, (<class 'tuple'>, [5]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  execute_task
args: ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afaa0e0>, (<class 'tuple'>, [9]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - WARNING - Compute Failed
Function:  execute_task
args: ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afaa170>, (<class 'tuple'>, [6]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.scheduler - INFO - Remove client Client-efc7ba22-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-efc7ba22-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-efc7ba22-56b7-11ec-9858-00163e03ed98
distributed.worker - WARNING - Compute Failed
Function:  execute_task
args: ((<function apply at 0xffff8753d2d0>, <function test_many_computations.<locals>.add at 0xffff7afaa200>, (<class 'tuple'>, [7]), {'counter': <Actor: Counter, key=Counter-7943db04-f2c7-4ee3-b56a-4227b43485d0>}))
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36957
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:36931
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36957', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:36957
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:36931', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:36931
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
______________________________ test_thread_safety ______________________________

c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:45091" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:33325', 0, Status.closed, stored: 0, running: 0/5, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:40825', 1, Status.closed, stored: 0, running: 0/5, ready: 0, comm: 0, waiting: 0>


    @gen_cluster(client=True, nthreads=[("127.0.0.1", 5)] * 2)
    async def test_thread_safety(c, s, a, b):
        class Unsafe:
            def __init__(self):
                self.n = 0

            def f(self):
                assert self.n == 0
                self.n += 1
                for i in range(20):
                    sleep(0.002)
                assert self.n == 1
                self.n = 0

        unsafe = await c.submit(Unsafe, actor=True)
>       futures = [unsafe.f() for i in range(10)]

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:390:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:390: in <listcomp>
    futures = [unsafe.f() for i in range(10)]
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7acddd50>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
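
All of these actor failures bottom out in the same call: distributed/actor.py:171 constructs asyncio.Queue(loop=self._io_loop.asyncio_loop), and Python 3.10 removed the loop= parameter from asyncio primitives. A minimal sketch of the 3.10-style pattern (hypothetical helper names, not the upstream patch): drop the argument and instead run the queue-using coroutine on the intended loop.

    import asyncio

    async def use_queue() -> int:
        # On 3.10+ Queue() accepts no loop=; it is tied to the loop that
        # runs the coroutines using it.
        q = asyncio.Queue()
        q.put_nowait(1)
        return await q.get()

    def call_from_other_thread(loop: asyncio.AbstractEventLoop) -> int:
        # Scheduling onto an explicit loop replaces what loop= used to do.
        return asyncio.run_coroutine_threadsafe(use_queue(), loop).result()
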
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:45091
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:43117
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:33325
distributed.worker - INFO - Listening to: tcp://127.0.0.1:33325
distributed.worker - INFO - dashboard at: 127.0.0.1:45515
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45091
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 5
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-0s13hhu0
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:40825
distributed.worker - INFO - Listening to: tcp://127.0.0.1:40825
distributed.worker - INFO - dashboard at: 127.0.0.1:34295
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:45091
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 5
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-6j5eo951
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33325', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33325
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40825', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40825
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:45091
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:45091
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-efef9a67-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-efef9a67-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-efef9a67-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-efef9a67-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33325
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40825
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33325', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:33325
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:40825', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:40825
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
______________________________ test_compute_sync _______________________________

client = <Client: 'tcp://127.0.0.1:35271' processes=2 threads=2, memory=15.51 GiB>

    def test_compute_sync(client):
        @dask.delayed
        def f(n, counter):
            assert isinstance(counter, Actor), type(counter)
            for i in range(n):
                counter.increment().result()

        @dask.delayed
        def check(counter, blanks):
            return counter.n

        counter = dask.delayed(Counter)()
        values = [f(i, counter) for i in range(5)]
        final = check(counter, values)
>       result = final.compute(actors=counter)

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:517:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/dask/base.py:288: in compute
    (result,) = compute(self, traverse=False, **kwargs)
/usr/lib/python3/dist-packages/dask/base.py:570: in compute
    results = schedule(dsk, keys, **kwargs)
/usr/lib/python3/dist-packages/distributed/client.py:2689: in get
    results = self.gather(packed, asynchronous=asynchronous, direct=direct)
/usr/lib/python3/dist-packages/distributed/client.py:1966: in gather
    return self.sync(
/usr/lib/python3/dist-packages/distributed/client.py:860: in sync
    return sync(
/usr/lib/python3/dist-packages/distributed/utils.py:330: in sync
    raise exc.with_traceback(tb)
/usr/lib/python3/dist-packages/distributed/utils.py:313: in f
    result[0] = yield future
/usr/lib/python3/dist-packages/tornado/gen.py:762: in run
    value = future.result()
/usr/lib/python3/dist-packages/distributed/client.py:1831: in _gather
    raise exception.with_traceback(traceback)
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:507: in f
    counter.increment().result()
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    """Event loop mixins."""

    import threading
    from . import events

    _global_lock = threading.Lock()

    # Used as a sentinel for loop parameter
    _marker = object()

    class _LoopBoundMixin:
        _loop = None

        def __init__(self, *, loop=_marker):
            if loop is not _marker:
                raise TypeError(
                    f'As of 3.10, the *loop* parameter was removed from '
                    f'{type(self).__name__}() since it is no longer necessary'
                )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
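
For code that still has to run on both 3.9 and 3.10, one conceivable stopgap (a hypothetical shim, not something dask.distributed ships) is to pass loop= only where the interpreter still accepts it:

    import asyncio
    import sys

    def make_queue(loop=None):
        # Pre-3.10 interpreters accept loop= (already deprecated there);
        # 3.10+ raises TypeError, so fall through to lazy binding.
        if loop is not None and sys.version_info < (3, 10):
            return asyncio.Queue(loop=loop)
        return asyncio.Queue()
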
---------------------------- Captured stderr setup -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35271
distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:40799
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:40631
distributed.worker - INFO - Listening to: tcp://127.0.0.1:40799
distributed.worker - INFO - Listening to: tcp://127.0.0.1:40631
distributed.worker - INFO - dashboard at: 127.0.0.1:39337
distributed.worker - INFO - dashboard at: 127.0.0.1:35887
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35271
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35271
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-73ed35a0-f5c2-4c8a-b8b0-9d5eb9f9d2b8/dask-worker-space/worker-eyl89eh1
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-2923585d-1ffa-41fc-8ad6-0b47e6475918/dask-worker-space/worker-1wbkt2th
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40631', name: tcp://127.0.0.1:40631, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40631
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35271
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:40799', name: tcp://127.0.0.1:40799, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:40799
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35271
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f0fb320c-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
----------------------------- Captured stderr call -----------------------------
distributed.worker - WARNING - Compute Failed
Function:  f
args: (3, <Actor: Counter, key=Counter-c4b5effc-5ff1-4f60-af17-97fbf296242e>)
kwargs:    {}
Exception: TypeError('As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary')

--------------------------- Captured stderr teardown ---------------------------
distributed.scheduler - INFO - Receive client connection: Client-worker-f1031dac-56b7-11ec-98f8-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f0fb320c-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f0fb320c-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f0fb320c-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40631
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:40799
distributed.scheduler - INFO - Remove client Client-worker-f1031dac-56b7-11ec-98f8-00163e03ed98
distributed.scheduler - INFO - Remove client Client-worker-f1031dac-56b7-11ec-98f8-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-worker-f1031dac-56b7-11ec-98f8-00163e03ed98
____________________________ test_actors_in_profile ____________________________

c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:40699" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:35211', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(
        client=True,
        nthreads=[("127.0.0.1", 1)],
        config={"distributed.worker.profile.interval": "1ms"},
    )
    async def test_actors_in_profile(c, s, a):
        class Sleeper:
            def sleep(self, time):
                sleep(time)

        sleeper = await c.submit(Sleeper, actor=True)
        for i in range(5):
>           await sleeper.sleep(0.200)

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:542:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a7d0ca0>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:40699
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:35005
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:35211
distributed.worker - INFO - Listening to: tcp://127.0.0.1:35211
distributed.worker - INFO - dashboard at: 127.0.0.1:33883
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:40699
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-52oroadv
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35211', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35211
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:40699
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f1324945-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f1324945-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f1324945-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f1324945-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35211
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35211', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:35211
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
_________________________________ test_waiter __________________________________

c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:41223" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:42029', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:42963', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_waiter(c, s, a, b):
        from tornado.locks import Event

        class Waiter:
            def __init__(self):
                self.event = Event()

            async def set(self):
                self.event.set()

            async def wait(self):
                await self.event.wait()

        waiter = await c.submit(Waiter, actor=True)
>       futures = [waiter.wait() for _ in range(5)]  # way more than we have actor threads

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:567:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:567: in <listcomp>
    futures = [waiter.wait() for _ in range(5)]  # way more than we have actor threads
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff78639f30>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41223
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:45349
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42029
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42029
distributed.worker - INFO - dashboard at: 127.0.0.1:43833
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:41223
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-9mcbqkpw
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42963
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42963
distributed.worker - INFO - dashboard at: 127.0.0.1:45869
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:41223
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-6kys1y2x
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42029', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42029
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42963', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42963
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:41223
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:41223
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f14425fc-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f14425fc-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f14425fc-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f14425fc-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42029
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42963
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42029', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42029
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42963', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42963
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
___________________________ test_one_thread_deadlock ___________________________

    def test_one_thread_deadlock():
        with cluster(nworkers=2) as (cl, w):
            client = Client(cl["address"])
            ac = client.submit(Counter, actor=True).result()
            ac2 = client.submit(UsesCounter, actor=True, workers=[ac._address]).result()
>           assert ac2.do_inc(ac).result() == 1

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:638:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7aaa4c10>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34207
distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:44305
distributed.worker - INFO - Listening to: tcp://127.0.0.1:44305
distributed.worker - INFO - dashboard at: 127.0.0.1:34449
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42909
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34207
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42909
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - dashboard at: 127.0.0.1:33311
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34207
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-9f81afdb-58e5-4649-a7dc-8dadbc2923de/dask-worker-space/worker-6qqc2pg5
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-93118b3f-310f-48e4-9df1-dda4f37ba379/dask-worker-space/worker-vr8c0pc7
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42909', name: tcp://127.0.0.1:42909, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42909
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34207
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:44305', name: tcp://127.0.0.1:44305, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:44305
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34207
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f36b097f-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42909
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:44305
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42909', name: tcp://127.0.0.1:42909, memory: 2, processing: 0>
_____________________________ test_async_deadlock ______________________________

client = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:39325" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:46605', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:35169', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_async_deadlock(client, s, a, b):
        ac = await client.submit(Counter, actor=True)
        ac2 = await client.submit(UsesCounter, actor=True, workers=[ac._address])
>       assert (await ac2.ado_inc(ac)) == 1

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:646:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a9a2980>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:39325
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:46377
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:46605
distributed.worker - INFO - Listening to: tcp://127.0.0.1:46605
distributed.worker - INFO - dashboard at: 127.0.0.1:37953
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:39325
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-bg9tle7c
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:35169
distributed.worker - INFO - Listening to: tcp://127.0.0.1:35169
distributed.worker - INFO - dashboard at: 127.0.0.1:46443
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:39325
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-0ertus9n
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46605', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46605
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:35169', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:35169
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:39325
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:39325
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f382fa9e-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f382fa9e-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f382fa9e-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f382fa9e-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46605
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:35169
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:46605', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:46605
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:35169', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:35169
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
________________________________ test_exception ________________________________

    def test_exception():
        class MyException(Exception):
            pass

        class Broken:
            def method(self):
                raise MyException

            @property
            def prop(self):
                raise MyException

        with cluster(nworkers=2) as (cl, w):
            client = Client(cl["address"])
            ac = client.submit(Broken, actor=True).result()
>           acfut = ac.method()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:664:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a95f8e0>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:35703
distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:45499
distributed.worker - INFO - Listening to: tcp://127.0.0.1:45499
distributed.worker - INFO - dashboard at: 127.0.0.1:43513
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35703
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-0867a71c-35bc-4fa4-80c3-97578faef6d6/dask-worker-space/worker-p7ueclgl
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:33335
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Listening to: tcp://127.0.0.1:33335
distributed.worker - INFO - dashboard at: 127.0.0.1:40217
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:35703
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-aadea82d-d79c-418c-b87f-ca9691fd7067/dask-worker-space/worker-4a0h_eu8
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45499', name: tcp://127.0.0.1:45499, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45499
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35703
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:33335', name: tcp://127.0.0.1:33335, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:33335
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:35703
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f426cfbd-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:33335
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45499
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:33335', name: tcp://127.0.0.1:33335, memory: 1, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:33335
_____________________________ test_exception_async _____________________________

client = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:41503" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:42323', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:45655', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_exception_async(client, s, a, b):
        class MyException(Exception):
            pass

        class Broken:
            def method(self):
                raise MyException

            @property
            def prop(self):
                raise MyException

        ac = await client.submit(Broken, actor=True)
>       acfut = ac.method()

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:686:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7aa93280>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:41503
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:37235
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42323
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42323
distributed.worker - INFO - dashboard at: 127.0.0.1:43983
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:41503
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-8ylggvje
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:45655
distributed.worker - INFO - Listening to: tcp://127.0.0.1:45655
distributed.worker - INFO - dashboard at: 127.0.0.1:36813
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:41503
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-al9gkp4t
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42323', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42323
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:45655', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:45655
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:41503
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:41503
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f43c31c7-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f43c31c7-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f43c31c7-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f43c31c7-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42323
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:45655
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42323', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42323
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:45655', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:45655
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
______________________________ test_as_completed _______________________________

client = <Client: 'tcp://127.0.0.1:42513' processes=2 threads=2, memory=15.51 GiB>

    def test_as_completed(client):
        ac = client.submit(Counter, actor=True).result()
>       futures = [ac.increment() for _ in range(10)]

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:696:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:696: in <listcomp>
    futures = [ac.increment() for _ in range(10)]
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a97a4d0>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
---------------------------- Captured stderr setup -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:42513
distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:39641
distributed.worker - INFO - Listening to: tcp://127.0.0.1:39641
distributed.worker - INFO - dashboard at: 127.0.0.1:42349
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:42513
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-1b80a926-019d-43f3-800d-1be9a80818ca/dask-worker-space/worker-rofipp7j
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39641', name: tcp://127.0.0.1:39641, memory: 0, processing: 0>
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:41607
distributed.worker - INFO - Listening to: tcp://127.0.0.1:41607
distributed.worker - INFO - dashboard at: 127.0.0.1:36635
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:42513
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39641
distributed.worker - INFO - Memory: 7.76 GiB
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-03a35189-d9de-4175-bc26-b383d2f37c08/dask-worker-space/worker-jv1wqflf
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:42513
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:41607', name: tcp://127.0.0.1:41607, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:41607
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:42513
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f4dfc1c5-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
--------------------------- Captured stderr teardown ---------------------------
distributed.scheduler - INFO - Remove client Client-f4dfc1c5-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f4dfc1c5-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f4dfc1c5-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39641
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:41607
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39641', name: tcp://127.0.0.1:39641, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:39641
_________________________ test_actor_future_awaitable __________________________

client = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:34913" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:39179', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0> b = <Worker: 'tcp://127.0.0.1:42439', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True, timeout=3)
    async def test_actor_future_awaitable(client, s, a, b):
        ac = await client.submit(Counter, actor=True)
>       futures = [ac.increment() for _ in range(10)]

/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:711:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/distributed/tests/test_actor.py:711: in <listcomp>
    futures = [ac.increment() for _ in range(10)]
/usr/lib/python3/dist-packages/distributed/actor.py:171: in func
    q = asyncio.Queue(loop=self._io_loop.asyncio_loop)
/usr/lib/python3.10/asyncio/queues.py:33: in __init__
    super().__init__(loop=loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <[AttributeError("'Queue' object has no attribute '_maxsize'") raised in repr()] Queue object at 0xffff7a932a70>

    def __init__(self, *, loop=_marker):
        if loop is not _marker:
            raise TypeError(
                f'As of 3.10, the *loop* parameter was removed from '
                f'{type(self).__name__}() since it is no longer necessary'
            )
E       TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary

/usr/lib/python3.10/asyncio/mixins.py:17: TypeError
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:34913
distributed.scheduler - INFO -   dashboard at:           127.0.0.1:34647
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:39179
distributed.worker - INFO - Listening to: tcp://127.0.0.1:39179
distributed.worker - INFO - dashboard at: 127.0.0.1:33445
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34913
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-khssoc_q
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:42439
distributed.worker - INFO - Listening to: tcp://127.0.0.1:42439
distributed.worker - INFO - dashboard at: 127.0.0.1:38001
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:34913
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 2
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/dask-worker-space/worker-q9pm6_pq
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:39179', name: 0, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:39179
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:42439', name: 1, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:42439
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34913
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Registered to: tcp://127.0.0.1:34913
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Receive client connection: Client-f4f64ad2-56b7-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-f4f64ad2-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-f4f64ad2-56b7-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-f4f64ad2-56b7-11ec-9858-00163e03ed98
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:39179
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:42439
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:39179', name: 0, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:39179
distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://127.0.0.1:42439', name: 1, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://127.0.0.1:42439
distributed.scheduler - INFO - Lost all workers
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
______________________ test_client_gather_semaphore_loop _______________________

s = <Scheduler: "tcp://127.0.0.1:37599" workers: 0 cores: 0, tasks: 0>

    @gen_cluster(nthreads=[])
    async def test_client_gather_semaphore_loop(s):
        async with Client(s.address, asynchronous=True) as c:
>           assert c._gather_semaphore._loop is c.loop.asyncio_loop
E           AssertionError: assert None is <_UnixSelectorEventLoop running=True closed=False debug=False>
E            +  where None = <asyncio.locks.Semaphore object at 0xffff37997460 [unlocked, value:5]>._loop
E            +    where <asyncio.locks.Semaphore object at 0xffff37997460 [unlocked, value:5]> = <Client: 'tcp://127.0.0.1:37599' processes=0 threads=0, memory=0 B>._gather_semaphore
E            +  and   <_UnixSelectorEventLoop running=True closed=False debug=False> = <tornado.platform.asyncio.AsyncIOLoop object at 0xffff37976ce0>.asyncio_loop
E            +    where <tornado.platform.asyncio.AsyncIOLoop object at 0xffff37976ce0> = <Client: 'tcp://127.0.0.1:37599' processes=0 threads=0, memory=0 B>.loop

/usr/lib/python3/dist-packages/distributed/tests/test_client.py:6306: AssertionError
_______________________ test_as_completed_condition_loop _______________________

c = <Client: No scheduler connected>
s = <Scheduler: "tcp://127.0.0.1:39425" workers: 0 cores: 0, tasks: 0>
a = <Worker: 'tcp://127.0.0.1:36581', 0, Status.closed, stored: 0, running: 0/1, ready: 0, comm: 0, waiting: 0>
b = <Worker: 'tcp://127.0.0.1:34017', 1, Status.closed, stored: 0, running: 0/2, ready: 0, comm: 0, waiting: 0>

    @gen_cluster(client=True)
    async def test_as_completed_condition_loop(c, s, a, b):
        seq = c.map(inc, range(5))
        ac = as_completed(seq)
>       assert ac.condition._loop == c.loop.asyncio_loop
E       assert None == <_UnixSelectorEventLoop running=True closed=False debug=False>
E       +None
E       -<_UnixSelectorEventLoop running=True closed=False debug=False>

/usr/lib/python3/dist-packages/distributed/tests/test_client.py:6313: AssertionError
__________________ test_client_connectionpool_semaphore_loop ___________________

s = {'address': 'tcp://127.0.0.1:44083'}
a = {'address': 'tcp://127.0.0.1:32867', 'proc': <weakref at 0xffff379bcc20; to 'SpawnProcess' at 0xffff379ade70>}
b = {'address': 'tcp://127.0.0.1:46399', 'proc': <weakref at 0xffff379bc220; to 'SpawnProcess' at 0xffff379ad150>}

    def test_client_connectionpool_semaphore_loop(s, a, b):
        with Client(s["address"]) as c:
>           assert c.rpc.semaphore._loop is c.loop.asyncio_loop
E       AssertionError: assert None is <_UnixSelectorEventLoop running=True closed=False debug=False>
E        +  where None = <asyncio.locks.Semaphore object at 0xffff379d7c70 [unlocked, value:511]>._loop
E        +  where <asyncio.locks.Semaphore object at 0xffff379d7c70 [unlocked, value:511]> = <ConnectionPool: open=1, active=0, connecting=0>.semaphore
E        +  where <ConnectionPool: open=1, active=0, connecting=0> = <Client: 'tcp://127.0.0.1:44083' processes=2 threads=2, memory=15.51 GiB>.rpc
E        +  and <_UnixSelectorEventLoop running=True closed=False debug=False> = <tornado.platform.asyncio.AsyncIOLoop object at 0xffff379af190>.asyncio_loop
E        +  where <tornado.platform.asyncio.AsyncIOLoop object at 0xffff379af190> = <Client: 'tcp://127.0.0.1:44083' processes=2 threads=2, memory=15.51 GiB>.loop

/usr/lib/python3/dist-packages/distributed/tests/test_client.py:6318: AssertionError
---------------------------- Captured stderr setup -----------------------------
distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:     tcp://127.0.0.1:44083
distributed.scheduler - INFO -   dashboard at:            127.0.0.1:8787
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:32867
distributed.worker - INFO - Listening to: tcp://127.0.0.1:32867
distributed.worker - INFO - dashboard at: 127.0.0.1:46829
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:44083
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-46d2d926-d664-4f9d-b27a-76ef0f34c482/dask-worker-space/worker-tyjyvqd0
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:32867', name: tcp://127.0.0.1:32867, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:32867
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:44083
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Start worker at: tcp://127.0.0.1:46399
distributed.worker - INFO - Listening to: tcp://127.0.0.1:46399
distributed.worker - INFO - dashboard at: 127.0.0.1:45003
distributed.worker - INFO - Waiting to connect to: tcp://127.0.0.1:44083
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 7.76 GiB
distributed.worker - INFO - Local Directory: /tmp/autopkgtest-lxc.2wwnnd74/downtmp/autopkgtest_tmp/_test_worker-3e9cc2a3-e95f-408b-8ec8-b024f55ab4a9/dask-worker-space/worker-r6a_g_7y
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <WorkerState 'tcp://127.0.0.1:46399', name: tcp://127.0.0.1:46399, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://127.0.0.1:46399
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Registered to: tcp://127.0.0.1:44083
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
----------------------------- Captured stderr call -----------------------------
distributed.scheduler - INFO - Receive client connection: Client-8a53a99f-56b8-11ec-9858-00163e03ed98
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Remove client Client-8a53a99f-56b8-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Remove client Client-8a53a99f-56b8-11ec-9858-00163e03ed98
distributed.scheduler - INFO - Close client connection: Client-8a53a99f-56b8-11ec-9858-00163e03ed98
--------------------------- Captured stderr teardown ---------------------------
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:32867
distributed.worker - INFO - Stopping worker at tcp://127.0.0.1:46399
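
For context, all three failures share one root cause: since Python 3.10
(bpo-42392) the asyncio synchronization primitives (Lock, Semaphore,
Condition, Event, Queue, ...) no longer accept a loop= argument and no longer
bind an event loop in __init__; the loop is only bound lazily, the first time
the primitive actually has to wait. The assertions above inspect the private
_loop attribute immediately after construction, which is now None. Here is a
minimal sketch of the 3.10 behaviour, independent of distributed (it pokes at
the same private _loop attribute the failing tests use):

import asyncio


async def main():
    # The change quoted in the bug title: passing loop= now raises TypeError
    # ("As of 3.10, the *loop* parameter was removed from Queue() since it
    # is no longer necessary").
    try:
        asyncio.Queue(loop=asyncio.get_running_loop())
    except TypeError as exc:
        print(exc)

    # No loop is bound at construction time any more ...
    sem = asyncio.Semaphore(0)
    print(sem._loop)  # 3.10: None; 3.9: the loop current during __init__

    # ... it is bound by the first call that really blocks.
    waiter = asyncio.create_task(sem.acquire())
    await asyncio.sleep(0)  # let acquire() register itself and block
    sem.release()
    await waiter
    print(sem._loop is asyncio.get_running_loop())  # True

asyncio.run(main())

So the failing assertions in test_client.py (and any code still passing
loop=) presumably need to be adapted to the lazy-binding behaviour, for
instance by checking the bound loop only after the primitive has been awaited
once rather than right after construction.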
