Bug#963822: numpy breaks scikit-learn autopkgtest: test_set_estimator_none[drop] fails

Source: numpy, scikit-learn
Control: found -1 numpy/1:1.19.0-1
Control: found -1 scikit-learn/0.22.2.post1+dfsg-7
Severity: serious
Tags: sid bullseye
X-Debbugs-CC: debian-ci@lists.debian.org
User: debian-ci@lists.debian.org
Usertags: breaks needs-update

Dear maintainer(s),

With a recent upload of numpy the autopkgtest of scikit-learn fails in
testing when that autopkgtest is run with the binary packages of numpy
from unstable. It passes when run with only packages from testing. In
tabular form:

                       pass            fail
numpy                  from testing    1:1.19.0-1
scikit-learn           from testing    0.22.2.post1+dfsg-7
all others             from testing    from testing

I copied some of the output at the bottom of this report.

Currently this regression is blocking the migration of numpy to testing
[1]. Due to the nature of this issue, I filed this bug report against
both packages. Can you please investigate the situation and reassign the
bug to the right package?

More information about this bug and the reason for filing it can be found on


[1] https://qa.debian.org/excuses.php?package=numpy


=================================== FAILURES ===================================
________________________ test_set_estimator_none[drop] _________________________
drop = 'drop'

    @pytest.mark.parametrize("drop", [None, 'drop'])
    def test_set_estimator_none(drop):
        """VotingClassifier set_params should be able to set estimators
        as None or drop."""
        # Test predict
        clf1 = LogisticRegression(random_state=123)
        clf2 = RandomForestClassifier(n_estimators=10, random_state=123)
        clf3 = GaussianNB()
        eclf1 = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2),
                                             ('nb', clf3)],
                                 voting='hard',
                                 weights=[1, 0, 0.5]).fit(X, y)

        eclf2 = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2),
                                             ('nb', clf3)],
                                 voting='hard', weights=[1, 1, 0.5])
        with pytest.warns(None) as record:
            eclf2.set_params(rf=drop).fit(X, y)
>       assert record if drop is None else not record
E       assert False

________________ test_logistic_regression_path_convergence_fail ________________

    def test_logistic_regression_path_convergence_fail():
        rng = np.random.RandomState(0)
        X = np.concatenate((rng.randn(100, 2) + [1, 1], rng.randn(100, 2)))
        y = [1] * 100 + [-1] * 100
        Cs = [1e3]

        # Check that the convergence message points to both a model agnostic
        # advice (scaling the data) and to the logistic regression specific
        # documentation that includes hints on the solver configuration.
        with pytest.warns(ConvergenceWarning) as record:
            _logistic_regression_path(
                X, y, Cs=Cs, tol=0., max_iter=1, random_state=0, verbose=0)

>       assert len(record) == 1
E       assert 6 == 1
E         -6
E         +1
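
Both assertions fail because the warning recorder picks up more warnings than
the test expects, which is consistent with the new numpy emitting additional
(e.g. deprecation) warnings inside the monitored block. As a minimal
illustration of the mechanism (plain `warnings` module rather than
`pytest.warns`, and a made-up warning message), an unrelated warning from a
dependency is enough to change the recorded count:

```python
import warnings

# Record every warning raised in the block, as pytest.warns(None) does.
with warnings.catch_warnings(record=True) as record:
    warnings.simplefilter("always")
    # Stand-in for a deprecation warning raised by a dependency such as
    # numpy; the message is hypothetical.
    warnings.warn("call is deprecated", DeprecationWarning)

# A test asserting "no warnings" (or "exactly one warning") now fails,
# even though the code under test itself behaved identically.
assert len(record) == 1
assert issubclass(record[0].category, DeprecationWarning)
```

If that is the cause here, the fix is usually either silencing the new
deprecation in numpy's callers or relaxing the scikit-learn assertions to
filter for the specific warning categories they care about.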


Attachment: signature.asc
Description: OpenPGP digital signature
