
Bug#995360: marked as done (pytorch: autopkgtest regression: fft: ATen not compiled with MKL support)

Your message dated Sun, 23 Jan 2022 14:37:46 +0000
with message-id <E1nBe02-000Dnj-Pv@fasolo.debian.org>
and subject line Bug#995360: fixed in pytorch 1.8.1-3
has caused the Debian Bug report #995360,
regarding pytorch: autopkgtest regression: fft: ATen not compiled with MKL support
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org.)

995360: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=995360
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Source: pytorch
Version: 1.8.1-2
X-Debbugs-CC: debian-ci@lists.debian.org
Severity: serious
User: debian-ci@lists.debian.org
Usertags: regression

Dear maintainer(s),

With a recent upload of pytorch the autopkgtest of pytorch fails in testing
when that autopkgtest is run with the binary packages of pytorch from
unstable. It passes when run with only packages from testing. In tabular form:

                       pass            fail
pytorch                from testing    1.8.1-2
versioned deps [0]     from testing    from unstable
all others             from testing    from testing

I copied some of the output at the bottom of this report.

Currently this regression is blocking the migration to testing [1]. Can you please
investigate the situation and fix it?

More information about this bug and the reason for filing it can be found on


[0] You can see which packages were added in the second line of the log file
quoted below. The migration software adds source packages from unstable to the
list if they are needed to install packages from pytorch/1.8.1-2, e.g. due to
versioned dependencies or breaks/conflicts.
[1] https://qa.debian.org/excuses.php?package=pytorch


=================================== FAILURES ===================================
__________________ TestFFTCPU.test_stft_requires_complex_cpu ___________________

self = <test_spectral_ops.TestFFTCPU testMethod=test_stft_requires_complex_cpu>
device = 'cpu'

    def test_stft_requires_complex(self, device):
        x = torch.rand(100)
>       y = x.stft(10, pad_mode='constant')

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/torch/tensor.py:453: in stft
    return torch.stft(self, n_fft, hop_length, win_length, window, center,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0290, 0.4019, 0.2598, 0.3666,
        0.0583, 0.7006, 0.0518, 0.4681....0910, 0.2323,
        0.7269, 0.1187, 0.3951, 0.7199, 0.7595, 0.5311, 0.0000, 0.0000, 0.0000,
        0.0000, 0.0000])
n_fft = 10, hop_length = None, win_length = None, window = None, center = True
pad_mode = 'constant', normalized = False, onesided = None
return_complex = None

    def stft(input: Tensor, n_fft: int, hop_length: Optional[int] = None,
             win_length: Optional[int] = None, window: Optional[Tensor] = None,
             center: bool = True, pad_mode: str = 'reflect', normalized: bool = False,
             onesided: Optional[bool] = None,
             return_complex: Optional[bool] = None) -> Tensor:
        r"""Short-time Fourier transform (STFT).

        .. warning::
            From version 1.8.0, :attr:`return_complex` must always be given
            explicitly for real inputs and `return_complex=False` has been
            deprecated. Strongly prefer `return_complex=True` as in a future
            pytorch release, this function will only return complex tensors.

            Note that :func:`torch.view_as_real` can be used to recover a real
            tensor with an extra last dimension for real and imaginary components.

        The STFT computes the Fourier transform of short overlapping windows of the
        input, giving the frequency components of the signal as they change over
        time. The interface of this function is modeled after the librosa_ stft function.

        .. _librosa: https://librosa.org/doc/latest/generated/librosa.stft.html

        Ignoring the optional batch dimension, this method computes the following

        .. math::
            X[m, \omega] = \sum_{k = 0}^{\text{win\_length-1}}%
                                \text{window}[k]\ \text{input}[m \times \text{hop\_length} + k]\ %
                                \exp\left(- j \frac{2 \pi \cdot \omega k}{\text{win\_length}}\right),

        where :math:`m` is the index of the sliding window, and :math:`\omega` is
        the frequency such that :math:`0 \leq \omega < \text{n\_fft}` for
        ``onesided=False``, or :math:`0 \leq \omega < \left\lfloor \frac{\text{n\_fft}}{2} \right\rfloor + 1`
        for ``onesided=True``.

        * :attr:`input` must be either a 1-D time sequence or a 2-D batch of time
          sequences.

        * If :attr:`hop_length` is ``None`` (default), it is treated as equal to
          ``floor(n_fft / 4)``.

        * If :attr:`win_length` is ``None`` (default), it is treated as equal to
          :attr:`n_fft`.
        * :attr:`window` can be a 1-D tensor of size :attr:`win_length`, e.g., from
          :meth:`torch.hann_window`. If :attr:`window` is ``None`` (default), it is
          treated as if having :math:`1` everywhere in the window. If
          :math:`\text{win\_length} < \text{n\_fft}`, :attr:`window` will be padded on
          both sides to length :attr:`n_fft` before being applied.

        * If :attr:`center` is ``True`` (default), :attr:`input` will be padded on
          both sides so that the :math:`t`-th frame is centered at time
          :math:`t \times \text{hop\_length}`. Otherwise, the :math:`t`-th frame
          begins at time  :math:`t \times \text{hop\_length}`.

        * :attr:`pad_mode` determines the padding method used on :attr:`input` when
          :attr:`center` is ``True``. See :meth:`torch.nn.functional.pad` for
          all available options. Default is ``"reflect"``.

        * If :attr:`onesided` is ``True`` (default for real input), only values for
          :math:`\omega` in :math:`\left[0, 1, 2, \dots, \left\lfloor
          \frac{\text{n\_fft}}{2} \right\rfloor\right]` (i.e., :math:`\left\lfloor
          \frac{\text{n\_fft}}{2} \right\rfloor + 1` values in total) are returned,
          because the real-to-complex Fourier transform satisfies the conjugate
          symmetry, i.e., :math:`X[m, \omega] = X[m, \text{n\_fft} - \omega]^*`.
          Note that if the input or window tensors are complex, then :attr:`onesided`
          output is not possible.

        * If :attr:`normalized` is ``True`` (default is ``False``), the function
          returns the normalized STFT results, i.e., multiplied by :math:`(\text{frame\_length})^{-0.5}`.

        * If :attr:`return_complex` is ``True`` (default if input is complex), the
          return is a ``input.dim() + 1`` dimensional complex tensor. If ``False``,
          the output is a ``input.dim() + 2`` dimensional real tensor where the last
          dimension represents the real and imaginary components.
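The conjugate-symmetry property behind the ``onesided`` default can be checked numerically with a tiny standard-library DFT. This is an illustrative sketch only; `naive_dft` is a hypothetical helper, not part of torch:

```python
import cmath

def naive_dft(x):
    # Direct DFT: X[w] = sum_k x[k] * exp(-2j*pi*w*k / n)
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * w * k / n) for k in range(n))
            for w in range(n)]

X = naive_dft([0.5, 1.0, -0.25, 2.0])  # real-valued input
# X[w] == conj(X[(n - w) % n]) -- the symmetry that makes onesided output lossless
symmetric = all(abs(X[w] - X[(4 - w) % 4].conjugate()) < 1e-9 for w in range(4))
```

Because of this redundancy, keeping only the first `n_fft // 2 + 1` bins loses no information for real inputs.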

        Returns either a complex tensor of size :math:`(* \times N \times T)` if
        :attr:`return_complex` is true, or a real tensor of size :math:`(* \times N
        \times T \times 2)`, where :math:`*` is the optional batch size of
        :attr:`input`, :math:`N` is the number of frequencies at which the STFT is
        applied, and :math:`T` is the total number of frames used.

        .. warning::
          This function changed its signature at version 0.4.1. Calling it with the
          previous signature may cause an error or return an incorrect result.

        Args:
            input (Tensor): the input tensor
            n_fft (int): size of Fourier transform
            hop_length (int, optional): the distance between neighboring sliding window
                frames. Default: ``None`` (treated as equal to ``floor(n_fft / 4)``)
            win_length (int, optional): the size of window frame and STFT filter.
                Default: ``None``  (treated as equal to :attr:`n_fft`)
            window (Tensor, optional): the optional window function.
                Default: ``None`` (treated as window of all :math:`1` s)
            center (bool, optional): whether to pad :attr:`input` on both sides so
                that the :math:`t`-th frame is centered at time :math:`t \times \text{hop\_length}`.
                Default: ``True``
            pad_mode (string, optional): controls the padding method used when
                :attr:`center` is ``True``. Default: ``"reflect"``
            normalized (bool, optional): controls whether to return the normalized STFT results
                 Default: ``False``
            onesided (bool, optional): controls whether to return half of results to
                avoid redundancy for real inputs.
                Default: ``True`` for real :attr:`input` and :attr:`window`, ``False`` otherwise.
            return_complex (bool, optional): whether to return a complex tensor, or
                a real tensor with an extra last dimension for the real and
                imaginary components.

        Returns:
            Tensor: A tensor containing the STFT result with shape described above

        """
        if has_torch_function_unary(input):
            return handle_torch_function(
                stft, (input,), input, n_fft, hop_length=hop_length, win_length=win_length,
                window=window, center=center, pad_mode=pad_mode, normalized=normalized,
                onesided=onesided, return_complex=return_complex)
        # TODO: after having proper ways to map Python strings to ATen Enum, move
        #       this and F.pad to ATen.
        if center:
            signal_dim = input.dim()
            extended_shape = [1] * (3 - signal_dim) + list(input.size())
            pad = int(n_fft // 2)
            input = F.pad(input.view(extended_shape), [pad, pad], pad_mode)
            input = input.view(input.shape[-signal_dim:])
>       return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore
                        normalized, onesided, return_complex)
E       RuntimeError: fft: ATen not compiled with MKL support

/usr/lib/python3/dist-packages/torch/functional.py:580: RuntimeError
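For reference, the STFT definition and the `center` padding step quoted in the traceback above can be transcribed into a minimal pure-Python sketch. This is illustrative only and uses just the standard library; `center_pad` and `stft_ref` are hypothetical helper names, not torch APIs:

```python
import cmath

def center_pad(x, n_fft):
    # center=True with pad_mode='constant': zero-pad n_fft // 2 samples on
    # both sides, matching the leading/trailing zeros in the tensor above.
    pad = n_fft // 2
    return [0.0] * pad + list(x) + [0.0] * pad

def stft_ref(x, n_fft, hop_length, window):
    # X[m, w] = sum_k window[k] * x[m*hop + k] * exp(-2j*pi*w*k / win_length),
    # keeping the onesided frequencies w = 0 .. n_fft // 2.
    win_length = len(window)
    n_frames = (len(x) - win_length) // hop_length + 1
    return [
        [sum(window[k] * x[m * hop_length + k]
             * cmath.exp(-2j * cmath.pi * w * k / win_length)
             for k in range(win_length))
         for w in range(n_fft // 2 + 1)]
        for m in range(n_frames)
    ]

# A constant signal with an all-ones window: the fully-overlapping frame has
# DC bin equal to win_length and (near-)zero in the other bins.
frames = stft_ref(center_pad([1.0] * 4, 4), n_fft=4, hop_length=1, window=[1.0] * 4)
```

The actual failure above is unrelated to this math: the `_VF.stft` backend call requires FFT support that the Debian build of 1.8.1-2 lacked because ATen was compiled without MKL.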


--- End Message ---
--- Begin Message ---
Source: pytorch
Source-Version: 1.8.1-3
Done: Mo Zhou <lumin@debian.org>

We believe that the bug you reported is fixed in the latest version of
pytorch, which is due to be installed in the Debian FTP archive.

A summary of the changes between this version and the previous one is
attached.

Thank you for reporting the bug, which will now be closed.  If you
have further comments please address them to 995360@bugs.debian.org,
and the maintainer will reopen the bug report if appropriate.

Debian distribution maintenance software
Mo Zhou <lumin@debian.org> (supplier of updated pytorch package)

(This message was generated automatically at their request; if you
believe that there is a problem with it please contact the archive
administrators by mailing ftpmaster@ftp-master.debian.org)


Format: 1.8
Date: Sun, 23 Jan 2022 09:14:58 -0500
Source: pytorch
Architecture: source
Version: 1.8.1-3
Distribution: unstable
Urgency: medium
Maintainer: Debian Deep Learning Team <debian-ai@lists.debian.org>
Changed-By: Mo Zhou <lumin@debian.org>
Closes: 994423 995360
Changes:
 pytorch (1.8.1-3) unstable; urgency=medium
   * Add comments in d/rules on package maintenance.
   * Mask python spectral_ops test which requires MKL. (Closes: #995360)
   * Mask python binary_ufuncs autopkgtest.
   * Only build on modern 64-bit architectures. (Closes: #994423)
   * Add missing Dep libprotobuf-dev for libtorch-dev.
   * d/watch: Track releases instead of tags.
Checksums-Sha1:
 07136a31d06c00ec0cec92cc6881a22aa1c6d554 3338 pytorch_1.8.1-3.dsc
 0537963336f822286a1751a5ca5691bac1fbdc89 56724 pytorch_1.8.1-3.debian.tar.xz
 a3c651357d4a3f247efa16740fc6c4c6d5bbf2af 7513 pytorch_1.8.1-3_source.buildinfo
Checksums-Sha256:
 b90c09456b78cac04e045ad3e80303823d1a28dddc4f47014c059d63232431b4 3338 pytorch_1.8.1-3.dsc
 8f7e4214dcbfdc0239131650d602dadc63488825e3e901f490ef85f9ad0fa4f7 56724 pytorch_1.8.1-3.debian.tar.xz
 cfcdf978dd4a66f095e79e1708d5a16424efeee0b47c0d4a44001342977b8c3c 7513 pytorch_1.8.1-3_source.buildinfo
Files:
 6e463e6053862150392eecd32bbb8d4d 3338 science optional pytorch_1.8.1-3.dsc
 9fbb319dac9469a9dba80955b637ffc3 56724 science optional pytorch_1.8.1-3.debian.tar.xz
 8d27ed3b196dccab6ebfde361071240c 7513 science optional pytorch_1.8.1-3_source.buildinfo
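The digest lines above follow the `<digest> <size> <name>` layout used in Debian .changes files. A minimal sketch of how one such SHA256 line can be verified against a downloaded file (synthetic in-memory data here, since the real files are not part of this message; `verify_sha256_line` is a hypothetical helper):

```python
import hashlib

def verify_sha256_line(line, data):
    # A line has the form '<hex digest> <size in bytes> <file name>'.
    digest, size, _name = line.split()
    return len(data) == int(size) and hashlib.sha256(data).hexdigest() == digest

data = b"example payload\n"
line = f"{hashlib.sha256(data).hexdigest()} {len(data)} demo.tar.xz"
ok = verify_sha256_line(line, data)          # matching data passes
bad = verify_sha256_line(line, b"tampered")  # size/digest mismatch fails
```

In practice, `dscverify` or `sha256sum -c` performs the same comparison against the files named in the upload.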



--- End Message ---
