
Bug#995360: pytorch: autopkgtest regression: fft: ATen not compiled with MKL support



Source: pytorch
Version: 1.8.1-2
X-Debbugs-CC: debian-ci@lists.debian.org
Severity: serious
User: debian-ci@lists.debian.org
Usertags: regression

Dear maintainer(s),

With a recent upload of pytorch, the autopkgtest of pytorch fails in testing
when that autopkgtest is run with the binary packages of pytorch from
unstable. It passes when run with only packages from testing. In tabular form:

                       pass            fail
pytorch                from testing    1.8.1-2
versioned deps [0]     from testing    from unstable
all others             from testing    from testing

I copied some of the output at the bottom of this report.
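
For reference, the failing call can be reproduced outside the test suite with a
snippet along these lines (a minimal sketch derived from the traceback below;
the error presumably only occurs on builds whose ATen lacks an FFT backend, as
the message points at missing MKL support):

    import torch

    x = torch.rand(100)
    # On the affected packages this is reported to raise:
    #   RuntimeError: fft: ATen not compiled with MKL support
    y = x.stft(10, pad_mode='constant')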

Currently this regression is blocking the migration to testing [1]. Can you please
investigate the situation and fix it?

More information about this bug and the reason for filing it can be found on
https://wiki.debian.org/ContinuousIntegration/RegressionEmailInformation

Paul

[0] You can see what packages were added from the second line of the log file
quoted below. The migration software adds source packages from unstable to the
list if they are needed to install packages from pytorch/1.8.1-2, i.e. due to
versioned dependencies or breaks/conflicts.
[1] https://qa.debian.org/excuses.php?package=pytorch

https://ci.debian.net/data/autopkgtest/testing/amd64/p/pytorch/15624807/log.gz

=================================== FAILURES ===================================
__________________ TestFFTCPU.test_stft_requires_complex_cpu ___________________

self = <test_spectral_ops.TestFFTCPU testMethod=test_stft_requires_complex_cpu>
device = 'cpu'

    def test_stft_requires_complex(self, device):
        x = torch.rand(100)
>       y = x.stft(10, pad_mode='constant')

test_spectral_ops.py:939:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python3/dist-packages/torch/tensor.py:453: in stft
    return torch.stft(self, n_fft, hop_length, win_length, window, center,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = tensor([0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0290, 0.4019, 0.2598, 0.3666,
        0.0583, 0.7006, 0.0518, 0.4681....0910, 0.2323,
        0.7269, 0.1187, 0.3951, 0.7199, 0.7595, 0.5311, 0.0000, 0.0000, 0.0000,
        0.0000, 0.0000])
n_fft = 10, hop_length = None, win_length = None, window = None, center = True
pad_mode = 'constant', normalized = False, onesided = None
return_complex = None

    def stft(input: Tensor, n_fft: int, hop_length: Optional[int] = None,
             win_length: Optional[int] = None, window: Optional[Tensor] = None,
             center: bool = True, pad_mode: str = 'reflect', normalized: bool = False,
             onesided: Optional[bool] = None,
             return_complex: Optional[bool] = None) -> Tensor:
        r"""Short-time Fourier transform (STFT).

        .. warning::
            From version 1.8.0, :attr:`return_complex` must always be given
            explicitly for real inputs and `return_complex=False` has been
            deprecated. Strongly prefer `return_complex=True` as in a future
            pytorch release, this function will only return complex tensors.

            Note that :func:`torch.view_as_real` can be used to recover a real
            tensor with an extra last dimension for real and imaginary components.

        The STFT computes the Fourier transform of short overlapping windows of the
        input. This gives frequency components of the signal as they change over
        time. The interface of this function is modeled after the librosa_ stft function.

        .. _librosa: https://librosa.org/doc/latest/generated/librosa.stft.html

        Ignoring the optional batch dimension, this method computes the following
        expression:

        .. math::
            X[m, \omega] = \sum_{k = 0}^{\text{win\_length-1}}%
                                \text{window}[k]\ \text{input}[m \times \text{hop\_length} + k]\ %
                                \exp\left(- j \frac{2 \pi \cdot \omega k}{\text{win\_length}}\right),

        where :math:`m` is the index of the sliding window, and :math:`\omega` is
        the frequency such that :math:`0 \leq \omega < \text{n\_fft}`. When
        :attr:`onesided` is the default value ``True``,

        * :attr:`input` must be either a 1-D time sequence or a 2-D batch of time
          sequences.

        * If :attr:`hop_length` is ``None`` (default), it is treated as equal to
          ``floor(n_fft / 4)``.

        * If :attr:`win_length` is ``None`` (default), it is treated as equal to
          :attr:`n_fft`.

        * :attr:`window` can be a 1-D tensor of size :attr:`win_length`, e.g., from
          :meth:`torch.hann_window`. If :attr:`window` is ``None`` (default), it is
          treated as if having :math:`1` everywhere in the window. If
          :math:`\text{win\_length} < \text{n\_fft}`, :attr:`window` will be padded on
          both sides to length :attr:`n_fft` before being applied.

        * If :attr:`center` is ``True`` (default), :attr:`input` will be padded on
          both sides so that the :math:`t`-th frame is centered at time
          :math:`t \times \text{hop\_length}`. Otherwise, the :math:`t`-th frame
          begins at time  :math:`t \times \text{hop\_length}`.

        * :attr:`pad_mode` determines the padding method used on :attr:`input` when
          :attr:`center` is ``True``. See :meth:`torch.nn.functional.pad` for
          all available options. Default is ``"reflect"``.

        * If :attr:`onesided` is ``True`` (default for real input), only values for
          :math:`\omega` in :math:`\left[0, 1, 2, \dots, \left\lfloor
          \frac{\text{n\_fft}}{2} \right\rfloor + 1\right]` are returned because
          the real-to-complex Fourier transform satisfies the conjugate symmetry,
          i.e., :math:`X[m, \omega] = X[m, \text{n\_fft} - \omega]^*`.
          Note if the input or window tensors are complex, then :attr:`onesided`
          output is not possible.

        * If :attr:`normalized` is ``True`` (default is ``False``), the function
          returns the normalized STFT results, i.e., multiplied by :math:`(\text{frame\_length})^{-0.5}`.

        * If :attr:`return_complex` is ``True`` (default if input is complex), the
          return is a ``input.dim() + 1`` dimensional complex tensor. If ``False``,
          the output is a ``input.dim() + 2`` dimensional real tensor where the last
          dimension represents the real and imaginary components.

        Returns either a complex tensor of size :math:`(* \times N \times T)` if
        :attr:`return_complex` is true, or a real tensor of size :math:`(* \times N
        \times T \times 2)`, where :math:`*` is the optional batch size of
        :attr:`input`, :math:`N` is the number of frequencies where STFT is applied,
        and :math:`T` is the total number of frames used.

        .. warning::
          This function changed signature at version 0.4.1. Calling with the
          previous signature may cause an error or return an incorrect result.

        Args:
            input (Tensor): the input tensor
            n_fft (int): size of Fourier transform
            hop_length (int, optional): the distance between neighboring sliding window
                frames. Default: ``None`` (treated as equal to ``floor(n_fft / 4)``)
            win_length (int, optional): the size of window frame and STFT filter.
                Default: ``None``  (treated as equal to :attr:`n_fft`)
            window (Tensor, optional): the optional window function.
                Default: ``None`` (treated as window of all :math:`1` s)
            center (bool, optional): whether to pad :attr:`input` on both sides so
                that the :math:`t`-th frame is centered at time :math:`t \times \text{hop\_length}`.
                Default: ``True``
            pad_mode (string, optional): controls the padding method used when
                :attr:`center` is ``True``. Default: ``"reflect"``
            normalized (bool, optional): controls whether to return the normalized STFT results
                 Default: ``False``
            onesided (bool, optional): controls whether to return half of results to
                avoid redundancy for real inputs.
                Default: ``True`` for real :attr:`input` and :attr:`window`, ``False`` otherwise.
            return_complex (bool, optional): whether to return a complex tensor, or
                a real tensor with an extra last dimension for the real and
                imaginary components.

        Returns:
            Tensor: A tensor containing the STFT result with shape described above

        """
        if has_torch_function_unary(input):
            return handle_torch_function(
                stft, (input,), input, n_fft, hop_length=hop_length, win_length=win_length,
                window=window, center=center, pad_mode=pad_mode, normalized=normalized,
                onesided=onesided, return_complex=return_complex)
        # TODO: after having proper ways to map Python strings to ATen Enum, move
        #       this and F.pad to ATen.
        if center:
            signal_dim = input.dim()
            extended_shape = [1] * (3 - signal_dim) + list(input.size())
            pad = int(n_fft // 2)
            input = F.pad(input.view(extended_shape), [pad, pad], pad_mode)
            input = input.view(input.shape[-signal_dim:])
>       return _VF.stft(input, n_fft, hop_length, win_length, window,  # type: ignore
                        normalized, onesided, return_complex)
E       RuntimeError: fft: ATen not compiled with MKL support

/usr/lib/python3/dist-packages/torch/functional.py:580: RuntimeError
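
As an aside, the docstring quoted above asks for return_complex to be passed
explicitly for real input. A rough sketch of the recommended call, assuming the
1.8 torch.stft API (on the affected packages this presumably fails with the
same RuntimeError, since the problem is the missing FFT backend rather than
the arguments):

    import torch

    x = torch.rand(100)
    # With onesided left at its default (True for real input), the result is a
    # complex tensor of shape (n_fft // 2 + 1, num_frames).
    y = torch.stft(x, n_fft=10, pad_mode='constant', return_complex=True)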


