
Bug#1032073: marked as done (unblock: scipy/1.10.1-1)



Your message dated Sun, 12 Mar 2023 09:32:15 +0200
with message-id <CAM8zJQscQfdW7K9PBfNGPaLU76iMN+YEYM_JDW55SMjheDYc0w@mail.gmail.com>
and subject line Re: Bug#1032073: unblock: scipy/1.10.1-1
has caused the Debian Bug report #1032073,
regarding unblock: scipy/1.10.1-1
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
1032073: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1032073
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: release.debian.org
Severity: normal
User: release.debian.org@packages.debian.org
Usertags: unblock
X-Debbugs-Cc: scipy@packages.debian.org
Control: affects -1 + src:scipy

Please unblock package scipy

This is a pre-approval request, made before uploading.

[ Reason ]
scipy is a central tool for numerical computation in Python, used by
many packages.  We made the decision in January to allow the latest
release, scipy 1.10.0, into bookworm, which was helpful for supporting
upgrades of other packages.

scipy 1.10.1 is now released, providing bug fixes and stability
improvements but no new features compared to 1.10.0.  Release notes
are at https://docs.scipy.org/doc/scipy/release.1.10.1.html

I recommend we allow scipy 1.10.1 into bookworm (assuming it passes
the 10-day freeze testing as normal). I'm filing this bug to check
whether you agree that's a good idea before building and uploading to
unstable.

[ Impact ]
If not permitted, bookworm will ship scipy 1.10.0 without the bug
fixes provided in 1.10.1.

[ Tests ]
debci tests will run and are expected to pass over the 10-day waiting
period as normal.

[ Risks ]
debci tests of scipy 1.10.0 are passing, and upstream tests have
already passed. Since this is a bug-fix release with no new features,
the risks are minimal.

[ Checklist ] (TBD)
  [ ] all changes are documented in the d/changelog
  [ ] I reviewed all changes and I approve them
  [x] attach UPSTREAM diff against the package in testing

TBD:
unblock scipy/1.10.1-1
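One substantive change in the attached diff is worth calling out: in
scipy/integrate/_quadrature.py, the re-export of numpy.trapz is replaced
by a vendored trapezoid() that dispatches to NumPy at call time, so it
keeps working if NumPy later renames trapz to trapezoid. A minimal
sketch of that dispatch pattern (docstring omitted, values taken from
the examples in the patch):

```python
import numpy as np

def trapezoid(y, x=None, dx=1.0, axis=-1):
    """Composite trapezoidal rule, delegating to NumPy."""
    # Prefer np.trapezoid if this NumPy version provides it,
    # otherwise fall back to the older np.trapz name.
    if hasattr(np, 'trapezoid'):
        return np.trapezoid(y, x=x, dx=dx, axis=axis)
    return np.trapz(y, x=x, dx=dx, axis=axis)

print(trapezoid([1, 2, 3]))        # 4.0
print(trapezoid([1, 2, 3], dx=2))  # 8.0
```

This avoids the previous __code__-based function-copying trick, which
broke against NumPy nightly builds (see #17850 in the list below).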
diff --git a/.github/workflows/macos.yml b/.github/workflows/macos.yml
index 89342de7f01..0d093bee397 100644
--- a/.github/workflows/macos.yml
+++ b/.github/workflows/macos.yml
@@ -90,4 +90,7 @@ jobs:
     - name: Test SciPy
       run: |
         export LIBRARY_PATH="$LIBRARY_PATH:/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/usr/lib"
-        SCIPY_USE_PYTHRAN=`test ${{ matrix.python-version }} != 3.9; echo $?` python -u runtests.py -- --durations=10 --timeout=60
+        export SCIPY_USE_PYTHRAN=1
+        python setup.py install
+        cd /tmp
+        python -m pytest --pyargs scipy --durations=10 --timeout=80 -n 3
diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml
index bb1fcf8f2b6..e8513831c5f 100644
--- a/.github/workflows/wheels.yml
+++ b/.github/workflows/wheels.yml
@@ -145,6 +145,15 @@ jobs:
           CIBW_ARCHS: ${{ matrix.buildplat[2] }}
           CIBW_ENVIRONMENT_PASS_LINUX: RUNNER_OS
 
+          # MACOS_DEPLOYMENT_TARGET is set because of
+          # https://github.com/pypa/cibuildwheel/issues/1419. Once that
+          # is closed and meson-python==0.13 is available, then
+          # that environment variable can be removed.
+          CIBW_ENVIRONMENT_MACOS: >
+            MACOSX_DEPLOYMENT_TARGET=10.9
+            MACOS_DEPLOYMENT_TARGET=10.9
+            _PYTHON_HOST_PLATFORM=macosx-10.9-x86_64
+
       - uses: actions/upload-artifact@v3
         with:
           path: ./wheelhouse/*.whl
diff --git a/.mailmap b/.mailmap
index af11468a0a0..9cdd5f715d1 100644
--- a/.mailmap
+++ b/.mailmap
@@ -94,6 +94,7 @@ Bhavika Tekwani <bhavicka.7992@gmail.com> bhavikat <bhavicka.7992@gmail.com>
 Blair Azzopardi <blairuk@gmail.com> bsdz <blairuk@gmail.com>
 Blair Azzopardi <blairuk@gmail.com> Blair Azzopardi <bsdz@users.noreply.github.com>
 Brandon David <brandon.david@zoho.com> brandondavid <brandon.david@zoho.com>
+Brett Graham <brettgraham@gmail.com> Brett <brettgraham@gmail.com>
 Brett R. Murphy <bmurphy@enthought.com> brettrmurphy <bmurphy@enthought.com>
 Brian Hawthorne <brian.hawthorne@localhost> brian.hawthorne <brian.hawthorne@localhost>
 Brian Newsom <brian.newsom@colorado.edu> Brian Newsom <Brian.Newsom@Colorado.edu>
@@ -193,6 +194,7 @@ Fukumu Tsutsumi <levelfourslv@gmail.com> levelfour <levelfourslv@gmail.com>
 G Young <gfyoung17@gmail.com> gfyoung <gfyoung17@gmail.com>
 G Young <gfyoung17@gmail.com> gfyoung <gfyoung@mit.edu>
 Gagandeep Singh <gdp.1807@gmail.com> czgdp1807 <gdp.1807@gmail.com>
+Ganesh Kathiresan <ganesh3597@gmail.com> ganesh-k13 <ganesh3597@gmail.com>
 Garrett Reynolds <garrettreynolds5@gmail.com> Garrett-R <garrettreynolds5@gmail.com>
 Gaël Varoquaux <gael.varoquaux@normalesup.org> Gael varoquaux <gael.varoquaux@normalesup.org>
 Gavin Zhang <zhanggan@cn.ibm.com> GavinZhang <zhanggan@cn.ibm.com>
@@ -231,6 +233,7 @@ Jacob Vanderplas <jakevdp@gmail.com> Jake Vanderplas <jakevdp@gmail.com>
 Jacob Vanderplas <jakevdp@gmail.com> Jake Vanderplas <jakevdp@yahoo.com>
 Jacob Vanderplas <jakevdp@gmail.com> Jake Vanderplas <vanderplas@astro.washington.edu>
 Jacob Vanderplas <jakevdp@gmail.com> Jacob Vanderplas <jakevdp@yahoo.com>
+Jacopo Tissino <jacopok@gmail.com> Jacopo <jacopok@gmail.com>
 Jaime Fernandez del Rio <jaime.frio@gmail.com> jaimefrio <jaime.frio@gmail.com>
 Jaime Fernandez del Rio <jaime.frio@gmail.com> Jaime <jaime.frio@gmail.com>
 Jaime Fernandez del Rio <jaime.frio@gmail.com> Jaime Fernandez <jaimefrio@google.com>
@@ -483,6 +486,7 @@ Todd Goodall <beyondmetis@gmail.com> Todd <beyondmetis@gmail.com>
 Todd Jennings <toddrjen@gmail.com> Todd <toddrjen@gmail.com>
 Tom Waite <tom.waite@localhost> tom.waite <tom.waite@localhost>
 Tom Donoghue <tdonoghue@ucsd.edu> TomDonoghue <tdonoghue@ucsd.edu>
+Tomer Sery <tomer.sery@nextsilicon.com> Tomer.Sery <tomer.sery@nextsilicon.com>
 Tony S. Yu <tsyu80@gmail.com> tonysyu <tsyu80@gmail.com>
 Tony S. Yu <tsyu80@gmail.com> Tony S Yu <tsyu80@gmail.com>
 Toshiki Kataoka <tos.lunar@gmail.com> Toshiki Kataoka <kataoka@preferred.jp>
diff --git a/README.rst b/README.rst
index 27f872630bf..bd106039369 100644
--- a/README.rst
+++ b/README.rst
@@ -1,7 +1,7 @@
-.. image:: doc/source/_static/logo.svg
+.. image:: https://github.com/scipy/scipy/blob/main/doc/source/_static/logo.svg
   :target: https://scipy.org
-  :width: 100
-  :height: 100
+  :width: 110
+  :height: 110
   :align: left 
 
 .. image:: https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A
diff --git a/ci/cirrus_general_ci.yml b/ci/cirrus_general_ci.yml
index 6fbca2e7501..8426af42c70 100644
--- a/ci/cirrus_general_ci.yml
+++ b/ci/cirrus_general_ci.yml
@@ -97,7 +97,7 @@ musllinux_amd64_test_task:
   python_dependencies_script: |
     cd $_CWD
     python -m pip install cython
-    python -m pip install -vvv --upgrade numpy    
+    pip install --upgrade --pre -i https://pypi.anaconda.org/scipy-wheels-nightly/simple numpy
     python -m pip install meson ninja pybind11 pythran pytest
     python -m pip install click rich_click doit pydevtool pooch
 
diff --git a/doc/release/1.10.1-notes.rst b/doc/release/1.10.1-notes.rst
new file mode 100644
index 00000000000..7d47ef54e09
--- /dev/null
+++ b/doc/release/1.10.1-notes.rst
@@ -0,0 +1,99 @@
+==========================
+SciPy 1.10.1 Release Notes
+==========================
+
+.. contents::
+
+SciPy 1.10.1 is a bug-fix release with no new features
+compared to 1.10.0.
+
+
+
+Authors
+=======
+* Name (commits)
+* alice (1) +
+* Matt Borland (2) +
+* Evgeni Burovski (2)
+* CJ Carey (1)
+* Ralf Gommers (9)
+* Brett Graham (1) +
+* Matt Haberland (5)
+* Alex Herbert (1) +
+* Ganesh Kathiresan (2) +
+* Rishi Kulkarni (1) +
+* Loïc Estève (1)
+* Michał Górny (1) +
+* Jarrod Millman (1)
+* Andrew Nelson (4)
+* Tyler Reddy (50)
+* Pamphile Roy (2)
+* Eli Schwartz (2)
+* Tomer Sery (1) +
+* Kai Striega (1)
+* Jacopo Tissino (1) +
+* windows-server-2003 (1)
+
+A total of 21 people contributed to this release.
+People with a "+" by their names contributed a patch for the first time.
+This list of names is automatically generated, and may not be fully complete.
+
+
+Issues closed for 1.10.1
+------------------------
+
+* `#14980 <https://github.com/scipy/scipy/issues/14980>`__: BUG: Johnson's algorithm fails without negative cycles
+* `#17670 <https://github.com/scipy/scipy/issues/17670>`__: Failed to install on Raspberry Pi (ARM) 32bit in 3.11.1
+* `#17715 <https://github.com/scipy/scipy/issues/17715>`__: scipy.stats.bootstrap broke with statistic returning multiple...
+* `#17716 <https://github.com/scipy/scipy/issues/17716>`__: BUG: interpolate.interpn fails with read only input
+* `#17718 <https://github.com/scipy/scipy/issues/17718>`__: BUG: RegularGridInterpolator 2D mixed precision crashes
+* `#17727 <https://github.com/scipy/scipy/issues/17727>`__: BUG: RegularGridInterpolator does not work on non-native byteorder...
+* `#17736 <https://github.com/scipy/scipy/issues/17736>`__: BUG: SciPy requires OpenBLAS even when building against a different...
+* `#17775 <https://github.com/scipy/scipy/issues/17775>`__: BUG: Asymptotic computation of ksone.sf has intermediate overflow
+* `#17782 <https://github.com/scipy/scipy/issues/17782>`__: BUG: Segfault in scipy.sparse.csgraph.shortest_path() with v1.10.0
+* `#17795 <https://github.com/scipy/scipy/issues/17795>`__: BUG: stats.pearsonr one-sided hypothesis yields incorrect p-value...
+* `#17801 <https://github.com/scipy/scipy/issues/17801>`__: BUG: stats.powerlaw.fit: raises OverflowError
+* `#17808 <https://github.com/scipy/scipy/issues/17808>`__: BUG: name of cython executable is hardcoded in _build_utils/cythoner.py
+* `#17811 <https://github.com/scipy/scipy/issues/17811>`__: CI job with numpy nightly build failing on missing \`_ArrayFunctionDispatcher.__code__\`
+* `#17839 <https://github.com/scipy/scipy/issues/17839>`__: BUG: 1.10.0 tests fail on i386 and other less common arches
+* `#17896 <https://github.com/scipy/scipy/issues/17896>`__: DOC: publicly expose \`multivariate_normal\` attributes \`mean\`...
+* `#17934 <https://github.com/scipy/scipy/issues/17934>`__: BUG: meson \`__config__\` generation - truncated unicode characters
+* `#17938 <https://github.com/scipy/scipy/issues/17938>`__: BUG: \`scipy.stats.qmc.LatinHypercube\` with \`optimization="random-cd"\`...
+
+
+Pull requests for 1.10.1
+------------------------
+
+* `#17712 <https://github.com/scipy/scipy/pull/17712>`__: REL, MAINT: prepare for 1.10.1
+* `#17717 <https://github.com/scipy/scipy/pull/17717>`__: BUG: allow readonly input to interpolate.interpn
+* `#17721 <https://github.com/scipy/scipy/pull/17721>`__: MAINT: update \`meson-python\` upper bound to <0.13.0
+* `#17726 <https://github.com/scipy/scipy/pull/17726>`__: BUG: interpolate/RGI: upcast float32 to float64
+* `#17735 <https://github.com/scipy/scipy/pull/17735>`__: MAINT: stats.bootstrap: fix BCa with vector-valued statistics
+* `#17743 <https://github.com/scipy/scipy/pull/17743>`__: DOC: improve the docs on using BLAS/LAPACK libraries with Meson
+* `#17777 <https://github.com/scipy/scipy/pull/17777>`__: BLD: link to libatomic if necessary
+* `#17783 <https://github.com/scipy/scipy/pull/17783>`__: BUG: Correct intermediate overflow in KS one asymptotic in SciPy.stats
+* `#17790 <https://github.com/scipy/scipy/pull/17790>`__: BUG: signal: fix check_malloc extern declaration type
+* `#17797 <https://github.com/scipy/scipy/pull/17797>`__: MAINT: stats.pearsonr: correct p-value with negative correlation...
+* `#17800 <https://github.com/scipy/scipy/pull/17800>`__: [sparse.csgraph] Fix a bug in dijkstra and johnson algorithm
+* `#17803 <https://github.com/scipy/scipy/pull/17803>`__: MAINT: add missing \`__init__.py\` in test folder
+* `#17806 <https://github.com/scipy/scipy/pull/17806>`__: MAINT: stats.powerlaw.fit: fix overflow when np.min(data)==0
+* `#17810 <https://github.com/scipy/scipy/pull/17810>`__: BLD: use Meson's found cython instead of a wrapper script
+* `#17831 <https://github.com/scipy/scipy/pull/17831>`__: MAINT, CI: GHA MacOS setup.py update
+* `#17850 <https://github.com/scipy/scipy/pull/17850>`__: MAINT: remove use of \`__code__\` in \`scipy.integrate\`
+* `#17854 <https://github.com/scipy/scipy/pull/17854>`__: TST: mark test for \`stats.kde.marginal\` as xslow
+* `#17855 <https://github.com/scipy/scipy/pull/17855>`__: BUG: Fix handling of \`powm1\` overflow errors
+* `#17859 <https://github.com/scipy/scipy/pull/17859>`__: TST: fix test failures on i386, s390x, ppc64, riscv64 (Debian)
+* `#17862 <https://github.com/scipy/scipy/pull/17862>`__: BLD: Meson \`__config__\` generation
+* `#17863 <https://github.com/scipy/scipy/pull/17863>`__: BUG: fix Johnson's algorithm
+* `#17872 <https://github.com/scipy/scipy/pull/17872>`__: BUG: fix powm1 overflow handling
+* `#17904 <https://github.com/scipy/scipy/pull/17904>`__: ENH: \`multivariate_normal_frozen\`: restore \`cov\` attribute
+* `#17910 <https://github.com/scipy/scipy/pull/17910>`__: CI: use nightly numpy musllinux_x86_64 wheel
+* `#17931 <https://github.com/scipy/scipy/pull/17931>`__: TST: test_location_scale proper 32bit Linux skip
+* `#17932 <https://github.com/scipy/scipy/pull/17932>`__: TST: 32-bit tol for test_pdist_jensenshannon_iris
+* `#17936 <https://github.com/scipy/scipy/pull/17936>`__: BUG: Use raw strings for paths in \`__config__.py.in\`
+* `#17940 <https://github.com/scipy/scipy/pull/17940>`__: BUG: \`rng_integers\` in \`_random_cd\` now samples on a closed...
+* `#17942 <https://github.com/scipy/scipy/pull/17942>`__: BLD: update classifiers for Python 3.11
+* `#17963 <https://github.com/scipy/scipy/pull/17963>`__: MAINT: backports/prep for SciPy 1.10.1
+* `#17981 <https://github.com/scipy/scipy/pull/17981>`__: BLD: make sure macosx_x86_64 10.9 tags are being made on maintenance/1.10.x
+* `#17984 <https://github.com/scipy/scipy/pull/17984>`__: DOC: update link of the logo in the readme
+* `#17997 <https://github.com/scipy/scipy/pull/17997>`__: BUG: at least one entry from trial should be used in exponential...
diff --git a/doc/source/dev/contributor/meson_advanced.rst b/doc/source/dev/contributor/meson_advanced.rst
index 6c1fdc403f3..338b43df6b4 100644
--- a/doc/source/dev/contributor/meson_advanced.rst
+++ b/doc/source/dev/contributor/meson_advanced.rst
@@ -21,11 +21,45 @@ implementations on conda-forge), use::
     $ python dev.py
 
     $ # to build and install a wheel
-    $ python -m build -C-Dblas=blas -C-Dlapack=lapack
+    $ python -m build -Csetup-args=-Dblas=blas -Csetup-args=-Dlapack=lapack
     $ pip install dist/scipy*.whl
 
 Other options that should work (as long as they're installed with
-``pkg-config`` support) include ``mkl`` and ``blis``.
+``pkg-config`` or CMake support) include ``mkl`` and ``blis``. Note that using
+``pip install`` or ``pip wheel`` doesn't work (as of Jan'23) because we need
+two ``setup-args`` flags for specifying both ``blas`` and ``lapack`` here, and
+``pip`` does not yet support specifying ``--config-settings`` with the same key
+twice, while ``build`` does support that.
+
+.. note::
+
+    The way BLAS and LAPACK detection works under the hood is that Meson tries
+    to discover the specified libraries first with ``pkg-config``, and then
+    with CMake. If all you have is a standalone shared library file (e.g.,
+    ``armpl_lp64.so`` in ``/a/random/path/lib/`` and a corresponding header
+    file in ``/a/random/path/include/``), then what you have to do is craft
+    your own pkg-config file. It should have a matching name (so in this
+    example, ``armpl_lp64.pc``) and may be located anywhere. The
+    ``PKG_CONFIG_PATH`` environment variable should be set to point to the
+    location of the ``.pc`` file. The contents of that file should be::
+
+        libdir=/path/to/library-dir      # e.g., /a/random/path/lib
+        includedir=/path/to/include-dir  # e.g., /a/random/path/include
+        version=1.2.3                    # set to actual version
+        extralib=-lm -lpthread -lgfortran   # if needed, the flags to link in dependencies
+        Name: armpl_lp64
+        Description: ArmPL - Arm Performance Libraries
+        Version: ${version}
+        Libs: -L${libdir} -larmpl_lp64      # linker flags
+        Libs.private: ${extralib}
+        Cflags: -I${includedir}
+
+    To check that this works as expected, you should be able to run::
+    
+        $ pkg-config --libs armpl_lp64
+        -L/path/to/library-dir -larmpl_lp64
+        $ pkg-config --cflags armpl_lp64
+        -I/path/to/include-dir
 
 
 Use different build types with Meson
diff --git a/doc/source/release.1.10.1.rst b/doc/source/release.1.10.1.rst
new file mode 100644
index 00000000000..8fa6b34e27b
--- /dev/null
+++ b/doc/source/release.1.10.1.rst
@@ -0,0 +1 @@
+.. include:: ../release/1.10.1-notes.rst
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 4f88cc9862e..7edd532cb08 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -8,6 +8,7 @@ see the `commit logs <https://github.com/scipy/scipy/commits/>`_.
 .. toctree::
    :maxdepth: 1
 
+   release.1.10.1
    release.1.10.0
    release.1.9.3
    release.1.9.2
diff --git a/meson.build b/meson.build
index 2a26d0cdd4e..ad18b245b52 100644
--- a/meson.build
+++ b/meson.build
@@ -1,10 +1,10 @@
 project(
   'SciPy',
-  'c', 'cpp',
+  'c', 'cpp', 'cython',
   # Note that the git commit hash cannot be added dynamically here (it is added
   # in the dynamically generated and installed `scipy/version.py` though - see
   # tools/version_utils.py
-  version: '1.10.0',
+  version: '1.10.1',
   license: 'BSD-3',
   meson_version: '>= 0.64.0',
   default_options: [
@@ -112,7 +112,8 @@ if not cc.links('', name: '-Wl,--version-script', args: ['-shared', version_link
   version_link_args = []
 endif
 
-cython = find_program('cython')
+# generator() doesn't accept compilers, only found programs. Cast it.
+cython = find_program(meson.get_compiler('cython').cmd_array()[0])
 generate_f2pymod = files('tools/generate_f2pymod.py')
 tempita = files('scipy/_build_utils/tempita.py')
 
diff --git a/pyproject.toml b/pyproject.toml
index 26da2ed92a0..454a3c2e3b5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -10,7 +10,7 @@
 [build-system]
 build-backend = 'mesonpy'
 requires = [
-    "meson-python>=0.11.0,<0.12.0",
+    "meson-python>=0.11.0,<0.13.0",
     "Cython>=0.29.32,<3.0",
     # conservatively avoid issues from
     # https://github.com/pybind/pybind11/issues/4420
@@ -89,6 +89,7 @@ classifiers = [
     "Programming Language :: Python :: 3.8",
     "Programming Language :: Python :: 3.9",
     "Programming Language :: Python :: 3.10",
+    "Programming Language :: Python :: 3.11",
     "Topic :: Software Development :: Libraries",
     "Topic :: Scientific/Engineering",
     "Operating System :: Microsoft :: Windows",
diff --git a/scipy/__config__.py.in b/scipy/__config__.py.in
new file mode 100644
index 00000000000..61482faafa5
--- /dev/null
+++ b/scipy/__config__.py.in
@@ -0,0 +1,147 @@
+# This file is generated by SciPy's build process
+# It contains system_info results at the time of building this package.
+from enum import Enum
+
+__all__ = ["show"]
+_built_with_meson = True
+
+
+class DisplayModes(Enum):
+    stdout = "stdout"
+    dicts = "dicts"
+
+
+def _cleanup(d):
+    """
+    Removes empty values in a `dict` recursively
+    This ensures we remove values that Meson could not provide to CONFIG
+    """
+    if isinstance(d, dict):
+        return { k: _cleanup(v) for k, v in d.items() if v != '' and _cleanup(v) != '' }
+    else:
+        return d
+
+
+CONFIG = _cleanup(
+    {
+        "Compilers": {
+            "c": {
+                "name": "@C_COMP@",
+                "linker": "@C_COMP_LINKER_ID@",
+                "version": "@C_COMP_VERSION@",
+                "commands": "@C_COMP_CMD_ARRAY@",
+            },
+            "cython": {
+                "name": "@CYTHON_COMP@",
+                "linker": "@CYTHON_COMP_LINKER_ID@",
+                "version": "@CYTHON_COMP_VERSION@",
+                "commands": "@CYTHON_COMP_CMD_ARRAY@",
+            },
+            "c++": {
+                "name": "@CPP_COMP@",
+                "linker": "@CPP_COMP_LINKER_ID@",
+                "version": "@CPP_COMP_VERSION@",
+                "commands": "@CPP_COMP_CMD_ARRAY@",
+            },
+            "fortran": {
+                "name": "@FORTRAN_COMP@",
+                "linker": "@FORTRAN_COMP_LINKER_ID@",
+                "version": "@FORTRAN_COMP_VERSION@",
+                "commands": "@FORTRAN_COMP_CMD_ARRAY@",
+            },
+            "pythran": {
+                "version": "@PYTHRAN_VERSION@",
+                "include directory": r"@PYTHRAN_INCDIR@"
+            },
+        },
+        "Machine Information": {
+            "host": {
+                "cpu": "@HOST_CPU@",
+                "family": "@HOST_CPU_FAMILY@",
+                "endian": "@HOST_CPU_ENDIAN@",
+                "system": "@HOST_CPU_SYSTEM@",
+            },
+            "build": {
+                "cpu": "@BUILD_CPU@",
+                "family": "@BUILD_CPU_FAMILY@",
+                "endian": "@BUILD_CPU_ENDIAN@",
+                "system": "@BUILD_CPU_SYSTEM@",
+            },
+            "cross-compiled": bool("@CROSS_COMPILED@".lower().replace('false', '')),
+        },
+        "Build Dependencies": {
+            "blas": {
+                "name": "@BLAS_NAME@",
+                "found": bool("@BLAS_FOUND@".lower().replace('false', '')),
+                "version": "@BLAS_VERSION@",
+                "detection method": "@BLAS_TYPE_NAME@",
+                "include directory": r"@BLAS_INCLUDEDIR@",
+                "lib directory": r"@BLAS_LIBDIR@",
+                "openblas configuration": "@BLAS_OPENBLAS_CONFIG@",
+                "pc file directory": r"@BLAS_PCFILEDIR@",
+            },
+            "lapack": {
+                "name": "@LAPACK_NAME@",
+                "found": bool("@LAPACK_FOUND@".lower().replace('false', '')),
+                "version": "@LAPACK_VERSION@",
+                "detection method": "@LAPACK_TYPE_NAME@",
+                "include directory": r"@LAPACK_INCLUDEDIR@",
+                "lib directory": r"@LAPACK_LIBDIR@",
+                "openblas configuration": "@LAPACK_OPENBLAS_CONFIG@",
+                "pc file directory": r"@LAPACK_PCFILEDIR@",
+            },
+        },
+        "Python Information": {
+            "path": r"@PYTHON_PATH@",
+            "version": "@PYTHON_VERSION@",
+        },
+    }
+)
+
+
+def _check_pyyaml():
+    import yaml
+
+    return yaml
+
+
+def show(mode=DisplayModes.stdout.value):
+    """
+    Show libraries and system information on which SciPy was built
+    and is being used
+
+    Parameters
+    ----------
+    mode : {`'stdout'`, `'dicts'`}, optional.
+        Indicates how to display the config information.
+        `'stdout'` prints to console, `'dicts'` returns a dictionary
+        of the configuration.
+
+    Returns
+    -------
+    out : {`dict`, `None`}
+        If mode is `'dicts'`, a dict is returned, else None
+
+    Notes
+    -----
+    1. The `'stdout'` mode will give more readable
+       output if ``pyyaml`` is installed
+
+    """
+    if mode == DisplayModes.stdout.value:
+        try:  # Non-standard library, check import
+            yaml = _check_pyyaml()
+
+            print(yaml.dump(CONFIG))
+        except ModuleNotFoundError:
+            import warnings
+            import json
+
+            warnings.warn("Install `pyyaml` for better output", stacklevel=1)
+            print(json.dumps(CONFIG, indent=2))
+    elif mode == DisplayModes.dicts.value:
+        return CONFIG
+    else:
+        raise AttributeError(
+            f"Invalid `mode`, use one of: {', '.join([e.value for e in DisplayModes])}"
+        )
diff --git a/scipy/_build_utils/cythoner.py b/scipy/_build_utils/cythoner.py
deleted file mode 100644
index 6ef7ad43c2d..00000000000
--- a/scipy/_build_utils/cythoner.py
+++ /dev/null
@@ -1,28 +0,0 @@
-#!/usr/bin/env python3
-""" Scipy variant of Cython command
-
-Cython, as applied to single pyx file.
-
-Expects two arguments, infile and outfile.
-
-Other options passed through to cython command line parser.
-"""
-
-import os
-import os.path as op
-import sys
-import subprocess as sbp
-
-
-def main():
-    in_fname, out_fname = (op.abspath(p) for p in sys.argv[1:3])
-
-    sbp.run(['cython', '-3', '--fast-fail',
-             '--output-file', out_fname,
-             '--include-dir', os.getcwd()] +
-            sys.argv[3:] + [in_fname],
-            check=True)
-
-
-if __name__ == '__main__':
-    main()
diff --git a/scipy/_lib/meson.build b/scipy/_lib/meson.build
index 1e793622362..0f01c4552c9 100644
--- a/scipy/_lib/meson.build
+++ b/scipy/_lib/meson.build
@@ -17,8 +17,8 @@ _lib_pxd = [
 ]
 
 # Cython pyx -> c generator with _lib_pxd dependency
-lib_cython_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+lib_cython_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : [_cython_tree, _lib_pxd])
 
diff --git a/scipy/integrate/_ivp/tests/__init__.py b/scipy/integrate/_ivp/tests/__init__.py
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/scipy/integrate/_ivp/tests/meson.build b/scipy/integrate/_ivp/tests/meson.build
index f76d42b52b0..0897d0501d5 100644
--- a/scipy/integrate/_ivp/tests/meson.build
+++ b/scipy/integrate/_ivp/tests/meson.build
@@ -1,4 +1,5 @@
 py3.install_sources([
+    '__init__.py',
     'test_ivp.py',
     'test_rk.py'
   ],
diff --git a/scipy/integrate/_quadrature.py b/scipy/integrate/_quadrature.py
index b746c7b3b4c..1fe46e5c7a7 100644
--- a/scipy/integrate/_quadrature.py
+++ b/scipy/integrate/_quadrature.py
@@ -7,10 +7,6 @@
 import warnings
 from collections import namedtuple
 
-
-# trapezoid is a public function for scipy.integrate,
-# even though it's actually a NumPy function.
-from numpy import trapz as trapezoid
 from scipy.special import roots_legendre
 from scipy.special import gammaln, logsumexp
 from scipy._lib._util import _rng_spawn
@@ -22,20 +18,114 @@
            'AccuracyWarning']
 
 
-# Make See Also linking for our local copy work properly
-def _copy_func(f):
-    """Based on http://stackoverflow.com/a/6528148/190597 (Glenn Maynard)"""
-    g = types.FunctionType(f.__code__, f.__globals__, name=f.__name__,
-                           argdefs=f.__defaults__, closure=f.__closure__)
-    g = functools.update_wrapper(g, f)
-    g.__kwdefaults__ = f.__kwdefaults__
-    return g
+def trapezoid(y, x=None, dx=1.0, axis=-1):
+    r"""
+    Integrate along the given axis using the composite trapezoidal rule.
+
+    If `x` is provided, the integration happens in sequence along its
+    elements - they are not sorted.
+
+    Integrate `y` (`x`) along each 1d slice on the given axis, compute
+    :math:`\int y(x) dx`.
+    When `x` is specified, this integrates along the parametric curve,
+    computing :math:`\int_t y(t) dt =
+    \int_t y(t) \left.\frac{dx}{dt}\right|_{x=x(t)} dt`.
+
+    Parameters
+    ----------
+    y : array_like
+        Input array to integrate.
+    x : array_like, optional
+        The sample points corresponding to the `y` values. If `x` is None,
+        the sample points are assumed to be evenly spaced `dx` apart. The
+        default is None.
+    dx : scalar, optional
+        The spacing between sample points when `x` is None. The default is 1.
+    axis : int, optional
+        The axis along which to integrate.
+
+    Returns
+    -------
+    trapezoid : float or ndarray
+        Definite integral of `y` = n-dimensional array as approximated along
+        a single axis by the trapezoidal rule. If `y` is a 1-dimensional array,
+        then the result is a float. If `n` is greater than 1, then the result
+        is an `n`-1 dimensional array.
+
+    See Also
+    --------
+    cumulative_trapezoid, simpson, romb
 
+    Notes
+    -----
+    Image [2]_ illustrates trapezoidal rule -- y-axis locations of points
+    will be taken from `y` array, by default x-axis distances between
+    points will be 1.0, alternatively they can be provided with `x` array
+    or with `dx` scalar.  Return value will be equal to combined area under
+    the red lines.
 
-trapezoid = _copy_func(trapezoid)
-if trapezoid.__doc__:
-    trapezoid.__doc__ = trapezoid.__doc__.replace(
-        'sum, cumsum', 'numpy.cumsum')
+    References
+    ----------
+    .. [1] Wikipedia page: https://en.wikipedia.org/wiki/Trapezoidal_rule
+
+    .. [2] Illustration image:
+           https://en.wikipedia.org/wiki/File:Composite_trapezoidal_rule_illustration.png
+
+    Examples
+    --------
+    Use the trapezoidal rule on evenly spaced points:
+
+    >>> import numpy as np
+    >>> from scipy import integrate
+    >>> integrate.trapezoid([1, 2, 3])
+    4.0
+
+    The spacing between sample points can be selected by either the
+    ``x`` or ``dx`` arguments:
+
+    >>> integrate.trapezoid([1, 2, 3], x=[4, 6, 8])
+    8.0
+    >>> integrate.trapezoid([1, 2, 3], dx=2)
+    8.0
+
+    Using a decreasing ``x`` corresponds to integrating in reverse:
+
+    >>> integrate.trapezoid([1, 2, 3], x=[8, 6, 4])
+    -8.0
+
+    More generally ``x`` is used to integrate along a parametric curve. We can
+    estimate the integral :math:`\int_0^1 x^2 = 1/3` using:
+
+    >>> x = np.linspace(0, 1, num=50)
+    >>> y = x**2
+    >>> integrate.trapezoid(y, x)
+    0.33340274885464394
+
+    Or estimate the area of a circle, noting we repeat the sample which closes
+    the curve:
+
+    >>> theta = np.linspace(0, 2 * np.pi, num=1000, endpoint=True)
+    >>> integrate.trapezoid(np.cos(theta), x=np.sin(theta))
+    3.141571941375841
+
+    ``trapezoid`` can be applied along a specified axis to do multiple
+    computations in one call:
+
+    >>> a = np.arange(6).reshape(2, 3)
+    >>> a
+    array([[0, 1, 2],
+           [3, 4, 5]])
+    >>> integrate.trapezoid(a, axis=0)
+    array([1.5, 2.5, 3.5])
+    >>> integrate.trapezoid(a, axis=1)
+    array([2.,  8.])
+    """
+    # Future-proofing, in case NumPy moves from trapz to trapezoid for the same
+    # reasons as SciPy
+    if hasattr(np, 'trapezoid'):
+        return np.trapezoid(y, x=x, dx=dx, axis=axis)
+    else:
+        return np.trapz(y, x=x, dx=dx, axis=axis)
 
 
 # Note: alias kept for backwards compatibility. Rename was done
diff --git a/scipy/interpolate/_rgi.py b/scipy/interpolate/_rgi.py
index 37c76bb0b71..204bc3eaf40 100644
--- a/scipy/interpolate/_rgi.py
+++ b/scipy/interpolate/_rgi.py
@@ -23,11 +23,12 @@ def _check_points(points):
                 # input is descending, so make it ascending
                 descending_dimensions.append(i)
                 p = np.flip(p)
-                p = np.ascontiguousarray(p)
             else:
                 raise ValueError(
                     "The points in dimension %d must be strictly "
                     "ascending or descending" % i)
+        # see https://github.com/scipy/scipy/issues/17716
+        p = np.ascontiguousarray(p)
         grid.append(p)
     return tuple(grid), tuple(descending_dimensions)
 
@@ -330,7 +331,11 @@ def __call__(self, xi, method=None):
         if method == "linear":
             indices, norm_distances = self._find_indices(xi.T)
             if (ndim == 2 and hasattr(self.values, 'dtype') and
-                    self.values.ndim == 2):
+                    self.values.ndim == 2 and self.values.flags.writeable and
+                    self.values.dtype in (np.float64, np.complex128) and
+                    self.values.dtype.byteorder == '='):
+                # until cython supports const fused types, the fast path
+                # cannot support non-writeable values
                 # a fast path
                 out = np.empty(indices.shape[1], dtype=self.values.dtype)
                 result = evaluate_linear_2d(self.values,
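
The extra guards on the fast path can be exercised from plain NumPy. `can_use_fast_path` below is a hypothetical helper mirroring the predicate added above, not SciPy API: the Cython kernel only handles 2-D, writeable, native-endian `float64`/`complex128` values (writeable because Cython does not yet support const fused types).

```python
import numpy as np

def can_use_fast_path(values):
    # Mirrors the conditions in __call__ above; any failure falls back
    # to the slower generic evaluation path.
    return (values.ndim == 2
            and values.flags.writeable
            and values.dtype in (np.float64, np.complex128)
            and values.dtype.byteorder == '=')

print(can_use_fast_path(np.ones((3, 4))))                    # True
print(can_use_fast_path(np.ones((3, 4), dtype=np.float32)))  # False
```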
diff --git a/scipy/interpolate/_rgi_cython.pyx b/scipy/interpolate/_rgi_cython.pyx
index 22d4a2eda7a..71e030664b7 100644
--- a/scipy/interpolate/_rgi_cython.pyx
+++ b/scipy/interpolate/_rgi_cython.pyx
@@ -17,7 +17,7 @@ np.import_array()
 @cython.boundscheck(False)
 @cython.initializedcheck(False)
 def evaluate_linear_2d(double_or_complex[:, :] values, # cannot declare as ::1
-                       long[:, :] indices,             # unless prior
+                       const long[:, :] indices,       # unless prior
                        double[:, :] norm_distances,    # np.ascontiguousarray
                        tuple grid not None,
                        double_or_complex[:] out):
@@ -72,11 +72,13 @@ def evaluate_linear_2d(double_or_complex[:, :] values, # cannot declare as ::1
 @cython.boundscheck(False)
 @cython.cdivision(True)
 @cython.initializedcheck(False)
-def find_indices(tuple grid not None, double[:, :] xi):
+def find_indices(tuple grid not None, const double[:, :] xi):
+    # const is required for xi above in case xi is read-only
     cdef:
         long i, j, grid_i_size
         double denom, value
-        double[::1] grid_i
+        # const is required in case grid is read-only
+        const double[::1] grid_i
 
         # Axes to iterate over
         long I = xi.shape[0]
diff --git a/scipy/interpolate/tests/test_rgi.py b/scipy/interpolate/tests/test_rgi.py
index fc255d946f2..5ff52547ba3 100644
--- a/scipy/interpolate/tests/test_rgi.py
+++ b/scipy/interpolate/tests/test_rgi.py
@@ -588,6 +588,35 @@ def test_nonscalar_values_linear_2D(self):
         v2 = np.expand_dims(vs, axis=0)
         assert_allclose(v, v2, atol=1e-14, err_msg=method)
 
+    @pytest.mark.parametrize(
+        "dtype",
+        [np.float32, np.float64, np.complex64, np.complex128]
+    )
+    @pytest.mark.parametrize("xi_dtype", [np.float32, np.float64])
+    def test_float32_values(self, dtype, xi_dtype):
+        # regression test for gh-17718: values.dtype=float32 fails
+        def f(x, y):
+            return 2 * x**3 + 3 * y**2
+
+        x = np.linspace(1, 4, 11)
+        y = np.linspace(4, 7, 22)
+
+        xg, yg = np.meshgrid(x, y, indexing='ij', sparse=True)
+        data = f(xg, yg)
+
+        data = data.astype(dtype)
+
+        interp = RegularGridInterpolator((x, y), data)
+
+        pts = np.array([[2.1, 6.2],
+                        [3.3, 5.2]], dtype=xi_dtype)
+
+        # the values here are just what the call returns; the test checks
+        # that the call succeeds at all, instead of failing because cython
+        # lacks a float32 kernel
+        assert_allclose(interp(pts), [134.10469388, 153.40069388], atol=1e-7)
+
+
 class MyValue:
     """
     Minimal indexable object
@@ -933,3 +962,58 @@ def test_invalid_xi_dimensions(self):
                "RegularGridInterpolator has dimension 1")
         with assert_raises(ValueError, match=msg):
             interpn(points, values, xi)
+
+    def test_readonly_grid(self):
+        # https://github.com/scipy/scipy/issues/17716
+        x = np.linspace(0, 4, 5)
+        y = np.linspace(0, 5, 6)
+        z = np.linspace(0, 6, 7)
+        points = (x, y, z)
+        values = np.ones((5, 6, 7))
+        point = np.array([2.21, 3.12, 1.15])
+        for d in points:
+            d.flags.writeable = False
+        values.flags.writeable = False
+        point.flags.writeable = False
+        interpn(points, values, point)
+        RegularGridInterpolator(points, values)(point)
+
+    def test_2d_readonly_grid(self):
+        # https://github.com/scipy/scipy/issues/17716
+        # test special 2d case
+        x = np.linspace(0, 4, 5)
+        y = np.linspace(0, 5, 6)
+        points = (x, y)
+        values = np.ones((5, 6))
+        point = np.array([2.21, 3.12])
+        for d in points:
+            d.flags.writeable = False
+        values.flags.writeable = False
+        point.flags.writeable = False
+        interpn(points, values, point)
+        RegularGridInterpolator(points, values)(point)
+
+    def test_non_c_contiguous_grid(self):
+        # https://github.com/scipy/scipy/issues/17716
+        x = np.linspace(0, 4, 5)
+        x = np.vstack((x, np.empty_like(x))).T.copy()[:, 0]
+        assert not x.flags.c_contiguous
+        y = np.linspace(0, 5, 6)
+        z = np.linspace(0, 6, 7)
+        points = (x, y, z)
+        values = np.ones((5, 6, 7))
+        point = np.array([2.21, 3.12, 1.15])
+        interpn(points, values, point)
+        RegularGridInterpolator(points, values)(point)
+
+    @pytest.mark.parametrize("dtype", ['>f8', '<f8'])
+    def test_endianness(self, dtype):
+        # https://github.com/scipy/scipy/issues/17716
+        # test special 2d case
+        x = np.linspace(0, 4, 5, dtype=dtype)
+        y = np.linspace(0, 5, 6, dtype=dtype)
+        points = (x, y)
+        values = np.ones((5, 6), dtype=dtype)
+        point = np.array([2.21, 3.12], dtype=dtype)
+        interpn(points, values, point)
+        RegularGridInterpolator(points, values)(point)
diff --git a/scipy/linalg/meson.build b/scipy/linalg/meson.build
index d9ed59858df..ce383e5d51c 100644
--- a/scipy/linalg/meson.build
+++ b/scipy/linalg/meson.build
@@ -23,20 +23,20 @@ cython_linalg = custom_target('cython_linalg',
 )
 
 # pyx -> c, pyx -> cpp generators, depending on __init__.py here.
-linalg_init_cython_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+linalg_init_cython_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : [_cython_tree, __init__py])
 
 # pyx -> c, pyx -> cpp generators, depending on _cythonized_array_utils.pxd
-linalg_init_utils_cython_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+linalg_init_utils_cython_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : [_cython_tree, __init__py, _cy_array_utils_pxd])
 
 # pyx -> c, pyx -> cpp generators, depending on copied pxd files and init
-linalg_cython_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+linalg_cython_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : [_cython_tree, __init__py, cython_linalg])
 
@@ -327,7 +327,7 @@ py3.install_sources(
 #       see https://github.com/mesonbuild/meson/issues/3206
 #
 #       For the below code to work, the script generating the files should use
-#       a different filename, and then it should be moved to the final location 
+#       a different filename, and then it should be moved to the final location
 #       (e.g. with `fs.copyfile`). Either that, or split the codegen scripts and
 #       call it twice: once for the installable files, and once for the
 #       non-installable files.
diff --git a/scipy/meson.build b/scipy/meson.build
index 87f7afca231..21ea04d5e68 100644
--- a/scipy/meson.build
+++ b/scipy/meson.build
@@ -134,6 +134,12 @@ endif
 blas = dependency(blas_name)
 lapack = dependency(lapack_name)
 
+# TODO: Add `pybind11` when available as a dependency
+dependency_map = {
+  'BLAS': blas,
+  'LAPACK': lapack,
+}
+
 # FIXME: conda-forge sets MKL_INTERFACE_LAYER=LP64,GNU, see gh-11812.
 #        This needs work on gh-16200 to make MKL robust. We should be
 #        requesting `mkl-dynamic-lp64-seq` here. And then there's work needed
@@ -148,16 +154,7 @@ else
   g77_abi_wrappers = files('_build_utils/src/wrap_dummy_g77_abi.f')
 endif
 
-generate_config = custom_target(
-  'generate-config',
-  install: true,
-  build_always_stale: true,
-  build_by_default: true,
-  output: '__config__.py',
-  input: '../tools/config_utils.py',
-  command: [py3, '@INPUT@', '@OUTPUT@'],
-  install_dir: py3.get_install_dir() / 'scipy'
-)
+scipy_dir = py3.get_install_dir() / 'scipy'
 
 generate_version = custom_target(
   'generate-version',
@@ -167,7 +164,7 @@ generate_version = custom_target(
   output: 'version.py',
   input: '../tools/version_utils.py',
   command: [py3, '@INPUT@', '--source-root', '@SOURCE_ROOT@'],
-  install_dir: py3.get_install_dir() / 'scipy'
+  install_dir: scipy_dir
 )
 
 python_sources = [
@@ -200,15 +197,16 @@ _cython_tree = [
   fs.copyfile('special.pxd'),
 ]
 
-cython_cli = find_program('_build_utils/cythoner.py')
+cython_args = ['-3', '--fast-fail', '--output-file', '@OUTPUT@', '--include-dir', '@BUILD_ROOT@', '@INPUT@']
+cython_cplus_args = ['--cplus'] + cython_args
 
-cython_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+cython_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : _cython_tree)
 
-cython_gen_cpp = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@', '--cplus'],
+cython_gen_cpp = generator(cython,
+  arguments : cython_cplus_args,
   output : '@BASENAME@.cpp',
   depends : [_cython_tree])
 
@@ -292,12 +290,116 @@ else
   use_math_defines = []
 endif
 
+# Determine whether it is necessary to link libatomic. This could be the case
+# e.g. on 32-bit platforms when atomic operations are used on 64-bit types.
+# The check is copied from Mesa <https://www.mesa3d.org/>.
+# Note that this dependency is not desired; it came in with a HiGHS update.
+# We should try to get rid of it. For discussion, see gh-17777.
+null_dep = dependency('', required : false)
+atomic_dep = null_dep
+code_non_lockfree = '''
+  #include <stdint.h>
+  int main() {
+   struct {
+     uint64_t *v;
+   } x;
+   return (int)__atomic_load_n(x.v, __ATOMIC_ACQUIRE) &
+          (int)__atomic_add_fetch(x.v, (uint64_t)1, __ATOMIC_ACQ_REL);
+  }
+'''
+if cc.get_id() != 'msvc'
+  if not cc.links(
+      code_non_lockfree,
+      name : 'Check atomic builtins without -latomic'
+    )
+    atomic_dep = cc.find_library('atomic', required: false)
+    if atomic_dep.found()
+      # We're not sure that with `-latomic` things will work for all compilers,
+      # so verify and only keep libatomic as a dependency if this works. It is
+      # possible the build will fail later otherwise - unclear under what
+      # circumstances (compilers, runtimes, etc.) exactly.
+      if not cc.links(
+          code_non_lockfree,
+          dependencies: atomic_dep,
+          name : 'Check atomic builtins with -latomic'
+        )
+        atomic_dep = null_dep
+      endif
+    endif
+  endif
+endif
+
 # Suppress warning for deprecated Numpy API.
 # (Suppress warning messages emitted by #warning directives).
 # Replace with numpy_nodepr_api after Cython 3.0 is out
 cython_c_args += [_cpp_Wno_cpp, use_math_defines]
 cython_cpp_args = cython_c_args
 
+compilers = {
+  'C': cc,
+  'CPP': cpp,
+  'CYTHON': meson.get_compiler('cython'),
+  'FORTRAN': meson.get_compiler('fortran')
+}
+
+machines = {
+  'HOST': host_machine,
+  'BUILD': build_machine,
+}
+
+conf_data = configuration_data()
+
+# Set compiler information
+foreach name, compiler : compilers
+  conf_data.set(name + '_COMP', compiler.get_id())
+  conf_data.set(name + '_COMP_LINKER_ID', compiler.get_linker_id())
+  conf_data.set(name + '_COMP_VERSION', compiler.version())
+  conf_data.set(name + '_COMP_CMD_ARRAY', ', '.join(compiler.cmd_array()))
+endforeach
+# Add `pythran` information if present
+if use_pythran
+  pythran_version_command = run_command('pythran', '-V', check: true)
+  conf_data.set('PYTHRAN_VERSION', pythran_version_command.stdout().strip())
+  conf_data.set('PYTHRAN_INCDIR', incdir_pythran)
+endif
+
+# Machines CPU and system information
+foreach name, machine : machines
+  conf_data.set(name + '_CPU', machine.cpu())
+  conf_data.set(name + '_CPU_FAMILY', machine.cpu_family())
+  conf_data.set(name + '_CPU_ENDIAN', machine.endian())
+  conf_data.set(name + '_CPU_SYSTEM', machine.system())
+endforeach
+
+conf_data.set('CROSS_COMPILED', meson.is_cross_build())
+
+# Python information
+conf_data.set('PYTHON_PATH', py3.full_path())
+conf_data.set('PYTHON_VERSION', py3.language_version())
+
+# Dependencies information
+foreach name, dep : dependency_map
+  conf_data.set(name + '_NAME', dep.name())
+  conf_data.set(name + '_FOUND', dep.found())
+  if dep.found()
+    conf_data.set(name + '_VERSION', dep.version())
+    conf_data.set(name + '_TYPE_NAME', dep.type_name())
+    conf_data.set(name + '_INCLUDEDIR', dep.get_variable('includedir', default_value: 'unknown'))
+    conf_data.set(name + '_LIBDIR', dep.get_variable('libdir', default_value: 'unknown'))
+    conf_data.set(name + '_OPENBLAS_CONFIG', dep.get_variable('openblas_config', default_value: 'unknown'))
+    conf_data.set(name + '_PCFILEDIR', dep.get_variable('pcfiledir', default_value: 'unknown'))
+  endif
+endforeach
+
+configure_file(
+  input: '__config__.py.in',
+  output: '__config__.py',
+  configuration : conf_data,
+  install_dir: scipy_dir,
+)
+
 # Ordering of subdirs: special and linalg come first, because other submodules
 # have dependencies on cython_special.pxd and cython_linalg.pxd. After those,
 # subdirs with the most heavy builds should come first (that parallelizes
diff --git a/scipy/misc/tests/meson.build b/scipy/misc/tests/meson.build
index a4956692a14..e6281c7348f 100644
--- a/scipy/misc/tests/meson.build
+++ b/scipy/misc/tests/meson.build
@@ -1,7 +1,8 @@
 python_sources = [
   '__init__.py',
   'test_common.py',
-  'test_doccer.py'
+  'test_doccer.py',
+  'test_config.py',
 ]
 
 py3.install_sources(
diff --git a/scipy/misc/tests/test_config.py b/scipy/misc/tests/test_config.py
new file mode 100644
index 00000000000..b43d3f9f0da
--- /dev/null
+++ b/scipy/misc/tests/test_config.py
@@ -0,0 +1,44 @@
+"""
+Check the SciPy config is valid.
+"""
+import scipy
+import pytest
+from unittest.mock import patch
+
+pytestmark = pytest.mark.skipif(
+    not hasattr(scipy.__config__, "_built_with_meson"),
+    reason="Requires Meson builds",
+)
+
+
+class TestSciPyConfigs:
+    REQUIRED_CONFIG_KEYS = [
+        "Compilers",
+        "Machine Information",
+        "Python Information",
+    ]
+
+    @patch("scipy.__config__._check_pyyaml")
+    def test_pyyaml_not_found(self, mock_yaml_importer):
+        mock_yaml_importer.side_effect = ModuleNotFoundError()
+        with pytest.warns(UserWarning):
+            scipy.show_config()
+
+    def test_dict_mode(self):
+        config = scipy.show_config(mode="dicts")
+
+        assert isinstance(config, dict)
+        assert all([key in config for key in self.REQUIRED_CONFIG_KEYS]), (
+            "Required key missing,"
+            " see index of `False` with `REQUIRED_CONFIG_KEYS`"
+        )
+
+    def test_invalid_mode(self):
+        with pytest.raises(AttributeError):
+            scipy.show_config(mode="foo")
+
+    def test_warn_to_add_tests(self):
+        assert len(scipy.__config__.DisplayModes) == 2, (
+            "New mode detected,"
+            " please add UT if applicable and increment this count"
+        )
diff --git a/scipy/optimize/_differentialevolution.py b/scipy/optimize/_differentialevolution.py
index ad1f8ae7ca3..151b43fde94 100644
--- a/scipy/optimize/_differentialevolution.py
+++ b/scipy/optimize/_differentialevolution.py
@@ -1497,6 +1497,7 @@ def _mutate(self, candidate):
             i = 0
             crossovers = rng.uniform(size=self.parameter_count)
             crossovers = crossovers < self.cross_over_probability
+            crossovers[0] = True
             while (i < self.parameter_count and crossovers[i]):
                 trial[fill_point] = bprime[fill_point]
                 fill_point = (fill_point + 1) % self.parameter_count
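
The added `crossovers[0] = True` guarantees that at least one parameter is taken from the mutated vector, so a trial can never be an unchanged copy of the current candidate. A simplified sketch of the idea using binomial-style crossover (not SciPy's exact exponential-crossover loop; names hypothetical):

```python
import numpy as np

def crossover_trial(candidate, bprime, cross_over_probability, rng):
    # Draw the crossover mask, then force index 0 on, mirroring the
    # one-line fix above: at least one component always changes.
    crossovers = rng.uniform(size=candidate.size) < cross_over_probability
    crossovers[0] = True
    return np.where(crossovers, bprime, candidate)

rng = np.random.default_rng(1234)
trial = crossover_trial(np.zeros(5), np.ones(5), 0.0, rng)
print(trial)  # first component comes from bprime even with probability 0
```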
diff --git a/scipy/optimize/_highs/meson.build b/scipy/optimize/_highs/meson.build
index 0d9ffba0b50..5051a08b979 100644
--- a/scipy/optimize/_highs/meson.build
+++ b/scipy/optimize/_highs/meson.build
@@ -237,7 +237,7 @@ _highs_wrapper = py3.extension_module('_highs_wrapper',
     '../../_lib/highs/src/lp_data/',
     '../../_lib/highs/src/util/'
   ],
-  dependencies: thread_dep,
+  dependencies: [thread_dep, atomic_dep],
   link_args: version_link_args,
   link_with: [highs_lib, ipx_lib, basiclu_lib],
   cpp_args: [highs_flags, highs_define_macros, cython_c_args],
diff --git a/scipy/optimize/cython_optimize/meson.build b/scipy/optimize/cython_optimize/meson.build
index b260bc752e7..3604ad1b0ee 100644
--- a/scipy/optimize/cython_optimize/meson.build
+++ b/scipy/optimize/cython_optimize/meson.build
@@ -13,8 +13,8 @@ _zeros_pyx = custom_target('_zeros_pyx',
   ]
 )
 
-cy_opt_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+cy_opt_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : [_cython_tree,
     _dummy_init_optimize,
diff --git a/scipy/optimize/meson.build b/scipy/optimize/meson.build
index c7079e7394e..05db6a68040 100644
--- a/scipy/optimize/meson.build
+++ b/scipy/optimize/meson.build
@@ -231,8 +231,8 @@ endif
 
 _dummy_init_optimize = fs.copyfile('__init__.py')
 
-opt_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+opt_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : [_cython_tree, cython_linalg, _dummy_init_optimize])
 
diff --git a/scipy/signal/_medianfilter.c b/scipy/signal/_medianfilter.c
index 90d66c59dd5..e49964d46c6 100644
--- a/scipy/signal/_medianfilter.c
+++ b/scipy/signal/_medianfilter.c
@@ -10,7 +10,7 @@
 void f_medfilt2(float*,float*,npy_intp*,npy_intp*);
 void d_medfilt2(double*,double*,npy_intp*,npy_intp*);
 void b_medfilt2(unsigned char*,unsigned char*,npy_intp*,npy_intp*);
-extern char *check_malloc (int);
+extern char *check_malloc (size_t);
 
 
 /* The QUICK_SELECT routine is based on Hoare's Quickselect algorithm,
diff --git a/scipy/sparse/csgraph/_shortest_path.pyx b/scipy/sparse/csgraph/_shortest_path.pyx
index 0d20388e6bc..459672ca271 100644
--- a/scipy/sparse/csgraph/_shortest_path.pyx
+++ b/scipy/sparse/csgraph/_shortest_path.pyx
@@ -1337,6 +1337,7 @@ cdef int _johnson_directed(
             const int[:] csr_indices,
             const int[:] csr_indptr,
             double[:] dist_array):
+    # Note: The contents of dist_array must be initialized to zero on entry
     cdef:
         unsigned int N = dist_array.shape[0]
         unsigned int j, k, count
@@ -1344,10 +1345,6 @@ cdef int _johnson_directed(
 
     # relax all edges (N+1) - 1 times
     for count in range(N):
-        for k in range(N):
-            if dist_array[k] < 0:
-                dist_array[k] = 0
-
         for j in range(N):
             d1 = dist_array[j]
             for k in range(csr_indptr[j], csr_indptr[j + 1]):
@@ -1373,6 +1370,7 @@ cdef int _johnson_undirected(
             const int[:] csr_indices,
             const int[:] csr_indptr,
             double[:] dist_array):
+    # Note: The contents of dist_array must be initialized to zero on entry
     cdef:
         unsigned int N = dist_array.shape[0]
         unsigned int j, k, ind_k, count
@@ -1380,10 +1378,6 @@ cdef int _johnson_undirected(
 
     # relax all edges (N+1) - 1 times
     for count in range(N):
-        for k in range(N):
-            if dist_array[k] < 0:
-                dist_array[k] = 0
-
         for j in range(N):
             d1 = dist_array[j]
             for k in range(csr_indptr[j], csr_indptr[j + 1]):
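
For context: Johnson's algorithm runs Bellman-Ford from a virtual source connected to every vertex by a zero-weight edge, which is exactly why `dist_array` starting at zero makes the deleted clamping loop redundant. A minimal self-contained sketch (edge-list form rather than CSR; function name hypothetical):

```python
def bellman_ford_virtual_source(n, edges):
    # dist starts at 0 for every vertex, as if a virtual source had a
    # zero-weight edge to each one (the precondition noted above).
    dist = [0.0] * n
    for _ in range(n):                 # relax all edges n times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:              # one more pass detects negative cycles
        if dist[u] + w < dist[v]:
            return None
    return dist

print(bellman_ford_virtual_source(3, [(0, 1, -1.0), (1, 2, -1.0)]))
# [0.0, -1.0, -2.0]
```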
@@ -1556,7 +1550,7 @@ cdef void decrease_val(FibonacciHeap* heap,
         # at the leftmost end of the roots' linked-list.
         remove(node)
         node.right_sibling = heap.min_node
-        heap.min_node.left_sibling = node.right_sibling
+        heap.min_node.left_sibling = node
         heap.min_node = node
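
The one-token heap fix above (`node` instead of `node.right_sibling`) corrects a classic doubly-linked-list head-insertion bug: the old code left the back-pointer aimed past the newly inserted node. A minimal standalone sketch of the corrected linkage:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left_sibling = None
        self.right_sibling = None

def insert_before_min(min_node, node):
    # Corrected linkage: the old minimum's left_sibling must point back
    # at the newly inserted node itself.
    node.right_sibling = min_node
    min_node.left_sibling = node
    return node  # node becomes the new leftmost root

a, b = Node(2), Node(1)
new_min = insert_before_min(a, b)
print(new_min.key, new_min.right_sibling.key)  # 1 2
```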
 
 
diff --git a/scipy/sparse/csgraph/tests/test_shortest_path.py b/scipy/sparse/csgraph/tests/test_shortest_path.py
index 2e55ca156e3..f745e0fbba3 100644
--- a/scipy/sparse/csgraph/tests/test_shortest_path.py
+++ b/scipy/sparse/csgraph/tests/test_shortest_path.py
@@ -1,11 +1,13 @@
+from io import StringIO
 import warnings
 import numpy as np
-from numpy.testing import assert_array_almost_equal, assert_array_equal
+from numpy.testing import assert_array_almost_equal, assert_array_equal, assert_allclose
 from pytest import raises as assert_raises
 from scipy.sparse.csgraph import (shortest_path, dijkstra, johnson,
                                   bellman_ford, construct_dist_matrix,
                                   NegativeCycleError)
 import scipy.sparse
+from scipy.io import mmread
 import pytest
 
 directed_G = np.array([[0, 3, 3, 0, 0],
@@ -77,6 +79,14 @@
                             [3, 3, 0, -9999, 3],
                             [4, 4, 0, 4, -9999]], dtype=float)
 
+directed_negative_weighted_G = np.array([[0, 0, 0],
+                                         [-1, 0, 0],
+                                         [0, -1, 0]], dtype=float)
+
+directed_negative_weighted_SP = np.array([[0, np.inf, np.inf],
+                                          [-1, 0, np.inf],
+                                          [-2, -1, 0]], dtype=float)
+
 methods = ['auto', 'FW', 'D', 'BF', 'J']
 
 
@@ -176,7 +186,7 @@ def test_dijkstra_indices_min_only(directed, SP_ans, indices):
 
 
 @pytest.mark.parametrize('n', (10, 100, 1000))
-def test_shortest_path_min_only_random(n):
+def test_dijkstra_min_only_random(n):
     np.random.seed(1234)
     data = scipy.sparse.rand(n, n, density=0.5, format='lil',
                              random_state=42, dtype=np.float64)
@@ -186,7 +196,7 @@ def test_shortest_path_min_only_random(n):
     np.random.shuffle(v)
     indices = v[:int(n*.1)]
     ds, pred, sources = dijkstra(data,
-                                 directed=False,
+                                 directed=True,
                                  indices=indices,
                                  min_only=True,
                                  return_predecessors=True)
@@ -198,6 +208,48 @@ def test_shortest_path_min_only_random(n):
             p = pred[p]
 
 
+def test_dijkstra_random():
+    # reproduces the hang observed in gh-17782
+    n = 10
+    indices = [0, 4, 4, 5, 7, 9, 0, 6, 2, 3, 7, 9, 1, 2, 9, 2, 5, 6]
+    indptr = [0, 0, 2, 5, 6, 7, 8, 12, 15, 18, 18]
+    data = [0.33629, 0.40458, 0.47493, 0.42757, 0.11497, 0.91653, 0.69084,
+            0.64979, 0.62555, 0.743, 0.01724, 0.99945, 0.31095, 0.15557,
+            0.02439, 0.65814, 0.23478, 0.24072]
+    graph = scipy.sparse.csr_matrix((data, indices, indptr), shape=(n, n))
+    dijkstra(graph, directed=True, return_predecessors=True)
+
+
+def test_gh_17782_segfault():
+    text = """%%MatrixMarket matrix coordinate real general
+                84 84 22
+                2 1 4.699999809265137e+00
+                6 14 1.199999973177910e-01
+                9 6 1.199999973177910e-01
+                10 16 2.012000083923340e+01
+                11 10 1.422000026702881e+01
+                12 1 9.645999908447266e+01
+                13 18 2.012000083923340e+01
+                14 13 4.679999828338623e+00
+                15 11 1.199999973177910e-01
+                16 12 1.199999973177910e-01
+                18 15 1.199999973177910e-01
+                32 2 2.299999952316284e+00
+                33 20 6.000000000000000e+00
+                33 32 5.000000000000000e+00
+                36 9 3.720000028610229e+00
+                36 37 3.720000028610229e+00
+                36 38 3.720000028610229e+00
+                37 44 8.159999847412109e+00
+                38 32 7.903999328613281e+01
+                43 20 2.400000000000000e+01
+                43 33 4.000000000000000e+00
+                44 43 6.028000259399414e+01
+    """
+    data = mmread(StringIO(text))
+    dijkstra(data, directed=True, return_predecessors=True)
+
+
 def test_shortest_path_indices():
     indices = np.arange(4)
 
@@ -279,6 +331,12 @@ def check(method, directed):
             check(method, directed)
 
 
+@pytest.mark.parametrize("method", ['FW', 'J', 'BF'])
+def test_negative_weights(method):
+    SP = shortest_path(directed_negative_weighted_G, method, directed=True)
+    assert_allclose(SP, directed_negative_weighted_SP, atol=1e-10)
+
+
 def test_masked_input():
     np.ma.masked_equal(directed_G, 0)
 
diff --git a/scipy/sparse/linalg/_isolve/tests/test_iterative.py b/scipy/sparse/linalg/_isolve/tests/test_iterative.py
index a782a8d27c2..48fc16b7a00 100644
--- a/scipy/sparse/linalg/_isolve/tests/test_iterative.py
+++ b/scipy/sparse/linalg/_isolve/tests/test_iterative.py
@@ -409,7 +409,9 @@ def test_atol(solver):
         residual = A.dot(x) - b
         err = np.linalg.norm(residual)
         atol2 = tol * b_norm
-        assert_(err <= max(atol, atol2))
+        # Added 1.00025 fudge factor because of `err` exceeding `atol` just
+        # very slightly on s390x (see gh-17839)
+        assert_(err <= 1.00025 * max(atol, atol2))
 
 
 @pytest.mark.parametrize("solver", [cg, cgs, bicg, bicgstab, gmres, qmr, minres, lgmres, gcrotmk, tfqmr])
@@ -452,8 +454,10 @@ def test_zero_rhs(solver):
                                                 and sys.version_info[1] == 9,
                                                 reason="gh-13019")),
     qmr,
-    pytest.param(lgmres, marks=pytest.mark.xfail(platform.machine() == 'ppc64le',
-                                                 reason="fails on ppc64le")),
+    pytest.param(lgmres, marks=pytest.mark.xfail(
+        platform.machine() not in ['x86_64', 'x86', 'aarch64', 'arm64'],
+        reason="fails on at least ppc64le, ppc64 and riscv64, see gh-17839")
+    ),
     pytest.param(cgs, marks=pytest.mark.xfail),
     pytest.param(bicg, marks=pytest.mark.xfail),
     pytest.param(bicgstab, marks=pytest.mark.xfail),
diff --git a/scipy/spatial/meson.build b/scipy/spatial/meson.build
index 569da4a068e..895e506ae67 100644
--- a/scipy/spatial/meson.build
+++ b/scipy/spatial/meson.build
@@ -5,8 +5,8 @@ _spatial_pxd = [
 
 # pyx -> c, pyx -> cpp generators, depending on copied pxd files.
 # _qhull.pyx depends also on _lib/messagestream
-spt_cython_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+spt_cython_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : [_cython_tree, _spatial_pxd, _lib_pxd])
 
diff --git a/scipy/spatial/tests/test_distance.py b/scipy/spatial/tests/test_distance.py
index 46c69e34420..42152c2dd04 100644
--- a/scipy/spatial/tests/test_distance.py
+++ b/scipy/spatial/tests/test_distance.py
@@ -1176,7 +1176,7 @@ def test_pdist_jensenshannon_random_nonC(self):
     def test_pdist_jensenshannon_iris(self):
         if _is_32bit():
             # Test failing on 32-bit Linux on Azure otherwise, see gh-12810
-            eps = 1.5e-10
+            eps = 2.5e-10
         else:
             eps = 1e-12
 
diff --git a/scipy/special/boost_special_functions.h b/scipy/special/boost_special_functions.h
index ae825e5de38..69ea2856237 100644
--- a/scipy/special/boost_special_functions.h
+++ b/scipy/special/boost_special_functions.h
@@ -90,7 +90,43 @@ Real powm1_wrap(Real x, Real y)
         z = NAN;
     } catch (const std::overflow_error& e) {
         sf_error("powm1", SF_ERROR_OVERFLOW, NULL);
-        z = INFINITY;
+
+        // See: https://en.cppreference.com/w/cpp/numeric/math/pow
+        if (x > 0) {
+            if (y < 0) {
+                z = 0;
+            }
+            else if (y == 0) {
+                z = 1;
+            }
+            else {
+                z = INFINITY;
+            }
+        }
+        else if (x == 0) {
+            z = INFINITY;
+        }
+        else {
+            if (y < 0) {
+                if (std::fmod(y, 2) == 0) {
+                    z = 0;
+                }
+                else {
+                    z = -0.0;
+                }
+            }
+            else if (y == 0) {
+                z = 1;
+            }
+            else {
+                if (std::fmod(y, 2) == 0) {
+                    z = INFINITY;
+                }
+                else {
+                    z = -INFINITY;
+                }
+            }
+        }
     } catch (const std::underflow_error& e) {
         sf_error("powm1", SF_ERROR_UNDERFLOW, NULL);
         z = 0;
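
The fallback above follows the `pow` special-case table from the cppreference page cited in the comment. A sketch of the same case analysis in Python (function name hypothetical); it returns the value the wrapper assigns to `z` in each overflow branch:

```python
import math

def overflow_pow_sign(x, y):
    # The value pow(x, y) tends toward when it overflows, mirroring the
    # C++ fallback above; for x < 0 the sign depends on the parity of y.
    if x > 0:
        return 0.0 if y < 0 else (1.0 if y == 0 else math.inf)
    if x == 0:
        return math.inf
    odd = math.fmod(y, 2) != 0
    if y < 0:
        return -0.0 if odd else 0.0
    if y == 0:
        return 1.0
    return -math.inf if odd else math.inf

print(overflow_pow_sign(-2.0, 3.0))  # -inf
```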
diff --git a/scipy/special/cephes/kolmogorov.c b/scipy/special/cephes/kolmogorov.c
index 633d36d8f2d..019d224123e 100644
--- a/scipy/special/cephes/kolmogorov.c
+++ b/scipy/special/cephes/kolmogorov.c
@@ -774,10 +774,16 @@ _smirnov(int n, double x)
     /* Special case:  n is so big, take too long to compute */
     if (n > SMIRNOV_MAX_COMPUTE_N) {
         /* p ~ e^(-(6nx+1)^2 / 18n) */
-        double logp = -pow(6*n*x+1.0, 2)/18.0/n;
-        sf = exp(logp);
-        cdf = 1 - sf;
-        pdf = (6 * nx + 1) * 2 * sf/3;
+        double logp = -pow(6.0*n*x+1, 2)/18.0/n;
+        /* Maximise precision for small p-value. */
+        if (logp < -M_LN2) {
+            sf = exp(logp);
+            cdf = 1 - sf;
+        } else {
+            cdf = -expm1(logp);
+            sf = 1 - cdf;
+        }
+        pdf = (6.0*n*x+1) * 2 * sf/3;
         RETURN_3PROBS(sf, cdf, pdf);
     }
     {
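
The `-expm1(logp)` branch above is the standard trick for computing `1 - exp(logp)` without cancellation when `exp(logp)` is close to 1 (i.e. when `logp >= -ln 2`). A quick illustration:

```python
import math

logp = -1e-12                          # log of a survival probability near 1
naive_cdf = 1.0 - math.exp(logp)       # loses digits to cancellation
accurate_cdf = -math.expm1(logp)       # keeps nearly full precision
print(naive_cdf, accurate_cdf)
```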
diff --git a/scipy/special/meson.build b/scipy/special/meson.build
index 461c4c5c8e3..f92183f50cb 100644
--- a/scipy/special/meson.build
+++ b/scipy/special/meson.build
@@ -307,13 +307,13 @@ cython_special = custom_target('cython_special',
 )
 
 # pyx -> c, pyx -> cpp generators, depending on copied pxi, pxd files.
-uf_cython_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+uf_cython_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : [_cython_tree, _ufuncs_pxi_pxd_sources])
 
-uf_cython_gen_cpp = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@', '--cplus'],
+uf_cython_gen_cpp = generator(cython,
+  arguments : cython_cplus_args,
   output : '@BASENAME@.cpp',
   depends : [_cython_tree, _ufuncs_pxi_pxd_sources])
 
diff --git a/scipy/special/tests/test_orthogonal.py b/scipy/special/tests/test_orthogonal.py
index 7a2d49c957d..24839bbfd5b 100644
--- a/scipy/special/tests/test_orthogonal.py
+++ b/scipy/special/tests/test_orthogonal.py
@@ -549,7 +549,7 @@ def test_roots_gegenbauer():
     vgq(rootf(170), evalf(170), weightf(170), -1., 1., 5, atol=1e-13)
     vgq(rootf(170), evalf(170), weightf(170), -1., 1., 25, atol=1e-12)
     vgq(rootf(170), evalf(170), weightf(170), -1., 1., 100, atol=1e-11)
-    vgq(rootf(170.5), evalf(170.5), weightf(170.5), -1., 1., 5, atol=1e-13)
+    vgq(rootf(170.5), evalf(170.5), weightf(170.5), -1., 1., 5, atol=1.25e-13)
     vgq(rootf(170.5), evalf(170.5), weightf(170.5), -1., 1., 25, atol=1e-12)
     vgq(rootf(170.5), evalf(170.5), weightf(170.5), -1., 1., 100, atol=1e-11)
 
diff --git a/scipy/stats/_continuous_distns.py b/scipy/stats/_continuous_distns.py
index bdd3e157c78..8b2c91e1447 100644
--- a/scipy/stats/_continuous_distns.py
+++ b/scipy/stats/_continuous_distns.py
@@ -7304,46 +7304,49 @@ def _entropy(self, a):
         return 1 - 1.0/a - np.log(a)
 
     def _support_mask(self, x, a):
-        if np.any(a < 1):
-            return (x != 0) & super(powerlaw_gen, self)._support_mask(x, a)
-        else:
-            return super(powerlaw_gen, self)._support_mask(x, a)
+        return (super(powerlaw_gen, self)._support_mask(x, a)
+                & ((x != 0) | (a >= 1)))
 
+    @_call_super_mom
+    @extend_notes_in_docstring(rv_continuous, notes="""\
+        Notes specifically for ``powerlaw.fit``: If the location is a free
+        parameter and the value returned for the shape parameter is less than
+        one, the true maximum likelihood approaches infinity. This causes
+        numerical difficulties, and the resulting estimates are approximate.
+        \n\n""")
     def fit(self, data, *args, **kwds):
-        '''
-        Summary of the strategy:
-
-        1) If the scale and location are fixed, return the shape according
-           to a formula.
-
-        2) If the scale is fixed, there are two possibilities for the other
-           parameters - one corresponding with shape less than one, and another
-           with shape greater than one. Calculate both, and return whichever
-           has the better log-likelihood.
-
-        At this point, the scale is known to be free.
-
-        3) If the location is fixed, return the scale and shape according to
-           formulas (or, if the shape is fixed, the fixed shape).
-
-        At this point, the location and scale are both free. There are separate
-        equations depending on whether the shape is less than one or greater
-        than one.
-
-        4a) If the shape is less than one, there are formulas for shape,
-            location, and scale.
-        4b) If the shape is greater than one, there are formulas for shape
-            and scale, but there is a condition for location to be solved
-            numerically.
-
-        If the shape is fixed and less than one, we use 4a.
-        If the shape is fixed and greater than one, we use 4b.
-        If the shape is also free, we calculate fits using both 4a and 4b
-        and choose the one that results a better log-likelihood.
-
-        In many cases, the use of `np.nextafter` is used to avoid numerical
-        issues.
-        '''
+        # Summary of the strategy:
+        #
+        # 1) If the scale and location are fixed, return the shape according
+        #    to a formula.
+        #
+        # 2) If the scale is fixed, there are two possibilities for the other
+        #    parameters - one corresponding with shape less than one, and
+        #    another with shape greater than one. Calculate both, and return
+        #    whichever has the better log-likelihood.
+        #
+        # At this point, the scale is known to be free.
+        #
+        # 3) If the location is fixed, return the scale and shape according to
+        #    formulas (or, if the shape is fixed, the fixed shape).
+        #
+        # At this point, the location and scale are both free. There are
+        # separate equations depending on whether the shape is less than one or
+        # greater than one.
+        #
+        # 4a) If the shape is less than one, there are formulas for shape,
+        #     location, and scale.
+        # 4b) If the shape is greater than one, there are formulas for shape
+        #     and scale, but there is a condition for location to be solved
+        #     numerically.
+        #
+        # If the shape is fixed and less than one, we use 4a.
+        # If the shape is fixed and greater than one, we use 4b.
+        # If the shape is also free, we calculate fits using both 4a and 4b
+        # and choose the one that results in a better log-likelihood.
+        #
+        # In many cases, `np.nextafter` is used to avoid numerical
+        # issues.
         if kwds.pop('superfit', False):
             return super().fit(data, *args, **kwds)
 
@@ -7376,7 +7379,8 @@ def get_shape(data, loc, scale):
             # The first-order necessary condition on `shape` can be solved in
             # closed form. It can be used no matter the assumption of the
             # value of the shape.
-            return -len(data) / np.sum(np.log((data - loc)/scale))
+            N = len(data)
+            return - N / (np.sum(np.log(data - loc)) - N*np.log(scale))
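
The rewritten estimate is algebraically the same as the old one, just with the logarithm of the quotient expanded so `scale` is handled separately. A quick check on illustrative data (not from scipy's test suite):

```python
import math

# Both forms of the closed-form shape estimate should agree:
#   -N / sum(log((x - loc)/scale))
#   -N / (sum(log(x - loc)) - N*log(scale))
data = [0.1, 0.3, 0.5, 0.7, 0.9]
loc, scale = 0.0, 1.5
N = len(data)

old_form = -N / sum(math.log((x - loc) / scale) for x in data)
new_form = -N / (sum(math.log(x - loc) for x in data) - N * math.log(scale))

assert math.isclose(old_form, new_form, rel_tol=1e-12)
```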
 
         def get_scale(data, loc):
             # analytical solution for `scale` based on the location.
@@ -7419,6 +7423,8 @@ def get_scale(data, loc):
 
         def fit_loc_scale_w_shape_lt_1():
             loc = np.nextafter(data.min(), -np.inf)
+            if np.abs(loc) < np.finfo(loc.dtype).tiny:
+                loc = np.sign(loc) * np.finfo(loc.dtype).tiny
             scale = np.nextafter(get_scale(data, loc), np.inf)
             shape = fshape or get_shape(data, loc, scale)
             return shape, loc, scale
@@ -7462,11 +7468,10 @@ def fit_loc_scale_w_shape_gt_1():
 
             # if the sign of `dL_dLocation_star` is positive at rbrack,
             # we're not going to find the root we're looking for
-            i = 1
             delta = (data.min() - rbrack)
             while dL_dLocation_star(rbrack) > 0:
-                rbrack = data.min() - i * delta
-                i *= 2
+                rbrack = data.min() - delta
+                delta *= 2
 
             def interval_contains_root(lbrack, rbrack):
                 # Check if the interval (lbrack, rbrack) contains the root.
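
The hunk above replaces the `i`-based bracket expansion with a doubling `delta`. The pattern can be sketched with a toy objective (this is not scipy's `dL_dLocation_star`, just the same expand-then-bisect idea):

```python
def f(t):
    return t + 3.0  # toy objective; root at t = -3

data_min = 0.0
rbrack = data_min - 0.5
delta = data_min - rbrack  # initial step, as in the patch
while f(rbrack) > 0:
    rbrack = data_min - delta
    delta *= 2  # geometric growth keeps the number of iterations logarithmic

# rbrack now lies on the other side of the root; bisect on [rbrack, data_min]
lo, hi = rbrack, data_min
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) <= 0 else (lo, mid)
root = 0.5 * (lo + hi)
```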
@@ -7499,10 +7504,10 @@ def interval_contains_root(lbrack, rbrack):
 
         # Shape is free
         fit_shape_lt1 = fit_loc_scale_w_shape_lt_1()
-        ll_lt1 = penalized_nllf(fit_shape_lt1, data)
+        ll_lt1 = self.nnlf(fit_shape_lt1, data)
 
         fit_shape_gt1 = fit_loc_scale_w_shape_gt_1()
-        ll_gt1 = penalized_nllf(fit_shape_gt1, data)
+        ll_gt1 = self.nnlf(fit_shape_gt1, data)
 
         if ll_lt1 <= ll_gt1 and fit_shape_lt1[0] <= 1:
             return fit_shape_lt1
diff --git a/scipy/stats/_multivariate.py b/scipy/stats/_multivariate.py
index b979eba91d1..707834ffba2 100644
--- a/scipy/stats/_multivariate.py
+++ b/scipy/stats/_multivariate.py
@@ -839,6 +839,10 @@ def __init__(self, mean=None, cov=1, allow_singular=False, seed=None,
         self.abseps = abseps
         self.releps = releps
 
+    @property
+    def cov(self):
+        return self.cov_object.covariance
+
     def logpdf(self, x):
         x = self._dist._process_quantiles(x, self.dim)
         out = self._dist._logpdf(x, self.mean, self.cov_object)
diff --git a/scipy/stats/_qmc.py b/scipy/stats/_qmc.py
index f45a0873742..8f629e2e5c3 100644
--- a/scipy/stats/_qmc.py
+++ b/scipy/stats/_qmc.py
@@ -2273,9 +2273,9 @@ def _random_cd(
     while n_nochange_ < n_nochange and n_iters_ < n_iters:
         n_iters_ += 1
 
-        col = rng_integers(rng, *bounds[0])
-        row_1 = rng_integers(rng, *bounds[1])
-        row_2 = rng_integers(rng, *bounds[2])
+        col = rng_integers(rng, *bounds[0], endpoint=True)  # type: ignore[misc]
+        row_1 = rng_integers(rng, *bounds[1], endpoint=True)  # type: ignore[misc]
+        row_2 = rng_integers(rng, *bounds[2], endpoint=True)  # type: ignore[misc]
         disc = _perturb_discrepancy(best_sample,
                                     row_1, row_2, col,
                                     best_disc)
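
The `endpoint=True` additions address an off-by-one in index sampling: numpy's `Generator.integers` excludes the upper bound by default, so the last index could never be drawn. A minimal demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

draws_excl = rng.integers(0, 3, size=1000)                 # values in {0, 1, 2}
draws_incl = rng.integers(0, 3, size=1000, endpoint=True)  # values in {0, 1, 2, 3}

assert draws_excl.max() == 2
assert draws_incl.max() == 3
```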
diff --git a/scipy/stats/_resampling.py b/scipy/stats/_resampling.py
index 7d4a3e8cc41..d28e97bbe74 100644
--- a/scipy/stats/_resampling.py
+++ b/scipy/stats/_resampling.py
@@ -130,7 +130,7 @@ def _bca_interval(data, statistic, axis, alpha, theta_hat_b, batch):
     theta_hat_ji = [np.concatenate(theta_hat_i, axis=-1)
                     for theta_hat_i in theta_hat_ji]
 
-    n_j = [len(theta_hat_i) for theta_hat_i in theta_hat_ji]
+    n_j = [theta_hat_i.shape[-1] for theta_hat_i in theta_hat_ji]
 
     theta_hat_j_dot = [theta_hat_i.mean(axis=-1, keepdims=True)
                        for theta_hat_i in theta_hat_ji]
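
The `len` → `.shape[-1]` swap above matters because for a 2-D array `len` reports the length of the *first* axis, not the resampling axis the BCa computation needs:

```python
import numpy as np

theta_hat_i = np.zeros((3, 7))  # e.g. 3 statistics, 7 jackknife values each
assert len(theta_hat_i) == 3         # first axis: the wrong count for BCa
assert theta_hat_i.shape[-1] == 7    # last axis: the intended sample size
```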
diff --git a/scipy/stats/_stats_py.py b/scipy/stats/_stats_py.py
index 0ee2da96d88..d72f6af3d38 100644
--- a/scipy/stats/_stats_py.py
+++ b/scipy/stats/_stats_py.py
@@ -4466,21 +4466,16 @@ def pearsonr(x, y, *, alternative='two-sided'):
     # floating point arithmetic.
     r = max(min(r, 1.0), -1.0)
 
-    # As explained in the docstring, the p-value can be computed as
-    #     p = 2*dist.cdf(-abs(r))
-    # where dist is the beta distribution on [-1, 1] with shape parameters
-    # a = b = n/2 - 1.  `special.btdtr` is the CDF for the beta distribution
-    # on [0, 1].  To use it, we make the transformation  x = (r + 1)/2; the
-    # shape parameters do not change.  Then -abs(r) used in `cdf(-abs(r))`
-    # becomes x = (-abs(r) + 1)/2 = 0.5*(1 - abs(r)).  (r is cast to float64
-    # to avoid a TypeError raised by btdtr when r is higher precision.)
+    # As explained in the docstring, the distribution of `r` under the null
+    # hypothesis is the beta distribution on (-1, 1) with a = b = n/2 - 1.
     ab = n/2 - 1
+    dist = stats.beta(ab, ab, loc=-1, scale=2)
     if alternative == 'two-sided':
-        prob = 2*special.btdtr(ab, ab, 0.5*(1 - abs(np.float64(r))))
+        prob = 2*dist.sf(abs(r))
     elif alternative == 'less':
-        prob = 1 - special.btdtr(ab, ab, 0.5*(1 - abs(np.float64(r))))
+        prob = dist.cdf(r)
     elif alternative == 'greater':
-        prob = special.btdtr(ab, ab, 0.5*(1 - abs(np.float64(r))))
+        prob = dist.sf(r)
     else:
         raise ValueError('alternative must be one of '
                          '["two-sided", "less", "greater"]')
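
A sanity check of the new p-value route, using only the fact stated in the comment above: under the null, `r` follows a beta distribution on (-1, 1) with `a = b = n/2 - 1`. For `n = 4` that gives `a = 1`, so the distribution is uniform on (-1, 1) and the two-sided p-value reduces to `2*sf(|r|) = 1 - |r|`, which can be verified in pure Python:

```python
import math

x = [1, 2, 3, 4]
y = [0, 1, 0.5, 1]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
num = sum((a - mx) * (b - my) for a, b in zip(x, y))
den = math.sqrt(sum((a - mx) ** 2 for a in x)
                * sum((b - my) ** 2 for b in y))
r = num / den

p_two_sided = 1 - abs(r)  # uniform-case shortcut, valid only for n = 4

assert math.isclose(r, 0.6741998624632421, rel_tol=1e-12)
assert math.isclose(p_two_sided, 0.325800137536, rel_tol=1e-6)
```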
diff --git a/scipy/stats/meson.build b/scipy/stats/meson.build
index ef034b10a5c..0fca3205458 100644
--- a/scipy/stats/meson.build
+++ b/scipy/stats/meson.build
@@ -5,8 +5,8 @@ _stats_pxd = [
   fs.copyfile('_unuran/unuran.pxd'),
 ]
 
-stats_special_cython_gen = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@'],
+stats_special_cython_gen = generator(cython,
+  arguments : cython_args,
   output : '@BASENAME@.c',
   depends : [
     _cython_tree,
@@ -117,8 +117,8 @@ _stats_gen_pyx = custom_target('_stats_gen_pyx',
   depends: _stats_pxd
 )
 
-cython_stats_gen_cpp = generator(cython_cli,
-  arguments : ['@INPUT@', '@OUTPUT@', '--cplus'],
+cython_stats_gen_cpp = generator(cython,
+  arguments : cython_cplus_args,
   output : '@BASENAME@.cpp',
   depends : [_cython_tree, _stats_gen_pyx])
 
diff --git a/scipy/stats/tests/test_continuous_basic.py b/scipy/stats/tests/test_continuous_basic.py
index f8c3628470a..d0873deabea 100644
--- a/scipy/stats/tests/test_continuous_basic.py
+++ b/scipy/stats/tests/test_continuous_basic.py
@@ -362,6 +362,46 @@ def test_rvs_broadcast(dist, shape_args):
     check_rvs_broadcast(distfunc, dist, allargs, bshape, shape_only, 'd')
 
 
+# Expected values of the SF, CDF, PDF were computed using
+# mpmath with mpmath.mp.dps = 50 and output at 20:
+#
+# def ks(x, n):
+#     x = mpmath.mpf(x)
+#     logp = -mpmath.power(6.0*n*x+1.0, 2)/18.0/n
+#     sf, cdf = mpmath.exp(logp), -mpmath.expm1(logp)
+#     pdf = (6.0*n*x+1.0) * 2 * sf/3
+#     print(mpmath.nstr(sf, 20), mpmath.nstr(cdf, 20), mpmath.nstr(pdf, 20))
+#
+# Tests use 1/n < x < 1-1/n and n > 1e6 to use the asymptotic computation.
+# Larger x has a smaller sf.
+@pytest.mark.parametrize('x,n,sf,cdf,pdf,rtol',
+                         [(2.0e-5, 1000000000,
+                           0.44932297307934442379, 0.55067702692065557621,
+                           35946.137394996276407, 5e-15),
+                          (2.0e-9, 1000000000,
+                           0.99999999061111115519, 9.3888888448132728224e-9,
+                           8.6666665852962971765, 5e-14),
+                          (5.0e-4, 1000000000,
+                           7.1222019433090374624e-218, 1.0,
+                           1.4244408634752704094e-211, 5e-14)])
+def test_gh17775_regression(x, n, sf, cdf, pdf, rtol):
+    # Regression test for gh-17775. In scipy 1.9.3 and earlier,
+    # these tests would fail.
+    #
+    # KS one asymptotic sf ~ e^(-(6nx+1)^2 / 18n)
+    # Given a large 32-bit integer n, 6n will overflow in the C implementation.
+    # Example of broken behaviour:
+    # ksone.sf(2.0e-5, 1000000000) == 0.9374359693473666
+    ks = stats.ksone
+    vals = np.array([ks.sf(x, n), ks.cdf(x, n), ks.pdf(x, n)])
+    expected = np.array([sf, cdf, pdf])
+    npt.assert_allclose(vals, expected, rtol=rtol)
+    # The sf+cdf must sum to 1.0.
+    npt.assert_equal(vals[0] + vals[1], 1.0)
+    # Check inverting the (potentially very small) sf (uses a lower tolerance)
+    npt.assert_allclose([ks.isf(sf, n)], [x], rtol=1e-8)
+
+
 def test_rvs_gh2069_regression():
     # Regression tests for gh-2069.  In scipy 0.17 and earlier,
     # these tests would fail.
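
The first table entry of the new ksone regression test can be reproduced with the asymptotic formula quoted in its comment, `sf ~ exp(-(6*n*x + 1)**2 / (18*n))`. Python integers are arbitrary precision, so `6*n` cannot overflow here, unlike the 32-bit C path that gh-17775 fixed:

```python
import math

x, n = 2.0e-5, 1000000000
logp = -((6.0 * n * x + 1.0) ** 2) / (18.0 * n)
sf = math.exp(logp)
cdf = -math.expm1(logp)

assert math.isclose(sf, 0.44932297307934442379, rel_tol=1e-13)
assert math.isclose(sf + cdf, 1.0, rel_tol=1e-14)
```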
diff --git a/scipy/stats/tests/test_distributions.py b/scipy/stats/tests/test_distributions.py
index 60f0c4d3a69..e0c55071432 100755
--- a/scipy/stats/tests/test_distributions.py
+++ b/scipy/stats/tests/test_distributions.py
@@ -2541,6 +2541,14 @@ def test_fit_warnings(self):
         with assert_raises(ValueError, match=msg):
             stats.powerlaw.fit([1, 2, 4], fscale=3)
 
+    def test_minimum_data_zero_gh17801(self):
+        # gh-17801 reported an overflow error when the minimum value of the
+        # data is zero. Check that this problem is resolved.
+        data = [0, 1, 2, 2, 3, 3, 3, 3, 4, 4, 5, 6]
+        dist = stats.powerlaw
+        with np.errstate(over='ignore'):
+            _assert_less_or_close_loglike(dist, data, dist.nnlf)
+
 
 class TestInvGamma:
     def test_invgamma_inf_gh_1866(self):
@@ -3928,7 +3936,7 @@ def test_pdf_nolan_samples(
             ],
             # for small alpha very slightly reduced accuracy
             [
-                'piecewise', 5e-11, lambda r: (
+                'piecewise', 2.5e-10, lambda r: (
                     np.isin(r['pct'], pct_range) &
                     np.isin(r['alpha'], alpha_range) &
                     np.isin(r['beta'], beta_range) &
@@ -4032,7 +4040,7 @@ def test_cdf_nolan_samples(
         tests = [
             # piecewise generally good accuracy
             [
-                'piecewise', 1e-12, lambda r: (
+                'piecewise', 2e-12, lambda r: (
                     np.isin(r['pct'], pct_range) &
                     np.isin(r['alpha'], alpha_range) &
                     np.isin(r['beta'], beta_range) &
@@ -4154,6 +4162,14 @@ def test_location_scale(
     ):
         """Tests for pdf and cdf where loc, scale are different from 0, 1
         """
+
+        uname = platform.uname()
+        is_linux_32 = uname.system == 'Linux' and "32bit" in platform.architecture()[0]
+        # Test seems to be unstable (see gh-17839 for a bug report on Debian
+        # i386), so skip it.
+        if is_linux_32 and case == 'pdf':
+            pytest.skip("Test unstable on some platforms; see gh-17839, 17859")
+
         data = nolan_loc_scale_sample_data
         # We only test against piecewise as location/scale transforms
         # are same for other methods.
diff --git a/scipy/stats/tests/test_kdeoth.py b/scipy/stats/tests/test_kdeoth.py
index fe677b34e61..1696099800d 100644
--- a/scipy/stats/tests/test_kdeoth.py
+++ b/scipy/stats/tests/test_kdeoth.py
@@ -418,7 +418,7 @@ def marginal_pdf(points):
     assert_allclose(pdf, ref, rtol=1e-6)
 
 
-@pytest.mark.slow
+@pytest.mark.xslow
 def test_marginal_2_axis():
     rng = np.random.default_rng(6111799263660870475)
     n_data = 30
diff --git a/scipy/stats/tests/test_mstats_basic.py b/scipy/stats/tests/test_mstats_basic.py
index 0fb0168fc1f..44461858a2a 100644
--- a/scipy/stats/tests/test_mstats_basic.py
+++ b/scipy/stats/tests/test_mstats_basic.py
@@ -1768,8 +1768,8 @@ def test_skewtest_2D_WithMask(self):
                 r = stats.skewtest(x)
                 rm = stats.mstats.skewtest(xm)
 
-                assert_allclose(r[0][0], rm[0][0], rtol=2e-15)
-                assert_allclose(r[0][1], rm[0][1], rtol=1e-15)
+                assert_allclose(r[0][0], rm[0][0], rtol=1e-14)
+                assert_allclose(r[0][1], rm[0][1], rtol=1e-14)
 
     def test_normaltest(self):
         with np.errstate(over='raise'), suppress_warnings() as sup:
diff --git a/scipy/stats/tests/test_multivariate.py b/scipy/stats/tests/test_multivariate.py
index 0dbc6602dd5..429ed2847ca 100644
--- a/scipy/stats/tests/test_multivariate.py
+++ b/scipy/stats/tests/test_multivariate.py
@@ -513,6 +513,20 @@ def test_frozen(self):
         assert_allclose(norm_frozen.cdf(x), multivariate_normal.cdf(x, mean, cov))
         assert_allclose(norm_frozen.logcdf(x),
                         multivariate_normal.logcdf(x, mean, cov))
+    
+    @pytest.mark.parametrize(
+        'covariance',
+        [
+            np.eye(2),
+            Covariance.from_diagonal([1, 1]),
+        ]
+    )
+    def test_frozen_multivariate_normal_exposes_attributes(self, covariance):
+        mean = np.ones((2,))
+        cov_should_be = np.eye(2)
+        norm_frozen = multivariate_normal(mean, covariance)
+        assert np.allclose(norm_frozen.mean, mean)
+        assert np.allclose(norm_frozen.cov, cov_should_be)
 
     def test_pseudodet_pinv(self):
         # Make sure that pseudo-inverse and pseudo-det agree on cutoff
diff --git a/scipy/stats/tests/test_resampling.py b/scipy/stats/tests/test_resampling.py
index 3745f44770f..46e4bab751f 100644
--- a/scipy/stats/tests/test_resampling.py
+++ b/scipy/stats/tests/test_resampling.py
@@ -123,9 +123,6 @@ def test_bootstrap_vectorized(method, axis, paired):
     # CI and standard_error of each axis-slice is the same as those of the
     # original 1d sample
 
-    if not paired and method == 'BCa':
-        # should re-assess when BCa is extended
-        pytest.xfail(reason="BCa currently for 1-sample statistics only")
     np.random.seed(0)
 
     def my_statistic(x, y, z, axis=-1):
@@ -621,7 +618,6 @@ def statistic_1d(*data):
     assert_allclose(res1, res2)
 
 
-@pytest.mark.xslow()
 @pytest.mark.parametrize("method", ["basic", "percentile", "BCa"])
 def test_vector_valued_statistic(method):
     # Generate 95% confidence interval around MLE of normal distribution
@@ -636,11 +632,12 @@ def test_vector_valued_statistic(method):
     params = 1, 0.5
     sample = stats.norm.rvs(*params, size=(100, 100), random_state=rng)
 
-    def statistic(data):
-        return stats.norm.fit(data)
+    def statistic(data, axis):
+        return np.asarray([np.mean(data, axis),
+                           np.std(data, axis, ddof=1)])
 
     res = bootstrap((sample,), statistic, method=method, axis=-1,
-                    vectorized=False, n_resamples=9999)
+                    n_resamples=9999, batch=200)
 
     counts = np.sum((res.confidence_interval.low.T < params)
                     & (res.confidence_interval.high.T > params),
@@ -653,6 +650,44 @@ def statistic(data):
     assert res.bootstrap_distribution.shape == (2, 100, 9999)
 
 
+@pytest.mark.slow
+@pytest.mark.filterwarnings('ignore::RuntimeWarning')
+def test_vector_valued_statistic_gh17715():
+    # gh-17715 reported a mistake introduced in the extension of BCa to
+    # multi-sample statistics; a `len` should have been `.shape[-1]`. Check
+    # that this is resolved.
+
+    rng = np.random.default_rng(141921000979291141)
+
+    def concordance(x, y, axis):
+        xm = x.mean(axis)
+        ym = y.mean(axis)
+        cov = ((x - xm[..., None]) * (y - ym[..., None])).mean(axis)
+        return (2 * cov) / (x.var(axis) + y.var(axis) + (xm - ym) ** 2)
+
+    def statistic(tp, tn, fp, fn, axis):
+        actual = tp + fp
+        expected = tp + fn
+        return np.nan_to_num(concordance(actual, expected, axis))
+
+    def statistic_extradim(*args, axis):
+        return statistic(*args, axis)[np.newaxis, ...]
+
+    data = [[4, 0, 0, 2],  # (tp, tn, fp, fn)
+            [2, 1, 2, 1],
+            [0, 6, 0, 0],
+            [0, 6, 3, 0],
+            [0, 8, 1, 0]]
+    data = np.array(data).T
+
+    res = bootstrap(data, statistic_extradim, random_state=rng, paired=True)
+    ref = bootstrap(data, statistic, random_state=rng, paired=True)
+    assert_allclose(res.confidence_interval.low[0],
+                    ref.confidence_interval.low, atol=1e-15)
+    assert_allclose(res.confidence_interval.high[0],
+                    ref.confidence_interval.high, atol=1e-15)
+
+
 # --- Test Monte Carlo Hypothesis Test --- #
 
 class TestMonteCarloHypothesisTest:
diff --git a/scipy/stats/tests/test_stats.py b/scipy/stats/tests/test_stats.py
index 35e382b159d..50f3849f904 100644
--- a/scipy/stats/tests/test_stats.py
+++ b/scipy/stats/tests/test_stats.py
@@ -439,22 +439,30 @@ def test_length_two_neg2(self):
     # cor.test(x, y, method = "pearson", alternative = "g")
     # correlation coefficient and p-value for alternative='two-sided'
     # calculated with mpmath agree to 16 digits.
-    @pytest.mark.parametrize('alternative, pval, rlow, rhigh',
-                             [('two-sided',
-                               0.325800137536, -0.814938968841, 0.99230697523),
-                              ('less',
-                               0.8370999312316, -1, 0.985600937290653),
-                              ('greater',
-                               0.1629000687684, -0.6785654158217636, 1)])
-    def test_basic_example(self, alternative, pval, rlow, rhigh):
+    @pytest.mark.parametrize('alternative, pval, rlow, rhigh, sign',
+            [('two-sided', 0.325800137536, -0.814938968841, 0.99230697523, 1),  # noqa
+             ('less', 0.8370999312316, -1, 0.985600937290653, 1),
+             ('greater', 0.1629000687684, -0.6785654158217636, 1, 1),
+             ('two-sided', 0.325800137536, -0.992306975236, 0.81493896884, -1),
+             ('less', 0.1629000687684, -1.0, 0.6785654158217636, -1),
+             ('greater', 0.8370999312316, -0.985600937290653, 1.0, -1)])
+    def test_basic_example(self, alternative, pval, rlow, rhigh, sign):
         x = [1, 2, 3, 4]
-        y = [0, 1, 0.5, 1]
+        y = np.array([0, 1, 0.5, 1]) * sign
         result = stats.pearsonr(x, y, alternative=alternative)
-        assert_allclose(result.statistic, 0.6741998624632421, rtol=1e-12)
+        assert_allclose(result.statistic, 0.6741998624632421*sign, rtol=1e-12)
         assert_allclose(result.pvalue, pval, rtol=1e-6)
         ci = result.confidence_interval()
         assert_allclose(ci, (rlow, rhigh), rtol=1e-6)
 
+    def test_negative_correlation_pvalue_gh17795(self):
+        x = np.arange(10)
+        y = -x
+        test_greater = stats.pearsonr(x, y, alternative='greater')
+        test_less = stats.pearsonr(x, y, alternative='less')
+        assert_allclose(test_greater.pvalue, 1)
+        assert_allclose(test_less.pvalue, 0, atol=1e-20)
+
     def test_length3_r_exactly_negative_one(self):
         x = [1, 2, 3]
         y = [5, -4, -13]
diff --git a/setup.py b/setup.py
index 00eca16581f..ff602a1cd83 100755
--- a/setup.py
+++ b/setup.py
@@ -46,6 +46,7 @@
 Programming Language :: Python :: 3.8
 Programming Language :: Python :: 3.9
 Programming Language :: Python :: 3.10
+Programming Language :: Python :: 3.11
 Topic :: Software Development :: Libraries
 Topic :: Scientific/Engineering
 Operating System :: Microsoft :: Windows
diff --git a/tools/config_utils.py b/tools/config_utils.py
deleted file mode 100644
index 6ee8f3d06a9..00000000000
--- a/tools/config_utils.py
+++ /dev/null
@@ -1,13 +0,0 @@
-import sys
-from pathlib import Path
-import numpy as np
-outfile = Path(sys.argv[1])
-
-# This piece of code will just just copy the Numpy's `__config__.py` file,
-# rather than write out SciPy's build-time config settings. This file should be
-# generated by `config.make_config_py()` in `scipy/setup.py` which relies on an
-# instance of `numpy.distutils.Configuration`. It cannot be accessed directly
-# with a Meson build. This is a workaround for the same.
-# For more details the implementation of this method is at https://github.com/numpy/numpy/blob/main/numpy/distutils/misc_util.py#L2113
-
-outfile.write_text(Path(str(np.__config__.__file__)).read_text())
diff --git a/tools/version_utils.py b/tools/version_utils.py
index 7eaa7457068..042795d18c5 100644
--- a/tools/version_utils.py
+++ b/tools/version_utils.py
@@ -5,7 +5,7 @@
 
 MAJOR = 1
 MINOR = 10
-MICRO = 0
+MICRO = 1
 ISRELEASED = True
 IS_RELEASE_BRANCH = True
 VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)

--- End Message ---
--- Begin Message ---
scipy was uploaded and migrated.

--- End Message ---
