Bug#1070232: bookworm-pu: package python3.11/3.11.2-6+deb12u2
Package: release.debian.org
Severity: normal
Tags: bookworm
X-Debbugs-Cc: python3.11@packages.debian.org
Control: affects -1 + src:python3.11
User: release.debian.org@packages.debian.org
Usertags: pu

[ Reason ]
A collection of minor (no-dsa) security updates for python3.11 in
bookworm.
This fixes all of the outstanding CVEs except CVE-2023-27043, for
which we are still waiting for upstream to commit to a patch.

[ Impact ]
Minor security issues.

[ Tests ]
All of the patches come from upstream, and have unit tests included.

[ Risks ]
Changes are relatively straightforward, and all cherry-picked from
upstream's stable releases, where they have already been published.
Where regressions were found, their fixes are included.

[ Checklist ]
[x] *all* changes are documented in the d/changelog
[x] I reviewed all changes and I approve them
[x] attach debdiff against the package in (old)stable
[x] the issue is verified as fixed in unstable

[ Changes ]
python3.11 (3.11.2-6+deb12u2) bookworm; urgency=medium

[ Steve McIntyre ]
* Apply upstream security fix for CVE-2024-0450
Protect zipfile from "quoted-overlap" zipbomb.
Closes: #1070133
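(Illustration, not part of the upload: a "quoted-overlap" bomb inflates by pointing many entries at overlapping byte ranges. A rough detector over the archive's entry table, assuming stored entries with empty local extra fields, could look like the sketch below; upstream's actual check in zipfile differs in detail.)

```python
import io
import zipfile

def find_overlapping_entries(zf):
    """Rough overlap detector (not upstream's exact check): flag entries
    whose stored byte ranges intersect, the core of a quoted-overlap bomb.

    Assumes no local extra field; sizes are approximate, for illustration.
    """
    spans = []
    for info in zf.infolist():
        start = info.header_offset
        # 30-byte fixed local header + filename + compressed data
        end = start + 30 + len(info.filename) + info.compress_size
        spans.append((start, end, info.filename))
    spans.sort()
    return [
        (prev[2], cur[2])
        for prev, cur in zip(spans, spans[1:])
        if cur[0] < prev[1]  # next entry starts inside the previous one
    ]

# A benign archive shows no overlaps.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")
    zf.writestr("b.txt", "world")
with zipfile.ZipFile(buf) as zf:
    print(find_overlapping_entries(zf))  # []
```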
* Apply and tweak upstream security fix for CVE-2023-6597
tempfile.TemporaryDirectory: fix symlink bug in cleanup
Closes: #1070135
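(Illustration: the property the fix preserves is that cleanup removes only the symlink itself, never the directory it points at. A quick sketch on a patched interpreter; the "outside" directory stands in for an attacker-controlled symlink target:)

```python
import os
import tempfile

# A directory outside the temporary tree; its contents must survive cleanup.
outside = tempfile.mkdtemp()
with open(os.path.join(outside, "keep.txt"), "w") as f:
    f.write("important")

with tempfile.TemporaryDirectory() as td:
    # Symlink inside the temporary tree pointing at the outside directory.
    os.symlink(outside, os.path.join(td, "link"))

# Cleanup deleted the symlink, not the target's contents.
print(os.path.exists(os.path.join(outside, "keep.txt")))  # True
```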

[ Stefano Rivera ]
* Apply upstream patch to avoid a potential null pointer dereference in
fileutils.
* Apply upstream security fix for CVE-2023-41105
os.path.normpath(): Path truncation at null bytes.
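(Illustration: on a patched interpreter a NUL byte no longer truncates the path, so the components after it still take part in normalization:)

```python
import posixpath

# Before the fix the C accelerator stopped at the first NUL byte, so the
# "/../d" tail below would have been silently dropped.
print(posixpath.normpath("a/b\x00c/../d"))  # a/d
print(repr(posixpath.normpath("a\x00b")))   # 'a\x00b' (no truncation)
```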
* Apply upstream security fix for CVE-2023-40217
Avoid bypass of TLS handshake protections on closed sockets.
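(Illustration with plain sockets of the underlying hazard: bytes queued before a peer closes remain readable afterwards, which is why wrap_socket() must now probe for such data instead of letting it pass as decrypted post-handshake data:)

```python
import socket

# Connected pair; one side sends, then closes immediately.
a, b = socket.socketpair()
a.send(b"pre-handshake bytes")
a.close()

# The buffered data survives the peer's close, then EOF follows.
print(b.recv(64))  # b'pre-handshake bytes'
print(b.recv(64))  # b''
b.close()
```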
* Apply upstream security fix for CVE-2023-24329
Strip C0 control and space characters in urlsplit.
diff -Nru python3.11-3.11.2/debian/changelog python3.11-3.11.2/debian/changelog
--- python3.11-3.11.2/debian/changelog 2024-03-02 16:28:50.000000000 -0400
+++ python3.11-3.11.2/debian/changelog 2024-05-02 07:59:08.000000000 -0400
@@ -1,3 +1,25 @@
+python3.11 (3.11.2-6+deb12u2) bookworm; urgency=medium
+
+ [ Steve McIntyre ]
+ * Apply upstream security fix for CVE-2024-0450
+ Protect zipfile from "quoted-overlap" zipbomb.
+ Closes: #1070133
+ * Apply and tweak upstream security fix for CVE-2023-6597
+ tempfile.TemporaryDirectory: fix symlink bug in cleanup
+ Closes: #1070135
+
+ [ Stefano Rivera ]
+ * Apply upstream patch to avoid a potential null pointer dereference in
+ fileutils.
+ * Apply upstream security fix for CVE-2023-41105
+ os.path.normpath(): Path truncation at null bytes.
+ * Apply upstream security fix for CVE-2023-40217
+ Avoid bypass of TLS handshake protections on closed sockets.
+ * Apply upstream security fix for CVE-2023-24329
+ Strip C0 control and space characters in urlsplit.
+
+ -- Stefano Rivera <stefanor@debian.org> Thu, 02 May 2024 07:59:08 -0400
+
python3.11 (3.11.2-6+deb12u1) bookworm; urgency=medium

 [ Anders Kaseorg ]
diff -Nru python3.11-3.11.2/debian/patches/CVE-2023-24329-strip-control-chars-urlsplit.patch python3.11-3.11.2/debian/patches/CVE-2023-24329-strip-control-chars-urlsplit.patch
--- python3.11-3.11.2/debian/patches/CVE-2023-24329-strip-control-chars-urlsplit.patch 1969-12-31 20:00:00.000000000 -0400
+++ python3.11-3.11.2/debian/patches/CVE-2023-24329-strip-control-chars-urlsplit.patch 2024-05-02 07:59:08.000000000 -0400
@@ -0,0 +1,223 @@
+From 610cc0ab1b760b2abaac92bd256b96191c46b941 Mon Sep 17 00:00:00 2001
+From: "Miss Islington (bot)"
+ <31488909+miss-islington@users.noreply.github.com>
+Date: Wed, 17 May 2023 14:41:25 -0700
+Subject: [PATCH] [3.11] gh-102153: Start stripping C0 control and space chars
+ in `urlsplit` (GH-102508) (#104575)
+
+* gh-102153: Start stripping C0 control and space chars in `urlsplit` (GH-102508)
+
+`urllib.parse.urlsplit` has already been respecting the WHATWG spec a bit GH-25595.
+
+This adds more sanitizing to respect the "Remove any leading C0 control or space from input" [rule](https://url.spec.whatwg.org/GH-url-parsing:~:text=Remove%20any%20leading%20and%20trailing%20C0%20control%20or%20space%20from%20input.) in response to [CVE-2023-24329](https://nvd.nist.gov/vuln/detail/CVE-2023-24329).
+
+---------
+
+(cherry picked from commit 2f630e1ce18ad2e07428296532a68b11dc66ad10)
+
+Co-authored-by: Illia Volochii <illia.volochii@gmail.com>
+Co-authored-by: Gregory P. Smith [Google] <greg@krypto.org>
+---
+ Doc/library/urllib.parse.rst | 46 +++++++++++++-
+ Lib/test/test_urlparse.py | 61 ++++++++++++++++++-
+ Lib/urllib/parse.py | 12 ++++
+ ...-03-07-20-59-17.gh-issue-102153.14CLSZ.rst | 3 +
+ 4 files changed, 119 insertions(+), 3 deletions(-)
+ create mode 100644 Misc/NEWS.d/next/Security/2023-03-07-20-59-17.gh-issue-102153.14CLSZ.rst
+
+Origin: upstream, https://github.com/python/cpython/commit/610cc0ab1b760b2abaac92bd256b96191c46b941
+Bug-Upstream: https://github.com/python/cpython/issues/102153
+--- a/Doc/library/urllib.parse.rst
++++ b/Doc/library/urllib.parse.rst
+@@ -159,6 +159,10 @@
+ ParseResult(scheme='http', netloc='www.cwi.nl:80', path='/%7Eguido/Python.html',
+ params='', query='', fragment='')
+
++ .. warning::
++
++ :func:`urlparse` does not perform validation. See :ref:`URL parsing
++ security <url-parsing-security>` for details.
+
+ .. versionchanged:: 3.2
+ Added IPv6 URL parsing capabilities.
+@@ -324,8 +328,14 @@
+ ``#``, ``@``, or ``:`` will raise a :exc:`ValueError`. If the URL is
+ decomposed before parsing, no error will be raised.
+
+- Following the `WHATWG spec`_ that updates RFC 3986, ASCII newline
+- ``\n``, ``\r`` and tab ``\t`` characters are stripped from the URL.
++ Following some of the `WHATWG spec`_ that updates RFC 3986, leading C0
++ control and space characters are stripped from the URL. ``\n``,
++ ``\r`` and tab ``\t`` characters are removed from the URL at any position.
++
++ .. warning::
++
++ :func:`urlsplit` does not perform validation. See :ref:`URL parsing
++ security <url-parsing-security>` for details.
+
+ .. versionchanged:: 3.6
+ Out-of-range port numbers now raise :exc:`ValueError`, instead of
+@@ -338,6 +348,9 @@
+ .. versionchanged:: 3.10
+ ASCII newline and tab characters are stripped from the URL.
+
++ .. versionchanged:: 3.11.4
++ Leading WHATWG C0 control and space characters are stripped from the URL.
++
+ .. _WHATWG spec: https://url.spec.whatwg.org/#concept-basic-url-parser
+
+ .. function:: urlunsplit(parts)
+@@ -414,6 +427,35 @@
+ or ``scheme://host/path``). If *url* is not a wrapped URL, it is returned
+ without changes.
+
++.. _url-parsing-security:
++
++URL parsing security
++--------------------
++
++The :func:`urlsplit` and :func:`urlparse` APIs do not perform **validation** of
++inputs. They may not raise errors on inputs that other applications consider
++invalid. They may also succeed on some inputs that might not be considered
++URLs elsewhere. Their purpose is for practical functionality rather than
++purity.
++
++Instead of raising an exception on unusual input, they may instead return some
++component parts as empty strings. Or components may contain more than perhaps
++they should.
++
++We recommend that users of these APIs where the values may be used anywhere
++with security implications code defensively. Do some verification within your
++code before trusting a returned component part. Does that ``scheme`` make
++sense? Is that a sensible ``path``? Is there anything strange about that
++``hostname``? etc.
++
++What constitutes a URL is not universally well defined. Different applications
++have different needs and desired constraints. For instance the living `WHATWG
++spec`_ describes what user facing web clients such as a web browser require.
++While :rfc:`3986` is more general. These functions incorporate some aspects of
++both, but cannot be claimed compliant with either. The APIs and existing user
++code with expectations on specific behaviors predate both standards leading us
++to be very cautious about making API behavior changes.
++
+ .. _parsing-ascii-encoded-bytes:
+
+ Parsing ASCII Encoded Bytes
+--- a/Lib/test/test_urlparse.py
++++ b/Lib/test/test_urlparse.py
+@@ -649,6 +649,65 @@
+ self.assertEqual(p.scheme, "http")
+ self.assertEqual(p.geturl(), "http://www.python.org/javascript:alert('msg')/?query=something#fragment")
+
++ def test_urlsplit_strip_url(self):
++ noise = bytes(range(0, 0x20 + 1))
++ base_url = "http://User:Pass@www.python.org:080/doc/?query=yes#frag"
++
++ url = noise.decode("utf-8") + base_url
++ p = urllib.parse.urlsplit(url)
++ self.assertEqual(p.scheme, "http")
++ self.assertEqual(p.netloc, "User:Pass@www.python.org:080")
++ self.assertEqual(p.path, "/doc/")
++ self.assertEqual(p.query, "query=yes")
++ self.assertEqual(p.fragment, "frag")
++ self.assertEqual(p.username, "User")
++ self.assertEqual(p.password, "Pass")
++ self.assertEqual(p.hostname, "www.python.org")
++ self.assertEqual(p.port, 80)
++ self.assertEqual(p.geturl(), base_url)
++
++ url = noise + base_url.encode("utf-8")
++ p = urllib.parse.urlsplit(url)
++ self.assertEqual(p.scheme, b"http")
++ self.assertEqual(p.netloc, b"User:Pass@www.python.org:080")
++ self.assertEqual(p.path, b"/doc/")
++ self.assertEqual(p.query, b"query=yes")
++ self.assertEqual(p.fragment, b"frag")
++ self.assertEqual(p.username, b"User")
++ self.assertEqual(p.password, b"Pass")
++ self.assertEqual(p.hostname, b"www.python.org")
++ self.assertEqual(p.port, 80)
++ self.assertEqual(p.geturl(), base_url.encode("utf-8"))
++
++ # Test that trailing space is preserved as some applications rely on
++ # this within query strings.
++ query_spaces_url = "https://www.python.org:88/doc/?query= "
++ p = urllib.parse.urlsplit(noise.decode("utf-8") + query_spaces_url)
++ self.assertEqual(p.scheme, "https")
++ self.assertEqual(p.netloc, "www.python.org:88")
++ self.assertEqual(p.path, "/doc/")
++ self.assertEqual(p.query, "query= ")
++ self.assertEqual(p.port, 88)
++ self.assertEqual(p.geturl(), query_spaces_url)
++
++ p = urllib.parse.urlsplit("www.pypi.org ")
++ # That "hostname" gets considered a "path" due to the
++ # trailing space and our existing logic... YUCK...
++ # and re-assembles via geturl aka unurlsplit into the original.
++ # django.core.validators.URLValidator (at least through v3.2) relies on
++ # this, for better or worse, to catch it in a ValidationError via its
++ # regular expressions.
++ # Here we test the basic round trip concept of such a trailing space.
++ self.assertEqual(urllib.parse.urlunsplit(p), "www.pypi.org ")
++
++ # with scheme as cache-key
++ url = "//www.python.org/"
++ scheme = noise.decode("utf-8") + "https" + noise.decode("utf-8")
++ for _ in range(2):
++ p = urllib.parse.urlsplit(url, scheme=scheme)
++ self.assertEqual(p.scheme, "https")
++ self.assertEqual(p.geturl(), "https://www.python.org/")
++
+ def test_attributes_bad_port(self):
+ """Check handling of invalid ports."""
+ for bytes in (False, True):
+@@ -656,7 +715,7 @@
+ for port in ("foo", "1.5", "-1", "0x10", "-0", "1_1", " 1", "1 ", "६"):
+ with self.subTest(bytes=bytes, parse=parse, port=port):
+ netloc = "www.example.net:" + port
+- url = "http://" + netloc
++ url = "http://" + netloc + "/"
+ if bytes:
+ if netloc.isascii() and port.isascii():
+ netloc = netloc.encode("ascii")
+--- a/Lib/urllib/parse.py
++++ b/Lib/urllib/parse.py
+@@ -25,6 +25,10 @@
+ scenarios for parsing, and for backward compatibility purposes, some
+ parsing quirks from older RFCs are retained. The testcases in
+ test_urlparse.py provides a good indicator of parsing behavior.
++
++The WHATWG URL Parser spec should also be considered. We are not compliant with
++it either due to existing user code API behavior expectations (Hyrum's Law).
++It serves as a useful guide when making changes.
+ """
+
+ from collections import namedtuple
+@@ -79,6 +83,10 @@
+ '0123456789'
+ '+-.')
+
++# Leading and trailing C0 control and space to be stripped per WHATWG spec.
++# == "".join([chr(i) for i in range(0, 0x20 + 1)])
++_WHATWG_C0_CONTROL_OR_SPACE = '\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f '
++
+ # Unsafe bytes to be removed per WHATWG spec
+ _UNSAFE_URL_BYTES_TO_REMOVE = ['\t', '\r', '\n']
+
+@@ -452,6 +460,10 @@
+ """
+
+ url, scheme, _coerce_result = _coerce_args(url, scheme)
++ # Only lstrip url as some applications rely on preserving trailing space.
++ # (https://url.spec.whatwg.org/#concept-basic-url-parser would strip both)
++ url = url.lstrip(_WHATWG_C0_CONTROL_OR_SPACE)
++ scheme = scheme.strip(_WHATWG_C0_CONTROL_OR_SPACE)
+
+ for b in _UNSAFE_URL_BYTES_TO_REMOVE:
+ url = url.replace(b, "")
+--- /dev/null
++++ b/Misc/NEWS.d/next/Security/2023-03-07-20-59-17.gh-issue-102153.14CLSZ.rst
+@@ -0,0 +1,3 @@
++:func:`urllib.parse.urlsplit` now strips leading C0 control and space
++characters following the specification for URLs defined by WHATWG in
++response to CVE-2023-24329. Patch by Illia Volochii.
diff -Nru python3.11-3.11.2/debian/patches/CVE-2023-40217-ref-cycle.patch python3.11-3.11.2/debian/patches/CVE-2023-40217-ref-cycle.patch
--- python3.11-3.11.2/debian/patches/CVE-2023-40217-ref-cycle.patch 1969-12-31 20:00:00.000000000 -0400
+++ python3.11-3.11.2/debian/patches/CVE-2023-40217-ref-cycle.patch 2024-05-02 07:59:08.000000000 -0400
@@ -0,0 +1,37 @@
+From 93714b7db795b14b26adffde30753cfda0ca4867 Mon Sep 17 00:00:00 2001
+From: "Miss Islington (bot)"
+ <31488909+miss-islington@users.noreply.github.com>
+Date: Wed, 23 Aug 2023 03:10:04 -0700
+Subject: [PATCH] [3.11] gh-108342: Break ref cycle in SSLSocket._create() exc
+ (GH-108344) (#108349)
+
+Explicitly break a reference cycle when SSLSocket._create() raises an
+exception. Clear the variable storing the exception, since the
+exception traceback contains the variables and so creates a reference
+cycle.
+
+This test leak was introduced by the test added for the fix of GH-108310.
+(cherry picked from commit 64f99350351bc46e016b2286f36ba7cd669b79e3)
+
+Co-authored-by: Victor Stinner <vstinner@python.org>
+---
+ Lib/ssl.py | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+Origin: upstream, https://github.com/python/cpython/commit/93714b7db795b14b26adffde30753cfda0ca4867
+Bug-Upstream: https://github.com/python/cpython/issues/108342
+--- a/Lib/ssl.py
++++ b/Lib/ssl.py
+@@ -1083,7 +1083,11 @@
+ self.close()
+ except OSError:
+ pass
+- raise notconn_pre_handshake_data_error
++ try:
++ raise notconn_pre_handshake_data_error
++ finally:
++ # Explicitly break the reference cycle.
++ notconn_pre_handshake_data_error = None
+ else:
+ connected = True
+
diff -Nru python3.11-3.11.2/debian/patches/CVE-2023-40217-ssl-pre-close-flaw.patch python3.11-3.11.2/debian/patches/CVE-2023-40217-ssl-pre-close-flaw.patch
--- python3.11-3.11.2/debian/patches/CVE-2023-40217-ssl-pre-close-flaw.patch 1969-12-31 20:00:00.000000000 -0400
+++ python3.11-3.11.2/debian/patches/CVE-2023-40217-ssl-pre-close-flaw.patch 2024-05-02 07:59:08.000000000 -0400
@@ -0,0 +1,324 @@
+From 75a875e0df0530b75b1470d797942f90f4a718d3 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=C5=81ukasz=20Langa?= <lukasz@langa.pl>
+Date: Tue, 22 Aug 2023 19:53:19 +0200
+Subject: [PATCH] [3.11] gh-108310: Fix CVE-2023-40217: Check for & avoid the
+ ssl pre-close flaw (#108317)
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+gh-108310: Fix CVE-2023-40217: Check for & avoid the ssl pre-close flaw
+
+Instances of `ssl.SSLSocket` were vulnerable to a bypass of the TLS handshake
+and included protections (like certificate verification) and treating sent
+unencrypted data as if it were post-handshake TLS encrypted data.
+
+The vulnerability is caused when a socket is connected, data is sent by the
+malicious peer and stored in a buffer, and then the malicious peer closes the
+socket within a small timing window before the other peers’ TLS handshake can
+begin. After this sequence of events the closed socket will not immediately
+attempt a TLS handshake due to not being connected but will also allow the
+buffered data to be read as if a successful TLS handshake had occurred.
+
+Co-authored-by: Gregory P. Smith [Google LLC] <greg@krypto.org>
+---
+ Lib/ssl.py | 31 ++-
+ Lib/test/test_ssl.py | 211 ++++++++++++++++++
+ ...-08-22-17-39-12.gh-issue-108310.fVM3sg.rst | 7 +
+ 3 files changed, 248 insertions(+), 1 deletion(-)
+ create mode 100644 Misc/NEWS.d/next/Security/2023-08-22-17-39-12.gh-issue-108310.fVM3sg.rst
+
+Origin: upstream, https://github.com/python/cpython/commit/75a875e0df0530b75b1470d797942f90f4a718d3
+Bug-Upstream: https://github.com/python/cpython/issues/108310
+--- a/Lib/ssl.py
++++ b/Lib/ssl.py
+@@ -1037,7 +1037,7 @@
+ )
+ self = cls.__new__(cls, **kwargs)
+ super(SSLSocket, self).__init__(**kwargs)
+- self.settimeout(sock.gettimeout())
++ sock_timeout = sock.gettimeout()
+ sock.detach()
+
+ self._context = context
+@@ -1056,9 +1056,38 @@
+ if e.errno != errno.ENOTCONN:
+ raise
+ connected = False
++ blocking = self.getblocking()
++ self.setblocking(False)
++ try:
++ # We are not connected so this is not supposed to block, but
++ # testing revealed otherwise on macOS and Windows so we do
++ # the non-blocking dance regardless. Our raise when any data
++ # is found means consuming the data is harmless.
++ notconn_pre_handshake_data = self.recv(1)
++ except OSError as e:
++ # EINVAL occurs for recv(1) on non-connected on unix sockets.
++ if e.errno not in (errno.ENOTCONN, errno.EINVAL):
++ raise
++ notconn_pre_handshake_data = b''
++ self.setblocking(blocking)
++ if notconn_pre_handshake_data:
++ # This prevents pending data sent to the socket before it was
++ # closed from escaping to the caller who could otherwise
++ # presume it came through a successful TLS connection.
++ reason = "Closed before TLS handshake with data in recv buffer."
++ notconn_pre_handshake_data_error = SSLError(e.errno, reason)
++ # Add the SSLError attributes that _ssl.c always adds.
++ notconn_pre_handshake_data_error.reason = reason
++ notconn_pre_handshake_data_error.library = None
++ try:
++ self.close()
++ except OSError:
++ pass
++ raise notconn_pre_handshake_data_error
+ else:
+ connected = True
+
++ self.settimeout(sock_timeout) # Must come after setblocking() calls.
+ self._connected = connected
+ if connected:
+ # create the SSL object
+--- a/Lib/test/test_ssl.py
++++ b/Lib/test/test_ssl.py
+@@ -9,11 +9,14 @@
+ from test.support import socket_helper
+ from test.support import threading_helper
+ from test.support import warnings_helper
++import re
+ import socket
+ import select
++import struct
+ import time
+ import enum
+ import gc
++import http.client
+ import os
+ import errno
+ import pprint
+@@ -4884,6 +4887,214 @@
+ s.connect((HOST, server.port))
+
+
++def set_socket_so_linger_on_with_zero_timeout(sock):
++ sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 0))
++
++
++class TestPreHandshakeClose(unittest.TestCase):
++ """Verify behavior of close sockets with received data before to the handshake.
++ """
++
++ class SingleConnectionTestServerThread(threading.Thread):
++
++ def __init__(self, *, name, call_after_accept):
++ self.call_after_accept = call_after_accept
++ self.received_data = b'' # set by .run()
++ self.wrap_error = None # set by .run()
++ self.listener = None # set by .start()
++ self.port = None # set by .start()
++ super().__init__(name=name)
++
++ def __enter__(self):
++ self.start()
++ return self
++
++ def __exit__(self, *args):
++ try:
++ if self.listener:
++ self.listener.close()
++ except OSError:
++ pass
++ self.join()
++ self.wrap_error = None # avoid dangling references
++
++ def start(self):
++ self.ssl_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
++ self.ssl_ctx.verify_mode = ssl.CERT_REQUIRED
++ self.ssl_ctx.load_verify_locations(cafile=ONLYCERT)
++ self.ssl_ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY)
++ self.listener = socket.socket()
++ self.port = socket_helper.bind_port(self.listener)
++ self.listener.settimeout(2.0)
++ self.listener.listen(1)
++ super().start()
++
++ def run(self):
++ conn, address = self.listener.accept()
++ self.listener.close()
++ with conn:
++ if self.call_after_accept(conn):
++ return
++ try:
++ tls_socket = self.ssl_ctx.wrap_socket(conn, server_side=True)
++ except OSError as err: # ssl.SSLError inherits from OSError
++ self.wrap_error = err
++ else:
++ try:
++ self.received_data = tls_socket.recv(400)
++ except OSError:
++ pass # closed, protocol error, etc.
++
++ def non_linux_skip_if_other_okay_error(self, err):
++ if sys.platform == "linux":
++ return # Expect the full test setup to always work on Linux.
++ if (isinstance(err, ConnectionResetError) or
++ (isinstance(err, OSError) and err.errno == errno.EINVAL) or
++ re.search('wrong.version.number', getattr(err, "reason", ""), re.I)):
++ # On Windows the TCP RST leads to a ConnectionResetError
++ # (ECONNRESET) which Linux doesn't appear to surface to userspace.
++ # If wrap_socket() winds up on the "if connected:" path and doing
++ # the actual wrapping... we get an SSLError from OpenSSL. Typically
++ # WRONG_VERSION_NUMBER. While appropriate, neither is the scenario
++ # we're specifically trying to test. The way this test is written
++ # is known to work on Linux. We'll skip it anywhere else that it
++ # does not present as doing so.
++ self.skipTest(f"Could not recreate conditions on {sys.platform}:"
++ f" {err=}")
++ # If maintaining this conditional winds up being a problem.
++ # just turn this into an unconditional skip anything but Linux.
++ # The important thing is that our CI has the logic covered.
++
++ def test_preauth_data_to_tls_server(self):
++ server_accept_called = threading.Event()
++ ready_for_server_wrap_socket = threading.Event()
++
++ def call_after_accept(unused):
++ server_accept_called.set()
++ if not ready_for_server_wrap_socket.wait(2.0):
++ raise RuntimeError("wrap_socket event never set, test may fail.")
++ return False # Tell the server thread to continue.
++
++ server = self.SingleConnectionTestServerThread(
++ call_after_accept=call_after_accept,
++ name="preauth_data_to_tls_server")
++ self.enterContext(server) # starts it & unittest.TestCase stops it.
++
++ with socket.socket() as client:
++ client.connect(server.listener.getsockname())
++ # This forces an immediate connection close via RST on .close().
++ set_socket_so_linger_on_with_zero_timeout(client)
++ client.setblocking(False)
++
++ server_accept_called.wait()
++ client.send(b"DELETE /data HTTP/1.0\r\n\r\n")
++ client.close() # RST
++
++ ready_for_server_wrap_socket.set()
++ server.join()
++ wrap_error = server.wrap_error
++ self.assertEqual(b"", server.received_data)
++ self.assertIsInstance(wrap_error, OSError) # All platforms.
++ self.non_linux_skip_if_other_okay_error(wrap_error)
++ self.assertIsInstance(wrap_error, ssl.SSLError)
++ self.assertIn("before TLS handshake with data", wrap_error.args[1])
++ self.assertIn("before TLS handshake with data", wrap_error.reason)
++ self.assertNotEqual(0, wrap_error.args[0])
++ self.assertIsNone(wrap_error.library, msg="attr must exist")
++
++ def test_preauth_data_to_tls_client(self):
++ client_can_continue_with_wrap_socket = threading.Event()
++
++ def call_after_accept(conn_to_client):
++ # This forces an immediate connection close via RST on .close().
++ set_socket_so_linger_on_with_zero_timeout(conn_to_client)
++ conn_to_client.send(
++ b"HTTP/1.0 307 Temporary Redirect\r\n"
++ b"Location: https://example.com/someone-elses-server\r\n"
++ b"\r\n")
++ conn_to_client.close() # RST
++ client_can_continue_with_wrap_socket.set()
++ return True # Tell the server to stop.
++
++ server = self.SingleConnectionTestServerThread(
++ call_after_accept=call_after_accept,
++ name="preauth_data_to_tls_client")
++ self.enterContext(server) # starts it & unittest.TestCase stops it.
++ # Redundant; call_after_accept sets SO_LINGER on the accepted conn.
++ set_socket_so_linger_on_with_zero_timeout(server.listener)
++
++ with socket.socket() as client:
++ client.connect(server.listener.getsockname())
++ if not client_can_continue_with_wrap_socket.wait(2.0):
++ self.fail("test server took too long.")
++ ssl_ctx = ssl.create_default_context()
++ try:
++ tls_client = ssl_ctx.wrap_socket(
++ client, server_hostname="localhost")
++ except OSError as err: # SSLError inherits from OSError
++ wrap_error = err
++ received_data = b""
++ else:
++ wrap_error = None
++ received_data = tls_client.recv(400)
++ tls_client.close()
++
++ server.join()
++ self.assertEqual(b"", received_data)
++ self.assertIsInstance(wrap_error, OSError) # All platforms.
++ self.non_linux_skip_if_other_okay_error(wrap_error)
++ self.assertIsInstance(wrap_error, ssl.SSLError)
++ self.assertIn("before TLS handshake with data", wrap_error.args[1])
++ self.assertIn("before TLS handshake with data", wrap_error.reason)
++ self.assertNotEqual(0, wrap_error.args[0])
++ self.assertIsNone(wrap_error.library, msg="attr must exist")
++
++ def test_https_client_non_tls_response_ignored(self):
++
++ server_responding = threading.Event()
++
++ class SynchronizedHTTPSConnection(http.client.HTTPSConnection):
++ def connect(self):
++ http.client.HTTPConnection.connect(self)
++ # Wait for our fault injection server to have done its thing.
++ if not server_responding.wait(1.0) and support.verbose:
++ sys.stdout.write("server_responding event never set.")
++ self.sock = self._context.wrap_socket(
++ self.sock, server_hostname=self.host)
++
++ def call_after_accept(conn_to_client):
++ # This forces an immediate connection close via RST on .close().
++ set_socket_so_linger_on_with_zero_timeout(conn_to_client)
++ conn_to_client.send(
++ b"HTTP/1.0 402 Payment Required\r\n"
++ b"\r\n")
++ conn_to_client.close() # RST
++ server_responding.set()
++ return True # Tell the server to stop.
++
++ server = self.SingleConnectionTestServerThread(
++ call_after_accept=call_after_accept,
++ name="non_tls_http_RST_responder")
++ self.enterContext(server) # starts it & unittest.TestCase stops it.
++ # Redundant; call_after_accept sets SO_LINGER on the accepted conn.
++ set_socket_so_linger_on_with_zero_timeout(server.listener)
++
++ connection = SynchronizedHTTPSConnection(
++ f"localhost",
++ port=server.port,
++ context=ssl.create_default_context(),
++ timeout=2.0,
++ )
++ # There are lots of reasons this raises as desired, long before this
++ # test was added. Sending the request requires a successful TLS wrapped
++ # socket; that fails if the connection is broken. It may seem pointless
++ # to test this. It serves as an illustration of something that we never
++ # want to happen... properly not happening.
++ with self.assertRaises(OSError) as err_ctx:
++ connection.request("HEAD", "/test", headers={"Host": "localhost"})
++ response = connection.getresponse()
++
++
+ class TestEnumerations(unittest.TestCase):
+
+ def test_tlsversion(self):
+--- /dev/null
++++ b/Misc/NEWS.d/next/Security/2023-08-22-17-39-12.gh-issue-108310.fVM3sg.rst
+@@ -0,0 +1,7 @@
++Fixed an issue where instances of :class:`ssl.SSLSocket` were vulnerable to
++a bypass of the TLS handshake and included protections (like certificate
++verification) and treating sent unencrypted data as if it were
++post-handshake TLS encrypted data. Security issue reported as
++`CVE-2023-40217
++<https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-40217>`_ by
++Aapo Oksman. Patch by Gregory P. Smith.
diff -Nru python3.11-3.11.2/debian/patches/CVE-2023-40217-test-reliability.patch python3.11-3.11.2/debian/patches/CVE-2023-40217-test-reliability.patch
--- python3.11-3.11.2/debian/patches/CVE-2023-40217-test-reliability.patch 1969-12-31 20:00:00.000000000 -0400
+++ python3.11-3.11.2/debian/patches/CVE-2023-40217-test-reliability.patch 2024-05-02 07:59:08.000000000 -0400
@@ -0,0 +1,231 @@
+From d90f2f3a6292614ce8ae22a15694dcb676bc8c36 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=C5=81ukasz=20Langa?= <lukasz@langa.pl>
+Date: Thu, 24 Aug 2023 12:08:52 +0200
+Subject: [PATCH] [3.11] gh-108342: Make ssl TestPreHandshakeClose more
+ reliable (GH-108370) (#108405)
+
+* In preauth tests of test_ssl, explicitly break reference cycles
+ invoving SingleConnectionTestServerThread to make sure that the
+ thread is deleted. Otherwise, the test marks the environment as
+ altered because the threading module sees a "dangling thread"
+ (SingleConnectionTestServerThread). This test leak was introduced
+ by the test added for the fix of issue gh-108310.
+* Use support.SHORT_TIMEOUT instead of hardcoded 1.0 or 2.0 seconds
+ timeout.
+* SingleConnectionTestServerThread.run() catchs TimeoutError
+* Fix a race condition (missing synchronization) in
+ test_preauth_data_to_tls_client(): the server now waits until the
+ client connect() completed in call_after_accept().
+* test_https_client_non_tls_response_ignored() calls server.join()
+ explicitly.
+* Replace "localhost" with server.listener.getsockname()[0].
+(cherry picked from commit 592bacb6fc0833336c0453e818e9b95016e9fd47)
+
+Co-authored-by: Victor Stinner <vstinner@python.org>
+---
+ Lib/test/test_ssl.py | 102 ++++++++++++++++++++++++++++++-------------
+ 1 file changed, 71 insertions(+), 31 deletions(-)
+
+Origin: upstream, https://github.com/python/cpython/commit/d90f2f3a6292614ce8ae22a15694dcb676bc8c36
+Bug-Upstream: https://github.com/python/cpython/issues/108342
+--- a/Lib/test/test_ssl.py
++++ b/Lib/test/test_ssl.py
+@@ -4897,12 +4897,16 @@
+
+ class SingleConnectionTestServerThread(threading.Thread):
+
+- def __init__(self, *, name, call_after_accept):
++ def __init__(self, *, name, call_after_accept, timeout=None):
+ self.call_after_accept = call_after_accept
+ self.received_data = b'' # set by .run()
+ self.wrap_error = None # set by .run()
+ self.listener = None # set by .start()
+ self.port = None # set by .start()
++ if timeout is None:
++ self.timeout = support.SHORT_TIMEOUT
++ else:
++ self.timeout = timeout
+ super().__init__(name=name)
+
+ def __enter__(self):
+@@ -4925,13 +4929,19 @@
+ self.ssl_ctx.load_cert_chain(certfile=ONLYCERT, keyfile=ONLYKEY)
+ self.listener = socket.socket()
+ self.port = socket_helper.bind_port(self.listener)
+- self.listener.settimeout(2.0)
++ self.listener.settimeout(self.timeout)
+ self.listener.listen(1)
+ super().start()
+
+ def run(self):
+- conn, address = self.listener.accept()
+- self.listener.close()
++ try:
++ conn, address = self.listener.accept()
++ except TimeoutError:
++ # on timeout, just close the listener
++ return
++ finally:
++ self.listener.close()
++
+ with conn:
+ if self.call_after_accept(conn):
+ return
+@@ -4959,8 +4969,13 @@
+ # we're specifically trying to test. The way this test is written
+ # is known to work on Linux. We'll skip it anywhere else that it
+ # does not present as doing so.
+- self.skipTest(f"Could not recreate conditions on {sys.platform}:"
+- f" {err=}")
++ try:
++ self.skipTest(f"Could not recreate conditions on {sys.platform}:"
++ f" {err=}")
++ finally:
++ # gh-108342: Explicitly break the reference cycle
++ err = None
++
+ # If maintaining this conditional winds up being a problem.
+ # just turn this into an unconditional skip anything but Linux.
+ # The important thing is that our CI has the logic covered.
+@@ -4971,7 +4986,7 @@
+
+ def call_after_accept(unused):
+ server_accept_called.set()
+- if not ready_for_server_wrap_socket.wait(2.0):
++ if not ready_for_server_wrap_socket.wait(support.SHORT_TIMEOUT):
+ raise RuntimeError("wrap_socket event never set, test may fail.")
+ return False # Tell the server thread to continue.
+
+@@ -4992,20 +5007,31 @@
+
+ ready_for_server_wrap_socket.set()
+ server.join()
++
+ wrap_error = server.wrap_error
+- self.assertEqual(b"", server.received_data)
+- self.assertIsInstance(wrap_error, OSError) # All platforms.
+- self.non_linux_skip_if_other_okay_error(wrap_error)
+- self.assertIsInstance(wrap_error, ssl.SSLError)
+- self.assertIn("before TLS handshake with data", wrap_error.args[1])
+- self.assertIn("before TLS handshake with data", wrap_error.reason)
+- self.assertNotEqual(0, wrap_error.args[0])
+- self.assertIsNone(wrap_error.library, msg="attr must exist")
++ server.wrap_error = None
++ try:
++ self.assertEqual(b"", server.received_data)
++ self.assertIsInstance(wrap_error, OSError) # All platforms.
++ self.non_linux_skip_if_other_okay_error(wrap_error)
++ self.assertIsInstance(wrap_error, ssl.SSLError)
++ self.assertIn("before TLS handshake with data", wrap_error.args[1])
++ self.assertIn("before TLS handshake with data", wrap_error.reason)
++ self.assertNotEqual(0, wrap_error.args[0])
++ self.assertIsNone(wrap_error.library, msg="attr must exist")
++ finally:
++ # gh-108342: Explicitly break the reference cycle
++ wrap_error = None
++ server = None
+
+ def test_preauth_data_to_tls_client(self):
++ server_can_continue_with_wrap_socket = threading.Event()
+ client_can_continue_with_wrap_socket = threading.Event()
+
+ def call_after_accept(conn_to_client):
++ if not server_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT):
++ print("ERROR: test client took too long")
++
+ # This forces an immediate connection close via RST on .close().
+ set_socket_so_linger_on_with_zero_timeout(conn_to_client)
+ conn_to_client.send(
+@@ -5025,8 +5051,10 @@
+
+ with socket.socket() as client:
+ client.connect(server.listener.getsockname())
+- if not client_can_continue_with_wrap_socket.wait(2.0):
+- self.fail("test server took too long.")
++ server_can_continue_with_wrap_socket.set()
++
++ if not client_can_continue_with_wrap_socket.wait(support.SHORT_TIMEOUT):
++ self.fail("test server took too long")
+ ssl_ctx = ssl.create_default_context()
+ try:
+ tls_client = ssl_ctx.wrap_socket(
+@@ -5040,24 +5068,31 @@
+ tls_client.close()
+
+ server.join()
+- self.assertEqual(b"", received_data)
+- self.assertIsInstance(wrap_error, OSError) # All platforms.
+- self.non_linux_skip_if_other_okay_error(wrap_error)
+- self.assertIsInstance(wrap_error, ssl.SSLError)
+- self.assertIn("before TLS handshake with data", wrap_error.args[1])
+- self.assertIn("before TLS handshake with data", wrap_error.reason)
+- self.assertNotEqual(0, wrap_error.args[0])
+- self.assertIsNone(wrap_error.library, msg="attr must exist")
++ try:
++ self.assertEqual(b"", received_data)
++ self.assertIsInstance(wrap_error, OSError) # All platforms.
++ self.non_linux_skip_if_other_okay_error(wrap_error)
++ self.assertIsInstance(wrap_error, ssl.SSLError)
++ self.assertIn("before TLS handshake with data", wrap_error.args[1])
++ self.assertIn("before TLS handshake with data", wrap_error.reason)
++ self.assertNotEqual(0, wrap_error.args[0])
++ self.assertIsNone(wrap_error.library, msg="attr must exist")
++ finally:
++ # gh-108342: Explicitly break the reference cycle
++ wrap_error = None
++ server = None
+
+ def test_https_client_non_tls_response_ignored(self):
+-
+ server_responding = threading.Event()
+
+ class SynchronizedHTTPSConnection(http.client.HTTPSConnection):
+ def connect(self):
++ # Call clear text HTTP connect(), not the encrypted HTTPS (TLS)
++ # connect(): wrap_socket() is called manually below.
+ http.client.HTTPConnection.connect(self)
++
+ # Wait for our fault injection server to have done its thing.
+- if not server_responding.wait(1.0) and support.verbose:
++ if not server_responding.wait(support.SHORT_TIMEOUT) and support.verbose:
+ sys.stdout.write("server_responding event never set.")
+ self.sock = self._context.wrap_socket(
+ self.sock, server_hostname=self.host)
+@@ -5072,28 +5107,33 @@
+ server_responding.set()
+ return True # Tell the server to stop.
+
++ timeout = 2.0
+ server = self.SingleConnectionTestServerThread(
+ call_after_accept=call_after_accept,
+- name="non_tls_http_RST_responder")
++ name="non_tls_http_RST_responder",
++ timeout=timeout)
+ self.enterContext(server) # starts it & unittest.TestCase stops it.
+ # Redundant; call_after_accept sets SO_LINGER on the accepted conn.
+ set_socket_so_linger_on_with_zero_timeout(server.listener)
+
+ connection = SynchronizedHTTPSConnection(
+- f"localhost",
++ server.listener.getsockname()[0],
+ port=server.port,
+ context=ssl.create_default_context(),
+- timeout=2.0,
++ timeout=timeout,
+ )
++
+ # There are lots of reasons this raises as desired, long before this
+ # test was added. Sending the request requires a successful TLS wrapped
+ # socket; that fails if the connection is broken. It may seem pointless
+ # to test this. It serves as an illustration of something that we never
+ # want to happen... properly not happening.
+- with self.assertRaises(OSError) as err_ctx:
++ with self.assertRaises(OSError):
+ connection.request("HEAD", "/test", headers={"Host": "localhost"})
+ response = connection.getresponse()
+
++ server.join()
++
+
+ class TestEnumerations(unittest.TestCase):
+
diff -Nru python3.11-3.11.2/debian/patches/CVE-2023-41105-path-truncation.patch python3.11-3.11.2/debian/patches/CVE-2023-41105-path-truncation.patch
--- python3.11-3.11.2/debian/patches/CVE-2023-41105-path-truncation.patch 1969-12-31 20:00:00.000000000 -0400
+++ python3.11-3.11.2/debian/patches/CVE-2023-41105-path-truncation.patch 2024-05-02 07:59:08.000000000 -0400
@@ -0,0 +1,125 @@
+From ccf81e1088c25a9f4464e478dc3b5c03ed7ee63b Mon Sep 17 00:00:00 2001
+From: Steve Dower <steve.dower@python.org>
+Date: Tue, 15 Aug 2023 18:07:52 +0100
+Subject: [PATCH] [3.11] gh-106242: Fix path truncation in os.path.normpath
+ (GH-106816) (#107982)
+
+Co-authored-by: Finn Womack <flan313@gmail.com>
+---
+ Include/internal/pycore_fileutils.h | 3 +-
+ Lib/test/test_genericpath.py | 4 +++
+ ...-08-14-23-11-11.gh-issue-106242.71HMym.rst | 1 +
+ Modules/posixmodule.c | 4 ++-
+ Python/fileutils.c | 29 ++++++++++++++-----
+ 5 files changed, 31 insertions(+), 10 deletions(-)
+ create mode 100644 Misc/NEWS.d/next/Library/2023-08-14-23-11-11.gh-issue-106242.71HMym.rst
+
+Fixes: CVE-2023-41105
+
+Origin: upstream, https://github.com/python/cpython/pull/107982
+Bug-Upstream: https://github.com/python/cpython/issues/106242
+
+--- a/Include/internal/pycore_fileutils.h
++++ b/Include/internal/pycore_fileutils.h
+@@ -244,7 +244,8 @@
+ const wchar_t *relfile,
+ size_t bufsize);
+ extern size_t _Py_find_basename(const wchar_t *filename);
+-PyAPI_FUNC(wchar_t *) _Py_normpath(wchar_t *path, Py_ssize_t size);
++PyAPI_FUNC(wchar_t*) _Py_normpath(wchar_t *path, Py_ssize_t size);
++extern wchar_t *_Py_normpath_and_size(wchar_t *path, Py_ssize_t size, Py_ssize_t *length);
+
+
+ // Macros to protect CRT calls against instant termination when passed an
+--- a/Lib/test/test_genericpath.py
++++ b/Lib/test/test_genericpath.py
+@@ -460,6 +460,10 @@
+ for path in ('', '.', '/', '\\', '///foo/.//bar//'):
+ self.assertIsInstance(self.pathmodule.normpath(path), str)
+
++ def test_normpath_issue106242(self):
++ for path in ('\x00', 'foo\x00bar', '\x00\x00', '\x00foo', 'foo\x00'):
++ self.assertEqual(self.pathmodule.normpath(path), path)
++
+ def test_abspath_issue3426(self):
+ # Check that abspath returns unicode when the arg is unicode
+ # with both ASCII and non-ASCII cwds.
+--- /dev/null
++++ b/Misc/NEWS.d/next/Library/2023-08-14-23-11-11.gh-issue-106242.71HMym.rst
+@@ -0,0 +1 @@
++Fixes :func:`os.path.normpath` to handle embedded null characters without truncating the path.
+--- a/Modules/posixmodule.c
++++ b/Modules/posixmodule.c
+@@ -4543,7 +4543,9 @@
+ if (!buffer) {
+ return NULL;
+ }
+- PyObject *result = PyUnicode_FromWideChar(_Py_normpath(buffer, len), -1);
++ Py_ssize_t norm_len;
++ wchar_t *norm_path = _Py_normpath_and_size(buffer, len, &norm_len);
++ PyObject *result = PyUnicode_FromWideChar(norm_path, norm_len);
+ PyMem_Free(buffer);
+ return result;
+ }
+--- a/Python/fileutils.c
++++ b/Python/fileutils.c
+@@ -2179,12 +2179,14 @@
+ path, which will be within the original buffer. Guaranteed to not
+ make the path longer, and will not fail. 'size' is the length of
+ the path, if known. If -1, the first null character will be assumed
+- to be the end of the path. */
++ to be the end of the path. 'normsize' will be set to contain the
++ length of the resulting normalized path. */
+ wchar_t *
+-_Py_normpath(wchar_t *path, Py_ssize_t size)
++_Py_normpath_and_size(wchar_t *path, Py_ssize_t size, Py_ssize_t *normsize)
+ {
+ assert(path != NULL);
+- if (!path[0] || size == 0) {
++ if ((size < 0 && !path[0]) || size == 0) {
++ *normsize = 0;
+ return path;
+ }
+ wchar_t *pEnd = size >= 0 ? &path[size] : NULL;
+@@ -2233,11 +2235,7 @@
+ *p2++ = lastC = *p1;
+ }
+ }
+- if (sepCount) {
+- minP2 = p2; // Invalid path
+- } else {
+- minP2 = p2 - 1; // Absolute path has SEP at minP2
+- }
++ minP2 = p2 - 1;
+ }
+ #else
+ // Skip past two leading SEPs
+@@ -2297,13 +2295,28 @@
+ while (--p2 != minP2 && *p2 == SEP) {
+ *p2 = L'\0';
+ }
++ } else {
++ --p2;
+ }
++ *normsize = p2 - path + 1;
+ #undef SEP_OR_END
+ #undef IS_SEP
+ #undef IS_END
+ return path;
+ }
+
++/* In-place path normalisation. Returns the start of the normalized
++ path, which will be within the original buffer. Guaranteed to not
++ make the path longer, and will not fail. 'size' is the length of
++ the path, if known. If -1, the first null character will be assumed
++ to be the end of the path. */
++wchar_t *
++_Py_normpath(wchar_t *path, Py_ssize_t size)
++{
++ Py_ssize_t norm_length;
++ return _Py_normpath_and_size(path, size, &norm_length);
++}
++
+
+ /* Get the current directory. buflen is the buffer size in wide characters
+ including the null character. Decode the path from the locale encoding.
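For context outside the diff: the CVE-2023-41105 patch above makes os.path.normpath() carry embedded NUL characters through instead of silently truncating the result at the first NUL. A minimal sketch of the fixed behaviour on a patched interpreter, mirroring the test_normpath_issue106242 cases added by the patch:

```python
import os.path

# Before the fix, the C implementation stopped at the first NUL, so
# "foo\x00bar" normalised to "foo" -- a path-truncation primitive that
# could bypass later path checks. With the fix, the NUL survives.
for path in ('\x00', 'foo\x00bar', '\x00\x00', '\x00foo', 'foo\x00'):
    assert os.path.normpath(path) == path

# Ordinary normalisation is unchanged.
assert os.path.normpath('a//b/./c/..') == 'a/b'
```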
diff -Nru python3.11-3.11.2/debian/patches/CVE-2023-6597.patch python3.11-3.11.2/debian/patches/CVE-2023-6597.patch
--- python3.11-3.11.2/debian/patches/CVE-2023-6597.patch 1969-12-31 20:00:00.000000000 -0400
+++ python3.11-3.11.2/debian/patches/CVE-2023-6597.patch 2024-05-02 07:59:08.000000000 -0400
@@ -0,0 +1,200 @@
+commit 5585334d772b253a01a6730e8202ffb1607c3d25
+Author: Serhiy Storchaka <storchaka@gmail.com>
+Date: Thu Dec 7 18:37:10 2023 +0200
+
+ [3.11] gh-91133: tempfile.TemporaryDirectory: fix symlink bug in cleanup (GH-99930) (GH-112839)
+
+ (cherry picked from commit 81c16cd94ec38d61aa478b9a452436dc3b1b524d)
+
+ Co-authored-by: Søren Løvborg <sorenl@unity3d.com>
+
+Fixes: CVE-2023-6597
+
+Bug-Debian: https://bugs.debian.org/1070135
+Bug-Upstream: https://github.com/python/cpython/issues/91133
+
+--- a/Lib/tempfile.py
++++ b/Lib/tempfile.py
+@@ -408,6 +408,22 @@
+ raise FileExistsError(_errno.EEXIST,
+ "No usable temporary file name found")
+
++def _dont_follow_symlinks(func, path, *args):
++ # Pass follow_symlinks=False, unless not supported on this platform.
++ if func in _os.supports_follow_symlinks:
++ func(path, *args, follow_symlinks=False)
++ elif _os.name == 'nt' or not _os.path.islink(path):
++ func(path, *args)
++
++def _resetperms(path):
++ try:
++ chflags = _os.chflags
++ except AttributeError:
++ pass
++ else:
++ _dont_follow_symlinks(chflags, path, 0)
++ _dont_follow_symlinks(_os.chmod, path, 0o700)
++
+
+ # User visible interfaces.
+
+@@ -1001,17 +1017,10 @@
+ def _rmtree(cls, name, ignore_errors=False):
+ def onerror(func, path, exc_info):
+ if issubclass(exc_info[0], PermissionError):
+- def resetperms(path):
+- try:
+- _os.chflags(path, 0)
+- except AttributeError:
+- pass
+- _os.chmod(path, 0o700)
+-
+ try:
+ if path != name:
+- resetperms(_os.path.dirname(path))
+- resetperms(path)
++ _resetperms(_os.path.dirname(path))
++ _resetperms(path)
+
+ try:
+ _os.unlink(path)
+--- a/Lib/test/test_tempfile.py
++++ b/Lib/test/test_tempfile.py
+@@ -1554,6 +1554,103 @@
+ "were deleted")
+ d2.cleanup()
+
++ @os_helper.skip_unless_symlink
++ def test_cleanup_with_symlink_modes(self):
++ # cleanup() should not follow symlinks when fixing mode bits (#91133)
++ with self.do_create(recurse=0) as d2:
++ file1 = os.path.join(d2, 'file1')
++ open(file1, 'wb').close()
++ dir1 = os.path.join(d2, 'dir1')
++ os.mkdir(dir1)
++ for mode in range(8):
++ mode <<= 6
++ with self.subTest(mode=format(mode, '03o')):
++ def test(target, target_is_directory):
++ d1 = self.do_create(recurse=0)
++ symlink = os.path.join(d1.name, 'symlink')
++ os.symlink(target, symlink,
++ target_is_directory=target_is_directory)
++ try:
++ os.chmod(symlink, mode, follow_symlinks=False)
++ except NotImplementedError:
++ pass
++ try:
++ os.chmod(symlink, mode)
++ except FileNotFoundError:
++ pass
++ os.chmod(d1.name, mode)
++ d1.cleanup()
++ self.assertFalse(os.path.exists(d1.name))
++
++ with self.subTest('nonexisting file'):
++ test('nonexisting', target_is_directory=False)
++ with self.subTest('nonexisting dir'):
++ test('nonexisting', target_is_directory=True)
++
++ with self.subTest('existing file'):
++ os.chmod(file1, mode)
++ old_mode = os.stat(file1).st_mode
++ test(file1, target_is_directory=False)
++ new_mode = os.stat(file1).st_mode
++ self.assertEqual(new_mode, old_mode,
++ '%03o != %03o' % (new_mode, old_mode))
++
++ with self.subTest('existing dir'):
++ os.chmod(dir1, mode)
++ old_mode = os.stat(dir1).st_mode
++ test(dir1, target_is_directory=True)
++ new_mode = os.stat(dir1).st_mode
++ self.assertEqual(new_mode, old_mode,
++ '%03o != %03o' % (new_mode, old_mode))
++
++ @unittest.skipUnless(hasattr(os, 'chflags'), 'requires os.chflags')
++ @os_helper.skip_unless_symlink
++ def test_cleanup_with_symlink_flags(self):
++ # cleanup() should not follow symlinks when fixing flags (#91133)
++ flags = stat.UF_IMMUTABLE | stat.UF_NOUNLINK
++ self.check_flags(flags)
++
++ with self.do_create(recurse=0) as d2:
++ file1 = os.path.join(d2, 'file1')
++ open(file1, 'wb').close()
++ dir1 = os.path.join(d2, 'dir1')
++ os.mkdir(dir1)
++ def test(target, target_is_directory):
++ d1 = self.do_create(recurse=0)
++ symlink = os.path.join(d1.name, 'symlink')
++ os.symlink(target, symlink,
++ target_is_directory=target_is_directory)
++ try:
++ os.chflags(symlink, flags, follow_symlinks=False)
++ except NotImplementedError:
++ pass
++ try:
++ os.chflags(symlink, flags)
++ except FileNotFoundError:
++ pass
++ os.chflags(d1.name, flags)
++ d1.cleanup()
++ self.assertFalse(os.path.exists(d1.name))
++
++ with self.subTest('nonexisting file'):
++ test('nonexisting', target_is_directory=False)
++ with self.subTest('nonexisting dir'):
++ test('nonexisting', target_is_directory=True)
++
++ with self.subTest('existing file'):
++ os.chflags(file1, flags)
++ old_flags = os.stat(file1).st_flags
++ test(file1, target_is_directory=False)
++ new_flags = os.stat(file1).st_flags
++ self.assertEqual(new_flags, old_flags)
++
++ with self.subTest('existing dir'):
++ os.chflags(dir1, flags)
++ old_flags = os.stat(dir1).st_flags
++ test(dir1, target_is_directory=True)
++ new_flags = os.stat(dir1).st_flags
++ self.assertEqual(new_flags, old_flags)
++
+ @support.cpython_only
+ def test_del_on_collection(self):
+ # A TemporaryDirectory is deleted when garbage collected
+@@ -1726,9 +1823,27 @@
+ d.cleanup()
+ self.assertFalse(os.path.exists(d.name))
+
+- @unittest.skipUnless(hasattr(os, 'chflags'), 'requires os.lchflags')
++ def check_flags(self, flags):
++ # skip the test if these flags are not supported (ex: FreeBSD 13)
++ filename = os_helper.TESTFN
++ try:
++ open(filename, "w").close()
++ try:
++ os.chflags(filename, flags)
++ except OSError as exc:
++ # "OSError: [Errno 45] Operation not supported"
++ self.skipTest(f"chflags() doesn't support flags "
++ f"{flags:#b}: {exc}")
++ else:
++ os.chflags(filename, 0)
++ finally:
++ os_helper.unlink(filename)
++
++ @unittest.skipUnless(hasattr(os, 'chflags'), 'requires os.chflags')
+ def test_flags(self):
+ flags = stat.UF_IMMUTABLE | stat.UF_NOUNLINK
++ self.check_flags(flags)
++
+ d = self.do_create(recurse=3, dirs=2, files=2)
+ with d:
+ # Change files and directories flags recursively.
+--- /dev/null
++++ b/Misc/NEWS.d/next/Library/2022-12-01-16-57-44.gh-issue-91133.LKMVCV.rst
+@@ -0,0 +1,2 @@
++Fix a bug in :class:`tempfile.TemporaryDirectory` cleanup, which now no longer
++dereferences symlinks when working around file system permission errors.
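To illustrate what the CVE-2023-6597 patch above guarantees: TemporaryDirectory.cleanup() no longer follows symlinks when repairing permissions, so a link inside the directory cannot be used to change the mode of a file outside it. A POSIX-only sketch (os.symlink, os.chmod on directories) of the fixed behaviour:

```python
import os
import stat
import tempfile

# A target file outside the directory that cleanup() will remove.
outer = tempfile.mkdtemp()
target = os.path.join(outer, "target")
open(target, "w").close()
os.chmod(target, 0o644)

# A TemporaryDirectory containing a symlink to that target.
d = tempfile.TemporaryDirectory(dir=outer)
os.symlink(target, os.path.join(d.name, "link"))

# Make the directory unreadable to exercise the PermissionError repair
# path in cleanup() (a no-op when running as root).
os.chmod(d.name, 0o000)
d.cleanup()

# The directory and the symlink inside it are gone, but the target file
# and its mode were not touched through the symlink.
assert not os.path.exists(d.name)
assert os.path.exists(target)
assert stat.S_IMODE(os.stat(target).st_mode) == 0o644

os.unlink(target)
os.rmdir(outer)
```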
diff -Nru python3.11-3.11.2/debian/patches/CVE-2024-0450.patch python3.11-3.11.2/debian/patches/CVE-2024-0450.patch
--- python3.11-3.11.2/debian/patches/CVE-2024-0450.patch 1969-12-31 20:00:00.000000000 -0400
+++ python3.11-3.11.2/debian/patches/CVE-2024-0450.patch 2024-05-02 07:59:08.000000000 -0400
@@ -0,0 +1,134 @@
+commit a956e510f6336d5ae111ba429a61c3ade30a7549
+Author: Miss Islington (bot) <31488909+miss-islington@users.noreply.github.com>
+Date: Thu Jan 11 10:24:47 2024 +0100
+
+ [3.11] gh-109858: Protect zipfile from "quoted-overlap" zipbomb (GH-110016) (GH-113913)
+
+ Raise BadZipFile when try to read an entry that overlaps with other entry or
+ central directory.
+ (cherry picked from commit 66363b9a7b9fe7c99eba3a185b74c5fdbf842eba)
+
+ Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
+
+Fixes: CVE-2024-0450
+
+Bug-Debian: https://bugs.debian.org/1070133
+Bug-Upstream: https://github.com/python/cpython/issues/109858
+
+--- a/Lib/test/test_zipfile.py
++++ b/Lib/test/test_zipfile.py
+@@ -2063,6 +2063,66 @@
+ with zipfile.ZipFile(zip_file) as zf:
+ self.assertRaises(RuntimeError, zf.extract, 'a.txt')
+
++ @requires_zlib()
++ def test_full_overlap(self):
++ data = (
++ b'PK\x03\x04\x14\x00\x00\x00\x08\x00\xa0lH\x05\xe2\x1e'
++ b'8\xbb\x10\x00\x00\x00\t\x04\x00\x00\x01\x00\x00\x00a\xed'
++ b'\xc0\x81\x08\x00\x00\x00\xc00\xd6\xfbK\\d\x0b`P'
++ b'K\x01\x02\x14\x00\x14\x00\x00\x00\x08\x00\xa0lH\x05\xe2'
++ b'\x1e8\xbb\x10\x00\x00\x00\t\x04\x00\x00\x01\x00\x00\x00\x00'
++ b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00aPK'
++ b'\x01\x02\x14\x00\x14\x00\x00\x00\x08\x00\xa0lH\x05\xe2\x1e'
++ b'8\xbb\x10\x00\x00\x00\t\x04\x00\x00\x01\x00\x00\x00\x00\x00'
++ b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00bPK\x05'
++ b'\x06\x00\x00\x00\x00\x02\x00\x02\x00^\x00\x00\x00/\x00\x00'
++ b'\x00\x00\x00'
++ )
++ with zipfile.ZipFile(io.BytesIO(data), 'r') as zipf:
++ self.assertEqual(zipf.namelist(), ['a', 'b'])
++ zi = zipf.getinfo('a')
++ self.assertEqual(zi.header_offset, 0)
++ self.assertEqual(zi.compress_size, 16)
++ self.assertEqual(zi.file_size, 1033)
++ zi = zipf.getinfo('b')
++ self.assertEqual(zi.header_offset, 0)
++ self.assertEqual(zi.compress_size, 16)
++ self.assertEqual(zi.file_size, 1033)
++ self.assertEqual(len(zipf.read('a')), 1033)
++ with self.assertRaisesRegex(zipfile.BadZipFile, 'File name.*differ'):
++ zipf.read('b')
++
++ @requires_zlib()
++ def test_quoted_overlap(self):
++ data = (
++ b'PK\x03\x04\x14\x00\x00\x00\x08\x00\xa0lH\x05Y\xfc'
++ b'8\x044\x00\x00\x00(\x04\x00\x00\x01\x00\x00\x00a\x00'
++ b'\x1f\x00\xe0\xffPK\x03\x04\x14\x00\x00\x00\x08\x00\xa0l'
++ b'H\x05\xe2\x1e8\xbb\x10\x00\x00\x00\t\x04\x00\x00\x01\x00'
++ b'\x00\x00b\xed\xc0\x81\x08\x00\x00\x00\xc00\xd6\xfbK\\'
++ b'd\x0b`PK\x01\x02\x14\x00\x14\x00\x00\x00\x08\x00\xa0'
++ b'lH\x05Y\xfc8\x044\x00\x00\x00(\x04\x00\x00\x01'
++ b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
++ b'\x00aPK\x01\x02\x14\x00\x14\x00\x00\x00\x08\x00\xa0l'
++ b'H\x05\xe2\x1e8\xbb\x10\x00\x00\x00\t\x04\x00\x00\x01\x00'
++ b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00$\x00\x00\x00'
++ b'bPK\x05\x06\x00\x00\x00\x00\x02\x00\x02\x00^\x00\x00'
++ b'\x00S\x00\x00\x00\x00\x00'
++ )
++ with zipfile.ZipFile(io.BytesIO(data), 'r') as zipf:
++ self.assertEqual(zipf.namelist(), ['a', 'b'])
++ zi = zipf.getinfo('a')
++ self.assertEqual(zi.header_offset, 0)
++ self.assertEqual(zi.compress_size, 52)
++ self.assertEqual(zi.file_size, 1064)
++ zi = zipf.getinfo('b')
++ self.assertEqual(zi.header_offset, 36)
++ self.assertEqual(zi.compress_size, 16)
++ self.assertEqual(zi.file_size, 1033)
++ with self.assertRaisesRegex(zipfile.BadZipFile, 'Overlapped entries'):
++ zipf.read('a')
++ self.assertEqual(len(zipf.read('b')), 1033)
++
+ def tearDown(self):
+ unlink(TESTFN)
+ unlink(TESTFN2)
+--- a/Lib/zipfile.py
++++ b/Lib/zipfile.py
+@@ -365,6 +365,7 @@
+ 'compress_size',
+ 'file_size',
+ '_raw_time',
++ '_end_offset',
+ )
+
+ def __init__(self, filename="NoName", date_time=(1980,1,1,0,0,0)):
+@@ -406,6 +407,7 @@
+ self.external_attr = 0 # External file attributes
+ self.compress_size = 0 # Size of the compressed file
+ self.file_size = 0 # Size of the uncompressed file
++ self._end_offset = None # Start of the next local header or central directory
+ # Other attributes are set by class ZipFile:
+ # header_offset Byte offset to the file header
+ # CRC CRC-32 of the uncompressed file
+@@ -1434,6 +1436,12 @@
+ if self.debug > 2:
+ print("total", total)
+
++ end_offset = self.start_dir
++ for zinfo in sorted(self.filelist,
++ key=lambda zinfo: zinfo.header_offset,
++ reverse=True):
++ zinfo._end_offset = end_offset
++ end_offset = zinfo.header_offset
+
+ def namelist(self):
+ """Return a list of file names in the archive."""
+@@ -1587,6 +1595,10 @@
+ 'File name in directory %r and header %r differ.'
+ % (zinfo.orig_filename, fname))
+
++ if (zinfo._end_offset is not None and
++ zef_file.tell() + zinfo.compress_size > zinfo._end_offset):
++ raise BadZipFile(f"Overlapped entries: {zinfo.orig_filename!r} (possible zip bomb)")
++
+ # check for encrypted flag & handle password
+ is_encrypted = zinfo.flag_bits & _MASK_ENCRYPTED
+ if is_encrypted:
+--- /dev/null
++++ b/Misc/NEWS.d/next/Library/2023-09-28-13-15-51.gh-issue-109858.43e2dg.rst
+@@ -0,0 +1,3 @@
++Protect :mod:`zipfile` from "quoted-overlap" zipbomb. It now raises
++BadZipFile when try to read an entry that overlaps with other entry or
++central directory.
diff -Nru python3.11-3.11.2/debian/patches/relfile-nullptr-dereference.patch python3.11-3.11.2/debian/patches/relfile-nullptr-dereference.patch
--- python3.11-3.11.2/debian/patches/relfile-nullptr-dereference.patch 1969-12-31 20:00:00.000000000 -0400
+++ python3.11-3.11.2/debian/patches/relfile-nullptr-dereference.patch 2024-05-02 07:59:08.000000000 -0400
@@ -0,0 +1,57 @@
+From b28f919007439b48a1d00d54134d7b020a683cda Mon Sep 17 00:00:00 2001
+From: Max Bachmann <kontakt@maxbachmann.de>
+Date: Sun, 26 Mar 2023 00:35:00 +0100
+Subject: [PATCH] =?UTF-8?q?[3.11]=20gh-102281:=20Fix=20potential=20nullptr?=
+ =?UTF-8?q?=20dereference=20+=20use=20of=20uninitia=E2=80=A6=20(#103040)?=
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+[3.11] gh-102281: Fix potential nullptr dereference + use of uninitialized memory (gh-102282)
+(cherry picked from commit afa6092ee4260bacf7bc11905466e4c3f8556cbb)
+---
+ .../2023-03-02-13-49-21.gh-issue-102281.QCuu2N.rst | 1 +
+ Modules/getpath.c | 5 ++++-
+ Python/fileutils.c | 6 +++++-
+ 3 files changed, 10 insertions(+), 2 deletions(-)
+ create mode 100644 Misc/NEWS.d/next/Core and Builtins/2023-03-02-13-49-21.gh-issue-102281.QCuu2N.rst
+
+Bug-Upstream: https://github.com/python/cpython/issues/102281
+Origin: upstream, https://github.com/python/cpython/commit/b28f919007439b48a1d00d54134d7b020a683cda
+
+--- a/Modules/getpath.c
++++ b/Modules/getpath.c
+@@ -452,7 +452,10 @@
+ if (s) {
+ *s = L'\0';
+ }
+- path2 = _Py_normpath(_Py_join_relfile(path, resolved), -1);
++ path2 = _Py_join_relfile(path, resolved);
++ if (path2) {
++ path2 = _Py_normpath(path2, -1);
++ }
+ PyMem_RawFree((void *)path);
+ path = path2;
+ }
+--- a/Python/fileutils.c
++++ b/Python/fileutils.c
+@@ -2142,7 +2142,10 @@
+ }
+ assert(wcslen(dirname) < MAXPATHLEN);
+ assert(wcslen(relfile) < MAXPATHLEN - wcslen(dirname));
+- join_relfile(filename, bufsize, dirname, relfile);
++ if (join_relfile(filename, bufsize, dirname, relfile) < 0) {
++ PyMem_RawFree(filename);
++ return NULL;
++ }
+ return filename;
+ }
+
+@@ -2180,6 +2183,7 @@
+ wchar_t *
+ _Py_normpath(wchar_t *path, Py_ssize_t size)
+ {
++ assert(path != NULL);
+ if (!path[0] || size == 0) {
+ return path;
+ }
diff -Nru python3.11-3.11.2/debian/patches/series python3.11-3.11.2/debian/patches/series
--- python3.11-3.11.2/debian/patches/series 2024-03-02 16:28:50.000000000 -0400
+++ python3.11-3.11.2/debian/patches/series 2024-05-02 07:59:08.000000000 -0400
@@ -40,3 +40,11 @@
ntpath-import.diff
shutdown-deadlock.diff
frame_dealloc-crash.diff
+CVE-2024-0450.patch
+CVE-2023-6597.patch
+relfile-nullptr-dereference.patch
+CVE-2023-41105-path-truncation.patch
+CVE-2023-40217-ssl-pre-close-flaw.patch
+CVE-2023-40217-ref-cycle.patch
+CVE-2023-40217-test-reliability.patch
+CVE-2023-24329-strip-control-chars-urlsplit.patch