
Bug#1118547: marked as done (trixie-pu: package dnsdist/1.9.10-1+deb13u1)



Your message dated Sat, 15 Nov 2025 11:21:45 +0000
with message-id <736c7150dc08501cc89945035c406eaf9688e144.camel@adam-barratt.org.uk>
and subject line Closing requests for updates included in 13.2
has caused the Debian Bug report #1118547,
regarding trixie-pu: package dnsdist/1.9.10-1+deb13u1
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case, it is now your responsibility to reopen the
bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
1118547: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1118547
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: release.debian.org
Severity: normal
Tags: trixie
X-Debbugs-Cc: dnsdist@packages.debian.org, security@debian.org
Control: affects -1 + src:dnsdist
User: release.debian.org@packages.debian.org
Usertags: pu

[ Reason ]
Fix CVE-2025-8671 and CVE-2025-30187. Neither is a high-severity issue.

[ Impact ]
Without this update, both security issues remain open in trixie.

[ Tests ]
I've done a manual functional test, but I cannot exercise the
resource-exhaustion issues themselves.

[ Risks ]
The patch is taken directly from upstream.

[ Checklist ]
  [x] *all* changes are documented in the d/changelog
  [x] I reviewed all changes and I approve them
  [x] attach debdiff against the package in (old)stable
  [x] the issue is verified as fixed in unstable

[ Changes ]
Apply the upstream patch fixing both CVEs; a short standalone sketch of
the stream-accounting idea follows, before the full debdiff.
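
To make the mitigation concrete, here is a minimal standalone sketch.
It is illustrative only: the class, member, and function names below are
invented and merely mirror the shape of the patch; this is not dnsdist's
API. The core idea against MadeYouReset (CVE-2025-8671) is that a stream
reset by the client before the server has started sending a response
keeps counting against the per-connection concurrency limit, so resets
can no longer be used to open an effectively unbounded number of
streams:

// Minimal sketch of the MadeYouReset (CVE-2025-8671) mitigation idea.
// NOTE: all names here are invented for illustration; this is not the
// dnsdist code, which lives in dnsdist-nghttp2-in.{cc,hh}.
#include <cstdint>
#include <iostream>
#include <unordered_map>
#include <unordered_set>

struct PendingQuery {
  bool sendingResponse{false}; // set once we have started answering
};

class Connection {
public:
  static constexpr uint32_t kMaxConcurrentStreams{100U};

  // Returns false when the stream must be refused (limit reached).
  bool openStream(int32_t id) {
    if (concurrentStreams() >= kMaxConcurrentStreams) {
      return false; // caller terminates the connection (REFUSED_STREAM)
    }
    return d_currentStreams.emplace(id, PendingQuery{}).second;
  }

  // Client-initiated close/reset: the slot is NOT released unless we
  // were already sending the response for this stream.
  void closeStream(int32_t id) {
    auto it = d_currentStreams.find(id);
    if (it == d_currentStreams.end()) {
      return;
    }
    if (!it->second.sendingResponse) {
      d_killedStreams.insert(id); // still counts toward the limit
    }
    d_currentStreams.erase(it);
  }

  uint32_t concurrentStreams() const {
    // killed-but-unanswered streams keep occupying slots
    return d_currentStreams.size() + d_killedStreams.size();
  }

private:
  std::unordered_map<int32_t, PendingQuery> d_currentStreams;
  std::unordered_set<int32_t> d_killedStreams;
};

int main() {
  Connection conn;
  // An attacker opening and immediately resetting streams no longer
  // frees capacity: after 100 resets the connection refuses new streams.
  for (int32_t id = 1; id <= 201; id += 2) { // client stream IDs are odd
    if (!conn.openStream(id)) {
      std::cout << "refused stream " << id << " with "
                << conn.concurrentStreams() << " tracked streams\n";
      break;
    }
    conn.closeStream(id);
  }
  return 0;
}

This mirrors the d_killedStreams set, the d_sendingResponse flag, and
the getConcurrentStreamsCount() change in the debdiff below.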

diff -Nru dnsdist-1.9.10/debian/changelog dnsdist-1.9.10/debian/changelog
--- dnsdist-1.9.10/debian/changelog	2025-05-21 10:30:17.000000000 +0200
+++ dnsdist-1.9.10/debian/changelog	2025-09-12 10:39:35.000000000 +0200
@@ -1,3 +1,11 @@
+dnsdist (1.9.10-1+deb13u1) trixie; urgency=medium
+
+  * d/{gbp.conf,.gitlab-ci.yml}: setup for trixie
+  * Apply upstream fix for CVE-2025-8671, CVE-2025-30187
+    (Closes: #1115643)
+
+ -- Chris Hofstaedtler <zeha@debian.org>  Fri, 12 Sep 2025 10:39:35 +0200
+
 dnsdist (1.9.10-1) unstable; urgency=medium
 
   * New upstream version 1.9.10 including fix for CVE-2025-30193
diff -Nru dnsdist-1.9.10/debian/gbp.conf dnsdist-1.9.10/debian/gbp.conf
--- dnsdist-1.9.10/debian/gbp.conf	2025-05-21 10:30:03.000000000 +0200
+++ dnsdist-1.9.10/debian/gbp.conf	2025-09-12 10:39:35.000000000 +0200
@@ -2,3 +2,4 @@
 pristine-tar = True
 multimaint-merge = True
 patch-numbers = False
+debian-branch = debian/trixie
diff -Nru dnsdist-1.9.10/debian/.gitlab-ci.yml dnsdist-1.9.10/debian/.gitlab-ci.yml
--- dnsdist-1.9.10/debian/.gitlab-ci.yml	2025-05-21 10:30:03.000000000 +0200
+++ dnsdist-1.9.10/debian/.gitlab-ci.yml	2025-09-12 10:39:35.000000000 +0200
@@ -3,7 +3,7 @@
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/pipeline-jobs.yml
 
 variables:
-  RELEASE: 'unstable'
+  RELEASE: 'trixie'
   SALSA_CI_DISABLE_APTLY: 1
   SALSA_CI_DISABLE_PIUPARTS: 1
   SALSA_CI_DISABLE_REPROTEST: 1
diff -Nru dnsdist-1.9.10/debian/patches/series dnsdist-1.9.10/debian/patches/series
--- dnsdist-1.9.10/debian/patches/series	1970-01-01 01:00:00.000000000 +0100
+++ dnsdist-1.9.10/debian/patches/series	2025-09-12 10:39:35.000000000 +0200
@@ -0,0 +1 @@
+upstream/CVE-2025-8671-CVE-2025-30187-1.9.10.patch
diff -Nru dnsdist-1.9.10/debian/patches/upstream/CVE-2025-8671-CVE-2025-30187-1.9.10.patch dnsdist-1.9.10/debian/patches/upstream/CVE-2025-8671-CVE-2025-30187-1.9.10.patch
--- dnsdist-1.9.10/debian/patches/upstream/CVE-2025-8671-CVE-2025-30187-1.9.10.patch	1970-01-01 01:00:00.000000000 +0100
+++ dnsdist-1.9.10/debian/patches/upstream/CVE-2025-8671-CVE-2025-30187-1.9.10.patch	2025-09-12 10:39:35.000000000 +0200
@@ -0,0 +1,374 @@
+From: Remi Gacogne <remi.gacogne@powerdns.com>
+Date: Thu, 11 Sep 2025 13:38:49 +0200
+Subject: PowerDNS Security Advisory 2025-05 for DNSdist: Denial of service via crafted DoH exchange
+
+While working on adding mitigations against the MadeYouReset (CVE-2025-8671)
+attack, we noticed a potential denial of service in our DNS over HTTPS
+implementation when using the nghttp2 provider: an attacker might be able to
+cause a denial of service by crafting a DoH exchange that triggers an unbounded
+I/O read loop, causing an unexpected consumption of CPU resources. We assigned
+CVE-2025-30187 to this issue.
+
+Bug-Debian: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1115643
+
+
+diff -ruw dnsdist-1.9.10.orig/dnsdist-doh-common.hh dnsdist-1.9.10/dnsdist-doh-common.hh
+--- dnsdist-1.9.10.orig/dnsdist-doh-common.hh	2025-05-20 11:13:25.000000000 +0200
++++ dnsdist-1.9.10/dnsdist-doh-common.hh	2025-09-11 11:09:57.007006314 +0200
+@@ -35,6 +35,8 @@
+ 
+ namespace dnsdist::doh
+ {
++static constexpr uint32_t MAX_INCOMING_CONCURRENT_STREAMS{100U};
++
+ std::optional<PacketBuffer> getPayloadFromPath(const std::string_view& path);
+ }
+ 
+diff -ruw dnsdist-1.9.10.orig/dnsdist-nghttp2-in.cc dnsdist-1.9.10/dnsdist-nghttp2-in.cc
+--- dnsdist-1.9.10.orig/dnsdist-nghttp2-in.cc	2025-05-20 11:13:25.000000000 +0200
++++ dnsdist-1.9.10/dnsdist-nghttp2-in.cc	2025-09-11 11:10:21.628205731 +0200
+@@ -288,7 +288,7 @@
+ 
+ void IncomingHTTP2Connection::handleConnectionReady()
+ {
+-  constexpr std::array<nghttp2_settings_entry, 1> settings{{{NGHTTP2_SETTINGS_MAX_CONCURRENT_STREAMS, 100U}}};
++  constexpr std::array<nghttp2_settings_entry, 1> settings{{{NGHTTP2_SETTINGS_MAX_CONCURRENT_STREAMS, dnsdist::doh::MAX_INCOMING_CONCURRENT_STREAMS}}};
+   auto ret = nghttp2_submit_settings(d_session.get(), NGHTTP2_FLAG_NONE, settings.data(), settings.size());
+   if (ret != 0) {
+     throw std::runtime_error("Fatal error: " + std::string(nghttp2_strerror(ret)));
+@@ -440,6 +440,24 @@
+       if (nghttp2_session_want_read(d_session.get()) != 0) {
+         updateIO(IOState::NeedRead, handleReadableIOCallback);
+       }
++      else {
++        if (getConcurrentStreamsCount() == 0) {
++          d_connectionDied = true;
++          stopIO();
++        }
++        else {
++          updateIO(IOState::Done, handleReadableIOCallback);
++        }
++      }
++    }
++    else {
++      if (getConcurrentStreamsCount() == 0) {
++        d_connectionDied = true;
++        stopIO();
++      }
++      else {
++        updateIO(IOState::Done, handleReadableIOCallback);
++      }
+     }
+   }
+   catch (const std::exception& e) {
+@@ -547,12 +565,22 @@
+   NGHTTP2Headers::addCustomDynamicHeader(headers, name, value);
+ }
+ 
++std::unordered_map<IncomingHTTP2Connection::StreamID, IncomingHTTP2Connection::PendingQuery>::iterator IncomingHTTP2Connection::getStreamContext(StreamID streamID)
++{
++  auto streamIt = d_currentStreams.find(streamID);
++  if (streamIt == d_currentStreams.end()) {
++    /* it might have been closed by the remote end in the meantime */
++    d_killedStreams.erase(streamID);
++  }
++  return streamIt;
++}
++
+ IOState IncomingHTTP2Connection::sendResponse(const struct timeval& now, TCPResponse&& response)
+ {
+   if (response.d_idstate.d_streamID == -1) {
+     throw std::runtime_error("Invalid DoH stream ID while sending response");
+   }
+-  auto streamIt = d_currentStreams.find(response.d_idstate.d_streamID);
++  auto streamIt = getStreamContext(response.d_idstate.d_streamID);
+   if (streamIt == d_currentStreams.end()) {
+     /* it might have been closed by the remote end in the meantime */
+     return hasPendingWrite() ? IOState::NeedWrite : IOState::Done;
+@@ -592,7 +620,7 @@
+     throw std::runtime_error("Invalid DoH stream ID while handling I/O error notification");
+   }
+ 
+-  auto streamIt = d_currentStreams.find(response.d_idstate.d_streamID);
++  auto streamIt = getStreamContext(response.d_idstate.d_streamID);
+   if (streamIt == d_currentStreams.end()) {
+     /* it might have been closed by the remote end in the meantime */
+     return;
+@@ -735,17 +763,18 @@
+     NGHTTP2Headers::addCustomDynamicHeader(headers, key, value);
+   }
+ 
++  context.d_sendingResponse = true;
+   auto ret = nghttp2_submit_response(d_session.get(), streamID, headers.data(), headers.size(), &data_provider);
+   if (ret != 0) {
+-    d_currentStreams.erase(streamID);
+     vinfolog("Error submitting HTTP response for stream %d: %s", streamID, nghttp2_strerror(ret));
++    d_currentStreams.erase(streamID);
+     return false;
+   }
+ 
+   ret = nghttp2_session_send(d_session.get());
+   if (ret != 0) {
+-    d_currentStreams.erase(streamID);
+     vinfolog("Error flushing HTTP response for stream %d: %s", streamID, nghttp2_strerror(ret));
++    d_currentStreams.erase(streamID);
+     return false;
+   }
+ 
+@@ -921,7 +950,7 @@
+   /* is this the last frame for this stream? */
+   if ((frame->hd.type == NGHTTP2_HEADERS || frame->hd.type == NGHTTP2_DATA) && (frame->hd.flags & NGHTTP2_FLAG_END_STREAM) != 0) {
+     auto streamID = frame->hd.stream_id;
+-    auto stream = conn->d_currentStreams.find(streamID);
++    auto stream = conn->getStreamContext(streamID);
+     if (stream != conn->d_currentStreams.end()) {
+       conn->handleIncomingQuery(std::move(stream->second), streamID);
+     }
+@@ -941,7 +970,16 @@
+ {
+   auto* conn = static_cast<IncomingHTTP2Connection*>(user_data);
+ 
+-  conn->d_currentStreams.erase(stream_id);
++  auto streamIt = conn->d_currentStreams.find(stream_id);
++  if (streamIt == conn->d_currentStreams.end()) {
++    return 0;
++  }
++
++  if (!streamIt->second.d_sendingResponse) {
++    conn->d_killedStreams.emplace(stream_id);
++  }
++
++  conn->d_currentStreams.erase(streamIt);
+   return 0;
+ }
+ 
+@@ -952,20 +990,29 @@
+   }
+ 
+   auto* conn = static_cast<IncomingHTTP2Connection*>(user_data);
+-  auto insertPair = conn->d_currentStreams.emplace(frame->hd.stream_id, PendingQuery());
+-  if (!insertPair.second) {
+-    /* there is a stream ID collision, something is very wrong! */
+-    vinfolog("Stream ID collision (%d) on connection from %d", frame->hd.stream_id, conn->d_ci.remote.toStringWithPort());
+-    conn->d_connectionClosing = true;
+-    conn->d_needFlush = true;
+-    nghttp2_session_terminate_session(conn->d_session.get(), NGHTTP2_NO_ERROR);
+-    auto ret = nghttp2_session_send(conn->d_session.get());
++  auto close_connection = [](IncomingHTTP2Connection* connection, int32_t streamID, const ComboAddress& remote) -> int {
++    connection->d_connectionClosing = true;
++    connection->d_needFlush = true;
++    nghttp2_session_terminate_session(connection->d_session.get(), NGHTTP2_REFUSED_STREAM);
++    auto ret = nghttp2_session_send(connection->d_session.get());
+     if (ret != 0) {
+-      vinfolog("Error flushing HTTP response for stream %d from %s: %s", frame->hd.stream_id, conn->d_ci.remote.toStringWithPort(), nghttp2_strerror(ret));
++      vinfolog("Error flushing HTTP response for stream %d from %s: %s", streamID, remote.toStringWithPort(), nghttp2_strerror(ret));
+       return NGHTTP2_ERR_CALLBACK_FAILURE;
+     }
+ 
+     return 0;
++  };
++
++  if (conn->getConcurrentStreamsCount() >= dnsdist::doh::MAX_INCOMING_CONCURRENT_STREAMS) {
++    vinfolog("Too many concurrent streams on connection from %s", conn->d_ci.remote.toStringWithPort());
++    return close_connection(conn, frame->hd.stream_id, conn->d_ci.remote);
++  }
++
++  auto insertPair = conn->d_currentStreams.emplace(frame->hd.stream_id, PendingQuery());
++  if (!insertPair.second) {
++    /* there is a stream ID collision, something is very wrong! */
++    vinfolog("Stream ID collision (%d) on connection from %s", frame->hd.stream_id, conn->d_ci.remote.toStringWithPort());
++    return close_connection(conn, frame->hd.stream_id, conn->d_ci.remote);
+   }
+ 
+   return 0;
+@@ -1002,7 +1049,7 @@
+       return nameLen == expected.size() && memcmp(name, expected.data(), expected.size()) == 0;
+     };
+ 
+-    auto stream = conn->d_currentStreams.find(frame->hd.stream_id);
++    auto stream = conn->getStreamContext(frame->hd.stream_id);
+     if (stream == conn->d_currentStreams.end()) {
+       vinfolog("Unable to match the stream ID %d to a known one!", frame->hd.stream_id);
+       return NGHTTP2_ERR_CALLBACK_FAILURE;
+@@ -1065,7 +1112,7 @@
+ int IncomingHTTP2Connection::on_data_chunk_recv_callback(nghttp2_session* session, uint8_t flags, IncomingHTTP2Connection::StreamID stream_id, const uint8_t* data, size_t len, void* user_data)
+ {
+   auto* conn = static_cast<IncomingHTTP2Connection*>(user_data);
+-  auto stream = conn->d_currentStreams.find(stream_id);
++  auto stream = conn->getStreamContext(stream_id);
+   if (stream == conn->d_currentStreams.end()) {
+     vinfolog("Unable to match the stream ID %d to a known one!", stream_id);
+     return NGHTTP2_ERR_CALLBACK_FAILURE;
+@@ -1155,7 +1202,7 @@
+ 
+ uint32_t IncomingHTTP2Connection::getConcurrentStreamsCount() const
+ {
+-  return d_currentStreams.size();
++  return d_currentStreams.size() + d_killedStreams.size();
+ }
+ 
+ boost::optional<struct timeval> IncomingHTTP2Connection::getIdleClientReadTTD(struct timeval now) const
+@@ -1208,6 +1255,9 @@
+     ttd = getClientWriteTTD(now);
+     d_ioState->update(newState, callback, shared, ttd);
+   }
++  else if (newState == IOState::Done) {
++    d_ioState->reset();
++  }
+ }
+ 
+ void IncomingHTTP2Connection::handleIOError()
+@@ -1217,6 +1267,7 @@
+   d_outPos = 0;
+   nghttp2_session_terminate_session(d_session.get(), NGHTTP2_PROTOCOL_ERROR);
+   d_currentStreams.clear();
++  d_killedStreams.clear();
+   stopIO();
+ }
+ 
+diff -ruw dnsdist-1.9.10.orig/dnsdist-nghttp2-in.hh dnsdist-1.9.10/dnsdist-nghttp2-in.hh
+--- dnsdist-1.9.10.orig/dnsdist-nghttp2-in.hh	2025-05-20 11:13:25.000000000 +0200
++++ dnsdist-1.9.10/dnsdist-nghttp2-in.hh	2025-09-11 11:10:04.764742240 +0200
+@@ -55,6 +55,7 @@
+     size_t d_queryPos{0};
+     uint32_t d_statusCode{0};
+     Method d_method{Method::Unknown};
++    bool d_sendingResponse{false};
+   };
+ 
+   IncomingHTTP2Connection(ConnectionInfo&& connectionInfo, TCPClientThreadData& threadData, const struct timeval& now);
+@@ -86,6 +87,7 @@
+   std::unique_ptr<DOHUnitInterface> getDOHUnit(uint32_t streamID) override;
+ 
+   void stopIO();
++  std::unordered_map<StreamID, PendingQuery>::iterator getStreamContext(StreamID streamID);
+   uint32_t getConcurrentStreamsCount() const;
+   void updateIO(IOState newState, const FDMultiplexer::callbackfunc_t& callback);
+   void handleIOError();
+@@ -101,6 +103,7 @@
+ 
+   std::unique_ptr<nghttp2_session, decltype(&nghttp2_session_del)> d_session{nullptr, nghttp2_session_del};
+   std::unordered_map<StreamID, PendingQuery> d_currentStreams;
++  std::unordered_set<StreamID> d_killedStreams;
+   PacketBuffer d_out;
+   PacketBuffer d_in;
+   size_t d_outPos{0};
+diff -ruw dnsdist-1.9.10.orig/doh.cc dnsdist-1.9.10/doh.cc
+--- dnsdist-1.9.10.orig/doh.cc	2025-05-20 11:13:25.000000000 +0200
++++ dnsdist-1.9.10/doh.cc	2025-09-11 11:10:16.325285812 +0200
+@@ -313,6 +313,7 @@
+   struct timeval d_connectionStartTime{0, 0};
+   size_t d_nbQueries{0};
+   int d_desc{-1};
++  uint8_t d_concurrentStreams{0};
+ };
+ 
+ static thread_local std::unordered_map<int, DOHConnection> t_conns;
+@@ -386,6 +387,17 @@
+   return reasonIt->second;
+ }
+ 
++static DOHConnection* getConnectionFromQuery(const h2o_req_t* req)
++{
++  h2o_socket_t* sock = req->conn->callbacks->get_socket(req->conn);
++  const int descriptor = h2o_socket_get_fd(sock);
++  if (descriptor == -1) {
++    /* this should not happen, but let's not crash on it */
++    return nullptr;
++  }
++  return &t_conns.at(descriptor);
++}
++
+ /* Always called from the main DoH thread */
+ static void handleResponse(DOHFrontend& dohFrontend, st_h2o_req_t* req, uint16_t statusCode, const PacketBuffer& response, const std::unordered_map<std::string, std::string>& customResponseHeaders, const std::string& contentType, bool addContentType)
+ {
+@@ -461,6 +473,10 @@
+ 
+     ++dohFrontend.d_errorresponses;
+   }
++
++  if (auto* conn = getConnectionFromQuery(req)) {
++    --conn->d_concurrentStreams;
++  }
+ }
+ 
+ static std::unique_ptr<DOHUnit> getDUFromIDS(InternalQueryState& ids)
+@@ -918,6 +934,8 @@
+    via a pipe */
+ static void doh_dispatch_query(DOHServerConfig* dsc, h2o_handler_t* self, h2o_req_t* req, PacketBuffer&& query, const ComboAddress& local, const ComboAddress& remote, std::string&& path)
+ {
++  auto* conn = getConnectionFromQuery(req);
++
+   try {
+     /* we only parse it there as a sanity check, we will parse it again later */
+     // NOLINTNEXTLINE(cppcoreguidelines-pro-type-reinterpret-cast)
+@@ -949,6 +967,9 @@
+       }
+     }
+ 
++    if (conn != nullptr) {
++      ++conn->d_concurrentStreams;
++    }
+ #ifdef HAVE_H2O_SOCKET_GET_SSL_SERVER_NAME
+     h2o_socket_t* sock = req->conn->callbacks->get_socket(req->conn);
+     const char * sni = h2o_socket_get_ssl_server_name(sock);
+@@ -966,17 +987,26 @@
+       if (!dsc->d_querySender.send(std::move(dohUnit))) {
+         ++dnsdist::metrics::g_stats.dohQueryPipeFull;
+         vinfolog("Unable to pass a DoH query to the DoH worker thread because the pipe is full");
++        if (conn != nullptr) {
++          --conn->d_concurrentStreams;
++        }
+         h2o_send_error_500(req, "Internal Server Error", "Internal Server Error", 0);
+       }
+     }
+     catch (...) {
+       vinfolog("Unable to pass a DoH query to the DoH worker thread because we couldn't write to the pipe: %s", stringerror());
++      if (conn != nullptr) {
++        --conn->d_concurrentStreams;
++      }
+       h2o_send_error_500(req, "Internal Server Error", "Internal Server Error", 0);
+     }
+ #endif /* USE_SINGLE_ACCEPTOR_THREAD */
+   }
+   catch (const std::exception& e) {
+     vinfolog("Had error parsing DoH DNS packet from %s: %s", remote.toStringWithPort(), e.what());
++    if (conn != nullptr) {
++      --conn->d_concurrentStreams;
++    }
+     h2o_send_error_400(req, "Bad Request", "The DNS query could not be parsed", 0);
+   }
+ }
+@@ -1046,15 +1076,19 @@
+     }
+     // NOLINTNEXTLINE(cppcoreguidelines-pro-bounds-pointer-arithmetic): h2o API
+     auto* dsc = static_cast<DOHServerConfig*>(req->conn->ctx->storage.entries[0].data);
+-    h2o_socket_t* sock = req->conn->callbacks->get_socket(req->conn);
+-
+-    const int descriptor = h2o_socket_get_fd(sock);
+-    if (descriptor == -1) {
++    auto* connPtr = getConnectionFromQuery(req);
++    if (connPtr == nullptr) {
++      return 0;
++    }
++    auto& conn = *connPtr;
++    if (conn.d_concurrentStreams >= dnsdist::doh::MAX_INCOMING_CONCURRENT_STREAMS) {
++      vinfolog("Too many concurrent streams on connection from %d", conn.d_remote.toStringWithPort());
+       return 0;
+     }
+ 
+-    auto& conn = t_conns.at(descriptor);
+     ++conn.d_nbQueries;
++
++    h2o_socket_t* sock = req->conn->callbacks->get_socket(req->conn);
+     if (conn.d_nbQueries == 1) {
+       if (h2o_socket_get_ssl_session_reused(sock) == 0) {
+         ++dsc->clientState->tlsNewSessions;
+@@ -1121,6 +1155,7 @@
+       for (const auto& entry : *responsesMap) {
+         if (entry->matches(path)) {
+           const auto& customHeaders = entry->getHeaders();
++          ++conn.d_concurrentStreams;
+           handleResponse(*dsc->dohFrontend, req, entry->getStatusCode(), entry->getContent(), customHeaders ? *customHeaders : dsc->dohFrontend->d_customResponseHeaders, std::string(), false);
+           return 0;
+         }

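For context on the CVE-2025-30187 half of the patch above: before the
fix, the read path could re-arm the readable callback even when
nghttp2_session_want_read() had returned false, so a crafted client
that kept the socket readable without ever sending consumable data
could spin the event loop indefinitely. A minimal standalone sketch of
the fixed decision logic follows (illustrative only: Session,
Connection, and IOState are invented stand-ins for dnsdist's I/O
machinery, matching the new branches in the dnsdist-nghttp2-in.cc read
hunk):

// Minimal sketch of the CVE-2025-30187 fix: stop (or park) I/O when the
// HTTP/2 session no longer wants to read, instead of re-arming the read
// callback forever. NOTE: Session, Connection, and IOState here are
// invented stand-ins, not dnsdist's actual types.
#include <cstdint>
#include <iostream>

enum class IOState : uint8_t { NeedRead, Done };

struct Session {
  bool wantRead{false};          // stands in for nghttp2_session_want_read()
  uint32_t concurrentStreams{0}; // active plus killed-but-unanswered streams
};

struct Connection {
  Session session;
  bool died{false};
  IOState io{IOState::Done};

  void stopIO() { io = IOState::Done; }
  void updateIO(IOState state) { io = state; }

  // Called after feeding freshly received bytes to the session.
  void afterRead() {
    if (session.wantRead) {
      updateIO(IOState::NeedRead); // the session expects more data
      return;
    }
    if (session.concurrentStreams == 0) {
      died = true; // nothing in flight, nothing to read: close the
      stopIO();    // connection rather than spinning on the socket
    }
    else {
      updateIO(IOState::Done); // responses pending: drop read interest
    }
  }
};

int main() {
  Connection conn; // a client that sent unconsumable data and nothing else
  conn.afterRead();
  std::cout << "connection closed: " << std::boolalpha << conn.died << '\n';
  return 0;
}

Together with the stream accounting above, this bounds both the number
of tracked streams and the CPU spent on a connection that will never
produce work.
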
--- End Message ---
--- Begin Message ---
Package: release.debian.org
Version: 13.2

Hi,

The updates referenced in each of these bugs were included in today's
13.2 trixie point release.

Regards,

Adam

--- End Message ---
