
Bug#1033945: marked as done (unblock: pdns-recursor/4.8.4-1 [pre-approval])



Your message dated Thu, 06 Apr 2023 06:54:26 +0000
with message-id <E1pkJVq-006Wi8-RB@respighi.debian.org>
and subject line unblock pdns-recursor
has caused the Debian Bug report #1033945,
regarding unblock: pdns-recursor/4.8.4-1 [pre-approval]
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case it is now your responsibility to reopen the
Bug report if necessary, and/or fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact owner@bugs.debian.org
immediately.)


-- 
1033945: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1033945
Debian Bug Tracking System
Contact owner@bugs.debian.org with problems
--- Begin Message ---
Package: release.debian.org
Severity: normal
User: release.debian.org@packages.debian.org
Usertags: unblock
X-Debbugs-Cc: Debian Security Team <team@security.debian.org>

Please unblock package pdns-recursor

[ Reason ]

I would like to update pdns-recursor 4.8.2 to 4.8.4, to:
- fix CVE-2023-26437, the sole change in 4.8.4
- pick up the fixes for the resolving/validation logic from 4.8.3.

While this is a new upstream release, it contains no new features,
only bugfixes.

In previous Debian releases, applying security fixes to pdns-recursor
was often problematic when the resolve/validation logic had to change.
This part of the code is long and complicated, well understood only by
DNS experts, and in constant flux to keep up with the realities of the
living Internet.
Security fixes inevitably touch this code, and stacking patches on top
of each other in the same code paths often does not work without
importing all the intermediate changes.
We are certainly in no better position to judge these code parts than
upstream is.

[ Impact ]
The security bug is fixed, and applying future security patches will be easier.

[ Tests ]
The resolve/validation logic is exercised by a build-time test suite.
I have also tested the package manually, although the security problem
itself is obviously not easy to reproduce.

[ Risks ]
If the package is not unblocked: an open security bug remains in
bookworm, and applying future security patches will be harder or
impossible.

[ Checklist ]
  [x] all changes are documented in the d/changelog
  [x] I reviewed all changes and I approve them
  [x] attach debdiff against the package in testing

[ Other info ]
Another upstream fix included in 4.8.3 downgrades the log level of some
common log messages, so they no longer spam the journal at "error"
severity.

The debdiff was produced with the following command, which ignores
generated files and the public suffix list (which our packages do not
use by default at runtime):
debdiff pdns-recursor_4.8.2-1.dsc pdns-recursor_4.8.4-1.dsc | filterdiff -x '*/pubsuffix.cc' -x '*/effective_tld_names.dat' -x '*/*.1' -x '*/configure'

This is a pre-approval request; I have not uploaded yet.


unblock pdns-recursor/4.8.4-1
diff -Nru pdns-recursor-4.8.2/configure.ac pdns-recursor-4.8.4/configure.ac
--- pdns-recursor-4.8.2/configure.ac	2023-01-30 09:58:04.000000000 +0000
+++ pdns-recursor-4.8.4/configure.ac	2023-03-27 15:09:19.000000000 +0000
@@ -1,6 +1,6 @@
 AC_PREREQ([2.69])
 
-AC_INIT([pdns-recursor], [4.8.2])
+AC_INIT([pdns-recursor], [4.8.4])
 AC_CONFIG_AUX_DIR([build-aux])
 AM_INIT_AUTOMAKE([foreign dist-bzip2 no-dist-gzip tar-ustar -Wno-portability subdir-objects parallel-tests 1.11])
 AM_SILENT_RULES([yes])
diff -Nru pdns-recursor-4.8.2/debian/changelog pdns-recursor-4.8.4/debian/changelog
--- pdns-recursor-4.8.2/debian/changelog	2023-01-31 16:46:42.000000000 +0000
+++ pdns-recursor-4.8.4/debian/changelog	2023-04-04 11:10:26.000000000 +0000
@@ -1,3 +1,16 @@
+pdns-recursor (4.8.4-1) unstable; urgency=medium
+
+  * New upstream version 4.8.4
+    * Fixes CVE-2023-26437, see
+      https://doc.powerdns.com/recursor/security-advisories/powerdns-advisory-2023-02.html
+      (Closes: #1033941)
+    * Fixes high CPU usage caused by serve-stale logic.
+    * Fixes DNSSEC validation issues for some domains served by popular
+      DNS software by F5.
+    * Downgrades severity for a few log messages.
+
+ -- Chris Hofstaedtler <zeha@debian.org>  Tue, 04 Apr 2023 11:10:26 +0000
+
 pdns-recursor (4.8.2-1) unstable; urgency=medium
 
   * New upstream version 4.8.2
diff -Nru pdns-recursor-4.8.2/negcache.cc pdns-recursor-4.8.4/negcache.cc
--- pdns-recursor-4.8.2/negcache.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/negcache.cc	2023-03-27 15:08:37.000000000 +0000
@@ -119,27 +119,32 @@
 
   const auto& idx = content->d_map.get<NegCacheEntry>();
   auto range = idx.equal_range(qname);
-  auto ni = range.first;
 
-  while (ni != range.second) {
+  for (auto ni = range.first; ni != range.second; ++ni) {
     // We have an entry
     if ((!typeMustMatch && ni->d_qtype == QType::ENT) || ni->d_qtype == qtype) {
       // We match the QType or the whole name is denied
       auto firstIndexIterator = content->d_map.project<CompositeKey>(ni);
 
-      if (!refresh && (serveStale || ni->d_servedStale > 0) && ni->d_ttd <= now.tv_sec && ni->d_servedStale < s_maxServedStaleExtensions) {
+      // this checks ttd, but also takes into account serve-stale
+      if (!ni->isEntryUsable(now.tv_sec, serveStale)) {
+        // Outdated
+        moveCacheItemToFront<SequenceTag>(content->d_map, firstIndexIterator);
+        continue;
+      }
+      // If we are serving this record stale (or *should*) and the ttd has passed increase ttd to
+      // the future and remember that we did. Also push a refresh task.
+      if ((serveStale || ni->d_servedStale > 0) && ni->d_ttd <= now.tv_sec && ni->d_servedStale < s_maxServedStaleExtensions) {
         updateStaleEntry(now.tv_sec, firstIndexIterator, qtype);
       }
-      if (now.tv_sec < ni->d_ttd && !(refresh && ni->d_servedStale > 0)) {
+      if (now.tv_sec < ni->d_ttd) {
         // Not expired
         ne = *ni;
         moveCacheItemToBack<SequenceTag>(content->d_map, firstIndexIterator);
-        return true;
+        // when refreshing, we consider served-stale entries outdated
+        return !(refresh && ni->d_servedStale > 0);
       }
-      // expired
-      moveCacheItemToFront<SequenceTag>(content->d_map, firstIndexIterator);
     }
-    ++ni;
   }
   return false;
 }
@@ -242,7 +247,7 @@
   return ret;
 }
 
-size_t NegCache::wipe(const DNSName& qname, QType qtype)
+size_t NegCache::wipeTyped(const DNSName& qname, QType qtype)
 {
   size_t ret = 0;
   auto& map = getMap(qname);
diff -Nru pdns-recursor-4.8.2/negcache.hh pdns-recursor-4.8.4/negcache.hh
--- pdns-recursor-4.8.2/negcache.hh	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/negcache.hh	2023-03-27 15:08:37.000000000 +0000
@@ -84,6 +84,12 @@
         return d_ttd < now;
       }
     };
+
+    bool isEntryUsable(time_t now, bool serveStale) const
+    {
+      // When serving stale, we consider expired records
+      return d_ttd > now || serveStale || d_servedStale != 0;
+    }
   };
 
   void add(const NegCacheEntry& ne);
@@ -96,7 +102,7 @@
   void clear();
   size_t doDump(int fd, size_t maxCacheEntries);
   size_t wipe(const DNSName& name, bool subtree = false);
-  size_t wipe(const DNSName& name, QType qtype);
+  size_t wipeTyped(const DNSName& name, QType qtype);
   size_t size() const;
 
 private:
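
To make the new semantics concrete: isEntryUsable() considers a
negative-cache entry usable when it has not yet expired, or when
serve-stale applies (either requested for this lookup, or already in
effect for the entry, d_servedStale != 0). A minimal standalone sketch
of that truth table, using a hypothetical simplified Entry struct
rather than the real NegCacheEntry:

  #include <cassert>
  #include <ctime>

  // Hypothetical stand-in for the fields isEntryUsable() looks at.
  struct Entry {
    time_t d_ttd;      // absolute expiry time ("time to die")
    int d_servedStale; // how often this entry was already served stale

    bool isEntryUsable(time_t now, bool serveStale) const {
      // Usable if not yet expired, or if serve-stale applies.
      return d_ttd > now || serveStale || d_servedStale != 0;
    }
  };

  int main() {
    const time_t now = 1000;
    Entry fresh{now + 60, 0};
    Entry expired{now - 60, 0};
    Entry expiredStale{now - 60, 1};
    assert(fresh.isEntryUsable(now, false));        // within TTL
    assert(!expired.isEntryUsable(now, false));     // expired, no serve-stale
    assert(expired.isEntryUsable(now, true));       // serve-stale requested
    assert(expiredStale.isEntryUsable(now, false)); // already served stale
    return 0;
  }
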
diff -Nru pdns-recursor-4.8.2/pdns_recursor.cc pdns-recursor-4.8.4/pdns_recursor.cc
--- pdns-recursor-4.8.2/pdns_recursor.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/pdns_recursor.cc	2023-03-27 15:08:37.000000000 +0000
@@ -2688,35 +2688,40 @@
 static void handleUDPServerResponse(int fd, FDMultiplexer::funcparam_t& var)
 {
   std::shared_ptr<PacketID> pid = boost::any_cast<std::shared_ptr<PacketID>>(var);
-  ssize_t len;
   PacketBuffer packet;
   packet.resize(g_outgoingEDNSBufsize);
   ComboAddress fromaddr;
   socklen_t addrlen = sizeof(fromaddr);
 
-  len = recvfrom(fd, &packet.at(0), packet.size(), 0, (sockaddr*)&fromaddr, &addrlen);
+  ssize_t len = recvfrom(fd, &packet.at(0), packet.size(), 0, reinterpret_cast<sockaddr*>(&fromaddr), &addrlen);
 
-  if (len < (ssize_t)sizeof(dnsheader)) {
-    if (len < 0)
-      ; //      cerr<<"Error on fd "<<fd<<": "<<stringerror()<<"\n";
-    else {
-      g_stats.serverParseError++;
-      if (g_logCommonErrors)
-        SLOG(g_log << Logger::Error << "Unable to parse packet from remote UDP server " << fromaddr.toString() << ": packet smaller than DNS header" << endl,
-             g_slogout->info(Logr::Error, "Unable to parse packet from remote UDP server", "from", Logging::Loggable(fromaddr)));
-    }
+  const ssize_t signed_sizeof_sdnsheader = sizeof(dnsheader);
 
+  if (len < 0) {
+    // len < 0: error on socket
     t_udpclientsocks->returnSocket(fd);
-    PacketBuffer empty;
 
+    PacketBuffer empty;
     MT_t::waiters_t::iterator iter = MT->d_waiters.find(pid);
-    if (iter != MT->d_waiters.end())
+    if (iter != MT->d_waiters.end()) {
       doResends(iter, pid, empty);
+    }
+    MT->sendEvent(pid, &empty); // this denotes error (does retry lookup using other NS)
+    return;
+  }
 
-    MT->sendEvent(pid, &empty); // this denotes error (does lookup again.. at least L1 will be hot)
+  if (len < signed_sizeof_sdnsheader) {
+    // We have received a packet that cannot be a valid DNS packet, as it has no complete header
+    // Drop it, but continue to wait for other packets
+    g_stats.serverParseError++;
+    if (g_logCommonErrors) {
+      SLOG(g_log << Logger::Error << "Unable to parse too short packet from remote UDP server " << fromaddr.toString() << ": packet smaller than DNS header" << endl,
+           g_slogout->info(Logr::Error, "Unable to parse too short packet from remote UDP server", "from", Logging::Loggable(fromaddr)));
+    }
     return;
   }
 
+  // We have at least a full header
   packet.resize(len);
   dnsheader dh;
   memcpy(&dh, &packet.at(0), sizeof(dh));
@@ -2738,10 +2743,18 @@
   }
   else {
     try {
-      if (len > 12)
-        pident->domain = DNSName(reinterpret_cast<const char*>(packet.data()), len, 12, false, &pident->type); // don't copy this from above - we need to do the actual read
+      if (len > signed_sizeof_sdnsheader) {
+        pident->domain = DNSName(reinterpret_cast<const char*>(packet.data()), len, static_cast<int>(sizeof(dnsheader)), false, &pident->type); // don't copy this from above - we need to do the actual read
+      }
+      else {
+        // len == sizeof(dnsheader), only header case
+        // We will do a full scan search later to see if we can match this reply even without a domain
+        pident->domain.clear();
+        pident->type = 0;
+      }
     }
     catch (std::exception& e) {
+      // Parse error, continue waiting for other packets
       g_stats.serverParseError++; // won't be fed to lwres.cc, so we have to increment
       SLOG(g_log << Logger::Warning << "Error in packet from remote nameserver " << fromaddr.toStringWithPort() << ": " << e.what() << endl,
            g_slogudpin->error(Logr::Warning, e.what(), "Error in packet from remote nameserver", "from", Logging::Loggable(fromaddr)));
@@ -2749,14 +2762,16 @@
     }
   }
 
-  MT_t::waiters_t::iterator iter = MT->d_waiters.find(pident);
-  if (iter != MT->d_waiters.end()) {
-    doResends(iter, pident, packet);
+  if (!pident->domain.empty()) {
+    MT_t::waiters_t::iterator iter = MT->d_waiters.find(pident);
+    if (iter != MT->d_waiters.end()) {
+      doResends(iter, pident, packet);
+    }
   }
 
 retryWithName:
 
-  if (!MT->sendEvent(pident, &packet)) {
+  if (pident->domain.empty() || MT->sendEvent(pident, &packet) == 0) {
     /* we did not find a match for this response, something is wrong */
 
     // we do a full scan for outstanding queries on unexpected answers. not too bad since we only accept them on the right port number, which is hard enough to guess
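
The rework of handleUDPServerResponse() above separates three cases
that were previously intertwined: a socket error (len < 0) returns the
socket and wakes the waiting query with an empty buffer so it can retry
via another nameserver; a packet shorter than a DNS header is counted
as a parse error and dropped while we keep waiting; everything else is
processed, with header-only replies (len == sizeof(dnsheader)) now
matched by a full scan instead of by name. A minimal sketch of just
that control flow, with hypothetical stand-in handlers:

  #include <cstdio>
  #include <sys/types.h>

  // Hypothetical stand-ins for the real handlers.
  static void signalErrorAndRetry() { std::puts("socket error: retry via other NS"); }
  static void countServerParseError() { std::puts("short packet: counted, dropped"); }
  static void processReply(ssize_t len) { std::printf("processing %zd bytes\n", len); }

  static void handleResponseSketch(ssize_t len) {
    const ssize_t headerSize = 12; // sizeof(dnsheader)
    if (len < 0) {
      // Socket error: give up on this socket, let the query retry elsewhere.
      signalErrorAndRetry();
      return;
    }
    if (len < headerSize) {
      // Cannot be a valid DNS packet: count it, drop it, keep waiting.
      countServerParseError();
      return;
    }
    // At least a full header; header-only replies are matched by full scan.
    processReply(len);
  }

  int main() {
    handleResponseSketch(-1); // error path
    handleResponseSketch(5);  // too short
    handleResponseSketch(64); // normal reply
    return 0;
  }
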
diff -Nru pdns-recursor-4.8.2/rec-lua-conf.cc pdns-recursor-4.8.4/rec-lua-conf.cc
--- pdns-recursor-4.8.2/rec-lua-conf.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/rec-lua-conf.cc	2023-03-27 15:08:37.000000000 +0000
@@ -466,7 +466,7 @@
     }
     catch (const std::exception& e) {
       SLOG(g_log << Logger::Error << "Unable to load RPZ zone from '" << filename << "': " << e.what() << endl,
-           log->error(Logr::Error, e.what(), "Exception while loadinf  RPZ zone from file"));
+           log->error(Logr::Error, e.what(), "Exception while loading RPZ zone from file"));
     }
   });
 
diff -Nru pdns-recursor-4.8.2/rec-main.cc pdns-recursor-4.8.4/rec-main.cc
--- pdns-recursor-4.8.2/rec-main.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/rec-main.cc	2023-03-27 15:08:37.000000000 +0000
@@ -366,12 +366,12 @@
       return ret;
     }
     catch (FDMultiplexerException& fe) {
-      SLOG(g_log << Logger::Error << "Non-fatal error initializing possible multiplexer (" << fe.what() << "), falling back" << endl,
-           log->error(Logr::Error, fe.what(), "Non-fatal error initializing possible multiplexer, falling back"));
+      SLOG(g_log << Logger::Warning << "Non-fatal error initializing possible multiplexer (" << fe.what() << "), falling back" << endl,
+           log->error(Logr::Warning, fe.what(), "Non-fatal error initializing possible multiplexer, falling back"));
     }
     catch (...) {
-      SLOG(g_log << Logger::Error << "Non-fatal error initializing possible multiplexer" << endl,
-           log->info(Logr::Error, "Non-fatal error initializing possible multiplexer"));
+      SLOG(g_log << Logger::Warning << "Non-fatal error initializing possible multiplexer" << endl,
+           log->info(Logr::Warning, "Non-fatal error initializing possible multiplexer"));
     }
   }
   SLOG(g_log << Logger::Error << "No working multiplexer found!" << endl,
diff -Nru pdns-recursor-4.8.2/recursor_cache.cc pdns-recursor-4.8.4/recursor_cache.cc
--- pdns-recursor-4.8.2/recursor_cache.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/recursor_cache.cc	2023-03-27 15:08:37.000000000 +0000
@@ -418,7 +418,7 @@
         firstIndexIterator = map->d_map.project<OrderedTag>(i);
 
         // When serving stale, we consider expired records
-        if (i->isEntryUsable(now, serveStale)) {
+        if (!i->isEntryUsable(now, serveStale)) {
           moveCacheItemToFront<SequencedTag>(map->d_map, firstIndexIterator);
           continue;
         }
@@ -459,7 +459,7 @@
       firstIndexIterator = map->d_map.project<OrderedTag>(i);
 
       // When serving stale, we consider expired records
-      if (i->isEntryUsable(now, serveStale)) {
+      if (!i->isEntryUsable(now, serveStale)) {
         moveCacheItemToFront<SequencedTag>(map->d_map, firstIndexIterator);
         continue;
       }
diff -Nru pdns-recursor-4.8.2/recursor_cache.hh pdns-recursor-4.8.4/recursor_cache.hh
--- pdns-recursor-4.8.2/recursor_cache.hh	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/recursor_cache.hh	2023-03-27 15:08:37.000000000 +0000
@@ -104,7 +104,7 @@
     bool isEntryUsable(time_t now, bool serveStale) const
     {
       // When serving stale, we consider expired records
-      return d_ttd <= now && !serveStale && d_servedStale == 0;
+      return d_ttd > now || serveStale || d_servedStale != 0;
     }
 
     bool shouldReplace(time_t now, bool auth, vState state, bool refresh);
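
The one-line change above is the crux of several hunks: in 4.8.2 the
body returned true when an entry was expired and *not* being served
stale, i.e. the opposite of what the name promises. 4.8.4 replaces it
with the De Morgan negation and flips the two call sites in
recursor_cache.cc to match. A small sketch checking the equivalence
over all input combinations (hypothetical helper functions, not code
from the source):

  #include <cassert>

  // 4.8.2 body: true actually meant "expired and not serving stale".
  static bool oldPredicate(bool expired, bool serveStale, bool servedStale) {
    return expired && !serveStale && !servedStale;
  }

  // 4.8.4 body: true now really means "usable".
  static bool newPredicate(bool expired, bool serveStale, bool servedStale) {
    return !expired || serveStale || servedStale;
  }

  int main() {
    // By De Morgan, the new body is exactly the negation of the old one.
    for (int mask = 0; mask < 8; ++mask) {
      bool expired = mask & 1, serveStale = mask & 2, servedStale = mask & 4;
      assert(newPredicate(expired, serveStale, servedStale) ==
             !oldPredicate(expired, serveStale, servedStale));
    }
    return 0;
  }
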
diff -Nru pdns-recursor-4.8.2/reczones.cc pdns-recursor-4.8.4/reczones.cc
--- pdns-recursor-4.8.2/reczones.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/reczones.cc	2023-03-27 15:08:37.000000000 +0000
@@ -325,8 +325,8 @@
       // headers.first=toCanonic("", headers.first);
       if (n == 0) {
         ad.d_rdForward = false;
-        SLOG(g_log << Logger::Error << "Parsing authoritative data for zone '" << headers.first << "' from file '" << headers.second << "'" << endl,
-             log->info(Logr::Error, "Parsing authoritative data from file", "zone", Logging::Loggable(headers.first), "file", Logging::Loggable(headers.second)));
+        SLOG(g_log << Logger::Notice << "Parsing authoritative data for zone '" << headers.first << "' from file '" << headers.second << "'" << endl,
+             log->info(Logr::Notice, "Parsing authoritative data from file", "zone", Logging::Loggable(headers.first), "file", Logging::Loggable(headers.second)));
         ZoneParserTNG zpt(headers.second, DNSName(headers.first));
         zpt.setMaxGenerateSteps(::arg().asNum("max-generate-steps"));
         zpt.setMaxIncludes(::arg().asNum("max-include-depth"));
diff -Nru pdns-recursor-4.8.2/syncres.cc pdns-recursor-4.8.4/syncres.cc
--- pdns-recursor-4.8.2/syncres.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/syncres.cc	2023-03-27 15:08:37.000000000 +0000
@@ -1807,227 +1807,220 @@
   const int iterations = !d_refresh && MemRecursorCache::s_maxServedStaleExtensions > 0 ? 2 : 1;
   for (int loop = 0; loop < iterations; loop++) {
 
-    if (loop == 1) {
-      d_serveStale = true;
-    }
+    d_serveStale = loop == 1;
 
-    // When we're not on the last iteration, a timeout is not fatal
-    const bool exceptionOnTimeout = loop == iterations - 1;
-
-    try {
-      // This is a difficult way of expressing "this is a normal query", i.e. not getRootNS.
-      if(!(d_updatingRootNS && qtype.getCode()==QType::NS && qname.isRoot())) {
-        if(d_cacheonly) { // very limited OOB support
-          LWResult lwr;
-          LOG(prefix<<qname<<": Recursion not requested for '"<<qname<<"|"<<qtype<<"', peeking at auth/forward zones"<<endl);
-          DNSName authname(qname);
-          domainmap_t::const_iterator iter=getBestAuthZone(&authname);
-          if(iter != t_sstorage.domainmap->end()) {
-            if(iter->second.isAuth()) {
-              ret.clear();
-              d_wasOutOfBand = doOOBResolve(qname, qtype, ret, depth, res);
-              if (fromCache)
-                *fromCache = d_wasOutOfBand;
-              return res;
-            }
-            else if (considerforwards) {
-              const vector<ComboAddress>& servers = iter->second.d_servers;
-              const ComboAddress remoteIP = servers.front();
-              LOG(prefix<<qname<<": forwarding query to hardcoded nameserver '"<< remoteIP.toStringWithPort()<<"' for zone '"<<authname<<"'"<<endl);
-
-              boost::optional<Netmask> nm;
-              bool chained = false;
-              // forwardes are "anonymous", so plug in an empty name; some day we might have a fancier config language...
-              auto resolveRet = asyncresolveWrapper(remoteIP, d_doDNSSEC, qname, authname, qtype.getCode(), false, false, &d_now, nm, &lwr, &chained, DNSName());
-
-              d_totUsec += lwr.d_usec;
-              accountAuthLatency(lwr.d_usec, remoteIP.sin4.sin_family);
-              ++g_stats.authRCode.at(lwr.d_rcode);
-              if (fromCache)
-                *fromCache = true;
-
-              // filter out the good stuff from lwr.result()
-              if (resolveRet == LWResult::Result::Success) {
-                for(const auto& rec : lwr.d_records) {
-                  if(rec.d_place == DNSResourceRecord::ANSWER)
-                    ret.push_back(rec);
-                }
-                return 0;
-              }
-              else {
-                return RCode::ServFail;
+    // This is a difficult way of expressing "this is a normal query", i.e. not getRootNS.
+    if(!(d_updatingRootNS && qtype.getCode()==QType::NS && qname.isRoot())) {
+      if(d_cacheonly) { // very limited OOB support
+        LWResult lwr;
+        LOG(prefix<<qname<<": Recursion not requested for '"<<qname<<"|"<<qtype<<"', peeking at auth/forward zones"<<endl);
+        DNSName authname(qname);
+        domainmap_t::const_iterator iter=getBestAuthZone(&authname);
+        if(iter != t_sstorage.domainmap->end()) {
+          if(iter->second.isAuth()) {
+            ret.clear();
+            d_wasOutOfBand = doOOBResolve(qname, qtype, ret, depth, res);
+            if (fromCache)
+              *fromCache = d_wasOutOfBand;
+            return res;
+          }
+          else if (considerforwards) {
+            const vector<ComboAddress>& servers = iter->second.d_servers;
+            const ComboAddress remoteIP = servers.front();
+            LOG(prefix<<qname<<": forwarding query to hardcoded nameserver '"<< remoteIP.toStringWithPort()<<"' for zone '"<<authname<<"'"<<endl);
+
+            boost::optional<Netmask> nm;
+            bool chained = false;
+            // forwardes are "anonymous", so plug in an empty name; some day we might have a fancier config language...
+            auto resolveRet = asyncresolveWrapper(remoteIP, d_doDNSSEC, qname, authname, qtype.getCode(), false, false, &d_now, nm, &lwr, &chained, DNSName());
+
+            d_totUsec += lwr.d_usec;
+            accountAuthLatency(lwr.d_usec, remoteIP.sin4.sin_family);
+            ++g_stats.authRCode.at(lwr.d_rcode);
+            if (fromCache)
+              *fromCache = true;
+
+            // filter out the good stuff from lwr.result()
+            if (resolveRet == LWResult::Result::Success) {
+              for(const auto& rec : lwr.d_records) {
+                if(rec.d_place == DNSResourceRecord::ANSWER)
+                  ret.push_back(rec);
               }
+              return 0;
+            }
+            else {
+              return RCode::ServFail;
             }
           }
         }
+      }
 
-        DNSName authname(qname);
-        bool wasForwardedOrAuthZone = false;
-        bool wasAuthZone = false;
-        bool wasForwardRecurse = false;
-        domainmap_t::const_iterator iter = getBestAuthZone(&authname);
-        if(iter != t_sstorage.domainmap->end()) {
-          const auto& domain = iter->second;
-          wasForwardedOrAuthZone = true;
+      DNSName authname(qname);
+      bool wasForwardedOrAuthZone = false;
+      bool wasAuthZone = false;
+      bool wasForwardRecurse = false;
+      domainmap_t::const_iterator iter = getBestAuthZone(&authname);
+      if(iter != t_sstorage.domainmap->end()) {
+        const auto& domain = iter->second;
+        wasForwardedOrAuthZone = true;
 
-          if (domain.isAuth()) {
-            wasAuthZone = true;
-          } else if (domain.shouldRecurse()) {
-            wasForwardRecurse = true;
-          }
+        if (domain.isAuth()) {
+          wasAuthZone = true;
+        } else if (domain.shouldRecurse()) {
+          wasForwardRecurse = true;
         }
+      }
 
-        /* When we are looking for a DS, we want to the non-CNAME cache check first
-           because we can actually have a DS (from the parent zone) AND a CNAME (from
-           the child zone), and what we really want is the DS */
-        if (qtype != QType::DS && doCNAMECacheCheck(qname, qtype, ret, depth, res, state, wasAuthZone, wasForwardRecurse)) { // will reroute us if needed
-          d_wasOutOfBand = wasAuthZone;
-          // Here we have an issue. If we were prevented from going out to the network (cache-only was set, possibly because we
-          // are in QM Step0) we might have a CNAME but not the corresponding target.
-          // It means that we will sometimes go to the next steps when we are in fact done, but that's fine since
-          // we will get the records from the cache, resulting in a small overhead.
-          // This might be a real problem if we had a RPZ hit, though, because we do not want the processing to continue, since
-          // RPZ rules will not be evaluated anymore (we already matched).
-          const bool stoppedByPolicyHit = d_appliedPolicy.wasHit();
-
-          if (fromCache && (!d_cacheonly || stoppedByPolicyHit)) {
-            *fromCache = true;
-          }
-          /* Apply Post filtering policies */
-
-          if (d_wantsRPZ && !stoppedByPolicyHit) {
-            auto luaLocal = g_luaconfs.getLocal();
-            if (luaLocal->dfe.getPostPolicy(ret, d_discardedPolicies, d_appliedPolicy)) {
-              mergePolicyTags(d_policyTags, d_appliedPolicy.getTags());
-              bool done = false;
-              handlePolicyHit(prefix, qname, qtype, ret, done, res, depth);
-              if (done && fromCache) {
-                *fromCache = true;
-              }
+      /* When we are looking for a DS, we want to the non-CNAME cache check first
+         because we can actually have a DS (from the parent zone) AND a CNAME (from
+         the child zone), and what we really want is the DS */
+      if (qtype != QType::DS && doCNAMECacheCheck(qname, qtype, ret, depth, res, state, wasAuthZone, wasForwardRecurse)) { // will reroute us if needed
+        d_wasOutOfBand = wasAuthZone;
+        // Here we have an issue. If we were prevented from going out to the network (cache-only was set, possibly because we
+        // are in QM Step0) we might have a CNAME but not the corresponding target.
+        // It means that we will sometimes go to the next steps when we are in fact done, but that's fine since
+        // we will get the records from the cache, resulting in a small overhead.
+        // This might be a real problem if we had a RPZ hit, though, because we do not want the processing to continue, since
+        // RPZ rules will not be evaluated anymore (we already matched).
+        const bool stoppedByPolicyHit = d_appliedPolicy.wasHit();
+
+        if (fromCache && (!d_cacheonly || stoppedByPolicyHit)) {
+          *fromCache = true;
+        }
+        /* Apply Post filtering policies */
+
+        if (d_wantsRPZ && !stoppedByPolicyHit) {
+          auto luaLocal = g_luaconfs.getLocal();
+          if (luaLocal->dfe.getPostPolicy(ret, d_discardedPolicies, d_appliedPolicy)) {
+            mergePolicyTags(d_policyTags, d_appliedPolicy.getTags());
+            bool done = false;
+            handlePolicyHit(prefix, qname, qtype, ret, done, res, depth);
+            if (done && fromCache) {
+              *fromCache = true;
             }
           }
-          return res;
         }
+        return res;
+      }
 
-        if (doCacheCheck(qname, authname, wasForwardedOrAuthZone, wasAuthZone, wasForwardRecurse, qtype, ret, depth, res, state)) {
-          // we done
-          d_wasOutOfBand = wasAuthZone;
-          if (fromCache) {
-            *fromCache = true;
-          }
-
-          if (d_wantsRPZ && !d_appliedPolicy.wasHit()) {
-            auto luaLocal = g_luaconfs.getLocal();
-            if (luaLocal->dfe.getPostPolicy(ret, d_discardedPolicies, d_appliedPolicy)) {
-              mergePolicyTags(d_policyTags, d_appliedPolicy.getTags());
-              bool done = false;
-              handlePolicyHit(prefix, qname, qtype, ret, done, res, depth);
-            }
+      if (doCacheCheck(qname, authname, wasForwardedOrAuthZone, wasAuthZone, wasForwardRecurse, qtype, ret, depth, res, state)) {
+        // we done
+        d_wasOutOfBand = wasAuthZone;
+        if (fromCache) {
+          *fromCache = true;
+        }
+
+        if (d_wantsRPZ && !d_appliedPolicy.wasHit()) {
+          auto luaLocal = g_luaconfs.getLocal();
+          if (luaLocal->dfe.getPostPolicy(ret, d_discardedPolicies, d_appliedPolicy)) {
+            mergePolicyTags(d_policyTags, d_appliedPolicy.getTags());
+            bool done = false;
+            handlePolicyHit(prefix, qname, qtype, ret, done, res, depth);
           }
+        }
 
-          return res;
+        return res;
+      }
+
+      /* if we have not found a cached DS (or denial of), now is the time to look for a CNAME */
+      if (qtype == QType::DS && doCNAMECacheCheck(qname, qtype, ret, depth, res, state, wasAuthZone, wasForwardRecurse)) { // will reroute us if needed
+        d_wasOutOfBand = wasAuthZone;
+        // Here we have an issue. If we were prevented from going out to the network (cache-only was set, possibly because we
+        // are in QM Step0) we might have a CNAME but not the corresponding target.
+        // It means that we will sometimes go to the next steps when we are in fact done, but that's fine since
+        // we will get the records from the cache, resulting in a small overhead.
+        // This might be a real problem if we had a RPZ hit, though, because we do not want the processing to continue, since
+        // RPZ rules will not be evaluated anymore (we already matched).
+        const bool stoppedByPolicyHit = d_appliedPolicy.wasHit();
+
+        if (fromCache && (!d_cacheonly || stoppedByPolicyHit)) {
+          *fromCache = true;
         }
+        /* Apply Post filtering policies */
 
-        /* if we have not found a cached DS (or denial of), now is the time to look for a CNAME */
-        if (qtype == QType::DS && doCNAMECacheCheck(qname, qtype, ret, depth, res, state, wasAuthZone, wasForwardRecurse)) { // will reroute us if needed
-          d_wasOutOfBand = wasAuthZone;
-          // Here we have an issue. If we were prevented from going out to the network (cache-only was set, possibly because we
-          // are in QM Step0) we might have a CNAME but not the corresponding target.
-          // It means that we will sometimes go to the next steps when we are in fact done, but that's fine since
-          // we will get the records from the cache, resulting in a small overhead.
-          // This might be a real problem if we had a RPZ hit, though, because we do not want the processing to continue, since
-          // RPZ rules will not be evaluated anymore (we already matched).
-          const bool stoppedByPolicyHit = d_appliedPolicy.wasHit();
-
-          if (fromCache && (!d_cacheonly || stoppedByPolicyHit)) {
-            *fromCache = true;
-          }
-          /* Apply Post filtering policies */
-
-          if (d_wantsRPZ && !stoppedByPolicyHit) {
-            auto luaLocal = g_luaconfs.getLocal();
-            if (luaLocal->dfe.getPostPolicy(ret, d_discardedPolicies, d_appliedPolicy)) {
-              mergePolicyTags(d_policyTags, d_appliedPolicy.getTags());
-              bool done = false;
-              handlePolicyHit(prefix, qname, qtype, ret, done, res, depth);
-              if (done && fromCache) {
-                *fromCache = true;
-              }
+        if (d_wantsRPZ && !stoppedByPolicyHit) {
+          auto luaLocal = g_luaconfs.getLocal();
+          if (luaLocal->dfe.getPostPolicy(ret, d_discardedPolicies, d_appliedPolicy)) {
+            mergePolicyTags(d_policyTags, d_appliedPolicy.getTags());
+            bool done = false;
+            handlePolicyHit(prefix, qname, qtype, ret, done, res, depth);
+            if (done && fromCache) {
+              *fromCache = true;
             }
           }
-
-          return res;
         }
-      }
 
-      if (d_cacheonly) {
-        return 0;
+        return res;
       }
+    }
 
-      LOG(prefix<<qname<<": No cache hit for '"<<qname<<"|"<<qtype<<"', trying to find an appropriate NS record"<<endl);
+    if (d_cacheonly) {
+      return 0;
+    }
+    // When trying to serve-stale, we also only look at the cache. Don't look at d_serveStale, it
+    // might be changed by recursive calls (this should be fixed in a better way!).
+    if (loop == 1) {
+      return res;
+    }
 
-      DNSName subdomain(qname);
-      if (qtype == QType::DS) subdomain.chopOff();
+    LOG(prefix<<qname<<": No cache hit for '"<<qname<<"|"<<qtype<<"', trying to find an appropriate NS record"<<endl);
 
-      NsSet nsset;
-      bool flawedNSSet=false;
+    DNSName subdomain(qname);
+    if (qtype == QType::DS) subdomain.chopOff();
 
-      // the two retries allow getBestNSNamesFromCache&co to reprime the root
-      // hints, in case they ever go missing
-      for(int tries=0;tries<2 && nsset.empty();++tries) {
-        subdomain=getBestNSNamesFromCache(subdomain, qtype, nsset, &flawedNSSet, depth, beenthere); //  pass beenthere to both occasions
-      }
+    NsSet nsset;
+    bool flawedNSSet=false;
 
-      res = doResolveAt(nsset, subdomain, flawedNSSet, qname, qtype, ret, depth, beenthere, state, stopAtDelegation, nullptr);
+    // the two retries allow getBestNSNamesFromCache&co to reprime the root
+    // hints, in case they ever go missing
+    for(int tries=0;tries<2 && nsset.empty();++tries) {
+      subdomain=getBestNSNamesFromCache(subdomain, qtype, nsset, &flawedNSSet, depth, beenthere); //  pass beenthere to both occasions
+    }
 
-      if (res == -1 && s_save_parent_ns_set) {
-        // It did not work out, lets check if we have a saved parent NS set
-        map<DNSName, vector<ComboAddress>> fallBack;
-        {
-          auto lock = s_savedParentNSSet.lock();
-          auto domainData= lock->find(subdomain);
-          if (domainData != lock->end() && domainData->d_nsAddresses.size() > 0) {
-            nsset.clear();
-            // Build the nsset arg and fallBack data for the fallback doResolveAt() attempt
-            // Take a copy to be able to release the lock, NsSet is actually a map, go figure
-            for (const auto& ns : domainData->d_nsAddresses) {
-              nsset.emplace(ns.first, pair(std::vector<ComboAddress>(), false));
-              fallBack.emplace(ns.first, ns.second);
-            }
-          }
-        }
-        if (fallBack.size() > 0) {
-          LOG(prefix<<qname<<": Failure, but we have a saved parent NS set, trying that one"<< endl)
-            res = doResolveAt(nsset, subdomain, flawedNSSet, qname, qtype, ret, depth, beenthere, state, stopAtDelegation, &fallBack);
-          if (res == 0) {
-            // It did work out
-            s_savedParentNSSet.lock()->inc(subdomain);
+    res = doResolveAt(nsset, subdomain, flawedNSSet, qname, qtype, ret, depth, beenthere, state, stopAtDelegation, nullptr);
+
+    if (res == -1 && s_save_parent_ns_set) {
+      // It did not work out, lets check if we have a saved parent NS set
+      map<DNSName, vector<ComboAddress>> fallBack;
+      {
+        auto lock = s_savedParentNSSet.lock();
+        auto domainData= lock->find(subdomain);
+        if (domainData != lock->end() && domainData->d_nsAddresses.size() > 0) {
+          nsset.clear();
+          // Build the nsset arg and fallBack data for the fallback doResolveAt() attempt
+          // Take a copy to be able to release the lock, NsSet is actually a map, go figure
+          for (const auto& ns : domainData->d_nsAddresses) {
+            nsset.emplace(ns.first, pair(std::vector<ComboAddress>(), false));
+            fallBack.emplace(ns.first, ns.second);
           }
         }
       }
-      /* Apply Post filtering policies */
-      if (d_wantsRPZ && !d_appliedPolicy.wasHit()) {
-        auto luaLocal = g_luaconfs.getLocal();
-        if (luaLocal->dfe.getPostPolicy(ret, d_discardedPolicies, d_appliedPolicy)) {
-          mergePolicyTags(d_policyTags, d_appliedPolicy.getTags());
-          bool done = false;
-          handlePolicyHit(prefix, qname, qtype, ret, done, res, depth);
+      if (fallBack.size() > 0) {
+        LOG(prefix<<qname<<": Failure, but we have a saved parent NS set, trying that one"<< endl)
+          res = doResolveAt(nsset, subdomain, flawedNSSet, qname, qtype, ret, depth, beenthere, state, stopAtDelegation, &fallBack);
+        if (res == 0) {
+          // It did work out
+          s_savedParentNSSet.lock()->inc(subdomain);
         }
       }
-
-      if (!res) {
-        return 0;
+    }
+    /* Apply Post filtering policies */
+    if (d_wantsRPZ && !d_appliedPolicy.wasHit()) {
+      auto luaLocal = g_luaconfs.getLocal();
+      if (luaLocal->dfe.getPostPolicy(ret, d_discardedPolicies, d_appliedPolicy)) {
+        mergePolicyTags(d_policyTags, d_appliedPolicy.getTags());
+        bool done = false;
+        handlePolicyHit(prefix, qname, qtype, ret, done, res, depth);
       }
+    }
 
-      LOG(prefix<<qname<<": failed (res="<<res<<")"<<endl);
-      if (res >= 0) {
-        break;
-      }
+    if (!res) {
+      return 0;
     }
-    catch (const ImmediateServFailException&) {
-      if (exceptionOnTimeout) {
-        throw;
-      }
+
+    LOG(prefix<<qname<<": failed (res="<<res<<")"<<endl);
+    if (res >= 0) {
+      break;
     }
   }
   return res<0 ? RCode::ServFail : res;
@@ -4601,7 +4594,7 @@
         // Delete potential negcache entry. When a record recovers with serve-stale the negcache entry can cause the wrong entry to
         // be served, as negcache entries are checked before record cache entries
         if (NegCache::s_maxServedStaleExtensions > 0) {
-          g_negCache->wipe(i->first.name, i->first.type);
+          g_negCache->wipeTyped(i->first.name, i->first.type);
         }
 
         if (g_aggressiveNSECCache && needWildcardProof && recordState == vState::Secure && i->first.place == DNSResourceRecord::ANSWER && i->first.name == qname && !i->second.signatures.empty() && !d_routingTag && !ednsmask) {
@@ -5200,6 +5193,12 @@
   }
 
   d_totUsec += lwr.d_usec;
+
+  if (resolveret == LWResult::Result::Spoofed) {
+    spoofed = true;
+    return false;
+  }
+
   accountAuthLatency(lwr.d_usec, remoteIP.sin4.sin_family);
   ++g_stats.authRCode.at(lwr.d_rcode);
 
@@ -5231,9 +5230,6 @@
       LOG(prefix<<qname<<": hit a local resource limit resolving"<< (doTCP ? " over TCP" : "")<<", probable error: "<<stringerror()<<endl);
       g_stats.resourceLimits++;
     }
-    else if (resolveret == LWResult::Result::Spoofed) {
-      spoofed = true;
-    }
     else {
       /* LWResult::Result::PermanentError */
       s_unreachables++;
@@ -5441,7 +5437,7 @@
       LOG(prefix<<qname<<": NXDOMAIN without a negative indication (missing SOA in authority) in a DNSSEC secure zone, going Bogus"<<endl);
       updateValidationState(state, vState::BogusMissingNegativeIndication);
     }
-    else if (state == vState::Indeterminate) {
+    else {
       /* we might not have validated any record, because we did get a NXDOMAIN without any SOA
          from an insecure zone, for example */
       updateValidationState(state, tempState);
@@ -5463,7 +5459,7 @@
       LOG(prefix<<qname<<": NODATA without a negative indication (missing SOA in authority) in a DNSSEC secure zone, going Bogus"<<endl);
       updateValidationState(state, vState::BogusMissingNegativeIndication);
     }
-    else if (state == vState::Indeterminate) {
+    else {
       /* we might not have validated any record, because we did get a NODATA without any SOA
          from an insecure zone, for example */
       updateValidationState(state, tempState);
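
Most of the big syncres.cc hunk is re-indentation, but two behavioural
changes hide in it: the try/catch that used to swallow an
ImmediateServFailException on the non-final pass is gone, so such
failures now propagate immediately (the new
test_servestale_immediateservfail test later in this diff asserts
exactly that), and the serve-stale pass (loop == 1) now returns after
the cache checks instead of attempting the network again. A minimal
sketch of the resulting two-pass structure, with contrived hypothetical
helpers:

  #include <cstdio>
  #include <stdexcept>

  // Hypothetical helpers: pretend only the stale pass hits the cache,
  // and the network attempt fails hard.
  static bool answeredFromCache(bool serveStale) { return serveStale; }
  static int resolveOverNetwork() { throw std::runtime_error("ImmediateServFail"); }

  static int doResolveSketch(bool staleExtensionsEnabled) {
    const int iterations = staleExtensionsEnabled ? 2 : 1;
    int res = -1;
    for (int loop = 0; loop < iterations; loop++) {
      const bool serveStale = (loop == 1); // second pass considers stale entries
      if (answeredFromCache(serveStale)) {
        return 0;
      }
      if (loop == 1) {
        // The serve-stale pass only looks at the cache; no second network try.
        return res;
      }
      // First pass goes to the network; since 4.8.4 an
      // ImmediateServFailException raised here propagates immediately.
      res = resolveOverNetwork();
    }
    return res;
  }

  int main() {
    try {
      doResolveSketch(true);
    } catch (const std::exception& e) {
      std::printf("propagated: %s\n", e.what());
    }
    return 0;
  }
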
diff -Nru pdns-recursor-4.8.2/test-negcache_cc.cc pdns-recursor-4.8.4/test-negcache_cc.cc
--- pdns-recursor-4.8.2/test-negcache_cc.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/test-negcache_cc.cc	2023-03-27 15:08:37.000000000 +0000
@@ -438,6 +438,44 @@
   BOOST_CHECK_EQUAL(cache.size(), 400U);
 }
 
+BOOST_AUTO_TEST_CASE(test_wipe_typed)
+{
+  string qname(".powerdns.com");
+  DNSName auth("powerdns.com");
+
+  struct timeval now;
+  Utility::gettimeofday(&now, 0);
+
+  NegCache cache;
+  NegCache::NegCacheEntry ne;
+  ne = genNegCacheEntry(auth, auth, now, QType::A);
+  cache.add(ne);
+
+  for (int i = 0; i < 400; i++) {
+    ne = genNegCacheEntry(DNSName(std::to_string(i) + qname), auth, now, QType::A);
+    cache.add(ne);
+  }
+
+  BOOST_CHECK_EQUAL(cache.size(), 401U);
+
+  // Should only wipe the powerdns.com entry
+  cache.wipeTyped(auth, QType::A);
+  BOOST_CHECK_EQUAL(cache.size(), 400U);
+
+  NegCache::NegCacheEntry ne2;
+  bool ret = cache.get(auth, QType(1), now, ne2);
+
+  BOOST_CHECK_EQUAL(ret, false);
+
+  cache.wipeTyped(DNSName("1.powerdns.com"), QType::A);
+  BOOST_CHECK_EQUAL(cache.size(), 399U);
+
+  NegCache::NegCacheEntry ne3;
+  ret = cache.get(auth, QType(1), now, ne3);
+
+  BOOST_CHECK_EQUAL(ret, false);
+}
+
 BOOST_AUTO_TEST_CASE(test_clear)
 {
   string qname(".powerdns.com");
diff -Nru pdns-recursor-4.8.2/test-syncres_cc10.cc pdns-recursor-4.8.4/test-syncres_cc10.cc
--- pdns-recursor-4.8.2/test-syncres_cc10.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/test-syncres_cc10.cc	2023-03-27 15:08:37.000000000 +0000
@@ -1565,7 +1565,7 @@
   BOOST_CHECK(ret[1].d_type == QType::A);
   BOOST_CHECK_EQUAL(ret[0].d_name, target);
   BOOST_CHECK_EQUAL(ret[1].d_name, DNSName("cname.powerdns.com"));
-  BOOST_CHECK_EQUAL(downCount, 4U);
+  BOOST_CHECK_EQUAL(downCount, 8U);
   BOOST_CHECK_EQUAL(lookupCount, 2U);
 
   // Again, no lookup as the record is marked stale
@@ -1577,7 +1577,7 @@
   BOOST_CHECK(ret[1].d_type == QType::A);
   BOOST_CHECK_EQUAL(ret[0].d_name, target);
   BOOST_CHECK_EQUAL(ret[1].d_name, DNSName("cname.powerdns.com"));
-  BOOST_CHECK_EQUAL(downCount, 4U);
+  BOOST_CHECK_EQUAL(downCount, 8U);
   BOOST_CHECK_EQUAL(lookupCount, 2U);
 
   // Again, no lookup as the record is marked stale but as the TTL has passed a task should have been pushed
@@ -1590,7 +1590,7 @@
   BOOST_CHECK(ret[1].d_type == QType::A);
   BOOST_CHECK_EQUAL(ret[0].d_name, target);
   BOOST_CHECK_EQUAL(ret[1].d_name, DNSName("cname.powerdns.com"));
-  BOOST_CHECK_EQUAL(downCount, 4U);
+  BOOST_CHECK_EQUAL(downCount, 8U);
   BOOST_CHECK_EQUAL(lookupCount, 2U);
 
   BOOST_REQUIRE_EQUAL(getTaskSize(), 2U);
@@ -1613,7 +1613,7 @@
   BOOST_REQUIRE_EQUAL(ret.size(), 1U);
   BOOST_CHECK(ret[0].d_type == QType::SOA);
   BOOST_CHECK_EQUAL(ret[0].d_name, auth);
-  BOOST_CHECK_EQUAL(downCount, 4U);
+  BOOST_CHECK_EQUAL(downCount, 8U);
   BOOST_CHECK_EQUAL(lookupCount, 3U);
 
   // And again, result should come from cache
@@ -1624,10 +1624,89 @@
   BOOST_REQUIRE_EQUAL(ret.size(), 1U);
   BOOST_CHECK(ret[0].d_type == QType::SOA);
   BOOST_CHECK_EQUAL(ret[0].d_name, auth);
-  BOOST_CHECK_EQUAL(downCount, 4U);
+  BOOST_CHECK_EQUAL(downCount, 8U);
   BOOST_CHECK_EQUAL(lookupCount, 3U);
 }
 
+BOOST_AUTO_TEST_CASE(test_servestale_immediateservfail)
+{
+  std::unique_ptr<SyncRes> sr;
+  initSR(sr);
+  MemRecursorCache::s_maxServedStaleExtensions = 1440;
+
+  primeHints();
+
+  const DNSName target("powerdns.com.");
+
+  std::set<ComboAddress> downServers;
+  size_t downCount = 0;
+  size_t lookupCount = 0;
+
+  const int theTTL = 5;
+
+  sr->setAsyncCallback([&downServers, &downCount, &lookupCount, target](const ComboAddress& ip, const DNSName& domain, int type, bool doTCP, bool sendRDQuery, int EDNS0Level, struct timeval* now, boost::optional<Netmask>& srcmask, boost::optional<const ResolveContext&> context, LWResult* res, bool* chained) {
+    /* this will cause issue with qname minimization if we ever implement it */
+
+    if (downServers.find(ip) != downServers.end()) {
+      downCount++;
+      throw ImmediateServFailException("FAIL!");
+    }
+
+    if (isRootServer(ip)) {
+      setLWResult(res, 0, false, false, true);
+      addRecordToLW(res, "com.", QType::NS, "a.gtld-servers.net.", DNSResourceRecord::AUTHORITY);
+      addRecordToLW(res, "a.gtld-servers.net.", QType::A, "192.0.2.1", DNSResourceRecord::ADDITIONAL);
+      addRecordToLW(res, "a.gtld-servers.net.", QType::AAAA, "2001:DB8::1", DNSResourceRecord::ADDITIONAL);
+      return LWResult::Result::Success;
+    }
+    else if (ip == ComboAddress("192.0.2.1:53") || ip == ComboAddress("[2001:DB8::1]:53")) {
+      setLWResult(res, 0, false, false, true);
+      addRecordToLW(res, "powerdns.com.", QType::NS, "pdns-public-ns1.powerdns.com.", DNSResourceRecord::AUTHORITY, theTTL);
+      addRecordToLW(res, "powerdns.com.", QType::NS, "pdns-public-ns2.powerdns.com.", DNSResourceRecord::AUTHORITY, theTTL);
+      addRecordToLW(res, "pdns-public-ns1.powerdns.com.", QType::A, "192.0.2.2", DNSResourceRecord::ADDITIONAL, 5);
+      addRecordToLW(res, "pdns-public-ns1.powerdns.com.", QType::AAAA, "2001:DB8::2", DNSResourceRecord::ADDITIONAL, theTTL);
+      addRecordToLW(res, "pdns-public-ns2.powerdns.com.", QType::A, "192.0.2.3", DNSResourceRecord::ADDITIONAL, 5);
+      addRecordToLW(res, "pdns-public-ns2.powerdns.com.", QType::AAAA, "2001:DB8::3", DNSResourceRecord::ADDITIONAL, theTTL);
+      return LWResult::Result::Success;
+    }
+    else if (ip == ComboAddress("192.0.2.2:53") || ip == ComboAddress("192.0.2.3:53") || ip == ComboAddress("[2001:DB8::2]:53") || ip == ComboAddress("[2001:DB8::3]:53")) {
+      setLWResult(res, 0, true, false, true);
+      addRecordToLW(res, target, QType::A, "192.0.2.4", DNSResourceRecord::ANSWER, 5);
+      lookupCount++;
+      return LWResult::Result::Success;
+    }
+    else {
+      return LWResult::Result::Timeout;
+    }
+  });
+
+  time_t now = time(nullptr);
+
+  vector<DNSRecord> ret;
+  int res = sr->beginResolve(target, QType(QType::A), QClass::IN, ret);
+
+  BOOST_CHECK_EQUAL(res, RCode::NoError);
+  BOOST_REQUIRE_EQUAL(ret.size(), 1U);
+  BOOST_CHECK(ret[0].d_type == QType::A);
+  BOOST_CHECK_EQUAL(ret[0].d_name, target);
+  BOOST_CHECK_EQUAL(downCount, 0U);
+  BOOST_CHECK_EQUAL(lookupCount, 1U);
+
+  downServers.insert(ComboAddress("192.0.2.2:53"));
+  downServers.insert(ComboAddress("192.0.2.3:53"));
+  downServers.insert(ComboAddress("[2001:DB8::2]:53"));
+  downServers.insert(ComboAddress("[2001:DB8::3]:53"));
+
+  sr->setNow(timeval{now + theTTL + 1, 0});
+
+  BOOST_REQUIRE_EQUAL(getTaskSize(), 0U);
+
+  // record is expired, so serve stale should kick in
+  ret.clear();
+  BOOST_REQUIRE_THROW(sr->beginResolve(target, QType(QType::A), QClass::IN, ret), ImmediateServFailException);
+  BOOST_CHECK_EQUAL(downCount, 1U);
+}
+
 BOOST_AUTO_TEST_CASE(test_glued_referral_additional_update)
 {
   // Test that additional records update the cache
diff -Nru pdns-recursor-4.8.2/test-syncres_cc7.cc pdns-recursor-4.8.4/test-syncres_cc7.cc
--- pdns-recursor-4.8.2/test-syncres_cc7.cc	2023-01-30 09:57:23.000000000 +0000
+++ pdns-recursor-4.8.4/test-syncres_cc7.cc	2023-03-27 15:08:37.000000000 +0000
@@ -1429,6 +1429,156 @@
   BOOST_CHECK_EQUAL(queriesCount, 5U);
 }
 
+BOOST_AUTO_TEST_CASE(test_dnssec_insecure_missing_soa_on_nodata)
+{
+  std::unique_ptr<SyncRes> sr;
+  initSR(sr, true);
+
+  setDNSSECValidation(sr, DNSSECMode::ValidateAll);
+
+  primeHints();
+  const DNSName target("powerdns.com.");
+  testkeysset_t keys;
+
+  auto luaconfsCopy = g_luaconfs.getCopy();
+  luaconfsCopy.dsAnchors.clear();
+  generateKeyMaterial(DNSName("."), DNSSECKeeper::ECDSA256, DNSSECKeeper::DIGEST_SHA256, keys, luaconfsCopy.dsAnchors);
+  generateKeyMaterial(DNSName("com."), DNSSECKeeper::ECDSA256, DNSSECKeeper::DIGEST_SHA256, keys, luaconfsCopy.dsAnchors);
+  g_luaconfs.setState(luaconfsCopy);
+
+  size_t queriesCount = 0;
+
+  sr->setAsyncCallback([target, &queriesCount, keys](const ComboAddress& ip, const DNSName& domain, int type, bool doTCP, bool sendRDQuery, int EDNS0Level, struct timeval* now, boost::optional<Netmask>& srcmask, boost::optional<const ResolveContext&> context, LWResult* res, bool* chained) {
+    queriesCount++;
+
+    if (type == QType::DS || type == QType::DNSKEY) {
+      if (domain.isPartOf(target)) {
+        /* proves cut */
+        return genericDSAndDNSKEYHandler(res, domain, domain, type, keys, true);
+      }
+      else {
+        return genericDSAndDNSKEYHandler(res, domain, domain, type, keys);
+      }
+    }
+    else if (isRootServer(ip)) {
+      setLWResult(res, 0, false, false, true);
+      addRecordToLW(res, "com.", QType::NS, "ns1.com.", DNSResourceRecord::AUTHORITY, 3600);
+      addDS(DNSName("com."), 300, res->d_records, keys);
+      addRRSIG(keys, res->d_records, DNSName("com."), 300);
+      addRecordToLW(res, "ns1.com.", QType::A, "192.0.2.1", DNSResourceRecord::ADDITIONAL, 3600);
+      return LWResult::Result::Success;
+    }
+    else {
+      if (domain == DNSName("com.")) {
+        setLWResult(res, 0, true, false, true);
+        addRecordToLW(res, DNSName("com"), QType::SOA, "whatever.com. blah.com. 2017032301 10800 3600 604800 3600", DNSResourceRecord::AUTHORITY, 3600);
+        addRRSIG(keys, res->d_records, DNSName("com"), 300);
+        addNSECRecordToLW(DNSName("com"), DNSName("com."), {QType::SOA}, 600, res->d_records);
+        addRRSIG(keys, res->d_records, DNSName("com."), 300);
+        return LWResult::Result::Success;
+      }
+      else if (domain == target) {
+        setLWResult(res, 0, true, false, true);
+        return LWResult::Result::Success;
+      }
+    }
+
+    return LWResult::Result::Timeout;
+  });
+
+  vector<DNSRecord> ret;
+  int res = sr->beginResolve(target, QType(QType::AAAA), QClass::IN, ret);
+  BOOST_CHECK_EQUAL(res, RCode::NoError);
+  BOOST_CHECK_EQUAL(sr->getValidationState(), vState::Insecure);
+  BOOST_REQUIRE_EQUAL(ret.size(), 0U);
+  /* powerdns.com|A, com|A, com|DNSKEY, powerdns.com|DS */
+  BOOST_CHECK_EQUAL(queriesCount, 4U);
+
+  /* again, to test the cache */
+  ret.clear();
+  res = sr->beginResolve(target, QType(QType::AAAA), QClass::IN, ret);
+  BOOST_CHECK_EQUAL(res, RCode::NoError);
+  BOOST_CHECK_EQUAL(sr->getValidationState(), vState::Insecure);
+  BOOST_REQUIRE_EQUAL(ret.size(), 0U);
+  /* we don't store empty results */
+  BOOST_CHECK_EQUAL(queriesCount, 5U);
+}
+
+BOOST_AUTO_TEST_CASE(test_dnssec_insecure_missing_soa_on_nxd)
+{
+  std::unique_ptr<SyncRes> sr;
+  initSR(sr, true);
+
+  setDNSSECValidation(sr, DNSSECMode::ValidateAll);
+
+  primeHints();
+  const DNSName target("powerdns.com.");
+  testkeysset_t keys;
+
+  auto luaconfsCopy = g_luaconfs.getCopy();
+  luaconfsCopy.dsAnchors.clear();
+  generateKeyMaterial(DNSName("."), DNSSECKeeper::ECDSA256, DNSSECKeeper::DIGEST_SHA256, keys, luaconfsCopy.dsAnchors);
+  generateKeyMaterial(DNSName("com."), DNSSECKeeper::ECDSA256, DNSSECKeeper::DIGEST_SHA256, keys, luaconfsCopy.dsAnchors);
+  g_luaconfs.setState(luaconfsCopy);
+
+  size_t queriesCount = 0;
+
+  sr->setAsyncCallback([target, &queriesCount, keys](const ComboAddress& ip, const DNSName& domain, int type, bool doTCP, bool sendRDQuery, int EDNS0Level, struct timeval* now, boost::optional<Netmask>& srcmask, boost::optional<const ResolveContext&> context, LWResult* res, bool* chained) {
+    queriesCount++;
+
+    if (type == QType::DS || type == QType::DNSKEY) {
+      if (domain.isPartOf(target)) {
+        /* proves cut */
+        return genericDSAndDNSKEYHandler(res, domain, domain, type, keys, true);
+      }
+      else {
+        return genericDSAndDNSKEYHandler(res, domain, domain, type, keys);
+      }
+    }
+    else if (isRootServer(ip)) {
+      setLWResult(res, 0, false, false, true);
+      addRecordToLW(res, "com.", QType::NS, "ns1.com.", DNSResourceRecord::AUTHORITY, 3600);
+      addDS(DNSName("com."), 300, res->d_records, keys);
+      addRRSIG(keys, res->d_records, DNSName("com."), 300);
+      addRecordToLW(res, "ns1.com.", QType::A, "192.0.2.1", DNSResourceRecord::ADDITIONAL, 3600);
+      return LWResult::Result::Success;
+    }
+    else {
+      if (domain == DNSName("com.")) {
+        setLWResult(res, 0, true, false, true);
+        addRecordToLW(res, DNSName("com"), QType::SOA, "whatever.com. blah.com. 2017032301 10800 3600 604800 3600", DNSResourceRecord::AUTHORITY, 3600);
+        addRRSIG(keys, res->d_records, DNSName("com"), 300);
+        addNSECRecordToLW(DNSName("com"), DNSName("com."), {QType::SOA}, 600, res->d_records);
+        addRRSIG(keys, res->d_records, DNSName("com."), 300);
+        return LWResult::Result::Success;
+      }
+      else if (domain == target) {
+        setLWResult(res, RCode::NXDomain, true, false, true);
+        return LWResult::Result::Success;
+      }
+    }
+
+    return LWResult::Result::Timeout;
+  });
+
+  vector<DNSRecord> ret;
+  int res = sr->beginResolve(target, QType(QType::AAAA), QClass::IN, ret);
+  BOOST_CHECK_EQUAL(res, RCode::NXDomain);
+  BOOST_CHECK_EQUAL(sr->getValidationState(), vState::Insecure);
+  BOOST_REQUIRE_EQUAL(ret.size(), 0U);
+  /* powerdns.com|A, com|A, com|DNSKEY, powerdns.com|DS */
+  BOOST_CHECK_EQUAL(queriesCount, 4U);
+
+  /* again, to test the cache */
+  ret.clear();
+  res = sr->beginResolve(target, QType(QType::AAAA), QClass::IN, ret);
+  BOOST_CHECK_EQUAL(res, RCode::NXDomain);
+  BOOST_CHECK_EQUAL(sr->getValidationState(), vState::Insecure);
+  BOOST_REQUIRE_EQUAL(ret.size(), 0U);
+  /* we don't store empty results */
+  BOOST_CHECK_EQUAL(queriesCount, 5U);
+}
+
 BOOST_AUTO_TEST_CASE(test_dnssec_bogus_nxdomain)
 {
   std::unique_ptr<SyncRes> sr;

--- End Message ---
--- Begin Message ---
Unblocked.

--- End Message ---
