Re: greylisting on debian.org?
Henrique de Moraes Holschuh <firstname.lastname@example.org> writes:
> On Sun, 09 Jul 2006, Thomas Bushnell BSG wrote:
>> Henrique de Moraes Holschuh <email@example.com> writes:
>> > You can, for example, use dynamic IP supersets to do the greylisting
>> > "triplet" match. Now the problem is a matter of creating the supersets in a
>> > way to not break incoming email from outgoing-SMTP clusters.
>> Is there a way of doing this which doesn't require you to know in
>> advance the setup of remote networks and such? Does it scale?
> Yes. The most absurd way is to consider every non-stolen, valid for the
> public Internet IPv4 netblock as belonging to a single IP superset, and
> flushing the graylisted database often (but mind your outgoing email retry
> times).
I don't think I understand just what you're saying. Can you spell out
the details for me?
>> > You can also only graylist sites which match a set of conditions that flag
>> > them as suspicious. Depending on what conditions you set, you do not have
>> > the risk of blocking any server farms we would want to talk SMTP to.
>> You don't have the risk? Are you saying that there is exactly *zero*
>> risk? Please, if you don't mean that, be more precise.
> We == Debian.
> Server farms we want to talk to == those professionally run by
> non-botnet-<censored>. We also want to talk to MTAs run by geeks on their
> home connections, but those are *not* outgoing SMTP farms, so they are not
> an issue.
Keeping a list of such server farms is exactly what I meant by a
nonworking pseudo-solution. I said, specifically, "is there a way of
doing this which doesn't require you to know in advance the setup of
remote networks and such?" That restates the objection I had already
raised as "all I have seen is to...[include] an exactly accurate
hardcoded list of all such sites."
It distresses me that I have said twice now that a "solution" which
requires a hardcoded list of special sites exempted from the rules is
not a solution I regard as answering my objection.
> Never mind nobody suggested using a dumb, deprecated graylister for @d.o.
Any graylister which requires a specific list of sites counts as a
dumb one in my book. I want a solution which specifically *never*
needs any preset hardcoded "this set of addresses/domains gets a
pass" list.
> In their dumbest form, match using big, static netmasks like 255.255.128.0.
> That should give you a hint of what I am talking about.
A hardcoded list is the problem. Got it? A loose hardcoded list is
still a problem.
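For concreteness, the coarse-netmask triplet match being discussed can
be sketched like this (my own illustrative Python, not code anyone in
this thread proposed; the names, delay, and /17 prefix -- equivalent to
netmask 255.255.128.0 -- are all assumptions). The greylist is keyed on
the client's netblock rather than its exact IP, so a retry from any
host in the same outgoing-SMTP cluster counts as the same triplet:

```python
import ipaddress
import time

GREYLIST_DELAY = 300  # seconds a new triplet must wait before acceptance
_seen = {}            # triplet -> timestamp of first attempt

def triplet(client_ip, sender, recipient, prefix=17):
    # Key on the /17 superset (netmask 255.255.128.0), not the exact IP.
    net = ipaddress.ip_network(f"{client_ip}/{prefix}", strict=False)
    return (str(net), sender.lower(), recipient.lower())

def check(client_ip, sender, recipient, now=None):
    """Return '4xx' (defer) or '2xx' (accept) for one delivery attempt."""
    now = time.time() if now is None else now
    key = triplet(client_ip, sender, recipient)
    first = _seen.setdefault(key, now)
    return "2xx" if now - first >= GREYLIST_DELAY else "4xx"
```

With this keying, a first attempt from 192.0.2.10 is deferred, but a
retry after the delay from 192.0.2.200 (same /17) is accepted, because
both hosts fall into the same superset.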
>> >> Another problem is with hosts that do not accept a message from an MTA
>> >> unless that MTA is willing to accept replies. This is a common spam
>> >> prevention measure. The graylisting host cannot then send mail to
>> >> such sites until they've been whitelisted, because when they try the
>> reverse connection out, it always gets a 4xx error. I've been bitten
>> by this myself.
>> > Why will the host implementing incoming graylisting *always* get a 4xx error
>> > on his outgoing message? I am curious.
>> The other way round.
> Here's what I understood of what you wrote:
> Alice wants to send email to Bob. Alice graylists incoming email. Bob does
> sender verification trying to email people back before accepting a message.
> You claim Alice cannot send mail to Bob because Bob will attempt to "almost
> send email back to Alice", thus Bob's verification attempt will be
> graylisted (with a 4xx), causing Bob to deny the delivery of Alice's message
> with a 4xx.
> If that's not correct, please clarify.
> If it is correct, I am asking you *why* Alice's system will never let Bob's
> verification probe through (thus allowing her email to be delivered to Bob).
Because Bob never sends a complete email message to Alice.
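The Alice/Bob interaction quoted above can be modeled in a few lines
(an illustrative sketch with invented names, addresses, and timings,
not real MTA code): Bob's verification probe is an incomplete SMTP
transaction (MAIL FROM / RCPT TO, no DATA), which is exactly the kind
of "new" connection a greylister defers, and Bob mirrors that deferral
back onto Alice's delivery attempt:

```python
GREYLIST_DELAY = 300  # seconds before Alice accepts a known triplet

class Greylister:
    """Alice's side: defer each triplet until it has aged past the delay."""
    def __init__(self):
        self.first_seen = {}
    def respond(self, triplet, now):
        first = self.first_seen.setdefault(triplet, now)
        return "2xx" if now - first >= GREYLIST_DELAY else "4xx"

def bob_handles_delivery(alice_greylist, now):
    # Bob probes Alice before accepting her message; the probe triplet
    # (Bob's IP, null sender, Alice's address) is a callout convention
    # assumed here for illustration.
    probe = ("203.0.113.5", "<>", "alice@example.org")
    probe_result = alice_greylist.respond(probe, now)
    # Bob mirrors the probe result back onto Alice's delivery attempt.
    return probe_result

alice = Greylister()
print(bob_handles_delivery(alice, now=0))    # 4xx: probe greylisted
print(bob_handles_delivery(alice, now=600))  # 2xx: probe triplet has aged
```

In this toy model delivery eventually clears, but only because Bob
retries with a stable probe triplet inside the lifetime of Alice's
greylist entry; as the thread goes on to argue, if either side departs
from normal retry behaviour, it may never clear.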
> I *can* see a scenario where delivery might never happen (I am ignoring
> configuration error scenarios on Alice's side), but it depends on Alice also
> doing the same type of sender verification, and on one or both sides
> violating RFC 2821.
Doing sender verification and graylisting are both violations of the
RFCs. You can hardly say "this will work as long as everyone else
follows the RFC" when you aren't doing so yourself. My point is that
this is a case where two RFC-noncompliant spam pseudo-solutions
interact badly: each makes up its own new requirements, not found in
the RFCs, and those invented requirements clash.
If your system causes any RFC-compliant mail to lose, then your system
loses. So far you have argued at best that you are willing to ignore
the cases where it loses. Great. I'm not.