
Re: OT: Re: Recipient validation - WAS: Re: Moderated posts?



On Wed, Oct 15, 2014 at 9:01 PM, Miles Fidelman
<mfidelman@meetinghouse.net> wrote:
> Tanstaafl wrote:
>>
>> On 10/14/2014 1:58 PM, Miles Fidelman <mfidelman@meetinghouse.net> wrote:
>>>
>>> Well, this really is OT for debian-users, but....  Turns out that SMTP
>>> WAS/IS intended to be reliable.
>>
>> Reliable, absolutely. 100% reliable? That simply isn't possible when
>> people are involved in the equation (people mis-configure servers -
>> whether accidentally, through ignorance, or intentionally, just
>> because that is the way they want it).
>>
>>> I'd always lumped SMTP in the category of unreliable protocols, w/o
>>> guaranteed delivery
>>
>> The protocol itself is extremely reliable. It is what people *do* with
>> it that can cause it to become less reliable/resilient.

There are three ways in which machines can be unreliable.

One, they can break.

Two, they can do what they are told to do, but what they are told to
do can be wrong.

Three, they can operate in a context in which they were not designed to operate.

Unfortunately, most machines operate outside the context in which
they were designed to operate. It's a limitation of design. We are the
designers, and we can't think of everything, therefore we cannot
really design for a real context.

Put another way, any context we can design for is necessarily more
constrained than reality.

Fortunately, most of the contexts we design for are "close enough" to
be useful under many real contexts. But we have to quit being taken by
surprise when our machines hit corner cases, or we end up wasting our
energy being surprised.

That's one of the reasons the Requests For Comments were RFCs and not
standards dictated from on high (like many of the earlier network
definitions that ended up too inflexible).

> There is a technical distinction between "best efforts" (unreliable)
> protocols, such as IP ('fire and forget' if you will), and "reliable"
> protocols, such as TCP (with explicit acks and retransmits).
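
That distinction is easy to see at the socket level. Here is a minimal
sketch, with UDP standing in for IP's fire-and-forget datagram
semantics and TCP for the acked-and-retransmitted case; the port
number for the dead-letter UDP send is arbitrary:

```python
import socket

# "Best efforts": sendto() succeeds even though nobody is listening on
# the destination port -- the datagram is simply dropped somewhere.
# No acknowledgment, no error at the sender.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"hello", ("127.0.0.1", 49152))  # no listener here
udp.close()
print(sent)  # bytes handed to the network, not bytes delivered

# "Reliable": connect() performs a handshake, every segment is
# acknowledged, and the kernel retransmits anything unacked.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _ = listener.accept()
client.sendall(b"hello")
received = conn.recv(5)  # arrives, in order, or the connection errors

client.close()
conn.close()
listener.close()
```

The point of the sketch: the UDP send "succeeds" whether or not the
data goes anywhere, while the TCP bytes either arrive or the sender
finds out. SMTP sits on top of the reliable transport, which is part
of why calling the whole end-to-end mail path "reliable" gets murky.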
>
> At least in the technical circles I run in (BBN - you know, we built the
> ARPANET; Ray Tomlinson, who coined use of the @ sign in email nominally
> worked for me, for a short period - in a matrixy version of "worked for"),
> SMTP is usually discussed as providing a "best efforts" (unreliable) service
> -- which, in reality, it is (particularly in real world configurations where
> mail often gets relayed through multiple servers).
>
> So.. I was just a bit surprised to go back and read the RFC and discover
> that SMTP is explicitly intended to provide a reliable service.

If it is, that has changed.

Elsewhere in the RFC, outside the part you quoted, there used to be an
explanation of the self-contradictory nature of the requirements.

Specifically, machines cannot actually certify delivery (the illusions
of widely accepted PKI notwithstanding). That requires a human at both
ends of the chain, in addition to the possibly human sender and
recipient. RFC 821 messages were intended not to require any human in
the chain.

If that has changed, it would be because of the unreasoning demands of
people who want e-mail to be perfect in ways snail mail could only
almost be in the best of times: people who want to be able to do
things like sue other people for not complying with obscure rules when
informed of those rules by e-mail.

> As to "100% reliable" - nothing is 100% reliable.
>
> Miles Fidelman
>
> --
> In theory, there is no difference between theory and practice.
> In practice, there is.   .... Yogi Berra

-- 
Joel Rees

Be careful when you see conspiracy.
Look first in your own heart,
and ask yourself if you are not your own worst enemy.
Arm yourself with knowledge of yourself.

