
Re: mailing list vs "the futur"

On 28/08/2018 12:10, Michael Stone wrote:
> On Tue, Aug 28, 2018 at 09:39:43AM +0200, tomas@tuxteam.de wrote:
> > No. I guess the thing is that *because* NNTP was comparatively efficient,
> > it was used for the "big stuff" (alt.pic.* anyone?). The point is that,
> > to reap the benefits of its efficiency, a provider has to set up an
> > NNTP server and do its care and feeding. And perhaps prune the newsgroups
> > it's ready to carry. A full feed was, for that time, taxing, but not
> > because NNTP was inefficient, but because that's where the big stuff
> > was. No one mailed pictures or archives around (unless, that is, to
> > punish the occasional spammer: X11 sources were mailed around, if I
> > remember correctly)

> NNTP was fairly inefficient for large binaries because they were repacked to 7 bits and then chopped up into small pieces, some of which tended to get lost--so either the entire thing was reposted or enough redundant information was sent to survive the loss of some pieces. And the servers kept exchanging the data whether anyone requested/looked at it or not.

NNTP was inefficient in this regard compared to what other protocol or protocols, exactly?

Compared to email? Well, email suffered from very similar issues transferring binaries.

Compared to DCC over IRC? (DCC being a then-popular one-to-one alternative to Usenet's one-to-many distribution model.) I must admit that I've never examined the details of the DCC protocol, but it is certainly inefficient in terms of user experience compared to Usenet over NNTP: in practice DCC was essentially synchronous, one transfer at a time, needing continuous user management, whereas Usenet gave the end user a time-efficient, asynchronous access mechanism without continuous management.

So what one-to-many distribution platforms or protocols existed in this timeframe against which to compare NNTP (or Usenet)?

And you are persisting in conflating NNTP with Usenet. The problem with Usenet (as you say) was the volume of binaries, which would have been a problem no matter what protocol was used to transfer them, regardless of the efficiency of NNTP. This problem with Usenet does not, however, translate into any kind of inherent efficiency problem with NNTP as a transfer mechanism for discussions.

May I ask, did you use Usenet in this timeframe? I ask because some of your comments remind me of training courses run for certain types of professional at that time, taught by people who themselves commonly had only limited, and sometimes very skewed and confused, experience of the systems and protocols they were supposedly experts on[1]. Thus what they taught was close to, but not quite, an accurate representation of how things really were. In particular, conflation of worldwide systems like Usenet with specific protocols like NNTP is an example of the inaccuracies and errors of comprehension that they passed on to their students. As I said in my other recent message, Usenet (at that time and now) relied and relies on NNTP, but NNTP is not tainted by the problems of Usenet.

If you are saying that NNTP was not designed to carry binaries then you are of course correct but (a) just like other protocols, it has been extended to do so, and (b), as I observed above, what are you comparing it to in terms of efficiency? As a one-to-many (not anonymous, see below) distribution medium it had no real alternative at the time.

> Heck, even the moderation (where it existed) was inefficient--first, transfer the spam; then, store the spam; transfer the cancel message; store the cancel message; check to see if the spam is in the stored messages; finally, delete the spam or wait for it to be transferred.

Certainly, NNTP moderation over the federated Usenet system was far from ideal but, once again, let's remember that this is not a problem for a discussion group that is not shared over Usenet. Moderation using NNTP in this context (i.e. the context under discussion here) is actually better than with a mailing list and not a lot different to a web forum.
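For what it's worth, the store-then-cancel sequence quoted above can be sketched in a few lines of Python. This is an illustrative toy with hypothetical names, not any real server's code:

```python
# Toy sketch of Usenet-style cancel handling: both the article (spam)
# and the cancel message are transferred and processed at every peer.
stored = {}            # message-id -> article body
pending_cancels = set()  # cancels that arrived before their target

def receive_article(msg_id, body):
    """An article arrives from a peer: transferred, then stored."""
    if msg_id in pending_cancels:       # cancel arrived first
        pending_cancels.discard(msg_id)
        return                          # drop the spam on arrival
    stored[msg_id] = body

def receive_cancel(target_id):
    """A cancel message arrives: also transferred and processed."""
    if target_id in stored:
        del stored[target_id]           # delete the already-stored spam
    else:
        pending_cancels.add(target_id)  # wait for the article to arrive
```

Either way the spam and the cancel both cross the wire and touch disk at every peer before anything can be deleted, which is the inefficiency being described.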

> Binaries on NNTP took off not because they were efficient, but because they were perceived to be more anonymous than direct transfers. (There are no central logs of which clients look at which specific content, and the full feed is deniable as to intent.)

I disagree. This attitude (that anonymity was the primary driver) is redolent of the confused or skewed training courses I referred to above. Whilst I can accept that some people may have perceived Usenet to be anonymous, they were of course wrong both then and now (and this was well known to technical users back at that time).

From what I recall, Usenet grew in popularity for binaries groups not because it was (supposedly, in some people's views) anonymous but because it was an efficient one-to-many distribution medium. In fact it was effectively the only one-to-many distribution medium available at all until the first peer-to-peer file sharing networks began to appear (which is why I wonder what other system or protocol you are comparing NNTP's binary transfer efficiency against).

I should add that I described Usenet as an "efficient" distribution medium above and it most certainly was efficient in this respect. Even though, as you say, NNTP needs to encode binaries, Usenet was still efficient because of its one-to-many capability and its asynchronous capability. It just worked.
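As a side note, the 7-bit repacking overhead mentioned earlier is easy to quantify. uuencode, the usual scheme of the era, maps every 3 bytes of binary data to 4 printable characters plus per-line framing. A rough back-of-the-envelope sketch using Python's standard binascii module (the sample payload is arbitrary):

```python
# Estimate the size inflation from uuencoding a binary payload.
import binascii

payload = bytes(range(256)) * 400  # ~100 KB of arbitrary binary data

encoded_len = 0
for i in range(0, len(payload), 45):  # uuencode works in 45-byte lines
    # b2a_uu emits a length byte, the 4-for-3 encoded data, and a newline
    encoded_len += len(binascii.b2a_uu(payload[i:i + 45]))

overhead = (encoded_len - len(payload)) / len(payload)
print(f"uuencoded size: {encoded_len} bytes "
      f"({overhead:.0%} larger than the {len(payload)}-byte original)")
```

That comes to well over a third extra on the wire before any redundant reposts, a baseline cost that email paid just as much as NNTP when carrying binaries over 7-bit transports.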

And let me reiterate that none of this history, whilst interesting, particularly relates to NNTP's continued suitability for discussion groups such as this one.

> It really doesn't seem like you ever looked at the stats on what fraction of the feed an ISP received was ever requested by any customer, or you wouldn't argue that this was an efficient mechanism. (But god forbid you stopped carrying alt.binaries.stupid.waste.of.space because then customers would tie up the support line complaining that your newsgroup count was lower than your competitor's newsgroup count.) Again, nice idea 30 years ago, but incapable of withstanding abuse on the modern internet.

You're still conflating Usenet with NNTP. What you refer to here was an issue with Usenet. This tells us nothing whatsoever about the suitability of NNTP for discussion group transport, something for which NNTP was and is ideal. This use of NNTP is nothing to do with Usenet and is nothing to do with Usenet's binary-related practical problems.

1: A more recent example of a very similar skewed and confused view of things is the Casio F-91W watch. Certain elements of US intelligence had noticed that many terrorist suspects arrested in Iraq were wearing the Casio F-91W watch model. The intelligence reports extrapolated this apparent correlation to suggest, amongst other things, that the watch was chosen because its alarm capabilities allowed an alarm to be set more than 24 hours in the future (in fact that particular model allows no such thing, although some other Casio models do). In truth, the Casio F-91W model was and still is popular with third-world terrorist suspects because (a) it is very cheap, and (b) it is produced in greater numbers than any other watch model in the world. That is, lots of people in third-world countries wear Casio F-91Ws, not just terrorists. And yet the intelligence people were ignorant of the wider popularity of the F-91W and extrapolated incorrectly from the limited (skewed) data set of which they were aware. Similar errors of limited vision, confusion, and skew were made in the timeframe we're discussing here by some people running training courses for professionals.

Mark Rousell
