
Re: mailing list vs "the futur"



On 28/08/2018 18:48, Michael Stone wrote:
> I guarantee that for large files FTP is more efficient, and that when one person is sending a file to a small number of other people, FTP is dramatically more efficient.

I am sure. But it still doesn't make FTP meaningfully comparable to Usenet or NNTP in the context of this sub-thread discussion.

> I guess NNTP binary distribution is more efficient in some theoretical world where exactly the right subscriptions are distributed to exactly the right people

I can only point you to the world as it actually stood, where binary distribution via Usenet (for certain types of binary and a certain type of user base) was outstandingly common at one time (as you of course know). FTP just wasn't a feasible candidate protocol for that particular use case. As such, yes, NNTP was efficient enough. As I said when I entered this sub-thread (with added comment in square brackets):

NNTP was inefficient in this regard compared to what other protocol or protocols, exactly?

Compared to email? Well, email suffered from very similar issues transferring binaries.

Compared to DCC over IRC? (DCC being a then-popular one-to-one alternative to Usenet's one-to-many distribution model.) I must admit that I've never examined the details of the DCC protocol, but in terms of user experience it was certainly inefficient compared to Usenet over NNTP: in practice DCC was essentially synchronous, one transfer at a time, requiring continuous user management, whereas Usenet gave the end user a time-efficient, asynchronous access mechanism that needed no such ongoing attention.

So what one-to-many distribution platforms or protocols existed in this timeframe against which to compare NNTP (or Usenet)?
I perhaps should have asked "NNTP was inefficient in this regard compared to what other relevant protocol or protocols, exactly?".

You have observed, quite correctly of course, that FTP is a more bandwidth-efficient protocol and that it was available for binary file transfers in the timeframe under discussion. The fact nonetheless remains that FTP did not, and does not, fulfil the particular mass-volume, mass-user, one-to-many use case to which Usenet was put at that time. FTP did not and does not have the federated, distributed, public-access nature that Usenet provided and that led to Usenet's success in this context.

Sure, Usenet became impossible for ISPs to cope with due to the volume of the binaries groups. But, from a user-experience perspective, it was very efficient indeed (for the reasons I enumerated in other messages) at the job it ended up being used for. And it was not significantly more bandwidth-inefficient than any other suitable or relevant system or protocol because, at the time, no other system or protocol could really fulfil the Usenet use case.

FTP, despite of course allowing more bandwidth-efficient binary transfers, still did not fulfil the same use case.
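To make the trade-off concrete, here is a deliberately simplified back-of-the-envelope model (my own illustration, not anything from the thread; it ignores protocol overhead, reposts, and uneven reader distribution). It assumes one origin poster, a fixed number of participating news servers, and that a Usenet article crosses the wide-area network roughly once per server, after which readers fetch it as local traffic, whereas with FTP every reader pulls the full file from the origin:

```python
def wan_bytes_ftp(readers: int, size: int) -> int:
    """Every reader downloads the full file from the origin over the WAN."""
    return readers * size

def wan_bytes_usenet(servers: int, size: int) -> int:
    """The article is flooded once to each participating server; readers
    then fetch it from their local server, counted here as local traffic."""
    return servers * size

if __name__ == "__main__":
    size = 10 * 1024 * 1024          # a hypothetical 10 MiB binary post
    servers = 100                    # hypothetical number of news servers
    for readers in (10, 100, 10_000):
        ftp = wan_bytes_ftp(readers, size)
        nntp = wan_bytes_usenet(servers, size)
        print(f"{readers:>6} readers: FTP {ftp:>15,} B   Usenet {nntp:>15,} B")
```

Under these assumptions the model agrees with both sides of the argument: for a handful of recipients FTP moves far fewer bytes, but once the readership exceeds the number of servers, the one-to-many flood wins, and it keeps winning no matter how large the readership grows.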

Anyway, this part of the discussion is really just more Usenet history. It has nothing to do with NNTP in a discussion-group context, which is why I initially commented in this thread.

> via local transit servers, with no reposts. We can probably just write the volume of such transfers off as noise in the real world.

You seem once again to be conflating Usenet's huge bandwidth problems, caused by the mass distribution of binaries, with the completely different NNTP use case that is the subject of this thread.



-- 
Mark Rousell
