
Re: tech proposal to make manoj somewhat happy.



Jason Gunthorpe <jgg@gpu.srv.ualberta.ca> wrote:
> Unfortunately there are some major problems

They're not all that major, except that (as you point out) http
is simpler and cleaner:

>  1) The RFC explicitly forbids pipelining FTP queries

Yes, it's not RFC-compliant, so you have to be careful about which FTP
servers you talk to.  This is no worse than, for example, expecting
mirror to handle symlinks (which is something we've relied on for years).

>  2) There is no firewall support

What kind of firewall has a problem with passive FTP?

I'm really curious about this one.
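
For what it's worth, passive mode gets along with client-side firewalls
precisely because the client opens the data connection itself: it sends
PASV, the server answers "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)",
and the client then connects *out* to that address, which most packet
filters permit.  A rough sketch of parsing that reply in C (the function
name and layout are just illustrative):

```c
#include <stdio.h>
#include <string.h>

/* Parse a "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" reply into a
   dotted-quad host and a port number.  The client then connects out to
   host:port for the data transfer.  Returns 0 on success, -1 if the
   reply doesn't match the expected shape. */
int parse_pasv(const char *reply, char *host, size_t hostlen, int *port)
{
    int h1, h2, h3, h4, p1, p2;
    const char *p = strchr(reply, '(');

    if (!p || sscanf(p, "(%d,%d,%d,%d,%d,%d)",
                     &h1, &h2, &h3, &h4, &p1, &p2) != 6)
        return -1;

    snprintf(host, hostlen, "%d.%d.%d.%d", h1, h2, h3, h4);
    *port = p1 * 256 + p2;  /* port is sent as two bytes, high then low */
    return 0;
}
```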

>  3) If it aborts there is no resume capability (nor any way to even detect
>     an abort)

Well, for detecting an aborted transfer, there's whatever error messages
appear on the control port, and the fact that gzip would gripe about a
premature end of file.
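
If we wanted the check to be explicit rather than waiting for gzip to
gripe mid-pipeline, a few lines of C could shell out to gzip -t, which
exits nonzero on a premature end of file.  The helper name is made up:

```c
#include <stdio.h>
#include <stdlib.h>

/* Return 1 if the downloaded .tgz decompresses cleanly, 0 otherwise
   (truncated transfer, corruption, or unreadable file).  Relies on
   gzip(1) exiting nonzero when it hits a premature end of file. */
int tgz_complete(const char *path)
{
    char cmd[512];
    snprintf(cmd, sizeof cmd, "gzip -t %s 2>/dev/null", path);
    return system(cmd) == 0;
}
```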

> If size is an issue then a single file http query could be coded in
> about 100-200 lines of C - it should be much smaller and more reliable
> than using nc.

Yes, maybe even less than that.  I was originally thinking that http
had too many complexities, but now that I think about it those are 
all in the server.
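
To make that concrete, here's one rough sketch of what such a
single-file fetch could look like: an HTTP/1.0 GET over a plain socket,
with a small helper to find where the headers end.  Names, buffer
sizes, and error handling are illustrative, not a worked-out design:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Locate the start of the response body: everything after the blank
   line that ends the HTTP headers.  Returns NULL if the headers are
   not yet complete. */
const char *http_body(const char *response)
{
    const char *p = strstr(response, "\r\n\r\n");
    return p ? p + 4 : NULL;
}

/* Fetch http://host/path and write the response body to out.
   Returns 0 on success, -1 on any error. */
int http_get(const char *host, const char *path, FILE *out)
{
    struct hostent *he = gethostbyname(host);
    if (!he)
        return -1;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof sa);
    sa.sin_family = AF_INET;
    sa.sin_port = htons(80);
    memcpy(&sa.sin_addr, he->h_addr_list[0], he->h_length);
    if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0) {
        close(fd);
        return -1;
    }

    char req[512];
    snprintf(req, sizeof req, "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n",
             path, host);
    write(fd, req, strlen(req));

    /* Buffer until the end of the headers, then stream the body. */
    char buf[8192];
    size_t used = 0;
    ssize_t n;
    const char *body = NULL;
    while (!body && used < sizeof buf - 1 &&
           (n = read(fd, buf + used, sizeof buf - 1 - used)) > 0) {
        used += n;
        buf[used] = '\0';
        body = http_body(buf);
    }
    if (!body) {
        close(fd);
        return -1;
    }

    fwrite(body, 1, used - (body - buf), out);
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, out);
    close(fd);
    return 0;
}
```

No redirects, no resume, no HTTP/1.1 chunking - but for fetching one
base tgz from one known server, that may be all that's needed.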

Anyway, since it would be so much simpler not to make every user decide
where to get the base tgz from, it would probably be best to dedicate a
DNS name to this HTTP server.  That way, if we get popular enough to
run into performance problems, we could move it onto its own machine
(or machines, if we get that far).

-- 
Raul

