
Re: Debian Smart Upload Server



> After talking with Joerg on IRC, based on a comment of his that
> Debian should have a smart upload mechanism instead of FTP, I came up
> with the following. Please review, inspect, and maybe comment on it.

Thanks for the proposal and start of discussion.

Now, after reading this and the replies, here is what I imagine for a
new upload server:

 - It should know about Debian packages and what they look like.

 - An upload starts with a .changes file; it does not end with one.

 - The server checks the signature on the .changes: valid, from a
   known keyring, etc.

 - Only files listed in that .changes are then accepted, and only for
   a limited period of time (either hardcoded, like a day or so, or
   derived from the largest file in the upload, keeping usual upload
   speeds in mind).

 - Files are passed through various check functions right after we
   receive them, and we can tell the client immediately whether we
   are probably going to ACCEPT it, or that we will REJECT it anyway.
   So for multi-file packages, the client can stop wasting upload
   time after the first REJECT. And once everything got an ACCEPT,
   you know it is most probably going to get in, unless something
   extra special happens later on. (A rough sketch of this flow
   follows the list.)
   I guess the ability to immediately abort an upload when the first
   file gets REJECTed is something maintainers of large packages
   would love. (Imagine an md5sum error in your first file, or
   somehow trying to upload an older version than what's there
   already.)

 - It should be able to deal with "DON'T DO WORK" situations, and in
   that case not check anything, just put the files into a queue dir
   (and tell the client so, i.e. not saying ACCEPT but INTERIMACCEPT
   or whatever). We might want to do larger archive work during which
   we don't want uploads processed, but why should we then disallow
   uploading at all?
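
A rough sketch of that per-session flow, in Python. The helper
functions and the UploadSession name are invented here; the real
checks would run gpgv against the uploader keyrings and the usual
archive policy:

    import os
    import time

    class Reject(Exception):
        pass

    # Hypothetical helpers standing in for the real signature and
    # policy machinery.
    def verify_changes_signature(path): ...
    def parse_changes(path): ...            # -> {filename: checksums}
    def run_checks(name, path, expected): ...  # -> None or error text

    UPLOAD_WINDOW = 24 * 60 * 60            # "a day or so", in seconds

    class UploadSession:
        def __init__(self, changes_path, queue_only=False):
            # The upload starts with the .changes file.
            if not verify_changes_signature(changes_path):
                raise Reject("bad or unknown signature on .changes")
            self.expected = parse_changes(changes_path)
            self.deadline = time.time() + UPLOAD_WINDOW
            self.queue_only = queue_only    # "DON'T DO WORK" mode

        def receive(self, name, path):
            if time.time() > self.deadline:
                raise Reject("upload window expired")
            if name not in self.expected:
                raise Reject("%s not listed in the .changes" % name)
            if self.queue_only:
                # Larger archive work going on: take the file, defer
                # all checks, tell the client INTERIMACCEPT.
                os.rename(path, os.path.join("queue", name))
                return "INTERIMACCEPT"
            # Check right after receiving, so the client can abort
            # the whole upload on the first REJECT.
            problem = run_checks(name, path, self.expected[name])
            if problem:
                raise Reject(problem)
            return "ACCEPT"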

Now, whether that thing takes FTP, HTTP, rsync or whatnot as the
transport protocol, I don't care much. Most probably HTTP makes the
most sense, but FTP should also work. If we don't have to maintain
the transport protocol on our own, that's a hundred times better and
should very much be preferred.
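
To illustrate with HTTP: a client for such a server can stay this
dumb, using only the stock http.client module. The /upload/<name>
URL scheme and the ACCEPT/REJECT reply bodies are made up for the
example:

    import http.client

    def upload(host, changes, files):
        conn = http.client.HTTPConnection(host)
        # The .changes goes first and opens the session; every other
        # file has to be listed in it.
        for name in [changes] + files:
            with open(name, "rb") as f:
                conn.request("PUT", "/upload/" + name, f)
            reply = conn.getresponse().read().decode()
            print(name, "->", reply)
            if reply.startswith("REJECT"):
                return False        # first REJECT: stop uploading
        return True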

What *I* would like to get away from is the situation we have now:
something puts the files into one directory, where they are then, at
regular intervals, looked at by a script and dealt with. Whether that
is debianqueued or something inotify-based doesn't matter; the
disadvantage is that all the processing happens after everything got
in, detached from the user.
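
For contrast, the pattern we have today boils down to a loop like
this (simplified, debianqueued does a lot more, but the shape is the
point):

    import glob
    import time

    def process(changes):
        ...   # hypothetical: verify, check, move into the archive

    while True:
        # Wake up at regular intervals, long after the uploader has
        # disconnected; any error can only reach them by mail.
        for changes in glob.glob("/srv/queue/*.changes"):
            process(changes)
        time.sleep(60)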

Adjusting this proposal to be independent of the underlying transport
protocol should be possible, right?
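
It should be, as long as the upload logic only ever talks to a small
transport interface. A minimal sketch of that separation, reusing the
UploadSession and Reject from the sketch above (all names invented
here):

    from abc import ABC, abstractmethod

    class Transport(ABC):
        """Whatever moves the bytes: HTTP, FTP, rsync, ..."""

        @abstractmethod
        def next_file(self):
            """Return (name, local_path) of the next file, or None."""

        @abstractmethod
        def reply(self, verdict):
            """Send ACCEPT/REJECT/INTERIMACCEPT to the client."""

    def serve(transport, session):
        # The checking logic never learns which protocol is beneath.
        while True:
            item = transport.next_file()
            if item is None:
                break
            name, path = item
            try:
                transport.reply(session.receive(name, path))
            except Reject as e:
                transport.reply("REJECT: %s" % e)
                break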

-- 
bye, Joerg
<dloer> Joey, what exactly are you, officially, in the Debian project?


