Re: Apt & rsync
As I understand it, a .deb consists of a couple of .tar.gz files - why not
tear the deb apart, uncompress the .tar.gz's, and then rsync the
uncompressed data?
On Sat, 16 Oct 1999, Jason Gunthorpe wrote:
> On Fri, 15 Oct 1999, Dylan Thurston wrote:
> > > On Fri, 1 Oct 1999, Gary Allpike wrote:
> > > > Could apt be made to use rsync ??
> > > No, rsync is not suited to such a task.
> > rsync seems quite well suited to the task; it's just that, as you point
> > out:
> Let me be more clear: we will never get mirrors to run an anonymous rsyncd
> with a decent user limit, because rsyncd takes up crazy amounts of
> CPU/memory.
> > > Nope, the gzip compression scrambles the contents so that rsync doesn't
> > > have any effect.
> > This is quite true, but raises the obvious question: why not change gzip
> > so that it doesn't scramble the contents so badly? This would have a
> > slight cost in compression percentage, but bandwidth gains should more
> > than make up for it. Andrew Tridgell addresses the issue in his original
> Although interesting, it seems to me that this presumes that you have
> access to the previous uncompressed version in order to get the
> 'pre-determined' hash value.
> Otherwise this compression algorithm wouldn't be a terribly bad thing to
> have around; we already use rsync for mirroring, but we mirror about 100
> meg of .gz files each day because of this problem :<
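The scrambling effect is easy to demonstrate. Below is a rough sketch
(Python, using zlib's deflate, which is the same compression gzip uses
internally); the fixed-offset block comparison here is only a crude
stand-in for rsync's real rolling-checksum, any-offset matching, but it
shows how a one-byte edit leaves the plaintext almost entirely reusable
while the compressed stream diverges wholesale:

```python
import zlib

# A few thousand lines of moderately compressible text.
base = b"".join(b"the quick brown fox %d jumps over the lazy dog\n" % i
                for i in range(2000))
# Flip one byte about 10% of the way in -- the kind of small edit a new
# package revision might make.
pos = len(base) // 10
edited = base[:pos] + b"#" + base[pos + 1:]

ca = zlib.compress(base, 9)
cb = zlib.compress(edited, 9)

def match_fraction(new, old, block=64):
    """Fraction of fixed-size blocks of `new` found anywhere in `old`
    (a crude approximation of rsync's block matching)."""
    blocks = [new[i:i + block] for i in range(0, len(new) - block, block)]
    return sum(b in old for b in blocks) / len(blocks)

plain = match_fraction(edited, base)   # nearly all blocks reusable
comp = match_fraction(cb, ca)          # almost nothing reusable
print("plaintext block reuse: %.2f" % plain)
print("compressed block reuse: %.2f" % comp)
```

The single changed byte alters the deflate match offsets and Huffman
tables, so the compressed output differs from near the edit point all the
way to the end of the file.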
> > Is there interest in this? Is it a good idea? My weekends are a bit busy
> > just now, but it sounds like a fun project.
> What I would like to see is an abuse of HTTP that would allow a mod-rsync
> to be written for Apache. It would have exactly two functions: send the
> set of checksums for a file, and send a given set of fragments. Unlike
> rsyncd, the server would then be very lightweight, and everything
> complicated and CPU/IO-intensive could be implemented client-side. A
> mechanism to cache checksums could even be implemented...
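As a rough illustration of the two server functions and the client-side
reconstruction (all names and the block size are invented for this sketch,
and the fixed-offset block matching is a simplification of rsync's
any-offset rolling search):

```python
import hashlib
import zlib

BLOCK = 4096  # illustrative block size

def block_checksums(data, block=BLOCK):
    """Server function 1: one (weak, strong) checksum pair per block.
    Cheap to compute, and a natural candidate for caching."""
    return [(zlib.adler32(data[i:i + block]),
             hashlib.md5(data[i:i + block]).hexdigest())
            for i in range(0, len(data), block)]

def fragments(data, wanted, block=BLOCK):
    """Server function 2: return the requested blocks verbatim,
    much like an HTTP multi-range request."""
    return {i: data[i * block:(i + 1) * block] for i in wanted}

def sync(local, remote_sums, fetch, block=BLOCK):
    """Client side: diff local checksums against the server's, fetch
    only the blocks that changed, splice them into the local copy."""
    local_sums = block_checksums(local, block)
    missing = [i for i, s in enumerate(remote_sums)
               if i >= len(local_sums) or local_sums[i] != s]
    got = fetch(missing)
    return b"".join(got[i] if i in got else local[i * block:(i + 1) * block]
                    for i in range(len(remote_sums)))
```

All the comparison work happens in `sync` on the client; the server only
ever hashes and slices, which is what would keep such a module lightweight
compared to a full rsyncd.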
> Something I have been meaning to write is an rsync-like program that uses
> a detached, precomputed checksum file. It would operate like the
> Pseudo-image kit, but instead of operating blindly on a mirror it would
> reconstruct the initial pass exactly using the checksum information and
> then move directly to HTTP-fetching the missing portions... Right now,
> using the Pseudo-image kit and rsync is extremely hard on both the server
> and the client :<
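A sketch of what such a detached checksum file might look like (the file
format, block size, and function names here are invented for illustration):
generate one strong checksum per block ahead of time, then a client can
verify its locally reconstructed image against the detached sums and fetch
only the broken byte ranges over plain HTTP.

```python
import hashlib

BLOCK = 65536  # illustrative block size

def write_sums(data, block=BLOCK):
    """Precompute a detached checksum file: a header with the total
    length and block size, then one SHA-1 per block."""
    lines = [b"%d %d" % (len(data), block)]
    for i in range(0, len(data), block):
        lines.append(hashlib.sha1(data[i:i + block]).hexdigest().encode())
    return b"\n".join(lines)

def bad_ranges(candidate, sums):
    """Check a locally reconstructed image against the detached sums and
    return the (offset, length) ranges that must be re-fetched."""
    header, *digests = sums.split(b"\n")
    total, block = map(int, header.split())
    ranges = []
    for i, d in enumerate(digests):
        lo = i * block
        hi = min(lo + block, total)
        chunk = candidate[lo:hi]
        if len(chunk) != hi - lo or \
           hashlib.sha1(chunk).hexdigest().encode() != d:
            ranges.append((lo, hi - lo))
    return ranges
```

The checksum file is computed once on the mirror, so the server never runs
the expensive comparison at request time; the client does all the hashing
and then issues ordinary range requests for whatever `bad_ranges` reports.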