
Re: Bug#656142: ITP: duff -- Duplicate file finder



On Tue, Jan 17, 2012 at 02:18:48PM +0100, Lech Karol Pawłaszek wrote:
> On 01/17/2012 02:05 PM, Samuel Thibault wrote:
> > Roland Mas, on Tue 17 Jan 2012 13:41:23 +0100, wrote:
> >> Samuel Thibault, 2012-01-17 12:03:41 +0100 :
> >>
> >> [...]
> >>
> >>> I'm not sure I understand what you mean exactly. If you have even
> >>> just a hundred files of the same size, you will need ten thousand file
> >>> comparisons!
> >>
> >>   I'm sure that can be optimised.  Read all 100 files in parallel,
> >> comparing blocks at the same offset.  You need to perform 99 comparisons
> >> on each block for as long as blocks are identical;
> > 
> > Ah, right. So you'll start writing yet another tool? ;)
> 
> ;-) Huh. Couldn't help thinking about this (obligatory) XKCD:
> 
> http://xkcd.com/927/

Besides, don't we already have fdupes?  How many times does this need to
be implemented?  Someone should just implement one perfect duplicate
file finder.

-- 
Len Sorensen
