
Re: Bits from /me: A humble draft policy on "deep learning v.s. freedom"



>>>>> "Mo" == Mo Zhou <lumin@debian.org> writes:

    Mo> Hi Holger, Yes, that section is about bit-by-bit
    Mo> reproducibility, and an identical hashsum is expected. Let's
    Mo> call it "Bit-by-Bit reproducible".

    Mo> I updated that section to make the definition of "reproducible"
    Mo> explicit. And the strongest one is discussed by default.

    Mo> However, I'm not sure whether "bit-by-bit" is easy to break for
    Mo> some obscure reason in a complex system (e.g. floating point
    Mo> precision problems, time stamps hidden in the stored model). And
    Mo> I've never tried to compare my neural nets with hashsums...  I
    Mo> compare curves and digits instead ...  I need some time to think
    Mo> about it, verify, and refine the definition.

So, I think it's problematic to apply old assumptions to new areas.  The
reproducible builds world has gotten a lot further with bit-for-bit
identical builds than I ever imagined it would.

However, what's actually needed in the deep learning context is weaker
than bit-for-bit identical.  What we need is a way to validate that two
models are identical for some equality predicate that meets our security
and safety (and freedom) concerns.  Parallel computation in the
training, the sort of floating point issues you point to, and a lot of
other things may make bit-for-bit identical models hard to come by.
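To make that concrete, here's a rough sketch of the kind of weaker
equality predicate I mean, in Python.  It assumes the two models are
just dictionaries of parameter names to numpy arrays; the names and
tolerances are purely illustrative, not a proposal for specific values:

    import numpy as np

    def models_equal(model_a, model_b, rtol=1e-5, atol=1e-8):
        # model_a / model_b: dicts mapping parameter names to numpy
        # arrays, e.g. weights loaded from two independently rebuilt
        # checkpoints.
        if model_a.keys() != model_b.keys():
            return False
        for name in model_a:
            a, b = model_a[name], model_b[name]
            if a.shape != b.shape:
                return False
            # np.allclose absorbs the small floating point differences
            # that parallel or non-deterministic training can introduce.
            if not np.allclose(a, b, rtol=rtol, atol=atol):
                return False
        return True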

Obviously we need to validate the correctness of whatever comparison
function we use.  A checksum match is relatively easy to validate.  A
comparison that, for example, understands floating point numbers would
have a greater potential for bugs than an implementation of, say, sha256.
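
For contrast, the bit-for-bit check really is just a hash comparison of
the serialized model files, something like the following (the file names
here are made up):

    import hashlib

    def file_sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # "Bit-by-bit reproducible" means these two digests are identical.
    print(file_sha256("model_a.pt") == file_sha256("model_b.pt"))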

So, yeah, bit-for-bit identical is great if we can get it.  But
validating these models is important enough that, if we need to use a
different equality predicate, it's still worth doing.

--Sam

