On 21 May 2019 at 13:45, Mo Zhou <lumin@debian.org> wrote:
It's always good if we can do these things purely with our archive.
However, sometimes it's just not easy to enforce: the datasets used by deep
learning are generally large (several hundred MB to several TB, or even more).
And even with the data in hand, training may require an extremely powerful
machine *and* weeks of computation, *and* some of the algorithms aren't
deterministic. So reproducibility is a problem not only for Debian but for
the scientific community at large.