Re: Keras (abstraction layer for Theano and TensorFlow) seeks an adopter
Hello everyone,
I'd like to provide some information about this.
Yoshua Bengio (one of the key people behind Theano) has announced the
end of Theano's development, so we should not spend too much time on it.
TensorFlow's computation graph can be considered a successor to Theano's
symbolic graph engine.
Here is a slide from Stanford's Computer Vision class:
http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture8.pdf
From the slides, especially pp. 151-152, we know that
(1) Caffe/Caffe2, Theano/Tensorflow, Torch/Pytorch are what
these specialists recommend to the students.
(2) Caffe, Caffe2, Tensorflow are recommended for production.
(3) Pytorch is recommended for research.
When linked against cuDNN [1], these deep learning frameworks run many
times faster than in CPU mode, e.g.:
  Caffe (CUDA w/o cuDNN)  >= 10x * Caffe (CPU)
  Caffe (CUDA with cuDNN) >=  4x * Caffe (CUDA w/o cuDNN)
This is my "caffe time" test result on a Titan X (Pascal) card,
with the AlexNet deploy model shipped in the Caffe source.
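Since both figures are lower bounds, they compose multiplicatively into
at least a 40x speedup of cuDNN-enabled Caffe over CPU mode. A tiny
sketch of that arithmetic (the factors are the measured lower bounds,
not exact numbers):

```python
# Combined speedup estimate from the two lower bounds measured above.
cuda_over_cpu = 10.0    # Caffe (CUDA w/o cuDNN) vs. Caffe (CPU)
cudnn_over_cuda = 4.0   # Caffe (CUDA + cuDNN) vs. Caffe (CUDA w/o cuDNN)

# Independent speedup factors multiply: cuDNN build vs. plain CPU build.
cudnn_over_cpu = cuda_over_cpu * cudnn_over_cuda
print(cudnn_over_cpu)  # -> 40.0
```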
* Caffe and Torch are currently maintained by me, and I have indeed
  used them in my research work.
* Caffe2 is designed for production, but its code base is not
  that stable yet.
* My pytorch packaging is basically done. It just needs some
  tweaks and a patch that disables the SIMD instruction sets.
* cuDNN is valuable to package. I have a working packaging
  script for it. Currently I'm contacting NVIDIA to sort out
  some legal issues...
* Tensorflow, the current hotspot, is certainly valuable to package,
  but it is somewhat difficult. Debian developer Paul Liu has tried
  to work on it (incl. bazel), but things turned out not to be easy.
  Apart from the bazel build, Tensorflow also has a CMake build;
  however, the CMake build files need a fair number of patches to
  avoid violating the policy. Tensorboard is a notable tool and also
  very valuable to package. I'm now a fan of Pytorch and currently
  have no plan to use tensorflow in my research work, so I don't
  want to touch it...
Best.
[1] Magically optimized CUDA deep neural network library.