
Bug#952640: ITP: armnn -- Arm NN is an inference engine for CPUs, GPUs and NPUs



On 2020-02-26 18:26 +0000, Wookey wrote:

Turns out that I had misunderstood the specificity/architecture of
this package. It does in fact work on all architectures, but is
currently only accelerated on arm64 and armhf. Other architectures,
hardware and GPUs can be supported in the framework, but currently
are not.

So a revised description:

  Arm NN is a set of tools that enables machine learning workloads on
  any hardware. It provides a bridge between existing neural network
  frameworks and whatever hardware is available and supported. On Arm
  architectures (arm64 and armhf) it utilizes the Arm Compute Library
  to target Cortex-A CPUs, Mali GPUs and Ethos NPUs as efficiently as
  possible. On other architectures/hardware it falls back to unoptimised
  functions.
  
  This release supports Caffe, TensorFlow, TensorFlow Lite, and ONNX.
  Arm NN takes networks from these frameworks, translates them
  to the internal Arm NN format and then through the Arm Compute Library,
  deploys them efficiently on Cortex-A CPUs, and, if present, Mali GPUs. 

Wookey
-- 
Principal hats:  Linaro, Debian, Wookware, ARM
http://wookware.org/
