Minerva: a scalable and highly efficient deep learning training platform

Published in NeurIPS 2014 Workshop on Distributed Matrix Computations, 2014

Recommended citation: M. Wang, T. Xiao, J. Li, J. Zhang, C. Hong, Z. Zhang. "Minerva: a scalable and highly efficient deep learning training platform." In NeurIPS 2014 Workshop on Distributed Matrix Computations, 2014.

Abstract

The tooling landscape of deep learning is fragmented by a growing gap between generic, productivity-oriented tools that optimize for algorithm development and task-specific ones that optimize for speed and scale. This creates an artificial barrier to bringing new innovations into real-world applications. Minerva addresses this issue with a layered design that provides language flexibility and execution efficiency simultaneously within one coherent framework. It offers a matrix-based API, resulting in compact code and a MATLAB-like, imperative and procedural coding style. The code is dynamically translated into an internal dataflow representation, which is then executed efficiently on different hardware. The same user code runs on a modern laptop or workstation, a high-end multi-core server, or a server cluster, with or without GPU acceleration, delivering performance and scalability better than or competitive with existing tools on each platform.
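To make the layered design concrete, here is a minimal sketch of the general technique the abstract describes: imperative, matrix-style user code is recorded as a dataflow graph and only executed when a result is demanded. The names below (`Matrix`, `eval`, the NumPy backend) are illustrative assumptions, not Minerva's actual API.

```python
# Sketch: imperative matrix code recorded as a lazy dataflow graph.
# Names and backend are hypothetical, not Minerva's real interface.
import numpy as np

class Matrix:
    """A lazy matrix: operations build a dataflow graph instead of computing."""
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # operation name, or "const" for concrete data
        self.inputs = inputs  # upstream Matrix nodes in the dataflow graph
        self.value = value    # cached result once evaluated

    @staticmethod
    def const(array):
        return Matrix("const", value=np.asarray(array, dtype=float))

    def __add__(self, other):
        return Matrix("add", (self, other))

    def __matmul__(self, other):
        return Matrix("matmul", (self, other))

    def eval(self):
        """Walk the graph and execute it; a stand-in for a backend that
        could instead dispatch to multi-core CPUs, GPUs, or a cluster."""
        if self.value is None:
            args = [m.eval() for m in self.inputs]
            if self.op == "add":
                self.value = args[0] + args[1]
            elif self.op == "matmul":
                self.value = args[0] @ args[1]
        return self.value

# Imperative, MATLAB-like user code: no graph is visible to the user...
w = Matrix.const(np.random.randn(4, 3))
x = Matrix.const(np.random.randn(3, 2))
b = Matrix.const(np.zeros((4, 2)))
y = w @ x + b          # ...yet this only records "matmul" and "add" nodes.

print(y.eval().shape)  # (4, 2): execution happens here, against the backend.
```

Because the user-visible code never names the execution target, the same script could in principle be replayed against any backend the dataflow layer supports, which is the portability claim made above.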
