First cuBLAS, then NPP, now cuDNN (the CUDA Deep Neural Network library). Announced on September 7, it is a library of GPU-accelerated primitives for deep neural networks that can be dropped into higher-level frameworks such as Berkeley's Caffe. Thanks, NVIDIA!
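For a rough feel of what "dropping it in" looks like at the API level, here is a minimal sketch of one convolution forward pass with cuDNN. It assumes the descriptor-based API of later cuDNN releases (v6+), which has drifted a bit since the original 2014 version; the tensor and filter sizes are arbitrary, and error checking and data initialization are omitted, so treat it as illustrative rather than production code.

    #include <cudnn.h>
    #include <cuda_runtime.h>

    int main() {
        cudnnHandle_t handle;
        cudnnCreate(&handle);

        // Describe a single 3-channel 224x224 input in NCHW layout (sizes are arbitrary).
        cudnnTensorDescriptor_t xDesc;
        cudnnCreateTensorDescriptor(&xDesc);
        cudnnSetTensor4dDescriptor(xDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                   1, 3, 224, 224);

        // 32 filters of size 3x3 over 3 input channels.
        cudnnFilterDescriptor_t wDesc;
        cudnnCreateFilterDescriptor(&wDesc);
        cudnnSetFilter4dDescriptor(wDesc, CUDNN_DATA_FLOAT, CUDNN_TENSOR_NCHW,
                                   32, 3, 3, 3);

        // 3x3 convolution, pad 1, stride 1, no dilation.
        cudnnConvolutionDescriptor_t convDesc;
        cudnnCreateConvolutionDescriptor(&convDesc);
        cudnnSetConvolution2dDescriptor(convDesc, 1, 1, 1, 1, 1, 1,
                                        CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);

        // Let cuDNN compute the output shape, then describe the output tensor.
        int n, c, h, w;
        cudnnGetConvolution2dForwardOutputDim(convDesc, xDesc, wDesc, &n, &c, &h, &w);
        cudnnTensorDescriptor_t yDesc;
        cudnnCreateTensorDescriptor(&yDesc);
        cudnnSetTensor4dDescriptor(yDesc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT, n, c, h, w);

        // Device buffers (left uninitialized here, so the output is meaningless).
        float *d_x, *d_w, *d_y;
        cudaMalloc((void**)&d_x, 1 * 3 * 224 * 224 * sizeof(float));
        cudaMalloc((void**)&d_w, 32 * 3 * 3 * 3 * sizeof(float));
        cudaMalloc((void**)&d_y, (size_t)n * c * h * w * sizeof(float));

        // Pick an algorithm, query its workspace requirement, and run the forward pass.
        cudnnConvolutionFwdAlgo_t algo = CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM;
        size_t wsBytes = 0;
        cudnnGetConvolutionForwardWorkspaceSize(handle, xDesc, wDesc, convDesc, yDesc,
                                                algo, &wsBytes);
        void* d_ws = NULL;
        if (wsBytes > 0) cudaMalloc(&d_ws, wsBytes);

        const float alpha = 1.0f, beta = 0.0f;
        cudnnConvolutionForward(handle, &alpha, xDesc, d_x, wDesc, d_w, convDesc,
                                algo, d_ws, wsBytes, &beta, yDesc, d_y);

        // Cleanup.
        cudaFree(d_ws); cudaFree(d_y); cudaFree(d_w); cudaFree(d_x);
        cudnnDestroyTensorDescriptor(yDesc);
        cudnnDestroyConvolutionDescriptor(convDesc);
        cudnnDestroyFilterDescriptor(wDesc);
        cudnnDestroyTensorDescriptor(xDesc);
        cudnnDestroy(handle);
        return 0;
    }

A framework like Caffe wraps exactly this kind of descriptor-and-call pattern inside its layer implementations, which is why the library can be swapped in without changing the network definition.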
UPDATE 1: By the way, NVIDIA Performance Primitives (NPP) is an awesome GPU-accelerated clone of Intel's Integrated Performance Primitives (IPP), which are pretty awesome too! Interestingly, Intel is in the process of dropping internal multi-threading support from IPP. In all fairness, the strength of IPP is how well it utilizes CPU-based SIMD instructions, rather than how well it exercises OpenMP internally for multi-threading. The emphasis is now on managing threading at a higher level, such as with Threading Building Blocks (TBB) and Cilk Plus.
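To show what "threading at a higher level" means in practice, here is a minimal TBB sketch with an assumed toy workload (doubling an array): parallel_for splits the index range into chunks and schedules them across worker threads, so the caller never creates threads or writes OpenMP pragmas.

    #include <tbb/parallel_for.h>
    #include <tbb/blocked_range.h>
    #include <vector>

    int main() {
        std::vector<float> data(1 << 20, 1.0f);

        // TBB owns the thread pool and the work partitioning; we only supply
        // the range and the per-chunk body.
        tbb::parallel_for(tbb::blocked_range<size_t>(0, data.size()),
            [&](const tbb::blocked_range<size_t>& r) {
                for (size_t i = r.begin(); i != r.end(); ++i)
                    data[i] *= 2.0f;   // placeholder workload
            });
        return 0;
    }

The low-level SIMD kernels (IPP's strong suit) then sit inside the loop body, while the library above decides how the work is spread across cores.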
UPDATE 2: DNNs are big business. In March 2013, the headlines read "Google Has Bought A Startup To Help It Recognize Voices And Objects".
#quickie