Artificial intelligence in general, and deep learning in particular, has been one of the most important technologies adopted this year. Many open-source and commercial deep learning tools are emerging to solve problems ranging from image processing to fraud detection. Google's TensorFlow, Facebook's Torch, and Amazon's DSSTNE are a few notable deep learning open-source releases this year.
Deep learning workloads require a different type of compute: massively parallel, GPU/CUDA-style in-memory processing. Earlier this year, Nvidia announced the Tesla P100 processor and the DGX-1, a machine built specifically for running deep learning and artificial intelligence workloads. Nvidia also provides a Deep Learning SDK for developers.
Intel's first answer to this was the 72-core Intel Xeon Phi chip, coupled with an on-package high-bandwidth memory subsystem (Multi-Channel DRAM) and an integrated fabric technology called Intel® Omni-Path Architecture (Intel® OPA). However, Intel still needed a stronger software framework and a better chipset. It achieved both with the acquisition of Nervana, founded by ex-Qualcomm researcher Naveen Rao.
According to an unofficial source (Recode), the 2.5-year-old Nervana was acquired for more than $400 million. The deal gives Intel Neon, a fast deep learning framework, and the upcoming Nervana Engine, an ASIC chipset slated for release in 2017. This acquisition positions Intel to compete more directly in the deep learning market.