
KNL+FPGA, a Perfect Match for Deep Learning Acceleration

SAN FRANCISCO, August 18, 2016 /PRNewswire/ -- Inspur, a leading data center product and solutions provider, showcased its latest innovative products at IDF16. At the conference, John Hu, CTO and Vice President of Inspur Group, gave a presentation on advanced technology in which he analyzed the characteristics of current deep learning applications and advanced computing infrastructure, stating that KNL+FPGA is a perfect match for accelerating deep learning.

When conducting offline model training, computing systems often need to handle very large amounts of data, so training times are long and large-scale computing resources are required to train a model. Once services based on deep learning applications go live, the energy consumption of those applications increases sharply because the system has to handle tens of thousands of user visits. John Hu proposed that deep learning platforms should adopt high-performance computing solutions tailored to the different application characteristics of offline training and online identification.

In June 2016, Intel launched its latest Xeon Phi processor, Knights Landing (KNL), which attracted immense attention across the industry. KNL has up to 72 cores and delivers more than 3 TFlops of double-precision and more than 6 TFlops of single-precision floating-point performance, and it is seen as a revolutionary product for high-performance computing and deep learning. John Hu believes that KNL's strong performance can adequately meet the demands of building a deep learning offline training platform. FPGAs offer impressive performance per watt, roughly five times that of a CPU, making them an ideal, energy-efficient choice for building the deep learning online identification platform. Therefore, KNL+FPGA is a perfect match for accelerating deep learning applications.

Inspur is a leading global service provider for cloud computing, big data and high-performance computing. Inspur's deep learning solutions have been adopted by Baidu, Alibaba, Qihoo360, IFLYTEK and many other Internet companies, covering 60% of the market. A few months earlier, Inspur announced the worldwide release of Caffe-MPI, its latest deep learning computing framework for the KNL platform, whose high performance and scalability can meet the needs of compute-intensive and network-intensive models.

To learn more about Inspur's deep learning solutions, please visit http://www.inspursystems.com/deep-learning/

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/knlfpga-a-perfect-match-for-deep-learning-acceleration-300315291.html

Source: Inspur Electronic Information Industry Co., Ltd.