Speedup of deep neural network learning on the MIC-architecture

E. Milova, S. Sveshnikova, I. Gankevich

Deep neural networks offer higher accuracy, but they require more computational power during learning, and learning itself is an iterative process. The goal of this research is to investigate how efficiently this problem can be solved on the MIC architecture without changing the baseline algorithm. Well-known code vectorisation and parallelisation methods are used to increase the performance of the program on the MIC architecture. In the course of the experiments we test two coprocessor data transfer models: explicit and implicit. We show that implicit memory copying is more efficient than explicit copying, because only modified memory blocks are transferred. The MIC architecture shows competitive performance compared to a multi-core x86 processor.
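The two data transfer models referred to in the abstract correspond to the two offload mechanisms of the Intel compiler for first-generation Xeon Phi coprocessors: explicit offload pragmas with in/out clauses, and the implicit virtual shared memory model, which keeps shared data coherent and transfers only modified pages. The following C sketch is illustrative only (the function names and the toy scaling kernel are not taken from the paper) and assumes the Intel compiler's offload extensions:

/* Explicit model: data named in the inout() clause is copied between
 * host and coprocessor in full on every offload. */
void scale_explicit(float *v, int n, float a)
{
    #pragma offload target(mic) inout(v : length(n))
    {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            v[i] *= a;
    }
}

/* Implicit model: _Cilk_shared data lives in virtual shared memory;
 * the runtime synchronises only the pages that were modified. */
_Cilk_shared float weights[1024];

_Cilk_shared void scale_implicit(float a)
{
    #pragma omp parallel for
    for (int i = 0; i < 1024; ++i)
        weights[i] *= a;
}

int main(void)
{
    /* ... initialise weights on the host ... */
    _Cilk_offload scale_implicit(0.9f);   /* runs on the coprocessor */
    return 0;
}

Under this model the implicit variant avoids re-copying unchanged weights on every iteration of the learning loop, which is consistent with the efficiency difference reported in the paper.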

Bibtex
@inproceedings{sveshnikova2016dnn,
  title={Speedup of deep neural network learning on the MIC-architecture},
  author={E. Milova and S. Sveshnikova and I. Gankevich},
  booktitle={Proceedings of HPCS'16},
  year={2016},
  month={07},
  language={english},
  isbn={978-1-5090-2088-1},
  doi={10.1109/HPCSim.2016.7568443},
  pages={989--992},
  type={inproceedings}
}

Publication: Proceedings of HPCS'16