[Seminar] [Electrical and Computer Engineering Seminar] December 13: Neural Networks on Chip: From Academia to Industry

2018-12-06

860. Neural Networks on Chip: From Academia to Industry



Speaker: Yu Wang, Professor at Tsinghua University and Co-Founder of Deephi Tech

Date/Time: Thursday, December 13, 2018, 17:00–18:00

Venue: Room 118, Engineering Building 1 (Building 301), Seoul National University




Artificial neural networks, which dominate artificial intelligence applications such as object recognition and speech recognition, are still evolving. To bring neural networks to a wider range of applications, customized hardware is necessary, since CPUs and GPUs are not efficient enough. Numerous architectures have been proposed over the past five years to boost the energy efficiency of deep learning inference, including efforts from Tsinghua and Deephi. In this talk, we will discuss why so many groups are designing their own inference architectures, and the software/hardware co-design methodology that Tsinghua and Deephi have carried out, from 200 GOPS/W FPGA accelerators to 1–5 TOPS/W chips with DDR subsystems. We will also briefly discuss the possibilities and trends of adopting emerging NVM technology for efficient learning systems, i.e., in-memory computing, as one of the most promising ways to improve energy efficiency.



  • Education

    B.S., Electronics Engineering, Tsinghua University, 2002

    Ph.D., Electronics Engineering, Tsinghua University, 2007

  • Career

    Tenured Associate Professor, Department of Electronic Engineering, Tsinghua University

    Co-Founder of Deephi Tech (acquired by Xilinx)