王林楠 (Linnan Wang)

Office 351, CIT
Department of Computer Science
Brown University
Providence, RI 02912
Email: wangnan318@gmail.com

Google Scholar · GitHub · LinkedIn

Brief Bio:

I'm a Ph.D. student in the Computer Science department at Brown University. Before Brown, I was an OMSCS student at Georgia Tech while working full time as a software developer at Dow Jones. I received my bachelor's degree from the University of Electronic Science and Technology of China (UESTC), at the beautiful Qing Shui He campus, in 2011.

My research interests are supercomputing and neural networks. In particular, I'm really into scaling the coolest ML algorithms on TOP500 supercomputers and multi-GPU shared-memory machines. I'm equally excited about inventing new ML algorithms that make the impossible possible.

I work closely with Wei Wu, George Bosilca, Jack Dongarra, and Yi Yang on supercomputing.

In Submission:

Ye, Jinmian, Linnan Wang, et al.
Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition

Luo, Xi, Wei Wu, George Bosilca, Thananon Patinyasakdikul, Linnan Wang, and Jack Dongarra
ADAPT: An Event-based Adaptive Collective Communication Framework

Li, Ang, Weifeng Liu, Linnan Wang, Kevin Barker, and Shuaiwen Leon Song
Warp-Consolidation: A Novel Execution Model for Modern GPUs

Publications:

2018

Wang, Linnan, et al.
SuperNeurons: Dynamic GPU Memory Management for Training Deep Nonlinear Neural Networks
To appear at the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP2018)
Paper ·  Code ·  Presentation

2017

Wang, Linnan, Yi Yang, Renqiang Min, and Srimat Chakradhar
Accelerating Deep Neural Network Training with Inconsistent Stochastic Gradient Descent
Neural Networks (2017)
Paper ·  Patent

Zhao, Yiyang, Linnan Wang, Wei Wu, George Bosilca, Richard Vuduc, Jinmian Ye, Wenqi Tang, and Zenglin Xu
Efficient Communications in Training Large Scale Neural Networks
In Proceedings of the 25th ACM International Conference on Multimedia (MM2017)
Paper

Li, Guangxi, Zenglin Xu, Linnan Wang, Jinmian Ye, Irwin King, and Michael Lyu
Simple and Efficient Parallelization for Probabilistic Temporal Tensor Factorization
In 2017 International Joint Conference on Neural Networks (IJCNN2017)
Paper

2016

Wang, Linnan, Wei Wu, Zenglin Xu, Jianxiong Xiao, and Yi Yang
BLASX: A High Performance Level-3 BLAS Library for Heterogeneous Multi-GPU Computing
In Proceedings of the 2016 International Conference on Supercomputing (ICS2016)
Paper ·  SC Poster ·  Code ·  Presentation

Patent:

Awards:

Academic Services:

Professional Experience: