Discrepancies among Pre-trained Deep Neural Networks: A New Threat to Model Zoo Reliability
Training deep neural networks (DNNs) takes significant time and resources. A common practice for expedited deployment is to use pre-trained deep neural networks (PTNNs), often obtained from model zoos (collections of PTNNs); yet the reliability of model zoos remains unexamined. In the absence of an industry standard for the implementation and performance of PTNNs, engineers cannot confidently incorporate them into production systems. As a first step, discovering potential discrepancies between PTNNs across model zoos would reveal a threat to model zoo reliability. Prior work has reported variances in the accuracy of deep learning systems, but broader measures of reliability for PTNNs from model zoos remain unexplored. This work measures notable discrepancies in the accuracy, latency, and architecture of 36 PTNNs across four model zoos. Among the top 10 discrepancies, we find differences of 1.23%-2.62% in accuracy and 9%-131% in latency. We also find architecture mismatches for well-known DNN architectures (e.g., ResNet and AlexNet). Our findings call for future work on empirical validation, automated measurement tools, and best practices for implementation.
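The latency discrepancies reported above (9%-131% among the top 10) imply some protocol for timing the same architecture as served by different zoos. The paper does not specify its protocol here; below is a minimal, hypothetical sketch of one way such a gap could be quantified, using only the Python standard library and dummy inference functions as stand-ins for PTNNs loaded from two zoos (all names and numbers are illustrative, not from the paper):

```python
import time
import statistics

def median_latency_ms(infer, n_warmup=3, n_runs=20):
    """Median wall-clock latency of a single inference call, in milliseconds."""
    for _ in range(n_warmup):          # warm-up runs are discarded
        infer()
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.median(samples)

def latency_discrepancy_pct(lat_a, lat_b):
    """Relative latency gap, as a percentage of the faster measurement."""
    fast, slow = sorted((lat_a, lat_b))
    return (slow - fast) / fast * 100

# Hypothetical stand-ins for the same architecture fetched from two model zoos.
def zoo_a_infer():
    time.sleep(0.002)   # pretend a 2 ms forward pass

def zoo_b_infer():
    time.sleep(0.004)   # pretend a 4 ms forward pass

lat_a = median_latency_ms(zoo_a_infer)
lat_b = median_latency_ms(zoo_b_infer)
print(f"latency discrepancy: {latency_discrepancy_pct(lat_a, lat_b):.0f}%")
```

Using the median over warmed-up runs reduces sensitivity to scheduler jitter; with real PTNNs, the `infer` callables would wrap forward passes over identical inputs on identical hardware.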
Tue 15 Nov (time zone: Beijing, Chongqing, Hong Kong, Urumqi)
10:45 - 12:15 | Machine Learning II (Research Papers / Ideas, Visions and Reflections / Industry Paper) at SRC Auditorium 2. Chair(s): Atif Memon (Apple)
10:45 (15m talk) | Understanding Performance Problems in Deep Learning Systems. Research Papers. Junming Cao, Bihuan Chen, Chao Sun, Longjie Hu, Shuaihong Wu, Xin Peng (Fudan University). DOI
11:00 (15m talk) | API Recommendation for Machine Learning Libraries: How Far Are We? Research Papers. Moshi Wei (York University), Yuchao Huang (Institute of Software at Chinese Academy of Sciences), Junjie Wang (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences), Jiho Shin (York University), Nima Shiri Harzevili (York University), Song Wang (York University). DOI, Pre-print
11:15 (15m talk) | No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence. Research Papers. Chaozheng Wang, Yuanhang Yang, Cuiyun Gao (Harbin Institute of Technology), Yun Peng (Chinese University of Hong Kong), Hongyu Zhang (University of Newcastle), Michael Lyu (Chinese University of Hong Kong). DOI
11:30 (15m talk) | Improving ML-Based Information Retrieval Software with User-Driven Functional Testing and Defect Class Analysis. Industry Paper. DOI
11:45 (15m talk) | Discrepancies among Pre-trained Deep Neural Networks: A New Threat to Model Zoo Reliability. Ideas, Visions and Reflections. Diego Montes, Pongpatapee Peerapatanapokin, Jeff Schultz, Chengjun Guo, Wenxin Jiang, James C. Davis (Purdue University). DOI