ESEC/FSE 2022
Mon 14 - Fri 18 November 2022 Singapore
No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence
Tue 15 Nov 2022, 11:15 - 11:30 at SRC Auditorium 2 (Machine Learning II). Chair(s): Atif Memon

Pre-trained models have proven effective in many code intelligence tasks. These models are pre-trained on large-scale unlabeled corpora and then fine-tuned on downstream tasks. However, because the inputs to pre-training and to downstream tasks take different forms, it is hard to fully exploit the knowledge of pre-trained models. Moreover, the performance of fine-tuning depends heavily on the amount of downstream data, and in practice scenarios with scarce data are common. Recent studies in the natural language processing (NLP) field show that prompt tuning, a new tuning paradigm, alleviates both issues and achieves promising results on various NLP tasks. In prompt tuning, the prompts inserted during tuning provide task-specific knowledge, which is especially beneficial for tasks with relatively scarce data. In this paper, we empirically evaluate the usage and effect of prompt tuning in code intelligence tasks. We conduct prompt tuning on the popular pre-trained models CodeBERT and CodeT5 and experiment with three code intelligence tasks: defect prediction, code summarization, and code translation. Our experimental results show that prompt tuning consistently outperforms fine-tuning on all three tasks. In addition, prompt tuning shows great potential in low-resource scenarios, e.g., improving the BLEU scores of fine-tuning by more than 26% on average for code summarization. Our results suggest that, instead of fine-tuning, we can adopt prompt tuning for code intelligence tasks to achieve better performance, especially when task-specific data are scarce.
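To make the paradigm concrete, here is a minimal sketch of hard prompt tuning for code summarization, assuming the Hugging Face transformers library and the public Salesforce/codet5-base checkpoint. The prompt template and the verbalized target below are illustrative placeholders, not the exact templates used in the paper.

```python
# Minimal sketch of hard prompt tuning for code summarization.
# Assumptions: Hugging Face transformers is installed; the public
# "Salesforce/codet5-base" checkpoint stands in for the paper's setup;
# the template and target text are illustrative placeholders.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

code = "def add(a, b):\n    return a + b"

# Fine-tuning would feed the raw code; prompt tuning wraps it in a
# natural-language template so the input resembles what the model
# saw during pre-training.
prompt = f"Summarize the following code: {code} The summary is:"
inputs = tokenizer(prompt, return_tensors="pt", truncation=True)

# One tuning step: the target is verbalized as natural language too,
# keeping tuning and pre-training in the same form.
labels = tokenizer("Add two numbers.", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Inference uses the same template.
summary_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Soft prompt tuning follows the same recipe but replaces the natural-language prompt tokens with trainable continuous embeddings.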

Tue 15 Nov

Displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi (UTC+8)

10:45 - 12:15: Machine Learning II (SRC Auditorium 2)

10:45 (15m) · Talk · Research Papers
Understanding Performance Problems in Deep Learning Systems
Junming Cao, Bihuan Chen, Chao Sun, Longjie Hu, Shuaihong Wu, Xin Peng (Fudan University)
DOI

11:00 (15m) · Talk · Research Papers
API Recommendation for Machine Learning Libraries: How Far Are We?
Moshi Wei (York University), Yuchao Huang (Institute of Software at Chinese Academy of Sciences), Junjie Wang (Institute of Software at Chinese Academy of Sciences; University of Chinese Academy of Sciences), Jiho Shin (York University), Nima Shiri Harzevili (York University), Song Wang (York University)
DOI · Pre-print

11:15 (15m) · Talk · Research Papers
No More Fine-Tuning? An Experimental Evaluation of Prompt Tuning in Code Intelligence
Chaozheng Wang (Harbin Institute of Technology), Yuanhang Yang (Harbin Institute of Technology), Cuiyun Gao (Harbin Institute of Technology), Yun Peng (Chinese University of Hong Kong), Hongyu Zhang (University of Newcastle), Michael Lyu (Chinese University of Hong Kong)
DOI

11:30 (15m) · Talk · Industry Paper
Improving ML-Based Information Retrieval Software with User-Driven Functional Testing and Defect Class Analysis
Junjie Zhu, Teng Long, Wei Wang, Atif Memon (Apple)
DOI

11:45 (15m) · Talk · Ideas, Visions and Reflections
Discrepancies among Pre-trained Deep Neural Networks: A New Threat to Model Zoo Reliability
Diego Montes, Pongpatapee Peerapatanapokin, Jeff Schultz, Chengjun Guo, Wenxin Jiang, James C. Davis (Purdue University)
DOI