ESEC/FSE 2022
Mon 14 - Fri 18 November 2022 Singapore
Wed 16 Nov 2022 15:00 - 15:15 at SRC LT 53 - Program Repair/Synthesis Chair(s): Saikat Chakraborty

Given the promise of Automated Program Repair (APR), researchers have proposed a variety of APR techniques, including heuristic-based, template-based, and constraint-based approaches. Among these classic APR techniques, template-based ones have been widely recognized as the state of the art. However, template-based techniques require predefined templates to perform repair, which limits their effectiveness. To this end, researchers have leveraged recent advances in Deep Learning to further improve APR. Such learning-based techniques typically view APR as a Neural Machine Translation (NMT) problem, treating the buggy and fixed code snippets as the source and target languages for translation. As a result, these techniques rely heavily on large numbers of high-quality bug-fixing commits, which can be extremely costly and challenging to construct, and which may limit the edit variety and context representation of the resulting models.
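To make the NMT framing concrete, the short sketch below is a minimal, hypothetical illustration (not code from any of the tools discussed; the RepairExample type and make_training_pair helper are invented for this example) of how a mined bug-fixing commit becomes a source/target pair for a seq2seq model.

    # Hypothetical sketch of how NMT-style APR frames its training data:
    # every mined bug-fixing commit yields a (buggy, fixed) pair that is
    # treated like a translation example.
    from dataclasses import dataclass

    @dataclass
    class RepairExample:
        buggy: str   # "source language": the buggy code snippet
        fixed: str   # "target language": the developer's fix

    def make_training_pair(buggy_hunk: str, fixed_hunk: str) -> RepairExample:
        """Turn one bug-fixing commit hunk into a seq2seq training example."""
        return RepairExample(buggy=buggy_hunk.strip(), fixed=fixed_hunk.strip())

    # Example pair from a (hypothetical) off-by-one fix:
    pair = make_training_pair(
        "if (index <= data.length) { return data[index]; }",
        "if (index < data.length) { return data[index]; }",
    )
    print(pair.buggy, "->", pair.fixed)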

In this paper, we revisit the learning-based APR problem and propose AlphaRepair, the first cloze-style (or infilling-style) APR approach, which directly leverages large pre-trained code models for APR without any fine-tuning/retraining on historical bug fixes. Our main insight is that instead of modeling what a repair edit should look like (i.e., an NMT task), we can directly predict what the correct code is based on the surrounding context (i.e., a cloze or text-infilling task). Although the approach is general and can be built on various pre-trained code models, we implement AlphaRepair as a practical multilingual APR tool based on the recent CodeBERT model. Our evaluation of AlphaRepair on the widely used Defects4J benchmark shows, for the first time, that learning-based APR without any historical bug fixes can already outperform state-of-the-art APR techniques. We also study the impact of different design choices and show that AlphaRepair performs even better on the newer Defects4J 2.0, producing 3.3X more fixes than the best-performing baseline, which indicates that AlphaRepair can potentially avoid the dataset-overfitting issue of existing techniques. Additionally, we demonstrate the multilingual repair ability of AlphaRepair on the QuixBugs dataset, where it achieves state-of-the-art results on both the Java and Python versions.
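As a rough illustration of the cloze-style idea, the minimal sketch below assumes the Hugging Face transformers library and the publicly released microsoft/codebert-base-mlm masked-language-model checkpoint; it is not AlphaRepair's actual pipeline. A suspicious token in the buggy line is replaced with the model's mask token and infilled purely from the surrounding context.

    # Minimal sketch of cloze-style repair with a masked language model.
    # Assumes `pip install transformers torch`; the checkpoint name below is
    # the public CodeBERT MLM model, but any fill-mask model would do.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="microsoft/codebert-base-mlm")

    # Buggy line: `if (index <= data.length)` causes an off-by-one error.
    # Mask the suspicious operator and let the model propose replacements
    # from context alone, with no bug-fix training data.
    masked_line = "if (index <mask> data.length) { return data[index]; }"

    for candidate in fill_mask(masked_line, top_k=5):
        print(candidate["token_str"], candidate["score"])

A complete APR pipeline would then rank such candidate infillings and validate them against the project's test suite.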

Wed 16 Nov

Displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi

14:00 - 15:30: Program Repair/Synthesis (Research Papers / Industry Paper) at SRC LT 53
Chair(s): Saikat Chakraborty (Microsoft Research)
14:00 (15m) Talk: PyTER: Effective Program Repair for Python Type Errors (Research Papers)
Wonseok Oh (Korea University), Hakjoo Oh (Korea University)
14:15 (15m) Talk: VulRepair: A T5-Based Automated Software Vulnerability Repair (Research Papers)
Michael Fu (Monash University), Kla Tantithamthavorn (Monash University), Trung Le (Monash University, Australia), Van Nguyen (Monash University, Australia), Dinh Phung (Monash University, Australia)
14:30 (15m) Talk: An Empirical Study of Deep Transfer Learning-Based Program Repair for Kotlin Projects (Industry Paper)
Misoo Kim (Sungkyunkwan University), Youngkyoung Kim (Sungkyunkwan University), Hohyeon Jeong (Sungkyunkwan University), Jinseok Heo (Sungkyunkwan University), Sungoh Kim (Samsung Electronics), Hyunhee Chung (Samsung Electronics), Eunseok Lee (Sungkyunkwan University)
14:45 (15m) Talk: DeepDev-PERF: A Deep Learning-Based Approach for Improving Software Performance (Research Papers)
Spandan Garg (Microsoft), Roshanak Zilouchian Moghaddam (Microsoft), Colin Clement (Microsoft), Neel Sundaresan (Microsoft), Chen Wu (Microsoft)
15:00 (15m) Talk: Less Training, More Repairing Please: Revisiting Automated Program Repair via Zero-Shot Learning (Research Papers)
Chunqiu Steven Xia (University of Illinois at Urbana-Champaign), Lingming Zhang (University of Illinois at Urbana-Champaign)