ESEC/FSE 2022
Mon 14 - Fri 18 November 2022 Singapore
Mon 14 Nov 2022 11:30 - 11:45 at SRC LT 50 - Software Testing I Chair(s): Paolo Tonella

Automating test oracles is one of the most challenging facets of software testing, yet it remains far less addressed than automated test input generation. Test oracles rely on a ground truth that can distinguish between correct and buggy behavior to determine whether a test fails (detects a bug) or passes. What makes the oracle problem challenging and undecidable is the assumption that the ground truth must know the exact expected, correct, or buggy behavior. We argue, however, that one can still build an accurate oracle without knowing the exact correct or buggy behavior, only how the two might differ.

This paper presents SEER, a deep learning-based approach that, in the absence of test assertions or other types of oracle, automatically determines whether a unit test passes or fails on a given method under test (MUT). To build the ground truth, SEER jointly embeds unit tests and the implementations of MUTs into a unified vector space so that the neural representations of tests are similar to those of the MUTs they pass on, but dissimilar to those of the MUTs they fail on. A classifier built on top of this vector representation serves as the oracle, generating a "fail" label when the test inputs detect a bug in the MUT and a "pass" label otherwise.

Our extensive experiments applying SEER to more than 5K unit tests from a diverse set of open-source Java projects show that the produced oracle is (1) effective, predicting the fail or pass labels with an overall accuracy, precision, recall, and F1 measure of 93%, 86%, 94%, and 90%, respectively; (2) generalizable, predicting the labels for unit tests of projects that were in neither the training nor the validation set with negligible performance drop; and (3) efficient, detecting the existence of bugs in only 6.5 milliseconds on average. Moreover, by interpreting the neural model rather than treating it as a closed-box solution, we confirm that the oracle is valid, i.e., it predicts the labels by learning relevant features.
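The abstract describes SEER's core recipe (jointly embed tests and MUTs, then classify over that shared space) only at a high level. The sketch below is a minimal illustration of that general idea, assuming a PyTorch implementation with token-id inputs, GRU encoders, and a simple contrastive term; the encoder choice, margin, loss weighting, and all names here are illustrative assumptions, not SEER's actual architecture or hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CodeEncoder(nn.Module):
    """Maps a tokenized code snippet (unit test or MUT) to one unit-length vector."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len) long tensor
        hidden, _ = self.rnn(self.embed(token_ids))
        return F.normalize(hidden.mean(dim=1), dim=-1) # average over tokens, L2-normalize

class JointEmbeddingOracle(nn.Module):
    """Two encoders share one vector space; a small classifier over the pair
    of embeddings emits the pass/fail verdict."""
    def __init__(self, vocab_size=50_000, dim=256):
        super().__init__()
        self.test_encoder = CodeEncoder(vocab_size, dim)
        self.mut_encoder = CodeEncoder(vocab_size, dim)
        self.classifier = nn.Sequential(               # input: [t; m; |t - m|; t * m]
            nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, test_ids, mut_ids):
        t = self.test_encoder(test_ids)
        m = self.mut_encoder(mut_ids)
        pair = torch.cat([t, m, (t - m).abs(), t * m], dim=-1)
        return self.classifier(pair)                   # logits over {fail, pass}

def loss_fn(model, test_ids, mut_ids, labels, margin=0.5, alpha=0.5):
    """Cross-entropy on the pass/fail labels plus a contrastive term that pulls
    a test towards the MUT it passes on and pushes it away from a MUT it fails
    on. Margin and weighting are illustrative, not values from the paper."""
    t = model.test_encoder(test_ids)
    m = model.mut_encoder(mut_ids)
    sim = F.cosine_similarity(t, m)                    # in [-1, 1]
    passes = labels.float()                            # 1 = pass, 0 = fail
    contrastive = (passes * (1.0 - sim) +
                   (1.0 - passes) * F.relu(sim - margin)).mean()
    return F.cross_entropy(model(test_ids, mut_ids), labels) + alpha * contrastive

# Hypothetical usage: a batch of 8 (test, MUT) pairs as token ids.
# model = JointEmbeddingOracle()
# test_ids = torch.randint(0, 50_000, (8, 128))
# mut_ids = torch.randint(0, 50_000, (8, 256))
# verdicts = model(test_ids, mut_ids).argmax(dim=-1)   # 1 = predicted pass, 0 = fail

The point the abstract emphasizes is visible in the sketch: the oracle never needs the exact expected output of the MUT; it only learns how the representations of passing and failing test/MUT pairs differ.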

Mon 14 Nov

Displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi

11:00 - 12:30: Software Testing I (Research Papers) at SRC LT 50
Chair(s): Paolo Tonella (USI Lugano)
11:00 · 15m · Talk (Research Papers)
Testing of Autonomous Driving Systems: Where Are We and Where Should We Go?
Guannan Lou (Macquarie University), Yao Deng (Macquarie University), Xi Zheng (Macquarie University), Mengshi Zhang (Meta), Tianyi Zhang (Purdue University)
DOI
11:15 · 15m · Talk (Research Papers)
Fuzzing Deep-Learning Libraries via Automated Relational API Inference
Yinlin Deng (University of Illinois at Urbana-Champaign), Chenyuan Yang (University of Illinois at Urbana-Champaign), Anjiang Wei (Stanford University), Lingming Zhang (University of Illinois at Urbana-Champaign)
DOI
11:30 · 15m · Talk (Research Papers)
Perfect Is the Enemy of Test Oracle
Ali Reza Ibrahimzada (University of Illinois Urbana-Champaign), Yigit Varli (Middle East Technical University), Dilara Tekinoglu (University of Massachusetts at Amherst), Reyhaneh Jabbarvand (University of Illinois at Urbana-Champaign)
DOI Pre-print Media Attached
11:45 · 15m · Talk (Research Papers)
Scenario-Based Test Reduction and Prioritization for Multi-Module Autonomous Driving Systems
Yao Deng (Macquarie University), Xi Zheng (Macquarie University), Mengshi Zhang (Meta), Guannan Lou (Macquarie University), Tianyi Zhang (Purdue University)
DOI