ESEC/FSE 2022
Mon 14 - Fri 18 November 2022 Singapore
Tue 15 Nov 2022 14:00 - 14:15 at SRC LT 52 - ESEC/FSE 20 Software Testing I Chair(s): Arie van Deursen

This paper discusses methods to test the performance of the adaptation layer in a self-adaptive system. The problem is notoriously hard, due to the high degree of uncertainty and variability inherent in an adaptive software application. In particular, providing any type of formal guarantee for this problem is extremely difficult. In this paper we propose a rigorous probabilistic approach to overcome these difficulties and provide probabilistic guarantees on the software performance. We describe the setup needed to apply a probabilistic approach. We then discuss the traditional statistical tools that could be applied to analyse the results, highlighting their limitations and motivating why they are unsuitable for the given problem. We propose the use of a novel tool, scenario theory, to overcome these limitations. We conclude the paper with a thorough empirical evaluation of the proposed approach, using three adaptive software applications: the Tele-Assistance Service, the Self-Adaptive Video Encoder, and Traffic Reconfiguration via Adaptive Participatory Planning. With the first, we empirically expose the trade-off between data collection and confidence in the testing campaign. With the second, we demonstrate how to compare different adaptation strategies. With the third, we discuss the role of randomisation in the selection of test inputs. In the evaluation, we apply scenario theory alongside classical statistical tools, Monte Carlo and Extreme Value Theory, and provide a complete evaluation and a thorough comparison of the confidence and guarantees that each approach can offer.
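To illustrate the trade-off between data collection and confidence that the abstract mentions, the sketch below computes the sample size prescribed by the classical scenario-theory bound for the worst case observed over N independent test runs. This is a minimal illustration of the general technique, not the paper's exact formulation; the function name and the (epsilon, beta) pairs are hypothetical choices for the example.

from math import ceil, log

def scenario_sample_size(epsilon: float, beta: float) -> int:
    """Smallest N such that (1 - epsilon)**N <= beta.

    Testing reading: run N i.i.d. test scenarios and record the worst
    observed performance; with confidence at least 1 - beta, a new run
    exceeds that worst case with probability at most epsilon.
    """
    return ceil(log(beta) / log(1.0 - epsilon))

# Illustrative (epsilon, beta) pairs showing how tighter guarantees
# drive up the number of test runs required.
for eps, beta in [(0.10, 1e-2), (0.05, 1e-3), (0.01, 1e-6)]:
    print(f"epsilon={eps}, beta={beta}: N >= {scenario_sample_size(eps, beta)}")

Running this prints roughly N >= 44, 135, and 1375 for the three settings, which is the kind of cost/confidence curve the testing campaign has to navigate.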

Tue 15 Nov

Displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi

14:00 - 15:30
ESEC/FSE 20 Software Testing I (ESEC/FSE 2020) at SRC LT 52
Chair(s): Arie van Deursen Delft University of Technology
14:00
15m
Talk
Testing Self-Adaptive Software with Probabilistic Guarantees on Performance Metrics
ESEC/FSE 2020
Claudio Mandrioli Lund University, Sweden, Martina Maggio Saarland University, Germany / Lund University, Sweden
14:15
15m
Talk
Search-Based Adversarial Testing and Improvement of Constrained Credit Scoring Systems
ESEC/FSE 2020
Salah Ghamizi University of Luxembourg, Luxembourg, Maxime Cordy University of Luxembourg, Luxembourg, Martin Gubri University of Luxembourg, Luxembourg, Mike Papadakis University of Luxembourg, Luxembourg, Andrey Boystov University of Luxembourg, Luxembourg, Yves Le Traon University of Luxembourg, Luxembourg, Anne Goujon BGL BNP Paribas, Luxembourg
14:30
15m
Talk
Is Neuron Coverage a Meaningful Measure for Testing Deep Neural Networks?
ESEC/FSE 2020
Fabrice Harel-Canada University of California at Los Angeles, USA, Lingxiao Wang University of California at Los Angeles, USA, Muhammad Ali Gulzar Virginia Tech, USA, Quanquan Gu University of California at Los Angeles, USA, Miryung Kim University of California at Los Angeles, USA
14:45
15m
Talk
When Does My Program Do This? Learning Circumstances of Software Behavior
ESEC/FSE 2020
Alexander Kampmann CISPA, Germany, Nikolas Havrikov CISPA, Germany, Ezekiel Soremekun SnT, University of Luxembourg, Andreas Zeller CISPA Helmholtz Center for Information Security
15:00
15m
Talk
FrUITeR: A Framework for Evaluating UI Test Reuse
ESEC/FSE 2020
Yixue Zhao University of Massachusetts at Amherst, Justin Chen Columbia University, USA, Adriana Sejfia University of Southern California, Marcelo Schmitt Laser University of Southern California, USA, Jie M. Zhang King's College London, Federica Sarro University College London, Mark Harman University College London, Nenad Medvidović University of Southern California