A Retrospective Study of One Decade of Artifact Evaluations
Most software engineering research involves the development of a prototype, a proof of concept, or a measurement apparatus. Together with the data collected in the research process, these are referred to as research artifacts and are subject to artifact evaluation (AE) at scientific conferences. Since its initiation in the SE community at ESEC/FSE 2011, both the goals and the process of AE have evolved, and today expectations towards AE are strongly linked with reproducible research results and reusable tools on which other researchers can build their work. However, to date little evidence has been provided that artifacts which have passed AE actually live up to these high expectations, i.e., to what degree AE processes contribute to AE's goals and whether the overhead they impose is justified.
We aim to fill this gap with an in-depth analysis of research artifacts from a decade of software engineering (SE) and programming languages (PL) conferences, based on which we reflect on the goals and mechanisms of AE in our community. In summary, our analyses (1) suggest that articles with artifacts do not generally have better visibility in the community, (2) provide evidence of how evaluated and non-evaluated artifacts differ with respect to different quality criteria, and (3) highlight opportunities for further improving AE processes.
Mon 14 Nov (displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi)
14:00 - 15:30 | Session: Community (Research Papers / Ideas, Visions and Reflections / Demonstrations / Industry Paper) at SRC LT 51
Chair(s): Dirk Riehle (University of Bavaria, Erlangen)

14:00 (15m) Talk | In War and Peace: The Impact of World Politics on Software Ecosystems (Ideas, Visions and Reflections)
Raula Gaikovina Kula (Nara Institute of Science and Technology), Christoph Treude (University of Melbourne)

14:15 (15m) Talk | A Retrospective Study of One Decade of Artifact Evaluations (Research Papers)
Stefan Winter (LMU Munich), Christopher Steven Timperley (Carnegie Mellon University), Ben Hermann (TU Dortmund), Jürgen Cito (TU Wien), Jonathan Bell (Northeastern University), Michael Hilton (Carnegie Mellon University), Dirk Beyer (LMU Munich)

14:30 (15m) Talk | Understanding Skills for OSS Communities on GitHub (Research Papers)
Jenny T. Liang (University of Washington), Thomas Zimmermann (Microsoft Research), Denae Ford (Microsoft Research)

14:45 (15m) Talk | Achievement Unlocked: A Case Study on Gamifying DevOps Practices in Industry (Industry Paper)
Patrick Ayoup (Concordia University), Diego Costa (Concordia University, Canada), Emad Shihab (Concordia University)

15:00 (7m) Talk | iTiger: An Automatic Issue Title Generation Tool (Demonstrations)
Ting Zhang (Singapore Management University), Ivana Clairine Irsan (Singapore Management University), Ferdian Thung (Singapore Management University), DongGyun Han (Royal Holloway, University of London), David Lo (Singapore Management University), Lingxiao Jiang (Singapore Management University)

15:08 (7m) Talk | CodeMatcher: A Tool for Large-Scale Code Search Based on Query Semantics Matching (Demonstrations)
Chao Liu (Chongqing University), Xuanlin Bao (Chongqing University), Xin Xia (Huawei), Meng Yan (Chongqing University), David Lo (Singapore Management University), Ting Zhang (Singapore Management University)

15:15 (15m) Talk | Generating Realistic Vulnerabilities via Neural Code Editing: An Empirical Study (Research Papers)
Yu Nong (Washington State University), Yuzhe Ou (University of Texas at Dallas), Michael Pradel (University of Stuttgart), Feng Chen (University of Texas at Dallas), Haipeng Cai (Washington State University)