
Publications

Conference and Workshop Proceedings

  1. Hiroshi Ukai and Xiao Qu, "Test Design as Code: JCUnit", International Conference on Software Testing, Verification and Validation (ICST), March 2017. (to appear)

  2. Tingting Yu, Xiao Qu, Myra B. Cohen, "VDTest: An Automated Framework to Support Testing for Virtual Devices", International Conference on Software Engineering (ICSE), May 2016. ACM SIGSOFT Distinguished Paper Award

  3. Mithun P. Acharya, Chris Parnin, Nicholas A. Kraft, Aldo Dagnino, and Xiao Qu, "Code Drones", International Conference on Software Engineering (ICSE), Visions 2025 track, May 2016. Second Prize, Best Paper

  4. Vinay Augustine, Patrick Francis, Xiao Qu, David Shepherd, Will Snipes, Christoph Bräunlich, Thomas Fritz, "A field study on fostering structural navigation with Prodet", International Conference on Software Engineering (ICSE), Volume 2, May 2015. [pdf]

  5. Dongpu Jin, Myra B. Cohen, Xiao Qu, Brian Robinson, "PrefFinder: getting the right preference in configurable software systems", International Conference on Automated Software Engineering (ASE), September 2014. [pdf]

  6. Dongpu Jin, Xiao Qu, Myra B. Cohen, Brian Robinson, "Configurations everywhere: Implications for testing and debugging in practice", International Conference on Software Engineering (ICSE), Software Engineering in Practice Track (SEIP), May 2014. [pdf] Best SEIP Paper Award

  7. Xiao Qu and Myra B. Cohen, "A study in prioritization for higher strength combinatorial testing", International Conference on Software Testing, Verification and Validation Workshops (ICSTW), March 2013. [pdf]

  8. Tingting Yu, Xiao Qu, Mithun Acharya, and Gregg Rothermel, "Oracle-Based Regression Test Selection", International Conference on Software Testing, Verification and Validation (ICST), March 2013.

  9. Xiao Qu, Mithun Acharya, Brian Robinson, "Configuration Selection Using Code Change Impact Analysis for Regression Testing", International Conference on Software Maintenance (ICSM), September 2012. [pdf]

  10. Xiao Qu, Mithun Acharya, Brian Robinson, "Impact Analysis of Configuration Changes for Test Case Selection", International Symposium on Software Reliability Engineering (ISSRE), December 2011. [pdf] [slides]

  11. Xiao Qu, Brian Robinson, "A Case Study of Concolic Testing Tools and Their Limitations", International Symposium on Empirical Software Engineering and Measurement (ESEM), September 2011.

  12. Brian Robinson, Xiao Qu, "Customer Oriented Regression Testing", International Conference on Software Testing, Verification and Validation Workshops (ICSTW), March 2011.

  13. Hema Srikanth, Myra B. Cohen, Xiao Qu, "Reducing Field Failures in System Configurable Software: Cost-Based Prioritization", International Symposium on Software Reliability Engineering (ISSRE), November 2009. [pdf]

  14. Wolfgang Grieskamp, Xiao Qu, Xiangjun Wei, Nico Kicillof, Myra B. Cohen, "Interaction Coverage meets Path Coverage by SMT Constraint Solving", International Conference on Testing of Communicating Systems and International Workshop on Formal Approaches to Testing of Software (TESTCOM/FATES), November 2009. [pdf]

  15. Xiao Qu, "Configuration Aware Prioritization Techniques in Regression Testing", International Conference on Software Engineering (ICSE), Doctoral Symposium, May 2009. [slides] [poster]

  16. Xiao Qu, Myra B. Cohen and Gregg Rothermel, "Configuration-aware regression testing: an empirical study of sampling and prioritization", International Symposium on Software Testing and Analysis (ISSTA), July 2008, pp. 75-85. [pdf] [slides]

  17. Xiao Qu, Myra B. Cohen and K. M. Woolf, "Combinatorial interaction regression testing: a study of test case generation and prioritization", IEEE International Conference on Software Maintenance (ICSM), Paris, October 2007, pp. 255-264. [pdf] [slides]

Journal Articles and Book Chapters

  1. Xiao Qu, "Testing of Configurable Systems", Advances in Computers, Volume 89, pp. 141-162, March 2013.

  2. Xiao Qu, Hongyu Yang and Zihe Dai, "Optimize test cases using CATS", Journal of China Civil Aviation Flying College, Vol. 17, No. 2, pp. 62-64, 2006.


Dissertation

CONFIGURATION AWARE PRIORITIZATION FOR REGRESSION TESTING

Configurable software lets users customize applications in many ways and is becoming increasingly prevalent. Configurability demands extra testing effort, because there is evidence that running the same test case under different configurations may detect different faults. Treating test cases and configurations as two independent factors, we must therefore consider not only which test cases to run, but also which configurations to run them under. Ideally, an exhaustive approach would combine every test case with every possible configuration. But since the full configuration space of most software systems is huge, it is infeasible to test all possible configurations with all test cases. Instead, sampling techniques are applied.
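To illustrate why sampling helps, the sketch below builds a 2-way (pairwise) configuration sample with a simple greedy strategy. The option names, their values, and the greedy approach are hypothetical illustrations, not the specific sampling techniques studied in the dissertation:

```python
from itertools import combinations, product

# Hypothetical configurable system: 4 options, each with 3 settings.
options = {f"opt{i}": ["a", "b", "c"] for i in range(4)}
names = list(options)

# All 2-way interactions (pairs of option settings) that must be covered.
all_pairs = {
    ((n1, v1), (n2, v2))
    for n1, n2 in combinations(names, 2)
    for v1 in options[n1]
    for v2 in options[n2]
}

def pairs_of(config):
    """The 2-way interactions exercised by one full configuration."""
    return set(combinations(sorted(config.items()), 2))

# Exhaustive space: every combination of settings (3^4 = 81 configurations).
exhaustive = [dict(zip(names, vals)) for vals in product(*options.values())]

# Greedy sampling: repeatedly pick the configuration that covers
# the most not-yet-covered pairs, until every pair is covered.
uncovered, sample = set(all_pairs), []
while uncovered:
    best = max(exhaustive, key=lambda c: len(pairs_of(c) & uncovered))
    sample.append(best)
    uncovered -= pairs_of(best)

# The sample covers all 2-way interactions with far fewer configurations
# than exhaustive testing (81 vs. roughly a dozen here).
print(len(exhaustive), len(sample))
```

Even on this toy system the pairwise sample is an order of magnitude smaller than the exhaustive space, and the gap widens rapidly as options are added.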

Even with effective sampling techniques, running just a configuration sample can still be costly. The cost is magnified as a system evolves: new features and functionality are added, and each new version must be regression tested. Regression testing is an important but expensive way to build confidence that software changes introduce no new faults, and many efforts have been made to improve its performance under limited resources. For example, regression test selection and test case prioritization have been researched extensively, but they have rarely been considered for configurations.

In this dissertation, we provide cost-effective prioritization techniques for regression testing configurable systems -- configuration-aware regression testing. Specifically, we first generalize the problem of configuration sampling and systematically compare different configuration sampling techniques on non-trivial software systems, across multiple consecutive versions. We then investigate different prioritization techniques for ordering the sampled configurations. We also investigate the relative cost-benefits of prioritizing test cases and configurations as two independent factors, presenting a comprehensive method for prioritizing both. These techniques and methods are evaluated through empirical studies. Finally, we extend or modify the basic prioritization techniques for practical environments that involve additional factors (e.g., cost) or constraints (e.g., the absence of test records from prior versions).
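One common flavor of prioritization is the "additional" greedy heuristic: order items so that each next one adds the most not-yet-covered elements. The sketch below applies it to test cases with made-up coverage data; it is a generic illustration of the idea, not the dissertation's specific technique:

```python
# Hypothetical coverage data: which code elements each test exercises.
# Test names and element numbers are invented for illustration.
coverage = {
    "t1": {1, 2, 3},
    "t2": {3, 4},
    "t3": {5},
    "t4": {1, 4, 5, 6},
}

def additional_greedy(cov):
    """Order tests so each next test adds the most uncovered elements."""
    remaining, order, covered = dict(cov), [], set()
    while remaining:
        # Pick the test with the largest *additional* coverage; ties go
        # to the earliest test in the original order.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

print(additional_greedy(coverage))  # ['t4', 't1', 't2', 't3']
```

Here t4 runs first because it alone covers four elements; t1 follows with two new ones, after which the remaining tests add nothing new. The same greedy skeleton applies whether the items being ordered are test cases or sampled configurations, and whether "coverage" means code elements or configuration interactions.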