Abstract
Pairwise testing is an important branch of combinatorial testing that focuses on finding a minimum test suite that satisfies pairwise coverage. However, most existing methods fail to strike a good balance between exploration and exploitation when searching for the test suite, or do not fully exploit the information carried by already-generated test cases; this can lead to unsatisfactory combination coverage. To address these limitations, we propose APT-DRL, an adaptive pairwise testing framework based on deep reinforcement learning. Within this framework, we develop a deep reinforcement learning model for pairwise testing based on the Proximal Policy Optimization (PPO) method. We design the pairwise coverage vector as the state space and use neural networks to solve the search problem over this large state space. To reduce the size of the Markov decision space, we also design a masking technique that prevents the repeated generation of actions (test cases) that have already been used. We conducted experiments with APT-DRL and eight baseline algorithms spanning three categories. The results show that APT-DRL, as a novel pairwise testing method, significantly outperforms four random-based pairwise testing methods (RT, ARTsum, FSCS-HD, and FSCS-SD); matches or surpasses two heuristic algorithms (IPOG and AETG); and offers better test-suite-generation efficiency and effectiveness than two swarm-intelligence-based algorithms (GSTG and DPSO).
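
The state representation and action masking described above can be illustrated with a minimal sketch. It assumes a toy three-parameter system with two values per parameter; helper names such as `coverage_vector` and `masked_sample` are illustrative rather than taken from the paper, and the uniform logits stand in for the output of a PPO policy network.

```python
import itertools
import numpy as np

# Hypothetical system under test: 3 parameters, 2 values each.
PARAM_VALUES = [2, 2, 2]

# Every possible test case is one action the policy can pick.
CANDIDATES = list(itertools.product(*[range(v) for v in PARAM_VALUES]))

# Enumerate every (param_i, val_i, param_j, val_j) pair to be covered.
PAIRS = [
    (i, vi, j, vj)
    for i, j in itertools.combinations(range(len(PARAM_VALUES)), 2)
    for vi in range(PARAM_VALUES[i])
    for vj in range(PARAM_VALUES[j])
]
PAIR_INDEX = {p: k for k, p in enumerate(PAIRS)}

def coverage_vector(test_suite):
    """Binary state vector: entry k is 1 iff pair k is covered by the suite."""
    state = np.zeros(len(PAIRS), dtype=np.float32)
    for case in test_suite:
        for i, j in itertools.combinations(range(len(case)), 2):
            state[PAIR_INDEX[(i, case[i], j, case[j])]] = 1.0
    return state

def masked_sample(logits, used):
    """Mask already-generated test cases out of the action distribution."""
    masked = np.where(used, -np.inf, logits)   # -inf logit => probability 0
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return np.random.choice(len(logits), p=probs)

# Example: after generating test case (0, 1, 0), its action is masked,
# so the policy cannot produce it again.
suite = [(0, 1, 0)]
used = np.array([c in set(suite) for c in CANDIDATES])
state = coverage_vector(suite)
logits = np.zeros(len(CANDIDATES))  # stand-in for the policy-network output
action = masked_sample(logits, used)
print(CANDIDATES[action], int(state.sum()), "pairs covered so far")
```

The masking step shrinks the effective action space at every episode step, which is the intuition behind the reduction of the Markov decision space mentioned in the abstract.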
| Original language | English |
|---|---|
| Article number | 103353 |
| Journal | Science of Computer Programming |
| Volume | 247 |
| Early online date | 23 Jun 2025 |
| DOIs | |
| Publication status | Published Online - 23 Jun 2025 |
Keywords
- Combinatorial testing
- Deep reinforcement learning
- Pairwise testing
- Proximal policy optimization
- Test case generation
ASJC Scopus subject areas
- Software
- Information Systems
- Modelling and Simulation
- Computational Theory and Mathematics