Generating Python Mutants from Bug Fixes using Neural Machine Translation

Creative Commons License


IEEE Access, vol.11, pp.85678-85693, 2023 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 11
  • Publication Date: 2023
  • DOI: 10.1109/ACCESS.2023.3302695
  • Journal Name: IEEE Access
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, Directory of Open Access Journals
  • Page Numbers: pp.85678-85693
  • Keywords: abstract syntax tree, bug fixes, deep learning, evaluation of generated code quality, mining software repositories, mutation testing, neural machine translation, software quality, software testing, software verification and validation
  • Eskisehir Osmangazi University Affiliated: Yes


Due to the fast-paced development of technology, software has become a crucial aspect of modern life, facilitating the operation and management of hardware devices. Nevertheless, substandard software can cause severe problems for users and even put human lives at risk, which underscores the importance of error-free, high-quality software. Verification and validation are essential to high-quality software development, and software testing is integral to this process. Although code coverage is a prevalent metric for assessing the efficacy of test suites, it has limitations; mutation testing has been proposed as a remedy. Mutation testing is also recognized as a method for guiding test case creation and evaluating the effectiveness of test suites. Our proposed method autonomously learns mutations from faults in real-world software applications. First, our approach extracts bug fixes at the method level, classifies them according to mutation type, and performs code abstraction. It then uses a deep learning technique based on neural machine translation to train mutation models. Our method has been trained and evaluated on approximately 588k bug-fix commits extracted from GitHub. The results of our experimental assessment indicate that our models can predict mutations resembling the resolved bugs in 6% to 35% of instances. The models effectively revert fixed code to its original buggy version, reproducing the original bug and generating various other buggy variants with up to 94% accuracy, and more than 96% of the generated mutants are lexically and syntactically correct.
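To make the code-abstraction step more concrete: before training, each method's identifiers and literals are typically replaced with indexed placeholder tokens so the translation model works over a small, reusable vocabulary instead of project-specific names. The sketch below illustrates this idea for Python using the standard `tokenize` module; the placeholder names (`ID_n`, `LIT_n`) and token categories are illustrative assumptions, not the paper's exact abstraction scheme.

```python
# Minimal sketch of method-level code abstraction (illustrative, not the
# paper's exact scheme): identifiers and literals become indexed placeholders.
import io
import keyword
import tokenize

def abstract_method(source: str) -> str:
    """Replace identifiers and literals in Python source with placeholders."""
    mapping = {}                      # concrete token -> placeholder
    counters = {"ID": 0, "LIT": 0}
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in (tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER):
            continue                  # drop layout-only tokens
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            key = ("ID", tok.string)
            if key not in mapping:    # same name always maps to same placeholder
                counters["ID"] += 1
                mapping[key] = f"ID_{counters['ID']}"
            out.append(mapping[key])
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            key = ("LIT", tok.string)
            if key not in mapping:
                counters["LIT"] += 1
                mapping[key] = f"LIT_{counters['LIT']}"
            out.append(mapping[key])
        elif tok.type in (tokenize.NEWLINE, tokenize.NL):
            out.append("<NL>")        # explicit line-break token
        elif tok.string:
            out.append(tok.string)    # keywords, operators, punctuation
    return " ".join(out)

fixed = "def add(a, b):\n    return a + b\n"
print(abstract_method(fixed))
# → def ID_1 ( ID_2 , ID_3 ) : <NL> return ID_2 + ID_3 <NL>
```

A fixed/buggy method pair abstracted this way forms one training example for the translation model, with the fixed version as the source sequence and the buggy version as the target; the mapping dictionary allows concrete names to be restored in generated mutants.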