Generating Python Mutants from Bug Fixes using Neural Machine Translation


Creative Commons License

Aşik S., Yayan U.

IEEE Access, vol.11, pp.85678-85693, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 11
  • Publication Date: 2023
  • DOI: 10.1109/access.2023.3302695
  • Journal Name: IEEE Access
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, Directory of Open Access Journals
  • Page Numbers: pp.85678-85693
  • Keywords: abstract syntax tree, bug fixes, deep learning, evaluation of generated code quality, mining software repositories, mutation testing, neural machine translation, software quality, software testing, software verification and validation
  • Affiliated with Eskişehir Osmangazi University: Yes

Abstract

Due to the fast-paced development of technology, software has become a crucial aspect of modern life, facilitating the operation and management of hardware devices. Nevertheless, substandard software can cause severe problems for users, even putting human lives at risk. This underscores the importance of error-free, high-quality software. Verification and validation are essential to high-quality software development, and software testing is integral to this process. Although code coverage is a prevalent method for assessing the efficacy of test suites, it has limitations; mutation testing has been proposed as a remedy. Mutation testing is also recognized as a method for guiding test case creation and evaluating the effectiveness of test suites. Our proposed method autonomously learns mutations from faults in real-world software applications. First, we extract bug fixes at the method level, classify them according to mutation type, and perform code abstraction. We then apply a deep learning technique based on neural machine translation to train mutation models. Our method has been trained and assessed using approximately 588k bug-fix commits extracted from GitHub. Our experimental assessment indicates that the models can predict mutations resembling resolved bugs in 6% to 35% of instances. The models effectively revert fixed code to its original buggy version, reproducing the original bug and generating various other buggy codes with up to 94% accuracy. More than 96% of the generated mutants are also lexically and syntactically correct.
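The code abstraction step mentioned in the abstract can be illustrated with a minimal sketch. The placeholder scheme below (`VAR_i`, `NUM_i`, `STR_i`) and the `abstract_method` helper are illustrative assumptions, not the paper's exact implementation; the general idea is to replace project-specific identifiers and literals in a Python method with indexed abstract tokens, shrinking the vocabulary before buggy/fixed code pairs are fed to a neural machine translation model:

```python
import ast

def abstract_method(source: str) -> str:
    """Replace variable names and literals in a Python method with
    indexed placeholder tokens (a hypothetical scheme: VAR_i, NUM_i,
    STR_i), a common vocabulary-reduction step before training a
    seq2seq / NMT model on code pairs."""
    tree = ast.parse(source)
    var_map: dict[str, str] = {}
    num_map: dict[str, str] = {}
    str_map: dict[str, str] = {}

    class Abstractor(ast.NodeTransformer):
        def visit_arg(self, node: ast.arg) -> ast.arg:
            # Function parameters share the same VAR_i space as usages.
            var_map.setdefault(node.arg, f"VAR_{len(var_map) + 1}")
            node.arg = var_map[node.arg]
            return node

        def visit_Name(self, node: ast.Name) -> ast.Name:
            var_map.setdefault(node.id, f"VAR_{len(var_map) + 1}")
            node.id = var_map[node.id]
            return node

        def visit_Constant(self, node: ast.Constant) -> ast.AST:
            # Numeric and string literals become NUM_i / STR_i tokens,
            # emitted here as bare names so the result still parses.
            if isinstance(node.value, (int, float)) and not isinstance(node.value, bool):
                num_map.setdefault(repr(node.value), f"NUM_{len(num_map) + 1}")
                token = num_map[repr(node.value)]
            elif isinstance(node.value, str):
                str_map.setdefault(node.value, f"STR_{len(str_map) + 1}")
                token = str_map[node.value]
            else:
                return node
            return ast.copy_location(ast.Name(id=token, ctx=ast.Load()), node)

    abstracted = Abstractor().visit(tree)
    ast.fix_missing_locations(abstracted)
    return ast.unparse(abstracted)  # requires Python 3.9+
```

For example, `abstract_method("def add(a, b):\n    return a + b + 1")` maps `a` and `b` to `VAR_1` and `VAR_2` and the literal `1` to `NUM_1`. In practice the token-to-original mappings would be kept so that a predicted (mutated) abstract sequence can be concretized back into compilable source.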