SoftwareX, vol. 29, 2025 (SCI-Expanded)
The growing deployment of AI systems in high-risk environments, together with the increasing need to integrate AI into portable devices, makes rigorous assessment of their quality and reliability essential. Existing tools for analyzing the strength, safety, and quality of Deep Neural Network (DNN) models are limited. CleanAI addresses this gap as an advanced testing system for evaluating the robustness, quality, and dependability of DNN models. It implements eleven coverage testing methods, giving developers insight into DNN quality, enabling analysis of model performance, and generating comprehensive output reports. This study compares several ResNet models using activation, boundary, and interaction coverage metrics, revealing qualitative differences between them. The comparative analysis informs developers and establishes a benchmark for tailoring AI solutions to stringent quality standards. It also encourages reconsideration of model complexity and memory footprint, pointing toward optimized designs with improved performance and efficiency. By simplifying models and reducing their size, CleanAI can accelerate AI models, yielding significant time and cost savings. Leveraging CleanAI's comprehensive coverage metrics, developers can identify areas for refinement, producing streamlined models with reduced memory requirements. This not only improves computational efficiency but also supports the growing demand for lightweight AI systems deployable on portable devices. By bridging the gap between robustness and efficiency, CleanAI serves as a crucial tool for advancing AI development while maintaining high standards of quality and reliability.
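The abstract does not detail how the coverage metrics are defined. As a rough illustration of what an activation-based coverage measure can look like, the sketch below computes classic neuron activation coverage: the fraction of neurons driven above a threshold by at least one test input. The function name `neuron_coverage`, the threshold value, and the data layout are assumptions for this example, not CleanAI's actual API.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.5):
    """Fraction of neurons activated above `threshold` on at least one input.

    `activations` is a list of per-layer arrays, each of shape
    (num_inputs, num_neurons). Illustrative only -- CleanAI's actual
    metric definitions may differ.
    """
    covered = 0
    total = 0
    for layer in activations:
        # A neuron counts as covered if any input pushes it past the threshold.
        covered += int(np.sum(layer.max(axis=0) > threshold))
        total += layer.shape[1]
    return covered / total

# Toy example: two layers with two neurons each, three test inputs.
acts = [
    np.array([[0.9, 0.1], [0.2, 0.3], [0.1, 0.2]]),  # layer 1: only neuron 0 covered
    np.array([[0.6, 0.7], [0.4, 0.1], [0.2, 0.8]]),  # layer 2: both neurons covered
]
print(neuron_coverage(acts))  # 3 of 4 neurons -> 0.75
```

Boundary and interaction metrics extend this idea, e.g. by tracking whether activations fall outside ranges observed during training or whether particular neuron combinations fire together.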