Turkish Journal of Electrical Engineering and Computer Sciences, vol. 29, no. 4, pp. 2101-2115, 2021 (SCI-Expanded)
© TÜBİTAK

Autonomous transport vehicles (ATVs) are among the most substantial components of the smart factories of Industry 4.0. They are primarily used to transfer goods or to perform certain navigation tasks in the factory through self-driving. Recent developments in computer vision allow vehicles to visually perceive their environment and the objects in it. There are numerous applications, especially for smart traffic networks in outdoor environments, but there is a lack of applications and datasets for autonomous transport vehicles in indoor industrial environments. Smart factories contain essential safety and direction signs, and these signs play an important role in safety; therefore, their detection by ATVs is crucial. In this study, a visual dataset that includes important indoor safety signs is created to simulate a factory environment. The dataset has been used to train several fast-responding, popular deep learning object detection methods: Faster R-CNN, YOLOv3, YOLOv4, SSD, and RetinaNet. These methods can be executed in real time to enhance the visual understanding of the ATV, which in turn helps the agent navigate safely and reliably in smart factories. The trained network models were compared in terms of accuracy on the created dataset, and YOLOv4 achieved the best performance among all the tested methods.
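To illustrate the kind of real-time deployment the abstract describes, the following is a minimal sketch (not the authors' code) of running a trained YOLOv4 detector on an ATV camera stream with OpenCV's DNN module. The file names, class list, camera index, and thresholds are illustrative assumptions, not values reported in the paper.

    # Minimal sketch: real-time safety-sign detection with a YOLOv4 model via OpenCV DNN.
    # "yolov4-signs.cfg" / "yolov4-signs.weights" and the class names are hypothetical.
    import cv2

    CFG_PATH = "yolov4-signs.cfg"          # hypothetical Darknet config of the trained model
    WEIGHTS_PATH = "yolov4-signs.weights"  # hypothetical trained weights
    CLASS_NAMES = ["exit", "fire_extinguisher", "first_aid", "no_entry"]  # example sign classes

    # Load the network and wrap it in OpenCV's high-level detection API.
    net = cv2.dnn.readNetFromDarknet(CFG_PATH, WEIGHTS_PATH)
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

    cap = cv2.VideoCapture(0)  # on-board ATV camera (index 0 as a placeholder)
    while True:
        ok, frame = cap.read()
        if not ok:
            break

        # Run detection; returns class ids, confidence scores, and bounding boxes.
        class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
        for class_id, score, box in zip(class_ids, scores, boxes):
            x, y, w, h = box
            label = f"{CLASS_NAMES[int(class_id)]}: {float(score):.2f}"
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

        cv2.imshow("ATV safety-sign detection", frame)
        if cv2.waitKey(1) == 27:  # press Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

In such a setup, the detected sign classes and bounding boxes would be passed to the ATV's navigation logic rather than drawn on screen; the display loop above is only for visual verification.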