CVABS: moving object segmentation with common vector approach for videos


IŞIK Ş., ÖZKAN K., Gerek O. N.

IET COMPUTER VISION, vol. 13, no. 8, pp. 719-729, 2019 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 13 Issue: 8
  • Publication Date: 2019
  • DOI: 10.1049/iet-cvi.2018.5642
  • Journal Name: IET COMPUTER VISION
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp. 719-729
  • Keywords: computer vision, object detection, video signal processing, image motion analysis, image segmentation, smart foreground detection, background updating procedure, dynamic control parameters, dynamic backgrounds, objective measures, background frames, object segmentation, common vector approach, real-time computer vision applications, security systems, subspace-based background-modelling algorithm, CDNet2014 dataset, Gram-Schmidt orthogonalisation, self-learning feedback mechanism, MULTISCALE, TRACKING
  • Affiliated with Eskişehir Osmangazi Üniversitesi: Yes

Abstract

Background modelling is a fundamental step for several real-time computer vision applications, such as security and monitoring systems. An accurate background model helps to detect the activity of moving objects in a video. In this work, the authors have developed a new subspace-based background-modelling algorithm using the concept of the common vector approach (CVA) with Gram-Schmidt orthogonalisation. Once the background model that captures the common characteristics of different views of the same scene is acquired, a smart foreground detection and background updating procedure is applied based on dynamic control parameters. A variety of experiments are conducted on different problem types related to dynamic backgrounds. Several types of metrics are utilised as objective measures, and the obtained visual results are judged subjectively. It was observed that the proposed method performs successfully on all problem types reported in the CDNet2014 dataset by updating the background frames with a self-learning feedback mechanism.
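
For illustration, the sketch below outlines how a common-vector background model can be built with Gram-Schmidt orthogonalisation and then used for foreground detection. It is a minimal reconstruction of the general CVA idea under assumptions made for the example (flattened grey-level frames, a fixed distance threshold `tau`, random data standing in for real video); it is not the authors' implementation and omits the dynamic control parameters and self-learning background update described in the paper.

```python
import numpy as np

def gram_schmidt(vectors, eps=1e-10):
    """Orthonormalise a list of 1-D vectors, dropping near-dependent ones."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > eps:
            basis.append(w / norm)
    return basis

def common_vector(frames):
    """Common vector of a set of flattened background frames.

    The difference vectors x_i - x_0 span the difference subspace; the
    common vector is the component of any frame orthogonal to that subspace.
    """
    x0 = frames[0]
    diffs = [x - x0 for x in frames[1:]]
    basis = gram_schmidt(diffs)
    proj = sum(np.dot(x0, b) * b for b in basis)
    return x0 - proj, basis

def foreground_mask(frame, x_com, basis, tau=30.0):
    """Per-pixel mask: pixels whose component outside the difference
    subspace deviates from the common vector by more than the assumed
    threshold tau are labelled foreground."""
    proj = sum(np.dot(frame, b) * b for b in basis)
    residual = frame - proj
    return np.abs(residual - x_com) > tau

# Usage sketch with random data in place of grey-level background frames.
h, w, n = 120, 160, 8
bg_frames = [np.random.rand(h * w) * 255 for _ in range(n)]
x_com, basis = common_vector(bg_frames)
new_frame = np.random.rand(h * w) * 255
mask = foreground_mask(new_frame, x_com, basis).reshape(h, w)
```

In the paper, the background frames are additionally refreshed through a self-learning feedback mechanism with dynamic control parameters; the sketch above keeps the model static and only shows the modelling and detection steps.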