Polyhedral Conic Classifiers for Computer Vision Applications and Open Set Recognition

ÇEVİKALP H., Saglamlar H.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol.43, no.2, pp.608-622, 2021 (Peer-Reviewed Journal)

  • Publication Type: Article
  • Volume: 43 Issue: 2
  • Publication Date: 2021
  • Doi Number: 10.1109/tpami.2019.2934455
  • Journal Indexes: Science Citation Index Expanded, Scopus, Academic Search Premier, PASCAL, ABI/INFORM, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, MEDLINE, Metadex, zbMATH, Civil Engineering Abstracts
  • Page Numbers: pp.608-622
  • Keywords: Support vector machines, Training, Object detection, Visualization, Neural networks, Face, Dogs, Polyhedral conic classifiers, object detection, large margin classifiers, open set recognition


This paper introduces a family of quasi-linear discriminants that outperform current large-margin methods in sliding-window visual object detection and open set recognition tasks. In these applications, the classification problems are both numerically imbalanced - positive (object class) training and test windows are much rarer than negative (non-class) ones - and geometrically asymmetric - the positive samples typically form compact, visually coherent groups, while the negatives are far more diverse, including anything at all that is not a well-centered sample from the target class. Such tasks call for discriminants whose decision regions tightly circumscribe the positive class while still accounting for negatives in zones where the two classes overlap. To this end, we propose a family of quasi-linear "polyhedral conic" discriminants whose positive regions are distorted L1 or L2 balls. We also integrate the proposed classification loss into deep neural networks so that the features and the classifier can be learned simultaneously in an end-to-end fashion, improving classification accuracy. The methods have properties and run-time complexities comparable to linear Support Vector Machines (SVMs), and they can be trained from either binary or positive-only samples using constrained quadratic programs related to SVMs. Our experiments show that they significantly outperform linear SVMs, deep neural networks trained with the softmax loss, and existing one-class discriminants on a wide range of object detection, face verification, open set recognition, and conventional closed-set classification tasks.
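To make the "distorted L1 ball" idea concrete, the sketch below evaluates a polyhedral conic decision function of the general form f(x) = w·(x - c) + γ‖x - c‖₁ - b, whose acceptance region {x : f(x) ≤ 0} is an L1 ball around the vertex c, skewed by the linear term. This is a minimal illustration only: the parameter values are invented toy numbers, not ones learned by the paper's quadratic programs.

```python
def pcc_score(x, w, c, gamma, b):
    """Polyhedral conic decision function:
        f(x) = w . (x - c) + gamma * ||x - c||_1 - b
    A point x is accepted as positive when f(x) <= 0, so the positive
    region is a distorted L1 ball centered at the cone vertex c.
    """
    diff = [xi - ci for xi, ci in zip(x, c)]
    linear = sum(wi * di for wi, di in zip(w, diff))
    l1 = sum(abs(d) for d in diff)
    return linear + gamma * l1 - b

# Hypothetical parameters for a 2-D toy problem (illustrative only):
w = [0.1, -0.2]     # linear weights, skewing the ball
c = [1.0, 1.0]      # cone vertex, roughly the positive-class center
gamma, b = 1.0, 2.0 # L1 slope and offset

print(pcc_score([1.0, 1.0], w, c, gamma, b) <= 0)  # vertex lies inside: True
print(pcc_score([5.0, 5.0], w, c, gamma, b) <= 0)  # distant point: False
```

Replacing the single L1 term with per-dimension weighted absolute differences yields the L2-ball-like variants the paper also covers; the acceptance test stays the same thresholded form.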