Riemannian Dictionary Learning and Sparse Coding for Positive Definite Matrices.

Journal: IEEE Transactions on Neural Networks and Learning Systems
Published Date:

Abstract

Data encoded as symmetric positive definite (SPD) matrices frequently arise in many areas of computer vision and machine learning. While these matrices form an open subset of the Euclidean space of symmetric matrices, viewing them through the lens of non-Euclidean Riemannian geometry often turns out to be better suited to capturing several desirable data properties. Inspired by the great success of dictionary learning and sparse coding (DLSC) for vector-valued data, our goal in this paper is to represent data in the form of SPD matrices as sparse conic combinations of SPD atoms from a learned dictionary via a Riemannian geometric approach. To that end, we formulate a novel Riemannian optimization objective for DLSC, in which the representation loss is characterized via the affine-invariant Riemannian metric. We also present a computationally simple algorithm for optimizing our model. Experiments on several computer vision data sets demonstrate superior classification and retrieval performance using our approach when compared with sparse coding via alternative non-Riemannian formulations.
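
To make the formulation in the abstract concrete, the following is a minimal Python sketch of the two ingredients it names: the affine-invariant Riemannian metric (AIRM) distance between SPD matrices, and the coding objective that measures how well a sparse, nonnegative (conic) combination of SPD dictionary atoms reconstructs an input SPD matrix. The function names, the squared-distance-plus-l1 form of the objective, and the penalty weight `lam` are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import numpy as np
from scipy.linalg import sqrtm, logm


def airm_distance(X, Y):
    """Affine-invariant Riemannian distance between SPD matrices X and Y:
    d(X, Y) = || logm(X^{-1/2} Y X^{-1/2}) ||_F."""
    X_inv_sqrt = np.linalg.inv(sqrtm(X).real)
    M = X_inv_sqrt @ Y @ X_inv_sqrt
    return np.linalg.norm(logm(M).real, 'fro')


def sparse_coding_objective(X, atoms, alpha, lam=0.1):
    """Illustrative coding objective for one SPD matrix X: squared AIRM loss
    to a conic combination of SPD atoms plus an l1 sparsity penalty.
    alpha is assumed nonnegative so the reconstruction stays positive definite."""
    X_hat = sum(a * A for a, A in zip(alpha, atoms))
    return airm_distance(X, X_hat) ** 2 + lam * np.sum(np.abs(alpha))


# Toy usage: evaluate the objective for a random SPD input and dictionary.
rng = np.random.default_rng(0)

def random_spd(d):
    B = rng.standard_normal((d, d))
    return B @ B.T + d * np.eye(d)  # well-conditioned SPD matrix

d, n_atoms = 5, 8
atoms = [random_spd(d) for _ in range(n_atoms)]
X = random_spd(d)
alpha = np.abs(rng.standard_normal(n_atoms))  # nonnegative sparse code
print(sparse_coding_objective(X, atoms, alpha))
```

In the full DLSC model the atoms themselves are also optimized; this sketch only shows the sparse-coding loss that such an alternating scheme would minimize with respect to the code.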

Authors

  • Anoop Cherian
  • Suvrit Sra
    Macro-Eyes, Inc, Seattle, WA, United States.
