An Interpretable Quantum Adjoint Convolutional Layer for Image Classification.

Journal: IEEE Transactions on Cybernetics
Published Date:

Abstract

The interpretability of quantum machine learning (QML) refers to the capability to provide clear and understandable explanations for the predictions and decision-making processes of QML models. However, most quantum convolutional layers (QCLs) use closed-box structures that are inherently devoid of interpretability, leading to opaque principles and suboptimal mappings of classical data, which significantly undermines the reliability of QML models. In addition, most current work on QML interpretability focuses on post hoc explanations, seriously neglecting the exploration of intrinsic causes. To tackle these challenges, we introduce the quantum adjoint convolution operation (QACO), an intrinsically interpretable scheme based on quantum evolution: its quantum mapping corresponds precisely to the positions and pixel values of the image, and its principle is equivalent to the Frobenius inner product (FIP). Furthermore, we extend QACO into the quantum adjoint convolutional layer (QACL) by integrating the quantum phase estimation (QPE) algorithm, enabling the parallel computation of all FIPs. Experimental results on the PennyLane and TensorFlow platforms show that our method achieves 6.3%, 3.4%, and 2.9% higher average test accuracy on the Fashion MNIST, MNIST, and DermaMNIST datasets, respectively, than classical and uninterpretable quantum counterparts, while maintaining 73.3% accuracy under Gaussian noise, demonstrating superior generalizability and resilience in practical scenarios.
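The abstract states that QACO's principle is equivalent to the Frobenius inner product: each output of a convolution is the FIP between a kernel and an image patch. The sketch below illustrates only this classical equivalence, not the quantum circuit itself; the function name `conv2d_as_fip` is an illustrative choice, not part of the paper, and, following machine-learning convention, "convolution" here means cross-correlation (no kernel flip).

```python
import numpy as np

def conv2d_as_fip(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D convolution computed patch-by-patch as Frobenius
    inner products <patch, kernel>_F = sum(patch * kernel)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            # Each output pixel is one Frobenius inner product;
            # QACL's claimed role is to evaluate all of these in parallel.
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(9.0).reshape(3, 3)   # toy 3x3 "image"
kernel = np.ones((2, 2))               # toy 2x2 kernel
print(conv2d_as_fip(image, kernel))    # 2x2 map of FIP values
```

Viewing convolution this way is what makes the layer's behavior inspectable: every activation has a closed-form meaning as an inner product between known quantities.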

Authors

  • Shi Wang
    Ministry of Education Key Laboratory of Marine Genetics and Breeding, Ocean University of China, Qingdao, China.
  • Mengyi Wang
  • Ren-Xin Zhao
  • Licheng Liu
  • Yaonan Wang
