Deep quanvolutional neural networks with enhanced trainability and gradient propagation.

Journal: Scientific Reports

Abstract

In this paper, we explore methods to enhance the performance of a frequently used variant of Quantum Convolutional Neural Networks, known as Quanvolutional Neural Networks (QuNNs), by introducing trainable quanvolutional layers and addressing the challenges of training multi-layered, or deep, QuNNs. Traditional QuNNs mostly rely on static (non-trainable) quanvolutional layers, which limits their feature-extraction capabilities. Our approach makes these layers trainable, significantly improving the scalability and learning potential of QuNNs. However, deep QuNNs face difficulties in gradient-based optimization because gradients do not flow freely across all layers of the network. To overcome this, we propose Residual Quanvolutional Neural Networks (ResQuNNs), which employ residual learning by adding skip connections between quanvolutional layers. These residual blocks enhance gradient flow throughout the network, enabling effective training, and thus deep learning, in QuNNs. Moreover, we provide empirical evidence on the optimal placement of these residual blocks, demonstrating how strategic configurations improve gradient flow and lead to more efficient training. Our findings represent a significant advancement in quantum deep learning, opening new possibilities for both theoretical exploration and practical quantum computing applications.
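The residual mechanism the abstract describes can be sketched in a few lines. The sketch below is purely illustrative and makes assumptions not in the source: `quanvolution` is a classical stand-in for the paper's parameterized quantum circuit, and the patch and parameter shapes are arbitrary. What it does show faithfully is the structure of a residual block: the skip connection adds the block's input to its output, giving gradients a direct path around the quanvolutional layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def quanvolution(patch, params):
    # Stand-in for a trainable quanvolutional filter. In the paper this
    # would be a parameterized quantum circuit applied to an image patch;
    # here a bounded nonlinear map serves as a classical placeholder.
    return np.tanh(patch @ params)

def residual_quanvolution(patch, params):
    # Residual block: the skip connection adds the input back onto the
    # layer output, so gradients can bypass the quanvolutional layer.
    return quanvolution(patch, params) + patch

patch = rng.normal(size=(4,))      # flattened 2x2 image patch (assumed shape)
params = rng.normal(size=(4, 4))   # trainable layer parameters (assumed shape)

out = residual_quanvolution(patch, params)
print(out.shape)  # (4,)
```

Note that the skip connection requires the layer's output to have the same shape as its input; this shape constraint is one reason the placement of residual blocks within the network matters.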

Authors

  • Muhammad Kashif
    Department of Medical Informatics, RWTH Aachen University, Pauwelsstr. 30, 52057 Aachen, Germany. Electronic address: Muhammad.kashif@rwth-aachen.de.
  • Muhammad Shafique
    Faculty of Engineering and Applied Sciences, Riphah International University Islamabad, Islamabad 44000, Pakistan.
