A Hybrid Transformer Architecture with a Quantized Self-Attention Mechanism Applied to Molecular Generation.
Journal:
Journal of Chemical Theory and Computation
Published Date:
May 7, 2025
Abstract
The success of the self-attention mechanism in classical machine learning models has inspired the development of quantum analogs aimed at reducing the computational overhead. Self-attention integrates learnable query (Q) and key (K) matrices to calculate attention scores between all pairs of tokens in a sequence. These scores are then multiplied by a learnable value (V) matrix to obtain the output self-attention matrix, enabling the model to effectively capture long-range dependencies within the input sequence. Here, we propose a hybrid quantum-classical self-attention mechanism as part of a transformer decoder, the architecture underlying large language models (LLMs). To demonstrate its utility in chemistry, we train this model on the QM9 dataset for conditional generation, using SMILES strings as input, each labeled with a set of physicochemical properties that serve as conditions during inference. Our theoretical analysis shows that the time complexity of the query-key dot product is reduced from O(n²d) in a classical model to O(n² log d) in our quantum model, where n and d represent the sequence length and the embedding dimension, respectively. We perform simulations using NVIDIA's CUDA-Q platform, which is designed for efficient GPU scalability. This work provides a promising avenue for quantum-enhanced natural language processing (NLP).
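For reference, the sketch below shows the standard classical scaled dot-product self-attention that the abstract describes (query-key scores followed by multiplication with the value matrix), written in NumPy for a decoder-style causal setting. It is only an illustration of the O(n²d) baseline, not the authors' hybrid quantum-classical mechanism; the weight matrices W_q, W_k, W_v and the dimensions used are hypothetical placeholders.

```python
# Minimal sketch of classical scaled dot-product self-attention (NumPy).
# Illustrates the O(n^2 * d) query-key dot product mentioned in the abstract;
# this is NOT the paper's hybrid quantum-classical implementation.
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """X: (n, d_model) token embeddings; W_*: (d_model, d) learnable projections."""
    Q = X @ W_q                      # queries, shape (n, d)
    K = X @ W_k                      # keys,    shape (n, d)
    V = X @ W_v                      # values,  shape (n, d)
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)    # all-pairs attention scores: O(n^2 * d)
    # Causal mask so each token attends only to itself and earlier tokens,
    # as in a transformer decoder.
    n = X.shape[0]
    mask = np.triu(np.ones((n, n), dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V               # output self-attention matrix, shape (n, d)

# Example usage with random data (illustrative sizes only).
rng = np.random.default_rng(0)
n, d_model, d = 8, 16, 16            # sequence length, model dim, head dim
X = rng.normal(size=(n, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d)) for _ in range(3))
out = self_attention(X, W_q, W_k, W_v)
print(out.shape)                     # (8, 16)
```

In the paper's quantum variant, the d-dimensional query and key vectors are encoded into quantum states over roughly log d qubits, which is the source of the claimed reduction of the dot-product cost from O(n²d) to O(n² log d).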