Integrated codec decomposed Transformer for long-term series forecasting.
Journal:
Neural Networks: The Official Journal of the International Neural Network Society
Published Date:
Apr 23, 2025
Abstract
Recently, Transformer-based and multilayer perceptron (MLP)-based architectures have formed a competitive landscape in time series forecasting. There is evidence that series decomposition can further enhance a model's ability to perceive temporal patterns. However, most existing Transformer-based decomposed models capture seasonal features progressively and merely add trends back in as an auxiliary signal for forecasting; they ignore the deep information contained in trends, which can lead to pattern mismatch at the fusion stage. In addition, the permutation invariance of the attention mechanism inevitably causes a loss of temporal order. After an in-depth analysis of how well attention and linear layers suit each series component, we propose using attention to learn multivariate correlations from trends and an MLP to capture seasonal patterns. We further introduce an integrated codec that provides the same multivariate relationship representation to both the encoding and decoding stages, ensuring effective inheritance of temporal dependencies. To mitigate the fading of sequentiality during attention, we propose a trend enhancement module, which maintains the stability of the trend by expanding the series to a longer time scale, helping the attention mechanism achieve fine-grained feature representations. Extensive experiments show that our model achieves state-of-the-art prediction performance on large-scale datasets.
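The abstract does not specify the model's internals (the integrated codec and trend enhancement module are only named), so the PyTorch sketch below illustrates just the core division of labor it describes: a moving-average decomposition splits the series into trend and seasonal parts, cross-variable attention learns multivariate correlations from the trend, and a channel-wise MLP forecasts the seasonal component. All class and parameter names here (`DecomposedForecaster`, `kernel_size`, etc.) are hypothetical, not the authors' code.

```python
import torch
import torch.nn as nn

class MovingAvgDecomp(nn.Module):
    """Moving-average decomposition: trend = smoothed series, season = residual."""
    def __init__(self, kernel_size: int):
        super().__init__()
        self.kernel_size = kernel_size
        self.avg = nn.AvgPool1d(kernel_size, stride=1)

    def forward(self, x):  # x: (batch, seq_len, n_vars)
        pad = (self.kernel_size - 1) // 2
        # Replicate-pad both ends so the trend keeps the input length.
        front = x[:, :1, :].repeat(1, pad, 1)
        back = x[:, -1:, :].repeat(1, self.kernel_size - 1 - pad, 1)
        padded = torch.cat([front, x, back], dim=1)
        trend = self.avg(padded.transpose(1, 2)).transpose(1, 2)
        season = x - trend
        return season, trend

class DecomposedForecaster(nn.Module):
    """Hypothetical sketch: attention on trends, MLP on seasonal patterns."""
    def __init__(self, seq_len, pred_len, d_model=64, kernel_size=25):
        super().__init__()
        self.decomp = MovingAvgDecomp(kernel_size)
        # Attention over the variable axis learns multivariate correlations
        # from the trend component, as the abstract proposes.
        self.trend_embed = nn.Linear(seq_len, d_model)
        self.trend_attn = nn.MultiheadAttention(d_model, num_heads=4,
                                                batch_first=True)
        self.trend_head = nn.Linear(d_model, pred_len)
        # A per-variable MLP maps seasonal history to the seasonal forecast.
        self.season_mlp = nn.Sequential(
            nn.Linear(seq_len, d_model), nn.GELU(), nn.Linear(d_model, pred_len))

    def forward(self, x):  # x: (batch, seq_len, n_vars)
        season, trend = self.decomp(x)
        t = self.trend_embed(trend.transpose(1, 2))     # (batch, n_vars, d_model)
        t, _ = self.trend_attn(t, t, t)                 # cross-variable attention
        trend_out = self.trend_head(t).transpose(1, 2)  # (batch, pred_len, n_vars)
        season_out = self.season_mlp(season.transpose(1, 2)).transpose(1, 2)
        return trend_out + season_out                   # fuse the two components

model = DecomposedForecaster(seq_len=96, pred_len=24)
y = model(torch.randn(8, 96, 7))  # -> (8, 24, 7)
```

Note that this sketch fuses the components by simple addition; the paper's integrated codec, which shares one multivariate relationship representation between encoding and decoding, and its trend enhancement module, which expands the series to a longer time scale, are not reproduced here.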