Video Swin-CLSTM Transformer: Enhancing human action recognition with optical flow and long-term dependencies.
Journal:
PLOS ONE
Publication Date:
Jul 7, 2025
Abstract
As video data volumes grow exponentially, video content analysis, and in particular Human Action Recognition (HAR), has become increasingly important in fields such as intelligent surveillance, sports analytics, medical rehabilitation, and virtual reality. However, current deep learning-based HAR methods struggle to recognize subtle actions against complex backgrounds, to comprehend long-term semantics, and to maintain computational efficiency. To address these challenges, we introduce the Video Swin-CLSTM Transformer. Built on the Video Swin Transformer backbone, our model incorporates optical flow information at the input stage, using a sparse sampling strategy, to counteract background interference. Combined with the backbone's 3D Patch Partition and Patch Merging operations, it efficiently extracts and fuses multi-level features from both optical flow and raw RGB inputs, thereby strengthening the model's ability to capture motion characteristics in complex backgrounds. In addition, embedding Convolutional Long Short-Term Memory (ConvLSTM) units further improves the model's capacity to capture and understand long-term dependencies among key actions in videos. Experiments on the UCF-101 dataset show that our model achieves mean Top-1/Top-5 accuracies of 92.8% and 99.4%, which are 3.2% and 2.0% higher than those of the baseline model, while the computational cost at peak performance is reduced by an average of 3.3% compared to models without optical flow. Ablation studies further validate the effectiveness of the model's key components: integrating optical flow and embedding the ConvLSTM module yield maximum improvements in mean Top-1 accuracy of 2.6% and 1.9%, respectively. Notably, employing our custom ImageNet-1K-LSTM pre-trained model yields a maximum increase of 2.7% in mean Top-1 accuracy compared to the traditional ImageNet-1K pre-trained model. These experimental results indicate that our model offers certain advantages over other Swin Transformer-based methods for video HAR tasks.
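To make the pipeline described in the abstract more concrete, the sketch below (PyTorch) illustrates the overall idea only: two-channel optical flow is concatenated with RGB at the input, a simplified stand-in for the Video Swin backbone's 3D patch embedding extracts spatio-temporal features, and a ConvLSTM head scans the temporal axis to model long-term dependencies before classification. This is not the authors' implementation; the module names, channel sizes, frame counts, and the placeholder backbone are assumptions for illustration.

```python
# Minimal, illustrative sketch (not the authors' code) of an RGB + optical-flow
# input, a Video Swin-style 3D patch embedding stand-in, and a ConvLSTM head.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell: LSTM gates computed with 2D convolutions."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class VideoSwinCLSTMSketch(nn.Module):
    """Illustrative RGB+flow -> 3D patch features -> ConvLSTM -> classifier."""

    def __init__(self, num_classes=101, feat_ch=96, hid_ch=96):
        super().__init__()
        # Placeholder for the Video Swin backbone's 3D Patch Partition / Merging
        # stages: a strided 3D conv embedding 5-channel input (3 RGB + 2 flow).
        self.patch_embed = nn.Conv3d(5, feat_ch, kernel_size=(2, 4, 4), stride=(2, 4, 4))
        self.cell = ConvLSTMCell(feat_ch, hid_ch)
        self.head = nn.Linear(hid_ch, num_classes)

    def forward(self, rgb, flow):
        # rgb: (B, 3, T, H, W); flow: (B, 2, T, H, W) from sparsely sampled frames.
        x = self.patch_embed(torch.cat([rgb, flow], dim=1))  # (B, C, T', H', W')
        b, c, t, h, w = x.shape
        hid = x.new_zeros(b, self.cell.hid_ch, h, w)
        cell = x.new_zeros(b, self.cell.hid_ch, h, w)
        for step in range(t):  # ConvLSTM scans the temporal axis of the feature map
            hid, cell = self.cell(x[:, :, step], (hid, cell))
        return self.head(hid.mean(dim=(2, 3)))  # pool spatially, then classify


if __name__ == "__main__":
    model = VideoSwinCLSTMSketch()
    rgb = torch.randn(2, 3, 16, 224, 224)   # 16 sparsely sampled RGB frames
    flow = torch.randn(2, 2, 16, 224, 224)  # corresponding 2-channel optical flow
    print(model(rgb, flow).shape)           # torch.Size([2, 101])
```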