ACformer: A unified transformer for arbitrary-frame image exposure correction.
Journal:
Neural Networks: The Official Journal of the International Neural Network Society
Published Date:
Jan 18, 2025
Abstract
Both single-image exposure correction (SEC) methods and multi-image exposure fusion (MEF) methods aim to obtain a well-exposed image, but from different numbers of input images. Despite their promising performance on the specific SEC or MEF task, few of these methods explore the inherent correlation behind the shared goal of the SEC and MEF tasks. In this paper, we propose to unify the SEC and MEF tasks into a single task of "Arbitrary-Frame Exposure Correction" (AF-EC) with an arbitrary number of input frames. To tackle the AF-EC task, we develop an Arbitrary-Frame Exposure Correction Transformer (ACformer) as an integrated model that achieves mutually boosted performance on both the SEC and MEF tasks. Our ACformer is built mainly upon the proposed Parallel Feature Fusion and Correction (PFFC) module, which simultaneously performs feature-level exposure correction on each input image via Spatial Self-Attention and Channel Self-Attention blocks, and feature-level exposure fusion across an arbitrary number of input frames via Temporal Self-Attention blocks. Experiments on two commonly used datasets demonstrate that our ACformer outperforms comparison methods designed specifically for the SEC or MEF tasks, in terms of both objective metrics and subjective visual quality. The code and pretrained models will be publicly released.
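The abstract only names the building blocks of the PFFC module, so the PyTorch sketch below is merely one plausible way a PFFC-style block could combine per-frame spatial and channel self-attention with temporal self-attention over an arbitrary number of frames. All module names (SpatialSelfAttention, ChannelSelfAttention, TemporalSelfAttention, PFFCBlock), tensor shapes, and the additive merge of the two branches are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch of the Parallel Feature Fusion and Correction (PFFC) idea:
# spatial and channel self-attention correct each frame's features, while
# temporal self-attention fuses features across an arbitrary number of frames.
# Layer sizes and the combination rule are assumptions for illustration only.
import torch
import torch.nn as nn


class SpatialSelfAttention(nn.Module):
    """Multi-head self-attention over the spatial positions of one frame."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):            # x: (B*T, H*W, C)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out


class ChannelSelfAttention(nn.Module):
    """Self-attention where tokens are channels and features are pixels."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):            # x: (B*T, N, C), N = H*W
        h = self.norm(x)
        q, k, v = self.qkv(h).chunk(3, dim=-1)                    # each (B*T, N, C)
        attn = torch.softmax(q.transpose(1, 2) @ k / (q.shape[1] ** 0.5), dim=-1)  # (B*T, C, C)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)          # back to (B*T, N, C)
        return x + self.proj(out)


class TemporalSelfAttention(nn.Module):
    """Self-attention across the T input frames at each spatial position."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):            # x: (B*H*W, T, C)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out


class PFFCBlock(nn.Module):
    """Parallel per-frame correction (spatial + channel) and cross-frame fusion."""
    def __init__(self, dim):
        super().__init__()
        self.spatial = SpatialSelfAttention(dim)
        self.channel = ChannelSelfAttention(dim)
        self.temporal = TemporalSelfAttention(dim)

    def forward(self, feats):        # feats: (B, T, C, H, W), T is arbitrary
        B, T, C, H, W = feats.shape
        tokens = feats.flatten(3).transpose(2, 3)                 # (B, T, H*W, C)

        # Per-frame correction branch: spatial then channel self-attention.
        per_frame = tokens.reshape(B * T, H * W, C)
        per_frame = self.channel(self.spatial(per_frame))
        per_frame = per_frame.reshape(B, T, H * W, C)

        # Cross-frame fusion branch: attention along the temporal axis.
        temporal = tokens.permute(0, 2, 1, 3).reshape(B * H * W, T, C)
        temporal = self.temporal(temporal)
        temporal = temporal.reshape(B, H * W, T, C).permute(0, 2, 1, 3)

        fused = per_frame + temporal                              # simple parallel merge (assumed)
        return fused.transpose(2, 3).reshape(B, T, C, H, W)


if __name__ == "__main__":
    block = PFFCBlock(dim=32)
    for t in (1, 3, 5):              # SEC (t = 1) and MEF (t > 1) share one model
        x = torch.randn(2, t, 32, 16, 16)
        print(t, block(x).shape)     # (2, t, 32, 16, 16)

The point of the sketch is the shared interface: a single block accepts one frame (the SEC case) or several frames (the MEF case) without any architectural change, which is what allows one integrated model to serve both tasks.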