
D2Fusion: Dual-domain fusion with feature superposition for Deepfake detection

Lookup NU author(s): Dr Haoran Duan, Dr Tejal Shah, Dr Varun Ojha, Professor Raj Ranjan

Downloads

Full text for this publication is not currently held within this repository. Alternative links are provided below where available.


Abstract

Deepfake detection is crucial for curbing the harm Deepfakes cause to society. However, current Deepfake detection methods fail to thoroughly explore artifact information across different domains because of insufficient intrinsic interactions. These interactions refer to the fusion and coordination of features after they are extracted from different domains, which are crucial for recognizing complex forgery clues. Focusing on more generalized Deepfake detection, in this work we introduce a novel bi-directional attention module that captures the local positional information of artifact clues in the spatial domain. This enables accurate artifact localization and thus addresses the coarse processing of artifact features. Because the proposed bi-directional attention module may not fully capture global, subtle forgery information in the artifact features (e.g., textures or edges), we further employ a fine-grained frequency attention module in the frequency domain. In this way, we obtain high-frequency information from the fine-grained features, which carries the global, subtle forgery cues. Although the features from these diverse domains can be improved effectively and independently, fusing them directly does not improve detection performance. We therefore propose a feature superposition strategy that complements information from the spatial and frequency domains. This strategy represents the feature components as wave-like tokens that are updated according to their phase, so that the distinctions between authentic and artifact features are amplified. Our method demonstrates significant improvements over state-of-the-art (SOTA) methods on five public Deepfake datasets in capturing abnormalities across different manipulation operations and real-life scenarios. Specifically, in intra-dataset evaluations, D2Fusion surpasses the baseline accuracy by nearly 2.5%. In cross-manipulation evaluations, it exceeds the baseline AUC by up to 6.15%. In multi-source manipulation evaluations, it exceeds SOTA methods by up to 14.62% in P-value, 10.26% in F1-score and 15.13% in R-value. In cross-dataset experiments, it exceeds the baseline AUC by up to 6.25%. As potential applications, D2Fusion can help improve content moderation on social media and aid forensic investigations by accurately identifying tampered content.
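The feature superposition described in the abstract treats tokens from the spatial and frequency branches as waves and combines them according to their phase. The sketch below is a minimal, hedged illustration of that idea in PyTorch, assuming a Wave-MLP-style formulation in which each feature acts as an amplitude with a learned per-channel phase; the class name WaveSuperposition, the linear phase estimators, and the residual weighting are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of phase-based feature superposition for dual-domain fusion.
# All names and design choices here are hypothetical; see the paper for the actual method.
import torch
import torch.nn as nn


class WaveSuperposition(nn.Module):
    """Fuse spatial- and frequency-domain tokens as waves: each token gets a
    learned phase, the two streams are superposed as complex numbers, and the
    magnitude of the superposed wave (plus a residual) is projected back."""

    def __init__(self, dim: int):
        super().__init__()
        self.phase_spat = nn.Linear(dim, dim)   # phase estimator for spatial tokens
        self.phase_freq = nn.Linear(dim, dim)   # phase estimator for frequency tokens
        self.proj = nn.Linear(dim, dim)         # output projection after superposition

    def forward(self, x_spat: torch.Tensor, x_freq: torch.Tensor) -> torch.Tensor:
        # x_spat, x_freq: (batch, tokens, dim) features from the two domains
        theta_s = self.phase_spat(x_spat)       # per-channel phase for the spatial stream
        theta_f = self.phase_freq(x_freq)       # per-channel phase for the frequency stream
        # Treat each feature as a wave: amplitude = feature value, phase = learned angle.
        real = x_spat * torch.cos(theta_s) + x_freq * torch.cos(theta_f)
        imag = x_spat * torch.sin(theta_s) + x_freq * torch.sin(theta_f)
        fused = torch.sqrt(real ** 2 + imag ** 2 + 1e-8)  # magnitude of the superposed wave
        return self.proj(fused) + 0.5 * (x_spat + x_freq)  # residual keeps both streams

if __name__ == "__main__":
    fuse = WaveSuperposition(dim=256)
    spat = torch.randn(2, 196, 256)   # e.g. tokens from a spatial attention branch
    freq = torch.randn(2, 196, 256)   # e.g. tokens from a frequency attention branch
    print(fuse(spat, freq).shape)     # torch.Size([2, 196, 256])
```

The phase terms let aligned (in-phase) components reinforce each other while misaligned ones cancel, which is one plausible way the distinction between authentic and artifact features could be amplified rather than averaged away by direct concatenation.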


Publication metadata

Author(s): Qiu X, Miao X, Wan F, Duan H, Shah T, Ojha V, Long Y, Ranjan R

Publication type: Article

Publication status: Published

Journal: Information Fusion

Year: 2025

Volume: 120

Print publication date: 01/08/2025

Online publication date: 13/03/2025

Acceptance date: 05/03/2025

ISSN (print): 1566-2535

ISSN (electronic): 1872-6305

Publisher: Elsevier B.V.

URL: https://doi.org/10.1016/j.inffus.2025.103087

DOI: 10.1016/j.inffus.2025.103087

