AN ATTENTION U-NET BASED ON NOISE INCONSISTENCY FOR ROBUST IMAGE FORGERY DETECTION AND PRECISE PIXEL-LEVEL LOCALIZATION
With the rapid advancement of image editing tools, digitally manipulated images have become increasingly common, raising serious concerns about the authenticity and reliability of visual information across domains such as social media, journalism, healthcare, and legal investigations. Although recent deep learning approaches, especially U-Net and attention-based models, have demonstrated promising results in forgery detection and localization, many of these methods mainly depend on visual and texture-based features. As a result, they often struggle to maintain performance when images undergo post-processing operations like compression, resizing, or contrast adjustments.
To address these challenges, this work introduces a Noise-Inconsistency Guided Attention U-Net (NIGA-UNet) for robust image forgery detection and precise pixel-level localization. The proposed model captures noise-related inconsistencies by extracting noise residuals and analyzing their local variations, which are then fed into a noise-guided attention mechanism that directs the network toward manipulated regions. In addition, a dual-encoder U-Net architecture with multi-scale feature fusion combines visual and noise-domain information.
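To make the described pipeline concrete, below is a minimal PyTorch sketch of the three ingredients named above: a fixed high-pass filter that extracts noise residuals, a noise-guided attention gate that re-weights visual features with a noise-derived spatial mask, and a shallow dual-encoder U-Net that fuses the two streams at each scale. The module names (NoiseResidual, NoiseGuidedAttention, DualEncoderUNet), layer widths, and the choice of a Laplacian-style residual filter are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseResidual(nn.Module):
    """Fixed 3x3 high-pass (Laplacian-style) filter, applied per channel.
    Assumed stand-in for the paper's noise-residual extraction step."""
    def __init__(self, channels=3):
        super().__init__()
        k = torch.tensor([[-1., -1., -1.],
                          [-1.,  8., -1.],
                          [-1., -1., -1.]]) / 8.0
        # One frozen filter per input channel (depthwise convolution).
        self.register_buffer("kernel", k.expand(channels, 1, 3, 3).clone())
        self.channels = channels

    def forward(self, x):
        return F.conv2d(x, self.kernel, padding=1, groups=self.channels)

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class NoiseGuidedAttention(nn.Module):
    """Noise features yield a spatial mask that re-weights visual features."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, visual, noise):
        return visual * self.gate(noise)

class DualEncoderUNet(nn.Module):
    """Two-level dual-encoder U-Net; a real model would go deeper."""
    def __init__(self, base=32):
        super().__init__()
        self.noise = NoiseResidual()
        self.v1, self.v2 = conv_block(3, base), conv_block(base, base * 2)   # visual encoder
        self.n1, self.n2 = conv_block(3, base), conv_block(base, base * 2)   # noise encoder
        self.att1, self.att2 = NoiseGuidedAttention(base), NoiseGuidedAttention(base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)  # per-pixel forgery logit

    def forward(self, x):
        r = self.noise(x)
        v1, n1 = self.v1(x), self.n1(r)
        f1 = self.att1(v1, n1)                        # fused skip features (full scale)
        v2, n2 = self.v2(self.pool(v1)), self.n2(self.pool(n1))
        f2 = self.att2(v2, n2)                        # fused bottleneck features
        d = self.dec(torch.cat([self.up(f2), f1], dim=1))
        return self.head(d)                           # (B, 1, H, W) localization logits

if __name__ == "__main__":
    logits = DualEncoderUNet()(torch.randn(2, 3, 64, 64))
    print(logits.shape)  # torch.Size([2, 1, 64, 64])

A deeper variant with more encoder levels, or a learned residual extractor (e.g., SRM-style filters) in place of the fixed Laplacian kernel, would follow the same pattern.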