NOISE GENERATION METHOD BASED ON A SET OF NOISY IMAGES WITHOUT CLEAN EXAMPLES
Abstract
In this work, a novel method is proposed for generating noise from noisy images without aligned pairs of clean and noisy data. Unlike traditional approaches that require matched image sets or an a priori noise model, the proposed technique learns the complex noise characteristics intrinsic to a specific CMOS sensor solely from observed noisy data. Noise synthesis is performed by a U-Net-like generative adversarial architecture based on StyleGANv2, with a modified discriminator conditioned on camera parameters and input image metadata. Particular attention is paid to preserving the spatial-color structure and textural details of each image, enforced by a dedicated loss function that maintains fidelity to the original color rendering and fine-grained patterns. The noise generator is trained without any paired clean and noisy images, which is especially valuable for real-world datasets acquired from multiple camera models under varied lighting conditions. The experimental section presents a detailed comparative analysis of the synthesized images using the PSNR and SSIM metrics, together with an evaluation of the noise distribution based on intensity statistics and spectral characteristics. It is demonstrated that the generated dataset serves effectively as a standalone training corpus for denoising neural networks and, when combined with a real dataset (e.g., SIDD), yields further gains in denoising performance. Training on the union of generated and real examples improves PSNR by 1.5 dB on average compared with existing methods that rely on aligned data. Independence from the optical characteristics of any particular sensor substantially broadens the method's applicability.
These findings confirm the utility of the proposed approach for realistic noise synthesis and removal in scenarios lacking clean reference images, and they open avenues for future research into adaptive noise-model generation.
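For reference, the PSNR and SSIM metrics used in the comparative analysis can be computed as in the illustrative NumPy sketch below. This is not the authors' evaluation code: the simplified SSIM here uses global image statistics, whereas the full SSIM of Wang et al. [22] averages local, Gaussian-windowed statistics.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified SSIM using global statistics (no sliding Gaussian window)."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from Wang et al.
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Example: a clean intensity ramp vs. a copy corrupted by Gaussian noise
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 64), (64, 1))
noisy = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255)
print(psnr(clean, noisy))         # roughly 28 dB for sigma = 10
print(ssim_global(clean, noisy))  # between 0 and 1
```

Reported PSNR gains such as the 1.5 dB figure above correspond to roughly a 30% reduction in mean squared error, since PSNR is logarithmic in MSE.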
References
1. Al Mudhafar R.A., El Abbadi N.K. Noise in Digital Image Processing: A Review Study, 2022 3rd Information Technology To Enhance e-learning and Other Application (IT-ELA), 2022, pp. 79-84. DOI: 10.1109/IT-ELA57378.2022.10107965.
2. Srujana P., et al. Comparison of Image Denoising using Convolutional Neural Network (CNN) with Traditional Method, 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), 2021, pp. 826-831. DOI: 10.1109/ICCMC51019.2021.9418244.
3. Brouk I., Nemirovsky A., Nemirovsky Y. Analysis of noise in CMOS image sensor, 2008 IEEE International Conference on Microwaves, Communications, Antennas and Electronic Systems, 2008, pp. 1-8. DOI: 10.1109/COMCAS.2008.4562800.
4. Henz B., Gastal E.S.L., Oliveira M.M. Synthesizing Camera Noise using Generative Adversarial Networks, IEEE Trans. Vis. Comput. Graph., 2021, Vol. 27, No. 3, pp. 2123-2135. DOI: 10.1109/TVCG.2020.3012120.
5. Hasinoff S.W., Durand F., Freeman W.T. Noise-optimal capture for high dynamic range photography, 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010, pp. 553-560. Available at: https://api.semanticscholar.org/CorpusID:7762067.
6. Zhang F., et al. Towards General Low-Light Raw Noise Synthesis and Modeling, Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV). Oct. 2023, pp. 10820-10830.
7. Wu Q., et al. Realistic Noise Synthesis with Diffusion Models, arXiv preprint, 2023. arXiv:2305.14022 [cs.CV]. Available at: https://arxiv.org/abs/2305.14022.
8. Lee S., Kim T.H. NoiseTransfer: Image Noise Generation with Contrastive Embeddings, Proc. Asian Conf. Comput. Vis. (ACCV). Dec. 2022, pp. 3569-3585.
9. Lin X., et al. Unsupervised Image Denoising in Real-World Scenarios via Self-Collaboration Parallel Generative Adversarial Branches, 2023 IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2023, pp. 12608-12618. DOI: 10.1109/ICCV51070.2023.01162.
10. Zhu J.-Y., et al. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks, 2017 IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 2242-2251. DOI: 10.1109/ICCV.2017.244.
11. Kwon T., Ye J.C. Cycle-Free CycleGAN Using Invertible Generator for Unsupervised Low-Dose CT Denoising, IEEE Trans. Comput. Imaging, 2021, Vol. 7, pp. 1354-1368. DOI: 10.1109/TCI.2021.3129369.
12. Gevers T., Stokman H. Robust Histogram Construction from Color Invariants for Object Recognition, IEEE Trans. Pattern Anal. Mach. Intell., 2004, Vol. 26, No. 1, pp. 113-117. DOI: 10.1109/TPAMI.2004.1261083.
13. Zhang R., et al. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR). Jun. 2018.
14. Abdelhamed A., Lin S., Brown M.S. A High-Quality Denoising Dataset for Smartphone Cameras, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR). Jun. 2018.
15. Zhang Q., et al. Conditional Adversarial Domain Generalization With a Single Discriminator for Bearing Fault Diagnosis, IEEE Trans. Instrum. Meas., 2021, Vol. 70, pp. 1-15. DOI: 10.1109/TIM.2021.3071350.
16. Agustsson E., Timofte R. NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study, Proc. CVPR Workshops. Jul. 2017.
17. Huang J.-B., Singh A., Ahuja N. Single Image Super-Resolution From Transformed Self-Exemplars, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2015, pp. 5197-5206.
18. Paszke A., et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, Adv. Neural Inf. Process. Syst. (NeurIPS), 2019, Vol. 32, pp. 8024-8035. Available at: http://papers.neurips.cc/paper/9015.
19. Karras T., et al. Analyzing and Improving the Image Quality of StyleGAN, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2020, pp. 8107-8116. DOI: 10.1109/CVPR42600.2020.00813.
20. Kingma D.P., Ba J. Adam: A Method for Stochastic Optimization, Int. Conf. Learn. Represent. (ICLR). May 2015. Available at: http://arxiv.org/abs/1412.6980.
21. Yue Z., et al. Dual Adversarial Network: Toward Real-World Noise Removal and Noise Generation, In: Vedaldi A., et al. (Eds.) Computer Vision – ECCV 2020. Cham: Springer, 2020, pp. 41-58. ISBN: 978-3-030-58607-2.
22. Wang Z., et al. Image Quality Assessment: From Error Visibility to Structural Similarity, IEEE Trans. Image Process., 2004, Vol. 13, No. 4, pp. 600-612.
23. Kousha S., et al. Modeling sRGB Camera Noise with Normalizing Flows, Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR). Jun. 2022, pp. 17442-17450. DOI: 10.1109/CVPR52688.2022.01694.
24. Fu Z., Guo L., Wen B. sRGB Real Noise Synthesizing with Neighboring Correlation-Aware Noise Model, Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), 2023, pp. 1683-1691. DOI: 10.1109/CVPR52729.2023.00168.
25. Afifi M., Brubaker M.A., Brown M.S. HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms, Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2020, pp. 7937-7946. Available at: https://api.semanticscholar.org/CorpusID:227151819.
26. Li Y., et al. LSDIR: A Large Scale Dataset for Image Restoration, Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), 2023, pp. 1775-1787. DOI: 10.1109/CVPRW59228.2023.00178.
27. Zhang K., et al. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising, IEEE Trans. Image Process., 2017, Vol. 26, No. 7, pp. 3142-3155. DOI: 10.1109/TIP.2017.2662206.
28. Komatsu R., Gonsalves T. Comparing U-Net Based Models for Denoising Color Images, AI, 2020, Vol. 1, No. 4, pp. 465-486. ISSN: 2673-2688. DOI: 10.3390/ai1040029. Available at: https://www.mdpi.com/2673-2688/1/4/29.
29. Chu X., Chen L., Yu W. NAFSSR: Stereo Image Super-Resolution Using NAFNet, Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW). Jun. 2022, pp. 1239-1248.