Object Detection in Foggy Weather using Deep Learning Model

Authors

  • M. Faiz, Department of Computer Science, University of Engineering and Technology, Lahore, Pakistan
  • T. Ahmad, Department of Computer Science, University of Engineering and Technology, Lahore, Pakistan
  • G. Mustafa, Department of Computer Science, University of Engineering and Technology, Lahore, Pakistan

DOI:

https://doi.org/10.71330/thenucleus.61.1410

Abstract

This study addresses the challenge of accurate object detection in foggy environments, a critical issue in computer vision. We propose an approach built on a real dataset collected under diverse foggy weather conditions, with a focus on varying fog densities. By annotating the Real-Time Traffic Surveillance (RTTS) dataset and training the YOLOv8x architecture, we systematically analyze the impact of fog density on detection performance. Our experiments show that the YOLOv8x model achieves a mean average precision (mAP) of 78.6% across varying fog densities, outperforming state-of-the-art methods by 4.2% on the augmented dataset. We further show that greater dataset diversity significantly improves the model's robustness when detecting objects under challenging foggy conditions. This research contributes to object detection systems tailored for foggy environments, with implications for safety and efficiency in domains such as autonomous driving and surveillance.
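The mAP metric reported above is the mean, over all object classes, of each class's average precision (AP) computed from its precision–recall curve. A minimal sketch of this computation, using the classic 11-point interpolated AP and hypothetical precision–recall values (not the paper's data or code), could look like this:

```python
def average_precision(recalls, precisions):
    """11-point interpolated AP (Pascal VOC 2007 style).

    recalls/precisions: parallel lists describing one class's
    precision-recall curve, sorted by increasing recall.
    """
    ap = 0.0
    for t in [i / 10 for i in range(11)]:  # recall thresholds 0.0 .. 1.0
        # interpolated precision: best precision at any recall >= t
        p = max((p for r, p in zip(recalls, precisions) if r >= t),
                default=0.0)
        ap += p / 11
    return ap


def mean_average_precision(per_class_curves):
    """mAP = mean of per-class AP over all detected classes."""
    aps = [average_precision(r, p) for r, p in per_class_curves]
    return sum(aps) / len(aps)


# Hypothetical two-class example (values are illustrative only):
curves = [
    ([0.0, 0.5, 1.0], [1.0, 1.0, 1.0]),  # a perfect class: AP = 1.0
    ([0.5], [0.8]),                      # recall saturates at 0.5
]
print(f"mAP = {mean_average_precision(curves):.3f}")
```

Modern evaluations (including COCO-style mAP often reported for YOLO models) instead integrate the full precision–recall curve and average over several IoU thresholds, but the mean-over-classes structure shown here is the same.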

References

J. Li, R. Xu, X. Liu, J. Ma, B. Li, Q. Zou, J. Ma, and H. Yu, “Domain adaptation based object detection for autonomous driving in foggy and rainy weather,” IEEE Transactions on Intelligent Vehicles, pp. 1–12, 2024.

L. A. Tran, “Synthesize hazy/foggy image using Monodepth and atmospheric scattering model,” Towards Data Science, 2021.

H. Abbasi, M. Amini, and F. R. Yu, “Fog-aware adaptive YOLO for object detection in adverse weather,” IEEE Sensors Applications Symposium (SAS), pp. 1–6, 2023.

D. Kumar and N. Muhammad, “Object detection in adverse weather for autonomous driving through data merging and YOLOv8,” Sensors, vol. 23, no. 20, Art. no. 8471, 2023.

M. Mai, P. Duthon, L. Khoudour, A. Crouzil, and S. A. Velastin, “Sparse LiDAR and stereo fusion (SLS-Fusion) for depth estimation and 3D object detection,” 11th International Conference of Pattern Recognition Systems (ICPRS 2021), Online Conference, pp. 150–156, 2021.

X. Meng, Y. Liu, L. Fan, and J. Fan, “YOLOv5s-Fog: An improved model based on YOLOv5s for object detection in foggy weather scenarios,” Sensors, vol. 23, no. 11, Art. no. 5321, 2023.

Z. Liu, S. Zhao, and X. Wang, “Research on driving obstacle detection technology in foggy weather based on GCANet and feature fusion training,” Sensors, vol. 23, no. 5, Art. no. 2822, 2023.

Y. Guo, R. L. Liang, Y. K. Cui, X. M. Zhao, and Q. Meng, “A domain-adaptive method with cycle perceptual consistency adversarial networks for vehicle target detection in foggy weather,” IET Intelligent Transport Systems, vol. 16, no. 7, pp. 971–981, 2022.

Q. Zhang and X. Hu, “MSFFA-YOLO network: Multiclass object detection for traffic investigations in foggy weather,” IEEE Transactions on Instrumentation and Measurement, vol. 72, Art. no. 2528712, 2023.

M. Hu, Y. Wu, Y. Yang, J. Fan, and B. Jing, “DAGL-Faster: Domain adaptive faster R-CNN for vehicle object detection in rainy and foggy weather conditions,” Displays, vol. 79, Art. no. 102484, 2023.

N. A. M. Mai, P. Duthon, P. H. Salmane, L. Khoudour, A. Crouzil, and S. A. Velastin, “Camera and LiDAR analysis for 3D object detection in foggy atmospheric conditions,” in Proceedings of the International Conference of Pattern Recognition Systems (ICPRS), 2022.

Kumari and S. K. Sahoo, “A new fast and efficient dehazing and defogging algorithm for single remote sensing images,” Signal Processing, vol. 215, Art. no. 109289, 2024.

Goyal, A. Dogra, D. C. Lepcha, V. Goyal, A. Alkhayyat, J. S. Chohan, and V. Kukreja, “Recent advances in image dehazing: Formal analysis to automated approaches,” Information Fusion, Art. no. 102151, 2023.

L. Wen, D. Du, Z. Cai, Z. Lei, M. C. Chang, H. Qi, and S. Lyu, “UA-DETRAC: A new benchmark and protocol for multi-object detection and tracking,” Computer Vision and Image Understanding, vol. 193, Art. no. 102907, 2020.

Published

04-02-2025

How to Cite

[1]
M. Faiz, T. Ahmad, and G. Mustafa, “Object Detection in Foggy Weather using Deep Learning Model”, The Nucleus, vol. 61, no. 2, pp. 117–125, Feb. 2025.

Section

Articles