A Novel Deep Learning Model for Image Recomposition of Adverse Weather Conditions to Isolate Critical Subjects in Camera Input from Autonomous Vehicles
Abstract – As modern transportation trends toward adopting autonomous vehicles, the acceptance and implementation of these automobiles ultimately depend on the safety guaranteed during their operation. A vital safety component in these systems is a robust object detection system that can detect cars, bicycles, traffic lights, etc. However, as climate change worsens globally, adverse environmental conditions such as heavy rain, snow, fog, and haze are becoming increasingly prevalent. These conditions degrade the performance of object detection models, putting both the passengers of autonomous vehicles and pedestrians at severe risk. This research proposes a novel framework that employs image recomposition to remove adverse weather conditions from driving frames. The model consists of two discrete modules that process and refine images. The first is the visibility complementary module (VCM), which assesses the clarity of each frame and applies a recomposition model and contrast adjustment. The second is the object detection module (ODM), which bounds and labels critical subjects in the frame. The resulting VCMODM model is then tested and compared against YOLOv3, a leading design on the market. Although our framework was slightly slower than YOLOv3, it outperformed YOLOv3 in detection accuracy across all tested weather conditions.
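The two-stage flow described above (clarity assessment and enhancement in the VCM, then detection in the ODM) can be sketched as follows. This is a minimal illustration of the pipeline's structure only: the clarity metric, the enhancement step, the placeholder detector, and parameters such as `clarity_threshold` are all hypothetical stand-ins, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    label: str
    box: Tuple[int, int, int, int]  # x, y, width, height


class VisibilityComplementaryModule:
    """VCM: assesses frame clarity and enhances degraded frames.

    The clarity proxy and enhancement here are illustrative placeholders;
    the paper's VCM uses a learned recomposition model plus contrast
    adjustment.
    """

    def __init__(self, clarity_threshold: float = 0.5):
        # Hypothetical tuning parameter, not from the source.
        self.clarity_threshold = clarity_threshold

    def assess_clarity(self, frame: List[List[float]]) -> float:
        # Crude stand-in: mean pixel intensity as a clarity score.
        pixels = [p for row in frame for p in row]
        return sum(pixels) / len(pixels)

    def enhance(self, frame: List[List[float]]) -> List[List[float]]:
        # Stand-in for recomposition + contrast adjustment.
        return [[min(1.0, p * 1.5) for p in row] for row in frame]

    def __call__(self, frame: List[List[float]]) -> List[List[float]]:
        if self.assess_clarity(frame) < self.clarity_threshold:
            return self.enhance(frame)
        return frame


class ObjectDetectionModule:
    """ODM: bounds and labels critical subjects in the (enhanced) frame."""

    def __call__(self, frame: List[List[float]]) -> List[Detection]:
        # Placeholder: a real ODM would run a trained detection network.
        return [Detection("car", (0, 0, len(frame[0]), len(frame)))]


def vcmodm_pipeline(frame: List[List[float]]) -> List[Detection]:
    """Run a frame through the VCM, then the ODM."""
    vcm = VisibilityComplementaryModule()
    odm = ObjectDetectionModule()
    return odm(vcm(frame))
```

A degraded (dim) frame is first enhanced by the VCM before detection runs, while a clear frame passes through unchanged, matching the conditional refinement the abstract describes.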