Real-Time Traffic Light Identification Using the YOLOv3 Algorithm for Autonomous Vehicles

Rachel Kozel (Purdue University)
Naeemah Robert (New York Institute of Technology)

Safety has always been a priority in automobile manufacturing, and traffic light detection plays a major role in the safety of autonomous vehicles. Previous approaches combined image processing with a trained neural network model, but they were not fully successful: they had trouble detecting yellow and arrow traffic lights, could not identify a traffic light that spans only a few pixels at long distances, and failed when traffic lights were partially obstructed. This project introduces a new method that identifies traffic lights and their states in both urban and suburban areas using a deep learning model based on YOLOv3. We will train the YOLOv3 model on the large set of traffic light images in the 'Bosch Small Traffic Lights Dataset'. We expect our proposed method to succeed because a better-processed dataset captures traffic lights in busier environments, and a balanced dataset allows the model to be trained to identify traffic signals and their states more reliably. Success of the proposed method will improve the feasibility of autonomous vehicles by making intersections safer; the main risk is missed traffic signals, which could lead to accidents. A camera will record the data, which will later be processed. The mid-term check for success is accurate detection of traffic lights in ROS simulations, and the final check for success is a test drive of the University of Arizona's CAT Vehicle in which it detects traffic lights at intersections.
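As a rough sketch of the detection step described above, the snippet below runs a trained YOLOv3 network on a single camera frame using OpenCV's DNN module. The config file, weights file, class labels, input image name, and thresholds are all placeholders and assumptions for illustration, not artifacts of this project.

    import cv2
    import numpy as np

    CFG = "yolov3-traffic-lights.cfg"          # hypothetical Darknet config file
    WEIGHTS = "yolov3-traffic-lights.weights"  # hypothetical trained weights
    CLASSES = ["red", "yellow", "green"]       # assumed traffic-light states

    net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
    out_layers = net.getUnconnectedOutLayersNames()

    def detect(frame, conf_thresh=0.5, nms_thresh=0.4):
        """Return (label, confidence, [x, y, w, h]) for each detected light."""
        h, w = frame.shape[:2]
        # YOLOv3 expects a square, scaled blob; 416x416 is the usual input size.
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                     swapRB=True, crop=False)
        net.setInput(blob)
        outputs = net.forward(out_layers)

        boxes, scores, class_ids = [], [], []
        for output in outputs:
            for det in output:
                class_scores = det[5:]
                class_id = int(np.argmax(class_scores))
                score = float(class_scores[class_id])
                if score < conf_thresh:
                    continue
                # Detections come back as relative center-x, center-y, width, height.
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(score)
                class_ids.append(class_id)

        # Non-maximum suppression drops overlapping duplicate boxes.
        keep = cv2.dnn.NMSBoxes(boxes, scores, conf_thresh, nms_thresh)
        return [(CLASSES[class_ids[i]], scores[i], boxes[i])
                for i in np.array(keep).flatten()]

    if __name__ == "__main__":
        frame = cv2.imread("intersection_frame.png")  # placeholder camera frame
        for label, score, box in detect(frame):
            print(label, round(score, 2), box)

In practice, a call like detect() could be wrapped in a ROS node that subscribes to the vehicle's camera topic and publishes the detected light states, which is one way the ROS-simulation check above could be exercised.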

Technical Report: http://csl.arizona.edu/content/real-time-traffic-lights-identification-u...
