Abstract
This thesis addresses a timely and important problem: improving target recognition and real-time response in autonomous vehicles (AVs) under simulated conditions. The approach equips simulated vehicles with enhanced sensory capabilities and evaluates whether these improvements make them safer and more reliable, building on the YOLOv7 object detection algorithm and the CARLA simulation environment. The CARLA 0.9.14 simulator on Ubuntu 20.04 was adopted as a more stable option than the initially used CARLA 0.9.15 on Ubuntu 22.04; both run on Unreal Engine 4.26. Stereo cameras and LIDAR in the CARLA simulator were used to build a robust simulated environment for data collection across times of day, weather conditions, and urban and rural scenarios in different town layouts. From a broader set of 160,000 images generated through a sensor-fusion model overlaying stereo camera and LIDAR data, a focused dataset of 4,113 images was manually annotated. YOLOv7 served as the object detection algorithm, and the contribution of this work lies in testing its enhancements over previous YOLO models; comparisons were also made against other recent object detection methods for autonomous vehicle applications. The main object classes of interest were cars, pedestrians, and cyclists, since these are the classes with which an AV collision would be most dangerous. Detection performance of the YOLOv7 model improved dramatically as training was extended from 100 to 700 epochs. At an intersection over union (IoU) threshold of 0.5, YOLOv7 achieved a mean average precision (mAP) of 76.3%, a 12% improvement over its predecessors. Performance also varied by target class: cars were the most accurately detected class, with a precision of 0.841, a recall of 0.843, and mAP values of 0.835 and 0.590 at the 0.5 and 0.5:0.95 thresholds, respectively. In real-world applications, YOLOv7 should therefore yield strong results for detecting and tracking a wide variety of object classes across many different environments. While the thesis validates the performance improvements of AV systems within simulated settings, future work should focus on the physical implementation of these technologies in actual vehicles and testing in real-world scenarios. In addition, further research should explore integrating real-time object avoidance capabilities to enhance the practical applicability and safety of autonomous vehicles in dynamic and unpredictable environments.
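As context for the data-collection pipeline summarized above, the sketch below shows how a CARLA client can attach an RGB camera and a LIDAR sensor to an ego vehicle and save frames to disk using the standard CARLA Python API. It is a minimal illustration, not the thesis's actual configuration: the vehicle blueprint, sensor transforms, and output paths are assumptions.

```python
# Minimal CARLA data-collection sketch (assumes a CARLA 0.9.x server
# is already running on localhost:2000). Blueprint choices, sensor
# mounting positions, and output paths are illustrative assumptions.
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Spawn an ego vehicle at the first available spawn point.
vehicle_bp = blueprint_library.filter('vehicle.tesla.model3')[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)  # let the vehicle drive while we record

# Attach an RGB camera and a LIDAR to the vehicle.
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera = world.spawn_actor(
    camera_bp,
    carla.Transform(carla.Location(x=1.5, z=2.4)),
    attach_to=vehicle)

lidar_bp = blueprint_library.find('sensor.lidar.ray_cast')
lidar = world.spawn_actor(
    lidar_bp,
    carla.Transform(carla.Location(z=2.5)),
    attach_to=vehicle)

# Save each camera frame and LIDAR sweep to disk for later annotation.
camera.listen(lambda image: image.save_to_disk('out/%06d.png' % image.frame))
lidar.listen(lambda scan: scan.save_to_disk('out/%06d.ply' % scan.frame))
```

Varying the world weather and map between runs (e.g., via `world.set_weather` and `client.load_world`) is how a setup like this can cover the different times of day, weather conditions, and town layouts described in the abstract.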
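The mAP figures quoted above depend on the intersection-over-union matching criterion: a detection counts as correct when its predicted box overlaps the ground-truth box by at least the threshold (0.5, or a sweep from 0.5 to 0.95). The short sketch below spells out that computation; the (x1, y1, x2, y2) box convention is an assumption for illustration.

```python
# IoU of two axis-aligned boxes in (x1, y1, x2, y2) form.
def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# This pair overlaps by 80%, so it matches at the 0.5 threshold
# used for the reported mAP@0.5 of 76.3%.
assert iou((0, 0, 10, 10), (2, 0, 10, 10)) >= 0.5
```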
School
School of Sciences and Engineering
Department
Robotics, Control & Smart Systems Program
Degree Name
MS in Robotics, Control and Smart Systems
Graduation Date
Winter 1-31-2025
Submission Date
9-20-2024
First Advisor
Maki Habib
Committee Member 1
Ashraf Nassef
Committee Member 2
Khaled El Sayed
Committee Member 3
Amr El-Mougy
Extent
203 p.
Document Type
Master's Thesis
Institutional Review Board (IRB) Approval
Approval has been obtained for this item
Recommended Citation
APA Citation
Hussein, M.
(2025). Navigating the Future: Advancing Autonomous Vehicles through Robust Target Recognition and Real-Time Avoidance [Master's thesis, The American University in Cairo]. AUC Knowledge Fountain.
https://fount.aucegypt.edu/etds/2422
MLA Citation
Hussein, Mohammed Ahmed Mohammed. Navigating the Future: Advancing Autonomous Vehicles through Robust Target Recognition and Real-Time Avoidance. 2025. American University in Cairo, Master's Thesis. AUC Knowledge Fountain.
https://fount.aucegypt.edu/etds/2422