Perception is a fundamental component of any autonomous driving system, and semantic segmentation is the perception task of assigning semantic class labels to sensor inputs. Although autonomous driving systems are equipped with a suite of sensors, most of the literature has focused on semantic segmentation of camera images alone; the fusion of different sensor modalities for semantic segmentation has received comparatively little attention. Deep learning models based on transformer architectures have proven successful in many computer vision and natural language processing tasks. This work explores the use of deep learning transformers to fuse information from LiDAR and camera sensors in order to improve the segmentation of LiDAR point clouds, and it addresses the question of which fusion level in this deep learning framework yields better performance. Following an empirical approach, different fusion models were designed and evaluated against each other on the SemanticKITTI dataset.
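As a rough illustration of the kind of camera–LiDAR fusion the abstract describes (not the thesis's actual architecture, which is transformer-based), feature-level fusion can be sketched as projecting LiDAR points into the image plane and concatenating each point's LiDAR feature with the camera feature sampled at its projected pixel. All function names, shapes, and the pinhole-camera setup below are illustrative assumptions:

```python
import numpy as np

def project_points(points_xyz, K):
    """Project 3D LiDAR points (N, 3) into the image plane using a
    pinhole intrinsics matrix K (3, 3); returns (N, 2) pixel coords."""
    uvw = points_xyz @ K.T          # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

def fuse_features(lidar_feats, image_feats, uv):
    """Feature-level fusion sketch: sample one camera feature per point
    (nearest pixel) and concatenate it with the point's LiDAR feature."""
    h, w, _ = image_feats.shape
    cols = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    rows = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    cam_per_point = image_feats[rows, cols]  # (N, C_img)
    return np.concatenate([lidar_feats, cam_per_point], axis=1)

# Toy example: 5 points, 4-dim LiDAR features, an 8x8 image feature map
# with 3 channels (all values random; purely for shape checking).
rng = np.random.default_rng(0)
points = rng.uniform(1.0, 5.0, size=(5, 3))          # points in front of camera
K = np.array([[2.0, 0.0, 4.0],
              [0.0, 2.0, 4.0],
              [0.0, 0.0, 1.0]])
lidar_feats = rng.normal(size=(5, 4))
image_feats = rng.normal(size=(8, 8, 3))

fused = fuse_features(lidar_feats, image_feats, project_points(points, K))
print(fused.shape)  # -> (5, 7): 4 LiDAR dims + 3 camera dims per point
```

In the thesis's framing, "fusion level" refers to where in the network this combination happens (e.g. raw inputs versus intermediate features versus late predictions); the sketch above corresponds to an intermediate, feature-level fusion.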
School of Sciences and Engineering
Computer Science & Engineering Department
MS in Computer Science
Institutional Review Board (IRB) Approval: Not necessary for this item
Abdelkader, A. (2022). Camera and LiDAR Fusion for Point Cloud Semantic Segmentation [Master's thesis, The American University in Cairo]. AUC Knowledge Fountain.
Abdelkader, Ali. Camera and LiDAR Fusion for Point Cloud Semantic Segmentation. 2022. The American University in Cairo, Master's thesis. AUC Knowledge Fountain.
Available for download on Wednesday, January 24, 2024