Cellular Network-Supported Machine Learning Techniques for Autonomous UAV Trajectory Planning

Author's Department

Electronics & Communications Engineering Department

https://doi.org/10.1109/access.2022.3229171

All Authors

Ghada Afifi, Yasser Gadallah

Document Type

Research Article

Publication Title

IEEE Access

Publication Date

1-1-2022

DOI

10.1109/access.2022.3229171

Abstract

Autonomous trajectory planning is an active research topic in UAV mission planning. Autonomous UAVs serve major use cases that involve navigation in complex environments, such as aerial photography, package delivery, and relief operations. Many existing trajectory planning solutions rely on GPS. However, GPS-based solutions do not provide reliable real-time navigation, particularly in dense urban environments. Cellular networks offer an attractive alternative for UAV navigation. We therefore propose to utilize existing 5G infrastructure to enable the UAV to navigate complex environments, independently of GPS and of any detectable signals transmitted by the UAV. Our objective is an efficient solution that enables UAVs to execute such tasks autonomously while meeting real-time operational requirements, without the need to actively interact with the cellular network. For this purpose, we formulate the UAV trajectory planning problem as a joint-objective optimization problem that minimizes a composite cost metric that we introduce. The computational complexity of exact optimization techniques prevents meeting the real-time calculation requirement imposed by the dynamic nature of the environment. To overcome this complexity, we use machine learning-based techniques to solve the formulated trajectory planning problem. Specifically, we propose two machine learning-based techniques: a reinforcement learning-based approach and a deep supervised learning-based approach. We then analyze the performance of each proposed technique against optimization-based approaches and other solutions from the literature. Our simulation results show that the proposed reinforcement learning and deep supervised learning solutions provide near-optimal solutions to the formulated trajectory planning problem, achieving 99% and 98% accuracy, respectively, relative to the optimal bound while meeting the real-time calculation requirement.
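The abstract does not include implementation details. For orientation only, the sketch below shows one generic way a reinforcement learning trajectory planner of this kind can be set up: tabular Q-learning on a small grid, with a hypothetical composite cost that combines distance to the destination and a synthetic cellular link-quality penalty. The grid size, cost weights, hyperparameters, and the link-quality model are all assumptions for illustration and are not taken from the paper.

```python
# Minimal, illustrative sketch only: tabular Q-learning for grid-based trajectory
# planning. The composite cost (distance-to-goal term plus a synthetic coverage
# penalty) is a hypothetical stand-in, NOT the cost metric introduced in the paper.
import numpy as np

GRID = 10                                      # 10x10 navigation grid (assumed)
GOAL = (9, 9)                                  # destination cell (assumed)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

rng = np.random.default_rng(0)
# Synthetic per-cell "link quality" in [0.2, 1.0]; a real system would derive this
# from measured 5G reference-signal quality instead.
link_quality = rng.uniform(0.2, 1.0, size=(GRID, GRID))

def composite_cost(cell):
    """Hypothetical composite cost: distance to goal plus poor-coverage penalty."""
    dist = abs(cell[0] - GOAL[0]) + abs(cell[1] - GOAL[1])
    return 1.0 * dist + 5.0 * (1.0 - link_quality[cell])

Q = np.zeros((GRID, GRID, len(ACTIONS)))       # tabular action-value estimates
alpha, gamma, eps = 0.1, 0.95, 0.1             # learning rate, discount, exploration

for episode in range(3000):
    cell = (0, 0)                              # fixed start cell (assumed)
    for _ in range(200):                       # cap steps per episode
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[cell]))
        dr, dc = ACTIONS[a]
        nxt = (min(max(cell[0] + dr, 0), GRID - 1),
               min(max(cell[1] + dc, 0), GRID - 1))
        # Reward = negative composite cost of the next cell, plus a goal bonus.
        reward = -composite_cost(nxt) + (100.0 if nxt == GOAL else 0.0)
        Q[cell][a] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[cell][a])
        cell = nxt
        if cell == GOAL:
            break

# Greedy rollout of the learned policy from the start cell.
cell, path = (0, 0), [(0, 0)]
while cell != GOAL and len(path) < 50:
    dr, dc = ACTIONS[int(np.argmax(Q[cell]))]
    cell = (min(max(cell[0] + dr, 0), GRID - 1), min(max(cell[1] + dc, 0), GRID - 1))
    path.append(cell)
print(path)
```

In this toy setup the learned policy trades path length against staying in well-covered cells; the paper's actual formulation, cost terms, and learning architectures should be taken from the full text.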

First Page

131996

Last Page

132011
