Robust and Ubiquitous Mobility Mode Estimation Using Limited Cellular Information

Author's Department

Computer Science & Engineering Department

Third Author's Department

Computer Science & Engineering Department

https://doi.org/10.1109/TVT.2024.3454208

All Authors

Sherif Mostafa, Khaled A. Harras, Moustafa Youssef

Document Type

Research Article

Publication Title

IEEE Transactions on Vehicular Technology

Publication Date

1-1-2024

doi

10.1109/TVT.2024.3454208

Abstract

Recent mobility mode estimation systems rely on signals from only the serving cell tower so that they can be deployed on all phones. However, existing solutions depend on handcrafted statistical features and therefore provide relatively low accuracy. Moreover, their performance is adversely affected by variations in phone placement, hindering their real-world use. To address these limitations, we present AutoSense, the first mobility mode detection system to provide high accuracy using signals from the serving tower only while remaining robust to phone placement variations. AutoSense introduces a novel domain-specific deep learning model that performs automatic feature extraction and temporal processing, improving mode estimation accuracy over existing solutions. Furthermore, AutoSense integrates a denoising autoencoder into its model to learn salient features that are robust to changing phone placements. As part of its design, AutoSense addresses several challenges, including cellular data sparsity, the absence of motion information, and information decay in long-term dependencies. We conduct extensive evaluations using a real-world public dataset comprising different phone placements. Our results show that AutoSense significantly outperforms state-of-the-art systems, achieving up to 20% and 19% higher average precision and recall, respectively. In addition, AutoSense exhibits remarkable resilience to unseen placements, achieving up to 60% better robustness on average than the state of the art.
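
To make the abstract's architectural idea concrete, the sketch below illustrates how a denoising autoencoder can be combined with a recurrent layer for mode classification from serving-cell measurements. This is a minimal illustrative sketch only, not the authors' implementation: the feature choice (serving-cell RSRP, RSSI, timing advance), the Gaussian noise model used to mimic placement variation, the layer sizes, and the set of mobility modes are all assumptions.

# Minimal sketch (PyTorch) of the idea described in the abstract: a
# denoising autoencoder learns placement-robust features from serving-cell
# measurements, and a recurrent layer handles temporal processing.
# Feature names, noise model, layer sizes, and mode set are assumptions,
# not the authors' actual AutoSense architecture.
import torch
import torch.nn as nn

N_FEATURES = 3   # e.g., serving-cell RSRP, RSSI, timing advance (assumed)
LATENT_DIM = 16
N_MODES = 4      # e.g., still, walk, bike, vehicle (assumed)

class DenoisingEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(),
                                     nn.Linear(32, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(),
                                     nn.Linear(32, N_FEATURES))

    def forward(self, x):
        # Corrupt the input to mimic placement-induced signal distortion,
        # then force the latent code to reconstruct the clean measurement.
        noisy = x + 0.1 * torch.randn_like(x)
        z = self.encoder(noisy)
        return z, self.decoder(z)

class ModeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.dae = DenoisingEncoder()
        self.gru = nn.GRU(LATENT_DIM, 32, batch_first=True)
        self.head = nn.Linear(32, N_MODES)

    def forward(self, seq):                  # seq: (batch, time, N_FEATURES)
        z, recon = self.dae(seq)             # per-timestep robust features
        out, _ = self.gru(z)                 # temporal processing
        return self.head(out[:, -1]), recon  # mode logits + reconstruction

# Training would typically combine a reconstruction loss (nn.MSELoss) on
# `recon` with a classification loss (nn.CrossEntropyLoss) on the logits.
model = ModeClassifier()
logits, recon = model(torch.randn(8, 20, N_FEATURES))  # 8 windows, 20 samples each

In this sketch, the reconstruction objective encourages the encoder to discard placement-specific distortion, while the GRU captures the longer-term temporal patterns that distinguish mobility modes; how AutoSense actually realizes these components is detailed in the paper itself.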

Comments

Article. Record derived from SCOPUS.
