Abstract
Predictive maintenance (PdM) is an essential pillar of Industry 4.0, aiming to reduce operational downtime, extend equipment life, and enhance cost efficiency. This thesis presents an in-depth study of the development, optimization, and evaluation of hybrid CNN-LSTM learning architectures for time-series-based predictive maintenance tasks. We investigate the performance of both sequential and parallel hybrid models on benchmark datasets of varying complexity: NASA C-MAPSS, N-CMAPSS, and the NASA Battery dataset.
The work begins by examining the limitations of conventional maintenance strategies and current predictive models. Motivated by the need for higher accuracy and generalization, a simulation-based framework was developed to evaluate hybrid architectures under realistic industrial conditions. CNN layers were used for localized feature extraction, while LSTM layers captured long-term temporal dependencies in degradation patterns. Both sequential and parallel CNN-LSTM architectures were developed and tested. Sequential models stack CNN and LSTM layers, whereas the parallel configuration processes data independently through each branch before merging the results (a sketch of this layout is given below). The parallel architecture notably preserved feature integrity and mitigated signal distortion between layers, leading to better learning stability.
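For concreteness, the following minimal sketch (not taken from the thesis; the window length, feature count, and layer sizes are assumptions) shows how a parallel CNN-LSTM for RUL regression can be assembled in Keras, with a convolutional branch and an LSTM branch processing the same windowed sensor input before their outputs are merged.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_parallel_cnn_lstm(window_len=30, n_features=14):
    """Parallel CNN-LSTM: two branches over the same input, merged before the head."""
    inputs = layers.Input(shape=(window_len, n_features))

    # CNN branch: localized feature extraction along the time axis.
    cnn = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inputs)
    cnn = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(cnn)
    cnn = layers.GlobalAveragePooling1D()(cnn)

    # LSTM branch: long-term temporal dependencies in the degradation signal.
    lstm = layers.LSTM(64)(inputs)

    # Merge the two branches, then regress the Remaining Useful Life (RUL).
    merged = layers.concatenate([cnn, lstm])
    merged = layers.Dense(64, activation="relu")(merged)
    outputs = layers.Dense(1)(merged)
    return Model(inputs, outputs)

model = build_parallel_cnn_lstm()
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
```

A sequential variant would instead feed the Conv1D output directly into the LSTM; keeping the branches separate is what avoids one stage distorting the features learned by the other.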
In the initial experiments, the sequential CNN-LSTM model achieved an RMSE of 15.03 on the C-MAPSS dataset. However, validation loss began diverging from training loss early in training, indicating overfitting and instability. Transitioning to a parallel architecture provided modest but consistent improvements. On the same dataset, the RMSE was reduced to 14.75, and the training-validation curves showed better convergence. The contrast was more significant on the N-CMAPSS dataset. There, the sequential model yielded an RMSE of 26.01, while the parallel model demonstrated a sharp improvement, reaching 13.79. This reduction of nearly 50 percent highlights the importance of architectural decisions when dealing with complex, high-dimensional data. On the NASA Battery dataset, where data is less variable, optimization through Genetic Algorithms (GA) proved highly effective. The RMSE decreased from 0.35 in the initial model to 0.091 after optimization, reflecting a 73 percent improvement.
To further explore performance tuning, both Genetic Algorithm and Hyperband optimization strategies were applied. GA showed clear advantages on the battery dataset, while Hyperband performed better under tighter resource constraints. However, on the more complex C-MAPSS and N-CMAPSS datasets, these tuning strategies produced marginal gains of only 0.33 and 2.1 percent, respectively. This suggests that while tuning enhances fine-grained performance, architectural redesign yields more significant improvements in models trained on noisy or structurally varied data. Across all datasets, the parallel CNN-LSTM model demonstrated greater stability and generalization potential than its sequential counterpart, particularly when processing data with heterogeneous patterns and multiple failure modes.
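As an illustration of the tuning setup, the sketch below (an assumption, not the thesis implementation) uses the keras_tuner library's Hyperband tuner to search over filter count, LSTM width, and learning rate for a simplified parallel model; a GA-based search follows the same build-and-evaluate loop with an evolutionary optimizer in place of Hyperband's successive halving.

```python
import keras_tuner as kt
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(hp):
    # Assumed window length (30) and feature count (14); adjust to the prepared data.
    inputs = layers.Input(shape=(30, 14))
    filters = hp.Int("filters", min_value=32, max_value=128, step=32)
    units = hp.Int("lstm_units", min_value=32, max_value=128, step=32)

    cnn = layers.Conv1D(filters, kernel_size=3, padding="same", activation="relu")(inputs)
    cnn = layers.GlobalAveragePooling1D()(cnn)
    lstm = layers.LSTM(units)(inputs)

    x = layers.Dense(64, activation="relu")(layers.concatenate([cnn, lstm]))
    model = Model(inputs, layers.Dense(1)(x))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            hp.Float("learning_rate", 1e-4, 1e-2, sampling="log")),
        loss="mse",
    )
    return model

# Hyperband allocates training budget adaptively, discarding weak configurations early.
tuner = kt.Hyperband(build_model, objective="val_loss", max_epochs=30, factor=3,
                     directory="tuning", project_name="parallel_cnn_lstm")
# tuner.search(x_train, y_train, validation_split=0.2)  # run on the prepared windows
```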
This research contributes a comprehensive pipeline for predictive maintenance, integrating preprocessing, model development, and evaluation techniques. Across all experiments, the parallel CNN-LSTM consistently demonstrated better generalization than the sequential variant, particularly in handling diverse failure modes and noisy sensor data. Compared to traditional models like Random Forests and Support Vector Machines, the proposed deep learning architectures offered more accurate RUL estimation and stronger performance on unseen test data. Furthermore, the research highlights the practical constraints of implementing predictive maintenance at scale, especially the trade-off between model complexity and interpretability, and the limits of synthetic datasets in capturing real-world failure dynamics.
In conclusion, the thesis establishes that hybrid CNN-LSTM models, especially in parallel configuration, offer a robust and scalable solution for predictive maintenance. The integration of optimization techniques further refines performance on simpler datasets, while architecture remains the critical factor for success in complex environments. These findings pave the way for future research that incorporates real-time streaming data, explores more advanced architectures such as Transformers, and investigates domain adaptation methods to bridge the gap between simulated and real-world applications.
School
School of Sciences and Engineering
Department
Robotics, Control & Smart Systems Program
Degree Name
MS in Robotics, Control and Smart Systems
Graduation Date
Summer 6-16-2025
Submission Date
6-17-2025
First Advisor
Maki Habib
Committee Member 1
Ashraf Nassef
Committee Member 2
Mohamed Ibrahim Awad
Extent
141 P.
Document Type
Master's Thesis
Institutional Review Board (IRB) Approval
Not necessary for this item
Recommended Citation
APA Citation
El Sadeek, K. M. (2025). Predictive Maintenance: Leveraging Hybrid LSTM-CNN Architectures for Enhanced Performance [Master's thesis, The American University in Cairo]. AUC Knowledge Fountain.
https://fount.aucegypt.edu/etds/2557
MLA Citation
El Sadeek, Kamal Mohamed. Predictive Maintenance: Leveraging Hybrid LSTM-CNN Architectures for Enhanced Performance. 2025. American University in Cairo, Master's Thesis. AUC Knowledge Fountain.
https://fount.aucegypt.edu/etds/2557
Included in
Computer-Aided Engineering and Design Commons, Electro-Mechanical Systems Commons, Manufacturing Commons, Other Operations Research, Systems Engineering and Industrial Engineering Commons