Self-supervised deep learning for GNSS time series imputation: a comparative study of neural network architectures

  • Le Khanh Giang

    University of Transport and Communications, No 3 Cau Giay Street, Hanoi, Vietnam
  • Tran Duc Cong

    University of Transport and Communications, No 3 Cau Giay Street, Hanoi, Vietnam
  • Ho Thi Lan Huong

    University of Transport and Communications, No 3 Cau Giay Street, Hanoi, Vietnam
Email: gianglk@utc.edu.vn
Keywords: self-supervised learning, GNSS time-series imputation, LSTM, structural health monitoring (SHM), sliding-window masking, cable-stayed bridge

Abstract

Global Navigation Satellite System (GNSS) time series are widely used for structural health monitoring (SHM) and deformation analysis, but real-world recordings frequently contain both short and long contiguous gaps that degrade downstream interpretation. This study addresses accurate imputation of GNSS displacement series by proposing a self-supervised learning framework that trains models directly on real, unlabelled data using contiguous-span masking. We evaluate four neural architectures (ANN, CNN, GRU, LSTM) under a unified pipeline comprising signal denoising (moving average, Kalman smoothing, Haar wavelet), sliding-window segmentation, Z-score normalization, and middle-region masking. Experiments use a year-long, 10-minute-sampled dataset from the Can Tho cable-stayed bridge (sensor can519501, x/y/z components) and assess reconstruction quality via R², MAE, and MSE on withheld masked segments. Results indicate that recurrent architectures, particularly LSTM, produce the most faithful reconstructions: LSTM attains the highest validation R² (≈0.948) and the lowest MAE (≈0.137) and MSE (≈0.052) among the tested models, while GRU offers competitive performance and CNN/ANN show substantially weaker recovery. These findings demonstrate that masking-based self-supervision is an effective strategy for GNSS gap recovery and that LSTM-like sequence models are well suited to capturing the long-range temporal dependencies in bridge displacement data. The proposed approach enhances the reliability and continuity of GNSS-derived time series for structural monitoring and can inform future work on multi-sensor fusion and uncertainty quantification.
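The windowing-and-masking step of the pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the window length, stride, and mask fraction are assumptions chosen for demonstration only.

```python
import numpy as np

def make_masked_windows(series, window=64, stride=16, mask_frac=0.25):
    """Slide a fixed-length window over a 1-D displacement series,
    Z-score normalize each window, and mask a contiguous middle span
    as the self-supervised reconstruction target."""
    inputs, targets, masks = [], [], []
    span = max(1, int(mask_frac * window))      # contiguous gap length
    lo = window // 2 - span // 2                # centre the masked span
    for start in range(0, len(series) - window + 1, stride):
        w = series[start:start + window].astype(float)
        w = (w - w.mean()) / (w.std() + 1e-8)   # per-window Z-score
        m = np.zeros(window, dtype=bool)
        m[lo:lo + span] = True
        x = w.copy()
        x[m] = 0.0                              # zero-fill the hidden gap
        inputs.append(x)
        targets.append(w)                       # full window is the target
        masks.append(m)
    return np.stack(inputs), np.stack(targets), np.stack(masks)

# Toy usage on a synthetic signal standing in for one GNSS component
t = np.linspace(0, 8 * np.pi, 512)
x, y, m = make_masked_windows(np.sin(t))
print(x.shape, int(m[0].sum()))  # → (29, 64) 16
```

A model trained under this scheme sees `x` as input and is scored (R², MAE, MSE) only on the positions where `m` is true, mimicking a real contiguous outage.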

References

[1]. J. M. Dow, R. E. Neilan, C. Rizos, The International GNSS Service in a changing landscape of global navigation satellite systems, Journal of Geodesy, 83 (2009) 191–198. https://doi.org/10.1007/s00190-008-0300-3
[2]. D. C. Tran, T. L. H. Ho, K. G. Le, V. H. Le, Application of unsupervised clustering algorithms in GNSS-RTK data analysis for cable-stayed bridge monitoring, Transport and Communications Science Journal, 76 (2025) 1138–1150 (in Vietnamese). https://doi.org/10.47869/tcsj.76.8.8
[3]. H. Liu, L. Li, Missing data imputation in GNSS monitoring time series using temporal and spatial Hankel matrix factorization, Remote Sensing, 14 (2022) 1500. https://doi.org/10.3390/rs14061500
[4]. M. Lepot, J.-B. Aubin, F. H. Clemens, Interpolation in time series: An introductive overview of existing methods, their performance criteria and uncertainty assessment, Water, 9 (2017) 796. https://doi.org/10.3390/w9100796
[5]. J. Wang, W. Du, Y. Yang, L. Qian, W. Cao, K. Zhang, W. Wang, Y. Liang, Q. Wen, Deep learning for multivariate time series imputation: A survey, arXiv preprint, arXiv:2402.04059, 2024.
[6]. T. Kim, J. Kim, W. Yang, H. Lee, J. Choo, Missing value imputation of time-series air-quality data via deep neural networks, International Journal of Environmental Research and Public Health, 18 (2021) 12213. https://doi.org/10.3390/ijerph182212213
[7]. F. M. Shiri, T. Perumal, N. Mustapha, R. Mohamed, A comprehensive overview and comparative analysis on deep learning models: CNN, RNN, LSTM, GRU, arXiv preprint, arXiv:2305.17473, 2023.
[8]. S. F. Ahmed, M. S. B. Alam, M. Hassan, M. R. Rozbu, T. Ishtiak, N. Rafa, M. Mofijur, A. B. M. S. Ali, A. H. Gandomi, Deep learning modelling techniques: Current progress, applications, advantages, and challenges, Artificial Intelligence Review, 56 (2023) 13521–13617. https://doi.org/10.1007/s10462-023-10466-8
[9]. L. Ericsson, H. Gouk, C. C. Loy, T. M. Hospedales, Self-supervised representation learning: Introduction, advances, and challenges, IEEE Signal Processing Magazine, 39 (2022) 42–62. https://arxiv.org/pdf/2110.09327
[10]. Z. Liu, A. Alavi, M. Li, X. Zhang, Self-supervised contrastive learning for medical time series: A systematic review, Sensors, 23 (2023) 4221. https://doi.org/10.3390/s23094221
[11]. Y. Wang, H. Ding, H. Li, Multivariate time-series missing data imputation with convolutional transformer model, Symmetry, 17 (2025) 686. https://doi.org/10.3390/sym17050686
[12]. M. Casella, N. Milano, P. Dolce, D. Marocco, Transformers deep learning models for missing data imputation: An application of the ReMasker model on a psychometric scale, Frontiers in Psychology, 15 (2024) 1449272. https://doi.org/10.3389/fpsyg.2024.1449272
[13]. W. Du, D. Côté, Y. Liu, SAITS: Self-attention-based imputation for time series, Expert Systems with Applications, 219 (2023) 119619. https://doi.org/10.1016/j.eswa.2023.119619
[14]. O. I. Abiodun, A. Jantan, A. E. Omolara, K. V. Dada, N. A. Mohamed, H. Arshad, State-of-the-art in artificial neural network applications: A survey, Heliyon, 4 (2018) e00938. https://doi.org/10.1016/j.heliyon.2018.e00938
[15]. J. Dong, H. Wu, H. Zhang, L. Zhang, J. Wang, M. Long, SimMTM: A simple pre-training framework for masked time-series modeling, Advances in Neural Information Processing Systems, 36 (2023) 29996–30025. https://doi.org/10.48550/arXiv.2302.00861
[16]. Z. Li, Z. Rao, L. Pan, P. Wang, Z. Xu, Ti-MAE: Self-supervised masked time series autoencoders, arXiv preprint, arXiv:2301.08871, 2023. https://doi.org/10.48550/arXiv.2301.08871
[17]. Z. Liu, B. Du, J. Ye, X. Wen, L. Sun, An NCDE-based framework for universal representation learning of time series, in: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, (2024) 4623–4633. https://doi.org/10.24963/ijcai.2024/511
[18]. Y. Wang, H. Wu, J. Dong, G. Qin, H. Zhang, Y. Liu, Y. Qiu, J. Wang, M. Long, TimeXer: Empowering transformers for time series forecasting with exogenous variables, Advances in Neural Information Processing Systems, 37 (2024) 469–498. https://doi.org/10.48550/arXiv.2402.19072
[19]. R. Dey, F. M. Salem, Gate-variants of gated recurrent unit (GRU) neural networks, in: Proceedings of the IEEE 60th International Midwest Symposium on Circuits and Systems, (2017) 1597–1600. https://doi.org/10.1109/MWSCAS.2017.8053243
[20]. S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation, 9 (1997) 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
[21]. J. Gu, Z. Wang, J. Kuen, L. Ma, B. Shahroudy, B. Shuai, et al., Recent advances in convolutional neural networks, Pattern Recognition, 77 (2018) 354–377. https://doi.org/10.1016/j.patcog.2017.10.013
[22]. D. C. Montgomery, E. A. Peck, G. G. Vining, Introduction to Linear Regression Analysis, sixth ed., John Wiley & Sons, 2021.
[23]. I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, Cambridge, MA, 2016.

Downloads

No statistics available yet
Received
01/12/2025
Revised
07/01/2026
Accepted
12/01/2026
Published
15/01/2026
Section
Research articles