TiTrans: Leveraging Temporal Cues in a Transformer Model for Tackling Incomplete Data

  • Zeyi FAN*
  • *Corresponding author for this work

Research output: Journal Publications › Journal Article (refereed) › peer-review

Abstract

Handling missing values in multivariate time series is critical for reliable prediction and decision-making, yet high-quality imputation remains challenging when observations are sparse. While recent transformer-based models capture long-range dependencies, they often underutilize local temporal structure and treat encoder outputs uniformly. This letter presents TiTrans, an encoder-only transformer that integrates lightweight local interpolation with temporal and positional embeddings, and introduces a dynamic weight adjustment mechanism that adaptively fuses global encoder predictions with locally inferred estimates based on deviation-aware weighting. This design enhances robustness and efficiency by removing the decoder and avoiding auxiliary prediction heads. Experiments on widely used ETT and Weather benchmarks show that TiTrans consistently achieves the best performance across all missingness levels, reducing MSE by up to 28.6% on ETT and 88.6% on Weather compared to strong transformer baselines, and outperforming the next strongest method (MTSIT) by up to 28.1%. These results demonstrate the effectiveness of dynamic fusion for time-series imputation and highlight TiTrans as a simple yet powerful alternative for handling missing data in sequential systems.
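The deviation-aware fusion described in the abstract can be illustrated with a minimal sketch. All names here (`local_interpolate`, `deviation_aware_fusion`) and the specific weighting rule (inverse mean absolute deviation on observed positions) are assumptions for illustration; the letter's actual mechanism may differ, and the transformer's global prediction is stood in for by an arbitrary array.

```python
import numpy as np

def local_interpolate(x, mask):
    """Fill missing entries of a 1-D series by linear interpolation
    over the observed points (hypothetical local estimator)."""
    idx = np.arange(len(x))
    obs = mask.astype(bool)
    return np.interp(idx, idx[obs], x[obs])

def deviation_aware_fusion(global_pred, local_pred, x, mask, eps=1e-8):
    """Fuse a global (transformer) estimate with a local (interpolation)
    estimate. Each source's weight is inversely proportional to its mean
    absolute deviation on the OBSERVED positions -- an assumed reading of
    'deviation-aware weighting', not the paper's exact formula."""
    obs = mask.astype(bool)
    dev_g = np.mean(np.abs(global_pred[obs] - x[obs])) + eps
    dev_l = np.mean(np.abs(local_pred[obs] - x[obs])) + eps
    w_g, w_l = 1.0 / dev_g, 1.0 / dev_l
    alpha = w_g / (w_g + w_l)  # weight on the global estimate
    fused = alpha * global_pred + (1.0 - alpha) * local_pred
    # keep observed values untouched; impute only the missing entries
    return np.where(obs, x, fused), alpha
```

A source that fits the observed entries more closely receives a larger weight, so the fused imputation leans toward whichever of the two estimators is locally more trustworthy.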
Original language: English
Journal: IEEE Signal Processing Letters
Publication status: E-pub ahead of print - 3 Feb 2026

