Irregular time series are ubiquitous in healthcare, with applications ranging from predicting patient health conditions to imputing missing values. Recent developments in conditional diffusion models, which predict missing values based on observed data, have shown significant promise for imputing regular time series. Diffusion also generalizes the self-supervised task of masked reconstruction, replacing partial masking with the injection of noise at variable scales, and has shown competitive results on image recognition. Despite the growing interest in diffusion models, their potential for irregular time series data, particularly in downstream tasks, remains underexplored. We propose a conditional diffusion model designed as a self-supervised learning backbone for such data, integrating a learnable time embedding and a cross-dimensional attention mechanism to address the data's complex temporal dynamics. This model not only suits conditional generation tasks naturally but also acquires hidden states beneficial for discriminative tasks. Empirical evidence demonstrates our model's superiority in both imputation and classification tasks.
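To make the connection between masked reconstruction and conditional diffusion concrete, the sketch below implements one generic self-supervised training step in NumPy: observed entries are randomly split into a conditioning set and a target set, and the targets are corrupted with variable-scale Gaussian noise (a standard forward diffusion step) rather than zeroed out. The function name, masking ratio, and linear noise schedule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_imputation_step(x, obs_mask, num_steps=50, beta_min=1e-4, beta_max=0.5):
    """One self-supervised conditional-diffusion step (illustrative sketch).

    x:        (L, D) array of series values (zeros where unobserved)
    obs_mask: (L, D) array, 1.0 where a value was actually observed
    Returns the noised targets, the injected noise (the denoiser's regression
    target), the clean conditioning input, and the sampled diffusion step t.
    """
    # Randomly split observed entries into conditioning vs. imputation targets.
    # This generalizes masked reconstruction: targets are noised, not masked out.
    target_mask = obs_mask * (rng.random(x.shape) < 0.5)
    cond_mask = obs_mask - target_mask

    # Linear beta schedule; alpha_bar[t] is the cumulative signal fraction.
    betas = np.linspace(beta_min, beta_max, num_steps)
    alpha_bar = np.cumprod(1.0 - betas)

    t = int(rng.integers(num_steps))
    noise = rng.standard_normal(x.shape)
    # Forward process: shrink the signal and inject noise at scale sqrt(1 - alpha_bar[t]).
    x_noisy = np.sqrt(alpha_bar[t]) * x + np.sqrt(1.0 - alpha_bar[t]) * noise
    x_target = x_noisy * target_mask   # the model learns to denoise these entries
    x_cond = x * cond_mask             # clean observed values used as conditioning
    return x_target, noise * target_mask, x_cond, t

# Toy irregular series: 6 time points, 3 channels, roughly 60% observed.
x = rng.standard_normal((6, 3))
obs = (rng.random((6, 3)) < 0.6).astype(float)
x_tgt, eps, x_cond, t = diffusion_imputation_step(x * obs, obs)
```

A denoising network conditioned on `x_cond` (and, for irregular data, on the observation timestamps) would be trained to predict `eps` from `x_tgt` and `t`; at inference, imputation runs the reverse process over the unobserved entries.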


@inproceedings{shirzad2024conditional,
  title={Conditional Diffusion Models as Self-supervised Learning Backbone for Irregular Time Series},
  author={Hamed Shirzad and Ruizhi Deng and He Zhao and Frederick Tung},
  booktitle={ICLR 2024 Workshop on Learning from Time Series For Health},
  year={2024},
  url={}
}

