We propose a self-supervised method for pre-training universal time series representations that learns contrastive representations via similarity distillation along both the temporal and instance dimensions. We analyze the effectiveness of each dimension and evaluate the pre-trained representations on three downstream tasks: time series classification, anomaly detection, and forecasting.
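The core idea, similarity distillation along the temporal and instance dimensions, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation, not the paper's exact loss: it assumes embeddings of shape (batch, time, dim) from two augmented views of each series (the teacher could be, e.g., an exponential-moving-average copy of the student encoder), and all function names and temperature values are illustrative assumptions.

import torch
import torch.nn.functional as F

def similarity_distillation_loss(student, teacher, tau_s=0.1, tau_t=0.05):
    # Hypothetical sketch, not the authors' implementation.
    # student, teacher: (B, N, D) -- B groups of N anchor embeddings each.
    # The student's pairwise-similarity distribution over the N anchors is
    # trained to match the teacher's, which serves as a fixed soft target.
    s = F.normalize(student, dim=-1)
    t = F.normalize(teacher, dim=-1)
    sim_s = torch.bmm(s, s.transpose(1, 2)) / tau_s  # (B, N, N) cosine sims
    sim_t = torch.bmm(t, t.transpose(1, 2)) / tau_t
    # Mask self-similarities so each row is a distribution over the others.
    mask = torch.eye(s.size(1), dtype=torch.bool, device=s.device)
    sim_s = sim_s.masked_fill(mask, -1e9)
    sim_t = sim_t.masked_fill(mask, -1e9)
    log_p_s = F.log_softmax(sim_s, dim=-1)
    p_t = F.softmax(sim_t, dim=-1).detach()  # no gradient through the teacher
    return F.kl_div(log_p_s, p_t, reduction='batchmean')

# (batch, time, dim) embeddings of the same series under two views.
z_student = torch.randn(8, 64, 128)
z_teacher = torch.randn(8, 64, 128)

# Temporal dimension: anchors are the timesteps within each series.
loss_temporal = similarity_distillation_loss(z_student, z_teacher)

# Instance dimension: anchors are the series in the batch, per timestep.
loss_instance = similarity_distillation_loss(
    z_student.transpose(0, 1), z_teacher.transpose(0, 1))

loss = loss_temporal + loss_instance

Treating the two dimensions as the same operation applied to transposed axes is one natural reading of "temporal and instance dimensions"; the paper's actual formulation, augmentations, and teacher design may differ.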
BibTeX
@inproceedings{hajimoradlou2022selfsupervised,
  title={Self-Supervised Time Series Representation Learning with Temporal-Instance Similarity Distillation},
  author={Ainaz Hajimoradlou and Leila Pishdad and Frederick Tung and Maryna Karpusha},
  booktitle={First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward at ICML 2022},
  year={2022},
  url={https://openreview.net/forum?id=nhtkdCvVLIh}
}
Related Research
- Few-Shot Learning & Meta-Learning | Tutorial. W. Zi, L. S. Ghoraie, and S. Prince.
- Meta Temporal Point Processes. W. Bae, M. O. Ahmed, F. Tung, and G. Oliveira. International Conference on Learning Representations (ICLR).
- Scaleformer: Iterative Multi-scale Refining Transformers for Time Series Forecasting. T. Sylvain, L. Meng, and M. Amin Shabani.