Positional Encoding in a Transformer Model for Time Series


I'm working on multivariate time series anomaly detection and currently using positional encoding similar to the one introduced in the 'Attention is All You Need' paper. However, resampling my time series at fixed intervals causes information loss: rapid changes in the variables between sample points are not captured. One solution is to decrease the resampling step, but this could lead to excessively large datasets and longer training times.
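For reference, here is a minimal NumPy sketch of the fixed sinusoidal encoding from the paper as I currently use it (the function name and shapes are my own, not from any library):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Fixed positional encoding from 'Attention is All You Need'.

    PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))

    Assumes an even d_model.
    """
    positions = np.arange(seq_len)[:, np.newaxis]  # integer positions, (seq_len, 1)
    div_terms = np.exp(np.arange(0, d_model, 2) * (-np.log(10000.0) / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(positions * div_terms)
    pe[:, 1::2] = np.cos(positions * div_terms)
    return pe
```

Note that the position index here is just the integer step count, which is why the series must first be resampled onto a uniform grid.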

I'm considering an alternative approach where I modify the positional encoding to incorporate information about the time difference between consecutive steps instead of resampling. This way, I can retain temporal information without increasing dataset size significantly.
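A minimal sketch of what I have in mind, assuming timestamps are available as a float array: the elapsed time since the first observation replaces the integer position index, so unevenly spaced steps receive proportionally spaced encodings. The function name and the usage values below are illustrative only:

```python
import numpy as np

def time_aware_positional_encoding(timestamps: np.ndarray, d_model: int) -> np.ndarray:
    """Sinusoidal encoding driven by actual (possibly irregular) timestamps.

    Identical to the standard formulation except that the integer position
    index is replaced by elapsed time, so no resampling is required.
    Assumes an even d_model.
    """
    elapsed = (timestamps - timestamps[0])[:, np.newaxis]  # (seq_len, 1), float-valued
    div_terms = np.exp(np.arange(0, d_model, 2) * (-np.log(10000.0) / d_model))
    pe = np.zeros((len(timestamps), d_model))
    pe[:, 0::2] = np.sin(elapsed * div_terms)
    pe[:, 1::2] = np.cos(elapsed * div_terms)
    return pe

# Usage: an irregularly sampled series, encoded without resampling.
ts = np.array([0.0, 0.1, 0.15, 1.2, 1.25, 3.0])
pe = time_aware_positional_encoding(ts, d_model=64)
```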

Is there an existing implementation or research paper that explores this type of positional encoding in the context of multivariate time series analysis?

Absolute Positional Encoding


There are 0 answers