Synthetic time series dataset generation for unsupervised autoencoders

Detailed bibliography
Published in: 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1 - 8
Main Authors: Klopries, Hendrik; Torres, David Orlando Salazar; Schwung, Andreas
Format: Conference paper
Language: English
Published: IEEE, 06.09.2022
Description
Summary: In Machine Learning, large models need access to a huge amount of training data. This requirement applies to many applications in an industrial environment. However, in specific processes it is not easy to obtain such a large amount of data, for reasons ranging from privacy and security to effects on the process itself. Therefore, this work proposes the creation of synthetic time series datasets that simulate processes from a given subset of functional relationships. Moreover, we use transfer learning to improve the performance of four autoencoder architectures in unsupervised time series reconstruction while requiring fewer target data for training. We outline multiple concepts of data generation and use statistical analysis to evaluate dataset performance and complexity. The data is then used to train unsupervised models and enables them to improve their reconstruction performance across 52 sensors and multiple fault cases. Even with a reduced amount of available training data, we still obtain sufficient results through pre-training. Overall, we see significant improvements in performance and interpretability with a new time series analysis approach named Bag-of-Functions compared to convolutional and linear autoencoders.
DOI: 10.1109/ETFA52439.2022.9921598
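
As a rough illustration of the kind of data generation the abstract describes, the minimal Python sketch below composes synthetic multi-sensor time series from a small set of functional relationships (trend, oscillation, step, and noise terms). The chosen basis functions, parameter ranges, array shapes, and the helper names sample_series and make_dataset are illustrative assumptions for this record, not the paper's actual generation scheme; only the 52-sensor channel count is taken from the abstract.

```python
# Hypothetical sketch of synthetic time-series generation from a small set of
# functional relationships. Basis functions and parameter ranges are assumed
# for illustration and are not taken from the paper.
import numpy as np


def sample_series(n_steps: int, rng: np.random.Generator) -> np.ndarray:
    """Compose one synthetic sensor channel from randomly parameterized basis functions."""
    t = np.linspace(0.0, 1.0, n_steps)
    trend = rng.uniform(-1.0, 1.0) * t                                        # linear drift
    oscillation = rng.uniform(0.1, 1.0) * np.sin(2 * np.pi * rng.integers(1, 10) * t)  # periodic term
    step = rng.uniform(-0.5, 0.5) * (t > rng.uniform(0.2, 0.8))               # setpoint change
    noise = rng.normal(0.0, 0.05, size=n_steps)                               # measurement noise
    return trend + oscillation + step + noise


def make_dataset(n_series: int, n_sensors: int, n_steps: int, seed: int = 0) -> np.ndarray:
    """Stack independent channels into an array of shape (n_series, n_sensors, n_steps)."""
    rng = np.random.default_rng(seed)
    return np.stack(
        [
            np.stack([sample_series(n_steps, rng) for _ in range(n_sensors)])
            for _ in range(n_series)
        ]
    )


# Example: a pre-training corpus of 1000 synthetic sequences with 52 channels,
# matching the 52 sensors mentioned in the abstract.
pretrain_data = make_dataset(n_series=1000, n_sensors=52, n_steps=256)
print(pretrain_data.shape)  # (1000, 52, 256)
```

Under this assumed setup, such a corpus would serve only as pre-training input for the autoencoders; fine-tuning on the (smaller) target process data and the Bag-of-Functions model itself are described in the paper and are not sketched here.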