Statistical Process Monitoring Using Advanced Data-Driven and Deep Learning Approaches: Theory and Practical Applications
| Main authors: | , , , , |
|---|---|
| Medium: | E-book |
| Language: | English |
| Publication details: | Chantilly : Elsevier, 2020 |
| Edition: | 1 |
| Subject: | |
| ISBN: | 9780128193655, 0128193654 |
| Online access: | Get full text |
Contents:
- Front Cover -- Statistical Process Monitoring using Advanced Data-Driven and Deep Learning Approaches -- Copyright -- Contents -- Preface -- Acknowledgments -- 1 Introduction -- 1.1 Introduction -- 1.1.1 Motivation: why process monitoring -- 1.1.2 Types of faults -- 1.1.3 Process monitoring -- 1.1.4 Physical redundancy vs analytical redundancy -- 1.2 Process monitoring methods -- 1.2.1 Model-based methods -- 1.2.2 Knowledge-based methods -- 1.2.3 Data-based monitoring methods -- 1.3 Fault detection metrics -- 1.4 Conclusion -- References -- 2 Linear latent variable regression (LVR)-based process monitoring -- 2.1 Introduction -- 2.2 Development of linear LVR models -- 2.2.1 Full rank methods -- 2.2.1.1 Ordinary least squares regression -- 2.2.1.2 Ridge regression (RR) -- 2.2.2 Latent variable regression (LVR) models -- 2.2.2.1 Principal component analysis -- Feature extraction with PCA -- Criteria for selecting the number of principal components to use -- 2.2.2.2 Principal component regression -- 2.2.2.3 Partial least squares -- 2.3 Dynamic LVR models -- 2.4 Process monitoring methods -- 2.4.1 Univariate chart for process monitoring -- 2.4.1.1 Shewhart-based monitoring scheme -- 2.4.1.2 Cumulative sum (CUSUM)-based monitoring schemes -- 2.4.1.3 Exponentially weighted moving average (EWMA) schemes -- 2.4.1.4 Generalized likelihood ratio (GLR) hypothesis testing approach -- 2.4.2 Distribution-based process monitoring schemes -- 2.4.2.1 Kullback-Leibler-based monitoring scheme -- 2.4.2.2 Hellinger-based monitoring scheme -- 2.4.2.3 Limitations of univariate monitoring schemes -- 2.4.3 Multivariate process monitoring schemes with parametric and nonparametric thresholds -- 2.4.3.1 Multivariate Shewhart schemes -- 2.4.3.2 Multivariate cumulative sum scheme (MCUSUM) -- 2.4.3.3 Multivariate exponentially weighted moving average scheme (MEWMA)
- 2.5 Linear LVR-based process monitoring strategies -- 2.5.1 Conventional LVR monitoring statistics -- 2.5.1.1 Hotelling's T2 statistic -- 2.5.1.2 Q statistic or squared prediction error (SPE) -- 2.5.2 Fault isolation -- 2.5.2.1 Fault isolation using modified contribution plots -- T2 contribution approach -- SPE contribution approach -- 2.5.2.2 Fault diagnosis using RadViz visualizer -- 2.6 Case studies -- 2.6.1 Simulated example -- 2.6.2 Monitoring influent measurements at water resource recovery facilities -- 2.7 Discussion -- References -- 3 Fault isolation -- 3.1 Introduction -- 3.1.1 Pitfalls of standardizing data -- 3.1.2 Shortcomings of contribution plots/scores -- 3.2 Fault isolation -- 3.2.1 Variable thinning -- 3.2.2 Iterative traditional isolation -- 3.2.2.1 Mason-Young-Tracy method -- 3.2.2.2 Murphy method -- 3.2.2.3 Artificial neural network methods -- 3.2.2.4 Discussion -- 3.2.3 Variable selection methods -- 3.2.3.1 Phase I variable selection -- 3.2.3.2 Phase II variable selection -- 3.3 Fault classification -- 3.4 Fault isolation metrics -- 3.4.1 Fault isolation errors -- 3.4.2 Precision and recall -- 3.4.3 Phase I FI metrics -- 3.4.4 Discussion -- 3.5 Case studies -- 3.5.1 Retrospective fault isolation -- 3.5.2 Real-time fault isolation -- 3.6 Further reading -- References -- 4 Nonlinear latent variable regression methods -- 4.1 Introduction -- 4.2 Limitations of linear LVR methods for process monitoring -- 4.3 Developing nonlinear LVR methods for process monitoring -- 4.3.1 Nonlinear partial least squares -- 4.3.1.1 Polynomial PLS modeling algorithm -- 4.3.2 ANFIS-PLS modeling framework -- 4.3.2.1 Nonlinear PLS-based monitoring -- 4.3.3 Kernel PCA -- 4.3.4 Kernel principal components analysis (KPCA) model -- 4.3.5 KPCA-based fault detection procedures -- 4.4 Case study: monitoring WWTP -- 4.4.1 Anomaly detection using KPCA-OCSVM method
- 4.5 Simulated synthetic data -- 4.5.1 Application of plug flow reactor -- 4.5.1.1 Data generation and modeling -- 4.5.1.2 Detection results -- 4.5.1.3 Case (A) - abrupt anomaly detection -- 4.5.1.4 Case (B) - intermittent anomaly detection -- 4.5.1.5 Case (C) - drift anomaly detection -- 4.6 Discussion -- References -- 5 Multiscale latent variable regression-based process monitoring methods -- 5.1 Introduction -- 5.2 Theoretical background of wavelet-based data representation -- 5.2.1 Wavelet transform -- 5.2.2 Multiscale representation of data using wavelets -- 5.2.3 Advantages of multiscale representation -- 5.2.3.1 Decorrelating autocorrelated measurements -- 5.2.3.2 Data are closer to normality at multiple scales -- 5.3 Multiscale filtering using wavelets -- 5.3.1 Single scale filter method -- 5.3.2 Multiscale filtering methods -- 5.3.3 Advantages of multiscale denoising -- 5.4 Wavelet-based multiscale univariate monitoring techniques -- 5.4.1 An illustrative example -- 5.4.1.1 Impact of autocorrelated data on the conventional Shewhart chart -- 5.4.1.2 Effect of measurement noise on the conventional Shewhart chart -- 5.4.1.3 Impact of the violation of normality assumption on the conventional Shewhart chart -- 5.5 Multiscale LVR modeling -- 5.5.1 Benefits of multiscale denoising in LVR modeling -- 5.6 Multiscale LVR modeling -- 5.7 Results and discussions -- 5.7.1 Application with synthetic data -- 5.7.1.1 Simulation results: synthetic data -- 5.7.1.2 Simulation results: distillation column -- 5.7.2 Application of monitoring distillation column -- 5.8 Discussion -- References -- 6 Unsupervised deep learning-based process monitoring methods -- 6.1 Introduction -- 6.2 Clustering -- 6.2.1 Partition-based clustering techniques -- 6.2.1.1 k-Means clustering -- 6.2.2 Hierarchy-based clustering techniques -- 6.2.2.1 BIRCH (hierarchical)
- 6.2.2.2 Agglomerative clustering -- 6.2.3 Density-based approach -- 6.2.3.1 Mean shift clustering -- 6.2.3.2 k-Nearest neighbor clustering -- 6.2.4 Expectation maximization -- 6.3 One-class classification -- 6.3.1 One-class SVM -- 6.3.2 Support vector data description (SVDD) -- 6.4 Deep learning models -- 6.4.1 Autoencoders -- 6.4.1.1 Variational autoencoder -- 6.4.1.2 Denoising autoencoder -- 6.4.1.3 Contractive autoencoder -- 6.4.2 Probabilistic models -- 6.4.2.1 Boltzmann machine -- 6.4.2.2 Restricted Boltzmann machine -- 6.4.3 Deep neural networks -- 6.4.3.1 Deep belief networks -- 6.4.4 Deep Boltzmann machine -- 6.4.4.1 Deep stacked autoencoder -- 6.5 Deep learning-based clustering schemes for process monitoring -- 6.6 Discussion -- References -- 7 Unsupervised recurrent deep learning scheme for process monitoring -- 7.1 Introduction -- 7.2 Recurrent neural networks approach -- 7.2.1 Basics of recurrent neural networks -- 7.2.2 Long short-term memory -- 7.2.2.1 LSTM implementation steps -- 7.2.3 Gated recurrent neural networks -- 7.3 Hybrid deep models -- 7.3.1 RNN-RBM -- 7.3.2 RNN-RBM method -- 7.3.3 LSTM-RBM model -- 7.3.4 LSTM-DBN -- 7.4 Recurrent deep learning-based process monitoring -- 7.4.1 Residuals-based process monitoring approaches -- 7.4.2 Recurrent deep learning-based clustering schemes for process monitoring -- 7.4.2.1 RNN-RBM clustering -- 7.5 Applications: monitoring influent conditions at WWTP -- 7.6 Discussion -- References -- 8 Case studies -- 8.1 Introduction -- 8.2 Stereovision -- 8.2.1 Deep stacked autoencoder-based KNN approach -- 8.2.1.1 Preliminary materials: autoencoders -- 8.2.1.2 The SDA-kNN obstacle detection approach -- 8.2.2 Data description -- 8.2.3 Results and discussion -- 8.2.4 Model trained using data with no obstacles -- 8.2.5 Evaluation of performance for busy scenes
- 8.2.6 Obstacle detection using the Bahnhof dataset -- 8.3 Detecting abnormal ozone measurements using deep learning -- 8.3.1 Introduction -- 8.3.2 Data description -- 8.3.3 Ozone monitoring based on deep learning approaches -- 8.3.3.1 Results and discussion -- 8.3.4 Detection results -- 8.3.4.1 Sensor anomaly detection: false anomalies -- 8.3.4.1.1 Case A: single abrupt fault -- 8.3.4.1.2 Case B: multiple abrupt faults -- 8.3.4.1.3 Case C: intermittent faults -- 8.3.4.2 Conclusion -- 8.4 Monitoring of a wastewater treatment plant using deep learning -- 8.4.1 Introduction -- 8.4.2 Proposed DBN-based kNN, OCSVM, and k-means algorithms -- 8.4.3 Real data application: monitoring a decentralized wastewater treatment plant in Golden, CO, USA -- 8.4.4 Conclusion -- References -- 9 Conclusion and further research directions -- References -- Index -- Back Cover

