Abstract
Data modeling and online monitoring are two critical stages of data-driven anomaly detection. Regarding data modeling, deep neural networks (DNNs) can learn good decision boundaries that separate the anomalous and normal regions, owing to their flexible model structures and excellent fitting ability. However, DNNs that use nonlinear activations with specific boundaries may indirectly limit the anomaly detection margin, especially when samples lie far from the centroids. Moreover, an anomaly detection model with a narrow detection margin is insensitive to general faults and will suffer severe performance degradation. To mitigate these intrinsic drawbacks of DNNs, we develop a new regularizer based on the maximum likelihood of the complete data (i.e., observations and latent variables). The regularizer is neuronwise and mathematically acts to compress neurons, dragging marginal points toward the centroids. Combining the regularizer with encoding-decoding structured networks, we perform an industrial case study to verify the superiority of the proposed method.
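To make the idea concrete, below is a minimal sketch (not the authors' implementation) of an autoencoder-based anomaly detector whose loss adds an illustrative neuronwise penalty pulling hidden activations toward their batch centroids; the class name `RegularizedAE`, the L2 pull toward batch means, and the weight `lam` are assumptions standing in for the complete-data-likelihood regularizer described in the abstract.

```python
# Minimal sketch, assuming PyTorch and a simple L2 "compression" penalty
# as a stand-in for the neuronwise complete-data-likelihood regularizer.
import torch
import torch.nn as nn


class RegularizedAE(nn.Module):
    def __init__(self, n_in: int, n_hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def loss_fn(x, x_hat, z, lam: float = 0.1):
    # Reconstruction term plus a neuronwise term: each hidden neuron's
    # activation is dragged toward its batch mean (assumed centroid),
    # compressing marginal points toward the center of the normal region.
    recon = ((x - x_hat) ** 2).mean()
    centroid = z.mean(dim=0, keepdim=True)      # per-neuron centroid estimate
    compress = ((z - centroid) ** 2).mean()     # penalizes far-away activations
    return recon + lam * compress


# Usage: one training step on placeholder "normal" data; at monitoring time,
# the reconstruction error would be thresholded to flag anomalies.
model = RegularizedAE(n_in=20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 20)
x_hat, z = model(x)
loss = loss_fn(x, x_hat, z)
loss.backward()
opt.step()
```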
| Original language | English |
| --- | --- |
| Pages (from-to) | 7914-7924 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Industrial Informatics |
| Volume | 19 |
| Issue number | 7 |
| Early online date | 12 Oct 2022 |
| DOIs | |
| Publication status | Published - Jul 2023 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2005-2012 IEEE.
Keywords
- Anomaly detection
- deep neural network (DNN)
- regularization
- stacked autoencoder