Abstract
This paper focuses on the Tennessee Eastman (TE) process and, for the first time, investigates it from a cognitive fault-diagnosis perspective, which assumes no prior knowledge of the number of faults or their signatures. The approach first employs deterministic reservoir models to fit the multiple-input multiple-output signals of the TE process, thereby mapping the signal space to the (reservoir) model space. We then investigate incremental learning algorithms in this reservoir model space based on the "function distance" between the fitted models. The main contribution of this paper is a cognitive solution to this popular benchmark problem: the approach is applicable not only to fault detection but also to fault isolation, without prior information about the fault signatures. Experimental comparisons with other state-of-the-art approaches confirm the benefits of our approach, and the algorithm is efficient enough to run in real time in practical applications. © 2014 Published by Elsevier Ltd.
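The pipeline the abstract describes (fit a fixed deterministic reservoir to each multivariate signal window, represent the window by the fitted readout, and compare windows by a distance between these readouts) can be sketched briefly. The snippet below is a minimal illustration, not the authors' implementation: the reservoir size, ridge penalty, one-step-ahead prediction target, and the batch centre-plus-radius rule standing in for the incremental one-class learner are all assumptions made for this example.

```python
# Minimal sketch of "learning in the model space" with a fixed deterministic
# reservoir. Each signal window is summarised by its ridge-regression readout
# weights; windows are then compared by the distance between those readouts.
# All hyperparameters and the detection rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def make_reservoir(n_in, n_res=50, spectral_radius=0.9, input_scale=0.1):
    """Fixed (deterministic, shared) reservoir weights."""
    W = rng.uniform(-1, 1, size=(n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-input_scale, input_scale, size=(n_res, n_in))
    return W, W_in

def fit_window(U, W, W_in, ridge=1e-2, washout=20):
    """Fit a linear readout on one signal window U (T x m).

    The window is represented in model space by the readout weights that map
    reservoir states to the one-step-ahead prediction of U.
    """
    T, n_res = U.shape[0], W.shape[0]
    X, x = np.zeros((T, n_res)), np.zeros(n_res)
    for t in range(T):
        x = np.tanh(W @ x + W_in @ U[t])
        X[t] = x
    Xs, Ys = X[washout:-1], U[washout + 1:]          # drop washout transients
    W_out = Ys.T @ Xs @ np.linalg.inv(Xs.T @ Xs + ridge * np.eye(n_res))
    return W_out

def model_distance(W_a, W_b):
    """Euclidean distance between readouts as a proxy for the 'function distance'."""
    return np.linalg.norm(W_a - W_b)

# Toy usage: readouts of normal windows cluster together; a window whose readout
# lies far from that cluster is flagged as faulty.
m = 4                                                # number of process variables (toy)
W, W_in = make_reservoir(n_in=m)
normal = [rng.standard_normal((300, m)) * 0.1
          + np.sin(np.linspace(0, 20, 300))[:, None] for _ in range(5)]
models = [fit_window(U, W, W_in) for U in normal]
center = np.mean(models, axis=0)                     # one-class summary in model space
radius = max(model_distance(Wm, center) for Wm in models)

faulty = rng.standard_normal((300, m))               # dynamics unlike normal operation
d = model_distance(fit_window(faulty, W, W_in), center)
print("fault" if d > radius else "normal", d, radius)
```

In this sketch the reservoir is generated once and shared across all windows, so only the readout weights differ from window to window; that is what makes the readout a usable coordinate system for the model space.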
| Original language | English |
| --- | --- |
| Pages (from-to) | 33-42 |
| Number of pages | 10 |
| Journal | Computers and Chemical Engineering |
| Volume | 67 |
| Early online date | 3 Apr 2014 |
| DOIs | |
| Publication status | Published - Aug 2014 |
| Externally published | Yes |
Bibliographical note
Huanhuan Chen was supported by the National Natural Science Foundation of China under Grants 61203292 and 61311130140, and by the One Thousand Young Talents Program. Xin Yao was supported by a Royal Society Research Merit Award. This work was supported by the European Union Seventh Framework Programme under grant agreement No. INSFO-ICT-270428.

Keywords
- Cognitive fault diagnosis
- Fault detection
- Learning in the model space
- One class learning
- Reservoir computing
- Tennessee Eastman Process