Entropy 2015, 17 5869
1. Introduction
Alarms are indications of abnormal situations in process industries, including food, beverages,
chemicals, pharmaceuticals, petroleum, ceramics, base metals, coal, plastics, rubber, textiles, tobacco,
wood and wood products, paper and paper products, etc., where the primary production processes are
either continuous or performed on batches of indistinguishable materials [1]. In the past, owing to the
high cost and low quality of industrial monitoring systems, alarms were configured by directly placing
sensors to measure the physical quantities to be monitored and transmitting the measurements to the
control panel through cables. Consequently, the number of alarms at that time
remained low. However, with the development of monitoring systems, modern plants in
process industries have a multitude of sensors that are recorded and archived by process historians and
monitored by distributed control systems (DCSs), supervisory control and data acquisition (SCADA)
systems or other monitoring systems. Thus, most of the process variables can be configured to have at
least one alarm and, in many cases, more than one. For example, for monitoring a pressure variable,
four alarm tags can be configured, namely high-high, high, low and low-low. As a result, the number of
alarm variables is often quite large, and hence, a large number of alarms are raised during an abnormal
situation. Moreover, there often exist complex interactions between the corresponding process variables
due to the process dynamics and the associated monitoring systems. Once an abnormal situation occurs
at some place in the process, the fault may spread to many other places through interconnections between
variables and process units. Such a situation often leads to alarm floods, in which it is difficult for
operators to identify the type of fault or to find its root cause so as to mitigate the source of the abnormality.
Without proper actions, alarm floods may lead to serious and even catastrophic events.
For example, in 1994, before an explosion occurred in a fluid catalytic cracking unit at a
refinery of the British Texaco Company, 1775 out of the 2040 alarm tags in the DCS had been set to
“high priority”, and 275 alarms occurred in the last ten minutes; as a result, the operators could not take
effective actions, which led to a major accident [2].
There are different ways to reduce the number of alarms, in particular nuisance alarms. For univariate
methods, filtering, deadband, delay-timer and many other methods can be used [3].
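As an illustration of the univariate methods mentioned above, the following is a minimal sketch of how a deadband combined with an on-delay timer can suppress chattering alarms; the function name, thresholds and sample values are hypothetical, chosen for illustration only and not taken from [3].

```python
def alarm_sequence(samples, limit=80.0, deadband=5.0, delay=3):
    """Return an on/off alarm flag per sample.

    A hypothetical illustration: the alarm is raised only after `delay`
    consecutive samples exceed `limit` (on-delay timer), and it is cleared
    only when the signal falls below `limit - deadband` (deadband), which
    prevents repeated alarms from a signal hovering around the limit.
    """
    flags = []
    active = False
    count = 0  # consecutive samples above the trip limit
    for x in samples:
        if not active:
            count = count + 1 if x > limit else 0
            if count >= delay:          # raise only after sustained violation
                active = True
        elif x < limit - deadband:      # clear only below the deadband
            active = False
            count = 0
        flags.append(active)
    return flags
```

With these (assumed) settings, a brief two-sample excursion above 80 would raise no alarm at all, while a sustained violation raises a single alarm that persists until the signal drops clearly below the limit.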
On the other hand, for bivariate or multivariate situations, Folmer et al. [4] summarized several
approaches. Among them, an essential one is to identify the propagation paths between
variables and thereby localize the root cause of the abnormal situation. Yang et al. used signed directed
graphs to identify the process topology and connectivity that help in fault diagnosis and process hazard
assessment [5,6]. Noda et al. [7] and Yang et al. [8] used event correlation analysis to design a
policy to reduce alarms. Using causality information between alarm variables is another approach in
this area. With such information, the propagation path of a fault can be traced, which helps operators
identify the root cause [9] and take preventive actions immediately. Thus, the detection of causality
between variables is important and has received considerable attention.
The first operational approach to detecting causality by analyzing observed time series was
proposed by Granger [10]. He formalized the idea of causality identification in linear regression
models by the following argument: we consider that there exists causality from random variable I to