WO2005008420A2 - Information processing apparatus and method - Google Patents


Info

Publication number
WO2005008420A2
WO2005008420A2 (PCT application PCT/US2004/022235, US2004022235W)
Authority
WO
WIPO (PCT)
Prior art keywords
evaluation
data
monitored apparatus
model
resultant
Prior art date
Application number
PCT/US2004/022235
Other languages
French (fr)
Other versions
WO2005008420A3 (en)
Inventor
Joe Hasiewicz
Original Assignee
Smartsignal Corporation
Priority date
Filing date
Publication date
Application filed by Smartsignal Corporation filed Critical Smartsignal Corporation
Publication of WO2005008420A2 publication Critical patent/WO2005008420A2/en
Publication of WO2005008420A3 publication Critical patent/WO2005008420A3/en

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00: Testing or monitoring of control systems or parts thereof
    • G05B23/02: Electric testing or monitoring
    • G05B23/0205: Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0208: Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the configuration of the monitoring system
    • G05B23/021: Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterized by the configuration of the monitoring system adopting a different treatment of each operating region or a different mode of the monitored system, e.g. transient modes; different operating configurations of monitored system

Definitions

  • a process provides for a plurality of substantially temporally-coincident sensor readings that each correspond to a sensed condition of a monitored apparatus.
  • a first evaluation model (selected in accord with a present or anticipated operating mode of the monitored apparatus) is used to process this plurality of sensor readings to provide resultant first evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus.
  • A different evaluation model can later be selected and used to process then-current sensor readings, wherein the selection of the different evaluation model is based, for example, upon a changed operating mode of the monitored apparatus.
  • the resultant evaluation data is provided substantially contiguous with provision of the resultant evaluation data from use of the first evaluation model.
  • This interlacing of data permits, in part, the new resultant evaluation data to relate, at least in part, to continued and contiguous apparent expected operation of the monitored apparatus.
  • In effect, the data from the two independent processes is united into a common data stream that then appears holistic to subsequent downstream processing actions and steps.
  • evaluation models can be selected from amongst a considerably larger number of candidate evaluation models. Since there is no loss of continuity to be experienced upon switching to a new evaluation model, an operator is in fact considerably more free to utilize many more evaluation models than might ordinarily be contemplated in order to permit more accurate matching of a given evaluation model to a given set of operating conditions and present operating mode of the monitored apparatus.
  • a system 10 appropriate to support the teachings set forth below serves in conjunction with a monitored apparatus 11 having a plurality of sensors (not shown) deployed therewith.
  • the monitored apparatus 11 can be essentially any apparatus or process now known or hereafter developed.
  • the sensors can be many and varied, including but not limited to thermocouples, accelerometers, pressure transducers, flow meters, tachometers and the like.
  • An apparatus-operation sensor input 12 operably couples to the sensors that monitor the monitored apparatus 11 to receive those sensor readings.
  • the sensor input 12 can provide additional support as may be appropriate to a given sensor or process.
  • the sensor input 12 can source operating power to sensors that require an external power source.
  • the sensor input 12 can provide appropriate pre-processing of the sensor signals.
  • Such pre-processing can include any of filtering (or other noise reduction processing), weighting, frequency shifting, digitizing, storing-and-forwarding, and so forth.
  • sensor inputs are known in the art and further detail need not be provided here.
  • the output of the apparatus-operation sensor inputs 12 couples to a performance evaluator 13.
  • the latter can comprise an integrated engine or can be partially or wholly distributed using an implementing architecture of choice.
  • the performance evaluator 13 will comprise a programmable platform that facilitates a corresponding performance evaluation engine such as, for example, the eCM runtime engine as offered by SmartSignal Corporation of Lisle, Illinois U.S.A.
  • Pursuant to one approach, the performance evaluator 13 serves three primary purposes:
    - using an evaluation model to process real-time sensor information to generate expected sensor values for each extracted data sample, and to then generate a residual vector element for each sensor that comprises the difference between the estimated sensor value and the corresponding real-time value (when using accurate models, and when the monitored apparatus is behaving normally, these residuals tend towards a relatively small-range Gaussian distribution around a value of zero);
    - comparing the residual (or a series of residuals), for example against one or more thresholds or against a statistical hypothesis test, to identify when sufficient deviation occurs to warrant generating an alert condition for a given sensor (in a preferred embodiment this process will include determining whether such detected deviations are of statistical significance using, for example, the SmartSignal Corporation Active Decision Index algorithm as is included with SmartSignal's eCM Runtime Engine noted above); and
    - using the sensor alerts and other sensor information as may be relevant to a given application to determine when the alerts are significant enough to warrant further action (such as notifying an operator).
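The residual-and-alert logic just described can be sketched as follows; this is a hypothetical, simplified illustration, and the function names are not from the patent:

```python
def residuals(actual, estimated):
    # Residual vector element per sensor: actual reading minus model estimate.
    return [a - e for a, e in zip(actual, estimated)]

def alerts(residual_vector, thresholds):
    # Flag a sensor when its residual magnitude exceeds that sensor's threshold.
    return [abs(r) > t for r, t in zip(residual_vector, thresholds)]
```

In practice, as the text notes, the simple threshold test would be augmented or replaced by a statistical hypothesis test on a series of residuals.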
  • Performance evaluation can be based upon other approaches and criteria as well, with various methodologies being in use and/or otherwise suggested by those skilled in the art. It will be understood that the teachings set forth herein are generally applicable to such alternative approaches as well as to the illustrative performance evaluator platform described above.
  • the performance evaluator 13 utilizes, at any given moment, an evaluation model selected from amongst a plurality of monitored apparatus evaluation models 14. In general, at least some of these evaluation models 14 will be designed and intended to model the expected behavior of the monitored apparatus 11 during a particular mode of operation.
  • one of the evaluation models can serve to model the behavior of the monitored apparatus 11 when the monitored apparatus 11 operates under a first set of operating conditions (such as when fully loaded) and another of the evaluation models can serve to model the behavior of the monitored apparatus 11 when the monitored apparatus 11 operates under a different set of operating conditions (such as when substantially less loaded).
  • the number of evaluation models provided for use with a given monitored apparatus 11 will vary with the apparatus itself and with the needs of the specific application. Where the behavior of the monitored apparatus varies widely over its range of operating conditions and/or where tight correspondence between the evaluation model and the performance of the monitored apparatus is critical, a greater number of evaluation models may be appropriate.
  • Such evaluation models can be either auto-associative models or inferential models.
  • the evaluation models can be initially provided through various means. For example, the evaluation models can be sourced by the manufacturer of a given apparatus to be monitored.
  • one or more evaluation models can be provided by the supplier of the performance evaluator 13.
  • the performance evaluator 13 can itself be configured to permit generation of one or more evaluation models by the operator to permit highly customized and potentially more accurate and operationally relevant evaluation models.
  • Evaluation models in general and their manner of creation and usage are generally well understood in the art. Use of many such evaluation models may benefit from the teachings set forth herein. In a preferred approach, however, particular approaches to modeling may provide superior results when effecting mode-to-mode contiguous and interleaved data streams.
  • A reference set of observations is formed into a matrix, designated H for purposes hereof, with each column of the matrix typically representing an observation, and each row representing values from a single sensor or measurement.
  • the ordering of the columns (i.e., observations) in the matrix is not important, and there is no element of causality or time progression inherent in the modeling method.
  • the ordering of the rows is also not important, only that the rows are maintained in their correspondence to sensors throughout the modeling process, and readings from only one sensor appear on a given row. This step can occur as part of the setup of the modeling system, and is not necessarily repeated during online operation.
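The construction of the reference matrix H described above (columns are observations, rows are sensors) can be sketched as follows; a hypothetical illustration, assuming observations arrive as per-observation lists of sensor readings in a fixed sensor order:

```python
def build_reference_matrix(observations):
    # observations: list of observation vectors, each listing sensors in a fixed order.
    # Returns H as a list of rows: one row per sensor, one column per observation.
    n_sensors = len(observations[0])
    return [[obs[s] for obs in observations] for s in range(n_sensors)]
```

Because neither row order nor column order carries meaning, only the sensor-to-row correspondence must be preserved throughout modeling.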
  • modeling can be carried out. Such modeling results in the generation of estimates in response to acquiring or inputting a real-time or current or test observation, which estimates can be estimates of sensors or non- sensor parameters of the modeled system, or estimates of classifications or qualifications distinctive of the state of the system.
  • this generation of estimates comprises two major steps after input acquisition.
  • the current observation is compared to the reference data H to determine a subset of reference observations from H having a particular relationship or affinity with the current observation, with which to constitute a smaller matrix, designated D for purposes hereof.
  • The D matrix is used to compute an estimate of at least one output parameter characteristic of the modeled system based on the current observation. Accordingly, it may be understood that the model estimate Y_est is a function of the current input observation Y_in and the current matrix D, derived from H:

    Y_est = D · W    (Equation 1)

    where the weight vector W is normalized:

    W = Ŵ / ( Σ_j ŵ_j )    (Equation 2)

    and is generated from the similarity between the current observation and the reference observations in D:

    Ŵ = (D^T ⊗ D)^(-1) · (D^T ⊗ Y_in)    (Equation 3)
  • the multiplication operation is the standard matrix/vector multiplication operator, or inner product.
  • The similarity operator is presented as the symbol ⊗. (Note: this meaning should not be confused with the normal meaning of this symbol; here ⊗ denotes a "similarity" operation.)
  • The similarity operator ⊗ works much as regular matrix multiplication operations, on a row-to-column basis, and results in a matrix having as many rows as the first operand and as many columns as the second operand.
  • the similarity operation yields a scalar value for each combination of a row from the first operand and column from the second operand.
  • One similarity operation involves taking the ratio of corresponding elements of a row vector from the first operand and a column vector of the second operand, and inverting ratios greater than one, and averaging all the ratios, which for normalized and positive elements always yields a row/column similarity value between zero (very different) and one (identical). Hence, if the values are identical, the similarity is equal to one, and if the values are grossly unequal, the similarity approaches zero.
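This ratio-based similarity operation can be sketched as follows; a hypothetical illustration assuming normalized, positive element values:

```python
def ratio_similarity(row, col):
    # Element-wise ratio with ratios greater than one inverted, then averaged.
    # For positive, normalized inputs the result lies between zero and one.
    ratios = []
    for a, b in zip(row, col):
        r = a / b
        ratios.append(1.0 / r if r > 1.0 else r)
    return sum(ratios) / len(ratios)
```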
  • Another example of a similarity operator determines an elemental similarity between two corresponding elements of two observation vectors or snapshots, by subtracting from one a quantity with the absolute difference of the two elements in the numerator, and the expected range for the elements in the denominator.
  • the expected range can be determined, for example, by the difference of the maximum and minimum values for that element to be found across all the data of the reference data H.
  • the vector similarity is then determined by averaging the elemental similarities.
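The range-based elemental similarity just described can be sketched as follows; a hypothetical illustration in which the expected ranges are supplied by the caller (in practice derived from the reference data H):

```python
def range_similarity(x, y, expected_ranges):
    # Elemental similarity: one minus (absolute difference / expected range),
    # averaged across elements to give the vector similarity.
    sims = [1.0 - abs(a - b) / r for a, b, r in zip(x, y, expected_ranges)]
    return sum(sims) / len(sims)
```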
  • In another approach, the vector similarity of two observation vectors is equal to the inverse of the quantity of one plus the magnitude of the Euclidean distance between the two vectors in n-dimensional space, where n is the number of elements in each observation, that is, the number of sensors being observed.
  • the similarity reaches a highest value of one when the vectors are identical and are separated by zero distance, and diminishes as the vectors become increasingly distant (different).
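The Euclidean-distance-based vector similarity can be sketched as follows (hypothetical function name):

```python
import math

def euclidean_similarity(x, y):
    # 1 / (1 + Euclidean distance): equals one for identical vectors and
    # diminishes toward zero as the vectors grow increasingly distant.
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return 1.0 / (1.0 + dist)
```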
  • Other similarity operators are known or may become known to those skilled in the art, and may be usefully employed as described herein. The recitation of the above operators is exemplary and not meant to limit the scope of the claimed invention. In general, the following guidelines help to define a similarity operator but are not meant to limit the scope of the invention:
    1. Similarity is a scalar range, bounded at each end.
    2. The similarity of two identical inputs is the value of one of the bounded ends.
    3. The absolute value of the similarity increases as the two inputs approach being identical.
  • Accordingly, for example, an effective similarity operator for use in the present invention can generate a similarity of ten (10) when the inputs are identical, and a similarity that diminishes toward zero as the inputs become more different.
  • a bias or translation can be used, so that the similarity is 12 for identical inputs, and diminishes toward 2 as the inputs become more different.
  • a scaling can be used, so that the similarity is 100 for identical inputs, and diminishes toward zero with increasing difference.
  • the scaling factor can also be a negative number, so that the similarity for identical inputs is -100 and approaches zero from the negative side with increasing difference of the inputs.
  • the similarity can be rendered for the elements of two vectors being compared, and summed or otherwise statistically combined to yield an overall vector-to-vector similarity, or the similarity operator can operate on the vectors themselves (as in Euclidean distance).
  • these teachings are compatible with processes that provide for monitoring variables in an autoassociative mode as well as processes that utilize an inferential mode.
  • In the autoassociative mode, model estimates are made of variables that also comprise inputs to the model, while in the inferential mode model estimates are made of variables that are not present in the input to the model.
  • For the inferential mode, Equation 1 above preferably becomes:

    Y_est = D_out · W

    and Equation 3 above becomes:

    Ŵ = (D_in^T ⊗ D_in)^(-1) · (D_in^T ⊗ Y_in)

    where the D matrix has been separated into two matrices D_in and D_out, according to which rows are inputs and which rows are outputs, but column (observation) correspondence is maintained.
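A much-simplified sketch of inferential estimation in this spirit follows. It is hypothetical: it weights the output rows of the reference observations by normalized similarity to the input observation, omitting the inverse-Gram factor of the full similarity-based model.

```python
import math

def similarity(x, y):
    # Euclidean-distance-based similarity, as one example operator.
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
    return 1.0 / (1.0 + d)

def inferential_estimate(y_in, d_in, d_out):
    # d_in: input portion of each reference observation; d_out: output portion.
    # Column (observation) correspondence between d_in and d_out is maintained.
    w = [similarity(y_in, col) for col in d_in]
    total = sum(w)
    w = [wi / total for wi in w]
    n_out = len(d_out[0])
    return [sum(w[j] * d_out[j][k] for j in range(len(d_out)))
            for k in range(n_out)]
```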
  • Another example of an empirical modeling method appropriate for use in the current invention is kernel regression, or kernel smoothing.
  • A kernel regression can be used to generate an estimate based on a current observation in much the same way as the similarity-based model, which can then be used to generate a residual as detailed elsewhere herein. Accordingly, the following Nadaraya-Watson estimator can be used:

    ŷ = ( Σ_i K_h(X − X_i) · y_i ) / ( Σ_i K_h(X − X_i) )    (Equation 6)
  • In this equation, a single scalar inferred parameter ŷ is estimated as a sum of weighted exemplars y_i from exemplar data, where the weight is determined by a kernel K of width h acting on the difference between the current observation X and the exemplar observations X_i corresponding to the y_i from the exemplar data.
  • The independent variables X_i can be scalars or vectors.
  • If the exemplar data comprises vectors Y_i, the estimate can be a vector, instead of a scalar:

    Ŷ = ( Σ_i K_h(X − X_i) · Y_i ) / ( Σ_i K_h(X − X_i) )    (Equation 7)
  • Here the scalar kernel weight multiplies each vector Y_i to yield the estimated vector.
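The vector form of the Nadaraya-Watson estimator can be sketched as follows; a hypothetical illustration using a Gaussian kernel as one possible choice:

```python
import math

def gaussian_kernel(u):
    # Standard Gaussian kernel K(u) = exp(-u^2 / 2) / sqrt(2 * pi).
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def nw_estimate(x, exemplars_x, exemplars_y, h):
    # Kernel-weighted average of the exemplar output vectors Y_i, with
    # weights K((x - x_i) / h) for scalar inputs x_i and bandwidth h.
    weights = [gaussian_kernel((x - xi) / h) for xi in exemplars_x]
    total = sum(weights)
    dim = len(exemplars_y[0])
    return [sum(w * yi[k] for w, yi in zip(weights, exemplars_y)) / total
            for k in range(dim)]
```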
  • A variety of kernels are known in the art and may be used.
  • One well-known kernel is the Epanechnikov kernel:

    K_h(u) = (3/4) · (1 − (u/h)²) / h  for |u| ≤ h;  K_h(u) = 0 otherwise
  • h is the bandwidth of the kernel, a tuning parameter, and u can be obtained from the difference between the current observation and the exemplar observations as in equation 6.
  • Another kernel of the countless kernels that can be used in remote monitoring according to the invention is the common Gaussian kernel:

    K_h(u) = (1 / (h · √(2π))) · e^(−u² / (2h²))
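The two kernels can be sketched in normalized form, with u taken as the bandwidth-scaled difference u = (x − x_i) / h; the function names are hypothetical:

```python
import math

def epanechnikov(u):
    # K(u) = 0.75 * (1 - u^2) for |u| <= 1, and zero outside that support.
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def gaussian(u):
    # K(u) = exp(-u^2 / 2) / sqrt(2 * pi); nonzero everywhere.
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
```

The Epanechnikov kernel gives exemplars outside the bandwidth exactly zero weight, while the Gaussian kernel merely down-weights distant exemplars.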
  • Essentially any evaluation model can potentially be utilized in conjunction with these teachings.
  • the particular style and configuration of evaluation models as are set forth above, however, work particularly well when interleaving the resultant data from multiple evaluation models is undertaken as described herein.
  • This evaluation model architecture serves well to provide empirical output that, even when comprising the unified output of multiple such models, nevertheless readily facilitates presentation and/or trending/statistical-testing analysis.
  • selection of a given evaluation model to be used by the performance evaluator 13 can be governed in a manual fashion by an operator.
  • an evaluation model selector 15 that detects and responds to the present operating mode of the monitored apparatus 11 serves to automatically select a particular evaluation model for use by the performance evaluator 13.
  • If desired, the evaluation model selector 15 can make a particular selection based upon a subsequently expected operating state of the monitored apparatus. So configured, the evaluation model selector 15 can identify a particular evaluation model prior to its actual needed deployment. This, in turn, can foster a potentially better match between the actual operating mode of the monitored apparatus 11 at any given point in time and the particular evaluation model then being used to assess the performance and behavior of the monitored apparatus 11. In another embodiment, if desired, the evaluation model selector 15 can select between auto-associative and inferential evaluation models wherein both models are intended to model a common operating mode of the monitored apparatus 11. Such a selection can be based, for example, upon a determination that a given sensor has become faulty.
  • the resultant monitored apparatus performance data as is provided as an output of the performance evaluator 13 remains substantially contiguous regardless of changes regarding which of the evaluation models is presently being used by the performance evaluator 13.
  • the resultant monitored apparatus performance data is substantially interleaved regardless of changes to evaluation model usage. This output information is then used in a manner appropriate to the needs of a given application.
  • For example, the information may be stored in a database 17 and/or provided to a human interface 18 such as a display or printer output as is well understood in the art.
  • Importantly, the contiguous and interleaved information will serve to present the resultant information as a whole rather than as segregated discrete temporal segments. Subsequent display and processing will benefit accordingly.
  • Referring now to FIG. 2, the above-described embodiments and/or any other suitable platform serve generally to facilitate a process 20 wherein a plurality of substantially temporally-coincident sensor readings are provided 21 and where multiple evaluation models are used 22 in a discrete, temporally segregated fashion to process the plurality of substantially temporally-coincident sensor readings to provide resultant evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus.
  • specific evaluation models are used in this discrete temporally segregated fashion with respect to one another as a function, at least in part, of a present (or anticipated) operational state of the monitored apparatus 11.
  • This process 20 then facilitates processing 23 this resultant evaluation data in a temporally interleaved manner to thereby facilitate detection of anomalous circumstances regarding the monitored apparatus 11.
  • the resultant evaluation data can be presented in an interleaved manner on a user-perceivable display.
  • a displayed output graph 30 of information regarding a particular monitored condition appears as a contiguous display of data.
  • a first portion of the data 31 corresponds to data determined through use of a first evaluation model (model 1), and a following portion of the data 32 corresponds to data determined through use of a different evaluation model (model 3).
  • the data itself appears as a contiguous entity.
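One hypothetical way to splice per-model output segments into a single contiguous series, while keeping a record of which evaluation model produced each point, is:

```python
def interleave(segments):
    # segments: list of (model_id, values) pairs in time order, one pair per
    # interval during which a given evaluation model was in use.
    series, provenance = [], []
    for model_id, values in segments:
        series.extend(values)
        provenance.extend([model_id] * len(values))
    return series, provenance
```

Downstream display or trending code can then consume the unified series while the provenance record remains available if needed.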
  • a method 40 for use in facilitating processing of information regarding performance of a monitored apparatus can begin with provision 41 of a plurality of sensor readings that correspond to various monitored performance parameters and characteristics of the apparatus.
  • these sensor readings are different from one another (though redundant sensor readings can be provided, if desired, to protect against the loss of a given sensor).
  • these sensor readings have a predetermined substantially temporally-coincident relationship with one another.
  • the readings from the sensors all relate to substantially similar points in time. If desired, and as may be appropriate to a given approach to subsequent processing of the resultant information, the sensor readings can relate to different moments in time, though the degree of difference will likely need to be constant or at least of a known value to permit a synchronized view of the overall data set.
  • the process 40 selects 42 a particular evaluation model from amongst a plurality of candidate evaluation models to be used when evaluating the sensor readings.
  • the operating mode of the monitored apparatus influences the selection process. As noted above, this can include, for example, a present operating mode of the monitored apparatus or an expected operating mode (such as an imminent likely operating mode) of the monitored apparatus. If desired, more than one evaluation model can be selected for use at any given moment. So configured, the sensor readings can be processed in parallel in accord with such multiple evaluation models. This would permit, for example, a hot switchover from usage of one evaluation model to another evaluation model upon the occurrence of a predetermined trigger (such as detection of attainment of a particular operation mode of the monitored apparatus).
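Selection of an evaluation model as a function of operating mode can be sketched as follows; this is a hypothetical illustration in which the operating mode is reduced to a load fraction and each candidate model covers a load band:

```python
def select_model(load_fraction, model_bands):
    # model_bands: list of (min_load, max_load, model_name) tuples.
    # Returns the first model whose band covers the present load.
    for lo, hi, name in model_bands:
        if lo <= load_fraction <= hi:
            return name
    raise ValueError("no evaluation model covers this operating mode")
```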
  • the process 40 then uses 43 the selected evaluation model to process the sensor readings and to provide resultant evaluation data.
  • This resultant evaluation data relates at least in part to the apparent expected operation of the monitored apparatus.
  • the exact nature of the resultant evaluation data can also vary in accordance with the needs of the downstream processing to be applied.
  • the resultant evaluation data can comprise at least some of the sensor readings themselves, one or more estimated sensor reading values as calculated through usage of the evaluation model, and/or one or more residual values as correspond to a difference between the sensor readings and a corresponding estimated sensor reading, to name a few.
  • Such resultant evaluation data can then be optionally provided to a database, to a display, or to another processing and/or analysis platform for subsequent consideration.
  • The process 40 determines 44 from time to time whether the operating mode has changed enough to warrant selection of a different evaluation model in substitution of the first selected evaluation model. When true, the process 40 selects 45 a new evaluation model. In a preferred embodiment, again, the process 40 selects this new evaluation model as a function of the operating mode of the monitored apparatus. The process 40 then uses 46 this new evaluation model to process the sensor readings and to provide resultant evaluation data in accordance with that usage. Pursuant to a preferred approach, the resultant evaluation data as corresponds to the new evaluation model is provided in a manner that is substantially contiguous to provision of the earlier resultant evaluation data as corresponded to the first evaluation model.
  • the resultant evaluation data that corresponds to the first evaluation model is spliced onto the following resultant evaluation data that corresponds to the next subsequent evaluation model.
  • the combined resultant evaluation data from the first and subsequent evaluation model relates, at least in part, to a continued and contiguous view of the continuing and contiguous apparent expected operation of the monitored apparatus.
  • the resultant evaluation data can again be provided to a database, to a display, and/or to a subsequent processing platform.
  • the contiguous and interleaved nature of the resultant evaluation data provides various benefits as noted before, including the opportunity to now facilitate the detection of anomalous performance of the monitored apparatus over a given period of time that bridges a change from one evaluation model to another using a data driven empirical modeling technique.
  • an optional process 50 can provide an operator with an opportunity to select 51 from amongst a plurality of data-processing modes in this regard.
  • In one embodiment, two candidate data-processing modes can be provided, one being a continuous data-processing mode as described above with respect to FIG. 4 and the other being a non-continuous data-processing mode as characterizes the prior art.
  • the process 50 can then determine 52 which mode the operator has selected.
  • When the non-continuous mode is selected, the process 50 can use 53 a corresponding discrete/non-continuous data-processing mode.
  • the process 50 can use a corresponding continuous data-processing mode 40 as described above. So configured, an operator can have a choice between such modes of operation to thereby permit selection of a particular process mode to suit the specific needs of a given application and context.
  • the continuous presentation of resultant evaluation data posited above will likely tend to work more successfully when using evaluation models that both use essentially identical sensor inputs.
  • If desired, however, one could switch to an evaluation model that uses at least some alternative sensor readings by, for example, normalizing the resultant evaluation data of the latter to better correlate to and conform with the resultant evaluation data of the former.
  • Pursuant to the approaches set forth above, the resultant evaluation data from a first model abruptly shifts, albeit contiguously and continuously, to the resultant evaluation data of a subsequent model. For many applications this approach will provide superior results. There may be some instances, however, when a more gradual shift from one data stream to the other may be appropriate.
  • For example, over some bridging period of time it may be appropriate to simply average the data from both models (presuming that both are being processed in parallel and discrete from one another) before shifting completely to only data from the subsequent model. Or, as another example, one might first weight or otherwise normalize one or both data streams prior to making a combination of the two for some bridging period of time to thereby effect a more gradual and softer transition from the use of one model to the use of another.
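The gradual weighted transition suggested above can be sketched as follows; a hypothetical illustration using a linear ramp from the old model's output toward the new model's output over a bridging window, assuming both models run in parallel during the bridge:

```python
def blend_transition(old_stream, new_stream, bridge_len):
    # For the first bridge_len samples, mix the two streams with a weight
    # that ramps linearly toward the new model; afterwards use only the new.
    out = []
    for i, new_val in enumerate(new_stream):
        if i < bridge_len:
            w = (i + 1) / (bridge_len + 1)
            out.append((1.0 - w) * old_stream[i] + w * new_val)
        else:
            out.append(new_val)
    return out
```

A simple average over the bridge, as the text also suggests, corresponds to holding w fixed at 0.5 instead of ramping it.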

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Testing And Monitoring For Control Systems (AREA)
  • Test And Diagnosis Of Digital Computers (AREA)

Abstract

A performance evaluation apparatus (10) and method has an evaluation model selector (15) that selects among a set of monitored apparatus evaluation models (14) for use by a performance evaluator (13), such that performance data output by the performance evaluator (13) remains substantially contiguous regardless of changes to which one of the evaluation models (14) is presently used by the performance evaluator (13).

Description

INFORMATION PROCESSING APPARATUS AND METHOD
Technical Field This invention relates generally to information processing and more particularly to facilitating the processing of information regarding performance of a monitored apparatus using multiple evaluation models against which performance of the monitored apparatus is compared.
Background Various techniques are known to permit the monitoring of a given apparatus. Some techniques permit off-line analysis of a given apparatus to support, for example, identification of a particular cause of malfunction. Other techniques permit on-line monitoring to detect, for example, out-of-specification performance that may require the attention of service personnel. As to the latter, it is known to utilize an evaluation model against which present performance of a given apparatus can be compared to thereby ascertain the health of the apparatus. Such techniques have proven to be both robust and powerful. In particular, proper selection and use of an evaluation model can permit reliable detection of apparatus conditions that likely signal a future problem even while overall performance of the apparatus presently remains within acceptable guidelines. By identifying such conditions in advance of an actual breakdown or other diminution of performance, early corrective actions can be taken with a lessened impact on overall apparatus performance. Because many apparatus have more than one operating mode, in many settings a single evaluation model may be inadequate to permit sufficiently accurate analysis in this regard. For example, the overall performance of a jet engine operating during a highly loaded state can be considerably different than for a jet engine that operates during a lightly loaded state. To meet this concern, multiple evaluation models are sometimes provided. A given evaluation model is then selected and used during a corresponding apparatus mode of operation. Use of multiple evaluation models has proven effective to extend the useful operating range over which performance of the corresponding apparatus can be successfully monitored to facilitate accurate condition determination. Unfortunately, however, existing systems tend to use the resultant evaluation data in a relatively discrete and non-contiguous fashion. 
In particular, the evaluation data that results through use of a first evaluation model is treated in essentially all respects as being discrete from and different from evaluation data that results through use of a second evaluation model. For example, when providing a display of performance evaluation data over time to a user, the data that corresponds to the second evaluation model will be presented in a discontinuous and a discrete manner with respect to the data from the first evaluation model. As another example, and in the same manner, fault prediction programming that can process a series of evaluation data to extract meaning from its temporal content will nevertheless not bridge the discontinuity that occurs when the monitoring process switches from one evaluation model to another. In effect, the analysis process (and the resultant corresponding display associated therewith) begins anew with a new evaluation model. Such discrete and non-contiguous treatment of the resultant evaluation data is not without problems. At a minimum such gaps and discrete handling can be bothersome to an operator who must make their own independent efforts to stitch together the meaning of the multiple data outputs as are provided by multiple evaluation models. Worse, the power of the analysis engine to extract useful information from monitored performance trends can be compromised or even lost when the ability to track a trend breaks upon switching to a new evaluation model.
Brief Description of the Drawings The above needs are at least partially met through provision of the information processing apparatus and method described in the following detailed description, particularly when studied in conjunction with the drawings, wherein: FIG. 1 comprises a block diagram as configured in accordance with an embodiment of the invention; FIG. 2 comprises a high-level flow diagram as configured in accordance with an embodiment of the invention; FIG. 3 comprises an illustrative schematic of a display as configured in accordance with an embodiment of the invention; FIG. 4 comprises a flow diagram as configured in accordance with various embodiments of the invention; and FIG. 5 comprises a flow diagram as configured in accordance with another embodiment of the invention. Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are typically not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention.
Detailed Description Generally speaking, pursuant to these various embodiments, a process provides for a plurality of substantially temporally-coincident sensor readings that each correspond to a sensed condition of a monitored apparatus. A first evaluation model (selected in accord with a present or anticipated operating mode of the monitored apparatus) is used to process this plurality of sensor readings to provide resultant first evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus. Upon later selecting a different evaluation model to continue processing then-current sensor readings (wherein the selection of a different evaluation model is based, for example, upon a changed operating mode of the monitored apparatus), the resultant evaluation data is provided substantially contiguous with provision of the resultant evaluation data from use of the first evaluation model. This interlacing of data permits, in part, the new resultant evaluation data to relate, at least in part, to continued and contiguous apparent expected operation of the monitored apparatus. In effect, the data from the two independent processes is united into a common data stream that then appears holistic to subsequent downstream processing actions and steps. Depending upon the embodiment, such evaluation models can be selected from amongst a considerably larger number of candidate evaluation models. Since there is no loss of continuity to be experienced upon switching to a new evaluation model, an operator is in fact considerably more free to utilize many more evaluation models than might ordinarily be contemplated in order to permit more accurate matching of a given evaluation model to a given set of operating conditions and present operating mode of the monitored apparatus. 
If desired, both continuous and contiguous handling of resultant evaluation data and a more traditional discrete and non-contiguous handling of resultant evaluation data can be selectively provided. So configured, a given operator can select between such alternative modes of operation to suit the needs of a given context or setting. Referring now to the drawings, and in particular to FIG. 1, a system 10 appropriate to support the teachings set forth below serves in conjunction with a monitored apparatus 11 having a plurality of sensors (not shown) deployed therewith. The monitored apparatus 11 can be essentially any apparatus or process now known or hereafter developed. The sensors can be many and varied, including but not limited to thermocouples, accelerometers, pressure transducers, flow meters, tachometers and the like. In a preferred approach, there will usually be a number of sensors that monitor different conditions with respect to the monitored apparatus 11 as a broad spectrum of sensor data tends to provide richer material for the purposes of detecting the pending development of fault conditions. The particular sensors employed (and the conditions to be monitored) are selected to suit the circumstances of a given apparatus and the needs of a given context and application. The selection and deployment of sensors in this context is understood in the art and hence further detail will not be provided here for the sake of brevity and the preservation of focus. An apparatus-operation sensor input 12 operably couples to the sensors that monitor the monitored apparatus 11 to receive those sensor readings. The sensor input 12 can provide additional support as may be appropriate to a given sensor or process. For example, the sensor input 12 can source operating power to sensors that require an external power source. As another example, the sensor input 12 can provide appropriate pre-processing of the sensor signals. 
Such pre-processing can include any of filtering (or other noise reduction processing), weighting, frequency shifting, digitizing, storing-and-forwarding, and so forth. Again, such sensor inputs are known in the art and further detail need not be provided here. The output of the apparatus-operation sensor inputs 12 couples to a performance evaluator 13. The latter can comprise an integrated engine or can be partially or wholly distributed using an implementing architecture of choice. In a preferred embodiment, the performance evaluator 13 will comprise a programmable platform that facilitates a corresponding performance evaluation engine such as, for example, the eCM runtime engine as offered by SmartSignal Corporation of Lisle, Illinois U.S.A. In accordance with present practice, and pursuant to a preferred embodiment, the performance evaluator 13 serves three primary purposes: - the use of an evaluation model to process real-time sensor information to generate expected sensor values for each extracted data sample and to then generate a residual vector element for each sensor that comprises the difference between the estimated sensor value and the corresponding real-time value (when using accurate models and when the monitored apparatus is behaving normally these residuals tend towards a relatively small range Gaussian distribution around a value of zero); - comparing the residual (or a series of residuals) (for example, against one or more thresholds or against a statistical hypothesis test) to identify when sufficient deviation occurs to warrant generating an alert condition for a given sensor (in a preferred embodiment this process will include determining whether such detected deviations are of statistical significance using, for example, the SmartSignal Corporation Active Decision Index algorithm as is included with SmartSignal's eCM Runtime Engine noted above); and - using the sensor alerts and other sensor information as may be relevant to a given 
application to determine when the alerts are significant enough to merit being identified as an incident and being brought specifically to the attention of an operator (with incident generation being based upon various rules as is otherwise generally well understood in the art). Performance evaluation can be based upon other approaches and criteria as well, with various methodologies being in use and/or otherwise suggested by those skilled in the art. It will be understood that the teachings set forth herein are generally applicable to such alternative approaches as well as to the illustrative performance evaluator platform described above. Pursuant to a preferred embodiment, the performance evaluator 13 utilizes, at any given moment, an evaluation model selected from amongst a plurality of monitored apparatus evaluation models 14. In general, at least some of these evaluation models 14 will be designed and intended to model the expected behavior of the monitored apparatus 11 during a particular mode of operation. For example, one of the evaluation models can serve to model the behavior of the monitored apparatus 11 when the monitored apparatus 11 operates under a first set of operating conditions (such as when fully loaded) and another of the evaluation models can serve to model the behavior of the monitored apparatus 11 when the monitored apparatus 11 operates under a different set of operating conditions (such as when substantially less loaded). The number of evaluation models provided for use with a given monitored apparatus 11 will vary with the apparatus itself and with the needs of the specific application. Where the behavior of the monitored apparatus varies widely over its range of operating conditions and/or where tight correspondence between the evaluation model and the performance of the monitored apparatus is critical, a greater number of evaluation models may be appropriate. 
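The estimate-residual-alert flow described above can be sketched in Python. This is a minimal illustration only, not SmartSignal's actual algorithm: the function names are assumptions, and a plain mean-absolute-deviation threshold stands in for a statistical significance test such as the Active Decision Index mentioned above.

```python
import numpy as np

def residuals(actual, estimated):
    """Residual vector: difference between estimated and actual sensor
    values. For an accurate model and a normally behaving apparatus,
    these tend toward a narrow Gaussian distribution around zero."""
    return np.asarray(estimated, dtype=float) - np.asarray(actual, dtype=float)

def alert_sensors(resid_series, threshold):
    """Flag sensors whose mean absolute residual over a window of samples
    exceeds a simple threshold (a stand-in for a statistical hypothesis
    test applied to a series of residuals)."""
    resid_series = np.asarray(resid_series, dtype=float)  # (samples, sensors)
    deviation = np.abs(resid_series).mean(axis=0)
    return deviation > threshold
```

A subsequent rule layer would then decide, per the text above, when a pattern of such sensor alerts is significant enough to be raised to the operator as an incident.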
Such evaluation models can be either auto-associative models or inferential models. These characterizations relate generally to the manner by which the performance evaluator 13 utilizes the evaluation model to calculate a resultant residual value. An auto-associative model-based process uses every sensor input to calculate the estimated state of every sensor in the model. An inferential model-based process uses only a portion of the sensor inputs to generate the estimated values for all the sensors in the model. The inferential approach serves well when a particular sensor value appears unduly noisy or the corresponding sensor is known to be prone to failure, or when the dynamics of the equipment are understood and it is desirable to monitor only certain sensors as they are driven by parameters for equipment health purposes. The evaluation models can be initially provided through various means. For example, the evaluation models can be sourced by the manufacturer of a given apparatus to be monitored. As another example, one or more evaluation models can be provided by the supplier of the performance evaluator 13. As yet another example, the performance evaluator 13 can itself be configured to permit generation of one or more evaluation models by the operator to permit highly customized and potentially more accurate and operationally relevant evaluation models. Evaluation models in general and their manner of creation and usage are generally well understood in the art. Use of many such evaluation models may benefit from the teachings set forth herein. In a preferred approach, however, particular approaches to modeling may provide superior results when effecting mode-to-mode contiguous and interleaved data streams. 
In one preferred approach, a reference set of observations is formed into a matrix, designated H for purposes hereof, with each column of the matrix typically representing an observation, and each row representing values from a single sensor or measurement. The ordering of the columns (i.e., observations) in the matrix is not important, and there is no element of causality or time progression inherent in the modeling method. The ordering of the rows is also not important, only that the rows are maintained in their correspondence to sensors throughout the modeling process, and readings from only one sensor appear on a given row. This step can occur as part of the setup of the modeling system, and is not necessarily repeated during online operation. After assembling a sufficiently characterizing set H of reference data observations for the modeled system, modeling can be carried out. Such modeling results in the generation of estimates in response to acquiring or inputting a real-time or current or test observation, which estimates can be estimates of sensors or non-sensor parameters of the modeled system, or estimates of classifications or qualifications distinctive of the state of the system. These estimates can be used for a variety of useful modeling purposes as described below. In this preferred approach, this generation of estimates comprises two major steps after input acquisition. In a first step, the current observation is compared to the reference data H to determine a subset of reference observations from H having a particular relationship or affinity with the current observation, with which to constitute a smaller matrix, designated D for purposes hereof. In the second step, the D matrix is used to compute an estimate of at least one output parameter characteristic of the modeled system based on the current observation. 
Accordingly, it may be understood that the model estimate Yest is a function of the current input observation Yin and the current matrix D, derived from H:
Y_est = D · W    (Equation 1)

W = Ŵ / ( Σ_j Ŵ(j) )    (Equation 2)

Ŵ = (D^T ⊗ D)^(-1) · (D^T ⊗ Y_in)    (Equation 3)
where the vector Yest of estimated values for the sensors is equal to the contributions from each of the snapshots of contemporaneous sensor values arranged to comprise matrix D. These contributions are determined by weight vector W. The multiplication operation is the standard matrix/vector multiplication operator, or inner product. The similarity operator is presented as the symbol ⊗. (Note: this should not be confused with the normal meaning of this symbol; here ⊗ denotes a "similarity" operation.) The similarity operator, ⊗, works much as regular matrix multiplication operations, on a row-to-column basis, and results in a matrix having as many rows as the first operand and as many columns as the second operand. The similarity operation yields a scalar value for each combination of a row from the first operand and a column from the second operand. One similarity operation involves taking the ratio of corresponding elements of a row vector from the first operand and a column vector of the second operand, inverting ratios greater than one, and averaging all the ratios, which for normalized and positive elements always yields a row/column similarity value between zero (very different) and one (identical). Hence, if the values are identical, the similarity is equal to one, and if the values are grossly unequal, the similarity approaches zero. Another example of a similarity operator that can be used determines an elemental similarity between two corresponding elements of two observation vectors or snapshots, by subtracting from one a quantity with the absolute difference of the two elements in the numerator, and the expected range for the elements in the denominator. The expected range can be determined, for example, by the difference of the maximum and minimum values for that element to be found across all the data of the reference data H. The vector similarity is then determined by averaging the elemental similarities. 
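The estimate generation of Equations 1 through 3, using the ratio-style similarity operator just described, can be sketched in Python. The function names are illustrative assumptions, the selection of the D matrix from H is assumed to have already occurred, and the sensor values are assumed positive and normalized so the ratio similarity is well behaved.

```python
import numpy as np

def ratio_similarity(a, b):
    """Ratio similarity of two vectors: elementwise ratios with ratios
    greater than one inverted, then averaged. Assumes positive,
    normalized data; yields 1.0 for identical vectors, toward 0 for
    grossly unequal ones."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float((np.minimum(a, b) / np.maximum(a, b)).mean())

def sim_matrix(A, B):
    """Similarity 'product' A ⊗ B: row-by-column like matrix
    multiplication, but each entry is a similarity scalar."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    B = np.atleast_2d(np.asarray(B, dtype=float))
    return np.array([[ratio_similarity(row, col) for col in B.T] for row in A])

def sbm_estimate(D, y_in):
    """Equations 1-3: solve (D^T ⊗ D) Ŵ = (D^T ⊗ y_in), normalize the
    weights, and return Y_est = D · W."""
    G = sim_matrix(D.T, D)                              # D^T ⊗ D
    a = sim_matrix(D.T, np.reshape(y_in, (-1, 1)))      # D^T ⊗ y_in
    w_hat = np.linalg.solve(G, a).ravel()
    w = w_hat / w_hat.sum()                             # Equation 2
    return D @ w                                        # Equation 1
```

A current observation that exactly matches a reference snapshot in D is reproduced by the estimate, which is consistent with the residual tending to zero for normal behavior.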
In yet another similarity operator that can be used, the vector similarity of two observation vectors is equal to the inverse of the quantity of one plus the magnitude of the Euclidean distance between the two vectors in n-dimensional space, where n is the number of elements in each observation, that is, the number of sensors being observed. Thus, the similarity reaches a highest value of one when the vectors are identical and are separated by zero distance, and diminishes as the vectors become increasingly distant (different). Other similarity operators are known or may become known to those skilled in the art, and may be usefully employed as described herein. The recitation of the above operators is exemplary and not meant to limit the scope of the claimed invention. In general, the following guidelines help to define a similarity operator but are not meant to limit the scope of the invention: 1. Similarity is a scalar range, bounded at each end. 2. The similarity of two identical inputs is the value of one of the bounded ends. 3. The absolute value of the similarity increases as the two inputs approach being identical. Accordingly, for example, an effective similarity operator for use in the present invention can generate a similarity of ten (10) when the inputs are identical, and a similarity that diminishes toward zero as the inputs become more different. Alternatively, a bias or translation can be used, so that the similarity is 12 for identical inputs, and diminishes toward 2 as the inputs become more different. Further, a scaling can be used, so that the similarity is 100 for identical inputs, and diminishes toward zero with increasing difference. Moreover, the scaling factor can also be a negative number, so that the similarity for identical inputs is -100 and approaches zero from the negative side with increasing difference of the inputs. 
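The two alternative similarity operators described above, the range-normalized elemental similarity and the inverse-Euclidean-distance similarity, can be sketched as follows; the function names and argument conventions are assumptions for illustration only.

```python
import numpy as np

def range_similarity(a, b, ranges):
    """Elemental similarity 1 - |a_i - b_i| / range_i, averaged over the
    elements; `ranges` is the expected range per sensor, e.g. max - min
    across the reference data H."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float((1.0 - np.abs(a - b) / np.asarray(ranges, dtype=float)).mean())

def euclidean_similarity(a, b):
    """1 / (1 + Euclidean distance): exactly 1 for identical vectors,
    diminishing toward 0 as the vectors grow apart."""
    d = np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))
    return 1.0 / (1.0 + d)
```

Either operator satisfies the guidelines above: a bounded scalar range whose extreme is attained for identical inputs.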
The similarity can be rendered for the elements of two vectors being compared, and summed or otherwise statistically combined to yield an overall vector-to-vector similarity, or the similarity operator can operate on the vectors themselves (as in Euclidean distance). As noted earlier, these teachings are compatible with processes that provide for monitoring variables in an autoassociative mode as well as processes that utilize an inferential mode. Generally stated, in the autoassociative mode model estimates are made of variables that also comprise inputs to the model, while in the inferential mode model estimates are made of variables that are not present in the input to the model. In the inferential mode, Equation 1 above preferably becomes:
Y_est = D_out · W    (Equation 4)
and Equation 3 above becomes:
Ŵ = (D_in^T ⊗ D_in)^(-1) · (D_in^T ⊗ Y_in)    (Equation 5)
where the D matrix has been separated into two matrices D_in and D_out, according to which rows are inputs and which rows are outputs, but column (observation) correspondence is maintained. Another example of an empirical modeling method appropriate for use in the current invention is kernel regression, or kernel smoothing. A kernel regression can be used to generate an estimate based on a current observation in much the same way as the similarity-based model, which can then be used to generate a residual as detailed elsewhere herein. Accordingly, the following Nadaraya-Watson estimator can be used:
ŷ = [ Σ_i K_h(X − X_i) · y_i ] / [ Σ_i K_h(X − X_i) ]    (Equation 6)
where in this case a single scalar inferred parameter y-hat is estimated as a sum of the weighted exemplar values yi from the exemplar data, where the weight is determined by a kernel K of width h acting on the difference between the current observation X and the exemplar observations Xi corresponding to the yi. The independent variables Xi can be scalars or vectors. Alternatively, the estimate can be a vector, instead of a scalar:
Ŷ = [ Σ_i K_h(X − X_i) · Y_i ] / [ Σ_i K_h(X − X_i) ]    (Equation 7)
Here, the scalar kernel multiplies the vector Yi to yield the estimated vector. A wide variety of kernels are known in the art and may be used. One well-known kernel, by way of example, is the Epanechnikov kernel:
K_h(u) = (3/4) · (1 − u²/h²) / h for |u| ≤ h;  K_h(u) = 0 otherwise    (Equation 8)
where h is the bandwidth of the kernel, a tuning parameter, and u can be obtained from the difference between the current observation and the exemplar observations as in equation 6. Another kernel of the countless kernels that can be used in remote monitoring according to the invention is the common Gaussian kernel:
K_h(u) = (1 / (h · √(2π))) · e^(−u² / (2h²))    (Equation 9)
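A minimal Python sketch of the scalar Nadaraya-Watson estimator with the Epanechnikov and Gaussian kernels discussed above, restricted to scalar exemplar inputs for brevity; the names and signatures are illustrative, and bandwidth selection is left as a tuning decision.

```python
import numpy as np

def epanechnikov(u, h):
    """Epanechnikov kernel of bandwidth h: (3/4)(1 - u^2/h^2)/h inside
    |u| <= h, zero outside."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= h, 0.75 * (1.0 - (u / h) ** 2) / h, 0.0)

def gaussian(u, h):
    """Gaussian kernel of bandwidth h."""
    u = np.asarray(u, dtype=float)
    return np.exp(-u ** 2 / (2.0 * h ** 2)) / (h * np.sqrt(2.0 * np.pi))

def nadaraya_watson(x, X, Y, kernel, h):
    """Scalar Nadaraya-Watson estimate: kernel-weighted average of the
    exemplar outputs Y, weighted by the kernel acting on x - X_i."""
    w = kernel(x - np.asarray(X, dtype=float), h)
    return float(np.sum(w * np.asarray(Y, dtype=float)) / np.sum(w))
```

Because the kernel weights appear in both numerator and denominator, any positive scaling of the kernel cancels, which is why the choice between kernels mainly affects how quickly distant exemplars lose influence.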
As noted, other manners and styles of evaluation model can potentially be utilized in conjunction with these teachings. The particular style and configuration of evaluation models as are set forth above, however, work particularly well when interleaving the resultant data from multiple evaluation models is undertaken as described herein. In particular, such evaluation model architecture serves well to provide empirical output that, even when comprising the unified output of multiple such models, nevertheless readily facilitates presentation and/or trending/statistical testing analysis. If desired, selection of a given evaluation model to be used by the performance evaluator 13 can be governed in a manual fashion by an operator. In a preferred embodiment, however, an evaluation model selector 15 that detects and responds to the present operating mode of the monitored apparatus 11 serves to automatically select a particular evaluation model for use by the performance evaluator 13. In the alternative, if desired, the evaluation model selector 15 can make a particular selection based upon a subsequently expected operating state of the monitored apparatus. So configured, the evaluation model selector 15 can identify a particular evaluation model prior to its actual needed deployment. This, in turn, can foster a potentially better match between the actual operating mode of the monitored apparatus 11 at any given point in time and the particular evaluation model then being used to assess the performance and behavior of the monitored apparatus 11. In another embodiment, if desired, the evaluation model selector 15 can select between auto-associative and inferential evaluation models wherein both models are intended to model a common operating mode of the monitored apparatus 11. Such a selection can be based, for example, upon a determination that a given sensor has become faulty. 
In such a case, although the operating mode of the monitored apparatus 11 has not changed, it may be appropriate to switch from an auto-associative evaluation model to an inferential evaluation model. Pursuant to a preferred embodiment, the resultant monitored apparatus performance data as is provided as an output of the performance evaluator 13 remains substantially contiguous regardless of changes regarding which of the evaluation models is presently being used by the performance evaluator 13. In particular, and as the performance evaluator 13 may switch back and forth between two or more evaluation models, the resultant monitored apparatus performance data is substantially interleaved regardless of changes to evaluation model usage. This output information is then used in a manner appropriate to the needs of a given application. For example, the information may be stored in a database 17 and/or provided to a human interface 18 such as a display or printer output as is well understood in the art. Pursuant to these preferred embodiments, however, the contiguous and interleaved information will serve to present the resultant information as a whole rather than as segregated discrete temporal segments. Subsequent display and processing will benefit accordingly. Referring now to FIG. 2, the above described embodiments and/or any other suitable platform serves generally to facilitate a process 20 wherein a plurality of substantially temporally-coincident sensor readings are provided 21 and where multiple evaluation models are used 22 in a discrete temporally segregated fashion to process the plurality of substantially temporally-coincident sensor readings to provide resultant evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus. 
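The contiguous, interleaved handling of resultant evaluation data can be illustrated with a short sketch: evaluations tagged with whichever model produced them are appended to one common, time-ordered stream, so downstream display or trending logic never sees a model switch as a break. The class and field names here are assumptions for illustration, not part of the described system.

```python
from dataclasses import dataclass, field

@dataclass
class Evaluation:
    """One resultant evaluation sample, tagged with the model that produced it."""
    timestamp: float
    model_id: str
    residuals: list = field(default_factory=list)

class ContiguousStream:
    """Unites evaluations from multiple models into one time-ordered
    series, so the data appears as a single contiguous entity to
    subsequent display and trending processing."""
    def __init__(self):
        self._records = []

    def append(self, evaluation):
        self._records.append(evaluation)

    def series(self):
        # one contiguous series regardless of which model produced each sample
        return sorted(self._records, key=lambda e: e.timestamp)
```

A trend detector or display consuming `series()` would bridge a switch from, say, model 1 to model 3 without restarting its analysis.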
As noted earlier, specific evaluation models are used in this discrete temporally segregated fashion with respect to one another as a function, at least in part, of a present (or anticipated) operational state of the monitored apparatus 11. This process 20 then facilitates processing 23 this resultant evaluation data in a temporally interleaved manner to thereby facilitate detection of anomalous circumstances regarding the monitored apparatus 11. For example, the resultant evaluation data can be presented in an interleaved manner on a user-perceivable display. To illustrate, and referring momentarily to FIG. 3, a displayed output graph 30 of information regarding a particular monitored condition appears as a contiguous display of data. In particular, though a first portion of the data 31 corresponds to data determined through use of a first evaluation model (model 1) and a following portion of the data 32 corresponds to data determined through use of a different evaluation model (model 3), the data itself appears as a contiguous entity. This contrasts sharply with prior art practice wherein separate graphs would be utilized to portray the resultant data as corresponds to use of these two models. (It should be understood that the phantom line partitions and the "model" legends on the graph are for purposes of illustrating these demarcations in accord with the above description and would not ordinarily themselves be provided on a display to an operator.) Referring now to FIG. 4, and to elaborate upon the above generally-described process, a method 40 for use in facilitating processing of information regarding performance of a monitored apparatus can begin with provision 41 of a plurality of sensor readings that correspond to various monitored performance parameters and characteristics of the apparatus. 
In a preferred embodiment, at least some of these sensor readings are different from one another (though redundant sensor readings can be provided, if desired, to protect against the loss of a given sensor). Furthermore, in a preferred approach, these sensor readings have a predetermined substantially temporally-coincident relationship with one another. In perhaps the simplest approach, the readings from the sensors all relate to substantially similar points in time. If desired, and as may be appropriate to a given approach to subsequent processing of the resultant information, the sensor readings can relate to different moments in time, though the degree of difference will likely need to be constant or at least of a known value to permit a synchronized view of the overall data set. The process 40 then selects 42 a particular evaluation model from amongst a plurality of candidate evaluation models to be used when evaluating the sensor readings. In a preferred embodiment the operating mode of the monitored apparatus influences the selection process. As noted above, this can include, for example, a present operating mode of the monitored apparatus or an expected operating mode (such as an imminent likely operating mode) of the monitored apparatus. If desired, more than one evaluation model can be selected for use at any given moment. So configured, the sensor readings can be processed in parallel in accord with such multiple evaluation models. This would permit, for example, a hot switchover from usage of one evaluation model to another evaluation model upon the occurrence of a predetermined trigger (such as detection of attainment of a particular operation mode of the monitored apparatus). The process 40 then uses 43 the selected evaluation model to process the sensor readings and to provide resultant evaluation data. This resultant evaluation data, of course, relates at least in part to the apparent expected operation of the monitored apparatus. 
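The mode-driven selection performed by an evaluation model selector such as element 15, including holding parallel candidates ready for a hot switchover, might be sketched as follows. The registry-by-mode layout and function names are assumptions of this example, not a required implementation.

```python
def select_model(models, operating_mode, fallback=None):
    """Return the evaluation model registered for the present (or
    anticipated) operating mode; fall back to a default model when no
    mode-specific model has been provided."""
    return models.get(operating_mode, fallback)

def parallel_candidates(models, candidate_modes):
    """For hot switchover, gather the models for every candidate mode so
    they can be run in parallel; the active model is then chosen when a
    predetermined trigger (e.g., attainment of a mode) occurs."""
    return {m: models[m] for m in candidate_modes if m in models}
```

In practice the keys would be detected or anticipated operating modes (e.g., "full load" versus "light load" for the jet engine example in the background section).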
The exact nature of the resultant evaluation data can also vary in accordance with the needs of the downstream processing to be applied. For example, the resultant evaluation data can comprise at least some of the sensor readings themselves, one or more estimated sensor reading values as calculated through usage of the evaluation model, and/or one or more residual values as correspond to a difference between the sensor readings and a corresponding estimated sensor reading, to name a few. Such resultant evaluation data can then be optionally provided to a database, to a display, or to another processing and/or analysis platform for subsequent consideration. As the monitored apparatus changes its mode of operation during its course of usage, the process 40 determines 44 from time to time whether the operating mode has changed enough to warrant selection of a different evaluation model in substitution of the first selected evaluation model. When true, the process 40 selects 45 a new evaluation model. In a preferred embodiment, again, the process 40 selects this new evaluation model as a function of the operating mode of the monitored apparatus. The process 40 then uses 46 this new evaluation model to process the sensor readings and to provide resultant evaluation data in accordance with that usage. Pursuant to a preferred approach, the resultant evaluation data as corresponds to the new evaluation model is provided in a manner that is substantially contiguous to provision of the earlier resultant evaluation data as corresponded to the first evaluation model. For example, the resultant evaluation data that corresponds to the first evaluation model is spliced onto the following resultant evaluation data that corresponds to the next subsequent evaluation model. 
Accordingly, the combined resultant evaluation data from the first and subsequent evaluation model relates, at least in part, to a continued and contiguous view of the continuing and contiguous apparent expected operation of the monitored apparatus. As before, the resultant evaluation data can again be provided to a database, to a display, and/or to a subsequent processing platform. The contiguous and interleaved nature of the resultant evaluation data provides various benefits as noted before, including the opportunity to now facilitate the detection of anomalous performance of the monitored apparatus over a given period of time that bridges a change from one evaluation model to another using a data-driven empirical modeling technique. Other advantages include a more intuitive and user-friendly presentation of the evaluation data. Notwithstanding these various advantages of providing the resultant evaluation data in a contiguous and interleaved manner, there may be circumstances when an operator may wish to utilize the non-contiguous processing mode that tends to characterize the prior art. With reference to FIG. 5, an optional process 50 can provide an operator with an opportunity to select 51 from amongst a plurality of data-processing modes in this regard. To illustrate, two candidate data-processing modes can be provided, one being a continuous data-processing mode as described above with respect to FIG. 4 and the other being a non-continuous data-processing mode as characterizes the prior art. The process 50 can then determine 52 which mode the operator has selected. When the operator selects a non-continuous mode, the process 50 can use 53 a corresponding discrete/non-continuous data-processing mode. Similarly, when the operator selects a continuous mode, the process 50 can use a corresponding continuous data-processing mode 40 as described above. 
So configured, an operator can have a choice between such modes of operation to thereby permit selection of a particular process mode to suit the specific needs of a given application and context. Those skilled in the art will recognize that a wide variety of modifications, alterations, and combinations can be made with respect to the above described embodiments without departing from the spirit and scope of the invention, and that such modifications, alterations, and combinations are to be viewed as being within the ambit of the inventive concept. For example, in general the continuous presentation of resultant evaluation data posited above will likely tend to work more successfully when using evaluation models that both use essentially identical sensor inputs. To suit the needs of a given application, however, one could switch to an evaluation model that uses at least alternative sensor readings by, for example, normalizing the resultant evaluation data of the latter to better correlate to and conform with the resultant evaluation data of the former. As another example, in the illustrative embodiments provided above the resultant evaluation data from a first model abruptly shifts, albeit contiguously and continuously, to the resultant evaluation data of a subsequent model. For many applications this approach will provide superior results. There may be some instances, however, when a more gradual shift from one data stream to the other may be appropriate. For example, within some predetermined bridging period of time it may be appropriate to simply average the data from both models (presuming that both are being processed in parallel and discrete from the other) before shifting completely to only data from the subsequent model. 
Or, as another example, one might first weight or otherwise normalize one or both data streams prior to making a combination of the two for some bridging period of time to thereby effect a more gradual and softer transition from the use of one model to the use of another.
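A bridging transition of this kind can be sketched as below. The function and parameter names are illustrative assumptions; the specification does not prescribe a particular weighting scheme, and a fixed 50/50 average corresponds to holding the weight at 0.5 throughout the bridging period:

```python
def bridged_output(old_est, new_est, t, bridge_len):
    """Blend the outgoing and incoming models' estimates during a bridging
    period of bridge_len samples after a model switch, where t is the number
    of samples elapsed since the switch. The weight ramps linearly so the
    transition from one data stream to the other is gradual."""
    if t >= bridge_len:
        return new_est  # bridging period over: use only the subsequent model
    w = t / bridge_len
    return (1.0 - w) * old_est + w * new_est
```

At the moment of the switch only the first model's data is emitted, midway through the bridging period the two streams are averaged equally, and afterward only the subsequent model's data is used.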

Claims

We claim:

1. A method for use in facilitating processing of information regarding performance of a monitored apparatus, comprising:
- providing a plurality of substantially temporally-coincident sensor readings that each correspond to a sensed condition of the monitored apparatus;
- selecting a first evaluation model based upon an operating mode of the monitored apparatus;
- using the first evaluation model to process the plurality of substantially temporally-coincident sensor readings to provide resultant first evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus;
- subsequently, selecting a second evaluation model based upon an operating mode of the monitored apparatus;
- using the second evaluation model to process the plurality of substantially temporally-coincident sensor readings substantially contiguous to using the first evaluation model to process the plurality of substantially temporally-coincident sensor readings to thereby provide resultant second evaluation data substantially contiguous to providing the resultant first evaluation data, which resultant second evaluation data relates, at least in part, to continued and contiguous apparent expected operation of the monitored apparatus.
2. The method of claim 1 wherein providing a plurality of substantially temporally-coincident sensor readings that each correspond to a sensed condition of the monitored apparatus comprises providing a plurality of substantially temporally-coincident sensor readings that each correspond to a sensed condition of the monitored apparatus wherein at least two of the sensed conditions are different from one another.
3. The method of claim 1 wherein selecting a first evaluation model comprises selecting a first evaluation model from amongst a plurality of candidate evaluation models.
4. The method of claim 1 wherein selecting a first evaluation model based upon an operating mode of the monitored apparatus further comprises:
- determining at least one of: - a present operating mode of the monitored apparatus; and - an expected operating mode of the monitored apparatus; to provide an identified operating mode;
- using the identified operating mode to select the first evaluation model.
5. The method of claim 1 wherein using the first evaluation model to process the plurality of substantially temporally-coincident sensor readings to provide resultant first evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus comprises providing resultant first evaluation data that includes at least one of:
- at least some of the sensor readings;
- at least one estimated sensor reading;
- at least one residual value that corresponds to a difference between one of the sensor readings and a corresponding estimated sensor reading.
6. The method of claim 1 wherein subsequently, selecting a second evaluation model based upon an operating mode of the monitored apparatus further comprises:
- determining at least one of: - a present operating mode of the monitored apparatus; and - an expected operating mode of the monitored apparatus; to provide a subsequently identified operating mode;
- using the subsequently identified operating mode to select the second evaluation model.
7. The method of claim 6 wherein using the subsequently identified operating mode to select the second evaluation model comprises substituting use of the second evaluation model for use of the first evaluation model.
8. The method of claim 1 and further comprising:
- providing the first evaluation data and the second evaluation data as contiguous evaluation data to a database;
- storing the first evaluation data and the second evaluation data as contiguous evaluation data in the database.
9. The method of claim 1 and further comprising:
- providing the first evaluation data and the second evaluation data as contiguous evaluation data to facilitate evaluation of continuous performance of the monitored apparatus over a given period of time, wherein the given period of time includes a time when the first evaluation model was being used and a time when the second evaluation model was being used.
10. The method of claim 9 wherein providing the first evaluation data and the second evaluation data as contiguous evaluation data to facilitate evaluation of continuous performance of the monitored apparatus further comprises facilitating detection of anomalous performance of the monitored apparatus over the given period of time.
11. The method of claim 1 wherein selecting a first evaluation model comprises selecting a first evaluation model that comprises an auto-associative model.
12. The method of claim 1 wherein selecting a first evaluation model comprises selecting a first evaluation model that comprises an inferential model.
13. The method of claim 1 wherein:
- selecting a first evaluation model comprises selecting one of: - a first evaluation model that comprises an auto-associative model; and - a first evaluation model that comprises an inferential model; and
- selecting a second evaluation model comprises selecting one of: - a second evaluation model that comprises an auto-associative model; and - a second evaluation model that comprises an inferential model.
14. The method of claim 1 and further comprising using the first evaluation data and the second evaluation data to facilitate provision of a display that comprises a substantially continuous view of performance of the monitored apparatus over time.
15. A method for use in facilitating processing of information regarding performance of a monitored apparatus, comprising:
- selecting a first data-processing mode;
- providing a plurality of substantially temporally-coincident sensor readings that each correspond to a sensed condition of the monitored apparatus;
- selecting a first evaluation model based upon an operating mode of the monitored apparatus;
- using the first evaluation model to process the plurality of substantially temporally-coincident sensor readings to provide resultant first evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus;
- subsequently, selecting a second evaluation model based upon an operating mode of the monitored apparatus;
- using the second evaluation model to process the plurality of substantially temporally-coincident sensor readings substantially contiguous to using the first evaluation model to process the plurality of substantially temporally-coincident sensor readings to thereby provide resultant second evaluation data substantially contiguous to providing the resultant first evaluation data, which resultant second evaluation data relates, at least in part, to continued and contiguous apparent expected operation of the monitored apparatus.
16. The method of claim 15 wherein selecting a first data-processing mode comprises selecting a first data-processing mode from amongst at least two candidate data-processing modes.
17. The method of claim 16 wherein selecting a first data-processing mode from amongst at least two candidate data-processing modes further comprises selecting a first data-processing mode from amongst at least:
- a first candidate data-processing mode that provides for continuous and contiguous handling of resultant evaluation data as provided by differing evaluation models; and
- a second candidate data-processing mode that provides for discrete and non-contiguous handling of resultant evaluation data as provided by differing evaluation models.
18. The method of claim 17 wherein selecting a first data-processing mode from amongst at least:
- a first candidate data-processing mode that provides for continuous and contiguous handling of resultant evaluation data as provided by differing evaluation models; and
- a second candidate data-processing mode that provides for discrete and non-contiguous handling of resultant evaluation data as provided by differing evaluation models; further includes: selecting a first data-processing mode from amongst at least:
- a first candidate data-processing mode that provides for interleaved handling of resultant evaluation data as provided by differing evaluation models; and
- a second candidate data-processing mode that provides for non-interleaved handling of resultant evaluation data as provided by differing evaluation models.
19. A method for use in facilitating processing of information regarding performance of a monitored apparatus, comprising:
- providing a plurality of substantially temporally-coincident sensor readings that each correspond to a sensed condition of the monitored apparatus;
- using, in discrete and temporally segregated fashion, individual evaluation models as selected from amongst a plurality of evaluation models to process the plurality of substantially temporally-coincident sensor readings to provide resultant evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus;
- processing the resultant evaluation data in a temporally interleaved manner to facilitate detection of anomalous circumstances regarding the monitored apparatus.
20. The method of claim 19 wherein using, in discrete and temporally segregated fashion, individual evaluation models as selected from amongst a plurality of evaluation models further comprises selecting individual evaluation models as a function, at least in part, of a present operational state of the monitored apparatus.
21. The method of claim 19 wherein processing the resultant evaluation data in a temporally interleaved manner to facilitate detection of anomalous circumstances regarding the monitored apparatus further comprises presenting interleaved resultant evaluation data in a user-perceivable display.
22. The method of claim 19 wherein processing the resultant evaluation data in a temporally interleaved manner to facilitate detection of anomalous circumstances regarding the monitored apparatus further comprises using interleaved resultant evaluation data to detect potentially anomalous conditions regarding the monitored apparatus as a function, at least in part, of time.
23. An apparatus for processing information regarding performance of a monitored apparatus, comprising:
- a plurality of apparatus-operation sensor inputs;
- a plurality of monitored apparatus evaluation models, including at least a first evaluation model and a second evaluation model;
- an evaluation model selector that is operably coupled to the plurality of monitored apparatus evaluation models;
- a performance evaluator operably coupled to receive: - the plurality of apparatus-operation sensor inputs; and - a selected one of the plurality of monitored apparatus evaluation models; and having at least a first mode of operation wherein resultant monitored apparatus performance data as provided at an output of the performance evaluator remains substantially contiguous regardless of changes to which one of the evaluation models is presently used by the performance evaluator.
24. The apparatus of claim 23 wherein the first evaluation model corresponds to a first mode of operation of the monitored apparatus and the second evaluation model corresponds to a second mode of operation of the monitored apparatus, wherein the first mode and the second mode of operation are different from one another.
25. The apparatus of claim 23 wherein:
- the first evaluation model comprises one of: - an auto-associative model; and - an inferential model; and
- the second evaluation model comprises one of: - an auto-associative model; and - an inferential model.
26. The apparatus of claim 23 wherein the evaluation model selector is responsive to operating states of the monitored apparatus, such that a given evaluation model is selected, at least in part, as a function of a present operating state of the monitored apparatus.
27. The apparatus of claim 23 wherein the resultant monitored apparatus performance data as provided at an output of the performance evaluator is substantially interleaved regardless of changes to which one of the evaluation models is presently used by the performance evaluator.
28. The apparatus of claim 23 wherein the evaluation model selector further comprises selector means for selecting a particular evaluation model as a function, at least in part, of an operating state of the monitored apparatus.
29. The apparatus of claim 28 wherein the operating state of the monitored apparatus comprises one of:
- a present detected operating state of the monitored apparatus; and
- a subsequently expected operating state of the monitored apparatus.
30. The apparatus of claim 23 wherein the performance evaluator further comprises interleaving means for interleaving in a substantially contiguous manner the evaluation data as provided by use of the selected one of the evaluation models as the selected one of the evaluation models changes over time.
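The estimate-and-residual relationship recited in claim 5 (a residual being the difference between a sensor reading and a corresponding estimated reading) can be sketched as follows. The nearest-neighbour estimator here is a deliberately simplified stand-in for an auto-associative model; actual empirical models of this kind typically form similarity-weighted combinations of reference states, and all names below are illustrative assumptions:

```python
def estimate(reference_states, observation):
    """Toy auto-associative estimate: return the stored reference state
    nearest (by squared Euclidean distance) to the observation."""
    return min(reference_states,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(s, observation)))

def residuals(observation, estimated):
    """Residuals in the sense of claim 5: observed minus estimated values.
    Persistently large residuals suggest apparent anomalous operation."""
    return [o - e for o, e in zip(observation, estimated)]
```

For example, an observation of (1.1, 2.2) against reference states (1.0, 2.0) and (5.0, 6.0) yields the first state as its estimate and small residuals of roughly 0.1 and 0.2.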
PCT/US2004/022235 2003-07-09 2004-07-09 Information processing apparatus and method WO2005008420A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US48567503P 2003-07-09 2003-07-09
US60/485,675 2003-07-09

Publications (2)

Publication Number Publication Date
WO2005008420A2 true WO2005008420A2 (en) 2005-01-27
WO2005008420A3 WO2005008420A3 (en) 2005-05-12

Family

ID=34079153

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/022235 WO2005008420A2 (en) 2003-07-09 2004-07-09 Information processing apparatus and method

Country Status (1)

Country Link
WO (1) WO2005008420A2 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6246972B1 (en) * 1996-08-23 2001-06-12 Aspen Technology, Inc. Analyzer for modeling and optimizing maintenance operations
US6289330B1 (en) * 1994-11-02 2001-09-11 Netuitive, Inc. Concurrent learning and performance information processing system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JAW L.C ET AL: 'Anomaly Detection and Reasoning with Embedded Physical Model' IEEE 2002 AEROSPACE CONFERENCE vol. 6, March 2002, pages 3073 - 3081, XP010604876 *
SMITH J. ET AL: 'Using Data Mining for Plant Maintenance' PLANT ENGINEERING vol. 56, no. 12, December 2002, pages 26 - 30 *
TARASSENKO L. ET AL: 'Novelty Detection in Jet Engines' IEEE COLLOQUIUM ON CONDITION MONITORING: MACHINERY, EXTERNAL STRUCTURES AND HEALTH April 1999, pages 4/1 - 4/5, XP006500577 *
WEGERICH S.H. ET AL: 'Nonparametric Modeling of Vibration Signal Features for Equipment Health Monitoring' 2003 IEEE AEROSPACE CONFERENCE vol. 7, March 2003, pages 3113 - 3121 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7275018B2 (en) 2006-02-10 2007-09-25 Alstom Technology Ltd. Method of condition monitoring
WO2008106071A1 (en) 2007-02-27 2008-09-04 Exxonmobil Research And Engineering Company Method and system of using inferential measurements for abnormal event detection in continuous industrial processes
EP2132605A1 (en) * 2007-02-27 2009-12-16 ExxonMobil Research and Engineering Company Method and system of using inferential measurements for abnormal event detection in continuous industrial processes
EP2132605A4 (en) * 2007-02-27 2011-05-18 Exxonmobil Res & Eng Co Method and system of using inferential measurements for abnormal event detection in continuous industrial processes
US8285513B2 (en) 2007-02-27 2012-10-09 Exxonmobil Research And Engineering Company Method and system of using inferential measurements for abnormal event detection in continuous industrial processes
GB2500388A (en) * 2012-03-19 2013-09-25 Ge Aviat Systems Ltd Fault detection in system monitoring
US9116965B2 (en) 2012-03-19 2015-08-25 Ge Aviation Systems Limited Method and apparatus for monitoring performance characteristics of a system and identifying faults
EP2642362A3 (en) * 2012-03-19 2016-03-09 GE Aviation Systems Limited System monitoring
GB2500388B (en) * 2012-03-19 2019-07-31 Ge Aviat Systems Ltd System monitoring

Also Published As

Publication number Publication date
WO2005008420A3 (en) 2005-05-12

Similar Documents

Publication Publication Date Title
US7496798B2 (en) Data-centric monitoring method
JP5284503B2 (en) Diagnostic system and method for predictive condition monitoring
JP5179086B2 (en) Industrial process monitoring method and monitoring system
AU2012284497B2 (en) Monitoring system using kernel regression modeling with pattern sequences
US5586066A (en) Surveillance of industrial processes with correlated parameters
JP4741172B2 (en) Adaptive modeling of change states in predictive state monitoring
EP3444724B1 (en) Method and system for health monitoring and fault signature identification
CA3023957C (en) Controlling a gas turbine considering a sensor failure
US20140365179A1 (en) Method and Apparatus for Detecting and Identifying Faults in a Process
Lahdhiri et al. Supervised process monitoring and fault diagnosis based on machine learning methods
KR20140041767A (en) Monitoring method using kernel regression modeling with pattern sequences
AU2002246994A1 (en) Diagnostic systems and methods for predictive condition monitoring
JP7068246B2 (en) Abnormality judgment device and abnormality judgment method
Ceschini et al. A Comprehensive Approach for Detection, Classification and Integrated Diagnostics of Gas Turbine Sensors (DCIDS)
EP3866132A1 (en) Power plant early warning device and method employing multiple prediction model
KR20200005202A (en) System and method for fault detection of equipment based on machine learning
Vishnu et al. Recurrent neural networks for online remaining useful life estimation in ion mill etching system
JP2021179740A (en) Monitoring device, monitoring method, program, and model training device
WO2005008420A2 (en) Information processing apparatus and method
Mina et al. Fault detection for large scale systems using dynamic principal components analysis with adaptation
Mahyari et al. Domain adaptation for robot predictive maintenance systems
Mahyari Robust predictive maintenance for robotics via unsupervised transfer learning
CN116802471A (en) Method and system for comprehensively diagnosing defects of rotary machine
Jelali et al. Statistical process control
Virgiawan et al. Big data & early alert (anomaly) detection in Paiton coal fired power plant

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase