INFORMATION PROCESSING APPARATUS AND METHOD
Technical Field

This invention relates generally to information processing and more particularly to facilitating the processing of information regarding performance of a monitored apparatus using multiple evaluation models against which performance of the monitored apparatus is compared.
Background

Various techniques are known to permit the monitoring of a given apparatus. Some techniques permit off-line analysis of a given apparatus to support, for example, identification of a particular cause of malfunction. Other techniques permit on-line monitoring to detect, for example, out-of-specification performance that may require the attention of service personnel. As to the latter, it is known to utilize an evaluation model against which present performance of a given apparatus can be compared to thereby ascertain the health of the apparatus. Such techniques have proven to be both robust and powerful. In particular, proper selection and use of an evaluation model can permit reliable detection of apparatus conditions that likely signal a future problem even while overall performance of the apparatus presently remains within acceptable guidelines. By identifying such conditions in advance of an actual breakdown or other diminution of performance, early corrective actions can be taken with a lessened impact on overall apparatus performance.

Because many apparatus have more than one operating mode, in many settings a single evaluation model may be inadequate to permit sufficiently accurate analysis in this regard. For example, the overall performance of a jet engine operating during a highly loaded state can be considerably different from that of a jet engine that operates during a lightly loaded state. To meet this concern, multiple evaluation models are sometimes provided. A given evaluation model is then selected and used during a corresponding apparatus mode of operation. Use of multiple evaluation models has proven effective to extend the useful operating range over which performance of the corresponding apparatus can be successfully monitored to facilitate accurate condition determination.

Unfortunately, however, existing systems tend to use the resultant evaluation data in
a relatively discrete and non-contiguous fashion. In particular, the evaluation data that results through use of a first evaluation model is treated in essentially all respects as being discrete from and different from evaluation data that results through use of a second evaluation model. For example, when providing a display of performance evaluation data over time to a user, the data that corresponds to the second evaluation model will be presented in a discontinuous and discrete manner with respect to the data from the first evaluation model. As another example, and in the same manner, fault prediction programming that can process a series of evaluation data to extract meaning from its temporal content will nevertheless not bridge the discontinuity that occurs when the monitoring process switches from one evaluation model to another. In effect, the analysis process (and the resultant corresponding display associated therewith) begins anew with each new evaluation model.

Such discrete and non-contiguous treatment of the resultant evaluation data is not without problems. At a minimum, such gaps and discrete handling can be bothersome to an operator, who must make independent efforts to stitch together the meaning of the multiple data outputs as are provided by multiple evaluation models. Worse, the power of the analysis engine to extract useful information from monitored performance trends can be compromised or even lost when the ability to track a trend breaks upon switching to a new evaluation model.
Brief Description of the Drawings

The above needs are at least partially met through provision of the information processing apparatus and method described in the following detailed description, particularly when studied in conjunction with the drawings, wherein:

FIG. 1 comprises a block diagram as configured in accordance with an embodiment of the invention;

FIG. 2 comprises a high-level flow diagram as configured in accordance with an embodiment of the invention;

FIG. 3 comprises an illustrative schematic of a display as configured in accordance with an embodiment of the invention;

FIG. 4 comprises a flow diagram as configured in accordance with various embodiments of the invention; and
FIG. 5 comprises a flow diagram as configured in accordance with another embodiment of the invention.

Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are typically not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention.
Detailed Description

Generally speaking, pursuant to these various embodiments, a process provides for a plurality of substantially temporally-coincident sensor readings that each correspond to a sensed condition of a monitored apparatus. A first evaluation model (selected in accord with a present or anticipated operating mode of the monitored apparatus) is used to process this plurality of sensor readings to provide resultant first evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus. Upon later selecting a different evaluation model to continue processing then-current sensor readings (wherein the selection of a different evaluation model is based, for example, upon a changed operating mode of the monitored apparatus), the resultant evaluation data is provided substantially contiguous with provision of the resultant evaluation data from use of the first evaluation model. This interlacing of data permits, in part, the new resultant evaluation data to relate, at least in part, to continued and contiguous apparent expected operation of the monitored apparatus. In effect, the data from the two independent processes is united into a common data stream that then appears holistic to subsequent downstream processing actions and steps.

Depending upon the embodiment, such evaluation models can be selected from amongst a considerably larger number of candidate evaluation models. Since there is no loss of continuity to be experienced upon switching to a new evaluation model, an operator is in fact considerably more free to utilize many more evaluation models than might ordinarily be contemplated in order to permit more accurate
matching of a given evaluation model to a given set of operating conditions and present operating mode of the monitored apparatus. If desired, both continuous and contiguous handling of resultant evaluation data and a more traditional discrete and non-contiguous handling of resultant evaluation data can be selectively provided. So configured, a given operator can select between such alternative modes of operation to suit the needs of a given context or setting.

Referring now to the drawings, and in particular to FIG. 1, a system 10 appropriate to support the teachings set forth below serves in conjunction with a monitored apparatus 11 having a plurality of sensors (not shown) deployed therewith. The monitored apparatus 11 can be essentially any apparatus or process now known or hereafter developed. The sensors can be many and varied, including but not limited to thermocouples, accelerometers, pressure transducers, flow meters, tachometers, and the like. In a preferred approach, there will usually be a number of sensors that monitor different conditions with respect to the monitored apparatus 11, as a broad spectrum of sensor data tends to provide richer material for the purposes of detecting the pending development of fault conditions. The particular sensors employed (and the conditions to be monitored) are selected to suit the circumstances of a given apparatus and the needs of a given context and application. The selection and deployment of sensors in this context is understood in the art and hence further detail will not be provided here for the sake of brevity and the preservation of focus.

An apparatus-operation sensor input 12 operably couples to the sensors that monitor the monitored apparatus 11 to receive those sensor readings. The sensor input 12 can provide additional support as may be appropriate to a given sensor or process. For example, the sensor input 12 can source operating power to sensors that require an external power source.
As another example, the sensor input 12 can provide appropriate pre-processing of the sensor signals. Such pre-processing can include any of filtering (or other noise reduction processing), weighting, frequency shifting, digitizing, storing-and-forwarding, and so forth. Again, such sensor inputs are known in the art and further detail need not be provided here.

The output of the apparatus-operation sensor input 12 couples to a performance evaluator 13. The latter can comprise an integrated engine or can be partially or wholly distributed using an implementing architecture of choice. In a
preferred embodiment, the performance evaluator 13 will comprise a programmable platform that facilitates a corresponding performance evaluation engine such as, for example, the eCM runtime engine as offered by SmartSignal Corporation of Lisle, Illinois U.S.A. In accordance with present practice, and pursuant to a preferred embodiment, the performance evaluator 13 serves three primary purposes: - the use of an evaluation model to process real-time sensor information to generate expected sensor values for each extracted data sample and to then generate a residual vector element for each sensor that comprises the difference between the estimated sensor value and the corresponding real-time value (when using accurate models and when the monitored apparatus is behaving normally these residuals tend towards a relatively small range Gaussian distribution around a value of zero); - comparing the residual (or a series of residuals) (for example, against one or more thresholds or against a statistical hypothesis test) to identify when sufficient deviation occurs to warrant generating an alert condition for a given sensor (in a preferred embodiment this process will include determining whether such detected deviations are of statistical significance using, for example, the SmartSignal Corporation Active Decision Index algorithm as is included with SmartSignal's eCM Runtime Engine noted above); and - using the sensor alerts and other sensor information as may be relevant to a given application to determine when the alerts are significant enough to merit being identified as an incident and being brought specifically to the attention of an operator (with incident generation being based upon various rules as is otherwise generally well understood in the art). Performance evaluation can be based upon other approaches and criteria as well, with various methodologies being in use and/or otherwise suggested by those skilled in the art. 
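The three evaluator functions enumerated above can be sketched in Python. This is a minimal illustration only; the function names and the simple n-sigma deviation test are stand-ins for the engine's actual decision logic (such as the Active Decision Index algorithm mentioned above), not its real API:

```python
import numpy as np

def residuals(actual, estimated):
    """Residual vector for one observation: estimated sensor values
    minus the corresponding real-time values. With an accurate model
    and a normally behaving apparatus, these cluster in a narrow
    Gaussian band around zero."""
    return np.asarray(estimated, dtype=float) - np.asarray(actual, dtype=float)

def sensor_alerts(residual_series, n_sigma=3.0):
    """Flag sensors whose latest residual deviates from that sensor's
    residual history by more than n_sigma standard deviations; a
    simple stand-in for a statistical hypothesis test."""
    r = np.asarray(residual_series, dtype=float)   # shape: (time, sensors)
    mu = r[:-1].mean(axis=0)
    sigma = r[:-1].std(axis=0)
    return np.abs(r[-1] - mu) > n_sigma * sigma
```

Incident generation would then apply application-specific rules (for example, persistence or multi-sensor coincidence) over the stream of per-sensor alerts.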
It will be understood that the teachings set forth herein are generally applicable to such alternative approaches as well as to the illustrative performance evaluator platform described above. Pursuant to a preferred embodiment, the performance evaluator 13 utilizes, at any given moment, an evaluation model selected from amongst a plurality of monitored apparatus evaluation models 14. In general, at least some of these evaluation models 14 will be designed and intended to model the expected behavior of the monitored apparatus 11 during a particular mode of operation. For example,
one of the evaluation models can serve to model the behavior of the monitored apparatus 11 when the monitored apparatus 11 operates under a first set of operating conditions (such as when fully loaded) and another of the evaluation models can serve to model the behavior of the monitored apparatus 11 when the monitored apparatus 11 operates under a different set of operating conditions (such as when substantially less loaded). The number of evaluation models provided for use with a given monitored apparatus 11 will vary with the apparatus itself and with the needs of the specific application. Where the behavior of the monitored apparatus varies widely over its range of operating conditions and/or where tight correspondence between the evaluation model and the performance of the monitored apparatus is critical, a greater number of evaluation models may be appropriate.

Such evaluation models can be either auto-associative models or inferential models. These characterizations relate generally to the manner by which the performance evaluator 13 utilizes the evaluation model to calculate a resultant residual value. An auto-associative model-based process uses every sensor input to calculate the estimated state of every sensor in the model. An inferential model-based process uses only a portion of the sensor inputs to generate the estimated values for all the sensors in the model. The inferential approach serves well when a particular sensor value appears unduly noisy or the corresponding sensor is known to be prone to failure, or when the dynamics of the equipment are understood and it is desirable to monitor only certain sensors as they are driven by parameters for equipment health purposes.

The evaluation models can be initially provided through various means. For example, the evaluation models can be sourced by the manufacturer of a given apparatus to be monitored.
As another example, one or more evaluation models can be provided by the supplier of the performance evaluator 13. As yet another example, the performance evaluator 13 can itself be configured to permit generation of one or more evaluation models by the operator to permit highly customized and potentially more accurate and operationally relevant evaluation models. Evaluation models in general and their manner of creation and usage are generally well understood in the art. Use of many such evaluation models may benefit from the teachings set forth herein. In a preferred approach, however,
particular approaches to modeling may provide superior results when effecting mode-to-mode contiguous and interleaved data streams. In one preferred approach, a reference set of observations is formed into a matrix, designated H for purposes hereof, with each column of the matrix typically representing an observation, and each row representing values from a single sensor or measurement. The ordering of the columns (i.e., observations) in the matrix is not important, and there is no element of causality or time progression inherent in the modeling method. The ordering of the rows is also not important, provided that the rows are maintained in their correspondence to sensors throughout the modeling process and readings from only one sensor appear on a given row. This step can occur as part of the setup of the modeling system, and is not necessarily repeated during online operation.

After assembling a sufficiently characterizing set H of reference data observations for the modeled system, modeling can be carried out. Such modeling results in the generation of estimates in response to acquiring or inputting a real-time or current or test observation, which estimates can be estimates of sensors or non-sensor parameters of the modeled system, or estimates of classifications or qualifications distinctive of the state of the system. These estimates can be used for a variety of useful modeling purposes as described below. In this preferred approach, this generation of estimates comprises two major steps after input acquisition. In a first step, the current observation is compared to the reference data H to determine a subset of reference observations from H having a particular relationship or affinity with the current observation, with which to constitute a smaller matrix, designated D for purposes hereof. In the second step, the D matrix is used to compute an estimate of at least one output parameter characteristic of the modeled system based on the current observation.
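The first of these two steps (choosing the subset D from H by affinity with the current observation) can be sketched as follows. This is an illustrative reduction assuming the range-based elemental similarity described below; the function and variable names are assumptions, not drawn from the source:

```python
import numpy as np

def select_d(h, y_in, k=5):
    """From reference matrix H (rows = sensors, columns = observations),
    pick the k reference observations most similar to the current
    observation y_in and return them as the smaller matrix D."""
    h = np.asarray(h, dtype=float)
    y = np.asarray(y_in, dtype=float)
    rng = h.max(axis=1) - h.min(axis=1)   # expected range per sensor
    rng[rng == 0] = 1.0                   # guard against constant sensors
    # elemental similarity 1 - |difference| / range, averaged per column
    sims = (1.0 - np.abs(h - y[:, None]) / rng[:, None]).mean(axis=0)
    return h[:, np.argsort(sims)[-k:]]
```

The second step then computes the estimate from D as described in the text that follows.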
Accordingly, it may be understood that the model estimate Y_est is a function of the current input observation Y_in and the current matrix D, derived from H:

$$Y_{est} = D \cdot W \qquad \text{(Equation 1)}$$

$$W = \hat{W} \Big/ \sum_{j=1}^{N} \hat{W}(j) \qquad \text{(Equation 2)}$$

$$\hat{W} = \left(D^{T} \otimes D\right)^{-1} \cdot \left(D^{T} \otimes Y_{in}\right) \qquad \text{(Equation 3)}$$

where the vector Y_est of estimated values for the sensors is equal to the contributions from each of the snapshots of contemporaneous sensor values arranged to comprise matrix D. These contributions are determined by weight vector W. The multiplication operation is the standard matrix/vector multiplication operator, or inner product. The similarity operator is presented as the symbol ⊗. (Note: this should not be confused with the normal meaning of this symbol; here ⊗ denotes a "similarity" operation.) The similarity operator, ⊗, works much as regular matrix multiplication operations, on a row-to-column basis, and results in a matrix having as many rows as the first operand and as many columns as the second operand. The similarity operation yields a scalar value for each combination of a row from the first operand and column from the second operand. One similarity operation involves taking the ratio of corresponding elements of a row vector from the first operand and a column vector of the second operand, inverting ratios greater than one, and averaging all the ratios, which for normalized and positive elements always yields a row/column similarity value between zero (very different) and one (identical). Hence, if the values are identical, the similarity is equal to one, and if the values are grossly unequal, the similarity approaches zero. Another example of a similarity operator that can be used determines an elemental similarity between two corresponding elements of two observation vectors or snapshots, by subtracting from one a quantity with the absolute difference of the
two elements in the numerator, and the expected range for the elements in the denominator. The expected range can be determined, for example, by the difference of the maximum and minimum values for that element to be found across all the data of the reference data H. The vector similarity is then determined by averaging the elemental similarities. In yet another similarity operator that can be used, the vector similarity of two observation vectors is equal to the inverse of the quantity of one plus the magnitude of the Euclidean distance between the two vectors in n-dimensional space, where n is the number of elements in each observation, that is, the number of sensors being observed. Thus, the similarity reaches a highest value of one when the vectors are identical and are separated by zero distance, and diminishes as the vectors become increasingly distant (different). Other similarity operators are known or may become known to those skilled in the art, and may be usefully employed as described herein. The recitation of the above operators is exemplary and not meant to limit the scope of the claimed invention. In general, the following guidelines help to define a similarity operator but are not meant to limit the scope of the invention:

1. Similarity is a scalar range, bounded at each end.

2. The similarity of two identical inputs is the value of one of the bounded ends.

3. The absolute value of the similarity increases as the two inputs approach being identical.

Accordingly, for example, an effective similarity operator for use in the present invention can generate a similarity of ten (10) when the inputs are identical, and a similarity that diminishes toward zero as the inputs become more different. Alternatively, a bias or translation can be used, so that the similarity is 12 for identical inputs, and diminishes toward 2 as the inputs become more different.
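Two of the similarity operators just described (the ratio-based operator and the Euclidean-distance operator) can be rendered directly in Python; an illustrative sketch, with names of my own choosing rather than the source's:

```python
import numpy as np

def ratio_similarity(u, v):
    """Element-wise ratios with ratios greater than one inverted, then
    averaged; for normalized, positive elements the result lies between
    zero (very different) and one (identical)."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    r = u / v
    r[r > 1.0] = 1.0 / r[r > 1.0]
    return float(r.mean())

def euclidean_similarity(u, v):
    """Inverse of one plus the Euclidean distance between the vectors;
    equals one for identical vectors and diminishes toward zero as the
    vectors grow apart."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return 1.0 / (1.0 + float(np.linalg.norm(u - v)))
```

Both satisfy the guidelines above: each yields a bounded scalar whose extreme value of one is attained only for identical inputs.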
Further, a scaling can be used, so that the similarity is 100 for identical inputs, and diminishes toward zero with increasing difference. Moreover, the scaling factor can also be a negative number, so that the similarity for identical inputs is -100 and approaches zero from the negative side with increasing difference of the inputs. The similarity can be rendered for the elements of two vectors being compared, and summed or otherwise statistically combined to yield an overall vector-to-vector
similarity, or the similarity operator can operate on the vectors themselves (as in Euclidean distance). As noted earlier, these teachings are compatible with processes that provide for monitoring variables in an autoassociative mode as well as processes that utilize an inferential mode. Generally stated, in the autoassociative mode model estimates are made of variables that also comprise inputs to the model, while in the inferential mode model estimates are made of variables that are not present in the input to the model. In the inferential mode, Equation 1 above preferably becomes:

$$Y_{est} = D_{out} \cdot W \qquad \text{(Equation 4)}$$

and Equation 3 above becomes:

$$\hat{W} = \left(D_{in}^{T} \otimes D_{in}\right)^{-1} \cdot \left(D_{in}^{T} \otimes Y_{in}\right) \qquad \text{(Equation 5)}$$

where the D matrix has been separated into two matrices D_in and D_out, according to which rows are inputs and which rows are outputs, but column (observation) correspondence is maintained.

Another example of an empirical modeling method appropriate for use in the current invention is kernel regression, or kernel smoothing. A kernel regression can be used to generate an estimate based on a current observation in much the same way as the similarity-based model, which can then be used to generate a residual as detailed elsewhere herein. Accordingly, the following Nadaraya-Watson estimator can be used:

$$\hat{y} = \frac{\sum_{i=1}^{N} K_h(X - X_i)\, y_i}{\sum_{i=1}^{N} K_h(X - X_i)} \qquad \text{(Equation 6)}$$
where in this case a single scalar inferred parameter ŷ is estimated as a sum of weighted exemplar values y_i from exemplar data, where the weight is determined by a kernel K of width h acting on the difference between the current observation X and the exemplar observations X_i corresponding to the y_i from the exemplar data. The independent variables X_i can be scalars or vectors. Alternatively, the estimate can be a vector, instead of a scalar:

$$\hat{Y} = \frac{\sum_{i=1}^{N} K_h(X - X_i)\, Y_i}{\sum_{i=1}^{N} K_h(X - X_i)} \qquad \text{(Equation 7)}$$

Here, the scalar kernel multiplies the vector Y_i to yield the estimated vector. A wide variety of kernels are known in the art and may be used. One well-known kernel, by way of example, is the Epanechnikov kernel:

$$K_h(u) = \begin{cases} \dfrac{3}{4}\left(1 - \dfrac{u^2}{h^2}\right), & |u| \le h \\ 0, & |u| > h \end{cases} \qquad \text{(Equation 8)}$$

where h is the bandwidth of the kernel, a tuning parameter, and u can be obtained from the difference between the current observation and the exemplar observations as in Equation 6. Another kernel of the countless kernels that can be used in remote monitoring according to the invention is the common Gaussian kernel:

$$K_h(u) = \frac{1}{\sqrt{2\pi h^2}}\, e^{-u^2 / 2h^2} \qquad \text{(Equation 9)}$$
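The Nadaraya-Watson estimator with the Gaussian kernel described above can be sketched as follows. This is an illustrative reduction: the function names and bandwidth default are assumptions, and the kernel's normalizing constant is omitted because it cancels in the ratio:

```python
import numpy as np

def gaussian_kernel(u, h):
    """Gaussian kernel of bandwidth h (normalizing constant omitted,
    as it cancels in the Nadaraya-Watson ratio)."""
    return np.exp(-0.5 * (np.asarray(u, dtype=float) / h) ** 2)

def nadaraya_watson(x, exemplars_x, exemplars_y, h=1.0):
    """Kernel-weighted average of exemplar outputs: weights come from
    the kernel acting on the distance between the current observation
    x and each exemplar observation."""
    X = np.asarray(exemplars_x, dtype=float)
    if X.ndim > 1:
        d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
    else:
        d = np.abs(X - float(x))
    w = gaussian_kernel(d, h)
    return w @ np.asarray(exemplars_y, dtype=float) / w.sum()
```

When the exemplar outputs are vectors rather than scalars, the same weighted average yields the vector estimate described above.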
As noted, other manners and styles of evaluation model can potentially be utilized in conjunction with these teachings. The particular style and configuration of evaluation models as are set forth above, however, work particularly well when interleaving the resultant data from multiple evaluation models is undertaken as described herein. In particular, such evaluation model architecture serves well to provide empirical output that, even when comprising the unified output of multiple such models, nevertheless readily facilitates presentation and/or trending/statistical testing analysis.

If desired, selection of a given evaluation model to be used by the performance evaluator 13 can be governed in a manual fashion by an operator. In a preferred embodiment, however, an evaluation model selector 15 that detects and responds to the present operating mode of the monitored apparatus 11 serves to automatically select a particular evaluation model for use by the performance
evaluator 13. In the alternative, if desired, the evaluation model selector 15 can make a particular selection based upon a subsequently expected operating state of the monitored apparatus. So configured, the evaluation model selector 15 can identify a particular evaluation model prior to its actual needed deployment. This, in turn, can foster a potentially better match between the actual operating mode of the monitored apparatus 11 at any given point in time and the particular evaluation model then being used to assess the performance and behavior of the monitored apparatus 11.

In another embodiment, if desired, the evaluation model selector 15 can select between auto-associative and inferential evaluation models wherein both models are intended to model a common operating mode of the monitored apparatus 11. Such a selection can be based, for example, upon a determination that a given sensor has become faulty. In such a case, although the operating mode of the monitored apparatus 11 has not changed, it may be appropriate to switch from an auto-associative evaluation model to an inferential evaluation model.

Pursuant to a preferred embodiment, the resultant monitored apparatus performance data as is provided as an output of the performance evaluator 13 remains substantially contiguous regardless of changes regarding which of the evaluation models is presently being used by the performance evaluator 13. In particular, and as the performance evaluator 13 may switch back and forth between two or more evaluation models, the resultant monitored apparatus performance data is substantially interleaved regardless of changes to evaluation model usage. This output information is then used in a manner appropriate to the needs of a given application. For example, the information may be stored in a database 17 and/or provided to a human interface 18 such as a display or printer output as is well understood in the art.
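The contiguous, interleaved character of this output can be illustrated with a small sketch; the record format shown is hypothetical, chosen only to make the point that downstream consumers see a single time-ordered series regardless of which model produced each sample:

```python
def interleave(records):
    """Merge per-model evaluation records into one contiguous,
    time-ordered stream.  Each record is a (timestamp, model_id,
    evaluation_data) tuple; model switches leave no gap or restart
    in the resulting series."""
    return sorted(records, key=lambda record: record[0])
```

A display or trending routine iterating over the merged list treats it as a single entity, rather than as one segment per model.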
Pursuant to these preferred embodiments, however, the contiguous and interleaved information will serve to present the resultant information as a whole rather than as segregated discrete temporal segments. Subsequent display and processing will benefit accordingly. Referring now to FIG. 2, the above described embodiments and/or any other suitable platform serves generally to facilitate a process 20 wherein a plurality of substantially temporally-coincident sensor readings are provided 21 and where multiple evaluation models are used 22 in a discrete temporally segregated fashion
to process the plurality of substantially temporally-coincident sensor readings to provide resultant evaluation data that relates, at least in part, to apparent expected operation of the monitored apparatus. As noted earlier, specific evaluation models are used in this discrete temporally segregated fashion with respect to one another as a function, at least in part, of a present (or anticipated) operational state of the monitored apparatus 11. This process 20 then facilitates processing 23 this resultant evaluation data in a temporally interleaved manner to thereby facilitate detection of anomalous circumstances regarding the monitored apparatus 11. For example, the resultant evaluation data can be presented in an interleaved manner on a user-perceivable display. To illustrate, and referring momentarily to FIG. 3, a displayed output graph 30 of information regarding a particular monitored condition appears as a contiguous display of data. In particular, though a first portion of the data 31 corresponds to data determined through use of a first evaluation model (model 1) and a following portion of the data 32 corresponds to data determined through use of a different evaluation model (model 3), the data itself appears as a contiguous entity. This contrasts sharply with prior art practice wherein separate graphs would be utilized to portray the resultant data as corresponds to use of these two models. (It should be understood that the phantom line partitions and the "model" legends on the graph are for purposes of illustrating these demarcations in accord with the above description and would not ordinarily themselves be provided on a display to an operator.) Referring now to FIG. 
4, and to elaborate upon the above generally described process, a method 40 for use in facilitating processing of information regarding performance of a monitored apparatus can begin with provision 41 of a plurality of sensor readings that correspond to various monitored performance parameters and characteristics of the apparatus. In a preferred embodiment, at least some of these sensor readings are different from one another (though redundant sensor readings can be provided, if desired, to protect against the loss of a given sensor). Furthermore, in a preferred approach, these sensor readings have a predetermined substantially temporally-coincident relationship with one another. In perhaps the simplest approach, the readings from the sensors all relate to substantially similar points in time. If desired, and as may be appropriate to a given approach to subsequent processing of the resultant information, the sensor readings can relate to
different moments in time, though the degree of difference will likely need to be constant or at least of a known value to permit a synchronized view of the overall data set. The process 40 then selects 42 a particular evaluation model from amongst a plurality of candidate evaluation models to be used when evaluating the sensor readings. In a preferred embodiment the operating mode of the monitored apparatus influences the selection process. As noted above, this can include, for example, a present operating mode of the monitored apparatus or an expected operating mode (such as an imminent likely operating mode) of the monitored apparatus. If desired, more than one evaluation model can be selected for use at any given moment. So configured, the sensor readings can be processed in parallel in accord with such multiple evaluation models. This would permit, for example, a hot switchover from usage of one evaluation model to another evaluation model upon the occurrence of a predetermined trigger (such as detection of attainment of a particular operation mode of the monitored apparatus). The process 40 then uses 43 the selected evaluation model to process the sensor readings and to provide resultant evaluation data. This resultant evaluation data, of course, relates at least in part to the apparent expected operation of the monitored apparatus. The exact nature of the resultant evaluation data can also vary in accordance with the needs of the downstream processing to be applied. For example, the resultant evaluation data can comprise at least some of the sensor readings themselves, one or more estimated sensor reading values as calculated through usage of the evaluation model, and/or one or more residual values as correspond to a difference between the sensor readings and a corresponding estimated sensor reading, to name a few. 
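The selection-and-use steps 42 and 43, including the parallel "hot switchover" option just mentioned, might be sketched as follows; the class shape, mode names, and model callables are all illustrative assumptions rather than any actual implementation:

```python
class ModelSelector:
    """Track the active evaluation model and switch it the moment a
    different registered operating mode is detected (a hot switchover),
    so that each set of readings is evaluated by the model matched to
    the apparatus's present mode."""

    def __init__(self, models, initial_mode):
        self.models = models      # maps mode name -> model callable
        self.mode = initial_mode

    def evaluate(self, readings, detected_mode):
        # switch models when a registered trigger mode is attained
        if detected_mode != self.mode and detected_mode in self.models:
            self.mode = detected_mode
        return self.mode, self.models[self.mode](readings)
```

Running a candidate model in parallel ahead of the trigger would simply mean calling both model callables on each set of readings and discarding the candidate's output until the switch occurs.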
Such resultant evaluation data can then be optionally provided to a database, to a display, or to another processing and/or analysis platform for subsequent consideration. As the monitored apparatus changes its mode of operation during its course of usage, the process 40 determines 44 from time to time whether the operating mode has changed enough to warrant selection of a different evaluation model in substitution of the first selected evaluation model. When true, the process 40 selects 45 a new evaluation model. In a preferred embodiment, again, the process 40 selects
this new evaluation model as a function of the operating mode of the monitored apparatus. The process 40 then uses 46 this new evaluation model to process the sensor readings and to provide resultant evaluation data in accordance with that usage. Pursuant to a preferred approach, the resultant evaluation data as corresponds to the new evaluation model is provided in a manner that is substantially contiguous to provision of the earlier resultant evaluation data as corresponded to the first evaluation model. For example, the resultant evaluation data that corresponds to the first evaluation model is spliced onto the following resultant evaluation data that corresponds to the next subsequent evaluation model. Accordingly, the combined resultant evaluation data from the first and subsequent evaluation model relates, at least in part, to a continued and contiguous view of the continuing and contiguous apparent expected operation of the monitored apparatus. As before, the resultant evaluation data can again be provided to a database, to a display, and/or to a subsequent processing platform. The contiguous and interleaved nature of the resultant evaluation data provides various benefits as noted before, including the opportunity to now facilitate the detection of anomalous performance of the monitored apparatus over a given period of time that bridges a change from one evaluation model to another using a data driven empirical modeling technique. Other advantages include a more intuitive and user-friendly presentation of the evaluation data. Notwithstanding these various advantages of providing the resultant evaluation data in a contiguous and interleaved manner, there may be circumstances when an operator may wish to utilize the non-contiguous processing mode that tends to characterize the prior art. With reference to FIG. 5, an optional process 50 can provide an operator with an opportunity to select 51 from amongst a plurality of data-processing modes in this regard. 
To illustrate, two candidate data-processing modes can be provided, one being a continuous data-processing mode as described above with respect to FIG. 4 and the other being a non-continuous data-processing mode such as characterizes the prior art. The process 50 can then determine 52 which mode the operator has selected. When the operator selects the non-continuous mode, the process 50 can use 53 a corresponding discrete/non-continuous data-processing mode. Similarly, when the operator selects the continuous mode, the process 50 can
use a corresponding continuous data-processing mode 40 as described above. So configured, an operator can have a choice between such modes of operation to thereby permit selection of a particular process mode to suit the specific needs of a given application and context. Those skilled in the art will recognize that a wide variety of modifications, alterations, and combinations can be made with respect to the above described embodiments without departing from the spirit and scope of the invention, and that such modifications, alterations, and combinations are to be viewed as being within the ambit of the inventive concept. For example, in general the continuous presentation of resultant evaluation data posited above will likely tend to work more successfully when using evaluation models that each use essentially identical sensor inputs. To suit the needs of a given application, however, one could switch to an evaluation model that uses at least some different sensor readings by, for example, normalizing the resultant evaluation data of the latter to better correlate to and conform with the resultant evaluation data of the former. As another example, in the illustrative embodiments provided above the resultant evaluation data from a first model shifts abruptly, albeit contiguously and continuously, to the resultant evaluation data of a subsequent model. For many applications this approach will provide superior results. There may be some instances, however, when a more gradual shift from one data stream to the other may be appropriate. For example, within some predetermined bridging period of time it may be appropriate to simply average the data from both models (presuming that both are being processed in parallel and discretely from one another) before shifting completely to only data from the subsequent model.
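Returning to the operator-selectable data-processing modes described with respect to FIG. 5, one minimal way to sketch the continuous versus non-continuous alternatives is shown below; the function names, the selection strings, and the per-mode segmented return shape of the non-continuous path are illustrative assumptions only, not the claimed method.

```python
def continuous_mode(readings, modes, models):
    # Continuous processing: one spliced stream that bridges model changes.
    return [models[m](r) for r, m in zip(readings, modes)]

def non_continuous_mode(readings, modes, models):
    # Prior-art style: a discrete output segment per operating mode.
    segments = {}
    for r, m in zip(readings, modes):
        segments.setdefault(m, []).append(models[m](r))
    return segments

def process(selection, readings, modes, models):
    # Dispatch on the operator's selected data-processing mode.
    if selection == "continuous":
        return continuous_mode(readings, modes, models)
    return non_continuous_mode(readings, modes, models)
```

Offering both paths behind one dispatch point mirrors the choice the operator is given: the same sensor readings and models feed either path, and only the shape of the resultant evaluation data differs.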
Or, as another example, one might first weight or otherwise normalize one or both data streams prior to making a combination of the two for some bridging period of time to thereby effect a more gradual and softer transition from the use of one model to the use of another.
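The averaging and weighting alternatives just described might be sketched as follows; the fixed 50/50 average, the linear weight ramp, and all names are illustrative assumptions chosen for concreteness, and either model could of course be weighted or normalized differently to suit a given application.

```python
def bridged_transition(old_model, new_model, readings, bridge_len):
    """During a bridging window, average the outputs of both models
    (run in parallel); thereafter use only the new model's output."""
    out = []
    for i, r in enumerate(readings):
        if i < bridge_len:
            out.append(0.5 * (old_model(r) + new_model(r)))  # simple average
        else:
            out.append(new_model(r))  # fully switched to the new model
    return out

def ramped_transition(old_model, new_model, readings, bridge_len):
    """Weighted variant: shift weight linearly from the old model to the
    new model over the bridging window for a softer transition."""
    out = []
    for i, r in enumerate(readings):
        w = max(0.0, 1.0 - (i + 1) / (bridge_len + 1))  # old-model weight
        out.append(w * old_model(r) + (1.0 - w) * new_model(r))
    return out
```

In both sketches the two models run in parallel and discretely from one another during the bridge, so the combined stream remains contiguous while the handoff from one model to the other is smoothed.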