Power Plant Centralized Monitoring Using SBM

By Philip Flesch and Chad Stoecker, SmartSignal

Introduction
The advent of several technologies has opened the door to higher plant performance through highly accurate, centralized monitoring. These technologies include data historians, better network/internet connection speeds, faster processor speeds and Similarity Based Modeling (SBM). By combining these technologies, the modern utility can monitor a broad base of sites, components and plants from a central location.

Higher Plant Performance
In the famous Star Trek series, the unsung hero, the ship's computer, often notifies the crew of impending danger. Lines like "Core Meltdown in Thirty Seconds" are familiar to those who watch the series. Wouldn't any engineer or manager responsible for equipment love to have a computer like that, particularly where the reliability of that equipment is important?

While that ship's computer is not yet a reality, the elements of that scenario are now achievable because of advances in technology. The computer, or more accurately the software running on it with input from remote devices, can provide early warning by identifying signals that deviate from their historical pattern. In Star Trek the computer warned of dire emergencies. In real life, deviations from pattern may indicate shifts in efficiency, erratic control circuits, mispositioned manual controls, increasing mechanical wear and the like. In summary, the ability to detect deviations from normal performance highlights any behavior other than the "best" behavior, and this allows the analyst to identify a wide range of performance issues. Detecting and addressing these issues results in higher plant performance.

This paper will discuss how a specific technology, Similarity Based Modeling, provides this detection capability.

The Technical Elements – Input and Flow
To create a monitoring system, the information must flow to the location where it is analyzed. In a simplified flow diagram, all of the information flows directly from the instruments on the machines into the software, as shown below.

This figure may represent the ultimate goal of this technology. However, monitoring fleets of assets creates a scaling and information-flow problem. For example, consider a "system" that includes anywhere from 4,000 to 16,000 signals. This is a fairly typical size for a fleet of fossil-fueled power plants (at a typical 800-1,000 signals per plant, this is equivalent to roughly 5-20 plants). We will assume that all of the instruments are capable of transmitting their data continuously and successfully to a central computer. Once the data arrives at the central computer it must be processed – modeled, analyzed and stored. In real-time analysis, the functions of computation, analysis and data storage – including overhead for the operating system and related software – cannot keep up with the data stream. In essence, the "flow" does not work. Even assuming that data transmission is instantaneous, it is not physically possible to model, analyze and store the data as fast as it is created. An added limitation is that data collected from individual instruments is not synchronized, which adds a synchronization burden to the analysis.

Therefore the system needs some way to synchronize and reduce the data flow. One solution is introduced in the simplified flow diagram, below, which reflects currently deployed systems.

The new elements in the second figure are data historians. Data historians are databases that collect, synchronize and store equipment information. Introducing them into the system adds both a buffer and a synchronizer. Because the data is synchronized, it is possible to take snapshots of equipment operation. The "analysis" component, Similarity Based Modeling, can then analyze those data snapshots instead of processing every raw data point. By varying the sample rate, the speed of delivery can be matched to the speed at which the data can be processed, as sketched below.
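To make the snapshot-and-poll arrangement concrete, here is a minimal Python sketch. The historian interface (`interpolated_value`) is a hypothetical stand-in, not any particular product's API; the point is the structure: every tag is read at one shared timestamp, and the polling rate throttles delivery to match processing speed.

```python
import time
from datetime import datetime, timezone

POLL_INTERVAL_S = 300  # 5-minute polling rate, typical per this paper

def read_snapshot(historian, tags, timestamp):
    """Ask the historian for every tag's value, interpolated to one shared
    timestamp (the 'snapshot'). `interpolated_value` is a hypothetical
    stand-in for whatever interface a given historian product exposes."""
    return {tag: historian.interpolated_value(tag, timestamp) for tag in tags}

def polling_loop(historian, tags, analyze):
    """Poll, analyze, repeat. The polling rate is the throttle that matches
    data delivery to the speed at which the data can be processed."""
    while True:
        snapshot = read_snapshot(historian, tags, datetime.now(timezone.utc))
        analyze(snapshot)            # model, analyze, store the synchronized sample
        time.sleep(POLL_INTERVAL_S)  # must exceed the processing time per cycle
```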

Not to be ignored in this discussion of "flow" are the improvements in Information Technology (IT) that allow all this data to move so easily. Internet and intranet connectivity improvements, standardization of protocols, memory upgrades and faster processors now allow these large amounts of data to move quickly and reliably from source to destination. While the technology described in this paper is mathematical, it relies on a computer/processor (along with a database, a viewer and other software) to make it work.

The bottom line is that the system described above, coupled with advances in data collection, transmission and processing, creates a nearly real-time monitoring solution. The table below provides examples of fleet sizes and "typical" processing rates.

The table shows that data can typically be collected, analyzed and stored in under five minutes. The data collection rate, or polling rate, can be tuned to this typical cycle time, allowing time for computer and database overhead activities. In current applications, a polling rate of 5 or 10 minutes is common.

The result is that analyzed data is available in nearly real time. Every five or ten minutes the data historians are polled. Less than five minutes later that data has been analyzed and stored, and the results are available.

An important point, worthy of restatement, is that the data is analyzed. Thanks to the same IT improvements noted above, all of the raw data is available to an analyst in real time. The benefit of analyzed data is that only the signals performing outside the normal (or trained) range are highlighted for the analyst, and these represent a small fraction of the total number of signals monitored. The data is therefore available in nearly real time and has been reduced to the important points. The analyst can assume that signals following pattern are not problematic, and an exception-based monitoring system results: signals performing within pattern are "normal" and do not draw the analyst's attention, while signals performing outside pattern are highlighted. The "needles in the haystack," the problematic signals, are easily found.
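A minimal sketch of that exception logic, assuming each signal has a residual (actual minus estimate) and an empirical threshold band from training; the tag names and bands here are illustrative only:

```python
def exceptions(residuals, thresholds):
    """Return only the signals whose residual falls outside its trained band.

    residuals:  {tag: actual - estimate} for one snapshot
    thresholds: {tag: (low, high)} empirical limits from historical data
    """
    return {tag: r for tag, r in residuals.items()
            if not thresholds[tag][0] <= r <= thresholds[tag][1]}

# Of thousands of monitored signals, only the out-of-pattern ones surface:
alerts = exceptions(
    {"FAN_AMPS": 0.4, "INLET_TEMP": 5.1, "INLET_PRES": -0.2},
    {"FAN_AMPS": (-1, 1), "INLET_TEMP": (-4, 4), "INLET_PRES": (-1, 1)},
)
print(alerts)  # {'INLET_TEMP': 5.1} -- the needle in the haystack
```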

Centralized Monitoring
When the data-reduction capability described above is available in near real time, the potential for a centralized monitoring function is realized. For the purposes of this paper, "centralized monitoring" is defined as monitoring conducted by a consistent group of individuals across more than one geographically separated site.

An astute reader will note that centralized monitoring has probably been practiced, in some fashion, throughout history. Governments, for example, monitor activities and budgets across geographically separated sites. The military performs centralized monitoring. What is different here?

The difference lies in adjectives not included in the definition. By highlighting only the signals deviating from pattern, the monitoring is efficient: the analyst no longer has to search all signals looking for visible problems. Efficiency means that either the scope of the analyst's monitoring increases or the manpower required is reduced. Because the analysis is performed routinely and continuously (every 5-10 minutes for the typical power application), it is near real-time. Because algorithms identify pattern changes earlier than the human eye can typically detect (discussed further below), the analysis is highly accurate. And because specific historical data from the equipment or system is used to "train" the models, reducing the need for a priori knowledge, the analysis is "smart" to a degree.

The result is that the definition of centralized monitoring using SBM should be modified to read "efficient, accurate, near real-time monitoring conducted by a consistent group of individuals, who do not have to be domain experts, across more than one geographically separated site."

When these factors are incorporated into a business process, the results can be quantified in meaningful terms. For example, one could say that "an effective monitoring system more accurate than domain expertise can be maintained by two mid-level engineers for most of the assets in a fleet of thirty coal plants." The efficiency improvements might therefore be used to justify a 24/7 monitoring center, or fewer analysts monitoring more units. Several US utilities have established centralized monitoring centers.

The Accuracy Component – SBM
The modeling technique described above is Similarity Based Modeling (SBM). In general, SBM models a group of related signals by analyzing historical data from those signals to identify their historical pattern of behavior. The patterns identified are then put on-line and used to analyze each sample of data collected. From the pattern, an estimate of each signal's behavior is generated and compared to the actual value. The difference between the actual and the estimate, the residual, is continuously compared to empirical thresholds. The results of the comparison cause rules to fire that then draw the analyst's attention.
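The sketch below illustrates the general shape of similarity-based estimation in Python with NumPy. The similarity kernel shown is a simple stand-in (SmartSignal's actual operator is proprietary), so this is an assumption-laden illustration rather than the product's algorithm: the current snapshot is compared to a matrix of historical "normal" snapshots, and the estimate is a similarity-weighted combination of those exemplars.

```python
import numpy as np

def similarity(a, b, bandwidth=1.0):
    """Stand-in similarity kernel: 1.0 for identical vectors, decaying with
    distance. The real SBM operator differs, but any such kernel shows the idea."""
    return 1.0 / (1.0 + np.linalg.norm(a - b) / bandwidth)

def sbm_estimate(D, x):
    """Estimate the current snapshot from historical exemplars.

    D: (n_signals, m_exemplars) matrix of historical 'normal' snapshots
    x: (n_signals,) current snapshot
    """
    m = D.shape[1]
    # similarity of the current snapshot to each historical exemplar
    s = np.array([similarity(D[:, j], x) for j in range(m)])
    # similarity of the exemplars to one another
    G = np.array([[similarity(D[:, i], D[:, j]) for j in range(m)]
                  for i in range(m)])
    w = np.linalg.solve(G, s)   # exemplar weights
    w /= w.sum()                # normalize so the estimate interpolates
    return D @ w                # estimate; residual = x - estimate
```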

Similarity Based Modeling results in accuracies that are typically more than adequate for monitoring applications. Because SBM considers a group of signals and compares the behavior of each signal relative to the behavior of all the other related signals, it serves as a constant check for deviation from historical patterns of behavior. In the graphics below, two sensors from an operating air heater are shown.

The first sensor, Inlet Temp (Deg F), demonstrates the behavior and accuracy of the modeling technique. The top chart has two lines: a blue and a green. The blue line is the actual value, and the green line is the estimated value. The lower chart presents the residual, the difference between the two, and represents the accuracy of the modeling technique. In this case the residual ranges from about +4 to -6, with most values falling between +2 and -4.

The second sensor, Inlet Pressure (in-wc), also demonstrates the behavior and accuracy of the modeling technique. The same charts and line colors apply. Note that the residual lies within +2 to -2, with most values falling between +1 and -1. Against the sensor's mean value of roughly 38 in-wc, this represents a rough accuracy of about 1/38 = 2.6%. Results this accurate can be obtained over a wide range of operation for nearly any signal in a power plant.
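The arithmetic behind that figure is straightforward; the values below are taken from the inlet-pressure example above and are illustrative only:

```python
# Residual accuracy as a fraction of the sensor's mean operating value.
residual_band = 1.0   # in-wc, typical residual magnitude from the chart
mean_value = 38.0     # in-wc, the sensor's mean reading
print(f"{residual_band / mean_value:.1%}")  # -> 2.6%
```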

The accuracy of this methodology could be demonstrated repeatedly with similar conclusions. Ultimately, the modeling technique provides accurate indication of behavior that is outside the historical pattern of behavior of a sensor.


Combining Signals in SBM
One significant advantage of SBM is that signals can be modeled together when they are physically linked in behavior. A fan is a good example: the fan amps, speed, inlet vane position, motor bearing temperatures, motor bearing vibrations, fan bearing temperatures, fan bearing vibrations and fan flow can all be modeled together. No regression or other parametric analysis need be done. The parameters all move together, and identifiable patterns of behavior will be present. The engineer or analyst can therefore bring together monitoring disciplines that are often conducted separately, as sketched below.
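As a sketch of what "modeling together" means in practice, the snippet below stacks historical samples of all the fan's related signals into the exemplar matrix consumed by the `sbm_estimate` sketch earlier. The tag names are hypothetical; the point is that one model covers electrical, process, thermal and vibration signals at once, with no regression or parametric fit.

```python
import numpy as np

# Hypothetical tag list for a single fan model: electrical, process,
# thermal and vibration signals modeled together because they move together.
FAN_TAGS = [
    "FAN_AMPS", "FAN_SPEED", "INLET_VANE_POS",
    "MOTOR_BRG_TEMP", "MOTOR_BRG_VIB",
    "FAN_BRG_TEMP", "FAN_BRG_VIB",
    "FAN_FLOW",
]

def build_exemplar_matrix(history, tags=FAN_TAGS):
    """Stack historical snapshots of all related signals into the matrix D
    used by sbm_estimate(). The historical data itself carries the pattern.

    history: {tag: list of historical values, one per snapshot}
    returns: (n_signals, m_exemplars) array
    """
    return np.array([history[t] for t in tags])
```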

Conclusion
The benefits of this modeling technique, coupled with the potential for centralized monitoring, should be clear for improved plant performance. Analysts can monitor more units, and more sensors, relying on the efficiency and accuracy of the modeling technique to point out deviations in pattern that plant staff can use to assess, adjust or fix a failing sensor or component. Either more attention can be paid to early deviations from pattern, ostensibly resulting in fewer equipment failures as problems are addressed before they cause major damage, or a larger number of sensors can be monitored by a smaller group of individuals. In either case, plant performance should improve, whether through maintained consistency and efficiency or through fewer failed components.



Chad Stoecker has been with SmartSignal Corporation for four years. He is currently a Project Manager with the product implementation group. Previously, Chad was the technical lead for SmartSignal's Advanced Applications group, where he worked on the development of SmartSignal's SHIELD Diagnostic product for over two years, and has a patent pending for his contributions. Chad previously worked on mathematical modeling in the hard disk drive industry, where he also received a patent for his work. He holds bachelor's and master's degrees in mechanical engineering from Oklahoma State University.

Phil Flesch has been with SmartSignal Corporation for six years. He started as an Engineering Manager and currently serves as a Sales Manager. In his engineering role, Phil was instrumental in founding the Availability and Performance Center, which remotely monitors equipment for SmartSignal customers. Phil previously worked for several engineering consulting and design firms specializing in Equipment Reliability. He holds a bachelor's degree in Mechanical Engineering and English from the University of Notre Dame, and was commissioned in the United States Navy, where he worked in the Nuclear Propulsion Program.

SmartSignal maximizes worldwide industry equipment performance, availability, and reliability by detecting, diagnosing, and prioritizing equipment and process problems before they become costly failures. Drawing on over 40 patents, SmartSignal delivers specific, relevant, and actionable intelligence that makes people more proactive and productive. SmartSignal serves customers in power generation, oil and gas, mining, aviation, pulp and paper, and other process industries worldwide. SmartSignal and its customers have won over twenty awards for excellence, including the Wall Street Journal Technology Innovation Award.
