The process manufacturing industry faces myriad challenges: improving operational and plant efficiency, maintaining regulatory compliance, coping with skilled labor shortages, and achieving sustainability in a competitive global market. And, though conditions continue to improve, over the last year the COVID-19 pandemic has layered additional pressure on top of these existing challenges.
One way organizations can tackle such difficulties is by optimizing process data management. Managing and democratizing all of the process data, eliminating data silos and extracting value from the production data give various stakeholders access to information that can yield insights. Fortunately, this can be achieved through the use of Industrial Internet of Things (IIoT) and analytics technology, which is key to helping process manufacturers remain competitive and sustainable, now and in the years to come.
For years, process manufacturing companies have been capturing sensor-generated, time-series data and storing it in historians. More recently, additional sensors have been installed to monitor and predict asset performance.
Besides the sensor-generated, time-series data, a significant amount of contextual operational data is gathered in many forms, formats and systems. Batch records, product quality data, shift logbook information and maintenance data are just a few examples. This data, however, is stored in its respective business application, creating data silos that are only accessible to a few people. This results in a great loss of information potential.
The crucial question becomes how to integrate and leverage all of the process and operational data in order to analyze it and make data-driven decisions at scale.
One method to improve business outcomes is to use the available data in combination with human expertise. This requires the democratization of data, where each user has direct access to the relevant captured data, and, at the same time, the democratization of analytics, where each user can extract information from the available data. When both the data and the analytics are put into the hands of the business users, they can make data-driven decisions at scale.
Big data holds a wealth of opportunities to improve operational performance — if the data is easily accessible to the process experts. It is these people who have the production knowledge and experience to understand what the data is showing. They are the ones who know about what is happening with production and who can interpret the data if given the chance. With self-service industrial analytics tools based on pattern recognition and machine learning, these experts can analyze the data, often without the help of data scientists. Thus, they can contribute directly to improving business outcomes and operational performance.
A real-world example can demonstrate these principles. At one facility in the water and wastewater industry, reverse osmosis (RO) was used for demineralizing water. (Reverse osmosis is most frequently used in the context of desalinating seawater to produce fresh drinking water.) At this plant, the RO process used external pressure to push the water through a semi-permeable membrane against its chemical gradient, trapping the mineral contaminants, which were then removed from the water.
Every reverse osmosis process is intrinsically energy intensive. The challenge for this plant was to reduce energy consumption — and the fines incurred for exceeding governmental limits. The team hypothesized that the energy used to clean one liter of water depended on the number of reverse osmosis units running, but the engineering personnel were unable to test this theory using conventional tooling.
Instead, the engineers used an on-demand analytics solution to test their hypothesis. This allowed them to apply pattern-recognition and machine-learning techniques without data modeling. First, they created calculated tags to measure and display the total power consumption of the reverse osmosis units and the total flow of treated water, which let them analyze the relationship between these two variables. Second, they selected the two-month timeframe that historically had the highest number of fines. The tags were visualized independently of time via a scatterplot, with flow plotted on the y-axis and power consumption on the x-axis.
Two distinct operating zones were noticeable, giving the team quick insight: total energy consumption increased with the total amount of treated water, but energy efficiency differed between producing with one reverse osmosis skid and producing with two. Energy use per unit of treated water was much lower when operating with two skids, especially at high production rates.
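The calculated tags and the energy-per-liter comparison behind that scatterplot can be sketched in a few lines. This is a minimal illustration only: the tag arithmetic is generic, and the sample values are synthetic stand-ins, not data from the plant in question.

```python
# Sketch of the calculated-tag analysis described above. Tag names and the
# synthetic sample values are illustrative assumptions, not plant data.

def total_tag(*series):
    """Sum several sensor tags sample-by-sample into one calculated tag."""
    return [sum(vals) for vals in zip(*series)]

def energy_per_liter(power_kw, flow_lps, interval_s=60):
    """Specific energy (kWh per liter) for each sample interval."""
    return [
        (p * interval_s / 3600) / (f * interval_s)
        for p, f in zip(power_kw, flow_lps)
        if f > 0
    ]

# One skid running hard vs. two skids sharing the load (synthetic samples).
skid1_kw, skid2_kw = [110, 115, 112], [0, 0, 0]
skid1_lps, skid2_lps = [9.0, 9.2, 9.1], [0.0, 0.0, 0.0]
one_skid_energy = energy_per_liter(total_tag(skid1_kw, skid2_kw),
                                   total_tag(skid1_lps, skid2_lps))

skid1_kw, skid2_kw = [75, 76, 74], [75, 76, 75]
skid1_lps, skid2_lps = [9.0, 9.1, 9.0], [9.1, 9.2, 9.0]
two_skid_energy = energy_per_liter(total_tag(skid1_kw, skid2_kw),
                                   total_tag(skid1_lps, skid2_lps))

# Two distinct operating zones: specific energy is lower with two skids.
print(sum(one_skid_energy) / 3 > sum(two_skid_energy) / 3)  # True
```

Specific energy (kWh per liter) rather than raw power is the right comparison metric here, because two skids draw more total power while still cleaning each liter more efficiently.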
Using self-directed analytics software, the team analyzed their operations and determined the optimal number of reverse osmosis units for their process. In addition, they were able to keep the energy consumption below the maximum limit, thus avoiding future financial penalties.
Another real-world example is from the chemical industry.
Typically, when a valve starts to stick, there is a delay between the valve output changing and the actual process responding. In one chemical plant, process experts wanted to know in advance when a valve was starting to stick. To do this, they needed to identify good process behavior and set monitors for when there were deviations from this behavior. The company’s analytics software made this possible via pattern-recognition capabilities.
The process experts performed an operation area search to identify periods of normal and bad operation to see if they could find parameters that distinguished those two periods. Once they identified periods of good process behavior, they turned these into fingerprints to monitor operations for out-of-phase behavior, and to set alerts for such periods. With these monitors and alerts in place, if an issue occurred, the monitor would send an email notifying personnel about the situation and also suggest possible corrective actions.
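A minimal sketch of the fingerprint idea: build a per-time-step envelope from aligned periods of known-good behavior, then flag any monitored period that drifts outside it. This generic mean-plus-k-sigma envelope is an assumption for illustration, not the vendor's actual pattern-recognition algorithm, and the valve-position values are synthetic.

```python
# Hedged sketch of "fingerprint" monitoring: an envelope built from
# known-good periods of a tag, used to flag out-of-phase behavior.
import statistics

def build_fingerprint(good_periods, k=3.0):
    """Per-time-step mean +/- k*stdev envelope from aligned good periods."""
    envelope = []
    for samples in zip(*good_periods):
        mu = statistics.mean(samples)
        sigma = statistics.pstdev(samples)
        envelope.append((mu - k * sigma, mu + k * sigma))
    return envelope

def out_of_envelope(period, envelope):
    """Indices where the monitored period leaves the fingerprint envelope."""
    return [
        i for i, (x, (lo, hi)) in enumerate(zip(period, envelope))
        if not lo <= x <= hi
    ]

# Three aligned "good" runs of a valve-position tag (percent open).
good = [
    [50.0, 50.5, 51.0, 50.8],
    [49.8, 50.4, 51.2, 50.6],
    [50.1, 50.6, 50.9, 50.7],
]
fingerprint = build_fingerprint(good)

# A new run where the valve lags (sticks) partway through the move.
suspect = [50.0, 50.5, 48.0, 47.5]
alerts = out_of_envelope(suspect, fingerprint)
print(alerts)  # prints [2, 3]: the time steps that would trigger an alert
```

In a production monitor, a nonempty alert list would be what triggers the notification email described above.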
Asset performance, or overall equipment effectiveness (OEE), depends greatly on the process in which the asset operates. To have a more thorough picture, instead of just using equipment-related sensor data for performance analysis, all process-related sensor data can be taken into account. This is called the contextualization of asset performance with process data, and it makes predictive maintenance for both critical and noncritical assets possible.
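For reference, OEE is conventionally computed as the product of availability, performance and quality. The one-liner below uses that standard formula; the percentages are illustrative figures, not from the plants in these examples.

```python
# Standard OEE formula: availability x performance x quality.
def oee(availability, performance, quality):
    """Overall equipment effectiveness as a fraction in [0, 1]."""
    return availability * performance * quality

# e.g. 90% uptime, 95% of ideal rate, 98% good product:
print(round(oee(0.90, 0.95, 0.98), 3))  # prints 0.838
```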
The goal of predictive maintenance is to perform maintenance at the time when it will be most cost effective and have the least impact on operations. This requires an understanding of process performance, and process experts are in the best position to analyze good and bad performance. When all sensor-generated data is represented in a single graph, however, it is difficult to find correlations, especially when multiple tags need investigation. Analytical tools can provide descriptive tagging features to quickly explore and filter data visually, allowing users to search through large amounts of process data. Advanced analytics capabilities also may allow users to do root-cause analysis (RCA) and test hypotheses (discovery analytics) to quickly find similar behavioral occurrences.
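Finding "similar behavioral occurrences" can be approximated with a brute-force sliding-window distance search, sketched below. Commercial discovery-analytics tools use far more sophisticated pattern matching; the function names and data here are assumptions for illustration.

```python
# Illustrative similarity search: slide a query pattern over a historian
# series and rank windows by Euclidean distance (assumed, simplistic metric).
import math

def sliding_distances(series, query):
    """Euclidean distance of `query` to every window of the same length."""
    m = len(query)
    return [
        math.dist(series[i:i + m], query)
        for i in range(len(series) - m + 1)
    ]

def most_similar(series, query, top=1):
    """Start indices of the windows closest to the query pattern."""
    d = sliding_distances(series, query)
    return sorted(range(len(d)), key=d.__getitem__)[:top]

# A long tag with the query pattern embedded at index 4.
tag = [0, 0, 1, 2, 5, 9, 5, 2, 1, 0, 0]
query = [5, 9, 5]
print(most_similar(tag, query))  # prints [4]
```

Real tools typically normalize each window before comparing, so that occurrences at different operating levels still match; this sketch omits that step for brevity.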
Through diagnostic analysis, process experts can understand the effects of process changes (comparing before vs. after a specific issue) and find potential influence factors. By understanding the difference between good and bad behavior, the basis is created to understand when maintenance is required. With this information, experts can set monitors to safeguard the best operating zones, increase asset reliability, improve plant safety and predict when maintenance is required.
In conclusion, the IIoT brings new possibilities to the process manufacturing industry. Originally, data was captured only by hard-wired sensors. With today's wireless sensors and devices, plus ample cloud-storage capacity, process data can be captured and stored far more extensively.
Combining IIoT with operator-accessible analytics allows plant and process engineers to tap into the facility's data to increase operational excellence and business resilience. Process manufacturers can gain enhanced operational understanding and transparency by eliminating data silos and incorporating contextual data with process data. Such an analytics tool centralizes the process data so that all stakeholders can use it to visualize, monitor and analyze the production process.