Posts posted by Marketing

  1. Online Furnace Conversion
    roger.sprague… Mon, 04/26/2021 - 05:57

    Challenge
    Making an accurate prediction for real-time furnace ethane conversion is critical to maintaining effluent purity and resulting ethylene production rates. Overestimation of conversion leads to undershooting production targets and lost opportunity. Underestimation can lead to overproduction, overloading downstream units or increasing inventory storage costs. For a large-scale petrochemical manufacturer, conversion calculations were complicated, typically performed offline, and required manual calculation each time a new weekly lab sample was received. 

    Solution
    An online version of the existing furnace conversion calculation script was implemented using Seeq’s external calculation function. The actual conversion was compared to the online prediction signal and large deviations were flagged. Asset Tree functionalities were used to perform the calculations on a single furnace and then quickly scaled to all site furnaces, with results summarized for all furnaces in a dashboard. Auto-updates were configured so that anytime users open the dashboard, they see the furnace conversion and comparison calculations for each new lab sample received. They can then make decisions as to whether the predicted conversion signal, a value calculated in the plant’s distributed control system, should undergo a bias adjustment to better reflect recent data.

    Results
    This improvement eliminated the need for SMEs to manually calculate the furnace conversion, saving time, and ensured the calculation was performed for every lab sample received. When a high delta between the actual and predicted conversion is observed, the process engineers take proactive actions to adjust the predicted conversion signal calculation. This provides operators with an accurate account of ethylene production rate and allows them to make rate bumps when necessary to keep production rate at target. Recently, one site observed more than 1 million pounds of production losses due to running below target production rates when they believed they were on target. Since the implementation of the online calculation, they have seen less than half of these losses, projecting a year-on-year revenue improvement of more than $250,000. 

    Data Sources

    • Process Data Historian (OSIsoft PI, AspenTech IP21, Honeywell PHD, etc.)
    • OSIsoft Asset Framework or other asset hierarchy 
    • LIMS (Lab Information Management System)

    Data Cleansing

    • Process data signals were cleansed to include only data with timestamps matching lab data timestamps

    Calculations and Conditions

    • Lab data and process data from different data sources were overlaid in Seeq Workbench.
    • Seeq Formula was used to call the external conversion calculation script for calculating actual values from the process data and lab data, eliminating the need for manual calculation any time a new lab value is received. 
    • Scorecard metrics showed the delta between the actual and predicted furnace conversion at the time of each lab sample and used priority color thresholds to draw attention to high deviations.
    • An asset structure was built for these calculations using Seeq Data Lab.
    • Treemaps were used to view current predictor status across all furnace assets.
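Outside Seeq, the comparison step can be sketched in plain Python with pandas. This is only an illustrative sketch: the timestamps, conversion values, and the deviation threshold below are all hypothetical, and the real conversion calculation runs in the external script described above.

```python
import pandas as pd

def flag_conversion_deviations(lab_actual: pd.Series,
                               predicted: pd.Series,
                               threshold: float = 2.0) -> pd.DataFrame:
    """Align each lab-sample conversion with the predicted signal and
    flag samples whose |actual - predicted| exceeds the threshold."""
    # Interpolate the higher-frequency predicted signal onto the lab
    # timestamps, mirroring the timestamp-matching cleansing step.
    pred_at_lab = (predicted
                   .reindex(predicted.index.union(lab_actual.index))
                   .interpolate(method="time")
                   .reindex(lab_actual.index))
    delta = lab_actual - pred_at_lab
    return pd.DataFrame({
        "actual": lab_actual,
        "predicted": pred_at_lab,
        "delta": delta,
        "needs_bias_adjustment": delta.abs() > threshold,
    })

lab_times = pd.date_range("2021-04-01", periods=4, freq="7D")
lab = pd.Series([65.0, 66.5, 61.0, 65.5], index=lab_times)  # weekly lab conversion, %
pred = pd.Series(65.5, index=pd.date_range("2021-04-01", periods=25, freq="D"))
report = flag_conversion_deviations(lab, pred)
print(report["needs_bias_adjustment"].tolist())  # [False, False, True, False]
```

A flagged sample is the cue for engineers to consider a bias adjustment of the predicted conversion signal.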

    Reporting and Collaboration
    Results were summarized for all furnaces in a dashboard. With configured auto-updates, users see the furnace conversion and comparison calculations for each new lab sample received upon opening. They can make immediate and accurate decisions as to whether the predicted conversion signal needs a bias adjustment to better reflect recent data.
     


When I graduated with a degree in chemical engineering, I was excited to join the workforce and make a positive contribution. Being part of different engineering teams at various companies made me believe that mining the many years’ worth of big data stored in process manufacturing historians could be a game-changer. There were many opportunities to contribute to sustainability, reliability, and profit maximization goals that would result in happier customers and a more environmentally responsible industry.


  3. Data Across Time for Transformer Health Insights
    roger.sprague… Tue, 04/13/2021 - 09:15

    Challenge

    Knowing when to maintain one of the most critical components within the electrical network, the power transformer, is a unique challenge. Transitioning from a calendar-based maintenance plan to a condition-based plan requires the synthesis of information from a variety of data sources. This might include:

    • Nameplate characteristics
    • Diagnostic tests (such as fluid, dissolved gas analysis, and electrical tests)
    • Maintenance and financial history 
    • Real-time data (such as cooling performance)

    Developing and refining transformer health analytics that take advantage of all this disparate, but valuable data is arduous, especially when applied to numerous assets. This article provides an example of one such analytic, Dissolved Gas Analysis, and demonstrates how it can easily integrate with multiple data sources and scale across a fleet of transformers. 

    Solution

    DGA is the study of dissolved gases in transformer oil. Transformer oil is used to insulate the transformer's electrical equipment. When it breaks down, it releases gases within the oil. The distribution of these gases can be related to the type of electrical fault and the rate of gas generation can indicate the severity of the fault. DGA provides an inside view of a transformer. By analyzing dissolved gases, we can observe the inner condition of any transformer. Many faults like arcing, overheating, and partial discharge can only be detected by analyzing gases.

    Many electrical utilities have a DGA program. This typically consists of manually sampling the oil and sending the sample to a laboratory for analysis (every 1-4 years). There are a number of industry-recognized methods used to translate the lab results into fault codes. Seeq’s advanced analytics can be used to easily aggregate the data needed for such methods, evaluate the required formulas, and scale the analytics across numerous assets. Methods include:

    • IEEE C57-104 Total Dissolved Combustible Gases
    • IEC 60599
    • Roger's ratio
    • Dornenburg's state estimation
    • Duval's triangle 

    Seeq can also be used to develop customized transformer health algorithms.
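As an illustration of one such method, here is a heavily simplified sketch of the three-ratio Rogers diagnosis in Python. The ratio cut-offs below cover only a few illustrative cases; a real program should implement the full IEEE C57.104 / IEC 60599 tables.

```python
def rogers_ratio_fault(h2, ch4, c2h6, c2h4, c2h2):
    """Heavily simplified three-ratio Rogers diagnosis (gas ppm inputs).

    Real implementations should follow the full IEEE C57.104 / IEC 60599
    tables; only a few illustrative cases are covered here.
    """
    r1 = c2h2 / c2h4 if c2h4 else 0.0  # acetylene / ethylene
    r2 = ch4 / h2 if h2 else 0.0       # methane / hydrogen
    r5 = c2h4 / c2h6 if c2h6 else 0.0  # ethylene / ethane
    if r1 < 0.1 and 0.1 <= r2 <= 1.0 and r5 < 1.0:
        return "normal"
    if r1 < 0.1 and r2 < 0.1:
        return "partial discharge"
    if r1 >= 1.0:
        return "arcing"
    if r5 >= 3.0:
        return "high-temperature thermal fault"
    return "indeterminate"

print(rogers_ratio_fault(h2=100, ch4=30, c2h6=60, c2h4=30, c2h2=1))   # normal
print(rogers_ratio_fault(h2=100, ch4=30, c2h6=10, c2h4=50, c2h2=80))  # arcing
```

In Seeq, the same ratio logic can be expressed in Formula and scaled across the fleet with asset swapping.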

    Results

    • Rapid iteration and refinement of analytics to identify assets for targeted, condition-based maintenance
    • Improved reliability and reduced maintenance costs
    • The ability to predict OpEx and extend the life of the transformer
    • Informed decisions on maintaining versus replacing each transformer
    • Reduction in catastrophic asset failures ($10s of millions per transformer)
    • Reduced corporate exposure to preventable failures and outages
    • Connected data, all in one place
    • Individual transformer health algorithms
    • Avoidance of transformer failures saves multiple millions of dollars in outage cost

    Data Sources

    • Load and temperature data is stored in PI
    • DGA test results are stored in an SQL database

    Calculations and Conditions

    • Formula
    • Value search
    • Asset swapping
    • Treemap

    Reporting and Collaboration

    • Organizer topic
       

  4. Seeq began offering our advanced analytics applications as Software-as-a-Service (SaaS) in 2018. SaaS offers many benefits to end users including higher performance, better supportability, and lower overall costs. However, there can be a perceived risk associated with SaaS offerings because end user data is in the cloud on computing resources managed by a vendor. The “Security, Availability, Processing Integrity, Confidentiality, and Privacy” of customer data largely depends on the controls used to operate both the vendor’s organization and specifically the controls used to operate the SaaS service.


  5. Bearing Failure Prediction
    roger.sprague… Wed, 03/24/2021 - 08:41

    Challenge

    At many process manufacturing operations, bearing degradation progresses exponentially and failures occur with little notice, leading to expensive downtime and making scheduled maintenance difficult. System interdependence often means that the failure of one bearing results in the subsequent failure of other system bearings. Being able to prepare for and prevent the first bearing failure can reduce the costly and harmful effects of unplanned bearing failures.

    Solution

    Seeq’s advanced analytics application allows SMEs in process manufacturing operations to expertly cleanse data, identify normal operation, calculate statistical thresholds, and identify deviations. The company’s team of data scientists extends the analysis by using the cleansed data and operational context from the SMEs to build a bearing health status signal by running an anomaly detection algorithm. The data scientists finish their work by passing this anomaly detection signal back into Seeq Workbench.

    Results

    Maintenance is notified of abnormal operations several days before a catastrophic failure. This early warning time enables proactive greasing of bearings to extend the run life, advanced parts ordering, and resource allocation so maintenance cost and downtime are minimized when the failure occurs. For a critical refrigeration compressor, preventing a single bearing failure may result in lost time savings of >$0.5M.

    Data Sources

    • Process Data Historian like OSIsoft PI or others
    • An Asset Framework structure, like OSIsoft PI AF, to quickly scale calculations from one bearing to many
    • Supplemental high-frequency data from periodic vendor testing, stored as CSV files uploaded to Seeq

    Data Cleansing

    • Bearing vibration signals were cleansed to remove downtime data 

    Calculations and Conditions

    • Value Search and Formula were used to identify and remove downtime data
    • Formula was used to identify "normal operation"—the data set used for calculating the statistical thresholds and the eventual training data set for the Neural Network (NN) model
    • Formula was used to calculate significant statistical thresholds based on normal operation period
    • Deviation searches were used to identify when one bearing signal deviated from its statistical limit
    • Composite condition was used to identify when any bearing signal deviated from its statistical limit
    • Seeq Data Lab was used by the Data Science team to pull cleansed, contextualized data from Seeq Workbench and a NN model was used to create a 1-0 anomaly detection signal for each bearing
    • Seeq Data Lab was used to scale the analysis from a single bearing to all of the bearings at a particular site
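The threshold-and-deviation portion of the workflow can be sketched in plain Python with NumPy. The vibration values and the 3-sigma limits below are hypothetical; the real analysis uses cleansed historian data and the NN anomaly signal described above.

```python
import numpy as np

def bearing_limits(vibration: np.ndarray, n_sigma: float = 3.0):
    """Statistical limits derived from a 'normal operation' training window."""
    mu, sigma = vibration.mean(), vibration.std()
    return mu - n_sigma * sigma, mu + n_sigma * sigma

def deviation_flags(vibration: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """1/0 deviation signal, analogous to a deviation search on one bearing."""
    return ((vibration < lo) | (vibration > hi)).astype(int)

rng = np.random.default_rng(0)
normal = rng.normal(2.0, 0.1, 1000)        # mm/s vibration during normal operation
lo, hi = bearing_limits(normal)
recent = np.array([2.0, 2.05, 2.9, 3.1])   # latest values drifting upward
flags = deviation_flags(recent, lo, hi)
print(flags.tolist())  # [0, 0, 1, 1]
```

A composite condition is then the element-wise OR of these flags across all bearings on a machine.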

    Reporting and Collaboration

    • The anomaly detection signals were pushed back to Seeq Workbench, where the machinery engineer could use them in daily machinery monitoring to identify potential bearing failures earlier.
    • The results were added into a Seeq Organizer Topic with multiple interactive dashboards for different layers of the Organization. 
       

  6. Calculating KPIs During Heats
    roger.sprague… Wed, 03/17/2021 - 08:29

    Challenge

    In arc furnaces, steel manufacturers refer to the different batches of steel being produced as heats. A steel manufacturing company wanted to be able to calculate various KPIs and quality indicators during each heat. Due to the large number of heats in each data set, the KPIs can require extensive manual labor to calculate. This leaves the engineering team with little to no time to deeply analyze the data and make better production decisions after the data and results are finally pulled and organized.

    Solution

    Using Seeq’s advanced analytics tools, the steel manufacturer can identify each heat using the heat number signal from the historian. Once the heat numbers have been identified, various metrics for each heat can be calculated using both historian signals and lab data. The KPIs are then used to determine metrics over each shift. The steel manufacturer can also define ideal heats and use the reference profile tool to understand when operations are outside of defined boundaries. These boundary exceedances are then used to track whether something is experiencing long-term drift in the process, or is just a one-off deviation. 

    Results

    The steel manufacturer was able to build an Organizer topic that updates each morning. This allowed them to see daily performance and make per-shift corrections as needed to further optimize the amount of time spent in each stage of operation. A process that once required hours of work in Excel spreadsheets can now be updated and sent to the team automatically. The plant leadership team can quickly see performance over the last day and focus on boundary excursions to troubleshoot over the next shift of operations.

    Data Sources

    • Historian Data (Ignition)
    • LIMS data (SQL)

    Data Cleansing

    • ToCondition to Identify Heats

    Calculations and Conditions

    • Value Search to Identify Power On Condition
    • Signal From Condition to identify power on time per heat
    • Deviation search to understand when power on time is above recommended limit 
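Outside Seeq, the per-heat KPI logic is analogous to a groupby in pandas. The heat numbers, the power-on flag, and the recommended limit below are made up for illustration only.

```python
import pandas as pd

# Hypothetical 1-minute samples: heat number from the historian and an
# arc "power on" flag (1 = on), standing in for ToCondition + Value Search.
df = pd.DataFrame({
    "heat":     [101, 101, 101, 101, 102, 102, 102, 102, 102],
    "power_on": [  1,   1,   0,   1,   1,   1,   1,   1,   0],
})

# "Signal from Condition" analogue: total power-on minutes per heat.
power_on_minutes = df.groupby("heat")["power_on"].sum()

limit = 3  # hypothetical recommended power-on limit, minutes
over_limit = power_on_minutes[power_on_minutes > limit]
print([int(h) for h in over_limit.index])  # heats exceeding the limit: [102]
```

The same grouping generalizes to per-shift rollups by grouping heats into shifts.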

    Reporting and Collaboration

    • An Organizer topic, updated each morning and shared with the team automatically

Seeq recently conducted a poll of chemical industry professionals—process engineers, mechanical and reliability engineers, production managers, chemists, research professionals, and others—to get their take on the state of data analytics and digitalization. Some of the responses confirmed behaviors we’ve witnessed first-hand in recent years: the challenges of organizational silos and workflow inefficiencies, and a common set of high-value use cases across organizations. Other responses surprised us; read on to see why.


  8. Improve Reliability and Reduce Maintenance With Advanced Analytics
    roger.sprague… Fri, 02/12/2021 - 15:29

    A common misconception in large-scale process manufacturing is that minimizing downtime by increasing reliability is the key to increasing profitability. This mindset fails to address the fact that some downtime can improve capacity, increase production, and minimize maintenance expenses.


  9. Control Loop Performance Monitoring (CLPM)
    roger.sprague… Fri, 02/12/2021 - 13:22

    Challenge

    Manufacturing sites have many automatic controllers (typically in the hundreds or even thousands for large facilities). These controllers are designed to run in automatic mode without operator intervention. Most sites don’t have insight into how these controllers are actually performing. Potential unknowns include:

    • Is the controller running in automatic as designed, or is an operator running the controller in manual?
    • Is the controller stable?
    • Is the controller effectively maintaining setpoint?

    Poor control may result in an increased risk of safety incidents, harmful environmental impact, failure to meet product specifications, lower throughput, waste of energy and raw materials, increased maintenance costs, and increased operator intervention.

    Off-the-shelf Control Loop Performance Monitoring (CLPM) applications are available, but they have no way to calculate metrics based upon different recipes, grades, operating conditions, etc. They are also unable to incorporate additional process data tags, such as overall production, and offer users only a limited set of metrics.

    Solution

    Seeq enables engineers to calculate and monitor key controller performance metrics. These metrics are calculated based upon values from the controller mode, output, setpoint, and process variable. While there are many metrics that may be calculated, a few examples include controller error, output travel, percent time in the correct mode, setpoint changes, and more.

    Seeq offers a flexible CLPM solution that can easily create conditions based upon different recipes, grades or operating conditions. Users have the flexibility to tie in additional process data and tags from the historian to investigate how controller performance is impacting overall unit and plant performance. Additionally, the metrics calculated for each controller are flexible and configurable by the end-user.

    Results

    Using Seeq to monitor controller performance may reveal issues related to lack of operator training, mechanical issues, poor controller tuning, ineffective control strategy, and/or change in operating conditions. With this insight into how controllers are performing, engineers and operators can troubleshoot the issue to optimize process performance.

    Data Sources

    • Process historian

    Data Cleansing 

    Seeq can create conditions based upon different modes of operation or recipes that a process may run. Often, facilities use the same controller for different recipes, and controller performance varies from one recipe to the next. The ability to add this context to the data within Seeq and create condition-based metric limits is very useful in these cases. Other off-the-shelf CLPM packages do not currently have this capability.  

    Calculations and Conditions

    • Periodic condition – create a condition for how frequently the CLPM metrics are calculated
    • Signal from condition – calculate controller metrics
    • Formula – establish metric benchmark limits based upon historic performance
    • Value search – identify when metric performance is outside of its limits, and create conditions for different operating conditions or recipes, which can be used to calculate condition-based metrics
    • Scorecard – visualize metric calculations in tabular form
    • Treemap – high-level overview of how each controller is performing relative to its metric limits
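A few of these metrics can be sketched in plain Python with NumPy. The controller samples and the "correct mode" label below are hypothetical; in practice the inputs come from the historian tags for mode, setpoint, process variable, and output.

```python
import numpy as np

def clpm_metrics(mode, setpoint, pv, output, expected_mode="AUTO"):
    """A few illustrative controller KPIs over one evaluation period."""
    mode, setpoint, pv, output = map(np.asarray, (mode, setpoint, pv, output))
    err = setpoint - pv
    return {
        "pct_time_in_mode": 100.0 * (mode == expected_mode).mean(),
        "mean_abs_error": np.abs(err).mean(),          # controller error
        "output_travel": np.abs(np.diff(output)).sum(),
        "setpoint_changes": int((np.diff(setpoint) != 0).sum()),
    }

m = clpm_metrics(
    mode=["AUTO", "AUTO", "MAN", "AUTO"],
    setpoint=[50.0, 50.0, 52.0, 52.0],
    pv=[49.5, 50.5, 51.0, 52.0],
    output=[30.0, 32.0, 31.0, 31.0],
)
print(m["pct_time_in_mode"])  # 75.0
```

Benchmarking these values against historical performance then yields the condition-based metric limits described above.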

    Reporting and Collaboration

    Results can be summarized in an Organizer topic, which can also serve as a dashboard for continuous monitoring of controllers.


  10. Smart Industry 2021 Crystal Ball Report
    roger@uncommon… Fri, 02/05/2021 - 13:49

    Predictions in a normal year are tricky. Predicting what is to come after a year of unprecedented volatility courtesy of a global pandemic is downright gutsy. But planning for next steps is wise, as is considering advice from experts in your field. 

    Here you'll find a collection of that advice: insights from dozens of industrial thought-leaders on what’s next, who’s involved, and what/why/how to charge into the new year with confidence.

    This report is sponsored by Seeq, Software AG, and Stratus.


  11. Steel Coil Processing: Scheduling and Performance
    roger.sprague… Mon, 02/01/2021 - 12:53

    Challenge

    There are many insights to be identified from the steel coil processing operation, but they are challenging to obtain due to the need to combine processing data with relational data (such as product properties). The data needs to be cleansed, aggregated, and categorized in various ways to improve production goal setting, scheduling accuracy, and troubleshooting of delayed time causes. Typical data analysis tools do not provide the time series/relational data integration and contextualization features needed for this challenge.

    Solution

    Using Seeq’s capsule/condition functionality for contextualizing data, the steel coil processing steps (loading, threading, etc.) were trended at a large-scale flat-rolled steel processing company. Product properties were easily integrated as condition properties. Seeq's condition cleansing features were used to separate coil processing performance by operator (shifts).

    Results

    The insights gained from the steel coil processing analytics generated the following results for the operations team and management, as the dashboard reports were regularly reviewed:

    • Increased scheduling accuracy: daily production goals were set more consistently and realistically, based on actual data and differentiated by several variables.
    • Significant processing delays tied to operator and product properties were addressed more rapidly.
    • Gaps in operator performance were readily visible. These results drive training, sharing of best practices, and additional staffing efforts.
    • As these major optimization items are addressed and operator performance becomes more consistent, the analytics can be used to identify deeper opportunities in processing steps, sequencing of customer orders, etc.

    Data Sources

    • Process Data Historian (OSIsoft PI, AspenTech IP21, Honeywell PHD, etc.)
    • SQL data source with relational data (e.g., product properties)

    Data Cleansing

    • Condition cleansing/filtering to allow aggregations by operator (shifts)
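The shift-based aggregation is analogous to a pandas groupby over coil records joined with their product properties. Column names and values below are hypothetical, chosen only to illustrate the cut of the data.

```python
import pandas as pd

# Hypothetical per-coil records: a step duration joined with relational
# product properties and the shift in which the coil was processed.
coils = pd.DataFrame({
    "shift":      ["A", "A", "B", "B", "B"],
    "gauge_mm":   [1.2, 2.0, 1.2, 2.0, 1.2],
    "thread_min": [4.0, 6.0, 7.0, 9.0, 8.0],
})

# Condition-based aggregation analogue: mean threading time by shift and
# product property, the kind of cut used for goal setting and operator
# performance comparisons.
summary = coils.groupby(["shift", "gauge_mm"])["thread_min"].mean()
print(summary.loc[("B", 1.2)])  # 7.5
```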

  12. Solar Generation: Axis Tracking/Soiling Loss Calculation
    roger.sprague… Tue, 01/19/2021 - 14:39

    Challenge

    In solar energy generation, it is known that the soiling of panels and axis tracking faults (for panels with axis tracking capability) will reduce the total output of a solar plant. Typically, a flat estimate based on weather assumptions and total net output is applied to determine when it becomes worthwhile to clean solar panels or fix/tune axis tracking controllers to maximize output. Developing models of the plant to account for these kinds of power losses is difficult, or requires expensive, specialized software that does not allow for much tuning or any ad hoc analysis of plant performance.

    Solution

    In Seeq, periods that exemplify optimal performance can be identified: the panels are clean and there are minimal tracking deviations from setpoint. These periods can be combined to create a training window. A model of plant performance based on the identified training window and plant meteorology instruments can then be developed to monitor the impact of axis tracking faults and soiling losses.

    Results

    The developed plant model allows for continuous monitoring of soiling and axis tracking losses, while also enabling a dollar quantification report to determine when the monetary cost of maintenance is justified by the losses associated with that maintenance.

    Data Sources

    • Wonderware

    Data Cleansing

    Capsules were developed to identify when each tracker string in a block was deviating by more than a given threshold from its setpoint. Formula was then used to combine all of these capsules and remove them from periods when the sun was shining.

    Calculations and Conditions

    • Value search, formula, composite condition, prediction, scorecard metric
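The training-window model can be sketched with a simple least-squares fit in NumPy. The irradiance and power values below are invented for illustration; a real plant model would be trained on the site's meteorology instruments and clean-panel periods.

```python
import numpy as np

# Hypothetical training window: clean panels, trackers on setpoint.
irradiance = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])  # W/m^2
power_kw   = np.array([ 48.0,  98.0, 151.0, 199.0,  251.0])  # plant output

# Fit expected power vs. irradiance over the training window.
slope, intercept = np.polyfit(irradiance, power_kw, 1)

def loss_kw(irr_now, power_now):
    """Expected minus actual output; sustained positive values suggest
    soiling or axis-tracking losses."""
    return (slope * irr_now + intercept) - power_now

loss = loss_kw(900.0, 205.0)  # under-performing relative to the model
print(round(loss, 1))
```

Multiplying the loss by a power price turns this into the dollar quantification used to justify cleaning or tracker maintenance.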

    Reporting and Collaboration

    Results were documented in a set of scorecard metrics within a larger monitoring dashboard worksheet that was used by plant engineers to make data-based decisions and account for inefficiencies in the generating system.


Earlier this month (November 2020) I was honored to be asked to present at the American Fuel and Petrochemicals Manufacturers Women in Industry event and speak to over 100 attendees on leading during uncertain times. My talk focused on what I believe are key leadership themes and what those look like when a crisis strikes. These types of crises will stretch and grow us, forcing us to make choices we never thought we’d be faced with, while determining the types of leaders we want to become.

    When I asked the group what they think when they hear the phrase “leadership in times of uncertainty,” many used words like listening, empathy, communication, and inspiring. As women, we are leaders in our industries, communities, education, and family lives. Each of these areas of life is different, yet we have the ability to listen, empathize, communicate, and inspire in every single one. The group was spot on in identifying these skills as the cornerstone of how we lead through any challenge. Now, we must lean on those abilities more than ever while taking risks, building a healthy network, and communicating to drive alignment.

    The first topic we explored was taking risks. When I asked the group what they picture when they hear the word risk, many said failure, danger, reward, opportunity, being vulnerable, and potential mistakes. All of these are true, and more than one can be true at the same time. We’ve all taken risks, for better or worse, and we’ve all failed. However, by taking those risks we acquired new skills and experiences, preparing us for leading in a crisis. Calculated risk taking is actually the foundation of a good leader.

    When we look back at our lives, it’s clear we’ve all taken a series of risks to get to where we are today. We went out and got an education to enable us to succeed in traditionally male-dominated fields. We chose jobs that would give us the experience we wanted to reach career goals. All throughout this journey we’ve shared our ideas, and there is certainly risk in that. Beyond sharing what we think, we also navigate day-to-day risks, such as where we sit in a meeting and when we speak.

    I took a risk by coming to Seeq and growing the analytic engineering team. One of my goals has always been to encourage my team members to freely share their ideas. That has proved to be an effective strategy as we’ve navigated the COVID-19 pandemic. When travel came to a screeching halt, we were forced to pivot our entire training model. My team sprang into action to create an incredible virtual training setup, enabling Seeq to reach our customers anywhere in the world, while accommodating their different learning styles. It was absolutely a risk, but we had to try—and our efforts netted success. We are on pace to complete just over 300 virtual trainings by the end of the year, with 50-100 attendees per session.

    Another key element of leading during uncertainty is networking. While most people think of networking in the context of a social event, this type of networking is based on trust and energy. That is imperative for leading during a crisis.

    So how do we build the type of network that goes beyond swapping business cards? It’s not easy and, when asked, many in the group said the hardest part is following up, keeping up with relationships, and interactions feeling forced. However, when we shift our mentality to focus on building trust and energy, we take the long view. Trust only evolves with time—from initiating a relationship, to developing and sustaining it. We build that trust by doing what we say we will do and showing that we have the best interest of everyone on the team in mind.

    One of the best ways we can network to build trust is to be an energizer. Energizers create enthusiasm in part because they engage in a set of foundational behaviors that build trust. Energizers approach situations with a clear head, giving them endurance to lead. When you interact with an energizer, you don't worry that you will be judged, dismissed, or devalued. Without fear of rejection, it’s easier to share fledgling ideas or novel plans—to innovate, take risks, and think big. Energizers create trust, but trust isn’t all that they create. The real power of energizers is that they enable others to realize their full potential.

    While taking risks and networking to build trust and energy are critical to leading during uncertainty, ultimate success hinges on combining these endeavors with effective communication and leading by example. We must align our teams (and our customers!) towards a common goal so we can make efficient and effective progress. In addition, we must be willing to take the risk of putting our own ideas out there for all the world to see. After all, in every crisis there is an opportunity to consider new and exciting ideas to pave new paths, elevate team members, and reap the rewards of hard work and ingenuity.

    Many thanks to each and every woman who participated in the AFPM Women in Industry event. Your feedback and willingness to engage in discussion made the risk I took to speak pay off. The organizers did a fantastic job and it was a terrific experience overall. I am grateful for the opportunity and I can’t wait to see the impact each of you will have on your teams, your companies, and the industry as a whole.


  14. Seeq for Operators - Find Production Settings
    admin Mon, 11/23/2020 - 12:16

    Challenge

    Process operators are responsible for making production target settings when changing product grades. Typically, operators use a written logbook to record production settings, and when they make grade changes, they reference the logbook. While the logbook may contain information about finished product quality or other process KPIs, operators do not get the full context about the last production run.

    They’re often looking at numbers on a piece of paper, not trends that show the relationships between process parameters and key KPIs throughout the run. Operators frequently just choose production settings from the end of the run, which results in a large amount of off-specification production when changing grades.

    Solution

    Create conditions for the production runs of product grades. Easily navigate to past production runs to find past production settings. View the relationship between the production settings and key process KPIs, like quality or production rate.

    Results

    With the operator’s exposure to advanced analytics tools, the team can:

    • Reduce off-spec production when changing grades
    • Select better startup settings by considering the entire previous campaign instead of just the last value
    • Reduce time spent recording production settings in written logbooks and searching through logbooks for settings

    Calculations and Conditions

    • Define production runs using the Value Search tool.
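Outside Seeq, the idea of mining an entire previous campaign rather than just its last value can be sketched in pandas. The grades, settings, and quality values below are hypothetical.

```python
import pandas as pd

# Hypothetical historian extract: one row per sample, with the product
# grade, a production setting, and a quality KPI.
runs = pd.DataFrame({
    "grade":   ["X", "X", "X", "Y", "Y", "X", "X"],
    "temp_sp": [180, 181, 183, 210, 212, 184, 186],
    "quality": [97.0, 98.5, 99.0, 96.0, 97.5, 98.8, 97.2],
})

# "Value Search" analogue: a condition covering grade X production runs.
grade_x = runs[runs["grade"] == "X"]

# Consider the whole previous campaign, not just its last value.
best = grade_x.loc[grade_x["quality"].idxmax()]
print(int(best["temp_sp"]))  # setting from the best-quality sample: 183
```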
       

  15. Mass Balance
    admin Mon, 11/23/2020 - 12:00

    Challenge

    Manufacturing sites have many process units, each with inlet and outlet streams. Many sites do not have insight into the mass balance of these process units. Performing a mass balance on these process units (or the overall plant) is critical for identifying a number of issues, including leaks, faulty sensors, meter calibration issues, process inefficiencies, and more. Unfortunately, the plants that do perform mass balances likely use a method that is difficult to maintain and does not update as new data is available for continuous monitoring.

    Solution

    To ensure mass balance calculations are accurate, reliable, and up to date, process manufacturing operations turn to Seeq's advanced analytics to calculate and monitor their plants' mass balance. The mass balance can run continuously to track changes over time and identify discrepancies between inlet and outlet streams.

    Results

    The results of the mass balance (acceptable or unacceptable) provide key insights into the operation of the system. If the mass balance results are acceptable (in equals out), this confirms that the process does not have major leaks and that the flow sensors are operating correctly. If the mass balance results are not acceptable (in does not equal out), this may indicate several potential issues, such as a leak, a bottleneck in the process, faulty sensors, meter calibration issues, accumulation, or more.

    Calculations & Conditions

    • Formula: Convert flow rates to consistent mass units
    • Periodic condition: Create a condition for how frequently the mass balance is calculated
    • Signal from condition: Calculate totalized flow rates
    • Formula: Calculate the difference between the total in and the total out
    • Value Search or Deviation Search: Identify when the difference between in and out exceeds a certain value or percentage
    • Scorecard: Visualize mass balance in table form
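    The totalizing-and-differencing steps above reduce to simple arithmetic. As a minimal sketch with made-up hourly mass flow rates (not tied to any specific Seeq formula), the mass balance check looks like:

```python
# Minimal sketch of the mass-balance arithmetic, using made-up
# hourly mass flow rates (kg/h) for one process unit.
inlet_flows = {"feed_a": [100.0, 102.0, 98.0], "feed_b": [50.0, 49.0, 51.0]}
outlet_flows = {"product": [140.0, 141.0, 139.0], "purge": [8.0, 9.0, 8.0]}

def totalize(flows: dict[str, list[float]], dt_hours: float = 1.0) -> float:
    """Integrate flow rates over the period (rectangle rule)."""
    return sum(rate * dt_hours for series in flows.values() for rate in series)

mass_in = totalize(inlet_flows)     # 450.0 kg
mass_out = totalize(outlet_flows)   # 445.0 kg
imbalance_pct = 100.0 * (mass_in - mass_out) / mass_in

# Flag the period when the imbalance exceeds a chosen threshold (e.g. 2%).
needs_review = abs(imbalance_pct) > 2.0
```

    In Seeq, the periodic condition sets the integration window and Scorecard displays the resulting imbalance for each period.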


    Data Sources

    • Process historian

    Data Cleansing

    Before a mass balance can be performed, all inlet and outlet flow rates must be converted to consistent mass units. This unit conversion is easily accomplished in Seeq using the Formula tool and stream properties. Additional data cleansing, such as filtering of noisy signals, may also be applied, if necessary.
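    As an illustration of that unit conversion, a volumetric flow rate multiplied by the stream's density gives a mass flow rate (the values below are hypothetical):

```python
# Hypothetical example of the unit conversion described above:
# a volumetric flow (m^3/h) times a stream density (kg/m^3) gives
# a mass flow (kg/h), so all streams share consistent mass units.
def to_mass_flow(vol_flow_m3h: float, density_kg_m3: float) -> float:
    return vol_flow_m3h * density_kg_m3

water_mass_flow = to_mass_flow(2.5, 998.0)  # 2495.0 kg/h
```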

    Reporting and Collaboration

    Results can be summarized in an organizer topic, which can also serve as a dashboard for continuous monitoring of the mass balance.

    Use Case Activity
    Use Case Business Improvement

    View the full article

  17. Real-Time Production Potential (RTPP) Calculation
    Real-Time Production Potential (RTPP) Calculation
    admin Mon, 11/23/2020 - 12:00
    Industry

    Challenge

    For power generation operations, expected power is based on the manufacturer-provided turbine power curve, but the actual power produced may vary due to the age of components, inaccurate anemometers, and turbulence. It can be very difficult to plan operational production with inaccurate data to rely on.

    Solution

    Real-Time Production Potential (RTPP) can be calculated quickly using Seeq's Prediction Tool. Trained on a subset of data where the turbine is communicating, producing power, and not curtailed by the grid operator, the tool provides accurate predictions for the expected power of the operation's assets.

    Results

    After implementing Seeq’s advanced analytics, significant time is saved in validation and fine-tuning of the RTPP model. More accurate RTPP is reported to the grid operator every 5 minutes, improving the grid operator’s ability to maintain system reliability and balance load.

    Calculations & Conditions

    • Value Search
    • Composite Condition
    • Formula
    • Prediction
    • Scatterplot

    Data Sources

    • OSIsoft PI 

    Data Cleansing

    A composite condition flags the turbine as unavailable when power is less than zero, the turbine has a fault code with stop status, and the fault code is not categorized as curtailment. Seeq Formula then removes periods where the turbine is unavailable or curtailed from the prediction training data set. The training data set is further limited to periods when the site is capable of producing more power, since those are the situations where RTPP matters most.
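    The filter-then-train idea can be sketched with synthetic data. This is not the Seeq Prediction tool itself, just a plain numpy illustration: exclude unavailable and curtailed samples, then fit a simple cubic power curve to the remaining points:

```python
# Sketch (synthetic data) of the RTPP training-set filtering and fit.
# Availability and curtailment flags stand in for the composite condition.
import numpy as np

wind = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 9.0, 7.0])        # m/s
power = np.array([80.0, 270.0, 640.0, 1250.0, 2160.0, 0.0, 0.0])  # kW
available = np.array([True, True, True, True, True, False, True])
curtailed = np.array([False, False, False, False, False, False, True])

train = available & ~curtailed          # keep only valid training samples
coeffs = np.polyfit(wind[train], power[train], deg=3)  # cubic power curve

def rtpp(wind_speed: float) -> float:
    """Predicted production potential (kW) at a given wind speed."""
    return float(np.polyval(coeffs, wind_speed))
```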

    Reporting and Collaboration

    • Resulting model used for reporting to ERCOT
    Use Case Activity
    Use Case Business Improvement

    View the full article

  19. Filter Membrane Predictive Maintenance
    Filter Membrane Predictive Maintenance
    admin Mon, 11/23/2020 - 11:00

    Challenge

    At manufacturing operations using ultrafiltration systems, the ultrafiltration membranes are reused across numerous batches, with Clean-in-Place (CIP) operations in between batches to maintain filter performance. However, ineffective CIP cycles or long-term fouling or degradation of the membrane can result in increased cycle times to move the desired amount of product through the filter, lost yield as product fails to permeate the filter, or poor product quality if the membrane fails.

    Solution

    Companies using these systems turn to Seeq's advanced analytics. By using context to separate product batch operation from cleaning operations, the data set during batch operation was isolated to show how the filter was changing over time. In addition, Darcy's Law was used to calculate and trend the membrane resistance over time, based on the flow rates and pressures around the filter. A predictive maintenance model was created to extrapolate long-term degradation trends, with warning and alert limits used to plan maintenance periods prior to membrane failure.

    Results

    The ability to plan for maintenance of the filter membrane resulted in reduced impact to operations, saving both time and money. In addition, the replacement of membranes prior to significant degradation or failure resulted in reduced cycle time of filter operations, increased yield, and consistent batch quality.

     

    Data Sources

    • OSIsoft PI Historian
    • SQL Database for UV Data

    Data Cleansing

    • Used UV Data and Flow Rates to determine when the filter was in operation, and specifically whether it was in cleaning or batch operation.

    Calculations and Conditions

    • Used Seeq’s Formula functions with Darcy’s Law to calculate membrane resistance during batch operation.
    • Created prediction model to identify maintenance window.
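    For reference, the membrane form of Darcy's law gives resistance as R_m = dP / (mu * J), with permeate flux J = Q / A. A small sketch with illustrative values (not the site's actual numbers):

```python
# Hedged sketch of the membrane-resistance calculation via Darcy's law:
# R_m = dP / (mu * J), with flux J = Q / A. Values are illustrative only.
def membrane_resistance(dp_pa: float, visc_pa_s: float,
                        flow_m3_s: float, area_m2: float) -> float:
    """Membrane resistance (1/m) from transmembrane pressure, viscosity,
    permeate flow rate, and membrane area."""
    flux = flow_m3_s / area_m2          # permeate flux J (m/s)
    return dp_pa / (visc_pa_s * flux)

# Example: 1 bar TMP, water-like viscosity, 0.5 L/s through 10 m^2.
r_m = membrane_resistance(1.0e5, 1.0e-3, 5.0e-4, 10.0)  # ~2e12 1/m
```

    Trending this resistance batch over batch is what exposes the long-term fouling the prediction model extrapolates.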

    Reporting and Collaboration

    • Projected maintenance period was presented in Daily Meetings on Organizer Topic to plan an efficient time for maintenance prior to membrane failure or significant deterioration that impacts the process.
    Use Case Activity
    Use Case Business Improvement

    View the full article

  21. Batch Quality Prediction
    Batch Quality Prediction
    admin Mon, 11/23/2020 - 10:00

    Challenge

    Quality is the most critical metric in pharmaceutical manufacturing—after all, nothing is more important than protecting patient health. Drug companies need to test each batch to ensure it meets quality standards.

    However, predicting the quality of a batch has traditionally been a challenge for drug manufacturers. The usual process is to take samples while a process is running and send them to the lab for analysis. But waiting for lab results adds time, often several hours, to the process. Poor lab results can require time-consuming changes or expensive rework, if the batch can be recovered at all. If the batch does not meet quality requirements, the manufacturer can lose anywhere from hundreds of thousands to millions of dollars.

    A large molecule pharmaceutical manufacturer was struggling to predict batch quality results in near real-time. Delayed lab results made it difficult for the company to optimize process inputs to control batch yield. Process inputs were set without optimization, risking wasted energy and raw materials or reduced product quality and yield. The company needed a better way to predict batch quality to enable process optimization.

    Solution

    Using Seeq, the scientists built a model of process quality based on data from the OSIsoft PI data historian. The team uses the model to predict the quality of in-progress batches, enabling modifications during production before a batch has to be scrapped for a quality issue.

    This analysis uses typical process measurements such as the reactor temperature, volume, and concentration as process parameters for controlling yield. The raw data is filtered to the desired operation of interest, the reactor heating portion of the process. A predictive model for yield is then generated based on statistically significant process parameters. The model was deployed online to detect abnormal batches.
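    The filter-then-fit approach described above can be sketched with synthetic data (this is a plain least-squares illustration, not the manufacturer's deployed model):

```python
# Illustrative sketch: filter samples to the heating phase, then fit
# yield against process parameters with ordinary least squares.
import numpy as np

phase = np.array(["fill", "heat", "heat", "heat", "heat", "hold"])
temp = np.array([25.0, 70.0, 72.0, 75.0, 78.0, 78.0])   # reactor temperature
conc = np.array([1.0, 1.0, 1.1, 1.2, 1.3, 1.3])         # concentration
yield_pct = np.array([0.0, 81.0, 84.0, 88.0, 92.0, 90.0])

mask = phase == "heat"                   # keep only the heating portion
X = np.column_stack([np.ones(mask.sum()), temp[mask], conc[mask]])
beta, *_ = np.linalg.lstsq(X, yield_pct[mask], rcond=None)

def predict_yield(t: float, c: float) -> float:
    """Predicted yield for a given temperature and concentration."""
    return float(beta @ np.array([1.0, t, c]))
```

    Comparing the prediction against incoming data during a run is what flags an abnormal batch early enough to intervene.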

    Results

    Instead of waiting for quality tests to come back from the lab, the manufacturer has potentially saved millions of dollars by gaining the ability to rapidly identify abnormal batches and analyze their root causes via the model. It can reduce the number of out-of-specification batches by adjusting process parameters during the batch, and it has also reduced wasted energy and materials.

    Developing and deploying an online predictive model of the product quality and yield can aid in fault detection and enable rapid root cause analysis, helping to ensure quality standards are maintained with every batch.

    Data Sources 

    • OSIsoft PI: Process data 
    • OSIsoft PI Event Frames: Batch execution information 
    • Microsoft SQL: Batch lab results 

    Data Cleansing 

    • Filtered data to build model based on reactor heating operation. 

    Calculations and Conditions 

    • Define operations for the batch process 
    • Calculate critical process parameters 
    • Create predictive model for yield 

    Reporting and Collaboration

    • Model deployed online with dashboard visualization 
    • Enabled rapid fault detection and root cause analysis 
       
    Use Case Activity
    Use Case Business Improvement

    View the full article
