Showing results for tags 'monitoring'.

Found 5 results

  1. Summary: Many of our users monitor process variables on a periodic frequency and want a quick, visual way of noting when a process variable is outside some limits. Perhaps you have multiple tiers of limits indicating violations of operating envelopes or operating limits, and you want to create a visualization like the one shown below.

     Solution:

     Method 1: Boundaries Tool
     One approach uses the Boundaries tool. This tool is discussed in Step 3 of this seeq.org post and produces a graphic like the one shown below. Frequently asked questions about this method include: Is there a way to give the different levels of boundaries different colors? Is there a way to color the section outside of the limits rather than inside of the limits? Method 2 below addresses these questions.

     Method 2: Scorecard Metrics in Trend View
     Step 1. Load the signal you want to monitor, along with its limits, into the display pane. The limits can be added directly from the historian, or, if they do not exist in the historian, they can be created using Seeq Formula (a minimal Formula sketch is shown at the end of this post).
     Step 2. Open a new Scorecard Metric from the tools panel and create a simple scorecard metric on your signal of interest, with no statistic. Click the "+" icon to optionally enter thresholds, and add the threshold colors and limits you want to visualize. Note that each threshold can be a constant (enter a numeric value) or variable (select a signal or scalar).
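     If the limits do not already exist in the historian (Step 1), a constant limit signal can be created with Seeq Formula. Below is a minimal sketch; the numeric value is an illustrative placeholder, not a value from this example:

        // Constant upper-limit signal at an illustrative value of 80
        80.toSignal()

     The resulting signal can then be trended alongside the monitored variable or selected as a variable threshold in the Scorecard Metric.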
  2. Update - To see a video of the workflow below in action, check out this Seeq Tips and Tricks video.

     Webhooks are a convenient method to send event data from Seeq to "channel" productivity tools such as Microsoft Teams or Slack. The following post describes how Seeq users can leverage Seeq Data Lab to send messages directly to MS Teams via Webhooks.

     Pre-Requisites:
     1) Seeq Data Lab with Scheduled Notebooks enabled
        a. See Administration Panel -> Configuration and filter for "Features/DataLab/ScheduledNotebooks/Enabled"
     2) MS Teams channel with a Webhook connector

     Assumptions:
     1) A summary of capsules is generated over a defined time range (i.e., every 12 or 24 hours)
     2) Notifications are not near-real-time - the script will run on a pre-defined schedule generally measured in hours, not minutes or seconds
     3) Events of interest are contained in an Asset Tree or Asset Group with one or more Conditions

     Step 1: Configure a Webhook in MS Teams
     To send Seeq capsules/events to MS Teams, a Webhook for the target channel needs to be created. Detailed instructions on how to configure Webhooks in MS Teams can be found here: https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook
     For the purpose of this post, we will create a Webhook URL in our "Seeq Notifications" Team to alert on temperature excursions. The alerts will be posted in the "Cooling Tower Temperature Monitoring" channel. Team and channel names can be configured to fit your operation; this is just an example for demonstration purposes. MS Teams will generate a Webhook URL, which we will use in our script in Step 4.

     Step 2: Identify or Create an Asset Group or Asset Tree to Define the Monitoring Scope
     To scope the events of interest, we will use an Asset Tree that contains "High Temperature" conditions for a collection of monitoring assets. While this is not a requirement for using Webhooks, it helps with scaling the notification workflow and allows us to combine multiple Conditions from different Assets into a single workflow. To learn how to create an Asset Tree, follow the "Asset Trees 1 – Introduction.ipynb" tutorial in the SPy Documentation folder contained in each new Seeq Data Lab project. The script for the Monitoring Asset Tree used in this post is attached for reference: Monitoring Asset Tree.ipynb
     Alternatively, Asset Groups can also be used to create an asset structure directly in Workbench without using Python.
     Once the Asset Group/Tree containing the monitoring Conditions is determined, create a Worksheet with a Treemap or Table overview for monitoring use. Make note of the URL, as it will be included in the notification as a link to the monitoring overview whenever an event is detected. For locally scoped Asset Groups or Trees, it will also tell the script where to look for Conditions.

     Step 3: Install the pymsteams Library in Seeq Data Lab
     The pymsteams library allows users to compose and post messages (or cards) to MS Teams. The library can be installed from the PyPI repository (pypi.org) using the pip install command:
     1) Open a Seeq Data Lab project
     2) Launch a Terminal session
     3) Install the pymsteams library by executing: pip install pymsteams
     Additional documentation on pymsteams can be found here: https://pypi.org/project/pymsteams/

     Step 4: Create or Update the Monitoring Script
     We are now ready to configure a monitoring script that sends notifications to the Webhook configured in Step 1, using the Conditions scoped to the Asset Tree in Step 2.
     a) Import the relevant libraries, including the newly installed pymsteams library:

        import pandas as pd
        from datetime import datetime, timedelta
        import pytz
        import pymsteams
        # spy is available by default in Seeq Data Lab; the explicit import is shown for completeness
        from seeq import spy

     b) Configure the input parameters:

        # Refer to the Microsoft documentation on how to configure a Webhook for an MS Teams channel
        webhook_url = 'YOUR WEBHOOK HERE'

        # Specify the monitoring workbook - this is where the alert will link, with the associated timeframe
        monitoring_workbook_url = 'YOUR WORKBOOK HERE'

        # Specify the asset tree and associated condition for which the webhook should be triggered
        asset_tree = 'Compressor Monitoring'
        monitoring_condition = 'High Temperature'

        # Specify the lookback period and timezone to search for capsules
        lookback_interval_hours = 24
        timezone = 'US/Mountain'

     c) Search for event capsules:

        # Set the time range to look for new capsules
        delta = timedelta(hours=lookback_interval_hours)
        end = datetime.now(tz=pytz.timezone(timezone))
        start = end - delta

        # Parse the workbook information
        workbook_id = spy.utils.get_workbook_id_from_url(monitoring_workbook_url)
        worksheet_id = spy.utils.get_worksheet_id_from_url(monitoring_workbook_url)

        # This block is optional: it stores the condition search results once instead of searching each time
        # the script runs, which saves time if the search result is not expected to change.
        # To reset, just delete the .pkl file.
        pkl_file_name = asset_tree + '_' + monitoring_condition + '_' + workbook_id + '.pkl'
        try:
            monitoring_conditions = pd.read_pickle(pkl_file_name)
        except:
            monitoring_conditions = spy.search({'Name': monitoring_condition,
                                                'Type': 'Condition',
                                                'Path': asset_tree},
                                               workbook=workbook_id, quiet=True)
            monitoring_conditions.to_pickle(pkl_file_name)

        # Pull capsules present during the specified time range
        events = spy.pull(monitoring_conditions, start=start, end=end,
                          group_by=['Asset'], header='Asset', quiet=True)
        number_of_events = len(events)
        events

     d) Send a message to the Webhook using the pymsteams library if a capsule is detected in the time range:

        # If capsules are present, trigger the webhook to compile and send a card to MS Teams
        if number_of_events != 0:
            events.sort_values(by='Condition', inplace=True)

            # Create the URL for the specific notification time frame using the Seeq URL builder
            investigate_start = start.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
            investigate_end = end.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
            investigate_url = f"https://explore.seeq.com/workbook/builder?startFresh=false"\
                              f"&workbookName={workbook_id}"\
                              f"&worksheetName={worksheet_id}"\
                              f"&displayStartTime={investigate_start}"\
                              f"&displayEndTime={investigate_end}"\
                              f"&expandedAsset={asset_tree}"

            # Create the message information to be posted in the channel
            # (positional columns: event[2] = Capsule Start, event[3] = Capsule End, event[4] = Capsule Is Uncertain)
            assets = []
            text = []
            for event in events.itertuples():
                assets.append(event.Condition)
                # Capsule started before the lookback window
                if pd.isnull(event[2]):
                    if pd.isnull(event[3]) or event[4] == True:
                        text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and is in progress')
                    else:
                        text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
                # Capsule started during the lookback window
                else:
                    if pd.isnull(event[3]) or event[4] == True:
                        text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and is in progress')
                    else:
                        text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
            message = '\n'.join(text)

            # Create the MS Teams card - see the pymsteams documentation for details
            TeamsMessage = pymsteams.connectorcard(webhook_url)
            TeamsMessage.title(monitoring_condition + " Event Detected")
            TeamsMessage.text(monitoring_condition + ' triggered in ' + asset_tree + f' Asset Tree in the last {lookback_interval_hours} hours')
            TeamsMessageSection = pymsteams.cardsection()
            for i, value in enumerate(text):
                TeamsMessageSection.addFact(assets[i], value)
            TeamsMessage.addSection(TeamsMessageSection)
            TeamsMessage.addLinkButton('Investigate in Workbench', investigate_url)
            TeamsMessage.send()

     Step 5: Test the Script
     Execute the script, ensuring at least one "High Temperature" capsule is present in the lookback duration. The events dataframe in Step 4c will list the capsules that were detected. If no capsules are present, adjust the lookback duration. If at least one capsule is detected, a notification will automatically be posted in the channel for which the Webhook has been configured.

     Step 6: Schedule the Script to Run on a Specified Frequency
     If the script operates as desired, configure a schedule for it to run automatically:

        # Optional - schedule the above script to run on a regular interval
        spy.jobs.schedule('every day at 6am')

     The script will run on the specified interval and post a summary of the "High Temperature" capsules/events that occurred during the lookback period directly to the MS Teams channel. Refer to the spy.jobs.ipynb notebook in the "SPy Documentation" folder for additional information on scheduling options.

     Attached is a copy of the full example script: Seeq MS Teams Notification Webhook - Example Script.ipynb
  3. Monitoring KPIs for your process or equipment is a valuable way to assess overall system performance and health. However, it can be cumbersome to comb through all the different KPIs and understand when each is deviating from an expected range or set of boundaries. We can shorten our time to insight by aggregating all associated KPIs into one Health Score; the result lets us monitor just one trend item and take action when deviations occur. To show how a Health Score is built, the example below looks at 4 KPIs for a pump and aggregates them into one final Health Score. Note that the time period examined is the 3-month period leading up to a pump failure.

     KPI DETAILS

     KPI #1
     The first indicator I can monitor on this pump is how my Discharge Pressure is trending relative to an expected range determined by my manufacturer's pump performance curve. (To enable using a pump curve in Seeq, reference this article for more information: Creating Pump and Compressor Curves in Seeq.) As my Discharge Pressure deviates from the expected range, red capsules are created using a Deviation Search in Seeq.

     KPI #2
     The second indicator I can monitor on this pump is whether the NPSHa (available) remains higher than the NPSHr (required) stipulated by the manufacturer's pump datasheet. If my NPSHa drops below my NPSHr, red capsules are created using a Deviation Search in Seeq. (No deviations were noted in the time period evaluated.)

     KPI #3
     The third indicator I can monitor on this pump is whether the pump vibration signals remain lower than specified thresholds (these could be determined empirically, by the manufacturer, or from an industry standard). In this case I have 4 vibration signals. I am using a "union" approach to combine the 4 conditions into the final KPI Alert, which will show red capsules if any of the 4 vibrations exceed their threshold. The Formula for this KPI Alert condition is:

        $vib1 > $limit1 or $vib2 > $limit2 or $vib3 > $limit3 or $vib4 > $limit4

     KPI #4
     The fourth indicator I can monitor on this pump is whether the flow through the pump remains higher than the minimum allowable flow stipulated by the manufacturer's pump curve/datasheet. If my measured flow drops below my flow limit, red capsules are created using a Deviation Search in Seeq. (No deviations were noted in the time period evaluated.)

     BUILDING THE HEALTH SCORE
     Now that we have 4 conditions, one for each KPI exceeding its normal operating range, we need to aggregate them into the Health Score. First, we determine what fraction of the time each KPI Alert has been active during each day (a value from 0 to 1). We do this by creating a Formula for each KPI Alert condition with the following syntax:

        // Determine the % duration that a KPI alert is active during each day (in fraction form)
        $kpi1.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()

     Applying this to all 4 KPI Alert conditions gives the result below. Note that if a KPI Alert condition is "on" for the full duration of a day, it shows a value of 1; if partially "on", it shows a fractional value between 0 and 1; and if no condition is present at all, it shows a value of 0. Now we aggregate these individual indicators into a rolled-up Health Score by taking the sum of squares and dividing by the total number of indicators.
     To do so, enter the following in a Formula:

        // Aggregate the sum of squares of the fractional alert values
        ($k1.pow(2) + $k2.pow(2) + $k3.pow(2) + $k4.pow(2))/4

     I could also have performed the above 2 steps in one Formula:

        // First determine the % duration that each KPI alert is active during each day (in fraction form)
        $k1 = $kpi1.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
        $k2 = $kpi2.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
        $k3 = $kpi3.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
        $k4 = $kpi4.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
        // Then aggregate the sum of squares of those fractional values
        $sumOfSquares = $k1.pow(2) + $k2.pow(2) + $k3.pow(2) + $k4.pow(2)
        $sumOfSquares/4

     I can also add a Health Score high limit (in my example 0.25, the value reached if one KPI Alert is active for a full day) to trigger some action (perhaps scheduling pump maintenance) prior to failure. A new red capsule will appear if my Health Score exceeds this limit of 0.25; this Health Score Alert can be configured via Value Search or Deviation Search (a Formula alternative is sketched at the end of this post). Enter the following into Formula to create the limit (a new scalar):

        0.25

     Below you will see my final Health Score trend item as well as the limit and the Health Score Alert condition. Optionally, I can use the high limit to create a shaded boundary for my Health Score using the Boundaries tool (I also create a low limit of 0 to serve as the lower boundary). We can see that in the month leading up to the failure (early February), I had multiple forewarning indications through this aggregated Health Score. In the future, I could monitor this pump's health in a dashboard in a Seeq Organizer Topic and trigger maintenance activity proactively.

     Dashboard Example:
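     As a footnote to the Health Score Alert described above: instead of a Value Search, the same condition can be written directly in Formula. A minimal sketch, assuming the Health Score signal is referenced as $healthScore (a placeholder name) and using the 0.25 limit from this example:

        // Condition that is present whenever the daily Health Score exceeds the 0.25 high limit
        $healthScore > 0.25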
  4. Summary: I want to create a process variable monitoring dashboard to view my trends over the last day.

     Step 1. Create a new Organizer Topic. From the Seeq home screen, click the "New" drop-down and select "Organizer Topic".

     Step 2. Use the "Insert Table" button to insert a table with as many rows and columns as you would like charts in your dashboard, or insert a template to provide a starting layout for your content.

     Step 3. In a blank table, or to adjust formatting in a template, use the table formatting options (available by clicking into a cell or highlighting multiple cells) to merge cells; for example, you can merge the top row to create a title row if desired. From the table formatting options, you can also change the background color of a cell by selecting Cell Properties.

     Step 4. Click into the cell where you want to insert your Seeq trend and, with your cursor in the cell, use the Seeq Q logo in the top left of the toolbar to insert Seeq content. Alternatively, insert your content anywhere in your Topic and cut and paste it into the correct cell in the table. Tip: to adjust the size of your content once inserted, select the content, select the blue Seeq Q icon, and customize the content size, shape, and more.

     Step 5. Repeat Step 4 for each monitoring chart you want to view in your dashboard.

     Step 6. Create a custom date range to apply to the Seeq content in your dashboard. This example shows how to create a daily auto-update schedule. Click the + icon in the Date Ranges panel to open a new custom date range window, then click Auto Update. Customize your display range and click Next. Select how often you want the update to occur (in this case, daily) and when you want it to update (in this case, 6 AM Pacific). Make sure to click Save! If you'd like to be notified when your dashboard updates, expand the Notify when Schedule Runs option to enable notifications to your email.

     Step 7. Share your dashboard with your peers by clicking the Share icon in the top right, with options for an Edit, View, or Presentation link. Alternatively, share a PDF by selecting the PDF icon on the toolbar.

     Content Verified DEC2023
  5. When addressing a business problem with analytics, we should always start by asking ourselves 4 key questions:
     • Why are we talking about this: what is the business problem we are trying to address, and what value will solving it generate?
     • What data do I have available to help solve this problem?
     • How can I build an effective analysis to identify the root of my problem (both in the past and in the future)?
     • How will I visualize the outputs to ensure proactive action to prevent the problem from manifesting? This is where you extract the value.

     With that in mind, here is how we approach these 4 questions in Seeq when dealing with heat exchanger performance issues.

     What is the business problem?
     Issues with heat exchanger performance can lead to downstream operational issues, which may lead to lost production and revenue. To effectively monitor the exchanger, a case-specific approach is required depending on the performance driver:
     • Fouling in the exchanger is limiting heat transfer, requiring further heating/cooling downstream
     • Fouling in the exchanger is limiting system hydraulics, causing flow restrictions or other concerns
     • Equipment integrity: identifying leaks inside the exchanger

     What data do we have available?
     • Process sensors: flow rates, temperatures, pressures, control valve positions
     • Design data: drawings, datasheets
     • Maintenance data: previous repairs or cleanings, mean time between cleanings

     How can we tackle the business problem with the available data?
     There are many ways to monitor a heat exchanger's performance, and the selection of the appropriate indicator depends on a) the main driver for monitoring and b) the available data. The decision tree below is meant to guide which indicators can be applied based on your dataset. Generally speaking, the more data available, the more robust an analysis you can create (i.e., first-principles-based calculations). However, in the real world we are often working with sparse datasets and may need to rely on data-based approaches to identify subtle trends which indicate changes in performance over time. Implementing each of the indicators listed above follows a similar process in Seeq Workbench, as outlined in the steps below. In this example, we focus on a data-based approach (the middle category above). For an example of a first-principles-based approach, check out this Seeq University video.

     Step 1 - Gather Data
     • In a new Workbench, search the Data tab for relevant process signals.
     • Use Formula to add scalars, or use the .toSignal() function to convert supplemental data such as boundary limits or design values.
     • Use Formula, Value Search, or Custom Condition to enter maintenance period(s) and heat exchanger cycle(s) conditions (if these cannot be imported from a datasource).

     Step 2 - Identify Periods of Interest
     • Use Value Search, Custom Condition, Composite Condition, or Formula to identify downtime periods, periods where the exchanger is bypassed, or periods of bad data which should be ignored in the analysis.

     Step 3 - Cleanse Data
     • Use Formula to remove periods of bad data or downtime from the process signals, using functions such as $signal.remove($condition) or $signal.removeOutliers(). A minimal sketch is shown after Step 4 below.
     • Use Formula to smooth data as needed, using functions such as $signal.agileFilter() or the Low Pass Filter tool.

     Step 4 - Quantify
     • Use Formula to calculate any required equations. In this example, no calculations are required.
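     As referenced in Step 3, a minimal data-cleansing Formula might look like the following. The variable names ($temperature, $downtime) and the filter window are placeholders, not items from this analysis:

        // Remove downtime or bad-data periods from the signal, then smooth the result
        $temperature.remove($downtime).agileFilter(15min)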
     Step 5 - Model & Predict
     • Use Prediction, choose a process signal as the target variable, and use the other available process signals as input variables; choose a training period when the exchanger is known to be in good condition.
     • Using Boundaries: establish upper and lower boundary signals based on the predicted (model) signal from the previous step (e.g., +/-5% of the modeled signal represents the boundaries). A Formula sketch of these boundaries and the deviation condition is shown at the end of this post.

     Step 6 - Monitor
     • Use Deviation Search or Value Search to find periods where the target signal exceeds a boundary. The deviation capsules created represent periods where heat exchanger performance is not as expected.
     • Aggregate the Total Duration or Percent Duration statistic using Scorecard or Signal from Condition to assess deteriorating exchanger health over time.

     How can we visualize the outputs to ensure proactive action in the future?

     Step 7 - Publish
     Once the analysis has been built in Seeq Workbench, it can be published in a monitoring dashboard in Seeq Organizer, as seen in the examples below. The dashboard can then be shared among colleagues in the organization, giving them the ability to monitor the exchanger, log alerts, and take action as necessary as time progresses. This last step is key to implementing a sustainable workflow and ensuring full value is extracted from solving your business problem.
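     As referenced in Steps 5 and 6, the +/-5% boundaries and the deviation condition can also be written directly in Formula. A minimal sketch, where $target is the measured signal and $predicted is the output of the Prediction tool (both names are placeholders); each expression would be its own Formula item:

        // Upper boundary at +5% of the modeled signal (use 0.95 in a second Formula item for the lower boundary)
        $predicted * 1.05

        // Deviation condition: measured signal outside the +/-5% band around the model
        $target > $predicted * 1.05 or $target < $predicted * 0.95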