
Leaderboard

Popular Content

Showing content with the highest reputation since 12/04/2021 in all areas

  1. A small team of us (with help from Seeq team members) built a short script to extract signal names from our legacy Excel workbooks so that we could push them to Seeq Workbench. Perhaps, like us, you are migrating workbooks used for monitoring/reporting over to Seeq and could do with a boost to get started, so you can get on with the real work of setting up the Seeq workbench. The attached script extracts the signal names from each Excel worksheet (assuming you can craft your own regex search filter) and then pushes them to Workbench with labeling for organization. Hopefully it's a help to some 🙂 signal_transfer_excel2seeq_rev1.ipynb
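For anyone who wants a feel for the idea before opening the notebook, here is a minimal sketch (this is not the attached notebook: the regex, file name, and workbook label are placeholders, and it assumes it runs inside Seeq Data Lab where spy is already logged in):

import re
import pandas as pd
from seeq import spy

TAG_PATTERN = re.compile(r'[A-Z]{2,}[_-]?\d{3,}')  # placeholder regex - craft your own filter

def extract_signal_names(workbook_path):
    """Scan every worksheet in an Excel file and collect strings that look like tag names."""
    names = set()
    for sheet in pd.read_excel(workbook_path, sheet_name=None, header=None).values():
        for cell in sheet.astype(str).to_numpy().ravel():
            names.update(TAG_PATTERN.findall(cell))
    return sorted(names)

names = extract_signal_names('legacy_report.xlsx')             # placeholder file name
found = spy.search(pd.DataFrame({'Name': names}), quiet=True)  # confirm the tags exist in Seeq
# 'found' is a metadata DataFrame that can then be pushed/labeled for organization;
# see signal_transfer_excel2seeq_rev1.ipynb for how that step is actually done.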
    5 points
  2. When creating signal forecasts, especially for cyclic signals that degrade, we often use forecastLinear() in Formula to easily forecast a signal out into the future and determine when a threshold will be met. The methodology is often the same whether we are looking at a filter, a heat exchanger, or any other equipment that fouls over time or needs periodic maintenance when a KPI threshold is met. A question that comes up occasionally from users is how to create a signal forecast that only uses data from the current operating cycle. The forecastLinear() operator only takes a historical training period into account and does not check whether that data comes from the current cycle or not, which can give unexpected results. Before entering the formula, you will need to define: a condition that identifies the current cycle, here called "$runningCycle", and a signal to do a linear forecast on, here called "$signal". To forecast out into the future based only on the most recent cycle, the following code snippet can be used in Formula:
$training = $runningCycle.setMaximumDuration(10d).toGroup(capsule(now()-2h, now()))
$forecast = $signal.predict($training, timeSince(toTime('2000-01-01T00:00Z')))
$signal.forecastSplice($forecast, 1d)
In this code snippet, there are a few parameters that you might want to change:
.setMaximumDuration(10d): sets the longest expected cycle duration to 10 days; change this so it is longer than the longest cycle you would expect to see.
capsule(now()-2h, now()): this creates the period during which Seeq will look for the most recent cycle, in this case any time in the last 2 hours. If you have very frequent data (samples every few seconds to minutes), then 2 hours or less will work. If you have infrequent data (samples once a day or less), extend this so that it covers the last 3-4 data points.
$signal.forecastSplice($forecast, 1d): forecastLinear() has an option to force the prediction through the last sample point. This duration parameter (1 day in this case) does something similar: it blends the last historical data point with the forecast over the given time range. In other words, if the last data point had a value of 5 but the first forecasted data point had a value of 10, this parameter is the time frame over which to smooth from the historical data point to the forecast.
Here is a screenshot of my formula, and the formula in action:
    4 points
  3. Webhooks are a convenient method to send event data from Seeq to “channel” productivity tools such as Microsoft Teams or Slack. The following post describes how Seeq users can leverage Seeq Data Lab to send messages directly to MS Teams via Webhooks.

Pre-Requisites:
1) Seeq Data Lab with Scheduled Notebooks enabled
   a. See Administration Panel -> Configuration and filter for “Features/DataLab/ScheduledNotebooks/Enabled”
2) MS Teams Channel with a Webhook Connector

Assumptions:
1) Summary of capsules generated in a defined time range (i.e., every 12 or 24 hours)
2) Notifications are not near-real-time – the script will run on a pre-defined schedule generally measured in hours, not minutes or seconds
3) Events of interest are contained in an Asset Tree or Group with one or more Conditions

Step 1: Configure Webhook in MS Teams
To send Seeq capsules/events to MS Teams, a Webhook for the target channel needs to be created. Detailed instructions on how to configure Webhooks in MS Teams can be found here: https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook
For the purpose of this post, we will create a Webhook URL in our “Seeq Notifications” Team to alert on Temperature Excursions. The alerts will be posted in the “Cooling Tower Temperature Monitoring” channel. Teams and Channel names can be configured to fit your need/operation; this is just an example for demonstration purposes. MS Teams will generate a Webhook URL which we will use in our script in Step 4.

Step 2: Identify or Create an Asset Group or Asset Tree to define the Monitoring Scope
To scope the events of interest, we will use an Asset Tree that contains “High Temperature” conditions for a collection of Monitoring Assets. While this is not a requirement for using Webhooks, it helps with scaling the notification workflow. It also allows us to combine multiple Conditions from different Assets into a single workflow. To learn how to create an Asset Tree, follow the “Asset Trees 1 – Introduction.ipynb” tutorial in the SPy Documentation folder contained in each new Seeq Data Lab project. The script for the Monitoring Asset Tree used in this post is attached for reference: Monitoring Asset Tree.ipynb
Alternatively, Asset Groups can also be used to create an asset structure directly in Workbench without using Python. Once the Asset Group/Tree containing the monitoring Conditions is determined, create a Worksheet with a Treemap or Table overview for monitoring use. Make note of the URL, as it will be included in the notification as a link to the monitoring overview whenever an event is detected. For locally scoped Asset Groups or Trees, it will also inform the script where to look for Conditions.

Step 3: Install the “pymsteams” library in Seeq Data Lab
The pymsteams library allows users to compose and post messages (or cards) to MS Teams. The library can be installed from the pypi repository (pypi.org) using the “pip install” command.
1) Open a Seeq Data Lab project
2) Launch a Terminal session
3) Install the pymsteams library by executing pip install pymsteams
Additional documentation on pymsteams can be found here: https://pypi.org/project/pymsteams/

Step 4: Create or Update the Monitoring script
We are now ready to configure a monitoring script that sends notifications to the Webhook configured in Step 1 using Conditions scoped to the Asset Tree in Step 2.
a) Import the relevant libraries, including the newly installed pymsteams library

import pandas as pd
from datetime import datetime,timedelta
import pytz
import pymsteams

b) Configure Input Parameters

#Refer to Microsoft Documentation on how to configure a Webhook for a MS Teams channel
webhook_url='YOUR WEBHOOK HERE'

#Specify the monitoring workbook - this is where the alert will link with the associated timeframe
monitoring_workbook_url='YOUR WORKBOOK HERE'

#Specify the asset tree and associated condition for which the webhook should be triggered
asset_tree='Compressor Monitoring'
monitoring_condition='High Temperature'

#Specify the lookback period and timezone to search for capsules
lookback_interval_hours=24
timezone=('US/Mountain')

c) Search for Event Capsules

#Set time range to look for new conditions
delta=timedelta(hours=lookback_interval_hours)
end=datetime.now(tz=pytz.timezone(timezone))
start=end-delta

#Parse the workbook information
workbook_id=spy.utils.get_workbook_id_from_url(monitoring_workbook_url)
worksheet_id=spy.utils.get_worksheet_id_from_url(monitoring_workbook_url)

#This block is optional, it stores search results for the conditions once instead of searching each time the
#script runs. Saves time if the search result is not expected to change. To reset, just delete the .pkl file.
pkl_file_name=asset_tree+'_'+monitoring_condition+'_'+workbook_id+'.pkl'
try:
    monitoring_conditions=pd.read_pickle(pkl_file_name)
except:
    monitoring_conditions=spy.search({'Name':monitoring_condition,
                                      'Type':'Condition',
                                      'Path':asset_tree},
                                     workbook=workbook_id,quiet=True)
    monitoring_conditions.to_pickle(pkl_file_name)

#Pull capsules present during the specified time range
events=spy.pull(monitoring_conditions,start=start,end=end,group_by=['Asset'],header='Asset',quiet=True)
number_of_events=len(events)
events

d) Send Message to Webhook using the pymsteams library if a Capsule is detected in the time range

#If capsules are present, trigger the webhook to compile and send a card to MS Teams
if number_of_events != 0:
    events.sort_values(by='Condition',inplace=True)

    #Create url for specific notification time-frame using Seeq URL builder
    investigate_start=start.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
    investigate_end=end.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
    investigate_url=f"https://explore.seeq.com/workbook/builder?startFresh=false"\
                    f"&workbookName={workbook_id}"\
                    f"&worksheetName={worksheet_id}"\
                    f"&displayStartTime={investigate_start}"\
                    f"&displayEndTime={investigate_end}"\
                    f"&expandedAsset={asset_tree}"

    #Create message information to be posted in channel
    assets=[]
    text=[]
    for event in events.itertuples():
        assets.append(event.Condition)
        #Capsule started before lookback window
        if pd.isnull(event[2]):
            if pd.isnull(event[3]) or event[4] == True:
                text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and is in Progress')
            else:
                text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
        #Capsule started during lookback window
        else:
            if pd.isnull(event[3]) or event[4] == True:
                text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and is in Progress')
            else:
                text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
    message='\n'.join(text)

    #Create MS Teams Card - see pymsteams documentation for details
    TeamsMessage = pymsteams.connectorcard(webhook_url)
    TeamsMessage.title(monitoring_condition+" Event Detected")
    TeamsMessage.text(monitoring_condition+' triggered in '+asset_tree+f' Asset Tree in the last {lookback_interval_hours} hours')
    TeamsMessageSection=pymsteams.cardsection()
    for i,value in enumerate(text):
        TeamsMessageSection.addFact(assets[i],value)
    TeamsMessage.addSection(TeamsMessageSection)
    TeamsMessage.addLinkButton('Investigate in Workbench',investigate_url)
    TeamsMessage.send()

Step 5: Test the Script
Execute the script, ensuring at least one “High Temperature” capsule is present in the lookback duration. The events dataframe in step 4 c) will list the capsules that were detected. If no capsules are present, adjust the lookback duration. If at least one capsule is detected, a notification will automatically be posted in the channel for which the Webhook has been configured.

Step 6: Schedule Script to run on a specified Frequency
If the script operates as desired, configure a schedule for it to run automatically.

#Optional - schedule the above script to run on a regular interval
spy.jobs.schedule(f'every day at 6am')

The script will run on the specified interval and post a summary of “High Temperature” capsules/events that occur during the lookback period directly to the MS Teams channel. Refer to the spy.jobs.ipynb notebook in the “SPy Documentation” folder for additional information on scheduling options.
Attached is a copy of the full example script: Seeq MS Teams Notification Webhook - Example Script.ipynb
    3 points
  4. Hi Coolhunter, I have seen this requested multiple times and one solution might be to use a custom PI Vision symbol that enables you to embed Seeq content into PI Vision. A solution to this challenge can be found here: Get the most out of PI Vision - Seeq Analytics in PI Vision - Seeq in PI Vision (werusys.de) If you want to know more about the PI Vision integration with Seeq feel free to drop me a mail: julian.weber@werusys.de Cheers, Julian Seeq-WerusysPIVision.pdf
    3 points
  5. Check out the Data Lab script and the video that walks through it to automate data pull -> apply ML -> push results to Workbench in an efficient manner. Of course you can skin the cat many different ways, but this gives a good way to do it in bulk. Use case details: apply ML on Temperature signals across the whole Example Asset Tree on a weekly basis. For your case, you can build your own asset tree, filter the relevant attributes instead of Temperature, and set the spy.jobs.schedule frequency to whatever works for you. Let me know if there are any unanswered questions in my post or demo. Happy to update as needed. apply_ml_at_scale.mp4 Apply ML on Asset Tree Rev0.ipynb
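The notebook and video have the real details; as a rough, hedged outline of the pull/model/push loop (this is not the attached script: the tree path, the model, the dates, and the schedule string are placeholders, and the scikit-learn fit is just a stand-in for whatever ML you apply), the skeleton looks something like this:

import pandas as pd
from seeq import spy
from sklearn.linear_model import LinearRegression  # stand-in for the actual ML step

# Find all Temperature signals under the asset tree (path/name are placeholders)
signals = spy.search({'Path': 'Example >> Cooling Tower 1', 'Name': 'Temperature'}, quiet=True)

# Pull a week of data on a regular grid
data = spy.pull(signals, start='2021-11-27', end='2021-12-04', grid='5min', header='Name', quiet=True)

# Fit a model per signal; a trivial trend fit stands in for the real ML
results = pd.DataFrame(index=data.index)
x = (data.index - data.index[0]).total_seconds().to_numpy().reshape(-1, 1)
for col in data.columns:
    model = LinearRegression().fit(x, data[col].ffill().bfill().to_numpy())
    results[f'{col} Prediction'] = model.predict(x)

# Push the results back as signals scoped to a workbook, then schedule the notebook weekly
# (check spy.jobs.ipynb for the exact schedule syntax supported by your version)
spy.push(data=results, workbook='Apply ML at Scale', quiet=True)
spy.jobs.schedule('every monday at 6am')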
    3 points
  6. Have you ever wanted to scale calculations in Seeq across different assets without having to delve into external systems or write code to generate asset structures? Is your process data historian a giant pool of tags which you need to have organized and named in a human-readable format? Do you want to take advantage of Seeq features such as Asset Swapping and Treemaps, but do not have an existing asset structure to leverage? If the answer is yes, Asset Groups can help! Beginning in Seeq version R52, Asset Groups were added to configure collections of items such as Equipment, Operating Lines, KPIs, etc. via a simple point-and-click tool. Users can leverage Asset Groups to easily organize and scale their analyses directly in Workbench, as well as apply Seeq asset-centric tools such as Treemaps and Tables across Assets.

What is an Asset Group?
An Asset Group is a collection of assets (listed in rows) and associated parameters called “Attributes” (listed in columns). If your assets share common parameters, Asset Groups can be a great way to organize and scale analyses instead of re-creating each analysis separately. Assets can be anything users want them to be: a piece of equipment, a geographical region, a business unit, a KPI, etc. Asset Groups serve to organize and map associated parameters (Attributes) for each Asset in the group. Each Asset can have one or several Attributes mapped to it. Attributes are parameters that are common to all the assets and are mapped to tags from one or many data sources. Examples of Asset/Attribute combinations include:

Asset | Attribute(s)
Pump | Suction Pressure, Discharge Pressure, Flow, Curve ID, Specific Gravity
Heat Exchanger | Cold Inlet T, Cold Outlet T, Hot Inlet T, Hot Outlet T, Surface Area
Production Line | Active Alarms, Widgets per Hour, % of time in Spec

It’s very important to configure the name of the common Attribute to be the same for all Assets, even if the underlying tag or datasource is not. Using standard nomenclature for Attributes (columns) enables Seeq to later compare and seamlessly “swap” between assets without having to worry about the underlying tag name or calculation. Do This: / Do Not Do This:

How to Configure Asset Groups in Seeq
Let’s create an Asset Group to organize a few process tags from different locations. While Asset Groups support pre-existing data tree structures (such as OSIsoft PI Asset Framework), the following example assumes the tags are not structured and are added manually from a pool of existing process tags. NOTE: Asset Groups require an Asset Group license. For versions prior to R54, they also have to be enabled in the Seeq Administrator Configuration page. Contact your Seeq Administrator for details.

1) In the “Data” tab, create a new Asset Group.
2) Specify the Asset Group name and add Assets. You can rename the assets by clicking on the respective name in the first column. In this case, we'll define Locations 1-3.
3) Map the source tags.
   a. Rename “Column 1” by clicking on the text and entering a new name.
   b. Click on the (+) icon to bring up the search window and add the tag corresponding to each asset. You can use wildcards and/or regular expressions to narrow your search.
   c. Repeat mapping of the tags for the other assets until there’s a green checkmark in each row.
   d. Additional source tags can be added by clicking on the “Add Column” button in the toolbar. In this case, we will add a column for Relative Humidity and map a tag for each of the Locations.
4) Save the Asset Group.
5) Trend using the newly created Asset Group. The newly created Asset Group will now be available in the Data pane and can be used for navigation and trending.
   a. Navigate to “Location 1” and add the items to the display pane by clicking on them. You can also change the display range to 7 days to show a bit more data.
   b. Notice the Assets column now listed in the Details pane, showing which Asset the Signal originates from. We can also add the Asset Path to the display by clicking on Labels and checking the desired display configuration settings (Name, Unit of Measure, etc.).
   c. Swap to Location 2 (or 3) using the Asset Swapping functionality. In the Data tab, navigate up one level in the Asset Group, then click the Swap icon to swap the display items to a different location. Notice how Seeq automatically swaps the display items.
6) Create a “High Temperature” Condition. Calculations configured from Asset Group items will “follow” that asset, which can help in scaling analyses. Let’s create a “High Temperature” condition.
   a. Using “Tools -> Identify -> Value Search”, create a condition for when the Temperature exceeds 100.
   b. Click “Execute” to generate the Condition.
   c. Notice the condition has been generated and is automatically affiliated with the Asset from which the Signals were selected.
   d. Swap to a different Asset and notice the “High Temperature” Condition swaps as well, using the same condition criteria but with the signals from the swapped Asset. Note: Calculations can also be configured in the Asset Group directly, which can be advantageous if different condition criteria need to be defined for each asset. This topic will be covered in Part 2 of this series.
7) Create a Treemap. Asset Groups enable users to combine monitoring across assets using Seeq’s Treemap functionality.
   a. Set up a Treemap for the Assets in the Group by switching to the Treemap view in the Seeq Workbench toolbar.
   b. Click on the color picker for the “High Temperature” condition to select a color to display when that condition is active in the given time range. (If you have more than one Condition in the Details pane, repeat this step for each Condition.)
   c. A Treemap is generated for each Asset in the Asset Group. Signal statistics can optionally be added by configuring the “Statistics” field in the toolbar. Your treemap may differ depending on the source signal and time range selected. The treemap will change color if the configured Condition triggers during the time period selected.

This covers the basics for Asset Groups. Please check out Part 2 on how to configure calculations in Asset Groups and add them directly to the hierarchy.
    3 points
  7. Users of OSIsoft Asset Framework often want to filter elements and attributes based on the AF Templates they were built from. At this time, though, the spy.search command in Seeq Data Lab only filters on the properties Type, Name, Description, Path, Asset, Datasource Class, Datasource ID, Datasource Name, Data ID, Cache Enabled, and Scoped To. This post discusses a way in which we can still filter elements and/or attributes based on AF Template.

Step 1: Retrieve all elements in the AF Database
The code below will return all assets in an AF Database that are based on an AF Template whose name contains Location.

asset_search = spy.search({"Path":"Example-AF", "Type":"Asset"}, all_properties=True) #Make sure to include all properties since this will also return the AF Template
asset_search.dropna(subset=['Template'], inplace=True) # Remove assets not based on a template since we can't filter with NaN values
asset_search_location = asset_search[asset_search['Template'].str.contains('Location')] # Apply filter to only consider Location AF Template assets

Step 2: Find all relevant attributes
This code will retrieve the desired attributes. Note that wildcards and regular expressions can be used to find multiple attributes.

signal_search = spy.search({"Path":"Example-AF", "Type":"Signal", "Name":"Compressor Power"}) #Find desired attributes

Step 3: Filter attributes based on whether they come from an element built on the desired AF Template
The last step cross-references the signals returned with the desired elements. This is done by looking at their paths.

# Define a function to recreate paths; items directly beneath the database asset don't have a Path
def path_merger(row):
    row = row.dropna()
    return ' >> '.join(row)

asset_search_location['Full Path'] = asset_search_location[['Path', 'Asset', 'Name']].apply(lambda row: path_merger(row), axis=1) # Create path for the asset that includes its name
signal_search['Parent Path'] = signal_search[['Path', 'Asset']].apply(lambda row: path_merger(row), axis=1) # Create path for the parents of the signals
signal_search_location = signal_search[signal_search['Parent Path'].isin(asset_search_location['Full Path'])] # Cross reference parent path in signals with full paths in assets to see if these signals are children of the desired elements
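If it helps to see where this leads: the filtered result is just another SPy metadata DataFrame, so (as a hedged example with placeholder dates and grid) it can go straight into a pull:

# Pull only the signals whose parent element was built on the 'Location' AF Template
data = spy.pull(signal_search_location, start='2021-11-27', end='2021-12-04', grid='15min', quiet=True)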
    3 points
  8. Statistical Process Control (SPC) can give production teams a uniform way to view and interpret data to improve processes and identify and prevent production issues. Control charts and run rule conditions can be created in Seeq to monitor near-real-time data and flag when the data indicates abnormal or out-of-control behavior.

Creating a Control Chart
1. Find the signal of interest and a signal that can be used to detect the current production grade (ideally a grade code or similar signal; if this does not exist for your product, you can use process set points to stitch together a calculated grade signal). Use .toCondition() in Formula to create a condition for each change in the Grade_Code signal.
2. Check the data for normality and other statistical assumptions before proceeding. To check for normality, use the Histogram tool in Seeq. Find more information on the Histogram tool in the Seeq Knowledge Base. Note that for this analysis we are using a subgroup size of one and assuming normality.
3. Determine the methodology to use to create the average and standard deviation for each grade. In this case, we will identify periods after start-up when the process was in control using the Manual Condition tool, and select these times across each grade.
4. Calculate the mean and standard deviation for each grade, based on the times the process was in control. Choose a time window that captures all capsules created for the in-control period. To use an unweighted mean, use .toDiscrete() before calculating the average. The same calculation used for the mean can be used for the standard deviation by replacing average() with stdDev().

//Define the time period that contains all in control capsules
$capsule = capsule('2020-10-16T00:00-00:00', '2021-09-21T00:00-00:00')
//Narrow down data to when process is in control, use keep() to filter the condition by the specific grade code capsule property
$g101 = $allgrades.keep('Grade Code',isMatch('Grade 101')).intersect($inControl)
$g102 = $allgrades.keep('Grade Code',isMatch('Grade 102')).intersect($inControl)
$g103 = $allgrades.keep('Grade Code',isMatch('Grade 103')).intersect($inControl)
//Create average based on the times the product is in control, use .toDiscrete to create an unweighted average
$g101_ave = $viscosity.remove(not $g101).toDiscrete().average($capsule)
$g102_ave = $viscosity.remove(not $g102).toDiscrete().average($capsule)
$g103_ave = $viscosity.remove(not $g103).toDiscrete().average($capsule)
//Create average for all grades in one signal using splice(), use keep() to filter the condition by the specific grade code capsule property
//use within() to show the average only during the condition
0.splice($g101_ave, $allgrades.keep('Grade Code',isMatch('Grade 101')))
 .splice($g102_ave, $allgrades.keep('Grade Code',isMatch('Grade 102')))
 .splice($g103_ave, $allgrades.keep('Grade Code',isMatch('Grade 103')))
 .within($allgrades)

5. Use the mean and standard deviation to create +/-1 sigma limits, +/-2 sigma limits, and +/-3 sigma limits (sometimes called upper and lower control limits). Here is an example of creating the +2 sigma limit:

//Add 2*standard deviation to the mean to create the $plus2sd limit, use within() to show limits only during the time periods of interest
($mean + (2*$standardDeviation)).within($grade_code)

6. Overlay the standard deviation limits and mean with the signal of interest by placing them in one lane on one y-axis, and remove the standard deviation signal from the display.

Creating Run Rules
Once the control chart is created, run rule conditions can be created to detect instability and the presence of assignable cause in the process. In this example, Western Electric Run Rules are used, but other run rules can be applied using similar principles.

Western Electric Run Rules:
Run Rule 1: Any single data point falls outside the 3-sigma limit from the centerline.
Run Rule 2: Two out of three consecutive points fall beyond the 2-sigma limit, on the same side of the centerline.
Run Rule 3: Four out of five consecutive points fall beyond the 1-sigma limit, on the same side of the centerline.
Run Rule 4: Nine consecutive points fall on the same side of the centerline.

7. The following formulas can be used to create a condition for each run rule:

Run Rule 1:
//convert to a step signal
$signalStep = $signal.toStep()
//find when one data point goes outside the plus 3 sigma or minus 3 sigma limits
($signalStep < $minus3sd or $signalStep > $plus3sd)
//set the property on the condition
.setProperty('Run Rule', 'Run Rule 1')

Run Rule 2: *Note that the function toCapsulesByCount() is available in Seeq versions R54+
//Create step-interpolated signal to keep from capturing the linear interpolation between sample points
$signalStep = $signal.toStep()
//create capsules for every 3 samples ($toCapsulesbyCount) and for every sample ($toCapsules)
$toCapsulesbyCount = $signalStep.toCapsulesByCount(3,3*$maxinterp) //set the maximum interpolation based on the longest time you would expect between samples
$toCapsules = $signalStep.toCapsules()
//Create condition for when the signal is not between +/-2 sigma limits
//separate upper and lower to capture when the rule violations occur on the same side of the centerline
$condLess = ($signalStep <= $minus2sd)
$condGreater = ($signalStep >= $plus2sd)
//within 3 data points ($toCapsulesbyCount), count how many sample points are not between +/-2 sigma limits
$countLess = $signal.toDiscrete().remove(not $condLess).aggregate(count(),$toCapsulesbyCount,durationKey())
$countGreater = $signal.toDiscrete().remove(not $condGreater).aggregate(count(),$toCapsulesbyCount,durationKey())
//Find when 2+ out of 3 are outside of +/-2 sigma limits
//by setting the count as a property on $toCapsulesbyCount and keeping only capsules greater than or equal to 2
$RR5below = $toCapsulesbyCount.setProperty('Run Rule 5 Violations', $countLess, endValue())
    .keep('Run Rule 5 Violations', isGreaterThanOrEqualTo(2))
$RR5above = $toCapsulesbyCount.setProperty('Run Rule 5 Violations', $countGreater, endValue())
    .keep('Run Rule 5 Violations', isGreaterThanOrEqualTo(2))
//Find every sample point capsule that touches a run rule violation capsule
//Combine upper and lower into one condition and use merge to combine overlapping capsules and to remove properties
$toCapsules.touches($RR5below or $RR5above).merge(true)
    .setProperty('Run Rule', 'Run Rule 2')

Run Rule 3: *Note that the function toCapsulesByCount() is available in Seeq versions R54+
//Create step-interpolated signal to keep from capturing the linear interpolation between sample points
$signalStep = $signal.toStep()
//create capsules for every 5 samples ($toCapsulesbyCount) and for every sample ($toCapsules)
$toCapsulesbyCount = $signalStep.toCapsulesByCount(5,5*$maxinterp) //set the maximum interpolation based on the longest time you would expect between samples
$toCapsules = $signalStep.toCapsules()
//Create condition for when the signal is not between +/-1 sigma limits
//separate upper and lower to capture when the rule violations occur on the same side of the centerline
$condLess = ($signalStep <= $minus1sd)
$condGreater = ($signalStep >= $plus1sd)
//within 5 data points ($toCapsulesbyCount), count how many sample points ($toCapsules) are not between +/-1 sigma limits
$countLess = $signal.toDiscrete().remove(not $condLess).aggregate(count(),$toCapsulesbyCount,durationKey())
$countGreater = $signal.toDiscrete().remove(not $condGreater).aggregate(count(),$toCapsulesbyCount,durationKey())
//Find when 4+ out of 5 are outside of +/-1 sigma limits
//by setting the count as a property on $toCapsulesbyCount and keeping only capsules greater than or equal to 4
$RR6below = $toCapsulesbyCount.setProperty('Run Rule 6 Violations', $countLess, endValue())
    .keep('Run Rule 6 Violations', isGreaterThanOrEqualTo(4))
$RR6above = $toCapsulesbyCount.setProperty('Run Rule 6 Violations', $countGreater, endValue())
    .keep('Run Rule 6 Violations', isGreaterThanOrEqualTo(4))
//Find every sample point capsule ($toCapsules) that touches a run rule violation capsule
//Combine upper and lower into one condition and use merge to combine overlapping capsules and to remove properties
$toCapsules.touches($RR6below or $RR6above).merge(true)
    .setProperty('Run Rule', 'Run Rule 3')

Run Rule 4:
//Create step-interpolated signal to keep from capturing the linear interpolation between sample points
$signalStep = $signal.toStep()
//create capsules for every 9 samples ($toCapsulesbyCount) and for every sample ($toCapsules)
$toCapsulesbyCount = $signalStep.toCapsulesByCount(9,9*$maxinterp) //set the maximum interpolation based on the longest time you would expect between samples
$toCapsules = $signalStep.toCapsules()
//Create condition for when the signal is either greater than or less than the mean
//separate upper and lower to capture when the rule violations occur on the same side of the centerline
$condLess = $signalStep.isLessThan($mean)
$condGreater = $signalStep.isGreaterThan($mean)
//Find when the last 9 samples are fully on the greater-than or less-than side of the mean
//use merge to combine overlapping capsules and remove properties
$toCapsules.touches(combineWith($toCapsulesbyCount.inside($condLess), $toCapsulesbyCount.inside($condGreater))).merge(true)
    .setProperty('Run Rule', 'Run Rule 4')

**To make it easier to use these run rules in Seeq, custom formula functions can be created for each run rule using the User Defined Formula Function Editor Add-on, which can be found in Seeq’s Open Source Gallery along with user guides and instructions for installation. For example, Run Rule 2 can be simplified to the following formula using the User-Defined Formula Functions Add-on with Seeq Data Lab:

$signal.WesternElectric_runRule2($minus2sd, $plus2sd)

9. If desired, all run rules can be combined into one condition in Formula using combineWith():

combineWith($runRule1,$runRule2,$runRule3,$runRule4)

10. As a final step, a table can be created detailing the run rule violations in the trend view. Here, the header column is set as ‘Capsule Property’ >> ‘Run Rule’ and the capsule properties start, end, and duration were added as columns. The last value of the signal ‘Grade_Code’ was also added as a column to the table. For more information on Tables, see the Seeq Knowledge Base.
    3 points
  9. We often get asked how to use the various API endpoints via the python SDK so I thought it would be helpful to write a guide on how to use the API/SDK in Seeq Data Lab. As some background, Seeq is built on a REST API that enables all the interactions in the software. Whenever you are trending data, using a Tool, creating an Organizer Topic, or any of the other various things you can do in the Seeq software, the software is making API calls to perform the tasks you are asking for. From Seeq Data Lab, you can use the python SDK to interact with the API endpoints in the same way as users do in the interface, but through a coding environment. Whenever users want to use the python SDK to interact with API endpoints, I recommend opening the API Reference via the hamburger menu in the upper right hand corner of Seeq: This will open a page that will show you all the different sections of the API with various operations beneath them. For some orientation, there are blue GET operations, green POST operations, and red DELETE operations. Although these may be obvious, the GET operations are used to retrieve information from Seeq, but are not making any changes - for instance, you may want to know what the dependencies of a Formula are so you might GET the item's dependencies with GET/items/{id}/dependencies. The POST operations are used to create or change something in Seeq - as an example, you may create a new workbook with the POST/workbooks endpoint. And finally, the DELETE operations are used to archive something in Seeq - for instance, deleting a user would use the DELETE/users/{id} endpoint. Each operation endpoint has model example values for the inputs or outputs in yellow boxes, along with any required or optional parameters that need to be filled in and then a "Try it out!" button to execute the operation. For example, if I wanted to get the item information for the item with the ID "95644F20-BD68-4DFC-9C15-E4E1D262369C" (if you don't know where to get the ID, you can either use spy.search in python or use Item Properties: https://seeq.atlassian.net/wiki/spaces/KB/pages/141623511/Item+Properties) , I could do the following: Using the API Reference provides a nice easy way to see what the inputs are and what format they have to be in. As an example, if I wanted to post a new property to an item, you can see that there is a very specific syntax format required as specified in the Model on the right hand side below: I typically recommend testing your syntax and operation in the API Reference to ensure that it has the effect that you are hoping to achieve with your script before moving into python to program that function. How do I code the API Reference operations into Python? Once you know what API endpoint you want to use and the format for the inputs, you can move into python to code that using the python SDK. The python SDK comes with the seeq package that is loaded by default in Seeq Data Lab or can be installed for your Seeq version from pypi if not using Seeq Data Lab (see https://pypi.org/project/seeq/). Therefore, to import the sdk, you can simply do the following command: from seeq import sdk Once you've done that, you will see that if you start typing sdk. and hit "tab" after the period, it will show you all the possible commands underneath the SDK. Generally the first thing you are looking for is the ones that end in "Api" and there should be one for each section observed in the API Reference that we will need to login to using "spy.client". 
If I want to use the Items API, then I would first want to login using the following command: items_api = sdk.ItemsApi(spy.client) Using the same trick as mentioned above with "tab" after "items_api." will provide a list of the possible functions that can be performed on the ItemsApi: While the python functions don't have the exact same names as the operations in the API Reference, it should hopefully be clear which python function corresponds to the API endpoint. For example, if I want to get the item information, I would use "get_item_and_all_properties". Similar to the "tab" trick mentioned above, you can use "shift+tab" with any function to get the Documentation for that function: Opening the documentation fully with the "^" icon shows that this function has two possible parameters, id and callback where the callback is optional, but the id is required, similar to what we saw in the API Reference above. Therefore, in order to execute this command in python, I can simply add the ID parameter (as a string as denoted by "str" in the documentation) by using the following command: items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C') In this case, because I executed a "GET" function, I return all the information about the item that I requested: This same approach can be used for any of the API endpoints that you desire to work with. How do I use the information output from the API endpoint? Oftentimes, GET endpoints are used to retrieve a piece of information to use it in another function later on. From the previous example, maybe you want to retrieve the value for the "name" of the item. In this case, all you have to do is save the output as a variable, change it to a dictionary, and then request the item you desire. For example, first save the output as a variable, in this case, we'll call that "item": item = items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C') Then convert the output "item" into a dictionary and request whatever key you would like: item.to_dict()['name']
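Building on the GET example above, here is a small, hedged sketch of what the POST side (adding a property to an item) might look like from Data Lab. The item ID is the one used earlier in the post, the property name and value are made up for illustration, and the exact model and parameter names should be double-checked against the API Reference and SDK documentation for your Seeq version:

from seeq import sdk

items_api = sdk.ItemsApi(spy.client)

# POST a new property to an item; the body mirrors the Model shown in the API Reference,
# carrying the value (and optionally a unit of measure).
items_api.set_property(
    id='95644F20-BD68-4DFC-9C15-E4E1D262369C',
    property_name='Steward',                          # hypothetical property name
    body=sdk.PropertyInputV1(value='Reliability Team'))

# Read it back with the same GET endpoint used earlier in the post
item = items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C')
print(item.to_dict()['name'])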
    3 points
  10. Seeq has functions to allow easy manipulation of the starts and ends of capsules, including functions like afterStart(), move(), and afterEnd(). One limitation of these functions is that they expect scalar inputs, which means all capsules in the condition have to be adjusted by the same amount (e.g. move all capsules 1 hour into the future). There are cases when you want to adjust each capsule dynamically, for instance using the value of a signal to determine how to adjust the capsule.

Solution: This post will show how to accomplish a dynamic / signal-based version of afterStart(). This approach can be modified slightly to recreate other capsule adjustment functions. Assume I have an arbitrary condition 'Condition', and signal 'Capsule Adjustment Signal'. I want to find the first X hours after each capsule start, where X is the value of 'Capsule Adjustment Signal' at the capsule start. I can do this with the formula below.

$condition
  .afterStart(3h) // has to be longer than an output capsule will ever be
  .transform($capsule -> {
    $newStartKey = $capsule.startKey()
    $newEndKey = $capsule.startKey() + $signal.valueAt($capsule.startKey())
    capsule($newStartKey, $newEndKey)
  })

This formula only takes two inputs: $condition and $signal. It goes through each capsule in the condition and manipulates its start and end keys. In this case, the start key is the same as the original, but the new end key is set to the original start key plus the value of my signal. This formula produces the following purple condition:

Some notes on this formula:
- The output capsules must be within the original capsules. Therefore, I have included .afterStart(3h) in the formula. This ensures the original capsules will always be larger than the output capsules. If you don't do this, you may see a warning on your item indicating that the formula is throwing away capsules.
- Your capsule adjustment signal must have units of time.
- To accomplish other capsule adjustments, look at changing the definitions of the $newStartKey and $newEndKey variables to suit your needs.
    2 points
  11. Capsule Based Back Prediction or Back-Casting

Scenario: Instead of forecasting data into the future, there may be a need to extrapolate a signal back in time based on data from an event or period of interest. The following steps will allow you to backcast a target signal from every capsule within a condition.

Data
Target Signal – a signal that you would like to backcast.
Event – a condition that encapsulates the event or period of interest from which you would like to backcast the target signal. The target signal must have sufficient sample points within each capsule to create an accurate regression model.

Method
Step 1. Create a new extended event that combines the capsules from the original event with a prediction window for backcasting. In this example, the prediction window is 1 hr and a maximum capsule duration of 40 h is defined.

$prediction_window = $event.beforeStart(1h)
$prediction_window.join($event, 40h)

Step 2. Create a new time since signal that quantifies the time since the beginning of each capsule in the extended event condition. This new signal will be the independent variable in the regression model. Replace 1min with a sample frequency sufficient for your use case.

$extended_event.timeSince(1min)

Step 3. In Formula, use the example below to create a regression model for the target signal, with data from the event as training data and the time since signal as the independent variable. Assign the regression model coefficients as capsule properties for a new condition called the regression condition.

$event.transform($cap -> {
  $model = $target_signal.validValues().regressionModelOLS(group($cap), false, $time_since, $time_since^2)
  $cap
    .setProperty('m1', $model.get('coefficient1'))
    .setProperty('m2', $model.get('coefficient2'))
    .setProperty('c', $model.get('intercept'))})

The formula above creates a second-order polynomial ordinary least squares regression model. The order of the polynomial can be modified (from linear up to 9th) by adding sequential $time_since^n terms on line 2 and defining all coefficients as is done on lines 4 and 5. See the example below of how to adjust the formula for a third-order polynomial model.

Step 4. Using the regression model coefficients from the regression condition and the time since signal, the target signal can then be backcast over the prediction window.

$c = $regression_condition.toSignal('c', durationKey()).aggregate(average(), $extended_event, durationKey())
$m1 = $regression_condition.toSignal('m1', durationKey()).aggregate(average(), $extended_event, durationKey())
$m2 = $regression_condition.toSignal('m2', durationKey()).aggregate(average(), $extended_event, durationKey())
return $m1*$time_since + $m2*$time_since^2 + $c

The example above is for a second-order polynomial, and the formula needs to be modified depending on the order of the polynomial defined in Step 3. See the example below for a linear model. Note that it may be required to manually set the units (using the setUnits() function) of each part of the polynomial equation.

Result
The result is a new signal which backcasts the target signal for the duration of the prediction window prior to the event or period of interest.
    2 points
  12. A common industrial use case is to select the highest or lowest signal value among several similar measurements. One example is identifying the highest temperature in a reactor or distillation column containing many temperature signals. One of many situations where this is useful is in identifying the current "hot spot" location to analyze catalyst deactivation/performance degradation. When selecting the highest value over time among many signals, Seeq's max() Formula function makes this easy. Likewise, if selecting the lowest value, the min() Formula function can be used. A more challenging use case is to select the 2nd highest, 3rd highest, etc., among a set of signals. There are several approaches to do this using Seeq Formula and there may be caveats with each one. I will demonstrate one approach below. For our example, we will use a set of 4 temperature signals (T100, T200, T300, T400). Viewing the raw temperature data: 1. We first convert each of the raw temperature signals to step interpolated signals, and then resample the signals based on the sample values of a chosen reference signal that has representative, regular data samples (in this case, T100). This makes the later formulas a little simpler overall and provides slightly cleaner results when signal values cross each other. For the T100 step signal Formula: Note that the T200 step signal Formula includes a resample based on using 'T100 Step' as a reference signal: The 'T300 Step' and 'T400 Step' formulas are identical to that for T200 Step, with the raw T signals substituted. 2. We now create the "Highest T Value" signal using the max() function and the step version T signals: 3. To create the '2nd Highest T Value' signal, we use the splice() function to insert 0 values where a given T signal is equal to the 'Highest T Value'. Following this, the max() function can again be used but this time will select the 2nd highest value: 4. The process is repeated to find the '3rd Highest T Value', with a very similar formula, but substituting in values of 0 where a given T signal is >= the '2nd Highest Value': The result is now checked for a time period where there are several transitions of the T signal ordering: 5. The user may also want to create a signal which identifies the highest value temperature signal NAME at any given point in time, for trending, display in tables, etc. We again make use of the splice() function, to insert the corresponding signal name when that signal is equal to the 'Highest T Value': Similarly, the '2nd Highest T Sensor' is created, but using the '2nd Highest T Value': (The '3rd Highest T Sensor' is created similarly.) We now have correctly identified values and sensor names... highest, 2nd highest, 3rd highest: This approach (again, one possible approach of several) can be extended to as many signals as needed, can be adapted for finding low values instead of high values, can be used for additional calculations, etc.
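If you happen to be validating the same idea in Seeq Data Lab, a pandas equivalent of the "nth highest" selection makes a handy cross-check. This is not the Formula approach described above; the search pattern, date range, and grid are placeholders for the example T100-T400 signals:

from seeq import spy

# Pull the four temperatures onto a common grid
signals = spy.search({'Name': '/T[1-4]00/'}, quiet=True)
df = spy.pull(signals, start='2021-11-27', end='2021-12-04', grid='15min', header='Name', quiet=True)

# Row-wise ranking across the columns gives the nth-highest value and which sensor it came from
second_highest_value = df.apply(lambda row: row.nlargest(2).iloc[-1], axis=1)
second_highest_sensor = df.apply(lambda row: row.nlargest(2).index[-1], axis=1)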
    2 points
  13. Hi, the error means that you are referencing a variable that is not defined in your variable list. You should change your variable "$signal1" in the formula to a variable you have in your variables list: Also be aware that you cannot use a signal and a condition together on combineWith(). You can combine either signals only or conditions only. Regards, Thorsten
    2 points
  14. I'd make two conditions, one for RPM and one for Temperature, then try to use the "Combining Conditions" formulas. I think .encloses() would work.
    2 points
  15. There is not a mechanism to move the graphics directly to Power BI, but you can move the data behind the graphs using the OData export: https://support.seeq.com/space/KB/112868662/OData Export#Example-Importing-to-Microsoft-Power-BI-(Authenticate-Using-Seeq-Username-and-Password) This will require building the graphics again inside of Power BI. I would recommend using the signal export on a fixed grid in order to get data points at the exact same timestamps, which will make life in Power BI much easier. Shamus
    2 points
  16. Hi Brian, In more recent versions of Seeq, the max function used in that way (that specific syntax or form) only works with scalars. For your case, try $p53h2.max($p53h3).max($p53h4) That is the form needed with signals. Hope this helps! John
    2 points
  17. To better understand their process, users often want to compare time-series signals in a dimension other than time, for example seeing how the temperature within a reactor changes as a function of distance. Seeq is built to compare data against time, but this method highlights how we can use time to mimic an alternate dimension.

Step 1: Sample Alignment
In order to accurately mimic the alternate dimension, the samples to be included in each profile must occur at the same time. This can be achieved through a couple of methods in Seeq if the samples don't already align.

Option 1: Re-sampling
Re-sampling selects points along a signal at set intervals. You can also re-sample based on another signal's keys. Since it's possible for there not to be a sample at that interval, the interpolated value is chosen. An example Formula demonstrating how to use the function is shown below.

//Function to resample a signal
$signal.resample(5sec)

Option 2: Average Aggregation
Aggregating allows users to determine the average of a signal over a given period of time and then place this average at a specific point within that period. Signal from Condition can be used to find the average over a period and place this average at a specific timestamp within the period. In the example below, the sample is placed at the start, but alignment will occur if the samples are placed at the middle or end as well.

Step 2: Delay Samples
In Formula, apply a delay to the samples of each signal that represents its value in the alternative dimension. For example, if a signal occurs at 6 feet from the start of a reactor, delay it by 6. If there is not a signal with a 0 value in the alternate dimension, the final graph will be offset by the smallest value in the alternate dimension. To fix this, create a placeholder signal such as 0 in Formula and ensure its samples align with the other samples using the code listed below. This placeholder serves as a signal delayed by 0, meaning it has a value of 0 in the alternate dimension.

//Substitute Period_of_Time_for_Alignment with the period used above for aligning your samples
0.toSignal(Period_of_Time_for_Alignment)

Note: Choosing the unit of the delay depends upon the new sampling frequency of your aligned signals as well as the largest value you will have in the alternative dimension. For example, if your samples occur every 5 minutes, you should choose a unit where your maximum delay is not greater than 5 minutes. Please refer to the table below for selecting units.

Largest Value in Alternate Dimension | Highest Possible Delay Unit
23 | Hour, Hour (24 Hour Clock)
59 | Minute
99 | Centisecond
999 | Millisecond

Step 3: Develop Sample Profiles
Use the Formula listed below to create a new signal that joins the samples from your separate signals into a new signal. Replace "Max_Interpolation" with a number large enough to connect the samples within a profile, but small enough not to connect the separate profiles. For example, if the signals were re-sampled every 5 minutes but the largest delay applied was 60 seconds, any value below 4 minutes would work for the Max_Interpolation. This is meant to ensure the last sample within a profile does not interpolate to the first sample of the next profile.

//Make signals discrete to only get raw samples, then use combineWith and toLinear to combine the signals while maintaining their uniqueness
combineWith($signal1.toDiscrete(), $signal2.toDiscrete(), $signal3.toDiscrete()).toLinear(Max_Interpolation)

Step 4: Condition Highlighting Profiles
Create a condition in Formula for each instance of this new signal using the formula below. The isValid() function was introduced in Seeq version 44. For versions 41 to 43, you can use .valueSearch(isValid()). Versions prior to 41 can use .validityCapsules().

//Develop capsule highlighting the profile to leverage other views based on capsules to compare profiles
$sample_profiles.isValid()

Step 5: Comparing Profiles
Now, with a condition highlighting each profile, Seeq views built around conditions can be used. Chain View can be used to compare the profiles side by side, while Capsule View can overlay these profiles. Since we delayed our samples before, we are able to look at their relative times and use that to represent the alternate dimension.

Further Applications
With these profiles now available in Seeq, all of the tools available in Seeq can be used to gain more insight from them. A few examples:
- Comparing profiles against a golden profile
- Determining at what value in the alternate dimension each profile reaches a threshold
- Developing a soft sensor based on another sensor and a calibration curve profile
    2 points
  18. Was this a duplicated analysis? If so, I suspect that the IDs you're seeing are associated to items that couldn't be cloned successfully. If this is the case, you should find a journal in the (new) first worksheet of the cloned analysis, which will list items that couldn't be cloned successfully for some reason. Often this has to do with permissions. These items are created in the cloned analysis, but they're assigned placeholder IDs that are serial numbers preceded by an appropriate number of 0s to make a GUID. If that's what happened here, examining the journal on the first worksheet of the analysis should provide clues as to what needs to be fixed before a subsequent attempt at duplication.
    1 point
  19. Summary: Many of our users monitor process variables on some periodic frequency and are interested in a quick visual way of noting when a process variable is outside some limits. Perhaps you have multiple tiers of limits indicating violations of operating envelopes or violations of operating limits, and are interested in creating a visualization like that shown below.

Solution:
Method 1: Boundaries Tool
One method to do this involves using the Boundaries tool. This tool is discussed in Step 3 of this seeq.org post, and results in a graphic like that shown below. Some frequently asked questions around this method are: Is there a way to make the different levels of boundaries different colors? Is there a way to color the section outside of the limits rather than inside of the limits?

Method 2: Scorecard Metrics in Trend View
Step 1. Load the signal you are interested in monitoring, as well as the limits, into the display pane. The limits can be added directly from the historian, or if they do not exist in the historian they can be created using Seeq Formula.
Step 2. Open a new Scorecard Metric from the Tools panel and create a simple scorecard metric on your signal of interest, with no statistic. Click the "+" icon to optionally enter thresholds, and add the threshold color limits that you are interested in visualizing. Note that the thresholds input in the boundary tool can be constant (entering a numeric value) or variable, selecting a signal or scalar.
    1 point
  20. OK thanks Ivan. I think this is a bug on our side. We're looking into it. But in the meantime, I think you should be able to drop the `Permission Inheritance Disabled` column from the DataFrame before pushing it. That should solve the issue. Can you try that?
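For anyone hitting the same error, the workaround looks something like the snippet below. This is a hedged sketch only: the DataFrame variable name is a placeholder, and it assumes the push in question is spy.push with a metadata DataFrame.

# Drop the offending column before pushing; errors='ignore' keeps it safe if the column is absent
metadata_df = metadata_df.drop(columns=['Permission Inheritance Disabled'], errors='ignore')
spy.push(metadata=metadata_df)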
    1 point
  21. You can append to existing topics/documents (pages) by first pulling the Organizer topic, and then adding to the respective page:

#Search for topic (I used the Topic ID, but there are other options as well):
topic_search=spy.workbooks.search({'ID':'DFFC7BB8-9EE3-42DD-937A-2CE2FAAAB0E8'})

#Pull the topic associated with that ID. This creates an object of the Organizer that you can modify.
topic=spy.workbooks.pull(topic_search)

#Extract the "Tensile" page so you can modify it
tensile = topic[0].document('Tensile')

#Add image to the "Tensile" page
tensile.document.add_image(filename='Awesome_Chart.png', placement='end');

#Push modified Organizer back to Seeq:
spy.workbooks.push(topic)

Let me know if this works!
    1 point
  22. Here is an example of how to convert a string signal into a table where each row contains information on the start/end time and total duration of each time the string signal changed values.

Step 1: Convert your string signal into a condition inside of Formula:
$signal.toCondition()
This formula creates a new capsule every time the string signal changes value, regardless of how many sample points have the same string value.

Step 2: Create a table view of the condition. Select the "Tables and Charts" view and the "Condition" mode.

Step 3: Add capsule properties as values to the table. To add the "Value" property, which is the value from the string signal, type "Value" into the capsule property statistics table. You can also select the duration here.

Final Product
    1 point
  23. The problem was:
RampCond_Tags['Archived']='True'
This should not be set to a string; it needs to be set to a boolean:
RampCond_Tags['Archived']=True
    1 point
  24. Hi Sivaji, have you played around with using the %run magic command in a Data Lab notebook to execute Python files? For example, I can run test_spy_search.py using:
%run test_spy_search.py
This will pass through my authentication, so I don't have to log in to execute spy functions. Would that cover your use case, or is there additional functionality you're getting out of using the terminal?
    1 point
  25. Hi Ruby, This should work instead: spy.jobs.schedule('0 0 0 ? * 6#1 *')
    1 point
  26. Hey Pat, just confirming that this fix resolved your original problem?
    1 point
  27. One more comment on the post above: the within() function creates additional samples at the beginning and the end of each capsule. These additional samples have an effect on the calculations performed in the histogram. To avoid this, you can use the remove() and inverse() functions to remove parts of the data where the condition is not met. In contrast to within(), the remove() function will interpolate values if the distance between samples is less than the Maximum Interpolation setting of the signal. To avoid this when using remove(), you can combine the signal that the remove function creates with a signal that adds an "invalid" value at the beginning of each capsule so that the interpolation cannot be calculated:

$invalidsAtCapsuleStart = $condition.removeLongerThan(1wk).toSamples($c -> sample($c.startKey(), SCALAR.INVALID), 1s)
$signal.remove($condition.inverse()).combineWith($invalidsAtCapsuleStart)

You can see the different behaviours of the three described methods in the screenshot:
    1 point
  28. Seeq's grouping functionality is helpful when you want to align multiple conditions and signals, but a signal (or set of signals) only applies to a subset of the conditions. Grouping a signal with a condition will only display that signal during the grouped condition. For example, here I have two temperature signals and a condition for each signal. I have used Profile Search to identify the profile for each signal I would like to compare. When I use Chain View or Capsule Time, the default will cause both signals to show up during each capsule. In this Chain View, the information of Reactor Temperature 1 is not relevant during the Profile Reactor 3 condition and vice versa. In order to view only the relevant signal information, I can use the grouping functionality in Chain View and Capsule Time to compare the relevant signals during each condition:
      1) Select grouping in the toolbar.
      2) Navigate to the grouping icon next to the condition in the details pane and select the signal that corresponds to the condition. Select multiple signals, if necessary.
      I can also view this in Capsule Time. As an additional option, I can overlay both signals using the One Lane and One Axis selections in the toolbar:
    1 point
  29. Now I see! I was assuming maxValue($SearchArea) was hard coding the search. Your explanation makes sense: maxValue is returning a search result, but then $signal.within($ValidData) is only passing the capsules in the condition to it. Therefore, as long as $SearchArea fully includes the capsules in $ValidData it will work. I just need to hard code dates well before and well after any capsules I would use. Thanks!
    1 point
  30. I think what you are going for will look like the formula below, where $SearchArea is the total range in which any of your valid data capsules could fall (you can be very conservative with these dates). This formula will work if you have multiple valid data range capsules as long as they all fall within the $SearchArea:
      $SearchArea = capsule("2020-01-01T00:00:00Z","2022-07-28T00:00:00Z")
      $Signal.within($ValidData).maxValue($SearchArea).toSignal()
    1 point
  31. Capsules can have properties or information attached to the event. By default, all capsules contain information on time context such as the capsule’s start time, end time, and duration. However, one can assign additional capsule properties based on other signals’ values during the capsules. This guide will walk through some common formulas that can be used to assign capsule properties and work with those properties.
      Note: The formula syntax used in the following examples is based on the formula language for Seeq version 49. If you have questions about errors you may be receiving in the formulas on different versions, please check out the What’s New in Seeq Knowledge Base pages for formula changes or drop a comment below with the error message.
      How can I visualize capsule properties?
      Capsule Properties can be added to the Capsules Pane in the bottom right hand corner of Seeq Workbench with the black grid icon. Any capsule properties beyond start, end, duration, and similarity that are created with the formulas that follow or come in automatically through the datasource connection can be found by clicking the “Add Column” option and selecting the desired property in the modal to get a tabular view of the properties for each capsule as shown below.
      In Trend View, you can add Capsule Property labels inside the capsules by selecting the property from the labels modal. Note that only Capsule Properties added to the Capsules Pane in the bottom right corner will be available in the labels modal.
      In Capsule Time, the signals can be colored by Capsule Properties by turning on the coloring to rainbow or gradient. The selection of which property performs the coloring is done by sorting by that capsule property in the Capsules Pane in the bottom right corner. Therefore, if Batch ID is sorted by as selected below, the legend shown on the chart will show the Batch ID values.
      When working with Scorecard Metrics, you can use a Capsule Property as the header of a condition based scorecard by typing the property name into the header modal:
      How do I create a capsule for every (unique) value of a signal?
      Let’s say you have a signal that you want to turn into individual capsules for each value or unique value of the signal. This is often used for string signals (e.g. batch IDs, operations, or phases) that may look like the signal below. There are two main operators that can be used for this:
      $signal.toCapsules()
      The toCapsules operator will create a capsule for each data point of the signal. Therefore, if there was only 1 data point per value in the string signal below, it would create one capsule per value, but if the string value was recorded every minute regardless of whether it changed values, it would create 1 minute capsules. In addition, the toCapsules operator also automatically records a Capsule Property called ‘Value’ that contains the value of the signal data point.
      $signal.toCondition('Property Name')
      The toCondition operator will create a capsule for each change in value of the signal. Therefore, in the case above where the value was recorded every minute regardless of value changes, it would only create one capsule for the entire time the value was equivalent. Similarly to the toCapsules operator, the toCondition operator also automatically records a Capsule Property called ‘Value’ that contains the value of the signal data point.
      However, with the toCondition operator, there’s an optional entry to store the property under a different name instead by specifying a property name in the parentheses in single quotes, as shown in the example above.
      Note: Sometimes when working with string signals of phases or steps that are just numbered (e.g. Phase 1), if there is only one phase in the operation, you may end up wanting two Phase 1 capsules in a row (e.g. Operation 1 Phase 1 and Operation 2 Phase 1), whereas the toCondition method above will only create a single capsule. In this instance, it can be useful to concatenate the operation and phase signals together to find the unique combination of Operations and Phases. This can be done by using the following formula:
      ($operationsignal + ': ' + $phasesignal).toCondition('Property Name')
      How do I assign a capsule property?
      Option 1: Assigning a constant value to all capsules within a condition
      $condition.setProperty('Property Name', 'Property Value')
      Note that it is important to know whether you would like the property stored as a string or numeric value. If the desired property value is a string, make sure that the ‘Property Value’ is in single quotes to represent a string like the above formula. If the desired value is numeric, you should not use the single quotes.
      Option 2: Assigning a property based on another signal value during the capsule
      For these operations, you will have to use a transform operator to perform a particular set of operations per capsule to retrieve the desired property. For example, you may want the first value of a signal within a capsule or the average value of a signal during the capsule. Below are some examples of options you have and how you would set these values to capsule properties. The general format for this operation is listed below, where we will define some different options to input for Property Scalar Value.
      $condition.transform($capsule -> $capsule.setProperty('Property Name', Property Scalar Value))
      The following are options for what to input into Property Scalar Value in the formula above to obtain the desired property values:
      First value of signal within a capsule: $signal.toScalars($capsule).first()
      Last value of signal within a capsule: $signal.toScalars($capsule).last()
      Average value of signal within a capsule: $signal.average($capsule)
      Maximum value of signal within a capsule: $signal.maxValue($capsule)
      Minimum value of signal within a capsule: $signal.minValue($capsule)
      Standard deviation of signal within a capsule: $signal.stdDev($capsule)
      Totalization of a signal within a capsule: $signal.totalized($capsule)
      Count of the capsules of a separate condition within a capsule: $DifferentCondition.count($capsule)
      Duration of capsules in seconds of a separate condition within a capsule: $DifferentCondition.totalduration($capsule)
      There are more statistical operations that can be done if desired, but hopefully this gives you an idea of the syntax. Please leave a comment if you struggle with a particular operator that you are trying to perform.
      Finally, there are often times when you want to perform one of the above operations, but only within a subset of each capsule. For example, maybe for each batch, you want to store the max temperature during just a particular phase of the batch.
      In order to do this, first make sure you have created a condition for that phase of the batch, and then you can use the following to input into Property Scalar Value in the formula above:
      $signal.within($PhaseCondition).maxValue($capsule)
      In this case, the within function is cutting the signal to only be present during the $PhaseCondition so that only that section of the signal is present when finding the maximum value.
      Option 3: Assign a property based on a parent or child condition
      In batch processing, there is often a parent/child relationship of conditions in an S88 (or ISA-88) tree hierarchy where the batch is made up of smaller operations, which are in turn made up of smaller phases. Some events databases may only set properties on particular capsules within that hierarchy, but you may want to move the properties to higher or lower levels of that hierarchy. This formula will allow you to assign the desired property to the condition without the property:
      $ConditionWithoutProperty.transform($capsule -> $capsule.setProperty('Current Property Name', $ConditionWithProperty.toGroup($capsule).first().property('Desired Property Name')))
      Note that this same formula works whether the condition with the property is the parent or child in this relationship. I also want to point out that if there are multiple capsules of the $ConditionWithProperty within any capsule of the $ConditionWithoutProperty, this formula is set up to take the property from the first capsule within that time span. If you would like a different capsule to be taken, you can switch the first() operator in the formula above to last() or pick(Number), where last will take the property from the last capsule in the time span and pick is used to specify a particular capsule to take the property from (e.g. 2nd capsule or 2nd to last capsule). There’s another write-up about this use case here for more details and some visuals.
      How do I filter a condition by capsule properties?
      Conditions are filtered by capsule properties using the keep operator. Some examples of this are listed below.
      Option 1: Keep exact match to property
      $condition.keep('Property Name', isEqualTo('Property Value'))
      Note that it is important to know whether the property is stored as a string or numeric value. If the property value is a string, make sure that the ‘Property Value’ is in single quotes to represent a string like the above formula. If the value is numeric, you should not use the single quotes.
      Option 2: Keep regular expression string match
      $condition.keep('Property Name', isMatch('B*'))
      $condition.keep('Property Name', isNotMatch('B*'))
      You can specify to keep either matches or non-matches to partial strings. In the above formulas, I’m specifying to either keep all capsules where the capsule property starts with a B or, in the second equation, the ones that do not start with a B.
      If you need additional information on regular expressions, please see our Knowledge Base article here: https://support.seeq.com/space/KB/146637020/Regex%20Searches
      Option 3: Other keep operators for numeric properties
      Using the same format as Option 1 above, you can replace the isEqualTo operator with any of the following operators for comparison functions on numeric properties:
      isGreaterThan
      isGreaterThanOrEqualTo
      isLessThan
      isLessThanOrEqualTo
      isBetween
      isNotBetween
      isNotEqualTo
      Option 4: Keep capsules where capsule property exists
      $condition.keep('Property Name', isValid())
      In this case, any capsules that have a value for the property specified will be retained, but all capsules without a value for the specified property will be removed from the condition.
      How do I turn a capsule property into a signal?
      A capsule property can be turned into a signal by using the toSignal operator. The properties can be placed at either the start timestamp of the capsule (startKey) or the end timestamp of the capsule (endKey):
      $condition.toSignal('Property Name', startKey())
      $condition.toSignal('Property Name', endKey())
      This will create a discrete signal where the value is at the selected timestamp for each capsule in the condition. If you would like to turn the discrete signal into a continuous signal connecting the data points, you can do so by adding a toStep or toLinear operator at the end to add either step or linear interpolation to the signal. Inside the parentheses for the interpolation operators, you will need to add a maximum interpolation time that represents the maximum time distance between points that you would want to interpolate. For example, a desired linear interpolation of capsules may look like the following equation:
      $condition.toSignal('Property Name', startKey()).toLinear(40h)
      It should be noted that some properties that are numeric may be stored as string properties instead of numeric, particularly if the capsules are a direct connection to a datasource. In this case, a .toNumber() operator may need to be added after the toSignal, but before the interpolation operator.
      Finally, it is often useful to have the property across the entire duration of the capsule if there are no overlapping capsules (e.g. when looking at batches on a particular unit). This is done by turning the signal into a step signal and then filtering the data to only when it is within a capsule:
      $condition.toSignal('Property Name', startKey()).toStep(40h).within($condition)
      What capsule adjustment/combination formulas retain capsule properties?
      When adjusting or combining conditions, the rule of thumb is that operators that have a 1-to-1 relationship between input capsule and output capsule will retain capsule properties, but when multiple capsules are required as the input to the formula operator, the capsule properties are not retained. For example, moving a capsule by 1 hour has knowledge of the input properties, whereas merging 2 capsules together results in not knowing which capsule to keep the properties from. A full list of these operators and whether they retain or lose capsule properties during usage is below.
      Operators that retain properties: afterEnd, afterStart, beforeEnd, beforeStart, ends, starts, middles, grow, growEnd, shrink, move, combineWith, encloses, inside, subtract, matchesWith, touches, union (when no capsules overlap)
      Operators that lose properties: inverse, merge, fragment, intersect, join, union (if more than one capsule overlaps)
      It is important to note that capsule properties are attached to the individual capsule. Therefore, using a combination formula like combineWith, where multiple conditions are combined, may result in empty values in your Capsule Pane table if each of the conditions being combined has different capsule properties.
      How do I rename capsule properties?
      Properties can be swapped to new names by using the renameProperty operator:
      $condition.renameProperty('Current Property Name', 'New Property Name')
      A complex example of using capsule properties:
      What if you had upstream and downstream processes where the Batch ID (or other property) could link the data between an upstream and downstream capsule that do not touch and are not in a specific order? In this case, you would want to be able to search a particular range of time for a matching property value and to transfer another property between the capsules. This can be done with the following formula:
      $UpstreamCondition.move(0,7d)
      .transform($capsule ->{
        $matchingDownstreamProperty = ($DownstreamCondition.toSignal('Downstream Property to Match Name') == $capsule.property('Upstream Property to Match Name'))
        $firstMatchKey = $matchingDownstreamProperty.togroup($capsule,CAPSULEBOUNDARY.INTERSECT).first().startkey()
        $capsule.setProperty('Desired Property Name',$DownstreamCondition.toSignal('Desired Downstream Property Name').tostep(40h).valueAt($firstMatchKey))
      })
      .move(0,-7d)
      Let's walk through what this is doing step by step. First, it's important to start with the condition that you want to add the property to, in this case the upstream condition. Then we move the condition by a certain amount to be able to find the matching capsule value. In this case, we are moving just the end of each capsule 7 days into the future to search for a matching property value, and then at the end of the formula we move the end of the capsule back 7 days to return the capsule to its original state. After moving the capsules, we start the transform. Inside the transform, we find the matching downstream property by turning the capsule property to match from the downstream condition into a signal and matching it to the capsule property from the upstream capsule. We then find the key (timestamp) of the first match of the property. Again, similar to some other things, the first option could be swapped out with last or pick to grab a different matching capsule property. Finally, we set the property on the upstream condition to 'Desired Property Name' by turning the downstream capsule property that we want into a step signal and taking the value where the first match was found.
    1 point
  32. Psychrometric charts are used for cooling tower and combustion calculations. For users that have weather data, this could give a better idea of how their cooling towers and similar equipment are performing. It might make sense to include them in Formula in the future. Thanks!
    1 point
  33. We don't currently support it, but we've noted the desire for the 2D use cases. Without committing to a specific release or date, I'll say it is on our near-term roadmap 🙂
    1 point
  34. When creating Seeq signals using Seeq Data Lab (SDL), it can be useful to know how to delete signals from Seeq after pushing the signal to a workbench using a SPy push. Fortunately, the process is as simple as adding a column to your metadata dataframe called 'Archived' and setting the value to True (a boolean, not the string 'True') for any signal(s) you would like to archive. Below is a snippet of code where an 'Archived' column is added to the spy.push example notebook metadata dataframe in the SDL SPy Documentation. The DEPTH(ft) signal will be archived after pushing the new metadata dataframe with the Archived column set to True. A couple of other notes about deleting signals in Seeq:
      If you would like to keep the signal in Seeq, but want to update the data, you can do so with a subsequent push of the signal that includes the new data. The caveat is the new data must have the same keys (timestamps) as the old data. If the keys are different, the data will be appended to the existing signal. Otherwise, you will need to push the signal with a unique name/path/type.
      If you would like to fully delete the signal samples from Seeq, you can do so using Seeq's API call to delete the signal samples.
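      As a rough illustration of the pattern described above (the signal name and workbook are placeholders, not the exact items from the example notebook):
      from seeq import spy

      # Hypothetical: fetch the metadata for the previously pushed signal(s)
      metadata_df = spy.search({'Name': 'DEPTH(ft)'})

      # Mark the signal(s) as archived; note the boolean True rather than the string 'True'
      metadata_df['Archived'] = True

      # Pushing the updated metadata archives the signal in Seeq
      spy.push(metadata=metadata_df, workbook='SDL Push Example')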
    1 point
  35. Hi vadeka, Now that you've removed the "Inactive" data, the issue is likely that either (1) your maximum interpolation is not long enough to interpolate between those points or (2) there is actual invalid data (not just a value saying "Inactive", but a data point that is not visible because it doesn't have a value). Check out this post under Question 4 for how to solve this: In terms of your second question, if you want to view the data points on the trend, then check out how to adjust the "Samples" in the Customize menu for the Details Pane, which allows you to turn on the individual data points: https://support.seeq.com/space/KB/149651519/Adjusting+Signal+Lanes%2C+Axes+%26+Formatting. If you want to view it in a Table, then using $signal.tocapsules() will give you a capsule per data point so that you can view that in a Condition Table (https://support.seeq.com/space/KB/1617592515#Condition-Tables). Note that there are two similar functions: tocapsules will provide a capsule per data point (even if the data points are identical), whereas tocondition will provide a capsule per change in data point, meaning it will not show repeat capsules if the data points are equivalent in sequence.
    1 point
  36. Another approach that you can take if you don't need to know start or end times of the "active" capsules is a filtered Simple Table counting capsules. To get this summary table listing the "active" conditions in any Display Range, choose a Simple Table with the count column enabled from the Column button in the toolbar. If you also have signals in your Details pane, you will want to deselect those and only select the 9 conditions. If you only have conditions, you can exclude this Details pane selection. You can then filter that table using a menu that opens from the three vertical dots from the column header. Below I applied a filter for when count is greater than 0 and have only 4 rows displaying of 6 total conditions. The filter icon lets me know the table is filtered, and I can click on it to change or remove the filter. In R55 and later, percent duration and total duration are also possible column configurations in the Simple Table in addition to count. You can read more on how these table displays work on our Knowledge Base.
    1 point
  37. The details and approach will vary depending on exactly where you are starting from, but here is one approach that will show some of the key things you may need. When you say you have 3 capsules active at any given time, I assume you mean 3 separate conditions. Assuming that is the case, you can create a "combined condition" out of all 9 conditions using Seeq Formula:
      // Assign meaningful text to a new capsule property
      // named TableDescription
      $c1 = $condition1.setProperty('TableDescription','High Temperature')
      $c2 = $condition2.setProperty('TableDescription','Low Pressure')
      and so forth...
      $c9 = $condition9.setProperty('TableDescription','Low Flow')
      // Create a capsule that starts 5 minutes from the current time
      $NowCapsule = past().inverse().move(-5min)
      // Combine and keep only the currently active capsules (touching "now")
      combineWith($c1,$c2,...,$c9).touches($NowCapsule)
      You can then go to Table View and click the Condition table type. Select your "combined condition" in the Details Pane and add a capsule property (TableDescription) using the Column or Row button at the top of the display. You can then set your display time range to include the current time, and you should see the currently active capsules with the TableDescription text that you assigned in the Formula. You of course will want to vary the "-5min" value and the 'TableDescription' values per what is best for you. Your approach may be a little different depending on where you are starting from, but I think that creating capsule properties (text you want to see in your final table), combining into one condition, and using .touches($NowCapsule) may all be things you will need in your approach. Here are some screenshots from an example I put together, where I used "Stage" for the capsule property name:
    1 point
  38. I have a signal where I'd like to compare the distributions in 2021 vs. 2022 using a histogram, but with the normalized counts per year (counts divided by total counts in that year, or relative frequency histogram) instead of just the raw counts. Is there an easy way to configure this in Seeq?
    1 point
  39. A question that comes up from time to time is how to search for a list of signals or other data items in Seeq. Typically we get a request for an ability to search based on a comma-separated list. While we do not currently (as of R55) support a comma-separated list, you can get around this using Regex searching simply by replacing each comma with a vertical bar "|" and enclosing the search in forward slashes, as below:
      Compressor Power,Temperature,Relative Humidity
      becomes
      /Compressor Power|Temperature|Relative Humidity/
      In this search, the forward slashes tell Seeq that this is a Regex search, and the | is an "or" in regex. I.e. it will search for something containing exactly "Compressor Power" or something containing exactly "Temperature", etc., which in effect gives you the ability to search for a list! For very long lists, do a find and replace in a word editor to build the new search. The ability to search for a list will soon become quite handy with the "add all" feature slated to come with R56, which should be coming soon!
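      If you happen to have the list in Python (for example, in Seeq Data Lab), a one-liner can build the regex search string for you; this is just a convenience sketch, not part of the original tip:
      # Build a Seeq regex search string from a comma-separated list of names
      names = "Compressor Power,Temperature,Relative Humidity"
      regex_search = "/" + "|".join(name.strip() for name in names.split(",")) + "/"
      print(regex_search)  # /Compressor Power|Temperature|Relative Humidity/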
    1 point
  40. Hi Rezwan, You could do an aggregation to get just the final sum value instead. $signal.aggregate(sum(), $condition, startkey())
    1 point
  41. Hi Vladimir, There are several ways to apply this analysis to other assets. The first & easiest method that I'll mention is working in an Asset Framework or Asset Group (if an existing framework is not available). All previous calculations would need to be created using the data in the Asset Group, but once done, you'll be able to seamlessly swap the entire analysis over to your other assets (Trains, in this case). Asset Groups allow you to create your own framework either manually or utilizing existing frameworks. This video does a great job of showing the creation and scaling of calculations across other assets. Note that you would need to be at least on version R52 to take advantage of Asset Groups. Another easy approach is to consolidate your analysis within 1 - 3 formulas (depending on what you really want to see). Generally speaking, this analysis could fall within ONE formula, but you may want more formulas if you care about seeing things like your "Tr1C1 no input, no output" condition across your other trains. I'll provide you with a way to consolidate this workflow in one formula, but feel free to break it into multiple if helpful to you. The reason this could be easier is you can simply duplicate a single formula and manually reassign your variables to the respective variables of your other Train. Some useful things to note before viewing the formula:
      Formulas can make use of variable definitions... You'll notice within each step, except for the very last step, I'm assigning arbitrary/descriptive variables to each line, so that I can reference these variables later in the formula. These variables could be an entire condition, or a signal / scalar calculation.
      In the formula comments (denoted by the double slashes: //), I note certain things could be different for your case.
      You can access the underlying Formula of any point and click tools you use (Value Searches, Signal from Conditions, etc.) by clicking the item's Item Properties (in the Details Pane) and scrolling down to Formula. Do this for your Tr1 C1 rate of change, monthly periodic condition, and average monthly rate calculations to see what the specific parameters are. This Seeq Knowledge Base article has an example of viewing the underlying formula within an item's Item Properties.
      The only RAW SIGNALS needed in this formula are $valveTag1, $valveTag2, $productionTag, and $tr1Signal... The rest of the variables are assigned internally to the formula.
      // Steps 1, 2, 3, and 4
      // Note 'Closed' could be different for you if your valve tags are strings...
      // If your valve tags are binary (0 or 1), it would be "$valveTag == 0" (or 1)
      $bothValvesClosed = ($valveTag1 ~= 'Closed' and $valveTag2 ~= 'Closed').removeShorterThan(6h)
      // Step 5
      $valvesClosedProductionHigh = $bothValvesClosed and $productionTag > 10000
      // Step 6 ASSUMING YOU USED SIGNAL FROM CONDITION TO CALCULATE RATE
      // Note the "h" and ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
      $tr1RateofChange = $tr1Signal.aggregate(rate("h"), $valvesClosedProductionHigh.setMaximumDuration(40h), middleKey(), 40h)
      // Step 7
      // $months could also be different in your case
      // Note my final output has no variable definition. This is to ensure THIS is the true output of my formula
      // Again, the ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
      $months = months("US/Eastern")
      $tr1RateofChange.aggregate(average(), $months.setMaximumDuration(40h), middleKey(), 40h)
      Hopefully this makes sense and at the very least provides you with an idea of how you can consolidate calculations within Formula for easier duplication of complex, multi-step calculations. Please let me know if you have any questions. Emilio Conde, Analytics Engineer
    1 point
  42. Hi Robin, you can create a batch condition by using replace() to extract the batch number and toCondition() to create the capsules for each batch:
      $subbatches.replace('/(\\d{1,7})NR\\d{3}/', '$1').toCondition()
      In the next step you can do the aggregation:
      $v1.aggregate(sum(), $batch.removeLongerThan(1wk), middleKey()) + $v2.aggregate(sum(), $batch.removeLongerThan(1wk), middleKey())
      Regards, Thorsten
    1 point
  43. I tried this on R54.1.4 and came across a similar error but fixed it by appending .toString() to $seq. Below is the updated formula code.
      //creates a condition for 1 minute of time encompassing 30 seconds on either side of a transition
      $Transition = $CompressorStage.toCondition().beforeStart(0.5min).afterStart(1min)
      //Assigns the mode on both sides of the step change to a concatenated string that is a property of the capsule.
      $Transition
      .transform( $cap -> $cap.setProperty('StartModeEndMode', $CompressorStage.toCondition()
        .toGroup($cap, CAPSULEBOUNDARY.INTERSECT)
        .reduce("", ($seq, $stepCap) -> $seq.toString() + $stepCap.getProperty('Value')
        //Changes the format of the stage names for clearer delineation as a property in the capsules pane.
        .replace('STAGE 1','-STAGE1-').replace('STAGE 2','-STAGE2-').replace('TRANSITION','-TRANSITION-').replace('OFF','-OFF-')
      )))
    1 point
  44. Kenny, There is not currently a way to delete oData exports for non-admins. However, the exports do not put any load on the Seeq system unless they are being used by an external system (PowerBI, Tableau, etc) To answer your second question we are creating a new export url endpoint every time someone runs the tool in workbench. These oData feeds are in active development and we have plans for making the creation and maintenance of them easier in upcoming releases.
    1 point
  45. As an add-on to this topic, there can be times when one wants to push a different scorecard type. The previous example shows how to create a Simple Scorecard, but similar logic can be applied to make a Condition and Continuous Scorecard.
      Condition Scorecard
      Since the Condition Scorecard is also based on a condition, we need to retrieve the condition to be used. This can be done using spy.search again:
      search_result_condition = spy.search({"Name": "Stage 2 Operation", "Scoped To": "C43E5ADB-ABED-48DC-A769-F3A97961A829"})
      From there we can tweak the scorecard code to include the bounding condition. This is the condition over which the calculation is performed in the scorecard. Note that scorecards require conditions with a maximum capsule duration, so an additional parameter is required if the condition does not have one. Below is the code as well as the result:
      my_metric_input_condition = {
          'Type': 'Metric',
          'Name': 'My Metric Condition',
          'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
          'Statistic': 'Average',
          'Bounding Condition': {'ID': search_result_condition[search_result_condition['Name'] == 'Stage 2 Operation']['ID'].iloc[0]},
          'Bounding Condition Maximum Duration': '30h'  # Required for conditions without a maximum capsule duration
      }
      spy.push(metadata=pd.DataFrame([my_metric_input_condition]), workbook='Example Scorecard')
      Continuous Scorecard
      For Continuous Scorecards, users need to specify the rolling window over which to perform the calculations. To do this, a Duration and Period need to be provided. The Duration tells how long the rolling window is, and the Period tells the frequency at which the rolling window is evaluated.
      my_metric_input_continuous = {
          'Type': 'Metric',
          'Name': 'My Metric Continuous',
          'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
          'Statistic': 'Average',
          'Duration': '1d',  # Length of time the calculation is done for
          'Period': '3d',  # How often the calculation is performed
      }
      spy.push(metadata=pd.DataFrame([my_metric_input_continuous]), workbook='Example Scorecard')
    1 point
  46. Sometimes users want to find more documentation on the SPy functions than what is provided in the SPy Documentation Notebooks. A quick way to access SPy object documentation from your notebook is by using the Shift + Tab shortcut to access the docstring documentation of the function. Example view of the docstrings after using the Shift + Tab shortcut: You can expand the docstrings to view more details by clicking the + button circled in red in the above image. Expanded docstrings: From here, you can scroll through the documentation in the pop-up window. Another useful shortcut is the Tab shortcut. This will show a list of available functions or methods for either the SPy module or any other python object you have in memory. Example view of the Tab shortcut:
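      A related trick (my addition, using standard Python rather than a Seeq-specific feature): you can also print a function's docstring directly with Python's built-in help(), which is handy when the pop-up window is awkward to scroll:
      from seeq import spy

      # Print the full docstring for spy.search in the notebook output
      help(spy.search)

      # Or list the available attributes/functions on the spy module
      print(dir(spy))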
    1 point
  47. Hi Banderson, you can create a duration signal from each capsule in a condition, using "signal from condition" tool. As you may know these point and click tools create a Seeq formula underneath. So after using point and click signal from condition tool, you can find the syntax of formula in item properties of that calculation. You can copy this syntax and paste it in Formula and use it to further develop your calculations.
    1 point
  48. Overview
      This method will provide a simple visualization of externally determined control limits or help you accurately calculate new control limits for a signal. Using these limits we will also create a boundary and find excursions for how many times and for how long a signal deviates from the limits. These created signals can be used in follow-on analysis to search for periods of abnormal system behavior. In this example we will be creating average, +3 standard deviation and -3 standard deviation boundaries on a Temperature signal.
      Setup Signals
      In the Data tab, select the following: Asset → Example → Cooling Tower 1 → Area A, Signal → Temperature
      Option 1: Manually Define Simple Control Limits
      From the Tools tab, navigate to the Formula tool. Formula can be used to easily plot simple scalar values. If you already have calculated values for the upper and lower limit, just enter them in the formula editor with their units as shown in the screenshot below.
      Formula - Simple Upper Limit: 103F
      Formula - Simple Lower Limit: 70F
      Option 2: Calculate the Control Limits
      From the Tools tab, navigate to the Formula tool. In Formula we are going to define the time period over which we want to calculate our control limits as well as the math behind those limits.
      Step 1 - Calculate the upper limit
      Variables: $Series → Temperature Signal
      Formula:
      $calcPeriod = capsule("2018-01-01T00:00:00-06:00","2018-05-01T00:00:00-06:00")
      $tempAve = $Series.average($calcPeriod)
      $tempStdDev = $Series.standardDeviation($calcPeriod)
      $tempAve + 3*$tempStdDev
      Description of Code
      $calcPeriod → This is the time range over which we are going to calculate the average and standard deviation of our signal. The start and end time of our period must be written in ISO8601 format (Year - Month - Day "T" Hour : Minutes : Seconds . Fractional Seconds -/+ Timezone)
      $tempAve → Intermediate variable calculating the average of the temperature signal over our calculation period
      $tempStdDev → Intermediate variable calculating the standard deviation of the temperature signal over our calculation period
      $tempAve + 3*$tempStdDev → Example control limit calculation
      Step 2 - Duplicate your formula to calculate the lower limit
      Click the info icon in the Details Pane next to your calculated upper limit signal. From the info panel select duplicate to create a copy of the formula. With this copy, simply edit the formula to calculate the lower limit.
      $calcPeriod = capsule("2018-01-01T00:00:00-06:00","2018-05-01T00:00:00-06:00")
      $tempAve = $Series.average($calcPeriod)
      $tempStdDev = $Series.standardDeviation($calcPeriod)
      $tempAve - 3*$tempStdDev
      **Alternate method number three -- if you wanted $calcPeriod to actually change based on the previous month or week of operation, you could use Signal from Condition based off a periodic condition to achieve this.
      Step 3 - Visualize Limits as a Boundary
      Use the Boundary Tool to connect the process variable and the upper and lower limits. Select Temperature as your primary signal and select "New". Select Boundary under relation type, name your new boundary, and select the signals for your upper and lower limit. Click save to visualize the boundary on the trend. Using this same method you can create and visualize multiple boundaries (simple and calculated) at the same time.
      Step 4 - Create Capsules when Outside the Boundary
      Using the Deviation Search tool, create a condition for when the signal crosses the boundary.
      Name your new condition, select Temperature as the input signal, select outside a boundary pair, and select the upper and lower signals. Estimate the maximum time you would expect any one out-of-boundary event to last and input that time in the max capsule duration field.
      Step 5 - Create a Scorecard to Quantify How Often and How Long Boundary Excursions Occur
      Create a Scorecard to count how many excursions occur, how long they last, and what % of total time they represent. Create each metric using the Scorecard Metric tool and the Count, Total Duration and Percent Duration statistics. Use a Condition Based scorecard to get weekly or monthly metrics.
      Step 6 - Plot How These KPIs are Changing Over Time
      By creating a signal which plots these KPIs over time, we can quantify how our process variable is changing relative to these limits. To begin, determine how often you would like to calculate the KPI (per Hour/Day/Week/Month) and create a condition for those time segments using the Periodic Condition tool. In the screenshot below we are creating a weekly condition with capsules every week. Using the Signal from Condition tool, count the number of Outside Simple Boundary capsules which occur within each weekly capsule. This same methodology can be used to create signals for total duration and % duration just like in the scorecard section above. For each week the tool will create a single sample. The timestamp placement and interpolation method selections will determine how those samples are placed within the week and visualized on the chart. The scorecard metrics that you created above can also be trended over time by switching from Scorecard View to Trend View.
    1 point
  49. In reporting, users may be interested in creating a Scorecard that contains certain metric results over a variety of time periods, such as "April 2019", "Quarter 1", "Year to Date", etc. This can be accomplished using the following steps:
      1. Use Formula to create a condition that contains a capsule for each time period that you are interested in. Note that I assigned a property to each capsule; this text will be used as the column header in the scorecard:
      2. Create a Condition based scorecard and add a metric for each item you are looking to calculate:
      3. Finally, use the capsule property as the column headers:
    1 point
  50. Hi Thorsten- In the first screenshot, the area of each box is actually the same, even though some boxes have different dimensions. As you observed, the size of your display impacts how the boxes are drawn. To adjust the box sizes via the API, please use the following steps:
      1. On your Seeq installation, open the workbook that contains the Treemap and navigate to the API:
      2. To get the ID of the asset that you would like to resize:
         a. Navigate to GET Assets
         b. Adjust the "limit" to 200 and click "Try it out!"
         c. In the Response Body, locate the asset to resize and copy the "id":
      3. To resize the asset:
         a. Navigate to POST Item Properties
         b. Paste the asset ID into the "id" field. Use the following syntax in the "Body" field (the screenshot shows a size of 10, but this number may be adjusted):
            [ { "unitOfMeasure": "", "name": "size", "value": 10 } ]
         c. Click "Try it out!"
      4. Navigate back to the Treemap and refresh the browser. The Treemap now reflects the adjusted size:
      Please let me know if you have any additional questions. Thanks, Lindsey
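      If you would rather script this than use the interactive API page, here is a rough sketch from Seeq Data Lab. It assumes the Seeq Python SDK's ItemsApi is available and that the asset ID has already been copied from step 2; the class and parameter names may differ between Seeq versions, so check the SDK documentation for yours.
      from seeq import spy, sdk

      # Hypothetical asset ID copied from the GET Assets response in step 2
      asset_id = 'YOUR-ASSET-ID-HERE'

      # Set the 'size' property used by the Treemap (value 10 here, matching the example above)
      items_api = sdk.ItemsApi(spy.client)
      items_api.set_property(
          id=asset_id,
          property_name='size',
          body=sdk.PropertyInputV1(unit_of_measure='', value=10)
      )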
    1 point
This leaderboard is set to Los Angeles/GMT-08:00