Leaderboard

Popular Content

Showing content with the highest reputation since 03/29/2022 in all areas

  1. A few weeks ago in Office Hours, a Seeq user asked how to perform an iterative calculation in which the next value of the calculation relies on its previous values. This functionality is currently logged as a feature request. In the meantime, users can perform the calculation in Seeq Data Lab and push the result from Seeq Data Lab to Seeq Workbench. Let's check out the example below. There are four signals (Signal G, Signal H, Signal J, and Signal Y) added to Seeq Workbench, and the aim is to calculate the value of Signal Y over the selected period.

     Step 1: Set the start date and end date of the calculation.

         # Set the start date and end date of the calculation
         startdate = '2023-01-01'
         enddate = '2023-01-09'

     Step 2: Set the workbook URL and workbook ID.

         # Set the workbook URL and workbook ID
         workbook_url = 'workbook_url'
         workbook_id = 'workbook_id'

     Step 3: Retrieve all raw data points for the time interval specified in Step 1 using spy.pull().

         # Retrieve all raw data points for the time interval specified in Step 1
         data = spy.pull(workbook_url, start=startdate, end=enddate, grid=None)
         data

     Step 4: Calculate the value of Signal Y (Yi = Gi * Y(i-1) + Hi * Ji).

         # Calculate the value of Signal Y (Yi = Gi * Y(i-1) + Hi * Ji)
         # .iloc is used here to avoid pandas chained-assignment warnings
         for n in range(len(data) - 1):
             data.iloc[n + 1, data.columns.get_loc('Signal Y')] = (
                 data['Signal G'].iloc[n + 1] * data['Signal Y'].iloc[n]
                 + data['Signal H'].iloc[n + 1] * data['Signal J'].iloc[n + 1]
             )
         data

     Step 5: Push the calculated value of Signal Y to the source workbook using spy.push().

         # Push the calculated result of Signal Y to the source workbook
         spy.push(data=data[['Signal Y']], workbook=workbook_id)
    7 points
  2. A small team of us (with help from Seeq team members) built a short script to extract signal names from our legacy Excel workbooks so that we could push them to Seeq Workbench. Perhaps, like us, you are involved in migrating workbooks for monitoring/reporting over to Seeq and could use a boost to get started, so you can get to the real work of setting up Seeq Workbench. The attached script will extract the signal names from each Excel worksheet (assuming you can craft your own regex search filter) and then push them to Workbench with labeling for organization. A minimal sketch of the idea is shown below. Hopefully it's a help to some 🙂 signal_transfer_excel2seeq_rev1.ipynb
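     The attached notebook is the full version; the following is only a minimal sketch of the pattern it uses, with a hypothetical file name and regex (adjust the pattern to your tag naming convention - none of these names come from the attachment itself):

         import re
         import pandas as pd
         # spy is available by default in Seeq Data Lab

         excel_file = 'legacy_workbook.xlsx'                # hypothetical file name
         tag_pattern = re.compile(r'[A-Z]{2,4}-\d{3,5}')    # e.g. matches tags like "TI-1234"

         # Collect candidate signal names from every worksheet
         names = set()
         for sheet_name, sheet in pd.read_excel(excel_file, sheet_name=None, header=None).items():
             for value in sheet.to_numpy().ravel():
                 if isinstance(value, str):
                     names.update(tag_pattern.findall(value))

         # Look the extracted names up in Seeq, then push them into a workbook for organization
         # (the attached script additionally labels items by their source worksheet)
         results = spy.search(pd.DataFrame({'Name': sorted(names)}))
         spy.push(metadata=results, workbook='Migrated from Excel')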
    5 points
  3. Seeq has functions that allow easy manipulation of the starts and ends of capsules, including functions like afterStart(), move(), and afterEnd(). One limitation of these functions is that they expect scalar inputs, which means all capsules in the condition have to be adjusted by the same amount (e.g. move all capsules 1 hour into the future). There are cases when you want to adjust each capsule dynamically, for instance using the value of a signal to determine how to adjust the capsule.

     Solution: This post will show how to accomplish a dynamic / signal-based version of afterStart(). This approach can be modified slightly to recreate other capsule adjustment functions. Assume I have an arbitrary condition 'Condition' and a signal 'Capsule Adjustment Signal'. I want to find the first X hours after each capsule start, where X is the value of 'Capsule Adjustment Signal' at the capsule start. I can do this with the formula below.

         $condition
           .afterStart(3h) // has to be longer than an output capsule will ever be
           .transform($capsule -> {
               $newStartKey = $capsule.startKey()
               $newEndKey = $capsule.startKey() + $signal.valueAt($capsule.startKey())
               capsule($newStartKey, $newEndKey)
           })

     This formula only takes two inputs: $condition and $signal. It goes through each capsule in the condition and manipulates its start and end keys. In this case, the start key is the same as the original, but the new end key is set to the original start key plus the value of my signal. This formula produces the purple condition shown in the trend.

     Some notes on this formula:
     - The output capsules must be within the original capsules. Therefore, I have included .afterStart(3h) in the formula. This ensures the original capsules will always be larger than the outputted capsules. If you don't do this, you may see a warning on your item indicating that the formula is throwing away capsules.
     - Your capsule adjustment signal must have units of time.
     - To accomplish other capsule adjustments, look at changing the definitions of the $newStartKey and $newEndKey variables to suit your needs.
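     For instance, here is a sketch (not from the original post) of the same pattern applied to the end of each capsule - finding the last X hours before each capsule end - assuming beforeEnd() is available in your Seeq version:

         $condition
           .beforeEnd(3h) // must remain longer than any output capsule
           .transform($capsule -> {
               // New start key: the capsule end minus the signal's value at the end
               $newStartKey = $capsule.endKey() - $signal.valueAt($capsule.endKey())
               $newEndKey = $capsule.endKey()
               capsule($newStartKey, $newEndKey)
           })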
    4 points
  4. Webhooks are a convenient method to send event data from Seeq to "channel" productivity tools such as Microsoft Teams or Slack. The following post describes how Seeq users can leverage Seeq Data Lab to send messages directly to MS Teams via Webhooks.

     Pre-Requisites:
     1) Seeq Data Lab with Scheduled Notebooks enabled
        a. See Administration Panel -> Configuration and filter for "Features/DataLab/ScheduledNotebooks/Enabled"
     2) MS Teams Channel with a Webhook Connector

     Assumptions:
     1) Summary of capsules generated in a defined time range (i.e., every 12 or 24 hours)
     2) Notifications are not near-real-time - the script will run on a pre-defined schedule generally measured in hours, not minutes or seconds
     3) Events of interest are contained in an Asset Tree or Group with one or more Conditions

     Step 1: Configure Webhook in MS Teams

     To send Seeq capsules/events to MS Teams, a Webhook for the target channel needs to be created. Detailed instructions on how to configure Webhooks in MS Teams can be found here: https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook

     For the purpose of this post, we will create a Webhook URL in our "Seeq Notifications" Team to alert on Temperature Excursions. The alerts will be posted in the "Cooling Tower Temperature Monitoring" channel. Teams and Channel names can be configured to fit your need/operation; this is just an example for demonstration purposes. MS Teams will generate a Webhook URL, which we will use in our script in Step 4.

     Step 2: Identify or Create an Asset Group or Asset Tree to define the Monitoring Scope

     To scope the events of interest, we will use an Asset Tree that contains "High Temperature" conditions for a collection of Monitoring Assets. While this is not a requirement for using Webhooks, it helps with scaling the notification workflow. It also allows us to combine multiple Conditions from different Assets into a single workflow. To learn how to create an Asset Tree, follow the "Asset Trees 1 - Introduction.ipynb" tutorial in the SPy Documentation folder contained in each new Seeq Data Lab project. The script for the Monitoring Asset Tree used in this post is attached for reference: Monitoring Asset Tree.ipynb

     Alternatively, Asset Groups can also be used to create an asset structure directly in Workbench without using Python. Once the Asset Group/Tree containing the monitoring Conditions is determined, create a Worksheet with a Treemap or Table overview for monitoring use. Make note of the URL, as it will be included in the notification as a link to the monitoring overview whenever an event is detected. For locally scoped Asset Groups or Trees, it will also inform the script where to look for Conditions.

     Step 3: Install the "pymsteams" library in Seeq Data Lab

     The pymsteams library allows users to compose and post messages (or cards) to MS Teams. The library can be installed from the pypi repository (pypi.org) using the "pip install" command.
     1) Open a Seeq Data Lab project
     2) Launch a Terminal session
     3) Install the pymsteams library by executing:

         pip install pymsteams

     Additional documentation on pymsteams can be found here: https://pypi.org/project/pymsteams/

     Step 4: Create or Update the Monitoring script

     We are now ready to configure a monitoring script that sends notifications to the Webhook configured in Step 1 using Conditions scoped to the Asset Tree in Step 2.

     a) Import the relevant libraries, including the newly installed pymsteams library

         import pandas as pd
         from datetime import datetime, timedelta
         import pytz
         import pymsteams

     b) Configure Input Parameters

         # Refer to Microsoft documentation on how to configure a Webhook for a MS Teams channel
         webhook_url = 'YOUR WEBHOOK HERE'

         # Specify the monitoring workbook - this is where the alert will link with the associated timeframe
         monitoring_workbook_url = 'YOUR WORKBOOK HERE'

         # Specify the asset tree and associated condition for which the webhook should be triggered
         asset_tree = 'Compressor Monitoring'
         monitoring_condition = 'High Temperature'

         # Specify the lookback period and timezone to search for capsules
         lookback_interval_hours = 24
         timezone = ('US/Mountain')

     c) Search for Event Capsules

         # Set time range to look for new conditions
         delta = timedelta(hours=lookback_interval_hours)
         end = datetime.now(tz=pytz.timezone(timezone))
         start = end - delta

         # Parse the workbook information
         workbook_id = spy.utils.get_workbook_id_from_url(monitoring_workbook_url)
         worksheet_id = spy.utils.get_worksheet_id_from_url(monitoring_workbook_url)

         # This block is optional. It stores the search results for the conditions once, instead of searching
         # each time the script runs, which saves time if the search result is not expected to change.
         # To reset, just delete the .pkl file.
         pkl_file_name = asset_tree + '_' + monitoring_condition + '_' + workbook_id + '.pkl'
         try:
             monitoring_conditions = pd.read_pickle(pkl_file_name)
         except:
             monitoring_conditions = spy.search({'Name': monitoring_condition,
                                                 'Type': 'Condition',
                                                 'Path': asset_tree},
                                                workbook=workbook_id, quiet=True)
             monitoring_conditions.to_pickle(pkl_file_name)

         # Pull capsules present during the specified time range
         events = spy.pull(monitoring_conditions, start=start, end=end, group_by=['Asset'], header='Asset', quiet=True)
         number_of_events = len(events)
         events

     d) Send Message to Webhook using the pymsteams library if a Capsule is detected in the time range

         # If capsules are present, trigger the webhook to compile and send a card to MS Teams
         if number_of_events != 0:
             events.sort_values(by='Condition', inplace=True)

             # Create url for the specific notification time-frame using the Seeq URL builder
             investigate_start = start.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
             investigate_end = end.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
             investigate_url = f"https://explore.seeq.com/workbook/builder?startFresh=false"\
                               f"&workbookName={workbook_id}"\
                               f"&worksheetName={worksheet_id}"\
                               f"&displayStartTime={investigate_start}"\
                               f"&displayEndTime={investigate_end}"\
                               f"&expandedAsset={asset_tree}"

             # Create message information to be posted in the channel
             # (event[2], event[3], event[4] are the Capsule Start, Capsule End, and
             # Capsule Is Uncertain columns returned by spy.pull)
             assets = []
             text = []
             for event in events.itertuples():
                 assets.append(event.Condition)
                 # Capsule started before the lookback window
                 if pd.isnull(event[2]):
                     if pd.isnull(event[3]) or event[4] == True:
                         text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and is in Progress')
                     else:
                         text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
                 # Capsule started during the lookback window
                 else:
                     if pd.isnull(event[3]) or event[4] == True:
                         text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and is in Progress')
                     else:
                         text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
             message = '\n'.join(text)

             # Create MS Teams Card - see pymsteams documentation for details
             TeamsMessage = pymsteams.connectorcard(webhook_url)
             TeamsMessage.title(monitoring_condition + " Event Detected")
             TeamsMessage.text(monitoring_condition + ' triggered in ' + asset_tree + f' Asset Tree in the last {lookback_interval_hours} hours')
             TeamsMessageSection = pymsteams.cardsection()
             for i, value in enumerate(text):
                 TeamsMessageSection.addFact(assets[i], value)
             TeamsMessage.addSection(TeamsMessageSection)
             TeamsMessage.addLinkButton('Investigate in Workbench', investigate_url)
             TeamsMessage.send()

     Step 5: Test the Script

     Execute the script, ensuring at least one "High Temperature" capsule is present in the lookback duration. The events dataframe in Step 4 c) will list the capsules that were detected. If no capsules are present, adjust the lookback duration. If at least one capsule is detected, a notification will automatically be posted in the channel for which the Webhook has been configured.

     Step 6: Schedule Script to run on a specified Frequency

     If the script operates as desired, configure a schedule for it to run automatically.

         # Optional - schedule the above script to run on a regular interval
         spy.jobs.schedule(f'every day at 6am')

     The script will run on the specified interval and post a summary of "High Temperature" capsules/events that occur during the lookback period directly to the MS Teams channel. Refer to the spy.jobs.ipynb notebook in the "SPy Documentation" folder for additional information on scheduling options.

     Attached is a copy of the full example script: Seeq MS Teams Notification Webhook - Example Script.ipynb
    4 points
  5. When creating signal forecasts, especially for cyclic signals that degrade, we often use forecastLinear() in Formula to easily forecast a signal out into the future to determine when a threshold is met. The methodology is often the same regardless of whether we are looking at a filter, a heat exchanger, or any other equipment that fouls over time or needs to go through periodic maintenance when a KPI threshold is met. A question that comes up occasionally from users is how to create a signal forecast that only uses data from the current operation cycle. The forecastLinear() operator only takes into account a historical training period and does not determine whether that data is coming from the current cycle or not, which can produce unexpected results.

     Before entering the formula, you will need to define:
     - a condition that identifies the current cycle; here I have called it "$runningCycle"
     - a signal to do a linear forecast on; I have called it "$signal"

     To forecast out into the future based on only the most recent cycle, the following code snippet can be used in Formula:

         $training = $runningCycle.setMaximumDuration(10d).toGroup(capsule(now()-2h, now()))
         $forecast = $signal.predict($training, timeSince(toTime('2000-01-01T00:00Z')))
         $signal.forecastSplice($forecast, 1d)

     In this code snippet, there are a few parameters that you might want to change:
     - .setMaximumDuration(10d): results in a longest cycle duration of 10 days. This should be changed to be longer than the longest cycle you would expect to see.
     - capsule(now()-2h, now()): this creates a period during which Seeq will look for the most recent cycle, in this case any time in the last 2 hours. If you have very frequent data (samples every few seconds to minutes), then 2 hours or less will work. If you have infrequent data (samples once a day or less), extend this so that it covers the last 3-4 data points.
     - $signal.forecastSplice($forecast, 1d): When using forecastLinear(), there is an option to force the prediction through the last sample point. This duration parameter (1 day in this case) does something similar - it blends the last historical data point with the forecast over the given time range. In other words, if the last data point had a value of 5 but my first forecasted data point had a value of 10, this parameter is the time frame over which to smooth from the historical data point to the forecast.
    4 points
  6. Hi Coolhunter, I have seen this requested multiple times, and one solution might be to use a custom PI Vision symbol that enables you to embed Seeq content into PI Vision. A solution to this challenge can be found here: Get the most out of PI Vision - Seeq Analytics in PI Vision - Seeq in PI Vision (werusys.de). If you want to know more about the PI Vision integration with Seeq, feel free to drop me an email: [email protected] Cheers, Julian Seeq-WerusysPIVision.pdf
    3 points
  7. Check out the Data Lab script and the video that walks through it to automate data pull -> apply ML -> push results to Workbench in an efficient manner. Of course there are many ways to skin the cat, but this gives a good way to do it in bulk. Use case details: apply ML on Temperature signals across the whole Example Asset Tree on a weekly basis. For your case, you can build your own asset tree, filter the relevant attributes instead of Temperature, and set the spy.jobs.schedule frequency to whatever works for you. A minimal sketch of the overall pattern is shown below. Let me know if there are any unanswered questions in my post or demo. Happy to update as needed. apply_ml_at_scale.mp4 Apply ML on Asset Tree Rev0.ipynb
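     The attached notebook and video are the full walkthrough; the following is only a hedged sketch of the pull -> ML -> push -> schedule pattern, with a simple scikit-learn model standing in for whatever ML you apply (the paths, dates, and model are illustrative, not taken from the attachments):

         import pandas as pd
         from sklearn.linear_model import LinearRegression
         # spy is available by default in Seeq Data Lab

         # Pull a week of Temperature signals from one branch of the asset tree (names are illustrative)
         signals = spy.search({'Path': 'Example >> Cooling Tower 1', 'Name': 'Temperature'})
         data = spy.pull(signals, start='2023-01-01', end='2023-01-08',
                         grid='15min', header='Fully Qualified Name')

         # Apply a simple per-signal model - a stand-in for your actual ML
         results = pd.DataFrame(index=data.index)
         x = (data.index - data.index[0]).total_seconds().to_numpy().reshape(-1, 1)
         for column in data.columns:
             y = data[column].ffill().bfill().to_numpy()
             model = LinearRegression().fit(x, y)
             results[column + ' (Model)'] = model.predict(x)

         # Push the model outputs back to a workbook
         spy.push(data=results, workbook='Apply ML at Scale')

         # Schedule this notebook to re-run weekly (schedule string format per spy.jobs docs)
         spy.jobs.schedule('every monday at 6am')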
    3 points
  8. Have you ever wanted to scale calculations in Seeq across different assets without having to delve into external systems or write code to generate asset structures? Is your process data historian a giant pool of tags which you need to have organized and named in a human-readable format? Do you want to take advantage of Seeq features such as Asset Swapping and Treemaps, but do not have an existing asset structure to leverage? If the answer is yes, Asset Groups can help!

     Beginning in Seeq version R52, Asset Groups were added to configure collections of items such as Equipment, Operating Lines, KPIs, etc. via a simple point-and-click tool. Users can leverage Asset Groups to easily organize and scale their analyses directly in Workbench, as well as apply Seeq asset-centric tools such as Treemaps and Tables across assets.

     What is an Asset Group?

     An Asset Group is a collection of assets (listed in rows) and associated parameters called "Attributes" (listed in columns). If your assets share common parameters, Asset Groups can be a great way to organize and scale analyses instead of re-creating each analysis separately. Assets can be anything users want them to be: a piece of equipment, a geographical region, a business unit, a KPI, etc. Asset Groups serve to organize and map associated parameters (Attributes) for each asset in the group. Each asset can have one or several Attributes mapped to it. Attributes are parameters that are common to all the assets and are mapped to tags from one or many data sources. Examples of Asset/Attribute combinations include:

         Asset           | Attribute(s)
         Pump            | Suction Pressure, Discharge Pressure, Flow, Curve ID, Specific Gravity
         Heat Exchanger  | Cold Inlet T, Cold Outlet T, Hot Inlet T, Hot Outlet T, Surface Area
         Production Line | Active Alarms, Widgets per Hour, % of Time in Spec

     It is very important to configure the name of the common Attribute to be the same for all assets, even if the underlying tag or datasource is not. Using standard nomenclature for Attributes (columns) enables Seeq to later compare and seamlessly "swap" between assets without having to worry about the underlying tag name or calculation.

     How to Configure Asset Groups in Seeq

     Let's create an Asset Group to organize a few process tags from different locations. While Asset Groups support pre-existing data tree structures (such as OSIsoft PI Asset Framework), the following example will assume the tags are not structured and are added manually from a pool of existing process tags. NOTE: Asset Groups require an Asset Group license. For versions prior to R54, they also have to be enabled in the Seeq Administrator Configuration page. Contact your Seeq Administrator for details.

     1) In the "Data" tab, create a new Asset Group.

     2) Specify the Asset Group name and add assets. You can rename the assets by clicking on the respective name in the first column. In this case, we'll define Locations 1-3.

     3) Map the source tags.
        a. Rename "Column 1" by clicking on the text and entering a new name.
        b. Click on the (+) icon to bring up the search window and add the tag corresponding to each asset. You can use wildcards and/or regular expressions to narrow your search.
        c. Repeat mapping of the tags for the other assets until there's a green checkmark in each row.
        d. Additional source tags can be added by clicking on the "Add Column" button in the toolbar. In this case, we will add a column for Relative Humidity and map a tag for each of the Locations.

     4) Save the Asset Group.

     5) Trend using the newly created Asset Group. The newly created Asset Group will now be available in the Data pane and can be used for navigation and trending.
        a. Navigate to "Location 1" and add the items to the display pane by clicking on them. You can also change the display range to 7 days to show a bit more data.
        b. Notice the Assets column now listed in the Details pane, showing which asset each signal originates from. We can also add the asset path to the display pane by clicking on Labels and checking the desired display configuration settings (Name, Unit of Measure, etc.).
        c. Swap to Location 2 (or 3) using the Asset Swapping functionality. In the Data tab, navigate up one level in the Asset Group, then click the Swap icon to swap the display items from a different location. Notice how Seeq will automatically swap the display items.

     6) Create a "High Temperature" Condition. Calculations configured from Asset Group items will "follow" that asset, which can help in scaling analyses.
        a. Using "Tools -> Identify -> Value Search", create a condition for when the Temperature exceeds 100.
        b. Click "Execute" to generate the condition.
        c. Notice the condition has been generated and is automatically affiliated with the asset from which the signals were selected.
        d. Swap to a different asset and notice the "High Temperature" condition will swap using the same condition criteria, but with the signals from the swapped asset.
        Note: Calculations can also be configured in the Asset Group directly, which can be advantageous if different condition criteria need to be defined for each asset. This topic will be covered in Part 2 of this series.

     7) Create a Treemap. Asset Groups enable users to combine monitoring across assets using Seeq's Treemap functionality.
        a. Set up a Treemap for the assets in the group by switching to the Treemap view in the Seeq Workbench toolbar.
        b. Click on the color picker for the "High Temperature" condition to select a color to display when that condition is active in the given time range. (If you have more than one condition in the Details pane, repeat this step for each condition.)
        c. A Treemap is generated for each asset in the Asset Group. Signal statistics can optionally be added by configuring the "Statistics" field in the toolbar. Your treemap may differ depending on the source signal and time range selected. The treemap will change color if the configured condition triggers during the time period selected.

     This covers the basics for Asset Groups. Please check out Part 2 on how to configure calculations in Asset Groups and add them directly to the hierarchy.
    3 points
  9. Users of OSIsoft Asset Framework often want to filter elements and attributes based on the AF Templates they were built on. At this time, though, the spy.search command in Seeq Data Lab only filters based on the properties Type, Name, Description, Path, Asset, Datasource Class, Datasource ID, Datasource Name, Data ID, Cache Enabled, and Scoped To. This post discusses a way in which we can still filter elements and/or attributes based on AF Template.

     Step 1: Retrieve all elements in the AF Database

     The code below will return all assets in an AF Database that are based on an AF Template whose name contains Location.

         # Make sure to include all properties, since this will also return the AF Template
         asset_search = spy.search({"Path": "Example-AF", "Type": "Asset"}, all_properties=True)

         # Remove assets not based on a template, since we can't filter with NaN values
         asset_search.dropna(subset=['Template'], inplace=True)

         # Apply the filter to only consider Location AF Template assets
         asset_search_location = asset_search[asset_search['Template'].str.contains('Location')]

     Step 2: Find all relevant attributes

     This code will retrieve the desired attributes. Note that wildcards and regular expressions can be used to find multiple attributes.

         # Find desired attributes
         signal_search = spy.search({"Path": "Example-AF", "Type": "Signal", "Name": "Compressor Power"})

     Step 3: Filter attributes based on whether they come from an element built on the desired AF Template

     The last step cross-references the signals returned with the desired elements. This is done by looking at their paths.

         # Define a function to recreate paths; items directly beneath the database asset don't have a Path
         def path_merger(row):
             row = row.dropna()
             return ' >> '.join(row)

         # Create a path for each asset that includes its name
         asset_search_location['Full Path'] = asset_search_location[['Path', 'Asset', 'Name']].apply(lambda row: path_merger(row), axis=1)

         # Create a path for the parents of the signals
         signal_search['Parent Path'] = signal_search[['Path', 'Asset']].apply(lambda row: path_merger(row), axis=1)

         # Cross-reference the parent path in signals with the full paths in assets to see if these signals are children of the desired elements
         signal_search_location = signal_search[signal_search['Parent Path'].isin(asset_search_location['Full Path'])]
    3 points
 10. If you modify your wind_dir variable to

         $wind_dir = group(
             capsule(0, 22.5).setProperty('Value', 'ENUM{{0|N}}'),
             capsule(22.5, 67.5).setProperty('Value', 'ENUM{{1|NE}}'),
             capsule(67.5, 112.5).setProperty('Value', 'ENUM{{2|E}}'),
             capsule(112.5, 157.5).setProperty('Value', 'ENUM{{3|SE}}'),
             capsule(157.5, 202.5).setProperty('Value', 'ENUM{{4|S}}'),
             capsule(202.5, 247.5).setProperty('Value', 'ENUM{{5|SW}}'),
             capsule(247.5, 292.5).setProperty('Value', 'ENUM{{6|W}}'),
             capsule(292.5, 337.5).setProperty('Value', 'ENUM{{7|NW}}'),
             capsule(337.5, 360).setProperty('Value', 'ENUM{{8|N}}')
         )

     you will get an ordered Y axis. This is how Seeq handles enum signal values from other systems - it has some limitations, but it seems like it should work well for your use case.
    2 points
 11. Summary/TLDR

     Users commonly want to duplicate Seeq created items (Value Search, Formula, etc.) for different purposes, such as testing the effect of different calculation parameters, expanding calculations to similar areas/equipment, collaboration, etc. Guidance is summarized below to prevent unintended changes.

     Duplicating Seeq created items on a worksheet: Creates new/independent items that can be modified without affecting the original.

     Duplicating worksheets within a Workbench Analysis: Duplicating a worksheet simply copies the worksheet but doesn't create new/independent items. A change to a Seeq created item on one sheet modifies the same item everywhere it appears, on all other worksheets.

     Duplicating an entire Workbench Analysis: Creates new/independent items in the duplicated Workbench Analysis. You can modify them without affecting the corresponding items in the original Workbench Analysis.

     Details

     Each worksheet in an analysis can be used to help tell the story of how you got to your conclusions or to give a different view into a related part of your process. Worksheets can be added, renamed, and duplicated, and entire analyses can also be duplicated: see Worksheet and Document Organization.

     Confusion sometimes arises for Seeq users when editing existing calculation items (Value Searches, Formulas, etc.) that appear on multiple worksheets within the same analysis. Often a user will duplicate a worksheet within an analysis and not realize that editing existing items on the new worksheet also changes the same items everywhere else they are used within the analysis. They assume that each individual worksheet is independent of the others, but this is not the case. The intent of this post is to eliminate this confusion and to prevent users from making unintended changes to calculations.

     Working with the same item on a Duplicated Worksheet

     When duplicating worksheets, remember that everything within a single Workbench Analysis, no matter what worksheet it is on, is "scoped" to the entire analysis. Duplicating a worksheet simply copies the worksheet but doesn't create new/independent items. A change to an item on one sheet modifies it everywhere it appears (on all other worksheets). For some use cases, duplicating a worksheet is a quick way to expand the calculations further or to create alternate visualizations, and the user wants to continue working with the original items. In other situations, worksheet duplication may be a first step in creating new versions of existing items. To avoid modifying an original item on a duplicated worksheet, from the Item Properties (Details Pane "i" symbol) for the calculated signal/condition of interest, click DUPLICATE. You can edit the duplicated version without affecting the original. Duplicating worksheets is often useful when you are doing multiple calculation steps on different worksheets, when you want trends on one worksheet and tables or other visualizations on another, when doing asset swapping and creating a worksheet for each unique asset, etc.

     Working with Items in a Duplicated Workbench Analysis

     If you duplicate the entire Workbench Analysis (for example, from the Seeq start page), new/independent items are created in the duplicated Workbench Analysis. You can modify the items in the duplicated analysis without affecting the corresponding items in the original. This is often a good approach when you have created a lengthy set of calculations and you would like to modify them or apply them in a similar way for another piece of equipment, processing line, etc., and an asset group approach isn't applicable.

     There is one exception to this: Seeq created items that have been made global. Global items can be searched for and accessed outside of an individual Workbench Analysis. Editing a global item in a duplicated analysis will change it everywhere else it appears.

     There are many considerations for best practices when testing new parameter values and modifications for existing calculations. Keep in mind the differences between duplicating worksheets and duplicating entire analyses, and of course consider the potential use of asset groups when needing to scale similar calculations across many assets, pieces of equipment, process phases, etc. There are in-depth posts here with further information on asset groups: Asset Groups 101 - Part 1 and Asset Groups 101 - Part 2.
    2 points
 12. Seasonal variation can influence specific process parameters whose values depend on ambient conditions, or perhaps raw material make-up changes over the year's seasons based on scheduled orders from different vendors. For these reasons and more, it may not suffice to compare your previous month's process parameters against the current month's. In these situations, it may be best to compare current product runs against product runs that occurred in the same month one year ago, in order to assess consistency or deviations. In Seeq, this can be achieved by utilizing Condition Properties.

     1. Bring in raw data. For this example, I will utilize a single parameter (Viscosity) and a grade code signal.

     2. Convert the Product step-signal into a condition. Add properties of Product ID, variable statistic(s), and month start/end times.

         // Create a month condition. Be sure to specify your time zone so that start/end times are at 12:00 AM
         $m = months('US/Eastern')

         // Create a signal for start and end times to add into the "Product ID" condition
         $start_signal = $m.toSignal('Start', startKey()).toStep()
         $end_signal = $m.toSignal('End', startKey()).toStep()

         // Convert the string signal into a condition, with a capsule at each unique string.
         // Specifying 'Product ID' ensures the respective values in the signal populate
         // a property named 'Product ID'
         $p.toCondition('Product ID')
           .removeLongerThan(100d)                                    // Bound the condition. 100d is an arbitrary limit
           .setProperty('Avg Visc', $vs, average())                   // Set 'Avg Visc' property reflecting avg visc over each product run
           .setProperty('Month Start', $start_signal, startValue())   // Set 'Month Start' property to know what month the product ran
           .setProperty('Month End', $end_signal, startValue())       // Set 'Month End' property to know what month the product ran

     3. Create another condition that has a capsule ranging the entire month for each product run within the month. Add similar properties, but note the naming differences of 'Previous Month Start' and 'Previous Month Avg Visc'. This is because in the next step we will move this condition forward by one year.

         $pi.grow(60d)       // Need to grow the capsules in the condition to ensure they consume the entire month
           .transform($capsule ->                                     // For each capsule (product run) in 'Product ID'...
               capsule($capsule.property('Month Start'),
                       $capsule.property('Month End'))                // ...create a capsule ranging the entire month
               .setProperty('Product ID', $capsule.property('Product ID'))               // Add property of Product ID
               .setProperty('Previous Month Start', $capsule.property('Month Start'))    // Add Month Start as 'Previous Month Start'
               .setProperty('Previous Month Avg Visc', $capsule.property('Avg Visc'))    // Add Avg Visc as 'Previous Month Avg Visc'
           )

     Notice we now have many overlapping capsules in our new condition, each ranging an entire month - one for each product run that occurred within the month.

     4. Move the previous 'Month's Product Runs' condition forward a year and merge it with the existing 'Product ID' condition. Aggregate the 'Previous Month Avg Visc' properties. This ensures that if a product was run multiple times with a different avg visc value in each run, then what is displayed will be the average of all the avg visc values for that product.

         $previousYearMonthProductRun = $mspi.move(1y)       // Move the condition forward a year

         // Merge the properties of both conditions only if their capsules share a common value of 'Product ID'
         $pi.mergeProperties($previousYearMonthProductRun, 'Product ID',
             keepProperties(),                                        // keepProperties() preserves all existing properties
             aggregateProperty('Previous Month Avg Visc', average())) // aggregateProperty() takes the average of all
                                                                      // 'Previous Month Avg Visc' properties if multiple exist,
                                                                      // i.e. if there were multiple product runs, each with a
                                                                      // different value, then take the average of all of them

     The resulting condition will match our original condition, except now with two new properties: 'Previous Month Start' and 'Previous Month Avg Visc'. We can then add these properties to a condition table to create a cleaner view. We could also consider creating other statistics of interest, such as the % difference of the current Avg Visc vs the Previous Month Avg Visc. To do this, we could use a method similar to gathering $start_signal and $end_signal in Step 2: create the calculation using the signals, then add it back to the condition as a property. A sketch of this follows.
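     For instance, a hedged sketch of that % difference calculation (not from the original post; $final is assumed to be the merged condition from Step 4, and the property values are assumed to carry consistent units):

         // Pull the two properties back out as signals over each capsule
         $curr = $final.toSignal('Avg Visc', durationKey()).toStep()
         $prev = $final.toSignal('Previous Month Avg Visc', durationKey()).toStep()

         // % difference of the current average vs the same month one year ago
         $pctDiff = (($curr - $prev) / $prev * 100).setUnits('%')

         // Add the result back to the condition as a property
         $final.setProperty('Visc % Diff vs Prev Year', $pctDiff, average())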
    2 points
 13. Capsule Based Back Prediction or Back-Casting

     Scenario: Instead of forecasting data into the future, there may be a need to extrapolate a signal back in time based on data from an event or period of interest. The following steps will allow you to backcast a target signal from every capsule within a condition.

     Data
     - Target Signal: a signal that you would like to backcast.
     - Event: a condition that encapsulates the event or period of interest from which you would like to backcast the target signal. The target signal must have sufficient sample points within each capsule to create an accurate regression model.

     Method

     Step 1. Create a new extended event that combines the capsules from the original event with a prediction window for backcasting. In this example, the prediction window is 1 hr and a maximum capsule duration of 40 h is defined.

         $prediction_window = $event.beforeStart(1h)
         $prediction_window.join($event, 40h)

     Step 2. Create a new time since signal that quantifies the time since the beginning of each capsule in the extended event condition. This new signal will be the independent variable in the regression model. Replace 1min with a sample frequency sufficient for your use case.

         $extended_event.timeSince(1min)

     Step 3. In Formula, use the example below to create a regression model for the target signal, with data from the event as training data and the time since signal as the independent variable. Assign the regression model coefficients as capsule properties for a new condition called regression condition.

         $event.transform($cap -> {
             $model = $target_signal.validValues().regressionModelOLS(
                 group($cap), false, $time_since, $time_since^2)
             $cap
               .setProperty('m1', $model.get('coefficient1'))
               .setProperty('m2', $model.get('coefficient2'))
               .setProperty('c', $model.get('intercept'))
         })

     The formula above creates a second-order polynomial ordinary least squares regression model. The order of the polynomial can be modified (from linear up to 9th order) by adding sequential $time_since^n terms to the regressionModelOLS() call and assigning each additional coefficient in the same way as m1 and m2; a sketch of the third-order adjustment follows at the end of this post.

     Step 4. Using the regression model coefficients from the regression condition and the time since signal, the target signal can then be backcast over the prediction window.

         $c = $regression_condition.toSignal('c', durationKey()).aggregate(average(), $extended_event, durationKey())
         $m1 = $regression_condition.toSignal('m1', durationKey()).aggregate(average(), $extended_event, durationKey())
         $m2 = $regression_condition.toSignal('m2', durationKey()).aggregate(average(), $extended_event, durationKey())
         return $m1*$time_since + $m2*$time_since^2 + $c

     The example above is for a second-order polynomial, and the formula needs to be modified depending on the order of the polynomial defined in Step 3 (for a linear model, drop the $m2 term). Note that it may be required to manually set the units (using the setUnits() function) of each part of the polynomial equation.

     Result

     The result is a new signal which backcasts the target signal for the duration of the prediction window prior to the event or period of interest.
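     As referenced in Step 3, here is a sketch of the third-order adjustment. It is reconstructed to mirror the second-order version above (the original screenshot is not included here), so verify the coefficient names against the regressionModelOLS() documentation for your Seeq version:

         $event.transform($cap -> {
             $model = $target_signal.validValues().regressionModelOLS(
                 group($cap), false, $time_since, $time_since^2, $time_since^3)
             $cap
               .setProperty('m1', $model.get('coefficient1'))
               .setProperty('m2', $model.get('coefficient2'))
               .setProperty('m3', $model.get('coefficient3'))
               .setProperty('c', $model.get('intercept'))
         })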
    2 points
 14. A common industrial use case is to select the highest or lowest signal value among several similar measurements. One example is identifying the highest temperature in a reactor or distillation column containing many temperature signals. Among many other situations, this is useful for identifying the current "hot spot" location to analyze catalyst deactivation/performance degradation.

     When selecting the highest value over time among many signals, Seeq's max() Formula function makes this easy. Likewise, if selecting the lowest value, the min() Formula function can be used. A more challenging use case is to select the 2nd highest, 3rd highest, etc., among a set of signals. There are several approaches to do this using Seeq Formula, and there may be caveats with each one. I will demonstrate one approach below, using a set of 4 temperature signals (T100, T200, T300, T400). A sketch of the formulas for each step is included after the step descriptions.

     1. We first convert each of the raw temperature signals to step interpolated signals, and then resample the signals based on the sample values of a chosen reference signal that has representative, regular data samples (in this case, T100). This makes the later formulas a little simpler overall and provides slightly cleaner results when signal values cross each other. The T200 step signal Formula includes a resample based on using 'T100 Step' as a reference signal; the 'T300 Step' and 'T400 Step' formulas are identical to that for T200 Step, with the raw T signals substituted.

     2. We now create the 'Highest T Value' signal using the max() function and the step version T signals.

     3. To create the '2nd Highest T Value' signal, we use the splice() function to insert 0 values where a given T signal is equal to the 'Highest T Value'. Following this, the max() function can again be used, but this time it will select the 2nd highest value.

     4. The process is repeated to find the '3rd Highest T Value', with a very similar formula, but substituting in values of 0 where a given T signal is >= the '2nd Highest T Value'. The result can then be checked over a time period where there are several transitions of the T signal ordering.

     5. The user may also want to create a signal which identifies the highest value temperature signal NAME at any given point in time, for trending, display in tables, etc. We again make use of the splice() function, to insert the corresponding signal name when that signal is equal to the 'Highest T Value'. Similarly, the '2nd Highest T Sensor' is created, but using the '2nd Highest T Value', and the '3rd Highest T Sensor' is created in the same way.

     We now have correctly identified values and sensor names: highest, 2nd highest, and 3rd highest. This approach (again, one possible approach of several) can be extended to as many signals as needed, can be adapted for finding low values instead of high values, can be used for additional calculations, etc.
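     Since the original formula screenshots are not reproduced here, below is a hedged Seeq Formula sketch of steps 1-3 and 5. The signal names come from the post, but the exact resample and splice arguments are reconstructed, so verify against your data (and note the spliced 0 may need the same units as your signals, e.g. 0°F):

         // Step 1 - 'T100 Step' (the reference signal)
         $t100.toStep()

         // 'T200 Step' - resampled onto the keys of the reference step signal
         // ('T300 Step' and 'T400 Step' are identical, with the raw signals substituted)
         $t200.toStep().resample($t100Step)

         // Step 2 - 'Highest T Value'
         max($t100Step, $t200Step, $t300Step, $t400Step)

         // Step 3 - '2nd Highest T Value': splice in 0 wherever a signal is the current highest
         // (>= behaves like equality here, since no signal exceeds the highest), then take the max of the rest
         max($t100Step.splice(0.toSignal(), $t100Step >= $highest),
             $t200Step.splice(0.toSignal(), $t200Step >= $highest),
             $t300Step.splice(0.toSignal(), $t300Step >= $highest),
             $t400Step.splice(0.toSignal(), $t400Step >= $highest))

         // Step 5 - 'Highest T Sensor' name signal
         'T100'.toSignal().splice('T200'.toSignal(), $t200Step >= $highest)
                          .splice('T300'.toSignal(), $t300Step >= $highest)
                          .splice('T400'.toSignal(), $t400Step >= $highest)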
    2 points
 15. Hi, the error means that you are referencing a variable that is not defined in your variable list. You should change the variable "$signal1" in your formula to a variable you have in your variables list. Also be aware that you cannot use a signal and a condition together in combineWith(): you can combine either signals only or conditions only. Regards, Thorsten
    2 points
  16. I'd make two conditions, one for RPM and one for Temperature, then try to use the "Combining Conditions" formulas. I think .encloses() would work.
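     For illustration, a minimal sketch of that suggestion (the variable names and thresholds are assumed - adjust the limits and units to your case):

         // A condition for each variable, then keep RPM capsules that fully contain a temperature excursion
         $highRpm = $rpm > 4000
         $highTemp = $temperature > 150
         $highRpm.encloses($highTemp)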
    2 points
 17. Statistical Process Control (SPC) can give production teams a uniform way to view and interpret data to improve processes and to identify and prevent production issues. Control charts and run rule conditions can be created in Seeq to monitor near-real-time data and flag when the data indicates abnormal or out-of-control behavior.

     Creating a Control Chart

     1. Find the signal of interest and a signal that can be used to detect the current production grade (ideally a grade code or similar signal - if this does not exist for your product, you can use process set points to stitch together a calculated grade signal). Use .toCondition() in Formula to create a condition for each change in the Grade_Code signal.

     2. Check the data for normalcy and other statistical assumptions prior to proceeding. To check for normalcy, use the Histogram tool in Seeq. Find more information on the Histogram tool in the Seeq Knowledge Base. Note that for this analysis we are using a subgroup size of one and assuming normalcy.

     3. Determine the methodology to use to create the average and standard deviation for each grade. In this case, we will identify periods after start-up when the process was in control using a Manual Condition, and select these times across each grade.

     4. Calculate the mean and standard deviation for each grade, based on the times the process was in control. Choose a time window that captures all capsules created for the in-control period. To use an unweighted mean, use .toDiscrete() before calculating the average. The same calculation used for the mean can be used for the standard deviation, by replacing average() with stddev().

         // Define the time period that contains all in-control capsules
         $capsule = capsule('2020-10-16T00:00-00:00', '2021-09-21T00:00-00:00')

         // Narrow down the data to when the process is in control; use keep() to filter the condition by the grade code capsule property
         $g101 = $allgrades.keep('Grade Code', isMatch('Grade 101')).intersect($inControl)
         $g102 = $allgrades.keep('Grade Code', isMatch('Grade 102')).intersect($inControl)
         $g103 = $allgrades.keep('Grade Code', isMatch('Grade 103')).intersect($inControl)

         // Create the average based on the times the product is in control; use .toDiscrete() for an unweighted average
         $g101_ave = $viscosity.remove(not $g101).toDiscrete().average($capsule)
         $g102_ave = $viscosity.remove(not $g102).toDiscrete().average($capsule)
         $g103_ave = $viscosity.remove(not $g103).toDiscrete().average($capsule)

         // Combine the averages for all grades into one signal using splice(); use keep() to filter by grade code,
         // and within() to show the average only during the condition
         0.splice($g101_ave, $allgrades.keep('Grade Code', isMatch('Grade 101')))
          .splice($g102_ave, $allgrades.keep('Grade Code', isMatch('Grade 102')))
          .splice($g103_ave, $allgrades.keep('Grade Code', isMatch('Grade 103')))
          .within($allgrades)

     5. Use the mean and standard deviation to create +/-1 sigma limits, +/-2 sigma limits, and +/-3 sigma limits (the last sometimes called upper and lower control limits). Here is an example of creating the +2 sigma limit:

         // Add 2 * standard deviation to the mean to create the $plus2sd limit; use within() to show limits only during the time periods of interest
         ($mean + (2 * $standardDeviation)).within($grade_code)

     6. Overlay the standard deviation limits and mean with the signal of interest by placing them in one lane on one y-axis, and remove the standard deviation from the display.

     Creating Run Rules

     Once the control chart is created, run rule conditions can be created to detect instability and the presence of assignable cause in the process. In this example, Western Electric Run Rules are used, but other run rules can be applied using similar principles.

     Western Electric Run Rules:
     Run Rule 1: Any single data point falls outside the 3-sigma limit from the centerline.
     Run Rule 2: Two out of three consecutive points fall beyond the 2-sigma limit, on the same side of the centerline.
     Run Rule 3: Four out of five consecutive points fall beyond the 1-sigma limit, on the same side of the centerline.
     Run Rule 4: Nine consecutive points fall on the same side of the centerline.

     7. The following formulas can be used to create a condition for each run rule.

     Run Rule 1:

         // Convert to a step signal
         $signalStep = $signal.toStep()

         // Find when a single data point goes outside the plus-3-sigma or minus-3-sigma limits,
         // and set the property on the condition
         ($signalStep < $minus3sd or $signalStep > $plus3sd)
         .setProperty('Run Rule', 'Run Rule 1')

     Run Rule 2 (note that the function toCapsulesByCount() is available in Seeq versions R54+):

         // Create a step-interpolated signal to keep from capturing the linear interpolation between sample points
         $signalStep = $signal.toStep()

         // Create capsules for every 3 samples ($toCapsulesbyCount) and for every sample ($toCapsules);
         // set the maximum interpolation based on the longest time you would expect between samples
         $toCapsulesbyCount = $signalStep.toCapsulesByCount(3, 3*$maxinterp)
         $toCapsules = $signalStep.toCapsules()

         // Create conditions for when the signal is not between the +/-2 sigma limits;
         // separate upper and lower to capture when the rule violations occur on the same side of the centerline
         $condLess = ($signalStep <= $minus2sd)
         $condGreater = ($signalStep >= $plus2sd)

         // Within 3 data points ($toCapsulesbyCount), count how many sample points are not between the +/-2 sigma limits
         $countLess = $signal.toDiscrete().remove(not $condLess).aggregate(count(), $toCapsulesbyCount, durationKey())
         $countGreater = $signal.toDiscrete().remove(not $condGreater).aggregate(count(), $toCapsulesbyCount, durationKey())

         // Find when 2+ out of 3 are outside of the +/-2 sigma limits, by setting the count as a property
         // on $toCapsulesbyCount and keeping only capsules with a count greater than or equal to 2
         $RR2below = $toCapsulesbyCount.setProperty('Run Rule 2 Violations', $countLess, endValue())
                                       .keep('Run Rule 2 Violations', isGreaterThanOrEqualTo(2))
         $RR2above = $toCapsulesbyCount.setProperty('Run Rule 2 Violations', $countGreater, endValue())
                                       .keep('Run Rule 2 Violations', isGreaterThanOrEqualTo(2))

         // Find every sample point capsule that touches a run rule violation capsule;
         // combine upper and lower into one condition, and use merge to combine overlapping capsules and remove properties
         $toCapsules.touches($RR2below or $RR2above).merge(true)
                    .setProperty('Run Rule', 'Run Rule 2')

     Run Rule 3 (note that the function toCapsulesByCount() is available in Seeq versions R54+):

         // Create a step-interpolated signal to keep from capturing the linear interpolation between sample points
         $signalStep = $signal.toStep()

         // Create capsules for every 5 samples ($toCapsulesbyCount) and for every sample ($toCapsules);
         // set the maximum interpolation based on the longest time you would expect between samples
         $toCapsulesbyCount = $signalStep.toCapsulesByCount(5, 5*$maxinterp)
         $toCapsules = $signalStep.toCapsules()

         // Create conditions for when the signal is not between the +/-1 sigma limits;
         // separate upper and lower to capture when the rule violations occur on the same side of the centerline
         $condLess = ($signalStep <= $minus1sd)
         $condGreater = ($signalStep >= $plus1sd)

         // Within 5 data points ($toCapsulesbyCount), count how many sample points are not between the +/-1 sigma limits
         $countLess = $signal.toDiscrete().remove(not $condLess).aggregate(count(), $toCapsulesbyCount, durationKey())
         $countGreater = $signal.toDiscrete().remove(not $condGreater).aggregate(count(), $toCapsulesbyCount, durationKey())

         // Find when 4+ out of 5 are outside of the +/-1 sigma limits, by setting the count as a property
         // on $toCapsulesbyCount and keeping only capsules with a count greater than or equal to 4
         $RR3below = $toCapsulesbyCount.setProperty('Run Rule 3 Violations', $countLess, endValue())
                                       .keep('Run Rule 3 Violations', isGreaterThanOrEqualTo(4))
         $RR3above = $toCapsulesbyCount.setProperty('Run Rule 3 Violations', $countGreater, endValue())
                                       .keep('Run Rule 3 Violations', isGreaterThanOrEqualTo(4))

         // Find every sample point capsule ($toCapsules) that touches a run rule violation capsule;
         // combine upper and lower into one condition, and use merge to combine overlapping capsules and remove properties
         $toCapsules.touches($RR3below or $RR3above).merge(true)
                    .setProperty('Run Rule', 'Run Rule 3')

     Run Rule 4:

         // Create a step-interpolated signal to keep from capturing the linear interpolation between sample points
         $signalStep = $signal.toStep()

         // Create capsules for every 9 samples ($toCapsulesbyCount) and for every sample ($toCapsules);
         // set the maximum interpolation based on the longest time you would expect between samples
         $toCapsulesbyCount = $signalStep.toCapsulesByCount(9, 9*$maxinterp)
         $toCapsules = $signalStep.toCapsules()

         // Create conditions for when the signal is either greater than or less than the mean;
         // separate upper and lower to capture when the rule violations occur on the same side of the centerline
         $condLess = $signalStep.isLessThan($mean)
         $condGreater = $signalStep.isGreaterThan($mean)

         // Find when the last 9 samples are fully within the greater-than or less-than-the-mean condition;
         // use merge to combine overlapping capsules and remove properties
         $toCapsules.touches(combineWith($toCapsulesbyCount.inside($condLess),
                                         $toCapsulesbyCount.inside($condGreater))).merge(true)
                    .setProperty('Run Rule', 'Run Rule 4')

     To make it easier to use these run rules in Seeq, custom formula functions can be created for each run rule using the User-Defined Formula Function Editor Add-on, which can be found in Seeq's Open Source Gallery along with user guides and instructions for installation. For example, Run Rule 2 can be simplified to the following formula using the User-Defined Formula Functions Add-on with Seeq Data Lab:

         $signal.WesternElectric_runRule2($minus2sd, $plus2sd)

     8. If desired, all run rules can be combined into one condition in Formula using combineWith():

         combineWith($runRule1, $runRule2, $runRule3, $runRule4)

     9. As a final step, a table can be created detailing the run rule violations in the trend view. Here, the header column is set as 'Capsule Property' >> 'Run Rule', and the capsule properties start, end, and duration were added as columns. The last value of the signal 'Grade_Code' was also added as a column to the table. For more information on Tables, see the Seeq Knowledge Base.
    2 points
 18. We often get asked how to use the various API endpoints via the Python SDK, so I thought it would be helpful to write a guide on how to use the API/SDK in Seeq Data Lab. As some background, Seeq is built on a REST API that enables all the interactions in the software. Whenever you are trending data, using a Tool, creating an Organizer Topic, or doing any of the other various things you can do in the Seeq software, the software is making API calls to perform the tasks you are asking for. From Seeq Data Lab, you can use the Python SDK to interact with the API endpoints in the same way users do in the interface, but through a coding environment.

     Whenever users want to use the Python SDK to interact with API endpoints, I recommend opening the API Reference via the hamburger menu in the upper right hand corner of Seeq. This opens a page that shows all the different sections of the API, with various operations beneath them. For some orientation, there are blue GET operations, green POST operations, and red DELETE operations. Although these may be obvious: GET operations are used to retrieve information from Seeq without making any changes - for instance, you may want to know the dependencies of a Formula, so you might GET the item's dependencies with GET /items/{id}/dependencies. POST operations are used to create or change something in Seeq - as an example, you may create a new workbook with the POST /workbooks endpoint. And finally, DELETE operations are used to archive something in Seeq - for instance, deleting a user would use the DELETE /users/{id} endpoint.

     Each operation endpoint has model example values for the inputs or outputs in yellow boxes, along with any required or optional parameters that need to be filled in, and then a "Try it out!" button to execute the operation. For example, I could use this to get the item information for the item with the ID "95644F20-BD68-4DFC-9C15-E4E1D262369C" (if you don't know where to get the ID, you can either use spy.search in Python or use Item Properties: https://seeq.atlassian.net/wiki/spaces/KB/pages/141623511/Item+Properties). Using the API Reference provides a nice, easy way to see what the inputs are and what format they have to be in. As an example, if I wanted to post a new property to an item, the Model shows that there is a very specific syntax format required. I typically recommend testing your syntax and operation in the API Reference to ensure that it has the effect you are hoping to achieve with your script before moving into Python to program that function.

     How do I code the API Reference operations into Python?

     Once you know what API endpoint you want to use and the format for the inputs, you can move into Python to code that using the Python SDK. The Python SDK comes with the seeq package that is loaded by default in Seeq Data Lab, or it can be installed for your Seeq version from pypi if you are not using Seeq Data Lab (see https://pypi.org/project/seeq/). To import the SDK, you can simply do the following:

         from seeq import sdk

     Once you've done that, if you start typing "sdk." and hit Tab after the period, you will see all the possible commands underneath the SDK. Generally, the first things you are looking for are the ones that end in "Api" - there should be one for each section observed in the API Reference - and we will need to log in to them using spy.client. If I want to use the Items API, I would first log in using the following command:

         items_api = sdk.ItemsApi(spy.client)

     Using the same trick as mentioned above, Tab after "items_api." will provide a list of the possible functions that can be performed on the ItemsApi. While the Python functions don't have exactly the same names as the operations in the API Reference, it should hopefully be clear which Python function corresponds to which API endpoint. For example, if I want to get the item information, I would use "get_item_and_all_properties". Similar to the Tab trick mentioned above, you can use Shift+Tab with any function to get the documentation for that function. Opening the documentation fully with the "^" icon shows that this function has two possible parameters, id and callback, where the callback is optional but the id is required, similar to what we saw in the API Reference above. Therefore, in order to execute this command in Python, I can simply add the ID parameter (as a string, as denoted by "str" in the documentation):

         items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C')

     In this case, because I executed a GET function, I get back all the information about the item that I requested. This same approach can be used for any of the API endpoints you want to work with.

     How do I use the information output from the API endpoint?

     Oftentimes, GET endpoints are used to retrieve a piece of information so it can be used in another function later on. From the previous example, maybe you want to retrieve the value of the item's "name". In this case, all you have to do is save the output as a variable, change it to a dictionary, and then request the key you desire. First, save the output as a variable, in this case called "item":

         item = items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C')

     Then convert the output "item" into a dictionary and request whatever key you would like:

         item.to_dict()['name']
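     As a worked example of the POST case mentioned above, here is a sketch (not from the original post): setting a property on an item. The property name and value are illustrative, and the model class name should be verified against your SDK version's documentation before use:

         from seeq import sdk

         items_api = sdk.ItemsApi(spy.client)

         # Set (or overwrite) a property on an item - mirrors POST /items/{id}/properties/{propertyName},
         # whose Model in the API Reference calls for a body with a "value" field
         items_api.set_property(
             id='95644F20-BD68-4DFC-9C15-E4E1D262369C',
             property_name='Description',
             body=sdk.PropertyInputV1(value='Reviewed by the reliability team'))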
    2 points
  19. To better understand their process, users often want to compare time-series signals in a dimension other than time - for example, seeing how the temperature within a reactor changes as a function of distance. Seeq is built to compare data against time, but this method highlights how we can use time to mimic an alternate dimension.

Step 1: Sample Alignment

In order to accurately mimic the alternate dimension, the samples to be included in each profile must occur at the same time. This can be achieved through a couple of methods in Seeq if the samples don't already align.

Option 1: Re-sampling. Re-sampling selects points along a signal at set intervals. You can also re-sample based on another signal's keys. Since it's possible for there not to be a sample at a given interval, the interpolated value is chosen. An example Formula demonstrating how to use the function is shown below.

//Function to resample a signal
$signal.resample(5sec)

Option 2: Average Aggregation. Aggregating allows users to determine the average of a signal over a given period of time and then place this average at a specific point within that period. Signal from Condition can be used to find the average over a period and place this average at a specific timestamp within the period. In the example below, the sample is placed at the start, but alignment will also occur if the samples are placed at the middle or end.

Step 2: Delay Samples

In Formula, apply a delay to the samples of each signal that represents its value in the alternate dimension. For example, if a signal occurs at 6 feet from the start of a reactor, delay it by 6. If there is no signal with a 0 value in the alternate dimension, the final graph will be offset by the smallest value in the alternate dimension. To fix this, create a placeholder signal in Formula, such as 0, and ensure its samples align with the other samples using the code listed below. This placeholder serves as a signal delayed by 0, meaning it has a value of 0 in the alternate dimension.

//Substitute Period_of_Time_for_Alignment with the period used above for aligning your samples
0.toSignal(Period_of_Time_for_Alignment)

Note: Choosing the unit of the delay depends upon the new sampling frequency of your aligned signals as well as the largest value you will have in the alternate dimension. For example, if your samples occur every 5 minutes, you should choose a unit where your maximum delay is not greater than 5 minutes. Please refer to the table below for selecting units.

Largest Value in Alternate Dimension | Highest Possible Delay Unit
23  | Hour, Hour (24 Hour Clock)
59  | Minute
99  | Centisecond
999 | Millisecond

Step 3: Develop Sample Profiles

Use the Formula listed below to create a new signal that joins the samples from your separate signals into a new signal. Replace "Max_Interpolation" with a number large enough to connect the samples within a profile, but small enough not to connect the separate profiles. For example, if the signals were re-sampled every 5 minutes but the largest delay applied was 60 seconds, any value below 4 minutes would work for the Max_Interpolation. This ensures the last sample within a profile does not interpolate to the first sample of the next profile.
//Make the signals discrete to only get raw samples, then use combineWith and toLinear to combine the signals while maintaining their uniqueness
combineWith($signal1.toDiscrete(), $signal2.toDiscrete(), $signal3.toDiscrete()).toLinear(Max_Interpolation)

Step 4: Condition Highlighting Profiles

Create a condition in Formula for each instance of this new signal using the formula below. The isValid() function was introduced in Seeq version 44. For versions 41 to 43, you can use .valueSearch(isValid()). Versions prior to 41 can use .validityCapsules().

//Develop capsules highlighting the profiles to leverage other views based on capsules to compare profiles
$sample_profiles.isValid()

Step 5: Comparing Profiles

Now, with a condition highlighting each profile, Seeq views built around conditions can be used. Chain View can be used to compare the profiles side by side, while Capsule View can overlay these profiles. Since we delayed our samples before, we can look at their relative times and use those to represent the alternate dimension.

Further Applications

With these profiles now available in Seeq, all of the tools available in Seeq can be used to gain more insight from these examples. Below are a few examples:
- Comparing profiles against a golden profile
- Determining at what value in the alternate dimension each profile reaches a threshold
- Developing a soft sensor based on another sensor and a calibration curve profile
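To see how Steps 1 through 3 fit together, here is a minimal end-to-end sketch in Formula. The signal names and positions (temperatures at 0, 6, and 12 feet on a 5-minute alignment grid) are hypothetical, and the delays use seconds so that the largest delay stays well under the 5-minute sample period, per the note above:

//Hedged sketch: align, delay by position in the alternate dimension, then combine
$t0  = 0.toSignal(5min)                        //placeholder for the 0 ft position
$t6  = $tempAt6ft.resample(5min).move(6s)      //6 ft -> 6 second delay
$t12 = $tempAt12ft.resample(5min).move(12s)    //12 ft -> 12 second delay
combineWith($t0.toDiscrete(), $t6.toDiscrete(), $t12.toDiscrete()).toLinear(4min)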
    2 points
  20. Hi David, It seems the handling of enum signals involves providing some buffer from the edges of the lane, which is not configurable. I was able to produce a fairly good alignment by using the Customize > Axis settings in the Details panel to turn off Auto scaling of the numeric signal and set the min and max to -90 and 450, respectively: This will work as long as all enum values appear in the time range of interest, but it will produce misalignment if only a proper subset of them is in the time range, because for enums the axis is only labeled with the values of the enums found in the time range. Hopefully this works for your visualization; if not, it may make sense to file a support ticket for a feature request for more control over the visualization of enumerated data.
    1 point
  21. Hi Tranquil, Do you mind supplying a bit more information on your question and possibly some screenshots? If by resources you mean assets, with some assets containing a specific signal, then it may not be necessary to write any script. If this is indeed the case, Asset Groups could possibly be utilized to scale the creation of a condition if the signal(s) of interest exist.
    1 point
  22. Ivan, If you would like to resample, I would recommend doing it in a standalone formula prior to the regression formula. The reason is that only formula outputs are cached; intermediates are not. Resampling inside the regression formula would reduce the samples used in the fitting, but it would not reduce the number of samples the formula needs to pull, and typically it is better to reduce the samples pulled. Resampling the predictor variables would also have some benefit: Seeq applies the prediction output to every sample, so resampling reduces the total number of samples the output is applied to. I would also recommend only using the necessary order for each predictor. Since you are writing it as a formula, you can select which variables need higher orders and which ones can stay linear. Regards, Teddy
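For illustration, a minimal sketch of the split Teddy describes, with hypothetical names - the first, standalone formula is the one whose output gets cached:

//Formula 1 (standalone, cached output): reduce the sample density of the predictor
$predictor.resample(15min)

//Formula 2 (the regression) then references the output of Formula 1
//instead of the raw predictor signal.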
    1 point
  23. Hi David, To add to Joe's comment: besides Excel export and looking at this in Tables and Charts, an easier way to do this would be to simply add the count as a statistic in your Details pane. To do this, click on the table icon in the Details pane; it should be right underneath the "Customize" button. In the pop-up, simply select "Count" and the count of data points for each signal will be displayed in the Details pane. Hope that helps! Thanks, Sean
    1 point
  24. Hi David, The count should be accurate in the Tables and Charts view. Are you using the "ungridded original timestamps" option in the export to Excel? If not, it could be exporting gridded samples, which could result in interpolated values coming out in your Excel export that aren't actually raw data points in your signal.
    1 point
  25. This works on the worksheet but not on the organiser. Is there a way to change the organiser too?
    1 point
  26. Hello Kemi, Thank you for your question. This chart can be created as follows:

1. Calculate the moving range in the Formula tool:
abs($signal.runningDelta())

2. Create the monthly condition using the Identify > Periodic Condition tool and select a monthly duration.

3. Use the Quantify > Signal from Condition tool to find the average moving range over each month. This is the "CL" shown in the video.

4. In the Formula tool, calculate the UCL parameter as follows:
3.268*$average_movingrange

Alternatively, create a new Signal from Condition to calculate the standard deviation of the moving range, and in Formula use the following:
3*$stddev_movingrange

5. Add the signals to one lane and align their y-axes.

We also have a very comprehensive blog post on creating a Control Chart and applying SPC run rules which may be of interest to you.
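If you prefer to do steps 1-4 in a single Formula, here is a hedged sketch. It assumes a monthly periodic condition named $months (hypothetical variable name) and uses the same moving-range constant as above:

//Hedged sketch: CL and UCL of a moving-range control chart in one formula
$mr = abs($signal.runningDelta())                       //moving range
$cl = $mr.aggregate(average(), $months, durationKey())  //monthly center line
3.268 * $cl                                             //upper control limit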
    1 point
  27. Was this a duplicated analysis? If so, I suspect that the IDs you're seeing are associated with items that couldn't be cloned successfully. If that is the case, you should find a journal in the (new) first worksheet of the cloned analysis, which will list the items that couldn't be cloned for some reason - often this has to do with permissions. These items are created in the cloned analysis, but they're assigned placeholder IDs that are serial numbers preceded by enough 0s to make a GUID. If that's what happened here, examining the journal on the first worksheet of the analysis should provide clues as to what needs to be fixed before a subsequent attempt at duplication.
    1 point
  28. When anything is deleted in Seeq, the Archived property gets set to "true". You can use the API Reference to POST the Archived property back to "false". Check out this screenshot on how to do so. You can also do this programmatically using the SDK, as shown in this post: https://www.seeq.org/index.php?/forums/topic/1291-how-to-use-the-seeq-apisdk-in-pythonseeq-data-lab/
    1 point
  29. This is a solution for a question that came in through the support channel that I thought would be of general interest. The question was how to designate a fixed training range for a signal, calculate upper and lower limits of the signal using the 3rd and 97th percentiles, and apply those limits to the entire history of the signal. This requires a two-step process. The first step is to create scalar signals for the upper and lower limits. Next, we use those upper and lower limits to clean the signal using the remove() formula.

Step 1) Calculating the scalar values for the 97th and 3rd percentiles

In the example below the training range start and end dates are hard-coded into the formulas for simplicity.

$trainingRangeStart = '2022-10-01T00:00:00Z'
$trainingRangeEnd = '2022-10-31T00:00:00Z'
$trainingCondition = condition(capsule($trainingRangeStart, $trainingRangeEnd))
$calcPercentile = $signal.aggregate(percentile(97), $trainingCondition, startKey())
$calcPercentile.toScalars(capsule($trainingRangeStart, $trainingRangeEnd)).average()

A similar formula gives the lower limit:

$trainingRangeStart = '2022-10-01T00:00:00Z'
$trainingRangeEnd = '2022-10-31T00:00:00Z'
$trainingCondition = condition(capsule($trainingRangeStart, $trainingRangeEnd))
$calcPercentile = $signal.aggregate(percentile(3), $trainingCondition, startKey())
$calcPercentile.toScalars(capsule($trainingRangeStart, $trainingRangeEnd)).average()

Step 2) Clean the signal using the new scalar values for the upper and lower limits

$signal
.remove(isGreaterThan($upper))
.remove(isLessThan($lower))
    1 point
  30. Using formulas with trended data (temperature), I created a signal representing the density of a fluid in a vessel. I have reason to believe ambient conditions are impacting the temperature of the liquid inside of a level bridle, thus changing the liquid properties. Using the level measurements in the vessel and level bridle, and the density/specific gravity calculated for the liquid in the vessel (based on actual vessel temperature), I was able to calculate the density/SG of the liquid in the level bridles based on the variation in level measurement. The equation for density is fairly complicated, so manipulating the equation to solve for temperature isn't a realistic option for me. Is there a way to have Seeq calculate/trend a signal representing the temperature when I have a signal representing the solution (density) with temperature being the only variable? The equation I'm working with is shown below. I have the values for all of the constants and I'm wanting Seeq to calculate the value of T.
    1 point
  31. Here is an example of how to convert a string signal into a table where each row contains the start time, end time, and total duration of each period during which the string signal held a value.

Step 1: Convert your string signal into a condition inside of Formula:
$signal.toCondition()
This formula creates a new capsule every time the string signal changes value, regardless of how many sample points have the same string value.

Step 2: Create a table view of the condition. Select the "Tables and Charts" view and the "Condition" mode.

Step 3: Add capsule properties as values to the table. To add the "Value" property, which is the value from the string signal, type "Value" into the Capsule property statistics table. You can also select the duration here.

Final Product
    1 point
  32. Hi Ruby, This should work instead:

spy.jobs.schedule('0 0 0 ? * 6#1 *')

(In the Quartz cron notation that spy.jobs uses, "0 0 0" runs the job at midnight, and "6#1" in the day-of-week field selects the first Friday of each month.)
    1 point
  33. Hey Pat, just confirming that this fix resolved your original problem?
    1 point
  34. Hi Jesse - Unfortunately moving property columns to the left/top of metric columns in tables is not yet supported. There is an open developer request for this functionality. If you would like to be notified when this functionality becomes available (and advocate for prioritization), please send us a ticket to [email protected] referencing developer request #27076 so we can log your ticket against that request. Thanks, Patrick
    1 point
  35. Hi Matthias, There are a few ways you could display the average of each of these columns in the manner you describe.

The first option is to calculate the average of the columns per year and display them in a separate scorecard with only one row, then bring these two tables together in an Organizer topic for the final display. Below is an example of what this could look like (using days instead of months and a week instead of a year): Tip: by inserting the table as interactive (in versions R54+), the table will render more clearly and the columns will line up.

Another option is to create a condition for years and then combine this yearly condition with your monthly condition using the combineWith() Formula function. This will create a condition with both the monthly capsules used for your averages and a yearly capsule. By using this condition in your scorecard metric, you will get a row for each month as well as a row for the year. A few things to note with this approach. First, the row for the year will appear at the top of your table instead of at the bottom, because rows are ordered on capsule start time, not capsule end time; in fact, you may want to delay your monthly condition by 1 second to ensure that the yearly capsule shows as the first row. Second, the average result you see for the year will be an average of all the samples in the input signal over the course of the year, not an average of the monthly averages. Below is an example of what this would look like (again with days instead of months and a week instead of a year): Thanks, Emily
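For the second option, the combined condition might look like the following hedged Formula sketch, assuming hypothetical condition names $monthly and $yearly:

//Hedged sketch: combine monthly and yearly capsules into one condition,
//nudging the monthly capsules by 1 second so the yearly row sorts first
combineWith($monthly.move(1s), $yearly)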
    1 point
  36. Now I see! I was assuming maxValue($SearchArea) was hard coding the search. Your explanation makes sense: maxValue is returning a search result, but then $signal.within($ValidData) is only passing the capsules in the condition to it. Therefore, as long as $SearchArea fully includes the capsules in $ValidData it will work. I just need to hard code dates well before and well after any capsules I would use. Thanks!
    1 point
  37. The first response with the hard-coded dates will give you the answer you are looking for, as long as any capsules you add to the "Data Valid" condition in the future fall within the hard-coded date range. The part of the formula that limits the scope of the search is the $signal.within($ValidData) section. This means that only data that falls within capsules of the ValidData condition AND within the capsule("2020-01-01T00:00:00Z","2022-07-28T00:00:00Z") date range is included.
    1 point
  38. I think what you are going for will look like the formula below, where $SearchArea is the total range within which any of your valid-data capsules could fall (you can be very conservative with these dates). This formula will work if you have multiple valid-data range capsules, as long as they all fall within the $SearchArea.

$SearchArea = capsule("2020-01-01T00:00:00Z","2022-07-28T00:00:00Z")
$Signal.within($ValidData).maxValue($SearchArea).toSignal()
    1 point
  39. Brett, There is not currently a direct equivalent function that would allow you to move a capsule by a variable amount. However, below is a formula that does the same thing in a couple of steps. It comes with a couple of caveats:
- If you have capsule properties on the first calculation, they will not be transferred over to the delayed condition.
- This formula will delay the start and end of each capsule by the same amount, as defined by the value of your delay signal at the capsule start. You could probably extend this to do more complex transformations if needed.

$step1 = $condition.aggregate(totalDuration("min"), $condition, startKey(), 0s)
$step2 = $step1.move($timeShiftSignal, 2h)
$step2.toCapsules($sample -> capsule($sample.key(), $sample.key() + $sample.value()), 30d)

Let me know if this helps get you on the right track. I am also curious to understand more about your use case so that we can help improve the built-in functions in the future. Shamus
    1 point
  40. Hi Matthias, I would recommend checking out this post, which follows the same process: When looking at that post, it sounds like you'll want the $reset variable to be equal to a monthly condition.
    1 point
  41. We don't currently support it, but we've noted the desire for the 2D use cases. Without committing to a specific release or date, I'll say it is on our near term roadmap 🙂
    1 point
  42. Yes! You can deploy ipywidgets from Jupyter notebooks in Data Lab. Widgets are great for wrapping code in a UI, giving a no-code experience from Workbench when using Seeq Add-on tools, for example to implement machine learning models. Here is one example for a date picker:

import ipywidgets as widgets #import the library

widgets.DatePicker(
    description='Start:',
    style={'description_width': '150px'},
    layout={'width': '300px'})

You can read more about the various widgets here: https://ipywidgets.readthedocs.io/en/latest/examples/Widget List.html
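To actually use the selected date in a script, save the widget to a variable and read its .value attribute after it has been displayed. A minimal sketch (the variable name and the commented spy.pull call are illustrative):

import ipywidgets as widgets

start_picker = widgets.DatePicker(description='Start:')
display(start_picker)  # render the widget in the notebook

# Later, e.g. in a follow-up cell, once a date has been picked:
# data = spy.pull(items, start=start_picker.value, end='2023-01-09')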
    1 point
  43. Another approach you can take if you don't need to know the start or end times of the "active" capsules is a filtered Simple Table counting capsules.

To get this summary table listing the "active" conditions in any Display Range, choose a Simple Table with the count column enabled from the Column button in the toolbar. If you also have signals in your Details pane, you will want to deselect those and only select the 9 conditions. If you only have conditions, you can skip this Details pane selection.

You can then filter that table using a menu that opens from the three vertical dots in the column header. Below I applied a filter for when the count is greater than 0, and only 4 rows of 6 total conditions are displayed. The filter icon lets me know the table is filtered, and I can click on it to change or remove the filter.

In R55 and later, percent duration and total duration are also possible column configurations in the Simple Table, in addition to count. You can read more on how these table displays work on our Knowledge Base.
    1 point
  44. I tried this on R54.1.4 and came across a similar error, but fixed it by appending .toString() to $seq. Below is the updated formula code.

//creates a condition for 1 minute of time encompassing 30 seconds on either side of a transition
$Transition = $CompressorStage.toCondition().beforeStart(0.5min).afterStart(1min)

//Assigns the mode on both sides of the step change to a concatenated string that is a property of the capsule.
$Transition
.transform($cap -> $cap.setProperty('StartModeEndMode',
    $CompressorStage.toCondition()
    .toGroup($cap, CAPSULEBOUNDARY.INTERSECT)
    .reduce("", ($seq, $stepCap) -> $seq.toString() + $stepCap.getProperty('Value')
        //Changes the format of the stage names for clearer delineation as a property in the capsules pane.
        .replace('STAGE 1','-STAGE1-').replace('STAGE 2','-STAGE2-').replace('TRANSITION','-TRANSITION-').replace('OFF','-OFF-')
    )))
    1 point
  45. Starting in Seeq version R52, Data Lab notebooks can be run on a schedule, which opens up a world of new and interesting possibilities. One of those possibilities is a simple script that pulls data from a web API source and pushes it into the Seeq data cache on a schedule. This can be a great way to prototype a data connection prior to building a full-featured connector using the Connector SDK. This example notebook pulls from the USGS, which has information on river levels, temperatures, turbidity, etc., and pushes those signals for multiple sites into the Seeq system. The next logical step would be a notebook that organizes these signals into an asset tree. Curious to see what this inspires others to do and to connect to. If there are additional public resources of interest, put them in the thread for ideas. USGS Upload Example.ipynb
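The general pattern in the notebook boils down to "fetch, frame, push". Here is a minimal hedged sketch of that pattern with a hypothetical JSON endpoint and column names; the attached notebook implements it against the real USGS web service:

import pandas as pd
import requests
from seeq import spy

# Hypothetical endpoint returning [{"timestamp": ..., "level": ...}, ...]
resp = requests.get('https://example.com/api/river-levels')
df = pd.DataFrame(resp.json())

# spy.push expects a DataFrame indexed by timestamp, one column per signal
df.index = pd.to_datetime(df['timestamp'], utc=True)
spy.push(data=df[['level']].rename(columns={'level': 'River Level'}))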
    1 point
  46. Hi Robin, maybe you want to try this. For this demo I created 3 signals based on the Seeq example data, as I did not have data like yours. For each of the signals I created capsules whenever the value is above 1kW: In the next step I joined the running conditions into one parent condition: Now I am able to calculate the delay between the start of the "All running" capsules and the "Running" capsules of each signal, and delay the original signal by this value: In the last step I created capsules for the delayed signal, whenever the value is above 1kW: You may have to make some adjustments to this example for your needs. Hope this helps. Regards, Thorsten
    1 point
  47. Hi Banderson, you can create a duration signal from each capsule in a condition using the Signal from Condition tool. As you may know, these point-and-click tools create a Seeq Formula underneath. So after using the point-and-click Signal from Condition tool, you can find the syntax of the formula in the Item Properties of that calculation. You can copy this syntax, paste it into Formula, and use it to further develop your calculations.
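For reference, the generated formula typically looks something like the hedged sketch below (the statistic, key placement, and units will vary with your tool selections):

//Hedged sketch of a Signal-from-Condition duration signal:
//total duration of each capsule in hours, placed at the capsule start
$condition.aggregate(totalDuration('h'), $condition, startKey(), 0s)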
    1 point
  48. Overview

This method will provide a simple visualization of externally determined control limits, or help you accurately calculate new control limits for a signal. Using these limits, we will also create a boundary and find excursions: how many times, and for how long, a signal deviates from the limits. These created signals can be used in follow-on analyses to search for periods of abnormal system behavior. In this example we will be creating average, +3 standard deviation, and -3 standard deviation boundaries on a Temperature signal.

Setup Signals

In the Data tab, select the following:
Asset → Example → Cooling Tower 1 → Area A
Signal → Temperature

Option 1: Manually Define Simple Control Limits

From the Tools tab, navigate to the Formula tool. Formula can be used to easily plot simple scalar values. If you already have calculated values for the upper and lower limit, just enter them in the formula editor with their units as shown in the screenshot below.

Formula - Simple Upper Limit:
103F

Formula - Simple Lower Limit:
70F

Option 2: Calculate the Control Limits

From the Tools tab, navigate to the Formula tool. In Formula we are going to define the time period over which we want to calculate our control limits, as well as the math behind those limits.

Step 1 - Calculate the upper limit

Variables:
Name | Item Type
$Series | Temperature Signal

Formula:
$calcPeriod = capsule("2018-01-01T00:00:00-06:00","2018-05-01T00:00:00-06:00")
$tempAve = $Series.average($calcPeriod)
$tempStdDev = $Series.standardDeviation($calcPeriod)
$tempAve + 3*$tempStdDev

Description of the code:
$calcPeriod → The time range over which we calculate the average and standard deviation of our signal. The start and end time of the period must be written in ISO 8601 format (Year-Month-Day "T" Hour:Minutes:Seconds.FractionalSeconds -/+ Timezone).
$tempAve → Intermediate variable calculating the average of the temperature signal over our calculation period.
$tempStdDev → Intermediate variable calculating the standard deviation of the temperature signal over our calculation period.
$tempAve + 3*$tempStdDev → Example control limit calculation.

Step 2 - Duplicate your formula to calculate the lower limit

Click the info icon in the Details pane next to your calculated upper limit signal. From the info panel, select Duplicate to create a copy of the formula. In this copy, simply edit the formula to calculate the lower limit.

$calcPeriod = capsule("2018-01-01T00:00:00-06:00","2018-05-01T00:00:00-06:00")
$tempAve = $Series.average($calcPeriod)
$tempStdDev = $Series.standardDeviation($calcPeriod)
$tempAve - 3*$tempStdDev

Alternate method: if you wanted $calcPeriod to change based on the previous month or week of operation, you could use Signal from Condition based off a periodic condition to achieve this.

Step 3 - Visualize Limits as a Boundary

Use the Boundary tool to connect the process variable and the upper and lower limits. Select Temperature as your primary signal and select "New". Select Boundary under relation type, name your new boundary, and select the signals for your upper and lower limit. Click save to visualize the boundary on the trend. Using this same method you can create and visualize multiple boundaries (simple and calculated) at the same time.

Step 4 - Create Capsules when Outside the Boundary

Using the Deviation Search tool, create a condition for when the signal crosses the boundary. Name your new condition, select Temperature as the input signal, select "outside a boundary pair", and choose the upper and lower signals. Estimate the maximum time you would expect any one out-of-boundary event to last, and input that time in the max capsule duration field.

Step 5 - Create a Scorecard to Quantify How Often and How Long Boundary Excursions Occur

Create a scorecard to count how many, how long, and what percentage of total time these excursions are occurring. Create each metric using the Scorecard Metric tool and the Count, Total Duration, and Percent Duration statistics. Use a condition-based scorecard to get weekly or monthly metrics.

Step 6 - Plot How These KPIs Are Changing Over Time

By creating a signal which plots these KPIs over time, we can quantify how our process variable is changing relative to these limits. To begin, determine how often you would like to calculate the KPI (per hour/day/week/month) and create a condition for those time segments using the Periodic Condition tool. In the screenshot below we are creating a weekly condition with capsules every week. Using the Signal from Condition tool, count the number of Outside Simple Boundary capsules which occur within each weekly capsule. This same methodology can be used to create signals for total duration and % duration, just like in the scorecard section above. For each week the tool will create a single sample. The timestamp placement and interpolation method selections determine how those samples are placed within the week and visualized on the chart. The scorecard metrics that you created above can also be trended over time by switching from Scorecard View to Trend View.
    1 point
  49. One limitation of the method mentioned above is that if one of the signals doesn't have any values, then no answer is returned. If you still want the value even when one signal is missing, then you can try the alternative formula described below. This method works for versions prior to R21.0.40.05. Here is the formula for 2 signals as shown above:

$signal1.zipWith($signal2, ($s1, $s2) -> max($s1.getValue(), $s2.getValue()))

If you have more than 2 signals, then add additional zipWith() statements:

$signal1.zipWith($signal2, ($s1, $s2) -> max($s1.getValue(), $s2.getValue()))
.zipWith($signal3, ($s1, $s3) -> max($s1.getValue(), $s3.getValue()))
.zipWith($signal4, ($s1, $s4) -> max($s1.getValue(), $s4.getValue()))
    1 point
  50. Hi Thorsten- In the first screenshot, the area of each box is actually the same, even though some boxes have different dimensions. As you observed, the size of your display impacts how the boxes are drawn. To adjust the box sizes via the API, please use the following steps:

1. On your Seeq installation, open the workbook that contains the Treemap and navigate to the API.

2. To get the ID of the asset that you would like to resize:
a. Navigate to GET Assets
b. Adjust the "limit" to 200 and click "Try it out!"
c. In the Response Body, locate the asset to resize and copy its "id"

3. To resize the asset:
a. Navigate to POST Item Properties
b. Paste the asset ID into the "id" field. Use the following syntax in the "Body" field:

[
  {
    "unitOfMeasure": "",
    "name": "size",
    "value": 10
  }
]

The screenshot shows a size of 10, but this number may be adjusted.
c. Click "Try it out!"

4. Navigate back to the Treemap and refresh the browser. The Treemap now reflects the adjusted size.

Please let me know if you have any additional questions. Thanks, Lindsey
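The same steps can also be scripted from Data Lab with the SDK, following the API/SDK post above. Below is a hedged sketch; the asset ID is hypothetical, and it assumes the ScalarPropertyV1 model matches the body shown in step 3:

# Hedged sketch: set the treemap "size" property on an asset via the SDK
from seeq import sdk
items_api = sdk.ItemsApi(spy.client)
items_api.set_properties(
    id='ASSET-ID-FROM-GET-ASSETS',  # hypothetical GUID copied in step 2
    body=[sdk.ScalarPropertyV1(unit_of_measure='', name='size', value=10)])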
    1 point