1. **Patrick** — Seeq Team — 12 points · 15 posts
2. **Joe Reckamp** — Seeq Team — 11 points · 108 posts
3. **Kin How Chong** — Seeq International Teams — 8 points · 6 posts
4. Seeq Team — 8 points · 39 posts

## Popular Content

Showing content with the highest reputation since 03/29/2022 in all areas

1. A few weeks ago in Office Hours, a Seeq user asked how to perform an iterative calculation in which the next value of the calculation relies on its previous values. This functionality is currently logged as a feature request. In the meantime, users can utilize Seeq Data Lab and push the calculated result from Seeq Data Lab to Seeq Workbench. Let's check out the example below: there are a total of 4 signals, Signal G, Signal H, Signal J, and Signal Y, added to Seeq Workbench. The aim is to calculate the value of Signal Y over the selected period.

Step 1: Set the start date and end date of the calculation.

```python
# Set the start date and end date of the calculation
startdate = '2023-01-01'
enddate = '2023-01-09'
```

Step 2: Set the workbook URL and workbook ID.

```python
# Set the workbook URL and workbook ID
workbook_url = 'workbook_url'
workbook_id = 'workbook_id'
```

Step 3: Retrieve all raw data points for the time interval specified in Step 1 using spy.pull().

```python
# Retrieve all raw data points for the time interval specified in Step 1
data = spy.pull(workbook_url, start=startdate, end=enddate, grid=None)
data
```

Step 4: Calculate the value of Signal Y (Yi = Gi * Y(i-1) + Hi * Ji).

```python
# Calculate the value of Signal Y (Yi = Gi * Y(i-1) + Hi * Ji)
# .loc/.iloc avoid pandas' chained-assignment pitfall
for n in range(len(data) - 1):
    data.loc[data.index[n + 1], 'Signal Y'] = (
        data['Signal G'].iloc[n + 1] * data['Signal Y'].iloc[n]
        + data['Signal H'].iloc[n + 1] * data['Signal J'].iloc[n + 1]
    )
data
```

Step 5: Push the calculated value of Signal Y to the source workbook using spy.push().

```python
# Push the calculated result of Signal Y to the source workbook
spy.push(data=data[['Signal Y']], workbook=workbook_id)
```
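The recurrence itself can be sanity-checked outside Seeq with plain Python; here is a minimal sketch using synthetic lists (not the workbench data) to show why each value depends on the previous one:

```python
# Sketch of the recurrence Yi = Gi * Y(i-1) + Hi * Ji using synthetic values.
# In the Data Lab version above, the same update runs over the pulled DataFrame.
g = [0.5, 0.5, 0.5, 0.5]
h = [1.0, 2.0, 3.0, 4.0]
j = [1.0, 1.0, 1.0, 1.0]

y = [10.0]  # seed value: Y0 comes from the first pulled sample of Signal Y
for i in range(1, len(g)):
    y.append(g[i] * y[i - 1] + h[i] * j[i])

print(y)  # [10.0, 7.0, 6.5, 7.25]
```

Because each element needs the one before it, the loop cannot be replaced by a simple vectorized expression, which is why the Data Lab workaround is used.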
7 points
2. A small team of us (with help from Seeq team members) built a short script to extract signal names from our legacy Excel workbooks so that we could push them to Seeq Workbench. Perhaps, like us, you are involved in migrating workbooks for monitoring/reporting over to Seeq and could do with a boost to get started, so you can get to the real work of setting up the Seeq workbench. The attached script will extract the signal names (assuming you can craft your own regex search filter) from each Excel worksheet and then push them to Workbench with labeling for organization. Hopefully it's a help to some 🙂 signal_transfer_excel2seeq_rev1.ipynb
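If you don't need the full notebook, the core idea is a regex pass over the worksheet cells. Here is a minimal stand-in sketch — the tag pattern and cell contents are made up, and a real workbook would be read with a library such as openpyxl rather than a hard-coded list:

```python
import re

# Hypothetical cell contents pulled from a legacy Excel monitoring workbook.
cells = [
    '=PIAdvCalcVal("FI-1001.PV", ...)',
    'Unit 3 overview',
    '=PIAdvCalcVal("TI-2040.PV", ...)',
    'notes',
]

# Craft your own pattern to match your site's tag naming convention.
tag_pattern = re.compile(r"[A-Z]{2}-\d{4}\.PV")

signal_names = sorted({m.group(0) for cell in cells for m in tag_pattern.finditer(cell)})
print(signal_names)  # ['FI-1001.PV', 'TI-2040.PV']
```

Once collected, the names can be handed to spy.search / spy.push for organization in Workbench, as the attached notebook does.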
5 points
4 points
4 points
5. When creating signal forecasts, especially for cyclic signals that degrade, we often use forecastLinear() in Formula to easily forecast a signal out into the future to determine when a threshold is met. The methodology is often the same regardless of whether we are looking at a filter, a heat exchanger, or any other equipment that fouls over time or needs periodic maintenance when a KPI threshold is met. A question that comes up occasionally from users is how to create a signal forecast that only uses data from the current operation cycle. forecastLinear() only takes into account a historical training period and does not determine whether that data comes from the current cycle or not (which can produce unexpected results). Before entering the formula, you will need to define: a condition that identifies the current cycle, here called "\$runningCycle", and a signal to do a linear forecast on, here called "\$signal". To forecast out into the future based on only the most recent cycle, the following snippet can be used in Formula:

```
$training = $runningCycle.setMaximumDuration(10d).toGroup(capsule(now()-2h, now()))
$forecast = $signal.predict($training, timeSince(toTime('2000-01-01T00:00Z')))
$signal.forecastSplice($forecast, 1d)
```

In this snippet, there are a few parameters that you might want to change:

- .setMaximumDuration(10d): sets the longest allowed cycle duration to 10 days; this should be longer than the longest cycle you would expect to see.
- capsule(now()-2h, now()): the period during which Seeq will look for the most recent cycle, in this case any time in the last 2 hours. If you have very frequent data (samples every few seconds to minutes), 2 hours or less will work. If you have infrequent data (once a day or less), extend this so that it covers the last 3-4 data points.
- \$signal.forecastSplice(\$forecast, 1d): when using forecastLinear(), there is an option to force the prediction through the last sample point. This duration parameter (1 day in this case) does something similar: it blends the last historical data point with the forecast over the given time range. In other words, if the last data point had a value of 5 but the first forecasted data point had a value of 10, this parameter is the time frame over which to smooth from the historical data point to the forecast.

Here is a screenshot of my formula, and the formula in action:
4 points
6. Hi Coolhunter, I have seen this requested multiple times and one solution might be to use a custom PI Vision symbol that enables you to embed Seeq content into PI Vision. A solution to this challenge can be found here: Get the most out of PI Vision - Seeq Analytics in PI Vision - Seeq in PI Vision (werusys.de) If you want to know more about the PI Vision integration with Seeq feel free to drop me a mail: [email protected] Cheers, Julian Seeq-WerusysPIVision.pdf
3 points
7. Check out the Data Lab script and the video that walks through it to automate data pull -> apply ML -> push results to Workbench in an efficient manner. Of course you can skin the cat many different ways, but this gives a good way to do it in bulk. Use case details: apply ML on Temperature signals across the whole Example Asset Tree on a weekly basis. For your case, you can build your own asset tree, filter the relevant attributes instead of Temperature, and set the spy.jobs.schedule frequency to whatever works for you. Let me know if there are any unanswered questions in my post or demo. Happy to update as needed. apply_ml_at_scale.mp4 Apply ML on Asset Tree Rev0.ipynb
3 points
3 points
9. Users of OSIsoft Asset Framework often want to filter elements and attributes based on the AF Templates they were built on. At this time, though, the spy.search command in Seeq Data Lab only filters based on the properties Type, Name, Description, Path, Asset, Datasource Class, Datasource ID, Datasource Name, Data ID, Cache Enabled, and Scoped To. This post discusses a way in which we can still filter elements and/or attributes based on AF Template.

Step 1: Retrieve all elements in the AF Database. The code below will return all assets in an AF Database that are based on an AF Template whose name contains "Location".

```python
# Make sure to include all properties, since this also returns the AF Template
asset_search = spy.search({"Path": "Example-AF", "Type": "Asset"}, all_properties=True)
# Remove assets not based on a template, since we can't filter with NaN values
asset_search.dropna(subset=['Template'], inplace=True)
# Apply the filter to only consider assets built on the Location AF Template
asset_search_location = asset_search[asset_search['Template'].str.contains('Location')]
```

Step 2: Find all relevant attributes. This code will retrieve the desired attributes. Note that wildcards and regular expressions can be used to find multiple attributes.

```python
# Find the desired attributes
signal_search = spy.search({"Path": "Example-AF", "Type": "Signal", "Name": "Compressor Power"})
```

Step 3: Filter attributes based on whether they come from an element of the desired AF Template. The last step cross-references the signals returned with the desired elements by looking at their paths.

```python
# Define a function to recreate paths; items directly beneath the
# database asset don't have a Path, so drop NaN entries first
def path_merger(row):
    row = row.dropna()
    return ' >> '.join(row)

# Create a full path for each asset that includes its name
asset_search_location['Full Path'] = asset_search_location[['Path', 'Asset', 'Name']].apply(
    lambda row: path_merger(row), axis=1)
# Create a path for the parent of each signal
signal_search['Parent Path'] = signal_search[['Path', 'Asset']].apply(
    lambda row: path_merger(row), axis=1)
# Cross-reference the signals' parent paths with the assets' full paths
# to keep only signals that are children of the desired elements
signal_search_location = signal_search[signal_search['Parent Path'].isin(asset_search_location['Full Path'])]
```
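The cross-reference in Step 3 is just string path matching; a stand-in sketch with plain dictionaries (hypothetical asset/signal rows, no pandas) shows the logic:

```python
# Hypothetical search results: assets built on a 'Location' template, and signals.
assets = [
    {"Path": "Example-AF", "Asset": "Area A", "Name": "Compressor 1"},
    {"Path": "Example-AF", "Asset": "Area B", "Name": "Compressor 9"},
]
signals = [
    {"Path": "Example-AF >> Area A", "Asset": "Compressor 1", "Name": "Compressor Power"},
    {"Path": "Example-AF >> Area C", "Asset": "Pump 4", "Name": "Compressor Power"},
]

def path_merger(parts):
    # Items directly beneath the database asset may have no Path; drop empties.
    return " >> ".join(p for p in parts if p)

asset_full_paths = {path_merger([a["Path"], a["Asset"], a["Name"]]) for a in assets}
matched = [s for s in signals
           if path_merger([s["Path"], s["Asset"]]) in asset_full_paths]
print([m["Path"] for m in matched])  # only signals under template-based assets remain
```

A signal survives the filter only when its parent path equals some asset's full path, which is exactly what the pandas `isin` call above does in bulk.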
3 points
10. If you modify your wind_dir variable to

```
$wind_dir = group(
  capsule(0, 22.5).setProperty('Value', 'ENUM{{0|N}}'),
  capsule(22.5, 67.5).setProperty('Value', 'ENUM{{1|NE}}'),
  capsule(67.5, 112.5).setProperty('Value', 'ENUM{{2|E}}'),
  capsule(112.5, 157.5).setProperty('Value', 'ENUM{{3|SE}}'),
  capsule(157.5, 202.5).setProperty('Value', 'ENUM{{4|S}}'),
  capsule(202.5, 247.5).setProperty('Value', 'ENUM{{5|SW}}'),
  capsule(247.5, 292.5).setProperty('Value', 'ENUM{{6|W}}'),
  capsule(292.5, 337.5).setProperty('Value', 'ENUM{{7|NW}}'),
  capsule(337.5, 360).setProperty('Value', 'ENUM{{8|N}}')
)
```

you will get an ordered Y axis. This is how Seeq handles enum signal values from other systems; it has some limitations, but it seems like it should work well for your use case.
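The same binning can be checked in Python; a small sketch mapping a wind-direction angle to an ordered (index, label) pair, using the standard 45°-wide compass sectors:

```python
# Upper boundary of each sector (degrees) mapped to its ordered enum value,
# mirroring the capsule/ENUM scheme above.
SECTORS = [
    (22.5, (0, "N")), (67.5, (1, "NE")), (112.5, (2, "E")), (157.5, (3, "SE")),
    (202.5, (4, "S")), (247.5, (5, "SW")), (292.5, (6, "W")), (337.5, (7, "NW")),
    (360.0, (8, "N")),
]

def wind_sector(degrees):
    # Return the first sector whose upper boundary exceeds the angle.
    for upper, enum in SECTORS:
        if degrees < upper:
            return enum
    return SECTORS[-1][1]  # treat exactly 360 as North

print(wind_sector(45.0))   # (1, 'NE')
print(wind_sector(350.0))  # (8, 'N')
```

Note that North appears at both ends of the axis (index 0 and 8) because the compass wraps around; that is inherent to flattening a circular dimension onto an ordered axis.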
2 points
11. ## Avoiding Unintended Changes When Editing Calculation Items: Duplicating Items, Worksheets, & Analyses

Summary/TLDR: Users commonly want to duplicate Seeq-created items (Value Search, Formula, etc.) for different purposes, such as testing the effect of different calculation parameters, expanding calculations to similar areas/equipment, collaboration, etc. Guidance to prevent unintended changes is summarized below.

- **Duplicating Seeq-created items on a worksheet** creates new/independent items that can be modified without affecting the original.
- **Duplicating worksheets within a Workbench Analysis** simply copies the worksheet but doesn't create new/independent items. A change to a Seeq-created item on one sheet modifies the same item everywhere it appears, on all other worksheets.
- **Duplicating an entire Workbench Analysis** creates new/independent items in the duplicated analysis. You can modify them without affecting the corresponding items in the original Workbench Analysis.

Details: Each worksheet in an analysis can be used to help tell the story of how you got to your conclusions or to give a different view into a related part of your process. Worksheets can be added/renamed/duplicated, and entire analyses can also be duplicated: Worksheet and Document Organization. Confusion sometimes arises for Seeq users when editing existing calculation items (Value Searches, Formulas, etc.) that appear on multiple worksheets within the same analysis. Often a user will duplicate a worksheet within an analysis and not realize that editing existing items on the new worksheet also changes the same items everywhere else they are used within the analysis. They assume that each individual worksheet is independent of the others, but this is not the case. The intent of this post is to eliminate this confusion and to prevent users from making unintended changes to calculations.
Working with the same item on a duplicated worksheet: when duplicating worksheets, remember that everything within a single Workbench Analysis, no matter what worksheet it is on, is "scoped" to the entire analysis. Duplicating a worksheet simply copies the worksheet but doesn't create new/independent items; a change to an item on one sheet modifies it everywhere it appears (on all other worksheets). For some use cases, duplicating a worksheet is a quick way to expand the calculations further or to create alternate visualizations, and the user wants to continue working with the original items. In other situations, worksheet duplication may be the first step in creating new versions of existing items. To avoid modifying an original item on a duplicated worksheet, open the Item Properties (Details pane "i" symbol) for the calculated signal/condition of interest and click DUPLICATE. You can edit the duplicated version without affecting the original. Duplicating worksheets is often useful when you are doing multiple calculation steps on different worksheets, when you want trends on one worksheet and tables or other visualizations on another, when asset swapping and creating a worksheet for each unique asset, etc. Working with items in a duplicated Workbench Analysis: if you duplicate the entire Workbench Analysis (for example, from the Seeq start page; see the screenshot below), new/independent items are created in the duplicated analysis. You can modify the items in the duplicated analysis without affecting the corresponding items in the original. This is often a good approach when you have created a lengthy set of calculations and would like to modify them or apply them in a similar way to another piece of equipment, processing line, etc., and an asset group approach isn't applicable. There is one exception to this: Seeq-created items that have been made global.
Global items can be searched for and accessed outside of an individual Workbench Analysis. Editing a global item in a duplicated analysis will change it everywhere else it appears. There are many considerations for best practices when testing new parameter values and modifications for existing calculations. Keep in mind the differences between duplicating worksheets and duplicating entire analyses, and of course consider the potential use of asset groups when needing to scale similar calculations across many assets, pieces of equipment, process phases, etc. There are in-depth posts here with further information on asset groups: Asset Groups 101 - Part 1 Asset Groups 101 - Part 2
2 points
2 points
13. Capsule-Based Back Prediction or Back-Casting

Scenario: Instead of forecasting data into the future, there may be a need to extrapolate a signal back in time based on data from an event or period of interest. The following steps will allow you to backcast a target signal from every capsule within a condition.

Data:
- Target signal: a signal that you would like to backcast.
- Event: a condition that encapsulates the event or period of interest from which you would like to backcast the target signal. The target signal must have sufficient sample points within each capsule to create an accurate regression model.

Method:

Step 1. Create a new extended event condition that combines the capsules from the original event with a prediction window for backcasting. In this example, the prediction window is 1 h and a maximum capsule duration of 40 h is defined.

```
$prediction_window = $event.beforeStart(1h)
$prediction_window.join($event, 40h)
```

Step 2. Create a new time-since signal that quantifies the time since the beginning of each capsule in the extended event condition. This signal will be the independent variable in the regression model. Replace 1min with a sample frequency sufficient for your use case.

```
$extended_event.timeSince(1min)
```

Step 3. In Formula, use the example below to create a regression model for the target signal, with data from the event as training data and the time-since signal as the independent variable. Assign the regression model coefficients as capsule properties of a new condition called the regression condition.

```
$event.transform($cap -> {
  $model = $target_signal.validValues().regressionModelOLS(
    group($cap), false, $time_since, $time_since^2)
  $cap
    .setProperty('m1', $model.get('coefficient1'))
    .setProperty('m2', $model.get('coefficient2'))
    .setProperty('c', $model.get('intercept'))
})
```

The formula above creates a second-order polynomial ordinary least squares regression model. The order of the polynomial can be modified (from linear up to 9th order) by adding sequential \$time_since^n terms and defining the corresponding coefficient properties. See the example below of how to adjust the formula for a third-order polynomial model.

Step 4. Using the regression model coefficients from the regression condition and the time-since signal, the target signal can then be backcast over the prediction window.

```
$c = $regression_condition.toSignal('c', durationKey()).aggregate(average(), $extended_event, durationKey())
$m1 = $regression_condition.toSignal('m1', durationKey()).aggregate(average(), $extended_event, durationKey())
$m2 = $regression_condition.toSignal('m2', durationKey()).aggregate(average(), $extended_event, durationKey())
return $m1*$time_since + $m2*$time_since^2 + $c
```

The example above is for a second-order polynomial; the formula needs to be modified to match the order of the polynomial defined in Step 3. See the example below for a linear model. Note that it may be necessary to manually set the units (using the setUnits() function) of each part of the polynomial equation.

Result: a new signal which backcasts the target signal for the duration of the prediction window prior to the event or period of interest.
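As a sanity check of the idea, here is a pure-Python sketch of the linear case: fit value vs. time-since-event-start over the event's samples, then evaluate the model at negative times to backcast into the prediction window. The data is synthetic and the ordinary-least-squares arithmetic is written out by hand; it is not Seeq's regressionModelOLS implementation.

```python
# Samples inside the event: (hours since event start, target value).
event_samples = [(0.0, 50.0), (1.0, 53.0), (2.0, 56.0), (3.0, 59.0)]

# Ordinary least squares for a line v = c + m1 * t.
n = len(event_samples)
sx = sum(t for t, _ in event_samples)
sy = sum(v for _, v in event_samples)
sxx = sum(t * t for t, _ in event_samples)
sxy = sum(t * v for t, v in event_samples)

m1 = (n * sxy - sx * sy) / (n * sxx - sx ** 2)  # slope ('m1' capsule property)
c = (sy - m1 * sx) / n                          # intercept ('c' capsule property)

# Backcast one hour before the event start (t = -1 in the time-since signal).
print(c + m1 * -1.0)  # 47.0
```

The negative time is the key point: the model is trained only on in-event data, but evaluating it before t = 0 extrapolates backward over the prediction window, exactly as the formula in Step 4 does.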
2 points
14. A common industrial use case is to select the highest or lowest signal value among several similar measurements. One example is identifying the highest temperature in a reactor or distillation column containing many temperature signals; among other things, this is useful for identifying the current "hot spot" location to analyze catalyst deactivation/performance degradation. When selecting the highest value over time among many signals, Seeq's max() Formula function makes this easy; likewise, the min() Formula function selects the lowest value. A more challenging use case is to select the 2nd highest, 3rd highest, etc., among a set of signals. There are several approaches to this in Seeq Formula, each with its own caveats. I will demonstrate one approach below, using a set of 4 temperature signals (T100, T200, T300, T400). Viewing the raw temperature data:

1. We first convert each of the raw temperature signals to step-interpolated signals, and then resample the signals based on the sample values of a chosen reference signal that has representative, regular data samples (in this case, T100). This makes the later formulas a little simpler and provides slightly cleaner results when signal values cross each other. Note that the T200 step signal Formula includes a resample using 'T100 Step' as a reference signal; the 'T300 Step' and 'T400 Step' formulas are identical to T200's, with the raw T signals substituted.

2. We now create the 'Highest T Value' signal using the max() function and the step versions of the T signals.

3. To create the '2nd Highest T Value' signal, we use the splice() function to insert values of 0 wherever a given T signal is equal to the 'Highest T Value'. Following this, the max() function can again be used, but this time it selects the 2nd highest value.

4. The process is repeated to find the '3rd Highest T Value' with a very similar formula, substituting in values of 0 wherever a given T signal is >= the '2nd Highest T Value'. The result is then checked for a time period with several transitions of the T signal ordering.

5. The user may also want a signal which identifies the highest-value temperature signal NAME at any given point in time, for trending, display in tables, etc. We again use the splice() function, inserting the corresponding signal name wherever that signal equals the 'Highest T Value'. Similarly, the '2nd Highest T Sensor' is created using the '2nd Highest T Value', and the '3rd Highest T Sensor' likewise. We now have correctly identified values and sensor names: highest, 2nd highest, 3rd highest.

This approach (one of several possible) can be extended to as many signals as needed, adapted for finding low values instead of high values, used in additional calculations, etc.
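The splice-and-max trick can be sanity-checked outside Seeq: once the signals are aligned, selecting the nth highest at each timestamp is just a per-row sort. A sketch with synthetic aligned values (timestamps and temperatures are made up):

```python
# Aligned sample values at each timestamp for four temperature signals.
samples = {
    "t1": {"T100": 250.0, "T200": 245.0, "T300": 260.0, "T400": 240.0},
    "t2": {"T100": 255.0, "T200": 258.0, "T300": 252.0, "T400": 249.0},
}

def nth_highest(values_by_signal, n):
    # Return (signal name, value) of the nth highest value (n=1 is the maximum),
    # giving both the ranked value and the sensor name in one step.
    ranked = sorted(values_by_signal.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[n - 1]

for ts, vals in samples.items():
    print(ts, nth_highest(vals, 1), nth_highest(vals, 2))
```

Note how the hot spot moves between timestamps (T300 leads at t1, T200 at t2); the Seeq step/resample preparation in step 1 exists precisely so these per-timestamp comparisons are well defined.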
2 points
15. Hi, the error means that you are referencing a variable that is not defined in your variable list. You should change the variable "\$signal1" in the formula to a variable that is in your variable list. Also be aware that you cannot mix signals and conditions in combineWith(): you can combine either signals only or conditions only. Regards, Thorsten
2 points
16. I'd make two conditions, one for RPM and one for Temperature, then try to use the "Combining Conditions" formulas. I think .encloses() would work.
2 points
2 points
2 points
19. To better understand their process, users often want to compare time-series signals in a dimension other than time, for example, seeing how the temperature within a reactor changes as a function of distance. Seeq is built to compare data against time, but this method highlights how we can use time to mimic an alternate dimension.

Step 1: Sample Alignment. To accurately mimic the alternate dimension, the samples included in each profile must occur at the same time. If the samples don't already align, this can be achieved through a couple of methods in Seeq.

Option 1: Re-sampling. Re-sampling selects points along a signal at set intervals; you can also re-sample based on another signal's keys. Since there may not be a raw sample at a given interval, the interpolated value is chosen. An example Formula demonstrating the function:

```
//Function to resample a signal
$signal.resample(5sec)
```

Option 2: Average Aggregation. Aggregating determines the average of a signal over a given period of time and places this average at a specific point within that period. Signal from Condition can be used to find the average over a period and place it at a specific timestamp within the period. In the example below, the sample is placed at the start, but alignment will also occur if the samples are placed at the middle or end.

Step 2: Delay Samples. In Formula, apply a delay to each signal's samples that represents its value in the alternative dimension. For example, if a signal is measured 6 feet from the start of a reactor, delay it by 6. If there is no signal with a 0 value in the alternate dimension, the final graph will be offset by the smallest value in the alternate dimension. To fix this, create a placeholder signal in Formula, such as a constant 0, and ensure its samples align with the other samples using the code below. This placeholder serves as a signal delayed by 0, meaning it has a value of 0 in the alternate dimension.

```
//Substitute Period_of_Time_for_Alignment with the period used above for aligning your samples
0.toSignal(Period_of_Time_for_Alignment)
```

Note: the unit of the delay depends on the sampling frequency of your aligned signals as well as the largest value you will have in the alternative dimension. For example, if your samples occur every 5 minutes, choose a unit such that your maximum delay is not greater than 5 minutes. Refer to the table below for selecting units:

| Largest Value in Alternate Dimension | Highest Possible Delay Unit |
| --- | --- |
| 23 | Hour, Hour (24-hour clock) |
| 59 | Minute |
| 99 | Centisecond |
| 999 | Millisecond |

Step 3: Develop Sample Profiles. Use the Formula below to create a new signal that joins the samples from your separate signals. Replace Max_Interpolation with a number large enough to connect the samples within a profile but small enough not to connect separate profiles. For example, if the signals were re-sampled every 5 minutes but the largest delay applied was 60 seconds, any value below 4 minutes would work. This ensures the last sample within a profile does not interpolate to the first sample of the next profile.

```
//Make the signals discrete to get only raw samples, then use combineWith and toLinear to combine the signals while maintaining their uniqueness
combineWith($signal1.toDiscrete(), $signal2.toDiscrete(), $signal3.toDiscrete()).toLinear(Max_Interpolation)
```

Step 4: Condition Highlighting Profiles. Create a condition in Formula for each instance of this new signal using the formula below. The isValid() function was introduced in Seeq version 44; for versions 41 to 43 you can use .valueSearch(isValid()), and versions prior to 41 can use .validityCapsules().

```
//Develop capsules highlighting each profile, to leverage capsule-based views for comparing profiles
$sample_profiles.isValid()
```

Step 5: Comparing Profiles. With a condition highlighting each profile, Seeq views built around conditions can be used. Chain View can compare the profiles side by side, while Capsule View can overlay them. Since the samples were delayed, their relative times represent the alternate dimension.

Further Applications: with these profiles available in Seeq, all of Seeq's tools can be used to gain further insight, for example:
- comparing profiles against a golden profile,
- determining at what value in the alternate dimension each profile reaches a threshold,
- developing a soft sensor based on another sensor and a calibration curve profile.
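The delay trick in Step 2 can be sketched in Python: each sensor's sample, taken at the same moment, gets shifted by its distance so that "seconds after the base time" stands in for position. The distances and values here are hypothetical:

```python
from datetime import datetime, timedelta

# Three sensors along the reactor, at 0, 6, and 12 feet; all sampled at t0.
t0 = datetime(2022, 3, 29, 12, 0, 0)
readings = {0: 310.0, 6: 325.0, 12: 318.0}  # feet -> temperature

# Delay each sample by its distance, using seconds as the stand-in time unit.
profile = [(t0 + timedelta(seconds=feet), temp)
           for feet, temp in sorted(readings.items())]

for key, temp in profile:
    print(key.isoformat(), temp)
# Within one profile, seconds-after-t0 now encode feet from the reactor inlet.
```

Plotting this profile against relative time then reads as temperature vs. distance, which is what Capsule View overlays when comparing profiles.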
2 points
20. Hi David, it seems the handling of enum signals involves some buffer from the edges of the lane, which is not configurable. I was able to produce a fairly good alignment by using the Customize > Axis settings in the Details panel to turn off auto-scaling of the numeric signal and set the min and max to -90 and 450, respectively. This will work as long as all enum values appear in the time range of interest, but it will produce misalignment if only a proper subset of them is in the time range, because for enums the axis is only labeled with the values of the enums found in the time range. Hopefully this works for your visualization; if not, it may make sense to file a support ticket with a feature request for more control over the visualization of enumerated data.
1 point
21. Hi Tranquil, Do you mind supplying a bit more information on your question and possibly some screenshots? If by resources you mean assets, with some assets containing a specific signal, then it may not be necessary to write any script. If this is indeed the case, Asset Groups could possibly be utilized to scale the creation of a condition if the signal(s) of interest exist.
1 point
22. ## Creating many Prediction Signals for Monitoring, anything to be careful of?

Ivan, if you would like to resample, I recommend doing it in a standalone formula prior to the regression formula. The reason is that only formula outputs are cached; intermediates are not. Resampling within the same formula would reduce the samples used in the fitting, but it would not reduce the number of samples the formula needs to pull, and typically it is better to reduce the samples pulled. Resampling the predictor variables also has a benefit: Seeq applies the prediction output to every sample, so it reduces the total number of samples the output is applied to. I would also recommend using only the necessary order of predictors. Since you are writing it as a formula, you can select which variables need higher orders and which can remain linear. Regards, Teddy
1 point
23. Hi David, to add to Joe's comment: besides Excel export and looking at this in Tables and Charts, an easier way would be to add the count as a statistic in your Details pane. To do this, click on the table icon in the Details pane; it should be right underneath the "Customize" button. In the pop-up, select "Count", and the count of data points for each signal will be displayed in the Details pane. Hope that helps! Thanks, Sean
1 point
24. Hi David, The count should be accurate in the Tables and Charts view. Are you using the "ungridded original timestamps" option in the export to Excel? If not, it could be exporting gridded samples, which could result in interpolated values coming out in your Excel export that aren't actually raw data points in your signal.
1 point
25. This works on the worksheet but not in the Organizer. Is there a way to change the Organizer too?
1 point
26. Hello Kemi, thank you for your question. This chart can be created as follows:

1. Calculate the moving range in the Formula tool: `abs($signal.runningDelta())`
2. Create the monthly condition using the Identify > Periodic Condition tool and select a monthly duration.
3. Use the Quantify > Signal from Condition tool to find the average moving range over each month. This is the "CL" shown in the video.
4. In the Formula tool, calculate the UCL parameter as `3.268*$average_movingrange`. Alternatively, create a new Signal from Condition to calculate the standard deviation of the moving range, and in Formula use `3*$stddev_movingrange`.
5. Add the signals to one lane and align their y-axes.

We also have a very comprehensive blog post on creating a Control Chart and applying SPC run rules which may be of interest to you.
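The arithmetic in those steps is easy to check in Python; a sketch computing the moving range, its average (CL), and UCL = 3.268 × CL over one month of synthetic values:

```python
# One month of sample values (synthetic).
values = [10.0, 12.0, 9.0, 13.0, 11.0]

# Step 1: moving range = absolute running delta between consecutive samples.
moving_range = [abs(b - a) for a, b in zip(values, values[1:])]

# Step 3: CL = average moving range over the month.
cl = sum(moving_range) / len(moving_range)

# Step 4: UCL for an individuals/moving-range control chart.
ucl = 3.268 * cl

print(moving_range, cl, ucl)
```

In Seeq, steps 2 and 3 do the same averaging per monthly capsule via Signal from Condition, so CL and UCL update each month.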
1 point
27. Was this a duplicated analysis? If so, I suspect that the IDs you're seeing are associated to items that couldn't be cloned successfully. If this is the case, you should find a journal in the (new) first worksheet of the cloned analysis, which will list items that couldn't be cloned successfully for some reason. Often this has to do with permissions. These items are created in the cloned analysis, but they're assigned placeholder IDs that are serial numbers preceded by an appropriate number of 0s to make a GUID. If that's what happened here, examining the journal on the first worksheet of the analysis should provide clues as to what needs to be fixed before a subsequent attempt at duplication.
1 point
28. When anything is deleted in Seeq, the Archived property gets set to "true". You can use the API Reference to POST the Archived property back to "false". Check out this screenshot on how to do so. You can also do this programmatically using the SDK, as shown in this post: https://www.seeq.org/index.php?/forums/topic/1291-how-to-use-the-seeq-apisdk-in-pythonseeq-data-lab/
1 point
29. This is a solution for a question that came in through the support channel that I thought would be of general interest. The question was how to designate a fixed training range for a signal, calculate upper and lower limits of the signal using the 3rd and 97th percentiles over that range, and apply those limits to the entire history of the signal. This requires a two-step process: first create scalars for the upper and lower limits, then use those limits to clean the signal with the remove() function.

Step 1) Calculate the scalar values for the 97th and 3rd percentiles. In the examples below, the training range start and end dates are hard-coded into the formulas for simplicity.

```
$trainingRangeStart = '2022-10-01T00:00:00Z'
$trainingRangeEnd = '2022-10-31T00:00:00Z'
$trainingCondition = condition(capsule($trainingRangeStart, $trainingRangeEnd))
$calcPercentile = $signal.aggregate(percentile(97), $trainingCondition, startKey())
$calcPercentile.toScalars(capsule($trainingRangeStart, $trainingRangeEnd)).average()
```

A similar formula is used for the lower limit:

```
$trainingRangeStart = '2022-10-01T00:00:00Z'
$trainingRangeEnd = '2022-10-31T00:00:00Z'
$trainingCondition = condition(capsule($trainingRangeStart, $trainingRangeEnd))
$calcPercentile = $signal.aggregate(percentile(3), $trainingCondition, startKey())
$calcPercentile.toScalars(capsule($trainingRangeStart, $trainingRangeEnd)).average()
```

Step 2) Clean the signal using the new scalar values for the upper and lower limits:

```
$signal
  .remove(isGreaterThan($upper))
  .remove(isLessThan($lower))
```
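Outside Seeq, the same two-step logic is: compute the percentile limits over the training slice only, then drop samples outside them everywhere. A pure-Python sketch with synthetic data, using the standard library's `statistics.quantiles` (an interpolated percentile, so values may differ slightly from Seeq's percentile() aggregation):

```python
from statistics import quantiles

# Training-range samples (hypothetical); limits come from these only.
training = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]

# quantiles(..., n=100) returns the 1st..99th percentile cut points.
cuts = quantiles(training, n=100, method="inclusive")
lower, upper = cuts[2], cuts[96]  # 3rd and 97th percentiles

# Step 2: apply the fixed limits to the entire signal history.
history = [0.5, 2.0, 5.0, 9.5, 12.0]
cleaned = [v for v in history if lower <= v <= upper]
print(round(lower, 3), round(upper, 3), cleaned)  # 1.27 9.73 [2.0, 5.0, 9.5]
```

The key property mirrored here is that the limits are frozen scalars from the training window, so samples far outside the training period are still judged against the same bounds.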
1 point
30. Using formulas with trended data (temperature), I created a signal representing the density of a fluid in a vessel. I have reason to believe ambient conditions are impacting the temperature of the liquid inside a level bridle, thus changing the liquid properties. Using level measurements in the vessel and level bridle, and the density/specific gravity calculated for the liquid in the vessel (based on actual vessel temperature), I was able to calculate the density/SG of the liquid in the level bridles based on the variation in level measurement. The equation for density is fairly complicated, so manipulating the equation to solve for temperature isn't a realistic option for me. Is there a way to have Seeq calculate/trend a signal representing the temperature when I have a signal representing the solution (density) with temperature being the only variable? The equation I'm working with is shown below. I have the values for all of the constants and I want Seeq to calculate the value of T.
1 point
31. Here is an example of how to convert a string signal into a table where each row contains the start time, end time, and total duration of each period during which the string signal held a given value.

Step 1: Convert your string signal into a condition inside of Formula:

$signal.toCondition()

This formula creates a new capsule every time the string signal changes value, regardless of how many sample points share the same string value.

Step 2: Create a table view of the condition. Select the "Tables and Charts" view and the "Condition" mode.

Step 3: Add capsule properties as values in the table. To add the "Value" property (the value from the string signal), type "Value" into the capsule property statistics table. You can also select the duration here.

Final Product
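The same change-point grouping that toCondition() performs can be sketched in pandas for data pulled into Data Lab (the sample values and timestamps below are made up). One caveat: unlike a Seeq capsule, which extends to the next value change, "End" here is the key of the last sample in the run:

```python
import pandas as pd

# Stand-in for a string signal pulled from Seeq (hypothetical values)
s = pd.Series(
    ["RUN", "RUN", "STOP", "STOP", "STOP", "RUN"],
    index=pd.date_range("2023-01-01", periods=6, freq="h"),
)

# Start a new group each time the value changes, regardless of how many
# consecutive samples share the same value
changes = (s != s.shift()).cumsum()

table = pd.DataFrame({
    "Value": s.groupby(changes).first(),
    "Start": s.groupby(changes).apply(lambda g: g.index[0]),
    "End": s.groupby(changes).apply(lambda g: g.index[-1]),
})
table["Duration"] = table["End"] - table["Start"]
```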
1 point
32. Hi Ruby, This should work instead: spy.jobs.schedule('0 0 0 ? * 6#1 *') The schedule string is a Quartz-style cron expression (seconds, minutes, hours, day-of-month, month, day-of-week, year); in Quartz day-of-week 6 is Friday, so 6#1 fires at midnight on the first Friday of each month.
1 point
33. 1 point
34. Hi Jesse - Unfortunately moving property columns to the left/top of metric columns in tables is not yet supported. There is an open developer request for this functionality. If you would like to be notified when this functionality becomes available (and advocate for prioritization), please send us a ticket to [email protected] referencing developer request #27076 so we can log your ticket against that request. Thanks, Patrick
1 point
1 point
36. Now I see! I was assuming maxValue($SearchArea) was hard-coding the search. Your explanation makes sense: maxValue returns a search result, but $signal.within($ValidData) only passes the capsules in the condition to it. Therefore, as long as $SearchArea fully includes the capsules in $ValidData, it will work. I just need to hard-code dates well before and well after any capsules I would use. Thanks!
1 point
37. The first response with the hard-coded dates will give you the answer you are looking for, as long as any capsules you add to the "Data Valid" condition in the future still fall within that date range. The part of the formula that limits the scope of the search is the $signal.within($ValidData) section. This means that only data that falls within capsules belonging to the ValidData condition AND within the capsule("2020-01-01T00:00:00Z", "2022-07-28T00:00:00Z") date range is included in the calculation.
1 point
38. I think what you are going for will look like the formula below, where $SearchArea is the total range in which any of your valid data capsules could fall (you can be very conservative with these dates). This formula will work even if you have multiple valid data range capsules, as long as they all fall within $SearchArea:

$SearchArea = capsule("2020-01-01T00:00:00Z", "2022-07-28T00:00:00Z")
$Signal.within($ValidData).maxValue($SearchArea).toSignal()
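The within-then-aggregate pattern has a direct pandas analogue for data pulled into Data Lab: mask the signal to samples inside the valid capsules, then take the max over the full search range. A minimal sketch with made-up data and capsule dates:

```python
import pandas as pd

# Synthetic daily signal; the outlier (50) falls outside the valid windows
idx = pd.date_range("2022-01-01", periods=10, freq="D")
signal = pd.Series([1, 9, 3, 4, 50, 6, 2, 8, 7, 5], index=idx, dtype=float)

# Hypothetical "valid data" capsules as (start, end) pairs
valid = [("2022-01-01", "2022-01-04"), ("2022-01-07", "2022-01-09")]

# Equivalent of $signal.within($ValidData): keep only in-capsule samples
mask = pd.Series(False, index=idx)
for start, end in valid:
    mask |= (idx >= start) & (idx <= end)

# Equivalent of .maxValue($SearchArea): max over everything that survived
max_valid = signal[mask].max()
```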
1 point
39. Brett, There is not currently a direct equivalent function that would allow you to move a capsule by a variable amount. However, below is a formula that does the same thing in a couple of steps. It comes with a couple of caveats:

- If you have capsule properties on the first calculation, they will not be transferred over to the delayed signal.
- This formula delays both the start and the end of the capsule by the same amount, as defined by the value of your delay signal at the capsule start. You could probably extend this to do more complex transformations if needed.

$step1 = $condition.aggregate(totalDuration("min"), $condition, startKey(), 0s)
$step2 = $step1.move($timeShiftSignal, 2h)
$step2.toCapsules($sample -> capsule($sample.key(), $sample.key() + $sample.value()), 30d)

Let me know if this helps get you on the right track. Also, I am curious to understand more about your use case so that we can help improve the built-in functions in the future. Shamus
1 point
40. Hi Matthias, I would recommend checking out this post, which follows the same process: When looking at that post, it sounds like you'll want the \$reset variable to be equal to a monthly condition.
1 point
41. We don't currently support it, but we've noted the desires for the 2D use cases. Without committing to a specific release or date, I'll say it is on our near term roadmap 🙂
1 point
42. Yes! You can deploy ipywidgets from Jupyter notebooks in Data Lab. Widgets are great for wrapping code in a UI to provide a no-code experience from Workbench via Seeq Add-on tools, e.g., for implementing machine learning models. Here is one example for a date picker:

import ipywidgets as widgets  # import the library

widgets.DatePicker(
    description='Start:',
    style={'description_width': '150px'},
    layout={'width': '300px'})

You can read more about the various widgets here: https://ipywidgets.readthedocs.io/en/latest/examples/Widget List.html
1 point
43. Another approach that you can take if you don't need to know start or end times of the "active" capsules is a filtered Simple Table counting capsules. To get this summary table listing the "active" conditions in any Display Range, choose a Simple Table with the count column enabled from the Column button in the toolbar. If you also have signals in your Details pane, you will want to deselect those and only select the 9 conditions. If you only have conditions, you can exclude this Details pane selection. You can then filter that table using a menu that opens from the three vertical dots from the column header. Below I applied a filter for when count is greater than 0 and have only 4 rows displaying of 6 total conditions. The filter icon lets me know the table is filtered, and I can click on it to change or remove the filter. In R55 and later, percent duration and total duration are also possible column configurations in the Simple Table in addition to count. You can read more on how these table displays work on our Knowledge Base.
1 point
44. I tried this on R54.1.4 and came across a similar error, but fixed it by appending .toString() to $seq. Below is the updated formula code:

//Creates a condition for 1 minute of time encompassing 30 seconds on either side of a transition
$Transition = $CompressorStage.toCondition().beforeStart(0.5min).afterStart(1min)

//Assigns the mode on both sides of the step change to a concatenated string that is a property of the capsule
$Transition
.transform($cap -> $cap.setProperty('StartModeEndMode',
    $CompressorStage.toCondition()
    .toGroup($cap, CAPSULEBOUNDARY.INTERSECT)
    .reduce("", ($seq, $stepCap) -> $seq.toString() + $stepCap.getProperty('Value')
        //Changes the format of the stage names for clearer delineation as a property in the capsules pane
        .replace('STAGE 1','-STAGE1-').replace('STAGE 2','-STAGE2-').replace('TRANSITION','-TRANSITION-').replace('OFF','-OFF-')
    )))
1 point
45. Starting in Seeq version R52, Data Lab notebooks can be run on a schedule, which opens up a world of interesting new possibilities. One of those possibilities is to create a simple script that pulls data from a Web API source and pushes it into the Seeq data cache on a schedule. This can be a great way to prototype a data connection prior to building a full-featured connector using the Connector SDK. This example notebook pulls from the USGS, which has information on river levels, temperatures, turbidity, etc., and pushes those signals for multiple sites into the Seeq system. The next logical step would be to make a notebook to organize these signals into an asset tree. Curious to see what this inspires others to do and to connect to. If there are additional public resources of interest, put them in the thread for ideas. USGS Upload Example.ipynb
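The attached notebook isn't reproduced on this page, but the first step of such a script, building the request against the USGS Instantaneous Values web service, might look like the sketch below. The endpoint, parameter codes (00060 for discharge, 00010 for water temperature), and site number are assumptions to verify against the USGS waterservices documentation:

```python
from urllib.parse import urlencode

# Assumed USGS Instantaneous Values service endpoint; check
# https://waterservices.usgs.gov for the authoritative reference.
BASE = "https://waterservices.usgs.gov/nwis/iv/"

def usgs_url(sites, parameter_codes, period="P7D"):
    """Build a request URL for recent data at one or more USGS sites."""
    query = urlencode({
        "format": "json",
        "sites": ",".join(sites),
        "parameterCd": ",".join(parameter_codes),
        "period": period,  # ISO 8601 duration: last 7 days
    })
    return f"{BASE}?{query}"

# Hypothetical site number (Colorado River at Lees Ferry is 09380000)
url = usgs_url(["09380000"], ["00060", "00010"])
```

The JSON response would then be reshaped into a timestamped DataFrame and pushed into the Seeq cache with spy.push() on the notebook's schedule.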
1 point
46. Hi Robin, maybe you want to try this. For this demo I created 3 signals based on the Seeq example data, as I did not have data like yours. For each of the signals I created capsules whenever the value is above 1kW. In the next step I joined the running conditions into one parent condition. Now I am able to calculate the delay between the start of the "All running" capsules and the "Running" capsules of each signal, and delay the original signal by this value. In the last step I created capsules for the delayed signal, whenever the value is above 1kW. You may have to make some adjustments to this example to fit your needs. Hope this helps. Regards, Thorsten
1 point
47. Hi Banderson, you can create a duration signal from each capsule in a condition using the Signal from Condition tool. As you may know, these point-and-click tools create a Seeq Formula underneath. So after using the point-and-click Signal from Condition tool, you can find the formula syntax in the Item Properties of that calculation. You can copy this syntax, paste it into Formula, and use it to further develop your calculations.
1 point
48. Overview

This method will provide a simple visualization of externally determined control limits, or help you accurately calculate new control limits for a signal. Using these limits we will also create a boundary and find excursions: how many times, and for how long, a signal deviates from the limits. These created signals can be used in follow-on analyses to search for periods of abnormal system behavior. In this example we will be creating average, +3 standard deviation, and -3 standard deviation boundaries on a Temperature signal.

Setup Signals

In the Data tab, select the following: Asset → Example → Cooling Tower 1 → Area A; Signal → Temperature.

Option 1: Manually Define Simple Control Limits

From the Tools tab, navigate to the Formula tool. Formula can be used to easily plot simple scalar values. If you already have calculated values for the upper and lower limit, just enter them in the formula editor with their units, as shown in the screenshot below.

Formula - Simple Upper Limit: 103F
Formula - Simple Lower Limit: 70F

Option 2: Calculate the Control Limits

From the Tools tab, navigate to the Formula tool. In Formula we are going to define the time period over which we want to calculate our control limits, as well as the math behind those limits.

Step 1 - Calculate the upper limit

Variables: $Series → Temperature (Signal)

Formula:

$calcPeriod = capsule("2018-01-01T00:00:00-06:00", "2018-05-01T00:00:00-06:00")
$tempAve = $Series.average($calcPeriod)
$tempStdDev = $Series.standardDeviation($calcPeriod)
$tempAve + 3*$tempStdDev

Description of code:

$calcPeriod → The time range over which we calculate the average and standard deviation of our signal. The start and end times of the period must be written in ISO 8601 format (Year-Month-Day "T" Hours:Minutes:Seconds.Fractional Seconds -/+ Timezone).
$tempAve → Intermediate variable calculating the average of the temperature signal over our calculation period.
$tempStdDev → Intermediate variable calculating the standard deviation of the temperature signal over our calculation period.
$tempAve + 3*$tempStdDev → Example control limit calculation.

Step 2 - Duplicate your formula to calculate the lower limit

Click the info icon in the Details pane next to your calculated upper limit signal. From the info panel, select Duplicate to create a copy of the formula. In this copy, simply edit the formula to calculate the lower limit:

$calcPeriod = capsule("2018-01-01T00:00:00-06:00", "2018-05-01T00:00:00-06:00")
$tempAve = $Series.average($calcPeriod)
$tempStdDev = $Series.standardDeviation($calcPeriod)
$tempAve - 3*$tempStdDev

Alternate method: if you wanted $calcPeriod to actually change based on the previous month or week of operation, you could use Signal from Condition based on a periodic condition to achieve this.

Step 3 - Visualize Limits as a Boundary

Use the Boundary tool to connect the process variable with the upper and lower limits. Select Temperature as your primary signal and select "New". Select Boundary under relation type, name your new boundary, and select the signals for your upper and lower limits. Click Save to visualize the boundary on the trend. Using this same method you can create and visualize multiple boundaries (simple and calculated) at the same time.

Step 4 - Create Capsules when Outside the Boundary

Using the Deviation Search tool, create a condition for when the signal crosses the boundary. Name your new condition, select Temperature as the input signal, select "outside a boundary pair", and choose the upper and lower signals. Estimate the maximum time you would expect any one out-of-boundary event to last and enter that time in the max capsule duration field.

Step 5 - Create a Scorecard to Quantify How Often and How Long Boundary Excursions Occur

Create a scorecard to count how many excursions occur, how long they last, and what percentage of total time they represent. Create each metric using the Scorecard Metric tool and the Count, Total Duration, and Percent Duration statistics. Use a condition-based scorecard to get weekly or monthly metrics.

Step 6 - Plot How These KPIs Are Changing Over Time

By creating a signal which plots these KPIs over time, we can quantify how our process variable is changing relative to these limits. To begin, determine how often you would like to calculate the KPI (per hour/day/week/month) and create a condition for those time segments using the Periodic Condition tool. In the screenshot below we are creating a weekly condition with capsules every week. Using the Signal from Condition tool, count the number of Outside Simple Boundary capsules which occur within each weekly capsule. This same methodology can be used to create signals for total duration and % duration, just like in the scorecard section above. For each week the tool will create a single sample; the timestamp placement and interpolation method selections determine how those samples are placed within the week and visualized on the chart. The scorecard metrics that you created above can also be trended over time by switching from Scorecard View to Trend View.
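The core calculations in this workflow (fixed-period mean ± 3σ limits, excursion flagging, and a weekly KPI) can also be sketched in pandas inside Data Lab, e.g. on data retrieved with spy.pull(). The signal below is synthetic and the dates mirror the $calcPeriod above:

```python
import numpy as np
import pandas as pd

# Synthetic hourly temperature, standing in for the Area A Temperature signal
rng = np.random.default_rng(1)
idx = pd.date_range("2018-01-01", "2018-06-01", freq="h")
temp = pd.Series(rng.normal(85, 5, len(idx)), index=idx, name="Temperature")

# Control limits from a fixed calculation period (mirrors $calcPeriod)
calc = temp["2018-01-01":"2018-05-01"]
upper = calc.mean() + 3 * calc.std()
lower = calc.mean() - 3 * calc.std()

# Flag excursions outside the boundary over the full history
outside = (temp > upper) | (temp < lower)

# Weekly KPI: number of out-of-boundary samples per week
weekly_counts = outside.resample("W").sum()
```

Note that this counts out-of-boundary samples rather than contiguous capsules; grouping consecutive True values (as in the string-signal example earlier on this page) would recover a per-event count like the Deviation Search condition.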
1 point
49. One limitation of the method mentioned above is that if one of the signals doesn't have any values, then no answer is returned. If you still want the value even when one signal is missing, then you can try the alternative formula described below. This method works for versions prior to R21.0.40.05. Here is the formula for 2 signals as shown above:

$signal1.zipWith($signal2, ($s1, $s2) -> max($s1.getValue(), $s2.getValue()))

If you have more than 2 signals, then add additional zipWith() statements:

$signal1.zipWith($signal2, ($s1, $s2) -> max($s1.getValue(), $s2.getValue()))
.zipWith($signal3, ($s1, $s3) -> max($s1.getValue(), $s3.getValue()))
.zipWith($signal4, ($s1, $s4) -> max($s1.getValue(), $s4.getValue()))
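For data pulled into Data Lab, the equivalent of chained zipWith() max operations is a single row-wise max that skips missing values. A minimal sketch with three hypothetical aligned signals:

```python
import numpy as np
import pandas as pd

# Three hypothetical signals aligned on the same timestamps;
# NaN marks a missing sample in one of the signals
df = pd.DataFrame({
    "signal1": [1.0, 4.0, np.nan],
    "signal2": [2.0, np.nan, 5.0],
    "signal3": [3.0, 1.0, np.nan],
})

# Row-wise max; skipna=True (the default) means a missing signal
# does not drop the row, mirroring the zipWith() workaround above
row_max = df.max(axis=1)
```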
1 point
50. Hi Thorsten- In the first screenshot, the area of each box is actually the same, even though some boxes have different dimensions. As you observed, the size of your display impacts how the boxes are drawn. To adjust the box sizes via the API, please use the following steps: 1. On your Seeq installation, open the workbook that contains the Treemap and navigate to the API: 2. To get the ID of the asset that you would like to resize: a. Navigate to GET Assets b. Adjust the "limit" to 200 and click "Try it out!" c. In the Response Body, locate the asset to resize and copy the "id": 3. To resize the asset: a. Navigate to POST Item Properties b. Paste the asset ID into the "id" field. Use the following syntax in the "Body" field. [ { "unitOfMeasure": "", "name": "size", "value": 10 } ] The following screenshot shows a size 10, but this number may be adjusted. c. Click "Try it out!" 4. Navigate back to the Treemap and refresh the browser. The Treemap now reflects the adjusted size: Please let me know if you have any additional questions. Thanks, Lindsey
1 point
This leaderboard is set to Los Angeles/GMT-07:00