Everything posted by John Cox
-
Hi Marcelo, I'll add a few suggestions below, but most of all I'd recommend signing up for an upcoming Seeq Office Hours session, where you can work through the calculations with a Seeq Analytics Engineer: https://info.seeq.com/office-hours If you simply want to count regenerations, you could do a running sum of your regeneration signal over a specified time period or a Periodic Condition. If you want to break the sum into groups of 3, you could divide the running sum by 3 and look for time periods where the result is an integer. If you already have a cleanup condition, it would be easy to identify the cleanup events (capsules) where the number of regenerations just before the cleanup is not equal to 3. You could do this with a running sum of regenerations over a condition representing the time between cleanups (which would be $Cleanups.inverse()). The Formula functions below may also be useful, as they allow you to focus on a specific number of recent sample values or capsules.
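As a rough sketch of the divide-by-3 idea (names here are hypothetical; $regenCount is assumed to be a running count of regenerations, as described in the reply below):

// Condition for time periods where the running count is a multiple of 3
$third = $regenCount / 3
$third == $third.round()

The signal-to-signal comparison returns a condition that is present whenever the running count divides evenly by 3.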
-
Hi Marcelo, I am not certain I understand but I think you want to count the number of regenerations in between each "cleanup". If this is the case then you likely need to define a condition for when "cleanups" actually occur, and then you can create a condition for "time in between cleanups" using the .inverse() Formula function (there are several relatively easy you might find the "time in between cleanups"). Once you have the condition for "time in between cleanups", then a runningSum() Formula function would be one way to count the current number of regenerations. I'm happy to try to help iterate with your solution here on the forum, but because I'm not certain about the calculations you want to do, the most efficient way to get a solution for your questions would be to sign up for an upcoming Seeq Office Hours where you can share your screen and get help: https://info.seeq.com/office-hours John
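For example, a minimal sketch of that approach, assuming $Cleanups is your cleanup condition and $Regens is a regeneration signal that pulses to 1 at each regeneration (both names hypothetical):

// Time in between cleanups
$betweenCleanups = $Cleanups.inverse()
// Running count of regenerations, restarting at each cleanup
$Regens.runningSum($betweenCleanups)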
-
Replace missing data with zero
John Cox replied to Curious George's topic in General Seeq Discussions
Hi Ross, The same approach should work when dealing with string signals. This sounds like a potential bug, so I would suggest creating a support ticket by going to https://support.seeq.com/, then clicking on Support Portal in the lower right, then creating an "Analytics Help" support ticket. When investigating the support ticket, Seeq support will also try to determine if there is an alternate approach that will eliminate the Treemap error. -
Basic Formatting Tips for Workbench Formulas Pushed from Data Lab
John Cox replied to John Cox's topic in Seeq Data Lab
As a much easier alternative to having to manage string delimiters, apostrophe escape characters, line continuation characters, etc. with the approach above, Python offers a multiline string literal feature (3 double or single quotes). We use this approach below (note the 3 double quotes on code lines 6 and 14). We also use the textwrap.dedent() function (link) to nicely indent our code without indenting the final result in Workbench. This alternate approach is much easier to code as well as being more readable on the Data Lab side. -
When pushing Formulas to Workbench from Data Lab, you can use "\" and "\n" in a few different ways in the Data Lab code, to create a nicely formatted, multi-line Formula in Workbench: For the Formula pushed to Workbench, a \n can be used for a line return. This is valuable for creating multi-line Formulas with intermediate variable assignments, or for making lengthy, single lines of code more readable. For the Formula pushed to Workbench, a \ can be used just before a single quote, when you need to include a single quote in the Formula. This helps for example when working with string signals. On the Data Lab side, using a \ at the end of the line continues the code to the next line, helping with readability in Data Lab. These are illustrated in the Data Lab code below which pushes a condition Formula to Workbench: Editing the Formula in Workbench, the Formula appears as we desired:
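For reference, a pushed Formula using this technique might end up looking like the example below in Workbench (a hypothetical formula; $feed and $down are placeholder variables, and the functions shown are just for illustration):

// Each line break below comes from a \n in the Data Lab string
$cleansed = $feed.remove($down)
$dailyAvg = $cleansed.aggregate(average(), days(), middleKey())
$dailyAvg.setUnits('lb/min')

The intermediate variable assignments are exactly where the \n line returns pay off in readability.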
-
Proportional-integral-derivative (PID) control loops are essential to the automation and control of manufacturing processes and are foundational to the financial gains realized through advanced process control (APC) applications. Because poorly performing PID controllers can negatively impact production capacity, product quality, and energy consumption, implementing controller performance monitoring analytics leads to new data-driven insights and process optimization opportunities. The following sections provide the essential steps for creating basic control loop performance monitoring (CLPM) at scale in Seeq. More advanced CLPM solutions can be implemented by expanding the standard framework outlined below with additional metrics, customization features, and visualizations. Data Lab's SPy library functionality is integral to creating large scale CLPM, but small scale CLPM is possible with no coding, using Asset Groups in Workbench.

Key steps in creating a CLPM solution include:

1. Controller asset tree creation
2. Developing performance metric calculations and inserting them in the tree as formulas
3. Calculating advanced metrics via scheduled Data Lab notebooks (if needed)
4. Configuring asset scaled and individual controller visualizations in Seeq Workbench
5. Setting up site performance as well as individual controller monitoring in Seeq Organizer

Using these steps and starting from only the controller signals within Seeq, large scale CLPM monitoring can be developed relatively quickly, and a variety of visualizations can be made available to the manufacturing team for monitoring and improving performance. As a quick example of many end result possibilities, this loop health treemap color codes controller performance (green=good, yellow=questionable, red=poor):

The key steps in CLPM implementation, summarized above, are detailed below. Note: for use as templates for development of your own CLPM solution, the associated Data Lab notebooks containing the example code (for Steps 1-3) are included as file attachments to this article. The code and formulas described in Steps 1-3 can be adjusted and expanded to customize your CLPM solution as desired.

Example CLPM Solution: Detailed Steps

STEP 1: Controller asset tree creation

An asset tree is the key ingredient which enables scaling of the performance calculations across a large number of controllers. The desired structure is chosen and the pertinent controller tags (typically at a minimum SP, PV, OP, and MODE) are mapped into the tree. For this example, we will create a structure with two manufacturing sites and a small number of controllers at each site. In most industrial applications, the number of controllers would be much higher, and additional asset levels (departments, units, equipment, etc.) could of course be included. We use SPy.trees functionality within the Data Lab notebook to create the basic structure:

Controller tags for SP, PV, OP, and MODE are identified using SPy.search. Cleansed controller names are extracted and inserted as asset names within the tree:

Next, the controller tags (SP, PV, OP, and MODE), identified in the previous Data Lab code section using SPy.search, are mapped into the tree. At the same time, the second key step in creating the CLPM solution (developing basic performance metrics and calculations using Seeq Formula and inserting them into the asset tree) is completed.
Note that in our example formulas, manual mode is detected when the numerical mode signal equals 0; this formula logic will need to be adjusted based on your mode signal conventions. While this second key step could be done just as easily as a separate code section later, it also works nicely to combine it with the mapping of the controller tag signals:

STEP 2: Developing performance metric calculations and inserting them in the tree as formulas

Several key points need to be mentioned related to this step in the CLPM process, which was implemented using the Data Lab code section above (see previous screenshot). There are of course many possible performance metrics of varying complexity. A good approach is to start with basic metrics that are easily understood, and to incrementally layer on more complex ones if needed, as the CLPM solution is used and shortcomings are identified. The selection of metrics, parameters, and the extent of customization for individual controllers should be determined by those who understand the process operation, control strategy, process dynamics, and overall CLPM objectives. The asset structure and functionality provided with the Data Lab asset tree creation enables the user to implement the calculation details that will work best for their objectives.

Above, we implemented Hourly Average Absolute Controller Error (as a percentage based on the PV value) and Hourly Percent Span Traveled as basic metrics for quantifying performance. When performance is poor (high variation), the average absolute controller error and percent span traveled will be abnormally high. Large percent span traveled values also lead to increased control valve maintenance costs. We chose to calculate these metrics on an hourly basis, but calculating more or less frequently is easily achieved by substituting different recurring time period functions in place of the hours() function in the formulas for Hourly Average Absolute Controller Error and Hourly Percent Span Traveled.

The performance metric calculations are inserted as formula items in the asset tree. This is important because it allows calculation parameters to be customized as needed on an individual controller basis, using Data Lab code, to give more accurate performance metrics. There are many customization possibilities; for example, controller specific PV ranges could be used to normalize the controller error, or loosely tuned level controllers intended to minimize OP movement could be assigned an error of 0 when the PV is within a specified range of SP.

The Hourly Average Absolute Controller Error and Hourly Percent Span Traveled are then aggregated into an Hourly Loop Health Score using a simple weighting calculation to give a single numerical value (0-100) for categorizing overall performance; higher values represent better performance. Another possible approach is to calculate loop health relative to historical variability indices for time periods of good performance specified by the user. The magnitude of a loop health score comprised of multiple, generalized metrics is never going to be a perfect measure of performance. As an alternative to using just the score value to flag issues, the loop health score can be monitored for significant decreasing trends to detect performance degradation and report controller issues.
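As a hedged sketch only (signal names, the mode convention of 0 = manual, and the assumption that OP is measured in percent of span are all placeholders to adapt), the inserted formulas could take a form along these lines:

// Hourly Average Absolute Controller Error, as % of PV,
// excluding time periods when the controller is in manual
$absErrPct = (($SP - $PV).abs() / $PV * 100)
$absErrPct.aggregate(average(), hours().intersect($Mode != 0), middleKey())

// Hourly Percent Span Traveled (OP assumed to already be in % of span)
$OP.runningDelta().abs()
   .aggregate(sum(), hours().intersect($Mode != 0), middleKey())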
While not part of the loop health score, note in the screenshot above that we create an Hourly Percent Time Manual Mode signal and an associated Excessive Time in Manual Mode condition, as another way to flag performance issues – where operators routinely intervene and adjust the controller OP manually to keep the process operating in desired ranges. Manual mode treemap visualizations can then be easily created for all site controllers. With the asset tree signal mappings and performance metric calculations inserted, the tree is pushed to a Workbench Analysis and the push results are checked:

STEP 3: Calculating advanced metrics via scheduled Data Lab notebooks (if needed)

Basic performance metrics (using Seeq Formula) may be all that are needed to generate actionable CLPM, and there are advantages to keeping the calculations simple. If more advanced performance metrics are needed, scheduled Data Lab notebooks are a good approach to do the required math calculations, push the results as signals into Seeq, and then map/insert the advanced metrics as items in the existing asset tree. There are many possibilities for advanced metrics (oscillation index, valve stiction, non-linearity measures, etc.), but here, as an example, we calculate an oscillation index and associated oscillation period using the asset tree Controller Error signal as input data. The oscillation index is calculated based on the zero crossings of the autocorrelation function. Note: the code below does not account for time periods when the controller is in manual, the process isn't running, problems with calculating the oscillation index across potential data gaps, etc. – these issues would need to be considered for this and any advanced metric calculation.

Initially, the code above would be executed to fill in historical oscillation metric results for as far back in time as the user chooses, by adjusting the calculation range parameters. Going forward, this code would be run in a recurring fashion as a scheduled notebook, to calculate oscillation metrics as time moves forward and new data becomes available. The final dataframe result from the code above looks as follows:

After examining the results above for validity, we push the results as new signals into Seeq Workbench with tag names corresponding to the column names above. Note the new, pushed signals aren't yet part of the asset tree:

There are two additional sections of code that need to be executed after oscillation tag results have been pushed for the first time, and when new controller tags are added into the tree. These code sections update the oscillation tag metadata, adding units of measure and descriptions, and most importantly, map the newly created oscillation tags into the existing CLPM asset tree:

STEP 4: Configuring asset scaled and individual controller visualizations in Seeq Workbench

The CLPM asset tree is now in place in the Workbench Analysis, with drilldown functionality from the "US Sites" level to the signals, formula-based performance metrics, and Data Lab calculated advanced metrics, all common to each controller:

The user can now use the tree to efficiently create monitoring and reporting visualizations in Seeq Workbench. Perhaps they start by setting up raw signal and performance metric trends for an individual controller.
Here, performance degradation due to the onset of an oscillation in a level controller is clearly seen in a decrease in the loop health score and an oscillation index rising well above 1:

There are of course many insights to be gained by asset scaling loop health scores and excessive time in manual mode across all controllers at a site. Next, the user creates a sorted, simple table for average loop health, showing that 7LC155 is the worst performing controller at the Beech Island site over the time range:

The user then flags excessive time in manual mode for controller 2FC117 by creating a Lake Charles treemap based on the Excessive Time in Manual Mode condition:

A variety of other visualizations can also be created, including controller data for recent runs versus the current run in chain view, oscillation index and oscillation period tables, a table of derived control loop statistics (see screenshot below for Lake Charles controller 2FC117) that can be manually created within Workbench or added later as items within the asset tree, and many others.

Inspecting a trend in Workbench (see screenshot below) for a controller with significant time in manual mode, we of course see Excessive Time in Manual Mode capsules, created whenever the Hourly Percent Time Manual Mode was > 25% for at least 4 hours in a row. More importantly, we can see the effectiveness of including hours().intersect($Mode!=0) in the formulas for Hourly Average Absolute Controller Error and Hourly Percent Span Traveled. When the controller is in manual mode, that data is excluded from the metric calculations, resulting in the gaps shown in the trend. Controller error and OP travel have little meaning when the controller is in manual, so excluding that data is necessary to keep the metrics accurate. It would also be easy to modify the formulas to only calculate metrics for hourly time periods where the controller was in auto or cascade for the entire hour (using Composite Condition and finding hourly capsules that are "outside" manual mode capsules). The ability to accurately contextualize the metric calculations, to the time periods where they can be validly calculated, is a key feature in Seeq for doing effective CLPM implementations. Please also refer to the "Reminders and Other Considerations" section below for more advanced ideas on how to identify time periods for metric calculations.

STEP 5: Setting up site performance as well as individual controller monitoring in Seeq Organizer

As the final key step, the visualizations created in Workbench are inserted into Seeq Organizer to create a cohesive, auto-updating CLPM report with site overviews as well as individual controller visuals. With auto-updating date ranges applied, the CLPM reports can be "review ready" for recurring meetings. Asset selection functionality enables investigative workflows: poorly performing controllers are easily identified using the "Site CLPM" worksheet, and then the operating data and metrics for those specific controllers can be quickly investigated via asset selection in the site's "Individual Controllers" worksheet – further details are described below.

An example "Site CLPM" Organizer worksheet (see screenshot below) begins with a loop health performance ranking for each site, highlighting the worst performing controllers at the top of the table and therefore enabling the manufacturing team to focus attention where needed; if monitoring hundreds of controllers, the team could filter the table to the top 10 or 20 worst performing controllers.
The visualizations also include a treemap for controllers that are often switched to manual mode – the team can talk to operators on each crew to determine why the controllers are not trusted in auto or cascade mode, and then generate action items to resolve. Finally, oscillating controllers are flagged in red in the sorted Oscillation Metrics tables, with the oscillation period values also sorted – short oscillation periods may prematurely wear out equipment and valves due to high frequency cycling; long oscillation periods are more likely to negatively impact product quality, production rate, and energy consumption. Controllers often oscillate due to root causes such as tuning and valve stiction, and this variability can often be eliminated once an oscillating controller has been identified. The oscillation period table can also be perused for controllers with similar periods, which may be evidence of an oscillation common to multiple controllers that is generating widespread process variation.

An example "Individual Controllers" Organizer worksheet is shown below, where detailed operating trends and performance metrics can be viewed for changes, patterns, etc., and chain view can be used to compare controller behavior for the current production run versus recent production runs. Other controllers can be quickly investigated using the asset selection dropdown, and the heading labels (e.g., Beech Island >> 7LC155) change dynamically depending on which controller asset is selected. For example, the Beech Island 7LC155 controller, which was identified as the worst performing controller in the "Site CLPM" view above, can be quickly investigated in the view below, where it is evident that the controller is oscillating regularly and the problem has been ongoing, as shown by the comparison of the current production run to the previous two runs:

Reminders and Other Considerations

As evident with the key steps outlined above, a basic CLPM solution can be rapidly implemented in Seeq. While Seeq's asset and contextualization features are ideal for efficiently creating CLPM, there are many details which go into developing and maintaining actionable CLPM dashboards for your process operation. A list of reminders and considerations is given below.

1. For accurate and actionable results, it is vital to only calculate performance metrics when it makes sense to do so, which typically means when the process is running at or near normal production rates. For example, a controller in manual during startup may be expected and part of the normal procedure. And of course, calculating average absolute controller error when the process is in an unusual state will likely lead to false indications of poor performance. Seeq is designed to enable finding those very specific time periods when the calculations should be performed. In the CLPM approach outlined in this article, we used time periods when the controller was not in manual by including hours().intersect($Mode!=0) in the formulas for Hourly Average Absolute Controller Error and Hourly Percent Span Traveled. When the controller is in manual mode, that data is excluded from the metric calculations. But of course, a controller might be in auto or cascade mode when the process is down, and there could be many other scenarios where only testing for manual mode isn't enough.
In the CLPM approach outlined above, we intentionally kept things simple by just calculating metrics when the controller was not in manual mode, but for real CLPM implementations you will need a more advanced method. Here are a few examples of finding "process running" related time periods, using simple as well as more advanced methods. Similar approaches can be used with your process signals, in combination with value searches on controller mode, for excluding data from CLPM calculations:

A simple Process Running condition can be created with a Value Search for when Process Feed Flow is > 1 for at least 2 hours.

A 12 Hours after Process Running condition can be created with a formula based on the Process Running condition: $ProcessRunning.afterStart(12hr)

A Process Running (> 12 hours after first running) condition can then be created from the first two conditions with the formula: $ProcessRunning.subtract($TwelveHrsAfterRunning)

Identifying time periods based on the value and variation of the production rate is also a possibility, as shown in this Formula:

The conditions described above are shown in the trends below:

2. As previously mentioned, including metric formulas and calculations as items within the asset tree enables customization for individual controllers as needed, when controllers need unique weightings, or when unique values such as PV ranges are part of the calculations. It may also be beneficial to create a "DoCalcsFlag" signal (0 or 1 value) as an item under each controller asset and use that as the criteria to exclude data from metric calculations. This would allow customization of "process is running normally and controller is not in manual" on an individual controller basis, with the result for each controller represented as the "DoCalcsFlag" value.

3. In the CLPM approach outlined above, we used SPy.trees in Data Lab to create the asset tree. This is the most efficient method for creating large scale CLPM. You can also efficiently create the basic tree (containing the raw signal mappings) from a CSV file. For small trees (<= 20 controllers), it is feasible to interactively create the CLPM asset tree (including basic metric calculation formulas) directly in Workbench using Asset Groups. The Asset Groups approach requires no Python coding and can be quite useful for a CLPM proof of concept, perhaps focused on a single unit at the manufacturing site. For more details on Asset Groups: https://www.seeq.org/index.php?/forums/topic/1333-asset-groups-101-part-1/

4. In our basic CLPM approach, we scoped the CLPM asset tree and calculations to a single Workbench Analysis. This is often the best way to start for testing, creating a proof of concept, getting feedback from users, etc. You can always decide later to make the CLPM tree and calculations global, using the SPy.push option workbook=None.

5. For long-term maintenance of the CLPM tree, you may want to consider developing an add-on for adding new controllers into the tree, or for removing deleted controllers from the tree. The add-on interface could also prompt the user for any needed customization parameters (e.g., PV ranges, health score weightings) and could use SPy.trees insert and remove functionality for modifying the tree elements.
6. When evidence of more widespread variation is found (more than just variability in a single controller), and the root cause is not easily identified, CLPM findings can be used to generate a list of controllers (and perhaps measured process variables in close proximity) that are then fed as a dataset to Seeq's Process Health Solution for advanced diagnostics.

7. For complex performance or diagnostic metrics (for example, stiction detection using PV and OP patterns), high quality data may be needed to generate accurate results. Therefore, some metric calculations may not be feasible depending on the sample frequency and compression levels inherent with archived and compressed historian data. The only viable options might be using raw data read directly from the distributed control system (DCS), or specifying high frequency scan rates and turning off compression for controller tags such as SP, PV, and OP in the historian. Another issue to be aware of is that some metric calculations will require evenly spaced data and therefore need interpolation (resampling) of historian data. Resampling should be carefully considered as it can be problematic in terms of result accuracy, depending on the nature of the calculation and the signal variability.

8. The purpose of this article was to show how to set up basic CLPM in Seeq, but note that many types of process calculations to monitor "asset health" metrics could be created using a similar framework.

Summary

While there are of course many details and customizations to consider for generating actionable controller performance metrics for your manufacturing site, the basic CLPM approach above illustrates a general framework for getting started with controller performance monitoring in Seeq. The outlined approach is also widely applicable for health monitoring of other types of assets. Asset groups/trees are key to scaling performance calculations across all controllers, and Seeq Data Lab can be used as needed for implementing more complex metrics such as oscillation index, stiction detection, and others. Finally, Seeq Workbench tools and add-on applications like Seeq's Process Health solution can be used for diagnosing the root cause of performance issues automatically identified via CLPM monitoring.

CLPM Asset Tree Creation.ipynb
Advanced CLPM Metric Calculations.ipynb
-
Tagged with: process control, controller (and 3 more)
-
A typical data cleansing workflow is to exclude equipment downtime data from calculations. This is easily done using the .remove() and .within() functions in Seeq Formula. These functions remove or retain data when capsules are present in the condition that the user supplies as a parameter to the function. There is a distinct difference in the behavior of the .remove() and .within() functions that users should know about, so that they can use the best approach for their use case.

.remove() removes the data during the capsules in the input parameter condition. For step or linearly interpolated signals, interpolation will occur across those data gaps that are of shorter duration than the signal's maximum interpolation. (See Interpolation for more details concerning maximum interpolation.)

.within() produces data gaps between the input parameter capsules. No interpolation will occur across the data gaps (no matter what the maximum interpolation value is).

Let's show this behavior with an example (see the first screenshot below, Data Cleansed Signal Trends), where an Equipment Down condition is identified with a simple Value Search for when Equipment Feedrate is < 500 lb/min. We then generate cleansed Feedrate signals which will only have data when the equipment is running. We do this two ways to show the different behaviors of the .remove() and .within() functions:

$Feedrate.remove($EquipmentDown) interpolates across the downtime gaps because the gap durations are all less than the 40 hour max interpolation setting.

$Feedrate.within($EquipmentDown.inverse()) does NOT interpolate across the downtime gaps. In the majority of cases, this result is more in line with what the user expects.

As shown below, there is a noticeable visual difference in the trend results. Gaps are present in the green signal produced using the .within() function, wherever there is an Equipment Down capsule. A more significant difference is that, depending on the nature of the data, the statistical calculation results for time weighted statistics like averages and standard deviations can be very different. This is shown in the simple table (Signal Averages over the 4 Hour Time Period, second screenshot below). Time weighting the very low, interpolated values across the Equipment Down capsules when averaging the $Feedrate.remove($EquipmentDown) signal gives a much lower average value compared to that for $Feedrate.within($EquipmentDown.inverse()) (1445 versus 1907).

Data Cleansed Signal Trends
Signal Averages over the 4 Hour Time Period
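To summarize the two variants side by side for this example:

// Interpolates across downtime gaps shorter than the max interpolation
$Feedrate.remove($EquipmentDown)

// Leaves true gaps during downtime (no interpolation across them)
$Feedrate.within($EquipmentDown.inverse())

When in doubt, the within() version is usually the safer choice for time weighted statistics such as averages.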
-
Technique for Scaling Data Exports across Assets
John Cox posted a topic in General Seeq Discussions
Asset groups and asset trees (link) are used frequently in Seeq for asset swapping calculations, building treemap visualizations, and scaling tabular results across assets. In some cases, users want to export calculated results for assets from Seeq Workbench to Excel or perhaps to Power BI. The following example illustrates a technique for doing this efficiently with a single export.

1. An asset group for 8 furnaces contains an "Outlet Temperature" signal for which we want to calculate and export daily statistics (avg, std deviation, min, max):

2. Note that the "Daily Statistics" condition is created with a Formula that is part of the asset group. This is key to enabling the data export across all assets. See the formula for the "Daily Statistics" condition below for Furnace 1. Note that we create a daily condition, add the temperature statistics as capsule properties, and assign an asset name property. These details are also essential in setting up an efficient export. As a reminder, we need to edit the "Daily Statistics" formula for each furnace to assign the correct furnace number to the Asset Name capsule property. For this example (only 8 assets), this is easy to do manually. For a large asset group (50, 100 or more), a better approach would be to create an asset tree using Data Lab, and programmatically create the individualized "Daily Statistics" conditions.

3. Next, we navigate the asset group and add the Daily Statistics condition for each furnace to the trend display in Workbench, which makes it easy to set up the "Daily Furnace Statistics for Export" condition in step 4.

4. Create the "Daily Furnace Statistics for Export" condition, which will have overlapping daily capsules for the 8 furnaces. Here, we combine the separate Daily Statistics conditions (for all 8 furnaces) into a single condition. For the export to work as expected in a later step, we need to slightly offset the capsules using the move() function, so that they do not have identical start times.

5. Next, we visually check the capsules and their properties on the trend (Asset Name and Daily Avg) and in the capsules pane on the lower right. Everything looks as expected; we have a capsule for each furnace for the day of May 24.

6. The export to Excel or to other applications via OData can now be set up. The key is to export only the "Daily Furnace Statistics for Export" condition, and to set the time range appropriately based on your objectives. Here, we only want the results for 1 day:

7. Checking the export on the Excel side, all looks good. We have the daily statistics, scaled across all furnace assets, with one row for each furnace:

To summarize, the following are keys to this technique for exporting calculation results across all assets from Seeq Workbench:

Store the calculation results as capsule properties in a condition that is an item in the asset group, and also assign an asset name property (see Step 2 above). In this example we used a daily time basis for calculations, but the technique can be applied and extended for many scenarios.

To store the results across all assets, create a single condition for export, which is a combination of all the individual asset "calculation results" conditions, and offset capsules slightly as needed to avoid identical start times (see Steps 3 and 4 above).

In this example, we only had 8 assets, so all formulas could be created interactively by the user.
For large asset structures, the asset tree and formulas, including the final condition for export, can be created programmatically using Data Lab (link).
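To make Step 2 concrete, a hedged sketch of what the "Daily Statistics" Formula for Furnace 1 could look like (property names and the setProperty() signal-aggregation overload shown here are illustrative; check the setProperty() Formula documentation for the exact syntax in your Seeq version):

// Daily capsules carrying the statistics and asset name as capsule properties
days()
  .setProperty('Asset Name', 'Furnace 1')
  .setProperty('Daily Avg', $OutletTemperature, average())
  .setProperty('Daily Max', $OutletTemperature, maxValue())

And for Step 4, the combined condition can offset each furnace slightly so start times are unique:

// Offset each furnace's capsules by a different small amount
combineWith($furnace1Daily, $furnace2Daily.move(1s), $furnace3Daily.move(2s))
// ...and so on for the remaining furnaces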
-
Tagged with: export, asset tree (and 2 more)
-
Heat Exchanger Maintenance Prediction - Informations
John Cox replied to Maschermascher's topic in General Seeq Discussions
Hello, I'm going to attach an example Seeq Journal for this use case, which includes a few of the actual formulas used. Why don't you take a look at this and then let me know if there are specific steps that you need more information on? There are additional formulas that I can provide if needed. -
Can I create an alarm history table within SeeQ?
John Cox replied to Nicolas Noiset's topic in General Seeq Discussions
Hello Nicolas, This is a great application for Seeq. The details of what you will actually need to do will depend on how the alarm data comes into Seeq (as signals or as capsules) and how you want to arrange/display it. To get you started, here are 2 resources that may help specifically with your dashboard of alarms use case. I encourage you to review these resources and then reach out to a Seeq Analytics Engineer in Office Hours (https://info.seeq.com/office-hours) for further help as needed. 1. A Seeq University video on creating tables for event based data (similar to alarms). 2. A Seeq.org post on turning a signal's values into table rows. -
Selecting Capsules within a Condition
John Cox replied to John Cox's topic in General Seeq Discussions
Depending on the details of the use case, adding properties from one condition to another (during a join or some other combination of the 2 conditions) can be achieved with at least 2 approaches:

The mergeProperties() Formula function, introduced in Seeq R56, can be used. You can look up example uses of mergeProperties() in Seeq Formula's documentation. Also note that mergeProperties() has functionality for quantitatively merging numerical property values (max, average, etc.) if the 2 conditions have conflicting values for the same property.

The setProperty() Formula function can be used to add property values from one condition to another. It may be necessary to use getProperty() in conjunction with setProperty(): please see the highlighted sections in the screenshot below, which is included here only to show an example of using setProperty() inside of a transform(). Another alternative with setProperty(): you may also consider converting a property value (to be retained from the second condition) to a signal with $condition.toSignal(propertyName), and then using the signal result with setProperty(). You can see examples of adding a signal aggregation as a capsule property in Seeq's Formula documentation for setProperty().

I think one of these approaches should help with your use case.
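As a brief hedged illustration of the toSignal()/setProperty() route (condition and property names are hypothetical, and the exact setProperty() overloads should be confirmed in the Formula documentation):

// Carry the 'BatchID' property from $conditionB onto $conditionA's capsules
$batchIdSignal = $conditionB.toSignal('BatchID')
$conditionA.setProperty('BatchID', $batchIdSignal, startValue())
-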
Summary/TLDR

Users commonly want to duplicate Seeq created items (Value Search, Formula, etc.) for different purposes, such as testing the effect of different calculation parameters, expanding calculations to similar areas/equipment, collaboration, etc. Guidance is summarized below to prevent unintended changes.

Duplicating Seeq created items on a worksheet: Creates new/independent items that can be modified without affecting the original.

Duplicating worksheets within a Workbench Analysis: Duplicating a worksheet simply copies the worksheet but doesn't create new/independent items. A change to a Seeq created item on one sheet modifies the same item everywhere it appears, on all other worksheets.

Duplicating an entire Workbench Analysis: Creates new/independent items in the duplicated Workbench Analysis. You can modify them without affecting the corresponding items in the original Workbench Analysis.

Details

Each worksheet in an analysis can be used to help tell the story of how you got to your conclusions or give a different view into a related part of your process. Worksheets can be added/renamed/duplicated, and entire analyses can also be duplicated: Worksheet and Document Organization

Confusion sometimes arises for Seeq users related to editing existing calculation items (Value Searches, Formulas, etc.) that appear on multiple worksheets within the same analysis. Often a user will duplicate a worksheet within an analysis and not realize that editing existing items on the new worksheet also changes the same items everywhere else they are used within the analysis. They assume that each individual worksheet is independent of the others, but this is not the case. The intent of this post is to eliminate this confusion and to prevent users from making unintended changes to calculations.

Working with the Same Item on a Duplicated Worksheet

When duplicating worksheets, remember that everything within a single Workbench Analysis, no matter what worksheet it is on, is "scoped" to the entire analysis. Duplicating a worksheet simply copies the worksheet but doesn't create new/independent items. A change to an item on one sheet modifies it everywhere it appears (on all other worksheets). For some use cases, duplicating a worksheet is a quick way to expand the calculations further or to create alternate visualizations, and the user wants to continue working with the original items. In other situations, worksheet duplication may be a first step in creating new versions of existing items. To avoid modifying an original item on a duplicated worksheet, from the Item Properties (Details Pane "i" symbol) for the calculated signal/condition of interest, click to DUPLICATE the item. You can edit the duplicated version without affecting the original. Duplicating worksheets is often useful when you are doing multiple calculation steps on different worksheets, when you want trends on one worksheet and tables or other visualizations on another, when doing asset swapping and creating a worksheet for each unique asset, etc.

Working with Items in a Duplicated Workbench Analysis

If you duplicate the entire Workbench Analysis (for example, from the Seeq start page, see screenshot below), new/independent items are created in the duplicated Workbench Analysis. You can modify the items in the duplicated Workbench Analysis without affecting the original (corresponding) items in the original Workbench Analysis.
This is often a good approach when you have created a lengthy set of calculations and you would like to modify them or apply them in a similar way for another piece of equipment, processing line, etc., and an asset group approach isn't applicable. There is one exception to this: Seeq created items that have been made global. Global items can be searched for and accessed outside of an individual Workbench Analysis. Editing a global item in a duplicated analysis will change it everywhere else it appears.

There are many considerations for best practices when testing new parameter values and modifications for existing calculations. Keep in mind the differences between duplicating worksheets and duplicating entire analyses, and of course consider the potential use of asset groups when needing to scale similar calculations across many assets, pieces of equipment, process phases, etc. There are in-depth posts here with further information on asset groups:

Asset Groups 101 - Part 1
Asset Groups 101 - Part 2
-
Accurate data alignment can be the most challenging part of creating process calculations, finding correlations, and developing prediction models. It is an often overlooked data cleansing step that may be even more critical than smoothing noisy signals and removing data outliers. The need for data alignment stems from time delays present in the industrial process (related to physical transport and equipment/piping volumes, as well as lab measured data reported well after process operation completes). Another common data alignment need centers on comparing process metrics before/after process events of interest, which again involves time delays (or time shifts).

There are at least four categories of data alignment use cases prevalent across the process manufacturing industries:

Known time delay - The time delay resulting from the transportation of material at a given speed or velocity (across some distance in the process operation) can mask strong correlations between upstream/downstream signals or other signals separated in time, resulting in poor modeling results if not accounted for by a data alignment step.

Variable time delay - Here the time delay is variable and a function of production speed, storage volumes, etc., but can be calculated based on measured/known process parameters.

Before/after comparisons - Related to process experimentation and optimization, there is a recurring need to calculate and compare process metrics before and after some identified process event, such as a process feed being turned on, a unit restart, equipment maintenance, a process parameter or setpoint being adjusted, etc.

Process and analytical (LIMS) data - When trying to correlate process signals with lab-measured analytical results, work is often needed to align the process signals and subsequent analytical results. In some cases, the alignment can be based on sequential, consistently reported lab data values. In other cases, more sophisticated logic, such as connecting process operation and lab results by matching id properties, is needed.

Example methods for addressing each of these cases, using Seeq tools and Formula, are included below.

Use Case Category #1: Known Time Delay

In this specific example, the goal is to correlate a process signal (temperature) with a later reported lab result, but this use case also occurs frequently with continuous upstream (earlier in the process) and downstream (later in the process) signals, and the same time shifting methodology applies. It is known that the lab result is consistently reported 2 hours after the process operation with which it would most correlate. Therefore, the "Process Temperature" signal needs to be shifted 2 hours to the right in time and then sampled for each unique "Lab Measured Analytical Result" value.

Analysis Steps

1. We shift the raw "Process Temperature" 2 hours to the right using Seeq Formula's move() function to create "Temperature Shifted 2 Hours to Right":

$Temperature.move(2h)

2. We use Seeq Formula's resample() function to pick off values of the shifted temperature at the exact timestamp that new "Lab Measured Analytical Result" values are reported. The resample() function gives the ability to sample one signal based on the timestamps of the data values from another signal. The result is 1 temperature sample (green, Lane 1) per lab sample (Lane 2).

$TemperatureShifted.resample($LabDataValues)

3. The value of the data alignment is obvious by comparing two XY plots of the lab result versus the original, raw temperature and the aligned temperature.
The correlation between the process temperature and the lab result, hidden in the XY plot on the left, is obvious in the XY plot on the right, which shows the aligned temperature:

Use Case Category #2: Variable Time Delay

This use case is similar to the previous one: correlating two measurements separated by a time delay, but with an additional complication. Here, the time delay is variable and is based on the physics of a material transport distance: changes in an upstream pressure physically take many hours to work their way through equipment/piping and influence a downstream analyzer signal. The physical time delay varies based on Production Line Speed.

Analysis Steps

1. The transport distance is known (500 feet) and set up as a value using Seeq Formula. The user calculates the Time Delay between the 2 signals as Transport Distance / Production Line Speed. The resulting time delay fluctuates between 7 and 13 hours.

Transport Distance Formula: 500.toSignal().setUnits('ft')
Calculated Time Delay Formula: $TransportDistance/$LineSpeed

2. Using Seeq Formula's move() function, the Upstream Pressure can then be shifted (by the variable time delay calculated in Step 1) so that the effect of pressure changes upstream aligns in time with the resulting impact on the Downstream Analyzer.

// Limit the maximum time delay to 20 hours
$UpstreamPressure.move($CalculatedTimeDelay,20h)

Changes in the upstream pressure (yellow signal) are now correctly aligned in time with their effect on the downstream analyzer (blue signal):

Note that in this use case, we did not need to use the resample() function as we did with the discretely measured lab data in Use Case Category #1, as here we are working with continuously measured signals.

Use Case Category #3: Before/After Comparisons

An extremely common use case is to compare process metrics before/after some process event. The process event could be anything of interest: a process unit restart, equipment or control strategy modifications, periodic maintenance work, process experimentation, etc. The analysis begins by identifying the process events and then calculating the metrics over appropriate time ranges before and after each event. The alignment step typically involves creating a common time basis spanning before/after operation and moving the before/after calculated metrics to the same point in time for comparison.

Analysis Steps

1. For this example, we begin with a Pressure signal that ramps up over time as equipment run life increases and the equipment fouls. The equipment is shut down for Maintenance periodically, and the process then restarts at a much lower pressure value. We use the Pressure signal and Seeq Formula to identify Equipment Maintenance periods based on the running delta of the pressure signal being < 0:

$Pressure.runningDelta().isLessThan(0)

2. Use Formula's grow() function to expand Equipment Maintenance to create a Before and After Maintenance time period which includes the desired time period (for example, 4 hrs) for calculating before/after pressures. This also gives a common time basis for a later alignment step.

$EquipmentMaintenance.grow(4hr)

Results are shown here in Chain View for the Before and After Maintenance capsules, which gives a nice visual of the pressure 4 hours before and after maintenance.

3. Average the Pressure signal over the first 2 hours of the Before and After Maintenance capsules.
// Do a 2 hour avg pressure over the FIRST 2 hours
// of the Before and After Maintenance capsules
$Pressure.aggregate(average(),$BeforeAfterMaintenance.afterStart(2hr),middleKey())

4. Use Signal from Condition to align the average pressure before maintenance (Step 3) at the middle of the Before and After Maintenance capsules. Signal from Condition is commonly used to find a value within a condition and move it to a specific location in time. In this case, we can use the Maximum statistic to find the correct "Pressure Avg (Before)" value, as there is only 1 value for each maintenance capsule. Using Signal from Condition in this way is a key technique for aligning before/after process values/metrics. We can see that the "Pressure Avg (Before)" has now been moved to the middle of the "Before and After Maintenance" capsules:

5. Repeat steps 3 and 4 to compute/align the average pressure after maintenance, and then move/align it to the middle of the Before and After Maintenance capsules.

6. With the before/after pressure values aligned in time, we now use Formula to calculate the percent pressure reduction resulting from maintenance.

// Calculate the % change relative to
// the Pressure Avg (Before)
(($BeforeAvg-$AfterAvg)/$BeforeAvg*100).setUnits('%')

Use Case Category #4: Process and Analytical (LIMS) Data

In this example, we will align calculated process metric values from each batch (e.g., max temperature, total Chemical A flow added) with later reported lab results (often referred to as LIMS - Laboratory Information Management System - data). This is a very common analytics need. Here, the "Product Impurity" lab results are reported at varying time intervals following batch completion, so a constant time delay alignment approach isn't feasible. Zooming in, we can confirm that the lab results are reported (step to a new value) some time after the Process Batches complete. The reporting time is variable.

Analysis Steps

1. The maximum temperature and totalized chemical added over each Process Batch are thought to correlate with (influence) the amount of final Product Impurity. So, we need to calculate the max T and total Chemical A added per Process Batches capsule. Use Signal from Condition to do the calculations and place results at the end of each Process Batch.

2. We now need to work on alignment. To provide a basis for joining the process results to the corresponding lab results, we need to create "Product Impurity Results Capsules" for every value change in the reported Product Impurity. We use the Formula toCondition() function for this and store the numeric lab result in a capsule property named PctImpurity.

$ProductImpurityData.toCondition('PctImpurity')

Note: in the screenshot below, the highlighted Product Impurity Results capsule contains the lab result for the Process Batches capsule at the top left of the screen.

3. Now, use Composite Condition to "join" the Process Batches condition to the Product Impurity Results Capsules condition, so we have a time period that contains the process and lab results we want to align and correlate. We check the Inclusive options to create capsules from the start of Process Batches to the end of Product Impurity Results Capsules:

The resulting yellow "joined" capsules in the screenshot below now span the time period of process operation and the eventual lab measured impurity result, for each individual batch. Investigating a zoomed time period, we can see the Process Batches start joined to the end of the resulting Product Impurity Results Capsules.
4. With a common capsule established, we align Max T, Total Chemical A, and Product Impurity at the middle of the "Process Batches to Impurity Result (joined)" capsules. For Max T and Total Chemical A, we use the 'Value at Start', and for Product Impurity, we use the 'Value at End'. Using Signal from Condition in this way is a key technique used in aligning process and lab data values. The results for a short time period look like this, and the "aligned" values can be used for further calculations or for creating a prediction model to predict Product Impurity based on Max T and Total Chemical A:

Joining Process and Lab Values Based on a Matching Capsule Property

With that example finished, let's look at another common (and more complicated) scenario in this use case category: when analytical results can be reported inconsistently or out of sequence with process batches, a more advanced condition join using Seeq Formula, based on a matching id or other capsule property value, may be needed. In this example, a numeric batch id is a capsule property shared by the Process Batches and Lab Analytical Batches conditions: see the 437 and 438 capsule property values shown as labels on the blue and green capsules in the screenshot below. Using this batch id linkage, matching id Process/Lab capsules can be joined with a single line formula, and capsule properties from the separate process and lab conditions (e.g., see the 0.99 and 1.32 lab product impurity results shown on the blue capsules) can be preserved, all courtesy of enhanced "capsule matching by property" functionality introduced in Seeq R56 (link).

Starting with the raw data and the Process Batches and Lab Analytical Batches conditions and their capsule properties, we have already calculated the "Average Process Ratio for Batch" using Signal from Condition and located it at the start of the Process Batches capsules. We illustrate only the critical steps in this use case under Analysis Steps below:

Analysis Steps - Joining Process and Lab Data on a Matching Capsule Property

1. We join the Process Batches (capsules) to Lab Analytical Batches, based on their BatchID/LabID property matching. We use the join() Formula function. The batch id numeric is stored as the "LabID" capsule property in the Lab Batches condition, so, prior to doing the join, we must rename it to have the same property name as the "BatchID" capsule property on the Process Batches condition. Note: the capsule property matching and keepProperties() are enhanced functionality options introduced in Seeq R56.

// Join process to lab batches based on matching ID.
// We need to rename the LabID capsule property to BatchID for the
// lab capsules.
// Use keepProperties() so that the resulting condition has
// the Result capsule property (the lab measured product impurity value).
$ProcessBatches.join($LabBatches.renameProperty('LabID','BatchID'),
4d, true, 'BatchID', keepProperties())

Inspecting the capsules and the batch id and product impurity capsule property values at the top of the trend, we see the "Process Batches Joined" capsules are linked based on matching ID, and the product impurity "Result" capsule property (the 0.99 and 1.32 values) is retained and now part of a capsule that starts at the beginning of each Process Batches capsule:
2. We now translate the "Process Batches Joined to Lab Batches on ID Match" Result capsule property into a "Lab Measured Product Impurity Aligned to Batch Start" signal, with values moved to the start of the Process Batches, at the exact location where we have the Avg Process Ratio for Batch.

$JoinedBatches.toSignal('Result',startKey()).toDiscrete()

For this short time range, we can confirm the 0.99 value for the resulting impurity signal aligns correctly to BatchID 437, and the 1.32 value aligns correctly to Batch 438. As a result, we have now connected the lab measured product impurity result to each individual process batch, regardless of lab result timing, and with additional steps have aligned the Average Process Ratio and Lab Measured Product Impurity.
-
Tagged with: align signals, time delay (and 5 more)
-
A common industrial use case is to select the highest or lowest signal value among several similar measurements. One example is identifying the highest temperature in a reactor or distillation column containing many temperature signals. One of many situations where this is useful is in identifying the current "hot spot" location to analyze catalyst deactivation/performance degradation.

When selecting the highest value over time among many signals, Seeq's max() Formula function makes this easy. Likewise, if selecting the lowest value, the min() Formula function can be used. A more challenging use case is to select the 2nd highest, 3rd highest, etc., among a set of signals. There are several approaches to do this using Seeq Formula, and there may be caveats with each one. I will demonstrate one approach below. For our example, we will use a set of 4 temperature signals (T100, T200, T300, T400). Viewing the raw temperature data:

1. We first convert each of the raw temperature signals to step interpolated signals, and then resample the signals based on the sample values of a chosen reference signal that has representative, regular data samples (in this case, T100). This makes the later formulas a little simpler overall and provides slightly cleaner results when signal values cross each other. For the T100 step signal Formula:

Note that the T200 step signal Formula includes a resample based on using 'T100 Step' as a reference signal:

The 'T300 Step' and 'T400 Step' formulas are identical to that for T200 Step, with the raw T signals substituted.

2. We now create the "Highest T Value" signal using the max() function and the step version T signals:

3. To create the '2nd Highest T Value' signal, we use the splice() function to insert 0 values where a given T signal is equal to the 'Highest T Value'. Following this, the max() function can again be used, but this time it will select the 2nd highest value:

4. The process is repeated to find the '3rd Highest T Value', with a very similar formula, but substituting in values of 0 where a given T signal is >= the '2nd Highest T Value':

The result is now checked for a time period where there are several transitions of the T signal ordering:

5. The user may also want to create a signal which identifies the highest value temperature signal NAME at any given point in time, for trending, display in tables, etc. We again make use of the splice() function, to insert the corresponding signal name when that signal is equal to the 'Highest T Value':

Similarly, the '2nd Highest T Sensor' is created, but using the '2nd Highest T Value':

(The '3rd Highest T Sensor' is created similarly.)

We now have correctly identified values and sensor names... highest, 2nd highest, 3rd highest:

This approach (again, one possible approach of several) can be extended to as many signals as needed, can be adapted for finding low values instead of high values, can be used for additional calculations, etc.
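Since the formulas above are shown in screenshots, here is a hedged sketch of the overall pattern (signal names assumed; see the screenshots for the exact step/resample details):

// Step signals resampled onto a common reference (T100)
$t200Step = $t200.toStep().resample($t100Step)
// Highest value among the step signals
$highest = $t100Step.max($t200Step).max($t300Step).max($t400Step)
// 2nd highest: splice in 0 where each signal is the current highest,
// then take the max of the spliced signals
$t100For2nd = $t100Step.splice(0.toSignal(), $t100Step >= $highest)
// ...repeat the splice for T200-T400, then max() the spliced versions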
-
Querying a SEEQ Workbook TABLE from Excel using SEEQ API
John Cox replied to Chase E's topic in General Seeq Discussions
Hi Chase, Teddy has a great suggestion above on using Seeq's OData export. I also wanted to mention another option for your consideration: creating an Add-on tool in Seeq Workbench that has a button the operators click to pull the data. Seeq Add-ons use the Seeq Python Library module (SPy) with Seeq Data Lab. Add-ons can easily pull data into the Data Lab environment and create custom visualizations, calculations, charts, etc. using any Python library functionality, and all of this can be accessed from Seeq Workbench. This may or may not be a way to address your current need, but even if not, it may be a good solution for other applications in the future. There is a Seeq Add-on Gallery here: https://seeq12.github.io/gallery/ The following links provide additional information on how to create your own add-ons or install the open source add-ons: https://seeq.atlassian.net/wiki/spaces/KB/pages/2254536724/Add-ons -
Hi, Currently you can add a capsule property to a condition type table without using the New Metric, by using the Row or Column buttons. But while you do have filtering and sorting capabilities on these capsule properties in the table, this doesn't give you the ability to color code the table cells for that property (a New Metric with thresholds is needed for that). So yes, currently you would need to convert the capsule property to a signal first; a minimal sketch of that conversion is shown below for reference. There is a new feature request in our development system to do exactly what you want, so this may be added in the future.
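The 'MyProperty' name and the choice of step interpolation below are placeholder assumptions to adapt to your data:

$condition.toSignal('MyProperty', startKey()).toStep()

John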
-
How to display active capsules in a table?
John Cox replied to Kspring's topic in General Seeq Discussions
The details and approach will vary depending on exactly where you are starting from, but here is one approach that shows some of the key things you may need. When you say you have 3 capsules active at any given time, I assume you mean 3 separate conditions. Assuming that is the case, you can create a "combined condition" out of all 9 conditions using Seeq Formula:

// Assign meaningful text to a new capsule property
// named TableDescription
$c1 = $condition1.setProperty('TableDescription', 'High Temperature')
$c2 = $condition2.setProperty('TableDescription', 'Low Pressure')
// ...and so forth, through...
$c9 = $condition9.setProperty('TableDescription', 'Low Flow')

// Create a capsule that starts 5 minutes before the current time
$NowCapsule = past().inverse().move(-5min)

// Combine and keep only the currently active capsules (touching "now")
combineWith($c1, $c2, ..., $c9).touches($NowCapsule)

You can then go to Table View and click the Condition table type. Select your "combined condition" in the Details Pane and add a capsule property (TableDescription) using the Column or Row button at the top of the display. You can then set your display time range to include the current time, and you should see the currently active capsules with the TableDescription text that you assigned in the Formula. You will of course want to vary the "-5min" value and the 'TableDescription' values per what is best for you. Your approach may be a little different depending on where you are starting from, but I think that creating capsule properties (the text you want to see in your final table), combining into one condition, and using .touches($NowCapsule) may all be things you will need. Here are some screenshots from an example I put together, where I used "Stage" for the capsule property name:
-
How to get a capsule that spans the display range?
John Cox replied to Siang Lim's topic in General Seeq Discussions
Hello Siang, Currently, simple type scorecard metrics can't be used in Seeq Formula. We have recently added some functionality for using condition type scorecard metrics in Formula, but that will not help you in this use case. Your request to get a display range capsule is a good one and is already on our feature request list for our development team. For now, your best method is to create a Manual Condition, and add a capsule to the Manual Condition that is your current display range, to create a Display Range Condition. You can then use the Display Range Condition with Signal from Condition to calculate a signal representing the average (or other desired statistics) over the display range. That signal can be used in additional Formulas and calculations (a rough Formula sketch is shown below). When you want to change your analysis to a new display range, you will of course need to take one extra step to edit the Manual Condition and update it to match the current display range.
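As a rough Formula-based equivalent of the Manual Condition approach (the dates and the 10d maximum capsule duration are placeholder assumptions you would update each time to match your display range):

$displayRange = condition(10d, capsule('2023-01-01T00:00Z', '2023-01-08T00:00Z'))
$signal.aggregate(average(), $displayRange, durationKey())

John
-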
Unable to see signal that changes infrequently
John Cox replied to Bayden Smith's topic in General Seeq Discussions
Hi Bayden, I would try something like this in Seeq Formula, on the signals that change infrequently:

$signal.validValues().toStep(30d)

In place of the "30d" you would enter the maximum time over which you want data points connected and lines drawn. Given you are working with setpoints, I would recommend converting them to step interpolation using .toStep(). If that doesn't get you to the final result that you want, you can add one further function:

$signal.validValues().toStep(30d).resample(1d)

The resample will add more frequent data values at the period you specify (change the "1d" to whatever works best for you). You do need to be careful that you aren't adding any incorrect or misleading values with this approach, based on your knowledge of the signals. Let us know if this helps. I believe you can get a solution you are happy with.

John
-
Hello Filip, When you want to add data to an already existing signal, you will want to find your original imported file in the Seeq Data tab, and click the edit button (circled in red below): You will then be able to drag a new file for import, and you will notice that you now have Replace and Append options (see an example I did below): You can choose to replace or append. I think this will solve your problem. If it does not, please let me know. Some additional information on adding new data to an existing imported signal can be found here in our Knowledge Base: Knowledge Base: CSV Import Replace and Append John
-
Create a Signal which Is Max or Min of Multiple Signals
John Cox replied to John Cox's topic in General Seeq Discussions
Hi Brian, In more recent versions of Seeq, the max function used in that way (that specific syntax or form) only works with scalars. For your case, try $p53h2.max($p53h3).max($p53h4) That is the form needed with signals. Hope this helps! John -
standard deviation Event weighted Standard Deviation per 10 minutes
John Cox replied to robin's topic in General Seeq Discussions
Hi Robin,

1) To answer your earlier question: to avoid the time weighted standard deviation, all you need to do is change the signal to a discrete signal and then apply the same formula:

$CanWeightSignal.toDiscrete()
  .aggregate(stdDev(),  // std deviation statistic
    periods(10min, 1min).inside($CanProductionCapsules),  // do the calc over a 10 min rolling window every minute,
                                                          // but only for 10 minute windows fully inside CanProduction capsules
    endKey(),  // place the calculation result at the end of the 10 minute window
    0s)        // max interpolation between data points; use 0s if you want a discrete result

2) To answer your most recent question: to convert your current result to a stepped signal rather than the discrete results you now have on the trend, you can use the Formula function .toStep(). Inside the parentheses, you enter the maximum amount of interpolation, that is, the maximum amount of time over which you want lines drawn connecting your calculated results. For example, the function below sets a maximum interpolation between data points of 8 hours:

.toStep(8hr)

Hope this helps!
John
-
Hello Mohammed, The answer depends on exactly what you want to remove from the signal. In some cases, you may want to do a Value Search to find the peaks you want to remove, then use the .remove() function in Seeq Formula to create a new signal with the peaks removed. In other cases, all you may need is a smoothing filter (for example, agileFilter() in Seeq Formula or one of the Signal Smoothing options under the Cleanse Tools), or perhaps removeOutliers() in Seeq Formula. The Seeq Knowledge Base provides a great summary of the many data cleansing tools available. I hope this information is what you are looking for!
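As a minimal sketch of the Value Search plus .remove() route (the threshold of 100 and the signal name are placeholder assumptions; adjust to your data, and note the comparison syntax may vary by Seeq version):

// Condition capturing the peaks to remove (threshold is an assumed example)
$peaks = $signal > 100
// New signal with the samples during those peaks removed
$signal.remove($peaks)

-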
-
I do not have access to your original signal for testing, so my solution may not handle all the unusual characteristics that may occur. But I believe the approach below will help you get going on this calculation.

1. I would first recommend you create a clean weight signal that uses the remove() function to eliminate small decreases in the weight signal due to measurement noise and equipment variation. (This first step makes step 2 work accurately, because in step 2 we will be looking for the weight to go above 2000, 3000, 4000, etc., and we don't want noise variation from, say, 3002 lb to 2999 lb at the next sample to appear to be a step decrease.) Create a clean weight signal using a formula like the one sketched after this post. This should result in a weight signal that only increases during the normal rampups, but still resets to 0 when the next rampup processing begins. Note that the 1hr inside of toStep() is the maximum interpolation between data values; set this based on your preferences.

2. Create the Step Counting Signal by dividing the clean weight (step 1 result) by 1000 and using the floor function to generate an integer result.

3. For an example signal that I tested with, you can see the Step Counting Signal incrementing as each 1000 lb increment is exceeded. Depending on your final analysis goals, further calculations could of course be done to calculate/report the number of steps completed for each of your original purple capsules, or to do other types of statistics.
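The formulas referenced above were screenshots in the original post; a rough sketch of steps 1 and 2 follows (the -10 lb noise threshold and the variable names are assumptions, and the exact remove() criteria will depend on your data):

// Step 1: clean weight - remove samples that are small decreases vs. the prior sample
$smallDrops = ($weight.runningDelta() < 0) and ($weight.runningDelta() > -10)
$weight.remove($smallDrops)
       .toStep(1hr)   // 1hr = maximum interpolation between data values

// Step 2: step counting signal - integer count of completed 1000 lb increments
($cleanWeight / 1000).floor()

-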
-
standard deviation Event weighted Standard Deviation per 10 minutes
John Cox replied to robin's topic in General Seeq Discussions
Hi Robin,

I would use a Seeq formula similar to the one below, with the Can Weight signal and your existing Can Production capsules as inputs. The key functions used are aggregate(), periods(), and inside(). Note that you could also do the std deviation calculation using Signal from Condition, with a condition created in Formula as: periods(10min,1min).inside($CanProductionCapsules).

$CanWeightSignal
  .aggregate(stdDev(),  // std deviation statistic
    periods(10min, 1min).inside($CanProductionCapsules),  // do the calc over a 10 min rolling window every minute,
                                                          // but only for 10 minute windows fully inside CanProduction capsules
    endKey(),  // place the calculation result at the end of the 10 minute window
    0s)        // max interpolation between data points; use 0s if you want a discrete result

Hope this helps!
John