
Leaderboard

Popular Content

Showing content with the highest reputation since 08/08/2018 in all areas

  1. A few weeks ago in Office Hours, a Seeq user asked how to perform an iterative calculation in which the next value of the calculation relies on its previous values. This functionality is currently logged as a feature request. In the meantime, users can utilize Seeq Data Lab and push the calculated result from Seeq Data Lab to Seeq Workbench. Let's check out the example below: There are a total of 4 signals, Signal G, Signal H, Signal J, and Signal Y, added to the Seeq workbench. The aim is to calculate the value of Signal Y over the selected period.

Step 1: Set the start date and end date of the calculation.

#Set the start date and end date of the calculation
startdate = '2023-01-01'
enddate = '2023-01-09'

Step 2: Set the workbook URL and workbook ID.

#Set the workbook URL and workbook ID
workbook_url = 'workbook_url'
workbook_id = 'workbook_id'

Step 3: Retrieve all raw data points for the time interval specified in Step 1 using spy.pull().

#Retrieve all raw data points for the time interval specified in Step 1
data = spy.pull(workbook_url, start = startdate, end = enddate, grid = None)
data

Step 4: Calculate the value of Signal Y (Yi = Gi * Y(i-1) + Hi * Ji).

#Calculate the value of Signal Y (Yi = Gi * Y(i-1) + Hi * Ji)
for n in range(len(data)-1):
    data['Signal Y'][n+1] = data['Signal G'][n+1] * data['Signal Y'][n] + data['Signal H'][n+1] * data['Signal J'][n+1]
data

Step 5: Push the calculated value of Signal Y to the source workbook using spy.push().

#Push the calculated result of Signal Y to the source workbook
spy.push(data = data[['Signal Y']], workbook = workbook_id)
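If the chained indexing in Step 4 triggers a pandas SettingWithCopyWarning (or stops assigning under copy-on-write in newer pandas versions), an equivalent loop using positional .iloc access is a minimal alternative sketch of the same recursion; the column names are assumed to match the worksheet signals above.

#Alternative form of Step 4 using .iloc to avoid chained-assignment issues
y = data.columns.get_loc('Signal Y')
g = data.columns.get_loc('Signal G')
h = data.columns.get_loc('Signal H')
j = data.columns.get_loc('Signal J')
for n in range(len(data) - 1):
    data.iloc[n + 1, y] = data.iloc[n + 1, g] * data.iloc[n, y] + data.iloc[n + 1, h] * data.iloc[n + 1, j]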
    7 points
  2. While Seeq does not officially have a dark mode, Google Chrome has a plug-in that may be a feasible workaround. Here is what an Organizer topic looks like, and here is a Workbench Analysis screenshot. Interested in getting one for your Seeq? Check out the Dark Reader plugin. DISCLAIMER: PLEASE CHECK WITH YOUR LOCAL IT BEFORE INSTALLING ANYTHING FROM THE INTERNET. Here goes the link: https://chrome.google.com/webstore/detail/dark-reader/eimadpbcbfnmbkopoojfekhnkhdbieeh Cheers, Sanman
    7 points
  3. To generate and display curves in Seeq, a formula representing the shape of the curve needs to be created and added to an XY Plot. There are two methods to achieve this.

Method #1 - Use the Plot Curve Add-on
1. Create a .csv file with a tabular representation of the curve. This can be done with a tool such as https://apps.automeris.io/wpd/.
2. Install the Plot Curve Add-on - https://seeq12.github.io/seeq-plot-curve/
3. Add "Flow" and "Head" signals in Workbench and navigate to an XY Plot View. If you don't have a "Head" or "Flow" signal tag, you can use Formula to calculate them and use the output of that calculation as the associated signal.
4. Launch the Plot Curve Add-on via Tools->Add-ons.
5. Load the csv containing the curve data by clicking the "Load" button in the Add-on and navigating to the location of the csv.
6. Identify the Independent Variable (e.g., Flow) and Dependent Variable (e.g., Head) from the csv, as well as the Independent Signal (e.g., Flow Signal Tag) and Output Signal Name (e.g., 1800 RPM Curve).
7. You can also specify the curve polynomial fit that best matches the shape of the curve in the "Plot Curve Variables" section.
8. Repeat steps 6 and 7 for EACH curve.
9. Push the curves to Seeq by clicking "Push to Seeq". The formula(s) for the curve(s) will be added to your Display Pane. You will have the option to push only the active curve, or all the curves configured.
10. In the Workbench XY Plot view, click the f(x) button to add trend line(s) to the plot.
11. Select the Trend Line(s) to add and confirm the associated independent signal (e.g., Flow) is correctly associated. Close the modal.
12. The curve(s) will be added to the XY Plot. Adjust the colors as needed via the Details Pane.

Method #2 - Manually create the curve using Formula
1. Determine the X & Y components of the curve. This can be done with a tool such as https://apps.automeris.io/wpd/.
2. Enter or paste the components in columns A and B in the CurveFitter excel sheet. See screenshot below for details. The CurveFitter file can be found here: CurveFitter.xlsx
3. Once the new Flow and Head data has been pasted into excel, copy the contents from D2 to E9 and paste them into the Seeq Formula tool. See screenshots below for copy/paste details Copy: Paste:
4. Paste the following syntax in the same formula under the coefficients. Be sure that the flow signal has the variable name "$flow".
$f=$flow.remove($flow.isNotBetween($lower,$upper)).setunits('')
$coeff4*$f^4+$coeff3*$f^3+$coeff2*$f^2+$coeff1*$f+$const
Final formula view:
5. Add the line to the XY Plot by selecting the f(x) in the XY Plot toolbar and picking the correct item from the select item dropdown.
6. If adding more than one curve, then click on the item properties "i" of the first curve and click on duplicate. Once in the Formula tool, copy the new coefficients from excel, replacing the old ones, and hit execute. Follow step 5 to add the curve to the plot. Final View:

Note that both methods yield curves expressed as formulas which can be used for calculations like any other signal in Seeq (e.g., calculate curve head vs actual, etc.).
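As a programmatic alternative to the CurveFitter spreadsheet in Method #2 (not part of the original post), the fourth-order coefficients can also be generated with numpy.polyfit from the digitized flow/head pairs; the sample points below are made up, and the printed values would be pasted into the Seeq Formula in place of the spreadsheet output.

import numpy as np

#Digitized (flow, head) points from the curve, e.g. exported from automeris.io (example values only)
flow = np.array([0, 200, 400, 600, 800, 1000])
head = np.array([310, 305, 295, 280, 255, 220])

#Fit head = coeff4*f^4 + coeff3*f^3 + coeff2*f^2 + coeff1*f + const (polyfit returns highest order first)
coeff4, coeff3, coeff2, coeff1, const = np.polyfit(flow, head, 4)

#Print assignments in the same form the CurveFitter sheet produces
print(f"$coeff4 = {coeff4:.6e}")
print(f"$coeff3 = {coeff3:.6e}")
print(f"$coeff2 = {coeff2:.6e}")
print(f"$coeff1 = {coeff1:.6e}")
print(f"$const = {const:.6e}")
print(f"$lower = {flow.min()}")
print(f"$upper = {flow.max()}")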
    5 points
  4. Tulip is one of the leading frontline operations platforms, providing manufacturers with a holistic view of quality, process cycle times, OEE, and more. The Tulip platform provides the ability to create user-friendly apps and dashboards to improve the productivity of your operations without writing any code. Integrating Tulip and Seeq allows Tulip app and dashboard developers to directly include best-in-class time series analytics into their displays. Additionally, Seeq can access a wealth of contextual information through Tulip Tables. Accessing Tulip Table Data in Seeq Tulip table data is an excellent source of contextual information as it often includes information not gathered by other systems. In our example, we will be using a Tulip Table called (Log) Station Activity History. This data allows us to see how long a line process has been running, the number of components targeted for assembly, actually assembled, and the number of defects. The easiest way to bring this into Seeq is as condition data. We will create one condition per station and each column will be a capsule property. This can be achieved with a scheduled notebook:

import requests
import json
import pandas as pd

# This method gets data from a Tulip Table and formats the data frame into a Seeq-friendly structure
def get_data_from_tulip(table_id, debug):
    url = f"https://{TENANT_NAME}.tulip.co/api/v3/tables/{table_id}/records"
    headers = {
        "Authorization": AUTH_TOKEN
    }
    params = {
        "limit": 100,
        "sortOptions": '[{"sortBy": "_createdAt", "sortDir": "asc"}]'
    }
    all_data = []
    data = None
    while True:
        # Used for paginating the requests
        if data:
            last_sequence = data[-1]['_sequenceNumber']
            params['filters'] = json.dumps([{"field": "_sequenceNumber", "functionType": "greaterThan", "arg": last_sequence}])
        # Make the API request
        response = requests.get(url, headers=headers, params=params)
        if debug:
            print(json.dumps(response.json(), indent=4))
        # Check if the request was successful
        if response.status_code == 200:
            # Parse the JSON response
            data = response.json()
            all_data.extend(data)
            if len(data) < 100:
                break  # Exit the loop if condition is met
        else:
            print(f"API request failed with status code: {response.status_code}")
            break
    # Convert JSON data to pandas DataFrame
    df = pd.DataFrame(all_data)
    df = df.rename(columns={'id': '_id'})
    df.columns = df.columns.str.split('_').str[1]
    df = df.drop(columns=['sequenceNumber', 'hour'], errors='ignore')
    df['createdAt'] = pd.to_datetime(df['createdAt'])
    df['updatedAt'] = pd.to_datetime(df['updatedAt'])
    df = df.rename(columns={'createdAt': 'Capsule Start', 'updatedAt': 'Capsule End', 'duration': 'EventDuration'})
    df = df.dropna()
    return df

# This method builds the Seeq metadata for one station's condition
def create_metadata(station_data, station_name):
    print(f"DataFrame for station: {station_name}")
    print("Number of rows:", len(station_data))
    metadata = pd.DataFrame([{
        'Name': station_name,
        'Type': 'Condition',
        'Maximum Duration': '1d',
        'Capsule Property Units': {'status': 'string', 'id': 'string', 'station': 'string', 'duration': 's'}
    }])
    return metadata

# This method splits the dataframe by station. Each station will represent a condition in Seeq.
def create_dataframe_per_station(all_data, debug):
    data_by_station = all_data.groupby('station')
    if debug:
        for station, group in data_by_station:
            print(f"DataFrame for station: {station}")
            print("Number of rows:", len(group))
            display(group)
    return data_by_station

# This method sends the data to Seeq
def send_to_seeq(data, metadata, workbook, quiet):
    spy.push(data=data, metadata=metadata, workbook=workbook, datasource="Tulip Operations", quiet=quiet)

data = get_data_from_tulip(TABLE_NAME, False)
per_station = create_dataframe_per_station(data, False)
for station, group in per_station:
    metadata = create_metadata(group, station)
    send_to_seeq(group, metadata, 'Tulip Integration >> Bearing Failure', False)

The above notebook can be run on a schedule with the following command:

spy.jobs.schedule('every 6 hours')

This will pull the data from the Tulip Table into Seeq to allow for quick analysis. The notebook above will need you to provide a tenant, API key, and table name. It will also be using this REST API method to get the records. Once provided, this data will be pulled into a dataset called Tulip Operations and scoped to a workbook called Tulip Integration. We can now leverage the capsule properties to start isolating interesting periods of time. For example, using the formula $ea.keep('status', isEqualTo('RUNNING')), where $ea is the Endbell Assembly condition from the Tulip Table, we can create a new condition keeping only the capsules where the status is RUNNING. Once a full analysis is created, Seeq content can be displayed in a Tulip App as an iFrame, allowing for the combination of Tulip and Seeq data: Data can be pushed back to Tulip using the create record API. This allows for Tulip Dashboards to contain Seeq Content:
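The push-back code is not included in the post above, but a minimal sketch of writing a Seeq-derived value back to a Tulip Table could look like the following, reusing the TENANT_NAME and AUTH_TOKEN variables from the pull example; the table ID and field names are hypothetical, and the exact record schema should be confirmed against Tulip's create record API documentation.

import requests
import json

# POST a single record to a Tulip Table using the create record API
def push_record_to_tulip(table_id, record_fields):
    url = f"https://{TENANT_NAME}.tulip.co/api/v3/tables/{table_id}/records"
    headers = {"Authorization": AUTH_TOKEN, "Content-Type": "application/json"}
    response = requests.post(url, headers=headers, data=json.dumps(record_fields))
    if response.status_code not in (200, 201):
        print(f"API request failed with status code: {response.status_code}")
    return response

# Hypothetical example: write a summary value calculated in Seeq back to a Tulip dashboard table
push_record_to_tulip('SEEQ_SUMMARY_TABLE_ID', {
    'id': 'endbell-assembly-2023-06-01',   # unique record id (hypothetical)
    'station': 'Endbell Assembly',
    'running_hours': 18.5                  # value computed from the Seeq condition
})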
    5 points
  5. Proportional-integral-derivative (PID) control loops are essential to the automation and control of manufacturing processes and are foundational to the financial gains realized through advanced process control (APC) applications. Because poorly performing PID controllers can negatively impact production capacity, product quality, and energy consumption, implementing controller performance monitoring analytics leads to new data-driven insights and process optimization opportunities. The following sections provide the essential steps for creating basic controller performance monitoring at scale in Seeq. More advanced CLPM solutions can be implemented by expanding the standard framework outlined below with additional metrics, customization features, and visualizations. Data Lab’s Spy library functionality is integral to creating large scale CLPM, but small scale CLPM is possible with no coding, using Asset Groups in Workbench. Key steps in creating a CLPM solution include: Controller asset tree creation Developing performance metric calculations and inserting them in the tree as formulas Calculating advanced metrics via scheduled Data Lab notebooks (if needed) Configuring asset scaled and individual controller visualizations in Seeq Workbench Setting up site performance as well as individual controller monitoring in Seeq Organizer Using these steps and starting from only the controller signals within Seeq, large scale CLPM monitoring can be developed relatively quickly, and a variety of visualizations can be made available to the manufacturing team for monitoring and improving performance. As a quick example of many end result possibilities, this loop health treemap color codes controller performance (green=good, yellow=questionable, red=poor): The key steps in CLPM implementation, summarized above, are detailed below. Note: for use as templates for development of your own CLPM solution, the associated Data Lab notebooks containing the example code (for Steps 1-3) are included as file attachments to this article. The code and formulas described in Steps 1-3 can be adjusted and expanded to customize your CLPM solution as desired. Example CLPM Solution: Detailed Steps STEP 1: Controller asset tree creation An asset tree is the key ingredient which enables scaling of the performance calculations across a large number of controllers. The desired structure is chosen and the pertinent controller tags (typically at a minimum SP, PV, OP and MODE) are mapped into the tree. For this example, we will create a structure with two manufacturing sites and a small number of controllers at each site. In most industrial applications, the number of controllers would be much higher, and additional asset levels (departments, units, equipment, etc.) could of course be included. We use SPy.trees functionality within the Data Lab notebook to create the basic structure: Controller tags for SP, PV, OP, and MODE are identified using SPy.search. Cleansed controller names are extracted and inserted as asset names within the tree: Next, the controller tags (SP, PV, OP, and MODE), identified in the previous Data Lab code section using SPy.search, are mapped into the tree. At the same time, the second key step in creating the CLPM solution, developing basic performance metrics and calculations using Seeq Formula and inserting them into the asset tree, is completed. 
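The tree-building and tag-mapping code appears as screenshots in the original article; purely as a hedged sketch of the same idea (not the article's exact code), the structure could be set up along these lines, where the search query, the '<controller>.SP/.PV/.OP/.MODE' tag naming convention, and the site assignment are assumptions to adapt to your historian:

import pandas as pd
from seeq import spy

#Skeleton tree with the two example sites used in this article
tree = spy.assets.Tree('US Sites', workbook='CLPM Monitoring')
tree.insert(children=['Lake Charles', 'Beech Island'], parent='US Sites')

#Find the controller tags; a '<controller>.SP/.PV/.OP/.MODE' naming convention is assumed
tag_frames = [spy.search({'Name': f'*.{suffix}'}, quiet=True) for suffix in ('SP', 'PV', 'OP', 'MODE')]
tags = pd.concat(tag_frames, ignore_index=True)

#Cleansed controller name, e.g. '7LC155' from '7LC155.PV', becomes the asset name
tags['Controller'] = tags['Name'].str.rsplit('.', n=1).str[0]

for controller, controller_tags in tags.groupby('Controller'):
    tree.insert(children=[controller], parent='Lake Charles')   # site assignment logic omitted
    tree.insert(children=controller_tags, parent=controller)    # maps SP, PV, OP, MODE under the controller asset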
Note that in our example formulas, manual mode is detected when the numerical mode signal equals 0; this formula logic will need to be adjusted based on your mode signal conventions. While this second key step could be done just as easily as a separate code section later, it also works nicely to combine it with the mapping of the controller tag signals: STEP 2: Developing performance metric calculations and inserting them in the tree as formulas Several key points need to be mentioned related to this step in the CLPM process, which was implemented using the Data Lab code section above (see previous screenshot). There are of course many possible performance metrics of varying complexity. A good approach is to start with basic metrics that are easily understood, and to incrementally layer on more complex ones if needed, as the CLPM solution is used and shortcomings are identified. The selection of metrics, parameters, and the extent of customization for individual controllers should be determined by those who understand the process operation, control strategy, process dynamics, and overall CLPM objectives. The asset structure and functionality provided with the Data Lab asset tree creation enables the user to implement the various calculation details that will work best for their objectives. Above, we implemented Hourly Average Absolute Controller Error (as a percentage based on the PV value) and Hourly Percent Span Traveled as basic metrics for quantifying performance. When performance is poor (high variation), the average absolute controller error and percent span traveled will be abnormally high. Large percent span traveled values also lead to increased control valve maintenance costs. We chose to calculate these metrics on an hourly basis, but calculating more or less frequently is easily achieved, by substituting different recurring time period functions in place of the “hours()” function in the formulas for Hourly Average Absolute Controller Error and Hourly Percent Span Traveled. The performance metric calculations are inserted as formula items in the asset tree. This is an important aspect as it allows calculation parameters to be customized as needed on an individual controller basis, using Data Lab code, to give more accurate performance metrics. There are many customization possibilities, for example controller specific PV ranges could be used to normalize the controller error, or loosely tuned level controllers intended to minimize OP movement could be assigned a value of 0 error when the PV is within a specified range of SP. The Hourly Average Absolute Controller Error and Hourly Percent Span Traveled are then aggregated into an Hourly Loop Health Score using a simple weighting calculation to give a single numerical value (0-100) for categorizing overall performance. Higher values represent better performance. Another possible approach is to calculate loop health relative to historical variability indices for time periods of good performance specified by the user. The magnitude of a loop health score comprised of multiple, generalized metrics is never going to generate a perfect measure of performance. As an alternative to using just the score value to flag issues, the loop health score can be monitored for significant decreasing trends to detect performance degradation and report controller issues. 
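Continuing the hedged sketch from Step 1 above, the hourly metrics described here could be inserted as formula items under each controller roughly as follows; the exact formulas, weightings, and the MODE == 0 manual-mode convention are illustrative assumptions to adjust for your controllers:

#Insert the hourly metrics as formula items under each controller asset (illustrative only)
for controller in tags['Controller'].unique():
    tree.insert(name='Hourly Average Absolute Controller Error',
                formula='(($pv - $sp).abs() / $pv * 100).aggregate(average(), hours().intersect($mode != 0), durationKey())',
                formula_parameters={'$pv': 'PV', '$sp': 'SP', '$mode': 'MODE'},
                parent=controller)
    tree.insert(name='Hourly Percent Span Traveled',
                formula='$op.runningDelta().abs().aggregate(sum(), hours().intersect($mode != 0), durationKey())',
                formula_parameters={'$op': 'OP', '$mode': 'MODE'},
                parent=controller)
    tree.insert(name='Hourly Loop Health Score',
                formula='100 - (0.5 * $err + 0.5 * $travel)',
                formula_parameters={'$err': 'Hourly Average Absolute Controller Error',
                                    '$travel': 'Hourly Percent Span Traveled'},
                parent=controller)

#Push the tree, including the metric formulas, to a Workbench Analysis
push_results = tree.push()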
While not part of the loop health score, note in the screenshot above that we create an Hourly Percent Time Manual Mode signal and an associated Excessive Time in Manual Mode condition, as another way to flag performance issues – where operators routinely intervene and adjust the controller OP manually to keep the process operating in desired ranges. Manual mode treemap visualizations can then be easily created for all site controllers. With the asset tree signal mappings and performance metric calculations inserted, the tree is pushed to a Workbench Analysis and the push results are checked: STEP 3: Calculating advanced metrics via scheduled Data Lab notebooks (if needed) Basic performance metrics (using Seeq Formula) may be all that are needed to generate actionable CLPM, and there are advantages to keeping the calculations simple. If more advanced performance metrics are needed, scheduled Data Lab notebooks are a good approach to do the required math calculations, push the results as signals into Seeq, and then map/insert the advanced metrics as items in the existing asset tree. There are many possibilities for advanced metrics (oscillation index, valve stiction, non-linearity measures, etc.), but here as an example, we calculate an oscillation index and associated oscillation period using the asset tree Controller Error signal as input data. The oscillation index is calculated based on the zero crossings of the autocorrelation function. Note: the code below does not account for time periods when the controller is in manual, the process isn’t running, problems with calculating the oscillation index across potential data gaps, etc. – these issues would need to be considered for this and any advanced metric calculation. Initially, the code above would be executed to fill in historical data oscillation metric results for as far back in time as the user chooses, by adjusting the calculation range parameters. Going forward, this code would be run in a recurring fashion as a scheduled notebook, to calculate oscillation metrics as time moves forward and new data becomes available. The final dataframe result from the code above looks as follows: After examining the results above for validity, we push the results as new signals into Seeq Workbench with tag names corresponding to the column names above. Note the new, pushed signals aren’t yet part of the asset tree: There are two additional sections of code that need to be executed after oscillation tag results have been pushed for the first time, and when new controller tags are added into the tree. These code sections update the oscillation tag metadata, adding units of measure and descriptions, and most importantly, map the newly created oscillation tags into the existing CLPM asset tree: STEP 4: Configuring asset scaled and individual controller visualizations in Seeq Workbench The CLPM asset tree is now in place in the Workbench Analysis, with drilldown functionality from the “US Sites” level to the signals, formula-based performance metrics, and Data Lab calculated advanced metrics, all common to each controller: The user can now use the tree to efficiently create monitoring and reporting visualizations in Seeq Workbench. Perhaps they start by setting up raw signal and performance metric trends for an individual controller. 
Here, performance degradation due to the onset of an oscillation in a level controller is clearly seen by a decrease in the loop health score and an oscillation index rising well above 1: There are of course many insights to be gained by asset scaling loop health scores and excessive time in manual mode across all controllers at a site. Next, the user creates a sorted, simple table for average loop health, showing that 7LC155 is the worst performing controller at the Beech Island site over the time range: The user then flags excessive time in manual mode for controller 2FC117 by creating a Lake Charles treemap based on the Excessive Time in Manual Mode condition: A variety of other visualizations can also be created, including controller data for recent runs versus current in chain view, oscillation index and oscillation period tables, a table of derived control loop statistics (see screenshot below for Lake Charles controller 2FC117) that can be manually created within Workbench or added later as items within the asset tree, and many others. Inspecting a trend in Workbench (see screenshot below) for a controller with significant time in manual mode, we of course see Excessive Time in Manual Mode capsules, created whenever the Hourly Percent Time Manual Mode was > 25% for at least 4 hours in a row. More importantly, we can see the effectiveness of including hours().intersect($Mode!=0) in the formulas for Hourly Average Absolute Controller Error and Hourly Percent Span Traveled. When the controller is in manual mode, that data is excluded from the metric calculations, resulting in the gaps shown in the trend. Controller error and OP travel have little meaning when the controller is in manual, so excluding data is necessary to keep the metrics accurate. It would also be very easy to modify the formulas to only calculate metrics for hourly time periods where the controller was in auto or cascade for the entire hour (using Composite Condition and finding hourly capsules that are “outside” manual mode capsules). The ability to accurately contextualize the metric calculations, to the time periods where they can be validly calculated, is a key feature in Seeq for doing effective CLPM implementations. Please also refer to the “Reminders and Other Considerations” section below for more advanced ideas on how to identify time periods for metric calculations. STEP 5: Setting up site performance as well as individual controller monitoring in Seeq Organizer As the final key step, the visualizations created in Workbench are inserted into Seeq Organizer to create a cohesive, auto-updating CLPM report with site overviews as well as individual controller visuals. With auto-updating date ranges applied, the CLPM reports can be “review ready” for recurring meetings. Asset selection functionality enables investigative workflows: poorly performing controllers are easily identified using the “Site CLPM” worksheet, and then the operating data and metrics for those specific controllers can be quickly investigated via asset selection in the site’s “Individual Controllers” worksheet – further details are described below. An example “Site CLPM” Organizer worksheet (see screenshot below) begins with a loop health performance ranking for each site, highlighting the worst performing controllers at the top of the table and therefore enabling the manufacturing team to focus attention where needed; if monitoring hundreds of controllers, the team could filter the table to the top 10 or 20 worst performing controllers. 
The visualizations also include a treemap for controllers that are often switched to manual mode – the team can talk to operators on each crew to determine why the controllers are not trusted in auto or cascade mode, and then generate action items to resolve. Finally, oscillating controllers are flagged in red in the sorted Oscillation Metrics tables, with the oscillation period values also sorted – short oscillation periods may prematurely wear out equipment and valves due to high frequency cycling; long oscillation periods are more likely to negatively impact product quality, production rate, and energy consumption. Controllers often oscillate due to root causes such as tuning and valve stiction and this variability can often be eliminated once an oscillating controller has been identified. The oscillation period table can also be perused for controllers with similar periods, which may be evidence of an oscillation common to multiple controllers which is generating widespread process variation. An example “Individual Controllers” Organizer worksheet is shown below, where detailed operating trends and performance metrics can be viewed for changes, patterns, etc., and chain view can be used to compare controller behavior for the current production run versus recent production runs. Other controllers can be quickly investigated using the asset selection dropdown, and the heading labels (e.g., Beech Island >> 7LC155) change dynamically depending on which controller asset is selected. For example, the Beech Island 7LC155 controller which was identified as the worst performing controller in the “Site CLPM” view above, can be quickly investigated in the view below, where it is evident that the controller is oscillating regularly and the problem has been ongoing, as shown by the comparison of the current production run to the previous two runs: Reminders and Other Considerations As evident with the key steps outlined above, a basic CLPM solution can be rapidly implemented in Seeq. While Seeq’s asset and contextualization features are ideal for efficiently creating CLPM, there are many details which go into developing and maintaining actionable CLPM dashboards for your process operation. A list of reminders and considerations is given below. 1. For accurate and actionable results, it is vital to only calculate performance metrics when it makes sense to do so, which typically means when the process is running at or near normal production rates. For example, a controller in manual during startup may be expected and part of the normal procedure. And of course, calculating average absolute controller error when the process is in an unusual state will likely lead to false indications of poor performance. Seeq is designed to enable finding those very specific time periods when the calculations should be performed. In the CLPM approach outlined in this article, we used time periods when the controller was not in manual by including hours().intersect($Mode!=0) in the formulas for Hourly Average Absolute Controller Error and Hourly Percent Span Traveled. When the controller is in manual mode, that data is excluded from the metric calculations. But of course, a controller might be in auto or cascade mode when the process is down, and there could be many other scenarios where only testing for manual mode isn’t enough. 
In the CLPM approach outlined above, we intentionally kept things simple by just calculating metrics when the controller was not in manual mode, but for real CLPM implementations you will need to use a more advanced method. Here are a few examples of finding “process running” related time periods using simple as well as more advanced ways. Similar approaches can be used with your process signals, in combination with value searches on controller mode, for excluding data from CLPM calculations: A simple Process Running condition can be created with a Value Search for when Process Feed Flow is > 1 for at least 2 hours. A 12 Hours after Process Running condition can be created with a formula based on the Process Running Condition: $ProcessRunning.afterStart(12hr) A Process Running (> 12 hours after first running) condition can then be created from the first two conditions with the formula: $ProcessRunning.subtract($TwelveHrsAfterRunning) Identifying time periods based on the value and variation of the production rate is also a possibility as shown in this Formula: The conditions described above are shown in the trends below: 2. As previously mentioned, including metric formulas and calculations as items within the asset tree enables customization for individual controllers as needed, when controllers need unique weightings, or when unique values such as PV ranges are part of the calculations. It may also be beneficial to create a “DoCalcsFlag” signal (0 or 1 value) as an item under each controller asset and use that as the criteria to exclude data from metric calculations. This would allow customization of “process is running normally and controller is not in manual” on an individual controller basis, with the result common for each controller represented as the “DoCalcsFlag” value. 3. In the CLPM approach outlined above, we used SPy.trees in Data Lab to create the asset tree. This is the most efficient method for creating large scale CLPM. You can also efficiently create the basic tree (containing the raw signal mappings) from a CSV file. For small trees (<= 20 controllers), it is feasible to interactively create the CLPM asset tree (including basic metric calculation formulas) directly in Workbench using Asset Groups. The Asset Groups approach requires no Python coding and can be quite useful for a CLPM proof of concept, perhaps focused on a single unit at the manufacturing site. For more details on Asset Groups: https://www.seeq.org/index.php?/forums/topic/1333-asset-groups-101-part-1/). 4. In our basic CLPM approach, we scoped the CLPM asset tree and calculations to a single Workbench Analysis. This is often the best way to start for testing, creating a proof of concept, getting feedback from users, etc. You can always decide later to make the CLPM tree and calculations global, using the SPy.push option for workbook=None. 5. For long-term maintenance of the CLPM tree, you may want to consider developing an add-on for adding new controllers into the tree, or for removing deleted controllers from the tree. The add-on interface could also prompt the user for any needed customization parameters (e.g., PV ranges, health score weightings) and could use SPy.trees insert and remove functionality for modifying the tree elements. 6. 
When evidence of more widespread variation is found (more than just variability in a single controller), and the root cause is not easily identified, CLPM findings can be used to generate a list of controllers (and perhaps measured process variables in close proximity) that are then fed as a dataset to Seeq’s Process Health Solution for advanced diagnostics. 7. For complex performance or diagnostic metrics (for example, stiction detection using PV and OP patterns), high quality data may be needed to generate accurate results. Therefore, some metric calculations may not be feasible depending on the sample frequency and compression levels inherent with archived and compressed historian data. The only viable options might be using raw data read directly from the distributed control system (DCS), or specifying high frequency scan rates and turning off compression for controller tags such as SP, PV, and OP in the historian. Another issue to be aware of is that some metric calculations will require evenly spaced data and therefore need interpolation (resampling) of historian data. Resampling should be carefully considered as it can be problematic in terms of result accuracy, depending on the nature of the calculation and the signal variability. 8. The purpose of this article was to show how to set up basic CLPM in Seeq but note that many types of process calculations to monitor “asset health” metrics could be created using a similar framework. Summary While there are of course many details and customizations to consider for generating actionable controller performance metrics for your manufacturing site, the basic CLPM approach above illustrates a general framework for getting started with controller performance monitoring in Seeq. The outlined approach is also widely applicable for health monitoring of other types of assets. Asset groups/trees are key to scaling performance calculations across all controllers, and Seeq Data Lab can be used as needed for implementing more complex metrics such as oscillation index, stiction detection, and others. Finally, Seeq Workbench tools and add-on applications like Seeq’s Process Health solution can be used for diagnosing the root cause of performance issues automatically identified via CLPM monitoring. CLPM Asset Tree Creation.ipynb Advanced CLPM Metric Calculations.ipynb
    5 points
  6. The SPy Documentation for spy.assets includes an example of specifying metrics as attributes in asset classes. It is also possible to push scorecard metrics using the spy.push functionality by defining the appropriate metadata. An example of this process is given in the code snippets below:

#import relevant libraries
import pandas as pd
from seeq import spy

Log in to the SPy module using spy.login if running locally, or skip this step if running in Seeq Data Lab.

#Search for data that will be used to create the scorecard. This example will search the Example asset tree to find tags in Area A.
search_result = spy.search({'Path': 'Example >> Cooling Tower 1 >> Area A'})

The next code segment creates and pushes a signal that will be used as a threshold limit in the scorecard metric. This can be skipped if threshold signals will not be used in the final metric.

#Define data frame for low limit threshold signal.
my_lo_signal = {
    'Type': 'Signal',
    'Name': 'Lo Signal',
    'Formula': '$signal * 50',
    'Formula Parameters': {'$signal': search_result[search_result['Name'] == 'Optimizer']['ID'].iloc[0]}
}

#Push data frame for low limit threshold signal.
lo_push_result = spy.push(metadata=pd.DataFrame([my_lo_signal]), workbook='Example Scorecard')

Finally, create and push the scorecard metric. This example metric measures the average temperature and applies a static high limit threshold of 90 and a moving low limit threshold using the signal defined above.

#Define data frame for scorecard metric.
my_metric_input = {
    'Type': 'Metric',
    'Name': 'My Metric',
    'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
    'Statistic': 'Average',
    'Thresholds': {
        'Lo': {'ID': lo_push_result['ID'].iloc[0]},
        'Hi': 90
    }
}

#Push data frame for scorecard metric.
spy.push(metadata=pd.DataFrame([my_metric_input]), workbook='Example Scorecard')

The final result can be seen in the created workbook.
    5 points
  7. A small team of us (with help from Seeq team members) built a short script to extract signal names from our legacy Excel workbooks so that we could push them to the Seeq workbench. Perhaps, like us, you are involved in migrating workbooks for monitoring/reporting over to Seeq and could do with a boost to get started, so you can get to the real work of setting up the Seeq workbench. The attached script will extract the signal names (assuming you can craft your own regex search filter) from each Excel worksheet and then push them to the workbench with labeling for organization. Hopefully it's a help to some 🙂 signal_transfer_excel2seeq_rev1.ipynb
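The attached notebook is the complete version; as a rough, hedged illustration of the approach (the file name, regex, and workbook names below are made up, and the exact spy.push behavior for placing existing signals on a worksheet should be checked against the SPy documentation for your version), the core idea looks something like this:

import re
import pandas as pd
from seeq import spy

#Read every worksheet of the legacy workbook as raw cell text
sheets = pd.read_excel('legacy_monitoring_workbook.xlsx', sheet_name=None, header=None, dtype=str)

#Craft a regex that matches your site's tag naming convention (example pattern only)
tag_pattern = re.compile(r'\b[A-Z0-9]{2,}\.(?:PV|SP|OP)\b')

rows = []
for sheet_name, frame in sheets.items():
    text = ' '.join(frame.fillna('').astype(str).values.ravel())
    for tag in sorted(set(tag_pattern.findall(text))):
        rows.append({'Sheet': sheet_name, 'Tag': tag})
found = pd.DataFrame(rows)

#Look the extracted names up in Seeq, then push them to a labeled worksheet per legacy sheet
for sheet_name, group in found.groupby('Sheet'):
    items = spy.search(pd.DataFrame({'Name': group['Tag']}), quiet=True)
    spy.push(metadata=items, workbook='Excel Migration', worksheet=sheet_name)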
    5 points
  8. Hi Mattheus, You can do this by using the move (v49+) or delay (<=v48) function. For example, your equation would be: ($signal.move(5s)+$signal.move(10s)+$signal.move(60s))/3
    5 points
  9. When addressing a business problem with analytics, we should always start by asking ourselves 4 key questions: Why are we talking about this: what is the business problem we are trying to address, and what value will solving this problem generate? What data do I have available to help with solving this problem? How can I build an effective analysis to identify the root of my problem (both in the past, and in future)? How will I visualize the outputs to ensure proactive action to prevent the problem from manifesting? This is where you extract the value. With that in mind, please read below how we approach the above 4 questions while working in Seeq to deal with heat exchanger performance issues. What is the business problem? Issues with heat exchanger performance can lead to downstream operational issues which may lead to lost production and revenue. To effectively monitor the exchanger, a case-specific approach is required depending on the performance driver: Fouling in the exchanger is limiting heat transfer, requiring further heating/cooling downstream Fouling in the exchanger is limiting system hydraulics, causing flow restrictions or other concerns Equipment integrity: identifying leaks inside the exchanger What Data do we have available? Process Sensors – flow rates, temperatures, pressures, control valve positions Design Data – drawings, datasheets Maintenance Data – previous repairs or cleaning, mean-time between cleanings How can we tackle the business problem with the available data? There are many ways to monitor a heat exchanger's performance, and the selection of the appropriate indicator depends on a) the main driver for monitoring and b) the available data. The decision tree below is merely meant to guide what indicators can be applied based on your dataset. Generally speaking, the more data available, the more robust an analysis you can create (i.e., first-principles-based calculations). However, in the real world, we are often working with sparse datasets, and therefore may need to rely on data-based approaches to identify subtle trends which indicate changes in performance over time. Implementing each of the indicators listed above follows a similar process in Seeq Workbench, as outlined in the steps below. In this example, we focus on a data-based approach (middle category above). For an example of a first-principles based approach, check out this Seeq University video. Step 1 - Gather Data In a new Workbench, search in the Data Tab for relevant process signals Use Formula to add scalars or use the .toSignal() function to convert supplemental data such as boundary limits or design values Use Formula, Value Search or Custom Condition to enter maintenance period(s) and heat exchanger cycle(s) conditions (if these cannot be imported from a datasource) Step 2 - Identify Periods of Interest Use Value Search, Custom Condition, Composite Condition or Formula to identify downtime periods, periods where the exchanger is bypassed, or periods of bad data which should be ignored in the analysis Step 3 - Cleanse Data Use Formula to remove periods of bad data or downtime from the process signals, using functions such as $signal.remove($condition) or $signal.removeOutliers() Use Formula to smooth data as needed, using functions such as $signal.agileFilter() or the Low Pass Filter tool Step 4 - Quantify Use Formula to calculate any required equations In this example, no calculations are required.
Step 5 - Model & Predict Use Prediction and choose a process signal to be the Target Variable, and use other available process signals as Input Variables; choose a Training Period when it is known the exchanger is in good condition Using Boundaries: establish an upper and lower boundary signal based on the predicted (model) signal from previous step (e.g. +/-5% of the modeled signal represents the boundaries) Step 6 - Monitor Use Deviation Search or Value Search to find periods where the target signal exceeds a boundary(ies) The deviation capsules created represent areas where heat exchanger performance is not as expected Aggregate the Total Duration or Percent Duration statistic using Scorecard or Signal From Condition to assess deteriorating exchanger health over time How can we visualize the outputs to ensure proactive action in future? Step 7 - Publish Once the analysis has been built in a Seeq Workbench, it can be published in a monitoring dashboard in Seeq Organizer as seen in the examples below. This dashboard can then be shared among colleagues in the organization, with the ability to monitor the exchanger, and log alerts and take action as necessary as time progresses - this last step is key to implementing a sustainable workflow to ensure full value is extracted from solving your business problem.
    5 points
  10. Contextual data is often brought into Seeq to add more information to time series data. This data tends to be brought in as a condition, with the capsule properties of this condition containing different pieces of information. In some cases, a particular capsule property may not contain just one piece of information; it may contain different pieces that are separated based on some logic or code. Rather than having users visually parse the code to extract the segments of interest, Seeq can be used to extract the substring continuously. The code below extracts a substring based on its location in the property. This code is based on incrementing from left to right, starting at the beginning of the string. Changing the inputs will extract a substring from different positions in the property selected.

//Inputs Section (Start and end assume reading left to right)
$condition = $hex_maint //Recommend to filter condition to only include correct property values
$property_to_capture = 'Reason Code'
$start_position = 1 //Incrementing starts from 1
$number_of_characters = 2 //Including the start

//Code Section
$property_signal = $condition.toSignal($property_to_capture).toStep(2wk) //Change duration for interpolation
$start_position_regex = ($start_position - 1).toString() //Regular expression indexes from 0
$number_of_characters_regex = ($number_of_characters - 1).toString()
$property_signal.replace('/.{'+$start_position_regex+'}(?<Hold>.{'+$number_of_characters_regex+'}.).*/','${Hold}')

This alternative version is based on incrementing right to left, starting at the end of the string.

//Inputs Section (Start and end assume reading left to right)
$condition = $hex_maint //Recommend to filter condition to only include correct property values
$property_to_capture = 'Reason Code'
$end_position = 1 //Relative to end, incremented from 1
$number_of_characters = 4 //Including the end character

//Code Section
$property_signal = $condition.toSignal($property_to_capture).toStep(2wk) //Change duration for interpolation
$end_position_regex = ($end_position).toString()
$number_of_characters_regex = ($number_of_characters - 1).toString()
$property_signal.replace('/.*(?<Hold>.{'+$number_of_characters_regex+'}.{'+$end_position_regex+'})$/','${Hold}')

Note the output of these formulas is a string. In the case that a numeric value is wanted, append .toNumber() after '${Hold}'). Below is an example of the results. With this substring parsed, all of Seeq's analytical tools can be further leveraged. Some examples are developing histograms based on the values of the substring and making conditions to highlight whenever a particular value in the substring is occurring.
    5 points
  11. To better understand their process, users often want to compare time-series signals in a dimension other than time. For example, seeing how the temperature within a reactor changes as a function of distance. Seeq is built to compare data against time, but this method highlights how we can use time to mimic an alternate dimension.

Step 1: Sample Alignment In order to accurately mimic the alternate dimension, the samples to be included in each profile must occur at the same time. This can be achieved through a couple of methods in Seeq if the samples don't already align. Option 1: Re-sampling Re-sampling selects points along a signal at select intervals. You can also re-sample based on another signal's keys. Since it's possible for there not to be a sample at that select interval, the interpolated value is chosen. An example Formula demonstrating how to use the function is shown below.

//Function to resample a signal
$signal.resample(5sec)

Option 2: Average Aggregation Aggregating allows users to determine the average of a signal over a given period of time and then place this average at a specific point within that period. Signal From Condition can be used to find the average over a period and place this average at a specific timestamp within the period. In the example below, the sample is placed at the start, but alignment will occur if the samples are placed at the middle or end as well.

Step 2: Delay Samples In Formula, apply a delay to the samples of the signal that represents their value in the alternative dimension. For example, if a signal occurs at 6 feet from the start of a reactor, delay it by 6. If there is not a signal with a 0 value in the alternate dimension, the final graph will be offset by the smallest value in the alternate dimension. To fix this, in Formula create a placeholder signal such as 0 and ensure its samples align with the other samples using the code listed below. This placeholder would serve as a signal delayed by 0, meaning it would have a value of 0 in the alternate dimension.

//Substitute Period_of_Time_for_Alignment with the period used above for aligning your samples
0.toSignal(Period_of_Time_for_Alignment)

Note: Choosing the unit of the delay depends upon the new sampling frequency of your aligned signals as well as the largest value you will have in the alternative dimension. For example, if your samples occur every 5 minutes, you should choose a unit where your maximum delay is not greater than 5 minutes. Please refer to the table below for selecting units.

Largest Value in Alternate Dimension | Highest Possible Delay Unit
23 | Hour, Hour (24 Hour Clock)
59 | Minute
99 | Centisecond
999 | Millisecond

Step 3: Develop Sample Profiles Use the Formula listed below to create a new signal that joins the samples from your separate signals into a new signal. Replace "Max_Interpolation" with a number large enough to connect the samples within a profile, but small enough to not connect the separate profiles. For example, if the signals were re-sampled every 5 minutes but the largest delay applied was 60 seconds, any value below 4 minutes would work for the Max_Interpolation. This is meant to ensure the last sample within a profile does not interpolate to the first sample of the next profile.

//Make signals into discrete to only get raw samples, and then use combineWith and toLinear to combine the signals while maintaining their uniqueness
combineWith($signal1.toDiscrete() , $signal2.toDiscrete() , $signal3.toDiscrete()).toLinear(Max_Interpolation)

Step 4: Condition Highlighting Profiles Create a condition in Formula for each instance of this new signal using the formula below. The isValid() function was introduced in Seeq version 44. For versions 41 to 43, you can use .valueSearch(isValid()). Versions prior to 41 can use .validityCapsules().

//Develop capsule highlighting the profile to leverage other views based on capsules to compare profiles
$sample_profiles.isValid()

Step 5: Comparing Profiles Now with a condition highlighting each profile, Seeq views built around conditions can be used. Chain View can be used to compare the profiles side by side while Capsule View can overlay these profiles. Since we delayed our samples before, we are able to look at their relative times and use that to represent the alternate dimension. Further Applications With these profiles now available in Seeq, all of the tools available in Seeq can be used to gain more insight from these examples. Below are a few examples. Comparing profiles against a golden profile Determine at what value in the alternate dimension does each profile reach a threshold Developing a soft sensor based on another sensor and a calibration curve profile Example Use Cases Assess rotating equipment performance based on OEM curve regressions that vary based on equipment speed due to a VFD (alternate dimension = speed) Monitor distillation cut points based on distillation lab data (alternate dimension = lab standard, boil % in this case) Observe temperature profile along a reactor or well (alternate dimension = distance, length and depth in these cases)
    5 points
  12. For those like me who keep getting this error: Error getting data: condition must have a maximum duration. Consider using removeLongerThan() to apply a maximum duration. Click the wrench icon to add a Maximum Capsule duration. The resolution seemed to make sense to apply removeLongerThan() or setMaximumDuration() to the signal, but the correct answer is to set it to the capsule. For example, this is the incorrect formula I attempted. $series.aggregate(maxValue(), $capsules, endKey(), 0s) Here is the resolution: $series.aggregate(maxValue(), $capsules.setMaximumDuration(40h), endKey(), 0s) or $series.aggregate(maxValue(), $capsules.removeLongerThan(40h), endKey(), 0s) Hope this helps others who didn't have luck searching this specific alarm previously.
    5 points
  13. A typical data cleansing workflow is to exclude equipment downtime data from calculations. This is easily done using the .remove() and .within() functions in Seeq formula. These functions remove or retain data when capsules are present in the condition that the user supplies as a parameter to the function. There is a distinct difference in the behavior of the .remove() and .within() functions that users should know about, so that they can use the best approach for their use case. .remove() removes the data during the capsules in the input parameter condition. For step or linearly interpolated signals, interpolation will occur across those data gaps that are of shorter duration than the signal's maximum interpolation. (See Interpolation for more details concerning maximum interpolation.) .within() produces data gaps between the input parameter capsules. No interpolation will occur across data gaps (no matter what the maximum interpolation value is). Let's show this behavior with an example (see the first screenshot below, Data Cleansed Signal Trends), where an Equipment Down condition is identified with a simple Value Search for when Equipment Feedrate is < 500 lb/min. We then generate cleansed Feedrate signals which will only have data when the equipment is running. We do this 2 ways to show the different behaviors of the .remove() and .within() functions. $Feedrate.remove($EquipmentDown) interpolates across the downtime gaps because the gap durations are all less than the 40 hour max interpolation setting. $Feedrate.within($EquipmentDown.inverse()) does NOT interpolate across the downtime gaps. In the majority of cases, this result is more in line with what the user expects. As shown below, there is a noticeable visual difference in the trend results. Gaps are present in the green signal produced using the .within() function, wherever there is an Equipment Down capsule. A more significant difference is that depending on the nature of the data, the statistical calculation results for time weighted values like averages and standard deviations, can be very different. This is shown in the simple table (Signal Averages over the 4 Hour Time Period, second screenshot below). The effect of time weighting the very low, interpolated values across the Equipment Down capsules when averaging the Feedrate.remove($EquipmentDown) signal, gives a much lower average value compared to that for $Feedrate.within($EquipmentDown.inverse()) (1445 versus 1907). Data Cleansed Signal Trends Signal Averages over the 4 Hour Time Period Content Verified DEC2023
    4 points
  14. Seeq has functions to allow easy manipulation of the starts and ends of capsules, including functions like afterStart(), move(), and afterEnd(). One limitation of these functions is that they expect scalar inputs, which means all capsules in the condition have to be adjusted by the same amount (e.g. move all capsules 1 hour into the future). There are cases when you want to adjust each capsule dynamically, for instance using the value of a signal to determine how to adjust the capsule. Solution: This post will show how to accomplish a dynamic / signal-based version of afterStart(). This approach can be modified slightly to recreate other capsule adjustment functions. Assume I have an arbitrary condition 'Condition', and signal 'Capsule Adjustment Signal'. I want to find the first X hours after each capsule start, where X is the value of 'Capsule Adjustment Signal' at the capsule start. I can do this with the below formula.

$condition
.afterStart(3h) // has to be longer than an output capsule will ever be
.transform($capsule -> {
    $newStartKey = $capsule.startKey()
    $newEndKey = $capsule.startKey() + $signal.valueAt($capsule.startKey())
    capsule($newStartKey, $newEndKey)
})

This formula only takes two inputs: $condition, and $signal. This formula goes through each capsule in the condition, and manipulates its start and end keys. In this case, the start key is the same as the original, but the new end key is set to the original start key plus the value of my signal. This formula produces the following purple condition: Some notes on this formula: The output capsules must be within the original capsules. Therefore, I have included .afterStart(3h) in the formula. This ensures the original capsules will always be larger than the outputted capsules. If you don't do this, you may see the following warning on your item, which indicates the formula is throwing away capsules: Your capsule adjustment signal must have units of time To accomplish other capsule adjustments, look at changing the definitions of the $newStartKey and $newEndKey variables to suit your needs.
    4 points
  15. Update - To see a video of the below workflow in action, check out this Seeq Tips and Tricks video. Webhooks are a convenient method to send event data from Seeq to "channel" productivity tools such as Microsoft Teams or Slack. The following post describes how Seeq users can leverage Seeq Data Lab to send messages directly to MS Teams via Webhooks. Pre-Requisites: 1) Seeq Data Lab with Scheduled Notebooks enabled a. See Administration Panel -> Configuration and filter for "Features/DataLab/ScheduledNotebooks/Enabled" 2) MS Teams Channel with a Webhook Connector Assumptions: 1) Summary of capsules generated in a defined time range (i.e., every 12 or 24 hours) 2) Notifications are not near-real-time – script will run on a pre-defined schedule generally measured in hours, not minutes or seconds 3) Events of interest are contained in an Asset Tree or Group with one or more Conditions Step 1: Configure Webhook in MS Teams To send Seeq capsules/events to MS Teams, a Webhook for the target channel needs to be created. Detailed instructions on how to configure Webhooks in MS Teams can be found here: https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook For the purpose of this post, we will create a Webhook URL in our "Seeq Notifications" Team to alert on Temperature Excursions. The alerts will be posted in the "Cooling Tower Temperature Monitoring" channel. Teams and Channel names can be configured to fit your need/operation; this is just an example for demonstration purposes: MS Teams will generate a Webhook URL which we will use in our script in Step 4. Step 2: Identify or Create an Asset Group or Asset Tree to define the Monitoring Scope To scope the events of interest, we will use an Asset Tree that contains "High Temperature" conditions for a collection of Monitoring Assets. While this is not a requirement for using Webhooks, it helps with scaling the Notification workflow. It also allows us to combine multiple Conditions from different Assets into a single workflow. To learn how to create an Asset Tree, follow the "Asset Trees 1 – Introduction.ipynb" tutorial in the SPy Documentation folder contained in each new Seeq Datalab project. The script for the Monitoring Asset Tree used in this post is attached for reference: Monitoring Asset Tree.ipynb Alternatively, Asset Groups can also be used to create an asset structure directly in Workbench without using Python: Once the Asset Group/Tree containing the monitoring Conditions is determined, create a Worksheet with a Treemap or Table overview for monitoring use: Make note of the URL as it will be included in the Notification as a link to the monitoring overview whenever an event is detected. For locally scoped Asset Groups or Trees, it will also inform the script where to look for Conditions. Step 3: Install the "pymsteams" library in Seeq Datalab The pymsteams library allows users to compose and post messages (or cards) to MS Teams. The library can be installed from the pypi repository (pypi.org) using the "pip install" command. 1) Open a Seeq Datalab Project 2) Launch a Terminal session 3) Install the pymsteams library by executing pip install pymsteams Additional documentation on pymsteams can be found here: https://pypi.org/project/pymsteams/ Step 4: Create or Update the Monitoring script We are now ready to configure a monitoring script that sends notifications to the Webhook configured in Step 1 using Conditions scoped to the Asset Tree in Step 2.
a) Import the relevant libraries, including the newly installed pymsteams library import pandas as pd from datetime import datetime,timedelta import pytz import pymsteams b) Configure Input Parameters #Refer to Microsoft Documentation on how to configure a Webhook for a MS Teams channel webhook_url='YOUR WEBHOOK HERE' #Specify the monitoring workbook - this is where the alert will link with the associated timeframe monitoring_workbook_url='YOUR WORKBOOK HERE' #Specify the asset tree and associated condition for which the webhook should be triggered asset_tree='Compressor Monitoring' monitoring_condition='High Temperature' #Specify the lookback period and timezone to search for capsules lookback_interval_hours=24 timezone=('US/Mountain') c) Search for Event Capsules #Set time range to look for new conditions delta=timedelta(hours=lookback_interval_hours) end=datetime.now(tz=pytz.timezone(timezone)) start=end-delta #Parse the workbook information workbook_id=spy.utils.get_workbook_id_from_url(monitoring_workbook_url) worksheet_id=spy.utils.get_worksheet_id_from_url(monitoring_workbook_url) #This block is optional, it stores search results for the conditions once instead of searching each time the #script runs. Saves time if the search result is not expected to change. To reset, just delete the .pkl file. pkl_file_name=asset_tree+'_'+monitoring_condition+'_'+workbook_id+'.pkl' try: monitoring_conditions=pd.read_pickle(pkl_file_name) except: monitoring_conditions=spy.search({'Name':monitoring_condition, 'Type':'Condition', 'Path':asset_tree}, workbook=workbook_id,quiet=True) monitoring_conditions.to_pickle(pkl_file_name) #Pull capsules present during the specified time range events=spy.pull(monitoring_conditions,start=start,end=end,group_by=['Asset'],header='Asset',quiet=True) number_of_events=len(events) events d) Send Message to Webhook using the pymsteams library if a Capsule is detected in the time range #If capsules are present, trigger the webhook to compile and send a card to MS Teams if number_of_events != 0: events.sort_values(by='Condition',inplace=True) #Create url for specific notification time-frame using Seeq URL builder investigate_start=start.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ') investigate_end=end.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ') investigate_url=f"https://explore.seeq.com/workbook/builder?startFresh=false"\ f"&workbookName={workbook_id}"\ f"&worksheetName={worksheet_id}"\ f"&displayStartTime={investigate_start}"\ f"&displayEndTime={investigate_end}"\ f"&expandedAsset={asset_tree}" #Create message information to be posted in channel assets=[] text=[] for event in events.itertuples(): assets.append(event.Condition) #Capsule started before lookback window if pd.isnull(event[2]): if pd.isnull(event[3]) or event[4] == True: text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and is in Progress') else: text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}') #Capsule started during lookback window else: if pd.isnull(event[3]) or event[4] == True: text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and is in Progress') else: text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}') message='\n'.join(text) #Create MS Teams Card - see pymsteams documentation for details TeamsMessage = pymsteams.connectorcard(webhook_url) 
TeamsMessage.title(monitoring_condition+" Event Detected") TeamsMessage.text(monitoring_condition+' triggered in '+asset_tree+f' Asset Tree in the last {lookback_interval_hours} hours') TeamsMessageSection=pymsteams.cardsection() for i,value in enumerate(text): TeamsMessageSection.addFact(assets[i],value) TeamsMessage.addSection(TeamsMessageSection) TeamsMessage.addLinkButton('Investigate in Workbench',investigate_url) TeamsMessage.send() Step 5: Test the Script Execute the script ensuring at least one “High Temperature” capsule is present in the lookback duration. The events dataframe in step 4. c) will list capsules that were detected. If no capsules are present, adjust the lookback duration. If at least one capsule is detected, a notification will automatically be posted in the channel for which the Webhook has been configured: Step 6: Schedule Script to run on a specified Frequency If the script operates as desired, configure a schedule for it to run automatically. #Optional - schedule the above script to run on a regular interval spy.jobs.schedule(f'every day at 6am') The script will run on the specified interval and post a summary of “High Temperature” capsules/events that occur during the lookback period directly to the MS Teams channel. Refer to the spy.jobs.ipynb notebook in the “SPy Documentation” folder for additional information on scheduling options. Attached is a copy of the full example script: Seeq MS Teams Notification Webhook - Example Script.ipynb
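Before scheduling the full monitoring script, it can be helpful to send a one-off test card to confirm the Webhook URL from Step 1 is working. This minimal sketch uses only the pymsteams calls already shown above; replace the placeholder URL with your own.
import pymsteams

webhook_url = 'YOUR WEBHOOK HERE'   # the URL generated by MS Teams in Step 1

test_card = pymsteams.connectorcard(webhook_url)
test_card.title('Seeq Webhook Test')
test_card.text('If you can read this in the channel, the Webhook is configured correctly.')
test_card.send()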
    4 points
  16. Users are often interested in creating Pareto charts, sorted by a particular capsule property, using conditions they've created in Seeq. The chart below was created using the Histogram tool in Seeq Workbench. For more information on how to create Histograms that look like this, check out this article on creating and using capsule properties. Oftentimes users would like to see the histogram above, but with the bars sorted from largest to smallest as in a traditional Pareto chart. Users can easily create Paretos from Seeq conditions using Seeq Data Lab. A preview of the chart that we can create is: The full Jupyter Notebook documentation of this workflow (including output) can be found in the attached pdf file. If you're unable to download the PDF, the code snippets below can be run in Seeq Data Lab to produce the chart above.
#Import relevant libraries
from seeq import spy
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
Log in to the SPY module if running locally using spy.login, or skip this step if running Seeq Data Lab.
#Search for your condition that has capsule properties using spy.search
#Use the 'scoped to' argument to search for items only in a particular workbook. If the item is global, no 'scoped to' argument is necessary
condition = spy.search({
    "Name": "Production Loss Events (with Capsule Properties)",
    "Scoped To": "9E50F449-A6A1-4BCB-830A-8D0878C8C925",
})
condition
#pull the data from the time frame of interest using spy.pull into a Pandas dataframe called 'my_data'
my_data = spy.pull(condition, start='2019-01-15 12:00AM', end='2019-07-15 12:00AM', header='Name', grid=None)
#remove columns from the my_data dataframe that will not be used in creation of the pareto/CDF
my_data = my_data.drop(['Condition','Capsule Is Uncertain','Source Unique Id'], axis=1, inplace=False)
#Calculate a new dataframe column named 'Duration' by subtracting the capsule start from the capsule end time
my_data['Duration'] = my_data['Capsule End']-my_data['Capsule Start']
#Group the dataframe by reason code
my_data_by_reason_code = my_data.groupby('Reason Code')
#check out what the new data frame grouped by reason code looks like
my_data_by_reason_code.head()
#sum total time broken down by reason code and sort from greatest to least
total_time_by_reason_code = my_data_by_reason_code['Duration'].sum().sort_values(ascending=False)
total_time_by_reason_code = total_time_by_reason_code.rename('Total_Time_by_Reason_Code')
total_time_by_reason_code
#plot pareto of total time by reason code
total_time_by_reason_code.plot(kind='bar')
#Calculate the total time from all reason codes
total_time = total_time_by_reason_code.sum()
total_time
#calculate percentage of total time from each individual reason code
percent_time_by_reason_code = total_time_by_reason_code.divide(total_time)
percent_time_by_reason_code
#Calculate cumulative sum of percentage of time for each reason code
cum_percent_time_by_reason_code = percent_time_by_reason_code.cumsum()
cum_percent_time_by_reason_code = cum_percent_time_by_reason_code.rename('Cum_Percent_Time_by_Reason_Code')
cum_percent_time_by_reason_code
#plot cumulative distribution function of time spent by reason code
cum_percent_time_by_reason_code.plot(linestyle='-', linewidth=3, marker='o', markersize=15, color='b')
#convert time units on total time by reason code column from default (nanoseconds) to hours
total_time_by_reason_code = total_time_by_reason_code.dt.total_seconds()/(60*60)
#build dataframe for final overlaid chart
df_for_chart = pd.concat([total_time_by_reason_code, cum_percent_time_by_reason_code], axis=1)
df_for_chart
#create figure with overlaid Pareto + CDF
plt.figure(figsize=(20,12))
ax = df_for_chart['Total_Time_by_Reason_Code'].plot(kind='bar', ylim=(0,800), style='ggplot', fontsize=12)
ax.set_ylabel('Total Hours by Reason Code', fontsize=14)
ax.set_title('Downtime Reason Code Pareto', fontsize=16)
ax2 = df_for_chart['Cum_Percent_Time_by_Reason_Code'].plot(secondary_y=['Cum_Percent_Time_by_Reason_Code'], linestyle='-', linewidth=3, marker='o', markersize=15, color='b')
ax2.set_ylabel('Cumulative Frequency', fontsize=14)
plt.show()
    4 points
  17. When creating signal forecasts, especially for cyclic signals that degrade, we often use forecastLinear() in Formula to easily forecast a signal out into the future to determine when a threshold is met. The methodology is often the same regardless of whether we are looking at a filter, a heat exchanger, or any other equipment that fouls over time or needs to go through periodic maintenance when a KPI threshold is met. A question that comes up occasionally from users is how to create a signal forecast that only uses data from the current operation cycle. The forecastLinear() operator only takes into account a historical training period and does not determine whether that data is coming from the current cycle or not (which can produce unexpected results). Before entering the formula, you will need to define:
a condition that identifies the current cycle, here I have called it "$runningCycle"
a signal to do a linear forecast on, here I have called it "$signal"
To forecast out into the future based on only the most recent cycle, the following code snippet can be used in Formula:
$training = $runningCycle.setMaximumDuration(10d).toGroup(capsule(now()-2h, now()))
$forecast = $signal.predict($training, timeSince(toTime('2000-01-01T00:00Z')))
$signal.forecastSplice($forecast, 1d)
In this code snippet, there are a few parameters that you might want to change:
.setMaximumDuration(10d): results in a longest cycle duration of 10 days; this should be changed to be longer than the longest cycle you would expect to see
capsule(now()-2h, now()): this creates a period during which Seeq will look for the most recent cycle. In this case it is any time in the last 2 hours. If you have very frequent data (data comes in every few seconds to minutes) then 2 hours or less will work. If you have infrequent data (data that comes in once a day or less) then extend this so that it covers the last 3-4 data points.
$signal.forecastSplice($forecast, 1d): When using forecastLinear(), there is an option to force the prediction through the last sample point. This duration parameter (1 day in this case) does something similar: it blends the last historical data point with the forecast over the given time range. In other words, if the last data point had a value of 5, but my first forecasted data point had a value of 10, this parameter is the time frame over which to smooth from the historical data point to the forecast.
Here is a screenshot of my formula: and the formula in action:
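If you prefer to check the same idea in Seeq Data Lab, a quick way is to fit only the most recent cycle's data and extrapolate it with numpy. This is just a rough sketch under a few assumptions: cycle_data is a single-column DataFrame of the current cycle's samples (for example, the output of spy.pull restricted to the most recent capsule), and a simple linear fit is adequate.
import numpy as np
import pandas as pd

def linear_forecast(cycle_data, horizon='30d', freq='1h'):
    y = cycle_data.iloc[:, 0].dropna()
    t = (y.index - y.index[0]).total_seconds()         # elapsed seconds since the cycle started
    slope, intercept = np.polyfit(np.asarray(t), y.values, 1)   # ordinary least squares line

    future_index = pd.date_range(y.index[-1], y.index[-1] + pd.Timedelta(horizon), freq=freq)
    t_future = (future_index - y.index[0]).total_seconds()
    return pd.Series(slope * np.asarray(t_future) + intercept, index=future_index, name='Forecast')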
    4 points
  18. Capsules can have properties or information attached to the event. By default, all capsules contain information on time context such as the capsule’s start time, end time, and duration. However, one can assign additional capsule properties based on other signals’ values during the capsules. Datasources can also bring in conditions with capsule properties already added. This guide will walk through some common workflows to visualize, add, and work with capsule properties. How can I visualize capsule properties? Capsule Properties can be added to the Capsules Pane in the bottom right hand corner of Seeq Workbench with the black grid icon. Any capsule properties beyond start, end, duration, and similarity that are created with the formulas that follow or come in automatically through the datasource connection can be found by clicking the “Add Column” option and selecting the desired property in the modal. In the capsule pane, you can filter on capsule properties by clicking on the three dots next to a column. In Trend View, you can add Capsule Property labels inside the capsules by selecting the property from the Labels drop down at the top of the trend. In Capsule Time, the signals can be colored by Capsule Properties by turning on the coloring to rainbow or gradient. The selection for which property is performing the coloring is done by sorting by the capsule property in the Capsule Pane in the bottom right corner. Therefore, if Batch ID is sorted by as selected below, the legend shown on the chart will show the Batch ID values. When working with Condition Table, you can add capsule properties as columns or as headers How do I create a capsule for every (unique) value of a signal? The Condition with Properties tool allows you take one or more step signals and turn them into a condition with a capsule per value change. The signal values will be added as properties on the capsules. For instance, if we have a BatchID and an Operation signal, we can use this tool to generate a capsule per Operation. In formula this would be done using the toCondition() operator: $batchID.toCondition('Batch ID') Where Batch ID is the name of the resulting property on the condition. How do I assign a capsule property? Option 1: Assigning a constant value to all capsules within a condition $condition.setProperty('Property Name', 'Property Value') Note that it is important to know whether you would like the property stored as a string or numeric value. If the desired property value is a string, make sure that the ‘Property Value’ is in single quotes to represent a string like the above formula. If the desired value is numeric, you should not use the single quotes. Option 2: Assigning a property based on another signal value during the capsule For these operations, you can also use the setProperty operator. For example, you may want the first value of a signal within a capsule or the average value of a signal during the capsule. Below are some examples of options you have and how you would set these values to capsule properties. $condition.setProperty('Property Name', $signal, aggregationMethod()) A full list of aggregation methods can be found under the Signal Value Statistics and Condition Value Statistics pages in the formula documentation, but common aggregation methods include: startValue() endValue() max() average() count() totalDuration() Option 3: Moving properties between conditions (e.g. 
parent to child conditions) In batch processing, there is often a parent/child relationship of conditions in an S88 (or ISA-88) tree hierarchy where the batch is made up of smaller operations, which is then made up of smaller phases. Some events databases may only set properties on particular capsules within that hierarchy, but you may want to move the properties to higher or lower levels of that hierarchy. You can accomplish this using mergeProperties() $conditionWithoutProperty.mergeProperties($conditionWithProperty) How do I filter a condition by capsule properties? Conditions are filtered by capsule properties using the keep() operator. Some examples of this are listed below: Option 1: Keep exact match to property $condition.keep('Property Name', isEqualTo('Property Value')) Note that it is important to know whether the property is stored as a string or numeric value. If the property value is a string, make sure that the ‘Property Value’ is in single quotes to represent a string like the above formula. If the value is numeric, you should not use the single quotes. Option 2: Keep regular expression string match $condition.keep('Property Name', isMatch('B*')) $condition.keep('Property Name', isNotMatch('B*')) You can specify to keep either matches or not matches to partial string signals. In the above formulas, I’m specifying to either keep all capsules where the capsule property starts with a B or in the second equation, the ones that do not start with a B. If you need additional information on regular expressions, please see our Knowledge Base article. Option 3: Other keep operators for numeric properties Using the same format of Option 1 above, you can replace the isEqualTo operator with any of the following operators for comparison functions on numeric properties: isGreaterThan isGreaterThanOrEqualTo isLessThan isLessThanOrEqualTo isBetween isNotBetween isNotEqualTo Option 4: Keep capsules where capsule property exists $condition.keep('Property Name', isValid()) In this case, any capsules that have a value for the property specified will be retained, but all capsules without a value for the specified property will be removed from the condition. How do I turn a capsule property into a signal? A capsule property can be turned into a signal by using the toSignal() operator. The properties can be placed at either the start timestamp of the capsule (startKey), the end timestamp of the capsule (endKey), or the entire duration of the capsule (durationKey). For most use cases the default of durationKey is the most appropriate. $condition.toSignal('Property Name') // will default to durationKey $condition.toSignal('Property Name', startKey()) $condition.toSignal('Property Name', endKey()) Using startKey or endKey will result in a discrete signal. If you would like to turn the discrete signal into a continuous signal connecting the data points, you can do so by adding a toStep or toLinear operator at the end to either add step or linear interpolation to the signal. Inside the parentheses for the interpolation operators, you will need to add a maximum interpolation time that represent the maximum time distance between points that you would want to interpolate. For example, a desired step interpolation of capsules may look like the following formula: $condition.toSignal('Batch ID', startKey()).toStep(40h) How do I rename capsule properties? 
Properties can be swapped to new names by using the renameProperty operator: $condition.renameProperty('Current Property Name', 'New Property Name') A complex example of using capsule properties: What if you had upstream and downstream processes where the Batch ID (or other property) could link the data between an upstream and downstream capsule that do not touch and are not in a specific order? In this case, you would want to be able to search a particular range of time for a matching property value and to transfer another property between the capsules. This can be explained with the following equation: $upstreamCondition.move(0, 7d).mergeProperties($downstreamCondition, 'Batch ID').move(0, -7d) Let's walk through what this is doing step by step. First, it's important to start with the condition that you want to add the property to, in this case the upstream condition. Then we move the condition by a certain amount to be able to find the matching capsule value. In this case, we are moving just the end of each capsule 7 days into the future to search for a matching property value and then at the end of the formula, we move the end of the capsule back 7 days to return the capsule to its original state. After moving the capsules, we merge properties with the downstream condition, only bringing properties from overlapping downstream capsules with the same Batch ID as the upstream condition. Using capsule properties in histogram You can create bins using capsule properties in the histogram tool by selecting the 'Condition' aggregation type. The following example creates a Histogram based upon the Value property in the toCondition() condition. The output is a Histogram with a count of the number of capsules with a Value equal to each of the four stages of operation: Creating Capsule Properties Reference Video: Conditions and capsules in Seeq are key to focusing #timeseries data analytics on specific time periods of interest. In this video, we explore how to create capsule properties to add additional context to the data. Using Capsule Properties Reference Video: Conditions and capsules in Seeq are key to focusing analytics on specific time periods of interest. In this video, we explore how to use capsule properties, the data attached to an event, to supply further insights into the data. Content Verified DEC2023
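Capsule properties also carry over into Seeq Data Lab: when a condition is pulled with spy.pull, each capsule becomes a row and each property becomes an extra column, which makes property-based filtering or reporting straightforward in pandas. A small sketch (the condition name, property name, and date range are only examples):
from seeq import spy

condition = spy.search({'Name': 'Production Loss Events', 'Type': 'Condition'})
capsules = spy.pull(condition, start='2023-01-01', end='2023-02-01')

# Properties such as 'Batch ID' show up as columns alongside Capsule Start / Capsule End
capsules.head()

# Example: keep only the capsules whose 'Batch ID' property starts with 'B'
capsules[capsules['Batch ID'].astype(str).str.startswith('B')]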
    4 points
  19. We often get asked how to use the various API endpoints via the python SDK so I thought it would be helpful to write a guide on how to use the API/SDK in Seeq Data Lab. As some background, Seeq is built on a REST API that enables all the interactions in the software. Whenever you are trending data, using a Tool, creating an Organizer Topic, or any of the other various things you can do in the Seeq software, the software is making API calls to perform the tasks you are asking for. From Seeq Data Lab, you can use the python SDK to interact with the API endpoints in the same way as users do in the interface, but through a coding environment. Whenever users want to use the python SDK to interact with API endpoints, I recommend opening the API Reference via the hamburger menu in the upper right hand corner of Seeq: This will open a page that will show you all the different sections of the API with various operations beneath them. For some orientation, there are blue GET operations, green POST operations, and red DELETE operations. Although these may be obvious, the GET operations are used to retrieve information from Seeq, but are not making any changes - for instance, you may want to know what the dependencies of a Formula are so you might GET the item's dependencies with GET/items/{id}/dependencies. The POST operations are used to create or change something in Seeq - as an example, you may create a new workbook with the POST/workbooks endpoint. And finally, the DELETE operations are used to archive something in Seeq - for instance, deleting a user would use the DELETE/users/{id} endpoint. Each operation endpoint has model example values for the inputs or outputs in yellow boxes, along with any required or optional parameters that need to be filled in and then a "Try it out!" button to execute the operation. For example, if I wanted to get the item information for the item with the ID "95644F20-BD68-4DFC-9C15-E4E1D262369C" (if you don't know where to get the ID, you can either use spy.search in python or use Item Properties: https://seeq.atlassian.net/wiki/spaces/KB/pages/141623511/Item+Properties) , I could do the following: Using the API Reference provides a nice easy way to see what the inputs are and what format they have to be in. As an example, if I wanted to post a new property to an item, you can see that there is a very specific syntax format required as specified in the Model on the right hand side below: I typically recommend testing your syntax and operation in the API Reference to ensure that it has the effect that you are hoping to achieve with your script before moving into python to program that function. How do I code the API Reference operations into Python? Once you know what API endpoint you want to use and the format for the inputs, you can move into python to code that using the python SDK. The python SDK comes with the seeq package that is loaded by default in Seeq Data Lab or can be installed for your Seeq version from pypi if not using Seeq Data Lab (see https://pypi.org/project/seeq/). Therefore, to import the sdk, you can simply do the following command: from seeq import sdk Once you've done that, you will see that if you start typing sdk. and hit "tab" after the period, it will show you all the possible commands underneath the SDK. Generally the first thing you are looking for is the ones that end in "Api" and there should be one for each section observed in the API Reference that we will need to login to using "spy.client". 
If I want to use the Items API, then I would first want to login using the following command: items_api = sdk.ItemsApi(spy.client) Using the same trick as mentioned above with "tab" after "items_api." will provide a list of the possible functions that can be performed on the ItemsApi: While the python functions don't have the exact same names as the operations in the API Reference, it should hopefully be clear which python function corresponds to the API endpoint. For example, if I want to get the item information, I would use "get_item_and_all_properties". Similar to the "tab" trick mentioned above, you can use "shift+tab" with any function to get the Documentation for that function: Opening the documentation fully with the "^" icon shows that this function has two possible parameters, id and callback where the callback is optional, but the id is required, similar to what we saw in the API Reference above. Therefore, in order to execute this command in python, I can simply add the ID parameter (as a string as denoted by "str" in the documentation) by using the following command: items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C') In this case, because I executed a "GET" function, I return all the information about the item that I requested: This same approach can be used for any of the API endpoints that you desire to work with. How do I use the information output from the API endpoint? Oftentimes, GET endpoints are used to retrieve a piece of information to use it in another function later on. From the previous example, maybe you want to retrieve the value for the "name" of the item. In this case, all you have to do is save the output as a variable, change it to a dictionary, and then request the item you desire. For example, first save the output as a variable, in this case, we'll call that "item": item = items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C') Then convert the output "item" into a dictionary and request whatever key you would like: item.to_dict()['name']
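As a small convenience wrapper around the call above, the returned object can be flattened into a plain dictionary of property names and values. This is just a sketch and assumes (as the output above suggests) that the 'properties' entry of to_dict() is a list of dictionaries with 'name' and 'value' keys:
from seeq import spy, sdk

items_api = sdk.ItemsApi(spy.client)

def get_item_properties(item_id):
    """Return an item's properties as a simple {name: value} dictionary."""
    item = items_api.get_item_and_all_properties(id=item_id)
    item_dict = item.to_dict()
    props = {p['name']: p['value'] for p in item_dict.get('properties', [])}
    props['Name'] = item_dict.get('name')   # include the top-level name for convenience
    return props

get_item_properties('95644F20-BD68-4DFC-9C15-E4E1D262369C')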
    4 points
  20. It is common in manufacturing plants, such as oil and gas refineries, to monitor the temperature trend in furnaces. For example, in refineries, tube metal temperature (TMT) monitoring of furnaces becomes more critical for dirty services such as crude distillation and coker units. The operations team uses this information to decide whether to cut rates or skew feed rates before the TMT reaches its mechanical limit, in order to prolong the run length. Upon reaching the limit, the furnace is taken out of service for spalling or pigging, which consequently impacts the production rate.
Use case: The objective is to highlight the highest and the second highest temperature out of several temperatures in a matrix. Seeq enables users to build a matrix table (or Scorecard prior to R51) to highlight the temperature priority sequence by using a combination of functions and tools including max(), splice(), composite condition and scorecard metrics.
Step 1: Start by loading all of the signals we want to include in the matrix into the display.
Step 2: Use max() to look for the highest value signal at any time in the Formula tool. Type the formula below into the Formula editor (we will refer to the resulting signal as $max in Step 3).
$t1.max($t2).max($t3).max($t4).max($t5).max($t6).max($t7).max($t8)
Step 3: Create the second highest signal using splice() and composite condition. To capture the second highest signal, we first need to exclude the signal with the highest temperature at any time and then identify the highest value out of the remaining seven signals. To achieve that, using the highest temperature signal we created in Step 2, we create a condition for each of the eight signals that is active whenever that signal reads the highest value.
//Which is the max
$if_t1_is_the_max = $t1 == $max
$if_t2_is_the_max = $t2 == $max
$if_t3_is_the_max = $t3 == $max
$if_t4_is_the_max = $t4 == $max
$if_t5_is_the_max = $t5 == $max
$if_t6_is_the_max = $t6 == $max
$if_t7_is_the_max = $t7 == $max
$if_t8_is_the_max = $t8 == $max
Prior to looking for the max a second time, we must remove or replace the values from each of the signals when they are equal to the max. In this method, we will replace the highest signal values with zero using the splice function during the condition when that signal was the max. With these highest values replaced by zero (or removed), applying the same technique with the max function will yield the value of the second highest signal.
//replace the max with 0
$removing_the_max_value = ($t1.splice(0,$if_t1_is_the_max))
.max($t2.splice(0,$if_t2_is_the_max))
.max($t3.splice(0,$if_t3_is_the_max))
.max($t4.splice(0,$if_t4_is_the_max))
.max($t5.splice(0,$if_t5_is_the_max))
.max($t6.splice(0,$if_t6_is_the_max))
.max($t7.splice(0,$if_t7_is_the_max))
.max($t8.splice(0,$if_t8_is_the_max))
.toStep()
return $removing_the_max_value
Step 4: Create Metric Threshold Limits. Subtract a fairly small value from the highest signal using the Formula tool in order to use the result as a threshold limit. Repeat the step for the second highest limit.
$max-0.001
Step 5: Create a scorecard metric for each signal. Create scorecard metrics for all 8 signals; as an example, we choose the Value at End statistic on a daily condition and apply the thresholds accordingly. In the table view:
Do check this post by Nick. He used a different approach to yield the maximum of three signals and displayed the signal string in a matrix table.
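The same highest / second-highest ranking can also be sanity-checked in Seeq Data Lab once the temperature signals have been pulled into a DataFrame. A rough sketch, assuming the eight TMT signals are pulled onto a common grid (the search terms and dates are placeholders):
from seeq import spy

tmt_signals = spy.search({'Name': 'TMT*', 'Type': 'Signal'})    # placeholder search
tmt = spy.pull(tmt_signals, start='2023-01-01', end='2023-01-08', grid='15min', header='Name')

highest = tmt.max(axis=1)                                                   # highest TMT at each timestamp
second_highest = tmt.apply(lambda row: row.nlargest(2).iloc[-1], axis=1)    # second highest at each timestamp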
    4 points
  21. For those Formula-savvy users, a one Formula approach to this would be as follows where you would define your percentage of the capsule in the first line: $percent = 10%.tosignal().resample(1s) $condition.transform($capsule ->{ $movetime = $capsule.duration()*$percent.toScalars($capsule).first() capsule($capsule.startKey(),$capsule.startKey()+$movetime)})
    4 points
  22. Monitoring KPI's for your process or equipment is a valuable method in determining overall system performance and health. However, it can be cumbersome to comb through all the different KPI's and understand when each is deviating from an expected range or set of boundaries. We can, however, shorten our time to insight by aggregating all associated KPI's into one Health Score; the result allows us to monitor just one trend item, and take action when deviations occur. To walk through the steps of building a Health Score, I will walk through an example below which looks at 4 KPI's for a Pump, and aggregates them into one final Health Score. Note that the time period examined is a 3 month period leading up to a pump failure. KPI DETAILS KPI #1 The first indicator I can monitor on this pump is how my Discharge Pressure is trending relative to an expected range determined by my Manufacturer's Pump Performance Curve. (To enable using a pump curve in Seeq, reference this article for more information: Creating Pump and Compressor Curves in Seeq). As my Discharge Pressure deviates from the expected range, red Capsules are created by using a Deviation Search in Seeq. KPI #2 The second indicator I can monitor on this pump is whether the NPSHa (available) is remaining higher than the NPSHr (required) as stipulated by the Manufacturer's Pump Datasheet. If my NPSHa drops below my NPSHr, red Capsules will be created by using a Deviation Search in Seeq. (No deviations noted in the time period evaluated). KPI #3 The third indicator I can monitor on this pump is whether the pump Vibration signals remain lower than specified thresholds (these could be determine empirically, from the Manufacturer, or industry standard). In this case I have 4 Vibration signals. I am using a "Union" method to combine the 4 conditions into the final KPI Alert, which will show red Capsules if any of the 4 vibrations exceed their threshold. The formula for this KPI Alert condition is as shown below: $vib1>$limit1 or $vib2>$limit2 or $vib3>$limit3 or $vib4>$limit4 KPI #4 The fourth indicator I can monitor on this pump is whether the flow through the pump is remaining higher than minimum allowable as stipulated by the Manufacturer's Pump Curve/Datasheet. If my measured Flow drops below my Flow Limit, red Capsules will be created by using a Deviation Search in Seeq. (No deviations noted in the time period evaluated). BUILDING THE HEALTH SCORE Now that we have 4 conditions, 1 for each KPI if exceeding the determined normal operating range, we need to aggregate these into the Health Score. First, we determine how much % time each KPI alert has been active during each Day (in fraction form, ie range of 0 to 1). We do this by creating a Formula for each KPI Alert condition, with the syntax as follows: #Determine the % duration that a KPI alert is active during each Day (in fraction form) $kpi1.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep() The result of applying this to all 4 KPI Alert conditions should be as follows - you may note that if a KPI Alert condition is "on" for the full duration of a day, it will show a value of 1. If partially "on", it will show a fractional value between 0 and 1, and if no Condition is present at all, it will show a value of 0. Now we aggregate these individual indicators into a rolled up Health Score, by using the Sum of Squares method, and then dividing by the total number of indicators. 
To do so, enter the following in a formula:
#Aggregate the Sum of Squares of the fractional alert values
($k1.pow(2) + $k2.pow(2) + $k3.pow(2) + $k4.pow(2))/4
I could also have performed the above 2 steps in 1 Formula:
#First determine the % duration that a KPI alert is active during each Day (in fraction form)
$k1 = $kpi1.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
$k2 = $kpi2.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
$k3 = $kpi3.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
$k4 = $kpi4.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
#Then aggregate the Sum of Squares of those fractional values
$sumOfSquares = $k1.pow(2) + $k2.pow(2) + $k3.pow(2) + $k4.pow(2)
$sumOfSquares/4
I can also add a Health Score high limit (in my example 0.25, which corresponds to one KPI Alert being active for a full day) to trigger some action (perhaps schedule pump maintenance) prior to failure. A new red Capsule will appear if my Health Score exceeds this limit of 0.25 (can be configured via Value Search or Deviation Search). Enter the following into Formula to create this limit (a new scalar, which in this example is simply the constant 0.25). Below you will see my final Health Score trend item as well as the limit and Health Score Alert condition. Optionally, I can use the High Limit to create a shaded boundary for my Health Score, using the Boundaries Tool. (I also create a low limit of 0 to be the lower boundary). We can see that in the month leading up to Failure (early Feb), I had multiple forewarning indications through this aggregated Health Score. In the future, I could monitor this Pump health in a dashboard in a Seeq Organizer Topic, and trigger some maintenance activity proactively. Dashboard Example:
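For anyone who would rather compute the same roll-up in Seeq Data Lab, the sum-of-squares aggregation is straightforward in pandas once the daily percent-duration signals have been pulled. A sketch, assuming kpi is a DataFrame with one column per KPI and values between 0 and 1 (as produced by the percentDuration() formulas above):
import pandas as pd

def health_score(kpi, limit=0.25):
    score = (kpi ** 2).sum(axis=1) / kpi.shape[1]   # sum of squares divided by the number of KPIs
    alert = score > limit                           # True on days the health score exceeds the limit
    return pd.DataFrame({'Health Score': score, 'Health Score Alert': alert})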
    4 points
  23. Various parts of the world display date and time stamps differently. Often times, we get requests for changing the order of month and day in the timestamp string or to display the date as a Scorecard metric in a specific format. This can be done using the replace() operator in Formula. For example, let's say we wanted to pull the start time for each capsule in a condition and display it as mm/dd/yyyy hh:mm format: $condition.transformToSamples($cap -> Sample($cap.getStart(),$cap.getProperty('Start')), 1d) .replace('/(?<year>....)-(?<month>..)-(?<day>..)T(?<hour>..):(?<minute>..):(?<sec>..)(?<dec>.*)Z/' , '${month}/${day}/${year} ${hour}:${minute}') This takes the original timestamp (for example: '2019-11-13T17:04:13.7220714157Z') and parses it into the year, month, day, hour, minute, second, and decimal to be able to set up any format desired. The various parts of the string can then be called in the second half of the replace to get the desired format as shown above with ${month}/${day}/${year} ${hour}:${minute}. From there, you can either view this data in the trend or use Scorecard Metric to display the Value at Start in a condition based metric. If the end time is desired instead of the start, the only changes needed would be to (1) switch the .getStart operator to .getEnd, and (2) switch the .getProperty('Start') to .getProperty('End'). Note: The '1d' at the end of the 2nd line of the formula represents the maximum interpolation for the data, which is important if you want to view this as a string signal. This value may need to be increased depending on the prevalence of the capsules in the condition.
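If the capsules are being handled in Seeq Data Lab rather than in Formula, pandas can produce the same mm/dd/yyyy hh:mm strings directly. A small sketch (the condition name and date range are examples):
from seeq import spy
import pandas as pd

condition = spy.search({'Name': 'My Condition', 'Type': 'Condition'})
capsules = spy.pull(condition, start='2019-11-01', end='2019-12-01')

# Format the capsule start times as mm/dd/yyyy hh:mm strings
capsules['Start (formatted)'] = pd.to_datetime(capsules['Capsule Start']).dt.strftime('%m/%d/%Y %H:%M')
capsules[['Capsule Start', 'Start (formatted)']]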
    4 points
  24. Sometimes users want to find more documentation on the SPy functions than what is provided in the SPy Documentation Notebooks. A quick way to access SPy object documentation from your notebook is by using the Shift + Tab shortcut to access the docstring documentation of the function. Example view of the docstrings after using the Shift + Tab shortcut: You can expand the docstrings to view more details by clicking the + button circled in red in the above image. Expanded docstrings: From here, you can scroll through the documentation in the pop-up window. Another useful shortcut is the Tab shortcut. This will show a list of available functions or methods for either the SPy module or any other python object you have in memory. Example view of the Tab shortcut:
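The same docstrings can also be printed from a code cell if you prefer not to rely on keyboard shortcuts; help() is standard Python, and the trailing question mark is the usual Jupyter equivalent:
from seeq import spy

# help() prints the same docstring that Shift + Tab displays
help(spy.search)

# In a notebook cell you can also run:  spy.pull?   to open the docstring in a side pane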
    4 points
  25. Hi Theresa, no, you cannot adjust the size of the lanes themselves. But you can use the "Dimming" feature of Seeq to show only lanes with items selected in the Details Pane: Regards, Thorsten
    4 points
  26. Hi Esther, you can do this the following way: 1. Create a Periodic Condition (found in Tools - Pane): 2. Use Signal from Condition to calculate the total duration. Result: Regards, Thorsten
    4 points
  27. The replace operator can be applied to a signal without a transform. Using the replace function without the transform will be less computationally expensive, resulting in better performance. Both ways will get to the same answer. Syntax for the replace function without a transform: $signal.replace('/(\\d+)_\\d+_\\w+/', '$1')
    4 points
  28. Hi Pablo, you can use transform() and replace() to do this. I made an example with a signal that contains the following data: To create a signal that contains only the numeric values I used the following formula: $originalSignal.transform(($p, $c) -> sample($c.getKey(), $c.getValue().replace('/\\w+\\s(\\d+)/', '$1'))) Transform is used to access every sample in the signal by specifying a lambda expression. The current sample is temporarily stored in the variable $c. $p contains the previous sample which is not used here. For each sample of the original signal a new sample is created by using the timestamp (key) of the original sample. The value for the new sample is determined by the value of the sample of the original signal on which replace() is executed. As the name suggests replace() is used to replace portions of a string. In this case I make use of a regular expression to determine the numeric value inside the original string and replace the original string by the determined value. The result looks like this: For your example (12345_20200326_PJR) you have to modify the replace part to: .replace('/(\\d+)_\\d+_\\w+/', '$1') Hope this helps. Regards, Thorsten
    4 points
  29. Hi Banderson, you can create a duration signal from each capsule in a condition, using "signal from condition" tool. As you may know these point and click tools create a Seeq formula underneath. So after using point and click signal from condition tool, you can find the syntax of formula in item properties of that calculation. You can copy this syntax and paste it in Formula and use it to further develop your calculations.
    4 points
  30. Here's an alternative method to getting the last X batches in the last 30 days: // return X most recent batches in the past Y amount of time $numBatches = 20 // X $lookback = 1mo // Y // create a rolling condition with capsules that contain X adjacent capsules $rollingBatches = $batchCondition.removeLongerThan($lookback) .toCapsulesByCount($numBatches, $lookback) // find the last capsule in the rolling condition that's within the lookback period $currentLookback = capsule(now()-$lookback, now()) $batchWindow = condition( $lookback, $rollingBatches.toGroup($currentLookback, CAPSULEBOUNDARY.ENDSIN).last() ) // find all the batches within the capsule identified // ensure all the batches are within the lookback period $batchCondition.inside($batchWindow) .touches(condition($lookback, $currentLookback)) This is similar to yours in that it uses toGroup, but the key is in the use of toCapsulesByCount as a way to get a grouping of X capsules in a condition. You can see an example output below. All capsules will show up as hollow because by the nature of the rolling 'Last X days' the result will always be uncertain.
    3 points
  31. FAQ: I have a signal with a gap in the data from a system outage. I want to replace the gap with a constant value, ideally the average of the time period immediately before the gap. Solution:
1. Once you've identified your data gaps, extend the capsules backwards by the amount of time over which you want to take the average. In this example, we want to fill in the gap with the average of the 10 minutes before the signal dropped, so we will extend the start of the data gap capsule 10 minutes into the past. This is done using the move function in Formula: $conditionForDataGaps.move(-10min,0min)
2. Use Signal from Condition to calculate the average of the gappy signal during the condition created in step 1. Make sure to select "Duration" for the timestamp of the statistic.
3. Stitch the two signals together using the splice function. The validValues() function at the end ensures a continuous output signal. $gappysignal.splice($replacementsignal,$gaps).validValues()
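The equivalent gap-fill can also be prototyped in Seeq Data Lab if the signal has already been pulled into pandas. A rough sketch, assuming series is a timestamp-indexed Series with NaNs during the outage and that gap_start / gap_end are known timestamps:
import pandas as pd

def fill_gap_with_prior_average(series, gap_start, gap_end, lookback='10min'):
    window = series.loc[gap_start - pd.Timedelta(lookback):gap_start].dropna()
    replacement = window.mean()                       # average immediately before the gap
    filled = series.copy()
    filled.loc[gap_start:gap_end] = filled.loc[gap_start:gap_end].fillna(replacement)
    return filled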
    3 points
  32. If you modify your wind_dir variable to $wind_dir = group( capsule(0, 22.5).setProperty('Value', 'ENUM{{0|N}}'), capsule(22.5, 67.5).setProperty('Value', 'ENUM{{1|NE}}'), capsule(67.5, 112.5).setProperty('Value', 'ENUM{{2|E}}'), capsule(112.5, 158.5).setProperty('Value', 'ENUM{{3|SE}}'), capsule(158.5, 202.5).setProperty('Value', 'ENUM{{4|S}}'), capsule(202.5, 247.5).setProperty('Value', 'ENUM{{5|SW}}'), capsule(247.5, 292.5).setProperty('Value', 'ENUM{{6|W}}'), capsule(292.5, 337.5).setProperty('Value', 'ENUM{{7|NW}}'), capsule(337.5, 360).setProperty('Value', 'ENUM{{8|N}}') ) You will get an ordered Y axis: This is how Seeq handles enum Signal values from other systems - it has some limitations, but it seems like it should work well for your use case.
    3 points
  33. Hi Coolhunter, I have seen this requested multiple times and one solution might be to use a custom PI Vision symbol that enables you to embed Seeq content into PI Vision. A solution to this challenge can be found here: Get the most out of PI Vision - Seeq Analytics in PI Vision - Seeq in PI Vision (werusys.de) If you want to know more about the PI Vision integration with Seeq feel free to drop me a mail: julian.weber@werusys.de Cheers, Julian Seeq-WerusysPIVision.pdf
    3 points
  34. Check out the Data Lab script and the video that walks through it to automate the data pull -> apply ML -> push results to Workbench workflow in an efficient manner. There are of course many different ways to approach this, but this one handles it well in bulk. Use case details: Apply ML on Temperature signals across the whole Example Asset Tree on a weekly basis. For your case, you can build your own asset tree, filter the relevant attributes instead of Temperature, and set the spy.jobs.schedule frequency to whatever works for you. Let me know if there are any unanswered questions in my post or demo. Happy to update as needed. apply_ml_at_scale.mp4 Apply ML on Asset Tree Rev0.ipynb
    3 points
  35. Have you ever wanted to scale calculations in Seeq across different assets without having to delve into external systems or write code to generate asset structures? Is your process data historian a giant pool of tags which you need to have organized and named in a human readable format? Do you want to take advantage of Seeq features such as Asset Swapping and Treemaps, but do not have an existing Asset structure to leverage? If the answer is yes, Asset Groups can help! Beginning in Seeq version R52 Asset Groups were added to configure collections of items such as Equipment, Operating Lines, KPIs, etc via a simple point-and-click tool. Users can leverage Asset Groups to easily organize and scale their analyses directly in Workbench, as well as apply Seeq Asset-centric tools such as Treemaps and Tables across Assets. What is an Asset Group? An Asset Group is a collection of assets (listed in rows) and associated parameters called “Attributes” (listed in columns). If your assets share common parameters, Asset Groups can be a great way to organize and scale analyses instead of re-creating each analysis separately. Assets can be anything users want them to be. It could be a piece of equipment, geographical region, business unit, KPI, etc. Asset Groups serve to organize and map associated parameters (Attributes) for each Asset in the group. Each Asset can have one or several Attributes mapped to it. Attributes are parameters that are common to all the assets and are mapped to tags from one or many data sources. Examples of Asset/Attribute combinations include: Asset Attribute(s) Pump Suction Pressure, Discharge Pressure, Flow, Curve ID, Specific Gravity Heat Exchanger Cold Inlet T, Cold Outlet T, Hot Inlet T, Hot Outlet T, Surface Area Production Line Active Alarms, Widgets per Hour, % of time in Spec It’s very important to configure the name of the common Attribute to be the same for all Assets, even if the underlying tag or datasource is not. Using standard nomenclature for Attributes (Columns) enables Seeq to later compare and seamlessly “swap” between assets without having to worry about the underlying tag name or calculation. Do This: Do Not Do This: How to Configure Asset Groups in Seeq Let’s create an Asset Group to organize a few process tags from different locations. While Asset Groups support pre-existing data tree structures (such as OSI PI Asset Framework), the following example will assume the tags to not be structured and added manually from a pool of existing process tags. NOTE: Asset Groups require an Asset Group license. For versions prior to R54, they also have to be enabled in the Seeq Administrator Configuration page. Contact your Seeq Administrator for details. 1) In the “Data” tab, create a new Asset Group: 2) Specify Asset Group name and add Assets You can rename the assets by clicking on the respective name in the first column. In this case, we'll define Locations 1-3. 3) Map the source tags a. Rename “Column 1” by clicking on the text and entering a new name b. Click on the (+) icon to bring up the search window and add the tag corresponding to each asset. You can use wildcards and/or regular expressions to narrow your search. c. Repeat mapping of the tags for the other assets until there’s a green checkmark in each row d. 
Additional source tags can be used by clicking on “Add Column” button in the toolbar In this case, we will add a column for Relative Humidity and map a tag for each of the Locations 4) Save the Asset Group 5) Trend using the newly created Asset Group The newly created Asset Group will now be available in the Data pane and can be used for navigation and trending a. Navigate to “Location 1” and add the items to the display pane by clicking on them. You can also change the display range to 7 days to show a bit more data b. Notice the Assets Column now listed in the Details pane showing from which Asset the Signal originates We can also add the Asset Path to the Display pane by clicking on Labels and checking the desired display configuration settings (Name, Unit of Measure, etc). c. Swap to Location 2 (or 3) using the Asset Swapping functionality. In the Data tab, navigate up one level in the Asset Group, then click the Swap icon ( ) to swap the display items from a different location . Notice how Seeq will automatically swap the display items 6) Create a “High Temperature” Condition Calculations configured from Asset Group Items will “follow” that asset, which can help in scaling analyses. Let’s create a “High Temperature” condition. a. Using “Tools -> Identify -> Value Search” create a condition when the Temperature exceeds 100 b. Click “Execute” to generate the Condition c. Notice the condition has been generated and is automatically affiliated with the Asset from which the Signals were selected d. Swap to a different Asset and notice the “High Temperature” Condition will swap using the same condition criteria but with the signals from the swapped Asset Note: Calculations can also be configured in the Asset Group directly, which can be advantageous if different condition criteria need to be defined for each asset. This topic will be covered in Part 2 of this series. 7) Create a Treemap Asset Groups enables users to combine monitoring across assets using Seeq’s Treemap functionality. a. Set up a Treemap for the Assets in the Group by switching to the Treemap view in the Seeq Workbench toolbar. b. Click on the color picker for the “High Temperature” condition to select a color to display when that condition is active in the given time range. (if you have more than one Condition in the Details pane, repeat this step for each Condition) c. A Treemap is generated for each Asset in the Asset Group. Signal statistics can optionally be added by configuring the “Statistics” field in the toolbar. Your tree map may differ depending on the source signal and time range selected. The tree map will change color if the configured Condition triggers during the time period selected. This covers the basics for Asset Groups. Please check out Part 2 on how to configure calculations in Asset Groups and add them directly to the Hierarchy.
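If you eventually want to script a similar structure instead of building it by hand, SPy's asset tree support in Seeq Data Lab can create an equivalent hierarchy. The exact options are covered in the "Asset Trees 1 – Introduction.ipynb" tutorial in the SPy Documentation folder; the sketch below is only an outline, and the workbook, location, and tag names are placeholders:
from seeq import spy

tree = spy.assets.Tree('Compressor Monitoring', workbook='Asset Group Example')
tree.insert(children=['Location 1', 'Location 2', 'Location 3'])

# Map a temperature tag to each location (search terms are placeholders)
for location in ['Location 1', 'Location 2', 'Location 3']:
    tag = spy.search({'Name': location + ' Temperature', 'Type': 'Signal'})
    tree.insert(children=tag, parent=location)

tree.push()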
    3 points
  36. The following steps will create a prediction model for every capsule in a condition. Step 1. pick a condition with capsules that isolate the desired area of regression. Any condition with non-overlapping capsules will work as long as there are enough sample points within its duration. For this example, an increasing temperature condition will be used. However, periodic conditions and value search conditions will work as well. Step 2. Create a time counter for each capsule in the condition. This can be done with the new timesince() function in the formula tool. The timesince() function will have samples spaced depending on the selected period so it is important to select a period that has enough points to build a model with. See below for details on the timesince() formula setup. Step 3. In this step a condition with capsule properties that hold the regression constants will be made. This will be done in the formula tool with one formula. The concept behind the formula below is to split the condition from step one into individual capsules and use each of the capsules as the training window for a regression model. Once the regression model is done for one capsule the coefficients of the model are assigned as properties to the capsule used for the training window. The formula syntax for a linear model-based condition can be seen below. An example of a polynomial regression model can be found in the post below. $Condtition.removeLongerThan(24h).transform($cap-> { $model=$SignalToModel.validValues().regressionModelOLS( group($cap),false,$Time) $cap.setProperty('Slope',$model.get('coefficient1')) .setProperty('Intercept',$model.get('intercept'))}) Below is a screenshot of how the formula looks in Seeq. Note: The regression constants can be added to the capsule pane by clicking on the black stats button and selecting add column. Below is a screen shot of the results. Step 4. Once a condition with the regression coefficients has been created the information will need to be extracted to a signal form. The following formula syntax will extract the information. This will need to be repeated for every constant from your regression model. e.g.(So for a linear model this must be done for both the slope and for the intercept.) The formula syntax for extracting the regression coefficients can be seen below. $signal=$Condition.transformToSamples( $cap -> sample($cap.getmiddle(), $cap.getProperty('Intercept').toNumber()), 1min) $signal.aggregate(average(),$Condition,durationKey()) Below is a screenshot of the formula in Seeq. Below is a screenshot of the display window of how the signals should look. Step 5. Use the formula tool to plot the equation. See screenshot below for details. Final Result
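For users who want to double-check the Formula result in Seeq Data Lab, the per-capsule slope and intercept can be reproduced with numpy once the signal and condition have been pulled. A sketch under a few assumptions: signal is a single-column DataFrame from spy.pull with grid=None, and capsules is the DataFrame of capsules pulled for the same condition:
import numpy as np
import pandas as pd

def fit_per_capsule(signal, capsules):
    rows = []
    for _, cap in capsules.iterrows():
        window = signal.loc[cap['Capsule Start']:cap['Capsule End']].iloc[:, 0].dropna()
        if len(window) < 2:
            continue                                   # not enough samples to fit a line
        t = (window.index - window.index[0]).total_seconds()
        slope, intercept = np.polyfit(np.asarray(t), window.values, 1)
        rows.append({'Capsule Start': cap['Capsule Start'], 'Slope': slope, 'Intercept': intercept})
    return pd.DataFrame(rows)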
    3 points
  37. I know this is an old thread, but I am including what I did in case posterity finds it useful. I am more or less working through the same issue, but with a somewhat noisier and less reliable signal. I found the above a helpful starting point, but had to do a bit of tweaking to get something reliable that didn't require tuning. The top lane is the raw signal, from which I removed all the dropouts, filled in any gaps with a persisted value, and did some smoothing with agileFilter to get the cleansed level on the next lane. For the value decreasing condition I used a small positive threshold (since there were some small periods of the levels fluctuating and the tank being refilled was a very large positive slope) and a merge to eliminate any gaps in the condition shorter than 2 hours (since all the true fills were several hours). For the mins and maxes I did not use the grow function on the condition like was done above; instead I just used relatively wide max durations and trusted that the cleansing I did on value decreasing was good enough. I was then able to use the combineWith and running delta functions on the mins and maxes, and filter to get the deliveries and the usage. One additional set of calculations I added was to filter out all the periods of deliveries by converting the delta signal to a condition and removing from the cleansed signal all the data points during capsules that started positive. I then subtracted a running sum of the delta function over a quarter, yielding a signal without the effect of any of the deliveries over each quarter. I could then aggregate the delta for days and quarters of that signal to get the daily and quarterly consumption figures.
Chart showing all the calculated signals for this example. Top lane is the raw signal. Next lane shows the cleansed signal with the nexus of the mins and maxes between deliveries. Middle lane combines the mins and maxes and takes the running deltas, and then filters them into delivery and usage numbers. The next lane removes the deliveries from the cleansed signal and does a running sum of the consumption over the quarter. The last two lanes are daily and quarterly deltas in those consumption figures.
Calculation for identifying the periods in which the chemical level is decreasing. I used a small positive threshold and removed two hour gaps, and that allowed it to span the whole time between deliveries.
Aggregate the cleansed signal over those decreasing time periods to find the min and max values.
Used the combineWith and running delta functions to get the net deltas of consumption and deliveries. Filtered based on positive and negative value to separate into deliveries and consumption numbers.
Removed the delivery numbers from the cleansed signal in order to get a running sum of consumption over a quarter.
Aggregated the deltas in the consumption history over days and quarters to calculate daily and quarterly consumption.
    3 points
  38. I believe the unbounded error occurs because the original code creates conditions for >= 0 and < 0 with no maximum duration. You can give $gte and $lt maximum durations, which may fix the issue.
$gte = ($delta >= 0).setMaximumDuration(7d)
$lt = ($delta < 0).setMaximumDuration(7d)
In your example, the reversal count will not count 3; it will count 1. Let's transform your example as runningDelta() would and assume the previous value was also 5. 5 -> 5 -> 10 -> 15 -> 10 becomes 0 -> 5 -> 5 -> -5. Now let's identify the gte and lt conditions. We would see a $gte capsule at {0 -> 5 -> 5} and an $lt capsule at {-5}. Using our formula, the result would be 1 reversal. However, this does bring up a potential issue: if the example had another 10 at the end, the delta samples would become 0 -> 5 -> 5 -> -5 -> 0, and that would create an additional $gte capsule at the very end, resulting in 2 reversals. You could fix this by changing $gte to strictly greater than zero instead of greater than or equal to zero. Thanks, Andrew
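A sketch that combines both suggestions above (bounded capsules and a strict inequality); the 7d maximum duration is carried over from the thread and may need adjusting:
// Strictly greater than zero avoids the extra capsule created by a trailing zero delta
$gt = ($delta > 0).setMaximumDuration(7d)
$lt = ($delta < 0).setMaximumDuration(7d)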
    3 points
  39. Sometimes when looking at an XY plot, it can be helpful to use lines to designate regions of the chart that you'd like users to focus on. In this example, we want to draw a rectangle on the XY plot showing the ideal region of operation, like below. We can do this utilizing Seeq's ability to display formulas overlaid against an XY plot.
1. For this first step, we will create a ~horizontal line on the scatter plot at y=65. This can be achieved using a y=mx+b formula with a very small slope and a y-intercept of 65. The equation for this "horizontal" line on the XY plot is:
0.00001*$x+65
2. If we want to restrict the line to only the segment making the bottom of our ideal operation box, we can leverage the within function in Formula to clip the line at values we specify. Here we add to the original formula to only include values of the line between x=55 and x=60.
(0.00001*$x+65)
.within($x>55 and $x<60)
3. Now let's make the left side of the box. A similar concept can be applied to create a vertical line, only a very large positive or negative slope is used. For our "vertical" line at x=55, we can use the following formula. Note that some adjustment of the y-axis scale may be required after this step.
(-10000*($x-55))
4. To clip a line into a line segment by restricting the y values, you can use the max and min functions in Formula, combined with the within function. The following formula is used to achieve the left side boundary of our box:
(-10000*($x-55))
.max(65)
.min(85)
.within($x<55.01 and $x>54.99)
The same techniques from steps 1-4 can be used to create the max temperature and max wet bulb boundaries.
Formula for max temp boundary:
(0.00001*$x+85).within($x>55 and $x<60)
Formula for max wet bulb boundary:
(-10000*($x-60))
.max(65)
.min(85)
.within($x<60.01 and $x>59.99)
    3 points
  40. Hi Julian, The first formula ($signal.toCondition()) will create a capsule the length of each time frame between step changes. From there, you can use Signal from Condition to calculate the "Total Duration" statistic for each of the capsules created (use the result of the formula for both the condition of interest and bounding condition).
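If you would rather keep everything in Formula, a minimal sketch of the same idea; the 30d maximum capsule duration and 1d maximum interpolation are placeholders, and $signal is the step-changing signal:
// One capsule per constant-value period, then one sample per capsule holding its duration
$steps = $signal.toCondition().removeLongerThan(30d)
$steps.transformToSamples($cap -> sample($cap.startKey(), $cap.duration()), 1d)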
    3 points
  41. Frequently Asked Question: Is there a way to change the color of my signals overlaid in capsule time view to highlight the different capsules?
Solution: One approach to changing the color of the signal being overlaid in capsule time view is to create separate signals for each desired display color that only contain samples during specific capsule(s). The examples below provide a step-wise approach to coloring based on either logic or time.
Logic Based Coloring:
Scenario: We start out with a temperature signal that we are overlaying based on a daily condition. We want the temperature signal to show up in red if the daily average temperature is greater than 80F, and blue if the daily average temperature is below 80F.
1. Switch back to calendar view and calculate the Average Daily Temperature using the Signal from Condition tool. In this example, we choose the "duration" time stamp to simplify subsequent steps.
2. Use the Value Search tool to identify days in which the Average Daily Temp signal is greater than 80F.
3. Repeat step 2 to find the days where the average daily temperature was below 80F.
4. Use Formula to break the original temperature signal into two pieces, one when the daily average temperature is above 80F, and one when the daily average temperature is below 80F. The formula code to complete step 4 is:
//take the temperature signal and keep only samples within the Avg Daily Temp < 80F condition
$temp.within($DailyAvgLow)
5. Finally, switch back to capsule time view, and use dimming to display only your new "Temp signal during High Avg Daily Temp" and "Temp signal during Low Avg Daily Temp" signals. Use the "One Lane" and "One Y-axis" buttons to display in a single lane on the same axis.
Time Based Coloring:
Scenario: We have a temperature signal and we want to overlay data from the last 4 weeks in different colors so we can easily see changes in the signal from week to week.
1. Use Formula to create a condition for the past 7 days.
//Create a condition with max capsule duration 8d, comprised of a single capsule that begins at now-7d and ends at the current time.
condition(8d,capsule(now()-7d,now()))
2. Use Capsule Adjustments in Formula (the move() function) to create a capsule for 2 weeks prior.
//Shift the capsule for last week back in time by 7d.
$lastWeekCapsule.move(-7d)
3. Repeat step 2, shifting by -14d and -21d to create capsules for 3 weeks prior and 4 weeks prior.
4. Use Formula to break the original temperature signal into 4 pieces, one for each of the previous 4 weeks.
//Create a new signal from the original temp signal that contains only samples that fall within the prior 7d.
$temp.within($lastWeek)
5. Switch to capsule time view, and put all the new temperature signals on one lane and one y-axis.
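For the time-based scenario, steps 1, 2, and 4 can also be collapsed into a single Formula per week; a minimal sketch for the week two weeks ago, assuming $temp is the original temperature signal:
// Capsule spanning 14 to 7 days ago, then keep only the temperature samples inside it
$twoWeeksAgo = condition(8d, capsule(now()-7d, now())).move(-7d)
$temp.within($twoWeeksAgo)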
    3 points
  42. Hello Arnaud, the error is occurring because of the unit of the signal. I guess the unit of your signal is "t", which is converted to "t²" and "t³" for the respective parts of the formula; therefore they cannot be used in an addition or subtraction. To get the formula working, simply convert the signal to a unitless one before calculating the polynomial. The resulting signal is unitless. You can use the setUnits() function to specify a unit if you need one. Hope this helps. Regards, Thorsten
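A minimal sketch of that fix, with made-up polynomial coefficients and assuming the original unit is "t" ($signal is a placeholder name):
// Strip the unit before forming the polynomial terms, then re-apply it to the result
$x = $signal.setUnits('')
(0.2*$x^3 - 1.5*$x^2 + 4*$x + 10).setUnits('t')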
    3 points
  43. This week, I used the Journal link approach suggested by Kjell for 40 similar assets! (Yes, 40.) I created a couple of scorecard tables and output them to an Organizer topic. This exercise was a good test of hand-eye coordination and patience 🙂 Hopefully in the near future I can rely on a better and more efficient experience in Seeq for this use case.
    3 points
  44. Hi Tommy, The easiest way to do this in Seeq is to use a condition to define the if condition, and then splice in a new signal when your condition is true. Follow the steps below to achieve this.
1) Use the Value Search tool to find when your signal .OP <= 0
2) In Formula, enter the following:
$flowsignal.splice(0.toSignal(), $conditionclosed)
where $flowsignal is your Flow Rate signal, and $conditionclosed is the condition we created in Step 1. What we are doing here is splicing in a new signal we create ( 0.toSignal() ) which will equal 0 when the .OP <= 0 condition is true.
You could also write all of this in one Formula (combining steps 1 & 2 together) by writing the following:
$conditionclosed = $OP.validValues().valueSearch(isLessThanOrEqualTo(0))
$flowsignal.splice(0.toSignal(), $conditionclosed)
Please let me know if this solved your question. -Kjell
    3 points
  45. Hi Greg, Since you already have a condition identifying when your signal changes, to identify the magnitude of the change all you need to do is use Signal from Condition. Here is an example of how it might look: In this case I am using "Range" because it will always give me a positive value of the change in my power signal. If I wanted to know whether the change was positive or negative, I would use "Delta" instead. Here I am using the Duration as my timestamp so I can more easily accomplish the next step: filtering the original change condition. Since you want to count the number of instances where the value changes by more than some amount, we can then filter our original condition (the one that identified the change) so it only retains the capsules where the change was over your threshold. To do this I will use Formula: In this case, I am filtering my Load Swing condition to keep capsules where the swing is greater than 25 kW. You can see the filtered condition shown in blue, where my original condition is shown in green. From here, you can use a Scorecard Metric to count the number of filtered capsules. Hope this helps!
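Since the filtering formula itself is only shown in a screenshot, here is a minimal sketch of one way to write it, assuming $loadSwing is the original change condition and $swingSize is the Signal from Condition result (names and the 25 kW threshold are placeholders):
// Keep only the load-swing capsules that overlap times when the calculated swing exceeds 25 kW
$loadSwing.touches($swingSize > 25kW)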
    3 points
  46. During a complex analysis, your Workbench can become cluttered with intermediate Signals and Conditions. Users utilize the Journal tab to keep their calculations documented and organized, often in the form of a Data Tray. If you have been adding Item links one-by-one, try using this trick to add all (or a large subset) of your items to your Journal all at once:
1. Select all items in your display
2. Click the Annotate button on the Toolbar
3. Cut the Item links from your Annotation
4. Paste the links into your Journal
    3 points
  47. Hi Chris- I think this can be achieved using a combination of the Value Search and Composite Condition tools. In the following screenshot, I have a signal (temperature) and a condition (Start Capsule). Let's say I'd like to create a new condition that starts at the start of each capsule in Start Capsule and ends when the temperature is 90. First, use the Value Search tool to identify when the temperature is 90 F. This results in several small capsules each time the temperature is 90 F. Next, combine these 2 conditions using the join operator in the Composite Condition tool. This results in a new condition (Start to Temp=90) where each capsule starts at the start of the Start Capsule capsule and ends at the start of the Temp=90 capsule. Please let me know if you have any additional questions. Thanks, Lindsey
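For anyone who prefers Formula over the point-and-click tools, a sketch of the equivalent join, assuming the Temp=90 condition from the Value Search step already exists as $temp90 (the 40h maximum capsule duration is a placeholder for the longest expected transition):
// Each capsule runs from the start of a Start Capsule to the start of the next Temp=90 capsule
$startCapsule.join($temp90, 40h)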
    3 points
  48. Hi Moriel, you can use a formula to calculate this: In this example I calculate the average power consumption for each month only when "Compressor Stage" equals "STAGE 2". Regards, Thorsten
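The formula itself is in the screenshot; a minimal sketch of one way to write it, assuming $power is the consumption signal, $stage is the string-valued "Compressor Stage" signal, and the string comparison and timezone shown are placeholders:
// Keep power only while the stage signal reads STAGE 2, then average it over each calendar month
$stage2 = ($stage == 'STAGE 2')
$power.within($stage2).aggregate(average(), months('US/Eastern'), durationKey())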
    3 points
  49. FAQ: Why does the derivative look funny? When taking a derivative and the result looks like the screenshot below, but a smoother signal is expected, it is likely that the input signal is step interpolated. To verify this, click on the item properties "i" of the input signal and check the interpolation method. Item Properties View The interpolation method can be corrected by simply adding toLinear() in front of the derivative. See the screenshot below of the formula. Final Results
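A minimal sketch of that fix, assuming the step-interpolated input is $signal (depending on the Seeq version, toLinear() may require a maximum interpolation argument, e.g. toLinear(2h)):
// Convert the step-interpolated signal to linear interpolation before differentiating
$signal.toLinear().derivative()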
    3 points
  50. Seeq Version: R21.0.43.03, but the solution is applicable to previous versions as well. The Profile Search tool is great for specifying a profile in Signal A and then looking for occurrences of that profile throughout time. In the screenshot below I've used Profile Search to identify when the Compressor Power Area A signal resembles the shape of a chair. For basics about how to use the tool, check out the Seeq KB article Profile Search. However, what if I want to look for that same profile on another signal? In the screenshot below, I've added a second signal, Compressor Power Area G. I'd like to identify when the "chair" profile I previously specified for Compressor Power Area A is present in Compressor Power Area G. I can do this by using the profileSearch() function in Seeq Formula. Here is how...
1. Start with the Chair in Area A condition I previously made and use Duplicate to Formula. The duplicated formula looks like this (note that $cpA refers to Compressor Power Area A):
profileSearch($cpA, toTime("2019-09-02T15:43:30.135Z"), toTime("2019-09-03T06:28:09.629Z"), 98, 0.5, 0.3, 0.3)
2. Modify the formula to add $cpG (Compressor Power Area G) to the start of the function. This is an optional argument in Profile Search which allows us to use the profile identified on signal $cpA and look for when it occurs on signal $cpG. For more information on the profileSearch() function, check out the documentation available in the Formula tool.
$cpG.profileSearch($cpA, toTime("2019-09-02T15:43:30.135Z"), toTime("2019-09-03T06:28:09.629Z"), 98, 0.5, 0.3, 0.3)
Here is a screenshot of what it looks like in the Formula tool:
3. View the final result. In this example two "chairs" were identified in Area G.
    3 points
This leaderboard is set to Los Angeles/GMT-07:00