
Lindsey.Wilcox

Administrators
  • Posts: 85
  • Joined
  • Last visited
  • Days Won: 16
Everything posted by Lindsey.Wilcox

  1. Hi Adam- The NOAA weather connector brings in all historical and forecast data that is available. This typically includes about a 7-day forecast: Are you not observing the forecasted portion (dotted line) of the signal? Could you please provide a screenshot of what the temperature signal looks like? Thanks, Lindsey
  2. Hi Adam- Which specific signals are you looking to get a forecast for? Lindsey
  3. Hi Connor- Yes, you can do this using Capsule Properties. Capsule properties can be displayed as the header of Scorecard columns (in place of the start / end time). More information on this can be found here: Please let me know if you have any additional questions. Thanks, Lindsey
  4. Hi Connor - To display the value of a signal during each capsule (event frame), you will want to create a Condition based scorecard (as you've done). However, the item to measure should be the signal. With the signal selected, you won't need to select a statistic (since you just want the value); just specify the condition (event frames) that you'd like to display the value for. Please let me know if you have any additional questions. Thanks, Lindsey
  5. Hi Sivaji - Unfortunately, there is not a workaround to plot a vertical line for average capsule duration in Capsule Time. However, I have logged a Feature Request for this item (CRAB-24964). Additionally, I have linked you to the Feature Request via Support Ticket SUP-27734. This ensures that you will be notified of any progress towards implementation. Please let me know if you have any additional questions. Thanks, Lindsey
  6. Hi Sivaji- You have to take a few steps to get more than ten pushed signals to display in the worksheet, but it's entirely possible. To begin, make sure that you assign the result of spy.push to a variable. Here is some example code:

    push_results = spy.push(data=my_data_df)
    # Pull the workbook. Use the URL of the workbook where the signals were pushed;
    # you can take it from the output of the above command.
    url = 'https://explore.seeq.com/36202737-B32E-463B-A583-202BD4D8A7AB/workbook/6D2D5190-3A28-4E60-B9AD-F4DCE35ACD06/worksheet/904599EB-93D1-4C32-BD23-643D641EB063'
    wb = spy.workbooks.pull(url)[0]
    # Get the worksheet from the workbook
    ws = wb.worksheets[0]
    # Add all the pushed items to the worksheet
    ws.display_items = push_results
    # Push the workbook with the modified worksheet back to Seeq
    spy.workbooks.push(wb)
  7. Hi Alex- Thank you so much for all of these great suggestions! I have some comments / notes on each of your suggestions (in red). I have linked you to each of the Feature Requests (CRABs) mentioned to ensure that you are notified of any progress towards implementation.
       • Change the x and y axis range by typing in the values, like Customize in trend view. Scrolling and dragging can be painful, particularly if I have multiple scatter plots I want to have the same range so I can show them together in Organizer. This Feature Request is already logged as CRAB-18708.
       • Filter by a/multiple specific condition(s), not all present. This is already possible as described in the Filtering Data in Scatter Plot section of our Knowledge Base article: https://seeq.atlassian.net/wiki/spaces/KB/pages/153518135/Scatter+Plot#ScatterPlot-FilteringDatainaScatterPlot. Once you have selected to filter the Scatter Plot by condition, select the condition(s) of interest in the Details Pane to further customize the display.
       • Add multiple trends to either the x or y axis. This Feature Request is already logged as CRAB-5295.
       • Gradient color by time, e.g. oldest to newest as lightest to darkest. This Feature Request is already logged as CRAB-5289.
       • Be able to edit the color by start and end ranges. This is already possible as described in the Coloring Data section of our Knowledge Base article: https://seeq.atlassian.net/wiki/spaces/KB/pages/153518135/Scatter+Plot#ScatterPlot-FilteringDatainaScatterPlot. From the color menu above the scatter plot, a start and end time can be specified to color based on fixed date ranges.
       • Have the label show values of other signals at that timestamp. I have filed a new Feature Request for this item: CRAB-24198.
       • Draw a freeform shape to select points from the plot to make a condition, not just a rectangle. This Feature Request is already logged as CRAB-21692.
  8. Hi Sivaji- Thanks for reaching out. Yes, the top 10 signals are displayed in the worksheet by design. If more than 10 signals are pushed, they are still searchable in the workbook. If you’d like to trend more than 10 items, I’d recommend that you pull the workbook, modify the display_items for the worksheet, and then push it back. There is not a maximum number of signals that can be plotted on a single worksheet. However, as the number of trended items increases, it becomes more difficult to view the data clearly and identify trends in your signals. Additionally, if a significant number of items are trended, performance may degrade as well. Please let me know if you have any additional questions. Thanks, Lindsey
  9. Background In this use case, we start with a signal that indicates the product code as a 9-digit number. I would like to aggregate the data by product code and display the results in a histogram. The current method to do this involves using the .toCondition() operator on the product code signal and then aggregating the results in the Histogram tool using the value property of the product code condition. However, when this is done, the product code bin labels are displayed in the histogram using scientific notation. I'd like to see the full product code; the value in scientific notation loses its meaning. This feature request is logged as CRAB-22684. This post documents the current workaround. Analysis Steps 1. In Formula, convert the product code signal to a string signal and add an underscore ("_") to the end. 2. Use the .toCondition() operator to create a new condition with a capsule for every value in the string product code signal. 3. Finally, aggregate the condition created in Step 2 by the value property in the Histogram tool.
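Outside Seeq, the same display pitfall can be sketched with a few lines of pandas (purely illustrative: the product codes and the trailing-underscore trick mirror the steps above, but the data is hypothetical):

```python
import pandas as pd

# Hypothetical 9-digit product codes, stored as floats the way a historian
# might return them; plotting tools often render large floats in scientific notation
codes = pd.Series([123456789.0, 987654321.0, 123456789.0])

# Converting each code to a string (with a trailing underscore, as in Step 1
# above) forces downstream tools to treat it as a label rather than a number
string_counts = codes.map(lambda c: f"{int(c)}_").value_counts()
print(string_counts)
```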
  10. This post summarizes the use case presented during the Advanced Analytics for Sustainability webinar held in January 2021. Background Manufacturers often don’t have insight into how much water and energy their facility is actually consuming. Having a way to quickly aggregate this consumption data is important, because it provides the information necessary to determine if the process is using water and electricity efficiently when running or if the water and electricity usage is minimized when not running. The following use case shows how to perform some of these aggregations in Seeq and scale the calculations to other process units or assets. For this use case, an asset tree is leveraged. The asset tree in the following image has a separate asset for each part of the process. For this analysis, the electricity consumption tag, the water consumption tag and the mode tag are used. Analysis Aggregate consumption by mode of operation The Histogram tool is used to aggregate the consumption data by mode of operation, as described in the following steps. 1. Create a condition that contains a capsule for each mode. This can be done using the .toCondition() operator in Formula. Note that each of the capsules in the Mode Condition has a 'Value' property that indicates the value of the Mode signal. These capsule properties can be used to aggregate data in the Histogram tool. 2. Use the Histogram tool to create a histogram of electricity consumption by mode of operation. 3. To create a similar histogram for water consumption, first duplicate the Electricity Consumption by Mode histogram from the Item Properties menu. Next, provide a new name and switch the input signal to create the second histogram. 4. These calculations can easily be scaled to other assets using Seeq's Asset Swapping capability. With one click, the same analytics are applied to another plant area.
Quantify consumption while not running Manufacturers want to minimize consumption when their processes are not running. To do this, they first need to quantify how much electricity is being consumed while not running and then identify instances of high usage. This can be achieved using the following steps. 1. Use the Value Search tool to identify when the process is not running. 2. Use the Signal from Condition tool to quantify the total amount of electricity consumed while not running. 3. Use the Value Search tool to identify when the amount of electricity consumed while not running is high. Notice that there aren't any visible capsules along the top of the trend. However, if this new condition is selected in the Details Pane, one capsule is identified in the Capsules Pane, with a duration of 0. The duration of this capsule is 0 because the searched signal is discrete and there is only a single data point that exceeds the limit. 4. Since there are several areas to monitor for environmental deviations, the Treemap view can be leveraged. The color coding of the treemap indicates which plant areas have exceeded the maximum threshold and is a great way to quickly identify parts of the process that are not performing optimally.
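As a rough sketch of the same logic outside Seeq, here is a pandas version with hypothetical hourly data and an assumed 30 kWh idle limit (the column names and values are illustrative, not from the webinar):

```python
import pandas as pd

# Hypothetical hourly samples: operating mode and electricity consumption (kWh)
df = pd.DataFrame({
    "mode": ["Running", "Not Running", "Not Running", "Running", "Not Running"],
    "electricity_kwh": [120.0, 15.0, 22.0, 110.0, 40.0],
})

# Step 1 analogue of Value Search: find samples where the process is not running
idle = df[df["mode"] == "Not Running"]

# Step 2 analogue of Signal from Condition: totalize consumption while not running
idle_total = idle["electricity_kwh"].sum()

# Step 3 analogue: flag high idle usage against an assumed 30 kWh limit
high_idle = idle[idle["electricity_kwh"] > 30]
print(idle_total, len(high_idle))
```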
  11. Hi Devin- Unfortunately there is not currently a way to drop a cursor on the Scatter Plot, as you described. However, this Feature Request is already logged as CRAB-22175. I have opened a support ticket (SUP-26165) to link you to the feature request; this ensures that you will be notified of any progress towards its implementation. Thanks, Lindsey
  12. Hi Sivaji - Unfortunately, this is not currently possible as you described. However, it is a commonly requested feature (CRAB-15825), and I have linked you to the request so that you will be notified of any progress towards its implementation. As a workaround, I recommend creating a Scorecard to display the values. Please let me know if you have any additional questions. Thanks, Lindsey
  13. Hi Yanmin- Question 1: You can find more information about the CSV import tool in our Knowledge Base: https://seeq.atlassian.net/wiki/spaces/KB/pages/537690127/Import+CSV+Files+2.0. When you use this tool to import data, you can specify how you would like Seeq to handle "invalid" data (like the failure or null values you described). For the example you provided, I think you would want to select "Skip". Once the data is imported Signals A & B will only have data points where there is valid data. Question 2: The .runningDelta() operator calculates the difference between adjacent samples, as you described in your post. More information can be found in the Formula documentation. Please let me know if you have any additional questions. Thanks, Lindsey
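For readers working with data exported out of Seeq, the behaviour of .runningDelta() can be approximated in pandas with Series.diff(), which also returns the difference between each sample and the previous one (a hypothetical sketch, not Seeq's implementation):

```python
import pandas as pd

# A small hypothetical signal; diff() returns sample-to-sample differences,
# with the first value undefined (NaN) since it has no predecessor
signal = pd.Series([10.0, 12.5, 12.0, 15.0])
delta = signal.diff()
print(delta)
```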
  14. Background I would like to create a Histogram that displays the average temperature for each hour during the night shift (8 PM - 8 AM). However, when I configure a Histogram to aggregate based on the hour of the day, the bins are displayed in numerical order, not chronological order. The following steps describe how to display the bins in chronological order. Solution 1. Create an Hourly Periodic Condition 2. Use the Formula tool to assign a property to each capsule in the Hours condition that indicates the start date and time of each capsule; this will create a new condition that can be used to aggregate the histogram. $hours.transform($capsule -> $capsule.setProperty('startDateTime', $capsule.property('Start'))) (NOTE: By default, each capsule in any condition has a 'Start' property. However, this property cannot be used in histogram aggregations, which is why this step is required.) 3. Aggregate the Histogram using the condition and capsule properties defined in Step 2. While the bins are now displayed in chronological order, the date and time displayed are in UTC time. To display the time consistent with your time zone, edit the 'Hours with Property' condition to include the time zone conversion. In the following screenshot, I am converting from UTC to EST and removing the 'Z' displayed at the end of the timestamp. $hours.transform($capsule -> $capsule.setProperty('startDateTime', (((($capsule.property('Start'))-5h).toString().replace('Z',''))))) The histogram bins now display in chronological order, with the timestamps in the correct (EST) timezone.
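The ordering problem itself is easy to reproduce outside Seeq. In this hypothetical pandas sketch, grouping night-shift samples by numeric hour of day sorts the bins 0-7 ahead of 20-23, while grouping by each hour's start timestamp keeps them chronological, which is the same idea as the 'startDateTime' capsule property above:

```python
import pandas as pd

# Hypothetical temperature samples spanning a night shift (8 PM - 8 AM)
idx = pd.date_range("2021-01-01 20:00", "2021-01-02 07:00", freq="60min")
temps = pd.Series(range(len(idx)), index=idx, dtype=float)

# Grouping by hour-of-day sorts numerically: 0, 1, ..., 7, 20, 21, 22, 23
by_hour = temps.groupby(temps.index.hour).mean()

# Grouping by each hour's start timestamp instead keeps chronological order
by_start = temps.groupby(temps.index.floor("60min")).mean()
print(list(by_start.index[:2]))
```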
  15. Background As a starting point, I have a signal that indicates the process error code (Error) and 2 signals that indicate the current container (Container 1 and Container 2). I would like to create a histogram that summarizes which containers have the most frequent errors. More specifically, I want to know: For the Container 1 signal, what is the distribution of container values when the error code is equal to 44 and 47 For the Container 2 signal, what is the distribution of the container values when the error code is equal to 45 I want the results summarized as a single histogram. The following steps describe how this can be achieved using conditions and capsule properties. Solution 1. Create a condition with a capsule for each value change in the Error signal. This can be accomplished in Formula using the following syntax $errorSignal.toCondition() Note: the .toCondition() operator assigns a Value property to each capsule that indicates the value of the Error signal during the capsule. This property can be used for aggregation in a Histogram. 2. Filter the condition created in Step 1 to only include capsules with errors of interest for the Container 1 signal. This can be accomplished in Formula, using the following syntax: //Create intermediate conditions for each error $error1=$errorCond.keep('value',isEqualTo('44')) $error2=$errorCond.keep('value',isEqualTo('47')) //Combine the 2 intermediate conditions $error1.combineWith($error2) 3. Assign a ‘Container’ capsule property to each of the capsules in the Error Cond 1 condition that indicates the value of the Container 1 signal. This can be accomplished within Formula, using the following syntax: $errorCond1.removeLongerThan(1week).transform($capsule -> $capsule.setProperty('container', $container1.average($capsule))) 4. Repeat Steps 2 & 3 for Container 2 a. Filter the condition created in Step 1 to only include capsules with errors of interest for Container 2. 
This can be accomplished in Formula, using the following syntax: $errorCond.keep('value',isEqualTo('45')) b. Assign a ‘container’ capsule property to each of the capsules in the Error Cond 2 condition that indicates the value of the Container 2 signal. This can be accomplished within Formula, using the following syntax: $errorCond2.removeLongerThan(1week).transform($capsule -> $capsule.setProperty('container', $container2.average($capsule))) 5. Combine the error conditions with their container properties to create a single, composite condition. This can be performed in the Composite Condition tool. 6. Finally, create the histogram based upon this composite condition.
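The end-to-end attribution logic can be sketched in pandas with hypothetical data (the error codes match the steps above, but the table, column names, and container values are assumptions):

```python
import pandas as pd

# Hypothetical error events with the container in use when each error occurred
events = pd.DataFrame({
    "error": [44, 45, 47, 44, 45],
    "container1": ["A", "B", "A", "C", "B"],
    "container2": ["X", "Y", "X", "Z", "Y"],
})

# Errors 44 and 47 are attributed to Container 1, error 45 to Container 2
c1 = events.loc[events["error"].isin([44, 47]), "container1"]
c2 = events.loc[events["error"] == 45, "container2"]

# Combining both gives one distribution, like the composite condition above
combined = pd.concat([c1, c2]).value_counts()
print(combined)
```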
  16. Users frequently ask if it is possible to plot multiple y-axis variables against a single x-axis variable in Scatter Plot. While this functionality is not currently available, it is logged as CRAB-10474. However, as a workaround, an Organizer Topic can be leveraged to display scatter plots side-by-side to compare the relationships of multiple y-axis variables against a single x-axis variable. An example of this is shown in the following image:
  17. Users frequently ask if it is possible to export a scorecard to Excel. While the functionality to export directly from Scorecard is not currently available, it is a commonly requested feature that is already logged as CRAB-15132. This post documents the current workaround. Let's start with the following Scorecard: If we switch to trend view, we can see that the Scorecard metrics are displayed on the trend. The export button above the trend only exports signal and condition data, not metrics. However, we can use the Signal from Condition tool to create signals that display the same results as the Scorecard metrics. Once the results are calculated as signals, the Export button can be used to export the signal and condition data displayed on the trend.
  18. Hi @MarcS- I suggest submitting a support ticket so that one of our support engineers can investigate the issue. To do this, please submit a copy of your Seeq server logs: https://seeq.atlassian.net/wiki/spaces/KB/pages/114395156/Viewing+Logs+and+Sending+Log+Files+to+Seeq. Thanks, Lindsey
  19. Hi Jemma- I made a mistake in my previous post; it has been updated to show the correct Formula. Thanks for pointing that out 🙂 Lindsey
  20. Hi Jemma- You can achieve this by adding another property (i.e. 'sub-category) to the conditions in Formula: Please let me know if you have any additional questions. Thanks, Lindsey
  21. Hi Jemma - To do this, you will need to create a separate condition for each subcategory. There are several ways that you can do this. One simple way would be to use the Custom Condition tool to manually select capsules in the Pump condition for each sub-category condition: Once you have the sub-categories identified as separate conditions, you can combine all of the cause conditions just as we did in Step 2. Please let me know if you have any additional questions. Thanks, Lindsey
  22. Background In this use case, process engineers are investigating a process during which downtime frequently occurs. They have already identified the downtime periods as a condition in Seeq. Additionally, process engineers have also identified the 3 leading causes (pump, compressor, and level) of downtime and created conditions for when these parameters exceed an alarm limit, causing the process to shut down. To identify the leading cause of each downtime event, engineers would like to determine which of the 3 cause conditions are present 1 hour before the downtime event and which event happened first. The following method can be used. Method 1. Create a condition for the 1 hour prior to downtime Process engineers are interested in identifying which of the 3 causes are present in the 1 hour prior to process downtime. To do this, we first need to create a condition for the 1 hour prior to downtime. This can be accomplished in Formula using the following syntax: $downtime.beforeStart(1h) 2. Combine all cause conditions and assign capsule properties To perform the analysis, we need to combine all of our cause conditions into one condition, Leading Events. However, to know which capsules in the Leading Events condition correspond to which cause, capsule properties must be used. This can be achieved using the following Formula: //Assign a property to each of the cause conditions $compProp=$comp.setProperty('cause','compressor') $levelProp=$level.setProperty('cause','level') $pumpProp=$pump.setProperty('cause','pump') //Combine the cause conditions $compProp.combineWith($levelProp).combineWith($pumpProp) You can view the Cause property of each capsule in the Leading Events condition by adding it to the capsules pane. 3. Filter the Leading Events condition to only contain capsules that overlap with the 1 hour before downtime capsules. This can be accomplished within the Composite Condition tool. 4. 
Create a signal that reports the value of the 'cause' capsule property for each capsule in the Leading Events before Downtime condition. This can be achieved in Formula using the following syntax (Note: '4h' indicates a maximum interpolation of 4 hours between data points and may be adjusted as needed). $eventsBeforeDowntime.toSignal('cause',startKey()).toStep(4h) 5. Identify the first value of the cause signal for each downtime event. When we calculate the first value in the Cause signal, the result should overlap with the downtime events, so that the cause can be associated with the downtime. To do this, first create a new condition using the Composite Condition tool to identify the union between the 1 hour before downtime condition and the Leading Events before Downtime condition. Now, use the Signal from Condition tool to identify the first value in the cause signal. Note that it is important to place the result of this calculation at the end of the bounding condition so that the results can be associated with each downtime event. 6. Create a scorecard that lists the leading cause of each downtime event.
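A rough pandas analogue of these steps, using hypothetical cause timestamps and downtime start times (the one-hour lookback mirrors the beforeStart(1h) condition above; everything else is assumed for illustration):

```python
import pandas as pd

# Hypothetical cause events (when each alarm condition started) and downtime starts
causes = pd.DataFrame({
    "time": pd.to_datetime(["2021-03-01 09:10", "2021-03-01 09:40",
                            "2021-03-02 13:55"]),
    "cause": ["pump", "level", "compressor"],
})
downtimes = pd.to_datetime(["2021-03-01 10:00", "2021-03-02 14:30"])

# For each downtime, keep causes in the hour beforehand and take the earliest,
# mirroring the "first value" Signal from Condition step above
first_causes = []
for start in downtimes:
    window = causes[(causes["time"] >= start - pd.Timedelta(hours=1)) &
                    (causes["time"] < start)]
    first_causes.append(window.sort_values("time")["cause"].iloc[0]
                        if not window.empty else None)
print(first_causes)
```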
  23. This post summarizes the Performance Loss Monitoring use case covered in the Advanced Analytics 101 webinar in September 2020. In this webinar, the use case was explored by addressing 4 key analytics questions: Why? What data is available? What method(s) can I use with the available data? How do I want to visualize the results? Why? A manufacturing company needs to track performance losses. If engineers are able to identify and quantify those performance losses, the results can be used to justify process improvement projects or to do historical and global benchmarking. The current method to do this involves retroactively wrangling data in Excel. This exercise is very time consuming, so developing a method to automatically generate monthly reports has the potential to save up to 1 week of valuable Process Engineers' time per month. This is time they get back to work on improvement projects and other value-added activities. What data is available? For this analysis, two data tags are needed: the target production rate and the actual production rate. What method(s) can I use? Step 1: Identify time periods of lost production To identify the time periods of production loss, I first need to calculate the difference between the target rate and the actual reactor rate. This can be accomplished in the Formula tool. Now, to identify the production losses, I can use the Value Search tool to identify whenever the value of this new signal is greater than 0. Step 2: Quantify the total production loss The Signal from Condition tool can be used to calculate the totalized production loss during each of the Production Loss Events capsules. How do I want to visualize the results? Ultimately, I’d like to create a weekly report that summarizes the production per day of a given week. So in this case, I’d like to create a histogram that aggregates the lost production for each day of the week.
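Steps 1 and 2 can be sketched in pandas with hypothetical hourly rate data (the tag names, values, and the clip-at-zero definition of loss are assumptions for illustration):

```python
import pandas as pd

# Hypothetical hourly target and actual production rates, starting on a Monday
idx = pd.date_range("2021-09-06", periods=6, freq="60min")
rates = pd.DataFrame({
    "target": [100.0] * 6,
    "actual": [100.0, 95.0, 98.0, 100.0, 90.0, 100.0],
}, index=idx)

# Step 1: production loss is any positive gap between target and actual
loss = (rates["target"] - rates["actual"]).clip(lower=0)

# Step 2: totalize the loss per day of week for the histogram
loss_by_day = loss.groupby(loss.index.day_name()).sum()
print(loss_by_day)
```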
  24. Hi Felix- To do this, you will need to replace the missing data gaps in your signals with 0. This can be done by following the steps in the following forum post: Please let me know if you have any additional questions. Thanks, Lindsey
  25. Hi @Adam Georgeson- I am happy to report that this issue has already been resolved in Seeq version 0.49. Also glad to hear you like the new homepage! Thanks, Lindsey