Allison Buenemann, Analytics Engineer, Seeq Corporation
In Seeq's May 25 Webinar, Product Run and Grade Transition Analytics, we received a lot of requests during the Q&A asking to share more details about how the grade transition capsules were created and how they were used to build the histograms presented. So here is your step-by-step guide to building the condition that will make everything else possible!

Defining your transition condition

Step 1: Identify product campaigns. The "condition with properties" tool under "Identify" can transform a string signal of product or grade code into a single capsule per campaign. In older versions of Seeq, the same outcome can be achieved in Formula using:

$gradeCodeStringSignal.toCondition('grade')

Note: if you don't have a signal representing grade code, you're not out of luck, you can make one! Follow the general methodology here to search for set point ranges associated with different grades, or combinations of set points that indicate a particular grade is being produced.

Step 2: Create the transition condition, encoded with the starting and ending product as a capsule property.

$t = $c.shrink(2min).inverse()
//2min is an arbitrarily small amount of time that enables the inverse function to
//capture gaps between the campaigns
$t.removeLongerThan(5min) //5min is an arbitrarily small amount of time to bound the calculation (must be > 2min * 2)
.transform($capsule -> $capsule.setProperty('Transition Type',
    $pt.resample(1min).toScalars($capsule).first() + '-->' + $pt.resample(1min).toScalars($capsule).last()))
.shrink(2min)

Step 3: Identify the time period before the transition.

Step 4: Identify the time period after the transition.

Step 5 (optional, for display purposes only): Show the potential transition timeline (helpful for chain view visualizations).

Step 6: Create a condition for on-spec data. Here we use Value Search against spec limit signals. If you don't have spec limit signals already in Seeq, you can use the splice method discussed in the post linked in Step 1 to define them in Seeq.

Step 7: Combine the condition for on-spec data with the condition for the time period before the transition.

Step 8: Combine the condition for on-spec data with the condition for the time period after the transition.

Step 9: Join the end of the on-spec-before-the-transition condition to the start of the on-spec-after-the-transition condition to identify the full transition duration. (A Formula sketch of steps 3-9 appears below.)

Your condition for product transitions is now officially ready for use in downstream applications and calculations!

Working with your transition condition

Option 1: Calculate and display the transition duration. Use Signal from Condition to calculate the duration. The duration timestamp will generate a visual where the length of the line is proportional to the length of the transition. Increasing the line width in the customize panel will display the transition duration KPI in a Gantt-like view.

Option 2: Build some histograms. Summarize a count of capsules, like the number of times each distinct transition type occurred within the date range, or summarize a signal during each unique transition type, like the max/min/average transition duration for a distinct transition type.

Option 3: Generate some summary tables. Switch to the table view and toggle "Condition" to view each transition as a row in the table with key metrics like total duration and viscosity delta between the two grades. Add the "Transition Type" property from the "Columns" selector.
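For those who prefer to do steps 3 through 9 entirely in Formula rather than with the Composite Condition tool, here is a minimal sketch. The variable names, the placeholder window lengths, and the use of beforeStart(), afterEnd(), intersect(), and join() are assumptions to adapt and verify against the Formula documentation for your Seeq version; this is not the exact implementation from the webinar.

// Hedged sketch only; $transition = condition from Step 2, $onSpec = condition from Step 6
$before = $transition.beforeStart(4h)        // Step 3: placeholder 4h window before each transition
$after  = $transition.afterEnd(4h)           // Step 4: placeholder 4h window after each transition
$onSpecBefore = $onSpec.intersect($before)   // Step 7: on-spec data before the transition
$onSpecAfter  = $onSpec.intersect($after)    // Step 8: on-spec data after the transition
$onSpecBefore.join($onSpecAfter, 48h)        // Step 9: join end of on-spec-before to start of on-spec-after
                                             // (48h = assumed maximum transition duration)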
If you run into hiccups using this methodology on your data and would like assistance from a Seeq or partner analytics engineer, please take advantage of our support portal and daily office hours.
Use Case: Cpk measures process capability to produce a given product type at target and within its sales specifications. Cpk analysis can be used to inform selling prices for products that are particularly challenging to run on production units, and can in some cases warrant abandoning certain products to free up reactor space for product grades that are more easily made on specification.

Challenges: Cpk calculations are done infrequently and retroactively in most cases due to the time-consuming nature of the analysis. A large chunk of time is spent dividing up data into different product types and then calculating the metrics for each type. When the analysis is requested again, the process is repeated, leading to a significant drain on engineering resources.

Solution: Seeq's asset groups can be used to address this use case in a highly automated and scalable manner. By building an asset group that creates "assets" for each product type, we can automatically segment the data for a given quality parameter into different data sets by product type. Using asset groups allows us to build out Cpk calculations for a single product type and then instantly calculate them for the remaining product types with just a couple of clicks. Seeq capsules, aggregations, and formulas are integral to performing this analysis.

Analysis Steps:

1. Locate the signal of interest, the upper specification limit, the lower specification limit, and a signal that can be used to detect the current production grade (ideally a grade code signal; if this does not exist for your product, you can use process set points to stitch together a calculated grade signal).

2. Build the asset group. Seeq asset groups can be used to generate small- to medium-sized, use-case-specific asset structures to help scale analytics. General instructions for how to build an asset group are available on the Seeq Knowledge Base. For this asset group, create an asset for each product type produced on your production line using the Add Asset button. Click on the asset name to rename it to the product type. Note that asset groups are evergreen, so you can always add additional product types to the asset group if new grades are developed. Add columns to your asset group for the quality parameter signal, the grade code, and the upper and lower specification limits. Use the "+" button in each table cell to map the correct signal to each of the assets. Note that the signal will be the same for each product type asset (the same for each row in a given column). Create a filtered version of the Viscosity, Viscosity LSL, and Viscosity USL signals by creating a new column with a Calculated Item. The following formula can be used for a numeric grade code signal:

$viscosity.within($grade==101)

Alternatively, for a string grade code, the value of the grade should be entered in quotes, for example:

$viscosity.within($grade=='Grade 101')

The calculated formulas will automatically be applied to all other rows in the asset group table. Edit each row to ensure the correct grade is used in the filtering formula. Repeat the steps above for each of the filtered signals needed for the analysis. At this point, the asset group is ready for use; make sure to click Save before closing. For those comfortable programming in Python, an identical asset tree with the original and filtered signals can be created using Seeq Data Lab and spy.assets or spy.trees.

3. Optional step: To leverage chain view to see data from each of the campaigns of a given product back-to-back, create a condition for when there is data. The Value Search tool can be used to do this by looking for time periods when the viscosity signal is not equal to zero. With the condition defined, you can use chain view to view the campaigns end-to-end.

4. Begin to build the Cpk calculations. The first step is to define the window over which Cpk will be calculated. This could be periodic, rolling, or relative to the current time. In this example, we will use Formula to create a condition containing a single capsule beginning 3 months prior to now and ending at now. The following formula can be used to create such a condition, where 94 days is the maximum capsule duration (longer than the longest possible 3-month period):

condition(94d, capsule(now()-3months, now()))

5. Calculate the inputs into the Cpk formula, beginning with the sample mean. This can be calculated using the Signal from Condition tool.

6. Calculate the sample standard deviation using the Signal from Condition tool.

7. Calculate Cpk using Formula with the input variables sample mean, sample standard deviation, viscosity LSL, and viscosity USL:

min($ucl-$avg, $avg-$lcl)/(3*$sd)

8. View the Cpk value in a table by toggling to table view.

9. Scale the Cpk calculation across all product types in the asset group using the asset functionality available in tables.

10. Alternatively, view the Cpk values across assets as a bar chart to better visualize the magnitude of Cpk values relative to those of other product grades.

Final result:
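If you prefer Formula over the Signal from Condition tool for steps 5 and 6, a minimal sketch is shown below. It assumes $viscosityGrade101 is the filtered viscosity signal from the asset group and $window is the 3-month condition from step 4; the aggregate() argument list is an assumption to verify against the Formula documentation for your Seeq version.

$avg = $viscosityGrade101.aggregate(average(), $window, endKey())  // step 5: sample mean over the Cpk window
$sd  = $viscosityGrade101.aggregate(stdDev(), $window, endKey())   // step 6: sample standard deviation
min($ucl - $avg, $avg - $lcl) / (3 * $sd)                          // step 7: Cpk, as in the post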
Sometimes when looking at an xy plot, it can be helpful to use lines to designate regions of the chart that you'd like users to focus on. In this example, we want to draw a rectangle on the xy plot showing the ideal region of operation, like below. We can do this by utilizing Seeq's ability to display formulas overlaid on an xy plot.

1. For this first step, we will create a ~horizontal line on the scatter plot at y=65. This can be achieved using a y=mx+b formula with a very small slope and a y-intercept of 65. The equation for this "horizontal" line on the xy plot is:

0.00001*$x+65

2. If we want to restrict the line to only the segment making up the bottom of our ideal operation box, we can leverage the within function in Formula to clip the line at values we specify. Here we add to the original formula to only include values of the line between x=55 and x=60.

(0.00001*$x+65)
.within($x>55 and $x<60)

3. Now let's make the left side of the box. A similar concept can be applied to create a vertical line, only with a very large positive or negative slope. For our "vertical" line at x=55, we can use the following formula. Note that some adjustment of the y-axis scale may be required after this step.

(-10000*($x-55))

4. To clip a line into a line segment by restricting the y values, you can use the max and min functions in Formula, combined with the within function. The following formula is used to achieve the left side boundary of our box:

(-10000*($x-55))
.max(65)
.min(85)
.within($x<55.01 and $x>54.99)

The same techniques from steps 1-4 can be used to create the temperature and wet bulb max boundaries.

Formula for the max temperature boundary:

(0.00001*$x+85).within($x>55 and $x<60)

Formula for the max wet bulb boundary:

(-10000*($x-60))
.max(65)
.min(85)
.within($x<60.01 and $x>59.99)

Content Verified MAY2024
Use Case: It is common in industry to seek to use the behavior of upstream process variables to predict what the behavior of a downstream variable might be minutes, hours, or days from the present time.

Solution: A traditional predictive modeling workflow can be applied to solve this problem:
Identify an appropriate training data set
Perform any necessary data cleansing
Create a predictive model
Evaluate the model fit
Improve the model
Operationalize the model

What differentiates this use case from any other predictive modeling use case is a specific data cleansing step for adjusting signals to remove process lag.

1. Load Data. Load your target signal and the relevant upstream signals into the display pane. In this example, the target signal is the product viscosity, measured in an analytical lab from a sample collected at a downstream sample point. Three upstream signals (the reactor temperature, reactant conversion, and viscosity modifier flow to the reactor) significantly influence the product viscosity measurement and will be used as inputs to the model.

2. Identify Training Data Set. Identify an appropriate training data set for your regression model. This may involve a longer time window to include variability in product type or seasonality. In this example, we will pan out to 3 months to capture multiple cycles of different product types. With an appropriate training window identified, you can also limit your training data set to a subset of samples present during a particular condition. If this interests you, consult the "advanced options" section of the Prediction Tool Knowledge Base article for more information. This method is particularly useful if you wish to create different models for different modes of operation.

3. Cleanse Signals: Adjust for Process Lag. We can time-shift our upstream signals using either a known constant delay, a known variable delay (like a calculated residence time signal), or an unknown delay of maximum correlation to the target signal. The first two of these options use the .move() function in Formula (or .delay() in earlier versions of Seeq); the latter uses the .correlationOffset() function. In this example, we have a known lag of 1.5 hours between the reactor and the product sampling point, so we will use the move function with an input scalar of 1.5h, as shown below. The time shift calculation should be applied across all relevant input signals. More information on the different options for time shifting signals using fixed, variable, or calculated offsets is available in this forum post. (A sketch of the three options appears at the end of this post.)

4. Cleanse Signals: Remove signal noise, outliers, and abnormal operating data. In this example, we apply an agileFilter to each of our time-shifted model input signals. Apply the same technique to each of the model inputs. Note that steps 3 and 4 could have been combined into a single formula, for example:

$reactor_temp.move(1.5h).agileFilter(1min)

For guidance on additional cleansing techniques, consult the Interactive Training.

5. Build the Predictive Model. Use the Prediction tool panel to create a model of your target signal based on your cleansed, time-adjusted input signals. Ensure your model training window matches the date range that you identified in step 2. You can view model parameters like coefficients, rSquared, and p-values using the "+ Prediction Model" option.

6. Evaluate the Model Fit. Use Scatter Plot view and the model parameters to evaluate the goodness of fit of the model. Switch to a time range outside of your training data set to ensure your model is a good fit for data throughout time.

7. Improve the Model (as needed). If the scatter plot indicates a non-linear relationship, test out additional model scales in the Prediction tool panel. Consider eliminating variables with p-values higher than your significance level cutoff (frequently 0.05). Add additional variables if relevant. If distinct modes of operation introduce significant signal variability, consider creating a model for each operating mode and stitching the models together into a single model using the splice() function in Formula.

8. Deploy the Model. The model should project out into the future by the amount of the process lag between the upstream and target signals.
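To make step 3 concrete, here is a minimal Formula sketch of the three time-shift options. Only the first line comes directly from this post; $residenceTime, $trainingWindow, the 4h maximum shift, and the correlationOffset() argument list are illustrative assumptions to verify against the Formula documentation for your Seeq version.

// Known constant delay (from this post)
$reactor_temp.move(1.5h)

// Known variable delay: hypothetical residence time signal, 4h placeholder maximum shift
$reactor_temp.move($residenceTime, 4h)

// Unknown delay: hypothetical correlationOffset() usage to find the shift of maximum correlation
// to the target during a training condition (argument list is an assumption)
$offset = $reactor_temp.correlationOffset($viscosity, $trainingWindow)
$reactor_temp.move($offset, 4h)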
FAQ: I have a condition for events of variable duration. I would like to create a new condition that comprises the first third of the time (or 1/4th, or 1/10th) of the original condition.

Solution: A stepwise approach can be taken to achieve this functionality.

1. Begin with your condition loaded in the display pane.

2. Create a new signal using Signal from Condition that calculates the total duration of each of your event capsules, interpolated as a step signal.

3. Create a new signal that is your total event duration multiplied by the proportion of the event that you would like to capture, e.g. for the first 1/3 of the event, divide your total duration signal by 3, as shown below.

4. Create an arbitrary discrete signal with a sample at the start of each of your event capsules.

5. Shift the arbitrary discrete signal in time by the value of the signal calculated in step 3 (in this example, the 1/3 duration signal). Note that depending on your version of Seeq, the function to do this may be called move() or delay().

6. Use the toCapsules() function in Formula to create a tiny (zero-duration) capsule at each of your shifted, discrete samples.

7. Join the start of your original condition with the capsules created in step 6 using the Composite Condition tool. (A Formula sketch of steps 3-7 follows below.)
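For reference, steps 3 through 7 could also be sketched in a single Formula. Everything below is a hedged sketch: $events is the original condition, $totalDuration is the step-interpolated duration signal from step 2, and the 30d maximum-shift/maximum-duration arguments, the move()-with-a-signal overload, and the join() behavior are assumptions to check against the Formula documentation for your Seeq version.

$oneThird = $totalDuration / 3                 // step 3: 1/3 of each event's duration
$starts   = $events.toSignal(startKey())       // step 4: one sample at the start of each capsule
$markers  = $starts.move($oneThird, 30d)       // step 5: shift each start sample forward by 1/3 of the duration
$oneThirdPoints = $markers.toCapsules()        // step 6: zero-duration capsule at each shifted sample
$events.join($oneThirdPoints, 30d)             // step 7: join each event's start to its 1/3 marker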
Hi Venaktaramesh,
The Knowledge Base article below describes how the AF Data Reference connector can be used to write Seeq calculations to AF. In order to use PI Notifications for Seeq calculations, items must follow this pathway of communication.
https://support.seeq.com/space/KB/933725050/Seeq%20AF%20Data%20Reference
Thanks,
Allison
Answer: You can refer to this Knowledge Base article on CSV Import for the requirements before you start importing. The steps below guide you through the optional settings that store this information as capsule properties. When you open the Import from CSV File tool, select to import the file as a condition. If you scroll down to the "+" icon that says "Optional Settings", you'll see that the default treatment of CSVs imported as a condition is to treat all columns as capsule properties. Content Verified MAY2024
Sometimes it is desired to display custom units of measure in Seeq Scorecards. This is useful when the signal or condition has no units, when you want to add a custom display unit, or when the unit is not one Seeq recognizes. You can use Seeq's Number Format customization in the Item Properties panel to add custom text units to your scorecard. Here are some examples showing different ways to add text display units; the key is including the text in quotation marks. More information on how to customize these number displays, including the syntax for adding custom text, can be found by clicking the "?" icon next to "Number Format".
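As an illustration (the exact format-code syntax is documented under the "?" icon and may vary by Seeq version, so treat these as assumptions rather than verified examples), format strings along these lines append a quoted text unit to the displayed number:

0.00" psig" (two decimal places followed by the custom text " psig")
#,##0" units/hr" (thousands separator plus the custom text " units/hr")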
Scatterplot: Plot multiple y-axis variables
Allison Buenemann replied to Lindsey.Wilcox's topic in General Seeq Discussions
The workaround discussed in this forum post is another way to visualize multiple series together on one scatter plot, and holds up so long as the variables can be displayed on the same y-axis scale. For dramatically different y-axis scales, Seeq Data Lab can be used to produce scatter plots with multiple y-axes.
FAQ: I have a signal with a gap in the data from a system outage. I want to replace the gap with a constant value, ideally the average of the time period immediately before the gap.

Solution:

1. Once you've identified your data gaps, extend the capsules backwards by the amount of time over which you want to take the average. In this example, we want to fill in the gap with the average of the 10 minutes before the signal dropped, so we will extend the start of the data gap capsule 10 minutes into the past. This is done using the move function in Formula:

$conditionForDataGaps.move(-10min, 0min)

2. Use Signal from Condition to calculate the average of the gappy signal during the condition created in step 1. Make sure to select "Duration" for the timestamp of the statistic.

3. Stitch the two signals together using the splice function. The validValues() function at the end ensures a continuous output signal.

$gappysignal.splice($replacementsignal, $gaps).validValues()
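If you would rather do step 2 in Formula than in the Signal from Condition tool, a hedged sketch is below. The durationKey() timestamp placement and the aggregate() argument list are assumptions to verify against the Formula documentation for your Seeq version; $gapsExtended is the condition from step 1.

$replacementsignal = $gappysignal.aggregate(average(), $gapsExtended, durationKey())  // average of the period before each gap
$gappysignal.splice($replacementsignal, $gaps).validValues()                          // step 3, as in the post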
When examining data in Capsule Time view, it can be useful to view data from the time period immediately before the capsules alongside the data during the capsules. This can be done as follows:

1. Hover over the x-axis (shown in the image above as measuring time from the start of the capsule in hours), then click and drag your mouse to the right. You will likely see no data from the time period before 0.0 on the x-axis.

2. Click on the "Dimming" option at the top of the display pane and check the box to "Show Data Outside of Conditions". When this box is checked, the data outside of the conditions is displayed, slightly more faintly than the data within the capsules.

Optionally, utilize some of Seeq's coloring features in capsule time to display the data from each capsule and before/after in different colors (rainbow shown).
Tip: Identify subtle trends or step changes in a signal
Allison Buenemann posted a topic in Tips & Tricks
Background: When looking to identify trends or step changes in a signal, we typically recommend an approach of smoothing the signal, taking the first derivative, and then identifying when that derivative is positive or negative. This method works well most of the time, but combining this technique with others can be more effective at capturing trends or step changes when the value change in the signal is more subtle.

Solution: When looking for step changes, we can calculate the range of the signal on a rolling periodic basis and search for when the range exceeds some limit. We can then combine this condition with when the derivative is positive (increasing step changes) or negative (decreasing step changes) to capture our final condition.

1. Create a rolling window over which you will look at the range (max minus min value) of the signal. In this example, I used a 4h window every 30 minutes, because my tank draining events were typically never longer than 4h. Select the smallest time period you can that is still longer than your longest draining event.

periods(4h, 30min)

2. Use Signal from Condition to calculate the range (max minus min) of your signal over each of the rolling windows. Make sure to place the timestamp of the statistic at the end of each rolling capsule.

3. Identify time periods when that range calculation is above some threshold. In this example, we used a threshold of 2 based on the trend output from step 2. If we zoom in on a smaller range of time, we see that our capsules for when the range value is high actually extend beyond the completion of our decreasing signal.

4. We can intersect this condition for high range in the signal with a condition for when the derivative of the signal is negative to capture our desired events. First calculate the first derivative of the signal. We apply a smoothing agileFilter in this step as well to remove signal noise.

$tlth.agileFilter(2min).derivative()

5. Identify when that derivative value is less than zero using the Value Search tool.

6. Now take the intersection of the condition for negative derivative of the level and the condition for high range.

The final view shows the original signal and the events identified. Use chain view to validate your calculations. (A Formula sketch of the tool-based steps appears below.)

Content Verified MAY2024
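For those who prefer Formula to the tool panels, steps 1 through 6 might be sketched as follows. This is a hedged sketch only: $level stands in for the level signal ($tlth above), the threshold of 2 matches the example, and the aggregate(), range(), isGreaterThan()/isLessThan(), and intersect() calls are assumptions to verify against the Formula documentation for your Seeq version.

$windows   = periods(4h, 30min)                                     // step 1: rolling 4h window every 30 min
$range     = $level.aggregate(range(), $windows, endKey())          // step 2: max minus min per window, stamped at the window end
$highRange = $range.isGreaterThan(2)                                // step 3: range above the threshold
$falling   = $level.agileFilter(2min).derivative().isLessThan(0)    // steps 4-5: smoothed derivative below zero
$highRange.intersect($falling)                                      // step 6: final condition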
Background: Seeq has functions in Formula to remove outliers based on different algorithms, but sometimes it is desired to identify and remove outliers that fall outside of the interquartile range (IQR).

Solution: The approach we can take to solve this data cleansing problem in Seeq is to determine the periods over which we want to calculate the quartiles, calculate new signals from the 25th and 75th percentiles during each of those periods, identify deviations from those percentiles, and remove data outside of the IQR from our original signal.

1. The first step is to decide what type of periods you would like to use to calculate your percentiles. Some periodic choices might include hourly, daily, or a rolling window of 24 hours each hour. Other choices could be the current production run, the time since the equipment was last maintained, etc. In this example we will use an hourly periodic condition in our quartile calculations.

2. Next, use the Signal from Condition tool to calculate the 25th percentile during each of the capsules defined above.

3. Use the same method to calculate the 75th percentile during each of the capsules defined above.

4. Use Seeq's Formula tool to calculate the IQR.

$UpperQ - $LowerQ

5. Now use Formula to calculate the upper and lower limits for outlier removal as:

$upperQ + n*$IQR (where n is a scalar multiplier, 1.5 in this example)
$lowerQ - n*$IQR

6. Search for deviations from the upper and lower limits using Deviation Search.

7. Then use Formula to remove data during the identified outlier capsules.

$signal.remove($outliers)

Content Verified MAY2024
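Combined into one place, a hedged Formula sketch of steps 1 through 7 might look like the following. The percentile() and aggregate() argument lists are assumptions to verify against the Formula documentation for your Seeq version, and step 6 is still handled by Deviation Search (or an equivalent comparison).

$hours      = hours()                                                // step 1: hourly periodic condition
$lowerQ     = $signal.aggregate(percentile(25), $hours, startKey())  // step 2: 25th percentile per hour
$upperQ     = $signal.aggregate(percentile(75), $hours, startKey())  // step 3: 75th percentile per hour
$iqr        = $upperQ - $lowerQ                                      // step 4
$upperLimit = $upperQ + 1.5 * $iqr                                   // step 5
$lowerLimit = $lowerQ - 1.5 * $iqr
// step 6: identify $outliers as excursions above $upperLimit or below $lowerLimit (Deviation Search)
$signal.remove($outliers)                                            // step 7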
Background: One of the quirks of raw, ungridded time series data is that sampling frequencies may vary. Sometimes the sample frequency is different for two signals that you are comparing, and sometimes the sample frequency is different for a single signal at different points in time due to a process data historian's compression configuration. How you handle this variability in sampling rate can have a significant impact on calculations of summary statistics.

Various approaches to calculating summary statistics and their implications: In this example we have four signals with different numbers of samples in the display range (as highlighted by the "count" statistic in the details pane). Our goal is to calculate a single "average" value for all of the signals during this window. Notice the different outcome values from each approach.

Method 1: One option for getting a single "average" statistic is to take the average value of each of the 4 signals over the time window, then take the average of those. This method weights each of the signals evenly in the calculation of the final average value, since the final average value is equal to (0.25)*avg1 + (0.25)*avg2 + (0.25)*avg3 + (0.25)*avg4.

Method 2: A second option is to first create a continuous average signal, then aggregate that over the display window to calculate an average. The average can be calculated in Formula with the average function (a sketch of the syntax appears at the end of this post). Note that the sample count of the output signal is significantly larger than that of any of the original signals. This is because the average function calculates a sample any time any of the input signals has a sample; for the signals that do not have a sample at a particular key, the linearly interpolated value of the signal is used in the average calculation. Then a scorecard aggregation of the average of the continuous average signal can be calculated in the Scorecard Metric tool to get the result below.

Method 3: A third option is to take the average of all the data points in the display window, independent of which signal they belong to. This approach involves first combining all of the samples from the 4 signals into a single signal, then taking the average of that signal over the display window. The sample count of the combined signal will be equal to the sum of the sample counts of all the other signals, as demonstrated below. In this approach, the resultant signal also has a sample any time any of the other signals contains a sample. Note that if signals have the same frequency, a tiny delay can be applied to the 2nd through the nth signal (1 ns to n-1 ns) to ensure all samples are kept. A scorecard metric of the average of the combined signal can then be calculated. The overall average value returned using this method is much higher than in the two previous methods due to the relatively higher number of samples in signals 3 & 4, which generally have higher values than signals 1 & 2.

Method 4: A non-time-weighted average (similar to Method 3) can also be calculated using the following formula:

average($signal1.toDiscrete(), $signal2.toDiscrete(), $signal3.toDiscrete(), $signal4.toDiscrete())

Once again the final signal contains a number of samples equal to the sum of all of the sample counts of the input signals.

In conclusion, which method of averaging is best for your use case? The answer is probably "it depends" on the use case you are analyzing. Some examples of when different methods may be applied:

An average over a specific time range - Method 1
An instantaneous average at a point in time - Method 2
An average of signals where each sample represents a unique event or independent measurement (e.g. lab or quality data) - Method 4

Regardless of your specific use case, having an understanding of how your data frequency, your historian's compression settings, and your analytical approach can impact your results is an important starting point in any analysis!
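For reference, here is a minimal Formula sketch of Methods 2 and 4 side by side. The Method 4 line is taken from the post; the Method 2 line is an assumption that average() accepts the continuous signals directly, which should be verified against the Formula documentation for your Seeq version.

// Method 2: continuous, time-weighted average signal (interpolates each input at every key)
average($signal1, $signal2, $signal3, $signal4)

// Method 4: non-time-weighted average of every individual sample, as given in the post
average($signal1.toDiscrete(), $signal2.toDiscrete(), $signal3.toDiscrete(), $signal4.toDiscrete())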
Tip: Creating a Signal for a Running Count of Capsules
Allison Buenemann posted a topic in Tips & Tricks
As a Seeq user, you may have created a condition for a particular event of interest and would like to create a signal that is the running count of these events over a given time period. This analysis is common in equipment fatigue use cases where equipment degrades slowly based on the number of cycles (thermal, pressure, tension, etc.) it has undergone during its life or since the last component replacement. This use case can be handled very efficiently in Seeq Formula.

The assumptions for the solution below are:
You have a condition ($condition) of interest for which you would like a running count.
There is a defined timeframe of interest where counting will start and end; note that the end date can be sometime in the future. In the example below, this condition is referenced as $manualCondition, but it could very well be another condition that wasn't created via the Manual Condition tool. Just note that for each capsule in this condition, the count will restart at 0.

Solution - Utilize the runningCount() Formula function:

1) runningCount() currently only accepts signals as inputs, so convert your $condition to a signal by using .toSignal(), which produces a single sample for each capsule:

$condition.toSignal(SAMPLE_PLACEMENT)

SAMPLE_PLACEMENT should be specified as startKey(), middleKey(), or endKey(). If you want your count to increase at the start of each event, use startKey(). If you want the count to increase in the middle or at the end of each event, use middleKey() or endKey().

2) Use the runningCount() function on the signal created above:

$signal.runningCount($conditionToStartAndEndCounting)

Both steps are shown below in a unified Formula:

/* This portion yields a SINGLE point for each capsule. startKey() generates the point at the START of each capsule, where middleKey() and endKey() could also be used to generate a point at the MIDDLE or END of each capsule. Where these points are placed matters, as that is the point in time the count will increase. */
$samplePerCapsule = $condition.toSignal(startKey())

/* This portion yields the running count of each sample (capsule). The 15d in toStep() can be adjusted. Ideally this number will be the duration of the longest expected time between two events that are being counted. */
$samplePerCapsule.runningCount($manualCondition).toStep(15d)

.toStep(15d) ensures the output signal is step interpolated, interpolating points at most 15 days apart. If step interpolation is not required, this can be removed or replaced with something like .toLinear(15d). Below shows the associated output.

Content Verified DEC2023