
Emilio Conde

Seeq Team
Everything posted by Emilio Conde

  1. Ruby and I implemented a solution that avoids the .transform() function. Details can be found here.
  2. Sometimes it's beneficial to fill these gaps with a prior average that is dynamic. The above post details how to fill the gap with a static timeframe, such as the 10 minutes before the gap. But what if we wanted to do something different, such as take the gap duration, a dynamic value, and fill the gap with the prior average based on the gap duration? Below details how to do this. There is a similar approach that leverages the .transform() Seeq function here, but I've provided an alternative method that avoids the usage of .transform(). Of course, this can all be combined into a single formula, but below details each step broken out.

     Solution:

     1. Identify data gaps & calculate gap durations.
        Notes:
        .intersect(past()) guarantees we are not considering the future for data validity.
        The maximum capsule duration should be carefully considered depending on the maximum gap duration you want to fill.
        Placing the timestamp across the capsule durations is important.

     2. Create an arbitrary flat signal & move this signal backwards (negatively) based on the gap durations signal.
        Notes:
        The timeframe specified in .toSignal(1min) influences the resolution of representing the gap duration. 1min should suffice for most cases.
        It's important to include a minus in front of the gap duration to indicate we are moving backwards in time.
        The 24h dictates the maximum duration allowed to move, which is dependent on the expected gap durations.

     3. Identify the new, moved signal and join the capsules from the new condition to the original signal gaps condition.
        Notes:
        Again, the maximum capsule duration needs to be carefully considered when joining the two conditions.

     4. Calculate the average of the gappy signal across the joined condition, then splice this average into the original gappy signal.
        Notes:
        Again, specifying the timestamp to be placed across the capsule durations is important here.
        Be sure to splice the average across the original signal gaps.
        Including .validValues() ensures we interpolate across our original gappy signal and the replacement average signal.
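The same idea can be prototyped outside Seeq. Below is a rough pandas analogue (not Seeq Formula) of the steps above: for each gap, look back a window equal to the gap's own duration and fill the gap with the average over that window. The function name, the data, and the max-gap threshold are all hypothetical.

```python
import pandas as pd

def fill_gaps_with_prior_average(series, max_gap):
    """Fill each NaN run with the mean of the valid samples in the
    window immediately before it, sized to the gap's own duration."""
    s = series.copy()
    is_gap = s.isna()
    run_id = (is_gap != is_gap.shift()).cumsum()   # label contiguous runs
    for _, run in s[is_gap].groupby(run_id[is_gap]):
        start, end = run.index[0], run.index[-1]
        duration = end - start
        if duration > max_gap:      # analogue of bounding the maximum capsule duration
            continue
        prior = s.loc[start - duration:start].dropna()  # look back one gap-length
        if len(prior) > 0:
            s.loc[start:end] = prior.mean()
    return s
```

This mirrors steps 1-4: identifying gaps and their durations, looking backwards by the (dynamic) gap duration, and splicing the prior average into the gap.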
  3. Hi Tranquil, The method I described above will be a way to find assets that contain a signal of interest... i.e. perform a search for the signal name within the top level of the asset. However, I don't believe this manual identification is 100% necessary to complete what you're wanting. See below.

     Results of the above search show: The above shows that only the Example >> Cooling Tower 2 >> Area E asset has the Irregular Data signal. However, I don't need to do these searches to manually identify which assets have which signals. I can jump immediately into asset groups: Clicking Add above, then repeating for Cooling Tower 2, will yield the following: Adding a condition based on Irregular Data: Saving the Asset Group, then searching again for Irregular Data, this time within the newly created Asset Group:

     Now I can see that the specified Condition was only created for Area E because Area E was the only asset that contained the signal of interest (in this case, Irregular Data). If more assets also had the Irregular Data signal, then conditions would have also been created for those assets referencing their Irregular Data signal.

     Hopefully this helps. I encourage you to take a look at some of the videos we have for Asset Groups on our YouTube page. Specifically, this video discusses the process I used above, where I populate an Asset Group using other preexisting asset trees as a starting point.
  4. Do you mean exporting the values of the asset group items? If so, you can make use of the Data tab to add all variables to your display and then the Export to Excel tool to export the values. More information on searching within an Asset Tree/Group can be found here and information on exporting data to Excel (i.e. Using the Export to Excel Tool) found here. See screenshots below on what this process may look like: Please let me know if this helps or if this isn't what you meant when inquiring about exporting the asset group subitems.
  5. Hi Tranquil, Do you mind supplying a bit more information on your question and possibly some screenshots? If by resources you mean assets, with some assets containing a specific signal, then it may not be necessary to write any script. If this is indeed the case, Asset Groups could possibly be utilized to scale the creation of a condition if the signal(s) of interest exist.
  6. There is another approach to this solution that I thought would be good to share, as it is a bit less involved. The assumptions for the below solution are:

     You have a condition ($condition) of interest that you would like to understand the running count for.
     There is a defined timeframe of interest, where counting will start and end. Note the end date can be sometime in the future. For the below example, this condition is referenced as $manualCondition, but could very well be another condition that wasn't created via the Manual Condition tool. Just note that for each capsule in this condition, the count will restart at 0.

     Solution - Utilize the runningCount() formula function:

     1) runningCount() currently only accepts signals as inputs, so convert your $condition to a signal by using .toSignal(), which produces a single sample for each capsule:

        $condition.toSignal(SAMPLE_PLACEMENT)

        SAMPLE_PLACEMENT should be specified as startKey(), middleKey(), or endKey(). If you want your count to increase at the start of each event, then use startKey(). If you want the count to increase in the middle or at the end of each event, then use middleKey() or endKey().

     2) Use the runningCount() function on the signal created above:

        $signal.runningCount($conditionToStartAndEndCounting)

     Both steps are shown below in a unified Formula:

        /* This portion yields a SINGLE point for each capsule. startKey() generates the point at the START of each capsule, where middleKey() and endKey() could also be used to generate a point at the MIDDLE or END of each capsule. Where these points are placed matters, as that is the point in time the count will increase. */
        $samplePerCapsule = $condition.toSignal(startKey())

        /* This portion yields the running count of each sample (capsule). The 15d in toStep() can be adjusted. Ideally this number will be the duration of the longest expected time between two events that are being counted. */
        $samplePerCapsule.runningCount($manualCondition).toStep(15d)

     .toStep(15d) ensures the output signal is step interpolated, interpolating points at most 15 days apart. If step interpolation is not required, then this can be removed or replaced with something like .toLinear(15d). Below shows the associated output.
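For readers who prefer to prototype the logic outside Seeq, the same toSignal() + runningCount() idea can be sketched in pandas (not Seeq Formula). Everything below is hypothetical: one "sample" per event, counted cumulatively, with the count restarting at each period start (the analogue of the $manualCondition capsules).

```python
import numpy as np
import pandas as pd

def running_event_count(event_times, period_starts):
    """One sample per event (like $condition.toSignal(startKey())),
    counted cumulatively and reset at each period start
    (like runningCount($manualCondition))."""
    idx = pd.DatetimeIndex(event_times).sort_values()
    starts = pd.DatetimeIndex(period_starts).sort_values()
    # Which counting period each event falls into
    period = np.searchsorted(starts, idx, side='right')
    # Cumulative count within each period
    return pd.Series(1, index=idx).groupby(period).cumsum()
```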
  7. The third example in your documentation image seems to be the equivalent... It's pretty evident why .setProperty() was enhanced! Until your server is upgraded, you'll likely have to use that $condition.transform() method to add signal-referenced, statistical property values to capsules.
  8. Note that the below statement, $condition.keep(...) OR $condition.keep(...), will no longer maintain capsule properties as of version R57. Capsule properties persisting when using an OR statement was actually unintended behavior, i.e. a bug. The behavior after R57 is now aligned with expected behavior. An OR statement in Seeq is equivalent to the union() function. In the Formula search documentation, you can see that it explicitly states "Capsule properties are not preserved in the output capsules", even in versions before R57. See screenshots from R55 and R56 formula documentation demonstrating this. There are many other methods available to achieve similar behavior, such as mergeProperties() or using combineWith(). See the example below when using an OR vs combineWith() in R57.
  9. Hey Red, I'm curious what the formula search documentation for setProperty() looks like for you? Are you able to take/send a screenshot? I'm able to see internally that the ability to use setProperty() in the way you're attempting (along with the other posts you've referenced) is a capability/feature in Seeq as of version R51. Prior to R51, this method will not work, as is being demonstrated with your error. Has your organization considered upgrading? There are many new features that you could be taking advantage of (see more details of What's New)! Reach out to your Seeq contact or [email protected] to get the upgrade process started. The point made above is also found in this other seeq.org post: For reference, below is what the setProperty() documentation looks like as of version R58. It could be that the only variation you're able to use in your version is $condition.setProperty(propertyName, value)
  10. Seasonal variation can influence specific process parameters whose values are influenced by ambient conditions, or perhaps raw material makeup changes over the year's seasons based on scheduled orders from different vendors. For these reasons and more, it may not suffice to compare your previous month's process parameters against current. In these situations, it may be best to compare current product runs against product runs occurring the same month a year ago in order to assess consistency or deviations. In Seeq, this can be achieved by utilizing Condition Properties.

      1. Bring in raw data. For this example, I will utilize a single parameter (Viscosity) and a grade code signal.

      2. Convert the Product step-signal into a condition. Add properties of Product ID, variable statistic(s), and month start/end times.

         // Create a month condition. Be sure to specify your time zone so that start/end times are at 12:00 AM
         $m = months('US/Eastern')
         // Create a signal for start and end times to add into "Product ID" condition
         $start_signal = $m.toSignal('Start', startKey()).toStep()
         $end_signal = $m.toSignal('End', startKey()).toStep()
         $p.toCondition('Product ID')  // Convert string signal into a condition, with a capsule at each unique string.
                                       // Specifying 'Product ID' ensures the respective values in the signal populate
                                       // a property named 'Product ID'
          .removeLongerThan(100d)      // Bound condition. 100d as arbitrary limit
          .setProperty('Avg Visc', $vs, average())                  // Set 'Avg Visc' property reflecting avg visc over each Product ID
          .setProperty('Month Start', $start_signal, startValue())  // Set 'Month Start' property to know what month the Product ID ran
          .setProperty('Month End', $end_signal, startValue())      // Set 'Month End' property to know what month the Product ID ran

      3. Create another condition that has a capsule ranging the entire month for each product run within the month. Add similar properties, but note the naming differences of 'Previous Month Start' and 'Previous Month Avg Visc'. This is because in the next step we will move this condition forward by one year.

         $pi.grow(60d)  // Need to grow capsules in the condition to ensure they consume the entire month
           .transform($capsule ->  // For each capsule (Product Run) in 'Product ID'...
             capsule($capsule.property('Month Start'), $capsule.property('Month End'))  // Create a capsule ranging the entire month
               .setProperty('Product ID', $capsule.property('Product ID'))             // Add property of Product ID
               .setProperty('Previous Month Start', $capsule.property('Month Start'))  // Add property of Month Start named 'Previous Month Start'
               .setProperty('Previous Month Avg Visc', $capsule.property('Avg Visc'))  // Add property of Avg Visc named 'Previous Month Avg Visc'
           )

         Notice we now have many overlapping capsules in our new condition ranging an entire month -- one for each Product Run that occurred within the month.

      4. Move the previous 'Month's Product Runs' condition forward a year and merge it with the existing 'Product ID' condition. Aggregate properties of 'Previous Month Avg Visc'. This ensures that if a product ran multiple times and had different avg visc values in each run, then what is displayed will be the average of all the avg visc values for each product.

         $previousYearMonthProductRun = $mspi.move(1y)  // Move condition forward a year
         $pi.mergeProperties($previousYearMonthProductRun, 'Product ID',  // Merge the properties of both conditions only if their
                                                                          // capsules share a common value of 'Product ID'
           keepProperties(),  // keepProperties() will preserve all existing properties
           aggregateProperty('Previous Month Avg Visc', average()))  // aggregateProperty() will take the average of all 'Previous
                                                                     // Month Avg Visc' properties if multiple exist... i.e. if
                                                                     // there were multiple Product Runs, each with a different value
                                                                     // for 'Previous Month Avg Visc', then take the average of all of them.

      The resulting condition will match our original condition, except now with two new properties: 'Previous Month Start' & 'Previous Month Avg Visc'. We can then add these properties in a condition table to create a cleaner view. We could also consider creating any other statistics of interest, such as % difference of current Avg Visc vs Previous Month Avg Visc. To do this, we could use a method similar to gathering $start_signal and $end_signal in Step 2, create the calculation using the signals, then add it back to the condition as a property.
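As a rough cross-check of the merge-and-aggregate logic (in pandas, not Seeq Formula), the pattern is: average the per-run averages within each month/product pair, shift the month forward one year (the analogue of .move(1y)), and merge back on Product ID. The dataframe and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical run-level data: one row per product run
runs = pd.DataFrame({
    'product_id': ['A', 'A', 'B', 'A'],
    'month':      ['2022-03', '2022-03', '2022-03', '2023-03'],
    'avg_visc':   [10.0, 12.0, 20.0, 11.0],
})

# Average the per-run averages within each (month, product) -- the
# analogue of aggregateProperty('Previous Month Avg Visc', average())
monthly = (runs.groupby(['month', 'product_id'], as_index=False)['avg_visc']
               .mean()
               .rename(columns={'avg_visc': 'prev_month_avg_visc'}))

# Shift the month forward one year (the analogue of .move(1y)), then
# merge on month + Product ID, like mergeProperties(..., 'Product ID', ...)
monthly['month'] = (pd.PeriodIndex(monthly['month'], freq='M') + 12).astype(str)
result = runs.merge(monthly, on=['month', 'product_id'], how='left')
```

Each current run now carries the averaged previous-year value for the same product and month, mirroring the two new capsule properties described above.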
  11. The condition is created simply in Formula -- it is not created within the asset group. It seems like you've already done the other calculations, so I don't think adding a calculated item will be necessary. You simply need to add two assets and add all your columns via "Add Column". You can properly name your assets and columns, then click the + sign within the associated asset/column and search for the respective signal/calculation you already have. Creating scorecards will be the same as usual, except you need to have a condition based scorecard referencing the now-1d condition, and also ensure the item in question belongs to the asset group you've created. It's worth noting you only need to create the scorecards for ONE asset, and the asset table feature will scale it to your other asset(s). If the threshold limits are not the same between say Raw for Salt(ptb) and BSW Water (%), then you may need to create upper/lower threshold references in the asset group to reference instead of hard-input values. Feel free to register to office hours tomorrow if you'd like some live assistance with setting it up: https://info.seeq.com/office-hours
  12. Hi Jesse, This is currently a feature request in our system, but there's a fairly simple workaround utilizing asset trees, asset tables, and condition-based tables. If these items don't belong to an asset tree, then you can quickly create an Asset Group and map the associated signal calculations you've made. (You can give our Asset Groups video a watch if you've never worked with Asset Groups.)

      For best results, the condition you create will only have one capsule. I've elected to create a condition that represents the past day (with respect to now) by entering the following into Formula:

         condition(1d, capsule(now()-1d, now()))

      Then simply add the relevant signals into your Details pane by opening up the asset group and selecting them from there. Notice the Assets column in my Details pane confirms these items are from the Asset Group I created in step 1. Then, you can go into Table view and select the Condition option vs Simple. You can then Transpose, and click the Column drop down to add each signal into the table. I am simply adding the "Last Value" for this example; I hard-coded your values into the signals for consistency. Click the Headers drop down to get rid of the date range of the condition (unless you'd like to keep it). Finally, you can select the Asset Icon at the top and select the Asset Group you've created ("Example" in my case). This will scale out to any other assets in the same level of the tree. The final result looks something like this after editing my column names to remove "Last".

      Note that the far left blank column will not appear if inserted into an Organizer. Also note I've only demonstrated this for raw signals from an asset tree, but this method still works with Scorecard metrics to allow color thresholds. If you want to do this with scorecard metrics, just create the metrics referencing one of your "asset's" associated signals, BSW Water (%) for example, and then scale across the other "assets" as described above. Hopefully this helps get you the look you were hoping to create! Please let me know if you have any questions with any of the above.
  13. Hi Vladimir, There are several ways to apply this analysis to other assets.

      The first & easiest method that I'll mention is working in an Asset Framework or Asset Group (if an existing framework is not available). All previous calculations would need to be created using the data in the Asset Group, but once done, you'll be able to seamlessly swap the entire analysis over to your other assets (Trains, in this case). Asset Groups allow you to create your own framework either manually or by utilizing existing frameworks. This video does a great job of showing the creation and scaling of calculations across other assets. Note that you would need to be at least on version R52 to take advantage of Asset Groups.

      Another easy approach is to consolidate your analysis within 1 - 3 formulas (depending on what you really want to see). Generally speaking, this analysis could fall within ONE formula, but you may want more formulas if you care about seeing things like your "Tr1C1 no input, no output" condition across your other trains. I'll provide you with a way to consolidate this workflow in one formula, but feel free to break it into multiple if helpful to you. The reason this could be easier is you can simply duplicate a single formula and manually reassign your variables to the respective variables of your other Train.

      Some useful things to note before viewing the formula:

      Formulas can make use of variable definitions... You'll notice within each step, except for the very last step, I'm assigning arbitrary/descriptive variables to each line, so that I can reference these variables later in the formula. These variables could be an entire condition, or a signal / scalar calculation.
      In the formula comments (denoted by the double slashes: //), I note certain things that could be different for your case.
      You can access the underlying Formula of any point and click tools you use (Value Searches, Signal from Conditions, etc.) by clicking the item's Item Properties (in the Details Pane) and scrolling down to Formula. Do this for your Tr1 C1 rate of change, monthly periodic condition, and average monthly rate calculations to see what the specific parameters are. This Seeq Knowledge Base article has an example of viewing the underlying formula within an item's Item Properties.
      The only RAW SIGNALS needed in this formula are: $valveTag1, $valveTag2, $productionTag, and $tr1Signal... The rest of the variables are assigned internally to the formula.

      // Steps 1, 2, 3, and 4
      // Note 'Closed' could be different for you if your valve tags are strings...
      // If your valve tags are binary (0 or 1), it would be "$valveTag == 0" (or 1)
      $bothValvesClosed = ($valveTag1 ~= 'Closed' and $valveTag2 ~= 'Closed').removeShorterThan(6h)

      // Step 5
      $valvesClosedProductionHigh = $bothValvesClosed and $productionTag > 10000

      // Step 6 ASSUMING YOU USED SIGNAL FROM CONDITION TO CALCULATE RATE
      // Note the "h" and ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
      $tr1RateofChange = $tr1Signal.aggregate(rate("h"), $valvesClosedProductionHigh.setMaximumDuration(40h), middleKey(), 40h)

      // Step 7
      // $months could also be different in your case
      // Note my final output has no variable definition. This is to ensure THIS is the true output of my formula
      // Again, the ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
      $months = months("US/Eastern")
      $tr1RateofChange.aggregate(average(), $months.setMaximumDuration(40h), middleKey(), 40h)

      Hopefully this makes sense and at the very least provides you with an idea of how you can consolidate calculations within Formula for easier duplication of complex, multi-step calculations. Please let me know if you have any questions. Emilio Conde Analytics Engineer
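A rough pandas sketch (not Seeq Formula) of the condition-building portion above -- both valves closed, runs at least 6 hours long, production above 10,000 -- may help clarify the logic for anyone prototyping in Data Lab. The column names, data, and function name are hypothetical.

```python
import pandas as pd

def steady_state_windows(df, min_duration='6h'):
    """Return (start, end) pairs where both valves read 'Closed' and
    production is above 10,000, keeping only runs at least min_duration
    long (the analogue of removeShorterThan(6h))."""
    mask = (df['valve1'].eq('Closed') & df['valve2'].eq('Closed')
            & (df['production'] > 10000))
    run_id = (mask != mask.shift()).cumsum()   # label contiguous runs
    windows = []
    for _, grp in df[mask].groupby(run_id[mask]):
        if grp.index[-1] - grp.index[0] >= pd.Timedelta(min_duration):
            windows.append((grp.index[0], grp.index[-1]))
    return windows
```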
  14. Hi Mohammed, As John Cox stated in your previous post, there are a number of methods that can be used to remove or identify peaks. In your trend, it's not directly obvious what exactly you are defining as a peak and thus wanting to remove/identify. The image at the bottom contains various methods that you can explore. In order for us to provide a specific method to identify or remove peaks in your signal, you would need to provide us with additional information on what exactly you define as a peak (maybe by circling which peaks on your image you want to identify/remove). If you'd rather work on this over a meeting, you can always join one of our daily office hour sessions where a Seeq Analytics Engineer can help you 1:1. Emilio Conde Analytics Engineer
  15. You may have noticed that pushed data does not have a red trash icon at the bottom of its Item Properties. There's a simple way to move this (and any other) data to the trash through Data Lab. Read below.

      Pushed Data
      Normal Calculated Seeq Signal

      Moving data to trash through SDL

      Step 1: Identify the data of interest and store it in a dataframe. For this example, I want to move all of the items on this worksheet to trash. Thus, I can use spy.search to store them as a dataframe.

         remove_Data = spy.search('worksheet URL')

      Step 2: Create an 'Archived' column in the dataframe and set it to 'true'.

         remove_Data['Archived'] = 'true'

      Step 3: Push this dataframe back into Seeq as metadata.

         spy.push(metadata = remove_Data)

      The associated data should now be in the trash and no longer searchable in the Data pane.
  16. Hi Yanmin, Unfortunately, Seeq doesn't currently offer any contour map features; however, I've listed some options below to address your use case. In addition, while not directly applicable to what you're trying to achieve as a contour across 9 separate tags, I recommend looking into the Density Plot feature available in Seeq, as you may find some interest in this feature.

      Option 1: Create a simple scorecard for each Well and assemble them in an organizer in a neater format. It seems that you're using a 3x3 organizer table -- one cell for each Well. You could use only one table cell to get them to better fit, emulating a single table. Something like below. I only used "Well 02" to demonstrate the layout, but the idea is your "mapping" will be on the left to understand what you're looking at on the right.

      To go about this, create a worksheet for each Well. Create a metric as you have (with thresholds specified) and go to Table view. Under Headers, select None. Under Columns: if you are creating the left table, only have Name checked; if you are creating the right table, only have Metric Value checked. Insert each into a single cell of a table in Organizer -- I used a 1x2 table. For assembling adjacent columns, you'll want to make sure you insert each worksheet directly next to the other (no spaces between). For going to the next row, you'll want to make sure to SHIFT+ENTER, instead of a simple ENTER. Something like this should be the result. To remove the space between, simply click each individual cell (metric) and click Toggle Margin. After completing this for each metric, the table should resemble the first one I posted. You can resize the 1x2 Organizer table by clicking Table properties. For this example, I specified a width of 450 to narrow up the two columns.

      Option 2: Create a Treemap. This method will require that the Wells be a part of an asset group. If not already configured, this can be done within Seeq as of R52. This method may or may not give you the information you're looking for. Before considering this option, please be sure to read more about Treemaps on our Knowledge Base. Depending on the Priority colors and conditions you specify, your Treemap would look something like this. Note there is no way to change or specify the orientation within the normal user interface in Seeq (i.e. you can't easily specify a 3x3 layout). I hope this helps!
  17. We often would like to summarize data in a table to reflect something similar to below. There are a couple of ways to achieve this in Seeq. In this example, we'll explore using Simple Table view to get this result. If you're interested instead in using Conditional Scorecard Metrics, I would take a look at this Seeq.org post!

      Step 1: Go to Table view & select Simple. Under Columns, ensure Average, Last Value, and Name are selected.
      Step 2: Rearrange & rename the Headers; Last can be moved to the 2nd column and renamed to Current. Avg (now the 3rd column) can be renamed to 1 hr avg.
      Step 3: Copy the link and paste it into an Organizer topic. Create a new Date Range named 1 hr (with a duration of 1 hr) to assign to your table. After clicking the table & Update Seeq Content:
      Step 4: Can be done on the same worksheet or a new worksheet. I will create a new worksheet. Back in the Simple table, remove the Name column so only Average is selected. Rename this column to 24 hr avg.
      Step 5: Paste this worksheet into your Organizer next to your other table. Create another Date Range named 24 hr (with a duration of 24 hr) to assign to this newly added table (similar to Step 3).
      Step 6: Click each table, then click the Toggle Margin button. When complete, the table should look like one single table. To update the date range for the entire table, simply click the "Step to current time signal" next to Fixed Date Ranges.
  18. Timezone mismatches can oftentimes arise when using the .push() function with a dataframe. To ensure the dataframe's timezone matches the source workbench, we can use pandas' tz_localize() function. See an example of encountering and addressing this issue while pushing a csv dataset into workbench below.

      Step 1: Complete imports.

      Step 2: Load in the csv file as a dataframe. When you want to push data, it must have an index with a datetime data type. That's why we used the parse_dates and index_col arguments for pandas.read_csv(). Note my csv file's date/time column is named TIME(unitless), hence the arguments within parse_dates and index_col.

      *** Note the dates in Out[5] are all -06:00 ***

      If I simply moved forward to .push(), I'd see the following results: the original data's dates are not properly aligned with my worksheet, which is in US/Eastern. Instead, I should use the tz_localize() function on my index before pushing. See Step 3.

      Step 3: Use the tz_localize() function on your index, first to remove any native timezone from the dataframe, then again to assign the timezone of interest to the dataframe.

      *** Note the dates in Out[8] are now all -04:00 ***

      Finally, I can proceed to push the data into Seeq. You can now see that the timestamps of my data in workbench match their original timestamps.
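A minimal, self-contained sketch of Step 3, with a small hypothetical dataframe standing in for the csv data (the final spy.push call is shown as a comment since it requires a Seeq connection):

```python
import pandas as pd

# Hypothetical data whose index was parsed with a -06:00 offset,
# standing in for the csv loaded with parse_dates / index_col
idx = pd.to_datetime(['2023-06-01 00:00-06:00', '2023-06-01 01:00-06:00'])
df = pd.DataFrame({'value': [1.0, 2.0]}, index=idx)

# First tz_localize(None) strips the parsed offset (keeping wall-clock
# times); the second declares the timezone the workbench uses
df.index = df.index.tz_localize(None).tz_localize('US/Eastern')

# spy.push(data=df)   # would now push with -04:00 (EDT) timestamps
```

After the two tz_localize() calls, the wall-clock times are unchanged but the offset is now the worksheet's (-04:00 in June for US/Eastern).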
  19. There are times when we'd like to view histograms in a monthly view ordered chronologically by the starting month in the display range. This post reviews the results of 3 different methods utilizing Histogram vs Signal from Condition. All 3 examples show the same results but differ in how the results are displayed.

      Example 1: This method displays histograms by order of Month; thus, January will show first with December showing last, even though the display range is set from 7/26/2017 - 7/26/2018. As a result, we are not always looking at data in chronological order with this method. Simply go to your Histogram tool, select your signal/condition & statistic, then select Time as aggregation type --> Month of Year. Continue to Execute.

      Example 2: This method will ensure your Histogram is in chronological order, first ordered by year, then by month. The caveat is that the spacing of all bars in the display window is not held constant (a gap between years will be observed). Go back to the Histogram tool, select your signal/condition & statistic, then select Time as aggregation type --> Year. After this, select Add grouping. Again, select Time as aggregation type --> Month of Year. Continue to Execute. The color gradient can be changed by changing the main color in the histogram. Individual bar colors can also be changed by clicking the respective color box in the legend (top right of histogram).

      Example 3: This method will produce equally spaced bars in chronological order with no color gradient. To achieve this, we will use Signal from Condition. First, we need to create our condition. Because we are interested in a Monthly view, we can navigate to our Periodic Condition tool under Identify; Duration --> Monthly (All). Timezone can be specified and any shifts to the resulting capsules can be applied under Advanced. Now that we have our condition, we can proceed to our Signal from Condition tool under Quantify. As with the other examples, select your signal/condition & statistic. The bounding condition will be the Monthly condition we just created. For this use case, we will want our timestamp to be at the start of each capsule (month), and the interpolation method to be Discrete so that bars will be the resulting output. The output may have skinny bars and non-ideal axis min/max. This can be adjusted by clicking Customize in the Details pane. For this example, I used a width of 50, and axis min/max of 0/1.25.
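Example 3's underlying idea -- monthly statistics keyed by the actual month start so they sort chronologically rather than Jan..Dec -- can also be prototyped in pandas (not Seeq), as in this sketch with hypothetical data over the same 7/26/2017 - 7/26/2018 range:

```python
import numpy as np
import pandas as pd

# Hypothetical daily signal over the display range used above
rng = pd.date_range('2017-07-26', '2018-07-26', freq='D')
signal = pd.Series(np.arange(len(rng), dtype=float), index=rng)

# Monthly average labeled at each month's start ('MS'), so the
# results order chronologically: 2017-07, 2017-08, ..., 2018-07
monthly_avg = signal.resample('MS').mean()
```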
  20. James, This is a great suggestion. This feature request has been logged and will be looked at by the Dev team. Thanks, Emilio