Emilio Conde (Seeq Team)
  1. Hi Vladimir, There are several ways to apply this analysis to other assets.

     The first and easiest method is to work within an Asset Framework or Asset Group (if an existing framework is not available). All previous calculations would need to be created using the data in the Asset Group, but once that is done, you'll be able to seamlessly swap the entire analysis over to your other assets (Trains, in this case). Asset Groups allow you to create your own framework, either manually or by utilizing existing frameworks. This video does a great job of showing the creation of an Asset Group and scaling calculations across other assets. Note that you would need to be on at least version R52 to take advantage of Asset Groups.

     Another easy approach is to consolidate your analysis within 1 - 3 formulas (depending on what you really want to see). Generally speaking, this analysis could fall within ONE formula, but you may want more formulas if you care about seeing things like your "Tr1C1 no input, no output" condition across your other trains. I'll provide you with a way to consolidate this workflow in one formula, but feel free to break it into multiple if that is helpful to you. The reason this could be easier is that you can simply duplicate a single formula and manually reassign your variables to the respective variables of your other Train.

     Some useful things to note before viewing the formula:
       • Formulas can make use of variable definitions. You'll notice that within each step, except for the very last step, I'm assigning arbitrary/descriptive variables to each line so that I can reference those variables later in the formula. These variables could be an entire condition, or a signal / scalar calculation.
       • In the formula comments (denoted by the double slashes: //), I note certain things that could be different for your case.
       • You can access the underlying Formula of any point and click tools you use (Value Searches, Signals from Condition, etc.) by opening the item's Item Properties (in the Details pane) and scrolling down to Formula. Do this for your Tr1 C1 rate of change, monthly periodic condition, and average monthly rate calculations to see what the specific parameters are. This Seeq Knowledge Base article has an example of viewing the underlying formula within an item's Item Properties.
       • The only RAW SIGNALS needed in this formula are $valveTag1, $valveTag2, $productionTag, and $tr1Signal. The rest of the variables are assigned internally within the formula.

     // Steps 1, 2, 3, and 4
     // Note 'Closed' could be different for you if your valve tags are strings...
     // If your valve tags are binary (0 or 1), it would be "$valveTag == 0" (or 1)
     $bothValvesClosed = ($valveTag1 ~= 'Closed' and $valveTag2 ~= 'Closed').removeShorterThan(6h)

     // Step 5
     $valvesClosedProductionHigh = $bothValvesClosed and $productionTag > 10000

     // Step 6 ASSUMING YOU USED SIGNAL FROM CONDITION TO CALCULATE RATE
     // Note the "h" and ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
     $tr1RateofChange = $tr1Signal.aggregate(rate("h"), $valvesClosedProductionHigh.setMaximumDuration(40h), middleKey(), 40h)

     // Step 7
     // $months could also be different in your case
     // Note my final output has no variable definition. This is to ensure THIS is the true output of my formula
     // Again, the ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
     $months = months("US/Eastern")
     $tr1RateofChange.aggregate(average(), $months.setMaximumDuration(40h), middleKey(), 40h)

     Hopefully this makes sense and, at the very least, gives you an idea of how you can consolidate calculations within Formula for easier duplication of complex, multi-step calculations. Please let me know if you have any questions. Emilio Conde, Analytics Engineer
  2. Hi Mohammed, As John Cox stated in your previous post, there are a number of methods that can be used to remove or identify peaks. From your trend, it's not immediately obvious what exactly you are defining as a peak and therefore want to remove or identify. The image at the bottom contains various methods that you can explore. For us to recommend a specific method to identify or remove peaks in your signal, we would need additional information on what exactly you define as a peak (for example, by circling the peaks on your image that you want to identify or remove). If you'd rather work on this over a meeting, you can always join one of our daily office hour sessions, where a Seeq Analytics Engineer can help you 1:1. Emilio Conde, Analytics Engineer
  3. You may have noticed that pushed data does not have a red trash icon at the bottom of its Item Properties, unlike a normal calculated Seeq signal. There's a simple way to move this (and any other) data to the trash through Data Lab. Read below.

     Moving data to trash through SDL

     Step 1: Identify the data of interest and store it in a dataframe. For this example, I want to move all of the items on this worksheet to the trash, so I can use spy.search to store them as a dataframe.

       remove_Data = spy.search('worksheet URL')

     Step 2: Create an 'Archived' column in the dataframe and set it to 'true'.

       remove_Data['Archived'] = 'true'

     Step 3: Push this dataframe back into Seeq as metadata.

       spy.push(metadata = remove_Data)

     The associated data should now be in the trash and no longer searchable in the Data pane.
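     For convenience, here is a minimal end-to-end sketch of the same workflow, assuming you are running inside a Seeq Data Lab notebook (where spy is already authenticated). The worksheet URL below is a placeholder to replace with your own.

       from seeq import spy

       # Step 1: find the items to archive. The URL is a placeholder for your own worksheet URL.
       remove_Data = spy.search('https://your-seeq-server/workbook/<workbook-id>/worksheet/<worksheet-id>')

       # Step 2: mark every returned item as archived (i.e., moved to the trash).
       remove_Data['Archived'] = 'true'

       # Step 3: push the updated metadata back into Seeq.
       spy.push(metadata=remove_Data)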
  4. Hi Yanmin, Unfortunately, Seeq doesn't currently offer any contour map features; however, I've listed some options below to address your use case. In addition, while not directly applicable to what you're trying to achieve as a contour across 9 separate tags, I recommend looking into the Density Plot feature available in Seeq, as you may find it useful.

     Option 1: Create a simple scorecard for each Well and assemble them in an Organizer in a neater format. It seems that you're using a 3x3 Organizer table, one cell for each Well. You could instead use only one table cell to get them to fit better, emulating a single table. Something like below. I only used "Well 02" to demonstrate the layout, but the idea is that your "mapping" will be on the left so you understand what you're looking at on the right.

     To go about this, create a worksheet for each Well. Create a metric as you have (with thresholds specified) and go to Table view. Under Headers, select None. Under Columns:
       • If you are creating the left table, only have Name checked.
       • If you are creating the right table, only have Metric Value checked.
     Insert each into a single cell of a table in Organizer (I used a 1x2 table). For assembling adjacent columns, you'll want to make sure you insert each worksheet directly next to the other (no spaces between). For going to the next row, you'll want to press SHIFT+ENTER instead of a simple ENTER. Something like this should be the result. To remove the space between, simply click each individual cell (metric) and click Toggle Margin. After completing this for each metric, the table should resemble the first one I posted. You can resize the 1x2 Organizer table by clicking Table properties. For this example, I specified a width of 450 to narrow up the two columns.

     Option 2: Create a Treemap. This method requires that the Wells be part of an asset group. If not already configured, this can be done within Seeq as of R52. This method may or may not give you the information you're looking for, so before considering this option, please be sure to read more about Treemaps on our Knowledge Base. Depending on the Priority colors and conditions you specify, your Treemap would look something like this. Note there is no way to change or specify the orientation within the normal user interface in Seeq (i.e. you can't easily specify a 3x3 layout).

     I hope this helps!
  5. We often would like to summarize data in a table to reflect something similar to below. There are a couple of ways to achieve this in Seeq. In this example, we'll explore using Simple Table view to get this result. If you're interested instead in using Conditional Scorecard Metrics, I would take a look at this Seeq.org post!

     Step 1: Go to Table view and select Simple. Under Columns, ensure Average, Last Value, and Name are selected.

     Step 2: Rearrange and rename the headers; Last can be moved to the 2nd column and renamed to Current. Avg (now the 3rd column) can be renamed to 1 hr avg.

     Step 3: Copy the link and paste it into an Organizer topic. Create a new Date Range named 1 hr (with a duration of 1 hr) and assign it to your table after clicking the table and then Update Seeq Content.

     Step 4: This can be done on the same worksheet or a new worksheet; I will create a new worksheet. Back in the Simple table, remove the Name column so only Average is selected. Rename this column to 24 hr avg.

     Step 5: Paste this worksheet into your Organizer next to your other table. Create another Date Range named 24 hr (with a duration of 24 hr) and assign it to this newly added table (similar to Step 3).

     Step 6: Click each table and then click the Toggle Margin button. When complete, the table should look like one single table. To update the date range for the entire table, simply click "Step to current time" next to Fixed Date Ranges.
  6. Timezone mismatches can often arise when using the .push() function with a dataframe. To ensure the dataframe's timezone matches the source workbench, we can use the pandas tz_localize() function. See an example of encountering and addressing this issue while pushing a csv dataset into Workbench below.

     Step 1: Complete the imports.

     Step 2: Load the csv file in as a dataframe. When you want to push data, it must have an index with a datetime data type. That's why we used the parse_dates and index_col arguments of pandas.read_csv(). Note that my csv file's date/time column is named TIME(unitless), hence the arguments within parse_dates and index_col. Also note that the dates in Out[5] are all -06:00. If I simply moved forward to .push(), the original data's dates would not be properly aligned with my worksheet, which is in US/Eastern. Instead, I should use the tz_localize() function on my index before pushing. See Step 3.

     Step 3: Use the tz_localize() function on your index, first to remove any native timezone from the dataframe, then again to assign the timezone of interest to the dataframe. Note that the dates in Out[8] are now all -04:00. Finally, I can proceed to push the data into Seeq. You can now see that the timestamps of my data in Workbench match their original timestamps.
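     A minimal sketch of these steps is below. The column name TIME(unitless) and the US/Eastern timezone come from the example above; the file name data.csv is a placeholder, and the spy.push call assumes you are already logged in (for example, inside a Seeq Data Lab notebook).

       import pandas as pd
       from seeq import spy

       # Step 2: read the csv with a datetime index.
       # 'data.csv' is a placeholder; TIME(unitless) matches the column name in the example above.
       df = pd.read_csv('data.csv', parse_dates=['TIME(unitless)'], index_col='TIME(unitless)')

       # Step 3: strip whatever timezone/offset pandas inferred from the file,
       # then localize the index to the workbench timezone (US/Eastern in this example).
       df.index = df.index.tz_localize(None).tz_localize('US/Eastern')

       # Push the data into Seeq.
       spy.push(data=df)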
  7. There are times when we'd like to view histograms in a monthly view, ordered chronologically by the starting month in the display range. This post reviews 3 different methods, using either the Histogram tool or Signal from Condition. All 3 examples show the same results but differ in how the results are displayed.

     Example 1: This method displays histograms by order of month; thus, January will show first and December last, even though the display range is set from 7/26/2017 - 7/26/2018. As a result, we are not always looking at data in chronological order with this method. Simply go to your Histogram tool, select your signal/condition & statistic, then select Time as the aggregation type --> Month of Year. Continue to Execute.

     Example 2: This method will ensure your Histogram is in chronological order, first ordered by year, then by month. The caveat is that the spacing of the bars in the display window is not held constant (a gap between years will be observed). Go back to the Histogram tool, select your signal/condition & statistic, then select Time as the aggregation type --> Year. After this, select Add grouping and again select Time as the aggregation type --> Month of Year. Continue to Execute. The color gradient can be changed by changing the main color of the histogram. Individual bar colors can also be changed by clicking the respective color box in the legend (top right of the histogram).

     Example 3: This method will produce equally spaced bars in chronological order with no color gradient. To achieve this, we will use Signal from Condition. First, we need to create our condition. Because we are interested in a monthly view, we can navigate to the Periodic Condition tool under Identify; Duration --> Monthly (All). The timezone can be specified, and any shifts to the resulting capsules can be applied under Advanced. Now that we have our condition, we can proceed to the Signal from Condition tool under Quantify. As with the other examples, select your signal/condition & statistic. The bounding condition will be the Monthly condition we just created. For this use case, we will want our timestamp to be at the start of each capsule (month) and the interpolation method to be Discrete, so that bars will be the resulting output. The output may have skinny bars and a non-ideal axis min/max. This can be adjusted by clicking Customize in the Details pane. For this example, I used a width of 50 and an axis min/max of 0/1.25.
  8. James, This is a great suggestion. This feature request has been logged and will be looked at by the Dev team. Thanks, Emilio