
Emilio Conde

Seeq Team

Everything posted by Emilio Conde

  1. The condition is created simply in Formula -- it is not created within the asset group. It sounds like you've already done the other calculations, so I don't think adding a calculated item will be necessary. You simply need to add two assets and add all your columns via "Add Column". You can name your assets and columns appropriately, then click the + sign within the associated asset/column and search for the respective signal/calculation you already have. Creating scorecards will be the same as usual, except you need a condition-based scorecard referencing the now-1d condition, and you must also ensure the item in question belongs to the asset group you've created. It's worth noting that you only need to create the scorecards for ONE asset; the asset table feature will scale them to your other asset(s). If the threshold limits are not the same between, say, Raw for Salt (ptb) and BSW Water (%), then you may need to create upper/lower threshold references in the asset group to reference instead of hard-input values. Feel free to register for office hours tomorrow if you'd like some live assistance with setting it up: https://info.seeq.com/office-hours
  2. Hi Jesse, This is currently a feature request in our system, but there's a fairly simple workaround utilizing asset trees, asset tables, and condition-based tables. If these items don't belong to an asset tree, you can quickly create an Asset Group and map the associated signal calculations you've made. (You can give our Asset Groups video a watch if you've never worked with Asset Groups.)
     For best results, the condition you create should have only one capsule. I've elected to create a condition that represents the past day (with respect to now) by entering the following into Formula:
        condition(1d, capsule(now()-1d, now()))
     Then simply add the relevant signals to your Details pane by opening the asset group and selecting them from there. Notice the Assets column in my Details pane confirms these items are from the Asset Group I created in step 1.
     Next, go into Table view and select the Condition option instead of Simple. You can then Transpose, and click the Column drop-down to add each signal to the table. I am simply adding the "Last Value" for this example (I hard-coded your values into the signals for consistency). Click the Headers drop-down to get rid of the date range of the condition (unless you'd like to keep it).
     Finally, select the Asset icon at the top and choose the Asset Group you've created ("Example" in my case). This will scale out to any other assets at the same level of the tree. The final result looks something like this after editing my column names to remove "Last".
     Note that the far-left blank column will not appear if the table is inserted into an Organizer. Also note I've only demonstrated this for raw signals from an asset tree, but this method still works with Scorecard metrics to allow color thresholds. If you want to do this with scorecard metrics, just create the metrics referencing one of your "asset's" associated signals, BSW Water (%) for example, and then scale across the other "assets" as described above. Hopefully this helps get you the look you were hoping to create! Please let me know if you have any questions about any of the above.
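     As an aside, if you prefer working from Seeq Data Lab, the same past-day condition can be pushed programmatically with SPy. This is a minimal, hypothetical sketch (the item name is made up, and it assumes your SPy version supports pushing formula-based metadata as shown in the SPy push examples); the Formula-tool approach above works just as well.
        import pandas as pd
        from seeq import spy

        # Push a calculated condition whose formula is the same 'past day' condition shown above
        past_day = pd.DataFrame([{
            'Name': 'Past Day (example)',   # hypothetical item name
            'Type': 'Condition',
            'Formula': 'condition(1d, capsule(now() - 1d, now()))'
        }])
        spy.push(metadata=past_day)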
  3. Hi Vladimir, There are several ways to apply this analysis to other assets. The first and easiest method I'll mention is working in an Asset Framework or an Asset Group (if an existing framework is not available). All previous calculations would need to be created using the data in the Asset Group, but once that's done, you'll be able to seamlessly swap the entire analysis over to your other assets (Trains, in this case). Asset Groups allow you to create your own framework, either manually or by utilizing existing frameworks. This video does a great job of showing the creation of an Asset Group and scaling calculations across other assets. Note that you would need to be on at least version R52 to take advantage of Asset Groups.
     Another easy approach is to consolidate your analysis within 1 - 3 formulas (depending on what you really want to see). Generally speaking, this analysis could fall within ONE formula, but you may want more formulas if you care about seeing things like your "Tr1C1 no input, no output" condition across your other trains. I'll provide you with a way to consolidate this workflow in one formula, but feel free to break it into multiple formulas if that's helpful to you. The reason this can be easier is that you can simply duplicate a single formula and manually reassign your variables to the respective variables of your other Train. Some useful things to note before viewing the formula:
     • Formulas can make use of variable definitions. You'll notice that within each step, except for the very last one, I'm assigning an arbitrary/descriptive variable to each line so that I can reference these variables later in the formula. These variables can be an entire condition, or a signal/scalar calculation.
     • In the formula comments (denoted by the double slashes: //), I note certain things that could be different for your case.
     • You can access the underlying Formula of any point-and-click tools you use (Value Searches, Signal from Condition, etc.) by clicking the item's Item Properties (in the Details Pane) and scrolling down to Formula. Do this for your Tr1 C1 rate of change, monthly periodic condition, and average monthly rate calculations to see what the specific parameters are. This Seeq Knowledge Base article has an example of viewing the underlying formula within an item's Item Properties.
     • The only RAW SIGNALS needed in this formula are $valveTag1, $valveTag2, $productionTag, and $tr1Signal. The rest of the variables are assigned internally to the formula.

        // Steps 1, 2, 3, and 4
        // Note 'Closed' could be different for you if your valve tags are strings...
        // If your valve tags are binary (0 or 1), it would be "$valveTag == 0" (or 1)
        $bothValvesClosed = ($valveTag1 ~= 'Closed' and $valveTag2 ~= 'Closed').removeShorterThan(6h)

        // Step 5
        $valvesClosedProductionHigh = $bothValvesClosed and $productionTag > 10000

        // Step 6 ASSUMING YOU USED SIGNAL FROM CONDITION TO CALCULATE RATE
        // Note the "h" and ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
        $tr1RateofChange = $tr1Signal.aggregate(rate("h"), $valvesClosedProductionHigh.setMaximumDuration(40h), middleKey(), 40h)

        // Step 7
        // $months could also be different in your case
        // Note my final output has no variable definition. This is to ensure THIS is the true output of my formula
        // Again, the ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
        $months = months("US/Eastern")
        $tr1RateofChange.aggregate(average(), $months.setMaximumDuration(40h), middleKey(), 40h)

     Hopefully this makes sense and at the very least gives you an idea of how you can consolidate calculations within Formula for easier duplication of complex, multi-step calculations. Please let me know if you have any questions. Emilio Conde, Analytics Engineer
  4. Hi Mohammed, As John Cox stated in your previous post, there are a number of methods that can be used to remove or identify peaks. From your trend, it's not immediately obvious what exactly you are defining as a peak and therefore wanting to remove or identify. The image at the bottom contains various methods that you can explore. For us to recommend a specific method to identify or remove peaks in your signal, we would need additional information on what exactly you define as a peak (for example, by circling the peaks in your image that you want to identify/remove). If you'd rather work on this over a meeting, you can always join one of our daily office hour sessions, where a Seeq Analytics Engineer can help you 1:1. Emilio Conde, Analytics Engineer
  5. You may have noticed that pushed data does not have a red trash icon at the bottom of its Item Properties (compare the Item Properties of a pushed item with those of a normal calculated Seeq signal). There's a simple way to move this (and any other) data to the trash through Seeq Data Lab (SDL):
     Step 1: Identify the data of interest and store it in a DataFrame. For this example, I want to move all of the items on this worksheet to the trash, so I can use spy.search to store them as a DataFrame.
        remove_Data = spy.search('worksheet URL')
     Step 2: Create an 'Archived' column in the DataFrame and set it to 'true'.
        remove_Data['Archived'] = 'true'
     Step 3: Push this DataFrame back into Seeq as metadata.
        spy.push(metadata = remove_Data)
     The associated data should now be in the trash and no longer searchable in the Data pane.
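     Putting the three steps together, a minimal sketch of the full Data Lab cell might look like the following (the worksheet URL is a placeholder -- replace it with the URL of the worksheet whose items you want to archive):
        from seeq import spy

        # spy.search returns a pandas DataFrame of every item referenced on the worksheet
        remove_Data = spy.search('worksheet URL')  # placeholder URL

        # Flag each item as archived; pushing this metadata moves the items to the trash
        remove_Data['Archived'] = 'true'
        spy.push(metadata=remove_Data)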
  6. Hi Yanmin, Unfortunately, Seeq doesn't currently offer any contour map features; however, I've listed some options below to address your use case. In addition, while not directly applicable to what you're trying to achieve (a contour across 9 separate tags), I recommend looking into the Density Plot feature available in Seeq, as you may find it useful.
     Option 1: Create a simple scorecard for each Well and assemble them in an Organizer in a neater format. It seems that you're using a 3x3 Organizer table (one cell for each Well). You could use only one table cell to get them to fit better, emulating a single table; something like below. I only used "Well 02" to demonstrate the layout, but the idea is that your "mapping" will be on the left so you understand what you're looking at on the right.
     To go about this, create a worksheet for each Well. Create a metric as you have (with thresholds specified) and go to Table view. Under Headers, select None. Under Columns: if you are creating the left table, have only Name checked; if you are creating the right table, have only Metric Value checked. Insert each into a single cell of a table in Organizer (I used a 1x2 table). When assembling adjacent columns, make sure you insert each worksheet directly next to the other (no spaces between). When going to the next row, make sure to use SHIFT+ENTER instead of a simple ENTER. Something like this should be the result. To remove the space between cells, simply click each individual cell (metric) and click Toggle Margin. After completing this for each metric, the table should resemble the first one I posted. You can resize the 1x2 Organizer table by clicking Table properties; for this example, I specified a width of 450 to narrow the two columns.
     Option 2: Create a Treemap. This method requires that the Wells be part of an asset group; if one is not already configured, this can be done within Seeq as of R52. This method may or may not give you the information you're looking for, so before considering this option, please be sure to read more about Treemaps on our Knowledge Base. Depending on the Priority colors and conditions you specify, your Treemap would look something like this. Note there is no way to change or specify the orientation within the normal user interface in Seeq (i.e., you can't easily specify a 3x3 layout).
     I hope this helps!
  7. We often would like to summarize data in a table to reflect something similar to the example below. There are a couple of ways to achieve this in Seeq. In this example, we'll use Simple Table view to get this result. If you're interested in using Conditional Scorecard Metrics instead, take a look at this Seeq.org post!
     Step 1: Go to Table view and select Simple. Under Columns, ensure Average, Last Value, and Name are selected.
     Step 2: Rearrange and rename the Headers: Last can be moved to the 2nd column and renamed to Current, and Avg (now the 3rd column) can be renamed to 1 hr avg.
     Step 3: Copy the link and paste it into an Organizer topic. Create a new Date Range named 1 hr (with a duration of 1 hr) to assign to your table after clicking the table and then Update Seeq Content.
     Step 4: This can be done on the same worksheet or a new one; I will create a new worksheet. Back in the Simple table, remove the Name column so that only Average is selected, and rename this column to 24 hr avg.
     Step 5: Paste this worksheet into your Organizer next to your other table. Create another Date Range named 24 hr (with a duration of 24 hr) to assign to this newly added table (similar to Step 3).
     Step 6: Click each table and then click the Toggle Margin button. When complete, the result should look like one single table. To update the date range for the entire table, simply click "Step to current time" next to Fixed Date Ranges.
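     If you'd rather compute the same summary outside of Table view, a rough Data Lab equivalent is sketched below. This is purely illustrative (the search terms are placeholders, not items from this example), but it shows the Current / 1 hr avg / 24 hr avg logic in pandas via SPy.
        import pandas as pd
        from seeq import spy

        # Placeholder search -- swap in the signals you actually want to summarize
        items = spy.search({'Name': 'Compressor Power', 'Datasource Name': 'Example Data'})

        # Pull the trailing 24 hours of data
        end = pd.Timestamp.now(tz='US/Eastern')
        data = spy.pull(items, start=end - pd.Timedelta('24h'), end=end, grid='5min')

        last_hour = data[data.index >= end - pd.Timedelta('1h')]
        summary = pd.DataFrame({
            'Current': data.iloc[-1],      # last value of each signal
            '1 hr avg': last_hour.mean(),  # average over the trailing hour
            '24 hr avg': data.mean()       # average over the full 24 hours
        })
        print(summary)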
  8. Time zone mismatches can often arise when using the .push() function with a DataFrame. To ensure the DataFrame's time zone matches the source workbench, we can use the pandas tz_localize() function. See an example of encountering and addressing this issue while pushing a csv dataset into Workbench below.
     Step 1: Complete the imports.
     Step 2: Load the csv file as a DataFrame. When you want to push data, it must have an index with a datetime data type; that's why we use the parse_dates and index_col arguments of pandas.read_csv(). Note my csv file's date/time column is named TIME(unitless), hence the arguments within parse_dates and index_col. (Note the dates in Out[5] are all -06:00.) If I simply moved forward to .push(), I'd see the original data's timestamps are not properly aligned with my worksheet, which is in US/Eastern. Instead, I should use the tz_localize() function on my index before pushing; see Step 3.
     Step 3: Use the tz_localize() function on your index, first to remove any native time zone from the DataFrame, then again to assign the time zone of interest to the DataFrame. (Note the dates in Out[8] are now all -04:00.) Finally, I can proceed to push the data into Seeq. You can now see that the timestamps of my data in Workbench match their original timestamps.
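     For reference, a minimal sketch of the whole workflow is below. The file name is a placeholder, the column name mirrors this example, and it assumes (as in this post) that the csv timestamps carry an offset, so the parsed index comes in time-zone aware.
        import pandas as pd
        from seeq import spy

        # Step 2: parse the timestamp column and use it as the index
        df = pd.read_csv('my_data.csv',  # placeholder file name
                         parse_dates=['TIME(unitless)'],
                         index_col='TIME(unitless)')

        # Step 3: strip the offset the csv came with, then assign the intended time zone
        df.index = df.index.tz_localize(None).tz_localize('US/Eastern')

        # Push the correctly localized data into Seeq
        spy.push(data=df)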
  9. There are times when we'd like to view histograms in a monthly view, ordered chronologically by the starting month in the display range. This post reviews 3 different methods: two using the Histogram tool and one using Signal from Condition. All 3 examples show the same results but differ in how the results are displayed.
     Example 1: This method displays histograms by order of month, so January will show first and December last, even though the display range is set from 7/26/2017 - 7/26/2018. As a result, we are not always looking at data in chronological order with this method. Simply go to the Histogram tool, select your signal/condition and statistic, then select Time as the aggregation type --> Month of Year. Continue to Execute.
     Example 2: This method will ensure your histogram is in chronological order, first ordered by year, then by month. The caveat is that the spacing of the bars in the display window is not held constant (a gap between years will be observed). Go back to the Histogram tool, select your signal/condition and statistic, then select Time as the aggregation type --> Year. After this, select Add grouping and again select Time as the aggregation type --> Month of Year. Continue to Execute. The color gradient can be changed by changing the main color in the histogram; individual bar colors can also be changed by clicking the respective color box in the legend (top right of the histogram).
     Example 3: This method will produce equally spaced bars in chronological order with no color gradient. To achieve this, we will use Signal from Condition. First, we need to create our condition. Because we are interested in a monthly view, we can navigate to the Periodic Condition tool under Identify; Duration --> Monthly (All). The time zone can be specified, and any shifts to the resulting capsules can be applied under Advanced. Now that we have our condition, we can proceed to the Signal from Condition tool under Quantify. As with the other examples, select your signal/condition and statistic. The bounding condition will be the Monthly condition we just created. For this use case, we want our timestamp to be at the start of each capsule (month) and the interpolation method to be Discrete so that bars are the resulting output. The output may have skinny bars and a non-ideal axis min/max; this can be adjusted by clicking Customize in the Details pane. For this example, I used a width of 50 and an axis min/max of 0/1.25.
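     If you'd like to sanity-check the monthly values outside of Workbench, the same chronological monthly averages can be reproduced in Data Lab with pandas. This is an illustrative alternative, not part of the Histogram or Signal from Condition tools, and the search terms below are placeholders.
        import pandas as pd
        from seeq import spy

        # Placeholder search -- replace with the signal you are summarizing
        items = spy.search({'Name': 'Temperature', 'Datasource Name': 'Example Data'})

        # Pull the display range used in the examples above
        data = spy.pull(items, start='2017-07-26', end='2018-07-26', grid='1h')

        # Group by calendar month (in chronological order) and average,
        # matching the monthly statistic produced in Example 3
        monthly_avg = data.resample('MS').mean()
        print(monthly_avg)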
  10. James, This is a great suggestion. This feature request has been logged and will be looked at by the Dev team. Thanks, Emilio