Emilio Conde

Seeq Team

Posts posted by Emilio Conde

  1. I agree with Nuraisyah that you can utilize Custom Labels here, though depending on the scale, this may not be very efficient. Also, if you move the trend items around to different lanes, the Custom Labels will not follow the signals.

     

    Another approach you can consider, directly addressing your question of "is the only other option to include the percentage type in the signal description?", is to add the unit directly in the name of the item in parentheses upon creation in Workbench or in Data Lab. It's not the most ideal approach, but at least the unit is directly assigned to the item and is very findable/obvious.

     

    An example of this is shown in the spy.push example notebook, but as I mentioned, Data Lab is not required; you can essentially add anything to the Name of an item, even in Workbench:

    image.png
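
    For reference, a minimal sketch of what that push might look like in Data Lab (the signal, datasource, and workbook names below are illustrative assumptions, not from the example notebook):

    import pandas as pd
    from seeq import spy
    
    # Find the source signal to wrap (name and datasource are assumptions)
    results = spy.search({'Name': 'Relative Humidity', 'Datasource Name': 'Example Data'})
    
    # Push a formula-based copy whose Name carries the unit in parentheses
    metadata = pd.DataFrame([{
        'Name': 'Relative Humidity (%)',  # unit embedded directly in the item name
        'Formula': '$rh',                 # simple pass-through of the source signal
        'Formula Parameters': {'$rh': results.iloc[0]['ID']},
    }])
    spy.push(metadata=metadata, workbook='Unit in Name Example')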

  2. Thanks for the context, John.

     

    Unfortunately, Organizer currently only supports basic HTML, not the CSS / HTML5 generated by the .highlight_max() or .highlight_null() Styler methods.

     

    With that said, I recommend you send us a support ticket here requesting this functionality so that we can link it to an existing feature request, allowing you to be automatically notified once this capability is implemented in a future version of Seeq.

     

    Working with ChatGPT, I was able to get the basic HTML equivalent applied to the df in your example and have verified it pushes to Organizer as expected. Of course, another approach could be to create an image out of your HTML and just push the image; there are other posts on this forum that discuss that functionality.

    See working example below:

    import numpy as np
    import pandas as pd
    from bs4 import BeautifulSoup
    
    
    organizer_id = '7AC3EAA0-6429-46D5-88E3-57ADC6AA1ED7'
    
    df = pd.DataFrame({
        "A": [0, -5, 12, -4, 3],
        "B": [12.24, 3.14, 2.71, -3.14, np.nan],
        "C": [0.5, 1.2, 0.3, 1.9, 2.2],
        "D": [2000, np.nan, 1000, 7000, 5000]
    })
    
    
    # Basic HTML to highlight a cell
    def highlight_cell_bg(val, color=None):
        # If there's a color specified, apply it as the background
        if color:
            return 'background-color: {}'.format(color)
        return ""
    
    # Highlight the max value in a column yellow
    def highlight_max_bg(series):
        # Remove non-numeric values and compute max
        numeric_vals = series[pd.to_numeric(series, errors='coerce').notnull()]
        if not numeric_vals.empty:
            max_val = numeric_vals.max()
            return [highlight_cell_bg(val, 'yellow') if val == max_val else "" for val in series]
        return [""] * len(series)
    
    # Highlight Missing row values red
    def highlight_missing_bg(val):
        if val == 'Missing':
            return highlight_cell_bg(val, 'red')
        return ""
    
    # Replace NaN values with "Missing" and highlight them
    df.replace({np.nan: 'Missing'}, inplace=True)
    missing_bg = df.applymap(highlight_missing_bg)
    
    
    # Highlight the maximum values in each column
    max_bg = df.apply(highlight_max_bg)
    
    
    # Merge the two background styles
    final_bg = missing_bg.where(missing_bg != "", max_bg)
    
    # Convert DataFrame to HTML without additional formatting
    raw_html = df.to_html(escape=False, header=True, index=False)
    
    # Parse the HTML using BeautifulSoup
    soup = BeautifulSoup(raw_html, 'html.parser')
    
    # Iterate through each cell in the table and apply styles
    for row in soup.findAll("tr"):
        for col_name, cell in zip(df.columns, row.findAll("td")):
            if cell.text in df.columns:
                continue  # Skip headers
            
            # Convert dataframe values to string for comparison
            idx = df[col_name].astype(str).tolist().index(cell.text)
            style = final_bg[col_name].iloc[idx]
            if style:
                cell["style"] = style
    
    # Convert the modified HTML back to a string
    html = str(soup)
    
    
    # Find the topic you want
    topic_search = spy.workbooks.search({'ID': organizer_id})
    
    # Pull in the topic
    topic = spy.workbooks.pull(topic_search, include_referenced_workbooks = False)[0]
    
    # Create a new sheet
    topic.document('New Sheet')
    
    # Modify the html of the new sheet with the styled df html
    topic.worksheets[-1].html = html
    
    # Push your changes
    organizer_push = spy.workbooks.push(topic)

    image.png

  3. Hi John,

     

    Have you tried something similar to below?

    # Convert styled df to html
    html = df.style.to_html()
    
    # Find the topic you want
    topic_search = spy.workbooks.search({'ID': organizer_id})
    
    # Pull in the topic
    topic = spy.workbooks.pull(topic_search, include_referenced_workbooks = False)[0]
    
    # Create a new sheet
    topic.document('Sheet Name')
    
    # Modify the html of the new sheet with the styled df html
    topic.worksheets[-1].html = html
    
    # Push your changes
    organizer_push = spy.workbooks.push(topic)

     

  4. One thing to add to this: SPy has been upgraded since this post was made to allow more options for templatizing content, including Organizers with images! Take a look at the documentation for Templates with images here, and the base documentation for Templates here.

     

    This offers a different approach to replacing images in Organizer, one that doesn't require dealing directly with the HTML.

    • Thanks 1
  5. There are times when you may need to calculate a standard deviation across a time-range using the data within a number of signals. Consider the below example.

    image.png

     

    When a calculation like this is meaningful/important, the straightforward options in Seeq may not be mathematically representative of a comprehensive standard deviation. These straightforward options include:

    • Take a daily standard deviation for each signal, then average these standard deviations
    • Take a daily standard deviation for each signal, then take the standard deviation of the standard deviations
    • Create a real-time standard deviation signal (using stddev($signal1, $signal2, ... , $signalN)), then take the daily average or standard deviation of this signal

    While these straightforward options may be fine for many statistics (max of maxes, average of averages, sum of totals, etc.), a time-weighted standard deviation across multiple signals presents an interesting challenge.

    This post will detail methods to achieve this type of calculation by time-warping the data from each signal, then combining the individually warped signals into a single signal. Similar methods are also discussed in the following two seeq.org posts:

     

     

    Two different methods to arrive at the same outcome will be explored. Both methods share the same Steps 1 & 2.

     

    Step 1: Gather Signals of Interest

    This example will consider 4 signals. The same methods can be used for more signals, but note that implementing this solution programmatically via Data Lab may be more efficient when considering a high number of signals (>20-30).

    image.png

     

    Step 2: Create Important Scalar Constants and Condition

    1. Number of Signals: The number of signals to be considered. 4 in this case.
    2. Un-Warped Interval: The interval you are interested in calculating a standard deviation (I am interested in a Daily standard deviation, so I entered 1d)
    3. Warped Interval: A ratio calculation of Un-Warped Interval / Number of Signals. This details the new time-range for the time-warped signals. I.e., given I have 4 signals, each considering a day's worth of data, each signal's day of data will be warped into a 6 hour interval.
    4. Un-Warped Periods: This creates a condition with capsules spanning the original periods of interest. periods($unwarped_interval)

    image.png
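
    If you prefer Formula over point-and-click, a sketch of these four items (each line would be its own Formula item, with $num_of_signals and $unwarped_interval referencing the items above):

    4                                      // 1. Number of Signals
    1d                                     // 2. Un-Warped Interval
    $unwarped_interval / $num_of_signals   // 3. Warped Interval (6h here)
    periods($unwarped_interval)            // 4. Un-Warped Periods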

     

    Method 1: Create ONE Time-Shift Signal, and move output Warped Signals

    The Time Shift Signal will be used as a counter to condense the data in the period of interest (1 day for this example) down to the warped interval (6 hours for this example).

    image.png

    0-timeSince($unwarped_period, 1s)*(1-1/$num_of_signals)

     

    The next step is to use this Time Shift Signal to move the data within each signal. Note the integer in this Formula steps with each signal it is applied to (0 for the first signal, 1 for the second, and so on). Details can be viewed in the screenshots.

    image.png

    $area_a.move($time_shift_signal, $unwarped_interval).setMaxInterpolation($warped_interval).move(0*$warped_interval)

     

    The last step is to combine each of these warped signals together. We now have a Combined Output that can be used as an input into a Daily Standard Deviation that will represent the time-weighted standard deviation across all 4 signals within that day.

    image.png
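
    The combination can be done in Formula with combineWith(); a minimal sketch, assuming the warped signals from the previous step are named $warped_1 through $warped_4:

    combineWith($warped_1, $warped_2, $warped_3, $warped_4)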

     

    Method 2: Create a Time-Shift Signal for each Signal - No Need to Move Output Warped Signals

    This method takes advantage of 4 time-shift signals, one per signal. Note there is also an integer in this Formula that steps with each signal it is applied to. Details can be viewed in the screenshot. These signals take care of the data placement, whereas the data placement was handled by .move(N*$warped_interval) above.

    image.png

    0*$warped_interval-timeSince($unwarped_period, 1s)*(1-1/$num_of_signals)

     

    We can then follow Method 1 to use the time shift signals to arrange our signals. We just need to be careful to use each respective time shift signal, as opposed to the single time shift signal created in Method 1. As mentioned above, there is no longer a .move(N*$warped_interval) needed at the end of this formula.

    The last step is to combine each of these warped signals together, similar to Method 1. 

    image.png

    $area_a.move($time_shift_1, $unwarped_interval).setMaxInterpolation($warped_interval)

     

     

    Comparing Method 1 and Method 2 & Calculation Outputs

    The below screenshot shows how Methods 1 & 2 arrive at the same output.

    image.png

     

    Note the difference in calculated values. The Methods reviewed in this post most closely capture the true time-weighted standard deviation per day across the 4 signals.

    image.png

     

    Caveats and Final Thoughts

    While this method is the most mathematically correct, there is a slight loss of data at the edges. When combining the data in the final step, the beginning of $signal_2 falls at the end of $signal_1, and so on. There are some methods that could address this, but the loss of samples should be negligible to the overall standard deviation calculation.

    This method is also heavy on processing, especially as the input signals' data resolution and the overall number of signals increase. It is most ideal when real-time results are not of high importance, and it is better suited to calculation outputs displayed in an Organizer that shows the previous day's/week's/etc. results.

    • Like 1
    • Thanks 1
  6. Sometimes it's beneficial to fill these gaps with a prior average that is dynamic. The above post details how to fill the gap with a static timeframe, such as the 10 minutes before the gap. But what if we wanted to do something different, such as take the gap duration, a dynamic value, and fill the gap with the prior average based on the gap duration?

    Below details how to do this. There is a similar approach that leverages the .transform() Seeq Formula function here, but I've provided an alternative method that avoids the usage of .transform(). Of course, this can all be combined into a single Formula, but below details each step broken out.

     

    Solution:

    1. Identify data gaps & calculate gap durations:

    Notes:

    • .intersect(past()) guarantees we are not considering the future for data validity
    • The maximum capsule duration should be carefully considered depending on the maximum gap duration you want to fill
    • Placing the timestamp across the capsule durations is important

     

    image.png

    image.png

     

     

    2. Create an arbitrary flat signal & move this signal backwards (negatively) based on the gap durations signal

    Notes:

    • The timeframe specified in .toSignal(1min) influences the resolution of representing the gap duration. 1min should suffice for most cases.
    • It's important to include a minus in front of the gap duration to indicate we are moving backwards in time. The 24h dictates the maximum duration allowed to move, which is dependent on the expected gap durations.

    image.png

    image.png
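
    A minimal sketch of this step in Formula, assuming $gap_duration is the gap durations signal from Step 1:

    1.toSignal(1min).move(-$gap_duration, 24h)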

     

    3. Identify the new, moved signal and join the capsules from the new condition to the original signal gaps condition

    Notes:

    • Again, the maximum capsule duration needs to be carefully considered when joining the two conditions.

    image.png

    image.png

     

    4. Calculate the average of the gappy signal across the joined condition, then splice this average into the original gappy signal

    Notes:

    • Again, specifying the timestamp to be placed across the capsule durations is important here.
    • Be sure to splice the average across the original signal gaps. Including .validValues() ensures we interpolate across our original gappy signal and the replacement average signal.

    image.png

    image.png
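
    A sketch of this final step, assuming $joined is the joined condition from Step 3 and $gaps is the original signal gaps condition from Step 1:

    $avg = $signal.aggregate(average(), $joined, durationKey())
    $signal.splice($avg, $gaps).validValues()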

    • Like 1
  7. Hi Tranquil,

     

    The method I described above is a way to find assets that contain a signal of interest, i.e., performing a search for the signal name within the top level of the asset. However, I don't believe this manual identification is 100% necessary to complete what you're wanting. See below.

    image.png

     

    Results of the above search show:

    image.png

     

    The above shows that only the Example >> Cooling Tower 2 >> Area E asset has the Irregular Data signal. However, I don't need to do these searches to manually identify which assets have which signals. I can jump immediately into asset groups:

    image.png

     

    Clicking Add above then repeating for Cooling Tower 2 will yield the following:

    image.png

     

    Adding a condition based on Irregular Data:

    image.png

     

    Saving the Asset Group then searching again for Irregular Data, this time within the newly created Asset Group:

    image.png

     

    Now I can see that the specified Condition was only created for Area E because Area E was the only asset that contained the signal of interest (in this case, Irregular Data). If more assets also had the Irregular Data signal, then conditions would have also been created for those assets referencing their Irregular Data signal.

     

    Hopefully this helps. I encourage you to take a look at some of the videos we have for Asset Groups on our YouTube page. Specifically, this video discusses the process I used above, where I populate an Asset Group using other preexisting asset trees as a starting point.

     

     

  8. Do you mean exporting the values of the asset group items? If so, you can make use of the Data tab to add all variables to your display and then the Export to Excel tool to export the values.

    More information on searching within an Asset Tree/Group can be found here, and information on exporting data to Excel (i.e., using the Export to Excel tool) can be found here.

     

    See screenshots below on what this process may look like:

    image.png

     

    image.png

     

    image.png

     

    Please let me know if this helps or if this isn't what you meant when inquiring about exporting the asset group subitems.

  9. Hey Red,

     

    I'm curious what the Formula documentation for setProperty() looks like for you. Are you able to take/send a screenshot?

     

    I'm able to see internally that the ability to use setProperty() in the way you're attempting (along with the other posts you've referenced) is a capability in Seeq as of version R51. Prior to R51, this method will not work, as demonstrated by your error. Has your organization considered upgrading? There are many new features that you could be taking advantage of (see more details in What's New)! Reach out to your Seeq contact or support@seeq.com to get the upgrade process started.

     

    The point made above is also found in this other seeq.org post: 

     

     

    For reference, below is what the setProperty() documentation looks like as of version R58. It could be that the only variation you're able to use in your version is

    $condition.setProperty(propertyName, value)

     

    image.png

  10. Seasonal variation can influence specific process parameters whose values depend on ambient conditions, or perhaps raw material makeup changes over the year's seasons based on scheduled orders from different vendors.

    For these reasons and more, it may not suffice to compare your previous month's process parameters against the current month's. In these situations, it may be best to compare current product runs against runs of the same product from the same month a year ago in order to assess consistency or deviations.

     

    In Seeq, this can be achieved through utilizing Condition Properties.

     

    1. Bring in raw data. For this example, I will utilize a single parameter (Viscosity) and a grade code signal.

    image.png

     

    2. Convert Product step-signal into a condition. Add properties of Product ID, Variable statistic(s), and month start/end times.

    // Create a month condition. Be sure to specify your time zone so that start/end times are at 12:00 AM
    $m = months('US/Eastern')
    
    // Create a signal for start and end times to add into "Product ID" condition
    $start_signal = $m.toSignal('Start', startKey()).toStep()
    $end_signal = $m.toSignal('End', startKey()).toStep()
    
    
    $p.toCondition('Product ID') // Convert string signal into a condition, with a capsule at each unique string
                                 // Specifying 'Product ID' ensures the respective values in Signal populate
                                 // a property named 'Product ID'
                                 
    .removeLongerThan(100d)      // Bound condition. 100d as arbitrary limit
    .setProperty('Avg Visc', $vs, average())  // Set 'Avg Visc' property reflecting avg visc over each Product ID
    .setProperty('Month Start', $start_signal, startValue()) // Set 'Month Start' property to know what month Product ID ran
    .setProperty('Month End', $end_signal, startValue())     // Set 'Month End' property to know what month Product ID ran

    image.png

     

    image.png

     

     

    3. Create another condition that has a capsule ranging the entire month for each product run within the month.

    Add similar properties, but note naming differences of 'Previous Month Start' and 'Previous Month Avg Visc'. This is because in the next step we will move this condition forward by one year.

    $pi.grow(60d) // Need to grow capsules in the condition to ensure they consume the entire month
    
       .transform($capsule -> // For each capsule (Product Run) in 'Product ID'....
       
         capsule($capsule.property('Month Start'), $capsule.property('Month End')) // Create a capsule ranging the entire month
         .setProperty('Product ID', $capsule.property('Product ID'))               // Add property of Product ID
         .setProperty('Previous Month Start', $capsule.property('Month Start'))    // Add property of Month Start named 'Previous Month Start'
         .setProperty('Previous Month Avg Visc', $capsule.property('Avg Visc'))    // Add property of Avg Visc named 'Previous Month Avg Visc'
         )

    image.png

     

    image.png

    Notice we now have many overlapping capsules in our new condition ranging an entire month -- one for each Product Run that occurred within the month.

     

     

    4. Move the previous 'Month's Product Runs' condition forward a year and merge with existing 'Product ID' condition.

    Aggregate properties of 'Previous Month Avg Visc'. This ensures that if a product was run multiple times and had different avg visc values in each run, then what is displayed will be the average of all the avg visc values for that product.

    $previousYearMonthProductRun = $mspi.move(1y) // Move condition forward a year
    
    $pi.mergeProperties($previousYearMonthProductRun, 'Product ID', // Merge the properties of both conditions only if their 
                                                                    // capsules share a common value of 'Product ID'
      keepProperties(),                                             // keepProperties() will preserve all existing properties
      
      aggregateProperty('Previous Month Avg Visc', average()))      // aggregateProperty() will take the average of all 'Previous
                                                                    // Month Avg Visc' properties if multiple exist... I.e. if 
                                                                    // there were multiple Product Runs, each with a different value
                                                                    // for 'Previous Month Avg Visc', then take the average of all of 
                                                                    // them.

    image.png

     

    The resulting condition will match our original condition, except now with two new properties: 'Previous Month Start' & 'Previous Month Avg Visc'

    image.png

     

    We can then add these properties in a condition table to create a cleaner view.

    image.png

     

    image.png

     

    We could also consider creating any other statistics of interest such as % difference of current Avg Visc vs Previous Month Avg Visc. To do this, we could use a method similar to gathering $start_signal and $end_signal in Step 2, create the calculation using the signals, then add back to the condition as a property.
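
    A sketch of that follow-on calculation, assuming $merged is the final condition from Step 4 (property extraction as in Step 2):

    $avg = $merged.toSignal('Avg Visc', startKey()).toStep()
    $prev = $merged.toSignal('Previous Month Avg Visc', startKey()).toStep()
    (($avg - $prev) / $prev * 100).setUnits('%')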

    • Like 2
  11. The condition is created simply in Formula (see the sketch below the screenshots); it is not created within the asset group. It seems like you've already done the other calculations, so I don't think adding a calculated item will be necessary. You simply need to add two assets and add all your columns via "Add Column". You can properly name your assets and columns, then click the + sign within the associated asset/column and search for the respective signal/calculation you already have.

    image.png

    image.png
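
    For reference, the now-1d condition mentioned above can be created in Formula as a trailing one-day window ending at now:

    condition(1d, capsule(now()-1d, now()))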

     

    Creating scorecards will be the same as usual, except you need a condition-based scorecard referencing the now-1d condition, and you must ensure the item in question belongs to the asset group you've created. It's worth noting you only need to create the scorecards for ONE asset; the asset table feature will scale them to your other asset(s). If the threshold limits are not the same between, say, Raw for Salt (ptb) and BSW Water (%), then you may need to create upper/lower threshold references in the asset group to reference instead of hard-input values.

    image.png

     

    image.gif

     

    Feel free to register to office hours tomorrow if you'd like some live assistance with setting it up: https://info.seeq.com/office-hours

  12. Hi Jesse,

     

    This is currently a feature request in our system, but there's a fairly simple workaround utilizing asset trees, asset tables, and condition-based tables.

     

    If these items don't belong to an asset tree, then you can quickly create an Asset Group and map the associated signal calculations you've made. (You can give our Asset Groups video a watch if you've never worked with Asset Groups)

    image.png

     

    For best results, the condition you create should only have one capsule. I've elected to create a condition that represents the past day (with respect to now) by entering the following into Formula:

    condition(1d, capsule(now()-1d,now()))

    Then simply add the relevant signals into your details pane by opening up the asset group and selecting them from there. Notice the Assets column in my details pane confirms these items are from the Asset Group I created in step 1.

    image.png

     

    Then, you can go into Table view and select the Condition option vs Simple. You can then Transpose, and click the Column dropdown to add each signal into the table. I am simply adding the "Last Value" for this example; I hard-coded your values into the signals for consistency. Click the Headers dropdown to get rid of the date range of the condition (unless you'd like to keep it).

    image.png

     

    Finally, you can select the Asset Icon at the top, and select the Asset Group you've created ("Example" in my case). This will scale out to any other assets in the same level of the tree.

    image.png

     

    The final result looks something like this after editing my column names to remove "Last"

    image.png

     

    Note that the far left blank column will not appear if inserted into an Organizer.

    Also note I've only demonstrated this for raw signals from an asset tree, but this method still works with Scorecard metrics to allow color thresholds. If you want to do this with scorecard metrics, just create the metrics referencing one of your "asset's" associated signals, BSW Water (%) for example, and then scale across the other "assets" as described above.

     

    Hopefully this helps get you the look you were hoping to create! Please let me know if you have any questions with any of the above.

  13. Hi Vladimir,

     

    There are several ways to apply this analysis to other assets. The first and easiest method is working with an Asset Framework or an Asset Group (if an existing framework is not available). All previous calculations would need to be created using the data in the Asset Group, but once done, you'll be able to seamlessly swap the entire analysis over to your other assets (Trains, in this case). Asset Groups allow you to create your own framework either manually or by utilizing existing frameworks. This video does a great job of showing the creation of Asset Groups and the scaling of calculations across other assets. Note that you need to be at least on version R52 to take advantage of Asset Groups.

     

    Another easy approach is to consolidate your analysis within 1-3 formulas (depending on what you really want to see). Generally speaking, this analysis could fall within ONE formula, but you may want more formulas if you care about seeing things like your "Tr1C1 no input, no output" condition across your other trains. I'll provide you with a way to consolidate this workflow in one formula, but feel free to break it into multiple if helpful. The reason this could be easier is that you can simply duplicate a single formula and manually reassign the variables to the respective variables of your other Train.

     

     

    Some useful things to note before viewing the formula:

    • Formulas can make use of variable definitions... You'll notice within each step, except for the very last step, I'm assigning arbitrary/descriptive variables to each line, so that I can reference these variables later in the formula. These variables could be an entire condition, or a signal / scalar calculation
    • In the formula comments (denoted by the double slashes: //), I note certain things could be different for your case. You can access the underlying Formula to any point and click tools you use (Value Searches, Signal from Conditions, etc) by clicking the item's Item Properties (in the Details Pane) and scrolling down to Formula. Do this for your Tr1 C1 rate of change, monthly periodic condition, and average monthly rate calculations to see what specific parameters are. This Seeq Knowledge Base article has an example of viewing the underlying formula within an item's Item Properties
    • The only RAW SIGNALS needed in this formula are: $valveTag1, $valveTag2, $productionTag, and $tr1Signal... The rest of the variables are assigned internally to the formula

     

    // Steps 1, 2, 3, and 4
    // Note 'Closed' could be different for you if your valve tags are strings... 
    // If your valve tags are binary (0 or 1), it would be "$valveTag == 0" (or 1)
    
    $bothValvesClosed = ($valveTag1 ~= 'Closed' and $valveTag2 ~= 'Closed').removeShorterThan(6h)
    
    
    // Step 5
    
    $valvesClosedProductionHigh = $bothValvesClosed and $productionTag > 10000
    
    
    // Step 6 ASSUMING YOU USED SIGNAL FROM CONDITION TO CALCULATE RATE
    // Note the "h" and ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
    
    $tr1RateofChange = $tr1Signal.aggregate(rate("h"), $valvesClosedProductionHigh.setMaximumDuration(40h), middleKey(), 40h)
    
    
    // Step 7
    // $months could also be different in your case
    // Note my final output has no variable definition. This is to ensure THIS is the true output of my formula
    // Again, the ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
    
    $months = months("US/Eastern")
    $tr1RateofChange.aggregate(average(), $months.setMaximumDuration(40h), middleKey(), 40h)

     

    Hopefully this makes sense and at the very least provides you with an idea of how you can consolidate calculations within Formula for easier duplication of complex, multi-step calculations.

     

    Please let me know if you have any questions.

     

    Emilio Conde

    Analytics Engineer

     

    • Like 1
  14. Hi Mohammed,

     

    As John Cox stated in your previous post, there are a number of methods that can be used to remove or identify peaks. In your trend, it's not directly obvious what exactly you are defining as a peak and thus wanting to remove/identify. The image at the bottom contains various methods that you can explore.

     

    In order for us to provide a specific method to identify or remove peaks in your signal, you would need to provide us with additional information on what exactly you define as a peak (maybe by circling which peaks on your image you want to identify/remove). If you'd rather work on this over a meeting, you can always join one of our daily office hour sessions where a Seeq Analytics Engineer can help you 1:1.

    Cleanse.JPG

     

    Emilio Conde

    Analytics Engineer

  15. You may have noticed that pushed data does not have a red trash icon at the bottom of its Item Properties. There's a simple way to move this (and any other) data to the trash through Data Lab. Read below.

    Pushed Data

    Screen Shot 2021-09-14 at 12.30.28 PM.png

    Normal Calculated Seeq Signal

    Screen Shot 2021-09-14 at 12.34.15 PM.png

     

    Moving data to trash through SDL

    Step 1: Identify data of interest and store in a dataframe

    For this example, I want to move all of the items on this worksheet to trash, so I can use spy.search to store them in a dataframe.

    remove_Data = spy.search('worksheet URL')

    Screen Shot 2021-09-14 at 12.48.28 PM.png

     

    Step 2: Create an 'Archived' column in the dataframe and set to 'true'

    remove_Data['Archived'] = 'true'

    Screen Shot 2021-09-14 at 12.52.18 PM.png

     

    Step 3: Push this dataframe back into Seeq as metadata

    spy.push(metadata = remove_Data)

    Screen Shot 2021-09-14 at 12.57.12 PM.png

     

    The associated data should now be in the trash and no longer searchable in the Data pane.

    Screen Shot 2021-09-14 at 12.59.47 PM.png

     

  16. Hi Yanmin,

     

    Unfortunately, Seeq doesn't currently offer any contour map features; however, I've listed some options below to address your use case. In addition, while not directly applicable to what you're trying to achieve (a contour across 9 separate tags), I recommend looking into the Density Plot feature available in Seeq, as you may find it interesting.

     

    Option 1: Create a simple scorecard for each Well and assemble them in an organizer in a neater format.

    It seems that you're using a 3x3 Organizer table, one cell for each Well. You could use only one table cell each to get them to better fit, emulating a single table. Something like below.

    Screen Shot 2021-09-09 at 9.37.13 AM.png

    I only used "Well 02" to demonstrate the layout, but the idea is your "mapping" will be on the left to understand what you're looking at on the right.

    To go about this, create a worksheet for each Well. Create a metric as you have (with thresholds specified) and go to Table view. Under Headers, select None.

    Screen Shot 2021-09-09 at 9.42.55 AM.png

     

    Under Columns:
    If you are creating the left table, only have Name checked. If you are creating the right table, only have Metric Value checked.

    Screen Shot 2021-09-09 at 9.44.32 AM.png

     

    Insert each into a single cell of a table in Organizer (I used a 1x2 table). For assembling adjacent columns, you'll want to make sure you insert each worksheet directly next to the other (no spaces between). For going to the next row, you'll want to make sure to use SHIFT+ENTER instead of a simple ENTER.

    Something like this should be the result.

    Screen Shot 2021-09-09 at 9.50.42 AM.png

    To remove the space between, simply click each individual cell (metric) and click Toggle Margin. After completing this for each metric, the table should resemble the first one I posted.

    Screen Shot 2021-09-09 at 9.52.21 AM.png

     

    You can resize the 1x2 Organizer table by clicking Table properties. For this example, I specified a width of 450 to narrow up the two columns.

    Screen Shot 2021-09-09 at 9.54.24 AM.png

    Screen Shot 2021-09-09 at 9.55.26 AM.png

     

     

    Option 2: Create a Treemap.

    This method will require that the Wells be a part of an asset group. If not already configured, this can be done within Seeq as of R52. 

    This method may or may not give you the information you're looking for. Before considering this option, please be sure to read about Treemaps more on our Knowledge Base. Depending on the Priority colors and conditions you specify, your Treemap would look something like this. Note there is no way to change or specify the orientation within the normal user interface in Seeq (i.e. you can't easily specify a 3x3 layout).

    Screen Shot 2021-09-09 at 10.15.00 AM.png

     

    I hope this helps!

     

  17. We often would like to summarize data in a table to reflect something similar to below:

     Screen Shot 2021-09-01 at 3.17.00 PM.png

    There are a couple of ways to achieve this in Seeq. In this example, we'll explore using Simple Table view to get this result. If you're interested instead in using Conditional Scorecard Metrics, take a look at this Seeq.org post.

     

    Step 1:

    Go to Table view & select Simple.

    Under Columns, ensure Average, Last Value, and Name are selected 

    Screen Shot 2021-09-02 at 8.35.56 AM.png

     

    Step 2:

    Rearrange & rename the Headers; Last can be moved to the 2nd column and renamed to Current. Avg (now the 3rd column) can be renamed to 1 hr avg.

    Screen Shot 2021-09-02 at 8.41.55 AM.png

     

    Step 3:

    Copy the link and paste it into an organizer topic. Create a new Date Range named 1 hr (with a duration of 1 hr) to assign to your table.

    Screen Shot 2021-09-02 at 8.45.07 AM.png

    Screen Shot 2021-09-02 at 8.48.15 AM.png

    After clicking the table & Update Seeq Content:

    Screen Shot 2021-09-02 at 8.50.20 AM.png

     

    Step 4: This can be done on the same worksheet or a new worksheet. I will create a new worksheet.

    Back in the Simple table, remove the Name column so only Average is selected.

    Screen Shot 2021-09-02 at 9.07.00 AM.png

     

    Rename this column to 24 hr avg.

    Screen Shot 2021-09-02 at 9.08.08 AM.png

     

    Step 5:

    Paste this worksheet into your organizer next to your other table. Create another Date Range named 24 hr (with a duration of 24 hr) to assign to this newly added table (similar to Step 3).

    Screen Shot 2021-09-02 at 9.12.03 AM.png

     

    Step 6: 

    Click each table, then click the Toggle Margin button. When complete, the table should look like one single table. To update the date range for the entire table, simply click the "Step to current time" button next to Fixed Date Ranges.

    Screen Shot 2021-09-02 at 9.16.28 AM.png

    Screen Shot 2021-09-02 at 9.19.24 AM.png

     

     

     

  18. Timezone mismatches can often arise when using the .push() function with a dataframe. To ensure the dataframe's timezone matches the source workbench, we can use the pandas tz_localize() function. See below for an example of encountering and addressing this issue while pushing a csv dataset into Workbench.

     

    Step 1:

    Complete imports

    Screen Shot 2021-08-27 at 3.58.40 PM.png

     

    Step 2:

    Load in the csv file as a dataframe. When you want to push data, the dataframe must have an index with a datetime data type; that's why we used the parse_dates and index_col arguments of pandas.read_csv(). Note my csv file's date/time column is named TIME(unitless), hence the arguments within parse_dates and index_col.

    Screen Shot 2021-08-27 at 4.00.29 PM.png

     

    *** Note the dates in Out[5] all are -06:00***

     

    If I simply moved forward to .push(), I’d see the following results:

    Screen Shot 2021-08-27 at 4.05.08 PM.png

    Screen Shot 2021-08-27 at 8.36.32 AM.png

     

    The original data’s dates are not properly aligned with my worksheet, which is in US/Eastern. Instead, I should use the tz_localize() function on my index before pushing. See Step 3.

     

    Step 3:

    Use the tz_localize() function on your index, first to remove any existing timezone from the dataframe, then again to assign the timezone of interest.

    Screen Shot 2021-08-27 at 4.15.39 PM.png

     

    *** Note the dates in Out[8] now are all -04:00***

     

    Finally, I can proceed to push the data into Seeq. You can now see that the timestamps of my data in Workbench match their original timestamps.

    Screen Shot 2021-08-27 at 4.21.48 PM.png

    Screen Shot 2021-08-27 at 8.57.19 AM.png
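
    Putting the screenshots together, the full flow might look like the following sketch (the csv file name is an assumption; the column name comes from my file above):

    import pandas as pd
    from seeq import spy
    
    # Step 2: load the csv with a datetime index
    df = pd.read_csv('data.csv', parse_dates=['TIME(unitless)'], index_col='TIME(unitless)')
    
    # Step 3: strip the parsed timezone, then assign the timezone of interest
    df.index = df.index.tz_localize(None).tz_localize('US/Eastern')
    
    # Push the data into Seeq
    push_results = spy.push(data=df)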

     

     

     

     

     
