Emilio Conde

Seeq Team
  • Posts: 31 · Days Won: 8 · Last won the day on August 23, 2023

Community Answers

  1. I agree with Nuraisyah on being able to utilize Custom Labels, though depending on the scale, this may not be very efficient. Also, if you move the trend items around to different lanes, the Custom Labels will not follow the signals. Another approach you can consider, directly addressing your question ("is the only other option to include the percentage type in the signal description?"), is to add the unit in parentheses directly to the item's name when creating it in Workbench or in Data Lab. It's not the most ideal approach, but at least the unit is directly assigned to the item and is very findable/obvious. An example of this is shown in the spy.push example notebook, but as I mentioned, Data Lab is not required; you can add essentially anything to the Name of an item even in Workbench.
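As a minimal sketch of the Data Lab route (assuming Seeq Data Lab, where spy is pre-imported; the signal name and sample values below are made up for illustration):

```python
import pandas as pd

# The unit is carried in parentheses inside the signal's Name
data = pd.DataFrame(
    {'Reactor Conversion (%)': [71.2, 69.8, 70.5]},  # hypothetical signal name
    index=pd.to_datetime(['2023-01-01', '2023-01-02', '2023-01-03']),
)

# spy.push creates one signal per column, named after the column,
# so the "(%)" lands directly in the item's Name
push_results = spy.push(data=data)
```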
  2. Thanks for the context, John. Unfortunately, Organizer currently only supports basic HTML, not the CSS / HTML5 generated by the .highlight_max() or .highlight_null() operators. With that said, I recommend you send us a support ticket here requesting this functionality so that we can link it to an existing feature request and automatically notify you once this capability is implemented in a future version of Seeq. Working with ChatGPT, I was able to get a basic-HTML equivalent applied to the df in your example, and have verified it pushes to Organizer as expected. Of course, another approach could be to create an image out of your HTML and push the image instead; other posts on this forum discuss that functionality. See the working example below:

```python
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup

organizer_id = '7AC3EAA0-6429-46D5-88E3-57ADC6AA1ED7'

df = pd.DataFrame({
    "A": [0, -5, 12, -4, 3],
    "B": [12.24, 3.14, 2.71, -3.14, np.nan],
    "C": [0.5, 1.2, 0.3, 1.9, 2.2],
    "D": [2000, np.nan, 1000, 7000, 5000]
})

# Basic HTML to highlight a cell
def highlight_cell_bg(val, color=None):
    # If there's a color specified, apply it as the background
    if color:
        return 'background-color: {}'.format(color)
    return ""

# Highlight the max value in a column yellow
def highlight_max_bg(series):
    # Remove non-numeric values and compute max
    numeric_vals = series[pd.to_numeric(series, errors='coerce').notnull()]
    if not numeric_vals.empty:
        max_val = numeric_vals.max()
        return [highlight_cell_bg(val, 'yellow') if val == max_val else "" for val in series]
    return [""] * len(series)

# Highlight Missing row values red
def highlight_missing_bg(val):
    if val == 'Missing':
        return highlight_cell_bg(val, 'red')
    return ""

# Replace NaN values with "Missing" and highlight them
df.replace({np.nan: 'Missing'}, inplace=True)
missing_bg = df.applymap(highlight_missing_bg)

# Highlight the maximum values in each column
max_bg = df.apply(highlight_max_bg)

# Merge the two background styles
final_bg = missing_bg.where(missing_bg != "", max_bg)

# Convert DataFrame to HTML without additional formatting
raw_html = df.to_html(escape=False, header=True, index=False)

# Parse the HTML using BeautifulSoup
soup = BeautifulSoup(raw_html, 'html.parser')

# Iterate through each cell in the table and apply styles
for row in soup.findAll("tr"):
    for col_name, cell in zip(df.columns, row.findAll("td")):
        if cell.text in df.columns:
            continue  # Skip headers
        # Convert dataframe values to string for comparison
        idx = df[col_name].astype(str).tolist().index(cell.text)
        style = final_bg[col_name].iloc[idx]
        if style:
            cell["style"] = style

# Convert the modified HTML back to a string
html = str(soup)

# Find the topic you want
topic_search = spy.workbooks.search({'ID': organizer_id})

# Pull in the topic
topic = spy.workbooks.pull(topic_search, include_referenced_workbooks=False)[0]

# Create a new sheet
topic.document('New Sheet')

# Modify the html of the new sheet with the styled df html
topic.worksheets[-1].html = html

# Push your changes
organizer_push = spy.workbooks.push(topic)
```
  3. I see... What version of Seeq is your server on? And, for my own testing, what styling are you applying to df? My simple test just created clickable hyperlinked text in df.
  4. Hi John, have you tried something similar to the below?

```python
# Convert styled df to html
html = df.style.to_html()

# Find the topic you want
topic_search = spy.workbooks.search({'ID': organizer_id})

# Pull in the topic
topic = spy.workbooks.pull(topic_search, include_referenced_workbooks=False)[0]

# Create a new sheet
topic.document('Sheet Name')

# Modify the html of the new sheet with the styled df html
topic.worksheets[-1].html = html

# Push your changes
organizer_push = spy.workbooks.push(topic)
```
  5. Hello, the Path and Asset should be returned if these properties exist for the items being searched; all_properties=True is not required to get them.
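For example, a quick sketch against the built-in Example asset tree (the item name and path here are assumed from the demo data):

```python
# Path and Asset come back as columns whenever the matched items live in an asset tree
results = spy.search({'Name': 'Temperature', 'Path': 'Example >> Cooling Tower 1'})
print(results[['Name', 'Path', 'Asset']])
```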
  6. Fantastic, @Manoel Janio Borralho dos Santos! Great job using the SPy documentation examples as a reference for this use case. Glad this worked out well for you.
  7. @Manoel Janio Borralho dos Santos my apologies, I forgot to mention that you need to upgrade your SPy to the latest version for this approach. Run spy.upgrade() in a cell, then restart your kernel and try again. The template will need to be recreated on the latest version of SPy; then see if this newly created template shows the images in the document.
  8. One thing to add to this is that SPy has been upgraded since this post was made to allow more options for templatizing content, including Organizers with images! Take a look at the documentation for Templates with images here, and the base documentation for Templates here. This offers a different approach to replacing images in Organizer that doesn't require dealing directly with the HTML.
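As a rough sketch of that Templates workflow (this is my paraphrase of the linked documentation, not verbatim; the file name and parameter key below are hypothetical, so check the docs for the exact syntax):

```python
# Load a previously saved Organizer as a template
workbooks = spy.workbooks.load('My Saved Topic.zip', as_template_with_label='My Template')
template = workbooks[0]

# Inspect the template parameters, which include image placeholders
print(template.parameters)

# Supply replacement values (hypothetical parameter name and image file)
template.parameters = {'My Image Placeholder': 'new_image.png'}

# Push a fresh copy of the Organizer with the new image in place
spy.workbooks.push(workbooks)
```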
  9. There are times when you may need to calculate a standard deviation across a time range using the data within a number of signals. Consider the below example. When a calculation like this is meaningful/important, the straightforward options in Seeq may not be mathematically representative of a comprehensive standard deviation. These straightforward options include:

  • Take a daily standard deviation for each signal, then average these standard deviations.
  • Take a daily standard deviation for each signal, then take the standard deviation of the standard deviations.
  • Create a real-time standard deviation signal (using stddev($signal1, $signal2, ..., $signalN)), then take the daily average or standard deviation of this signal.

While straightforward options may be OK for many statistics (max of maxes, average of averages, sum of totals, etc.), a time-weighted standard deviation across multiple signals presents an interesting challenge. This post details methods to achieve this type of calculation by time-warping the data from each signal, then combining the individually warped signals into a single signal. Similar methods are also discussed in two other seeq.org posts.

Two different methods that arrive at the same outcome will be explored. Both methods share the same Steps 1 and 2.

Step 1: Gather Signals of Interest

This example considers 4 signals. The same methods can be used for more signals, but note that implementing this solution programmatically via Data Lab may be more efficient when considering a high number of signals (>20-30); a sketch of that route follows at the end of this post.

Step 2: Create Important Scalar Constants and Condition

  • Number of Signals: the number of signals to be considered; 4 in this case.
  • Un-Warped Interval: the interval over which you are interested in calculating a standard deviation (I am interested in a daily standard deviation, so I entered 1d).
  • Warped Interval: a ratio calculation of Un-Warped Interval / Number of Signals. This is the new time range for the time-warped signals; i.e., given 4 signals each holding a day's worth of data, each signal's day of data will be warped into a 6-hour interval.
  • Un-Warped Periods: a condition with capsules spanning the original periods of interest: periods($unwarped_interval)

Method 1: Create ONE Time-Shift Signal, and Move the Output Warped Signals

The Time Shift Signal is used as a counter to condense the data in the period of interest (1 day in this example) down to the warped interval (6 hours in this example):

0 - timeSince($unwarped_period, 1s)*(1 - 1/$num_of_signals)

The next step is to use this Time Shift Signal to move the data within each signal. Note the integer in this formula that steps with each signal it is applied to (0 for the first signal, 1 for the second, and so on); details can be viewed in the screenshots.

$area_a.move($time_shift_signal, $unwarped_interval).setMaxInterpolation($warped_interval).move(0*$warped_interval)

The last step is to combine each of these warped signals together. We now have a Combined Output that can be used as the input to a Daily Standard Deviation that represents the time-weighted standard deviation across all 4 signals within that day.

Method 2: Create a Time-Shift Signal per Signal, with No Need to Move the Output Warped Signals

This method uses 4 time-shift signals, one per signal. Note there is also an integer in this formula that steps with each signal it is applied to; details can be viewed in the screenshot. These signals take care of the data placement, whereas the data placement was handled with .move(N*$warped_interval) above.

0*$warped_interval - timeSince($unwarped_period, 1s)*(1 - 1/$num_of_signals)

We can then follow Method 1 to use the time-shift signals to arrange our signals. We just need to be careful to use each signal's own time-shift signal, as opposed to the single time-shift signal created in Method 1. As mentioned above, the .move(N*$warped_interval) is no longer needed at the end of this formula.

$area_a.move($time_shift_1, $unwarped_interval).setMaxInterpolation($warped_interval)

The last step is to combine each of these warped signals together, similar to Method 1.

Comparing Method 1 and Method 2 & Calculation Outputs

The below screenshot shows how Methods 1 & 2 arrive at the same output. Note the difference in calculated values: the methods reviewed in this post most closely capture the true time-weighted standard deviation per day across the 4 signals.

Caveats and Final Thoughts

While this method is the most mathematically correct, there is a slight loss of data at the edges: when combining the data in the final step, the beginning of $signal_2 falls at the end of $signal_1, and so on. There are methods that could address this, but the loss of samples should be negligible to the overall standard deviation calculation. This method is also heavy on processing, especially as the input signals' data resolution and the overall number of signals increase. It is best suited to cases where real-time results are not of high importance, such as when the calculation outputs feed an Organizer that displays the previous day's/week's/etc. results.
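As mentioned in Step 1, here is a rough Data Lab sketch of how the Method 1 formulas could be generated and pushed with SPy for N signals. All IDs are placeholders for the items created in Steps 1 and 2, and combineWith() is one way to perform the final combination step:

```python
import pandas as pd

# Placeholder IDs for the input signals and the items created in Step 2
signal_ids = ['SIGNAL_ID_1', 'SIGNAL_ID_2', 'SIGNAL_ID_3', 'SIGNAL_ID_4']
time_shift_id = 'TIME_SHIFT_SIGNAL_ID'
unwarped_interval_id = 'UNWARPED_INTERVAL_ID'
warped_interval_id = 'WARPED_INTERVAL_ID'

# One Method 1 formula per signal; the integer n steps with each signal
metadata = []
for n, signal_id in enumerate(signal_ids):
    metadata.append({
        'Name': f'Warped Signal {n + 1}',
        'Formula': f'$signal.move($timeShift, $unwarpedInterval)'
                   f'.setMaxInterpolation($warpedInterval).move({n}*$warpedInterval)',
        'Formula Parameters': {
            '$signal': signal_id,
            '$timeShift': time_shift_id,
            '$unwarpedInterval': unwarped_interval_id,
            '$warpedInterval': warped_interval_id,
        },
    })
push_results = spy.push(metadata=pd.DataFrame(metadata), workbook='Time-Warped Std Dev')

# Combine the warped signals into the single Combined Output
combined = {
    'Name': 'Combined Output',
    'Formula': 'combineWith(' + ', '.join(f'$s{n}' for n in range(len(signal_ids))) + ')',
    'Formula Parameters': {
        f'$s{n}': push_results[push_results['Name'] == f'Warped Signal {n + 1}'].iloc[0]['ID']
        for n in range(len(signal_ids))
    },
}
spy.push(metadata=pd.DataFrame([combined]), workbook='Time-Warped Std Dev')
```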
  10. Ruby and I implemented a solution that avoids the .transform() function. Details can be found here.
  11. Sometimes it's beneficial to fill these gaps with a prior average that is dynamic. The above post details how to fill the gap with a static timeframe, such as the 10 minutes before the gap. But what if we wanted to do something different, such as take the gap duration, a dynamic value, and fill the gap with the prior average over that same duration? Below details how to do this. There is a similar approach that leverages the .transform() Seeq function here, but I've provided an alternative method that avoids the usage of .transform(). Of course, this can all be input to a single formula (a sketch of that follows the steps below), but each step is broken out here.

Solution:

1. Identify data gaps and calculate gap durations.
  • .intersect(past()) guarantees we are not considering the future for data validity.
  • The maximum capsule duration should be carefully considered depending on the maximum gap duration you want to fill.
  • Placing the timestamp across the capsule durations is important.

2. Create an arbitrary flat signal, and move this signal backwards (negatively) based on the gap-durations signal.
  • The timeframe specified in .toSignal(1min) influences the resolution of representing the gap duration; 1min should suffice for most cases.
  • It's important to include a minus in front of the gap duration to indicate we are moving backwards in time.
  • The 24h dictates the maximum duration allowed to move, which depends on the expected gap durations.

3. Identify the new, moved signal, and join the capsules from the new condition to the original signal-gaps condition.
  • Again, the maximum capsule duration needs to be carefully considered when joining the two conditions.

4. Calculate the average of the gappy signal across the joined condition, then splice this average into the original gappy signal.
  • Again, specifying the timestamp to be placed across the capsule durations is important here.
  • Be sure to splice the average across the original signal gaps.
  • Including .validValues() ensures we interpolate across our original gappy signal and the replacement average signal.
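For reference, here is my rough single-formula reading of the four steps, pushed via SPy. The specific function picks (inverse(), durationKey(), join(), splice()) and the 40h/24h maximum durations are assumptions to be tuned against the screenshots and your expected gap lengths; the signal ID is a placeholder:

```python
import pandas as pd

# Steps 1-4 as one Seeq Formula; $signal is the gappy input signal
formula = """
$gaps     = $signal.isValid().inverse().removeLongerThan(40h).intersect(past())
$gapDur   = $gaps.toSignal('Duration', durationKey())
$marker   = 1.toSignal(1min).move(0 - $gapDur, 24h)
$joined   = $marker.isValid().removeLongerThan(40h).join($gaps, 40h)
$priorAvg = $signal.aggregate(average(), $joined, durationKey())
$signal.splice($priorAvg, $gaps).validValues()
"""

spy.push(metadata=pd.DataFrame([{
    'Name': 'Gap-Filled Signal',
    'Formula': formula,
    'Formula Parameters': {'$signal': 'GAPPY_SIGNAL_ID'},  # placeholder ID
}]))
```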
  12. Hi Tranquil, the method I described above is a way to find assets that contain a signal of interest, i.e., by performing a search for the signal name within the top level of each asset. However, I don't believe this manual identification is necessary to accomplish what you want. For example, searching for Irregular Data shows that only the Example >> Cooling Tower 2 >> Area E asset has that signal. But I don't need to run these searches to manually identify which assets have which signals; I can jump immediately into Asset Groups. Clicking Add (as shown in the screenshots), then repeating for Cooling Tower 2, populates the Asset Group, and I can then add a condition based on Irregular Data. After saving the Asset Group and searching again for Irregular Data, this time within the newly created Asset Group, I can see that the specified condition was only created for Area E, because Area E was the only asset that contained the signal of interest (in this case, Irregular Data). If more assets also had the Irregular Data signal, conditions would also have been created for those assets referencing their Irregular Data signal. Hopefully this helps. I encourage you to take a look at some of the videos we have for Asset Groups on our YouTube page; specifically, this video discusses the process I used above, where I populate an Asset Group using other preexisting asset trees as a starting point.
  13. Do you mean exporting the values of the asset group items? If so, you can make use of the Data tab to add all variables to your display, and then the Export to Excel tool to export the values. More information on searching within an asset tree/group can be found here, and information on exporting data to Excel (i.e., using the Export to Excel tool) can be found here. See the screenshots below for what this process may look like. Please let me know if this helps, or if this isn't what you meant when asking about exporting the asset group subitems.
  14. Hi Tranquil, do you mind supplying a bit more information on your question, and possibly some screenshots? If by resources you mean assets, with some assets containing a specific signal, then it may not be necessary to write any script. If that is indeed the case, Asset Groups could be utilized to scale the creation of a condition wherever the signal(s) of interest exist.
  15. The third example in your documentation image seems to be the equivalent... it's pretty evident why .setProperty() was enhanced! Until your server is upgraded, you'll likely have to use that $condition.transform() method to add signal-referenced statistical property values to capsules.
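For anyone on an older server, a sketch of that transform pattern as I understand it (the condition and signal IDs are placeholders, and 'Max Value' is just an example property name):

```python
import pandas as pd

spy.push(metadata=pd.DataFrame([{
    'Name': 'Condition with Max Property',
    # Each capsule receives a property computed from the signal over that capsule
    'Formula': "$condition.transform($capsule -> "
               "$capsule.setProperty('Max Value', $signal.maxValue($capsule)))",
    'Formula Parameters': {
        '$condition': 'CONDITION_ID',  # placeholder
        '$signal': 'SIGNAL_ID',        # placeholder
    },
}]))
```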