
Kristopher Wiggins

Seeq Team

  1. Hi Dharun, It looks like you won't be able to use the folium package in Seeq Data Lab due to Content Security Policy issues. Feel free to read more about the issue at https://github.com/nteract/hydrogen/issues/1069 . I'd recommend using another geospatial library (see the sketch below), since workarounds to this issue would probably introduce security vulnerabilities.
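     As an illustration only (not from the original reply), below is a minimal sketch of one alternative, plotly, which is already used elsewhere in these posts and renders inline in Seeq Data Lab notebooks. The site names and coordinates are made up for the example.

        import pandas as pd
        import plotly.express as px

        # Hypothetical site locations purely for illustration
        sites = pd.DataFrame({
            'Site': ['Plant A', 'Plant B'],
            'lat': [29.76, 32.78],
            'lon': [-95.37, -96.80]
        })

        # Plot the sites on a world map; the figure displays inline in the notebook
        fig = px.scatter_geo(sites, lat='lat', lon='lon', text='Site')
        fig.show()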
  2. Hi Renzo. Feel free to come to our Office Hours where we can troubleshoot this particular issue, but one thing I'd recommend checking is whether the signal you pushed data to is scoped to a particular workbook. If it is, you'll need to modify your spy.search call to include that workbook in order to access the data (see the sketch below).
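     For reference, a minimal sketch of such a search; the signal and workbook names here are placeholders, not from the original thread.

        from seeq import spy

        # Placeholder names: replace with the signal you pushed and the workbook it is scoped to
        results = spy.search({'Name': 'My Pushed Signal'},
                             workbook='My Push Workbook')  # workbook name or ID the signal is scoped to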
  3. Hi Stephen, At this time it is not possible in a supported way via SPy to manipulate these aspects of your display. The worksheet properties that can be modified are display_items, display_range, scatter_plot_series (items displayed in scatter plot), scorecard_date_display, scorecard_date_format (for pre-51 Seeq versions), table_date_display, table_date_format, table_mode (R52+ Seeq), time_zone, and the view (Table, Trend, Treemap, etc.). There are ways to manipulate other worksheet properties by editing the workstep data but those methods may no longer work in future Seeq versions if we change things on the backend as we introduce new features. If you would like assistance in seeing how these particular properties you mentioned can be changed, please send a request to support@seeq.com.
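     To illustrate the supported route, below is a rough sketch of changing a few of the properties listed above via spy.workbooks; the workbook name is a placeholder and the exact value types accepted by each setter may vary between SPy versions.

        import pandas as pd
        from seeq import spy

        # Pull the workbook that contains the worksheet to modify (placeholder name)
        workbooks = spy.workbooks.pull(spy.workbooks.search({'Name': 'My Analysis'}))
        worksheet = workbooks[0].worksheets[0]

        # Adjust supported worksheet properties
        worksheet.view = 'Table'            # e.g. 'Trend', 'Table', 'Treemap'
        worksheet.time_zone = 'US/Central'
        worksheet.display_range = {'Start': pd.Timestamp('2021-09-01', tz='UTC'),
                                   'End': pd.Timestamp('2021-09-08', tz='UTC')}

        # Push the modified workbook back to Seeq
        spy.workbooks.push(workbooks)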
  4. I tried this on R54.1.4 and came across a similar error but fixed it by appending .toString() to $seq. Below is the updated formula code.

        //creates a condition for 1 minute of time encompassing 30 seconds on either side of a transition
        $Transition = $CompressorStage.toCondition().beforeStart(0.5min).afterStart(1min)

        //Assigns the mode on both sides of the step change to a concatenated string that is a property of the capsule.
        $Transition
        .transform( $cap -> $cap.setProperty('StartModeEndMode',
            $CompressorStage.toCondition()
            .toGroup($cap, CAPSULEBOUNDARY.INTERSECT)
            .reduce("", ($seq, $stepCap) -> $seq.toString() + $stepCap.getProperty('Value')
                //Changes the format of the stage names for clearer delineation as a property in the capsules pane.
                .replace('STAGE 1','-STAGE1-').replace('STAGE 2','-STAGE2-').replace('TRANSITION','-TRANSITION-').replace('OFF','-OFF-')
            )))
  5. Since it's just HTML, you can assign it to the worksheet's HTML directly. So you'd only need to follow Step 1 of pulling the workbook's HTML and a portion of Step 3 in the above example. Step 3 in your case would just be "ws_to_update.html = table_html" and then you'd push the workbook. If you're looking to repeatedly update a table in Organizer with new information while retaining the other aspects of the Organizer, then you'd need to find a way to capture that table using the re module. Tables in HTML have a table tag (i.e. <table> .... </table>), so you can create a regular expression to search based on this. For additional regular expression help, have a look at https://regex101.com/ to learn more and test different expressions. Once you have your regular expression finalized, you can adapt Step 3 to account for that instead (a rough sketch follows below).
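     As a rough sketch only, assuming (as in the steps above) that ws_to_update and a new table_html string already exist and that the document contains a single table:

        import re

        # Replace the existing <table>...</table> block with the new table HTML.
        # A lambda is used for the replacement so any backslashes in table_html
        # are not interpreted as regex group references.
        new_html = re.sub(r"<table.*?</table>", lambda m: table_html,
                          ws_to_update.html, count=1, flags=re.DOTALL)

        ws_to_update.html = new_html
        spy.workbooks.push(pulled_workbooks)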
  6. The SPy library supports the creation of asset trees through the spy.assets module. These asset trees can include various types of items such as signals and conditions, calculations like scorecard metrics, and can be used to create numerous Workbench Analysis Worksheets and Organizer Topic Documents. One question that commonly comes up when making these trees is how to reference attributes that are located in other parts of the tree.

     Roll-Ups
     The first example of referencing other items in the tree is through roll-ups. These types of calculations "roll up" attributes from levels below where the roll-up calculation is being performed, whether the level is directly beneath or multiple levels below. These attributes are then combined using logic you provide. For signals and scalars, the options are Average, Maximum, Minimum, Range, Sum and Multiply. For conditions, the options are Union, Intersect, Counts, Count Overlaps, and Combine With. Below are examples where .Cities() is a component beneath the current class. All attributes and assets beneath the Cities component will be searched through and included in the roll-up based on the criteria given in the pick function. Here, we're filtering based on Name, but any property such as Type can be supplied. Note Seeq Workbench's search mechanism is used here, so wildcards and regular expressions can be included. Lastly, we specify the kind of roll-up we'd like to perform.

        @Asset.Attribute()
        def Regional_Compressor_Running_Poorly(self, metadata):
            return self.Cities().pick({'Name': 'Compressor Running Poorly'}).roll_up('union')

        @Asset.Attribute()
        def Regional_Total_Energy_Consumption(self, metadata):
            return self.Cities().pick({'Name': 'Total Daily Energy Consumption'}).roll_up('sum')

     Child Attributes
     The second example looks at how to reference child attributes without rolling them up. Maybe there's a particular attribute that needs to be included in a calculation used at a higher level in the asset tree. For this scenario, the pick function can be used once again. Rather than do a roll-up, we'll just index the particular item we want. Most of the time the goal is to reference a specific item using this method, so the criteria passed into the pick function should be specific enough to find one item and the index will always be 0. One property that may be of interest for this is Template, where you can specify the particular class that contains the wanted item.

        @Asset.Attribute()
        def Child_Power_Low(self, metadata):
            child_power = self.Cities().pick({"Name": "Compressor Power", "Asset": "/Area (A|C|D)/"})[0]
            return {
                'Name': "Child Power Low",
                'Type': "Condition",
                "Formula": "$child_power < 5",
                "Formula Parameters": {"child_power": child_power}
            }

     Parent Attributes
     The next example looks at how we can reference parent attributes in calculations that are beneath it. Rather than reference a particular component, we'll use the parent. From there we'll include the attribute we want to reference from our parent asset. If looking to reference attributes at higher levels of the tree, chain multiple ".parent" calls. For example, "self.parent.parent" will look two levels above the current level.

        @Asset.Attribute()
        def Parent_Temp_Multiplied(self, metadata):
            parent_temp = self.parent.Temperature()
            return {
                'Name': "Parent Temp Multiplied",
                'Type': "Signal",
                "Formula": "$parent_temp * 10",
                "Formula Parameters": {"parent_temp": parent_temp}
            }

     Advanced Selection
     In this example, we'll look at how we can combine the previously mentioned options to find items located in other parts of the tree. Here, we're looking to reference items located at the same level of the tree but in another class, so they're not located beneath the same asset. We have two separate assets beneath the regions, Temperature Items and Power Items. The Temperature Item class has a calculation called Max Temperature 1 When Compressor Is On which references an attribute beneath its corresponding Power Item class. To fetch this attribute, we go up a level to the parent, navigate down to the Power_Items and then pick that attribute.

        class Region(Asset):
            @Asset.Component()
            def Temperature_Items(self, metadata):
                return self.build_components(template=Temperature_Item, metadata=metadata, column_name='Region Temp')

            @Asset.Component()
            def Power_Items(self, metadata):
                return self.build_components(template=Power_Item, metadata=metadata, column_name='Region Power')

        class Power_Item(Asset):
            @Asset.Attribute()
            def Power_1(self, metadata):
                return {
                    'Name': 'Power 1',
                    'Type': 'Signal',
                    'Formula': '$power',
                    'Formula Parameters': {'$power': metadata[metadata['Name'].str.contains('Power')].iloc[0]['ID']}
                }

        class Temperature_Item(Asset):
            @Asset.Attribute()
            def Temperature_1(self, metadata):
                return {
                    'Name': 'Temperature 1',
                    'Type': 'Signal',
                    'Formula': '$temp',
                    'Formula Parameters': {'$temp': metadata[metadata['Name'].str.contains('Temperature')].iloc[0]['ID']}
                }

            @Asset.Attribute()
            def Temp_When_Comp_On(self, metadata):
                power_adjacent_class = self.parent.Power_Items().pick({'Name': "Power 1"})[0]
                return {
                    'Name': "Max Temperature 1 When Compressor Is On",
                    'Type': 'Signal',
                    'Formula': '$temp1.aggregate(maxValue(), ($power1<5).removeLongerThan(7d), durationKey())',
                    'Formula Parameters': {
                        'temp1': self.Temperature_1(),
                        'power1': power_adjacent_class
                    }
                }

     Item Group
     To help with even more complex attribute selections, we introduced the ability to use ItemGroup rather than the pick and parent functions. ItemGroup provides an alternate way of finding items located in other parts of the tree using established Python logic. Below are two examples using ItemGroup to perform selections that would be very complex to do with the pick function.

     Advanced Roll-up
     Roll-ups using pick reference one component beneath your class, but what if there is a need for a roll-up across multiple components? ItemGroup can be used for a simple roll-up as well as this more complex example. Rather than specifying a particular component and picking within it, we can use ItemGroup to iterate over every asset. Here, we retrieve every High Power attribute beneath the assets that are children of the current asset.

        @Asset.Attribute()
        def Compressor_High_Power(self, metadata):
            # Helpful functions:
            # asset.is_child_of(self) - Is the asset one of my direct children?
            # asset.is_parent_of(self) - Is the asset my direct parent?
            # asset.is_descendant_of(self) - Is the asset below me in the tree?
            # asset.is_ancestor_of(self) - Is the asset above me? (i.e. parent/grandparent/great-grandparent/etc.)
            return ItemGroup([
                asset.High_Power() for asset in self.all_assets()
                if asset.is_child_of(self)
            ]).roll_up('union')

     Referencing Items In A Different Section
     In this example, we're looking to reference attributes in other similar assets, but these assets are located in different sections of the tree. We could use the approach from the Advanced Selection section, but what if these compressors weren't necessarily at the same level of the tree or were beneath different components? They would then have different pathways and the method previously stated wouldn't work. Using ItemGroup we can iterate through all assets and find any that are also based on the Compressor class. Here we also exclude the current asset and then perform a roll-up based on all of the other High Power attributes.

        @Asset.Attribute()
        def Other_Compressors_Are_High_Power(self, metadata):
            return ItemGroup([
                asset.High_Power() for asset in self.all_assets()
                if isinstance(asset, Compressor) and self != asset
            ]).roll_up('union')
  7. To follow up on this item, currently there is not a simple way to export Scorecard data into Seeq Data Lab. In R53, Seeq added a copy button to Table View allowing users to copy the table and paste it into other applications like Excel. Please email support@seeq.com to create a ticket in our system so you'll be notified when there is an easy way to export scorecard data to Seeq Data Lab. Below is an example of a script that exports scorecard data using the SDK. What is exported are the samples shown when the scorecard is trended in Trend View and whether they're uncertain/subject to change.

     Note: This code was developed based on Seeq R53.3.0. It's common for Seeq to edit its SDK to enable new features, so this code may not work for other versions of Seeq. Using the Seeq Python module (SPy) is the only certain way to ensure scripts will work across versions.

        from seeq import sdk
        # Include "from seeq import spy" and a spy.login command if not working in Seeq Data Lab
        import pandas as pd
        import datetime as dt

        formAPI = sdk.FormulasApi(spy.client)
        metricAPI = sdk.MetricsApi(spy.client)

        ################################### User Input Area ###################################
        workbench_url = "https://explore.seeq.com/9C3916CE-0778-489C-90EE-7BC4C5734640/workbook/66D753DE-9189-4E8D-BE65-AB7BA6408EC8/worksheet/B476B826-4345-4699-8516-45F87AD50571"
        ########################################################################################

        # Pull in the metrics displayed on the worksheet as well as its display range
        analysis_items = spy.search(workbench_url, quiet=True)
        analysis_metrics = analysis_items[analysis_items['Type'].str.contains('Metric')]
        analysis_metrics.reset_index(drop=True, inplace=True)

        pulled_analysis = spy.workbooks.pull(
            spy.workbooks.search({"ID": spy.utils.get_workbook_id_from_url(workbench_url)}, recursive=True, quiet=True),
            quiet=True)
        for ws in pulled_analysis[0].worksheets:
            if spy.utils.get_worksheet_id_from_url(workbench_url) == ws.id:
                display_range = ws.display_range

        # Define function to extract scorecard metric data (data is based on the trended metric, not its tabular value)
        def extract_metric_data(metric_id, start_time, end_time):
            metric = metricAPI.get_metric(id=metric_id)
            metric_calc_id = metric.display_item.id
            metric_name = metric.name
            # API Exception 400 happens for Simple Metrics; use different SDK parameters for them
            try:
                result = formAPI.run_formula(
                    start=start_time,
                    end=end_time,
                    formula="$series",
                    parameters=["series=" + metric_calc_id],
                    limit=10000  # Limit on number of samples returned
                )
            except:
                result = formAPI.run_formula(
                    start=start_time,
                    end=end_time,
                    fragments=['capsule=capsule("' + start_time + '","' + end_time + '")', 'laneWidth=315000ms'],
                    function=metric_calc_id,
                    limit=10000  # Limit on number of samples returned
                )
            samples_df = pd.DataFrame()
            # Iterate through the samples pulled and add to a DataFrame that includes the sample's value and whether it's uncertain
            for sample in result.samples.samples:
                ts_epoch = sample.key
                ts_datetime = dt.datetime.fromtimestamp(ts_epoch / 1000000000)
                sample_df = pd.DataFrame(index=[ts_datetime],
                                         data={metric_name + " Value": sample.value,
                                               metric_name + " Uncertain": sample.is_uncertain})
                samples_df = samples_df.append(sample_df)
            # Remove returned values that are None, i.e. the trend had a gap during that time
            return samples_df.dropna(subset=[metric_name + ' Value'])

        # Iterate over metrics gathered from the worksheet and get their trended sample data using the defined function
        all_metric_data = pd.DataFrame()
        for m_id in analysis_metrics['ID']:
            single_metric = extract_metric_data(metric_id=m_id,
                                                start_time=display_range['Start'].strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
                                                end_time=display_range['End'].strftime("%Y-%m-%dT%H:%M:%S.%fZ"))
            all_metric_data = pd.concat([all_metric_data, single_metric])

        display(all_metric_data)

        # Replace below with the metric name to only see that metric's data
        # met_name = "Example Metric"
        # all_metric_data[[met_name + ' Value', met_name + ' Uncertain']].dropna(subset=[met_name + " Value"])
  8. To build on what Thorsten sent, below is a Python adaptation of his code. Rather than manually specifying the IDs required, this information is pulled from the first two histograms displayed on a worksheet. The next steps in the code divide the two histograms, make a graph of the result using the plotly or matplotlib library, and export the results to a CSV file.

     Note: This code was developed based on Seeq R53.3.0. It's common for Seeq to edit its SDK to enable new features, so this code may not work for other versions of Seeq. Using the Seeq Python module (SPy) is the only certain way to ensure scripts will work across versions.

        workbench_url = "https://explore.seeq.com/workbook/BE0673EA-9DA3-49D7-BA99-33CA77405E7E/worksheet/A4F2F47E-F238-4075-9030-FFDABE34F2DF"

        from seeq import sdk
        from seeq import spy
        import pandas as pd

        formAPI = sdk.FormulasApi(spy.client)

        analysis_items = spy.search(workbench_url, quiet=True)
        analysis_histograms = analysis_items[analysis_items['Type'] == 'Chart']
        analysis_histograms.reset_index(drop=True, inplace=True)

        pulled_analysis = spy.workbooks.pull(
            spy.workbooks.search({"ID": spy.utils.get_workbook_id_from_url(workbench_url)}, quiet=True),
            quiet=True)
        for ws in pulled_analysis[0].worksheets:
            if spy.utils.get_worksheet_id_from_url(workbench_url) == ws.id:
                display_range = ws.display_range

        def hist_search(hist_id, display_range):
            hist_info = formAPI.get_function(id=hist_id)
            hist_params = [elt.name + "=" + elt.item.id for elt in hist_info.parameters if elt.name != 'viewCapsule']
            # hist_capsule = [elt.name + "=" + elt.formula for elt in hist_info.parameters if elt.name == 'viewCapsule']
            # Required since the viewCapsule in the formula function isn't always the same as the display range
            hist_capsule = ["viewCapsule=capsule(\"" + display_range['Start'].strftime('%Y-%m-%dT%H:%M:%S.%fZ') + \
                            "\", \"" + display_range['End'].strftime('%Y-%m-%dT%H:%M:%S.%fZ') + '")']
            output = formAPI.run_formula(
                function=hist_id,
                parameters=hist_params,
                fragments=hist_capsule)
            return output

        def extract_hist_data(output):
            headers = [header.name for header in output.table.headers]
            hist_df = pd.DataFrame(columns=headers, data=output.table.data)
            if 'timeCol_Day Of Week' in headers:
                day_week_dict = {1: 'Monday', 2: 'Tuesday', 3: 'Wednesday', 4: 'Thursday',
                                 5: 'Friday', 6: 'Saturday', 7: 'Sunday'}
                hist_df['timeCol_Day Of Week'] = hist_df['timeCol_Day Of Week'].apply(lambda x: day_week_dict[int(x)])
            elif 'timeCol_Month' in headers:
                month_year_dict = {1: 'January', 2: 'February', 3: 'March', 4: 'April', 5: 'May', 6: 'June',
                                   7: 'July', 8: 'August', 9: 'September', 10: 'October', 11: 'November', 12: 'December'}
                hist_df['timeCol_Month'] = hist_df['timeCol_Month'].apply(lambda x: month_year_dict[int(x)])
            elif 'timeCol_Quarter' in headers:
                hist_df['timeCol_Quarter'] = hist_df['timeCol_Quarter'].apply(lambda x: 'Q' + str(x))
            if 'signalToAggregate' in headers[-1]:
                # Condition aggregations still have it shown on the backend as signalToAggregate
                hist_df.rename(columns={headers[-1]: 'signaltoAggregate'}, inplace=True)
            hist_df.set_index(headers[:-1], inplace=True)
            return hist_df

        hist_1 = extract_hist_data(hist_search(analysis_histograms.loc[0, "ID"], display_range))
        hist_2 = extract_hist_data(hist_search(analysis_histograms.loc[1, "ID"], display_range))
        result = hist_1.div(hist_2, axis=1)

        matplotlib_fig = result.unstack().plot(kind='bar', y=result.columns[-1], stacked=False)
        matplotlib_fig

        import plotly.express as px
        hold = result.reset_index(drop=False, inplace=False)
        plotly_graph = px.bar(hold, x=hold.columns[0], y=hold.columns[-1], color=hold.columns[1], barmode='group')
        plotly_graph

        hist_1_csv = hist_1.rename(columns={'signaltoAggregate': analysis_histograms.loc[0, "Name"]})
        hist_2_csv = hist_2.rename(columns={'signaltoAggregate': analysis_histograms.loc[1, "Name"]})
        result_csv = result.rename(columns={'signaltoAggregate': (analysis_histograms.loc[0, "Name"] + "/" + analysis_histograms.loc[1, "Name"])})
        result_csv.unstack().transpose().to_csv('TEST.csv')

     Attached: 1863770801_DivideTwoHistogramsOutputtoCSV.ipynb
  9. Seeq Data Lab allows users to programmatically interact with data connected to Seeq through Python. With this, users can create numerous advanced visualizations. Some examples of these are Sankey diagrams, waterfall plots, radar plots and 3D contour plots. These plots can then be pushed back into Seeq Organizer for other users to consume. A common workflow that stems from this process is the need to update the Python visualizations in an existing Organizer Topic with new ones as newer data becomes available. Here we'll look over the steps of how you can update an existing Organizer Topic with a new graphic.

     Step 1: Retrieve the Workbook HTML
     Behind every Organizer Topic is the HTML that controls what the reports display. We'll need to modify this HTML to add a new image while also retaining whatever pieces of Seeq content were already on the report.

        pulled_workbooks = spy.workbooks.pull(spy.workbooks.search({'Name': 'Organizer Topic Name'}))
        org_topic = pulled_workbooks[0]  # Note you may need to confirm that the first item in pulled_workbooks is the topic of interest
        ws_to_update = org_topic.worksheets[0]  # Choose the index based on the worksheet intended to be updated
        ws_to_update.html  # View the HTML behind the worksheet

     Step 2: Create the HTML for the Image
     The "add_image" function can be used to generate the HTML that will be inserted into the Organizer Topic HTML.

        replace_html = ws_to_update.document.add_image(filename="Image_To_Insert.png")
        replace_html

     Step 3: Replace HTML and Push Back to Seeq
     To find where in the Organizer Topic HTML to replace, we can use the re module. This will allow us to parse the HTML string to find our previously inserted image, which should begin with "<img src=". Note additional changes are required if multiple images are included in the report.

        import re

        before_html = re.findall("(.*)<img src=", ws_to_update.html)[0]  # Capture everything before the image
        after_html = re.findall(".*<img src=.*?>(.*)", ws_to_update.html)[0]  # Capture everything after the image
        full_html = before_html + replace_html + after_html  # Combine the before and after with the HTML generated for the new picture

        ws_to_update.html = full_html  # Reassign the HTML to the worksheet and push it back to Seeq
        spy.workbooks.push(pulled_workbooks)
  10. Hi Sivaji, Is the asset tree made using Seeq's Python library (SPy) or is it made from a connector? For non-SPy based trees, you would need to use our Software Development Kit (SDK) to add signals into the asset tree. This process can be complex and dependent on the use case, so I'd recommend emailing support@seeq.com for assistance. If the asset tree is made via SPy and you're looking to include new calculations/signals in it, you should be able to just add the calculations as part of your tree and re-push it (see the sketch below). The existing tree will stay as is but will get appended with the new calculations.
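     As a hedged illustration only (the tree, asset, and signal names below are made up, and the exact spy.assets.Tree arguments may differ by SPy version):

        from seeq import spy

        # Load an existing SPy-built tree by name (placeholder names throughout)
        tree = spy.assets.Tree('My Asset Tree', workbook='My Workbook')

        # Insert a new calculated signal beneath an existing asset in the tree
        tree.insert(name='Temperature Rate of Change',
                    formula='$t.derivative()',
                    formula_parameters={'$t': 'Temperature'},
                    parent='Area A')

        # Re-push the tree; existing items remain and the new calculation is appended
        tree.push()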
  11. Hi Muhammad, I'd recommend emailing support@seeq.com for further assistance so a member of our System Reliability team can schedule a meeting with you to look over the issue. Feel free to copy and paste exactly what you've written here and suggest times for a potential meeting.
  12. As an add-on to this topic, there can be times when one wants to push a different scorecard type. The previous example shows how to create a Simple Scorecard, but similar logic can be applied to make Condition and Continuous Scorecards.

     Condition Scorecard
     Since the Condition Scorecard is also based on a condition, we need to retrieve the condition to be used. This can be done using spy.search again.

        search_result_condition = spy.search({"Name": "Stage 2 Operation", "Scoped To": "C43E5ADB-ABED-48DC-A769-F3A97961A829"})

     From there we can tweak the scorecard code to include the bounding condition, which is the condition over which this calculation is performed in the scorecard. Note scorecard requires conditions with a maximum capsule duration, so an additional parameter is required if the condition does not have one. Below is the code.

        my_metric_input_condition = {
            'Type': 'Metric',
            'Name': 'My Metric Condition',
            'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
            'Statistic': 'Average',
            'Bounding Condition': {'ID': search_result_condition[search_result_condition['Name'] == 'Stage 2 Operation']['ID'].iloc[0]},
            'Bounding Condition Maximum Duration': '30h'  # Required for conditions without a maximum capsule duration
        }

        spy.push(metadata=pd.DataFrame([my_metric_input_condition]), workbook='Example Scorecard')

     Continuous Scorecard
     For Continuous Scorecards, users need to specify the rolling window over which to perform the calculations. To do this, a Duration and Period need to be provided. The Duration sets how long the rolling window is and the Period sets how often the rolling window is evaluated.

        my_metric_input_continuous = {
            'Type': 'Metric',
            'Name': 'My Metric Continuous',
            'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
            'Statistic': 'Average',
            'Duration': '1d',  # Length of time the calculation is done for
            'Period': '3d',    # How often the calculation is performed
        }

        spy.push(metadata=pd.DataFrame([my_metric_input_continuous]), workbook='Example Scorecard')
  13. Hi Jack, At this time, the only way to share an Organizer document with someone without a Seeq account is as a PDF. Instructions on how to generate this PDF are available at https://support.seeq.com/space/KB/159121437/Publishing%20a%20PDF
  14. Hi Yanmin, For your first question, yes, your time column is treated as the x-axis in Seeq. As a result, every signal that is trended in Seeq naturally incorporates this time column as its x-axis. When you say "create a signal that includes this time information", are you thinking of having time on the y-axis as well? For your second question, yes, we can. Seeq has a function called runningDelta(), which calculates the difference between successive samples. So in your scenario, (b2-b1) would be captured by $b.runningDelta(), and (b2-b1)/(a2-a1) would be $b.runningDelta()/$a.runningDelta(). This would all be done in Seeq's Formula tool. Hopefully the example screenshot below helps. If you'd like help understanding or would like to discuss more, please come to our Office Hours where a member of our team can help you. https://info.seeq.com/office-hours Regards, Kris
  15. Hi Sivaji, Thanks for coming to Office Hours. I'll post the resolution here in case others have the same error occur. When pushing data to Seeq from a Python environment like Seeq Data Lab, I'd recommend including a Value Unit Of Measure column in your DataFrame (see the sketch below). By default, if this column is excluded, Seeq treats the pushed data as having a null Value Unit Of Measure. Since Seeq can't modify the type of a Value Unit Of Measure ("string", null, "a unit"), the error above occurred because you were trying to change the Value Unit Of Measure from its original null value to a string. To resolve the issue, you will need assistance from your Seeq Admin to hard delete (not just archive) the item. If this item is scoped to a workbook, it may be easier to just push to a new workbook. Feel free to reach out to support@seeq.com if help is needed.
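     For reference, a minimal sketch of a push that sets the unit up front; the signal name, unit, and workbook here are placeholders, not from the original thread.

        import pandas as pd
        from seeq import spy

        # Sample data for a hypothetical signal
        data = pd.DataFrame({'My Pushed Signal': [1.2, 1.4]},
                            index=pd.to_datetime(['2021-09-01 00:00', '2021-09-01 01:00']))

        # Describe the pushed signal, including its Value Unit Of Measure,
        # so the signal is not created with a null unit that can't be changed later
        metadata = pd.DataFrame([{
            'Name': 'My Pushed Signal',
            'Type': 'Signal',
            'Value Unit Of Measure': 'm/s'
        }], index=['My Pushed Signal'])

        spy.push(data=data, metadata=metadata, workbook='Example Workbook')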