Kristopher Wiggins

Seeq Team

Everything posted by Kristopher Wiggins

  1. Since it's just HTML, you can assign it to the worksheet's HTML directly. You'd only need to follow Step 1 of pulling the workbook's HTML and a portion of Step 3 in the above example. Step 3 in your case would just be

     ws_to_update.html = table_html

     and then you'd push the workbook. If you're looking to repeatedly update a table in Organizer with new information while retaining the other aspects of the Organizer, you'd need to find a way to capture that table using the re module. Tables in HTML are wrapped in a table tag (i.e. <table> ... </table>), so you can create a regular expression that searches based on this. For additional regular expression help, have a look at https://regex101.com/ to learn more and test different expressions. Once you have your regular expression finalized, you can adapt Step 3 to use it instead.
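     As a minimal sketch of that approach (the workbook name, worksheet index, and new_table_html below are assumptions for illustration):

     import re
     from seeq import spy

     pulled_workbooks = spy.workbooks.pull(spy.workbooks.search({'Name': 'Organizer Topic Name'}))
     ws_to_update = pulled_workbooks[0].worksheets[0]

     new_table_html = "<table><tr><td>Updated value</td></tr></table>"  # hypothetical replacement table

     # Replace the first <table>...</table> block while keeping the surrounding HTML.
     # re.DOTALL lets '.' span line breaks; the non-greedy '.*?' stops at the first closing tag.
     ws_to_update.html = re.sub(r"<table.*?</table>", new_table_html, ws_to_update.html,
                                count=1, flags=re.DOTALL)

     spy.workbooks.push(pulled_workbooks)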
  2. The SPy library supports the creation of asset trees through the spy.assets module. These asset trees can include various types of items such as signals and conditions, calculations like scorecard metrics, and can be used to create numerous Workbench Analysis Worksheets and Organizer Topic Documents. One question that commonly comes up when making these trees is how to reference attributes that are located in other parts of the tree.

     Roll-Ups

     The first example of referencing other items in the tree is through roll-ups. These types of calculations "roll up" attributes from levels below where the roll-up calculation is being performed, whether the level is directly beneath or multiple levels below. These attributes are then combined using logic you provide. For signals and scalars, the options are Average, Maximum, Minimum, Range, Sum and Multiply. For conditions, the options are Union, Intersect, Counts, Count Overlaps, and Combine With. Below are examples where .Cities() is a component beneath the current class. All attributes and assets beneath the Cities component will be searched through and included in the roll-up based on the criteria given in the pick function. Here we're filtering based on Name, but any property such as Type can be supplied. Note that Seeq Workbench's search mechanism is used here, so wildcards and regular expressions can be included. Lastly, we specify the kind of roll-up we'd like to perform.

     @Asset.Attribute()
     def Regional_Compressor_Running_Poorly(self, metadata):
         return self.Cities().pick({'Name': 'Compressor Running Poorly'}).roll_up('union')

     @Asset.Attribute()
     def Regional_Total_Energy_Consumption(self, metadata):
         return self.Cities().pick({'Name': 'Total Daily Energy Consumption'}).roll_up('sum')

     Child Attributes

     The second example looks at how to reference child attributes without rolling them up. Maybe there's a particular attribute that needs to be included in a calculation used at a higher level in the asset tree. For this scenario, the pick function can be used once again. Rather than do a roll-up, we'll just index the particular item we want. Most of the time the goal is to reference a specific item using this method, so the criteria passed into the pick function should be specific enough to find one item, meaning the index will always be 0. One property that may be of interest for this is Template, where you can specify the particular class that contains the item wanted.

     @Asset.Attribute()
     def Child_Power_Low(self, metadata):
         child_power = self.Cities().pick({"Name": "Compressor Power", "Asset": "/Area (A|C|D)/"})[0]
         return {
             'Name': "Child Power Low",
             'Type': "Condition",
             "Formula": "$child_power < 5",
             "Formula Parameters": {"child_power": child_power}
         }

     Parent Attributes

     The next example looks at how we can reference parent attributes in calculations that are beneath them. Rather than reference a particular component, we'll use the parent. From there we'll include the attribute we want to reference from our parent asset. If looking to reference attributes at higher levels of the tree, chain multiple ".parent" calls. For example, "self.parent.parent" will look two levels above the current level.

     @Asset.Attribute()
     def Parent_Temp_Multiplied(self, metadata):
         parent_temp = self.parent.Temperature()
         return {
             'Name': "Parent Temp Multiplied",
             'Type': "Signal",
             "Formula": "$parent_temp * 10",
             "Formula Parameters": {"parent_temp": parent_temp}
         }

     Advanced Selection

     In this example, we'll look at how we can combine the previously mentioned options to find items located in other parts of the tree. Here, we're looking to reference items located at the same level of the tree but in another class, so they're not located beneath the same asset. We have two separate assets beneath the regions, Temperature Items and Power Items. The Temperature Item class has a calculation called Max Temperature 1 When Compressor Is On which references an attribute beneath its corresponding Power Item class. To fetch this attribute, we go up a level to the parent, navigate down to the Power_Items, and then pick that attribute.

     class Region(Asset):
         @Asset.Component()
         def Temperature_Items(self, metadata):
             return self.build_components(template=Temperature_Item, metadata=metadata, column_name='Region Temp')

         @Asset.Component()
         def Power_Items(self, metadata):
             return self.build_components(template=Power_Item, metadata=metadata, column_name='Region Power')

     class Power_Item(Asset):
         @Asset.Attribute()
         def Power_1(self, metadata):
             return {
                 'Name': 'Power 1',
                 'Type': 'Signal',
                 'Formula': '$power',
                 'Formula Parameters': {'$power': metadata[metadata['Name'].str.contains('Power')].iloc[0]['ID']}
             }

     class Temperature_Item(Asset):
         @Asset.Attribute()
         def Temperature_1(self, metadata):
             return {
                 'Name': 'Temperature 1',
                 'Type': 'Signal',
                 'Formula': '$temp',
                 'Formula Parameters': {'$temp': metadata[metadata['Name'].str.contains('Temperature')].iloc[0]['ID']}
             }

         @Asset.Attribute()
         def Temp_When_Comp_On(self, metadata):
             power_adjacent_class = self.parent.Power_Items().pick({'Name': "Power 1"})[0]
             return {
                 'Name': "Max Temperature 1 When Compressor Is On",
                 'Type': 'Signal',
                 'Formula': '$temp1.aggregate(maxValue(), ($power1<5).removeLongerThan(7d), durationKey())',
                 'Formula Parameters': {
                     'temp1': self.Temperature_1(),
                     'power1': power_adjacent_class
                 }
             }

     Item Group

     To help with even more complex attribute selections, we introduced the ability to use ItemGroup rather than the pick and parent functions. ItemGroup provides an alternate way of finding items located in other parts of the tree using established Python logic. Below are two examples using ItemGroup to perform selections that would be very complex to do with the pick function.

     Advanced Roll-up

     Roll-ups using pick reference one component beneath your class, but what if there is a need for a roll-up across multiple components? ItemGroup can be used for a simple roll-up as well as this more complex case. Rather than specifying a particular component and picking within it, we can use ItemGroup to iterate over every asset. Here, we retrieve every High Power attribute beneath the assets if the asset is a child of the current asset.

     @Asset.Attribute()
     def Compressor_High_Power(self, metadata):
         # Helpful functions:
         #   asset.is_child_of(self)      - Is the asset one of my direct children?
         #   asset.is_parent_of(self)     - Is the asset my direct parent?
         #   asset.is_descendant_of(self) - Is the asset below me in the tree?
         #   asset.is_ancestor_of(self)   - Is the asset above me? (i.e. parent/grandparent/great-grandparent/etc.)
         return ItemGroup([
             asset.High_Power() for asset in self.all_assets()
             if asset.is_child_of(self)
         ]).roll_up('union')

     Referencing Items In A Different Section

     In this example, we're looking to reference attributes in other similar assets, but these assets are located in different sections of the tree. We could use the approach from the Advanced Selection section, but what if these compressors weren't necessarily at the same level of the tree or were beneath different components? They would have different pathways, and the method previously stated wouldn't work. Using ItemGroup we can iterate through all assets and find any that are also based on the Compressor class. Here we also exclude the current asset and then perform a roll-up based on all of the other High Power attributes.

     @Asset.Attribute()
     def Other_Compressors_Are_High_Power(self, metadata):
         return ItemGroup([
             asset.High_Power() for asset in self.all_assets()
             if isinstance(asset, Compressor) and self != asset
         ]).roll_up('union')
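     For context, here is a minimal sketch of how a tree defined this way gets instantiated and pushed; the search criteria, Build Path/Build Asset assignments, and workbook name are assumptions for illustration:

     from seeq import spy

     # Hypothetical: gather the raw signals the tree will be built from
     metadata_df = spy.search({'Name': 'Area A_*', 'Datasource Name': 'Example Data'})

     # Tell SPy where each row lives in the tree (names are placeholders)
     metadata_df['Build Path'] = 'My Plant'
     metadata_df['Build Asset'] = 'Region 1'

     # Instantiate the class-based model against the metadata, then push the result
     build_df = spy.assets.build(Region, metadata_df)
     spy.push(metadata=build_df, workbook='Asset Tree Example')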
  3. To follow up on this item, there is currently not a simple way to export Scorecard data into Seeq Data Lab. In R53, Seeq added a copy button to Table View allowing users to copy the table and paste it into other applications like Excel. Please email support@seeq.com in order to create a ticket in our system so you'll be notified when there is an easy way to export scorecard data to Seeq Data Lab.

     Below is an example of a script that exports scorecard data using the SDK. What is exported are the samples shown when the scorecard is trended in Trend View, along with whether they're uncertain/subject to change.

     Note: This code was developed based on Seeq R53.3.0. It's common for Seeq to edit its SDK to enable new features, so this code may not work for other versions of Seeq. Using the Seeq Python module (SPy) is the only certain way to ensure scripts will work across versions.

     from seeq import sdk  # Include "from seeq import spy" and spy.login commands if not working in Seeq Data Lab
     import pandas as pd
     import datetime as dt

     formAPI = sdk.FormulasApi(spy.client)
     metricAPI = sdk.MetricsApi(spy.client)

     ################################## User Input Area ##################################
     workbench_url = "https://explore.seeq.com/9C3916CE-0778-489C-90EE-7BC4C5734640/workbook/66D753DE-9189-4E8D-BE65-AB7BA6408EC8/worksheet/B476B826-4345-4699-8516-45F87AD50571"
     ######################################################################################

     # Pull in the metrics displayed on the worksheet as well as its display range
     analysis_items = spy.search(workbench_url, quiet=True)
     analysis_metrics = analysis_items[analysis_items['Type'].str.contains('Metric')]
     analysis_metrics.reset_index(drop=True, inplace=True)
     pulled_analysis = spy.workbooks.pull(spy.workbooks.search({"ID": spy.utils.get_workbook_id_from_url(workbench_url)}, recursive=True, quiet=True), quiet=True)
     for ws in pulled_analysis[0].worksheets:
         if spy.utils.get_worksheet_id_from_url(workbench_url) == ws.id:
             display_range = ws.display_range

     # Define function to extract scorecard metric data (data is based on the trended metric, not its tabular value)
     def extract_metric_data(metric_id, start_time, end_time):
         metric = metricAPI.get_metric(id=metric_id)
         metric_calc_id = metric.display_item.id
         metric_name = metric.name
         # API Exception 400 happens for Simple Metrics; use different SDK parameters for them
         try:
             result = formAPI.run_formula(
                 start=start_time,
                 end=end_time,
                 formula="$series",
                 parameters=["series=" + metric_calc_id],
                 limit=10000  # Limit on number of samples returned
             )
         except:
             result = formAPI.run_formula(
                 start=start_time,
                 end=end_time,
                 fragments=['capsule=capsule("' + start_time + '","' + end_time + '")', 'laneWidth=315000ms'],
                 function=metric_calc_id,
                 limit=10000  # Limit on number of samples returned
             )
         samples_df = pd.DataFrame()
         # Iterate through the samples pulled and add to a DataFrame that includes each sample's value and whether it's uncertain
         for sample in result.samples.samples:
             ts_epoch = sample.key
             ts_datetime = dt.datetime.fromtimestamp(ts_epoch / 1000000000)
             sample_df = pd.DataFrame(index=[ts_datetime],
                                      data={metric_name + " Value": sample.value,
                                            metric_name + " Uncertain": sample.is_uncertain})
             samples_df = samples_df.append(sample_df)
         # Remove returned values that are None, i.e. the trend had a gap during that time
         return samples_df.dropna(subset=[metric_name + ' Value'])

     # Iterate over metrics gathered from the worksheet and get each one's trended sample data using the defined function
     all_metric_data = pd.DataFrame()
     for m_id in analysis_metrics['ID']:
         single_metric = extract_metric_data(metric_id=m_id,
                                             start_time=display_range['Start'].strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
                                             end_time=display_range['End'].strftime("%Y-%m-%dT%H:%M:%S.%fZ"))
         all_metric_data = pd.concat([all_metric_data, single_metric])

     display(all_metric_data)

     # Replace below with the metric name to only see that metric's data
     # met_name = "Example Metric"
     # metric_data[[met_name + ' Value', met_name + ' Uncertain']].dropna(subset=[met_name + " Value"])
  4. To build on what Thorsten sent, below is a Python adaptation of his code. Rather than manually specifying the IDs required, this information is pulled from the first two histograms displayed on a worksheet. The next few steps in the code divide the two histograms, make a graph of the result using the plotly or matplotlib library, and export the results to a csv file.

     Note: This code was developed based on Seeq R53.3.0. It's common for Seeq to edit its SDK to enable new features, so this code may not work for other versions of Seeq. Using the Seeq Python module (SPy) is the only certain way to ensure scripts will work across versions.

     workbench_url = "https://explore.seeq.com/workbook/BE0673EA-9DA3-49D7-BA99-33CA77405E7E/worksheet/A4F2F47E-F238-4075-9030-FFDABE34F2DF"

     from seeq import sdk
     from seeq import spy
     import pandas as pd

     formAPI = sdk.FormulasApi(spy.client)

     analysis_items = spy.search(workbench_url, quiet=True)
     analysis_histograms = analysis_items[analysis_items['Type'] == 'Chart']
     analysis_histograms.reset_index(drop=True, inplace=True)
     pulled_analysis = spy.workbooks.pull(spy.workbooks.search({"ID": spy.utils.get_workbook_id_from_url(workbench_url)}, quiet=True), quiet=True)
     for ws in pulled_analysis[0].worksheets:
         if spy.utils.get_worksheet_id_from_url(workbench_url) == ws.id:
             display_range = ws.display_range

     def hist_search(hist_id, display_range):
         hist_info = formAPI.get_function(id=hist_id)
         hist_params = [elt.name + "=" + elt.item.id for elt in hist_info.parameters if elt.name != 'viewCapsule']
         # hist_capsule = [elt.name + "=" + elt.formula for elt in hist_info.parameters if elt.name == 'viewCapsule']
         # Required since the viewCapsule in the formula function isn't always the same as the display range
         hist_capsule = ["viewCapsule=capsule(\"" + display_range['Start'].strftime('%Y-%m-%dT%H:%M:%S.%fZ') + \
                         "\", \"" + display_range['End'].strftime('%Y-%m-%dT%H:%M:%S.%fZ') + '")']
         output = formAPI.run_formula(
             function=hist_id,
             parameters=hist_params,
             fragments=hist_capsule)
         return output

     def extract_hist_data(output):
         headers = [header.name for header in output.table.headers]
         hist_df = pd.DataFrame(columns=headers, data=output.table.data)
         if 'timeCol_Day Of Week' in headers:
             day_week_dict = {1: 'Monday', 2: 'Tuesday', 3: 'Wednesday', 4: 'Thursday', 5: 'Friday', 6: 'Saturday', 7: 'Sunday'}
             hist_df['timeCol_Day Of Week'] = hist_df['timeCol_Day Of Week'].apply(lambda x: day_week_dict[int(x)])
         elif 'timeCol_Month' in headers:
             month_year_dict = {1: 'January', 2: 'February', 3: 'March', 4: 'April', 5: 'May', 6: 'June',
                                7: 'July', 8: 'August', 9: 'September', 10: 'October', 11: 'November', 12: 'December'}
             hist_df['timeCol_Month'] = hist_df['timeCol_Month'].apply(lambda x: month_year_dict[int(x)])
         elif 'timeCol_Quarter' in headers:
             hist_df['timeCol_Quarter'] = hist_df['timeCol_Quarter'].apply(lambda x: 'Q' + str(x))
         if 'signalToAggregate' in headers[-1]:
             # Condition aggregations still have it shown on the backend as signalToAggregate
             hist_df.rename(columns={headers[-1]: 'signaltoAggregate'}, inplace=True)
         hist_df.set_index(headers[:-1], inplace=True)
         return hist_df

     hist_1 = extract_hist_data(hist_search(analysis_histograms.loc[0, "ID"], display_range))
     hist_2 = extract_hist_data(hist_search(analysis_histograms.loc[1, "ID"], display_range))
     result = hist_1.div(hist_2, axis=1)

     matplotlib_fig = result.unstack().plot(kind='bar', y=result.columns[-1], stacked=False)
     matplotlib_fig

     import plotly.express as px
     hold = result.reset_index(drop=False, inplace=False)
     plotly_graph = px.bar(hold, x=hold.columns[0], y=hold.columns[-1], color=hold.columns[1], barmode='group')
     plotly_graph

     hist_1_csv = hist_1.rename(columns={'signaltoAggregate': analysis_histograms.loc[0, "Name"]})
     hist_2_csv = hist_2.rename(columns={'signaltoAggregate': analysis_histograms.loc[1, "Name"]})
     result_csv = result.rename(columns={'signaltoAggregate': (analysis_histograms.loc[0, "Name"] + "/" + analysis_histograms.loc[1, "Name"])})
     result_csv.unstack().transpose().to_csv('TEST.csv')

     1863770801_DivideTwoHistogramsOutputtoCSV.ipynb
  5. Seeq Data Lab allows users to programmatically interact with data connected to Seeq through Python. With this, users can create numerous advanced visualizations. Some examples of these are Sankey diagrams, waterfall plots, radar plots, and 3D contour plots. These plots can then be pushed back into Seeq Organizer for other users to consume. A common workflow that stems from this process is the need to update the Python visualizations in an existing Organizer Topic as newer data becomes available. Here we'll go over the steps to update an existing Organizer Topic with a new graphic.

     Step 1: Retrieve the Workbook HTML

     Behind every Organizer Topic is the HTML that controls what the report displays. We'll need to modify this HTML to add a new image while also retaining whatever pieces of Seeq content were already on the report.

     pulled_workbooks = spy.workbooks.pull(spy.workbooks.search({'Name': 'Organizer Topic Name'}))
     org_topic = pulled_workbooks[0]         # Note: you may need to confirm that the first item in pulled_workbooks is the topic of interest
     ws_to_update = org_topic.worksheets[0]  # Choose the index based on the worksheet intended to be updated
     ws_to_update.html                       # View the HTML behind the worksheet

     Step 2: Create the HTML for the Image

     The "add_image" function can be used to generate the HTML that will be inserted into the Organizer Topic HTML.

     replace_html = ws_to_update.document.add_image(filename="Image_To_Insert.png")
     replace_html

     Step 3: Replace HTML and Push Back to Seeq

     To find where in the Organizer Topic HTML to make the replacement, we can use the re module. This allows us to parse the HTML string to find our previously inserted image, which should begin with "<img src=". Note that additional changes are required if multiple images are included in the report.

     import re
     before_html = re.findall("(.*)<img src=", ws_to_update.html)[0]       # Capture everything before the image
     after_html = re.findall(".*<img src=.*?>(.*)", ws_to_update.html)[0]  # Capture everything after the image
     full_html = before_html + replace_html + after_html                   # Combine the before and after with the HTML generated for the new picture

     ws_to_update.html = full_html  # Reassign the HTML to the worksheet and push it back to Seeq
     spy.workbooks.push(pulled_workbooks)
  6. Hi Sivaji, Is the asset tree made using Seeq's Python library (SPy) or is it made from a connector? For non-SPy trees, you would need to use our Software Development Kit (SDK) to add signals into the asset tree. This process can be complex and depends on the use case, so I'd recommend emailing support@seeq.com for assistance. If the asset tree is made via SPy and you're looking to include new calculations/signals in it, you should be able to just add the calculations as part of your tree and re-push it, as sketched below. The existing tree will stay as-is but will get appended with the new calculations.
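     A minimal sketch of that append-and-re-push flow, assuming a class-based SPy tree (the class, attribute names, formula, and workbook name are all hypothetical):

     from seeq import spy
     from seeq.spy.assets import Asset

     class Pump(Asset):
         @Asset.Attribute()
         def Temperature(self, metadata):
             return metadata[metadata['Name'].str.contains('Temperature')]

         # New calculation added to the existing tree definition
         @Asset.Attribute()
         def High_Temperature(self, metadata):
             return {
                 'Name': 'High Temperature',
                 'Type': 'Condition',
                 'Formula': '$t > 90',
                 'Formula Parameters': {'$t': self.Temperature()}
             }

     # Rebuild and re-push: existing items stay as they are and the new calculation is appended
     build_df = spy.assets.build(Pump, metadata=my_metadata_df)  # my_metadata_df: hypothetical spy.search results
     spy.push(metadata=build_df, workbook='Existing Asset Tree')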
  7. Hi Muhammad, I'd recommend emailing support@seeq.com for further assistance so a member of our System Reliability team can schedule a meeting with you to look over the issue. Feel free to copy and paste exactly what you've written here and suggest times for a potential meeting.
  8. As an add-on to this topic, there are times when one wants to push a different scorecard type. The previous example shows how to create a Simple Scorecard, but similar logic can be applied to make a Condition and a Continuous Scorecard.

     Condition Scorecard

     Since the Condition Scorecard is based on a condition, we need to retrieve the condition to be used. This can be done using spy.search again.

     search_result_condition = spy.search({"Name": "Stage 2 Operation", "Scoped To": "C43E5ADB-ABED-48DC-A769-F3A97961A829"})

     From there we can tweak the scorecard code to include the bounding condition, which is the condition over which this calculation is performed in the scorecard. Note that scorecard requires conditions with a maximum capsule duration, so an additional parameter is required if the condition does not have one. Below is the code.

     my_metric_input_condition = {
         'Type': 'Metric',
         'Name': 'My Metric Condition',
         'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
         'Statistic': 'Average',
         'Bounding Condition': {'ID': search_result_condition[search_result_condition['Name'] == 'Stage 2 Operation']['ID'].iloc[0]},
         'Bounding Condition Maximum Duration': '30h'  # Required for conditions without a maximum capsule duration
     }

     spy.push(metadata=pd.DataFrame([my_metric_input_condition]), workbook='Example Scorecard')

     Continuous Scorecard

     For Continuous Scorecards, users need to specify the rolling window over which to perform the calculations. To do this, a Duration and a Period need to be provided. The Duration sets how long the rolling window is, and the Period sets how frequently the rolling window is evaluated.

     my_metric_input_continuous = {
         'Type': 'Metric',
         'Name': 'My Metric Continuous',
         'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
         'Statistic': 'Average',
         'Duration': '1d',  # Length of time the calculation is done for
         'Period': '3d',    # How often the calculation is performed
     }

     spy.push(metadata=pd.DataFrame([my_metric_input_continuous]), workbook='Example Scorecard')
  9. Hi Jack, At this time, the only way to share an Organizer document with someone without a Seeq account is as a PDF. Instructions for generating this PDF are available at https://support.seeq.com/space/KB/159121437/Publishing%20a%20PDF
  10. Hi Yanmin, For your first question, yes, your time column is treated as the x-axis in Seeq. As a result, every signal trended in Seeq naturally incorporates this time column as its x-axis. When you say "create a signal that includes this time information", are you thinking of having time on the y-axis as well? For your second question, yes, we can. Seeq has a function called runningDelta(), which calculates the difference between successive samples. So in your scenario, (b2-b1) would be captured by $b.runningDelta(), and (b2-b1)/(a2-a1) would be $b.runningDelta()/$a.runningDelta(). This would all be done in Seeq's Formula tool. Hopefully the example screenshot below helps. If you'd like help understanding or would like to discuss more, please come to our office hours where a member of our team can help you. https://info.seeq.com/office-hours Regards, Kris
  11. Hi Sivaji, Thanks for coming to Office Hours. I'll post the resolution here in case others run into the same error. When pushing data to Seeq from a Python environment like Seeq Data Lab, I'd recommend including a Value Unit Of Measure column in your DataFrame. By default, if this column is excluded, Seeq treats the pushed data as having a null Value Unit Of Measure. Since Seeq can't modify the type of a Value Unit Of Measure ("string", null, "a unit"), the error above occurred because you were trying to change the Value Unit Of Measure to a string from its original null value. To resolve the issue, you will need assistance from your Seeq Admin to hard delete (not just archive) the item. If the item is scoped to a workbook, it may be easier to just push to a new workbook. Feel free to reach out to support@seeq.com if help is needed.
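      For reference, here is a minimal sketch of a push that declares the unit up front so it never has to change later; the signal name, timestamps, and unit are assumptions, and it's worth checking the spy.push docstring for how metadata rows map to data columns:

      from seeq import spy
      import pandas as pd

      # Hypothetical signal data
      data = pd.DataFrame(
          index=pd.to_datetime(['2021-01-01 00:00', '2021-01-01 01:00']),
          data={'My Pushed Signal': [101.3, 99.8]}
      )

      # Declaring the unit on the first push avoids ever modifying it from null
      metadata = pd.DataFrame(
          [{'Type': 'Signal', 'Value Unit Of Measure': 'kPa'}],  # 'kPa' is an assumed unit
          index=['My Pushed Signal']
      )

      spy.push(data=data, metadata=metadata, workbook='Push Example')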
  12. The SPy module serves as the core of Seeq Data Lab. Because it is a fairly new Seeq offering and a Python module, the extensive documentation users are accustomed to with other Seeq applications (Workbench/Organizer Topic) is not available yet. Below are some tips for learning more about the SPy module.

      Option 1: Read through the SPy Documentation

      When working in Seeq Data Lab, a folder titled "SPy Documentation" is automatically generated when creating a new project. If working outside of Seeq Data Lab, the command spy.docs.copy() can be used after importing the SPy module to download the documentation. The SPy documentation contains a tutorial to guide users through most of the things that can be done with SPy. Take a look through the documentation and chances are there is an example discussing what you would like to use SPy to do.

      Option 2: Access an Example of the Item

      When attempting to do something not discussed in the SPy documentation, try pulling an example of the item you're looking to modify. For example, if looking to modify the format of scorecards on a worksheet, the item to bring in would be a worksheet, since this format is carried on the worksheet rather than an individual scorecard. A combination of spy.search and spy.pull will allow you to access an example of the item. You can then leverage the keyboard shortcuts discussed in Jupyter Docstring Shortcut to learn more about the options available. Typing "." and then pressing Tab will show the properties and methods associated with that item, and "Shift+Tab" will show the Docstring associated with that property or method. Docstrings are also available for SPy functions. In addition to the previously mentioned shortcuts, another way to access the Docstring is to type the function followed by a question mark and run the cell. The ? allows the full Docstring to appear instantly rather than as a scrollable pop-up. For example, the code below will return the Docstring for the spy.search function.

      spy.search?
  13. As covered in Push Scorecard from Seeq Data Lab, it is possible to push a scorecard from Seeq Data Lab into a Seeq Workbench Analysis. In this post, we'll cover how to format the display of a worksheet in Scorecard view. The scorecards can be pre-made in Workbench or previously pushed from Seeq Data Lab. Please refer to R21 Scorecard Metric for more information about how to modify scorecard display format in Workbench. To format the display of a worksheet, we will first need to access the worksheet to be changed. In these examples, we'll assume that we're modifying an existing worksheet; similar logic can be applied to new worksheets being created.

      Step 1: Pull in the worksheet

      First, using the spy.search and spy.pull commands, we can pull information about the workbook of interest. We can then navigate through the workbook in Python to access the contents of the worksheet.

      # The output of this command is a list of pulled workbooks that match the criteria passed into the search command
      pulled_workbooks = spy.workbooks.pull(spy.workbooks.search({"Workbook Type": "Analysis", "Name": "Example Analysis"}))

      # You can then access the worksheet to be changed using the command below. A variation to access the worksheet
      # by its location rather than its name is commented beneath
      scorecard_worksheet = pulled_workbooks[0].worksheet("Scorecard Worksheet")
      # scorecard_worksheet = pulled_workbooks[0].worksheets[1]

      Step 2: Modifying the Scorecard View

      We can then modify the format of the display using the code below.

      # This line is only needed if the worksheet is not in Scorecard view
      scorecard_worksheet.view = "Scorecard"

      # The lines below change the format to only show the Start time in an "l" format, which is m/d/yyyy
      scorecard_worksheet.scorecard_date_display = "Start"
      scorecard_worksheet.scorecard_date_format = "l"

      For additional formatting options, please take a look at the docstrings below.

      @property
      def scorecard_date_display(self):
          """
          Get/Set the date display for scorecards

          Parameters
          ----------
          str or None
              The dates that should be displayed for scorecards. Valid values are:

              =============== ================================
              Date Display    Result
              =============== ================================
              None            No date display
              'Start'         Start of the time period only
              'End'           End of the time period only
              'Start And End' Start and end of the time period
              =============== ================================

          Returns
          -------
          str or None
              The scorecard date display
          """
          return self._get_scorecard_date_display()

      @property
      def scorecard_date_format(self):
          """
          Get/Set the format for scorecard date displays

          Parameters
          ----------
          str
              The string defining the date format. Formats are parsed using momentjs.
              The full documentation for the momentjs date parsing can be found at
              https://momentjs.com/docs/#/displaying/

          Examples
          --------
          "d/m/yyy" omitting leading zeros (eg, 4/27/2020): l
          "Mmm dd, yyyy, H:MM AM/PM" (eg, Apr 27, 2020 5:00 PM): lll
          "H:MM AM/PM" (eg, "5:00 PM"): LT

          Returns
          -------
          str
              The formatting string
          """
          return self._get_scorecard_date_format()

      Step 3: Push the workbook back to Workbench

      Pushing the workbook back into Workbench will cause the existing workbook to update to match the formatting we specified in Seeq Data Lab.

      # Push the workbook back into Workbench
      spy.workbooks.push(workbooks=pulled_workbooks)
  14. Prior to Seeq major version R20, the only way to bring data from a SQL datasource into Seeq was through the SQL Connector V1. From major version R20 onward, Seeq provides the SQL Connector V2, which features many improvements. For more information, please take a look at SQL Connectors. In order to gain the improvements from the new connector, admins may want to migrate their old SQL V1 connections to V2 along with all of the Seeq calculations built on them. The steps below can be used as a rough guide. Please contact Seeq support if attempting this, as different SQL configurations will require changes to the steps listed below.

      Step 1: Creating your SQL V2 Connections

      One significant difference between the SQL V1 and SQL V2 Connectors is that SQL V1 only connects to the SQL datasource. To actually query information from the SQL datasource, individual Seeq Formulas have to be written for each signal or condition being retrieved. The SQL V2 Connector, however, can connect to the SQL datasource and perform a single query to bring in numerous signals and conditions. In order to replicate the queries made from the SQL V1 connections in SQL V2, the best approach is to create a new query that generalizes the SQL V1 queries rather than copying each individual one. Below is a simple example of a V1 query and its V2 equivalent. Note that in the V1 version, 4 different formulas had to be made to retrieve volume, thickness, temperature and cost.

      V1 Query:

      V2 Query:

      The SQL V1 and V2 Connectors are separate, such that each item can have its own name, description, etc. but still point to the same data. For the purpose of migrating from SQL V1 to V2, it is best to have the signals/conditions brought in from SQL V1 and V2 share either the same names or a small variation that is consistent throughout all items. For example, if the signal made from the SQL V1 connection was titled "Volume From Lab Signals", the query made with the SQL V2 connector should either replicate that name or apply a slight variation such as "Volume From Lab Signals_V2".

      Step 2: Migrating SQL V1 Items to SQL V2

      In this step, we will leverage Seeq's command line interface to swap all of the SQL V1 items with their V2 counterparts. Please refer to CLI Datasource Swapping for more information about datasource swapping. We'll assume here that the mapping is performed on the same server. By default, datasource swapping looks at all of the items available in a datasource and then determines which meet the criteria for swapping. Since we are working with Seeq's internal database that stores calculations, we will have to modify a file on the Seeq server that controls datasource swapping. For a common Seeq install, this file will be located at "C:\Program Files\Seeq Server\pilot\datasource.py". Save a copy of the file outside of the Seeq folder in order to revert back to the original once the migration is complete. Depending on your version number, replace the datasource file with the appropriate Python file attached at the bottom of this post. After download, rename the file to "datasource.py".

      The next step is to perform the map. For this walk-through, we will assume the Seeq database that houses these SQL V1 items is the cassandra database. To verify, look at the item properties on one of the signals/conditions made in Formula that uses the SQL V1 connection. The cassandra database has Datasource ID = default and Datasource Class = cassandraV2. We have to specify a class for the cassandra datasource since its Datasource ID is shared by other internal Seeq databases. For the SQL V2 Connector, we will assume it has Datasource ID = 11BB11B1-B1B1-1B11-1BB1-BB11B111BB1B, but this can be found in the connector file.

      If your SQL V1 and V2 items are named the same, you will run a version of the command below to create the map file. If they're not exactly the same but only have a minor variation, you will include a regex parameter. For example, in the previous scenario of adding "_V2" after my SQL V2 items, I would include --name-regex "(?<V1Name>.*)" --new-name-regex "${V1Name}_V2" after the command (see the short Python illustration at the end of this post). With the below command, we will map all of the items from the SQL V2 Connector to their cassandra counterparts. It's more likely that there will be more items in the cassandra database than in the SQL V2 Connector, so performing the mapping in this direction will save time. If that is not the case, switch the two datasources in the command. The map command will then produce a csv file we can use for the swapping.

      seeq datasource map same-server --datasource-id "11BB11B1-B1B1-1B11-1BB1-BB11B111BB1B" --new-datasource-id "00AA00A0-A0A0-0A00-0AA0-AA00A000AA0A" --new-datasource-class "cassandraV2"

      If a switch was performed, proceed to the next portion of this step. If not, we'll have to perform the switch in the csv file. Replace the word "Stored" with "Calculated" in the Type column. Cut and paste all of the old columns (Columns C through G) from the map csv into the columns following the new columns. Change the column headings such that old becomes new and new becomes old, and save the file. With this map file, you can then perform the swap across all workbooks or a particular workbook as described in CLI Datasource Swapping.

      Step 3: Archiving SQL V1 Items

      This step is not necessary since the migration is now complete, but sometimes admins would like to clean up their Seeq server by removing the V1 items that aren't used anymore. You can either archive the items in bulk or individually delete them. The first step is to find all of the items that are dependent on the SQL V1 connection. Use the "GET /datasources" endpoint with SQL V1 connection filters to find the ID of your SQL V1 connection. You can then use "GET /items/{id}/dependents" to retrieve the IDs of all the items that depend on the SQL V1 connection. If you'd like to archive these items, you can use the API reference "POST /signals/batch" or the Seeq SDK equivalent to archive the signals in bulk. If you'd like to irreversibly delete the items, you can use the "DELETE /items/{id}" endpoint or the Seeq SDK equivalent. This endpoint will need to be run twice, since deletion requires items to have been archived: the first execution archives and the second deletes. Lastly, the SQL V1 connection itself will need to be archived. Please refer to Removing a Datasource for more information.

      Version48andAfter_datasource.py
      BeforeVersion48_datasource.py
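      As promised above, here is the Step 2 rename mapping illustrated with Python's re module (not the CLI itself); Python uses (?P<...>) for named groups and \g<...> in replacements, whereas the CLI uses (?<...>) and ${...}:

      import re

      v1_name = "Volume From Lab Signals"

      # Equivalent of --name-regex "(?<V1Name>.*)" --new-name-regex "${V1Name}_V2"
      v2_name = re.sub(r"(?P<V1Name>.*)", r"\g<V1Name>_V2", v1_name, count=1)

      print(v2_name)  # Volume From Lab Signals_V2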
  15. Hi Steve, You are correct that there is not currently a function for performing first-order filters. We tend to see the filtering algorithms already available work for the majority of use cases, but we understand the value gained from having an explicit first-order filter. As such, this feature is currently being worked on and I'd expect it to be included in an upcoming release. If you'd like to learn about the other filtering algorithms in the meantime, I recommend looking at https://support.seeq.com/space/KB/592117784/Filtering%20and%20Smoothing where we discuss filtering in Seeq in depth.
  16. Seeq administrators often limit access to Seeq based on an external authentication system. Some examples of this are Windows Authentication, LDAP Authentication, or the built-in Seeq authentication. These mechanisms only limit access to Seeq, not access to the datasources or items within Seeq. Here, we'll discuss how to restrict access to datasources based on user groups brought in through those authentication mechanisms.

      Step 1: Creating the group in Seeq

      First, we'll have to ensure the group is brought into Seeq. If using Seeq Authentication, please refer to https://support.seeq.com/space/KB/239304838/Users%20and%20Groups#Creating-Groups. If not, the external authentication mechanism will only bring in the groups that are the children of the groups that are allowed access. For example, if "Seeq_Users" and "Seeq_Admins" are two groups underneath "Seeq" and "Seeq" is the group allowed by the authentication mechanism, then "Seeq_Users" and "Seeq_Admins" will be brought in as groups. Note that "Seeq" will not be brought in as a group; only its members will be brought in. In the case that "Seeq" should also be brought in as a group, as of Seeq version R22.0.45.00 you can modify the IdentitySynchronization parameter to specify bringing in the "Seeq" group. An example configuration is shown below where "DOMAIN\\Seeq" is being brought into Seeq as a group too. More information can be found in https://support.seeq.com/space/KB/554041498/Identity%20Synchronization%20using%20Windows%20Authentication%20Connector

      {
        "Version" : "com.seeq.link.connectors.windowsauth.config.WindowsAuthConnectorConfigV1",
        "Connections" : [ {
          "Name" : "Windows Auth: grant access to only specified Windows groups",
          "Id" : "7393a87e-611a-4f43-b4a5-20e56f28f5d3",
          "Enabled" : true,
          "Indexing" : {
            "Frequency" : "1w",
            "OnStartupAndConfigChange" : true,
            "Next" : "2020-03-13T16:55:31.050979100Z[UTC]"
          },
          "Transforms" : null,
          "VerboseLogging" : false,
          "AllowGroups" : [ "DOMAIN\\Seeq" ],
          "AllowUsers" : null,
          "IdentitySynchronization" : {
            "Enabled" : true,
            "GroupsToSync" : [ "DOMAIN\\Seeq" ]
          }
        } ],
        "Help" : "For examples and documentation, see https://telemetry.seeq.com/support-link/wiki/spaces/KB/pages/420053401"
      }

      Step 2: Datasource Permissions

      After the group is available in Seeq, we can restrict who has access to the datasource. The steps listed below discuss the connector property transform approach, but additional methods are discussed in https://support.seeq.com/space/KB/596607096/Datasource%20Permissions. The connector property transform is applied in the connector json file located on the Seeq server or remote agent. In the example below, we are modifying all items within the datasource so the "Everyone" group has read access. Note that this security is appended to the existing access control, not replacing it.

      "Transforms" : [ {
        "Inputs" : [ {
          "Property" : "Name",
          "Value" : ".*"
        } ],
        "Outputs" : [ {
          "Property" : "Security String",
          "Value" : "Auth/Seeq/Everyone:r,rd",
          "UnitOfMeasure" : null
        } ],
        "Enabled" : true,
        "Log" : false
      } ]

      The Security String value can be applied to any group, where "Auth" is the datasourceClass of the authentication mechanism, "Seeq" is the datasourceID of the connection, and "Everyone" is the dataID of the group. For built-in Seeq Authentication these items are readable names, but for external authentication mechanisms they tend to be GUIDs. The datasourceClass tends to stay the same without any purposeful modifications. The table below outlines the typical configurations and their mappings.

      Authentication Mechanism        datasourceClass
      Built-in Seeq Authentication    Auth
      Windows Authentication          Windows Auth
      LDAP Authentication             LDAP
      OpenID Connect                  OAuth 2.0

      The datasourceID will vary based on the connection specified in the json connector file. You can find it in the connector json file through the ID of the connection, or by going to the Seeq API Reference and querying the GET /datasources endpoint with a filter for the datasourceClass. The dataID will also change for each group. There is a two-part process for finding this dataID. First, get the Seeq ID of the group from the GET /usergroups endpoint; you can filter the query by the name of the group you're looking for, and the id is located in the group's json section, not in the datasource section. This id should be a Seeq ID, meaning it contains uppercase alphanumeric characters. Second, paste this id into GET /usergroups/{userGroupId} to get the dataID of the user group, which will be located towards the bottom of the response body. With these items, you can modify the security string value and specify the level of access: r,rd is read and read data, w,wd is write and write data, and m is manage. Additional groups can be separated using |.
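      As a tiny sketch of how those pieces assemble into a security string (every value below is a placeholder; a real external-auth group would typically use GUIDs for the datasourceID and dataID):

      # Hypothetical values for illustration
      datasource_class = "Windows Auth"
      datasource_id = "DOMAIN"               # placeholder; find the real one via GET /datasources
      data_id = "DOMAIN\\Seeq_Users"         # placeholder; find the real one via GET /usergroups/{userGroupId}

      # Access levels: r = read, rd = read data, w = write, wd = write data, m = manage
      levels = "r,rd"

      security_string = f"{datasource_class}/{datasource_id}/{data_id}:{levels}"

      # Multiple groups are separated with "|"
      combined = security_string + "|" + "Auth/Seeq/Everyone:r,rd"
      print(combined)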
  17. Contextual data is often brought into Seeq to add more information to time series data. This data tends to be brought in as a condition, with the capsule properties of the condition containing different pieces of information. In some cases, a particular capsule property may not contain just one piece of information; it may contain several pieces that are separated based on some logic or code. Rather than having users visually parse the code to extract the segments of interest, Seeq can be used to extract the substring continuously.

      The code below extracts a substring based on its location in the property. This version increments from left to right, starting at the beginning of the string. Changing the inputs will extract a substring from different positions in the selected property.

      //Inputs Section (Start and end assume reading left to right)
      $condition = $hex_maint //Recommend filtering the condition to only include correct property values
      $property_to_capture = 'Reason Code'
      $start_position = 1 //Incrementing starts from 1
      $number_of_characters = 2 //Including the start

      //Code Section
      $property_signal = $condition.toSignal($property_to_capture).toStep(2wk) //Change duration for interpolation
      $start_position_regex = ($start_position - 1).toString() //Regular expression indexes from 0
      $number_of_characters_regex = ($number_of_characters - 1).toString()
      $property_signal.replace('/.{'+$start_position_regex+'}(?<Hold>.{'+$number_of_characters_regex+'}.).*/','${Hold}')

      This alternative version increments from right to left, starting at the end of the string.

      //Inputs Section (Start and end assume reading right to left)
      $condition = $hex_maint //Recommend filtering the condition to only include correct property values
      $property_to_capture = 'Reason Code'
      $end_position = 1 //Relative to the end, incremented from 1
      $number_of_characters = 4 //Including the end character

      //Code Section
      $property_signal = $condition.toSignal($property_to_capture).toStep(2wk) //Change duration for interpolation
      $end_position_regex = ($end_position).toString()
      $number_of_characters_regex = ($number_of_characters - 1).toString()
      $property_signal.replace('/.*(?<Hold>.{'+$number_of_characters_regex+'}.{'+$end_position_regex+'})$/','${Hold}')

      Note that the output of these formulas is a string. In the case that a numeric value is wanted, append .toNumber() after '${Hold}'). With this substring parsed, all of Seeq's analytical tools can be further leveraged. Some examples are developing histograms based on the values of the substring and making conditions to highlight whenever a particular value in the substring is occurring.
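      To sanity-check what the constructed regular expression captures, here is the left-to-right logic mirrored in Python; the reason-code value is a made-up example:

      import re

      reason_code = "AB42-FURNACE"  # hypothetical property value

      start_position = 1        # incrementing starts from 1
      number_of_characters = 2  # including the start

      # Mirror the Formula: skip (start - 1) characters, then capture (n - 1) characters plus one more
      pattern = ".{%d}(?P<Hold>.{%d}.).*" % (start_position - 1, number_of_characters - 1)
      substring = re.sub(pattern, r"\g<Hold>", reason_code)

      print(substring)  # AB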
  18. To better understand their process, users often want to compare time-series signals in a dimension other than time, for example, seeing how the temperature within a reactor changes as a function of distance. Seeq is built to compare data against time, but this method highlights how we can use time to mimic an alternate dimension.

      Step 1: Sample Alignment

      In order to accurately mimic the alternate dimension, the samples to be included in each profile must occur at the same time. This can be achieved through a couple of methods in Seeq if the samples don't already align.

      Option 1: Re-sampling

      Re-sampling selects points along a signal at set intervals. You can also re-sample based on another signal's keys. Since it's possible for there not to be a sample at a given interval, the interpolated value is chosen. An example Formula demonstrating how to use the function is shown below.

      //Function to resample a signal
      $signal.resample(5sec)

      Option 2: Average Aggregation

      Aggregating allows users to determine the average of a signal over a given period of time and then place this average at a specific point within that period. Signal From Condition can be used to find the average over a period and place this average at a specific timestamp within the period. In the example below, the sample is placed at the start, but alignment will occur if the samples are placed at the middle or end as well.

      Step 2: Delay Samples

      In Formula, apply a delay to the samples of each signal that represents its value in the alternate dimension. For example, if a signal occurs at 6 feet from the start of a reactor, delay it by 6. If there is not a signal with a 0 value in the alternate dimension, the final graph will be offset by the smallest value in the alternate dimension. To fix this, create a placeholder signal in Formula, such as 0, and ensure its samples align with the other samples using the code listed below. This placeholder serves as a signal delayed by 0, meaning it has a value of 0 in the alternate dimension.

      //Substitute Period_of_Time_for_Alignment with the period used above for aligning your samples
      0.toSignal(Period_of_Time_for_Alignment)

      Note: Choosing the unit of the delay depends upon the new sampling frequency of your aligned signals as well as the largest value you will have in the alternate dimension. For example, if your samples occur every 5 minutes, you should choose a unit where your maximum delay is not greater than 5 minutes. Please refer to the table below for selecting units.

      Largest Value in Alternate Dimension    Highest Possible Delay Unit
      23                                      Hour, Hour (24 Hour Clock)
      59                                      Minute
      99                                      Centisecond
      999                                     Millisecond

      Step 3: Develop Sample Profiles

      Use the Formula listed below to create a new signal that joins the samples from your separate signals into a new signal. Replace "Max_Interpolation" with a number large enough to connect the samples within a profile, but small enough to not connect the separate profiles. For example, if the signals were re-sampled every 5 minutes but the largest delay applied was 60 seconds, any value below 4 minutes would work for the Max_Interpolation. This ensures the last sample within a profile does not interpolate to the first sample of the next profile.

      //Make signals discrete to only get raw samples, then use combineWith and toLinear to combine the signals while maintaining their uniqueness
      combineWith($signal1.toDiscrete(), $signal2.toDiscrete(), $signal3.toDiscrete()).toLinear(Max_Interpolation)

      Step 4: Condition Highlighting Profiles

      Create a condition in Formula for each instance of this new signal using the formula below. The isValid() function was introduced in Seeq version 44. For versions 41 to 43, you can use .valueSearch(isValid()). Versions prior to 41 can use .validityCapsules().

      //Develop capsule highlighting the profile to leverage other views based on capsules to compare profiles
      $sample_profiles.isValid()

      Step 5: Comparing Profiles

      Now, with a condition highlighting each profile, Seeq views built around conditions can be used. Chain View can be used to compare the profiles side by side, while Capsule View can overlay these profiles. Since we delayed our samples before, we are able to look at their relative times and use that to represent the alternate dimension.

      Further Applications

      With these profiles now available in Seeq, all of the tools in Seeq can be used to gain more insight. Below are a few examples.

      • Comparing profiles against a golden profile
      • Determining at what value in the alternate dimension each profile reaches a threshold
      • Developing a soft sensor based on another sensor and a calibration curve profile

      Example Use Cases

      • Assess rotating equipment performance based on OEM curve regressions that vary based on equipment speed due to a VFD (alternate dimension = speed)
      • Monitor distillation cut points based on distillation lab data (alternate dimension = lab standard, boil % in this case)
      • Observe the temperature profile along a reactor or well (alternate dimension = distance, length and depth in these cases)
  19. Hi Bryan, Sorry, but it looks as though it is not currently possible to achieve this in a succinct formula. We've made sure to document this request and will hopefully have it available in a future release. One workaround we found allows you to look back over a certain time frame and determine the statistical limits of the grade currently being run. I've included the formula code below with comments on what it does. This won't solve your problem entirely, but it will at least allow you to compare against the current grade. One thing to note is the toNumber() function: it is only needed if your grades come in as numeric values. If instead they are strings, the .toNumber() can be removed.

      //Inputs
      $time_to_lookback = 2 month //Replace the 2 month with how far you'd like to look back to determine the limits
      $max_grade_run_time = 4wk //Replace the 4wk with a value greater than the longest amount of time a grade would be run

      //Making a condition out of the grade signal, which retains the grade used as a capsule property
      $grade_condition = $grade.toCondition().removeLongerThan($max_grade_run_time)
      $current_time = capsule(now()-$time_to_lookback, now())

      //Retrieve all of the capsules over the lookback time where the current grade is being run
      $cap_filtered = $grade_condition.filter($cap -> $cap.getProperty('Value').toNumber() == $grade.getValue($current_time.getEnd()))

      //Determine the statistical limit over the previous times when the current grade was run
      $avg = $pv.aggregate(average(), $cap_filtered, durationKey()).average($current_time)
      $std_dev = $pv.aggregate(stdDev(), $cap_filtered, durationKey()).average($current_time)

      $avg + 2*$std_dev
  20. Hi Bryan, I am assuming that you have a signal showing which grade is being run and that these operating envelopes are pre-set, such that for the same grade they will be the same throughout time. Since we'll have to bring these operating limits into Seeq, we will have to write a lengthy formula either way. The first option is, as you mentioned, manually making scalars for the operating envelopes in Seeq and splicing them together. The example below shows how it looks for a demo grade signal, where we first make the limits and then splice them together based on which grade is being run. Another option is to leverage capsule properties. Rather than making the scalars for splicing, we can assign them as capsule properties during each time a grade is being run, and then convert the properties into a signal. Both of these options result in the same limit, and both result in a lengthy formula. Creating the formula in Excel and copying and pasting it over to Seeq works well for this, especially if you already have the limits in an Excel worksheet. Apart from the grade and the limit, the text is the same for each line, so a mixture of Excel concatenation and copying and pasting will streamline the process. Let us know if we can help.
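      If the limits are already in a table, a short Python sketch like the one below could generate the splice formula text instead of Excel concatenation; the grade values, limits, and the "$grade == value" condition syntax are assumptions for illustration:

      # Hypothetical grade-to-upper-limit table
      grade_limits = {101: 75, 102: 80, 103: 68}

      # Start from a default limit, then splice in each grade's limit while that grade is running
      formula = "0.toSignal()"
      for grade, limit in grade_limits.items():
          formula += f"\n  .splice({limit}.toSignal(), $grade == {grade})"

      print(formula)  # paste the result into Seeq Formula, with $grade mapped to the grade signal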