Kristopher Wiggins

Seeq Team
  • Posts: 21
  • Days Won: 13
  • Reputation: 19

Kristopher Wiggins last won the day on September 20 and had the most liked content!

  1. Since it's just HTML, you can assign it to the worksheet's HTML directly. So you'd only need to follow Step 1 of pulling the workbook's HTML and a portion of Step 3 in the above example; Step 3 in your case would just be ws_to_update.html = table_html, after which you'd push the workbook. If you're looking to repeatedly update a table in Organizer with new information while retaining the other aspects of the Organizer, you'd need a way to capture that table using the re module. Tables in HTML are wrapped in a table tag (i.e. <table> ... </table>), so you can create a regular expression that searches based on this. For additional regular expression help, have a look at https://regex101.com/ to learn more and test different expressions. Once you have your regular expression finalized, you can adapt Step 3 to account for it; a sketch follows below.
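As a rough sketch of that approach (assuming the replacement table HTML is already in a variable named table_html, that ws_to_update and pulled_workbooks come from Step 1 of the example above, and that the worksheet HTML contains exactly one <table> element):

    import re

    # Split the worksheet HTML around the existing table. DOTALL lets the match
    # span newlines; this assumes exactly one <table>...</table> in the document.
    before_html = re.findall(r"(.*)<table.*?</table>", ws_to_update.html, flags=re.DOTALL)[0]
    after_html = re.findall(r".*</table>(.*)", ws_to_update.html, flags=re.DOTALL)[0]

    # Stitch the new table into place and push the workbook back to Seeq
    ws_to_update.html = before_html + table_html + after_html
    spy.workbooks.push(pulled_workbooks)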
  2. The SPy library supports the creation of asset trees through the spy.assets module. These asset trees can include various types of items, such as signals and conditions, as well as calculations like scorecard metrics, and can be used to create numerous Workbench Analysis worksheets and Organizer Topic documents. One question that commonly comes up when making these trees is how to reference attributes located in other parts of the tree.

Roll-Ups

The first example of referencing other items in the tree is through roll-ups. These types of calculations "roll up" attributes from levels below where the roll-up calculation is being performed, whether the level is directly beneath or multiple levels below. These attributes are then combined using logic you provide. For signals and scalars, the options are Average, Maximum, Minimum, Range, Sum and Multiply. For conditions, the options are Union, Intersect, Counts, Count Overlaps, and Combine With. Below are examples where .Cities() is a component beneath the current class. All attributes and assets beneath the Cities component will be searched through and included in the roll-up based on the criteria given in the pick function. Here we're filtering based on Name, but any property, such as Type, can be supplied. Note that Seeq Workbench's search mechanism is used here, so wildcards and regular expressions can be included. Lastly, we specify the kind of roll-up we'd like to perform.

    @Asset.Attribute()
    def Regional_Compressor_Running_Poorly(self, metadata):
        return self.Cities().pick({'Name': 'Compressor Running Poorly'}).roll_up('union')

    @Asset.Attribute()
    def Regional_Total_Energy_Consumption(self, metadata):
        return self.Cities().pick({'Name': 'Total Daily Energy Consumption'}).roll_up('sum')

Child Attributes

The second example looks at how to reference child attributes without rolling them up. Maybe there's a particular attribute that needs to be included in a calculation used at a higher level in the asset tree. For this scenario, the pick function can be used once again; rather than doing a roll-up, we'll just index the particular item we want. Most of the time the goal is to reference one specific item this way, so the criteria passed into the pick function should be specific enough to find a single item, and the index will always be 0. One property that may be of interest here is Template, which lets you specify the particular class that contains the wanted item.

    @Asset.Attribute()
    def Child_Power_Low(self, metadata):
        child_power = self.Cities().pick({"Name": "Compressor Power", "Asset": "/Area (A|C|D)/"})[0]
        return {
            'Name': "Child Power Low",
            'Type': "Condition",
            "Formula": "$child_power < 5",
            "Formula Parameters": {"child_power": child_power}
        }

Parent Attributes

The next example looks at how we can reference parent attributes in calculations beneath them. Rather than reference a particular component, we'll use the parent, and from there include the attribute we want to reference from that parent asset. If looking to reference attributes at higher levels of the tree, chain multiple ".parent" calls; for example, "self.parent.parent" looks two levels above the current level.

    @Asset.Attribute()
    def Parent_Temp_Multiplied(self, metadata):
        parent_temp = self.parent.Temperature()
        return {
            'Name': "Parent Temp Multiplied",
            'Type': "Signal",
            "Formula": "$parent_temp * 10",
            "Formula Parameters": {"parent_temp": parent_temp}
        }

Advanced Selection

In this example, we'll look at how we can combine the previously mentioned options to find items located in other parts of the tree. Here, we're looking to reference items located at the same level of the tree but in another class, so they aren't located beneath the same asset. We have two separate assets beneath the regions, Temperature Items and Power Items. The Temperature_Item class has a calculation called Max Temperature 1 When Compressor Is On, which references an attribute beneath its corresponding Power_Item class. To fetch this attribute, we go up a level to the parent, navigate down to Power_Items, and then pick the attribute.

    class Region(Asset):
        @Asset.Component()
        def Temperature_Items(self, metadata):
            return self.build_components(template=Temperature_Item, metadata=metadata, column_name='Region Temp')

        @Asset.Component()
        def Power_Items(self, metadata):
            return self.build_components(template=Power_Item, metadata=metadata, column_name='Region Power')

    class Power_Item(Asset):
        @Asset.Attribute()
        def Power_1(self, metadata):
            return {
                'Name': 'Power 1',
                'Type': 'Signal',
                'Formula': '$power',
                'Formula Parameters': {'$power': metadata[metadata['Name'].str.contains('Power')].iloc[0]['ID']}
            }

    class Temperature_Item(Asset):
        @Asset.Attribute()
        def Temperature_1(self, metadata):
            return {
                'Name': 'Temperature 1',
                'Type': 'Signal',
                'Formula': '$temp',
                'Formula Parameters': {'$temp': metadata[metadata['Name'].str.contains('Temperature')].iloc[0]['ID']}
            }

        @Asset.Attribute()
        def Temp_When_Comp_On(self, metadata):
            power_adjacent_class = self.parent.Power_Items().pick({'Name': "Power 1"})[0]
            return {
                'Name': "Max Temperature 1 When Compressor Is On",
                'Type': 'Signal',
                'Formula': '$temp1.aggregate(maxValue(), ($power1<5).removeLongerThan(7d), durationKey())',
                'Formula Parameters': {
                    'temp1': self.Temperature_1(),
                    'power1': power_adjacent_class
                }
            }

Item Group

To help with even more complex attribute selections, we introduced the ability to use ItemGroup rather than the pick and parent functions. ItemGroup provides an alternate way of finding items located in other parts of the tree using standard Python logic. Below are two examples using ItemGroup to perform selections that would be very complex to do with the pick function.

Advanced Roll-up

Roll-ups using pick reference one component beneath your class, but what if a roll-up is needed across multiple components? ItemGroup can be used for a simple roll-up as well as for this more complex case. Rather than specifying a particular component and picking within it, we can use ItemGroup to iterate over every asset. Here, we retrieve every High Power attribute beneath the assets that are children of the current asset.

    @Asset.Attribute()
    def Compressor_High_Power(self, metadata):
        # Helpful functions:
        #   asset.is_child_of(self)      - Is the asset one of my direct children?
        #   asset.is_parent_of(self)     - Is the asset my direct parent?
        #   asset.is_descendant_of(self) - Is the asset below me in the tree?
        #   asset.is_ancestor_of(self)   - Is the asset above me? (i.e. parent/grandparent/great-grandparent/etc.)
        return ItemGroup([
            asset.High_Power() for asset in self.all_assets()
            if asset.is_child_of(self)
        ]).roll_up('union')

Referencing Items In A Different Section

In this example, we're looking to reference attributes in other similar assets that are located in different sections of the tree. We could use the approach from the Advanced Selection section, but what if these compressors aren't necessarily at the same level of the tree, or sit beneath different components? They would then have different pathways, and the previously stated method wouldn't work. Using ItemGroup, we can iterate through all assets and find any that are also based on the Compressor class. Here we also exclude the current asset and then perform a roll-up based on all of the other High Power attributes.

    @Asset.Attribute()
    def Other_Compressors_Are_High_Power(self, metadata):
        return ItemGroup([
            asset.High_Power() for asset in self.all_assets()
            if isinstance(asset, Compressor) and self != asset
        ]).roll_up('union')
  3. To follow up on this item, currently there is not a simple way to export Scorecard data into Seeq Data Lab. In R53, Seeq added a copy button to Table View that allows users to copy the table and paste it into other applications like Excel. Please email support@seeq.com in order to create a ticket in our system so you'll be notified when there is an easy way to export scorecard data to Seeq Data Lab. Below is an example of a script that exports scorecard data using the SDK. What is exported are the samples shown when the scorecard is trended in Trend View, along with whether they're uncertain/subject to change.

Note: This code was developed based on Seeq R53.3.0. It's common for Seeq to change its SDK to enable new features, so this code may not work for other versions of Seeq. Using the Seeq Python module (SPy) is the only certain way to ensure scripts will work across versions.

    from seeq import sdk  # Include "from seeq import spy" and a spy.login command if not working in Seeq Data Lab
    import pandas as pd
    import datetime as dt

    formAPI = sdk.FormulasApi(spy.client)
    metricAPI = sdk.MetricsApi(spy.client)

    ################################ User Input Area ################################
    workbench_url = "https://explore.seeq.com/9C3916CE-0778-489C-90EE-7BC4C5734640/workbook/66D753DE-9189-4E8D-BE65-AB7BA6408EC8/worksheet/B476B826-4345-4699-8516-45F87AD50571"
    ##################################################################################

    # Pull in the metrics displayed on the worksheet as well as its display range
    analysis_items = spy.search(workbench_url, quiet=True)
    analysis_metrics = analysis_items[analysis_items['Type'].str.contains('Metric')]
    analysis_metrics.reset_index(drop=True, inplace=True)

    pulled_analysis = spy.workbooks.pull(
        spy.workbooks.search({"ID": spy.utils.get_workbook_id_from_url(workbench_url)}, recursive=True, quiet=True),
        quiet=True)
    for ws in pulled_analysis[0].worksheets:
        if spy.utils.get_worksheet_id_from_url(workbench_url) == ws.id:
            display_range = ws.display_range

    # Define function to extract scorecard metric data (data is based on the trended metric, not its tabular value)
    def extract_metric_data(metric_id, start_time, end_time):
        metric = metricAPI.get_metric(id=metric_id)
        metric_calc_id = metric.display_item.id
        metric_name = metric.name
        # API Exception 400 happens for Simple Metrics; use different SDK parameters for them
        try:
            result = formAPI.run_formula(
                start=start_time,
                end=end_time,
                formula="$series",
                parameters=["series=" + metric_calc_id],
                limit=10000  # Limit on number of samples returned
            )
        except Exception:
            result = formAPI.run_formula(
                start=start_time,
                end=end_time,
                fragments=['capsule=capsule("' + start_time + '","' + end_time + '")', 'laneWidth=315000ms'],
                function=metric_calc_id,
                limit=10000  # Limit on number of samples returned
            )
        samples_df = pd.DataFrame()
        # Iterate through the samples pulled and add each to a DataFrame that includes
        # the sample's value and whether it's uncertain
        for sample in result.samples.samples:
            ts_epoch = sample.key
            ts_datetime = dt.datetime.fromtimestamp(ts_epoch / 1000000000)
            sample_df = pd.DataFrame(index=[ts_datetime],
                                     data={metric_name + " Value": sample.value,
                                           metric_name + " Uncertain": sample.is_uncertain})
            samples_df = samples_df.append(sample_df)
        # Remove returned values that are None, i.e. the trend had a gap during that time
        return samples_df.dropna(subset=[metric_name + ' Value'])

    # Iterate over the metrics gathered from the worksheet and get each one's trended
    # sample data using the defined function
    all_metric_data = pd.DataFrame()
    for m_id in analysis_metrics['ID']:
        single_metric = extract_metric_data(
            metric_id=m_id,
            start_time=display_range['Start'].strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
            end_time=display_range['End'].strftime("%Y-%m-%dT%H:%M:%S.%fZ"))
        all_metric_data = pd.concat([all_metric_data, single_metric])

    display(all_metric_data)

    # Replace below with the metric name to only see that metric's data
    # met_name = "Example Metric"
    # all_metric_data[[met_name + ' Value', met_name + ' Uncertain']].dropna(subset=[met_name + " Value"])
  4. To build on what Thorsten sent, below is a Python adaptation of his code. Rather than manually specifying the IDs required, this information is pulled from the first two histograms displayed on a worksheet. The next few steps in the code divide the two histograms, make a graph of the resulting histogram using the plotly or matplotlib library, and export the results to a CSV file.

Note: This code was developed based on Seeq R53.3.0. It's common for Seeq to change its SDK to enable new features, so this code may not work for other versions of Seeq. Using the Seeq Python module (SPy) is the only certain way to ensure scripts will work across versions.

    from seeq import sdk
    from seeq import spy
    import pandas as pd

    workbench_url = "https://explore.seeq.com/workbook/BE0673EA-9DA3-49D7-BA99-33CA77405E7E/worksheet/A4F2F47E-F238-4075-9030-FFDABE34F2DF"

    formAPI = sdk.FormulasApi(spy.client)

    analysis_items = spy.search(workbench_url, quiet=True)
    analysis_histograms = analysis_items[analysis_items['Type'] == 'Chart']
    analysis_histograms.reset_index(drop=True, inplace=True)

    pulled_analysis = spy.workbooks.pull(
        spy.workbooks.search({"ID": spy.utils.get_workbook_id_from_url(workbench_url)}, quiet=True),
        quiet=True)
    for ws in pulled_analysis[0].worksheets:
        if spy.utils.get_worksheet_id_from_url(workbench_url) == ws.id:
            display_range = ws.display_range

    def hist_search(hist_id, display_range):
        hist_info = formAPI.get_function(id=hist_id)
        hist_params = [elt.name + "=" + elt.item.id for elt in hist_info.parameters if elt.name != 'viewCapsule']
        # hist_capsule = [elt.name + "=" + elt.formula for elt in hist_info.parameters if elt.name == 'viewCapsule']
        # Required since the viewCapsule in the formula function isn't always the same as the display range
        hist_capsule = ["viewCapsule=capsule(\"" + display_range['Start'].strftime('%Y-%m-%dT%H:%M:%S.%fZ') +
                        "\", \"" + display_range['End'].strftime('%Y-%m-%dT%H:%M:%S.%fZ') + '")']
        output = formAPI.run_formula(
            function=hist_id,
            parameters=hist_params,
            fragments=hist_capsule)
        return output

    def extract_hist_data(output):
        headers = [header.name for header in output.table.headers]
        hist_df = pd.DataFrame(columns=headers, data=output.table.data)
        if 'timeCol_Day Of Week' in headers:
            day_week_dict = {1: 'Monday', 2: 'Tuesday', 3: 'Wednesday', 4: 'Thursday',
                             5: 'Friday', 6: 'Saturday', 7: 'Sunday'}
            hist_df['timeCol_Day Of Week'] = hist_df['timeCol_Day Of Week'].apply(lambda x: day_week_dict[int(x)])
        elif 'timeCol_Month' in headers:
            month_year_dict = {1: 'January', 2: 'February', 3: 'March', 4: 'April',
                               5: 'May', 6: 'June', 7: 'July', 8: 'August', 9: 'September',
                               10: 'October', 11: 'November', 12: 'December'}
            hist_df['timeCol_Month'] = hist_df['timeCol_Month'].apply(lambda x: month_year_dict[int(x)])
        elif 'timeCol_Quarter' in headers:
            hist_df['timeCol_Quarter'] = hist_df['timeCol_Quarter'].apply(lambda x: 'Q' + str(x))
        if 'signalToAggregate' in headers[-1]:
            # Condition aggregations are still shown on the backend as signalToAggregate
            hist_df.rename(columns={headers[-1]: 'signaltoAggregate'}, inplace=True)
        hist_df.set_index(headers[:-1], inplace=True)
        return hist_df

    hist_1 = extract_hist_data(hist_search(analysis_histograms.loc[0, "ID"], display_range))
    hist_2 = extract_hist_data(hist_search(analysis_histograms.loc[1, "ID"], display_range))
    result = hist_1.div(hist_2, axis=1)

    # Graph with matplotlib
    matplotlib_fig = result.unstack().plot(kind='bar', y=result.columns[-1], stacked=False)
    matplotlib_fig

    # Graph with plotly
    import plotly.express as px
    hold = result.reset_index(drop=False, inplace=False)
    plotly_graph = px.bar(hold, x=hold.columns[0], y=hold.columns[-1], color=hold.columns[1], barmode='group')
    plotly_graph

    # Export to CSV
    hist_1_csv = hist_1.rename(columns={'signaltoAggregate': analysis_histograms.loc[0, "Name"]})
    hist_2_csv = hist_2.rename(columns={'signaltoAggregate': analysis_histograms.loc[1, "Name"]})
    result_csv = result.rename(columns={'signaltoAggregate': (analysis_histograms.loc[0, "Name"] + "/" + analysis_histograms.loc[1, "Name"])})
    result_csv.unstack().transpose().to_csv('TEST.csv')

1863770801_DivideTwoHistogramsOutputtoCSV.ipynb
  5. Seeq Data Lab allows users to programmatically interact with data connected to Seeq through Python. With this, users can create numerous advanced visualizations; some examples are Sankey diagrams, waterfall plots, radar plots and 3D contour plots. These plots can then be pushed back into Seeq Organizer for other users to consume. A common workflow that stems from this process is the need to update the Python visualizations in an existing Organizer Topic as newer data becomes available. Here we'll look over the steps to update an existing Organizer Topic with a new graphic.

Step 1: Retrieve the Workbook HTML

Behind every Organizer Topic is the HTML that controls what the reports display. We'll need to modify this HTML to add a new image while also retaining whatever pieces of Seeq content were already on the report.

    pulled_workbooks = spy.workbooks.pull(spy.workbooks.search({'Name': 'Organizer Topic Name'}))
    org_topic = pulled_workbooks[0]  # Note you may need to confirm that the first item in pulled_workbooks is the topic of interest
    ws_to_update = org_topic.worksheets[0]  # Choose the index based on the worksheet intended to be updated
    ws_to_update.html  # View the HTML behind the worksheet

Step 2: Create the HTML for the Image

The add_image function can be used to generate the HTML that will be inserted into the Organizer Topic HTML.

    replace_html = ws_to_update.document.add_image(filename="Image_To_Insert.png")
    replace_html

Step 3: Replace the HTML and Push Back to Seeq

To find where in the Organizer Topic HTML to make the replacement, we can use the re module. This will allow us to parse the HTML string to find our previously inserted image, which should begin with "<img src=". Note that additional changes are required if multiple images are included in the report.

    import re
    before_html = re.findall("(.*)<img src=", ws_to_update.html)[0]  # Capture everything before the image
    after_html = re.findall(".*<img src=.*?>(.*)", ws_to_update.html)[0]  # Capture everything after the image
    full_html = before_html + replace_html + after_html  # Combine the before and after with the HTML generated for the new picture
    ws_to_update.html = full_html  # Reassign the HTML to the worksheet and push it back to Seeq
    spy.workbooks.push(pulled_workbooks)
  6. Hi Sivaji, Is the asset tree made using Seeq's Python library (SPy), or is it made from a connector? For non-SPy-based trees, you would need to use our Software Development Kit (SDK) to add signals into the asset tree. This process can be complex and depends on the use case, so I'd recommend emailing support@seeq.com for assistance. If the asset tree is made via SPy and you're looking to include new calculations/signals in it, you should be able to just add the calculations as part of your tree and re-push it; the existing tree will stay as is but will be appended with the new calculations, as sketched below.
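A minimal sketch of that re-push workflow using spy.assets.Tree (the tree, workbook, asset, and calculation names here are all illustrative):

    from seeq import spy

    # Pull an existing SPy-built tree by name (assumed to be 'My Tree' in workbook 'My Workbook')
    tree = spy.assets.Tree('My Tree', workbook='My Workbook')

    # Add a new calculation beneath an existing asset; existing items are left untouched
    tree.insert(name='Daily Average Temperature',
                formula='$t.aggregate(average(), days(), startKey())',
                formula_parameters={'$t': 'Temperature'},
                parent='Area A')

    # Re-pushing appends the new calculation to the tree already in Seeq
    tree.push()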
  7. Hi Muhammad, I'd recommend emailing support@seeq.com for further assistance so a member of our System Reliability team can schedule a meeting with you to look over the issue. Feel free to copy and paste exactly what you've written here and suggest times for a potential meeting.
  8. As an add-on to this topic, there can be times when one wants to push a different scorecard type. The previous example shows how to create a Simple Scorecard, but similar logic can be applied to make Condition and Continuous Scorecards.

Condition Scorecard

Since the Condition Scorecard is based on a condition, we need to retrieve the condition to be used. This can be done using spy.search again.

    search_result_condition = spy.search({"Name": "Stage 2 Operation", "Scoped To": "C43E5ADB-ABED-48DC-A769-F3A97961A829"})

From there we can tweak the scorecard code to include the bounding condition, which is the condition over which this calculation is performed in the scorecard. Note that scorecard requires conditions with a maximum capsule duration, so an additional parameter is required if the condition does not have one. Below is the code.

    my_metric_input_condition = {
        'Type': 'Metric',
        'Name': 'My Metric Condition',
        'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
        'Statistic': 'Average',
        'Bounding Condition': {'ID': search_result_condition[search_result_condition['Name'] == 'Stage 2 Operation']['ID'].iloc[0]},
        'Bounding Condition Maximum Duration': '30h'  # Required for conditions without a maximum capsule duration
    }
    spy.push(metadata=pd.DataFrame([my_metric_input_condition]), workbook='Example Scorecard')

Continuous Scorecard

For Continuous Scorecards, users need to specify the rolling window over which to perform the calculations. To do this, a Duration and a Period are provided: the Duration sets the length of the rolling window, and the Period sets how often a new rolling window starts.

    my_metric_input_continuous = {
        'Type': 'Metric',
        'Name': 'My Metric Continuous',
        'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
        'Statistic': 'Average',
        'Duration': '1d',  # Length of the rolling window the calculation is done over
        'Period': '3d',  # How often the calculation is performed
    }
    spy.push(metadata=pd.DataFrame([my_metric_input_continuous]), workbook='Example Scorecard')
  9. Hi Jack, At this time, the only way to share an Organizer document with someone who does not have a Seeq account is as a PDF. Instructions for generating this PDF are available at https://seeq.atlassian.net/wiki/spaces/KB/pages/159121437/Publishing+a+PDF
  10. Hi Yanmin, For your first question, yes, your time column is treated as the x-axis in Seeq; every signal trended in Seeq naturally incorporates this time column as its x-axis. When you say "create a signal that includes this time information", are you thinking of having time on the y-axis as well? For your second question, yes, we can. Seeq has a function called runningDelta(), which calculates the difference between successive samples. So in your scenario, (b2-b1) would be captured by $b.runningDelta(), and (b2-b1)/(a2-a1) would be $b.runningDelta()/$a.runningDelta(). This would all be done in Seeq's Formula tool; hopefully the example screenshot below helps. If you'd like help understanding or would like to discuss more, please come to our office hours, where a member of our team can help you: https://info.seeq.com/office-hours Regards, Kris
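For reference, a minimal Formula sketch of that ratio (assuming the two signals are mapped to the variables $a and $b in the Formula tool's variable list):

    // Difference of successive samples of b, divided by the
    // difference of successive samples of a
    $b.runningDelta() / $a.runningDelta()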
  11. Hi Sivaji, Thanks for coming to Office Hours. I'll post the resolution here in case others have the same error occur. When pushing data to Seeq from a Python environment like Seeq Data Lab, I'd recommend including a Value Unit Of Measure column in your DataFrame. By default, if this column is excluded, Seeq treats the pushed data as having a null Value Unit Of Measure. Since Seeq can't modify the type of a Value Unit Of Measure ("string", null, "a unit"), the error above occurred because you were trying to modify the Value Unit Of Measure to be a string from its original null value. To resolve the issue, you will need assistance from your Seeq Admin to hard delete (not just archive) the item. If this item is scoped to a workbook, it may be easier to just push to a new workbook. Feel free to reach out to support@seeq.com if help is needed. A sketch of a push that sets the unit up front follows below.
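As a minimal sketch (the signal name, unit, and workbook are illustrative, and the metadata index is assumed to match the data column name):

    from seeq import spy
    import pandas as pd

    # Hypothetical sample data with a timestamp index
    data = pd.DataFrame({'My Signal': [72.1, 72.4]},
                        index=pd.to_datetime(['2021-01-01T00:00:00Z', '2021-01-01T01:00:00Z']))

    # Supply the Value Unit Of Measure at the first push so it is never created as null
    metadata = pd.DataFrame([{
        'Name': 'My Signal',
        'Type': 'Signal',
        'Value Unit Of Measure': '°F'
    }], index=['My Signal'])

    spy.push(data=data, metadata=metadata, workbook='Example Workbook')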
  12. The SPy module serves as the core of Seeq Data Lab. Because it is a fairly new Seeq offering and a Python module, the extensive documentation users are accustomed to with other Seeq applications (Workbench/Organizer Topic) is not available yet. Below are some tips for learning more about the SPy module.

Option 1: Read through the SPy Documentation

When working in Seeq Data Lab, a folder titled "SPy Documentation" is automatically generated when creating a new project. If working outside of Seeq Data Lab, the command spy.docs.copy() can be used after importing the SPy module to download the documentation. The SPy documentation contains a tutorial to guide users through most of the things that can be done with SPy. Take a look through the documentation and chances are there is an example discussing what you would like to use SPy to do.

Option 2: Access an Example of the Item

When attempting to do something not discussed in the SPy documentation, try pulling an example of the item you're looking to modify. For example, if looking to modify the format of scorecards on a worksheet, the item to bring in would be a worksheet, since this format is stored on the worksheet rather than on an individual scorecard. A combination of spy.search and spy.pull will allow you to access an example of the item; a sketch follows below. You can then leverage the keyboard shortcuts discussed in Jupyter Docstring Shortcut to learn more about the options available. Typing "." and then pressing Tab will show the properties and methods associated with that item, and Shift+Tab will show the docstring associated with a property or method. Docstrings are also available for SPy functions. In addition to the previously mentioned shortcuts, another way to access the docstring is to type the function followed by a question mark and run the cell; the ? makes the full docstring appear instantly rather than as a scrollable pop-up. For example, the code below returns the docstring for the spy.search function.

    spy.search?
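A minimal sketch of Option 2 (the workbook name is illustrative):

    from seeq import spy

    # Pull an example workbook so its worksheet objects can be inspected interactively
    workbooks = spy.workbooks.pull(spy.workbooks.search({'Name': 'Example Analysis'}))
    ws = workbooks[0].worksheets[0]

    # In a Jupyter cell, type "ws." and press Tab to list properties and methods,
    # or place the cursor inside a method call and press Shift+Tab to view its docstring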
  13. As covered in Push Scorecard from Seeq Data Lab, it is possible to push a scorecard from Seeq Data Lab into a Seeq Workbench Analysis. In this post, we'll cover how to format the display of a worksheet in Scorecard view. The scorecards can be pre-made in Workbench or previously pushed from Seeq Data Lab. Please refer to R21 Scorecard Metric for more information about how to modify the scorecard display format in Workbench. To format the display of a worksheet, we first need to access the worksheet to be changed. In these examples, we'll assume that we're modifying an existing worksheet; similar logic can be applied to new worksheets being created.

Step 1: Pull in the worksheet

First, using the spy.workbooks.search and spy.workbooks.pull commands, we can pull information about the workbook of interest. We can then navigate through the workbook in Python to access the contents of the worksheet.

    # The output of this command is a list of pulled workbooks that match the criteria passed into the search command
    pulled_workbooks = spy.workbooks.pull(spy.workbooks.search({"Workbook Type": "Analysis", "Name": "Example Analysis"}))

    # You can then access the worksheet to be changed using the command below. A variation that
    # accesses the worksheet by its position rather than its name is commented beneath
    scorecard_worksheet = pulled_workbooks[0].worksheet("Scorecard Worksheet")
    # scorecard_worksheet = pulled_workbooks[0].worksheets[1]

Step 2: Modifying the Scorecard View

We can then modify the format of the display using the code below.

    # This line is only needed if the worksheet is not in Scorecard view
    scorecard_worksheet.view = "Scorecard"

    # The lines below change the format to only show the Start time in an "l" format, which is m/d/yyyy
    scorecard_worksheet.scorecard_date_display = "Start"
    scorecard_worksheet.scorecard_date_format = "l"

For additional formatting options, please take a look at the docstrings below.

    @property
    def scorecard_date_display(self):
        """
        Get/Set the date display for scorecards

        Parameters
        ----------
        str or None
            The dates that should be displayed for scorecards. Valid values are:

            =============== ================================
            Date Display    Result
            =============== ================================
            None            No date display
            'Start'         Start of the time period only
            'End'           End of the time period only
            'Start And End' Start and end of the time period
            =============== ================================

        Returns
        -------
        str or None
            The scorecard date display
        """
        return self._get_scorecard_date_display()

    @property
    def scorecard_date_format(self):
        """
        Get/Set the format for scorecard date displays

        Parameters
        ----------
        str
            The string defining the date format. Formats are parsed using
            momentjs. The full documentation for the momentjs date parsing
            can be found at https://momentjs.com/docs/#/displaying/

        Examples
        --------
        "d/m/yyy" omitting leading zeros (eg, 4/27/2020): l
        "Mmm dd, yyyy, H:MM AM/PM" (eg, Apr 27, 2020 5:00 PM): lll
        "H:MM AM/PM" (eg, "5:00 PM"): LT

        Returns
        -------
        str
            The formatting string
        """
        return self._get_scorecard_date_format()

Step 3: Push the workbook back to Workbench

Pushing the workbook back into Workbench will cause the existing workbook to update to match the formatting we specified in Seeq Data Lab.

    # Push the workbook back into Workbench
    spy.workbooks.push(workbooks=pulled_workbooks)
  14. Hello Carl, We've significantly improved the CSV Import tool to handle situations similar to yours. Feel free to look at CSV Import 2.0 to learn more about some of its benefits. As for your particular issue of different date formats, we've added an option in Seeq Version 49 to import day-first timestamps. This parameter can also be set at the server level by an admin so users won't have to keep customizing the CSV Import tool.
  15. Prior to Seeq major version R20, the only way to bring data from a SQL datasource into Seeq was through the SQL Connector V1. From major version R20 and beyond, Seeq offers the SQL Connector V2, which features many improvements; for more information, please take a look at SQL Connectors. In order to gain the improvements of the new connector, admins may want to migrate their old SQL V1 connections, and all of the Seeq calculations built from them, to V2. The steps below can be used as a rough guide. Please contact Seeq support if attempting this, as different SQL configurations will require changes to the steps listed below.

Step 1: Creating your SQL V2 Connections

One significant difference between the SQL V1 and SQL V2 Connectors is that SQL V1 only connects to the SQL datasource; to actually query information from it, individual Seeq formulas have to be written for each signal or condition being retrieved. The SQL V2 Connector, however, can connect to the SQL datasource and perform a single query that brings in numerous signals and conditions. In order to replicate the queries made from the SQL V1 connections in SQL V2, the best approach is to create a new query that generalizes the SQL V1 queries rather than copying each individual one. Below is a simple example of a V1 query and its V2 equivalent. Note that in the V1 version, 4 different formulas had to be made to retrieve volume, thickness, temperature and cost.

V1 Query: [screenshot]

V2 Query: [screenshot]

The SQL V1 and V2 Connectors are separate, such that each item can have its own name, description, etc. but still point to the same data. For the purpose of migrating from SQL V1 to V2, it is best to have the signals/conditions brought in from SQL V1 and V2 share either the same names or a small variation that is consistent throughout all items. For example, if the signal made from the SQL V1 connection was titled "Volume From Lab Signals", the SQL V2 query should either replicate that name or apply a slight variation such as "Volume From Lab Signals_V2".

Step 2: Migrating SQL V1 Items to SQL V2

In this step, we will leverage Seeq's command line interface to swap all of the SQL V1 items with their V2 counterparts. Please refer to CLI Datasource Swapping for more information about datasource swapping. We'll assume here that the mapping is performed on the same server. By default, datasource swapping looks at all of the items available in a datasource and then determines which meet the criteria for swapping. Since we are working with Seeq's internal database that stores calculations, we have to modify a file on the Seeq server that controls datasource swapping. For a common Seeq install, this file is located at "C:\Program Files\Seeq Server\pilot\datasource.py". Save a copy of the file outside of the Seeq folder in order to revert it back to the original once the migration is complete. Depending on your version number, replace the datasource file with the appropriate Python file attached at the bottom of this post; after download, rename the file to "datasource.py".

The next step is to perform the map. For this walk-through, we will assume the Seeq database that houses these SQL V1 items is the cassandra database. To verify, look at the item properties on one of the signals/conditions made in Formula that uses the SQL V1 connection. The cassandra database has a Datasource ID = default and a Datasource Class = cassandraV2. We have to specify a class for the cassandra datasource since its Datasource ID is shared by other internal Seeq databases. For the SQL V2 Connector, we will assume it has a Datasource ID = 11BB11B1-B1B1-1B11-1BB1-BB11B111BB1B, but this can be found in the connector file. If your SQL V1 and V2 items are named the same, you will run a version of the command below to create the map file. If they're not exactly the same but only have a minor variation, you will include a regex parameter; for example, in the previous scenario of adding "_V2" after the SQL V2 items, I would append --name-regex "(?<V1Name>.*)" --new-name-regex "${V1Name}_V2" to the command. With the command below, we will map all of the items from the SQL V2 Connector to their cassandra counterparts. It's more likely that there will be more items in the cassandra database than in the SQL V2 Connector, so performing the mapping in this direction will save time; if that is not the case, switch the two datasources in the command. The map command will then produce a CSV file we can use for the swapping.

    seeq datasource map same-server --datasource-id "11BB11B1-B1B1-1B11-1BB1-BB11B111BB1B" --new-datasource-id "00AA00A0-A0A0-0A00-0AA0-AA00A000AA0A" --new-datasource-class "cassandraV2"

If a switch was performed, proceed to the next portion of this step. If not, we'll have to perform the switch in the CSV file: replace the word "Stored" with "Calculated" in the Type column, cut and paste all of the old columns (Columns C through G) from the map CSV into the columns following the new columns, change the column headings such that old becomes new and new becomes old, and save the file. With this map file, you can then perform the swap across all workbooks or a particular workbook as described in CLI Datasource Swapping.

Step 3: Archiving SQL V1 Items

This step is not necessary since the migration is now complete, but sometimes admins would like to clean up their Seeq server by removing the V1 items that aren't used anymore. You can either archive the items in bulk or individually delete them. The first step is to find all of the items that depend on the SQL V1 connection. Use the "GET /datasources" endpoint with SQL V1 connection filters to find the ID of your SQL V1 connection, then use "GET /items/{id}/dependents" to retrieve the IDs of all the items that depend on it. If you'd like to archive these items, you can use the API reference "POST /signals/batch" endpoint or the Seeq SDK equivalent to archive the signals in bulk. If you'd like to irreversibly delete the items, you can use the API reference "DELETE /items/{id}" endpoint or the Seeq SDK equivalent; this endpoint will need to be run twice, since deletion requires items to have been archived first. The first execution archives the item and the second deletes it. Lastly, the SQL V1 connection itself will need to be archived; please refer to Removing a Datasource for more information.

Version48andAfter_datasource.py
BeforeVersion48_datasource.py
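For the delete portion of Step 3, a rough SDK sketch follows. It assumes the generated Seeq SDK exposes a method matching the "DELETE /items/{id}" endpoint named above (e.g. ItemsApi.archive_item with a delete flag); verify the exact method name and parameters against your server's SDK before use, and treat the IDs as placeholders:

    from seeq import sdk, spy

    items_api = sdk.ItemsApi(spy.client)

    # Placeholder list of dependent item IDs gathered from GET /items/{id}/dependents
    dependent_ids = ['ITEM-ID-1', 'ITEM-ID-2']

    for item_id in dependent_ids:
        items_api.archive_item(id=item_id)               # First call archives the item
        items_api.archive_item(id=item_id, delete=True)  # Second call permanently deletes it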