Kin How

Seeq Team
Everything posted by Kin How

  1. You can try archiving the conditions using the API. The code below should archive the conditions in your worksheet.

```python
from seeq import sdk
import pandas as pd

# Set up the SDK endpoint
items_api = sdk.ItemsApi(spy.client)

# Search the worksheet and archive every item found
worksheet_url = 'worksheet_url'
condition_df = spy.search(worksheet_url)
for id in condition_df['ID']:
    items_api.archive_item(id=id)
```
  2. The dotted border and lighter color indicate uncertainty. We are still working on a new version to ensure consistency between uncertain and certain data trends.
  3. In R63, "Ungridded original timestamps" was added as an option to the "Samples table grid". You will see this option after upgrading to R63. https://support.seeq.com/kb/latest/cloud/odata-export
  4. Hi Nitish, You need to deselect "Auto" before setting a custom "Axis Max".
  5. Unsure why, since the Formula shouldn't care if the signal variable name is reused. I was told the variables don't work globally, e.g. either of the following would work as long as they are tied appropriately to the formula parameters. Is this not true?

($OFCoC2 + $OFCoC3 + $OFCoC5 + $OFCoC6 + $OFCoC7) / 100 * $FaPOF
($a + $b + $c + $d + $e) / 100 * $g

Yes, either of the formula parameter combinations will work for your case. The error message you received, SPy Error, indicates that there were multiple entries of $OFPOF in the DataFrame. This suggests that there might be more than one match found in your df. To confirm this, could you please print df and check?
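To illustrate the duplicate check suggested above, here is a minimal sketch in plain pandas (the names and IDs below are made up for illustration; in practice df comes from spy.search()):

```python
import pandas as pd

# Hypothetical search results; in practice this DataFrame comes from spy.search()
df = pd.DataFrame({
    'Name': ['Feed and Products Olefin Feed+HYC',
             'Feed and Products Olefin Feed+HYC',
             'Olefin Feed Composition_online C3='],
    'ID': ['ID-1', 'ID-2', 'ID-3'],
})

# Count how many rows match each Name; more than one match is exactly
# the ambiguity that makes a formula-parameter lookup fail
counts = df['Name'].value_counts()
duplicates = counts[counts > 1]
print(duplicates)
```

If `duplicates` is non-empty, tighten the search filters (e.g. also match on 'Asset') before mapping IDs to formula parameters.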
  6. Your code should return the actual ID of the item after adding ['ID'].iloc[0] to the end of each line. Please try the "Formula Parameters" below:

```python
{
    '$OFCoC2': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C3=')]['ID'].iloc[0],
    '$OFCoC3': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C4=1')]['ID'].iloc[0],
    '$OFCoC5': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C4=2c')]['ID'].iloc[0],
    '$OFCoC6': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C4=2t')]['ID'].iloc[0],
    '$OFCoC7': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C4=i')]['ID'].iloc[0],
    '$FaPOF': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Feed and Products Olefin Feed+HYC')]['ID'].iloc[0]
}
```

After that, spy.push() should run without error. Let me know the outcome after trying it.
  7. To ensure a successful metadata push, the Formula Parameters dictionary must contain the actual ID of the item in Seeq. In your screenshot, row 0 failed to push because the Formula Parameters did not contain the actual ID for each item. You cannot proceed with just {'$OFCoC2': ['ID'], ...}. Instead, you need to specify the ID for each item, for example {'$OFCoC2': ['0EE76D39-3D08-FF40-BBFF-53255CCEB514'], ...}. You can use spy.search() to obtain the ID of each item in your formula and map those IDs to the corresponding Formula Parameter. Therefore, you need to push row 0 to Seeq to get the ID of the item and map the ID to row 1 before you can push row 1. When working with formulas, you are free to choose any variable name you prefer, as long as the correct ID is mapped under the Formula Parameters. For instance, for row 1, you can use $a/$b. The only requirement is that the IDs of $a and $b are correctly set, for example {'$a': '0EE76D39-3D08-FF40-BBFF-53255CCEB514', '$b': '0EE76D39-3D08-FF40-BBFF-53255CCEB55'}.
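A minimal sketch of the name-to-ID mapping described above, using plain pandas (the signal names and the second ID are illustrative assumptions; in practice df comes from spy.search()):

```python
import pandas as pd

# Hypothetical spy.search() result for the two items used in the formula
df = pd.DataFrame({
    'Name': ['Numerator Signal', 'Denominator Signal'],
    'ID': ['0EE76D39-3D08-FF40-BBFF-53255CCEB514',
           '4E9416E8-9C75-426A-8E0A-4D07432CAC5D'],
})

# Map each formula variable to the actual Seeq ID found by the search;
# this dictionary is what goes in the 'Formula Parameters' metadata column
formula_parameters = {
    '$a': df[df['Name'] == 'Numerator Signal']['ID'].iloc[0],
    '$b': df[df['Name'] == 'Denominator Signal']['ID'].iloc[0],
}
print(formula_parameters)
```

The variable names ('$a', '$b') are arbitrary; only the mapped IDs must be real items in Seeq.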
  8. Hi Edmund, I have a suggestion that may be helpful. You can create separate conditions for each color threshold - Concern, Investigate, and Shutdown - and put them all in one lane. Then, place the signal under the same lane. By doing this, you can monitor the signal movement on the trend, and the condition color will be shown in the background of the trend. Please refer to the example screenshot below for a better understanding. Let me know your thoughts on this.
  9. Hi Edmund, Have you ever considered using Treemap for condition-based monitoring? With Treemap, you can prioritize the conditions you want to monitor and focus on high-priority events when you have lots of process parameters to keep track of. Plus, you can easily drill down and check out what happened with high-priority events over a trend view. For more information on Treemap, please refer to this article.
  10. Hi Joseph, Kindly raise a support ticket via this link. A Seeq representative will assist you.
  11. Hi David, You can use the iterrows() function to loop over your DataFrame and add the scalars. Let's say I have a DataFrame with all PVHI and PVLO limits. I can apply the iterrows() function to add these limits to my asset:

```python
for index, row in csv.iterrows():
    # Add Hi/HiHi limits
    my_csv_tree.insert(name=row['Limits 1 Name'],
                       formula=str(row['Limits 1']),
                       parent=row['Level 3'])
    # Add Lo/LoLo limits
    my_csv_tree.insert(name=row['Limits 2 Name'],
                       formula=str(row['Limits 2']),
                       parent=row['Level 3'])

my_csv_tree.visualize()
```

The asset tree will look like this:

```
My CSV Tree
|-- Cooling Tower 1
|   |-- Area A
|   |   |-- PVHIHI
|   |   |-- PVLO
|   |   |-- Temperature
|   |-- Area B
|       |-- PVHIHI
|       |-- PVLO
|       |-- Temperature
|-- Cooling Tower 2
    |-- Area D
    |   |-- PVHI
    |   |-- PVLO
    |   |-- Temperature
    |-- Area E
        |-- PVHI
        |-- PVLO
        |-- Temperature
```
  12. Hi Robert, After the spy.search and dropna step, you can create metadata to convert the unit of the selected signals and push it to Seeq Workbench.

```python
# Search for tags and drop rows without an estimated sample period
search_df = spy.search({
    'Name': 'Area ?_Compressor Stage',
    'Datasource Name': 'Example Data'
}, estimate_sample_period=dict(Start='2019-01-01', End='2019-01-30'))
search_df = search_df.dropna(subset=['Estimated Sample Period'])

# Create a copy of the search table so we can manipulate it
formulas = search_df.copy()

# Create the metadata
formulas['Name'] = formulas['Name'] + '_convertunit'
formulas['Formula'] = "$signal.convertUnits('C')"  # For this example, I am converting the unit to degC
formulas['Formula Parameters'] = '$signal=' + formulas['ID']
formulas.head()

# Push the metadata to Seeq Workbench
spy.push(metadata=formulas[['Name', 'Formula', 'Formula Parameters']],
         worksheet='Unit Conversion')
```
  13. Hi Robert, Could you provide more details about the "dropna was done on "Estimated Sample Period" to remove stale tags" step? When you run the spy.pull() step, you can apply a calculation to the pull. See this document for more information.
  14. Hi Martin, You will need the ID of the new tag to swap the tag using Seeq Data Lab. Please see the example below:

```python
# Search for the calculated tag (alternatively, you can search using the name of the tag)
metadata_df = spy.search({'ID': 'your_calculated_tag_id'}, all_properties=True)
metadata_df

# Read the formula parameter from metadata_df; you will see the ID of the $a
# parameter, for example 'a=F8E053D1-A4D5-4671-9969-1D5D7D4F27DD'
metadata_df['Formula Parameters'][0][0]

# Swap the ID of $a in the 'Formula Parameters' of metadata_df with the new ID
metadata_df['Formula Parameters'][0][0] = 'a=4E9416E8-9C75-426A-8E0A-4D07432CAC5D'

# Push metadata_df back to Seeq
spy.push(metadata=metadata_df)
```
  15. Hi Manoel, You can refer to this article for instructions on grabbing just the document template. However, pushing the document template to an existing Organizer Topic is currently not supported in Seeq Data Lab. To generate your report, I suggest adding a blank document to the topic and handling the HTML content yourself. If the Python code is well-structured, spy.jobs will create the report based on your schedule. Let me know if you need help setting up the HTML content.
  16. Hi Manoel, You can add one step to create a new document in the Organizer Topic you already created. Then, follow the steps suggested by Kristopher and Emilio here to set up the HTML of the new document.

```python
# Search for the topic
topic_search = spy.workbooks.search({'ID': 'your_organizer_topic_id'})

# Pull the topic associated with that ID
topic = spy.workbooks.pull(topic_search)

# Add a new document/page to the topic object
page = topic.document('February 2023')

# Follow the steps suggested by Kristopher and Emilio to set up the HTML
```
  17. Hi James, Thank you for the suggestion. I'd recommend submitting this suggestion to our Support Portal. We will then log it internally for follow-up.
  18. Hi James, On the Details pane, change the samples display to "Points only" or "Bar chart" to see the samples of the signal without interpolation. You can view the timestamp and value of the signal in Table view too. First, create a capsule for every sample of your signal using a Formula:

$signal.setMaxInterpolation(1s).isValid()

Then, switch to "Condition Table" view -> click "Columns" -> select the totaliser signal -> select "Maximum". Now you can see the timestamp and value of the samples in a Table.
  19. Hi James, You can see the samples of the signal on the trend view by adjusting the samples to "Line and Points" or "Points only". See this article for more information. Kin How
  20. Hi James, Have you attempted to plot the totaliser tag on the default trend view of workbench? The X-axis of the trend view displays the timestamps of the tag samples. My second inquiry pertains to the placement of the timestamp on the XY plot in relation to the timestamp on the trend view. How would you prefer it to be positioned? Kin How
  21. Hi Mkuhl70, Alternatively, you can use the runningAggregate(average()) function in Seeq Formula to calculate the cumulative average of the signal. Kindly refer to the example below.

$condition_I_want = $condition1 or $condition2
$sixmonthcondition = periods(6month, 6month)
$signal.remove(not $condition_I_want).runningAggregate(average(), $sixmonthcondition)
  22. Hi Margarida, The only way to convert it in Workbench is by duplicating it to a Formula. After that, update all calculations that use this variable by substituting the old condition, created with the Workbench tool, with the new duplicate. Regards, Kin How
  23. Hi Dano, The removeLongerThan function only works on a Condition. In your screenshots, the selected $b (Making in 401, 402, 403) in your Formula is a Signal, and this is causing the error you encountered when executing the Formula. Kindly change the $b variable to the "Making 401, 402, 403" condition (the purple condition in your screenshot) and re-execute the Formula. The symbols for Signals and Conditions in the variable selection pane of the Formula tool are different, so you can refer to the symbol to check the item type of each variable. Regards, Kin How
  24. A few weeks ago in Office Hours, a Seeq user asked how to perform an iterative calculation in which the next value of the calculation relies on its previous values. This functionality is currently logged as a feature request. In the meantime, users can utilize Seeq Data Lab and push the calculated result from Seeq Data Lab to Seeq Workbench. Let's check out the example below: There are a total of 4 signals, Signal G, Signal H, Signal J, and Signal Y, added to Seeq Workbench. The aim is to calculate the value of Signal Y under the selected period.

Step 1: Set the start date and end date of the calculation.

```python
startdate = '2023-01-01'
enddate = '2023-01-09'
```

Step 2: Set the workbook URL and workbook ID.

```python
workbook_url = 'workbook_url'
workbook_id = 'workbook_id'
```

Step 3: Retrieve all raw data points for the time interval specified in Step 1 using spy.pull().

```python
data = spy.pull(workbook_url, start=startdate, end=enddate, grid=None)
data
```

Step 4: Calculate the value of Signal Y (Yi = Gi * Y(i-1) + Hi * Ji).

```python
# Using .loc/.iloc avoids pandas' chained-assignment warning
for n in range(len(data) - 1):
    data.loc[data.index[n + 1], 'Signal Y'] = (
        data['Signal G'].iloc[n + 1] * data['Signal Y'].iloc[n]
        + data['Signal H'].iloc[n + 1] * data['Signal J'].iloc[n + 1]
    )
data
```

Step 5: Push the calculated value of Signal Y to the source workbook using spy.push().

```python
spy.push(data=data[['Signal Y']], workbook=workbook_id)
```
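The recurrence in Step 4 can be checked on synthetic data without a Seeq connection (the values below are made up purely for illustration):

```python
import pandas as pd

# Synthetic stand-ins for the four signals; only the first value of Y is known
data = pd.DataFrame({
    'Signal G': [0.5, 0.5, 0.25],
    'Signal H': [1.0, 1.0, 1.0],
    'Signal J': [2.0, 2.0, 2.0],
    'Signal Y': [4.0, 0.0, 0.0],
})

# Yi = Gi * Y(i-1) + Hi * Ji, computed row by row
for n in range(len(data) - 1):
    data.loc[n + 1, 'Signal Y'] = (data['Signal G'][n + 1] * data['Signal Y'][n]
                                   + data['Signal H'][n + 1] * data['Signal J'][n + 1])

print(data['Signal Y'].tolist())  # -> [4.0, 4.0, 3.0]
```

Each row depends on the previous row's result, which is why this cannot currently be expressed in a single Seeq Formula and is done in Data Lab instead.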