
Kin How

Seeq Team
  • Posts: 29
  • Joined
  • Last visited
  • Days Won: 7

Kin How last won the day on December 5, 2023

Kin How had the most liked content!

Personal Information

  • Company
    Seeq
  • Title
    Senior Analytics Engineer
  • Level of Seeq User
    Seeq Super-User


Kin How's Achievements

  1. You can try archiving the conditions using the API. The code below should archive the conditions in your worksheet.

     from seeq import spy
     from seeq import sdk
     import pandas as pd

     # Set up the SDK endpoint
     items_api = sdk.ItemsApi(spy.client)

     # Find the items displayed on the worksheet and archive each one by ID
     worksheet_url = 'worksheet_url'
     condition_df = spy.search(worksheet_url)

     for item_id in condition_df['ID']:
         items_api.archive_item(id=item_id)
  2. The dotted border and lighter color indicate uncertainty. We are still working on a new version to ensure consistency between uncertain and certain data trends.
  3. "Ungridded original timestamps" was added as an option to the "Samples table grid" in R63. You will see this option after upgrading to R63. https://support.seeq.com/kb/latest/cloud/odata-export
  4. Hi Nitish, You need to deselect "Auto" before setting a custom "Axis Max".
  5. Unsure why, since the Formula shouldn't care if the signal variable name is reused. I was told the variables don't work globally, e.g. either of the following would work as long as they are tied appropriately to the formula parameter. Is this not true?

         ($OFCoC2 + $OFCoC3 + $OFCoC5 + $OFCoC6 + $OFCoC7) / 100 * $FaPOF

         ($a + $b + $c + $d + $e) / 100 * $g

     Yes, either of the formula parameter combinations will work for your case. The error message you received, SPy Error, indicates that there were multiple entries of $OFPOF in the DataFrame. This suggests that there might be more than one match found in your df. To confirm this, could you please print df and check (see the sketch below)?
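     A quick way to check for duplicates; this is only a sketch, and the asset and signal names are copied from the Formula Parameters above, so swap in whichever parameter the error message mentions:

     # Sketch: count the rows that match one of the signals used in the formula.
     # More than one matching row makes the Formula Parameter lookup ambiguous
     # and triggers the SPy error about multiple entries.
     matches = df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') &
                  (df['Name'] == 'Feed and Products Olefin Feed+HYC')]
     print(len(matches))             # expect 1
     print(matches[['Name', 'ID']])  # inspect the duplicates if there are more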
  6. Your code should return the actual ID of the item after adding ['ID'].iloc[0] to the end of each line. Please try the "Formula Parameters" below:

     {
         '$OFCoC2': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C3=')]['ID'].iloc[0],
         '$OFCoC3': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C4=1')]['ID'].iloc[0],
         '$OFCoC5': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C4=2c')]['ID'].iloc[0],
         '$OFCoC6': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C4=2t')]['ID'].iloc[0],
         '$OFCoC7': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Olefin Feed Composition_online C4=i')]['ID'].iloc[0],
         '$FaPOF': df[(df['Asset'] == 'advisor_elements_sOiltags_leastCleaned_v2_csv(import)') & (df['Name'] == 'Feed and Products Olefin Feed+HYC')]['ID'].iloc[0]
     }

     After that, the spy.push() should run without error. Let me know the outcome after trying it.
  7. To ensure a successful metadata push, the Formula Parameters must contain the actual ID of the item in Seeq. In your screenshot, row 0 failed to push because the Formula Parameters did not contain the actual ID for each item. You cannot proceed with just {'$OFCoC2': ['ID'], ...}; instead, you need to specify the ID for each item, for example {'$OFCoC2': ['0EE76D39-3D08-FF40-BBFF-53255CCEB514'], ...}. You can use spy.search() to obtain the ID of each item in your formula and map those IDs to the corresponding Formula Parameters. Therefore, you need to push row 0 to Seeq to get the ID of the item and map that ID to row 1 before you can push row 1.

     When working with formulas, you are free to choose any variable name you prefer, as long as the correct ID is mapped under the Formula Parameters. For instance, for row 1 you can use $a/$b; the only requirement is that the IDs of $a and $b are correctly set, for example {'$a': '0EE76D39-3D08-FF40-BBFF-53255CCEB514', '$b': '0EE76D39-3D08-FF40-BBFF-53255CCEB55'}.
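     As a rough sketch of that two-step workflow (the item names 'Upstream Signal', 'Intermediate Calc', and 'Dependent Calc' are made up for illustration), the first push returns the ID that Seeq assigns to the new item, which you then map into the second row's Formula Parameters:

     import pandas as pd
     from seeq import spy

     # Step 1: push the first formula (row 0) and capture the ID Seeq assigns to it.
     upstream_id = spy.search({'Name': 'Upstream Signal'})['ID'].iloc[0]
     row0 = pd.DataFrame([{
         'Name': 'Intermediate Calc',
         'Formula': '$a * 2',
         'Formula Parameters': {'$a': upstream_id},
     }])
     push0 = spy.push(metadata=row0)

     # Step 2: map the freshly created ID into the second formula (row 1) and push it.
     row1 = pd.DataFrame([{
         'Name': 'Dependent Calc',
         'Formula': '$a / $b',
         'Formula Parameters': {'$a': push0['ID'].iloc[0], '$b': upstream_id},
     }])
     spy.push(metadata=row1)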
  8. Hi Edmund, I have a suggestion that may be helpful. You can create separate conditions for each color threshold - Concern, Investigate, and Shutdown - and put them all in one lane. Then, place the signal under the same lane. By doing this, you can monitor the signal movement on the trend, and the condition color will be shown in the background of the trend. Please refer to the example screenshot below for a better understanding. Let me know your thoughts on this.
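     If you ever want to script the thresholds instead of building them in the Tools panel, a minimal SPy sketch is below. The signal name and the 75/85/95 limits are placeholders, so substitute your own tag and thresholds (and include units in the formula if your signal carries them):

     import pandas as pd
     from seeq import spy

     # Placeholder tag from the Example Data datasource; replace with your signal.
     signal_id = spy.search({'Name': 'Area A_Temperature',
                             'Datasource Name': 'Example Data'})['ID'].iloc[0]

     # One condition per color threshold; the limits are illustrative only.
     thresholds = {'Concern': 75, 'Investigate': 85, 'Shutdown': 95}
     conditions = pd.DataFrame([{
         'Name': f'Temperature {label}',
         'Formula': f'$t > {limit}',
         'Formula Parameters': {'$t': signal_id},
     } for label, limit in thresholds.items()])

     # Push the conditions, then place them in the same lane as the signal in the workbench.
     spy.push(metadata=conditions, worksheet='Threshold Monitoring')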
  9. Hi Edmund, Have you ever considered using Treemap for condition-based monitoring? With Treemap, you can prioritize the conditions you want to monitor and focus on high-priority events when you have lots of process parameters to keep track of. Plus, you can easily drill down and check out what happened with high-priority events over a trend view. For more information on Treemap, please refer to this article.
  10. Hi Joseph, Kindly raise a support ticket via this link. A Seeq representative will assist you.
  11. Hi David, You can use the iterrows() function to loop over your DataFrame and add the scalars. Let's say I have a DataFrame with all PVHI and PVLO limits: I can apply the iterrows() function to add these limits to my asset:

      for index, row in csv.iterrows():
          # add Hi/HiHi limits
          my_csv_tree.insert(name=row['Limits 1 Name'],
                             formula=str(row['Limits 1']),
                             parent=row['Level 3'])
          # add Lo/LoLo limits
          my_csv_tree.insert(name=row['Limits 2 Name'],
                             formula=str(row['Limits 2']),
                             parent=row['Level 3'])

      my_csv_tree.visualize()

      The asset tree will look like this:

      My CSV Tree
      |-- Cooling Tower 1
      |   |-- Area A
      |   |   |-- PVHIHI
      |   |   |-- PVLO
      |   |   |-- Temperature
      |   |-- Area B
      |       |-- PVHIHI
      |       |-- PVLO
      |       |-- Temperature
      |-- Cooling Tower 2
          |-- Area D
          |   |-- PVHI
          |   |-- PVLO
          |   |-- Temperature
          |-- Area E
              |-- PVHI
              |-- PVLO
              |-- Temperature
  12. Hi Robert, After the spy.search and dropna step, you can create metadata to convert the unit of the selected signals and push it to the Seeq workbench.

      # Search for tags and drop rows without an estimated sample period
      search_df = spy.search({
          'Name': 'Area ?_Compressor Stage',
          'Datasource Name': 'Example Data'
      }, estimate_sample_period=dict(Start='2019-01-01', End='2019-01-30'))
      search_df = search_df.dropna(subset=['Estimated Sample Period'])

      # Create a copy of the search table so we can manipulate it
      formulas = search_df.copy()
      formulas

      # Create the metadata
      formulas['Name'] = formulas['Name'] + '_convertunit'
      formulas['Formula'] = '$signal.convertUnits(\'C\')'  # for this example, converting the unit to degC
      formulas['Formula Parameters'] = '$signal=' + formulas['ID']
      formulas.head()

      # Push the metadata to the Seeq workbench
      spy.push(metadata=formulas[['Name', 'Formula', 'Formula Parameters']], worksheet='Unit Conversion')
  13. Hi Robert, Could you provide more details about the "dropna was done on "Estimated Sample Period" to remove stale tags" step? When you run the spy.pull() step, you can apply a calculation to the pull. See this document for more information.
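      For reference, a minimal sketch of applying a calculation during the pull; the tag search and the agileFilter formula are placeholders, and the calculation is applied to each pulled item through the $signal variable:

      from seeq import spy

      # Placeholder search; replace with the tags you are pulling.
      items = spy.search({'Name': 'Area A_Temperature',
                          'Datasource Name': 'Example Data'})

      # Apply a smoothing calculation to every pulled signal.
      data = spy.pull(items,
                      start='2023-01-01', end='2023-01-07',
                      grid='15min',
                      calculation='$signal.agileFilter(1min)')
      data.head()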
  14. Hi Martin, You will need the ID of the new tag to swap the tag using Seeq Data Lab. Please see the example below:

      # Search for the calculated tag (alternatively, you can search using the name of the tag)
      metadata_df = spy.search({'ID': 'your_calculated_tag_id'}, all_properties=True)
      metadata_df

      # Read the formula parameters from metadata_df; you will see the ID of the $a parameter,
      # for example 'a=F8E053D1-A4D5-4671-9969-1D5D7D4F27DD'
      metadata_df['Formula Parameters'][0][0]

      # Swap the ID of $a in the 'Formula Parameters' of metadata_df with the new ID
      metadata_df['Formula Parameters'][0][0] = 'a=4E9416E8-9C75-426A-8E0A-4D07432CAC5D'

      # Push metadata_df back to Seeq
      spy.push(metadata=metadata_df)