Showing results for tags 'spy'.

Found 6 results

  1. We often get asked how to use the various API endpoints via the Python SDK, so I thought it would be helpful to write a guide on using the API/SDK in Seeq Data Lab. As background, Seeq is built on a REST API that enables all the interactions in the software. Whenever you trend data, use a Tool, create an Organizer Topic, or do any of the other things you can do in Seeq, the software is making API calls to perform the tasks you ask for. From Seeq Data Lab, the Python SDK lets you interact with the same API endpoints users exercise through the interface, but from a coding environment.

Whenever you want to use the Python SDK to work with API endpoints, I recommend opening the API Reference via the hamburger menu in the upper right-hand corner of Seeq. This opens a page showing all the sections of the API with their operations beneath them. For orientation: there are blue GET operations, green POST operations, and red DELETE operations. GET operations retrieve information from Seeq without changing anything - for instance, to learn the dependencies of a Formula, you might call GET /items/{id}/dependencies. POST operations create or change something in Seeq - for example, you can create a new workbook with the POST /workbooks endpoint. Finally, DELETE operations archive something in Seeq - for instance, deleting a user uses the DELETE /users/{id} endpoint. Each operation shows model example values for its inputs and outputs in yellow boxes, lists its required and optional parameters, and has a "Try it out!" button to execute the operation.
For example, if I wanted to get the item information for the item with ID "95644F20-BD68-4DFC-9C15-E4E1D262369C" (if you don't know where to find the ID, you can either use spy.search in Python or use Item Properties: https://seeq.atlassian.net/wiki/spaces/KB/pages/141623511/Item+Properties), I could fill in the id parameter on the corresponding GET endpoint and click "Try it out!". The API Reference provides an easy way to see what the inputs are and what format they must be in. As an example, if I wanted to post a new property to an item, the Model on the right-hand side shows the very specific syntax required. I typically recommend testing your syntax and operation in the API Reference to confirm it has the effect you are hoping for before moving into Python to script it.

How do I code the API Reference operations into Python?

Once you know which API endpoint you want to use and the format of its inputs, you can move into Python and code it using the Python SDK. The SDK comes with the seeq package, which is loaded by default in Seeq Data Lab, or can be installed for your Seeq version from PyPI if you are not using Seeq Data Lab (see https://pypi.org/project/seeq/). To import the SDK, simply run:

from seeq import sdk

Once you've done that, type sdk. and hit Tab after the period to see all the possible commands underneath the SDK. Generally, the first things to look for are the classes ending in "Api"; there is one for each section observed in the API Reference, and each must be instantiated with an authenticated client such as "spy.client". If I want to use the Items API, I would first run:

items_api = sdk.ItemsApi(spy.client)

Using the same Tab trick mentioned above after "items_api."
will provide a list of the functions available on the ItemsApi. While the Python function names are not exactly the same as the operation names in the API Reference, it should be clear which Python function corresponds to which API endpoint. For example, to get the item information, I would use "get_item_and_all_properties". Similar to the Tab trick above, you can use Shift+Tab inside any function call to open the documentation for that function. Expanding the documentation fully with the "^" icon shows that this function has two possible parameters, id and callback, where callback is optional and id is required - just as we saw in the API Reference above. Therefore, to execute this command in Python, I simply supply the id parameter (as a string, as denoted by "str" in the documentation):

items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C')

Because I executed a GET function, this returns all the information about the item I requested. This same approach can be used for any of the API endpoints you want to work with.

How do I use the information output from the API endpoint?

Oftentimes, GET endpoints retrieve a piece of information so it can be used in another function later on. From the previous example, suppose you want the value of the item's "name". All you have to do is save the output as a variable, convert it to a dictionary, and request the key you want. First, save the output as a variable - here we'll call it "item":

item = items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C')

Then convert the output "item" into a dictionary and request whatever key you would like:

item.to_dict()['name']
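The save-convert-extract pattern can be run end to end without a server by standing in a minimal mock for the SDK response object. The MockItemOutput class below is invented for illustration (the real object returned by get_item_and_all_properties is an SDK model that also exposes to_dict()); the dictionary access pattern is the same.

```python
# Minimal stand-in for the response object returned by
# items_api.get_item_and_all_properties(). Only this mock class is
# invented for the example; the real SDK models also expose to_dict().
class MockItemOutput:
    def __init__(self, item_id, name, item_type):
        self.id = item_id
        self.name = name
        self.type = item_type

    def to_dict(self):
        # The real SDK models convert their attributes to a plain dict.
        return {'id': self.id, 'name': self.name, 'type': self.type}

# Pretend this came from items_api.get_item_and_all_properties(id=...)
item = MockItemOutput('95644F20-BD68-4DFC-9C15-E4E1D262369C',
                      'Temperature', 'CalculatedSignal')

# Convert the response to a dictionary and pull out the keys you need.
item_name = item.to_dict()['name']
item_type = item.to_dict()['type']
print(item_name, item_type)  # Temperature CalculatedSignal
```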
  2. Recently we have gotten several requests to clear an item's cache in Seeq from Seeq Data Lab (SDL). The following Python script creates a function that clears an item's cache when the item's ID is given as an input. Note: this script assumes you are using SDL.

# imports
from seeq import sdk

# Set up the Items API
items_api = sdk.ItemsApi(spy._login.client)

# Cache-clearing function
def ItemCacheClearingTool(itemID):
    try:
        clearCache = items_api.clear_cache(id=itemID)
    except Exception:
        print('Error clearing cache for item ' + itemID)

This function can be used in for loops or applied to the ID column of a DataFrame to clear multiple items' caches at a time.
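The "apply it to a DataFrame" idea can be sketched offline. Since clear_cache needs a live Seeq server, this example substitutes a stub that merely records which IDs it was called with; the stub, the made-up IDs, and the clear_caches helper are all hypothetical, but the loop structure is what you would use with the real items_api.clear_cache.

```python
import pandas as pd

# Stand-in for items_api.clear_cache(id=...) so the loop can run without
# a live Seeq server; it simply records the IDs it was asked to clear.
cleared = []
def clear_cache_stub(item_id):
    cleared.append(item_id)

def clear_caches(items_df, clear_fn):
    """Clear the cache of every item in a spy.search-style DataFrame."""
    for item_id in items_df['ID']:
        try:
            clear_fn(item_id)
        except Exception as e:
            print(f'Error clearing cache for item {item_id}: {e}')

# A spy.search result always includes an 'ID' column; these IDs are invented.
results = pd.DataFrame({'ID': ['AAAA-1111', 'BBBB-2222'],
                        'Name': ['Signal A', 'Signal B']})
clear_caches(results, clear_cache_stub)
print(cleared)  # ['AAAA-1111', 'BBBB-2222']
```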
  3. Question: I've already got a Seeq Workbench with a mix of raw signals and calculated items for my process. Is there a way to grab all the signals and conditions on the display rather than finding them all through a spy.search()?

Answer: Yes, you can specify the workbook ID and the worksheet index to grab the display items from Seeq Workbench. Here's an example of how to do that. The first step is to pull the desired workbook by specifying the workbook ID:

desired_workbook = spy.workbooks.pull(
    spy.workbooks.search({'ID': '741F06AE-62D6-4729-A4C3-8C9CC701A2A1'}),
    include_referenced_workbooks=False)

If you are not aware, the workbook ID can be found in the URL: it is the identifier following the .../workbook/... part (e.g. https://explore.seeq.com/workbook/741F06AE-62D6-4729-A4C3-8C9CC701A2A1/worksheet/DFAC6933-A68F-4EEB-8C57-C34956F3F238). In this case, I added the extra argument include_referenced_workbooks=False so that we only get the workbook with that specific ID.

Once you have the desired workbook, grab index 0 of desired_workbook, since there should only be one workbook with that ID. You will then want to specify which worksheet to grab the display items from. To see your options for the worksheet index, run the following command:

desired_workbook[0].worksheets

This returns a list of the worksheets within the workbook you specified. For example, an output might look like this:

[Worksheet "1" (EDAA0608-29EA-4EA6-96FA-B6A59D8AE003), Worksheet "From Data Lab" (34AC07F9-F2FF-4C9E-A923-B636D6642B32)]

Depending on which worksheet in the list you want the display items from, specify that index. Please note that indexes start at 0, not 1, so the first worksheet is index 0. Therefore, if I wanted the first worksheet in the list, I can get its display items with:

displayed_items = desired_workbook[0].worksheets[0].display_items

If you inspect displayed_items, you should see a pandas DataFrame of the Workbench items, as if you had done a spy.search for all of them. You can then use displayed_items as the DataFrame for a spy.pull() command to grab the data for a specific time range.
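Because display_items comes back as an ordinary pandas DataFrame, it can be filtered like any spy.search result before pulling - for example, keeping only the signals on the display. A minimal sketch with invented rows (the ID/Name/Type column layout mirrors what SPy returns; the data itself is made up):

```python
import pandas as pd

# Invented stand-in for a worksheet's display_items DataFrame.
displayed_items = pd.DataFrame({
    'ID':   ['AAAA-1111', 'BBBB-2222', 'CCCC-3333'],
    'Name': ['Temperature', 'High Temp', 'Compressor Power'],
    'Type': ['CalculatedSignal', 'CalculatedCondition', 'StoredSignal'],
})

# Keep only the signal rows, e.g. before a grid-based spy.pull()
signals_only = displayed_items[displayed_items['Type'].str.contains('Signal')]
print(list(signals_only['Name']))  # ['Temperature', 'Compressor Power']
```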
  4. The SPy module serves as the core of Seeq Data Lab. Because it is a fairly new Seeq offering and a Python module, the extensive documentation users are accustomed to when working with other Seeq applications (Workbench/Organizer Topic) is not available yet. Below are some tips for learning more about the SPy module.

Option 1: Read through the SPy Documentation

When working in Seeq Data Lab, a folder titled "SPy Documentation" is automatically generated when creating a new project. If working outside of Seeq Data Lab, the command spy.docs.copy() can be used after importing the SPy module to download the documentation. The SPy documentation contains a tutorial that guides users through most of what can be done with SPy. Look through the documentation and chances are there is an example discussing what you would like to use SPy to do.

Option 2: Access an Example of the Item

When attempting to do something not discussed in the SPy documentation, try pulling an example of the item you're looking to modify. For example, if you want to modify the format of scorecards on a worksheet, the item to bring in would be the worksheet, since this formatting is carried on the worksheet rather than on an individual scorecard. A combination of spy.search and spy.pull will give you an example of the item. You can then leverage the keyboard shortcuts discussed in Jupyter Docstring Shortcut to learn more about the options available: typing the object name followed by "." and pressing Tab shows the properties and methods associated with that item, and Shift+Tab shows the docstring for a property or method.

Docstrings are also available for the SPy functions themselves. In addition to the previously mentioned shortcuts, another way to access a docstring is to type the function name followed by a question mark and run the cell; the "?" makes the full docstring appear in the output immediately rather than in a scrollable pop-up. For example, the code below returns the docstring for the spy.search function:

spy.search?
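The "?" suffix is IPython/Jupyter syntax; the same docstrings can be read in any plain Python session via help() or the __doc__ attribute. A minimal sketch, where the search function is a hypothetical stand-in for spy.search:

```python
# Hypothetical stand-in for spy.search, used only to show where the
# text displayed by "search?" actually lives.
def search(query):
    """Search for items matching the given query dictionary."""
    return []

# In Jupyter, "search?" renders this same string; in plain Python you
# can print it directly or call help(search).
print(search.__doc__)
```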
  5. In batch processes, the genealogy of lots and batch IDs is important to understand. When a deviation occurs in a raw material or a batch, the use of that material in subsequent steps of the process needs to be identified quickly. In the case of pharmaceuticals, batches that used that material may need to be pulled from the shelves immediately so they are not used by consumers.

Batch genealogy starts with having data in a format where the input material lot numbers or batch IDs are tracked for each batch. In Seeq, these input material lot numbers or batch IDs are tracked as properties attached to each batch capsule. The properties can either be set up from the datasource connector (such as bringing in OSIsoft PI Event Frames or information from a SQL database) or built using transforms in Seeq Formula. Once the data is set up properly, visualization can be achieved with Seeq Data Lab. This article describes one approach to batch genealogy visualization in Seeq Data Lab. In this particular scenario, we are looking at 6 distinct batch chemical transformations to get to the final product, and we are tracking the key raw material as it moves through the process.

1. First, use spy.search to find the batches of interest:

results = spy.search({'Name': 'Step * Batches'},
                     workbook='3FC99BE8-F721-48E1-ACC3-18751ADA549F')
# Print the output to the Jupyter page
results

2. Next, use spy.pull to retrieve the records for a specific time range of interest:

step2_batches = results.loc[results['Name'].isin(['Step 2 Batches'])]
step2_data = spy.pull(step2_batches, start='2019-01-01', end='2019-07-01', header='Name')
step2_data

3. In the previous step, you can see that some batches have multiple input batch IDs. To track all combinations of input and output batch IDs, use pandas DataFrame manipulations to get a new DataFrame with just the input/output batch ID combinations:

step2_1 = step2_data[['Step 1 Input Batch ID (1)', 'Batch ID']].rename(
    columns={'Step 1 Input Batch ID (1)': 'Step 1 Input Batch ID'})
step2_2 = step2_data[['Step 1 Input Batch ID (2)', 'Batch ID']].rename(
    columns={'Step 1 Input Batch ID (2)': 'Step 1 Input Batch ID'})
step2 = pd.concat([step2_1, step2_2]).rename(
    columns={'Batch ID': 'Step 2 Batch ID'}).set_index('Step 2 Batch ID')
step2 = step2.loc[(step2 != '0.0').any(axis=1)]
step2

You may notice that instead of 6 rows in the previous DataFrame, we now have 11 rows (5 batches with 2 input batch IDs + 1 batch with 1 input batch ID = 11 total combinations).

4. Repeat the above steps for each additional step in the process, creating a final DataFrame of Step 1 through Step 6 batches with all combinations of input batch IDs from prior steps:

step5_6 = pd.merge(step6.reset_index(),
                   step5.reset_index().rename(columns={'Step 5 Batch ID': 'Step 5 Input Batch ID'}))
step4_5_6 = pd.merge(step5_6,
                     step4.reset_index().rename(columns={'Step 4 Batch ID': 'Step 4 Input Batch ID'}))
step3_4_5_6 = pd.merge(step4_5_6,
                       step3.reset_index().rename(columns={'Step 3 Batch ID': 'Step 3 Input Batch ID'}))
FinalDf = pd.merge(step3_4_5_6,
                   step2.reset_index().rename(columns={'Step 2 Batch ID': 'Step 2 Input Batch ID'}))
FinalDf
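The wide-to-long reshape in step 3 can be exercised on toy data without a Seeq server. The batch IDs below are invented; the column layout mirrors the spy.pull output described above, with '0.0' marking an unused input-ID slot.

```python
import pandas as pd

# Toy stand-in for the spy.pull output of one step: each output batch
# carries up to two input-batch-ID capsule properties ('0.0' = unused).
step2_data = pd.DataFrame({
    'Batch ID':                  ['B201', 'B202', 'B203'],
    'Step 1 Input Batch ID (1)': ['B101', 'B102', 'B103'],
    'Step 1 Input Batch ID (2)': ['B102', '0.0',  'B101'],
})

# Stack the two input-ID columns into one long input/output pair table.
pairs = pd.concat([
    step2_data[[f'Step 1 Input Batch ID ({i})', 'Batch ID']]
        .rename(columns={f'Step 1 Input Batch ID ({i})': 'Step 1 Input Batch ID'})
    for i in (1, 2)
]).rename(columns={'Batch ID': 'Step 2 Batch ID'}).set_index('Step 2 Batch ID')

# Drop the '0.0' placeholders left by batches with only one input lot.
pairs = pairs.loc[(pairs != '0.0').any(axis=1)]
print(len(pairs))  # 5 input/output combinations (2 + 1 + 2)
```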
  6. I recently received a question about passing formula parameters to the spy.pull calculation argument, such as:

spy.pull(signals_dataframe, start='2019-01-01', end='2019-02-01', grid='15min',
         calculation='$signal.aggregate(average(), $capsules, endKey(), 0s)')

where $capsules is a condition parameter of the calculation. The calculation argument in spy.pull only performs calculations with a single parameter - the signal or condition being pulled - such as $signal.removeOutliers() or $condition.removeLongerThan(10min). To pull signals or conditions and apply calculations that require additional parameters, we can iterate over the DataFrame containing the items we want, create calculated signals on the server, and then pull them using the output from spy.push():

# Find the condition that is the parameter of our calculation
condition_parameter = spy.search({'Name': 'Parameter Condition',
                                  'Asset': 'Parameter Condition Asset'})

# Make a list of dictionaries that we will turn into a dataframe for spy.push()
metadata_for_calcs = list()
for signal in signals_dataframe.itertuples():
    calc_dict = dict()
    calc_dict['Name'] = signal.Name + '_calced'
    calc_dict['Type'] = 'Signal'
    calc_dict['Formula'] = '$signal.aggregate(average(), $capsules, endKey(), 0s)'
    # Formula Parameters takes a dictionary with keys of the parameter name and values of
    # a dictionary or DataFrame that has an 'ID' attribute identifying the item.
    # We'll specify the $signal by ID, and pass the condition_parameter dataframe for $capsules.
    calc_dict['Formula Parameters'] = {'$signal': {'ID': signal.ID},
                                       '$capsules': condition_parameter}
    metadata_for_calcs.append(calc_dict)

# Convert the list of dicts to a dataframe
metadata_for_calculated_signals = pd.DataFrame(metadata_for_calcs)

# Push the calculated signal definitions.
# I would recommend scoping them to a workbook to prevent clutter.
pushed_calced_signals = spy.push(metadata=metadata_for_calculated_signals,
                                 workbook='Scratch Folder >> DataLab Calcs',
                                 worksheet='Calculated Signals')

calculated_signals = spy.pull(pushed_calced_signals,
                              start='2019-01-01', end='2019-02-01', grid='15min')
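Only the spy.push and spy.pull calls above need a live server; the metadata construction can be exercised offline. A sketch with invented inputs (the IDs, names, and the condition_parameter dict are all made up; the column layout matches a spy.search result):

```python
import pandas as pd

# Invented stand-in for a spy.search result of signals to transform.
signals_dataframe = pd.DataFrame({
    'ID':   ['AAAA-1111', 'BBBB-2222'],
    'Name': ['Temperature', 'Pressure'],
})
# Invented stand-in for the searched condition; only its 'ID' matters here.
condition_parameter = {'ID': 'CCCC-3333'}

# Build one calculated-signal definition per input signal.
metadata = []
for signal in signals_dataframe.itertuples():
    metadata.append({
        'Name': signal.Name + '_calced',
        'Type': 'Signal',
        'Formula': '$signal.aggregate(average(), $capsules, endKey(), 0s)',
        'Formula Parameters': {'$signal': {'ID': signal.ID},
                               '$capsules': condition_parameter},
    })

metadata_df = pd.DataFrame(metadata)
print(list(metadata_df['Name']))  # ['Temperature_calced', 'Pressure_calced']
```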