Patrick

Seeq Team
Everything posted by Patrick

  1. Hi Leah - Would you mind submitting a support ticket via our portal at https://seeq.atlassian.net/servicedesk/customer/portal/3? If you can view the dashboard and have shared it with your colleagues via the share menu, yet they get this error, we will want to investigate a bit more.
  2. Hi Manoel - Can you try inserting a `.validValues()` after the "$reg" in the last line? If invalid samples are present in the signal, no interpolation will occur. This might also be worth a quick office hours visit to troubleshoot 1:1 - https://outlook.office365.com/book/SeeqOfficeHours@seeq.com/
  3. Hi Nurhazx - In general, the average displayed in the Details pane depends on the time range selected in your display pane, i.e., in the above screenshot, 9/13/2023 to 3/14/2024. If you select a time range by left-clicking and dragging across a section of the trend, as you did in your screenshot, the average will be representative of that selection - in the above, 12/1/2023 to 12/13/2023. The Details pane average will also include the data removed in your monthly aggregation, since you are passing the `remove` operator into the monthly aggregation only. The calculated monthly average will always be based on the start and end time of the monthly condition, regardless of the display pane time range; it will of course also honor the "remove()" parameter. One note regarding "remove()" - if you are removing periods shorter than your signal's maximum interpolation, Seeq will interpolate (draw a line) across the "gap" of removed data. If you instead want the removed section to be shown as a true gap regardless of interpolation settings, you can use the "within()" operator combined with "inverse()":
$signal=$bt.within($d.inverse())
  4. Hi EPH - We don't have the capability to remove short capsules in a Composite Condition via the GUI; however, it can be done easily in Formula like so:
$a.intersect($b).removeShorterThan(1h)
where $a and $b are the respective conditions and the parameter in removeShorterThan() specifies the minimum capsule duration (in the above example, anything shorter than 1 hour will be ignored). If you already have the Composite Condition set up, go to "Item Properties" (green i), click the down arrow next to "Duplicate", and select "Duplicate to Formula". This will pre-populate $a.intersect($b) with your conditions in Formula, and you can then append ".removeShorterThan(1h)" to create the new condition that ignores short capsules.
  5. There are instances where it's desirable to add or remove Access Control on a user-created datasource. For instance, you may want to push data from Seeq Data Lab to a datasource that is only available to a subset of users. Below is a tip on how to add additional entries to a Datasource Access Control List (ACL) using `spy.acl`.
1) Identify the Datasource ID
If not already known, retrieve the ID for the Datasource. This can be done via the Seeq REST API, accessible via the "hamburger" menu in the upper right corner of your browser window (a programmatic sketch is also included at the end of this post). This ID is different from the datasourceId identifier and will contain a series of characters as shown below. It can be retrieved via the `GET /datasources` endpoint under the `Datasources` section; the response body will contain the ID.
2) Identify the User and/or Group ID
We will also want to identify the User and/or Group ID. For example, a Group ID can be retrieved via the `GET /usergroups` endpoint under the `UserGroups` section; the response body will contain the ID.
3) Use `spy.acl()` to add the group to the Datasource Access Control List (ACL)
Launch a Seeq Data Lab project and create a new ipynb notebook. Assign the two IDs to variables:
datasource_id='65BDFD4F-1FA3-4EB5-B21E-069FA5A772DF'
group_id='19F1B7DC-69BE-4402-AD3B-DF89B1B9A1A4'
(Optional) Check the current Access Control List for the Datasource using `spy.acl.pull()`:
current_access_control_df=spy.acl.pull(datasource_id)
current_access_control_df.iloc[0]['Access Control']
Add the group_id to the Datasource ACL using `spy.acl.push()`:
spy.acl.push(datasource_id, {
    'ID': group_id,
    'Read': True,
    'Write': True,
    'Manage': False
}, replace=False)
Note I've set `replace=False`, which means the group will be appended to the current access list. If the desire is to replace the entire existing ACL, this can be toggled to `replace=True`. Similarly, you can adjust the read/write/manage permissions to fit your need. For more information on `spy.acl.push`, reference the `spy.acl.ipynb` document located in the SPy Documentation folder or access the docstring by executing help(spy.acl.push).
(Optional) To confirm the correct ID has been added, re-pull the ACL via `spy.acl.pull()`:
current_access_control_df=spy.acl.pull(datasource_id)
current_access_control_df.iloc[0]['Access Control']
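If you prefer to find the Datasource ID without opening the REST API page, here is a rough SDK-based sketch. It assumes the auto-generated Python SDK exposes the same GET /datasources endpoint referenced in step 1; the method and attribute names follow the usual SDK conventions but may differ between Seeq versions, so treat this as a starting point rather than the exact call.
from seeq import sdk, spy

# Sketch: list datasource names and IDs via the SDK, mirroring the GET /datasources
# endpoint from step 1. Method/attribute names are assumptions and may vary by version.
datasources_api = sdk.DatasourcesApi(spy.client)
response = datasources_api.get_datasources()
for datasource in response.datasources:
    print(datasource.name, datasource.id)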
  6. Hi Pat - Please take a look at the attached script. It takes a Worksheet template, maps the signals in the worksheet, maps the worksheet to the Organizer Template page, appends that page to the existing Organizer, and pushes all referenced content, including the templated Worksheet. The template used is the one included in our SPy documentation, but you should be able to load your own template as well. AppendingTemplateDocuments.ipynb
  7. Starting in R63, you have the ability to "deep copy" worksheets to automatically create new, independent items. Details here: https://support.seeq.com/kb/latest/cloud/worksheet-document-organization
  8. Solution: To address your first question of creating the t>1 hr after shut-in capsule, one method is below (an optional SPy sketch for Step 1 follows after the steps):
1. Create a condition for ~5 min before shut-in and shift it by ~5 min to make sure you have the steady-state DHP prior to shut-in. This is done using Formula. The Formula syntax is:
$ShutinCondition.beforeStart(5min).move(-5min)
The beforeStart(5min) creates a new 5-minute-long capsule directly preceding your shut-in capsule. The move(-5min) function shifts this new "beforeStart" capsule backwards in time by 5 minutes.
2. Calculate a new signal that represents the average DHP over the condition created in Step 1. Use "Signal from Condition" with the following inputs:
Signal or condition = DHP
Summary statistic = Average
Bounding condition = condition from Step 1
Where to place timestamp = Start
Interpolation method = Step
3. Calculate a new signal that is the signal created in Step 2 (average DHP before shut-in) + 10%. Use Formula and the following syntax:
$signalFromStep2*(1.1)
4. Calculate a new signal that is the delta between the DHP and this initial + 10% signal. Use Formula and the following syntax:
$DHPsignal-$signalFromStep3
5. Create a condition for when the delta signal created in Step 4 drops below zero. Use Value Search with < 0 as the input.
6. Create a Composite Condition that joins the end of your existing t<1 hr after shut-in capsules to the start of the condition created in Step 5. The inputs to this Composite Condition will be:
Condition A = t<1 hr after shut-in
Condition B = delta signal drops below zero
Logic = join, not inclusive of A, not inclusive of B
Maximum capsule duration = long enough to capture the time back down to the initial DHP
For each of the two new signals you want to create, you can use Formula and the following syntax, where $ChooseCondition is your t<1 hr or t>1 hr condition in each formula:
$DHP.within($ChooseCondition).runningAggregate(average(), hours())
This will give you a new signal of the 1-hr rolling average of DHP only during the specified condition.
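As an optional alternative to clicking through the Workbench tools, the Step 1 condition can also be created programmatically with SPy. This is a minimal sketch, not part of the original workflow: 'Shut-in Condition' and 'Pre Shut-in Window' are placeholder names, and the parameter mapping assumes your shut-in condition is the first search result.
import pandas as pd
from seeq import spy

# Sketch: push the Step 1 formula as a new condition using SPy.
# 'Shut-in Condition' is a placeholder - replace with a search for your actual shut-in condition.
shutin = spy.search({'Name': 'Shut-in Condition'})

spy.push(metadata=pd.DataFrame([{
    'Name': 'Pre Shut-in Window',   # placeholder name for the new condition
    'Formula': '$ShutinCondition.beforeStart(5min).move(-5min)',
    'Formula Parameters': {'$ShutinCondition': shutin.iloc[0]['ID']}
}]))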
  9. Thanks for checking, Ivan. I'm not sure what is happening here. I've not been able to reproduce this issue on our server. I also checked our ticketing system and am unable to find known issues related to the calculation you are trying to perform (the previous bug you encountered was related to min/max formulas and is fixed in 62.0.7). I suspect there might be an issue upstream of the calculation - are you performing any min/max or other operations on $s? It might be more expedient to sign up for an office hours session so we can work through this 1:1: https://outlook.office365.com/owa/calendar/SeeqOfficeHours@seeq.com/bookings/
  10. Hi Ivan - Have you tried clearing the cache on the source signal via Item Properties in the Details Pane ("Clear Cached Values" under the Advanced section)?
  11. One suggestion from my colleague that you can try as well, if you want datasource caching on, is to go to the Admin page for the datasource and toggle the cache: enable it if it is currently off, or disable and re-enable it if it is already enabled. This may achieve the same result without resorting to code.
  12. Hi David - While I'm not aware of an "easy button" to clear the cache on all items in a Workbench/environment, you can programmatically clear the cache using the items API endpoint. One way to do this in Seeq Data Lab, given a list of signal IDs, is to loop through that list and call the API endpoint to clear the cache with something like this:
signal_id_list=spy.search({'Path':'Path_to_AF_signals_that_need_to_be_cleared'},limit=max_number_of_expected_signals)['ID'].tolist()
items_api = sdk.ItemsApi(spy.client)
for signal_id in signal_id_list:
    items_api.clear_cache(id=signal_id,clear_dependents=True)
The above example will get all the signal IDs in the AF path of choice, then clear their cache and that of any dependent items used in Workbenches on your system. If you want to be more surgical, you could pull the item IDs from the Workbench instead (e.g., via spy.utils.get_analysis_worksheet_from_url(worksheet_url).display_items - see the sketch below), but note you will want to get to the root of any calculations to make sure the cache clearing propagates through all dependencies. Would this work for your use case?
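Here is a rough sketch of that "more surgical" worksheet-based approach. It reuses the items_api object from the snippet above and assumes display_items returns a DataFrame with an 'ID' column; worksheet_url is a placeholder for the URL of the worksheet of interest.
# Sketch: clear the cache only for the items displayed on a specific worksheet.
# Assumes display_items is a DataFrame with an 'ID' column; worksheet_url is a placeholder.
worksheet = spy.utils.get_analysis_worksheet_from_url(worksheet_url)
for item_id in worksheet.display_items['ID']:
    items_api.clear_cache(id=item_id, clear_dependents=True)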
  13. Gotcha. The good news is that coming in R63 there will be a way to force string signal values to be upper, lower, or title case. The bad news is that R63 is not yet released, though it's right around the corner. If your signal name is consistent, one way to tackle this in the meantime is to do a replace on the string to standardize the case:
$string_signal
.replace('BLU','Blu')
.replace('LB','lb')
You can then create a condition using .toCondition(), which will span the entire duration for which the signal value doesn't change:
$replaced_string.toCondition()
The result will be a single condition for that string state. As noted, this will be simpler and more robust starting in R63, where step 1 can simply be replaced by $string_signal.lower(), which forces the signal value to all lower case (alternatively, .upper() and .toTitleCase() will also be available for your use). Let me know if this helps.
  14. Hi protonshake - To convert a step signal into capsules, you can use the .toCapsules() or .toCondition() Formula functions. The former will create a capsule between each pair of samples, regardless of whether the value has changed, whereas .toCondition() will create a continuous capsule while the value is not changing. Regarding your input signal and changing case, is the end goal for the above to have a single capsule for the duration shown (i.e., ignore the signal case), or do you want them separated out each time the source signal case changes?
  15. Hi Tayyab - One alternative would be to disable inheritance on a folder in the Corporate Drive and then remove the "Everyone" group. To do so, create a folder in the Corporate Drive, click the "three dots" on the right, and select "Manage Permissions". You will see that "Everyone" has access. Click on "Advanced" and "Disable permission inheritance". You can now remove the "Everyone" group from your folder access and add your collaborators, giving you a folder that all your collaborators share (and not "Everyone").
  16. Hi Alyssa - I'm trying to reproduce your issue on one of our Seeq servers but am unable to get the same behavior. Have you tried reducing the update frequency to see if that helps? If not, I'd suggest you swing by one of our office hours so an Analytics Engineer can take a look with you 1:1. Alternatively, please submit a support ticket so we can follow up. Office hours sign-up: https://outlook.office365.com/owa/calendar/SeeqOfficeHours@seeq.com/bookings/ Support portal: https://seeq.atlassian.net/servicedesk/customer/portal/3
  17. To add to what Kin How mentioned above, I would encourage you to submit a feature request for this capability, as I could see it being very helpful when iterating on analyses over time. To do so, go to support.seeq.com, click on "Support Portal" in the lower right, and submit an "Analytics Help" request referencing this post. A one-liner will be sufficient. We will make sure it gets linked to a developer feature request.
  18. Here's a simple radar plot Add-on that I have written. You can use it as a starting point if helpful. Radar Plot Add-on.ipynb
  19. Hi Tranquil - While not specific to XY plot, Seeq does have formulas that help you detect step changes in signals. If you are looking for changes in the samples coming in, I would suggest you explore the runningDelta() formula, which compares sample-by-sample changes. For instance, if you want to generate a "Step Change Condition" whenever the change in signal is greater than 3, you could write it as:
abs($signal.runningDelta())>3
Once the condition is generated, you can filter your XY plot by that condition. To count the number of changes for a time period, you can apply a Signal from Condition or a simple Scorecard Metric to count up the number of step changes.
  20. As a follow-up to the above, you will also want to check the Seeq Data Lab version currently installed on your server by looking at the lower left in the Seeq Data Lab browser window. If it matches the Seeq version shown in Workbench, the pip uninstall -y seeq command should fix your issue. If the version does not match, contact your Seeq admin to upgrade the SDL server to match; this is the more robust solution, as the approach above can cause the error to recur after future upgrades.
  21. Hi Mohamed - This error happens when the Seeq Data Lab version doesn't match the version of Seeq. You can update the SDL version to match the Seeq server (e.g., R54) by executing:
pip install -U seeq~=54.x
where x is the point release on your Seeq server (found in the lower left of the browser window). By the way, to check the Data Lab version currently installed, you can execute:
pip show seeq
Lastly, to see which Seeq versions are available and for install instructions, visit the Seeq PyPI project at https://pypi.org/project/seeq/
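If you'd rather check the installed version from inside a notebook cell instead of a terminal, here is a small sketch that is equivalent in spirit to pip show seeq (assuming Python 3.8+ for importlib.metadata):
# Print the installed seeq module version from within a notebook cell
from importlib.metadata import version
print(version('seeq'))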
  22. Hi David - For this you can use the periods Formula. You can also "anchor" it to a specific start time. Example below. periods(12 hours, 12 hours, '2023-01-01T12:00:00','US/Mountain')
  23. Hi Brie - You can try using the remove formula, i.e., $signal.remove($condition). This should remove the data inside that condition.
  24. Update - To see a video of the below workflow in action, check out this Seeq Tips and Tricks video.
Webhooks are a convenient method to send event data from Seeq to "channel" productivity tools such as Microsoft Teams or Slack. The following post describes how Seeq users can leverage Seeq Data Lab to send messages directly to MS Teams via Webhooks.
Prerequisites:
1) Seeq Data Lab with Scheduled Notebooks enabled
a. See Administration Panel -> Configuration and filter for "Features/DataLab/ScheduledNotebooks/Enabled"
2) MS Teams channel with a Webhook connector
Assumptions:
1) Summary of capsules generated in a defined time range (i.e., every 12 or 24 hours)
2) Notifications are not near-real-time - the script will run on a pre-defined schedule generally measured in hours, not minutes or seconds
3) Events of interest are contained in an Asset Tree or Group with one or more Conditions
Step 1: Configure Webhook in MS Teams
To send Seeq capsules/events to MS Teams, a Webhook for the target channel needs to be created. Detailed instructions on how to configure Webhooks in MS Teams can be found here: https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook
For the purpose of this post, we will create a Webhook URL in our "Seeq Notifications" Team to alert on temperature excursions. The alerts will be posted in the "Cooling Tower Temperature Monitoring" channel. Team and channel names can be configured to fit your need/operation; this is just an example for demonstration purposes. MS Teams will generate a Webhook URL which we will use in our script in Step 4.
Step 2: Identify or Create an Asset Group or Asset Tree to define the Monitoring Scope
To scope the events of interest, we will use an Asset Tree that contains "High Temperature" conditions for a collection of monitoring assets. While this is not a requirement for using Webhooks, it helps with scaling the notification workflow. It also allows us to combine multiple Conditions from different assets into a single workflow. To learn how to create an Asset Tree, follow the "Asset Trees 1 - Introduction.ipynb" tutorial in the SPy Documentation folder contained in each new Seeq Data Lab project. The script for the Monitoring Asset Tree used in this post is attached for reference: Monitoring Asset Tree.ipynb
Alternatively, Asset Groups can also be used to create an asset structure directly in Workbench without using Python.
Once the Asset Group/Tree containing the monitoring Conditions is determined, create a Worksheet with a Treemap or Table overview for monitoring use. Make note of the URL, as it will be included in the notification as a link to the monitoring overview whenever an event is detected. For locally scoped Asset Groups or Trees, it will also inform the script where to look for Conditions.
Step 3: Install the "pymsteams" library in Seeq Data Lab
The pymsteams library allows users to compose and post messages (or cards) to MS Teams. The library can be installed from the PyPI repository (pypi.org) using the "pip install" command.
1) Open a Seeq Data Lab project
2) Launch a Terminal session
3) Install the pymsteams library by executing:
pip install pymsteams
Additional documentation on pymsteams can be found here: https://pypi.org/project/pymsteams/
Step 4: Create or Update the Monitoring Script
We are now ready to configure a monitoring script that sends notifications to the Webhook configured in Step 1, using Conditions scoped to the Asset Tree in Step 2.
a) Import the relevant libraries, including the newly installed pymsteams library
from seeq import spy
import pandas as pd
from datetime import datetime,timedelta
import pytz
import pymsteams
b) Configure Input Parameters
#Refer to Microsoft Documentation on how to configure a Webhook for a MS Teams channel
webhook_url='YOUR WEBHOOK HERE'

#Specify the monitoring workbook - this is where the alert will link with the associated timeframe
monitoring_workbook_url='YOUR WORKBOOK HERE'

#Specify the asset tree and associated condition for which the webhook should be triggered
asset_tree='Compressor Monitoring'
monitoring_condition='High Temperature'

#Specify the lookback period and timezone to search for capsules
lookback_interval_hours=24
timezone=('US/Mountain')
c) Search for Event Capsules
#Set time range to look for new conditions
delta=timedelta(hours=lookback_interval_hours)
end=datetime.now(tz=pytz.timezone(timezone))
start=end-delta

#Parse the workbook information
workbook_id=spy.utils.get_workbook_id_from_url(monitoring_workbook_url)
worksheet_id=spy.utils.get_worksheet_id_from_url(monitoring_workbook_url)

#This block is optional, it stores search results for the conditions once instead of searching each time the
#script runs. Saves time if the search result is not expected to change. To reset, just delete the .pkl file.
pkl_file_name=asset_tree+'_'+monitoring_condition+'_'+workbook_id+'.pkl'
try:
    monitoring_conditions=pd.read_pickle(pkl_file_name)
except:
    monitoring_conditions=spy.search({'Name':monitoring_condition,
                                      'Type':'Condition',
                                      'Path':asset_tree},
                                     workbook=workbook_id,quiet=True)
    monitoring_conditions.to_pickle(pkl_file_name)

#Pull capsules present during the specified time range
events=spy.pull(monitoring_conditions,start=start,end=end,group_by=['Asset'],header='Asset',quiet=True)
number_of_events=len(events)
events
d) Send Message to Webhook using the pymsteams library if a Capsule is detected in the time range
#If capsules are present, trigger the webhook to compile and send a card to MS Teams
if number_of_events != 0:
    events.sort_values(by='Condition',inplace=True)

    #Create url for specific notification time-frame using Seeq URL builder
    investigate_start=start.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
    investigate_end=end.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
    investigate_url=f"https://explore.seeq.com/workbook/builder?startFresh=false"\
                    f"&workbookName={workbook_id}"\
                    f"&worksheetName={worksheet_id}"\
                    f"&displayStartTime={investigate_start}"\
                    f"&displayEndTime={investigate_end}"\
                    f"&expandedAsset={asset_tree}"

    #Create message information to be posted in channel
    assets=[]
    text=[]
    for event in events.itertuples():
        assets.append(event.Condition)
        #Capsule started before lookback window
        if pd.isnull(event[2]):
            if pd.isnull(event[3]) or event[4] == True:
                text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and is in Progress')
            else:
                text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
        #Capsule started during lookback window
        else:
            if pd.isnull(event[3]) or event[4] == True:
                text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and is in Progress')
            else:
                text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
    message='\n'.join(text)

    #Create MS Teams Card - see pymsteams documentation for details
    TeamsMessage = pymsteams.connectorcard(webhook_url)
    TeamsMessage.title(monitoring_condition+" Event Detected")
    TeamsMessage.text(monitoring_condition+' triggered in '+asset_tree+f' Asset Tree in the last {lookback_interval_hours} hours')
    TeamsMessageSection=pymsteams.cardsection()
    for i,value in enumerate(text):
        TeamsMessageSection.addFact(assets[i],value)
    TeamsMessage.addSection(TeamsMessageSection)
    TeamsMessage.addLinkButton('Investigate in Workbench',investigate_url)
    TeamsMessage.send()
Step 5: Test the Script
Execute the script ensuring at least one "High Temperature" capsule is present in the lookback duration. The events dataframe in Step 4c) will list capsules that were detected. If no capsules are present, adjust the lookback duration. If at least one capsule is detected, a notification will automatically be posted in the channel for which the Webhook has been configured.
Step 6: Schedule the Script to run on a specified Frequency
If the script operates as desired, configure a schedule for it to run automatically:
#Optional - schedule the above script to run on a regular interval
spy.jobs.schedule(f'every day at 6am')
The script will run on the specified interval and post a summary of "High Temperature" capsules/events that occur during the lookback period directly to the MS Teams channel. Refer to the spy.jobs.ipynb notebook in the "SPy Documentation" folder for additional information on scheduling options.
Attached is a copy of the full example script: Seeq MS Teams Notification Webhook - Example Script.ipynb
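One extra tip, not part of the original walkthrough: if no card shows up in Teams, it can help to verify the Webhook URL on its own before debugging the monitoring logic. A minimal sketch using the same pymsteams calls as above (the URL is a placeholder):
import pymsteams

# Quick sanity check: post a plain text message to the channel configured in Step 1.
# Replace the placeholder with the Webhook URL generated by MS Teams.
test_card = pymsteams.connectorcard('YOUR WEBHOOK HERE')
test_card.text('Test message from Seeq Data Lab')
test_card.send()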
  25. You can append to existing topics/documents (pages) by first pulling the Organizer topic and then adding to the respective page:
#Search for the topic (I used the Topic ID, but there are other options as well)
topic_search=spy.workbooks.search({'ID':'DFFC7BB8-9EE3-42DD-937A-2CE2FAAAB0E8'})

#Pull the topic associated with that ID. This creates an object of the Organizer that you can modify.
topic=spy.workbooks.pull(topic_search)

#Extract the "Tensile" page so you can modify it
tensile = topic[0].document('Tensile')

#Add image to the "Tensile" page
tensile.document.add_image(filename='Awesome_Chart.png', placement='end');

#Push modified Organizer back to Seeq
spy.workbooks.push(topic)
Let me know if this works!