Patrick

Seeq Team
  • Posts: 35
  • Joined
  • Last visited
  • Days Won: 15

Patrick last won the day on March 19

Patrick had the most liked content!

1 Follower

Personal Information

  • Company
    Seeq
  • Title
    Sr Analytics Engineer
  • Level of Seeq User
    Seeq Intermediate

Recent Profile Visitors

884 profile views

Patrick's Achievements

Contributor (5/14)

Recent Badges

  • Collaborator (Rare)
  • Conversation Starter
  • Dedicated (Rare)
  • One Year In
  • Reacting Well

Reputation: 23

Community Answers: 5

  1. Hi Leah - Would you mind submitting a support ticket via our portal at https://seeq.atlassian.net/servicedesk/customer/portal/3? If you can view the dashboard and have shared it with your colleagues via the share menu, yet they get this error, we will want to investigate a bit more.
  2. Hi Manoel - Can you try inserting a `.validValues()` after the "$reg" in the last line? If invalid samples are present in the signal, no interpolation will occur. This might also be worth a quick Office Hours visit to troubleshoot 1:1 - https://outlook.office365.com/book/SeeqOfficeHours@seeq.com/
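     For reference, the change would look something like this (a sketch only - the rest of your formula isn't shown here, so $reg simply stands in for the variable from your last line):
        // .validValues() strips invalid samples so interpolation can occur
        $reg.validValues()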
  3. Hi Nurhazx - In general, the average displayed in the Details Pane will depend on the time range selected in your display pane, i.e., in the above screenshot, 9/13/2023 to 3/14/2024. If you select a time range by left-clicking and dragging a section of the trend, as you did in your screenshot, the average will be representative of that selection - in the above, 12/1/2023 to 12/13/2023.
     The Details Pane average will also include the data removed in your monthly aggregation, since the `remove()` operator is applied inside the monthly aggregation rather than to the displayed signal itself. The calculated monthly average will always be based on the start and end time of the monthly condition, regardless of the display pane time range; it will of course also honor the remove() parameter.
     One note regarding remove(): if you are removing periods shorter than your signal's maximum interpolation, Seeq will interpolate (draw a line) across the "gap" of removed data. If you instead want the removed section to be shown as a true gap regardless of interpolation settings, you can use the within() operator combined with inverse():
        $signal = $bt.within($d.inverse())
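     To make the distinction concrete, here is a minimal sketch (assuming $signal is your source signal and $d is the condition whose periods you remove; the monthly aggregation line is an assumption about how your formula is set up). Each line is a separate formula:
        // Removes the data, but a gap shorter than the max interpolation will still be bridged:
        $signal.remove($d)
        // Forces a true gap over the removed periods, regardless of interpolation settings:
        $signal.within($d.inverse())
        // Monthly average that honors the removal (adjust the time zone as needed):
        $signal.remove($d).aggregate(average(), months("UTC"), startKey())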
  4. Hi EPH - We don't have the capability to remove short capsules in a Composite Condition via the GUI; however, it can be done easily in Formula like so:
        ($a.intersect($b))
          .removeShorterThan(1h)
     where $a and $b are the respective conditions and the parameter in removeShorterThan() specifies the minimum capsule duration (in the above example, anything shorter than 1 hour will be ignored). If you already have the Composite Condition set up, go to "Item Properties" (green i), click the down arrow next to "Duplicate", and select "Duplicate to Formula". This will pre-populate $a.intersect($b) with your conditions in Formula, and you can then append ".removeShorterThan(1h)" to create the new condition that ignores short capsules.
  5. There are instances where it's desirable to add or remove access control on a user-created datasource. For instance, you may want to push data from Seeq Data Lab to a datasource that is only available to a subset of users. Below is a tip on how to add entries to a Datasource Access Control List (ACL) using `spy.acl`.

     1) Identify the Datasource ID
     If not already known, retrieve the ID for the Datasource. This can be done via the Seeq REST API (accessible via the "hamburger" menu in the upper right corner of your browser window). Note this is different from the datasourceId identifier; it is a GUID like the one assigned in step 3 below. It can be retrieved via the GET /datasources endpoint under the Datasources section; the response body will contain the ID.

     2) Identify the User and/or Group ID
     We will also want to identify the User and/or Group ID. A Group ID, for example, can be retrieved via the GET /usergroups endpoint under the UserGroups section; the response body will contain the ID.

     3) Use spy.acl() to add the group to the Datasource Access Control List (ACL)
     Launch a Seeq Data Lab project and create a new .ipynb notebook.

     Assign the two IDs to variables:
        datasource_id = '65BDFD4F-1FA3-4EB5-B21E-069FA5A772DF'
        group_id = '19F1B7DC-69BE-4402-AD3B-DF89B1B9A1A4'

     (Optional) Check the current Access Control List for the Datasource using spy.acl.pull():
        current_access_control_df = spy.acl.pull(datasource_id)
        current_access_control_df.iloc[0]['Access Control']

     Add the group_id to the Datasource ACL using spy.acl.push():
        spy.acl.push(datasource_id, {
            'ID': group_id,
            'Read': True,
            'Write': True,
            'Manage': False
        }, replace=False)

     Note I've set replace=False, which means the group will be appended to the current access list. If the desire is to replace the entire existing ACL, this can be toggled to replace=True. Similarly, you can adjust the Read/Write/Manage permissions to fit your needs. For more information on spy.acl.push, reference the spy.acl.ipynb document located in the SPy Documentation folder, or access the docstring by executing help(spy.acl.push).

     (Optional) To confirm the correct ID has been added, re-pull the ACL via spy.acl.pull():
        current_access_control_df = spy.acl.pull(datasource_id)
        current_access_control_df.iloc[0]['Access Control']
  6. Hi Pat - Please take a look at the attached script. It takes a Worksheet template, maps the signals in the worksheet, maps the worksheet to the Organizer Template page, appends that page to the existing Organizer, and pushes all referenced content, including the templated Worksheet. The template used is what's included in our SPy documentation, but you should be able to load your own template as well. AppendingTemplateDocuments.ipynb
  7. Starting in R63, you have the ability to "deep copy" worksheets to automatically create new, independent items. Details here: https://support.seeq.com/kb/latest/cloud/worksheet-document-organization
  8. Solution: To address your first question of creating the t>1 hr after shut-in capsule, one method is below:
     1. Create a condition for ~5 min before shut-in and shift it by ~5 min to make sure you have the steady-state DHP prior to shut-in. This is done using Formula, with the following syntax:
        $ShutinCondition.beforeStart(5min).move(-5min)
        The beforeStart(5min) creates a new 5-minute-long capsule directly preceding your shut-in capsule. The move(-5min) function shifts this new "beforeStart" capsule backwards in time by 5 minutes.
     2. Calculate a new signal that represents the average DHP over the condition created in Step 1. Use Signal from Condition with the following inputs:
        Signal or condition = DHP
        Summary statistic = Average
        Bounding condition = condition from Step 1
        Where to place timestamp = Start
        Interpolation method = Step
     3. Calculate a new signal that is the signal created in Step 2 (average DHP before shut-in) + 10%. Use Formula and the following syntax:
        $signalFromStep2*(1.1)
     4. Calculate a new signal that is the delta between the DHP and this initial + 10% signal. Use Formula and the following syntax:
        $DHPsignal - $signalFromStep3
     5. Create a condition for when the delta signal created in Step 4 drops below zero. Use Value Search with < 0 as the input.
     6. Create a Composite Condition that joins the end of your existing "t<1 hr after shut-in" capsules to the start of the condition created in Step 5. The inputs to this Composite Condition will be:
        Condition A = t<1 hr after shut-in
        Condition B = delta signal drops below zero
        Logic = join
        Not inclusive of A
        Not inclusive of B
        Maximum capsule duration = long enough to capture the time back down to initial DHP
     For each of the two new signals you want to create, you can use Formula and the following syntax, where $ChooseCondition will be your t<1 hr and t>1 hr conditions in each formula:
        $DHP.within($ChooseCondition).runningAggregate(average(), hours())
     This will give you a new signal of the 1-hr rolling average of DHP only during the specified condition.
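     Spelling out that last Formula for both of the new signals (a sketch - $tBefore1hr and $tAfter1hr are placeholder names for your "t<1 hr" and "t>1 hr after shut-in" conditions):
        $DHP.within($tBefore1hr).runningAggregate(average(), hours())   // rolling average of DHP during t<1 hr
        $DHP.within($tAfter1hr).runningAggregate(average(), hours())    // rolling average of DHP during t>1 hr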
  9. Thanks for checking, Ivan. I'm not sure what is happening here. I've not been able to reproduce this issue on our server, and I also checked our ticketing system and am unable to find known issues related to the calculation you are trying to perform (the previous bug you encountered was related to min/max formulas and is fixed in 62.0.7). I suspect there might be an issue upstream of the calculation - are you performing any min/max or other operations on $s? It might be more expedient to sign up for an Office Hours session so we can work this 1:1: https://outlook.office365.com/owa/calendar/SeeqOfficeHours@seeq.com/bookings/
  10. Hi Ivan - Have you tried clearing the cache on the source signal via Item Properties in the Details Pane ("Clear Cached Values" under the Advanced section)?
  11. One suggestion from my colleague that you can also try, if you want datasource caching on, is to go to the Admin page for the datasource and toggle the cache (if it is already enabled, disable and re-enable it). This may achieve the same result without resorting to code.
  12. Hi David - While I'm not aware of an "easy button" to clear the cache on all items in a Workbench/Environment, you can programmatically clear the cache using the items API endpoint. One way to do this in Seeq Data Lab, given a list of signal IDs, is to loop through that list and call the API endpoint to clear the cache, with something like this:
        from seeq import spy, sdk

        # Gather the IDs of the signals whose cache should be cleared
        signal_id_list = spy.search({'Path': 'Path_to_AF_signals_that_need_to_be_cleared'},
                                    limit=max_number_of_expected_signals)['ID'].tolist()

        # Clear the cache for each signal and any dependent items
        items_api = sdk.ItemsApi(spy.client)
        for signal_id in signal_id_list:
            items_api.clear_cache(id=signal_id, clear_dependents=True)
     The above example will get all the signal IDs in the AF path of choice, then clear their cache and the cache of any dependent items used in Workbenches on your system. If you want to be more surgical, you could pull the item IDs from the Workbench (e.g., via spy.utils.get_analysis_worksheet_from_url(worksheet_url).display_items), but note you will want to get to the root of any calculations to make sure the cache clearing propagates through all dependencies. Would this work for your use case?
  13. Gotcha. The good news is that coming in R63 will be a way to force string signal values to be upper, lower, or title case. The bad news is that R63 is not yet released, though it's right around the corner.
     If your signal name is consistent, one way to tackle this would be to do a replace on the string to standardize the case:
        $string_signal
          .replace('BLU','Blu')
          .replace('LB','lb')
     You can then create a condition using .toCondition(), which will span the entire duration over which the signal value doesn't change:
        $replaced_string.toCondition()
     The result will be a single condition for that string state.
     As noted, this will be simpler and more robust starting in R63, where step 1 can simply be replaced by $string_signal.lower(), which will force the signal value to be all lower case (alternatively, .upper() and .toTitleCase() will also be available for your use). Let me know if this helps.
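     In other words, once R63 is available the two steps above could collapse into a single Formula (a sketch, assuming $string_signal is your string signal):
        $string_signal.lower().toCondition()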
  14. Hi protonshake - To convert a step signal into capsules, you can use the .toCapsules() or .toCondition() Formula function. The former will create a capsule between each pair of samples, regardless of whether the value has changed, whereas .toCondition() will create a continuous capsule while the value is not changing. Regarding your input signal and changing case, is the end goal for the above to have a single capsule for the duration shown (i.e., ignore the signal case), or do you want the capsules separated out each time the source signal's case changes?
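     As a rough sketch of the difference (assuming $step is your step signal):
        $step.toCapsules()    // one capsule between each pair of samples, even if the value repeats
        $step.toCondition()   // one capsule per stretch of unchanged value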
  15. Hi Tayyab - One alternative would be to disable inheritance on a folder in the Corporate Drive and then remove the "Everyone" group. To do so, create a folder in the Corporate Drive, click the "three dots" on the right, and select "Manage Permissions". You will see that "Everyone" has access. Click "Advanced" and then "Disable permission inheritance". You can now remove the "Everyone" group from the folder's access and add your collaborators, giving you a folder that only your collaborators (and not "Everyone") share.