
SBC


Posts posted by SBC

  1. In a particular use case of the Prediction function, the end user is interested in the regression fit on the data in the current display range only (i.e., the regression training window should auto-pick the current display range instead of requiring a manual update every time the display range is changed).

    For example, this will be useful in a typical XY plot analysis with a curve-fit function overlaid on the data points.

    Is there any simple trick with capsules/conditions that can be used to make the training window always default to the current display/investigation range?

    Alternatively, can we use a date/time formula that the user can adjust from the details pane? (A rough SPy sketch for reading the current display range is included after this post.)

     

    [screenshot attachments]
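    In case it is useful as a starting point, below is a minimal SPy sketch for reading a worksheet's current display range from Data Lab. The workbook name is a placeholder, and whether that range can then be fed back into the Prediction training window is exactly the open question here.

        from seeq import spy

        # Find the analysis workbook by name (placeholder name) and pull its definition
        search_results = spy.workbooks.search({'Name': 'My XY Plot Analysis'})
        workbooks = spy.workbooks.pull(search_results)

        # Analysis worksheets expose their current display range as a dict
        worksheet = workbooks[0].worksheets[0]
        display_range = worksheet.display_range
        print(display_range['Start'], display_range['End'])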

  2. I have two notebooks that have to be run on a cadence of every 6 months: the first notebook's output (image files) is used by the second notebook (to push the image files to an Organizer topic).

    I scheduled two jobs with separate spy.jobs.schedule function calls using the URL of the corresponding notebook, staggering the schedule times an hour apart. Attached below are code snippets showing the two cells that scheduled the jobs separately.

    However, when the scheduled time was reached, Datalab executed the second notebook only (the Job_Results folder contained the execution snapshot of the second notebook corresponding to Job2_notebook_url only).

    Is this a limitation of the current Datalab architecture, i.e., can only a single notebook be scheduled to run per Datalab project? (A possible driver-notebook workaround sketch is included after the screenshots below.)

     

    [screenshot: code cell scheduling the first notebook]

     

    [screenshot: code cell scheduling the second notebook]
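    While this is open, one possible workaround sketch: schedule only a single 'driver' notebook that executes both notebooks in sequence. The notebook file names, the use of papermill, and the cron string below are all assumptions about the project layout and environment, not a confirmed Data Lab pattern.

        from seeq import spy
        import papermill as pm  # assumption: papermill is available in the Data Lab project

        # Run the two notebooks in order so the image files produced by the first
        # exist before the second notebook pushes them to the Organizer topic.
        pm.execute_notebook('Notebook1_generate_images.ipynb', 'Notebook1_output.ipynb')
        pm.execute_notebook('Notebook2_push_to_topic.ipynb', 'Notebook2_output.ipynb')

        # Schedule only this driver notebook; the Quartz cron string (midnight on
        # Jan 1 and Jul 1) is an assumption to adapt to the 6-month cadence.
        spy.jobs.schedule('0 0 0 1 1,7 ?')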

  3. I am working collaboratively with another team member on a workbench and a Data Lab project, both of which are currently shared with full access rights ('Manage').

    Can the spy.push syntax use a shared folder path? (I understand spy.push also supports a workbook ID input argument, but I wanted to confirm whether the path-context method is feasible for shared workbenches; a short sketch of both forms follows below.)

     

    [screenshot attachment]
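    For reference, this is a minimal sketch of the two forms being compared; the names and ID are placeholders, and whether the '>>' path form resolves correctly for a shared folder is exactly the question.

        from seeq import spy
        import pandas as pd

        # A simple formula-based signal, just so there is something to push
        metadata = pd.DataFrame([{'Name': 'Example Calc', 'Formula': 'sinusoid()'}])

        # Form 1: workbook referenced by its ID (placeholder)
        spy.push(metadata=metadata, workbook='WORKBOOK-ID-HERE', worksheet='From Data Lab')

        # Form 2: workbook referenced by a 'Folder >> Workbook' path; how this maps
        # to a *shared* folder is the part to confirm
        spy.push(metadata=metadata, workbook='Shared Folder Name >> Workbook Name', worksheet='From Data Lab')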

  4. Is there detailed documentation for the available keyword arguments to pick for the Metric attribute properties? For example, it was not straightforward to know which keyword to pick to get Statistic = Value at End (the AggregationFunction is named differently: 'endValue').

    In other words, can we assume all the attribute properties will have the same names as the GUI drop-down menu items?

    Please see the screenshots attached to illustrate the question (and a small property-discovery sketch after them):

    [screenshot attachments]
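    One hedged way to discover the internal keyword rather than guessing from the GUI label: configure the metric once in Workbench, then pull all of its stored properties with spy.search and read off how 'Value at End' is actually recorded. The metric name below is a placeholder, and treating 'Metric' as a Type filter is an assumption (the underlying item type is ThresholdMetric).

        from seeq import spy

        # Search for the GUI-configured metric and return every stored property
        results = spy.search({'Name': 'My Metric', 'Type': 'Metric'}, all_properties=True)

        # Inspect the property names/values, e.g. how the GUI's "Value at End"
        # statistic is stored (reported as 'endValue' in my case)
        print(results.columns.tolist())
        print(results.T)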

  5. Hello Sanman - I had a follow-up question on this same subject.

    I am able to 'unarchive' a worksheet by using spy.pull, knowing the specific worksheet ID (fortunately, because it was used in an Organizer topic).

    I want to confirm whether there is a spy function that can retrieve 'archived' worksheets inside a workbook.

    Scenario: when I use the 'Step-by-step...' code to push an asset tree into an existing workbook, it archives the pre-existing worksheets. I ran into this in one of my use cases (the Temp Mixing points asset tree I was showing you a while ago).

    Unarchiving worksheets can be a critical workflow step in my Data Lab use case whenever the asset tree needs an update from the end users (a rough SDK-based sketch is included after the screenshot below).

     

    Adding some code snippets on how I am doing it today...

    Thank you.

     

    [screenshot: code snippet showing the current approach]
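    Separately from the spy.pull approach in the screenshot, here is a rough alternative sketch that goes through the SDK underneath SPy. The IDs are placeholders; ItemsApi.set_property is how I understand archiving is toggled, and the is_archived flag on get_worksheets is an assumption I have not verified.

        from seeq import spy
        from seeq.sdk import ItemsApi, WorkbooksApi, PropertyInputV1

        workbook_id = 'WORKBOOK-ID-HERE'    # placeholder
        worksheet_id = 'WORKSHEET-ID-HERE'  # placeholder, e.g. recovered from an Organizer topic link

        # Unarchive a specific worksheet by flipping its 'Archived' property
        items_api = ItemsApi(spy.client)
        items_api.set_property(id=worksheet_id, property_name='Archived', body=PropertyInputV1(value=False))

        # Assumption to verify: list archived worksheets of a workbook so their IDs
        # do not have to be known up front
        workbooks_api = WorkbooksApi(spy.client)
        archived = workbooks_api.get_worksheets(workbook_id=workbook_id, is_archived=True)
        for ws in archived.worksheets:
            print(ws.id, ws.name)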

     

     

  6. 1 hour ago, Sanman Mehta said:

    Seeq stores all results in cache (most cases), so once you have done calculations once, they should not need recalculation for 10 years every time and resulting signal can be instantly used in further calculations.  This should help alleviate calculation load from 2nd time onwards such that only new calculation would be for the last 1 day.  You should not have to run scripts for 10 years everyday. 

    Sanman

    Thank you, Sanman. I believe there is a difference in speed when the calculated signal is pushed as an "asset attribute" (and cached for each asset) vs. kept as a worksheet formula (the cached formula has to re-populate when swapping assets).

  7. I need to calculate a simple difference (delta) between two time-series signals feeding from a continuous historian into Seeq over the past history (10 years) and keep it available in Seeq for other aggregate calculations. This has to be done over several assets in an asset tree (similar to the Example tree above). Currently, my workbooks calculate 'just-in-time' whenever we open the workbook, which causes a lot of lag in getting the outputs because they must first pull both signals and calculate the delta.

    Will the above approach speed up my worksheet calculation performance if the delta value is pre-pushed with Seeq Data Lab on a regular basis (say, daily)? I am also planning to combine this with the asset tree signal push script shared by a Seeq team member in another post. (A minimal sketch of the pre-push idea is below.)
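    For concreteness, a minimal sketch of what 'pre-pushing the delta' could look like from Data Lab; the signal names and workbook scope are placeholders, and this would be repeated (or looped) per asset in the tree.

        from seeq import spy
        import pandas as pd

        # Look up the two historian signals for one asset (placeholder names)
        a = spy.search({'Name': 'Pump A Inlet Temperature'}).iloc[0]['ID']
        b = spy.search({'Name': 'Pump A Outlet Temperature'}).iloc[0]['ID']

        # Push the delta as a stored, formula-based calculated signal so it can be
        # cached and reused instead of being rebuilt in each worksheet on open
        metadata = pd.DataFrame([{
            'Name': 'Pump A Temperature Delta',
            'Formula': '$a - $b',
            'Formula Parameters': {'$a': a, '$b': b}
        }])
        spy.push(metadata=metadata, workbook='Delta Calculations')  # workbook name is a placeholder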

     

  8. Can a Seeq condition be added as an "attribute" (process value) to each asset by some method (either while creating the asset tree via CSV import or after the asset tree is created)?

    Context:

    I am looking to apply the same condition formula to different assets while keeping certain parameters flexible. For example, I am checking whether $signal > 7 for Pump A vs. $signal > 10 for Pump B, where $signal is the same calculation across all pumps. (A minimal Tree-based sketch follows below.)
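    A minimal sketch of what I have in mind using the spy.assets Tree interface; the tree, pump, and signal names plus the thresholds are placeholders, and it assumes a signal named 'Flow' already exists (or gets inserted) under each pump asset.

        from seeq import spy

        # Same condition logic, different threshold per asset
        thresholds = {'Pump A': 7, 'Pump B': 10}

        tree = spy.assets.Tree('Pumps', workbook='Pump Monitoring')  # placeholder names
        for pump, limit in thresholds.items():
            tree.insert(children=pump, parent='Pumps')
            # Condition attribute referencing the per-asset 'Flow' signal
            tree.insert(
                name='High Flow',
                formula=f'$signal > {limit}',
                formula_parameters={'$signal': 'Flow'},
                parent=pump
            )
        tree.push()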
