Everything posted by John B

  1. There are two main ways to construct asset trees using SPy:
     • spy.assets, using Python class-based templates (Asset Trees 2 notebook in the documentation)
     • spy.assets.Tree, using simpler function-based construction (Asset Trees 1 notebook in the documentation)
     It sounds like your use case would be better served by the second option, spy.assets.Tree, which does not require you to predefine the possible 'slots' for signals/conditions. Take a read of the linked spy.assets.Tree documentation and see if that will work for you.
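     A minimal sketch of the spy.assets.Tree approach, assuming hypothetical asset names and a placeholder search query (adjust both to your system):

        from seeq import spy

        # Start a new tree with a single root asset (name is hypothetical)
        tree = spy.assets.Tree('My Plant', workbook='Asset Tree Example')

        # Add child assets under the root
        tree.insert(children=['Area A', 'Area B'], parent='My Plant')

        # Attach existing signals found via spy.search -- no predefined 'slots' needed
        signals = spy.search({'Name': 'Temperature', 'Datasource Name': 'Example Data'})
        tree.insert(children=signals, parent='Area A')

        tree.visualize()  # print the tree structure for a quick check
        tree.push()       # push the tree to Seeq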
  2. Hi Pat,
     While you cannot move files between SDL projects in this manner, there are a couple of options that I use to make this process easier:
     1. To copy numerous files/folders from one project to another, I like to make a zip archive of the files I want to move so I only have to download/upload one file between projects. You can do this by opening a terminal window in the source SDL project and using the 'zip' command to make a zip file. If you want to zip up the whole project except for the SPy Documentation folder, you can use this command:

           zip -r archive.zip . -x ".*" -x "SPy Documentation/*"

        You can then download the resulting archive.zip file from your source project and upload it into the destination project. In your destination project, open a new terminal window and use the following command to unzip the archive:

           unzip archive.zip

     2. Alternatively, if you've created a common set of scripts that you would like to keep in sync with version control, consider using git to do your tracking/copying. Just be aware that there is a moderate learning curve going down this path. Starting in R58 with JupyterLab, there is a GUI interface for git that lowers the barrier to entry.
  3. A question came up recently that I thought would be of wider interest: how can I prevent interpolation across batch/capsule boundaries?
     In batch processes, lab samples are often taken periodically throughout the course of a batch. When viewing these samples in Seeq, you may encounter times when samples interpolate between batches rather than just within a batch. In the image below, the periods highlighted in yellow correspond to this unwanted interpolation.
     The following formula is one way of preventing this interpolation, where $signal is your signal of interest and $condition is the condition (e.g. batches) that you want to prevent interpolation across:

        combineWith(
          $signal,
          Scalar.Invalid.toSignal().aggregate(startValue(), $condition.removeLongerThan(40h), startKey())
        )

     This results in the following signal that has our desired behavior:
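     If you would rather create this calculated signal programmatically from Data Lab instead of the Formula tool, a spy.push sketch along these lines should work (the item names below are placeholders, and the example assumes each search returns exactly one item):

        import pandas as pd
        from seeq import spy

        # Look up the source signal and the batch condition (placeholder names)
        signal = spy.search({'Name': 'Lab Sample Signal'})
        condition = spy.search({'Name': 'Batches'})

        formula = (
            'combineWith($signal, '
            'Scalar.Invalid.toSignal().aggregate(startValue(), '
            '$condition.removeLongerThan(40h), startKey()))'
        )

        # Push a calculated signal whose $signal/$condition parameters map to the search results
        spy.push(metadata=pd.DataFrame([{
            'Name': 'Lab Sample Signal - No Cross-Batch Interpolation',
            'Formula': formula,
            'Formula Parameters': {'$signal': signal, '$condition': condition}
        }]))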
  4. Glad the grid=None worked for you! Looking back at the behavior you experienced, I do think it would be worth investigating further. Could you send a support ticket to [email protected] with a link to this forum post and a screenshot of what those two signals you're trying to pull look like in Workbench?
  5. Hi Ruby, I don't know what your two signals look like, but this behavior can happen when your signals use linear interpolation but their samples are spaced further apart than the maximum interpolation. You have grid set to 1 day, which means Seeq will try to give you an interpolated value per day. If there is no valid interpolated value at that timestamp, a NaN is returned, as you see in your first example. In your second example I'm guessing you provided a start time where both signals have valid values, so no interpolation needs to occur.
     Again, I don't know what your use case is or what your signals look like, but one option would be to set grid=None to return the samples without any gridding applied, as in the sketch below.
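     For illustration, a spy.pull call along these lines returns the raw samples without gridding (the search query and the start/end dates are placeholders):

        from seeq import spy

        # Find the signals of interest (placeholder query)
        items = spy.search({'Name': 'Area A_Temperature'})

        # grid=None returns the stored samples as-is, with no interpolation onto
        # a regular grid, so no NaNs are introduced by gridding
        df = spy.pull(items, start='2023-01-01', end='2023-01-08', grid=None)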
  6. Seeq has functions that allow easy manipulation of the starts and ends of capsules, including functions like afterStart(), move(), and afterEnd(). One limitation of these functions is that they expect scalar inputs, which means all capsules in the condition have to be adjusted by the same amount (e.g. move all capsules 1 hour into the future). There are cases when you want to adjust each capsule dynamically, for instance using the value of a signal to determine how to adjust the capsule.
     Solution: This post shows how to accomplish a dynamic, signal-based version of afterStart(). This approach can be modified slightly to recreate other capsule adjustment functions.
     Assume I have an arbitrary condition 'Condition' and a signal 'Capsule Adjustment Signal'. I want to find the first X hours after each capsule start, where X is the value of 'Capsule Adjustment Signal' at the capsule start. I can do this with the formula below:

        $condition
          .afterStart(3h) // has to be longer than an output capsule will ever be
          .transform($capsule -> {
            $newStartKey = $capsule.startKey()
            $newEndKey = $capsule.startKey() + $signal.valueAt($capsule.startKey())
            capsule($newStartKey, $newEndKey)
          })

     This formula only takes two inputs: $condition and $signal. It goes through each capsule in the condition and manipulates its start and end keys. In this case, the start key is the same as the original, but the new end key is set to the original start key plus the value of my signal. This formula produces the following purple condition:
     Some notes on this formula:
     • The output capsules must be within the original capsules. Therefore, I have included .afterStart(3h) in the formula, which ensures the original capsules will always be larger than the output capsules. If you don't do this, you may see the following warning on your item, which indicates the formula is throwing away capsules:
     • Your capsule adjustment signal must have units of time.
     • To accomplish other capsule adjustments, look at changing the definitions of the $newStartKey and $newEndKey variables to suit your needs.
  7. As you've seen, forecastLinear() can only take a constant training period (in your case the last three days). To get more flexibility than this, you may want to explore using the Prediction tool with a time counter (time since last top-up) as your input signal.
     As a similar example, I have a pressure signal that shows several increasing pressure cycles. I want to predict when my pressure is going to reach a certain threshold using only the data from the current run, rather than a fixed training duration.
     First, I can generate a time counter that we're going to use as the input signal to my prediction tool:

        timeSince($runs.growEnd(10d), 1h)

     where $runs is a condition in which each capsule is an individual filter run. In your case you would want to generate a condition whose capsules go from top-up to top-up. The growEnd() part of this formula ensures the time counter extends into the future.
     I can then generate a condition that contains only my current filter run using something like this:

        $runs.join(past().inverse().afterStart(1s), 10d, false)

     where 10d is the maximum duration a run can last.
     We can then plug this all into the Prediction tool, using our time counter as the input signal and limiting the prediction to only use data from the Current Run. Now you can use a simple Value Search to find when your predicted pressure goes above/below a certain threshold. Hope this helps!
  8. Hi Sivaji, have you played around with using the %run magic command in a Data Lab notebook to execute Python files? For example, I can run test_spy_search.py using:

        %run test_spy_search.py

     This passes through my authentication, so I don't have to log in to execute SPy functions (see the sketch below for an example of such a script). Would that cover your use case, or is there additional functionality you're getting out of using the terminal?
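     As an illustration, a script like test_spy_search.py might look something like this sketch (the search query is just a placeholder); when launched with %run from a notebook, it reuses the notebook's existing Data Lab authentication:

        # test_spy_search.py -- hypothetical example script
        from seeq import spy

        # Inside Data Lab the spy module is already authenticated for the
        # logged-in user, so no spy.login() call is needed when run via %run.
        results = spy.search({'Name': 'Area A_Temperature'})
        print(results[['Name', 'Type']].head())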