
Leaderboard

Popular Content

Showing content with the highest reputation since 08/12/2022 in all areas

  1. One more comment on the post above: the within() function creates additional samples at the beginning and the end of each capsule, and these extra samples affect the calculations performed in the histogram. To avoid this, you can use the remove() and inverse() functions to remove the parts of the data where the condition is not met. In contrast to within(), the remove() function will interpolate across the removed gaps if the distance between the remaining samples is less than the Maximum Interpolation setting of the signal. To avoid this when using remove(), you can combine the signal that remove() creates with a signal that adds an "invalid" value at the beginning of each capsule, so that the interpolation cannot be calculated:
     $invalidsAtCapsuleStart = $condition.removeLongerThan(1wk).toSamples($c -> sample($c.startKey(), SCALAR.INVALID), 1s)
     $signal.remove($condition.inverse()).combineWith($invalidsAtCapsuleStart)
     You can see the different behaviours of the three described methods in the screenshot. (A SPy sketch for creating these three variants from Data Lab is shown after this list.)
    1 point
  2. Hi VpJ, If you build it out using any of the solutions mentioned above, it is relatively easy to make it available in all workbenches.
    1 point
  3. Hi VpJ, Currently, asset groups are only available in a single workbench. You can see some differences between asset groups and asset trees (including the workbook vs. global scoping difference) here: https://seeq.atlassian.net/wiki/spaces/KB/pages/1590165555/Asset+Groups#Asset-Trees-vs.-Asset-Groups
    1 point
  4. Hi SBC, applying the within() function to your signal will result in a new signal that only keeps those parts of the signal where the condition is met. The filtered signal can then be used for the histogram. Regards, Thorsten
    1 point
  5. Hi Jesse. Thanks for this input. While text boxes cannot yet be tied to date ranges in Organizer, our team has been thinking about ways to improve the process of entering and discovering comments around data. What some of our customers do in this scenario is create an Organizer Topic for each week; they can then navigate to the Topic for a past week when they want to see the comments they entered alongside the data. Some users duplicate that report each new week, clear out comments that no longer apply as they enter new ones, and step the date ranges forward to the current week. That would be my recommendation for the time being. Another route you could explore is Annotations in Workbench. They have a date/time context and can appear on the trend as a little chat bubble, but the information added there is not yet available in Organizer. We hope to bring the concepts of Annotations in Workbench and Comments entered in Organizer together at some point in the future. We might reach out for more feedback from you on these ideas.
    1 point
  6. Starting in Seeq Version R52, Data Lab notebooks can be run on a schedule, which opens up a world of interesting new possibilities. One of those possibilities is a simple script that pulls data from a web API source and pushes it into the Seeq data cache on a schedule. This can be a great way to prototype a data connection prior to building a full-featured connector using the Connector SDK. The example notebook attached below pulls from the USGS, which publishes river levels, temperatures, turbidity and more, and pushes those signals for multiple sites into the Seeq system. The next logical step would be a notebook that organizes these signals into an asset tree. Curious to see what this inspires others to do and to connect to. If there are additional public data sources of interest, put them in the thread for ideas. (A minimal sketch of this pull-and-push pattern is shown after this list.) USGS Upload Example.ipynb
    1 point
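
A note on the Formula examples in item 1: the same three filtering variants can also be created programmatically from a Data Lab notebook. The sketch below is a minimal, hypothetical illustration using SPy; the signal and condition names, the search path, and the workbook name are placeholders, not items from the original posts, and the formulas themselves are taken from the discussion above.

    import pandas as pd
    from seeq import spy

    # Hypothetical lookups: replace the names and path with your own items.
    signal = spy.search({'Name': 'Temperature',
                         'Path': 'Example >> Cooling Tower 1 >> Area A'})
    condition = spy.search({'Name': 'High Temperature',
                            'Path': 'Example >> Cooling Tower 1 >> Area A'})

    # The three variants discussed in item 1, pushed as calculated signals.
    metadata = pd.DataFrame([
        {
            # Method 1: within() keeps data inside the capsules but adds
            # extra samples at each capsule boundary.
            'Name': 'Filtered (within)',
            'Formula': '$signal.within($condition)',
            'Formula Parameters': {'$signal': signal, '$condition': condition},
        },
        {
            # Method 2: remove() + inverse() drops data outside the capsules
            # but may interpolate across the removed gaps.
            'Name': 'Filtered (remove)',
            'Formula': '$signal.remove($condition.inverse())',
            'Formula Parameters': {'$signal': signal, '$condition': condition},
        },
        {
            # Method 3: add an invalid sample at each capsule start so no
            # interpolation can bridge the gap.
            'Name': 'Filtered (remove + invalids)',
            'Formula': ('$invalids = $condition.removeLongerThan(1wk)'
                        '.toSamples($c -> sample($c.startKey(), SCALAR.INVALID), 1s)\n'
                        '$signal.remove($condition.inverse()).combineWith($invalids)'),
            'Formula Parameters': {'$signal': signal, '$condition': condition},
        },
    ])

    spy.push(metadata=metadata, workbook='Condition Filtering Examples')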
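
For the scheduled Data Lab example in item 6, here is a minimal sketch of the pull-and-push pattern it describes, assuming the USGS Instantaneous Values web service (waterservices.usgs.gov) and the SPy module. The site number, parameter code, and workbook name are illustrative choices and are not taken from the attached notebook; scheduling the run itself is configured through the Data Lab scheduling feature mentioned in the post.

    import pandas as pd
    import requests
    from seeq import spy

    # Illustrative choices: one USGS site and one parameter
    # (00010 = water temperature in degrees Celsius). The attached notebook
    # handles multiple sites and parameters.
    SITE = '09380000'
    PARAMETER = '00010'

    # Request the last day of instantaneous values as JSON.
    response = requests.get(
        'https://waterservices.usgs.gov/nwis/iv/',
        params={'format': 'json', 'sites': SITE,
                'parameterCd': PARAMETER, 'period': 'P1D'},
        timeout=30,
    )
    response.raise_for_status()

    # The IV service nests the readings under value -> timeSeries -> values.
    readings = response.json()['value']['timeSeries'][0]['values'][0]['value']

    # Build a DataFrame with a datetime index and one named column per signal.
    df = pd.DataFrame(readings)
    df.index = pd.to_datetime(df['dateTime'], utc=True)
    df = df[['value']].astype(float)
    df.columns = [f'USGS {SITE} Water Temperature']

    # Push the samples into Seeq's data cache; when the notebook runs on a
    # schedule, each run adds the most recent day of data to the signal.
    spy.push(data=df, workbook='USGS Example')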
This leaderboard is set to Los Angeles/GMT-07:00