
Joe Reckamp

Seeq Team
  • Posts: 128
  • Days Won: 36

Everything posted by Joe Reckamp

  1. Hi Kate, The easiest way is to rename the tag by creating a Formula that simply references the original signal: $signal That way it will just reference your original tag, but you can give it whatever name you would like.
  2. Hi Kemi, Using the derivative can be very helpful for finding peaks in your data. Your data appears to increase without much noise (in terms of small peaks on the way toward the main peak), so I doubt you would need much data cleansing first. You may be able to simply take a derivative using the Formula tool: $signal.derivative() That will show a positive value while you are increasing and a negative value while you are decreasing, which means the point at which it crosses from positive through zero to negative is the high point of your peak. Therefore, I would find when the derivative is positive (> 0) using a Value Search. You'll also want to make a Periodic Condition for "Daily" that represents when you want your days to start and end. From there, you should be able to use Signal from Condition to count the ends (remembering that the end of the positive section is when the peak occurs) each day.
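The derivative-then-count-ends approach can be sketched offline with pandas. The data here is hypothetical; in Seeq the derivative, Value Search, and Signal from Condition steps do the equivalent work continuously:

```python
import numpy as np
import pandas as pd

# Hypothetical data: a smooth signal that rises to one main peak per day.
idx = pd.date_range("2024-01-01", periods=48, freq="h")
t = np.arange(48)
signal = pd.Series(np.sin(2 * np.pi * (t % 24) / 24), index=idx)  # peaks at hour 6

# $signal.derivative(): approximate with a first difference.
deriv = signal.diff()

# Value Search for derivative > 0, then count the ends of the positive
# stretches per day, as Signal from Condition would.
positive = deriv > 0
ends = positive & ~positive.shift(-1, fill_value=False)  # last sample of each positive run
peaks_per_day = ends.groupby(ends.index.normalize()).sum()
```

Each `True` in `ends` marks the sample where the derivative is about to cross from positive to negative, i.e. the peak.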
  3. You should be able to do a Formula using toCondition to turn that into capsules: $signal.toCondition('Batch ID')
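For intuition, here is what toCondition does, sketched with pandas on a hypothetical Batch ID string signal: each contiguous run of one value becomes a capsule carrying that value as a property.

```python
import pandas as pd

# Hypothetical string signal of batch IDs sampled hourly.
idx = pd.date_range("2024-01-01", periods=6, freq="h")
batch_id = pd.Series(["A", "A", "B", "B", "B", "C"], index=idx)

# Label each contiguous run of one value, then collapse runs to capsules.
run = (batch_id != batch_id.shift()).cumsum()
capsules = pd.DataFrame({
    "Batch ID": batch_id.groupby(run).first(),
    "Start": batch_id.index.to_series().groupby(run).first(),
    "End": batch_id.index.to_series().groupby(run).last(),
})
```

The result is one row (capsule) per batch, which is what the condition produced by toCondition looks like in Seeq.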
  4. What does your Batch Data look like? Do you have a Batch ID that tells you what batch is currently running?
  5. Hi Matia, If you have already made a capsule for the batch, you can simply use the timesince function in Formula to make a counter for the current batch duration: timeSince($condition, 1h)
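A minimal offline illustration of what timeSince($condition, 1h) computes, using a hypothetical batch start time (in Seeq the capsule supplies the start automatically):

```python
import pandas as pd

# Hypothetical capsule: the current batch started at 08:00.
batch_start = pd.Timestamp("2024-01-01 08:00")
samples = pd.date_range("2024-01-01 08:00", "2024-01-01 12:00", freq="h")

# timeSince($condition, 1h): hours elapsed since the capsule started,
# evaluated at each sample, so the counter ramps up during the batch.
counter = (samples - batch_start) / pd.Timedelta("1h")
```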
  6. Hi Matia, Can you clarify a bit as to exactly what you are looking for? For example, when Batch 2 is running, are you wanting to see the remaining time for Batch 2 assuming it would operate at the same duration as Batch 1 (e.g. (batch 1 duration) - (running time of batch 2))? Or are you just wanting to see the previous batch's duration? Or something else?
  7. Hi Pat, Can you try making the spy.search line include all properties so that hopefully it brings in the maximum duration for you: RampCond_Tags = spy.search({'Name': '*_RampCond'}, all_properties=True)
  8. Hi Pat, Did you spy.push the dataframe back to Seeq after the Data Lab screenshots you show here? You set the Archived value in Data Lab, but it doesn't actually occur in Seeq until you push that metadata back to Seeq. So you'll want to add a line of spy.push(metadata=RampCond_Tags)
  9. Here's a quick example of setting the width to 5 for all signals on the display for a single worksheet:
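A rough sketch of that workflow follows. The spy.workbooks calls are commented out because they need a live Seeq server, the workbook name is made up, and the 'Line Width' column name in the display items dataframe is an assumption:

```python
import pandas as pd

# Sketch only -- these calls require a live Seeq server:
# workbooks = spy.workbooks.pull(spy.workbooks.search({'Name': 'My Analysis'}))
# worksheet = workbooks[0].worksheets[0]
# display_items = worksheet.display_items

# Stand-in for worksheet.display_items:
display_items = pd.DataFrame({"Name": ["Temperature", "Pressure"],
                              "Line Width": [1, 1]})

display_items["Line Width"] = 5  # set every signal on the display to width 5

# Write the dataframe back and push the workbook (server required):
# worksheet.display_items = display_items
# spy.workbooks.push(workbooks)
```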
  10. Hi Surendra, You could modify the widths programmatically in Seeq Data Lab by looping through each worksheet and changing the display items dataframe in Python, but there is currently no way to do this across multiple worksheets directly in Workbench.
  11. You would just change the third value (the hours field) to a 1: spy.jobs.schedule('0 0 1 ? * 6#1 *')
  12. Hi Ruby, This should work instead: spy.jobs.schedule('0 0 0 ? * 6#1 *')
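For reference, spy.jobs.schedule accepts a Quartz-style cron expression. A plain-Python breakdown of the fields (nothing Seeq-specific here):

```python
# Quartz cron fields, left to right:
# seconds, minutes, hours, day-of-month, month, day-of-week, year.
expr = "0 0 0 ? * 6#1 *"
fields = dict(zip(
    ["seconds", "minutes", "hours", "day_of_month", "month", "day_of_week", "year"],
    expr.split(),
))
# "6#1" means the first (#1) occurrence of day-of-week 6 in the month
# (Friday, since Quartz numbers Sunday=1 through Saturday=7), and "?"
# means day-of-month is left unspecified.
```

So '0 0 0 ? * 6#1 *' fires at midnight on the first Friday of each month; changing the hours field to 1 moves it to 1:00 AM.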
  13. Ultimately this depends on the network setup and what your IT department may allow. If Data Lab is hosted in your company's network, it may be possible to get access to and directly pull from SharePoint, but would likely need to be provided access through the firewalls by IT to get there. If Data Lab is hosted as SaaS, it is unlikely that an IT department would allow that access into their network.
  14. If you check out the docstring for the spy.push function, you should see an option for "replace". If you set the values to np.NaN and then push with the replace around the timestamps you want to remove, you should be able to remove that data in those time frames.
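A sketch of the NaN-then-replace idea, assuming the signal was already pulled into a timestamp-indexed DataFrame. The spy.push line is commented out since it needs a server, and its metadata argument is abbreviated:

```python
import numpy as np
import pandas as pd

# Hypothetical pulled signal with a bad stretch at 02:00-03:00.
idx = pd.date_range("2024-01-01", periods=6, freq="h")
data = pd.DataFrame({"My Signal": [1.0, 2.0, 9.9, 9.9, 5.0, 6.0]}, index=idx)

# Blank out the bad stretch by setting it to NaN:
data.loc["2024-01-01 02:00":"2024-01-01 03:00", "My Signal"] = np.nan

# Then push back with replace covering those timestamps (server required;
# see the spy.push docstring for the exact replace/metadata shapes):
# spy.push(data=data, metadata=..., replace={'Start': '2024-01-01 02:00',
#                                            'End': '2024-01-01 03:00'})
```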
  15. Generally, pushing to a different workbook should result in two separate items only scoped to that particular workbook. Therefore, what you are seeing may have been a bug. If you want the same signal in both workbooks, you can push with workbook=None to make the item globally scoped instead.
  16. Hi Surendra, Please see the following Knowledge Base article on how to set up email notifications: https://seeq.atlassian.net/wiki/spaces/KB/pages/2242740294/Data+Lab+Notifications
  17. This is the example of the formulas I sent you: It starts with the signal in blue. I then calculated the timeSince value in orange, followed by the hourly average for each day in dark blue. The spot where it seems you are running into issues is the green signal, which calculates the latest hourly average. Notice that all the other signals stop at "now", about halfway through the screen. In order to see the result of the latest hourly average, you'll have to set your display range to view the remainder of the day beyond "now", as I have done above, since that signal only appears in the future remaining part of the day. From there, the last formula I mentioned creates the red extrapolation at the top that continues the trend using that hourly average. The formula that you suggested: $hourlyaverage.aggregate(average(), periods(1hour, 5min), endKey()) is not going to perform the same function as the green signal above and should not be used as a replacement.
  18. You can use the formula you suggested for the latest hourly average, but it would not provide the same result. In that case, you would be averaging an hour's worth of data rather than the entire last day of data. In addition, the result would need to extend until the end of the day instead of stopping at the latest data point.
  19. Does the display range go out into the future? That formula would only have data beyond the "now" timestamp.
  20. Hi Surendra,
      1. You will need to first create a capsule that represents the sawtooth schedule. If it's always a 6am reset time, I would recommend creating a Periodic Condition that is daily, running from 6am to 6am in your time zone. From there, the formula function timeSince can calculate the amount of time that has passed in each capsule. Therefore, you could use the following formula to get the time passed since the last reset (ending the calculation at "now"): timeSince($condition, 1h).within(past()) From there, you can simply do the calculation you want to get the hourly average: $signal/$timesince
      2. First, you can get the latest hourly average value and project it forward until the end of the day using: $hourlyaverage.aggregate(endValue(true), $condition, durationKey()).within(not past()) Then you can multiply that by the hours in the day to get the extrapolation going forward: $LatestHourlyAverage * timeSince($condition, 1h)
      3. For this, you'll probably want to use the Reference Profile tool: https://seeq.atlassian.net/wiki/spaces/KB/pages/142770210/Reference+Profile
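The arithmetic behind the hourly average and its end-of-day extrapolation can be checked with a toy example (the numbers here are made up; in Seeq the formulas above compute this continuously):

```python
import pandas as pd

# Hypothetical daily totalizer: by "now" (14:00) the counter reads 70 units,
# and the sawtooth reset to zero at 6am.
now = pd.Timestamp("2024-01-01 14:00")
reset = pd.Timestamp("2024-01-01 06:00")
total_so_far = 70.0

hours_elapsed = (now - reset) / pd.Timedelta("1h")   # timeSince -> 8.0
hourly_average = total_so_far / hours_elapsed        # $signal/$timesince -> 8.75

# Project forward over the full 6am-to-6am day (24 hours after the reset):
projected_end_of_day = hourly_average * 24           # -> 210.0
```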
  21. Hi VpJ, Currently, asset groups are only available in a single workbench. You can see some differences between asset groups and asset trees (including that workbook/global scoping) here: https://seeq.atlassian.net/wiki/spaces/KB/pages/1590165555/Asset+Groups#Asset-Trees-vs.-Asset-Groups
  22. Hi Jessica, I would recommend opening your file in Notepad or some sort of text editor. From the error message, it looks like the columns in the file may be semi-colon separated (;) instead of comma separated (,). If that's the case, you'll want to change the "Column delimiter" option in the Import CSV and try again.
  23. Hi Surendra, This post also works in reverse by simply defining the $reset period to when you want the sawtooth to reset:
  24. Hi Adewale, When you create an OData endpoint for PowerBI, there are two separate links for the Capsule Summary Table and the Sample Table. The Sample Table is equivalent to the "Grid" tab on the Excel export.