Everything posted by Joe Reckamp

  1. Hi SBC, Interactive content has been added in recent releases. Tables were the first item introduced in R54 so that option will only be available on R54 or later. Other visuals (e.g. XY Plot, Trend) have been released in even more recent versions. In terms of what interactivity does, please check out the What's New in R55 page located here, which describes the interactivity of tables and table-based charts: https://seeq.atlassian.net/wiki/spaces/KB/pages/2218229877/What+s+New+in+R55#Trend-lines,-pie-charts-and-bar-graphs,-oh-my! Regards, Joe
  2. Hi SBC, All users that access Organizer must also login to authenticate to Seeq so they must have an account. Regards, Joe
  3. Hi John, As you mention, currently the metrics cannot be moved around the table like the statistics can be. However, the order of the metrics is based on the lane of the metric in the details pane. If you change the order of those lanes and refresh/change time range, the order of the metrics will change.
  4. Hi Filip, In the Release Notes, you should see that it mentions this change (see excerpt below). You should still be able to click on the links to move to the previous views even in edit mode. "Workbench Analysis Editor Changes: In R55.0.0, the journal will no longer be able to be toggled from edit mode to view mode. When the analysis is in edit mode the journal will always be in edit mode, and when the analysis is in view only mode the journal will be in view only mode."
  5. Hi Annie, You can simply string the setProperty operator along to add multiple properties: $o2.removeLongerThan(40h).setProperty('OpeName', $o, startvalue()).setProperty('Batch', $b, startvalue())
  6. Another approach that may be a bit simpler than the reduce function to solve the same problem would be to first take the maximum of each section of the increasing counter tag and place it at the start of the counting, which should assign it in time to the correct batch. This can be done with the following formula: $counter.aggregate(maxvalue(), ($counter == 0).growend(30d), startkey()) After that, you can then simply sum up the individual sections for each batch: $signal.aggregate(sum(), $batch, durationkey())
  7. Hi Matthias, I would recommend checking out this post, which follows the same process: When looking at that post, it sounds like you'll want the $reset variable to be equal to a monthly condition.
  8. We often get asked how to install the Add-ons from the open-source Seeq Add-on Gallery (https://seeq12.github.io/gallery/). First, you'll need a Seeq administrator for your company, as an administrator is required to install a new Add-on. Then, each Add-on has specific installation instructions in its documentation page, linked from the Add-on Gallery. For example, you can click on "Documentation for Correlation Add-on" to get to the documentation for the Correlation Add-on. On the left hand side, there's an installation section. From there, simply ensure all the requirements are met and follow the instructions listed. If you run into any issues or want to file a feature request, use the GitHub page also linked from the Seeq Add-on Gallery to file an issue.
  9. Hi vadeka, Now that you've removed the "Inactive" data, the issue is likely that either (1) your maximum interpolation is not long enough to interpolate between those points or (2) there is actual invalid data (not just data saying "Inactive", but a data point that is not visible because it doesn't have a value). Check out this post under Question 4 for how to solve this. In terms of your second question, if you want to view the data points on the trend, then check out how to adjust the "Samples" in the Customize menu for the Details Pane, which allows you to turn on the individual data points: https://support.seeq.com/space/KB/149651519/Adjusting+Signal+Lanes%2C+Axes+%26+Formatting. If you want to view it in a Table, then using $signal.tocapsules() will give you a capsule per data point so that you can view that in a Condition Table (https://support.seeq.com/space/KB/1617592515#Condition-Tables). Note that there are two similar functions: tocapsules() provides a capsule per data point (even if consecutive data points are identical), whereas tocondition() provides a capsule per change in value, meaning it will not show repeat capsules if consecutive data points have the same value.
  10. Hi Sam, Unfortunately, there’s no great workaround for this as Seeq doesn’t know which asset to swap out. Therefore, the only way to “fix” the issue so that you can swap the formula would be to make $prof not asset specific. There are two options for this that I think could do what you are looking for: (1) If the signal underneath $prof is available outside of the asset tree, use that signal instead so that it doesn’t have the asset reference that you do not want to swap. For example, when you connect to PI, you often have the common name in the asset tree (PI AF), but it is really a duplicate of a more complex PI tag name that it is referencing. Using that reference, which is not in AF, will allow you to swap only the target portion that is linked to the asset tree and keep $prof as a value that is not asset specific. (2) If #1 is not an option, you could use spy.push to create a new signal that doesn’t have an asset assignment. If you do a spy.pull of the data from the timeframe being used for $prof, spy.push that data back into Seeq, and then point $prof to that Seeq Data Lab signal instead (which will not be specific to a particular asset), you should be able to swap the calculation (a minimal sketch of this follows below).
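For reference, a minimal Seeq Data Lab sketch of option #2 above; the signal name, new column name, and date range are placeholders to substitute with your own $prof signal and timeframe:

    from seeq import spy

    # Find the asset-scoped signal currently feeding $prof (placeholder name)
    items = spy.search({'Name': 'Profile Signal'})

    # Pull its data over the timeframe that $prof uses (placeholder dates)
    data = spy.pull(items, start='2021-01-01', end='2021-02-01')

    # Rename the column so the push creates a new, asset-free signal
    data.columns = ['Profile Signal (No Asset)']
    push_results = spy.push(data=data)

    # The new signal's ID is in push_results; point $prof at this signal
    # and the rest of the formula can then be asset swapped
    print(push_results[['Name', 'ID']])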
  11. We often get this question in a variety of forms when users either see no data or have gaps in their data set that they don't expect. This article should hopefully give you a bit of a troubleshooting guide for things that you can look for before having to reach out for assistance.

Question 1: Can you find the signal/condition or is it completely missing?
If you cannot find the signal or condition by searching in the Data tab, it likely means that the data is not yet connected to Seeq. Please reach out to your company's Seeq administrator or champion for assistance in making the datasource connection. Note: Make sure that when you are searching in the Data tab, you are not within a tree and all filters are off. If you are unsure of how to adjust these, you can hit the "reset" button to turn off all the filters prior to searching.

Question 2: Is the signal giving a red triangle error when added to the display?
Generally a red triangle error with a message that usually starts with "Client data source exception" indicates an issue with the datasource. This can be seen for a variety of reasons, the most common of which are:
  • Datasource was disconnected
  • Remote agent was disconnected
  • Tag is not set up correctly in the historian/database/query
If you are seeing red triangle errors on tags that you would expect to be working, the first thing I would recommend is checking whether the issue is specific to the signal or condition or if the entire database is causing issues. You can check this by trending other tags from that same database to see if they are also giving red triangle errors and/or checking the datasource connection to see if that datasource is still connected. If you observe this issue, I would recommend reaching out to your company's Seeq administrator or champion to see if there's a known reason for the issue. If there's no known issue and you're the Seeq administrator for your company, reach out to your partner or Seeq support for assistance. Please make sure you attach logs (https://seeq.atlassian.net/wiki/spaces/KB/pages/114395156/Viewing+Logs+and+Sending+Log+Files+to+Seeq) to your request as they often have information about the cause of the disconnection. If the remote agent disconnected, often a restart will cause the remote agent to reconnect.

Question 3: Does the signal have no data with a yellow "i" button in the Details Pane?
The yellow "i" button means that Seeq is not receiving any errors from the datasource. Clicking on the yellow "i" will give more details, but the most common message is "There is no data in your current display range". The first thing to try is to zoom out: try zooming out to a month, a year, or whatever makes sense for your data set. It's also imperative to make sure you are looking at the right time frame. Make sure the month/day/year are appropriate for where you would expect to have changes in your data. If zooming out finds data, but you're displaying gaps in that data set, see Question 4. If you still cannot find any data, check the raw datasource for data in that same time range. If the datasource is showing data, but Seeq is not, there are a couple of options that will generally require your company's Seeq administrator to troubleshoot:
  • Check the connector configuration to ensure that there are no settings that could be filtering out the data.
  • If the datasource is a PI collective with multiple members, check that Seeq is connected to the desired member of the collective: https://support.seeq.com/space/KB/113067567/OSIsoft+PI+Connector#PI-Collective-Member-Priority

Question 4: Does the signal show data, but there are gaps in the data set (or near now)?
If the data points are all available, but the points are not all connected, that means there is nothing wrong with the datasource connection, but that the interpolation between data points is not as desired. There is much more information about interpolation in the Knowledge Base (https://seeq.atlassian.net/wiki/spaces/KB/pages/734560556/Interpolation), but the two most common issues and their solutions are:
  • Each signal has a maximum interpolation, which is the maximum amount of time between sample points across which it will connect the points. If the length of time between sample points exceeds that maximum interpolation, then a gap will appear in the data. To fix this, if you have access, you can override the maximum interpolation in Item Properties (see https://seeq.atlassian.net/wiki/spaces/KB/pages/141623511/Item+Properties) or by creating a formula similar to the one below, changing out the 7 days for however long is necessary to fill in the gap observed in your data. Please note that having an excessively long maximum interpolation can result in reduced performance for any calculations built on top of that signal.
        $signal.setmaxinterpolation(7d)
  • If increasing the maximum interpolation did not fill in the gap in the data (or the maximum interpolation is already long enough), it likely means there are invalid data points in the gap. In this case, invalids can be removed with a formula similar to the one below:
        $signal.validvalues()
  12. We often get asked how to use the various API endpoints via the Python SDK, so I thought it would be helpful to write a guide on how to use the API/SDK in Seeq Data Lab.

As some background, Seeq is built on a REST API that enables all the interactions in the software. Whenever you are trending data, using a Tool, creating an Organizer Topic, or doing any of the other various things you can do in the Seeq software, the software is making API calls to perform the tasks you are asking for. From Seeq Data Lab, you can use the Python SDK to interact with the API endpoints in the same way as users do in the interface, but through a coding environment.

Whenever users want to use the Python SDK to interact with API endpoints, I recommend opening the API Reference via the hamburger menu in the upper right hand corner of Seeq. This will open a page that shows you all the different sections of the API with various operations beneath them. For some orientation, there are blue GET operations, green POST operations, and red DELETE operations. Although these may be obvious: the GET operations are used to retrieve information from Seeq without making any changes - for instance, you may want to know the dependencies of a Formula, so you might GET the item's dependencies with GET/items/{id}/dependencies. The POST operations are used to create or change something in Seeq - as an example, you may create a new workbook with the POST/workbooks endpoint. And finally, the DELETE operations are used to archive something in Seeq - for instance, deleting a user would use the DELETE/users/{id} endpoint.

Each operation endpoint has model example values for the inputs or outputs in yellow boxes, along with any required or optional parameters that need to be filled in, and then a "Try it out!" button to execute the operation. For example, if I wanted to get the item information for the item with the ID "95644F20-BD68-4DFC-9C15-E4E1D262369C" (if you don't know where to get the ID, you can either use spy.search in Python or use Item Properties: https://seeq.atlassian.net/wiki/spaces/KB/pages/141623511/Item+Properties), I could fill in the id parameter and execute the operation. Using the API Reference provides a nice easy way to see what the inputs are and what format they have to be in. As an example, if I wanted to post a new property to an item, the Model shows that there is a very specific syntax format required. I typically recommend testing your syntax and operation in the API Reference to ensure that it has the effect you are hoping to achieve with your script before moving into Python to program that function.

How do I code the API Reference operations into Python?

Once you know what API endpoint you want to use and the format for the inputs, you can move into Python to code it using the Python SDK. The Python SDK comes with the seeq package that is loaded by default in Seeq Data Lab, or it can be installed for your Seeq version from PyPI if you are not using Seeq Data Lab (see https://pypi.org/project/seeq/). Therefore, to import the SDK, you can simply do the following command:

    from seeq import sdk

Once you've done that, you will see that if you start typing "sdk." and hit tab after the period, it will show you all the possible commands underneath the SDK. Generally the first things you are looking for are the ones that end in "Api"; there should be one for each section observed in the API Reference, and you will need to log in to each using "spy.client". If I want to use the Items API, then I would first log in using the following command:

    items_api = sdk.ItemsApi(spy.client)

Using the same trick as mentioned above with tab after "items_api." will provide a list of the possible functions that can be performed on the ItemsApi. While the Python functions don't have the exact same names as the operations in the API Reference, it should hopefully be clear which Python function corresponds to each API endpoint. For example, if I want to get the item information, I would use "get_item_and_all_properties". Similar to the tab trick mentioned above, you can use shift+tab with any function to get the documentation for that function. Opening the documentation fully with the "^" icon shows that this function has two possible parameters, id and callback, where the callback is optional but the id is required, similar to what we saw in the API Reference above. Therefore, in order to execute this command in Python, I can simply add the id parameter (as a string, as denoted by "str" in the documentation) using the following command:

    items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C')

In this case, because I executed a GET function, I get back all the information about the item that I requested. This same approach can be used for any of the API endpoints that you desire to work with.

How do I use the information output from the API endpoint?

Oftentimes, GET endpoints are used to retrieve a piece of information so you can use it in another function later on. From the previous example, maybe you want to retrieve the value for the "name" of the item. In this case, all you have to do is save the output as a variable, change it to a dictionary, and then request the key you desire. For example, first save the output as a variable; in this case, we'll call it "item":

    item = items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C')

Then convert the output "item" into a dictionary and request whatever key you would like:

    item.to_dict()['name']

A consolidated example putting these pieces together follows below.
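As a minimal end-to-end sketch of the steps above in Seeq Data Lab (the item name passed to spy.search is a placeholder; substitute one of your own items):

    from seeq import spy, sdk

    # Look up the item's ID with spy.search instead of hard-coding it
    # (placeholder item name)
    search_results = spy.search({'Name': 'Area A_Temperature'})
    item_id = search_results.iloc[0]['ID']

    # Log in to the Items section of the API with the spy client
    items_api = sdk.ItemsApi(spy.client)

    # GET the item and all of its properties
    item = items_api.get_item_and_all_properties(id=item_id)

    # Convert the output to a dictionary and pull out the values you need
    print(item.to_dict()['name'])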
  13. Hi JWu, The easiest way would be to use the Manual Signal tool (https://support.seeq.com/space/KB/1795391628/Manual+Signal) to create a new signal where the values line up with when the capsules are present and then setting that as a property on the capsules.
  14. Depending on whether you want the numbers to be 1-6 or just a unique identifier, another common approach is to use the timestamp of the start of the capsules. In this case, you could do something like: $condition.setproperty('ID', $condition.tosignal('start').tostring(), startvalue())
  15. Hi Rezwan, You could do an aggregation to get just the final sum value instead. $signal.aggregate(sum(), $condition, startkey())
  16. Hi Niranjan, Just so you're aware, Seeq does not allow duplicate timestamps for signals if that's the way you want to import the data, but it does allow duplicate timestamps for capsules if you import the data as a condition. It's often best to import this data as a condition anyway, with each of the columns being a property on each capsule, as all of the information is related and that retains the relationships between the various columns. I'm not sure what you are asking with the final part - you can choose what date you use for Seeq or use whatever might be in the database already. Are the Sample ID and Component already unique?
  17. There is a .tostep(<max interpolation>) function, which is the easiest method. However, if it does not interpolate even though the data is within the max interpolation, that likely means you have invalid values somewhere in the signal. You can remove those with .validvalues()
  18. Hi Niranjan, Data must have a timestamp in order for it to be possible to upload into Seeq. Usually for LIMS data, there is a timestamp that is either when the sample was taken (less common) or when the sample test was performed (more common). Even if that timestamp does not correspond in time to when the batch was being run, we can move that LIMS data back in time if there is some sort of identifier that tells it which batch or data it may belong to. However, it does need to have a timestamp in order to be uploaded into Seeq.
  19. Hi Rezwan, When you have a signal that is step interpolated, it may count twice (when the first sample is dropped and when the last sample is dropped). I'd suggest doing $signal.todiscrete() prior to the summation and verify that there is only one sample point per value shown on your screen. If so, then complete the running sum equation.
  20. Hi KYM, You could make a Simple Scorecard Metric that provides the total duration of time on the screen by making a signal that was 1.tosignal() and then totalizing that signal in the Scorecard Metric in hours, but it would show as a trend across the screen instead of just the raw value.
  21. Hi Rezwan, You can use the ".within()" function to limit a signal in a formula to only the time period within a condition. For example, you could do: ($signal1*$signal2).within($condition)
  22. Hi robin, Is there something that identifies the end of the sub-batches, or is it just a new unique first 7 characters that indicates a new batch?
  23. Hi robin, I would start by creating a capsule that represents the entire time frame that you want to sum up the values for. If you want those two sub-batches together in the summation, you'd likely want to either do a regex Value Search that would capture both of those values (but not any others) or, more simply, do a Value Search for each of the individual sub-batch values and then perform a "join" using Composite Condition to join the first one to the second. That should provide a single capsule that spans both sub-batches. After that, you need to do your summation. The easiest method would be to use Signal from Condition to perform a sum of the green signal during the capsule created above and place that wherever you want the value to be (I'd suggest using Duration as the timestamp so that you can see the statistic span both sub-batches). Repeat the same Signal from Condition for the purple signal as well. After that, you can use Formula with $GreenSignalFromCondition + $PurpleSignalFromCondition to complete the summation you'd like. Regards, Joe
  24. Hi Adam, If you're referring to pushing data, then per the docstring (SHIFT + TAB) for spy.push, you'll need to set the column name to the ID of the Seeq signal for it to write to an existing signal: "To push to an existing signal, set the column name to the Seeq ID of the item to be pushed." A short sketch of this follows below.
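As a minimal sketch of that docstring note (the column name below is a placeholder ID; use the Seeq ID of your own target signal):

    import pandas as pd
    from seeq import spy

    # Sample data to write; because the column name is the Seeq ID of an
    # existing signal, spy.push writes these samples to that signal
    # rather than creating a new one
    data = pd.DataFrame(
        {'95644F20-BD68-4DFC-9C15-E4E1D262369C': [1.0, 2.0, 3.0]},
        # localize the timestamps (UTC here) to avoid timezone ambiguity
        index=pd.to_datetime(['2021-01-01', '2021-01-02', '2021-01-03']).tz_localize('UTC'),
    )

    spy.push(data=data)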
  25. Hi Filip, You can turn the discrete signals into step signals by doing $signal.tostep(1d) such that it will interpolate all data points within a day of each other and then the custom grid will work effectively.