
Leaderboard

Popular Content

Showing content with the highest reputation since 08/19/2021 in all areas

  1. Have you ever wanted to scale calculations in Seeq across different assets without having to delve into external systems or write code to generate asset structures? Is your process data historian a giant pool of tags that you need organized and named in a human-readable format? Do you want to take advantage of Seeq features such as Asset Swapping and Treemaps, but do not have an existing asset structure to leverage? If the answer is yes, Asset Groups can help!

Beginning in Seeq version R52, Asset Groups can be configured as collections of items such as Equipment, Operating Lines, KPIs, etc. via a simple point-and-click tool. Users can leverage Asset Groups to easily organize and scale their analyses directly in Workbench, as well as apply Seeq asset-centric tools such as Treemaps and Tables across assets.

What is an Asset Group?

An Asset Group is a collection of assets (listed in rows) and associated parameters called "Attributes" (listed in columns). If your assets share common parameters, Asset Groups can be a great way to organize and scale analyses instead of re-creating each analysis separately. Assets can be anything users want them to be: a piece of equipment, a geographical region, a business unit, a KPI, etc. Asset Groups serve to organize and map associated parameters (Attributes) for each asset in the group. Each asset can have one or several Attributes mapped to it. Attributes are parameters that are common to all the assets and are mapped to tags from one or many data sources. Examples of Asset/Attribute combinations include:

Pump: Suction Pressure, Discharge Pressure, Flow, Curve ID, Specific Gravity
Heat Exchanger: Cold Inlet T, Cold Outlet T, Hot Inlet T, Hot Outlet T, Surface Area
Production Line: Active Alarms, Widgets per Hour, % of Time in Spec

It is very important to configure the name of the common Attribute to be the same for all assets, even if the underlying tag or datasource is not. Using standard nomenclature for Attributes (columns) enables Seeq to later compare and seamlessly "swap" between assets without having to worry about the underlying tag name or calculation. (The original post includes "Do This" / "Do Not Do This" screenshots contrasting consistent and inconsistent Attribute naming.)

How to Configure Asset Groups in Seeq

Let's create an Asset Group to organize a few process tags from different locations. While Asset Groups support pre-existing data tree structures (such as OSIsoft PI Asset Framework), the following example assumes the tags are not structured and are added manually from a pool of existing process tags.

NOTE: Asset Groups require an Asset Group license. For versions prior to R54, they also have to be enabled in the Seeq Administrator Configuration page. Contact your Seeq Administrator for details.

1) In the "Data" tab, create a new Asset Group.

2) Specify the Asset Group name and add Assets. You can rename the assets by clicking on the respective name in the first column. In this case, we'll define Locations 1-3.

3) Map the source tags.
a. Rename "Column 1" by clicking on the text and entering a new name.
b. Click on the (+) icon to bring up the search window and add the tag corresponding to each asset. You can use wildcards and/or regular expressions to narrow your search.
c. Repeat mapping of the tags for the other assets until there's a green checkmark in each row.
d. Additional source tags can be added by clicking the "Add Column" button in the toolbar. In this case, we will add a column for Relative Humidity and map a tag for each of the Locations.

4) Save the Asset Group.

5) Trend using the newly created Asset Group. The newly created Asset Group will now be available in the Data pane and can be used for navigation and trending.
a. Navigate to "Location 1" and add the items to the display pane by clicking on them. You can also change the display range to 7 days to show a bit more data.
b. Notice the Assets column now listed in the Details pane showing which asset each signal originates from. We can also add the asset path to the display pane by clicking on Labels and checking the desired display configuration settings (Name, Unit of Measure, etc.).
c. Swap to Location 2 (or 3) using the Asset Swapping functionality. In the Data tab, navigate up one level in the Asset Group, then click the Swap icon to swap the display items to a different location. Notice how Seeq automatically swaps the display items.

6) Create a "High Temperature" Condition. Calculations configured from Asset Group items will "follow" that asset, which can help in scaling analyses. Let's create a "High Temperature" condition.
a. Using "Tools -> Identify -> Value Search", create a condition for when the Temperature exceeds 100 (see the formula sketch at the end of this post).
b. Click "Execute" to generate the condition.
c. Notice the condition has been generated and is automatically affiliated with the asset from which the signals were selected.
d. Swap to a different asset and notice the "High Temperature" condition will swap using the same condition criteria, but with the signals from the swapped asset.

Note: Calculations can also be configured in the Asset Group directly, which can be advantageous if different condition criteria need to be defined for each asset. This topic will be covered in Part 2 of this series.

7) Create a Treemap. Asset Groups enable users to combine monitoring across assets using Seeq's Treemap functionality.
a. Set up a Treemap for the assets in the group by switching to the Treemap view in the Seeq Workbench toolbar.
b. Click on the color picker for the "High Temperature" condition to select a color to display when that condition is active in the given time range. (If you have more than one condition in the Details pane, repeat this step for each condition.)
c. A Treemap is generated for each asset in the Asset Group. Signal statistics can optionally be added by configuring the "Statistics" field in the toolbar. Your treemap may differ depending on the source signal and time range selected. The treemap will change color if the configured condition triggers during the time period selected.

This covers the basics for Asset Groups. Please check out Part 2 on how to configure calculations in Asset Groups and add them directly to the hierarchy.
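As referenced in step 6: the Value Search condition is equivalent to a one-line formula. A minimal sketch, assuming the mapped Asset Group attribute is named $temperature (comparing a signal against a scalar yields a condition in Seeq Formula):

$temperature > 100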
    3 points
  2. Check out the Data Lab script and the video that walks through it to automate data pull -> apply ML -> push results to Workbench in an efficient manner. Of course, you can skin the cat many different ways, but this gives a good way to do it in bulk. Use case details: apply ML to the Temperature signals across the whole Example asset tree on a weekly basis. For your case, you can build your own asset tree, filter the relevant attributes instead of Temperature, and set the spy.jobs.schedule frequency to whatever works for you. Let me know if there are any unanswered questions in my post or demo. Happy to update as needed.

apply_ml_at_scale.mp4
Apply ML on Asset Tree Rev0.ipynb
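The attached notebook has the full details, but the skeleton of the pull -> apply -> push pattern looks roughly like this. A minimal sketch, assuming Seeq's Example demo tree and Seeq Data Lab scheduling; the schedule string, date range, grid, and the stand-in "model" (a simple z-score) are illustrative, not the notebook's exact code:

import pandas as pd
from seeq import spy

# Run this notebook automatically once a week (Seeq Data Lab job scheduling)
spy.jobs.schedule('every monday at 6 am')

# 1) Pull: find all Temperature signals in the tree and grab the last week of data
items = spy.search({'Path': 'Example', 'Name': 'Temperature'})
data = spy.pull(items,
                start=pd.Timestamp.now() - pd.Timedelta(days=7),
                end=pd.Timestamp.now(),
                grid='15min')

# 2) Apply ML: placeholder transformation; a real model would be fit/loaded here
results = (data - data.mean()) / data.std()

# 3) Push: write the results back to a workbench for consumption
results.columns = [f'{c} - ML Result' for c in results.columns]
spy.push(data=results, workbook='Apply ML at Scale')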
    3 points
  3. Statistical Process Control (SPC) can give production teams a uniform way to view and interpret data to improve processes and identify and prevent production issues. Control charts and run rule conditions can be created in Seeq to monitor near-real-time data and flag when the data indicates abnormal or out-of-control behavior.

Creating a Control Chart

1. Find the signal of interest and a signal that can be used to detect the current production grade (ideally a grade code or similar signal; if this does not exist for your product, you can use process set points to stitch together a calculated grade signal). Use .toCondition() in Formula to create a condition for each change in the Grade_Code signal (see the sketch following this list).

2. Check the data for normality and other statistical assumptions before proceeding. To check for normality, use the Histogram tool in Seeq. Find more information on the Histogram tool in the Seeq Knowledge Base. Note that for this analysis we are using a subgroup size of one and assuming normality.

3. Determine the methodology to use to create the average and standard deviation for each grade. In this case, we will identify periods after start-up when the process was in control using a Manual Condition, and select these times across each grade.

4. Calculate the mean and standard deviation for each grade, based on the times the process was in control. Choose a time window that captures all capsules created for the in-control period. To use an unweighted mean, use .toDiscrete() before calculating the average. The same calculation used for the mean can be used for the standard deviation, by replacing average() with stdDev().

//Define the time period that contains all in-control capsules
$capsule = capsule('2020-10-16T00:00-00:00', '2021-09-21T00:00-00:00')
//Narrow down data to when the process is in control, use keep() to filter the condition by the specific grade code capsule property
$g101 = $allgrades.keep('Grade Code', isMatch('Grade 101')).intersect($inControl)
$g102 = $allgrades.keep('Grade Code', isMatch('Grade 102')).intersect($inControl)
$g103 = $allgrades.keep('Grade Code', isMatch('Grade 103')).intersect($inControl)
//Create the average based on the times the product is in control, use .toDiscrete() to create an unweighted average
$g101_ave = $viscosity.remove(not $g101).toDiscrete().average($capsule)
$g102_ave = $viscosity.remove(not $g102).toDiscrete().average($capsule)
$g103_ave = $viscosity.remove(not $g103).toDiscrete().average($capsule)
//Create the average for all grades in one signal using splice(), use keep() to filter the condition by the specific grade code capsule property
//use within() to show the average only during the condition
0.splice($g101_ave, $allgrades.keep('Grade Code', isMatch('Grade 101')))
 .splice($g102_ave, $allgrades.keep('Grade Code', isMatch('Grade 102')))
 .splice($g103_ave, $allgrades.keep('Grade Code', isMatch('Grade 103')))
 .within($allgrades)

5. Use the mean and standard deviation to create +/-1 sigma, +/-2 sigma, and +/-3 sigma limits (the 3-sigma limits are sometimes called upper and lower control limits). Here is an example of creating the +2 sigma limit:

//Add 2*standard deviation to the mean to create the $plus2sd limit, use within() to show limits only during the time periods of interest
($mean + (2*$standardDeviation)).within($grade_code)

6. Overlay the standard deviation limits and mean with the signal of interest by placing them on one lane and one y-axis, then remove the standard deviation from the display.
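Here is the sketch of the grade condition from step 1, assuming the grade signal is named $gradeCode. toCondition() creates a capsule per value change, and the argument shown stores the signal's value in a capsule property named 'Grade Code', which the keep() filters above rely on:

$allgrades = $gradeCode.toCondition('Grade Code')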
Creating Run Rules

Once the control chart is created, run rule conditions can be created to detect instability and the presence of assignable cause in the process. In this example, the Western Electric run rules are used, but other run rules can be applied using similar principles.

Western Electric Run Rules:
Run Rule 1: Any single data point falls outside the 3-sigma limit from the centerline.
Run Rule 2: Two out of three consecutive points fall beyond the 2-sigma limit, on the same side of the centerline.
Run Rule 3: Four out of five consecutive points fall beyond the 1-sigma limit, on the same side of the centerline.
Run Rule 4: Nine consecutive points fall on the same side of the centerline.

7. The following formulas can be used to create a condition for each run rule:

Run Rule 1:

//convert to a step signal
$signalStep = $signal.toStep()
//find when one data point goes outside the plus 3 sigma or minus 3 sigma limits
($signalStep < $minus3sd or $signalStep > $plus3sd)
//set the property on the condition
.setProperty('Run Rule', 'Run Rule 1')

Run Rule 2: *Note that the function toCapsulesByCount() is available in Seeq versions R54+

//Create a step-interpolated signal to keep from capturing the linear interpolation between sample points
$signalStep = $signal.toStep()
//create capsules for every 3 samples ($toCapsulesByCount) and for every sample ($toCapsules)
$toCapsulesByCount = $signalStep.toCapsulesByCount(3, 3*$maxinterp) //set the maximum interpolation based on the longest time you would expect between samples
$toCapsules = $signalStep.toCapsules()
//Create conditions for when the signal is not between the +/-2 sigma limits
//separate upper and lower to capture when the rule violations occur on the same side of the centerline
$condLess = ($signalStep <= $minus2sd)
$condGreater = ($signalStep >= $plus2sd)
//within 3 data points ($toCapsulesByCount), count how many sample points are not between the +/-2 sigma limits
$countLess = $signal.toDiscrete().remove(not $condLess).aggregate(count(), $toCapsulesByCount, durationKey())
$countGreater = $signal.toDiscrete().remove(not $condGreater).aggregate(count(), $toCapsulesByCount, durationKey())
//Find when 2+ out of 3 are outside of the +/-2 sigma limits
//by setting the count as a property on $toCapsulesByCount and keeping only capsules greater than or equal to 2
$RR2below = $toCapsulesByCount.setProperty('Run Rule 2 Violations', $countLess, endValue())
 .keep('Run Rule 2 Violations', isGreaterThanOrEqualTo(2))
$RR2above = $toCapsulesByCount.setProperty('Run Rule 2 Violations', $countGreater, endValue())
 .keep('Run Rule 2 Violations', isGreaterThanOrEqualTo(2))
//Find every sample point capsule that touches a run rule violation capsule
//Combine upper and lower into one condition and use merge to combine overlapping capsules and to remove properties
$toCapsules.touches($RR2below or $RR2above).merge(true)
 .setProperty('Run Rule', 'Run Rule 2')

Run Rule 3: *Note that the function toCapsulesByCount() is available in Seeq versions R54+

//Create a step-interpolated signal to keep from capturing the linear interpolation between sample points
$signalStep = $signal.toStep()
//create capsules for every 5 samples ($toCapsulesByCount) and for every sample ($toCapsules)
$toCapsulesByCount = $signalStep.toCapsulesByCount(5, 5*$maxinterp) //set the maximum interpolation based on the longest time you would expect between samples
$toCapsules = $signalStep.toCapsules()
//Create conditions for when the signal is not between the +/-1 sigma limits
//separate upper and lower to capture when the rule violations occur on the same side of the centerline
$condLess = ($signalStep <= $minus1sd)
$condGreater = ($signalStep >= $plus1sd)
//within 5 data points ($toCapsulesByCount), count how many sample points ($toCapsules) are not between the +/-1 sigma limits
$countLess = $signal.toDiscrete().remove(not $condLess).aggregate(count(), $toCapsulesByCount, durationKey())
$countGreater = $signal.toDiscrete().remove(not $condGreater).aggregate(count(), $toCapsulesByCount, durationKey())
//Find when 4+ out of 5 are outside of the +/-1 sigma limits
//by setting the count as a property on $toCapsulesByCount and keeping only capsules greater than or equal to 4
$RR3below = $toCapsulesByCount.setProperty('Run Rule 3 Violations', $countLess, endValue())
 .keep('Run Rule 3 Violations', isGreaterThanOrEqualTo(4))
$RR3above = $toCapsulesByCount.setProperty('Run Rule 3 Violations', $countGreater, endValue())
 .keep('Run Rule 3 Violations', isGreaterThanOrEqualTo(4))
//Find every sample point capsule ($toCapsules) that touches a run rule violation capsule
//Combine upper and lower into one condition and use merge to combine overlapping capsules and to remove properties
$toCapsules.touches($RR3below or $RR3above).merge(true)
 .setProperty('Run Rule', 'Run Rule 3')

Run Rule 4:

//Create a step-interpolated signal to keep from capturing the linear interpolation between sample points
$signalStep = $signal.toStep()
//create capsules for every 9 samples ($toCapsulesByCount) and for every sample ($toCapsules)
$toCapsulesByCount = $signalStep.toCapsulesByCount(9, 9*$maxinterp) //set the maximum interpolation based on the longest time you would expect between samples
$toCapsules = $signalStep.toCapsules()
//Create conditions for when the signal is either greater than or less than the mean
//separate upper and lower to capture when the rule violations occur on the same side of the centerline
$condLess = $signalStep.isLessThan($mean)
$condGreater = $signalStep.isGreaterThan($mean)
//Find when the last 9 samples are fully within the greater-than or less-than condition
//use merge to combine overlapping capsules and remove properties
$toCapsules.touches(combineWith($toCapsulesByCount.inside($condLess), $toCapsulesByCount.inside($condGreater))).merge(true)
 .setProperty('Run Rule', 'Run Rule 4')

8. To make it easier to use these run rules in Seeq, custom formula functions can be created for each run rule using the User-Defined Formula Function Editor Add-on, which can be found in Seeq's Open Source Gallery along with user guides and installation instructions. For example, Run Rule 2 can be simplified to the following formula using the User-Defined Formula Functions Add-on with Seeq Data Lab:

$signal.WesternElectric_runRule2($minus2sd, $plus2sd)

9. If desired, all run rules can be combined into one condition in Formula using combineWith():

combineWith($runRule1, $runRule2, $runRule3, $runRule4)

10. As a final step, a table can be created detailing the run rule violations in the trend view. Here, the header column is set as 'Capsule Property' >> 'Run Rule', and the capsule properties start, end, and duration were added as columns. The last value of the signal 'Grade_Code' was also added as a column to the table. For more information on Tables, see the Seeq Knowledge Base.
    3 points
  4. This use case became simpler in R21.0.41 with the addition of the now() function in Formula. You no longer need to do the first two steps of defining a search window for now and then using a signal to identify the last measured timestamp. Instead, you just need to use the now() function to define the capsules and create your condition:

//Create conditions representing the last 7, 14, and 365 days
$Last7DayCapsule = capsule(now()-7d, now()).setProperty("Time","Last 7d")
$Last14DayCapsule = capsule(now()-14d, now()).setProperty("Time","Last 14d")
$Last365DayCapsule = capsule(now()-365d, now()).setProperty("Time","Last 365d")
condition(370d, $Last7DayCapsule, $Last14DayCapsule, $Last365DayCapsule)
    3 points
  5. Users of OSIsoft Asset Framework often want to filter elements and attributes based on the AF Templates they were built on. At this time, though, the spy.search command in Seeq Data Lab only filters on the properties Type, Name, Description, Path, Asset, Datasource Class, Datasource ID, Datasource Name, Data ID, Cache Enabled, and Scoped To. This post discusses a way in which we can still filter elements and/or attributes based on AF Template.

Step 1: Retrieve all elements in the AF Database

The code below will return all assets in an AF Database that are based on an AF Template whose name contains "Location".

asset_search = spy.search({"Path":"Example-AF", "Type":"Asset"}, all_properties=True) # Make sure to include all properties since this will also return the AF Template
asset_search.dropna(subset=['Template'], inplace=True) # Remove assets not based on a template since we can't filter with NaN values
asset_search_location = asset_search[asset_search['Template'].str.contains('Location')] # Apply filter to only consider Location AF Template assets

Step 2: Find all relevant attributes

This code will retrieve the desired attributes. Note that wildcards and regular expressions can be used to find multiple attributes.

signal_search = spy.search({"Path":"Example-AF", "Type":"Signal", "Name":"Compressor Power"}) # Find desired attributes

Step 3: Filter attributes based on whether they come from an element built on the desired AF Template

The last step cross-references the signals returned with the desired elements. This is done by looking at their paths.

# Define a function to recreate paths; items directly beneath the database asset don't have a Path
def path_merger(row):
    row = row.dropna()
    return ' >> '.join(row)

asset_search_location['Full Path'] = asset_search_location[['Path', 'Asset', 'Name']].apply(lambda row: path_merger(row), axis=1) # Create a path for the asset that includes its name
signal_search['Parent Path'] = signal_search[['Path', 'Asset']].apply(lambda row: path_merger(row), axis=1) # Create a path for the parents of the signals
signal_search_location = signal_search[signal_search['Parent Path'].isin(asset_search_location['Full Path'])] # Cross-reference the parent path of the signals with the full paths of the assets to see if these signals are children of the desired elements
    2 points
  6. Hi, the error means that you are referencing a variable that is not defined in your variable list. You should change the variable "$signal1" in the formula to a variable you have in your variables list. Also be aware that you cannot use a signal and a condition together in combineWith(): you can combine either signals only or conditions only. Regards, Thorsten
    2 points
  7. I'd make two conditions, one for RPM and one for Temperature, then try to use the "Combining Conditions" formulas. I think .encloses() would work.
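For instance, a sketch assuming the two conditions are named $rpmHigh and $tempHigh; this keeps each RPM capsule that entirely contains at least one Temperature capsule:

$rpmHigh.encloses($tempHigh)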
    2 points
  8. We often get asked how to use the various API endpoints via the Python SDK, so I thought it would be helpful to write a guide on how to use the API/SDK in Seeq Data Lab.

As some background, Seeq is built on a REST API that enables all the interactions in the software. Whenever you are trending data, using a Tool, creating an Organizer Topic, or doing any of the other various things you can do in the Seeq software, the software is making API calls to perform the tasks you are asking for. From Seeq Data Lab, you can use the Python SDK to interact with the API endpoints in the same way users do in the interface, but through a coding environment.

Whenever users want to use the Python SDK to interact with API endpoints, I recommend opening the API Reference via the hamburger menu in the upper right-hand corner of Seeq. This will open a page that shows all the different sections of the API with various operations beneath them. For some orientation, there are blue GET operations, green POST operations, and red DELETE operations. Although these may be obvious: GET operations retrieve information from Seeq without making any changes - for instance, you may want to know the dependencies of a Formula, so you might GET the item's dependencies with GET /items/{id}/dependencies. POST operations create or change something in Seeq - as an example, you may create a new workbook with the POST /workbooks endpoint. And finally, DELETE operations archive something in Seeq - for instance, deleting a user would use the DELETE /users/{id} endpoint.

Each operation endpoint has model example values for the inputs or outputs in yellow boxes, along with any required or optional parameters that need to be filled in, and a "Try it out!" button to execute the operation. For example, if I wanted to get the item information for the item with the ID "95644F20-BD68-4DFC-9C15-E4E1D262369C" (if you don't know where to get the ID, you can either use spy.search in Python or use Item Properties: https://seeq.atlassian.net/wiki/spaces/KB/pages/141623511/Item+Properties), I could do the following in the API Reference.

Using the API Reference provides a nice, easy way to see what the inputs are and what format they have to be in. As an example, if I wanted to post a new property to an item, there is a very specific syntax format required, as specified in the Model shown alongside the endpoint. I typically recommend testing your syntax and operation in the API Reference to ensure that it has the effect you are hoping to achieve before moving into Python to program that function.

How do I code the API Reference operations into Python?

Once you know what API endpoint you want to use and the format for the inputs, you can move into Python and code it using the Python SDK. The Python SDK comes with the seeq package that is loaded by default in Seeq Data Lab, or it can be installed for your Seeq version from PyPI if not using Seeq Data Lab (see https://pypi.org/project/seeq/). To import the SDK, you can simply run:

from seeq import sdk

Once you've done that, you will see that if you start typing "sdk." and hit Tab after the period, it will show you all the possible commands underneath the SDK. Generally, the first things you are looking for are the ones that end in "Api"; there should be one for each section observed in the API Reference, and each one needs to be logged in to using spy.client.

If I want to use the Items API, I would first instantiate it with the following command:

items_api = sdk.ItemsApi(spy.client)

Using the same trick as mentioned above with Tab after "items_api." will provide a list of the possible functions that can be performed with the ItemsApi. While the Python functions don't have exactly the same names as the operations in the API Reference, it should hopefully be clear which Python function corresponds to which API endpoint. For example, if I want to get the item information, I would use "get_item_and_all_properties". Similar to the Tab trick mentioned above, you can use Shift+Tab with any function to get the documentation for that function. Opening the documentation fully with the "^" icon shows that this function has two possible parameters, id and callback, where the callback is optional but the id is required, similar to what we saw in the API Reference above. Therefore, in order to execute this command in Python, I can simply add the id parameter (as a string, as denoted by "str" in the documentation):

items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C')

In this case, because I executed a GET function, I get back all the information about the item that I requested. This same approach can be used for any of the API endpoints you want to work with.

How do I use the information output from the API endpoint?

Oftentimes, GET endpoints are used to retrieve a piece of information so it can be used in another function later on. From the previous example, maybe you want to retrieve the value of the item's "name". In this case, all you have to do is save the output as a variable, change it to a dictionary, and then request the key you desire. First, save the output as a variable; here we'll call it "item":

item = items_api.get_item_and_all_properties(id='95644F20-BD68-4DFC-9C15-E4E1D262369C')

Then convert the output "item" into a dictionary and request whatever key you would like:

item.to_dict()['name']
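To round out the POST example mentioned earlier, here is a sketch of setting a new property on an item through the SDK. The method and model names come from the generated SDK and can vary by Seeq version, and the property name and value are illustrative, so verify against your own API Reference:

from seeq import sdk

items_api = sdk.ItemsApi(spy.client)

# POST /items/{id}/properties/{propertyName} takes a PropertyInputV1 body
items_api.set_property(
    id='95644F20-BD68-4DFC-9C15-E4E1D262369C',
    property_name='My Custom Property',
    body=sdk.PropertyInputV1(value='My Value'))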
    2 points
  9. There is not a mechanism to move the graphics directly to PowerBI, but you can move the data behind the graphs using the OData export: https://support.seeq.com/space/KB/112868662/OData Export#Example-Importing-to-Microsoft-Power-BI-(Authenticate-Using-Seeq-Username-and-Password) This will require building the graphics again inside of PowerBI. I would recommend using the Signal export on a fixed grid in order to get data points at the exact same timestamps, which will make life in PowerBI much easier. Shamus
    2 points
  10. Hi Brian, in more recent versions of Seeq, the max function used in that way (that specific syntax or form) only works with scalars. For your case, try:

$p53h2.max($p53h3).max($p53h4)

That is the form needed with signals. Hope this helps! John
    2 points
  11. Hi Kenny, you will need to create the access key in Seeq. Please refer to this KB article: https://support.seeq.com/space/KB/740721558/Access%20Keys. Please save the generated access key and password accordingly. Then you can use the access key details as your username (the access key) and password.
    2 points
  12. I know this is an old thread, but I am including what I did in case posterity finds it useful. I am more or less working on the same issue, but with a somewhat noisier and less reliable signal. I found the above a helpful starting point, but had to do a bit of tweaking to get something reliable that didn't require tuning.

The top lane is the raw signal, from which I removed all the dropouts, filled in any gaps with a persisted value, and did some smoothing with agileFilter to get the cleansed level on the next lane. For the value-decreasing condition, I used a small positive threshold (since there were some small periods of the level fluctuating, and the tank being refilled was a very large positive slope) and a merge to eliminate any gaps in the condition shorter than 2 hours (since all the true fills were several hours). For the mins and maxes I did not use the grow function on the condition like was done above; instead I just used relatively wide max durations and trusted that the cleansing I did on the value-decreasing condition was good enough. I was then able to use the combineWith and running delta functions on the mins and maxes, and filter to get the deliveries and the usage.

One additional set of calculations I added was to filter out all the periods of deliveries, by converting the delta function to a condition and removing from the cleansed signal all the data points in conditions that started positive. I then subtracted a running sum of the delta function over a quarter, yielding a signal without the effect of any of the deliveries over each quarter. I could then aggregate the delta of that signal over days and quarters to get the daily and quarterly consumption figures.

Chart showing all the calculated signals for this example: the top lane is the raw signal. The next lane shows the cleansed signal with the nexus of the mins and maxes between deliveries. The middle lane combines the mins and maxes and takes the running deltas, then filters them into delivery and usage numbers. The next lane removes the deliveries from the cleansed signal and does a running sum of the consumption over the quarter. The last two lanes are daily and quarterly deltas in those consumption figures.

Calculation for identifying the periods in which the chemical level is decreasing: I used a small positive threshold and removed two-hour gaps, which allowed it to span the whole time between deliveries. Aggregated the cleansed signal over those decreasing time periods to find the min and max values. Used the combineWith and running delta functions to get the net deltas of consumption and deliveries. Filtered based on positive and negative values to separate into deliveries and consumption numbers. Removed the delivery numbers from the cleansed signal in order to get a running sum of consumption over a quarter. Aggregated the deltas in the consumption history over days and quarters to calculate daily and quarterly consumption.
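A rough Seeq Formula sketch of the min/max-to-delta portion described above, assuming $cleansed is the cleansed level and $decreasing is the value-decreasing condition. The names and the 30-day maximum duration are illustrative, not the exact formulas used in the post:

//Min at the end of each decreasing period, max at the start
$mins = $cleansed.aggregate(minValue(), $decreasing.removeLongerThan(30d), endKey())
$maxes = $cleansed.aggregate(maxValue(), $decreasing.removeLongerThan(30d), startKey())
//Interleave the two discrete signals and take sample-to-sample deltas
$deltas = combineWith($mins, $maxes).runningDelta()
//Positive deltas are deliveries; negative deltas are usage
//(each of the following would be returned from its own Formula)
$deliveries = $deltas.remove($deltas < 0)
$usage = $deltas.remove($deltas > 0)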
    2 points
  13. Hi Matthias, the first question is to identify each maximum peak of the blue trend:

1. First apply an agileFilter to remove signal noise, then calculate the derivative of the signal. Please refer to the Formula documentation for derivative. Example:

$signal.agileFilter(15min).derivative('min')

2. Then use Value Search to identify the increasing trend (example as below; you can optimize accordingly), or you can refer to steps 4 and 5 from this post:

3. Use Signal from Condition to find the maximum value bounded to the condition created in step 2, and place the timestamp at the point of max value.

Second question: rate of change between one peak and the upcoming peak.

4. We can apply the same derivative function. Example:

$max.derivative('min')

Third question: detect a significant change of the signal (marked by red X's).

5. The max derivative will increase or decrease significantly when there's a sudden change. In the example here, we're using a threshold of +/-0.005 to detect the changes for max rate decrease and increase respectively.

Fourth question: know its duration (marked by the red arrows).

6. Use Composite Condition to join "Rate Decreased" and "Rate Increased".
7. Calculate the total duration of the "Join Decreased to Increased" condition using the Signal from Condition tool.

Please give this a try on your signal and let us know if you have further questions.
    2 points
  14. Hi Feng, can you give this formula a try and let me know if it works?

$SumRange = periods(3min)
$signal.runningSum($SumRange)
    2 points
  15. Seeq Data Lab allows users to programmatically interact with data connected to Seeq through Python. With this, users can create numerous advanced visualizations; some examples are Sankey diagrams, waterfall plots, radar plots, and 3D contour plots. These plots can then be pushed back into Seeq Organizer for other users to consume. A common workflow that stems from this process is the need to update the Python visualizations in an existing Organizer Topic as newer data becomes available. Here we'll look over the steps of how you can update an existing Organizer Topic with a new graphic.

Step 1: Retrieve the Workbook HTML

Behind every Organizer Topic is the HTML that controls what the reports display. We'll need to modify this HTML to add a new image while also retaining whatever pieces of Seeq content were already in the report.

pulled_workbooks = spy.workbooks.pull(spy.workbooks.search({'Name':'Organizer Topic Name'}))
org_topic = pulled_workbooks[0] # Note you may need to confirm that the first item in pulled_workbooks is the topic of interest
ws_to_update = org_topic.worksheets[0] # Choose the index based on the worksheet intended to be updated
ws_to_update.html # View the HTML behind the worksheet

Step 2: Create the HTML for the Image

The add_image function can be used to generate the HTML that will be inserted into the Organizer Topic HTML:

replace_html = ws_to_update.document.add_image(filename = "Image_To_Insert.png")
replace_html

Step 3: Replace the HTML and Push Back to Seeq

To find where in the Organizer Topic HTML to make the replacement, we can use the re module. This will allow us to parse the HTML string to find our previously inserted image, which should begin with "<img src=". Note that additional changes are required if multiple images are included in the report.

import re
before_html = re.findall("(.*)<img src=", ws_to_update.html)[0] # Capture everything before the image
after_html = re.findall(".*<img src=.*?>(.*)", ws_to_update.html)[0] # Capture everything after the image
full_html = before_html + replace_html + after_html # Combine the before and after with the HTML generated for the new picture
ws_to_update.html = full_html # Reassign the HTML to the worksheet and push it back to Seeq
spy.workbooks.push(pulled_workbooks)
    2 points
  16. There is a quick little trick to do this by combining two formulas together:

$signal.toCondition().toSignal()

What this formula does is create a condition with .toCondition(), where each capsule starts when the value in your ID signal changes (eliminating duplicate values), and then transform those capsules back into a signal using .toSignal(). Let me know if this example helps solve your question.
    2 points
  17. Hi VpJ, If you build it out using any of the solutions mentioned above, it is relatively easy to make it available in all workbenches.
    1 point
  18. Hi VpJ, Currently, asset groups are only available in a single workbench. You can see some differences between asset groups and asset trees (including that workbook/global scoping) here: https://seeq.atlassian.net/wiki/spaces/KB/pages/1590165555/Asset+Groups#Asset-Trees-vs.-Asset-Groups
    1 point
  19. Hi SBC, applying the within() function to your signal will result in a new signal that only keeps those parts of the signal where the condition is met. The filtered signal can then be used for the histogram. Regards, Thorsten
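A minimal sketch, assuming the condition is named $inSpec; the result contains samples only during the capsules of the condition:

$signal.within($inSpec)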
    1 point
  20. Now I see! I was assuming maxValue($SearchArea) was hard coding the search. Your explanation makes sense: maxValue is returning a search result, but then $signal.within($ValidData) is only passing the capsules in the condition to it. Therefore, as long as $SearchArea fully includes the capsules in $ValidData it will work. I just need to hard code dates well before and well after any capsules I would use. Thanks!
    1 point
  21. I think what you are going for will look like the formula below, where $SearchArea is the total range in which any of your valid data capsules could fall (you can be very conservative with these dates). This formula will work if you have multiple valid data range capsules, as long as they all fall within the $SearchArea.

$SearchArea = capsule("2020-01-01T00:00:00Z","2022-07-28T00:00:00Z")
$Signal.within($ValidData).maxValue($SearchArea).toSignal()
    1 point
  22. Hi Matthias, I would recommend checking out this post, which follows the same process: When looking at that post, it sounds like you'll want the $reset variable to be equal to a monthly condition.
    1 point
  23. Psychrometric charts are used for cooling tower and combustion calculations. For users that have weather data, this could give a better idea of how their cooling towers and similar equipment are performing. It might make sense to include them in Formula in the future. Thanks!
    1 point
  24. When creating Seeq signals using Seeq Data Lab (SDL), it can be useful to know how to delete signals from Seeq after pushing them to a workbench with a spy.push. Fortunately, the process is as simple as adding a column called 'Archived' to your metadata DataFrame and setting the value to True for any signal(s) you would like to archive. The original example adds an 'Archived' column to the spy.push example notebook's metadata DataFrame from the SDL SPy documentation; the DEPTH(ft) signal is archived after pushing the metadata DataFrame with the Archived column set to True. A sketch of this pattern follows at the end of this post.

A couple of other notes about deleting signals in Seeq: if you would like to keep the signal in Seeq but want to update the data, you can do so with a subsequent push of the signal that includes the new data. The caveat is that the new data must have the same keys (timestamps) as the old data; if the keys are different, the data will be appended to the existing signal. Otherwise, you will need to push the signal with a unique name/path/type. If you would like to fully delete the signal samples from Seeq, you can do so using Seeq's API call to delete the signal samples.
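A minimal sketch of the archive push, assuming a signal named DEPTH(ft) was previously pushed. The original example builds the metadata DataFrame in the notebook; here spy.search is used to retrieve it, and the search criteria are illustrative:

from seeq import spy

# Find the previously pushed signal's metadata
metadata = spy.search({'Name': 'DEPTH(ft)'})

# Mark it archived and push the metadata back to Seeq
metadata['Archived'] = True
spy.push(metadata=metadata)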
    1 point
  25. Hi vadeka, now that you've removed the "Inactive" data, the issue is likely that either (1) your maximum interpolation is not long enough to interpolate between those points, or (2) there is actual invalid data (not just data saying "Inactive", but a data point that is not visible because it doesn't have a value). Check out this post under Question 4 for how to solve this.

In terms of your second question: if you want to view the data points on the trend, check out how to adjust the "Samples" setting in the Customize menu of the Details pane, which allows you to turn on the individual data points: https://support.seeq.com/space/KB/149651519/Adjusting+Signal+Lanes%2C+Axes+%26+Formatting. If you want to view it in a table, then using $signal.toCapsules() will give you a capsule per data point so that you can view it in a Condition Table (https://support.seeq.com/space/KB/1617592515#Condition-Tables). Note that there are two similar functions: toCapsules() will provide a capsule per data point (even if the data points are identical), whereas toCondition() will provide a capsule per change in data point, meaning it will not show repeat capsules if the data points are equivalent in sequence.
    1 point
  26. The details and approach will vary depending on exactly where you are starting from, but here is one approach that will show some of the key things you may need. When you say you have 3 capsules active at any given time, I assume you mean 3 separate conditions. Assuming that is the case, you can create a "combined condition" out of all 9 conditions using Seeq Formula:

// Assign meaningful text to a new capsule property named TableDescription
$c1 = $condition1.setProperty('TableDescription','High Temperature')
$c2 = $condition2.setProperty('TableDescription','Low Pressure')
// ...and so forth...
$c9 = $condition9.setProperty('TableDescription','Low Flow')
// Create a capsule that starts 5 minutes from the current time
$NowCapsule = past().inverse().move(-5min)
// Combine and keep only the currently active capsules (touching "now")
combineWith($c1,$c2,...,$c9).touches($NowCapsule)

You can then go to Table View and click the Condition table type. Select your "combined condition" in the Details pane and add a capsule property (TableDescription) using the Column or Row button at the top of the display. You can then set your display time range to include the current time, and you should see the currently active capsules with the TableDescription text that you assigned in the Formula. You of course will want to vary the "-5min" value and the 'TableDescription' values per what is best for you. Your approach may be a little different depending on where you are starting from, but I think that creating capsule properties (the text you want to see in your final table), combining into one condition, and using .touches($NowCapsule) may all be things you will need. Here are some screenshots from an example I put together, where I used "Stage" for the capsule property name:
    1 point
  27. Hi Siang Lim, You can search for your asset in the organizer topic as shown in the screenshot below. We are working towards increasing the discoverability of local asset groups.
    1 point
  28. I have a signal where I'd like to compare the distributions in 2021 vs. 2022 using a histogram, but with the normalized counts per year (counts divided by total counts in that year, or relative frequency histogram) instead of just the raw counts. Is there an easy way to configure this in Seeq?
    1 point
  29. A question that comes up from time to time is how to search for a list of signals or other data items in Seeq. Typically we get a request for the ability to search based on a comma-separated list. While we do not currently (as of R55) support a comma-separated list, you can get around this with regex searching simply by replacing each comma with a vertical bar "|" and encapsulating the search in forward slashes, as below:

Compressor Power,Temperature,Relative Humidity

becomes

/Compressor Power|Temperature|Relative Humidity/

In this search, the forward slashes tell Seeq that this is a regex search, and the | is an "or" in regex; i.e., it will search for something containing exactly "Compressor Power", or something containing exactly "Temperature", etc., which in effect gives you the ability to search for a list! For very long lists, do a find-and-replace in a word editor to build the new search. The ability to search for a list will soon become quite handy with the "add all" feature slated to come with R56, which should be coming soon!
    1 point
  30. Hi Dharun, I haven't tried this with Seeq displays and Power BI, but some customers have Seeq displaying inside of other applications (including Sharepoint and other web applications developed by customers). Check out this youtube video I found on how to insert websites into a Power BI display. For this to work, you'll need an administrator to update some configuration parameters in Seeq to allow either a specific host (your Power BI domain) or any host to display Seeq content. You can link them to this KB article with more details on configuration. Anyone viewing the Power BI dashboard would likely also need a Seeq login and at least read permission of the Organizer Topic, otherwise I would expect the inserted browser window to show the Seeq login page. I hope this helps and let us know how it goes! Joanna
    1 point
  31. There is an easy way to filter a condition to only keep capsules that are either longer or shorter than a specified duration. In Formula, just use one of the following functions:

$myCondition.removeLongerThan(5h)

or

$myCondition.removeShorterThan(5h)
    1 point
  32. Hi Robin, you can create a batch condition by using replace() to extract the batch number and toCondition() to create the capsules for each batch:

$subbatches.replace('/(\\d{1,7})NR\\d{3}/', '$1').toCondition()

In the next step you can do the aggregation:

$v1.aggregate(sum(), $batch.removeLongerThan(1wk), middleKey()) + $v2.aggregate(sum(), $batch.removeLongerThan(1wk), middleKey())

Regards, Thorsten
    1 point
  33. Hello, you can use the merge() function for this. If a tolerance value is specified, all capsules beginning within that tolerance after the end of a capsule are combined. Regards, Thorsten
    1 point
  34. Kenny, there is not currently a way for non-admins to delete OData exports. However, the exports do not put any load on the Seeq system unless they are being used by an external system (PowerBI, Tableau, etc.). To answer your second question: we create a new export URL endpoint every time someone runs the tool in Workbench. These OData feeds are in active development, and we have plans for making their creation and maintenance easier in upcoming releases.
    1 point
  35. Hi Matthias, you can try this: first, calculate the duration of each capsule using the "Signal from Condition" tool. Then you can use timeSince() to calculate the percentage value over the duration, and splice() to insert it into the base signal of 0% every time the condition is met:

0%.splice(timeSince($condition.removeLongerThan(40h), 1min)/$duration, $condition)

Did I understand your question correctly? Regards, Thorsten
    1 point
  36. Mike & Andrew - I came up with a kludgy, resource-intensive solution, but it works. Let's say $tau is a continuous signal specifying the reactor residence time in minutes. Create a new formula that calculates the average of $tau in 10-minute time intervals, and then rounds it to the nearest 10-minute integer value:

$tenminutes = periods(10min)
$tau_avg = $tau.aggregate(average(), $tenminutes, startKey())
round($tau_avg/10)*10

Now, create a formula that defines multiple signals applying the exponential filter to the reaction parameter (in my case, monomer concentration) explicitly for each possible value of $tau_rounded, then splice these signals together based on the current value of $tau_rounded:

//conditions
$tau10 = $tau_rounded == 10
$tau20 = $tau_rounded == 20
$tau30 = $tau_rounded == 30
//signals
$CC2_T10 = $CC2SS.exponentialFilter(10min,1min)
$CC2_T20 = $CC2SS.exponentialFilter(20min,1min)
$CC2_T30 = $CC2SS.exponentialFilter(30min,1min)
//final spliced signal: exponential filter with varying values of tau
$CC2SS //default is the steady state C2 concentration
.splice($CC2_T10, $tau10)
.splice($CC2_T20, $tau20)
.splice($CC2_T30, $tau30)
//extend as needed to cover the expected range of values of $tau_rounded

Edit: of course, this method has some obvious limitations. It is not going to be accurate during time periods when the residence time and concentration are both changing (reactor startup, process upset, etc.). There will be a discontinuity in the signal for each "step" in the residence time, as it jumps between the exponential signals. Perhaps an agileFilter could be applied at the end to smooth it out. It's a decent approximation (better than nothing), and during times when the residence time is constant, the output will match that of the exponential filter for the current residence time.
    1 point
  37. Wow! Very good description and explanation! Thank you very much - it worked for me.
    1 point
  38. We have had a couple of users ask for a method to create a table of timestamps and values for each of the samples in a signal. Below is a quick method to create such a table using the new features available in versions R53+. This method generally makes the most sense for finding the timestamps of discrete points, but it can be used for any signal.

Step 1: Create a condition with a capsule for every sample point:

$signal.toCapsules()

Step 2: Move to the Table view and select the "Condition" option. The general settings you are going to want to pick are:
Condition mode
Headers = Start Time
Columns Capsule Property = Value

Final product:
    1 point
  39. Hey Abby, I was able to reproduce your issue; it seems it is due to the way Seeq creates the step trend: it places two values at each point in time where a new value occurs, to create the stepped transition. I was able to fix this by using Formula to create two new signals which are discrete:

$x.toDiscrete()

I still used the original stepped values to create the In Range/Out of Range conditions, but used the discrete values to display on the scatter plot. My limits are 0 - 4 for Y and 40 - 50 for X. Here's the before and after. Hope this helps, Andrew

BEFORE
AFTER
    1 point
  40. The SPy library supports the creation of asset trees through the spy.assets models. These asset trees can include various types of items such as signals and conditions, as well as calculations like scorecard metrics, and can be used to create numerous Workbench Analysis worksheets and Organizer Topic documents. One question that commonly comes up when making these trees is how to reference attributes that are located in other parts of the tree.

Roll-Ups

The first example of referencing other items in the tree is through roll-ups. These types of calculations "roll up" attributes from levels below where the roll-up calculation is being performed, whether the level is directly beneath or multiple levels below. These attributes are then combined using logic you provide. For signals and scalars, the options are Average, Maximum, Minimum, Range, Sum, and Multiply. For conditions, the options are Union, Intersect, Counts, Count Overlaps, and Combine With. Below are examples where .Cities() is a component beneath the current class. All attributes and assets beneath the Cities component will be searched through and included in the roll-up based on the criteria given in the pick function. Here, we're filtering based on Name, but any property such as Type can be supplied. Note that Seeq Workbench's search mechanism is used here, so wildcards and regular expressions can be included. Lastly, we specify the kind of roll-up we'd like to perform.

@Asset.Attribute()
def Regional_Compressor_Running_Poorly(self, metadata):
    return self.Cities().pick({'Name': 'Compressor Running Poorly'}).roll_up('union')

@Asset.Attribute()
def Regional_Total_Energy_Consumption(self, metadata):
    return self.Cities().pick({'Name': 'Total Daily Energy Consumption'}).roll_up('sum')

Child Attributes

The second example looks at how to reference child attributes without rolling them up. Maybe there's a particular attribute that needs to be included in a calculation used at a higher level in the asset tree. For this scenario, the pick function can be used once again; rather than do a roll-up, we'll just index the particular item we want. Most of the time the goal is to reference a specific item with this method, so the criteria passed into the pick function should be specific enough to find one item, and the index will always be 0. One property that may be of interest for this is Template, where you can specify the particular class that will contain the wanted item.

@Asset.Attribute()
def Child_Power_Low(self, metadata):
    child_power = self.Cities().pick({"Name": "Compressor Power", "Asset": "/Area (A|C|D)/"})[0]
    return {
        'Name': "Child Power Low",
        'Type': "Condition",
        "Formula": "$child_power < 5",
        "Formula Parameters": {"child_power": child_power}
    }

Parent Attributes

The next example looks at how we can reference parent attributes in calculations that are beneath them. Rather than reference a particular component, we'll use the parent, and from there include the attribute we want to reference from our parent asset. If looking to reference attributes at higher levels of the tree, chain multiple ".parent" calls; for example, "self.parent.parent" will look two levels above the current level.

@Asset.Attribute()
def Parent_Temp_Multiplied(self, metadata):
    parent_temp = self.parent.Temperature()
    return {
        'Name': "Parent Temp Multiplied",
        'Type': "Signal",
        "Formula": "$parent_temp * 10",
        "Formula Parameters": {"parent_temp": parent_temp}
    }

Advanced Selection

In this example, we'll look at how we can combine the previously mentioned options to find items located in other parts of the tree. Here, we're looking to reference items located at the same level of the tree but in another class, so they're not located beneath the same asset. We have two separate assets beneath the regions, Temperature Items and Power Items. The Temperature_Item class has a calculation called Max Temperature 1 When Compressor Is On which references an attribute beneath its corresponding Power_Item class. To fetch this attribute, we go up a level to the parent, navigate down to the Power_Items, and then pick that attribute.

class Region(Asset):
    @Asset.Component()
    def Temperature_Items(self, metadata):
        return self.build_components(template=Temperature_Item, metadata=metadata, column_name='Region Temp')

    @Asset.Component()
    def Power_Items(self, metadata):
        return self.build_components(template=Power_Item, metadata=metadata, column_name='Region Power')

class Power_Item(Asset):
    @Asset.Attribute()
    def Power_1(self, metadata):
        return {
            'Name': 'Power 1',
            'Type': 'Signal',
            'Formula': '$power',
            'Formula Parameters': {'$power': metadata[metadata['Name'].str.contains('Power')].iloc[0]['ID']}
        }

class Temperature_Item(Asset):
    @Asset.Attribute()
    def Temperature_1(self, metadata):
        return {
            'Name': 'Temperature 1',
            'Type': 'Signal',
            'Formula': '$temp',
            'Formula Parameters': {'$temp': metadata[metadata['Name'].str.contains('Temperature')].iloc[0]['ID']}
        }

    @Asset.Attribute()
    def Temp_When_Comp_On(self, metadata):
        power_adjacent_class = self.parent.Power_Items().pick({'Name': "Power 1"})[0]
        return {
            'Name': "Max Temperature 1 When Compressor Is On",
            'Type': 'Signal',
            'Formula': '$temp1.aggregate(maxValue(), ($power1<5).removeLongerThan(7d), durationKey())',
            'Formula Parameters': {
                'temp1': self.Temperature_1(),
                'power1': power_adjacent_class
            }
        }

Item Group

To help with even more complex attribute selections, we introduced the ability to use ItemGroup rather than the pick and parent functions. ItemGroup provides an alternate way of finding items located in other parts of the tree using established Python logic. Below are two examples using ItemGroup to perform selections that would be very complex to do with the pick function.

Advanced Roll-Up

Roll-ups using pick reference one component beneath your class, but what if there were a need for a roll-up across multiple components? ItemGroup can be used for a simple roll-up as well as for this more complex case. Rather than specifying a particular component and picking within it, we can use ItemGroup to iterate over every asset. Here, we retrieve every High Power attribute beneath the assets that are children of the current asset.

@Asset.Attribute()
def Compressor_High_Power(self, metadata):
    # Helpful functions:
    # asset.is_child_of(self) - Is the asset one of my direct children?
    # asset.is_parent_of(self) - Is the asset my direct parent?
    # asset.is_descendant_of(self) - Is the asset below me in the tree?
    # asset.is_ancestor_of(self) - Is the asset above me? (i.e. parent/grandparent/great-grandparent/etc.)
    return ItemGroup([
        asset.High_Power() for asset in self.all_assets()
        if asset.is_child_of(self)
    ]).roll_up('union')

Referencing Items in a Different Section

In this example, we're looking to reference attributes in other similar assets, but these assets are located in different sections of the tree. We could use the previous option from the Advanced Selection section, but what if these compressors weren't necessarily at the same level of the tree, or were beneath different components? This would mean they have different pathways, and the method previously stated wouldn't work. Using ItemGroup, we can iterate through all assets and find any that are also based on the Compressor class. Here we also exclude the current asset, and then perform a roll-up based on all of the other High Power attributes.

@Asset.Attribute()
def Other_Compressors_Are_High_Power(self, metadata):
    return ItemGroup([
        asset.High_Power() for asset in self.all_assets()
        if isinstance(asset, Compressor) and self != asset
    ]).roll_up('union')
    1 point
  41. Thanks very much Patrick. I downgraded tzlocal to v2.1 and the issue is resolved.
    1 point
  42. Hi Dylan, you can use the aggregate function for this:

$signal.aggregate(average(), ($signal > 90).removeLongerThan(1wk), durationKey())

In this example, the average of the signal is calculated whenever its value is above 90, and the result is drawn as a line over the duration of each capsule. You can find more information on the function and its parameters inside the Formula documentation. Hope this helps. Regards, Thorsten
    1 point
  43. There are times when we'd like to view histograms in a monthly view, ordered chronologically by the starting month in the display range. This post reviews the results of 3 different methods utilizing Histogram vs. Signal from Condition. All 3 examples show the same results but differ in how the results are displayed.

Example 1: This method displays histograms by order of month; thus January will show first and December last, even though the display range is set from 7/26/2017 - 7/26/2018. As a result, we are not always looking at data in chronological order with this method. Simply go to the Histogram tool, select your signal/condition & statistic, then select Time as the aggregation type --> Month of Year. Continue to Execute.

Example 2: This method will ensure your histogram is in chronological order, first ordered by year, then by month. The caveat is that the spacing of the bars in the display window is not held constant (a gap between years will be observed). Go back to the Histogram tool, select your signal/condition & statistic, then select Time as the aggregation type --> Year. After this, select Add grouping, and again select Time as the aggregation type --> Month of Year. Continue to Execute. The color gradient can be changed by changing the main color in the histogram. Individual bar colors can also be changed by clicking the respective color box in the legend (top right of the histogram).

Example 3: This method will produce equally spaced bars in chronological order with no color gradient. To achieve this, we will use Signal from Condition. First, we need to create our condition. Because we are interested in a monthly view, we can navigate to our Periodic Condition tool under Identify; Duration --> Monthly (All). The timezone can be specified, and any shifts to the resulting capsules can be applied under Advanced. Now that we have our condition, we can proceed to the Signal from Condition tool under Quantify. As with the other examples, select your signal/condition & statistic. The bounding condition will be the monthly condition we just created. For this use case, we will want our timestamp to be at the start of each capsule (month), and the interpolation method to be Discrete so that bars will be the resulting output. The output may have skinny bars and a non-ideal axis min/max. This can be adjusted by clicking Customize in the Details pane. For this example, I used a width of 50 and an axis min/max of 0/1.25.
  44. While Seeq is working towards enhancing the features in the Histogram tool, here is a simple workaround commonly used to create a histogram for multiple signals with display times in chronological order.

Step 1: Start by loading all of the signals we want to include in the matrix into the display.

Step 2: Create a monthly condition with a property that splits the years and months (YYYY-MM) out of the initial date format (YYYY-MM-DDT00:00:00Z) using the Formula tool.

Formula 1 - month_withProperty

$monthly = months("Asia/Kuala_Lumpur") //split the years and months of the data
$monthly
  .move(8hrs) //move based on a specific timezone; the timezone used here is UTC+8
  .transform($capsule -> $capsule.setProperty('month_year', $capsule.property('start').toString().replace('/(.*)(-..T.*)/', '$1')))

Step 3: Combine the aggregate calculations for the multiple signals in a formula.

Formula 2 - combine the aggregate calculations; the example here is based on average.

//Step 1: calculate the monthly average for each signal
$monthlyaverageA = $t1.aggregate(average(), $month_withProperty, startKey(), 0s)
$monthlyaverageB = $t2.aggregate(average(), $month_withProperty, startKey(), 0s)
$monthlyaverageC = $t3.aggregate(average(), $month_withProperty, startKey(), 0s)

//Step 2: combine all the discrete points, moving each signal by 1, 2 and 3 hours so they have different timestamps. Please refer to the combineWith() formula documentation.
combineWith(
  $monthlyaverageA.move(1hr),
  $monthlyaverageB.move(2hr),
  $monthlyaverageC.move(3hr))

Step 4: Create a condition for each average temperature signal and use the setProperty() function to set the naming for each signal.

//Step 1: calculate the monthly average for each signal
$monthlyaverageA = $t1.aggregate(average(), $month_withProperty, startKey(), 0s)
$monthlyaverageB = $t2.aggregate(average(), $month_withProperty, startKey(), 0s)
$monthlyaverageC = $t3.aggregate(average(), $month_withProperty, startKey(), 0s)

//Step 2: combine all, create a condition for each discrete point and set the property accordingly
combineWith(
  $monthlyaverageA.move(1hr).toCapsules().setProperty('Mode','Area A'),
  $monthlyaverageB.move(2hr).toCapsules().setProperty('Mode','Area B'),
  $monthlyaverageC.move(3hr).toCapsules().setProperty('Mode','Area C'))

Step 5: Create the histogram as shown in the screenshot below. The colour for each signal can be changed by selecting the legend box on the top right side of the histogram. For users who would like to split by quarter_year or year_week, please refer to the formulas below.
Formula to split quarter_year

$quarter = quarters(Month.January, 1, "Asia/Kuala_Lumpur")
$quarter.move(8hrs) //move based on a specific timezone; the timezone used here is UTC+8
  .transform($cap -> {
    $year = $cap.startKey().toString().replace('/-.*/','')
    $quart = $cap.property('Quarter').toString()
    $cap.setProperty('Quart', $year+' '+'Q'+$quart)})

Formula to split year_week

//Set up the week counter
$week_counter = (timeSince(years("UTC"), 1wk)+1).round()
//The aim is to add '0' in front of single-digit numbers so that the sequence in the histogram is 01, 02, ... 10
$weekLessThan10 = $week_counter < 10
$signal1 = $week_counter.toString()
$signal2 = toString('0').toSignal() + $signal1
$new_week_counter = $signal1.splice($signal2, $weekLessThan10)
$weekly_capsule_embedded_property = $new_week_counter.toCondition()
//Setting the year and week property
//$year - extract the year
//$week - the embedded Value is XX.0wk; remove the .0wk
//set the new year_week property
$weekly_capsule_embedded_property.removeLongerThan(8d).transform($cap -> {
  $year = $cap.startKey().toString().replace('/-.*/','')
  $week = $cap.property('Value').toString().replace('/\wk/','')
  $cap.setProperty('Year_Week', $year+' '+'W'+$week)})

You can also check this post to create a histogram with hourly bins.
  45. The SPy Documentation for spy.assets includes an example of specifying metrics as attributes in asset classes. It is also possible to push scorecard metrics using the spy.push functionality by defining the appropriate metadata. An example of this process is given in the code snippets below.

#import relevant libraries
import pandas as pd
from seeq import spy

Log in to the SPy module using spy.login if running locally, or skip this step if running in Seeq Data Lab.

#Search for data that will be used to create the scorecard. This example searches the Example asset tree to find tags in Area A.
search_result = spy.search({'Path': 'Example >> Cooling Tower 1 >> Area A'})

The next code segment creates and pushes a signal that will be used as a threshold limit in the scorecard metric. This can be skipped if threshold signals will not be used in the final metric.

#Define data frame for low limit threshold signal.
my_lo_signal = {
    'Type': 'Signal',
    'Name': 'Lo Signal',
    'Formula': '$signal * 50',
    'Formula Parameters': {'$signal': search_result[search_result['Name'] == 'Optimizer']['ID'].iloc[0]}
}

#Push data frame for low limit threshold signal.
lo_push_result = spy.push(metadata=pd.DataFrame([my_lo_signal]), workbook='Example Scorecard')

Finally, create and push the scorecard metric. This example metric measures the average temperature and applies a static high limit threshold of 90 and a moving low limit threshold using the signal defined above.

#Define data frame for scorecard metric.
my_metric_input = {
    'Type': 'Metric',
    'Name': 'My Metric',
    'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
    'Statistic': 'Average',
    'Thresholds': {
        'Lo': {'ID': lo_push_result['ID'].iloc[0]},
        'Hi': 90
    }
}

#Push data frame for scorecard metric.
spy.push(metadata=pd.DataFrame([my_metric_input]), workbook='Example Scorecard')

The final result can be seen in the created workbook.
  46. As an add-on to this topic, there are times when one wants to push a different scorecard type. The previous example shows how to create a Simple Scorecard, but similar logic can be applied to make Condition and Continuous Scorecards.

Condition Scorecard

Since the Condition Scorecard is based on a condition, we need to retrieve the condition to be used. This can be done using spy.search again:

search_result_condition = spy.search({"Name": "Stage 2 Operation", "Scoped To": "C43E5ADB-ABED-48DC-A769-F3A97961A829"})

From there we can tweak the scorecard code to include the bounding condition, which is the condition over which the calculation is performed in the scorecard. Note that scorecard metrics require conditions with a maximum capsule duration, so an additional parameter is required if the condition does not have one. Below is the code as well as the result:

my_metric_input_condition = {
    'Type': 'Metric',
    'Name': 'My Metric Condition',
    'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
    'Statistic': 'Average',
    'Bounding Condition': {'ID': search_result_condition[search_result_condition['Name'] == 'Stage 2 Operation']['ID'].iloc[0]},
    'Bounding Condition Maximum Duration': '30h'  # Required for conditions without a maximum capsule duration
}
spy.push(metadata=pd.DataFrame([my_metric_input_condition]), workbook='Example Scorecard')

Continuous Scorecard

For Continuous Scorecards, users need to specify the rolling window over which to perform the calculations. To do this, a Duration and a Period need to be provided. The Duration defines the length of the rolling window, and the Period defines how often the rolling window is evaluated.

my_metric_input_continuous = {
    'Type': 'Metric',
    'Name': 'My Metric Continuous',
    'Measured Item': {'ID': search_result[search_result['Name'] == 'Temperature']['ID'].iloc[0]},
    'Statistic': 'Average',
    'Duration': '1d',  # Length of the rolling window
    'Period': '3d'     # How often the window is evaluated
}
spy.push(metadata=pd.DataFrame([my_metric_input_continuous]), workbook='Example Scorecard')
  47. This post summarizes the Performance Loss Monitoring use case covered in the Advanced Analytics 101 webinar in September 2020. In the webinar, the use case was explored by addressing 4 key analytics questions: Why? What data is available? What method(s) can I use with the available data? How do I want to visualize the results?

Why? A manufacturing company needs to track performance losses. If engineers can identify and quantify those performance losses, the results can be used to justify process improvement projects or to do historical and global benchmarking. The current method involves retroactively wrangling data in Excel. This exercise is very time consuming, so developing a method to automatically generate monthly reports has the potential to save up to 1 week of valuable Process Engineers' time per month. This is time they get back to work on improvement projects and other value-added activities.

What data is available? For this analysis, two data tags are needed: the target production rate and the actual production rate.

What method(s) can I use? Step 1: Identify time periods of lost production. To identify the time periods of production loss, first calculate the difference between the target rate and the actual reactor rate. This can be accomplished in the Formula tool. Then, to identify the production losses, use the Value Search tool to find whenever the value of this new signal is greater than 0. Step 2: Quantify the total production loss. The Signal from Condition tool can be used to calculate the totalized production loss during each of the Production Loss Events capsules.

How do I want to visualize the results? Ultimately, I'd like to create a weekly report that summarizes the production loss per day of a given week. So in this case, I'd like to create a histogram that aggregates the lost production for each day of the week.
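For those who want to build this analysis programmatically rather than through the point-and-click tools, here is a minimal SPy sketch of the same calculations. The tag names, workbook name, and the 30d maximum capsule duration are illustrative assumptions:

import pandas as pd
from seeq import spy

# Hypothetical tag names; substitute your own target and actual rate signals
target_id = spy.search({'Name': 'Target Rate'})['ID'].iloc[0]
actual_id = spy.search({'Name': 'Actual Rate'})['ID'].iloc[0]

# Step 1a: difference between target and actual rate (Formula tool equivalent)
gap = spy.push(metadata=pd.DataFrame([{
    'Type': 'Signal',
    'Name': 'Rate Gap',
    'Formula': '$target - $actual',
    'Formula Parameters': {'$target': target_id, '$actual': actual_id}
}]), workbook='Performance Loss Monitoring')

# Step 1b: production loss events (Value Search equivalent); pushed conditions
# need a maximum capsule duration, hence the hypothetical 30d
loss = spy.push(metadata=pd.DataFrame([{
    'Type': 'Condition',
    'Name': 'Production Loss Events',
    'Formula': '($gap > 0).removeLongerThan(30d)',
    'Formula Parameters': {'$gap': gap['ID'].iloc[0]}
}]), workbook='Performance Loss Monitoring')

# Step 2: totalized loss per event (Signal from Condition equivalent)
spy.push(metadata=pd.DataFrame([{
    'Type': 'Signal',
    'Name': 'Total Loss per Event',
    'Formula': '$gap.aggregate(totalized(), $loss, startKey(), 0s)',
    'Formula Parameters': {'$gap': gap['ID'].iloc[0], '$loss': loss['ID'].iloc[0]}
}]), workbook='Performance Loss Monitoring')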
  48. The ability to remove individual cursors is available starting in Seeq version 22.0.46.
  49. Hi Thorsten- In the first screenshot, the area of each box is actually the same, even though some boxes have different dimensions. As you observed, the size of your display impacts how the boxes are drawn. To adjust the box sizes via the API, please use the following steps:

1. On your Seeq installation, open the workbook that contains the Treemap and navigate to the API.

2. To get the ID of the asset that you would like to resize:
a. Navigate to GET Assets
b. Adjust the "limit" to 200 and click "Try it out!"
c. In the Response Body, locate the asset to resize and copy its "id"

3. To resize the asset:
a. Navigate to POST Item Properties
b. Paste the asset ID into the "id" field and use the following syntax in the "Body" field:

[
  {
    "unitOfMeasure": "",
    "name": "size",
    "value": 10
  }
]

The following screenshot shows a size of 10, but this number may be adjusted.
c. Click "Try it out!"

4. Navigate back to the Treemap and refresh the browser. The Treemap now reflects the adjusted size.

Please let me know if you have any additional questions. Thanks, Lindsey
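The same property can also be set from a script instead of the API swagger page. Here is a minimal sketch using the Seeq SDK through SPy's authenticated client; the asset ID is a placeholder, and ItemsApi.set_property with PropertyInputV1 is assumed to be available in your SDK version:

from seeq import spy, sdk

# Assumes you are already logged in (spy.login or a Seeq Data Lab session)
items_api = sdk.ItemsApi(spy.client)

# Placeholder: copy the asset's ID from the GET Assets response
asset_id = 'YOUR-ASSET-ID'

# Set the 'size' property that the Treemap uses to weight box areas
items_api.set_property(
    id=asset_id,
    property_name='size',
    body=sdk.PropertyInputV1(value=10, unit_of_measure=''))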