Leaderboard
Popular Content
Showing content with the highest reputation since 11/02/2023 in Posts
-
Hi Paul, In R60+, you can use the resampleHold() function. Here is a link to the Knowledge Base article: https://support.seeq.com/kb/latest/cloud/optimizing-for-infrequently-updating-data If you are using an older version, you can splice on a signal that is equal to the last value:

$capsule = capsule(now()-7d, now())
$last_value = $signal.toScalars($capsule).last().toSignal()
$signal.forecastSplice($last_value)

You may need to adjust how far back from now() your capsule definition reaches so that it captures the last sample of the signal.
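The splice approach works because it extends the signal by repeating its last sampled value forward to the present. As a plain-Python illustration of that idea (a conceptual sketch with a hypothetical helper, not the SPy or Formula API):

```python
from datetime import datetime, timedelta

def hold_last_value(samples, now, step=timedelta(hours=1)):
    """Extend a sampled signal by repeating its last value up to 'now'.

    'samples' is a list of (timestamp, value) pairs sorted by time.
    Mirrors the idea of splicing on a forecast equal to the last value.
    """
    if not samples:
        return []
    last_ts, last_val = samples[-1]
    extended = list(samples)
    t = last_ts + step
    while t <= now:
        extended.append((t, last_val))
        t += step
    return extended
```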
-
@Johannes Sikstrom I think this is a bug in the SPy framework. It's not yet obvious what the problem is by looking at your code; I think I will need to see it in action. I will log a support ticket on your behalf and follow up with you there.
-
Hello Baishun, You are correct. When using the Histogram Tool with the aggregation type "Time," the timestamps are aggregated in UTC. One way to ensure the aggregation time fits your time zone is to create a new condition with your appropriate time zone from Tools > Identify > Periodic Condition. Using this new condition, change the aggregation type in your histogram to Condition (instead of Time). Let me know if this works for you. Best Regards, Rupesh
-
Seeq's .inverse() Formula function creates a new condition that is the inverse of the capsules of another condition. It is often used for identifying the time periods between the capsules in an event-related condition. For example, a user may create an "event" condition which represents equipment changes or maintenance events, and then want to quantify the time duration between the events, as well as the time elapsed from the last event to the current time. It may be important to statistically analyze the historical time between events, or the user may want to be notified if the time since the most recent event exceeds some guideline value.

A common and possibly confusing issue encountered by users is that the most recent "Time Since..." capsule (created with $EventCondition.inverse()) may extend indefinitely into the future, making it impossible to quantify the time elapsed from the last event to the current time. This issue is easily resolved with the approach shown in step 3 in the example below.

1. The user already has an event condition named "Filter Changes", which represents maintenance events on a process filter that removes particulates. The user wants to monitor the time elapsed between filter changes and therefore creates a "Time Since Filter Change" condition using $FilterChanges.inverse().

2. Signal from Condition is used to create the "Calculated Time Since Filter Change" signal, and the user chooses to place the result at the end of each capsule. Because the most recent capsule in the "Time Since Filter Change" condition extends past the current time and indefinitely into the future, the duration of that capsule can't be quantified (and it of course exceeds any specified maximum capsule duration). The user may be confused by the missing data point for the signal (see the trend screenshot below), and the missing value is an important result needed for monitoring.

3. The issue is easily resolved by clipping the most recent "Time Since Filter Change" capsule at the current time by adding .intersect(past()) to the Formula: $FilterChanges.inverse().intersect(past()). This ensures the most recent "time since" capsule will not extend past the current time, by intersecting it with the "past" capsule. The "Calculated Time Since Filter Change" signal (lane 3 in the screenshot, based on the clipped condition) updates dynamically as time moves forward, giving the user near-real-time information on the time elapsed.
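Conceptually, the clipped inverse condition measures the gaps between consecutive events plus the still-open gap from the last event up to the current time. A minimal Python sketch of that logic (a hypothetical helper for illustration, not a Seeq API):

```python
from datetime import datetime, timedelta

def time_since_events(event_times, now):
    """Durations between consecutive events, plus the elapsed time from the
    most recent event clipped at 'now' (the .intersect(past()) idea)."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    if event_times:
        gaps.append(now - event_times[-1])  # open interval, clipped at now
    return gaps
```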
-
Working with Asset Groups or spy.assets.Tree() has been effective. However, as your project expands, the process of adding and managing additional assets can become cumbersome. It's time to elevate your approach by transitioning to an Asset Template for building your asset tree. In this guide, I'll share valuable tips and tricks for creating an asset tree from an existing hierarchy. If you already have an established asset tree from another source, such as PI AF, you can leverage it as a quick starting point to build your tree in Seeq. Whether you're starting with an existing asset tree or starting from scratch, these insights will help you seamlessly integrate your asset structure into Seeq Data Lab. Additionally, I'll cover essential troubleshooting techniques, focusing on maximizing ease of use when using Asset Templates. In a future topic, I'll discuss the process of building an asset tree from scratch using Asset Templates. Let's dive into the best practices for optimizing your asset management workflow with Seeq Data Lab.

Why Templates over Trees?

The choice between using Asset Templates and spy.assets.Tree() depends on the complexity of your project and the level of customization you need. Here are some reasons why you might want to use Asset Templates instead of spy.assets.Tree():

Object-Oriented Design: Asset Templates allow you to leverage object-oriented design principles like encapsulation, inheritance, composition, and mixins. This can increase code reuse and consistency, especially in larger projects. However, the learning curve is a bit longer if you do not come from a programming background.

Customization: Asset Templates provide a higher level of customization. You can define your own classes and methods, which can be useful if you need to implement complex logic or calculations. You can more easily create complex hierarchical trees similar to Asset Trees from PI AF.

Handling Exceptions: Manufacturing scenarios often have many exceptions.
Asset Templates can accommodate these exceptions more easily than spy.assets.Tree().

Here are some reasons why you might want to use spy.assets.Tree() instead of Asset Templates:

Simplicity: spy.assets.Tree() is simpler to use and understand, especially for beginners or for simple projects. If you just need to create a basic hierarchical structure without any custom attributes or methods, spy.assets.Tree() is a good choice. You can always graduate to Asset Templates later.

Flat Structure: If your data is already organized in a flat structure (like a CSV file), spy.assets.Tree() can easily convert that structure into an asset tree.

Less Code: With spy.assets.Tree(), you can create an asset tree with just a few lines of code. You don't need to define any classes or methods.

Okay, we are settled on Asset Templates. What's first?

If you have an existing asset hierarchy with the attributes you want to use in your calculations, you can start there. I'll be using Example >> Cooling Tower 1 as my example existing asset tree. We'll use the asset tree path for our spy.search criteria and use the existing asset hierarchy when building our Template. The attachment includes a complete working Jupyter notebook. You can upload it to a Data Lab project and test each step as described here. Okay, let's review the code section by section.

# 1. Import libraries
from seeq import spy
import pandas as pd
import re
from seeq.spy.assets import Asset

# 2. Check and display the SPy version
version = spy.__version__
print(version)

# 3. Set the compatibility option so that you maximize the chance that SPy
#    will remain compatible with your notebook/script
spy.options.compatibility = 189

# 4. Display configuration for pandas to show all columns in DataFrame output
pd.set_option('display.max_colwidth', None)

In this cell, the script imports the Seeq Python (SPy) library as well as other required libraries, checks the SPy version, sets a compatibility option, and configures pandas to display DataFrame output without column width truncation.

# 1. Use spy.search to search for assets using the existing asset tree hierarchy.
metadata = spy.search(
    {'Path': 'Example >> Cooling Tower 1 >> Area*'},  # MODIFY: Change the search path
    old_asset_format=False,
    limit=None,
)

# 2. Split the "Path" column by the ">>" delimiter and create a new column "Modified_Path".
metadata["Modified_Path"] = metadata['Path'] + ' >> ' + metadata['Asset']
split_columns = metadata['Modified_Path'].str.split(' >> ', expand=True)

# 3. Rename the columns of the split DataFrame to represent different hierarchy levels.
split_columns.columns = [f'Level_{i+1}' for i in range(split_columns.shape[1])]

# 4. Concatenate the split columns with the original DataFrame to incorporate the hierarchy levels.
metadata = pd.concat([metadata, split_columns], axis=1)

# 5. Required for building an asset tree. This is the parent branch/name of the asset tree.
metadata['Build Path'] = 'Asset Tree Descriptive Name'  # MODIFY: Set the desired parent branch name

# 6. The child branch of 'Build Path', named after the "Level_2" column.
metadata['Build Asset'] = metadata['Level_2']  # Ex: "Cooling Tower 1"

This cell constructs the metadata used in our new asset tree. The spy.search on the existing hierarchy retrieves the metadata for each attribute in the asset path. The existing tree path is "Example >> Cooling Tower 1 >> Area*". This will fetch all attributes under each of the Areas (e.g., Area A - K). However, it will not retrieve any attributes that may reside directly in "Example >> Cooling Tower 1". The complete path for each asset is the concatenation of the "Path" column and the "Asset" column.
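The path-splitting step above can be sketched in plain Python, independent of pandas, to show what "Modified_Path" and the Level_N columns contain for a single metadata row (a hypothetical helper for illustration):

```python
def split_path_levels(path, asset):
    """Build the 'Modified_Path' and per-level names for one metadata row,
    mirroring the pandas str.split(' >> ', expand=True) step."""
    modified_path = f"{path} >> {asset}"
    parts = modified_path.split(" >> ")
    levels = {f"Level_{i+1}": name for i, name in enumerate(parts)}
    return modified_path, levels
```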
This "Modified_Path" is how we will define each level of our new asset tree. We split the Modified_Path and insert each level as a new column that will be referenced later when building the asset tree. In steps 5 and 6, we define the first two levels of the asset tree. In this example, the new path will start with "Asset Tree Descriptive Name >> Cooling Tower 1". Because 'Build Asset' comes from the 'Level_2' column, if we had included all cooling towers in our spy.search, the second level in the path could instead be 'Cooling Tower 2', depending on the asset's location in the existing tree. Alternatively, if you prefer a flat tree, you can specify the name of "Level 2" as a string rather than referencing the metadata column.

# This class will be called when using spy.assets.build.
# It is the child branch of 'Build Asset'.
# Ex: Asset Tree Descriptive Name >> Cooling Tower 1 >> Area A
# You can also have attributes at these levels by adding @Asset.Attribute() within the class.
class AssetTreeName(Asset):  # MODIFY: AssetTreeName

    @Asset.Component()
    def Asset_Component(self, metadata):  # MODIFY: Unit_Component
        return self.build_components(
            template=Level_4,      # This is the name of the child branch class
            metadata=metadata,
            column_name='Level_3'  # Metadata column name of the desired asset; Ex: Area A
        )

    # Example roll-up calculation.
    # This roll-up evaluates the temperature of each child branch (e.g., Area A, Area B,
    # Area C, etc.) and creates a new signal of the maximum temperature of the group.
    @Asset.Attribute()
    def Cooling_Tower_Max_Temperature(self, metadata):
        return self.Asset_Component().pick({  # MODIFY: Unit_Component() if changed in @Asset.Component() above
            'Name': 'Temperature Unique Raw Tag'  # MODIFY: the friendly name of the attribute created in Level_4
        }).roll_up('maximum')  # MODIFY: 'maximum'

In this cell, we are building the first class in our Asset Template.
In the context of Asset Templates, consider a class as a general contractor responsible for constructing the current level in your asset tree. It subcontracts the responsibility for the lower level to that level's class. In the class, you define attributes (signals, calculated signals, conditions, scorecard metrics, etc.) and components, which are the 'child' branches of this 'parent' branch. These components are built by a different class. This first class will be called when we use spy.assets.build to create the new asset tree. @Asset.Component() indicates that we are defining a method to construct a component. The method name is Asset_Component, which will construct the child branch. The 'template' parameter defines the class that will construct the child branch, while 'column_name' specifies the name of each child branch. Here we are using "Level_3", which contains the Area name (e.g., Area A). @Asset.Attribute() indicates that we are defining a method that will be an attribute. More details about creating attributes will be discussed in the next class. The example provided is a roll-up calculation: a signal equal to the maximum value of each child branch's "Temperature Unique Raw Tag", which will be defined in the next class. When using roll-up calculations, note that the method being called is the component method responsible for constructing the child branches, not the attribute residing in the child branches. The .pick() function uses the friendly name of the attribute to identify the attributes in the child branches.
# This is the child branch of the 'AssetTreeName' class
class Level_4(Asset):

    # Raw tag (the friendly name is the same as the method name with '_' removed).
    # Use this method if you are highly confident there are no multiple matches
    # in current or child branches.
    @Asset.Attribute()
    def Temperature_Unique_Raw_Tag(self, metadata):  # MODIFY: Temperature_Unique_Raw_Tag
        return metadata[metadata['Name'] == 'Temperature']

    @Asset.Attribute()
    def Temperature_Possible_Duplicates_Raw_Tag(self, metadata):  # MODIFY: Temperature_Possible_Duplicates_Raw_Tag
        # If multiple matching metadata rows are found, the first match is selected using .iloc[[0]].
        # This also addresses duplicate attributes in child branches.
        # If selecting the first match is not desired, improve the regex search criteria
        # or add additional filters to select the desired attribute, e.g.:
        # metadata[metadata['Name'].str.contains(r'(?:Temperature)$')
        #          & metadata['Description'].str.contains(r'(?:Discharge)')]
        filtered = metadata[metadata['Name'].str.contains(r'(?:Temperature)$')]  # MODIFY: r'(?:Temperature)$'
        return filtered.iloc[[0]]

In our example, Level_4 is the lowest level in our asset tree. All attributes related to an asset will reside here. In our example, our raw tags (i.e., no calculations) will reside in the same branch as our derived tags (i.e., calculated tags), which typically reference the raw tags in the formula parameters. Each attribute is defined by a method preceded by the @Asset.Attribute() decorator. When not specified, the friendly name of the attribute is the same as the method name with the underscores removed. Example: the method name is Temperature_Unique_Raw_Tag, and the friendly name is "Temperature Unique Raw Tag". We reference the metadata DataFrame with search criteria to select the proper row for the attribute. There are several ways to select the correct row using pandas. Here we are using the "Name" column to find the row with "Temperature" as the value.
SPy will find the "Temperature" value for each asset. This method can result in an error if there are multiple matches. The second method, Temperature_Possible_Duplicates_Raw_Tag, shows how you can avoid this error by using .iloc[[0]] to select the first match found, or how you can increase the complexity of your search criteria to further filter your results.

    # Calculated tag (signal) referencing an attribute in this class
    @Asset.Attribute()
    def Temperature_Calc(self, metadata):
        return {
            'Name': 'Signal Friendly Name',  # MODIFY: 'Signal Friendly Name'
            'Type': 'Signal',
            # This is the same formula you would use in Workbench Formula
            'Formula': '$signal.agileFilter(1min)',  # MODIFY: '$signal.agileFilter(1min)'
            'Formula Parameters': {'$signal': self.Temperature_Unique_Raw_Tag()},  # MODIFY: '$signal' and self.Temperature_Unique_Raw_Tag()
        }

    # Calculated tag (signal) not previously referenced as an attribute
    @Asset.Attribute()
    def Wet_Bulb(self, metadata):
        filtered = metadata[metadata['Name'].str.contains(r'^Wet Bulb$')]  # MODIFY: '^Wet Bulb$'
        return {
            'Name': 'Signal Friendly Name 2',  # MODIFY: 'Signal Friendly Name 2'
            'Type': 'Signal',
            'Formula': '$a.agileFilter(1min)',  # MODIFY: '$a.agileFilter(1min)'
            'Formula Parameters': {'$a': filtered.iloc[[0]]},  # MODIFY: '$a'
        }

    # Calculated tag (condition)
    @Asset.Attribute()
    def Temperature_Cond(self, metadata):
        return {
            'Name': 'Condition Friendly Name',  # MODIFY: 'Condition Friendly Name'
            'Type': 'Condition',
            'Formula': '$b > 90',  # MODIFY: '$b > 90'
            'Formula Parameters': {'$b': self.Temperature_Possible_Duplicates_Raw_Tag()},  # MODIFY: '$b' & self.Temperature_Possible_Duplicates_Raw_Tag()
        }

    # Scorecard metric
    # See the Asset Templates documentation for more examples of scorecard metrics
    @Asset.Attribute()
    def Temperature_Statistic_KPI(self, metadata):
        return {
            'Type': 'Metric',
            'Measured Item': self.Temperature_Unique_Raw_Tag(),  # MODIFY: self.Temperature_Unique_Raw_Tag()
            'Statistic': 'Average'  # MODIFY: 'Average'
        }

This is a continuation of our Level_4 class. Here we are creating derived attributes which use a formula or metric and, in most cases, reference other attributes. There are several example attributes for calculated signals, conditions, and scorecard metrics. You can specify the friendly name in the "Name" parameter as an alternative to using the name of the method. The value of "Formula" is the exact same formula you would use in Workbench. "Formula Parameters" is where you define the variables in your formula. The value of each variable key (e.g., $signal) is a method call to the attribute to be used in the formula (e.g., self.Temperature_Unique_Raw_Tag()). If you wish to reference a signal that has not been defined as an attribute, you can specify the value of the referenced variable using the same search criteria used when defining a new attribute. It is important to remember to change the "Type" when creating a condition or a scorecard metric; an incorrect type will cause an error.

    # Use this method if you want a different friendly name rather than the regex search criteria
    @Asset.Attribute()
    def Friendly_Name_Example(self, metadata):
        return self.select_metadata(metadata, name_contains=r'^Compressor Power$',
                                    friendly_name='Power')  # MODIFY: name_contains and friendly_name

    # Use this method if you want the friendly name to be the same as the regex
    # search criteria without the regex formatting
    @Asset.Attribute()
    def No_Friendly_Name_Example(self, metadata):
        return self.select_metadata(metadata, r'^Compressor Power$', None)

    # Example of a string attribute being passed through
    @Asset.Attribute()
    def String_Example(self, metadata):
        return self.select_metadata(metadata, r'Compressor Stage$', None)

    # Example of an attribute not found
    @Asset.Attribute()
    def No_Attribute_Exists(self, metadata):
        return self.select_metadata(metadata, r'Non-existing Attribute', None)

    # 1. Matches the metadata, performs attribute type detection, and applies the proper formula
    def select_metadata(self, metadata, name_contains, friendly_name):
        filtered = metadata[metadata['Name'].str.contains(name_contains)]
        if not filtered.empty:
            # Check whether the metadata is a signal vs. a scalar and select the first signal type
            if filtered['Type'].str.contains('signal', case=False).any():
                signal_check = filtered[filtered['Type'].str.contains('signal', case=False)
                                        & ~filtered['Value Unit Of Measure'].str.contains('string', case=False)]
                filtered_signal = signal_check if len(signal_check) > 0 else filtered.iloc[[0]]
                selected_metadata = filtered_signal.iloc[[0]]
            else:
                selected_metadata = filtered.iloc[[0]]
            return self.determine_attribute_type(selected_metadata, name_contains, friendly_name)
        else:
            # This returns a signal with no values rather than omitting the attribute when
            # it cannot be found. If this is not desired, simply comment out the
            # else statement. Having a signal is recommended to assist in asset swapping.
            return {
                'Name': friendly_name if friendly_name is not None else re.sub(
                    r'\^|\(|\$|\(\?:|\)$|\)', '', name_contains),
                'Type': 'Signal',
                'Formula': 'Scalar.Invalid.toSignal()',
            }

    # 2. Determine whether an attribute is a signal or scalar type
    def determine_attribute_type(self, metadata, name_contains, friendly_name):
        if metadata['Type'].str.contains('signal', case=False).any():
            return self.generic_metadata_function(metadata, name_contains, 'Signal', friendly_name)
        elif metadata['Type'].str.contains('scalar', case=False).any():
            return self.generic_metadata_function(metadata, name_contains, 'Scalar', friendly_name)
        return None

    # 3. Creates the metadata used to create the attribute
    def generic_metadata_function(self, metadata, name_contains, formula_type, friendly_name):
        if friendly_name is None:
            # Regular expression pattern to remove caret (^), parentheses, dollar ($),
            # and non-capturing group markers (?:)
            pattern = r'\^|\(|\$|\(\?:|\)$|\)'
            friendly_name = re.sub(pattern, '', name_contains)
        # Metadata for signals includes strings. Strings must be separated so that a
        # formula is only applied to non-string signals. Check whether the signal is a
        # "string" based on 'Value Unit Of Measure'.
        if formula_type == 'Signal' and metadata['Value Unit Of Measure'].str.contains(
                'string', case=False).any():
            formula_type = 'String'
        # If a signal, perform a calculation. If a string or scalar, pass only the variable in the formula.
        if formula_type == 'Signal':
            formula = '$signal.validValues()'  # MODIFY: '$signal.validValues()'
        elif formula_type == 'String':
            formula = '$signal'
        else:
            formula = '$scalar'
        return {
            'Name': friendly_name,
            'Type': formula_type,
            'Formula': formula,
            'Formula Parameters': (
                {'$signal': metadata} if formula_type in ['Signal', 'String']
                else {'$scalar': metadata})
        }

This is a continuation of the Level_4 class. This section is more of a trick than a tip. A common request is how to bulk-apply a formula to all signals as an initial data cleansing step. This can be challenging if your existing asset tree contains signals, scalars, and/or strings with the same friendly name in the parent and/or child branches. I've seen this happen when an instrument does not exist for a particular asset: the attribute still exists in the asset tree but contains no data and is either a scalar or a string. This method allows users to quickly perform the same calculations on all raw data signals. It checks for string, scalar, and signal types to determine whether a calculation is required. If there are multiple matches, it will preferentially select a signal over a string or scalar.
It will perform the appropriate calculation on the tag based on its type, or return a blank signal if the attribute is not found. The blank signal is a necessary evil for asset swapping if the attribute is used in a calculation for other attributes. The formula should be adjusted to handle the case where there are no valid values in the attribute.

@Asset.Attribute()
def Friendly_Name_Example(self, metadata):
    return self.select_metadata(metadata, name_contains=r'^Compressor Power$', friendly_name='Power')

This attribute method has been modified to make a method call to select_metadata. It passes the metadata; name_contains, which is the search criteria for finding the name value in the metadata "Name" column; and a friendly name. In this example, the attribute will be named "Power".

@Asset.Attribute()
def No_Friendly_Name_Example(self, metadata):
    return self.select_metadata(metadata, r'^Compressor Power$', None)

If the friendly name does not need to differ from the name_contains parameter stripped of its regex formatting (i.e., special characters), the attribute is named after the search criteria minus the special characters. In the example above, the attribute will be named "Compressor Power". This naming method is fairly simple and is not dynamic enough to insert the result of the regex search; in those cases, the friendly_name should be specified. The subsequent method call to determine_attribute_type determines the attribute type (e.g., signal, string, scalar). If multiple matches are found, it selects the first signal type over a scalar or string. If the attribute is not found, it creates an empty attribute. The final method call to generic_metadata_function determines the friendly name to be used and selects the correct formula for the attribute type. As I said earlier, this is more of a trick that addresses the most common issues I have faced when dealing with complex asset trees and applying a common formula in bulk.
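The friendly-name derivation above can be demonstrated in isolation. One caveat worth noting (my observation, not from the original post): in the guide's pattern the alternation lists \( before \(\?:, so the "(?:" marker is stripped piecewise and a stray "?:" can survive; placing the longer token first avoids that. A small sketch with that reordering:

```python
import re

# Reordered version of the guide's pattern: the non-capturing-group marker
# "(?:" must come before the bare "(" in the alternation, or re.sub will
# consume "(" alone and leave "?:" behind.
PATTERN = r'\^|\(\?:|\(|\$|\)'

def friendly_name_from_regex(name_contains):
    """Strip regex punctuation from a search string to derive a friendly name
    (hypothetical helper for illustration)."""
    return re.sub(PATTERN, '', name_contains)
```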
This can also be applied even if you don't want to apply a formula to all signals. In that case, simply modify the formula to be "$signal" instead of "$signal.validValues()". You can still preferentially select signal types over string or scalar types. However, when referencing one of these attributes in a calculated attribute later, you will still have to use the attribute method name (e.g., self.Friendly_Name_Example() when referencing the 'Power' attribute in the formula parameters).

build_df = spy.assets.build(AssetTreeName, metadata)

# Check for success for each tag for an asset
build_df.head(15)

After creating our asset structure and attributes, you can run this cell to build the asset tree. I recommend visually checking the 'Build Result' column of the DataFrame, looking for 'Success' in each attribute for a single asset.

workbookid = "WORKBOOK_ID"  # MODIFY: "WORKBOOK_ID"

# If you want the tree to be available globally, use None as the workbookid
push_results_df = spy.push(metadata=build_df, workbook=workbookid, errors='catalog')

# Check for success in the 'Push Result' column
push_results_df.head(15)

Once you are satisfied with your results or have resolved any errors, you can push your asset tree. If you wish to push your tree globally, set the 'workbook' argument in spy.push to None. I recommend doing this only after addressing all issues within a workbook, which can serve as your sandbox environment.

errors = push_results_df[push_results_df['Push Result'] != "Success"]
errors

The final step I perform is to check whether any of my attributes were not successfully pushed. If there were no issues, the errors DataFrame should contain no rows. Well, if you have made it to the end, congrats! If you skipped to the end, that's fine too. In this guide, we delved into the world of Asset Templates and their role in streamlining the creation and management of asset trees within Seeq Data Lab.
By leveraging object-oriented design principles, encapsulation, and inheritance, Asset Templates offer a powerful way to build complex hierarchical structures. Whether you're dealing with an existing asset tree or starting from scratch, the insights shared here aim to maximize the efficiency of your workflow. Download the attached Jupyter notebook and experiment with the code by modifying or adding new attributes, or by applying it to a different asset tree. Remember, the best way to grasp these concepts is to dive into the provided Jupyter notebook, make modifications, and witness the results firsthand. In the comments below, I will add some screenshots of common errors I come across when using spy.assets.build and what you should do to troubleshoot them. Tips Tricks for Existing Asset Trees.ipynb
-
Troubleshooting common errors when using Asset Templates: spy.assets.build

Error 1: No matching metadata row found
Error - Key phrase: "No matching metadata row found" for Temperature Unique Raw Tag on the Level_4 class.
Fix - Modify the search criteria of the attribute shown in the error.

Error 2: Component dependency not built (calculated tag)
Errors - Key phrases:
Component dependency not built - A calculated signal could not be created because the referenced formula parameter was not created.
Attribute dependency not built - No matching metadata row found for the referenced formula parameter.
Fix - Modify the search criteria of the attributes referenced by the calculated signals.

Error 3: Component dependency not built (branch not built)
Error - Key phrases: name 'Level_4' is not defined. Unit_Component [on AssetTreeName class] component dependency not built. This occurs because the template referenced in def Unit_Component in the AssetTreeName class does not exist; the class name for the child branch is actually "Not_Level_4".
Fix - Change the template to match the actual class name.

Example code causing the error:

class AssetTreeName(Asset):
    @Asset.Component()
    def Unit_Component(self, metadata):
        return self.build_components(
            template=Level_4,
            metadata=metadata,
            column_name='Level_3'
        )

class Not_Level_4(Asset):
    @Asset.Attribute()
    def Temperature_Unique_Raw_Tag(self, metadata):
        return metadata[metadata['Name'] == 'Temperature Unique Raw Tag'].iloc[[0]]

Corrected code:

class Level_4(Asset):
    @Asset.Attribute()
    def Temperature_Unique_Raw_Tag(self, metadata):
        return metadata[metadata['Name'] == 'Temperature Unique Raw Tag'].iloc[[0]]

Error 4: Multiple attributes returned
Error - Key phrase: "Multiple attributes returned" - This indicates that your search criteria for the attribute returned multiple results. In this case, there is a match in ">> Area A" and a match in ">> Area A >> Area A-Duplicate".
Fix - Use .iloc[[0]] at the end of the metadata search criteria to select the first match.
Original: metadata[metadata['Name'].str.contains(r'(?:Temperature)')]
Revised: metadata[metadata['Name'].str.contains(r'(?:Temperature)')].iloc[[0]]

Alternatively, you can add more criteria to the search to filter down to the desired tag:

metadata[metadata['Name'].str.contains(r'(?:Temperature)') & metadata['Level_4'].str.contains('Duplicate')]

Error 5: 'class name' object has no attribute 'attribute method name' for calculated attributes
Error - Key phrases: the error lies in the 'Formula Parameters' of Temperature_Cond. For the formula parameter, we have misspelled the attribute method name in the Level_4 class.
Fix - Check for misspelled methods or confirm the correct method is referenced.

Error 6: 'class name' object has no attribute 'attribute name' for roll-up calculations
Error - Key phrases: "Cooling_Tower_Max_Temperature" is causing the error. In the roll-up calculation, we are trying to reference self.Temperature_Unique_Raw_Tag(), which resides in Level_4, the child branch. Since this method does not reside in the AssetTreeName class, it cannot be found.

Example code causing the error:

class AssetTreeName(Asset):
    @Asset.Component()
    def Unit_Component(self, metadata):
        return self.build_components(
            template=Level_4,      # This is the name of the child branch class
            metadata=metadata,
            column_name='Level_3'  # Metadata column name of the desired asset; Ex: Area A
        )

    # Example roll-up calculation
    @Asset.Attribute()
    def Cooling_Tower_Max_Temperature(self, metadata):
        return self.Temperature_Unique_Raw_Tag().roll_up('maximum')

Fix - Use the method that builds the child branch (Unit_Component) in the AssetTreeName class, then use the .pick() function with the friendly name of the tag to be used in the calculation.
# Example roll-up calculation
@Asset.Attribute()
def Cooling_Tower_Max_Temperature(self, metadata):
    return self.Unit_Component().pick({
        'Name': 'Temperature Possible Duplicates Raw Tag'
    }).roll_up('maximum')

Error 7: spy.push errors
In this example, I have tried to apply an agileFilter function to a string attribute. This error can occur when an asset has a string or scalar attribute instead of a signal attribute (because a field instrument does not exist for that asset) while a sibling asset has a signal attribute, and you want to apply the agileFilter. It can be difficult to determine which attributes are causing the issue. You can catalog the errors and allow spy.push to continue pushing the asset tree without the failing items:

push_results_df = spy.push(metadata=build_df, workbook=workbookid, errors='catalog')

The error will be cataloged in the "Push Result" column with the same error shown in the example. To view all errors, you can filter the push results to the rows where the push result was not success:

error = push_results_df[push_results_df['Push Result'] != "Success"]
error
-
If you set your axis limits manually you can specify an inverted axis; however, it looks like there's a bug where that results in the tick marks disappearing from the axis. As an alternative, your idea of making the signal negative seems like a clever workaround. Taking it a step further, you could modify the number format of the negative signal so that it does not show the (-) sign. For example: #,##0.00;#,##0.00
1 point
-
Thanks for checking Ivan. I'm not sure what is happening here. I've not been able to reproduce this issue on our server. I also checked our ticketing system and am unable to find known issues related to the calculation you are trying to perform (the previous bug you encountered was related to min/max formulas and is fixed in 62.0.7). I suspect there might be an issue upstream of the calculation - are you performing any min/max or other operations on $s? It might be more expedient to sign up for an office hour session so we can work this 1:1. https://outlook.office365.com/owa/calendar/SeeqOfficeHours@seeq.com/bookings/
1 point
-
Hi Chris, (I am assuming you are not using Seeq Data Lab, please correct me if I'm wrong.) Seeq's SPy library is recommended for Python scripts, as it contains higher-level functions for common use cases, and it employs Pandas DataFrames as the primary data structure for input and output. seeq-spy is a "wrapper" around the seeq REST API Python SDK. To use SPy in Seeq Server R62.0.9, first install the seeq module like so:

pip install seeq~=62.0

Then install the latest version of SPy:

pip install seeq-spy

Then use the documentation for SPy found here: https://python-docs.seeq.com/user_guide/Tutorial.html
1 point
-
Hi Brandon Vincent, May I clarify the question? Does the issue arise when drum loading occurs in the middle of the day, causing your daily condition to split? If this is the case, one way to calculate the total material flow out of the tank daily could be as follows:

Step 1. Use Composite Condition with intersection logic between the "not loading" condition and the "daily" condition. This splits the daily condition based on the not loading condition.

Step 2. Use Signal from Condition to calculate the tank level delta (End value - Start value) bounded to the condition from step 1, and place the statistic at the start of the capsule. The result below shows two values for the Sep 2023 capsules, one at the 00:00 timestamp and another at the 13:32 timestamp.

Step 3. The delta value will be negative because the End value is smaller than the Start value. Use Formula's abs() function to take the absolute (positive) value: $end_start.abs()

Step 4. Find the sum of the deltas (End value - Start value) using Signal from Condition, bounding the calculation to the daily condition. This gives the total amount of material flowing out of the tank daily, despite drum loading in the middle of the day.

If the steps mentioned earlier don't apply to the problem you're facing, here's how you can calculate how many drums have been loaded.

Step 1. Use Signal from Condition to calculate the tank level delta (End value - Start value) during the "drum loading" condition.

Step 2. Use Formula to calculate the number of drums loaded by dividing by 21% (as mentioned in the question, the level increases 21% per drum loaded). The floor() function rounds a numeric value down to the next smallest integer. For example, if the tank level delta during drum loading is 68.69%, then 68.69%/21% = 3.27, and the floor function rounds the value down to 3 drums: ($level/21).floor().setunits('')

Let me know if this helps.
1 point
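The drum count in the final step can be sketched in plain Python. The 68.69% delta is the example value from the post, and 21% per drum comes from the original question:

```python
import math

# Tank level increase observed during one drum-loading capsule (example value)
level_increase_pct = 68.69
pct_per_drum = 21.0  # each drum loaded raises the tank level by 21%

# Mirror Seeq's ($level/21).floor(): round down to whole drums
drums_loaded = math.floor(level_increase_pct / pct_per_drum)
```

Here 68.69 / 21 = 3.27, which floor() rounds down to 3 drums, matching the worked example.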
-
A typical data cleansing workflow is to exclude equipment downtime data from calculations. This is easily done using the .remove() and .within() functions in Seeq formula. These functions remove or retain data when capsules are present in the condition that the user supplies as a parameter to the function. There is a distinct difference in the behavior of the .remove() and .within() functions that users should know about, so that they can use the best approach for their use case.

.remove() removes the data during the capsules in the input parameter condition. For step or linearly interpolated signals, interpolation will occur across those data gaps that are of shorter duration than the signal's maximum interpolation. (See Interpolation for more details concerning maximum interpolation.)

.within() produces data gaps between the input parameter capsules. No interpolation will occur across data gaps (no matter what the maximum interpolation value is).

Let's show this behavior with an example (see the first screenshot below, Data Cleansed Signal Trends), where an Equipment Down condition is identified with a simple Value Search for when Equipment Feedrate is < 500 lb/min. We then generate cleansed Feedrate signals which will only have data when the equipment is running. We do this two ways to show the different behaviors of the .remove() and .within() functions.

$Feedrate.remove($EquipmentDown) interpolates across the downtime gaps because the gap durations are all less than the 40 hour max interpolation setting.

$Feedrate.within($EquipmentDown.inverse()) does NOT interpolate across the downtime gaps. In the majority of cases, this result is more in line with what the user expects.

As shown below, there is a noticeable visual difference in the trend results. Gaps are present in the green signal produced using the .within() function, wherever there is an Equipment Down capsule.
A more significant difference is that, depending on the nature of the data, the statistical calculation results for time-weighted values like averages and standard deviations can be very different. This is shown in the simple table (Signal Averages over the 4 Hour Time Period, second screenshot below). The effect of time weighting the very low, interpolated values across the Equipment Down capsules when averaging the $Feedrate.remove($EquipmentDown) signal gives a much lower average value compared to that for $Feedrate.within($EquipmentDown.inverse()) (1445 versus 1907).

Data Cleansed Signal Trends

Signal Averages over the 4 Hour Time Period

Content Verified DEC2023
1 point
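The averaging difference can be illustrated with a small numeric sketch (synthetic data, not the signals in the screenshots). When the last retained sample before a downtime gap and the first one after it sit near the 500 lb/min threshold, interpolating across the gap (.remove()-style) drags the time-weighted average down, while excluding the gap entirely (.within()-style) does not:

```python
import numpy as np

def trapz(y, x):
    # time-weighted integral via the trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

# Synthetic feedrate (lb/min): samples adjacent to the gap are near the 500 threshold
t = np.array([0.0, 100.0, 110.0, 190.0, 200.0, 300.0])  # minutes
v = np.array([2000.0, 600.0, 0.0, 0.0, 600.0, 2000.0])
down = v < 500  # "Equipment Down" samples

# .remove()-style: drop downtime samples, interpolate across the gap,
# then time-weight over the whole window
grid = np.linspace(t[0], t[-1], 3001)
interp = np.interp(grid, t[~down], v[~down])
avg_remove = trapz(interp, grid) / (t[-1] - t[0])

# .within()-style: the gap contributes neither value nor duration
kept_time = (t[1] - t[0]) + (t[-1] - t[-2])
avg_within = (trapz(v[:2], t[:2]) + trapz(v[-2:], t[-2:])) / kept_time
```

With this toy data the .remove()-style average is about 1067 lb/min while the .within()-style average is 1300 lb/min, mirroring (in direction) the 1445 versus 1907 result in the table.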
-
If you modify your wind_dir variable to

$wind_dir = group(
  capsule(0, 22.5).setProperty('Value', 'ENUM{{0|N}}'),
  capsule(22.5, 67.5).setProperty('Value', 'ENUM{{1|NE}}'),
  capsule(67.5, 112.5).setProperty('Value', 'ENUM{{2|E}}'),
  capsule(112.5, 157.5).setProperty('Value', 'ENUM{{3|SE}}'),
  capsule(157.5, 202.5).setProperty('Value', 'ENUM{{4|S}}'),
  capsule(202.5, 247.5).setProperty('Value', 'ENUM{{5|SW}}'),
  capsule(247.5, 292.5).setProperty('Value', 'ENUM{{6|W}}'),
  capsule(292.5, 337.5).setProperty('Value', 'ENUM{{7|NW}}'),
  capsule(337.5, 360).setProperty('Value', 'ENUM{{8|N}}')
)

you will get an ordered Y axis. This is how Seeq handles enum Signal values from other systems - it has some limitations, but it seems like it should work well for your use case.
1 point
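For comparison, here is a plain-Python sketch (not Seeq Formula) of the same eight-sector binning, including the north wrap-around at 337.5/0 degrees:

```python
# Compass sectors in clockwise order; each spans 45 degrees centered on its label
SECTORS = ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW']

def wind_sector(degrees: float) -> str:
    # Shift by half a sector (22.5 deg) so that e.g. 337.5-360 and 0-22.5 both map to 'N'
    return SECTORS[int(((degrees + 22.5) % 360) // 45)]
```

The modulo handles the wrap, so a single expression covers both the 0|N and 8|N capsules above.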
-
In some cases you may want to do a calculation (such as an average) for a specific capsule within a condition. In Seeq Formula, the toGroup() function can be used to get a group of capsules from a condition (over a user-specified time period). The pick() function can then be used to select a specific capsule from the group. The Formula example below illustrates calculating an average temperature for a specific capsule in a high temperature condition.

// Calculate the average temperature during a user selected capsule of a high temperature
// condition. (The high temperature condition ($HighT) was created using the Value Search tool.)
//
// To prevent an unbounded search for the capsules, we must define the search start/end to use in toGroup().
// Here, $capsule simply defines a search time period and does not refer to any specific capsules in the $HighT condition.
$capsule = capsule('2019-06-19T09:00Z','2019-07-07T12:00Z')

// Pick the 3rd capsule of the $HighT condition during the $capsule time period.
// We must specify capsule boundary behavior (Intersect, EnclosedBy, etc.) to
// define which $HighT capsules are used and what their boundaries are (see
// CapsuleBoundary documentation within the Formula tool for more information).
$SelectedCapsule = $HighT.toGroup($capsule,CapsuleBoundary.EnclosedBy).pick(3)

// Calculate the temperature average during the selected capsule.
$Temperature.average($SelectedCapsule)

The above example shows how to select and perform analysis on one specific capsule in a given time range. If instead you wanted to select a certain capsule of one condition within the capsule of a second condition, you can use the .transform() function. In this example, the user wants to pick the first capsule from condition 2 within each capsule of condition 1. Use the Formula tool:

$condition1
  .removeLongerThan(1d)
  .transform($c -> $condition2.removeLongerThan(1d).toGroup($c).pick(1))

The output:

Content Verified DEC2023
1 point
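A plain-Python analog of the toGroup()/pick() pattern, using toy timestamps and values rather than real data: gather the capsules enclosed by a search window, pick the third one, and average the samples inside it. Note that pick() is 1-indexed in Seeq Formula, while Python lists are 0-indexed:

```python
samples = [(0, 10.0), (5, 20.0), (12, 30.0), (18, 40.0), (25, 50.0)]  # (time, value)
capsules = [(0, 4), (6, 8), (11, 20), (24, 26)]                        # (start, end)
window = (0, 30)                                                       # search period

# CapsuleBoundary.EnclosedBy: keep only capsules fully inside the window
enclosed = [c for c in capsules if window[0] <= c[0] and c[1] <= window[1]]

# Seeq's pick(3) corresponds to Python index 2
start, end = enclosed[2]

# Average the samples that fall within the selected capsule
inside = [v for ts, v in samples if start <= ts <= end]
avg = sum(inside) / len(inside)
```

With this toy data, the third capsule is (11, 20) and the average of the two samples it encloses is 35.0.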
-
When addressing a business problem with analytics, we should always start by asking ourselves 4 key questions:

Why are we talking about this: what is the business problem we are trying to address, and what value will solving this problem generate?
What data do I have available to help with solving this problem?
How can I build an effective analysis to identify the root of my problem (both in the past, and in future)?
How will I visualize the outputs to ensure proactive action to prevent the problem from manifesting? This is where you extract the value.

With that in mind, please read below how we approach the above 4 questions while working in Seeq to deal with heat exchanger performance issues.

What is the business problem?

Issues with heat exchanger performance can lead to downstream operational issues which may lead to lost production and revenue. To effectively monitor the exchanger, a case-specific approach is required depending on the performance driver:

Fouling in the exchanger is limiting heat transfer, requiring further heating/cooling downstream
Fouling in the exchanger is limiting system hydraulics, causing flow restrictions or other concerns
Equipment integrity: identifying leaks inside the exchanger

What data do we have available?

Process Sensors - flow rates, temperatures, pressures, control valve positions
Design Data - drawings, datasheets
Maintenance Data - previous repairs or cleaning, mean time between cleanings

How can we tackle the business problem with the available data?

There are many ways to monitor a heat exchanger's performance, and the selection of the appropriate indicator depends on a) the main driver for monitoring and b) the available data. The decision tree below is merely meant to guide what indicators can be applied based on your dataset. Generally speaking, the more data available, the more robust an analysis you can create (i.e. first-principles-based calculations).
However, in the real world, we are often working with sparse datasets, and therefore may need to rely on data-based approaches to identify subtle trends which indicate changes in performance over time. Implementing each of the indicators listed above follows a similar process in Seeq Workbench, as outlined in the steps below. In this example, we focus on a data-based approach (middle category above). For an example of a first-principles based approach, check out this Seeq University video.

Step 1 - Gather Data
In a new Workbench, search in the Data Tab for relevant process signals.
Use Formula to add scalars or use the .toSignal() function to convert supplemental data such as boundary limits or design values.
Use Formula, Value Search or Custom Condition to enter maintenance period(s) and heat exchanger cycle(s) conditions (if these cannot be imported from a datasource).

Step 2 - Identify Periods of Interest
Use Value Search, Custom Condition, Composite Condition or Formula to identify downtime periods, periods where the exchanger is bypassed, or periods of bad data which should be ignored in the analysis.

Step 3 - Cleanse Data
Use Formula to remove periods of bad data or downtime from the process signals, using functions such as $signal.remove($condition) or $signal.removeOutliers().
Use Formula to smooth data as needed, using functions such as $signal.agileFilter() or the Low Pass Filter tool.

Step 4 - Quantify
Use Formula to calculate any required equations. In this example, no calculations are required.

Step 5 - Model & Predict
Use Prediction and choose a process signal to be the Target Variable, and use other available process signals as Input Variables; choose a Training Period when it is known the exchanger is in good condition.
Using Boundaries: establish an upper and lower boundary signal based on the predicted (model) signal from the previous step (e.g. +/-5% of the modeled signal represents the boundaries).

Step 6 - Monitor
Use Deviation Search or Value Search to find periods where the target signal exceeds a boundary(ies). The deviation capsules created represent areas where heat exchanger performance is not as expected.
Aggregate the Total Duration or Percent Duration statistic using Scorecard or Signal From Condition to assess deteriorating exchanger health over time.

How can we visualize the outputs to ensure proactive action in future?

Step 7 - Publish
Once the analysis has been built in Seeq Workbench, it can be published in a monitoring dashboard in Seeq Organizer as seen in the examples below. This dashboard can then be shared among colleagues in the organization, with the ability to monitor the exchanger, log alerts, and take action as necessary as time progresses - this last step is key to implementing a sustainable workflow to ensure full value is extracted from solving your business problem.
1 point
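Steps 5 and 6 can be sketched numerically with synthetic data; a least-squares line stands in for Seeq's Prediction tool. Fit on a known-good training period, wrap the prediction in +/-5% boundaries, and flag samples that fall outside them:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)      # input variable (e.g. a flow rate)
true_y = 3.0 * x + 50.0              # target (e.g. outlet temperature) when healthy

# Step 5: train on the first half, a period when the exchanger is in good condition
y_train = true_y[:100] + rng.normal(0.0, 0.1, 100)
slope, intercept = np.polyfit(x[:100], y_train, 1)
predicted = slope * x + intercept
upper, lower = predicted * 1.05, predicted * 0.95   # +/-5% boundaries

# Simulate fouling: the actual signal drifts ~10% above the model late in the window
actual = true_y.copy()
actual[150:] += 0.10 * predicted[150:]

# Step 6: deviation capsules correspond to samples outside the boundaries
deviation = (actual > upper) | (actual < lower)
```

The healthy portion stays inside the +/-5% band, while the drifted tail is flagged, which is the same logic Deviation Search applies to the boundary signals.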
-
The following formula will calculate the average of your signals, even if one happens to go off-line:

average($a,$b,$c)

For those who are looking to have the calculation stop if one of the signals goes off-line, the following formula will help:

($a+$b+$c)/3

Content Verified DEC2023
1 point
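The difference between the two formulas can be mimicked in plain Python, with NaN standing in for an off-line signal (an analogy for the behavior, not Seeq's actual implementation):

```python
import math

# At this timestamp, signal $b is off-line (represented as NaN)
a, b, c = 10.0, math.nan, 20.0

# average($a,$b,$c)-style: skip unavailable signals and average the rest
available = [v for v in (a, b, c) if not math.isnan(v)]
robust_avg = sum(available) / len(available)

# ($a+$b+$c)/3-style: one unavailable signal invalidates the whole result
strict_avg = (a + b + c) / 3
```

The explicit arithmetic version propagates the missing value, which is exactly why it "stops" when any input goes off-line.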
-
Background

In this Use Case, a user created a condition to identify when the compressor is running. During each Compressor Running capsule, the compressor operates in a variety of modes. The user would like a summary of the modes of operation for each capsule in the form of a new signal that reports all modes for each capsule (i.e. Transition;Stage 1;Transition;Stage 2;Transition, Stage 1;Transition).

Method

1. The first step is to resample the string value to only have data points at the value changes. It's possible the signal is already sampled this way, but if it is not, use the following Formula syntax to create a "compressed" signal:

$stringSignal.tocondition()
  .setMaximumDuration(3d)
  .transformToSamples($capsule -> sample($capsule.getStart(), $capsule.getProperty('Value')), 4d)

2. Now, you can create a signal that concatenates the string values during each capsule. This is achieved using the following Formula syntax:

$compressorRunning.setmaximumduration(10d)
  .transformToSamples($cap -> sample(
    $cap.getStart(),
    $compressedStringSignal.toGroup($cap).reduce("", ($s, $capsule) -> $s + $capsule.getvalue())), 7d)
  .toStep()
1 point
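The two-step logic can be sketched in plain Python, using a hypothetical mode sequence for one Compressor Running capsule:

```python
from itertools import groupby

# Raw string samples observed during one Compressor Running capsule
modes = ['Transition', 'Transition', 'Stage 1', 'Stage 1', 'Transition', 'Stage 2']

# Step 1: "compress" the signal - keep a value only where it changes
compressed = [value for value, _ in groupby(modes)]

# Step 2: concatenate the compressed values into one summary string per capsule
summary = ';'.join(compressed)
```

The result is one string per capsule, e.g. Transition;Stage 1;Transition;Stage 2, matching the format described in the Background.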