Patrick

Seeq Team
  • Posts: 28
  • Days Won: 12
Everything posted by Patrick

  1. Solution: To address your first question of creating the t>1 hr after shut-in capsule, one method is below:
     1. Create a condition for ~5 min before shut-in and shift it back by ~5 min to make sure you have the steady-state DHP prior to shut-in. This is done using Formula with the syntax $ShutinCondition.beforeStart(5min).move(-5min). The beforeStart(5min) function creates a new 5-minute capsule directly preceding your shut-in capsule, and move(-5min) shifts this new "beforeStart" capsule backwards in time by 5 minutes.
     2. Calculate a new signal that represents the average DHP over the condition created in Step 1. Use "Signal from Condition" with the following inputs: Signal or condition = DHP; Summary statistic = Average; Bounding condition = condition from Step 1; Where to place timestamp = Start; Interpolation method = Step.
     3. Calculate a new signal that is the signal created in Step 2 (average DHP before shut-in) + 10%. Use Formula with the syntax $signalFromStep2*(1.1).
     4. Calculate a new signal that is the delta between the DHP and this initial + 10% signal. Use Formula with the syntax $DHPsignal-$signalFromStep3.
     5. Create a condition for when the delta signal created in Step 4 drops below zero. Use Value Search with <0 as the input.
     6. Create a Composite Condition that joins the end of your existing "t<1 hr after shut-in" capsules to the start of the condition created in Step 5. The inputs to this Composite Condition are: Condition A = t<1 hr after shut-in; Condition B = delta signal drops below zero; Logic = join, not inclusive of A, not inclusive of B; Maximum capsule duration = long enough to capture the time back down to the initial DHP.
     For each of the two new signals you want to create, you can use Formula with the following syntax, where $ChooseCondition is your t<1 hr or t>1 hr condition in each formula: $DHP.within($ChooseCondition).runningAggregate(average(),hours()). This will give you a new signal of the 1-hr rolling average of DHP only during the specified condition.
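     For reference, here is a minimal Formula sketch combining Steps 1-5 into a single condition. The variable names are illustrative, and the Signal from Condition step in the tool additionally lets you set the step interpolation explicitly:
     // Step 1: 5-minute window ending 5 minutes before shut-in
     $preShutIn = $ShutinCondition.beforeStart(5min).move(-5min)
     // Step 2: average DHP over that window, placed at the capsule start
     $initialDHP = $DHP.aggregate(average(), $preShutIn, startKey())
     // Step 3: initial DHP + 10%
     $threshold = $initialDHP * 1.1
     // Steps 4-5: condition while DHP is below the initial + 10% threshold
     ($DHP - $threshold) < 0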
  2. Thanks for checking Ivan. I'm not sure what is happening here. I've not been able to reproduce this issue on our server. I also checked our ticketing system and am unable to find known issues related to the calculation you are trying to perform (the previous bug you encountered was related to min/max formulas and is fixed in 62.0.7). I suspect there might be an issue upstream of the calculation - are you performing any min/max or other operations on $s? It might be more expedient to sign up for an office hour session so we can work this 1:1. https://outlook.office365.com/owa/calendar/SeeqOfficeHours@seeq.com/bookings/
  3. Hi Ivan - Have you tried clearing the cache on the source signal via Item Properties in the Details Pane ("Clear Cached Values" under the Advanced section)?
  4. One suggestion from my colleague that you can try as well, if you want to keep datasource caching on, is to go to the Admin page for the datasource and toggle the cache: if it is not yet enabled, enable, disable, and re-enable it; if it is already enabled, disable and re-enable it. This may achieve the same result without resorting to code.
  5. Hi David - While I'm not aware of an "easy button" to clear the cache on all items in a Workbench/Environment, you can programmatically clear the cache using the items API endpoint. One way to do this in Seeq Data Lab, given a list of signal IDs, is to loop through that list and call the API endpoint to clear the cache with something like this:

     # spy is typically available already in a Data Lab notebook; sdk provides the API client
     from seeq import spy, sdk

     # Get all the signal IDs under the AF path of choice
     signal_id_list = spy.search({'Path': 'Path_to_AF_signals_that_need_to_be_cleared'},
                                 limit=max_number_of_expected_signals)['ID'].tolist()

     # Clear the cache of each signal and of any dependent (calculated) items
     items_api = sdk.ItemsApi(spy.client)
     for signal_id in signal_id_list:
         items_api.clear_cache(id=signal_id, clear_dependents=True)

     The above example will get all the signal IDs in the AF path of choice, then clear their cache and the cache of any dependent items used in Workbenches on your system. If you want to be more surgical, you could pull the item IDs from the Workbench (e.g., via spy.utils.get_analysis_worksheet_from_url(worksheet_url).display_items), but note you will want to get to the root of any calculations to make sure the cache clearing propagates through all dependencies. Would this work for your use case?
  6. Gotcha. The good news is that coming in R63 will be a way to force string signal values to be upper, lower, or title case. The bad news is that R63 is not yet released, though it's right around the corner. If your signal naming is consistent, one way to tackle this in the meantime would be to do a replace on the string to standardize the case: $string_signal.replace('BLU','Blu').replace('LB','lb') You can then create a condition using .toCondition(), which will span the entire duration over which the signal value doesn't change: $replaced_string.toCondition() The result will be a single condition for that string state. As noted, this will be simpler and more robust starting in R63, where step 1 can simply be replaced by $string_signal.lower(), which forces the signal value to all lower case (alternatively, .upper() and .toTitleCase() will also be available for your use). Let me know if this helps.
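     For reference, both steps can also be combined into a single Formula; a sketch using the same replace pairs as above:
     $string_signal
       .replace('BLU','Blu')
       .replace('LB','lb')
       .toCondition()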
  7. Hi protonshake - To convert a step signal into capsules, you can use the .toCapsules() or .toCondition() Formula function. The former will create a capsule between each pair of samples, regardless of whether the value has changed, whereas .toCondition() will create a continuous capsule while the value is not changing. Regarding your input signal and changing case, is the end goal for the above to have a single capsule for the duration shown (i.e., ignore the signal case), or do you want them separated out each time the source signal case changes?
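     In the meantime, here is a minimal Formula sketch contrasting the two, assuming your string signal is $state (the variable name is illustrative):
     // $state.toCapsules() creates one capsule per sample, even when the value repeats
     // $state.toCondition() creates one capsule per run of unchanged values:
     $state.toCondition()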
  8. Hi Tayyab - One alternative would be to disable inheritance on a folder in the Corporate Drive and then remove the "Everyone" group. To do so, create a folder in the Corporate Drive, click the "three dots" on the right, and select "Manage Permissions". You will see that "Everyone" has access. Click on "Advanced" and "Disable permission inheritance". You can now remove the "Everyone" group from the folder's access and add your collaborators, giving you a folder that your collaborators share (and not "Everyone").
  9. Hi Alyssa - I'm trying to reproduce your issue on one of our Seeq servers but am unable to get the same behavior. Have you tried reducing the update frequency to see if that helps? If not, I'd suggest swinging by one of our office hours so an Analytics Engineer can take a look with you 1:1. Alternatively, please submit a support ticket so we can follow up. Office Hour sign-up: https://outlook.office365.com/owa/calendar/SeeqOfficeHours@seeq.com/bookings/ Support Portal: https://seeq.atlassian.net/servicedesk/customer/portal/3
  10. Hi Johannes - Thank you for the suggestion. As you note, this is currently possible via Scorecard Metrics and use of Thresholds (which can reference static values or signals), but it does involve creating another signal. I'd recommend submitting this suggestion to our Support Portal (https://seeq.atlassian.net/servicedesk/customer/portal/3). Feature requests like these can be submitted as an "Analytics Request", and we will then log it internally with our Product Team for follow-up.
  11. To add to what Kin How mentioned above, I would encourage you to submit a feature request for this capability, as I could see this being very helpful when iterating on analyses over time. To do so, go to support.seeq.com, click on "Support Portal" in the lower right, and submit an "Analytics Help" request referencing this post. A one-liner will be sufficient. We will make sure it gets linked to a developer feature request.
  12. Here's a simple radar plot Add-on that I have written. You can use it as a starting point if helpful. Radar Plot Add-on.ipynb
  13. Hi Tranquil - While not specific to XY plot, Seeq does have formulas that help you detect step changes in signals. If you are looking for changes in the samples coming in, I would suggest you explore the runningDelta() formula, which compares sample-by-sample changes. For instance, if you want to generate a "Step Change Condition" whenever the change in signal is greater than 3, you could write it as: abs($signal.runningDelta())>3 Once the condition is generated, you can filter your XY plot by that condition. To count the number of changes for a time period, you can apply a Signal from Condition or Scorecard Metric to count up the number of step changes. Example of a Simple Metric:
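     As an alternative to the point-and-click tools, a minimal Formula sketch for counting step changes, assuming $stepChanges is the condition created above (the name is illustrative):
     // Counts the step-change capsules in each calendar day, placing the value at the start of the day
     $stepChanges.aggregate(count(), days(), startKey())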
  14. As a follow-up to the above, you will also want to check the Seeq Datalab version currently installed on your server by looking at the lower left in the Seeq Datalab browser. If it matches the Seeq version shown in Workbench, the pip uninstall -y seeq command should fix your issue. If the version does not match, contact your Seeq Admin to upgrade the SDL server to match; this is the more robust solution, as the approach above can cause the error to reoccur after future upgrades.
  15. Hi Mohamed - This error happens when the Seeq Datalab version doesn't match the version of Seeq. You can update the SDL version to match the Seeq server (e.g., R54) by executing pip install -U seeq~=54.x, where x is the point release on your Seeq server (found in the lower left of the browser window). By the way, to check the Datalab version currently installed, you can execute: pip show seeq Lastly, to see what Seeq PyPI versions are available along with install instructions, visit the Seeq PyPI project at https://pypi.org/project/seeq/
  16. Hi David - For this you can use the periods Formula. You can also "anchor" it to a specific start time. Example below. periods(12 hours, 12 hours, '2023-01-01T12:00:00','US/Mountain')
  17. Hi Brie - You can try using the remove formula, i.e., $signal.remove($condition). This should remove the data inside that condition.
  18. Update - To see a video of the below workflow in action, check out this Seeq Tips and Tricks video.

      Webhooks are a convenient method to send event data from Seeq to "channel" productivity tools such as Microsoft Teams or Slack. The following post describes how Seeq users can leverage Seeq Data Lab to send messages directly to MS Teams via Webhooks.

      Pre-Requisites:
      1) Seeq Data Lab with Scheduled Notebooks enabled
         a. See Administration Panel -> Configuration and filter for "Features/DataLab/ScheduledNotebooks/Enabled"
      2) MS Teams Channel with a Webhook Connector

      Assumptions:
      1) Summary of capsules generated in a defined time range (e.g., every 12 or 24 hours)
      2) Notifications are not near-real-time; the script will run on a pre-defined schedule generally measured in hours, not minutes or seconds
      3) Events of interest are contained in an Asset Tree or Group with one or more Conditions

      Step 1: Configure a Webhook in MS Teams
      To send Seeq capsules/events to MS Teams, a Webhook for the target channel needs to be created. Detailed instructions on how to configure Webhooks in MS Teams can be found here: https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook
      For the purpose of this post, we will create a Webhook URL in our "Seeq Notifications" Team to alert on temperature excursions. The alerts will be posted in the "Cooling Tower Temperature Monitoring" channel. Team and channel names can be configured to fit your needs/operation; this is just an example for demonstration purposes. MS Teams will generate a Webhook URL, which we will use in our script in Step 4.

      Step 2: Identify or Create an Asset Group or Asset Tree to Define the Monitoring Scope
      To scope the events of interest, we will use an Asset Tree that contains "High Temperature" conditions for a collection of monitoring Assets. While this is not a requirement for using Webhooks, it helps with scaling the notification workflow and allows us to combine multiple Conditions from different Assets into a single workflow. To learn how to create an Asset Tree, follow the "Asset Trees 1 - Introduction.ipynb" tutorial in the SPy Documentation folder contained in each new Seeq Data Lab project. The script for the Monitoring Asset Tree used in this post is attached for reference: Monitoring Asset Tree.ipynb
      Alternatively, Asset Groups can also be used to create an asset structure directly in Workbench without using Python.
      Once the Asset Group/Tree containing the monitoring Conditions is determined, create a Worksheet with a Treemap or Table overview for monitoring use. Make note of the URL, as it will be included in the notification as a link to the monitoring overview whenever an event is detected. For locally scoped Asset Groups or Trees, it will also tell the script where to look for Conditions.

      Step 3: Install the "pymsteams" Library in Seeq Data Lab
      The pymsteams library allows users to compose and post messages (or cards) to MS Teams. The library can be installed from the PyPI repository (pypi.org) using the "pip install" command.
      1) Open a Seeq Data Lab project
      2) Launch a Terminal session
      3) Install the pymsteams library by executing: pip install pymsteams
      Additional documentation on pymsteams can be found here: https://pypi.org/project/pymsteams/

      Step 4: Create or Update the Monitoring Script
      We are now ready to configure a monitoring script that sends notifications to the Webhook configured in Step 1, using Conditions scoped to the Asset Tree from Step 2.
      a) Import the relevant libraries, including the newly installed pymsteams library

      import pandas as pd
      from datetime import datetime, timedelta
      import pytz
      import pymsteams

      b) Configure input parameters

      #Refer to Microsoft Documentation on how to configure a Webhook for a MS Teams channel
      webhook_url='YOUR WEBHOOK HERE'

      #Specify the monitoring workbook - this is where the alert will link with the associated timeframe
      monitoring_workbook_url='YOUR WORKBOOK HERE'

      #Specify the asset tree and associated condition for which the webhook should be triggered
      asset_tree='Compressor Monitoring'
      monitoring_condition='High Temperature'

      #Specify the lookback period and timezone to search for capsules
      lookback_interval_hours=24
      timezone=('US/Mountain')

      c) Search for event capsules

      #Set time range to look for new conditions
      delta=timedelta(hours=lookback_interval_hours)
      end=datetime.now(tz=pytz.timezone(timezone))
      start=end-delta

      #Parse the workbook information
      workbook_id=spy.utils.get_workbook_id_from_url(monitoring_workbook_url)
      worksheet_id=spy.utils.get_worksheet_id_from_url(monitoring_workbook_url)

      #This block is optional; it stores the search results for the conditions once instead of searching each
      #time the script runs. Saves time if the search result is not expected to change. To reset, just delete
      #the .pkl file.
      pkl_file_name=asset_tree+'_'+monitoring_condition+'_'+workbook_id+'.pkl'
      try:
          monitoring_conditions=pd.read_pickle(pkl_file_name)
      except:
          monitoring_conditions=spy.search({'Name':monitoring_condition,
                                            'Type':'Condition',
                                            'Path':asset_tree},
                                           workbook=workbook_id,quiet=True)
          monitoring_conditions.to_pickle(pkl_file_name)

      #Pull capsules present during the specified time range
      events=spy.pull(monitoring_conditions,start=start,end=end,group_by=['Asset'],header='Asset',quiet=True)
      number_of_events=len(events)
      events

      d) Send a message to the Webhook using the pymsteams library if a capsule is detected in the time range

      #If capsules are present, trigger the webhook to compile and send a card to MS Teams
      if number_of_events != 0:
          events.sort_values(by='Condition',inplace=True)

          #Create url for the specific notification time-frame using the Seeq URL builder
          investigate_start=start.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
          investigate_end=end.astimezone(pytz.utc).strftime('%Y-%m-%dT%H:%M:%SZ')
          investigate_url=f"https://explore.seeq.com/workbook/builder?startFresh=false"\
                          f"&workbookName={workbook_id}"\
                          f"&worksheetName={worksheet_id}"\
                          f"&displayStartTime={investigate_start}"\
                          f"&displayEndTime={investigate_end}"\
                          f"&expandedAsset={asset_tree}"

          #Create message information to be posted in the channel
          assets=[]
          text=[]
          for event in events.itertuples():
              assets.append(event.Condition)
              #Capsule started before the lookback window
              if pd.isnull(event[2]):
                  if pd.isnull(event[3]) or event[4] == True:
                      text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and is in Progress')
                  else:
                      text.append(f'Event was already in progress at {start.strftime("%Y-%m-%d %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
              #Capsule started during the lookback window
              else:
                  if pd.isnull(event[3]) or event[4] == True:
                      text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and is in Progress')
                  else:
                      text.append(f'Event started {event[2].strftime("%Y-%m-%d at %H:%M:%S %Z")} and ended {event[3].strftime("%Y-%m-%d at %H:%M:%S %Z")}')
          message='\n'.join(text)

          #Create the MS Teams card - see the pymsteams documentation for details
          TeamsMessage = pymsteams.connectorcard(webhook_url)
          TeamsMessage.title(monitoring_condition+" Event Detected")
          TeamsMessage.text(monitoring_condition+' triggered in '+asset_tree+f' Asset Tree in the last {lookback_interval_hours} hours')
          TeamsMessageSection=pymsteams.cardsection()
          for i,value in enumerate(text):
              TeamsMessageSection.addFact(assets[i],value)
          TeamsMessage.addSection(TeamsMessageSection)
          TeamsMessage.addLinkButton('Investigate in Workbench',investigate_url)
          TeamsMessage.send()

      Step 5: Test the Script
      Execute the script, ensuring at least one "High Temperature" capsule is present in the lookback duration. The events dataframe in Step 4c will list the capsules that were detected. If no capsules are present, adjust the lookback duration. If at least one capsule is detected, a notification will automatically be posted in the channel for which the Webhook has been configured.

      Step 6: Schedule the Script to Run on a Specified Frequency
      If the script operates as desired, configure a schedule for it to run automatically.

      #Optional - schedule the above script to run on a regular interval
      spy.jobs.schedule(f'every day at 6am')

      The script will run on the specified interval and post a summary of "High Temperature" capsules/events that occur during the lookback period directly to the MS Teams channel. Refer to the spy.jobs.ipynb notebook in the "SPy Documentation" folder for additional information on scheduling options.

      Attached is a copy of the full example script: Seeq MS Teams Notification Webhook - Example Script.ipynb
  19. You can append to existing topics/documents (pages) by first pulling the Organizer topic and then adding to the respective page:

      #Search for the topic (I used the Topic ID, but there are other options as well)
      topic_search=spy.workbooks.search({'ID':'DFFC7BB8-9EE3-42DD-937A-2CE2FAAAB0E8'})

      #Pull the topic associated with that ID. This creates an object of the Organizer that you can modify.
      topic=spy.workbooks.pull(topic_search)

      #Extract the "Tensile" page so you can modify it
      tensile = topic[0].document('Tensile')

      #Add an image to the "Tensile" page (the trailing semicolon suppresses the cell output)
      tensile.document.add_image(filename='Awesome_Chart.png', placement='end');

      #Push the modified Organizer back to Seeq
      spy.workbooks.push(topic)

      Let me know if this works!
  20. Hi patjdixon - You do have the ability to push images directly into an Organizer Topic. To do this, first save the image using plt's savefig method:

      Lab_Pred_Fig.savefig('Awesome_Chart.png')

      Create an Organizer Topic object:

      topic = spy.workbooks.Topic({'Name': "Charts"})

      Add a page to the Topic object:

      page = topic.document('Visualizations Using Data Lab')

      Add the image to the page (note the semicolon on the end suppresses the output from the cell):

      page.document.add_image(filename='Awesome_Chart.png', placement='end');

      Publish the Topic:

      spy.workbooks.push(topic)

      If you have an existing Topic, you can use spy.workbooks.pull() to pull in the object and follow the same methodology as above. Note I included a [0] index to reference the Topic document in the pull results.

      topic_search=spy.workbooks.search({'ID':'DFFC7BB8-9EE3-42DD-937A-2CE2FAAAB0E8'})
      topic=spy.workbooks.pull(topic_search)
      page = topic[0].document('Visualizations Using Data Lab')
  21. Hi Eric - Thanks for the suggestions, all great ideas. If you haven't already, please submit these ideas to support@seeq.com, as we do track feature requests there and will feed that info to our dev team. You will then also be notified when those features are implemented. Note that version tracking for Seeq Datalab projects will be improving in R58 with Git integration, which should make it easier to manage and track the scripts you are creating. https://seeq.atlassian.net/wiki/spaces/KB/pages/2436104292/What+s+New+in+R58
  22. We've received reports from some users encountering errors when following the above guide. Below are solutions to the two errors users have reported:

      1) NameError: name 'total_time_by_reason_code' is not defined
      To fix this, define the dataframes first by inserting the definitions towards the beginning of the script:

      total_time_by_reason_code=pd.DataFrame()
      percent_time_by_reason_code=pd.DataFrame()
      cum_percent_time_by_reason_code=pd.DataFrame()

      2) TypeError when trying to plot the first bar chart: value should be a 'Timedelta', 'NaT', or array of those. Got 'int' instead.
      To fix this, the 'Total_Time_by_Reason_Code' series can be converted to hours by adding the following line of code:

      total_time_by_reason_code['Total_Time_by_Reason_Code'] = total_time_by_reason_code['Total_Time_by_Reason_Code'].dt.total_seconds()/(60*60)

      Note the original post performs this conversion later in the script; that later conversion is no longer needed and should be removed.
  23. Hi Jesse - Unfortunately moving property columns to the left/top of metric columns in tables is not yet supported. There is an open developer request for this functionality. If you would like to be notified when this functionality becomes available (and advocate for prioritization), please send us a ticket to support@seeq.com referencing developer request #27076 so we can log your ticket against that request. Thanks, Patrick
  24. This is a continuation of the Asset Groups Part 1 post. If you are not familiar with Seeq Asset Groups, it's recommended to read Part 1 first. This post explores how to configure calculations directly in an Asset Group.

      In the previous post we covered how to create a basic Asset Group from a collection of unorganized tags. By mapping these tags to an Asset as an Attribute, we can trend, swap, calculate, and visualize analytics across the configured Assets. Once an Asset Group is configured, calculations can be generated and swapped provided they are based on the signals that were configured in the Asset Group. Note that the calculations in Part 1 were not configured as a separate Attribute in the Asset Group, but rather are a dependency of the signals in the Asset Group. To illustrate, navigate to "Location 1" in the Asset Group created in Part 1 and note that while the two signals are listed, the newly created "High Temperature" condition is not a dedicated item in the Asset Group. This is because the calculation was not created in the Asset Group, but instead relies on signals (Attributes) from Assets in the Asset Group.

      Now let's explore how to configure calculations directly in Asset Groups. Make sure your trend/treemap view is set to Location 1. (If it is not, swap the asset to Location 1 by navigating to "Facility Temperature Monitoring" in the Data tab and clicking the swap icon next to Location 1.)

      1) In the Data pane, click "Reset" and then edit the Asset Group created in Part 1.
      2) Click on "Add Column" followed by "Add Calculated Item".
      3) Users are given two options for the calculation type: "Existing Seeq Item" and "Build Formula from scratch". Select "Existing Seeq Item".
      4) A search modal will appear. Under "Recently Accessed" you should see the previously created "High Temperature" condition. You can also search for it by name. Once located, select it and click "Next" to copy the formula syntax directly into the Asset Group.
      5) Seeq will automatically map the associated column(s) to the variable(s) in the formula syntax. Give the Column (Attribute) a name and click "Add Calculated Item".
      6) Note the new column that was added with an f(x) symbol to denote that it's a formula rather than a mapped tag. The formula can be viewed and edited by clicking on the f(x) icon. Click "Save" to save the Asset Group.
      7) In the Data pane, navigate to Location 1 in the Asset Group and note the newly created condition is now listed directly in the Asset Group. Creating the condition directly in the Asset Group offers two main benefits:
         a. Items can be easily discovered, added, and removed from trends by navigating through the Asset Group.
         b. Each Asset gets its own version of the item, meaning it can be modified to be different for each individual Asset. More on this next.
      8) Add the newly created condition ("High Temperature AG Condition") to the Details pane and remove the previous "High Temperature" condition. If you are still in Treemap view, re-assign the priority color for the new condition.
      9) Click the edit icon to edit the High Temperature AG Condition for Location 1, change the limit from 100 to 110, and click "Execute". This will update the condition criteria for Location 1. Locations 2 and 3 will retain the previous limit of 100.
      10) To verify, asset swap to Location 2 via the Data tab and edit "High Temperature AG Condition". Notice the Formula for Location 2 still retains the original limit of 100.
      11) Individual formula modifications can also be done directly in the Asset Group. Let's edit the Asset Group via the Data pane by clicking on the "Edit" icon.
      12) Click on the f(x) icon for Location 1 and notice the limit of 110 configured earlier.
      13) Close the dialog box and click the f(x) for Location 2. Note the limit is set to 100. Let's change it to 90 and click "Save". This will change the limit for Location 2. Locations 1 and 3 will remain unaffected, which you can verify by clicking the respective f(x) button for those locations.
      14) Asset Group calculations can also be added without referencing a pre-existing item. Seeq refers to this as building a Formula from scratch. To do so, click on "Add Column" -> "Add Calculated Item". This time, select "Build Formula from scratch" from the modal.
      15) Let's create a Low Temperature condition that will trigger if the temperature is less than 40 deg F (a minimal Formula sketch is shown at the end of this post). Click "Add Calculated Item" to add it to the Asset Group.
      16) Save the Asset Group and navigate to the Asset Group in the Data pane to verify the new condition has been added.
      17) (Optional - Seeq version R55 and later) Edit the Asset Group to add additional Assets, which will automatically generate the same analysis for the added source tags. You can rename the newly added assets and map the underlying tags associated with the other Assets. Once the assets are named and the tags mapped, save the updated Asset Group. The additional Assets will be displayed in the Asset Group and can be trended, swapped, and displayed the same way.
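      For step 15, a minimal Formula sketch for the Low Temperature condition, assuming the temperature attribute is mapped to the variable $temperature (the variable name is illustrative):
      // A condition that is active whenever the temperature is below 40
      // (if the signal carries units, include them on the threshold, e.g. 40°F)
      $temperature < 40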
  25. Have you ever wanted to scale calculations in Seeq across different assets without having to delve into external systems or write code to generate asset structures? Is your process data historian a giant pool of tags which you need to have organized and named in a human-readable format? Do you want to take advantage of Seeq features such as Asset Swapping and Treemaps, but do not have an existing asset structure to leverage? If the answer is yes, Asset Groups can help!

      Beginning in Seeq version R52, Asset Groups were added to configure collections of items such as equipment, operating lines, KPIs, etc. via a simple point-and-click tool. Users can leverage Asset Groups to easily organize and scale their analyses directly in Workbench, as well as apply Seeq asset-centric tools such as Treemaps and Tables across Assets.

      What is an Asset Group?
      An Asset Group is a collection of assets (listed in rows) and associated parameters called "Attributes" (listed in columns). If your assets share common parameters, Asset Groups can be a great way to organize and scale analyses instead of re-creating each analysis separately. Assets can be anything users want them to be: a piece of equipment, a geographical region, a business unit, a KPI, etc. Asset Groups serve to organize and map the associated parameters (Attributes) for each Asset in the group. Each Asset can have one or several Attributes mapped to it. Attributes are parameters that are common to all the assets and are mapped to tags from one or many data sources. Examples of Asset/Attribute combinations include:

      Asset            Attribute(s)
      Pump             Suction Pressure, Discharge Pressure, Flow, Curve ID, Specific Gravity
      Heat Exchanger   Cold Inlet T, Cold Outlet T, Hot Inlet T, Hot Outlet T, Surface Area
      Production Line  Active Alarms, Widgets per Hour, % of Time in Spec

      It's very important to configure the name of the common Attribute to be the same for all Assets, even if the underlying tag or datasource is not. Using standard nomenclature for Attributes (columns) enables Seeq to later compare and seamlessly "swap" between assets without having to worry about the underlying tag name or calculation.

      How to Configure Asset Groups in Seeq
      Let's create an Asset Group to organize a few process tags from different locations. While Asset Groups support pre-existing data tree structures (such as OSIsoft PI Asset Framework), the following example assumes the tags are not structured and are added manually from a pool of existing process tags.
      NOTE: Asset Groups require an Asset Group license. For versions prior to R54, they also have to be enabled on the Seeq Administrator Configuration page. Contact your Seeq Administrator for details.

      1) In the "Data" tab, create a new Asset Group.
      2) Specify the Asset Group name and add Assets. You can rename the assets by clicking on the respective name in the first column. In this case, we'll define Locations 1-3.
      3) Map the source tags:
         a. Rename "Column 1" by clicking on the text and entering a new name.
         b. Click on the (+) icon to bring up the search window and add the tag corresponding to each asset. You can use wildcards and/or regular expressions to narrow your search.
         c. Repeat mapping of the tags for the other assets until there's a green checkmark in each row.
         d. Additional source tags can be added by clicking on the "Add Column" button in the toolbar. In this case, we will add a column for Relative Humidity and map a tag for each of the Locations.
      4) Save the Asset Group.
      5) Trend using the newly created Asset Group. The newly created Asset Group will now be available in the Data pane and can be used for navigation and trending.
         a. Navigate to "Location 1" and add the items to the display pane by clicking on them. You can also change the display range to 7 days to show a bit more data.
         b. Notice the Assets column now listed in the Details pane showing from which Asset each Signal originates. We can also add the Asset Path to the display by clicking on Labels and checking the desired display configuration settings (Name, Unit of Measure, etc.).
         c. Swap to Location 2 (or 3) using the Asset Swapping functionality. In the Data tab, navigate up one level in the Asset Group, then click the swap icon to swap the display items to a different location. Notice how Seeq will automatically swap the display items.
      6) Create a "High Temperature" Condition. Calculations configured from Asset Group items will "follow" that asset, which can help in scaling analyses. Let's create a "High Temperature" condition:
         a. Using "Tools -> Identify -> Value Search", create a condition for when the Temperature exceeds 100.
         b. Click "Execute" to generate the Condition.
         c. Notice the condition has been generated and is automatically affiliated with the Asset from which the Signals were selected.
         d. Swap to a different Asset and notice the "High Temperature" Condition will swap using the same condition criteria, but with the signals from the swapped Asset.
         Note: Calculations can also be configured in the Asset Group directly, which can be advantageous if different condition criteria need to be defined for each asset. This topic is covered in Part 2 of this series.
      7) Create a Treemap. Asset Groups enable users to combine monitoring across assets using Seeq's Treemap functionality:
         a. Set up a Treemap for the Assets in the Group by switching to the Treemap view in the Seeq Workbench toolbar.
         b. Click on the color picker for the "High Temperature" condition to select a color to display when that condition is active in the given time range. (If you have more than one Condition in the Details pane, repeat this step for each Condition.)
         c. A Treemap is generated for each Asset in the Asset Group. Signal statistics can optionally be added by configuring the "Statistics" field in the toolbar. Your treemap may differ depending on the source signal and time range selected. The treemap will change color if the configured Condition triggers during the time period selected.

      This covers the basics of Asset Groups. Please check out Part 2 on how to configure calculations in Asset Groups and add them directly to the hierarchy.