
All Activity


  1. Yesterday
  2. Hi all, I am trying to use data from a tag that stores strings (e.g. 'ON', 'OFF', 'FFWD', etc.). I noticed that the UOM is incorrect and I believe that is why I can't pull this data to work on. Is there a way to ignore the UOM and grab the data from this tag as a list of strings vs. time?

     # Find the tag
     APC = spy.search({"Name": "/^tag$/", "Datasource Name": "source"})
     APC

     # Sample calculation
     spy.pull(APC)

     The spy.pull status table flags the StoredSignal (ID 0973736A-F4A7-4DC0-A8C2-758DC0E42B50) with the result:

     Seeq API Error: (503) Service Unavailable - INTERNAL: Client data source exception: Error processing signal request for e5358933-f0b8-11ed-83b3-64d69ab1de78/ba208ed7-859e-49ef-ba59-a822a6d1e2d6 with start: 2024-04-18T19:30:00Z and end: 2024-04-18T20:45:00Z. Cannot convert 'ON' to type Double using UOM 'percent'. Exception: InvalidCastException
     Error found at line 2 in cell 14.
  3. Hi John, thank you, it worked! I had to set a maximum duration; is there a way I can use the duration of my capsule instead of a fixed number?

     $hssp.aggregate(maxValue(), $hs.setMaximumDuration(40h), durationKey(), 0s)
  4. Your last formula is pretty close! You also need to include where you want to place the result of the aggregation: do you want it to display for the duration of the capsule, or maybe put it at the start or the end?

     $signal.aggregate(maxValue(), $condition, startKey()) // could replace startKey() with endKey(), middleKey(), durationKey(), maxKey() or minKey()

     This aggregation could also be performed using the Signal from Condition tool, which lets you pick your inputs in a point-and-click UI.
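For intuition, the aggregate-and-place idea above can be sketched outside Seeq with pandas. All timestamps, values, and capsule bounds below are made up for illustration; this is a rough analogue, not how Seeq computes it internally:

```python
import pandas as pd

# Hypothetical signal samples (timestamp -> value)
signal = pd.Series(
    [1.0, 5.0, 3.0, 8.0, 2.0],
    index=pd.to_datetime([
        "2024-01-01 00:00", "2024-01-01 01:00", "2024-01-01 02:00",
        "2024-01-01 03:00", "2024-01-01 04:00",
    ]),
)

# Hypothetical condition with two capsules
capsules = pd.DataFrame({
    "Capsule Start": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 03:00"]),
    "Capsule End":   pd.to_datetime(["2024-01-01 02:00", "2024-01-01 04:00"]),
})

# For each capsule, take the max value and place it at the capsule start,
# the rough analogue of aggregate(maxValue(), $condition, startKey())
rows = []
for _, cap in capsules.iterrows():
    window = signal[(signal.index >= cap["Capsule Start"]) &
                    (signal.index <= cap["Capsule End"])]
    rows.append((cap["Capsule Start"], window.max()))
result = pd.Series(dict(rows))
```

Placing the value at the capsule end instead (the endKey() case) would just mean keying each row by "Capsule End".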
  5. Here are two ways to calculate the difference between the signal values during the two capsules. I am assuming you have one signal and one condition that has two capsules.

     Using Reference Profile
     The first method uses the Model & Predict > Reference Profile tool to create a "duplicate" of the signal and overlay it during each capsule. Since you only have two capsules, you will effectively take the signal values during the first capsule and overlay them onto the second capsule. Then you can use Formula to calculate the difference between the original signal and the reference profile signal. The key for this approach is to set the training window so it only includes the first capsule. Make sure to set a gridding period appropriate for your signal, then select Average as the Reference Statistic (since only one capsule is in the training window, the average will equal the signal values during that one capsule). You should now have a "duplicate" of the signal from the first capsule during the time of the second capsule. Open Formula and calculate the difference: $originalSignal - $referenceProfileSignal. Note that reference profiles grid the result; you can increase or decrease the gridding resolution as needed.

     Using Formula Only
     This method uses the move() function to create a new signal that matches the signal values of the first capsule, moves those values to the second capsule, and then calculates the difference. The key for this approach is getting the duration between when the first capsule starts and when the second capsule starts. Let's assume the two capsules are less than 100 hours apart:

     // Adjust this value if the two capsule start times are more than 100 hours apart.
     // This value can be approximate but should be greater than the actual duration.
     $timeDelta = 100h
     // Create a signal that is the duration between the start of capsule 1 and the start of capsule 2
     $durationToMove = $condition.growEnd($timeDelta).removeLongerThan($timeDelta-1).toSignal('Duration')
     // Create a signal that moves the original signal by the duration between the capsule start times
     $movedSignal = $signal.move($durationToMove, $timeDelta).within($condition)
     // Calculate the difference between the original signal and the moved signal
     $signal - $movedSignal
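For intuition, the shift-and-subtract idea behind the Formula-only method can be sketched outside Seeq with pandas. The data is hypothetical and evenly gridded; the capsule positions and the 4-hour start-to-start delta are made up:

```python
import pandas as pd

# Hypothetical gridded signal covering both capsules
idx = pd.date_range("2024-01-01", periods=8, freq="h")
signal = pd.Series([10.0, 12.0, 11.0, 0.0, 9.0, 13.0, 10.0, 0.0], index=idx)

# Capsule 1 covers hours 0-2, capsule 2 covers hours 4-6;
# their start times are 4 hours apart
delta = pd.Timedelta(hours=4)

# Move the capsule-1 samples forward by the start-to-start duration,
# the rough analogue of $signal.move($durationToMove, $timeDelta)
moved = signal.loc[idx[0]:idx[2]].copy()
moved.index = moved.index + delta

# Difference between the original signal and the moved signal during capsule 2
difference = signal.loc[moved.index] - moved
```

As in the Formula approach, this only works cleanly when both periods are sampled on the same grid, which is why the reference-profile method applies gridding first.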
  6. I am trying to only show values of the maximum difference between my temperature setpoint and PV ($hssp) within a certain period ($hrs). I used the formula below to set up my capsule (hopefully using the right notation), but I am getting all the values during my condition range ($hrs, in which I identified which steps of the process to consider).

     $hssp.within($hrs).toCondition('Heater_Sag')

     I would like to ONLY show the maximum value seen during this condition range. I tried the following but got an error message, as it seems I am not using the proper notation:

     $hssp.maxValue($hssp.within($hrs).toCondition('Heater_Sag'))

     No variant of function 'maxValue' consumes the parameters (Signal, Condition) at 'maxValue', line=1, column=7

     I then tried the below and received another error message:

     $hd.aggregate(maxValue(), $hsis)

     No variant of function 'aggregate' consumes the parameters (Signal, Stat:Sample:Scalar, Condition) at 'aggregate', line=1, column=5
  7. Check out inserting with references, which will do what you're looking for:

     search_results = spy.search({'Name': '/Area [D,E,F]_Compressor Power/', 'Datasource Name': 'Example Data'}, order_by='Name')
     tree = spy.assets.Tree('My Tree')
     tree.insert(children=['Area D', 'Area E', 'Area F'])
     tree.insert(
         children=search_results,
         friendly_name='{{Name}}',
         parent='{{Name}(Area ?)_*}'
     )

     Results in a tree that looks like:

     My Tree
     |-- Area D
     |   |-- Area D_Compressor Power
     |-- Area E
     |   |-- Area E_Compressor Power
     |-- Area F
         |-- Area F_Compressor Power

     In most cases, though, you're going to want the leaf node to have a 'generic' name (i.e. Compressor Power) and use the context of the tree to tell you what area it belongs to. You can also accomplish this using references:

     search_results = spy.search({'Name': '/Area [D,E,F]_Compressor Power/', 'Datasource Name': 'Example Data'}, order_by='Name')
     tree = spy.assets.Tree('My Tree')
     tree.insert(children=['Area D', 'Area E', 'Area F'])
     tree.insert(
         children=search_results,
         friendly_name='{{Name}Area ?_(*)}',  # note the new inclusion of the capture group
         parent='{{Name}(Area ?)_*}'
     )
  8. Okay thanks. I think I'll try using a for loop to iterate through the search results. I've got a lot of tags to insert in the tree and I'm not looking forward to inserting them all manually.
  9. I have two pressure curves for two different periods (2 capsules). How can I calculate the difference between these two curves? I would like a graph showing the momentary difference between them: these are the pressure values of 2 batches, and I would like to see a difference signal.
  10. Hi MarkCov, you need to insert the signals into their respective parents one by one. For example:

     search_results = spy.search({'Name': 'Area D_Compressor Power', 'Datasource Name': 'Example Data'})
     my_tree.insert(children=search_results, friendly_name='Compressor Power', parent='Area D')
     my_tree.visualize()

     Then repeat the same to insert the other signals into their respective parents. Alternatively, you can use a CSV file to create your tree. You can read more here: https://python-docs.seeq.com/user_guide/Asset Trees 1 - Introduction.html#creating-a-tree-using-csv-files
     Let me know if this helps.
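If the tags follow a consistent '<Area>_<Signal>' naming pattern, the one-by-one inserts can also be driven by a loop. A rough sketch, assuming the search results carry the tag names in a 'Name' column; the split_name helper and the tag list are illustrative, not part of SPy, and the commented-out tree calls need a live Data Lab session:

```python
# Derive parent and friendly name from a tag like 'Area D_Compressor Power'
def split_name(tag_name):
    parent, _, friendly = tag_name.partition("_")
    return parent, friendly

# Hypothetical tag names, as they would appear in search_results['Name']
tag_names = ["Area D_Compressor Power",
             "Area E_Compressor Power",
             "Area F_Compressor Power"]

for tag in tag_names:
    parent, friendly = split_name(tag)
    # In Data Lab this would become:
    # row = search_results[search_results["Name"] == tag]
    # my_tree.insert(children=row, friendly_name=friendly, parent=parent)
    print(parent, "->", friendly)
```

The reference-based insert described elsewhere in this thread achieves the same result without an explicit loop, so this is mainly useful when the naming pattern is too irregular for a single capture group.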
  11. Last week
  12. I'm building an asset tree in SDL and I'm trying to import an ordered list of signals into multiple parents, one signal per parent. I've tried a couple variations on tree.insert and it doesn't seem to do what I want. tree = spy.assets.Tree('My Tree') tree.insert(children=['Area D','Area E','Area F']) search_results = spy.search(query = {'Name': '/Area [D,E,F]_Compressor Power/'}, order_by = 'Name') tree.insert(children= search_results, friendly_name = 'Power', parent = 'Area ?') tree.visualize() My Tree |-- Area D | |-- Power (is actually Area D_Compressor Power) |-- Area E | |-- Power (is actually Area D_Compressor Power) |-- Area F |-- Power (is actually Area D_Compressor Power) This appears to do what I want, but each "Power" is actually the same tag. I tried removing the friendly_name parameter next. tree2 = spy.assets.Tree('My Tree') tree2.insert(children=['Area D','Area E','Area F']) search_results2 = spy.search(query = {'Name': '/Area [D,E,F]_Compressor Power/'}, order_by = 'Name') tree2.insert(children= search_results2, parent = 'Area ?') tree2.visualize() My Tree |-- Area D | |-- Area D_Compressor Power | |-- Area E_Compressor Power | |-- Area F_Compressor Power |-- Area E | |-- Area D_Compressor Power | |-- Area E_Compressor Power | |-- Area F_Compressor Power |-- Area F |-- Area D_Compressor Power |-- Area E_Compressor Power |-- Area F_Compressor Power Now that's too many signals. If I have an ordered list of signals that I pull from spy.search, how can I insert one per parent? My goal is a tree that looks like the one below. I'm hoping there's a method other than manual insertion or CSV import My Tree |-- Area D | |-- Area D_Compressor Power |-- Area E | |-- Area E_Compressor Power |-- Area F |-- Area F_Compressor Power Thanks!
  13. Hello, this sounds like a good use case for Seeq. I suggest you sign up for an upcoming Office Hours time slot: Seeq Office Hours In the Office Hours session, you can share your screen and an Analytics Engineer can assist with these questions.
  14. Question: How to calculate overall Batch duration for Mn (M1, M2) and Fn (F1, F2, F3) Stages? We are currently facing a challenge in calculating the overall running duration for both Mn and Fn stages within each batch. Presently, we are only able to link the running durations of the two stages by the duration of the Mn - Fn Transfer (when the material flows from Mn to Fn). This results in the running durations of each stage being displayed separately on the same Lane, as shown in the below image. However, we are seeking a method to intuitively represent the overall running duration of both stages as a single batch. (Simply merging the durations of the two stages would lead to issues in batch determination, specifically when the running durations of different batches have overlapping time periods, causing them to be connected.) We would greatly appreciate any insights or suggestions on how to effectively address this issue and accurately represent the overall running duration of Mn and Fn stages within each batch. Thank you in advance for your help and input.
  15. Hi Onkar, our certificates don't include a credential ID, just the certificate with the person's name on it. For your second question: after checking with our training team, you have already completed all required courses within the path. The only way to reach 100% completion is to also take the optional courses within the path, but that is not required.
  16. Thank you - I can confirm Chiragasourabh's solution removing the Replace statement from my push worked. I have not tried Kin How's solution, yet.
  17. As a workaround, you can use spy.push without the replace parameter to archive the conditions:

     condition_df['Archived'] = True
     spy.push(metadata=condition_df, workbook=workbook_id)
  18. Earlier
  19. You can try archiving the conditions using the API. The code below should archive the conditions in your worksheet.

     from seeq import sdk
     import pandas as pd

     # Set up the SDK endpoint
     items_api = sdk.ItemsApi(spy.client)
     worksheet_url = 'worksheet_url'
     condition_df = spy.search(worksheet_url)
     for id in condition_df['ID']:
         items_api.archive_item(id=id)
  20. The dotted border and lighter color indicate uncertainty. We are still working on a new version to ensure consistency between uncertain and certain data trends.
  21. Why are the months shown with a dotted border and lighter color starting in February of this year? I want the entire trend to look the same.
  22. Has anyone tried to code a Peacock test to check whether multidimensional multivariate surfaces are the same? This would be akin to ANOVA and t, z, or KS tests.
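I am not aware of a built-in Peacock test in SciPy. As a starting point, here is a minimal, uncalibrated sketch of the 2-D two-sample statistic in the spirit of Peacock and Fasano-Franceschini: evaluate each sample's empirical fraction in the four quadrants around every data point and take the maximum absolute difference. It returns only the statistic; obtaining a p-value would require permutation resampling or published critical values:

```python
import numpy as np

def ks2d_statistic(a, b):
    """Two-sample 2-D KS-style statistic: max difference between the
    samples' empirical quadrant fractions over all points of both
    samples. a, b: arrays of shape (n, 2)."""
    pts = np.vstack([a, b])
    d = 0.0
    for x, y in pts:
        # The four quadrants around (x, y), encoded by sign pairs
        for sx, sy in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
            fa = np.mean((sx * (a[:, 0] - x) >= 0) & (sy * (a[:, 1] - y) >= 0))
            fb = np.mean((sx * (b[:, 0] - x) >= 0) & (sy * (b[:, 1] - y) >= 0))
            d = max(d, abs(fa - fb))
    return d

# Quick sanity check on synthetic data
rng = np.random.default_rng(0)
same = rng.normal(size=(50, 2))
stat_same = ks2d_statistic(same, same)        # identical samples
stat_diff = ks2d_statistic(same, same + 10.0) # strongly shifted samples
```

This is O(n^2) per quadrant, so for large samples you would want a smarter implementation, but it shows the structure of the test.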
  23. I have used DataLab to create and push Conditions in a Workbook based on signal properties. I am not pushing these in as formulas, but rather using Start and End times calculated in DataLab. In developing my code, I've ended up with multiple Conditions with the same name. I can ultimately find the correct one that I pushed, but I would like to "Delete" or "Archive" the incorrect Conditions. I have been able to achieve this with Signals by setting the Archived parameter to True, and I would like to do the equivalent for my Conditions. My attempt, which does not seem to be working, is as follows:

     condition_df = spy.search('worksheet_url')
     condition_df.set_index('Name', inplace=True, drop=False)
     condition_df['Maximum Duration'] = '1mo'
     condition_df['Archived'] = True
     spy.push(metadata=condition_df, replace={'Start': start_time, 'End': end_time}, workbook=workbook_id)

     This deletes all capsules in my date range, but if I search for my conditions in the Data tab, the attempt to archive them does not seem to have worked.
  24. I completed my Seeq Super User Test and got the certificate. But that doesn't seem to have any credential ID. Are the certificates just like that or do I need to do anything else? Additional info: I have taken the Data Science learning path. My learning progress is at 67%, as I didn't go for the instructor-led ones, but the e-learning modules.
  25. Thank you, Mark. Read through the docs but missed that paragraph when reading through. Knowing that it was in there helped me find it - and your solution sped up the data push, as expected.
  26. Hi Ryan, in recent versions of SPy (use v190.2 or later) you can supply a "Condition" column in your "data" DataFrame that corresponds to an index entry in your "metadata" DataFrame; this combination allows you to push multiple conditions in a single spy.push() call. It should be faster because SPy will push the conditions in parallel. Take a look at the docs here for more info: https://python-docs.seeq.com/user_guide/spy.push.html#pushing-condition-and-capsule-data
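To make the pairing concrete, here is a sketch (with made-up capsule bounds and unit names) of how a combined data DataFrame's 'Condition' column can line up with the index of the metadata DataFrame. The final spy.push call is shown but commented out, since it needs a live Seeq connection:

```python
import pandas as pd

# Hypothetical capsule bounds for two units
unit_capsules = {
    "Unit 0 Condition": pd.DataFrame({
        "Capsule Start": pd.to_datetime(["2024-01-01 00:00"]),
        "Capsule End":   pd.to_datetime(["2024-01-01 02:00"]),
    }),
    "Unit 1 Condition": pd.DataFrame({
        "Capsule Start": pd.to_datetime(["2024-01-01 01:00"]),
        "Capsule End":   pd.to_datetime(["2024-01-01 03:00"]),
    }),
}

# One combined data frame: the 'Condition' column names the target condition
frames = []
for name, caps in unit_capsules.items():
    caps = caps.copy()
    caps["Condition"] = name
    frames.append(caps)
capsule_data = pd.concat(frames, ignore_index=True)

# One metadata row per condition, indexed by the same names the
# 'Condition' column uses
metadata = pd.DataFrame(
    [{"Name": name, "Type": "Condition", "Maximum Duration": "1d"}
     for name in unit_capsules],
    index=list(unit_capsules),
)

# Single parallel push (requires SPy v190.2 or later):
# spy.push(data=capsule_data, metadata=metadata, workbook=workbook_id)
```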
  27. I am using Seeq DataLab to pull in data from multiple processing units running in parallel. I am assessing the states of each of these units and building a set of capsules for when each individual unit is running. I then push these capsules as a condition back into a workbook. I am able to achieve this using a for loop with a separate data push for each unit. The push back to the Workbook takes the longest, so I was curious whether I could do one large push of Conditions for all unit ops at once, rather than a separate push for each unit. I have done this with multiple Signals in the past, but cannot find documentation for a way to do it with multiple Conditions. Below is a condensed version of the code I am currently running using the for loop:

     for index in range(len(state)):
         ...
         capsule_bounds = pd.DataFrame({'Capsule Start': start_list, 'Capsule End': end_list, 'Batch ID': batch_id_list})
         spy.push(
             data=capsule_bounds,
             metadata=pd.DataFrame([{
                 'Name': 'Unit ' + str(index) + ' Condition',
                 'Type': 'Condition',
                 'Maximum Duration': max_dur}]),
             replace={'Start': start_time, 'End': end_time},
             workbook=workbook_id)