All Activity

  1. Past hour
  2. Check out inserting with references, which will do what you're looking for:

```python
search_results = spy.search({'Name': '/Area [D,E,F]_Compressor Power/',
                             'Datasource Name': 'Example Data'}, order_by='Name')
tree = spy.assets.Tree('My Tree')
tree.insert(children=['Area D', 'Area E', 'Area F'])
tree.insert(
    children=search_results,
    friendly_name='{{Name}}',
    parent='{{Name}(Area ?)_*}'
)
```

This results in a tree that looks like:

```
My Tree
|-- Area D
|   |-- Area D_Compressor Power
|-- Area E
|   |-- Area E_Compressor Power
|-- Area F
    |-- Area F_Compressor Power
```

In most cases, though, you're going to want the leaf node to have a generic name (i.e. "Compressor Power") and use the context of the tree to tell you which area it belongs to. You can also accomplish this using references:

```python
search_results = spy.search({'Name': '/Area [D,E,F]_Compressor Power/',
                             'Datasource Name': 'Example Data'}, order_by='Name')
tree = spy.assets.Tree('My Tree')
tree.insert(children=['Area D', 'Area E', 'Area F'])
tree.insert(
    children=search_results,
    friendly_name='{{Name}Area ?_(*)}',  # note the new inclusion of the capture group
    parent='{{Name}(Area ?)_*}'
)
```
  3. Today
  4. Okay thanks. I think I'll try using a for loop to iterate through the search results. I've got a lot of tags to insert in the tree and I'm not looking forward to inserting them all manually.
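For reference, the loop approach mentioned above can be sketched as follows. This is a minimal sketch, not a confirmed solution: the `parent_for` helper is my own name, and it assumes the naming convention visible in the tags above, where each signal's name starts with its parent asset's name (e.g. `Area D_Compressor Power`).

```python
def parent_for(signal_name: str) -> str:
    """Derive the parent asset name from a signal name like
    'Area D_Compressor Power' -> 'Area D' (naming convention assumed)."""
    return signal_name.split('_', 1)[0]

# Hypothetical usage inside Data Lab (spy assumed available):
# tree = spy.assets.Tree('My Tree')
# tree.insert(children=['Area D', 'Area E', 'Area F'])
# search_results = spy.search({'Name': '/Area [D,E,F]_Compressor Power/'}, order_by='Name')
# for _, row in search_results.iterrows():
#     tree.insert(children=search_results[search_results['Name'] == row['Name']],
#                 friendly_name='Compressor Power',
#                 parent=parent_for(row['Name']))
```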
  5. I have two pressure curves from two different periods (2 capsules). How can I calculate the difference between these two curves? I want a graph where I can see the momentary differences between them: this is the pressure value of 2 batches, and I would like to see a difference signal.
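Outside of Seeq Formula, one way to sketch this in Data Lab is to pull each capsule's data into its own DataFrame (e.g. with `spy.pull`), re-index both on elapsed time from each batch's start, and subtract. This is a minimal sketch under assumptions: `batch_difference` and the `'Pressure'` column are hypothetical names, and each DataFrame is assumed to have a DatetimeIndex.

```python
import pandas as pd

def batch_difference(df_a: pd.DataFrame, df_b: pd.DataFrame, column: str) -> pd.Series:
    """Align two batch trends on elapsed time from each batch's start,
    then subtract them sample by sample."""
    def to_elapsed(df):
        s = df[column].copy()
        # Re-index on seconds into the batch so the two batches line up
        s.index = (df.index - df.index[0]).total_seconds()
        return s
    a, b = to_elapsed(df_a), to_elapsed(df_b)
    # Interpolate batch B onto batch A's elapsed-time grid before subtracting
    b_on_a = b.reindex(a.index.union(b.index)).interpolate(method='index').reindex(a.index)
    return a - b_on_a
```

The result could then be pushed back as the difference signal.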
  6. Hi MarkCov, You need to insert each signal into its respective parent one by one. For example:

```python
search_results = spy.search({'Name': 'Area D_Compressor Power',
                             'Datasource Name': 'Example Data'})
my_tree.insert(children=search_results, friendly_name='Compressor Power', parent='Area D')
my_tree.visualize()
```

Then repeat the same steps to insert the other signals into their respective parents. Another alternative is to use a CSV file to create your tree. You can read more here: https://python-docs.seeq.com/user_guide/Asset Trees 1 - Introduction.html#creating-a-tree-using-csv-files Let me know if this helps.
  7. Yesterday
  8. I'm building an asset tree in SDL and I'm trying to import an ordered list of signals into multiple parents, one signal per parent. I've tried a couple of variations on tree.insert and it doesn't seem to do what I want.

```python
tree = spy.assets.Tree('My Tree')
tree.insert(children=['Area D', 'Area E', 'Area F'])
search_results = spy.search(query={'Name': '/Area [D,E,F]_Compressor Power/'}, order_by='Name')
tree.insert(children=search_results, friendly_name='Power', parent='Area ?')
tree.visualize()
```

```
My Tree
|-- Area D
|   |-- Power (is actually Area D_Compressor Power)
|-- Area E
|   |-- Power (is actually Area D_Compressor Power)
|-- Area F
    |-- Power (is actually Area D_Compressor Power)
```

This appears to do what I want, but each "Power" is actually the same tag. I tried removing the friendly_name parameter next.

```python
tree2 = spy.assets.Tree('My Tree')
tree2.insert(children=['Area D', 'Area E', 'Area F'])
search_results2 = spy.search(query={'Name': '/Area [D,E,F]_Compressor Power/'}, order_by='Name')
tree2.insert(children=search_results2, parent='Area ?')
tree2.visualize()
```

```
My Tree
|-- Area D
|   |-- Area D_Compressor Power
|   |-- Area E_Compressor Power
|   |-- Area F_Compressor Power
|-- Area E
|   |-- Area D_Compressor Power
|   |-- Area E_Compressor Power
|   |-- Area F_Compressor Power
|-- Area F
    |-- Area D_Compressor Power
    |-- Area E_Compressor Power
    |-- Area F_Compressor Power
```

Now that's too many signals. If I have an ordered list of signals that I pull from spy.search, how can I insert one per parent? My goal is a tree that looks like the one below. I'm hoping there's a method other than manual insertion or CSV import.

```
My Tree
|-- Area D
|   |-- Area D_Compressor Power
|-- Area E
|   |-- Area E_Compressor Power
|-- Area F
    |-- Area F_Compressor Power
```

Thanks!
  9. Hello, this sounds like a good use case for Seeq. I suggest you sign up for an upcoming Office Hours time slot: Seeq Office Hours In the Office Hours session, you can share your screen and an Analytics Engineer can assist with these questions.
  10. Question: How to calculate the overall batch duration for the Mn (M1, M2) and Fn (F1, F2, F3) stages? We are currently facing a challenge in calculating the overall running duration of both the Mn and Fn stages within each batch. Presently, we are only able to link the running durations of the two stages by the duration of the Mn - Fn transfer (when the material flows from Mn to Fn). This results in the running durations of each stage being displayed separately on the same Lane, as shown in the image below.

However, we are seeking a method to intuitively represent the overall running duration of both stages as a single batch. (Simply merging the durations of the two stages would lead to issues in batch determination, specifically when the running durations of different batches have overlapping time periods, causing them to be connected.)

We would greatly appreciate any insights or suggestions on how to effectively address this issue and accurately represent the overall running duration of the Mn and Fn stages within each batch. Thank you in advance for your help and input.
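If each stage capsule carries a batch identifier property, one way to avoid time-based merging (and the overlapping-batch problem described above) is to union the stage intervals per batch ID instead. A minimal sketch in Python, outside Seeq Formula; the `(batch_id, start, end)` tuple shape and the `batch_spans` name are assumptions for illustration only:

```python
def batch_spans(capsules):
    """Given stage capsules as (batch_id, start, end) tuples, return one
    overall (start, end) span per batch: the earliest start of any stage
    to the latest end of any stage, keyed by batch id."""
    spans = {}
    for batch_id, start, end in capsules:
        if batch_id in spans:
            s, e = spans[batch_id]
            spans[batch_id] = (min(s, start), max(e, end))
        else:
            spans[batch_id] = (start, end)
    return spans
```

Because the grouping key is the batch ID rather than time adjacency, batches whose running periods overlap remain separate spans instead of being connected.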
  11. Last week
  12. Hi Onkar, Our certificates don't include a credential ID, just the certificate with the person's name on it. As for your second question: after checking with our training team, you have already completed all the required courses within the path. The only way to reach 100% completion is to also take the optional courses in the path, but they are not required.
  13. Thank you - I can confirm that Chiragasourabh's solution of removing the replace parameter from my push worked. I have not yet tried Kin How's solution.
  14. As a workaround, you can use spy.push without the replace parameter to archive the conditions:

```python
condition_df['Archived'] = True
spy.push(metadata=condition_df, workbook=workbook_id)
```
  15. You can try archiving the conditions using the API. The code below should archive the conditions in your worksheet:

```python
from seeq import sdk
import pandas as pd

# Set up the SDK endpoint
items_api = sdk.ItemsApi(spy.client)

worksheet_url = 'worksheet_url'
condition_df = spy.search(worksheet_url)
for id in condition_df['ID']:
    items_api.archive_item(id=id)
```
  16. The dotted border and lighter color indicate uncertainty. We are still working on a new version to ensure consistency between uncertain and certain data trends.
  17. Why are the months shown with a dotted border and lighter color starting in February of this year? I want the entire trend to look the same.
  18. Has anyone tried to code a Peacock test to check whether multidimensional multivariate surfaces are the same? This would be akin to ANOVA and t, z, or KS tests.
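For what it's worth, a 2-D two-sample KS-style statistic can be sketched in Python. This evaluates the empirical-CDF difference at the pooled sample points over all four quadrant orientations, which is closer to the Fasano-Franceschini variant of Peacock's test than to Peacock's full grid; the function name and array shapes are my own, and no significance calibration is included:

```python
import numpy as np

def peacock_statistic(a: np.ndarray, b: np.ndarray) -> float:
    """Two-sample 2-D KS-style statistic: the largest difference between
    the empirical CDFs of samples a and b (each of shape (n, 2)),
    maximized over the four quadrant orientations at every pooled point."""
    pts = np.vstack([a, b])
    d = 0.0
    for x, y in pts:
        for sx in (1.0, -1.0):
            for sy in (1.0, -1.0):
                # Fraction of each sample falling in this quadrant at (x, y)
                fa = np.mean((sx * a[:, 0] <= sx * x) & (sy * a[:, 1] <= sy * y))
                fb = np.mean((sx * b[:, 0] <= sx * x) & (sy * b[:, 1] <= sy * y))
                d = max(d, abs(fa - fb))
    return float(d)
```

Identical samples give 0 and well-separated samples approach 1; for p-values you would still need a permutation scheme or published critical-value tables.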
  19. Earlier
  20. I have used DataLab to create and push Conditions in a Workbook based on signal properties. I am not pushing them in as a formula, but rather using Start and End times calculated in DataLab. In developing my code, I've ended up with multiple Conditions with the same name. I can ultimately find the correct one that I pushed in, but would like to "Delete" or "Archive" the incorrect Conditions. I have been able to achieve this with Signals by setting the Archived parameter to True, and I would like to do the equivalent for my Conditions. My attempt, which does not seem to be working, is as follows:

```python
condition_df = spy.search('worksheet_url')
condition_df.set_index('Name', inplace=True, drop=False)
condition_df['Maximum Duration'] = '1mo'
condition_df['Archived'] = True
spy.push(metadata=condition_df,
         replace={'Start': start_time, 'End': end_time},
         workbook=workbook_id)
```

This deletes all capsules in my date range, but if I search for my Conditions in the Data tab, the attempt to archive them does not seem to have worked.
  21. I completed my Seeq Super User Test and got the certificate. But that doesn't seem to have any credential ID. Are the certificates just like that or do I need to do anything else? Additional info: I have taken the Data Science learning path. My learning progress is at 67%, as I didn't go for the instructor-led ones, but the e-learning modules.
  22. Thank you, Mark. I read through the docs but missed that paragraph. Knowing that it was in there helped me find it - and your solution sped up the data push, as expected.
  23. Hi Ryan, In the latest versions of SPy (use v190.2 or later) you can supply a "Condition" column in your "data" DataFrame that corresponds to an index entry in your "metadata" DataFrame, and this combination will allow you to push multiple conditions in a single spy.push() call. It should be faster because SPy will push the conditions in parallel. Take a look at the docs here for more info: https://python-docs.seeq.com/user_guide/spy.push.html#pushing-condition-and-capsule-data
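As a sketch of the shape this expects (the unit names, dates, and maximum duration here are made up for illustration; check the exact column and index expectations against the linked docs):

```python
import pandas as pd

# Build one capsule table for all units; the 'Condition' column ties each
# capsule row to an index entry in the metadata DataFrame.
data = pd.DataFrame({
    'Condition': ['Unit 1 Condition', 'Unit 1 Condition', 'Unit 2 Condition'],
    'Capsule Start': pd.to_datetime(['2023-01-01', '2023-01-03', '2023-01-02']),
    'Capsule End':   pd.to_datetime(['2023-01-02', '2023-01-04', '2023-01-03']),
})

# One metadata row per condition, indexed by the same names used in 'Condition'
metadata = pd.DataFrame(
    [{'Name': name, 'Type': 'Condition', 'Maximum Duration': '7d'}
     for name in data['Condition'].unique()]
).set_index('Name', drop=False)

# Hypothetical push (Data Lab only):
# spy.push(data=data, metadata=metadata, workbook=workbook_id)
```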
  24. I am using Seeq DataLab to pull in data from multiple processing units running in parallel. I am assessing the states of each of these units and building a set of capsules for when each individual unit is running. I then push these capsules as a Condition back into a Workbook. I am able to achieve this successfully using a for loop with a separate data push for each unit. I find that pushing the data back to the Workbook takes the longest, and was curious whether I could do one large push of Conditions for all units at once, rather than a separate push for each unit. I have done this for multiple Signals in the past, but cannot find documentation for a way to do it with multiple Conditions. Below is a condensed version of the code I am currently running with the for loop:

```python
for index in range(len(state)):
    ...
    capsule_bounds = pd.DataFrame({'Capsule Start': start_list,
                                   'Capsule End': end_list,
                                   'Batch ID': batch_id_list})
    spy.push(data=capsule_bounds,
             metadata=pd.DataFrame([{
                 'Name': f'Unit {index} Condition',
                 'Type': 'Condition',
                 'Maximum Duration': max_dur}]),
             replace={'Start': start_time, 'End': end_time},
             workbook=workbook_id)
```
  25. Hi Trevor, Here's a topic that covers how to hold the last value from infrequently updated signals. Let me know if you have any other questions.
  26. I have multiple signals for pump speeds. These speeds can remain constant for weeks at a time. I'm trying to add them together, but because the values don't change, Seeq states that there is no data in the current range (when viewing a variable individually, it uses a dotted line). Is there a way to carry forward the last value from when a signal changed, so I can add it to the other signals?
  27. Thank you. My confusion is that I'm following along line by line in the training video and the formulas are identical. I didn't see any instructions in the training to use the "toCondition" function. I'll give it a try.
  28. Hello, Be careful about two things. When you create the condition, make sure you name your property correctly: $signal.toCondition('Grade Code'). Then pay attention to single versus double quotes: $condition.keep('Grade Code', isEqualTo('Grade 106')). Let me know if this is helpful.
  29. My keep function in advanced training does not work and only returns ZERO. Below is the code and the image.

```
$Grade106 = $gcwp.keep('Grade Code', isEqualTo('Grade 106'))
$Grade107 = $gcwp.keep("Grade Code", isEqualTo("Grade 107"))
$Grade108 = $gcwp.keep("Grade Code", isEqualTo("Grade 108"))
0.splice(0.6,$Grade106).splice(1.15,$Grade107).splice(1.75,$Grade108)
```