
Emilio Conde

Seeq Team
Everything posted by Emilio Conde

  1. Hey @achnaf, am I understanding correctly that you are making a spy.Template from a source workbook, then modifying the template in a script and attempting to push it back to the same (original) source workbook? Can you provide any snippets of code that you're using, particularly for the push?
  2. Hi Mohamed, Using an Access Key will allow authentication for SSO profiles. Below is the Knowledge Base article that walks you through creating one, along with the SPy documentation showing how to implement it. Knowledge Base: https://support.seeq.com/kb/latest/cloud/working-with-access-keys SPy Docs: https://python-docs.seeq.com/user_guide/spy.login.html#access-keys
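As a minimal sketch of what the login step can look like in practice (the environment-variable names below are my own convention for illustration, not Seeq's; the actual `spy.login` call is shown commented since it requires a live server):

```python
import os

# Hypothetical helper: read Access Key credentials from environment variables.
# The variable names are illustrative only, not a Seeq convention.
def access_key_credentials():
    return {
        'access_key': os.environ.get('SEEQ_ACCESS_KEY', ''),
        'password': os.environ.get('SEEQ_ACCESS_KEY_PASSWORD', ''),
    }

creds = access_key_credentials()
# Inside Data Lab, spy is already available; the login would then be:
# spy.login(url='https://yourserver.seeq.com', **creds)
```

Storing the key and password outside the notebook keeps credentials out of shared scripts.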
  3. I agree with Nuraisyah on being able to utilize Custom Labels, though depending on the scale, this may not be very efficient. Also, if you move the trend items around to different lanes, the Custom Labels will not follow the signals. Another approach you can consider, directly addressing your question ("is the only other option to include the percentage type in the signal description?"), is to add the unit directly to the name of the item in parentheses when creating it in Workbench or in Data Lab. It's not the most ideal approach, but at least the unit is directly assigned to the item and is very findable/obvious. An example of this is shown in the spy.push example notebook, but as I mentioned, Data Lab is not required; you can add essentially anything to the Name of an item even in Workbench:
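For instance, a hedged sketch of what a unit-in-name item might look like when pushed from Data Lab (the item name and properties here are made up for illustration; only the pandas part runs outside Seeq):

```python
import pandas as pd

# Hypothetical metadata row for spy.push
metadata = pd.DataFrame([{
    'Name': 'Reactor Yield (%)',          # unit embedded directly in the item name
    'Type': 'Signal',
    'Interpolation Method': 'linear',
}])

# spy.push(metadata=metadata)  # would create/update the item in Seeq
print(metadata.loc[0, 'Name'])  # prints: Reactor Yield (%)
```

The same naming convention applies when renaming an item manually in Workbench.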
  4. Thanks for the context, John. Unfortunately, Organizer currently only supports basic HTML, not the CSS/HTML5 generated by the .highlight_max() or .highlight_null() operators. With that said, I recommend you send us a support ticket here requesting this functionality so that we can link it to an existing feature request and automatically notify you once this capability is implemented in a future version of Seeq. Working with ChatGPT, I was able to get the basic-HTML equivalent applied to the df in your example, and have verified it pushes to Organizer as expected. Of course, another approach could be to create an image out of your HTML and just push the image; there are other posts on this forum that discuss that functionality. See the working example below:

```python
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup

organizer_id = '7AC3EAA0-6429-46D5-88E3-57ADC6AA1ED7'

df = pd.DataFrame({
    "A": [0, -5, 12, -4, 3],
    "B": [12.24, 3.14, 2.71, -3.14, np.nan],
    "C": [0.5, 1.2, 0.3, 1.9, 2.2],
    "D": [2000, np.nan, 1000, 7000, 5000]
})

# Basic HTML to highlight a cell
def highlight_cell_bg(val, color=None):
    # If there's a color specified, apply it as the background
    if color:
        return 'background-color: {}'.format(color)
    return ""

# Highlight the max value in a column yellow
def highlight_max_bg(series):
    # Remove non-numeric values and compute the max
    numeric_vals = series[pd.to_numeric(series, errors='coerce').notnull()]
    if not numeric_vals.empty:
        max_val = numeric_vals.max()
        return [highlight_cell_bg(val, 'yellow') if val == max_val else "" for val in series]
    return [""] * len(series)

# Highlight 'Missing' row values red
def highlight_missing_bg(val):
    if val == 'Missing':
        return highlight_cell_bg(val, 'red')
    return ""

# Replace NaN values with "Missing" and highlight them
df.replace({np.nan: 'Missing'}, inplace=True)
missing_bg = df.applymap(highlight_missing_bg)

# Highlight the maximum values in each column
max_bg = df.apply(highlight_max_bg)

# Merge the two background styles
final_bg = missing_bg.where(missing_bg != "", max_bg)

# Convert DataFrame to HTML without additional formatting
raw_html = df.to_html(escape=False, header=True, index=False)

# Parse the HTML using BeautifulSoup
soup = BeautifulSoup(raw_html, 'html.parser')

# Iterate through each cell in the table and apply styles
for row in soup.findAll("tr"):
    for col_name, cell in zip(df.columns, row.findAll("td")):
        if cell.text in df.columns:
            continue  # Skip headers
        # Convert dataframe values to string for comparison
        idx = df[col_name].astype(str).tolist().index(cell.text)
        style = final_bg[col_name].iloc[idx]
        if style:
            cell["style"] = style

# Convert the modified HTML back to a string
html = str(soup)

# Find the topic you want
topic_search = spy.workbooks.search({'ID': organizer_id})

# Pull in the topic
topic = spy.workbooks.pull(topic_search, include_referenced_workbooks=False)[0]

# Create a new sheet
topic.document('New Sheet')

# Modify the html of the new sheet with the styled df html
topic.worksheets[-1].html = html

# Push your changes
organizer_push = spy.workbooks.push(topic)
```
  5. I see... What version of Seeq is your server on? And for my own testing, what's the styling you're applying to df? My simple test just created clickable hyperlinked text in df.
  6. Hi John, Have you tried something similar to below?

```python
# Convert styled df to html
html = df.style.to_html()

# Find the topic you want
topic_search = spy.workbooks.search({'ID': organizer_id})

# Pull in the topic
topic = spy.workbooks.pull(topic_search, include_referenced_workbooks=False)[0]

# Create a new sheet
topic.document('Sheet Name')

# Modify the html of the new sheet with the styled df html
topic.worksheets[-1].html = html

# Push your changes
organizer_push = spy.workbooks.push(topic)
```
  7. Hello, The Path and Asset should be returned if these properties exist for what is being searched. all_properties = True is not required to get these properties.
  8. Fantastic, @Manoel Janio Borralho dos Santos! Great job using the SPy documentation examples as a reference for this use-case. Glad this worked out well for you.
  9. @Manoel Janio Borralho dos Santos my apologies, I forgot to mention that you need to upgrade your SPy to the latest version for this approach. Run spy.upgrade() in a cell, then restart your kernel and try again. The template will need to be recreated on the latest version of SPy; then see if this newly created template shows the images in the document.
  10. One thing to add to this is that SPy has been upgraded since this post was made to allow more options for templatizing content, including Organizers with images! Take a look at the documentation for Templates with images here, and the base documentation for Templates here. This offers a different approach to replacing images in Organizer that doesn't require dealing directly with the HTML.
  11. There are times when you may need to calculate a standard deviation across a time-range using the data within a number of signals. Consider the below example. When a calculation like this is meaningful/important, the straightforward options in Seeq may not be mathematically representative for calculating a comprehensive standard deviation. These straightforward options include:

- Take a daily standard deviation for each signal, then average these standard deviations
- Take a daily standard deviation for each signal, then take the standard deviation of the standard deviations
- Create a real-time standard deviation signal (using stddev($signal1, $signal2, ... , $signalN)), then take the daily average or standard deviation of this signal

While straightforward options may be OK for many statistics (max of maxes, average of averages, sum of totalizes, etc.), a time-weighted standard deviation across multiple signals presents an interesting challenge. This post will detail methods to achieve this type of calculation by time-warping the data from each signal, then combining each individually warped signal into a single signal. Similar methods are also discussed in the following two seeq.org posts:

Two different methods to arrive at the same outcome will be explored. Both of these methods share the same Steps 1 & 2.

Step 1: Gather Signals of Interest
This example will consider 4 signals. The same methods can be used for more signals, but note that implementing this solution programmatically via Data Lab may be more efficient when considering a high number of signals (>20-30).

Step 2: Create Important Scalar Constants and Condition
- Number of Signals: The number of signals to be considered. 4 in this case.
- Un-Warped Interval: The interval over which you are interested in calculating a standard deviation (I am interested in a daily standard deviation, so I entered 1d).
- Warped Interval: A ratio calculation of Un-Warped Interval / Number of Signals. This defines the new time-range for the time-warped signals. I.e., given I have 4 signals each considering a day's worth of data, each signal's day of data will be warped into a 6 hour interval.
- Un-Warped Periods: This creates a condition with capsules spanning the original periods of interest.

```
periods($unwarped_interval)
```

Method 1: Create ONE Time-Shift Signal, and move the output Warped Signals
The Time Shift Signal will be used as a counter to condense the data in the period of interest (1 day for this example) down to the warped interval (6 hours for this example).

```
0-timeSince($unwarped_period, 1s)*(1-1/$num_of_signals)
```

The next step is to use this Time Shift Signal to move the data within each signal. Note there is an integer in this formula that steps with each signal it is applied to. Details can be viewed in the screenshots.

```
$area_a.move($time_shift_signal, $unwarped_interval).setMaxInterpolation($warped_interval).move(0*$warped_interval)
```

The last step is to combine each of these warped signals together. We now have a Combined Output that can be used as an input into a Daily Standard Deviation that will represent the time-weighted standard deviation across all 4 signals within that day.

Method 2: Create a Time-Shift Signal per Signal - No Need to move the output Warped Signals
This method takes advantage of 4 time-shift signals, one per signal. Note there is also an integer in this formula that steps with each signal it is applied to. Details can be viewed in the screenshot. These signals take care of the data placement, whereas the data placement was taken care of using .move(N*$warped_interval) above.

```
0*$warped_interval-timeSince($unwarped_period, 1s)*(1-1/$num_of_signals)
```

We can then follow Method 1 to use the time shift signals to arrange our signals. We just need to be careful to use each time shift signal, as opposed to the single time shift signal that was created in Method 1. As mentioned above, there is no longer a .move(N*$warped_interval) needed at the end of this formula.

```
$area_a.move($time_shift_1, $unwarped_interval).setMaxInterpolation($warped_interval)
```

The last step is to combine each of these warped signals together, similar to Method 1.

Comparing Method 1 and Method 2 & Calculation Outputs
The below screenshot shows how Methods 1 & 2 arrive at the same output. Note the difference in calculated values. The methods reviewed in this post most closely capture the true time-weighted standard deviation per day across the 4 signals.

Caveats and Final Thoughts
While this method is the most mathematically correct, there is a slight loss of data at the edges. When combining the data in the final step, the beginning of $signal_2 falls at the end of $signal_1, and so on. There are some methods that could possibly address this, but this loss of samples should be negligible to the overall standard deviation calculation. This method is also heavy on processing, especially depending on the input signals' data resolution and as the overall number of signals being considered increases. It is most ideal to use this method if real-time results are not of high importance, and it is better suited for calculation outputs displayed in an Organizer showing the previous day's/week's/etc. results.
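To see numerically why the straightforward options differ from a comprehensive standard deviation, here is a small numpy sketch with synthetic data (outside Seeq; the signal values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Four synthetic "signals" with different means, sampled over the same day
signals = [rng.normal(loc=mu, scale=1.0, size=1000) for mu in (0, 2, 4, 6)]

# Straightforward option: average the per-signal standard deviations
mean_of_stds = np.mean([s.std() for s in signals])

# Comprehensive option: one standard deviation over ALL samples together,
# which also captures the variation BETWEEN the signals
pooled_std = np.concatenate(signals).std()

print(round(mean_of_stds, 2), round(pooled_std, 2))
```

Because the signals have different means, the pooled value is noticeably larger than the average of the individual standard deviations, which is exactly the gap the time-warping methods above close inside Seeq.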
  12. Ruby and I implemented a solution that avoids the .transform() function. Details can be found here.
  13. Sometimes it's beneficial to fill these gaps with a prior average that is dynamic. The above post details how to fill the gap with a static timeframe, such as the 10 minutes before the gap. But what if we wanted to do something different, such as take the gap duration, a dynamic value, and fill the gap with the prior average based on that gap duration? Below details how to do this. There is a similar approach that leverages the .transform() Seeq function here, but I've provided an alternative method that avoids the usage of .transform(). Of course, this can all be input into a single formula, but below each step is broken out.

Solution:
1. Identify data gaps & calculate gap durations. Notes:
   - .intersect(past()) guarantees we are not considering the future for data validity
   - The maximum capsule duration should be carefully considered depending on the maximum gap duration you want to fill
   - Placing the timestamp across the capsule durations is important
2. Create an arbitrary flat signal & move this signal backwards (negatively) based on the gap durations signal. Notes:
   - The timeframe specified in .toSignal(1min) influences the resolution of representing the gap duration. 1min should suffice for most cases.
   - It's important to include a minus in front of the gap duration to indicate we are moving backwards in time.
   - The 24h dictates the maximum duration allowed to move, which is dependent on the expected gap durations.
3. Identify the new, moved signal and join the capsules from the new condition to the original signal-gaps condition. Notes:
   - Again, the maximum capsule duration needs to be carefully considered when joining the two conditions.
4. Calculate the average of the gappy signal across the joined condition, then splice this average into the original gappy signal. Notes:
   - Again, specifying the timestamp to be placed across the capsule durations is important here.
   - Be sure to splice the average across the original signal gaps.
   - Including .validValues() ensures we interpolate across our original gappy signal and the replacement average signal.
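The dynamic-window idea can be sketched in pandas for intuition (toy data with one known gap; this is not the Seeq Formula itself, just the arithmetic it performs):

```python
import numpy as np
import pandas as pd

# Toy gappy series sampled once per minute
idx = pd.date_range('2024-01-01', periods=60, freq='min')
s = pd.Series(np.arange(60, dtype=float), index=idx)
s.iloc[20:30] = np.nan                 # a 10-minute gap

# Fill the gap with the average of the window just before it,
# sized to match the gap duration (the "dynamic" part above)
gap_start, gap_end = 20, 30
gap_len = gap_end - gap_start          # 10 samples ~ 10 minutes
prior_avg = s.iloc[gap_start - gap_len:gap_start].mean()
s.iloc[gap_start:gap_end] = prior_avg
```

In Seeq, the joined condition from step 3 plays the role of the `gap_start - gap_len:gap_start` window here, so every gap gets its own lookback length.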
  14. Hi Tranquil, The method I described above will be a way to find assets that contain a signal of interest... I.e. perform a search for the signal name within the top level of the asset. However, I don't believe this manual identification is 100% necessary to complete what you're wanting. See below. Results of the above search show: The above shows that only the Example >> Cooling Tower 2 >> Area E asset has the Irregular Data signal. However, I don't need to do these searches to manually identify which assets have which signals. I can jump immediately into asset groups: Clicking Add above then repeating for Cooling Tower 2 will yield the following: Adding a condition based on Irregular Data: Saving the Asset Group then searching again for Irregular Data, this time within the newly created Asset Group: Now I can see that the specified Condition was only created for Area E because Area E was the only asset that contained the signal of interest (in this case, Irregular Data). If more assets also had the Irregular Data signal, then conditions would have also been created for those assets referencing their Irregular Data signal. Hopefully this helps. I encourage you to take a look at some of the videos we have for Asset Groups within our YouTube page. Specifically, this video discusses the process I used above, where I populate an Asset Group using other preexisting asset trees as a starting point.
  15. Do you mean exporting the values of the asset group items? If so, you can make use of the Data tab to add all variables to your display and then the Export to Excel tool to export the values. More information on searching within an Asset Tree/Group can be found here and information on exporting data to Excel (i.e. Using the Export to Excel Tool) found here. See screenshots below on what this process may look like: Please let me know if this helps or if this isn't what you meant when inquiring about exporting the asset group subitems.
  16. Hi Tranquil, Do you mind supplying a bit more information on your question and possibly some screenshots? If by resources you mean assets, with some assets containing a specific signal, then it may not be necessary to write any script. If this is indeed the case, Asset Groups could possibly be utilized to scale the creation of a condition if the signal(s) of interest exist.
  17. The third example in your documentation image seems to be the equivalent... It's pretty evident why .setProperty() was enhanced! Until your server is upgraded, you'll likely have to use that $condition.transform() method to add signal-referenced, statistical property values to capsules.
  18. Hey Red, I'm curious what the formula search documentation for setProperty() looks like for you? Are you able to take/send a screenshot? I'm able to see internally that the ability to use setProperty() in the way you're attempting (along with the other posts you've referenced) is a capability/feature in Seeq as of version R51. Prior to R51, this method will not work, as is being demonstrated with your error. Has your organization considered upgrading? There are many new features that you could be taking advantage of (see more details of What's New)! Reach out to your Seeq contact or support@seeq.com to get the upgrade process started. The point made above is also found in this other seeq.org post: For reference, below is what the setProperty() documentation looks like as of version R58. It could be that the only variation you're able to use in your version is $condition.setProperty(propertyName, value)
  19. Seasonal variation can influence specific process parameters whose values depend on ambient conditions; or perhaps raw material makeup changes over the year's seasons based on scheduled orders from different vendors. For these reasons and more, it may not suffice to compare your previous month's process parameters against the current month's. In these situations, it may be best to compare current product runs against previous product runs occurring in the same month a year ago, in order to assess consistency or deviations. In Seeq, this can be achieved by utilizing Condition Properties.

1. Bring in raw data. For this example, I will utilize a single parameter (Viscosity) and a grade code signal.

2. Convert the product step-signal into a condition. Add properties of Product ID, variable statistic(s), and month start/end times.

```
// Create a month condition. Be sure to specify your time zone so that start/end times are at 12:00 AM
$m = months('US/Eastern')

// Create a signal for start and end times to add into "Product ID" condition
$start_signal = $m.toSignal('Start', startKey()).toStep()
$end_signal = $m.toSignal('End', startKey()).toStep()

$p.toCondition('Product ID') // Convert string signal into a condition, with a capsule at each unique string.
                             // Specifying 'Product ID' ensures the respective values in the signal populate
                             // a property named 'Product ID'
 .removeLongerThan(100d)     // Bound condition. 100d as arbitrary limit
 .setProperty('Avg Visc', $vs, average())                 // Set 'Avg Visc' property reflecting avg visc over each Product ID
 .setProperty('Month Start', $start_signal, startValue()) // Set 'Month Start' property to know what month Product ID ran
 .setProperty('Month End', $end_signal, startValue())     // Set 'Month End' property to know what month Product ID ran
```

3. Create another condition that has a capsule ranging the entire month for each product run within the month. Add similar properties, but note the naming differences of 'Previous Month Start' and 'Previous Month Avg Visc'. This is because in the next step we will move this condition forward by one year.

```
$pi.grow(60d)              // Need to grow capsules in the condition to ensure they consume the entire month
  .transform($capsule ->   // For each capsule (Product Run) in 'Product ID'...
    capsule($capsule.property('Month Start'), $capsule.property('Month End')) // Create a capsule ranging the entire month
      .setProperty('Product ID', $capsule.property('Product ID'))             // Add property of Product ID
      .setProperty('Previous Month Start', $capsule.property('Month Start'))  // Add property of Month Start named 'Previous Month Start'
      .setProperty('Previous Month Avg Visc', $capsule.property('Avg Visc'))  // Add property of Avg Visc named 'Previous Month Avg Visc'
  )
```

Notice we now have many overlapping capsules in our new condition, each ranging an entire month -- one for each product run that occurred within the month.

4. Move the previous 'Month's Product Runs' condition forward a year and merge it with the existing 'Product ID' condition. Aggregate the 'Previous Month Avg Visc' property. This ensures that if a product ran multiple times with different avg visc values in each run, what is displayed will be the average of all the avg visc values for that product.

```
$previousYearMonthProductRun = $mspi.move(1y) // Move condition forward a year

$pi.mergeProperties($previousYearMonthProductRun, 'Product ID',
                    // Merge the properties of both conditions only if their
                    // capsules share a common value of 'Product ID'
                    keepProperties(), // keepProperties() will preserve all existing properties
                    aggregateProperty('Previous Month Avg Visc', average()))
                    // aggregateProperty() will take the average of all 'Previous
                    // Month Avg Visc' properties if multiple exist... i.e. if
                    // there were multiple product runs, each with a different value
                    // for 'Previous Month Avg Visc', then take the average of all of them.
```

The resulting condition will match our original condition, except now with two new properties: 'Previous Month Start' & 'Previous Month Avg Visc'. We can then add these properties to a condition table to create a cleaner view. We could also consider creating other statistics of interest, such as the % difference of the current Avg Visc vs the Previous Month Avg Visc. To do this, we could use a method similar to gathering $start_signal and $end_signal in Step 2: create the calculation using the signals, then add it back to the condition as a property.
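Outside Seeq, the merge-and-aggregate logic in Step 4 is analogous to a pandas groupby/merge. Below is a toy sketch with made-up product runs (illustrative data only; this is not the Formula itself):

```python
import pandas as pd

# Last year's runs: one row per product run, like capsule properties
last_year = pd.DataFrame({
    'Product ID': ['A', 'A', 'B'],
    'Month Start': pd.to_datetime(['2024-03-01'] * 3),
    'Avg Visc': [10.0, 12.0, 20.0],
})

# "Move forward a year" and average duplicates, mirroring
# aggregateProperty('Previous Month Avg Visc', average())
prev = last_year.copy()
prev['Month Start'] = prev['Month Start'] + pd.DateOffset(years=1)
prev = (prev.groupby(['Product ID', 'Month Start'], as_index=False)['Avg Visc']
            .mean()
            .rename(columns={'Avg Visc': 'Previous Month Avg Visc'}))

# This year's runs of the same products in the same calendar month
this_year = pd.DataFrame({
    'Product ID': ['A', 'B'],
    'Month Start': pd.to_datetime(['2025-03-01'] * 2),
    'Avg Visc': [11.5, 19.0],
})

# Merge on shared Product ID + month, like mergeProperties(...)
merged = this_year.merge(prev, on=['Product ID', 'Month Start'], how='left')
```

Product A ran twice last March (10.0 and 12.0), so its 'Previous Month Avg Visc' comes through as the aggregated 11.0, just as the Formula's aggregateProperty does.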
  20. The condition is created simply in Formula -- it is not created within the asset group. It seems like you've already done the other calculations, so I don't think adding a calculated item will be necessary. You simply need to add two assets and add all your columns via "Add Column". You can properly name your assets and columns, then click the + sign within the associated asset/column and search for the respective signal/calculation you already have. Creating scorecards will be the same as usual, except you need to have a condition based scorecard referencing the now-1d condition, and also ensure the item in question belongs to the asset group you've created. It's worth noting you only need to create the scorecards for ONE asset, and the asset table feature will scale it to your other asset(s). If the threshold limits are not the same between say Raw for Salt(ptb) and BSW Water (%), then you may need to create upper/lower threshold references in the asset group to reference instead of hard-input values. Feel free to register to office hours tomorrow if you'd like some live assistance with setting it up: https://info.seeq.com/office-hours
  21. Hi Jesse, This is currently a feature request in our system, but there's a fairly simple workaround utilizing asset trees, asset tables, and condition-based tables. If these items don't belong to an asset tree, you can quickly create an Asset Group and map the associated signal calculations you've made. (You can give our Asset Groups video a watch if you've never worked with Asset Groups.) For best results, the condition you create should only have one capsule. I've elected to create a condition that represents the past day (with respect to now) by entering the following into Formula:

```
condition(1d, capsule(now()-1d, now()))
```

Then simply add the relevant signals into your details pane by opening up the asset group and selecting them from there. Notice the Assets column in my details pane confirms these items are from the Asset Group I created in step 1. Then, you can go into Table view and select the Condition option vs Simple. You can then Transpose and click the Column drop-down to add each signal into the table. I am simply adding the "Last Value" for this example; I hard-coded your values into the signals for consistency. Click the Headers drop-down to get rid of the date range of the condition (unless you'd like to keep it). Finally, you can select the Asset icon at the top and select the Asset Group you've created ("Example" in my case). This will scale out to any other assets in the same level of the tree. The final result looks something like this after editing my column names to remove "Last". Note that the far-left blank column will not appear if inserted into an Organizer. Also note I've only demonstrated this for raw signals from an asset tree, but this method still works with Scorecard metrics to allow color thresholds. If you want to do this with scorecard metrics, just create the metrics referencing one of your "asset's" associated signals, BSW Water (%) for example, and then scale across the other "assets" as described above.
Hopefully this helps get you the look you were hoping to create! Please let me know if you have any questions with any of the above.
  22. Hi Vladimir, There are several ways to apply this analysis to other assets. The first & easiest method I'll mention is working in an Asset Framework or Asset Group (if an existing framework is not available). All previous calculations would need to be created using the data in the Asset Group, but once done, you'll be able to seamlessly swap the entire analysis over to your other assets (Trains, in this case). Asset Groups allow you to create your own framework either manually or utilizing existing frameworks. This video does a great job of showing the creation and scaling of calculations across other assets. Note that you would need to be at least on version R52 to take advantage of Asset Groups.

Another easy approach is to consolidate your analysis within 1 - 3 formulas (depending on what you really want to see). Generally speaking, this analysis could fall within ONE formula, but you may want more formulas if you care about seeing things like your "Tr1C1 no input, no output" condition across your other trains. I'll provide you with a way to consolidate this workflow in one formula, but feel free to break it into multiple if helpful to you. The reason this could be easier is that you can simply duplicate a single formula and manually reassign your variables to the respective variables of your other Train. Some useful things to note before viewing the formula:

- Formulas can make use of variable definitions... You'll notice within each step, except for the very last step, I'm assigning arbitrary/descriptive variables to each line so that I can reference these variables later in the formula. These variables could be an entire condition, or a signal / scalar calculation.
- In the formula comments (denoted by the double slashes: //), I note certain things could be different for your case.
- You can access the underlying Formula of any point-and-click tools you use (Value Searches, Signal from Conditions, etc.) by clicking the item's Item Properties (in the Details Pane) and scrolling down to Formula. Do this for your Tr1 C1 rate of change, monthly periodic condition, and average monthly rate calculations to see what the specific parameters are. This Seeq Knowledge Base article has an example of viewing the underlying formula within an item's Item Properties.
- The only RAW SIGNALS needed in this formula are: $valveTag1, $valveTag2, $productionTag, and $tr1Signal... The rest of the variables are assigned internally to the formula.

```
// Steps 1, 2, 3, and 4
// Note 'Closed' could be different for you if your valve tags are strings...
// If your valve tags are binary (0 or 1), it would be "$valveTag == 0" (or 1)
$bothValvesClosed = ($valveTag1 ~= 'Closed' and $valveTag2 ~= 'Closed').removeShorterThan(6h)

// Step 5
$valvesClosedProductionHigh = $bothValvesClosed and $productionTag > 10000

// Step 6 ASSUMING YOU USED SIGNAL FROM CONDITION TO CALCULATE RATE
// Note the "h" and ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
$tr1RateofChange = $tr1Signal.aggregate(rate("h"), $valvesClosedProductionHigh.setMaximumDuration(40h), middleKey(), 40h)

// Step 7
// $months could also be different in your case
// Note my final output has no variable definition. This is to ensure THIS is the true output of my formula
// Again, the ".setMaximumDuration(40h), middleKey(), 40h)" could all be different for your case
$months = months("US/Eastern")
$tr1RateofChange.aggregate(average(), $months.setMaximumDuration(40h), middleKey(), 40h)
```

Hopefully this makes sense and at the very least provides you with an idea of how you can consolidate calculations within Formula for easier duplication of complex, multi-step calculations. Please let me know if you have any questions. Emilio Conde Analytics Engineer
  23. Hi Mohammed, As John Cox stated in your previous post, there are a number of methods that can be used to remove or identify peaks. In your trend, it's not directly obvious what exactly you are defining as a peak and thus wanting to remove/identify. The image at the bottom contains various methods that you can explore. In order for us to provide a specific method to identify or remove peaks in your signal, you would need to provide us with additional information on what exactly you define as a peak (maybe by circling which peaks on your image you want to identify/remove). If you'd rather work on this over a meeting, you can always join one of our daily office hour sessions where a Seeq Analytics Engineer can help you 1:1. Emilio Conde Analytics Engineer
  24. You may have noticed that pushed data does not have a red trash icon at the bottom of its Item Properties. There's a simple way to move this (and any other) data to the trash through Data Lab. Read below.

[Screenshots: Item Properties for Pushed Data vs. a normal calculated Seeq Signal]

Moving data to the trash through SDL:

Step 1: Identify the data of interest and store it in a dataframe. For this example, I want to move all of the items on this worksheet to the trash, so I can use spy.search to store them as a dataframe.

```python
remove_Data = spy.search('worksheet URL')
```

Step 2: Create an 'Archived' column in the dataframe and set it to 'true'.

```python
remove_Data['Archived'] = 'true'
```

Step 3: Push this dataframe back into Seeq as metadata.

```python
spy.push(metadata=remove_Data)
```

The associated data should now be in the trash and no longer searchable in the Data pane.
  25. Hi Yanmin, Unfortunately, Seeq doesn't currently offer any contour map features; however, I've listed some options below to address your use case. In addition, while not directly applicable to what you're trying to achieve as a contour across 9 separate tags, I recommend looking into the Density Plot feature available in Seeq, as you may find some interest in this feature.

Option 1: Create a simple scorecard for each Well and assemble them in an Organizer in a neater format. It seems that you're using a 3x3 organizer table--one cell for each Well. You could use only one table cell to get them to better fit, emulating a single table. Something like below. I only used "Well 02" to demonstrate the layout, but the idea is your "mapping" will be on the left to understand what you're looking at on the right. To go about this, create a worksheet for each Well. Create a metric as you have (with thresholds specified) and go to Table view. Under Headers, select None. Under Columns: if you are creating the left table, only have Name checked; if you are creating the right table, only have Metric Value checked. Insert each into a single cell of a table in Organizer--I used a 1x2 table. For assembling adjacent columns, you'll want to make sure you insert each worksheet directly next to the other (no spaces between). For going to the next row, you'll want to make sure to SHIFT+ENTER, instead of a simple ENTER. Something like this should be the result. To remove the space between, simply click each individual cell (metric) and click Toggle Margin. After completing this for each metric, the table should resemble the first one I posted. You can resize the 1x2 Organizer table by clicking Table properties. For this example, I specified a width of 450 to narrow up the two columns.

Option 2: Create a Treemap. This method will require that the Wells be part of an asset group. If not already configured, this can be done within Seeq as of R52. This method may or may not give you the information you're looking for. Before considering this option, please be sure to read more about Treemaps on our Knowledge Base. Depending on the Priority colors and conditions you specify, your Treemap would look something like this. Note there is no way to change or specify the orientation within the normal user interface in Seeq (i.e., you can't easily specify a 3x3 layout). I hope this helps!