
Kjell Raemdonck

Seeq Team
  • Posts
    14
  • Days Won
    1

Kjell Raemdonck last won the day on February 15 2021


Personal Information

  • Company
    Seeq Corporation
  • Title
    Analytics Engineer
  • Level of Seeq User
    Seeq Advanced


  1. One question you may have: why so many backslashes in the Property Transform? We need to escape backslashes twice in this case - once because we are working in JSON in the config file, and once more for the regular expression itself. The expression below will match PI AF paths like "\\AFServerName\AFDatabaseName" but will not match, e.g., "\\AFServerName\AFDatabaseName\AnyElement|SomeAttribute". For the curious, after removing the JSON escaping, the regular expression used to match the AF path looks like this:

\\\\[^\\]+\\(?<path>[^\\]+?)$

This expression can be tested at a site like regextester.com, which will explain what each part of the regular expression does. Note that you should be able to use this transform as-is in your config file, editing only the prefix 'ONE' in "Outputs" to your desired Asset Tree name prefix.

Transform:

"Transforms": [
  {
    "Inputs": [
      { "Property": "AF Path", "Value": "\\\\\\\\[^\\\\]+\\\\(?<path>[^\\\\]+?)$" }
    ],
    "Outputs": [
      { "Property": "Name", "Value": "ONE.${path}" }
    ],
    "Enabled": true,
    "Log": false
  }
],
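To sanity-check the escaping layers, here is a small Python sketch (illustrative only, not part of the Seeq config) that runs the unescaped expression against the two example paths. Note that Python spells a named group (?P<path>...) where the connector config uses (?<path>...):

import re

# The AF-path expression with the JSON escaping removed.
pattern = re.compile(r"\\\\[^\\]+\\(?P<path>[^\\]+?)$")

db_path = r"\\AFServerName\AFDatabaseName"                             # should match
attr_path = r"\\AFServerName\AFDatabaseName\AnyElement|SomeAttribute"  # should not

for p in (db_path, attr_path):
    m = pattern.search(p)
    print(p, "->", "ONE." + m.group("path") if m else "no match")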
  2. If you have many different OSIsoft AF databases connected to Seeq, you will see all of those Asset Trees show up in your Data Tab. As you can imagine, some customers have 50+ AF databases connected to Seeq, which could lead to 50+ Asset Trees and make navigating to the desired tree difficult. Prior to R52, there was no alternative but to live with a messy Data Tab. Starting with R52, you can add a Property Transform to your OSIsoft AF Connector.json config file to rename the root Asset Tree of each data source with multiple databases.

Example: I have 2 PI AF data sources connected to Seeq, and each data source has 2 databases with identical naming - see the original OSIsoft AF Connector.json configuration file below.

{
  "Version": "Seeq.Link.Connector.AF.Config.AFConnectorConfigV3",
  "Connections": [
    {
      "Name": "piAFserverONE",
      "Id": "570974a9-d38f-4445-ad0d-3aac24fa88da",
      "Enabled": true,
      "Indexing": {
        "Frequency": "1w",
        "OnStartupAndConfigChange": false,
        "Next": "2021-07-05T01:00:00-05[America/Chicago]"
      },
      "Transforms": null,
      "MaxConcurrentRequests": null,
      "MaxResultsPerRequest": null,
      "IncrementalIndexingFrequency": "300d",
      "AFServerID": "46abe034-1602-484f-b142-d8a667356e9f",
      "Username": "***",
      "Password": "***",
      "IncrementalIndexingMaxChangedPerDatabase": 10000,
      "IgnoreHiddenAttributes": true,
      "IgnoreExcludedAttributes": true,
      "NestChildAttributes": false,
      "SyncElementReferences": false,
      "RegisterSeeqDataReference": null,
      "AFServerName": "PIAFONE",
      "Databases": [
        { "Name": "Database One", "ID": "af79d0c8-4afb-43ed-a7b2-115844f1ad29", "Enabled": true },
        { "Name": "Database Two", "ID": "dee1a62d-9eac-4945-82e1-4a1e9baa9d8e", "Enabled": true }
      ],
      "AdditionalProperties": null,
      "PISecuritySynchronization": { "PointSecurity": false, "PIWorldMapping": null },
      "AFSecuritySynchronization": {
        "IdentityMappingsDatasourceClass": "Windows Auth",
        "IdentityMappingsStopRegex": "^(BUILTIN\\\\.*)$",
        "Identities": false,
        "ElementsSecurity": false,
        "IdentityMappingsDatasourceId": null,
        "WorldMapping": null
      }
    },
    {
      "Name": "piAFserverTWO",
      "Id": "68a9bccc-a421-4e0e-b06d-0586867decca",
      "Enabled": true,
      "Indexing": {
        "Frequency": "1w",
        "OnStartupAndConfigChange": false,
        "Next": "2021-07-05T03:00:00-05[America/Chicago]"
      },
      "Transforms": null,
      "MaxConcurrentRequests": null,
      "MaxResultsPerRequest": null,
      "IncrementalIndexingFrequency": "300d",
      "AFServerID": "57abc41f-8822-4d0d-a668-8607db1c1445",
      "Username": "***",
      "Password": "***",
      "IncrementalIndexingMaxChangedPerDatabase": 10000,
      "IgnoreHiddenAttributes": true,
      "IgnoreExcludedAttributes": true,
      "NestChildAttributes": false,
      "SyncElementReferences": false,
      "RegisterSeeqDataReference": null,
      "AFServerName": "PIAFTWO",
      "Databases": [
        { "Name": "Database One", "ID": "dee1a62d-9eac-4945-82e1-4a1e9baa9d8e", "Enabled": true },
        { "Name": "Database Two", "ID": "8e05153b-a249-4165-a6cf-fa3e13fd6f4c", "Enabled": true }
      ],
      "AdditionalProperties": null,
      "PISecuritySynchronization": { "PointSecurity": false, "PIWorldMapping": null },
      "AFSecuritySynchronization": {
        "IdentityMappingsDatasourceClass": "Windows Auth",
        "IdentityMappingsStopRegex": "^(BUILTIN\\\\.*)$",
        "Identities": false,
        "ElementsSecurity": false,
        "IdentityMappingsDatasourceId": null,
        "WorldMapping": null
      }
    }
  ],
  "ApplicationIdentity": null,
  "RestartAgentAfterErrorTimeout": null
}

In Seeq, my Data Tab would look like this:

This is confusing, because I have no way to distinguish which data source each "Database One" is coming from without diving into the Item Properties for more information.
Ideally, I could visually identify which is which. Hence, thanks to R52, I will rename the Asset Trees I see here via a Property Transform in my connector config file. See the updated config below, with the Transforms added:

{
  "Version": "Seeq.Link.Connector.AF.Config.AFConnectorConfigV3",
  "Connections": [
    {
      "Name": "piAFserverONE",
      "Id": "570974a9-d38f-4445-ad0d-3aac24fa88da",
      "Enabled": true,
      "Indexing": {
        "Frequency": "1w",
        "OnStartupAndConfigChange": false,
        "Next": "2021-07-05T01:00:00-05[America/Chicago]"
      },
      "Transforms": [
        {
          "Inputs": [
            { "Property": "AF Path", "Value": "\\\\\\\\[^\\\\]+\\\\(?<path>[^\\\\]+?)$" }
          ],
          "Outputs": [
            { "Property": "Name", "Value": "ONE.${path}" }
          ],
          "Enabled": true,
          "Log": false
        }
      ],
      "MaxConcurrentRequests": null,
      "MaxResultsPerRequest": null,
      "IncrementalIndexingFrequency": "300d",
      "AFServerID": "46abe034-1602-484f-b142-d8a667356e9f",
      "Username": "***",
      "Password": "***",
      "IncrementalIndexingMaxChangedPerDatabase": 10000,
      "IgnoreHiddenAttributes": true,
      "IgnoreExcludedAttributes": true,
      "NestChildAttributes": false,
      "SyncElementReferences": false,
      "RegisterSeeqDataReference": null,
      "AFServerName": "PIAFONE",
      "Databases": [
        { "Name": "Database One", "ID": "af79d0c8-4afb-43ed-a7b2-115844f1ad29", "Enabled": true },
        { "Name": "Database Two", "ID": "dee1a62d-9eac-4945-82e1-4a1e9baa9d8e", "Enabled": true }
      ],
      "AdditionalProperties": null,
      "PISecuritySynchronization": { "PointSecurity": false, "PIWorldMapping": null },
      "AFSecuritySynchronization": {
        "IdentityMappingsDatasourceClass": "Windows Auth",
        "IdentityMappingsStopRegex": "^(BUILTIN\\\\.*)$",
        "Identities": false,
        "ElementsSecurity": false,
        "IdentityMappingsDatasourceId": null,
        "WorldMapping": null
      }
    },
    {
      "Name": "piAFserverTWO",
      "Id": "68a9bccc-a421-4e0e-b06d-0586867decca",
      "Enabled": true,
      "Indexing": {
        "Frequency": "1w",
        "OnStartupAndConfigChange": false,
        "Next": "2021-07-05T03:00:00-05[America/Chicago]"
      },
      "Transforms": [
        {
          "Inputs": [
            { "Property": "AF Path", "Value": "\\\\\\\\[^\\\\]+\\\\(?<path>[^\\\\]+?)$" }
          ],
          "Outputs": [
            { "Property": "Name", "Value": "TWO.${path}" }
          ],
          "Enabled": true,
          "Log": false
        }
      ],
      "MaxConcurrentRequests": null,
      "MaxResultsPerRequest": null,
      "IncrementalIndexingFrequency": "300d",
      "AFServerID": "57abc41f-8822-4d0d-a668-8607db1c1445",
      "Username": "***",
      "Password": "***",
      "IncrementalIndexingMaxChangedPerDatabase": 10000,
      "IgnoreHiddenAttributes": true,
      "IgnoreExcludedAttributes": true,
      "NestChildAttributes": false,
      "SyncElementReferences": false,
      "RegisterSeeqDataReference": null,
      "AFServerName": "PIAFTWO",
      "Databases": [
        { "Name": "Database One", "ID": "dee1a62d-9eac-4945-82e1-4a1e9baa9d8e", "Enabled": true },
        { "Name": "Database Two", "ID": "8e05153b-a249-4165-a6cf-fa3e13fd6f4c", "Enabled": true }
      ],
      "AdditionalProperties": null,
      "PISecuritySynchronization": { "PointSecurity": false, "PIWorldMapping": null },
      "AFSecuritySynchronization": {
        "IdentityMappingsDatasourceClass": "Windows Auth",
        "IdentityMappingsStopRegex": "^(BUILTIN\\\\.*)$",
        "Identities": false,
        "ElementsSecurity": false,
        "IdentityMappingsDatasourceId": null,
        "WorldMapping": null
      }
    }
  ],
  "ApplicationIdentity": null,
  "RestartAgentAfterErrorTimeout": null
}

After a fresh re-index of my 2 data sources, I can now see that my Asset Trees are renamed in Seeq's Data Tab, and I can clearly distinguish which database belongs to which data source. You can use this to add data source information as I have done above, or use another naming method to order them differently - they will always populate alphabetically.
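One practical gotcha when editing the config by hand: a single mis-typed backslash or quote breaks the whole file. Here is an illustrative Python check (the file path is an assumption) that confirms the edited config still parses as JSON and prints each transform regex as the connector will see it, i.e. with the JSON escaping layer removed:

import json

with open("OSIsoft AF Connector.json") as f:
    cfg = json.load(f)  # raises an error immediately if the JSON is malformed

for conn in cfg["Connections"]:
    for transform in conn.get("Transforms") or []:
        for inp in transform["Inputs"]:
            print(conn["Name"], "->", inp["Value"])
# piAFserverONE -> \\\\[^\\]+\\(?<path>[^\\]+?)$
# piAFserverTWO -> \\\\[^\\]+\\(?<path>[^\\]+?)$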
  3. You're on the right track with the Capsule Property being the Signal name. In the Histogram you could then aggregate first by "Condition" (instead of "Time" as shown above) - this condition would be a combineWith() condition containing all of your signal conditions with their associated properties. More info on capsule properties is in the post linked here. I also wonder if you could use a method similar to the one linked below to bin the signals - maybe it could work for your use case? That example also shows how to bin (in this case via assets).
  4. Have you made sure to "step to current time" by hitting the button below? You may just need to ensure your current display range encompasses the last 2 min and the last 1 hr.
  5. Monitoring KPIs for your process or equipment is a valuable method for determining overall system performance and health. However, it can be cumbersome to comb through all the different KPIs and understand when each is deviating from an expected range or set of boundaries. We can shorten our time to insight by aggregating all associated KPIs into one Health Score; the result allows us to monitor just one trend item and take action when deviations occur. To show how a Health Score is built, I will walk through an example below which looks at 4 KPIs for a pump and aggregates them into one final Health Score. Note that the time period examined is a 3-month period leading up to a pump failure.

KPI DETAILS

KPI #1 - The first indicator I can monitor on this pump is how my Discharge Pressure is trending relative to an expected range determined by my manufacturer's pump performance curve. (To enable using a pump curve in Seeq, reference this article for more information: Creating Pump and Compressor Curves in Seeq.) As my Discharge Pressure deviates from the expected range, red capsules are created using a Deviation Search in Seeq.

KPI #2 - The second indicator I can monitor on this pump is whether the NPSHa (available) remains higher than the NPSHr (required) as stipulated by the manufacturer's pump datasheet. If my NPSHa drops below my NPSHr, red capsules will be created using a Deviation Search in Seeq. (No deviations noted in the time period evaluated.)

KPI #3 - The third indicator I can monitor on this pump is whether the pump vibration signals remain lower than specified thresholds (these could be determined empirically, from the manufacturer, or from an industry standard). In this case I have 4 vibration signals. I am using a "union" method to combine the 4 conditions into the final KPI alert, which will show red capsules if any of the 4 vibrations exceed their threshold. The formula for this KPI alert condition is:

$vib1 > $limit1 or $vib2 > $limit2 or $vib3 > $limit3 or $vib4 > $limit4

KPI #4 - The fourth indicator I can monitor on this pump is whether the flow through the pump remains higher than the minimum allowable as stipulated by the manufacturer's pump curve/datasheet. If my measured Flow drops below my Flow Limit, red capsules will be created using a Deviation Search in Seeq. (No deviations noted in the time period evaluated.)

BUILDING THE HEALTH SCORE

Now that we have 4 conditions, one for each KPI exceeding its determined normal operating range, we need to aggregate these into the Health Score. First, we determine what fraction of each day each KPI alert has been active (a value between 0 and 1). We do this by creating a Formula for each KPI alert condition, with the following syntax:

//Determine the % duration that a KPI alert is active during each day (in fraction form)
$kpi1.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()

The result of applying this to all 4 KPI alert conditions should be as follows: if a KPI alert condition is "on" for the full duration of a day, it will show a value of 1; if partially "on", it will show a fractional value between 0 and 1; and if no condition is present at all, it will show a value of 0. Now we aggregate these individual indicators into a rolled-up Health Score by taking the sum of squares of the fractions and dividing by the total number of indicators.
To do so, enter the following in a Formula:

//Aggregate the sum of squares of the fractional alert values
($k1.pow(2) + $k2.pow(2) + $k3.pow(2) + $k4.pow(2))/4

I could also have performed the above 2 steps in 1 Formula:

//First determine the % duration that each KPI alert is active during each day (in fraction form)
$k1 = $kpi1.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
$k2 = $kpi2.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
$k3 = $kpi3.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
$k4 = $kpi4.aggregate(percentDuration(), days(), startKey()).convertUnits('').toStep()
//Then aggregate the sum of squares of those fractional values
$sumOfSquares = $k1.pow(2) + $k2.pow(2) + $k3.pow(2) + $k4.pow(2)
$sumOfSquares/4

I can also add a Health Score high limit (in my example 0.25, the score produced if one KPI alert is active for a full day) to trigger some action (perhaps scheduling pump maintenance) prior to failure. A new red capsule will appear if my Health Score exceeds this limit of 0.25 (this can be configured via Value Search or Deviation Search). To create this limit (a new scalar), enter 0.25 into Formula.

Below you will see my final Health Score trend item as well as the limit and the Health Score Alert condition. Optionally, I can use the high limit to create a shaded boundary for my Health Score using the Boundaries tool (I also create a low limit of 0 to be the lower boundary). We can see that in the month leading up to the failure (early February), I had multiple forewarning indications through this aggregated Health Score. In the future, I could monitor this pump's health in a dashboard in a Seeq Organizer Topic and trigger maintenance activity proactively. Dashboard example:
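For intuition on the arithmetic, here is a minimal Python sketch of the same sum-of-squares aggregation outside Seeq (the daily fractions are made-up numbers):

# Daily fractional "alert active" durations for the 4 KPIs (hypothetical values).
kpi_fractions = {
    "discharge_pressure": 0.50,  # alert active for half the day
    "npsh":               0.00,
    "vibration":          1.00,  # alert active for the full day
    "min_flow":           0.25,
}

# Sum of squares of the fractions, divided by the number of indicators.
# Squaring weights long-lasting alerts more heavily than brief ones.
health_score = sum(f ** 2 for f in kpi_fractions.values()) / len(kpi_fractions)

print(round(health_score, 4))  # 0.3281 -> above the 0.25 action limit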
  6. Hi Jason,

You can certainly aggregate by hour (we call it "hour of the day" - your bins become hour 0, 1, 2, 3, and so on up to 23, for a full 24 hours each day). You'll see that as a setting in the Histogram tool. I've put together an example which I hope will show how you can go about this for multiple signals. My example has 4 signals I want to aggregate - just think of expanding that to 17 for your case.

Step 1. Get all the samples from my 4 different signals into 1 common signal. In Formula, I write the following:

$s1 = $t1.validValues().resample(1min).setMaxInterpolation(3s)
$s2 = $t2.validValues().resample(1min).setMaxInterpolation(3s).move(1s)
$s3 = $t3.validValues().resample(1min).setMaxInterpolation(3s).move(2s)
$s4 = $t4.validValues().resample(1min).setMaxInterpolation(3s).move(3s)
combineWith($s1, $s2, $s3, $s4)

I resample to 1 min so that all my original signals have a sample point at the same timestamp, and then I offset each subsequent signal by an additional 1 s so that the samples do not collide in the combined signal - this allows me to aggregate each of the samples from my original signals. If your original signals' samples all arrive at the same timestamps anyway, you don't need the resample(1min) function. Visually, you can see the transformation in the snapshot below (the new combined signal is at the bottom).

Step 2. I duplicate the signal I just created, so that I can use the sample values from this signal as my second aggregation (recall that the first aggregation is by hour of day, so the second will be by value - you could create 5 bins, for instance).

Step 3. Create the Histogram. I create a new Histogram via the tool, configured as you can see on the left of the snapshot below. The x-axis of the histogram represents the hour of the day, where 0 is the 1st hour (midnight to 1 am) and 23 is the last hour (11 pm to midnight). The different bars inside each hour represent the 5 bins I chose to define my distribution (you can also bin on set values by choosing the "size of bin" option). You can make this stacked by opening the "Customize" pane and checking the "Stacked" box. See below.

I also changed the display to just 1 day, from midnight to midnight - as you can see, we have the same total samples every hour (which makes sense, as we resampled to a constant 1 min frequency for each of the 4 signals), but the sample count within the value bins changes over time - I've changed the colors so you can see those bins more easily.

Let me know if this answers your question. It's a bit more in depth, so if you'd like, we can hop on a Zoom call and work through it for your data together.

Thanks, Kjell
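For anyone who wants to see the mechanics of Step 1 outside Seeq, here is a minimal pandas sketch of the same resample-and-offset trick (the signals are made up; this is not Seeq syntax):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2021-01-01", periods=1440, freq="1min")  # one full day

# Four hypothetical signals already resampled to a common 1-minute grid.
signals = [pd.Series(rng.normal(50, 5, len(idx)), index=idx) for _ in range(4)]

# Shift each signal by an extra second so no two samples share a timestamp
# (the .move() step), then interleave them into one series (the combineWith() step).
shifted = []
for i, s in enumerate(signals):
    s = s.copy()
    s.index = s.index + pd.Timedelta(seconds=i)
    shifted.append(s)
combined = pd.concat(shifted).sort_index()

# First aggregation: samples per hour of day - 240 in every hour, as expected.
# A second, value-based binning (e.g. pd.cut into 5 bins) can be layered on top.
print(combined.groupby(combined.index.hour).count())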
  7. The post below documents how you can pull the last sample's timestamp into a Scorecard - in case all you wanted was to view that timestamp. Enter the following into a new Formula:

//In this formula, we will create a signal of all the timestamps for each sample point coming into Seeq
//Create a condition with tiny capsules for each sample point
// NOTE - to get the accurate timezone date/time, I need to delay my signal according to my UTC-6h adjustment for Mountain timezone
// if you are in Central, for instance, you would do UTC-5h, or .delay(-5h)
$sampleCapsules = $signal.delay(-6h).toCapsules()
//Find the timestamp of each recorded sample point
// NOTE - you may need to change the 1d to a larger value, as this is the interpolation length
// so if your samples are spaced by more than 1d, increase to 2d and see if this interpolates
$sampleCapsules.toSignal('End').toLinear(1d).toString()
//Below transform is optional, to view the timestamp in a neater format of MM/DD/YYYY HH:MM
.replace('/(?<year>....)-(?<month>..)-(?<day>..)T(?<hour>..):(?<minute>..):(?<sec>..)(?<dec>.*)Z/' , '${month}/${day}/${year} ${hour}:${minute}')

Then, in a new Scorecard Metric, grab the Value at End. Every time you return to Seeq, you can "step to now" and get the latest timestamp for the last sample, corrected for your timezone per the delay above.
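If it helps to see the optional timestamp-formatting transform in isolation, here is an illustrative Python equivalent (the sample timestamp is made up):

import re

# Reformat an ISO-8601 timestamp string to MM/DD/YYYY HH:MM, mirroring the
# .replace() transform above.
iso = re.compile(
    r"(?P<year>....)-(?P<month>..)-(?P<day>..)T(?P<hour>..):(?P<minute>..):(?P<sec>..)(?P<dec>.*)Z"
)

stamp = "2021-02-15T16:45:30.000Z"  # hypothetical last-sample timestamp
print(iso.sub(r"\g<month>/\g<day>/\g<year> \g<hour>:\g<minute>", stamp))
# -> 02/15/2021 16:45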
  8. Hi Bharat, You should be able to find shared items by clicking the "Shared" button in the left navigation menu. This contains items shared with you by other users. You may need to increase the number of items you can see on one page by changing it from 5 to 20 (for example) at the bottom, as shown in the screenshot below. You can also search for all Workbench files shared with you via the search bar at the top, as shown in the screenshot. You can find some more info here: https://support.seeq.com/space/KB/1349779782/Home%20Screen
  9. When addressing a business problem with analytics, we should always start by asking ourselves 4 key questions:

  • Why are we talking about this: what is the business problem we are trying to address, and what value will solving this problem generate?
  • What data do I have available to help with solving this problem?
  • How can I build an effective analysis to identify the root of my problem (both in the past and in the future)?
  • How will I visualize the outputs to ensure proactive action to prevent the problem from manifesting? This is where you extract the value.

With that in mind, please read below how we approach these 4 questions while working in Seeq to deal with heat exchanger performance issues.

What is the business problem?

Issues with heat exchanger performance can lead to downstream operational issues, which may lead to lost production and revenue. To effectively monitor the exchanger, a case-specific approach is required depending on the performance driver:

  • Fouling in the exchanger is limiting heat transfer, requiring further heating/cooling downstream
  • Fouling in the exchanger is limiting system hydraulics, causing flow restrictions or other concerns
  • Equipment integrity: identifying leaks inside the exchanger

What data do we have available?

  • Process sensors - flow rates, temperatures, pressures, control valve positions
  • Design data - drawings, datasheets
  • Maintenance data - previous repairs or cleanings, mean time between cleanings

How can we tackle the business problem with the available data?

There are many ways to monitor a heat exchanger's performance, and the selection of the appropriate indicator depends on a) the main driver for monitoring and b) the available data. The decision tree below is merely meant to guide which indicators can be applied based on your dataset. Generally speaking, the more data available, the more robust an analysis you can create (i.e., first-principles-based calculations). However, in the real world we are often working with sparse datasets, and therefore may need to rely on data-based approaches to identify the subtle trends which indicate changes in performance over time. Implementing each of the indicators listed above follows a similar process in Seeq Workbench, as outlined in the steps below. In this example, we focus on a data-based approach (the middle category above). For an example of a first-principles-based approach, check out this Seeq University video.

Step 1 - Gather Data
  • In a new Workbench, search in the Data Tab for relevant process signals
  • Use Formula to add scalars, or use the .toSignal() function to convert supplemental data such as boundary limits or design values
  • Use Formula, Value Search or Custom Condition to enter maintenance period(s) and heat exchanger cycle(s) conditions (if these cannot be imported from a datasource)

Step 2 - Identify Periods of Interest
  • Use Value Search, Custom Condition, Composite Condition or Formula to identify downtime periods, periods where the exchanger is bypassed, or periods of bad data which should be ignored in the analysis

Step 3 - Cleanse Data
  • Use Formula to remove periods of bad data or downtime from the process signals, using functions such as $signal.remove($condition) or $signal.removeOutliers()
  • Use Formula to smooth data as needed, using functions such as $signal.agileFilter() or the Low Pass Filter tool

Step 4 - Quantify
  • Use Formula to calculate any required equations
  • In this example, no calculations are required.
Step 5 - Model & Predict
  • Use Prediction and choose a process signal to be the target variable, using other available process signals as input variables; choose a training period when the exchanger is known to be in good condition
  • Using Boundaries: establish upper and lower boundary signals based on the predicted (model) signal from the previous step (e.g., +/-5% of the modeled signal represents the boundaries)

Step 6 - Monitor
  • Use Deviation Search or Value Search to find periods where the target signal exceeds a boundary
  • The deviation capsules created represent periods where heat exchanger performance is not as expected
  • Aggregate the Total Duration or Percent Duration statistic using Scorecard or Signal from Condition to assess deteriorating exchanger health over time

How can we visualize the outputs to ensure proactive action in the future?

Step 7 - Publish
Once the analysis has been built in Seeq Workbench, it can be published in a monitoring dashboard in Seeq Organizer, as seen in the examples below. This dashboard can then be shared among colleagues in the organization, with the ability to monitor the exchanger, log alerts, and take action as necessary as time progresses - this last step is key to implementing a sustainable workflow and ensuring full value is extracted from solving your business problem.
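To make Steps 5 and 6 above concrete, the sketch below shows the same model-boundary-deviation pattern in plain Python (illustrative only; the signals, coefficients and the +/-5% band are all assumptions, and Seeq's Prediction and Deviation Search tools do this work directly on your process data):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical exchanger data: predict outlet temperature from flow and inlet temperature.
flow = rng.uniform(80, 120, 500)
t_in = rng.uniform(60, 80, 500)
t_out = 0.5 * flow + 0.8 * t_in + rng.normal(0, 1, 500)
t_out[300:] -= 8  # simulated fouling: performance drifts after sample 300

X = np.column_stack([flow, t_in])

# Step 5 - train on a known-good period only, then predict everywhere.
model = LinearRegression().fit(X[:200], t_out[:200])
predicted = model.predict(X)

# Boundaries at +/-5% of the modeled signal.
upper, lower = predicted * 1.05, predicted * 0.95

# Step 6 - "deviation capsules" are the stretches outside the boundaries.
deviating = (t_out > upper) | (t_out < lower)
print(f"Percent duration outside boundaries: {100 * deviating.mean():.1f}%")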
  10. To get all your asset-specific signals (the KPI signal), grab a Journal link (see screenshot below: click the green Q button and add a link for "Trend Item" - in this case you would select your KPI signal). Once you have the link for the KPI related to Asset A, you can swap to Asset B and grab the link again. Repeat this for all your assets, and you should end up with a list like the one below. Once you have all the links, remove everything from your Details pane and click each link to bring that signal into your trend (or Scorecard). Your Details pane should then look like mine below. Now you have all your KPI signals, metrics or conditions in one place for comparison purposes. Please let me know if this achieves what you were looking for. -Kjell
  11. Hi Tommy,

The easiest way to do this in Seeq is to use a condition to define the "if" logic, and then splice in a new signal when your condition is true. Follow the steps below to achieve this.

1) Use the Value Search tool to find when your signal .OP <= 0
2) In Formula, enter the following:

$flowsignal.splice(0.toSignal(), $conditionclosed)

where $flowsignal is your Flow Rate signal and $conditionclosed is the condition we created in Step 1. What we are doing here is splicing in a new signal we create ( 0.toSignal() ) which will equal 0 whenever the .OP <= 0 condition is true.

You could also write all of this in 1 Formula (combining Steps 1 and 2) as follows:

$conditionclosed = $OP.validValues().valueSearch(isLessThanOrEqualTo(0))
$flowsignal.splice(0.toSignal(), $conditionclosed)

Please let me know if this solves your question. -Kjell
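For intuition, the splice behaves like a conditional overwrite; here is an illustrative Python equivalent (made-up data, not Seeq Formula syntax):

import numpy as np

# Hypothetical samples: valve opening (%) and the measured flow signal.
op = np.array([50.0, 20.0, 0.0, -1.0, 30.0])
flow = np.array([12.0, 5.0, 0.4, 0.2, 7.0])

# Equivalent of $flowsignal.splice(0.toSignal(), $conditionclosed):
# wherever the valve is closed (OP <= 0), overwrite the flow with 0.
spliced = np.where(op <= 0, 0.0, flow)
print(spliced)  # [12.  5.  0.  0.  7.]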
  12. Hi Jason, Great question! You can achieve a stacked histogram by following the steps below. In the screenshot below I have 3 temperature signals, all with the same units - I believe this mimics the scenario in your question. Then I create a Histogram - note that you can aggregate your signal properties however you like here; this is just an example. Finally, once you execute the Histogram, you can convert it to a stacked view by checking the "Stacked" box as circled in the image below. This configuration appears when you click the Customize button in your Details pane, as denoted by the red arrow. The final result should look like a set of stacked histograms, as in my image below. Please let me know if this addresses your question. -Kjell
  13. I would say that it depends on the changes you are making. If persons A and B are collaborating on the same analysis, you can work in the same sheet, as long as the workbook is shared with each individual using the access controls - you can find these by clicking the icon showing 2 people at the top right of your screen (see picture below, item 1) when in a Workbench analysis. This way, you can use links as previously described to track each step that person A or B completes as you go along. You can also use the "get link" feature (see picture below, item 2) to share an edit link if you give that person access. If you want to ensure that the changes person B makes do not affect the "master copy" at all, I would highly recommend duplicating the original "master" workbook and only sharing the copy with person B. If persons A and B are not working collaboratively - say person B wants to use person A's work as a starting point - I would duplicate person A's final Workbench and then share the copy with person B. Person B can also do the duplication themselves. See the picture below for where to find the duplicate Workbench feature on your home screen. Note that duplicating just the worksheet will keep the same tags as the first worksheet (i.e., changes made in sheet 2 will reflect in sheet 1 in terms of Formula content, etc.). If you do copy the sheet, you can still duplicate the individual formulas by accessing the "i" button and clicking "Duplicate" at the bottom of the pop-up - this will ensure the original does not get changed. Hope these steps are helpful. If you would like more in-depth information, perhaps we could schedule some time together and you can share your screen.
  14. Hi Felix, The way "versioning" is most easily applied in Seeq is using the workstep link feature in Journal. If you navigate to the "Q" at the top left of the Journal toolbar, you will see a few different link options when you click on it. What I recommend is that after completing your "master" analysis, you link that view with a final workstep link. You can highlight a string of text in the Journal - such as "Master View" - and then add the link to it by accessing the "Q" button while the text is highlighted. See the example screenshots below; below the screenshots is a link to our Knowledge Base which shows more detail on Journal entries and adding links. Then, once you have that view linked, anytime someone comes in and plays with the analysis, you can use the link to return to the workstep. Note, however, that this only links the visual elements - if the people you are sharing the analysis with intend to change the Formula content, you are better off duplicating the workbook and then sharing the copy with them. See the article here about the different links and what they do: https://support.seeq.com/space/KB/162201610/Inserting%20Seeq%20Links Please let me know if this does not address your question.