Chris Herrera
Posts posted by Chris Herrera
Hi Onyekachi,
The only way to implement this is with a polling loop. Seeq is a lazily evaluating system, meaning that no data pull or calculation is performed without a query driven either through the UI or our API. There are examples of customers implementing polling loops that query for new data/capsules and push the results onto a messaging infrastructure such as Kafka, Kinesis, etc. This is generally done using SPy, the Seeq Python library.
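A minimal sketch of such a polling loop, assuming a local Kafka broker and using kafka-python for the producer (the condition name, topic, and broker address are placeholders, not part of any specific deployment):

```python
import json
import time
from datetime import datetime, timedelta, timezone

import pandas as pd


def new_capsules(capsules: pd.DataFrame, last_seen: datetime) -> pd.DataFrame:
    """Keep only capsules that started after the previous poll."""
    return capsules[capsules['Capsule Start'] > last_seen]


def poll_and_publish(condition_name: str, topic: str, interval_s: int = 60) -> None:
    # Local imports so the pure helper above stays testable without a Seeq server.
    from seeq import spy              # Seeq Python library (SPy)
    from kafka import KafkaProducer   # kafka-python; broker location is an assumption

    producer = KafkaProducer(
        bootstrap_servers='localhost:9092',  # assumption: local Kafka broker
        value_serializer=lambda v: json.dumps(v, default=str).encode(),
    )
    last_seen = datetime.now(timezone.utc) - timedelta(seconds=interval_s)
    while True:
        # Find the condition, pull any capsules since the last poll, and
        # forward each new capsule as a JSON message.
        items = spy.search({'Name': condition_name, 'Type': 'Condition'})
        capsules = spy.pull(items, start=last_seen, end=datetime.now(timezone.utc))
        for _, row in new_capsules(capsules, last_seen).iterrows():
            producer.send(topic, row.to_dict())
        last_seen = datetime.now(timezone.utc)
        time.sleep(interval_s)
```

The same structure works for Kinesis or any other sink; only the producer changes.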
Regards,
Chris
A Guide to Working with Tulip and Seeq Together
in Seeq Data Lab
Tulip is one of the leading frontline operations platforms, providing manufacturers with a holistic view of quality, process cycle times, OEE, and more. The Tulip platform provides the ability to create user-friendly apps and dashboards to improve the productivity of your operations without writing any code. Integrating Tulip and Seeq allows Tulip app and dashboard developers to directly include best-in-class time series analytics into their displays. Additionally, Seeq can access a wealth of contextual information through Tulip Tables.
Accessing Tulip Table Data in Seeq
Tulip table data is an excellent source of contextual information as it often includes information not gathered by other systems. In our example, we will be using a Tulip Table called (Log) Station Activity History.
This data lets us see how long a line process has been running, the number of components targeted for assembly, the number actually assembled, and the number of defects. The easiest way to bring this into Seeq is as condition data: we will create one condition per station, with each table column becoming a capsule property.
This can be achieved with a scheduled notebook:
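A sketch of what such a notebook can look like. The tenant, credentials, table ID, and the `start_time`/`end_time` field names are placeholders and assumptions about the table schema; the endpoint shape follows the Tulip Tables REST API:

```python
import pandas as pd

TENANT = 'your-tenant'                  # placeholder: your Tulip tenant
API_AUTH = ('apikey.2_...', 'secret')   # placeholder: Tulip API key credentials
TABLE_ID = 'station-activity-table-id'  # placeholder: (Log) Station Activity History


def fetch_records(tenant: str, auth, table_id: str) -> list:
    """GET the table's records from the Tulip Tables REST API."""
    import requests  # local import so the transform below is testable offline
    url = f'https://{tenant}.tulip.co/api/v3/tables/{table_id}/records'
    resp = requests.get(url, auth=auth, params={'limit': 100})
    resp.raise_for_status()
    return resp.json()


def records_to_capsules(records: list) -> pd.DataFrame:
    """Map each record to a capsule; remaining columns become capsule properties."""
    df = pd.DataFrame(records)
    # 'start_time'/'end_time' field names are assumptions about the table schema.
    df = df.rename(columns={'start_time': 'Capsule Start', 'end_time': 'Capsule End'})
    df['Capsule Start'] = pd.to_datetime(df['Capsule Start'])
    df['Capsule End'] = pd.to_datetime(df['Capsule End'])
    return df


def push_station_condition(capsules: pd.DataFrame, station: str) -> None:
    """Push the capsules to Seeq as one condition per station."""
    from seeq import spy
    spy.push(
        data=capsules,
        metadata=pd.DataFrame([{
            'Name': station,
            'Type': 'Condition',
            'Maximum Duration': '7d',  # assumption: longest expected run
        }]),
        workbook='Tulip Integration',
        datasource='Tulip Operations',
    )
```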
The above notebook can be run on a schedule with the following command:
spy.jobs.schedule('every 6 hours')
This will pull the data from the Tulip Table into Seeq to allow for quick analysis. The notebook requires you to provide a tenant, an API key, and a table name, and it uses this REST API method to retrieve the records. Once these are provided, the data is pulled into a dataset called Tulip Operations and scoped to a workbook called Tulip Integration.
We can now leverage the capsule properties to start isolating interesting periods of time. For example, with a formula where $ea is the Endbell Assembly condition from the Tulip Table, we can create a new condition that keeps only the capsules where the state is Running.
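A formula along these lines would do this with Seeq Formula's keep() function; this is a sketch, and the property name ('State') and value ('Running') are assumptions about the table's columns:

```
$ea.keep('State', isEqualTo('Running'))
```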
Once a full analysis is created, Seeq content can be displayed in a Tulip App as an iFrame, allowing for the combination of Tulip and Seeq data:
Data can be pushed back to Tulip using the create record API, allowing Tulip Dashboards to contain Seeq content:
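A sketch of the push-back direction, assuming a results table exists in Tulip; the tenant, credentials, table ID, and field names in the payload are placeholders for your account:

```python
import uuid

TENANT = 'your-tenant'                 # placeholder: your Tulip tenant
API_AUTH = ('apikey.2_...', 'secret')  # placeholder: Tulip API key credentials
TABLE_ID = 'seeq-results-table-id'     # placeholder: target table for Seeq results


def build_record(station: str, cycle_time_s: float) -> dict:
    """Assemble a create-record payload; field names are assumptions about the target table."""
    return {
        'id': str(uuid.uuid4()),   # each record needs a unique id
        'station': station,
        'cycle_time_s': cycle_time_s,
    }


def create_record(record: dict) -> dict:
    """POST the record to the Tulip Tables create-record endpoint."""
    import requests  # local import so build_record stays testable offline
    url = f'https://{TENANT}.tulip.co/api/v3/tables/{TABLE_ID}/records'
    resp = requests.post(url, auth=API_AUTH, json=record)
    resp.raise_for_status()
    return resp.json()
```

A scheduled Data Lab notebook can compute the value in Seeq (e.g. a cycle time per capsule) and call `create_record(build_record(...))` for each result.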