Chris Herrera

Seeq Team

Everything posted by Chris Herrera

  1. Tulip is one of the leading frontline operations platforms, providing manufacturers with a holistic view of quality, process cycle times, OEE, and more. The Tulip platform provides the ability to create user-friendly apps and dashboards to improve the productivity of your operations without writing any code. Integrating Tulip and Seeq allows Tulip app and dashboard developers to directly include best-in-class time series analytics in their displays. Additionally, Seeq can access a wealth of contextual information through Tulip Tables.

Accessing Tulip Table Data in Seeq

Tulip Table data is an excellent source of contextual information, as it often includes information not gathered by other systems. In our example, we will be using a Tulip Table called (Log) Station Activity History. This data allows us to see how long a line process has been running, the number of components targeted for assembly, the number actually assembled, and the number of defects.

The easiest way to bring this into Seeq is as condition data. We will create one condition per station, with each table column becoming a capsule property. This can be achieved with a scheduled notebook:

```python
import requests
import json
import pandas as pd
from seeq import spy

# Placeholders: provide your own Tulip tenant, API key, and table ID
TENANT_NAME = "your-tenant"
AUTH_TOKEN = "Basic <your-api-key>"
TABLE_NAME = "<your-table-id>"

# This method gets data from a Tulip Table and formats the DataFrame into a Seeq-friendly structure
def get_data_from_tulip(table_id, debug):
    url = f"https://{TENANT_NAME}.tulip.co/api/v3/tables/{table_id}/records"
    headers = {"Authorization": AUTH_TOKEN}
    params = {
        "limit": 100,
        "sortOptions": '[{"sortBy": "_createdAt", "sortDir": "asc"}]'
    }
    all_data = []
    data = None
    while True:
        # Paginate the requests using the last sequence number seen
        if data:
            last_sequence = data[-1]['_sequenceNumber']
            params['filters'] = json.dumps([{"field": "_sequenceNumber", "functionType": "greaterThan", "arg": last_sequence}])
        # Make the API request
        response = requests.get(url, headers=headers, params=params)
        if debug:
            print(json.dumps(response.json(), indent=4))
        # Check if the request was successful
        if response.status_code == 200:
            # Parse the JSON response
            data = response.json()
            all_data.extend(data)
            if len(data) < 100:
                break  # Exit the loop once the last page has been fetched
        else:
            print(f"API request failed with status code: {response.status_code}")
            break

    # Convert the JSON data to a pandas DataFrame shaped for spy.push
    df = pd.DataFrame(all_data)
    df = df.rename(columns={'id': '_id'})
    df.columns = df.columns.str.split('_').str[1]
    df = df.drop(columns=['sequenceNumber', 'hour'], errors='ignore')
    df['createdAt'] = pd.to_datetime(df['createdAt'])
    df['updatedAt'] = pd.to_datetime(df['updatedAt'])
    df = df.rename(columns={'createdAt': 'Capsule Start', 'updatedAt': 'Capsule End', 'duration': 'EventDuration'})
    df = df.dropna()
    return df

# This method builds the Seeq metadata describing one station's condition
def create_metadata(station_data, station_name):
    metadata = pd.DataFrame([{
        'Name': station_name,
        'Type': 'Condition',
        'Maximum Duration': '1d',
        'Capsule Property Units': {'status': 'string', 'id': 'string', 'station': 'string', 'duration': 's'}
    }])
    return metadata

# This method splits the DataFrame by station. Each station will become a condition in Seeq.
def create_dataframe_per_station(all_data, debug):
    data_by_station = all_data.groupby('station')
    if debug:
        for station, group in data_by_station:
            print(f"DataFrame for station: {station}")
            print("Number of rows:", len(group))
            display(group)
    return data_by_station

# This method sends the data to Seeq
def send_to_seeq(data, metadata, workbook, quiet):
    spy.push(data=data, metadata=metadata, workbook=workbook, datasource="Tulip Operations", quiet=quiet)

data = get_data_from_tulip(TABLE_NAME, False)
per_station = create_dataframe_per_station(data, False)
for station, group in per_station:
    metadata = create_metadata(group, station)
    send_to_seeq(group, metadata, 'Tulip Integration >> Bearing Failure', False)
```

The above notebook can be run on a schedule with the following command:

```python
spy.jobs.schedule('every 6 hours')
```

This will pull the data from the Tulip Table into Seeq to allow for quick analysis. The notebook above will need you to provide a tenant, API key, and table name, and it uses this REST API method to get the records. Once provided, the data will be pulled into a datasource called Tulip Operations and scoped to a workbook called Tulip Integration.

We can now leverage the capsule properties to start isolating interesting periods of time. For example, using the formula

$ea.keep('status', isEqualTo('RUNNING'))

where $ea is the Endbell Assembly condition from the Tulip Table, we can create a new condition keeping only the capsules where the status is RUNNING.

Once a full analysis is created, Seeq content can be displayed in a Tulip app as an iFrame, allowing for the combination of Tulip and Seeq data. Data can also be pushed back to Tulip using the create record API, which allows Tulip dashboards to contain Seeq content, as sketched below.
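As a rough illustration of that write-back, here is a minimal sketch that reuses the TENANT_NAME and AUTH_TOKEN placeholders from the notebook above. The table ID, record id, and field ids ("station", "avg_cycle_time_s") are hypothetical; substitute the field ids of your own Tulip Table, and check the create record API documentation for the exact body shape your table expects.

```python
import requests

# A minimal sketch of pushing a record back to a Tulip Table.
# TENANT_NAME and AUTH_TOKEN are the placeholders defined in the notebook above;
# the table ID and field ids below are hypothetical examples.
def create_tulip_record(table_id, record_id, fields):
    url = f"https://{TENANT_NAME}.tulip.co/api/v3/tables/{table_id}/records"
    headers = {"Authorization": AUTH_TOKEN, "Content-Type": "application/json"}
    # The record id travels in the request body alongside the field values
    response = requests.post(url, headers=headers, json={"id": record_id, **fields})
    response.raise_for_status()
    return response.json()

# Hypothetical example: write a Seeq-computed KPI into a Tulip Table backing a dashboard
create_tulip_record("<your-table-id>", "endbell-2024-06-01",
                    {"station": "Endbell Assembly", "avg_cycle_time_s": 42.7})
```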
  2. Hi Onyekachi,

The only way to implement this is with a polling loop. Seeq is a lazily evaluating system, meaning that no data pull or calculation is performed without a query, driven either via the UI or our API. Some customers have implemented polling loops that query for new data or capsules and push the results of that query onto a messaging infrastructure such as Kafka or Kinesis. This is generally performed using our Seeq Python library, SPy.

Regards,
Chris
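As a rough sketch of that polling pattern (not an official integration): the loop below assumes the kafka-python package, placeholder server credentials, a hypothetical condition name ('Endbell Assembly'), and a hypothetical topic name ('seeq-capsules'). SPy's spy.search and spy.pull do the querying on each pass.

```python
import json
import time
import pandas as pd
from kafka import KafkaProducer  # assumes the kafka-python package
from seeq import spy

# Placeholder server URL and access key credentials
spy.login(url='https://your-seeq-server.com', access_key='<key>', password='<secret>')

producer = KafkaProducer(bootstrap_servers='localhost:9092',
                         value_serializer=lambda v: json.dumps(v, default=str).encode('utf-8'))

# Hypothetical condition whose new capsules we want to publish
items = spy.search({'Name': 'Endbell Assembly', 'Type': 'Condition'}, quiet=True)

last_poll = pd.Timestamp.now(tz='UTC') - pd.Timedelta(minutes=5)
while True:
    now = pd.Timestamp.now(tz='UTC')
    # Each pass queries Seeq for capsules in the window since the previous pass
    capsules = spy.pull(items, start=last_poll, end=now, quiet=True)
    for _, capsule in capsules.iterrows():
        producer.send('seeq-capsules', capsule.to_dict())
    producer.flush()
    last_poll = now
    time.sleep(60)  # polling interval
```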