To ingest data:
- Choose the destination table.
- Define a Transform (write schema).
- Choose a method of ingest.
- Start ingestion.
Choose where the data should reside by creating a Project and, within it, a Table.
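As a sketch of this step, a Project and Table can be created through a REST-style configuration API. The host, token, and endpoint paths below are illustrative assumptions, not confirmed Hydrolix URLs; the requests are built but not sent.

```python
# Minimal sketch: create a Project, then a Table inside it, via POSTs to
# a config API. Host, token, and paths are placeholder assumptions.
import json
import urllib.request

HOST = "https://my-cluster.example.com"  # placeholder cluster hostname
TOKEN = "my-api-token"                   # placeholder auth token

def build_create_request(path: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a POST request for one config object."""
    return urllib.request.Request(
        HOST + path,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )

# A Project groups related tables; the Table is the ingest destination.
project_req = build_create_request("/config/v1/projects/", {"name": "web"})
table_req = build_create_request("/config/v1/projects/web/tables/", {"name": "logs"})
# urllib.request.urlopen(project_req) would actually submit the call.
```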
To load data, the system needs to know the type, format, and structure of the incoming data. This is described with a transform schema (the write schema).
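A transform can be sketched as a small document, shown here as a Python dict. The keys follow the general shape of a transform (input format plus a list of output columns), but treat the exact field names and datatypes as illustrative assumptions, not a verified schema.

```python
# A minimal transform (write schema) sketch. Keys and datatypes are
# illustrative assumptions about the transform document's shape.
transform = {
    "name": "logs_transform",
    "type": "json",  # format of the incoming data (e.g. json, csv)
    "settings": {
        "output_columns": [
            {
                # a table needs one primary datetime column for partitioning
                "name": "timestamp",
                "datatype": {
                    "type": "datetime",
                    "primary": True,
                    "format": "2006-01-02T15:04:05Z",  # example layout
                },
            },
            {"name": "status", "datatype": {"type": "uint64"}},
            {"name": "path", "datatype": {"type": "string"}},
        ]
    },
}
```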
Data can be ingested into Hydrolix using the HTTP Streaming API, Kafka, or a Batch Job. You can use one or all of these methods to load data.
- Streaming API (HTTP) - Stream your data to your Hydrolix cluster using an HTTP-based API.
- Batch Load - Load data from a source bucket, or, on AWS, use S3 Notifications (also known as Auto-Ingest).
- Kafka Streaming Ingestion - Attach Hydrolix to your Kafka cluster to ingest your data.
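As a sketch of the streaming path, a single JSON event can be POSTed to the cluster's streaming endpoint. The endpoint path, host, and `x-hdx-*` header names below follow the general shape of the HTTP Streaming API but should be checked against your cluster's documentation; the request is built but not sent.

```python
# Sketch: POST one JSON event to a streaming-ingest endpoint. The host is
# a placeholder; verify the path and header names for your cluster.
import json
import urllib.request

event = {"timestamp": "2024-01-01T00:00:00Z", "status": 200, "path": "/"}

req = urllib.request.Request(
    "https://my-cluster.example.com/ingest/event",  # assumed endpoint path
    data=json.dumps(event).encode(),
    headers={
        "Content-Type": "application/json",
        "x-hdx-table": "web.logs",            # destination project.table
        "x-hdx-transform": "logs_transform",  # transform (write schema) to apply
    },
    method="POST",
)
# urllib.request.urlopen(req) would stream the event to the cluster.
```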
- Batch: start a job via the UI or the Batch Jobs API, or upload files to your bucket (with AWS S3 Notifications).
- Streaming: send data to the HTTP Streaming API.
- Kafka: start your Kafka brokers.
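The batch path above can be sketched as a job description you would submit to the Batch Jobs API (or configure through the UI). The payload keys, bucket URL, and names here are illustrative assumptions, not a verified job schema.

```python
# Sketch of a batch job description. Keys, names, and the bucket URL are
# placeholder assumptions about the job payload's shape.
batch_job = {
    "name": "backfill-2024-01",
    "type": "batch_import",
    "settings": {
        # where the source files live; a placeholder bucket
        "source": {"url": "s3://my-bucket/logs/2024/01/"},
        # which table and transform the files are written through
        "target_table": "web.logs",
        "transform": "logs_transform",
    },
}
# POSTing a document like this to the Batch Jobs API would start the job;
# with AWS S3 Notifications, uploads to the bucket trigger ingest instead.
```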