Loading Data

To ingest data:

  1. Choose the destination table.
  2. Define or choose an ingest transform schema.
  3. Choose a method of ingest.
  4. Begin ingestion.

Destination Table

Choose where the data should reside. If the table does not exist, create it.
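
Below is a minimal sketch of creating a table programmatically. The hostname, org/project IDs, and the exact config API path are placeholders for illustration; check your cluster's API reference for the authoritative routes and required fields.

```python
# Sketch: create a destination table via the config API (placeholder paths).
import requests

HDX_HOST = "https://my-cluster.hydrolix.example"  # placeholder hostname
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}  # token from your login flow

# Assumed path of the form /config/v1/orgs/{org}/projects/{project}/tables/
tables_url = f"{HDX_HOST}/config/v1/orgs/ORG_ID/projects/PROJECT_ID/tables/"

resp = requests.post(tables_url, headers=HEADERS, json={"name": "my_table"})
resp.raise_for_status()
print(resp.json())  # keep the table ID; transforms are attached to it later
```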

Define a Transform Schema

A transform schema must either be chosen from the existing schemas attached to the table or sent along with the data via the HTTP Streaming API. Transform schemas describe the data type, format, and structure of the incoming data so that Hydrolix knows how to treat each individual item. Transforms can be defined using the API or the UI.
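
The sketch below shows the general shape of a transform being attached to a table via the API. The field names (type, output_columns, datatype, primary) follow the usual outline of a Hydrolix transform but are illustrative, and the URL path is a placeholder; consult the transform reference for the exact schema your cluster expects.

```python
# Sketch: define a JSON transform and attach it to a table (illustrative fields).
import requests

transform = {
    "name": "my_json_transform",
    "type": "json",  # format of the incoming data
    "settings": {
        "output_columns": [
            {
                # a primary datetime column is required for the table
                "name": "timestamp",
                "datatype": {
                    "type": "datetime",
                    "primary": True,
                    "format": "2006-01-02T15:04:05Z",  # example format string; see the transform docs
                },
            },
            {"name": "status", "datatype": {"type": "uint32"}},
            {"name": "message", "datatype": {"type": "string"}},
        ]
    },
}

# Assumed path: transforms live under a specific table in the config API.
url = ("https://my-cluster.hydrolix.example"
       "/config/v1/orgs/ORG_ID/projects/PROJECT_ID/tables/TABLE_ID/transforms/")
resp = requests.post(url, headers={"Authorization": "Bearer YOUR_TOKEN"}, json=transform)
resp.raise_for_status()
```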

Which Method?

Data can be ingested into Hydrolix using the HTTP Streaming API, Kafka, or a Batch Job.

Batch jobs are defined via the Hydrolix UI or API and are intended for initial ingestion of existing data.

The Streaming API is used to send data to Hydrolix on an ongoing basis and accepts a variety of data types and formats. Streaming does not necessarily mean single rows or events: a single API call can carry more than one row or event.
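
As a sketch of that last point, the request below sends two events in one call. The /ingest/event path and the x-hdx-table / x-hdx-transform headers reflect the common shape of the Hydrolix streaming endpoint, but verify them, along with the hostname, against your cluster's documentation.

```python
# Sketch: post a small batch of JSON events in a single Streaming API call.
import json
import requests

events = [
    {"timestamp": "2023-05-01T12:00:00Z", "status": 200, "message": "ok"},
    {"timestamp": "2023-05-01T12:00:01Z", "status": 500, "message": "error"},
]

resp = requests.post(
    "https://my-cluster.hydrolix.example/ingest/event",  # placeholder host
    headers={
        "Authorization": "Bearer YOUR_TOKEN",
        "Content-Type": "application/json",
        "x-hdx-table": "my_project.my_table",    # destination table
        "x-hdx-transform": "my_json_transform",  # transform attached to the table
    },
    data=json.dumps(events),  # more than one event per call
)
resp.raise_for_status()
```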

If you are loading existing data from a storage location as a one-time, initial ingestion, set up a Batch Job.

If you want to regularly send data to Hydrolix, use the HTTP Streaming API or Kafka.

Kick off Ingestion

For a Batch Job, this can be done via the UI or the Hydrolix API.
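
A minimal sketch of submitting a batch job through the API is shown below. The jobs path, the source URL, and the settings fields are placeholders for illustration; the UI builds an equivalent request for you, and the batch ingest reference lists the exact fields your cluster accepts.

```python
# Sketch: submit a batch ingest job via the API (placeholder path and fields).
import requests

job = {
    "name": "initial_load",
    "settings": {
        "source": {"url": "s3://my-bucket/exports/"},  # existing data to ingest
    },
}

url = "https://my-cluster.hydrolix.example/config/v1/orgs/ORG_ID/projects/PROJECT_ID/jobs/batch/"
resp = requests.post(url, headers={"Authorization": "Bearer YOUR_TOKEN"}, json=job)
resp.raise_for_status()
print(resp.json())  # returns the job record, which can be polled for progress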

For streaming, kick off ingestion by calling the HTTP Streaming API or by starting your Kafka brokers so that data begins flowing to the configured topic.
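
On the Kafka side, once a Kafka source is configured for the table, Hydrolix pulls from the topic, so starting ingestion amounts to producing messages to it. The sketch below uses the kafka-python package; the broker address and topic name are placeholders, and each message should match the format the table's transform expects.

```python
# Sketch: produce JSON events to the topic Hydrolix is configured to consume.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka-broker.example:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("my_topic", {"timestamp": "2023-05-01T12:00:00Z", "status": 200})
producer.flush()
```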

