Ingesting Your Data

To ingest data:

  1. Choose the Streaming or Batch Job method
  2. Choose the destination table
  3. Define or choose an Ingest Transform Schema
  4. Begin Ingestion

Streaming or Batch?

Data can be ingested into Hydrolix using the Streaming API or by configuring a Batch Job.

Batch jobs are defined via the Hydrolix UI or API and are intended for initial ingestion of existing data.

The Streaming API is used to send data to Hydrolix on an ongoing basis. It accepts a variety of data types and formats. Streaming does not mean one row or event per request; a single API call can carry multiple rows or events.
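As a sketch of the multi-event point above, one streaming request can carry several events in a single body. The field names here (timestamp, status, path) are illustrative examples, not a required Hydrolix schema, and newline-delimited JSON is shown as one common wire format:

```python
import json

# Two events bundled into a single Streaming API request body.
# Field names are hypothetical; your transform defines the real ones.
events = [
    {"timestamp": "2023-05-01T12:00:00Z", "status": 200, "path": "/index.html"},
    {"timestamp": "2023-05-01T12:00:01Z", "status": 404, "path": "/missing"},
]

# Newline-delimited JSON (ND-JSON): one JSON object per line.
# A single JSON array is another common shape for the same data.
body = "\n".join(json.dumps(e) for e in events)
print(body)
```

Either way, both events arrive in one API call rather than two.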

If you have existing data to load for the first time, set up a Batch Job.

If you want to regularly send data to Hydrolix, use the Streaming API.

Destination Table

Choose where the data should reside. If the table does not exist, create it.
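If the table needs to be created programmatically, Hydrolix exposes a configuration API. The host, org and project identifiers, token, and endpoint path below are placeholders for illustration only; check your cluster's API documentation for the exact routes. The request is built but deliberately not sent:

```python
import json
import urllib.request

# Sketch of creating a destination table via the configuration API.
# Host, ORG_UUID, PROJECT_UUID, and ACCESS_TOKEN are placeholders.
host = "https://my-hydrolix.example.com"
table = {"name": "web_logs", "description": "HTTP access logs"}

req = urllib.request.Request(
    url=f"{host}/config/v1/orgs/ORG_UUID/projects/PROJECT_UUID/tables/",
    data=json.dumps(table).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer ACCESS_TOKEN",
    },
    method="POST",
)
# urllib.request.urlopen(req) would submit the request (not executed here).
```

The same creation step can also be done interactively in the UI.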

Define a Transform Schema

Choose a transform schema from those already attached to the table, or send one with the data via the Streaming API. A transform schema describes the type, format, and structure of the incoming data so that Hydrolix knows how to treat each individual field. Transforms can be defined using the API or the UI.
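The following is a minimal transform sketch. The key names (type, settings, output_columns, datatype) follow the general shape of Hydrolix transforms, but the exact schema varies by version, so treat this structure as illustrative and verify it against your cluster's documentation:

```python
import json

# Illustrative transform: maps incoming JSON fields to table columns.
# Column names and datatypes here are hypothetical examples.
transform = {
    "name": "web_logs_transform",
    "type": "json",  # format of the incoming data
    "settings": {
        "output_columns": [
            {"name": "timestamp",
             "datatype": {"type": "datetime", "primary": True}},
            {"name": "status", "datatype": {"type": "uint32"}},
            {"name": "path", "datatype": {"type": "string"}},
        ]
    },
}
print(json.dumps(transform, indent=2))
```

A transform like this would typically be attached to the destination table so that every incoming record is parsed the same way.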

Kick off Ingestion

For a Batch Job, ingestion is started via the UI.

For Streaming, ingestion begins when you call the Streaming API with your data.
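The streaming call above can be sketched as a simple HTTP POST. The host is a placeholder, and while an `/ingest/event` path and an `x-hdx-table` header naming the destination table follow the general shape of the Streaming API, both should be verified against your cluster's documentation. The request is built but not sent:

```python
import json
import urllib.request

# Sketch of one Streaming API call; host and table name are placeholders.
host = "https://my-hydrolix.example.com"
events = [{"timestamp": "2023-05-01T12:00:00Z", "status": 200}]

req = urllib.request.Request(
    url=f"{host}/ingest/event",
    data=json.dumps(events).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "x-hdx-table": "my_project.web_logs",  # destination as project.table
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the events (not executed here).
```

Repeating this call on a schedule, or from an agent as events arrive, is what "regularly sending data" looks like in practice.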