Batch Ingest

Batch ingestion loads data from a storage bucket into a target table. We provide the following mechanisms:

  • Batch Job API. A one-off task that will load one or more files based on the Job configuration and then stop.
  • Batch auto-ingest. A continuous task that ingests new files as they arrive in a storage bucket, using a combination of table settings and the cloud provider's pub/sub and storage notification mechanisms.

Batch ingest supports CSV and JSON data formats. Hydrolix requires read permissions to access external storage buckets.

The majority of the work is done by the batch-peer and batch-indexer containers in a pod called batch-peer. By default, batch ingest processes one file at a time, with the replicas tunable set to 1; increase the replicas value to increase parallelism. The containers use a predefined resource profile, but you can override memory or storage settings if needed. For more details, see Scale your Kubernetes Cluster.
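For example, to process two files in parallel you could scale the batch-peer pods in your hydrolixcluster.yaml. This is a minimal sketch; confirm the exact tunable names against Scale your Kubernetes Cluster:

# Assumed location of the replicas tunable in hydrolixcluster.yaml;
# verify against the "Scale your Kubernetes Cluster" documentation.
spec:
  scale:
    batch-peer:
      replicas: 2  # process two files at a time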

You can configure cloud storage to notify Hydrolix when new data is available for ingest. For more information, see the batch auto-ingest documentation.

Create a Batch Ingest Job via API

👍

Prerequisite Steps

Before creating a batch job, complete the prerequisite steps: the target table and transform must already exist, and Hydrolix must have read access to the source bucket.

Once you've completed the prerequisites, Create a Batch Job. You must specify the following (a request sketch follows the list):

  • the table where Hydrolix will store the data
  • the transform Hydrolix should use to process the data
  • the URL where Hydrolix should fetch the data
  • the regex filter Hydrolix should use to limit ingestion to a subset of data (optional)
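As a sketch, you can submit the job document with a POST to the Config API batch jobs endpoint. The host, token, and org ID below are placeholders, and the endpoint path is an assumption to verify against your API reference; the payload is a job document such as the examples that follow:

# Submit a batch job; batch_job.json holds the job document.
# ORG_ID and HDX_TOKEN are assumed to be set in the environment.
curl -s -X POST "https://my.hydrolix.example/config/v1/orgs/${ORG_ID}/jobs/batch/" \
  -H "Authorization: Bearer ${HDX_TOKEN}" \
  -H "Content-Type: application/json" \
  -d @batch_job.json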

AWS Example

{
    "type": "batch_import",
    "name": "job_sample_data",
    "description": "sample data on aws",
    "settings": {
        "max_active_partitions": 576,
        "max_rows_per_partition": 33554432,
        "max_minutes_per_partition": 20,
        "source": {
            "settings": {
                "url": "s3://mydatatoingest/mypath/"
            },
            "table": "sample.data",
            "type": "batch",
            "subtype": "aws s3",
            "transform": "mytransform"
        },
        "regex_filter": "^s3://mydatatoingest/mypath/.*.gz"
    }
}

GCP Example

🚧

GCP/GKE Note

Make sure to add the bucket permissions to your service account. For example:

gsutil iam ch serviceAccount:${GCP_STORAGE_SA}:roles/storage.objectAdmin gs://mybucket

{
    "type": "batch_import",
    "name": "job_sample_data",
    "description": "sample data on gcp",
    "settings": {
        "max_active_partitions": 576,
        "max_rows_per_partition": 33554432,
        "max_minutes_per_partition": 20,
        "source": {
            "settings": {
                "url": "gs://mydatatoingest/mypath/"
            },
            "table": "sample.data",
            "type": "batch",
            "subtype": "gcp gs",
            "transform": "mytransform"
        },
        "regex_filter": "^gs://burninbucket/gcp-prod-test/.*.gz"
    }
}

Linode Example

🚧

Limitations

The Kubernetes cluster must use the same account as the Linode bucket. Linode object storage does not support auto-ingest. Because Linode Object Storage is S3-compatible, the job uses an s3:// URL and the aws s3 subtype.

{
    "type": "batch_import",
    "name": "job_sample_data",
    "description": "sample data on Linode",
    "settings": {
        "max_active_partitions": 576,
        "max_rows_per_partition": 33554432,
        "max_minutes_per_partition": 20,
        "source": {
            "settings": {
                "url": "s3://mydatatoingest/mypath/"
            },
            "table": "sample.data",
            "type": "batch",
            "subtype": "aws s3",
            "transform": "mytransform"
        },
        "regex_filter": "^s3://mydatatoingest/mypath/.*.gz"
    }
}

Job Attributes

A job describes how to treat the data set as a whole as it is being ingested. Hydrolix batch jobs can load data from varying file and path structures: a single file, a directory of files, or a directory of files with a filter.

For example:

  • A single file, e.g. "s3://mybucket/another/file.gz"
  • All files under a path, e.g. "s3://mybucket/another/"
  • All files matching a regex pattern, e.g. "s3://mybucket/" along with "settings.regex_filter": "^s3://mybucket/.*/.*.gz"

The job document has the following top-level elements:

  • name: A unique name for this job in this organization.
  • description: An optional description.
  • type: Only accepts the value batch_import.
  • settings: The settings to use for this particular ingestion job.

The settings object

Some data sets consist of many small files; others consist of fewer, larger files. Hydrolix ultimately writes data into "partitions". The number and size of partitions influence query performance.

The best settings depend on the data set, but consider the following:

  1. A partition is processed as a single unit, so queries over large partitions cannot be parallelized as much as queries over smaller ones.
  2. Smaller partitions allow more parallelization, but also mean less efficient use of resources.

Example Settings:

{
    ...
    "settings": {
        "max_active_partitions": 576,
        "max_rows_per_partition": 10000000,
        "max_minutes_per_partition": 14400,
        "input_concurrency": 1,
        "input_aggregation": 1536000000,
        "max_files": 0,
        "dry_run": false,
        "source": {
            ...
        }
    }
}

The following are the default settings. We would suggest starting with the defaults and then tuning.

  • max_minutes_per_partition: The maximum number of minutes of data to hold in a partition. For dense data sets, five minutes of data may be massive; in other data sets, two weeks of data may be required for the same volume. The velocity of your data will influence this value. Example: 15
  • max_active_partitions: The maximum number of active partitions. Example: 576
  • max_rows_per_partition: Based on the width of your data, controls the total data size of the partition. Example: 64000
  • max_files: The number of files to dispatch to peers. Limiting is typically only used for testing; in general this should not be set, so that the entire bucket is processed. Example: 0 (disabled)
  • input_concurrency: Restricts the number of batch peer processes run on a single instance. This should be kept at 1; if you wish to change it, please contact Hydrolix. Example: 1
  • input_aggregation: Controls how much data is considered a single unit of work, which ultimately drives the size of the partition. Files larger than input_aggregation are processed as a single unit of work. Example: 1536000000
  • dry_run: Whether or not the job is a dry run. If true, all indexing work is done but no results are uploaded; the resulting HDX partitions are effectively thrown away. Example: false
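
To make the sizing concrete: with the default input_aggregation of 1536000000 bytes (roughly 1.5 GB), a prefix holding 15 GB of small files would be grouped into about 15 / 1.5 = 10 units of work, each producing its own partition unless max_rows_per_partition or max_minutes_per_partition is reached first. This is an illustrative calculation, not a guarantee of exact partition counts.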

A note on Ingest Parallelization

Batch ingest is performed on compute instances. Batch performance can be improved by:

  1. Adding more batch instances
  2. Adding larger batch instances with more parallelism

Each deployment is different; the type and number of instances can be adjusted via the Hydrolix configuration. In addition, max_active_partitions tells Hydrolix how many partitions it should work on in parallel at one time:

  • max_active_partitions: The total number of partitions that should be processing on a single batch peer at a time. This is a balance of speed and memory.

Regex Filter

If data is stored in a complex bucket structure on AWS S3 that cannot be expressed with a simple S3 path, regex_filter allows you to express the structure pattern to search. It is used in conjunction with settings.url, which narrows down the scope.

Given the example S3 source path s3://mybucket/level1/2020/01/app1/pattern_xyz.gz with the setting "url": "s3://mybucket/", possible regex_filter patterns could be:

  • ^.*\\.gz$
  • ^s3://mybucket/level1/2020/\\d{2}/app1/.*.gz
  • ^.*/level1/2020/\\d{2}/app1/.*.gz
  • ^.*/level1/2020/\\d{2}/.*/pattern_\\w{3}.gz
  • ^.*/level1/2020/\\d{2}/.*/pattern_.*.gz

  • regex_filter: Filters the files to ingest using a regex match. Note that backslashes ('\') need to be escaped within the regex string. The pattern starts from s3://.
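
Putting the two together, a settings fragment scopes the listing with settings.url and narrows it with regex_filter. The bucket and path names here are placeholders:

{
    ...
    "settings": {
        "source": {
            "settings": {
                "url": "s3://mybucket/"
            }
        },
        "regex_filter": "^s3://mybucket/level1/2020/\\d{2}/app1/.*.gz"
    }
}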

The source element

The source element specifies information about the data itself, where it is, and how it should be treated.

Example source:

{
    ...
    "source": {
        "table": "sample.trips",
        "type": "batch",
        "subtype": "aws s3",
        "transform": "mytransform",
        "settings": {
            "url": "s3://mydatatoingest"
        }
    }
    ...
}

The source elements are:

  • table: The table where the data should go, in the format <project_name>.<table_name>. Example: "table": "myproject.mytable"
  • type: Only accepts the value batch. Example: "type": "batch"
  • subtype: Accepts either aws s3 or gcp gs. Example: "subtype": "gcp gs"
  • transform: The name of an existing transform to use for this job. Example: "transform": "mytransform"
  • settings.url: The path of the files to be ingested. The given location is analyzed and all files in the path are ingested. Example: "url": "gs://mydatatoingest/path/"

Cancel Jobs

Use the cancel endpoint /v1/orgs/{org_id}/jobs/batch/{job_id}/cancel to cancel a batch ingest job and the tasks associated with the job ID. The cancellation will be reflected in the status output.
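
A cancellation sketch follows; the host and token are placeholders, and the HTTP method is an assumption to verify against your API reference:

# Cancel a running batch job by ID.
curl -s -X POST "https://my.hydrolix.example/v1/orgs/${ORG_ID}/jobs/batch/${JOB_ID}/cancel" \
  -H "Authorization: Bearer ${HDX_TOKEN}"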

Jobs Status

Get the status of a job and its tasks from /v1/orgs/{org_id}/jobs/batch/{job_id}/status. This endpoint is suitable for polling for job completion.
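
For example, a loop that polls until the job leaves the READY/RUNNING states (a sketch using jq; the host and token are placeholders):

# Poll the status endpoint every 10 seconds until the job is DONE or CANCELED.
while true; do
  status=$(curl -s "https://my.hydrolix.example/v1/orgs/${ORG_ID}/jobs/batch/${JOB_ID}/status" \
    -H "Authorization: Bearer ${HDX_TOKEN}" | jq -r '.status')
  echo "job status: ${status}"
  case "${status}" in READY|RUNNING) sleep 10 ;; *) break ;; esac
done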

Job Response codes

  • 200: Success
  • 404: Job not found
  • 405: Request was not a GET
  • 500: Internal error

Response body on success

{
  "status": "RUNNING",
  "status_detail": {
    "tasks": {
      "INDEX": {
        "READY": 5
      },
      "LIST": {
        "DONE": 1
      }
    },
    "percent_complete": 0.16666667,
    "estimated": false
  }
}

The response body contains the following keys:

  • .status: Status of the job. One of READY, RUNNING, DONE, or CANCELED.
  • .status_detail: In-depth task status information, if tasks exist. Optional.
  • .status_detail.tasks: Aggregations of task types and states.
  • .status_detail.percent_complete: Job progress percentage as a float.
  • .status_detail.estimated: Whether or not the progress is estimated. Once all listing tasks are complete, the progress percentage is no longer estimated.

AWS Data to GKE - Cross cloud

If you have data in AWS storage and your cluster is in Google GKE, it is possible to load the data from AWS. To do this, you will need to set up a user and role within AWS with access to the bucket you want to retrieve the data from.

You will then need to add the AWS Secret Key and ID to your Kubernetes deployment.

This can be done either with the hkt command, generating your hydrolixcluster.yaml and applying it as follows:

./hkt hydrolix-cluster --env AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY > hydrolixcluster.yaml

or directly in the hydrolixcluster.yaml:

spec:
  admin_email: .....
  ....
  env:
    AWS_ACCESS_KEY_ID: AWS_ACCESS_KEY_ID_HERE
    AWS_SECRET_ACCESS_KEY: AWS_ACCESS_SECRET_KEY_HERE
  host: ..........
  ip_allowlist:
  - source: ................

To run the job via the Batch Jobs API, specify the URL with an s3:// path and a subtype of aws s3:

{
    "type": "batch_import",
    "name": "job_sample_data",
    "description": "sample data on aws",
    "settings": {
        "max_active_partitions": 576,
        "max_rows_per_partition": 33554432,
        "max_minutes_per_partition": 20,
        "source": {
            "settings": {
                "url": "s3://mydatatoingest/mypath/"
            },
            "table": "sample.data",
            "type": "batch",
            "subtype": "aws s3",
            "transform": "mytransform"
        },
        "regex_filter": "^s3://mydatatoingest/mypath/.*.gz"
    }
}