Create a Project & Table with API

Hydrolix has a RESTful JSON API using JWT authentication. You can log in, create a project, create a table, and perform other actions. The full API is described here. We recommend using a standard HTTP client such as Postman or Insomnia, which makes working with APIs much easier - especially capturing and reusing environment variables.

The API base URI is specific to your deployment. We'll need to call three API endpoints:

  • POST /login with username & password. The response contains a bearer token and the org uuid
  • POST /{{org uuid}}/projects with a name to create a project. The response contains the project uuid
  • POST /{{org uuid}}/projects/{{project uuid}}/tables with a name to create a table. The response contains the table uuid



Hydrolix checks that your IP address has permission to access the API - see Enabling Access to your platform. If your IP has not been enabled, your request will time out.

Hydrolix API users receive an email asking them to set a password. If you haven't received one, ask your admin to invite you to the project. The API issues a bearer token based on your permissions.


The first step is to log in via the API:

curl -X POST '' \
-H 'Content-Type: application/json' \
-d '{
    "username": "myusername",
    "password": "mypassword"
}'

The response contains your user details, organisation and bearer token:

{
   "uuid": "cc96f3d0-7f15-4608-9d28-613ab5b6c780",
   "email": "[email protected]",
   "orgs": [
      {
         "uuid": "d1234567-1234-1234-abcd-defgh123456",
         "name": "Hydrolix",
         "type": "singletenant"
      }
   ],
   "groups": [],
   "auth_token": {
      "access_token": "thebearertoken1234567890abcdefghijklmnopqrstuvwxyz",
      "expires_in": 3600,
      "token_type": "Bearer"
   }
}
You will need the orgs.uuid and the auth_token.access_token from the response for the subsequent steps. These are normally stored as environment variables in standard HTTP client tools.
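If you are working from the shell rather than Postman or Insomnia, you can capture both values with a small script. This is a sketch: it assumes python3 is available for JSON parsing, and substitutes the sample response above for a live curl call.

```shell
# Sketch: capture orgs.uuid and auth_token.access_token from the login response.
# In practice you would capture the response from the live login call, e.g.
# RESPONSE=$(curl -s -X POST "$HDX_BASE_URI/login" ...), where $HDX_BASE_URI
# is a hypothetical variable holding your deployment's base URI.
RESPONSE='{"uuid":"cc96f3d0-7f15-4608-9d28-613ab5b6c780","orgs":[{"uuid":"d1234567-1234-1234-abcd-defgh123456","name":"Hydrolix","type":"singletenant"}],"auth_token":{"access_token":"thebearertoken1234567890abcdefghijklmnopqrstuvwxyz","expires_in":3600,"token_type":"Bearer"}}'

# orgs is an array; take the first entry's uuid.
export ORG_UUID=$(printf '%s' "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["orgs"][0]["uuid"])')
export ACCESS_TOKEN=$(printf '%s' "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["auth_token"]["access_token"])')

echo "$ORG_UUID"
echo "$ACCESS_TOKEN"
```

With both values exported, the remaining curl calls can reference them as $ORG_UUID and $ACCESS_TOKEN instead of pasting the raw strings each time.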

Create a Project

A project is a container for one or more tables; every table must belong to a project. To create a project, use the following API endpoint:

curl -X POST '{{org_uuid}}/projects/' \
-H 'Authorization: Bearer thebearertoken1234567890abcdefghijklmnopqrstuvwxyz' \
-H 'Content-Type: application/json' \
-d '{
    "name": "website",
    "description": "A description of my project"
}'

Quick Tip

Use project and table names that you know will be easy to use in your queries. Underscores, hyphens and long names are painful to use if you will be writing lots of queries!

The response contains the new project's details:

{
   "name": "website",
   "org": "d1234567-1234-1234-abcd-defgh123456",
   "description": "A description of my project",
   "uuid": "myprojectuuid-1324-abcd-efgh-132465789",
   "url": "",
   "created": "2021-05-11T13:27:59.258016Z",
   "modified": "2021-05-11T13:27:59.258037Z"
}

You will need the uuid for the next step.
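Continuing the shell approach, the project uuid can be captured the same way as the login values. This is a sketch assuming python3 is available; the live curl call is shown commented out, and the sample response above stands in for it.

```shell
# Sketch: create the project and keep its uuid. $ACCESS_TOKEN is assumed to be
# set from the login step. The live call would look like:
# PROJECT_RESPONSE=$(curl -s -X POST '{{org_uuid}}/projects/' \
#   -H "Authorization: Bearer $ACCESS_TOKEN" \
#   -H 'Content-Type: application/json' \
#   -d '{"name": "website", "description": "A description of my project"}')
PROJECT_RESPONSE='{"name":"website","org":"d1234567-1234-1234-abcd-defgh123456","description":"A description of my project","uuid":"myprojectuuid-1324-abcd-efgh-132465789"}'

export PROJECT_UUID=$(printf '%s' "$PROJECT_RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["uuid"])')
echo "$PROJECT_UUID"
```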

Create a Table

A table is where the data is stored. A single table can store one or more data sets (data sets that are searched together should be stored together, as they share strong context). To create a table, use the following endpoint:

curl -X POST '{{org_uuid}}/projects/{{project_uuid}}/tables/' \
-H 'Authorization: Bearer thebearertoken1234567890abcdefghijklmnopqrstuvwxyz' \
-H 'Content-Type: application/json' \
-d '{
    "name": "events",
    "description": "My new table"
}'

The response contains the table uuid along with its default settings:

{
   "project": "myprojectuuid-1324-abcd-efgh-132465789",
   "name": "events",
   "description": "My new table",
   "uuid": "mytableuuid-9876-9876-4567-zyxwv1234",
   "created": "2021-08-12T10:32:44.747749Z",
   "modified": "2021-08-12T11:14:21.206759Z",
   "settings": {
      "stream": {
         "hot_data_max_age_minutes": 3,
         "hot_data_max_active_partitions": 3,
         "hot_data_max_rows_per_partition": 12288000,
         "hot_data_max_minutes_per_partition": 1,
         "hot_data_max_open_seconds": 60,
         "hot_data_max_idle_seconds": 30,
         "cold_data_max_age_days": 365,
         "cold_data_max_active_partitions": 50,
         "cold_data_max_rows_per_partition": 12288000,
         "cold_data_max_minutes_per_partition": 60,
         "cold_data_max_open_seconds": 30,
         "cold_data_max_idle_seconds": 60
      },
      "age": {
         "max_age_days": 0
      },
      "reaper": {
         "max_age_days": 1
      },
      "merge": {
         "enabled": true,
         "partition_duration_minutes": 60,
         "input_aggregation": 20000000000,
         "max_candidates": 20,
         "max_rows": 10000000,
         "max_partitions_per_candidate": 100,
         "min_age_mins": 1,
         "max_age_mins": 10080
      },
      "autoingest": {
         "enabled": false,
         "pattern": "",
         "max_rows_per_partition": 12288000,
         "max_minutes_per_partition": 60,
         "max_active_partitions": 50,
         "input_aggregation": 1073741824,
         "dry_run": false
      },
      "sort_keys": [],
      "shard_key": null
   },
   "url": ""
}

Make sure to keep the table uuid, along with the other uuids you have gathered so far. Our next job is to create the write schema, or transform.
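The table step follows the same pattern from the shell. Again a sketch, assuming python3 and the variables from the earlier steps; the sample response above is substituted for a live call.

```shell
# Sketch: create the table and keep its uuid. $ACCESS_TOKEN and $PROJECT_UUID
# are assumed to be set from the previous steps. The live call would be:
# TABLE_RESPONSE=$(curl -s -X POST '{{org_uuid}}/projects/{{project_uuid}}/tables/' \
#   -H "Authorization: Bearer $ACCESS_TOKEN" \
#   -H 'Content-Type: application/json' \
#   -d '{"name": "events", "description": "My new table"}')
TABLE_RESPONSE='{"project":"myprojectuuid-1324-abcd-efgh-132465789","name":"events","description":"My new table","uuid":"mytableuuid-9876-9876-4567-zyxwv1234"}'

export TABLE_UUID=$(printf '%s' "$TABLE_RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["uuid"])')
echo "$TABLE_UUID"
```

You now have the three identifiers the later steps need: $ORG_UUID, $PROJECT_UUID and $TABLE_UUID.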


More information

As you can see above, there are many more settings available; leave them at their defaults for now. The defaults should cover the majority of cases. More information on these settings can be found here:
