HDXCTL Command Reference

Using hdxctl, the Hydrolix CLI

The program hdxctl lets you build and manage your Hydrolix clusters from the command line.

Usage

$ hdxctl [OPTIONS] COMMAND [ARGS]

Options

--help

Displays summary documentation for using hdxctl and exits.

--region

Specify the cloud-service region to apply to the command.

For example, to tell hdxctl that it should create a new cluster in AWS's us-east-2 region, using the client ID "hdxcli-example123":

$ hdxctl --region us-east-2 create-cluster hdxcli-example123

Using the --region option also stores the value you provide as a default. Commands that require a region will use this default if you subsequently run hdxctl without setting the option.
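
For example, once the default above has been stored, a later command can omit the option entirely; a sketch reusing the hypothetical client ID from the previous example:

$ hdxctl cloudformation-events hdxcli-example123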

Commands

Summary

Command                Purpose
---------------------  ------------------------------------------------------------------------------------------
cloudformation-events  Lists the CloudFormation events recorded for a given cluster or client ID.
clusters               Lists your clusters.
create-cluster         Creates a new Hydrolix stack, in full.
delete                 Deletes the compute components of a given cluster.
delete-bootstrap       Deletes the stateful components of a given cluster.
deployed-version       Displays the versions of the bootstrap, cluster, and hdxctl.
files                  Gets or sets the configuration (INI) files for Grafana and Superset.
get-license            Creates a new Hydrolix license.
goto                   Connects to a Hydrolix component via SSH.
install                Moves the hdxctl executable into the bin directory of your choice.
instances              Lists the compute instances currently in use.
list-client-ids        Lists the client IDs currently in use.
nat-gateway-ip         Displays your clusters' externally visible IP address.
route                  Associates a given client ID's hostname with a given cluster.
scale                  Displays or sets the resource use of a cluster's components.
scale-db               Scales the Catalog Database component.
smoketest              Runs basic-functionality tests on a given cluster.
support-bundle         Generates a bundle of project, table, view, and transform descriptions that can be sent to support.
tunables               Updates or gets the tunables configuration file for a given client ID.
update                 Updates the Hydrolix software run by a given client ID.
update-self            Updates hdxctl to the most current executable.
using-hdxreader        Reports whether billing is enabled or bypassed in the currently deployed infrastructure.
version                Displays the currently installed version of Hydrolix.

cloudformation-events

Lists the events for a client ID or a cluster ID. Using just the client ID returns the events for the core components. Using the client ID and the cluster ID returns the events for the cluster components.

Usage

$ hdxctl cloudformation-events [OPTIONS] CLIENT_ID [CLUSTER_ID]

Options

--help
    Displays summary documentation for this command, and exits.

Example

$ hdxctl cloudformation-events hdxcli-c2tpmuym
--------------------------------  ---------------------------  -----------------------------------  --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2020-11-02 17:52:54.603000+00:00  hdxcli-c2tpmuym-self-deploy  CREATE_IN_PROGRESS                   User Initiated
2020-11-02 17:52:57.766000+00:00  ClientBucket                 CREATE_IN_PROGRESS                   -
2020-11-02 17:52:57.984000+00:00  SelfDeployRole              .................

$ hdxctl cloudformation-events hdxcli-c2tpmuym hdx-qngq4obs
--------------------------------  ---------------------------------  -----------------------------------  ---------------------------
2020-11-02 18:14:07.639000+00:00  hdx-qngq4obs                       CREATE_IN_PROGRESS                   User Initiated
2020-11-02 18:14:14.871000+00:00  hdx-qngq4obs                       CREATE_IN_PROGRESS                   Transformation succeeded
.........

clusters

Lists your clusters.

Note that this command uses a local cache for efficiency. To reload the cache with your clusters' most recent metadata, run this command with the --sync option.

Usage

$ hdxctl clusters [OPTIONS]

Options

--add / --no-add
    Unused.
--id TEXT
    Unused.
--sync
    Reloads metadata from your clusters prior to display.
--help
    Displays summary documentation for this command, and exits.

Example

$ hdxctl clusters
CLIENT_ID        CLUSTER_ID    CREATED              HOST                    STATUS           WHO      REGION
---------------  ------------  -------------------  ---------------------  ---------------  -------  ---------
hdxcli-pmpudpqe  hdx-ex6bdtsn  2020-08-18 20:53:19  mysite.hydrolix.live.  UPDATE_COMPLETE  imauser  us-east-2
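
To force a refresh of the cached metadata before listing, add --sync (same hypothetical setup as above):

$ hdxctl clusters --sync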

create-cluster

Creates a cluster using a supplied CLIENT_ID.

Usage

$ hdxctl --region REGION create-cluster [OPTIONS] CLIENT_ID

Options

--admin-email EMAIL
    Set the default administrator email address for the cluster on first build.
--autoingest-max-receive-count
    The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (10).
--autoingest-queue-timeout
    Specify the maximum message retention period for the autoingest queue. Default 4 days.
--aws-ssh-key-name
    Add an AWS-defined key pair to the authorized keys of a deployment. Allows on-box access.
--batch-bucket-kms-arn
    Allow Hydrolix servers to decrypt a source bucket where a customer-defined KMS key is required. Takes the ARN.
--batch-peer-threads
    Specify the number of vCPUs a batch-peer should use for import jobs.
--bucket-allowlist
    Enables the architecture to access other buckets. For example: --bucket-allowlist mybucket1 --bucket-allowlist anotherbucket. Any update will overwrite previous configurations.
--ec2-detailed-monitoring
    Turns off additional monitoring for Hydrolix EC2 components. Default true.
--enable-grafana-cloudwatch
    Enable CloudWatch metrics within Grafana.
--enable-turbine-monitor
    Allow query components to monitor the Hydrolix query engine, restarting it if it hangs.
--enable-query-auth
    Enable query authorisation for requests to the query endpoint. Currently a placeholder and not in use.
--enable-query-peer-hyperthreading
    Enable hyperthreading on the query peer. Default disabled.
--environ / --no-environ
    Unused.
--full-hydrolix-access / --no-full-hydrolix-access
    Enable Hydrolix access by deploying Hydrolix support SSH keys/certificates.
--help
    Displays summary documentation for this command, and exits.
--ignore-version
    Unused.
--import-max-receive-count
    The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1).
--import-queue-timeout
    Specify the time for an individual job to time out on the SQS queue. Recommended to be kept as default.
--ip-allowlist
    Sets IP allow lists on the appropriate security groups (BastionSecurityGroup and ELBSecurityGroup) for incoming connections. IPs are provided in CIDR format. For example: --ip-allowlist 4.2.2.2/32 --ip-allowlist 8.8.8.0/24. Note: if an allow list doesn't contain "0.0.0.0/0", the /32 IP of the NAT gateway is added automatically. This is not additive; any update will overwrite previous configurations.
--kafka-tls-ca
    Allows the addition of a TLS Certificate Authority (CA) for mutual identification of Hydrolix Kafka ingest. PEM format.
--kafka-tls-cert
    Allows the addition of a TLS certificate for mutual identification of Hydrolix Kafka ingest. PEM format.
--kafka-tls-key
    Allows the addition of a TLS key for mutual identification of Hydrolix Kafka ingest. PEM format.
--listing-max-receive-count
    The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1).
--listing-queue-timeout
    Specify the time for an import job to time out on the SQS queue. Recommended to be kept as default.
--merge-interval
    Specify the interval for the merge process to trigger. Recommended to be kept as default.
--merge-max-receive-count
    The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1).
--merge-queue-timeout
    Specify the maximum message retention period for the merge queue. Recommended to be kept as default. Default 4 days.
--reaper-queue-timeout
    Specify the maximum message retention period for the reaper queue. Recommended to be kept as default. Default 4 days.
--ssh-authorized-keys
    Allows the provision of a file in the format of .ssh/authorized_keys to be appended to all .ssh/authorized_keys files in the deployed infrastructure.
--stream-shard-count
    Alter the ingest streaming shard count for Kinesis. Default 2.
--superset-threads INTEGER
    The number of threads for each Superset web worker.
--superset-timeout INTEGER
    Superset web workers that are silent for more than this many seconds are killed and restarted.
--superset-workers INTEGER
    The number of workers for handling Superset requests.
--tag
    Tags to apply to this cluster, in the format TAG-NAME:TAG-VALUE. See also the further notes about tags, below.
--vpc-cidr
    An alternate CIDR block for the deployment.
--wait
    Have the client watch the command execute. Progress information is written to STDOUT.
--use-s3-kms-key
    Boolean to enable the creation and use of a new key to encrypt the S3 bucket used by Hydrolix. Default false.
--keep-legacy-kms-key
    Boolean to enable/disable keeping the previously generated/used KMS key. Useful for changing the key without losing access to previously encrypted partitions. Default true.
--s3-kms-key-arn
    Specify a specific KMS key to use on your S3 bucket. Useful if you want to use custom settings or an external key generated in an HSM.
--boundary-policy-arn
    Specify the ARN used to set the maximum permissions that the Hydrolix policy is granted.
--use-https-with-s3
    Use HTTPS to connect when downloading partitions from S3. Required if you use a custom KMS key. Default true.

Example

The following command creates a cluster in us-east-2, printing progress as the build proceeds.

$ hdxctl --region us-east-2 create-cluster hdxcli-u7mtxhmh --wait

creating hydrolix stack
initiated creation of hdx-nglnawnx
hdx-nglnawnx status: CREATE_IN_PROGRESS, sleeping 30 seconds
hdx-nglnawnx status: CREATE_IN_PROGRESS, sleeping 30 seconds
...
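
The options above can be combined on one command line. A sketch, reusing the client ID from the example above with hypothetical bucket names, CIDR ranges, and tag values:

$ hdxctl --region us-east-2 create-cluster hdxcli-u7mtxhmh \
    --bucket-allowlist mybucket1 --bucket-allowlist anotherbucket \
    --ip-allowlist 4.2.2.2/32 --ip-allowlist 8.8.8.0/24 \
    --tag owner:alice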

delete

Deletes the compute components of the cluster, using a supplied CLIENT_ID and CLUSTER_ID.

Usage

$ hdxctl delete [OPTIONS] CLIENT_ID CLUSTER_ID

Options

--wait
    Have the client watch the command execute. Progress information is written to STDOUT.
--force / --no-force
    Unused.
--help
    Displays summary documentation for this command, and exits.

Example

$ hdxctl delete hdxcli-k754x5zs hdx-2xoq7xei --wait

delete-bootstrap

Deletes the stateful components of the stack.

Usage

$ hdxctl delete-bootstrap [OPTIONS] CLIENT_ID

Options

--wait
    Have the client watch the command execute. Progress information is written to STDOUT.
--force / --no-force
    Unused.
--help
    Displays summary documentation for this command, and exits.

Example

$ hdxctl delete-bootstrap hdxcli-cflmkpyl --wait

deployed-version

Retrieves the version information of the client stack, cluster stack and hdxctl.

Usage

$ hdxctl deployed-version CLIENT_ID CLUSTER_ID

Options

--help
    Displays summary documentation for this command, and exits.

Example

$ hdxctl deployed-version hdxcli-ek1gho6y hdx-tva1b3l4

client stack    cluster stack    hdxctl
--------------  ---------------  --------
v2.14.4         v2.14.4          v2.14.4

files

Allows the retrieval and setting of the Grafana and Superset INI files.

Usage

$ hdxctl files get [OPTIONS] CLIENT_ID [grafana|superset]

$ hdxctl files set [OPTIONS] CLIENT_ID FILENAME [grafana|superset]

Subcommands

get
    Retrieves the INI file currently deployed to Grafana or Superset.
set
    Uploads the given INI file to Grafana or Superset and deploys it.

Options

--help
    Displays summary documentation for this command, and exits.

Example

$ hdxctl files get hdxcli-cflmkpyl grafana

$ hdxctl files set hdxcli-cflmkpyl mysettings.ini grafana

get-license

Creates a new Hydrolix license, with its own, new client ID. This works as a command-line alternative to obtaining a license through the hydrolix.io website.

All of this command's arguments are required, and map to the fields found on the license registration web form.

Usage

$ hdxctl get-license [OPTIONS]

Options

--account-id
    The account ID Hydrolix will be deployed within. [Required]
--admin-email
    The administrator's email address; on creation of the cluster, a one-time password is sent to this address. [Required]
--cloud-provider
    The cloud provider that services will be deployed in. AWS is currently the only supported platform. [Required]
--organization
    The name of the organization. [Required]
--full-name
    The name of the administrator. [Required]
--host
    The hostname that will be used for access to Hydrolix services; the value you supply is suffixed with .hydrolix.live. [Required]
--region
    The AWS region the Hydrolix services will be deployed in. Options are: eu-west-1, eu-west-3, us-east-1, us-east-2, us-west-1, and us-west-2. [Required]

Example

This example would attempt to create a new license whose clusters would run at the host "example.hydrolix.live".

$ hdxctl get-license \
    --admin-email "[email protected]" \
    --organization "Yoyodyne Propulsion Systems" \
    --cloud-provider "AWS" \
    --full-name "Alice Nobody" \
    --account-id "123456789123" \
    --host "example" \
    --region "us-east-1" \

goto

Connects to a single component of a specified cluster via SSH.

If run with the -i option, this command instead displays the specified component's local IP address and exits.

Connecting to components through this command requires setting SSH public keys on your client stack as a prerequisite step. For more information on this and other aspects of using this command, see SSH Access.

Usage

$ hdxctl goto [-i] CLIENT_ID CLUSTER_ID COMPONENT_NAME

Where COMPONENT_NAME is one of:

  • bastion
  • batch-peer
  • clickhouse
  • grafana
  • head
  • intake-misc
  • kafka-peer
  • merge-peer
  • peer
  • prometheus
  • stream-head
  • stream-peer
  • superset
  • web
  • zookeeper

Example

Logging into a cluster's UI component via SSH:

$ hdxctl goto hdxcli-abc1234 hdx-xyz1234 head
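
To display the component's local IP address instead of opening an SSH session, add -i (same hypothetical IDs as above):

$ hdxctl goto -i hdxcli-abc1234 hdx-xyz1234 head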

install

Moves the hdxctl executable into the bin directory of your choice.

Usage

$ hdxctl install [OPTIONS]

Options

--bin-directory
    Specify the bin directory to use.
--help
    Displays summary documentation for this command, and exits.

Example

$ hdxctl install --bin-directory /usr/local/bin/
installed hdxctl at /usr/local/bin/hdxctl

instances

Lists the instances of a deployment: each instance's IP address, current state, and service type.

Usage

$ hdxctl instances [OPTIONS] CLIENT_ID CLUSTER_ID

Options

--help
    Displays the help text for the command.

Example

$ hdxctl instances hdxcli-pmpudpqe hdx-ex6bdtsn

LAUNCH_TIME          SERVICE      IP            STATE    POOL
-------------------  -----------  ------------  -------  ------
2020-08-18 20:55:48  bastion      11.22.33.444  running
2020-08-25 12:26:34  batch-peer   10.0.3.98     running
2020-08-25 09:47:17  config       10.0.2.14     running
2020-08-25 09:47:32  head         10.0.13.203   running
2020-08-25 09:47:28  peer         10.0.12.177   running
2020-08-25 12:26:29  peer         10.0.11.95    running
2020-08-25 12:26:29  peer         10.0.9.229    running
2020-08-25 09:47:31  stream-head  10.0.15.167   running
2020-08-25 09:47:25  stream-peer  10.0.13.77    running
2020-08-25 09:47:20  web          10.0.2.249    running
2020-08-18 20:55:48  zookeeper    10.0.3.231    running

list-client-ids

Lists the client IDs attributed to you and the regions they are aligned with.

Usage

$ hdxctl list-client-ids [OPTIONS]

Options

--help
    Displays the help text for the command.

Example

$ hdxctl list-client-ids

CLIENT_ID        REGION
---------------  ---------
hdxcli-dasfd243  eu-west-2
hdxcli-cn6nad32  us-west-2
hdxcli-puwas2fa  us-east-2

nat-gateway-ip

Displays the single IP address that your various components present to the wider internet, from the perspective of an external recipient.

Usage

$ hdxctl nat-gateway-ip CLIENT_ID

Example

$ hdxctl nat-gateway-ip hdxcli-example123

34.192.246.29

route

Switches the hostname to point to a different compute cluster. See Upgrading Hydrolix for a usage scenario.

Usage

$ hdxctl route [OPTIONS] CLIENT_ID CLUSTER_ID

Options

--help
    Displays the help text for the command.

Example

$ hdxctl route hdxcli-puwas2fa hdx-zuxr7yt6

scale

Scales components of the cluster. Running the command without any options lists the current state of the auto-scaling groups.

Usage

$ hdxctl scale [OPTIONS] CLIENT_ID CLUSTER_ID

Options

--bastion-count
    Specify the number of bastion servers.
--bastion-instance-type
    Change the type and class of the bastion server.
--bastion-disk
    Specify the amount of disk (EBS) you wish to use on the bastion.
--bastion-cache-disk
    Specify the size of the cache disk to use on the bastion. Default 0.
--batch-peer-count
    Specify the number of batch peers; a minimum of 1 is required to use batch ingest.
--batch-peer-disk
    Specify the amount of disk (EBS) you wish to use on the batch peer. Recommended to be kept as default.
--batch-peer-instance-type
    Change the type and class of the batch peer (e.g. m5.large).
--batch-peer-cache-disk
    Specify the size of the cache disk to use on the batch peer. Default 0.
--edit / --no-edit
    Edit the scaling TOML directly using vi and update the cluster when finished.
--emit-toml
    Display the TOML on STDOUT.
--from-file
    Load a configuration from a file. See Advanced HDXCTL for more information.
--grafana-count
    Specify the number of Grafana servers you would like to run. Deploys as 0.
--grafana-disk
    Specify the size of disk for the Grafana server(s) you would like to run.
--grafana-instance-type
    Specify the instance type of the Grafana servers you would like to run.
--grafana-cache-disk
    Specify the size of the cache disk to use on the Grafana servers. Default 0.
--head-count
    Specify the number of query heads to use. A minimum of 1 is required to query the infrastructure.
--head-disk
    Specify the amount of disk (EBS) you wish to use on the query head. Note this is for caching purposes. Recommended to be kept as default.
--head-instance-type
    Change the type and class of the query head (e.g. c5n.xlarge).
--head-cache-disk
    Specify the size of the cache disk to use on the query head. Default 0.
--help
    Displays the help text for the command.
--intake-misc-count
    Specify the number of intake-misc servers.
--intake-misc-instance-type
    Change the type and class of the intake-misc server.
--intake-misc-disk
    Specify the amount of disk (EBS) you wish to use on the intake-misc server.
--intake-misc-cache-disk
    Specify the size of the cache disk to use on the intake-misc server. Default 0.
--merge-peer-count
    Specify the number of merge-peer servers.
--merge-peer-instance-type
    Change the type and class of the merge-peer servers.
--merge-peer-disk
    Specify the amount of disk (EBS) you wish to use on the merge-peer servers.
--merge-peer-cache-disk
    Specify the size of the cache disk to use on the merge-peer servers. Default 0.
--minimal / --no-minimal
    Scale the stack to a minimal state with all components at a minimum level.
--off / --no-off
    Turn off all except required stateful components.
--prometheus-count
    Specify the number of Prometheus servers.
--prometheus-instance-type
    Change the type and class of the Prometheus servers.
--prometheus-disk
    Specify the amount of disk (EBS) you wish to use on the Prometheus servers.
--prometheus-cache-disk
    Specify the size of the cache disk to use on the Prometheus servers. Default 0.
--query-peer-count
    Specify the number of query peers; a minimum of 1 is required to query the infrastructure. If auto-scaling is to be used for a component, a min and max can be provided (e.g. 2-5), or a min, desired, and max can be specified (e.g. 2-5-10).
--query-peer-disk
    Specify the amount of disk (EBS) you wish to use on the query peer. Note this is for caching purposes. Recommended to be kept as default.
--query-peer-instance-type
    Change the type and class of the query peer (e.g. c5n.2xlarge).
--query-peer-cache-disk
    Specify the size of the cache disk to use on the query-peer servers. Default 24GB.
--query-peer-spot
    Enable Spot instance usage for query peers. Default false.
--stream-head-count
    Specify the number of stream heads to use; a minimum of 1 is required to use streaming ingest.
--stream-head-disk
    Specify the amount of disk (EBS) you wish to use on the stream head. Recommended to be kept as default.
--stream-head-instance-type
    Change the type and class of the stream head (e.g. m5.large).
--stream-head-cache-disk
    Specify the size of the cache disk to use on the stream-head servers. Default 0.
--stream-peer-count
    Specify the number of stream peers; a minimum of 1 is required to use streaming ingest.
--stream-peer-disk
    Specify the amount of disk (EBS) you wish to use on the stream peer. Recommended to be kept as default.
--stream-peer-instance-type
    Change the type and class of the stream peer (e.g. m5.large).
--stream-peer-cache-disk
    Specify the size of the cache disk to use on the stream-peer servers. Default 0.
--superset-count
    Specify the number of Superset servers.
--superset-instance-type
    Change the type and class of the Superset servers.
--superset-disk
    Specify the amount of disk (EBS) you wish to use on the Superset servers.
--superset-cache-disk
    Specify the size of the cache disk to use on the Superset servers. Default 0.
--web-count
    Specify the number of servers that host the configuration API and portal. Recommended to be kept as default.
--web-disk
    Specify the amount of disk (EBS) you wish to use on the configuration API and portal server. Recommended to be kept as default.
--web-instance-type
    Change the type and class of the configuration API and portal server. Recommended to be kept as default.
--web-cache-disk
    Specify the size of the cache disk to use on the web servers. Default 0.
--update-ok
    Allow a cluster to be updated if the hdxctl version doesn't match the cluster version.
--zookeeper-count
    Specify the number of Zookeeper servers; options are 0 or 3. Default 3.
--zookeeper-instance-type
    Change the type and class of the Zookeeper servers (e.g. t2.micro).
--zookeeper-disk
    Specify the amount of disk (EBS) you wish to use on the Zookeeper servers.
--zookeeper-cache-disk
    Specify the size of the cache disk to use on the Zookeeper servers. Default 0.

📘

Note:

Settings for the batch-head are intentionally absent. Batch ingest uses Lambda as the ingest head, so there are no count, instance-type, or disk options for it.

Example

$ hdxctl scale hdxcli-cflmkpyl hdx-zgnswsoi --head-count 1 --query-peer-instance-type c5n.2xlarge --query-peer-count 5

$ hdxctl scale hdxcli-pmpudpqe hdx-ex6bdtsn
SERVICE        COUNT  FAMILY    SIZE       DISK
-----------  -------  --------  -------  ------
batch-peer         0  r5        2xlarge      30
config             0  t2        micro        30
query-head         0  c5n       xlarge       30
query-peer         0  c5n       4xlarge     100
stream-head        0  m5        xlarge       30
stream-peer        0  m5        xlarge       30
ui                 0  t2        micro        30
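
The scaling configuration can also be round-tripped through a file using --emit-toml and --from-file. A sketch, reusing the IDs above with a hypothetical file name:

$ hdxctl scale hdxcli-pmpudpqe hdx-ex6bdtsn --emit-toml > scale.toml
$ hdxctl scale hdxcli-pmpudpqe hdx-ex6bdtsn --from-file scale.toml

Edit scale.toml between the two commands to set the counts, instance types, and disk sizes you want.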

scale-db

Scales the RDS Catalog Database component of the cluster. Running the command without any options displays the current state of the database. The catalog is a core component of the system, so downsizing should be done carefully; any upgrade or downgrade must be completed offline.

Usage

$ hdxctl scale-db [OPTIONS] CLIENT_ID

Options

--db-instance-type
    Update the instance type to be used.
--db-disk
    Update the instance's disk size.
--help
    Displays the help text for the command.

Example

$ hdxctl scale-db hdxcli-ekmeho6e

Family: db.t2, Instance Size: medium, Disk Size: 30GB
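
To resize the database, supply one or both options. A sketch with hypothetical values, not recommendations; remember that resizing must be completed offline:

$ hdxctl scale-db hdxcli-ekmeho6e --db-instance-type db.r5.large --db-disk 100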

smoketest

Runs a basic test of the system's ability to ingest and query data.

Usage

$ hdxctl smoketest [OPTIONS] CLIENT_ID CLUSTER_ID

Options

--help
    Displays the help text for the command.
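
Example

A sketch, reusing the hypothetical client and cluster IDs from the using-hdxreader example below:

$ hdxctl smoketest hdxcli-eemdho3e hdx-tvgsb3a6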

support-bundle

Creates a bundle of configuration and log files to send to Hydrolix support.

Usage

$ hdxctl support-bundle [OPTIONS] CLIENT_ID

Options

--concurrency
    Unused.
--days
    Days of log files to include in the bundle.
--help
    Displays the help text for the command.

Example

$ hdxctl support-bundle hdxcli-ekmeho6e
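
To limit how much log history the bundle includes, add --days; a sketch, with an arbitrary three-day window:

$ hdxctl support-bundle --days 3 hdxcli-ekmeho6e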

tunables

Gets or sets the tunables configuration file defined for a client ID.

After modifying your configuration file and applying it with the set subcommand, you need to update your cluster to apply the configuration: hdxctl update CLIENT_ID CLUSTER_ID. A workflow sketch appears at the end of this section.

Usage

$ hdxctl tunables get [OPTIONS] CLIENT_ID

$ hdxctl tunables set CLIENT_ID TUNABLES_FILE

Subcommands

get
    Prints the tunables in TOML format. For every supported tunable, a commented line is printed showing that tunable's default value. Any tunables that have been set explicitly also appear uncommented; see ip_allowlist in the output below for an example.
set
    Sets the tunables configuration for the client ID using the supplied TOML file.

Options

-v
    Verbose output; includes additional description information in the output.
--help
    Displays summary documentation for this command, and exits.

Example

$ hdxctl tunables get hdxcli-xxxxx
#  import_max_receive_count = 1
#  import_queue_timeout = 43200
#  ip_allowlist = [ "104.248.xxx.xxx/32", "44.226.xxx.xxx/32", "44.230.xxx.xxx/32",]
ip_allowlist = [ "0.0.0.0/0",]
#  kafka_tls_ca = ""
#  kafka_tls_cert = ""

$ hdxctl tunables set hdxcli-xxxxx tunables.toml

Tunables

autoingest_max_receive_count
    The number of times a message is delivered to the autoingest queue before being moved to the dead-letter queue. Default: 10.
autoingest_queue_timeout
    Specify the maximum message retention period for the autoingest queue, in seconds. Default: 200.
aws_ssh_key_name
    Add an AWS-defined key pair to the authorized keys of a deployment. Allows on-box access. Default: none.
batch_bucket_kms_arn
    Allow Hydrolix servers to decrypt a source bucket where a customer-defined KMS key is required. Takes the ARN. Default: none.
batch_peer_threads
    The number of concurrent threads the batch-peer should use to process data. Default: 1.
bucket_allowlist
    Additional buckets that the cluster has access to. Default: none.
ec2_detailed_monitoring
    Turns off additional monitoring for Hydrolix EC2 components. Default: true.
enable_query_auth
    Enable query authorisation for requests to the query endpoint. Default: false.
enable_query_peer_hyperthreading
    Enable hyperthreading on the query peer. Default: true.
enable_turbine_monitor
    Allow query components to monitor the Hydrolix query engine, restarting it if it hangs. Default: true.
import_max_receive_count
    The number of times a message is delivered to the import queue before being moved to the dead-letter queue. Default: 1.
import_queue_timeout
    Specify the time for an individual job to time out on the SQS queue, in seconds. Recommended to be kept as default. Default: 43200.
ip_allowlist
    Sets IP allow lists on the appropriate security groups (BastionSecurityGroup and ELBSecurityGroup) for incoming connections. IPs are provided in CIDR format, for example: "4.2.2.2/32", "8.8.8.0/24". Default: none.
kafka_tls_ca
    Allows the addition of a TLS Certificate Authority (CA) for mutual identification of Hydrolix Kafka ingest. PEM format. Default: none.
kafka_tls_cert
    Allows the addition of a TLS certificate for mutual identification of Hydrolix Kafka ingest. PEM format. Default: none.
listing_max_receive_count
    The number of times a message is delivered to the listing queue before being moved to the dead-letter queue. Default: 1.
listing_queue_timeout
    Specify the time for an import job to time out on the SQS queue, in seconds. Recommended to be kept as default. Default: 43200.
merge_interval
    Specify the interval for the merge process to trigger. Recommended to be kept as default. Default: 5m.
merge_max_receive_count
    The number of times a message is delivered to the merge queue before being moved to the dead-letter queue. Default: 1.
merge_queue_timeout
    Specify the maximum message retention period for the merge queue, in seconds. Recommended to be kept as default. Default: 300 seconds.
reaper_queue_timeout
    Specify the maximum message retention period for the reaper queue, in seconds. Recommended to be kept as default. Default: 30 seconds.
ssh_authorized_keys
    List of authorized keys that are deployed to components for SSH access. Default: none.
stream_shard_count
    The number of shards AWS Kinesis is configured to use. This Kinesis stream is used between the stream-head and the stream-peers. Default: 2.
tag
    Tags to apply to this cluster, in the format TAG-NAME:TAG-VALUE. See also the further notes about tags, below. Default: none.
superset_workers
    The number of workers for handling Superset requests. Default: 10.
superset_threads
    The number of threads for each Superset web worker. Default: 20.
superset_timeout
    Superset web workers that are silent for more than this many seconds are killed and restarted. Default: 60.
use_s3_kms_key
    Boolean to enable the creation and use of a new key to encrypt the S3 bucket used by Hydrolix. Default: false.
keep_legacy_kms_key
    Boolean to enable/disable keeping the previously generated/used KMS key. Useful for changing the key without losing access to previously encrypted partitions. Default: true.
s3_kms_key_arn
    Specify a specific KMS key to use on your S3 bucket. Useful if you want to use custom settings or an external key generated in an HSM. Default: none.
boundary_policy_arn
    Specify the ARN used to set the maximum permissions that the Hydrolix policy is granted. Default: none.
use_https_with_s3
    Use HTTPS to connect when downloading partitions from S3. Required if you use a custom KMS key. Default: true.
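
A sketch of the full workflow, get, edit, set, then update, using a hypothetical client ID and cluster ID:

$ hdxctl tunables get hdxcli-example123 > tunables.toml
# ... uncomment and edit the tunables you want to change ...
$ hdxctl tunables set hdxcli-example123 tunables.toml
$ hdxctl update hdxcli-example123 hdx-example1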

update

Updates the version of the Hydrolix stack in use.

Usage

$ hdxctl update [OPTIONS] CLIENT_ID

Options

--autoingest-max-receive-count
    The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (10).
--autoingest-queue-timeout
    Specify the maximum message retention period for the autoingest queue. Recommended to be kept as default. Default 4 days.
--aws-ssh-key-name
    Add an AWS-defined key pair to the authorized keys of a deployment. Allows on-box access.
--batch-bucket-kms-arn
    Allow Hydrolix servers to decrypt a source bucket where a customer-defined KMS key is required. Takes the ARN.
--batch-peer-threads
    Specify the number of vCPUs a batch-peer should use for import jobs. Recommended to be kept as default.
--bucket-allowlist
    Enables the architecture to access other buckets. Buckets are provided as bare bucket names. For example: --bucket-allowlist mybucket1 --bucket-allowlist anotherbucket. This is not additive; any update will overwrite previous configurations.
--deploy / --no-deploy
    Unused.
--ec2-detailed-monitoring
    Turns off additional monitoring for Hydrolix EC2 components. Default true.
--enable-query-auth
    Enable query authorisation for requests to the query endpoint. Currently a placeholder and not in use.
--enable-query-peer-hyperthreading
    Enable hyperthreading on the query peer. Default disabled.
--enable-turbine-monitor
    Allow query components to monitor the Hydrolix query engine, restarting it if it hangs.
--enable-grafana-cloudwatch
    Enable CloudWatch metrics within Grafana.
--help
    Displays the help text for the command.
--import-max-receive-count
    The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1).
--import-queue-timeout
    Specify the time for an individual job to time out on the SQS queue. Recommended to be kept as default.
--ip-allowlist
    Sets IP allow lists on the appropriate security groups (BastionSecurityGroup and ELBSecurityGroup) for incoming connections. IPs are provided in CIDR format. For example: --ip-allowlist 4.2.2.2/32 --ip-allowlist 8.8.8.0/24. Note: if an allow list doesn't contain "0.0.0.0/0", the /32 IP of the NAT gateway is added automatically. This is not additive; any update will overwrite previous configurations.
--kafka-tls-ca
    Allows the addition of a TLS Certificate Authority (CA) for mutual identification of Hydrolix Kafka ingest. PEM format.
--kafka-tls-cert
    Allows the addition of a TLS certificate for mutual identification of Hydrolix Kafka ingest. PEM format.
--kafka-tls-key
    Allows the addition of a TLS key for mutual identification of Hydrolix Kafka ingest. PEM format.
--listing-max-receive-count
    The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1).
--listing-queue-timeout
    Specify the time for an import job to time out on the SQS queue. Recommended to be kept as default.
--merge-interval
    Specify the interval for the merge process to trigger. Recommended to be kept as default.
--merge-max-receive-count
    The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1).
--merge-queue-timeout
    Specify the maximum message retention period for the merge queue. Recommended to be kept as default. Default 4 days.
--reaper-queue-timeout
    Specify the maximum message retention period for the reaper queue. Recommended to be kept as default. Default 4 days.
--ssh-authorized-keys
    Allows the provision of a file in the format of .ssh/authorized_keys to be appended to all .ssh/authorized_keys files in the deployed infrastructure.
--stream-shard-count
    Alter the ingest streaming shard count for Kinesis. Default 2.
--superset-workers
    The number of workers for handling Superset requests. Default 10.
--superset-threads
    The number of threads for each Superset web worker. Default 20.
--superset-timeout
    Superset web workers that are silent for more than this many seconds are killed and restarted. Default 60.
--use-s3-kms-key
    Boolean to enable the creation and use of a new key to encrypt the S3 bucket used by Hydrolix. Default false.
--keep-legacy-kms-key
    Boolean to enable/disable keeping the previously generated/used KMS key. Useful for changing the key without losing access to previously encrypted partitions. Default true.
--s3-kms-key-arn
    Specify a specific KMS key to use on your S3 bucket. Useful if you want to use custom settings or an external key generated in an HSM.
--boundary-policy-arn
    Specify the ARN used to set the maximum permissions that the Hydrolix policy is granted.
--use-https-with-s3
    Use HTTPS to connect when downloading partitions from S3. Required if you use a custom KMS key. Default true.
--tag
    Additional tags you may require on your infrastructure, in the format TAG-NAME:TAG-VALUE. See also the further notes about tags, below.
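
Example

A minimal sketch, using a hypothetical client ID and an arbitrary shard count:

$ hdxctl update hdxcli-example123 --stream-shard-count 4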

update-self

Updates HDXCTL to the most current executable.

Usage

$ hdxctl update-self [OPTIONS]

Options

-v, --version TEXT
    Updates hdxctl to the version specified.
--help
    Displays the help text for the command.

Example

$ hdxctl update-self
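
To install a specific release rather than the latest, pass --version; the version string here is hypothetical:

$ hdxctl update-self --version v2.14.4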

using-hdxreader

Reports whether billing is enabled or bypassed in the currently deployed infrastructure.

Usage

$ hdxctl using-hdxreader [OPTIONS] CLIENT_ID CLUSTER_ID

Options

--help
    Displays the help text for the command.

Example

$ hdxctl using-hdxreader hdxcli-eemdho3e hdx-tvgsb3a6
True

version

Displays the version of the hdxctl program you are using.

Usage

$ hdxctl version [OPTIONS]

Options

-a, --all
    Displays all the version information for the stack.
--help
    Displays the help text for the command.

Example

$ hdxctl version -a

-----------------------  --------
SHARED_VERSION           b5de4e88
UI_VERSION               7605aa1e
CONFIG_VERSION           1b43fda6
KEYCLOAK_VERSION         15a561b0
INTAKE_VERSION           b5de4e88
TURBINE_VERSION          b78e7185
LAMBDA_VERSION           b5de4e88
CLI_DNS_SERVICE_VERSION  7181f171
SELF_DEPLOY_VERSION      b5de4e88
MACHINE_VERSION          b5de4e88
HUMAN_VERSION            v2.9.3
SECRETS_FROM_S3_VERSION  15a561b0
ENVIRON_VERSION          15a561b0
LAST_TAG_NAME            v2.9.3
LAST_TAG                 b5de4e88
TAG                      v2.9.3
GRAFANA_VERSION          e1f7907d
PROMETHEUS_VERSION       e1f7907d
SUPERSET_VERSION         e1f7907d
TERRAFORM_VERSION        f8011a06
-----------------------  --------

Further notes about tags

Using the --tag option with the update command erases all tags previously set on the cluster before setting the newly specified ones.
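
For example, a sketch with hypothetical tag names and values; after it runs, these two tags are the only user-set tags on the cluster:

$ hdxctl update --tag owner:alice --tag env:staging hdxcli-example123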

Several tag names are reserved for Hydrolix's internal use, and cannot be set via the --tag option:

  • aws:cloudformation:logical-id

  • aws:cloudformation:stack-id

  • HdxContact

  • HdxService

  • Name

  • HdxBudget

  • aws:cloudformation:stack-name