HDXCTL Command Reference
Using hdxctl, the Hydrolix CLI
The hdxctl program lets you build and manage your Hydrolix clusters from the command line.
Usage
$ hdxctl [OPTIONS] command [ARGS]
Options
--help
Displays summary documentation for using hdxctl and exits.
--region
Specify the cloud-service region to apply to the command.
For example, to tell hdxctl that it should create a new cluster in AWS's us-east-2 region, using the client ID "hdxcli-example123":
$ hdxctl --region us-east-2 create-cluster hdxcli-example123
Using the --region
option sets the value you provide as a default. Commands that require you to specify a region will make use of this default if you subsequently run hdxctl
without setting this option.
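For example (IDs here are hypothetical), once a region has been supplied, a later command can omit the option and the saved default is used:
$ hdxctl --region us-east-2 clusters
$ hdxctl clusters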
Commands
Summary
Command | Purpose |
---|---|
cloudformation-events | Lists the CloudFormation events recorded for a given cluster or client ID. |
clusters | Lists your clusters. |
create-cluster | Creates a new Hydrolix stack, in full. |
delete | Deletes the compute components of a given cluster. |
delete-bootstrap | Deletes the stateful components of a given cluster. |
deployed-version | Displays the versions of the bootstrap, the cluster, and hdxctl. |
files | Retrieves and sets the configuration files (INI) for Grafana and Superset. |
get-license | Creates a new Hydrolix license. |
goto | Connects to a Hydrolix component via SSH. |
install | Moves the hdxctl executable into the bin directory of your choice. |
instances | Lists the compute instances currently in use. |
list-client-ids | Lists the client IDs currently in use. |
nat-gateway-ip | Displays your clusters' externally visible IP address. |
route | Associates a given client ID's hostname with a given cluster. |
scale | Displays or sets the resource use of a cluster's components. |
scale-db | Scales the Catalog Database component. |
smoketest | Runs basic-functionality tests on a given cluster. |
support-bundle | Generates a bundle that contains project, table, view and transform descriptions that can be sent to support. |
tunables | Updates or retrieves the tunables configuration file for a given client ID. |
update | Updates the Hydrolix software run by a given client ID. |
update-self | Updates HDXCTL to the most current executable. |
using-hdxreader | Reports whether billing is enabled or bypassed in the currently deployed infrastructure. |
version | Displays the currently installed version of Hydrolix. |
cloudformation-events
Lists the events for a client ID or a cluster ID. Using just the client ID returns the events for the core components. Using both the client ID and the cluster ID returns the events for the cluster components.
Usage
$ hdxctl cloudformation-events [OPTIONS] CLIENT_ID [CLUSTER_ID]
Options
Option | Description |
---|---|
--help | Displays summary documentation for this command, and exits. |
Example
$ hdxctl cloudformation-events hdxcli-c2tpmuym
-------------------------------- --------------------------- ----------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------
2020-11-02 17:52:54.603000+00:00 hdxcli-c2tpmuym-self-deploy CREATE_IN_PROGRESS User Initiated
2020-11-02 17:52:57.766000+00:00 ClientBucket CREATE_IN_PROGRESS -
2020-11-02 17:52:57.984000+00:00 SelfDeployRole .................
$ hdxctl cloudformation-events hdxcli-c2tpmuym hdx-qngq4obs
-------------------------------- --------------------------------- ----------------------------------- ---------------------------
2020-11-02 18:14:07.639000+00:00 hdx-qngq4obs CREATE_IN_PROGRESS User Initiated
2020-11-02 18:14:14.871000+00:00 hdx-qngq4obs CREATE_IN_PROGRESS Transformation succeeded
.........
clusters
Lists your clusters.
Note that this command uses a local cache for efficiency. To reload the cache with your clusters' most recent metadata, run this command with the --sync option.
Usage
$ hdxctl clusters [OPTIONS]
Option | Description |
---|---|
--add / --no-add | unused |
--id TEXT | unused |
--sync | Reloads metadata from your clusters prior to display. |
--help | Displays summary documentation for this command, and exits. |
Example:
$ hdxctl clusters
CLIENT_ID CLUSTER_ID CREATED HOST STATUS WHO REGION
--------------- ------------ ------------------- --------------------- --------------- ------- ---------
hdxcli-pmpudpqe hdx-ex6bdtsn 2020-08-18 20:53:19 mysite.hydrolix.live. UPDATE_COMPLETE imauser us-east-2
create-cluster
Creates a cluster using a supplied CLIENT_ID.
Usage
$ hdxctl --region REGION create-cluster [OPTIONS] CLIENT_ID
Option | Description |
---|---|
--admin-email EMAIL | Set the default Administrator Email address for the cluster on first build |
--autoingest-max-receive-count | The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (10). |
--autoingest-queue-timeout | Specify the maximum message retention period for the Autoingest Queue. Default: 4 days. |
--aws-ssh-key-name | Add an AWS defined Key Pair to the authorized keys of a deployment. Allows on-box access. |
--batch-bucket-kms-arn | Allow Hydrolix servers to decrypt a source bucket where a customer defined KMS key is required. Takes the ARN |
--batch-peer-threads | Specify the number of vCPUs a batch peer should use for import jobs. |
--bucket-allowlist | Enables the architecture to access other buckets. For example: --bucket-allowlist mybucket1 --bucket-allowlist anotherbucket . Any update will overwrite previous configurations. |
--ec2-detailed-monitoring | Controls additional monitoring for Hydrolix EC2 components. Default true. |
--enable-grafana-cloudwatch | Enable cloudwatch metrics within Grafana. |
--enable-turbine-monitor | Allow query components to monitor the Hydrolix query engine, restarting it if it hangs. |
--enable-query-auth | Enable query authorisation for requests to the query endpoint. Currently a placeholder and not in use. |
--enable-query-peer-hyperthreading | Enable hyperthreading on the query peer. Default disabled. |
--environ / --no-environ | unused |
--full-hydrolix-access/ --no-full-hydrolix-access | Enable Hydrolix access by deploying Hydrolix support SSH keys/certificate. |
--help | Displays summary documentation for this command, and exits. |
--ignore-version | unused |
--import-max-receive-count | The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1). |
--import-queue-timeout | Specify the time for an individual job to timeout on the SQS queue. Recommended to be kept as default. |
--ip-allowlist | Sets IP allow lists on the appropriate security groups (BastionSecurityGroup and ELBSecurityGroup) for incoming connections. IPs are provided in CIDR format. For example: --ip-allowlist 4.2.2.2/32 --ip-allowlist 8.8.8.0/24 . Note: if an allow list doesn't contain "0.0.0.0/0", then the /32 IP of the NAT gateway is added automatically. This is not additive; any update will overwrite previous configurations. |
--kafka-tls-ca | Allows the addition of a TLS Certificate Authority (CA) for mutual identification of Hydrolix Kafka ingest. PEM Format |
--kafka-tls-cert | Allows the addition of a TLS Certificate for mutual identification of Hydrolix Kafka ingest. PEM format |
--kafka-tls-key | Allows the addition of a TLS Key for mutual identification of Hydrolix Kafka ingest. PEM format |
--listing-max-receive-count | The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1). |
--listing-queue-timeout | Specify the time for an import job to timeout of the SQS queue. Recommended to be kept as default. |
--merge-interval | Specify the interval for the Merge process to trigger. Recommended to be kept as default. |
--merge-max-receive-count | The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1). |
--merge-queue-timeout | Specify the maximum message retention period for the Merge Queue. Recommended to be kept as default. Default: 4 days. |
--reaper-queue-timeout | Specify the maximum message retention period for the Reaper Queue. Recommended to be kept as default. Default: 4 days. |
--ssh-authorized-keys | Allows the provision of a file in the format of .ssh/authorized_keys to be appended to all .ssh/authorized_keys files in the deployed infrastructure. |
--stream-shard-count | Alter the Ingest Streaming Shard count for Kinesis. Default 2 |
--superset-threads INTEGER | The number of threads for each Superset web worker. |
--superset-timeout INTEGER | Superset web workers that are silent for more than this many seconds are killed and restarted. |
--superset-workers INTEGER | The number of workers for handling Superset requests. |
--tag | Tags to apply to this cluster, in the format TAG-NAME:TAG-VALUE. See also further notes about tags, below. |
--vpc-cidr | An alternate CIDR block for the deployment. |
--wait | Have the client watch the command execute. Progress information is written to STDOUT. |
--use-s3-kms-key | Boolean to enable the creation and use of a new key to encrypt the S3 bucket used by Hydrolix. Default false |
--keep-legacy-kms-key | Boolean to enable/disable keeping the previously generated/used KMS key; useful to change the key without losing access to previously encrypted partitions. Default true. |
--s3-kms-key-arn | Specify specific KMS Key to use on your S3 bucket, useful if you want to use custom settings or external key generated in an HSM |
--boundary-policy-arn | Specify the ARN to use to set the maximum permissions that our policy is granted |
--use-https-with-s3 | Use HTTPS to connect to download partition from S3, required if you use a custom KMS key. Default true |
Example
The following creates a cluster in us-east-2, with the command reporting where it is within the build cycle.
$ hdxctl --region us-east-2 create-cluster hdxcli-u7mtxhmh --wait
creating hydrolix stack
initiated creation of hdx-nglnawnx
hdx-nglnawnx status: CREATE_IN_PROGRESS, sleeping 30 seconds
hdx-nglnawnx status: CREATE_IN_PROGRESS, sleeping 30 seconds
...
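A cluster can also be created with restricted network access and tags, using the options documented above (the client ID, CIDR, and tag values here are hypothetical):
$ hdxctl --region us-east-2 create-cluster hdxcli-u7mtxhmh \
    --ip-allowlist 4.2.2.2/32 \
    --tag team:analytics \
    --wait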
delete
Deletes the compute components of the cluster using a supplied CLIENT_ID
Usage
$ hdxctl delete [OPTIONS] CLIENT_ID CLUSTER_ID
Option | Description |
---|---|
--wait | Have the client watch the command execute. Progress information is written to STDOUT. |
--force / --no-force | unused |
--help | Displays summary documentation for this command, and exits. |
Example
$ hdxctl delete hdxcli-k754x5zs hdx-2xoq7xei --wait
delete-bootstrap
Deletes the stateful components of the stack.
Usage
$ hdxctl delete-bootstrap [OPTIONS] CLIENT_ID
Option | Description |
---|---|
--wait | Have the client watch the command execute. Progress information is written to STDOUT. |
--force / --no-force | unused |
--help | Displays summary documentation for this command, and exits. |
Example
$ hdxctl delete-bootstrap hdxcli-cflmkpyl --wait
deployed-version
Retrieves the version information of the client stack, cluster stack and hdxctl.
Usage
$ hdxctl deployed-version CLIENT_ID CLUSTER_ID
Option | Description |
---|---|
--help | Displays summary documentation for this command, and exits. |
Example
$ hdxctl deployed-version hdxcli-ek1gho6y hdx-tva1b3l4
client stack cluster stack hdxctl
-------------- --------------- --------
v2.14.4 v2.14.4 v2.14.4
files
Allows the retrieval and setting of the Grafana and Superset INI files.
Usage
$ hdxctl files get [OPTIONS] CLIENT_ID [grafana|superset]
$ hdxctl files set [OPTIONS] CLIENT_ID FILENAME [grafana|superset]
Command | Purpose |
---|---|
get | Retrieve the INI file from a Grafana or Superset deployment. |
set | Apply an INI file to a Grafana or Superset deployment and deploy it. |
Option | Description |
---|---|
--help | Displays summary documentation for this command, and exits. |
Example
$ hdxctl files get hdxcli-cflmkpyl grafana
$ hdxctl files set hdxcli-cflmkpyl mysettings.ini grafana
get-license
Creates a new Hydrolix license, with its own, new client ID. This works as a command-line alternative to obtaining a license through the hydrolix.io website.
All of this command's arguments are required, and map to the fields found on the license registration web form.
Usage
$ hdxctl get-license [OPTIONS]
Option | Description |
---|---|
--account-id | The account id Hydrolix will be deployed within. [Required] |
--admin-email | Administrator's email address; on creation of the cluster, a one-time password will be sent to this address. [Required] |
--cloud-provider | The cloud provider that services will be deployed in. AWS is currently the only platform supported. [Required] |
--organization | The name of the organization. [Required] |
--full-name | Name of the Administrator. [Required] |
--host | The hostname that will be used for access to Hydrolix services; the value you supply is appended with .hydrolix.live . [Required] |
--region | The AWS region the Hydrolix services will be deployed in, options are: eu-west-1, eu-west-3, us-east-1, us-east-2, us-west-1 and us-west-2. [Required] |
Example
This example would attempt to create a new license whose clusters would run at the host "example.hydrolix.live".
$ hdxctl get-license \
--admin-email "[email protected]" \
--organization "Yoyodyne Propulsion Systems" \
--cloud-provider "AWS" \
--full-name "Alice Nobody" \
--account-id "123456789123" \
--host "example" \
--region "us-east-1"
goto
Connects to a single component of a specified cluster via SSH.
If run with the -i option, this command instead displays the specified component's local IP address and exits.
Connecting to components through this command requires setting SSH public keys on your client stack as a prerequisite step. For more information on this and other aspects of using this command, see SSH Access.
Usage
$ hdxctl goto [-i] CLIENT_ID CLUSTER_ID COMPONENT_NAME
Where COMPONENT_NAME is one of:
- bastion
- batch-peer
- clickhouse
- grafana
- head
- intake-misc
- kafka-peer
- merge-peer
- peer
- prometheus
- stream-head
- stream-peer
- superset
- web
- zookeeper
Example
Logging into a cluster's UI component via SSH:
$ hdxctl goto hdxcli-abc1234 hdx-xyz1234 head
install
Moves the hdxctl executable into the bin directory of your choice.
Usage
$ hdxctl install [OPTIONS]
Option | Description |
---|---|
--bin-directory | Specify the bin directory to use. |
--help | Displays summary documentation for this command, and exits. |
Example
$ hdxctl install --bin-directory /usr/local/bin/
installed hdxctl at /usr/local/bin/hdxctl
instances
Gets a list of IPs, the current state, and the service type for a deployment.
Usage
$ hdxctl instances [OPTIONS] CLIENT_ID CLUSTER_ID
Option | Description |
---|---|
--help | Displays the help text for the command |
Example
$ hdxctl instances hdxcli-pmpudpqe hdx-ex6bdtsn
LAUNCH_TIME SERVICE IP STATE POOL
------------------- ----------- ------------ ------- ------
2020-08-18 20:55:48 bastion 11.22.33.444 running
2020-08-25 12:26:34 batch-peer 10.0.3.98 running
2020-08-25 09:47:17 config 10.0.2.14 running
2020-08-25 09:47:32 head 10.0.13.203 running
2020-08-25 09:47:28 peer 10.0.12.177 running
2020-08-25 12:26:29 peer 10.0.11.95 running
2020-08-25 12:26:29 peer 10.0.9.229 running
2020-08-25 09:47:31 stream-head 10.0.15.167 running
2020-08-25 09:47:25 stream-peer 10.0.13.77 running
2020-08-25 09:47:20 web 10.0.2.249 running
2020-08-18 20:55:48 zookeeper 10.0.3.231 running
list-client-ids
Lists the client IDs attributed to you and the regions they are aligned with.
Usage
$ hdxctl list-client-ids [OPTIONS]
Option | Description |
---|---|
--help | Displays the help text for the command |
Example
$ hdxctl list-client-ids
CLIENT_ID REGION
--------------- ---------
hdxcli-dasfd243 eu-west-2
hdxcli-cn6nad32 us-west-2
hdxcli-puwas2fa us-east-2
nat-gateway-ip
Displays the single IP address that your various components present to the wider internet, from the perspective of an external recipient
Usage
$ hdxctl nat-gateway-ip CLIENT_ID
Example
$ hdxctl nat-gateway-ip hdxcli-example123
34.192.246.29
route
Switches the hostname to point to a different compute cluster. See Upgrading Hydrolix for a usage scenario.
Usage
$ hdxctl route [OPTIONS] CLIENT_ID CLUSTER_ID
Option | Description |
---|---|
--help | Displays the help text for the command |
Example
$ hdxctl route hdxcli-puwas2fa hdx-zuxr7yt6
scale
Scale components of the cluster. Running the command without any options lists the current state of the auto-scaling groups.
Usage
$ hdxctl scale [OPTIONS] CLIENT_ID CLUSTER_ID
Option | Description |
---|---|
--bastion-count | Specify the number of bastion servers. |
--bastion-instance-type | Change the type and class of the bastion server. |
--bastion-disk | Specify the amount of disk (EBS) you wish to use on the bastion. |
--bastion-cache-disk | Specify the size of the cache-disk to use on the bastion. Default 0. |
--batch-peer-count | Specify the number of batch peers, a minimum of 1 is required to use batch ingest. |
--batch-peer-disk | Specify the amount of disk (EBS) you wish to use on the batch peer. Recommended to be kept as default. |
--batch-peer-instance-type | Change the type and class of the batch peer (e.g. m5.large). |
--batch-peer-cache-disk | Specify the size of the cache-disk to use on the batch-peer. Default 0. |
--edit / --no-edit | Edit the scaling TOML directly using vi and update the cluster when finished. |
--emit-toml | Display the toml in STDOUT. |
--from-file | Load a configuration from a file. See the Advanced HDXCTL for more information. |
--grafana-count | Specify the number of grafana servers you would like to run. Deploys as 0 |
--grafana-disk | Specify the size of disk for the Grafana server/s you would like to run |
--grafana-instance-type | Specify the instance type of Grafana servers you would like to run. |
--grafana-cache-disk | Specify the size of the cache-disk to use on the grafana-peer. Default 0. |
--head-count | Specify the number of query heads to use. A minimum of 1 is required to query the infrastructure. |
--head-disk | Specify the amount of disk (EBS) you wish to use on the query head. Note this is for caching purposes. Recommended to be kept as default. |
--head-instance-type | Change the type and class of the query head (e.g. c5n.xlarge). |
--head-cache-disk | Specify the size of the cache-disk to use on the query head. Default 0. |
--help | Displays the help text for the command |
--intake-misc-count | Specify the number of intake-misc servers. |
--intake-misc-instance-type | Change the type and class of the intake-misc server. |
--intake-misc-disk | Specify the amount of disk (EBS) you wish to use on the intake-misc server. |
--intake-misc-cache-disk | Specify the size of the cache-disk to use on the intake-misc server. Default 0. |
--merge-peer-count | Specify the number of merge-peer servers |
--merge-peer-instance-type | Change the type and class of the merge-peer servers. |
--merge-peer-disk | Specify the amount of disk (EBS) you wish to use on the merge-peer servers. |
--merge-peer-cache-disk | Specify the size of the cache-disk to use on the merge-peer servers. Default 0. |
--minimal / --no-minimal | Scale the stack to a minimal state with all components at a minimum level. |
--off / --no-off | Turn off all except required stateful components. |
--prometheus-count | Specify the number of prometheus servers. |
--prometheus-instance-type | Change the type and class of the prometheus servers. |
--prometheus-disk | Specify the amount of disk (EBS) you wish to use on the prometheus servers. |
--prometheus-cache-disk | Specify the size of the cache-disk to use on the prometheus servers. Default 0. |
--query-peer-count | Specify the number of query peers; a minimum of 1 is required to query the infrastructure. If auto-scaling is to be used for a component, either a min and max can be provided (e.g. 2-5) or a min, desired, and max can be specified (e.g. 2-5-10). |
--query-peer-disk | Specify the amount of disk (EBS) you wish to use on the query peer. Note this is for caching purposes. Recommended to be kept as default. |
--query-peer-instance-type | Change the type and class of the query peer (e.g. c5n.2xlarge). |
--query-peer-cache-disk | Specify the size of the cache-disk to use on the query-peer servers. Default 24GB |
--query-peer-spot | Enable Spot instance usage for query-peers. Default false |
--stream-head-count | Specify the number of stream heads to use, a minimum of 1 is required to use streaming ingest. |
--stream-head-disk | Specify the amount of disk (EBS) you wish to use on the stream head. Recommended to be kept as default. |
--stream-head-instance-type | Change the type and class of the stream head (e.g. m5.large). |
--stream-head-cache-disk | Specify the size of the cache-disk to use on the stream-head servers. Default 0. |
--stream-peer-count | Specify the number of stream peers; a minimum of 1 is required to use streaming ingest. |
--stream-peer-disk | Specify the amount of disk (EBS) you wish to use on the stream peer. Recommended to be kept as default. |
--stream-peer-instance-type | Change the type and class of the stream peer (e.g. m5.large). |
--stream-peer-cache-disk | Specify the size of the cache-disk to use on the stream-peer servers. Default 0. |
--superset-count | Specify the number of Superset servers. |
--superset-instance-type | Change the type and class of the superset servers. |
--superset-disk | Specify the amount of disk (EBS) you wish to use on the superset servers. |
--superset-cache-disk | Specify the size of the cache-disk to use on the superset servers. Default 0. |
--web-count | Specify the number of servers that host the configuration API and portal. Recommended to be kept as default. |
--web-disk | Specify the amount of disk (EBS) you wish to use on the configuration API and portal server. Recommended to be kept as default. |
--web-instance-type | Change the type and class of the configuration API and portal server. Recommended to be kept as default. |
--web-cache-disk | Specify the size of the cache-disk to use on the web servers. Default 0. |
--update-ok | Allow a cluster to be updated if the HDXCTL version doesn't match the cluster version. |
--zookeeper-count | Specify the number of Zookeeper servers, options are 0 or 3. Default 3. |
--zookeeper-instance-type | Change the type and class of the Zookeeper servers (e.g. t2.micro). |
--zookeeper-disk | Specify the amount of disk (EBS) you wish to use on the Zookeeper servers. |
--zookeeper-cache-disk | Specify the size of the cache-disk to use on the Zookeeper servers. Default 0. |
Note:
You will note that settings for the batch-head appear to be missing. This is because batch ingest uses Lambda as the ingest head, so there is no count, type, or disk option for it.
Example
$ hdxctl scale hdxcli-cflmkpyl hdx-zgnswsoi --query-head-count 1 --query-peer-instance-type c5n.2xlarge --query-peer-count 5
hdxctl scale hdxcli-pmpudpqe hdx-ex6bdtsn
SERVICE COUNT FAMILY SIZE DISK
----------- ------- -------- ------- ------
batch-peer 0 r5 2xlarge 30
config 0 t2 micro 30
query-head 0 c5n xlarge 30
query-peer 0 c5n 4xlarge 100
stream-head 0 m5 xlarge 30
stream-peer 0 m5 xlarge 30
ui 0 t2 micro 30
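Auto-scaling ranges can also be supplied for component counts, as described for --query-peer-count above (the IDs here are hypothetical):
$ hdxctl scale hdxcli-cflmkpyl hdx-zgnswsoi --query-peer-count 2-5-10
This requests a minimum of 2, a desired count of 5, and a maximum of 10 query peers.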
scale-db
Scale the RDS Catalog Database component of the cluster. Running the command without any options displays the current state of the database. The catalog is a core component of the system, so downgrading its size should be done carefully. Any upgrade or downgrade must be completed offline.
Usage
hdxctl scale-db [OPTIONS] CLIENT_ID
Option | Description |
---|---|
--db-instance-type | Update the instance type to be used. |
--db-disk | Update the instance's disk size. |
--help | Displays the help text for the command. |
Example
$ hdxctl scale-db hdxcli-ekmeho6e
Family: db.t2, Instance Size: medium, Disk Size: 30GB
smoketest
Completes a basic test of the system's ability to ingest and query some data.
Usage
$ hdxctl smoketest [OPTIONS] CLIENT_ID CLUSTER_ID
Option | Description |
---|---|
--help | Displays the help text for the command |
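Example
A typical invocation (the IDs here are hypothetical):
$ hdxctl smoketest hdxcli-ekmeho6e hdx-zgnswsoi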
support-bundle
Create a bundle of configuration and log files to send to Hydrolix support.
Usage
$ hdxctl support-bundle [OPTIONS] CLIENT_ID
Option | Description |
---|---|
--concurrency | unused |
--days | Days of log files to include in the bundle. |
--help | Displays the help text for the command. |
Example
$ hdxctl support-bundle hdxcli-ekmeho6e
tunables
Modify the tunables configuration file defined for a client ID.
After modifying your configuration file and using the set option, you need to update your cluster to apply the configuration: hdxctl update CLIENT_ID CLUSTER_ID
Usage
$ hdxctl tunables get [OPTIONS] CLIENT_ID
Command | Purpose |
---|---|
get | Prints the tunables in TOML format. For every supported tunable, a commented line is printed showing its default value. Any tunables that have been set explicitly also appear uncommented; see ip_allowlist in the output below for an example. |
Option | Description |
---|---|
-v | Verbose output of additional description information into the output |
--help | Displays summary documentation for this command, and exits. |
$ hdxctl tunables get hdxcli-xxxxx
# import_max_receive_count = 1
# import_queue_timeout = 43200
# ip_allowlist = [ "104.248.xxx.xxx/32", "44.226.xxx.xxx/32", "44.230.xxx.xxx/32",]
ip_allowlist = [ "0.0.0.0/0",]
# kafka_tls_ca = ""
# kafka_tls_cert = ""
$ hdxctl tunables set CLIENT_ID tunables.toml
Command | Purpose |
---|---|
set | Set tunable configuration for the clientID using tunables.toml file configuration |
Option | Description |
---|---|
--help | Displays summary documentation for this command, and exits. |
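Example
A typical workflow (the client and cluster IDs here are hypothetical): retrieve the current tunables, edit the file locally, apply it, and then update the cluster so the change takes effect:
$ hdxctl tunables get hdxcli-example123 > tunables.toml
$ hdxctl tunables set hdxcli-example123 tunables.toml
$ hdxctl update hdxcli-example123 hdx-example456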
tunable | Description | Default |
---|---|---|
autoingest_max_receive_count | The number of times a message is delivered to the autoingest queue before being moved to the dead-letter queue. | 10 |
autoingest_queue_timeout | Specify the maximum message retention period for the Autoingest Queue, in seconds. | 200 |
aws_ssh_key_name | Add an AWS defined Key Pair to the authorized keys of a deployment. Allows on-box access. | none |
batch_bucket_kms_arn | Allow Hydrolix servers to decrypt a source bucket where a customer defined KMS key is required. Takes the ARN | none |
batch_peer_threads | The number of concurrent threads the batch-peer should use to process data. | 1 |
bucket_allowlist | Additional buckets that the cluster has access to. | none |
ec2_detailed_monitoring | Turns off additional monitoring for Hydrolix EC2 components. | true |
enable_query_auth | Enable query authorisation for requests to the query end-point. | false |
enable_query_peer_hyperthreading | Enable hyperthreading on the query peer. | true |
enable_turbine_monitor | Allow query components to monitor the Hydrolix query engine, restarting it if it hangs. | true |
import_max_receive_count | The number of times a message is delivered to the import queue before being moved to the dead-letter queue. | 1 |
import_queue_timeout | Specify the time for an individual job to timeout on the SQS queue, in seconds. Recommended to be kept as default. | 43200 |
ip_allowlist | Sets IP allow lists on the appropriate security groups (BastionSecurityGroup and ELBSecurityGroup for incoming connections. IP’s are provided as CIDR formations, for example: "4.2.2.2/32", "8.8.8.0/24". | none |
kafka_tls_ca | Allows the addition of a TLS Certificate Authority (CA) for mutual identification of Hydrolix Kafka ingest. PEM Format | none |
kafka_tls_cert | Allows the addition of a TLS Certificate for mutual identification of Hydrolix Kafka ingest. PEM format | none |
listing_max_receive_count | The number of times a message is delivered to the listing queue before being moved to the dead-letter queue. | 1 |
listing_queue_timeout | Specify the time for an import job to timeout of the SQS queue, in seconds. Recommended to be kept as default. | 43200 |
merge_interval | Specify the interval for the Merge process to trigger. Recommended to be kept as default. | 5m |
merge_max_receive_count | The number of times a message is delivered to the merge queue before being moved to the dead-letter queue. | 1 |
merge_queue_timeout | Specify the maximum message retention period for the Merge Queue, in seconds. Recommended to be kept as default. | 300 seconds |
reaper_queue_timeout | Specify the maximum message retention period for the Reaper Queue, in seconds. Recommended to be kept as default. | 30 seconds |
ssh_authorized_keys | List of Authorized keys that are deployed to components for SSH access | none |
stream_shard_count | The number of shards AWS Kinesis is configured to use. This Kinesis stream is used between the stream-head and the stream-peers. | 2 |
tag | Tags to apply to this cluster, in the format TAG-NAME:TAG-VALUE. See also further notes about tags, below. | none |
superset_workers | The number of workers for handling Superset requests. | 10 |
superset_threads | The number of threads for each Superset web worker. | 20 |
superset_timeout | Superset web workers that are silent for more than this many seconds are killed and restarted. | 60 |
use_s3_kms_key | Boolean to enable the creation and use of a new key to encrypt the S3 bucket used by Hydrolix. | false |
keep_legacy_kms_key | Boolean to enable/disable keeping the previously generated/used KMS key; useful to change the key without losing access to previously encrypted partitions. | true |
s3_kms_key_arn | Specify specific KMS Key to use on your S3 bucket, useful if you want to use custom settings or external key generated in an HSM | none |
boundary_policy_arn | Specify the ARN to use to set the maximum permissions that our policy is granted | none |
use_https_with_s3 | Use HTTPS to connect to download partition from S3, required if you use a custom KMS key. | true |
update
Updates the version of the Hydrolix stack in use.
Usage
$ hdxctl update [OPTIONS] CLIENT_ID
Option | Description |
---|---|
--autoingest-max-receive-count | The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (10). |
--autoingest-queue-timeout | Specify the maximum message retention period for the Autoingest Queue. Recommended to be kept as default. Default: 4 days. |
--aws-ssh-key-name | Add an AWS defined Key Pair to the authorized keys of a deployment. Allows on-box access. |
--batch-bucket-kms-arn | Allow Hydrolix servers to decrypt a source bucket where a customer defined KMS key is required. Takes the ARN |
--batch-peer-threads | Specify the number of vCPU's a batch-peer should use for import jobs. Recommended to be kept as default. |
--bucket-allowlist | Enables the architecture to access other buckets. Buckets are provided by name only. For example: --bucket-allowlist mybucket1 --bucket-allowlist anotherbucket . This is not additive; any update will overwrite previous configurations. |
--deploy / --no-deploy | unused |
--ec2-detailed-monitoring | Turns off additional monitoring for Hydrolix EC2 components. Default true. |
--enable-query-auth | Enable query authorisation for requests to the query endpoint. Currently a placeholder and not in use. |
--enable-query-peer-hyperthreading | Enable hyperthreading on the query peer. Default disabled. |
--enable-turbine-monitor | Allow query components to monitor the Hydrolix query engine, restarting it if it hangs. |
--enable-grafana-cloudwatch | Enable cloudwatch metrics within Grafana. |
--help | Displays the help text for the command |
--import-max-receive-count | The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1). |
--import-queue-timeout | Specify the time for an individual job to timeout on the SQS queue. Recommended to be kept as default. |
--ip-allowlist | Sets IP allow lists on the appropriate security groups (BastionSecurityGroup and ELBSecurityGroup) for incoming connections. IPs are provided in CIDR notation. For example: --ip-allowlist 4.2.2.2/32 --ip-allowlist 8.8.8.0/24 . Note: if an allow list doesn't contain "0.0.0.0/0", the /32 IP of the NAT gateway is added automatically. This is not additive; any update will overwrite previous configurations. |
--kafka-tls-ca | Allows the addition of a TLS Certificate Authority (CA) for mutual identification of Hydrolix Kafka ingest. PEM Format |
--kafka-tls-cert | Allows the addition of a TLS Certificate for mutual identification of Hydrolix Kafka ingest. PEM format |
--kafka-tls-key | Allows the addition of a TLS Key for mutual identification of Hydrolix Kafka ingest. PEM format |
--listing-max-receive-count | The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1). |
--listing-queue-timeout | Specify the time for an import job to time out on the SQS queue. Recommended to be kept as default. |
--merge-interval | Specify the interval at which the Merge process triggers. Recommended to be kept as default. |
--merge-max-receive-count | The number of times a message is delivered to the queue before being moved to the dead-letter queue. Recommended to be kept as default (1). |
--merge-queue-timeout | Specify the maximum message retention period for the Merge queue. Recommended to be kept as default. Default 4 days. |
--reaper-queue-timeout | Specify the maximum message retention period for the Reaper queue. Recommended to be kept as default. Default 4 days. |
--ssh-authorized-keys | Allows the provision of a file in the format of .ssh/authorized_keys to be appended to all .ssh/authorized_keys files in the deployed infrastructure. |
--stream-shard-count | Alter the Ingest Streaming Shard count for Kinesis. Default 2 |
--superset-workers | The number of workers for handling Superset requests. Default 10. |
--superset-threads | The number of threads for each Superset web worker. Default 20. |
--superset-timeout | Superset web workers that are silent for more than this many seconds are killed and restarted. Default 60. |
--use-s3-kms-key | Boolean to enable the creation and use of a new key to encrypt the S3 bucket used by Hydrolix. Default false. |
--keep-legacy-kms-key | Boolean to enable / disable keeping the previously generated/used KMS key, useful for changing the key without losing access to previously encrypted partitions. Default true. |
--s3-kms-key-arn | Specify a specific KMS key to use on your S3 bucket, useful if you want custom settings or an external key generated in an HSM. |
--boundary-policy-arn | Specify the ARN used to set the maximum permissions that the Hydrolix policy is granted. |
--use-https-with-s3 | Use HTTPS when downloading partitions from S3; required if you use a custom KMS key. Default true. |
--tag | Additional tags you may require on your infrastructure, in the format TAG-NAME:TAG-VALUE. See also further notes about tags, below. |
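Because the allowlist options are not additive, any script that runs `update` should always pass the complete set of values. A minimal sketch of building the repeated `--ip-allowlist` flags from a list (the client ID `hdxcli-example123` is a hypothetical example):

```shell
# Build repeated --ip-allowlist flags from a space-separated list of CIDRs.
# Allow lists are not additive, so every update must pass the full set.
cidrs="4.2.2.2/32 8.8.8.0/24"
flags=""
for c in $cidrs; do
  flags="$flags --ip-allowlist $c"
done
# Print the command instead of running it; remove 'echo' to apply.
echo "hdxctl update$flags hdxcli-example123"
```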
update-self
Updates HDXCTL to the most current executable.
Usage
$ hdxctl update-self [OPTIONS]
Option | Description |
---|---|
-v, --version TEXT | Updates the HDXCTL tool to the version specified. |
--help | Displays the help text for the command. |
Example
$ hdxctl update-self
using-hdxreader
Reports whether a given cluster is using HDXReader.
Usage
$ hdxctl using-hdxreader [OPTIONS] CLIENT_ID CLUSTER_ID
Option | Description |
---|---|
--help | Displays the help text for the command. |
Example
$ hdxctl using-hdxreader hdxcli-eemdho3e hdx-tvgsb3a6
True
version
Displays the version of the hdxctl
program you are using.
Usage
$ hdxctl version [OPTIONS]
Option | Description |
---|---|
-a, --all | Displays all the version information for the stack |
--help | Displays the help text for the command |
Example
$ hdxctl version -a
----------------------- --------
SHARED_VERSION b5de4e88
UI_VERSION 7605aa1e
CONFIG_VERSION 1b43fda6
KEYCLOAK_VERSION 15a561b0
INTAKE_VERSION b5de4e88
TURBINE_VERSION b78e7185
LAMBDA_VERSION b5de4e88
CLI_DNS_SERVICE_VERSION 7181f171
SELF_DEPLOY_VERSION b5de4e88
MACHINE_VERSION b5de4e88
HUMAN_VERSION v2.9.3
SECRETS_FROM_S3_VERSION 15a561b0
ENVIRON_VERSION 15a561b0
LAST_TAG_NAME v2.9.3
LAST_TAG b5de4e88
TAG v2.9.3
GRAFANA_VERSION e1f7907d
PROMETHEUS_VERSION e1f7907d
SUPERSET_VERSION e1f7907d
TERRAFORM_VERSION f8011a06
----------------------- --------
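When scripting against this output, the two-column layout is easy to parse. A minimal sketch that extracts the human-readable stack version; a saved sample (values taken from the output above) stands in for the live command so the snippet is self-contained:

```shell
# Parse 'hdxctl version -a' style output. In practice you would pipe the
# live command: hdxctl version -a | awk '$1 == "HUMAN_VERSION" { print $2 }'
sample="HUMAN_VERSION v2.9.3
TURBINE_VERSION b78e7185"
human=$(printf '%s\n' "$sample" | awk '$1 == "HUMAN_VERSION" { print $2 }')
echo "$human"
```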
Further notes about tags
Using the --tag option with the update command erases all tags previously set on the cluster before applying the newly specified ones.
Several tag names are reserved for Hydrolix's internal use and cannot be set via the --tag option:
- aws:cloudformation:logical-id
- aws:cloudformation:stack-id
- HdxContact
- HdxService
- Name
- HdxBudget
- aws:cloudformation:stack-name
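Since an update with --tag replaces the full tag set, it can help to validate tag names against the reserved list before running the command. A minimal sketch (the tag name `Team` is a hypothetical example):

```shell
# Reserved tag names, from the list above.
reserved="aws:cloudformation:logical-id aws:cloudformation:stack-id HdxContact HdxService Name HdxBudget aws:cloudformation:stack-name"
tag_name="Team"   # hypothetical tag to check
case " $reserved " in
  *" $tag_name "*) result="reserved" ;;
  *)               result="ok" ;;
esac
echo "$result"
```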