HDXCLI Commands
Introduction
hdxcli is a command-line tool for working with Hydrolix projects and tables interactively. It supports common operations such as CRUD on projects, tables, transforms, and other resources.
Installation
You can install hdxcli from PyPI using pip:
pip install hdxcli
Or if you just need to update to the latest version:
pip install --upgrade hdxcli
System Requirements
Python version >= 3.10 is required. Make sure you have a suitable Python version installed before proceeding with the installation of hdxcli.
Useful Tips for Installing HDXCLI
As hdxcli currently requires specific Python versions, a tool called pyenv comes in handy. It allows you to effortlessly switch between installed Python versions and to install any available Python version. By using pyenv in conjunction with venv, you can easily create environments with a specific Python version.
For more information on pyenv, including installation instructions, refer to the pyenv GitHub repository, or the pyenv GitHub repository for Windows.
HDXCLI Usage
init Command
The init command sets up the initial configuration for hdxcli. It creates the necessary configuration directory and a default profile, allowing you to start using the CLI with your specific environment. By default, the configuration file is stored in a directory created automatically by the tool, but you can customize its location by setting the HDX_CONFIG_DIR environment variable.
Usage
When you run hdxcli init, you will be prompted to enter the following details:
- Cluster Hostname: Enter the hostname of the cluster you will be connecting to.
- Username: Provide your cluster's username, typically your email address.
- Protocol: Specify whether you will be using HTTPS by typing Y or N.
After entering these details, a configuration file will be generated with a profile named default and saved at the specified location (e.g., /path/to/your/config.toml).
Example
$ hdxcli init
================== HDXCLI Init ===================
A new configuration will be created now.
Please, type the host name of your cluster: my-cluster.example.com
Please, type the user name of your cluster: [email protected]
Will you be using https (Y/N): Y
Your configuration with profile [default] has been created at /path/to/your/config.toml
This command must be run before using other commands in hdxcli, as it sets up the essential connection parameters.
Command-line Tool Organization
The tool is organized mostly with the general invocation form of:
hdxcli <resource> [subresource] <verb> [resource_name]
Table and project resources have defaults that depend on the profile you are working with, so they can be omitted if you previously used the set command.
For all other resources, you can use --transform, --dictionary, --source, etc. Please see the command-line help for more information.
Profiles
hdxcli supports multiple profiles. You can use the default profile or pass the --profile option to operate on a non-default profile.
When invoking a command, if a login to the server is necessary, a prompt will be shown and the token will be cached.
Listing and Showing Profiles
Listing profiles:
hdxcli profile list
Showing default profile:
hdxcli profile show
Projects, Tables, and Transforms
The basic operations you can do with these resources are:
- list them
- create a new resource
- delete an existing resource
- modify an existing resource
- show a resource in raw JSON format
- show settings from a resource
- write a setting
- show a single setting
Working with Transforms
You can create and override transforms with the following commands.
Create a transform:
hdxcli transform create -f <transform-settings-file>.json <transform-name>
Remember that a transform is applied to a table in a project, so whatever you set with the command line tool will be the target of your transform.
If you want to override it, specify the table name with the --table option:
hdxcli transform --project <project-name> --table <table-name> create -f <transform-settings>.json <transform-name>
Migration Command for Hydrolix Tables
This command provides a way to migrate Hydrolix tables and their data to a target cluster, or even within the same cluster. You only need to pass the source and target table names in the format project_name.table_name. The migration process handles creating the project, table, and transforms at the target location. It then copies the partitions from the source bucket to the target bucket and finally loads the catalog so that Hydrolix can associate the created table with the migrated partitions.
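As a quick sketch (not part of hdxcli itself), the expected project_name.table_name identifier format can be checked like this:

```python
def parse_qualified_table(name: str) -> tuple[str, str]:
    """Split a 'project_name.table_name' identifier into its two parts."""
    project, sep, table = name.partition(".")
    if not sep or not project or not table:
        raise ValueError(f"expected 'project_name.table_name', got {name!r}")
    return project, table
```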
If --target-profile is not provided, or if --target-hostname, --target-username, --target-password, and --target-uri-scheme are not passed, the tool assumes that the migration is performed within the same cluster.
Usage
hdxcli migrate [OPTIONS] SOURCE_TABLE TARGET_TABLE
Options
-tp, --target-profile
-h, --target-hostname
-u, --target-username
-p, --target-password
-s, --target-uri-scheme
--allow-merge Allow migration if the merge setting is enabled
--only The migration type
--min-timestamp Minimum timestamp for filtering partitions
--max-timestamp Maximum timestamp for filtering partitions
--recovery Continue a previous migration
--reuse-partitions Perform a dry migration without moving partitions
--workers Number of worker threads to use for migrating partitions
--allow-merge
This flag allows skipping the check for the merge setting being enabled on the source table.
--only
This option expects either resources or data. If resources is selected, only the resources (project, table, and transforms) will be migrated. If data is selected, only the data will be migrated, and the resources must already exist.
--min-timestamp and --max-timestamp
These options help filter the partitions to be migrated. They expect dates in the format YYYY-MM-DD HH:MM:SS.
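That format corresponds to Python's %Y-%m-%d %H:%M:%S strptime pattern; a small sketch (not part of hdxcli) for validating a value before passing it:

```python
from datetime import datetime

def check_migration_timestamp(value: str) -> datetime:
    """Validate a --min-timestamp/--max-timestamp value; raises ValueError
    if it does not match the 'YYYY-MM-DD HH:MM:SS' format."""
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
```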
--recovery
This flag allows resuming a previous migration that did not complete successfully for any reason.
--reuse-partitions
This option enables dry migration. Both the source and target clusters must share the storage where the table's partitions are located. This allows migrating the table to the target cluster while reusing the partitions from the source cluster without creating new ones. This results in an almost instant migration but requires that the same partitions are shared by different tables across clusters. Note: Modifying data in one table may cause issues in the other.
--workers
This option allows manually setting the number of workers available for partition migration. The default number of workers is 10, with a minimum of 1 and a maximum of 50. Note: Generally, having a large number of workers is beneficial when dealing with many small partitions.
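Conceptually, partition migration with worker threads behaves like a thread pool mapped over the partition list; this is a simplified sketch, not hdxcli's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def migrate_partitions(partitions, copy_one, workers=10):
    """Copy each partition concurrently, clamping the worker count to
    hdxcli's documented 1..50 range (default 10). Many small partitions
    benefit most from a high worker count."""
    workers = max(1, min(50, workers))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves input order, so results line up with partitions
        return list(pool.map(copy_one, partitions))
```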
Supported Cloud Storages:
- AWS
- GCP
- Azure
- Linode
During the migration process, credentials to access these clouds will likely be required. These credentials need to be provided when prompted:
- AWS: AWS_ACCESS_KEY_ID and AWS_ACCESS_SECRET_ID
- GCP: GOOGLE_APPLICATION_CREDENTIALS
- Azure: CONNECTION_STRING
- Linode: AWS_ACCESS_KEY_ID and AWS_ACCESS_SECRET_ID (same format as AWS)
Pre-Migration Checks and Validations
Before creating resources and migrating partitions, the following checks are performed:
- The source table does not have the merge setting enabled (use --allow-merge to skip this validation)
- There are no running alter jobs on the source table
- If filtering is applied, it validates that there are partitions remaining to migrate after filtering
- If using the --reuse-partitions option, it checks that the storage where the partitions are located is shared between both clusters
Migrating Individual Resources
This command migrates an individual resource from one cluster to another (or even in the same one). This migrate command is available in almost all resources (project, table, transform, function, dictionary, storage), and it clones (with the same settings but different UUID) any resource to a target cluster.
hdxcli project migrate <project-name> --target-cluster-hostname <target-cluster> --target-cluster-username <username> --target-cluster-password <password> --target-cluster-uri-scheme <http/https>
This command also accepts profiles as the target cluster, specified like this: -tp, --target-cluster.
hdxcli project migrate <project-name> -tp <profile-name>
When migrating a table, it is necessary to provide the name of the project in the target cluster where the table will be migrated. This option must be provided: -P, --target-project-name.
hdxcli table --project <project-name> migrate <table-name> -tp <profile-name> -P <project-name>
Mapping DDLs to a Hydrolix Transform
The transform map-from command consumes data definition languages such as SQL, Elastic, and others and creates a Hydrolix transform from them.
hdxcli transform map-from --ddl-custom-mapping <sql_to_hdx_mapping>.json <ddl_file> <transform-name>
There are three things involved:
- The mapping file sql_to_hdx_mapping.json: tells how to map simple and compound types from the source DDL into Hydrolix.
- The input file ddl_file: in SQL, Elastic, or another language.
- The target table: the table to which the transform is applied. The current table is used if none is provided.
Mapping File
The mapping file contains two sections:
simple_datatypes
compound_datatypes
An example of this for SQL could be the following:
{
"simple_datatypes": {
"INT": ["uint8", "uint16", "uint32", "int8", "int16", "int32_optimal"],
"BIGINT":["uint64", "int64_optimal"],
"STRING": "string",
"DATE": "datetime",
"REAL": "double",
"DOUBLE": "double",
"FLOAT": "double",
"BOOLEAN": "boolean",
"TIMESTAMP": "datetime",
"TEXT": "string"
},
"compound_datatypes": {
"ARRAY": "array",
"STRUCT": "map",
"MAP": "map",
"VARCHAR": "string"
}
}
For mappings whose value is a list, the entry ending in _optimal is the one used; the suffix is just a stand-in for a comment, since JSON does not allow comments.
Parsing compound_datatypes requires code support to be completed. This is where the extensible interfaces for DDLs come in.
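The selection rule for list-valued mappings can be sketched as follows; this is an illustration of the convention described above, and the actual hdxcli code may differ (including whether the _optimal suffix is stripped):

```python
def resolve_simple_type(mapping_value):
    """Resolve a simple_datatypes mapping entry to a single Hydrolix type.

    When the value is a list, the entry ending in '_optimal' marks the
    chosen type (here the marker suffix is stripped); a plain string is
    used as-is.
    """
    if isinstance(mapping_value, list):
        for candidate in mapping_value:
            if candidate.endswith("_optimal"):
                return candidate[: -len("_optimal")]
        return mapping_value[0]  # no marked entry: fall back to the first
    return mapping_value
```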
DDL File as Input
The ddl_file specifies the structure of a data storage entity, such as a SQL table or an Elasticsearch mapping. It should include the fields, their associated data types, and any additional constraints. This specification is pivotal for accurate mapping to Hydrolix transforms.
The following DDL SQL could be an example of this:
CREATE TABLE a_project.with_a_table (
account_id STRING,
device_id STRING,
playback_session_id STRING,
user_session_id STRING,
user_agent STRING,
timestamp TIMESTAMP,
start_timestamp BIGINT,
another_time TIMESTAMP PRIMARY KEY,
end_timestamp BIGINT,
video_ranges ARRAY<STRING>,
playback_started BOOLEAN,
average_peak_bitrate BIGINT,
bitrate_changed_count INT,
bitrate_change_list ARRAY<BIGINT>,
bitrate_change_timestamp_list ARRAY<BIGINT>,
start_bitrate BIGINT,
fetch_and_render_media_segment_duration BIGINT,
fetch_and_render_media_segment_start_timestamp BIGINT,
re_buffer_start_timestamp_list ARRAY<BIGINT>,
exit_before_ad_start_flag BOOLEAN,
ingest_utc_hour INT,
ingest_utc_date DATE)
USING delta
PARTITIONED BY (ingest_utc_date, ingest_utc_hour)
LOCATION 's3://...'
TBLPROPERTIES ('delta.minReaderVersion' = '1',
'delta.minWriterVersion' = '2');
User Choices File
After transform map-from does the type mappings, the result might need some tweaks that are user choices. There are two ways to provide these choices: interactively (the default), or through a user choices file passed with the --user-choices option. The file is a JSON file with the keys described below.
The user choices are applied as a post-processing step. Some options are shared by all DDLs, while others are specific to a single DDL. Examples of user choices are:
- General: the ingest index for each field, if CSV is used
- Elastic: adding fields whose cardinality will potentially be more than one, since this requires adjusting the output of the algorithm to change simple types to an array mapping
There are two kinds of user choices: general ones and DDL-specific ones. DDL-specific user choices are prefixed with the name of the DDL (the same name you pass to the -s option on the command line): elastic for Elastic, and sql for SQL.
| User Choice Key | Example Value | Purpose |
|---|---|---|
| ingest_type | 'json'. Valid values are 'json', 'csv'. | Tell the transform the expected data type for ingestion. |
| csv_indexes | [["field1", 0], ["field2", 1], ...] (array of arrays with field name and index) | Know at which index ingest happens for each field in the transform. Applies to CSV. |
| csv_delimters | ',' (a string that separates CSV fields) | Delimit fields in CSV format. |
| elastic.array_fields | ['field1', 'field2', 'some_field.*regex'] (fields considered arrays in Elastic mappings) | In Elastic, all fields have 0..* cardinality by default. Hydrolix will map all to cardinality one except the ones indicated in this user choice, which will be mapped to arrays of that type. |
| compression | 'gzip' | Which compression algorithm to use. |
| primary_key | 'your_key' | The primary key for the transform. |
| add_ignored_fields_as_string_columns | true/false | Whether ignored fields should be added as string columns. |
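For illustration, a user choices file for a CSV ingest could look like the following; the field names and values are hypothetical, assembled from the keys in the table above:

```json
{
  "ingest_type": "csv",
  "csv_indexes": [["field1", 0], ["field2", 1]],
  "csv_delimters": ",",
  "compression": "gzip",
  "primary_key": "field1"
}
```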
Ingest
Batch Job
Create a batch job:
hdxcli job batch ingest <job-name> <job-settings>.json
job-name is the name of the job that will be displayed when listing batch jobs. job-settings is the path to the file containing the specifications required to create that ingestion (for more information on the required specifications, see the Hydrolix API Reference).
In this case, the project, table, and transform are omitted. hdxcli will use the default transform within the project and table previously configured in the profile with the set command. Otherwise, you can add --project <project-name> --table <table-name> --transform <transform-name>.
This allows you to execute the command as follows:
hdxcli job batch --project <project-name> --table <table-name> --transform <transform-name> ingest <job-name> <job-settings>.json
Stream
Create the streaming ingest as follows:
hdxcli stream --project <project-name> --table <table-name> --transform <transform-name> ingest <data-file>
data-file is the path of the data file to be used for the ingest. This can be a .csv, .json, or compressed file. The transform must have the matching configuration (type and compression).
Commands
Profile
There is a 'default' profile that will be used if you omit --profile <profile-name> on each request to HDXCLI.
List
hdxcli profile list
Add
One way is to do it interactively:
hdxcli profile add <profile-name>
Please, type the host name of your cluster: <host-name>
Please, type the user name of your cluster: <user-name>
Will you be using https (Y/N): <Y/N>
Or by passing the same information via options:
hdxcli profile add <profile-name> --hostname <hostname> --username <username> --scheme <http/s>
Show
If you omit --profile <profile-name>, the 'default' profile will be shown.
hdxcli --profile <profile-name> profile show
Set/Unset
hdxcli allows you to save both a project name and a table name to use as defaults.
Set
hdxcli set <project-name> <table-name>
Unset
hdxcli unset
From now on, when you see --project <project-name> and --table <table-name>, you know they can be omitted if the set command was previously used.
User
List
hdxcli user list
Delete
hdxcli user delete <user-email>
Show
hdxcli user --user <user-email> show
Assign roles
You can assign one or multiple roles to a specific user. The option -r, --role can be used multiple times. It's a required setting.
hdxcli user assign-role <user-email> --role <role-name>
Remove roles
The same as the assign-role command, but it removes roles from a user.
hdxcli user remove-role <user-email> --role <role-name>
Invite
List
The -p or --pending option can be used to filter the invite list, displaying only pending invitations.
hdxcli user invite list
Send
The -r or --role option can be used multiple times. It is a required setting.
hdxcli user invite send <user-email> --role <role-name>
Resend
hdxcli user invite send <user-email>
Delete
hdxcli user invite delete <user-email>
Show
hdxcli user invite --user <user-email> show
Role
List
hdxcli role list
Create
There are two ways to create a new role:
- Using options. However, with this method the role cannot have two or more policies.
- Using the interactive method.
Create a role using options
-n, --name and -p, --permission are required options to create a new role, and the -p, --permission option can be used multiple times. Since a role may or may not have a specific scope, the scope options are optional: -t, --scope-type expects the name of a resource type, such as project, table, or transform, and -i, --scope-id should be the UUID identifier of that specific resource.
Use the following command to create a new role:
hdxcli role create --name <role-name> --scope-type <scope-type> --scope-id <scope-id> --permission <permission-name>
Create a role using interactive method
To create a role interactively, use the following command:
hdxcli role create
Enter the name for the new role: <role-name>
Adding a Policy, does it have a specific scope? [Y/n]: <y/n>
Specify the type of scope for the role (e.g., project): <scope-type>
Provide the 'uuid' for the specified scope: <scope-id>
1 - commit_alterjob
2 - change_kinesissource
3 - ...
n - All of them
Enter the numbers corresponding to the permissions you want to add (comma-separated): <e.g., 1,2 or n>
Do you want to add another Policy? [Y/n]: <y/n>
Review Role Details
Role Name: <role-name>
Policy 1:
Scope Type: <scope-type>
Scope ID: <scope-id>
Permissions: <[permissions]>
Confirm the creation of the new role? [Y/n]: <y/n>
Created role <role-name>
Delete
hdxcli role delete <role-name>
Show
hdxcli role --role <role-name> show
Edit
This is an interactive command that allows you to modify the name of a role and add, modify, or delete policies associated with that role. Before finalizing the operation, it presents a detailed view of the modified role for confirmation or cancellation.
hdxcli role edit <role-name>
Add users
Allows you to add one or multiple users to a specific role. The option -u, --user can be used multiple times. It's required.
hdxcli role add-user <role-name> --user <user-email>
Remove users
The same as the add-user command, but it removes users from a role.
hdxcli role remove-user <role-name> --user <user-email>
Permissions list
This command displays a list of available permissions. Additionally, it allows filtering by type using the option -t, --type (e.g., -t function), showing only permissions related to that specific type.
hdxcli role permission list
Project
List
hdxcli project list
Create
hdxcli project create <project-name>
Delete
hdxcli project delete <project-name>
Display activity
hdxcli project --project <project-name> activity
Display statistics
hdxcli project --project <project-name> stats
Show
hdxcli project --project <project-name> show
Settings
Display all settings
hdxcli project --project <project-name> settings
Display single setting
hdxcli project --project <project-name> settings <setting-name>
Modify setting
hdxcli project --project <project-name> settings <setting-name> <new-value>
Migrate
hdxcli project migrate <project-name> -tp <profile-name>
Table
List
hdxcli table --project <project-name> list
Create Raw Table (Regular)
If creating a regular table, no additional options are required. Use the table create command without specifying any options.
hdxcli table --project <project-name> create <table-name>
Additionally, the --settings-file option exists to set table settings other than the defaults. This works for both regular tables and summary tables:
hdxcli table --project <project-name> create <table-name> --settings-file <settings-file>.json
Create Aggregation Table (Summary)
When creating summary tables, the following options are necessary:
- --type or -t: specify as summary.
- --sql-query or -s: provide the SQL query directly on the command line. Alternatively, use --sql-query-file to specify a file containing the SQL query.
Example using a SQL query in the command line:
hdxcli table --project <project-name> create <table-name> --type summary --sql-query <sql-query>
Example using a file containing the SQL query:
hdxcli table --project <project-name> create <table-name> --type summary --sql-query-file <sql-query-file>.txt
Delete
hdxcli table --project <project-name> delete <table-name>
Display Activity
hdxcli table --project <project-name> --table <table-name> activity
Display Statistics
hdxcli table --project <project-name> --table <table-name> stats
Show
hdxcli table --project <project-name> --table <table-name> show
Settings
Display All Settings
hdxcli table --project <project-name> --table <table-name> settings
Display Single Setting
hdxcli table --project <project-name> --table <table-name> settings <setting-name>
Modify Setting
hdxcli table --project <project-name> --table <table-name> settings <setting-name> <new-value>
Migrate
hdxcli table --project <project-name> migrate <table-name> -tp <profile-name> -P <target-project-name>
Truncate
hdxcli table --project <project-name> truncate <table-name>
Transform
List
hdxcli transform --project <project-name> --table <table-name> list
Create
The field 'name' in settings will be replaced by <transform-name>.
hdxcli transform --project <project-name> --table <table-name> create -f <transform-settings>.json <transform-name>
Delete
hdxcli transform --project <project-name> --table <table-name> delete <transform-name>
Show
hdxcli transform --project <project-name> --table <table-name> --transform <transform-name> show
Settings
Display All Settings
hdxcli transform --project <project-name> --table <table-name> --transform <transform-name> settings
Display Single Setting
hdxcli transform --project <project-name> --table <table-name> --transform <transform-name> settings <setting-name>
Modify setting
hdxcli transform --project <project-name> --table <table-name> --transform <transform-name> settings <setting-name> <new-value>
Migrate
hdxcli transform --project <project-name> --table <table-name> migrate <transform-name> -tp <profile-name> -P <target-project-name> -T <target-table-name>
Map-from
The transform map-from command consumes data definition languages such as SQL, Elastic, and others and creates a Hydrolix transform from them.
hdxcli transform map-from --ddl-custom-mapping <sql_to_hdx_mapping>.json <ddl_file> <transform-name>
To learn more about the map-from command, see Mapping DDLs to a Hydrolix Transform.
Jobs
Batch
List
hdxcli job batch list
Ingest
It will use the default transform in <project-name>.<table-name> if you don't provide --transform <transform-name>.
hdxcli job batch --project <project-name> --table <table-name> --transform <transform-name> ingest <job-name> <job-settings>.json
Delete
hdxcli job batch delete <job-name>
Retry
hdxcli job batch retry <job-name>
Cancel
hdxcli job batch cancel <job-name>
Show
hdxcli job batch --job <job-name> show
Settings
Display all settings
hdxcli job batch --job <job-name> settings
Display single setting
hdxcli job batch --job <job-name> settings <setting-name>
Modify setting
hdxcli job batch --job <job-name> settings <setting-name> <new-value>
Alter
List
hdxcli job alter list
In addition, you can add the following options to filter the result of the alter job list command:
--status TEXT Filter alter jobs by status.
--project TEXT Filter alter jobs by project name.
--table TEXT Filter alter jobs by table name.
For example, to filter by status running, you would use:
hdxcli job alter list --status running
Create
There are two ways to create an alter job, each serving a different purpose. The update command allows you to modify the value of a column based on a specified where clause. The delete command, as you might expect, provides a way to remove rows based on a where clause. After creating an alter job, you must commit it to apply the modifications.
Update
hdxcli job alter create update --table <project-name>.<table-name> --column <column-name> --value <value> --where <where-clause>
Delete
hdxcli job alter create delete --table <project-name>.<table-name> --where <where-clause>
Commit
hdxcli job alter commit <job-name>
Cancel
hdxcli job alter cancel <job-name>
Retry
hdxcli job alter retry <job-name>
Show
hdxcli job alter show <job-name>
Delete
hdxcli job alter delete <job-name>
Purgejobs
This command purges all batch jobs in your org.
hdxcli job purgejobs
Please type 'purge all jobs' to proceed: purge all jobs
All jobs purged
Stream
Ingest
hdxcli stream --project <project-name> --table <table-name> --transform <transform-name> ingest <data-file>
Sources
Kinesis / Kafka / SIEM
The command structure for these resources is the same; simply replace kinesis with kafka or siem.
List
hdxcli sources kinesis --project <project-name> --table <table-name> list
Create
The field 'name' in settings will be replaced by <source-name>.
hdxcli sources kinesis --project <project-name> --table <table-name> create <source_settings>.json <source-name>
Delete
hdxcli sources kinesis --project <project-name> --table <table-name> delete <source-name>
Show
hdxcli sources kinesis --project <project-name> --table <table-name> --source <source-name> show
Settings
Display All Settings
hdxcli sources kinesis --project <project-name> --table <table-name> --source <source-name> settings
Display single setting
hdxcli sources kinesis --project <project-name> --table <table-name> --source <source-name> settings <setting-name>
Modify setting
hdxcli sources kinesis --project <project-name> --table <table-name> --source <source-name> settings <setting-name> <new-value>
Pool
List
hdxcli pool list
Create
pool-service must contain the name of the service (query-head, query-peer, etc.; for more information see the Hydrolix API Reference). pool-name is the name of the pool; the field 'name' in settings will be replaced by pool-name.
These options are not required since they have default values. Override the defaults with these options:
Options
-r, --replicas INTEGER   Number of replicas for the workload (default: 1)
-c, --cpu FLOAT          Dedicated CPU allocation for each replica (default: 0.5)
-m, --memory FLOAT       Dedicated memory allocation for each replica, expressed in Gi (default: 0.5)
-s, --storage FLOAT      Storage capacity for each replica, expressed in Gi (default: 0.5)
hdxcli pool create <pool-service> <pool-name>
An example modifying the default options would be:
hdxcli pool create <pool-service> <pool-name> -r 5 -c 1 -m 1 -s 2
Delete
hdxcli pool delete <pool-name>
Show
hdxcli pool --pool <pool-name> show
Settings
Display all settings
hdxcli pool --pool <pool-name> settings
Display single setting
hdxcli pool --pool <pool-name> settings <setting-name>
Modify setting
hdxcli pool --pool <pool-name> settings <setting-name> <new-value>
Dictionary
List
hdxcli dictionary --project <project-name> list
Create
<dictionary-settings> must contain all required fields (for more information on the required specifications, see the Hydrolix API Reference). <dictionary-filename> is the name of the dictionary file previously uploaded to Hydrolix. The field 'name' in settings will be replaced by <dictionary-name>.
hdxcli dictionary --project <project-name> create <dictionary-settings>.json <dictionary-filename> <dictionary-name>
Delete
hdxcli dictionary --project <project-name> delete <dictionary-name>
Show
hdxcli dictionary --project <project-name> --dictionary <dictionary-name> show
Settings
Display all settings
hdxcli dictionary --project <project-name> --dictionary <dictionary-name> settings
Display single setting
hdxcli dictionary --project <project-name> --dictionary <dictionary-name> settings <setting-name>
Modify setting
hdxcli dictionary --project <project-name> --dictionary <dictionary-name> settings <setting-name> <new-value>
Migrate
hdxcli dictionary --project <project-name> migrate <dictionary-name> -tp <profile-name> -P <target-project-name>
Dictionary Files
List
hdxcli dictionary --project <project-name> files list
Upload
hdxcli supports two formats for <dictionary-file-to-upload>: JSON and CSV.
If the format is JSON, you don't need to specify it.
hdxcli dictionary --project <project-name> files upload <dictionary-file-to-upload>.json <dictionary-filename>
Otherwise, if the format is CSV, you must use -t verbatim or --body-from-file-type verbatim.
hdxcli dictionary --project <project-name> files upload -t verbatim <dictionary-file-to-upload>.csv <dictionary-filename>
Delete
hdxcli dictionary --project <project-name> files delete <dictionary-filename>
Function
List
hdxcli function --project <project-name> list
Create
There are two ways to create a new function:
- Passing the function on the command line using the -s or --inline-sql option:
hdxcli function --project <project-name> create -s '<inline-sql>' <function-name>
- Using a JSON file with the function settings via the -f or --sql-from-file option. The field 'name' in settings will be replaced by <function-name>:
hdxcli function --project <project-name> create -f <function-settings>.json <function-name>
Delete
hdxcli function --project <project-name> delete <function-name>
Show
hdxcli function --project <project-name> --function <function-name> show
Settings
Display all settings
hdxcli function --project <project-name> --function <function-name> settings
Display single setting
hdxcli function --project <project-name> --function <function-name> settings <setting-name>
Modify setting
hdxcli function --project <project-name> --function <function-name> settings <setting-name> <new-value>
Migrate
hdxcli function --project <project-name> migrate <function-name> -tp <profile-name> -P <target-project-name>
Storage
List
hdxcli storage list
Create
There are two available methods to create a new storage: using a file containing the storage configuration, or passing the configuration via the command line.
- Using a settings file (-f, --settings-filename):
hdxcli storage create <storage-name> --settings-filename <storage-settings>.json
- Passing the configuration via the command line:
hdxcli storage create <storage-name> --bucket-path <bucket-path> --bucket-name <bucket-name> --region <region> --cloud <cloud>
Delete
hdxcli storage delete <storage-name>
Show
hdxcli storage --storage <storage-name> show
Settings
Display all settings
hdxcli storage --storage <storage-name> settings
Display single setting
hdxcli storage --storage <storage-name> settings <setting-name>
Modify setting
hdxcli storage --storage <storage-name> settings <setting-name> <new-value>
Migrate
hdxcli storage migrate <storage-name> -tp <profile-name>
Query-option
Query options operations at org-level. For more information, refer to Query Options documentation.
List
hdxcli query-option list
Set
hdxcli query-option set <query-option-name> <query-option-value>
The --from-file option is also available to configure several query options at once.
hdxcli query-option set --from-file <query-option-file>.json
This query option file must be a JSON file with the following structure:
{
  "<query-option-name>": <query-option-value>,
  ...
}
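For example, a file setting two options at once might look like this; the option names below are illustrative, so check the Query Options documentation for the valid names in your cluster:

```json
{
  "hdx_query_max_execution_time": 60,
  "hdx_query_max_rows": 1000000
}
```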
Unset
To unset a query option, one only needs to use the following command:
hdxcli query-option unset <query-option-name>
Additionally, the --all option unsets all query options.
Integration
List
Allows you to list all transforms available in the repository.
hdxcli integration transform list
Apply
With this command you can create a new transform using the settings of a transform hosted in the repository.
hdxcli integration transform --project <project-name> --table <table-name> apply <public-transform-name> <transform-name>
Show
hdxcli integration transform show <public-transform-name>
Migrate
Migrate table within the same cluster:
hdxcli migrate <source-project-name>.<source-table-name> <target-project-name>.<target-table-name>
Migrate a table to another cluster, using --target-profile:
hdxcli migrate <source-project-name>.<source-table-name> <target-project-name>.<target-table-name> --target-profile <profile-name>
Or passing host information:
hdxcli migrate <source-project-name>.<source-table-name> <target-project-name>.<target-table-name> --target-hostname <hostname> --target-username <username> --target-password <password> --target-uri-scheme <http/s>
To learn more about the migrate command, see Migration Command for Hydrolix Tables.
Textual User Interface
To activate the hdxcli textual user interface, run:
hdxcli tui
Version
hdxcli version
FAQ: Common Operations
Showing Help
In order to see what you can do with the tool:
hdxcli --help
Check which commands are available for each resource by typing:
hdxcli <resource-name> --help
HDX_CONFIG_DIR
This environment variable specifies the location of the files used by hdxcli to store configuration profiles. The default directory path is ~/.hdx_cli/.
Performing Operations Against Another Server
If you want to use hdxcli against another server, use the --profile option:
hdxcli --profile <profile-name> project list
Delete Resources
When you want to delete a resource, hdxcli will prompt you to type delete this resource to confirm the deletion, like this:
hdxcli project delete <project-name>
Please type 'delete this resource' to delete: delete this resource
Deleted <project-name>
If you want to skip the confirmation prompt, add --disable-confirmation-prompt as follows:
hdxcli project delete <project-name> --disable-confirmation-prompt
Deleted <project-name>
Obtain Indented Resource Information
When using the 'show' command on any resource, the output appears as follows:
hdxcli project --project <project-name> show
{"name": "project-name", "org": "org-uuid", "description": "description", "uuid": "uuid", ...}
If you want an indented JSON version, simply add the -i, --indent option:
hdxcli project --project <project-name> show --indent
{
"name": "project-name",
"org": "org-uuid",
"description": "description",
"uuid": "uuid",
...,
}
Note: The --indent option no longer requires an integer value; it is now a boolean option for simplicity.
Configuring Request Timeout
By default, HDXCLI waits 30 seconds for a request to complete. You can increase this timeout if the cluster takes more time to respond. The --timeout INT option specifies the duration in seconds and can be used with any command by placing it after the root command, as follows:
hdxcli --timeout 60 project delete <project-name> --disable-confirmation-prompt
hdxcli --timeout 120 migrate <target-cluster-username> <target-cluster-hostname> -p <target-cluster-password> -u <target-cluster-uri-scheme> -b <project-1-to-migrate> -b <project-2-not-to-migrate>
Debug Mode
The --debug option allows you to run commands in debug mode, providing additional information for troubleshooting. When enabled, hdxcli displays detailed debugging information, such as request and response details, which can be helpful for diagnosing issues.
hdxcli --debug command [...]