Hydrolix Tunables List

A listing of the tunables used by Hydrolix. These tunables are set in the hydrolixcluster.yaml configuration file, under `spec:`.
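
As a minimal sketch, assuming the `HydrolixCluster` custom resource (`apiVersion: hydrolix.io/v1`) consumed by the Hydrolix operator, a hydrolixcluster.yaml that sets a few of the tunables below might look like this. The cluster name, namespace, email, bucket, and CIDR range are illustrative placeholders:

```yaml
apiVersion: hydrolix.io/v1
kind: HydrolixCluster
metadata:
  name: hdx                        # illustrative cluster name
  namespace: hdx                   # illustrative namespace
spec:
  admin_email: admin@example.com   # tunables from the table below live under spec:
  db_bucket_url: gs://my-bucket
  kubernetes_profile: gke
  scale_profile: eval
  ip_allowlist:
    - 203.0.113.0/24               # illustrative CIDR range
```
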

| Tunable Name | Description | Default | Examples |
| --- | --- | --- | --- |
| `acme_enabled` | Automatically generate and renew SSL certs for your Hydrolix domain. Overrides any existing Kubernetes secret named `traefik-tls`. | `False` | |
| `admin_email` | The email address of the Hydrolix cluster administrator. | | |
| `autoingest_unique_file_paths` | Enable unique file paths from object store by ignoring duplicate paths. | `False` | |
| `aws_credentials_method` | DEPRECATED: Use `db_bucket_credentials_method`. | | `["static", "instance_profile"]` |
| `aws_load_balancer_subnets` | Subnets to assign to the load balancer of the Traefik service when running in EKS. | | `["subnet-xxxx,mySubnet"]` |
| `aws_load_balancer_tags` | Additional tags to add to the load balancer of the Traefik service when running in EKS. | | `["Environment=dev,Team=test"]` |
| `traefik_service_annotations` | Additional annotations for the Traefik service. | `{}` | |
| `traefik_service_cors_headers` | Optional key-value pairs of CORS headers. | `{}` | |
| `traefik_service_custom_response_headers` | Optional key-value pairs of custom headers applied to the response. | `{}` | |
| `azure_blob_storage_account` | The storage account to access an Azure blob storage container. | | |
| `basic_auth` | A list of Hydrolix services that should be protected with basic auth when accessed over HTTP. | `[]` | |
| `batch_peer_heartbeat_period` | How frequently a batch peer should heartbeat any task it's working on, expressed as a duration string. | `5m` | |
| `bucket` | DEPRECATED: Use `db_bucket_url`. | | |
| `client_id` | DEPRECATED: Use `hydrolix_name` and `db_bucket_url`. | | |
| `catalog_db_admin_user` | The admin user of the PostgreSQL server where Hydrolix metadata is stored. | `turbine` | |
| `catalog_db_admin_db` | The default database of the admin user on the PostgreSQL server where Hydrolix metadata is stored. | `turbine` | |
| `catalog_db_host` | The PostgreSQL server where Hydrolix metadata is stored. | `postgres` | |
| `catalog_db_port` | The PostgreSQL server port where Hydrolix metadata is stored. | `5432` | |
| `catalog_intake_connections` | Connection pool settings for intake services that connect to the PostgreSQL server where Hydrolix metadata is stored. Available options: `max_lifetime` (the max duration a connection can live before being recycled), `max_idle_time` (the max duration a connection can be idle before being closed), `max` (the max number of connections each intake service can open to the PostgreSQL server), `min` (the minimum number of connections to keep open), and `check_writable` (if true, verify when a connection is opened that the server can handle writes). | | `{"max_lifetime": "10m", "max_idle_time": "1m"}` |
| Tunable Name | Description | Default | Examples |
| --- | --- | --- | --- |
| `clickhouse_http_port` | The dedicated port for the ClickHouse HTTP interface. | `8088` | |
| `data_service_termination_grace_period` | Termination grace period for most data services. | `120` | |
| `db_bucket_credentials_method` | The method Hydrolix uses to acquire credentials for connecting to cloud storage. | `web_identity` | `["static", "ec2_profile", "web_identity"]` |
| `db_bucket_endpoint` | The endpoint URL for S3-compatible object storage services. Not required if using AWS S3 or if `db_bucket_url` is provided. | | |
| `db_bucket_name` | The name of the bucket for Hydrolix to store data in. Not required if `db_bucket_url` is provided. | | |
| `db_bucket_region` | The region of the storage bucket. Not required if it can be inferred from `db_bucket_url`. | | `["us-east-2", "us-central1"]` |
| `db_bucket_type` | The object storage type of the bucket you would like Hydrolix to store data in. Not required if `db_bucket_url` is provided. | | `["gs", "s3"]` |
| `db_bucket_url` | The URL of the cloud storage bucket you would like Hydrolix to store data in. | | `["gs://my-bucket", "s3://my-bucket", "https://my-bucket.s3.us-east-2.amazonaws.com", "https://s3.us-east-2.amazonaws.com/my-bucket", "https://my-bucket.us-southeast-1.linodeobjects.com", "https://minio.local/my-bucket"]` |
| `db_bucket_use_https` | If true, use HTTPS when connecting to the cloud storage service. Inferred from `db_bucket_url` if possible. | `True` | |
| `default_query_pool` | Name of the default query pool. | `query-peer` | |
| `dns_server_ip` | The IP address of the DNS server used for performance-critical purposes. | | |
| `use_hydrolix_dns_resolver` | If true, use the Hydrolix DNS resolver. If false, use the system resolver. | `True` | |
| `dns_gcs_max_ttl_secs` | Max DNS TTL for GCS storage: the longest period of time for which the DNS resolver can cache a DNS record before it expires and needs to be refreshed. A value of 0 means the DNS cache strictly respects the TTL from the DNS query response. | `0` | |
| `dns_aws_max_ttl_secs` | Max DNS TTL for AWS and S3-compatible storage. A value of 0 means the DNS cache strictly respects the TTL from the DNS query response. | `0` | |
| `dns_azure_max_ttl_secs` | Max DNS TTL for Azure storage. A value of 0 means the DNS cache strictly respects the TTL from the DNS query response. | `0` | |
| `dns_gcs_max_resolution_attempts` | Maximum number of attempts made by the DNS resolver for GCS storage in a given DNS refresh cycle. | `1` | |
| `dns_aws_max_resolution_attempts` | Maximum number of attempts made by the DNS resolver for AWS and all S3-compatible storage in a given DNS refresh cycle. | `1` | |
| `dns_azure_max_resolution_attempts` | Maximum number of attempts made by the DNS resolver for Azure storage in a given DNS refresh cycle. | `1` | |
| `domain` | DEPRECATED: Use `hydrolix_url`. | | |
| `disable_disk_cache` | If true, query peers will immediately delete partition metadata from disk after use. | `False` | |
| `disk_cache_cull_start_perc` | Percentage of cache disk space used before starting to remove files. | `75` | |
| `disk_cache_cull_stop_perc` | Percentage of cache disk space used before stopping removing files. | `65` | |
| `disk_cache_redzone_start_perc` | Minimum percentage of cache disk space used to be considered the redzone. | `90` | |
| `disk_cache_entry_max_ttl_minutes` | Max TTL for a disk cache entry: the longest period of time for which the LRU disk cache can keep an entry before it expires. | `360` | |
| `max_http_retries` | Maximum times to retry any query-related HTTP requests that fail. | `3` | |
| `max_exp_backoff_seconds` | Cap on the exponential backoff sleep time. | `20` | |
| `initial_exp_backoff_ms` | Sleep time starts at this value and grows exponentially with the retry count. | `0` | |
| `eks_product_code` | EKS product code for use with Amazon Marketplace. | `6ae46hfauzadikp9f8npdbh9v` | |
| `exp_backoff_growth_factor_ms` | Multiplicative factor for each backoff sleep: sleep time is 2^i × growth_factor ms. | `50` | |
| `exp_backoff_additive_jitter` | If true, jitter is additive: growth_factor × (1 + jitter). If false, it is multiplicative: growth_factor × jitter. | `True` | |
| `enable_traefik_access_logging` | If set to true, Traefik will log all access requests. WARNING: This will produce a very high and potentially unmanageable volume of logs. | `False` | |
| `hdx_traefik_auth_workers` | Number of async workers gunicorn will create for servicing requests. | `1` | |
| `enable_traefik_hsts` | If set to true, Traefik will enforce HSTS on all its connections. WARNING: This may lead to hard-to-diagnose persistent SSL failures if there are any errors in the SSL configuration, and cannot be turned off later. | `False` | |
| `enable_password_complexity_policy` | If set to true, uses the default password policy: minimum length 8 characters; at least 1 uppercase character, 1 lowercase character, 1 digit, and 1 special character; none of the past 24 passwords may be reused; passwords expire after 90 days; the password may not be the username or the email. | `False` | |
| `password_expiration_policy` | Number of days before a password expires. | | |
| `traefik_hsts_expire_time` | Expiration time for HSTS caching, in seconds. | `315360000` | |
| `http_connect_timeout_ms` | Maximum time to wait for a socket connection to cloud storage to complete. | `300` | |
| `http_ssl_connect_timeout_ms` | Maximum time to wait for the SSL handshake during connection to cloud storage. | `1000` | |
| `http_response_timeout_ms` | Maximum time to wait for receiving HTTP headers to complete while reading from cloud storage. | `1000` | |
| `http_read_timeout_ms` | Maximum time to wait between a socket read and cloud storage having data ready to be read. | `1000` | |
| `http_write_timeout_ms` | Maximum time to wait before an upload of a partition to cloud storage completes. | `10000` | |
| `io_perf_mappings` | Internally used presets for `io_perf_mode`. Parsed as JSON `Array(Array(Int))`. | `[[2097152, 256, 256], [6291456, 128, 128], [12582912, 64, 64]]` | |
| `disable_traefik_http_port` | If true, the load balancer will not forward to Traefik on port 80. When TLS is enabled, this port is only used to redirect to HTTPS. Otherwise, this is the main way to access all services. | `False` | |
| `disable_traefik_https_port` | If true, the load balancer will not forward to Traefik on port 443. Only relevant if TLS is enabled. | `False` | |
| `disable_traefik_native_port` | If true, the load balancer will not forward to Traefik on the ClickHouse native protocol port. This is port 9440 when TLS is enabled, or 9000 if not. | `False` | |
| `disable_traefik_mysql_port` | If true, the load balancer will not forward to Traefik on the ClickHouse MySQL interface port, 9004. | `False` | |
| `disable_traefik_clickhouse_http_port` | If true, the load balancer will not forward to Traefik on port 8088. This port provides a ClickHouse-compatible query interface at the root of the service rather than at a subpath. | `False` | |
| `enable_query_auth` | When enabled, requests to the query service with URL paths starting with `/query` require authentication. | `False` | |
| `user_acl_refresh_interval_secs` | Frequency at which user ACL permissions are refreshed, in seconds. | `30` | |
| `user_token_refresh_interval_secs` | Frequency at which user tokens are refreshed, in seconds. | `240` | |
| `user_token_expiration_secs` | User token expiration period, in seconds. | `1800` | |
| `auth_http_response_timeout_ms` | Maximum time to wait for receiving HTTP headers from the auth endpoint (turbine-api) in response to user permission requests. | `2000` | |
| `auth_http_read_timeout_ms` | Maximum time to wait for a socket read of user-permission data from the auth endpoint (turbine-api). | `2000` | |
| `enable_vector` | Run vector to send Kubernetes pod logs to JSON files in a bucket and to the internal logs topic. Default inferred from the value of `scale_off`. | | |
| `disable_vector_kafka_logging` | Prevent vector from emitting logs to Redpanda. | `False` | |
| `disable_vector_bucket_logging` | Prevent vector from sending logs to the bucket. | `False` | |
| `env` | Environment variables to set on all Kubernetes pods that are part of the Hydrolix cluster. | `{}` | |
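
Since `env` is a plain key-value map, a sketch might look like the following; the variable names and values are illustrative placeholders:

```yaml
spec:
  env:
    TZ: UTC                                # set the timezone on every pod
    HTTP_PROXY: http://proxy.internal:3128 # hypothetical proxy endpoint
```
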
| Tunable Name | Description | Default | Examples |
| --- | --- | --- | --- |
| `force_container_user_root` | Set the initial user for all containers to 0 (root). | `False` | |
| `host` | DEPRECATED: Use `hydrolix_url`. | | |
| `http_port` | The port to serve Hydrolix plain HTTP on. | | |
| `https_port` | The port to serve Hydrolix HTTPS on. | | |
| `hydrolix_name` | The name you would like to assign your Hydrolix cluster. Defaults to the namespace name if not specified. | | |
| `hydrolix_url` | The URL you would like to use to access your Hydrolix cluster. | | `["https://my-host.hydrolix.live", "https://my-host.mydomain.com", "http://my-host.local"]` |
| `ip_allowlist` | A list of CIDR ranges that should be allowed to connect to the Hydrolix cluster load balancer. | | `["127.0.0.1/32"]` |
| `intake_head_index_backlog_enabled` | Whether to absorb received buckets in a backlog prior to indexing in intake-head, providing more buffer for absorption in the face of traffic or throughput spikes, disruptions in indexing, or uploading of partitions. If enabled, the newest data received is indexed ahead of older data when the backlog grows. | `False` | |
| `intake_head_index_backlog_max_mb` | The maximum size in MB that the indexing backlog on intake-head may reach before either dropping data or slowing new entries, depending on the configured value of `intake_head_index_backlog_trim_enabled`. Only applicable if `intake_head_index_backlog_enabled` is true. | `256` | |
| `intake_head_index_backlog_purge_concurrency` | The number of workers used to purge buckets from the intake-head backlog when the max size is breached. Only applicable if `intake_head_index_backlog_enabled` is true. | `1` | |
| `intake_head_index_backlog_max_accept_batch_size` | The maximum number of buckets accepted from ingestion and added to the backlog at a time. Only applicable if `intake_head_index_backlog_enabled` is true. | `50` | |
| `intake_head_max_outstanding_requests` | The maximum number of requests that an intake-head pod will allow to be outstanding and in process before rejecting new requests with a 429 status code response. If not configured or set to 0, intake-head pods will never reject new requests. | `0` | |
| `intake_head_accept_data_timeout` | The maximum duration that intake-head will wait for a request to be accepted into the partition creation pipeline. If the timeout is reached, the request is rejected with a 429 status code response. If not configured or set to 0, intake-head pods will not time out. | `0s` | |
| `intake_head_raw_data_spill_config` | Configures the spill functionality for raw data in intake-head, whereby ingested data is spilled to object storage when partition generation is slowed on a particular intake-head pod. Supported keys: `enabled`, `max_concurrent_spill`, `max_attempts_spill`. | | `{"enabled": "false", "max_concurrent_spill": "20", "max_attempts_spill": "5"}` |
| `intake_head_catalog_spill_config` | Configures the spill functionality for catalog adds in intake-head, whereby catalog adds are spilled to object storage when catalog interactions are slowed or fail on a particular intake-head pod. Supported keys: `enabled`, `max_concurrent_spill`, `max_attempts_spill`. | | `{"enabled": "false", "max_concurrent_spill": "20", "max_attempts_spill": "5"}` |
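
The two spill tunables take the same keys; a sketch that enables both, using the documented keys and the example values from the table:

```yaml
spec:
  intake_head_raw_data_spill_config:
    enabled: "true"             # spill raw data to object storage when indexing slows
    max_concurrent_spill: "20"  # example value from the table above
    max_attempts_spill: "5"
  intake_head_catalog_spill_config:
    enabled: "true"             # spill catalog adds when catalog interactions slow or fail
    max_concurrent_spill: "20"
    max_attempts_spill: "5"
```
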
| Tunable Name | Description | Default | Examples |
| --- | --- | --- | --- |
| `kafka_careful_mode` | | `False` | |
| `kafka_tls_ca` | A CA certificate used by the kafka_peer to authenticate Kafka servers it connects to. | | |
| `kafka_tls_cert` | The PEM-format certificate the kafka_peer will use to authenticate itself to a Kafka server. | | |
| `kafka_tls_key` | The PEM-format key the kafka_peer will use to authenticate itself to a Kafka server. | | |
| `kinesis_coordinate_strategy` | The strategy to use for coordinating Kinesis peers for a Kinesis source. Possible values are `EXTERNAL_COORDINATOR` or `ZOOKEEPER`. | `EXTERNAL_COORDINATOR` | |
| `kinesis_coordinate_period` | For Kinesis sources, how often the coordination process runs, which checks for the available shards and peers and distributes consuming amongst available peers. | `10s` | |
| `kubernetes_cloud` | DEPRECATED: Use `kubernetes_profile`. | | `["aws", "gcp"]` |
| `kubernetes_premium_storage_class` | The storage class to use with persistent volumes created in Kubernetes for the parts of a Hydrolix cluster where throughput is most critical. | | |
| `kubernetes_profile` | Use default settings appropriate to this type of Kubernetes deployment. | `generic` | `["gke", "eks", "lke"]` |
| `kubernetes_storage_class` | The storage class to use with persistent volumes created in Kubernetes as part of a Hydrolix cluster. | | |
| `logs_sink_type` | Type of logs sink. | `http` | |
| `logs_sink_local_url` | The full URI to make local HTTP requests to. | `http://hydrologs-intake-head:8089/ingest/event` | |
| `logs_sink_remote_url` | The full URI to make remote HTTP requests to. | | |
| `logs_sink_remote_auth_enabled` | When enabled, remote HTTP will use basic auth from the curated secret. | `False` | |
| `logs_http_remote_table` | An existing Hydrolix `<project.table>` where the data should land in the remote cluster. | `hydro.logs` | |
| `logs_http_remote_transform` | A transform schema for ingest in the remote cluster. | `megaTransform` | |
| `logs_http_table` | An existing Hydrolix `<project.table>` where the data should land. | `hydro.logs` | |
| `logs_http_transform` | A transform schema for ingest. | `megaTransform` | |
| `logs_kafka_bootstrap_servers` | A comma-separated list of Kafka bootstrap servers to send logs to. | `redpanda` | |
| `logs_kafka_topic` | A Kafka topic to send logs to. | `logs` | |
| `logs_topic_partition_count` | The number of partitions to assign to the logs topic for stream processing. | `81` | |
| `merge_head_batch_size` | Number of records the merge head pulls from the catalog per request. | `10000` | |
| `merge_interval` | The time the merge process waits between checks for mergeable partitions. | `15s` | |
| `merge_max_partitions_per_candidate` | The maximum number of partitions per merge candidate. | `100` | |
| `merge_max_candidates` | Number of candidates to produce per merge target each cycle. | `100` | |
| `merge_min_mb` | Size in megabytes of the smallest merge tier. All other merge tiers are multiples of this value. | `1024` | |
| `merge_dispatch_frequency` | How often a slot should be checked for exceeding max_idle, expressed as a duration string, for example `5s`. | `5s` | |
| `merge_first_era_frequency` | How often merge candidates should be constructed for the first era. | `10s` | |
| `merge_second_era_frequency` | How often merge candidates should be constructed for the second era. | `60s` | |
| `merge_third_era_frequency` | How often merge candidates should be constructed for the third era. | `60m` | |
| `merge_streaming_selector` | Whether or not to use the streaming candidate selector. | `True` | |
| `merge_primary_window_width` | The interval used to further filter partition selection queries. Smaller values limit the number of records the database needs to produce, but can increase the query count. | `1080h` | |
| `merge_candidate_concurrency` | Number of concurrent MergeCandidate construction queries to run. | `6` | |
| `merge_controller_enabled` | Whether or not the next-generation merge controller is enabled. | `False` | |
| `native_port` | The port to serve the ClickHouse plaintext native protocol on, if applicable. | `9000` | |
| `native_tls_port` | The port to serve the ClickHouse TLS native protocol on, if applicable. | `9440` | |
| `mysql_port` | The port to serve the ClickHouse MySQL interface on, if applicable. | `9004` | |
| `mysql_port_disable_tls` | When true, Traefik will not use TLS configuration on the MySQL TCP route. | `True` | |
| `oom_detection` | Configuration options for detecting indexing OOM scenarios and retrying with smaller data sizes where possible, for services that perform ingest. Outer keys are the names of the ingest services; the supported services are `intake-head`, `kafka-peer`, `kinesis-peer`, and `akamai-siem-peer`. Available keys under each service: `k8s_oom_kill_detection_enabled`, `k8s_oom_kill_detection_max_attempts`, `circuit_break_oom_detection_enabled`, `preemptive_splitting_enabled`. | | |
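
A sketch of `oom_detection` using the service names and keys listed above; the values shown are assumptions for illustration, not documented defaults:

```yaml
spec:
  oom_detection:
    intake-head:
      k8s_oom_kill_detection_enabled: true
      k8s_oom_kill_detection_max_attempts: 3   # illustrative value
    kafka-peer:
      circuit_break_oom_detection_enabled: true
      preemptive_splitting_enabled: false
```
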
| Tunable Name | Description | Default | Examples |
| --- | --- | --- | --- |
| `otel_endpoint` | Send OTLP data to the HTTP server at this URL. | | |
| `overcommit` | When `true`, removes all requests and limits from Kubernetes containers. Useful when running on a single-node Kubernetes cluster with constrained resources. When set to `requests`, only turns off requests; similarly, `limits` removes just the limits. Not being set is the same as `false`. Note that removing either a memory or CPU limit or request from any container on a pod removes the Guaranteed quality-of-service class from that pod. | `False` | |
| `owner` | DEPRECATED: this was previously used internally by Hydrolix. | | |
| `pg_ssl_mode` | Determines whether, and with what priority, an SSL connection is negotiated when connecting to a PostgreSQL server. See https://bit.ly/3U9ao8O. | `disable` | `["disable", "require", "verify-ca", "verify-full"]` |
| `pools` | A list of dictionaries describing pools to deploy as part of the Hydrolix cluster. | | |
| `registry` | A Docker registry to pull Hydrolix containers from. | `PUBLIC_REGISTRY` | |
| `sample_data_url` | The storage bucket URL to use to load sample data. | | |
| `sql_transform_max_ast_elements` | The number of AST elements an SQL transform can contain. This limits the maximum complexity of a SQL transform. | | `[100000, 150000]` |
| `sql_transform_max_expanded_ast_elements` | The number of expanded AST elements an SQL transform can contain. This limits the maximum complexity of a SQL transform. | | `[100000, 150000]` |
| `scale` | A list of dictionaries describing overrides for scale-related configuration for Hydrolix services. | | |
| `scale_off` | When true, override all Deployment and StatefulSet replica counts with a value of 0 and disable vector. | `False` | |
| `scale_profile` | Selects from a set of predefined defaults for scale. | `eval` | |
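
A hedged sketch combining `scale_profile` with a `scale` override. This table does not enumerate the keys inside each `scale` entry, so the `service` and `replicas` keys below are assumptions for illustration only:

```yaml
spec:
  scale_profile: eval      # start from the eval profile defaults
  scale:
    - service: query-peer  # hypothetical key naming the service to override
      replicas: 3          # hypothetical key overriding the replica count
```
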
| Tunable Name | Description | Default | Examples |
| --- | --- | --- | --- |
| `sdk_timeout_sec` | How many seconds the merge SDK should be given to run before it is killed. | `300` | |
| `silence_linode_alerts` | If true, runs a DaemonSet that turns off Linode alerts for LKE nodes. | `False` | |
| `str_dict_enabled` | Enable or disable multi-threaded string dictionary decoding. | `True` | |
| `str_dict_nr_threads` | The maximum number of concurrent vCPUs used for decoding. | `8` | |
| `str_dict_min_dict_size` | The number of entries in each string dictionary block. | `32768` | |
| `stream_concurrency_limit` | The number of concurrent stream requests per CPU, allocated across all pods, beyond which Traefik will return 429 busy error responses. If not set, or set to null, no limit is enforced. | | |
| `stream_partition_count` | The number of partitions to use on the default Redpanda topic for the stream service. | `50` | |
| `stream_load_balancer_algorithm` | The load balancer algorithm to use with the stream-head and intake-head services. | `round-robin` | `["least-connections-p2c", "round-robin"]` |
| `stream_partition_block` | The number of partitions to use on a non-default Redpanda stream topic, per TB/day of usage. | `6` | |
| `stream_replication_factor` | The replication factor for the internal Redpanda topic used by the stream service. It must always be less than the number of Redpanda replicas; if it is not, the configuration will not change. | `3` | |
| `targeting` | A dictionary that passes targeting-related Kubernetes settings to resources according to which Hydrolix service they are part of. | `{}` | |
| `turbine_api_init_pools` | If enabled, the turbine-api component initializes some pools. | `False` | |
| `turbine_api_require_table_default_storage` | If enabled, turbine-api will require tables to have their `storage_map` populated with a `default_storage_id`. Useful when use of the cluster's default bucket should be discouraged. | `False` | |
| `traefik_external_ips` | Traffic that ingresses into the cluster with one of these IPs gets directed to the Traefik service. Particularly useful when deploying everything on one node. | | `[["192.168.1.5", "192.16.1.4"], ["172.16.0.8"]]` |
| `traefik_keep_alive_max_time` | The number of seconds a client HTTP connection can be reused before receiving a `Connection: close` response from the server. Zero means no limit. | `26` | |
| `traefik_service_type` | The type of service to use for Traefik, the entry point to the cluster. | `public_lb` | `["public_lb", "private_lb", "node_port", "cluster_ip"]` |
| `use_https_with_s3` | DEPRECATED: Use `db_bucket_url` or `db_bucket_http_enabled`. | | |
| `use_tls` | DEPRECATED: inferred from `hydrolix_url`. | `False` | |
| `prometheus_label_value_length_limit` | If a label value is larger than this configured value, Prometheus discards the entire scrape. | `512` | |
| `prometheus_remote_write_url` | A URL you wish to use to configure Prometheus's remote-write functionality. | | |
| `prometheus_remote_write_username` | The username for Prometheus to use with basic auth to connect to a remote-write endpoint. Ignored if `prometheus_remote_write_url` is not set. | `hdx` | |
| `prometheus_scrape_interval` | How frequently to scrape targets by default. | `15s` | |
| `prometheus_curated_configmap` | Custom curated Prometheus ConfigMap that will be mounted onto the Prometheus pod. | | |
| `vector_bucket` | Bucket where vector should save JSON-format pod logs. | | |
| `vector_bucket_path` | Prefix under which vector will save pod logs. | `logs` | |
| `decay_enabled` | Whether or not the Decay CronJob should run. | `True` | |
| `decay_schedule` | CRON schedule for the Decay CronJob. | `0 0 * * *` | |
| `decay_batch_size` | Number of entries to fetch for each request to the catalog. | `5000` | |
| `decay_max_deactivate_iterations` | Maximum number of deactivation iterations to execute per table. | | |
| `decay_reap_batch_size` | Number of entries to fetch for each request when locating entries for reaping. | `5000` | |
| `decay_max_reap_iterations` | Maximum number of reap iterations to execute per table. | | |
| `job_purge_enabled` | Whether or not the Job Purge CronJob should run. | `True` | |
| `job_purge_schedule` | CRON schedule for the Job Purge CronJob. | `0 2 * * *` | |
| `job_purge_age` | How old a terminal job must be before it's deleted, expressed as a duration string. | `2160h` | |
| `partition_cleaner_dry_run` | If true, the Partition Cleaner will only log its intentions and take no action. | `True` | |
| `partition_cleaner_grace_period` | Minimum age of a partition before it is considered for deactivation or deletion, expressed as a duration string. | `24h` | |
| `partition_cleaner_schedule` | Crontab-style schedule for when the partition cleaner should run. | `0 0 * * *` | |
| `prune_locks_enabled` | Whether or not the Prune Locks CronJob should run. | `True` | |
| `prometheus_retention_ratio` | The fraction of the volume to reserve for Prometheus data. | `0.7` | `0.7` |
| `prometheus_retention_time` | When to remove old Prometheus data. | | `15d` |
| `prometheus_retention_size` | The maximum number of bytes of Prometheus data to retain. Overrides `prometheus_retention_ratio`. Units supported: B, KB, MB, GB, TB, PB, EB. | | |
| `prometheus_ignored_apps` | A comma-delimited list of app labels to ignore when determining scrape targets for Prometheus. | | `["batch-head", "stream-peer,vector"]` |
| `prune_locks_schedule` | CRON schedule for the Prune Locks CronJob. | `30 0 * * *` | |
| `prune_locks_grace_period` | Minimum age of a lock before it is considered for removal, expressed as a duration string. | `24h` | |
| `limit_cpu` | When set to false, removes all CPU container limits. By default, containers are set with the same request and limit value. Note that removing either a memory or CPU limit or request from any container on a pod removes the Guaranteed quality-of-service class from that pod. | `True` | |
| `log_level` | A dictionary that specifies logging verbosity. Keys are service names, with the special value `*` controlling the default. | `{}` | |
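
A sketch of `log_level` using the documented `*` key for the default verbosity; the service name and level strings below are illustrative:

```yaml
spec:
  log_level:
    "*": info           # default verbosity for all services
    intake-head: debug  # hypothetical per-service override
```
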
| Tunable Name | Description | Default | Examples |
| --- | --- | --- | --- |
| `merge_cleanup_enabled` | Whether or not the Merge Clean-up CronJob should run. | `True` | |
| `merge_cleanup_schedule` | CRON schedule for the Merge Clean-up CronJob. | `*/5 * * * *` | |
| `merge_cleanup_delay` | How long before a merged partition should be deleted, expressed as a duration string. | `15m` | |
| `merge_cleanup_batch_size` | Number of entries to fetch for each request to the catalog. | `5000` | |
| `monitor_ingest` | If enabled, deploy a service that ingests a timestamp into the `hydro.monitor` table every second. | `False` | |
| `monitor_ingest_timeout` | DEPRECATED: Use `monitor_ingest_request_timeout`. | | |
| `monitor_ingest_request_timeout` | The HTTP timeout, in seconds, for the HTTP POST from monitor_ingest. | `1` | |
| `monitor_ingest_retry_timeout` | The deadline for one submission by monitor ingest, including all retries. | `1` | |
| `query_peer_liveness_check_path` | The HTTP path used to configure a Kubernetes liveness check for query peers. Set to `none` to disable. | `?query=select%20count%28id%29%20from%20hdx.liveliness%20SETTINGS%20hdx_log_query=false%2Chdx_query_timerange_required=0` | |
| `query_peer_liveness_failure_threshold` | How many times the query liveness check can fail. | `5` | |
| `query_peer_liveness_period_seconds` | How often the query liveness check should run, in seconds. | `60` | |
| `query_peer_liveness_probe_timeout` | Number of seconds after which the liveness probe times out. | `10` | |
| `query_peer_liveness_initial_delay` | Time in seconds to wait before starting query liveness checks. | `300` | |
| `query_readiness_initial_delay` | Time in seconds to wait before starting query readiness checks. | `0` | |
| `refresh_job_statuses_enabled` | Whether or not the Refresh Job Statuses CronJob should run. | `True` | |
| `refresh_job_statuses_schedule` | CRON schedule for the Refresh Job Statuses CronJob. | `* * * * *` | |
| `siem_backoff_duration` | Backoff duration when the SIEM limit is not hit, for politeness. | `1s` | |
| `skip_init_turbine_api` | Skips running database migrations in the init-turbine-api job. Set to true when running multiple clusters with a shared database. | `False` | |
| `stale_job_monitor_enabled` | Whether or not the Stale Job Monitor CronJob should run. | `True` | |
| `stale_job_monitor_schedule` | CRON schedule for the Stale Job Monitor. | `*/5 * * * *` | |
| `stale_job_monitor_batch_size` | How many jobs to probe in a single request. | `300` | |
| `stale_job_monitor_limit` | How many jobs in total the Stale Job Monitor will process per cycle. | `3000` | |
| `task_monitor_enabled` | Whether or not the Task Monitor CronJob should run. | `True` | |
| `task_monitor_schedule` | CRON schedule for the Task Monitor. | `*/2 * * * *` | |
| `task_monitor_start_timeout` | How old a ready task should be, in seconds, before it is considered lost and timed out. | `21600` | |
| `task_monitor_heartbeat_timeout` | How old a task's heartbeat should be, in seconds, before it is timed out. | `600` | |
| `unified_auth` | Use the same auth used with the API for all services. | `True` | |
| `usagemeter_preserve` | Duration to retain old, already-reported usage meter data on local clusters. | `1440h` | |
| `usagemeter_reporting_url` | URL to send usage data to. | `https://prometheus-us.trafficpeak.live/ingest` | |
| `usagemeter_reporting_table` | Hydrolix table to send usage to, in `project.table` format. | `metering_project.metering_table` | |
| `usagemeter_reporting_transform` | Hydrolix transform name or UUID for usage reporting. | `metering_transform` | |
| `usagemeter_query_timeout` | Maximum time to wait for a query against the catalog to complete. | `4m` | |
| `usagemeter_request_timeout` | Maximum time to wait for the reporting HTTP request to complete. | `1m` | |
| `usagemeter_schedule` | CRON schedule for the usage meter cron job. Defaults to every 10 minutes. | `*/10 * * * *` | |
| `usagemeter_enabled` | Whether or not the usage meter cron job should run. | `True` | |
| `hdx_query_max_memory_usage_perc` | Maximum amount of memory to use for running a query on a single server, as a percentage of the total available memory. | `80` | |
| `hdx_query_max_perc_before_external_group_by` | Maximum amount of memory to use for running a summary merge query, as a percentage of the total available memory. Zero deactivates the restriction. | `0` | |
| `max_concurrent_queries` | Limit on the total number of concurrently executed queries. Zero means unlimited. | `0` | |
| `max_server_memory_usage_perc` | Maximum percentage of total system memory that the server can use and allocate for its operation. | `0` | |
| `hdx_node_enabled` | Whether or not to enable the hdx-node DaemonSet. | `False` | |
| `hdx_node_config` | hdx-node YAML configuration. | `{}` | |
| `quesma_config` | Quesma config for Hydrolix data source parameters. | `{"project": "hydro", "table": "logs"}` | |
| `data_visualization_tools` | List of data visualization tools to deploy, for example Grafana and Kibana. | `[]` | |
| `rollout_strategy_max_surge` | The number of pods (represented as a percentage) that can be created above the desired number of pods during a deployment rollout update. | `25` | |
| `rollout_strategy_max_unavailable` | The number of pods (represented as an integer) that can be unavailable during a deployment rollout update. | `0` | |
| `grafana_image` | The Grafana `image:tag` to use. | `grafana/grafana-enterprise:11.5.0` | |
| `grafana_config` | Grafana configuration. NOTE: Ensure `grafana` is included in the `data_visualization_tools` tunable to enable the Grafana deployment. Keys: `admin_user` (Grafana admin username); `admin_email` (Grafana admin user email); `allow_embedding` (prevents embedding Grafana in frames to mitigate clickjacking risks); `db_user` (Grafana database username); `alert_eval_timeout` (timeout for alert evaluation when fetching data from a source); `smtp_enabled` (enables email server settings; requires `GRAFANA_SMTP_PASSWORD` in the curated secret); `smtp_host` (email server host); `smtp_user` (email server authentication username); `rendering_timeout` (timeout for rendering reports: PDFs, embedded images, CSV attachments); `is_enterprise` (enables Grafana Enterprise; requires `GRAFANA_LICENSE` in the curated secret); `google_auth_enabled` (enables Google OAuth authentication; requires `GOOGLE_CLIENT_SECRET` in the curated secret); `google_client_id` (client ID of the Google Auth App); `inactive_timeout` (maximum inactive duration before a user must log in again); `allow_sign_up` (controls Grafana user creation through OAuth; if false, only existing users can log in). | `admin_user: "admin"`<br>`admin_email: "admin@localhost"`<br>`allow_embedding: false`<br>`db_user: "grafana"`<br>`alert_eval_timeout: "30s"`<br>`smtp_enabled: false`<br>`smtp_host: "smtp.sendgrid.net:587"`<br>`smtp_user: "apikey"`<br>`rendering_timeout: "120s"`<br>`is_enterprise: false`<br>`google_auth_enabled: false`<br>`google_client_id: null`<br>`inactive_timeout: "7d"`<br>`allow_sign_up: false` | |
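
A hedged sketch that enables Grafana and overrides a few of the keys above; the email address is an illustrative placeholder, and `smtp_enabled: true` requires `GRAFANA_SMTP_PASSWORD` in the curated secret:

```yaml
spec:
  data_visualization_tools:
    - grafana                        # required for the Grafana deployment
  grafana_config:
    admin_user: admin
    admin_email: admin@example.com   # illustrative address
    smtp_enabled: true               # requires GRAFANA_SMTP_PASSWORD in the curated secret
    smtp_host: smtp.sendgrid.net:587
```
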
| Tunable Name | Description | Default | Examples |
| --- | --- | --- | --- |
| `issue_wildcard_cert` | Whether to issue a wildcard TLS certificate. NOTE: a DNS challenge will be used; Route53 credentials need to be provided in `ROUTE53_AWS_ACCESS_KEY_ID` and `ROUTE53_AWS_SECRET_ACCESS_KEY` via the curated secret. | `False` | |
| `http_proxy` | HTTP-proxy configuration parameters. | `enabled: false`<br>`port: 9444`<br>`log_debug: false`<br>`allow_ping: false`<br>`server: {read_timeout: "2m", write_timeout: "4m", idle_timeout: "8m"}`<br>`users: {max_execution_time: "2m"}`<br>`heartbeat: {interval: "5s", timeout: "3s", request: "/query?query=SELECT%201&hdx_query_output_format=TSV", response: "1\n"}`<br>`cache: {dir: "/tmp/http-proxy/cache", max_size: "150M", expire: "1m"}` | |
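
A sketch of `http_proxy` that turns the proxy on and overrides part of its cache configuration; the `expire` value is an illustrative override of the `1m` default:

```yaml
spec:
  http_proxy:
    enabled: true
    port: 9444                   # default port from the table above
    cache:
      dir: /tmp/http-proxy/cache
      max_size: 150M
      expire: 5m                 # illustrative override of the 1m default
```
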
| Tunable Name | Description | Default | Examples |
| --- | --- | --- | --- |
| `metadata` | Custom Kubernetes labels and annotations to propagate to Hydrolix workloads. Changing this value will trigger restarts for all services. | `{}` | `{"annotations": {"example.com/owner": "hdx"}, "labels": {"env": "dev"}}` |
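
The same `metadata` example rendered as YAML; note that changing this value restarts all services:

```yaml
spec:
  metadata:
    annotations:
      example.com/owner: "hdx"  # example annotation from the table above
    labels:
      env: "dev"                # example label from the table above
```
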
