
v5.7.9

This bugfix release addresses an issue encountered while scaling down intake-head.

This release contains a bug fix to v5.7.8. Refer to the release notes for v5.7.4 and v5.7.8 to see other notable feature announcements and information concerning the v5.7 minor release.

Users of the prod scale profile may need to scale turbine-api

By default in versions v5.7.x, the prod scale profile has the following settings:

  • The turbine-api pods have 1Gi of memory and 1 CPU.
  • The turbine_api_worker_count tunable is set to 8 workers. This value may be overridden in the Hydrolix spec configuration.

For clusters using the prod scale profile, adjust the scale of the turbine-api pod in the Hydrolix spec configuration based on the value of the turbine_api_worker_count tunable:

turbine-api scaling guidance:

  • 8 workers: A minimum of 1.7Gi memory and 1 CPU
  • 16 workers: A minimum of 2.74Gi memory and 2 CPU
  • 32 workers: A minimum of 6.80Gi memory and 3 CPU
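As an illustration, the guidance above could be applied by pinning both the worker count and the pod resources in the Hydrolix spec. This is a hedged sketch only: the `scale` and top-level tunable field names below are assumptions and may not match the v5.7 spec schema exactly, so check the Hydrolix configuration reference for the precise keys.

```yaml
# Hypothetical Hydrolix spec fragment (field names are assumptions, not verified).
# Pins turbine-api at 16 workers with the recommended minimums from the table above.
spec:
  turbine_api_worker_count: 16   # tunable named in this release note
  scale:
    turbine-api:
      cpu: 2           # minimum recommended for 16 workers
      memory: 2.74Gi   # minimum recommended for 16 workers
```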

Upgrade

Do not skip minor versions when upgrading or downgrading

Skipping versions when upgrading or downgrading Hydrolix can result in database schema inconsistencies and cluster instability. Always upgrade or downgrade sequentially through each minor version.

Example:
Upgrade from 5.5.0 → 5.6.x → 5.7.4, not 5.5.0 → 5.7.4.

Upgrade on GKE

kubectl apply -f "https://www.hydrolix.io/operator/v5.7.9/operator-resources?namespace=${HDX_KUBERNETES_NAMESPACE}&gcp-storage-sa=${GCP_STORAGE_SA}"

Upgrade on EKS

kubectl apply -f "https://www.hydrolix.io/operator/v5.7.9/operator-resources?namespace=${HDX_KUBERNETES_NAMESPACE}&aws-storage-role=${AWS_STORAGE_ROLE}"

Upgrade on LKE

kubectl apply -f "https://www.hydrolix.io/operator/v5.7.9/operator-resources?namespace=$HDX_KUBERNETES_NAMESPACE"

Changelog

Bug Fixes

Operator fixes

  • When scaling down intake-head, the operator now waits for intake-head to terminate before terminating the corresponding turbine-server, avoiding data loss while scaling down ingest.