
Autoscale with Prometheus

Overview⚓︎

Hydrolix can autoscale cluster components based on Prometheus metrics.

The HDX Autoscaler is managed by the Hydrolix operator and supports both target-based scaling and range-based scaling.

Improvements to the HDX Autoscaler and its functions were added in the v5.3-v5.6 releases.

Configuration⚓︎

The HDX Autoscaler supports two modes: target mode, which is the default, and range mode, which activates when metric_min and metric_max are set. Range mode normalizes metrics against the configured bounds and applies a sensitizer function to determine how aggressively to scale.

Target mode scaling⚓︎

  • Works with metric + target_value
  • Ratio = average_value ÷ target_value
  • Applies a dead-zone defined by the tolerance settings
  • Desired replicas = ratio × current replicas, subject to the scaling throttles (see the sketch below)
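
The following Python sketch illustrates the target-mode arithmetic above. It is only an illustration of the documented behavior, not the operator's code: the function name, the ceiling-based rounding, and the exact way the throttles cap the ratio are assumptions made for the example.

import math


def target_mode_replicas(
    average_value,          # average metric value per pod
    target_value,           # configured target_value
    current_replicas,
    min_replicas,
    max_replicas,
    tolerance_up=0.1,
    tolerance_down=0.1,
    scale_up_throttle=9.0,
    scale_down_throttle=0.2,
):
    # Ratio = average_value ÷ target_value
    ratio = average_value / target_value

    # Dead-zone: ignore small deviations around the target.
    if 1.0 - tolerance_down <= ratio <= 1.0 + tolerance_up:
        return current_replicas

    # Throttle the ratio so one step cannot change the fleet too quickly
    # (assumed interpretation of the +900% / -20% limits).
    ratio = min(ratio, 1.0 + scale_up_throttle)
    ratio = max(ratio, 1.0 - scale_down_throttle)

    # Desired replicas = ratio × current replicas, clamped to the replica range.
    desired = math.ceil(ratio * current_replicas)
    return max(min_replicas, min(max_replicas, desired))


# Example: 3 pods averaging 120 outstanding requests against a target of 80.
print(target_mode_replicas(120, 80, current_replicas=3, min_replicas=1, max_replicas=10))  # 5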

Range mode scaling⚓︎

  • Activated when metric_min and metric_max are set
  • Normalizes metrics between 0 and 1
  • Applies a sensitizer function (exponential by default)
  • Supports aggressive scale-up and conservative scale-down

Sensitizer function for range mode⚓︎

The autoscaler applies a sensitizer function in range mode to adjust scaling responsiveness. By default, an exponential curve makes scaling more aggressive when far from the target, and gentler when close to it. This helps prevent overshoot and thrashing.

The default sensitizer exponent (exp) is 1/3.
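
As an intuition aid for range mode, this short Python sketch shows what normalizing a metric against metric_min and metric_max looks like; the sensitizer curve is then applied to this normalized value. The exact sensitizer formula is internal to the HDX Autoscaler, and the names below are invented for the illustration.

def normalize(value, metric_min, metric_max):
    # Map the raw metric onto [0, 1] using the configured bounds (range mode).
    clipped = min(max(value, metric_min), metric_max)
    return (clipped - metric_min) / (metric_max - metric_min)


# With metric_min=0 and metric_max=200, as in the example below,
# a reading of 80 normalizes to 0.4 and a reading of 200 normalizes to 1.0.
print(normalize(80, 0, 200), normalize(200, 0, 200))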

Sensitizer example⚓︎

This autoscaler example reacts quickly to real overloads, stays calm near the target, and avoids extreme scaling.

spec:
  scale:
    intake-head:
      # Keep between 1 and 10 pods
      replicas: 1-10

      hdxscalers:
        - metric: http_source_outstanding_reqs   # Prometheus metric (per pod)
          port: 27182
          # ---- Range mode ----
          metric_min: 0
          metric_max: 200
          # Target (still used in range mode as the "aim point")
          target_value: 80

          # Sensitizer (exponent). < 1 = more aggressive far from target, gentler near target
          exp: 0.33

          # Dead-zone around target to reduce flapping
          tolerance_up: 0.10     # ignore upscales unless >10% above target
          tolerance_down: 0.10   # ignore downscales unless >10% below target

          # Cool windows to avoid back-to-back changes
          cool_up_seconds: 45
          cool_down_seconds: 60

          # Throttles (safety rails)
          scale_up_throttle: 9.0     # at most +900% in one step
          scale_down_throttle: 0.2   # at most -20% in one step

Enable the autoscaler⚓︎

The autoscaler (hdx-scaler) feature is disabled by default. To enable it, set a replica count in the hydrolixcluster.yaml configuration file:

spec:
  scale:
    hdx-scaler:
      replicas: 1

You can run only one autoscaler replica. This pod acts as the controller managing scaling for other services.

Configure autoscaling⚓︎

Autoscaler settings are defined in the hdxscalers block of the hydrolixcluster.yaml file. You can configure multiple scalers per service or pool.

Key fields for autoscaling⚓︎

Field | Description | Required | Default
metric | Prometheus metric to scale on | Yes | (none)
port | Port where metrics are served | Yes | (none)
target_value | Target metric value for scaling | Yes | (none)
metric_min / metric_max | Lower/upper bounds that activate range mode | No | (none)
exp | Exponent for the sensitizer function in range mode | No | 1/3
tolerance_up / tolerance_down | Fractional dead-zone tolerances | No | 0.1 / 0.1
cool_up_seconds | Minimum time (in seconds) to wait after a scale-up event before allowing another scale-up | No | 15
cool_down_seconds | Minimum time (in seconds) to wait after a scale-down event before allowing another scale-down | No | 15
scale_up_throttle | Maximum factor per upscale (9.0 = +900%) | No | 9.0
scale_down_throttle | Maximum fraction per downscale (0.2 = -20%) | No | 0.2
app | Use metrics from another service | No | (none)
rate | Use the rate of change instead of the absolute value | No | false
halflife | EWMA decay time in seconds | No | 30
precision | Number of digits to round to when computing the ratio | No | 10
path | Metrics endpoint path | No | metrics
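
The rate and halflife fields describe scaling on a smoothed rate-of-change signal. For reference only, this Python sketch shows one standard way a half-life-based EWMA is computed, where a sample's weight halves every halflife seconds; the scaler's actual smoothing is internal and may differ, and all names here are invented.

def ewma_update(previous, sample, dt_seconds, halflife=30.0):
    # Standard half-life decay: old data keeps weight 0.5 ** (dt / halflife).
    decay = 0.5 ** (dt_seconds / halflife)
    return decay * previous + (1.0 - decay) * sample


# Feeding a constant signal: the smoothed value climbs toward 100 but lags
# behind the raw samples, which damps short spikes.
value = 0.0
for sample in (100.0, 100.0, 100.0):
    value = ewma_update(value, sample, dt_seconds=15.0, halflife=30.0)
print(round(value, 1))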

Use precision to set the scale ratio⚓︎

The precision configuration sets the number of digits to round to when calculating the average-to-target ratio. The default is 10.

A higher precision value smooths scaling transitions and keeps small ratios above zero, so the service scales to zero less often.

A lower precision value rounds small ratios down to zero, so the service scales to zero more often. For example:

  • A ratio of 0.045 with precision ≤1 rounds to zero, allowing the service to scale down to zero pods.
  • A ratio of 0.045 with precision ≥2 rounds to a nonzero value and keeps at least one replica active.
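
A minimal Python sketch of the rounding effect described above; only the rounding step mirrors the documentation, and the variable names are invented.

# How precision affects the average-to-target ratio.
ratio = 0.045  # average_value / target_value

print(round(ratio, 1))  # precision 1: rounds to 0.0, so the scaler can go to zero pods
print(round(ratio, 2))  # precision 2: stays non-zero, so at least one replica is kept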

In this example, merge_duty_cycle, a metric exposed by merge-controller, determines when merge-peer scales up; precision is set to a low value so that small ratios round down to zero.

merge-peer:
    cpu: 4
    memory: 4Gi
    hdxscalers:
    - metric: merge_duty_cycle
      port: 27182
      target_value: 0.5
      cool_down_seconds: 40
      app: merge-controller
      precision: 1
    replicas: 1-5
    scale_profile: I
    service: merge-peer

Use metrics from other services to autoscale from minimum replicas⚓︎

The autoscaler can use metrics from another service (set with app) to decide when to scale up a service that has scaled to zero, since a service with no running pods exposes no metrics of its own. If no separate app metric is specified, the scaler sets the minimum replica count to 1 instead of 0.
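
That rule amounts to a few lines of logic; this hypothetical Python sketch reflects only the documented behavior, not the operator's code.

from typing import Optional


def effective_min_replicas(configured_min: int, app_metric: Optional[str]) -> int:
    # Without an external `app` metric, a scaled-to-zero service could never
    # wake up, so the effective minimum replica count is raised to 1.
    if app_metric is None and configured_min == 0:
        return 1
    return configured_min


print(effective_min_replicas(0, None))                  # 1: no external metric
print(effective_min_replicas(0, "merge_duty_cycle"))    # 0: external metric allows scale to zero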

Cool-up and cool-down windows⚓︎

The autoscaler supports configurable wait times between scaling actions.

  • The cool_up_seconds time - the minimum time to wait after a scale-up before another scale-up can occur
  • The cool_down_seconds time - the minimum time to wait after a scale-down before another scale-down can occur

The cool-up window prevents frequent scale-ups. The cool-down window prevents frequent scale-downs.
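
A minimal Python sketch of the gating these windows imply, with invented names; the operator's actual bookkeeping may differ.

class CoolWindows:
    # Tracks the time of the last scaling actions and enforces the wait times.

    def __init__(self, cool_up_seconds=15.0, cool_down_seconds=15.0):
        self.cool_up_seconds = cool_up_seconds
        self.cool_down_seconds = cool_down_seconds
        self.last_scale_up = float("-inf")
        self.last_scale_down = float("-inf")

    def may_scale_up(self, now):
        return now - self.last_scale_up >= self.cool_up_seconds

    def may_scale_down(self, now):
        return now - self.last_scale_down >= self.cool_down_seconds

    def record_scale_up(self, now):
        self.last_scale_up = now

    def record_scale_down(self, now):
        self.last_scale_down = now


windows = CoolWindows(cool_up_seconds=45, cool_down_seconds=60)
windows.record_scale_up(now=100.0)
print(windows.may_scale_up(now=120.0))   # False: only 20s since the last scale-up
print(windows.may_scale_up(now=150.0))   # True: 50s >= the 45s cool-up window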

Tolerance windows (dead-zone)⚓︎

Use tolerance_up and tolerance_down to define a range around the target where no scaling occurs. This prevents unnecessary pod changes when metrics fluctuate near the target value.

For example:

  • tolerance_up: 0.1 means scale-up is skipped unless the metric is more than 10% above target.
  • tolerance_down: 0.1 means scale-down is skipped unless the metric is more than 10% below target.

A tolerance window helps keep the cluster stable and avoids thrashing.

Tolerance window example⚓︎

In this example:

spec:
  scale:
    intake-head:
      replicas: 1-5
      hdxscalers:
      - metric: http_source_outstanding_reqs
        port: 27182
        target_value: 50
        tolerance_up: 0.1     # Scale up only if >10% above target
        tolerance_down: 0.2   # Scale down only if >20% below target

  • If the average requests per pod are ≤55 (50 + 10%), the autoscaler holds steady and doesn't scale up.
  • If requests exceed 55, it triggers a scale-up.
  • If requests fall below 40 (50 - 20%), it triggers a scale-down.
  • Between 40 and 55, no scaling occurs, keeping the system stable and avoiding thrash.
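
The boundaries in this example can be computed directly; a small Python sketch with invented names, following the arithmetic above:

target_value = 50
tolerance_up = 0.1
tolerance_down = 0.2

upper_bound = target_value * (1 + tolerance_up)    # 55.0: scale up only above this
lower_bound = target_value * (1 - tolerance_down)  # 40.0: scale down only below this
print(lower_bound, upper_bound)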

Basic target mode example⚓︎

This example shows a basic way to use target mode to autoscale.

spec:
  scale:
    intake-head:
      replicas: 1-3
      hdxscalers:
      - metric: http_source_outstanding_reqs
        port: 27182
        target_value: 4

Advanced range mode example⚓︎

This example sets explicit cool-up and cool-down windows, scaling throttles, and tolerance ranges.

spec:
  scale:
    intake-head:
      replicas: 1-5
      hdxscalers:
      - metric: http_source_outstanding_reqs
        port: 27182
        metric_min: 0
        metric_max: 100
        target_value: 50
        exp: 0.33
        cool_up_seconds: 45
        cool_down_seconds: 60
        scale_up_throttle: 9.0
        scale_down_throttle: 0.2
        tolerance_up: 0.1
        tolerance_down: 0.1

Observability⚓︎

  • Run hdxscaler state inside the autoscaler pod to inspect current settings
  • Logs show scaling actions and the reasons for them
  • Metrics are exported to Prometheus for dashboard visualization