Scale Profiles
Overview
Scale profiles provide predefined resource and replica settings for all cluster components.
They give you a consistent baseline without tuning each service individually.
When to use a scale profile
Create a new cluster. Start with a profile that matches your expected ingest and query load.
Standardize resources. Apply the same profile across environments for consistency.
Simplify scaling. Use a single profile instead of setting replicas for every service.
How scale profiles work
Each profile defines default CPU, memory, and replica values for services in a cluster.
You apply the profile once, and the operator propagates those settings across the cluster.
Scale profile changes apply cluster-wide.
Profiles are named for typical use cases. For example:
prod: tuned for steady ingest of 1–4 TB per day with balanced query load.
dev: lighter settings for development or test environments.
Set a profile
Add the profile to your hydrolixcluster.yaml file:
spec:
  scale_profile: prod
The operator applies the prod settings to all cluster services.
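Once the file is updated, apply it with kubectl. A minimal sketch, assuming the cluster runs in a namespace named hdx (adjust the namespace and file path for your deployment):

# Apply the updated spec; the operator rolls the profile out to every service.
kubectl apply -f hydrolixcluster.yaml

# Optionally watch pods restart as the new settings propagate.
kubectl get pods -n hdx -w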
Default profile
If no scale profile is set, the cluster defaults to eval.
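To confirm which profile a running cluster uses, read the field back from the resource. A sketch, assuming a hydrolixcluster resource named hdx in namespace hdx (both names are placeholders):

# Prints the configured profile; empty output means the eval default is in effect.
kubectl get hydrolixcluster hdx -n hdx -o jsonpath='{.spec.scale_profile}'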
Available profiles
Hydrolix includes predefined profiles for common use cases:
dev: light settings for small clusters and testing.
eval: evaluation settings for trying out features.
prod: production-ready profile for 1–4 TB/day ingest with balanced query load.
mega: large-scale production profile for 10–50 TB/day ingest.
Profiles provide a baseline and can be customized or overridden; a sketch of per-service overrides follows the response table below. To see the default settings a profile applies to each component, query the scale defaults endpoint:
curl 'https://www.hydrolix.io/operator/${VERSION}/scale-defaults?profile=${PROFILE}&kubernetes=${PLATFORM}'
For example:
curl 'https://www.hydrolix.io/operator/v5.8.6/scale-defaults?profile=prod&kubernetes=gke'
which returns a plain-text table of per-service defaults:
service                      pool   replicas   cpu    memory   storage   data_storage
---------------------------- ------ ---------- ------ -------- --------- --------------
acme                                1          0.25   256Mi    256Mi
ad                                  1          2      2Gi      256Mi
akamai-siem-indexer                 0          2      2Gi      5Gi
akamai-siem-peer                    0          2      2Gi      5Gi
alter-head                          0          0.25   256Mi    256Mi
alter-indexer                       0          2      8Gi      5Gi
alter-peer                          0          0.25   256Mi    5Gi
anomaly-rca                         1          0.75   1Gi      256Mi
ariadne-core                        1          0.5    2Gi      256Mi
autoingest                          0          0.25   256Mi    256Mi
batch-head                          1          0.5    512Mi    256Mi
batch-indexer                       0          2      4Gi      5Gi
batch-peer                          1          2      4Gi      5Gi
decay                               1          0.25   256Mi    256Mi
elasticsearch                       1          1      2Gi      2Gi       5Gi
grafana                             2          4      8Gi      512Mi
hdx-ariadne-janus                   1          2      4Gi      256Mi
hdx-ariadne-janus-guardrails        1          4      8Gi      1Gi
hdx-gate                            1          0.25   526Mi    526Mi
hdx-node                            0          0.25   256Mi    256Mi
hdx-pg-monitor                      1          0.25   256Mi    256Mi
hdx-pod-metrics                     1          0.025  5Mi      5Mi
hdx-scaler                          0          1      1Gi      256Mi
hdx-traefik-auth                    0          0.25   256Mi    256Mi
http-proxy                          2          0.5    512Mi    512Mi
init-cluster                        1          0.5    512Mi    512Mi
init-turbine-api                    1          0.5    512Mi    512Mi
intake-api                          2          0.25   256Mi    256Mi
intake-head                         2          4      4Gi      5Gi
intake-indexer                      0          2      4Gi      5Gi
job-purge                           1          0.25   256Mi    256Mi
kafka-indexer                       0          2      2Gi      5Gi
kafka-peer                          0          2      2Gi      5Gi
keycloak                            1          4      4Gi      1Gi
kibana                              1          1      1Gi      512Mi
kinesis-coordinator                 0          0.25   256Mi    256Mi
kinesis-indexer                     0          2      2Gi      5Gi
kinesis-kcl-consumer                0          2      2Gi      256Mi
kinesis-peer                        0          2      2Gi      5Gi
load-sample-project                 1          0.5    512Mi    512Mi
merge                               4          0.25   512Mi    5Gi
merge                        I      4          0.25   512Mi    5Gi
merge                        II     4          0.25   512Mi    5Gi
merge                        III    4          0.25   512Mi    5Gi
merge-cleanup                       1          0.25   256Mi    256Mi
merge-controller                    0          1      1Gi      512Mi
merge-head                          1          1      1Gi      512Mi
merge-indexer                       0          2      4Gi      5Gi
merge-indexer                I      0          2      4Gi      5Gi
merge-indexer                II     0          2      6Gi      5Gi
merge-indexer                III    0          2      12Gi     5Gi
merge-peer                          1-4        0.25   512Mi    5Gi
merge-peer                   I      1-4        0.25   512Mi    5Gi
merge-peer                   II     1-4        0.25   512Mi    5Gi
merge-peer                   III    1-4        0.25   512Mi    5Gi
monitor-ingest                      1          0.05   64Mi     64Mi
operator                            1          0.25   512Mi    256Mi
otel                                0          0.5    512Mi    512Mi
partition-cleaner                   0          2      2Gi      5Gi
periodic-service                    0          1      1Gi      512Mi
pgbouncer                           1          2      512Mi    512Mi
pgbouncer-exporter                  1          0.25   128Mi    128Mi
postgres                            1          4      16Gi     1Gi       100Gi
prometheus                          1          2      2Gi      1Gi       50Gi
promwaltz                           0          0.25   256Mi    256Mi
prune-locks                         1          0.25   256Mi    256Mi
pushgateway                         1          1      512Mi    512Mi
query-head                          1          14     56Gi     50Gi
query-head-api                      1          1      1Gi      1Gi
query-peer                          3          14     56Gi     50Gi
quesma                              1          1      1Gi      1Gi
rabbitmq                            3          1      512Mi    512Mi     5Gi
reaper                              1          0.5    512Mi    512Mi
redpanda                            3          4      8Gi      1Gi       1Ti
renderer                            2          6      8Gi      512Mi
silence-linode                      0          0.25   256Mi    256Mi
spill-controller                    0          1      1Gi      256Mi
stale-job-monitor                   1          0.25   256Mi    256Mi
stream-head                         0          4      4Gi      5Gi
stream-indexer                      0          2      4Gi      5Gi
stream-peer                         0          3      4Gi      5Gi
summary-indexer                     0          2      2Gi      5Gi
summary-peer                        0          3      4Gi      5Gi
superset                            1          2      2Gi      512Mi
superset-init-db                    1          0.5    512Mi    512Mi
task-monitor                        1          0.25   256Mi    256Mi
thanos                              0          4      8Gi      1Gi
tooling                             0          1      1Gi      16Gi
traefik                             2          2      512Mi    256Mi
traefik-cfg                         0          1      1Gi      256Mi
turbine-api                         2          1      1Gi      512Mi
turbine-api-worker                  2          1      1Gi      512Mi
ui                                  1          0.25   256Mi    256Mi
usagemeter                          1          0.25   256Mi    256Mi
validator                           1          0.5    512Mi    512Mi
validator-indexer                   0          0.5    1536Mi   512Mi
vector                              0          0.5    512Mi    512Mi
version                             1          0.25   256Mi    256Mi
wait-dep                            1          0.025  50Mi     50Mi
zookeeper                           3          0.5    512Mi    512Mi     512Mi
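To customize a profile, override individual services in hydrolixcluster.yaml. A hedged sketch, assuming your operator version accepts per-service overrides under spec.scale (verify the key names against your operator documentation; the values are illustrative, not recommendations):

spec:
  scale_profile: prod
  scale:
    query-peer:
      replicas: 5    # raise from the prod default of 3
      cpu: 14
    intake-head:
      replicas: 4    # scale ingest ahead of an expected traffic spike

Services without an explicit entry keep the defaults from the selected profile.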
Node size recommendations
16 vCPU nodes
For eval and prod deployments, use 16 vCPU nodes:
EKS: c5n.4xlarge
GKE: n2-standard-16
LKE: Dedicated 32 GB
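As a concrete example, a GKE node pool using the recommended machine type could be created like this (a sketch; the pool name, cluster name, and node count are placeholders, not Hydrolix requirements):

# Create a node pool of 16 vCPU machines for an eval or prod cluster.
gcloud container node-pools create hdx-pool \
  --cluster=my-cluster \
  --machine-type=n2-standard-16 \
  --num-nodes=3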
32 vCPU nodes
For mega deployments, use 32 vCPU nodes:
EKS: c5n.9xlarge
GKE: n2-standard-32
LKE: Dedicated 64 GB
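Similarly, on EKS an eksctl-managed node group for a mega cluster might look like this (a sketch; the cluster name, group name, and node count are placeholders):

# Create a node group of 32 vCPU machines for a mega cluster.
eksctl create nodegroup \
  --cluster=my-cluster \
  --name=hdx-mega-nodes \
  --node-type=c5n.9xlarge \
  --nodes=3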