Target Kubernetes Resources
The Hydrolix configuration allows users to assign pods to specific Kubernetes (k8s) resources.
The configuration specification closely follows the official k8s documentation.
To constrain the Hydrolix cluster to specific nodes, specify targeting flags in the cluster configuration.
apiVersion: hydrolix.io/v1
kind: HydrolixCluster
metadata:
  name: <Please provide your namespace>
spec:
  admin_email: <Please provide your email>
  db_bucket_url: https://bucket.region.linodeobjects.com  # replace with your bucket URL
  hydrolix_url: https://hostname.company.net  # replace with your company hostname for Hydrolix
  catalog_db_admin_user: linpostgres
  catalog_db_admin_db: postgres
  catalog_db_host: lin-yyyyy-xxxx-pgsql-primary-private.servers.linodedb.net  # replace with the private network host created earlier
  pg_ssl_mode: require
  env:
    AWS_ACCESS_KEY_ID: <Please provide the AWS_ACCESS_KEY_ID created earlier>
  ip_allowlist:  # this makes the cluster fully accessible; you can restrict it to your IP
  - 0.0.0.0/0
  scale_profile: dev  # change the scale_profile to whatever fits your needs
  scale:
    postgres:
      replicas: 0
  targeting:
    '*':
      node_selector:
        node.kubernetes.io/instance-type: g6-dedicated-16
In the example above, every service runs on nodes that carry the label node.kubernetes.io/instance-type: g6-dedicated-16.
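How a node_selector is evaluated can be sketched as follows. This is an illustrative Python model of the standard Kubernetes matching rule, not Hydrolix code: a node satisfies the selector only when every requested label is present with exactly the requested value.

```python
def node_matches(node_labels: dict, node_selector: dict) -> bool:
    """A node satisfies a nodeSelector only if every requested label
    key is present on the node with exactly the same value."""
    return all(node_labels.get(key) == value for key, value in node_selector.items())

# A node labeled like the g6-dedicated-16 instances in the example above
labels = {"node.kubernetes.io/instance-type": "g6-dedicated-16"}
print(node_matches(labels, {"node.kubernetes.io/instance-type": "g6-dedicated-16"}))  # True
print(node_matches(labels, {"node.kubernetes.io/instance-type": "n2-standard-4"}))    # False
```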
Here are more examples of the different ways you can use targeting:
targeting:
  '*':
    node_selector:
      cloud.google.com/machine-family: n2
  init-turbine-api:
    node_selector:
      cloud.google.com/machine-family: n2
  query-head:
    node_name: gke-hydrolix-demo-4a2b59be-clnb
  merge-peer:
    node_affinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-central1-f
  pool:merge-peer-ii:
    node_affinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-central1-b
  stream-head:
    pod_anti_affinity: {}
    pod_affinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - query-head
        topologyKey: topology.kubernetes.io/zone
  postgres:
    pod_anti_affinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: topology.kubernetes.io/zone
  batch-head:
    topology_spread_constraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: batch-head
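The node_affinity rules above follow the standard Kubernetes evaluation order: the entries under nodeSelectorTerms are ORed together, while the matchExpressions inside a single term are ANDed. A small Python sketch of that rule (illustrative only, covering just the In operator used in the examples; real Kubernetes also supports NotIn, Exists, DoesNotExist, Gt, and Lt):

```python
def expr_matches(labels: dict, expr: dict) -> bool:
    # Only the "In" operator from the examples is modeled here.
    if expr["operator"] == "In":
        return labels.get(expr["key"]) in expr["values"]
    raise NotImplementedError(expr["operator"])

def node_affinity_allows(labels: dict, node_selector_terms: list) -> bool:
    # Terms are ORed; the matchExpressions inside one term are ANDed.
    return any(
        all(expr_matches(labels, e) for e in term["matchExpressions"])
        for term in node_selector_terms
    )

# The merge-peer rule from the example: require zone us-central1-f
zone_rule = [{"matchExpressions": [
    {"key": "topology.kubernetes.io/zone", "operator": "In", "values": ["us-central1-f"]}
]}]
print(node_affinity_allows({"topology.kubernetes.io/zone": "us-central1-f"}, zone_rule))  # True
print(node_affinity_allows({"topology.kubernetes.io/zone": "us-central1-b"}, zone_rule))  # False
```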
targeting:
  '*':
    node_selector:
      disktype: ssd
    tolerations:
    - key: example-key
      operator: Equal
      value: special
      effect: NoSchedule
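Tolerations let pods schedule onto nodes that carry a matching taint. The matching rule for the Equal operator used above can be sketched in Python (an illustrative model of the Kubernetes semantics, not Hydrolix code):

```python
def tolerates(toleration: dict, taint: dict) -> bool:
    """With operator Equal, a toleration matches a taint when key, value,
    and (if the toleration sets one) effect all agree."""
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        # Exists matches any value; an empty key matches every taint.
        return not toleration.get("key") or toleration["key"] == taint["key"]
    return (toleration.get("key") == taint["key"]
            and toleration.get("value") == taint["value"])

# The toleration from the example above
toleration = {"key": "example-key", "operator": "Equal",
              "value": "special", "effect": "NoSchedule"}
print(tolerates(toleration, {"key": "example-key", "value": "special", "effect": "NoSchedule"}))  # True
print(tolerates(toleration, {"key": "other-key", "value": "special", "effect": "NoSchedule"}))    # False
```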
Our approach is flexible and lets you specify the isolation and restriction rules you need, the same way they work in a k8s deployment configuration.
For the operator resources to use the same targeting rules, pass the config file with the -c flag when you generate your operator configuration.
For example, if your Hydrolix configuration is hydrolixcluster.yaml:
hkt operator-resources -c hydrolixcluster.yaml > operator.yaml
You can then inspect the generated configuration to ensure that the proper nodeSelector rules are in place.
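One quick way to do that check is to scan the generated manifests for nodeSelector entries. The helper below is a hypothetical sketch (the file name operator.yaml comes from the command above; the manifest fragment is an assumed example):

```python
def count_node_selectors(manifest_text: str) -> int:
    # Counts lines that begin a nodeSelector block in the rendered manifests.
    return sum(
        1 for line in manifest_text.splitlines()
        if line.strip().startswith("nodeSelector")
    )

# Inline fragment for illustration; in practice you would read the real file:
#   with open("operator.yaml") as f: text = f.read()
text = """\
spec:
  template:
    spec:
      nodeSelector:
        node.kubernetes.io/instance-type: g6-dedicated-16
"""
print(count_node_selectors(text))  # 1
```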