Target Kubernetes Resources

Hydrolix configuration allows users to assign pods to specific Kubernetes (k8s) resources. The configuration specification closely follows the official Kubernetes documentation.

Constrain your Hydrolix cluster to specific nodes by using the targeting setting in the cluster configuration.

apiVersion: hydrolix.io/v1
kind: HydrolixCluster
metadata:
  name: <Please provide your namespace>
spec:
  admin_email: <Please provide your email>
  db_bucket_url: https://bucket.region.linodeobjects.com <replace with your bucket URL>
  hydrolix_url: https://hostname.company.net <replace with your company hostname for Hydrolix>
  catalog_db_admin_user: linpostgres
  catalog_db_admin_db: postgres
  catalog_db_host: lin-yyyyy-xxxx-pgsql-primary-private.servers.linodedb.net <replace with the private network host created earlier>
  pg_ssl_mode: require
  env:
    AWS_ACCESS_KEY_ID: <Please provide the AWS_ACCESS_KEY_ID created earlier>
  ip_allowlist: <This allows the cluster to be fully accessible; you can restrict it to your IP>
    - 0.0.0.0/0
  scale_profile: dev <You can change the scale_profile to whatever fits your needs>
  scale:
    postgres:
      replicas: 0
  targeting:
    '*':
      node_selector:
        node.kubernetes.io/instance-type: g6-dedicated-16

In the example above, every service runs on nodes that have the label node.kubernetes.io/instance-type: g6-dedicated-16.
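
To confirm that matching nodes exist before applying the configuration, you can list them by label (a quick check, assuming kubectl is pointed at the target cluster):

# Show only the nodes that carry the instance-type label used above
kubectl get nodes -l node.kubernetes.io/instance-type=g6-dedicated-16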

Here are more examples of flexible targeting:

targeting:
    '*':
      node_selector:
        cloud.google.com/machine-family: n2

    init-turbine-api:
      node_selector:
        cloud.google.com/machine-family: n2

    query-head:
      node_name: gke-hydrolix-demo-4a2b59be-clnb

    merge-peer:
      node_affinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: topology.kubernetes.io/zone
                  operator: In
                  values:
                    - us-central1-f

    pool:merge-peer-ii:
      node_affinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: topology.kubernetes.io/zone
                  operator: In
                  values:
                    - us-central1-b

    intake-head:
      pod_anti_affinity: {}
      pod_affinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - query-head
            topologyKey: topology.kubernetes.io/zone

    postgres:
      pod_anti_affinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: security
                    operator: In
                    values:
                      - S2
              topologyKey: topology.kubernetes.io/zone

    batch-head:
      topology_spread_constraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: batch-head
              
Tolerations can also be combined with node selectors:

targeting:
    '*':
      node_selector:
        disktype: ssd
      tolerations:
        - key: example-key
          operator: Equal
          value: special
          effect: NoSchedule
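
Several of these examples match on the topology.kubernetes.io/zone label. If you are unsure which zone each node is in, one way to check (assuming kubectl access) is:

# Print nodes with an extra column showing their zone label
kubectl get nodes -L topology.kubernetes.io/zone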

This approach supports the same isolation and scheduling restrictions that you would use in a standard Kubernetes deployment configuration.
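
After you change the targeting rules, reapply the cluster spec so the operator can reschedule the affected pods (a sketch, assuming your spec is in hydrolixcluster.yaml and kubectl targets the right cluster and namespace):

# Reapply the updated HydrolixCluster spec
kubectl apply -f hydrolixcluster.yaml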

For the operator resources to use the same targeting rules, specify the cluster configuration file with the -c flag when you generate your operator configuration.

For example, if your Hydrolix configuration is hydrolixcluster.yaml:

hkt operator-resources -c hydrolixcluster.yaml > operator.yaml

You can then examine the operator.yaml file to ensure you have the proper nodeSelector rule in place.
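
For instance, a quick search (assuming GNU or BSD grep) prints each nodeSelector block in the generated manifest along with its line number:

# Show every nodeSelector and the line that follows it
grep -n -A 1 'nodeSelector' operator.yaml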