Deploy Hydrolix

Hydrolix deployments follow the Kubernetes operator pattern. To deploy Hydrolix, generate an operator configuration (operator.yaml) and a Hydrolix custom resource configuration (hydrolixcluster.yaml), then apply both files to your Kubernetes cluster.

Generate the Operator Configuration

The Hydrolix operator resources API generates all of the Kubernetes resource definitions required to deploy the operator, including service accounts and role permissions. Once deployed, the operator manages your Hydrolix cluster deployment. To upgrade your deployment to a new version, repeat this step.

📘

Prerequisite: Environment Variables

These CLI commands require you to set environment variables before generating the configuration. See Prepare a Cluster for more information about the required inputs.
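For reference, the exported variables used throughout this guide look similar to the following. The values shown are illustrative placeholders; see Prepare a Cluster for the exact formats your environment requires:

export HDX_KUBERNETES_NAMESPACE=hydrolix            # namespace the cluster deploys into
export HDX_ADMIN_EMAIL=admin@example.com            # receives the initial login email
export HDX_BUCKET_REGION=us-east-1                  # region of your storage bucket
export HDX_DB_BUCKET_URL=https://my-bucket.example.com       # placeholder bucket URL
export HDX_DB_BUCKET_ENDPOINT=https://s3.example.com         # placeholder S3-compatible endpoint
export HDX_HYDROLIX_URL=https://hdx.example.com     # public URL of your Hydrolix cluster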

Run the following command to generate the operator YAML file, named operator.yaml:

curl "https://www.hydrolix.io/operator/latest/operator-resources?namespace=${HDX_KUBERNETES_NAMESPACE}" > operator.yaml

Generate the hydrolixcluster.yaml Configuration

Now that the environment is set up, create the Hydrolix configuration hydrolixcluster.yaml:

---     
apiVersion: hydrolix.io/v1
kind: HydrolixCluster
metadata:
  name: hdx
  namespace: ${HDX_KUBERNETES_NAMESPACE}
spec:
  admin_email: ${HDX_ADMIN_EMAIL}
  db_bucket_region: ${HDX_BUCKET_REGION}
  db_bucket_url: ${HDX_DB_BUCKET_URL}
  db_bucket_endpoint: ${HDX_DB_BUCKET_ENDPOINT}
  env:
    AWS_REGION: ${HDX_BUCKET_REGION}
    S3_ENDPOINT: ${HDX_DB_BUCKET_ENDPOINT}
  hydrolix_name: hdx
  hydrolix_url: ${HDX_HYDROLIX_URL}
  ip_allowlist:
  - 0.0.0.0/0 #TODO: Replace this with your IP address in CIDR notation, e.g. 12.13.14.15/32
  kubernetes_namespace: ${HDX_KUBERNETES_NAMESPACE}
  kubernetes_profile: lke
  scale:
    postgres:
      replicas: 1
  scale_profile: dev

Use the following command to replace the environment variables above with their values:

eval "echo \"$(cat hydrolixcluster.yaml)\"" > hydrolixcluster.yaml

Now, manually replace the 0.0.0.0/0 entry in ip_allowlist with your own IP address in CIDR notation. You can find your public IP address by running curl -s ifconfig.me.
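For example, after editing, the allowlist entry might look like the following. The address 203.0.113.25 is a documentation placeholder; use your own:

  ip_allowlist:
  - 203.0.113.25/32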

📘

Manually Edit Configuration Files

You can also edit the hydrolixcluster.yaml to tune each deployment to your resource requirements. Scale profile information can be found in Scale Profiles.
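For example, a hypothetical override that runs two query peers could look like the snippet below; the valid service names and fields are listed in Scale Profiles:

spec:
  scale:
    query-peer:
      replicas: 2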

After creating these files, deploy Hydrolix by running the following commands from the directory containing the YAML configurations:

kubectl apply -f operator.yaml --namespace ${HDX_KUBERNETES_NAMESPACE}
kubectl apply -f hydrolixcluster.yaml --namespace ${HDX_KUBERNETES_NAMESPACE}

The cluster typically takes five to ten minutes to fully deploy. When it's ready for you to sign in to the web UI, it sends an email to the address you configured in the HDX_ADMIN_EMAIL environment variable. That email contains a link that lets you set a new password and log in. However, you'll need to set up DNS for your cluster, and probably HTTPS/TLS, before that link will work.

Create Your DNS Record

Next, create a DNS record so you can access your cluster. Run the following command to retrieve the traefik record:

kubectl get service/traefik --namespace=$HDX_KUBERNETES_NAMESPACE

You should see output similar to the following:

NAME      TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                       AGE
traefik   LoadBalancer   10.64.14.42   34.66.136.134   80:31708/TCP,9000:32344/TCP   2m50s

Using the DNS provider of your choice, set up an A record for your hostname that points to the EXTERNAL-IP above.
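If you prefer to script this step, you can read the external IP directly from the service and then confirm that your record resolves. The hostname below is a placeholder:

kubectl get service/traefik --namespace=$HDX_KUBERNETES_NAMESPACE \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
dig +short hdx.example.com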

Check Deployment Status

You can now check the status of your deployment. Run the following kubectl command to see the status of all pods in your cluster:

kubectl get pods --namespace $HDX_KUBERNETES_NAMESPACE

You should see output similar to the following:

NAME                             READY   STATUS      RESTARTS   AGE
autoingest-658f799497-czw59      1/1     Running     0          5m44s
batch-head-bcf7869bc-fm794       1/1     Running     0          5m46s
batch-peer-555df86d8-svlmw       2/2     Running     0          5m45s
decay-78775df79d-ppxpf           1/1     Running     0          5m45s
init-cluster-v3-16-0-6fcml       0/1     Completed   0          5m45s
init-turbine-api-v3-16-0-jqt4m   0/1     Completed   0          5m46s
intake-api-747cdd5d4d-vrsjm      1/1     Running     0          5m45s
keycloak-68fcff9b69-p4lt5        1/1     Running     0          5m46s
load-sample-project-nv8dl        1/1     Running     0          5m44s
merge-head-7df478d57-7qgwn       1/1     Running     0          5m44s
merge-peer-dbb68cc75-c8fl4       1/1     Running     0          5m45s
merge-peer-dbb68cc75-ntwpj       1/1     Running     0          5m45s
operator-55d4dfff6f-pktrl        1/1     Running     0          7m10s
postgres-0                       1/1     Running     0          5m46s
prometheus-0                     2/2     Running     0          5m45s
query-head-65bf688594-l9prj      1/1     Running     0          5m45s
query-peer-67dfcccb56-h6rkw      1/1     Running     0          5m44s
rabbitmq-0                       1/1     Running     0          5m46s
reaper-647d474f5-mfgww           1/1     Running     0          5m44s
redpanda-0                       2/2     Running     0          5m46s
redpanda-1                       2/2     Running     0          5m23s
redpanda-2                       2/2     Running     0          3m38s
stream-head-6ccc9779df-7jvzf     1/1     Running     0          5m43s
stream-peer-6db9464bd5-cgq6x     2/2     Running     0          5m44s
traefik-6f898fd647-lxf84         2/2     Running     0          5m43s
turbine-api-65d44c7d54-crpcm     1/1     Running     0          5m43s
ui-5b8bc9c9d4-pgjtv              1/1     Running     0          5m43s
validator-769ff76ddb-5mm5w       2/2     Running     0          5m43s
vector-557q5                     1/1     Running     0          4m58s
vector-5ttd4                     1/1     Running     0          5m46s
vector-5z8zq                     1/1     Running     0          5m46s
vector-qnpn9                     1/1     Running     0          5m46s
vector-r8pj6                     1/1     Running     0          3m4s
version-848c8c964c-j2khx         1/1     Running     0          5m43s
zookeeper-0                      1/1     Running     0          5m46s

You can also check your cluster status via the k9s tool or in the Kubernetes Dashboard available from the Linode LKE cluster summary page.
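If a pod stays in Pending or CrashLoopBackOff, standard kubectl troubleshooting applies. For example, using one of the pod names from the output above:

kubectl describe pod stream-head-6ccc9779df-7jvzf --namespace $HDX_KUBERNETES_NAMESPACE
kubectl logs stream-head-6ccc9779df-7jvzf --namespace $HDX_KUBERNETES_NAMESPACE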

Enable IP Access and TLS

If the URL of your Hydrolix cluster uses HTTPS, you will need to configure a TLS certificate. This is sometimes a complex process, but with simpler DNS configurations you can just follow these two steps:

  1. Set the configuration option acme_enabled to true in hydrolixcluster.yaml.
    spec:
       acme_enabled: true
    
  2. Load the configuration changes into your Hydrolix cluster with kubectl apply -f hydrolixcluster.yaml. Hydrolix will automatically generate a certificate for your cluster and store it in a Kubernetes secret named traefik-tls. This process can take up to 30 seconds; you can check for the secret as shown below.
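To confirm the certificate was created, check for the traefik-tls secret:

kubectl get secret traefik-tls --namespace $HDX_KUBERNETES_NAMESPACE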

If your DNS policies are more complicated, or if you want to generate and use your own certificate, refer to the instructions in Enabling Access & TLS.

TLS setup failing?

Sometimes the Linode API will be unavailable and certificate storage will fail. In this case, delete the init-acme job from your Kubernetes cluster. The Hydrolix operator will automatically start a new one in 5 seconds and retry the operation.

To find the full name of the init-acme job, list the Kubernetes cluster's jobs:

% kubectl get jobs
NAME                            STATUS      COMPLETIONS   DURATION   AGE
backup-keycloak-db-v4-18-2      Suspended   0/1                      8m53s
check-bucket-access-v45x6we4v   Suspended   0/1                      8m53s
init-acme-509c50f0              Suspended   0/1                      8m53s
init-cluster-v4-18-2-66dbb6ba   Suspended   0/1                      8m53s
init-turbine-api-v4-18-2        Suspended   0/1                      8m53s
load-sample-project             Suspended   0/1                      8m53s

Remove the job with:

% kubectl delete jobs init-acme-509c50f0

Enable Autoscaling

Once your Hydrolix cluster is deployed, you can enable autoscaling by deploying the Kubernetes Metrics Server. Save the following YAML to a file named autoscale.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls=true
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

Once you have saved the file, apply it with:

kubectl apply -f autoscale.yaml
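To confirm the Metrics Server is running and serving metrics, check its APIService and try kubectl top:

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes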

The Final Step

You should have received an email that allows you to set a password and log in. If you don't receive this email, or have trouble logging in, try the following:

  • Verify that the email address in your hydrolixcluster.yaml file is correct and that you can receive mail sent to it.
  • Try the "Forgot my password" option on the login page.
  • If those two steps fail, contact us at [email protected] and we'll happily assist you.

What’s Next