Deploy Hydrolix
Hydrolix deployments follow the Kubernetes operator pattern. To deploy Hydrolix, generate an operator configuration (operator.yaml) and a custom resource Hydrolix configuration (hydrolixcluster.yaml). You'll use these files to deploy Hydrolix on your Kubernetes cluster.
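The commands and templates below reference several HDX_* environment variables that are assumed to have been exported during earlier setup. As a minimal sketch, with placeholder values only (replace them with your own settings):
# Placeholder values -- substitute your own namespace, email, bucket, and hostname.
export HDX_KUBERNETES_NAMESPACE=hydrolix
export HDX_ADMIN_EMAIL=admin@example.com
export HDX_BUCKET_REGION=us-east-1
export HDX_DB_BUCKET_URL=https://example-bucket.example.com
export HDX_HYDROLIX_URL=https://hdx.example.com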
Generate operator config
The Hydrolix operator resources API generates all of the Kubernetes resource definitions required to deploy the operator, including service accounts and role permissions. Once deployed, the operator manages your Hydrolix cluster deployment. To upgrade your deployment to a new version, repeat this step.
Run the following command to generate the operator YAML file, named operator.yaml:
curl "https://www.hydrolix.io/operator/latest/operator-resources?namespace=${HDX_KUBERNETES_NAMESPACE}" > operator.yaml
Generate hydrolixcluster.yaml config
Now that the environment is set up, create the Hydrolix configuration hydrolixcluster.yaml and fill in the details:
---
apiVersion: hydrolix.io/v1
kind: HydrolixCluster
metadata:
  name: hdx
  namespace: ${HDX_KUBERNETES_NAMESPACE}
spec:
  admin_email: ${HDX_ADMIN_EMAIL}
  db_bucket_region: ${HDX_BUCKET_REGION}
  db_bucket_url: ${HDX_DB_BUCKET_URL}
  hydrolix_name: hdx
  hydrolix_url: ${HDX_HYDROLIX_URL}
  ip_allowlist:
    - 0.0.0.0/0 # TODO: Replace this with your IP address!
  kubernetes_namespace: ${HDX_KUBERNETES_NAMESPACE}
  kubernetes_profile: lke
  scale:
    postgres:
      replicas: 0
  scale_profile: dev
Use the following command to replace the environment variables above with their values:
eval "echo \"$(cat hydrolixcluster.yaml)\"" > hydrolixcluster.yaml
Don't forget to:
- Add your IP address to the allowlist. You can get your public IP address by running curl -s ifconfig.me (see the sketch after this list).
- Add the access key ID you created earlier.
- Include the protocol, not just the host, in ${HDX_DB_BUCKET_URL} (for example, https://).
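A minimal sketch of fetching your public IP and shaping it into an allowlist entry (the /32 suffix restricts access to that single address):
# Look up your public IPv4 address and print a single-host CIDR for ip_allowlist.
MY_IP=$(curl -s ifconfig.me)
echo "ip_allowlist entry: ${MY_IP}/32"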
Manually Edit Configuration Files
You can also edit hydrolixcluster.yaml to tune each deployment to your resource requirements. For details on the available scale profiles, see Scale Profiles.
After creating these files, deploy Hydrolix by running the following commands from the folder containing the YAML configs:
kubectl apply -f operator.yaml --namespace ${HDX_KUBERNETES_NAMESPACE}
kubectl apply -f hydrolixcluster.yaml --namespace ${HDX_KUBERNETES_NAMESPACE}
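You can then watch the pods start in your namespace; once they are all running, the cluster is up. For example:
# Watch the Hydrolix pods come up (Ctrl-C to stop watching).
kubectl get pods --namespace ${HDX_KUBERNETES_NAMESPACE} --watch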
Enable Autoscaling
Once your Hydrolix cluster is deployed, you can enable autoscaling by saving the following YAML and applying it:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls=true
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
Once you have saved the file (for example, as autoscale.yaml), you can apply it using:
kubectl apply -f autoscale.yaml
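Autoscaling relies on a healthy metrics-server. You can verify it with standard kubectl commands, for example:
# Confirm the metrics-server Deployment is available...
kubectl get deployment metrics-server -n kube-system
# ...and that resource metrics are being served.
kubectl top nodes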
Create your DNS record
The final step in your deployment is to create a DNS record so you can access the services. To retrieve the traefik IP address, use the kubectl get services command.
kubectl get services
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                                AGE
intake-api    ClusterIP      10.64.10.148   <none>          8080/TCP                               2m51s
keycloak      ClusterIP      10.64.8.47     <none>          8080/TCP                               2m51s
native        ClusterIP      10.64.8.158    <none>          9000/TCP                               2m51s
postgres      ClusterIP      None           <none>          5432/TCP                               2m51s
prometheus    ClusterIP      None           <none>          9090/TCP                               2m51s
query-head    ClusterIP      10.64.8.199    <none>          9000/TCP,8088/TCP                      2m51s
rabbit        ClusterIP      None           <none>          5672/TCP,15692/TCP,4369/TCP            2m51s
redpanda      ClusterIP      None           <none>          9092/TCP,8082/TCP,33146/TCP,9644/TCP   2m51s
stream-head   ClusterIP      10.64.2.40     <none>          8089/TCP                               2m51s
traefik       LoadBalancer   10.64.14.42    WW.XX.YYY.ZZZ   80:31708/TCP,9000:32344/TCP            2m50s
turbine-api   ClusterIP      10.64.15.225   <none>          3000/TCP                               2m51s
ui            ClusterIP      10.64.3.254    <none>          80/TCP                                 2m50s
validator     ClusterIP      10.64.15.112   <none>          8089/TCP                               2m51s
version       ClusterIP      10.64.12.105   <none>          23925/TCP                              2m51s
zoo           ClusterIP      None           <none>          2181/TCP                               2m51s
The public (EXTERNAL-IP) address of the traefik service is the value to use for your A record.
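If you prefer to script it, the external IP can be read directly from the traefik Service; a sketch:
# Print only the LoadBalancer IP of the traefik service.
kubectl get service traefik --namespace ${HDX_KUBERNETES_NAMESPACE} \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'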
Adding IPv6 AAAA records
Linode NodeBalancers are typically published with an IPv6 address as well as an IPv4 address. Once you have your IPv4 address, search for the matching NodeBalancer in the Linode portal. Clicking the NodeBalancer with that IP shows further details, and you will find the IPv6 address under the IP Addresses section. Use it to create your AAAA record.
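Once both records exist, you can confirm they resolve; hdx.example.com is a placeholder for the host in your ${HDX_HYDROLIX_URL}:
# Check that the A and AAAA records resolve (replace with your hostname).
dig +short A hdx.example.com
dig +short AAAA hdx.example.com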