Deploying to LKE

It's happening!

To deploy, you will use the hkt tool to generate the YAML required by kubectl, then apply it.

The hkt tool reads the environment variables we set earlier in the deployment process.
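
For reference, these are the variables the manifests below interpolate. The values shown here are placeholders only; use the ones from your earlier setup:

export HYDROLIX_NAMESPACE=hdx             # Kubernetes namespace for the cluster
export ADMIN_EMAIL=admin@example.com      # receives the initial password email
export HYDROLIX_DB_BUCKET=my-hdx-bucket   # Linode Object Storage bucket
export HYDROLIX_DOMAIN=example.com        # DNS domain for the cluster
export HYDROLIX_HOST=hdx                  # hostname part of the cluster URL
export CLOUD=aws                          # placeholder; Linode Object Storage speaks the S3 API
export OWNER=you@example.com              # owner label for the cluster
export REGION=us-east-1                   # Linode region hosting the cluster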

The first step is to install the operator in your Kubernetes cluster:

hkt operator-resources > operator.yaml && kubectl apply -f operator.yaml
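
If you'd like to confirm the operator is running before continuing, a simple check (the operator pod shows up in your Hydrolix namespace, as in the pod listing later on this page):

kubectl get pods --namespace $HYDROLIX_NAMESPACE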

Then you need to create the secret containing the AWS secret access key. Save the following as secret.yaml:

---
apiVersion: v1
kind: Secret
metadata:
  name: curated
  namespace: $HYDROLIX_NAMESPACE
stringData:
  AWS_SECRET_ACCESS_KEY: $YOURSECRETGENERATEDINLINODEPORTAL
type: Opaque

Once the file is created, use kubectl to apply it to your cluster:

kubectl apply -f secret.yaml
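
If you prefer not to write the secret key to disk, an equivalent approach is to let kubectl build the same Secret directly; this sketch assumes your key is in the $YOURSECRETGENERATEDINLINODEPORTAL variable:

# Creates the same Opaque "curated" Secret without an intermediate file
kubectl create secret generic curated \
  --namespace "$HYDROLIX_NAMESPACE" \
  --from-literal=AWS_SECRET_ACCESS_KEY="$YOURSECRETGENERATEDINLINODEPORTAL"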

Next, create the resource definition using the hydrolix-cluster command in hkt:

hkt hydrolix-cluster --ip-allowlist `curl -s ifconfig.me`/32 > hydrolixcluster.yaml

You then need to modify hydrolixcluster.yaml to include some settings specific to Linode; the lines to add are marked with # add this line comments below:

apiVersion: hydrolix.io/v1
kind: HydrolixCluster
metadata:
  name: $HYDROLIX_NAMESPACE
  namespace: $HYDROLIX_NAMESPACE
spec:
  admin_email: $ADMIN_EMAIL
  aws_credentials_method: static  # add this line
  bucket: $HYDROLIX_DB_BUCKET
  cloud: $CLOUD
  domain: $HYDROLIX_DOMAIN
  env:
    AWS_ACCESS_KEY_ID: YOURACCESSKEYID  # add this line with the access key ID generated in the Linode portal
  host: $HYDROLIX_HOST
  ip_allowlist:
  - source: $YOURPUBLICIP/32
  kubernetes_cloud: linode  # add this line
  namespace: $HYDROLIX_NAMESPACE
  owner: $OWNER
  region: $REGION
  s3_endpoint: https://$REGION.linodeobjects.com  # add this line, replacing $REGION with your real region
  scale: {}
  scale_profile: minimal
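
If you prefer to script these edits rather than make them by hand, a sketch using yq v4 (an assumption; use whatever YAML tooling you like):

# Scripted version of the manual edits above (yq v4 syntax)
yq -i '.spec.aws_credentials_method = "static"' hydrolixcluster.yaml
yq -i '.spec.env.AWS_ACCESS_KEY_ID = "YOURACCESSKEYID"' hydrolixcluster.yaml
yq -i '.spec.kubernetes_cloud = "linode"' hydrolixcluster.yaml
yq -i '.spec.s3_endpoint = "https://" + strenv(REGION) + ".linodeobjects.com"' hydrolixcluster.yaml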

After modifying hydrolixcluster.yaml, apply it to start the deployment of your cluster:

kubectl apply -f hydrolixcluster.yaml
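
The pods take a few minutes to come up. You can follow progress with:

kubectl get pods --namespace $HYDROLIX_NAMESPACE --watch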

Within hydrolixcluster.yaml a default scale is defined for the Hydrolix cluster. Should you wish to deploy differently from the default, more information can be found in the Scaling documentation.

The default minimal scale profile brings up a minimally sized infrastructure of stateful and stateless components:

Stateful

Service      Description             Replicas   CPU   Memory   Storage   Data Storage
postgres     The Core                1          4     16Gi     5Gi       100Gi
redpanda     Ingest                  2          2     6Gi      5Gi       1Ti
prometheus   Reporting and Control   1          1     5Gi      5Gi       50GB

Stateless

Service       Description           Replicas   CPU    Memory   Storage   Data Storage
alter-peer    Alter                 0          2      20Gi     10Gi      -
batch-head    Ingest                1          500m   1Gi      5Gi       -
batch-peer    Ingest                1-12       2      10Gi     10Gi      -
decay         Age                   1          1      1Gi      5Gi       -
intake-api    Ingest                1          500m   1Gi      5Gi       -
kafka-peer    Ingest                0          2      10Gi     10Gi      -
keycloak      The Core              1          1      2Gi      5Gi       -
merge-head    Merge                 1          500m   1Gi      5Gi       -
merge-peer    Merge                 1-12       2      30Gi     10Gi      -
operator      Kubernetes Operator   1          500m   1Gi      5Gi       -
query-head    Query                 1          6      48Gi     50Gi      -
query-peer    Query                 3-12       15     48Gi     50Gi      -
rabbitmq      Ingest                1          1      4Gi      10Gi      -
reaper        Age                   1          2      1Gi      5Gi       -
stream-head   Ingest                1-12       4      10Gi     10Gi      -
stream-peer   Ingest                1-12       2      10Gi     10Gi      -
turbine-api   Query                 1          500m   1Gi      5Gi       -
traefik       The Core              2          1      1Gi      5Gi       -
version       The Core              1          500m   1Gi      1Gi       -
zookeeper     The Core              1          1      1Gi      5Gi       -
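
Should you want to deviate from these defaults, the scale field in the cluster spec accepts per-service overrides. The exact schema is described in the Scaling documentation; the snippet below is only a hypothetical illustration:

spec:
  scale:
    query-peer:
      replicas: 6   # hypothetical override; check the Scaling documentation for the real schema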

Setting up Autoscaling

To set up autoscaling with Linode, you need to deploy the Kubernetes metrics-server using the following configuration:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls=true
        - --metric-resolution=15s
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

You can save this configuration as autoscale.yaml and apply it:

kubectl apply -f autoscale.yaml
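
Once the metrics-server pod is running, you can confirm the metrics API is serving:

# Both should return data once metrics-server is healthy
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes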

Create your DNS record

The final step in your deployment is creating the DNS record so you can access the services.
To retrieve the external IP of the traefik load balancer, use the kubectl get services command:

kubectl get services
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                                AGE
intake-api    ClusterIP      10.64.10.148   <none>          8080/TCP                               2m51s
keycloak      ClusterIP      10.64.8.47     <none>          8080/TCP                               2m51s
native        ClusterIP      10.64.8.158    <none>          9000/TCP                               2m51s
postgres      ClusterIP      None           <none>          5432/TCP                               2m51s
prometheus    ClusterIP      None           <none>          9090/TCP                               2m51s
query-head    ClusterIP      10.64.8.199    <none>          9000/TCP,8088/TCP                      2m51s
rabbit        ClusterIP      None           <none>          5672/TCP,15692/TCP,4369/TCP            2m51s
redpanda      ClusterIP      None           <none>          9092/TCP,8082/TCP,33146/TCP,9644/TCP   2m51s
stream-head   ClusterIP      10.64.2.40     <none>          8089/TCP                               2m51s
traefik       LoadBalancer   10.64.14.42    WW.XX.YYY.ZZZ   80:31708/TCP,9000:32344/TCP            2m50s
turbine-api   ClusterIP      10.64.15.225   <none>          3000/TCP                               2m51s
ui            ClusterIP      10.64.3.254    <none>          80/TCP                                 2m50s
validator     ClusterIP      10.64.15.112   <none>          8089/TCP                               2m51s
version       ClusterIP      10.64.12.105   <none>          23925/TCP                              2m51s
zoo           ClusterIP      None           <none>          2181/TCP                               2m51s
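
The EXTERNAL-IP of the traefik service (WW.XX.YYY.ZZZ above) is the address your DNS A record should point to, typically for $HYDROLIX_HOST.$HYDROLIX_DOMAIN. To print just that field:

# Prints only the traefik load balancer IP
kubectl get service traefik --namespace $HYDROLIX_NAMESPACE \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'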

👍 Enabling IP and SSL/TLS access

This may also be a good time to set up IP access control and a TLS certificate. You can find instructions in the Enabling Access & TLS section.

Checking your deployment

You can now check the status of your deployment, either with kubectl or via the Linode Cloud console. For example, to see the status of each running pod:

kubectl get pods --namespace $HYDROLIX_NAMESPACE
NAME                             READY   STATUS      RESTARTS   AGE
autoingest-658f799497-czw59      1/1     Running     0          5m44s
batch-head-bcf7869bc-fm794       1/1     Running     0          5m46s
batch-peer-555df86d8-svlmw       2/2     Running     0          5m45s
decay-78775df79d-ppxpf           1/1     Running     0          5m45s
init-cluster-v3-16-0-6fcml       0/1     Completed   0          5m45s
init-turbine-api-v3-16-0-jqt4m   0/1     Completed   0          5m46s
intake-api-747cdd5d4d-vrsjm      1/1     Running     0          5m45s
keycloak-68fcff9b69-p4lt5        1/1     Running     0          5m46s
load-sample-project-nv8dl        1/1     Running     0          5m44s
merge-head-7df478d57-7qgwn       1/1     Running     0          5m44s
merge-peer-dbb68cc75-c8fl4       1/1     Running     0          5m45s
merge-peer-dbb68cc75-ntwpj       1/1     Running     0          5m45s
operator-55d4dfff6f-pktrl        1/1     Running     0          7m10s
postgres-0                       1/1     Running     0          5m46s
prometheus-0                     2/2     Running     0          5m45s
query-head-65bf688594-l9prj      1/1     Running     0          5m45s
query-peer-67dfcccb56-h6rkw      1/1     Running     0          5m44s
rabbitmq-0                       1/1     Running     0          5m46s
reaper-647d474f5-mfgww           1/1     Running     0          5m44s
redpanda-0                       2/2     Running     0          5m46s
redpanda-1                       2/2     Running     0          5m23s
redpanda-2                       2/2     Running     0          3m38s
stream-head-6ccc9779df-7jvzf     1/1     Running     0          5m43s
stream-peer-6db9464bd5-cgq6x     2/2     Running     0          5m44s
traefik-6f898fd647-lxf84         2/2     Running     0          5m43s
turbine-api-65d44c7d54-crpcm     1/1     Running     0          5m43s
ui-5b8bc9c9d4-pgjtv              1/1     Running     0          5m43s
validator-769ff76ddb-5mm5w       2/2     Running     0          5m43s
vector-557q5                     1/1     Running     0          4m58s
vector-5ttd4                     1/1     Running     0          5m46s
vector-5z8zq                     1/1     Running     0          5m46s
vector-qnpn9                     1/1     Running     0          5m46s
vector-r8pj6                     1/1     Running     0          3m4s
version-848c8c964c-j2khx         1/1     Running     0          5m43s
zookeeper-0                      1/1     Running     0          5m46s
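
If you'd rather block until everything is up than poll, kubectl can wait on pod readiness; note that the completed init jobs never report Ready, so you may need to exclude them with a selector:

kubectl wait pods --all --for=condition=Ready \
  --namespace $HYDROLIX_NAMESPACE --timeout=600s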

The Final Step

You should have received an email that allows you to set a password and log in. If you do not receive this email, please feel free to contact us at [email protected] and we'll happily assist you.
