Deploy Hydrolix
Hydrolix deployments follow the Kubernetes operator pattern. To deploy Hydrolix, generate an operator configuration (operator.yaml) and a HydrolixCluster custom resource configuration (hydrolixcluster.yaml). You'll use these files to deploy Hydrolix on your Kubernetes cluster.
Prerequisite: Environment Variables
These CLI commands require you to set environment variables before generating the configuration. See Prepare your GKE Cluster for more information about the required inputs.
Configure and Deploy the Hydrolix Operator
The operator-resources command generates the Kubernetes resource definitions required for deploying the operator, service accounts, and role permissions. The operator manages all Hydrolix cluster deployments. Run the following command to generate a YAML operator configuration file for your cluster:
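A sketch of this step, assuming the Hydrolix deployment tool (`hkt`) is installed and the environment variables from the prerequisites are exported; the exact command name and flags may differ by version, so consult your installed tool's help output:

```shell
# Generate the operator resource definitions (operator, service accounts,
# role permissions) and save them to operator.yaml.
# The tool name here is an assumption based on a typical Hydrolix setup.
hkt operator-resources > operator.yaml
```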
Next, use the Kubernetes command line tool (kubectl) to apply the generated configuration to your Kubernetes cluster:
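For example, assuming the configuration was saved as operator.yaml and kubectl is pointed at your GKE cluster:

```shell
# Apply the generated operator resources to the current kubectl context
kubectl apply -f operator.yaml
```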
Configure and Deploy a Hydrolix Cluster
The hydrolix-cluster command generates the hydrolixcluster.yaml deployment file. We provide scale profiles for various cloud providers and deployment sizes, and you can optionally specify a profile using the scale-profile flag. By default, Hydrolix uses a minimal profile. To configure a dev scale deployment, add the following to a file named hydrolixcluster.yaml:
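A hedged sketch of what hydrolixcluster.yaml can look like for a dev scale deployment. The spec field names and layout below are illustrative assumptions drawn from a typical GKE setup, not a definitive schema; the `$HDX_*` variables are the environment variables from the prerequisites and are substituted in a later step:

```yaml
apiVersion: hydrolix.io/v1
kind: HydrolixCluster
metadata:
  name: hdx
  namespace: $HDX_KUBERNETES_NAMESPACE
spec:
  admin_email: $HDX_ADMIN_EMAIL      # assumed field: address for the initial admin login email
  db_bucket_url: $HDX_DB_BUCKET_URL  # assumed field: storage bucket holding cluster data
  hydrolix_url: https://$HDX_HYDROLIX_URL  # assumed field: external hostname for the cluster
  ip_allowlist:
    - $HDX_IP_ALLOWLIST              # CIDR range(s) permitted to reach the cluster
  scale_profile: dev
```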
The above configuration will deploy, among other things, a default internal Postgres instance that is not highly available. If you want to run a more resilient version, read our Deploy Production Postgres guide.
Use the following command to replace the environment variables above with their values:
Don't forget to add your IP address to the allowlist. You can get your IP address by running curl -s ifconfig.me.
Manually Edit Configuration Files
You can also edit the hydrolixcluster.yaml to tune each deployment to your resource requirements.
Next, use the Kubernetes command line tool (kubectl) to apply the generated configuration to your Kubernetes cluster:
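For example:

```shell
# Apply the cluster configuration; the operator deployed earlier
# reconciles this resource into a running Hydrolix cluster.
kubectl apply -f hydrolixcluster.yaml
```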
Create Your DNS Record
Next, create a DNS record so you can access your cluster. Run the following command to retrieve the traefik record:
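For example, assuming the load balancer service is named traefik and lives in your deployment namespace:

```shell
# Show the traefik load balancer service; the EXTERNAL-IP column
# holds the address to use for your DNS record.
kubectl get service traefik -n "$HDX_KUBERNETES_NAMESPACE"
```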
You should see output similar to the following:
If the EXTERNAL-IP column instead shows <pending>, try restarting the operator:
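One way to do this, assuming the operator runs as a Deployment named operator in the same namespace:

```shell
# Restart the operator so it retries provisioning the load balancer
kubectl rollout restart deployment operator -n "$HDX_KUBERNETES_NAMESPACE"
```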
Check Deployment Status
You can now check the status of your deployment. Run the following kubectl command to see the status of all pods in your cluster:
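For example:

```shell
# List all pods in the Hydrolix namespace along with their status
kubectl get pods -n "$HDX_KUBERNETES_NAMESPACE"
```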
You should see output similar to the following:
You can also check your cluster status in the Google Cloud console.
Enable IP Access and TLS
Configure IP Access control and a TLS certificate. You can find instructions in Secure a Kubernetes Cluster.
Login
You should have received an email that will now allow you to set a password and log in. If you do not receive this email, or have trouble logging in, try the following:
- Verify that the email address in your hydrolixcluster.yaml file is correct and that you can receive mail sent to it.
- Try the "Forgot my password" option on the login page.
- If those two steps fail, contact us at support@hydrolix.io and we'll happily assist you.
Once you are able to log in to your Hydrolix cluster, setup is complete, and you are ready to store and query data. Proceed to the next step only if you want to query your data using the Hydrolix Connector for Apache Spark.
(Hydrolix Connector for Apache Spark only) Add a Credential to the Storage Bucket
To query the Hydrolix Cluster using the Hydrolix Connector for Apache Spark, configure a credential for your storage bucket. The following steps will walk you through generating a new credential and updating your storage bucket to use the credential.
Step 1: Create a credential
This step is best accomplished in the UI. Download the credentials.json file from Google containing your keys. If you need to create a new credential, or you're not sure where to find this file, see Google's Create credentials for a service account instructions.
Within the Hydrolix cluster UI, select Add new -> Credential. Fill out the ensuing form with the following:
- Supply a name and description for your credential
- Select gcp_service_account_keys for Cloud Provider Type
- Upload your Google credentials file
- Review the fields filled in from the supplied credentials file then select Create credential

New credential example input:
You can review your new credential by navigating to Security -> Credentials, then selecting your credential by name. You can also do this using the API through the List Credentials endpoint. You will need your credential ID for the next step.
Step 2: Attach the Credential to the Storage Bucket
In the next steps, you will use the update storage endpoint to attach your newly created credential to the storage bucket.
Set settings.credential_id to the ID of the credential you created in the previous step. This is the Credential ID in the UI or uuid in the API response to List Credentials.
Credential ID in the UI

Credential ID in the API Response

Append ?force_operation=true to the URL.
The following is an example cURL request attaching a credential to the default Google storage bucket:
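A hedged sketch of what such a request can look like. The API path, HTTP method, and all IDs and tokens below are placeholders and assumptions; confirm the exact endpoint shape against your cluster's API documentation:

```shell
# Attach the credential to the default storage bucket, forcing the operation.
# ORG_ID, STORAGE_ID, CREDENTIAL_ID, and HDX_API_TOKEN are placeholders.
curl -X PATCH \
  "https://$HDX_HYDROLIX_URL/config/v1/orgs/$ORG_ID/storages/$STORAGE_ID/?force_operation=true" \
  -H "Authorization: Bearer $HDX_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"settings": {"credential_id": "'"$CREDENTIAL_ID"'"}}'
```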
Once you've completed these steps, your cluster can receive queries from the Hydrolix Connector for Apache Spark.