Kibana Integration via Quesma
Query your data in Hydrolix using a gateway layer like Quesma. This setup features a Kibana front end for dashboarding and ElasticSearch as a secondary datastore.
What is Quesma?
Quesma is a lightweight database proxy that enables interoperability between a front end such as Kibana and multiple back end datastores such as Hydrolix and ElasticSearch. It provides this interoperability by acting as a SQL translation layer that entirely decouples your dashboarding and ingestion layers from your datastore(s). Compelling use cases for Quesma include:
- Using it as a translation layer for queries during a datastore platform migration, which can remove the need to alter application code or refactor SQL queries to maintain datastore compatibility.
- Using Kibana with Hydrolix instead of ElasticSearch.
- Querying multiple back end datastores easily and simultaneously.
Installation Guide for Existing Environments
If you have an existing system into which you want to incorporate Quesma and Hydrolix, Quesma's installation guide describes the relevant scenarios and their corresponding instructions.
Installation Guide from Scratch
In the following tutorial, we will be standing up a Kibana front end, a Quesma query and ingest routing and translation layer, and an ElasticSearch back end datastore. This setup will be integrated with your existing Hydrolix cluster and can be used to simulate a multi-datastore architecture. Ingest will be routed to ElasticSearch, while queries are routed to both Hydrolix and ElasticSearch.
Before You Begin
You will need:
- A running Hydrolix deployment. Follow the instructions for your preferred cloud vendor if you have yet to deploy Hydrolix. From your running Hydrolix cluster, you will need the following information:
Item | Description | Example values |
---|---|---|
Hydrolix URL + ClickHouse port | The URL of your Hydrolix cluster appended with the ClickHouse Native protocol SSL/TLS port. | https://{myhostname}.hydrolix.live:9440 |
Project name | The project name in Hydrolix, which corresponds to the database name configured for Quesma. It is the logical namespace for your tables. Instructions for creating a project can be followed here. | project_name |
Table name(s) | The names of the tables you want to expose within the Kibana interface via the Quesma configuration. Instructions for creating a table can be followed here. | table_name0, table_name1 |
Credentials | The username and password for your Hydrolix cluster account. | username: [email protected] password: correcthorsebatterystaple |
- Git. You can install the git CLI or GitHub Desktop.
- Docker v20.10 or higher. You can install Docker Desktop here.
- A Quesma license key compatible with the back end connector of type `hydrolix`. To obtain one, please contact Quesma support ([email protected]).
Configure the Containers With Docker Compose
The following steps are similar to Quesma's quick start demo setup for deploying Kibana and ElasticSearch. You will be using the Docker Compose file from those directions but omitting the ClickHouse server and replacing it with your running Hydrolix cluster.
We will also follow Quesma's Hydrolix-specific instructions to configure Quesma to route queries to your Hydrolix cluster in lieu of the ClickHouse server.
Ingest Currently Not Supported
Quesma currently supports routing queries to Hydrolix. Ingest to Hydrolix via Quesma is not supported.
Open a command-line terminal and execute the following:
- Clone Quesma's repository with `git clone https://github.com/QuesmaOrg/quesma.git`.
- Navigate to the working directory containing the Docker Compose file with `cd quesma/examples/kibana-sample-data`.
- Edit the `docker-compose.yml` in this directory and remove or comment out the `clickhouse` container as the following example shows:
services:
{...}
#clickhouse:
# user: 'default', no password
# image: clickhouse/clickhouse-server:23.12.2.59-alpine
# ports:
# - "8123:8123"
# - "9000:9000"
# healthcheck:
# test: wget --no-verbose --tries=1 --spider http://clickhouse:8123/ping || exit 1
# interval: 1s
# timeout: 1s
# start_period: 1m
Be sure to save your changes!
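One related pitfall: if any remaining service in the file declares a `depends_on` entry for the removed `clickhouse` container, Docker Compose will refuse to start because the dependency no longer resolves. The sketch below shows the pattern to look for; the service names and healthcheck conditions here are illustrative, so check them against your actual `docker-compose.yml`:

```yaml
services:
  quesma:
    # ...existing settings unchanged...
    depends_on:
      # Remove or comment out any dependency on the deleted container:
      # clickhouse:
      #   condition: service_healthy
      elasticsearch:
        condition: service_healthy
```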
We're not yet ready to execute this Docker Compose file. However, with these steps completed, executing the file would locally deploy:
- A Quesma container (which is not yet configured to communicate with your Hydrolix cluster)
- An ElasticSearch datastore and query engine
- A Kibana data visualization interface
- Three sample data sets that load data into ElasticSearch via two containers (`log-generator` and `kibana-sidecar`)
Configure Quesma
In this section, we will configure Quesma to route queries to your running Hydrolix cluster and we will remove any references to the ClickHouse cluster. You can read more about configuring Quesma in their Configuration Primer.
Back in the command line, navigate into the directory containing the Quesma config with `cd quesma/config`. Apply the following changes to `local-dev.yaml`:
- Add your license key to the top with `licenseKey: {your-quesma-license-key}`.
- Add the following to `backendConnectors`:
- name: my-hydrolix-instance
type: hydrolix
config:
url: https://{hydrolix_host}:9440
user: {username}
password: {password}
database: {hydrolix_project_name}
Fill in your Hydrolix host, username, password, and database name. This provides connection information to Quesma so it can communicate with your running Hydrolix cluster. The name you specify for the Hydrolix backend connector is strictly for referencing it within this configuration file. It does not have to be the name of your running Hydrolix cluster.
Determining your database name
The project name in Hydrolix corresponds to the database name for Quesma.
- Remove or comment out ClickHouse from `backendConnectors`:
backendConnectors:
{...}
#- name: my-clickhouse-data-source
# type: clickhouse-os
# config:
# url: "clickhouse://clickhouse:9000"
# adminUrl: "http://localhost:8123/play"
- In the `pipelines` section, replace mentions of `my-clickhouse-data-source` with `my-hydrolix-instance`. This configuration change ensures that Quesma will route queries to both the Hydrolix and ElasticSearch back ends. For consistency, you can also rename the pipelines:
`my-pipeline-elasticsearch-query-clickhouse` -> `my-pipeline-elasticsearch-query-hydrolix`
`my-pipeline-elasticsearch-ingest-to-clickhouse` -> `my-pipeline-elasticsearch-ingest-to-elastic`
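After these renames, the `pipelines` section could look roughly like the sketch below. The frontend connector names and exact field layout are assumptions; mirror the structure already present in your `local-dev.yaml` rather than copying this verbatim:

```yaml
pipelines:
  - name: my-pipeline-elasticsearch-query-hydrolix
    frontendConnectors: [ elastic-query ]      # keep whatever name local-dev.yaml uses
    processors: [ my-query-processor ]
    backendConnectors: [ my-minimal-elasticsearch, my-hydrolix-instance ]
  - name: my-pipeline-elasticsearch-ingest-to-elastic
    frontendConnectors: [ elastic-ingest ]     # keep whatever name local-dev.yaml uses
    processors: [ my-ingest-processor ]
    backendConnectors: [ my-minimal-elasticsearch ]
```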
- Replace all other instances of `my-clickhouse-data-source` with `my-minimal-elasticsearch`. There should be six altogether, configured for the following indices:
kibana_sample_data_ecommerce
kibana_sample_data_flights
logs-generic-default
These indices are specified for the following processors:
my-query-processor
my-ingest-processor
We are modifying this configuration so Quesma knows to ingest data into and query those indices solely within ElasticSearch.
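Put together, the sample-data portion of the processors configuration would look something like the following sketch. The ingest processor's `type` value is an assumption here; keep whatever value is already present in `local-dev.yaml`:

```yaml
processors:
  - name: my-query-processor
    type: quesma-v1-processor-query
    config:
      indexes:
        kibana_sample_data_ecommerce:
          target: [ my-minimal-elasticsearch ]
        kibana_sample_data_flights:
          target: [ my-minimal-elasticsearch ]
        logs-generic-default:
          target: [ my-minimal-elasticsearch ]
  - name: my-ingest-processor
    type: quesma-v1-processor-ingest   # assumed type name; keep the existing value
    config:
      indexes:
        kibana_sample_data_ecommerce:
          target: [ my-minimal-elasticsearch ]
        kibana_sample_data_flights:
          target: [ my-minimal-elasticsearch ]
        logs-generic-default:
          target: [ my-minimal-elasticsearch ]
```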
- Add your Hydrolix table(s) to `processors.my-query-processor.config.indexes` with target `my-hydrolix-instance`. If you were to use the tables `table_name0` and `table_name1` from above, you would configure them like so:
processors:
- name: my-query-processor
type: quesma-v1-processor-query
config:
indexes:
table_name0:
target: [ my-hydrolix-instance ]
table_name1:
target: [ my-hydrolix-instance ]
This tells Quesma to query the data stored in those tables in your Hydrolix cluster.
Save your changes.
Deploy Kibana, ElasticSearch, and Quesma
Run the following command to deploy your containers locally:
docker compose -f docker-compose.yml up
Verify Functionality
Kibana
The Kibana UI should be available at localhost:5601.
Quesma
The Quesma admin panel should be accessible at localhost:9999.
Note that the admin panel does not require authentication by default. You can disable it in the `docker-compose.yml` by commenting out or removing the port forwarding for port `9999`:
services:
quesma:
ports:
#- "9999:9999"
- "9200:8080"
You should also be able to view your tables, including Hydrolix tables, at localhost:9999/schemas.
The Quesma container logs to `stderr`. If you run into problems with your Quesma container and want to search its logs, redirect `stderr` into the `stdout` stream with `2>&1`. For example:
docker logs {quesma_container_id} 2>&1 | grep "ERROR"
Otherwise, grep will match nothing, because the container's logs are written to `stderr` rather than `stdout`.
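The need for `2>&1` is easy to demonstrate locally with any command that writes to stderr; this sketch uses `ls` on a path that does not exist, so no Quesma container is required:

```shell
# Without 2>&1 the error message bypasses the pipe: ls writes it to stderr,
# so grep's stdin is empty and the match count printed is 0 (the error still
# appears on the terminal directly via stderr).
ls /nonexistent-path-demo | grep -c "No such" || true

# With 2>&1, stderr is merged into stdout before the pipe, so grep sees the
# "No such file or directory" message and the match count printed is 1.
ls /nonexistent-path-demo 2>&1 | grep -c "No such"
```

The same redirection applies unchanged to docker logs, which forwards the container's stderr stream to your shell's stderr.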
Create a Data View in Kibana for your Hydrolix Data
In order to view your Hydrolix tables in Kibana, you need to create a Data View for each table (index). Follow these directions to create one.