Now let’s query some data. First, the good news: a huge volume of data (petabytes, for example) can be queried just as fast for a time range from yesterday as for one from five years ago. Cloud storage and the way the data is indexed make this high throughput possible.
And now…for more good news! There are three great ways to query data using Hydrolix:
- Run a SQL-based query from the UI
- Query using the query API from the documentation tool or a language of your choice
- Use a third-party tool like Grafana through the ClickHouse connector
The first two approaches are covered here; there is already an excellent document on the third.
The Hydrolix UI includes a handy console for running queries, which use ClickHouse SQL. Click Query in the left navigation (see Figure 9) and then try running a query.
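For example, a simple aggregation over a recent time window might look like this in the console (the table name `sample.nginx_logs` and its `timestamp` column are hypothetical placeholders for your own schema):

```sql
-- Count requests per hour over the last day (hypothetical table and columns)
SELECT
    toStartOfHour(timestamp) AS hour,
    count() AS requests
FROM sample.nginx_logs
WHERE timestamp >= now() - INTERVAL 1 DAY
GROUP BY hour
ORDER BY hour
```

Because the query is bounded to a one-day window, only the relevant partitions of storage are scanned.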
There are several other examples provided in the sample_queries.sql file, which formats nicely within VS Code. As a best practice, we recommend not using SELECT *; consider replacing the asterisk with the timestamp and a few other fields, then clicking Run Query to test your new query. When querying petabytes of data, time filters in particular are going to be very important!
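Following that advice, a minimal sketch of a narrowed query looks like this (again, the table and column names are hypothetical, not from sample_queries.sql):

```sql
-- Instead of SELECT *: name only the columns you need and bound the time range
SELECT timestamp, status_code, request_path
FROM sample.nginx_logs
WHERE timestamp BETWEEN '2023-01-01 00:00:00' AND '2023-01-02 00:00:00'
LIMIT 100
```

Listing columns explicitly reduces the data read per row, and the time filter limits how much of the table is scanned at all.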
Queries to Hydrolix are ultimately REST calls, either secure (HTTPS) or not (HTTP). SQL can be sent via POST through the Hydrolix query API, with results returned in either CSV or JSON format. Examples of the API are provided in the example NGINX project in the sample_API_calls.http file (see Figure 10), which also renders nicely in VS Code, so you can POST directly using the http plug-in. The API lets you work with all sorts of Hydrolix objects: tables, ingest, security operations, and queries. The last example in that file illustrates a sample curl command.
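A query request in that .http file follows the VS Code REST Client format, roughly like the sketch below. The hostname, endpoint path, and header names here are illustrative assumptions, not verbatim Hydrolix API details; the exact requests are in sample_API_calls.http and the API documentation:

```http
### Run a SQL query and return JSON (illustrative sketch; check the API docs for exact paths/headers)
POST https://{{hostname}}/query/ HTTP/1.1
Authorization: Bearer {{access_token}}
Accept: application/json

SELECT timestamp, status_code FROM sample.nginx_logs LIMIT 10
```

Switching the `Accept` value (or the API's format parameter) is how you would request CSV instead of JSON.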
The http extension in VS Code is very convenient, in that you can set environment variables so you don’t have to repeat values like API keys when filling out requests. Sample common variables are provided at the top, so you can just fill them in. Some API calls require the project ID and table ID; others, like the query API, use the project and table names instead. The file is designed to help you understand how to obtain an auth token and try out the sample queries, to get a good feel for working with the API. We recommend you try it out and then go to our API documents to get even better! There are all sorts of API calls; table settings, for example, is a really useful table update you’ll need to tune your tables for read/write workloads.
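Obtaining an auth token generally means POSTing your credentials to a login endpoint and reusing the returned token in later requests. The endpoint path and field names below are assumptions for illustration; the exact call is in sample_API_calls.http and the API documentation:

```http
### Log in to obtain an auth token (illustrative sketch)
# @name login
POST https://{{hostname}}/config/v1/login/ HTTP/1.1
Content-Type: application/json

{
  "username": "{{username}}",
  "password": "{{password}}"
}
```

With the REST Client extension, a named request like this can be referenced by subsequent requests through the extension's request variables, so the token from the login response can feed the `Authorization` header of each query without copy-pasting.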