Hydrolix Spark Connector: Microsoft Fabric Deployment
Analyze your Hydrolix data using Apache Spark and Microsoft Fabric
Overview
Microsoft Fabric is an end-to-end data platform that unifies data engineering, data science, real-time analytics, and business intelligence under one SaaS offering. With the help of the Hydrolix Spark Connector, you can improve the query speed and reduce the storage costs of your Fabric-deployed Spark cluster by using Hydrolix as the backing store.
Requirements
Prerequisites
- Hydrolix Cluster: A running Hydrolix cluster version 4.22.1 or higher. Deployment instructions for your preferred cloud vendor (AWS, Google, Linode, or Microsoft) can be found here.
- Fabric Spark Cluster: A Microsoft Fabric account with a deployed Spark cluster.
Microsoft Requirements
Dependency | Description | Instructions |
---|---|---|
Resource Group | The primary purpose of setting up a new Resource Group is to keep related services logically organized together. | Create a resource group |
Capacity | Capacity refers to the resource limits and reservations assigned to nodes for managing workloads in Microsoft Fabric. | Create a new capacity |
(Optional) Key Vault | Azure Key Vault is a cloud service provided by Microsoft Azure that helps you securely store and manage sensitive information, such as cryptographic keys, passwords, certificates, and other secrets. Use one to securely store Hydrolix credentials. | Use Microsoft's Secrets documentation to create a Key Vault using your preferred method (Portal, CLI, etc.) |
Required User Permissions
Querying your Hydrolix bucket via Spark requires the same permissions as querying via your cluster. Specifically, a user needs the following permissions at the levels indicated:
Permission name | Level |
---|---|
view_org | Org |
view_hdxstorage | Org |
catalog_urls_table | Project or table |
view_table | Project or table |
view_view | Project or table |
view_transform | Project or table |
select_sql | Project or table |
If you will be querying multiple tables within the same Hydrolix project, it's easier to scope those permissions to the project level instead of granting the permissions for each table.
Setup Steps
Complete the following steps to configure Microsoft Fabric with the Hydrolix Spark Connector, at which point you can start querying your data using Spark.
Create a Summary Table of the Data You Will Be Querying
Create a summary table, including a transform, of the data you will be querying. This aggregates the queryable data, reducing query time. Instructions for creating a summary table via the Hydrolix UI and API are on the Summary Tables page. While this step can be skipped, it is highly recommended.
The general structure of summary transforms and their limitations are explained in this section on creating summary transforms.
Create and configure a workspace
A workspace in MS Fabric is a collaborative environment where users can create, manage, and share data, reports, dashboards, and other analytics assets. The following steps walk you through the process of creating and configuring a workspace to query Hydrolix data with the Hydrolix Spark Connector using Python notebooks.
To create a new workspace, follow these steps.
- Open the Capacity created in a previous step. Verify that it's running.
- Navigate to Microsoft Fabric.
- In the left-hand panel, select Workspaces > + New workspace.
- Fill in a name for the new workspace.
- Under License mode, select Fabric Capacity and choose the created capacity from the available options.
- Save the workspace.
- Open your created workspace.
- Select Workspace settings > Data Engineering/Science > Spark settings > Environment.
- Enable Set default environment then select Workspace default > New environment.

- Provide a custom name for the new environment. The new environment settings will open immediately after creation.
- Navigate to Custom Library and upload the Spark Connector JAR file.

- Navigate to Spark properties and configure the connection to your Hydrolix cluster using the following properties.
Property | Value | Description |
---|---|---|
spark.sql.catalog.hydrolix | io.hydrolix.connectors.spark.SparkTableCatalog | The fully qualified name of the class to instantiate when you ask for the hydrolix catalog |
spark.sql.extensions | io.hydrolix.connectors.spark.SummaryUdfExtension | A comma-separated list of fully-qualified SQL extension classes -- using summary tables requires including SummaryUdfExtension in this set |
spark.sql.catalog.hydrolix.cluster_url | https://{myhost}.hydrolix.live | Hydrolix cluster URL. If this field is set, jdbc_url and api_url fields are unnecessary and will be ignored |
spark.sql.catalog.hydrolix.jdbc_protocol | https | Defaults to https if not provided. Used with the cluster_url and jdbc_port configs to derive the JDBC URL |
spark.sql.catalog.hydrolix.jdbc_port | 8088 | Defaults to 8088 if not provided. Used with the cluster_url and jdbc_protocol configs to derive the JDBC URL |
spark.sql.catalog.hydrolix.jdbc_url | jdbc:ch://{host}:{port}/{database}?ssl=true | JDBC URL of the Hydrolix query head. Note that the ClickHouse JDBC driver requires a valid database name in the URL, but the connector will read any database the user has access to. Ignored if cluster_url is provided. |
spark.sql.catalog.hydrolix.api_url | https://{myhost}.hydrolix.live/config/v1/ | URL of the Hydrolix config API, usually must end with /config/v1/ including the trailing slash. Ignored if cluster_url is provided. |
spark.sql.catalog.hydrolix.username | {hdx-username} | Username to login to the Hydrolix cluster |
spark.sql.catalog.hydrolix.password | {hdx-password} | Password to login to the Hydrolix cluster |
spark.sql.catalog.hydrolix.hdx_partitions_per_task | 1 | Optional. Defines how many HDX partitions will be read by each Spark partition. The default value is 1. For example, if this setting is set to 2 and partition planning returns 40 HDX partitions, the query launches 20 Spark tasks, each processing 2 HDX partitions. Left unset, 40 Spark tasks would be launched, each processing 1 HDX partition. |
spark.driver.extraJavaOptions | -Dio.netty.tryReflectionSetAccessible=true | Optional. Set this option if you want to enable MS Fabric Native execution engine for your Spark queries. |
spark.executor.extraJavaOptions | -Dio.netty.tryReflectionSetAccessible=true | Optional. Set this option if you want to enable MS Fabric Native execution engine for your Spark queries. |
Spark configuration is dynamically reloaded for every query.
The Spark Connector reloads its configuration for every query. To change the configuration inside spark-shell or a Jupyter notebook, call spark.conf.set("some_setting", "value").
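Because the configuration is reloaded on every query, you can also adjust connector settings from a running notebook session rather than republishing the environment. The following is a minimal sketch; it assumes the hydrolix catalog is already configured through the environment's Spark properties, and the property value and table name shown are placeholders.

```python
# Minimal sketch: override a connector property at runtime from a PySpark notebook.
# Assumes the "hydrolix" catalog was already configured via the environment's
# Spark properties; the value 2 is only an illustration.
spark.conf.set("spark.sql.catalog.hydrolix.hdx_partitions_per_task", "2")

# The change takes effect on the next query, because the connector re-reads
# its configuration for every query. Table name below is a placeholder.
spark.sql("SELECT count(*) FROM hydrolix.my_project.my_table").show()
```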
Verify user permissions.
Make sure the user supplied in the Spark properties has the required permissions for the underlying Hydrolix cluster.
- Save and Publish your Spark configurations. Wait until the publishing process is complete.
- Return to your workspace, select the newly created environment as your default environment, and select Save.
- Add a new item by selecting + New item > Notebook in your workspace.
- Run a simple query from the Notebook to test the connection.

query = """SELECT app, avg(num_partitions) FROM hydrolix.hydro.logs WHERE app in ('query-peer', 'query-head', 'intake-head', 'intake-peer')
and timestamp >= '2025-01-17 12:00:00' AND timestamp < '2025-01-18 12:00:00' GROUP BY app""""
df = spark.sql(query)
df.show(10, truncate=False)
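If you prefer the DataFrame API over a SQL string, the same check can be written against the catalog table directly. This is only a sketch: it reuses the hydro.logs table and columns from the query above, so substitute your own project, table, and column names.

```python
from pyspark.sql import functions as F

# Read the table through the hydrolix catalog and aggregate with the DataFrame
# API. Table and column names match the SQL example above; replace them with
# your own project, table, and columns.
df = (
    spark.table("hydrolix.hydro.logs")
    .where(F.col("app").isin("query-peer", "query-head", "intake-head", "intake-peer"))
    .where((F.col("timestamp") >= "2025-01-17 12:00:00") & (F.col("timestamp") < "2025-01-18 12:00:00"))
    .groupBy("app")
    .agg(F.avg("num_partitions").alias("avg_num_partitions"))
)
df.show(10, truncate=False)
```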
Querying
After you have configured your cluster, you can use the Hydrolix Spark Connector in a Spark notebook.
To begin using the connector with a Spark notebook, run one of the following two commands, depending on your use case:
- Python or Scala fragment: sql("use hydrolix")
- SQL fragment: use hydrolix;
Alternatively, you can prefix each table you want to query from your Hydrolix back end with hydrolix. (the catalog name followed by a dot).
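For example, in a PySpark notebook the two approaches look like the following sketch. The project and table names are placeholders; replace them with your own.

```python
# Option 1: switch the session to the "hydrolix" catalog once, then reference
# tables as project.table without a prefix. Names below are placeholders.
spark.sql("use hydrolix")
spark.sql("SELECT count(*) FROM my_project.my_table").show()

# Option 2: keep the current catalog and fully qualify each table with the
# "hydrolix." prefix instead.
spark.sql("SELECT count(*) FROM hydrolix.my_project.my_table").show()
```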
Summary table query instructions
Summary table queries have unique requirements.
- Wrap summary aliases in SELECT statements with hdxAgg(). For example, you might run the following to query both an aliased column (summary_column_alias) and a non-summary column (non_summary_column):
SELECT hdxAgg('summary_column_alias'), non_summary_column
FROM hydrolix.project.summary_table
GROUP BY {time_field_alias}
You can read more about creating and querying summary tables in the Summary Tables documentation.
The hdxAgg() function resolves name conflicts between summary table alias columns and Spark function names. For example, count may be a column name in a summary table, but count() is also a pre-existing Spark function. Using the hdxAgg() function disambiguates the two.
The hdxAgg() function also resolves name conflicts between multiple summary tables. For example, one summary table might have count(timestamp) AS total in its definition while another might have count(distinct username) AS total in its definition. The hdxAgg() function disambiguates total() and ensures it works when querying either table.
- Load the Hydrolix datasource. Doing so loads the required hdxAgg() function (a combined notebook sketch follows this list). The options for loading the Hydrolix datasource are:
  - Run use hydrolix
  - Run any query against a non-summary table
  - Run io.hydrolix.connectors.spark.HdxUdfRegistry.enableSummaryTables(spark)
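Putting the two requirements together, a notebook session might look like the following sketch. The project, table, and alias names (my_project.my_summary_table, total, minute) are placeholders; substitute the aliases defined in your own summary transform.

```python
# Load the Hydrolix datasource first so the hdxAgg() function is available.
spark.sql("use hydrolix")

# Query the summary table, wrapping each summary alias in hdxAgg(). "total"
# and "minute" are placeholder aliases from a hypothetical summary transform;
# hdxAgg('total') also avoids any clash with built-in functions such as count().
spark.sql("""
    SELECT hdxAgg('total'), minute
    FROM my_project.my_summary_table
    GROUP BY minute
""").show(truncate=False)
```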
(v1.0.0 only) Summary table query instructions
The v1.0.0 release of the Hydrolix Spark Connector has unique instructions for querying summary tables.
- To enable querying the summary table you created during the Create a Summary Table step, run the following line in a Spark shell or in a PySpark session:
io.hydrolix.connectors.spark.HdxUdfRegistry.registerSummaryTable(spark, "{project.summary_table_name}")
- Reference any summary table column aliases using the syntax summary_column_alias(). For example, suppose there is a summary table called my_project.my_summary_table which includes the following in its definition:
SELECT ...
sum(salary) AS totalSalary,
department,
...
This table can then be queried with:
SELECT totalSalary(), department FROM hydrolix.my_project.my_summary_table GROUP BY department
Troubleshooting
Authentication Error
If you see "access denied" errors from the Hydrolix database when you are making queries, ensure the cluster username and password are correct, and make sure that user has query permissions.
User Permissions
Partitions in a table might be distributed across different storage buckets.
If the user set in your Spark Connector configuration doesn't have the required permissions for querying all the storage buckets via the ClickHouse JDBC Driver, the cluster will be unable to sign partitions from a storage bucket for the table. This will result in the query failing and returning an error response.
Error querying summary tables
If a user sees this error:
org.apache.spark.sql.AnalysisException: [UNRESOLVED_ROUTINE] Cannot resolve function `hdxAgg` on search path [`system`.`builtin`, `system`.`session`, `spark_catalog`.`default`]
The fix is to load the Hydrolix datasource. You can do so by doing any one of the following:
- Run io.hydrolix.connectors.spark.HdxUdfRegistry.enableSummaryTables(spark). This performs a global setup step which creates the hdxAgg function.
- Run the use hydrolix fragment. This directly loads the Hydrolix datasource.
- Run any query against a non-summary table. This directly loads the Hydrolix datasource.
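For example, the simplest fix from a PySpark notebook is usually to load the datasource and then retry the failing statement. This is a sketch; the table and alias names are the placeholders used in the example earlier in this guide.

```python
# Loading the Hydrolix datasource registers the hdxAgg function...
spark.sql("use hydrolix")

# ...after which the summary-table query that previously failed should resolve.
# Table and alias names below are placeholders.
spark.sql(
    "SELECT hdxAgg('summary_column_alias'), non_summary_column "
    "FROM hydrolix.project.summary_table GROUP BY non_summary_column"
).show()
```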
Limitations
Read-only
The Hydrolix Spark Connector is read-only. ALTER and INSERT statements aren't supported.