Azure Event Hubs
Configure a Hydrolix cluster to read messages from Azure Event Hubs using a Kafka data source
Overview
The Kafka client in a Hydrolix cluster can read messages from Azure Event Hubs and feed the data into a table.
An Azure event hub is loosely equivalent to a Kafka topic. The Kafka data source receives payloads, executes the Hydrolix transform, and inserts the data into the specified table.
This guide shows how to:
- create a credential for connecting to an Azure event hub, and collect the credential ID
- collect the other values specific to the Azure Event Hubs service for use when following the general Kafka configuration page
Before you begin
To complete the Hydrolix Kafka client configuration you need:
- the hostname and port (usually 9093) for your Azure event hub
- the Hydrolix credential name created to connect to that event hub
Prerequisites
- A project and table must exist to receive the data. See Projects & Tables
- A transform must exist. See Write Transforms
Requirements
- Access to Azure Event Hubs (choose one):
- create and configure a new event hub
- retrieve Azure Shared Access Signature (SAS) credentials from an existing event hub
- Permission to add a kafka_sasl_plain credential to your cluster
- Permission to create a new Kafka data source
Process outline
To make use of Azure Event Hubs with your Hydrolix cluster:
- Create an event hub in the Azure Event Hubs system.
- Collect the hostname for the event hub.
- Collect a Shared Access Signature (SAS) connection string for the Hydrolix cluster from the event hub definition.
- Create the SASL credential in the Hydrolix cluster.
- Create a Kafka source that refers to the credential ID. See Kafka.
- Begin publishing events into your event hub.
This document doesn't cover scaling or partition sizing for an event hub. See Scaling with Event Hubs.
Create an event hub
See Quickstart: Create an event hub using Azure portal.
An Azure event hub is equivalent to a Kafka topic.
Collect the hostname
Each Azure event hub has a unique hostname.
- From the Azure portal, navigate to Home > Event Hubs and select your event hub.
- From the left nav, select Overview.
- Locate the Host name.
The Azure Event Hubs service listens on TCP port 9093 on that hostname.
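Before wiring up the cluster, you can confirm that the hostname resolves and the Kafka endpoint is reachable. This is a minimal sketch in Python using only the standard library; the hostname is a hypothetical example.

import socket

# Hypothetical hostname; substitute the Host name collected from the portal.
HOST = "test-event5.servicebus.windows.net"
PORT = 9093  # the Kafka endpoint for Azure Event Hubs

try:
    # A successful TCP connection means the endpoint is reachable from here.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"{HOST}:{PORT} is reachable")
except OSError as exc:
    print(f"could not reach {HOST}:{PORT}: {exc}")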

Collect a Shared Access Signature
The Azure Shared Access Signature (SAS) system provides control similar to service accounts on other platforms. This is a flexible permissions system for enforcing policies associated with event hubs.
An Azure event hub can have many SAS policies, each with different permissions. For example, one policy may restrict permissions to publishing events. Another may permit only listeners, like the Hydrolix Kafka data source.
See also Authenticate access to Event Hubs resources using shared access signatures (SAS).
The example on this page uses the default SAS policy.
- From the Azure portal, navigate to Home > Event Hubs and select your event hub.
- From the left nav, select Settings > Shared access policies.
- From the list of SAS Policy entries, select a policy. The default is RootManageSharedAccessKey.
- From the detail panel displayed on the right, collect the Primary connection string for use in the Hydrolix SASL credential.

The primary connection string is used in the password field when creating a Hydrolix SASL credential.
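A primary connection string generally takes the following shape. The values here are hypothetical placeholders; the key is redacted:

Endpoint=sb://test-event5.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<redacted-key>

Policies scoped to a single event hub, rather than the whole namespace, append an extra ;EntityPath=<event-hub-name> segment.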
Create a SASL credential
The Kafka ecosystem supports different authentication mechanisms. The client and server communicate the choice using Simple Authentication and Security Layer (SASL).
The Hydrolix cluster connects to Azure Event Hubs over TLS using the SASL/PLAIN mechanism.
Use the kafka_sasl_plain credential type with the following values:
- username is the literal string $ConnectionString, a magic value
- password must be the primary connection string from the desired SAS policy
Create a credential using either the UI or API.
New credential with UI

New credential with API
Create a new credential object using the Create credential endpoint.
The cloud must be null. A kafka_sasl_plain credential can be used with any SASL-capable server, so the cloud parameter is not necessary.
POST /config/v1/orgs/${HDX_ORGID}/credentials/
{
  "name": "test-event5",
  "type": "kafka_sasl_plain",
  "cloud": null,
  "description": "Azure Event Hubs - test-event5",
  "details": {
    "username": "$ConnectionString",
    "password": "TextCopiedFromSASPolicyPrimaryConnectionString"
  }
}
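For reference, here is a sketch of the same request issued with Python's requests package. The cluster hostname, org ID, and bearer token are placeholder assumptions, as is the uuid field read from the response (it follows the usual Hydrolix config API convention for object IDs).

import requests

# Placeholder values; substitute your cluster hostname, org UUID, and API token.
HDX_HOST = "https://my-cluster.example.com"
HDX_ORGID = "00000000-0000-0000-0000-000000000000"
API_TOKEN = "my-bearer-token"

credential = {
    "name": "test-event5",
    "type": "kafka_sasl_plain",
    "cloud": None,  # kafka_sasl_plain works with any SASL-capable server
    "description": "Azure Event Hubs - test-event5",
    "details": {
        "username": "$ConnectionString",  # the literal magic value
        "password": "TextCopiedFromSASPolicyPrimaryConnectionString",
    },
}

resp = requests.post(
    f"{HDX_HOST}/config/v1/orgs/{HDX_ORGID}/credentials/",
    json=credential,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
# The returned ID becomes credential_id in the Kafka data source settings.
print(resp.json().get("uuid"))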
Create a Kafka data source
Use the API to create the Kafka data source for use with Azure Event Hubs.
As of v5.6.x, prefer the API
The Hydrolix UI lacks support for the credential_id field used to communicate the kafka_sasl_plain credential UUID. The API, however, supports the field on all Kafka data source endpoints. If a data source already exists, use PATCH Kafka data source.
If a Kafka data source specifies an Azure event hub but lacks a credential, the logs will show constant authentication errors.
The crucial configuration data for supporting Azure Event Hubs is the settings key.
{
  ...
  "settings": {
    "credential_id": "466bcf0a-b0ac-413c-a77d-e2b6a01deeec",
    "topics": [
      "test-event5"
    ],
    "bootstrap_servers": [
      "test-event5.servicebus.windows.net:9093"
    ]
  },
  ...
}
This example shows full sample values.
{
  "name": "mabrown-catch-all-test-event5",
  "type": "pull",
  "subtype": "kafka",
  "table": "mabrown_2025.mabrown_catchall",
  "transform": "debug",
  "settings": {
    "credential_id": "466bcf0a-b0ac-413c-a77d-e2b6a01deeec",
    "topics": [
      "test-event5"
    ],
    "bootstrap_servers": [
      "test-event5.servicebus.windows.net:9093"
    ]
  },
  "pool_name": "az-kafka-pool",
  "k8s_deployment": {
    "cpu": 1,
    "replicas": 1,
    "service": "kafka-peer"
  }
}
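The definition above can be submitted the same way as the credential. In this sketch the endpoint path follows the general layout of the other Hydrolix config endpoints, and the project and table UUIDs are placeholder assumptions; check the Create Kafka source reference for the exact path on your version.

import requests

# Placeholder values; substitute your cluster hostname, UUIDs, and API token.
HDX_HOST = "https://my-cluster.example.com"
HDX_ORGID = "00000000-0000-0000-0000-000000000000"
PROJECT_ID = "11111111-1111-1111-1111-111111111111"
TABLE_ID = "22222222-2222-2222-2222-222222222222"
API_TOKEN = "my-bearer-token"

source = {
    "name": "mabrown-catch-all-test-event5",
    "type": "pull",
    "subtype": "kafka",
    "table": "mabrown_2025.mabrown_catchall",
    "transform": "debug",
    "settings": {
        "credential_id": "466bcf0a-b0ac-413c-a77d-e2b6a01deeec",
        "topics": ["test-event5"],
        "bootstrap_servers": ["test-event5.servicebus.windows.net:9093"],
    },
    "pool_name": "az-kafka-pool",
    "k8s_deployment": {"cpu": 1, "replicas": 1, "service": "kafka-peer"},
}

# Endpoint path assumed from the general Hydrolix config API layout.
resp = requests.post(
    f"{HDX_HOST}/config/v1/orgs/{HDX_ORGID}/projects/{PROJECT_ID}"
    f"/tables/{TABLE_ID}/sources/kafka/",
    json=source,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())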
Follow the general instructions for using Kafka in a Hydrolix cluster with the hostname from the Azure event hub and the name of the kafka_sasl_plain credential.
Verification
Use the Azure Event Hubs tools to send messages to your event hub, or use any Kafka client as shown in the sketch after these steps.
- From the Azure portal, navigate to Home > Event Hubs and select your event hub.
- From the left nav bar, select Data Explorer.
- From the Event Hub menu, select an event hub.
- From the detail panel displayed on the right, enter a payload and click Send.
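You can also publish test events from outside the portal with any Kafka-compatible client, using the same SASL values as the Hydrolix credential. This sketch assumes the confluent-kafka Python package and a SAS policy that permits sending; the hostname and strings are the sample values used throughout this page.

from confluent_kafka import Producer

# Sample values from this page; substitute your own hostname and SAS string.
producer = Producer({
    "bootstrap.servers": "test-event5.servicebus.windows.net:9093",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "$ConnectionString",  # same magic value as the credential
    "sasl.password": "TextCopiedFromSASPolicyPrimaryConnectionString",
})

# The Kafka topic name is the event hub name.
producer.produce("test-event5", value=b'{"timestamp": "2025-01-01T00:00:00Z"}')
producer.flush()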


Additional information
Azure Event Hubs offers many controls for throughput, processing, and retention of data. Each event hub tracks the state of the events retained in the system and the state of clients connecting to the event hub.
If a Hydrolix Kafka data source is deleted and recreated, the Azure event hub may treat the new client instance as a separate client application. This can cause older events still present in the event hub to be returned to the Hydrolix Kafka client, and inserted again into the specified Hydrolix table.
Getting more help
If you need more help using Azure Event Hubs with Hydrolix, or you'd just like to learn more about this integration, please contact Hydrolix support.