Qwilt Logs
Ingest Qwilt HTTP transaction logs into Hydrolix using Log Shipping
Introduction
Qwilt operates a software platform deployed within ISP networks, enabling local caching and delivery of HTTP content. It generates detailed HTTP transaction logs which include information such as timestamps, cache metrics, request/response data, and traffic metadata. This data can be exported via Qwilt’s Log Shipping feature. Hydrolix ingests these logs from Qwilt Cloud storage (for example AWS S3 or GCS) using the pull‑based batch autoingest feature, enabling efficient ingestion, query, and analysis within Hydrolix.
Before you begin
| Item | Description | Example value | How to obtain this information |
|---|---|---|---|
| Cluster hostname | The hostname of your Hydrolix cluster. | `https://$HDX_HOSTNAME` | The value of `hydrolix_url` in your `hydrolixcluster.yaml` file. |
| Table name | The destination table in the Hydrolix cluster for Qwilt HTTP transaction logs, specified in the format `project_name.table_name`. | `qwilt_project.http_transaction_logs` | See create a table if you need to create a Hydrolix table in which to store your Qwilt HTTP transaction logs. |
| SQS queue name | A Qwilt-managed notification queue which informs the Hydrolix cluster that there is new data to consume from a configured object store. | `sqs://qwilt-log-notification-queue` | See Configure Qwilt log delivery to S3. |
| Intermediate log bucket | An S3 bucket used as intermediate storage for Qwilt HTTP transaction logs. Not used directly, but it may be needed to construct the regex filter. | `s3://my-qwilt-logs/` | See Configure Qwilt log delivery to S3. |
| Regex filter | A regex pattern used to identify which subset of files to consume into the Hydrolix cluster. | `s3://my-qwilt-logs/path/to/data/.*\.json` or `^.*\.gz$` | See Configure Qwilt log delivery to S3. |
| Access key ID and secret access key | Keys for an AWS user able to access the Qwilt-managed notification queue and storage bucket. | Access key ID: `AKIAIOSFODNN7EXAMPLE`; secret access key: `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` | See Configure Qwilt log delivery to S3. |
Getting Started
Configure Qwilt log delivery to S3
Contact Qwilt at [email protected] to have your Qwilt HTTP transaction logs delivered to a Qwilt-managed S3 storage bucket. Qwilt support will:
- Create the intermediate S3 bucket for HTTP transaction logs
- Create the SQS notification queue
- Configure IAM permissions and provide you with an access key ID and secret access key which grant access to the S3 bucket and notification queue
- Provide you with a regex filter that matches the paths to the logs that will be exported to Hydrolix
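The regex filter Qwilt provides is matched against full object paths in the intermediate bucket. Before configuring the autoingest job, it can be worth sanity-checking the pattern against a few sample object keys. The sketch below uses the `^.*\.gz$` example pattern and placeholder key names:

```python
import re

# Example filter matching gzipped log objects; substitute the pattern
# Qwilt provides for your account.
pattern = re.compile(r"^.*\.gz$")

sample_keys = [
    "s3://my-qwilt-logs/path/to/data/2024-06-01.log.gz",  # should match
    "s3://my-qwilt-logs/path/to/data/manifest.json",      # should not match
]

for key in sample_keys:
    print(key, "->", bool(pattern.match(key)))
```

A pattern that is too broad can pull in non-log objects from the bucket; a pattern that is too narrow silently skips files, so testing it up front saves debugging later.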
Configure Hydrolix batch autoingest
The following steps create a batch autoingest job in Hydrolix. While running, the job polls the configured SQS notification queue. When a message appears in the queue, Hydrolix reads the files from the object store that match the configured regex and processes them into the specified Hydrolix table.
Create credential
Create a credential in Hydrolix. This credential will be attached to the batch autoingest job so that the Hydrolix cluster can access the Qwilt-managed notification queue and object store containing HTTP transaction logs.
- In your Hydrolix cluster UI, select + Add New > Credential.
- In the New credential dialog, enter the following information:
  - Name: Any unique display name for this credential, such as `qwilt_creds`
  - Description: Any description
  - Cloud Provider Type: `aws_access_keys`
  - Upload Credential JSON (optional): Don't upload a file.
  - Access Key Id: Enter the access key ID provided by Qwilt.
  - Secret Access Key: Enter the secret access key provided by Qwilt.
- Select Create credential.
Create a transform
Create a transform in Hydrolix. The transform is a schema which determines how Hydrolix maps incoming log data from Qwilt onto a table.
You should already have a table created, as described in Before you begin. The transform you create must be associated with an existing project and table.
See Publishing Your Transform to create and publish a transform using the Qwilt transform schema. The transform determines how your Qwilt log data will be mapped onto your Hydrolix table.
You have two options for publishing the transform:
- UI: Register a transform through the UI, which requires:
  - Project name
  - Table name
  - The contents of the `output_columns` property
- API: Use the Create transform API call, which requires:
  - Project ID
  - Table ID
  - The entire Qwilt transform JSON
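For orientation, a transform body has roughly the shape sketched below. This is an illustrative, heavily trimmed example, not the actual Qwilt transform schema: the column names, datatypes, and datetime format string are placeholders, and the published Qwilt transform linked above should be used verbatim.

```python
import json

# Hypothetical, trimmed-down transform body. The real Qwilt transform
# defines many more output_columns; names and formats here are placeholders.
transform = {
    "name": "qwilt_transform",
    "type": "json",
    "settings": {
        "output_columns": [
            # The primary datetime column Hydrolix partitions on.
            {
                "name": "timestamp",
                "datatype": {
                    "type": "datetime",
                    "primary": True,
                    "format": "2006-01-02T15:04:05Z",  # placeholder format
                },
            },
            {"name": "status_code", "datatype": {"type": "uint32"}},
            {"name": "cache_status", "datatype": {"type": "string"}},
        ]
    },
}

print(json.dumps(transform, indent=2))
```

The UI path needs only the `output_columns` array from a body like this, while the API path takes the whole JSON document.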
Configure autoingest job
You can configure the batch autoingest job using the Hydrolix Config API or the UI.
API
See batch autoingest for instructions on how to create an autoingest job using the Hydrolix Config API. You will need:
- Queue Name: The SQS queue name provided by Qwilt.
- Source Region: The AWS region in which the notification queue and bucket are located. Provided by Qwilt.
- Regex filter: The regex used to identify which logs Hydrolix should consume. Provided by Qwilt.
- Transform: The name of the transform created in the Create a transform step.
- Credential ID: The ID of the credential created in the Create credential step. Provide this credential as both the `source_credential_id`, used to interact with the SQS notification queue, and the `bucket_credential_id`, used to access the intermediate S3 bucket. This credential uses an AWS access key ID and secret access key with the permissions required to access both the queue and the storage bucket.
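Assembling the fields above, the autoingest job body has roughly the shape below. This is a hedged sketch built only from the fields this page lists; field names such as `source` and `pattern`, and the surrounding request structure, are assumptions that should be checked against the batch autoingest documentation before use.

```python
import json

# Illustrative autoingest job body; all IDs and names are placeholders.
autoingest_source = {
    "name": "qwilt-autoingest",
    "type": "batch",
    "settings": {
        "source": "sqs://qwilt-log-notification-queue",  # queue from Qwilt
        "source_region": "us-east-1",                    # region from Qwilt
        "pattern": r"^.*\.gz$",                          # regex filter from Qwilt
        "transform": "qwilt_transform",                  # transform created earlier
        # Same credential serves both roles, per the note above.
        "source_credential_id": "00000000-0000-0000-0000-000000000000",
        "bucket_credential_id": "00000000-0000-0000-0000-000000000000",
    },
}

print(json.dumps(autoingest_source, indent=2))
```

Note that `source_credential_id` and `bucket_credential_id` point at the same credential here because Qwilt issues one key pair covering both the queue and the bucket.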
UI
1. From the Hydrolix UI, select + Add new > Table Source.
2. In the Select Table field, select the table indicated in Before you begin.
3. Under Source type, choose Auto Ingest.
4. Input the following:
   - Name: A unique name for the autoingest job
   - Queue Name: The SQS queue name provided by Qwilt.
   - Source region: The AWS region in which the notification queue and bucket are located. Provided by Qwilt.
   - Regex filter: The regex used to identify which logs Hydrolix should consume. Provided by Qwilt.
   - Select transform: Select the transform created in the Create a transform step.
   - Source Credential: Select the credential created in the Create credential step.
   - Bucket Credential: Select the credential created in the Create credential step.
5. Select Add source.
Verification
If Qwilt is generating HTTP transaction log data, that data will begin arriving in Hydrolix. You can verify that data is in Hydrolix using the following steps:
- Visit `https://${HDX_HOSTNAME}/data/tables/{table-uuid}` to verify that the Total Size and Total Volume of the table are increasing. You can also check the Ingest Latency to verify when the table last received data.
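Another quick check is a row count query against the destination table. The sketch below only constructs the query URL; the hostname is a placeholder, and the query endpoint path and authentication mechanism depend on your cluster configuration, so confirm both against your cluster's query documentation before sending the request.

```python
from urllib.parse import quote

HDX_HOSTNAME = "hdx.example.com"  # placeholder cluster hostname
QUERY = "SELECT count() FROM qwilt_project.http_transaction_logs"

# Build the query URL; actually sending it requires a valid auth token,
# so this sketch stops at constructing the request.
url = f"https://{HDX_HOSTNAME}/query?query={quote(QUERY)}"
print(url)
```

A count that grows between runs confirms the autoingest job is pulling files from the queue and landing rows in the table.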
Go Further
Qwilt’s metadata-rich Media Delivery Log (MDL) files include all available log data fields by default. If needed, you may work with Qwilt support to customize which parameters and metadata are exported to Hydrolix.
Grafana dashboard
View your Qwilt HTTP transaction log data in Grafana using our Grafana dashboard JSON and Grafana's Import a Dashboard instructions. Deploy Grafana in your Hydrolix cluster using the Grafana Automatic Installation instructions if you don't already have access to a Grafana instance.