Configure Event-Driven On-Demand Sync for Snowflake Connector

Overview

Event-driven On-Demand Sync enables real-time, targeted policy synchronization in Snowflake based on specific resource changes. Unlike scheduled sync operations that process all resources periodically, On-Demand Sync allows you to trigger immediate synchronization for specific resources when changes occur.

D2P Mode Only

Event-driven On-Demand Sync for Snowflake is currently supported only in Data Plane (D2P) deployment mode.

Key Benefits

| Benefit | Description |
| --- | --- |
| Real-Time Updates | Policy changes are applied immediately when triggered |
| Targeted Sync | Synchronizes only the affected resources, not the entire catalog |
| Reduced Load | Minimizes connector processing overhead by syncing only what's needed |
| Event-Driven | Integrates with Azure Event Hub for scalable, asynchronous processing |

How It Works

Event-driven On-Demand Sync follows this workflow:

  1. Event Trigger: External systems publish sync request events to Azure Event Hub
  2. Connector Receives: The Snowflake Connector listens to the Event Hub for incoming sync requests
  3. Batch Processing: Multiple events are batched together for efficient processing
  4. Resource Sync: The connector loads the resources specified in the incoming sync requests from Snowflake and applies policy changes
  5. Completion: Audit records track the sync operation with SUCCESS, FAILED, or SKIPPED status

Prerequisites

Before enabling Event-Driven On-Demand Sync, ensure you have:

  1. Azure Event Hub namespace created in your Azure subscription
  2. Connection string with appropriate permissions (Send/Listen)

Configuration Properties

Required Properties

| Property | Description | Example |
| --- | --- | --- |
| CONNECTOR_SNOWFLAKE_ON_DEMAND_V2_ENABLED | Enable event-driven on-demand sync | true |
| CONNECTOR_SNOWFLAKE_LOAD_RESOURCES_KEY | Key specifying the mode used to load resources during on-demand sync | load_multi_thread |
| CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_ENABLED | Enable Azure Event Hub consumer | true |
| CONNECTOR_ON_DEMAND_V2_AZURE_CONNECTION_STRING | Azure Event Hub connection string | Endpoint=sb://<update_namespace>.servicebus.windows.net/;SharedAccessKeyName=<update_shared_access_key_name>;SharedAccessKey=<update_shared_key> |
| CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_NAME | Name of the Event Hub | <update_event_hub_name> |

Optional Properties

| Property | Default Value | Description |
| --- | --- | --- |
| CONNECTOR_ON_DEMAND_V2_AZURE_CONSUMER_GROUP | $Default | Consumer group name |
| CONNECTOR_ON_DEMAND_V2_BATCH_SIZE | 50 | Maximum number of events to process in a batch |
| CONNECTOR_ON_DEMAND_V2_BATCH_TIMEOUT_MS | 5000 | Maximum wait time (ms) before processing a partial batch |
| CONNECTOR_ON_DEMAND_V2_QUEUE_THRESHOLD | 100 | Maximum queue size before applying backpressure |
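The batching knobs above can be pictured with a small sketch. This is an illustrative Python model of backpressure and batch draining under the default values, not the connector's actual implementation; the real consumer also flushes a partial batch once BATCH_TIMEOUT_MS elapses.

```python
from collections import deque

# Illustrative values mirroring the defaults above (not the connector's code).
BATCH_SIZE = 50        # CONNECTOR_ON_DEMAND_V2_BATCH_SIZE
QUEUE_THRESHOLD = 100  # CONNECTOR_ON_DEMAND_V2_QUEUE_THRESHOLD

def enqueue(queue, event):
    """Apply backpressure: reject new events once the queue is full."""
    if len(queue) >= QUEUE_THRESHOLD:
        return False  # caller should retry later
    queue.append(event)
    return True

def drain_batches(queue):
    """Drain the queue into batches of at most BATCH_SIZE events each.
    (In the connector, a partial batch is also flushed after
    BATCH_TIMEOUT_MS elapses; here we simply drain what is queued.)"""
    batches = []
    while queue:
        batch = [queue.popleft() for _ in range(min(BATCH_SIZE, len(queue)))]
        batches.append(batch)
    return batches

queue = deque()
for i in range(120):
    enqueue(queue, {"id": i})   # events beyond the threshold are rejected
batches = drain_batches(queue)  # 100 accepted events -> two batches of 50
```

With 120 incoming events, only 100 are accepted (the threshold) and they drain as two full batches of 50, matching the default property values.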

Azure Failure Hub Configuration (for failure tracking)

| Property | Description | Example |
| --- | --- | --- |
| CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_ENABLED | Enable publishing of failed events to an Azure Event Hub | true |
| CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_CONNECTION_STRING | Azure Event Hub connection string for the failure hub | Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key_name>;SharedAccessKey=<key> |
| CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_NAME | Name of the Event Hub used for failure tracking | failure-tracker |

Failure Tracking Behavior

When enabled, events that fail during processing—for example, due to invalid JSON or parsing errors—are published to a dedicated Azure Event Hub. This allows you to monitor, debug, and replay failed events without losing them. Each event sent to the failure hub includes a JSON payload with failure context such as error type and message, connector name, stack trace, Kafka topic/partition/offset, timestamp, and the original message that caused the failure. Use this to debug and fix malformed payloads or replay corrected events.
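A failure-hub payload might look like the following. The field names here are illustrative only, derived from the context listed above (error type and message, connector name, stack trace, topic/partition/offset, timestamp, original message); they are not an official schema.

```python
import json

# Illustrative failure-event payload; field names are assumptions based on
# the failure context described above, not a documented schema.
failure_event = {
    "errorType": "JsonParseException",
    "errorMessage": "Unexpected character at position 17",
    "connectorName": "snowflake-instance1",
    "stackTrace": "JsonParseException: Unexpected character ...",
    "topic": "on-demand-sync",
    "partition": 0,
    "offset": 4217,
    "timestamp": "2026-01-14T10:00:00Z",
    # The original message is carried verbatim so it can be fixed and replayed.
    "originalMessage": '{"id": "005", "type": "RESOURCE_SYNC"',
}

# Failure events are JSON, so they round-trip cleanly for replay tooling.
restored = json.loads(json.dumps(failure_event))
```

Because the original message travels with the failure context, a replay tool can correct the malformed payload and republish it to the main Event Hub.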

Setup

Step 1: Edit Connector Configuration

SSH to the instance where Privacera is installed and edit your connector configuration file:

Bash
cd ~/privacera/privacera-manager/config
vi custom-vars/connectors/snowflake/instance1/vars.connector.snowflake.yml

Step 2: Add On-Demand Sync Configuration

Add the following configuration to your connector YAML file:

YAML
# Enable Event-Driven On-Demand Sync
CONNECTOR_SNOWFLAKE_ON_DEMAND_V2_ENABLED: "true"
CONNECTOR_SNOWFLAKE_LOAD_RESOURCES_KEY: "load_multi_thread"

# Azure Event Hub Configuration
CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_ENABLED: "true"
CONNECTOR_ON_DEMAND_V2_AZURE_CONNECTION_STRING: "Endpoint=sb://<update_namespace>.servicebus.windows.net/;SharedAccessKeyName=<update_shared_access_key_name>;SharedAccessKey=<update_shared_key>"
CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_NAME: "<update_event_hub_name>"
CONNECTOR_ON_DEMAND_V2_AZURE_CONSUMER_GROUP: "$Default"

# Optional: Batch Processing Configuration
CONNECTOR_ON_DEMAND_V2_BATCH_SIZE: "50"
CONNECTOR_ON_DEMAND_V2_BATCH_TIMEOUT_MS: "5000"
CONNECTOR_ON_DEMAND_V2_QUEUE_THRESHOLD: "100"

Azure Failure Hub Configuration (for failure tracking)

Add the following to your connector YAML when you want to track failures in a dedicated Event Hub:

YAML
# Azure Failure Hub (optional - for failure tracking)
CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_ENABLED: "true"
CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_CONNECTION_STRING: "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key_name>;SharedAccessKey=<key>"
CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_NAME: "failure-tracker"

Step 3: Deploy Configuration

Once the properties are configured, run the following commands to update your Privacera Manager platform instance:

  1. Run the setup, which generates the Helm charts. This step usually takes a few minutes.

Bash
cd ~/privacera/privacera-manager
./privacera-manager.sh setup

  2. Apply the Privacera Manager Helm charts.

Bash
cd ~/privacera/privacera-manager
./pm_with_helm.sh upgrade

  3. (Optional) Run the post-installation step, which generates the plugin tarball, updates Route 53 DNS, and so on. This step is not required if you are updating only connector properties.

Bash
cd ~/privacera/privacera-manager
./privacera-manager.sh post-install

Event Payload Structure

To trigger an on-demand sync, publish a JSON event to the Azure Event Hub with the following structure.

Sending events to Event Hub

Follow Microsoft's documentation to send events to your Event Hub. Quickstarts and SDK guides for a variety of languages and platforms are available in the Azure Event Hubs documentation.
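As one possible approach, the sketch below publishes a sync event with the `azure-eventhub` Python SDK. The connection string and hub name are placeholders you must supply; the payload mirrors the sample documented in the next section.

```python
import json

def send_sync_event(connection_string: str, event_hub_name: str, payload: dict) -> None:
    """Publish a single on-demand sync event to Azure Event Hub.
    Requires the azure-eventhub package (pip install azure-eventhub)."""
    from azure.eventhub import EventData, EventHubProducerClient

    producer = EventHubProducerClient.from_connection_string(
        conn_str=connection_string, eventhub_name=event_hub_name
    )
    with producer:
        batch = producer.create_batch()
        batch.add(EventData(json.dumps(payload)))  # event body is the JSON payload
        producer.send_batch(batch)

# Payload matching the sample event structure documented in the next section.
payload = {
    "id": "005",
    "type": "RESOURCE_SYNC",
    "appType": "PS_CONNECTOR",
    "appSubType": "SNOWFLAKE",
    "requestInfo": {
        "resources": [
            {
                "type": "table",
                "values": {
                    "database": "AP_PS_OMNI_PROD_DB",
                    "schema": "TEST_SCHEMA1",
                    "table": "TEST_DATA3",
                },
            }
        ]
    },
    "source": "KAFKA",
    "createTime": "2026-01-14T10:00:00Z",
}

# send_sync_event("<connection_string>", "<event_hub_name>", payload)
```

The connection string is the same `Endpoint=sb://...` value configured for the connector, and the SAS policy it references must have Send permission.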

Sample Event Payload

JSON
{
  "id": "005",
  "type": "RESOURCE_SYNC",
  "appType": "PS_CONNECTOR",
  "appSubType": "SNOWFLAKE",
  "requestInfo": {
    "resources": [
      {
        "type": "table",
        "values": {
          "database": "AP_PS_OMNI_PROD_DB",
          "schema": "TEST_SCHEMA1",
          "table": "TEST_DATA3"
        }
      }
    ]
  },
  "source": "KAFKA",
  "createTime": "2026-01-14T10:00:00Z"
}

Payload Field Descriptions

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| id | String | No | Unique identifier for the task (if no ID is provided, the value will appear as null) |
| type | String | Yes | Task type. Use "RESOURCE_SYNC" for resource synchronization |
| appType | String | No | Application type. Use "PS_CONNECTOR" |
| appSubType | String | No | Connector subtype. Use "SNOWFLAKE" for the Snowflake connector |
| source | String | Yes | Source system that triggered the event (e.g., "KAFKA", "API") |
| createTime | String | No | ISO 8601 timestamp when the event was created |
| requestInfo.resources | Array | Yes | List of resources to sync |
| requestInfo.resources[].type | String | Yes | Resource type: "database", "schema", "table", "view" |
| requestInfo.resources[].values | Object | Yes | Resource identifiers (database, schema, table) |
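Events that fail these checks end up with SKIPPED or FAILED status, so it can be worth validating payloads before publishing. The helper below is a hypothetical client-side aid mirroring the Required column above, not the connector's own validation logic.

```python
REQUIRED_TOP_LEVEL = ("type", "source", "requestInfo")
VALID_RESOURCE_TYPES = {"database", "schema", "table", "view"}

def validate_sync_event(event: dict) -> list:
    """Return a list of problems; an empty list means the event passes
    these basic checks (client-side aid, not the connector's validation)."""
    problems = [f"missing field: {f}" for f in REQUIRED_TOP_LEVEL if f not in event]
    if event.get("type") != "RESOURCE_SYNC":
        problems.append("type must be RESOURCE_SYNC")
    resources = event.get("requestInfo", {}).get("resources") or []
    if not resources:
        problems.append("requestInfo.resources must be a non-empty list")
    for r in resources:
        if r.get("type") not in VALID_RESOURCE_TYPES:
            problems.append(f"unsupported resource type: {r.get('type')}")
        if not isinstance(r.get("values"), dict):
            problems.append("resource values must be an object")
    return problems

ok = validate_sync_event({
    "type": "RESOURCE_SYNC",
    "source": "API",
    "requestInfo": {"resources": [
        {"type": "table",
         "values": {"database": "SALES_DB", "schema": "PUBLIC", "table": "CUSTOMERS"}}
    ]},
})
bad = validate_sync_event({
    "type": "RESOURCE_SYNC", "source": "API", "requestInfo": {"resources": []},
})
```

Here `ok` comes back empty while `bad` reports the empty resource list, catching the problem before the event reaches the connector.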

Resource Types and Values

Database:

JSON
{
  "type": "database",
  "values": {
    "database": "SALES_DB"
  }
}

Schema:

JSON
{
  "type": "schema",
  "values": {
    "database": "SALES_DB",
    "schema": "PUBLIC"
  }
}

Table:

JSON
{
  "type": "table",
  "values": {
    "database": "SALES_DB",
    "schema": "PUBLIC",
    "table": "CUSTOMERS"
  }
}

Multiple Resources Example

JSON
{
  "id": "006",
  "type": "RESOURCE_SYNC",
  "appType": "PS_CONNECTOR",
  "appSubType": "SNOWFLAKE",
  "requestInfo": {
    "resources": [
      {
        "type": "database",
        "values": {
          "database": "HR_DB"
        }
      },
      {
        "type": "schema",
        "values": {
          "database": "HR_DB",
          "schema": "EMPLOYEE_DATA"
        }
      },
      {
        "type": "table",
        "values": {
          "database": "HR_DB",
          "schema": "EMPLOYEE_DATA",
          "table": "EMPLOYEES"
        }
      }
    ]
  },
  "source": "ACCESS_REQUEST",
  "createTime": "2026-01-21T10:30:00Z"
}

Task Status

After processing, each sync task will have one of the following statuses:

| Status | Description |
| --- | --- |
| SUCCESS | Sync completed successfully |
| FAILED | Sync failed due to an error |
| SKIPPED | Task skipped (e.g., invalid request or unsupported resource type) |