
Configure Event-Driven On-Demand Sync for Snowflake Connector

Overview

Event-driven On-Demand Sync enables real-time, targeted policy synchronization in Snowflake based on specific resource changes. Unlike scheduled sync operations that process all resources periodically, On-Demand Sync allows you to trigger immediate synchronization for specific resources when changes occur.

D2P Mode Only

Event-driven On-Demand Sync for Snowflake is currently supported only in Data Plane (D2P) deployment mode.

Key Benefits

| Benefit | Description |
|---|---|
| Real-Time Updates | Policy changes are applied immediately when triggered |
| Targeted Sync | Synchronizes only the affected resources, not the entire catalog |
| Reduced Load | Minimizes connector processing overhead by syncing only what's needed |
| Event-Driven | Integrates with Azure Event Hub for scalable, asynchronous processing |

How It Works

Event-driven On-Demand Sync follows this workflow:

  1. Event Trigger: External systems publish sync request events to Azure Event Hub
  2. Connector Receives: The Snowflake Connector listens to the Event Hub for incoming sync requests
  3. Batch Processing: Multiple events are batched together for efficient processing
  4. Resource Sync: The connector loads the resources specified in the incoming sync requests from Snowflake and applies policy changes.
  5. Completion: Audit records track the sync operation with SUCCESS, FAILED, or SKIPPED status

Prerequisites

Before enabling Event-Driven On-Demand Sync, ensure you have:

  1. An Azure Event Hub namespace created in your Azure subscription
  2. A connection string with appropriate permissions (Send/Listen)

Configuration Properties

Required Properties

| Property | Description | Example |
|---|---|---|
| CONNECTOR_ON_DEMAND_V2_ENABLED | Enable event-driven on-demand sync | true |
| CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_ENABLED | Enable the Azure Event Hub consumer | true |
| CONNECTOR_ON_DEMAND_V2_AZURE_CONNECTION_STRING | Azure Event Hub connection string | Endpoint=sb://<update_namespace>.servicebus.windows.net/;SharedAccessKeyName=<update_shared_access_key_name>;SharedAccessKey=<update_shared_key> |

Optional Properties

| Property | Default Value | Description |
|---|---|---|
| CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_NAME | none | Name of the Event Hub |
| CONNECTOR_ON_DEMAND_V2_AZURE_CONSUMER_GROUP | $Default | Consumer group name |
| CONNECTOR_ON_DEMAND_V2_BATCH_SIZE | 50 | Maximum number of events to process in a batch |
| CONNECTOR_ON_DEMAND_V2_BATCH_TIMEOUT_MS | 5000 | Maximum wait time (ms) before processing a partial batch |
| CONNECTOR_ON_DEMAND_V2_QUEUE_THRESHOLD | 100 | Maximum queue size before applying backpressure |
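
As described in the table above, incoming sync requests are queued and processed in batches: a batch is dispatched when it reaches CONNECTOR_ON_DEMAND_V2_BATCH_SIZE events or when CONNECTOR_ON_DEMAND_V2_BATCH_TIMEOUT_MS milliseconds elapse with only a partial batch, and backpressure is applied once the internal queue exceeds CONNECTOR_ON_DEMAND_V2_QUEUE_THRESHOLD. The Python sketch below only illustrates how these three settings interact; it is not the connector's implementation, and process_batch is a hypothetical placeholder.

Python
import queue
import time

# Values mirroring the defaults above; illustrative only.
BATCH_SIZE = 50          # CONNECTOR_ON_DEMAND_V2_BATCH_SIZE
BATCH_TIMEOUT_MS = 5000  # CONNECTOR_ON_DEMAND_V2_BATCH_TIMEOUT_MS
QUEUE_THRESHOLD = 100    # CONNECTOR_ON_DEMAND_V2_QUEUE_THRESHOLD

# A bounded queue models backpressure: producers block once the threshold is reached.
event_queue = queue.Queue(maxsize=QUEUE_THRESHOLD)

def process_batch(batch):
    """Hypothetical placeholder for the actual resource sync step."""
    print(f"Syncing {len(batch)} resource sync request(s)")

def batch_loop():
    while True:
        # Block until at least one event is available, then start the partial-batch timer.
        batch = [event_queue.get()]
        deadline = time.monotonic() + BATCH_TIMEOUT_MS / 1000
        # Keep filling until the batch is full or the partial-batch timeout expires.
        while len(batch) < BATCH_SIZE:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(event_queue.get(timeout=remaining))
            except queue.Empty:
                break
        process_batch(batch)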

Setup

Step 1: Edit Connector Configuration

SSH to the instance where Privacera is installed and edit your connector configuration file:

Bash
cd ~/privacera/privacera-manager/config
vi custom-vars/connectors/snowflake/instance1/vars.connector.snowflake.yml

Step 2: Add On-Demand Sync Configuration

Add the following configuration to your connector YAML file:

YAML
# Enable Event-Driven On-Demand Sync
CONNECTOR_ON_DEMAND_V2_ENABLED: "true"

# Azure Event Hub Configuration
CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_ENABLED: "true"
CONNECTOR_ON_DEMAND_V2_AZURE_CONNECTION_STRING: "Endpoint=sb://<update_namespace>.servicebus.windows.net/;SharedAccessKeyName=<update_shared_access_key_name>;SharedAccessKey=<update_shared_key>"
CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_NAME: "<update_event_hub_name>"
CONNECTOR_ON_DEMAND_V2_AZURE_CONSUMER_GROUP: "$Default"

# Optional: Batch Processing Configuration
CONNECTOR_ON_DEMAND_V2_BATCH_SIZE: "50"
CONNECTOR_ON_DEMAND_V2_BATCH_TIMEOUT_MS: "5000"
CONNECTOR_ON_DEMAND_V2_QUEUE_THRESHOLD: "100"

Step 3: Deploy Configuration

Once the properties are configured, run the following commands to update your Privacera Manager platform instance:

Step 1 - Run the setup step, which generates the Helm charts. This step usually takes a few minutes.

Bash
cd ~/privacera/privacera-manager
./privacera-manager.sh setup

Step 2 - Apply the Privacera Manager Helm charts.

Bash
cd ~/privacera/privacera-manager
./pm_with_helm.sh upgrade

Step 3 - (Optional) Run the post-installation step, which generates the plugin tarball, updates Route 53 DNS, and so on. This step is not required if you are only updating connector properties.

Bash
cd ~/privacera/privacera-manager
./privacera-manager.sh post-install

Event Payload Structure

To trigger an on-demand sync, publish a JSON event to the Azure Event Hub with the following structure:

Sample Event Payload

JSON
{
  "id": "005",
  "type": "RESOURCE_SYNC",
  "appType": "PS_CONNECTOR",
  "appSubType": "SNOWFLAKE",
  "requestInfo": {
    "resources": [
      {
        "type": "table",
        "values": {
          "database": "AP_PS_OMNI_PROD_DB",
          "schema": "TEST_SCHEMA1",
          "table": "TEST_DATA3"
        }
      }
    ]
  },
  "source": "KAFKA",
  "createTime": "2026-01-14T10:00:00Z"
}
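
As a reference for publishing such an event, the following is a minimal sketch using the azure-eventhub Python SDK (pip install azure-eventhub). The connection string and Event Hub name are placeholders that should match the connector configuration; the payload is the sample shown above.

Python
import json
from azure.eventhub import EventHubProducerClient, EventData

# Placeholders: use the same connection string and Event Hub name as the connector configuration.
CONNECTION_STR = "Endpoint=sb://<update_namespace>.servicebus.windows.net/;SharedAccessKeyName=<update_shared_access_key_name>;SharedAccessKey=<update_shared_key>"
EVENT_HUB_NAME = "<update_event_hub_name>"

# The sample RESOURCE_SYNC payload from above.
payload = {
    "id": "005",
    "type": "RESOURCE_SYNC",
    "appType": "PS_CONNECTOR",
    "appSubType": "SNOWFLAKE",
    "requestInfo": {
        "resources": [
            {
                "type": "table",
                "values": {
                    "database": "AP_PS_OMNI_PROD_DB",
                    "schema": "TEST_SCHEMA1",
                    "table": "TEST_DATA3",
                },
            }
        ]
    },
    "source": "KAFKA",
    "createTime": "2026-01-14T10:00:00Z",
}

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
)
with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(payload)))  # serialize the sync request as the event body
    producer.send_batch(batch)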

Payload Field Descriptions

| Field | Type | Required | Description |
|---|---|---|---|
| id | String | Yes | Unique identifier for the task |
| type | String | Yes | Task type. Use "RESOURCE_SYNC" for resource synchronization |
| appType | String | Yes | Application type. Use "PS_CONNECTOR" |
| appSubType | String | Yes | Connector subtype. Use "SNOWFLAKE" for the Snowflake connector |
| source | String | No | Source system that triggered the event (e.g., "KAFKA", "API") |
| createTime | String | No | ISO 8601 timestamp when the event was created |
| requestInfo.resources | Array | Yes | List of resources to sync |
| requestInfo.resources[].type | String | Yes | Resource type: "database", "schema", "table", "view", "column" |
| requestInfo.resources[].values | Object | Yes | Resource identifiers (database, schema, table, column) |
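
The table above implies a simple set of structural checks that a publisher can run before sending an event. The helper below is a hypothetical illustration of those checks; it is not part of the connector.

Python
REQUIRED_TOP_LEVEL = ("id", "type", "appType", "appSubType", "requestInfo")
VALID_RESOURCE_TYPES = {"database", "schema", "table", "view", "column"}

def validate_sync_event(event: dict) -> list:
    """Return a list of problems found in a RESOURCE_SYNC event (an empty list means it looks valid)."""
    problems = [f"missing field: {field}" for field in REQUIRED_TOP_LEVEL if field not in event]
    resources = event.get("requestInfo", {}).get("resources", [])
    if not resources:
        problems.append("requestInfo.resources must contain at least one resource")
    for i, resource in enumerate(resources):
        if resource.get("type") not in VALID_RESOURCE_TYPES:
            problems.append(f"resources[{i}]: unsupported type {resource.get('type')!r}")
        if not isinstance(resource.get("values"), dict) or not resource.get("values"):
            problems.append(f"resources[{i}]: values must be a non-empty object")
    return problems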

Resource Types and Values

Database:

JSON
{
  "type": "database",
  "values": {
    "database": "SALES_DB"
  }
}

Schema:

JSON
{
  "type": "schema",
  "values": {
    "database": "SALES_DB",
    "schema": "PUBLIC"
  }
}

Table:

JSON
{
  "type": "table",
  "values": {
    "database": "SALES_DB",
    "schema": "PUBLIC",
    "table": "CUSTOMERS"
  }
}

Column:

JSON
{
  "type": "column",
  "values": {
    "database": "SALES_DB",
    "schema": "PUBLIC",
    "table": "CUSTOMERS",
    "column": "EMAIL"
  }
}

Multiple Resources Example

JSON
{
  "id": "006",
  "type": "RESOURCE_SYNC",
  "appType": "PS_CONNECTOR",
  "appSubType": "SNOWFLAKE",
  "requestInfo": {
    "resources": [
      {
        "type": "database",
        "values": {
          "database": "HR_DB"
        }
      },
      {
        "type": "schema",
        "values": {
          "database": "HR_DB",
          "schema": "EMPLOYEE_DATA"
        }
      },
      {
        "type": "table",
        "values": {
          "database": "HR_DB",
          "schema": "EMPLOYEE_DATA",
          "table": "EMPLOYEES"
        }
      }
    ]
  },
  "source": "ACCESS_REQUEST",
  "createTime": "2026-01-21T10:30:00Z"
}
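
When many objects change at once, the resources array can be built programmatically and sent as a single event. The helper below is a hypothetical convenience function that follows the payload structure documented above; the resulting event can be published exactly as in the earlier azure-eventhub sketch.

Python
import uuid
from datetime import datetime, timezone

def build_sync_event(tables, source="API"):
    """Build a RESOURCE_SYNC event for a list of (database, schema, table) tuples."""
    return {
        "id": str(uuid.uuid4()),  # any unique task identifier works
        "type": "RESOURCE_SYNC",
        "appType": "PS_CONNECTOR",
        "appSubType": "SNOWFLAKE",
        "requestInfo": {
            "resources": [
                {"type": "table", "values": {"database": db, "schema": sch, "table": tbl}}
                for db, sch, tbl in tables
            ]
        },
        "source": source,
        "createTime": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }

# Example: request a sync for two tables in a single event.
event = build_sync_event([
    ("HR_DB", "EMPLOYEE_DATA", "EMPLOYEES"),
    ("SALES_DB", "PUBLIC", "CUSTOMERS"),
])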

Task Status

After processing, each sync task will have one of the following statuses:

| Status | Description |
|---|---|
| SUCCESS | Sync completed successfully |
| FAILED | Sync failed due to an error |
| SKIPPED | Task skipped (e.g., invalid request or unsupported resource type) |