Configure Event-Driven On-Demand Sync for Snowflake Connector¶
Overview¶
Event-driven On-Demand Sync enables real-time, targeted policy synchronization in Snowflake based on specific resource changes. Unlike scheduled sync operations that process all resources periodically, On-Demand Sync allows you to trigger immediate synchronization for specific resources when changes occur.
D2P Mode Only
Event-driven On-Demand Sync for Snowflake is currently supported only in Data Plane (D2P) deployment mode.
Key Benefits¶
| Benefit | Description |
|---|---|
| Real-Time Updates | Policy changes are applied immediately when triggered |
| Targeted Sync | Synchronizes only the affected resources, not the entire catalog |
| Reduced Load | Minimizes connector processing overhead by syncing only what's needed |
| Event-Driven | Integrates with Azure Event Hub for scalable, asynchronous processing |
How It Works¶
Event-driven On-Demand Sync follows this workflow:
- Event Trigger: External systems publish sync request events to Azure Event Hub
- Connector Receives: The Snowflake Connector listens to the Event Hub for incoming sync requests
- Batch Processing: Multiple events are batched together for efficient processing
- Resource Sync: The connector loads the resources specified in the incoming sync requests from Snowflake and applies policy changes
- Completion: Audit records track the sync operation with SUCCESS, FAILED, or SKIPPED status
Prerequisites¶
Before enabling Event-Driven On-Demand Sync, ensure you have:
- Azure Event Hub namespace created in your Azure subscription
- Connection string with appropriate permissions (Send/Listen)
Configuration Properties¶
Required Properties¶
| Property | Description | Example |
|---|---|---|
| CONNECTOR_SNOWFLAKE_ON_DEMAND_V2_ENABLED | Enable event-driven on-demand sync | true |
| CONNECTOR_SNOWFLAKE_LOAD_RESOURCES_KEY | Key that specifies the mode used to load resources during on-demand sync | load_multi_thread |
| CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_ENABLED | Enable Azure Event Hub consumer | true |
| CONNECTOR_ON_DEMAND_V2_AZURE_CONNECTION_STRING | Azure Event Hub connection string | Endpoint=sb://<update_namespace>.servicebus.windows.net/;SharedAccessKeyName=<update_shared_access_key_name>;SharedAccessKey=<update_shared_key> |
| CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_NAME | Name of the Event Hub | <update_event_hub_name> |
Optional Properties¶
| Property | Default Value | Description |
|---|---|---|
| CONNECTOR_ON_DEMAND_V2_AZURE_CONSUMER_GROUP | $Default | Consumer group name |
| CONNECTOR_ON_DEMAND_V2_BATCH_SIZE | 50 | Maximum number of events to process in a batch |
| CONNECTOR_ON_DEMAND_V2_BATCH_TIMEOUT_MS | 5000 | Maximum wait time (ms) before processing a partial batch |
| CONNECTOR_ON_DEMAND_V2_QUEUE_THRESHOLD | 100 | Maximum queue size before applying backpressure |
Azure Failure Hub Configuration (for failure tracking)¶
| Property | Description | Example |
|---|---|---|
| CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_ENABLED | Enable publishing of failed events to an Azure Event Hub. | true |
| CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_CONNECTION_STRING | Azure Event Hub connection string for the failure hub. | Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key_name>;SharedAccessKey=<key> |
| CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_NAME | Name of the Event Hub used for failure tracking. | failure-tracker |
Failure tracking behavior
When enabled, events that fail during processing—for example, due to invalid JSON or parsing errors—are published to a dedicated Azure Event Hub. This allows you to monitor, debug, and replay failed events without losing them. Each event sent to the failure hub includes a JSON payload with failure context such as error type and message, connector name, stack trace, Kafka topic/partition/offset, timestamp, and the original message that caused the failure. Use this to debug and fix malformed payloads or replay corrected events.
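The failure payload is not given a formal schema here; the field names below are illustrative placeholders for the categories of context described above (error type and message, connector name, stack trace, Kafka topic/partition/offset, timestamp, and the original message):

```json
{
  "errorType": "JsonParseException",
  "errorMessage": "Unexpected character at position 42",
  "connectorName": "snowflake-connector",
  "stackTrace": "com.example.JsonParseException: ...",
  "topic": "on-demand-sync",
  "partition": 0,
  "offset": 12345,
  "timestamp": "2024-01-15T10:30:00Z",
  "originalMessage": "{ \"type\": \"RESOURCE_SYNC\", ..."
}
```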
Setup¶
Step 1: Edit Connector Configuration¶
SSH to the instance where Privacera is installed and edit your connector configuration file:
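A rough sketch of this step follows; the hostname, directory, and file name are placeholders assumed from a typical Privacera Manager layout, so substitute the actual paths from your installation:

```bash
# SSH to the instance where Privacera Manager is installed (host is a placeholder)
ssh user@<privacera-manager-host>

# Open the Snowflake connector configuration file for editing
# (path and file name assumed; use your installation's actual location)
cd ~/privacera/privacera-manager/config/custom-vars
vi connector-snowflake.yml
```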
Step 2: Add On-Demand Sync Configuration¶
Add the following configuration to your connector YAML file:
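Drawing on the required and optional property tables above, the configuration looks like the following (flat `key: value` form assumed; match the style already used in your connector YAML, and replace the placeholder values):

```yaml
CONNECTOR_SNOWFLAKE_ON_DEMAND_V2_ENABLED: "true"
CONNECTOR_SNOWFLAKE_LOAD_RESOURCES_KEY: "load_multi_thread"
CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_ENABLED: "true"
CONNECTOR_ON_DEMAND_V2_AZURE_CONNECTION_STRING: "Endpoint=sb://<update_namespace>.servicebus.windows.net/;SharedAccessKeyName=<update_shared_access_key_name>;SharedAccessKey=<update_shared_key>"
CONNECTOR_ON_DEMAND_V2_AZURE_EVENT_HUB_NAME: "<update_event_hub_name>"

# Optional tuning properties (defaults shown)
CONNECTOR_ON_DEMAND_V2_AZURE_CONSUMER_GROUP: "$Default"
CONNECTOR_ON_DEMAND_V2_BATCH_SIZE: "50"
CONNECTOR_ON_DEMAND_V2_BATCH_TIMEOUT_MS: "5000"
CONNECTOR_ON_DEMAND_V2_QUEUE_THRESHOLD: "100"
```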
Azure Failure Hub Configuration (for failure tracking)¶
Add the following to your connector YAML when you want to track failures in a dedicated Event Hub:
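Based on the failure hub property table above, the additional entries look like this (same flat `key: value` form assumed; replace the placeholders with your failure hub's details):

```yaml
CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_ENABLED: "true"
CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_CONNECTION_STRING: "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key_name>;SharedAccessKey=<key>"
CONNECTOR_ON_DEMAND_V2_AZURE_FAILURE_HUB_NAME: "failure-tracker"
```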
Step 3: Deploy Configuration¶
Once the properties are configured, run the following commands to update your Privacera Manager platform instance:
- Step 1: Run the setup step, which generates the Helm charts. This step usually takes a few minutes.
- Step 2: Apply the Privacera Manager Helm charts.
- Step 3: (Optional) Run the post-installation step, which generates the plugin tarball, updates Route 53 DNS, and so on. This step is not required if you are updating only connector properties.
Event Payload Structure¶
To trigger an on-demand sync, publish a JSON event to the Azure Event Hub with the following structure.
Sending events to Event Hub¶
Follow Microsoft's documentation to send events to your Event Hub. For example:
- REST API — Send event (REST API) for sending events via HTTP.
- Python — Send and receive events using Python for a quickstart with the Azure Event Hubs Python SDK.
Similar quickstarts and SDK guides for other languages and platforms are available in the Azure Event Hubs documentation.
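As a concrete illustration, the sketch below builds a sync event from the field descriptions in the next section and shows, in comments, how it could be sent with the `azure-eventhub` Python SDK. Resource names such as `SALES_DB` are placeholders, and the send step assumes you supply your own connection string and Event Hub name:

```python
import json
from datetime import datetime, timezone

def build_sync_event(resources, source="API", event_id=None):
    """Build an on-demand sync event payload (field names from the table below)."""
    return {
        "id": event_id,                       # optional; recorded as null if omitted
        "type": "RESOURCE_SYNC",              # required task type
        "appType": "PS_CONNECTOR",
        "appSubType": "SNOWFLAKE",
        "source": source,                     # e.g. "KAFKA" or "API"
        "createTime": datetime.now(timezone.utc).isoformat(),
        "requestInfo": {"resources": resources},
    }

# Placeholder resource identifiers; substitute your own database/schema/table.
event = build_sync_event(
    [{"type": "table",
      "values": {"database": "SALES_DB", "schema": "PUBLIC", "table": "CUSTOMERS"}}],
    event_id="sync-001",
)
body = json.dumps(event)

# Sending with the Azure Event Hubs Python SDK (pip install azure-eventhub):
#
#   from azure.eventhub import EventHubProducerClient, EventData
#   producer = EventHubProducerClient.from_connection_string(
#       conn_str="<connection_string>", eventhub_name="<event_hub_name>")
#   with producer:
#       batch = producer.create_batch()
#       batch.add(EventData(body))
#       producer.send_batch(batch)
```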
Sample Event Payload¶
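Assembled from the field descriptions below, a complete event looks like this (identifier and resource names are placeholder values):

```json
{
  "id": "sync-001",
  "type": "RESOURCE_SYNC",
  "appType": "PS_CONNECTOR",
  "appSubType": "SNOWFLAKE",
  "source": "API",
  "createTime": "2024-01-15T10:30:00Z",
  "requestInfo": {
    "resources": [
      {
        "type": "table",
        "values": {
          "database": "SALES_DB",
          "schema": "PUBLIC",
          "table": "CUSTOMERS"
        }
      }
    ]
  }
}
```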
Payload Field Descriptions¶
| Field | Type | Required | Description |
|---|---|---|---|
| id | String | No | Unique identifier for the task; if omitted, the value is recorded as null |
| type | String | Yes | Task type. Use "RESOURCE_SYNC" for resource synchronization |
| appType | String | No | Application type. Use "PS_CONNECTOR" |
| appSubType | String | No | Connector subtype. Use "SNOWFLAKE" for Snowflake connector |
| source | String | Yes | Source system that triggered the event (e.g., "KAFKA", "API") |
| createTime | String | No | ISO 8601 timestamp when the event was created |
| requestInfo.resources | Array | Yes | List of resources to sync |
| requestInfo.resources[].type | String | Yes | Resource type: "database", "schema", "table", "view" |
| requestInfo.resources[].values | Object | Yes | Resource identifiers (database, schema, table) |
Resource Types and Values¶
Database:
Schema:
Table:
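Following the `values` keys listed in the field table, each resource type carries the identifiers shown below (names such as `SALES_DB` are placeholders):

```json
{ "type": "database", "values": { "database": "SALES_DB" } }
{ "type": "schema",   "values": { "database": "SALES_DB", "schema": "PUBLIC" } }
{ "type": "table",    "values": { "database": "SALES_DB", "schema": "PUBLIC", "table": "CUSTOMERS" } }
```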
Multiple Resources Example¶
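A single event can request several resources at once by listing multiple entries in `requestInfo.resources` (all names below are placeholders):

```json
{
  "type": "RESOURCE_SYNC",
  "appType": "PS_CONNECTOR",
  "appSubType": "SNOWFLAKE",
  "source": "API",
  "requestInfo": {
    "resources": [
      { "type": "database", "values": { "database": "FINANCE_DB" } },
      { "type": "schema", "values": { "database": "SALES_DB", "schema": "PUBLIC" } },
      { "type": "table", "values": { "database": "SALES_DB", "schema": "PUBLIC", "table": "ORDERS" } }
    ]
  }
}
```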
Task Status¶
After processing, each sync task will have one of the following statuses:
| Status | Description |
|---|---|
| SUCCESS | Sync completed successfully |
| FAILED | Sync failed due to an error |
| SKIPPED | Task skipped (e.g., invalid request or unsupported resource type) |