
Throttling Exception Alert

Root Cause

AWS Lake Formation triggers this alert when it limits or rejects API requests due to excessive call rates.

Possible causes include:

1. High request volume: The connector sends a high volume of API requests in a short period, exceeding API rate limits (see the CloudTrail check sketched after this list).

2. Insufficient service quotas: AWS account limits are too low for your workload.

3. Credential contention: Multiple connectors share the same AWS credentials.

4. Metadata scan bursts: Sudden metadata scans or workload bursts trigger rate-limit enforcement.

5. Service capacity issues: Temporary AWS Lake Formation capacity restrictions or backend service instability.
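
A quick way to check whether causes 1 and 2 apply is to review recent Lake Formation API activity in CloudTrail. The sketch below assumes CloudTrail is enabled in the account, that `jq` and GNU `date` are available, and that `lakeformation.amazonaws.com` is the event source of interest; adjust the time window and filters to your environment.

```bash
# Count Lake Formation API calls recorded by CloudTrail over the last hour
# (sample of up to 50 recent events), grouped by event name, to spot bursting calls.
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventSource,AttributeValue=lakeformation.amazonaws.com \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --max-results 50 \
  | jq -r '.Events[].EventName' | sort | uniq -c | sort -rn
```

A sustained, high count for a single event name usually points at cause 1 (request volume); if the volume looks normal for your workload, review the account's service quotas for cause 2.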

Troubleshooting Steps

The connector automatically retries throttled requests according to its retry and backoff configuration.
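
For reference, the AWS SDKs and the AWS CLI expose standard retry controls that behave the same way. The snippet below only illustrates that mechanism using standard AWS environment variables; the connector's own retry and backoff settings are defined in its configuration files, not through these variables.

```bash
# Standard AWS SDK/CLI retry controls (illustration of retry/backoff behavior;
# the connector configures its own retries separately).
export AWS_RETRY_MODE=adaptive   # client-side rate limiting on top of exponential backoff
export AWS_MAX_ATTEMPTS=10       # total attempts per API call, including retries

# Any AWS CLI call made in this shell is now retried automatically when throttled,
# for example a Lake Formation permissions listing:
aws lakeformation list-permissions --max-results 10
```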

Step 1: Monitor the Connector Dashboards

Open the Connector-Common or LakeFormation dashboard and inspect:

  • Throttling Exception Counter

    • Verify that the throttling count is not continuously increasing.
    • Ensure no bursts of throttling exceptions occurred during the last 5 minutes.
  • Throttling Exceptions – Time Series Panel

    • Check for patterns such as spikes or sustained throttling activity (see the query sketch after this list).
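
If the Grafana panels are backed by Prometheus, the same check can be scripted. The metric name below (`connector_throttling_exceptions_total`) and the Prometheus endpoint are placeholders; substitute the actual counter behind the Throttling Exception panels.

```bash
# Increase in throttling exceptions over the last 5 minutes (hypothetical metric name).
curl -sG "http://<PROMETHEUS_HOST>:9090/api/v1/query" \
  --data-urlencode 'query=increase(connector_throttling_exceptions_total[5m])'
```

A non-zero, growing value indicates an active burst; a flat counter means the throttling is historical.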

Escalation Checklist

If the issue cannot be resolved through the specific troubleshooting guides, escalate it to Privacera Support with the following details. For additional assistance, refer to How to Contact Support for detailed guidance on reaching the support team.

  • Timestamp of the error: Include the exact time the alert was triggered.
  • Grafana dashboard and alert screenshots:
    1. Grafana → Dashboards → Application-Dashboards → Connector-Common → Throttling Exception Counter
    2. Grafana → Alerting → Alert rules → Throttling Exception Alert
  • Connector Service Logs: Include any logs showing throttling exceptions, such as HTTP 429 or "Rate exceeded" messages.

    Option 1: Download Log from Diagnostic Portal (Recommended)

    1. Open the Diagnostic Portal and navigate to Dashboard → Pods.
    2. Select the connector pod from the available pods list.
    3. Click the Logs tab and download the logs by clicking the DOWNLOAD LOGS button.

    Option 2: Manual Log Collection (If Diagnostic Service is Not Enabled)

    ```bash
    # Create log archive
    kubectl exec -it <CONNECTOR_POD> -n <NAMESPACE> -- bash -c "cd /workdir/policysync/logs/ && tar -czf connector-logs.tar.gz *.log"

    # Copy the archive from the pod to the local machine
    kubectl cp <CONNECTOR_POD>:/workdir/policysync/logs/connector-logs.tar.gz ./connector-logs.tar.gz -n <NAMESPACE>

    # Extract logs
    tar -xzf connector-logs.tar.gz
    ```
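
    Once extracted, the throttling-related lines can be pulled out for the support ticket. The patterns below are examples based on the messages mentioned above; match whatever your logs actually contain.

    ```bash
    # Keep only throttling-related log lines (with file names and line numbers) for the ticket
    grep -inE 'ThrottlingException|Rate exceeded|429' *.log > throttling-excerpt.txt
    ```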
    
  • Configuration files: All relevant connector configuration files (e.g., polling intervals, batching config, authentication settings).

  • Description: Detailed description of the issue and actions already taken.
  • Alert details: Alert name, alert message, alert trigger time, and severity from the received alert.