Manage Access Audits

You can configure your AWS account to allow Privacera to access Amazon RDS PostgreSQL audit logs through Amazon CloudWatch Logs. This configuration enables Privacera to collect and process access audit events.

To enable access audits, complete the following tasks in your AWS account:

  1. Update the AWS RDS parameter group for the database
  2. Create an AWS SQS queue
  3. Specify an AWS Lambda function
  4. Create an IAM policy for an IAM role attached to an EC2 instance

Update the AWS RDS Parameter Group for the Database

To expose access audit logs, update the configuration for the data source.

Procedure

  1. Log in to your AWS account.

  2. To create a role for audits, run the following SQL statement as a user with administrative credentials for your data source:

    SQL
    CREATE ROLE rds_pgaudit;
    

  3. Create a new parameter group for your database and specify the following values:

    • Parameter group family: Select the family that matches your database engine, from either the aurora-postgresql or postgres families.
    • Type: Select DB Parameter Group.
    • Group name: Specify a group name for the parameter group.
    • Description: Specify a description for the parameter group.
  4. Edit the parameter group that you created in the previous step and set the following values:

    • pgaudit.log: Specify all, overwriting any existing value.
    • shared_preload_libraries: Specify pg_stat_statements,pgaudit.
    • pgaudit.role: Specify rds_pgaudit.
  5. Associate the parameter group that you created with your database. Modify the configuration for the database instance and make the following changes:

    • DB parameter group: Specify the parameter group you created in this procedure.
    • PostgreSQL log: Ensure this log export option is selected so that logs are published to Amazon CloudWatch Logs.
  6. When prompted, select Apply immediately so that the changes take effect without waiting for the next maintenance window.

  7. Restart the database instance.
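
The console steps above can also be scripted. The following is a minimal AWS CLI sketch of steps 3 through 7; the instance identifier database-1, the parameter group name privacera-pgaudit-pg, and the postgres15 family are assumptions, so substitute the values for your environment (for Aurora, use an aurora-postgresql family instead).

    Bash
    # Create the parameter group (name and family are illustrative assumptions)
    aws rds create-db-parameter-group \
      --db-parameter-group-name privacera-pgaudit-pg \
      --db-parameter-group-family postgres15 \
      --description "pgaudit access audit settings"

    # Set the pgaudit parameters; shared_preload_libraries is a static parameter
    # and only takes effect after a reboot
    aws rds modify-db-parameter-group \
      --db-parameter-group-name privacera-pgaudit-pg \
      --parameters '[
        {"ParameterName":"pgaudit.log","ParameterValue":"all","ApplyMethod":"immediate"},
        {"ParameterName":"pgaudit.role","ParameterValue":"rds_pgaudit","ApplyMethod":"immediate"},
        {"ParameterName":"shared_preload_libraries","ParameterValue":"pg_stat_statements,pgaudit","ApplyMethod":"pending-reboot"}
      ]'

    # Associate the parameter group, enable the PostgreSQL log export to
    # CloudWatch Logs, and apply the changes immediately
    aws rds modify-db-instance \
      --db-instance-identifier database-1 \
      --db-parameter-group-name privacera-pgaudit-pg \
      --cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql"]}' \
      --apply-immediately

    # Restart the instance so the static parameters load
    aws rds reboot-db-instance --db-instance-identifier database-1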

Verification

To verify that your database instance logs are available, complete the following steps:

  1. From the Amazon RDS console, view the logs for your database instance.

  2. From the CloudWatch console, complete the following steps:

    • Navigate to Log management.
    • Locate the /aws/rds/ log group that corresponds to your database instance.
    • Select the log group name to confirm that a log stream exists for the database instance.
    • Select a log stream name to confirm that log messages are present.
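
If you prefer the command line, the following sketch performs the same check with the AWS CLI. It assumes the log group name follows the /aws/rds/instance/<DB_INSTANCE_NAME>/postgresql pattern shown later on this page; adjust it to match your instance.

    Bash
    LOG_GROUP="/aws/rds/instance/database-1/postgresql"   # assumption: replace with your log group

    # List the most recently active log streams in the group
    aws logs describe-log-streams \
      --log-group-name "$LOG_GROUP" \
      --order-by LastEventTime \
      --descending \
      --max-items 5

    # Show recent pgaudit entries to confirm that audit messages are present
    aws logs filter-log-events \
      --log-group-name "$LOG_GROUP" \
      --filter-pattern "AUDIT" \
      --max-items 10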

Create an AWS SQS Queue

To create an SQS queue used by an AWS Lambda function that you will create later, complete the following steps.

  1. In the AWS console, create an Amazon SQS queue.
  2. For Name, enter a name using the following format:

    Text Only
    privacera-postgres-<RDS_INSTANCE_NAME>-audits
    
    where <RDS_INSTANCE_NAME> is the name of your AWS RDS instance.

  3. After the queue is created, save the queue URL for later use.
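
As an alternative to the console, the queue can be created and its URL retrieved with the AWS CLI, as in the following sketch (the instance name database-1 is an assumption):

    Bash
    RDS_INSTANCE_NAME="database-1"   # assumption: replace with your RDS instance name

    # Create the queue
    aws sqs create-queue \
      --queue-name "privacera-postgres-${RDS_INSTANCE_NAME}-audits"

    # Retrieve the queue URL and save it for later use
    aws sqs get-queue-url \
      --queue-name "privacera-postgres-${RDS_INSTANCE_NAME}-audits"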

Specify an AWS Lambda Function

To create an AWS Lambda function to interact with the SQS queue, complete the following steps. In addition to creating the function, you must create a new IAM policy and associate a new IAM role with the function. You need to know your AWS account ID and AWS region to complete this procedure.

Create IAM Policy for Lambda Function

  1. From the IAM console, create a new IAM policy and input the following JSON:

    JSON
       {
         "Version": "2012-10-17",
         "Statement": [
           {
             "Effect": "Allow",
             "Action": "logs:CreateLogGroup",
             "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:*"
           },
           {
             "Effect": "Allow",
             "Action": [
               "logs:CreateLogStream",
               "logs:PutLogEvents"
             ],
             "Resource": [
               "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:/aws/lambda/<LAMBDA_FUNCTION_NAME>:*"
             ]
           },
           {
             "Effect": "Allow",
             "Action": "sqs:SendMessage",
             "Resource": "arn:aws:sqs:<REGION>:<ACCOUNT_ID>:<SQS_QUEUE_NAME>"
           }
         ]
     }
    
    Replace the following placeholders:

    • <REGION>: Your AWS region
    • <ACCOUNT_ID>: Your AWS account ID
    • <LAMBDA_FUNCTION_NAME>: The name of the AWS Lambda function you will create (for example, privacera-postgres-instance1-audits)
    • <SQS_QUEUE_NAME>: The name of the AWS SQS queue
  2. Specify a name for the IAM policy, such as privacera-postgres-audits-lambda-execution-policy, and then create the policy.
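
If you manage IAM from the command line, the same policy can be created with the AWS CLI; the following sketch assumes the JSON document above is saved locally as lambda-execution-policy.json.

    Bash
    # Create the Lambda execution policy from the JSON document shown above
    aws iam create-policy \
      --policy-name privacera-postgres-audits-lambda-execution-policy \
      --policy-document file://lambda-execution-policy.json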

Create IAM Role for Lambda Function

  1. From the IAM console, create a new IAM role and choose Lambda as the use case.

  2. Search for the IAM policy that you created in the previous section, such as privacera-postgres-audits-lambda-execution-policy, and select it.

  3. Specify a Role name, such as privacera-postgres-audits-lambda-execution-role, and then create the role.
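
An equivalent AWS CLI sketch is shown below; it assumes the policy name from the previous section and uses the <ACCOUNT_ID> placeholder as elsewhere on this page.

    Bash
    # Create the role with a trust policy that lets the Lambda service assume it
    aws iam create-role \
      --role-name privacera-postgres-audits-lambda-execution-role \
      --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

    # Attach the execution policy created in the previous section
    aws iam attach-role-policy \
      --role-name privacera-postgres-audits-lambda-execution-role \
      --policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/privacera-postgres-audits-lambda-execution-policy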

Create Lambda Function

  1. From the AWS Lambda console, create a new function and specify the following fields:
    • Function name: Specify a name for the function, such as privacera-postgres-instance1-audits.
    • Runtime: Select Node.js 24.x from the list.
    • Permissions: Select Use an existing role and choose the role created earlier in this procedure, such as privacera-postgres-audits-lambda-execution-role.
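
The function can also be created with the AWS CLI once the function code (added in a later step) is zipped as function.zip with the handler index.handler. The runtime identifier below (nodejs22.x) is an assumption; use the identifier that matches the runtime you selected in the console.

    Bash
    # Create the function using the execution role from the previous section
    aws lambda create-function \
      --function-name privacera-postgres-instance1-audits \
      --runtime nodejs22.x \
      --handler index.handler \
      --role arn:aws:iam::<ACCOUNT_ID>:role/privacera-postgres-audits-lambda-execution-role \
      --zip-file fileb://function.zip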

Add CloudWatch Logs Trigger

  1. Add a trigger to the function that you created in the previous step, select CloudWatch Logs from the list, and then specify the following values:
    • Log group: Select the log group path for your Amazon RDS database instance, such as /aws/rds/instance/database-1/postgresql.
    • Filter name: Specify auditTrigger.
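
A CLI sketch of the same trigger is shown below: it grants CloudWatch Logs permission to invoke the function and then creates the subscription filter. The log group path, region, and account placeholders are illustrative.

    Bash
    # Allow CloudWatch Logs to invoke the Lambda function
    aws lambda add-permission \
      --function-name privacera-postgres-instance1-audits \
      --statement-id cloudwatch-logs-audit-trigger \
      --action lambda:InvokeFunction \
      --principal logs.amazonaws.com \
      --source-arn "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:/aws/rds/instance/database-1/postgresql:*"

    # Stream the RDS PostgreSQL log group to the function
    aws logs put-subscription-filter \
      --log-group-name /aws/rds/instance/database-1/postgresql \
      --filter-name auditTrigger \
      --filter-pattern "" \
      --destination-arn arn:aws:lambda:<REGION>:<ACCOUNT_ID>:function:privacera-postgres-instance1-audits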

Add Lambda Function Code

  1. In the Lambda source code editor, provide the following JavaScript code in the index.mjs file, which is open by default in the editor:
JavaScript
import zlib from 'node:zlib';
import { promisify } from 'node:util';
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const gunzip = promisify(zlib.gunzip);
const sqsClient = new SQSClient({ region: process.env.REGION || 'us-east-1' });

export const handler = async (event) => {
    let awslogsData;

    try {
        // 1. Get the raw data from the event
        const rawData = event.awslogs.data;
        const buffer = Buffer.from(rawData, 'base64');

        // 2. Try to decompress. If it fails, try parsing as raw JSON
        try {
            const decompressed = await gunzip(buffer);
            awslogsData = JSON.parse(decompressed.toString('utf-8'));
            console.log("Successfully decompressed Gzip data.");
        } catch (e) {
            console.log("Data is not Gzipped. Attempting to parse as raw JSON...");
            awslogsData = JSON.parse(buffer.toString('utf-8'));
        }

        if (awslogsData.messageType !== 'DATA_MESSAGE') return { status: 'skipped' };

        // 3. Process the logs
        const sqsQueueURL = process.env.SQS_QUEUE_URL;

        // Validate required environment variable
        if (!sqsQueueURL) {
            throw new Error('SQS_QUEUE_URL environment variable is not set. Please configure it in Lambda environment variables.');
        }

        // Detect if queue is FIFO (FIFO queues end with .fifo)
        const isFifoQueue = sqsQueueURL.endsWith('.fifo');

        // Parse ignore database and users from environment variables
        const ignoreDatabase = process.env.IGNORE_DATABASE || '';
        const ignoreUsers = process.env.IGNORE_USERS || '';
        // Trim whitespace from each value and filter out empty entries
        const ignoreDatabaseArray = ignoreDatabase.split(',').map(db => db.trim()).filter(db => db !== '');
        const ignoreUsersArray = ignoreUsers.split(',').map(user => user.trim()).filter(user => user !== '');

        const promises = awslogsData.logEvents.map(async (log) => {
            // Filter logs to only include AUDIT: and STATEMENT: entries (matching connector expectations)
            const logMessage = log.message || '';
            const isAuditLog = logMessage.includes('AUDIT:');
            const isStatementLog = logMessage.includes('STATEMENT:');

            // Skip logs that don't contain AUDIT: or STATEMENT: prefixes
            if (!isAuditLog && !isStatementLog) {
                return;
            }

            // Check if message should be filtered based on ignore database/users
            let sendToSQS = true;
            const logMessageLower = logMessage.toLowerCase();

            // Check for ignored databases (pattern: "@database_name")
            if (sendToSQS && ignoreDatabaseArray.length > 0) {
                for (let i = 0; i < ignoreDatabaseArray.length; i++) {
                    if (logMessageLower.indexOf("@" + ignoreDatabaseArray[i].toLowerCase()) !== -1) {
                        sendToSQS = false;
                        break;
                    }
                }
            }

            // Check for ignored users (pattern: "username@")
            if (sendToSQS && ignoreUsersArray.length > 0) {
                for (let i = 0; i < ignoreUsersArray.length; i++) {
                    if (logMessageLower.indexOf(ignoreUsersArray[i].toLowerCase() + "@") !== -1) {
                        sendToSQS = false;
                        break;
                    }
                }
            }

            // Skip sending to SQS if filtered out
            if (!sendToSQS) {
                return;
            }

            const params = {
                QueueUrl: sqsQueueURL,
                MessageBody: JSON.stringify(log)
            };

            // Only include FIFO-specific parameters for FIFO queues
            if (isFifoQueue) {
                params.MessageDeduplicationId = log.id;
                params.MessageGroupId = "Audits";
            }

            const command = new SendMessageCommand(params);
            return sqsClient.send(command);
        });

        await Promise.all(promises);
        return { status: 'success' };

    } catch (error) {
        console.error("Fatal Error:", error);
        throw error;
    }
};

Note on AWS SDK

This code uses the AWS SDK for JavaScript v3 (@aws-sdk/client-sqs). If your Lambda runtime does not include it, you might need to add it as a dependency. For the Node.js 24.x runtime, you can add it through a Lambda layer or include it in your deployment package.

Configure Lambda Environment Variables

  1. For the Lambda function, edit the environment variables and create the following environment variables:
    • REGION: Specify your AWS region.
    • SQS_QUEUE_URL: Specify your AWS SQS queue URL.
    • IGNORE_DATABASE: Comma-separated list of database names whose audits you want to exclude (for example, privacera_db).
    • IGNORE_USERS: Comma-separated list of user names whose audits you want to exclude (for example, privacera).
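
The same variables can be set with the AWS CLI, as in the following sketch (the function name, region, and queue URL are illustrative):

    Bash
    # Set the environment variables used by the function code above
    aws lambda update-function-configuration \
      --function-name privacera-postgres-instance1-audits \
      --environment "Variables={REGION=us-east-1,SQS_QUEUE_URL=https://sqs.us-east-1.amazonaws.com/<ACCOUNT_ID>/privacera-postgres-database-1-audits,IGNORE_DATABASE=privacera_db,IGNORE_USERS=privacera}"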

Create an IAM Policy for an IAM Role Attached to an EC2 Instance

To enable Privacera to read messages from the AWS SQS queue, you need to create an IAM policy and attach it to the IAM role that is associated with the EC2 instance where Privacera is installed.

Procedure

  1. From the IAM console, create a new IAM policy and input the following JSON:

    JSON
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "sqs:DeleteMessage",
                    "sqs:GetQueueUrl",
                    "sqs:ListDeadLetterSourceQueues",
                    "sqs:ReceiveMessage",
                    "sqs:GetQueueAttributes"
                ],
                "Resource": "<SQS_QUEUE_ARN>"
            },
            {
                "Effect": "Allow",
                "Action": "sqs:ListQueues",
                "Resource": "*"
            }
        ]
    }
    
    where:

    • <SQS_QUEUE_ARN>: The ARN of the AWS SQS queue that you created earlier.
  2. Specify a name for the IAM policy, such as postgres-audits-sqs-read-policy, and create the policy.

  3. Attach the IAM policy to the IAM role that is attached to the AWS EC2 instance where you installed Privacera.
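
To confirm that the instance role can read the queue, you can run a quick check from the EC2 instance itself, as in the following sketch (the queue URL uses the same placeholders as earlier on this page):

    Bash
    QUEUE_URL="https://sqs.<REGION>.amazonaws.com/<ACCOUNT_ID>/privacera-postgres-database-1-audits"   # assumption

    # Confirm the role can read queue attributes
    aws sqs get-queue-attributes \
      --queue-url "$QUEUE_URL" \
      --attribute-names QueueArn ApproximateNumberOfMessages

    # Peek at one message; it becomes visible again after the visibility timeout
    aws sqs receive-message \
      --queue-url "$QUEUE_URL" \
      --max-number-of-messages 1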

Configure Connector

  1. SSH to the instance where Privacera Manager is installed.

  2. Run the following command to open the .yml file for editing.

    If you have multiple connectors, replace instance1 with the appropriate connector instance name.

    Bash
    vi ~/privacera/privacera-manager/config/custom-vars/connectors/postgres/instance1/vars.connector.postgres.yml
    
  3. Enable access audits by setting the following properties:

    YAML
    CONNECTOR_POSTGRES_AUDIT_ENABLE: "true"
    CONNECTOR_POSTGRES_AUDIT_SOURCE: "sqs"
    

  4. Configure the AWS RDS PostgreSQL audit properties:

    YAML
    # AWS SQS Queue Configuration
    CONNECTOR_POSTGRES_AUDIT_SQS_QUEUE_NAME: "privacera-postgres-<RDS_INSTANCE_NAME>-audits"
    CONNECTOR_POSTGRES_AUDIT_SQS_QUEUE_REGION: "us-east-1"
    

    Replace the following placeholder with your actual value:

    • <RDS_INSTANCE_NAME>: Your AWS RDS database instance name

  5. Once the properties are configured, run the following commands to update your Privacera Manager platform instance:

    Step 1 - Run setup, which generates the Helm charts. This step usually takes a few minutes.

    Bash
    cd ~/privacera/privacera-manager
    ./privacera-manager.sh setup
    
    Step 2 - Apply the Privacera Manager Helm charts.

    Bash
    cd ~/privacera/privacera-manager
    ./pm_with_helm.sh upgrade
    
    Step 3 - (Optional) Run the post-installation step, which generates the plugin tarball, updates Route 53 DNS, and so on. This step is not required if you are updating only connector properties.

    Bash
    cd ~/privacera/privacera-manager
    ./privacera-manager.sh post-install
    
Alternatively, if you manage the connector through PrivaceraCloud, configure the equivalent settings in the portal:

  1. In the PrivaceraCloud portal, navigate to Settings -> Applications.

  2. On the Connected Applications screen, select PostgreSQL.

  3. Click on the icon or the Account Name to modify the settings.

  4. On the Edit Application screen, go to Access Management.

  5. Under the BASIC tab:

    • Enable access audits: Turn this on to fetch access audits for the connector.
    • Audit source for postgres: Set the value to sqs for AWS RDS PostgreSQL.
    • AWS sqs queue name: Enter your SQS queue name (privacera-postgres-<RDS_INSTANCE_NAME>-audits).
    • AWS region of sqs queue: Enter your AWS region (e.g., us-east-1).
  6. Click SAVE to apply the changes.