
Privacera Documentation


Privacera Access Management preserves audit records for all data accesses and important access policy-related changes. Administrators can use the built-in audit store, audit browser, and search capabilities to:

  • Track recent access control enforcement decisions.

  • View recent changes to policies, resources, security principals and entitlements.

  • Monitor policy and user synchronization operations across systems under management.

The underlying Apache Solr audit data store is openly accessible, so audit records can be extracted and forwarded to systems that better fit a customer's requirements for long-term audit management.
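As a sketch of such an extraction, the snippet below builds a Solr query for recent audit records and hands each record to a forwarder. The collection name (`ranger_audits`), the timestamp field (`evtTime`), and the endpoint layout are assumptions based on typical Apache Ranger deployments, not guaranteed by this document; verify them against your Solr schema before relying on them.

```javascript
// Sketch: build a query URL for extracting recent audit records from the
// Solr audit store. The collection name ("ranger_audits") and the timestamp
// field ("evtTime") are assumptions; check your deployment's Solr schema.
function buildAuditExportUrl(solrBaseUrl, sinceIso, rows) {
    const params = new URLSearchParams({
        q: 'evtTime:[' + sinceIso + ' TO NOW]',  // records newer than sinceIso
        sort: 'evtTime asc',
        rows: String(rows),
        wt: 'json'
    });
    return solrBaseUrl + '/solr/ranger_audits/select?' + params.toString();
}

// Fetch one page of audit records and pass each one to a forwarder of your
// choice (SIEM, S3, message queue, ...). Requires Node.js 18+ for fetch.
async function exportAudits(solrBaseUrl, sinceIso, forward) {
    const res = await fetch(buildAuditExportUrl(solrBaseUrl, sinceIso, 1000));
    const body = await res.json();
    for (const doc of body.response.docs) {
        await forward(doc);
    }
}
```

For larger extractions you would page through results (Solr cursors or `start`/`rows`) rather than fetching a single page.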

The Audits page includes information under the following categories:

  • Access: Each access (or denial) to a managed data repository.

  • Admin: Portal administrative activity including revisions to policies.

  • Login Sessions: Logins to your PrivaceraCloud account web portal.

  • Plugin: Logged status for each synchronization exchange with a data access plug-in component.

  • Plugin Status: Logged status updates from each data access plug-in component.

  • UserSync: Logged user updates from LDAP/AD service.

  • PolicySync: Logged queries to data resources integrated using the PolicySync method.

Categories are selected using the top navigation tabs. A date filter appears in the upper right. By default, it shows the last 7 days, but it can be set to other intervals and custom date ranges.

Required permissions to view audit logs on Privacera Platform

To view the audit page, you must be assigned either the ROLE_ADMIN or ROLE_AUDITOR role.

Anyone who can access the audit page can view all access audit log records for all data objects under management, including objects in all security zones.

Some PolicySync connectors, when collecting audit records, are unable to annotate the audit record with the security zone(s) of tables referenced in each query. Audit records from those connectors do not specify security zone information. It may therefore be impractical to rely on filtering audit records based on security zone.

See the documentation for each connector for details on any audit limitations.

About PolicySync access audit records and policy ID on Privacera Platform

For data sources where Ranger plugins make policy decisions, those plugins can log the specific policy that was enforced, and the Policy ID column is populated with a link to the relevant policy.

You can configure your AWS account to allow Privacera to access your RDS PostgreSQL instance audit logs through Amazon CloudWatch Logs. To enable this functionality, you must make the following changes in your account:

  • Update the AWS RDS parameter group for the database

  • Create an AWS SQS queue

  • Specify an AWS Lambda function

  • Create an IAM role for an EC2 instance

Update the AWS RDS parameter group for the database

To expose access audit logs, you must update configuration for the data source.


  1. Log in to your AWS account.

  2. To create a role for audits, run the following SQL statement as a user with administrative credentials for your data source:

    CREATE ROLE rds_pgaudit;
  3. Create a new parameter group for your database and specify the following values:

    • Parameter group family: Select a database from either the aurora-postgresql or postgres families.

    • Type: Select DB Parameter Group.

    • Group name: Specify a group name for the parameter group.

    • Description: Specify a description for the parameter group.

  4. Edit the parameter group that you created in the previous step and set the following values:

    • pgaudit.log: Specify all, overwriting any existing value.

    • shared_preload_libraries: Specify pg_stat_statements,pgaudit.

    • pgaudit.role: Specify rds_pgaudit.

    • pgaudit.log_rotation: Specify 1 (true).

  5. Associate the parameter group that you created with your database. Modify the configuration for the database instance and make the following changes:

    • DB parameter group: Specify the parameter group you created in this procedure.

    • PostgreSQL log: Ensure this option is set to enable logging to Amazon CloudWatch Logs.

  6. When prompted, choose the option to immediately apply the changes you made in the previous step.

  7. Restart the database instance.


To verify that your database instance logs are available, complete the following steps:

  1. From the Amazon RDS console, view the logs for your database instance.

  2. From the CloudWatch console, complete the following steps:

    1. Find the /aws/rds/cluster/* log group that corresponds to your database instance.

    2. Click the log group name to confirm that a log stream exists for the database instance, and then click on a log stream name to confirm that log messages are present.

Create an AWS SQS queue

To create an SQS queue used by an AWS Lambda function that you will create later, complete the following steps.

  1. From the AWS console, create a new Amazon SQS queue with the default settings. Use the following format when specifying a value for the Name field:



    • RDS_CLUSTER_NAME: Specifies the name of your RDS cluster.

  2. After the queue is created, save the queue URL for later use.

Specify an AWS Lambda function

To create an AWS Lambda function to interact with the SQS queue, complete the following steps. In addition to creating the function, you must create a new IAM policy and associate a new IAM role with the function. You need to know your AWS account ID and AWS region to complete this procedure.

  1. From the IAM console, create a new IAM policy and input the following JSON:

        "Version": "2012-10-17",
        "Statement": [
                "Effect": "Allow",
                "Action": "logs:CreateLogGroup",
                "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:*"
                "Effect": "Allow",
                "Action": [
                "Resource": [
                "Effect": "Allow",
                "Action": "sqs:SendMessage",
                "Resource": "arn:aws:sqs:<REGION>:<ACCOUNT_ID>:<SQS_QUEUE_NAME>"


    • REGION: Specify your AWS region.

    • ACCOUNT_ID: Specify your AWS account ID.

    • LAMBDA_FUNCTION_NAME: Specify the name of the AWS Lambda function, which you will create later. For example: privacera-postgres-cluster1-audits

    • SQS_QUEUE_NAME: Specify the name of the AWS SQS Queue.

  2. Specify a name for the IAM policy, such as privacera-postgres-audits-lambda-execution-policy, and then create the policy.

  3. From the IAM console, create a new IAM role and choose Lambda as the Use case.

  4. Search for the IAM policy that you created earlier, such as privacera-postgres-audits-lambda-execution-policy, and select it.

  5. Specify a Role name for the IAM policy, such as privacera-postgres-audits-lambda-execution-role, and then create the role.

  6. From the AWS Lambda console, create a new function and specify the following fields:

    • Function name: Specify a name for the function, such as privacera-postgres-cluster1-audits.

    • Runtime: Select Node.js 12.x from the list.

    • Permissions: Select Use an existing role and choose the role created earlier in this procedure, such as privacera-postgres-audits-lambda-execution-role.

  7. Add a trigger to the function you created in the previous step and select CloudWatch Logs from the list, and then specify the following values:

    • Log group: Select the log group path for your Amazon RDS database instance, such as /aws/rds/cluster/database-1/postgresql.

    • Filter name: Specify auditTrigger.

  8. In the Lambda source code editor, provide the following JavaScript code in the index.js file, which is open by default in the editor:

    var zlib = require('zlib');
    // CloudWatch logs encoding
    var encoding = process.env.ENCODING || 'utf-8';  // default is utf-8
    var awsRegion = process.env.REGION || 'us-east-1';
    var sqsQueueURL = process.env.SQS_QUEUE_URL;
    var ignoreDatabase = process.env.IGNORE_DATABASE;
    var ignoreUsers = process.env.IGNORE_USERS;
    var ignoreDatabaseArray = ignoreDatabase.split(',');
    var ignoreUsersArray = ignoreUsers.split(',');
    // Import the AWS SDK
    const AWS = require('aws-sdk');
    // Configure the region
    AWS.config.update({region: awsRegion});

    exports.handler = function (event, context, callback) {
        // CloudWatch delivers log data base64-encoded and gzip-compressed
        var zippedInput = Buffer.from(event.awslogs.data, 'base64');
        zlib.gunzip(zippedInput, function (e, buffer) {
            if (e) {
                callback(e);
                return;
            }
            var awslogsData = JSON.parse(buffer.toString(encoding));
            // Create an SQS service object
            const sqs = new AWS.SQS({apiVersion: '2012-11-05'});
            if (awslogsData.messageType === 'DATA_MESSAGE') {
                awslogsData.logEvents.forEach(function (log) {
                    // Check whether the message falls under ignored databases/users
                    var sendToSQS = true;
                    for (var i = 0; i < ignoreDatabaseArray.length; i++) {
                        if (log.message.toLowerCase().indexOf("@" + ignoreDatabaseArray[i]) !== -1) {
                            sendToSQS = false;
                            break;
                        }
                    }
                    if (sendToSQS) {
                        for (var i = 0; i < ignoreUsersArray.length; i++) {
                            if (log.message.toLowerCase().indexOf(ignoreUsersArray[i] + "@") !== -1) {
                                sendToSQS = false;
                                break;
                            }
                        }
                    }
                    if (sendToSQS) {
                        let sqsOrderData = {
                            MessageBody: JSON.stringify(log),
                            MessageGroupId: "Audits",
                            QueueUrl: sqsQueueURL
                        };
                        // Send the log event to the SQS queue
                        let sendSqsMessage = sqs.sendMessage(sqsOrderData).promise();
                        sendSqsMessage.then((data) => {
                            console.log("Sent to SQS");
                        }).catch((err) => {
                            console.log("Error in Sending to SQS = " + err);
                        });
                    }
                });
            }
            callback(null, 'Done');
        });
    };
  9. For the Lambda function, edit the environment variables and create the following environment variables:

    • REGION: Specify your AWS region.

    • SQS_QUEUE_URL: Specify your AWS SQS queue URL.

    • IGNORE_DATABASE: Specify privacera_db.

    • IGNORE_USERS: Specify your database administrative user, such as privacera.
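The effect of the IGNORE_DATABASE and IGNORE_USERS variables can be sketched in isolation as follows. The "user@database" matching mirrors how the Lambda function scans each log message; the sample messages in the test are illustrative only, since the exact pgaudit message shape depends on your log configuration.

```javascript
// Sketch of the ignore-filter decision applied to each CloudWatch log event.
// Returns true if the message should be forwarded to SQS, false if it matches
// an ignored database ("...@<db>") or an ignored user ("<user>@...").
function shouldSendToSQS(message, ignoreDatabases, ignoreUsers) {
    const lower = message.toLowerCase();
    // Drop events that reference an ignored database (e.g. "...@privacera_db")
    for (const db of ignoreDatabases) {
        if (lower.indexOf('@' + db) !== -1) return false;
    }
    // Drop events generated by an ignored user (e.g. "privacera@...")
    for (const user of ignoreUsers) {
        if (lower.indexOf(user + '@') !== -1) return false;
    }
    return true;
}
```

This filtering keeps Privacera's own policy-management queries against privacera_db, and queries by the administrative user, out of the audit stream.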

Create an IAM role for an EC2 instance

To create an IAM role for the AWS EC2 instance where you installed Privacera so that Privacera can read the AWS SQS queue, complete the following steps:

  1. From the IAM console, create a new IAM policy and input the following JSON:

        "Version": "2012-10-17",
        "Statement": [
                "Effect": "Allow",
                "Action": [
                "Resource": "<SQS_QUEUE_ARN>"
                "Effect": "Allow",
                "Action": "sqs:ListQueues",
                "Resource": "*"


    • SQS_QUEUE_ARN: Specifies the ARN of the AWS SQS queue that you created earlier.

  2. Specify a name for the IAM policy, such as postgres-audits-sqs-read-policy, and create the policy.

  3. Attach the IAM policy to the AWS EC2 instance where you installed Privacera.