
FAQ and Troubleshooting#

Privacera Manager#

Unable to Connect to Docker#

Problem: You are unable to connect to Docker. This can have several different causes.

Solution: Check the following:

  1. Make sure the user account running Docker is part of the docker group. Test it by running the following Linux command:

    id

    Output: uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal),991(docker)
  2. If the docker group is not listed in the output, add the user to the docker group. See Adding OS user to docker group.

  3. If steps 1 and 2 don’t solve the issue, exit the shell and log back in.
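The checks in steps 1 and 2 can be sketched as a single command sequence (a minimal sketch; it only prints the fix rather than running it, since usermod needs sudo and the change takes effect only after re-login):

```shell
# Check whether the current user belongs to the docker group.
if id -nG | grep -qw docker; then
  echo "docker group: ok"
else
  # Adding a group requires sudo, and the change takes effect only
  # after logging out and back in (or running: newgrp docker).
  echo "fix: sudo usermod -aG docker $(id -un), then re-login"
fi
```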

Terminate Installation#

Problem: Privacera Manager is either not responding or taking too long to complete the installation process.

Cause: Poor connectivity to Docker Hub, or an SSL-related issue.

Solution: In the terminal, press CTRL+C (or a similar interrupt key sequence) while the update is running. Privacera Manager will stop and either roll back the installation or warn about an incomplete installation.

Ansible Kubernetes Module does not load#

Problem: During the installation of EKS, the following exception is displayed by Ansible.


An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'kubernetes' fatal: [privacera1]: FAILED! => changed=false error: No module named 'kubernetes' msg: Failed to import the required Python library (openshift) on ip-10-211-24-82.ec2.internal's Python /usr/bin/python3.

Solution: Restart the installation by running the Privacera Manager update.
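Before rerunning the update, you can check whether the Python libraries named in the error are importable on the control host (a sketch; the pip3 hint is a suggestion, not a Privacera-documented step):

```shell
# Report whether the modules from the Ansible error can be imported.
for mod in kubernetes openshift; do
  if python3 -c "import importlib.util, sys; sys.exit(0 if importlib.util.find_spec('$mod') else 1)" 2>/dev/null; then
    echo "$mod: installed"
  else
    echo "$mod: missing (try: pip3 install $mod)"
  fi
done
```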

Common Errors/Warnings in YAML Config Files#

When you run the yaml_check command, it analyzes the YAML files and displays any errors/warnings, if found. For more information on the command, see Verify YAML Config Files.

The following table lists the error/warning messages that will be displayed when you run the check.

| Error/Warning Message | Description | Solution |
|---|---|---|
| warning: too many blank lines (1 > 0) (empty-lines) | There are empty lines in the YAML file. | Review the file in config/custom-vars/ and remove the empty lines. |
| error: too many spaces before colon (colons) | Extra space(s) found before the colon (:) at line X; they need to be removed. | Review the file in config/sample-vars/ and remove the space before the colon. |
| error: string value is not quoted with any quotes (quoted-strings) | A variable value at line X is not quoted. | Review the file in config/sample-vars/ and add the quotes. |
| error: syntax error: expected <block end>, but found '{' (syntax) | Syntax errors were found in the YAML file. | Review the variables in the file in config/custom-vars/; it could be a missing quote (') or bracket (}). |
| warning: too few spaces before comment (comments) | A space is missing between the variable and the comment. | Start the comment after at least one space, or on the next line. |
| error: duplication of key "AWS_REGION" in mapping (key-duplicates) | Variable 'X' has been used twice in the file. | Review the file and remove one of the duplicate variables. |
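As an illustration (the file names and value are made up), here is a variable line that trips the colon and quoting checks above, and its corrected form:

```shell
# Triggers "too many spaces before colon" and "string value is not quoted":
printf 'AWS_REGION : us-east-1\n' > /tmp/bad-example.yml

# Corrected: no space before the colon, value quoted:
printf 'AWS_REGION: "us-east-1"\n' > /tmp/good-example.yml

cat /tmp/good-example.yml
```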

Delete Old Unused Privacera Docker Images#

Every time you upgrade Privacera, it pulls a Docker image with the new version. Unused images can take up unnecessary disk space; you can free this space by deleting all the old unused images.

Problem: You're trying to pull a new image, but you get the following error:


2d473b07cdd5: Pull complete 
2253a1066f45: Extracting 1.461GB/1.461GB
failed to register layer: Error processing tar file(exit status 1): write /aws/dist/botocore/data/s3/2006-03-01/service-2.json: no space left on device


Solution: Delete the unused images as follows:

  1. List all the images available on the disk.

    docker images
  2. Remove all those images not associated with a container.

    docker image prune -a
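To confirm the prune actually freed space, you can compare disk usage before and after (a sketch; the prune runs only if Docker is installed, and --force skips the confirmation prompt):

```shell
# Free space on the root filesystem before cleanup.
df -h / | tail -1

# Remove all images not associated with a container (non-interactive).
if command -v docker >/dev/null; then
  docker image prune -a --force
else
  echo "docker not found on this host"
fi

# Free space after cleanup.
df -h / | tail -1
```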

Unable to Debug Error for an Ansible Task#

Problem: The Privacera installation/update fails due to the following exception, and you're unable to view or debug the Ansible error.


fatal: [privacera]: FAILED! => censored: 'the output has been hidden due to the fact that ''no_log: true'' was specified for this result'


Solution: Temporarily set GLOBAL_NO_LOG to false so the full error is displayed:

  1. Create a no_log.yml file.

    vi ~/privacera/privacera-manager/config/custom-vars/no_log.yml
  2. Add the GLOBAL_NO_LOG property.

    GLOBAL_NO_LOG: "false"
  3. Run the update.

    cd ~/privacera/privacera-manager/

Unable to upgrade from 4.x to 5.x#

Problem: The Privacera upgrade from 4.x to 5.x fails due to the following exceptions.


Unable to load database on disk No snapshot found, but there are log entries. Something is broken!

Unexpected exception, exiting abnormally

java.lang.RuntimeException: Unable to run quorum server

Cause: Zookeeper pod/container was stopped before performing the upgrade.


Solution: Before starting the upgrade, do the following:

  1. Ensure the Zookeeper pod/container is up and running. If it was stopped, restart it and run the update again.

  2. If step 1 does not resolve the upgrade issue, do the following, depending on your deployment:

    In Docker:

    1. Run the following commands.

      cd ~/privacera/docker/
      ./privacera_services down
      mkdir -p ../backup/zooData
      cp -r data/zoo-1/data ../backup/zooData/
      mv data/zoo-1/data/* data/zoo-1/datalog/
      mkdir -p data/zoo-1/data/version-2
      cp ../privacera-manager/ansible/privacera-docker/roles/files/zookeeper/snapshot.0 data/zoo-1/data/version-2
    2. Run the update.

      cd ~/privacera/privacera-manager/
      ./ update
    In Kubernetes:

    1. Edit zookeeper-statefulset.yml.

      cd ~/privacera/privacera-manager/output/kubernetes/helm/zookeeper/templates
      vi zookeeper-statefulset.yml +57
    2. Add the following line. It should be before the / start-foreground line.

      sleep 120 && \ 
    3. Save and exit the file.

    4. In the following command, enter your Kubernetes namespace in $YOUR_NAMESPACE and then run it.

      kubectl apply -f zookeeper-statefulset.yml -n $YOUR_NAMESPACE
      cd ~/privacera/privacera-manager/ansible/privacera-docker/roles/files/zookeeper/
      kubectl -n $YOUR_NAMESPACE cp snapshot.0 zk-0:/store/data/version-2/
    5. Run the update.

      cd ~/privacera/privacera-manager/
      ./ update

Portal Service#

Remove the WhiteLabel Error Page error#

Problem: Privacera Portal cannot be accessed; a WhiteLabel Error Page message is displayed instead.

Solution: Add the SAML_MAX_AUTH_AGE_SEC and SAML_FORCE_AUTHN properties by performing the following steps:

  1. Run the following command.

    cd privacera/privacera-manager
    cp config/sample-vars/vars.portal.yml config/custom-vars
    vi config/custom-vars/vars.portal.yml
  2. Add the following properties with their values.

    SAML_MAX_AUTH_AGE_SEC: "7889400"
    SAML_FORCE_AUTHN: "true"
  3. Run the update.

    cd ~/privacera/privacera-manager
    ./ update

Unable to Start the Portal Service#

Problem: The Portal service cannot start because the Tomcat server fails to start, and the following log is generated:

liquibase.exception.LockException: Could not acquire change log lock. Currently locked by portal-f957f5997-jnb7v ( 


Solution: Release the Liquibase changelog lock as follows:

  1. Scale down Ranger and Portal.

  2. Connect to your Postgres database. For example, privacera_db.

  3. Run the following command.

  4. Close the database connection.

  5. Scale up Ranger.

  6. Scale up Portal.
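The command elided in step 3 is presumably the same lock-release statement that the Docker database-lockup section below runs against MariaDB; a hedged sketch for a Postgres database named privacera_db (table and column names follow Liquibase's standard schema; connection options are assumptions about your setup):

```shell
# Liquibase's standard lock-release statement.
SQL='UPDATE databasechangeloglock SET locked = FALSE, lockgranted = NULL, lockedby = NULL WHERE id = 1;'
echo "$SQL"

# Run it only if psql is available on this host (connection details assumed).
if command -v psql >/dev/null; then
  psql -d privacera_db -c "$SQL"
fi
```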

Database Lockup in Docker#

Problem: Privacera services are not starting.

Cause: The database used by Privacera services could be locked up. This could happen due to an improper or abrupt shutdown of the Privacera Manager host machine.


Solution: Remove the database lock as follows:

  1. SSH to the machine where Privacera is installed.

  2. SSH to the database container shell.

    cd privacera/docker
    ./privacera_services shell mariadb
  3. Run the following command. It will prompt for a password. This will give you access to the MySQL database.

    mysql -p
  4. List all the databases.

    show databases;
  5. From the list, select privacera_db database.

    use privacera_db;
  6. Query the DATABASECHANGELOGLOCK table. You will see that the value is 1 or greater under the LOCKED column.

    select * from DATABASECHANGELOGLOCK;

  7. Remove the database lock.

    update DATABASECHANGELOGLOCK set locked=0, lockgranted=null, lockedby=null where id=1;
  8. Exit MySQL shell.

  9. Exit Docker container.

  10. Restart Privacera services.

    ./privacera_services restart

Grafana Service#

Unable to See Metrics on Grafana Dashboard#

Problem: You're unable to see metrics on the Grafana dashboard. When you check the logs, the following exception is displayed.


[console] Error creating stats.timers.view.graphite.errors.POST.median: [Errno 28] No space left on device

[console] Unhandled Error



Solution: Increase the persistent volume claim (PVC) size. The following steps apply to Grafana deployed in a Kubernetes environment.

  1. Open vars.grafana.yml.

    cd ~/privacera/privacera-manager/
    cp config/sample-vars/vars.grafana.yml config/custom-vars/
    vi config/custom-vars/vars.grafana.yml
  2. Add the GRAFANA_K8S_PVC_STORAGE_SIZE_MB and GRAPHITE_K8S_PVC_STORAGE_SIZE_MB properties. The property values are in megabytes (MB).

  3. Run the update.

    cd ~/privacera/privacera-manager/

Audit Server#

Unable to view the audits#

Problem: You have configured Audit Server to receive the audits, but they are not visible.

Solution: Enable the application logs of Audit Server and debug the problem.

To debug the application logs of Audit Server, do the following:

In Docker:

  1. SSH to the instance as USER.

  2. Run the following command.

    cd ~/privacera/docker/
    vi privacera/auditserver/conf/
  3. At line 7, change INFO,logfile to DEBUG,logfile.
  4. If you want to enable debugging outside the Privacera package, change line 4 from WARN to DEBUG.

  5. Save the file.

  6. Restart Audit Server.

    ./privacera_services restart auditserver
In Kubernetes:

  1. SSH to the instance as USER.

  2. Run the following command. Replace ${NAMESPACE} with your Kubernetes namespace.

    kubectl edit cm auditserver-cm-conf -n ${NAMESPACE}
  3. At line 47, edit the logger property and change it to DEBUG mode (INFO,logfile becomes DEBUG,logfile).
  4. At line 44, enable DEBUG at root level.

  5. Save the file.

  6. Restart Audit Server.

    kubectl rollout restart statefulset auditserver -n ${NAMESPACE}

Audit Fluentd#

Unable to view the audits#

Problem: You have configured Audit Fluentd to receive the audits, but they are not visible.

Solution: Enable the application logs of Audit Fluentd and debug the problem.

To view the application logs of Audit Fluentd, do the following:

  1. SSH to the instance as User.

  2. Run one of the following commands, depending on your deployment.

    In Docker:

    cd ~/privacera/docker
    ./privacera_services logs audit-fluentd -f

    In Kubernetes (replace $YOUR_NAMESPACE with your Kubernetes namespace):

    kubectl logs audit-fluentd-0 -f -n $YOUR_NAMESPACE