Troubleshooting
How to validate installation
You can run validations to ensure that all pre-installation conditions are satisfied and that, after installation, the services are up and running with the expected functionality. See Pre-installation Validation and Post-installation Validation to learn more.
Possible Errors and Solutions in Privacera Manager
Unable to Connect to Docker
Problem: Privacera Manager cannot connect to Docker. This can have several causes.
Solution: Check the following:
Make sure the user account running the installation is part of the docker group. Check it by running the following Linux command:
id
The output should include the docker group, for example:
uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal),991(docker)
Make sure that you have added your user to the docker group (a command example follows this list).
If steps 1 and 2 don’t solve the issue, exit the shell and log back in.
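If the docker group is missing from the id output, the user can be added with the standard Linux command below (a sketch; replace ec2-user with your user), followed by logging out and back in:
sudo usermod -aG docker ec2-user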
Terminate Installation
Problem: Privacera Manager is either not responding or taking too long to complete the installation process.
Cause: Either poor connectivity to Docker Hub or an SSL-related issue.
Solution: In the terminal, press CTRL+C (or a similar interrupt key sequence) while ./privacera-manager.sh update is running. Privacera Manager stops running and either rolls back the installation or warns about an incomplete installation.
6.5 Platform Installation fails with invalid apiVersion
Error message:
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
Problem: This error occurs when the AWS CLI version is incompatible with your current AWS EKS version. The AWS CLI has been upgraded to Version 2 in PM image version rel_6.5, so the kubeconfig file should have apiVersion v1beta1, not v1alpha1.
Check version: The AWS CLI version in PM can be checked by running the following commands:
cd ~/privacera/privacera-manager
./privacera-manager.sh shell aws --version
There are two possible ways to resolve the version mismatch:
Upgrade the AWS CLI on the client/edge node to Version 2.
Update the kubeconfig file with the following command:
aws eks update-kubeconfig --name <Cluster_Name> --region <Cluster_Region>
This updates the apiVersion to client.authentication.k8s.io/v1beta1 in the kubeconfig file.
Create the kube folder under the credentials folder:
cd ~/privacera/privacera-manager
mkdir -p credentials/kube
Copy the kubeconfig file from your home directory:
cp ~/.kube/config credentials/kube
In the kubeconfig file in the credentials folder, replace v1alpha1 with v1beta1 (an example of the corrected exec section appears after these steps):
vi credentials/kube/config
Update PM:
cd ~/privacera/privacera-manager
./privacera-manager.sh update
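For reference, after the fix the users section of the kubeconfig typically contains an exec block like the following (a minimal illustration only; the cluster name, region, and argument order in your file may differ):
users:
- name: <Cluster_ARN>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - --region
        - <Cluster_Region>
        - eks
        - get-token
        - --cluster-name
        - <Cluster_Name>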
Ansible Kubernetes Module does not load
Problem: During the installation of EKS, the following exception is displayed by Ansible:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'kubernetes'
fatal: [privacera1]: FAILED! => changed=false
error: No module named 'kubernetes'
msg: Failed to import the required Python library (openshift) on ip-10-211-24-82.ec2.internal's Python /usr/bin/python3
Solution: Restart the installation by running the Privacera Manager update.
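If rerunning the update does not clear the error, installing the missing Python libraries on the host named in the message may help (offered as a hint, not an official step):
pip3 install kubernetes openshift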
Unable to connect to Kubernetes Cluster
Problem: Privacera Manager installation fails because it cannot connect to your Kubernetes cluster, even though the kubeconfig file (~/.kube/config) is configured correctly and grants access.
Cause: The kubeconfig file may be missing from the location ~/privacera/privacera-manager/credentials/kube.
Solution: Copy the kubeconfig file from ~/.kube/config to ~/privacera/privacera-manager/credentials/kube, as shown below.
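For example (a direct sketch of the copy described above):
mkdir -p ~/privacera/privacera-manager/credentials/kube
cp ~/.kube/config ~/privacera/privacera-manager/credentials/kube/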
Common Errors/Warnings in YAML Config Files
When you run the yaml_check command, it analyzes the YAML files and displays any errors or warnings found.
The following table lists the error/warning messages that will be displayed when you run the check.
Error/Warning Message | Description | Solution
---|---|---
warning too many blank lines (1 > 0) (empty-lines) | There are empty lines in the YAML file. | Review the file config/custom-vars/vars.xxx.yml and remove the empty lines.
error too many spaces before colon (colons) | Extra space(s) found before the colon (:) at line X. | Review the file config/custom-vars/vars.xxx.yml and remove the space before the colon.
error string value is not quoted with any quotes (quoted-strings) | A variable value at line X is not quoted. | Review the file config/custom-vars/vars.xxx.yml and add the quotes.
error syntax error: expected <block end>, but found '{' (syntax) | Syntax errors found in the YAML file. | Review the variables in the file config/custom-vars/vars.xxx.yml; it could be a missing quote (') or bracket (}).
warning too few spaces before comment (comments) | A space is missing between the variable and the comment. | A comment should start after a single space or on the next line.
error duplication of key "AWS_REGION" in mapping (key-duplicates) | The named variable has been used twice in the file. | Review the file and remove one of the duplicate variables.
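For illustration, the colons and quoted-strings checks would flag the first line below and accept the second (the variable name is hypothetical):
EXAMPLE_VAR : privacera    # space before the colon, value not quoted
EXAMPLE_VAR: "privacera"   # corrected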
Delete old unused Privacera Docker images
Every time you upgrade Privacera, it pulls a Docker image for the new version. Unused images can take up unnecessary disk space; you can free this space by deleting all the old, unused images.
Problem: You're trying to pull a new image, but you get the following error:
2d473b07cdd5: Pull complete
2253a1066f45: Extracting 1.461GB/1.461GB
failed to register layer: Error processing tar file(exit status 1): write /aws/dist/botocore/data/s3/2006-03-01/service-2.json: no space left on device
Solution
List all the images available on the disk.
docker images
Remove all those images not associated with a container.
docker image prune -a
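To see how much space images and other Docker artifacts occupy before and after the cleanup, the standard command below can be used (an optional check, not part of the documented steps):
docker system df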
Unable to debug error for an Ansible task
Problem: The Privacera installation/update fails due to the following exception, and you're unable to view or debug the Ansible error.
fatal: [privacera]: FAILED! => censored: 'the output has been hidden due to the fact that ''no_log: true'' was specified for this result'
Solution
Create a no_log.yml file.
vi ~/privacera/privacera-manager/config/custom-vars/no_log.yml
Add the GLOBAL_NO_LOG property.
GLOBAL_NO_LOG: "false"
Run the update.
cd ~/privacera/privacera-manager/
./privacera-manager.sh update
Unable to upgrade from 4.x to 5.x or 6.x due to Zookeeper snapshot issue
Problem: The Privacera upgrade from 4.x to 6.x fails due to the following exceptions.
Exception
Unable to load database on disk
java.io.IOException: No snapshot found, but there are log entries. Something is broken!
Unexpected exception, exiting abnormally
java.lang.RuntimeException: Unable to run quorum server
Cause: Zookeeper pod/container stopped before performing the upgrade.
Solution
Before starting the upgrade, do the following:
Ensure the Zookeeper pod/container is up and running (see the check after this step). If it was stopped, restart it and run the update again.
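To verify, you can list the running pods or containers with the standard commands below (a sketch; adjust the namespace to yours).
Kubernetes
kubectl get pods -n <YOUR_NAMESPACE>
Docker
docker ps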
If Step 1 does not resolve the upgrade issue, do the following:
Docker
Run the following commands.
cd ~/privacera/docker/
./privacera_services down
mkdir -p ../backup/zooData
cp -r data/zoo-1/data ../backup/zooData/
mv data/zoo-1/data/* data/zoo-1/datalog/
mkdir -p data/zoo-1/data/version-2
cp ../privacera-manager/ansible/privacera-docker/roles/files/zookeeper/snapshot.0 data/zoo-1/data/version-2/
Run the update.
cd ~/privacera/privacera-manager/
./privacera-manager.sh update
Kubernetes
Edit zookeeper-statefulset.yml.
cd ~/privacera/privacera-manager/output/kubernetes/helm/zookeeper/templates
vi zookeeper-statefulset.yml +57
Add the following line. It should be placed before the /docker-entrypoint.sh zkServer.sh start-foreground line.
sleep 120 && \
Save and exit the file.
In the following commands, enter your Kubernetes namespace in $YOUR_NAMESPACE and then run them.
kubectl apply -f zookeeper-statefulset.yml -n $YOUR_NAMESPACE
cd ~/privacera/privacera-manager/ansible/privacera-docker/roles/files/zookeeper/
kubectl -n $YOUR_NAMESPACE cp snapshot.0 zk-0:/store/data/version-2/
Run the update.
cd ~/privacera/privacera-manager/
./privacera-manager.sh update
Storage issue in Privacera UserSync & PolicySync
Problem: You are facing a storage issue with Privacera UserSync and PolicySync in your Kubernetes environment.
Cause: Prior to Privacera release 6.2, the storage size was fixed at 5 GB.
Solution:
You need to increase the storage space to 11 GB. Follow the steps below:
Create a vars.policysyncv2-custom.yml file in the custom-vars folder.
vi ~/privacera/privacera-manager/config/custom-vars/vars.policysyncv2-custom.yml
Add the following variables:
POLICYSYNC_V2_ROCKSDB_K8S_PVC_STORAGE_SIZE_MB: "11264"
PRIVACERA_USERSYNC_ROCKSDB_K8S_PVC_STORAGE_SIZE_MB: "11264"
Run the update.
cd ~/privacera/privacera-manager/
./privacera-manager.sh update
Permission Denied Errors in PM Docker Installation
Problem: After a Privacera Manager Docker installation, some containers can fail to come up and the application log may show permission denied errors.
Cause: The host user running the installation does not have user ID 1000 (you can confirm this with the check below).
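A quick way to check the current user's numeric ID is the standard Linux command below.
id -u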
Solution:
Copy vars.docker.custom.user.yml from sample-vars to custom-vars:
cd ~/privacera/privacera-manager/
cp config/sample-vars/vars.docker.custom.user.yml config/custom-vars/
Run the update.
./privacera-manager.sh update
Unable to initialize the Discovery Kubernetes pod
Problem: During Privacera Manager update or installation, newly created Discovery Kubernetes pods do not initialize.
Cause: A Kubernetes deployment strategy issue in the Discovery deployment file.
Solution:
Scale the Discovery deployment to 0.
kubectl -n <NAME_SPACE> scale deploy discovery --replicas=0
Update the strategy in the Discovery deployment file.
cd ~/privacera/privacera-manager
sed -i 's/Strategy/strategy/g' ansible/privacera-docker/roles/templates/discovery/kubernetes/discovery-deployment.yml
sed -i 's/Strategy/strategy/g' output/kubernetes/helm/discovery/templates/discovery-deployment.yml
Apply the changes manually.
kubectl apply -f output/kubernetes/helm/discovery/templates/discovery-deployment.yml -n <NAME_SPACE>
Scale back the Discovery deployment to 1 or run the update.
kubectl -n <NAME_SPACE> scale deploy discovery --replicas=1
Or
cd ~/privacera/privacera-manager
./privacera-manager.sh update
Portal service
Remove the WhiteLabel Error Page error
Problem: Privacera Portal cannot be accessed because of the WhiteLabel Error Page message being displayed.
Solution: To address this problem, you need to add the following properties:
SAML_MAX_AUTH_AGE_SEC
SAML_RESPONSE_SKEW_SEC
SAML_FORCE_AUTHN
To add these properties, perform the following steps:
Run the following commands.
cd ~/privacera/privacera-manager
cp config/sample-vars/vars.portal.yml config/custom-vars
vi config/custom-vars/vars.portal.yml
Add the following properties with their values.
SAML_MAX_AUTH_AGE_SEC: "7889400"
SAML_RESPONSE_SKEW_SEC: "600"
SAML_FORCE_AUTHN: "true"
Run the update.
cd ~/privacera/privacera-manager
./privacera-manager.sh update
Unable to start the Portal service
Problem: The Portal service is unable to start because it cannot start the Tomcat server, and the following log is generated:
liquibase.exception.LockException: Could not acquire change log lock. Currently locked by portal-f957f5997-jnb7v (100.90.9.218)
Solution:
Scale down Ranger and Portal (see the example after these steps).
Connect to your Postgres database. For example, privacera_db.
Run the following command.
UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null WHERE ID=1;
Close the database connection.
Scale up Ranger.
Scale up Portal.
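In a Kubernetes deployment, the scale-down and scale-up can look like the following (a sketch only; the resource type, deployment names, and namespace are assumptions that may differ in your environment):
kubectl -n <YOUR_NAMESPACE> scale deploy ranger --replicas=0
kubectl -n <YOUR_NAMESPACE> scale deploy portal --replicas=0
After running the SQL update, scale them back up:
kubectl -n <YOUR_NAMESPACE> scale deploy ranger --replicas=1
kubectl -n <YOUR_NAMESPACE> scale deploy portal --replicas=1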
Database lockup in Docker
Problem: Privacera services are not starting.
Cause: The database used by Privacera services could be locked up. This could happen due to an improper or abrupt shutdown of the Privacera Manager host machine.
Solution:
SSH to the machine where Privacera is installed.
SSH to the database container shell.
cd ~/privacera/docker
./privacera_services shell mariadb
Run the following command. It will prompt for a password and give you access to the MySQL database.
mysql -p
List all the databases.
show databases;
From the list, select the privacera_db database.
use privacera_db;
Query the DATABASECHANGELOGLOCK table. You will see a value of 1 or greater in the LOCKED column.
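For example, a standard query such as the following shows the current lock state:
select * from DATABASECHANGELOGLOCK;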
Remove the database lock.
update DATABASECHANGELOGLOCK set locked=0, lockgranted=null, lockedby=null where id=1;
commit;
Exit MySQL shell.
exit;
Exit Docker container.
exit;
Restart Privacera services.
./privacera_services restart
Grafana service
Unable to See Metrics on Grafana Dashboard
Problem: You're unable to see metrics on the Grafana dashboard. When you check the logs, the following exception is displayed:
[console] Error creating stats.timers.view.graphite.errors.POST.median: [Errno 28] No space left on device
[console] Unhandled Error
Solution
Notice
The solution steps below apply to Grafana deployed in a Kubernetes environment.
Increase the persistent volume claim (PVC) size. Do the following:
Open vars.grafana.yml.
cd ~/privacera/privacera-manager/
cp config/sample-vars/vars.grafana.yml config/custom-vars/
vi config/custom-vars/vars.grafana.yml
Add the GRAFANA_K8S_PVC_STORAGE_SIZE_MB and GRAPHITE_K8S_PVC_STORAGE_SIZE_MB properties. The property values are in megabytes (MB).
GRAFANA_K8S_PVC_STORAGE_SIZE_MB: "5000"
GRAPHITE_K8S_PVC_STORAGE_SIZE_MB: "5000"
Run the update.
cd ~/privacera/privacera-manager/
./privacera-manager.sh update
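After the update, the resized persistent volume claims can be verified with the standard kubectl command below (an optional check, not part of the documented steps):
kubectl get pvc -n <YOUR_NAMESPACE>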
Audit server
Unable to view the audits
Problem: You have configured Audit Server to receive the audits, but they are not visible.
Solution: Enable the application logs of Audit Server and debug the problem.
To enable debug logging for Audit Server, follow the steps for your deployment type.
Docker
SSH to the instance as USER.
Run the following commands.
cd ~/privacera/docker/
vi privacera/auditserver/conf/log4j.properties
At line 7, change INFO to DEBUG.
log4j.category.com.privacera=DEBUG,logfile
If you want to enable debugging outside the Privacera package, change line 4 from WARN to DEBUG.
log4j.rootLogger=DEBUG,logfile
Save the file.
Restart Audit Server.
./privacera_services restart auditserver
Kubernetes
SSH to the instance as USER.
Run the following command. Replace ${NAMESPACE} with your Kubernetes namespace.
kubectl edit cm auditserver-cm-conf -n ${NAMESPACE}
At line 47, edit the following property and change it to DEBUG mode.
log4j.category.com.privacera=DEBUG,logfile
At line 44, enable DEBUG at root level.
log4j.rootLogger=DEBUG,logfile
Save the file.
Restart Audit Server.
kubectl rollout restart statefulset auditserver -n ${NAMESPACE}
Audit Fluentd
Unable to view the audits
Problem: You have configured Audit Fluentd to receive the audits, but they are not visible.
Solution: Enable the application logs of Audit Fluentd and debug the problem.
To view the application logs of Audit Fluentd, do the following:
SSH to the instance as User.
Run the following command, depending on your deployment type.
Docker
cd ~/privacera/docker
./privacera_services logs audit-fluentd -f
Kubernetes
kubectl logs audit-fluentd-0 -f -n $YOUR_NAMESPACE
Privacera Plugin
EMR
Non-Portal users can access restricted resources
Problem: Local users of an EMR cluster who are not defined in the Privacera Portal policy can get access to the resources on which the policy is applied. This happens when Hive is used on EMR.
Cause: If a group with the same name exists both in Privacera Portal and locally in the EMR cluster, the permissions assigned to that group in the Privacera Portal policy also get applied to the local group's users in the EMR cluster.
Solution:
Copy the following property to the /etc/hive/conf/ranger-hive-security.xml file:
<property>
  <name>ranger.plugin.hive.use.only.rangerGroups</name>
  <value>true</value>
</property>
Restart Hive.
sudo service hive-server2 stop
sudo service hive-server2 start