Known Issues

This section outlines the known issues in Privacera.

Masking Policies Issues

Databricks FGAC

  • Issue: In Databricks, column-level masking with the Hash masking type returns null for data types other than String.

EMR Hive, Trino

  • Issue: In EMR Hive and Trino, column-level masking with the Hash masking type returns null for data types other than String.

Trino

  • Issue: In Trino 379 and above, the column-level masking types Partial mask: show first 4 and Partial mask: show last 4 return null for data types other than String in the PostgreSQL and Redshift catalogs.

PolicySync Connector Issues

Databricks SQL

  • Issue: The Databricks SQL connector does not support functions or UDFs (User-Defined Functions) when using the On-Demand Sync feature.

OPS Connector

In releases 9.0.12.1 through 9.0.16.1, users enabling vars.ops-bridge.yml may encounter issues due to recent changes in the release process. As part of ongoing improvements, artifacts are now managed in a different location with a new versioning approach. This may impact access to certain resources, including the MSK CloudFormation template.

To resolve this issue, refer to the workaround section.

BigQuery

In 9.0.22.1, the BigQuery connector has the following known issues:

Issue 1: CMEK Configuration Is Not Updated for Existing Secure Datasets

When using the BigQuery connector, changing the Customer-Managed Encryption Key (CMEK) configuration, such as switching from CMEK to non-CMEK or updating the key, does not affect previously created secure datasets.

Secure datasets retain the CMEK configuration defined at the time of their creation. As a result, even after modifying the connector's CMEK settings, existing datasets may continue to use outdated encryption keys. This behavior can cause inconsistencies in encryption policies across datasets.

Workaround: To apply the updated CMEK configuration, manually delete the existing secure datasets. The connector will recreate them using the new encryption settings.
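
One way to remove an existing secure dataset is with the bq command-line tool; the project and dataset names below are placeholders, so substitute the ones used in your environment:

Bash
# Recursively and forcibly delete a secure dataset (example names only);
# the connector will recreate it with the updated CMEK settings
bq rm -r -f -d my-project:my_secure_dataset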


Issue 2: Previously Saved Properties Not Rendered After Enabling Access Management in Self Managed Portal Configuration

When toggling the Access Management setting from disabled to enabled in the self-managed portal, the previously saved configuration properties are not displayed.

Privacera Post Install Issue

If the post-install process fails at the Clone Privacera Dashboard repository task because of the SSH access required to clone the Privacera monitoring dashboard repository, refer to the Grafana Post Install failure section.

Prometheus Upgrade Failure in Privacera-Monitoring

When upgrading the Privacera Prometheus component, you might encounter the following error:

Bash
Chart prometheus is already installed. Upgrading it.
Error: UPGRADE FAILED: cannot patch "prometheus-server" with kind StatefulSet: StatefulSet.apps "prometheus-server" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'ordinals', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden

This error typically occurs when the PersistentVolumeClaim (PVC) size defined in the Helm chart does not match the existing PVC size in the Kubernetes cluster.

Resolution Steps

1. Check existing PVC size

Run the following command to view the existing PVC size for Prometheus:

Bash
kubectl get pvc -n <your-monitoring-namespace>

Replace <your-monitoring-namespace> with your actual monitoring namespace (e.g., privacera-monitoring).

2. Note down the PVC size

Identify the PVC used by Prometheus and make a note of its current size.
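
If you prefer to read the size directly, a jsonpath query like the one below prints the requested storage for a PVC; the PVC name prometheus-server is only an example and may differ in your cluster:

Bash
# Print the requested storage size for the Prometheus PVC (name and namespace are examples)
kubectl get pvc prometheus-server -n <your-monitoring-namespace> -o jsonpath='{.spec.resources.requests.storage}'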

3. Copy the vars.monitoring.yml file from the sample-vars folder to the custom-vars folder

Bash
cd ~/privacera/privacera-manager/config
cp sample-vars/vars.monitoring.yml custom-vars/vars.monitoring.yml

4. Update the PVC size in the vars.monitoring.yml file

Open the file custom-vars/vars.monitoring.yml and locate the following line:

YAML
# PROMETHEUS_K8S_PVC_SIZE: "<PLEASE_CHANGE>"

Uncomment and update it to match the PVC size noted in step 2:
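
For example, if the PVC size noted in step 2 is 25Gi, the uncommented line would look like this (the value shown is only illustrative; use the size from your cluster):

YAML
PROMETHEUS_K8S_PVC_SIZE: "25Gi"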

5. Re-run the upgrade

a. Go to the privacera-manager directory.

Bash
cd ~/privacera/privacera-manager

b. Run setup to generate the required files.

Bash
./privacera-manager.sh setup

c. Install the monitoring components.

Bash
./pm_with_helm.sh install-monitoring

d. Run install to update Grafana.

Bash
./pm_with_helm.sh install

e. Once done, run post-install.

Bash
./privacera-manager.sh post-install

EFS Data Not Getting Deleted from AWS EFS

When using Amazon EFS as persistent storage in a Kubernetes environment, deleting a Persistent Volume (PV) and Persistent Volume Claim (PVC) is expected to remove the associated data from the EFS file system. However, it has been observed that the data on EFS remains intact even after the PV and PVC are deleted. As a result, residual data accumulates on the EFS file system, leading to increased storage costs and potential data security concerns.

It is essential to understand the root cause of this behavior and implement proper cleanup mechanisms to ensure efficient storage utilization and cost management. This issue occurs when the delete-access-point-root-dir property of the EFS CSI driver is set to false.

To check the value of this property in your environment, run the command below.

Note

Here, <namespace> is the namespace in which the EFS CSI Controller is deployed.

Bash
kubectl -n <namespace> get deployment efs-csi-controller -o json | grep -i "delete-access-point-root-dir"

If it is set to false, follow the steps in the AWS EFS CSI Driver additional configuration section under AWS EKS cluster for running Privacera Software.

Clean Unwanted Data from EFS

If you've already encountered the issue and want to clean up the unwanted data from EFS, follow the steps outlined below. This involves mounting the EFS file system on an EC2 instance and running a script to identify and delete stale data.

Tip

Depending on your setup, you may need to follow the AWS documentation links here and here to mount the EFS file system on an EC2 instance.

  1. First, we need to mount EFS on an EC2 instance to check for stale data. You can either mount it on the PM jumphost or create a temporary EC2 instance for this purpose.

  2. Depending on your operating system, you may need to install NFS utilities to mount the EFS volume.

    Bash
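    # Amazon Linux / RHEL-based systems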
    sudo yum install -y nfs-utils
    
    Bash
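    # Debian / Ubuntu-based systems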
    sudo apt update
    sudo apt install -y nfs-common
    
  3. Before cleaning up the data, you must mount the EFS file system to an EC2 instance.

    Note

    • The value of EFS_DNS_URL can be obtained from the AWS Console under the EFS file system details.
    • Ensure that the EC2 instance has the necessary permissions and that the EFS file system is accessible. Verify the security group associated with the EFS to allow NFS traffic.
    Bash
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 <EFS_DNS_URL>:/ /mnt/efs
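
    To confirm the mount succeeded, check that /mnt/efs appears in the df output:

    Bash
    df -h /mnt/efs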
    
  4. After mounting, navigate to the EFS directory:

    Bash
    cd /mnt/efs
    

  5. To automate the cleanup process, create a script named clean_efs.sh with the following content.

    Bash
    #!/bin/bash
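    # Note: run this script from the mounted EFS root (for example /mnt/efs) on a host
    # where kubectl is configured against the Kubernetes cluster that owns the PVs.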
    
    if [[ $# -ne 1 || ( "$1" != "list" && "$1" != "delete" ) ]]; then
      echo "Usage: $0 [list|delete]"
      exit 1
    fi
    
    action="$1"
    delete_list=()
    
    # Get all PersistentVolume (PV) names from Kubernetes and store them in the exempted array
    exempted=($(kubectl get pv -o custom-columns=NAME:.metadata.name --no-headers))
    echo "Exempted folders: ${exempted[@]}"
    
    # Loop through all folders in the current directory
    for dir in */; do
      # Remove the trailing slash from the folder name
      dir_name="${dir%/}"
    
      # Check if the folder name is in the exempted list
      if [[ ! " ${exempted[@]} " =~ " ${dir_name} " ]]; then
        if [[ "$action" == "delete" ]]; then
          echo "Deleting folder: $dir_name"
          sudo rm -rf "$dir_name"
        else
          delete_list+=("$dir_name")
        fi
      else
        echo "Exempted folder: $dir_name"
      fi
    done
    
    if [[ "$action" == "list" ]]; then
      echo "Folders that would be deleted:"
      for folder in "${delete_list[@]}"; do
        echo "$folder"
      done
    fi
    
    echo "Cleanup complete!"
    

  6. Add execution permission to the script.

    Bash
    chmod +x clean_efs.sh
    

  7. The script accepts one of two arguments: list or delete.

    • list: Use this mode to display the folders that would be deleted without actually deleting them. For example: ./clean_efs.sh list.
    • delete: Use this mode to delete folders that are not in the exempted list of Persistent Volumes (PVs) associated with your Kubernetes cluster. For example: ./clean_efs.sh delete.
  8. Once the cleanup is complete, unmount the EFS file system.

    Bash
    sudo umount /mnt/efs
    

By following these steps, you can effectively clean up unwanted data from Amazon EFS while ensuring that active PV data remains intact. Automating the cleanup process helps maintain a tidy storage environment, reduces costs, and enhances storage efficiency.

Privacera Portal

OAuth SSO Login May Fail for Portal Users

Issue: OAuth-based SSO login may fail for portal users in versions 9.0.20.1 through 9.0.23.1. Affected users may be unable to authenticate using OAuth and are blocked from accessing the portal. This is a known issue and is currently under investigation.

Workaround: Users can log in using one of the following alternative methods:

  • Username and password
  • SSO via SAML or LDAP

A fix is in progress and will be included in an upcoming release.

Discovery Issues

Offline Scan Cleanup Not Working

Issue: In versions 9.0.36.1 through 9.0.40.1, the offline scan cleanup process fails.

Workaround: To disable offline scan cleanup, follow these steps:

  1. Navigate to Settings > Data Source Registration
  2. Edit Data Source for the affected data source
  3. Set Enable Offline Scan Clean Up to false
  4. Save the configuration

Fix

This issue has been resolved in version 9.0.41.1.

Solr Replica Scaling Issue

Issue: Scaling Solr from a single replica to multiple replicas (e.g., 3) may result in startup issues or cluster instability. This occurs because the initial deployment does not include High Availability (HA) configurations such as Zookeeper clustering. Additionally, the certificates generated during the initial setup are intended for a single-replica environment and should be refreshed to align with the updated HA configuration. Without applying the necessary HA settings and regenerating certificates, the Solr service may not operate reliably after scaling.

Workaround:

To enable Solr with multiple replicas, follow the steps below:

  1. Enable HA Settings in Custom Vars

    Navigate to the custom-vars directory and copy the HA sample configuration:

    Bash
    cd ~/privacera/privacera-manager/config/custom-vars  
    cp ../sample-vars/vars.kubernetes.ha.yml ./
    
  2. Edit the File

    Open the copied vars.kubernetes.ha.yml and update it to only include the following variables. Comment out any other services:

    YAML
    ### Set Zookeeper Replicas (Recommended for HA) 
    ZOOKEEPER_CLUSTER_SIZE: 3
    
    ### Set Solr Pod Replicas
    SOLR_K8S_CLUSTER_SIZE: 3
    
    ### Enables HA for Portal Service and Set the Replicas
    # PRIVACERA_PORTAL_K8S_HA_ENABLE: "true"
    # PORTAL_K8S_REPLICAS: "3"
    
    ### Set Ranger Replicas
    # RANGER_K8S_REPLICAS: "3"
    
    ### Set Dataserver Replicas
    # DATASERVER_K8S_CLUSTER_SIZE: "3"
    
    ### Set Auditserver Replicas
    # AUDITSERVER_K8S_REPLICAS: "3"
    
  3. Delete Existing Solr SSL Certificates

    Remove any existing certificates for Solr to ensure new ones are generated with the correct configuration:

    Bash
    cd ~/privacera/privacera-manager/config/ssl  
    rm -rf solr-keystore.p12 solr-trust.cer
    
  4. Re-run Setup

    Execute the full setup to apply changes:

    Bash
    cd ~/privacera/privacera-manager/
    ./privacera-manager.sh setup
    ./pm_with_helm.sh install
    ./privacera-manager.sh post-install
    

This will configure and deploy Solr and Zookeeper with 3 replicas each, properly enabling HA mode for Solr and resolving issues related to scaling from a single replica.
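
To confirm the scaled deployment, you can check that three Solr and three Zookeeper pods are running; the namespace and pod-name filters below are assumptions based on the service names, so adjust them for your environment:

Bash
# List the Solr and Zookeeper pods to verify all replicas are up
kubectl get pods -n <privacera-namespace> | grep -E 'solr|zookeeper'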


CPU Resource Configuration Variables Require Updates in Versions 9.2.4.1 and 9.2.5.1

Overview

In releases 9.2.4.1 and 9.2.5.1, custom CPU resource configurations require updated variable names to be properly recognized by Privacera Manager. This change affects only deployments with custom CPU request and limit settings.

Who is Affected?

This issue affects customers who meet all of the following criteria:

  • Running Privacera version 9.2.4.1 or 9.2.5.1
  • Have configured custom CPU request or limit values in their Privacera Manager variables
  • Using the legacy variable naming format (e.g., <SERVICE>_CPU_MIN, <SERVICE>_CPU_MAX)

Note

If you are using the default CPU settings provided by Privacera Manager without any custom overrides, your deployment is not affected and no action is required.

Issue Description

Due to a variable naming standardization introduced in these releases, Privacera Manager does not recognize the previous variable naming convention for CPU resources. As a result, custom CPU configurations using the old variable names will not be applied to service deployments, causing them to revert to default values.

How to Identify If You Are Affected

Review your vars.sizing.yaml file (located in ~/privacera/privacera-manager/config/custom-vars/) for CPU-related variables. If you find variables following the old naming pattern shown below, you will need to update them:

  • Old pattern: <SERVICE>_CPU_MIN or <SERVICE>_CPU_MAX
  • Examples: AUDITSERVER_CPU_MIN, DATASERVER_CPU_MAX, PORTAL_CPU_MIN
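
A quick way to check for the legacy pattern is to grep the sizing file; any matches indicate variables that need to be migrated (the path below is the default location mentioned above):

Bash
grep -E '_CPU_(MIN|MAX)' ~/privacera/privacera-manager/config/custom-vars/vars.sizing.yaml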

Resolution Steps

To resolve this issue, update your CPU resource variable names in the vars.sizing.yaml file from the legacy naming convention to the new standardized format:

Naming Convention Change:

  • Old Format (Deprecated): <SERVICE>_CPU_MIN and <SERVICE>_CPU_MAX
  • New Format (Required): <SERVICE>_K8S_CPU_REQUESTS and <SERVICE>_K8S_CPU_LIMITS

Example Migration:

YAML
# Old format (no longer recognized)
AUDITSERVER_CPU_MIN: "0.5"
AUDITSERVER_CPU_MAX: "1.0"

# New format (required)
AUDITSERVER_K8S_CPU_REQUESTS: "0.5"
AUDITSERVER_K8S_CPU_LIMITS: "1.0"

Complete Variable Name Reference

The table below provides the complete mapping of old variable names to new variable names for all Privacera services. Locate your service and update the variable names accordingly:

Service | Old Variable (CPU Request) | New Variable (CPU Request) | Old Variable (CPU Limit) | New Variable (CPU Limit)
AuditServer | AUDITSERVER_CPU_MIN | AUDITSERVER_K8S_CPU_REQUESTS | AUDITSERVER_CPU_MAX | AUDITSERVER_K8S_CPU_LIMITS
Connector | CONNECTOR_CPU_MIN | CONNECTOR_K8S_CPU_REQUESTS | CONNECTOR_CPU_MAX | CONNECTOR_K8S_CPU_LIMITS
DataServer | DATASERVER_CPU_MIN | DATASERVER_K8S_CPU_REQUESTS | DATASERVER_CPU_MAX | DATASERVER_K8S_CPU_LIMITS
Discovery Driver | DISCOVERY_DRIVER_CPU_MIN | DISCOVERY_DRIVER_K8S_CPU_REQUESTS | DISCOVERY_DRIVER_CPU_MAX | DISCOVERY_DRIVER_K8S_CPU_LIMITS
Discovery Executor | DISCOVERY_EXECUTOR_CPU_MIN | DISCOVERY_EXECUTOR_K8S_CPU_REQUESTS | DISCOVERY_EXECUTOR_CPU_MAX | DISCOVERY_EXECUTOR_K8S_CPU_LIMITS
Discovery Consumer | DISCOVERY_CONSUMER_CPU_MIN | DISCOVERY_CONSUMER_K8S_CPU_REQUESTS | DISCOVERY_CONSUMER_CPU_MAX | DISCOVERY_CONSUMER_K8S_CPU_LIMITS
Kafka | KAFKA_CPU_MIN | KAFKA_K8S_CPU_REQUESTS | KAFKA_CPU_MAX | KAFKA_K8S_CPU_LIMITS
OPS Server | OPS_SERVER_CPU_MIN | OPS_SERVER_K8S_CPU_REQUESTS | OPS_SERVER_CPU_MAX | OPS_SERVER_K8S_CPU_LIMITS
PEG | PEG_CPU_MIN | PEG_K8S_CPU_REQUESTS | PEG_CPU_MAX | PEG_K8S_CPU_LIMITS
Portal | PORTAL_CPU_MIN | PORTAL_K8S_CPU_REQUESTS | PORTAL_CPU_MAX | PORTAL_K8S_CPU_LIMITS
Privacera Services | PRIVACERA_SERVICES_CPU_MIN | PRIVACERA_SERVICES_K8S_CPU_REQUESTS | PRIVACERA_SERVICES_CPU_MAX | PRIVACERA_SERVICES_K8S_CPU_LIMITS
Ranger | RANGER_CPU_MIN | RANGER_K8S_CPU_REQUESTS | RANGER_CPU_MAX | RANGER_K8S_CPU_LIMITS
Scheme Server | SCHEME_SERVER_CPU_MIN | SCHEME_SERVER_K8S_CPU_REQUESTS | SCHEME_SERVER_CPU_MAX | SCHEME_SERVER_K8S_CPU_LIMITS
Solr | SOLR_CPU_MIN | SOLR_K8S_CPU_REQUESTS | SOLR_CPU_MAX | SOLR_K8S_CPU_LIMITS
UserSync | USERSYNC_CPU_MIN | USERSYNC_K8S_CPU_REQUESTS | USERSYNC_CPU_MAX | USERSYNC_K8S_CPU_LIMITS
ZooKeeper | ZOOKEEPER_CPU_MIN | ZOOKEEPER_K8S_CPU_REQUESTS | ZOOKEEPER_CPU_MAX | ZOOKEEPER_K8S_CPU_LIMITS
TagSync | TAGSYNC_CPU_MIN | TAGSYNC_K8S_CPU_REQUESTS | TAGSYNC_CPU_MAX | TAGSYNC_K8S_CPU_LIMITS
Trino | TRINO_CPU_MIN | TRINO_K8S_CPU_REQUESTS | TRINO_CPU_MAX | TRINO_K8S_CPU_LIMITS
Ranger KMS | RANGER_KMS_CPU_MIN | RANGER_KMS_K8S_CPU_REQUESTS | RANGER_KMS_CPU_MAX | RANGER_KMS_K8S_CPU_LIMITS
PolicySync V2 | POLICYSYNC_V2_CPU_MIN | POLICYSYNC_V2_K8S_CPU_REQUESTS | POLICYSYNC_V2_CPU_MAX | POLICYSYNC_V2_K8S_CPU_LIMITS
PEG V2 | PEG_V2_CPU_MIN | PEG_V2_K8S_CPU_REQUESTS | PEG_V2_CPU_MAX | PEG_V2_K8S_CPU_LIMITS
PKafka | PKAFKA_CPU_MIN | PKAFKA_K8S_CPU_REQUESTS | PKAFKA_CPU_MAX | PKAFKA_K8S_CPU_LIMITS
DB MariaDB | DB_MARIADB_CPU_MIN | DB_MARIADB_K8S_CPU_REQUESTS | DB_MARIADB_CPU_MAX | DB_MARIADB_K8S_CPU_LIMITS
Privacera UserSync | PRIVACERA_USERSYNC_CPU_MIN | PRIVACERA_USERSYNC_K8S_CPU_REQUESTS | PRIVACERA_USERSYNC_CPU_MAX | PRIVACERA_USERSYNC_K8S_CPU_LIMITS
Diag Server | DIAG_SERVER_CPU_MIN | DIAG_SERVER_K8S_CPU_REQUESTS | DIAG_SERVER_CPU_MAX | DIAG_SERVER_K8S_CPU_LIMITS
Audit Fluentd | AUDIT_FLUENTD_CPU_MIN | AUDIT_FLUENTD_K8S_CPU_REQUESTS | AUDIT_FLUENTD_CPU_MAX | AUDIT_FLUENTD_K8S_CPU_LIMITS
Solr Exporter | SOLR_EXPORTER_CPU_MIN | SOLR_EXPORTER_K8S_CPU_REQUESTS | SOLR_EXPORTER_CPU_MAX | SOLR_EXPORTER_K8S_CPU_LIMITS

Next Steps

  1. Locate Your Sizing Configuration File: Navigate to your Privacera Manager configuration directory and open the sizing file:

    Bash
    cd ~/privacera/privacera-manager/config/custom-vars/
    vi vars.sizing.yaml
    

  2. Update Variable Names: Edit the vars.sizing.yaml file and replace all occurrences of the old variable naming format with the new format using the mapping table above.

  3. Verify Changes: Review your updated configuration to ensure all CPU-related variables have been migrated to the new naming convention.

  4. Apply Changes: Re-run the Privacera Manager setup to apply the updated configuration:

    Bash
    cd ~/privacera/privacera-manager/
    ./privacera-manager.sh setup
    ./pm_with_helm.sh install
    ./privacera-manager.sh post-install

  5. Validate Deployment: After deployment, verify that your custom CPU configurations are being applied correctly to your services.

Need Help?

If you need assistance with this migration or have questions about your configuration, please contact Privacera Support or your Customer Experience Representative.