Known Issues

This section outlines the known issues in Privacera.

Masking Policies Issues

Databricks FGAC

  • Issue: In Databricks, column-level masking with Hash returns null for data types other than String.

EMR Hive, Trino

  • Issue: In EMR Hive and Trino, column-level masking with Hash returns null for data types other than String.

Trino

  • Issue: In Trino 379 and above, column-level masking with Partial mask: show first 4 and Partial mask: show last 4 returns null for data types other than String in the PostgreSQL and Redshift catalogs.

PolicySync Connector Issues

OPS Connector

From the 9.0.12.1 release, users enabling vars.ops-bridge.yml may encounter an issue due to recent changes in the release process. As part of ongoing improvements, artifacts are now managed in a different location with a new versioning approach. This may impact access to certain resources, including the MSK CloudFormation template.

To resolve this issue, refer to the workaround section.

Privacera Post Install Issue

If you encounter an issue during the post-install process in the Clone Privacera Dashboard repository task, related to the SSH access needed to clone the Privacera monitoring dashboard repository, refer to the Grafana Post Install Failure section.

Data not getting deleted from AWS EFS

When using Amazon EFS as persistent storage in a Kubernetes environment, deleting a Persistent Volume (PV) and Persistent Volume Claim (PVC) is expected to remove the associated data from the EFS file system. However, it has been observed that the data on EFS remains intact even after the PV and PVC are deleted. As a result, residual data accumulates on the EFS file system, leading to increased storage costs and potential data security concerns.

It is essential to understand the root cause of this behavior and implement proper cleanup mechanisms to ensure efficient storage utilization and cost management. This issue occurs when the delete-access-point-root-dir property of the EFS CSI driver is set to false.

To check the value of this property in your environment, run the command below.

Note

Here namespace is the namespace in which EFS CSI Controller is deployed.

Bash
kubectl -n <namespace> get deployment efs-csi-controller -o json | grep -i "delete-access-point-root-dir"

If it is set to false, follow the steps in the Prerequisites section for EFS CSI Configuration.
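
One possible way to change this setting is sketched below. It assumes the EFS CSI driver was installed with the aws-efs-csi-driver Helm chart; the repository URL, release name, and namespace are illustrative and should be adjusted to your environment.

Bash
# Sketch only: assumes the EFS CSI driver was installed via the aws-efs-csi-driver Helm chart.
# The release name (aws-efs-csi-driver) and namespace (kube-system) below are illustrative.
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm upgrade aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system \
  --reuse-values \
  --set controller.deleteAccessPointRootDir=true

# Verify that the flag is now passed to the controller
kubectl -n kube-system get deployment efs-csi-controller -o json | grep -i "delete-access-point-root-dir"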

Clean unwanted data from EFS

If you've already encountered the issue and want to clean up the unwanted data from EFS, follow the steps outlined below.

  1. First, we need to mount EFS on an EC2 instance to check for stale data. You can either mount it on the PM jumphost or create a temporary EC2 instance for this purpose.

  2. Depending on your operating system, you may need to install NFS utilities to mount the EFS volume.

    Bash
    # Amazon Linux / RHEL-based distributions
    sudo yum install -y nfs-utils

    Bash
    # Debian / Ubuntu-based distributions
    sudo apt update
    sudo apt install -y nfs-common

  3. Before cleaning up the data, mount the EFS file system on the EC2 instance.

    Note

    • The value of EFS_DNS_URL can be obtained from the AWS Console under the EFS file system details.
    • Ensure that the EC2 instance has the necessary permissions and that the EFS file system is accessible. Verify the security group associated with the EFS to allow NFS traffic.
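
    Alternatively, the file system ID can be looked up with the AWS CLI, as in the sketch below. This assumes the AWS CLI is configured for the correct account and region; the DNS name follows the documented <file-system-id>.efs.<region>.amazonaws.com pattern.

    Bash
    # Assumption: the AWS CLI is configured with credentials and the target region.
    aws efs describe-file-systems --query "FileSystems[].{ID:FileSystemId,Name:Name}" --output table
    # The DNS name has the form <file-system-id>.efs.<region>.amazonaws.com,
    # e.g. fs-0123456789abcdef0.efs.us-east-1.amazonaws.com (illustrative value).
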
    Bash
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 <EFS_DNS_URL>:/ /mnt/efs
    
  4. After mounting, navigate to the EFS directory:

    Bash
    cd /mnt/efs
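
    # Optional sanity check (assumes kubectl is configured on this instance):
    # directories that do not match an existing PV name are the cleanup candidates.
    ls -1 /mnt/efs
    kubectl get pv -o custom-columns=NAME:.metadata.name --no-headers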
    

  5. To automate the cleanup process, create a script named clean_efs.sh with the following content.

    Bash
    #!/bin/bash
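    # Run this script from the EFS mount point (for example, /mnt/efs).
    # It requires kubectl access to the cluster so that directories matching
    # existing PersistentVolume (PV) names can be exempted from deletion.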
    
    if [[ $# -ne 1 || ( "$1" != "list" && "$1" != "delete" ) ]]; then
      echo "Usage: $0 [list|delete]"
      exit 1
    fi
    
    action="$1"
    delete_list=()
    
    # Get all PersistentVolume (PV) names from Kubernetes and store them in the exempted array
    exempted=($(kubectl get pv -o custom-columns=NAME:.metadata.name --no-headers))
    echo "Exempted folders: ${exempted[@]}"
    
    # Loop through all folders in the current directory
    for dir in */; do
      # Remove the trailing slash from the folder name
      dir_name="${dir%/}"
    
      # Check if the folder name is in the exempted list
      if [[ ! " ${exempted[@]} " =~ " ${dir_name} " ]]; then
        if [[ "$action" == "delete" ]]; then
          echo "Deleting folder: $dir_name"
          sudo rm -rf "$dir_name"
        else
          delete_list+=("$dir_name")
        fi
      else
        echo "Exempted folder: $dir_name"
      fi
    done
    
    if [[ "$action" == "list" ]]; then
      echo "Folders that would be deleted:"
      for folder in "${delete_list[@]}"; do
        echo "$folder"
      done
    fi
    
    echo "Cleanup complete!"
    

  6. Add execution permission to the script.

    Bash
    chmod +x clean_efs.sh
    

  7. The script accepts one of two arguments: list or delete.

    • list: Use this mode to display the folders that would be deleted without actually deleting them. For example: ./clean_efs.sh list.
    • delete: Use this mode to delete folders that are not in the exempted list of Persistent Volume (PV) names associated with your Kubernetes cluster. For example: ./clean_efs.sh delete.
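
    A typical run previews the candidates first and deletes them only after reviewing the output:

    Bash
    # Preview the folders that would be removed (no changes are made)
    ./clean_efs.sh list

    # Permanently remove folders that do not match an existing PV name
    ./clean_efs.sh delete
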
  8. Once the cleanup is complete, unmount the EFS file system.

    Bash
    sudo umount /mnt/efs
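    # Optionally remove the temporary mount point after unmounting
    sudo rmdir /mnt/efs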
    

By following these steps, you can effectively clean up unwanted data from Amazon EFS while ensuring that active PV data remains intact. Automating the cleanup process helps maintain a tidy storage environment, reduces costs, and enhances storage efficiency.
