Known Issues¶
This section outlines the known issues in Privacera.
Masking Policies Issues¶
Databricks FGAC¶
- Issue: In Databricks, column-level masking with `Hash` is showing `null` for data types other than String.
EMR HIVE, Trino¶
- Issue: In EMR Hive and Trino, column-level masking with `Hash` is showing `null` for data types other than String.
Trino¶
- Issue: In Trino 379 and above, the column-level masking policies `Partial mask: show first 4` and `Partial mask: show last 4` show `null` for data types other than String in the PostgreSQL and Redshift catalogs.
PolicySync Connector Issues¶
OPS Connector¶
From the 9.0.12.1 release, users enabling `vars.ops-bridge.yml` may encounter an issue due to recent changes in the release process. As part of ongoing improvements, artifacts are now managed in a different location with a new versioning approach. This may impact access to certain resources, including the MSK CloudFormation template.
To resolve this issue, refer to the workaround section.
Privacera Post Install Issue¶
If you encounter an issue during the post-install process in the `Clone Privacera Dashboard repository` task, related to the SSH access needed to clone the Privacera monitoring dashboard repository, refer to the Grafana Post Install Failure section.
EFS Data not getting deleted from AWS EFS¶
When using Amazon EFS as persistent storage in a Kubernetes environment, deleting a Persistent Volume (PV) and Persistent Volume Claim (PVC) is expected to remove the associated data from the EFS file system. However, it has been observed that the data on EFS remains intact even after the PV and PVC are deleted. As a result, residual data accumulates on the EFS file system, leading to increased storage costs and potential data security concerns.
It is essential to understand the root cause of this behavior and implement proper cleanup mechanisms to ensure efficient storage utilization and cost management. This issue occurs when the `delete-access-point-root-dir` property of the EFS CSI driver is set to `false`.
To check the value of this property in your environment, run the command below.
Note

Here `namespace` is the namespace in which the EFS CSI controller is deployed.
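One way to check it, assuming the driver runs as the standard `efs-csi-controller` deployment (deployment and container names may differ in your environment):

```bash
# Look for the delete-access-point-root-dir flag in the controller's container arguments
kubectl -n <namespace> get deployment efs-csi-controller -o yaml | grep delete-access-point-root-dir
```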
If the value is `false`, follow the steps in the Prerequisites section for EFS CSI Configuration.

Clean unwanted data from EFS¶
If you've already encountered the issue and want to clean up the unwanted data from EFS, follow the steps outlined below.
- First, we need to mount EFS on an EC2 instance to check for stale data. You can either mount it on the PM jumphost or create a temporary EC2 instance for this purpose.
- Depending on your operating system, you may need to install NFS utilities to mount the EFS volume.
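    For example, the NFS client packages can typically be installed as follows (package names vary by distribution):

    ```bash
    # Amazon Linux / RHEL-based instances
    sudo yum install -y nfs-utils

    # Debian / Ubuntu-based instances
    sudo apt-get install -y nfs-common
    ```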
- Before cleaning up the data, you must mount the EFS file system to the EC2 instance.
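    A minimal example of mounting the file system over NFS, assuming `/mnt/efs` as the mount point (replace `<EFS_DNS_URL>` with your file system's DNS name):

    ```bash
    # Create a mount point and mount the EFS file system using NFSv4.1
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 <EFS_DNS_URL>:/ /mnt/efs
    ```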
    Note

    - The value of `EFS_DNS_URL` can be obtained from the AWS Console under the EFS file system details.
    - Ensure that the EC2 instance has the necessary permissions and that the EFS file system is accessible. Verify that the security group associated with the EFS allows NFS traffic.
- After mounting, navigate to the EFS directory:
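    Assuming the `/mnt/efs` mount point used above:

    ```bash
    cd /mnt/efs
    ```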
- To automate the cleanup process, create a script named `clean_efs.sh` with the following content.
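    The original script body is environment-specific; the following is a minimal sketch of what `clean_efs.sh` could look like, assuming the EFS file system is mounted at `/mnt/efs`, that dynamically provisioned directories are named after their PVs, and that `kubectl` access to the cluster is available from the instance. Review the matching logic against your environment before using the `delete` mode.

    ```bash
    #!/bin/bash
    # clean_efs.sh - list or delete EFS directories that no longer back an active PV.
    # NOTE: Illustrative sketch only. Adjust EFS_MOUNT_PATH and verify that the PV
    # matching logic fits your environment before running the "delete" mode.

    set -euo pipefail
    shopt -s nullglob

    EFS_MOUNT_PATH="/mnt/efs"   # assumed mount point of the EFS file system
    MODE="${1:-}"

    if [[ "$MODE" != "list" && "$MODE" != "delete" ]]; then
        echo "Usage: $0 [list|delete]"
        exit 1
    fi

    # Exempted list: names of Persistent Volumes currently known to the cluster.
    # With EFS CSI dynamic provisioning, per-volume directories are commonly named
    # after their PV (e.g. pvc-<uid>), so directory names are compared to PV names.
    ACTIVE_PVS=$(kubectl get pv -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n')

    for dir in "$EFS_MOUNT_PATH"/*/; do
        name=$(basename "$dir")
        if echo "$ACTIVE_PVS" | grep -qx "$name"; then
            echo "KEEP        : $name (backed by an active PV)"
        elif [[ "$MODE" == "list" ]]; then
            echo "WOULD DELETE: $name (no matching PV)"
        else
            echo "DELETING    : $name (no matching PV)"
            rm -rf "$dir"
        fi
    done
    ```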
- Add execution permission to the script.
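    ```bash
    chmod +x clean_efs.sh
    ```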
- The script accepts one of two arguments: `list` or `delete`.

    - `list`: Use this mode to display the folders that would be deleted, without actually deleting them. Example: `./clean_efs.sh list`.
    - `delete`: Use this mode to delete folders that are not in the exempted list of Persistent Volumes (PVs) associated with your Kubernetes cluster. Example: `./clean_efs.sh delete`.
- Once the cleanup is complete, unmount the EFS file system.
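    Assuming the `/mnt/efs` mount point used earlier:

    ```bash
    sudo umount /mnt/efs
    ```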
By following these steps, you can effectively clean up unwanted data from Amazon EFS while ensuring that active PV data remains intact. Automating the cleanup process helps maintain a tidy storage environment, reduces costs, and enhances storage efficiency.