Handling LogicalPlan Filter Conflicts in Spark FGAC Plugin

Warning

Disabling the spark.hadoop.privacera.fgac.wa.partition.filter.enable property described below is not recommended when Privacera Row Level Filter policies are in use.

In Spark FGAC (Fine-Grained Access Control) use cases, the plugin transforms the LogicalPlan in multiple phases using custom Scala rules. These transformations support scenarios such as view-based access control, row-level filtering (RLF), and column masking.
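
To make the mechanism concrete, the sketch below shows a minimal custom Catalyst rule of the kind such a plugin injects: it wraps each scanned relation in an extra Filter node, which is roughly how a row-level filter is pushed into the plan. This is an illustrative sketch, not Privacera's actual implementation; the rule name, the region column, and the 'EMEA' literal are assumptions.

    import org.apache.spark.sql.catalyst.expressions.{EqualTo, Literal}
    import org.apache.spark.sql.catalyst.plans.logical.{Filter, LogicalPlan}
    import org.apache.spark.sql.catalyst.rules.Rule
    import org.apache.spark.sql.execution.datasources.LogicalRelation

    // Illustrative rule: wrap each scanned relation in an extra Filter node,
    // mimicking how a row-level-filter policy is pushed into the plan.
    // (A production rule must also stay idempotent if the analyzer runs it
    // more than once; that bookkeeping is omitted here.)
    object InjectRowLevelFilter extends Rule[LogicalPlan] {
      override def apply(plan: LogicalPlan): LogicalPlan = plan transformUp {
        case rel: LogicalRelation =>
          // Keep only rows where the (assumed) "region" column equals "EMEA".
          rel.output.find(_.name == "region")
            .map(col => Filter(EqualTo(col, Literal("EMEA")), rel))
            .getOrElse(rel)
      }
    }

A rule like this is typically registered through Spark's SparkSessionExtensions hook, for example via injectResolutionRule.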

In certain cases, however, the incoming LogicalPlan contains multiple filter conditions. The plugin can evaluate these incorrectly, causing queries to fail with runtime exceptions.

For instance, the plugin might throw a ClassCastException if it encounters a filter condition that is incompatible with the expected data type. Such issues typically arise when a filter is applied to a column that has been transformed or implicitly cast to a different type.
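
The snippet below sketches a query shape of the kind described: one predicate applied to a column re-typed via CAST plus a second predicate, which yields stacked filter conditions (and a Cast inside a Filter) in the plan. The sales table and its order_id and region columns are hypothetical; explain(true) prints the parsed, analyzed, and optimized plans so you can inspect the Filter nodes.

    // Hypothetical query shape: one predicate on a CAST column plus a second
    // predicate, producing stacked filter conditions in the LogicalPlan.
    val df = spark.sql(
      """SELECT * FROM sales
        |WHERE CAST(order_id AS STRING) = '42'
        |  AND region = 'EMEA'""".stripMargin)

    // Inspect the analyzed plan for Filter nodes wrapping Cast expressions.
    df.explain(true)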

To troubleshoot or mitigate this issue, you can disable the plugin's modification of the LogicalPlan by following the steps below:

  1. Go to the Databricks cluster where you want to apply the configuration.
  2. Click the Edit button on the cluster page.
  3. Under Advanced options, navigate to the Spark tab.
  4. In the Spark config section, add the following Spark property to skip the plugin's transformation logic for Filter conditions (a verification sketch follows these steps).
    spark.hadoop.privacera.fgac.wa.partition.filter.enable false

  5. Click the Confirm button to save the changes.
  6. Click Restart to restart the cluster.
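
After the cluster restarts, one way to confirm the property took effect is to read it back from a notebook cell. Spark copies spark.hadoop.* entries into the Hadoop Configuration with the spark.hadoop. prefix stripped, so the value is available there:

    // Spark copies "spark.hadoop.*" entries into the Hadoop Configuration
    // with the "spark.hadoop." prefix stripped, so read it back from there.
    val value = spark.sparkContext.hadoopConfiguration
      .get("privacera.fgac.wa.partition.filter.enable")
    println(s"privacera.fgac.wa.partition.filter.enable = $value")  // expect: false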

Tip

If the Privacera Row Level Filter policy is enabled and the issue persists, enable the Enhanced Extension. For configuration steps, refer to the advanced configurations documentation.
