
Spark Configuration Table Properties

Fine-grained Access Control


| Property | Description | Default Value | Possible Values |
| --- | --- | --- | --- |
| spark.databricks.isv.product | Specifies the partnership with Privacera. This property must be set via the Spark Config UI only. | | privacera |
| spark.databricks.cluster.profile | Enables multi-user concurrency on the cluster. | | serverless |
| spark.databricks.pyspark.enableProcessIsolation | Enable this for additional security. It removes DBFS mounting on the cluster and blocks users from directly accessing the IAM role. | true | true, false |
| spark.databricks.pyspark.dbconnect.enableProcessIsolation | Enable this for additional security when Databricks Connect is used. It removes DBFS mounting on the cluster and blocks users from directly accessing the IAM role. | true | true, false |
| spark.databricks.pyspark.enablePy4JSecurity | Enable this to block Python libraries that could bypass security. | true | true, false |
| spark.databricks.repl.allowedLanguages | Sets the allowed Databricks notebook languages. Scala is excluded because Scala queries run as the root user and can bypass Privacera. | | sql,python,r |
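Taken together, the properties above could be entered in the cluster's Spark Config UI as newline-separated `key value` pairs. A sketch, using the values from the table:

```
spark.databricks.isv.product privacera
spark.databricks.cluster.profile serverless
spark.databricks.pyspark.enableProcessIsolation true
spark.databricks.pyspark.dbconnect.enableProcessIsolation true
spark.databricks.pyspark.enablePy4JSecurity true
spark.databricks.repl.allowedLanguages sql,python,r
```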

| Property | Description | Default Value | Possible Values |
| --- | --- | --- | --- |
| spark.driver.extraJavaOptions | Enables the Privacera plugin. This property must be set via the Spark Config UI only. | | -javaagent:/databricks/jars/privacera-agent.jar |
| spark.hadoop.privacera.custom.current_user.udf.names | Maps the logged-in user to the Ranger user for row-filter policies. Any valid function name may be used, but it must match the function used in the row-filter current_user condition. | | current_user() |
| spark.hadoop.privacera.spark.view.levelmaskingrowfilter.extension.enable | Enables view-level access control (using the Data_admin feature), view-level column masking, and view-level row filtering. | false | true, false |
| spark.hadoop.privacera.spark.rowfilter.extension.enable | Enables/disables row filtering on tables. | true | true, false |
| spark.hadoop.privacera.spark.masking.extension.enable | Enables/disables column masking on tables. | true | true, false |
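As a sketch, the Privacera extension properties above might be combined in the Spark Config UI as follows. The non-default values shown here (enabling view-level extensions, keeping row filtering and masking on) are illustrative assumptions, not required settings:

```
spark.driver.extraJavaOptions -javaagent:/databricks/jars/privacera-agent.jar
spark.hadoop.privacera.spark.view.levelmaskingrowfilter.extension.enable true
spark.hadoop.privacera.spark.rowfilter.extension.enable true
spark.hadoop.privacera.spark.masking.extension.enable true
```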

Object-level Access Control


| Property | Description | Default Value | Possible Values |
| --- | --- | --- | --- |
| spark.databricks.isv.product | Specifies the partnership with Privacera. This property must be set via the Spark Config UI only. | | privacera |

| Property | Description | Default Value | Possible Values |
| --- | --- | --- | --- |
| spark.driver.extraJavaOptions | Enables code injection for Privacera authorization. This property must be set via the Spark Config UI only. | | -javaagent:/databricks/jars/privacera-agent.jar |
| spark.executor.extraJavaOptions | Enables code injection for Privacera authorization. This property must be set via the Spark Config UI only. | | -javaagent:/databricks/jars/privacera-agent.jar |
| spark.hadoop.privacera.olac.ignore.paths | By default, the Dataserver performs access control on all folder and bucket paths. Setting this optional property lets you use IAM roles instead for specific paths. To add multiple paths, use comma-separated values. | | s3://my-team-bucket |
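A sketch of a complete object-level access control entry in the Spark Config UI, assuming the single bucket path from the table above (additional paths would be appended, comma-separated, on the same line):

```
spark.databricks.isv.product privacera
spark.driver.extraJavaOptions -javaagent:/databricks/jars/privacera-agent.jar
spark.executor.extraJavaOptions -javaagent:/databricks/jars/privacera-agent.jar
spark.hadoop.privacera.olac.ignore.paths s3://my-team-bucket
```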