
Databricks#

This topic describes how to connect a Databricks application to PrivaceraCloud on the AWS and Azure platforms.

  1. Go to Settings > Applications.

  2. In the Applications screen, select Databricks.

  3. Select the platform type (AWS or Azure) on which you want to configure the Databricks application.

  4. Enter the application Name and Description, and then click Save.

  5. Click the toggle button to enable Access Management for Databricks.

Databricks Spark Fine-Grained Access Control Plugin [FGAC]#

PrivaceraCloud integrates with Databricks SQL using the Plug-In integration method and an account-specific, cluster-scoped initialization script. Privacera's Spark Plug-In is installed on the Databricks cluster to enable Fine-Grained Access Control. The script is added to your cluster as an init script that runs at cluster startup; each time the cluster restarts, the init script runs and connects the cluster to PrivaceraCloud.

Note

Accounts upgrading from PrivaceraCloud 2.0 to PrivaceraCloud 2.1 and intending to use Privacera Encryption with Databricks must re-install the init script to Databricks.

Prerequisites#

Ensure that the following prerequisites are met:

  • You must have an existing Databricks account and login credentials with sufficient privileges to manage your Databricks cluster.

  • You must have PrivaceraCloud portal admin user access.

This setup is recommended for SQL, Python, and R language notebooks.

  • It provides FGAC on databases with row filtering and column masking features.
  • It uses privacera_hive, privacera_s3, privacera_adls, privacera_files services for resource-based access control, and privacera_tag service for tag-based access control.
  • It uses the plugin implementation from Privacera.

    Note

    • If you are using Scala notebooks, OLAC is recommended. See the Databricks Spark Object-level Access Control Plugin [OLAC] section below.

    • OLAC and FGAC methods are mutually exclusive and cannot be enabled on the same cluster.

    • The OLAC plugin was introduced to provide an alternative solution for Scala language clusters, because using the Scala language on Databricks Spark has some security concerns.

Steps#

  1. Log in to the PrivaceraCloud portal as an admin user (role ROLE_ACCOUNT_ADMIN).

  2. Generate the new API key and init script. For more information, see API Key.

  3. In the Databricks Init Script section, click DOWNLOAD SCRIPT.

    By default, this script is named privacera_databricks.sh. Save it to a local filesystem or shared storage.

  4. Log in to your Databricks account using credentials with sufficient account management privileges. 

  5. Copy the Init script to your Databricks cluster. This can be done via the UI or using the Databricks CLI.

    1. Using the Databricks UI:

      1. On the left navigation, click the Data icon.

      2. Click the Add Data button from the upper right corner.

      3. In the Create New Table dialog, select Upload File, and then click browse. 

      4. Select privacera_databricks.sh, and then click Open to upload it. 

        Once the file is uploaded, the dialog displays the uploaded file path; you will need this path in a later step.

        The file is uploaded to a path such as /FileStore/tables/privacera_databricks.sh.

    2. Using the Databricks CLI, copy the script to a location in DBFS:

      databricks fs cp ~/<sourcepath_privacera_databricks.sh> dbfs:/<destination_path>
      

      For example:

      databricks fs cp ~/Downloads/privacera_databricks.sh dbfs:/FileStore/tables/
      

  6. You can add PrivaceraCloud to an existing cluster, or create a new cluster and attach PrivaceraCloud to that cluster.

    a. In the Databricks navigation panel, select Clusters.

    b. Choose a cluster name from the list provided and click Edit to open the configuration dialog page.

    c. Open Advanced Options and select the Init Scripts tab.

    d. Enter the DBFS init script path name you copied earlier.

    e. Click Add.

    f. From Advanced Options, select the Spark tab. Add the following Spark configuration content to the Spark Config edit window. For more information on the properties, see Spark Configuration Table Properties.

    New Properties:

    spark.databricks.isv.product privacera
    spark.databricks.cluster.profile serverless
    spark.databricks.delta.formatCheck.enabled false
    spark.driver.extraJavaOptions -javaagent:/databricks/jars/privacera-agent.jar
    spark.databricks.repl.allowedLanguages sql,python,r

    Old Properties:

    spark.databricks.isv.product privacera
    spark.databricks.cluster.profile serverless
    spark.databricks.delta.formatCheck.enabled false
    spark.driver.extraJavaOptions -javaagent:/databricks/jars/ranger-spark-plugin-faccess-2.0.0-SNAPSHOT.jar
    spark.databricks.repl.allowedLanguages sql,python,r
    

    Note

    • From PrivaceraCloud release 4.1.0.1 onwards, it is recommended to replace the Old Properties with the New Properties. However, the Old Properties will also continue to work.

    • For Databricks Runtime versions <= 8.2, only the Old Properties should be used, since these versions are in extended support.

    • If you are upgrading the Databricks Runtime from an existing version (6.4-8.2) to version 8.3 or higher, contact your Privacera technical sales representative for assistance.

  7. Restart the Databricks cluster.
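The cluster changes in steps 6 and 7 can also be applied through the Databricks REST API. The sketch below builds the JSON payload for the Clusters API `/api/2.0/clusters/edit` endpoint with the init script path and the New Properties; the cluster ID, name, Spark version, and node type are placeholders you must replace with your own cluster's values, and the payload is only printed here, not sent.

```python
import json

# Placeholders -- substitute the values of your own cluster.
CLUSTER_ID = "<your-cluster-id>"
INIT_SCRIPT_PATH = "dbfs:/FileStore/tables/privacera_databricks.sh"

# New FGAC Spark properties from the Spark Config step above.
fgac_spark_conf = {
    "spark.databricks.isv.product": "privacera",
    "spark.databricks.cluster.profile": "serverless",
    "spark.databricks.delta.formatCheck.enabled": "false",
    "spark.driver.extraJavaOptions": "-javaagent:/databricks/jars/privacera-agent.jar",
    "spark.databricks.repl.allowedLanguages": "sql,python,r",
}

# Payload for POST /api/2.0/clusters/edit; cluster_name, spark_version,
# node_type_id, and num_workers must match your existing configuration.
payload = {
    "cluster_id": CLUSTER_ID,
    "cluster_name": "<your-cluster-name>",
    "spark_version": "<your-spark-version>",
    "node_type_id": "<your-node-type-id>",
    "num_workers": 2,
    "init_scripts": [{"dbfs": {"destination": INIT_SCRIPT_PATH}}],
    "spark_conf": fgac_spark_conf,
}

print(json.dumps(payload, indent=2))
```

Editing a running cluster through this endpoint typically restarts it, which also covers step 7. You would send the payload with a personal access token, for example: `curl -X POST -H "Authorization: Bearer <token>" -d @payload.json https://<workspace-url>/api/2.0/clusters/edit`.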


Validate Installation#

Confirm connectivity by executing a simple data access sequence and then examining the PrivaceraCloud audit stream. 

You will see the corresponding events under Access Manager > Audits.

Example data access sequence:

  1. Create or open an existing Notebook. Associate the Notebook with the Databricks cluster you secured in the steps above.

  2. Run a SQL show tables command in the Notebook, for example:

    %sql
    show tables

  3. In PrivaceraCloud, go to Access Manager > Audits to view the monitored data access.

  4. Create a Deny policy, run this same SQL access sequence a second time, and confirm corresponding Denied events.

Databricks Spark Object-level Access Control Plugin [OLAC]#

This section outlines the steps needed to set up Object-Level Access Control (OLAC) on Databricks clusters. This setup is recommended for Scala language notebooks.

  • It provides OLAC on S3 locations accessed via Spark.
  • It uses privacera_s3 service for resource-based access control and privacera_tag service for tag-based access control.
  • It uses the signed-authorization implementation from Privacera.

    Note

    • If you are using SQL, Python, or R language notebooks, FGAC is recommended. See the Databricks Spark Fine-Grained Access Control Plugin [FGAC] section above.

    • OLAC and FGAC methods are mutually exclusive and cannot be enabled on the same cluster.

    • The OLAC plugin was introduced to provide an alternative solution for Scala language clusters, because using the Scala language on Databricks Spark has some security concerns.

Prerequisites#

Ensure that the following prerequisites are met:

  • You must have an existing Databricks account and login credentials with sufficient privileges to manage your Databricks cluster.

  • You must have PrivaceraCloud portal admin user access.

Steps#

Note

For working with Delta format files, configure the AWS S3 application using IAM role permissions.

  1. Create a new AWS S3 Databricks connection. For more information, see Create S3 application.

    After creating the S3 application:

    1. In the BASIC tab, provide Access Key, Secret Key, or an IAM Role. For more information, see Create S3 application.

    2. In the ADVANCED tab, add the following property:

      dataserver.databricks.allowed.urls=<DATABRICKS_URL_LIST>
      

      where <DATABRICKS_URL_LIST> is a comma-separated list of the target Databricks cluster URLs.

      For example,

      dataserver.databricks.allowed.urls=https://dbc-yyyyyyyy-xxxx.cloud.databricks.com/

    3. Click Save.

  2. If you are updating an S3 application:

    1. Go to Settings > Applications > S3, and click the pen icon to edit properties.

    2. Click the toggle button of a service you wish to enable.

    3. In the ADVANCED tab, add the following property:

      dataserver.databricks.allowed.urls=<DATABRICKS_URL_LIST>
      
      where <DATABRICKS_URL_LIST> is a comma-separated list of the target Databricks cluster URLs. For example:

      dataserver.databricks.allowed.urls=https://dbc-yyyyyyyy-xxxx.cloud.databricks.com/

    4. Save your configuration.

  3. Download the Databricks init script.

    a. Log in to the PrivaceraCloud portal.

    b. Generate the new API key and init script. For more information, see API Key.

    c. On the Databricks Init Script section, click the DOWNLOAD SCRIPT button.

    By default, this script is named privacera_databricks.sh. Save it to a local filesystem or shared storage.

  4. Upload the Databricks init script to your Databricks clusters.

    a. Log in to your Databricks account using administrator privileges.

    b. On the left navigation, click the Data icon.

    c. Click Add Data from the upper right corner.

    d. From the Create New Table dialog box, select Upload File, then select and open privacera_databricks.sh.

    e. Copy the full storage path onto your clipboard.

  5. Add the Databricks init script to your target Databricks clusters:

    a. In the Databricks navigation panel select Clusters.  

    b. Choose a cluster name from the list provided and click Edit to open the configuration dialog page.

    c. Open Advanced Options and select the Init Scripts tab.

    d. Enter the DBFS init script path name you copied earlier.

    e. Click Add.

    f. From Advanced Options, select the Spark tab. Add the following Spark configuration content to the Spark Config edit window. For more information on the properties, see Spark Configuration Table Properties.

    New Properties:

    spark.databricks.isv.product privacera
    spark.databricks.repl.allowedLanguages sql,python,r,scala
    spark.driver.extraJavaOptions -javaagent:/databricks/jars/privacera-agent.jar
    spark.executor.extraJavaOptions -javaagent:/databricks/jars/privacera-agent.jar
    spark.databricks.delta.formatCheck.enabled false

    With the New Properties, also add the following property in the Environment Variables text box:

    PRIVACERA_PLUGIN_TYPE=OLAC

    Old Properties:

    spark.databricks.isv.product privacera
    spark.databricks.repl.allowedLanguages sql,python,r,scala
    spark.driver.extraJavaOptions -javaagent:/databricks/jars/ranger-spark-plugin-faccess-2.0.0-SNAPSHOT.jar
    spark.hadoop.fs.s3.impl com.databricks.s3a.PrivaceraDatabricksS3AFileSystem
    spark.hadoop.fs.s3n.impl com.databricks.s3a.PrivaceraDatabricksS3AFileSystem
    spark.hadoop.fs.s3a.impl com.databricks.s3a.PrivaceraDatabricksS3AFileSystem
    spark.executor.extraJavaOptions -javaagent:/databricks/jars/ranger-spark-plugin-faccess-2.0.0-SNAPSHOT.jar
    spark.hadoop.signed.url.enable true
    

    Note

    • From PrivaceraCloud release 4.1.0.1 onwards, it is recommended to replace the Old Properties with the New Properties. However, the Old Properties will also continue to work.

    • For Databricks Runtime versions <= 8.2, only the Old Properties should be used, since these versions are in extended support.

    • If you are upgrading the Databricks Runtime from an existing version (6.4-8.2) to version 8.3 or higher, contact your Privacera technical sales representative for assistance.

    g. Save and close. 

    h. Restart the Databricks Cluster.

Your S3 Databricks cluster data resource is now available for Access Manager Policy Management, under Access Manager > Resource Policies, Service "privacera_s3".
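As with FGAC, the OLAC cluster settings above can be expressed as a Databricks Clusters API `/api/2.0/clusters/edit` payload. The sketch below combines the New Properties, the init script path, and the PRIVACERA_PLUGIN_TYPE environment variable from step 5; the cluster ID and the remaining cluster fields are placeholders for your environment, and the payload is only printed, not sent.

```python
import json

# Placeholders -- substitute the values of your own cluster.
CLUSTER_ID = "<your-cluster-id>"
INIT_SCRIPT_PATH = "dbfs:/FileStore/tables/privacera_databricks.sh"

# New OLAC Spark properties from the Spark Config step above.
olac_spark_conf = {
    "spark.databricks.isv.product": "privacera",
    "spark.databricks.repl.allowedLanguages": "sql,python,r,scala",
    "spark.driver.extraJavaOptions": "-javaagent:/databricks/jars/privacera-agent.jar",
    "spark.executor.extraJavaOptions": "-javaagent:/databricks/jars/privacera-agent.jar",
    "spark.databricks.delta.formatCheck.enabled": "false",
}

# Partial payload for POST /api/2.0/clusters/edit; the remaining required
# fields (cluster_name, spark_version, node_type_id, workers) must match
# your existing cluster configuration.
payload = {
    "cluster_id": CLUSTER_ID,
    "init_scripts": [{"dbfs": {"destination": INIT_SCRIPT_PATH}}],
    "spark_conf": olac_spark_conf,
    # Environment variable that selects the OLAC plugin mode.
    "spark_env_vars": {"PRIVACERA_PLUGIN_TYPE": "OLAC"},
}

print(json.dumps(payload, indent=2))
```

Note the two differences from the FGAC payload: spark_env_vars switches the plugin to OLAC mode, and spark.databricks.repl.allowedLanguages additionally permits scala.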


Last update: March 25, 2022