Setup Databricks Spark SQL on Privacera Portal
These steps apply to both Self-Managed and Data Plane deployments.
- For Self-Managed, log in to the Privacera Portal; for Data Plane, log in to the Privacera Discovery Admin Console.
- Navigate to Settings > Data Source Registration.
- Under the system name you added, click the more icon, and then select Add Application > DATABRICKS SPARK SQL.
- Under Configure JDBC Application, provide values for the Application Name and Application Code fields.
- Under Application Properties, provide values for the following:

Important

Identify a user with the appropriate permissions in your data source; scanning requires read permission. That user's login credentials are used for the JDBC Username and JDBC Password properties below.
- JDBC Url: jdbc:hive2://<domainName>:<port>/default;transportMode=http;ssl=true;httpPath=sql/protocolv1/o/0/xxxx-xxxxxx-xxxxxxxx;AuthMech=3;
- JDBC Username: <user_with_readwrite_permission>
- JDBC Password: <login_credentials_of_identified_user>
- JDBC Driver Class: org.apache.hive.jdbc.HiveDriver

For more information about the driver class, refer to List of JDBC Driver Class for all datasources.
Tip
You don't need to update any other fields during the initial setup. You can update them later as needed.
- Click TEST CONNECTION. Ensure that a success message is displayed. If the connection fails, you can verify the JDBC properties outside the portal using the sketch at the end of this page.
- Click SAVE.
You can start using the connector to scan Databricks Spark SQL resources by configuring the targets to be scanned. For more information, refer to Setup for Discovery Scanning.
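
If TEST CONNECTION fails, it can help to verify the same JDBC properties outside the portal. The sketch below is a minimal example, not part of the product: it assumes the open-source Hive JDBC driver (the org.apache.hive.jdbc.HiveDriver class configured above) and its dependencies are on your classpath, the class name DatabricksJdbcCheck is arbitrary, and every angle-bracket value is a placeholder you must replace with your own workspace values; whether your driver version honors every URL parameter (for example AuthMech) may vary.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Stand-alone connectivity check mirroring the portal's JDBC application
 * properties. All angle-bracket values are placeholders for your own
 * Databricks workspace domain, port, HTTP path, and credentials.
 */
public class DatabricksJdbcCheck {
    public static void main(String[] args) throws Exception {
        // Same driver class as configured in the JDBC Driver Class property.
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Same format as the JDBC Url property above; replace the placeholders.
        String url = "jdbc:hive2://<domainName>:<port>/default;"
                + "transportMode=http;ssl=true;"
                + "httpPath=sql/protocolv1/o/0/xxxx-xxxxxx-xxxxxxxx;AuthMech=3;";
        String user = "<user_with_readwrite_permission>";            // JDBC Username
        String password = "<login_credentials_of_identified_user>";  // JDBC Password

        // A successful SHOW DATABASES confirms that the URL, credentials, and
        // driver class work together, which is what TEST CONNECTION checks.
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

Run it with the hive-jdbc jar and its transitive dependencies on the classpath; if the query prints the databases visible to the configured user, the same values should also pass TEST CONNECTION in the portal.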