Resource policies
Policies overview
Concepts in Access Management
For conceptual background, see How Access Management Works.
View and manage resource services. The Resource Policies page shows your services grouped by service type. A resource service consists of a connection to one or more datasources and a set of policies that control access to data in those repositories. A service type is a collection of services sharing similar attributes and configuration parameters.
Service/service group global actions
On the Resource Policies page, you can filter the view and import/export policies.
Add a new resource-based service. Service types have some common attributes as well as attributes specific to that service type.
Export services in JSON-formatted policy sets.
Import a previously exported policy set.
View policy details
Click a service to open to the Policy definition and management page. Each policy definition row shows key attributes:
Policy ID: Each policy is assigned a numeric identifier. These IDs are monotonically incremented and unique within each PrivaceraCloud account. Policy identifiers are referenced in audit trail event messages, so that each action recorded in the audit trail is associated with a specific policy.
Policy Name: Policies are assigned a name, either by the system or by a user. System-created policy names can be changed.
Validity Period: A policy can be defined to be effective only for a period of time. Start and End date/times may be defined (to the minute) with a selectable Time Zone. Use the Add Validity Period button in the upper right to set a validity period for this policy.
Policy Label: Policies may be assigned a new or existing label. Labels assist in filtering policies and in search reports.
Resource Specifier: Underneath the Policy Label field are the Resource specifiers. These differ for each type of resource, and the set of specifiers changes depending on the selections made above them. For example, by default a Hive resource displays fields for 'database', 'table', and 'column'.
Autocomplete is available when adding resources. When you enter the first character in a resource field, autocomplete displays the resources (databases, tables, or columns) available in the data source. The wildcard character "*" is also supported when adding resources.
Note
The autocomplete feature is supported only on the resource fields of PolicySync connectors.
Condition Sets: The rules used to allow or deny access to resources. Condition sets are made up of permissions, users, groups, and roles. The permission selection list will be specific to the type of service. (For example, for the ADLS service, the permission set is {read, write, delete, metadata read, metadata write, admin}.) There are four sets of access conditions (rules):
Allow Conditions
Exclude from Allow Conditions
Deny Conditions
Exclude from Deny Conditions
At least one rule should be defined. Rules for the other condition sets may be omitted.
One or more default 'all...' policies are automatically created for each default-created service (those named "privacera_<service_type>"). The actual policy names vary by service type; for example, for 'hive' services the 'all' policy is named 'all - database', for database-repository-oriented services the default policy name is 'all - database, schema, table, column', and so on.
Creating Resource Based Policies
Concepts in Access Management
For conceptual background, see How Access Management Works.
Create and configure policies that control access to specific resources.
From the home page, click Access Management > Resource Policies.
Click a service in one of the service groups.
Click Add New Policy.
Configure the new resource policy.
Configuration Settings Common to All Policies
Policies contain access rules associated with a particular data source or a subset of it. Specific policy attributes differ depending on the policy type, but all policies contain the following attributes:
Policy Type: The basis for controlling access. For example, a policy can be based on the resource, on a tag, or on a scheme.
Policy Name: Policies are assigned a name, either by the system or by the portal user who creates them. Default, system-created policies can be renamed. The policy name must be unique and cannot be duplicated across the system.
Normal/Override: Select whether the policy is a 'Normal' or an 'Override' policy. If you select 'Override', the access permissions in this policy override the access permissions in existing policies.
Enable/Disable: By default, the policy is enabled. If the policy is not required, you can disable it by switching to 'Disabled' mode.
Policy Id: Each policy is assigned a numeric identifier. These IDs are incremented and unique within each account. Policy identifiers are referenced in audit trail event messages, so that each action recorded in the audit trail is associated with a specific policy.
Policy Label: A descriptive label that helps users find this policy when searching for policies and filtering policy lists.
Resource Specifier: These differ for each type of resource, and the set of specifiers changes depending on the selections made above them.
The autocomplete feature is available only if you have defined PolicySync connectors for the following services:
Postgres
Redshift
MSSQL
Snowflake
Databricks SQL
Validity Period: A policy can be defined to be effective only for a period of time. Start and End date/times may be defined (to the minute), with a selectable Time zone.
Description: A description of the policy that helps distinguish it from other policies.
Audit Logging: Enables or disables audit logging for this policy. Toggle to 'No' if this policy does not need to be audited. By default, it is set to 'Yes'.
Condition Sets: The rules that allow or deny access to a resource. Available permissions are specific to the type of service. There are four access conditions:
Allow Conditions
Exclude from Allow Conditions
Deny Conditions
Exclude from Deny Conditions
At least one rule must be defined. One or more default 'all...' policies are automatically created for each default-created service (those named "privacera_<service_type>"). Policy names reflect the type of service.
Service-Specific Policy Configuration Settings
Service Name | Supported Policy Type |
---|---|
Hive, Presto, MS SQL, Postgres, Snowflake | Access, Masking, Row Level Filter |
S3, DynamoDB, Athena, Glue, Redshift, Kinesis, Lambda, ADLS, Kafka, PowerBI, GCS, GBQ, and Files | Access |
Hive
Database: Specify the database name.
Table/UDF: Specify the table or UDF name.
Column: Specify the column name.
Note
By default, the 'Include' option is selected to allow access for all the above fields. If you want to deny access instead, toggle to the 'Exclude' option.
URL: Specify the cloud storage path. For example - s3a://user/poc/sales.txt where the end-user permission is needed to read/write the Hive data from/to a cloud storage path.
Recursive
Non-recursive
Global: Specify global dataset.
Allow Conditions:
Policy Conditions: This option allows you to add custom conditions that are evaluated during authorization requests.
Accessed Together ?: This condition allows access to the specified columns (minimum of 2) only when they are referenced together in a query.
For example: default.employeepersonalview.EMP_SSN, default.employeepersonalview.CC
The condition above allows a user to access the EMP_SSN and CC columns only when both are referenced together in the query; otherwise, a permission denied error is returned.
Not Accessed Together?: This condition denies access to the specified columns (minimum of 2) when they are referenced together in a query.
For example: default.employeepersonalview.EMP_SSN, default.employeepersonalview.CC
The condition above denies a user access to the EMP_SSN and CC columns when both are referenced together in the query, returning a permission denied error.
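To illustrate, the following query (a hypothetical example using the table and columns above) references both columns together, so an Accessed Together? condition would allow it while a Not Accessed Together? condition would deny it:

```sql
-- Both EMP_SSN and CC appear in the same query
SELECT EMP_SSN, CC
FROM default.employeepersonalview;
```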
Permission: Add permissions as required. The available permissions are:
Select
Update
Create
Drop
Alter
Index
Lock
All
Read
Write
Hive - Masking Policy
Hive Database: Select the appropriate database. This field holds the list of Hive databases.
Hive Table: Select the appropriate table. This field holds the list of Hive tables.
Hive Column: Select the appropriate column. This field holds the list of Hive columns.
Masking Conditions:
Permissions: Tick the permission as 'Select'. At present, only 'Select' permission is available.
Select Masking Options: You are allowed to select only one masking option from the below list -
Redact: This option masks all the alphabetic characters with 'x' and all numeric characters with 'n'.
Partial mask: show last 4 – This option shows only the last four characters.
Partial mask: show first 4 – This option shows only the first four characters.
Hash: This option replaces the entire cell value with '#' characters.
Nullify: This option replaces the cell value with NULL.
Unmasked (retain original value): This option is used when no masking is required.
Date: show only year: This option shows only the year portion of a date string and defaults the month and day to 01/01.
Custom: Using this option, you specify a custom masked value or expression. Custom masking can use any valid Hive UDF that returns the same data type as the column being masked, as in the sketch below.
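For example, a custom expression can call a built-in Hive masking UDF. This is a minimal sketch, assuming a hypothetical string column named email:

```sql
-- mask_hash() is a built-in Hive UDF that replaces a string value with its hash
mask_hash(email)
```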
Hive - Row Level Filter
Hive Database: Enter the appropriate database name.
Hive Table: Enter the appropriate table name.
Row Level Conditions:
Permissions: Click Add Permissions and tick 'Select'. At present, only the 'Select' permission is available.
Row Level Filter: Click Add Row Filter and enter a valid SQL predicate; the filter is applied for the selected roles/groups/users. Note: Row-level filtering works by adding the predicate to the query, so if the predicate is not valid SQL, the query will fail. If you do not want to apply a row-level filter, leave this field blank; in that case, only 'Select' access is applied. See the example predicate below.
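A row filter predicate is the body of a WHERE clause. This is a minimal sketch, assuming a hypothetical sales table with a region column:

```sql
-- Users matched by this policy item see only rows where region is 'EMEA'
region = 'EMEA'
```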
AWS S3
Bucket Name: Specify the bucket name. For example: aws-athena-query-result
Note: Wildcard characters such as '*' are allowed if you want to give access to all buckets.
Object Path: Specify the object path. It accepts wildcard characters such as '*'.
Recursive: Allows access to the folders beneath the specified object path.
Non-recursive: Allows access only to the specified object path.
Example:
If the Bucket name is {bucket-AWS} and the Object path is {path1}:
Sample 1: s3://bucket-AWS/path1/
Sample 2: s3://bucket-AWS/path1/path2/
If the Recursive toggle is enabled (the default behavior), you can view all files within the path1 and path2 folders.
If the Recursive toggle is disabled, you cannot view any files in the path1 folder.
Allow Conditions:
Permissions:
Read: READ permission on the URL permits the user to perform HiveServer2 operations which use S3 as a data source for Hive tables.
Write: WRITE permission on the URL permits the user to perform HiveServer2 operations which write data to the specified S3 location.
Delete: DELETE permission allows you to delete the resource.
Metadata Read: METADATA READ permission allows you to run HEAD operations on objects. It also allows listing buckets, listing objects, and retrieving object metadata.
Metadata Write: METADATA WRITE permission allows you to modify an object's metadata, including its ACL, tagging, CORS configuration, and so on.
Admin: Administrators can edit or delete the policy, and can also create child policies based on the original policy.
Presto
Catalog: Specify the catalog name.
Schema: Specify the schema name.
Sessionproperty: Specify the session property.
Table: Specify the table name.
Procedure: Specify the procedure name.
Column: Specify the column name.
Prestouser: Specify the Presto user name.
Systemproperty: Specify the system property.
Function: Specify the function name.
Allow Conditions:
Permissions:
Select
Insert
Create
Drop
Delete
Use
Alter
Grant
Revoke
Show
Impersonate
All
Execute
Create View
Delegate Admin: Assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Presto - Masking Policy
Presto Catalog
Presto Schema
Presto Table
Presto Column
Masking Conditions:
Permissions
Select: Tick the permission as 'Select'. At present, only 'Select' permission is available.
Select Masking Option: You are allowed to select only one masking option from the below list.
Redact: This option masks all the alphabetic characters with 'x' and all numeric characters with 'n'.
Partial mask: show last 4 – This option shows only the last four characters.
Partial mask: show first 4 – This option shows only the first four characters.
Hash: This option replaces the entire cell value with '#' characters.
Nullify: This option replaces the cell value with NULL.
Unmasked (retain original value): This option is used when no masking is required.
Date: show only year: This option shows only the year portion of a date string and defaults the month and day to 01/01.
Custom: Using this option, you specify a custom masked value or expression.
Presto - Row Level Filter
Presto Catalog
Presto Schema
Presto Table
Row Level Conditions:
Permissions: Click Add Permissions and tick 'Select'. At present, only the 'Select' permission is available.
Row Level Filter: Click Add Row Filter and enter a valid SQL predicate; the filter is applied for the selected roles/groups/users. Note: Row-level filtering works by adding the predicate to the query. If the predicate is not valid SQL, the query will fail.
DynamoDB
Table: Specify the table name.
Attribute: Specify the attribute name.
Allow Conditions
Permissions:
Read
Write
Create
Delete
List tables
Admin
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Athena
Workgroup: Specify the workgroup name of Athena.
Data source: Specify the name of the data source.
Database: Specify the name of the database.
Table: Specify the name of the table.
Column: Specify the name of the column.
URL: Specify the cloud storage path. For example - s3a://user/poc/sales.txt where the end-user permission is needed to access the data from/to a cloud storage path.
Allow Conditions:
Permissions:
BatchGetNamedQuery
BatchGetQueryExecution
CreateNamedQuery
CreateWorkGroup
DeleteNamedQuery
DeleteWorkGroup
GetNamedQuery
GetQueryExecution
GetQueryResults
GetWorkGroup
ListNamedQueries
ListQueryExecutions
ListTagsForResource
ListWorkGroups
StartQueryExecution
StopQueryExecution
TagResource
UntagResource
UpdateWorkGroup
Alter
Create
Describe
Drop
Insert
MSCK Repair
Select
Show
ListDataCatalogs
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Glue
Database: Specify the database name.
Table: Specify the table name.
Note: You can enter wildcard characters such as '*' in the above fields.
Allow Conditions:
Permissions:
GetCatalogImportStatus
GetDatabases
GetDatabase
GetTables
GetTable
CreateTable
CreateDatabase
DeleteDatabase
DeleteTable
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Redshift
Global: Specify the Redshift host IP. To get the Redshift host IP, connect to the Redshift environment and run this query: SELECT inet_server_addr() as host, inet_server_port() as port
Database: Specify the database name.
Schema: Specify the schema name.
Table: Specify the table name.
Column: Specify the column name.
Cluster: Specify the cluster IP.
Allow Condition:
Permissions:
Create Database
Create Schema
Usage Schema
Create Table
Select
Insert
Update
Delete
ListClusters
CreateCluster
UpdateCluster
DeleteCluster
ResizeCluster
PauseCluster
RebootCluster
CreateSnapshot
RestoreSnapshot
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Kinesis
Kinesis_Datastream: Specify the datastream name.
Kinesis_Firehose: Specify the firehose name.
Allow Conditions:
Permissions:
PutRecord
CreateDeliveryStream
DeleteDeliveryStream
ListDeliveryStreams
UpdateDestination
PutRecordBatch
ListTagsForDeliveryStream
StartDeliveryStreamEncryption
StopDeliveryStreamEncryption
TagDeliveryStream
UntagDeliveryStream
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Lambda
Function: Specify the function name of Lambda.
Layer: Specify the layer name of Lambda.
Note: You are allowed to enter wildcard characters such as '*'.
Allow Conditions:
Permissions:
ListAliases
ListEventSourceMappings
ListFunctionEventInvokeConfigs
ListFunctions
ListLayers
ListLayerVersions
ListProvisionedConcurrencyConfigs
ListVersionsByFunction
GetAccountSettings
GetAlias
GetEventSourceMapping
GetFunction
GetFunctionConcurrency
GetFunctionConfiguration
GetFunctionEventInvokeConfig
GetLayerVersion
GetLayerVersionByArn
GetLayerVersionPolicy
GetPolicy
GetProvisionedConcurrencyConfig
ListTags
CreateAlias
CreateEventSourceMapping
CreateFunction
DeleteAlias
DeleteEventSourceMapping
DeleteFunction
DeleteFunctionConcurrency
DeleteFunctionEventInvokeConfig
DeleteLayerVersion
DeleteProvisionedConcurrencyConfig
InvokeFunction
PublishLayerVersion
PublishVersion
PutFunctionConcurrency
PutFunctionEventInvokeConfig
PutProvisionedConcurrencyConfig
TagResource
UntagResource
UpdateAlias
UpdateEventSourceMapping
UpdateFunctionCode
UpdateFunctionConfiguration
UpdateFunctionEventInvokeConfig
AddLayerVersionPermission
AddPermission
RemoveLayerVersionPermission
RemovePermission
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
MSSQL
Database
Schema
Table
Column
Allow Conditions:
Permissions
Create Database
Create Schema
Create Table
Select
Insert
Update
Delete
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
MSSQL - Masking Policy
Database
Schema
Table
Column
Masking Conditions:
Permissions
Select
Select Masking Options:
Default
Nullify: This option replaces all the characters with NULL value.
Unmasked: This option is used when no masking is required.
Custom: Using this option you need to mention a custom masked value or expression.
MSSQL - Row Level Filter
Database
Schema
Table
Row Level Conditions:
Permissions: Click Add Permissions and tick 'Select'. At present, only the 'Select' permission is available.
Row Level Filter: Click Add Row Filter and enter a valid SQL predicate; the filter is applied for the selected roles/groups/users. Note: Row-level filtering works by adding the predicate to the query. If the predicate is not valid SQL, the query will fail.
ADLS
Account Name
Container Name
Object Path
Allow Conditions:
Permissions:
Read: READ permission on the URL permits the user to perform HiveServer2 operations which use ADLS as a data source for Hive tables.
Write: WRITE permission on the URL permits the user to perform HiveServer2 operations which write data to the specified ADLS location.
Delete: DELETE permission allows you to delete the resource.
Metadata Read: METADATA READ permission allows you to run HEAD operations on objects. It also allows listing containers, listing objects, and retrieving object metadata.
Metadata Write: METADATA WRITE permission allows you to modify an object's metadata, including its ACL, tagging, CORS configuration, and so on.
Admin: Administrators can edit or delete the policy, and can also create child policies based on the original policy.
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Postgres
Global
Database
Schema
Table
Column
Allow Conditions:
Permissions:
Create Database
Connect Database
Create Schema
Usage Schema
Create Table
Select
Insert
Update
Delete
Truncate
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Postgres - Masking Policy
Database
Schema
Table
Column
Masking Conditions:
Permissions
Select
Select Masking Option:
Default:
Nullify: This option replaces all the characters with NULL value.
Unmasked: This option is used when no masking is required.
Custom: Using this option you need to mention a custom masked value or expression.
Postgres - Row Level Filter
Database
Schema
Table
Row Level Conditions:
Permissions: Click Add Permissions and tick 'Select'. At present, only the 'Select' permission is available.
Row Level Filter: Click Add Row Filter and enter a valid SQL predicate; the filter is applied for the selected roles/groups/users. Note: Row-level filtering works by adding the predicate to the query. If the predicate is not valid SQL, the query will fail.
Kafka
Topic
Transactionalid
Cluster
Delegationtoken
Consumergroup
Policy Conditions
Add Conditions
Allow Conditions:
Policy Conditions
Add Conditions
Permissions
Consume
Describe
Delete
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Snowflake
Warehouse: Specify the warehouse name of Snowflake.
When you select warehouse, the following warehouse permissions will be displayed in the Allow Conditions > Permissions section: Operate, UseWarehouse, Monitor, Modify.
Database: Specify the database name.
When you select database, the following database permissions will be displayed in the Allow Conditions > Permissions section: CreateSchema, UseDB.
Schema: Specify the schema name.
When you select schema along with database, the following schema permissions will be displayed in the Allow Conditions > Permissions section: CreateTmpTable, CreateTable, UseSchema, CreateStream, CreateFunction, CreateProcedure, CreateSequence, CreatePipe, CreateFileFormat, CreateStage, CreateExternalTable.
Table: Specify the table name.
When you select table along with database and schema, the following table permissions will be displayed in the Allow Conditions > Permissions section: Select, Insert, Update, Delete, Truncate, References.
Stream: Specify the stream that you have created over standard tables.
When you select stream along with database and schema, the following stream permission will be displayed in the Allow Conditions > Permissions section: Select.
Function: Specify the function.
When you select function along with database and schema, the following function permission will be displayed in the Allow Conditions > Permissions section: Usage.
Procedure: Specify the Snowflake stored procedure.
When you select procedure along with database and schema, the following procedure permission will be displayed in the Allow Conditions > Permissions section: Usage.
File_Format: Specify the file format for SQL statements.
When you select file_format along with database and schema, the following file_format permission will be displayed in the Allow Conditions > Permissions section: Usage.
Pipe: Specify pipe objects that are created and managed to load data using Snowpipe.
When you select pipe along with database and schema, the following pipe permissions will be displayed in the Allow Conditions > Permissions section: Operate, Monitor.
External_stage: Specify external storage, which is the object storage of the cloud platform.
When you select external_stage along with database and schema, the following external_stage permission will be displayed in the Allow Conditions > Permissions section: Usage.
Internal_stage: Specify internal storage, which is the database storage.
When you select internal_stage along with database and schema, the following internal_stage permissions will be displayed in the Allow Conditions > Permissions section: Read, Write.
Sequence: Specify Snowflake sequence objects.
When you select sequence along with database and schema, the following sequence permission will be displayed in the Allow Conditions > Permissions section: Usage.
Column: Specify the column name.
When you select column along with database, schema, and table, the following column permissions will be displayed in the Allow Conditions > Permissions section: Select, Insert, Update, Delete, Truncate, References.
Global: Specify the Snowflake account name. To get the Snowflake account name, connect to the Snowflake environment and run this query: select current_account() as account
When you select global, the following global permissions will be displayed in the Allow Conditions > Permissions section: CreateWarehouse, CreateDatabase.
Delegate Admin: Select the Delegate Admin checkbox to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Note
When you create a policy for a table with UPDATE and DELETE permissions granted to a user/group/role, you must choose the SELECT permission along with it.
Snowflake - Masking Policy
Database: Specify the database name.
Schema: Specify the schema name.
Table/View: Specify the table or view name.
Column: Specify the column name.
Masking Conditions:
Permissions: Tick the permission as 'Select'. At present, only 'Select' permission is available.
Select Masking Option: If a masking option is applied to a data type that is not supported, then the default masking value is applied. You are allowed to select only one masking option from the following list:
Default: This option masks column with default value specified by its datatype's property.
The following are the default data type property values:
SNOWFLAKE_MASKED_NUMBER_VALUE=0
SNOWFLAKE_MASKED_DOUBLE_VALUE=0
SNOWFLAKE_MASKED_TEXT_VALUE='{{MASKED}}'
Hash: Returns a hex-encoded string containing the N-bit SHA-2 hash of the value in the column, where N is the specified output digest size.
Internal Function: SHA2({col})
Supported Data Type: Text
For more information see Snowflake Documentation.
Nullify: This option replaces all the characters with NULL value.
Supported Data Type: All Data Types
Unmasked (retain original value): This option is used when no masking is required.
Supported Data Type: All Data Types
Regular expression:
Internal Function: regexp_replace({col},'{value_or_expr}','{replace_value}')
Supported Data Type: Text
For more information see Snowflake Documentation.
Literal mask: This option replaces entire cell value with given character.
Supported Data Type: Text
Partial mask: show last 4 - This option shows only the last four characters.
Internal Function: regexp_replace({col},'(..)(.{4})(.)','***\2')
Supported Data Type: Text
For more information see Snowflake Documentation.
Partial mask: show first 4 - This option shows only the first four characters.
Internal Function: regexp_replace({col},'.','*','5')
Supported Data Type: Text
For more information see Snowflake Documentation.
Protect:
Supported Data Type: Text
For more information see /protect.
Unprotect:
Supported Data Type: Text
For more information see /unprotect.
Custom: Using this option you need to mention a custom masked value or expression.
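For instance, a custom expression for a text column might keep the first two characters and mask the rest. This is a sketch only, using the {col} placeholder shown in the internal functions above:

```sql
-- Hypothetical custom masking expression for a Snowflake text column
CONCAT(LEFT({col}, 2), '****')
```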
Snowflake - Row Level Filter
Database: Specify the database name.
Schema: Specify the schema name.
Table: Specify the table name.
Row Level Conditions:
Permissions: Click Add Permissions and tick 'Select'. At present, only the 'Select' permission is available.
Row Level Filter: Click Add Row Filter and enter a valid SQL predicate; the filter is applied for the selected roles/groups/users. Note: Row-level filtering works by adding the predicate to the query. If the predicate is not valid SQL, the query will fail.
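For example, a row filter predicate can reference a user attribute using the ${{USER.<attribute>}} syntax described in the ABAC section below. This is a sketch, assuming a hypothetical dept column:

```sql
-- Each user sees only rows whose dept matches their own dept attribute
dept = ${{USER.dept}}
```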
PowerBI
Workspace
Allow Conditions:
Permissions
Contributor
Member
Admin
None
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
GCS
Project ID
Bucket Name
Object Path
Recursive/Non-recursive:
Allow Conditions
Permissions:
Read: READ permission on the URL permits the user to perform HiveServer2 operations which use GCS as a data source for Hive tables.
Write: WRITE permission on the URL permits the user to perform HiveServer2 operations which write data to the specified GCS location.
Delete: DELETE permission allows you to delete the resource.
Metadata Read: METADATA READ permission allows you to run HEAD operations on objects. It also allows listing buckets, listing objects, and retrieving object metadata.
Metadata Write: METADATA WRITE permission allows you to modify an object's metadata, including its ACL, tagging, CORS configuration, and so on.
Admin: Administrators can edit or delete the policy, and can also create child policies based on the original policy.
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
GBQ
Project ID
Dataset Name
TableName
Column Name
Allow Conditions
Permissions
CreateTable
CreateTableAsSelect
CreateView
Delete
DropTable
DropView
Insert
Query
Update
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Files
Resource Path
Recursive/Non-Recursive:
Allow Conditions
Permissions
Read
Write
Delegate Admin: Select 'Delegate Admin' to assign administrator rights to the roles, groups, or users specified in the policy. The administrator can edit or delete the policy, and can also create child policies based on the original policy.
Databricks
By default, Databricks File System (DBFS) is protected by Privacera. This blocks common tasks like adding jars/libraries into the cluster. For example, when you try to install a library into a protected DBFS cluster, the following exception will be displayed:
Exception
Exception while installing a Jar in Databricks Cluster with Plugin enabled? java.lang.RuntimeException: ManagedLibraryInstallFailed: java.security.AccessControlException: Access denied for resource [dbfs:/local_disk0/tmp/addedFile4604599454488620309privacera_crypto_jar_with_dependencies-eba20.jar] action [READ] for library:JavaJarId(dbfs:/privacera/crypto/jars/privacera-crypto-jar-with-dependencies.jar,,NONE),isSharedLibrary=false
To grant permissions to read/write on DBFS, you need to create an access policy. Access to DBFS will be audited.
To create an access policy for Databricks, do the following:
Go to Access Management > Resource Policies > privacera_files.
Click Add New Policy.
Enter the following details:
Policy Name: Access to Temporary Folder for adding libraries
Resource: dbfs:/local_disk0/tmp
Note
Make sure the recursive box next to the Resource field is checked.
Group: public
Permission: read & write
Note
The above policy gives permission to all users. If you want to restrict it to certain users, grant the permission to the appropriate users or groups instead of the group public.
Configure Policy with Attribute-Based Access Control
Privacera enables use of user, group, resource, classification, and the environment attributes in authorization policies. Attribute-Based Access Control (ABAC) makes it possible to express authorization policies without prior knowledge of specific resources or specific users, which helps avoid the need for new policies as new resources or users are introduced.
For more information, see How access policy enforcement works.
Overview
With the ABAC feature, you can configure resource policies based on user attributes from your LDAP or AD service.
You can assign attributes to users, groups and tags in policies. You can also implement logical conditions on the user attributes for the resource policies.
Attributes can be referenced using expressions, for example:
    USER.employeeType != 'intern'
    TAG.piiType == 'email'
    TAG.sensitivityLevel <= USER.allowedSensitivityLevel
Attributes can be used to set up access control. For example, they can be used in row-filters, such as:
    dept = ${{USER.dept}}
    state in ( ${{GET_UG_ATTR_Q_CSV('state')}} )
Attributes can also be used in resource names, for example:
    path: /home/${{USER._name}}
    path: /departments/${{USER.dept}}
    database: dept_${{USER.dept}}
Ranger service-def update might be required to support conditions in policies. For example:
"policyConditions": [ { "name": "expression", "evaluator": "org.apache.ranger.plugin.conditionevaluator.RangerScriptConditionEvaluator", "label": "Enter boolean expression" } ]
User attributes are typically managed in LDAP, and synced to Privacera.
The Privacera Portal user interface enables you to view, add, and update user and group attributes.
This section covers:
Supported connectors for ABAC
How to sync new users from Privacera Ranger through the UI.
How to enable ABAC for a resource policy through the CLI.
How to test ABAC for a resource policy.
Supported connectors for ABAC
ABAC is supported for the following data sources.
Databricks/EMR Hive, Spark, and all services using privacera_hive service definitions
PolicySync Snowflake
S3
Note
For Databricks and all Hive-based services, ABAC is supported without any additional configuration. However, ABAC for S3 requires configuration as described in this section.
Prerequisites
Ensure the following prerequisites are met:
Import the users from the LDAP or AD directory to the Privacera Ranger database.
If you have not imported LDAP users yet, see LDAP / LDAP-S for Data Access User Synchronization for information.
Determine the resources you want to protect with ABAC-based policies.
Sync new users from Privacera Ranger
You need to add the new configuration in the resource policies to import only the new user entries from the Privacera Ranger database.
To add new configuration in the resource policies:
Log in to the Privacera Portal.
On the navigation menu, go to Access Management > Resource Policies.
On the S3 service, click the edit button.
The Add Service dialog will display.
In the Add New Configurations text box, add userstore.download.auth.users as a key and asterisk (*) as a value, and then click Save.
Enable ABAC in a resource policy
You need to update the service definition to enable user ABAC in your service.
Below is an example of configuring the service definition for S3.
To retrieve the current service definition for S3, run the following command:
curl -sS -L -k -u <User_Name>:<Password> -H "Content-type: application/json" -H "Accept: application/json" -X GET http://<YOUR_INSTANCE_IP>:6080/service/public/v2/api/servicedef/name/s3
You will get a response in JSON format. In the response body, the values of contextEnrichers and policyConditions will be blank:

    "policyConditions": [],
    "contextEnrichers": [],
    "enums": [],
    "dataMaskDef": {
        "maskTypes": [],
        "accessTypes": [],
        "resources": []
    },
    "rowFilterDef": {
        "accessTypes": [],
        "resources": []
    }
    }
Add the following policyConditions and contextEnrichers tags in the response body:

    "policyConditions": [{
        "itemId": 1,
        "name": "expression",
        "evaluator": "org.apache.ranger.plugin.conditionevaluator.RangerScriptConditionEvaluator",
        "evaluatorOptions": {
            "ui.isMultiline": "true",
            "engineName": "JavaScript"
        },
        "label": "Enter Attribute condition",
        "description": "Attribute condition"
    }],
    "contextEnrichers": [{
        "itemId": 1,
        "name": "UserEnricher",
        "enricher": "org.apache.ranger.plugin.contextenricher.RangerUserStoreEnricher",
        "enricherOptions": {
            "userStoreRetrieverClassName": "org.apache.ranger.plugin.contextenricher.RangerAdminUserStoreRetriever",
            "userStoreRefresherPollingInterval": "60000"
        }
    }]
Save the document in JSON format, for example as update.json.
Note
Make sure that the update.json file format and tags are correct and properly aligned.
To update the configuration, run the following command:
curl -sS -L -k -u admin:welcome1 -H "Content-type: application/json" -H "Accept: application/json" -X PUT http://<YOUR_INSTANCE_IP>:6080/service/public/v2/api/servicedef/name/s3 -d @update.json
In the response, you will get the updated JSON service definition.
Test ABAC in a resource policy
For testing, create two users with permissions to assume roles with the same tags; for example, "tony" and "odin". You can also use the logical condition operators ('&&' and '||') that are allowed in policy condition expressions.
To check available attributes for a user:
Go to the Privacera Portal.
Click Access Management > Users/Groups/Roles > Users.
Search user name, and then select Attributes.
Use givenName as an Attribute
Below are the attributes for "tony" and "odin". The attribute givenName will be used to define access permissions in the resource policy.
User attributes for “tony”

User attributes for “odin”

To edit an ABAC-based policy:
Go to your service, and click your service type.
A list of policies is displayed.
Click the policy in which you want to add attributes for "tony".
In the Bucket Name field, select the bucket in which you want to give access to “tony”.
In the Allow Conditions section:
In the Select Group field, select public.
In the Policy Conditions field, click Add Conditions +, and then enter an attribute condition, such as givenName=="tony".
Click Save.
Now, "tony" can access all the buckets:
But "odin" cannot access all the buckets which "tony" can, because we have not added user attributes for "odin" in the Policy Conditions.
Use a logical condition operator
If you want to add a logical condition on the attributes of the resource policy, do the following:
In the Policy Conditions field, click Add Conditions +, and then add one of the following logical conditions for your attributes:
(sync_source=="ad") && (givenName=="tony"): Use this condition when both attribute conditions must be validated and true.
(givenName=="tony") || (givenName=="odin"): Use this condition when only one of the attribute conditions must be validated and true.
Use Macros with Attribute-Based Access Control
Attribute-based access control (ABAC) supports a number of macros to make it easier to write frequently-used conditions.
The following table lists macros provided by Privacera for ABAC:
Name | Description | Sample Usage |
---|---|---|
USER | User accessing the resource. | USER.employeeType != 'intern' |
TAG | Current tag - use only in tag-based policy | TAG.piiType == 'email' |
UGNAMES | Name of groups the user belongs to | UGNAMES.indexOf('interns') == -1 |
URNAMES | Name of roles the user belongs to | URNAMES.indexOf('admin') != -1 |
TAGNAMES | Name of tags associated with accessed resource | TAGNAMES.indexOf('PII') != -1 TAGNAMES.indexOf('FINANCE') |
UG_NAMES_Q_CSV | Quoted names of groups the user belongs to, separated by commas. For example: 'grp1','grp2' | Row filter: group_name in (${{UG_NAMES_Q_CSV}}) |
UR_NAMES_Q_CSV | Quoted names of roles the user belongs to, separated by commas. For example: 'role1','role2' | Row filter: role_name in (${{UR_NAMES_Q_CSV}}) |
GET_UG_ATTR_Q_CSV | Quoted attribute values of groups the user belongs to, separated by commas. For example: 'store1','store2' | Row filter: store_name in (${{GET_UG_ATTR_Q_CSV('managesStore')}}) |
IS_IN_GROUP | User accessing the resource belongs to a specific group | IS_IN_GROUP('sales') |
IS_IN_ROLE | User accessing the resource belongs to a specific role | IS_IN_ROLE('accounts') |
HAS_TAG | Resource being accessed has a specific tag | (HAS_TAG('PERSON_NAME')) |
HAS_USER_ATTR | User accessing the resource has a specific user attribute | HAS_USER_ATTR('activities') |
HAS_UG_ATTR | User accessing the resource has a specific group attribute | HAS_UG_ATTR('marketing') |
HAS_TAG_ATTR | Resource being accessed has a specific tag attribute | (HAS_TAG_ATTR('identification')) |
It is sometimes necessary to set up permissions for users who do or do not belong to any group or role. The following macros make it easier to create those permissions:
Name | Description | Sample usage |
---|---|---|
IS_IN_ANY_GROUP | This macro can be used in policy conditions to ALLOW/DENY policy items. If the user who is accessing the resource is a member of any group, it returns true. | IS_IN_ANY_GROUP |
IS_IN_ANY_ROLE | This macro can be used in policy conditions to ALLOW/DENY policy items. If the user who is accessing the resource has any role, it returns true. | IS_IN_ANY_ROLE |
IS_NOT_IN_ANY_GROUP | This macro can be used in policy conditions to ALLOW/DENY policy items. If the user who is accessing the resource does not belong to any group, it returns true. | IS_NOT_IN_ANY_GROUP |
IS_NOT_IN_ANY_ROLE | This macro can be used in policy conditions to ALLOW/DENY policy items. If the user who is accessing the resource does not have any roles, it returns true. | IS_NOT_IN_ANY_ROLE |
The following macros make it easier to check whether the current resource has any tags:
Name | Description | Sample usage |
---|---|---|
HAS_ANY_TAG | This macro can be used in policy conditions to ALLOW/DENY policy items. If the resource being accessed has any tags, it returns true. | HAS_ANY_TAG |
HAS_NO_TAG | This macro can be used in policy conditions to ALLOW/DENY policy items. If the resource being accessed does not have any tags, it returns true. | HAS_NO_TAG |
Configuring Policy with Conditional Masking
Conditional masking masks a column based on a condition applied to a different column. For example, a condition on column A determines whether column B is masked.
Conditional masking is supported for the following systems:
Hive with EMR
Hive with Databricks
Presto SQL with EMR
Trino
To configure conditional masking in a policy, do the following:
Add Policy. For more details, see Creating Resource Based Policies.
Add the database, table, and column.
In the Select Masking Option field of the Masking Conditions, select Custom. A text box appears where you can enter your conditional expression.
Examples
Conditional Masking using Single Column
When the name column contains 'Tamara', the email column is masked.
Conditional Expression:
CASE WHEN (name=='Tamara') THEN mask(email) ELSE email END
Conditional Masking using Multiple Columns
Conditional Expression:
CASE WHEN (name=='Tamara' OR address like '%Robin%') THEN mask(email) ELSE email END
Conditional Masking in PrestoSQL
The examples above are applicable for data sources supporting SQL syntax expressions. For PrestoSQL, the syntax changes.
You need to create an access policy in the privacera_presto service which gives access to the following Presto functions for the respective users:
to_hex
sha256
to_utf8
After creating the access policy, you can use the functions in defining the following conditional expression:
Conditional Expression:
if(name='Richard', to_hex(sha256(to_utf8("address"))), "address")
Conditional Masking in Trino
For conditional masking in Trino, you need to cast/convert the masked column to its appropriate datatype.
You need to create an access policy in the privacera_trino service which gives access to the following Trino functions for the respective users:
CAST
to_hex
sha256
to_utf8
After creating the access policy, you can use the functions in defining the following conditional expression:
Conditional Expression:
CASE WHEN person_name='Pearlene' THEN (CAST(to_hex(sha256(to_utf8(email_address))) as varchar(100))) ELSE email_address END