AWS CLI
Enable AWS CLI
In the Privacera Portal, click LaunchPad from the left menu.
Under the AWS Services section, click the AWS CLI icon to open the AWS CLI dialog. This dialog lets you download an AWS CLI setup script specific to your installation and provides a set of usage instructions.
In AWS CLI, under Configure Script, click Download Script to save the script on your local machine. If you will be running the AWS CLI on another system, such as a 'jump server', copy the script to that host.
Alternatively, use 'wget' to pull this script down to your execution platform, as shown below. Substitute your installation's Privacera Platform host domain name or IPv4 address for "<PRIVACERA_PORTAL_HOST>".
wget http://<PRIVACERA_PORTAL_HOST>:6868/api/cam/download/script -O privacera_aws.sh
# Use the "--no-check-certificate" option for HTTPS:
# wget --no-check-certificate https://<PRIVACERA_PORTAL_HOST>:6868/api/cam/download/script -O privacera_aws.sh
Copy the downloaded script to your home directory:
cp privacera_aws.sh ~/
cd ~/
Set this file to be executable:
chmod a+x ~/privacera_aws.sh
Under the AWS CLI Generate Token section, generate a platform token.
Note
All of the script commands must be sourced: run them with a space between the dot (.) and the script name (~/privacera_aws.sh), so the script can set variables in your current shell.
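For example, assuming the script works by exporting settings into the current shell session (which is why sourcing is required), only the first invocation below persists its changes:
. ~/privacera_aws.sh --status
# Running it as a child process (no leading dot) would not affect the current shell:
# ~/privacera_aws.sh --status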
Run the following command:
. ~/privacera_aws.sh --config-token
Select Never Expired to generate a token that does not expire, and then click Generate.
Enable either the proxy or the endpoint by running one of the two commands shown below.
. ~/privacera_aws.sh --enable-proxy
or:
. ~/privacera_aws.sh --enable-endpoint
Under the Check Status section, run the command below.
. ~/privacera_aws.sh --status
To disable both the proxy and the endpoint, under the AWS Access section, run the commands shown below.
. ~/privacera_aws.sh --disable-proxy
. ~/privacera_aws.sh --disable-endpoint
AWS CLI Examples
Get Databases
aws glue get-databases --region ca-central-1
aws glue get-databases --region us-west-2
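If you only need the database names, the AWS CLI's built-in JMESPath filtering (the standard --query option, not part of the original example) can trim the output:
aws glue get-databases --region us-west-2 --query 'DatabaseList[].Name' --output text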
Get Catalog Import Status
aws glue get-catalog-import-status --region us-west-2
Create Database
aws glue create-database --cli-input-json '{"DatabaseInput":{"CreateTableDefaultPermissions":[{"Permissions":["ALL"],"Principal":{"DataLakePrincipalIdentifier":"IAM_ALLOWED_PRINCIPALS"}}],"Name":"qa_test","LocationUri":"s3://daffodil-us-west-2/privacera/hive_warehouse/qa_test.db"}}' --region us-west-2 --output json
Create Table
aws glue create-table --database-name qa_test --table-input file://tb1.json --region us-west-2
Create the tb1.json file in the directory where the create-table command will be executed. Sample JSON file:
{ " ""Name":"tb1", " ""Retention":0, " ""StorageDescriptor":{ " ""Columns":"\\"[ " "{ " ""Name":"CC", " ""Type":"string"" " }, " "{ "Name":"FST\\_NM", " ""Type":"string"" " }, " "{ " ""Name":"LST\\_NM", " ""Type":"string"" " }, " "{ " ""Name":"SOC\\_SEC\\_NBR", " ""Type":"string"" " }" \\" ], " ""Location":"s3://daffodil-us-west-2/data/sample\\_parquet/index.html", " ""InputFormat":"org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat", " ""OutputFormat":"org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat", " ""Compressed":false, " ""NumberOfBuckets":0, " ""SerdeInfo":{ " ""SerializationLibrary":"org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe", " ""Parameters":{ " ""serialization.format":"1"" " }" " }, " ""SortColumns":"\\"[ "\\" ], " ""StoredAsSubDirectories":"false " }, " ""TableType":"EXTERNAL\\_TABLE", " ""Parameters":{ " ""classification":"parquet"" " } }
Delete Table
aws glue delete-table --database-name qa_db --name test --region us-west-2
aws glue delete-table --database-name qa_db --name test --region us-east-1
aws glue delete-table --database-name qa_db --name test --region ca-central-1
aws glue delete-table --database-name qa_db --name test --region ap-south-1
Delete Database
aws glue delete-database --name qa_test --region us-west-2
aws glue delete-database --name qa_test --region us-east-1
aws glue delete-database --name qa_test --region ap-south-1
aws glue delete-database --name qa_test --region ca-central-1
AWS Kinesis - CLI Examples
Create Stream:
aws kinesis create-stream --stream-name SalesDataStream --shard-count 1 --region us-west-2
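Stream creation is asynchronous, so before putting records you may want to wait for the stream to become ACTIVE. Both commands below are standard Kinesis CLI commands (not part of the original example) and assume the stream name above:
aws kinesis wait stream-exists --stream-name SalesDataStream --region us-west-2
aws kinesis describe-stream-summary --stream-name SalesDataStream --region us-west-2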
Put Record:
aws kinesis put-records --stream-name SalesDataStream --records Data=name,PartitionKey=partitionkey1 Data=sales_amount,PartitionKey=partitionkey2 --region us-west-2
Read Record:
aws kinesis list-shards --stream-name SalesDataStream --region us-west-2
# Copy the shard ID from the output of the above command.
aws kinesis get-shard-iterator --stream-name SalesDataStream --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --region us-west-2
# Copy the shard iterator from the output of the above command.
aws kinesis get-records --shard-iterator AAAAAAAAAAG13t9nwsYft2p0IDF8qJOVh/Dc69RXm5v+QEqK4AW0CUlu7YmFChiV5YtyMzqFvourqhgHdANPxa7rjduAiIOUUwgaBNjJuc67SYeqZQLMgLosfQBiF6BeRQ+WNzRkssCZJx7j3/W53kpH70GJZym+Qf73bvepFWpmflYCAlRuFUjpJ/soWUmO+2Q/R1rJCdFuyl3YvGYJYmBnuzzfDoR6cnPLI0sjycI3lDJnlzrC+A==
# The Data field in the output is Base64-encoded; copy it and decode it:
echo <data> | base64 --decode
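When you are done experimenting, the example stream can be removed (a cleanup step, not part of the original walkthrough):
aws kinesis delete-stream --stream-name SalesDataStream --region us-west-2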
Kinesis Firehose
Create Delivery Stream:
aws firehose create-delivery-stream --delivery-stream-name SalesDeliveryStream --delivery-stream-type DirectPut --extended-s3-destination-configuration "BucketARN=arn:aws:s3:::daffodil-data,RoleARN=arn:aws:iam::857494200836:role/privacera_user_role" --region us-west-2
Put Record:
aws firehose put-record --delivery-stream-name SalesDeliveryStream --record='{"Data":"Sales_amount"}' --region us-west-2
Describe Delivery Stream:
aws firehose describe-delivery-stream --delivery-stream-name SalesDeliveryStream --region us-west-2
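To enumerate delivery streams, or to clean up the example stream afterward, the standard Firehose CLI commands below can be used (they are not part of the original example set):
aws firehose list-delivery-streams --region us-west-2
aws firehose delete-delivery-stream --delivery-stream-name SalesDeliveryStream --region us-west-2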
AWS DynamoDB CLI examples
create-table
aws dynamodb create-table \
  --attribute-definitions AttributeName=id,AttributeType=N AttributeName=country,AttributeType=S \
  --table-name SalesData \
  --key-schema AttributeName=id,KeyType=HASH AttributeName=country,KeyType=RANGE \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 \
  --region us-west-2 \
  --output json
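Table creation is asynchronous; you can block until the table is ready before writing to it (a standard DynamoDB waiter, assuming the SalesData table above):
aws dynamodb wait table-exists --table-name SalesData --region us-west-2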
put-item
aws dynamodb put-item --table-name SalesData \
  --item '{"id": {"N": "3"}, "country": {"S": "UK"}, "region": {"S": "EUl"}, "city": {"S": "Rogerville"}, "name": {"S": "Nigel"}, "sales_amount": {"S": "87567.74"}}' \
  --region us-west-2
scan
aws dynamodb scan --table-name SalesData --region us-west-2
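To fetch a single item by its composite key, or to remove the example table when you are finished, the standard commands below can be used (added here for completeness; the key values match the put-item example above):
aws dynamodb get-item --table-name SalesData --key '{"id": {"N": "3"}, "country": {"S": "UK"}}' --region us-west-2
aws dynamodb delete-table --table-name SalesData --region us-west-2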