Access AWS S3 buckets from multiple AWS accounts
In this topic, you will learn how to access AWS S3 buckets from multiple AWS accounts.
Create or use two AWS accounts, Account A and Account B.
Log in to the AWS console as Account A and create an IAM role, for example `arn:aws:iam::12345678:role/DataServer_Role`.
Revise this IAM role in Account A so that it has full access to the data resources that will be connected to your PrivaceraCloud account.
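For example, a minimal identity policy on the role granting full S3 access to Bucket B might look like the following sketch. This is illustrative only; scope the actions and resources to your own data and security requirements.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucketB",
        "arn:aws:s3:::bucketB/*"
      ]
    }
  ]
}
```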
Establish an IAM role trust relationship with the PrivaceraCloud AWS access role. For more information, see AWS Access with IAM.
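As a reference, the trust policy on `DataServer_Role` might look like the following minimal sketch. The principal ARN is a placeholder: substitute the PrivaceraCloud AWS access role ARN shown for your account, and note that your setup may also require an ExternalId condition, as described in AWS Access with IAM.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<PRIVACERACLOUD_ACCOUNT_ID>:role/<PRIVACERACLOUD_ACCESS_ROLE>"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```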
Log in to Account B in your AWS console to grant access to its bucket (Bucket B) through Dataserver.
Select Bucket B and update the bucket policy as shown in the following sample JSON.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::12345678:role/DataServer_Role" ] }, "Action": "s3:*", "Resource": [ "arn:aws:s3:::bucketB/*", "arn:aws:s3:::bucketB" ] } ] }
Note
The IAM role `DataServer_Role` is from Account A.
On PrivaceraCloud, add or update the IAM role `arn:aws:iam::12345678:role/DataServer_Role` in the S3 application.
Once the S3 application is configured, you can access Bucket B from Account B through the AWS CLI. For more information, see Scripts for AWS CLI or Azure CLI for managing connected applications.
For example:

```bash
aws s3 ls s3://bucketB
```
You can also access Bucket B from a Databricks cluster. For more information, see Databricks.
For example:

```python
# Read the sample CSV from Bucket B via the s3a connector
readFilePath = "s3a://bucketB/sample_sf.csv"
df = spark.read.csv(readFilePath, inferSchema=True, header=True)
df.count()   # number of rows read
df.show(2)   # preview the first two rows
```