
Configure Privacera Manager

Privacera Manager takes a set of configuration variables that are defined in YAML files. Out of the box, several sample variable YAML files are provided that you can use to configure Privacera Manager. These are in the config/sample-vars folder. The process of configuring Privacera Manager involves copying these sample variable YAML files to the config/custom-vars folder and modifying them to suit your environment. In some cases, you will create new configuration YAML files and add properties as given in the documentation. The files under config/custom-vars are not overwritten during upgrades.

After you have configured Privacera Manager, you can run the privacera-manager.sh script as given in the Using Privacera Manager section.

Self-Managed and PrivaceraCloud Data plane

The following are the mandatory configuration steps that you must perform before you can start using Privacera Manager. These steps apply to all cloud providers and are common to both Self-Managed and PrivaceraCloud Data Plane installations.

1. Enable Vault in Privacera Manager: Enable Vault in Privacera Manager to store credentials. This ensures that credentials are not saved in clear text on the server where Privacera Manager is running.
2. Enable Keystores: Enable the use of keystores at runtime for storing credentials.
3. Configure Cloud Provider: Configure your cloud provider, whether it is AWS, Azure, or Google Cloud.
4. Configure Kubernetes: Configure Kubernetes and Helm.
5. Configure AWS EFS: Configure AWS EFS (optional; only if you are on AWS and plan to use AWS EFS as Kubernetes volumes).
6. Configure External RDBMS: Configure the external RDBMS to be used for the Privacera policy store.
7. Configure TLS: Configure TLS.
8. Configure AWS ALB Controller: Configure the AWS ALB Controller (optional; only if you are on AWS and using the AWS ALB Controller).
9. Configure Load Balancer: Configure a Load Balancer (optional; only if you are on AWS and do not plan to use the AWS ALB Controller, or if you are on Azure or Google Cloud).
10. Privacera Monitoring: Installs and configures the Prometheus server in the namespace. Supported only on AWS and only if you are using the AWS ALB Controller.

Appendix - Self-Managed and PrivaceraCloud Data plane

Enable Vault in Privacera Manager

Enable Vault

Privacera Manager Vault is a credential store that is used to securely store various credentials at configuration time. This ensures that credentials are not saved in clear text in the configuration files on the Privacera Manager host and allows the configuration files to be kept in Git.

Run the following commands to enable Vault in Privacera Manager.

Bash
cd ~/privacera/privacera-manager/
./privacera-manager.sh vault
You will be asked whether you want to edit the common Privacera secrets.

Bash
Do you want to add/edit Common Privacera vault Secrets? (y/n): 
Enter y, and you will be prompted to enter a password for the vault. This password is used to encrypt the vault, so use a strong password.

Note down the password

Note down this password, as you will be prompted to enter it every time you add or edit a secret, as well as every time you run Privacera Manager.

This will then open the nano editor with a file in which you can add or edit your secrets.

The Privacera Manager configuration contains various credentials, such as usernames, passwords, and tokens, that are used to connect to your data sources. These are configured as key-value pairs in various YAML files. Any value that is a secret can be stored in the vault by putting the key and value in the vault file and commenting out the key in the original YAML file.

Remove the secret from the original YAML file after the vault entry is added

Go ahead and store your first secret in the vault: the Docker registry password in config/vars.privacera.yml.

YAML
# in config YAML file
privacera_hub_password: "StrongPassword"
You must comment out the above line in the YAML file and remove the secret.

YAML
# in config YAML file
# privacera_hub_password: ""
Add the secret to the vault file as follows:

YAML
# in vault file
privacera_hub_password: "StrongPassword"

Any time you need to add a new secret, run the ./privacera-manager.sh vault command, add the secret to the vault file, and remove it from the original YAML file.

This is an optional step but highly recommended so your secret values are not in clear text in the YAML files.

Enable use of keystores for secrets

Enable Keystores

This will enable storing your credentials in keystores at runtime in various pods.

Run the following commands to enable the use of keystores for storing secrets at runtime.

Bash
cd ~/privacera/privacera-manager
cp -n config/sample-vars/vars.encrypt.secrets.yml config/custom-vars/
vi config/custom-vars/vars.encrypt.secrets.yml
In the vars.encrypt.secrets.yml file, modify the value of this variable to a strong password. This is the password that will be used to encrypt the keystore used at runtime.

YAML
GLOBAL_DEFAULT_SECRETS_KEYSTORE_PASSWORD: "<PLEASE_CHANGE>"

The above password should be stored in the Privacera Manager Vault so that it is not kept in clear text in the Privacera Manager configuration files.
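A minimal sketch of how that looks, following the same comment-out-and-move pattern shown in the vault section above (the password value here is a placeholder):

YAML
# in config/custom-vars/vars.encrypt.secrets.yml
# GLOBAL_DEFAULT_SECRETS_KEYSTORE_PASSWORD: ""

# in vault file
GLOBAL_DEFAULT_SECRETS_KEYSTORE_PASSWORD: "<YOUR_STRONG_PASSWORD>"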

This is an optional step but highly recommended so that secrets in configuration files of Privacera Services are not in clear text.

Configure your Cloud Provider

Configure Cloud Provider

To configure your cloud provider, select the tab corresponding to your provider and run the commands in it.

Bash
cd ~/privacera/privacera-manager/config
cp -n sample-vars/vars.aws.yml custom-vars/
vi custom-vars/vars.aws.yml
Set AWS_REGION to your AWS region, such as us-east-1.
YAML
AWS_REGION: "<PLEASE_CHANGE>"
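If you are not sure which region to use, one way to check the region configured for your AWS CLI (assuming the AWS CLI is installed and its region is set in the CLI configuration) is:

Bash
aws configure get region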

Bash
cd ~/privacera/privacera-manager/config
cp -n sample-vars/vars.azure.yml custom-vars/

Bash
cd ~/privacera/privacera-manager/config
cp -n sample-vars/vars.gcp.yml custom-vars/
vi custom-vars/vars.gcp.yml
Set the Project ID of your Google Cloud project. You can find this value in the Google Cloud console.
YAML
PROJECT_ID: "<PLEASE_CHANGE>"
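As an alternative to the console, one way to look up the project ID from the command line (assuming the gcloud CLI is installed and authenticated on this host) is:

Bash
gcloud config get-value project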

Configure Kubernetes

Configure Kubernetes and Helm

Run the following commands. This step is common for all cloud providers.

Bash
cd ~/privacera/privacera-manager/config

cp -n sample-vars/vars.helm.yml custom-vars/

cp -n sample-vars/vars.kubernetes.yml custom-vars/
vi custom-vars/vars.kubernetes.yml
Set the Kubernetes cluster name in the vars.kubernetes.yml file.
YAML
K8S_CLUSTER_NAME: "<PLEASE_CHANGE>"
To obtain the name of your Kubernetes cluster, run the following command:

Bash
kubectl config get-contexts

The output of the above command differs based on your cloud provider, as follows:

Bash
# In EKS, you will get the cluster name from the ARN of the cluster
arn:aws:eks:<REGION>:<ACCOUNT>:cluster/<CLUSTER_NAME>

# The cluster name is the last part of the ARN.
Bash
# In AKS, you will get the cluster name from the context name
<CLUSTER_NAME>
Bash
# In GKE, you will get the cluster name from the context name
gke_<PROJECT-NAME>_<REGION>_<CLUSTER_NAME>
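If you only want the context that kubectl is currently pointing at, rather than the full list, one option (assuming your kubeconfig is already set up for the target cluster) is:

Bash
kubectl config current-context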

Configure AWS EFS (optional)

Configure AWS EFS

This step is required if you are on AWS and plan to use AWS EFS for the storage volumes of pods. You will need the EFS ID from the EFS setup done on your EKS cluster for this step. Do not proceed if you don't have the EFS ID.

If you don't plan to use AWS EFS and will use AWS EBS storage volumes instead, you can skip this step.

If you are on Azure or Google Cloud, we currently don't support using a managed network file-system as a storage volume for the pods.

Run the following commands -

Bash
cd ~/privacera/privacera-manager/config/ 
cp -n sample-vars/vars.efs.yml custom-vars/

vi custom-vars/vars.efs.yml
Edit the file and modify the value of the EFS ID.

YAML
EFS_FSID: "<YOUR_EFS_ID>"
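If the EFS has been set up but you don't have the file system ID handy, one way to list the EFS file systems in your account and region (assuming the AWS CLI is configured for the account and region where the EFS was created) is:

Bash
aws efs describe-file-systems --query "FileSystems[*].[FileSystemId,Name]" --output table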

Configure External RDBMS

Configure External RDBMS

Run the following commands to configure the external RDBMS that you plan to use with Privacera Manager.

Bash
cd ~/privacera/privacera-manager/config/
cp -n sample-vars/vars.external.db.mysql.yml custom-vars/

vi custom-vars/vars.external.db.mysql.yml
# Edit these variables with your values. Do not change any other variable values.
EXTERNAL_DB_HOST: "<PLEASE_CHANGE>"
EXTERNAL_DB_NAME: "<PLEASE_CHANGE>"
EXTERNAL_DB_USER: "<PLEASE_CHANGE>"
EXTERNAL_DB_PASSWORD: "<PLEASE_CHANGE>"
Bash
cd ~/privacera/privacera-manager/config/
cp -n sample-vars/vars.external.db.postgres.yml custom-vars/

vi custom-vars/vars.external.db.postgres.yml
# Edit these variables with your values. Do not change any other variable values.
EXTERNAL_DB_HOST: "<PLEASE_CHANGE>"
EXTERNAL_DB_NAME: "<PLEASE_CHANGE>"
EXTERNAL_DB_USER: "<PLEASE_CHANGE>"
EXTERNAL_DB_PASSWORD: "<PLEASE_CHANGE>"
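As with other secrets, EXTERNAL_DB_PASSWORD can be moved into the Privacera Manager Vault. Before running the install, you may also want to confirm that the database is reachable from the Privacera Manager host; a minimal check, assuming netcat (nc) is installed and using a hypothetical hostname, is:

Bash
# Replace the hostname with your own; use port 3306 for MySQL or 5432 for PostgreSQL
nc -zv mydb.example.internal 3306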

Configure TLS

Configure TLS

Bash
cd ~/privacera/privacera-manager/config/
cp -n sample-vars/vars.ssl.yml custom-vars/

vi custom-vars/vars.ssl.yml
Edit the file and modify these values,
YAML
SSL_SELF_SIGNED: "true"
SSL_DEFAULT_PASSWORD: "welcome1"
The above settings will enable TLS for inter-pod communication.

Configuring TLS for external access is covered in the Ingress Controller or Load Balancer section below.

If you have a wildcard certificate and private key as PEM files, first copy them into the ssl folder as follows:

Bash
cd ~/privacera/privacera-manager/
mkdir ssl
cp /path/to/your/cert.pem ssl/
cp /path/to/your/private_key.key ssl/

Now run these commands,

Bash
cd ~/privacera/privacera-manager/config/
cp -n sample-vars/vars.ssl.yml custom-vars/

vi custom-vars/vars.ssl.yml
Edit the file and modify these values,
YAML
SSL_SELF_SIGNED: "false"
SSL_DEFAULT_PASSWORD: "welcome1"

Then set the following properties to the certificate and private key PEM file names.

YAML
SSL_SIGNED_PEM_FULL_CHAIN: "<FULL_CHAIN_CERT_NAME>"
SSL_SIGNED_PEM_PRIVATE_KEY: "<PRIVATE_KEY_NAME>"

If you have application-specific certificate and private key PEM files, copy them into the ssl folder and then set the below properties:

YAML
### PORTAL ###
PRIVACERA_PORTAL_SSL_SIGNED_PEM_FULL_CHAIN: "<PORTAL_CERT_NAME>"
PRIVACERA_PORTAL_SSL_SIGNED_PEM_PRIVATE_KEY: "<PORTAL_PRIVATE_KEY_NAME>"

### RANGER ###
RANGER_ADMIN_SSL_SIGNED_PEM_FULL_CHAIN: "<RANGER_CERT_NAME>"
RANGER_ADMIN_SSL_SIGNED_PEM_PRIVATE_KEY: "<RANGER_PRIVATE_KEY_NAME>"

### AUDITSERVER ###
AUDITSERVER_SSL_SIGNED_PEM_FULL_CHAIN: "<AUDITSERVER_CERT_NAME>"
AUDITSERVER_SSL_SIGNED_PEM_PRIVATE_KEY: "<AUDITSERVER_PRIVATE_KEY_NAME>"

### PRIVACERA-DIAGNOSTICS ###
DIAG_SERVER_SSL_SIGNED_PEM_FULL_CHAIN: "<PRIVACERA-DIAGNOSTICS_CERT_NAME>"
DIAG_SERVER_SSL_SIGNED_PEM_PRIVATE_KEY: "<PRIVACERA-DIAGNOSTICS_PRIVATE_KEY_NAME>"
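Before running the install, you may want to sanity-check that each certificate and private key pair you copied into the ssl folder actually match. One way, assuming openssl is available and the key is an RSA key (the file names below are the ones used in the copy step above, run from the ~/privacera/privacera-manager directory), is to compare the modulus hashes; the two outputs should be identical:

Bash
openssl x509 -noout -modulus -in ssl/cert.pem | openssl md5
openssl rsa -noout -modulus -in ssl/private_key.key | openssl md5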

Configure AWS ALB Controller (optional)

Configure Ingress Controller (optional)

To access the service endpoints from outside the Kubernetes cluster, you need to configure either an Ingress controller such as AWS Load Balancer Controller or an external Load Balancer.

The AWS Load Balancer Controller is supported on AWS.

For Azure and Google Cloud, an ingress controller is not supported currently. You can use an external Load Balancer to access the service endpoints.

If you are on AWS and have installed AWS Load Balancer Controller, then you can run the following commands.

You will need the following values to configure the AWS ALB Controller (see the lookup example after this list):

  • Certificate ARN (a wildcard certificate must be created in ACM)
  • Subnets
  • Security Groups
  • Whether the load balancer is internal (recommended) or internet-facing (protected with a security group)
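If you need to look up the certificate ARN or subnet IDs from the command line rather than the console, the following queries are one way, assuming the AWS CLI is configured for the correct account and region (the VPC ID placeholder is yours to fill in):

Bash
# Certificates in ACM (the ALB requires a certificate in the same region)
aws acm list-certificates --query "CertificateSummaryList[*].[DomainName,CertificateArn]" --output table

# Subnets in the VPC used by your EKS cluster
aws ec2 describe-subnets --filters "Name=vpc-id,Values=<YOUR_VPC_ID>" --query "Subnets[*].[SubnetId,AvailabilityZone]" --output table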

You can adjust the annotations as needed based on your requirements.

Run the following commands to configure the AWS ALB ingress objects -

vars.aws.alb.ingress.yml

Bash
cd ~/privacera/privacera-manager
cp -n config/sample-vars/vars.aws.alb.ingress.yml config/custom-vars/
vi config/custom-vars/vars.aws.alb.ingress.yml
Edit the file and modify/add these values,
YAML
AWS_ALB_EXTRA_ANNOTATIONS:
- "alb.ingress.kubernetes.io/certificate-arn: '<PLEASE_CHANGE>'"
- "alb.ingress.kubernetes.io/subnets: '<subnet-1>,<subnet-2>'"
- "alb.ingress.kubernetes.io/security-groups: '<sg-1234>'"

AWS_ALB_EXTERNAL_URL: "<PLEASE_CHANGE>"
PRIVACERA_AWS_ZONE_ID: "<PLEASE_CHANGE>"
AWS_ROUTE_53_DOMAIN_NAME: "<PLEASE_CHANGE>"
Add or update the values of the Certificate ARN, Security Groups, and Subnet IDs in the respective annotations. You will also need the AWS Route 53 hosted zone ID and the domain name that you plan to use for the service endpoints. The hosted zone ID can be obtained from the AWS console, and the domain name is your own domain, for example, prod.example.com.
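If you prefer the CLI to the console, one way to list your hosted zones and their IDs (assuming the AWS CLI is configured) is shown below; the zone ID is the part after /hostedzone/ in the Id column:

Bash
aws route53 list-hosted-zones --query "HostedZones[*].[Name,Id]" --output table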

For now, leave AWS_ALB_EXTERNAL_URL as it is. After the installation is done, a Kubernetes ingress object is created and detected by the AWS Load Balancer Controller, which then creates an ALB. Get the ALB's DNS name from the AWS console, set it as the value of this variable, and then run the post-install step of Privacera Manager, which will set the AWS Route 53 entries.

Note

The following default annotations are added by Privacera Manager. If you need to override them, copy the following block into the vars.aws.alb.ingress.yml file and make the changes you need.

YAML
AWS_ALB_DEFAULT_ANNOTATIONS:
- "kubernetes.io/ingress.class: 'alb'"
- "alb.ingress.kubernetes.io/target-type: 'ip'"
- "alb.ingress.kubernetes.io/scheme: 'internal'"
- "alb.ingress.kubernetes.io/backend-protocol: 'HTTPS'"
- "alb.ingress.kubernetes.io/ssl-redirect: '443'"
- "alb.ingress.kubernetes.io/listen-ports: '[{\"HTTP\": 80}, {\"HTTPS\":443}]'"
- "alb.ingress.kubernetes.io/success-codes: '302,400,401,404'"
- "alb.ingress.kubernetes.io/target-group-attributes: 'stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=86400,stickiness.type=lb_cookie,deregistration_delay.timeout_seconds=30,slow_start.duration_seconds=0'"

The ALB is used for all HTTP endpoints. In addition, a ranger-plugin service endpoint is created that requires an NLB. Run the following commands -

vars.ranger-plugin.yml

Bash
cd ~/privacera/privacera-manager/config/custom-vars

# create a new file 
vi vars.ranger-plugin.yml
Edit the file and add these values,
YAML
RANGER_PLUGIN_SERVICE_ANNOTATIONS:
- 'service.beta.kubernetes.io/aws-load-balancer-type: nlb'
- 'service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp'
- 'service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"'
- 'service.beta.kubernetes.io/aws-load-balancer-subnets: <subnet-xxxx>, <subnet-yyyy>'
#- 'service.beta.kubernetes.io/aws-load-balancer-security-groups: <sg-1234>'

# for internal
- 'service.beta.kubernetes.io/aws-load-balancer-internal: "true"'

# For internet-facing
#- 'service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"'
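After installation, one way to confirm that the NLB (and the other load balancers) have been provisioned is to list the services in the deployment namespace and check the EXTERNAL-IP column; the namespace placeholder matches the one used later on this page:

Bash
kubectl -n <DEPLOYMENT_NAME> get svc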

Configure Load Balancer (optional)

Configure Load Balancer (optional)

If you are not using the AWS ALB Controller, the Kubernetes services will be created with type LoadBalancer. This will create load balancers in your cloud provider.

To be confirmed: the exact procedure for assigning TLS certificates to these load balancers. Currently, you copy the certificate and key into the config/ssl/custom_certificates folder; the load balancers are created as Classic Load Balancers or NLBs with port-level pass-through, and the certificates are part of the individual services.

Next steps

Depending upon your deployment type and choice of Privacera modules, you can proceed to the next steps.

If you are installing Self-Managed, then you are all set. Run the Privacera Manager commands as described in the Using Privacera Manager section.

After you have successfully run all the steps of Privacera Manager, you can verify that the various pods come up successfully in your Kubernetes cluster. The list of pods for the basic configuration is given here.

If you are installing PrivaceraCloud Data Plane, then at this point you have done all the necessary configuration steps. Now you can proceed to the PrivaceraCloud Data plane section.

Configuring AWS Load Balancer

If you are using the AWS Load Balancer Controller, you can get the AWS ALB hostname from the ingress object:

Bash
kubectl -n <DEPLOYMENT_NAME> get ingress \
    privacera-ingress-resource \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

Note the hostname and add it to this file,

Bash
cd ~/privacera/privacera-manager
vi config/custom-vars/vars.aws.alb.ingress.yml
Set the following variable in the file to the ALB hostname:
YAML
AWS_ALB_EXTERNAL_URL: "<PLEASE_CHANGE>"
Confirm the settings for the Route 53 hosted zone ID and the domain name in these variables:
YAML
PRIVACERA_AWS_ZONE_ID: "<PLEASE_CHANGE>"
AWS_ROUTE_53_DOMAIN_NAME: "<PLEASE_CHANGE>"
Run only the post-install step from the Using Privacera Manager section. This will update the AWS Route 53 entries for the public endpoints.

Now you should be able to access the Portal endpoint in your web browser. See here for the various service endpoint hostnames.

Once this has been verified, you can move on to deploying the Connectors and User Management.

