High Availability for Privacera Portal with Kubernetes

This topic shows how to configure High Availability (HA) for Privacera Portal with Kubernetes. In a normal working environment, the core Privacera services such as Solr, MariaDB, Dataserver, Zookeeper, and Ranger connect to a Portal service. Configuring HA for Privacera Portal ensures that the Portal service is always up and running.

Kubernetes Environment Required

Portal HA is supported only in a Kubernetes environment.

A high-availability Kubernetes cluster runs multiple pods in a typical master-slave setup, with each pod running a Portal service. If one pod goes down, another pod takes over, ensuring Portal service continuity.

Zookeeper determines which pod/node is the master. In a three-pod setup, Zookeeper automatically elects one pod as the master node and the remaining pods as slaves.
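To see which Zookeeper pod currently holds the leader role, you can query each pod with Zookeeper's `srvr` four-letter command and look at the `Mode` line. The sketch below defines a small parser and demonstrates it on sample `srvr` output; the pod names (`zk-0`, `zk-1`, `zk-2`) and namespace-free `kubectl exec` invocation are assumptions to adjust for your deployment.

```shell
# find_mode: extract the role ("leader" or "follower") from `srvr` output.
find_mode() {
  grep '^Mode:' | awk '{print $2}'
}

# Sample `srvr` output, as a real Zookeeper pod would return it.
sample="Zookeeper version: 3.6.x
Latency min/avg/max: 0/0/0
Mode: leader
Node count: 5"

printf '%s\n' "$sample" | find_mode   # -> leader

# Against a live cluster (pod names are assumptions):
# for p in zk-0 zk-1 zk-2; do
#   echo "$p: $(kubectl exec "$p" -- sh -c 'echo srvr | nc localhost 2181' | find_mode)"
# done
```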

Prerequisites

Ensure the following prerequisite is met:

Privacera Services Installed: Privacera services are installed and running. For more information, refer to Install Privacera Manager on Privacera Platform.

Procedure

Step 1: Configure Zookeeper, Solr, and Portal HA

Use the sample file vars.kubernetes.ha.yml to configure Zookeeper and Solr cluster size, Portal HA, and optional replicas for Ranger, Dataserver, and Auditserver in one place.

  1. SSH to an instance as USER.

  2. Copy the HA variables file and edit it:

Bash
cd ~/privacera/privacera-manager
cp config/sample-vars/vars.kubernetes.ha.yml config/custom-vars/
vi config/custom-vars/vars.kubernetes.ha.yml
  3. The file contains variables for Zookeeper, Solr, Portal HA, and other service replicas. Set or adjust them as needed; for HA, use at least 3 replicas where applicable:
| Property | Description | Example |
| --- | --- | --- |
| PRIVACERA_PORTAL_K8S_HA_ENABLE | Activates HA mode for the Portal service. | "true" |
| PORTAL_K8S_REPLICAS | Number of Portal pods. Use an odd number (minimum 3 for HA). A value of 1 disables HA. | "3" |
| ZOOKEEPER_CLUSTER_SIZE | Zookeeper replicas (recommended 3 for HA). | "3" |
| SOLR_K8S_CLUSTER_SIZE | Solr pod replicas. | "3" |
| RANGER_K8S_REPLICAS | Ranger replicas. | "3" |
| DATASERVER_K8S_CLUSTER_SIZE | Dataserver replicas. | "3" |
| AUDITSERVER_K8S_REPLICAS | Auditserver replicas. | "3" |
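Put together, an edited vars.kubernetes.ha.yml enabling HA across these services might look like the following sketch (values are the example values from the table; confirm against the sample file shipped with your Privacera Manager version):

```yaml
PRIVACERA_PORTAL_K8S_HA_ENABLE: "true"
PORTAL_K8S_REPLICAS: "3"
ZOOKEEPER_CLUSTER_SIZE: "3"
SOLR_K8S_CLUSTER_SIZE: "3"
RANGER_K8S_REPLICAS: "3"
DATASERVER_K8S_CLUSTER_SIZE: "3"
AUDITSERVER_K8S_REPLICAS: "3"
```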

SSL Certificate Regeneration Required

Whenever you change the Solr cluster size from 1 to 3, regenerate the SSL keys and certificates: back up the privacera/privacera-manager/config/ssl folder and create a new config/ssl folder. Privacera Manager then re-creates the SSL certificates with subject alternative names for all Solr pods.

If you use a CA-signed certificate and delete the SSL folder to create new certificates, recreate the SSL folder and place the CA-signed certificate and private key back into it so that Privacera Manager can generate the certificates its services need.
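The backup-and-recreate steps above can be sketched as follows. The demo runs in a scratch directory created with mktemp so it is safe to execute anywhere; in practice, run the mv/mkdir from ~/privacera/privacera-manager, and the CA-signed certificate filenames in the comment are assumptions.

```shell
# Demonstrate in a scratch directory that mimics the PM layout.
base=$(mktemp -d)
mkdir -p "$base/config/ssl"
touch "$base/config/ssl/old.crt"    # stand-in for the existing certificates
cd "$base"

mv config/ssl "config/ssl.bak.$(date +%Y%m%d)"   # back up the old folder
mkdir config/ssl    # empty folder; Privacera Manager repopulates it on the next update

# For CA-signed certificates, copy the certificate and private key back
# before running the update (filenames are assumptions):
# cp config/ssl.bak.*/ca-signed.crt config/ssl.bak.*/private.key config/ssl/

ls config
```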

Step 2: Configure AWS Load Balancer Ingress

AWS only

This step applies only when running in an AWS environment. If you are on Azure or GCP, skip this step.

In HA mode, the Privacera Portal is accessed through a browser, so a sticky session is required. The AWS Load Balancer ingress provides this.

Run the following commands:

Bash
cd ~/privacera/privacera-manager
cp config/sample-vars/vars.aws.alb.ingress.yml config/custom-vars/

For more information on configuring the AWS Load Balancer, see Configuring AWS Load Balancer.
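As a sanity check after the ingress is in place, an AWS Application Load Balancer issues stickiness via the AWSALB cookie, so the Portal endpoint should return a Set-Cookie header for it. The live curl command below uses a hypothetical URL; the executable part demonstrates the check offline against sample response headers.

```shell
# Live check (hypothetical ingress URL):
# curl -skI https://xxx.region.elb.amazonaws.com:6868/ | grep -i '^set-cookie: AWSALB'

# Offline demonstration of the same check against sample headers:
headers='HTTP/1.1 200 OK
Set-Cookie: AWSALB=abc123; Expires=Wed, 01 Jan 2025 00:00:00 GMT; Path=/
Content-Type: text/html'

# Count matching sticky-session cookies (case-insensitive).
printf '%s\n' "$headers" | grep -ci '^set-cookie: AWSALB'   # -> 1
```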

Step 3: Update Privacera Manager

Run the following commands:

Bash
cd ~/privacera/privacera-manager
./privacera-manager.sh setup
./pm_with_helm.sh upgrade

Bash
cd ~/privacera/privacera-manager
./privacera-manager.sh post-install

Because PORTAL_K8S_REPLICAS is set to 3, the update creates 3 pods of the Portal service.
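You can confirm that all three Portal pods came up with `kubectl get pods`. The live command below assumes a `privacera` namespace and `portal-` pod name prefix, which may differ in your deployment; the executable part demonstrates the count on sample output.

```shell
# Live check (namespace and pod name prefix are assumptions):
# kubectl get pods -n privacera | grep '^portal-'

# Offline demonstration: count Running Portal pods in sample output.
pods='portal-0   1/1   Running   0   5m
portal-1   1/1   Running   0   5m
portal-2   1/1   Running   0   5m
solr-0     1/1   Running   0   9m'

printf '%s\n' "$pods" | grep -c '^portal-.*Running'   # -> 3
```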

At the end of the update, the service URLs are provided. The sample below applies to an AWS environment. The external Portal URL is an ingress URL that can be used in a browser to access Privacera Portal.

| Service | Type | URL |
| --- | --- | --- |
| SOLR | INTERNAL | http://solr-service:8983 |
| SOLR | EXTERNAL | http://xxx.region.elb.amazonaws.com:8983 |
| PORTAL | INTERNAL | http://portal:6868 |
| PORTAL | EXTERNAL | http://xxx.region.elb.amazonaws.com:6868 |
| RANGER | INTERNAL | http://ranger:6086 |
| RANGER | EXTERNAL | http://xxx.region.elb.amazonaws.com:6080 |
| DATASERVER | INTERNAL | http://dataserver:8181 |
| DATASERVER | EXTERNAL | http://xxx.region.elb.amazonaws.com:8181 |