
Privacera Platform

High availability (HA) for Privacera Portal

Configure Portal HA

This topic shows how to configure Privacera Portal HA mode for AWS. In a normal working environment, the core Privacera services, such as Solr, MariaDB, Dataserver, Zookeeper, and Ranger, connect to a single Portal service. Configuring HA mode for Privacera Portal ensures that the Portal service is always up and running.


Note: Portal HA is supported only in a Kubernetes environment.

A high-availability (HA) Kubernetes cluster is created with multiple pods in a typical master-slave setup, each pod running a Portal service. If one pod goes down, another pod takes over, ensuring Portal service continuity.

Zookeeper is responsible for electing which pod/node becomes the master. In a three-pod setup, Zookeeper automatically elects one pod as the master node and the remaining pods as slaves.


Ensure the following prerequisites are met:

  • Privacera services are installed and running. For more information, refer to Configure and Install Core Services.

  • Assign an IAM role with a policy that gives access to the AWS Controller for Kubernetes (ACK). To attach such an IAM role, see ???


  1. SSH to an instance as USER.

  2. Edit the cluster size (replicas) of Zookeeper and Solr.

    cd ~/privacera/privacera-manager
    cp config/sample-vars/vars.kubernetes.yml config/custom-vars/
    vi config/custom-vars/vars.kubernetes.yml          

    Change the value of the Zookeeper and Solr replica properties from 1 to 3.
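
    For illustration, the edit might look like the following sketch. The exact property names in vars.kubernetes.yml can vary by Privacera Manager version, so treat ZOOKEEPER_K8S_REPLICAS and SOLR_K8S_REPLICAS as assumed names and verify them against the sample file:

    ```yaml
    # config/custom-vars/vars.kubernetes.yml
    # Assumed property names -- verify against the sample-vars file before editing.
    ZOOKEEPER_K8S_REPLICAS: "3"   # was "1"; an odd count is needed for leader election
    SOLR_K8S_REPLICAS: "3"        # was "1"; replicates Solr across the cluster
    ```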

  3. Run the following commands.

    cd ~/privacera/privacera-manager
    cp config/sample-vars/vars.portal.kubernetes.ha.yml config/custom-vars/
    vi config/custom-vars/vars.portal.kubernetes.ha.yml          
  4. Edit the following properties or keep them unchanged:

    • The property that activates the HA mode for the Portal service.

    • PORTAL_K8S_REPLICAS: the number of nodes/pods to create. Enter an odd number; Zookeeper, which manages the nodes/pods, requires an odd number to elect a master node successfully.

      Note: A minimum of 3 nodes is required in HA mode. Setting the value to 1 turns it into non-HA mode.
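
    A minimal sketch of the resulting file is shown below. PORTAL_K8S_REPLICAS is the property named later in this topic; the name of the HA-activation flag is a hypothetical placeholder, so check the sample file for the real one:

    ```yaml
    # config/custom-vars/vars.portal.kubernetes.ha.yml
    PORTAL_HA_ENABLE: "true"   # hypothetical name for the flag that activates Portal HA
    PORTAL_K8S_REPLICAS: "3"   # odd number; 3 is the minimum for HA mode
    ```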


  5. Run the following commands. Because in HA mode Privacera Portal is accessed through a browser, a sticky session is required; for this, an AWS load balancer ingress is implemented.

    cd ~/privacera/privacera-manager
    cp config/sample-vars/ config/custom-vars/          
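
    Under the hood, sticky sessions on an AWS Application Load Balancer are enabled through target-group attributes on the ingress. The following is an illustrative sketch of that kind of ingress manifest, not the exact one Privacera Manager generates (names and values are assumptions):

    ```yaml
    # Sketch of an ALB ingress with sticky sessions enabled.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: portal-ingress          # illustrative name
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/target-type: ip
        # Load-balancer cookie stickiness keeps a browser session
        # pinned to the same Portal pod.
        alb.ingress.kubernetes.io/target-group-attributes: >-
          stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=3600
    ```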
  6. Run the following commands.

    cd ~/privacera/privacera-manager
    ./ update          

    Because PORTAL_K8S_REPLICAS is set to 3, the update creates 3 pods/nodes of the Portal service.

At the end of the update, the service URLs are displayed. The external Portal URL is an ingress URL that can be used in a browser to access Privacera Portal.

Add replicas

After the Portal service is up and running, run the following command to update the Solr replication on the other nodes:

cd ~/privacera/privacera-manager
cd output/solr/
./ --add_replica
Set replicas for other Privacera services

To set the replicas for services such as Ranger, Dataserver, and AuditServer, add the following properties to the config/custom-vars/vars.kubernetes.yml file.

  • For Ranger

  • For Dataserver

  • For AuditServer
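
A hedged sketch of what these entries might look like, assuming the property names follow the same *_K8S_REPLICAS pattern as PORTAL_K8S_REPLICAS; all three names are assumptions to verify against the sample file:

```yaml
# config/custom-vars/vars.kubernetes.yml
# Assumed names following the *_K8S_REPLICAS pattern -- verify before use.
RANGER_K8S_REPLICAS: "2"        # Ranger
DATASERVER_K8S_REPLICAS: "2"    # Dataserver
AUDITSERVER_K8S_REPLICAS: "2"   # AuditServer
```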