Self Managed and PrivaceraCloud Data Plane Compute Sizing

Compute Sizing

This section provides compute sizing guidance for a Privacera Platform deployment. Sizing depends on the deployment size, which is categorized as small, medium, or large, and is specified for each pod in the deployment.

Default Deployment Size and Configuration

Note

SMALL is the default deployment size for the Privacera Platform. This is suitable for Proof of Concept (PoC) environments and small-scale deployments.

To change the deployment size from the default SMALL to MEDIUM or LARGE, you need to set the DEPLOYMENT_SIZE variable:

  1. Create or edit the custom variables file:

    Bash
    mkdir -p ~/privacera/privacera-manager/config/custom-vars/
    vi ~/privacera/privacera-manager/config/custom-vars/vars.yml
    

  2. Add the DEPLOYMENT_SIZE variable with one of the following values:

    YAML
    # For Medium deployment
    DEPLOYMENT_SIZE: "MEDIUM"
    
    # For Large deployment  
    DEPLOYMENT_SIZE: "LARGE"
    
    # For Small deployment (default - can be omitted)
    DEPLOYMENT_SIZE: "SMALL"
    

  3. Apply the configuration by running:

    Bash
    cd ~/privacera/privacera-manager
    ./privacera-manager.sh setup
    ./pm_with_helm.sh upgrade 
    

Note

The compute sizing is for reference only; actual requirements may vary based on the workload and the deployment environment. You can adjust the sizing using the override variables described later in this document.

Finding Service-Specific Variable Names

To locate the resource configuration variable names for a service that is not listed in this document:

  1. Navigate to the service's Kubernetes template directory:

    Bash
    ~/privacera/privacera-manager/ansible/privacera-docker/roles/templates/<SERVICE_NAME>/kubernetes
    
    For example, for the Portal service:
    Bash
    ~/privacera/privacera-manager/ansible/privacera-docker/roles/templates/portal/kubernetes
    

  2. In this directory, you'll find Deployment or StatefulSet template files that contain the service's resource configuration variables.

  3. Look for variables following these naming patterns:

     - CPU configuration: <SERVICE_NAME>_K8S_CPU_REQUEST and <SERVICE_NAME>_K8S_CPU_LIMIT
     - Memory configuration: <SERVICE_NAME>_K8S_MEM_REQUEST and <SERVICE_NAME>_K8S_MEM_LIMIT
     - Replica configuration: <SERVICE_NAME>_K8S_REPLICAS or <SERVICE_NAME>_REPLICAS_MIN/MAX
     - Storage configuration: <SERVICE_NAME>_K8S_PVC_STORAGE_SIZE or <SERVICE_NAME>_K8S_PVC_STORAGE_SIZE_MB

This approach helps you find the exact variable names for any service, even if they're not explicitly listed in the override variable examples below.
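
For instance, assuming the Portal templates reference their resource variables with a PORTAL_ prefix (an assumption based on the patterns above; verify against your own templates), a quick search of the template directory lists the relevant variable names:

Bash
# Sketch only: list resource-related variables referenced in the Portal templates.
cd ~/privacera/privacera-manager/ansible/privacera-docker/roles/templates/portal/kubernetes
grep -rhoE "PORTAL_[A-Z0-9_]*(CPU|MEM|REPLICAS|PVC_STORAGE)[A-Z0-9_]*" . | sort -u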

Small Deployment Size (default)

Pod Memory CPU Disk Replication Factor
Portal 2GB 0.5 NA min=1 max=1
Maria DB 1GB 0.5 12
Data Server 2GB 1 NA min=1 max=1
Discovery - Driver 2GB 1 32
Discovery - Executor 2GB 1 NA
Discovery Consumer 2GB 1 NA
PolicySync 2GB 2 32
Solr 1.5GB 1 64 1
Zookeeper 1GB 0.5 32 1
Ranger KMS 1GB 0.5 12 NA
Ranger UserSync 1GB 0.5 12 NA
Grafana 2GB 2 1
Graphite 1GB 0.5 32
Kafka 1GB 0.5 32
PEG 1GB 0.5 NA min=1 max=2
pkafka 1GB 0.5 NA
Ranger Admin 2GB 1 NA
Audit Server 1GB 1 32
FluentD 512MB 0.1 32
Scheme Server 1GB 0.5 NA
Diagnostics Server 512MB 0.2 1
Diagnostics Client 300MB 0.2 NA
Ranger Tagsync 1GB 0.5 12
Loki 4GB 2 32
Prometheus Server 10GB 3 32
Prometheus Blackbox 1GB 0.5 NA
Prometheus Node Exporter 1GB 0.5 NA
Prometheus Kube-State 5GB 2 NA
OTEL Collector 2GB 0.5 NA
Pyroscope 4GB 2 NA
Post Install Job 128MB 0.2 NA
Metrics Annotation 64MB 0.2 NA
Tempo 4GB 2 NA
Medium Deployment Size

Pod Memory CPU Disk Replication Factor
Portal 4GB 2 NA
Maria DB 2GB 1 12
Data Server 4GB 2 NA min=2 max=4
Discovery - Driver 4GB 2 32
Discovery - Executor 4GB 2 NA
Discovery Consumer 4GB 2 NA
PolicySync 4GB 2 32
Solr 4GB 2 64 3
Zookeeper 2GB 1 32 3
Ranger KMS 2GB 2 12 NA
Ranger UserSync 4GB 2 12 NA
Grafana 2GB 2 1
Graphite 4GB 2 32
Kafka 4GB 2 32
PEG 4GB 2 NA min=2 max=10
pkafka 4GB 2 NA
Ranger Admin 4GB 2 NA min=2 max=4
Audit Server 4GB 2 32
FluentD 1GB 0.5 32
Scheme Server 4GB 2 NA
Diagnostics Server 1GB 0.5 1
Diagnostics Client 300MB 0.2 NA
Ranger Tagsync 4GB 2 12
Loki 4GB 2 32
Prometheus Server 10GB 3 32
Prometheus Blackbox 1GB 0.5 NA
Prometheus Node Exporter 1GB 0.5 NA
Prometheus Kube-State 5GB 2 NA
OTEL Collector 2GB 0.5 NA
Pyroscope 4GB 2 NA
Post Install Job 128MB 0.2 NA
Metrics Annotation 64MB 0.2 NA
Tempo 4GB 2 NA
Large Deployment Size

Pod Memory CPU Disk Replication Factor
Portal 8GB 4 NA
Maria DB 4GB 2 12
Data Server 8GB 2 NA min=3 max=20
Discovery - Driver 8GB 4 32
Discovery - Executor 8GB 4 NA
Discovery Consumer 8GB 4 NA
PolicySync 8GB 4 32
Solr 8GB 4 64 3
Zookeeper 4GB 2 32 3
Ranger KMS 4GB 4 12 NA
Ranger UserSync 8GB 4 12 NA
Grafana 2GB 2 1
Graphite 8GB 4 32
Kafka 8GB 4 32
PEG 8GB 4 NA min=3 max=20
pkafka 8GB 4 NA
Ranger Admin 8GB 4 NA min=2 max=4
Audit Server 4GB 2 32
FluentD 2GB 1 32
Scheme Server 8GB 4 NA
Diagnostics Server 2GB 1 1
Diagnostics Client 300MB 0.2 NA
Ranger Tagsync 8GB 4 12
Loki 4GB 2 32
Prometheus Server 10GB 3 32
Prometheus Blackbox 1GB 0.5 NA
Prometheus Node Exporter 1GB 0.5 NA
Prometheus Kube-State 5GB 2 NA
OTEL Collector 2GB 0.5 NA
Pyroscope 4GB 2 NA
Post Install Job 128MB 0.2 NA
Metrics Annotation 64MB 0.2 NA
Tempo 4GB 2 NA

Override Variables

To customize the default compute sizing, create a file named vars.sizing.yaml in the ~/privacera/privacera-manager/config/custom-vars/ directory and define the required variables in it as shown below:
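
For example, a vars.sizing.yaml that raises the Portal heap size and the Data Server CPU allocation might look like the following (the values here are illustrative, not recommendations):

YAML
# Illustrative values only; include just the variables you want to override.
PORTAL_HEAP_MAX_MEMORY_MB: "4096"
DATASERVER_CPU_MIN: "2"
DATASERVER_CPU_MAX: "4"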

Pod Heap Memory

To provide Guaranteed QoS for memory, request and limit memory are set to the same value by default.
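
As a generic Kubernetes illustration (not taken from a Privacera template), this is what equal requests and limits look like in a container spec:

YAML
# Generic Kubernetes snippet for illustration only. Setting the memory request
# equal to the memory limit (together with matching CPU values) is what
# qualifies a pod for the Guaranteed QoS class.
resources:
  requests:
    memory: "4Gi"
    cpu: "1"
  limits:
    memory: "4Gi"
    cpu: "1"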

Values shown are examples of the format to be used. These are numbers in MB formatted as YAML Strings.

YAML
AUDITSERVER_HEAP_MAX_MEMORY_MB: "4096"

CONNECTOR_HEAP_MAX_MEMORY_MB: "4096"

DATASERVER_HEAP_MAX_MEMORY_MB: "8192"

DISCOVERY_CONSUMER_HEAP_MAX_MEMORY_MB: "8192"

DISCOVERY_DRIVER_HEAP_MAX_MEMORY_MB: "8192"

DISCOVERY_EXECUTOR_HEAP_MAX_MEMORY_MB: "8192"

KAFKA_HEAP_MAX_MEMORY_MB: "4096"

OPS_SERVER_HEAP_MAX_MEMORY_MB: "4096"

PEG_HEAP_MAX_MEMORY_MB: "4096"

PORTAL_HEAP_MAX_MEMORY_MB: "8192"

# GDS backend in Platform  
PRIVACERA_SERVICES_HEAP_MAX_MEMORY_MB: "8192"

RANGER_HEAP_MAX_MEMORY_MB: "8192"

SCHEME_SERVER_HEAP_MAX_MEMORY_MB: "8192"

SOLR_HEAP_MAX_MEMORY_MB: "8192"

USERSYNC_HEAP_MAX_MEMORY_MB: "4096"

ZOOKEEPER_HEAP_MAX_MEMORY_MB: "4096"

TAGSYNC_HEAP_MAX_MEMORY_MB: "4096"

TRINO_HEAP_MAX_MEMORY_MB: "4096"

RANGER_KMS_HEAP_MAX_MEMORY_MB: "4096"

POLICYSYNC_V2_HEAP_MAX_MEMORY_MB: "8192"

PEG_V2_HEAP_MAX_MEMORY_MB: "4096"

# Monitoring Components

LOKI_RESOURCE_REQUESTS_MEMORY: "8Gi"
LOKI_RESOURCE_LIMITS_MEMORY: "8Gi"

OTEL_COLLECTOR_RESOURCE_MEMORY_REQUEST: "8Gi"
OTEL_COLLECTOR_RESOURCE_MEMORY_LIMIT: "8Gi"

PYROSCOPE_RESOURCE_REQUESTS_MEMORY: "8Gi"
PYROSCOPE_RESOURCE_LIMITS_MEMORY: "8Gi"

TEMPO_SERVER_RESOURCES_REQUESTS_MEMORY: "8Gi"
TEMPO_SERVER_RESOURCES_LIMITS_MEMORY: "8Gi"

PROMETHEUS_RESOURCE_MEMORY_LIMIT: "20Gi"
PROMETHEUS_RESOURCE_MEMORY_REQUEST: "20Gi"

PROMETHEUS_BLACKBOX_MEMORY_LIMIT: "2Gi"
PROMETHEUS_BLACKBOX_MEMORY_REQUEST: "2Gi"

PROMETHEUS_KUBE_STATE_METRICS_RESOURCE_MEMORY_LIMIT: "10Gi"
PROMETHEUS_KUBE_STATE_METRICS_RESOURCE_MEMORY_REQUEST: "10Gi"

PROMETHEUS_NODE_EXPORTER_RESOURCE_MEMORY_LIMIT: "2Gi"
PROMETHEUS_NODE_EXPORTER_RESOURCE_MEMORY_REQUEST: "2Gi"

GRAFANA_RESOURCE_MEMORY_LIMIT: "4Gi"
GRAFANA_RESOURCE_MEMORY_REQUEST: "4Gi"
Pod CPU Min and Max

Values are shown as examples of the format to be used. These are numbers in CPU units formatted as YAML Strings.

YAML
AUDITSERVER_CPU_MIN: "0.5"
AUDITSERVER_CPU_MAX: "1.0"

CONNECTOR_CPU_MIN: "1"
CONNECTOR_CPU_MAX: "4"

DATASERVER_CPU_MIN: "2"
DATASERVER_CPU_MAX: "8"

DISCOVERY_DRIVER_CPU_MIN: "2"
DISCOVERY_DRIVER_CPU_MAX: "8"

DISCOVERY_EXECUTOR_CPU_MIN: "2"
DISCOVERY_EXECUTOR_CPU_MAX: "8"

DISCOVERY_CONSUMER_CPU_MIN: "2"
DISCOVERY_CONSUMER_CPU_MAX: "8"

KAFKA_CPU_MIN: "1"
KAFKA_CPU_MAX: "4"

OPS_SERVER_CPU_MIN: "1"
OPS_SERVER_CPU_MAX: "4"

PEG_CPU_MIN: "1"
PEG_CPU_MAX: "4"

PORTAL_CPU_MIN: "2"
PORTAL_CPU_MAX: "8"

PRIVACERA_SERVICES_CPU_MIN: "2"
PRIVACERA_SERVICES_CPU_MAX: "8"

RANGER_CPU_MIN: "2"
RANGER_CPU_MAX: "8"

SCHEME_SERVER_CPU_MIN: "2"
SCHEME_SERVER_CPU_MAX: "8"

SOLR_CPU_MIN: "2"
SOLR_CPU_MAX: "8"

USERSYNC_CPU_MIN: "1"
USERSYNC_CPU_MAX: "4"

ZOOKEEPER_CPU_MIN: "1"
ZOOKEEPER_CPU_MAX: "4"

TAGSYNC_CPU_MIN: "1"
TAGSYNC_CPU_MAX: "4"

TRINO_CPU_MIN: "1"
TRINO_CPU_MAX: "4"

RANGER_KMS_CPU_MIN: "1"
RANGER_KMS_CPU_MAX: "4"

POLICYSYNC_V2_CPU_MIN: "2"
POLICYSYNC_V2_CPU_MAX: "8"

PEG_V2_CPU_MIN: "1"
PEG_V2_CPU_MAX: "4"

PKAFKA_CPU_MIN: "0.5"
PKAFKA_CPU_MAX: "2"

DB_MARIADB_CPU_MIN: "1"
DB_MARIADB_CPU_MAX: "4"

PRIVACERA_USERSYNC_CPU_MIN: "1"
PRIVACERA_USERSYNC_CPU_MAX: "4"

DIAG_SERVER_CPU_MIN: "0.5"
DIAG_SERVER_CPU_MAX: "2"

AUDIT_FLUENTD_CPU_MIN: "0.5"
AUDIT_FLUENTD_CPU_MAX: "2"

SOLR_EXPORTER_CPU_MIN: "0.5"
SOLR_EXPORTER_CPU_MAX: "2"

LOKI_RESOURCE_REQUESTS_CPU: "3"
LOKI_RESOURCE_LIMITS_CPU: "3"

OTEL_COLLECTOR_RESOURCE_CPU_REQUEST: "3"
OTEL_COLLECTOR_RESOURCE_CPU_LIMIT: "3"

PYROSCOPE_RESOURCE_REQUESTS_CPU: "3"
PYROSCOPE_RESOURCE_LIMITS_CPU: "3"

TEMPO_SERVER_RESOURCES_REQUESTS_CPU: "3"
TEMPO_SERVER_RESOURCES_LIMITS_CPU: "3"

PROMETHEUS_RESOURCE_CPU_LIMIT: "3"
PROMETHEUS_RESOURCE_CPU_REQUEST: "3"

PROMETHEUS_BLACKBOX_CPU_LIMIT: "1"
PROMETHEUS_BLACKBOX_CPU_REQUEST: "1"

PROMETHEUS_KUBE_STATE_METRICS_RESOURCE_CPU_LIMIT: "2"
PROMETHEUS_KUBE_STATE_METRICS_RESOURCE_CPU_REQUEST: "2"

PROMETHEUS_NODE_EXPORTER_RESOURCE_CPU_LIMIT: "1"
PROMETHEUS_NODE_EXPORTER_RESOURCE_CPU_REQUEST: "1"

GRAFANA_RESOURCE_CPU_LIMIT: "2"
GRAFANA_RESOURCE_CPU_REQUEST: "2"
Pod Replicas Min and Max

Values are shown as examples of the format to be used. These are numbers formatted as YAML Strings. The Min and Max variables apply when the workload has a Kubernetes HPA configured; otherwise, only the replica count is used.

YAML
AUDIT_FLUENTD_K8S_REPLICAS: "1"

AUDITSERVER_K8S_REPLICAS: "1"

DISCOVERY_CONSUMER_K8S_REPLICAS_MIN: "1"
DISCOVERY_CONSUMER_K8S_REPLICAS_MAX: "4"

DISCOVERY_K8S_REPLICAS: "1"

PROMETHEUS_DEPLOYMENT_REPLICAS: "1"
PROMETHEUS_STATEFULSET_REPLICAS: "1"

GRAFANA_DEPLOYMENT_REPLICAS: "1"
GRAFANA_AUTOSCALING_HPA_MIN_REPLICA: "1"
GRAFANA_AUTOSCALING_HPA_MAX_REPLICA: "2"

LOKI_DEPLOYMENT_REPLICAS: "1"

PORTAL_K8S_REPLICAS: "1"

DIAG_SERVER_K8S_REPLICAS: "1"

PRIVACERA_SERVICES_K8S_REPLICAS: "1"

PRIVACERA_USERSYNC_K8S_REPLICAS: "1"

RANGER_K8S_REPLICAS: "1"

USERSYNC_K8S_REPLICAS: "1"

TAGSYNC_K8S_REPLICAS: "1"

RANGER_KMS_K8S_REPLICAS: "1"

OPS_SERVER_REPLICAS_MIN: "1"
OPS_SERVER_REPLICAS_MAX: "4"

PEG_REPLICAS_MIN: "1"
PEG_REPLICAS_MAX: "4"

PEG_V2_REPLICAS_MIN: "1"
PEG_V2_REPLICAS_MAX: "3"
PEG_V2_K8S_REPLICAS: "1"

SCHEME_SERVER_REPLICAS_MIN: "1"
SCHEME_SERVER_REPLICAS_MAX: "4"

POLICYSYNC_K8S_REPLICAS: "1"

POLICYSYNC_V2_K8S_REPLICAS: "1"

PKAFKA_K8S_REPLICAS: "1"

DB_MARIADB_K8S_REPLICAS: "1"

KAFKA_K8S_REPLICAS: "1"

ZOOKEEPER_K8S_REPLICAS: "1"

SOLR_K8S_REPLICAS: "1"

TRINO_K8S_REPLICAS: "1"
TRINO_WORKER_K8S_REPLICAS: "1"

CONNECTOR_K8S_REPLICAS: "1"
Pod Storage Size (EFS)

Values are shown as examples of the format to be used. Variables ending in _MB take numbers in MB formatted as YAML Strings; other variables take values in Kubernetes disk units (for example, 10Gi).

EFS has no hard capacity limits—the filesystem expands automatically as data grows. The storage values shown here are logical quotas defined at the PVC level, used only for monitoring and alerting on usage thresholds.
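
For illustration, the quota is expressed as an ordinary PersistentVolumeClaim. The claim name and storage class below are hypothetical, not taken from the Privacera templates:

YAML
# Hypothetical PVC for illustration: on EFS the requested size acts as a
# logical quota used for monitoring and alerting, not a hard capacity limit.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: audit-fluentd-data        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany               # EFS supports shared read-write access
  storageClassName: efs-sc        # hypothetical EFS storage class name
  resources:
    requests:
      storage: 10Gi               # e.g., AUDIT_FLUENTD_K8S_PVC_STORAGE_SIZE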

YAML
# The value is in Kubernetes disk format
AUDIT_FLUENTD_K8S_PVC_STORAGE_SIZE: "10Gi"

# The value is in Kubernetes disk format 
AUDITSERVER_K8S_PVC_STORAGE_SIZE: "10Gi"

CONNECTOR_K8S_PVC_STORAGE_SIZE_MB: "1024"
CONNECTOR_ROCKSDB_K8S_PVC_STORAGE_SIZE_MB: "11264"

DATASERVER_K8S_PVC_STORAGE_SIZE_MB: "1024"

DB_MARIADB_K8S_PVC_STORAGE_SIZE_MB: "2048"

DIAG_SERVER_K8S_PVC_STORAGE_SIZE_MB: "1024"

DISCOVERY_K8S_MAPDB_PVC_STORAGE_SIZE_MB: "2048"

# The value is in Kubernetes disk format
KAFKA_K8S_PVC_STORAGE_SIZE: "10Gi"

OPS_SERVER_K8S_PVC_STORAGE_SIZE_MB: "1024"

PEG_K8S_PVC_STORAGE_SIZE_MB: "1024"

POLICYSYNC_K8S_PVC_STORAGE_SIZE_MB: "1024"
POLICYSYNC_ROCKSDB_K8S_PVC_STORAGE_SIZE_MB: "5120"

POLICYSYNC_V2_K8S_PVC_STORAGE_SIZE_MB: "1024"
POLICYSYNC_V2_ROCKSDB_K8S_PVC_STORAGE_SIZE_MB: "11264"

PORTAL_K8S_PVC_STORAGE_SIZE_MB: "1024"

PRIVACERA_USERSYNC_K8S_PVC_STORAGE_SIZE_MB: "1024"
PRIVACERA_USERSYNC_ROCKSDB_K8S_PVC_STORAGE_SIZE_MB: "11264"

RANGER_KMS_K8S_PVC_STORAGE_SIZE_MB: "1024"

SCHEME_SERVER_K8S_PVC_STORAGE_SIZE_MB: "1024"

# The value is in Kubernetes disk format
SOLR_K8S_PVC_STORAGE_SIZE: "5G"

# The value is in Kubernetes disk format
TRINO_WORKER_K8S_PVC_STORAGE_SIZE: "1Gi"

ZOOKEEPER_K8S_PVC_STORAGE_SIZE_MB: "5120"

Apply Override Variables

After creating or updating the vars.sizing.yaml file, the following steps are required to ensure the updated properties are correctly propagated:

Bash
cd ~/privacera/privacera-manager
./privacera-manager.sh setup
./pm_with_helm.sh upgrade