Self Managed and PrivaceraCloud Data Plane Compute Sizing¶
Compute Sizing¶
This section provides compute sizing guidance for a Privacera Platform deployment. Sizing is categorized into three deployment sizes: small, medium, and large, and is listed for each pod in the deployment.
Default Deployment Size and Configuration
SMALL is the default deployment size for the Privacera Platform. This is suitable for Proof of Concept (PoC) environments and small-scale deployments.
To change the deployment size from the default SMALL to MEDIUM or LARGE, set the DEPLOYMENT_SIZE variable:

1. Create or edit the custom variables file (vars.sizing.yaml in the ~/privacera/privacera-manager/config/custom-vars/ directory, as described under Override variables below).
2. Add the DEPLOYMENT_SIZE variable with one of the following values: SMALL, MEDIUM, or LARGE. A sketch of the file follows this list.
3. Apply the configuration by running a Privacera Manager update (see Apply Override Variables below).
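A minimal sketch of the custom variables file for step 2, assuming the file location described under Override variables below; the MEDIUM value is only illustrative:

```yaml
# ~/privacera/privacera-manager/config/custom-vars/vars.sizing.yaml
# Valid values: SMALL (default), MEDIUM, LARGE
DEPLOYMENT_SIZE: "MEDIUM"
```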
Note
The compute sizing is for reference only; actual requirements vary with your workload and deployment environment, and you can adjust the values accordingly.
Finding Service-Specific Variable Names
To locate the resource configuration variable names for a service that is not explicitly listed in this document:
1. Navigate to the service's Kubernetes template directory, for example the Portal service's template directory (a hypothetical path sketch follows this list). In this directory, you'll find Deployment or StatefulSet template files that contain the service's resource configuration variables.
2. Look for variables following these naming patterns:
    - CPU configuration: `<SERVICE_NAME>_K8S_CPU_REQUEST` and `<SERVICE_NAME>_K8S_CPU_LIMIT`
    - Memory configuration: `<SERVICE_NAME>_K8S_MEM_REQUEST` and `<SERVICE_NAME>_K8S_MEM_LIMIT`
    - Replica configuration: `<SERVICE_NAME>_K8S_REPLICAS` or `<SERVICE_NAME>_REPLICAS_MIN`/`<SERVICE_NAME>_REPLICAS_MAX`
    - Storage configuration: `<SERVICE_NAME>_K8S_PVC_STORAGE_SIZE` or `<SERVICE_NAME>_K8S_PVC_STORAGE_SIZE_MB`
This approach helps you find the exact variable names for any service, even if they're not explicitly listed in the override variable examples below.
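A minimal sketch of the lookup for step 1, assuming a typical Privacera Manager layout; the template path shown for the Portal service is hypothetical and may differ in your installation:

```bash
# Hypothetical template directory for the Portal service; adjust to your installation
cd ~/privacera/privacera-manager/ansible/privacera-docker/roles/templates/portal/kubernetes/

# List the Deployment/StatefulSet templates and search them for resource variables
ls
grep -rE "K8S_(CPU|MEM|PVC)|REPLICAS" .
```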
Small deployment (default)

| Pod | Memory | CPU | Disk (GB) | Replication Factor |
|---|---|---|---|---|
| Portal | 2GB | 0.5 | NA | min=1 max=1 |
| Maria DB | 1GB | 0.5 | 12 | |
| Data Server | 2GB | 1 | NA | min=1 max=1 |
| Discovery - Driver | 2GB | 1 | 32 | |
| Discovery - Executor | 2GB | 1 | NA | |
| Discovery Consumer | 2GB | 1 | NA | |
| PolicySync | 2GB | 2 | 32 | |
| Solr | 1.5GB | 1 | 64 | 1 |
| Zookeeper | 1GB | 0.5 | 32 | 1 |
| Ranger KMS | 1GB | 0.5 | 12 | NA |
| Ranger UserSync | 1GB | 0.5 | 12 | NA |
| Grafana | 2GB | 2 | 1 | |
| Graphite | 1GB | 0.5 | 32 | |
| Kafka | 1GB | 0.5 | 32 | |
| PEG | 1GB | 0.5 | NA | min=1 max=2 |
| pkafka | 1GB | 0.5 | NA | |
| Ranger Admin | 2GB | 1 | NA | |
| Audit Server | 1GB | 1 | 32 | |
| FluentD | 512MB | 0.1 | 32 | |
| Scheme Server | 1GB | 0.5 | NA | |
| Diagnostics Server | 512MB | 0.2 | 1 | |
| Diagnostics Client | 300MB | 0.2 | NA | |
| Ranger Tagsync | 1GB | 0.5 | 12 | |
| Loki | 4GB | 2 | 32 | |
| Prometheus Server | 10GB | 3 | 32 | |
| Prometheus Blackbox | 1GB | 0.5 | NA | |
| Prometheus Node Exporter | 1GB | 0.5 | NA | |
| Prometheus Kube-State | 5GB | 2 | NA | |
| OTEL Collector | 2GB | 0.5 | NA | |
| Pyroscope | 4GB | 2 | NA | |
| Post Install Job | 128MB | 0.2 | NA | |
| Metrics Annotation | 64MB | 0.2 | NA | |
| Tempo | 4GB | 2 | NA | |
Medium deployment

| Pod | Memory | CPU | Disk (GB) | Replication Factor |
|---|---|---|---|---|
| Portal | 4GB | 2 | NA | |
| Maria DB | 2GB | 1 | 12 | |
| Data Server | 4GB | 2 | NA | min=2 max=4 |
| Discovery - Driver | 4GB | 2 | 32 | |
| Discovery - Executor | 4GB | 2 | NA | |
| Discovery Consumer | 4GB | 2 | NA | |
| PolicySync | 4GB | 2 | 32 | |
| Solr | 4GB | 2 | 64 | 3 |
| Zookeeper | 2GB | 1 | 32 | 3 |
| Ranger KMS | 2GB | 2 | 12 | NA |
| Ranger UserSync | 4GB | 2 | 12 | NA |
| Grafana | 2GB | 2 | 1 | |
| Graphite | 4GB | 2 | 32 | |
| Kafka | 4GB | 2 | 32 | |
| PEG | 4GB | 2 | NA | min=2 max=10 |
| pkafka | 4GB | 2 | NA | |
| Ranger Admin | 4GB | 2 | NA | min=2 max=4 |
| Audit Server | 4GB | 2 | 32 | |
| FluentD | 1GB | 0.5 | 32 | |
| Scheme Server | 4GB | 2 | NA | |
| Diagnostics Server | 1GB | 0.5 | 1 | |
| Diagnostics Client | 300MB | 0.2 | NA | |
| Ranger Tagsync | 4GB | 2 | 12 | |
| Loki | 4GB | 2 | 32 | |
| Prometheus Server | 10GB | 3 | 32 | |
| Prometheus Blackbox | 1GB | 0.5 | NA | |
| Prometheus Node Exporter | 1GB | 0.5 | NA | |
| Prometheus Kube-State | 5GB | 2 | NA | |
| OTEL Collector | 2GB | 0.5 | NA | |
| Pyroscope | 4GB | 2 | NA | |
| Post Install Job | 128MB | 0.2 | NA | |
| Metrics Annotation | 64MB | 0.2 | NA | |
| Tempo | 4GB | 2 | NA | |
Large deployment

| Pod | Memory | CPU | Disk (GB) | Replication Factor |
|---|---|---|---|---|
| Portal | 8GB | 4 | NA | |
| Maria DB | 4GB | 2 | 12 | |
| Data Server | 8GB | 2 | NA | min=3 max=20 |
| Discovery - Driver | 8GB | 4 | 32 | |
| Discovery - Executor | 8GB | 4 | NA | |
| Discovery Consumer | 8GB | 4 | NA | |
| PolicySync | 8GB | 4 | 32 | |
| Solr | 8GB | 4 | 64 | 3 |
| Zookeeper | 4GB | 2 | 32 | 3 |
| Ranger KMS | 4GB | 4 | 12 | NA |
| Ranger UserSync | 8GB | 4 | 12 | NA |
| Grafana | 2GB | 2 | 1 | |
| Graphite | 8GB | 4 | 32 | |
| Kafka | 8GB | 4 | 32 | |
| PEG | 8GB | 4 | NA | min=3 max=20 |
| pkafka | 8GB | 4 | NA | |
| Ranger Admin | 8GB | 4 | NA | min=2 max=4 |
| Audit Server | 4GB | 2 | 32 | |
| FluentD | 2GB | 1 | 32 | |
| Scheme Server | 8GB | 4 | NA | |
| Diagnostics Server | 2GB | 1 | 1 | |
| Diagnostics Client | 300MB | 0.2 | NA | |
| Ranger Tagsync | 8GB | 4 | 12 | |
| Loki | 4GB | 2 | 32 | |
| Prometheus Server | 10GB | 3 | 32 | |
| Prometheus Blackbox | 1GB | 0.5 | NA | |
| Prometheus Node Exporter | 1GB | 0.5 | NA | |
| Prometheus Kube-State | 5GB | 2 | NA | |
| OTEL Collector | 2GB | 0.5 | NA | |
| Pyroscope | 4GB | 2 | NA | |
| Post Install Job | 128MB | 0.2 | NA | |
| Metrics Annotation | 64MB | 0.2 | NA | |
| Tempo | 4GB | 2 | NA | |
Override variables¶
To customize the default compute sizing, create a file named vars.sizing.yaml in the ~/privacera/privacera-manager/config/custom-vars/ directory and define the required variables in it as shown below:
Pod Heap Memory
To provide Guaranteed QoS for memory, request and limit memory are set to the same value by default.
Values shown are examples of the format to be used; they are numbers in MB, formatted as YAML strings.
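A minimal sketch of the format, using the `<SERVICE_NAME>_K8S_MEM_*` naming pattern described above; the service prefixes and values are illustrative, and heap-specific variable names in your templates may differ:

```yaml
# Memory in MB, quoted as YAML strings.
# Request and limit are kept equal to preserve Guaranteed QoS.
PORTAL_K8S_MEM_REQUEST: "4096"
PORTAL_K8S_MEM_LIMIT: "4096"
RANGER_ADMIN_K8S_MEM_REQUEST: "4096"
RANGER_ADMIN_K8S_MEM_LIMIT: "4096"
```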
Pod CPU Min and Max
Values are shown as examples of the format to be used; they are numbers in CPU units, formatted as YAML strings.
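A minimal sketch, interpreting Min and Max as the CPU request and limit from the naming patterns above; the service prefixes and values are illustrative:

```yaml
# CPU in Kubernetes CPU units, quoted as YAML strings
PORTAL_K8S_CPU_REQUEST: "1"
PORTAL_K8S_CPU_LIMIT: "2"
DATASERVER_K8S_CPU_REQUEST: "2"
DATASERVER_K8S_CPU_LIMIT: "2"
```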
Pod Replicas Min and Max
Values are shown as examples of the format to be used; they are numbers formatted as YAML strings. Min and Max apply when the workload has a Kubernetes HPA configured; otherwise, only a single replica count is used.
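A minimal sketch following the replica naming patterns above; the service prefixes are hypothetical, and the min/max values mirror the medium table's Data Server entry:

```yaml
# Fixed replica count for a workload without an HPA
PORTAL_K8S_REPLICAS: "1"

# Min/max replicas for a workload with a Kubernetes HPA
DATASERVER_REPLICAS_MIN: "2"
DATASERVER_REPLICAS_MAX: "4"
```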
Pod Storage Size (EFS)
Values are shown as examples of the format to be used. When the variable name ends in _MB, the value is a number in MB formatted as a YAML string; for other variables, the value uses Kubernetes storage units (for example, Gi).
EFS has no hard capacity limits—the filesystem expands automatically as data grows. The storage values shown here are logical quotas defined at the PVC level, used only for monitoring and alerting on usage thresholds.
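A minimal sketch following the storage naming patterns above; the service prefixes are hypothetical, and the sizes mirror the Solr and Maria DB disk values from the tables:

```yaml
# Variable ending in _MB: value is a number of MB, quoted as a YAML string (64 GB here)
SOLR_K8S_PVC_STORAGE_SIZE_MB: "65536"

# Other storage variables use Kubernetes storage units
MARIADB_K8S_PVC_STORAGE_SIZE: "12Gi"
```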
Apply Override Variables¶
After creating or updating the vars.sizing.yaml file, the following steps are required to ensure the updated properties are correctly propagated:
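A minimal sketch of the typical propagation step, assuming a standard Privacera Manager installation; confirm the exact update procedure for your environment:

```bash
cd ~/privacera/privacera-manager
./privacera-manager.sh update
```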