Privacera Performance Benchmarks

This page summarizes performance benchmarks for the Privacera Encryption Gateway (PEG) under concurrent load. Results are from a controlled test environment. Use these results as reference guidance to plan deployment settings; throughput and latency may vary with workload, payload size, and infrastructure.

Test Methodology

  • Testing method: REST API calls to the Privacera Encryption Gateway Server.
  • Payload size: ~2 MB.
  • Load profile: Concurrent simulated users (10, 50, 100, 150) over a fixed test duration (300 seconds).
  • Schemes tested:
    • FPE (Format-Preserving Encryption): Format type Text, algorithm FPE — scheme name SYSTEM_PERSON_NAME.
    • Non-FPE: Format type Text, algorithm AlphaNumeric — scheme name SYSTEM_ACCOUNT.
  • Metrics reported: Total requests, requests per minute, average latency (Avg Latency), and latency percentiles P50, P90, P95, P99, P99.9, P99.99.
  • Replica scenarios: Tests were run with 1 and 3 PEG replicas (see tables below).

Test Configuration

  • Pod resources: Controlled by the following variables:
```yaml
PEG_V2_K8S_MEM_LIMITS: "4096M"
PEG_V2_CPU_MIN: "2"
PEG_V2_CPU_MAX: "4"
```
  • Autoscaling (HPA): To enable or tune Horizontal Pod Autoscaler (HPA) behavior, set the minimum and maximum replica count. Adjust these PEG variables as needed for your workload:
```yaml
PEG_V2_REPLICAS_MIN: "1"
PEG_V2_REPLICAS_MAX: "3"
```

To adjust these values and other JVM or resource settings, see JVM parameters override variables.
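Once `PEG_V2_REPLICAS_MIN` and `PEG_V2_REPLICAS_MAX` are set, the replica count itself is chosen by the standard Kubernetes HPA algorithm, not by PEG. As a rough planning aid, the sketch below reproduces that standard formula (`desired = ceil(currentReplicas × currentMetric / targetMetric)`, clamped to the configured bounds); the metric values and target are illustrative:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 3) -> int:
    """Standard Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to [min_replicas, max_replicas]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# Illustrative: 1 replica running at 180% of the CPU target scales to 2;
# 500% of target would be capped at PEG_V2_REPLICAS_MAX (3 here).
print(desired_replicas(1, 1.8, 1.0))  # 2
print(desired_replicas(1, 5.0, 1.0))  # 3
```

This also explains why, under a load the HPA does not judge to exceed the target utilization, the replica count stays at the minimum (compare the 3-replica benchmark note below).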

Latency metrics

  • Avg Latency: Mean time for a request to complete.
  • P50 (median): 50% of requests completed within this time.
  • P90: 90% of requests completed within this time.
  • P95: 95% of requests completed within this time.
  • P99: 99% of requests completed within this time.
  • P99.9: 99.9% of requests completed within this time.
  • P99.99: 99.99% of requests completed within this time.
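The percentile figures reported below can be reproduced from raw per-request latency samples. This is a minimal nearest-rank sketch (the sample values are hypothetical, and the exact interpolation method used by a given load-testing tool may differ slightly):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample value such that
    at least p% of all samples are <= it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

# Hypothetical per-request latencies (ms) from a small run
latencies_ms = [0.9, 1.1, 1.2, 1.3, 2.0, 2.3, 2.4, 2.4, 2.45, 3.0]
for p in (50, 90, 95, 99):
    print(f"P{p}: {percentile(latencies_ms, p)} ms")
```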

Benchmark Results

All runs used a test duration of 300 seconds.

FPE (SYSTEM_PERSON_NAME)

| Replicas | Users | Total requests | Req/min | Avg Latency | P50 | P90 | P95 | P99 | P99.9 | P99.99 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10 | 3754 | 750.8 | 1.22 ms | 0.90 ms | 2.37 ms | 2.41 ms | 2.44 ms | 2.45 ms | 2.45 ms |
| 1 | 50 | 3916 | 783.2 | 3.36 s | 2.83 s | 7.14 s | 8.04 s | 9.36 s | 10.28 s | 11.34 s |
| 1 | 100 | 4837 | 967.4 | 3.49 s | 3.56 s | 7.30 s | 10.64 s | 13.64 s | 14.28 s | 15.41 s |
| 1 | 150 | 5140 | 1028 | 5.86 s | 6.12 s | 12.55 s | 15.70 s | 18.77 s | 22.49 s | 22.87 s |
| 3 | 10 | 5343 | 1068.6 | 572 ms | 231.39 ms | 1.58 s | 1.92 s | 3.44 s | 4.98 s | 6.91 s |
| 3 | 50 | 4079 | 815.8 | 3.19 s | 2.60 s | 7.23 s | 8.20 s | 9.66 s | 11.04 s | 11.41 s |
| 3 | 100 | 4178 | 835.6 | 5.47 s | 4.69 s | 10.69 s | 11.77 s | 13.94 s | 15.48 s | 15.72 s |
| 3 | 150 | 4246 | 849.2 | 7.79 s | 7.33 s | 15.61 s | 19.26 s | 22.69 s | 27.86 s | 28.56 s |

Non-FPE (SYSTEM_ACCOUNT)

| Replicas | Users | Total requests | Req/min | Avg Latency | P50 | P90 | P95 | P99 | P99.9 | P99.99 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 10 | 4184 | 836.8 | 652 ms | 453.90 ms | 1.70 s | 2.31 s | 3.62 s | 4.98 s | 8.44 s |
| 1 | 50 | 3903 | 780.6 | 3.07 s | 2.88 s | 7.75 s | 8.63 s | 9.97 s | 11.28 s | 11.44 s |
| 1 | 100 | 3925 | 785 | 6.25 s | 4.68 s | 14.50 s | 16.03 s | 20.75 s | 22.69 s | 22.88 s |
| 1 | 150 | 4373 | 874.6 | 6.91 s | 6.30 s | 14.77 s | 17.87 s | 22.58 s | 27.85 s | 28.55 s |
| 3 | 10 | 4167 | 833.4 | 714 ms | 390.02 ms | 1.87 s | 2.33 s | 3.98 s | 9.46 s | 9.97 s |
| 3 | 50 | 4553 | 910.6 | 2.97 s | 2.38 s | 6.68 s | 7.98 s | 9.96 s | 13.68 s | 15.10 s |
| 3 | 100 | 4470 | 894 | 5.66 s | 4.92 s | 11.75 s | 13.50 s | 15.39 s | 16.80 s | 17.14 s |
| 3 | 150 | 4328 | 865.6 | 7.59 s | 6.61 s | 14.85 s | 18.55 s | 22.50 s | 27.67 s | 28.54 s |

Note: In the 3-replica runs, all load was served by a single pod; the HPA did not scale out to additional replicas under the tested load profile.
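The Req/min column in the tables above follows directly from total requests over the 300-second (5-minute) test duration; a quick check:

```python
TEST_DURATION_S = 300  # fixed test duration used in all runs

def requests_per_minute(total_requests: int,
                        duration_s: int = TEST_DURATION_S) -> float:
    """Throughput in requests per minute over the test window."""
    return total_requests / (duration_s / 60)

# First FPE row: 1 replica, 10 users, 3754 total requests
print(requests_per_minute(3754))  # 750.8
```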


Monitoring: Dashboards for Memory and API Response Time

Use the following dashboards to monitor memory usage and API response time for PEG.

Dashboard locations

| Dashboard group | Dashboard | Use for |
|---|---|---|
| Infra-Dashboards → Pod Monitoring | PEG server | Pod-level metrics. |
| Infra-Dashboards → Pod Monitoring | Scheme Server | Scheme Server pod metrics. |
| Application Dashboards → PEG | PEG Server | PEG application metrics (e.g., request rate, response time). |
| Application Dashboards → PEG | Scheme Server | Scheme Server application metrics. |
| Common-Dashboards → JVM (SpringBoot-Applications) | PEG Server | JVM/Spring Boot metrics for PEG server. |
| Common-Dashboards → JVM (SpringBoot-Applications) | Scheme Server | JVM/Spring Boot metrics for Scheme Server. |

How to check memory usage (PEG server)

  1. In Grafana, open Infra-Dashboards → Pod Monitoring.
  2. Select the namespace and pods for your environment.
  3. Use the Memory Usage (or All Processes Memory Usage) panel to view pod memory over time.

Memory usage can increase under load when processing larger payloads; use this panel to confirm sufficient capacity.

How to check API response time (PEG server)

  1. Open Application Dashboards → PEG → PEG Server.
  2. Select the namespace and pod for your environment.
  3. Under Incoming HTTP Requests, use the HTTP Response Time panel to view response times per endpoint.

Recommendations

  • Replicas and autoscaling: Use 3 replicas or enable autoscaling (e.g., via Privacera Manager) when processing heavy loads so PEG can scale out and maintain latency. Use the benchmark tables above to align with expected user concurrency and target latency.
  • JVM and resources: Tune JVM and pod resource limits/requests (using the variables above) through Privacera Manager based on your workload. To fine-tune JVM settings, see JVM parameters override variables.

Conclusion

  • PEG handles concurrent REST API load for both FPE and non-FPE schemes; throughput and latency depend on replica count and user concurrency.
  • Results are reported at P50, P90, P95, P99, P99.9, and P99.99 to support capacity planning; latency and throughput can differ by scheme and replica count.
  • In the 3-replica runs, all load was served by a single pod; the Horizontal Pod Autoscaler (HPA) did not scale out to additional replicas under the tested load profile.
  • Tests covered 10–150 concurrent users; plan replicas and resources based on your expected concurrency and target latency (e.g., P95 or P99).
  • Use the results here to size deployment for your load, and prefer 3 replicas or autoscaling for heavy or variable traffic.