Enabling Tempo

Introduction

Tempo is a distributed tracing backend designed to ingest and store traces efficiently without requiring an index, which keeps infrastructure and operational costs low. It lets developers troubleshoot and analyze request flows across microservices and is built to handle high volumes of trace data with minimal overhead.

Process

  1. SSH into the instance where Privacera Manager is installed.
  2. Navigate to the config directory using the following command:
    Bash
    cd ~/privacera/privacera-manager/config/
    
  3. Copy the vars.monitoring.yml file from the sample-vars folder to the custom-vars folder.

    If this file already exists in the custom-vars folder, you can skip this step.

    Bash
    cp sample-vars/vars.monitoring.yml custom-vars/
    
  4. Open vars.monitoring.yml.

    Bash
    vi custom-vars/vars.monitoring.yml
    

  5. Uncomment the following variable in the file and save it.
    • Enable Tempo.
      Bash
      TEMPO_DEPLOYMENT_ENABLED: "true"
      
  6. Once done, redeploy the monitoring components.

    a. Go to privacera-manager directory.

    Bash
    cd ~/privacera/privacera-manager
    
    b. Run setup to generate the required files.
    Bash
    ./privacera-manager.sh setup
    
    c. Install the monitoring components.
    Bash
    ./pm_with_helm.sh install-monitoring
    
    d. Once done, run post-install.
    Bash
    ./privacera-manager.sh post-install
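
Once the redeploy completes, you can optionally verify that the Tempo components are up. The following is a minimal check, assuming kubectl is configured against the cluster and the default privacera-monitoring namespace is used; pod names vary by release.

Bash
# List Tempo pods in the monitoring namespace (default: privacera-monitoring)
kubectl get pods -n privacera-monitoring | grep tempo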
    

Configure Cloud Storage for Tempo

This guide explains how to configure cloud-based object storage for Grafana Tempo using AWS S3, Azure Blob Storage, or Google Cloud Storage (GCS) within a production-ready Privacera Monitoring stack.

Configure AWS S3 for Tempo

Prerequisites

Ensure the following are in place:

  1. An S3 bucket to store Tempo data.
  2. An IAM role with the necessary permissions and a trust relationship for the Kubernetes service account.

Step 1: Ensure that an Identity Provider (IdP) already exists for your EKS cluster’s OIDC issuer. If not, create one before proceeding.

  1. Navigate to AWS EKS → Select your cluster.
  2. From the Overview tab, copy the OIDC (OpenID Connect) provider URL.
  3. Go to IAM → Identity Providers → Add provider.
    • Select OpenID Connect.
    • Paste the OIDC URL under Provider URL and click Get thumbprint.
    • Set Audience as sts.amazonaws.com.
    • Add optional tags and click Add provider. (The provider ID will be needed in the IAM role’s trust relationship.)

Step 2: Create an IAM Policy

Go to IAM → Policies, and create a new policy with the following JSON definition:

JSON
{
"Version": "2012-10-17",
"Statement": [
    {
    "Effect": "Allow",
    "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:GetObjectTagging",
        "s3:PutObjectTagging"
    ],
    "Resource": [
        "arn:aws:s3:::<AWS_S3_BUCKET_NAME>",
        "arn:aws:s3:::<AWS_S3_BUCKET_NAME>/*"
    ]
    }
]
}

Tip

  • Replace <AWS_S3_BUCKET_NAME> with your AWS S3 bucket name.
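
If you prefer the AWS CLI to the console, the same policy can be created as follows. This is a minimal sketch; the policy name tempo-s3-policy and the file name tempo-s3-policy.json are illustrative placeholders.

Bash
# Save the JSON definition above as tempo-s3-policy.json, then create the policy
aws iam create-policy \
    --policy-name tempo-s3-policy \
    --policy-document file://tempo-s3-policy.json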

Step 3: Create an IAM Role and Trust Relationship

  1. Navigate to IAM → RolesCreate role.
  2. Select Web identity as the trusted entity type.
  3. Choose the OIDC provider created earlier, set Audience to sts.amazonaws.com, and proceed.
  4. Attach the custom policy from the previous step.
  5. Name and create the role.

Once created, modify the trust relationship to limit role assumption to specific Kubernetes service accounts:

JSON
{
"Version": "2012-10-17",
"Statement": [
    {
    "Effect": "Allow",
    "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
        "StringLike": {
        "oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>:sub": [
            "system:serviceaccount:privacera-monitoring:tempo-distributed"
        ]
        }
    }
    }
]
}

Tip

  • Replace <AWS_ACCOUNT_ID>, <AWS_REGION>, and <OIDC_ID> with your AWS account ID, AWS region, and OIDC provider ID, respectively.
  • The sts:AssumeRoleWithWebIdentity action allows a service (such as a Kubernetes service account) to assume an IAM role using a web identity token (e.g., OIDC).
  • The default service account name is tempo-distributed and the default namespace is privacera-monitoring.
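
The role can also be created and wired up from the AWS CLI. A minimal sketch, assuming the trust relationship JSON above is saved as tempo-trust-policy.json; the role and policy names are illustrative placeholders.

Bash
# Create the role with the trust relationship, then attach the S3 policy from Step 2
aws iam create-role \
    --role-name tempo-s3-role \
    --assume-role-policy-document file://tempo-trust-policy.json
aws iam attach-role-policy \
    --role-name tempo-s3-role \
    --policy-arn arn:aws:iam::<AWS_ACCOUNT_ID>:policy/tempo-s3-policy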

Step 4: Configure Tempo for S3

  1. SSH into the instance where Privacera Manager is installed.
  2. Navigate to the configuration directory:
Bash
cd ~/privacera/privacera-manager/config/custom-vars/
  3. Create the Tempo custom values file:
Bash
vi tempo_distributed_custom_values.yml
  4. Add the following configuration:
YAML
storage:
  trace:
    block:
      version: null
      dedicated_columns: []
    backend: s3
    s3:
      bucket: <AWS_S3_BUCKET_NAME>
      prefix: tempo-data
      endpoint: s3-accesspoint.<AWS_REGION>.amazonaws.com
      region: <AWS_REGION>
    pool:
      max_workers: 400
      queue_depth: 20000
  admin:
    backend: filesystem

serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn:  <AWS_IAM_ROLE_ARN>

Tip

  • Replace <AWS_S3_BUCKET_NAME>, <AWS_REGION>, and <AWS_IAM_ROLE_ARN> with your S3 bucket name, AWS region, and the ARN of the IAM role created above.
  • The prefix tempo-data is the directory that will be created inside the S3 bucket.

Step 5: Redeploy Monitoring Components

a. Go to privacera-manager directory.

Bash
cd ~/privacera/privacera-manager
b. Run setup to generate the required files.
Bash
./privacera-manager.sh setup
c. Install the monitoring components.
Bash
./pm_with_helm.sh install-monitoring
d. Once done, run post-install.
Bash
./privacera-manager.sh post-install
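
After the redeploy, Tempo should start writing blocks under the configured prefix. As an optional check (assuming kubectl access to the cluster and AWS CLI access to the bucket), verify the service account annotation and list the prefix; the first blocks may take a few minutes to appear.

Bash
# Confirm the tempo-distributed service account carries the IAM role annotation
kubectl get sa tempo-distributed -n privacera-monitoring -o yaml | grep role-arn
# List objects written by Tempo under the configured prefix
aws s3 ls s3://<AWS_S3_BUCKET_NAME>/tempo-data/ --recursive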

Configure Azure Blob Storage for Tempo

Step 1: Enable Workload Identity on AKS

  1. Go to Azure Portal → Kubernetes Services → Your AKS Cluster.
  2. In the left sidebar, go to Settings → Authentication.
  3. Ensure the following are enabled:
    • OIDC Issuer: Enabled
    • Workload Identity: Enabled
  4. Click Save if any changes were made.
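
If either setting is disabled, it can also be enabled with the Azure CLI. A minimal sketch; the cluster and resource group names are placeholders.

Bash
# Enable the OIDC issuer and workload identity on an existing AKS cluster
az aks update \
    --name <AKS_CLUSTER_NAME> \
    --resource-group <RESOURCE_GROUP> \
    --enable-oidc-issuer \
    --enable-workload-identity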

Step 2: Create a Storage Account

  1. Go to Azure Portal → Storage Accounts → Create.
  2. Configure:
    • Name: e.g., privaceramonitoringsa (must be globally unique)
    • Region: Same as your AKS cluster
    • Resource Group: Same as AKS
    • Performance: Standard
    • Replication: LRS
  3. Click Review + create, then Create.

Step 3: Create a Blob Container

  1. In your created storage account, go to Data storage → Containers.
  2. Click + Container and configure:
    • Name: e.g., privacera-monitoring-container
    • Public access level: Private (no anonymous access)
  3. Click Create.

Step 4: Create a User-Assigned Managed Identity

  1. Go to Azure Portal → Managed Identities → Create.
  2. Configure:
    • Name: e.g., privacera-monitoring-identity
    • Region: Same as your AKS cluster
    • Resource Group: Same as AKS/storage
  3. Click Review + create, then Create.
  4. After creation, note down:
    • Client ID

Step 5: Assign Permissions to the Managed Identity

Note

"Make sure you have owner access to your storage account."

  1. Go to Storage Accounts → Your Storage Account → Access Control (IAM).
  2. Click + Add → Add role assignment.
  3. Configure:
    • Role: Storage Blob Data Contributor
    • Assign access to: Managed identity
    • Select member: Choose the privacera-monitoring-identity managed identity
  4. Click Save.
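
The same assignment can be made with the Azure CLI. A minimal sketch; the subscription ID, resource group, and storage account name are placeholders, and the assignee is the managed identity’s client ID from Step 4.

Bash
# Grant the managed identity Storage Blob Data Contributor on the storage account
az role assignment create \
    --assignee <AZURE_MANAGED_IDENTITY_CLIENT_ID> \
    --role "Storage Blob Data Contributor" \
    --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>"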

Step 6: Add Federated Credential to the Managed Identity

  1. Go to Azure Portal → Kubernetes Services → Your AKS Cluster → Settings → Authentication.
  2. Copy the OIDC Issuer URL (e.g., https://oidc.prod-aks.azure.com/...).

Then:

  1. Go to Azure Portal → Managed Identities → privacera-monitoring-identity → Federated credentials.
  2. Click + Add credential.
  3. Fill in the following:
    • Name: e.g., tempo-federated
    • Issuer: Paste the OIDC Issuer URL
    • Subject: e.g., system:serviceaccount:privacera-monitoring:tempo-distributed
    • Audience: api://AzureADTokenExchange
  4. Click Add.

Tip

  • The default service account name is tempo-distributed and the default namespace is privacera-monitoring.
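
The federated credential can also be added with the Azure CLI. A minimal sketch, assuming the default tempo-distributed service account in the privacera-monitoring namespace; the credential name, identity name, cluster name, and resource group are placeholders.

Bash
# Look up the cluster's OIDC issuer URL
AKS_OIDC_ISSUER=$(az aks show --name <AKS_CLUSTER_NAME> --resource-group <RESOURCE_GROUP> \
    --query "oidcIssuerProfile.issuerUrl" -o tsv)
# Create the federated credential on the managed identity
az identity federated-credential create \
    --name tempo-federated \
    --identity-name privacera-monitoring-identity \
    --resource-group <RESOURCE_GROUP> \
    --issuer "$AKS_OIDC_ISSUER" \
    --subject system:serviceaccount:privacera-monitoring:tempo-distributed \
    --audiences api://AzureADTokenExchange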

Step 7: Configure Tempo for Azure

  1. SSH into the Privacera Manager instance.
  2. Navigate to the config directory:
Bash
cd ~/privacera/privacera-manager/config/custom-vars/
  3. Create the custom values file:
Bash
vi tempo_distributed_custom_values.yml
  4. Add the following content:
YAML
storage:
  trace:
    backend: azure
    azure:
      container_name: <CONTAINER_NAME>
      storage_account_name: <STORAGE_ACCOUNT_NAME>
      use_federated_token: true
  admin:
    backend: filesystem
serviceAccount:
  annotations:
    azure.workload.identity/client-id: <AZURE_MANAGED_IDENTITY_CLIENT_ID>
tempo: 
  podLabels: 
    azure.workload.identity/use: "true"

Tip

  • Replace <CONTAINER_NAME>, <STORAGE_ACCOUNT_NAME>, and <AZURE_MANAGED_IDENTITY_CLIENT_ID> with your Azure container name, storage account name, and the client ID of the managed identity created above.

Step 8: Redeploy Monitoring Components

a. Go to privacera-manager directory.

Bash
cd ~/privacera/privacera-manager
b. Run setup to generate the required files.
Bash
./privacera-manager.sh setup
c. Install the monitoring components.
Bash
./pm_with_helm.sh install-monitoring
d. Once done, run post-install.
Bash
./privacera-manager.sh post-install
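
Once the redeploy finishes, an optional check is to confirm the workload identity wiring and that Tempo is writing blocks to the container. This sketch assumes kubectl access to the cluster and an Azure CLI session with rights on the storage account.

Bash
# Confirm the service account carries the workload identity client ID annotation
kubectl get sa tempo-distributed -n privacera-monitoring -o yaml | grep client-id
# List blobs written by Tempo
az storage blob list \
    --account-name <STORAGE_ACCOUNT_NAME> \
    --container-name <CONTAINER_NAME> \
    --auth-mode login \
    --output table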

Configure Google Cloud Storage (GCS) for Tempo

Step 1: Create a GCS Bucket

  1. Go to GCP Console → Cloud Storage.
  2. Click Create Bucket.
  3. Configure:
    • Name: Globally unique
    • Region: As required
    • Storage class: Standard
    • Access control: Uniform
  4. Click Create.

Step 2: Create a Service Account

  1. Go to IAM & Admin → Service Accounts.
  2. Click Create Service Account.
  3. Provide a name and (optionally) a description.
  4. Skip assigning roles for now.
  5. Click Done.

Step 3: Grant Bucket Access to the Service Account

  1. Go back to your bucket → Permissions tab.
  2. Click Grant Access.
  3. Add your service account’s email.
  4. Assign the following roles:
    • Storage Admin
    • Storage Object Admin
  5. Click Save.
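
The same grants can be applied from the command line with gsutil. A minimal sketch; the service account email and bucket name are placeholders.

Bash
# Grant the service account admin and object-admin access on the bucket
gsutil iam ch serviceAccount:<GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com:roles/storage.admin gs://<GCP_BUCKET_NAME>
gsutil iam ch serviceAccount:<GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com:roles/storage.objectAdmin gs://<GCP_BUCKET_NAME>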

Step 4: Configure Workload Identity

  1. Open the service account details.
  2. Under Permissions, click + Grant Access.
  3. In New principals, add:
Text Only
serviceAccount:<PROJECT_ID>.svc.id.goog[privacera-monitoring/tempo-distributed]

Tip

  • Replace <PROJECT_ID> with your GCP project ID.
  • The default monitoring namespace is privacera-monitoring and the default Kubernetes service account name is tempo-distributed.
  4. Assign the role:
    Service Account Workload Identity User
  5. Save the configuration.
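
The Workload Identity binding can also be created with gcloud. A minimal sketch, assuming the default privacera-monitoring namespace and tempo-distributed Kubernetes service account.

Bash
# Allow the Kubernetes service account to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
    <GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:<PROJECT_ID>.svc.id.goog[privacera-monitoring/tempo-distributed]"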

Step 5: Configure Tempo for GCP

  1. SSH into the Privacera Manager instance.
  2. Navigate to the config directory:
Bash
cd ~/privacera/privacera-manager/config/custom-vars/
  3. Create the custom values file:
Bash
vi tempo_distributed_custom_values.yml
  4. Add the configuration below:
YAML
storage:
  trace:
    backend: gcs
    gcs:
      bucket_name: <GCP_BUCKET_NAME>
      prefix: "tempo-data"
    pool:
      max_workers: 400
      queue_depth: 20000
  admin:
    backend: filesystem

serviceAccount:
  annotations:
    iam.gke.io/gcp-service-account: "<GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com"

Tip

  • Replace <GCP_BUCKET_NAME>, <GCP_SERVICE_ACCOUNT>, and <PROJECT_ID> with your GCS bucket name, service account name, and project ID, respectively.
  • The prefix tempo-data is the directory that will be created inside the GCS bucket.

Step 6: Redeploy Monitoring Components

a. Go to privacera-manager directory.

Bash
cd ~/privacera/privacera-manager
b. Run setup to generate the required files.
Bash
./privacera-manager.sh setup
c. Install the monitoring components.
Bash
./pm_with_helm.sh install-monitoring
d. Once done, run post-install.
Bash
./privacera-manager.sh post-install
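
After redeploying, you can optionally confirm that the service account annotation was applied and that Tempo is writing blocks under the configured prefix (the first blocks may take a few minutes to appear). This assumes kubectl access to the cluster and gsutil access to the bucket.

Bash
# Confirm the GCP service account annotation on the Kubernetes service account
kubectl get sa tempo-distributed -n privacera-monitoring -o yaml | grep gcp-service-account
# List objects written by Tempo under the configured prefix
gsutil ls gs://<GCP_BUCKET_NAME>/tempo-data/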
