Enabling Loki

Introduction

Loki is a log aggregation system designed to store and query logs efficiently. Unlike traditional log management systems, Loki indexes only metadata (labels) rather than the full log contents, which makes it highly scalable and cost-effective. It stores logs in a compressed format with minimal indexing, reducing storage and operational costs.
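
Because only labels are indexed, logs are selected with label matchers rather than full-text search. A minimal query sketch using Loki's logcli tool (assuming logcli is installed and the LOKI_ADDR environment variable points at your Loki gateway; the label name is illustrative):

Bash
# Fetch the last hour of logs matching a label selector
logcli query '{namespace="privacera-monitoring"}' --since=1h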

Process

  1. SSH into the instance where Privacera Manager is installed.
  2. Navigate to the config directory using the following command:
    Bash
    cd ~/privacera/privacera-manager/config/
    
  3. Copy the vars.monitoring.yml file from the sample-vars folder to the custom-vars folder.

    If this file already exists in the custom-vars folder, you can skip this step.

    Bash
    cp sample-vars/vars.monitoring.yml custom-vars/
    
  4. Open vars.monitoring.yml.

    Bash
    vi custom-vars/vars.monitoring.yml
    

  5. Uncomment the following variable in the file and save it.
    • Enable Loki.
      Bash
      LOKI_DEPLOYMENT_ENABLED: "true"
      
  6. Once done, redeploy the monitoring components.

    a. Go to the privacera-manager directory.

    Bash
    cd ~/privacera/privacera-manager
    
    b. Run setup to generate the required files.
    Bash
    ./privacera-manager.sh setup
    
    c. Install the monitoring components.
    Bash
    ./pm_with_helm.sh install-monitoring
    
    d. Once done, run post-install.
    Bash
    ./privacera-manager.sh post-install
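
    Once the deployment completes, you can optionally verify that the Loki pods are running (assuming the default privacera-monitoring namespace):

    Bash
    kubectl get pods -n privacera-monitoring | grep loki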
    

Configure Cloud Storage for Loki

This guide outlines the steps to configure cloud-based object storage for Grafana Loki using AWS S3, Azure Blob Storage, or Google Cloud Storage (GCS) within a production-ready Privacera Monitoring stack.

Configure AWS S3 for Loki

Prerequisites

Ensure the following are in place:

  1. An S3 bucket to store Loki logs.
  2. An IAM role with the necessary permissions and a trust relationship (for the Kubernetes service account).

Note

By default, the Loki retention period is set to 30 days.

Step 1: Ensure that an Identity Provider (IdP) is already created for your EKS cluster’s OIDC issuer. If not, create one before proceeding.

  1. Navigate to AWS EKS → Select your cluster.
  2. From the Overview tab, copy the OIDC (OpenID Connect) provider URL.
  3. Go to IAM → Identity Providers → Add provider.
    • Select OpenID Connect.
    • Paste the OIDC URL under Provider URL and click Get thumbprint.
    • Set Audience as sts.amazonaws.com.
    • Add optional tags and click Add provider. (You will need this provider's ID when configuring the IAM role.)
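
You can also look up the cluster's OIDC issuer URL and the providers already registered in IAM from the AWS CLI; a hedged sketch (the cluster name is a placeholder):

Bash
# Print the cluster's OIDC issuer URL
aws eks describe-cluster --name <CLUSTER_NAME> \
  --query "cluster.identity.oidc.issuer" --output text

# List identity providers already registered in IAM
aws iam list-open-id-connect-providers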

Step 2: Create an IAM Policy

Go to IAM → Policies, and create a new policy with the following JSON definition:

JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:DeleteObject",
        "s3:GetObjectTagging",
        "s3:PutObjectTagging"
      ],
      "Resource": [
        "arn:aws:s3:::<AWS_S3_BUCKET_NAME>",
        "arn:aws:s3:::<AWS_S3_BUCKET_NAME>/*"
      ]
    }
  ]
}

Tip

  • Replace <AWS_S3_BUCKET_NAME> with your AWS S3 bucket name.
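
If you prefer the AWS CLI over the console, the same policy can be created as follows (a sketch; the policy name and file path are illustrative):

Bash
# Save the JSON definition above as loki-s3-policy.json, then:
aws iam create-policy --policy-name loki-s3-access \
  --policy-document file://loki-s3-policy.json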

Step 3: Create an IAM Role and Trust Relationship

  1. Navigate to IAM → Roles → Create role.
  2. Select Web identity as the trusted entity type.
  3. Choose the OIDC provider created earlier, set Audience to sts.amazonaws.com, and proceed.
  4. Attach the custom policy from the previous step.
  5. Name and create the role.

Once created, modify the trust relationship to limit role assumption to specific Kubernetes service accounts:

JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>:sub": [
            "system:serviceaccount:privacera-monitoring:loki-distributed"
          ]
        }
      }
    }
  ]
}

Tip

  • Replace <AWS_ACCOUNT_ID>, <AWS_REGION>, and <OIDC_ID> with your AWS account ID, AWS region, and OIDC ID, respectively.
  • The sts:AssumeRoleWithWebIdentity action allows a service (such as a Kubernetes service account) to assume an IAM role using a web identity token (e.g., OIDC).
  • The default service account name is loki-distributed and the default namespace is privacera-monitoring.
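
Role creation and policy attachment can also be scripted; a hedged CLI sketch (the role and policy names are illustrative, and trust.json is the trust relationship above):

Bash
# Create the role with the trust relationship as its assume-role policy
aws iam create-role --role-name loki-s3-role \
  --assume-role-policy-document file://trust.json

# Attach the S3 access policy created in Step 2
aws iam attach-role-policy --role-name loki-s3-role \
  --policy-arn arn:aws:iam::<AWS_ACCOUNT_ID>:policy/loki-s3-access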

Step 4: Configure Loki for S3

  1. SSH into the instance where Privacera Manager is installed.
  2. Navigate to the configuration directory:

    Bash
    cd ~/privacera/privacera-manager/config/custom-vars/

  3. Create the Loki custom values file:

    Bash
    vi loki_custom_values.yml

  4. Add the following configuration:
YAML
loki:
  schemaConfig:
    configs:
    - from: "2020-09-07"
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: loki_index_
        period: 24h

  storageConfig:
    aws:
      s3: s3://<AWS_REGION>/<AWS_S3_BUCKET_NAME>
      region: <AWS_REGION>
      s3forcepathstyle: true
      bucketnames: <AWS_S3_BUCKET_NAME>
    boltdb_shipper:
      shared_store: s3
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h

serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: <AWS_IAM_ROLE_ARN>

Tip

  • Replace <AWS_S3_BUCKET_NAME>, <AWS_REGION>, and <AWS_IAM_ROLE_ARN> with your S3 bucket name, AWS region, and the ARN of the IAM role created above.
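
After redeploying (see the next step), you can verify that the role annotation was applied to the Loki service account, assuming the default names:

Bash
kubectl -n privacera-monitoring get serviceaccount loki-distributed -o yaml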

Step 5: Redeploy Monitoring Components

a. Go to the privacera-manager directory.

    Bash
    cd ~/privacera/privacera-manager

b. Run setup to generate the required files.

    Bash
    ./privacera-manager.sh setup

c. Install the monitoring components.

    Bash
    ./pm_with_helm.sh install-monitoring

d. Once done, run post-install.

    Bash
    ./privacera-manager.sh post-install

Configure Azure Blob Storage for Loki

Step 1: Enable Workload Identity on AKS

  1. Go to Azure Portal → Kubernetes Services → Your AKS Cluster.
  2. In the left sidebar, go to Settings → Authentication.
  3. Ensure the following are enabled:
    • OIDC Issuer: Enabled
    • Workload Identity: Enabled
  4. Click Save if any changes were made.
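
If either setting is disabled, the Azure CLI can enable both; a hedged sketch (the resource group and cluster names are placeholders):

Bash
# Enable the OIDC issuer and workload identity on the cluster
az aks update --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME> \
  --enable-oidc-issuer --enable-workload-identity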

Step 2: Create a Storage Account

  1. Go to Azure Portal → Storage Accounts → Create.
  2. Configure:
    • Name: e.g., privaceramonitoringsa (must be globally unique)
    • Region: Same as your AKS cluster
    • Resource Group: Same as AKS
    • Performance: Standard
    • Replication: LRS
  3. Click Review + create, then Create.

Step 3: Create a Blob Container

  1. In your created storage account, go to Data storage → Containers.
  2. Click + Container and configure:
    • Name: e.g., privacera-monitoring-container
    • Public access level: Private (no anonymous access)
  3. Click Create.
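
Steps 2 and 3 can also be performed from the Azure CLI; a hedged sketch (the names mirror the examples above):

Bash
# Create the storage account
az storage account create --name privaceramonitoringsa \
  --resource-group <RESOURCE_GROUP> --location <REGION> --sku Standard_LRS

# Create the private blob container
az storage container create --name privacera-monitoring-container \
  --account-name privaceramonitoringsa --auth-mode login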

Step 4: Create a User-Assigned Managed Identity

  1. Go to Azure Portal → Managed Identities → Create.
  2. Configure:
    • Name: e.g., privacera-monitoring-identity
    • Region: Same as your AKS cluster
    • Resource Group: Same as AKS/storage
  3. Click Review + create, then Create.
  4. After creation, note down:
    • Client ID

Step 5: Assign Permissions to the Managed Identity

Note

"Make sure you have owner access to your storage account."

  1. Go to Storage Accounts → Your Storage Account → Access Control (IAM).
  2. Click + Add → Add role assignment.
  3. Configure:
    • Role: Storage Blob Data Contributor
    • Assign access to: Managed identity
    • Select member: Choose the privacera-monitoring-identity managed identity
  4. Click Save.
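
The equivalent role assignment from the Azure CLI, as a hedged sketch (the scope is the storage account's resource ID):

Bash
# Grant the managed identity blob data access on the storage account
az role assignment create --assignee <AZURE_MANAGED_IDENTITY_CLIENT_ID> \
  --role "Storage Blob Data Contributor" \
  --scope $(az storage account show --name privaceramonitoringsa \
    --resource-group <RESOURCE_GROUP> --query id -o tsv)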

Step 6: Add Federated Credential to the Managed Identity

  1. Go to Azure Portal → Kubernetes Services → Your AKS Cluster → Settings → Authentication.
  2. Copy the OIDC Issuer URL (e.g., https://oidc.prod-aks.azure.com/...).

Then:

  1. Go to Azure Portal → Managed Identities → privacera-monitoring-identity → Federated credentials.
  2. Click + Add credential.
  3. Fill in the following:
    • Name: e.g., loki-federated
    • Issuer: Paste the OIDC Issuer URL
    • Subject: e.g., system:serviceaccount:privacera-monitoring:loki-distributed
    • Audience: api://AzureADTokenExchange
  4. Click Add.

Tip

  • The default service account name is loki-distributed and the default namespace is privacera-monitoring.
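
The federated credential can also be added from the Azure CLI; a hedged sketch assuming the default service account and namespace:

Bash
az identity federated-credential create --name loki-federated \
  --identity-name privacera-monitoring-identity \
  --resource-group <RESOURCE_GROUP> \
  --issuer <OIDC_ISSUER_URL> \
  --subject system:serviceaccount:privacera-monitoring:loki-distributed \
  --audiences api://AzureADTokenExchange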

Step 7: Configure Loki for Azure

  1. SSH into the Privacera Manager instance.
  2. Navigate to the config directory:

    Bash
    cd ~/privacera/privacera-manager/config/custom-vars/

  3. Create the custom values file:

    Bash
    vi loki_custom_values.yml

  4. Add the following content:
YAML
loki:
  podLabels:
    azure.workload.identity/use: "true"
  schemaConfig:
    configs:
    - from: "2020-09-07"
      store: boltdb-shipper
      object_store: azure
      schema: v11
      index:
        prefix: loki_index_
        period: 24h

  storageConfig:
    boltdb_shipper:
      shared_store: azure
      active_index_directory: /var/loki/index
      cache_location: /var/loki/cache
      cache_ttl: 168h
    azure:
      account_name: <STORAGE_ACCOUNT_NAME>
      container_name: <CONTAINER_NAME>
      use_federated_token: true
      client_id: <AZURE_MANAGED_IDENTITY_CLIENT_ID>

serviceAccount:
  annotations:
    azure.workload.identity/client-id: <AZURE_MANAGED_IDENTITY_CLIENT_ID>

Tip

  • Replace <STORAGE_ACCOUNT_NAME>, <CONTAINER_NAME>, and <AZURE_MANAGED_IDENTITY_CLIENT_ID> with your Azure storage account name, container name, and the client ID of the managed identity created above.

Step 8: Redeploy Monitoring Components

a. Go to the privacera-manager directory.

    Bash
    cd ~/privacera/privacera-manager

b. Run setup to generate the required files.

    Bash
    ./privacera-manager.sh setup

c. Install the monitoring components.

    Bash
    ./pm_with_helm.sh install-monitoring

d. Once done, run post-install.

    Bash
    ./privacera-manager.sh post-install

Configure GCP Storage for Loki

Step 1: Create a GCS Bucket

  1. Go to GCP Console → Cloud Storage.
  2. Click Create Bucket.
  3. Configure:
    • Name: Globally unique
    • Region: As required
    • Storage class: Standard
    • Access control: Uniform
  4. Click Create.
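
The same bucket can be created from the gcloud CLI; a hedged sketch:

Bash
gcloud storage buckets create gs://<GCP_BUCKET_NAME> \
  --location=<REGION> --uniform-bucket-level-access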

Step 2: Create a Service Account

  1. Go to IAM & Admin → Service Accounts.
  2. Click Create Service Account.
  3. Provide a name and (optionally) a description.
  4. Skip assigning roles for now.
  5. Click Done.
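
Or from the CLI, as a sketch (the display name is illustrative):

Bash
gcloud iam service-accounts create <GCP_SERVICE_ACCOUNT> \
  --display-name="Loki monitoring storage access"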

Step 3: Grant Bucket Access to the Service Account

  1. Go back to your bucket → Permissions tab.
  2. Click Grant Access.
  3. Add your service account’s email.
  4. Assign the following roles:
    • Storage Admin
    • Storage Object Admin
  5. Click Save.
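
A hedged CLI equivalent for the object-level grant (roles/storage.objectAdmin covers object reads and writes; roles/storage.admin additionally covers bucket administration):

Bash
gcloud storage buckets add-iam-policy-binding gs://<GCP_BUCKET_NAME> \
  --member="serviceAccount:<GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"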

Step 4: Configure Workload Identity

  1. Open the service account details.
  2. Under Permissions, click + Grant Access.
  3. In New principals, add:
Text Only
serviceAccount:<PROJECT_ID>.svc.id.goog[privacera-monitoring/loki-distributed]

Tip

  • Replace <PROJECT_ID> with your GCP project ID.
  • The default namespace for Privacera monitoring is privacera-monitoring and the default Kubernetes service account name is loki-distributed.
  4. Assign the role:
    Service Account Workload Identity User
  5. Save the configuration.
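
The same binding from the CLI, assuming the default namespace and service account name:

Bash
gcloud iam service-accounts add-iam-policy-binding \
  <GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:<PROJECT_ID>.svc.id.goog[privacera-monitoring/loki-distributed]"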

Step 5: Configure Loki for GCP

  1. SSH into the Privacera Manager instance.
  2. Navigate to the config directory:

    Bash
    cd ~/privacera/privacera-manager/config/custom-vars/

  3. Create the custom values file:

    Bash
    vi loki_custom_values.yml

  4. Add the configuration below:
YAML
loki:
  schemaConfig:
    configs:
    - from: "2020-09-07"
      store: boltdb-shipper
      object_store: gcs
      schema: v11
      index:
        prefix: index_
        period: 24h

  storageConfig:
    boltdb_shipper:
      active_index_directory: /var/loki/index
      cache_location: /var/loki/index_cache
      shared_store: gcs
    gcs:
      bucket_name: <GCP_BUCKET_NAME>

serviceAccount:
  annotations:
    iam.gke.io/gcp-service-account: <GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com

Tip

  • Replace <GCP_BUCKET_NAME>, <GCP_SERVICE_ACCOUNT>, and <PROJECT_ID> with your GCP bucket name, service account name, and project ID, respectively.

Step 6: Redeploy Monitoring Components

a. Go to the privacera-manager directory.

    Bash
    cd ~/privacera/privacera-manager

b. Run setup to generate the required files.

    Bash
    ./privacera-manager.sh setup

c. Install the monitoring components.

    Bash
    ./pm_with_helm.sh install-monitoring

d. Once done, run post-install.

    Bash
    ./privacera-manager.sh post-install
