AWS (EKS)
Prerequisites
Ensure the following are in place:
- An S3 bucket to store Tempo data.
- An IAM role with the necessary permissions and a trust relationship for the Tempo Kubernetes service account (Steps 2 and 3 below cover creating it).
Step 1: Ensure that an Identity Provider (IdP) is already created for your EKS cluster’s OIDC. If not, create a new Identity Provider before proceeding.
- Navigate to AWS EKS → Select your cluster.
- From the Overview tab, copy the OIDC (OpenID Connect) provider URL.
- Go to IAM → Identity Providers → Add provider.
- Select OpenID Connect.
- Paste the OIDC URL under Provider URL and click Get thumbprint.
- Set Audience to `sts.amazonaws.com`.
- Add optional tags and click Add provider. (The provider ID is needed later in the IAM role's trust relationship.)
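If you prefer the command line, the cluster's OIDC issuer URL can also be retrieved with the AWS CLI (a minimal sketch; `<CLUSTER_NAME>` and `<AWS_REGION>` are placeholders for your own values):

```bash
# Print the EKS cluster's OIDC issuer URL (assumes the AWS CLI is configured)
aws eks describe-cluster \
  --name <CLUSTER_NAME> \
  --region <AWS_REGION> \
  --query "cluster.identity.oidc.issuer" \
  --output text
```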
Step 2: Create an IAM Policy
Go to IAM → Policies, and create a new policy with the following JSON definition:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObjectTagging",
                "s3:PutObjectTagging"
            ],
            "Resource": [
                "arn:aws:s3:::<AWS_S3_BUCKET_NAME>",
                "arn:aws:s3:::<AWS_S3_BUCKET_NAME>/*"
            ]
        }
    ]
}
```
Tip
- Replace `<AWS_S3_BUCKET_NAME>` with your AWS S3 bucket name.
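If you manage IAM from the command line, the same policy can be created with the AWS CLI (a sketch; it assumes the JSON above is saved locally as `tempo-s3-policy.json`, and the policy name is only illustrative):

```bash
# Create the policy from the JSON document shown above
aws iam create-policy \
  --policy-name tempo-s3-access \
  --policy-document file://tempo-s3-policy.json
```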
Step 3: Create an IAM Role and Trust Relationship
- Navigate to IAM → Roles → Create role.
- Select Web identity as the trusted entity type.
- Choose the OIDC provider created earlier, set Audience to `sts.amazonaws.com`, and proceed.
- Attach the custom policy from the previous step.
- Name and create the role.
Once created, modify the trust relationship to limit role assumption to specific Kubernetes service accounts:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "oidc.eks.<AWS_REGION>.amazonaws.com/id/<OIDC_ID>:sub": [
                        "system:serviceaccount:privacera-monitoring:tempo-distributed"
                    ]
                }
            }
        }
    ]
}
```
Tip
- Replace `<AWS_ACCOUNT_ID>`, `<AWS_REGION>` & `<OIDC_ID>` with your AWS account ID, AWS region & OIDC provider ID respectively.
- The action `sts:AssumeRoleWithWebIdentity` allows a workload (such as a Kubernetes service account) to assume an IAM role using a web identity token (e.g. OIDC).
- The default service account name is `tempo-distributed` and the namespace is `privacera-monitoring`.
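The trust relationship can also be applied with the AWS CLI instead of the console (a sketch; it assumes the trust policy above is saved as `tempo-trust-policy.json` and that the role is named `tempo-s3-role`, both of which are illustrative):

```bash
# Replace the role's trust relationship with the document shown above
aws iam update-assume-role-policy \
  --role-name tempo-s3-role \
  --policy-document file://tempo-trust-policy.json
```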
Step 4: Configure Tempo Storage in Privacera Manager
- SSH into the instance where Privacera Manager is installed.
- Navigate to the configuration directory:
```bash
cd ~/privacera/privacera-manager/config/custom-vars/
```
- Create the Tempo custom values file:
```bash
vi tempo_distributed_custom_values.yml
```
- Add the following configuration:
```yaml
storage:
  trace:
    block:
      version: null
      dedicated_columns: []
    backend: s3
    s3:
      bucket: <AWS_S3_BUCKET_NAME>
      prefix: tempo-data
      endpoint: s3-accesspoint.<AWS_REGION>.amazonaws.com
      region: <AWS_REGION>
    pool:
      max_workers: 400
      queue_depth: 20000
  admin:
    backend: filesystem
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: <AWS_IAM_ROLE_ARN>
```
Tip
- Replace `<AWS_S3_BUCKET_NAME>`, `<AWS_REGION>` & `<AWS_IAM_ROLE_ARN>` with your S3 bucket name, AWS region & the ARN of the IAM role created above.
- The prefix `tempo-data` is the directory name that will be created inside the AWS S3 bucket.
Step 5: Redeploy Monitoring Components
a. Go to the `privacera-manager` directory.
```bash
cd ~/privacera/privacera-manager
```
b. Run `setup` to generate the required files.
```bash
./privacera-manager.sh setup
```
c. Install the monitoring components.
```bash
./pm_with_helm.sh install-monitoring
```
d. Run `install` to update the datasources in Grafana.
```bash
./pm_with_helm.sh install
```
e. Once done, run `post-install`.
```bash
./privacera-manager.sh post-install
```
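Once the deployment finishes, you can optionally confirm that the role annotation was applied and that Tempo is running (a sketch using the default namespace and service account names mentioned above):

```bash
# Check that the IAM role ARN annotation is present on the Tempo service account
kubectl get serviceaccount tempo-distributed -n privacera-monitoring -o yaml

# Confirm the Tempo pods are up
kubectl get pods -n privacera-monitoring | grep tempo
```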
Azure (AKS)
Step 1: Enable Workload Identity on AKS
- Go to Azure Portal → Kubernetes Services → Your AKS Cluster.
- In the left sidebar, go to Settings → Authentication.
- Ensure the following are enabled:
- OIDC Issuer: Enabled
- Workload Identity: Enabled
- Click Save if any changes were made.
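If you prefer the Azure CLI, the same settings can be inspected or enabled as shown below (a sketch; `<RESOURCE_GROUP>` and `<CLUSTER_NAME>` are placeholders):

```bash
# Print the cluster's OIDC issuer URL (empty if the issuer is not enabled)
az aks show --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME> \
  --query "oidcIssuerProfile.issuerUrl" --output tsv

# Enable the OIDC issuer and workload identity if they are not already enabled
az aks update --resource-group <RESOURCE_GROUP> --name <CLUSTER_NAME> \
  --enable-oidc-issuer --enable-workload-identity
```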
Step 2: Create a Storage Account
- Go to Azure Portal → Storage Accounts → Create.
- Configure:
- Name: e.g., `privaceramonitoringsa` (must be globally unique)
- Region: Same as your AKS cluster
- Resource Group: Same as AKS
- Performance: Standard
- Replication: LRS
- Click Review + create, then Create.
Step 3: Create a Blob Container
- In your created storage account, go to Data storage → Containers.
- Click + Container and configure:
- Name: e.g., `privacera-monitoring-container`
- Public access level: Private (no anonymous access)
- Click Create.
Step 4: Create a User-Assigned Managed Identity
- Go to Azure Portal → Managed Identities → Create.
- Configure:
- Name: e.g., `privacera-monitoring-identity`
- Region: Same as your AKS cluster
- Resource Group: Same as AKS/storage
- Click Review + create, then Create.
- After creation, note down the Client ID (it is used later in the Tempo configuration).
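The identity can also be created with the Azure CLI (a sketch; `<RESOURCE_GROUP>` is a placeholder and the identity name follows the example above):

```bash
# Create the user-assigned managed identity and print its client ID
az identity create \
  --name privacera-monitoring-identity \
  --resource-group <RESOURCE_GROUP> \
  --query "clientId" --output tsv
```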
Step 5: Assign Permissions to the Managed Identity
Note
"Make sure you have owner access to your storage account."
- Go to Storage Accounts → Your Storage Account → Access Control (IAM).
- Click + Add → Add role assignment.
- Configure:
- Role: Storage Blob Data Contributor
- Assign access to: Managed identity
- Select member: Choose the `privacera-monitoring-identity` managed identity
- Click Save.
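The role assignment can also be made with the Azure CLI (a sketch; `<IDENTITY_PRINCIPAL_ID>`, `<SUBSCRIPTION_ID>`, `<RESOURCE_GROUP>` and the storage account name are placeholders):

```bash
# Grant the managed identity Storage Blob Data Contributor on the storage account
az role assignment create \
  --assignee <IDENTITY_PRINCIPAL_ID> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/privaceramonitoringsa"
```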
Step 6: Add Federated Credential to the Managed Identity
- Go to Azure Portal → Kubernetes Services → Your AKS Cluster → Settings → Authentication.
- Copy the OIDC Issuer URL (e.g., `https://oidc.prod-aks.azure.com/...`).
Then:
- Go to Azure Portal → Managed Identities → privacera-monitoring-identity → Federated credentials.
- Click + Add credential.
- Fill in the following:
- Name: e.g., `tempo-federated`
- Issuer: Paste the OIDC Issuer URL
- Subject: e.g., `system:serviceaccount:privacera-monitoring:tempo-distributed`
- Audience: `api://AzureADTokenExchange`
- Click Add.
Tip
- The default service account name is `tempo-distributed` and the namespace is `privacera-monitoring`.
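The federated credential can also be added with the Azure CLI (a sketch; `<RESOURCE_GROUP>` and `<OIDC_ISSUER_URL>` are placeholders, and the other names follow the examples above):

```bash
# Bind the Kubernetes service account to the managed identity via a federated credential
az identity federated-credential create \
  --name tempo-federated \
  --identity-name privacera-monitoring-identity \
  --resource-group <RESOURCE_GROUP> \
  --issuer <OIDC_ISSUER_URL> \
  --subject "system:serviceaccount:privacera-monitoring:tempo-distributed" \
  --audiences "api://AzureADTokenExchange"
```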
Step 7: Configure Tempo Storage in Privacera Manager
- SSH into the Privacera Manager instance.
- Navigate to the config directory:
```bash
cd ~/privacera/privacera-manager/config/custom-vars/
```
- Create the custom values file:
```bash
vi tempo_distributed_custom_values.yml
```
- Add the following content:
```yaml
storage:
  trace:
    backend: azure
    azure:
      container_name: <CONTAINER_NAME>
      storage_account_name: <STORAGE_ACCOUNT_NAME>
      use_federated_token: true
  admin:
    backend: filesystem
serviceAccount:
  annotations:
    azure.workload.identity/client-id: <AZURE_MANAGED_IDENTITY_CLIENT_ID>
tempo:
  podLabels:
    azure.workload.identity/use: "true"
```
Tip
- Replace `<CONTAINER_NAME>`, `<STORAGE_ACCOUNT_NAME>` & `<AZURE_MANAGED_IDENTITY_CLIENT_ID>` with your Azure container name, storage account name & the Client ID of the managed identity created above.
Step 8: Redeploy Monitoring Components
a. Go to the `privacera-manager` directory.
```bash
cd ~/privacera/privacera-manager
```
b. Run `setup` to generate the required files.
```bash
./privacera-manager.sh setup
```
c. Install the monitoring components.
```bash
./pm_with_helm.sh install-monitoring
```
d. Run `install` to update the datasources in Grafana.
```bash
./pm_with_helm.sh install
```
e. Once done, run `post-install`.
```bash
./privacera-manager.sh post-install
```
GCP (GKE)
Step 1: Create a GCS Bucket
- Go to GCP Console → Cloud Storage.
- Click Create Bucket.
- Configure:
- Name: Globally unique
- Region: As required
- Storage class: Standard
- Access control: Uniform
- Click Create.
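The bucket can also be created with the gcloud CLI (a sketch; `<GCP_BUCKET_NAME>` and `<REGION>` are placeholders):

```bash
# Create the GCS bucket with uniform bucket-level access
gcloud storage buckets create gs://<GCP_BUCKET_NAME> \
  --location <REGION> \
  --uniform-bucket-level-access
```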
Step 2: Create a Service Account
- Go to IAM & Admin → Service Accounts.
- Click Create Service Account.
- Provide a name and (optionally) a description.
- Skip assigning roles for now.
- Click Done.
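Equivalently, the service account can be created with gcloud (a sketch; `<GCP_SERVICE_ACCOUNT>` is a placeholder for the account name):

```bash
# Create the service account used by Tempo for GCS access
gcloud iam service-accounts create <GCP_SERVICE_ACCOUNT> \
  --display-name "Tempo monitoring storage access"
```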
Step 3: Grant Bucket Access to the Service Account
- Go back to your bucket → Permissions tab.
- Click Grant Access.
- Add your service account’s email.
- Assign the following roles:
- Storage Admin
- Storage Object Admin
- Click Save.
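If you prefer the CLI, an equivalent grant on the bucket looks like the sketch below (shown here with `roles/storage.objectAdmin`; add `roles/storage.admin` the same way if you follow the console steps above exactly):

```bash
# Allow the GCP service account to manage objects in the bucket
gcloud storage buckets add-iam-policy-binding gs://<GCP_BUCKET_NAME> \
  --member "serviceAccount:<GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com" \
  --role "roles/storage.objectAdmin"
```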
Step 4: Bind the Kubernetes Service Account (Workload Identity)
- Open the service account details.
- Under Permissions, click + Grant Access.
- In New principals, add:
```text
serviceAccount:<PROJECT_ID>.svc.id.goog[privacera-monitoring/tempo-distributed]
```
Tip
- Replace `<PROJECT_ID>` with your GCP project ID.
- The default Privacera monitoring namespace is `privacera-monitoring` & the default Kubernetes service account name is `tempo-distributed`.
- Assign the role: Workload Identity User (`roles/iam.workloadIdentityUser`).
- Save the configuration.
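The same Workload Identity binding can be created with gcloud (a sketch using the default namespace and service account names from the tip above):

```bash
# Allow the Kubernetes service account to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
  <GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com \
  --role "roles/iam.workloadIdentityUser" \
  --member "serviceAccount:<PROJECT_ID>.svc.id.goog[privacera-monitoring/tempo-distributed]"
```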
Step 5: Configure Tempo Storage in Privacera Manager
- SSH into the Privacera Manager instance.
- Navigate to the config directory:
```bash
cd ~/privacera/privacera-manager/config/custom-vars/
```
- Create the custom values file:
```bash
vi tempo_distributed_custom_values.yml
```
- Add the configuration below:
```yaml
storage:
  trace:
    backend: gcs
    gcs:
      bucket_name: <GCP_BUCKET_NAME>
      prefix: "tempo-data"
    pool:
      max_workers: 400
      queue_depth: 20000
  admin:
    backend: filesystem
serviceAccount:
  annotations:
    iam.gke.io/gcp-service-account: "<GCP_SERVICE_ACCOUNT>@<PROJECT_ID>.iam.gserviceaccount.com"
```
Tip
- Replace `<GCP_BUCKET_NAME>`, `<GCP_SERVICE_ACCOUNT>` & `<PROJECT_ID>` with your GCP bucket name, service account name and project ID respectively.
- The prefix `tempo-data` is the directory name that will be created inside the GCS bucket.
Step 6: Redeploy Monitoring Components
a. Go to the `privacera-manager` directory.
```bash
cd ~/privacera/privacera-manager
```
b. Run `setup` to generate the required files.
```bash
./privacera-manager.sh setup
```
c. Install the monitoring components.
```bash
./pm_with_helm.sh install-monitoring
```
d. Run `install` to update the datasources in Grafana.
```bash
./pm_with_helm.sh install
```
e. Once done, run `post-install`.
```bash
./privacera-manager.sh post-install
```