1 - Prerequisites for Installing the Policy Workbench
Prerequisites to install Policy Workbench.
Ensure that the jump box can connect to the required repositories. If you are not already authenticated, log in to the required repository.
- For connecting and deploying from the Protegrity Container Registry (PCR), use the following command and the credentials obtained from the My.Protegrity portal during account creation:
helm registry login registry.protegrity.com:9443
- For connecting and deploying to the local repository, use your local credentials and local repository endpoint as required.
Ensure that the PPC cluster is installed and accessible before installing Policy Workbench on PPC.
For more information about installing PPC, refer to the section Installing PPC.
Ensure that the following tools are available on the jump box on which Policy Workbench is installed.
| Tool | Version | Description |
|---|---|---|
| OpenTofu | >=1.10.0 | Used to run the installer. |
| AWS CLI | Any version | Must be configured with credentials that have EKS and IAM permissions. The default region must also be set using either the AWS_DEFAULT_REGION or AWS_REGION environment variable, or the ~/.aws/config configuration file. |
| kubectl | Any version | Required for validating the deployment. It must be configured for the target PPC cluster where Policy Workbench is deployed. |
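As a quick pre-flight check, the presence of each required tool can be verified on the jump box before starting. This is a sketch; the version requirements (for example, OpenTofu >= 1.10.0) still need to be checked manually against the table above.

```shell
# Pre-flight sketch: confirm each required tool from the table is on the PATH.
# Version checks are left to the reader; this only detects missing binaries.
for tool in tofu aws kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "OK: $tool found"
  else
    echo "MISSING: $tool is not on the PATH" >&2
  fi
done
```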
IAM Permissions
The OpenTofu script uses the following IAM permissions; ensure that the configured AWS credentials include them.
| Permission | Purpose |
|---|---|
| iam:CreatePolicy / iam:DeletePolicy / iam:GetPolicy | Create and manage the AWS KMS access policy. |
| iam:CreateRole / iam:DeleteRole / iam:GetRole / iam:UpdateAssumeRolePolicy | Create and manage the AWS KMS pod identity role. |
| iam:AttachRolePolicy / iam:DetachRolePolicy | Attach the AWS KMS policy to the role. |
EKS Permissions
The OpenTofu script uses the following EKS permissions; ensure that the configured AWS credentials include them.
| Permission | Purpose |
|---|---|
| eks:DescribeCluster | Read the cluster endpoint and the certificate authority data for the Helm provider in OpenTofu. The Helm provider requires this information to connect to the PPC. |
| eks:DescribeAddon | Verify that the eks-pod-identity-agent add-on is installed. |
| eks:CreatePodIdentityAssociation / eks:DeletePodIdentityAssociation / eks:DescribePodIdentityAssociation | Associate the AWS KMS role with the Policy Workbench service account. |
2 - Installing Policy Workbench
Steps to install Policy Workbench.
Before installing Policy Workbench, ensure that the prerequisites are met. For more information about the prerequisites, refer to the section Prerequisites for Installing the Policy Workbench.
To install Policy Workbench, first provision the AWS resources using the policy-workbench OpenTofu module and then deploy the Policy Workbench using Helm. The policy-workbench OpenTofu module is published to the Protegrity Container Registry and must be consumed from a root module. A root module is the working directory for executing the OpenTofu commands.
For more information about OpenTofu modules and the root module, refer to the section Modules in the OpenTofu documentation.
The installation procedure depends on one of the following scenarios:
- A root module is not available.
- A root module is available.
Installing Policy Workbench when a root module is not available
Install the Policy Workbench using the following commands:
- Run the following command to create the deployment directory.
# must install from an empty directory
mkdir policy-workbench && cd policy-workbench
- Create a root module with a single main.tf file.
terraform {
  required_version = ">= 1.10.0"
  required_providers {
    aws = {
      source  = "registry.opentofu.org/hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

module "policy_workbench" {
  source       = "oci://<Container_Registry_Path>/policy-workbench/<major.minor>/opentofu/modules/policy-workbench?tag=<version>"
  cluster_name = var.cluster_name
}

variable "cluster_name" {
  type        = string
  description = "EKS cluster name."
  nullable    = false

  validation {
    condition     = length(trimspace(var.cluster_name)) > 0
    error_message = "cluster_name must be provided and cannot be empty."
  }
}
In the main.tf file, specify the values of the following variables.
| Variable Name | Description | Value |
|---|---|---|
| <Container_Registry_Path> | Location of the Protegrity Container Registry or the local repository where the policy-workbench OpenTofu module is published. | registry.protegrity.com:9443 if the Protegrity Container Registry is used; the local repository endpoint if a local repository is used. |
| <major.minor> | Major and minor version of the Protegrity Policy Manager, as specified in the product part number. Obtain the product part number from the Policy Manager Readme. | 1.11 |
| <version> | Tag version of the Protegrity Policy Manager. | 1.11.0 |
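As a sanity check, with the Protegrity Container Registry values from the table substituted for the placeholders, the module source resolves to the following string (shown via echo; the versions are the documented examples):

```shell
# Example module source with the table's values substituted in
# (1.11 and 1.11.0 are the documented example versions)
echo "oci://registry.protegrity.com:9443/policy-workbench/1.11/opentofu/modules/policy-workbench?tag=1.11.0"
```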
Perform the following steps to configure the credentials to install the Policy Workbench from the Protegrity Container Registry.
a. Run the following command to create the configuration directory.
mkdir -p ~/.config/containers
b. Obtain the username and access token from the My.Protegrity portal.
For more information about obtaining the credentials from the My.Protegrity portal, refer to the section Configuring Authentication for Protegrity AI Team Edition.
c. Generate a base64-encoded string with padding for the username:accesstoken pair obtained from the My.Protegrity portal.
Ensure that you specify the username and access token within single quotes when generating the base64-encoded value. For example, 'username:accesstoken'.
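The encoded value can be generated with the base64 utility, for example (the credentials shown are placeholders):

```shell
# Generate the padded base64 value for 'username:accesstoken'.
# printf avoids including a trailing newline in the encoded output.
printf '%s' 'username:accesstoken' | base64
```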
d. Create a file named ~/.config/containers/auth.json with the following content.
{
  "auths": {
    "registry.protegrity.com:9443": {
      "auth": "<base64 generated string from step-3c>"
    }
  }
}
Navigate to the deployment directory that you created in step 1.
Run the following commands to plan and install the Policy Workbench OpenTofu module.
# init, plan, and install
tofu init
tofu plan -var="cluster_name=<PPC-cluster-name>"
tofu apply -var="cluster_name=<PPC-cluster-name>"
In the cluster_name field, specify the name of the PPC cluster that you have specified in step 1 while deploying the PPC.
For more information about deploying the PPC, refer to the section Deploying PPC.
OpenTofu prints the plan and prompts for confirmation. Enter yes to proceed. To skip the prompt, add the -auto-approve option to the commands.
- Run the following command to install the Policy Workbench using Helm.
helm upgrade --install policy-workbench \
oci://<Container_Registry_Path>/policy-workbench/<major.minor>/helm/policy-workbench \
--set karpenterResources.nodeClass.amiId="<ami-id>" \
--version <version> \
--namespace policy-workbench \
--create-namespace
In the command, specify the values of the following variables.
| Variable Name | Description | Value |
|---|---|---|
| <Container_Registry_Path> | Location of the Protegrity Container Registry or the local repository where the policy-workbench Helm chart is published. | registry.protegrity.com:9443 if the Protegrity Container Registry is used; the local repository endpoint if a local repository is used. |
| <major.minor> | Major and minor version of the Protegrity Policy Manager, as specified in the product part number. Obtain the product part number from the Policy Manager Readme. | 1.11 |
| <version> | Tag version of the Protegrity Policy Manager. | 1.11.0 |
Important: You need to pass the <ami-id> value in the command only if you are deploying Policy Workbench in a region other than us-east-1.
Option A (Recommended): Run the following AWS CLI command to retrieve the AMI ID dynamically.
```
aws ssm get-parameter \
--name /aws/service/bottlerocket/aws-k8s-1.34/x86_64/latest/image_id \
--region <region> \
--query "Parameter.Value" \
--output text
```
Option B: Alternatively, refer to the example AMI IDs in the following table.
| Region | AMI ID |
|---|---|
| ap-south-1 | ami-07959c05dcdb79a72 |
| eu-north-1 | ami-0268b0bfff0f25d31 |
| eu-west-3 | ami-0ea9454aef60045a2 |
| eu-west-2 | ami-0d5eee57a6a1398a3 |
| eu-west-1 | ami-00a8d14029b60a028 |
| ap-northeast-3 | ami-0e495c3ffd416c65e |
| ap-northeast-2 | ami-0fc18a24aec719c1c |
| ap-northeast-1 | ami-00ec85b83bf713aac |
| ca-central-1 | ami-03891f0d8b41eb296 |
| sa-east-1 | ami-0a30f044a5781b4e0 |
| ap-southeast-1 | ami-0ae51324bf2e89725 |
| ap-southeast-2 | ami-0ef7e8095b163dc42 |
| eu-central-1 | ami-00e36131a0343c374 |
| us-east-2 | ami-0e486911b2d0a5f7e |
| us-west-1 | ami-01183e1261529749e |
| us-west-2 | ami-04f850c412625dfe6 |
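Option A can be combined with the Helm step by capturing the SSM lookup in a variable and reusing it in the --set flag. The following is a sketch; eu-west-1 is an illustrative region, and the fallback placeholder is only there so the plumbing can be exercised without AWS credentials.

```shell
# Sketch: capture the AMI ID from SSM, then reuse it in the Helm --set flag.
# Replace eu-west-1 with your region. Falls back to a placeholder when the
# AWS CLI or credentials are unavailable.
AMI_ID=$(aws ssm get-parameter \
  --name /aws/service/bottlerocket/aws-k8s-1.34/x86_64/latest/image_id \
  --region eu-west-1 \
  --query "Parameter.Value" \
  --output text 2>/dev/null) || AMI_ID="ami-placeholder"

echo "--set karpenterResources.nodeClass.amiId=$AMI_ID"
```

The printed flag can then be pasted into the helm upgrade command in place of the literal "<ami-id>" value.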
- Run the following command to view the pods created in the
policy-workbench namespace.
kubectl get pods -n policy-workbench
The following output appears.
NAME READY STATUS RESTARTS AGE
bootstrap-bffb4b5d9-v6ww4 1/1 Running 0 13m
cert-7b88dcd84-zx7cv 1/1 Running 0 13m
devops-75755d87d4-qw9n6 1/1 Running 0 13m
hubcontroller-0 1/1 Running 0 13m
kmgw-0 1/1 Running 0 13m
mbs-6b7dc765dd-brrfk 1/1 Running 0 13m
repository-0 1/1 Running 0 13m
rpproxy-79fc498d8-qp4fz 1/1 Running 0 13m
rpproxy-79fc498d8-s9k5p 1/1 Running 0 13m
rpproxy-79fc498d8-tbdtb 1/1 Running 0 13m
rps-8d79b7d98-svhdw 1/1 Running 0 13m
Installing Policy Workbench when a root module is available
Install the Policy Workbench using the following commands:
- Add the policy-workbench OpenTofu module by adding the following code block to an existing root module.
module "policy_workbench" {
  source       = "oci://<Container_Registry_Path>/policy-workbench/<major.minor>/opentofu/modules/policy-workbench?tag=<version>"
  cluster_name = "<PPC-cluster-name>"
}

variable "cluster_name" {
  type        = string
  description = "EKS cluster name."
  nullable    = false

  validation {
    condition     = length(trimspace(var.cluster_name)) > 0
    error_message = "cluster_name must be provided and cannot be empty."
  }
}
For more information about adding a module to an existing root module, refer to the section Module Blocks in the OpenTofu documentation.
In the root module, specify the values of the following variables.
| Variable Name | Description | Value |
|---|---|---|
| <Container_Registry_Path> | Location of the Protegrity Container Registry or the local repository where the policy-workbench OpenTofu module is published. | registry.protegrity.com:9443 if the Protegrity Container Registry is used; the local repository endpoint if a local repository is used. |
| <major.minor> | Major and minor version of the Protegrity Policy Manager, as specified in the product part number. Obtain the product part number from the Policy Manager Readme. | 1.11 |
| <version> | Tag version of the Protegrity Policy Manager. | 1.11.0 |
In the cluster_name field, specify the name of the PPC cluster that you have specified in step 1 while deploying the PPC.
For more information about deploying the PPC, refer to the section Deploying PPC.
- If the root module does not include the hashicorp/aws provider version >= 5.0, add the following code block to the terraform {} block. Otherwise, proceed to the next step.
required_providers {
  aws = {
    source  = "registry.opentofu.org/hashicorp/aws"
    version = ">= 5.0"
  }
}
For more information about including the hashicorp/aws provider in the root module, refer to the OpenTofu Registry documentation.
Perform the following steps to configure the credentials to install the Policy Workbench from the Protegrity Container Registry.
a. Run the following command to create the configuration directory.
mkdir -p ~/.config/containers
b. Obtain the username and access token from the My.Protegrity portal.
For more information about obtaining the credentials from the My.Protegrity portal, refer to the section Configuring Authentication for Protegrity AI Team Edition.
c. Generate a base64-encoded string with padding for the username:accesstoken pair obtained from the My.Protegrity portal.
Ensure that you specify the username and access token within single quotes when generating the base64-encoded value. For example, 'username:accesstoken'.
d. Create a file named ~/.config/containers/auth.json with the following content.
{
  "auths": {
    "registry.protegrity.com:9443": {
      "auth": "<base64 generated string from step-3c>"
    }
  }
}
Navigate to the directory containing the root module.
Run the following commands to plan and install the Policy Workbench OpenTofu module.
# init, plan, and install
tofu init
tofu plan -var="cluster_name=<PPC-cluster-name>"
tofu apply -var="cluster_name=<PPC-cluster-name>"
OpenTofu prints the plan and prompts for confirmation. Enter yes to proceed. To skip the prompt, add the -auto-approve option to the commands.
In the cluster_name field, specify the name of the PPC cluster that you have specified in step 1 while deploying the PPC.
For more information about deploying the PPC, refer to the section Deploying PPC.
- Run the following command to install the Policy Workbench using Helm.
helm upgrade --install policy-workbench \
oci://<Container_Registry_Path>/policy-workbench/<major.minor>/helm/policy-workbench \
--set karpenterResources.nodeClass.amiId="<ami-id>" \
--version <version> \
--namespace policy-workbench \
--create-namespace
In the command, specify the values of the following variables.
| Variable Name | Description | Value |
|---|---|---|
| <Container_Registry_Path> | Location of the Protegrity Container Registry or the local repository where the policy-workbench Helm chart is published. | registry.protegrity.com:9443 if the Protegrity Container Registry is used; the local repository endpoint if a local repository is used. |
| <major.minor> | Major and minor version of the Protegrity Policy Manager, as specified in the product part number. Obtain the product part number from the Policy Manager Readme. | 1.11 |
| <version> | Tag version of the Protegrity Policy Manager. | 1.11.0 |
Important: You need to pass the <ami-id> value in the command only if you are deploying Policy Workbench in a region other than us-east-1.
Option A (Recommended): Run the following AWS CLI command to retrieve the AMI ID dynamically.
```
aws ssm get-parameter \
--name /aws/service/bottlerocket/aws-k8s-1.34/x86_64/latest/image_id \
--region <region> \
--query "Parameter.Value" \
--output text
```
Option B: Alternatively, refer to the example AMI IDs in the following table.
| Region | AMI ID |
|---|---|
| ap-south-1 | ami-07959c05dcdb79a72 |
| eu-north-1 | ami-0268b0bfff0f25d31 |
| eu-west-3 | ami-0ea9454aef60045a2 |
| eu-west-2 | ami-0d5eee57a6a1398a3 |
| eu-west-1 | ami-00a8d14029b60a028 |
| ap-northeast-3 | ami-0e495c3ffd416c65e |
| ap-northeast-2 | ami-0fc18a24aec719c1c |
| ap-northeast-1 | ami-00ec85b83bf713aac |
| ca-central-1 | ami-03891f0d8b41eb296 |
| sa-east-1 | ami-0a30f044a5781b4e0 |
| ap-southeast-1 | ami-0ae51324bf2e89725 |
| ap-southeast-2 | ami-0ef7e8095b163dc42 |
| eu-central-1 | ami-00e36131a0343c374 |
| us-east-2 | ami-0e486911b2d0a5f7e |
| us-west-1 | ami-01183e1261529749e |
| us-west-2 | ami-04f850c412625dfe6 |
- Run the following command to view the pods created in the
policy-workbench namespace.
kubectl get pods -n policy-workbench
The following output appears.
NAME READY STATUS RESTARTS AGE
bootstrap-bffb4b5d9-v6ww4 1/1 Running 0 13m
cert-7b88dcd84-zx7cv 1/1 Running 0 13m
devops-75755d87d4-qw9n6 1/1 Running 0 13m
hubcontroller-0 1/1 Running 0 13m
kmgw-0 1/1 Running 0 13m
mbs-6b7dc765dd-brrfk 1/1 Running 0 13m
repository-0 1/1 Running 0 13m
rpproxy-79fc498d8-qp4fz 1/1 Running 0 13m
rpproxy-79fc498d8-s9k5p 1/1 Running 0 13m
rpproxy-79fc498d8-tbdtb 1/1 Running 0 13m
rps-8d79b7d98-svhdw 1/1 Running 0 13m
Validating the deployment
Note: Before validating the deployment of the Policy Workbench, ensure that the kubectl context is set to the target PPC cluster. Run kubectl config current-context to verify the current context. Run kubectl config use-context <context-name> to switch the context.
After installation, validate the Policy Workbench deployment using the following steps. The desired outcome of these steps is to get a [] response from the datastores API call using a dedicated workbench user.
- Run the following command to retrieve the gateway host details.
export GW_HOST="$(kubectl get gateway pty-main -n api-gateway -o jsonpath='{.status.addresses[0].value}')"
- Run the following command to generate the JWT token.
TOKEN=$(curl -k -s "https://$GW_HOST/api/v1/auth/login/token" \
-X POST \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d "loginname=admin" \
-d "password=Admin123!" \
-D - -o /dev/null 2>&1 \
| grep -i 'pty_access_jwt_token:' \
| sed 's/pty_access_jwt_token: //' \
| tr -d '\r') && echo "${TOKEN:0:10}"
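The grep/sed/tr pipeline in the command above extracts the token from the response headers. The same extraction can be seen in isolation against a canned header block (the values below are illustrative samples, not a real token):

```shell
# Demonstrate the header-extraction pipeline on a sample response;
# the token value here is made up for illustration
headers='HTTP/1.1 200 OK
pty_access_jwt_token: eyJhbGciOi.sample.token
Content-Type: application/json'

TOKEN=$(printf '%s\n' "$headers" \
  | grep -i 'pty_access_jwt_token:' \
  | sed 's/pty_access_jwt_token: //' \
  | tr -d '\r')
echo "$TOKEN"
```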
- Create a workbench user using the following command. Due to separation of duties, the datastores API requires a user with workbench roles.
curl -sk -X POST "https://$GW_HOST/pty/v1/auth/users" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"username": "workbench",
"password": "Admin123!",
"roles": [
"workbench_administrator"
]
}'
The following output appears.
{"user_id":"397beecc-87bb-404e-85bb-f8a6d83984d6","username":"workbench"}
The command in this step uses the JWT token generated in step 2.
- Ensure that the user with the workbench_administrator role has the following permissions:
- workbench_management_policy_write
- workbench_deployment_immutablepackage_export
- workbench_deployment_certificate_export
- cli_access
- can_create_token
To ensure the required permissions, run the following command:
curl -sk -X PUT "https://$GW_HOST/pty/v1/auth/roles" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "workbench_administrator",
"permissions": [
"workbench_management_policy_write",
"workbench_deployment_immutablepackage_export",
"workbench_deployment_certificate_export",
"cli_access",
"can_create_token"
]
}'
The following output appears.
{"role_name":"workbench_administrator","status":"updated"}
For more information about the workbench_administrator permissions, refer to the section Workbench Roles and Permissions.
For more information about the cli_access and can_create_token permissions, refer to the section Roles and Permissions.
- Run the following command to get a token for the workbench user.
export TOKEN=$(curl -k -s https://$GW_HOST/pty/v1/auth/login/token \
-X POST \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'loginname=workbench' \
-d 'password=Admin123!' \
-D - -o /dev/null 2>&1 | grep -i 'pty_access_jwt_token:' | sed 's/pty_access_jwt_token: //' | tr -d '\r')
- Run the Policy Management REST API to get datastores. Use the JWT token generated for the workbench user in the previous step.
curl -k -v https://$GW_HOST/pty/v2/pim/datastores -H "Authorization: Bearer $TOKEN"
The expected output is []. This indicates that the Policy Workbench is initialized but the datastore is not yet created.
4 - Backing up the Policy Workbench
Back up the Policy Workbench.
By default, after Policy Workbench is installed, its data is backed up daily using a scheduled backup. The backed-up data includes the Kubernetes object state and the persistent volume data, and is automatically stored in the encrypted AWS S3 bucket that you created when you deployed PPC.
For more information about the AWS S3 bucket, refer to the section Creating AWS KMS Key and S3 Bucket.
You can also choose to manually back up the data to the AWS S3 bucket using Velero.
Important: Before you manually back up the data, ensure that Velero CLI version 1.17 or later is installed.
To manually back up the data:
- Run the following command on the jump box.
velero backup create --from-schedule workbench-backup-schedule -n <Namespace where data is backed up>
For example:
velero backup create --from-schedule workbench-backup-schedule -n pty-backup-recovery
The following output appears.
INFO[0001] No Schedule.template.metadata.labels set - using Schedule.labels for backup object backup=pty-backup-recovery/workbench-backup-schedule-20260331094735 labels="map[app.kubernetes.io/managed-by:Helm deployment:policy-workbench]"
Creating backup from schedule, all other filters are ignored.
Backup request "workbench-backup-schedule-20260331094735" submitted successfully.
Run `velero backup describe workbench-backup-schedule-20260331094735` or `velero backup logs workbench-backup-schedule-20260331094735` for more details.
For more information about the velero backup command, refer to the section Backup Reference in the Velero documentation.
- Run the following commands to monitor the backup status.
- Run the following command to retrieve the list of existing backups.
velero backup get -n pty-backup-recovery
The following output appears.
NAME STATUS ERRORS WARNINGS CREATED EXPIRES STORAGE LOCATION SELECTOR
authnz-postgresql-schedule-backup-20260331093017 Completed 0 0 2026-03-31 09:30:18 +0000 UTC 59d default app.kubernetes.io/persistence=enabled
workbench-backup-schedule-20260331094735 WaitingForPluginOperations 0 0 2026-03-31 09:47:38 +0000 UTC 59d default <none>
workbench-backup-schedule-20260331094704 WaitingForPluginOperations 0 0 2026-03-31 09:47:04 +0000 UTC 59d
- Run the following command to obtain details of a specific backup.
velero backup describe <backup-name> -n pty-backup-recovery
The following code block shows a snippet of the output.
Name: workbench-backup-schedule-20260331094735
Namespace: pty-backup-recovery
Labels: app.kubernetes.io/managed-by=Helm
deployment=policy-workbench
velero.io/schedule-name=workbench-backup-schedule
velero.io/storage-location=default
Annotations: meta.helm.sh/release-name=policy-workbench
meta.helm.sh/release-namespace=policy-workbench
velero.io/resource-timeout=10m0s
velero.io/source-cluster-k8s-gitversion=v1.35.2-eks-f69f56f
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=35
Phase: WaitingForPluginOperations
- Run the following command to obtain the log details for a specific backup.
velero backup logs <backup-name> -n pty-backup-recovery
The following code block shows a snippet of the output.
time="2026-03-31T09:47:38Z" level=info msg="Setting up backup temp file" backup=pty-backup-recovery/workbench-backup-schedule-20260331094735 logSource="pkg/controller/backup_controller.go:690"
time="2026-03-31T09:47:38Z" level=info msg="Setting up plugin manager" backup=pty-backup-recovery/workbench-backup-schedule-20260331094735 logSource="pkg/controller/backup_controller.go:697"
time="2026-03-31T09:47:38Z" level=info msg="Getting backup item actions" backup=pty-backup-recovery/workbench-backup-schedule-20260331094735 logSource="pkg/controller/backup_controller.go:701"
5 - Restoring the Policy Workbench
Complete the steps provided in this section to restore the Policy Workbench data using an existing backup.
Before you begin
Before starting a restore, ensure that the following prerequisites are met:
- Ensure that an existing backup is available. Backups are taken automatically as part of the default installation of the Policy Workbench using scheduled backup mechanisms. The backups are available in the encrypted AWS S3 bucket that you created when you deployed PPC. You can also choose to manually back up the data.
For more information about the AWS S3 bucket, refer to the section Creating AWS KMS Key and S3 Bucket.
For more information about manually backing up the data, refer to the section Backing up the Policy Workbench.
- Ensure that a restored PPC cluster is available. The Policy Workbench is restored on a restored PPC cluster. For information about restoring the PPC, refer to the section Restoring the PPC.
Important: Before you restore the data, ensure that Velero CLI version 1.17 or later is installed.
To restore the data:
- Ensure that the main.tf file in the root module (the working directory for executing the OpenTofu commands) contains the following code block. If a root module is not available, create one containing the main.tf file.
module "policy_workbench" {
  source       = "oci://<Container_Registry_Path>/policy-workbench/<major.minor>/opentofu/modules/policy-workbench?tag=<version>"
  cluster_name = var.cluster_name
}

variable "cluster_name" {
  type        = string
  description = "EKS cluster name."
  nullable    = false

  validation {
    condition     = length(trimspace(var.cluster_name)) > 0
    error_message = "cluster_name must be provided and cannot be empty."
  }
}
This code block adds the Policy Workbench OpenTofu module.
- Run the following commands on the jump box.
tofu init
tofu plan -var="cluster_name=<Restored-PPC-cluster-name>"
tofu apply -var="cluster_name=<Restored-PPC-cluster-name>"
Specify the name of the restored PPC cluster as the value of the cluster_name variable.
For information about restoring the PPC, refer to the section Restoring the PPC.
- Run the following command on the jump box.
velero restore create workbench-restore-$(date +%Y%m%d-%H%M%S) --from-backup <backup-name> -n <Namespace where data is backed up>
For example:
velero restore create workbench-restore-$(date +%Y%m%d-%H%M%S) --from-backup <backup-name> -n pty-backup-recovery
For more information about the velero restore command, refer to the section Restore Reference in the Velero documentation.
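The restore name in the command above embeds a timestamp so that repeated restores get unique, sortable names. The name generation on its own looks like this:

```shell
# The $(date ...) substitution gives each restore a unique, sortable name,
# for example workbench-restore-20260331-094735
RESTORE_NAME="workbench-restore-$(date +%Y%m%d-%H%M%S)"
echo "$RESTORE_NAME"
```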
- Run the following command to list all the restore operations in the specific namespace.
velero restore get -n pty-backup-recovery
Ensure that the status of the restore operation is WaitingForPluginOperations.
- Run the following command to annotate the Kubernetes resources.
kubectl annotate productconfiguration workbench -n pty-admin kopf.zalando.org/last-handled-configuration- --overwrite
- Run the following command to upgrade the Policy Workbench.
helm upgrade policy-workbench \
<chart> \
--version <version> \
--namespace policy-workbench \
--reuse-values
<chart> is the Helm chart reference that you specified while installing the Policy Workbench.
- Run the following commands to monitor the restore status.
- Run the following command to retrieve the list of existing restores.
velero restore get -n pty-backup-recovery
- Run the following command to obtain details of a specific restore.
velero restore describe workbench-restore-<timestamp> -n pty-backup-recovery
- Run the following command to obtain the log details for a specific restore.
velero restore logs workbench-restore-<timestamp> -n pty-backup-recovery
6 - Workbench Roles and Permissions
List of Roles and Permissions used in the Policy Workbench.
Roles are templates that include permissions, and users can be assigned one or more roles. All users in the appliance must be associated with a role.
The roles packaged with Policy Workbench are as follows:
| Roles | Description | Permissions |
|---|---|---|
| workbench_administrator | Full administrative access to workbench. | workbench_management_policy_write, workbench_deployment_immutablepackage_export, workbench_deployment_certificate_export |
| workbench_viewer | Read-only access to workbench. | workbench_management_policy_read |
| workbench_deployment_administrator | Administrative access to workbench deployments. | workbench_deployment_immutablepackage_export, workbench_deployment_certificate_export |
The capabilities of a role are defined by the permissions attached to the role. Although roles can be created, modified, or deleted from the appliance, permissions cannot be edited. The following default permissions are packaged with Policy Workbench and available to map to a user:
| Permissions | Description |
|---|---|
| workbench_management_policy_write | Allows management of policies and configurations. |
| workbench_management_policy_read | Allows viewing of policies and configurations. |
| workbench_deployment_immutablepackage_export | Allows exporting encrypted resilient packages. |
| workbench_deployment_certificate_export | Allows exporting certificates used by protectors for dynamic resilient packages. |
7 - Troubleshooting the Protegrity Policy Manager
Helm upgrade fails due to existing Kubernetes jobs
Issue: Helm upgrade fails because existing jobs, such as hubcontroller-init and kmgw-create-keystore, cannot be patched.
Description: A Helm upgrade cannot modify or replace existing Kubernetes jobs when fields such as the image registry, environment variables, args, or volumes change, because the pod template of a job is immutable. The existing pods therefore cannot be replaced when their template changes, and the Helm upgrade fails.
Workaround:
Delete the existing jobs manually and then run the Helm upgrade command.
To manually delete the jobs, run the following commands:
kubectl delete job hubcontroller-init -n policy-workbench
kubectl delete job kmgw-create-keystore -n policy-workbench