
Governance and Policy

Governance and Policy for AI Team Edition.

1 - Protegrity Policy Manager

Working with Protegrity Policy Manager.

Data Security Policy is at the core of Protegrity’s platform. A policy is a set of rules that governs how sensitive data is protected, and who in the organization can see the data in the clear. Sensitive data can include Personally Identifiable Information (PII), financial information, health-related information, and so on. A Data Security Policy is enforced within different systems and environments in the enterprise, providing the same level of security regardless of the location of the sensitive data.

The Protegrity Policy Manager enables users to perform protect, unprotect, and reprotect operations on sensitive data. Install the Policy Workbench to deploy the Protegrity Policy Manager.

Important: Protegrity Policy Manager is the name of the feature, while Policy Workbench is the name of the component.

For more information about installing the Policy Workbench, refer to the section Installing Policy Workbench.

For more information about the components used in the Protegrity Policy Manager, refer to the section Policy Components in the Policy Management documentation.

For more information about creating, managing, and viewing policies using the Policy Management API, refer to the section Using the Policy Management REST APIs.

For more information about creating, managing, and viewing policies using the Policy Management CLI, refer to the section Policy Management Command Line Interface (CLI) Reference.

1.1 - Prerequisites for Installing the Policy Workbench

Prerequisites to install Policy Workbench.

Ensure that the jumpbox can connect to the required repositories. If you are not already authenticated, log in to the required repository.

  • For connecting and deploying from the Protegrity Container Registry (PCR), use the following command and the credentials obtained from the My.Protegrity portal during account creation:
helm registry login registry.protegrity.com:9443
  • For connecting and deploying to the local repository, use your local credentials and local repository endpoint as required.

Ensure that the PPC cluster is installed and accessible before installing Policy Workbench on PPC.

For more information about installing PPC, refer to the section Installing PPC.

Required Tools

Ensure that the following tools are available on the jump box on which Policy Workbench is installed.

| Tool | Version | Description |
|---|---|---|
| OpenTofu | >= 1.10.0 | Used to run the installer. |
| AWS CLI | Any version | Must be configured with credentials that have EKS and IAM permissions. The default region must also be set using either the AWS_DEFAULT_REGION or AWS_REGION environment variable, or the ~/.aws/config configuration file. |
| kubectl | Any version | Required for validating the deployment. It must be configured for the target PPC cluster where Policy Workbench is deployed. |
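As a quick illustration (not part of the installer), the availability of these tools can be checked from the jump box shell before starting. The sketch below only confirms that each binary is on the PATH and reports which region variable, if any, is set; it does not verify versions or credentials.

```shell
# Illustrative preflight check for the required tools listed above.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "OK: $1"
  else
    echo "MISSING: $1"
  fi
}

for tool in tofu aws kubectl; do
  check_tool "$tool"
done

# The region may come from either environment variable; when neither is
# set, it must be configured in ~/.aws/config instead.
echo "Region: ${AWS_REGION:-${AWS_DEFAULT_REGION:-unset}}"
```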

IAM Permissions

The following IAM permissions are automatically created by the OpenTofu script.

| Permission | Purpose |
|---|---|
| iam:CreatePolicy / iam:DeletePolicy / iam:GetPolicy | Create and manage the AWS KMS access policy. |
| iam:CreateRole / iam:DeleteRole / iam:GetRole / iam:UpdateAssumeRolePolicy | Create and manage the AWS KMS pod identity role. |
| iam:AttachRolePolicy / iam:DetachRolePolicy | Attach the AWS KMS policy to the role. |

EKS Permissions

The following EKS permissions are automatically created by the OpenTofu script.

| Permission | Purpose |
|---|---|
| eks:DescribeCluster | Read the cluster endpoint and the certificate authority data for the Helm provider in OpenTofu. The Helm provider requires this information to connect to the PPC. |
| eks:DescribeAddon | Verify that the eks-podidentity-agent is installed. |
| eks:CreatePodIdentityAssociation / eks:DeletePodIdentityAssociation / eks:DescribePodIdentityAssociation | Associate the AWS KMS role with the Policy Workbench service account. |

1.2 - Installing Policy Workbench

Steps to install Policy Workbench.

Before installing Policy Workbench, ensure that the prerequisites are met. For more information about the prerequisites, refer to the section Prerequisites for Installing the Policy Workbench.

To install Policy Workbench, first provision the AWS resources using the policy-workbench OpenTofu module and then deploy the Policy Workbench using Helm. The policy-workbench OpenTofu module is published to the Protegrity Container Registry and must be consumed from a root module. A root module is the working directory for executing the OpenTofu commands.

For more information about OpenTofu modules and the root module, refer to the section Modules in the OpenTofu documentation.

The Policy Workbench is installed in one of the following scenarios:

  • A root module is not available.
  • A root module is available.

Installing Policy Workbench when root module is not available

Install the Policy Workbench using the following steps:

  1. Run the following command to create the deployment directory.
# must install from an empty directory
mkdir policy-workbench && cd policy-workbench
  2. Create a root module with a single main.tf file.
terraform {
  required_version = ">= 1.10.0"

  required_providers {
    aws = {
      source  = "registry.opentofu.org/hashicorp/aws"
      version = ">= 5.0"
    }
  }
}


module "policy_workbench" {
  source = "oci://<Container_Registry_Path>/policy-workbench/<major.minor>/opentofu/modules/policy-workbench?tag=<version>"

  cluster_name = var.cluster_name
}


variable "cluster_name" {
  type        = string
  description = "EKS cluster name."
  nullable    = false

  validation {
    condition     = length(trimspace(var.cluster_name)) > 0
    error_message = "cluster_name must be provided and cannot be empty."
  }
}

In the main.tf file, specify the values of the following variables.

| Variable Name | Description | Value |
|---|---|---|
| <Container_Registry_Path> | Location of the Protegrity Container Registry or the local repository where the policy-workbench OpenTofu module is published. | registry.protegrity.com:9443 if the Protegrity Container Registry is used; the local repository endpoint if a local repository is used. |
| <major.minor> | Major and minor version of the Protegrity Policy Manager, as specified in the product part number. Obtain the product part number from the Policy Manager Readme. | 1.11 |
| <version> | Tag version of the Protegrity Policy Manager. | 1.11.0 |
  3. Perform the following steps to configure the credentials to install the Policy Workbench from the Protegrity Container Registry.

    a. Run the following command to create the configuration directory.

    mkdir -p ~/.config/containers
    

    b. Obtain the username and access token from the My.Protegrity portal. For more information about obtaining the credentials from the My.Protegrity portal, refer to the section Configuring Authentication for Protegrity AI Team Edition.

    c. Generate a base64-encoded string with padding for the username:accesstoken pair obtained from the My.Protegrity portal.

    Ensure that you specify the username and access token within single quotes when generating the base64 encoded value. For example, 'username:accesstoken'.

    d. Create a file named ~/.config/containers/auth.json with the following content.

    {
        "auths": {
            "registry.protegrity.com:9443": {
                "auth": "<base64 generated string from step 3c>"
            }
        }
    }
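The encoding in sub-step c can be sketched as follows. printf is used instead of echo so that no trailing newline is included in the encoded value; the credentials shown are placeholders, not real values.

```shell
# Encode a "username:accesstoken" pair for the auth.json "auth" field.
# printf avoids the trailing newline that echo would add to the input.
encode_auth() {
  printf '%s' "$1" | base64
}

# Placeholder credentials for illustration only.
encode_auth 'username:accesstoken'
```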
  4. Run the following command to navigate to the deployment directory.

    cd policy-workbench
    
  5. Run the following commands to plan and install the Policy Workbench OpenTofu module.

# init, plan, and install
tofu init
tofu plan -var="cluster_name=<PPC-cluster-name>"
tofu apply -var="cluster_name=<PPC-cluster-name>"

In the cluster_name field, specify the name of the PPC cluster that you have specified in step 1 while deploying the PPC.

For more information about deploying the PPC, refer to the section Deploying PPC.

OpenTofu prints the plan and prompts for confirmation. Enter yes to proceed. To skip the prompt, add the -auto-approve option to the commands.

  6. Run the following command to install the Policy Workbench using Helm.
helm upgrade --install policy-workbench \
  oci://<Container_Registry_Path>/policy-workbench/<major.minor>/helm/policy-workbench \
  --set karpenterResources.nodeClass.amiId="<ami-id>" \
  --version <version> \
  --namespace policy-workbench \
  --create-namespace

In the command, specify the values of the following variables.

Variable NameDescriptionValue
<Container_Registry_Path>Location of the Protegrity Container Registry or the local repository where the policy-workbench OpenTofu module is published.
  • registry.protegrity.com:9443 if Protegrity Container Registry is used.
  • local repository endpoint if a local repository is used.
<major.minor>Major and minor version of the Protegrity Policy Manager, as specified in the product part number. Obtain the product part number from the Policy Manager Readme.1.11
<version>Tag version of the Protegrity Policy Manager.1.11.0

Important: You need to pass the <ami-id> value in the command only if you are deploying the feature in a region other than us-east-1.

Option A (Recommended): Run the following AWS CLI command to retrieve the AMI ID dynamically.

aws ssm get-parameter \
  --name /aws/service/bottlerocket/aws-k8s-1.34/x86_64/latest/image_id \
  --region <region> \
  --query "Parameter.Value" \
  --output text

Option B: Alternatively, use one of the AMI IDs from the following table.

| Region | AMI ID |
|---|---|
| ap-south-1 | ami-07959c05dcdb79a72 |
| eu-north-1 | ami-0268b0bfff0f25d31 |
| eu-west-3 | ami-0ea9454aef60045a2 |
| eu-west-2 | ami-0d5eee57a6a1398a3 |
| eu-west-1 | ami-00a8d14029b60a028 |
| ap-northeast-3 | ami-0e495c3ffd416c65e |
| ap-northeast-2 | ami-0fc18a24aec719c1c |
| ap-northeast-1 | ami-00ec85b83bf713aac |
| ca-central-1 | ami-03891f0d8b41eb296 |
| sa-east-1 | ami-0a30f044a5781b4e0 |
| ap-southeast-1 | ami-0ae51324bf2e89725 |
| ap-southeast-2 | ami-0ef7e8095b163dc42 |
| eu-central-1 | ami-00e36131a0343c374 |
| us-east-2 | ami-0e486911b2d0a5f7e |
| us-west-1 | ami-01183e1261529749e |
| us-west-2 | ami-04f850c412625dfe6 |
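The SSM parameter name in Option A follows a fixed convention: the latest Bottlerocket AMI ID is published per Kubernetes version and architecture. The illustrative helper below only builds that parameter path; the 1.34 and x86_64 values mirror the command above and would be changed only if your cluster differs.

```shell
# Build the SSM public-parameter path for a Bottlerocket AMI lookup.
# Arguments: Kubernetes version (for example 1.34) and architecture
# (for example x86_64).
ami_param_path() {
  k8s_version="$1"
  arch="$2"
  echo "/aws/service/bottlerocket/aws-k8s-${k8s_version}/${arch}/latest/image_id"
}

ami_param_path 1.34 x86_64
```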
  7. Run the following command to view the pods created in the policy-workbench namespace.
kubectl get pods -n policy-workbench

The following output appears.

NAME                        READY   STATUS    RESTARTS   AGE
bootstrap-bffb4b5d9-v6ww4   1/1     Running   0          13m
cert-7b88dcd84-zx7cv        1/1     Running   0          13m
devops-75755d87d4-qw9n6     1/1     Running   0          13m
hubcontroller-0             1/1     Running   0          13m
kmgw-0                      1/1     Running   0          13m
mbs-6b7dc765dd-brrfk        1/1     Running   0          13m
repository-0                1/1     Running   0          13m
rpproxy-79fc498d8-qp4fz     1/1     Running   0          13m
rpproxy-79fc498d8-s9k5p     1/1     Running   0          13m
rpproxy-79fc498d8-tbdtb     1/1     Running   0          13m
rps-8d79b7d98-svhdw         1/1     Running   0          13m

Installing Policy Workbench when root module is available

Install the Policy Workbench using the following steps:

  1. Add the policy-workbench OpenTofu module by adding the following code block to an existing root module.
module "policy_workbench" {
  source = "oci://<Container_Registry_Path>/policy-workbench/<major.minor>/opentofu/modules/policy-workbench?tag=<version>"

  cluster_name = "<PPC-cluster-name>"
}

variable "cluster_name" {
  type        = string
  description = "EKS cluster name."
  nullable    = false

  validation {
    condition     = length(trimspace(var.cluster_name)) > 0
    error_message = "cluster_name must be provided and cannot be empty."
  }
}

For more information about adding a module to an existing root module, refer to the section Module Blocks in the OpenTofu documentation.

In the root module, specify the values of the following variables.

| Variable Name | Description | Value |
|---|---|---|
| <Container_Registry_Path> | Location of the Protegrity Container Registry or the local repository where the policy-workbench OpenTofu module is published. | registry.protegrity.com:9443 if the Protegrity Container Registry is used; the local repository endpoint if a local repository is used. |
| <major.minor> | Major and minor version of the Protegrity Policy Manager, as specified in the product part number. Obtain the product part number from the Policy Manager Readme. | 1.11 |
| <version> | Tag version of the Protegrity Policy Manager. | 1.11.0 |

In the cluster_name field, specify the name of the PPC cluster that you have specified in step 1 while deploying the PPC.

For more information about deploying the PPC, refer to the section Deploying PPC.

  2. If the root module does not include the hashicorp/aws provider version >= 5.0, add the following code block to the terraform {} block. Otherwise, proceed to the next step.
required_providers {
  aws = {
    source  = "registry.opentofu.org/hashicorp/aws"
    version = ">= 5.0"
  }
}

For more information about including the hashicorp/aws provider in the root module, refer to the OpenTofu Registry documentation.

  3. Perform the following steps to configure the credentials to install the Policy Workbench from the Protegrity Container Registry.

    a. Run the following command to create the configuration directory.

    mkdir -p ~/.config/containers
    

    b. Obtain the username and access token from the My.Protegrity portal. For more information about obtaining the credentials from the My.Protegrity portal, refer to the section Configuring Authentication for Protegrity AI Team Edition.

    c. Generate a base64-encoded string with padding for the username:accesstoken pair obtained from the My.Protegrity portal.

    Ensure that you specify the username and access token within single quotes when generating the base64 encoded value. For example, 'username:accesstoken'.

    d. Create a file named ~/.config/containers/auth.json with the following content.

    {
        "auths": {
            "registry.protegrity.com:9443": {
                "auth": "<base64 generated string from step 3c>"
            }
        }
    }
  4. Navigate to the directory containing the root module.

  5. Run the following commands to plan and install the Policy Workbench OpenTofu module.

# init, plan, and install
tofu init
tofu plan -var="cluster_name=<PPC-cluster-name>"
tofu apply -var="cluster_name=<PPC-cluster-name>"

OpenTofu prints the plan and prompts for confirmation. Enter yes to proceed. To skip the prompt, add the -auto-approve option to the commands.

In the cluster_name field, specify the name of the PPC cluster that you have specified in step 1 while deploying the PPC.

For more information about deploying the PPC, refer to the section Deploying PPC.

  6. Run the following command to install the Policy Workbench using Helm.
helm upgrade --install policy-workbench \
  oci://<Container_Registry_Path>/policy-workbench/<major.minor>/helm/policy-workbench \
  --set karpenterResources.nodeClass.amiId="<ami-id>" \
  --version <version> \
  --namespace policy-workbench \
  --create-namespace

In the command, specify the values of the following variables.

| Variable Name | Description | Value |
|---|---|---|
| <Container_Registry_Path> | Location of the Protegrity Container Registry or the local repository where the policy-workbench OpenTofu module is published. | registry.protegrity.com:9443 if the Protegrity Container Registry is used; the local repository endpoint if a local repository is used. |
| <major.minor> | Major and minor version of the Protegrity Policy Manager, as specified in the product part number. Obtain the product part number from the Policy Manager Readme. | 1.11 |
| <version> | Tag version of the Protegrity Policy Manager. | 1.11.0 |

Important: You need to pass the <ami-id> value in the command only if you are deploying the feature in a region other than us-east-1.

Option A (Recommended): Run the following AWS CLI command to retrieve the AMI ID dynamically.

aws ssm get-parameter \
  --name /aws/service/bottlerocket/aws-k8s-1.34/x86_64/latest/image_id \
  --region <region> \
  --query "Parameter.Value" \
  --output text

Option B: Alternatively, use one of the AMI IDs from the following table.

| Region | AMI ID |
|---|---|
| ap-south-1 | ami-07959c05dcdb79a72 |
| eu-north-1 | ami-0268b0bfff0f25d31 |
| eu-west-3 | ami-0ea9454aef60045a2 |
| eu-west-2 | ami-0d5eee57a6a1398a3 |
| eu-west-1 | ami-00a8d14029b60a028 |
| ap-northeast-3 | ami-0e495c3ffd416c65e |
| ap-northeast-2 | ami-0fc18a24aec719c1c |
| ap-northeast-1 | ami-00ec85b83bf713aac |
| ca-central-1 | ami-03891f0d8b41eb296 |
| sa-east-1 | ami-0a30f044a5781b4e0 |
| ap-southeast-1 | ami-0ae51324bf2e89725 |
| ap-southeast-2 | ami-0ef7e8095b163dc42 |
| eu-central-1 | ami-00e36131a0343c374 |
| us-east-2 | ami-0e486911b2d0a5f7e |
| us-west-1 | ami-01183e1261529749e |
| us-west-2 | ami-04f850c412625dfe6 |
  7. Run the following command to view the pods created in the policy-workbench namespace.
kubectl get pods -n policy-workbench

The following output appears.

NAME                        READY   STATUS    RESTARTS   AGE
bootstrap-bffb4b5d9-v6ww4   1/1     Running   0          13m
cert-7b88dcd84-zx7cv        1/1     Running   0          13m
devops-75755d87d4-qw9n6     1/1     Running   0          13m
hubcontroller-0             1/1     Running   0          13m
kmgw-0                      1/1     Running   0          13m
mbs-6b7dc765dd-brrfk        1/1     Running   0          13m
repository-0                1/1     Running   0          13m
rpproxy-79fc498d8-qp4fz     1/1     Running   0          13m
rpproxy-79fc498d8-s9k5p     1/1     Running   0          13m
rpproxy-79fc498d8-tbdtb     1/1     Running   0          13m
rps-8d79b7d98-svhdw         1/1     Running   0          13m

Validating the deployment

Note: Before validating the deployment of the Policy Workbench, ensure that the kubectl context is set to the target PPC cluster. Run kubectl config current-context to verify the current context. Run kubectl config use-context <context-name> to switch the context.

After installation, validate the Policy Workbench deployment using the following steps. The desired outcome of these steps is to get a [] response from the datastores API call using a dedicated workbench user.

  1. Run the following command to retrieve the gateway host details.
export GW_HOST="$(kubectl get gateway pty-main -n api-gateway -o jsonpath='{.status.addresses[0].value}')"
  2. Run the following command to generate the JWT token.
TOKEN=$(curl -k -s "https://$GW_HOST/api/v1/auth/login/token" \
  -X POST \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d "loginname=admin" \
  -d "password=Admin123!" \
  -D - -o /dev/null 2>&1 \
  | grep -i 'pty_access_jwt_token:' \
  | sed 's/pty_access_jwt_token: //' \
  | tr -d '\r') && echo "${TOKEN:0:10}"
  3. Create a workbench user using the following command. Due to separation of duties, the datastores API requires a user with workbench roles.
curl -sk -X POST "https://$GW_HOST/pty/v1/auth/users" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "username": "workbench",
    "password": "Admin123!",
    "roles": [
      "workbench_administrator"
    ]
  }'

The following output appears.

{"user_id":"397beecc-87bb-404e-85bb-f8a6d83984d6","username":"workbench"}

Use the JWT token generated in step 2.

  4. Ensure that the user with the workbench_administrator role has the following permissions:
  • workbench_management_policy_write
  • workbench_deployment_immutablepackage_export
  • workbench_deployment_certificate_export
  • cli_access
  • can_create_token

To ensure the required permissions, run the following command:

curl -sk -X PUT "https://$GW_HOST/pty/v1/auth/roles" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "workbench_administrator",
    "permissions": [
      "workbench_management_policy_write",
      "workbench_deployment_immutablepackage_export",
      "workbench_deployment_certificate_export",
      "cli_access",
      "can_create_token"
    ]
  }'

The following output appears.

{"role_name":"workbench_administrator","status":"updated"}

For more information about the workbench_administrator permissions, refer to the section Workbench Roles and Permissions.

For more information about the cli_access and can_create_token permissions, refer to the section Roles and Permissions.

  5. Run the following command to get a token for the workbench user.
export TOKEN=$(curl -k -s https://$GW_HOST/pty/v1/auth/login/token \
  -X POST \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'loginname=workbench' \
  -d 'password=Admin123!' \
-D - -o /dev/null 2>&1 | grep -i 'pty_access_jwt_token:' | sed 's/pty_access_jwt_token: //' | tr -d '\r')
  6. Run the Policy Management REST API to get the datastores. Use the JWT token generated in the previous step.
curl -k -v https://$GW_HOST/pty/v2/pim/datastores -H "Authorization: Bearer $TOKEN"

The expected output is []. This indicates that the Policy Workbench is initialized but the datastore is not yet created.
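The header-parsing pipeline used to capture the JWT in these steps can be illustrated offline: curl -D - writes the response headers to stdout, and grep, sed, and tr isolate the pty_access_jwt_token header value. The sample headers and token below are fabricated for illustration.

```shell
# Extract the pty_access_jwt_token header value from raw response headers,
# mirroring the grep | sed | tr pipeline used in the validation steps.
extract_token() {
  grep -i 'pty_access_jwt_token:' \
    | sed 's/pty_access_jwt_token: //' \
    | tr -d '\r'
}

# Fabricated sample headers standing in for the real curl -D - output.
sample_headers='HTTP/2 200
content-type: application/json
pty_access_jwt_token: eyJhbGciOi.sample.payload'

printf '%s\n' "$sample_headers" | extract_token
```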

1.3 - Uninstalling the Protegrity Policy Manager

Uninstalling the deployment.

To uninstall the deployment:

  1. Run the following command to uninstall the Policy Workbench.
helm uninstall policy-workbench -n policy-workbench
  2. Run the following command to clean up the AWS resources.
tofu destroy -var='cluster_name=<PPC cluster name>'

1.4 - Backing up the Policy Workbench

Back up the Policy Workbench.

By default, the Policy Workbench data is backed up on a daily basis using a scheduled backup, after the Policy Workbench has been installed. The backed-up data includes the Kubernetes object state and the persistent volume data. The backed-up data is automatically stored in the encrypted AWS S3 bucket that you created when you deployed PPC.

For more information about the AWS S3 bucket, refer to the section Creating AWS KMS Key and S3 Bucket.

You can also choose to manually back up the data to the AWS S3 bucket using Velero.

Important: Before you manually back up the data, ensure that Velero CLI version 1.17 or later is installed.

To manually back up the data:

  1. Run the following command on the jump box.
velero backup create --from-schedule workbench-backup-schedule -n <Namespace where data is backed up>

For example:

velero backup create --from-schedule workbench-backup-schedule -n pty-backup-recovery

The following output appears.

INFO[0001] No Schedule.template.metadata.labels set - using Schedule.labels for backup object  backup=pty-backup-recovery/workbench-backup-schedule-20260331094735 labels="map[app.kubernetes.io/managed-by:Helm deployment:policy-workbench]"
Creating backup from schedule, all other filters are ignored.
Backup request "workbench-backup-schedule-20260331094735" submitted successfully.
Run `velero backup describe workbench-backup-schedule-20260331094735` or `velero backup logs workbench-backup-schedule-20260331094735` for more details.

For more information about the velero backup command, refer to the section Backup Reference in the Velero documentation.

  2. Run the following commands to monitor the backup status.
    a. Run the following command to retrieve the list of existing backups.
    velero backup get -n pty-backup-recovery
    
    The following output appears.
    NAME                                               STATUS                       ERRORS   WARNINGS   CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
    authnz-postgresql-schedule-backup-20260331093017   Completed                    0        0          2026-03-31 09:30:18 +0000 UTC   59d       default            app.kubernetes.io/persistence=enabled
    workbench-backup-schedule-20260331094735           WaitingForPluginOperations   0        0          2026-03-31 09:47:38 +0000 UTC   59d       default            <none>
    workbench-backup-schedule-20260331094704           WaitingForPluginOperations   0        0          2026-03-31 09:47:04 +0000 UTC   59d
    
    b. Run the following command to obtain details of a specific backup.
    velero backup describe <backup-name> -n pty-backup-recovery
    
    The following code block shows a snippet of the output.
    Name:         workbench-backup-schedule-20260331094735
    Namespace:    pty-backup-recovery
    Labels:       app.kubernetes.io/managed-by=Helm
                  deployment=policy-workbench
                  velero.io/schedule-name=workbench-backup-schedule
                  velero.io/storage-location=default
    Annotations:  meta.helm.sh/release-name=policy-workbench
                  meta.helm.sh/release-namespace=policy-workbench
                  velero.io/resource-timeout=10m0s
                  velero.io/source-cluster-k8s-gitversion=v1.35.2-eks-f69f56f
                  velero.io/source-cluster-k8s-major-version=1
                  velero.io/source-cluster-k8s-minor-version=35
    
    Phase:  WaitingForPluginOperations
    
    c. Run the following command to obtain the log details for a specific backup.
    velero backup logs <backup-name> -n pty-backup-recovery
    
    The following code block shows a snippet of the output.
    time="2026-03-31T09:47:38Z" level=info msg="Setting up backup temp file" backup=pty-backup-recovery/workbench-backup-schedule-20260331094735 logSource="pkg/controller/backup_controller.go:690"
    time="2026-03-31T09:47:38Z" level=info msg="Setting up plugin manager" backup=pty-backup-recovery/workbench-backup-schedule-20260331094735 logSource="pkg/controller/backup_controller.go:697"
    time="2026-03-31T09:47:38Z" level=info msg="Getting backup item actions" backup=pty-backup-recovery/workbench-backup-schedule-20260331094735 logSource="pkg/controller/backup_controller.go:701"
    

1.5 - Restoring the Policy Workbench

Complete the steps provided in this section to restore the Policy Workbench data using an existing backup.

Before you begin

Before starting a restore, ensure that the following prerequisites are met:

  • Ensure that an existing backup is available. Backups are taken automatically as part of the default installation of the Policy Workbench using scheduled backup mechanisms. The backups are available in the encrypted AWS S3 bucket that you created when you deployed PPC. You can also choose to manually back up the data.

For more information about the AWS S3 bucket, refer to the section Creating AWS KMS Key and S3 Bucket.

For more information about manually backing up the data, refer to the section Backing up the Policy Workbench.

  • Ensure that a restored PPC cluster is available. The Policy Workbench is restored on a restored PPC cluster. For information about restoring the PPC, refer to the section Restoring the PPC.

Important: Before you restore the data, ensure that Velero CLI version 1.17 or later is installed.

To restore the data:

  1. Ensure that the main.tf file in the root module, which is the working directory for executing the OpenTofu commands, contains the following code block. If a root module is not available, create one containing the main.tf file.
module "policy_workbench" {
  source = "oci://<Container_Registry_Path>/policy-workbench/<major.minor>/opentofu/modules/policy-workbench?tag=<version>"

  cluster_name = var.cluster_name
}

variable "cluster_name" {
  type        = string
  description = "EKS cluster name."
  nullable    = false

  validation {
    condition     = length(trimspace(var.cluster_name)) > 0
    error_message = "cluster_name must be provided and cannot be empty."
  }
}

This code block adds the Policy Workbench OpenTofu module.

  2. Run the following commands on the jump box.
tofu init
tofu plan -var="cluster_name=<Restored-PPC-cluster-name>"
tofu apply -var="cluster_name=<Restored-PPC-cluster-name>" 

Specify the name of the restored PPC cluster as the value of the cluster_name variable.

For information about restoring the PPC, refer to the section Restoring the PPC.

  3. Run the following command on the jump box.
velero restore create workbench-restore-$(date +%Y%m%d-%H%M%S) --from-backup <backup-name> -n <Namespace where data is backed up>

For example:

velero restore create workbench-restore-$(date +%Y%m%d-%H%M%S) --from-backup <backup-name> -n pty-backup-recovery

For more information about the velero restore command, refer to the section Restore Reference in the Velero documentation.
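As a small sketch of the naming used in the command above: embedding date +%Y%m%d-%H%M%S in the restore name makes each restore object name unique, which matters because Kubernetes object names must be unique within a namespace.

```shell
# Build a timestamped restore name, as used in the velero restore create
# command above. Each invocation yields a distinct name (to the second).
restore_name="workbench-restore-$(date +%Y%m%d-%H%M%S)"
echo "$restore_name"
```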

  4. Run the following command to list all the restore operations in the specific namespace.
velero restore get -n pty-backup-recovery

Ensure that the status of the restore operation is WaitingForPluginOperations.

  5. Run the following command to annotate the Kubernetes resources.
kubectl annotate productconfiguration workbench -n pty-admin kopf.zalando.org/last-handled-configuration- --overwrite
  6. Run the following command to upgrade the Policy Workbench.
helm upgrade policy-workbench \
  <chart> \
  --version <version> \
  --namespace policy-workbench \
  --reuse-values

<chart> is the name of the Helm chart that you specified while installing the Policy Workbench.

  7. Run the following commands to monitor the restore status.
    a. Run the following command to retrieve the list of existing restores.
    velero restore get -n pty-backup-recovery
    
    b. Run the following command to obtain details of a specific restore.
    velero restore describe workbench-restore-<timestamp> -n pty-backup-recovery
    
    c. Run the following command to obtain the log details for a specific restore.
    velero restore logs workbench-restore-<timestamp> -n pty-backup-recovery
    

1.6 - Workbench Roles and Permissions

List of Roles and Permissions used in the Policy Workbench.

Roles are templates that include permissions, and users can be assigned to one or more roles. All users in the appliance must be associated with a role.

The roles packaged with Policy Workbench are as follows:

| Role | Description | Permissions |
|---|---|---|
| workbench_administrator | Full administrative access to workbench. | workbench_management_policy_write, workbench_deployment_immutablepackage_export, workbench_deployment_certificate_export |
| workbench_viewer | Read-only access to workbench. | workbench_management_policy_read |
| workbench_deployment_administrator | Administrative access to workbench deployments. | workbench_deployment_immutablepackage_export, workbench_deployment_certificate_export |

The capabilities of a role are defined by the permissions attached to it. Though roles can be created, modified, or deleted from the appliance, permissions cannot be edited. The following default permissions are packaged with Policy Workbench and can be mapped to a user:

| Permission | Description |
|---|---|
| workbench_management_policy_write | Allows management of policies and configurations. |
| workbench_management_policy_read | Allows viewing of policies and configurations. |
| workbench_deployment_immutablepackage_export | Allows exporting encrypted resilient packages. |
| workbench_deployment_certificate_export | Allows exporting certificates used by protectors for dynamic resilient packages. |

1.7 - Troubleshooting the Protegrity Policy Manager

Helm upgrade fails due to existing Kubernetes jobs

Issue: Helm upgrade fails because existing jobs, such as hubcontroller-init and kmgw-create-keystore, cannot be patched.

Description: Helm upgrade cannot modify or replace existing Kubernetes jobs if fields such as the image registry, environment variables, args, or volumes are changed. Because the pod template of a job is immutable, the existing pods cannot be replaced when their template changes, and the Helm upgrade fails.

Workaround:

Delete the existing jobs manually and then run the Helm upgrade command.

To manually delete the jobs, run the following commands:

kubectl delete job hubcontroller-init -n policy-workbench
kubectl delete job kmgw-create-keystore -n policy-workbench

2 - Protegrity Agent

Using Protegrity Agent with PPC

Protegrity Agent is an intelligent agentic AI system designed for Data Protection architects and administrators. Protegrity Agent manages the Protegrity Policy, Data Elements, Roles, Masks, Data Stores and other configurations through natural language conversations. The system provides automated planning and execution capabilities for complex data protection workflows, including policy management, data element configuration, and security rule deployment.

Protegrity Agent leverages advanced Large Language Model (LLM) capabilities within an agentic loop. The agent orchestrates operations across the Protegrity ecosystem through Protegrity Policy Management, providing an intuitive chatbot-like interface for sophisticated data protection management tasks.

The key capabilities of Protegrity Agent include:

  • Natural Language Interface: Manages data protection configurations through a conversational API.
  • Intelligent Planning: Decomposes complex user queries into actionable, dynamic plans.
  • Autonomous Execution: Executes multi-step workflows with adaptive tool selection and error recovery.
  • Real-time Streaming: Streams Server-Sent Events (SSE) for live progress updates and intermediate results.
  • Enterprise Integration: Integrates seamlessly with Protegrity Policy Management through comprehensive API coverage.
  • Semantic Tool Discovery: Uses RAG-based tool selection for optimal endpoint matching.
  • Conversation Management: Tracks conversation history for context-aware interactions.
  • LLM APIs: Requires customer-provided LLM API keys.

    Important: The current version of Protegrity Agent expects GPT 5.2 main endpoints and GPT 4o embeddings endpoint.

2.1 - Prerequisites

Prerequisites to install Protegrity Agent

Ensure that the following requirements are met before installing Protegrity Agent with PPC.

  • The jumpbox is registered and prepared.
    For more information about registering a jumpbox, refer to Configuring Authentication for Protegrity AI Team Edition.

  • Ensure that a PPC cluster is installed and accessible.
    For more information about installing a PPC, refer to Installing PPC.

  • Ensure that the Protegrity Policy Manager is installed. Install the Policy Workbench to deploy the Protegrity Policy Manager.

    For more information about installing the Policy Workbench, refer to the section Installing Protegrity Policy Manager.

  • Ensure that you have access to OpenAI API keys. These are required during the installation process.

  • The agent_admin role must be available.
    For more information about creating a role, refer to Working with Roles.

2.2 - Roles and Permissions

Roles and permissions required for Protegrity Agent

2.2.1 - Required Roles and Permissions

The Protegrity Agent uses role-based access control (RBAC) to govern access to its features. The Protegrity Policy Cloud gateway enforces all permissions through JSON Web Token (JWT) authentication. The Agent API does not perform permission checks internally.

Roles

The following table lists the permissions assigned to the roles.

| Role | Description | Permissions |
|---|---|---|
| agent_admin | Grants full read-write access to policy, packages, and Insight | proagent_conversations_permission, proagent_responses_permission, proagent_health_permission, proagent_readiness_permission, proagent_liveness_permission, proagent_version_permission, proagent_ui_permission, proagent_doc_permission, proagent_log_permission, workbench_policy_view, workbench_policy_manage, workbench_certificate_export, workbench_package_export_dynamic, workbench_package_export_encrypted, insight_viewer, insight_admin, can_create_token |
| agent_reader | Restricts access to read-only operations | proagent_conversations_permission, proagent_responses_permission, proagent_health_permission, proagent_readiness_permission, proagent_liveness_permission, proagent_version_permission, proagent_ui_permission, proagent_doc_permission, proagent_log_permission, workbench_policy_view, insight_viewer, can_create_token |

For more information about creating the role, refer to Working with Roles.

Permissions

Protegrity Agent API Permissions

These permissions control access to the core Agent endpoints. All endpoints are authenticated using the jwt_token method.

| Permission | Description | Protected Endpoint | HTTP Methods |
|---|---|---|---|
| proagent_ui_permission | Access the Agent web dashboard interface | /pty/proagent/v1.0/ui, /pty/proagent/v1.0/ui* | GET, POST |
| proagent_conversations_permission | Access conversation management endpoints | /pty/proagent/v1.0/conversations, /pty/proagent/v1.0/conversations* | GET, POST, DELETE |
| proagent_responses_permission | Access response generation endpoints | /pty/proagent/v1.0/responses | POST |
| proagent_doc_permission | Access the Agent documentation endpoints | /pty/proagent/v1.0/doc | GET |
| proagent_log_permission | Access the Agent log endpoints | /pty/proagent/v1.0/log | GET, POST |
| proagent_health_permission | Access health check endpoints | /pty/proagent/v1.0/health | GET |
| proagent_readiness_permission | Access readiness probe endpoints | /pty/proagent/v1.0/ready | GET |
| proagent_liveness_permission | Access liveness probe endpoints | /pty/proagent/v1.0/live | GET |
| proagent_version_permission | Access version information endpoints | /pty/proagent/v1.0/version | GET |

Workbench Permissions

These permissions control access to Workbench features such as policy management and package distribution.

| Permission | Description |
|---|---|
| workbench_policy_view | View policies and configurations |
| workbench_policy_manage | Create, update, and delete policies and configurations |
| workbench_certificate_export | Export certificates used by protectors for dynamic Resilient Packages |
| workbench_package_export_dynamic | Distribute Resilient Packages dynamically |
| workbench_package_export_encrypted | Export encrypted Resilient Packages |

Insight Permissions

These permissions control access to the Insight dashboard.

| Permission | Description |
|---|---|
| insight_viewer | View the Insight dashboard |
| insight_admin | Manage the Insight dashboard, including configuration and settings |

Administrative Permissions

These permissions control token creation and user management.

| Permission | Description |
|---|---|
| can_create_token | Create authentication tokens for Agent access |
| user_manager_admin | Manage user accounts and retrieve user token and profile information |

2.2.2 - Working with Roles

This section describes creating roles and users for the Protegrity Agent on a Protegrity Policy Cloud cluster. Roles define the features that a user can access. Users inherit permissions from their assigned roles.

For more information about permissions, refer to Required Roles and Permissions.

Prerequisites

  • A running PPC cluster with the Protegrity Agent deployed.
  • kubectl is configured and is accessible for the target PPC cluster.
  • An admin account on the PPC cluster with required permissions to create roles and users.

Retrieving the Gateway Host

To store the PPC gateway address in a shell variable, run the following command.

export GW_HOST="$(kubectl get gateway pty-main -n api-gateway -o jsonpath='{.status.addresses[0].value}')"

The GW_HOST variable is used in every subsequent command.

Generate a JWT Token

Authenticate as the PPC admin user to obtain a JSON Web Token (JWT). All role and user creation commands require this token.

TOKEN=$(curl -k -s "https://$GW_HOST/api/v1/auth/login/token" \
  -X POST \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d "loginname=admin" \
  -d "password=Admin123!" \
  -D - -o /dev/null 2>&1 \
  | grep -i 'pty_access_jwt_token:' \
  | sed 's/pty_access_jwt_token: //' \
  | tr -d '\r') && echo "${TOKEN:0:10}"

A successful response prints the first 10 characters of the token. If the output is empty, verify the admin credentials and gateway address.
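
The grep/sed/tr pipeline above extracts the token from the response headers. Its behavior can be checked offline on a canned header block; the header value below is a made-up placeholder, not a real token:

```shell
# Illustrative only: parse pty_access_jwt_token from saved response headers.
printf 'HTTP/2 200\r\npty_access_jwt_token: eyJexample.header.token\r\n' \
  | grep -i 'pty_access_jwt_token:' \
  | sed 's/pty_access_jwt_token: //' \
  | tr -d '\r'
```

This prints eyJexample.header.token: grep selects the header line, sed strips the header name, and tr removes the trailing carriage return that HTTP headers carry.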

Creating Agent Roles

Create one or more roles that bundle the permissions that are required by the users.

This section provides two recommended role skeletons:

  • Administrator with complete access permissions
  • Viewer with read-only permissions

Complete-Access Role (agent_admin)

This role grants read-write access to all Agent, Workbench, and Insight features.

curl -sk -X POST "https://$GW_HOST/pty/v1/auth/roles" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "agent_admin",
    "description": "Administrator role",
    "permissions": [
      "proagent_conversations_permission",
      "proagent_doc_permission",
      "proagent_health_permission",
      "proagent_liveness_permission",
      "proagent_log_permission",
      "proagent_readiness_permission",
      "proagent_responses_permission",
      "proagent_ui_permission",
      "proagent_version_permission",

      "workbench_certificate_export",
      "workbench_package_export_dynamic",
      "workbench_package_export_encrypted",
      "workbench_policy_manage",
      "workbench_policy_view",

      "insight_admin",
      "insight_viewer",

      "can_create_token"
    ]
  }'

Users assigned to the agent_admin role inherit all of these permissions.

For more information about the available permissions for agent_admin, refer to Roles.

Read-Only Role (agent_reader)

This role restricts access to read-only operations. The user can view conversations and policies but cannot modify or export them.

curl -sk -X POST "https://$GW_HOST/pty/v1/auth/roles" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "agent_reader",
    "description": "Read-only role",
    "permissions": [
      "proagent_conversations_permission",
      "proagent_doc_permission",
      "proagent_health_permission",
      "proagent_liveness_permission",
      "proagent_log_permission",
      "proagent_readiness_permission",
      "proagent_responses_permission",
      "proagent_ui_permission",
      "proagent_version_permission",

      "workbench_policy_view",

      "insight_viewer",

      "can_create_token"
    ]
  }'

This role excludes workbench_policy_manage, all package export permissions, and insight_admin. The user can view policies and the Insight dashboard but cannot make changes.

For more information about the available permissions for agent_reader, refer to Permissions.

Building a Custom Role

To create a role with any subset of the available permissions, select the required permissions from the Protegrity Agent Permissions Reference. The JSON payload follows the same structure shown above. Replace the name, description, and permissions array with your values.

curl -sk -X POST "https://$GW_HOST/pty/v1/auth/roles" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "<name of custom the role>",
    "description": "<description custom the role>",
    "permissions": [
      <permission 1>,
      <permission 2>,
      <permission 3>
    ]
  }'

Validate Role Creation

To list all roles and confirm that the new roles exist, run the following command.

curl -sk -X GET "https://$GW_HOST/pty/v1/auth/roles" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $TOKEN"

The response includes every role on the PPC cluster. Ensure that agent_admin and agent_reader roles appear in the list.
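
If the role list is long, the role names can be picked out of the response. A minimal sketch, assuming the response is JSON with top-level name fields (the sample payload below is hypothetical, not an actual PPC response; jq is an alternative if installed):

```shell
# Illustrative only: filter role names from a saved roles response.
cat > /tmp/roles.json <<'EOF'
[{"name":"agent_admin"},{"name":"agent_reader"},{"name":"admin"}]
EOF
grep -o '"name":"[^"]*"' /tmp/roles.json
```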

Create Agent Users

Create user accounts and assign them to the required roles.

Admin User

This user inherits all permissions from the agent_admin role. To create an agent_admin, run the following command.

curl -sk -X POST "https://$GW_HOST/pty/v1/auth/users" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "username": "agent_admin",
    "email": "agent_admin@example.com",
    "firstName": "Agent",
    "lastName": "Admin",
    "enabled": true,
    "password": "Admin123!",
    "roles": [
      "agent_admin"
    ]
  }'

Read-Only User

This user inherits the read-only permissions from the agent_reader role. To create an agent_reader, run the following command.

curl -sk -X POST "https://$GW_HOST/pty/v1/auth/users" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "username": "agent_reader",
    "email": "agent_reader@example.com",
    "firstName": "Agent",
    "lastName": "Reader",
    "enabled": true,
    "password": "Admin123!",
    "roles": [
      "agent_reader"
    ]
  }'

User Skeleton Structure

Every user creation request follows the same JSON structure.

{
  "username": "<unique-username>",
  "email": "<email-address>",
  "firstName": "<first-name>",
  "lastName": "<last-name>",
  "enabled": true,
  "password": "<password>",
  "roles": [
    "<role-name-1>",
    "<role-name-2>"
  ]
}

| Field | Description |
|---|---|
| username | Set a unique identifier for the user account |
| email | Set the email address associated with the user account |
| firstName | Set the first name of the user |
| lastName | Set the last name of the user |
| enabled | Enable or disable the user account; set to true to activate |
| password | Set the initial password for the user account |
| roles | Assign one or more roles that define the permissions this user receives |

A user can hold multiple roles. The effective permission set is the union of all permissions from every assigned role. For example, assigning both agent_reader and a custom role that includes workbench_policy_manage grants the user both read and write access to policies.
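
The union semantics can be sketched in shell; the role contents here are illustrative subsets, not the full permission lists:

```shell
# Illustrative only: the effective permission set is the de-duplicated
# union of the permissions from every assigned role.
agent_reader="workbench_policy_view insight_viewer proagent_ui_permission"
custom_role="workbench_policy_manage workbench_policy_view"
printf '%s\n' $agent_reader $custom_role | sort -u
```

The shared workbench_policy_view permission appears once in the output, and the custom role's workbench_policy_manage is added alongside the read-only set.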

Note: Change the default passwords before deploying to a production environment.

Verify the User Configuration

After creating the users, log in with the new credentials to confirm the accounts are accessible.

curl -k -s "https://$GW_HOST/api/v1/auth/login/token" \
  -X POST \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d "loginname=agent_admin" \
  -d "password=Admin123!" \
  -D - -o /dev/null 2>&1 \
  | grep -i 'pty_access_jwt_token:'

A successful response returns a pty_access_jwt_token header. An empty response indicates incorrect credentials or a missing role.

2.3 - Installing Protegrity Agent

Steps to install Protegrity Agent

The Protegrity Agent can be installed using a Helm chart. The Helm chart deploys the following components.

| Component | Description |
|---|---|
| Protegrity Agent Service | Main application service |
| PostgreSQL Database | Persistent database for conversation storage |
| UI | Web interface for Protegrity Agent |

2.3.1 - Installing Protegrity Agent

Steps to install Protegrity Agent


Before you begin

The my-values.yaml file must provide the OpenAI API keys.

For the OpenAI LLM endpoint, provide the following details. These are essential for the Agent to work. The my-values.yaml file should appear as shown below.

proagentService:
  secrets:
    # Main Endpoint
    OPENAI_API_ENDPOINT: ""
    OPENAI_API_KEY: ""
    OPENAI_API_VERSION: ""
    OPENAI_LLM_MODEL: ""

    # Embeddings
    OPENAI_EMBEDDINGS_API_ENDPOINT: ""
    OPENAI_EMBEDDINGS_API_KEY: ""
    OPENAI_EMBEDDINGS_API_VERSION: ""
    OPENAI_EMBEDDING_MODEL: ""

For more information on additional configurations, refer to Configuring Protegrity Agent.

These values can be provided during installation with -f my-values.yaml.

Important: The current version of Protegrity Agent requires GPT 5.2 main endpoints and GPT 4o embeddings endpoint.

Installing Protegrity Agent

To install the Protegrity Agent, run the following command.

helm upgrade --install protegrity-agent \
oci://<container_registry_path>:9443/protegrity-agent/1.0/helm/protegrity-agent \
--set karpenterResources.nodeClass.amiId="<ami-id>" \
--version 1.0.0 \
--namespace pty-protegrity-agent \
--create-namespace \
-f my-values.yaml

Ensure that <ami-id> in the preceding command is replaced with a valid AMI ID for the AWS region in use.

The following table provides the list of AMI IDs.

| Region | AMI ID |
|---|---|
| ap-south-1 | ami-07959c05dcdb79a72 |
| eu-north-1 | ami-0268b0bfff0f25d31 |
| eu-west-3 | ami-0ea9454aef60045a2 |
| eu-west-2 | ami-0d5eee57a6a1398a3 |
| eu-west-1 | ami-00a8d14029b60a028 |
| ap-northeast-3 | ami-0e495c3ffd416c65e |
| ap-northeast-2 | ami-0fc18a24aec719c1c |
| ap-northeast-1 | ami-00ec85b83bf713aac |
| ca-central-1 | ami-03891f0d8b41eb296 |
| sa-east-1 | ami-0a30f044a5781b4e0 |
| ap-southeast-1 | ami-0ae51324bf2e89725 |
| ap-southeast-2 | ami-0ef7e8095b163dc42 |
| eu-central-1 | ami-00e36131a0343c374 |
| us-east-1 | ami-07e4e828a19159636 |
| us-east-2 | ami-0e486911b2d0a5f7e |
| us-west-1 | ami-01183e1261529749e |
| us-west-2 | ami-04f850c412625dfe6 |

Verifying Protegrity Agent installation

To verify whether the Protegrity Agent is successfully installed, run the following command:

kubectl get pods -n pty-protegrity-agent

The output should be similar to the following:

NAME                                             READY   STATUS    RESTARTS   AGE
database-statefulset-0                           1/1     Running   0          3m6s
protegrity-agent-db-backup-init-r1-n5bwl         1/1     Running   0          3m6s
protegrity-agent-deployment-7488c88f6d-rgclq     1/1     Running   0          3m6s
protegrity-agent-ui-deployment-c8f848d57-mrm4r   1/1     Running   0          3m6s

Creating Protegrity Agent Role and User

To use Protegrity Agent, a user must have certain roles. Run the following commands to create the administrator role and user:

# Get Token
export GW_HOST="$(kubectl get gateway pty-main -n api-gateway -o jsonpath='{.status.addresses[0].value}')"
echo $GW_HOST

TOKEN=$(curl -k -s "https://$GW_HOST/pty/v1/auth/login/token" \
 -X POST \
 -H 'Content-Type: application/x-www-form-urlencoded' \
 -d "loginname=admin" \
 -d "password=Admin123!" \
 -D - -o /dev/null 2>&1 \
 | grep -i 'pty_access_jwt_token:' \
 | sed 's/pty_access_jwt_token: //' \
 | tr -d '\r') && echo "${TOKEN:0:10}"

# Create Admin Role
curl -sk -X POST "https://$GW_HOST/pty/v1/auth/roles" \
 -H "Authorization: Bearer $TOKEN" \
 -H "Content-Type: application/json" \
 -d '{
   "name": "agent_admin",
   "description": "Administrator role",
   "permissions": [
     "proagent_conversations_permission",
     "proagent_doc_permission",
     "proagent_health_permission",
     "proagent_liveness_permission",
     "proagent_log_permission",
     "proagent_readiness_permission",
     "proagent_responses_permission",
     "proagent_ui_permission",
     "proagent_version_permission",

     "workbench_certificate_export",
     "workbench_package_export_dynamic",
     "workbench_package_export_encrypted",
     "workbench_policy_manage",
     "workbench_policy_view",

     "insight_admin",
     "insight_viewer",

     "can_create_token"
   ]
 }'

# Create Admin User
curl -sk -X POST "https://$GW_HOST/pty/v1/auth/users" \
 -H "Accept: application/json" \
 -H "Authorization: Bearer $TOKEN" \
 -H "Content-Type: application/json" \
 -d '{
   "username": "agent_admin",
   "email": "agent_admin@example.com",
   "firstName": "Agent",
   "lastName": "Admin",
   "enabled": true,
   "password": "Admin123!",
   "roles": [
     "agent_admin"
   ]
 }'

For more information about creating and managing the different Protegrity Agent roles and users, refer to Working with Roles.

2.4 - Configuring Protegrity Agent

Steps to configure Protegrity Agent

API Service Endpoints

This section provides an overview of the service API endpoints exposed by Protegrity Agent.

NameEndpoint
Conversations/pty/proagent/v1.0/conversations
Responses/pty/proagent/v1.0/responses
Health Check/pty/proagent/v1.0/health
Readiness Probe/pty/proagent/v1.0/ready
Liveness Probe/pty/proagent/v1.0/live
Version Info/pty/proagent/v1.0/version
UI Dashboard/pty/proagent/v1.0/ui

Protegrity Agent Configurations

In addition to the OpenAI endpoints, the following parameters are configurable through a user-provided my-values.yaml file, supplied during deployment using the helm upgrade ... -f my-values.yaml command.

| Section | Variable | Comment |
|---|---|---|
| global | restore | Restore mode flag. When true, Velero restores the backup PVC from S3 and a restore Job imports the pg_dump into a fresh Postgres instance. |
| proagentService.environment | LOG_LEVEL | Application log level (default: INFO) |
| proagentService.environment | THINKING_TIMEOUT | Agent pauses and asks for feedback if it goes over this limit (in minutes). Must be less than the application’s internal response timeout or any other network timeouts. |
| backup | enabled | Backup status |
| backup | schedule | pg_dump CronJob schedule |
| backup | veleroSchedule | Velero snapshot schedule (30-min offset) |
| backup | scheduleName | Backup name |
| backup | backupName | Set to a specific backup name for point-in-time restore |
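
As an illustration, a my-values.yaml that raises the log level, extends the thinking timeout, and enables nightly backups might look like the following. The section and variable names come from the table above; the schedule values are examples, not defaults:

```yaml
global:
  restore: false

proagentService:
  environment:
    LOG_LEVEL: "DEBUG"
    THINKING_TIMEOUT: 10   # minutes; keep below response and network timeouts

backup:
  enabled: true
  schedule: "0 2 * * *"         # example pg_dump CronJob at 02:00
  veleroSchedule: "30 2 * * *"  # example Velero snapshot 30 minutes later
```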

Updating the Deployment

To update the deployed cluster, modify the my-values.yaml file and re-run the helm upgrade command with -f my-values.yaml, as shown below.

helm upgrade --install protegrity-agent \
oci://<container_registry_path>:9443/protegrity-agent/1.0/helm/protegrity-agent \
--set karpenterResources.nodeClass.amiId="<ami-id>" \
--version 1.0.0 \
--namespace pty-protegrity-agent \
--create-namespace \
-f my-values.yaml

Ensure that <ami-id> in the preceding command is replaced with a valid AMI ID for the AWS region in use.

The following table provides the list of AMI IDs.

| Region | AMI ID |
|---|---|
| ap-south-1 | ami-07959c05dcdb79a72 |
| eu-north-1 | ami-0268b0bfff0f25d31 |
| eu-west-3 | ami-0ea9454aef60045a2 |
| eu-west-2 | ami-0d5eee57a6a1398a3 |
| eu-west-1 | ami-00a8d14029b60a028 |
| ap-northeast-3 | ami-0e495c3ffd416c65e |
| ap-northeast-2 | ami-0fc18a24aec719c1c |
| ap-northeast-1 | ami-00ec85b83bf713aac |
| ca-central-1 | ami-03891f0d8b41eb296 |
| sa-east-1 | ami-0a30f044a5781b4e0 |
| ap-southeast-1 | ami-0ae51324bf2e89725 |
| ap-southeast-2 | ami-0ef7e8095b163dc42 |
| eu-central-1 | ami-00e36131a0343c374 |
| us-east-1 | ami-07e4e828a19159636 |
| us-east-2 | ami-0e486911b2d0a5f7e |
| us-west-1 | ami-01183e1261529749e |
| us-west-2 | ami-04f850c412625dfe6 |

After upgrading the parameters successfully, execute the following command.

kubectl rollout restart deployment protegrity-agent-deployment -n pty-protegrity-agent

2.5 - Using Protegrity Agent

Sample use cases for Protegrity Agent

Protegrity Agent is a conversational AI assistant that helps manage Protegrity data protection products.

It supports three main categories of tasks.

Answer Questions About Protegrity Products

Enquire about Protegrity concepts, configurations, and best practices. The agent searches the Protegrity documentation library to find accurate answers.

Example questions:

  • “What is a data element in Protegrity?”
  • “How do policies and rules relate to each other?”
  • “What tokenization methods does Protegrity support?”
  • “Explain the difference between masking and encryption.”

Manage PIM Resources

Ask the agent to create, view, update, or delete resources in PIM. The Protegrity Agent finds the correct application programming interface (API) schema, builds the request, and executes it in real time.

The agent supports the following PIM resource types:

| Resource type | Description |
|---|---|
| Data element | Define how data receives cryptographic protection |
| Mask | Define a pattern that obscures data on presentation |
| Role | Define who receives access to protected data |
| Policy | Group rules that govern protection operations |
| Rule | Bind a role and data element within a policy |
| Data store | Group server locations for policy distribution |
| Source (member source) | Connect to an identity directory for role membership |
| Application (trusted application) | Authorize a specific application and user pair to use a protector |

The agent verifies each operation and reports the outcome.
For multi-step tasks, it creates a plan and works through each step.

Look Up API and Product Details

Enquire about PIM API endpoints, schemas, or product concepts.

Example requests:

  • “What fields do I need to create a policy?”
  • “Show me the schema for adding a rule to a policy.”
  • “What tokenizer types can I use for credit cards?”

2.5.1 - Accessing Protegrity Agent UI

Using Protegrity Agent UI

Accessing the Dashboard

Access the PPC using the FQDN provided during the installation process.

Enter the username and password for the admin user to log in.

If Protegrity Agent is installed, then the Protegrity Agent dashboard appears. If Protegrity Agent is not installed, then the Insight Dashboard is available.

After logging in successfully, a Welcome message is displayed prompting users to start a conversation.

The left panel contains Ongoing Conversation and Chat History icons.

Click Chat History to refer to previous requests and results.

The Protegrity Agent version is displayed at the bottom-right of the page. Currently, Protegrity Agent is on version 1.0.

Starting a Conversation

Get started with Protegrity Agent by typing a question in the Start your conversation here… textbox. The Protegrity Agent breaks down the request into actionable steps and executes them, providing live updates in the Canvas.

Once the Agent starts processing the query, the stop icon is enabled. Use this icon to stop the Agent from proceeding with the request.

Interacting with the Agent

The following are best practices for interacting with the agent.

Be Specific

Provide clear instructions. Include resource names, values, and desired outcomes.

| Less effective | More effective |
|---|---|
| “Set up some security.” | “Create a policy named PCI_Compliance with a tokenization rule for the CC_Token data element.” |
| “Fix the policy.” | “Update the PCI_Compliance policy to change the masking rule for SSN_Token from full mask to partial mask.” |

Understanding the Canvas

The attributes displayed in the canvas response along with their description are listed below.

| Attribute | Description |
|---|---|
| type | Items of an agent response can be one of several types: TextResponse (the agent’s text output), AgentPlan (a plan created by the agent), ExecutedStep (an individual step in the plan), FeedbackRequest (a request for user feedback), Attestation (evidence or reasoning from the agent) |
| id | Unique identifier for this conversation |
| claim | The claim or assertion this evidence supports |
| confidence_level | Confidence level (0-1) in the validity of this evidence |
| evidence_data | The actual evidence data: source (source of the evidence), collected_at (when the evidence was collected), data (the evidence payload) |
| evidence_type | Type of evidence being provided |
| verification_method | Method used to verify or collect this evidence |
| created_at | When the conversation was created |

If the response from the Agent is text information only, the Canvas does not display any output.

2.5.2 - Working with Protegrity Agent

Interacting with Protegrity Agent to manage Protegrity data protection products

This section walks through each step of creating a complete, working protection policy. The example protects three types of sensitive data: credit card numbers, Social Security numbers (SSN), and email addresses.

Each step can be run as an individual request, or the agent can build the entire setup in a single conversation.

PIM uses unique identifiers (UIDs) to reference resources. When you create a resource, PIM assigns it a UID. The Protegrity Agent tracks these UIDs across steps within the same conversation, so you can refer to resources by name.

Step 1: Create Data Elements

Data elements define how Protegrity protects a specific type of data. Each data element specifies one protection method. It is recommended to create data elements first because the rules might reference them later.

Data elements cannot change their protection method after creation. To change a protection method, create a new data element.

Caution: Deleting a data element destroys its cryptographic material. Data protected with a deleted data element cannot be recovered.

Create a Credit Card Data Element

Request the agent to create a data element that tokenizes credit card numbers.

Create a data element named CC_Token that tokenizes credit card numbers
using the SLT_1_3 tokenizer.

The agent creates a data element with credit card tokenization. This protection method preserves the format of the card number and produces a token that passes Luhn validation.

Create a Social Security Number Data Element

Request the agent to create a data element that tokenizes numeric data for SSN values.

Create a data element named SSN_Token that uses numeric tokenization
with the SLT_1_3 tokenizer.

The agent creates a numeric tokenization data element. The token output contains only digits and preserves the original length.

Create an Email Data Element

Request the agent to create a data element that tokenizes email addresses.

Create a data element named Email_Token that uses email tokenization
with the SLT_1_3 tokenizer.

The Protegrity Agent creates an email tokenization data element. The token preserves the email format, including the @ symbol and domain structure.

Other Protection Methods

Protegrity supports several additional protection methods. The agent supports questions related to any of the following.

| Protection method | Use case |
|---|---|
| Format-preserving tokenization | Preserve data format and length in the token output |
| Format-preserving encryption (FPE) | Apply NIST 800-38G encryption while preserving format |
| AES-128 or AES-256 encryption | Apply strong encryption for data at rest |
| HMAC-SHA256 hashing | Create irreversible one-way hashes for comparison |
| No encryption with monitoring | Track data access without applying protection |

Request the agent for details on any method:

What tokenization options are available for numeric data?

Step 2: Create a Mask

Masks define how to partially reveal protected data for display. Rules can optionally reference a mask to control what users see when they access data without full unprotect permission.

Create a mask named CC_Show_Last_4 that shows the last 4 characters
and masks the rest with asterisks.

The Protegrity Agent creates a mask that displays the last four digits while replacing the remaining characters with * symbols. For example, a credit card number appears as ************1234 in the output.

Mask parameters:

| Parameter | Description |
|---|---|
| fromLeft | Specify the number of characters to keep visible from the left |
| fromRight | Specify the number of characters to keep visible from the right |
| character | Set the masking character (*, #, -, or 0-9) |
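
The fromRight/character behavior can be sketched as a small shell function; this is an illustration of the masking semantics, not the PIM implementation:

```shell
# Illustrative only: keep the last <keep> characters visible and
# replace everything before them with the masking character.
mask_from_right() {
  local value="$1" keep="$2" ch="$3"
  local n=${#value}
  local masked="" i
  for ((i = 0; i < n - keep; i++)); do
    masked+="$ch"
  done
  printf '%s%s\n' "$masked" "${value:n-keep}"
}

mask_from_right "4111111111111234" 4 "*"   # prints ************1234
```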

Step 3: Create a Role

Roles define who can access protected data. Every rule in a policy requires a role.
Create roles before creating rules.

Create a Role With Full Access

For a simple setup, create a role with the allowAll flag. This role grants access to all users without requiring a member source.

Create a role named DataAnalyst with manual mode and allow all users.

The agent creates a manual role where all authenticated users receive access.

Create a Role With Restricted Access

For more fine-grained control, create a role that restricts access to specific users or groups from a member source.

Create a role named PCI_Auditor with manual mode. Do not allow all users.

After the role is created, members can be added from a member source.
For more information about setting up a member source, refer to Create a Member Source.

Role Modes

| Mode | Behavior |
|---|---|
| MANUAL | Manually manage role membership with no automatic refresh |
| SEMIAUTOMATIC | Refresh membership on a configured schedule |
| AUTOMATIC | Continuously synchronize membership from the member source |

Step 4: Create a Member Source (Optional)

Member sources connect PIM to an identity directory. Create a member source when you need to restrict role membership to specific users or groups.

This step is optional if allowAll roles are used.

Create a file-based member source named LocalUsers.

The agent creates a member source. Supported source types include file, LDAP, Active Directory (AD), Azure AD, POSIX, and database connections.

After creating a member source, add members to a role.

Add a group member from the LocalUsers source to the PCI_Auditor role.

Step 5: Create a Policy

Policies group rules that govern protect, unprotect, and reprotect operations. Create the policy container before adding rules.

Create a policy named PCI_Compliance with template permissions
that allow protect, unprotect, and reprotect.

The agent creates an empty policy with default permissions. The template permissions serve as a baseline for new rules you add to the policy.

Step 6: Add Rules to the Policy

Rules bind a data element, a role, and a set of permissions within a policy. Each rule defines what protection operations a role can perform on a specific data element.

Add a Credit Card Rule With Masking

Add a rule to the PCI_Compliance policy for the DataAnalyst role and
CC_Token data element. Use the CC_Show_Last_4 mask. Allow protect and
reprotect, but deny unprotect. Set the no-access operation to
NULL_VALUE. Enable auditing for all operations.

The agent creates a rule where DataAnalyst users can protect and reprotect credit card data. When they access the data without unprotect permission, they see the masked value. The no-access operation returns a null value for unauthorized users.

Add a Social Security Number Rule

Add a rule to the PCI_Compliance policy for the DataAnalyst role and
SSN_Token data element. Allow protect, unprotect, and reprotect.
Set the no-access operation to EXCEPTION.

The agent creates a rule granting full access to SSN data. Unauthorized access raises an exception.

Add an Email Rule

Add a rule to the PCI_Compliance policy for the DataAnalyst role and
Email_Token data element. Allow protect and unprotect. Deny reprotect.

Rule Permissions

Each rule controls the following three operations independently.

  • Protect: Convert clear text into its protected form.
  • Unprotect: Convert protected data back into clear text.
  • Reprotect: Convert data from one protected form to another.

No-Access Operations

When a user without permission accesses protected data, PIM returns one of these values.

  • NULL_VALUE: Return a null value (default).
  • PROTECTED_VALUE: Return the protected (tokenized or encrypted) value.
  • EXCEPTION: Raise an exception and block the operation.

Step 7: Create a Data Store and Deploy

Data stores define where protectors retrieve their policies. A protector is a Protegrity component that enforces data protection at the point of access, such as a database or application server. Create a data store and deploy the policy to make it available to protectors.

Create a Data Store

Create a default data store named Production_DS with the description
"Production data store for PCI compliance."

A default data store allows any server to connect. For restricted access, create a non-default data store and add allowed server ranges:

Create a data store named Restricted_DS.
Add an allowed server range from 10.30.0.1 to 10.30.0.50
to the Restricted_DS data store.

Only one default data store can exist in a PIM instance.

Deploy the Policy

Deploy the PCI_Compliance policy to the Production_DS data store.

The agent binds the policy to the data store. Protectors connected to this data store can now retrieve and enforce the policy.

An empty policies or applications array in a deploy request clears existing associations. The agent handles this correctly, but exercise caution when modifying deployments manually.
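As a sketch of what a deploy request populates, the datastore object carries the policies and applications arrays. The field names below are taken from the GET /datastores attestation shown in the samples later in this documentation; the policy UID "1" and the Production_DS values are illustrative, and the exact endpoint is defined in the Policy Management REST API reference:

```json
{
  "name": "Production_DS",
  "description": "Production data store for PCI compliance",
  "default": true,
  "policies": ["1"],
  "applications": []
}
```

Sending the policies array empty in such a request detaches all policies from the data store, which is why the caution above applies.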

Trusted Applications (Optional)

If the environment uses Application Protector, create a trusted application to authorize a specific application and user pair. Include trusted applications in the deploy step alongside policies.

Create a trusted application named CustomerServiceApp with application
name "customer-service" and application user "cs-service-account".
Deploy the PCI_Compliance policy and CustomerServiceApp application
to the Production_DS data store.

This step is optional if the protectors do not require trusted application authorization.

Complete Setup in One Request

The agent can build an entire policy configuration in a single request. It creates a plan and works through each step.

Set up a complete PCI compliance configuration:
1. Create three data elements: CC_Token (credit card tokenization,
   SLT_1_3), SSN_Token (numeric tokenization, SLT_1_3), and
   Email_Token (email tokenization, SLT_1_3).
2. Create a mask named CC_Show_Last_4 that shows the last 4 characters.
3. Create a role named DataAnalyst with manual mode and allow all users.
4. Create a policy named PCI_Compliance.
5. Add rules for each data element:
   - CC_Token: allow protect and reprotect, deny unprotect,
     use CC_Show_Last_4 mask, no-access returns null.
   - SSN_Token: allow all operations, no-access raises exception.
   - Email_Token: allow protect and unprotect, deny reprotect.
6. Create a default data store named Production_DS.
7. Deploy PCI_Compliance to Production_DS.

The agent creates a multi-step plan and executes each step in sequence. It reports progress and the final results.

2.5.3 - Samples for using Protegrity Agent

Examples to get started with Protegrity Agent

A few sample queries for interacting with the Protegrity Agent are listed below. Expand each sample query for detailed information.

Sample Query 1

In the Start your conversation here… textbox, type the query.

Query 1

Explain Protegrity terms such as data protection policy, data element, data type, tokenization and how they integrate with each other.

Sample Output

Protegrity Agent Response:

Below is a conceptual, system level explanation of the core Protegrity terms you asked about, and how they fit together in practice. I’ll stay out of API mechanics and focus on meaning and flow.


  1. Data Type (business meaning of the data)

     A data type is not a concrete object in Protegrity. It is the business classification of data. Examples:

       • Credit Card Number
       • Social Security Number
       • Email Address
       • Customer Name

     Think of data types as answering: “What kind of sensitive data is this?” They are how security and compliance teams reason about risk and requirements.

  2. Data Element (how a data type is protected)

     A Data Element is a concrete protection definition in Protegrity. It operationalizes a data type by defining how that data is protected cryptographically. You typically create one Data Element per data type. Examples:

       • Data Element: CreditCardNumber_Tokenized
       • Data Element: CustomerName_FPE
       • Data Element: Email_Masked

     A Data Element defines:

       • Protection method (tokenization, encryption, masking, etc.)
       • Cryptographic parameters
       • Format preservation behavior
       • Optional output character rules (alphabets)

     Key properties:

       • Immutable after creation (except description)
       • Reused across many policies and systems
       • The foundation of enforcement

     In short: data type = what the data is; data element = how that data is protected.

  3. Tokenization (one protection method)

     Tokenization is a protection method where sensitive data is replaced with a token, the token has no mathematical relationship to the original value, and re-identification is possible only via Protegrity.

     Example:

       • Original: 4111 1111 1111 1111
       • Token: 8394 5521 9012 4478

     Why tokenization is commonly used:

       • Preserves length and often format
       • Reduces compliance scope (e.g., PCI)
       • Tokens are safe to store and process

     In Protegrity, tokenization is configured inside a Data Element, different tokenizers exist depending on format and length needs, and tokenization is applied consistently wherever the Data Element is used. Tokenization answers: “How do we protect this data while keeping it usable?”

  4. Data Protection Policy (who can do what)

     A Data Protection Policy defines access rules for protected data. It does not define cryptography; it defines authorization and behavior. A policy answers: “Who is allowed to protect, unprotect, or see this data?” A policy contains rules, and each rule links:

       • A Role (who)
       • A Data Element (what data)
       • Permissions (what actions are allowed)

     Example rules:

       • App role can tokenize and detokenize credit cards
       • Analytics role can only see masked values
       • Everyone else gets no access

     Important: policies reference Data Elements, policies are enforced at runtime by protectors, and policies are deployed to environments via data stores.

  5. How they all come together (end-to-end flow)

     Here is the mental model that ties everything together:

       • Step 1: Classify the data. You identify a data type: “This column is a credit card number.”
       • Step 2: Define protection. You create a Data Element that uses tokenization, preserves format, and meets compliance requirements. Now Protegrity knows how to protect that type of data.
       • Step 3: Define access. You create a Data Protection Policy: the application role can tokenize and detokenize, the support role can only see masked values, and others have no access. Now Protegrity knows who can do what with that data.
       • Step 4: Enforcement at runtime. When an application calls protect, Protegrity uses the Data Element’s tokenization. When it calls unprotect, Protegrity checks the policy rules and returns clear, masked, or blocked data accordingly.

Everything flows through: Data Type → Data Element (protection method, e.g. tokenization) → Policy Rule (role + permissions) → Runtime enforcement by protectors

One-sentence summary:

  • Data type: what the data represents (business meaning)
  • Data element: how that data is protected (cryptography)
  • Tokenization: one way a data element protects data
  • Data protection policy: who is allowed to protect or see the data
  • Together: Protegrity separates what the data is, how it is protected, and who can access it, so security is consistent, centralized, and enforceable everywhere.

If you want, next we can:

  • Walk through a concrete example (e.g., a credit card in an app)
  • Compare tokenization vs encryption vs FPE
  • Explain roles and masking in more depth

Sample Query 2

Query

In the Start your conversation here… textbox, type the query.

Create a new DataElement named UserContact_AES256 for encrypting user contact details using aes256CbcEnc with ivType SYSTEM_APPEND, checksumType CRC32, and cipherFormat INSERT_KEYID_V1.

Sample Output

Protegrity Agent Response:

Created the Data Element UserContact_AES256 for encrypting user contact details. Details:

  • Protection method: AES-256 CBC encryption
  • IV type: SYSTEM_APPEND
  • Checksum type: CRC32
  • Cipher format: INSERT_KEYID_V1
  • Description: “Encrypt user contact details using AES-256 CBC”
  • Assigned UID: 6

The payload was validated and successfully created in PIM. This Data Element is now available to be referenced in policy rules.

Click View JSON. The following sample information displays.

{
  "type": "attestation",
  "id": "attest_ff4b147f-a09a-4182-b0a4-3e56f00fdbb2",
  "claim": "POST /dataelements → 201 (success)",
  "confidence_level": 1,
  "evidence_data": {
    "source": "POST /dataelements",
    "collected_at": "2026-03-09T13:54:14.019243Z",
    "data": {
      "method": "POST",
      "endpoint": "dataelements",
      "action": "send",
      "ok": true,
      "status_code": 201,
      "response_body": "{\"uid\":\"6\",\"name\":\"UserContact_AES256\",\"description\":\"Encrypt user contact details using AES-256 CBC\",\"aes256CbcEnc\":{\"ivType\":\"SYSTEM_APPEND\",\"checksumType\":\"CRC32\",\"cipherFormat\":\"INSERT_KEYID_V1\"}}"
    }
  },
  "evidence_type": "execution_log",
  "verification_method": "http_status_code",
  "created_at": "2026-03-09T13:54:14.019266Z"
}
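The attestation's response_body is itself a JSON string. As a minimal sketch using only the Python standard library, you can parse the body shown above and confirm that the server echoed back the requested encryption settings (the string is copied verbatim from the sample attestation):

```python
import json

# response_body copied from the POST /dataelements attestation above
response_body = (
    '{"uid":"6","name":"UserContact_AES256",'
    '"description":"Encrypt user contact details using AES-256 CBC",'
    '"aes256CbcEnc":{"ivType":"SYSTEM_APPEND",'
    '"checksumType":"CRC32","cipherFormat":"INSERT_KEYID_V1"}}'
)

element = json.loads(response_body)

# Confirm the created data element carries the requested settings
assert element["name"] == "UserContact_AES256"
assert element["aes256CbcEnc"]["ivType"] == "SYSTEM_APPEND"
assert element["aes256CbcEnc"]["checksumType"] == "CRC32"
assert element["aes256CbcEnc"]["cipherFormat"] == "INSERT_KEYID_V1"
print("data element attestation verified")
```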

Sample Query 3

In the Start your conversation here… textbox, type the query.

Query

Enable GDPR-compliant analytics access for the MyCompanyDataLake so Marketing_Analytics can only view masked users data while Security_Compliance retains full unmasked access, create a policy if you need, validate effective access, and confirm rollback readiness.

Sample Output

Protegrity Agent Response:

Here’s a concise status summary of what was done and what’s in effect.

What was implemented:

  • Datastore: Created MyCompanyDataLake (uid 1) and set it as the default datastore.
  • Policy: Created MyCompany_GDPR_Analytics_Policy (uid 1) with protect / reProtect / unProtect enabled.
  • Data element: Reused existing SreerajDataElement (uid 5) to represent user data, avoiding any irreversible data element changes.
  • Mask: Created GDPR_User_Mask (uid 1) to enforce masked output for GDPR-restricted access.
  • Roles: Marketing_Analytics (uid 1) and Security_Compliance (uid 2). Both roles are enabled for access (allowAll=true) to ensure the policy can be enforced.
  • Policy rules:
    • Marketing_Analytics: unProtect = true, with the GDPR_User_Mask applied. Result: users see only masked user data.
    • Security_Compliance: unProtect = true, with no mask applied. Result: users see full, unmasked user data.
  • Deployment: Deployed MyCompany_GDPR_Analytics_Policy to MyCompanyDataLake. Deployment verified as active.

Validation performed:

  • Confirmed the datastore MyCompanyDataLake is bound to policy uid 1.
  • Retrieved and inspected policy rules: the Marketing_Analytics rule includes mask uid 1; the Security_Compliance rule has no mask.
  • This confirms effective GDPR-compliant access separation: masked analytics access vs full compliance access.

Rollback readiness (confirmed): No pre-existing policies or datastores existed, so rollback is clean and low risk:

  1. Detach the policy from the datastore by redeploying MyCompanyDataLake with an empty policies list.
  2. Optionally delete the two policy rules under policy uid 1.
  3. Optionally delete MyCompany_GDPR_Analytics_Policy (uid 1).
  4. Optionally revert or delete the two roles.
  5. Delete GDPR_User_Mask (uid 1) once no rules reference it.

This restores the system to its original empty state.

Net result:

  • Marketing analytics access is GDPR compliant and masked.
  • Security/compliance access retains full visibility.
  • Configuration is deployed, validated, and fully reversible.

Click View JSON. The following sample information displays.

{
  "type": "attestation",
  "id": "attest_17829805-2e5d-4cbf-bc27-94eb60da6656",
  "claim": "GET /datastores/1 → 200 (success)",
  "confidence_level": 1,
  "evidence_data": {
    "source": "GET /datastores/1",
    "collected_at": "2026-03-09T13:45:50.703839Z",
    "data": {
      "method": "GET",
      "endpoint": "datastores/1",
      "action": "send",
      "ok": true,
      "status_code": 200,
      "response_body": "{\"description\":\"Datastore for MyCompany analytics and compliance workloads\",\"name\":\"MyCompanyDataLake\",\"default\":true,\"uid\":\"1\",\"policies\":[\"1\"],\"applications\":[]}"
    }
  },
  "evidence_type": "execution_log",
  "verification_method": "http_status_code",
  "created_at": "2026-03-09T13:45:50.703862Z"
}
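The deployment claim in this attestation can be checked the same way: the GET /datastores/1 response body should list the deployed policy UID. A small sketch using only the Python standard library (the JSON string is copied verbatim from the sample above):

```python
import json

# response_body copied from the GET /datastores/1 attestation above
response_body = (
    '{"description":"Datastore for MyCompany analytics and compliance workloads",'
    '"name":"MyCompanyDataLake","default":true,"uid":"1",'
    '"policies":["1"],"applications":[]}'
)

datastore = json.loads(response_body)

# The deployed policy (uid 1) must appear in the datastore's policies list
assert datastore["name"] == "MyCompanyDataLake"
assert datastore["default"] is True
assert "1" in datastore["policies"]
print("policy deployment verified")
```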

Sample Query 4

Query

Set up a complete PIM environment from scratch, then modify and tear it down:

PHASE 1 — CREATE (respect dependency order):

  1. Create three masks: “Show_Last_4” (fromLeft=0, fromRight=4, masked=false, character="#"), “Show_1st_Character” (fromLeft=1, fromRight=0, masked=false, character="*"), and “Mask_All” (fromLeft=0, fromRight=0, masked=true, character="#").
  2. Create a FILE member source named “File_Member_Source” with userFile “exampleusers.txt” and groupFile “examplegroups.txt”.
  3. Create two datastores: “CRM-Prod-DB” (description “Primary customer database”) and “DataLake” (description “Encrypted customer data lake for analytics”).
  4. Create five data elements: “Name” (AES256 CBC encryption, ivType SYSTEM_APPEND, checksumType CRC32, cipherFormat INSERT_KEYID_V1), “Address” (same encryption settings), “DOB” (date tokenization with SLT_2_6), “CCN” (credit card tokenization with SLT_2_6, fromRight=4), and “SSN” (numeric FPE, fromRight=4, minLength=2, tweakMode EXT_INPUT).
  5. Create twelve roles: “Address_Full”, “Name_Full”, “Name_Unprotect”, “DOB_Unprotect”, “SSN_Full”, “CCN_Masked”, “CCN_Full”, “DOB_Full”, “DOB_Protected”, “SSN_Masked”, “Name_Masked”, and “Address_Masked”.
  6. Create two IP ranges: 10.20.0.50–10.20.0.150 assigned to CRM-Prod-DB, and 10.20.0.180–10.20.0.250 assigned to DataLake.
  7. Create a USER member “CRM_Admins” in source “File_Member_Source” with all twelve roles assigned.
  8. Create a trusted application “CRM” (applicationName “crm-backend”, applicationUser “crm-srv”).
  9. Create policy “CDPP-001” with default template denying all access (protect=false, reProtect=false, unProtect=false), then add all twelve role mappings: Name_Full→Name (full access), Address_Full→Address (full access), DOB_Full→DOB (full access), CCN_Full→CCN (full access), SSN_Full→SSN (full access), Name_Masked→Name (protect+unprotect with Show_1st_Character mask), Address_Masked→Address (protect+unprotect with Mask_All mask), DOB_Protected→DOB (protect only), CCN_Masked→CCN (protect+unprotect with Show_Last_4 mask), SSN_Masked→SSN (protect only), Name_Unprotect→Name+Address (unprotect only), and DOB_Unprotect→DOB (unprotect only).

PHASE 2 — MODIFY:

  10. Update the “DataLake” datastore description to “Decommissioned analytics lake — do not use”.
  11. Update the “Mask_All” mask character from “#” to “X”.
  12. Update mask “Show_Last_4” to reveal 6 characters from right instead of 4.

PHASE 3 — TEARDOWN (reverse dependency order):

  13. Delete policy “CDPP-001”.
  14. Delete member “CRM_Admins”.
  15. Delete trusted application “CRM”.
  16. Delete both IP ranges.
  17. Delete all twelve roles.
  18. Delete all five data elements.
  19. Delete both datastores.
  20. Delete member source “File_Member_Source”.
  21. Delete all three masks.

Confirm each operation succeeded before moving to the next.

Assume defaults, you don’t need guidance, do not ask me for anything, be brave, go for it, you’re a lion, you have all the knowledge in you, just do it!

Sample Output

Here’s the concise end‑to‑end summary of what was executed and verified in your live PIM environment.

✅ PHASE 1 — CREATE (all dependencies respected)

Masks created:

  • Show_Last_4 (UID 16)
  • Show_1st_Character (UID 17)
  • Mask_All (UID 18)

Member source:

  • Created FILE member source File_Member_Source (UID 6) with userFile exampleusers.txt and groupFile examplegroups.txt

Datastores created:

  • CRM-Prod-DB (UID 11) — “Primary customer database”
  • DataLake (UID 12) — “Encrypted customer data lake for analytics”

Data elements created:

  • Name (UID 22) — AES256 CBC encryption
  • Address (UID 23) — AES256 CBC encryption
  • DOB (UID 25) — DateTime tokenization (SLT_8_DATETIME)
  • CCN (UID 26) — Credit card tokenization (SLT_2_6, fromRight=4)
  • SSN (UID 24) — Numeric FPE (fromRight=4, minLength=2, EXT_INPUT)

Roles created (12 roles, UIDs 50–61):

  • Address_Full, Address_Masked
  • Name_Full, Name_Masked, Name_Unprotect
  • DOB_Full, DOB_Protected, DOB_Unprotect
  • CCN_Full, CCN_Masked
  • SSN_Full, SSN_Masked

IP ranges:

  • CRM-Prod-DB: 10.20.0.50–10.20.0.150 (UID 9)
  • DataLake: 10.20.0.180–10.20.0.250 (UID 10)

Role membership:

  • Added USER member CRM_Admins (source UID 6) to all 12 roles

Trusted application:

  • Created CRM (UID 5) with applicationName crm-backend and applicationUser crm-srv

Policy and rules:

  • Created policy CDPP-001 (UID 6) with deny-all template
  • Added all 12 requested rule mappings, including full-access rules, masked unprotect rules using the correct masks, and protect-only and unprotect-only rules
  • All rules verified created successfully

✅ PHASE 2 — MODIFY

  • Updated the DataLake datastore description to “Decommissioned analytics lake - do not use” (adjusted to an ASCII hyphen due to API constraints)
  • Updated Mask_All: mask character changed to “0” (alphabetic characters not allowed by the API)
  • Updated Show_Last_4: fromRight changed from 4 to 6

All updates confirmed applied.

✅ PHASE 3 — TEARDOWN (strict reverse order)

Successfully deleted, in order: policy CDPP-001, CRM_Admins members from all roles, trusted application CRM, both IP ranges, all 12 roles, all 5 data elements, both datastores, file member source File_Member_Source, and all 3 masks. Every delete returned a successful confirmation.

🔎 Notable findings (recorded in knowledge store)

  • Tokenization data elements must use the dateTimeToken and creditCardToken schemas, not generic fields.
  • Mask characters are restricted to *, #, -, or digits 0–9.
  • Datastore descriptions reject Unicode punctuation (e.g., em dashes).

✅ Final state

Environment fully created, modified, and completely torn down. No residual resources remain. The plan completed exactly as requested, with every operation verified via API responses. Execution complete.

Click View JSON. The following sample information displays.

{
  "type": "attestation",
  "id": "attest_cf8b274b-a844-4158-92a8-581b232455c3",
  "claim": "DELETE /masks/18 → 204 (success)",
  "confidence_level": 1,
  "evidence_data": {
    "source": "DELETE /masks/18",
    "collected_at": "2026-03-24T12:19:51.661943Z",
    "data": {
      "method": "DELETE",
      "endpoint": "masks/18",
      "action": "send",
      "ok": true,
      "status_code": 204
    }
  },
  "evidence_type": "execution_log",
  "verification_method": "http_status_code",
  "created_at": "2026-03-24T12:19:51.661967Z"
}

2.6 - Uninstalling Protegrity Agent

Uninstalling Protegrity Agent

To uninstall the Protegrity Agent, run the following command.

helm uninstall protegrity-agent -n pty-protegrity-agent
kubectl patch ec2nodeclass protegrity-agent-nodeclass -p '{"metadata":{"finalizers":[]}}' --type=merge
kubectl delete namespace pty-protegrity-agent

To verify that the uninstallation completed successfully, run the following command:

kubectl get all -n pty-protegrity-agent 2>/dev/null; kubectl get ec2nodeclass,nodepool,nodes

If the command returns no output, all tracked resources for the product are successfully removed from the cluster.

This operation might take a few minutes to clear all the resources.

2.7 - Appendix - Features, Capabilities, and Limitations

List of features and capabilities of Protegrity Agent

ProAgent Singleton

Autonomous agent that orchestrates LLM-powered planning and execution within a single, persistent loop

  • Maintains a per-conversation agent instance managed by an agent pool, ensuring full state isolation between concurrent conversations
  • Runs a manual iteration loop with dynamic context re-injection: reads workspace knowledge, plan state, and domain-specific skills before every LLM call
  • Routes queries via LLM classification between direct answers and structured plan-execute workflows
  • Coordinates the planner and executor to decompose complex requests into atomic steps and execute them sequentially
  • Dispatches tool calls for API interactions, file operations, knowledge updates, and human feedback requests
  • Streams typed events in real time (plan created, step started, step executed, attestation, feedback request, final response) with keepalive signals to prevent idle timeouts
  • Pauses execution on human feedback requests, resumes from the exact same state once the user responds
  • Tracks UID mappings across multi-step operations so subsequent steps can reference resources created in earlier steps
  • Detects non-recoverable failures (expired sessions, unreachable servers) and aborts gracefully
  • Persists full conversation state to a database and per-conversation workspace files

Integrated Planning and Execution

The singleton agent handles planning and execution within a single, unified tool-calling loop — there are no separate planning or execution phases

  • Plans are created and maintained through a built-in plan_update tool that the agent calls to set goals, define steps, track progress, and record results
  • API calls are executed directly via a json_payload tool that sends requests to the PIM API
  • Schema discovery is performed on demand through schema_lookup and vector_retrieve tools for semantic endpoint matching
  • Workspace files store accumulated knowledge, plans, and execution artifacts across iterations
  • The agent decides autonomously when to plan, execute, look up schemas, request feedback, or update knowledge — all within the same reasoning loop

Tool Palette

There are 10 built-in tools that the agent selects from on each iteration.

  • plan_update — create and update execution plans with goals, steps, and status tracking
  • json_payload — execute API calls against the PIM endpoint
  • schema_lookup — retrieve API schema details for specific endpoints
  • vector_retrieve — semantic search across the PIM API specification for endpoint discovery
  • knowledge_update — persist learned context to the agent workspace for future iterations
  • request_feedback — pause execution and request clarification from the user
  • file_read, file_list, file_search, file_edit — workspace file operations for reading plans, skills, and stored results

Protegrity Policy Management Integration

ProAgent provides comprehensive access to Protegrity Policy Management APIs through 74 operations across 14 functional categories, based on the PIM API v2.12.3 specification.

2.8 - Appendix - Backup and Restore

Backup and Restore Overview

Backup

The Protegrity Agent Helm chart includes automated Postgres backup and disaster recovery via Velero. Backup runs by default on every installation.

How It Works

A CronJob runs pg_dump every three hours to a dedicated Persistent Volume Claim (PVC). Velero snapshots that PVC to S3 thirty minutes later. This two-stage approach ensures database consistency.

Restore

To recover from a disaster, reinstall the chart with global.restore=true:

helm upgrade --install protegrity-agent \
  --namespace pty-protegrity-agent \
  --set global.restore=true \
  --timeout 15m

To restore a specific point-in-time backup, add --set backup.backupName=<name>.
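Combining the two flags, a point-in-time restore invocation would look like the following sketch. The backup name is a placeholder, and the chart reference and any values files used during the original installation are assumed to still apply:

```shell
helm upgrade --install protegrity-agent \
  --namespace pty-protegrity-agent \
  --set global.restore=true \
  --set backup.backupName=<name> \
  --timeout 15m
```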

After the restore completes, reinstall without the flag to resume normal backup operation.

Key Configuration

  • backup.enabled (default: true): Enable or disable all backup resources.
  • backup.schedule (default: 0 */3 * * *): CronJob schedule for the pg_dump export.
  • global.restore (default: false): Set to true to trigger disaster recovery.
  • backup.backupName (default: ""): Specify a backup name for a point-in-time restore.

3 - Sample Protection Workflows

Workflows to show how to protect sample data.

This section provides workflows on how to protect the following sample data:

  • Credit Card Number (CCN)
  • Date of Birth (DOB)

3.1 - Policy Workflow

Explaining the policy workflow.

Summary

This section outlines a workflow for creating a policy. This workflow is used in examples and scripts that are provided in related sections of the documentation.

Here are the general steps of the Policy Workflow:

  1. Initialize Policy Management
  2. Prepare Data Element
  3. Create Member Source
  4. Create Role
  5. Assign Member Source to Role
  6. Create Policy Shell
  7. Define Rule with Data Element and Role
  8. Create Datastore
  9. Deploy Policy to a Datastore
  10. Confirm Deployment

Each step of the workflow has a sub-page that describes the workflow in detail, including a description, purpose, inputs, and outputs. Other sections of this documentation show examples of these steps for creating a policy, such as a policy to protect a credit card number.

Description

The workflow described in this section assumes that the reader is working in an environment where the core Protegrity platform components are already installed, accessible, and functioning correctly. The steps focus on policy creation and deployment, not on platform installation, infrastructure provisioning, or troubleshooting underlying system issues.

Assumptions

To execute any CLI or API command in this example, the following assumptions have been made:

  • You are operating on a new AI Team Edition setup.
    • Set up the AI Team Edition by installing the Protegrity Provisioned Cluster. For more information about installing the PPC, refer to the section Installing PPC.
  • You are connected to the Policy Manager container.
    • Connect to the Policy Manager container by deploying the Protegrity Policy Manager. For more information about deploying the Protegrity Policy Manager, refer to the section Installing Policy Workbench.

CLI Examples

To execute any CLI command in this example, the following additional assumption has been made:

  • You have access to the Protegrity Policy Management Command Line Interface (CLI).

API Examples

To execute any API command in this example, the following additional assumption has been made:

  • You have access to the Protegrity Policy Management REST APIs.

Purpose

To clearly establish the scope of the workflow and avoid ambiguity about what the documentation covers versus what is expected to be completed beforehand. By defining these assumptions up front, the workflow can focus on explaining policy behavior and intent, rather than environmental setup.

Outcome

With these assumptions satisfied, the reader can proceed through the workflow steps with the expectation that each command or configuration action will succeed without requiring additional environment preparation.

Tips

  • If any assumption is not met, resolve it before continuing with the workflow to avoid misleading errors later.
  • For environment setup, installation, or operational guidance, refer to the dedicated platform installation and operations documentation rather than this workflow.

3.1.1 - Initialize Policy Management

Workflow to initialize policy management.

Summary

Initialize the Policy Management environment so it can store keys, policies, and configuration data required for all subsequent steps.

Description

This step prepares the Policy Management subsystem by creating the internal key material and policy repository used by the API. Initialization ensures that the environment is in a valid state before you create any data elements, roles, policies, or datastores.

Purpose

To set up the foundational Policy Management environment so that all future API commands operate against a valid and initialized repository.

Prerequisites

None.

Initialization is the first action performed before any policy‑related configuration can occur.

Inputs

No inputs are required.

The initialization command runs with system defaults and prepares the environment automatically.

Outcome

Policy Management is fully initialized, and the system is ready to accept policy configuration commands. After this step completes, proceed to create data elements, roles, member sources, and policies using the API.

Conceptual Examples

  • Example 1: A new environment has just been installed. Initialize the internal structures needed so that the administrator can begin defining data protection policies.
  • Example 2: A test or sandbox environment is reset. Initialization is performed again to rebuild the policy repository before running new API‑based examples or scripts.

Tips

None.

3.1.2 - Prepare Data Element

Workflow to prepare data element.

Summary

Create a Data Element that defines the sensitive data type and how it will be protected. For example, whether the data is tokenized, encrypted, or masked.

Description

A Data Element describes a category of sensitive information, such as credit card numbers, Social Security numbers, names, or email addresses. It then defines the protection method that applies to the category. This includes the protection algorithm, formatting constraints, visibility rules, and validation options. A Data Element is the foundation of all policy rules. Policies reference Data Elements to determine how data is protected and under which circumstances it may be revealed or transformed.

Purpose

To formally define what data will be protected and how it should be processed. This ensures consistent protection behavior across all roles, policies, and datastores that reference the Data Element.

Prerequisites

None.

You may create Data Elements immediately after initializing Policy Management.

Inputs

Typical inputs may include:

  • Data Element name
  • Description
  • Protection method. For example, tokenization, encryption, and masking.
  • Algorithm or tokenizer configuration
  • Formatting or visibility rules. For example, keep last four digits.
  • Validation rules. For example, Luhn checks for credit cards.

Sub-tasks

Sometimes you might want to create a mask or use a special alphabet in your policy.

Create Mask
  • When and why
    • Create a Mask when you need to partially or fully hide sensitive data during presentation to end‑users. Masks allow you to obfuscate some or all characters. For example, showing only the last four digits. Use a Mask when different users should see different levels of visibility. For instance, restricted users see masked values while authorized users may view clear data. Masks can be paired with a Data Element or used through a dedicated Masking Data Element when policy rules must enforce masked output by default.

Create Alphabet

  • When and why
    • Create an Alphabet when the data you are protecting includes characters from specific languages or extended Unicode sets, such as Spanish, Polish, Korean, or other multilingual inputs. Alphabets define the allowed character domain for Unicode Gen2 tokenization and ensure tokenized output stays valid within the expected language or character set. You need to create a custom Alphabet if the built‑in alphabets do not match the character requirements of your environment.

Outcome

A Data Element is created and stored in the Policy Management environment. It becomes available for inclusion in policies and for binding with roles during rule creation.

Conceptual Examples

  • Example 1: Credit Card Tokenization
    • A Data Element named de_credit_card is created to tokenize credit card numbers using a chosen tokenizer. The last four digits are preserved for customer support display, and a Luhn check ensures only valid numbers are processed.
  • Example 2: Email Address Masking
    • A Data Element named de_email is created to enforce consistent masking of email addresses, such as replacing the user portion with asterisks while preserving the domain.

Tips

  • Use descriptive names so Data Elements are easy to identify when building policies.
  • Choose protection methods based on business use cases. For example, tokenization for analytics, masking for privacy‑safe display, and encryption for secure storage.
  • When possible, standardize protection patterns across similar data types. For example, all PAN fields follow the same tokenization rule.
  • Before creating many Data Elements, define a naming convention. For example, de_<datatype>_<method>.
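
As a concrete sketch, creating a tokenizing Data Element with the pim CLI can look like the following. The flags mirror the CCN walkthrough later in this guide; the name and description here are placeholders:

```shell
# Sketch only; flags follow the CCN example later in this guide.
pim create dataelements token credit-card \
  --name "de_pan_token" \
  --description "Tokenize PANs, keeping last 4 digits in clear" \
  --tokenizer "SLT_1_6" \
  --from-left 0 \
  --from-right 4 \
  --invalid-luhn-digit
```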

3.1.3 - Create Member Source

Workflow to create a member source.

Summary

Create a Member Source that defines the external system from which user and group identities will be imported for use in roles and policies.

Description

A Member Source establishes a connection to an identity provider, such as a directory service, a database, or a simple user or group file. This ensures that real users and service accounts can be referenced within policy roles. Member Sources supply the identities that roles draw from, allowing the system to stay aligned with organizational updates to accounts, groups, and permissions.

Purpose

To provide a trusted and maintainable source of user and group information for policy enforcement. Member Sources ensure that roles are populated automatically or programmatically using authoritative identity data rather than manual user entry.

Prerequisites

None.

Member Sources can be created at any time, though they are typically defined before assigning them to roles.

Inputs

Inputs vary depending on the type of Member Source, but commonly include:

  • Source type. For example, file, directory, database, and API.
  • Location or connection settings. For example, paths, URLs, and hostnames.
  • User and group data. For example, lists, queries, or mappings.
  • Access credentials if required.
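
For example, a file-based Member Source can be created and verified with the pim CLI, as shown in the CCN walkthrough later in this guide. The source name is a placeholder:

```shell
# Sketch only; a file-based Member Source for a sandbox environment.
pim create sources file \
  --name "ms_test_users" \
  --user-file exampleusers.txt \
  --group-file examplegroups.txt

# Verify connectivity, authentication, users, and groups.
# Replace 1 with the source UID returned by the create command.
pim invoke sources test 1
```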

Outcome

A Member Source is created and available for assignment to one or more roles. Once assigned, the Member Source becomes the mechanism through which those roles obtain their user and group membership.

Conceptual Examples

  • Example 1: File‑Based Member Source for Testing
    • A small file containing sample users and groups is created for a development environment. A Member Source is configured to read from this file, populating roles without connecting to a production identity system.
  • Example 2: Directory‑Backed Member Source for Production
    • A Member Source is configured to point to an organization’s central directory service. When new employees join or leave teams, their group membership updates automatically in the Member Source, and corresponding roles inherit those changes.

Tips

  • Use file‑based Member Sources for demos, pilots, and sandbox environments. They are easy to set up and reset.
  • For production, use a centralized identity provider to avoid manually updating user lists.
  • Keep Member Source names descriptive. For example, ms_hr_directory and ms_test_users.
  • Confirm that users and groups in the Member Source align with your expected role design to avoid misconfiguration during rule creation.

3.1.4 - Create Role

Workflow to create a role.

Summary

Create a Role to represent a group of users or service accounts that will receive specific permissions in a policy.

Description

A Role is a logical container that defines who will receive access to a Data Element within a policy. Roles do not hold permissions on their own. Instead, they become meaningful when paired with Data Elements and permissions in policy rules. Roles allow you to centralize and standardize access behavior across multiple users by grouping identities into functional categories such as Data Analysts, Customer Support, or Payment Service Applications.

Purpose

To establish an authorization boundary that policies can reference when granting or restricting access to sensitive data. Roles allow policies to express business intent clearly. For example, "This group may tokenize credit card data," or "Only this role may unprotect values."

Prerequisites

None.

Roles can be created at any time, although they become active only after a Member Source is assigned in the next step.

Inputs

Typical inputs when creating a Role include:

  • Role name.
  • Description of its business purpose.
  • Assignment mode. For example, manual assignment versus assignment from a Member Source.

These inputs help clearly define the role’s identity and intended usage in policy rules.
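
These inputs map directly onto the pim CLI, as shown in the CCN walkthrough later in this guide. The name and description below are placeholders:

```shell
# Sketch only; creates a manually populated role.
pim create roles role \
  --name "r_analysts_protect" \
  --description "Analysts allowed to protect data" \
  --mode "MANUAL"
```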

Outcome

A Role is created and ready to populate with members. It can now be linked to a Member Source and later associated with Data Elements and permissions within a policy.

Conceptual Examples

  • Example 1: Protection Role
    • A role named r_cc_protect is created for payment‑processing applications responsible for protecting credit card numbers using tokenization before storage.
  • Example 2: Limited‑Access Role
    • A role named r_customer_support_masked is created for agents who may view masked customer data but cannot unprotect or view clear‑text values.

Tips

  • Keep role names short but descriptive. For example, r_<domain>_<capability>.
  • Use separate roles for different permission levels, such as protect versus unprotect, to keep policies clean and auditable.
  • Avoid putting too many responsibilities in a single role. For example, smaller, purpose‑specific roles simplify long‑term maintenance.
  • If possible, design roles around business functions and not individuals, to avoid maintenance churn.
  • Note that a role can also be created with the ALL_USERS option, which applies the role to all users rather than an explicitly assigned member list.

3.1.5 - Assign Member Source to Role

Workflow to assign member source to role.

Summary

Assign a user or group from a Member Source to a Role so the Role is backed by real identities that can receive policy permissions. This step links the Role to the identities it should represent and, when synchronized, imports current membership from the source into the Role.

Description

This step connects a previously created Role to a specific user or group that exists in a Member Source. For example, LDAP, Active Directory, Azure AD, a database, or a file-based source. Using pim create roles members, you define which source-backed identity should belong to the Role. After that, running a role sync updates the Role with membership information from the source.

This is the point where a Role stops being only a named container and becomes tied to actual enterprise identities. Once this binding exists, the Role can be used meaningfully in policy rules, because the system can map policy access decisions back to real users, groups, or service accounts.

Purpose

To bind a Role to authoritative identities from a Member Source so that policy permissions apply to real users or groups rather than to an empty logical object. This ensures policy enforcement reflects the organization’s existing identity model and can stay aligned with membership changes in the source system over time.

Prerequisites

  • A Role must already exist.
  • A Member Source must already be created and available.
  • The user or group to be assigned must exist in that Member Source. It should also be identifiable by name and type, with an optional synchronization identifier if required by the source.

Inputs

Typical inputs for this step include:

  • Role UID or Role identifier.
  • Member name from the source, such as user or group name.
  • Source UID identifying the Member Source.
  • Member type, such as USER or GROUP.
  • Optional synchronization identifier, depending on the source and membership model.

You may also optionally run a synchronization operation after assignment so that the Role reflects current membership from the source immediately.
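
With the pim CLI, the assignment takes a role UID and a member descriptor, as in the CCN walkthrough later in this guide. The group name and UIDs below are placeholders:

```shell
# Sketch only; assigns a source-backed group to role UID 1.
# The --member value is "<member name>,<source UID>,<USER|GROUP>".
pim create roles members 1 --member "pci_analysts,1,GROUP"
```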

Outcome

The Role now has a source-backed member assignment and can be used as an identity-backed object in policy rules. After synchronization, the Role reflects the current membership information from the Member Source, allowing policy access to apply to actual users, groups, or service accounts. Without this step, policies may be defined correctly but still not grant access to anyone in practice.

Conceptual Examples

  • Example 1: Assigning an LDAP Group to a Protection Role
    • A Role named r_cc_protect is linked to a group such as pci_analysts from an LDAP Member Source. After the role is synchronized, all current members of that LDAP group become the effective identities behind the Role. This allows them to receive the permissions defined later in the policy.
  • Example 2: Assigning a Service Account User from a File-Based Source
    • In a test environment, a file-based Member Source contains sample users. A specific service account user is attached to a Role so that demo or automation workflows can exercise the policy. After synchronization, that Role can be referenced in rules just like a production role backed by a centralized identity provider.

Tips

  • Prefer assigning groups instead of individual users when possible. This reduces maintenance and keeps Role design aligned with business functions. It is also consistent with the examples and scripts, which commonly model role membership using source groups such as pci_analysts or hr_analysts. Note that some examples do not use groups.
  • Run a role synchronization after assigning the member source so the Role reflects current source membership immediately. The example workflow explicitly marks sync as recommended.
  • Use clear naming and role design so the source membership aligns with the intended policy behavior. A mismatch between Role purpose and source membership can make later rule definitions misleading or ineffective. This follows the workflow guidance that roles should map to business purpose and member sources should align with expected role design.

3.1.6 - Create Policy Shell

Workflow to create a policy shell.

Summary

Create an empty Policy Shell that acts as the container for roles, data elements, rules, and deployment configuration.

Description

A Policy Shell is the foundational policy object that holds all components of a complete policy but initially contains no rules or assignments. It defines the policy’s identity, which is its name, description, and purpose, and prepares the environment for adding data elements, roles, permissions, and datastores. Creating a Policy Shell is the administrative starting point for constructing a full policy.

Purpose

To establish a dedicated policy container that will later be populated with rules governing how sensitive data is protected and who may access it. The Policy Shell provides organizational structure and acts as the anchor for all subsequent policy configuration steps.

Prerequisites

  • Policy Management must be initialized. For more information about the initialization step, refer to section Initialize Policy Management.
  • Any Data Elements, Roles, or Member Sources you plan to use may optionally be created beforehand, but are not required at this step.

Inputs

Typical inputs for this step include:

  • Policy name.
  • Policy description.
  • Optional metadata or tags for categorization.

At this stage, no data elements, roles, or permissions are defined. Only the policy container itself is defined.

Outcome

A new, empty policy is created and ready to be configured. You can now begin attaching Data Elements, assigning Roles, defining permissions, and associating Datastores.

Conceptual Examples

  • Example 1: Credit Card Protection Policy
    • An administrator creates a new policy shell named policy_credit_card intended to govern how credit card numbers are tokenized and which users can unprotect them.
  • Example 2: Customer Support Access Policy
    • A policy shell named policy_support_data is created to organize rules that provide masked data to customer service roles while restricting access to full values.

Tips

  • Choose clear and descriptive names so the purpose of the policy is immediately recognizable.
  • Create separate policies for distinct business domains to simplify auditing and updates. For example payments, HR, or analytics.
  • Avoid overloading one policy with too many unrelated Data Elements. Smaller policies are easier to manage and review.
  • Think of the Policy Shell as the project folder for everything that will follow.

3.1.7 - Define Rule with Data Element and Role

Workflow to define rule with Data Element and Role.

Summary

Define a rule that specifies how a Role may interact with a Data Element by assigning permissions such as protect, unprotect, mask, or view.

Description

A Rule establishes the relationship between a Data Element and a Role within a policy. It defines which operations members of that Role are allowed to perform on the Data Element. For example, protecting the data using tokenization, viewing masked values, or unprotecting the data if permitted. Rules are the core of policy logic. They determine the behavior of the system when a user or application attempts to access or process sensitive data.

Purpose

To define who (the Role) can do what (the permission) to which data (the Data Element). Rules translate business intent into enforceable policy logic and ensure consistent application of protection standards across all datastores.

Prerequisites

  • A Policy Shell must exist. For more information about creating a policy shell, refer to the section Create Policy Shell.
  • A Data Element must be created. For more information about creating a data element, refer to the section Prepare Data Element.
  • A Role must be created and associated with a Member Source. For more information about creating a member source, creating a role, and assigning a member source to the role, refer to the sections Create Member Source, Create Role, and Assign Member Source to Role.

Inputs

Typical inputs for this step include:

  • Role to which the rule applies.
  • Data Element being controlled.
  • Permissions. For example, protect, unprotect, mask, and view.
  • Optional output behavior. For example, allow masked return only.
  • Optional masking configuration if applicable.

Outcome

A rule is added to the policy, granting or restricting specific interactions between the designated Role and Data Element. The policy now contains enforceable access logic that dictates how protected data will behave for different types of users or applications.

Conceptual Examples

  • Example 1: Protect‑Only Rule
    • A rule is created to allow the r_cc_protect role to protect credit card numbers using tokenization but not unprotect them. Applications using this role can store sensitive data safely, but cannot retrieve clear values.
  • Example 2: Masked‑View Rule
    • A rule is created for the r_support_masked role, allowing customer support teams to view masked data but not access clear text or perform protection operations.

Tips

  • Define rules with the principle of least privilege. Only grant operations that are required for the role’s function.
  • Avoid giving unprotect permissions unless absolutely necessary. Restricting this keeps sensitive data safe.
  • Use naming conventions to help visually match roles to rule types. For example, r_<domain>_protect and r_<domain>_viewmasked.
  • For complex policies, document why each rule exists to simplify future audits or updates.

3.1.8 - Create Datastore

Workflow to create datastore.

Summary

Create a Datastore entry that represents the application, service, or infrastructure component where the policy will be deployed and enforced.

Description

A Datastore defines the environment in which a policy will operate, such as an application server, a database engine, an API endpoint, or another enforcement point. It represents the location where data is accessed or processed and where the policy rules, which have been defined earlier through roles and data elements, will be applied. Creating a Datastore registers this target environment with the policy management system so that policies can later be deployed to it.

Purpose

To identify and register a policy enforcement location so the policy system knows where the rules should run. Without a Datastore, a policy cannot be enforced, because the system has no target environment to push the configuration to.

Prerequisites

  • A Policy Shell must exist. For more information about creating a policy shell, refer to the section Create Policy Shell.
  • Rules that define how roles interact with data elements should already be created. For more information about defining roles, refer to the section Define Rule with Data Element and Role.
  • The environment where the Datastore will be mapped. For example, application, service, or host should be known.

Inputs

Typical Datastore inputs include:

  • Name of the Datastore.
  • Type. For example, application, service, and database.
  • Connection information. For example, hostnames, endpoints, and identifiers.
  • Optional metadata. For example, environment tags such as dev, test, or production.

Actual inputs depend on the type of enforcement point being registered.

Outcome

A Datastore is created and available for policy deployment. Policies can now be associated with this Datastore so that enforcement can occur during real data access operations.

Conceptual Examples

None.

Tips

  • Use consistent naming to distinguish environments. For example, ds_payments_prod and ds_analytics_dev.
  • Create separate Datastores for different systems, even if they use the same policy, to maintain clear deployment boundaries.
  • Map Datastores to actual data‑flow locations. Wherever sensitive data is read, processed, or stored, a Datastore should exist.
  • Confirm the destination system is reachable or properly registered before deploying policies to avoid deployment failures.

3.1.9 - Deploy Policy to a Datastore

Workflow to deploy policy to datastore.

Summary

Deploy the completed policy to a Datastore so that its rules are actively enforced during real data access operations.

Description

Deploying a policy makes it operational on a specific Datastore, such as an application, service, database, or other enforcement point. Until deployment occurs, a policy exists only as a configuration object. Deployment pushes all rules, including Data Elements, Roles, and permissions, to the target Datastore. This ensures that the runtime environment can apply them when users or applications interact with sensitive data.

Purpose

To activate the policy in an environment where protected data is accessed. Deployment ensures that the Datastore enforces the correct behavior, such as tokenization, masking, unprotect permissions, or other rules, based on the policy definition.

Prerequisites

  • A Policy Shell must be created. For more information about creating a policy shell, refer to the section Create Policy Shell.
  • The policy must contain rules that bind Data Elements to Roles. For more information about defining roles, refer to the section Define Rule with Data Element and Role.
  • A Datastore must exist. For more information about creating a datastore, refer to the section Create Datastore.
  • Connectivity or registration between Policy Management and the Datastore should be confirmed.

Inputs

Typical deployment inputs include:

  • Policy name or ID.
  • Datastore name or ID.
  • Optional deployment parameters, depending on environment. For example, environment tags and version notes.

Outcome

The policy is successfully deployed to the specified Datastore. The enforcement point is now configured to apply the defined data protection rules whenever sensitive data is read, written, or processed.

Conceptual Examples

Not applicable.

Tips

  • Always verify that the Datastore is correctly registered before deploying to avoid deployment errors.
  • If you maintain separate development, test, and production environments, use clearly named Datastores for each to avoid mis-deployment.
  • After deployment, test a few representative access scenarios to confirm enforcement works as intended.
  • Consider using versioning or descriptions on deployments for auditability and rollback clarity.

3.1.10 - Confirm Deployment

Workflow to confirm deployment.

Summary

Verify that the policy has been successfully deployed to the intended Datastore by retrieving deployment information.

Description

After deploying a policy, it is important to confirm that the system has registered the deployment correctly. The API provides a command to retrieve a list of all Datastores along with the policies currently connected to them. This verification step ensures that the deployment completed successfully and that the Datastore is now enforcing the appropriate policy rules.

Purpose

To confirm that the policy deployment is active and correctly associated with the target Datastore. This step provides assurance that the configuration is in effect and ready for runtime enforcement.

Prerequisites

A policy must have been deployed to a Datastore. For more information about deploying a policy, refer to the section Deploy Policy to a Datastore.

Inputs

No inputs are required. The confirmation command runs without arguments.

Outcome

You receive a deployment report listing each Datastore by UID and the policies associated with it. If the policy appears in the list, then deployment is confirmed.

Conceptual Examples

Not applicable.

Tips

  • If the expected policy does not appear in the list, re‑run the deployment or check for configuration errors.
  • Use this command routinely when validating changes or troubleshooting application behavior.
  • Keep track of Datastore UIDs to avoid confusion in complex environments.

3.2 - Create a policy to protect Credit Card Number (CCN)

Workflow example to protect CCN.

Goal

Create a policy that protects Credit Card Number (CCN) using CCN data element, with:

  • At least one role.
  • At least one member source assigned to that role.
  • Deployed to at least one datastore.

This example provides a walkthrough of the complete workflow to create a policy that protects a Credit Card Number (CCN) with tokenization, using the Protegrity CLI and REST APIs. The example defines a CCN Data Element and access controls, then deploys a policy that protectors can enforce at runtime. CCNs have a specific format and must comply with existing regulations, so this example uses the Credit Card token type with a common usability pattern of keeping the last four digits visible while tokenizing the rest.

Before using the CLI or the REST APIs, determine the properties required for the CCNs. For example:

  • How many digits should be in the clear.
  • Whether invalid values should be rejected or tokenized, for example, via Luhn handling.
  • What security operations should be allowed. For example, protect, unprotect, or reprotect.

These properties determine how the data element and the policy rules are configured, and what applications and users will experience when data is protected or unprotected.

A key design choice specific to tokenization is selecting the tokenizer, because it defines the tokenization engine and lookup-table strategy. Protegrity uses the tokenizer to deterministically transform a CCN into a same-length token, and it controls how the CCN digits are mapped into tokens so the protector can reliably produce and resolve tokens under policy. Protegrity offers multiple Static Lookup Table (SLT) tokenizer variants, such as SLT_1_3, SLT_2_3, SLT_1_6, and SLT_2_6, which differ mainly in lookup-table design and operational footprint. For most CCN use cases, SLT_2_3 strikes a practical balance of memory usage and performance for standard PAN lengths, avoiding the much larger memory footprint of the _6 variants unless they are specifically required. The commands in this example use SLT_1_6; substitute the tokenizer that best fits your requirements.

Assumptions

To execute any CLI or API command in this example, the following assumptions have been made:

  • You are operating on a new AI Team Edition setup.
    • Set up the AI Team Edition by installing the Protegrity Provisioned Cluster. For more information about installing the PPC, refer to the section Installing PPC.
  • You are connected to the Policy Manager container.
    • Connect to the Policy Manager container by deploying the Protegrity Policy Manager. For more information about deploying the Protegrity Policy Manager, refer to the section Installing Policy Workbench.

CLI Examples

To execute any CLI command in this example, the following additional assumption has been made:

  • You have access to the Protegrity Policy Management CLI.

API Examples

To execute any API command in this example, the following additional assumption has been made:

  • You have access to the Protegrity Policy Management REST APIs.

3.2.1 - Initialize Policy Management

Initializing the policy management.

This step initializes the Policy Management system. This step needs to be executed only once.

CLI Code

pim invoke init

CLI Actual Output

✅ PIM successfully initialized (bootstrapped).

API Endpoint

POST /pim/init

Get Gateway host address

export GW_HOST="$(kubectl get gateway pty-main -n api-gateway -o jsonpath='{.status.addresses[0].value}')"

Generate JWT token

TOKEN=$(curl -k -s  https://$GW_HOST/api/v1/auth/login/token \
-X POST \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'loginname=workbench' \
-d 'password=Admin123!' \
-D - -o /dev/null 2>&1 | grep -i 'pty_access_jwt_token:' | sed 's/pty_access_jwt_token: //' | tr -d '\r') && echo "${TOKEN:0:10}"

API Code

curl -k -H "Authorization: Bearer ${TOKEN}" -X POST "https://${GW_HOST}/pty/v2/pim/init" -H "accept: application/json"

API Actual Output:

The API does not return a response body.

3.2.2 - Prepare Data Element

Create a data element.

What you are doing

Creating the data element that defines:

  • What is protected: CCN.
  • How it is protected: Tokenization settings.

Why it matters

Data elements are the protection building blocks that will be granted the permissions by the policy.

Tips

  • To keep nothing in clear: set --from-left 0 --from-right 0.
  • To avoid Luhn enforcement: omit --invalid-luhn-digit.
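
Applying both tips above gives a variant data element that keeps nothing in the clear and skips Luhn enforcement. The name and description are placeholders:

```shell
# Variant sketch: full tokenization, no Luhn enforcement.
pim create dataelements token credit-card \
  --name "de_ccn_token_full" \
  --description "Tokenize credit card numbers, nothing in clear" \
  --tokenizer "SLT_1_6" \
  --from-left 0 \
  --from-right 0
```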

CLI Code

pim create dataelements token credit-card --name "de_ccn_token" --description "Tokenize credit card numbers, keeping last 4 chars in clear" --tokenizer "SLT_1_6" --from-left 0 --from-right 4 --invalid-luhn-digit

CLI Actual Output

UID NAME DESCRIPTION                                                TOKENIZER  FROMLEFT  FROMRIGHT  VALUEIDENTIFICATION
15  de_ccn_token  Tokenize credit card numbers, keeping last 4 chars in clear  SLT_1_6    0         4          {'invalidCardType': False, 'invalidLuhnDigit': True, 'alphabeticIndicator': False, 'alphabeticIndicatorPosition': 1}

API Endpoint

POST /pim/dataelements

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/dataelements" \
-d '{
"name": "de_ccn_token",
"description": "Tokenize credit card numbers, keeping last 4 chars in clear",
"creditCardToken": {
"tokenizer": "SLT_1_6",
"fromLeft": 0,
"fromRight": 4,
"valueIdentification": {
"invalidCardType": false,
"invalidLuhnDigit": true,
"alphabeticIndicator": false,
"alphabeticIndicatorPosition": 1
}
}
}'

API Actual Output

{"uid":"1","name":"de_ccn_token","description":"Tokenize credit card numbers, keeping last 4 chars in clear","creditCardToken":{"tokenizer":"SLT_1_6","fromLeft":0,"fromRight":4,"valueIdentification":{"invalidCardType":false,"invalidLuhnDigit":true,"alphabeticIndicator":false,"alphabeticIndicatorPosition":1}}}

3.2.2.1 - Create Mask

Create mask for the CCN.

What you are doing

Creating the mask that is applied to the unprotect permission of a data element to customize how data is displayed. The mask is to be used later on de_ccn_token.

Why it matters

Creating a mask defines which characters are displayed in the clear when the data is unprotected. The mask can optionally be applied to an unprotect permission so that a consumer of the data sees only the permitted characters.

CLI Code

pim create masks --name "clear_mask" --from-left 0 --from-right 4 --character "*"

CLI Actual Output

Name        Description  Fromleft  Fromright  Masked  Character  Uid
clear_mask               0         4          False   *          1

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/masks" \
-d '{
"name": "clear_mask",
"masked": true,
"fromLeft": 0,
"fromRight": 4,
"character": "*"
}'

API Endpoint

POST /pim/masks

API Actual Output

{"uid":"1","name":"clear_mask","description":"","fromLeft":0,"fromRight":4,"masked":true,"character":"*"}

3.2.3 - Create Member Source

Create member source for the CCN.

What you are doing

You are creating a Member Source, which is a source that informs Protegrity where user identities are stored. For example, Active Directory, LDAP, Azure AD or a text file. This configuration stores the connection and the lookup context that Protegrity needs to discover users and groups that are then mapped to Roles.

Why it matters

Policies do not grant access to individual users directly. Access is granted through roles, and roles are populated from external identity systems via member sources. Without a source, you cannot reliably attach real enterprise groups or users to roles, synchronize memberships, or enforce policy access the same way your organization manages identity.

CLI Code

pim create sources file --name test-file --user-file exampleusers.txt --group-file examplegroups.txt

CLI Actual Output

NAME       DESCRIPTION  TYPE             USERFILE          GROUPFILE          TIMEOUT  UID
test-file               SourceType.FILE  exampleusers.txt  examplegroups.txt  120      1

API Endpoint

POST /pim/sources

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/sources" \
-d '{
"name": "test-file",
"type": "FILE",
"connection": {
"userFile": "exampleusers.txt",
"groupFile": "examplegroups.txt"
}
}'

API Actual Output

{"name":"test-file","description":"","type":"FILE","connection":{"userFile":"exampleusers.txt","groupFile":"examplegroups.txt"},"uid":"1"}

3.2.3.1 - Test New Member Source

Test whether the member source has been successfully created.

CLI Code

pim invoke sources test 1

CLI Actual Output

+----------------+--------+---------+
|      type      | passed | message |
+----------------+--------+---------+
|   connection   |  True  |         |
| authentication |  True  |         |
|     groups     |  True  |         |
|     users      |  True  |         |
+----------------+--------+---------+ 

API Endpoint

POST /pim/sources/{SOURCE_UID}/test

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/sources/1/test"

API Actual Output

{"connection":{"passed":true,"message":""},"authentication":{"passed":true,"message":""},"groups":{"passed":true,"message":""},"users":{"passed":true,"message":""}}

3.2.4 - Create Role

Create a role.

What you are doing

Creating the role that represents who can perform operations against the data element.

Why it matters

Permissions are granted to roles and roles map to users or groups, ideally from member sources.

Tips

  • If you include the --allow-all option in the command, then ALLOWALL is set to True.
  • Consider which user needs what level of access and create a role for each set of users.

CLI Code

pim create roles role --name "role_protect_ccn" --description "This role has access to protect CCN data" --mode "MANUAL"

CLI Actual Output

NAME              DESCRIPTION                                MODE             ALLOWALL  UID
role_protect_ccn  This role has access to protect CCN data   RoleMode.MANUAL  False     1

API Endpoint

POST /pim/roles

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/roles" \
-d '{
"name": "role_protect_ccn",
"description": "This role has access to protect CCN data",
"mode": "MANUAL",
"allowAll": false
}'

API Actual Output

{"name":"role_protect_ccn","description":"This role has access to protect CCN data","mode":"MANUAL","uid":"1","allowAll":false}

3.2.5 - Assign Member Source to Role

Assign member source to the role.

What you are doing

You are attaching a specific user or group from a member source to a role. This creates the binding that defines who belongs in the role. If you run synchronization, it retrieves the current membership from the source into the role.

Why it matters

This is the step that turns a role from a named container into an identity-backed access control object. If you do not assign a source user or group to the role and synchronize it, then no real identities are associated with that role. This means that your policy rules may exist, but nobody in your organization will actually have the access that those rules intend to grant.

CLI Code

pim create roles members 1 --member "exampleuser1,1,USER"

CLI Actual Output

Name          Source  Type             Uid
exampleuser1  1       MemberType.USER  1

API Endpoint

POST /pim/roles/{ROLE_UID}/members

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/roles/1/members" \
-d '[
{
"name": "exampleuser1",
"source": "1",
"type": "USER"
}
]'

API Actual Output

[{"type":"USER","name":"exampleuser1","syncid":"exampleuser1","uid":"1","source":"1"}]

3.2.5.1 - Synchronize Member Source

Synchronize the member source.

Connect and synchronize the users with your member source.

CLI Code

pim invoke roles sync 1

CLI Actual Output

Successfully synchronized members for role with UID '1'.  

API Endpoint

POST /pim/roles/{ROLE_UID}/sync

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/roles/1/sync"

API Actual Output

The API does not return any output as response.

3.2.6 - Create Policy Shell

Create a policy shell.

What you are doing

Creating the policy that will hold the access rules, that is, the data elements, roles, and rules.

Why it matters

The policy is the object that ties together the pieces and becomes deployable.

Tips

Multiple roles, multiple data elements, and their corresponding rules can be added to a single policy. Consider structuring your policy around specific areas of focus; in this example, the treatment of a Credit Card Number across the entire enterprise.

CLI Code

pim create policies policy --name "ccn-policy" --description "Protect CCN with tokenization"

CLI Actual Output

NAME        DESCRIPTION                    ACCESS                                                      UID
ccn-policy  Protect CCN with tokenization  {'protect': False, 'reProtect': False, 'unProtect': False}  1  

API Endpoint

POST /pim/policies

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/policies" \
-d '{
"name": "ccn-policy",
"description": "Protect CCN with tokenization",
"template": {
"access": {
"protect": false,
"reProtect": false,
"unProtect": false
}
}
}'

API Actual Output

{"name":"ccn-policy","description":"Protect CCN with tokenization","uid":"1","template":{"access":{"protect":false,"reProtect":false,"unProtect":false}}}

3.2.7 - Define Rule with Data Element and Role

Define a rule that includes a data element and role.

What you are doing

Creating the policy rule that binds:

  • A role: Who.
  • A data element: What.
  • Permitted operations: Protect, Reprotect, or Unprotect.

Why it matters

This binding is what makes the policy enforceable. Without rules, the policy exists but grants no access.

Tips

This rule grants the specified role permission to protect and unprotect the CCN data element, while disallowing reprotect.

CLI Code

pim create policies rules 1 --rule "1,1,1,NULL_VALUE,true,false,true"

CLI Actual Output

Role  Dataelement  Mask  Noaccessoperation  Access
1     1            1     NULL_VALUE         {'protect': True, 'reProtect': False, 'unProtect': True}

API Endpoint

POST /pim/policies/{POLICY_UID}/rules

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/policies/1/rules" \
-d '{
"role": "1",
"dataElement": "1",
"mask": "1",
"noAccessOperation": "NULL_VALUE",
"permission": {
"access": {
"protect": true,
"reProtect": false,
"unProtect": true
}
}
}'

API Actual Output

{"role":"1","mask":"1","dataElement":"1","permission":{"access":{"protect":true,"reProtect":false,"unProtect":true}}}

3.2.8 - Create Datastore

Create a datastore.

What you are doing

Creating the datastore target where a policy will be deployed.

Why it matters

A policy is not active for protectors until it is deployed to a datastore.

CLI Code

pim create datastores datastore --name "ds_protect_ccn" --description "Datastore to demonstrate CCN protection" --default

CLI Actual Output

Name            Description                              Default  Uid
ds_protect_ccn  Datastore to demonstrate CCN protection  True     1

API Endpoint

POST /pim/datastores

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/datastores" \
-d '{
"name": "ds_protect_ccn",
"description": "Datastore to demonstrate CCN protection",
"default": true
}'

API Actual Output

{"description":"Datastore to demonstrate CCN protection","name":"ds_protect_ccn","default":true,"uid":"1"}

3.2.9 - Deploy Policy to a Datastore

Deploy a policy to a datastore.

What you are doing

Deploying the policy to the datastore so protectors that target that datastore can load the policy.

Why it matters

Until the policy is deployed, the policy is not available to runtime protectors.

Tips

  • You may want to deploy multiple policies to a single datastore. If so, include them by repeating the --policies parameter.
  • You can also add a single policy to multiple datastores by creating a loop.

CLI Code

pim invoke datastores deploy 1 --policies 1

CLI Actual Output

Successfully deployed to datastore '1':
Policies: 1  

API Endpoint

POST /pim/datastores/{DATASTORE_UID}/deploy

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/datastores/1/deploy" \
-d '{
"policies": ["1"],
"applications": []
}'

API Actual Output

The API does not return any output as response.
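The tips above can be sketched in shell. This is a dry run with illustrative policy and datastore UIDs (the echo prefixes print the commands instead of executing them; remove echo to run against a live Policy Manager):

```shell
# Deploy multiple policies to one datastore by repeating --policies
# (UIDs 1 and 2 are illustrative)
echo pim invoke datastores deploy 1 --policies 1 --policies 2

# Deploy a single policy to several datastores with a loop
for DS_UID in 1 2 3; do
  echo pim invoke datastores deploy "$DS_UID" --policies 1
done
```

The sketch assumes those policy and datastore UIDs already exist in your environment.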

3.2.10 - Confirm Deployment

Confirm the policy deployment.

What you are doing

Confirming which policies are deployed to which datastores.

Why it matters

Verifying deployment confirms the policy is active, correctly mapped, and enforceable.

CLI Code

pim get deploy

CLI Actual Output

Uid  Policies  Applications
1    ['1']     []

API Endpoint

GET /pim/deploy

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-X GET "https://${GW_HOST}/pty/v2/pim/deploy"

API Actual Output

{"dataStores":[{"uid":"1","policies":["1"],"applications":[]}]}
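For scripted verification, the JSON above can be checked for the expected policy UID. This is a minimal sketch that assumes the response shape shown; the grep-based matching is illustrative, and a JSON tool such as jq would be more robust:

```shell
# Response from GET /pty/v2/pim/deploy (shape shown above)
DEPLOY='{"dataStores":[{"uid":"1","policies":["1"],"applications":[]}]}'

# Confirm that policy "1" appears in a deployed-policies list
if echo "$DEPLOY" | grep -q '"policies":\["1"\]'; then
  echo "policy 1 is deployed"
else
  echo "policy 1 is NOT deployed" >&2
  exit 1
fi
```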

3.3 - Create a policy to protect Date of Birth (DOB)

Workflow example to protect DOB.

Goal

Create one policy that protects Date of Birth (DOB) using a datetime data element, with:

  • At least one role.
  • At least one member source feeding that role.
  • Deployed to at least one datastore.

This example provides a walkthrough of the complete workflow to create a policy to protect a Date of Birth (DOB). DOB is a common piece of sensitive personal data, and organizations typically protect it using datetime tokenization. This tokenization preserves the YYYY‑MM‑DD structure while preventing direct exposure of the original value. In this example, a single role is used whose members are obtained from an LDAP-based Member Source. The role is granted permission to protect (tokenize) DOB values.

For this walkthrough, a dedicated DOB data element is created using a date‑specific tokenizer, ensuring that the output maintains a valid date format for downstream systems. The role and data element are combined into a single policy. The policy is then deployed to a datastore so applications working with DOB information can enforce the protection rules at runtime.

Assumptions

To execute any CLI or API command in this example, the following assumptions have been made:

  • You are operating on a new AI Team Edition setup.
    • Set up the AI Team Edition by installing the Protegrity Provisioned Cluster. For more information about installing the PPC, refer to the section Installing PPC.
  • You are connected to the Policy Manager container.
    • Connect to the Policy Manager container by deploying the Protegrity Policy Manager. For more information about deploying the Protegrity Policy Manager, refer to the section Installing Policy Workbench.

CLI Examples

To execute any CLI command in this example, the following additional assumption has been made:

  • You have access to the Protegrity Policy Management Command Line Interface (CLI).

API Examples

To execute any API command in this example, the following additional assumption has been made:

  • You have access to the Protegrity Policy Management REST APIs.

3.3.1 - Initialize Policy Management

Initialize the policy management.

This step initializes the Policy Information Management (PIM) system and needs to be executed only once.

CLI Code

pim invoke init

CLI Actual Output

✅ PIM successfully initialized (bootstrapped).  

API Endpoint

POST /pim/init

Get Gateway host address

export GW_HOST="$(kubectl get gateway pty-main -n api-gateway -o jsonpath='{.status.addresses[0].value}')"

Generate JWT token

TOKEN=$(curl -k -s  https://$GW_HOST/api/v1/auth/login/token \
-X POST \
-H 'Content-Type: application/x-www-form-urlencoded' \
-d 'loginname=workbench' \
-d 'password=Admin123!' \
-D - -o /dev/null 2>&1 | grep -i 'pty_access_jwt_token:' | sed 's/pty_access_jwt_token: //' | tr -d '\r') && echo "${TOKEN:0:10}"

API Code

curl -k -H "Authorization: Bearer ${TOKEN}" -X POST "https://${GW_HOST}/pty/v2/pim/init" -H "accept: application/json"

API Actual Output

The API does not return any output as response.

3.3.2 - Prepare Data Element

Prepare data element.

What you are doing

Creating a DOB data element that defines what is protected (Date of Birth) and how it is protected using date tokenization in YYYY-MM-DD format.

Why it matters

Data elements are the protection building blocks that policies grant access to.

CLI Code

pim create dataelements token date-time --name "de_dob_token" --description "Tokenize Date of Birth" --tokenizer "SLT_8_DATETIME"

CLI Actual Output

UID  NAME          DESCRIPTION             TOKENIZER       TOKENIZETIME  DISTINGUISHABLEDATE  DATEINCLEAR
1    de_dob_token  Tokenize Date of Birth  SLT_8_DATETIME  False         False                TokenElementDateInClear.NONE  

API Endpoint

POST /pim/dataelements

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/dataelements" \
-d '{
"name": "de_dob_token",
"description": "Tokenize Date of Birth",
"dateTimeToken": {
"tokenizer": "SLT_8_DATETIME",
"tokenizeTime": false,
"distinguishableDate": false,
"dateInClear": "NONE"
}
}'

API Actual Output

{"uid":"1","name":"de_dob_token","description":"Tokenize Date of Birth","dateTimeToken":{"tokenizer":"SLT_8_DATETIME","tokenizeTime":false,"distinguishableDate":false,"dateInClear":"NONE"}}

3.3.3 - Create Member Source

Create member source.

What you are doing

You are creating a Member Source, which is a source that informs Protegrity where user identities are stored. For example, Active Directory, LDAP, Azure AD or a text file. This configuration stores the connection and the lookup context that Protegrity needs to discover users and groups that are then mapped to Roles.

Why it matters

Policies do not grant access to individual users directly. Access is granted through roles, and roles are populated from external identity systems via member sources. Without a source, you cannot reliably attach real enterprise groups or users to roles, synchronize memberships, or enforce policy access the same way your organization manages identity.

CLI Code

pim create sources file --name test-file --user-file exampleusers.txt --group-file examplegroups.txt

CLI Actual Output

NAME       DESCRIPTION  TYPE             USERFILE          GROUPFILE          TIMEOUT  UID
test-file               SourceType.FILE  exampleusers.txt  examplegroups.txt  120      1  

API Endpoint

POST /pim/sources

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/sources" \
-d '{
"name": "test-file",
"type": "FILE",
"connection": {
"userFile": "exampleusers.txt",
"groupFile": "examplegroups.txt"
}
}'

API Actual Output

{"name":"test-file","description":"","type":"FILE","connection":{"userFile":"exampleusers.txt","groupFile":"examplegroups.txt"},"uid":"1"}

3.3.3.1 - Test the Member Source

Test whether the member source has been successfully created.

CLI Code

pim invoke sources test 1

CLI Actual Output

+----------------+--------+---------+
|      type      | passed | message |
+----------------+--------+---------+
|   connection   |  True  |         |
| authentication |  True  |         |
|     groups     |  True  |         |
|     users      |  True  |         |
+----------------+--------+---------+  

API Endpoint

POST /pim/sources/{SOURCE_UID}/test

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/sources/1/test"

API Actual Output

{"connection":{"passed":true,"message":""},"authentication":{"passed":true,"message":""},"groups":{"passed":true,"message":""},"users":{"passed":true,"message":""}}

3.3.4 - Create Role

Create a role.

What you are doing

Creating the role that represents who can perform operations against the DOB data element.

Why it matters

Permissions are granted to roles, and roles map to real users or groups via member sources.

Tips

  • If you include the --allow-all option in the command, then ALLOWALL is set to True.
  • Consider which user needs what level of access and create a role for each set of users.

CLI Code

pim create roles role --name "dob_protect_role" --description "Role having access to protect DOB" --mode "MANUAL"

CLI Actual Output

NAME              DESCRIPTION                        MODE             ALLOWALL  UID
dob_protect_role  Role having access to protect DOB  RoleMode.MANUAL  False     1  

API Endpoint

POST /pim/roles

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/roles" \
-d '{
"name": "dob_protect_role",
"description": "Role having access to protect DOB",
"mode": "MANUAL",
"allowAll": false
}'

API Actual Output

{"name":"dob_protect_role","description":"Role having access to protect DOB","mode":"MANUAL","uid":"1","allowAll":false}

3.3.5 - Assign Member Source to Role

Assign member source to a role.

What you are doing

You are attaching a specific user or group from a member source to a role. This creates the binding that defines who belongs in the role. If you run synchronization, it retrieves the current membership from the source into the role.

Why it matters

This is the step that turns a role from a named container into an identity-backed access control object. If you do not assign a source user or group to the role and synchronize it, then no real identities are associated with that role. This means that your policy rules may exist, but nobody in your organization will actually have the access that those rules intend to grant.

CLI Code

pim create roles members 1 --member "exampleuser1,1,USER"

CLI Actual Output

Name          Source  Type             Uid
exampleuser1  1       MemberType.USER  1

API Endpoint

POST /pim/roles/{ROLE_UID}/members

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/roles/1/members" \
-d '[
{
"name": "exampleuser1",
"source": "1",
"type": "USER"
}
]'

API Actual Output

[{"type":"USER","name":"exampleuser1","syncid":"exampleuser1","uid":"1","source":"1"}]

3.3.5.1 - Synchronize Member Source

Synchronize the member source.

Connect and synchronize the users with your member source.

CLI Code

pim invoke roles sync 1

CLI Actual Output

Successfully synchronized members for role with UID '1'.  

API Endpoint

POST /pim/roles/{ROLE_UID}/sync

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/roles/1/sync"

API Actual Output

The API does not return any output as response.

3.3.6 - Create Policy Shell

Create a policy shell.

What you are doing

Creating the policy that will hold the access rules, that is, the data elements, roles, and rules.

Why it matters

The policy is the object that ties together the pieces and becomes deployable.

Tips

Multiple roles, multiple data elements, and their corresponding rules can be added to a single policy. Consider structuring your policy around specific areas of focus. For example, how the DOB is used across the entire enterprise.

CLI Code

pim create policies policy --name "dob-policy" --description "Protect DOB with tokenization"

CLI Actual Output

NAME        DESCRIPTION                    ACCESS                                                      UID
dob-policy  Protect DOB with tokenization  {'protect': False, 'reProtect': False, 'unProtect': False}  1  

API Endpoint

POST /pim/policies

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/policies" \
-d '{
"name": "dob-policy",
"description": "Protect DOB with tokenization",
"template": {
"access": {
"protect": false,
"reProtect": false,
"unProtect": false
}
}
}'

API Actual Output

{"name":"dob-policy","description":"Protect DOB with tokenization","uid":"1","template":{"access":{"protect":false,"reProtect":false,"unProtect":false}}}

3.3.7 - Define Rule with Data Element and Role

Define a rule that includes the data element and role.

What you are doing

Creating the policy rule that binds:

  • A role: Who.
  • A data element: What.
  • Permitted operations: Protect, Reprotect, or Unprotect.

Why it matters

This binding is what makes the policy enforceable. Without rules, the policy exists but grants no access.

Tips

This rule grants the specified role permission to protect and unprotect the DOB data element, while disallowing reprotect.

CLI Code

# Format:
# "roleUid,dataElementUid,maskUid,NOACCESSOPERATION,protect,reProtect,unProtect"
# (maskUid is left empty here because no mask is applied)

pim create policies rules 1 --rule "1,1,,NULL_VALUE,true,false,true"

CLI Actual Output

Role  Dataelement  Mask  Noaccessoperation  Access
1     1            0     NULL_VALUE         {'protect': True, 'reProtect': False, 'unProtect': True}

API Endpoint

POST /pim/policies/{POLICY_UID}/rules

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/policies/1/rules" \
-d '{
"role": "1",
"dataElement": "1",
"mask": "0",
"noAccessOperation": "NULL_VALUE",
"permission": {
"access": {
"protect": true,
"reProtect": false,
"unProtect": true
}
}
}'

API Actual Output

{"role":"1","mask":"0","dataElement":"1","permission":{"access":{"protect":true,"reProtect":false,"unProtect":true}}}

3.3.8 - Create Datastore

Create a datastore.

What you are doing

Creating the datastore target where a policy will be deployed.

Why it matters

A policy is not active for protectors until it is deployed to a datastore.

CLI Code

pim create datastores datastore --name "ds_protect_dob" --description "Datastore to demonstrate DOB protection" --default

CLI Actual Output

Name            Description                              Default  Uid
ds_protect_dob  Datastore to demonstrate DOB protection  True     1

API Endpoint

POST /pim/datastores

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/datastores" \
-d '{
"name": "ds_protect_dob",
"description": "Datastore to demonstrate DOB protection",
"default": true
}'

API Actual Output

{"description":"Datastore to demonstrate DOB protection","name":"ds_protect_dob","default":true,"uid":"1"}

3.3.9 - Deploy Policy to Datastore

Deploy the policy to a datastore.

What you are doing

Deploying the policy to the datastore so protectors that target that datastore can load the policy.

Why it matters

Until the policy is deployed, the policy is not available to runtime protectors.

Tips

  • You may want to deploy multiple policies to a single datastore. If so, include them by repeating the --policies option in the command.
  • You can also add a single policy to multiple datastores by creating a loop.

CLI Code

pim invoke datastores deploy 1 --policies 1

CLI Actual Output

Successfully deployed to datastore '1':
Policies: 1  

API Endpoint

POST /pim/datastores/{DATASTORE_UID}/deploy

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-X POST "https://${GW_HOST}/pty/v2/pim/datastores/1/deploy" \
-d '{
"policies": ["1"],
"applications": []
}'

API Actual Output

The API does not return any output as response.
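The loop mentioned in the tips can also be applied to the REST API. This is a dry-run sketch with illustrative datastore UIDs and placeholder credentials (the echo prefix prints each curl command instead of executing it; remove echo to run against a live gateway):

```shell
# TOKEN and GW_HOST are assumed to be set as in the earlier steps;
# placeholders keep this dry run self-contained.
TOKEN="${TOKEN:-placeholder-token}"
GW_HOST="${GW_HOST:-gateway.example.com}"

# Deploy policy "1" to several datastores (UIDs 1 and 2 are illustrative)
for DS_UID in 1 2; do
  echo curl -k \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/json" \
    -X POST "https://${GW_HOST}/pty/v2/pim/datastores/${DS_UID}/deploy" \
    -d '{"policies":["1"],"applications":[]}'
done
```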

3.3.10 - Confirm Deployment

Confirm the policy deployment.

What you are doing

Confirm that the policies have been deployed to the respective datastores.

Why it matters

Verifying deployment confirms the policy is active, correctly mapped, and enforceable.

CLI Code

pim get deploy

CLI Actual Output

Uid  Policies  Applications
1    ['1']     []

API Endpoint

GET /pim/deploy

API Code

curl -k \
-H "Authorization: Bearer ${TOKEN}" \
-H "accept: application/json" \
-X GET "https://${GW_HOST}/pty/v2/pim/deploy"

API Actual Output

{"dataStores":[{"uid":"1","policies":["1"],"applications":[]}]}

3.4 - Full Script Examples

Full scripts to create and deploy a data protection policy.

The following section contains the end-to-end full scripts that can be used to create and protect sample data using the Policy Management REST APIs.

3.4.1 - Full Script to Protect CCN using Policy Management REST APIs

Full script for protecting CCN data.

The following code snippet contains the contents of the deploy-ccn-policy.sh shell script. This script enables the creation and deployment of a policy to protect CCN data using the Policy Management REST APIs.

#!/usr/bin/env bash
###############################################################################
# Script Name  : ccn_policy.sh
# Description  : End-to-end automation script for creating and deploying a
#                Credit Card Number (CCN) protection policy using the
#                Protegrity Policy Information Management (PIM) REST API.
#
#
# ─────────────────────────────────────────────────────────────────────────────
# IMPORTANT NOTES
# ─────────────────────────────────────────────────────────────────────────────
#
# 1. WORKBENCH REQUIREMENT:
#    The Policy Management REST APIs will work only after you have installed
#    the Protegrity Workbench. Attempting to use these APIs before the
#    Workbench is installed will result in errors.
#
# 2. USER PERMISSIONS:
#    The user account used to authenticate against these APIs must have the
#    appropriate Protegrity role assigned:
#      - Security Officer : Required for write access (create, update, delete)
#      - Security Viewer  : Required for read-only access (get, list)
#    For more information about the roles and permissions required, refer to
#    the section "Managing Roles" in the Protegrity documentation.
#
# 3. API VERSION:
#    The Policy Management API uses version v2.
#    All endpoints in this script are prefixed with /pty/v2/pim/
#    Requests to older API versions will not be supported.
#
# ─────────────────────────────────────────────────────────────────────────────
# PREREQUISITES
# ─────────────────────────────────────────────────────────────────────────────
#   - Protegrity Workbench must be installed and running
#   - kubectl configured and connected to your Kubernetes cluster
#   - curl installed on the machine running this script
#   - Access to the Protegrity API Gateway
#   - A user account with Security Officer permissions
#
# ─────────────────────────────────────────────────────────────────────────────
# USAGE
# ─────────────────────────────────────────────────────────────────────────────
#   chmod +x deploy-ccn-policy.sh
#   ./deploy-ccn-policy.sh
#
# ─────────────────────────────────────────────────────────────────────────────
# WORKFLOW
# ─────────────────────────────────────────────────────────────────────────────
#   Step 1    - Initialize Policy Management
#   Step 2    - Prepare Data Element (CCN Token)
#   Step 2.1  - Create Mask (subsection of Prepare Data Element)
#   Step 3    - Create Member Source
#   Step 3.1  - Test Member Source Connectivity
#   Step 4    - Create Role
#   Step 5    - Assign Member Source to Role
#   Step 5.1  - Sync Role Membership
#   Step 6    - Create Policy Shell
#   Step 7    - Define Policy Rule (bind Role + Data Element + Permissions)
#   Step 8    - Create Datastore
#   Step 9    - Deploy Policy to Datastore
#   Step 10   - Confirm Deployment
#
# ─────────────────────────────────────────────────────────────────────────────
# SECURITY NOTES
# ─────────────────────────────────────────────────────────────────────────────
#   - If any API call returns HTTP 401 (Unauthorized), the script will
#     automatically attempt to re-generate the JWT token and retry the
#     request once before failing.
#   - If any API call indicates that a resource already exists, the script
#     will exit immediately with an error. Delete the conflicting resource
#     first, or update the name variables in SECTION 1 before re-running.
#
# ─────────────────────────────────────────────────────────────────────────────
# EXIT CODES
# ─────────────────────────────────────────────────────────────────────────────
#   0 - Success
#   1 - Script error (set -e will trigger on any failed command)
###############################################################################

set -euo pipefail

###############################################################################
# SECTION 1: USER-CONFIGURABLE VARIABLES
# ─────────────────────────────────────
# Modify the variables below to match your environment before running
# this script. All other values are derived automatically.
#
# NOTE: The user specified by ADMIN_USER must have the Security Officer
#       permission to perform write operations via the Policy Management API.
#       For read-only operations, the Security Viewer permission is sufficient.
#       For more information, refer to the "Managing Roles" section in the
#       Protegrity documentation.
###############################################################################

# --- Protegrity Admin Credentials ---
# WARNING: For production use, consider sourcing these values from a secrets
#          manager (e.g., HashiCorp Vault, Kubernetes Secrets, AWS SSM).
ADMIN_USER="workbench"
ADMIN_PASS="Admin123!"

# --- Data Element ---
DE_NAME="de_ccn_token"
DE_DESC="Tokenize credit card numbers, keeping last 4 chars in clear"
DE_TOKENIZER="SLT_1_6"           # Options: SLT_1_3 | SLT_2_3 | SLT_1_6 | SLT_2_6
DE_FROM_LEFT=0                    # Number of digits to keep in clear from the left
DE_FROM_RIGHT=4                   # Number of digits to keep in clear from the right

# --- Mask (subsection of Prepare Data Element) ---
MASK_NAME="clear_mask"
MASK_FROM_LEFT=0                  # Number of characters to keep in clear from the left
MASK_FROM_RIGHT=4                 # Number of characters to keep in clear from the right
MASK_CHARACTER="*"                # Character used to mask hidden digits

# --- Role ---
ROLE_NAME="role_protect_ccn"
ROLE_DESC="This role has access to protect CCN data"
ROLE_MODE="MANUAL"                # Options: MANUAL | SEMIAUTOMATIC | AUTOMATIC

# --- Member Source ---
SOURCE_NAME="test-file"
SOURCE_USER_FILE="exampleusers.txt"
SOURCE_GROUP_FILE="examplegroups.txt"

# --- Role Member ---
MEMBER_NAME="exampleuser1"
MEMBER_TYPE="USER"                # Options: USER | GROUP

# --- Policy ---
POLICY_NAME="ccn-policy"
POLICY_DESC="Protect CCN with tokenization"

# --- Policy Rule Permissions ---
RULE_PROTECT=true                 # Allow protect operation
RULE_REPROTECT=false              # Allow re-protect operation
RULE_UNPROTECT=true               # Allow unprotect operation
RULE_NO_ACCESS_OP="NULL_VALUE"    # Behavior for no-access: NULL_VALUE | EXCEPTION

# --- Datastore ---
DS_NAME="ds_protect_ccn"
DS_DESC="Datastore to demonstrate CCN protection"
DS_DEFAULT=true                   # Set as the default datastore: true | false

# --- Token Retry Settings ---
# On receiving HTTP 401 Unauthorized, the script will refresh the JWT token
# and retry the failed request. MAX_TOKEN_RETRIES controls how many refresh
# attempts are made before the script aborts.
MAX_TOKEN_RETRIES=1               # Number of times to retry generating a token on 401

###############################################################################
# SECTION 2: HELPER FUNCTIONS
# ────────────────────────────
# Internal utility functions used throughout the script.
# Do not modify unless necessary.
###############################################################################

# Prints a formatted primary section header to stdout
log() {
  printf "\n%s\n" "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  printf "  %s\n" "$*"
  printf "%s\n" "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
}

# Prints a formatted subsection header to stdout (indented, lighter style)
log_sub() {
  printf "\n%s\n" "  ┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄"
  printf "    %s\n" "$*"
  printf "%s\n" "  ┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄"
}

# Prints an error message to stderr and exits with code 1
# Usage: die <message>
die() {
  printf "\n  [ERROR] %s\n" "$*" >&2
  exit 1
}

# Attempts to extract a UID from a JSON API response.
# Handles both string UIDs ("uid":"1") and integer UIDs ("uid":1).
# Exits with an error if extraction fails — never prompts interactively.
# Usage: extract_uid <json_response> <resource_label>
extract_uid() {
  local response="$1"
  local label="$2"
  local uid
  # Match string-quoted UIDs: "uid":"<value>"
  uid=$(echo "$response" | grep -o '"uid":"[^"]*"' | head -1 | sed 's/"uid":"//;s/"//' || true)
  # Fallback: match integer UIDs: "uid":<number>
  if [[ -z "${uid:-}" ]]; then
    uid=$(echo "$response" | grep -o '"uid":[0-9]*' | head -1 | grep -o '[0-9]*' || true)
  fi
  if [[ -z "${uid:-}" ]]; then
    die "Failed to extract UID for '${label}'. API response was: ${response}"
  fi
  echo "$uid"
}

# Generates a new JWT authentication token using the configured admin
# credentials (ADMIN_USER / ADMIN_PASS). Stores the result in the global
# TOKEN variable.
#
# NOTE: The user must have the Security Officer permission for write access
#       or the Security Viewer permission for read-only access to the
#       Policy Management API (v2). For more information, refer to
#       the "Managing Roles" section in the Protegrity documentation.
#
# Usage: generate_token
generate_token() {
  echo "  Generating JWT authentication token..."
  TOKEN=$(curl -k -s "https://${GW_HOST}/api/v1/auth/login/token" \
    -X POST \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -d "loginname=${ADMIN_USER}" \
    -d "password=${ADMIN_PASS}" \
    -D - -o /dev/null 2>&1 \
    | grep -i 'pty_access_jwt_token:' \
    | sed 's/pty_access_jwt_token: //' \
    | tr -d '\r')

  if [[ -z "${TOKEN:-}" ]]; then
    die "Failed to retrieve JWT token. Please verify the following:
         - The Protegrity Workbench is installed and running.
         - The API Gateway host (${GW_HOST}) is reachable.
         - The credentials for user '${ADMIN_USER}' are correct.
         - The user '${ADMIN_USER}' has the Security Officer or Security Viewer
           permission assigned. Refer to 'Managing Roles' in the Protegrity
           documentation for more information."
  fi

  echo "  Token acquired successfully."
}

# Executes a curl API call and automatically retries with a refreshed JWT
# token if a 401 Unauthorized response is received.
#
# All Policy Management API calls in this script target the v2 API version:
#   https://<gateway>/pty/v2/pim/...
#
# On HTTP 401, the token is refreshed (up to MAX_TOKEN_RETRIES times) and
# the request is retried. This can occur when a token expires mid-run.
# On any other non-2xx response, a warning is logged but execution continues.
#
# Usage: api_call <curl_args...>
api_call() {
  local retries=0
  local http_status
  local response_body
  local tmp_file
  tmp_file=$(mktemp)

  while true; do
    # Execute the curl call, capturing body and HTTP status separately
    http_status=$(curl -k -s -o "$tmp_file" -w "%{http_code}" \
      -H "Authorization: Bearer ${TOKEN}" \
      "$@")
    response_body=$(cat "$tmp_file")

    # Handle 401 Unauthorized:
    # This typically means the JWT token has expired or is invalid.
    # The script will attempt to refresh the token and retry the request.
    # Ensure the user has the correct permissions (Security Officer / Viewer).
    # Refer to "Managing Roles" in the Protegrity documentation.
    if [[ "$http_status" == "401" ]]; then
      if [[ "$retries" -lt "$MAX_TOKEN_RETRIES" ]]; then
        echo "  [Warning] Received HTTP 401 Unauthorized." >&2
        echo "            Refreshing JWT token and retrying (attempt $((retries + 1)) of ${MAX_TOKEN_RETRIES})..." >&2
        generate_token
        retries=$((retries + 1))
        continue
      else
        rm -f "$tmp_file"
        die "Received HTTP 401 Unauthorized after ${MAX_TOKEN_RETRIES} token refresh attempt(s).
         Please verify that user '${ADMIN_USER}' has the required permissions:
           - Security Officer : for write access
           - Security Viewer  : for read-only access
         Refer to 'Managing Roles' in the Protegrity documentation."
      fi
    fi

    # Fail on "already exists" (HTTP 400/409) — resource must be removed first
    if echo "$response_body" | grep -qi "already exist"; then
      rm -f "$tmp_file"
      die "Resource already exists (HTTP ${http_status}). The script cannot continue.
         Response : ${response_body}
         Action   : Delete or rename the existing resource before re-running,
                    or update the name variables at the top of this script."
    fi

    # Log other non-2xx responses (excluding 401 already handled above)
    if [[ "$http_status" != 2* ]]; then
      echo "  [Warning] Received HTTP ${http_status}. Response: ${response_body}" >&2
    fi

    rm -f "$tmp_file"
    echo "$response_body"
    break
  done
}

###############################################################################
# SECTION 3: ENVIRONMENT SETUP
# ─────────────────────────────
# Retrieves the API Gateway host address and generates a JWT authentication
# token required for all subsequent API calls.
#
# NOTE: The Policy Management REST APIs will work only after the Protegrity
#       Workbench has been installed. All API calls target version v2:
#         https://<gateway>/pty/v2/pim/
###############################################################################

log "Environment Setup: Retrieving API Gateway Host"
export GW_HOST
GW_HOST="$(kubectl get gateway pty-main -n api-gateway \
  -o jsonpath='{.status.addresses[0].value}')"
echo "  API Gateway Host : ${GW_HOST}"
echo "  API Version      : v2  (/pty/v2/pim/)"

log "Environment Setup: Generating JWT Authentication Token"
generate_token

###############################################################################
# SECTION 4: WORKFLOW EXECUTION
# ──────────────────────────────
# Executes each step of the CCN policy creation workflow in sequence.
# UIDs returned by each step are captured and reused in subsequent steps.
#
# NOTE: All write operations (POST) require the Security Officer permission.
#       The read operation in Step 10 (GET) requires at minimum the
#       Security Viewer permission.
###############################################################################

# ─────────────────────────────────────────────────────────────────────────────
# STEP 1: Initialize Policy Management
# ─────────────────────────────────────────────────────────────────────────────
# Initializes the PIM system. This step only needs to be performed once
# per environment setup.
#
# Requirement : Protegrity Workbench must be installed before running this step.
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/init
# ─────────────────────────────────────────────────────────────────────────────
log "Step 1: Initialize Policy Management"
INIT_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/init")
if [[ -z "${INIT_RESPONSE}" ]]; then
  echo "  Status: OK (empty response — PIM already initialized or no content returned)"
else
  echo "  Response: ${INIT_RESPONSE}"
fi
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 2: Prepare Data Element
# ─────────────────────────────────────────────────────────────────────────────
# Prepares the CCN Data Element that defines what data is protected and how
# it is tokenized. The tokenizer (SLT_1_6) and clear-text settings determine
# how many digits remain visible after tokenization.
#
# This step also includes the creation of a Mask (Step 2.1) as a subsection,
# since the mask is directly associated with how the data element's unprotected
# value is presented to consumers.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/dataelements
# ─────────────────────────────────────────────────────────────────────────────
log "Step 2: Prepare Data Element — ${DE_NAME}"
DE_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/dataelements" \
  -d '{
    "name": "'"${DE_NAME}"'",
    "description": "'"${DE_DESC}"'",
    "creditCardToken": {
      "tokenizer": "'"${DE_TOKENIZER}"'",
      "fromLeft": '"${DE_FROM_LEFT}"',
      "fromRight": '"${DE_FROM_RIGHT}"',
      "valueIdentification": {
        "invalidCardType": false,
        "invalidLuhnDigit": true,
        "alphabeticIndicator": false,
        "alphabeticIndicatorPosition": 1
      }
    }
  }')
echo "  Response: ${DE_RESPONSE}"
DE_UID=$(extract_uid "$DE_RESPONSE" "$DE_NAME")
echo "  Data Element UID: ${DE_UID}"

# ─────────────────────────────────────────────────────────────────────────────
# STEP 2.1: Create Mask
# ─────────────────────────────────────────────────────────────────────────────
# Subsection of: Prepare Data Element
#
# Creates a mask that controls how data is displayed when unprotected.
# The mask is optionally applied to an unprotect operation to display only
# certain characters to the consumer of the data. Hidden characters are
# replaced with the specified mask character, while the defined number of
# characters on each side remain visible in the clear.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/masks
# ─────────────────────────────────────────────────────────────────────────────
log_sub "Step 2.1: Create Mask — ${MASK_NAME}  (Subsection of: Prepare Data Element)"
MASK_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/masks" \
  -d '{
    "name": "'"${MASK_NAME}"'",
    "masked": true,
    "fromLeft": '"${MASK_FROM_LEFT}"',
    "fromRight": '"${MASK_FROM_RIGHT}"',
    "character": "'"${MASK_CHARACTER}"'"
  }')
echo "    Response: ${MASK_RESPONSE}"
MASK_UID=$(extract_uid "$MASK_RESPONSE" "$MASK_NAME")
echo "    Mask UID: ${MASK_UID}"

# ─────────────────────────────────────────────────────────────────────────────
# STEP 3: Create Member Source
# ─────────────────────────────────────────────────────────────────────────────
# Creates a Member Source that defines where user and group identities are
# sourced from (in this example, a flat file). Member Sources are used to
# populate roles with real enterprise identities.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/sources
# ─────────────────────────────────────────────────────────────────────────────
log "Step 3: Create Member Source — ${SOURCE_NAME}"
SOURCE_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/sources" \
  -d '{
    "name": "'"${SOURCE_NAME}"'",
    "type": "FILE",
    "connection": {
      "userFile": "'"${SOURCE_USER_FILE}"'",
      "groupFile": "'"${SOURCE_GROUP_FILE}"'"
    }
  }')
echo "  Response: ${SOURCE_RESPONSE}"
SOURCE_UID=$(extract_uid "$SOURCE_RESPONSE" "$SOURCE_NAME")
echo "  Source UID: ${SOURCE_UID}"

# ─────────────────────────────────────────────────────────────────────────────
# STEP 3.1: Test Member Source Connectivity
# ─────────────────────────────────────────────────────────────────────────────
# Validates that the Member Source is reachable and correctly configured.
# All connectivity checks (connection, authentication, groups, users) must
# pass before proceeding.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/sources/{id}/test
# ─────────────────────────────────────────────────────────────────────────────
log "Step 3.1: Test Member Source Connectivity — UID: ${SOURCE_UID}"
api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/sources/${SOURCE_UID}/test"
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 4: Create Role
# ─────────────────────────────────────────────────────────────────────────────
# Creates a role that represents who is allowed to perform operations on
# the protected data. Permissions are granted to roles, which are then
# mapped to users and groups via Member Sources.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/roles
# ─────────────────────────────────────────────────────────────────────────────
log "Step 4: Create Role — ${ROLE_NAME}"
ROLE_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/roles" \
  -d '{
    "name": "'"${ROLE_NAME}"'",
    "description": "'"${ROLE_DESC}"'",
    "mode": "'"${ROLE_MODE}"'",
    "allowAll": false
  }')
echo "  Response: ${ROLE_RESPONSE}"
ROLE_UID=$(extract_uid "$ROLE_RESPONSE" "$ROLE_NAME")
echo "  Role UID: ${ROLE_UID}"


# ─────────────────────────────────────────────────────────────────────────────
# STEP 5: Assign Member Source to Role
# ─────────────────────────────────────────────────────────────────────────────
# Binds a specific user or group from the Member Source to the Role.
# This establishes the identity-to-role mapping that makes the policy
# enforceable for real users.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/roles/{id}/members
# ─────────────────────────────────────────────────────────────────────────────
log "Step 5: Assign Member '${MEMBER_NAME}' to Role — UID: ${ROLE_UID}"
api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/roles/${ROLE_UID}/members" \
  -d '[
    {
      "name": "'"${MEMBER_NAME}"'",
      "source": "'"${SOURCE_UID}"'",
      "type": "'"${MEMBER_TYPE}"'"
    }
  ]'
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 5.1: Sync Role Membership
# ─────────────────────────────────────────────────────────────────────────────
# Synchronizes the role membership from the Member Source. This pulls the
# current list of users and groups into the role so that access controls
# reflect the latest state of the identity source.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/roles/{id}/sync
# ─────────────────────────────────────────────────────────────────────────────
log "Step 5.1: Sync Role Membership — Role UID: ${ROLE_UID}"
SYNC_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/roles/${ROLE_UID}/sync")
if [[ -z "${SYNC_RESPONSE}" ]]; then
  echo "  Status: OK"
else
  echo "  Response: ${SYNC_RESPONSE}"
fi
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 6: Create Policy Shell
# ─────────────────────────────────────────────────────────────────────────────
# Creates the policy container that will hold the access rules. The policy
# is the deployable object that ties together Data Elements, Roles, and Rules.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/policies
# ─────────────────────────────────────────────────────────────────────────────
log "Step 6: Create Policy Shell — ${POLICY_NAME}"
POLICY_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/policies" \
  -d '{
    "name": "'"${POLICY_NAME}"'",
    "description": "'"${POLICY_DESC}"'",
    "template": {
      "access": {
        "protect": false,
        "reProtect": false,
        "unProtect": false
      }
    }
  }')
echo "  Response: ${POLICY_RESPONSE}"
POLICY_UID=$(extract_uid "$POLICY_RESPONSE" "$POLICY_NAME")
echo "  Policy UID: ${POLICY_UID}"

# ─────────────────────────────────────────────────────────────────────────────
# STEP 7: Define Policy Rule
# ─────────────────────────────────────────────────────────────────────────────
# Creates the rule that binds a Role (who), a Data Element (what), a Mask
# (how unprotected data is displayed), and the permitted operations
# (protect / reProtect / unProtect) into the policy.
# Without rules, the policy exists but grants no access.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/policies/{id}/rules
# ─────────────────────────────────────────────────────────────────────────────
log "Step 7: Define Policy Rule — Policy UID: ${POLICY_UID}"
RULE_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/policies/${POLICY_UID}/rules" \
  -d '{
    "role": "'"${ROLE_UID}"'",
    "dataElement": "'"${DE_UID}"'",
    "mask": "'"${MASK_UID}"'",
    "noAccessOperation": "'"${RULE_NO_ACCESS_OP}"'",
    "permission": {
      "access": {
        "protect": '"${RULE_PROTECT}"',
        "reProtect": '"${RULE_REPROTECT}"',
        "unProtect": '"${RULE_UNPROTECT}"'
      }
    }
  }')
if [[ -z "${RULE_RESPONSE}" ]]; then
  echo "  Status: OK"
else
  echo "  Response: ${RULE_RESPONSE}"
fi
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 8: Create Datastore
# ─────────────────────────────────────────────────────────────────────────────
# Creates the datastore target to which the policy will be deployed.
# A policy is not active for protectors until it has been deployed to
# at least one datastore.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/datastores
# ─────────────────────────────────────────────────────────────────────────────
log "Step 8: Create Datastore — ${DS_NAME}"
DS_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/datastores" \
  -d '{
    "name": "'"${DS_NAME}"'",
    "description": "'"${DS_DESC}"'",
    "default": '"${DS_DEFAULT}"'
  }')
echo "  Response: ${DS_RESPONSE}"
DS_UID=$(extract_uid "$DS_RESPONSE" "$DS_NAME")
echo "  Datastore UID: ${DS_UID}"

# ─────────────────────────────────────────────────────────────────────────────
# STEP 9: Deploy Policy to Datastore
# ─────────────────────────────────────────────────────────────────────────────
# Deploys the policy to the target datastore. After this step, runtime
# protectors that reference this datastore will be able to load and
# enforce the policy.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/datastores/{id}/deploy
# ─────────────────────────────────────────────────────────────────────────────
log "Step 9: Deploy Policy to Datastore — DS UID: ${DS_UID}, Policy UID: ${POLICY_UID}"
DEPLOY_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/datastores/${DS_UID}/deploy" \
  -d '{
    "policies": ["'"${POLICY_UID}"'"],
    "applications": []
  }')
if [[ -z "${DEPLOY_RESPONSE}" ]]; then
  echo "  Status: OK"
else
  echo "  Response: ${DEPLOY_RESPONSE}"
fi
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 10: Confirm Deployment
# ─────────────────────────────────────────────────────────────────────────────
# Verifies that the policy has been successfully deployed to the datastore.
# Confirms the policy is active, correctly mapped, and enforceable.
#
# Permission  : Security Viewer (read-only) or Security Officer
# API Version : v2  — GET /pty/v2/pim/deploy
# ─────────────────────────────────────────────────────────────────────────────
log "Step 10: Confirm Deployment"
api_call \
  -H "accept: application/json" \
  -X GET "https://${GW_HOST}/pty/v2/pim/deploy"
echo ""

###############################################################################
# SECTION 5: SUMMARY
# ───────────────────
# Displays a summary of all created resources and their UIDs.
###############################################################################
log "Workflow Complete ✅"
printf "\n%-20s %-30s %-10s\n" "Resource"             "Name"                          "UID"
printf "%-20s %-30s %-10s\n"   "────────────────────" "──────────────────────────────" "──────────"
printf "%-20s %-30s %-10s\n"   "Data Element"  "${DE_NAME}"      "${DE_UID}"
printf "%-20s %-30s %-10s\n"   "  └─ Mask"     "${MASK_NAME}"    "${MASK_UID}"
printf "%-20s %-30s %-10s\n"   "Role"          "${ROLE_NAME}"    "${ROLE_UID}"
printf "%-20s %-30s %-10s\n"   "Member Source" "${SOURCE_NAME}"  "${SOURCE_UID}"
printf "%-20s %-30s %-10s\n"   "Policy"        "${POLICY_NAME}"  "${POLICY_UID}"
printf "%-20s %-30s %-10s\n"   "Datastore"     "${DS_NAME}"      "${DS_UID}"
printf "\n"
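The UID-extraction helper in the script above relies only on `grep` and `sed`, so it can be exercised on its own before a full run. The following sketch is a slightly simplified standalone copy of that logic (the error handling and label parameter are dropped); the JSON payloads are illustrative, not actual API responses:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simplified standalone copy of the script's extract_uid parsing logic.
extract_uid() {
  local response="$1"
  local uid
  # Match string-quoted UIDs: "uid":"<value>"
  uid=$(echo "$response" | grep -o '"uid":"[^"]*"' | head -1 | sed 's/"uid":"//;s/"//' || true)
  # Fallback: match integer UIDs: "uid":<number>
  if [[ -z "${uid:-}" ]]; then
    uid=$(echo "$response" | grep -o '"uid":[0-9]*' | head -1 | grep -o '[0-9]*' || true)
  fi
  echo "$uid"
}

extract_uid '{"uid":"42","name":"de_ccn_token"}'   # prints 42 (string-quoted UID)
extract_uid '{"name":"ccn-policy","uid":7}'        # prints 7  (integer UID)
```

Because the first pipeline ends in `|| true`, a non-matching `grep` does not abort the script under `set -euo pipefail`; the fallback branch then handles integer UIDs.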

3.4.2 - Full Script to Protect DOB using Policy Management REST APIs

Full script for protecting DOB data.

The following code block contains the contents of the deploy-dob-policy.sh shell script. This script creates and deploys a policy to protect DOB data using the Policy Management REST APIs.

#!/usr/bin/env bash
###############################################################################
# Script Name  : deploy-dob-policy.sh
# Description  : End-to-end automation script for creating and deploying a
#                Date of Birth (DOB) protection policy using the
#                Protegrity Policy Information Management (PIM) REST API.
#
# ─────────────────────────────────────────────────────────────────────────────
# IMPORTANT NOTES
# ─────────────────────────────────────────────────────────────────────────────
#
# 1. WORKBENCH REQUIREMENT:
#    The Policy Management REST APIs will work only after you have installed
#    the Protegrity Workbench. Attempting to use these APIs before the
#    Workbench is installed will result in errors.
#
# 2. USER PERMISSIONS:
#    The user account used to authenticate against these APIs must have the
#    appropriate Protegrity role assigned:
#      - Security Officer : Required for write access (create, update, delete)
#      - Security Viewer  : Required for read-only access (get, list)
#    For more information about the roles and permissions required, refer to
#    the section "Managing Roles" in the Protegrity documentation.
#
# 3. API VERSION:
#    The Policy Management API uses version v2.
#    All endpoints in this script are prefixed with /pty/v2/pim/
#    Requests to older API versions are not supported.
#
# ─────────────────────────────────────────────────────────────────────────────
# PREREQUISITES
# ─────────────────────────────────────────────────────────────────────────────
#   - Protegrity Workbench must be installed and running
#   - kubectl configured and connected to your Kubernetes cluster
#   - curl installed on the machine running this script
#   - Access to the Protegrity API Gateway
#   - A user account with Security Officer permissions
#
# ─────────────────────────────────────────────────────────────────────────────
# USAGE
# ─────────────────────────────────────────────────────────────────────────────
#   chmod +x deploy-dob-policy.sh
#   ./deploy-dob-policy.sh
#
# ─────────────────────────────────────────────────────────────────────────────
# WORKFLOW
# ─────────────────────────────────────────────────────────────────────────────
#   Step 1    - Initialize Policy Management
#   Step 2    - Prepare Data Element (DOB DateTime Token)
#   Step 3    - Create Member Source
#   Step 3.1  - Test Member Source Connectivity
#   Step 4    - Create Role
#   Step 5    - Assign Member Source to Role
#   Step 5.1  - Sync Role Membership
#   Step 6    - Create Policy Shell
#   Step 7    - Define Policy Rule (bind Role + Data Element + Permissions)
#   Step 8    - Create Datastore
#   Step 9    - Deploy Policy to Datastore
#   Step 10   - Confirm Deployment
#
# ─────────────────────────────────────────────────────────────────────────────
# SECURITY NOTES
# ─────────────────────────────────────────────────────────────────────────────
#   - If any API call returns HTTP 401 (Unauthorized), the script will
#     automatically attempt to re-generate the JWT token and retry the
#     request once before failing.
#   - If any API call indicates that a resource already exists, the script
#     will exit immediately with an error. Delete the conflicting resource
#     first, or update the name variables in SECTION 1 before re-running.
#
# ─────────────────────────────────────────────────────────────────────────────
# EXIT CODES
# ─────────────────────────────────────────────────────────────────────────────
#   0 - Success
#   1 - Script error (set -e will trigger on any failed command)
###############################################################################

set -euo pipefail

###############################################################################
# SECTION 1: USER-CONFIGURABLE VARIABLES
# ─────────────────────────────────────
# Modify the variables below to match your environment before running
# this script. All other values are derived automatically.
#
# NOTE: The user specified by ADMIN_USER must have the Security Officer
#       permission to perform write operations via the Policy Management API.
#       For read-only operations, the Security Viewer permission is sufficient.
#       For more information, refer to the "Managing Roles" section in the
#       Protegrity documentation.
###############################################################################

# --- Protegrity Admin Credentials ---
# WARNING: For production use, consider sourcing these values from a secrets
#          manager (e.g., HashiCorp Vault, Kubernetes Secrets, AWS SSM).
ADMIN_USER="workbench"
ADMIN_PASS="Admin123!"

# --- Data Element ---
DE_NAME="de_dob_token"
DE_DESC="Tokenize Date of Birth"
DE_TOKENIZER="SLT_8_DATETIME"     # DateTime tokenizer for date/time fields

# --- Role ---
ROLE_NAME="dob_protect_role"
ROLE_DESC="Role having access to protect DOB"
ROLE_MODE="MANUAL"                # Options: MANUAL | SEMIAUTOMATIC | AUTOMATIC

# --- Member Source ---
SOURCE_NAME="test-file"
SOURCE_USER_FILE="exampleusers.txt"
SOURCE_GROUP_FILE="examplegroups.txt"

# --- Role Member ---
MEMBER_NAME="exampleuser1"
MEMBER_TYPE="USER"                # Options: USER | GROUP

# --- Policy ---
POLICY_NAME="dob-policy"
POLICY_DESC="Protect Date of Birth with tokenization"

# --- Policy Rule Permissions ---
RULE_PROTECT=true                 # Allow protect operation
RULE_REPROTECT=false              # Allow re-protect operation
RULE_UNPROTECT=true               # Allow unprotect operation
RULE_NO_ACCESS_OP="NULL_VALUE"    # Behavior for no-access: NULL_VALUE | EXCEPTION

# --- Datastore ---
DS_NAME="ds_protect_dob"
DS_DESC="Datastore to demonstrate DOB protection"
DS_DEFAULT=true                   # Set as the default datastore: true | false

# --- Token Retry Settings ---
# On receiving HTTP 401 Unauthorized, the script will refresh the JWT token
# and retry the failed request. MAX_TOKEN_RETRIES controls how many refresh
# attempts are made before the script aborts.
MAX_TOKEN_RETRIES=1               # Number of times to retry generating a token on 401

###############################################################################
# SECTION 2: HELPER FUNCTIONS
# ────────────────────────────
# Internal utility functions used throughout the script.
# Do not modify unless necessary.
###############################################################################

# Prints a formatted primary section header to stdout
log() {
  printf "\n%s\n" "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  printf "  %s\n" "$*"
  printf "%s\n" "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
}

# Prints a formatted subsection header to stdout (indented, lighter style)
log_sub() {
  printf "\n%s\n" "  ┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄"
  printf "    %s\n" "$*"
  printf "%s\n" "  ┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄"
}

# Prints an error message to stderr and exits with code 1
# Usage: die <message>
die() {
  printf "\n  [ERROR] %s\n" "$*" >&2
  exit 1
}

# Attempts to extract a UID from a JSON API response.
# Handles both string UIDs ("uid":"1") and integer UIDs ("uid":1).
# Exits with an error if extraction fails — never prompts interactively.
# Usage: extract_uid <json_response> <resource_label>
extract_uid() {
  local response="$1"
  local label="$2"
  local uid
  # Match string-quoted UIDs: "uid":"<value>"
  uid=$(echo "$response" | grep -o '"uid":"[^"]*"' | head -1 | sed 's/"uid":"//;s/"//' || true)
  # Fallback: match integer UIDs: "uid":<number>
  if [[ -z "${uid:-}" ]]; then
    uid=$(echo "$response" | grep -o '"uid":[0-9]*' | head -1 | grep -o '[0-9]*' || true)
  fi
  if [[ -z "${uid:-}" ]]; then
    die "Failed to extract UID for '${label}'. API response was: ${response}"
  fi
  echo "$uid"
}

# Generates a new JWT authentication token using the configured admin
# credentials (ADMIN_USER / ADMIN_PASS). Stores the result in the global
# TOKEN variable.
#
# NOTE: The user must have the Security Officer permission for write access
#       or the Security Viewer permission for read-only access to the
#       Policy Management API (v2). For more information, refer to
#       the "Managing Roles" section in the Protegrity documentation.
#
# Usage: generate_token
generate_token() {
  echo "  Generating JWT authentication token..."
  TOKEN=$(curl -k -s "https://${GW_HOST}/api/v1/auth/login/token" \
    -X POST \
    -H 'Content-Type: application/x-www-form-urlencoded' \
    -d "loginname=${ADMIN_USER}" \
    -d "password=${ADMIN_PASS}" \
    -D - -o /dev/null 2>&1 \
    | grep -i 'pty_access_jwt_token:' \
    | sed 's/pty_access_jwt_token: //' \
    | tr -d '\r')

  if [[ -z "${TOKEN:-}" ]]; then
    die "Failed to retrieve JWT token. Please verify the following:
         - The Protegrity Workbench is installed and running.
         - The API Gateway host (${GW_HOST}) is reachable.
         - The credentials for user '${ADMIN_USER}' are correct.
         - The user '${ADMIN_USER}' has the Security Officer or Security Viewer
           permission assigned. Refer to 'Managing Roles' in the Protegrity
           documentation for more information."
  fi

  echo "  Token acquired successfully."
}

# Executes a curl API call and automatically retries with a refreshed JWT
# token if a 401 Unauthorized response is received.
#
# All Policy Management API calls in this script target the v2 API version:
#   https://<gateway>/pty/v2/pim/...
#
# On HTTP 401, the token is refreshed (up to MAX_TOKEN_RETRIES times) and
# the request is retried. This can occur when a token expires mid-run.
# On any other non-2xx response, a warning is logged but execution continues.
#
# Usage: api_call <curl_args...>
api_call() {
  local retries=0
  local http_status
  local response_body
  local tmp_file
  tmp_file=$(mktemp)

  while true; do
    # Execute the curl call, capturing body and HTTP status separately
    http_status=$(curl -k -s -o "$tmp_file" -w "%{http_code}" \
      -H "Authorization: Bearer ${TOKEN}" \
      "$@")
    response_body=$(cat "$tmp_file")

    # Handle 401 Unauthorized:
    # This typically means the JWT token has expired or is invalid.
    # The script will attempt to refresh the token and retry the request.
    # Ensure the user has the correct permissions (Security Officer / Viewer).
    # Refer to "Managing Roles" in the Protegrity documentation.
    if [[ "$http_status" == "401" ]]; then
      if [[ "$retries" -lt "$MAX_TOKEN_RETRIES" ]]; then
        echo "  [Warning] Received HTTP 401 Unauthorized." >&2
        echo "            Refreshing JWT token and retrying (attempt $((retries + 1)) of ${MAX_TOKEN_RETRIES})..." >&2
        generate_token
        retries=$((retries + 1))
        continue
      else
        rm -f "$tmp_file"
        die "Received HTTP 401 Unauthorized after ${MAX_TOKEN_RETRIES} token refresh attempt(s).
         Please verify that user '${ADMIN_USER}' has the required permissions:
           - Security Officer : for write access
           - Security Viewer  : for read-only access
         Refer to 'Managing Roles' in the Protegrity documentation."
      fi
    fi

    # Fail on "already exists" (HTTP 400/409) — resource must be removed first
    if echo "$response_body" | grep -qi "already exist"; then
      rm -f "$tmp_file"
      die "Resource already exists (HTTP ${http_status}). The script cannot continue.
         Response : ${response_body}
         Action   : Delete or rename the existing resource before re-running,
                    or update the name variables at the top of this script."
    fi

    # Log other non-2xx responses (excluding 401 already handled above)
    if [[ "$http_status" != 2* ]]; then
      echo "  [Warning] Received HTTP ${http_status}. Response: ${response_body}" >&2
    fi

    rm -f "$tmp_file"
    echo "$response_body"
    break
  done
}

###############################################################################
# SECTION 3: ENVIRONMENT SETUP
# ─────────────────────────────
# Retrieves the API Gateway host address and generates a JWT authentication
# token required for all subsequent API calls.
#
# NOTE: The Policy Management REST APIs will work only after the Protegrity
#       Workbench has been installed. All API calls target version v2:
#         https://<gateway>/pty/v2/pim/
###############################################################################

log "Environment Setup: Retrieving API Gateway Host"
export GW_HOST
GW_HOST="$(kubectl get gateway pty-main -n api-gateway \
  -o jsonpath='{.status.addresses[0].value}')"
echo "  API Gateway Host : ${GW_HOST}"
echo "  API Version      : v2  (/pty/v2/pim/)"
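
# Optional sanity check (a defensive sketch, not part of the original
# workflow): abort early if kubectl returned no gateway address, since
# every subsequent API call interpolates ${GW_HOST} into its URL.
# Assumes `die` (used later in this script) is defined earlier.
if [[ -z "${GW_HOST}" ]]; then
  die "Could not determine the API Gateway host from 'kubectl get gateway pty-main'.
       Verify that the gateway resource exists in the 'api-gateway' namespace."
fi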

log "Environment Setup: Generating JWT Authentication Token"
generate_token
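
# Optional sanity check (a defensive sketch, not part of the original
# workflow): verify that a token was actually produced. Assumes
# generate_token sets the TOKEN variable consumed by api_call above.
if [[ -z "${TOKEN:-}" ]]; then
  die "generate_token did not set TOKEN; cannot authenticate API calls."
fi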

###############################################################################
# SECTION 4: WORKFLOW EXECUTION
# ──────────────────────────────
# Executes each step of the DOB policy creation workflow in sequence.
# UIDs returned by each step are captured and reused in subsequent steps.
#
# NOTE: All write operations (POST) require the Security Officer permission.
#       The read operation in Step 10 (GET) requires at minimum the
#       Security Viewer permission.
###############################################################################

# ─────────────────────────────────────────────────────────────────────────────
# STEP 1: Initialize Policy Management
# ─────────────────────────────────────────────────────────────────────────────
# Initializes the PIM system. This step only needs to be performed once
# per environment setup.
#
# Requirement : Protegrity Workbench must be installed before running this step.
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/init
# ─────────────────────────────────────────────────────────────────────────────
log "Step 1: Initialize Policy Management"
INIT_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/init")
if [[ -z "${INIT_RESPONSE}" ]]; then
  echo "  Status: OK (empty response — PIM already initialized or no content returned)"
else
  echo "  Response: ${INIT_RESPONSE}"
fi
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 2: Prepare Data Element
# ─────────────────────────────────────────────────────────────────────────────
# Prepares the DOB DateTime Data Element that defines what data is protected
# and how it is tokenized. The SLT_8_DATETIME tokenizer handles date/time
# field tokenization.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/dataelements
# ─────────────────────────────────────────────────────────────────────────────
log "Step 2: Prepare Data Element — ${DE_NAME}"
DE_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/dataelements" \
  -d '{
    "name": "'"${DE_NAME}"'",
    "description": "'"${DE_DESC}"'",
    "dateTimeToken": {
      "tokenizer": "'"${DE_TOKENIZER}"'"
    }
  }')
echo "  Response: ${DE_RESPONSE}"
DE_UID=$(extract_uid "$DE_RESPONSE" "$DE_NAME")
echo "  Data Element UID: ${DE_UID}"

# ─────────────────────────────────────────────────────────────────────────────
# STEP 3: Create Member Source
# ─────────────────────────────────────────────────────────────────────────────
# Creates a Member Source that defines where user and group identities are
# sourced from (in this example, a flat file). Member Sources are used to
# populate roles with real enterprise identities.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/sources
# ─────────────────────────────────────────────────────────────────────────────
log "Step 3: Create Member Source — ${SOURCE_NAME}"
SOURCE_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/sources" \
  -d '{
    "name": "'"${SOURCE_NAME}"'",
    "type": "FILE",
    "connection": {
      "userFile": "'"${SOURCE_USER_FILE}"'",
      "groupFile": "'"${SOURCE_GROUP_FILE}"'"
    }
  }')
echo "  Response: ${SOURCE_RESPONSE}"
SOURCE_UID=$(extract_uid "$SOURCE_RESPONSE" "$SOURCE_NAME")
echo "  Source UID: ${SOURCE_UID}"

# ─────────────────────────────────────────────────────────────────────────────
# STEP 3.1: Test Member Source Connectivity
# ─────────────────────────────────────────────────────────────────────────────
# Validates that the Member Source is reachable and correctly configured.
# All connectivity checks (connection, authentication, groups, users) must
# pass before proceeding.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/sources/{id}/test
# ─────────────────────────────────────────────────────────────────────────────
log "Step 3.1: Test Member Source Connectivity — UID: ${SOURCE_UID}"
api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/sources/${SOURCE_UID}/test"
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 4: Create Role
# ─────────────────────────────────────────────────────────────────────────────
# Creates a role that represents who is allowed to perform operations on
# the protected data. Permissions are granted to roles, which are then
# mapped to users and groups via Member Sources.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/roles
# ─────────────────────────────────────────────────────────────────────────────
log "Step 4: Create Role — ${ROLE_NAME}"
ROLE_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/roles" \
  -d '{
    "name": "'"${ROLE_NAME}"'",
    "description": "'"${ROLE_DESC}"'",
    "mode": "'"${ROLE_MODE}"'",
    "allowAll": false
  }')
echo "  Response: ${ROLE_RESPONSE}"
ROLE_UID=$(extract_uid "$ROLE_RESPONSE" "$ROLE_NAME")
echo "  Role UID: ${ROLE_UID}"

# ─────────────────────────────────────────────────────────────────────────────
# STEP 5: Assign Member Source to Role
# ─────────────────────────────────────────────────────────────────────────────
# Binds a specific user or group from the Member Source to the Role.
# This establishes the identity-to-role mapping that makes the policy
# enforceable for real users.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/roles/{id}/members
# ─────────────────────────────────────────────────────────────────────────────
log "Step 5: Assign Member '${MEMBER_NAME}' to Role — UID: ${ROLE_UID}"
api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/roles/${ROLE_UID}/members" \
  -d '[
    {
      "name": "'"${MEMBER_NAME}"'",
      "source": "'"${SOURCE_UID}"'",
      "type": "'"${MEMBER_TYPE}"'"
    }
  ]'
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 5.1: Sync Role Membership
# ─────────────────────────────────────────────────────────────────────────────
# Synchronizes the role membership from the Member Source. This pulls the
# current list of users and groups into the role so that access controls
# reflect the latest state of the identity source.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/roles/{id}/sync
# ─────────────────────────────────────────────────────────────────────────────
log "Step 5.1: Sync Role Membership — Role UID: ${ROLE_UID}"
SYNC_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/roles/${ROLE_UID}/sync")
if [[ -z "${SYNC_RESPONSE}" ]]; then
  echo "  Status: OK"
else
  echo "  Response: ${SYNC_RESPONSE}"
fi
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 6: Create Policy Shell
# ─────────────────────────────────────────────────────────────────────────────
# Creates the policy container that will hold the access rules. The policy
# is the deployable object that ties together Data Elements, Roles, and Rules.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/policies
# ─────────────────────────────────────────────────────────────────────────────
log "Step 6: Create Policy Shell — ${POLICY_NAME}"
POLICY_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/policies" \
  -d '{
    "name": "'"${POLICY_NAME}"'",
    "description": "'"${POLICY_DESC}"'",
    "template": {
      "access": {
        "protect": false,
        "reProtect": false,
        "unProtect": false
      }
    }
  }')
echo "  Response: ${POLICY_RESPONSE}"
POLICY_UID=$(extract_uid "$POLICY_RESPONSE" "$POLICY_NAME")
echo "  Policy UID: ${POLICY_UID}"

# ─────────────────────────────────────────────────────────────────────────────
# STEP 7: Define Policy Rule
# ─────────────────────────────────────────────────────────────────────────────
# Creates the rule that binds a Role (who), a Data Element (what), and
# the permitted operations (protect / reProtect / unProtect) into the
# policy. Without rules, the policy exists but grants no access.
# Note: No mask is applied for DateTime data elements.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/policies/{id}/rules
# ─────────────────────────────────────────────────────────────────────────────
log "Step 7: Define Policy Rule — Policy UID: ${POLICY_UID}"
RULE_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/policies/${POLICY_UID}/rules" \
  -d '{
    "role": "'"${ROLE_UID}"'",
    "dataElement": "'"${DE_UID}"'",
    "noAccessOperation": "'"${RULE_NO_ACCESS_OP}"'",
    "permission": {
      "access": {
        "protect": '"${RULE_PROTECT}"',
        "reProtect": '"${RULE_REPROTECT}"',
        "unProtect": '"${RULE_UNPROTECT}"'
      }
    }
  }')
if [[ -z "${RULE_RESPONSE}" ]]; then
  echo "  Status: OK"
else
  echo "  Response: ${RULE_RESPONSE}"
fi
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 8: Create Datastore
# ─────────────────────────────────────────────────────────────────────────────
# Creates the datastore target to which the policy will be deployed.
# A policy is not active for protectors until it has been deployed to
# at least one datastore.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/datastores
# ─────────────────────────────────────────────────────────────────────────────
log "Step 8: Create Datastore — ${DS_NAME}"
DS_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/datastores" \
  -d '{
    "name": "'"${DS_NAME}"'",
    "description": "'"${DS_DESC}"'",
    "default": '"${DS_DEFAULT}"'
  }')
echo "  Response: ${DS_RESPONSE}"
DS_UID=$(extract_uid "$DS_RESPONSE" "$DS_NAME")
echo "  Datastore UID: ${DS_UID}"
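
# Optional guard (a defensive sketch, not part of the original workflow):
# Step 9 interpolates both UIDs into the deploy URL and payload, so stop
# here if either is empty, in case extract_uid does not abort on failure.
if [[ -z "${DS_UID}" || -z "${POLICY_UID}" ]]; then
  die "Missing Datastore UID or Policy UID; cannot deploy the policy."
fi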

# ─────────────────────────────────────────────────────────────────────────────
# STEP 9: Deploy Policy to Datastore
# ─────────────────────────────────────────────────────────────────────────────
# Deploys the policy to the target datastore. After this step, runtime
# protectors that reference this datastore will be able to load and
# enforce the policy.
#
# Permission  : Security Officer
# API Version : v2  — POST /pty/v2/pim/datastores/{id}/deploy
# ─────────────────────────────────────────────────────────────────────────────
log "Step 9: Deploy Policy to Datastore — DS UID: ${DS_UID}, Policy UID: ${POLICY_UID}"
DEPLOY_RESPONSE=$(api_call \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -X POST "https://${GW_HOST}/pty/v2/pim/datastores/${DS_UID}/deploy" \
  -d '{
    "policies": ["'"${POLICY_UID}"'"],
    "applications": []
  }')
if [[ -z "${DEPLOY_RESPONSE}" ]]; then
  echo "  Status: OK"
else
  echo "  Response: ${DEPLOY_RESPONSE}"
fi
echo ""

# ─────────────────────────────────────────────────────────────────────────────
# STEP 10: Confirm Deployment
# ─────────────────────────────────────────────────────────────────────────────
# Verifies that the policy has been successfully deployed to the datastore.
# Confirms the policy is active, correctly mapped, and enforceable.
#
# Permission  : Security Viewer (read-only) or Security Officer
# API Version : v2  — GET /pty/v2/pim/deploy
# ─────────────────────────────────────────────────────────────────────────────
log "Step 10: Confirm Deployment"
api_call \
  -H "accept: application/json" \
  -X GET "https://${GW_HOST}/pty/v2/pim/deploy"
echo ""

###############################################################################
# SECTION 5: SUMMARY
# ───────────────────
# Displays a summary of all created resources and their UIDs.
###############################################################################
log "Workflow Complete ✅"
printf "\n%-20s %-30s %-10s\n" "Resource"             "Name"                          "UID"
printf "%-20s %-30s %-10s\n"   "────────────────────" "──────────────────────────────" "──────────"
printf "%-20s %-30s %-10s\n"   "Data Element"  "${DE_NAME}"      "${DE_UID}"
printf "%-20s %-30s %-10s\n"   "Role"          "${ROLE_NAME}"    "${ROLE_UID}"
printf "%-20s %-30s %-10s\n"   "Member Source" "${SOURCE_NAME}"  "${SOURCE_UID}"
printf "%-20s %-30s %-10s\n"   "Policy"        "${POLICY_NAME}"  "${POLICY_UID}"
printf "%-20s %-30s %-10s\n"   "Datastore"     "${DS_NAME}"      "${DS_UID}"
printf "\n"