Beyond infrastructure, the Protegrity Provisioned Cluster (PPC) introduces a suite of Protegrity Common Services (PCS) that act as the backbone for Protegrity AI Team Edition features. These include ingress control for secure traffic routing, certificate management for request validation, and robust authentication and authorization services. PPC also integrates Insight for audit logging and analytics, leveraging OpenSearch and OpenDashboards for visualization and compliance reporting. On this foundation, AI Team Edition delivers advanced capabilities such as policy management, anonymization, data discovery, semantic guardrails, and synthetic data generation, all orchestrated within the PPC cluster. This modular approach ensures scalability, security, and flexibility, making PPC a strategic enabler for organizations adopting cloud-first and containerized environments.
Protegrity Provisioned Cluster
- 1: Installing PPC
- 1.1: Prerequisites
- 1.2: Preparing for PPC deployment
- 1.3: Deploying PPC
- 2: Accessing PPC using a Linux machine
- 3: Installing Features and Protectors
- 4: Login to PPC
- 4.1: Prerequisites
- 4.2: Log in to PPC
- 5: Accessing the PPC CLI
- 5.1: Prerequisites
- 5.2: Accessing the PPC CLI
- 6: Deleting PPC
- 7: Restoring the PPC
1 - Installing PPC
The Protegrity Provisioned Cluster (PPC) is the core framework that forms the AI Team Edition. It is designed to deliver a modern, cloud-native experience for data security and governance. Built on Kubernetes, PPC uses a containerized architecture that simplifies deployment and scaling. Using OpenTofu scripts and Helm charts, administrators can stand up clusters with minimal manual intervention, ensuring consistency and reducing operational overhead.
Perform the following steps to set up and deploy the PPC:
1.1 - Prerequisites
Updating the Roles and Permissions using JSON
The roles and permissions are updated using JSON policy documents.
From the AWS Console, navigate to IAM > Policies > Create policy > JSON, and create the following policies.
Note: Before using the provided JSON, replace the `<AWS_ACCOUNT_ID>` and `<REGION>` values with those of the account and region where the resources are being deployed.
- Creating KMS key and S3 bucket
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyAccess",
      "Effect": "Allow",
      "Action": [
        "eks:DescribeClusterVersions",
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "s3:ListAllMyBuckets",
        "iam:ListUsers",
        "ec2:RunInstances",
        "ec2:CreateKeyPair",
        "ec2:DescribeImages"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ScopedS3AndKMS",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:PutEncryptionConfiguration",
        "s3:GetEncryptionConfiguration",
        "kms:CreateKey",
        "kms:PutKeyPolicy",
        "kms:GetKeyPolicy"
      ],
      "Resource": [
        "arn:aws:s3:::*",
        "arn:aws:kms:*:<AWS_ACCOUNT_ID>:key/*"
      ]
    },
    {
      "Sid": "SelfServiceIAM",
      "Effect": "Allow",
      "Action": [
        "iam:ListSSHPublicKeys",
        "iam:ListServiceSpecificCredentials",
        "iam:GetLoginProfile",
        "iam:ListAccessKeys",
        "iam:CreateAccessKey"
      ],
      "Resource": "arn:aws:iam::<AWS_ACCOUNT_ID>:user/${aws:username}"
    },
    {
      "Sid": "EC2KeyPairPermission",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateKeyPair",
        "ec2:DescribeKeyPairs"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
```
- EC2 Service Policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEC2Instances",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringLike": {
          "ec2:InstanceType": [
            "p*",
            "g*",
            "inf*",
            "trn*",
            "x*",
            "u-*",
            "z*",
            "mac*"
          ]
        }
      }
    },
    {
      "Sid": "ReadOnlyDescribeListEC2RegionRestricted",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeTags",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSecurityGroupRules",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeAccountAttributes"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": [
            "<REGION>"
          ]
        }
      }
    },
    {
      "Sid": "EC2LifecycleAndSecurity",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateSecurityGroup",
        "ec2:DeleteSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:CreateLaunchTemplate",
        "ec2:DeleteLaunchTemplate",
        "ec2:CreateTags",
        "ec2:DeleteTags"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:security-group/*",
        "arn:aws:ec2:*:*:launch-template/*",
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:network-interface/*",
        "arn:aws:ec2:*:*:subnet/*",
        "arn:aws:ec2:*:*:vpc/*",
        "arn:aws:ec2:*:*:image/*",
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:snapshot/*"
      ]
    }
  ]
}
```
- EKS Service Policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyDescribeListEKSVersionsRegionRestricted",
      "Effect": "Allow",
      "Action": [
        "eks:DescribeAddonVersions"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": [
            "<REGION>"
          ]
        }
      }
    },
    {
      "Sid": "ReadOnlyDescribeListEKS",
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:DescribeAddon",
        "eks:DescribePodIdentityAssociation",
        "eks:DescribeNodegroup",
        "eks:ListAddons",
        "eks:ListPodIdentityAssociations"
      ],
      "Resource": [
        "arn:aws:eks:*:<AWS_ACCOUNT_ID>:cluster/*",
        "arn:aws:eks:*:<AWS_ACCOUNT_ID>:nodegroup/*",
        "arn:aws:eks:*:<AWS_ACCOUNT_ID>:addon/*",
        "arn:aws:eks:*:<AWS_ACCOUNT_ID>:podidentityassociation/*"
      ]
    },
    {
      "Sid": "EKSLifecycleAndTag",
      "Effect": "Allow",
      "Action": [
        "eks:CreateCluster",
        "eks:UpdateClusterVersion",
        "eks:UpdateClusterConfig",
        "eks:CreateNodegroup",
        "eks:UpdateNodegroupConfig",
        "eks:UpdateNodegroupVersion",
        "eks:DeleteNodegroup",
        "eks:CreateAddon",
        "eks:UpdateAddon",
        "eks:DeleteAddon",
        "eks:CreatePodIdentityAssociation",
        "eks:DeletePodIdentityAssociation",
        "eks:TagResource",
        "eks:ListClusters"
      ],
      "Resource": [
        "arn:aws:eks:*:<AWS_ACCOUNT_ID>:cluster/*",
        "arn:aws:eks:*:<AWS_ACCOUNT_ID>:nodegroup/*",
        "arn:aws:eks:*:<AWS_ACCOUNT_ID>:addon/*",
        "arn:aws:eks:*:<AWS_ACCOUNT_ID>:podidentityassociation/*"
      ]
    },
    {
      "Sid": "AllowEKSNodegroupSLR",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:CreateServiceLinkedRole"
      ],
      "Resource": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/aws-service-role/eks-nodegroup.amazonaws.com/AWSServiceRoleForAmazonEKSNodegroup"
    },
    {
      "Sid": "EKSDeleteClusterV6",
      "Effect": "Allow",
      "Action": "eks:DeleteCluster",
      "Resource": "arn:aws:eks:*:<AWS_ACCOUNT_ID>:cluster/*"
    }
  ]
}
```
- IAM Service Policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAdminPolicyAttachment",
      "Effect": "Deny",
      "Action": [
        "iam:AttachRolePolicy",
        "iam:PutRolePolicy"
      ],
      "Resource": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/eks-*",
      "Condition": {
        "ArnLike": {
          "iam:PolicyARN": [
            "arn:aws:iam::aws:policy/AdministratorAccess",
            "arn:aws:iam::aws:policy/PowerUserAccess",
            "arn:aws:iam::aws:policy/*FullAccess"
          ]
        }
      }
    },
    {
      "Sid": "DenyInlinePolicyEscalation",
      "Effect": "Deny",
      "Action": [
        "iam:PutRolePolicy",
        "iam:PutUserPolicy",
        "iam:PutGroupPolicy"
      ],
      "Resource": "*"
    },
    {
      "Sid": "ReadOnlyDescribeListIAMScoped",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:ListRolePolicies",
        "iam:ListAttachedRolePolicies",
        "iam:ListInstanceProfilesForRole",
        "iam:GetInstanceProfile",
        "iam:GetPolicy",
        "iam:GetPolicyVersion",
        "iam:ListPolicyVersions",
        "iam:ListAccessKeys"
      ],
      "Resource": [
        "arn:aws:iam::<AWS_ACCOUNT_ID>:role/eks-*",
        "arn:aws:iam::<AWS_ACCOUNT_ID>:instance-profile/eks-*",
        "arn:aws:iam::<AWS_ACCOUNT_ID>:policy/eks-*"
      ]
    },
    {
      "Sid": "ReadOnlyDescribeListUnavoidableStar",
      "Effect": "Allow",
      "Action": "iam:ListRoles",
      "Resource": "*"
    },
    {
      "Sid": "IAMLifecycleRolesPoliciesInstanceProfiles",
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:TagRole",
        "iam:CreatePolicy",
        "iam:DeletePolicy",
        "iam:DeletePolicyVersion",
        "iam:TagPolicy",
        "iam:AttachRolePolicy",
        "iam:DetachRolePolicy",
        "iam:CreateInstanceProfile",
        "iam:TagInstanceProfile",
        "iam:AddRoleToInstanceProfile",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:DeleteInstanceProfile"
      ],
      "Resource": [
        "arn:aws:iam::<AWS_ACCOUNT_ID>:role/eks-*",
        "arn:aws:iam::<AWS_ACCOUNT_ID>:policy/eks-*",
        "arn:aws:iam::<AWS_ACCOUNT_ID>:instance-profile/eks-*"
      ]
    },
    {
      "Sid": "EKSDeleteRoles",
      "Effect": "Allow",
      "Action": "iam:DeleteRole",
      "Resource": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/eks*"
    },
    {
      "Sid": "PassRoleOnlyToEKS",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/eks-*",
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": [
            "eks.amazonaws.com",
            "ec2.amazonaws.com",
            "eks-pods.amazonaws.com",
            "pods.eks.amazonaws.com"
          ]
        }
      }
    },
    {
      "Sid": "PassRoleForEKSPodIdentityRoles",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": [
        "arn:aws:iam::<AWS_ACCOUNT_ID>:role/eks-*-karpenter-role",
        "arn:aws:iam::<AWS_ACCOUNT_ID>:role/eks-*-backup-recovery-utility-role"
      ]
    }
  ]
}
```
- KMS Service Policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "KMSCreateAndList",
      "Effect": "Allow",
      "Action": [
        "kms:CreateKey",
        "kms:ListAliases"
      ],
      "Resource": "*"
    },
    {
      "Sid": "KMSKeyManagementScoped",
      "Effect": "Allow",
      "Action": [
        "kms:PutKeyPolicy",
        "kms:GetKeyPolicy",
        "kms:DescribeKey",
        "kms:GenerateDataKey",
        "kms:Decrypt",
        "kms:TagResource",
        "kms:UntagResource",
        "kms:EnableKeyRotation",
        "kms:GetKeyRotationStatus",
        "kms:ListResourceTags",
        "kms:ScheduleKeyDeletion",
        "kms:CreateAlias",
        "kms:DeleteAlias"
      ],
      "Resource": [
        "arn:aws:kms:*:<AWS_ACCOUNT_ID>:key/*",
        "arn:aws:kms:*:<AWS_ACCOUNT_ID>:alias/*"
      ]
    }
  ]
}
```
- S3 Service Policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3EncryptionConfigAndStateScoped",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetEncryptionConfiguration",
        "s3:PutEncryptionConfiguration",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:CreateBucket",
        "s3:GetBucketTagging",
        "s3:GetBucketPolicy",
        "s3:GetBucketAcl",
        "s3:GetBucketCORS",
        "s3:PutBucketTagging",
        "s3:GetBucketWebsite",
        "s3:GetBucketVersioning",
        "s3:GetAccelerateConfiguration",
        "s3:GetBucketRequestPayment",
        "s3:GetBucketLogging",
        "s3:GetLifecycleConfiguration",
        "s3:GetReplicationConfiguration",
        "s3:GetBucketObjectLockConfiguration",
        "s3:DeleteBucket"
      ],
      "Resource": "arn:aws:s3:::*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": [
            "<REGION>"
          ],
          "aws:PrincipalAccount": "<AWS_ACCOUNT_ID>"
        }
      }
    }
  ]
}
```
Description for the JSON components
This section describes the permissions referenced in the JSON policies above.
IAM Roles
Contact your IT team to create the necessary IAM roles with the following permissions to create and manage AWS EKS resources.
| IAM Role | Required Policies |
|---|---|
| Amazon EKS cluster IAM role (manages the Kubernetes cluster) | AmazonEKSBlockStoragePolicy, AmazonEKSClusterPolicy, AmazonEKSComputePolicy, AmazonEKSLoadBalancingPolicy, AmazonEKSNetworkingPolicy, AmazonEKSVPCResourceController, AmazonEKSServicePolicy, AmazonEBSCSIDriverPolicy |
| Amazon EKS node IAM role (communicates with the node) | AmazonEBSCSIDriverPolicy, AmazonEC2ContainerRegistryReadOnly, AmazonEKS_CNI_Policy, AmazonEKSWorkerNodePolicy, AmazonSSMManagedInstanceCore |
These policies are managed by AWS. For more information about AWS managed policies, refer to AWS managed policies for Amazon Elastic Kubernetes Service in the AWS documentation.
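If your IT team prefers the CLI, the roles in the table above can also be created with `aws iam`. The following is a hedged sketch, not a required procedure: the role name `eks-node-role` is an illustrative choice (the installer does not mandate it), the trust policy is the standard EC2 one, and the AWS calls are guarded so the snippet is safe to run without credentials.

```shell
# Trust policy letting EC2 instances (worker nodes) assume the role.
TRUST_POLICY='{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}'

# Only attempt the AWS calls when the CLI is installed and credentials work.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws iam create-role \
    --role-name eks-node-role \
    --assume-role-policy-document "$TRUST_POLICY"

  # Attach the AWS managed policies listed for the node role.
  for policy in AmazonEBSCSIDriverPolicy AmazonEC2ContainerRegistryReadOnly \
                AmazonEKS_CNI_Policy AmazonEKSWorkerNodePolicy AmazonSSMManagedInstanceCore; do
    aws iam attach-role-policy \
      --role-name eks-node-role \
      --policy-arn "arn:aws:iam::aws:policy/${policy}"
  done
fi
```

The cluster role follows the same pattern with `eks.amazonaws.com` as the trusted service and the cluster policies from the table.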
AWS IAM Permissions
The AWS IAM user or role used to install PPC must have permissions to create and manage Amazon EKS clusters and the required supporting AWS resources.
EC2 Permissions
| Category | Required Permissions |
|---|---|
| Networking & VPC | ec2:DescribeVpcs ec2:DescribeSubnets ec2:DescribeVpcAttribute ec2:DescribeTags ec2:DescribeNetworkInterfaces |
| Security Groups | ec2:DescribeSecurityGroups ec2:DescribeSecurityGroupRules ec2:CreateSecurityGroup ec2:DeleteSecurityGroup ec2:AuthorizeSecurityGroupIngress ec2:AuthorizeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RevokeSecurityGroupEgress |
| Launch Templates | ec2:DescribeLaunchTemplates ec2:DescribeLaunchTemplateVersions ec2:CreateLaunchTemplate ec2:DeleteLaunchTemplate |
| Instances | ec2:RunInstances |
| Tagging | ec2:CreateTags ec2:DeleteTags |
EKS Permissions
| Category | Required Permissions |
|---|---|
| Cluster Management | eks:CreateCluster eks:DescribeCluster |
| Node Groups | eks:CreateNodegroup eks:DescribeNodegroup |
| Add-ons | eks:CreateAddon eks:DescribeAddon eks:DescribeAddonVersions eks:DeleteAddon eks:ListAddons |
| Pod Identity Associations | eks:CreatePodIdentityAssociation eks:DescribePodIdentityAssociation eks:DeletePodIdentityAssociation eks:ListPodIdentityAssociations |
| Tagging | eks:TagResource |
IAM Permissions
| Category | Required Permissions |
|---|---|
| Roles & Policies | iam:CreateRole iam:DeleteRole iam:TagRole iam:GetRole iam:ListRoles iam:AttachRolePolicy iam:DetachRolePolicy iam:ListRolePolicies iam:ListAttachedRolePolicies |
| Policies | iam:CreatePolicy iam:DeletePolicy iam:TagPolicy iam:GetPolicy iam:GetPolicyVersion iam:ListPolicyVersions |
| Instance Profiles | iam:CreateInstanceProfile iam:DeleteInstanceProfile iam:TagInstanceProfile iam:GetInstanceProfile iam:AddRoleToInstanceProfile iam:RemoveRoleFromInstanceProfile iam:ListInstanceProfilesForRole |
| Service-linked Role | iam:CreateServiceLinkedRole |
S3 Permissions
| Required Permissions |
|---|
| s3:ListBucket |
| s3:PutEncryptionConfiguration |
| s3:GetEncryptionConfiguration |
KMS Permissions
| Required Permissions |
|---|
| kms:CreateKey |
| kms:PutKeyPolicy |
| kms:GetKeyPolicy |
Jump box or local machine
A dedicated EC2 instance (RHEL 10, Debian 12/13) for deployment.
AWS Account Details
A valid AWS account where Amazon EKS will be deployed. The AWS account ID and AWS region must be identified in advance, as all resources will be provisioned in the selected region.
Service Quotas
Verify that the AWS account has sufficient service quotas to support the deployment. At a minimum, ensure adequate limits for the following:
- EC2 instances based on node group size and instance types.
- VPC and networking limits, including subnets, route tables, and security groups.
- Elastic IP addresses and Load balancers.
If required, request quota increases through the AWS Service Quotas console before proceeding.
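Quota checks can also be scripted with the Service Quotas API. The sketch below is assumption-laden: the quota codes shown (`L-1216C47A` for running On-Demand Standard instances and `L-0263D0A3` for EC2-VPC Elastic IPs) are common defaults, but verify them in the Service Quotas console for your account before relying on them.

```shell
# Quota codes to inspect; verify these codes in the Service Quotas console.
EC2_QUOTA_CODE="L-1216C47A"   # Running On-Demand Standard instances (assumed code)
EIP_QUOTA_CODE="L-0263D0A3"   # EC2-VPC Elastic IPs (assumed code)

# Only query when the CLI is installed and credentials work.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  for code in "$EC2_QUOTA_CODE" "$EIP_QUOTA_CODE"; do
    aws service-quotas get-service-quota \
      --service-code ec2 \
      --quota-code "$code" \
      --query 'Quota.[QuotaName,Value]' \
      --output text
  done
fi
```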
Service Control Policies (SCPs)
The AWS account must not have SCPs that restrict required permissions. In particular, SCPs must not block the following actions:
- eks:*
- ec2:*
- iam:PassRole
Restrictive SCPs may prevent successful cluster creation and resource provisioning.
Virtual Private Cloud (VPC)
- An existing VPC must be available in the target AWS region.
- The VPC should be configured to support Amazon EKS workloads.
Subnet Requirements
- At least two private subnets must be available.
- Subnets must be distributed across two or more Availability Zones (AZs).
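The two-subnet, two-AZ requirement can be checked mechanically. To keep the snippet self-contained, it runs the check against a hypothetical sample shaped like `aws ec2 describe-subnets` output (the subnet IDs and AZ names are made up); point the same pipeline at real CLI output to verify your VPC.

```shell
# Sample output shaped like `aws ec2 describe-subnets` (hypothetical values).
SUBNETS_JSON='{
  "Subnets": [
    {"SubnetId": "subnet-0aaa", "AvailabilityZone": "us-east-1a"},
    {"SubnetId": "subnet-0bbb", "AvailabilityZone": "us-east-1b"}
  ]
}'

# Count the distinct Availability Zones referenced by the subnets.
AZ_COUNT=$(printf '%s\n' "$SUBNETS_JSON" \
  | grep -o '"AvailabilityZone": "[^"]*"' \
  | sort -u \
  | wc -l)

if [ "$AZ_COUNT" -ge 2 ]; then
  echo "OK: subnets span $AZ_COUNT Availability Zones"
else
  echo "FAIL: need subnets in at least 2 Availability Zones"
fi
```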
Specify an AWS Region other than us-east-1
By default, the installation deploys resources in the us-east-1 AWS Region. The AWS Region is currently hardcoded in the Terraform configuration and must be manually updated to deploy to a different region.
Note: The AWS Region is defined in the `iac_setup/scripts/iac/variables.tf` file.
To update the AWS Region, perform the following steps:
1. Open the `variables.tf` file in a text editor.
2. Locate the text `default = "us-east-1"`.
3. Replace `us-east-1` with the required AWS Region, for example, `"us-west-1"`.
4. Save the file.
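The edit can also be made non-interactively with `sed`. The demo below operates on a throwaway file standing in for `iac_setup/scripts/iac/variables.tf`, so it is safe to run anywhere; the target region `us-west-1` is just an example.

```shell
# Throwaway stand-in for iac_setup/scripts/iac/variables.tf
TMP_TF=$(mktemp)
cat > "$TMP_TF" << 'EOF'
variable "region" {
  default = "us-east-1"
}
EOF

# Swap the hardcoded default region for the desired one.
sed -i 's/default = "us-east-1"/default = "us-west-1"/' "$TMP_TF"

# Capture the result and clean up the demo file.
NEW_DEFAULT=$(grep -o 'us-west-1' "$TMP_TF")
rm -f "$TMP_TF"
echo "region is now: $NEW_DEFAULT"
```

Applied to the real `variables.tf`, the same substitution replaces the hardcoded region in place (note that `sed -i` takes a backup-suffix argument on BSD/macOS `sed`).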
Additional Step for Regions Outside North America
If you are deploying in an AWS Region outside North America, the OS image configuration must also be updated.
1. In the same `variables.tf` file, locate the text `default = "BOTTLEROCKET_x86_64_FIPS"`.
2. Update the value to `default = "BOTTLEROCKET_x86_64"`.
3. Save the file.
Creating AWS KMS Key and S3 Bucket
Amazon S3 Bucket: An Amazon S3 bucket is required to store critical data such as backups, configuration artifacts, and restore metadata used during installation and recovery workflows. Using a dedicated S3 bucket helps ensure data durability, isolation, and controlled access during cluster operations.
AWS KMS Key: An AWS KMS customer‑managed key is required to encrypt data stored in the S3 bucket. This ensures that sensitive data is protected at rest and allows customers to manage encryption policies, key rotation, and access control in accordance with their security requirements.
Note: The KMS key must allow access to the IAM roles used by the EKS cluster and related services.
The following section explains how to create the AWS KMS key and S3 bucket. This can be done from the AWS Web UI or by using the script.
- Create a KMS key for backup bucket
The KMS key created is referenced during installation and restore using its KMS ARN, and is validated by the installer.
Before you begin, ensure the following:
- You have access to the AWS account where the KMS key is created.
- The KMS key can be in the same AWS account as the S3 bucket, or in a different (cross-account) AWS account.
- The user running the installer has the `kms:DescribeKey` permission to describe the KMS key. Without this permission, installation and restore fail.
The steps to create a KMS key are available in the AWS documentation at https://docs.aws.amazon.com/. Follow the KMS key creation steps, but make sure to select the following configurations.
On the Key configuration page:
- Select Key type as Symmetric.
- Select Key usage as Encrypt and decrypt.
These settings are required for encrypting and decrypting the S3 objects used by backup and restore operations.
On the Key Administrative Permissions page, select the users or roles that can manage the key. The key administrators do not automatically get permission to encrypt or decrypt data, unless these permissions are explicitly granted.
On the Define key usage permissions page, grant permissions to the principals that will use the key.
The user or role running the installation and restore must have the `kms:DescribeKey` permission. This permission is mandatory because the installer validates the KMS key before proceeding. Without it, the installation or restore procedure fails, especially in cross-account KMS scenarios.
On the Edit key policy - optional page, click Edit.
The KMS key policy controls the access to the encryption key and must be applied before creating the S3 bucket.
Note: If you are using AWS IAM Identity Center (AWS SSO), ensure that the IAM role ARN specified in the KMS key policy includes the full SSO path prefix `aws-reserved/sso.amazonaws.com/`. For example: `arn:aws:iam::<ACCOUNT_ID>:role/aws-reserved/sso.amazonaws.com/<SSO_ROLE_NAME>`. Omitting this path causes KMS key policy creation to fail with an `InvalidArnException`.
The following example shows a key policy that:
- Allows the PPC bootstrap user to verify the KMS key.
- Allows the IAM role to encrypt and decrypt EKS backups.
```bash
cat > kms-key-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Id": "key-resource-policy-0",
  "Statement": [
    {
      "Sid": "Allow KMS administrative actions only, no key usage permissions.",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<<ADMIN_AWS_ACCOUNT>>:root"
      },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Allow user running bootstrap.sh script of the PPC to verify the KMS key.",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<<SSO_OR_IAM_USER_ACCOUNT_ARN>>"
      },
      "Action": "kms:DescribeKey",
      "Resource": "*"
    },
    {
      "Sid": "Allow backup recovery utility and EKS node roles KMS key usage permissions. Replace <<CLUSTER_NAME>> with the name of your EKS cluster.",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<<DEPLOYMENT_AWS_ACCOUNT>>:root"
      },
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:ReEncryptFrom",
        "kms:ReEncryptTo",
        "kms:GenerateDataKey",
        "kms:GenerateDataKeyWithoutPlaintext",
        "kms:GenerateDataKeyPair",
        "kms:GenerateDataKeyPairWithoutPlaintext",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Condition": {
        "ArnLike": {
          "aws:PrincipalArn": [
            "arn:aws:iam::<<DEPLOYMENT_AWS_ACCOUNT>>:role/eks-<<CLUSTER_NAME>>-backup-recovery-utility-role",
            "arn:aws:iam::<<DEPLOYMENT_AWS_ACCOUNT>>:role/eks-<<CLUSTER_NAME>>-node-role"
          ]
        }
      }
    }
  ]
}
EOF
```
Update the values of the following placeholders based on the environment:
- `DEPLOYMENT_AWS_ACCOUNT` - AWS account ID.
- `CLUSTER_NAME` - EKS cluster name.
- `SSO_OR_IAM_USER_ACCOUNT_ARN` - ARN of the IAM role used to run the bootstrap script. The ARN format depends on your authentication method:
  - IAM role: use the ARN returned by `aws sts get-caller-identity`.
  - AWS SSO (IAM Identity Center): convert the session ARN returned by `aws sts get-caller-identity` to a full IAM role ARN before using it in the KMS key policy.

Note: If you are using AWS SSO (IAM Identity Center), the ARN returned by `aws sts get-caller-identity` is a session ARN and cannot be used directly in an AWS KMS key policy. AWS KMS requires the full IAM role ARN, including the `aws-reserved/sso.amazonaws.com/` path. Without this, KMS key policy creation fails with an `InvalidArnException`.
Retrieving the IAM role ARN for KMS key policy
To identify the role used to run the bootstrap script, run the following command:
```bash
aws sts get-caller-identity --query Arn --output text
```
- IAM role: Use the returned ARN directly, for example `arn:aws:iam::<DEPLOYMENT_AWS_ACCOUNT>:role/your-role-name`.
- AWS SSO (IAM Identity Center): The command returns a session ARN, which must be converted.
  - Do not use the session ARN: `arn:aws:sts::<<DEPLOYMENT_AWS_ACCOUNT>>:assumed-role/AWSReservedSSO_PermissionSetName_abc123/john.doe@company.com`
  - Use the converted IAM role ARN in the KMS policy: `arn:aws:iam::<<DEPLOYMENT_AWS_ACCOUNT>>:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_PermissionSetName_abc123`

To convert:
- Replace `arn:aws:sts::` with `arn:aws:iam::`.
- Replace `assumed-role/` with `role/aws-reserved/sso.amazonaws.com/`.
- Remove the session suffix (everything after the last `/`).
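The three conversion steps can be expressed as a single `sed` pipeline. This is a sketch: the helper name, account ID, and permission-set name below are made-up examples.

```shell
# Convert an STS session ARN (from `aws sts get-caller-identity`) into the
# IAM role ARN form that AWS KMS key policies require for SSO roles.
sso_session_to_role_arn() {
  printf '%s\n' "$1" | sed \
    -e 's|^arn:aws:sts::|arn:aws:iam::|' \
    -e 's|:assumed-role/|:role/aws-reserved/sso.amazonaws.com/|' \
    -e 's|/[^/]*$||'   # drop the session suffix after the last "/"
}

SESSION_ARN='arn:aws:sts::123456789012:assumed-role/AWSReservedSSO_Admin_abc123/john.doe@company.com'
ROLE_ARN=$(sso_session_to_role_arn "$SESSION_ARN")
echo "$ROLE_ARN"
```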
Important: Before initiating a restore, review and update the KMS key policy to reference the restore cluster's name (`CLUSTER_NAME`). Even if the policy was already configured for the source cluster, it must be updated for the new restore cluster. If the policy continues to reference the source cluster name, the IAM role created during restore cannot decrypt the backup data, causing the restore to fail.
After the KMS key is created, note the KMS key ARN. This KMS key ARN is required while creating the S3 backup bucket.
- Create an AWS S3 Bucket encrypted with SSE‑KMS
The S3 bucket encrypted with SSE‑KMS is used as a backup bucket during installation and restore.
Before you begin, ensure the following:
- You have access to the AWS account where the S3 bucket will be created.
- You have permission to create S3 buckets.
- The user running the installer has permission to describe the KMS key. Without this permission, installation and restore fail.
The steps to create an AWS S3 bucket are available in the AWS documentation at https://docs.aws.amazon.com/. Follow the S3 bucket creation steps, but ensure that you set the following configurations.
In the Default Encryption section:
- Select Encryption type as Server-side encryption with AWS Key Management Service keys (SSE-KMS).
- Select the AWS KMS key ARN. If the KMS key is in a different AWS account than the S3 bucket, the key does not appear in the AWS console dropdown; in this case, enter the KMS key ARN manually.
- Enable Bucket Key.
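For reference, the same SSE-KMS configuration can be applied from the CLI. This is a hedged sketch: the bucket name and KMS key ARN are placeholders, and the AWS calls are guarded so they only run when the CLI and credentials are available.

```shell
BUCKET_NAME="my-ppc-backup-bucket"                                    # placeholder
KMS_KEY_ARN="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"   # placeholder

# Default-encryption rule: SSE-KMS with the backup key, Bucket Key enabled.
ENC_CONFIG='{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "'"$KMS_KEY_ARN"'"
      },
      "BucketKeyEnabled": true
    }
  ]
}'

if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws s3api create-bucket --bucket "$BUCKET_NAME"
  aws s3api put-bucket-encryption \
    --bucket "$BUCKET_NAME" \
    --server-side-encryption-configuration "$ENC_CONFIG"
fi
```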
Automating AWS KMS Key and S3 Bucket Creation
This section describes how to use the optional resiliency initialization script to automatically create an AWS KMS key and an encrypted S3 bucket. This script can be used only after downloading and extracting the PCT.
The S3 bucket and KMS key are created in the same AWS account when using this script. Cross-account KMS configurations are not supported with this script; for cross-account KMS configurations, follow the steps in the Using AWS Web UI tab.
This automated approach is an alternative to the manual creation of the S3 bucket and KMS key using the AWS Web UI. Running this script is optional and not required for standard setup.
Before running the script, ensure the following:
- You have permissions to:
- Create S3 buckets.
- Create AWS KMS keys.
- Modify KMS key policies.
- AWS credentials can be configured during script execution.
If required permissions are missing, the script fails during readiness checks.
The resiliency initialization script automates the following tasks:
- Creates an AWS KMS key.
- Creates an S3 bucket.
- Associates the S3 bucket with the KMS key.
- Enables encryption on the S3 bucket.
- Outputs the S3 bucket ARN and KMS key ARN for future reference.
The script is available in the extracted build under the bootstrap-scripts directory. Run the script from the bootstrap-scripts directory to view a list of available parameters and options.
```bash
cd <extracted_folder>/bootstrap-scripts
./init-resiliency.sh --help
```
The following parameters are mandatory when running the resiliency script:
- AWS region
- EKS cluster name
The EKS cluster name is required because:
- It identifies and authorizes an IAM role.
- The IAM role is referenced in the KMS key policy.
- The same cluster name must also be provided in the bootstrap script. If the cluster name differs between this script and the bootstrap script, backup operations fail.
Note: Before running the bootstrap or resiliency scripts as the root user on RHEL, ensure that /usr/local/bin (and the AWS CLI binary path, if applicable) is included in the $PATH. Alternatively, run the script using a non-root user (such as ec2-user) where /usr/local/bin is already part of the default PATH.
Run the following command to initiate AWS KMS Key and S3 bucket creation:
```bash
./bootstrap-scripts/init-resiliency.sh --aws-region <AWS_region> --bucket-name <backup_bucket_name> --cluster-name <EKS_cluster_name>
```
The script prompts for AWS access key, secret key, and session token.
After running the script, the following confirmation message appears:
```
Do you want to proceed with creating the S3 bucket and KMS key? (yes/no) :
```
Type `yes` to proceed with creating the S3 bucket and AWS KMS key.
After the setup is complete, the output displays details of the generated S3 bucket ARN and the KMS key ARN. Note these values for future reference.
1.2 - Preparing for PPC deployment
This section describes the steps to download and extract the recipe for deploying the PPC.
Note: If you have set up the jump box previously, run the `make clean` command from the `/deployment/iac_setup/` directory. This ensures that the local repository on the jump box and the clusters are cleaned up before proceeding with a new installation.
Warning: Do not install or manage multiple clusters from the same working directory. Each cluster deployment maintains its own Terraform/OpenTofu state, and reusing a directory can overwrite state files, causing loss of cluster tracking and unintended cleanup behavior.
Use a dedicated directory (and, where possible, a dedicated jump box) per cluster, and always verify the active kubectl context before running cleanup commands such as `make clean`.
Log in to the My.Protegrity portal.
Navigate to Product Management > Explore Products > AI Team Edition.
From the Release list, select a release version.
From Platform and Feature Installation, click the Download Product icon.
Create a `deployment` directory on the jump box.
```bash
mkdir deployment && cd deployment
```
Copy the archive to the `deployment` directory on the jump box.
Extract the archive.
```bash
tar -xvf PPC-K8S-64_x86-64_AWS-EKS_1.0.0.x
```
1.3 - Deploying PPC
Before you begin
Before running the bootstrap or resiliency scripts as the root user on RHEL, ensure that /usr/local/bin (and the AWS CLI binary path, if applicable) is included in the $PATH. Alternatively, run the script using a non-root user (such as ec2-user) where /usr/local/bin is already part of the default PATH.
By default, the installation is configured to use the us-east-1 AWS region. If you plan to install the product in a different region, update the region value in the iac_setup/scripts/iac/variables.tf file before starting the installation.
For more information on updating the AWS region, refer to Specify an AWS Region other than
us-east-1.
The repository provides a bootstrap script that automatically installs or updates the following software on the jump box:
- AWS CLI - Required to communicate with your AWS account.
- OpenTofu - Required to manage infrastructure as code.
- kubectl - Required to communicate with the Kubernetes cluster.
- Helm - Required to manage Kubernetes packages.
- Make - Required to run the OpenTofu automation scripts.
- jq - Required to parse JSON.
The bootstrap script also checks if you have the required permissions on AWS. It then sets up the EKS cluster and installs the microservices required for deploying the PPC.
The bootstrap script asks for variables to be set to complete your deployment. Follow the instructions on the screen:
```bash
./bootstrap.sh
```
The script prompts for the following variables.
Enter Cluster Name
The following characters are allowed:
- Lowercase letters: `a-z`
- Numbers: `0-9`
- Hyphens: `-`

The following are not allowed:
- Uppercase letters: `A-Z`
- Underscores: `_`
- Spaces
- Special characters such as: `/ ? * + % ! @ # $ ^ & ( ) = [ ] { } : ; , .`
- Leading or trailing hyphens
- More than 31 characters

Note: Ensure that the cluster name does not exceed 31 characters. Cluster names longer than this limit can cause the bootstrap script to fail in subsequent installation steps. If the installation fails because the cluster name exceeds the 31-character limit:
- Correction: Choose a cluster name with 31 characters or fewer.
- Retry: Run the installation command again with the updated name. The script automatically applies the update and proceeds with the bootstrap process.
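The naming rules above can be pre-checked before running the bootstrap script. A minimal sketch (the helper function name is ours, not part of the installer):

```shell
# Returns 0 when the name is 1-31 chars of lowercase letters, digits, and
# hyphens, with no leading or trailing hyphen; returns 1 otherwise.
valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,29}[a-z0-9])?$'
}

for name in "my-cluster-01" "My_Cluster" "-leading-hyphen" \
            "a-name-that-is-definitely-longer-than-31-chars"; do
  if valid_cluster_name "$name"; then
    echo "valid:   $name"
  else
    echo "invalid: $name"
  fi
done
```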
Enter a VPC ID from the table
The script automatically retrieves the available VPCs. Enter the VPC ID where the cluster must be created.

Querying for subnets in VPC
The script queries the available VPC subnets and prompts for two private subnet IDs. Specify two private subnet IDs from different Availability Zones. The script then automatically updates the VPC CIDR block based on the VPC details.

Enter FQDN
This is the Fully Qualified Domain Name for the ingress.

Warning: Ensure that the FQDN does not exceed 50 characters and contains only the following characters:
- Lowercase letters: `a-z`
- Numbers: `0-9`
- Special characters: `-` and `.`
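A similar pre-check can be applied to the FQDN. The helper name is again illustrative, and it enforces only the two documented rules (50-character limit and allowed character set):

```shell
# Returns 0 when the FQDN is 1-50 chars drawn from a-z, 0-9, '-' and '.'.
valid_ingress_fqdn() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9.-]{1,50}$'
}

valid_ingress_fqdn "ppc.example.com" && echo "ok: ppc.example.com"
valid_ingress_fqdn "PPC.Example.com" || echo "rejected: uppercase not allowed"
```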
Enter S3 Backup Bucket Name
An AWS S3 bucket encrypted with SSE‑KMS for storing backup data for disaster recovery.
Use a dedicated S3 bucket per cluster for backup and restore operations to ensure data and encryption isolation. Sharing a bucket across clusters increases the risk of cross-cluster data access or decryption due to IAM misconfiguration. Dedicated buckets with unique IAM policies eliminate this risk.
During disaster management, OpenSearch restores only those snapshots that are created using the daily-insight-snapshots policy. For more information, refer to Backing up and restoring indexes.
Enter Image Registry Endpoint
The image repository from which the container images are retrieved. Use `registry.protegrity.com:9443` for the Protegrity Container Registry (PCR), or the endpoint of your local repository.
Expected format: a registry FQDN with an optional `[:port]`. Do not include `https://`.
Note: The container registry endpoint must be an FQDN (Fully Qualified Domain Name). Sub-paths, such as my-registry.com/v2/path, are not supported by the OCI distribution specification.
Enter Registry Username
Enter the username for the registry specified in the previous step. Leave this entry blank if the registry does not require authentication.

Enter Registry Password or Access Token
Enter the password or access token for the registry. Input is masked with `*` characters. Press Enter to keep the current value. Leave this entry blank if the registry does not require authentication.

After providing all the information, the following confirmation message appears.
```
Configuration updated successfully.
Would you like to proceed with the setup now?
Proceed? (yes/no):
```
Type yes to initiate the setup.
Note: The cluster creation process can take 10-15 minutes.
If the session is terminated during installation due to network issues, power outage, and so on, then the installation stops. To restart the installation, run the following commands:
# Navigate to setup directory
cd iac_setup
# Clean up all resources
make clean
# Re-run the bootstrap script
./bootstrap.sh
Warning: Do not install or manage multiple clusters from the same working directory. Each cluster deployment maintains its own Terraform/OpenTofu state, and reusing a directory can overwrite state files, causing loss of cluster tracking and unintended cleanup behavior.
Use a dedicated directory, and where possible a dedicated jump box, per cluster, and always verify the active kubectl context before running cleanup commands such as make clean.
To check the active kubectl context, run the following command:

kubectl config current-context
2 - Accessing PPC using a Linux machine
Before you begin
Ensure that the following prerequisites are met.
- A Linux machine is available and running.
- AWS CLI is installed and configured.
- Kubernetes command-line tool is installed.
Perform the following steps to access PPC using a separate Linux machine.
Log in to Linux machine with root credentials.
Configure AWS credentials, using the following command.
aws configure

Verify that the AWS credentials are working, using the following command.

aws sts get-caller-identity

If the Kubernetes command-line tool is not available, then install it using the following command.

kubectl version --client 2>/dev/null || {
  echo "Installing kubectl..."
  curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
  chmod +x kubectl
  sudo mv kubectl /usr/local/bin/
  kubectl version --client
}

Set up the Kubernetes command-line tool and access the cluster, using the following command.

aws eks update-kubeconfig --region <region_name> --name <cluster_name>

Verify the access to the cluster, using the following command.
kubectl get nodes
3 - Installing Features and Protectors
Before you begin
Ensure that PPC is successfully installed before installing the features or protectors.
Installing Features
The following table lists the available features.
| Feature | Description |
|---|---|
| Data Discovery | Installing Data Discovery |
| Semantic Guardrails | Installing Semantic Guardrails |
| Protegrity Agent | Installing Protegrity Agent |
| Anonymization | Installing Anonymization |
| Synthetic Data | Installing Synthetic Data |
Installing Protectors
The following table lists the available protectors.
| Protector | Description |
|---|---|
| Application Protector | Installing Application Protector |
| Repository Protector | Installing Repository Protector |
| Application Protector Java Container | Installing Application Protector Java Container |
| Rest Container | Installing Rest Container |
| Cloud Protector | Installing Cloud Protector |
4 - Login to PPC
4.1 - Prerequisites
Use Route 53 configuration on AWS to resolve the PPC FQDN specified during the installation to the internal load balancer.
- Ensure that the instance is using the AWS-provided DNS server, such as VPC CIDR + 2.
- Verify that enableDnsHostnames and enableDnsSupport are set to true in the VPC settings.
- Verify the Security Group of the load balancer. Ensure that inbound traffic is allowed on the required ports, such as 80 and 443, from the client instance's IP or Security Group.
- Keep the following information ready:
- VPC ID: The ID of the VPC for the client instances and the Load Balancer. For example, vpc-0123456789.
- Internal ELB DNS Name: The DNS name of the load balancer. For example, internal-abcdefghi123456-123456789.us-east-1.amazonaws.com.
- Target FQDN: The FQDN for PPC. For example, mysite.aws.com.
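The two VPC DNS attributes listed above can be verified from the CLI. A sketch, assuming the standard `describe-vpc-attribute` response shape; the VPC ID in the usage lines is the example from the list:

```shell
# Reads an `aws ec2 describe-vpc-attribute` JSON response on stdin and
# succeeds only when the attribute's Value is true.
dns_attr_enabled() {
  grep -q '"Value": *true'
}

# Usage against a live VPC (requires AWS credentials):
# aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789 --attribute enableDnsSupport | dns_attr_enabled
# aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789 --attribute enableDnsHostnames | dns_attr_enabled
```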
Find the AWS Load Balancer address.
kubectl get gateway -A

The output appears similar to the following:

NAMESPACE     NAME       CLASS   ADDRESS                                                          PROGRAMMED   AGE
api-gateway   pty-main   envoy   internal-abcdefghi123456-123456789.us-east-1.elb.amazonaws.com   True

Map the PPC FQDN to the load balancer using Route 53.
For more information about configuring Route 53, refer to the AWS documentation.
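As an illustration, the example FQDN can be mapped to the example load balancer with a CNAME record. The change batch below is a sketch using the sample values from the prerequisites; substitute your own FQDN, load balancer DNS name, and hosted zone:

```json
{
  "Comment": "Map the PPC FQDN to the internal load balancer",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "mysite.aws.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "internal-abcdefghi123456-123456789.us-east-1.elb.amazonaws.com" }
        ]
      }
    }
  ]
}
```

Saved as a file (the name ppc-record.json is illustrative), it can be applied with: aws route53 change-resource-record-sets --hosted-zone-id <hosted-zone-id> --change-batch file://ppc-record.json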
4.2 - Log in to PPC
Access the PPC using the FQDN provided during the installation process.
Enter the username and password for the admin user to log in and view the Insight Dashboard.
If Protegrity Agent is installed, then the Protegrity Agent dashboard appears. Click Insight to open the Insight Dashboard. For more information about Protegrity Agent, refer to Using Protegrity Agent.
5 - Accessing the PPC CLI
5.1 - Prerequisites
To access the PPC CLI, ensure that the following prerequisites are met.
SSH Keys: The SSH private key that corresponds to the public key configured in the pty-cli pod is required.

Network Access: Ensure that you have network connectivity to the cluster.
Resolve FQDN: Use Route 53 configuration on AWS to resolve the PPC FQDN specified during the installation to the internal load balancer. For more information, refer to Prerequisites.
For Linux/macOS Users
The private key to access the CLI pod will be in the /deployment/keys directory. The key file is authorized_keys.
From the /deployment/keys directory:
ssh -i authorized_keys -p 22 ptyitusr@<user-provided-fqdn>
With options to skip host key checking:
ssh -i authorized_keys -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 22 ptyitusr@<user-provided-fqdn>
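The options above can also be captured once in ~/.ssh/config so that the connection becomes simply `ssh ppc-cli`. A sketch; the `ppc-cli` host alias is our own, and the FQDN placeholder is the one from the command above:

```
Host ppc-cli
    HostName <user-provided-fqdn>
    User ptyitusr
    Port 22
    IdentityFile /deployment/keys/authorized_keys
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```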
For Windows Users
The private key to access the CLI pod will be in the /deployment/keys directory. The key file is authorized_keys. Copy the key file to a directory on the local Windows machine.
Using Windows SSH Client (Windows 10/11 with OpenSSH):
ssh -i C:\path\to\copied\file\authorized_keys -p 22 ptyitusr@<user-provided-fqdn>

Using PuTTY:
- Host Name: <user-provided-fqdn>
- Port: 22
- Connection Type: SSH
- Under Connection > SSH > Auth, browse and select your private key file (.ppk format)
- Username: ptyitusr
5.2 - Accessing the PPC CLI
Once connected, the Protegrity CLI welcome banner displays. Enter the following parameters when prompted:
- Username: Application username
- Password: Application password
For more information about the default credentials, refer to the Release Notes.
The CLI supports two main command categories:
- pim: Policy Information Management commands for data protection policies
- admin: User, Role, Permission, and Group management commands
Note: Ensure that at least one additional backup administrator user is configured with the same administrative privileges as the primary admin user.
If the primary admin account is locked or its credentials are lost, restoring the system from a backup may be the only recovery option.
6 - Deleting PPC
Uninstalling Features and Protectors
To uninstall features and protectors, refer to the relevant documentation.
Cleaning up the EKS Resources
To destroy all created resources, including the EKS cluster and related components, run the following commands.
# Navigate to the setup directory
cd iac_setup
# Clean up all resources
make clean
Executing this command destroys the PPC and all related components.
7 - Restoring the PPC
Before you begin
Before starting a restore, ensure the following conditions are met:
An existing backup is available. Backups are taken automatically as part of the default installation using scheduled backup mechanisms. These backups are stored in an AWS S3 bucket configured during the original installation.
Access to the original backup AWS S3 bucket. During restore, the same S3 bucket that was used during the original installation must be specified.
Before initiating the restore, review and update the KMS key policy to reflect the restore cluster name. Even if the policy was already configured for the source cluster, it must be updated for the new restore cluster. If the policy continues to reference the source cluster name, the IAM role created during restore cannot decrypt the backup data, causing the restore to fail.
Permissions to read from the S3 bucket. The user performing the restore must have sufficient permissions to access the backup data stored in the bucket.
A new Kubernetes cluster is created. Restore is performed as part of creating a new cluster, not on an existing one. Restore is only supported during a fresh installation flow.
While the backup is taken from the source cluster, do not perform Create, Read, Update, or Delete (CRUD) operations on the source cluster. This ensures backup consistency and prevents data corruption during restore.
Before restoring to a new cluster, if the source cluster is accessible, disable the backup operations on the source cluster by setting the backup storage location to read‑only. This ensures that no additional backup data is written during the restore process.
To disable the backup operation on the source cluster, run the following command:
kubectl patch backupstoragelocation default -n pty-backup-recovery --type merge -p '{"spec":{"accessMode":"ReadOnly"}}'

If the source cluster is not accessible, this step can be skipped.
During disaster management, the backup data is used to restore the cluster and the OpenSearch indexes using snapshots. However, Insight restores OpenSearch data only from the most recent snapshot created by the daily-insight-snapshots policy.
For more information, refer to Backing up and restoring indexes.
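The KMS key policy update described above is account-specific, so the following is only a hypothetical sketch: the Sid, account ID, and role-name pattern are placeholders and assumptions, not the literal policy PPC creates. The point it illustrates is that the principal must reference the IAM role tied to the new restore cluster name, not the source cluster:

```json
{
  "Sid": "AllowRestoreClusterDecrypt",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::<account-id>:role/<restore-cluster-name>-backup-role"
  },
  "Action": [
    "kms:Decrypt",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}
```

If such a statement still names the source cluster's role, the restore cluster cannot decrypt the backup data and the restore fails.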
Warning: Do not install or manage multiple clusters from the same working directory. Each cluster deployment maintains its own Terraform/OpenTofu state, and reusing a directory can overwrite state files, causing loss of cluster tracking and unintended cleanup behavior.
Use a dedicated directory, and where possible a dedicated jump box, per cluster, and always verify the active kubectl context before running cleanup commands such as make clean.
The repository provides a bootstrap script that automatically installs or updates the following software on the jump box:
- AWS CLI - Required to communicate with your AWS account.
- OpenTofu - Required to manage infrastructure as code.
- kubectl - Required to communicate with the Kubernetes cluster.
- Helm - Required to manage Kubernetes packages.
- Make - Required to run the OpenTofu automation scripts.
- jq - Required to parse JSON.
The bootstrap script also checks if you have the required permissions on AWS. It then sets up the EKS cluster and installs the microservices required for deploying the PCS on a PPC.
Note: Before running the bootstrap or resiliency scripts as the root user on RHEL, ensure that /usr/local/bin (and the AWS CLI binary path, if applicable) is included in the $PATH. Alternatively, run the script using a non-root user (such as ec2-user) where /usr/local/bin is already part of the default PATH.
Run the following command to initiate restore using an existing backup:
./bootstrap.sh --restore
The bootstrap script asks for variables to be set to complete the deployment. Follow the instructions on the screen.
The --restore command enables the restore mode for the installation. It initiates restoration of data from the configured backup bucket. This process must be followed on a fresh installation.
The script prompts for the following variables.
Enter Cluster Name
- Ensure that the cluster name does not match the name of the source cluster. Reusing an existing cluster name during restore can lead to discrepancies during cluster installation.
- This same cluster name must already be updated in the KMS key policy. If this update is not performed, the restore process fails because the new cluster cannot decrypt the backup data.
- Ensure that the cluster name does not exceed 31 characters. Cluster names longer than this limit can cause the bootstrap script to fail in subsequent installation steps.
If the installation fails because the cluster name exceeds the 31-character limit, correct the name and re-run the script.
  - Correction: Choose a cluster name with 31 characters or fewer.
  - Retry: Execute the installation command again with the updated name. The script automatically handles the update and proceeds with the bootstrap process.
The following characters are allowed:
- Lowercase letters: a-z
- Numbers: 0-9
- Hyphens: -
The following characters are not allowed:
- Uppercase letters: A-Z
- Underscores: _
- Spaces
- Any special characters such as: / ? * + % ! @ # $ ^ & ( ) = [ ] { } : ; , .
- Leading or trailing hyphens
- More than 31 characters
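The naming rules above can be checked locally before starting the restore. An illustrative sketch (the function name is ours):

```shell
# Enforces the documented cluster-name rules: 1-31 characters, lowercase
# letters, numbers, and hyphens only, with no leading or trailing hyphen.
valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,29}[a-z0-9])?$'
}
```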
Enter a VPC ID from the table
The script automatically retrieves the available VPCs. Enter the VPC ID where the cluster must be created.
Querying for subnets in VPC
The script automatically queries for the available VPC subnets and prompts to enter two private subnet IDs. Specify two private subnet IDs from different availability zones.
The script then automatically updates the VPC CIDR block based on the VPC details.

Enter FQDN
This is the Fully Qualified Domain Name for the ingress.
Ensure only the following characters are used:
- Lowercase letters: a-z
- Numbers: 0-9
- Special characters: - .
Enter S3 Backup Bucket Name
An AWS S3 bucket encrypted with SSE‑KMS containing backup artifacts used during the restore process.
Use a dedicated S3 bucket per cluster for backup and restore operations to ensure data and encryption isolation. Sharing a bucket across clusters increases the risk of cross-cluster data access or decryption due to IAM misconfiguration. Dedicated buckets with unique IAM policies eliminate this risk.
Enter Image Registry Endpoint
The image repository from where the container images are retrieved.
Expected format:
[:port]. Do not include 'https://'.

Note: The container registry endpoint must be an FQDN (Fully Qualified Domain Name). Sub-paths, such as my-registry.com/v2/path, are not supported by the OCI distribution specification.
Enter Registry Username
Enter the username for the registry mentioned in the previous step. Leave this entry blank if the registry does not require authentication.
Enter Registry Password or Access Token
Enter Password or Access Token for the registry. Input is masked with * characters. Press Enter to keep the current value. Leave this entry blank if the registry does not require authentication.
After providing all information, the following confirmation message appears.
Configuration updated successfully.
Would you like to proceed with the setup now?
Proceed? (yes/no):

Type yes to initiate the setup.
During restore, the script prompts to manually select a backup from the available backups stored in the S3 bucket. User input is required to either restore from the latest backup or choose a specific backup from the list.
Restore from latest backup? [Y/n]
- Enter Y to restore from the most recent backup.
- Enter n to manually select a backup.
If you choose to manually select a backup, then the script displays a list of available backups (latest first) and prompts to select one by number:
Available backups (latest first):
[1] authnz-postgresql-schedule-backup-<timestamp>
[2] authnz-postgresql-schedule-backup-<timestamp>
Select a backup number:
After entering the backup number, the chosen backup is used for the restore, and the installation continues.
Note: The cluster creation process can take 10-15 minutes.
If the session is terminated during restore due to network issues, power outage, and so on, then the restore stops. To restart the process, run the following commands:
# Navigate to setup directory
cd iac_setup
# Clean up all resources
make clean
# Re-run the bootstrap script in restore mode
./bootstrap.sh --restore
Warning: Do not install or manage multiple clusters from the same working directory. Each cluster deployment maintains its own Terraform/OpenTofu state, and reusing a directory can overwrite state files, causing loss of cluster tracking and unintended cleanup behavior.
Use a dedicated directory, and where possible a dedicated jump box, per cluster, and always verify the active kubectl context before running cleanup commands such as make clean.
To check the active kubectl context, run the following command:

kubectl config current-context
After the restore to the new cluster is completed successfully and all required validation and migration activities are finished, the source cluster can be deleted.