The Protegrity Provisioned Cluster (PPC) is the core framework of the AI Team Edition. It is designed to deliver a modern, cloud-native experience for data security and governance. Built on Kubernetes, PPC uses a containerized architecture that simplifies deployment and scaling. Using OpenTofu scripts and Helm charts, administrators can stand up clusters with minimal manual intervention, ensuring consistency and reducing operational overhead.
Perform the following steps to set up and deploy the PPC:
Ensure that the following prerequisites are met before deploying the PPC.
Updating the Roles and Permissions using JSON
The roles and permissions are updated using JSON policy documents.
From the AWS Console, navigate to IAM > Policies > Create policy > JSON, and create the following JSON policies.
Note: Before using the provided JSON, replace the AWS_ACCOUNT_ID and REGION values with those of the account and region where the resources are being deployed.
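The policy documents themselves are not reproduced here. As a shape reference only, a minimal skeleton might look like the following sketch; the actions are borrowed from the SCP requirements later in this section, and the wildcard Resource is an assumption (the actual policies scope resources using your AWS_ACCOUNT_ID and REGION values):

```shell
# Hypothetical skeleton only; use the actual policy documents supplied
# with the product. Replace AWS_ACCOUNT_ID and REGION as noted above.
cat > ppc-iam-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PpcClusterProvisioning",
      "Effect": "Allow",
      "Action": ["eks:*", "ec2:*", "iam:PassRole"],
      "Resource": "*"
    }
  ]
}
EOF
# Verify the JSON is well-formed before pasting it into the IAM console
python3 -m json.tool ppc-iam-policy.json > /dev/null && echo "JSON OK"
```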
A dedicated EC2 instance (RHEL 10 or Debian 12/13) for deployment.
AWS Account Details
A valid AWS account where Amazon EKS will be deployed.
The AWS account ID and AWS region must be identified in advance, as all resources will be provisioned in the selected region.
Service Quotas
Verify that the AWS account has sufficient service quotas to support the deployment. At a minimum, ensure adequate limits for the following:
EC2 instances based on node group size and instance types.
VPC and networking limits, including subnets, route tables, and security groups.
Elastic IP addresses and Load balancers.
If required, request quota increases through the AWS Service Quotas console before proceeding.
Service Control Policies (SCPs)
The AWS account must not have SCPs that restrict required permissions. In particular, SCPs must not block the following actions:
eks:*
ec2:*
iam:PassRole
Restrictive SCPs may prevent successful cluster creation and resource provisioning.
Virtual Private Cloud (VPC)
An existing VPC must be available in the target AWS region.
The VPC should be configured to support Amazon EKS workloads.
Subnet Requirements
At least two private subnets must be available.
Subnets must be distributed across two or more Availability Zones (AZs).
Specify an AWS Region other than us-east-1
By default, the installation deploys resources in the us-east-1 AWS Region. The AWS Region is currently hardcoded in the Terraform configuration and must be manually updated to deploy to a different region.
Note: The AWS Region is defined in the iac_setup/scripts/iac/variables.tf file.
To update the AWS Region, perform the following steps:
Open the variables.tf file in a text editor.
Locate the text default = "us-east-1".
Replace us-east-1 with the required AWS Region. For example, "us-west-1".
Save the file.
Additional Step for Regions Outside North America
If you are deploying in an AWS Region outside North America, the OS image configuration must also be updated.
In the same variables.tf file, locate the text default = "BOTTLEROCKET_x86_64_FIPS".
Update the value to default = "BOTTLEROCKET_x86_64".
Save the file.
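The edits above can also be scripted. The following sketch demonstrates both substitutions on a scratch copy of variables.tf with assumed contents (the variable names shown are illustrative); in a real deployment, apply the same sed commands to iac_setup/scripts/iac/variables.tf:

```shell
# Create a scratch variables.tf with assumed contents for illustration;
# the variable names here are hypothetical.
cat > variables.tf <<'EOF'
variable "region" {
  default = "us-east-1"
}
variable "ami_type" {
  default = "BOTTLEROCKET_x86_64_FIPS"
}
EOF

# Switch the deployment region (us-west-1 is just an example)
sed -i 's/default = "us-east-1"/default = "us-west-1"/' variables.tf

# For regions outside North America, drop the FIPS OS image variant
sed -i 's/default = "BOTTLEROCKET_x86_64_FIPS"/default = "BOTTLEROCKET_x86_64"/' variables.tf

grep 'default' variables.tf
```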
Creating AWS KMS Key and S3 Bucket
Amazon S3 Bucket: An Amazon S3 bucket is required to store critical data such as backups, configuration artifacts, and restore metadata used during installation and recovery workflows. Using a dedicated S3 bucket helps ensure data durability, isolation, and controlled access during cluster operations.
AWS KMS Key: An AWS KMS customer‑managed key is required to encrypt data stored in the S3 bucket. This ensures that sensitive data is protected at rest and allows customers to manage encryption policies, key rotation, and access control in accordance with their security requirements.
Note: The KMS key must allow access to the IAM roles used by the EKS cluster and related services.
The following section explains how to create an AWS KMS key and an S3 bucket. You can do this from the AWS Web UI or by using the script.
Create a KMS key for the backup bucket
The KMS key is referenced during installation and restore by its KMS ARN, and is validated by the installer.
Before you begin, ensure that you have:
Access to the AWS account where the KMS key is created.
The KMS key can be in the same AWS account as the S3 bucket, or in a different AWS account (cross-account).
The user running the installer must have the kms:DescribeKey permission to describe the KMS key. Without this permission, installation and restore fail.
The steps to create a KMS key are available at https://docs.aws.amazon.com/. Follow the KMS key creation steps, but ensure that you select the following configurations.
On the Key configuration page:
Select Key type as Symmetric.
Select Key usage as Encrypt and decrypt.
These settings are required for encrypting and decrypting S3 objects used by backup and restore operations.
On the Key Administrative Permissions page, select the users or roles that can manage the key. The key administrators do not automatically get permission to encrypt or decrypt data, unless these permissions are explicitly granted.
On the Define key usage permissions page, grant permissions to the principals that will use the key.
The user or role running the installation and restore must have the permission kms:DescribeKey to describe the key. This permission is mandatory because the installer validates the KMS key before proceeding. Without this, the installation or restore procedure fails, especially in cross‑account KMS scenarios.
On the Edit key policy - optional page, click Edit.
The KMS key policy controls the access to the encryption key and must be applied before creating the S3 bucket.
Note: If you are using AWS SSO IAM Identity Center, ensure that the IAM role ARN specified in the KMS key policy includes the full SSO path prefix: aws-reserved/sso.amazonaws.com/. For example: arn:aws:iam::<ACCOUNT_ID>:role/aws-reserved/sso.amazonaws.com/<SSO_ROLE_NAME> Omitting this path results in KMS key policy creation failures with an InvalidArnException.
The following example shows a key policy that:
Allows the PPC bootstrap user to verify the KMS key.
Allows the IAM role to encrypt and decrypt EKS backups.
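The policy document itself is not reproduced here; the following is a hedged sketch of what such a policy could look like. The role name CLUSTER_NAME-backup-role is hypothetical (the actual role is created by the installer and its name is product-specific), and the kms:Encrypt/kms:Decrypt/kms:GenerateDataKey* action set is an assumption based on typical S3 SSE-KMS usage:

```shell
# Hypothetical key policy sketch; substitute real values per the list below.
cat > kms-key-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Id": "ppc-backup-key-policy",
  "Statement": [
    {
      "Sid": "AllowBootstrapUserToVerifyKey",
      "Effect": "Allow",
      "Principal": { "AWS": "SSO_OR_IAM_USER_ACCOUNT_ARN" },
      "Action": "kms:DescribeKey",
      "Resource": "*"
    },
    {
      "Sid": "AllowBackupRoleEncryptDecrypt",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::DEPLOYMENT_AWS_ACCOUNT:role/CLUSTER_NAME-backup-role" },
      "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*" ],
      "Resource": "*"
    }
  ]
}
EOF
# Verify the JSON is well-formed before pasting it into the key policy editor
python3 -m json.tool kms-key-policy.json > /dev/null && echo "policy OK"
```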
Update the values of the following based on the environment:
DEPLOYMENT_AWS_ACCOUNT - AWS account ID.
CLUSTER_NAME - EKS cluster name.
SSO_OR_IAM_USER_ACCOUNT_ARN - ARN of the IAM role used to run the bootstrap script. The ARN format depends on your authentication method:
IAM role – Use the ARN returned by aws sts get-caller-identity.
AWS SSO (IAM Identity Center) – Convert the session ARN returned by aws sts get-caller-identity to a full IAM role ARN before using it in the KMS key policy.
Note: If you are using AWS SSO (IAM Identity Center), the ARN returned by aws sts get-caller-identity is a session ARN and cannot be used directly in an AWS KMS key policy. AWS KMS requires the full IAM role ARN, including the aws-reserved/sso.amazonaws.com/ path. Without this, KMS key policy creation fails with InvalidArnException.
Retrieving the IAM role ARN for KMS key policy
To identify the role used to run the bootstrap script, run the following command:
aws sts get-caller-identity --query Arn --output text
In the ARN, replace arn:aws:sts with arn:aws:iam, and replace assumed-role/ with role/aws-reserved/sso.amazonaws.com/.
Remove the session suffix (everything after the last /).
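The steps above can be sketched as a single sed pipeline. Note that the service portion of the ARN changes from arn:aws:sts to arn:aws:iam as well, since aws sts get-caller-identity returns an STS session ARN while the key policy needs the IAM role ARN; the example session ARN below is illustrative:

```shell
# Example SSO session ARN (illustrative values)
SESSION_ARN="arn:aws:sts::123456789012:assumed-role/AWSReservedSSO_AdminAccess_abc123/user@example.com"

# 1. Change the service from sts to iam
# 2. Replace assumed-role/ with role/aws-reserved/sso.amazonaws.com/
# 3. Strip the session suffix (everything after the last /)
ROLE_ARN=$(echo "$SESSION_ARN" \
  | sed 's|:sts:|:iam:|; s|assumed-role/|role/aws-reserved/sso.amazonaws.com/|; s|/[^/]*$||')

echo "$ROLE_ARN"
```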
Important: Before initiating restore, review and update the KMS key policy to reflect the restore CLUSTER_NAME. Even if the policy was already configured for the source cluster, it must be updated for the new restore cluster. If the policy continues to reference the source cluster name, the IAM role created during restore cannot decrypt the backup data, causing the restore to fail.
After the KMS key is created, note the KMS key ARN. This KMS key ARN is required while creating the S3 backup bucket.
Create an AWS S3 bucket encrypted with SSE-KMS
The S3 bucket encrypted with SSE-KMS is used as the backup bucket during installation and restore.
Before you begin, ensure that you have:
Access to the AWS account where the S3 bucket will be created.
Permission to create S3 buckets.
The user running the installer must have permission to describe the KMS key. Without this permission, installation and restore fail.
The steps to create an AWS S3 bucket are available at https://docs.aws.amazon.com/. Follow the S3 bucket creation steps, but ensure that you set the following configurations.
In the Default Encryption section:
Select Encryption type as Server-side encryption with AWS Key Management Service keys (SSE-KMS).
Select the AWS KMS key ARN.
If the KMS key is in a different AWS account than the S3 bucket, then the key will not appear in the AWS console dropdown. In this case, enter the KMS key ARN manually.
Enable Bucket Key.
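The same configuration can also be applied with the AWS CLI. The sketch below only builds and validates the SSE-KMS encryption configuration locally; the commented aws s3api commands (which require valid credentials and a bucket name of your choosing) show how it would be applied:

```shell
# SSE-KMS default encryption rule with Bucket Key enabled;
# KMS_KEY_ARN is a placeholder for the ARN noted earlier.
cat > sse-kms-config.json <<'EOF'
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "KMS_KEY_ARN"
      },
      "BucketKeyEnabled": true
    }
  ]
}
EOF
python3 -m json.tool sse-kms-config.json > /dev/null && echo "config OK"

# In a real environment, apply it with:
#   aws s3api create-bucket --bucket <bucket-name> --region <region> \
#     --create-bucket-configuration LocationConstraint=<region>
#   aws s3api put-bucket-encryption --bucket <bucket-name> \
#     --server-side-encryption-configuration file://sse-kms-config.json
```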
Automating AWS KMS Key and S3 Bucket Creation
This section describes how to use the optional resiliency initialization script to automatically create an AWS KMS key and an encrypted S3 bucket. This script can be used only after downloading and extracting the PCT.
The S3 bucket and KMS key are created in the same AWS account by this script. Cross-account KMS configurations are not supported with this script; for those, follow the steps in the Using AWS Web UI tab.
This automated approach is an alternative to the manual creation of the S3 bucket and KMS key using the AWS Web UI. Running this script is optional and not required for standard setup.
Before running the script, ensure the following:
You have permissions to:
Create S3 buckets.
Create AWS KMS keys.
Modify KMS key policies.
AWS credentials can be configured during script execution.
If required permissions are missing, the script fails during readiness checks.
The resiliency initialization script automates the following tasks:
Creates an AWS KMS key.
Creates an S3 bucket.
Associates the S3 bucket with the KMS key.
Enables encryption on the S3 bucket.
Outputs the S3 bucket ARN and KMS key ARN for future reference.
The script is available in the extracted build under the bootstrap-scripts directory. Run the script from the bootstrap-scripts directory to view a list of available parameters and options.
```bash
cd <extracted_folder>/bootstrap-scripts
./init-resiliency.sh --help
```
The following parameters are mandatory when running the resiliency script:
AWS region
EKS cluster name
The EKS cluster name is required because:
It identifies and authorizes an IAM role.
The IAM role is referenced in the KMS key policy.
The same cluster name must also be provided in the bootstrap script. If the cluster name differs between this script and the bootstrap script, backup operations fail.
Note: Before running the bootstrap or resiliency scripts as the root user on RHEL, ensure that /usr/local/bin (and the AWS CLI binary path, if applicable) is included in the $PATH. Alternatively, run the script using a non-root user (such as ec2-user) where /usr/local/bin is already part of the default PATH.
Run the init-resiliency.sh script with the mandatory parameters to initiate AWS KMS key and S3 bucket creation. The script prompts for the AWS access key, secret key, and session token.
After running the script, the following confirmation message appears.
Do you want to proceed with creating the S3 bucket and KMS key? (yes/no) :
Type yes to proceed with creating the S3 bucket and the AWS KMS key.
After the setup is complete, the output displays details of the generated S3 bucket ARN and the KMS key ARN. Note these values for future reference.
2 - Preparing for PPC deployment
Downloading and Extracting the Recipe for Deploying Protegrity Cluster Template (PCT)
This section describes the steps to download and extract the recipe for deploying the PPC.
Note: If you have set up the jump box previously, then from /deployment/iac_setup/ directory, run the make clean command. This ensures that the local repository on the jump box and the clusters are cleaned up before proceeding with a new installation.
Warning: Do not install or manage multiple clusters from the same working directory. Each cluster deployment maintains its own Terraform/OpenTofu state, and reusing a directory can overwrite state files, causing loss of cluster tracking and unintended cleanup behavior. Use a dedicated directory, and jump box, where possible, per cluster, and always verify the active kubectl context before running cleanup commands such as make clean.
Navigate to Product Management > Explore Products > AI Team Edition.
From the Release list, select a release version.
From Platform and Feature Installation, click the Download Product icon.
Create a deployment directory on the jumpbox.
mkdir deployment && cd deployment
Copy the archive to the deployment directory on the jumpbox.
Extract the archive.
tar -xvf PPC-K8S-64_x86-64_AWS-EKS_1.0.0.x
3 - Deploying PPC
Complete the steps provided in this section to deploy PPC in AWS EKS.
Before you begin
Before running the bootstrap or resiliency scripts as the root user on RHEL, ensure that /usr/local/bin (and the AWS CLI binary path, if applicable) is included in the $PATH. Alternatively, run the script using a non-root user (such as ec2-user) where /usr/local/bin is already part of the default PATH.
By default, the installation is configured to use the us-east-1 AWS region. If you plan to install the product in a different region, update the region value in the iac_setup/scripts/iac/variables.tf file before starting the installation.
The bootstrap script also checks if you have the required permissions on AWS. It then sets up the EKS cluster and installs the microservices required for deploying the PPC.
The bootstrap script prompts for the variables required to complete your deployment. Follow the instructions on the screen:
./bootstrap.sh
The script prompts for the following variables.
Enter Cluster Name
The following characters are allowed:
Lowercase letters: a-z
Numbers: 0-9
Hyphens: -
The following characters are not allowed:
Uppercase letters: A-Z
Underscores: _
Spaces
Any special characters such as: / ? * + % ! @ # $ ^ & ( ) = [ ] { } : ; , .
Leading or trailing hyphens
More than 31 characters
Note: Ensure that the cluster name does not exceed 31 characters. Cluster names longer than this limit can cause the bootstrap script to fail in subsequent installation steps. If the installation fails because the cluster name exceeds the 31-character limit, correct the name and re-run the script.
Correction: Choose a cluster name with 31 characters or fewer.
Retry: Execute the installation command again with the updated name. The script will automatically handle the update and proceed with the bootstrap process.
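As an illustration of the naming rules above, the following hypothetical helper (not part of the product) checks a candidate cluster name before you run the bootstrap script:

```shell
# Returns 0 if the name satisfies the documented rules: lowercase letters,
# digits, and hyphens only; no leading or trailing hyphen; at most 31 chars.
validate_cluster_name() {
  name="$1"
  [ "${#name}" -le 31 ] || return 1
  printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

validate_cluster_name "ppc-dev-cluster" && echo "valid"
```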
Enter a VPC ID from the table
The script automatically retrieves the available VPCs. Enter the VPC ID where the cluster must be created.
Querying for subnets in VPC…
The script queries for the available VPC subnets and prompts to enter two private subnet IDs. Specify two private subnet IDs from different availability zones. The script then automatically updates the VPC CIDR block based on the VPC details.
Enter FQDN
This is the Fully Qualified Domain Name for the ingress.
Warning: Ensure that the FQDN does not exceed 50 characters and only the following characters are used:
Use a dedicated S3 bucket per cluster for backup and restore operations to ensure data and encryption isolation. Sharing a bucket across clusters increases the risk of cross-cluster data access or decryption due to IAM misconfiguration. Dedicated buckets with unique IAM policies eliminate this risk.
During disaster management, OpenSearch restores only those snapshots that are created using the daily-insight-snapshots policy. For more information, refer to Backing up and restoring indexes.
Enter Image Registry Endpoint
The image repository from which the container images are retrieved. Use registry.protegrity.com:9443 for the Protegrity Container Registry (PCR), or the endpoint of your local repository.
Expected format: <hostname>[:port].
Do not include https://.
Note: The container registry endpoint must be an FQDN (Fully Qualified Domain Name). Sub-paths, such as my-registry.com/v2/path, are not supported by the OCI distribution specification.
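As an illustration, the following hypothetical check (not part of the product) verifies that an endpoint is a bare host with an optional port, with no scheme and no sub-path:

```shell
# Returns 0 only for host[:port]; rejects schemes (https://) and sub-paths.
check_registry_endpoint() {
  printf '%s' "$1" | grep -Eq '^[a-zA-Z0-9.-]+(:[0-9]+)?$'
}

check_registry_endpoint "registry.protegrity.com:9443" && echo "valid endpoint"
```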
Enter Registry Username []
Enter the username for the registry mentioned in the previous step. Leave this entry blank if the registry does not require authentication.
Enter Registry Password or Access Token
Enter Password or Access Token for the registry.
Input is masked with * characters. Press Enter to keep the current value.
Leave this entry blank if the registry does not require authentication.
After providing all information, the following confirmation message appears.
Configuration updated successfully.
Would you like to proceed with the setup now?
Proceed? (yes/no):
Type yes to initiate the setup.
Note: The cluster creation process can take 10-15 minutes.
If the session is terminated during installation (for example, due to network issues or a power outage), the installation stops. To restart the installation, run the following commands:
# Navigate to the setup directory
cd iac_setup

# Clean up all resources
make clean

# Re-run the bootstrap script
./bootstrap.sh
Warning: Do not install or manage multiple clusters from the same working directory. Each cluster deployment maintains its own Terraform/OpenTofu state, and reusing a directory can overwrite state files, causing loss of cluster tracking and unintended cleanup behavior. Use a dedicated directory, and jump box, where possible, per cluster, and always verify the active kubectl context before running cleanup commands such as make clean. To check the active kubectl context, run the following command: kubectl config current-context