Restoring the PPC

Complete the steps provided in this section to restore a PPC deployment using an existing backup.

Before you begin

Before starting a restore, ensure the following conditions are met:

  • An existing backup is available. Backups are taken automatically as part of the default installation using scheduled backup mechanisms. These backups are stored in an AWS S3 bucket configured during the original installation.

  • Access to the original backup AWS S3 bucket. During restore, the same S3 bucket that was used during the original installation must be specified.

  • Before initiating the restore, review and update the KMS key policy to reflect the restore cluster name. Even if the policy was already configured for the source cluster, it must be updated for the new restore cluster. If the policy continues to reference the source cluster name, the IAM role created during restore cannot decrypt the backup data, causing the restore to fail.

  • Permissions to read from the S3 bucket. The user performing the restore must have sufficient permissions to access the backup data stored in the bucket.

  • A new Kubernetes cluster is created. Restore is performed as part of creating a new cluster, not on an existing one. Restore is only supported during a fresh installation flow.

  • While the backup is being taken from the source cluster, do not perform Create, Read, Update, or Delete (CRUD) operations on the source cluster. This ensures backup consistency and prevents data corruption during restore.

  • Before restoring to a new cluster, if the source cluster is accessible, disable backup operations on the source cluster by setting the backup storage location to read-only. This ensures that no additional backup data is written during the restore process.

    To disable the backup operation on the source cluster, run the following command:

    kubectl patch backupstoragelocation default -n pty-backup-recovery --type merge -p '{"spec":{"accessMode":"ReadOnly"}}'
    

    If the source cluster is not accessible, this step can be skipped.
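After applying the patch above, you can confirm that the change took effect by checking the accessMode field of the BackupStorageLocation resource. The snippet below is a minimal sketch: in a live environment the manifest would come from kubectl (for example, kubectl get backupstoragelocation default -n pty-backup-recovery -o json > bsl.json); the trimmed JSON shown here is a stand-in for illustration only.

```shell
# Stand-in for the exported BackupStorageLocation manifest; in practice,
# export it with kubectl as described in the lead-in.
cat > bsl.json <<'EOF'
{"spec": {"accessMode": "ReadOnly"}}
EOF

# The accessMode must report ReadOnly before the restore begins.
jq -r '.spec.accessMode' bsl.json    # prints: ReadOnly
```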

During disaster recovery, the backup data is used to restore the cluster and the OpenSearch indexes from snapshots. However, Insight restores OpenSearch data only from the most recent snapshot created by the daily-insight-snapshots policy.
For more information, refer to Backing up and restoring indexes.

Warning: Do not install or manage multiple clusters from the same working directory. Each cluster deployment maintains its own Terraform/OpenTofu state, and reusing a directory can overwrite state files, causing loss of cluster tracking and unintended cleanup behavior.
Use a dedicated directory (and, where possible, a dedicated jump box) per cluster, and always verify the active kubectl context before running cleanup commands such as make clean.

The repository provides a bootstrap script that automatically installs or updates the following software on the jump box:

  • AWS CLI - Required to communicate with your AWS account.
  • OpenTofu - Required to manage infrastructure as code.
  • kubectl - Required to communicate with the Kubernetes cluster.
  • Helm - Required to manage Kubernetes packages.
  • Make - Required to run the OpenTofu automation scripts.
  • jq - Required to parse JSON.

The bootstrap script also checks if you have the required permissions on AWS. It then sets up the EKS cluster and installs the microservices required for deploying the PCS on a PPC.
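As an optional sanity check before invoking the script, you can see which of these tools are already present on the jump box. This loop is purely informational and is not part of the bootstrap script itself; the bootstrap script installs or updates anything that is missing.

```shell
# Report which prerequisite tools are already on PATH.
# The OpenTofu binary is named "tofu".
for tool in aws tofu kubectl helm make jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing (bootstrap will install it)"
  fi
done
```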

Note: Before running the bootstrap or resiliency scripts as the root user on RHEL, ensure that /usr/local/bin (and the AWS CLI binary path, if applicable) is included in the $PATH. Alternatively, run the script using a non-root user (such as ec2-user) where /usr/local/bin is already part of the default PATH.
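One way to guard against this, shown here as a sketch, is to prepend /usr/local/bin to $PATH when it is not already present, before running the scripts as root:

```shell
# Prepend /usr/local/bin to PATH only if it is not already there.
case ":$PATH:" in
  *:/usr/local/bin:*)
    ;;  # already on PATH, nothing to do
  *)
    export PATH="/usr/local/bin:$PATH"
    ;;
esac
```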

Run the following command to initiate restore using an existing backup:

./bootstrap.sh --restore

The bootstrap script asks for variables to be set to complete the deployment. Follow the instructions on the screen.

The --restore flag enables restore mode for the installation and initiates restoration of data from the configured backup bucket. Restore must be performed as part of a fresh installation.

The script prompts for the following variables:

  1. Enter Cluster Name

    • Ensure that the cluster name does not match the name of the source cluster. Reusing an existing cluster name during restore can lead to discrepancies during cluster installation.
    • This same cluster name must already be updated in the KMS key policy. If this update is not performed, the restore process fails because the new cluster cannot decrypt the backup data.
    • Ensure that the cluster name does not exceed 31 characters. Cluster names longer than this limit can cause the bootstrap script to fail in subsequent installation steps.
      If the installation fails because the cluster name exceeds the 31-character limit, correct the name and re-run the script.
      • Correction: Choose a cluster name with 31 characters or fewer.
      • Retry: Execute the installation command again with the updated name. The script will automatically handle the update and proceed with the bootstrap process.

    The following characters are allowed:

    • Lowercase letters: a-z
    • Numbers: 0-9
    • Hyphens: -

    The following characters are not allowed:

    • Uppercase letters: A-Z
    • Underscores: _
    • Spaces
    • Any special characters such as: / ? * + % ! @ # $ ^ & ( ) = [ ] { } : ; , .
    • Leading or trailing hyphens
    • More than 31 characters
  2. Enter a VPC ID from the table

    The script automatically retrieves the available VPCs. Enter the VPC ID where the cluster must be created.

  3. Querying for subnets in VPC

    The script automatically queries for the available VPC subnets and prompts to enter two private subnet IDs. Specify two private subnet IDs from different availability zones.
    The script then automatically updates the VPC CIDR block based on the VPC details.

  4. Enter FQDN

    This is the Fully Qualified Domain Name for the ingress.

    Ensure only the following characters are used:

    • Lowercase letters: a-z
    • Numbers: 0-9
    • Special characters: - .
  5. Enter S3 Backup Bucket Name

    Enter the name of the AWS S3 bucket (encrypted with SSE-KMS) that contains the backup artifacts used during the restore process.

    Use a dedicated S3 bucket per cluster for backup and restore operations to ensure data and encryption isolation. Sharing a bucket across clusters increases the risk of cross-cluster data access or decryption due to IAM misconfiguration. Dedicated buckets with unique IAM policies eliminate this risk.

  6. Enter Image Registry Endpoint

    The image repository from where the container images are retrieved.

    Expected format: <hostname>[:port]. Do not include https://.

    Note: The container registry endpoint must be an FQDN (Fully Qualified Domain Name). Sub-paths, such as my-registry.com/v2/path, are not supported by the OCI distribution specification.

  7. Enter Registry Username

    Enter the username for the registry mentioned in the previous step. Leave this entry blank if the registry does not require authentication.

  8. Enter Registry Password or Access Token

    Enter Password or Access Token for the registry. Input is masked with * characters. Press Enter to keep the current value.

    Leave this entry blank if the registry does not require authentication.

  9. After providing all information, the following confirmation message appears.

    Configuration updated successfully.
    
    Would you like to proceed with the setup now?
    
    Proceed? (yes/no): 
    

    Type yes to initiate the setup.
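The character rules for the cluster name (step 1) and the FQDN (step 4) can be checked ahead of time. The helper functions below are hypothetical and not part of the bootstrap script; the regular expressions simply encode the rules documented above.

```shell
# Illustrative pre-checks; valid_name and valid_fqdn are hypothetical helpers.

# Cluster name: a-z, 0-9, hyphens; no leading/trailing hyphen; max 31 chars.
valid_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,29}[a-z0-9])?$'
}

# FQDN: lowercase labels of a-z, 0-9 and inner hyphens, separated by dots.
valid_fqdn() {
  printf '%s' "$1" | grep -Eq '^([a-z0-9]([a-z0-9-]*[a-z0-9])?\.)+[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

valid_name "ppc-restore-01" && echo "name ok"            # name ok
valid_name "Restore_Cluster" || echo "name rejected"     # name rejected
valid_fqdn "ppc.example.com" && echo "fqdn ok"           # fqdn ok
```

Running these checks before invoking the script avoids a failed bootstrap caused by an invalid name.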


During restore, the script prompts to manually select a backup from the available backups stored in the S3 bucket. User input is required to either restore from the latest backup or choose a specific backup from the list.
Restore from latest backup? [Y/n]
  • Enter Y to restore from the most recent backup.
  • Enter n to manually select a backup.

If you choose to manually select a backup, then the script displays a list of available backups (latest first) and prompts to select one by number:

Available backups (latest first):
  [1] authnz-postgresql-schedule-backup-<timestamp>
  [2] authnz-postgresql-schedule-backup-<timestamp>

Select a backup number:

After entering the backup number, the chosen backup is used for the restore, and the installation continues.
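For reference, the "latest first" ordering can be reproduced outside the script with jq over an S3 object listing. The listing below is fabricated sample data standing in for real aws s3api list-objects-v2 output; the backup names and timestamps are placeholders, and the sort assumes timestamps in the key sort lexicographically (as ISO-8601-style stamps do).

```shell
# Sample listing; in practice this would come from:
#   aws s3api list-objects-v2 --bucket <bucket> > listing.json
cat > listing.json <<'EOF'
{"Contents": [
  {"Key": "authnz-postgresql-schedule-backup-20250101T000000Z"},
  {"Key": "authnz-postgresql-schedule-backup-20250301T000000Z"},
  {"Key": "authnz-postgresql-schedule-backup-20250201T000000Z"}
]}
EOF

# Newest key prints first.
jq -r '[.Contents[].Key] | sort | reverse | .[]' listing.json
```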

Note: The cluster creation process can take 10-15 minutes.

If the session is terminated during restore (for example, due to network issues or a power outage), the restore stops. To restart the process, run the following commands:

# Navigate to setup directory
cd iac_setup

# Clean up all resources
make clean

# Restart the restore
./bootstrap.sh --restore

Warning: Do not install or manage multiple clusters from the same working directory. Each cluster deployment maintains its own Terraform/OpenTofu state, and reusing a directory can overwrite state files, causing loss of cluster tracking and unintended cleanup behavior.
Use a dedicated directory (and, where possible, a dedicated jump box) per cluster, and always verify the active kubectl context before running cleanup commands such as make clean.
To check the active kubectl context, run the following command:
kubectl config current-context

After the restore to the new cluster is completed successfully and all required validation and migration activities are finished, the source cluster can be deleted.


Last modified: April 13, 2026