1 - Verifying the Prerequisites

Prerequisites for configuring Protegrity Anonymization API on Amazon Elastic Kubernetes Service (EKS).

Ensure that the following prerequisites are met:

  • Base machine - This is a Linux machine instance that is used to communicate with the Kubernetes cluster. This instance can be on-premises or on AWS. Ensure that Helm is installed on this Linux instance. You must also install Docker on this Linux instance to communicate with the Container Registry where you want to upload the Docker images.

    For more information about the minimum hardware requirements, refer to the section Prerequisites for Deploying the Protegrity Anonymization API.

  • Access to an AWS account.

  • Permissions to create a Kubernetes cluster.

  • IAM user:

    • Required to create the Kubernetes cluster. This user requires the following policy permissions managed by AWS:

      • AmazonEC2FullAccess
      • AmazonEKSClusterPolicy
      • AmazonS3FullAccess
      • AmazonSSMFullAccess
      • AmazonEKSServicePolicy
      • AmazonEKS_CNI_Policy
      • AWSCloudFormationFullAccess
      • Custom policy that allows the user to create a new role and an instance profile, retrieve information regarding a role and an instance profile, attach a policy to the specified IAM role, and so on. The following actions must be permitted on the IAM service:
        • GetInstanceProfile
        • GetRole
        • AddRoleToInstanceProfile
        • CreateInstanceProfile
        • CreateRole
        • PassRole
        • AttachRolePolicy
      • Custom policy that allows the user to delete a role and an instance profile, detach a policy from a specified role, delete a policy from the specified role, remove an IAM role from the specified EC2 instance profile, and so on. The following actions must be permitted on the IAM service:
        • GetOpenIDConnectProvider
        • CreateOpenIDConnectProvider
        • DeleteInstanceProfile
        • DeleteRole
        • RemoveRoleFromInstanceProfile
        • DeleteRolePolicy
        • DetachRolePolicy
        • PutRolePolicy
      • Custom policy that allows the user to manage EKS clusters. The following actions must be permitted on the EKS service:
        • ListClusters
        • ListNodegroups
        • ListTagsForResource
        • ListUpdates
        • DescribeCluster
        • DescribeNodegroup
        • DescribeUpdate
        • CreateCluster
        • CreateNodegroup
        • DeleteCluster
        • DeleteNodegroup
        • UpdateClusterConfig
        • UpdateClusterVersion
        • UpdateNodegroupConfig
        • UpdateNodegroupVersion

      For more information about creating an IAM user, refer to Creating an IAM User in Your AWS Account. Contact your system administrator to create the IAM users.

      For more information about the AWS-specific permissions, refer to API Reference document for Amazon EKS.

  • Access to the Amazon Elastic Kubernetes Service (EKS) to create a Kubernetes cluster.

  • Access to the AWS Elastic Container Registry (ECR) to upload the Protegrity Anonymization API image.
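The custom IAM policies listed above can be expressed as a JSON policy document. The following is a minimal sketch that combines the IAM and EKS actions from the prerequisites; the statement IDs and the wildcard Resource element are illustrative assumptions, and you should scope Resource to your own ARNs where possible.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AnonApiIamActions",
      "Effect": "Allow",
      "Action": [
        "iam:GetInstanceProfile",
        "iam:GetRole",
        "iam:AddRoleToInstanceProfile",
        "iam:CreateInstanceProfile",
        "iam:CreateRole",
        "iam:PassRole",
        "iam:AttachRolePolicy",
        "iam:GetOpenIDConnectProvider",
        "iam:CreateOpenIDConnectProvider",
        "iam:DeleteInstanceProfile",
        "iam:DeleteRole",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:DeleteRolePolicy",
        "iam:DetachRolePolicy",
        "iam:PutRolePolicy"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AnonApiEksActions",
      "Effect": "Allow",
      "Action": [
        "eks:ListClusters",
        "eks:ListNodegroups",
        "eks:ListTagsForResource",
        "eks:ListUpdates",
        "eks:DescribeCluster",
        "eks:DescribeNodegroup",
        "eks:DescribeUpdate",
        "eks:CreateCluster",
        "eks:CreateNodegroup",
        "eks:DeleteCluster",
        "eks:DeleteNodegroup",
        "eks:UpdateClusterConfig",
        "eks:UpdateClusterVersion",
        "eks:UpdateNodegroupConfig",
        "eks:UpdateNodegroupVersion"
      ],
      "Resource": "*"
    }
  ]
}
```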

2 - Preparing the Base Machine

Steps to prepare the base machine for working with the EKS cluster.

The steps provided here install the software required for running the various EKS commands for setting up and working with the Protegrity Anonymization API cluster.

  1. Log in to your system as an administrator.

  2. Open a command prompt with administrator privileges.

  3. Install the following tools to get started with creating the EKS cluster.

    1. Install AWS CLI 2, which provides a set of command line tools for the AWS Cloud Platform.

      For more information about installing the AWS CLI 2, refer to Installing or updating to the latest version of the AWS CLI.

    2. Configure AWS CLI on your machine by running the following command.

      aws configure
      

      You are prompted to enter the AWS Access Key ID, the AWS Secret Access Key, the AWS Region, and the default output format for the command results.

      For more information about configuring AWS CLI, refer to Configuring settings for the AWS CLI.

      Specify the credentials of the IAM user created in the section Verifying the Prerequisites to create the Kubernetes cluster.

      AWS Access Key ID [None]: <AWS Access Key ID of the IAM User 1>
      AWS Secret Access Key [None]: <AWS Secret Access Key of the IAM User 1>
      Default region name [None]: <Region where you want to deploy the Kubernetes cluster>
      Default output format [None]: json
      
    3. Install Kubectl version 1.22, which is the command line interface for Kubernetes.

      Kubectl enables you to run commands from the Linux instance so that you can communicate with the Kubernetes cluster.

      For more information about installing kubectl, refer to Set up kubectl and eksctl in the AWS documentation.

    4. Install one of the following command line tools for creating the Kubernetes cluster on AWS (EKS):

      • eksctl: Install eksctl, which is a command line utility to create and manage Kubernetes clusters on Amazon Elastic Kubernetes Service (Amazon EKS).

        For more information about installing eksctl on the Linux instance, refer to Set up to use Amazon EKS.

      • Terraform/OpenTofu: Optionally, install Terraform or OpenTofu, which are command line tools to create and manage Kubernetes clusters. Run the terraform version command in the CLI to verify that Terraform or OpenTofu is installed.

        For more information about installing Terraform or OpenTofu, refer to Install Terraform.

    5. Install the Helm client version 3.8.2 for working with Kubernetes clusters.

      For more information about installing the Helm client, refer to Installing Helm.

3 - Creating the EKS Cluster

Steps to create the EKS cluster.

Complete the steps provided here to create the EKS cluster for the Protegrity Anonymization API by running commands on the base machine.

Note: The steps listed in this procedure for creating the EKS cluster are for reference use. If you have an existing EKS cluster or want to create an EKS cluster based on your own requirements, then you can directly navigate to the section Accessing the EKS Cluster to connect your EKS cluster and the Linux instance.

To create an EKS cluster:

  1. Log in to the Linux machine.

  2. Obtain and extract the Protegrity Anonymization API files to a directory on your system.

    1. Download and extract the ANON-API_DEB-ALL-64_x86-64_Docker-ALL-64_1.4.0.x.tgz file.
    2. Verify that the following files are available in the package:
      • ANON-REST-API_1.4.0.x.tgz: The files for working with Protegrity Anonymization REST API.
      • ANON-NOTEBOOK_1.4.0.x.tgz: This file contains the image for the Anon-workstation.
    3. Extract the contents of the ANON-REST-API_1.4.0.x.tgz and ANON-NOTEBOOK_1.4.0.x.tgz files to a directory.
  3. Add the Cloud-related settings in the configuration files using one of the following options:

    Note: Use the checklist at AWS Checklist to update the YAML files.

    • For eksctl: Update the cluster-aws.yaml template file with the EKS authentication values for creating the EKS cluster.

      • Update the following placeholder information in the cluster-aws.yaml file.

          apiVersion: eksctl.io/v1alpha5
          kind: ClusterConfig
          metadata:
            name: <cluster_name>   #(provide an appropriate name for your cluster)
            region: <Region where you want to deploy Kubernetes Cluster>   #(specify the region to be used)
            version: "1.27"
          vpc:
            id: "#Update_vpc_here#"   # (enter the VPC ID to be used)
            subnets:             # (In this section specify the subnet region and subnet id accordingly)
              private:
                <Availability zone for the region where you want to deploy your Kubernetes cluster>:
                  id: "#Update_id_here#"
                <Availability zone for the region where you want to deploy your Kubernetes cluster>:
                  id: "#Update_id_here#"
          nodeGroups:
            - name: <Name of your Node Group>
              instanceType: t3a.xlarge
              minSize: 2
              maxSize: 4        # (Set max node size according to load to be processed, for cluster-autoscaling)
              desiredCapacity: 3
              privateNetworking: true
              iam:
                attachPolicyARNs:
                  - "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
                  - "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
                  - "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
                withAddonPolicies:
                  autoScaler: true
                  awsLoadBalancerController: true
                  ebs: true
              securityGroups:
                withShared: true
                withLocal: true
                attachIDs: ['#Update_security_group_id_linked_to_your_VPC_here#']
              tags:
                #Add required tags (Product, name, etc.) here
                k8s.io/cluster-autoscaler/<cluster_name>: "owned"   # (Update your cluster name in this line. These tags are required for cluster-autoscaling.)
                k8s.io/cluster-autoscaler/enabled: "true"
                Product: "Anonymization"
              ssh:
                publicKeyName: '<EC2 Key Pair>'   # SSH key to log in to nodes in the cluster if needed.
        

        Note: In the ssh/publicKeyName parameter, you must specify the name of the key pair that you have created.

        For more information about creating the EC2 key pair, refer to Amazon EC2 key pairs and Amazon EC2 instances.

        The AmazonEKSWorkerNodePolicy policy allows Amazon EKS worker nodes to connect to Amazon EKS clusters. For more information about the policy, refer to Amazon EKS Worker Node Policy.

        For more information about the policy arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy attached in the node group, refer to Amazon EKS node IAM role.

        The AmazonEKS_CNI_Policy policy is a default AWS policy that enables the Amazon VPC CNI plugin to modify the IP address configuration on your EKS nodes. For more information about this policy, refer to Amazon EKS CNI Policy.

        For more information about the policy arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy attached in the node group, refer to Configure Amazon VPC CNI plugin to use IRSA.

    • For Terraform: Update the following placeholder information in the aws-terraform/vars.tf file with the Terraform values for creating the cluster.

      variable "cluster_name" {
        default = "<Cluster_name>"   ## Name for your EKS cluster.
      }
      variable "cluster_version" {
        default = "1.27"
      }
      variable "aws_region" {
        default = "<Region>"   ## The region in which the EKS cluster will be created.
      }
      variable "role_arn" {
        default = "<Specify Role_arn>"   ## Amazon Resource Name (ARN) of the IAM role that provides permissions for the Kubernetes control plane to make calls to AWS API operations on your behalf.
      }
      variable "security_group_id" {
        default = ["<Specify security group id>"]   ## The security group ID for your VPC.
      }
      variable "subnet_ids" {
        default = ["<subnet-1 id>", "<subnet-2 id>"]   ## Supply the subnet IDs. Ensure that the subnets are in different Availability Zones.
      }
      variable "node_group_name" {
        default = "<Nodegroup Name>"   ## Name of the node group that will join the EKS cluster.
      }
      variable "node_role_arn" {
        default = "<IAM-Node ROLE ARN>"   ## Amazon Resource Name (ARN) of the IAM role that provides permissions for the EKS node group.
      }
      variable "instance_type" {
        default = ["<instance_type>"]   ## Type of nodes in the EKS cluster, for example, t3a.xlarge.
      }
      variable "desired_nodes_count" {
        default = "<Desired node count>"   ## Desired number of nodes running in the EKS cluster.
      }
      variable "max_nodes" {
        default = "<Max node count>"   ## Maximum number of nodes that the EKS cluster can autoscale to.
      }
      variable "min_nodes" {
        default = "<Min node count>"   ## Minimum number of nodes in the EKS cluster.
      }
      variable "ssh_key" {
        default = "<EC2-SSH-key>"   ## EC2 SSH key pair for SSH access to the nodes of the cluster.
      }
      output "endpoint" {
        value = aws_eks_cluster.eks_Anon.endpoint
      }
      
  4. Run one of the following commands to create the Kubernetes cluster. This process might take 10 to 15 minutes to complete:

    • For eksctl:

      eksctl create cluster -f cluster-aws.yaml
      
    • For Terraform:

      terraform init
      terraform plan
      terraform apply
      
  5. Deploy the Cluster Autoscaler component to enable the autoscaling of nodes in the EKS cluster.

    For more information about deploying the Cluster Autoscaler, refer to the Deploy the Cluster Autoscaler section in the Amazon EKS documentation.

  6. Install the Metrics Server to enable the horizontal autoscaling of pods in the Kubernetes cluster.

    For more information about installing the Metrics Server, refer to the Horizontal Pod Autoscaler section in the Amazon EKS documentation.
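Once the Metrics Server is running, horizontal pod autoscaling is driven by HorizontalPodAutoscaler objects. The following is a minimal sketch of such an object; the deployment name, namespace, and thresholds are illustrative assumptions, not values shipped with the product.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: anon-api-hpa          # illustrative name
  namespace: anon-ns          # use your deployment namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: anon-api            # name of the deployment to scale (assumption)
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```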

4 - Accessing the EKS Cluster

Steps to access the EKS cluster.

Connect to the cloud service using the steps in this section.

  1. Run the following command to connect your Linux instance to the Kubernetes cluster.

    aws eks update-kubeconfig --name <Name of Kubernetes cluster> --region <Region in which the cluster is created>
    
  2. Run the following command to verify that the nodes are deployed.

    kubectl get nodes
    

    Note: You can also verify that the nodes are deployed in AWS from the EKS Kubernetes Cluster dashboard.

5 - Uploading the Image to AWS Container Registry (ECR)

Steps to upload the Protegrity Anonymization API image.

Use the information in this section to upload the Protegrity Anonymization API image to the AWS container registry (ECR) for running the Protegrity Anonymization API in EKS.

Ensure that you have set up your Container Registry.

Note: The steps listed in this section for uploading the container images to the Amazon Elastic Container Repository (ECR) are for reference use. You can choose to use a different Container Registry for uploading the container images.

For more information about setting up Amazon ECR, refer to Moving an image through its lifecycle in Amazon ECR.

To install the Protegrity Anonymization API:

  1. Log in to the machine as an administrator to install the Protegrity Anonymization API.

  2. Install Docker using the steps provided at https://docs.docker.com/engine/install/.

  3. Configure Docker to push the Protegrity Anonymization API images to the AWS Container Registry (ECR) by running the following command:

    aws ecr get-login-password --region <Region> | docker login --username AWS --password-stdin <AWS_account_ID>.dkr.ecr.<Region>.amazonaws.com
    
  4. Obtain and extract the Protegrity Anonymization files to a directory on your system.

    1. Download and extract the ANON-API_DEB-ALL-64_x86-64_Docker-ALL-64_1.4.0.x.tgz file.

    2. Extract the contents of the ANON-REST-API_1.4.0.x.tgz and ANON-NOTEBOOK_1.4.0.x.tgz files to a directory.

      Note: Do not extract the ANON-API_1.4.0.x.tar.gz package obtained after performing the extraction. You run the docker load command directly on this package.

  5. Navigate to the directory where the ANON-API_1.4.0.x.tar.gz file is saved.

  6. Load the Docker image into Docker by using the following command:

    docker load < ANON-API_1.4.0.x.tar.gz
    
  7. List the images that are loaded by using the following command:

    docker images
    
  8. Tag the image to the ECR repository by using the following command:

    docker tag <Container image>:<Tag> <Container registry path>/<Container image>:<Tag>
    

    For example:

    docker tag ANON-API_1.4.0.x:anon_EKS <account_name>.dkr.ecr.region.amazonaws.com/anon:anon_EKS
    
  9. Push the tagged image to the ECR by using the following command:

    docker push <Container_registry_path>/<Container_image>:<Tag>
    

    For example:

    docker push <account_name>.dkr.ecr.region.amazonaws.com/anon:anon_EKS
    
  10. Extract the ANON-NOTEBOOK_1.4.0.x.tgz file to obtain the ANON-NOTEBOOK_1.4.0.x.tar.gz file, and then repeat steps 5 through 9 for the ANON-NOTEBOOK_1.4.0.x.tar.gz file.

    The images are loaded to the ECR and are ready for deployment.

    For more information about pushing container images to the ECR, refer to Moving an image through its lifecycle in Amazon ECR.
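The tag-and-push steps above follow a fixed naming pattern for the ECR registry. The following sketch builds the fully qualified image reference from placeholder values and prints the corresponding docker commands rather than executing them; the account ID, region, and repository name are illustrative assumptions.

```shell
#!/bin/sh
# Illustrative placeholder values - replace with your own.
ACCOUNT_ID="123456789012"
REGION="us-east-1"
REPOSITORY="anon"
IMAGE="ANON-API_1.4.0.x"
TAG="anon_EKS"

# The ECR registry hostname follows the pattern <account>.dkr.ecr.<region>.amazonaws.com
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
TARGET="${REGISTRY}/${REPOSITORY}:${TAG}"

# Print (do not run) the commands from steps 6, 8, and 9.
echo "docker load < ${IMAGE}.tar.gz"
echo "docker tag ${IMAGE}:${TAG} ${TARGET}"
echo "docker push ${TARGET}"
```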

6 - Setting up NGINX Ingress Controller

Steps to install the NGINX Ingress Controller.

Complete the steps provided here for installing the NGINX Ingress Controller on the base machine.

  1. Log in to the base machine and open a command prompt.

  2. Create a namespace where the NGINX Ingress Controller needs to be deployed using the following command.

    kubectl create namespace <Namespace name>
    

    For example,

    kubectl create namespace nginx
    
  3. Add the repository from where the Helm charts for installing the NGINX Ingress Controller must be fetched using the following command.

    helm repo add stable https://charts.helm.sh/stable
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    
  4. Install the NGINX Ingress Controller using Helm charts using the following command.

    helm install nginx-ingress ingress-nginx/ingress-nginx --namespace <Namespace name> \
      --set controller.replicaCount=1 \
      --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
      --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
      --set controller.publishService.enabled=true \
      --set controller.ingressClassResource.name=<NGINX ingress class name> \
      --set podSecurityPolicy.enabled=true \
      --set rbac.create=true \
      --set controller.extraArgs.enable-ssl-passthrough="true" \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"="true" \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-connection-idle-timeout"="300" \
      --version 4.3.0
    

    For example,

    helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx \
      --set controller.replicaCount=1 \
      --set controller.extraArgs.enable-ssl-passthrough="true" \
      --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
      --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
      --set controller.publishService.enabled=true \
      --set controller.ingressClassResource.name=nginx-anon \
      --set podSecurityPolicy.enabled=true \
      --set rbac.create=true \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"="true" \
      --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-connection-idle-timeout"="300" \
      --version 4.3.0
    

    For more information about the various configuration parameters for installing the NGINX Ingress Helm charts, refer to values.yaml file.

  5. Check the status of the nginx-ingress release and verify that all the deployments are running accurately using the following command.

    kubectl get pods -n <Namespace name>
    

    For example,

    kubectl get pods -n nginx
    

    Note: Make a note of the pod name. It is required as a parameter in the next step.

  6. View the logs on the Ingress pod using the following command.

    kubectl logs pod/<pod-name> -n <Namespace name>
    
  7. Obtain the external IP of the nginx service by executing the following command.

    kubectl get service --namespace <Namespace name>
    

    For example,

    kubectl get service -n nginx
    

    Note: Make a note of the IP. It is required for communicating with the Protegrity Anonymization API.

7 - Using Custom Certificates in Ingress

Steps to use your custom certificates with the Ingress Controller.

Protegrity Anonymization API uses certificates for secure communication with the client. You can use the certificates provided by Protegrity or use your own certificates. Complete the configurations provided in this section to use your custom certificates with the Ingress Controller.

Ensure that the certificates and keys are in the .pem format.

Note: Skip the steps provided in this section if you want to use the default Protegrity certificates for the Protegrity Anonymization API.

  1. Log in to the Base Machine where Ingress is configured and open a command prompt.

  2. Copy your certificates to the Base Machine.

    Note: Verify the certificates using the commands provided in the section Working with Certificates.

  3. Create a Kubernetes secret of the server certificate using the following command. The namespace used must be the same where the Protegrity Anonymization API application is to be deployed.

    kubectl create secret --namespace <namespace-name> generic <secret-name> --from-file=tls.crt=<path_to_certificate>/<certificate-name> --from-file=tls.key=<path_to_certificate>/<certificate-key>
    

    For example,

    kubectl create secret --namespace anon-ns generic anon-protegrity-tls --from-file=tls.crt=/tmp/cust_cert/anon-server-cert.pem --from-file=tls.key=/tmp/cust_cert/anon-server-key.pem
    
  4. Create a Kubernetes secret of the CA certificate using the following command. The namespace used must be the same where the Protegrity Anonymization API application is to be deployed.

    kubectl create secret --namespace <namespace-name> generic <secret-name> --from-file=ca.crt=<path_to_certificate>/<certificate-name>
    

    For example,

    kubectl create secret --namespace anon-ns generic ca-protegrity --from-file=ca.crt=/tmp/cust_cert/anon-ca-cert.pem
    
  5. Open the values.yaml file.

  6. Add the following host and secret code for the Ingress configuration at the end of the values.yaml file.

    ## Refer section in documentation for setting up and configuring NGINX-INGRESS before deploying the application.
    ingress:
      ## Add host section with the hostname used as CN while creating server certificates.
      ## While creating the certificates you can use *.protegrity.com as CN and SAN used in below example
      host: **anon.protegrity.com**                  # Update the host according to your server certificates.
    
      ## To terminate TLS on the Ingress Controller Load Balancer.
      ## K8s TLS Secret containing the certificate and key must also be provided.
      secret: **anon-protegrity-tls**                # Update the secretName according to your secretName.
    
      ## To validate the client certificate with the above server certificate
      ## Create the secret of the CA certificate used to sign both the server and client certificate as shown in example below
      ca_secret: **ca-protegrity**                    # Update the ca-secretName according to your secretName.
    
      ingress_class: nginx-anon
    

    Note: Ensure that you replace the host, secret, and ca_secret attributes in the values.yaml file with the values as per your certificate.

    For more information about using custom certificates, refer to Updating the Configuration Files.
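The host, secret, and ca_secret values from values.yaml typically surface in the rendered Ingress resource roughly as follows. This is a hedged sketch of the general NGINX Ingress pattern with the example values from the steps above, not the exact template shipped in the Helm chart; the Ingress name and backend service name are assumptions.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: anon-ingress                           # illustrative name
  namespace: anon-ns
  annotations:
    # Validate client certificates against the CA stored in ca_secret.
    nginx.ingress.kubernetes.io/auth-tls-secret: "anon-ns/ca-protegrity"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
spec:
  ingressClassName: nginx-anon                 # matches ingress_class in values.yaml
  tls:
    - hosts:
        - anon.protegrity.com                  # matches host in values.yaml
      secretName: anon-protegrity-tls          # matches secret in values.yaml
  rules:
    - host: anon.protegrity.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: anon-api                 # backend service name (assumption)
                port:
                  number: 443
```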

8 - Updating the Configuration Files

Steps to update the Configuration Files.

Use the template files provided to specify the EKS settings for the Protegrity Anonymization API.

  1. Extract and update the files in the ANON-API_HELM_1.4.0.x.tgz package.

    The ANON-API_HELM_1.4.0.x.tgz package contains the values.yaml file that must be modified as per your requirements. It also contains the templates directory with yaml files.

    Note: Ensure that the necessary permissions for updating the files are assigned to the .yaml files.

  2. Navigate to the <path_to_helm>/templates directory and delete the anon-db-storage-aws.yaml file.

  3. Update the values.yaml file.

    Note: For more information about the values.yaml file, refer to values.yaml.

    1. Specify a namespace for the pods.

      namespace:
        name: **anon-ns**
      
    2. Specify the node name and zone information for the node as a prerequisite for the database pod and the Anon-Storage (MinIO) pod. Use the node name that is running in the same zone where the EBS is created.

      ## Prerequisite for setting up Database and Minio Pod.
      ## This is to handle any new DB pod getting created that uses the same persistence storage in case the running Database pod gets disrupted.
      ## This persistence also helps persist Anon-storage data.
      persistence:
        ## 1. Get the list of nodes in the cluster. CMD: kubectl get nodes
        ## 2. Get the node name which is running in the same zone where the external-storage is created. CMD: kubectl describe nodes
        nodename: "**<Node_name>**"                    # Update the Node name
      
        ## Fetch the zone in which the node is running using the `kubectl describe node/nodename` command or the following command.
        ## CMD: ` kubectl describe node/<nodename> | grep topology.kubernetes.io/zone | grep -oP 'topology.kubernetes.io/zone=\K[^ ]+' `
        zone: "**<Zone in which above Node is running>**"
      
        ## For EKS cluster, supply the volumeID of the aws-ebs
        ## For AKS cluster, supply the subscriptionID of the azure-disk
        dbstorageId: "**<Provide dbstorage ID>**"           # To persist database schemas.
        anonstorageId: "**<Provide anonstorage ID>**"       # To persist Anonymized data.
      
    3. Update the repository information in the file. The Anon-Storage pod uses the MinIO Docker image quay.io/minio/minio:RELEASE.2022-10-29T06-21-33Z, which is pulled from the Public repository.

      image:
        minio_repo: quay.io/minio/minio                    # Public repo path for Minio Image.
        minio_tag: RELEASE.2022-10-29T06-21-33Z            # Tag name for Minio image.
      
        repository: **<Repo_path>**                            # Repo path for the Container Registry in Azure, GCP, AWS.
        anonapi_tag: **<AnonImage_tag>**                       # Tag name of the ANON-API Image.
        anonworkstation_tag: **<WorkstationImage_tag>**        # Tag name of the ANON-Workstation Image.
      
        pullPolicy: Always
      

      Note: Ensure that you update the repository, anonapi_tag, and anonworkstation_tag according to your container registry.

    4. MinIO uses access keys and a secret for performing file operations. Protegrity provides a default set of credentials that are stored as part of the secret storage-creds. If you are creating your own secret, then update the existingSecret parameter.

      anonstorage:
        ## Refer the following command for creating your own secret.
        ## CMD: kubectl create secret generic my-minio-secret --from-literal=rootUser=foobarbaz --from-literal=rootPassword=foobarbazqux
        existingSecret: ""                # Supply your secret Name for ignoring below default credentials.
        bucket_name: "anonstorage"        # Default bucket name for minio
        secret:
          name: "storage-creds"           # Secret to access minio-server
          access_key: "anonuser"          # Access key for minio-server
          secret_key: "protegrity"        # Secret key for minio-server
      

9 - Deploying the Protegrity Anonymization API to the EKS Cluster

Steps to deploy the Protegrity Anonymization API on the EKS cluster.

Complete the following steps to deploy the Protegrity Anonymization API on the EKS cluster.

  1. Navigate to the <path_to_helm>/templates directory and delete the anon-dbpvc-azure.yaml and the anon-storagepvc-azure.yaml files.

  2. Create the Protegrity Anonymization API namespace using the following command.

    kubectl create namespace <name>
    

    Note: Update and use the namespace from the values.yaml file that is present in the Helm chart that you used in the previous section.

  3. Run the following command to deploy the pods.

    helm install <helm-name> /<path_to_helm> -n <namespace>
    
  4. Verify that the necessary pods and services are configured and running.

    1. Run the following command to verify the information for accessing the Protegrity Anonymization API externally on the cluster. The port mapping for accessing the UI is displayed after running the command.

      kubectl get service -n <namespace>
      
    2. Run the following command to verify the deployment.

      kubectl get deployment -n <namespace>
      
    3. Run the following command to verify the pods created.

      kubectl get pods -n <namespace>
      
    4. Run the following command to verify the pods.

      kubectl get pods -o wide -n <namespace>
      
  5. If you customize the values.yaml, then update the configuration using the following command.

    helm upgrade <helm name> /path/to/helmchart -n <namespace>
    
  6. If required, configure logging using the steps provided in the section Setting Up Logging for the Protegrity Anonymization API.

  7. Execute the following command to obtain the IP address of the service.

    kubectl get ingress -n <namespace>
    

10 - Viewing Protegrity Anonymization API Using REST

Steps to view the Protegrity Anonymization API service.

Use the URLs provided here for viewing the Protegrity Anonymization API service and pod details after you have successfully deployed the Protegrity Anonymization API.

You need to map the IP address of Ingress in the hosts file with the host name set in the Ingress configuration.

For more information about updating the hosts file, refer to step 2 of the section Enabling Custom Certificates From SDK.

Optionally, update the hostname of the Elastic Load Balancer (ELB) that is created by the NGINX Ingress Controller using the section Creating a DNS Entry for the ELB Hostname in Route53.

For more information about configuring the DNS, refer to the section Creating a DNS Entry for the ELB Hostname in Route53.

  1. Open a web browser.

  2. Use the following URL to view basic information about the Protegrity Anonymization API.

    https://anon.protegrity.com/

  3. Use the following URL to view the Swagger UI. The various Protegrity Anonymization APIs are visible on this page.

    https://anon.protegrity.com/anonymization/api/v1/ui

  4. Use the following URL to view the contractual information for the Protegrity Anonymization API.

    https://anon.protegrity.com/about

11 - Creating Kubernetes Service Accounts and Kubeconfigs for Anonymization Cluster

Steps to create a Kubernetes service account and the role-based access control (RBAC) configuration.

A service account in the anonymization cluster namespace has access to the anonymization namespace. It might also have access to the whole cluster. These permissions for the service account allow the user to create, read, update, and delete objects in the anonymization Kubernetes cluster or the namespace. Additionally, the kubeconfig is required to access the service account using a token.

In this section, you create a Kubernetes service account and the role-based access control (RBAC) configuration manually using kubectl.

Ensure that the user has access to permissions for creating and updating the following resources in the Kubernetes cluster:

  • Kubernetes Service Accounts

  • Kubernetes Roles and Rolebindings

  • Optional: Kubernetes ClusterRoles and Rolebindings

  • Use the steps provided in the following link to create the namespace and assign the required permissions to the cluster.
    Creating the Service Account

  • Complete the steps provided in the following link to retrieve the tokens for the Protegrity Anonymization API service account and to create a kubeconfig with access to the service account.
    Obtaining the Tokens for the Service Account
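
Once both procedures are complete, the service account's effective permissions can be checked with kubectl auth can-i. The following sketch prints one check per verb; the names are the defaults used in this section, so adjust them to match your configuration before running the printed commands against your cluster.

```shell
# Default names used in this section; adjust to your configuration.
SA="system:serviceaccount:anon-namespace:anon-service-account"

# Print one permission check per CRUD verb; run the printed commands
# against your cluster to verify the service account's access.
for verb in create get update delete; do
  echo "kubectl auth can-i ${verb} pods --namespace anon-namespace --as ${SA}"
done
```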

Obtaining the Tokens for the Service Account

Complete the steps provided in this section to retrieve the tokens for the Protegrity Anonymization API service account and to create a kubeconfig with access to the service account.

  1. Open a command line interface on the base machine for running the configuration commands.

    Note: A copy of the commands is available in the kubconfigcmd.txt file in the rbac directory of the Protegrity Anonymization API package. Use the code from the file to run the commands.

  2. Set the environment variables for running the configuration commands using the following command.

    SERVICE_ACCOUNT_NAME=anon-service-account
    CONTEXT=$(kubectl config current-context)
    NAMESPACE=anon-namespace
    NEW_CONTEXT=anon-context
    
    SECRET_NAME=$(kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} --context ${CONTEXT} --namespace ${NAMESPACE} -o jsonpath='{.secrets[0].name}')
    TOKEN_DATA=$(kubectl get secret ${SECRET_NAME} --context ${CONTEXT} --namespace ${NAMESPACE} -o jsonpath='{.data.token}')
    TOKEN=$(echo ${TOKEN_DATA} | base64 -d)
    

    Note: Ensure that you use values appropriate to your configuration in the preceding commands.
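
    The decoding step above works because Kubernetes stores the service account token base64-encoded in the secret's .data.token field. The following sketch illustrates the round trip with "my-sa-token" as a stand-in for the real secret data. Note that on Kubernetes 1.24 and later, a token secret is no longer created automatically for a service account; if .secrets is empty, a token can be requested with kubectl create token instead.

    ```shell
    # "my-sa-token" stands in for the secret's real .data.token value.
    TOKEN_DATA=$(printf 'my-sa-token' | base64)

    # The stored value is base64-encoded, so it must be decoded before use.
    TOKEN=$(echo "${TOKEN_DATA}" | base64 -d)
    echo "${TOKEN}"
    ```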

  3. Set the token in the config credentials using the following command.

    kubectl config set-credentials <username> --token=$TOKEN
    

    For example,

    kubectl config set-credentials test-user --token=$TOKEN
    
  4. Retrieve the cluster name using the following command.

    kubectl config get-clusters
    
  5. Set the context in kubeconfig using the following command.

    kubectl config set-context ${NEW_CONTEXT} --cluster=<name of your cluster> --user=<username>
    
  6. Set the current context to use the new anonymization config using the following command.

    kubectl config use-context ${NEW_CONTEXT}
    
  7. Verify the new context using the following command.

    kubectl config current-context
    
  8. Verify the status of the pods using the following command.

    kubectl get pods -n <namespace>
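
Steps 3 through 7 can also be condensed into a single pass, sketched below. With DRY_RUN=1 the script only prints each kubectl command so you can review it; set DRY_RUN=0 to apply the changes to your kubeconfig. The cluster name, username, and token value shown are placeholders to replace with your own.

```shell
# Set DRY_RUN=0 to actually modify your kubeconfig.
DRY_RUN=1
NEW_CONTEXT=anon-context
USERNAME=test-user                # placeholder username from step 3
CLUSTER=my-eks-cluster            # placeholder; from: kubectl config get-clusters
TOKEN=${TOKEN:-example-token}     # decoded service account token from step 2

# Print the command in dry-run mode; otherwise execute it.
run() { if [ "${DRY_RUN}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run kubectl config set-credentials "${USERNAME}" --token="${TOKEN}"
run kubectl config set-context "${NEW_CONTEXT}" --cluster="${CLUSTER}" --user="${USERNAME}"
run kubectl config use-context "${NEW_CONTEXT}"
run kubectl config current-context
```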
    

Creating the Service Account

Use the steps provided in this section to create the namespace and assign the required permissions to the cluster.

  1. Create the Kubernetes Service Account using the following steps.

    1. Navigate to the rbac directory of the extracted Protegrity Anonymization API package.

    2. Open the anon-service-account.yaml file using a text editor.

    3. Update the namespace as per your configuration in the anon-service-account.yaml file.

    4. Save and close the file.

    5. From a command prompt, navigate to the rbac directory and run the following command to create the service account.

      kubectl apply -f anon-service-account.yaml
      
  2. Grant the appropriate permissions to the service account using one of the following two options.

    • Grant cluster-admin permissions for the service account to all the namespaces using the following steps.

      Note: You need to run this step only if you want to grant the service account access to all namespaces in your cluster.

      A Kubernetes ClusterRoleBinding is available at the cluster level, but the subject of the ClusterRoleBinding exists in a single namespace. Hence, you must specify the namespace for the service account.

      1. Navigate to the rbac directory of the extracted Protegrity Anonymization API package.

      2. Open the anon-clusterrolebinding.yaml file using a text editor.

      3. Update the namespace as per your configuration in the anon-clusterrolebinding.yaml file.

      4. Save and close the file.

      5. From a command prompt, navigate to the rbac directory and run the following command to assign the appropriate permissions.

        kubectl apply -f anon-clusterrolebinding.yaml
        
    • Grant namespace-specific permissions to the service account using the following steps.

      Note: You need to run this step only if you want to grant the service account access to just the Protegrity Anonymization API namespace.

      Ensure that you create a role with a set of permissions and rolebinding for attaching the role to the service account.

      1. Navigate to the rbac directory of the extracted Protegrity Anonymization API package.

      2. Open the anon-role-and-rolebinding.yaml file using a text editor.

      3. Update the namespace, role, and service account name as per your configuration in the anon-role-and-rolebinding.yaml file.

      4. Save and close the file.

      5. From a command prompt, navigate to the rbac directory and run the following command to assign the appropriate permissions.

        kubectl apply -f anon-role-and-rolebinding.yaml
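
For reference, a namespace-scoped role and rolebinding of the kind this file configures might look like the following. This is a hypothetical sketch, not the contents of the shipped anon-role-and-rolebinding.yaml; always use the file from the rbac directory, updated with your namespace, role, and service account names.

```shell
# Write a hypothetical example manifest: a Role granting CRUD verbs in the
# anonymization namespace, bound to the service account via a RoleBinding.
cat <<'EOF' > anon-role-and-rolebinding-example.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: anon-role
  namespace: anon-namespace
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["*"]
    verbs: ["create", "get", "list", "watch", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: anon-rolebinding
  namespace: anon-namespace
subjects:
  - kind: ServiceAccount
    name: anon-service-account
    namespace: anon-namespace
roleRef:
  kind: Role
  name: anon-role
  apiGroup: rbac.authorization.k8s.io
EOF

# Review the file, then apply it with:
# kubectl apply -f anon-role-and-rolebinding-example.yaml
```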