Using Cloud Services
Configure the Protegrity Anonymization API in the different cloud services.
The Protegrity Anonymization API can be hosted in the Kubernetes service provided by various cloud platforms, such as AWS and Azure.
- Anonymizing Using Amazon Elastic Kubernetes Service (EKS)
- Anonymizing Using Azure Kubernetes Service (AKS)
Note: The Protegrity Anonymization API is expected to be compatible with other cloud providers. However, this compatibility has not been tested.
1 - Anonymizing Using Amazon Elastic Kubernetes Service (EKS)
1.1 - Verifying the Prerequisites
Prerequisites for configuring Protegrity Anonymization API on Amazon Elastic Kubernetes Service (EKS).
Ensure that the following prerequisites are met:
Base machine - A Linux machine instance that is used to communicate with the Kubernetes cluster. This instance can be on-premises or on AWS. Ensure that Helm is installed on this Linux instance. You must also install Docker on this Linux instance to communicate with the Container Registry where you want to upload the Docker images.
For more information about the minimum hardware requirements, refer to the section Prerequisites for Deploying the Protegrity Anonymization API.
Access to an AWS account.
Permissions to create a Kubernetes cluster.
IAM user:
Required to create the Kubernetes cluster. This user requires the following policy permissions managed by AWS:
- AmazonEC2FullAccess
- AmazonEKSClusterPolicy
- AmazonS3FullAccess
- AmazonSSMFullAccess
- AmazonEKSServicePolicy
- AmazonEKS_CNI_Policy
- AWSCloudFormationFullAccess
- Custom policy that allows the user to create a new role and an instance profile, retrieve information regarding a role and an instance profile, attach a policy to the specified IAM role, and so on. The following actions must be permitted on the IAM service:
- GetInstanceProfile
- GetRole
- AddRoleToInstanceProfile
- CreateInstanceProfile
- CreateRole
- PassRole
- AttachRolePolicy
- Custom policy that allows the user to delete a role and an instance profile, detach a policy from a specified role, delete a policy from the specified role, remove an IAM role from the specified EC2 instance profile, and so on. The following actions must be permitted on the IAM service:
- GetOpenIDConnectProvider
- CreateOpenIDConnectProvider
- DeleteInstanceProfile
- DeleteRole
- RemoveRoleFromInstanceProfile
- DeleteRolePolicy
- DetachRolePolicy
- PutRolePolicy
- Custom policy that allows the user to manage EKS clusters. The following actions must be permitted on the EKS service:
- ListClusters
- ListNodegroups
- ListTagsForResource
- ListUpdates
- DescribeCluster
- DescribeNodegroup
- DescribeUpdate
- CreateCluster
- CreateNodegroup
- DeleteCluster
- DeleteNodegroup
- UpdateClusterConfig
- UpdateClusterVersion
- UpdateNodegroupConfig
- UpdateNodegroupVersion
For more information about creating an IAM user, refer to Creating an IAM User in Your AWS Account. Contact your system administrator to create the IAM users.
For more information about the AWS-specific permissions, refer to API Reference document for Amazon EKS.
Access to the Amazon Elastic Kubernetes Service (EKS) to create a Kubernetes cluster.
Access to the AWS Elastic Container Registry (ECR) to upload the Protegrity Anonymization API image.
1.2 - Preparing the Base Machine
Steps to prepare the base machine for working with the EKS cluster.
The steps provided here install the software required for running the various EKS commands for setting up and working with the Protegrity Anonymization API cluster.
Log in to your system as an administrator.
Open a command prompt with administrator privileges.
Install the following tools to get started with creating the EKS cluster.
Install AWS CLI 2, which provides a set of command line tools for the AWS Cloud Platform.
For more information about installing the AWS CLI 2, refer to Installing or updating to the latest version of the AWS CLI.
Configure the AWS CLI on your machine by running the following command.
aws configure
You are prompted to enter the AWS Access Key ID, AWS Secret Access Key, default region, and the default output format.
For more information about configuring AWS CLI, refer to Configuring settings for the AWS CLI.
Specify the credentials of the IAM user created in the section Verifying the Prerequisites to create the Kubernetes cluster.
AWS Access Key ID [None]: <AWS Access Key ID of the IAM User 1>
AWS Secret Access Key [None]: <AWS Secret Access Key of the IAM User 1>
Default region name [None]: <Region where you want to deploy the Kubernetes cluster>
Default output format [None]: json
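The interactive prompts above write the configuration to files under ~/.aws. As a sketch with hypothetical values, an equivalent configuration file can be created directly and pointed to via the AWS_CONFIG_FILE environment variable, which is useful for scripted setups:

```shell
# Sketch: non-interactive equivalent of the `aws configure` prompts,
# using a hypothetical region and output format. AWS_CONFIG_FILE lets
# the CLI read from a custom location instead of ~/.aws/config.
export AWS_CONFIG_FILE="$(mktemp -d)/config"
cat > "${AWS_CONFIG_FILE}" <<'EOF'
[default]
region = us-east-1
output = json
EOF
# Confirm the region was written as expected:
grep '^region' "${AWS_CONFIG_FILE}"
```

Credentials (the access key ID and secret) are stored separately in the credentials file; the same pattern applies via AWS_SHARED_CREDENTIALS_FILE.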
Install Kubectl version 1.34, which is the command line interface for Kubernetes.
Kubectl enables you to run commands from the Linux instance so that you can communicate with the Kubernetes cluster.
For more information about installing kubectl, refer to Set up kubectl and eksctl in the AWS documentation.
Install the following command line tools for creating the Kubernetes cluster on AWS (EKS):
eksctl: Install eksctl which is a command line utility to create and manage Kubernetes clusters on Amazon Elastic Kubernetes Service (Amazon EKS).
For more information about installing eksctl on the Linux instance, refer to Set up to use Amazon EKS.
Install the Helm client version 4.1.1 for working with Kubernetes clusters.
For more information about installing the Helm client, refer to Installing Helm.
1.3 - Creating the EKS Cluster
Steps to create the EKS cluster.
Complete the steps provided here to create the EKS cluster for the Protegrity Anonymization API by running commands on the base machine.
Note: The steps listed in this procedure for creating the EKS cluster are for reference use. If you have an existing EKS cluster or want to create an EKS cluster based on your own requirements, then you can directly navigate to the section Accessing the EKS Cluster to connect your EKS cluster and the Linux instance.
To create an EKS cluster:
Log in to the Linux machine.
Obtain and extract the Protegrity Anonymization API files to a directory on your system.
a. Download and extract the ANON-API_RHUBI-ALL-64_x86-64_Generic.K8S_1.4.1.14.tgz file.
b. Verify that the following files are available in the package:
- anonrestapi_1.4.1.14.tgz
- cluster-autoscaler-autodiscover.yaml
- cluster-aws.yaml
- dependent_images.tgz
Add the Cloud-related settings in the configuration files using one of the following options:
Note: Use the checklist at AWS Checklist to update the YAML files.
- For eksctl: Update the cluster-aws.yaml template file with the EKS authentication values for creating the EKS cluster. Update the following placeholder information in the cluster-aws.yaml file.
```
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: "<Your Cluster Name>" # Proposed name. This must match the cluster name in install.properties and must be reflected below in the tags.
  region: "<Your AWS region>"
  version: "1.35"
vpc:
  id: "<VPC ID>" # Enter the VPC ID to be used.
  subnets: # In this section, specify the subnet availability zone and subnet ID accordingly.
    private:
      <Availability zone 1, e.g., us-east-1a>:
        id: "<Subnet ID>"
        cidr: "<cidr>"
      <Availability zone 2, e.g., us-east-1b>:
        id: "<Subnet ID>"
        cidr: "<cidr>"
addons:
  - name: aws-ebs-csi-driver
    wellKnownPolicies:
      ebsCSIController: true
nodeGroups:
  - name: "<Your Node Group Name>" # Update as required.
    instanceType: t3a.xlarge
    minSize: 2
    maxSize: 4 # Set the maximum node count according to the load to be processed, for cluster autoscaling.
    desiredCapacity: 3
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
        - "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
        - "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      withAddonPolicies:
        autoScaler: true
        awsLoadBalancerController: true
        ebs: true
    securityGroups:
      withShared: true
      withLocal: true
      attachIDs: ["<Security Group ID>"]
    tags:
      # Add required tags (Product, name, etc.) here. These tags are required for cluster autoscaling.
      k8s.io/cluster-autoscaler/<Your Cluster Name>: "owned" # Update your cluster name in this line if required.
      k8s.io/cluster-autoscaler/enabled: "true"
      Product: "Anonymization"
    ssh:
      publicKeyName: "<SSH key pair name>" # Add an SSH key to log in to nodes in the cluster if needed.
```
Note: In the ssh/publicKeyName parameter, you must specify the name of the key pair that you have created.
For more information about creating the EC2 key pair, refer to Amazon EC2 key pairs and Amazon EC2 instances.
The AmazonEKSWorkerNodePolicy policy allows Amazon EKS worker nodes to connect to Amazon EKS Clusters. For more information about the policy, refer to Amazon EKS Worker Node Policy.
For more information about the attached role arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy in the nodegroup, refer to Amazon EKS node IAM role.
The AmazonEKS_CNI_Policy policy is a default AWS managed policy that enables the Amazon VPC CNI plugin to modify the IP address configuration on your EKS nodes. For more information about this policy, refer to Amazon EKS CNI Policy.
For more information about the attached role arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy in the nodegroup, refer to Configure Amazon VPC CNI plugin to use IRSA.
Run the following commands to create the Kubernetes cluster. This process might take 10 to 15 minutes to complete:
For eksctl:
eksctl create cluster -f cluster-aws.yaml
Deploy the Cluster Autoscaler component to enable the autoscaling of nodes in the EKS cluster.
For more information about deploying the Cluster Autoscaler, refer to the Deploy the Cluster Autoscaler section in the Amazon EKS documentation.
Install the Metrics Server to enable the horizontal autoscaling of pods in the Kubernetes cluster.
For more information about installing the Metrics Server, refer to the Horizontal Pod Autoscaler section in the Amazon EKS documentation.
1.4 - Accessing the EKS Cluster
Steps to access the EKS cluster.
Connect to the cloud service using the steps in this section.
Run the following command to connect your Linux instance to the Kubernetes cluster.
aws eks update-kubeconfig --name <Name of Kubernetes cluster> --region <Region in which the cluster is created>
Run the following command to verify that the nodes are deployed.
kubectl get nodes
Note: You can also verify that the nodes are deployed in AWS from the EKS Kubernetes Cluster dashboard.
1.5 - Uploading the Image to AWS Container Registry (ECR)
Steps to upload the Protegrity Anonymization API image.
Use the information in this section to upload the Protegrity Anonymization API image to the AWS container registry (ECR) for running the Protegrity Anonymization API in EKS.
Ensure that you have set up your Container Registry.
Note: The steps listed in this section for uploading the container images to the Amazon Elastic Container Repository (ECR) are for reference use. You can choose to use a different Container Registry for uploading the container images.
For more information about setting up Amazon ECR, refer to Moving an image through its lifecycle in Amazon ECR.
To install the Protegrity Anonymization API:
Log in to the machine as an administrator to install the Protegrity Anonymization API.
Install Docker using the steps provided at https://docs.docker.com/engine/install/.
Configure Docker to push the Protegrity Anonymization API images to the AWS Container Registry (ECR) by running the following command:
aws ecr get-login-password --region <Region> | docker login --username AWS --password-stdin <AWS_account_ID>.dkr.ecr.<Region>.amazonaws.com
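The registry hostname in the command above follows a fixed pattern. A small sketch, using a hypothetical account ID and region, shows how the pieces fit together:

```shell
# Hypothetical values; substitute your own AWS account ID and region.
AWS_ACCOUNT_ID="123456789012"
AWS_REGION="us-east-1"
# ECR registry hostnames always take this form:
ECR_REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
echo "${ECR_REGISTRY}"
# The login from the step above then becomes:
# aws ecr get-login-password --region "${AWS_REGION}" \
#   | docker login --username AWS --password-stdin "${ECR_REGISTRY}"
```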
Obtain and extract the Protegrity Anonymization files to a directory on your system.
a. Download and extract the ANON-API_RHUBI-ALL-64_x86-64_Generic.K8S_1.4.1.14.tgz files from the .tgz archive.
- anonrestapi_1.4.1.14.tgz
- cluster-autoscaler-autodiscover.yaml
- cluster-aws.yaml
- dependent_images.tgz
b. Extract the anonrestapi_1.4.1.14.tgz files from the .tgz archive.
- Anon_logs.sh
- README.md
- anonapi_1.4.1.14.tgz
- anonapi_helm_1.4.1.14.tgz
- contractual.csv
Note: Do not extract the anonapi_1.4.1.14.tgz package. This file must be loaded directly into Docker using the docker load command.
Navigate to the directory where the following image packages are located.
- dependent_images.tgz
- anonapi_1.4.1.14.tgz
Load the Docker image into Docker by using the following command:
a. Load the dependent images.
docker load < dependent_images.tgz
b. Load the Protegrity Anonymization API image.
docker load < anonapi_1.4.1.14.tgz
List the images that are loaded by using the following command:
docker images
Tag the image to the ECR repository by using the following command:
docker tag <Container image>:<Tag> <Container registry path>/<Container image>:<Tag>
For example:
docker tag ANON-API_1.4.1.14:anon_EKS <account_name>.dkr.ecr.region.amazonaws.com/anon:anon_EKS
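The tagging pattern can be sketched with shell variables (hypothetical registry path shown), which helps avoid typos when the same reference is reused in the docker push step:

```shell
# Hypothetical registry path; use your own ECR repository URI.
REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
IMAGE="anon"
TAG="anon_EKS"
# Build the fully qualified target reference once, then reuse it:
TARGET="${REGISTRY}/${IMAGE}:${TAG}"
echo "${TARGET}"
# docker tag ANON-API_1.4.1.14:anon_EKS "${TARGET}"
# docker push "${TARGET}"
```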
Push the tagged image to the ECR by using the following command:
docker push <Container_registry_path>/<Container_image>:<Tag>
For example:
docker push <account_name>.dkr.ecr.region.amazonaws.com/anon:anon_EKS
The images are loaded to the ECR and are ready for deployment.
For more information about pushing container images to the ECR, refer to Moving an image through its lifecycle in Amazon ECR.
1.6 - Setting up NGINX Ingress Controller
Steps to install the NGINX Ingress Controller.
Complete the steps provided here for installing the NGINX Ingress Controller on the base machine.
Log in to the base machine and open a command prompt.
Create a namespace where the NGINX Ingress Controller needs to be deployed using the following command.
kubectl create namespace <Namespace name>
For example,
kubectl create namespace nginx
Add the repository from where the Helm charts for installing the NGINX Ingress Controller must be fetched using the following command.
helm repo add stable https://charts.helm.sh/stable
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Install the NGINX Ingress Controller using Helm charts using the following command.
helm install nginx-ingress --namespace <Namespace name> --set controller.replicaCount=1 --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux ingress-nginx/ingress-nginx --set controller.publishService.enabled=true --set controller.ingressClassResource.name=<NGINX ingress class name> --set podSecurityPolicy.enabled=true --set rbac.create=true --set controller.extraArgs.enable-ssl-passthrough="true" --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"=\"true\" --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-connection-idle-timeout"=\"300\" --version 4.3.0
For example,
helm install nginx-ingress --namespace nginx --set controller.replicaCount=1 --set controller.extraArgs.enable-ssl-passthrough="true" --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux ingress-nginx/ingress-nginx --set controller.publishService.enabled=true --set controller.ingressClassResource.name=nginx-anon --set podSecurityPolicy.enabled=true --set rbac.create=true --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"=\"true\" --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-connection-idle-timeout"=\"300\" --version 4.3.0
For more information about the various configuration parameters for installing the NGINX Ingress Helm charts, refer to values.yaml file.
Check the status of the nginx-ingress release and verify that all the deployments are running accurately using the following command.
kubectl get pods -n <Namespace name>
For example,
kubectl get pods -n nginx
Note: Record the pod name. It is required as a parameter in the next step.
View the logs on the Ingress pod using the following command.
kubectl logs pod/<pod-name> -n <Namespace name>
Obtain the external IP of the nginx service by executing the following command.
kubectl get service --namespace <Namespace name>
For example,
kubectl get service -n nginx
Note: Record the IP address. It is required for communicating with the Protegrity Anonymization API.
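Once the external IP (or load balancer hostname) is known, the Ingress can be exercised without editing DNS by pinning the Ingress host to that address with curl's --resolve option. A sketch with a hypothetical IP:

```shell
# Hypothetical external IP from `kubectl get service`; replace with yours.
INGRESS_IP="203.0.113.10"
HOST="anon.protegrity.com"   # must match the host in your Ingress config
# --resolve maps HOST:443 to the load balancer IP for this request only:
CMD="curl -k --resolve ${HOST}:443:${INGRESS_IP} https://${HOST}/"
echo "${CMD}"
# Run it once the application is deployed:
# eval "${CMD}"
```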
1.7 - Using Custom Certificates in Ingress
Steps to use your custom certificates with the Ingress Controller.
Protegrity Anonymization API uses certificates for secure communication with the client. You can use the certificates provided by Protegrity or use your own certificates. Complete the configurations provided in this section to use your custom certificates with the Ingress Controller.
Ensure that the certificates and keys are in the .pem format.
Note: Skip the steps provided in this section if you want to use the default Protegrity certificates for the Protegrity Anonymization API.
Log in to the base machine where Ingress is configured and open a command prompt.
Copy your certificates to the Base Machine.
Note: Verify the certificates using the commands provided in the section Working with Certificates.
Create a Kubernetes secret of the server certificate using the following command. The namespace used must be the same where the Protegrity Anonymization API application is to be deployed.
kubectl create secret --namespace <namespace-name> generic <secret-name> --from-file=tls.crt=<path_to_certificate>/<certificate-name> --from-file=tls.key=<path_to_certificate>/<certificate-key>
For example,
kubectl create secret --namespace anon-ns generic anon-protegrity-tls --from-file=tls.crt=/tmp/cust_cert/anon-server-cert.pem --from-file=tls.key=/tmp/cust_cert/anon-server-key.pem
Create a Kubernetes secret of the CA certificate using the following command. The namespace used must be the same where the Protegrity Anonymization API application is to be deployed.
kubectl create secret --namespace <namespace-name> generic <secret-name> --from-file=ca.crt=<path_to_certificate>/<certificate-name>
For example,
kubectl create secret --namespace anon-ns generic ca-protegrity --from-file=ca.crt=/tmp/cust_cert/anon-ca-cert.pem
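Before creating the secrets, it is worth confirming that the server certificate and key actually belong together. The sketch below generates a throwaway self-signed pair purely for illustration and compares the RSA moduli; run the same two openssl checks against your real .pem files:

```shell
# Generate a throwaway self-signed certificate and key for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=anon.protegrity.com" \
  -keyout anon-server-key.pem -out anon-server-cert.pem 2>/dev/null
# A certificate and key match when their RSA moduli are identical:
CERT_MOD="$(openssl x509 -noout -modulus -in anon-server-cert.pem)"
KEY_MOD="$(openssl rsa -noout -modulus -in anon-server-key.pem)"
[ "${CERT_MOD}" = "${KEY_MOD}" ] && echo "certificate and key match"
```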
Open the values.yaml file.
Add the following host and secret code for the Ingress configuration at the end of the values.yaml file.
```
## Refer to the section in the documentation for setting up and configuring NGINX-INGRESS before deploying the application.
ingress:
  ## Add the host section with the hostname used as CN while creating the server certificates.
  ## While creating the certificates, you can use *.protegrity.com as the CN and SAN, as in the example below.
  host: anon.protegrity.com # Update the host according to your server certificates.
  ## To terminate TLS on the Ingress Controller load balancer,
  ## the Kubernetes TLS secret containing the certificate and key must also be provided.
  secret: anon-protegrity-tls # Update the secret name according to your secret.
  ## To validate the client certificate against the above server certificate,
  ## create a secret of the CA certificate used to sign both the server and client certificates, as shown in the example below.
  ca_secret: ca-protegrity # Update the CA secret name according to your secret.
  ingress_class: nginx-anon
```
Note: Ensure that you replace the host, secret, and ca_secret attributes in the values.yaml file with the values as per your certificate.
For more information about using custom certificates, refer to Updating the Configuration Files.
1.8 - Updating the Configuration Files
Steps to update the Configuration Files.
Use the template files provided to specify the EKS settings for the Protegrity Anonymization API.
Extract and update the files in the anonapi_helm_1.4.1.14.tgz package.
The anonapi_helm_1.4.1.14.tgz package contains the values.yaml file that must be modified as per your requirements. It also contains the templates directory with YAML files.
Note: Ensure that the necessary permissions for updating the files are assigned to the .yaml files.
Update the values.yaml file.
Note: For more information about the values.yaml file, refer to values.yaml.
a. Specify a namespace for the pods.
```
namespace:
name: anon-ns # Update the namespace if required.
```
b. Specify the node name and zone information as a prerequisite for the database pod and the Anon-Storage (S3 bucket) pod. Use a node that is running in the same zone where the EBS volume is created.
```
## Prerequisite for setting up Database and MinIO Pod.
## This is to handle any new DB pod getting created that uses the same persistence storage in case the running Database pod gets disrupted.
## This persistence also helps persist Anon-storage data.
persistence:
## Update storageClassName based on the PV/PVC/Storage config.
storageClassName: # Example: managed-premium for Azure, standard for AWS EKS, gp2 for AWS EC2, standard for GCP.
fsType: ext4
## This section is required if the image is getting pulled from the Azure Container Registry
## create image pull secrets and specify the name here.
## remove the [] after 'imagePullSecrets:' once you specify the secrets
#imagePullSecrets: []
# - name: regcred
## This section is required if the S3 bucket image is getting pulled from the Azure Container Registry instead of Public Repo
## create image pull secrets and specify the name here.
## remove the [] after 'imagePullSecrets:' once you specify the secrets
#minioImagePullSecrets: []
# - name: regcred
```
Update the repository information in the file. The Anon-Storage pod uses the S3 bucket Docker image quay.io/minio/minio:RELEASE.2025-04-03T14-56-28Z, which is pulled from the public repository.
```
image:
  minio_repo: quay.io/minio/minio # Public repo path for the MinIO image.
  minio_tag: RELEASE.2025-04-03T14-56-28Z # Tag name for the MinIO image.
  repository: <Repo_path> # Repo path for the Container Registry in Azure or AWS.
  anonapi_tag: <AnonImage_tag> # Tag name of the ANON-API image.
  database_tag: <DatabaseImage_tag> # Tag name of the database image.
  pullPolicy: Always
```
Note: Ensure that you update the repository and anonapi_tag according to your container registry.
The S3 bucket uses access keys and secrets for performing file operations. Protegrity provides a default set of credentials that are stored as part of the storage-creds secret. If you are creating your own secret, update the existingSecret parameter.
```
storage:
  ## Refer to the following command for creating your own secret.
  ## CMD: kubectl create secret generic my-minio-secret --from-literal=rootUser=foobarbaz --from-literal=rootPassword=foobarbazqux
  existingSecret: "" # Supply your secret name to override the default credentials below.
  bucket_name: "anonstorage" # Default bucket name for the S3 bucket.
  secret:
    name: "storage-creds" # Secret to access the minio-server.
    access_key: "anonuser" # Access key for the minio-server.
    secret_key: "protegrity" # Secret key for the minio-server.
```
1.9 - Deploying the Protegrity Anonymization API to the EKS Cluster
Steps to deploy the Protegrity Anonymization API on the EKS cluster.
Complete the following steps to deploy the Protegrity Anonymization API on the EKS cluster.
Create the Protegrity Anonymization API namespace using the following command.
kubectl create namespace <name>
Note: Use the namespace from the values.yaml file that is present in the Helm chart that you updated in the previous section.
Run the following command to deploy the pods.
helm install <helm-name> /<path_to_helm> -n <namespace>
Verify that the necessary pods and services are configured and running.
a. Run the following command to verify the information for accessing the Protegrity Anonymization API externally on the cluster. The port mapping for accessing the UI is displayed after running the command.
```
kubectl get service -n <namespace>
```
b. Run the following command to verify the deployment.
```
kubectl get deployment -n <namespace>
```
c. Run the following command to verify the pods created.
```
kubectl get pods -n <namespace>
```
d. Run the following command to verify the pods.
```
kubectl get pods -o wide -n <namespace>
```
If you customized the values.yaml file, update the configuration using the following command.
helm upgrade <helm name> /path/to/helmchart -n <namespace>
If required, configure logging using the steps provided in the section Setting Up Logging for the Protegrity Anonymization API.
Execute the following command to obtain the IP address of the service.
kubectl get ingress -n <namespace>
1.10 - Viewing Protegrity Anonymization API Using REST
Steps to view the Protegrity Anonymization API service.
Use the URLs provided here for viewing the Protegrity Anonymization API service and pod details after you have successfully deployed the Protegrity Anonymization API.
Map the Ingress IP address in the hosts file to the host name set in the Ingress configuration.
For more information about updating the hosts file, refer to step 2 of the section Enabling Custom Certificates From SDK.
Optionally, create a DNS entry for the hostname of the Elastic Load Balancer (ELB) that is created by the NGINX Ingress Controller. For more information about configuring the DNS, refer to the section Creating a DNS Entry for the ELB Hostname in Route53.
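Mapping the Ingress IP to the host name amounts to one line in the hosts file. A sketch with hypothetical values:

```shell
# Hypothetical Ingress IP; use the address from `kubectl get ingress`.
INGRESS_IP="203.0.113.10"
HOST="anon.protegrity.com"   # host name set in the Ingress configuration
# A hosts-file entry is simply "<IP> <hostname>":
ENTRY="${INGRESS_IP} ${HOST}"
echo "${ENTRY}"
# Append it to the hosts file (requires elevated privileges):
# echo "${ENTRY}" | sudo tee -a /etc/hosts
```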
Open a web browser.
Use the following URL to view basic information about the Protegrity Anonymization API.
https://anon.protegrity.com/
Use the following URL to view the Swagger UI. The various Protegrity Anonymization APIs are visible on this page.
https://anon.protegrity.com/pty/anonymization/api/v2/ui
Use the following URL to view the contractual information for the Protegrity Anonymization API.
https://anon.protegrity.com/about
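The same endpoints can also be checked from the command line. The sketch below builds the three URLs, assuming the default anon.protegrity.com Ingress host, so they can be fed to curl or a browser:

```shell
HOST="anon.protegrity.com"   # replace if you used a different Ingress host
# Landing page, Swagger UI, and contractual information, respectively:
urls="$(for path in "/" "/pty/anonymization/api/v2/ui" "/about"; do
  echo "https://${HOST}${path}"
done)"
echo "${urls}"
# Each URL can then be checked with, for example:
# curl -k "https://${HOST}/about"
```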
2 - Anonymizing Using Azure Kubernetes Service (AKS)
2.1 - Setting up Protegrity Anonymization API on Azure Kubernetes Service (AKS)
Steps to set up Protegrity Anonymization API on Azure Kubernetes Service (AKS).
To set up and use the Protegrity Anonymization API on Azure, follow the steps provided in this section.
2.2 - Preparing the Base Machine
Steps to prepare the base machine for working with Azure Kubernetes Service (AKS).
Install the Azure CLI and log in to your account to work with Protegrity Anonymization API on the Azure Cloud.
Install and initialize the Azure CLI on your system.
For more information about the installation steps, refer to How to install the Azure CLI.
Log in to your account by running the following command from a command prompt.
az login
Sign in to your account when prompted.
A message similar to the following appears when the configuration is complete.
```
{
  "cloudName": "AzureCloud",
  "homeTenantId": "<tenant-id>",
  "id": "<subscription-id>",
  "isDefault": true,
  "managedByTenants": [
    {
      "tenantId": "<tenant-id>"
    }
  ],
  "name": "Azure Cloud Platform",
  "state": "Enabled",
  "tenantId": "<tenant-id>",
  "user": {
    "name": "<user-name>",
    "type": "user"
  }
}
```
Install Kubectl version 1.34, which is the command line interface for Kubernetes.
Kubectl enables you to run commands from the Linux instance so that you can communicate with the Kubernetes cluster.
For more information about installing kubectl, refer to Set up Kubernetes tools on your computer.
Install the Helm client version 3.8.2 for working with Kubernetes clusters.
For more information about installing the Helm client, refer to Installing Helm.
2.3 - Creating a Kubernetes Cluster
Steps to create a Kubernetes Cluster on Azure.
This section describes how to create a Kubernetes Cluster on Azure.
Note: The steps listed in this procedure for creating a Kubernetes cluster are for reference use. If you have an existing Kubernetes cluster or want to create a Kubernetes cluster based on your own requirements, then you can directly navigate to the section Accessing the AKS Cluster to connect your Kubernetes cluster and the Linux instance.
To create a Kubernetes cluster:
Log in to the Azure environment.
Click the Portal menu icon.
The Portal menu appears.
Navigate to All Services > Kubernetes services.
The Kubernetes Services screen appears.
Click Add and click Add Kubernetes cluster.
The Create Kubernetes cluster screen appears.
In the Resource group field, select the required resource group.
In the Kubernetes cluster name field, specify a name for your Kubernetes cluster.
Retain the default values for the remaining settings.
Click Review + create to validate the configuration.
Click Create to create the Kubernetes cluster.
The Kubernetes cluster is created.
2.4 - Accessing the AKS Cluster
Steps to access the Kubernetes Cluster.
Connect to the cloud service using the steps in this section.
Log in to the Linux instance and run the following command to connect your Base machine to the Kubernetes cluster.
az aks get-credentials --resource-group <Name_of_Resource_group> --name <Name_of_Kubernetes_Cluster>
The Base machine is now connected with the Kubernetes cluster. You can now run commands using the Kubernetes command line interface (kubectl) to control the nodes on the Kubernetes cluster.
Validate that the cluster is up by running the following command.
kubectl get nodes
The command lists the Kubernetes nodes available in your cluster.
2.5 - Uploading the Image to the Azure Container Registry
Steps to upload the Docker image to the Azure Container Registry (ACR).
Use the information in this section to upload the Docker image to the Azure Container Registry (ACR) for running the Protegrity Anonymization API in AKS.
Note: For more information about creating the Azure Container Registry, refer to Create an Azure container registry using the Azure portal.
To install the Protegrity Anonymization API:
Log in to the machine as an administrator to install the Protegrity Anonymization API.
Install Docker using the steps provided at https://docs.docker.com/engine/install/.
Configure Docker to push the Protegrity Anonymization API images to the Azure Container Registry (ACR) by running the following command:
docker login <Container_registry_name>.azurecr.io
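ACR login server names also follow a fixed pattern: the registry name plus .azurecr.io. A sketch with a hypothetical registry name:

```shell
# Hypothetical registry name; use your own ACR name.
ACR_NAME="myanonregistry"
# ACR login servers always take this form:
ACR_REGISTRY="${ACR_NAME}.azurecr.io"
echo "${ACR_REGISTRY}"
# Either of the following performs the login:
# az acr login --name "${ACR_NAME}"
# docker login "${ACR_REGISTRY}"
```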
Obtain and extract the Protegrity Anonymization files to a directory on your system.
a. Download and extract the ANON-API_RHUBI-ALL-64_x86-64_Generic.K8S_1.4.1.14.tgz files from the .tgz archive.
- anonrestapi_1.4.1.14.tgz
- cluster-autoscaler-autodiscover.yaml
- cluster-aws.yaml
- dependent_images.tgz
b. Extract the anonrestapi_1.4.1.14.tgz files from the .tgz archive.
- Anon_logs.sh
- README.md
- anonapi_1.4.1.14.tgz
- anonapi_helm_1.4.1.14.tgz
- contractual.csv
Note: Do not extract the anonapi_1.4.1.14.tgz package. This file must be loaded directly into Docker using the docker load command.
Navigate to the directory where the following image packages are located.
- dependent_images.tgz
- anonapi_1.4.1.14.tgz
Load the Docker image into Docker by using the following command:
a. Load the dependent images.
docker load < dependent_images.tgz
b. Load the Protegrity Anonymization API image.
docker load < anonapi_1.4.1.14.tgz
List the images that are loaded by using the following command:
docker images
Tag the image to the ACR repository by using the following command:
docker tag <Container image>:<Tag> <Container registry path>/<Container image>:<Tag>
For example:
docker tag ANON-API_1.4.1.14:anon_AZ <container_registry_name>.azurecr.io/anon:anon_AZ
Push the tagged image to the ACR by using the following command:
docker push <Container_registry_path>/<Container_image>:<Tag>
For example:
docker push <container_registry_name>.azurecr.io/anon:anon_AZ
Note: Ensure that the appropriate path for the image registry along with the tag is updated in the values.yaml file.
The image is loaded to the ACR and is ready for deployment.
2.6 - Setting up NGINX Ingress Controller
Steps to install the NGINX Ingress Controller.
Complete the steps provided here for installing the NGINX Ingress Controller on the base machine.
Log in to the base machine and open a command prompt.
Create a namespace where the NGINX Ingress Controller needs to be deployed using the following command.
kubectl create namespace <Namespace name>
For example,
kubectl create namespace nginx
Add the repository from where the Helm charts for installing the NGINX Ingress Controller must be fetched using the following command.
helm repo add stable https://charts.helm.sh/stable
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Install the NGINX Ingress Controller with Helm charts by using the following command.
helm install nginx-ingress --namespace <Namespace name> --set controller.replicaCount=1 --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux ingress-nginx/ingress-nginx --set controller.publishService.enabled=true --set controller.ingressClassResource.name=<NGINX ingress class name> --set podSecurityPolicy.enabled=true --set rbac.create=true --set controller.extraArgs.enable-ssl-passthrough="true" --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=\"true\" --version 4.3.0
For example,
helm install nginx-ingress --namespace nginx --set controller.replicaCount=1 --set controller.extraArgs.enable-ssl-passthrough="true" --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux ingress-nginx/ingress-nginx --set controller.publishService.enabled=true --set controller.ingressClassResource.name=nginx-anon --set podSecurityPolicy.enabled=true --set rbac.create=true --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-internal"=\"true\" --version 4.3.0
For more information about the various configuration parameters for installing the NGINX Ingress Helm charts, refer to the values.yaml file.
Check the status of the nginx-ingress release and verify that all the deployments are running correctly by using the following command.
kubectl get pods -n <Namespace name>
For example,
kubectl get pods -n nginx
Note: Record the pod name. It is required as a parameter in the next step.
View the logs on the Ingress pod using the following command.
kubectl logs pod/<pod-name> -n <Namespace name>
Obtain the external IP of the nginx service by executing the following command.
kubectl get service --namespace <Namespace name>
For example,
kubectl get service -n nginx
Note: Record the external IP. It is required for configuring the Protegrity Anonymization API SDK.
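If you want to capture the external IP in a script, the tabular output of `kubectl get service` can be parsed. A minimal sketch; the service name and the sample output below are assumptions for illustration, not output from this guide:

```shell
#!/bin/sh
# Extract the EXTERNAL-IP column for a service from
# `kubectl get service` tabular output. In practice, pipe
# `kubectl get service -n nginx` into external_ip.
external_ip() {
    # $1 = service name; reads `kubectl get service` output on stdin
    awk -v svc="$1" '$1 == svc { print $4 }'
}

# Hypothetical sample of what `kubectl get service -n nginx` may print:
SAMPLE='NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
nginx-ingress-controller   LoadBalancer   10.0.110.12   10.240.0.42   80:31380/TCP'

printf '%s\n' "$SAMPLE" | external_ip nginx-ingress-controller
```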
2.7 - Using Custom Certificates in Ingress
Steps to use custom certificates with the Ingress Controller.
Protegrity Anonymization API uses certificates for secure communication with the client. You can use the certificates provided by Protegrity or use your own certificates. Complete the configurations provided in this section to use your custom certificates with the Ingress Controller.
Ensure that the certificates and keys are in the .pem format.
Note: Skip the steps provided in this section if you want to use the default Protegrity certificates for the Protegrity Anonymization API.
Log in to the Base Machine where Ingress is configured and open a command prompt.
Copy your certificates to the Base Machine.
Note: Verify the certificates using the commands provided in the section Working with Certificates.
Create a Kubernetes secret of the server certificate using the following command. The namespace used must be the same where the Protegrity Anonymization API application is to be deployed.
kubectl create secret --namespace <namespace-name> generic <secret-name> --from-file=tls.crt=<path_to_certificate>/<certificate-name> --from-file=tls.key=<path_to_certificate>/<certificate-key>
For example,
kubectl create secret --namespace anon-ns generic anon-protegrity-tls --from-file=tls.crt=/tmp/cust_cert/anon-server-cert.pem --from-file=tls.key=/tmp/cust_cert/anon-server-key.pem
Create a Kubernetes secret of the CA certificate using the following command. The namespace used must be the same where the Protegrity Anonymization API application is to be deployed.
kubectl create secret --namespace <namespace-name> generic <secret-name> --from-file=ca.crt=<path_to_certificate>/<certificate-name>
For example,
kubectl create secret --namespace anon-ns generic ca-protegrity --from-file=ca.crt=/tmp/cust_cert/anon-ca-cert.pem
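Before creating the secrets, it can help to confirm that the server certificate and private key actually belong together. One common check, assuming an openssl CLI is available, compares the public keys extracted from each file (the paths below reuse the examples from this section):

```shell
#!/bin/sh
# Verify that a certificate and private key match by comparing
# digests of their public keys. Paths are illustrative.
cert_key_match() {
    # $1 = certificate (.pem), $2 = private key (.pem)
    c=$(openssl x509 -noout -pubkey -in "$1" | openssl sha256)
    k=$(openssl pkey -pubout -in "$2" | openssl sha256)
    [ "$c" = "$k" ] && echo "match" || echo "MISMATCH"
}

# Example invocation:
# cert_key_match /tmp/cust_cert/anon-server-cert.pem /tmp/cust_cert/anon-server-key.pem
```

If the check prints MISMATCH, recheck the certificate and key files before creating the Kubernetes secret.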
Open the values.yaml file.
Add the following host and secret code for the Ingress configuration at the end of the values.yaml file.
## Refer section in documentation for setting up and configuring NGINX-INGRESS before deploying the application.
ingress:
## Add a host section with the hostname used as the CN while creating the server certificates.
## While creating the certificates, you can use *.protegrity.com as the CN and SAN, as in the example below.
host: **anon.protegrity.com** # Update the host according to your server certificates.
## To terminate TLS on the Ingress Controller Load Balancer.
## K8s TLS Secret containing the certificate and key must also be provided.
secret: **anon-protegrity-tls** # Update the secretName according to your secretName.
## To validate the client certificate with the above server certificate
## Create the secret of the CA certificate used to sign both the server and client certificate as shown in example below
ca_secret: **ca-protegrity** # Update the ca-secretName according to your secretName.
ingress_class: nginx-anon
Note: Ensure that you replace the host, secret, and ca_secret attributes in the values.yaml file with the values as per your certificate.
For more information about using custom certificates, refer to Updating the Configuration Files.
2.8 - Updating the Configuration Files
Steps to update configuration files.
Use the template files provided to specify the AKS settings for the Protegrity Anonymization API.
Create the Protegrity Anonymization API namespace using the following command.
kubectl create namespace <name>
Note: Update and use the namespace name from the values.yaml file that is present in the Helm chart.
Extract and update the files in the ANON-API_HELM_1.4.1.14.tgz package.
The ANON-API_HELM_1.4.1.14.tgz package contains the values.yaml file that must be modified as per your requirements. It also contains the templates directory with yaml files.
Note: Ensure that the necessary permissions for updating the files are assigned to the .yaml files.
Update the values.yaml file.
Note: For more information about the values.yaml file, refer to values.yaml.
a. Specify a namespace for the pods.
```
namespace:
  name: anon-ns # Update the namespace if required.
```
b. Specify the node name and zone information for the node as a prerequisite for the database pod and the Anon-Storage (S3 bucket) pod. Use the name of a node that is running in the same zone where the AKS cluster is created.
```
## Prerequisite for setting up Database and S3 bucket Pod.
## This is to handle any new DB pod getting created that uses the same persistence storage in case the running Database pod gets disrupted.
## This persistence also helps persist Anon-storage data.
persistence:
## Update storageClassName based on the PV/PVC/Storage config.
storageClassName: # Example: managed-premium for Azure, standard for AWS EKS, gp2 for AWS EC2, standard for GCP.
fsType: ext4
## This section is required if the image is getting pulled from the Azure Container Registry
## create image pull secrets and specify the name here.
## remove the [] after 'imagePullSecrets:' once you specify the secrets
#imagePullSecrets: []
# - name: regcred
## This section is required if the S3 bucket image is getting pulled from the Azure Container Registry instead of Public Repo
## create image pull secrets and specify the name here.
## remove the [] after 'imagePullSecrets:' once you specify the secrets
#minioImagePullSecrets: []
# - name: regcred
```
Update the repository information in the file. The Anon-Storage pod uses the S3 bucket Docker image quay.io/minio/minio:RELEASE.2025-04-03T14-56-28Z, which is pulled from the Public repository.
image:
minio_repo: quay.io/minio/minio # Public repo path for Minio Image.
minio_tag: RELEASE.2025-04-03T14-56-28Z # Tag name for Minio image.
repository: <Repo_path> # Repo path for the Container Registry in Azure, AWS.
anonapi_tag: <AnonImage_tag> # Tag name of the ANON-API Image.
database_tag: <DatabaseImage_tag> # Tag name of the Database Image.
pullPolicy: Always
Note: Ensure that you update the repository and anonapi_tag according to your container registry.
The S3 bucket uses access keys and secrets for performing file operations. Protegrity provides a default set of credentials that are stored as part of the secret storage-creds. If you are creating your own secret, update the existingSecret section.
storage:
## Refer the following command for creating your own secret.
## CMD: kubectl create secret generic my-minio-secret --from-literal=rootUser=foobarbaz --from-literal=rootPassword=foobarbazqux
existingSecret: "" # Supply your secret Name for ignoring below default credentials.
bucket_name: "anonstorage" # Default bucket name for S3 bucket
secret:
name: "storage-creds" # Secret to access minio-server
access_key: "anonuser" # Access key for minio-server
secret_key: "protegrity" # Secret key for minio-server
Extract the values.yaml file from the Helm chart package.
Uncomment the following parameters and update the secret name in the values.yaml file.
## This section is required if the image is getting pulled from the Azure Container Registry
## create image pull secrets and specify the name here.
## remove the [] after 'imagePullSecrets:' once you specify the secrets
#imagePullSecrets: []
# - name: regcred
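Assuming a registry secret named regcred has been created in the application namespace, for example with `kubectl create secret docker-registry regcred --docker-server=<registry> --docker-username=<user> --docker-password=<password>` (the name regcred is illustrative), the uncommented section would read:

```
## Hypothetical example: assumes a secret named "regcred" already
## exists in the same namespace as the application.
imagePullSecrets:
  - name: regcred
```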
Perform the following steps to enable communication between the Kubernetes cluster and the Azure Container Registry.
a. Run the following command from a command prompt to log in.
```
docker login
```
b. Specify your ACR access credentials.
2.9 - Deploying the Protegrity Anonymization API to the AKS Cluster
Steps to deploy the Protegrity Anonymization API to the AKS cluster.
Deploy the pods using the steps in the following section.
Run the following command to deploy the pods.
helm install <helm-name> /<path_to_helm> -n <namespace>
Verify that the necessary pods and services are configured and running.
Run the following command to verify the information for accessing the Protegrity Anonymization API externally on the cluster. The port mapping for accessing the UI is displayed after running the command.
kubectl get service -n <namespace>
Run the following command to verify the deployment.
kubectl get deployment -n <namespace>
Run the following command to verify the pods created.
kubectl get pods -n <namespace>
Run the following command to view detailed pod information, including node assignment.
kubectl get pods -o wide -n <namespace>
Run the following command to obtain the IP address of the Ingress.
kubectl get ingress -n <namespace>
The container is now ready to process Protegrity Anonymization API requests.
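The verification steps above can be scripted by parsing the `kubectl get pods` output and checking that every pod reports a Running status. A sketch; the pod names in the sample output are hypothetical, and in practice you would pipe `kubectl get pods -n <namespace>` into the function:

```shell
#!/bin/sh
# Check that every pod in `kubectl get pods` tabular output is Running.
all_running() {
    # Reads pod listing on stdin; skips the header line; column 3 = STATUS.
    awk 'NR > 1 && $3 != "Running" { bad = 1 } END { print (bad ? "not-ready" : "ready") }'
}

# Hypothetical sample of what `kubectl get pods -n anon-ns` may print:
SAMPLE='NAME            READY   STATUS    RESTARTS   AGE
anonapi-0       1/1     Running   0          2m
anon-storage-0  1/1     Running   0          2m'

printf '%s\n' "$SAMPLE" | all_running
```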
2.10 - Viewing Protegrity Anonymization API Using REST
Steps to view the Protegrity Anonymization API service and pod details.
Use the URLs provided here for viewing the Protegrity Anonymization API service and pod details after you have successfully deployed the Protegrity Anonymization API.
In the hosts file, map the IP address of the Ingress to the hostname set in the Ingress configuration.
For more information about updating the hosts file, refer to step 2 of the section Enabling Custom Certificates From SDK.
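The hosts-file entry takes the usual `<ip> <hostname>` form. A sketch, using a placeholder IP and the example hostname from the Ingress configuration; on a real client machine the file is /etc/hosts and editing it typically requires root, so the sketch defaults to a temporary file:

```shell
#!/bin/sh
# Append an Ingress IP -> hostname mapping to a hosts file.
# HOSTS_FILE defaults to a temp file so the sketch is safe to run;
# on a real client this would be /etc/hosts (run with sudo).
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
INGRESS_IP="10.240.0.42"   # placeholder: use the `kubectl get ingress` output
printf '%s %s\n' "$INGRESS_IP" "anon.protegrity.com" >> "$HOSTS_FILE"
grep "anon.protegrity.com" "$HOSTS_FILE"
```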
Open a web browser.
Use the following URL to view basic information about the Protegrity Anonymization API.
https://anon.protegrity.com/
Use the following URL to view the Swagger UI. The various Protegrity Anonymization APIs are visible on this page.
https://anon.protegrity.com/pty/anonymization/api/v2/ui
Use the following URL to view the contractual information for the Protegrity Anonymization API.
https://anon.protegrity.com/about