Protegrity AI Team Edition

Protegrity AI Team Edition delivers enterprise-grade data protection in a lightweight, scalable package. It is ideal for teams seeking secure, policy-driven control with fast deployment and seamless integration.

1 - Introduction to Protegrity AI Team Edition

Overview of the product.

Protegrity AI Team Edition is a container-based data protection solution designed for teams and mid-enterprise organizations that need to safeguard sensitive data across AI, GenAI, and analytics workloads.

It delivers core Protegrity capabilities, including governance, discovery, protection, and privacy, in a lightweight, containerized form factor that emphasizes fast deployment, simplified operations, and consistent enforcement of data security policies across environments.

Built on a modular, microservices architecture, it moves away from the legacy appliance model to align with modern DevOps practices. The result is a deployment that scales easily, integrates natively with existing CI/CD pipelines, and supports governing agents and securing departmental data.

Purpose and Audience

Protegrity AI Team Edition is intended for:

  • Organizations seeking to protect data used in AI or analytics pipelines.
  • Teams that require fast deployment cycles and simplified upgrades.
  • Customers who need enterprise-grade data protection in a form that can start small, operate independently, and later scale into Protegrity AI Enterprise Edition.

Tech Preview Release Model

A Tech Preview is a controlled early release that allows customers to test new capabilities in real environments before general availability. It lets selected customers peek behind the curtain to try emerging features, validate real-world fit, and provide input on product development before the final, fully supported release. It is not feature-complete, may change based on input, and typically comes without SLA-grade support.

  • Release Date: November 17, 2025
  • Intended Use: Testing, training, demonstrations, and feedback.
  • Known Limitations:
    • No Graphical User Interface (GUI).
    • Policy management limited to create/delete operations.
    • No backup/restore.
    • No audit store policy management.
    • Not upgradable to GA.
    • No license enforcement.
  • Restrictions:
    • Not for production use.
    • Does not include all capabilities planned for version 1.0.
    • Setup may require direct assistance from Protegrity.

2 - Overview

An overview of the Protegrity AI Team Edition.

The Protegrity AI Team Edition Tech Preview introduces a modern, container-based approach to data protection built on a microservices architecture. It enables organizations to evaluate how Protegrity's methods, such as policy management, anonymization, discovery, and semantic controls, integrate into AI and analytics pipelines.

Architecture and Design Principles

Protegrity AI Team Edition delivers core Protegrity capabilities, including governance, discovery, protection, privacy, and semantic controls, in a lightweight, containerized form factor that emphasizes fast deployment, simplified operations, and consistent enforcement of data security policies across environments. It is designed around five engineering goals: ease of deployment, high availability, scalability, extensibility, and maintainability.

Ease of Deployment
  • OpenTofu templates provision a Kubernetes environment (EKS, ECS, or Docker Compose) with minimal manual intervention.
  • Helm Charts deploy and configure all components for consistent, reproducible setups.
  • Because each component runs as a container image, upgrades and patches follow standard CI/CD workflows.

High Availability
  • Kubernetes manages service health and redundancy automatically.
  • No Trusted Appliance Cluster (TAC) required.
  • No external load balancers required.
  • No manual replication required.

Scalability
  • The system scales horizontally and vertically through Kubernetes-native scale-up and scale-down mechanisms.
  • Administrators can adjust resources dynamically as workloads grow or shrink without redeployment.

Extensibility
  • New capabilities are introduced by adding new container images and Helm configurations.
  • Allows incremental feature expansion without redesign.

Maintainability
  • Kubernetes simplifies lifecycle management.
  • Updating a container image replaces an older version automatically, avoiding downtime and manual patching.
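The maintainability point can be sketched with a Helm values override; the service name, repository path, and value layout below are hypothetical and only illustrate the pattern:

```yaml
# Hypothetical values override (override.yaml): bumping one service's image
# tag causes Kubernetes to roll the deployment to the new container image.
insight:
  image:
    repository: artifactory.protegrity.com/insight   # illustrative path
    tag: "0.9.1"                                      # patched version
```

This would be applied with something like `helm upgrade <release> <chart> -f override.yaml`, where the release and chart names depend on your deployment.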

Core Services

All deployments include a standardized set of common services, delivered through a microservices architecture, that provide routing, security, and audit capabilities for all features and protectors.

  • Common Ingress Controller: The main entry point for all API and service traffic to the cluster.
  • Certificate Management: Manages and validates TLS certificates for inbound and inter-service communication.
  • Authentication and Authorization: Provides user and service credential validation with role-based access enforcement.
  • Routing to Feature Endpoints: Directs traffic to the appropriate running service or feature container.
  • Insight: Provides logging and auditing capabilities using OpenSearch for event storage and OpenSearch Dashboards for visualization and reporting.

Feature Set

The Tech Preview release of Protegrity AI Team Edition includes a limited but functional subset of capabilities for evaluation.

The various features compatible with Protegrity AI Team Edition are provided here.

* - Available for purchase as an add-on.

  • Policy Management: Define and manage data protection policies that govern tokenization, masking, and anonymization.
  • Data Discovery: Automatically identify structured and unstructured sensitive data through pattern matching and machine learning classification.
  • Semantic Guardrails: Apply contextual and runtime safeguards to AI and analytics workflows to prevent data leakage or misuse.
  • Anonymization*: Apply statistical privacy models such as k-anonymity, l-diversity, and t-closeness to sensitive datasets.
  • Synthetic Data*: Generate tabular synthetic datasets for development, testing, and AI model validation without exposing real sensitive data.
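To give a feel for what a model like k-anonymity measures (this is an illustration of the concept only, not how the Anonymization feature is invoked): every combination of quasi-identifier values should occur at least k times. A quick check on a made-up dataset:

```shell
# Sketch: compute the smallest quasi-identifier group size (the dataset's k)
# for a fabricated CSV with quasi-identifiers zip,age_band.
data='zip,age_band
100,20-29
100,20-29
100,30-39
100,30-39
200,20-29
200,20-29'

# Drop the header, count each distinct combination, take the smallest count.
k=$(printf '%s\n' "$data" | tail -n +2 | sort | uniq -c \
  | awk '{print $1}' | sort -n | head -1)
echo "k = $k"
```

Here every (zip, age_band) pair appears twice, so the dataset is 2-anonymous with respect to those columns.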

Protegrity Protectors

Protegrity AI Team Edition protectors enable organizations to embed data protection directly where data is processed: inside applications, analytics engines, or cloud-native data systems.

Application Protectors

Application protectors provide data protection directly within applications or runtime containers. They are suitable for teams developing secure APIs or microservices that handle sensitive data in languages such as Java, Python, or .NET.

  • Immutable Application Protector – Java Container: Protects data within Java-based containers, such as OpenShift and EKS. Part number: ApplicationProtector_Java_RHUBI_K8S
  • Immutable Application Protector – REST Container: Provides REST-based protection services for containerized workloads. Part number: REST_RHUBI-9-64_x86-64_K8S
  • Application Protector – Python: Protegrity Application Protector for Python environments. Part number: ApplicationProtector_Linux-ALL-64_x86-64_PY-3.11
  • Application Protector – Java: Standard Java runtime protector. Part number: ApplicationProtector_Linux-ALL-64_x86-64_JRE-1.8-64
  • Application Protector – .NET: Protegrity Application Protector for Microsoft .NET applications. Part number: ApplicationProtector_WIN-ALL-64_x86-64_NET-STD-2.0-64

Cloud API Protector

The Cloud API protector extends Protegrity protection to AWS serverless and API-based workloads. It is typically used for securing transient data handled by AWS Lambda or similar function-based architectures.

  • CloudProtect – Cloud API – AWS: Protegrity CloudProtect using AWS Serverless Functions. Part number: CP_SVRL-ALL-64_x86-64_AWS.API

Cloud-Native Data Warehouse Protectors

Cloud-Native Data Warehouse protectors apply field-level protection inside analytics environments such as Snowflake, Redshift, and Athena. These protectors preserve query usability and analytical fidelity while maintaining data confidentiality.

  • Cloud Native Data Warehouse Protector – Snowflake: Integrates with Snowflake for secure, compliant analytics across clouds. Part number: CP_SVRL-ALL-64_x86-64_AWS.Snowflake
  • Cloud Native Data Warehouse Protector – Redshift: Provides protection for Amazon Redshift queries and transformations. Part number: CP_SVRL-ALL-64_x86-64_AWS.Redshift
  • Cloud Native Data Warehouse Protector – Athena: Applies protection to Amazon Athena query execution. Part number: CP_SVRL-ALL-64_x86-64_AWS.Athena

Big Data Protectors

Big Data protectors integrate with large-scale analytics and data lake environments to secure data during ETL, batch, or stream processing operations.

  • Big Data Protector – Amazon EMR: Provides data protection within Amazon EMR clusters. Part number: BigDataProtector_Linux-ALL-64_x86-64_EMR-7.x-64
  • Big Data Protector – Databricks: Enables tokenization and masking for Databricks across AWS, Azure, and GCP. Part number: BigDataProtector_Linux-ALL-64_x86-64_AWS.Databricks-15.4-64
  • Big Data Protector – CDP Data Hub: Supports Cloudera Data Platform (CDP) Data Hub deployments across AWS, Azure, and GCP. Part number: BigDataProtector_Linux-ALL-64_x86-64_AWS.Generic.CDP-Datahub-7.3-64

Features Included in the AI Team Edition

The following features are available with the AI Team Edition:

  • Policy Management using AI Agent
  • Data Discovery
  • Semantic Guardrails
  • Any one protector from each of the following families:
    • Application protector
    • Cloud protector
    • Big Data protector

The following features are additional add-ons that must be purchased separately:

  • Anonymization
  • Synthetic Data

3 - Features

List of features for AI Team Edition.

Installing NFA

Before using the AI Team Edition features, install the NFA.

Installing the Gen AI features

Use the provided automated installer or follow the manual steps to install the GenAI features. Ensure that the namespaces used are consistent; otherwise, features installed using the manual steps cannot be removed using the automated installer.

  1. Log in to the NFA CLI.
  2. Navigate to the deployment/iac-setup directory.
  3. Run the following command.
./install_features.sh
  4. Enter the number for the feature that must be installed.

Bootstrapping Policy Management

Note: This is an optional step. Run it if you want to initialize the PIM and create sample data elements, datastores, policies, and other resources for testing purposes. The script only creates these resources; it does not deploy them, so it is up to you to decide what gets deployed.

Generating Access Token for Authentication

Navigate back to the root directory (deployment/) to run the bootstrap script:

# From iac-setup directory, go back to root
cd ..

# Generate access token
TOKEN=$(curl -s -i -X 'POST' 'https://eclipse.aws.protegrity.com/pty/v1/auth/login/token' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'loginname=admin' \
  -d 'password=Admin123!' \
  -k  \
  | grep -i 'pty_access_jwt_token:' \
  | awk '{print $2}' \
  | tr -d '\r')

Note: The $TOKEN variable is used in the subsequent scripts. Run the block above to initialize $TOKEN before running the scripts below.
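The header-parsing pipeline above can be exercised offline against a captured response, which is useful for checking that the token actually lands in $TOKEN; the header value below is a made-up placeholder, not a real token:

```shell
# Sketch: run the same grep/awk/tr pipeline over a saved sample of the
# response headers instead of a live curl call.
sample_headers='HTTP/2 200
content-type: application/json
pty_access_jwt_token: eyJhbGciOi.sample.value'

TOKEN=$(printf '%s\n' "$sample_headers" \
  | grep -i 'pty_access_jwt_token:' \
  | awk '{print $2}' \
  | tr -d '\r')

# Fail fast if the token was not extracted.
[ -n "$TOKEN" ] && echo "token captured"
```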

Running the Bootstrap Script

./bootstrap-pim.sh --host eclipse.aws.protegrity.com --token $TOKEN

Expected output:

Configured host: eclipse.aws.protegrity.com
Wait for devops to come alive
Devops is alive
Bootstrap environment...
Bootstrap done
Wait for hubcontroller to come alive
Hubcontroller is alive
Creating datastores...
Creating roles...
Adding members...
Creating custom alphabet...
Creating masks...
Creating data elements...
Creating policies...
Creating rules...
Creating trusted applications...
Finished!

Note: This example assumes there is an /etc/hosts entry for eclipse.aws.protegrity.com, which points to the address listed for the above load balancer.

Deploying Policies

After the PIM has been initialized, and the resources have been created, you can deploy datastores, applications, and policies.

List available resources:

# List datastores
curl -k -H "Authorization: Bearer $TOKEN" -X GET https://eclipse.aws.protegrity.com/pty/v2/pim/datastores

# List applications
curl -k -H "Authorization: Bearer $TOKEN" -X GET https://eclipse.aws.protegrity.com/pty/v2/pim/applications

# List policies
curl -k -H "Authorization: Bearer $TOKEN" -X GET https://eclipse.aws.protegrity.com/pty/v2/pim/policies

Deploy a combination:

curl -k -H "Authorization: Bearer $TOKEN" -H "content-type: application/json" -X POST https://eclipse.aws.protegrity.com/pty/v2/pim/datastores/1/deploy -d '{"policies": ["1"], "applications": ["1"]}'

Where 1 is the UID returned by the list requests above.
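Rather than reading UIDs off the raw JSON by eye, jq (which the bootstrap script installs) can extract them. The response body below is a fabricated sample, and the `uid`/`name` field names are assumptions about the response shape:

```shell
# Sketch: pull the UID of the first datastore from a captured list response.
# The JSON is a made-up sample; real responses may use different field names.
response='[{"uid":"1","name":"sample-datastore"},{"uid":"2","name":"other"}]'

uid=$(printf '%s' "$response" | jq -r '.[0].uid')
echo "datastore uid: $uid"
```

In practice you would pipe the output of the list request into jq instead of using a saved string.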

Compatibility list

The following containers and versions are compatible with the AI Team Edition.

  • NFA: 0.9.0
  • Data Discovery: ver
  • Gen AI
    • Semantic Guardrails: ver
    • AI Agent: ver
  • Protectors
    • Application Protectors
      • Java: ver
      • Python: ver
      • .NET: ver
    • Big Data Protector
      • Amazon EMR: ver
      • CDP-AWS-DataHub: ver
  • Containers
    • Application Protector Java Container: 10.0
    • REST Container: 10.0
  • Cloud
    • Product: ver
  • Gen AI Add-ons
    • Anonymization: ver
    • Synthetic Data: ver

3.1 - NFA

Details about NFA

Beyond infrastructure, NFA introduces a suite of common microservices that act as the backbone for AI Team Edition features. These include ingress control for secure traffic routing, certificate management for request validation, and robust authentication and authorization services. NFA also integrates Insight for audit logging and analytics, leveraging OpenSearch and OpenSearch Dashboards for visualization and compliance reporting. On top of this foundation, AI Team Edition delivers advanced capabilities such as policy management, anonymization, data discovery, semantic guardrails, and synthetic data generation. All these features are orchestrated within the NFA cluster. This modular approach ensures scalability, security, and flexibility, making NFA a strategic enabler for organizations adopting cloud-first and containerized environments.

3.1.1 - Installing NFA

Steps to install NFA

The New Foundational Architecture (NFA) is the core framework that forms the AI Team Edition. It is designed to deliver a modern, cloud-native experience for data security and governance. Built on Kubernetes, NFA uses a containerized architecture that simplifies deployment and scaling. Using OpenTofu scripts and Helm charts, administrators can stand up clusters with minimal manual intervention, ensuring consistency and reducing operational overhead.

3.1.1.1 - Prerequisites

Ensure that the following prerequisites are met.

Ensure that you have completed the following prerequisites before deploying the NFA.

Baseline EKS Cluster Setup

This section provides instructions for deploying the NFA baseline environment on an AWS EKS cluster using OpenTofu and Make utility.

Software Prerequisites

Before you begin installing the NFA, ensure you have the following prerequisites met:

  1. IAM Roles: Contact your IT team to create the necessary IAM roles with the permissions required to create and manage AWS EKS resources.

For more information about AWS managed policies, refer to the section AWS managed policies for Amazon Elastic Kubernetes Service in the AWS documentation.

  2. Jump box or local machine - Use a dedicated EC2 instance (RHEL 10, Debian 13, Debian 12, or Ubuntu 24.04) for deployment.

3.1.1.2 - Setting up and Deploying the NFA

Complete the steps provided in this section to deploy NFA in AWS EKS.

Perform the following steps to set up and deploy the NFA:

Downloading and Extracting the Recipe for Deploying NFA

This section describes the steps to download and extract the recipe for deploying the NFA.

Note: If you have set up the jump box previously, then run the make clean command. This ensures that the local repository on the jump box and the clusters are cleaned up before proceeding with a new installation.

Download the Team-Edition-All-64_x86-64_0.9.0.12.tar archive from the repository and extract it:

# Create deployment directory
mkdir deployment && cd deployment

# Download the archive
wget https://artifactory.protegrity.com/artifactory/eclipse/eclipse-init/latest/Team-Edition-All-64_x86-64_0.9.0.12.tar

# Extract the archive
tar -xvf Team-Edition-All-64_x86-64_0.9.0.12.tar

Installing the NFA

The repository provides a bootstrap script that automatically installs or updates the following software on the jump box:

  • AWS CLI - Required to communicate with your AWS account.
  • OpenTofu - Required to manage infrastructure as code.
  • kubectl - Required to communicate with the Kubernetes cluster.
  • Helm - Required to manage Kubernetes packages.
  • Make - Required to run the OpenTofu automation scripts.
  • jq - Required to parse JSON.

The bootstrap script also checks if you have the required permissions on AWS. It then sets up the EKS cluster and installs the microservices required for deploying the NFA.

The bootstrap script prompts you for the variables required to complete your deployment. Follow the instructions on the screen:

./bootstrap.sh

The script will prompt for the following variables. Press ENTER to accept the default value.

  1. eks_cluster_name - Name of the EKS cluster. The default value is opencluster-v1.

    The following characters are allowed:

    • Lowercase letters: a-z
    • Numbers: 0-9
    • Hyphens: -

    The following characters are not allowed:

    • Uppercase letters: A-Z
    • Underscores: _
    • Spaces
    • Any special characters such as: / ? * + % ! @ # $ ^ & ( ) = [ ] { } : ; , .
    • Leading or trailing hyphens
    • More than 50 characters

    For guidelines on creating an Amazon EKS cluster, refer to the section Create an Amazon EKS cluster in the Amazon EKS User Guide.

  2. vpc_id - The script automatically queries for the available VPCs. Type the ID of the VPC to use.

  3. private subnet ID - The script automatically queries for the available VPC subnets. You are prompted to enter two private subnet IDs. Specify two private subnet IDs from different Availability Zones.
    The script then automatically updates the VPC CIDR block based on your VPC details.

  4. ingress_fqdn - Fully Qualified Domain Name for your ingress. The default value is eclipse.aws.protegrity.com.

  5. global_image_reg - Global image repository from where the container images will be retrieved. The default value is artifactory.protegrity.com:443.

  6. Enter the registry user name. If you are using Artifactory as the global image repository, then leave this value blank.

  7. Enter the registry password. If you are using Artifactory as the global image repository, then leave this value blank.
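The cluster-name rules above can be collapsed into a single pattern check. This is a sketch: the regular expression is one encoding of the listed rules (lowercase letters, digits, and hyphens only; no leading or trailing hyphen; at most 50 characters), not an official validator:

```shell
# Sketch: validate a proposed EKS cluster name against the documented rules.
valid_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,48}[a-z0-9])?$'
}

valid_name "opencluster-v1" && echo "accepted: opencluster-v1"
valid_name "Bad_Name"       || echo "rejected: Bad_Name"
```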

A variables.tf.bak backup file is created that saves a copy of all the values that you have specified for the variables. After you enter all the values, you are prompted to apply the changes to the variables.tf file. Type yes to apply the changes.
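For orientation, the applied values land in variables.tf roughly as follows; this excerpt is a hypothetical sketch based on the variable names and defaults listed above, and the actual file layout may differ:

```hcl
# Hypothetical excerpt of variables.tf after the prompts are applied.
variable "eks_cluster_name" {
  type    = string
  default = "opencluster-v1"
}

variable "ingress_fqdn" {
  type    = string
  default = "eclipse.aws.protegrity.com"
}

variable "global_image_reg" {
  type    = string
  default = "artifactory.protegrity.com:443"
}
```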

Note: The cluster creation process can take 10-15 minutes.

If the session is terminated during installation (due to network issues, a power outage, and so on), the installation stops. To restart the installation, run the following commands:

make clean
./bootstrap.sh

Accessing the NFA UI

To access the Web UI, you need to map the Ingress hostname to the Load Balancer’s IP address in your local hosts file.

  1. Get Ingress Details: Find the hostname and the AWS Load Balancer address.

    kubectl get ingress -A
    

    The output will look something like this:

    NAME      CLASS    HOSTS                      ADDRESS                                                              PORTS   AGE
    eclipse   <none>   eclipse.aws.protegrity.com   internal-a1412...us-east-1.elb.amazonaws.com                         80      20m
    
  2. Find the Load Balancer IP: Use ping on the ADDRESS from the output above to resolve its IP address.

    ping internal-a1412824c35d942aa9373733bd3b8cca-391878848.us-east-1.elb.amazonaws.com
    

    Copy the IP address from the ping output (e.g., 192.0.2.1).

  3. Update Your Hosts File: Add an entry to your local hosts file to map the hostname to the IP address.

    • Linux/macOS: /etc/hosts
    • Windows: C:\Windows\System32\drivers\etc\hosts

    Example Entry:

    192.0.2.1     eclipse.aws.protegrity.com
    
  4. Access the UI: You can now access the Web UI in your browser.

    • URL: https://eclipse.aws.protegrity.com
    • Default User Credentials: user: admin, password: Admin123!
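Steps 2 and 3 can be combined into a small idempotent snippet. In this sketch HOSTS_FILE points at a scratch file so it is safe to run as-is; in practice it would be /etc/hosts (and require sudo), and the IP is the documentation example:

```shell
# Sketch: append the hostname mapping only if it is not already present.
HOSTS_FILE="$(mktemp)"          # stand-in for /etc/hosts in this sketch
IP="192.0.2.1"                  # IP resolved from the load balancer address
NAME="eclipse.aws.protegrity.com"

grep -q "^[^#]*[[:space:]]$NAME" "$HOSTS_FILE" \
  || printf '%s\t%s\n' "$IP" "$NAME" >> "$HOSTS_FILE"

cat "$HOSTS_FILE"
```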

Accessing the NFA CLI

Prerequisites

  1. SSH Keys: You need the SSH private key that corresponds to the public key configured in the pty-cli pod.
  2. Network Access: Ensure you have network connectivity to the cluster.
  3. Hosts File Update: Same as the Web UI access, update your hosts file to map eclipse.aws.protegrity.com to the load balancer IP.

The private key to access the CLI pod will be in the iac-setup directory. The key file is ssh_host_rsa_key.

For Linux/macOS Users

From the iac-setup directory:

ssh -i ssh_host_rsa_key -p 22 ptyitusr@eclipse.aws.protegrity.com

With options to skip host key checking:

ssh -i ssh_host_rsa_key -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -p 22 ptyitusr@eclipse.aws.protegrity.com

For Windows Users

  1. Using Windows SSH Client (Windows 10/11 with OpenSSH):

    ssh -i C:\path\to\iac-setup\ssh_host_rsa_key -p 22 ptyitusr@eclipse.aws.protegrity.com
    
  2. Using PuTTY:

    • Host Name: eclipse.aws.protegrity.com
    • Port: 22
    • Connection Type: SSH
    • Under Connection > SSH > Auth, browse and select your private key file (.ppk format)
    • Username: ptyitusr

CLI Usage

Once connected, you’ll see the Protegrity CLI welcome banner. You’ll be prompted for:

  • Username: Your Protegrity application username (default: admin)
  • Password: Your Protegrity application password (default: Admin123!)

The CLI supports two main command categories:

  • pim: Policy Information Management commands for data protection policies
  • admin: User, Role, Permission, and Group management commands

Getting Started with the CLI

Refer to the CLI documentation: Protegrity Command Line Interface (CLI) Reference

Installing Protegrity AI Team Edition Features

This section describes how to install the Protegrity AI Team Edition features on the NFA.

  1. Run the following command to navigate to the iac-setup directory.
   cd deployment/iac-setup
  2. Run the following command to install the Protegrity AI Team Edition features.
# From iac-setup directory

./install_features.sh

The Protegrity AI Team Edition Features installer for Tech Preview appears. It shows the following options:

--- Install ---
 1. Install Data Discovery
 2. Install Semantic Guardrails
 3. Install Anonymization
 4. Install Synthetic Data
--- Uninstall ---
 5. Uninstall Data Discovery
 6. Uninstall Semantic Guardrails
 7. Uninstall Anonymization
 8. Uninstall Synthetic Data
--- Other ---
 9. List Installed Products
 10. Exit
  3. Type any value from 1 to 4 to install the corresponding Protegrity AI Team Edition feature.

    The corresponding option is executed.

    For example, if you type 1, then the installer installs Data Discovery on the NFA.

    After the command is executed, press ENTER to continue.

Note: Before installing Semantic Guardrails, you must install Data Discovery. If you try to install Semantic Guardrails without Data Discovery, you are first prompted to install Data Discovery.

  4. Type any value from 5 to 8 to uninstall a feature that you have already installed. A prompt appears to confirm the uninstallation. Type y to uninstall the feature.

    After the command is executed, press ENTER to continue.

  5. Type 9 to list the features that have been installed.

    The installer checks the status of all the products and then displays the installation status for each product.

  6. Type 10 to exit the installer.

The installation logs are saved in the /tmp/protegrity_installer.log file.

Bootstrapping Policy Management

Note: This is an optional step. Run it if you want to initialize the PIM and create sample data elements, datastores, policies, and other resources for testing purposes. The script only creates these resources; it does not deploy them, so it is up to you to decide what gets deployed.

Generating Access Token for Authentication

Navigate back to the root directory (eclipse-init/) to run the bootstrap script:

# From iac-setup directory, go back to root
cd ..

# Generate access token
TOKEN=$(curl -s -i -X 'POST' 'https://eclipse.aws.protegrity.com/pty/v1/auth/login/token' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'loginname=admin' \
  -d 'password=Admin123!' \
  -k  \
  | grep -i 'pty_access_jwt_token:' \
  | awk '{print $2}' \
  | tr -d '\r')

Note: The $TOKEN variable is used in the subsequent scripts. Run the block above to initialize $TOKEN before running the scripts below.

Running the Bootstrap Script

./bootstrap-pim.sh --host eclipse.aws.protegrity.com --token $TOKEN

Expected output:

Configured host: eclipse.aws.protegrity.com
Wait for devops to come alive
Devops is alive
Bootstrap environment...
Bootstrap done
Wait for hubcontroller to come alive
Hubcontroller is alive
Creating datastores...
Creating roles...
Adding members...
Creating custom alphabet...
Creating masks...
Creating data elements...
Creating policies...
Creating rules...
Creating trusted applications...
Finished!

Note: This example assumes there is an /etc/hosts entry for eclipse.aws.protegrity.com, which points to the address listed for the above load balancer.

Deploying Policies

After the PIM has been initialized, and the resources have been created, you can deploy datastores, applications, and policies.

List available resources:

# List datastores
curl -k -H "Authorization: Bearer $TOKEN" -X GET https://eclipse.aws.protegrity.com/pty/v2/pim/datastores

# List applications
curl -k -H "Authorization: Bearer $TOKEN" -X GET https://eclipse.aws.protegrity.com/pty/v2/pim/applications

# List policies
curl -k -H "Authorization: Bearer $TOKEN" -X GET https://eclipse.aws.protegrity.com/pty/v2/pim/policies

Deploy a combination:

curl -k -H "Authorization: Bearer $TOKEN" -H "content-type: application/json" -X POST https://eclipse.aws.protegrity.com/pty/v2/pim/datastores/1/deploy -d '{"policies": ["1"], "applications": ["1"]}'

Where 1 is the UID returned by the list requests above.

Troubleshooting

  • Permission denied (publickey): Ensure you’re using the correct private key that matches the authorized_keys in the pod
  • Connection refused: Verify the load balancer IP and hosts file configuration
  • Key format issues: Ensure your private key is in the correct format (OpenSSH format for Linux/macOS, .ppk for PuTTY)

Management commands

The following section lists the commands to manage the Kubernetes cluster.

Available Makefile Targets

From the iac-setup directory:

  • make or make run: Deploys the entire stack
  • make clean: Destroys all resources created by setup and cleans up Helm/Kubernetes components
  • make check-tools: Verifies that all required CLI tools are installed and available in your PATH

Cleaning up the EKS Resources

To destroy all created resources, including the EKS cluster and related components, run the following commands.

# Navigate to the setup directory
cd iac-setup

# Clean up all resources
make clean

3.1.2 - Protegrity REST APIs

Overview of the Policy Management and Encrypted Resilient Package REST APIs.

The Protegrity REST APIs include the following APIs:

  • Policy Management REST APIs: Used to create or manage policies.
  • Encrypted Resilient Package APIs: Include the REST API that is used to encrypt and export a resilient package, which is used by the resilient protectors.
    For more information on how the REST API is used to export the encrypted resilient package in an immutable policy deployment, refer to the section DevOps Approach for Application Protector.

3.1.2.1 - Accessing the Protegrity REST APIs

Overview of how to access the Protegrity REST APIs.

The following section lists the requirements for accessing the Protegrity REST APIs.

  1. Available endpoints - Protegrity has enabled the following endpoints to access the REST APIs.

    Base URL
    https://{ESA IP address or Hostname}/pty/<Version>/<API>

    Where:

    • ESA IP address or Hostname: Specifies the IP address or Hostname of the ESA.
    • Version: Specifies the version of the API.
    • API: Endpoint of the REST API.
  2. Authentication - You can access the REST APIs using basic authentication, client certificates, or tokens. The authentication depends on the type of REST API that you are using. For more information about accessing the REST APIs using these authentication mechanisms, refer to the section Accessing REST API Resources.

  3. Authorization - You must assign the permissions to roles for accessing the REST APIs. For more information about the roles and permissions required, refer to the section Managing Roles.
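The base-URL pattern can be assembled mechanically from its three parts; the values here are examples drawn from this guide:

```shell
# Sketch: build the base URL from host, API version, and API family.
HOST="eclipse.aws.protegrity.com"   # ESA IP address or hostname
VERSION="v2"                        # API version
API="pim"                           # REST API endpoint family

BASE_URL="https://${HOST}/pty/${VERSION}/${API}"
echo "$BASE_URL"
```

A request to a specific resource then appends the path, for example `curl -k -H "Authorization: Bearer $TOKEN" "$BASE_URL/policies"`.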

3.1.2.2 - View the Protegrity REST API Specification Document

Access and view the REST API specification document. Use an OpenAPI specification editor, such as Swagger Editor, to generate samples.

The steps in this section use Docker to download and launch the Swagger Editor image within a container.

For more information about Docker, refer to the Docker documentation.

The following example uses Swagger Editor to view the REST API specification document. In this example, JSON Web Token (JWT) is used to authenticate the REST API.

  1. Install and start the Swagger Editor.

  2. Download the Swagger Editor image within a Docker container using the following command.

    docker pull swaggerapi/swagger-editor
    
  3. Launch the Docker container using the following command.

    docker run -d -p 8888:8080 swaggerapi/swagger-editor
    
  4. Open the following address in a browser window to access the Swagger Editor using the specified host port.

    http://localhost:8888/
    
  5. Download the REST API specification document using the following command.

    curl -H "Authorization: Bearer ${TOKEN}" "https://<NFA IP address or Hostname>/pty/<Version>/<API>/doc" -H "accept: application/x-yaml" --output api-doc.yaml 
    

    In this command:

    • TOKEN is the environment variable that contains the JWT token used to authenticate the REST API.
    • <Version> is the version number of the API. For example, v1 or v2.
    • <API> is the API for which you want to download the OpenAPI specifications document. For example, specify the value as pim to download the OpenAPI specifications for the Policy Management REST API. Similarly, specify the value as auth to download the OpenAPI specifications for the Authentication and Token Management API.
      For more information about the Policy Management REST APIs, refer to the section Using the Policy Management REST APIs.
      For more information about the Authentication and Token Management REST APIs, refer to the section Using the Authentication and Token Management REST APIs
  6. Drag and drop the downloaded api-doc.yaml file into a browser window of the Swagger Editor.

Generating the REST API Samples Using the Swagger Editor

Perform the following steps to generate samples using the Swagger Editor.

  1. Open the api-doc.yaml file in the Swagger Editor.

  2. On the Swagger Editor UI, click on the required API request.

  3. Click Try it out.

  4. Enter the parameters for the API request.

  5. Click Execute.

    The generated curl command and the request URL appear in the Responses section.

3.1.2.3 - Using the Common REST API Endpoints

Explains the usage of the Common APIs with some generic samples.

The following section specifies the common operations that are applicable to all the Protegrity REST APIs.

The Base URL for each API changes depending on the version of the API being used. The following list specifies the version that you must use when executing the common operations for each API.

  • Policy Management: v2
  • Encrypted Resilient Package: v1
  • Authentication and Token Management: v1

Common REST API Endpoints

The following list describes the common operations for the Protegrity REST APIs.

  • /version (GET) - Retrieves the service versions that are supported by the Protegrity REST APIs on the ESA.
  • /health (GET) - Retrieves the health information for the Protegrity REST APIs and identifies whether the corresponding service is running.
  • /doc (GET) - Retrieves the API specification document.
  • /log (GET) - Retrieves the current log level set for the corresponding REST API logs.
  • /log (POST) - Changes the log level for the REST API service at run time. The level set through this resource persists until the corresponding service is restarted. This log level overrides the log level defined in the configuration.

Retrieving the Supported Service Versions

This API retrieves the service versions that are supported by the corresponding REST API service on the ESA.

Base URL
https://{ESA IP address}/pty/<Version>/<API>
Path
/version
Method
GET

CURL request syntax

curl -H "Authorization: Bearer <TOKEN>" -X GET "https://{ESA IP address}/pty/<Version>/<API>/version"

In this command, <TOKEN> indicates the JWT token used for authenticating the API.

Alternatively, you can also store the JWT token in an environment variable named TOKEN, as shown in the following command.

curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://{ESA IP address}/pty/<Version>/<API>/version"

Authentication credentials

TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.

Sample CURL request

curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://10.10.101.43/pty/v1/rps/version"

This sample request uses the JWT token authentication.

Sample CURL response

{"version":"1.8.1","buildVersion":"1.8.1-alpha+605.gfe6b1d.master"}

Retrieving the Service Health Information

This API request retrieves the health information of the REST API service and identifies whether the service is running.

Base URL
https://{ESA IP address}/pty/<Version>/<API>
Path
/health
Method
GET

CURL request syntax

curl -H "Authorization: Bearer <TOKEN>" -X GET "https://{ESA IP address}/pty/<Version>/<API>/health"

In this command, <TOKEN> indicates the JWT token used for authenticating the API.

Alternatively, you can also store the JWT token in an environment variable named TOKEN, as shown in the following command.

curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://{ESA IP address}/pty/<Version>/<API>/health"

Authentication credentials

TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.

Sample CURL request

curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://10.10.101.43/pty/v2/pim/health"

This sample request uses the JWT token authentication.

Sample CURL response

{
  "isHealthy" : true
}

Where,

  • isHealthy: true - Indicates that the service is up and running.
  • isHealthy: false - Indicates that the service is down.
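The health response is easy to act on in a script. The following is a minimal sketch, assuming a POSIX shell; the canned JSON stands in for the output of the live /health call shown above.

```shell
# Minimal health check against a saved /health response.
# In practice, capture the response with the curl command shown above.
response='{ "isHealthy" : true }'
if printf '%s' "$response" | grep -q '"isHealthy"[[:space:]]*:[[:space:]]*true'; then
  status="up"
else
  status="down"
fi
echo "service is $status"
```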

Retrieving the API Specification Document

This API request retrieves the API specification document.

Base URL
https://{ESA IP address}/pty/<Version>/<API>
Path
/doc
Method
GET

CURL request syntax

curl -H "Authorization: Bearer <TOKEN>" -X GET "https://{ESA IP address}/pty/<Version>/<API>/doc"

In this command, <TOKEN> indicates the JWT token used for authenticating the API.

Alternatively, you can also store the JWT token in an environment variable named TOKEN, as shown in the following command.

curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://{ESA IP address}/pty/<Version>/<API>/doc"

Authentication credentials

TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.

Sample CURL requests

curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://10.10.101.43/pty/v1/rps/doc"
curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://10.10.101.43/pty/v1/rps/doc" -o "rps.yaml"

These sample requests use the JWT token authentication.

Sample CURL responses

The Encrypted Resilient Package API specification document is displayed as a response. If you specified the "-o" parameter in the CURL request, then the API specification is saved to the file specified in the command. You can use the Swagger UI to view the API specification document.

Retrieving the Log Level

This API request retrieves the current log level set for the REST API service logs.

Base URL
https://{ESA IP address}/pty/<Version>/<API>
Path
/log
Method
GET

CURL request syntax

curl -H "Authorization: Bearer <TOKEN>" -X GET "https://{ESA IP address}/pty/<Version>/<API>/log"

In this command, <TOKEN> indicates the JWT token used for authenticating the API.

Alternatively, you can also store the JWT token in an environment variable named TOKEN, as shown in the following command.

curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://{ESA IP address}/pty/<Version>/<API>/log"

Authentication credentials

TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.

Sample CURL request

curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://10.10.101.43/pty/v2/pim/log"

This sample request uses the JWT token authentication.

Sample CURL response

{
  "level": "INFO"
}

Setting Log Level for the REST API Service Log

This API request changes the REST API service log level during run-time. The level set through this resource persists until the corresponding service is restarted. This log level overrides the log level defined in the configuration.

Base URL
https://{ESA IP address}/pty/<Version>/<API>
Path
/log
Method
POST

CURL request syntax

curl -X POST "https://{ESA IP Address}/pty/<Version>/<API>/log" -H "Authorization: Bearer <TOKEN>" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"level\":\"log level\"}"

In this command, <TOKEN> indicates the JWT token used for authenticating the API.

Alternatively, you can also store the JWT token in an environment variable named TOKEN, as shown in the following command.

curl -X POST "https://{ESA IP Address}/pty/<Version>/<API>/log" -H "Authorization: Bearer ${TOKEN}" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"level\":\"log level\"}"

Authentication credentials

TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.

Request body elements

log level

Set the log level. The log level can be set to SEVERE, WARNING, INFO, CONFIG, FINE, FINER, or FINEST.

Sample CURL request

curl -X POST "https://{ESA IP Address}/pty/v1/rps/log" -H "Authorization: Bearer ${TOKEN}" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"level\":\"SEVERE\"}"

This sample request uses the JWT token authentication.

Sample response

The log level is set successfully.
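Because the endpoint accepts only the levels listed above, a client-side pre-check can catch typos before the POST is issued. A minimal sketch, assuming a POSIX shell:

```shell
# Validate a requested log level against the set the /log endpoint accepts.
level="SEVERE"
case "$level" in
  SEVERE|WARNING|INFO|CONFIG|FINE|FINER|FINEST) valid=yes ;;
  *) valid=no ;;
esac
# Only issue the POST when the level is one the service understands.
echo "level $level valid: $valid"
```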

3.1.2.4 - Using the Policy Management REST APIs

Explains the usage of the Policy Management APIs with some generic samples.

The user accessing these APIs must have the Security Officer permission for write access and the Security Viewer permission for read-only access.
For more information about the roles and permissions required, refer to the section Managing Roles.

The Policy Management API uses the v2 version.

If you want to perform common operations using the Policy Management REST API, then refer to the section Using the Common REST API Endpoints.

The following list provides section references that explain the usage of some of the Policy Management REST APIs. It includes an example workflow for working with the Policy Management functions. If you want to view all the Policy Management APIs, then use the /doc API to retrieve the API specification.

  • Policy Management initialization - Initializing the Policy Management
  • Creating an empty manual role that accepts all users - Creating a Manual Role
  • Creating data elements - Creating Data Elements
  • Creating a policy - Creating Policy
  • Adding roles and data elements to a policy - Adding Roles and Data Elements to a Policy
  • Creating a default data store - Creating a Default Data Store
  • Deploying the data store - Deploying the Data Store
  • Getting the deployment information - Getting the Deployment Information

Initializing the Policy Management

This section explains how you can initialize Policy Management to create the keys-related data and the policy repository.

Base URL
https://{ESA IP address or Hostname}/pty/v2
Authentication credentials
TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Path
/pim/init
Method
POST

Sample Request

curl -H "Authorization: Bearer ${TOKEN}" -X POST "https://{ESA IP address or Hostname}:443/pty/v2/pim/init" -H "accept: application/json"

This sample request uses the JWT token authentication.

Creating a Manual Role

This section explains how you can create a manual role that accepts all the users.

For more information about working with roles, refer to the section Working with Roles.

Base URL
https://{ESA IP address or Hostname}/pty/v2
Authentication credentials
TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Path
/pim/roles
Method
POST

Sample Request

curl -H "Authorization: Bearer ${TOKEN}" -X POST "https://{ESA IP address or Hostname}:443/pty/v2/pim/roles" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"name\":\"ROLE\",\"mode\":\"MANUAL\",\"allowAll\": true}"

This sample request uses the JWT token authentication.
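The inline escaped-quote bodies in these samples can be error-prone to edit. An alternative sketch (the file name role.json is illustrative) builds the same role-creation body in a file and passes it to curl with --data:

```shell
# Write the role-creation body to a file instead of escaping quotes inline.
# The field values match the sample request above.
cat > role.json <<'EOF'
{
  "name": "ROLE",
  "mode": "MANUAL",
  "allowAll": true
}
EOF
# The request then becomes:
#   curl -H "Authorization: Bearer ${TOKEN}" -X POST \
#     "https://<ESA IP address or Hostname>:443/pty/v2/pim/roles" \
#     -H "accept: application/json" -H "Content-Type: application/json" \
#     -d @role.json
cat role.json
```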

Creating Data Elements

This section explains how you can create data elements.

For more information about working with data elements, refer to the section Working with Data Elements.

Base URL
https://{ESA IP address or Hostname}/pty/v2
Authentication credentials
TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Path
/pim/dataelements
Method
POST

Sample Request

curl -H "Authorization: Bearer ${TOKEN}" -X POST "https://{ESA IP address or Hostname}:443/pty/v2/pim/dataelements" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"name\": \"DE_ALPHANUM\",\"description\": \"DE_ALPHANUM\",\"alphaNumericToken\":{\"tokenizer\":\"SLT_1_3\",\"fromLeft\": 0,\"fromRight\": 0,\"lengthPreserving\": true, \"allowShort\": \"YES\"}}"

This sample request uses the JWT token authentication.

Creating Policy

This section explains how you can create a policy.

For more information about working with policies, refer to the section Creating Policies.

Base URL
https://{ESA IP address or Hostname}/pty/v2
Authentication credentials
TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Path
/pim/policies
Method
POST

Sample Request

curl -H "Authorization: Bearer ${TOKEN}" -X POST "https://{ESA IP address or Hostname}:443/pty/v2/pim/policies" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"name\":\"POLICY\",\"description\": \"POLICY\", \"template\":{\"access\":{\"protect\":true,\"reProtect\":true,\"unProtect\":true},\"audit\":{\"success\":{\"protect\":false,\"reProtect\":false,\"unProtect\":false},\"failed\":{\"protect\":false,\"reProtect\":false,\"unProtect\":false}}}}"

This sample request uses the JWT token authentication.

Adding Roles and Data Elements to a Policy

This section explains how you can add roles and data elements to a policy.

For more information about adding roles and data elements to a policy, refer to the sections Adding Data Elements to Policy and Adding Roles to Policy respectively.

Base URL
https://{ESA IP address or Hostname}/pty/v2
Authentication credentials
TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Path
/pim/policies/1/rules
Method
POST

Sample Request

curl -H "Authorization: Bearer ${TOKEN}" -X POST "https://{ESA IP address or Hostname}:443/pty/v2/pim/policies/1/rules" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"role\":\"1\",\"dataElement\":\"1\",\"noAccessOperation\":\"EXCEPTION\",\"permission\":{\"access\":{\"protect\":true,\"reProtect\":true,\"unProtect\":true},\"audit\":{\"success\":{\"protect\":false,\"reProtect\":false,\"unProtect\":false},\"failed\":{\"protect\":false,\"reProtect\":false,\"unProtect\":false}}}}"

This sample request uses the JWT token authentication.

Creating a Default Data Store

This section explains how you can create a default data store.

For more information about working with data stores, refer to the section Creating a Data Store.

Base URL
https://{ESA IP address or Hostname}/pty/v2
Authentication credentials
TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Path
/pim/datastores
Method
POST

Sample Request

curl -H "Authorization: Bearer ${TOKEN}" -X POST "https://{ESA IP address or Hostname}:443/pty/v2/pim/datastores" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"name\":\"DS\",\"description\": \"DS\", \"default\":true}"

This sample request uses the JWT token authentication.

Deploying the Data Store

This section explains how you can deploy policies or trusted applications linked to a specific data store or multiple data stores.

For more information about deploying the Data Store, refer to the section Deploying Data Stores to Protectors.

Deploying a Specific Data Store

This section explains how you can deploy policies and trusted applications linked to a specific data store. The specifications provided for the specific data store are applied and become the end result.

Note: If you deploy an array with empty policies or trusted applications, or both, then the connected protectors contain empty definitions for these respective items.

Base URL
https://{ESA IP address or Hostname}/pty/v2
Authentication credentials
TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Path
/pim/datastores/{dataStoreUid}/deploy
Method
POST

Sample Request

curl -H "Authorization: Bearer ${TOKEN}" -X POST "https://{ESA IP address or Hostname}:443/pty/v2/pim/datastores/{dataStoreUid}/deploy" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"policies\":[\"1\"],\"applications\":[\"1\"]}"

This sample request uses the JWT token authentication.

Deploying Data Stores

This section explains how you can deploy multiple data stores, each of which can link policies, trusted applications, or both for the deployment.

Note: If you deploy a data store containing an array with empty policies or trusted applications, or both, then the connected protectors contain empty definitions for these respective items.

Base URL
https://{ESA IP address or Hostname}/pty/v2
Authentication credentials
TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Path
/pim/deploy
Method
POST

Sample Request

curl -H "Authorization: Bearer ${TOKEN}" -X POST "https://{ESA IP address}:443/pty/v2/pim/deploy" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"dataStores\":[{\"uid\":\"1\",\"policies\":[\"1\"],\"applications\":[\"1\"]},{\"uid\":\"2\",\"policies\":[\"2\"],\"applications\":[\"2\"]}]}"

This sample request uses the JWT token authentication.

Getting the Deployment Information

This section explains how you can check the complete deployment information. This service returns the list of the data stores with the connected policies and trusted applications.

Note: The result might contain data store information that is pending deployment after combining the Policy Management operations performed through the ESA Web UI and PIM API.

Base URL
https://{ESA IP address or Hostname}/pty/v2
Authentication credentials
TOKEN - Environment variable containing the JWT token.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Path
/pim/deploy
Method
GET

Sample Request

curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://{ESA IP address or Hostname}:443/pty/v2/pim/deploy" -H "accept: application/json"

This sample request uses the JWT token authentication.

3.1.2.5 - Using the Encrypted Resilient Package REST APIs

Explains the usage of the Encrypted Resilient Package APIs.

The Encrypted Resilient Package API is only used by the Immutable Resilient protectors.

Before you begin:

  • Ensure that you understand the concept of resilient protectors and the necessity of a resilient package.
    For more information on how the REST API is used to export the encrypted resilient package in an immutable policy deployment, refer to the section DevOps Approach for Application Protector.

  • Ensure that the RPS service is running on the ESA.

  • The user accessing this API must have the Export Resilient Package permission.
    For more information about the roles and permissions required, refer to the section Managing Roles.

The Encrypted Resilient Package API uses the v1 version.

If you want to perform common operations using the Encrypted Resilient Package API, then refer to the section Using the Common REST API Endpoints.

The following list provides a section reference to the Encrypted Resilient Package API.

  • Exporting the resilient package - Exporting Resilient Package

Exporting Resilient Package Using GET Method

This API request exports the resilient package that can be used with resilient protectors. You can use the Basic authentication, Certificate authentication, and JWT authentication for encrypting and exporting the resilient package.

Warning: Do not modify the package that has been exported using the RPS Service API. If you modify the exported package, then the package will get corrupted.

The resilient package that has been exported using the Encrypted Resilient Package API is not FIPS-compliant.

Base URL
https://<ESA IP address or Hostname>/pty/v1/rps
Path
/export
Method
GET
CURL request syntax
Export API
curl -H "Authorization: Bearer <TOKEN>" -X GET "https://<ESA IP address or Hostname>/pty/v1/rps/export/<fingerprint>?version=1&coreVersion=1" -H "Content-Type: application/json" -o rps.json
In this command, TOKEN indicates the JWT token used for authenticating the API.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.
Query Parameters
fingerprint
Specify the fingerprint of the Data Store Export Key. The fingerprint is used to identify which Data Store to export and which export key to use for protecting the resilient package. The user with the Security Officer permissions must share the fingerprint of the Export Key with the user who is executing this API.

version

Set the schema version of the exported resilient package that is supported by the specific protector.

coreVersion

Set the Core policy schema version that is supported by the specific protector.
Sample CURL request
Export API
curl -H "Authorization: Bearer ${TOKEN}" -X GET "https://<ESA IP address or Hostname>/pty/v1/rps/export/a7fdbc0cccc954e00920a4520787f0a08488db8e0f77f95aa534c5f80477c03a?version=1&coreVersion=1" -H "Content-Type: application/json" -o rps.json

This sample request uses the JWT token authentication.

Sample response
The rps.json file is exported using the public key associated with the specified fingerprint.

Protect the encrypted resilient package with standard file permissions to ensure that only the dedicated protectors can access the package.
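One illustrative baseline for the file-permission advice above is to restrict the exported package to its owning user; adjust ownership to the account that runs the protector.

```shell
# Restrict the exported package so only the owning user can read it.
# rps.json is the file produced by the -o option in the export request;
# touch stands in for the exported file in this sketch.
touch rps.json
chmod 600 rps.json
perms=$(ls -l rps.json | cut -c1-10)
echo "$perms"
```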

Exporting Resilient Package Using POST Method (Deprecated)

Note: The POST method of the Export API has been deprecated. A DevOps user can use this API with any public-private key pair of their choosing. Instead of the POST method, it is recommended to use the GET method for exporting a protected resilient package.
If you want to disable this API, contact Protegrity Support.

This API request exports the resilient package that can be used with resilient protectors. You can use the Basic authentication, Certificate authentication, and JWT authentication for encrypting and exporting the resilient package.

Warning: Do not modify the package that has been exported using the RPS Service API. If you modify the exported package, then the package will get corrupted.

The resilient package that has been exported using the Encrypted Resilient Package API is not FIPS-compliant.

Base URL
https://<ESA IP address or Hostname>/pty/v1/rps
Path
/export
Method
POST
CURL request syntax
Export API - KEK
curl -H "Authorization: Bearer <TOKEN>" -X POST "https://<ESA IP address or Hostname>/pty/v1/rps/export?version=1&coreVersion=1" -H "Content-Type: application/json" --data '{
    "kek": {
        "publicKey": {
            "label": "<Key_name>",
            "algorithm": "<RSA_Algorithm>",
            "value": "<Value of publickey>"
        }
    }
}' -o rps.json
In this command, TOKEN indicates the JWT token used for authenticating the API.
For more information about creating a JWT token, refer to the section Generating JWT for REST APIs.

Note: You can download the resilient package only from the IP address that is part of the allowed servers list connected to a Data Store. This is only applicable for the 10.0.x and 10.1.0 protectors.

Query parameters
version
Set the schema version of the exported resilient package that is supported by the specific protector.

coreVersion

Set the Core policy schema version that is supported by the specific protector.
Request body elements
Encryption Method
The kek encryption can be used to protect the exported file. The kek\publicKey element contains the following sub-elements:

  • label - Name of the publicKey.
  • algorithm - The RPS API supports the following algorithms: RSA_OAEP_SHA1, RSA_OAEP_SHA256, RSA_OAEP_SHA512, and RSA_PKCS1_v1_5.
  • value - Specify the value of the publicKey.
Sample CURL request
Export API - KEK
curl -H "Authorization: Bearer ${TOKEN}" -X POST "https://<ESA IP address or Hostname>/pty/v1/rps/export?version=1&coreVersion=1" -H "Content-Type: application/json" --data '{
    "kek": {
        "publicKey": {
            "label": "key_name",
            "algorithm": "RSA_OAEP_SHA256",
            "value": "-----BEGIN PUBLIC KEY-----MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA1eq9vH5Dq8pwPqOSqB0YdY6ehBRNWCgYhh9z1X093id+42eTRDHMOpLXRLhOMdgOeeyEsue1s5ZEOKY9j2TcaVTRwLhSMfacjugfiknnUESziUi9mt+XFnSgk7n4t5EF7fjvriOQvHCp24xCbtwKQlOT3x4zUs/REyJ8FXSrFEvrzbb/mEFfYhp2J6c90CKYqbDX6SFW8WjphDb/kgqg/KfT8AlsllAnci4CZ+7u0Iw7GsRvEvrVUCbBsXfB7InTst3hTc4A7iiY36kSEn78mXtfLjWiMpzEBxOteohmXKgSAynI7nI8c0ZhHSoZLUSJ2IQUi25ho8uxd/v3fedTTD91zRTxMJKw8XDrwjXllH7FGgsWBUenkO2lRlfIYBDctjv1MB+QJlNo+gOTGg8sJ1czBm20VQHHcyHpCKNu2gKzqWqSU6iGcwGXPCKY8/yEpNyPVFS/i7GAp10jO+QdOBskPviiLFN5kMh05ZGBpyNvfAQantwGv15Ip0RJ3LTQbKE62DAGNcdP6rizwm9SSt0WcG58OenBX5eB0gWBRrZI5s3EkhThYXyxbvFWObMWb/3jMsE+O22NvqAxWSasPR1zS1WBf25ush3v6BGBO4Frl5kBRrTCSSfAZBDha5VqXOqR1XIdQKf8wKn5DSScpMRuyf3ymRGQf915CC7zwp0CAwEAAQ==-----END PUBLIC KEY-----"
        }
    }
}' -o rps.json

This sample request uses the JWT token authentication.

Sample response
The rps.json file is exported.

Protect the encrypted resilient package with standard file permissions to ensure that only the dedicated protectors can access the package.

3.1.2.6 - Using the Authentication and Token Management REST APIs

Explains the usage of the Authentication and Token Management APIs with some generic samples.

The Authentication and Token Management API uses the v1 version.

If you want to perform common operations using the Authentication and Token REST API, then refer to the section Using the Common REST API Endpoints.

The following list provides section references, grouped by category, that explain the usage of some of the Authentication and Token REST APIs. It includes examples for working with the Authentication and Token functions. If you want to view all the Authentication and Token APIs, then use the /doc API to retrieve the API specification.

  • Token Management: Generate token, Refresh token
  • Roles and Permissions Management: List all permissions, List all roles, Update roles, Create new role, Delete roles
  • User Management: Create user endpoint, Fetch users, Update user endpoint, Fetch user by ID, Delete user endpoint, Update user password endpoint
  • Group Management: Fetch groups, Create group endpoint, Update group endpoint, Delete group endpoint
  • SAML SSO Configuration: List SAML providers, Create SAML provider endpoint, Get SAML provider, Update SAML provider endpoint, Delete SAML provider endpoint

Token Management

The following section lists the commonly used APIs to manage tokens.

Generate token

This API generates an access token for authenticating the APIs.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/login/token
Method
POST

Request Body

  • loginname: User name for authentication.
  • password: Password for authentication.

Result This API returns a JWT access token in the response header and the refresh token in the response body. You can use the refresh token in the Refresh token API to obtain new access tokens without logging in again.

Sample Request

curl -X 'POST' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/login/token' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'loginname=<User name>&password=<Password>!'

The login name and password are the defaults for the MVP.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": 0,
  "data": {
    "refreshToken": "eyJhbGciOiJIUzUxMiIsInR5cCcZQ",
    "expiresIn": 1800,
    "refreshExpiresIn": 1800
  },
  "messages": []
}

Response header

content-length: 832 
 content-type: application/json 
 date: Thu,16 Oct 2025 10:30:53 GMT 
 pty_access_jwt_token: eyJhbGciOiJSUzI1NiIsInR4YRUw 
 strict-transport-security: max-age=31536000; includeSubDomains 
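Because the access token arrives in the pty_access_jwt_token response header, scripts typically capture the headers (for example with curl -D) and extract the value. A minimal sketch, with canned text mirroring the sample header above standing in for a live capture:

```shell
# Extract the access token from saved response headers.
# In practice, capture headers with: curl -D headers.txt ...
headers='content-length: 832
content-type: application/json
pty_access_jwt_token: eyJhbGciOiJSUzI1NiIsInR4YRUw
strict-transport-security: max-age=31536000; includeSubDomains'
TOKEN=$(printf '%s\n' "$headers" | awk -F': ' '/^pty_access_jwt_token:/ {print $2}')
echo "$TOKEN"
```

The extracted value can then be reused in the Authorization: Bearer header of subsequent requests.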

Refresh token

This API refreshes an access token using the refresh token.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/login/token/refresh
Method
POST

Request Body

  • refreshToken: Refresh token for getting a new access token.

Result This API returns a new JWT access token in the response header and a new refresh token in the response body. You can use this refresh token to obtain new access tokens without logging in again.

Sample Request

curl -X 'POST' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/login/token/refresh' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "refreshToken": "eyJhbGciOiJIUzUxMiIsInR5cCINGFeZEf8hw"
}
'

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": 0,
  "data": {
    "refreshToken": "MSkVFZiRl3Jkwi0uJQ",
    "expiresIn": 1800,
    "refreshExpiresIn": 1799
  },
  "messages": []
}

Response header

content-length: 832 
 content-type: application/json 
 date: Thu,16 Oct 2025 10:36:28 GMT 
 pty_access_jwt_token: eyJhbGciOiJSUzI1Nim95VHqh00vHfr8ip9RhyO-4FcxQ 
 strict-transport-security: max-age=31536000; includeSubDomains

Roles and Permissions Management

The following section lists the commonly used APIs for managing user roles and permissions.

List all permissions

This API lists all the permissions that are applicable to the user.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/permissions
Method
GET

Request Body No parameters.

Result This API returns a list of all the permissions available for the logged-in user.

Sample Request

curl -X 'GET' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/permissions' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

[
  {
    "id": "faf70fd8-1c6d-4209-b64a-e9db37bbf58b",
    "name": "keycloak_viewer",
    "description": "Keycloak Viewer",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "b0ae18f9-fb85-4b1e-86fe-51b9f0fd8e53",
    "name": "cli_admin",
    "description": "Command Line Admin",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "cfb4d416-85f2-4b5b-a7d9-95bedfb5c57f",
    "name": "can_access_ap",
    "description": "Permission to access application",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "f03fcfcd-0aa9-4fcb-862f-01227f093516",
    "name": "can_create_token",
    "description": "Can Create JWT Token",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "ab434af4-8a49-430b-8306-9089744cd37c",
    "name": "keycloak_manager",
    "description": "Keycloak Manager",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "46142cfb-be47-4936-b399-7afcb4eea2cc",
    "name": "web_viewer",
    "description": "Appliance web viewer",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "1ced685a-7a02-4820-84bd-1dafa6273e13",
    "name": "uma_protection",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "f0a84500-e19a-4be2-a712-b01cc94b448d",
    "name": "can_proxy_ap",
    "description": "Proxy for a an application",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "efe391ed-032e-494e-ac62-20368cb4e41e",
    "name": "can_export_certificates",
    "description": "Can download protector certificates",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "43e7648d-49a8-4813-bf86-3d5bf31bfe7c",
    "name": "Shell_Accounts",
    "description": "Accounts that have cli access",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "0bb2cd71-01c4-4634-b77f-2d65dcdb5e8e",
    "name": "can_login_web",
    "description": "Web Login Permission",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "ce93275f-5cd2-4905-8237-89f4dbf2cc48",
    "name": "web_admin",
    "description": "Appliance web manager",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "e70d3988-4dfe-4411-b6df-c2181eb16b03",
    "name": "security_officer",
    "description": "Security Officer",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "aee1f91e-08fa-4c14-94ad-1d3b1cf08d4b",
    "name": "nfa_insight",
    "description": "Access to Insights and Insight Dashboard",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  },
  {
    "id": "28f7ad8b-1d7c-49e0-95c8-6bde2a4bfba5",
    "name": "can_fetch_package",
    "description": "Can download resilient packages",
    "composite": false,
    "clientRole": true,
    "containerId": "a09679da-b559-4856-8d29-6d39aa6485a1"
  }
]

List all roles

This API lists all the roles applicable to the user.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/roles
Method
GET

Request Body No parameters.

Result This API returns a list of all the roles available for the logged-in user.

Sample Request

curl -X 'GET' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/roles' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.
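For scripted access, the same call can be issued from Python's standard library. This is a minimal sketch: the hostname and token are placeholders taken from the curl sample, not values your deployment will necessarily use.

```python
import json
import urllib.request

BASE_URL = "https://nfa.aws.protegrity.com/pty/v1"  # placeholder hostname
ACCESS_TOKEN = "<Access_token>"  # obtain via the Generate token API

# Build the GET request with the same headers as the curl sample.
req = urllib.request.Request(
    f"{BASE_URL}/auth/roles",
    headers={
        "accept": "application/json",
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    },
    method="GET",
)

# Sending the request would return the JSON list of roles:
# with urllib.request.urlopen(req) as resp:
#     roles = json.load(resp)
```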

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

[
  {
    "name": "directory_admin",
    "description": "Directory Administrator",
    "composite": true,
    "permissions": [
      "keycloak_manager"
    ]
  },
  {
    "name": "policy_proxy_user",
    "description": "Policy Proxy User",
    "composite": true,
    "permissions": [
      "can_proxy_ap"
    ]
  },
  {
    "name": "security_viewer",
    "description": "Security Administrator Viewer",
    "composite": true,
    "permissions": [
      "keycloak_viewer",
      "can_login_web"
    ]
  },
  {
    "name": "policy_user",
    "description": "Policy User",
    "composite": true,
    "permissions": [
      "can_access_ap"
    ]
  },
  {
    "name": "security_admin",
    "description": "Security Administrator",
    "composite": true,
    "permissions": [
      "cli_admin",
      "can_create_token",
      "web_admin",
      "keycloak_manager",
      "security_officer",
      "eclipse_insight",
      "can_fetch_package",
      "can_export_certificates"
    ]
  }
]
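Because each composite role carries its own permission list, the response can be inverted to answer "which roles grant a given permission". A small illustrative sketch over an excerpt of the sample response above (field names as documented; the helper itself is not part of the API):

```python
# Excerpt of the documented sample response.
roles = [
    {"name": "directory_admin", "composite": True,
     "permissions": ["keycloak_manager"]},
    {"name": "security_viewer", "composite": True,
     "permissions": ["keycloak_viewer", "can_login_web"]},
    {"name": "security_admin", "composite": True,
     "permissions": ["cli_admin", "can_create_token", "web_admin",
                     "keycloak_manager", "security_officer", "eclipse_insight",
                     "can_fetch_package", "can_export_certificates"]},
]

def roles_with_permission(roles, permission):
    """Names of roles whose permission list contains the given permission."""
    return sorted(r["name"] for r in roles if permission in r.get("permissions", []))
```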

Update role

This API enables you to update an existing role and its permissions.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/roles
Method
PUT

Request Body

  • name: Role name.
  • description: Description of the role.
  • permissions: List of permissions that need to be updated for the existing role.

Result This API updates the existing role and its permissions.

Sample Request

curl -X 'PUT' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/roles' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "admin",
  "description": "Administrator role",
  "permissions": [
    "perm1",
    "perm2"
  ]
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "role_name": "admin",
  "status": "updated"
}

Create role

This API enables you to create a role.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/roles
Method
POST

Request Body

  • name: Role name.
  • description: Description of the role.
  • permissions: List of permissions that need to be assigned to the new role.

Result This API creates a role with the requested permissions.

Sample Request

curl -X 'POST' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/roles' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "admin",
  "description": "Administrator role",
  "permissions": [
    "perm1",
    "perm2"
  ]
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "role_name": "admin",
  "status": "created"
}

Delete role

This API enables you to delete a role.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/roles
Method
DELETE

Request Body

  • name: Role name.

Result This API deletes the specific role.

Sample Request

curl -X 'DELETE' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/roles' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "admin"
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "role_name": "admin",
  "status": "deleted"
}

User Management

The following section lists the commonly used APIs for managing users.

Create user endpoint

This API enables you to create a user.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/users
Method
POST

Request Body

  • username: Name of the user. This is a mandatory field.
  • email: Email of the user.
  • firstName: First name of the user.
  • lastName: Last name of the user.
  • enabled: Enable the user.
  • password: Password for the user.
  • roles: Roles to be assigned to the user.
  • groups: Groups in which the user is included.

Result This API creates a user with a unique user ID.

Sample Request

curl -X 'POST' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/users' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "username": "alpha",
  "email": "alpha@example.com",
  "firstName": "Alpha",
  "lastName": "User",
  "enabled": true,
  "password": "StrongPassword123!",
  "roles": [
    "directory_admin"
  ]
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "user_id": "7636708c-c714-4e8e-a3e6-f5fc6c49f9c0",
  "username": "alpha"
}

Fetch users

This API enables you to retrieve the user details.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/users
Method
GET

Request Body No parameters.

Query Parameters

  • max: Maximum number of entries that can be retrieved.
  • first: Number of entries that can be skipped from the start of the data. For example, if you specify the value as 4, then the first four entries will be skipped from the result.
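Together, max and first implement offset pagination: page n (counting from zero) starts at offset n × max. A minimal helper sketch; the parameter names follow this API, but the function itself is illustrative:

```python
def page_params(page, page_size=100):
    """Query parameters for fetching page `page` (0-based) of the user list."""
    return {"max": page_size, "first": page * page_size}

# page 0 covers entries 0..99; page 2 with page_size 50 covers entries 100..149
```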

Result This API retrieves a list of users.

Sample Request

curl -X 'GET' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/users?max=100&first=0' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

[
  {
    "username": "admin",
    "email": "admin@example.com",
    "firstName": "Admin",
    "lastName": "User",
    "enabled": true,
    "id": "71c573a0-7412-475d-be67-4bf6fdf71404",
    "createdTimestamp": null,
    "attributes": null,
    "emailVerified": true
  },
  {
    "username": "alpha",
    "email": "alpha@example.com",
    "firstName": "Alpha",
    "lastName": "User",
    "enabled": true,
    "id": "7636708c-c714-4e8e-a3e6-f5fc6c49f9c0",
    "createdTimestamp": 1760643896108,
    "attributes": null,
    "emailVerified": false
  },
  {
    "username": "dfuser",
    "email": null,
    "firstName": "se",
    "lastName": "se",
    "enabled": false,
    "id": "12770ab4-d3a0-4243-8018-5bb1fb0d06d7",
    "createdTimestamp": 1760425034931,
    "attributes": null,
    "emailVerified": false
  },
  {
    "username": "fds",
    "email": null,
    "firstName": "dsf",
    "lastName": "fs",
    "enabled": false,
    "id": "a1251ca4-664d-469a-b1c1-539fe8c73a9d",
    "createdTimestamp": 1760425052196,
    "attributes": null,
    "emailVerified": false
  },
  {
    "username": "shiva",
    "email": "shiva.v@protegrity.com",
    "firstName": "shiva",
    "lastName": "v",
    "enabled": true,
    "id": "0743b449-c050-4e49-ba95-974cd2069a84",
    "createdTimestamp": 1760433609089,
    "attributes": null,
    "emailVerified": false
  },
  {
    "username": "testuser1",
    "email": null,
    "firstName": "t",
    "lastName": "tes",
    "enabled": true,
    "id": "948c1484-45d7-4df9-aea4-9534ca2d1923",
    "createdTimestamp": 1760424968139,
    "attributes": null,
    "emailVerified": false
  },
  {
    "username": "testuser2",
    "email": null,
    "firstName": "sdf",
    "lastName": "df",
    "enabled": true,
    "id": "d4961126-d324-4166-97e6-2fac1f40566a",
    "createdTimestamp": 1760425012482,
    "attributes": null,
    "emailVerified": false
  }
]

Update user endpoint

This API enables you to update the details of a user.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/users
Method
PUT

Request Body

  • id: ID of the user. This is a mandatory field.
  • email: Email of the user.
  • firstName: First name of the user.
  • lastName: Last name of the user.
  • enabled: Enable the user.
  • password: Password for the user.
  • roles: Roles to be assigned to the user.
  • groups: Groups in which the user is included.

Result This API updates the user details.

Sample Request

curl -X 'PUT' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/users' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "id": "7636708c-c714-4e8e-a3e6-f5fc6c49f9c0",
  "email": "alpha@example.com",
  "firstName": "lpha",
  "lastName": "User",
  "enabled": true,
  "roles": [
    "directory_admin"
  ]
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": "updated",
  "userId": "7636708c-c714-4e8e-a3e6-f5fc6c49f9c0"
}

Fetch user by ID

This API enables you to fetch the details of a specific user by specifying the user ID.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/users/{user_id}
Method
GET

Request Body No parameters.

Path Parameters

  • user_id: Unique ID of the user. This is a mandatory field.

Result This API retrieves the details of the specific user.

Sample Request

curl -X 'GET' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/users/7636708c-c714-4e8e-a3e6-f5fc6c49f9c0' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "id": "7636708c-c714-4e8e-a3e6-f5fc6c49f9c0",
  "username": "alpha",
  "firstName": "lpha",
  "lastName": "User",
  "email": "alpha@example.com",
  "emailVerified": false,
  "enabled": true,
  "createdTimestamp": 1760643896108,
  "groups": [],
  "roles": [
    "directory_admin"
  ]
}

Delete user endpoint

This API enables you to delete a user.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/users/{user_id}
Method
DELETE

Request Body No parameters.

Path Parameters

  • user_id: Unique ID of the user. This is a mandatory field.

Result This API deletes the specified user.

Sample Request

curl -X 'DELETE' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/users/7636708c-c714-4e8e-a3e6-f5fc6c49f9c0' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": "deleted",
  "user_id": "7636708c-c714-4e8e-a3e6-f5fc6c49f9c0"
}

Update user password endpoint

This API enables you to update the password of an existing user.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/users/password
Method
PUT

Request Body

  • userId: Unique ID of the user. This is a mandatory field.
  • newPassword: New password for the user.
  • temporary: Specifies whether the password is temporary and must be changed on next login.

Result This API updates the password of the specified user.

Sample Request

curl -X 'PUT' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/users/password' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <Access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "userId": "7636708c-c714-4e8e-a3e6-f5fc6c49f9c0",
  "newPassword": "NewStrongPassword123!",
  "temporary": false
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": "password_updated",
  "userId": "7636708c-c714-4e8e-a3e6-f5fc6c49f9c0",
  "temporary": false
}

Group Management

The following section lists the commonly used APIs for managing groups.

Fetch groups

This API enables you to retrieve a list of all the groups.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/groups
Method
GET

Request Body No parameters.

Query Parameters

  • max: Maximum number of entries that can be retrieved.
  • first: Number of entries that can be skipped from the start of the data. For example, if you specify the value as 4, then the first four entries will be skipped from the result.

Result This API retrieves a list of the available groups.

Sample Request

curl -X 'GET' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/groups?max=100&first=0' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

[
  {
    "id": "aee4c370-4e97-4b55-a072-0840fe83a2aa",
    "name": "developers",
    "description": "",
    "members": [
      "testuser1",
      "testuser2"
    ],
    "roles": []
  }
]

Create group endpoint

This API enables you to create a group.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/groups
Method
POST

Request Body

  • name: Name of the group. This is a mandatory field.
  • description: Description of the group.
  • members: List of user names that need to be added as members.
  • roles: List of role names that need to be assigned to the group.

Result This API creates a group with the specified members and roles.

Sample Request

curl -X 'POST' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/groups' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "name": "developers",
  "description": "",
  "members": [
    "testuser1",
    "testuser2"
  ],
  "roles": [
    "service_admin",
    "user_manager"
  ]
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "group_id": "aee4c370-4e97-4b55-a072-0840fe83a2aa",
  "name": "developers",
  "status": "created",
  "members_added": 2,
  "roles_assigned": 2
}

Update group endpoint

This API enables you to update an existing group.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/groups
Method
PUT

Request Body

  • group_id: Unique ID of the group. This is a mandatory field.
  • members: Members added to the group.
  • roles: Roles assigned to the group.

Result This API updates the members and roles of the existing group.

Sample Request

curl -X 'PUT' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/groups' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "group_id": "group-uuid",
  "members": [
    "testuser2"
  ],
  "roles": [
    "service_admin"
  ]
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": "updated",
  "group_id": "group-uuid",
  "members_updated": 1,
  "roles_updated": 1
}

Delete group endpoint

This API enables you to delete an existing group.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/groups/{group_id}
Method
DELETE

Request Body No parameters.

Path Parameters

  • group_id: ID of the group that needs to be deleted. This is a mandatory field.

Result This API deletes the specified group.

Sample Request

curl -X 'DELETE' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/groups/aee4c370-4e97-4b55-a072-0840fe83a2aa' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": "deleted",
  "group_id": "aee4c370-4e97-4b55-a072-0840fe83a2aa"
}

SAML SSO Configuration

The following section lists the commonly used APIs for managing SAML providers.

List SAML providers

This API enables you to list the existing SAML providers.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/saml/providers
Method
GET

Request Body No parameters

Result This API retrieves a list of the existing SAML providers.

Sample Request

curl -X 'GET' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/saml/providers' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

[
{
  "alias": "azuread",
  "displayName": "Protegrity SAML SSO  (Azure AD IDP)",
  "providerId": "saml",
  "enabled": false,
  "config": {
    "postBindingLogout": "false",
    "singleLogoutServiceUrl": "https://login.microsoftonline.com/c3ebfd9e-ec7b-4ab3-a13f-915e941a2785/saml2",
    "postBindingResponse": "true",
    "backchannelSupported": "false",
    "caseSensitiveOriginalUsername": "false",
    "encryptionAlgorithm": "RSA-OAEP",
    "xmlSigKeyInfoKeyNameTransformer": "KEY_ID",
    "idpEntityId": "https://sts.windows.net/c3ebfd9e-ec7b-4ab3-a13f-915e941a2785/",
    "useMetadataDescriptorUrl": "false",
    "loginHint": "false",
    "allowCreate": "true",
    "enabledFromMetadata": "true",
    "syncMode": "LEGACY",
    "authnContextComparisonType": "exact",
    "singleSignOnServiceUrl": "https://login.microsoftonline.com/c3ebfd9e-ec7b-4ab3-a13f-915e941a2785/saml2",
    "wantAuthnRequestsSigned": "true",
    "allowedClockSkew": "0",
    "artifactBindingResponse": "false",
    "validateSignature": "true",
    "nameIDPolicyFormat": "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
    "entityId": "https://nfa.aws.protegrity.com/mysamlapp/saml/metadata",
    "signSpMetadata": "true",
    "wantAssertionsEncrypted": "false",
    "signatureAlgorithm": "RSA_SHA256",
    "sendClientIdOnLogout": "false",
    "wantAssertionsSigned": "false",
    "metadataDescriptorUrl": "https://login.microsoftonline.com/c3ebfd9e-ec7b-4ab3-a13f-915e941a2785/federationmetadata/2007-06/federationmetadata.xml?appid=967110c7-a06b-432e-ad40-47859837a76c",
    "sendIdTokenOnLogout": "true",
    "postBindingAuthnRequest": "true",
    "forceAuthn": "false",
    "attributeConsumingServiceIndex": "0",
    "addExtensionsElementWithKeyInfo": "false",
    "principalType": "SUBJECT"
  }
},
{
  "alias": "azured",
  "displayName": "Protegrity 2 SAML SSO  (Azure AD IDP)",
  "providerId": "saml",
  "enabled": true,
  "config": {
    "postBindingLogout": "false",
    "singleLogoutServiceUrl": "https://login.microsoftonline.com/c3ebfd9e-ec7b-4ab3-a13f-915e941a2785/saml2",
    "postBindingResponse": "true",
    "backchannelSupported": "false",
    "caseSensitiveOriginalUsername": "false",
    "xmlSigKeyInfoKeyNameTransformer": "KEY_ID",
    "idpEntityId": "https://sts.windows.net/c3ebfd9e-ec7b-4ab3-a13f-915e941a2785/",
    "useMetadataDescriptorUrl": "false",
    "loginHint": "false",
    "allowCreate": "true",
    "enabledFromMetadata": "true",
    "syncMode": "LEGACY",
    "authnContextComparisonType": "exact",
    "singleSignOnServiceUrl": "https://login.microsoftonline.com/c3ebfd9e-ec7b-4ab3-a13f-915e941a2785/saml2",
    "wantAuthnRequestsSigned": "true",
    "allowedClockSkew": "0",
    "guiOrder": "0",
    "artifactBindingResponse": "false",
    "validateSignature": "true",
    "signingCertificate": "MIIC8DCCAdigAwIBAgIQf++2tyNO+YtIpa4MDh1hiQcoVX",
    "nameIDPolicyFormat": "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
    "entityId": "https://nfa.aws.protegrity.com/mysamlapp/saml/metadata",
    "signSpMetadata": "true",
    "wantAssertionsEncrypted": "false",
    "signatureAlgorithm": "RSA_SHA256",
    "sendClientIdOnLogout": "false",
    "wantAssertionsSigned": "false",
    "metadataDescriptorUrl": "https://login.microsoftonline.com/c3ebfd9e-ec7b-4ab3-a13f-915e941a2785/federationmetadata/2007-06/federationmetadata.xml?appid=967110c7-a06b-432e-ad40-47859837a76c",
    "sendIdTokenOnLogout": "true",
    "postBindingAuthnRequest": "true",
    "forceAuthn": "false",
    "attributeConsumingServiceIndex": "0",
    "addExtensionsElementWithKeyInfo": "false",
    "principalType": "SUBJECT"
  }
}
]

Create SAML provider endpoint

This API enables you to create a SAML provider configuration.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/saml/providers
Method
POST

Request Body

  • alias: Unique alias for the SAML provider. This is a mandatory field.
  • displayName: Display name for the SAML provider that will appear on the login page. This is a mandatory field.
  • configType: Configuration type, either metadata URL or metadata file content. This is a mandatory field.
  • metadataUrl: URL to fetch the SAML metadata from the identity provider. For example, https://login.microsoftonline.com/tenant-id/federationmetadata/2007-06/federationmetadata.xml.
  • metadataFileContent: SAML metadata XML content as a string. For example, <?xml version=\"1.0\"?>...</EntityDescriptor>.
  • signingCertificate: X.509 certificate for signing SAML requests. Use the PEM format without the headers.
  • nameIdPolicyFormat: NameID policy format for SAML authentication. For example, urn:oasis:names:tc:SAML:2.0:nameid-format:persistent.
  • forceAuthn: Force re-authentication of the user even if the user is already authenticated.
  • validateSignature: Validate the SAML response and assertion signatures.
  • wantAssertionsSigned: Require the SAML assertions to be signed.
  • wantAssertionsEncrypted: Require the SAML assertions to be encrypted.
  • signatureAlgorithm: Signature algorithm for SAML requests. For example, RSA_SHA256.
  • attributeMapping: Mapping of SAML attributes to user attributes.
  • enabled: Enable or disable the SAML provider.

For details of each parameter, refer to the documentation for the corresponding SAML provider.
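configType selects which metadata source the server uses. The hypothetical helper below sketches one way to build the request body, including only the metadata field named by configType (the documented sample request passes both fields, with configType choosing between them):

```python
def saml_provider_payload(alias, display_name, config_type, metadata, **options):
    """Build a create-provider request body; `metadata` fills the field named by config_type."""
    if config_type not in ("metadataUrl", "metadataFileContent"):
        raise ValueError("configType must be 'metadataUrl' or 'metadataFileContent'")
    payload = {
        "alias": alias,
        "displayName": display_name,
        "configType": config_type,
        config_type: metadata,  # only the selected metadata field is sent
    }
    payload.update(options)  # e.g. enabled, signatureAlgorithm, attributeMapping
    return payload
```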

Result This API enables you to add a SAML provider.

Sample Request

curl -X 'POST' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/saml/providers' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "alias": "test-azure-ad-saml",
  "displayName": "Test Azure AD SAML",
  "configType": "metadataUrl",
  "metadataUrl": "https://login.microsoftonline.com/tenant-id/federationmetadata/2007-06/federationmetadata.xml",
  "metadataFileContent": "<?xml version=\"1.0\"?>...</EntityDescriptor>",
  "signingCertificate": "MIIDXTCCAkWgAwIBAgIJAL...",
  "nameIdPolicyFormat": "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent",
  "forceAuthn": false,
  "validateSignature": true,
  "wantAssertionsSigned": true,
  "wantAssertionsEncrypted": false,
  "signatureAlgorithm": "RSA_SHA256",
  "attributeMapping": {
    "email": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
    "firstName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname",
    "lastName": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"
  },
  "enabled": true
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": "created",
  "alias": "test-azure-ad-saml",
  "configType": "metadataUrl",
  "message": "SAML provider created successfully from metadata"
}

Get SAML provider

This API enables you to retrieve the details of a specific SAML provider.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/saml/providers/{alias}
Method
GET

Request Body No parameters.

Path Parameters

  • alias: Alias of the SAML provider. This is a mandatory field.

Result This API retrieves the details about the specific SAML provider.

Sample Request

curl -X 'GET' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/saml/providers/azure-ad-saml' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "alias": "azure-ad-saml",
  "displayName": "Azure AD SAML",
  "providerId": "saml",
  "enabled": true,
  "config": {
    "additionalProp1": {}
  }
}

Update SAML provider endpoint

This API enables you to update the configuration of an existing SAML provider.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/saml/providers/{alias}
Method
PUT

Request Body The request body accepts the same fields as the Create SAML provider endpoint. The alias is specified as a path parameter instead of in the body.

Path Parameters

  • alias: Alias of the SAML provider that you want to update. This is a mandatory field.

Result This API updates the existing SAML provider.

Sample Request

curl -X 'PUT' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/saml/providers/azure-ad-saml' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <access_token>' \
  -H 'Content-Type: application/json' \
  -d '{
  "displayName": "Azure AD SAML Updated",
  "configType": "metadataUrl",
  "metadataUrl": "https://login.microsoftonline.com/new-tenant-id/federationmetadata/2007-06/federationmetadata.xml",
  "metadataFileContent": "string",
  "signingCertificate": "string",
  "nameIdPolicyFormat": "string",
  "forceAuthn": true,
  "validateSignature": true,
  "wantAssertionsSigned": true,
  "wantAssertionsEncrypted": true,
  "signatureAlgorithm": "string",
  "attributeMapping": {
    "additionalProp1": "string",
    "additionalProp2": "string",
    "additionalProp3": "string"
  },
  "enabled": true
}'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": "updated",
  "alias": "azure-ad-saml"
}

Delete SAML provider endpoint

This API enables you to delete the configuration of an existing SAML provider.

Base URL
https://{NFA IP address or Hostname}/pty/v1
Path
/auth/saml/providers/{alias}
Method
DELETE

Request Body No parameters.

Path Parameters

  • alias: Alias of the SAML provider that you want to delete. This is a mandatory field.

Result This API deletes the SAML provider.

Sample Request

curl -X 'DELETE' \
  'https://nfa.aws.protegrity.com/pty/v1/auth/saml/providers/azure-ad-saml' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <access_token>'

This sample request uses the access token for authentication.
For more information about generating the access token, refer to the section Generate token.

Sample Response

The following response appears for the status code 200, if the API is invoked successfully.

Response body

{
  "status": "deleted",
  "alias": "azure-ad-saml"
}

3.1.3 - Protegrity Command Line Interface (CLI) Reference

Overview of the Protegrity CLI.

The Protegrity CLI includes the following CLIs:

  • Administrator CLI: The Administrator CLI is used to perform administrative tasks for the NFA.
  • Policy Management CLI: The Policy Management CLI is used to create or manage policies. The CLI performs the same function that can be performed using the Policy Management REST APIs.

3.1.3.1 - Administrator Command Line Interface (CLI) Reference

Administrator CLI documentation.

Main Admin Command

The following command shows how to access help for the admin commands.

user@server$ admin --help
Usage: admin [OPTIONS] COMMAND [ARGS]...

  Users, Roles, Permissions, Groups, SAML and Azure AD management commands.

Options:
  --help  Show this message and exit.

Commands:
  create  Create a resource.
  delete  Delete a resource.
  get     Display one resource.
  list    List resources.
  set     Update fields of a resource.
  test    Test various configurations and connections.

Create Commands

The following section lists the create commands.

Main Create Command

The following command shows how to access help for the create command.

user@server$ admin create --help
Usage: admin create [OPTIONS] COMMAND [ARGS]...

  Create a resource.

Options:
  --help  Show this message and exit.

Commands:
  entra-id               Create Entra ID configuration.
  entra-id-import-users  Import Entra ID users with role assignments.
  groups                 Create a new group.
  roles                  Create a new role.
  saml-mappers           Create an attribute mapper for a SAML provider.
  saml-providers         Create a new SAML SSO provider.
  users                  Create a new user.

Create Entra-id

The following command shows how to access help for the create entra-id command. It also provides examples on how to create an Entra ID configuration.

user@server$ admin create entra-id --help
Usage: admin create entra-id [OPTIONS]

  Create Entra ID configuration.

  Required Entra ID Setup:
  1. Register an application in Entra ID
  2. Grant Microsoft Graph API permissions:
     - User.Read.All (Application)
     - Group.Read.All (Application) - if importing groups
  3. Create a client secret for the application
  4. Note the Tenant ID, Application (Client) ID, and Client Secret

  Examples:
      admin create entra-id --tenant-id "12345678-1234-1234-1234-123456789012" --client-id "87654321-4321-4321-4321-210987654321" --client-secret "your-secret-here"

Options:
  -t, --tenant-id TEXT      Entra ID Tenant ID  [required]
  -c, --client-id TEXT      Entra ID Application (Client) ID  [required]
  -s, --client-secret TEXT  Entra ID Application Client Secret  [required]
  --enabled / --disabled    Enable/disable configuration
  --help                    Show this message and exit.

Create Entra-id-import-users

The following command shows how to access help for the create entra-id-import-users command. It also provides examples on how to import Entra ID users.

user@server$ admin create entra-id-import-users --help
Usage: admin create entra-id-import-users [OPTIONS]

  Import Entra ID users with role assignments.

  Import users from Entra ID into the application with role assignments.
  Users must be provided via JSON data.

  JSON Format:
  {
      "users": [
          {
              "userPrincipalName": "john.doe@company.com",
              "email": "john.doe@company.com",
              "firstName": "John",
              "lastName": "Doe",
              "roles": ["admin", "user"]
          }
      ],
      "dryRun": false
  }

  Examples:
      # Direct JSON input
      admin create entra-id-import-users --json-data '{"users":[{"userPrincipalName":"john@company.com","email":"john@company.com","firstName":"John","lastName":"Doe","roles":["user"]}]}'

      # Dry run with JSON
      admin create entra-id-import-users --json-data '{"users":[...]}' --dry-run

Options:
  --dry-run             Validate import without creating users
  -j, --json-data TEXT  JSON string with users data to import directly
                        [required]
  --help                Show this message and exit.
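
The JSON payload documented above can be assembled programmatically before being passed to --json-data. The following is a minimal sketch (assuming Python 3); it only builds and prints the command line, it does not invoke the admin CLI:

```python
import json
import shlex

# Build the payload in the JSON format documented by
# `admin create entra-id-import-users --help`.
payload = {
    "users": [
        {
            "userPrincipalName": "john.doe@company.com",
            "email": "john.doe@company.com",
            "firstName": "John",
            "lastName": "Doe",
            "roles": ["admin", "user"],
        }
    ],
    "dryRun": False,
}

json_data = json.dumps(payload)

# Quote the JSON so it survives the shell as a single --json-data argument.
command = "admin create entra-id-import-users --json-data " + shlex.quote(json_data)
print(command)
```

Serializing with json.dumps avoids the quoting mistakes that are easy to make when writing the JSON string by hand inside shell quotes.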

Create Groups

The following command shows how to access help for the create groups command. It also provides examples on how to create a group.

user@server$ admin create groups --help
Usage: admin create groups [OPTIONS]

  Create a new group.

  Examples:
      admin create groups --name developers --description "Development team"
      admin create groups --name admins --members "john,jane" --roles "admin,user_manager"
      admin create groups --name operators --description "System operators" --members "user1,user2" --roles "operator"

Options:
  -n, --name TEXT         Group name  [required]
  -d, --description TEXT  Group description
  -m, --members TEXT      Comma-separated list of usernames to add as members
  -r, --roles TEXT        Comma-separated list of role names to assign to group
  --help                  Show this message and exit.

Create Roles

The following command shows how to access help for the create roles command. It also provides examples on how to create a role.

user@server$ admin create roles --help
Usage: admin create roles [OPTIONS]

  Create a new role.

  Examples:
      admin create roles --name manager --description "Manager role"
      admin create roles --name admin --permissions "security_officer"
      admin create roles --name operator --description "System operator" --permissions "security_officer"

Options:
  -n, --name TEXT         Role name  [required]
  -d, --description TEXT  Role description
  -p, --permissions TEXT  Comma-separated list of permission names
  --help                  Show this message and exit.

Create Saml-mappers

The following command shows how to access help for the create saml-mappers command. It also provides examples on how to create an attribute mapper for a SAML provider.

user@server$ admin create saml-mappers --help
Usage: admin create saml-mappers [OPTIONS] PROVIDER_ALIAS

  Create an attribute mapper for a SAML provider.

  Examples:
      admin create saml-mappers azure-ad --name email-mapper --mapper-type saml-user-attribute-idp-mapper --attribute-name "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" --user-attribute email
      admin create saml-mappers azure-ad --name role-mapper --mapper-type saml-role-idp-mapper --attribute-value admin --role admin

Options:
  -n, --name TEXT                 Name of the attribute mapper  [required]
  --mapper-type [saml-user-attribute-idp-mapper|saml-role-idp-mapper|saml-advanced-group-idp-mapper|saml-username-idp-mapper]
                                  Type of mapper  [required]
  --sync-mode TEXT                Sync mode for the mapper
  --attribute-name TEXT           SAML attribute name to map from
  --user-attribute TEXT           User attribute to map to
  --attribute-value TEXT          SAML attribute value for role mapping
  --role TEXT                     Role to assign
  --group TEXT                    Group to assign users to
  --template TEXT                 Username template
  --attributes TEXT               Key-value pairs for attribute mapping (JSON
                                  format)
  --help                          Show this message and exit.

Create Saml-providers

The following command shows how to access help for the create saml-providers command. It also provides examples on how to create a new SAML SSO provider.

user@server$ admin create saml-providers --help 
Usage: admin create saml-providers [OPTIONS]

  Create a new SAML SSO provider.

  Examples:
      admin create saml-providers --alias azure-ad --display-name "Azure AD" --config-type metadataUrl --service-provider-entity-id "https://your-keycloak.com/realms/your-realm" --metadata-url "https://..."
      admin create saml-providers --alias okta --display-name "Okta" --config-type metadataFile --service-provider-entity-id "https://your-keycloak.com/realms/your-realm" --metadata-file /path/to/metadata.xml

Options:
  -a, --alias TEXT                Unique alias for the SAML provider
                                  [required]
  -d, --display-name TEXT         Display name shown in login pages
                                  [required]
  --config-type [metadataUrl|metadataFile]
                                  Configuration type  [required]
  --service-provider-entity-id TEXT
                                  Service Provider Entity ID  [required]
  --metadata-url TEXT             URL to fetch SAML metadata (for metadataUrl
                                  type)
  --metadata-file FILENAME        Path to SAML metadata XML file (for
                                  metadataFile type)
  --signing-certificate TEXT      X.509 certificate for signing (PEM format
                                  without headers)
  --name-id-format TEXT           NameID Policy Format
  --force-authn / --no-force-authn
                                  Force re-authentication
  --validate-signature / --no-validate-signature
                                  Validate SAML response signatures
  --want-assertions-signed / --no-want-assertions-signed
                                  Require signed assertions
  --want-assertions-encrypted / --no-want-assertions-encrypted
                                  Require encrypted assertions
  --signature-algorithm TEXT      Signature algorithm for SAML requests
  --post-binding-response / --no-post-binding-response
                                  Use POST binding for SAML responses
  --post-binding-authn-request / --no-post-binding-authn-request
                                  Use POST binding for SAML authentication
                                  requests
  --post-binding-logout / --no-post-binding-logout
                                  Use POST binding for SAML logout requests
  --want-authn-requests-signed / --no-want-authn-requests-signed
                                  Sign SAML authentication requests
  --attribute-mapping TEXT        Attribute mapping as JSON string or
                                  key=value pairs
  --enabled / --disabled          Enable/disable the provider
  --store-token / --no-store-token
                                  Store tokens returned by the identity
                                  provider
  --help                          Show this message and exit.
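
The --attribute-mapping option accepts a JSON string. As a sketch, a mapping can be kept as a dictionary and serialized just before use; the claim URIs and user-attribute names below are illustrative assumptions, not values the product requires:

```python
import json
import shlex

# Hypothetical SAML-attribute-to-user-attribute mapping, serialized as JSON
# for the --attribute-mapping option.
mapping = {
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress": "email",
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname": "firstName",
}

# shlex.quote wraps the JSON so the shell treats it as one argument.
option = "--attribute-mapping " + shlex.quote(json.dumps(mapping))
print(option)
```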

Create Users

The following command shows how to access help for the create users command. It also provides examples on how to create a user.

user@server$ admin create users --help
Usage: admin create users [OPTIONS]

  Create a new user.

  Examples:
      admin create users --username john.doe --email john@example.com
      admin create users --username jane --email jane@example.com --first-name Jane --last-name Smith --roles "admin,user"

Options:
  -u, --username TEXT     Username  [required]
  -e, --email TEXT        Email address
  --first-name TEXT       First name
  --last-name TEXT        Last name
  -p, --password TEXT     Password
  --enabled / --disabled  Enable/disable user
  --roles TEXT            Comma-separated list of role names
  --groups TEXT           Comma-separated list of group names
  --help                  Show this message and exit.

Delete Commands

The following section lists the delete commands.

Main Delete Command

The following command shows how to access help for the delete command.

user@server$ admin delete --help
Usage: admin delete [OPTIONS] COMMAND [ARGS]...

  Delete a resource.

Options:
  --help  Show this message and exit.

Commands:
  entra-id        Delete Entra ID configuration.
  groups          Delete a group.
  roles           Delete a role.
  saml-mappers    Delete an attribute mapper for a SAML provider.
  saml-providers  Delete a SAML SSO provider.
  users           Delete a user by ID.

Delete Entra-id

The following command shows how to access help for the delete entra-id command. It also provides examples on how to delete Entra ID configuration.

user@server$ admin delete entra-id --help
Usage: admin delete entra-id [OPTIONS]

  Delete Entra ID configuration.

  Warning: This action cannot be undone and will permanently remove
  all stored Entra ID settings.

  Examples:
      admin delete entra-id

Options:
  --help  Show this message and exit.

Delete Groups

The following command shows how to access help for the delete groups command. It also provides examples on how to delete a group.

user@server$ admin delete groups --help
Usage: admin delete groups [OPTIONS] GROUP_ID

  Delete a group.

  Examples:
      admin delete groups group-uuid-here

Options:
  --help  Show this message and exit.

Delete Roles

The following command shows how to access help for the delete roles command. It also provides examples on how to delete a role.

user@server$ admin delete roles --help
Usage: admin delete roles [OPTIONS] ROLE_NAME

  Delete a role.

  Examples:
      admin delete roles admin

Options:
  --help  Show this message and exit.

Delete Saml-mappers

The following command shows how to access help for the delete saml-mappers command. It also provides examples on how to delete an attribute mapper for a SAML provider.

user@server$ admin delete saml-mappers --help
Usage: admin delete saml-mappers [OPTIONS] PROVIDER_ALIAS MAPPER_ID

  Delete an attribute mapper for a SAML provider.

  Examples:
      admin delete saml-mappers azure-ad mapper-uuid

Options:
  --help  Show this message and exit.

Delete Saml-providers

The following command shows how to access help for the delete saml-providers command. It also provides examples on how to delete a SAML SSO provider.

user@server$ admin delete saml-providers --help
Usage: admin delete saml-providers [OPTIONS] ALIAS

  Delete a SAML SSO provider.

  Examples:
      admin delete saml-providers azure-ad

Options:
  --help  Show this message and exit.

Delete Users

The following command shows how to access help for the delete users command. It also provides examples on how to delete a user.

user@server$ admin delete users --help
Usage: admin delete users [OPTIONS] USER_ID

  Delete a user by ID.

  Examples:
      admin delete users USER_ID

Options:
  --help  Show this message and exit.

Get Commands

The following section lists the get commands.

Main Get Command

The following command shows how to access help for the get command.

user@server$ admin get --help
Usage: admin get [OPTIONS] COMMAND [ARGS]...

  Display one resource.

Options:
  --help  Show this message and exit.

Commands:
  entra-id        Get Entra ID configuration details.
  groups          Get detailed information about a specific group.
  password-policy Get password policy details.
  roles           Get detailed information about a specific role.
  saml-mappers    Get detailed information about a SAML Mappers...
  saml-providers  Get detailed information about a specific SAML...
  users           Get detailed information about a specific user.

Get Entra-id

The following command shows how to access help for the get entra-id command. It also provides examples on how to get current Entra ID configuration.

user@server$ admin get entra-id --help
Usage: admin get entra-id [OPTIONS]

  Get current Entra ID configuration.

  Examples:
      admin get entra-id

Options:
  --help  Show this message and exit.

Get Groups

The following command shows how to access help for the get groups command. It also provides examples on how to get detailed information about a group.

user@server$ admin get groups --help
Usage: admin get groups [OPTIONS] GROUP_ID

  Get detailed information about a specific group.

  Examples:
      admin get groups group-uuid-here
      admin get groups developers

Options:
  --help  Show this message and exit.

Get Password-policy

The following command shows how to access help for the get password-policy command. It also provides examples on how to get details about the password policy.

user@server$ admin get password-policy --help
Usage: admin get password-policy [OPTIONS] POLICY_ID

  Get password policy details.

  Examples:
      admin get password-policy policy-uuid-here

Options:
  --help  Show this message and exit.

Get Roles

The following command shows how to access help for the get roles command. It also provides examples on how to get detailed information about a role.

user@server$ admin get roles --help
Usage: admin get roles [OPTIONS] ROLE_NAME

  Get detailed information about a specific role.

  Examples:
      admin get roles admin

Options:
  --help  Show this message and exit.

Get Saml-mappers

The following command shows how to access help for the get saml-mappers command. It also provides examples on how to get detailed information about a SAML provider including its mappers.

user@server$ admin get saml-mappers --help
Usage: admin get saml-mappers [OPTIONS] ALIAS

  Get detailed information about a SAML provider including its mappers.

  Examples:
      admin get saml-mappers azure-ad

Options:
  --help  Show this message and exit.

Get Saml-providers

The following command shows how to access help for the get saml-providers command. It also provides examples on how to get detailed information about a specific SAML provider.

user@server$ admin get saml-providers --help
Usage: admin get saml-providers [OPTIONS] ALIAS

  Get detailed information about a specific SAML provider.

  Examples:
      admin get saml-providers tttt
      admin get saml-providers azure-ad-saml

Options:
  --help  Show this message and exit.

Get Users

The following command shows how to access help for the get users command. It also provides examples on how to get detailed information about a user.

user@server$ admin get users --help
Usage: admin get users [OPTIONS] USER_ID

  Get detailed information about a specific user.

  Examples:
      admin get users USER_ID
      admin get users 12345-uuid

Options:
  --help  Show this message and exit.

List Commands

The following section lists the list commands.

Main List Command

The following command shows how to access help for the list command.

user@server$ admin list --help
Usage: admin list [OPTIONS] COMMAND [ARGS]...

  List resources.

Options:
  --help  Show this message and exit.

Commands:
  entra-id-users  Search Entra ID users.
  groups          List all groups with their members and roles.
  permissions     List all available permissions.
  roles           List all roles.
  saml-mappers    List all attribute mappers for a SAML provider.
  saml-providers  List all SAML SSO providers.
  users           List all users.

List Entra-id-users

The following command shows how to access help for the list entra-id-users command. It also provides examples on how to search Entra ID users.

user@server$ admin list entra-id-users --help
Usage: admin list entra-id-users [OPTIONS]

  Search Entra ID users.

  Search across userPrincipalName, givenName, surname, and mail fields.
  If no search query provided, returns all enabled users (up to 50).

  Examples:
      admin list entra-id-users
      admin list entra-id-users --search "john"
      admin list entra-id-users --search "smith"

Options:
  -s, --search TEXT  Search query to find users
  --help             Show this message and exit.

List Groups

The following command shows how to access help for the list groups command. It also provides examples on how to list all groups with their members and roles.

user@server$ admin list groups --help
Usage: admin list groups [OPTIONS]

  List all groups with their members and roles.

  Examples:
      admin list groups
      admin list groups --max 10
      admin list groups --max 5 --first 10

Options:
  -m, --max INTEGER    Maximum number of groups to return
  -f, --first INTEGER  Offset for pagination
  --help               Show this message and exit.

List Permissions

The following command shows how to access help for the list permissions command. It also provides examples on how to list all the available permissions.

user@server$ admin list permissions --help
Usage: admin list permissions [OPTIONS]

  List all available permissions.

  Examples:
      admin list permissions
      admin list permissions --filter "read*"

Options:
  -f, --filter TEXT  Filter permissions by name pattern
  --help             Show this message and exit.

List Roles

The following command shows how to access help for the list roles command. It also provides examples on how to list all the available roles.

user@server$ admin list roles --help
Usage: admin list roles [OPTIONS]

  List all roles.

  Examples:
      admin list roles

Options:
  --help  Show this message and exit.

List Saml-mappers

The following command shows how to access help for the list saml-mappers command. It also provides examples on how to list all attribute mappers for a SAML provider.

user@server$ admin list saml-mappers --help
Usage: admin list saml-mappers [OPTIONS] PROVIDER_ALIAS

  List all attribute mappers for a SAML provider.

  Examples:
      admin list saml-mappers azure-ad

Options:
  --help  Show this message and exit.

List Saml-providers

The following command shows how to access help for the list saml-providers command. It also provides examples on how to list all SAML SSO providers.

user@server$ admin list saml-providers --help
Usage: admin list saml-providers [OPTIONS]

  List all SAML SSO providers.

  Examples:
      admin list saml-providers

Options:
  --help  Show this message and exit.

List Users

The following command shows how to access help for the list users command. It also provides examples on how to list all the available users.

user@server$ admin list users --help
Usage: admin list users [OPTIONS]

  List all users.

  Examples:
      admin list users
      admin list users --max 10
      admin list users --max 5 --first 10

Options:
  -m, --max INTEGER    Maximum number of users to return
  -f, --first INTEGER  Offset for pagination
  --help               Show this message and exit.
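
Since --first is a pagination offset and --max a page size, the invocations needed to walk a user list can be generated ahead of time. A minimal sketch (the page size and total count below are arbitrary assumptions, and the commands are only printed, not executed):

```python
page_size = 5     # value passed to --max
total_users = 12  # assumed total; in practice unknown until a page comes back short

# One `admin list users` invocation per page, using --first as the offset.
commands = [
    f"admin list users --max {page_size} --first {offset}"
    for offset in range(0, total_users, page_size)
]

for cmd in commands:
    print(cmd)
```

In a real script you would stop as soon as a page returns fewer than --max results rather than assuming a known total.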

Set Commands

The following section lists the set commands.

Main Set Command

The following command shows how to access help for the set command.

user@server$ admin set --help
Usage: admin set [OPTIONS] COMMAND [ARGS]...

  Update fields of a resource.

Options:
  --help  Show this message and exit.

Commands:
  entra-id         Update existing Entra ID configuration.
  groups           Update an existing group.
  lock_user        Lock a user account.
  password_policy  Update password policy configuration.
  roles            Update an existing role.
  saml-providers   Update an existing SAML SSO provider.
  unlock_user      Unlock a user account and set a new password.
  update_password  Update user password.
  users            Update an existing user.

Set Entra-id

The following command shows how to access help for the set entra-id command. It also provides examples on how to update an existing Entra ID configuration.

user@server$ admin set entra-id --help
Usage: admin set entra-id [OPTIONS]

  Update existing Entra ID configuration.

  Only provided fields are updated. Configuration is tested if credentials are changed.

  Examples:
      admin set entra-id --enabled
      admin set entra-id --client-secret "new-secret-here"
      admin set entra-id --tenant-id "new-tenant-id" --client-id "new-client-id"

Options:
  -t, --tenant-id TEXT      Update Entra ID Tenant ID
  -c, --client-id TEXT      Update Entra ID Application (Client) ID
  -s, --client-secret TEXT  Update Entra ID Application Client Secret
  --enabled / --disabled    Enable/disable configuration
  --help                    Show this message and exit.

Set Groups

The following command shows how to access help for the set groups command. It also provides examples on how to update an existing group.

user@server$ admin set groups --help
Usage: admin set groups [OPTIONS] GROUP_ID

  Update an existing group.

  Examples:
      admin set groups group-uuid --members "john,jane,bob"
      admin set groups group-uuid --roles "admin,user_manager"
      admin set groups group-uuid --members "user1,user2" --roles "operator,viewer"

Options:
  -m, --members TEXT  Comma-separated list of usernames (replaces existing members)
  -r, --roles TEXT    Comma-separated list of role names (replaces existing roles)
  --help              Show this message and exit.

Set Lock_user

The following command shows how to access help for the set lock_user command. It also provides examples on how to lock a user account.

user@server$ admin set lock_user --help
Usage: admin set lock_user [OPTIONS] USER_ID

  Lock a user account.

  Examples:
      admin set lock_user USER_ID

Options:
  --help  Show this message and exit.

Set Password_policy

The following command shows how to access help for the set password_policy command. It also provides examples on how to update the password policy configuration.

user@server$ admin set password_policy --help
Usage: admin set password_policy [OPTIONS]

  Update password policy configuration.

Options:
  --policy TEXT  Password policy configuration as JSON string.
                 
                 Common Keys:
                 - length: Minimum password length
                 - digits: Number of digits required
                 - lowerCase: Number of lowercase characters required
                 - upperCase: Number of uppercase characters required
                 - specialChars: Number of special characters required
                 - notUsername: Password cannot be same as username (0 or 1)
                 - passwordHistory: Number of previous passwords to remember
                 - maxLength: Maximum password length
                 
                 Examples:
                     admin set password_policy --policy '{"length": 8, "digits": 1, "upperCase": 1, "specialChars": 1}'
                     admin set password_policy --policy '{"length": 12, "digits": 2, "lowerCase": 1, "upperCase": 1, "specialChars": 2, "notUsername": 1}'
                     admin set password_policy --policy '{"length": 10, "passwordHistory": 5, "maxLength": 128}'   [required]
  --help         Show this message and exit.
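
Because --policy takes a JSON string, a light sanity check against the keys listed in the help text can catch typos before the command runs. This is a hypothetical helper, not part of the admin CLI:

```python
import json

# Keys listed under "Common Keys" in `admin set password_policy --help`.
KNOWN_KEYS = {
    "length", "digits", "lowerCase", "upperCase",
    "specialChars", "notUsername", "passwordHistory", "maxLength",
}

def build_policy(**settings):
    """Serialize policy settings, rejecting keys the help text does not list."""
    unknown = set(settings) - KNOWN_KEYS
    if unknown:
        raise ValueError(f"unknown policy keys: {sorted(unknown)}")
    return json.dumps(settings)

policy = build_policy(length=12, digits=2, upperCase=1, specialChars=2)
print(f"admin set password_policy --policy '{policy}'")
```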

Set Roles

The following command shows how to access help for the set roles command. It also provides examples on how to update an existing role.

user@server$ admin set roles --help
Usage: admin set roles [OPTIONS] ROLE_NAME

  Update an existing role.

  Examples:
      admin set roles admin --description "Updated admin role"
      admin set roles manager --permissions "security_officer"
      admin set roles operator --description "System operator" --permissions "security_officer"

Options:
  -d, --description TEXT  New role description
  -p, --permissions TEXT  Comma-separated list of permission names (replaces existing)
  --help                  Show this message and exit.

Set Saml-providers

The following command shows how to access help for the set saml-providers command. It also provides examples on how to update an existing SAML SSO provider.

user@server$ admin set saml-providers --help
Usage: admin set saml-providers [OPTIONS] ALIAS

  Update an existing SAML SSO provider.

  Only the parameters you explicitly provide will be updated.

  Examples:
      admin set saml-providers azure-ad --display-name "New Azure AD"
      admin set saml-providers Test --enabled
      admin set saml-providers Test --disabled
      admin set saml-providers Test --force-authn
      admin set saml-providers Test --no-validate-signature
      admin set saml-providers Test --metadata-url "https://new-metadata-url.com"
      admin set saml-providers Test --signature-algorithm "RSA_SHA512"

Options:
  -d, --display-name TEXT         Update display name for the provider
  --config-type [metadataUrl|metadataFile]
                                  Update configuration type
  --service-provider-entity-id TEXT
                                  Update Service Provider Entity ID
  --metadata-url TEXT             Update metadata URL
  --metadata-file FILENAME        Update metadata file content
  --signing-certificate TEXT      Update signing certificate
  --name-id-policy-format TEXT    Update NameID Policy Format
  --force-authn                   Enable force authentication
  --no-force-authn                Disable force authentication
  --validate-signature            Enable signature validation
  --no-validate-signature         Disable signature validation
  --want-assertions-signed        Require signed assertions
  --no-want-assertions-signed     Don't require signed assertions
  --want-assertions-encrypted     Require encrypted assertions
  --no-want-assertions-encrypted  Don't require encrypted assertions
  --signature-algorithm TEXT      Update signature algorithm
  --post-binding-response         Enable POST binding for responses
  --no-post-binding-response      Disable POST binding for responses
  --post-binding-authn-request    Enable POST binding for auth requests
  --no-post-binding-authn-request
                                  Disable POST binding for auth requests
  --post-binding-logout           Enable POST binding for logout
  --no-post-binding-logout        Disable POST binding for logout
  --want-authn-requests-signed    Enable authentication request signing
  --no-want-authn-requests-signed
                                  Disable authentication request signing
  --attribute-mapping TEXT        Update attribute mapping (JSON format)
  --enabled                       Enable the provider
  --disabled                      Disable the provider
  --store-token                   Enable token storage
  --no-store-token                Disable token storage
  --help                          Show this message and exit.

Set Unlock_user

The following command shows how to access help for the set unlock_user command. It also provides examples on how to unlock a user account and set a new password.

user@server$ admin set unlock_user --help
Usage: admin set unlock_user [OPTIONS] USER_ID

  Unlock a user account and set a new password.

  Examples:
      admin set unlock_user USER_ID --password "NewPassword123!"
      admin set unlock_user USER_ID -p "StrongPass123!"

Options:
  -p, --password TEXT  New password to set after unlocking  [required]
  --help               Show this message and exit.

Set Update_password

The following command shows how to access help for the set update_password command. It also provides examples on how to update user password.

user@server$ admin set update_password --help
Usage: admin set update_password [OPTIONS] USER_ID

  Update user password.

  Examples:
      admin set update_password USER_ID --new-password "NewPassword123!" --old-password "OldPass123!"
      admin set update_password USER_ID -n "NewPass123!" -o "OldPass123!"

Options:
  -n, --new-password TEXT  New password  [required]
  -o, --old-password TEXT  Current password for validation  [required]
  --help                   Show this message and exit.

Set Users

The following command shows how to access help for the set users command. It also provides examples on how to update an existing user.

user@server$ admin set users --help
Usage: admin set users [OPTIONS] USER_ID

  Update an existing user.

  Examples:
      admin set users USER_ID --email newemail@example.com
      admin set users USER_ID --enabled --roles "admin,manager"

Options:
  -e, --email TEXT        New email address
  --first-name TEXT       New first name
  --last-name TEXT        New last name
  --enabled / --disabled  Enable/disable user
  --roles TEXT            Comma-separated list of role names (replaces existing)
  --groups TEXT           Comma-separated list of group names (replaces existing)
  --help                  Show this message and exit.

Test Commands

The following section lists the test commands.

Main Test Command

The following command shows how to access help for the test command.

user@server$ admin test --help
Usage: admin test [OPTIONS] COMMAND [ARGS]...

  Test various configurations and connections.

Options:
  --help  Show this message and exit.

Commands:
  entra-id  Test Entra ID connection.

Test Entra-id

The following command shows how to access help for the test entra-id command. It also provides examples on how to test an Entra ID connection.

user@server$ admin test entra-id --help
Usage: admin test entra-id [OPTIONS]

  Test Entra ID connection.

  Test Options:
  1. Test stored configuration: --use-stored
  2. Test provided credentials: --tenant-id, --client-id, --client-secret

  Examples:
      admin test entra-id --use-stored
      admin test entra-id --tenant-id "tenant-id" --client-id "client-id" --client-secret "secret"

Options:
  --use-stored              Test stored configuration
  -t, --tenant-id TEXT      Entra ID Tenant ID (for direct test)
  -c, --client-id TEXT      Entra ID Application (Client) ID (for direct test)
  -s, --client-secret TEXT  Entra ID Application Client Secret (for direct
                            test)
  --help                    Show this message and exit.

3.1.3.2 - Policy Management Command Line Interface (CLI) Reference

Policy Management CLI documentation.

Main Pim Command

The following command shows how to access help for the pim command.

user@server$ pim --help
Usage: pim [OPTIONS] COMMAND [ARGS]...

  Policy Information Management commands.

Options:
  --help  Show this message and exit.

Commands:
  create  Create a resource.
  delete  Delete a resource.
  get     Display one or many resources.
  invoke  Invoke resource by operation defined by the API.
  set     Update fields of a resource.

Invoke Commands

The following section lists the invoke commands.

Main Invoke Command

The following command shows how to access help for the invoke command.

user@server$ pim invoke --help
Usage: pim invoke [OPTIONS] COMMAND [ARGS]...

  Invoke resource by operation defined by the API.

Options:
  --help  Show this message and exit.

Commands:
  datastores  Commands for deploying datastore resources.
  init        Bootstrap PIM - Initialize the Policy Information system.
  roles       Commands for synchronizing role resources.
  sources     Commands for testing source resources.

Invoke Datastores

The following command shows how to access help for the invoke datastores command. It also provides examples on how to deploy datastore resources.

user@server$ pim invoke datastores --help
Usage: pim invoke datastores [OPTIONS] COMMAND [ARGS]...

  Commands for deploying datastore resources.

Options:
  --help  Show this message and exit.

Commands:
  deploy  Deploy policies and/or trusted applications to a specific datastore.

Invoke Datastores Types

The following commands show how to access help for the invoke datastores <type> command.

Invoke Datastores Deploy

The following command shows how to access help for the invoke datastores deploy command. It also provides examples on how to deploy policies, trusted applications, or both to a specific datastore.

user@server$ pim invoke datastores deploy --help
Usage: pim invoke datastores deploy [OPTIONS] DATASTORE_UID

  Deploy policies and/or trusted applications to a specific datastore.

  EXAMPLES:

  # Deploy single policy to datastore
  pim invoke datastores deploy 15 --policies 1

  # Deploy multiple policies to datastore
  pim invoke datastores deploy 15 --policies 1 --policies 2 --policies 3

  # Deploy trusted applications to datastore
  pim invoke datastores deploy 15 --applications 1 --applications 2

  # Deploy both policies and applications together
  pim invoke datastores deploy "<datastore-uid>" --policies 1 --policies 2 --applications 1 --applications 2

  # Clear all deployments (deploy empty configuration)
  pim invoke datastores deploy 42

  WORKFLOW:
  # Step 1: Verify datastore exists and is accessible
  pim get datastores datastore <datastore-uid>

  # Step 2: List available policies and applications
  pim get policies policy
  pim get applications application

  # Step 3: Deploy to the datastore
  pim invoke datastores deploy <datastore-uid> --policies <policy-uid> --applications <app-uid>

Options:
  --policies TEXT      UIDs of policies to deploy (can be specified multiple
                       times).
  --applications TEXT  UIDs of trusted applications to deploy (can be
                       specified multiple times).
  --help               Show this message and exit.
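
The WORKFLOW above can be collected into a small shell script. A hedged sketch; the datastore UID and the policy/application UIDs are placeholders to substitute for your environment:

```shell
#!/bin/sh
set -e  # abort on the first failing command

DATASTORE_UID=15  # placeholder: replace with your datastore UID

# Step 1: verify the datastore exists and is accessible
pim get datastores datastore "$DATASTORE_UID"

# Step 2: review deployable policies and trusted applications
pim get policies policy
pim get applications application

# Step 3: deploy two policies and one application in a single call
pim invoke datastores deploy "$DATASTORE_UID" --policies 1 --policies 2 --applications 1
```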

Invoke Init

The following command shows how to access help for the invoke init command. It also provides examples on how to initialize the Policy Information Management system.

user@server$ pim invoke init --help
Usage: pim invoke init [OPTIONS]

  Bootstrap PIM - Initialize the Policy Information Management system.

  EXAMPLES:

  # Initialize PIM system for first-time setup
  pim invoke init

Options:
  --help  Show this message and exit.

Invoke Roles

The following command shows how to access help for the invoke roles command. It also provides examples on how to synchronize role resources.

user@server$ pim invoke roles --help
Usage: pim invoke roles [OPTIONS] COMMAND [ARGS]...

  Commands for synchronizing role resources.

Options:
  --help  Show this message and exit.

Commands:
  sync  Synchronize all group members for a role with external identity sources.

Invoke Roles Types

The following commands show how to access help for the invoke roles <type> command.

Invoke Roles Sync

The following command shows how to access help for the invoke roles sync command. It also provides examples on how to synchronize all group members for a role.

user@server$ pim invoke roles sync --help
Usage: pim invoke roles sync [OPTIONS] ROLE_UID

  Synchronize all group members for a role with external identity sources.

  EXAMPLES:

  # Synchronize role members with LDAP/AD source
  pim invoke roles sync 15

Options:
  --help  Show this message and exit.

Invoke Sources

The following command shows how to access help for the invoke sources command. It also provides examples on how to test source resources.

user@server$ pim invoke sources --help
Usage: pim invoke sources [OPTIONS] COMMAND [ARGS]...

  Commands for testing source resources.

Options:
  --help  Show this message and exit.

Commands:
  test  Tests the connection and functionality of a source.

Invoke Sources Types

The following commands show how to access help for the invoke sources <type> command.

Invoke Sources Test

The following command shows how to access help for the invoke sources test command. It also provides examples on how to test the connection to a member source.

user@server$ pim invoke sources test --help
Usage: pim invoke sources test [OPTIONS] UID

  Tests the connection and functionality of a source.

  EXAMPLES:

  # Basic connectivity test
  pim invoke sources test 15

Options:
  --help  Show this message and exit.
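
Because roles sync pulls group members from an external identity source, testing that source first avoids sync failures. A minimal sketch using the example UID 15 for both resources; substitute the UIDs from your environment:

```shell
# Confirm the identity source is reachable and functional
pim invoke sources test 15

# Then synchronize the role that draws its members from that source
pim invoke roles sync 15
```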

Create Commands

The following section lists the create commands.

Main Create Command

The following command shows how to access help for the create command.

user@server$ pim create --help
Usage: pim create [OPTIONS] COMMAND [ARGS]...

  Create a resource.

Options:
  --help  Show this message and exit.

Commands:
  alphabets     Creates a new alphabet.
  applications  Creates a new application.
  dataelements  Creates a new data element of a specific type.
  datastores    Commands for creating datastore resources.
  deploy        Deploys policies and/or trusted applications to a datastore.
  masks         Creates a new mask with specified masking pattern and configuration.
  policies      Creates a new policy or rule.
  roles         Creates a new role or adds members to a role.
  sources       Creates a new source.

Create Alphabets

The following command shows how to access help for the create alphabets command. It also provides examples on how to create an alphabet.

user@server$ pim create alphabets --help
Usage: pim create alphabets [OPTIONS]

  Creates a new alphabet.

  EXAMPLES:

  # Create alphabet combining existing alphabets (use numeric UIDs from 'pim get alphabets')
  pim create alphabets --label "LatinExtended" --alphabets "1,2"

  # Create alphabet with Unicode ranges (Basic Latin + punctuation)
  pim create alphabets --label "ASCIIPrintable" --ranges '[{"from": "0020", "to": "007E"}]'

  # Create alphabet with specific code points (more than 10 examples)
  pim create alphabets --label "SpecialChars" --code-points "00A9,00AE,2122,2603,2615,20AC,00A3,00A5,00B5,00B6,2020,2021,2030,2665,2660"

  # Create complex alphabet with multiple options (use numeric UIDs)
  pim create alphabets --label "CompleteSet" --alphabets "1,3,5"  --ranges '[{"from": "0100", "to": "017F"}, {"from": "1E00", "to": "1EFF"}]' --code-points "20AC,00A3,00A5"

  # Create mathematical symbols alphabet
  pim create alphabets --label "MathSymbols" --ranges '[{"from": "2200", "to": "22FF"}, {"from": "2190", "to": "21FF"}]'

Options:
  --label TEXT        The label for the custom alphabet.  [required]
  --alphabets TEXT    Comma-separated list of alphabet UIDs.
  --ranges TEXT       JSON string of code point ranges. For example, '[{"from":
                      "0020", "to": "007E"}]'.
  --code-points TEXT  Comma-separated list of code points.
  --help              Show this message and exit.
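
Newly created alphabets are referenced elsewhere by numeric UID, so a create is typically followed by a listing. A minimal sketch; the label is illustrative:

```shell
# Create an alphabet covering printable ASCII (U+0020 through U+007E)
pim create alphabets --label "ASCIIPrintable" --ranges '[{"from": "0020", "to": "007E"}]'

# List alphabets to find the numeric UID assigned to the new entry
pim get alphabets
```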

Create Applications

The following command shows how to access help for the create applications command. It also provides examples on how to create a trusted application.

user@server$ pim create applications --help
Usage: pim create applications [OPTIONS]

  Creates a new application.

  EXAMPLES:

  # Create a basic application with required fields
  pim create applications --name "WebApp" --application-name "mywebapp" --application-user "webuser"

  # Create application with description
  pim create applications --name "DatabaseApp" --description "Main database application" --application-name "dbapp" --application-user "dbuser"

Options:
  --name TEXT              Name of the application.  [required]
  --description TEXT       Description of the application.
  --application-name TEXT  The application name or the application loading the
                           API jar file.  [required]
  --application-user TEXT  The application user or the OS user.  [required]
  --help                   Show this message and exit.
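
A trusted application takes effect only after it is deployed to a datastore. A hedged sketch combining create applications with the deploy command described earlier; all UIDs are placeholders, and the application UID must be read from the create output or from pim get applications application:

```shell
# Register the trusted application (application name and OS user must match at runtime)
pim create applications --name "WebApp" --application-name "mywebapp" --application-user "webuser"

# Deploy it to datastore 15, assuming the new application was assigned UID 3
pim invoke datastores deploy 15 --applications 3
```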

Create Dataelements

The following command shows how to access help for the create dataelements command. It also provides examples on how to create a data element.

user@server$ pim create dataelements --help
Usage: pim create dataelements [OPTIONS] COMMAND [ARGS]...

  Creates a new data element of a specific type.

  AVAILABLE PROTECTION TYPES:

  # Encryption Methods:
  - aes128-cbc-enc       # AES-128 CBC encryption
  - aes128-cusp-enc      # AES-128 CUSP encryption  
  - aes256-cbc-enc       # AES-256 CBC encryption
  - aes256-cusp-enc      # AES-256 CUSP encryption
  - triple-des-cbc-enc   # 3DES CBC encryption
  - triple-des-cusp-enc  # 3DES CUSP encryption
  - sha1-hmac-enc        # SHA1 HMAC encryption (deprecated)
  - sha256-hmac-enc      # SHA256 HMAC encryption
  - no-enc               # No encryption (clear text)

  # Tokenization Methods:
  - token numeric        # Numeric tokens
  - token alphabetic     # Alphabetic tokens
  - token alpha-numeric  # Alphanumeric tokens
  - token printable      # Printable character tokens
  - token unicode        # Unicode tokens
  - token credit-card    # Credit card specific tokens
  - token email          # Email specific tokens

  # Format Preserving Encryption (FPE):
  - fpe numeric          # Numeric FPE
  - fpe alphabetic       # Alphabetic FPE
  - fpe alpha-numeric    # Alphanumeric FPE

  # Special Protection Types:
  - masking              # Data masking using NoEnc
  - monitor              # Data monitoring using NoEnc

Options:
  --help  Show this message and exit.

Commands:
  aes128-cbc-enc       Creates a new AES-128-CBC-ENC data element.
  aes128-cusp-enc      Creates a new AES-128-CUSP-ENC data element.
  aes256-cbc-enc       Creates a new AES-256-CBC-ENC data element.
  aes256-cusp-enc      Creates a new AES-256-CUSP-ENC data element.
  fpe                  Creates a new FPE (Format Preserving Encryption)...
  masking              Creates a new masking data element using NoEnc...
  monitor              Creates a new monitoring data element using NoEnc...
  no-enc               Creates a new No-Enc data element.
  sha1-hmac-enc        Creates a new SHA1-HMAC-ENC data element...
  sha256-hmac-enc      Creates a new SHA256-HMAC-ENC data element.
  token                Creates a new token data element of a specific type.
  triple-des-cbc-enc   Creates a new 3DES-CBC-ENC data element.
  triple-des-cusp-enc  Creates a new 3DES-CUSP-ENC data element.

Create Dataelements Types

The following commands show how to access help for the create dataelements <type> command. It also provides examples on how to create a data element of a specific type.

Create Dataelements aes128 cbc enc

The following command shows how to access help for the create dataelements aes128-cbc-enc command. It also provides examples on how to create an AES-128-CBC-ENC data element.

user@server$ pim create dataelements aes128-cbc-enc --help
Usage: pim create dataelements aes128-cbc-enc [OPTIONS]

  Creates a new AES-128-CBC-ENC data element.

  EXAMPLES:

  # Create basic AES-128 encryption data element
  pim create dataelements aes128-cbc-enc --name "BasicEncryption" --description "Basic data encryption"

  # Create with all security features enabled
  pim create dataelements aes128-cbc-enc --name "FullSecurityEnc" --description "Full security encryption" --iv-type "SYSTEM_APPEND" --checksum-type "CRC32" --cipher-format "INSERT_KEYID_V1"

Options:
  --name TEXT                               The name for the data element.  [required]
  --description TEXT                        An optional description for the data element.
  --iv-type [NONE|SYSTEM_APPEND]            Initialization Vector type.
  --checksum-type [NONE|CRC32]              Checksum type.
  --cipher-format [NONE|INSERT_KEYID_V1]    Cipher format.
  --help                                    Show this message and exit.

Create Dataelements aes128 cusp enc

The following command shows how to access help for the create dataelements aes128-cusp-enc command. It also provides examples on how to create an AES-128-CUSP-ENC data element.

user@server$ pim create dataelements aes128-cusp-enc --help
Usage: pim create dataelements aes128-cusp-enc [OPTIONS]

  Creates a new AES-128-CUSP-ENC data element.

  EXAMPLES:

  # Create with key rotation support
  pim create dataelements aes128-cusp-enc --name "RotatingCUSP" --description "CUSP with key rotation" --cipher-format "INSERT_KEYID_V1"

Options:
  --name TEXT                               The name for the data element.  [required]
  --description TEXT                        An optional description for the data element.
  --iv-type [NONE|SYSTEM_APPEND]            Initialization Vector type.
  --checksum-type [NONE|CRC32]              Checksum type.
  --cipher-format [NONE|INSERT_KEYID_V1]    Cipher format.
  --help                                    Show this message and exit.

Create Dataelements aes256 cbc enc

The following command shows how to access help for the create dataelements aes256-cbc-enc command. It also provides examples on how to create an AES-256-CBC-ENC data element.

user@server$ pim create dataelements aes256-cbc-enc --help
Usage: pim create dataelements aes256-cbc-enc [OPTIONS]

  Creates a new AES-256-CBC-ENC data element.

  EXAMPLES:

  # Create with system-generated IV and CRC32 checksum
  pim create dataelements aes256-cbc-enc --name "CreditCardEnc" --description "Credit card encryption" --iv-type "SYSTEM_APPEND" --checksum-type "CRC32"    

Options:
  --name TEXT                               The name for the data element.  [required]
  --description TEXT                        An optional description for the data element.
  --iv-type [NONE|SYSTEM_APPEND]            Initialization Vector type.
  --checksum-type [NONE|CRC32]              Checksum type.
  --cipher-format [NONE|INSERT_KEYID_V1]    Cipher format.
  --help                                    Show this message and exit.

Create Dataelements aes256 cusp enc

The following command shows how to access help for the create dataelements aes256-cusp-enc command. It also provides examples on how to create an AES-256-CUSP-ENC data element.

user@server$ pim create dataelements aes256-cusp-enc --help
Usage: pim create dataelements aes256-cusp-enc [OPTIONS]

  Creates a new AES-256-CUSP-ENC data element.

  EXAMPLES:

  # Create basic AES-256 CUSP encryption
  pim create dataelements aes256-cusp-enc --name "HighSecurityEnc" --description "High security data encryption"

  # Create with key ID insertion for key management
  pim create dataelements aes256-cusp-enc --name "EnterpriseEnc" --description "Enterprise encryption with key tracking" --cipher-format "INSERT_KEYID_V1"

Options:
  --name TEXT                               The name for the data element.  [required]
  --description TEXT                        An optional description for the data element.
  --iv-type [NONE|SYSTEM_APPEND]            Initialization Vector type.
  --checksum-type [NONE|CRC32]              Checksum type.
  --cipher-format [NONE|INSERT_KEYID_V1]    Cipher format.
  --help                                    Show this message and exit.

Create Dataelements triple des cbc enc

The following command shows how to access help for the create dataelements triple-des-cbc-enc command. It also provides examples on how to create a 3DES-CBC-ENC data element.

user@server$ pim create dataelements triple-des-cbc-enc --help
Usage: pim create dataelements triple-des-cbc-enc [OPTIONS]

  Creates a new 3DES-CBC-ENC data element.

  EXAMPLES:

  # Create basic 3DES-CBC encryption
  pim create dataelements triple-des-cbc-enc --name "Legacy3DESEnc" --description "Legacy 3DES encryption for compatibility"

  # Create with key ID insertion for key management
  pim create dataelements triple-des-cbc-enc --name "Managed3DES" --description "3DES with key tracking" --cipher-format "INSERT_KEYID_V1"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --iv-type [NONE|SYSTEM_APPEND]  Initialization Vector type.
  --checksum-type [NONE|CRC32]    Checksum type.
  --cipher-format [NONE|INSERT_KEYID_V1]
                                  Cipher format.
  --help                          Show this message and exit.

Create Dataelements triple des cusp enc

The following command shows how to access help for the create dataelements triple-des-cusp-enc command. It also provides examples on how to create a 3DES-CUSP-ENC data element.

user@server$ pim create dataelements triple-des-cusp-enc --help
Usage: pim create dataelements triple-des-cusp-enc [OPTIONS]

  Creates a new 3DES-CUSP-ENC data element.

  EXAMPLES:

  # Create with system-generated IV and integrity checking
  pim create dataelements triple-des-cusp-enc --name "Secure3DESCusp" --description "3DES CUSP with enhanced security" --iv-type "SYSTEM_APPEND" --checksum-type "CRC32"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --iv-type [NONE|SYSTEM_APPEND]  Initialization Vector type.
  --checksum-type [NONE|CRC32]    Checksum type.
  --cipher-format [NONE|INSERT_KEYID_V1]
                                  Cipher format.
  --help                          Show this message and exit.

Create Dataelements fpe

The following command shows how to access help for the create dataelements fpe command. It also provides examples on how to create a Format Preserving Encryption (FPE) data element.

user@server$ pim create dataelements fpe --help
Usage: pim create dataelements fpe [OPTIONS] COMMAND [ARGS]...

  Creates a new FPE (Format Preserving Encryption) data element of a specific
  type.

  AVAILABLE FPE TYPES:

  - numeric                           # Numeric data (0-9)
  - alphabetic                        # Alphabetic data (a-z, A-Z)  
  - alpha-numeric                     # Alphanumeric data (0-9, a-z, A-Z)
  - unicode-basic-latin-alphabetic    # Unicode Basic Latin alphabetic
  - unicode-basic-latin-alpha-numeric # Unicode Basic Latin alphanumeric

Options:
  --help  Show this message and exit.

Commands:
  alpha-numeric                        Creates a new Alpha Numeric FPE data element.
  alphabetic                           Creates a new Alphabetic FPE data element.
  numeric                              Creates a new Numeric FPE data element.
  unicode-basic-latin-alpha-numeric    Creates a new Unicode Basic Latin Alpha Numeric (Format Preserving Encryption) FPE data element.
  unicode-basic-latin-alphabetic       Creates a new Unicode Basic Latin Alphabetic FPE data element.

Create Dataelements fpe alpha numeric

The following command shows how to access help for the create dataelements fpe alpha-numeric command. It also provides examples on how to create an alpha-numeric (FPE) data element.

user@server$ pim create dataelements fpe alpha-numeric --help
Usage: pim create dataelements fpe alpha-numeric [OPTIONS]

  Creates a new Alpha Numeric FPE data element.

  EXAMPLES:

  # Create basic alphanumeric FPE for user IDs
  pim create dataelements fpe alpha-numeric --name "UserIDFPE" --description "User ID alphanumeric format-preserving encryption"

  # Create for product codes with flexible length handling
  pim create dataelements fpe alpha-numeric --name "ProductCodeFPE" --description "Product code alphanumeric FPE" --from-left 2 --min-length 5 --allow-short "NOINPUTVALUE"

  # Create for mixed case identifiers
  pim create dataelements fpe alpha-numeric --name "MixedCaseIDFPE" --description "Mixed case identifier encryption"  --from-left 1 --from-right 2 --min-length 7

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --plain-text-encoding TEXT      Kept for backwards compatibility, will be
                                  ignored if sent in. Removed in later
                                  releases.
  --from-left INTEGER             Number of characters to retain in clear from
                                  the left.
  --from-right INTEGER            Number of characters to retain in clear from
                                  the right.
  --min-length INTEGER            The minimum supported input length is 2
                                  bytes and is configurable up to 10 bytes.
  --tweak-mode [EXT_API|EXT_INPUT]
                                  The tweak input is derived from either the
                                  API (EXT_API) or the input message
                                  (EXT_INPUT).
  --allow-short [NOWITHERROR|NOINPUTVALUE]
                                  Specifies whether the short data must be
                                  supported or not.
  --help                          Show this message and exit.

Create Dataelements fpe alphabetic

The following command shows how to access help for the create dataelements fpe alphabetic command. It also provides examples on how to create an alphabetic (FPE) data element.

user@server$ pim create dataelements fpe alphabetic --help
Usage: pim create dataelements fpe alphabetic [OPTIONS]

  Creates a new Alphabetic FPE data element.

  EXAMPLES:

  # Create with partial clear text (preserve first 2 and last 2 chars)
  pim create dataelements fpe alphabetic --name "PartialAlphaFPE" --description "Partial alphabetic FPE with clear boundaries" --from-left 2 --from-right 2

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --plain-text-encoding TEXT      Kept for backwards compatibility, will be
                                  ignored if sent in. Removed in later
                                  releases.
  --from-left INTEGER             Number of characters to retain in clear from
                                  the left.
  --from-right INTEGER            Number of characters to retain in clear from
                                  the right.
  --min-length INTEGER            The minimum supported input length is 2
                                  bytes and is configurable up to 10 bytes.
  --allow-short [NOWITHERROR|NOINPUTVALUE]
                                  Specifies whether the short data must be
                                  supported or not.
  --tweak-mode [EXT_API|EXT_INPUT]
                                  The tweak input is derived from either the
                                  API (EXT_API) or the input message
                                  (EXT_INPUT).
  --help                          Show this message and exit.

Create Dataelements fpe numeric

The following command shows how to access help for the create dataelements fpe numeric command. It also provides examples on how to create a numeric (FPE) data element.

user@server$ pim create dataelements fpe numeric --help
Usage: pim create dataelements fpe numeric [OPTIONS]

  Creates a new Numeric FPE data element.

  EXAMPLES:

  # Create basic numeric FPE for account numbers
  pim create dataelements fpe numeric --name "AccountFPE" --description "Account number format-preserving encryption" --min-length 6

  # Create FPE with partial masking (show first 4 digits)
  pim create dataelements fpe numeric --name "PartialFPE" --description "Partial numeric FPE" --min-length 8 --from-left 4

  # Create credit card FPE with BIN preservation
  pim create dataelements fpe numeric --name "CreditCardFPE" --description "Credit card FPE with BIN visible" --min-length 8 --from-left 6 --from-right 4 --special-numeric-handling "CCN"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --plain-text-encoding TEXT      Kept for backwards compatibility, will be
                                  ignored if sent in. Removed in later
                                  releases.
  --from-left INTEGER             Number of characters to retain in clear from
                                  the left.
  --from-right INTEGER            Number of characters to retain in clear from
                                  the right.
  --min-length INTEGER            The minimum supported input length is 2
                                  bytes and is configurable up to 10 bytes.
                                  The default minimum supported input length
                                  for Credit Card Number (CCN) is 8 bytes and
                                  is configurable up to 10 bytes.
  --tweak-mode [EXT_API|EXT_INPUT]
                                  The tweak input is derived from either the
                                  API (EXT_API) or the input message
                                  (EXT_INPUT).
  --allow-short [NOWITHERROR|NOINPUTVALUE]
                                  Specifies whether the short data must be
                                  supported or not.
  --special-numeric-handling [NONE|CCN]
                                  The Format Preserving Encryption (FPE) for
                                  Credit Card Number (CCN) is handled by
                                  configuring numeric data type as the
                                  plaintext alphabet.
  --help                          Show this message and exit.

Create Dataelements fpe unicode basic latin alpha numeric

The following command shows how to access help for the create dataelements fpe unicode-basic-latin-alpha-numeric command. It also provides examples on how to create a unicode basic latin alpha numeric (FPE) data element.

user@server$ pim create dataelements fpe unicode-basic-latin-alpha-numeric --help
Usage: pim create dataelements fpe unicode-basic-latin-alpha-numeric 
           [OPTIONS]

  Creates a new Unicode Basic Latin Alpha Numeric (Format Preserving
  Encryption) FPE data element.

  EXAMPLES:

  # Create basic Unicode Latin alphanumeric FPE
  pim create dataelements fpe unicode-basic-latin-alpha-numeric --name "UnicodeLatinFPE"  --description "Unicode Latin alphanumeric format-preserving encryption"

  # Create with partial clear text for international IDs
  pim create dataelements fpe unicode-basic-latin-alpha-numeric --name "IntlIDFPE" --description "International ID with clear prefix,suffix" --from-left 2 --from-right 2 --min-length 6

  # Create for international user IDs with flexible length
  pim create dataelements fpe unicode-basic-latin-alpha-numeric --name "GlobalUserIDFPE" --description "Global user ID format-preserving encryption" --min-length 4 --allow-short "NOINPUTVALUE"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --plain-text-encoding TEXT      Kept for backwards compatibility, will be
                                  ignored if sent in. Removed in later
                                  releases.
  --from-left INTEGER             Number of characters to retain in clear from
                                  the left.
  --from-right INTEGER            Number of characters to retain in clear from
                                  the right.
  --min-length INTEGER            The minimum supported input length is 2
                                  bytes and is configurable up to 10 bytes.
  --tweak-mode [EXT_API|EXT_INPUT]
                                  The tweak input is derived from either the
                                  API (EXT_API) or the input message
                                  (EXT_INPUT).
  --allow-short [NOWITHERROR|NOINPUTVALUE]
                                  Specifies whether the short data must be
                                  supported or not.
  --help                          Show this message and exit.

Create Dataelements fpe unicode basic latin alphabetic

The following command shows how to access help for the create dataelements fpe unicode-basic-latin-alphabetic command. It also provides examples on how to create a unicode basic latin alphabetic (FPE) data element.

user@server$ pim create dataelements fpe unicode-basic-latin-alphabetic --help
Usage: pim create dataelements fpe unicode-basic-latin-alphabetic 
           [OPTIONS]

  Creates a new Unicode Basic Latin Alphabetic FPE data element.

  EXAMPLES:

  # Create basic Unicode Basic Latin alphabetic FPE
  pim create dataelements fpe unicode-basic-latin-alphabetic --name "UnicodeAlphaFPE"  --description "Unicode Basic Latin alphabetic FPE"

  # Create for European customer names
  pim create dataelements fpe unicode-basic-latin-alphabetic --name "EuropeanNameFPE" --description "European customer name FPE" --from-left 1 --min-length 3 --allow-short "NOWITHERROR"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --plain-text-encoding TEXT      Kept for backwards compatibility, will be
                                  ignored if sent in. Removed in later
                                  releases.
  --from-left INTEGER             Number of characters to retain in clear from
                                  the left.
  --from-right INTEGER            Number of characters to retain in clear from
                                  the right.
  --min-length INTEGER            The minimum supported input length is 2
                                  bytes and is configurable up to 10 bytes.
  --tweak-mode [EXT_API|EXT_INPUT]
                                  The tweak input is derived from either the
                                  API (EXT_API) or the input message
                                  (EXT_INPUT).
  --allow-short [NOWITHERROR|NOINPUTVALUE]
                                  Specifies whether the short data must be
                                  supported or not.
  --help                          Show this message and exit.

Create Dataelements masking

The following command shows how to access help for the create dataelements masking command. It also provides examples on how to create a masking data element using no encryption with masking enabled.

user@server$ pim create dataelements masking --help
Usage: pim create dataelements masking [OPTIONS]

  Creates a new masking data element using NoEnc with masking enabled.

  EXAMPLES:

  # Create basic data masking with a specific mask
  pim create dataelements masking --name "SSNMasking" --description "Social Security Number masking"  --mask-uid "1"

  # Create email masking for development environment
  pim create dataelements masking --name "EmailMasking" --description "Email masking for dev environment" --mask-uid "2"

Options:
  --name TEXT         The name for the data element.  [required]
  --description TEXT  An optional description for the data element.
  --mask-uid TEXT     The UID of the mask to apply for masking data.
                      [required]
  --help              Show this message and exit.

Create Dataelements monitor

The following command shows how to access help for the create dataelements monitor command. It also provides examples on how to create a monitoring data element using NoEnc with monitoring enabled.

user@server$ pim create dataelements monitor --help
Usage: pim create dataelements monitor [OPTIONS]

  Creates a new monitoring data element using no encryption with monitoring enabled.

  EXAMPLES:

  # Create basic monitoring for sensitive database fields
  pim create dataelements monitor --name "CustomerDataMonitor"  --description "Monitor customer data access"

Options:
  --name TEXT         The name for the data element.  [required]
  --description TEXT  An optional description for the data element.
  --help              Show this message and exit.

Create Dataelements no enc

The following command shows how to access help for the create dataelements no-enc command. It also provides examples on how to create a no encryption data element.

user@server$ pim create dataelements no-enc --help
Usage: pim create dataelements no-enc [OPTIONS]

  Creates a new No-Enc data element.

  EXAMPLES:

  # Create basic no-encryption element for testing
  pim create dataelements no-enc --name "TestNoEnc" --description "Test data element with no encryption"

Options:
  --name TEXT         The name for the data element.  [required]
  --description TEXT  An optional description for the data element.
  --help              Show this message and exit.
Create Dataelements sha1 hmac enc

The following command shows how to access help for the create dataelements sha1-hmac-enc command. It also provides examples on how to create a SHA1-HMAC-ENC data element.

Note: The SHA1-HMAC-ENC data element is deprecated.

user@server$ pim create dataelements sha1-hmac-enc --help
Usage: pim create dataelements sha1-hmac-enc [OPTIONS]

  Creates a new SHA1-HMAC-ENC data element (deprecated).

  EXAMPLES:

  # Create basic SHA1-HMAC encryption (legacy support)
  pim create dataelements sha1-hmac-enc --name "LegacyHashEnc" --description "SHA1 HMAC for legacy system compatibility"

Options:
  --name TEXT         The name for the data element.  [required]
  --description TEXT  An optional description for the data element.
  --help              Show this message and exit.
Create Dataelements sha256 hmac enc

The following command shows how to access help for the create dataelements sha256-hmac-enc command. It also provides examples on how to create a SHA256-HMAC-ENC data element.

user@server$ pim create dataelements sha256-hmac-enc --help
Usage: pim create dataelements sha256-hmac-enc [OPTIONS]

  Creates a new SHA256-HMAC-ENC data element.

  EXAMPLES:

  # Create basic SHA256-HMAC encryption
  pim create dataelements sha256-hmac-enc --name "SecureHashEnc" --description "Strong SHA256 HMAC encryption"

Options:
  --name TEXT         The name for the data element.  [required]
  --description TEXT  An optional description for the data element.
  --help              Show this message and exit.
Create Dataelements token

The following command shows how to access help for the create dataelements token command. It also provides examples on how to create a token data element.

user@server$ pim create dataelements token --help
Usage: pim create dataelements token [OPTIONS] COMMAND [ARGS]...

  Creates a new token data element of a specific type.

  AVAILABLE TOKEN TYPES:

  - numeric              # Numeric data tokenization (0-9)
  - alphabetic           # Alphabetic data tokenization (a-z, A-Z)
  - alpha-numeric        # Alphanumeric tokenization (0-9, a-z, A-Z)
  - printable            # Printable ASCII characters
  - unicode              # Unicode character tokenization
  - unicode-base64       # Base64 encoded Unicode tokens
  - unicode-gen2         # Generation 2 Unicode tokens with custom alphabets
  - binary               # Binary data tokenization
  - lower-ascii          # Lowercase ASCII tokenization
  - upper-alphabetic     # Uppercase alphabetic tokens
  - upper-alpha-numeric  # Uppercase alphanumeric tokens

  # Specialized Token Types:
  - credit-card          # Credit card number tokenization
  - email                # Email address tokenization
  - integer              # Integer value tokenization
  - decimal              # Decimal number tokenization
  - date-yyyymmdd        # Date in YYYY-MM-DD format
  - date-ddmmyyyy        # Date in DD-MM-YYYY format
  - date-mmddyyyy        # Date in MM-DD-YYYY format
  - date-time            # Date and time tokenization

  COMMON OPTIONS:

  --tokenizer            # Lookup table type (SLT_1_3, SLT_2_3, SLT_1_6, SLT_2_6)
  --from-left            # Characters to keep in clear from left
  --from-right           # Characters to keep in clear from right
  --length-preserving    # Maintain original data length
  --allow-short          # Handle short input data (YES, NO, ERROR)

Options:
  --help  Show this message and exit.

Commands:
  alpha-numeric        Creates a new Alpha Numeric Token data element.
  alphabetic           Creates a new Alphabetic Token data element.
  binary               Creates a new Binary Token data element.
  credit-card          Creates a new Credit Card Token data element.
  date-ddmmyyyy        Creates a new Date DDMMYYYY Token data element.
  date-mmddyyyy        Creates a new Date MMDDYYYY Token data element.
  date-time            Creates a new Date Time Token data element.
  date-yyyymmdd        Creates a new Date YYYYMMDD Token data element.
  decimal              Creates a new Decimal Token data element.
  email                Creates a new Email Token data element.
  integer              Creates a new Integer Token data element.
  lower-ascii          Creates a new Lower ASCII Token data element.
  numeric              Creates a new Numeric Token data element.
  printable            Creates a new Printable Token data element.
  unicode              Creates a new Unicode Token data element.
  unicode-base64       Creates a new Unicode Base64 Token data element.
  unicode-gen2         Creates a new Unicode Gen2 Token data element.
  upper-alpha-numeric  Creates a new Upper Alpha Numeric Token data element.
  upper-alphabetic     Creates a new Upper Alphabetic Token data element.
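As a sketch of how the COMMON OPTIONS above combine, the following invocation creates a numeric token element that keeps the first digit and last four digits in clear while preserving length. The element name and description are illustrative, not part of the product; the flags are those listed in the help output above.

```shell
# Illustrative sketch only: combines --from-left, --from-right, and
# --length-preserving from the COMMON OPTIONS listed above.
# "PhonePartialToken" is a hypothetical element name.
pim create dataelements token numeric \
  --name "PhonePartialToken" \
  --description "Phone tokenization with first 1 and last 4 digits in clear" \
  --tokenizer "SLT_1_6" \
  --from-left 1 \
  --from-right 4 \
  --length-preserving
```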
Create Dataelements token alpha numeric

The following command shows how to access help for the create dataelements token alpha-numeric command. It also provides examples on how to create an alpha-numeric token data element.

user@server$ pim create dataelements token alpha-numeric --help
Usage: pim create dataelements token alpha-numeric [OPTIONS]

  Creates a new Alpha Numeric Token data element.

  EXAMPLES:

  # Create for reference codes
  pim create dataelements token alpha-numeric --name "RefCodeToken" --description "Reference code alphanumeric tokenization" --tokenizer "SLT_1_3" --from-left 2 --allow-short NOWITHERROR

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3|SLT_1_6|SLT_2_6]
                                  The lookup tables to be generated.
                                  [required]
  --from-left INTEGER             Number of characters to keep in clear from
                                  the left.
  --from-right INTEGER            Number of characters to keep in clear from
                                  the right.
  --length-preserving             Specifies whether the output must be of the
                                  same length as the input.
  --allow-short [YES|NOINPUTVALUE|NOWITHERROR]
                                  Allow short tokens.
  --help                          Show this message and exit.
Create Dataelements token alphabetic

The following command shows how to access help for the create dataelements token alphabetic command. It also provides examples on how to create an alphabetic token data element.

user@server$ pim create dataelements token alphabetic --help
Usage: pim create dataelements token alphabetic [OPTIONS]

  Creates a new Alphabetic Token data element.

  EXAMPLES:

  # Create length-preserving alphabetic token
  pim create dataelements token alphabetic --name "ExactLengthAlpha" --description "Length-preserving alphabetic token" --tokenizer "SLT_2_3" --length-preserving

  # Create for name tokenization with short value support
  pim create dataelements token alphabetic --name "NameToken" --description "Name tokenization with short support" --tokenizer "SLT_2_3" --allow-short YES --length-preserving

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3]   The lookup tables to be generated.
                                  [required]
  --from-left INTEGER             Number of characters to keep in clear from
                                  the left.
  --from-right INTEGER            Number of characters to keep in clear from
                                  the right.
  --length-preserving             Specifies whether the output must be of the
                                  same length as the input.
  --allow-short [YES|NOINPUTVALUE|NOWITHERROR]
                                  Allow short tokens.
  --help                          Show this message and exit.
Create Dataelements token binary

The following command shows how to access help for the create dataelements token binary command. It also provides examples on how to create a binary token data element.

user@server$ pim create dataelements token binary --help
Usage: pim create dataelements token binary [OPTIONS]

  Creates a new Binary Token data element.

  EXAMPLES:

  # Create basic binary tokenization
  pim create dataelements token binary --name "BinaryToken" --description "Binary data tokenization" --tokenizer "SLT_1_3"

Options:
  --name TEXT                    The name for the data element.  [required]
  --description TEXT             An optional description for the data element.
  --tokenizer [SLT_1_3|SLT_2_3]  The lookup tables to be generated.
                                 [required]
  --from-left INTEGER            Number of characters to keep in clear from
                                 the left.
  --from-right INTEGER           Number of characters to keep in clear from
                                 the right.
  --help                         Show this message and exit.
Create Dataelements token credit card

The following command shows how to access help for the create dataelements token credit-card command. It also provides examples on how to create a credit card token data element.

user@server$ pim create dataelements token credit-card --help
Usage: pim create dataelements token credit-card [OPTIONS]

  Creates a new Credit Card Token data element.

  EXAMPLES:

  # Create basic credit card tokenization
  pim create dataelements token credit-card --name "CCTokenBasic" --description "Basic credit card tokenization" --tokenizer "SLT_1_6"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3|SLT_1_6|SLT_2_6]
                                  The lookup tables to be generated.
                                  [required]
  --from-left INTEGER             Number of characters to keep in clear from
                                  the left.
  --from-right INTEGER            Number of characters to keep in clear from
                                  the right.
  --invalid-card-type             Token values will not begin with digits that
                                  real credit card numbers begin with.
  --invalid-luhn-digit            Validate Luhn checksum (requires valid
                                  credit cards as input).
  --alphabetic-indicator          Include one alphabetic character in the
                                  token.
  --alphabetic-indicator-position INTEGER
                                  Position for the alphabetic indicator
                                  (required when alphabetic-indicator is
                                  enabled).
  --help                          Show this message and exit.
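The credit card flags above can be combined for a common pattern: keep the last four digits in clear and use an invalid Luhn digit so tokens cannot pass card-number validation. The sketch below is illustrative; the element name is hypothetical, and note that per the help text --invalid-luhn-digit requires valid credit card numbers as input.

```shell
# Illustrative sketch only: keeps the last 4 digits in clear and enables
# --invalid-luhn-digit (per the help text, this requires valid card
# numbers as input). "CCTokenLast4" is a hypothetical element name.
pim create dataelements token credit-card \
  --name "CCTokenLast4" \
  --description "Card tokenization with last 4 digits in clear" \
  --tokenizer "SLT_1_6" \
  --from-right 4 \
  --invalid-luhn-digit
```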
Create Dataelements token date ddmmyyyy

The following command shows how to access help for the create dataelements token date-ddmmyyyy command. It also provides examples on how to create a DDMMYYYY date token data element.

user@server$ pim create dataelements token date-ddmmyyyy --help
Usage: pim create dataelements token date-ddmmyyyy [OPTIONS]

  Creates a new Date DDMMYYYY Token data element.

  EXAMPLES:

  # Create basic DDMMYYYY date tokenization
  pim create dataelements token date-ddmmyyyy --name "DateDDMMYYYY" --description "European date format DD-MM-YYYY tokenization" --tokenizer "SLT_1_3"

  # Create for compliance reporting dates
  pim create dataelements token date-ddmmyyyy --name "ComplianceDate" --description "Compliance reporting DD-MM-YYYY dates" --tokenizer "SLT_2_3"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3|SLT_1_6|SLT_2_6]
                                  The lookup tables to be generated.
                                  [required]
  --help                          Show this message and exit.
Create Dataelements token date mmddyyyy

The following command shows how to access help for the create dataelements token date-mmddyyyy command. It also provides examples on how to create a MMDDYYYY date token data element.

user@server$ pim create dataelements token date-mmddyyyy --help
Usage: pim create dataelements token date-mmddyyyy [OPTIONS]

  Creates a new Date MMDDYYYY Token data element.

  EXAMPLES:

  # Create for financial reporting dates
  pim create dataelements token date-mmddyyyy --name "FinancialReportDate" --description "Financial reporting MM-DD-YYYY format" --tokenizer "SLT_2_3"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3|SLT_1_6|SLT_2_6]
                                  The lookup tables to be generated.
                                  [required]
  --help                          Show this message and exit.
Create Dataelements token date time

The following command shows how to access help for the create dataelements token date-time command. It also provides examples on how to create a date-time token data element.

user@server$ pim create dataelements token date-time --help
Usage: pim create dataelements token date-time [OPTIONS]

  Creates a new Date Time Token data element.

  EXAMPLES:

  # Create basic date-time tokenization
  pim create dataelements token date-time --name "DateTimeToken" --description "Basic date-time tokenization" --tokenizer "SLT_8_DATETIME"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_8_DATETIME]    The lookup tables to be generated.
                                  [required]
  --tokenize-time                 Whether to tokenize time (HH:MM:SS).
  --distinguishable-date          Whether date tokens should be
                                  distinguishable from real dates.
  --date-in-clear [NONE|YEAR|MONTH]
                                  Which date parts to keep in clear.
  --help                          Show this message and exit.
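As a sketch combining the date-time options above, the following hypothetical invocation tokenizes both the date and the time portion while leaving the year in clear. The element name and description are illustrative; the flags are those documented in the help output above.

```shell
# Illustrative sketch only: tokenizes the time (HH:MM:SS) as well as the
# date, keeping the year part in clear.
# "EventTimestampToken" is a hypothetical element name.
pim create dataelements token date-time \
  --name "EventTimestampToken" \
  --description "Date-time tokenization with year in clear" \
  --tokenizer "SLT_8_DATETIME" \
  --tokenize-time \
  --date-in-clear YEAR
```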
Create Dataelements token date yyyymmdd

The following command shows how to access help for the create dataelements token date-yyyymmdd command. It also provides examples on how to create a YYYYMMDD date token data element.

user@server$ pim create dataelements token date-yyyymmdd --help
Usage: pim create dataelements token date-yyyymmdd [OPTIONS]

  Creates a new Date YYYYMMDD Token data element.

  EXAMPLES:

  # Create basic YYYYMMDD date tokenization
  pim create dataelements token date-yyyymmdd --name "DateYYYYMMDD" --description "Date tokenization in YYYY-MM-DD format" --tokenizer "SLT_1_3"

  # Create for event date tracking
  pim create dataelements token date-yyyymmdd --name "EventDateToken" --description "Event date in YYYY-MM-DD format" --tokenizer "SLT_2_3"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3|SLT_1_6|SLT_2_6]
                                  The lookup tables to be generated.
                                  [required]
  --help                          Show this message and exit.
Create Dataelements token decimal

The following command shows how to access help for the create dataelements token decimal command. It also provides examples on how to create a decimal token data element.

user@server$ pim create dataelements token decimal --help
Usage: pim create dataelements token decimal [OPTIONS]

  Creates a new Decimal Token data element.

  EXAMPLES:

  # Create basic decimal tokenization for amounts
  pim create dataelements token decimal --name "DecimalToken" --description "Financial decimal amount tokenization" --tokenizer "SLT_6_DECIMAL" --max-length 15

Options:
  --name TEXT                  The name for the data element.  [required]
  --description TEXT           An optional description for the data element.
  --tokenizer [SLT_6_DECIMAL]  The lookup tables to be generated.  [required]
  --min-length INTEGER         Minimum length of the token element that can be
                               protected.
  --max-length INTEGER         Maximum length of the token element that can be
                               protected (max 38).  [required]
  --help                       Show this message and exit.
Create Dataelements token email

The following command shows how to access help for the create dataelements token email command. It also provides examples on how to create an email token data element.

user@server$ pim create dataelements token email --help
Usage: pim create dataelements token email [OPTIONS]

  Creates a new Email Token data element.

  EXAMPLES:

  # Create basic email tokenization
  pim create dataelements token email --name "EmailTokenBasic" --description "Basic email tokenization" --tokenizer "SLT_1_3"  --allow-short NOWITHERROR

  # Create email tokenization with error on short input
  pim create dataelements token email --name "EmailTokenError" --description "Email tokenization with short input errors" --tokenizer "SLT_1_3" --length-preserving  --allow-short NOWITHERROR

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3]   The lookup tables to be generated.
                                  [required]
  --length-preserving             Specifies whether the output must be of the
                                  same length as the input.
  --allow-short [YES|NOINPUTVALUE|NOWITHERROR]
                                  Allow short tokens.
  --help                          Show this message and exit.
Create Dataelements token integer

The following command shows how to access help for the create dataelements token integer command. It also provides examples on how to create an integer token data element.

user@server$ pim create dataelements token integer --help
Usage: pim create dataelements token integer [OPTIONS]

  Creates a new Integer Token data element.

  EXAMPLES:

  # Create basic integer tokenization (default 4-byte)
  pim create dataelements token integer --name "IntegerToken" --description "Basic integer tokenization" --tokenizer "SLT_1_3"

  # Create short integer tokenization for small numbers
  pim create dataelements token integer --name "ShortIntegerToken" --description "Short integer (2-byte) tokenization" --tokenizer "SLT_1_3" --integer-size "SHORT"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3]           The lookup tables to be generated.
                                  [required]
  --integer-size [SHORT|INT|LONG]
                                  Integer size: 2 bytes (SHORT), 4 bytes
                                  (INT), or 8 bytes (LONG).
  --help                          Show this message and exit.
Create Dataelements token lower ascii

The following command shows how to access help for the create dataelements token lower-ascii command. It also provides examples on how to create a lower-ascii token data element.

user@server$ pim create dataelements token lower-ascii --help
Usage: pim create dataelements token lower-ascii [OPTIONS]

  Creates a new Lower ASCII Token data element.

  EXAMPLES:

  # Create strict ASCII tokenization (error on short input)
  pim create dataelements token lower-ascii --name "StrictAsciiToken" --description "Strict ASCII tokenization" --tokenizer "SLT_1_3" --allow-short "NOWITHERROR"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3]           The lookup tables to be generated.
                                  [required]
  --from-left INTEGER             Number of characters to keep in clear from
                                  the left.
  --from-right INTEGER            Number of characters to keep in clear from
                                  the right.
  --length-preserving             Specifies whether the output must be of the
                                  same length as the input.
  --allow-short [YES|NOINPUTVALUE|NOWITHERROR]
                                  Allow short tokens.
  --help                          Show this message and exit.
Create Dataelements token numeric

The following command shows how to access help for the create dataelements token numeric command. It also provides examples on how to create a numeric token data element.

user@server$ pim create dataelements token numeric --help
Usage: pim create dataelements token numeric [OPTIONS]

  Creates a new Numeric Token data element.

  EXAMPLES:

  # Create basic numeric token for SSN
  pim create dataelements token numeric --name "SSNToken" --description "Social Security Number tokenization" --tokenizer "SLT_1_6" --length-preserving

  # Create high-security token for financial data
  pim create dataelements token numeric --name "FinancialToken" --description "Financial account tokenization" --tokenizer "SLT_2_6" --length-preserving --allow-short "NOWITHERROR"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3|SLT_1_6|SLT_2_6]
                                  The lookup tables to be generated.
                                  [required]
  --from-left INTEGER             Number of characters to keep in clear from
                                  the left.
  --from-right INTEGER            Number of characters to keep in clear from
                                  the right.
  --length-preserving             Specifies whether the output must be of the
                                  same length as the input.
  --allow-short [YES|NOINPUTVALUE|NOWITHERROR]
                                  Allow short tokens.
  --help                          Show this message and exit.
Create Dataelements token printable

The following command shows how to access help for the create dataelements token printable command. It also provides examples on how to create a printable token data element.

user@server$ pim create dataelements token printable --help
Usage: pim create dataelements token printable [OPTIONS]

  Creates a new Printable Token data element.

  EXAMPLES:

  # Create length-preserving printable token
  pim create dataelements token printable --name "ExactLengthPrintable" --description "Length-preserving printable tokenization" --tokenizer "SLT_1_3" --length-preserving

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3]           The lookup tables to be generated.
                                  [required]
  --from-left INTEGER             Number of characters to keep in clear from
                                  the left.
  --from-right INTEGER            Number of characters to keep in clear from
                                  the right.
  --length-preserving             Specifies whether the output must be of the
                                  same length as the input.
  --allow-short [YES|NOINPUTVALUE|NOWITHERROR]
                                  Allow short tokens.
  --help                          Show this message and exit.
Create Dataelements token unicode

The following command shows how to access help for the create dataelements token unicode command. It also provides examples on how to create a Unicode token data element.

user@server$ pim create dataelements token unicode --help
Usage: pim create dataelements token unicode [OPTIONS]

  Creates a new Unicode Token data element.

  EXAMPLES:

  # Create with short value support for names
  pim create dataelements token unicode --name "IntlNameToken" --description "International name tokenization" --tokenizer "SLT_2_3" --allow-short "YES"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3]   The lookup tables to be generated.
                                  [required]
  --allow-short [NOWITHERROR|YES|NOINPUTVALUE]
                                  Allow short tokens.
  --help                          Show this message and exit.
Create Dataelements token unicode base64

The following command shows how to access help for the create dataelements token unicode-base64 command. It also provides examples on how to create a Unicode Base64 token data element.

user@server$ pim create dataelements token unicode-base64 --help
Usage: pim create dataelements token unicode-base64 [OPTIONS]

  Creates a new Unicode Base64 Token data element.

  EXAMPLES:

  # Create basic Unicode Base64 tokenization
  pim create dataelements token unicode-base64 --name "UnicodeBase64Token" --description "Base64 encoded Unicode tokenization" --tokenizer "SLT_1_3"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3|SLT_1_6|SLT_2_6]
                                  The lookup tables to be generated.
                                  [required]
  --help                          Show this message and exit.
Create Dataelements token unicode gen2

The following command shows how to access help for the create dataelements token unicode-gen2 command. It also provides examples on how to create a Unicode Gen2 token data element.

user@server$ pim create dataelements token unicode-gen2 --help
Usage: pim create dataelements token unicode-gen2 [OPTIONS]

  Creates a new Unicode Gen2 Token data element.

  EXAMPLES:

  # Create basic Unicode Gen2 token with custom alphabet
  pim create dataelements token unicode-gen2 --name "UnicodeGen2Token" --description "Unicode Gen2 with custom alphabet" --tokenizer "SLT_1_3" --alphabet-uid "1"

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_X_1]   The lookup tables to be generated.
                                  [required]
  --alphabet-uid TEXT             The UID of the alphabet to use for
                                  tokenization.  [required]
  --from-left INTEGER             Number of characters to keep in clear from
                                  the left.
  --from-right INTEGER            Number of characters to keep in clear from
                                  the right.
  --length-preserving             Specifies whether the output must be of the
                                  same length as the input.
  --allow-short [YES|NOINPUTVALUE|NOWITHERROR]
                                  Allow short tokens.
  --default-encoding TEXT         Default encoding (kept for backwards
                                  compatibility).
  --help                          Show this message and exit.
Create Dataelements token upper alpha numeric

The following command shows how to access help for the create dataelements token upper-alpha-numeric command. It also provides examples on how to create an upper alpha-numeric token data element.

user@server$ pim create dataelements token upper-alpha-numeric --help
Usage: pim create dataelements token upper-alpha-numeric 
           [OPTIONS]

  Creates a new Upper Alpha Numeric Token data element.

  EXAMPLES:

  # Create for product codes
  pim create dataelements token upper-alpha-numeric --name "ProductCodeToken" --description "Product code uppercase tokenization" --tokenizer "SLT_1_3" --from-left 2 --length-preserving

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3]   The lookup tables to be generated.
                                  [required]
  --from-left INTEGER             Number of characters to keep in clear from
                                  the left.
  --from-right INTEGER            Number of characters to keep in clear from
                                  the right.
  --length-preserving             Specifies whether the output must be of the
                                  same length as the input.
  --allow-short [YES|NOINPUTVALUE|NOWITHERROR]
                                  Allow short tokens.
  --help                          Show this message and exit.
Create Dataelements token upper alphabetic

The following command shows how to access help for the create dataelements token upper-alphabetic command. It also provides examples on how to create an upper alphabetic token data element.

user@server$ pim create dataelements token upper-alphabetic --help
Usage: pim create dataelements token upper-alphabetic [OPTIONS]

  Creates a new Upper Alphabetic Token data element.

  EXAMPLES:

  # Create for organization names with short support
  pim create dataelements token upper-alphabetic --name "OrgNameToken" --description "Organization name tokenization"  --tokenizer "SLT_2_3" --allow-short "NOINPUTVALUE" --length-preserving

Options:
  --name TEXT                     The name for the data element.  [required]
  --description TEXT              An optional description for the data
                                  element.
  --tokenizer [SLT_1_3|SLT_2_3]   The lookup tables to be generated.
                                  [required]
  --from-left INTEGER             Number of characters to keep in clear from
                                  the left.
  --from-right INTEGER            Number of characters to keep in clear from
                                  the right.
  --length-preserving             Specifies whether the output must be of the
                                  same length as the input.
  --allow-short [YES|NOINPUTVALUE|NOWITHERROR]
                                  Allow short tokens.
  --help                          Show this message and exit.

Create Datastores

The following command shows how to access help for the create datastores command. It also provides examples on how to create a datastore resource.

user@server$ pim create datastores --help
Usage: pim create datastores [OPTIONS] COMMAND [ARGS]...

  Commands for creating datastore resources.

Options:
  --help  Show this message and exit.

Commands:
  datastore  Creates a new datastore with the specified name and configuration.
  key        Creates and exports a datastore key for secure data operations.
  range      Adds an IP address range to a datastore for network access control.

Create Datastores Types

The following commands show how to access help for the create datastores <type> commands. They also provide examples on how to manage datastore resources.

Create Datastores Datastore

The following command shows how to access help for the create datastores datastore command. It also provides examples on how to create a datastore.

user@server$ pim create datastores datastore --help
Usage: pim create datastores datastore [OPTIONS]

  Creates a new datastore with the specified name and configuration.

  Datastores represent physical or logical storage systems that host protected
  data. They define where data protection policies are applied and provide the
  foundation for implementing encryption, tokenization, and access controls.

  EXAMPLES:

  # Create a simple datastore for development
  pim create datastores datastore --name "dev-database" --description "Development PostgreSQL database"

  # Create production datastore with detailed description
  pim create datastores datastore --name "prod-customer-db" --description "Production customer data warehouse with PII protection"

  # Create datastore and set as default
  pim create datastores datastore --name "primary-db" --description "Primary application database" --default

  WORKFLOW:
  # Step 1: Plan your datastore configuration
  # - Choose descriptive name for identification
  # - Decide if this should be the default datastore

  # Step 2: Create the datastore
  pim create datastores datastore --name <name> --description <description> [--default]

  # Step 3: Configure IP ranges and access controls
  pim create datastores range <datastore-uid> --from <start-ip> --to <end-ip>

  # Step 4: Set up encryption keys if needed
  pim create datastores key <datastore-uid> --name <key-name>

Options:
  --name TEXT         Name of the datastore.  [required]
  --description TEXT  Description for the datastore.
  --default           Set this datastore as the default.
  --help              Show this message and exit.

Create Datastores Key

The following command shows how to access help for the create datastores key command. It also provides examples on how to export a datastore key.

user@server$ pim create datastores key --help
Usage: pim create datastores key [OPTIONS] DATASTORE_UID

  Creates and exports a datastore key for secure data operations.

  EXAMPLES:

  # Create RSA export key for datastore
  pim create datastores key 15 --algorithm "RSA-OAEP-512" --description "export key" --pem "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQ...\n-----END PUBLIC KEY-----"

  WORKFLOW:
  # Step 1: Generate a key pair (outside of PIM)
  openssl genrsa -out private_key.pem 2048
  openssl rsa -in private_key.pem -pubout -out public_key.pem

  # Step 2: Prepare the PEM content (escape newlines for command line)
  awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' public_key.pem

  # Step 3: Create the export key in PIM
  pim create datastores key <datastore-uid> --algorithm <algorithm> --description <description> --pem <pem-content>

  # Step 4: Verify the key was created
  pim get datastores keys <datastore-uid>

Options:
  --algorithm [RSA-OAEP-256|RSA-OAEP-512]
                                  Algorithm for the key.  [required]
  --description TEXT              Description of the key.
  --pem TEXT                      PEM formatted public key.  [required]
  --help                          Show this message and exit.
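
The PEM value passed to --pem must be a single line with literal \n separators, which is what the awk step in the workflow above produces. As a cross-check, here is a minimal Python sketch (not part of the pim tooling; the key body is a placeholder) that performs the same flattening:

```python
def flatten_pem(pem_text: str) -> str:
    """Collapse a multi-line PEM block into a single line whose rows are
    separated by a literal backslash-n, dropping carriage returns and
    blank lines (the same effect as the awk step above)."""
    lines = [ln.rstrip("\r") for ln in pem_text.splitlines() if ln.strip()]
    return "\\n".join(lines) + "\\n"

# Hypothetical key body, for illustration only.
pem = "-----BEGIN PUBLIC KEY-----\nFAKEKEY\n-----END PUBLIC KEY-----\n"
print(flatten_pem(pem))
# -----BEGIN PUBLIC KEY-----\nFAKEKEY\n-----END PUBLIC KEY-----\n
```

The printed value can then be pasted directly as the --pem argument, as in the RSA example above.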

Create Datastores Range

The following command shows how to access help for the create datastores range command. It also provides examples on how to add a range of IP addresses to a datastore.

user@server$ pim create datastores range --help
Usage: pim create datastores range [OPTIONS] DATASTORE_UID

  Adds an IP address range to a datastore for network access control.

  IP ranges define which network addresses are allowed to access the
  datastore. This provides network-level security by restricting datastore
  access to specific IP addresses or CIDR blocks.

  EXAMPLES:

  # Add single IP address access
  pim create datastores range 15 --from "192.168.1.100" --to "192.168.1.100"

  # Add corporate network access range
  pim create datastores range <datastore-uid> --from "10.0.0.1" --to "10.0.255.255"

  WORKFLOW:
  # Step 1: Get datastore UID
  pim get datastores datastore

  # Step 2: Plan your IP range requirements
  # - Identify source networks that need access
  # - Define start and end IP addresses

  # Step 3: Create the IP range
  pim create datastores range <datastore-uid> --from <start-ip> --to <end-ip>

  # Step 4: Verify the range was created
  pim get datastores ranges <datastore-uid>

Options:
  --from TEXT  Start IP address of the range.  [required]
  --to TEXT    End IP address of the range.  [required]
  --help       Show this message and exit.
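
Because --from and --to are plain text options, nothing prevents a reversed or malformed range from being submitted. A small pre-flight check can catch this before Step 3 of the workflow above; this is a sketch using Python's standard ipaddress module, not part of pim:

```python
import ipaddress

def valid_range(from_ip: str, to_ip: str) -> bool:
    """Return True when both addresses parse, are the same IP version,
    and from_ip is not greater than to_ip."""
    try:
        start = ipaddress.ip_address(from_ip)
        end = ipaddress.ip_address(to_ip)
    except ValueError:
        return False
    return start.version == end.version and int(start) <= int(end)

print(valid_range("10.0.0.1", "10.0.255.255"))      # True
print(valid_range("192.168.1.100", "192.168.1.100"))  # True (single address)
print(valid_range("10.0.255.255", "10.0.0.1"))      # False (reversed)
```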

Create Deploy

The following command shows how to access help for the create deploy command. It also provides examples on how to deploy policies, trusted applications, or both to a datastore.

user@server$ pim create deploy --help
Usage: pim create deploy [OPTIONS]

  Deploys policies and/or trusted applications to a data store.

  Creates a deployment that pushes data protection policies and trusted
  application configurations to the specified datastore.

  EXAMPLES:

  # Deploy single policy to a datastore
  pim create deploy --data-store-uid 15 --policy-uids 1

  # Deploy multiple policies to a datastore
  pim create deploy --data-store-uid 15 --policy-uids 1 --policy-uids 2 --policy-uids 3

  # Deploy trusted applications to grant access
  pim create deploy --data-store-uid 15 --trusted-application-uids 1 --trusted-application-uids 2

  # Deploy both policies and applications together
  pim create deploy --data-store-uid 15 --policy-uids 1 --policy-uids 2 --trusted-application-uids 1 --trusted-application-uids 2

  WORKFLOW:
  # Step 1: Verify datastore exists and is accessible
  pim get datastores datastore <data-store-uid>

  # Step 2: List available policies and applications
  pim get policies policy
  pim get applications application

  # Step 3: Deploy to a datastore
  pim create deploy --data-store-uid <datastore-uid> --policy-uids <policy-uid> --trusted-application-uids <app-uid>

  # Step 4: Verify deployment was successful
  pim get deploy

Options:
  --data-store-uid TEXT                UID of the data store to deploy.  [required]
  --policy-uids TEXT                   UIDs of the policies to deploy.
  --trusted-application-uids TEXT      UIDs of the trusted applications to deploy.
  --help                               Show this message and exit.
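
Since --policy-uids and --trusted-application-uids are repeated once per UID, scripted deployments typically build the argument list from collections. A minimal sketch of that (a hypothetical helper, assuming only the flag-repetition behavior shown in the examples above):

```python
def deploy_args(data_store_uid, policy_uids=(), app_uids=()):
    """Build the pim argument list for a deployment, repeating
    --policy-uids and --trusted-application-uids once per UID."""
    args = ["pim", "create", "deploy", "--data-store-uid", str(data_store_uid)]
    for uid in policy_uids:
        args += ["--policy-uids", str(uid)]
    for uid in app_uids:
        args += ["--trusted-application-uids", str(uid)]
    return args

print(" ".join(deploy_args(15, policy_uids=[1, 2], app_uids=[1])))
# pim create deploy --data-store-uid 15 --policy-uids 1 --policy-uids 2 --trusted-application-uids 1
```

The resulting list can be passed to subprocess.run rather than joined into a string, which avoids shell-quoting issues.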

Create Masks

The following command shows how to access help for the create masks command. It also provides examples on how to create a mask.

user@server$ pim create masks --help
Usage: pim create masks [OPTIONS]

  Creates a new mask with specified masking pattern and configuration.

  EXAMPLES:

  # Create mask for credit card numbers (show last 4 digits)
  pim create masks --name "credit-card-mask" --description "Mask credit card showing last 4 digits" --from-left 0 --from-right 4 --character "*"

  MASKING PATTERNS:

  Credit Card Masking (****-****-****-1234):
  --from-left 0 --from-right 4 --character "*"

  Email Masking (j***@example.com):
  --from-left 1 --from-right 0 --character "*"

  Full Masking (***********):
  --from-left 0 --from-right 0 --character "*" --masked

Options:
  --name TEXT           The name for the mask.  [required]
  --description TEXT    An optional description for the mask.
  --from-left INTEGER   Number of characters to be masked or kept in clear
                        from the left.  [required]
  --from-right INTEGER  Number of characters to be masked or kept in clear
                        from the right.  [required]
  --masked              Specifies whether the left and right characters should
                        be masked or kept in clear.
  --character TEXT      Specifies the mask character (*,#,-,0,1,2,3,4,5,6,7,8,
                        or 9).  [required]
  --help                Show this message and exit.
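
To make the MASKING PATTERNS above concrete, the following sketch approximates the default behavior, where the --from-left and --from-right character counts are kept in clear and the remainder is replaced by the mask character. This is an illustration only, not Protegrity's implementation, and it ignores the --masked inversion:

```python
def apply_mask(value: str, from_left: int, from_right: int, character: str = "*") -> str:
    """Keep from_left/from_right characters in clear and mask the middle.
    Returns the value unchanged when the clear regions cover it entirely."""
    if from_left + from_right >= len(value):
        return value
    middle = character * (len(value) - from_left - from_right)
    return value[:from_left] + middle + value[len(value) - from_right:]

print(apply_mask("4111111111111234", 0, 4))  # ************1234
print(apply_mask("john", 1, 0))              # j***
```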

Create Policies

The following command shows how to access help for the create policies command. It also provides examples on how to create a policy.

user@server$ pim create policies --help
Usage: pim create policies [OPTIONS] COMMAND [ARGS]...

  Creates a new policy or rule.

Options:
  --help  Show this message and exit.

Commands:
  policy  Creates a new data protection policy with specified access permissions.
  rules   Creates multiple rules and adds them to a policy in bulk.

Create Policies Types

The following commands show how to access help for the create policies <type> commands. They also provide examples on how to manage policy resources.

Create Policies Policy

The following command shows how to access help for the create policies policy command. It also provides examples on how to create a policy.

Important: You must provide a description when creating a policy. If the description is missing, the pim get policies command fails.

user@server$ pim create policies policy --help
Usage: pim create policies policy [OPTIONS]

  Creates a new data protection policy with specified access permissions.

  EXAMPLES:

  # Create basic policy with all protection operations enabled
  pim create policies policy --name "full-protection-policy" --description "Complete data protection with all operations" --protect --re-protect --un-protect

  # Create read-only policy (no protection operations)
  pim create policies policy --name "read-only-policy" --description "Read-only access without protection operations"

Options:
  --name TEXT         Name of the policy.  [required]
  --description TEXT  Description of the policy.  [required]
  --protect           Allow protect operation.
  --re-protect        Allow re-protect operation.
  --un-protect        Allow un-protect operation.
  --help              Show this message and exit.

Create Policies Rules

The following command shows how to access help for the create policies rules command. It also provides examples on how to create multiple rules and add them to a policy.

user@server$ pim create policies rules --help
Usage: pim create policies rules [OPTIONS] POLICY_UID

  Creates multiple rules and adds them to a policy in bulk.

  Rules define the mapping between roles and data elements with specific
  protection methods and access permissions. Each rule specifies how a role
  can access a data element, what masking to apply, and which protection
  operations are allowed.

  RULE FORMAT: role_uid,data_element_uid[,mask][,no_access_operation][,protect
  ][,re_protect][,un_protect]

  EXAMPLES:

  # Create rules for different roles accessing PII data elements
  pim create policies rules 15  --rule "1,3,1,NULL_VALUE,true,true,true" --rule "3,3,1,PROTECTED_VALUE,false,false,false" --rule "4,2,,NULL_VALUE,true,false,false"

  WORKFLOW:
  # Step 1: Verify policy exists and review its configuration
  pim get policies <policy-uid>

  # Step 2: Identify required roles and data elements
  pim get applications application  # for roles
  pim get dataelements  # for data elements
  pim get masks  # for available masks

  # Step 3: Create rules in bulk
  pim create policies rules <policy-uid> --rule "..." --rule "..." --rule "..."

  # Step 4: Verify rules were created successfully
  pim get policies <policy-uid> --rules

  PARAMETER DESCRIPTIONS:

  role_uid (Required): UID of the role/application that will access data
  - References trusted applications or user roles
  - Must exist in the system before creating rules
  - Determines who can perform operations on data elements

  data_element_uid (Required): UID of the data element 
  - References specific data fields or columns
  - Must exist before creating rules
  - Defines what data is being protected

  mask (Optional): UID of mask to apply for data obfuscation
  - Empty/omitted: No masking applied
  - Must reference existing mask configuration
  - Controls how data appears when accessed

  no_access_operation (Optional, Default: NULL_VALUE):
  - NULL_VALUE: Return null when access denied
  - PROTECTED_VALUE: Return masked/protected format
  - EXCEPTION: Throw exception when access denied

  protect (Optional, Default: false): Allow data protection operations
  - true: Role can encrypt/tokenize/mask data
  - false: Role cannot perform protection operations

  re_protect (Optional, Default: false): Allow data re-protection
  - true: Role can change protection methods/keys
  - false: Role cannot re-protect data

  un_protect (Optional, Default: false): Allow data un-protection
  - true: Role can decrypt/detokenize/unmask data
  - false: Role cannot remove protection

  Examples: --rule "role1,de1,mask1,NULL_VALUE,true,false,false" --rule
  "role2,de2,,EXCEPTION,false,true,true"  --rule "role3,de3"

Options:
  --rule TEXT  Rule specification in format: "role_uid,data_element_uid[,mask]
               [,no_access_operation][,protect][,re_protect][,un_protect]".
               Can be specified multiple times.  [required]
  --help       Show this message and exit.
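
Rule strings in the RULE FORMAT above are easy to get wrong by hand, especially the empty mask slot. A small builder (a hypothetical helper, assuming only the comma-separated format documented above) keeps the field order straight:

```python
def rule_spec(role_uid, data_element_uid, mask=None, no_access="NULL_VALUE",
              protect=False, re_protect=False, un_protect=False):
    """Assemble one --rule value in the documented format:
    role_uid,data_element_uid[,mask][,no_access_operation][,protect][,re_protect][,un_protect].
    An omitted mask becomes an empty field."""
    parts = [str(role_uid), str(data_element_uid),
             "" if mask is None else str(mask),
             no_access,
             str(protect).lower(), str(re_protect).lower(), str(un_protect).lower()]
    return ",".join(parts)

print(rule_spec(1, 3, mask=1, protect=True, re_protect=True, un_protect=True))
# 1,3,1,NULL_VALUE,true,true,true
print(rule_spec(4, 2, protect=True))
# 4,2,,NULL_VALUE,true,false,false
```

Both outputs match the bulk-creation example shown in the help text above.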

Create Roles

The following command shows how to access help for the create roles command. It also provides examples on how to create a role.

user@server$ pim create roles --help
Usage: pim create roles [OPTIONS] COMMAND [ARGS]...

  Creates a new role or adds members to a role.

Options:
  --help  Show this message and exit.

Commands:
  members  Adds members to a role in bulk.
  role     Creates a new role with specified configuration and access mode.

Create Roles Types

The following commands show how to access help for the create roles <type> commands. They also provide examples on how to manage roles.

Create Roles Members

The following command shows how to access help for the create roles members command. It also provides examples on how to add members to a role.

user@server$ pim create roles members --help
Usage: pim create roles members [OPTIONS] ROLE_UID

  Adds members to a role in bulk.

  Members can be individual users or groups from various identity sources.
  This command allows adding multiple members at once with proper validation
  and error handling for each member specification.

  MEMBER FORMAT: name,source,sync_id,type  OR  name,source,type (sync_id
  optional)

  EXAMPLES:

  # Add individual users from LDAP
  pim create roles members 15  --member "john.doe,1,12345,USER" --member "jane.smith,1,67890,USER"

  Examples: --member "john.doe,ldap,12345,USER" --member
  "admin_group,ldap,67890,GROUP" --member "jane.smith,ad,USER"  (sync_id
  omitted)

Options:
  --member TEXT  Member specification in format: "name,source,sync_id,type" or
                 "name,source,type". Can be specified multiple times. Where
                 name is the member name (required, min_length=1), source is
                 the source of the member (required), sync_id is the
                 synchronization ID (optional), and type is the member type
                 (required: USER or GROUP).
  --help         Show this message and exit.
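
The MEMBER FORMAT above allows three or four fields, with sync_id optional. A quick validator (a sketch based only on the format and constraints in the option description above, not part of pim) can check specs before submitting them in bulk:

```python
def parse_member(spec: str):
    """Split a --member value into (name, source, sync_id, type).
    Accepts "name,source,sync_id,type" or "name,source,type";
    raises ValueError on malformed input."""
    parts = spec.split(",")
    if len(parts) == 4:
        name, source, sync_id, mtype = parts
    elif len(parts) == 3:
        name, source, mtype = parts
        sync_id = None
    else:
        raise ValueError(f"expected 3 or 4 fields, got {len(parts)}: {spec!r}")
    if not name:
        raise ValueError("member name is required")
    if mtype not in ("USER", "GROUP"):
        raise ValueError(f"type must be USER or GROUP, got {mtype!r}")
    return name, source, sync_id, mtype

print(parse_member("john.doe,1,12345,USER"))
# ('john.doe', '1', '12345', 'USER')
print(parse_member("jane.smith,ad,USER"))
# ('jane.smith', 'ad', None, 'USER')
```
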

Create Roles Role

The following command shows how to access help for the create roles role command. It also provides examples on how to create a role.

user@server$ pim create roles role --help
Usage: pim create roles role [OPTIONS]

  Creates a new role with specified configuration and access mode.

  EXAMPLES:

  # Create semiautomatic role for project team
  pim create roles role --name "project-alpha-team" --description "Project Alpha mixed access" --mode "SEMIAUTOMATIC"

Options:
  --name TEXT                               Name of the role.  [required]
  --description TEXT                        Description of the role.
  --mode [MANUAL|SEMIAUTOMATIC|AUTOMATIC]   Role mode.  [required]
  --allow-all                               Allow access to all users for this role.
  --help                                    Show this message and exit.

Create Sources

The following command shows how to access help for the create sources command. It also provides examples on how to create a member source.

user@server$ pim create sources --help
Usage: pim create sources [OPTIONS] COMMAND [ARGS]...

  Creates a new source.

Options:
  --help  Show this message and exit.

Commands:
  ad        Creates a new Active Directory source for Windows domain integration.
  azure     Creates a new AZURE AD source for Microsoft cloud identity integration.
  database  Creates a new DATABASE source for relational database user repositories.
  file      Creates a new FILE source for static user and group management.
  ldap      Creates a new LDAP source for directory-based authentication and user management.
  posix     Creates a new POSIX source for Unix/Linux system account integration.

Create Sources Types

The following commands show how to access help for the create sources <type> commands. They also provide examples on how to create a member source of a specific type.

Create Sources Ad

The following command shows how to access help for the create sources ad command. It also provides examples on how to create an Active Directory member source.

user@server$ pim create sources ad --help
Usage: pim create sources ad [OPTIONS]

  Creates a new Active Directory source for Windows domain integration.

  EXAMPLES:

  Note: The following commands use line continuation (\) for readability.
  In practice, run each command as a single line or use your shell's
  line continuation syntax.

  # Create basic AD source with domain controller
  pim create sources ad --name "corporate-ad" --description "Corporate Active Directory" \
      --host "dc1.company.com" --port 389 \
      --user-name "service@company.com" --pass-word "password123" \
      --base-dn "dc=company,dc=com"

Options:
  --name TEXT         Name of the source.  [required]
  --description TEXT  Description of the source.
  --user-name TEXT    Authentication user.
  --pass-word TEXT    Authentication password.
  --host TEXT         The Fully Qualified Domain Name (FQDN) or IP address of
                      the directory server.
  --port INTEGER      The network port on the directory server where the
                      service is listening.
  --tls               The TLS protocol is enabled to create a secure
                      communication to the directory server.
  --base-dn TEXT      The Base DN for the server to search for users.
  --recursive         Enables recursive search for active directory or Azure
                      AD.
  --ldaps             Use LDAPS instead of startTLS.
  --help              Show this message and exit.

Create Sources Azure

The following command shows how to access help for the create sources azure command. It also provides examples on how to create an Azure member source.

user@server$ pim create sources azure --help
Usage: pim create sources azure [OPTIONS]

  Creates a new AZURE AD source for Microsoft cloud identity integration.

  EXAMPLES:

  Note: The following commands use line continuation (\) for readability.
  In practice, run each command as a single line or use your shell's
  line continuation syntax.

  # Create basic Azure AD source for corporate tenant
  pim create sources azure --name "corporate-azure" --description "Corporate Azure AD" \
      --client-id "12345678-1234-1234-1234-123456789012" \
      --tenant-id "87654321-4321-4321-4321-210987654321" \
      --environment "PUBLIC"

  # Create Azure AD source with service principal authentication
  pim create sources azure --name "sp-azure" --description "Service Principal Azure AD" \
      --user-name "service-principal@company.onmicrosoft.com" \
      --pass-word "sp-secret-key" \
      --client-id "app-registration-id" \
      --tenant-id "company-tenant-id" \
      --environment "PUBLIC" --recursive

  # Create Azure Government cloud source
  pim create sources azure --name "gov-azure" --description "Azure Government Cloud" \
      --client-id "gov-app-id" \
      --tenant-id "gov-tenant-id" \
      --environment "USGOVERNMENT" \
      --user-attribute "userPrincipalName" \
      --group-attribute "displayName"

  # Create Azure China cloud source
  pim create sources azure --name "china-azure" --description "Azure China Cloud" \
      --client-id "china-app-id" \
      --tenant-id "china-tenant-id" \
      --environment "CHINA" \
      --recursive

  # Create Azure AD with custom attributes
  pim create sources azure --name "custom-azure" --description "Custom Azure AD Configuration" \
      --client-id "custom-app-id" \
      --tenant-id "custom-tenant-id" \
      --environment "PUBLIC" \
      --user-attribute "mail" \
      --group-attribute "displayName" \
      --group-members-attribute "members" \
      --recursive

  # Create multi-tenant Azure AD source
  pim create sources azure --name "partner-azure" --description "Partner Tenant Azure AD" \
      --client-id "partner-app-id" \
      --tenant-id "partner-tenant-id" \
      --environment "PUBLIC" \
      --user-name "guest@partner.onmicrosoft.com" \
      --pass-word "guest-credentials"

Options:
  --name TEXT                     Name of the source.  [required]
  --description TEXT              Description of the source.
  --user-name TEXT                Authentication user.
  --pass-word TEXT                Authentication password.
  --recursive                     Enables recursive search for active
                                  directory or Azure AD.
  --user-attribute TEXT           The Relative Distinguished Name (RDN)
                                  attribute of the user distinguished name.
  --group-attribute TEXT          The Relative Distinguished Name (RDN)
                                  attribute of the group distinguished name.
  --group-members-attribute TEXT  The attribute that enumerates members of the
                                  group.
  --client-id TEXT                The client id for AZURE AD.
  --tenant-id TEXT                The tenant id for the AZURE AD.
  --environment [CHINA|CANARY|PUBLIC|USGOVERNMENT|USGOVERNMENTL5]
                                  The AZURE AD environment that should be used.
  --help                          Show this message and exit.

Create Sources Database

The following command shows how to access help for the create sources database command. It also provides examples on how to create a database member source.

user@server$ pim create sources database --help
Usage: pim create sources database [OPTIONS]

  Creates a new DATABASE source for relational database user repositories.

  EXAMPLES:

  Note: The following commands use line continuation (\) for readability.
  In practice, run each command as a single line or use your shell's
  line continuation syntax.

  # Create Oracle database source with DSN
  pim create sources database --name "oracle-hr" --description "Oracle HR Database" \
      --user-name "pim_service" --pass-word "oracle123" \
      --host "oracle.company.com" --port 1521 \
      --dsn "XE" --vendor "ORACLE"

Options:
  --name TEXT                     Name of the source.  [required]
  --description TEXT              Description of the source.
  --user-name TEXT                Authentication user.
  --pass-word TEXT                Authentication password.
  --host TEXT                     The Fully Qualified Domain Name (FQDN) or IP
                                  address of the database server.
  --port INTEGER                  The network port on the directory server
                                  where the service is listening.
  --dsn TEXT                      The Data Source Name (DSN) for ODBC
                                  connection.
  --vendor [TERADATA|ORACLE|DATABASE|SQLSERVER|DB2|POSTGRESQLX]
                                  The vendor of the ODBC driver.
  --help                          Show this message and exit.

Create Sources File

The following command shows how to access help for the create sources file command. It also provides examples on how to create a file member source.

user@server$ pim create sources file --help
Usage: pim create sources file [OPTIONS]

  Creates a new FILE source for static user and group management.

  EXAMPLES:

  # Create basic file source with user list
  pim create sources file --name "dev-users" --description "environment users" --user-file exampleusers.txt --group-file examplegroups.txt

Options:
  --name TEXT         Name of the source.  [required]
  --description TEXT  Description of the source.
  --user-file TEXT    A sample file that contains a list of individual
                      members.
  --group-file TEXT   A sample file that contains groups of members.
  --help              Show this message and exit.

Create Sources Ldap

The following command shows how to access help for the create sources ldap command. It also provides examples on how to create an LDAP member source.

user@server$ pim create sources ldap --help
Usage: pim create sources ldap [OPTIONS]

  Creates a new LDAP source for directory-based authentication and user
  management.

  EXAMPLES:

  Note: The following commands use line continuation (\) for readability.
  In practice, run each command as a single line or use your shell's
  line continuation syntax.

  # Create basic LDAP source with minimal configuration
  pim create sources ldap --name "company-ldap" --description "Company LDAP directory" \
      --host "ldap.company.com" --port 389 \
      --user-name "cn=admin,dc=company,dc=com" --pass-word "password123" \
      --user-base-dn "ou=users,dc=company,dc=com" \
      --group-base-dn "ou=groups,dc=company,dc=com"

  # Create OpenLDAP source with detailed configuration
  pim create sources ldap --name "openldap-prod" --description "Production OpenLDAP" \
      --host "openldap.company.com" --port 389 \
      --user-name "cn=readonly,dc=company,dc=com" --pass-word "readonly123" \
      --user-base-dn "ou=employees,dc=company,dc=com" \
      --user-attribute "uid" --user-object-class "posixAccount" \
      --user-login-attribute "uid" \
      --group-base-dn "ou=departments,dc=company,dc=com" \
      --group-attribute "cn" --group-object-class "posixGroup" \
      --group-members-attribute "memberUid" --timeout 60

Options:
  --name TEXT                     Name of the source.  [required]
  --description TEXT              Description of the source.
  --user-name TEXT                Authentication user.
  --pass-word TEXT                Authentication password.
  --host TEXT                     The Fully Qualified Domain Name (FQDN) or IP
                                  address of the directory server.
  --port INTEGER                  The network port on the directory server
                                  where the service is listening.
  --tls                           The TLS protocol is enabled to create a
                                  secure communication to the directory
                                  server.
  --user-base-dn TEXT             The base distinguished name where users can
                                  be found in the directory.
  --user-attribute TEXT           The Relative Distinguished Name (RDN)
                                  attribute of the user distinguished name.
  --user-object-class TEXT        The object class of entries where user
                                  objects are stored.
  --user-login-attribute TEXT     The attribute intended for authentication or
                                  login.
  --group-base-dn TEXT            The base distinguished name where groups can
                                  be found in the directory.
  --group-attribute TEXT          The Relative Distinguished Name (RDN)
                                  attribute of the group distinguished name.
  --group-object-class TEXT       The object class of entries where group
                                  objects are stored.
  --group-members-attribute TEXT  The attribute that enumerates members of the
                                  group.
  --group-member-is-dn            The members may be listed using their fully
                                  qualified name.
  --timeout INTEGER               The timeout value when waiting for a
                                  response from the directory server.
  --help                          Show this message and exit.

Delete Commands

The following section lists the delete commands.

Main Delete Command

The following command shows how to access help for the delete command.

user@server$ pim delete --help
Usage: pim delete [OPTIONS] COMMAND [ARGS]...

  Delete a resource.

Options:
  --help  Show this message and exit.

Commands:
  alphabets     Deletes a specific alphabet by UID.
  applications  Deletes a specific application by UID.
  dataelements  Deletes a specific data element by UID.
  datastores    Commands for deleting datastore resources.
  masks         Deletes a specific mask by its UID.
  policies      Deletes a policy, a rule from a policy, or a data element from a policy.
  roles         Commands for deleting role resources.
  sources       Permanently deletes a source from the system.

Delete Alphabets

The following command shows how to access help for the delete alphabets command. It also provides examples on how to delete an alphabet.

user@server$ pim delete alphabets --help
Usage: pim delete alphabets [OPTIONS] UID

  Deletes a specific alphabet by UID.

  WORKFLOW:

  # Step 1: First, list all alphabets to find the UID you want to delete
  pim get alphabets

  # Step 2: Copy the UID from the list and use it to delete the alphabet
  pim delete alphabets <uid-from-list>

  EXAMPLES:

  # Complete workflow example:
  # 1. List all alphabets to see available UIDs
  pim get alphabets

  # 2. Delete a specific alphabet using UID from the list above
  pim delete alphabets 14

Options:
  --help  Show this message and exit.

Delete Applications

The following command shows how to access help for the delete applications command. It also provides examples on how to delete a trusted application.

user@server$ pim delete applications --help
Usage: pim delete applications [OPTIONS] UID

  Deletes a specific application by UID.

  WORKFLOW:

  # Step 1: First, list all applications to find the UID you want to delete
  pim get applications

  # Step 2: Copy the UID from the list and use it to delete the application
  pim delete applications <uid-from-list>

  EXAMPLES:

  # 1. List all applications to see available UIDs
  pim get applications

  # 2. Delete a specific application using numeric UID from the list above
  pim delete applications 42

Options:
  --help  Show this message and exit.

Delete Dataelements

The following command shows how to access help for the delete dataelements command. It also provides examples on how to delete a dataelement.

user@server$ pim delete dataelements --help
Usage: pim delete dataelements [OPTIONS] UID

  Deletes a specific data element by UID.

  WORKFLOW:

  # Step 1: First, list all data elements to find the UID you want to delete
  pim get dataelements

  # Step 2: Copy the UID from the list and use it to delete the data element
  pim delete dataelements <uid-from-list>

  EXAMPLES:

  # Complete workflow example:
  # 1. List all data elements to see available UIDs
  pim get dataelements

  # 2. Delete a specific data element using numeric UID from the list above
  pim delete dataelements 42

Options:
  --help  Show this message and exit.

Delete Datastores

The following command shows how to access help for the delete datastores command. It also provides examples on how to delete a datastore.

user@server$ pim delete datastores --help
Usage: pim delete datastores [OPTIONS] COMMAND [ARGS]...

  Commands for deleting datastore resources.

Options:
  --help  Show this message and exit.

Commands:
  datastore  Deletes a datastore by UID.
  key        Deletes an export key from a datastore.
  range      Deletes an IP address range from a datastore.

Delete Datastores Types

The following commands show how to access help for the delete datastores <type> command. They also provide examples on how to delete a datastore of a specific type.

Delete Datastores Datastore

The following command shows how to access help for the delete datastores datastore command. It also provides examples on how to delete a datastore by the UID.

user@server$ pim delete datastores datastore --help
Usage: pim delete datastores datastore [OPTIONS] UID

  Deletes a datastore by UID.

  EXAMPLES:

  # Delete datastore by numeric UID
  pim delete datastores datastore 15

Options:
  --help  Show this message and exit.

Delete Datastores Key

The following command shows how to access help for the delete datastores key command. It also provides examples on how to delete a key from a datastore.

user@server$ pim delete datastores key --help
Usage: pim delete datastores key [OPTIONS] DATASTORE_UID KEY_UID

  Deletes an export key from a datastore.      
  
  EXAMPLES:

  # Remove specific export key from datastore
  pim delete datastores key 1 2

  WORKFLOW:
  # Step 1: List current keys to identify the key UID
  pim get datastores keys <datastore-uid>

  # Step 2: Verify which processes use this key
  # - Check backup and migration schedules
  # - Verify no active export operations

  # Step 3: Delete the key
  pim delete datastores key <datastore-uid> <key-uid>

  # Step 4: Verify deletion
  pim get datastores keys <datastore-uid>

Options:
  --help  Show this message and exit.

Delete Datastores Range

The following command shows how to access help for the delete datastores range command. It also provides examples on how to delete a range of IP addresses from a datastore.

user@server$ pim delete datastores range --help
Usage: pim delete datastores range [OPTIONS] DATASTORE_UID RANGE_UID

  Deletes an IP address range from a datastore.

  EXAMPLES:

  # Remove specific IP range from datastore
  pim delete datastores range 15 1

  WORKFLOW:
  # Step 1: List current ranges to identify the range UID
  pim get datastores ranges <datastore-uid>

  # Step 2: Verify which systems use this range
  # - Check with network administrators
  # - Verify no active connections from this range

  # Step 3: Delete the range
  pim delete datastores range <datastore-uid> <range-uid>

  # Step 4: Verify deletion
  pim get datastores ranges <datastore-uid>

Options:
  --help  Show this message and exit.

Delete Masks

The following command shows how to access help for the delete masks command. It also provides examples on how to delete a mask.

user@server$ pim delete masks --help
Usage: pim delete masks [OPTIONS] UID

  Deletes a specific mask by its UID.

  EXAMPLES:

  # Delete mask by UID
  pim delete masks 15

Options:
  --help  Show this message and exit.

Delete Policies

The following command shows how to access help for the delete policies command. It also provides examples on how to delete a policy, a rule from a policy, or a data element from a policy.

user@server$ pim delete policies --help
Usage: pim delete policies [OPTIONS] UID

  Deletes a policy, a rule from a policy, or a data element from a policy.

  EXAMPLES:

  # Delete entire policy (removes all rules and deployments)
  pim delete policies 15

  # Remove specific rule from policy
  pim delete policies 15 --rule-uid 23

  # Remove all rules for specific data element from policy
  pim delete policies 42 --data-element-uid 67

Options:
  --rule-uid TEXT          UID of the rule to remove.
  --data-element-uid TEXT  UID of the data element to remove from a policy.
  --help                   Show this message and exit.

Delete Roles

The following command shows how to access help for the delete roles command. It also provides examples on how to delete a role.

user@server$ pim delete roles --help
Usage: pim delete roles [OPTIONS] COMMAND [ARGS]...

  Commands for deleting role resources.

Options:
  --help  Show this message and exit.

Commands:
  members  Removes a specific member from a role.
  role     Permanently deletes a role from the system.

Delete Roles Types

The following commands show how to access help for the delete roles <type> command.

Delete Roles Members

The following command shows how to access help for the delete roles members command. It also provides examples on how to remove a member from a role.

user@server$ pim delete roles members --help
Usage: pim delete roles members [OPTIONS] ROLE_UID MEMBER_UID

  Removes a specific member from a role.

  EXAMPLES:

  # Remove specific user from role
  pim delete roles members 15 42
  pim delete roles members <role_uuid> <member_uuid>

Options:
  --help  Show this message and exit.

Delete Roles Role

The following command shows how to access help for the delete roles role command. It also provides examples on how to remove a role by the UID.

user@server$ pim delete roles role --help
Usage: pim delete roles role [OPTIONS] UID

  Permanently deletes a role from the system.

  EXAMPLES:

  # Remove specific role  
  pim delete roles role 15

Options:
  --help  Show this message and exit.

Delete Sources

The following command shows how to access help for the delete sources command. It also provides examples on how to delete a source by its UID.

user@server$ pim delete sources --help
Usage: pim delete sources [OPTIONS] UID

  Permanently deletes a source from the system.

  EXAMPLES:

  # Interactive source deletion with confirmation
  pim delete sources 15

Options:
  --help  Show this message and exit.

Get Commands

The following section lists the get commands.

Main Get Command

The following command shows how to access help for the get command.

user@server$ pim get --help
Usage: pim get [OPTIONS] COMMAND [ARGS]...

  Display one or many resources.

Options:
  --help  Show this message and exit.

Commands:
  alphabets     Gets a specific alphabet by UID, or lists all alphabets if no UID is provided.
  applications  Gets a specific application by UID, or lists all applications if no UID is provided.
  dataelements  Gets a specific data element by UID, or lists all data elements if no UID is provided.
  datastores    Commands for getting datastore resources.
  deploy        List deployment history across all datastores.
  health        Displays the server health information and status.
  log           Gets the current log level configuration.
  masks         Gets a specific mask by UID, or lists all masks if no UID is provided.
  policies      Gets a specific policy by UID, lists all policies, or lists rules of a policy.
  ready         Displays the server readiness information and operational status.
  roles         Commands for getting role resources.
  sources       Gets source information by UID, lists all sources, or lists source members.
  version       Displays the server version information.

Get Alphabets

The following command shows how to access help for the get alphabets command. It also provides examples on how to retrieve all the alphabets or a specific alphabet.

user@server$ pim get alphabets --help
Usage: pim get alphabets [OPTIONS] [UID]

  Gets a specific alphabet by UID, or lists all alphabets if no UID is
  provided.

  EXAMPLES:

  # List all available alphabets
  pim get alphabets

  # Get details for a specific alphabet by UID
  pim get alphabets 29

Options:
  --help  Show this message and exit.

Get Applications

The following command shows how to access help for the get applications command. It also provides examples on how to retrieve all trusted applications or a specific trusted application.

user@server$ pim get applications --help
Usage: pim get applications [OPTIONS] [UID]

  Gets a specific application by UID, or lists all applications if no UID is
  provided.

  EXAMPLES:

  # List all available applications
  pim get applications

  # Get details for a specific application by UID
  pim get applications 1

Options:
  --help  Show this message and exit.

Get Dataelements

The following command shows how to access help for the get dataelements command. It also provides examples on how to retrieve all the data elements or a specific data element.

user@server$ pim get dataelements --help
Usage: pim get dataelements [OPTIONS] [UID]

  Gets a specific data element by UID, or lists all data elements if no UID is
  provided.

  EXAMPLES:

  # List all available data elements
  pim get dataelements

  # Get details for a specific data element by UID
  pim get dataelements 15

Options:
  --help  Show this message and exit.

Get Datastores

The following command shows how to access help for the get datastores command. It also provides examples on how to retrieve the datastore resources.

user@server$ pim get datastores --help
Usage: pim get datastores [OPTIONS] COMMAND [ARGS]...

  Commands for getting datastore resources.

Options:
  --help  Show this message and exit.

Commands:
  datastore  Gets a specific datastore by UID, or lists all datastores if no UID is provided.
  keys       Gets a specific key by UID, or lists all keys for a datastore.
  ranges     Gets a specific range by UID, or lists all ranges for a datastore.

Get Datastores Types

The following commands show how to access help for the get datastores <type> command. They also provide examples on how to retrieve specific datastores.

Get Datastores Datastore

The following command shows how to access help for the get datastores datastore command. It also provides examples on how to retrieve all datastores or a specific datastore.

user@server$ pim get datastores datastore --help
Usage: pim get datastores datastore [OPTIONS] [UID]

  Gets a specific datastore by UID, or lists all datastores if no UID is
  provided.

  Datastores represent the physical or logical storage systems where protected
  data is stored. They contain policies, applications, and IP ranges that
  define access control.

  EXAMPLES:

  # List all available datastores
  pim get datastores datastore

  # Get details for a specific datastore by UID
  pim get datastores datastore 15

Options:
  --help  Show this message and exit.

Get Datastores Keys

The following command shows how to access help for the get datastores keys command. It also provides examples on how to retrieve all keys for a datastore or a specific key.

user@server$ pim get datastores keys --help
Usage: pim get datastores keys [OPTIONS] DATASTORE_UID

  Gets a specific key by UID, or lists all keys for a datastore.

  Datastore keys manage encryption and access credentials for secure data
  operations. Keys can be export keys for data migration or operational keys
  for ongoing protection services. Key management is critical for data
  security.

  EXAMPLES:

  # List all keys for a specific datastore
  pim get datastores keys <datastore-uid>

  # Get details for a specific key within a datastore
  pim get datastores keys 15 --key-uid <key-uid>

  WORKFLOW:

  # Step 1: List all datastores to find the datastore UID
  pim get datastores datastore

  # Step 2: List keys for the specific datastore
  pim get datastores keys <datastore-uid>

  # Step 3: Get specific key details if needed
  pim get datastores keys <datastore-uid> --key-uid <key-uid>

Options:
  --key-uid TEXT  UID of the specific key to get.
  --help          Show this message and exit.

Get Datastores Ranges

The following command shows how to access help for the get datastores ranges command. It also provides examples on how to retrieve all the IP address ranges for a datastore or a specific range.

user@server$ pim get datastores ranges --help
Usage: pim get datastores ranges [OPTIONS] DATASTORE_UID

  Gets a specific range by UID, or lists all ranges for a datastore.

  IP ranges define which network addresses are allowed to access the
  datastore. Ranges provide network-level security by restricting datastore
  access to specific IP addresses or CIDR blocks.

  EXAMPLES:

  # List all IP ranges for a specific datastore
  pim get datastores ranges 15

  # Get details for a specific range within a datastore
  pim get datastores ranges 15 --range-uid 1

  WORKFLOW:

  # Step 1: List all datastores to find the datastore UID
  pim get datastores datastore

  # Step 2: List ranges for the specific datastore
  pim get datastores ranges <datastore-uid>

  # Step 3: Get specific range details if needed
  pim get datastores ranges <datastore-uid> --range-uid <range-uid>

Options:
  --range-uid TEXT  UID of the range to get.
  --help            Show this message and exit.
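
The CIDR-based restriction described in the help text can be illustrated with Python's standard ipaddress module. This is a conceptual sketch of the allow-list idea, not the datastore's actual implementation:

```python
import ipaddress

def ip_allowed(ip: str, allowed_blocks: list[str]) -> bool:
    # An address is allowed if it falls inside any configured CIDR block.
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(block) for block in allowed_blocks)

# Example blocks are illustrative, not taken from a real datastore.
allowed = ["10.0.0.0/8", "192.168.1.0/24"]
print(ip_allowed("192.168.1.42", allowed))  # True
print(ip_allowed("203.0.113.7", allowed))   # False
```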

Get Deploy

The following command shows how to access help for the get deploy command. It also provides examples on how to list the deployment history.

user@server$ pim get deploy --help
Usage: pim get deploy [OPTIONS]

  List deployment history across all datastores.

  EXAMPLES:

  # List all deployment history
  pim get deploy

Options:
  --help  Show this message and exit.

Get Health

The following command shows how to access help for the get health command. It also provides examples on how to display the server health information.

user@server$ pim get health --help
Usage: pim get health [OPTIONS]

  Displays the server health information and status.

  EXAMPLES:

  # Check current server health status
  pim get health

Options:
  --help  Show this message and exit.

Get Log

The following command shows how to access help for the get log command. It also provides examples on how to retrieve the current log level.

user@server$ pim get log --help
Usage: pim get log [OPTIONS]

  Gets the current log level configuration.

  EXAMPLES:

  # Check current log level setting
  pim get log

Options:
  --help  Show this message and exit.

Get Masks

The following command shows how to access help for the get masks command. It also provides examples on how to retrieve all masks or a specific mask.

user@server$ pim get masks --help
Usage: pim get masks [OPTIONS] [UID]

  Gets a specific mask by UID, or lists all masks if no UID is provided.

  EXAMPLES:

  # List all available masks
  pim get masks

  # Get details for a specific mask by UID
  pim get masks 15

Options:
  --help  Show this message and exit.

Get Policies

The following command shows how to access help for the get policies command. It also provides examples on how to retrieve all policies, a specific policy, or all rules of a policy.

user@server$ pim get policies --help
Usage: pim get policies [OPTIONS] [UID]

  Gets a specific policy by UID, lists all policies, or lists rules of a
  policy.

  EXAMPLES:

  # List all available policies
  pim get policies

  # Get details for a specific policy by UID
  pim get policies 15

  # List all rules within a specific policy
  pim get policies 15 --rules

Options:
  --rules  List rules of the policy.
  --help   Show this message and exit.

Get Ready

The following command shows how to access help for the get ready command. It also provides examples on how to display the server readiness information.

user@server$ pim get ready --help
Usage: pim get ready [OPTIONS]

  Displays the server readiness information and operational status.

  EXAMPLES:

  # Check if server is ready for requests
  pim get ready

Options:
  --help  Show this message and exit.

Get Roles

The following command shows how to access help for the get roles command. It also provides examples on how to retrieve the resources for a role.

user@server$ pim get roles --help
Usage: pim get roles [OPTIONS] COMMAND [ARGS]...

  Commands for getting role resources.

Options:
  --help  Show this message and exit.

Commands:
  members  Lists all members of a specific role.
  role     Gets a specific role by UID, or lists all roles if no UID is provided.
  users    Lists users of a specific member in a role.

Get Roles Types

The following commands show how to access help for the get roles <type> command.

Get Roles Members

The following command shows how to access help for the get roles members command. It also provides examples on how to list all members of a role.

user@server$ pim get roles members --help
Usage: pim get roles members [OPTIONS] ROLE_UID

  Lists all members of a specific role.

  EXAMPLES:

  # List all members of a specific role
  pim get roles members 15

Options:
  --help  Show this message and exit.

Get Roles Role

The following command shows how to access help for the get roles role command. It also provides examples on how to retrieve all roles or a specific role.

user@server$ pim get roles role --help
Usage: pim get roles role [OPTIONS] [UID]

  Gets a specific role by UID, or lists all roles if no UID is provided.

  EXAMPLES:

  # List all available roles
  pim get roles role

  # Get details for a specific role by UID
  pim get roles role 15

Options:
  --help  Show this message and exit.

Get Roles Users

The following command shows how to access help for the get roles users command. It also provides examples on how to retrieve users of a specific member in a role.

user@server$ pim get roles users --help
Usage: pim get roles users [OPTIONS] ROLE_UID MEMBER_UID

  Lists users of a specific member in a role.

  EXAMPLES:

  # List users in a specific group member of a role
  pim get roles users 15 23
  pim get roles users "<role-uuid>" "<member-uuid>"

Options:
  --help  Show this message and exit.

Get Sources

The following command shows how to access help for the get sources command. It also provides examples on how to retrieve all sources, a specific source, or members of a source.

user@server$ pim get sources --help
Usage: pim get sources [OPTIONS] [UID]

  Gets source information by UID, lists all sources, or lists source members.

  EXAMPLES:

  # List all configured sources
  pim get sources

  # Get detailed information about a specific source
  pim get sources 15

  # List all members of a specific source
  pim get sources 23 --members

Options:
  --members  List members of the source.
  --help     Show this message and exit.

Get Version

The following command shows how to access help for the get version command. It also provides examples on how to display the version information of the server.

user@server$ pim get version --help
Usage: pim get version [OPTIONS]

  Displays the server version information.

  EXAMPLES:

  # Display server version information
  pim get version

Options:
  --help  Show this message and exit.

Set Commands

The following section lists the set commands.

Main Set Command

The following command shows how to access help for the set command.

user@server$ pim set --help
Usage: pim set [OPTIONS] COMMAND [ARGS]...

  Update fields of a resource.

Options:
  --help  Show this message and exit.

Commands:
  log  Sets the log level for the PIM server.

Set Log

The following command shows how to access help for the set log command. It also provides examples on how to set the log level.

user@server$ pim set log --help
Usage: pim set log [OPTIONS] {ERROR|WARN|INFO|DEBUG|TRACE}

  Sets the log level for the PIM server.

  Higher levels include all lower levels (TRACE includes DEBUG, INFO, WARN,
  ERROR).

  EXAMPLES:

  # Enable debug logging for troubleshooting
  pim set log DEBUG

Options:
  --help  Show this message and exit.
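
The inclusion rule described in the help text can be illustrated with a short sketch (conceptual only, not PIM code):

```python
# Verbosity order from least to most verbose, per the help text above.
LEVELS = ["ERROR", "WARN", "INFO", "DEBUG", "TRACE"]

def included_levels(level: str) -> list[str]:
    # A given level emits messages for itself and every less verbose level.
    return LEVELS[: LEVELS.index(level) + 1]

print(included_levels("DEBUG"))  # ['ERROR', 'WARN', 'INFO', 'DEBUG']
```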

3.1.3.2.1 - Using the Policy Management Command Line Interface (CLI)

Explains the usage of the Policy Management CLI with some generic samples.

The following table provides section references that explain the usage of some of the Policy Management CLI commands, following an example workflow through the Policy Management functions. To view all of the Policy Management CLI commands, refer to the section Policy Management Command Line Interface (CLI) Reference.

Policy Management CLI                                       Section Reference
Policy Management initialization                            Initializing the Policy Management
Creating an empty manual role that will accept all users    Creating a Manual Role
Creating data elements                                      Creating Data Elements
Creating a policy                                           Creating Policy
Adding roles and data elements to the policy                Adding Roles and Data Elements to a Policy
Creating a default data store                               Creating a Default Data Store
Deploying the data store                                    Deploying a Specific Data Store
Getting the deployment information                          Getting the Deployment Information

Initializing the Policy Management

This section explains how you can initialize Policy Management to create the keys-related data and the policy repository.

pim invoke init

The following output appears:

✅ PIM successfully initialized (bootstrapped).

Creating a Manual Role

This section explains how you can create a manual role that accepts all the users.

pim create roles role --name "project-alpha-team" --description "Project Alpha all access" --mode "MANUAL" --allow-all

The following output appears:

NAME                DESCRIPTION               MODE             ALLOWALL  UID
project-alpha-team  Project Alpha all access  RoleMode.MANUAL  True      1

The command creates a role named project-alpha-team with the UID 1.

Creating Data Elements

This section explains how you can create a data element.

pim create dataelements aes128-cbc-enc --name "BasicEncryption" --description "Basic data encryption"

The following output appears:

UID  NAME             DESCRIPTION            IVTYPE       CHECKSUMTYPE       CIPHERFORMAT
1    BasicEncryption  Basic data encryption  IvType.NONE  ChecksumType.NONE  CipherFormat.NONE

The command creates an AES-128-CBC-ENC encryption data element named BasicEncryption with the UID 1.

Creating Policy

This section explains how you can create a policy.

pim create policies policy --name "full-protection-policy" --description "Complete data protection with all operations" --protect --re-protect --un-protect

The following output appears:

NAME                     DESCRIPTION                                   ACCESS                                                   UID
 full-protection-policy  Complete data protection with all operations  {'protect': True, 'reProtect': True, 'unProtect': True}  1

The command creates a policy named full-protection-policy with the UID 1.

Adding Roles and Data Elements to a Policy

This section explains how you can add roles and data elements to a policy.

pim create policies rules <policy-uid> --rule "1,1,,NULL_VALUE,true,false,false"

The following output appears:

 ROLE  DATAELEMENT  MASK  NOACCESSOPERATION  ACCESS
 1     1            0     NULL_VALUE         {'protect': True, 'reProtect': False, 'unProtect': False}

The command adds the role with the UID 1 and the data element with the UID 1 to the policy with the UID 1.
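
Judging from the output columns above, the positional --rule string appears to encode the role UID, data element UID, mask, no-access operation, and the three access flags, in that order. A minimal Python sketch of that assumed mapping:

```python
def parse_rule(rule: str) -> dict:
    # Assumed field order, inferred from the output columns:
    # ROLE, DATAELEMENT, MASK, NOACCESSOPERATION, then protect/reProtect/unProtect.
    role, element, mask, no_access, protect, re_protect, un_protect = rule.split(",")
    return {
        "role": int(role),
        "dataElement": int(element),
        "mask": int(mask) if mask else 0,  # the empty field appears to default to 0
        "noAccessOperation": no_access,
        "access": {
            "protect": protect == "true",
            "reProtect": re_protect == "true",
            "unProtect": un_protect == "true",
        },
    }

rule = parse_rule("1,1,,NULL_VALUE,true,false,false")
```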

Creating a Default Data Store

This section explains how you can create a default data store.

pim create datastores datastore --name "primary-db" --description "Primary application database" --default

The following output appears:

NAME        DESCRIPTION                   DEFAULT  UID
primary-db  Primary application database  True     1

The command creates a default data store named primary-db with the UID 1.

Deploying a Specific Data Store

This section explains how you can deploy policies and trusted applications linked to a specific data store. The specifications provided for that data store are applied and become the end result.

pim invoke datastores deploy 1 --policies 1

The following output appears:

Successfully deployed to datastore '1':
  Policies: 1

The command deploys the policy with the UID 1 to the data store with the UID 1.

Getting the Deployment Information

This section explains how you can check the complete deployment information. This service returns the list of the data stores with the connected policies and trusted applications.

pim get deploy

The following output appears:

 UID  POLICIES  APPLICATIONS
 1    ['1']     []

The command retrieves the deployment information. It displays the UID of the data store and the policy that has been deployed.

3.2 - Data Discovery

Working with Data Discovery.

Data Discovery’s Classification Service specializes in the detection of Personally Identifiable Information (PII), Protected Health Information (PHI), and Payment Card Information (PCI) within free-text (unstructured) and table-based (structured, such as CSV) inputs. Unlike traditional data tools, it excels in dynamic, unstructured environments such as chatbot conversations, call transcripts, and Generative AI (GenAI) outputs.

3.2.1 - Installing Data-Discovery

Steps to install Data Discovery.

The Data-Discovery application can be deployed using Helm.

Prerequisites

Before you begin, ensure the following requirements are met:

  • Team Edition Tech Preview NFA Cluster is installed and reachable.

  • Helm 3.x is installed.

  • kubectl is installed.

  • Kubernetes cluster access with permissions to create:

    • Namespaces
    • Deployments
    • Services
    • Ingress rules
    • Karpenter resources

Installing Data-Discovery

Install Data-Discovery using the following command:

helm upgrade --install data-discovery \
  https://artifactory.protegrity.com/artifactory/helm-charts/data-discovery/release_1.1.1/data-discovery-1.1.1+306.26d04c6c.tgz \
  --namespace data-discovery --create-namespace

Validating the deployment

After installation, you can validate the Data-Discovery deployment as follows:

Note: Using Data-Discovery in Team Edition requires an authentication token. To obtain a JWT token, follow the instructions in the NFA documentation.

curl -k https://eclipse.aws.protegrity.com/pty/data-discovery/v1.1/classify \
  -H 'Content-Type: text/plain' \
  -H "Authorization: Bearer $YOUR_JWT_TOKEN" \
  --data 'You can reach Dave Elliot by phone 203-555-1286'

To view a sample response, refer to the Data Discovery API Documentation.
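
The same validation call can be issued from Python. The sketch below only builds the request (the endpoint and headers are copied from the curl example; actually sending it requires a reachable cluster and a valid token):

```python
import urllib.request

# Endpoint taken from the curl example above; adjust for your environment.
CLASSIFY_URL = "https://eclipse.aws.protegrity.com/pty/data-discovery/v1.1/classify"

def build_classify_request(text: str, token: str) -> urllib.request.Request:
    # Plain-text body with bearer-token authentication, as in the curl call.
    return urllib.request.Request(
        CLASSIFY_URL,
        data=text.encode("utf-8"),
        headers={
            "Content-Type": "text/plain",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_classify_request(
    "You can reach Dave Elliot by phone 203-555-1286", "YOUR_JWT_TOKEN")
# urllib.request.urlopen(req) would send it; omitted in this sketch.
```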

Uninstall

To uninstall, run the following command:

helm uninstall data-discovery -n data-discovery --wait --timeout 300s

Troubleshooting

The following table provides a quick reference for common issues, their causes, and actions:

Symptom                                                Likely Cause                                 Action
Pods remain Pending                                    NodePool not ready or insufficient capacity  Check NodePool status and capacity
HPA shows unknown metrics                              Metrics Server missing or unhealthy          Install or fix the Metrics Server
401 via Ingress                                        JWT/token or role mismatch (NFA specific)    Validate ProductConfiguration and existing roles; verify application functionality without authentication
Uninstall hangs                                        Karpenter finalizer preventing deletion      Manually remove the remaining resources
NodePool or NodeClass resources remain in the cluster  Permission issues                            Manually remove the remaining resources

Verify application functionality without authentication

Execute the following command to see the results:

kubectl -n data-discovery run curl --image=curlimages/curl -it --rm --restart=Never -- \
  curl -v -X POST classification-service:8050/pty/data-discovery/v1.1/classify \
  -H 'Content-Type: text/plain' \
  --data 'Detect Jane Roe phone 203-555-1111'

To view a sample response, refer to the Data Discovery API Documentation.

Manually remove the remaining resources

1. Remove finalizer from the Data-Discovery EC2NodeClass

kubectl patch ec2nodeclass data-discovery-nodeclass \
  --type merge \
  -p '{"metadata":{"finalizers":[]}}'
kubectl delete ec2nodeclass data-discovery-nodeclass

2. Delete NodePools

kubectl delete nodepool data-discovery-classification
kubectl delete nodepool data-discovery-context  
kubectl delete nodepool data-discovery-pattern

3. Delete EC2NodeClass (AWS provider)

kubectl delete ec2nodeclass data-discovery-nodeclass

3.3 - Gen AI

Gen AI features

3.3.1 - Semantic Guardrails

Using Semantic Guardrails with NFA

Protegrity’s Semantic Guardrails solution is a security guardrail engine for AI systems. It evaluates risks in GenAI systems, such as chatbots, workflows, and agents, using advanced semantic analytics and intent classification to detect potentially malicious messages. PII detection can also be leveraged for comprehensive security coverage.

For more information about Semantic Guardrails, refer to the Semantic Guardrails documentation.

3.3.1.1 - Prerequisites

Prerequisites to install Semantic Guardrails

Before you begin

Ensure that the following requirements are met before installing Semantic Guardrails with NFA.

  • The NFA cluster is installed and accessible.
  • For PII detection, the Data Discovery service is installed.
  • The Kubernetes command-line tool, kubectl, is configured for the NFA cluster.
  • helm CLI v3.x is installed.
  • Access to relevant container images is available.

3.3.1.2 - Installing Semantic Guardrails

Steps to install Semantic Guardrails

This section describes the steps to install Semantic Guardrails.

Installing Data Discovery

For PII detection, it is recommended to install the Data Discovery service before installing Semantic Guardrails.

For more information about installing the Data Discovery service, refer to Installing Data Discovery.

To verify the Data Discovery service status, run the following command.

kubectl get pods -n data-discovery

1. Preparing and Installing

It is recommended to install Semantic Guardrails from the Artifactory repository.

NOTE: If the installation fails because the namespace already exists, uninstall Semantic Guardrails before retrying.

Installing from Artifactory repository

Perform the following steps.

  1. To add the Semantic Guardrails repository, run the following command.

    helm repo add protegrity-charts https://artifactory.protegrity.com/artifactory/helm-charts/
    
  2. To update the repository index, run the following command.

    helm repo update
    
  3. To install the helm chart, run the following command.

    helm upgrade --install nfa-semantic-guardrails semantic-guardrails/semantic-guardrails --version 1.1.0 --namespace pty-semantic-guardrails --create-namespace
    

Installing using a pre-packaged .tgz file

Packaging

If the source code is available, then Semantic Guardrails can be packaged.

Perform the following steps.

  1. To navigate to the chart directory, run the following command.

    cd deployment/nfa
    
  2. To create a package, run the following command.

    helm package charts/semantic-guardrails
    

Installing

If the semantic-guardrails-1.1.0.tgz file is available, then install the file using the following command.

helm upgrade --install nfa-semantic-guardrails ./semantic-guardrails-1.1.0.tgz \
--namespace pty-semantic-guardrails \
--create-namespace

2. Verifying Deployment Status

To verify the deployment status, run the following command.

kubectl get pods -n pty-semantic-guardrails

After the Semantic Guardrails feature is successfully deployed, the expected output is as follows.

NAME                                                 READY   STATUS    RESTARTS   AGE
semantic-guardrails-deployment-xxxxxxxxxx-xxxxx     1/1     Running   0          2m

3. Verifying the Service Status

To verify the service status, run the following command.

kubectl get svc -n pty-semantic-guardrails

After the Semantic Guardrails feature is successfully deployed, the expected output is as follows.

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
semantic-guardrails-service       ClusterIP   172.20.109.155   <none>        8001/TCP   3h

3.3.1.3 - Testing the Semantic Guardrails deployment with NFA

Steps to test the deployment for Semantic Guardrails

Perform the following steps to test the Semantic Guardrails deployment.

1. Testing Semantic Guardrails API

To test the Semantic Guardrails API endpoint, run the following command.

Note: The endpoints require authentication.

curl -k -X POST https://eclipse.aws.protegrity.com/pty/semantic-guardrails/v1.1/conversations/messages/scan \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_JWT_TOKEN' \
  --data '{
    "messages": [
      {
        "from": "user",
        "to": "ai", 
        "content": "This is a test message for semantic analysis",
        "processors": ["financial"]
      }
    ]
  }'

The response should be similar to the following.


{
  "from": "user",
  "to": "ai",
  "content": "This is a test message for semantic analysis",
  "outcome": "approved",
  "score": 0.2,
  "explanation": "in-domain"
}
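For scripted testing, the request body and response outcome can also be handled programmatically. The following is a minimal Python sketch; the helper names are illustrative, and the actual HTTP call with authentication is omitted.

```python
import json

# Path of the scan endpoint shown in the curl example above.
SCAN_PATH = "/pty/semantic-guardrails/v1.1/conversations/messages/scan"

def build_scan_payload(content, processors, sender="user", recipient="ai"):
    """Build the request body expected by the scan endpoint."""
    return {
        "messages": [
            {
                "from": sender,
                "to": recipient,
                "content": content,
                "processors": processors,
            }
        ]
    }

def is_approved(scan_result):
    """Interpret a single scanned message from the response."""
    return scan_result.get("outcome") == "approved"

payload = build_scan_payload("This is a test message for semantic analysis",
                             ["financial"])
print(json.dumps(payload, indent=2))
```

The payload can then be sent with any HTTP client, supplying the Authorization header as in the curl example.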

2. Testing Data Discovery Integration

If Data Discovery is installed, then to test the Data Discovery integration, run the following command.

curl -k -X POST https://eclipse.aws.protegrity.com/pty/semantic-guardrails/v1.1/conversations/messages/scan \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_JWT_TOKEN' \
  --data '{
    "messages": [
      {
        "from": "ai",
        "to": "user", 
        "content": "My name is John Smith, my credit card number is 15342724758",
        "processors": ["pii"]
      }
    ]
  }'

The response should be similar to the following.

{
  "from": "ai",
  "to": "user",
  "content": "My name is John Smith, my credit card number is 15342724758".,
  "outcome": "rejected",
  "score": 0.9,
  "explanation": "['NAME' : 'John Smith', 'CCN' : '15342724758']"
}

3.3.1.4 - Configuring Semantic Guardrails with NFA

Steps to configure Semantic Guardrails with NFA

This service provides AI conversation scanning and semantic analysis capabilities for Semantic Guardrails.

API Endpoints

This section provides an overview of the primary endpoints.

Name        Endpoint
Main API    /v1/conversations/messages
Models API  /pty/semantic-guardrail/v1.1/domain-models/

Environment variables

The semantic-guardrails service uses the following environment variables.

Variable               Description
LOG_LEVEL              Logging level (INFO/DEBUG)
PRODUCTION_LOGGING     Enables production logging (true/false)
DATA_DISCOVERY_SEARCH  Enables PII detection search (true/false)
DATA_DISCOVERY_URL     Data Discovery classification service hostname (default: “classification-service.data-discovery”)
DATA_DISCOVERY_PORT    Data Discovery classification service port (default: 8050)
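The defaults in the table above can be mirrored in a small configuration reader. This is a hedged Python sketch; the function name is illustrative and not part of the service.

```python
import os

def data_discovery_config(env=None):
    """Resolve Data Discovery settings, falling back to the documented defaults."""
    if env is None:
        env = os.environ
    return {
        "enabled": env.get("DATA_DISCOVERY_SEARCH", "false").lower() == "true",
        "url": env.get("DATA_DISCOVERY_URL",
                       "classification-service.data-discovery"),
        "port": int(env.get("DATA_DISCOVERY_PORT", "8050")),
    }

# With no variables set, the documented defaults apply.
print(data_discovery_config({}))
```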

Custom configurations

To use a different image, update the values.yaml file.

serviceImages:
  semanticGuardrails: "<your-semantic-guardrails-image>"

To update the deployed cluster, run the following commands.

helm template semantic-guardrails charts/semantic-guardrails > semantic_guardrails.yaml 2>&1
kubectl delete -f semantic_guardrails.yaml
kubectl apply -f semantic_guardrails.yaml

3.3.1.5 - Uninstalling Semantic Guardrails

Steps to uninstall Semantic Guardrails

Perform the following steps to uninstall Semantic Guardrails.

Uninstalling Semantic-Guardrails

To uninstall semantic-guardrails, run the following command.

helm uninstall semantic-guardrails -n pty-semantic-guardrails

Uninstalling Data Discovery

If Data Discovery is not needed, then uninstall the Data Discovery service.

To uninstall data discovery, run the following command.

helm uninstall data-discovery -n data-discovery

3.3.2 - Protegrity Agent

Using Protegrity Agent with NFA

Protegrity Agent is an intelligent agentic AI system designed for Data Protection architects and administrators. Protegrity Agent manages the Protegrity Policy, Data Elements, Roles, Masks, Data Stores and other configurations through natural language conversations. The system provides automated planning and execution capabilities for complex data protection workflows, including policy management, data element configuration, and security rule deployment.

Protegrity Agent leverages advanced Large Language Model (LLM) capabilities with chain-of-thought reasoning to break down complex user requests into executable step-by-step plans. The agent orchestrates operations across the Protegrity ecosystem through the Policy Management (PIM) API, providing an intuitive chatbot-like interface for sophisticated data protection management tasks.

The key capabilities of Protegrity Agent include:

  • Intelligent Planning: Decomposes complex user queries into hierarchical, executable task plans.
  • Natural Language Interface: Manages data protection configurations using a conversational API.
  • Autonomous Execution: Executes multi-step workflows with adaptive tool selection and error recovery.
  • Real-time Streaming: Displays Server-Sent Events (SSE) for live progress updates and intermediate results.
  • Enterprise Integration: Integrates with Protegrity PIM seamlessly through comprehensive API coverage.
  • Semantic Tool Discovery: Uses RAG-based tool selection for optimal endpoint matching.
  • Conversation Management: Tracks conversation history with context-aware interactions.

3.3.2.1 - Prerequisites

Prerequisites to install Protegrity Agent

Ensure that the following requirements are met before installing Protegrity Agent with NFA.

  • Kubernetes cluster with RBAC enabled is installed and accessible.
  • Kubernetes command-line tool, kubectl, is configured for the cluster.
  • helm CLI v3.x is installed.
  • Storage class is configured. The chart uses ebs-sc by default.
  • Access to container images from Protegrity Artifactory is available.
  • Required secrets are configured.
    • OPENAI_API_KEY
    • OPENAI_API_ENDPOINT
    • OPENAI_API_VERSION
    • OPENAI_LLM_MODEL
    • OPENAI_EMBEDDING_MODEL
  • The following ports are required while working with Protegrity Agent.
    • Port 8010: FastAPI backend used by Protegrity Agent Service.
    • Port 3000: Web interface for accessing the UI Dashboard.

3.3.2.2 - Installing Protegrity Agent

Steps to install Protegrity Agent

Protegrity Agent can be installed using a helm chart. The helm chart deploys the following components.

Component                 Description
Protegrity Agent Service  Main application service
PostgreSQL Database       Persistent database for conversation storage
UI Dashboard              Web interface for Protegrity Agent management
Ingress                   External access configuration for API and UI endpoints

3.3.2.2.1 - Installing Protegrity Agent

Steps to install Protegrity Agent

1. Preparing and Installing

Protegrity Agent can be installed directly from the helm chart in the repository.

NOTE: If the installation fails due to an existing namespace, uninstall the previous deployment before proceeding further.

Perform the following steps to install the helm chart.

  1. Navigate to the Protegrity Agent deployment directory, using the following command.

    cd /path/to/proagent/deployment/nfa
    
  2. Install the helm chart with required configuration, using the following command.

    helm install proagent ./proagent \
      --namespace pty-proagent \
      --create-namespace
    

2. Verifying Deployment Status

Verify the deployment status, using the following command.

kubectl get pods -n pty-proagent

After Protegrity Agent is successfully deployed, the expected output includes:

NAME                                     READY   STATUS    RESTARTS   AGE
proagent-deployment-xxxxxxxxxx-xxxxx     1/1     Running   0          2m
database-statefulset-0                   1/1     Running   0          2m
proagent-ui-deployment-xxxxxxxxxx-xxxxx  1/1     Running   0          2m

Verifying Karpenter nodes

Verify the Karpenter nodes, using the following command.

kubectl get nodes -l karpenter.sh/nodepool=proagent

Verifying Ingress

Verify the ingress status, using the following command.

kubectl get ingress -n pty-proagent

3. Verifying the Service Status

To verify the service status, use the following command:

kubectl get svc -n pty-proagent

After Protegrity Agent is successfully deployed, the expected output includes:

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
proagent-service       ClusterIP   172.20.x.x       <none>        8010/TCP   3h
database-service       ClusterIP   172.20.x.x       <none>        5432/TCP   3h
proagent-ui-service    ClusterIP   172.20.x.x       <none>        3000/TCP   3h

4. Testing the API

Test the Protegrity Agent API health endpoint, using the following command.

kubectl exec -n pty-proagent deployment/proagent-deployment -- \
  curl -s http://localhost:8010/health

3.3.2.2.2 - Post deployment configurations

Steps to configure Protegrity Agent after deployment

Configuring fluentd service for Logging

After the helm chart is successfully deployed, patch the fluentd daemonset so that it can be scheduled on Protegrity Agent nodes for log collection.

The Protegrity Agent nodes have two taints which need matching toleration configurations.

Add the toleration for logging taint, using the following command.

kubectl patch daemonset daemonset-fluentd -n default --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/tolerations/-",
    "value": {
      "key": "logging",
      "operator": "Equal",
      "value": "fluentd",
      "effect": "NoSchedule"
    }
  }
]'

After running the patch, restart the fluentd services to apply the changes, using the following command.

kubectl rollout restart daemonset daemonset-fluentd -n default

Verifying status of fluentd service

Verify the status of fluentd service on Protegrity Agent nodes, using the following command.

kubectl get pods -n default -o wide | grep fluentd

Provisioning Karpenter Nodes

Protegrity Agent uses Karpenter to automatically provision dedicated t3.2xlarge nodes.

With Karpenter, nodes are:

  • Created on-demand when pods are pending.
  • Tainted to prevent other workloads.
  • Deprovisioned when idle.

If existing nodes are used, Karpenter provisioning can be disabled in the values.yaml file using the following configuration.

karpenterResources:
  enabled: false

Rewriting Ingress URL

After the helm chart is deployed successfully, two separate ingresses are created.

  • API Ingress
  • UI Ingress

The API Ingress forwards requests unchanged.

Service                   API Endpoint
Protegrity Agent service  /pty/proagent/v0.9/conversations
Protegrity Agent service  /pty/proagent/v0.9/health

The UI Ingress rewrites URLs for the UI. This allows the UI to run with root-based paths internally while being served from a prefixed external URL.

Service     API Endpoint
UI service  /pty/proagent/v0.9/ui
UI service  /pty/proagent/v0.9/ui/dashboard
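The prefix-stripping behavior of the UI Ingress can be illustrated with a small sketch. The function name is hypothetical; it only demonstrates the rewrite, assuming the prefix shown above.

```python
# External UI prefix documented above; internal paths are root-based.
UI_PREFIX = "/pty/proagent/v0.9/ui"

def rewrite_ui_path(external_path):
    """Strip the external UI prefix so the UI container sees root-based paths."""
    if external_path == UI_PREFIX or external_path.startswith(UI_PREFIX + "/"):
        internal = external_path[len(UI_PREFIX):]
        return internal or "/"
    return external_path  # API paths pass through unchanged

print(rewrite_ui_path("/pty/proagent/v0.9/ui/dashboard"))  # -> /dashboard
```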

3.3.2.2.3 - Accessing Protegrity Agent

Steps to access Protegrity Agent

After the helm charts are successfully deployed, the following services can be accessed at these URLs.

Service     Accessing the service
UI          https://your-domain.com/pty/proagent/v0.9/ui
API Health  https://your-domain.com/pty/proagent/v0.9/health
API Docs    https://your-domain.com/pty/proagent/v0.9/docs

3.3.2.3 - Configuring Protegrity Agent with NFA

Steps to configure Protegrity Agent with NFA

API Endpoints

This section provides an overview of the primary API endpoints exposed by Protegrity Agent.

The Protegrity Agent exposes the API on port 8010.

Name             Endpoint
Conversations    /pty/proagent/v0.9/conversations
Responses        /pty/proagent/v0.9/responses
Health Check     /pty/proagent/v0.9/health
Readiness Probe  /pty/proagent/v0.9/ready
Liveness Probe   /pty/proagent/v0.9/live
Version Info     /pty/proagent/v0.9/version
UI Dashboard     /pty/proagent/v0.9/ui

Environment Variables

The Protegrity Agent service uses these environment variables (configured in values.yaml):

Protegrity Agent Application Configuration

  • LOG_LEVEL: Logging level (default: “INFO”)
  • DB_HOST: Database hostname (default: “database-service”)
  • DB_PORT: Database port (default: “5432”)
  • DB_USER: Database user (default: “agentuser”)
  • DB_PASSWORD: Database password
  • DB_NAME: Database name (default: “agentdb”)

Database Configuration

  • PGDATA: PostgreSQL data directory (default: “/var/lib/pgsql/17/data_local”)
  • DB_USER: Database user
  • DB_PASSWORD: Database password
  • DB_NAME: Database name

UI Configuration

  • API_BASE_URL: Protegrity Agent service URL (default: “http://proagent-service:8010”)
  • UI_TITLE: Dashboard title (default: “Protegrity Agent Dashboard”)
  • NODE_ENV: Node environment (default: “production”)

Required Secrets

The following secrets must be configured before deployment:

  • OPENAI_API_KEY: OpenAI API key
  • OPENAI_API_ENDPOINT: OpenAI API endpoint URL
  • OPENAI_API_VERSION: OpenAI API version
  • OPENAI_LLM_MODEL: OpenAI LLM model name
  • OPENAI_EMBEDDING_MODEL: OpenAI embedding model name

RESTful APIs

Server-Sent Events (SSE) provide real-time updates during agent execution.

The response endpoints stream the following event components.

Component         Description
agent_plan        Plan creation and updates
agent_step        Individual step execution progress
attestation       Tool execution attestations with confidence scores
feedback_request  Human feedback requests for clarification
text_delta        Streaming text responses from reasoning steps
execution_result  Final execution outcomes
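The event components above arrive as `data:` lines on the SSE stream. The following is a minimal Python sketch of a client-side parser; the names are illustrative, and the exact wire format may differ.

```python
import json

# Event component types documented in the table above.
KNOWN_COMPONENTS = {
    "agent_plan", "agent_step", "attestation",
    "feedback_request", "text_delta", "execution_result",
}

def parse_sse_data_lines(lines):
    """Yield (component, payload) pairs from 'data:' lines of an SSE stream."""
    for line in lines:
        if not line.startswith("data:"):
            continue
        event = json.loads(line[len("data:"):].strip())
        component = event.get("type")
        if component in KNOWN_COMPONENTS:
            yield component, event

# Illustrative sample stream, not captured from a live deployment.
stream = [
    'data: {"type": "agent_step", "step": 1}',
    'data: {"type": "text_delta", "text": "Creating plan..."}',
]
for component, event in parse_sse_data_lines(stream):
    print(component)
```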

Custom Configurations

To use different container images, update the content of the values.yaml file:

serviceImages:
  proagent: "<your-proagent-image>"
  postgres: "<your-postgres-image>"
  ui: "<your-ui-image>"

To customize storage settings:

storage:
  storageClass: "your-storage-class"
  size: "20Gi"

To customize resource limits:

proagentService:
  resourcesRequired:
    memory: "4Gi"
    cpu: "1000m"
  resourcesLimits:
    memory: "12Gi"
    cpu: "2000m"

Updating the Deployment

To update the deployed cluster with custom values:

  1. Edit the values.yaml file in the helm chart directory, using the following command.

    cd /path/to/proagent/deployment/nfa
    vi proagent/values.yaml
    
  2. Upgrade the deployment using the following command.

    helm upgrade proagent ./proagent \
      --namespace pty-proagent
    

Alternatively, to override specific values from the command line, use the following command.

helm upgrade proagent ./proagent \
  --namespace pty-proagent \
  --set serviceImages.proagent="your-custom-image:tag" \
  --set storage.storageClass="your-storage-class"

3.3.2.4 - Using Protegrity Agent

Using Protegrity Agent

Accessing the Dashboard

The Protegrity Agent dashboard appears after the installation is complete. A Welcome message is displayed prompting users to start a conversation.

The left panel contains Ongoing Conversation and Chat History icons.

Click Chat History to refer to previous requests and results.

The Protegrity Agent version is displayed at the bottom-right of the page. Currently, Protegrity Agent is on version 0.9.

Starting a Conversation

Get started with Protegrity Agent by typing a question in the Start your conversation here… textbox. The Protegrity Agent breaks down the request into actionable steps and executes them, providing live updates in the canvas.

Example to get started with Protegrity Agent

Here is an example. Question to Protegrity Agent: Create a data element for credit card protection

Text Response:

We need to implement data masking for PII in our customer database. Where should we start?
Implementing PII data masking is crucial for GDPR and CCPA compliance. I'll help you design a comprehensive masking strategy. First, I need to understand your data landscape.

Please provide: 1) List of PII fields (SSN, email, phone, etc.), 2) User roles that need access, 3) Your database platform (Snowflake, PostgreSQL, etc.)

How should we handle SSN masking for different user roles?

Based on compliance requirements and operational needs, here's the recommended role-based SSN masking: 1. **Admin/Compliance Officers**: Full SSN access (XXX-XX-XXXX → 123-45-6789) 2. **Customer Support**: Last 4 digits only (XXX-XX-6789) 3. **Analytics/Reporting**: Fully masked (XXX-XX-XXXX) 4. **External Partners**: No access (NULL) This approach follows the principle of least privilege and maintains GDPR compliance by ensuring access is limited to legitimate business needs.

What about email and phone number masking?

Excellent question! For email and phone numbers, we should implement format-preserving masking that maintains data utility while protecting privacy: Email: j***@example.com (show first char + domain) Phone: (XXX) XXX-1234 (show last 4 digits) Let me create those masking policies now.

Can you show me the final data masking implementation?

Here's the complete data masking implementation: ```sql -- Create masking policy for SSN CREATE MASKING POLICY ssn_mask AS (val STRING) RETURNS STRING -> CASE WHEN current_role() IN ('ADMIN', 'COMPLIANCE_OFFICER') THEN val WHEN current_role() = 'CUSTOMER_SUPPORT' THEN 'XXX-XX-' || RIGHT(val, 4) ELSE 'XXX-XX-XXXX' END; -- Apply masking to sensitive columns ALTER TABLE customers MODIFY COLUMN ssn SET MASKING POLICY ssn_mask; ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask; ALTER TABLE customers MODIFY COLUMN phone SET MASKING POLICY phone_mask; ``` This ensures PII is automatically masked based on user roles, meeting GDPR Article 32 requirements.

Canvas Response:
[
  {
    "type": "attestation",
    "id": "att_002",
    "claim": "This data masking implementation reduces PII exposure risk by 95% for non-privileged users",
    "confidence_level": 0.95,
    "evidence_data": {
      "source": "Security impact analysis",
      "collected_at": "2025-10-29T14:45:00Z",
      "data": {
        "pii_fields_masked": 12,
        "compliance_frameworks": [
          "GDPR",
          "CCPA",
          "HIPAA"
        ],
        "roles_with_full_access": 2,
        "roles_with_masked_access": 8,
        "roles_with_partial_access": 3
      }
    },
    "evidence_type": "benchmark_score",
    "verification_method": "Role-based access simulation and compliance mapping",
    "created_at": "2025-10-29T14:45:00"
  }
]

The attributes displayed in the canvas response are listed below.
Attribute            Description
type                 Items of an agent response can be one of several types:
                     - TextResponse: The agent’s text output
                     - AgentPlan: A plan created by the agent
                     - ExecutedStep: An individual step in the plan
                     - FeedbackRequest: A request for user feedback
                     - Attestation: Evidence or reasoning from the agent
id                   Unique identifier for this conversation
claim                The claim or assertion this evidence supports
confidence_level     Confidence level (0-1) in the validity of this evidence
evidence_data        The actual evidence data
                     - source: Source of the evidence
                     - collected_at: When the evidence was collected
                     - data: The evidence payload
evidence_type        Type of evidence being provided
verification_method  Method used to verify or collect this evidence
created_at           When the conversation was created
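The attribute constraints described above, for example confidence_level lying between 0 and 1, can be checked with a small validation sketch. The function is illustrative and not part of the product.

```python
def validate_attestation(item):
    """Check the fields described above on a canvas 'attestation' item."""
    assert item["type"] == "attestation"
    assert 0.0 <= item["confidence_level"] <= 1.0
    # evidence_data must carry source, collection time, and the payload.
    assert {"source", "collected_at", "data"} <= set(item["evidence_data"])
    return True

# Trimmed version of the canvas response shown earlier.
sample = {
    "type": "attestation",
    "id": "att_002",
    "claim": "This data masking implementation reduces PII exposure risk",
    "confidence_level": 0.95,
    "evidence_data": {"source": "Security impact analysis",
                      "collected_at": "2025-10-29T14:45:00Z",
                      "data": {}},
}
print(validate_attestation(sample))
```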

3.3.2.5 - Appendix - Features and Capabilities

List of features and capabilities of Protegrity Agent

Core Agent Functionality

Protegrity Agent Manager

This is the control center of Protegrity Agent. It keeps everything organized and running smoothly.

  • Keeps track of your conversations and remembers what you are working on.

  • Makes sure the planning and execution parts of the agent work together.

  • Shows you live updates as tasks are happening.

  • Can ask for feedback or clarification while working on your requests.

Intelligent Planner

This part of the agent figures out how to get things done, step by step.

  • Understands your requests in plain language and creates a detailed plan.

  • Breaks big or complicated tasks into smaller, manageable steps.

  • Makes sure each step depends on the right things and leads to clear results.

  • Can adjust the plan if something doesn’t work the first time.

  • Uses information from your documents to make smarter plans.

Autonomous Executor

This is the “doer” that carries out the plan, one step at a time.

  • Thinks through each step carefully to handle complex tasks.

  • Only does one action at a time to keep things safe and reliable.

  • Finds the best tools or methods for each step using smart search.

  • Automatically tries again if something fails, up to a few times.

  • Stops quickly if there is a major problem, and lets you know.

  • Tracks how confident it is in each action and keeps a record for you.

Protegrity ESA Integration

Protegrity Agent delivers robust integration with the Protegrity Enterprise Security Agent (ESA) APIs, supporting endpoint models for streamlined data protection management.

System Management

  • Version and health monitoring (getVersionEndpoint, getHealthEndpoint)

  • API documentation access (getDocEndpoint)

  • Log level management (getLogLevelEndpoint, setLogLevelEndpoint)

  • PIM initialization (initializePimEndpoint)

Data Store Management

  • List, create, retrieve, and remove datastores (listDataStoresEndpoint, createDatastoreEndpoint, getDataStoreEndpoint, removeDataStoreEndpoint)

  • Datastore deployment with sample policies (deployDataStoreEndpoint)

Data Element Management

  • Creation and management of encryption elements: AES-128/256 CBC, AES-128/256 CUSP, Triple DES, SHA1 HMAC

  • Tokenization configuration: Numeric, Alphabetic, AlphaNumeric, Printable, Unicode, Binary

  • Format-preserving encryption setup: Numeric, Alphabetic, AlphaNumeric, Unicode variants

  • Specialized tokens: Email, Credit Card, Date (multiple formats), DateTime, Integer, Decimal

  • Custom masking pattern configuration

  • Data element lifecycle management: list, create, retrieve, and remove

Range and Mask Management

  • Unicode range creation and management

  • Custom alphabet definition

  • Mask pattern configuration for data obfuscation

Policy and Rule Management

  • Security policy lifecycle: create, retrieve, list, and remove

  • Policy rule definition with granular permissions (protect, unprotect, re-protect, audit)

  • Role-based access control integration

  • Policy deployment to datastores

Application Management

  • Trusted application registration

  • Application-specific access controls

  • Registered application management

Source and Member Management

  • LDAP source integration for user directories

  • File-based source configuration

  • Source connection testing

  • Member synchronization

  • User and role management

Role Management

  • Security role creation and management

  • Role member assignment

  • Permission configuration

  • Role-based policy rule definition

Deployment Operations

  • Full system deployment orchestration

  • Deployment status tracking

  • Coordinated multi-component deployments

Alphabet Management

  • Custom alphabet creation for tokenization

  • Code point range definition

  • Alphabet retrieval and removal

Advanced Features and Capabilities

The advanced features and capabilities of Protegrity Agent emphasize its robust orchestration, intelligent planning, and autonomous execution. The architecture is designed to efficiently manage complex tasks, ensure seamless coordination, and deliver reliable outcomes through integrated planning and execution components.

Chain-of-Thought Reasoning

Protegrity Agent uses a structured reasoning process to break down complex tasks into manageable steps. Each step follows an Observation → Thought → Action → Result cycle.

When you ask Protegrity Agent to perform a complicated operation, for example, “Set up a policy to protect customer emails and credit card numbers”, it performs the following:

  1. Analyzes the request.

  2. Plans the necessary steps.

  3. Executes each step in sequence, ensuring accuracy.

  4. Provides updates and results for each step.

A maximum of 10 reasoning steps is displayed per execution, with automatic detection when reasoning is complete.
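The capped reasoning loop can be sketched as follows. The step function and names are illustrative only; they model the per-execution limit, not the agent's internal implementation.

```python
MAX_STEPS = 10  # matches the per-execution limit described above

def run_reasoning(step_fn, max_steps=MAX_STEPS):
    """Run Observation -> Thought -> Action -> Result cycles until done."""
    history = []
    for step in range(1, max_steps + 1):
        result = step_fn(step)       # one full cycle
        history.append(result)
        if result.get("done"):       # automatic completion detection
            break
    return history

# Toy step function that finishes on the third cycle.
trace = run_reasoning(lambda n: {"step": n, "done": n == 3})
print(len(trace))  # 3
```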

Semantic Tool Discovery

Protegrity Agent uses semantic search and ranking algorithms (like BM25 and similarity scoring) to find the best API endpoint for your request.

You don’t need to know technical details or specific API names. Just describe what you want, and Protegrity Agent will select the right tool or endpoint to fulfill your request.

If semantic search isn’t available, Protegrity Agent will use common tools to ensure your request is still handled.
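Endpoint selection can be approximated with a toy token-overlap ranking, standing in for the BM25 and similarity scoring mentioned above. The endpoint descriptions are invented for illustration.

```python
# Hypothetical endpoint descriptions; real discovery indexes the PIM API.
ENDPOINTS = {
    "createDatastoreEndpoint": "create a new datastore",
    "listDataStoresEndpoint": "list existing datastores",
    "getHealthEndpoint": "check service health status",
}

def rank_endpoints(query, endpoints=ENDPOINTS):
    """Rank endpoints by token overlap with the query (stand-in for BM25)."""
    q = set(query.lower().split())
    scored = [(len(q & set(desc.split())), name)
              for name, desc in endpoints.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

print(rank_endpoints("create a datastore")[0])
```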

Autonomous Execution and Error Handling

Protegrity Agent can execute multi-step workflows on its own, choosing the right tools and handling errors automatically.

If something goes wrong, for example, a network issue or invalid input, Protegrity Agent:

  1. Retries the operation with exponential backoff, up to 5 retries.
  2. Stops immediately on critical errors like schema validation or timeouts.
  3. Logs errors for debugging and provides feedback to you.

You get reliable execution with minimal manual intervention.
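The retry behavior can be sketched as follows. This simplified version retries every exception with exponential backoff, whereas the agent also stops immediately on critical errors such as schema validation failures.

```python
import time

def with_retries(operation, max_retries=5, base_delay=0.01, sleep=time.sleep):
    """Retry an operation with exponential backoff, up to max_retries retries."""
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_retries:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.01, 0.02, 0.04, ...

attempts = []
def flaky():
    """Toy operation that fails twice, then succeeds."""
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(with_retries(flaky))  # ok
```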

Real-Time Streaming and Feedback

Protegrity Agent uses Server-Sent Events (SSE) to stream live updates during task execution.

As Protegrity Agent works on your request, the dashboard displays:

  • Plan creation and updates.

  • Progress of each step.

  • Confidence scores for tool execution.

  • Requests for clarification, if needed.

  • Final results as soon as they are ready.

The Dashboard always keeps you informed about what’s happening, reducing uncertainty and wait times.

Conversation History Management

Protegrity Agent keeps a persistent history of the conversations and execution context.

Using the Chat History you can:

  • Refer back to previous requests and results.

  • Maintain context across multiple sessions.

  • Export formatted history for debugging or auditing.

This feature ensures continuity and transparency in your interactions.

Database Integration

Protegrity Agent uses a PostgreSQL backend to store conversation and message history, response states, and execution logs. All the interactions and results are securely stored, making it easy to:

  • Audit actions

  • Troubleshoot issues

  • Track progress over time

This enhances reliability, compliance, and traceability.

Containerization and Deployment Flexibility

Protegrity Agent supports Docker and Kubernetes deployments, with multi-stage builds for optimized images.

Protegrity Agent can be deployed with ease in various environments, such as standalone, cloud, or on-premises, ensuring scalability, security, and efficient resource usage.

3.4 - Protectors

List of protectors

3.4.1 - Application Protector

List of Application Protectors

3.4.1.1 - Application Protector Java

Using Application Protector Java with NFA

The Protegrity Application Protector (AP) Java provides APIs that integrate with the customer application to protect, unprotect, and reprotect sensitive data. The AP Java can be used with any customer application that is developed using the Java programming language.

To perform protect and unprotect operations, refer to Application Protector Java APIs.

3.4.1.1.1 - Installing the Application Protector Java

Steps to install Application Protector Java

Setting up the Application Protector Java

The Protegrity Application Protector (AP) Java provides APIs that integrate with the customer application to protect, unprotect, and reprotect sensitive data. The AP Java can be used with any customer application that is developed using the Java programming language.

Prerequisites

For detailed information on the prerequisites, refer to System Requirements.

Integrating the Application Protector Java with NFA

To integrate the Application Protector Java with NFA, perform the following steps:

  1. Prepare the environment using the steps mentioned in the section Preparing the Environment.

  2. Install the Application Protector Java using the steps mentioned in the section Application Protector Java Installation.

    Note: When prompted for the ESA IP address, enter the IP address of the NFA ESA. Similarly, when prompted for the ESA listening port number, enter 25400. This IP address and port number enable the protector to integrate with the NFA ESA.

Post Configuration Steps

For detailed information on the post-configuration steps, refer to Verifying Installation of AP Java.

3.4.1.2 - Application Protector Python

Details about Application Protector Python

The Protegrity Application Protector (AP) Python provides APIs that integrate with the customer application to protect, unprotect, and reprotect sensitive data. The AP Python can be used with any customer application that is developed using the Python programming language.

To perform protect and unprotect operations, refer to Application Protector Python APIs.

3.4.1.2.1 - Installing the Application Protector Python

Steps to install Application Protector Python

Setting up the Application Protector Python

The Protegrity Application Protector (AP) Python provides APIs that integrate with the customer application to protect, unprotect, and reprotect sensitive data. The AP Python can be used with any customer application that is developed using the Python programming language.

Prerequisites

For detailed information on the prerequisites, refer to System Requirements.

Integrating the Application Protector Python with NFA

To integrate the Application Protector Python with NFA, perform the following steps:

  1. Prepare the environment using the steps mentioned in the section Setting up AP Python on Linux.

  2. Install the Application Protector Python using the steps mentioned in the section Installing Log Forwarder on Linux.

    Note: When prompted for the ESA IP address, enter the IP address of the NFA ESA. Similarly, when prompted for the ESA listening port number, enter 25400. This IP address and port number enable the protector to integrate with the NFA ESA.

Post Configuration Steps

For detailed information on the post-configuration steps, refer to Verifying the installation of AP Python.

3.4.1.3 - Application Protector .Net

Details about Application Protector .Net

The Protegrity Application Protector (AP) .Net provides APIs that integrate with customer applications to protect, unprotect, and reprotect sensitive data. It can be used with any application developed using .NET Standard 2.0.

To perform protect and unprotect operations, refer to the section Application Protector .Net APIs.

3.4.1.3.1 - Installing the Application Protector .Net

Steps to install Application Protector .Net

Setting up the Application Protector .Net

The Protegrity Application Protector (AP) .Net provides APIs that integrate with customer applications to protect, unprotect, and reprotect sensitive data. It can be used with any application developed using .NET Standard 2.0.

Prerequisites

For detailed information on the prerequisites, refer to System Requirements.

Integrating the Application Protector .Net with NFA

To integrate the Application Protector .Net with NFA, perform the following steps:

  1. Prepare the environment using the steps mentioned in the section Preparing the Environment.

  2. Install the Application Protector .Net using the steps mentioned in the section Installing the AP .Net Protector.

    Note: When prompted for the ESA IP address, enter the IP address of the NFA ESA. Similarly, when prompted for the ESA listening port number, enter 25400. This IP address and port number enable the protector to integrate with the NFA ESA.

Post Configuration Steps

For detailed information on the post-configuration steps, refer to Configuring the Protector.

3.4.2 - Big Data Protector

List of Big Data Protectors

3.4.2.1 - Amazon EMR Protector

Using Amazon EMR Protector

The Big Data Protector UDFs and APIs provide a robust framework for securing sensitive data within EMR environments on AWS. These components are part of the Protegrity Big Data Protector architecture, enabling developers and data engineers to integrate advanced data protection directly into big data workflows. The User Defined Functions (UDFs) allow seamless encryption, tokenization, and de-tokenization of sensitive fields during Hive and Spark processing. By embedding Protegrity UDFs into SQL queries, organizations can enforce column-level security without altering application logic. This ensures compliance while maintaining analytical performance.

To perform protect and unprotect operations using the User Defined Functions, refer to User Defined Functions and APIs.

3.4.2.1.1 - Installing the Amazon Elastic MapReduce Protector

Steps to install the Amazon Elastic MapReduce Protector

Setting up the Amazon EMR Protector

The Amazon EMR Protector v10.0.0 is part of the Protegrity Big Data Protector suite, designed to secure sensitive data in distributed processing environments on AWS Elastic MapReduce (EMR). This protector enables organizations to run analytics on large-scale datasets while ensuring compliance with stringent data privacy regulations.

The Bootstrap Installer is designed to automate the deployment of the Protegrity Big Data Protector (BDP) components during the creation of an Amazon EMR cluster. By leveraging AWS bootstrap actions, this method ensures that all required libraries, configuration files, and services are installed and configured as part of the cluster initialization process.

The Static Installer provides a manual or scripted approach for installing BDP components on existing EMR clusters. This method is best suited for environments where clusters are persistent or require custom installation steps outside the bootstrap lifecycle.

Prerequisites

For detailed information on the prerequisites for the Bootstrap installer, refer to Verifying the prerequisites.

For detailed information on the prerequisites for the Static installer, refer to Verifying the prerequisites for Static Installer.

Integrating the Amazon EMR Protector with NFA

To integrate the Amazon EMR Protector with NFA, perform the following steps:

  1. Install the EMR protector using the bootstrap installer, following the steps mentioned in the section Using the Bootstrap Installer.

OR

  1. Install the EMR protector using the static installer, following the steps mentioned in the section Using the Static Installer.

    Note: When prompted for the ESA IP address, enter the IP address of the NFA ESA. Similarly, when prompted for the ESA listening port number, enter 25400. This IP address and port number enable the protector to integrate with the NFA ESA.

Post Configuration Steps

For detailed information on the post-configuration steps, refer to Updating the Configuration Parameters.

3.4.2.2 - CDP-AWS-DataHub Protector

Using CDP-AWS-DataHub Protector

The CDP-AWS-DataHub UDFs and APIs provide a robust framework for securing sensitive data within Cloudera Data Platform (CDP) environments on AWS. These components are part of the Protegrity Big Data Protector architecture, enabling developers and data engineers to integrate advanced data protection directly into big data workflows. The User Defined Functions (UDFs) allow seamless encryption, tokenization, and de-tokenization of sensitive fields during Hive, Spark, and Impala operations. By embedding Protegrity UDFs into SQL queries, organizations can enforce column-level security without altering application logic. This ensures compliance while maintaining analytical performance.

To perform protect and unprotect operations using the User Defined Functions, refer to User Defined Functions and APIs.

3.4.2.2.1 - Installing the CDP-AWS-DataHub Protector

Steps to install the CDP-AWS-DataHub Protector

Setting up the CDP-AWS-DataHub Protector

The CDP-AWS-DataHub Protector v10.0.0 secures sensitive data across Cloudera Data Platform (CDP) environments hosted on AWS. The protector leverages Protegrity’s tokenization and encryption features to secure data at rest, in transit, and during processing within AWS DataHub clusters.

Prerequisites

For detailed information on the prerequisites, refer to System Requirements.

Integrating the CDP-AWS-DataHub Protector with NFA

To integrate the CDP-AWS-DataHub Protector with NFA, perform the following steps:

  1. Prepare the environment using the steps mentioned in the section Preparing the Environment.

  2. Install the Big Data Protector using the steps mentioned in the section Installing the Big Data Protector.

    Note: When prompted for the ESA IP address, enter the IP address of the NFA ESA. Similarly, when prompted for the ESA listening port number, enter 25400. This IP address and port number enable the protector to integrate with the NFA ESA.

Post Configuration Steps

For detailed information on the post-configuration steps, refer to Configuring the Big Data Protector.

3.4.3 - Application Protector Java Container

Details about Application Protector Java Container

Application Protector Java Container is a Kubernetes-based solution to perform security operations using Application Protector Java SDKs in a native cloud environment.

To perform protect and unprotect operations, refer to Application Protector Java Container.

3.4.3.1 - Installing the Application Protector Java Container

Steps to install Application Protector Java Container

Setting up the Application Protector Java Container

The Protegrity Application Protector Java Container provides robust and scalable APIs designed to simplify integration of Protegrity functions across your systems. Whether you are building custom applications, streamlining workflows, or enabling third-party access, the APIs offer a secure, reliable, and well-documented interface.

Prerequisites

For detailed information on the prerequisites, refer to System Requirements.

Integrating the Application Protector Java Container with NFA

To integrate the Application Protector Java Container with NFA, perform the following steps:

  1. Prepare the environment using the steps mentioned in the section Preparing the Environment.

  2. Install the Application Protector Java Container using the steps mentioned in the section Installing the Protector.

    Note: When prompted for the ESA IP address, enter the IP address of the NFA ESA. Similarly, when prompted for the ESA listening port number, enter 25400. This IP address and port number enable the protector to integrate with the NFA ESA.

3.4.4 - REST Container

Details about REST Container.

REST Container is a Kubernetes-based solution to perform security operations using REST APIs in a native cloud environment.

To perform protect and unprotect operations, refer to REST Container.

3.4.4.1 - Installing the REST Container

Steps to install REST Container

Setting up the REST Container

The Protegrity REST Container provides a robust and scalable REST API designed to simplify integration of Protegrity functions across your systems. Whether you are building custom applications, streamlining workflows, or enabling third-party access, our API offers secure, reliable, and well-documented endpoints to help you achieve your goals efficiently. With support for standard HTTP methods and JSON payloads, developers can quickly get started.

Prerequisites

For detailed information on the prerequisites, refer to System Requirements.

Integrating the REST Container with NFA

To integrate the REST Container with NFA, perform the following steps:

  1. Prepare the environment using the steps mentioned in the section Preparing the Environment.

  2. Install the REST Container using the steps mentioned in the section Installing the Protector.

    Note: When prompted for the ESA IP address, enter the IP address of the NFA ESA. Similarly, when prompted for the ESA listening port number, enter 25400. This IP address and port number enable the protector to integrate with the NFA ESA.

3.4.5 - Cloud Protector

List of Cloud Protectors

This feature is being developed and will be available shortly.

Cloud Protectors keep your data safe when using cloud services like AWS. They work with tools such as Snowflake, Redshift, and Athena to protect sensitive information during queries and analytics. These protectors apply security rules, such as encryption and masking. This ensures that your data stays secure while moving through cloud-based workflows. They are designed to integrate easily with your existing cloud setup, making protection seamless without slowing down performance.

3.5 - Gen AI Add-ons

Gen AI Add-on features

3.5.1 - Anonymization

Using Anonymization with NFA

Protegrity Anonymization is a software solution that processes data by removing personal information and transforming the remaining details to protect privacy. In simple terms, it takes raw data as input, applies techniques like generalization and summarization, and outputs anonymized data. This output can be used for analysis without revealing individual identities.

For more information about Anonymization, refer to Anonymization.

3.5.1.1 - Installing Anonymization

Steps to install Anonymization

Helm Deployment

This project deploys the Protegrity Anonymization SDK stack on Amazon EKS as a Protegrity AI Team Edition Feature.
It uses Helm to deploy Kubernetes workloads.

Requirements

Before proceeding, ensure the following prerequisites are met:

  • An AWS account and CLI credentials for aws configure are available.
  • An existing VPC with at least two private subnets is available.
  • Helm and kubectl are installed.
  • Kubernetes cluster access with sufficient permissions to create namespaces, deployments, services, and ingress rules is available.
  • Protegrity AI Team Edition is installed.

Deployment Steps

  1. Add the Protegrity Anonymization SDK Helm repository using the following command.

    helm repo add protegrity-charts https://artifactory.protegrity.com/artifactory/helm-charts
    helm repo update
    
  2. Create an override_values.yaml file with specific configuration details, such as:

  • Namespace
  • Image locations
  • Connection information
  • Resource limits
  • Any other necessary parameters
    For more information about all configurable parameters, refer to the values.yaml file.

    Note: This is an optional step.

  3. Deploy the stack using the following command.

    helm upgrade --install pty-anonymization protegrity-charts/pty-anonymization -n anon-ns --create-namespace --version=1.4.0

  4. If the override_values.yaml file is created, run the installation with the file specified using the following command.

    helm upgrade --install pty-anonymization protegrity-charts/pty-anonymization -n <NameSpace> --create-namespace --values override_values.yaml

  5. Monitor the deployment process using the following command.

    kubectl get pods -n anon-ns

  6. After all pods are in the Running state, verify that the services are accessible.
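For reference, an override_values.yaml might look like the fragment below. The key names shown are illustrative assumptions; the authoritative list of parameters is in the chart's values.yaml.

```yaml
# Illustrative override_values.yaml - key names are assumptions;
# consult the chart's values.yaml for the actual parameters.
namespace: anon-ns
image:
  repository: <your-registry>/pty-anonymization
  tag: "1.4.0"
resources:
  limits:
    cpu: "2"
    memory: 4Gi
```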

3.5.1.2 - Uninstalling and Cleanup Anonymization

Steps to uninstall and clean up Anonymization

Uninstall and clean up Protegrity Anonymization using the following command.

helm uninstall pty-anonymization -n anon-ns --wait --timeout 300s

3.5.2 - Synthetic Data

Using Synthetic Data with NFA

Protegrity Synthetic Data is a privacy-enhancing technology that uses real datasets to create artificial data. It does not represent real individuals and has no connection to real people. However, it still provides strong analytical utility and preserves relationships between variables.

For more information about Synthetic Data, refer to Synthetic Data.

3.5.2.1 - Installing Synthetic Data

Steps to install Synthetic Data

Helm Deployment

This project deploys the Protegrity Synthetic Data stack on Amazon EKS as a Protegrity AI Team Edition Feature.
It uses Helm to deploy Kubernetes workloads.

Requirements

Before proceeding, ensure the following prerequisites are met:

  • An AWS account and CLI credentials for aws configure are available.
  • An existing VPC with at least two private subnets is available.
  • Helm and kubectl are installed.
  • Kubernetes cluster access with sufficient permissions to create namespaces, deployments, services, and ingress rules is available.
  • Protegrity AI Team Edition is installed.

Deployment Steps

  1. Add the Protegrity Synthetic Data Helm repository using the following command.

    helm repo add protegrity-charts https://artifactory.protegrity.com/artifactory/helm-charts
    helm repo update
    
  2. Create an override_values.yaml file with specific configuration details, such as:

  • Namespace
  • Image locations
  • Connection information
  • Resource limits
  • Any other necessary parameters
    For more information about all configurable parameters, refer to the values.yaml file.

    Note: This is an optional step.

  3. Deploy the stack using the following command.

    helm upgrade --install pty-synthetic-data protegrity-charts/pty-synthetic-data -n syntheticdata-ns --create-namespace

  4. If the override_values.yaml file is created, run the installation with the file specified using the following command.

    helm upgrade --install pty-synthetic-data protegrity-charts/pty-synthetic-data -n <NameSpace> --create-namespace --values override_values.yaml

  5. Monitor the deployment process using the following command.

    kubectl get pods -n syntheticdata-ns

  6. After all pods are in the Running state, verify that the services are accessible.

3.5.2.2 - Uninstalling and Cleanup Synthetic Data

Steps to uninstall and clean up Synthetic Data

Uninstall and clean up Protegrity Synthetic Data using the following command.

helm uninstall pty-synthetic-data -n syntheticdata-ns --wait --timeout 300s