1 - Semantic Guardrails

Using Semantic Guardrails with NFA

Protegrity’s Semantic Guardrails solution is a security guardrail engine for AI systems. It evaluates risks in GenAI systems, such as chatbots, workflows, and agents, using advanced semantic analytics and intent classification to detect potentially malicious messages. PII detection can also be enabled for comprehensive security coverage.

For more information about Semantic Guardrails, refer to the Semantic Guardrails documentation.

1.1 - Prerequisites

Prerequisites to install Semantic Guardrails

Before you begin

Ensure that the following requirements are met before installing Semantic Guardrails with NFA.

  • The NFA cluster is installed and accessible.
  • For PII detection, Data Discovery service is installed.
  • Kubernetes command-line tool, kubectl, is configured for the NFA cluster.
  • helm CLI v3.x is installed.
  • Access to relevant container images is available.
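As a quick sanity check before proceeding, the CLI prerequisites can be verified with a small shell snippet. This is only a sketch; the tool list is illustrative and can be extended as needed.

```shell
# Sketch: verify that required CLI tools are on PATH before installing.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "all tools found"
}

check_tools kubectl helm || true
```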

1.2 - Installing Semantic Guardrails

Steps to install Semantic Guardrails

This section describes the steps to install Semantic Guardrails.

Installing Data Discovery

For PII detection, it is recommended to install the Data Discovery service before installing Semantic Guardrails.

For more information about installing Data Discovery service, refer to Installing Data Discovery.

To verify the Data Discovery service status, run the following command.

kubectl get pods -n data-discovery

1. Preparing and Installing

It is recommended to install Semantic Guardrails from the Artifactory repository.

NOTE: If the installation fails because the namespace already exists, uninstall the previous Semantic Guardrails deployment before retrying.

Installing from Artifactory repository

Perform the following steps.

  1. To add the Semantic Guardrails repository, run the following command.

    helm repo add protegrity-charts https://artifactory.protegrity.com/artifactory/helm-charts/
    
  2. To update the repository index, run the following command.

    helm repo update
    
  3. To install the helm chart, run the following command.

    helm upgrade --install nfa-semantic-guardrails protegrity-charts/semantic-guardrails --version 1.1.0 --namespace pty-semantic-guardrails --create-namespace
    

Installing using a pre-packaged .tgz file

Packaging

If the source code is available, the Semantic Guardrails chart can be packaged.

Perform the following steps.

  1. To navigate to the chart directory, run the following command.

    cd deployment/nfa
    
  2. To create a package, run the following command.

    helm package charts/semantic-guardrails
    

Installing

If the semantic-guardrails-1.1.0.tgz file is available, install it using the following command.

helm upgrade --install nfa-semantic-guardrails ./semantic-guardrails-1.1.0.tgz \
--namespace pty-semantic-guardrails \
--create-namespace

2. Verifying Deployment Status

To verify the deployment status, run the following command.

kubectl get pods -n pty-semantic-guardrails

After the Semantic Guardrails feature is successfully deployed, the expected output is as follows.

NAME                                                 READY   STATUS    RESTARTS   AGE
semantic-guardrails-deployment-xxxxxxxxxx-xxxxx     1/1     Running   0          2m
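The readiness check can also be scripted. The following sketch parses `kubectl get pods`-style output and reports whether every pod is Running; it is shown here against a hard-coded sample so it runs without a cluster.

```shell
# Sketch: exit non-zero if any pod line reports a STATUS other than Running.
# In practice, pipe `kubectl get pods -n pty-semantic-guardrails` into all_running.
all_running() {
  awk 'NR > 1 && $3 != "Running" { bad = 1 } END { exit bad }'
}

# Hard-coded sample mirroring the expected output above
printf '%s\n' \
  'NAME                                              READY   STATUS    RESTARTS   AGE' \
  'semantic-guardrails-deployment-xxxxxxxxxx-xxxxx   1/1     Running   0          2m' \
  | all_running && echo "all pods Running"
```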

3. Verifying the Service Status

To verify the service status, run the following command.

kubectl get svc -n pty-semantic-guardrails

After the Semantic Guardrails feature is successfully deployed, the expected output is as follows.

NAME                              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
semantic-guardrails-service       ClusterIP   172.20.109.155   <none>        8001/TCP   3h

1.3 - Testing the Semantic Guardrails deployment with NFA

Steps to test the deployment for Semantic Guardrails

Perform the following steps to test the Semantic Guardrails deployment.

1. Testing Semantic Guardrails API

To test the Semantic Guardrails API endpoint, run the following command.

Note: The endpoints require authentication.

curl -k -X POST https://eclipse.aws.protegrity.com/pty/semantic-guardrails/v1.1/conversations/messages/scan \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_JWT_TOKEN' \
  --data '{
    "messages": [
      {
        "from": "user",
        "to": "ai", 
        "content": "This is a test message for semantic analysis",
        "processors": ["financial"]
      }
    ]
  }'

The response should be similar to the following.


{
  "from": "user",
  "to": "ai",
  "content": "This is a test message for semantic analysis",
  "outcome": "approved",
  "score": 0.2,
  "explanation": "in-domain"
}
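In automation, the outcome field is usually what matters. The following is a minimal sed-based extraction sketch using a hard-coded sample response; it assumes a flat, single-object JSON body, and a JSON-aware tool such as jq is preferable where available.

```shell
# Sketch: pull the "outcome" value out of a scan response.
# Hard-coded sample response; in practice this would be the curl output.
response='{"outcome": "approved", "score": 0.2, "explanation": "in-domain"}'
outcome=$(printf '%s' "$response" | sed -n 's/.*"outcome": *"\([^"]*\)".*/\1/p')

if [ "$outcome" = "approved" ]; then
  echo "message approved"
else
  echo "message blocked or flagged: $outcome"
fi
```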

2. Testing Data Discovery Integration

If Data Discovery is installed, test the Data Discovery integration by running the following command.

curl -k -X POST https://eclipse.aws.protegrity.com/pty/semantic-guardrails/v1.1/conversations/messages/scan \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer YOUR_JWT_TOKEN' \
  --data '{
    "messages": [
      {
        "from": "ai",
        "to": "user", 
        "content": "My name is John Smith, my credit card number is 15342724758",
        "processors": ["pii"]
      }
    ]
  }'

The response should be similar to the following.

{
  "from": "ai",
  "to": "user",
  "content": "My name is John Smith, my credit card number is 15342724758",
  "outcome": "rejected",
  "score": 0.9,
  "explanation": "['NAME' : 'John Smith', 'CCN' : '15342724758']"
}

1.4 - Configuring Semantic Guardrails with NFA

Steps to configure Semantic Guardrails with NFA

This service provides AI conversation scanning and semantic analysis capabilities for Semantic Guardrails.

API Endpoints

This section provides an overview of the primary endpoints.

Name         Endpoint
Main API     /v1/conversations/messages
Models API   /pty/semantic-guardrail/v1.1/domain-models/

Environment variables

The semantic-guardrail service uses these environment variables:

Variable                Description
LOG_LEVEL               Logging level (INFO/DEBUG)
PRODUCTION_LOGGING      Enables production logging (true/false)
DATA_DISCOVERY_SEARCH   Enables PII detection search (true/false)
DATA_DISCOVERY_URL      Data Discovery classification service hostname
                        (default: “classification-service.data-discovery”)
DATA_DISCOVERY_PORT     Data Discovery classification service port
                        (default: 8050)
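How these variables are set depends on the chart. A hypothetical values.yaml fragment might look as follows; the env key name is illustrative, so check the chart’s values.yaml for its actual structure.

```yaml
# Illustrative values.yaml fragment (key names are assumptions)
env:
  LOG_LEVEL: "DEBUG"
  PRODUCTION_LOGGING: "true"
  DATA_DISCOVERY_SEARCH: "true"
  DATA_DISCOVERY_URL: "classification-service.data-discovery"
  DATA_DISCOVERY_PORT: "8050"
```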

Custom configurations

To use a different image, update the values.yaml file.

serviceImages:
  semanticGuardrails: "<your-semantic-guardrails-image>"

To update the deployed cluster, run the following commands.

helm template semantic-guardrails charts/semantic-guardrails > semantic_guardrails.yaml
kubectl delete -f semantic_guardrails.yaml
kubectl apply -f semantic_guardrails.yaml

1.5 - Uninstalling Semantic Guardrails

Steps to uninstall Semantic Guardrails

Perform the following steps to uninstall Semantic Guardrails.

Uninstalling Semantic Guardrails

To uninstall Semantic Guardrails, run the following command.

helm uninstall nfa-semantic-guardrails -n pty-semantic-guardrails

Uninstalling Data Discovery

If Data Discovery is no longer needed, uninstall the Data Discovery service.

To uninstall Data Discovery, run the following command.

helm uninstall data-discovery -n data-discovery

2 - Protegrity Agent

Using Protegrity Agent with NFA

Protegrity Agent is an intelligent agentic AI system designed for Data Protection architects and administrators. Protegrity Agent manages the Protegrity Policy, Data Elements, Roles, Masks, Data Stores and other configurations through natural language conversations. The system provides automated planning and execution capabilities for complex data protection workflows, including policy management, data element configuration, and security rule deployment.

Protegrity Agent leverages advanced Large Language Model (LLM) capabilities with chain-of-thought reasoning to break down complex user requests into executable step-by-step plans. The agent orchestrates operations across the Protegrity ecosystem through the Policy Management (PIM) API, providing an intuitive chatbot-like interface for sophisticated data protection management tasks.

The key capabilities of Protegrity Agent include:

  • Intelligent Planning: Decomposes complex user queries into hierarchical, executable task plans.
  • Natural Language Interface: Manages data protection configurations using conversational API.
  • Autonomous Execution: Executes multi-step workflows with adaptive tool selection and error recovery.
  • Real-time Streaming: Displays Server-Sent Events (SSE) for live progress updates and intermediate results.
  • Enterprise Integration: Integrates with Protegrity PIM seamlessly through comprehensive API coverage.
  • Semantic Tool Discovery: Uses RAG-based tool selection for optimal endpoint matching.
  • Conversation Management: Tracks conversation history with context-aware interactions.

2.1 - Prerequisites

Prerequisites to install Protegrity Agent

Ensure that the following requirements are met before installing Protegrity Agent with NFA.

  • Kubernetes cluster with RBAC enabled is installed and accessible.
  • Kubernetes command-line tool, kubectl, is configured for the cluster.
  • helm CLI v3.x is installed.
  • Storage class is configured. The chart uses ebs-sc by default.
  • Access to container images from Protegrity Artifactory is available.
  • Required secrets are configured.
    • OPENAI_API_KEY
    • OPENAI_API_ENDPOINT
    • OPENAI_API_VERSION
    • OPENAI_LLM_MODEL
    • OPENAI_EMBEDDING_MODEL
  • The following ports are required while working with Protegrity Agent.
    • Port 8010: FastAPI backend used by Protegrity Agent Service.
    • Port 3000: Web interface for accessing the UI Dashboard.
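The OpenAI secrets can be provided as a standard Kubernetes Secret. The following is a hypothetical manifest; the secret name and key placement are illustrative, so consult the chart’s values.yaml for the names it actually references.

```yaml
# Illustrative Secret manifest (name and namespace are assumptions)
apiVersion: v1
kind: Secret
metadata:
  name: proagent-openai-secrets
  namespace: pty-proagent
type: Opaque
stringData:
  OPENAI_API_KEY: "<your-api-key>"
  OPENAI_API_ENDPOINT: "<your-api-endpoint>"
  OPENAI_API_VERSION: "<your-api-version>"
  OPENAI_LLM_MODEL: "<your-llm-model>"
  OPENAI_EMBEDDING_MODEL: "<your-embedding-model>"
```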

2.2 - Installing Protegrity Agent

Steps to install Protegrity Agent

Protegrity Agent can be installed using a helm chart. The helm chart deploys the following components.

Component                 Description
Protegrity Agent Service  Main application service
PostgreSQL Database       Persistent database for conversation storage
UI Dashboard              Web interface for Protegrity Agent management
Ingress                   External access configuration for API and UI endpoints

2.2.1 - Installing Protegrity Agent

Steps to install Protegrity Agent

1. Preparing and Installing

Protegrity Agent can be installed directly from the helm chart in the repository.

NOTE: If the installation fails due to an existing namespace, uninstall the previous deployment before proceeding further.

Perform the following steps to install the helm chart.

  1. Navigate to the Protegrity Agent deployment directory, using the following command.

    cd /path/to/proagent/deployment/nfa
    
  2. Install the helm chart with required configuration, using the following command.

    helm install proagent ./proagent \
      --namespace pty-proagent \
      --create-namespace
    

2. Verifying Deployment Status

Verify the deployment status, using the following command.

kubectl get pods -n pty-proagent

After Protegrity Agent is successfully deployed, the expected output includes:

NAME                                     READY   STATUS    RESTARTS   AGE
proagent-deployment-xxxxxxxxxx-xxxxx     1/1     Running   0          2m
database-statefulset-0                   1/1     Running   0          2m
proagent-ui-deployment-xxxxxxxxxx-xxxxx  1/1     Running   0          2m

Verifying Karpenter nodes

Verify the Karpenter nodes, using the following command.

kubectl get nodes -l karpenter.sh/nodepool=proagent

Verifying Ingress

Verify the ingress status, using the following command.

kubectl get ingress -n pty-proagent

3. Verifying the Service Status

To verify the service status, use the following command:

kubectl get svc -n pty-proagent

After Protegrity Agent is successfully deployed, the expected output includes:

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
proagent-service       ClusterIP   172.20.x.x       <none>        8010/TCP   3h
database-service       ClusterIP   172.20.x.x       <none>        5432/TCP   3h
proagent-ui-service    ClusterIP   172.20.x.x       <none>        3000/TCP   3h

4. Testing the API

Test the Protegrity Agent API health endpoint, using the following command.

kubectl exec -n pty-proagent deployment/proagent-deployment -- \
  curl -s http://localhost:8010/health

2.2.2 - Post deployment configurations

Post-deployment configuration steps for Protegrity Agent

Configuring fluentd service for Logging

After the helm chart is successfully deployed, patch the fluentd daemonset so that it can be scheduled on Protegrity Agent nodes for log collection.

The Protegrity Agent nodes have two taints which need matching toleration configurations.

Add the toleration for logging taint, using the following command.

kubectl patch daemonset daemonset-fluentd -n default --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/template/spec/tolerations/-",
    "value": {
      "key": "logging",
      "operator": "Equal",
      "value": "fluentd",
      "effect": "NoSchedule"
    }
  }
]'

After applying the patch, restart the fluentd services to apply the changes, using the following command.

kubectl rollout restart daemonset daemonset-fluentd -n default

Verifying status of fluentd service

Verify the status of fluentd service on Protegrity Agent nodes, using the following command.

kubectl get pods -n default -o wide | grep fluentd

Provisioning Karpenter Nodes

Protegrity Agent uses Karpenter to automatically provision dedicated t3.2xlarge nodes.

With Karpenter, nodes are:

  • Created on-demand when pods are pending.
  • Tainted to prevent other workloads.
  • Deprovisioned when idle.

To use existing nodes instead, disable Karpenter provisioning in values.yaml using the following configuration.

karpenterResources:
  enabled: false

Rewriting Ingress URL

After the helm chart is deployed successfully, two separate ingresses are created.

  • API Ingress
  • UI Ingress

The API Ingress forwards requests unchanged.

Service                   API Endpoint
Protegrity Agent service  /pty/proagent/v0.9/conversations
Protegrity Agent service  /pty/proagent/v0.9/health

The UI Ingress rewrites URLs for the UI. This allows the UI to run with root-based paths internally while being served from a prefixed external URL.

Service     API Endpoint
UI service  /pty/proagent/v0.9/ui
UI service  /pty/proagent/v0.9/ui/dashboard

2.2.3 - Accessing Protegrity Agent

Steps to access Protegrity Agent

After the helm chart is successfully deployed, the services can be accessed at the following URLs.

Service     Accessing the service
UI          https://your-domain.com/pty/proagent/v0.9/ui
API Health  https://your-domain.com/pty/proagent/v0.9/health
API Docs    https://your-domain.com/pty/proagent/v0.9/docs

2.3 - Configuring Protegrity Agent with NFA

Steps to configure Protegrity Agent with NFA

API Endpoints

This section provides an overview of the primary API endpoints exposed by Protegrity Agent.

The Protegrity Agent exposes the API on port 8010.

Name             Endpoint
Conversations    /pty/proagent/v0.9/conversations
Responses        /pty/proagent/v0.9/responses
Health Check     /pty/proagent/v0.9/health
Readiness Probe  /pty/proagent/v0.9/ready
Liveness Probe   /pty/proagent/v0.9/live
Version Info     /pty/proagent/v0.9/version
UI Dashboard     /pty/proagent/v0.9/ui

Environment Variables

The Protegrity Agent service uses these environment variables (configured in values.yaml):

Protegrity Agent Application Configuration

  • LOG_LEVEL: Logging level (default: “INFO”)
  • DB_HOST: Database hostname (default: “database-service”)
  • DB_PORT: Database port (default: “5432”)
  • DB_USER: Database user (default: “agentuser”)
  • DB_PASSWORD: Database password
  • DB_NAME: Database name (default: “agentdb”)

Database Configuration

  • PGDATA: PostgreSQL data directory (default: “/var/lib/pgsql/17/data_local”)
  • DB_USER: Database user
  • DB_PASSWORD: Database password
  • DB_NAME: Database name

UI Configuration

  • API_BASE_URL: Protegrity Agent service URL (default: “http://proagent-service:8010”)
  • UI_TITLE: Dashboard title (default: “Protegrity Agent Dashboard”)
  • NODE_ENV: Node environment (default: “production”)

Required Secrets

The following secrets must be configured before deployment:

  • OPENAI_API_KEY: OpenAI API key
  • OPENAI_API_ENDPOINT: OpenAI API endpoint URL
  • OPENAI_API_VERSION: OpenAI API version
  • OPENAI_LLM_MODEL: OpenAI LLM model name
  • OPENAI_EMBEDDING_MODEL: OpenAI embedding model name

RESTful APIs

Server-Sent Events (SSE) provide real-time updates during agent execution. The response endpoints stream the following event components.

Component         Description
agent_plan        Plan creation and updates
agent_step        Individual step execution progress
attestation       Tool execution attestations with confidence scores
feedback_request  Human feedback requests for clarification
text_delta        Streaming text responses from reasoning steps
execution_result  Final execution outcomes
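An SSE stream is plain text in which each event carries one or more data: lines. The following sketch parses such a stream; it is fed a hard-coded sample here, while in practice the input would be piped from curl -N against the responses endpoint.

```shell
# Sketch: print the payload of each "data:" line in an SSE stream.
parse_sse() {
  while IFS= read -r line; do
    case "$line" in
      data:*) echo "event: ${line#data: }" ;;
    esac
  done
}

# Hard-coded sample stream with two events
printf 'data: {"type": "agent_plan"}\n\ndata: {"type": "text_delta"}\n' | parse_sse
```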

Custom Configurations

To use different container images, update the content of the values.yaml file:

serviceImages:
  proagent: "<your-proagent-image>"
  postgres: "<your-postgres-image>"
  ui: "<your-ui-image>"

To customize storage settings:

storage:
  storageClass: "your-storage-class"
  size: "20Gi"

To customize resource limits:

proagentService:
  resourcesRequired:
    memory: "4Gi"
    cpu: "1000m"
  resourcesLimits:
    memory: "12Gi"
    cpu: "2000m"

Updating the Deployment

To update the deployed cluster with custom values:

  1. Edit the values.yaml file in the helm chart directory, using the following commands.

    cd /path/to/proagent/deployment/nfa
    vi proagent/values.yaml
    
  2. Upgrade the deployment using the following command.

    helm upgrade proagent ./proagent \
      --namespace pty-proagent

Alternatively, to override specific values from the command line, use the following command.

helm upgrade proagent ./proagent \
  --namespace pty-proagent \
  --set serviceImages.proagent="your-custom-image:tag" \
  --set storage.storageClass="your-storage-class"

2.4 - Using Protegrity Agent

Using Protegrity Agent

Accessing the Dashboard

The Protegrity Agent dashboard is available after the helm chart is successfully deployed. A Welcome message is displayed prompting users to start a conversation.

The left panel contains Ongoing Conversation and Chat History icons.

Click Chat History to refer to previous requests and results.

The Protegrity Agent version is displayed at the bottom-right of the page. Currently, Protegrity Agent is on version 0.9.

Starting a Conversation

Get started with Protegrity Agent by typing a question in the Start your conversation here… textbox. The Protegrity Agent breaks down the request into actionable steps and executes them, providing live updates in the canvas.

Example to get started with Protegrity Agent

Here is an example. Question to Protegrity Agent: Create a data element for credit card protection

Text Response:

We need to implement data masking for PII in our customer database. Where should we start?
Implementing PII data masking is crucial for GDPR and CCPA compliance. I'll help you design a comprehensive masking strategy. First, I need to understand your data landscape.

Please provide: 1) List of PII fields (SSN, email, phone, etc.), 2) User roles that need access, 3) Your database platform (Snowflake, PostgreSQL, etc.)

How should we handle SSN masking for different user roles?

Based on compliance requirements and operational needs, here's the recommended role-based SSN masking: 1. **Admin/Compliance Officers**: Full SSN access (XXX-XX-XXXX → 123-45-6789) 2. **Customer Support**: Last 4 digits only (XXX-XX-6789) 3. **Analytics/Reporting**: Fully masked (XXX-XX-XXXX) 4. **External Partners**: No access (NULL) This approach follows the principle of least privilege and maintains GDPR compliance by ensuring access is limited to legitimate business needs.

What about email and phone number masking?

Excellent question! For email and phone numbers, we should implement format-preserving masking that maintains data utility while protecting privacy: Email: j***@example.com (show first char + domain) Phone: (XXX) XXX-1234 (show last 4 digits) Let me create those masking policies now.

Can you show me the final data masking implementation?

Here's the complete data masking implementation: ```sql -- Create masking policy for SSN CREATE MASKING POLICY ssn_mask AS (val STRING) RETURNS STRING -> CASE WHEN current_role() IN ('ADMIN', 'COMPLIANCE_OFFICER') THEN val WHEN current_role() = 'CUSTOMER_SUPPORT' THEN 'XXX-XX-' || RIGHT(val, 4) ELSE 'XXX-XX-XXXX' END; -- Apply masking to sensitive columns ALTER TABLE customers MODIFY COLUMN ssn SET MASKING POLICY ssn_mask; ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask; ALTER TABLE customers MODIFY COLUMN phone SET MASKING POLICY phone_mask; ``` This ensures PII is automatically masked based on user roles, meeting GDPR Article 32 requirements.

Canvas Response:
[
  {
    "type": "attestation",
    "id": "att_002",
    "claim": "This data masking implementation reduces PII exposure risk by 95% for non-privileged users",
    "confidence_level": 0.95,
    "evidence_data": {
      "source": "Security impact analysis",
      "collected_at": "2025-10-29T14:45:00Z",
      "data": {
        "pii_fields_masked": 12,
        "compliance_frameworks": [
          "GDPR",
          "CCPA",
          "HIPAA"
        ],
        "roles_with_full_access": 2,
        "roles_with_masked_access": 8,
        "roles_with_partial_access": 3
      }
    },
    "evidence_type": "benchmark_score",
    "verification_method": "Role-based access simulation and compliance mapping",
    "created_at": "2025-10-29T14:45:00"
  }
]

The attributes displayed in the canvas response are listed below.

Attribute            Description
type                 Items of an agent response can be one of several types:
                     - TextResponse: The agent’s text output
                     - AgentPlan: A plan created by the agent
                     - ExecutedStep: An individual step in the plan
                     - FeedbackRequest: A request for user feedback
                     - Attestation: Evidence or reasoning from the agent
id                   Unique identifier for this conversation
claim                The claim or assertion this evidence supports
confidence_level     Confidence level (0-1) in the validity of this evidence
evidence_data        The actual evidence data:
                     - source: Source of the evidence
                     - collected_at: When the evidence was collected
                     - data: The evidence payload
evidence_type        Type of evidence being provided
verification_method  Method used to verify or collect this evidence
created_at           When the conversation was created

2.5 - Appendix - Features and Capabilities

List of features and capabilities of Protegrity Agent

Core Agent Functionality

Protegrity Agent Manager

This is the control center of Protegrity Agent. It keeps everything organized and running smoothly.

  • Keeps track of your conversations and remembers what you are working on.

  • Makes sure the planning and execution parts of the agent work together.

  • Shows you live updates as tasks are happening.

  • Can ask for feedback or clarification while working on your requests.

Intelligent Planner

This part of the agent figures out how to get things done, step by step.

  • Understands your requests in plain language and creates a detailed plan.

  • Breaks big or complicated tasks into smaller, manageable steps.

  • Makes sure each step depends on the right things and leads to clear results.

  • Can adjust the plan if something doesn’t work the first time.

  • Uses information from your documents to make smarter plans.

Autonomous Executor

This is the “doer” that carries out the plan, one step at a time.

  • Thinks through each step carefully to handle complex tasks.

  • Only does one action at a time to keep things safe and reliable.

  • Finds the best tools or methods for each step using smart search.

  • Automatically tries again if something fails, up to a few times.

  • Stops quickly if there is a major problem, and lets you know.

  • Tracks how confident it is in each action and keeps a record for you.

Protegrity ESA Integration

Protegrity Agent delivers robust integration with the Protegrity Enterprise Security Agent (ESA) APIs, supporting endpoint models for streamlined data protection management.

System Management

  • Version and health monitoring (getVersionEndpoint, getHealthEndpoint)

  • API documentation access (getDocEndpoint)

  • Log level management (getLogLevelEndpoint, setLogLevelEndpoint)

  • PIM initialization (initializePimEndpoint)

Data Store Management

  • List, create, retrieve, and remove datastores (listDataStoresEndpoint, createDatastoreEndpoint, getDataStoreEndpoint, removeDataStoreEndpoint)

  • Datastore deployment with sample policies (deployDataStoreEndpoint)

Data Element Management

  • Creation and management of encryption elements: AES-128/256 CBC, AES-128/256 CUSP, Triple DES, SHA1 HMAC

  • Tokenization configuration: Numeric, Alphabetic, AlphaNumeric, Printable, Unicode, Binary

  • Format-preserving encryption setup: Numeric, Alphabetic, AlphaNumeric, Unicode variants

  • Specialized tokens: Email, Credit Card, Date (multiple formats), DateTime, Integer, Decimal

  • Custom masking pattern configuration

  • Data element lifecycle management: list, create, retrieve, and remove

Range and Mask Management

  • Unicode range creation and management

  • Custom alphabet definition

  • Mask pattern configuration for data obfuscation

Policy and Rule Management

  • Security policy lifecycle: create, retrieve, list, and remove

  • Policy rule definition with granular permissions (protect, unprotect, re-protect, audit)

  • Role-based access control integration

  • Policy deployment to datastores

Application Management

  • Trusted application registration

  • Application-specific access controls

  • Registered application management

Source and Member Management

  • LDAP source integration for user directories

  • File-based source configuration

  • Source connection testing

  • Member synchronization

  • User and role management

Role Management

  • Security role creation and management

  • Role member assignment

  • Permission configuration

  • Role-based policy rule definition

Deployment Operations

  • Full system deployment orchestration

  • Deployment status tracking

  • Coordinated multi-component deployments

Alphabet Management

  • Custom alphabet creation for tokenization

  • Code point range definition

  • Alphabet retrieval and removal

Advanced Features and Capabilities

The advanced features and capabilities of the Protegrity Agent emphasize its robust orchestration, intelligent planning, and autonomous execution. The architecture is designed to efficiently manage complex tasks, ensure seamless coordination, and deliver reliable outcomes through integrated planning and execution components.

Chain-of-Thought Reasoning

Protegrity Agent uses a structured reasoning process to break down complex tasks into manageable steps. Each step follows an Observation → Thought → Action → Result cycle.

When you ask Protegrity Agent to perform a complicated operation, for example, “Set up a policy to protect customer emails and credit card numbers”, it performs the following:

  1. Analyzes the request.

  2. Plans the necessary steps.

  3. Executes each step in sequence, ensuring accuracy.

  4. Provides updates and results for each step.

A maximum of 10 reasoning steps per execution is displayed, with automatic detection when reasoning is complete.

Semantic Tool Discovery

Protegrity Agent uses semantic search and ranking algorithms (like BM25 and similarity scoring) to find the best API endpoint for your request.

You don’t need to know technical details or specific API names. Just describe what you want, and Protegrity Agent will select the right tool or endpoint to fulfill your request.

If semantic search isn’t available, Protegrity Agent will use common tools to ensure your request is still handled.

Autonomous Execution and Error Handling

Protegrity Agent can execute multi-step workflows on its own, choosing the right tools and handling errors automatically.

If something goes wrong, for example, a network issue or invalid input, Protegrity Agent:

  1. Retries the operation with exponential backoff, up to 5 retries.
  2. Stops immediately on critical errors like schema validation or timeouts.
  3. Logs errors for debugging and provides feedback to you.

You get reliable execution with minimal manual intervention.

Real-Time Streaming and Feedback

Protegrity Agent uses Server-Sent Events (SSE) to stream live updates during task execution.

As Protegrity Agent works on your request, the dashboard displays:

  • Plan creation and updates.

  • Progress of each step.

  • Confidence scores for tool execution.

  • Requests for clarification, if needed.

  • Final results as soon as they are ready.

The Dashboard always keeps you informed about what’s happening, reducing uncertainty and wait times.

Conversation History Management

Protegrity Agent keeps a persistent history of the conversations and execution context.

Using the Chat History you can:

  • Refer back to previous requests and results.

  • Maintain context across multiple sessions.

  • Export formatted history for debugging or auditing.

This feature ensures continuity and transparency in your interactions.

Database Integration

Protegrity Agent uses a PostgreSQL backend to store conversation and message history, response states, and execution logs. All the interactions and results are securely stored, making it easy to:

  • Audit actions

  • Troubleshoot issues

  • Track progress over time

This enhances reliability, compliance, and traceability.

Containerization and Deployment Flexibility

Protegrity Agent supports Docker and Kubernetes deployments, with multi-stage builds for optimized images.

Protegrity Agent can be deployed in various environments, like, standalone, cloud, or on-premises, with ease, ensuring scalability, security, and efficient resource usage.