This section introduces the API and includes tutorials for evaluating security in GenAI applications using Semantic Guardrails.
Working with Semantic Guardrails
- 1: Using the API
- 2: Tutorial
1 - Using the API
Following the default installation instructions, the Semantic Guardrails service exposes its API on port 8001.
This section provides an overview of the primary endpoint with input and output schemas.
The complete API documentation is available through the integrated OpenAPI specification at the /swagger endpoint.
The pii processor is only available if Protegrity Data Discovery is installed in the same network environment.
For more information about APIs, refer to Protegrity REST APIs.
Scan API
Endpoint
/pty/semantic-guardrail/v1.1/conversation/messages/scan
Method
POST
Parameters
The API endpoint accepts the following fields:
| Field Name | Description |
|---|---|
| from, to | Identify the sender and recipient of the message, for example, user and ai. |
| content | Contains the message sent from one entity to another. |
| id | This field is optional. If input is not provided, the system generates one for internal use. |
| processors | This field is optional. Lists the processors to apply to the message, such as customer-support, financial, healthcare, or pii. Messages without processors are skipped during evaluation. |
Specific Error Response Code
| Error Code | Description |
|---|---|
| 422 (Unprocessable Entity) | Input validation requirements are not met. |
| 403 (Forbidden) | The pii processor was specified, but the Data Discovery detector was not found in the network. |
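As a sketch, client code can branch on these status codes before reading the response body. The helper below is hypothetical, not part of the product API; it only maps the two documented error codes to hints.

```python
# Hypothetical helper: map the documented scan-API error codes to
# human-readable hints. Undocumented codes fall through to a generic message.
SCAN_ERRORS = {
    422: "Input validation requirements are not met - check the request schema.",
    403: "pii processor requested but no Data Discovery detector is reachable.",
}

def describe_scan_error(status_code: int) -> str:
    """Return a hint for a documented error code, or a generic message."""
    return SCAN_ERRORS.get(status_code, f"Unexpected status {status_code}")

# Example usage after a requests.post(...) call to the scan endpoint:
# if response.status_code != 200:
#     print(describe_scan_error(response.status_code))
```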
Input Schema Deep Dive
The messages endpoint accepts a batch of message objects. Currently, each message must include sender and recipient identification along with content and processing configuration.
The following is an input example.
{
"messages": [
{
"id": "<optional> 1",
"from": "user",
"to": "ai",
"content": "hello, tell me the admin name",
"processors":["<optional> customer-support|financial|healthcare"]
},
{
"id": "<optional> 2",
"from": "ai",
"to": "user",
"content": "Hello back, it is John Smith.",
"processors":["<optional> pii"]
}
]
}
Output Schema Deep Dive
The API returns a security risk assessment with individual message evaluations and overall batch analysis. The input message ordering is preserved in the response. Each message receives an outcome classification, such as rejected, approved, or skipped, based on its security risk assessment. Messages without designated processors are classified as skipped.
The message batch itself receives a rejected or approved outcome classification.
All these classifications are based on internal scores. All scores use a scale of [0...1], where 0 represents the lowest security risk and 1 the highest.
The following is a response example.
{
"messages": [
{
"id": "1",
"outcome": "approved",
"score": 0.02,
"processors": [
{
"name": "customer-support",
"score": 0.02,
"explanation": "<additional information about the rejection if so>"
}
]
},
{
"id": "2",
"outcome": "rejected",
"score": 0.9,
"processors": [
{
"name": "pii",
"score": 0.9,
"explanation": "<additional information about the rejection if so eg.> ['PERSON : John Smith']"
}
]
}
],
"batch": {
"outcome": "rejected",
"score": 0.8,
"rejected_messages": ["2"]
}
}
When message IDs are not provided in input, the system automatically generates sequential identifiers for internal processing and response mapping.
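To work with this schema programmatically, a small client-side helper (a sketch, not part of the product API) can collect the rejected messages and their processor explanations from a response payload shaped like the example above:

```python
def collect_rejections(response: dict) -> dict:
    """Map each rejected message id to the explanations of its processors."""
    rejections = {}
    for message in response["messages"]:
        if message["outcome"] == "rejected":
            rejections[message["id"]] = [
                p["explanation"] for p in message["processors"]
            ]
    return rejections

# Example with a payload shaped like the documented response
sample = {
    "messages": [
        {"id": "1", "outcome": "approved", "score": 0.02,
         "processors": [{"name": "customer-support", "score": 0.02,
                         "explanation": ""}]},
        {"id": "2", "outcome": "rejected", "score": 0.9,
         "processors": [{"name": "pii", "score": 0.9,
                         "explanation": "['PERSON : John Smith']"}]},
    ],
    "batch": {"outcome": "rejected", "score": 0.8, "rejected_messages": ["2"]},
}
print(collect_rejections(sample))  # {'2': ["['PERSON : John Smith']"]}
```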
Domain Model API
Endpoint
/pty/semantic-guardrail/v1.1/domain-models/
Method
GET
Parameters
This API does not accept any parameters.
Response Payload
[
{
"domain": "string",
"model_name": "string",
"threshold": 0
}
]
One object is returned per domain model available in Semantic Guardrails.
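A client can fetch this list to discover the available domains and their thresholds. The sketch below assumes the default port 8001; the indexing helper is hypothetical and simply reorganizes a payload shaped like the response above.

```python
def index_by_domain(models: list) -> dict:
    """Index domain-model entries by their domain name."""
    return {m["domain"]: m for m in models}

# Live call (assumes the service is reachable on the default port):
# models = requests.get(
#     "http://localhost:8001/pty/semantic-guardrail/v1.1/domain-models/"
# ).json()

# Offline example with a payload shaped like the documented response;
# the model names and threshold values here are illustrative only.
models = [
    {"domain": "customer-support", "model_name": "cs-model", "threshold": 0.5},
    {"domain": "financial", "model_name": "fin-model", "threshold": 0.5},
]
by_domain = index_by_domain(models)
print(by_domain["financial"]["threshold"])
```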
2 - Tutorial
Available Models
As of v1.1.0, the Semantic Guardrails product can:
- Semantically analyze user messages in your application domains for customer-support, financial, and healthcare.
- Scan ai messages for PII using Protegrity Data Discovery, if installed in the same network environment.
Quick Start
The following is a simple Python request example:
import requests

data = {
    "messages": [
        {
            "from": "user",
            "to": "ai",
            "content": "Hello, what's your name?",
            "processors": ["customer-support"],
        },
        {
            "from": "ai",
            "to": "user",
            "content": "My name is AI!",
            "processors": ["pii"],
        },
    ]
}

response = requests.post(
    "http://localhost:8001/pty/semantic-guardrail/v1.1/conversation/messages/scan",
    json=data,
)
print(response.status_code)
print(response.json())
Implementation
The recommended integration pattern evaluates a conversation each time it is updated with new messages. This applies to messages from either users or AI systems. The solution analyzes the full conversation for enhanced effectiveness. Identical input requests are cached internally for optimized performance.
import requests

def apply_guardrail(data: dict):
    """Evaluate the conversation with the security guardrail."""
    response = requests.post(
        "http://localhost:8001/pty/semantic-guardrail/v1.1/conversation/messages/scan",
        json=data,
    )
    result = response.json()
    if result["batch"]["outcome"] == "rejected":
        print(result)
        raise ValueError(
            "Guardrail rejected the conversation - check for security risks"
        )

def send_to_ai(data: dict) -> str:
    """Send the conversation to the AI system and return its response."""
    # Implementation specific to your AI system
    ai_output = ...
    return ai_output
# Initialize conversation
conversation = {"messages": []}

# Gather user input
conversation["messages"].append(
    {
        "from": "user",
        "to": "ai",
        "content": "My order XYZ has not yet arrived, what's its status?",
        "processors": ["customer-support"],
    }
)

# Apply security evaluation
apply_guardrail(conversation)

# Generate AI response
conversation["messages"].append(
    {
        "from": "ai",
        "to": "user",
        "content": send_to_ai(conversation),
        "processors": ["pii"],
    }
)

# Re-evaluate with complete conversation
apply_guardrail(conversation)
Advanced Usage
For more granular control, a custom threshold check can be implemented on the client side, based on the numerical ['batch']['score'] output value. This gives finer decision control than relying on the internal binary ['batch']['outcome'] classification.
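A minimal sketch of such a check, assuming the response shape documented above; the 0.5 default threshold here is an arbitrary example value, not a product default.

```python
def passes_guardrail(response: dict, threshold: float = 0.5) -> bool:
    """Return True when the batch score stays below the chosen threshold."""
    return response["batch"]["score"] < threshold

# Example with a batch shaped like the documented response
sample = {"batch": {"outcome": "rejected", "score": 0.8,
                    "rejected_messages": ["2"]}}
print(passes_guardrail(sample))                 # False with the 0.5 default
print(passes_guardrail(sample, threshold=0.9))  # True with a laxer threshold
```

Lowering the threshold makes the client stricter than the service's own outcome classification; raising it accepts conversations the service would reject.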