This site is the repository for Protegrity product documentation. The documentation is available in HTML format and can be viewed in your browser. You can also print or download PDF files of the required product documentation as needed.
Documentation
- 1: Installation
- 1.1: Overview of installation
- 1.2: System Requirements
- 1.3: Partitioning of Disk on an Appliance
- 1.4: Installing the ESA On-Premise
- 1.5: Configuring the ESA
- 1.6: Installing Appliances on Cloud Platforms
- 1.7: Verifying the ESA installation from the Web UI
- 1.8: Initializing the Policy Information Management (PIM) Module
- 1.9: Configuring the ESA in a Trusted Appliances Cluster (TAC)
- 1.10: Creating an Audit Store Cluster
- 2: Configuration
- 2.1: Sending logs to an external security information and event management (SIEM)
- 2.2: Configuring a Trusted Appliance Cluster (TAC) without Consul Integration
- 2.3: Configuring the IP Address for the Docker Interface
- 2.4: PEP Server configuration file
- 2.5: Configuring ESA features
- 2.5.1: Rotating Insight certificates
- 2.5.2: Configuring the disk space on the Log Forwarder
- 2.5.3: Updating configurations after changing the domain name
- 2.5.4: Updating the IP address of the ESA
- 2.5.5: Updating the host name or domain name of the ESA
- 2.5.6: Updating Insight custom certificates
- 2.5.7: Removing an ESA from the Audit Store cluster
- 2.6: Identifying the protector version
- 3: Upgrading ESA to v10.1.0
- 3.1: System and License Requirements
- 3.2: Upgrade Paths to ESA v10.1.0
- 3.3: Prerequisites
- 3.4: Upgrading from v10.0.1
- 3.5: Post upgrade steps
- 3.6: Restoring to the Previous Version of ESA
- 4: Enterprise Security Administrator (ESA)
- 4.1: Architectures
- 4.2: Protegrity Appliance Overview
- 4.3: Data Security Platform Overview
- 4.4: Installing ESA
- 4.5: Logging In to ESA
- 4.6: Command-Line Interface (CLI) Manager
- 4.6.1: Accessing the CLI Manager
- 4.6.2: CLI Manager Structure Overview
- 4.6.3: Working with Status and Logs
- 4.6.3.1: Monitoring System Statistics
- 4.6.3.2: Viewing the Top Processes
- 4.6.3.3: Working with System Statistics (SYSSTAT)
- 4.6.3.4: Auditing Service
- 4.6.3.5: Viewing Appliance Logs
- 4.6.3.6: Viewing User Notifications
- 4.6.4: Working with Administration
- 4.6.4.1: Working with Services
- 4.6.4.2: Setting Date and Time
- 4.6.4.3: Managing Accounts and Passwords
- 4.6.4.4: Working with Backup and Restore
- 4.6.4.5: Setting Up the Email Server
- 4.6.4.6: Working with Azure AD
- 4.6.4.6.1: Configuring Azure AD Settings
- 4.6.4.6.2: Enabling/Disabling Azure AD
- 4.6.4.7: Accessing REST API Resources
- 4.6.4.7.1: Using Basic Authentication
- 4.6.4.7.2: Using Client Certificates
- 4.6.4.7.3: Using JSON Web Token (JWT)
- 4.6.4.8: Securing the GRand Unified Bootloader
- 4.6.4.8.1: Enabling the Credentials for the GRUB Menu
- 4.6.4.8.2: Disabling the GRUB Credentials
- 4.6.4.9: Working with Installations and Patches
- 4.6.4.9.1: Add/Remove Services
- 4.6.4.9.2: Uninstalling Products
- 4.6.4.9.3: Managing Patches
- 4.6.4.10: Managing LDAP
- 4.6.4.10.1: Working with the Protegrity LDAP Server
- 4.6.4.10.2: Changing the Bind User Password
- 4.6.4.10.3: Working with Proxy Authentication
- 4.6.4.10.4: Configuring Local LDAP Settings
- 4.6.4.10.5: Monitoring Local LDAP
- 4.6.4.10.6: Optimizing Local LDAP Settings
- 4.6.4.11: Rebooting and Shutting down
- 4.6.4.12: Accessing the OS Console
- 4.6.5: Working with Networking
- 4.6.5.1: Configuring Network Settings
- 4.6.5.2: Configuring SNMP
- 4.6.5.2.1: Configuring SNMPv3 as a USM Model
- 4.6.5.2.2: Configuring SNMPv3 as a TSM Model
- 4.6.5.3: Working with Bind Services and Addresses
- 4.6.5.3.1: Binding Interface for Management
- 4.6.5.3.2: Binding Interface for Services
- 4.6.5.4: Using Network Troubleshooting Tools
- 4.6.5.5: Managing Firewall Settings
- 4.6.5.6: Using the Management Interface Settings
- 4.6.5.7: Ports Allowlist
- 4.6.6: Working with Tools
- 4.6.6.1: Configuring the SSH
- 4.6.6.1.1: Specifying SSH Mode
- 4.6.6.1.2: Setting Up Advanced SSH Configuration
- 4.6.6.1.3: Managing SSH Known Hosts
- 4.6.6.1.4: Managing Authorized Keys
- 4.6.6.1.5: Managing Identities
- 4.6.6.1.6: Generating SSH Keys
- 4.6.6.1.7: Configuring the SSH
- 4.6.6.1.8: Customizing the SSH Configurations
- 4.6.6.1.9: Exporting/Importing the SSH Settings
- 4.6.6.1.10: Securing SSH Communication
- 4.6.6.2: Clustering Tool
- 4.6.6.2.1: Creating a TAC using the CLI Manager
- 4.6.6.2.2: Joining an Existing Cluster using the CLI Manager
- 4.6.6.2.3: Cluster Operations
- 4.6.6.2.4: Managing a site
- 4.6.6.2.5: Node Management
- 4.6.6.2.5.1: Show Cluster Nodes and Status
- 4.6.6.2.5.2: Viewing the Cluster Status using the CLI Manager
- 4.6.6.2.5.3: Adding a Remote Node to a Cluster
- 4.6.6.2.5.4: Updating Cluster Information using the CLI Manager
- 4.6.6.2.5.5: Managing Communication Methods for Local Node
- 4.6.6.2.5.6: Managing Local to Remote Node Communication
- 4.6.6.2.5.7: Removing a Node from a Cluster using CLI Manager
- 4.6.6.2.5.8: Uninstalling Cluster Services
- 4.6.6.2.6: Trusted Appliances Cluster
- 4.6.6.2.6.1: Updating Cluster Key
- 4.6.6.2.6.2: Redeploy Local Cluster Configuration to All Nodes
- 4.6.6.2.6.3: Cluster Service Interval
- 4.6.6.2.6.4: Execute Commands as OS Root User
- 4.6.6.3: Working with Xen Paravirtualization Tool
- 4.6.6.4: Working with the File Integrity Monitor Tool
- 4.6.6.5: Rotating Appliance OS Keys
- 4.6.6.6: Managing Removable Drives
- 4.6.6.7: Tuning the Web Services
- 4.6.6.8: Tuning the Service Dispatcher
- 4.6.6.9: Working with Antivirus
- 4.6.7: Working with Preferences
- 4.6.7.1: Viewing System Monitor on OS Console
- 4.6.7.2: Setting Password Requirements for CLI System Tools
- 4.6.7.3: Viewing user notifications on CLI load
- 4.6.7.4: Minimizing the Timing Differences
- 4.6.7.5: Setting a Uniform Response Time
- 4.6.7.6: Limiting Incorrect root Login
- 4.6.7.7: Enabling Mandatory Access Control
- 4.6.7.8: FIPS Mode
- 4.6.7.9: Basic Authentication for REST APIs
- 4.6.8: Command Line Options
- 4.7: Web User Interface (Web UI) Management
- 4.7.1: Working with the Web UI
- 4.7.2: Description of Appliance Web UI
- 4.7.3: Working with System
- 4.7.3.1: Working with Services
- 4.7.3.2: Viewing information, statistics, and graphs
- 4.7.3.3: Working with Trusted Appliances Cluster
- 4.7.3.4: Working with Backup and restore
- 4.7.3.4.1: Working with OS Full Backup and Restore
- 4.7.3.4.2: Backing up the data
- 4.7.3.4.3: Backing up custom files
- 4.7.3.4.4: Exporting the custom files
- 4.7.3.4.5: Importing the custom files
- 4.7.3.4.6: Working with the custom files
- 4.7.3.4.7: Restoring configurations
- 4.7.3.4.8: Viewing Export/Import logs
- 4.7.3.5: Scheduling appliance tasks
- 4.7.3.5.1: Viewing the scheduler page
- 4.7.3.5.2: Creating a scheduled task
- 4.7.3.5.3: Scheduling Configuration Export to Cluster Tasks
- 4.7.3.5.4: Deleting a scheduled task
- 4.7.4: Viewing the logs
- 4.7.5: Working with Settings
- 4.7.5.1: Working with Antivirus
- 4.7.5.1.1: Customizing Antivirus Scan Options
- 4.7.5.1.2: Scheduling Antivirus Scan
- 4.7.5.1.3: Updating the Antivirus Database
- 4.7.5.1.4: Working with Antivirus Logs
- 4.7.5.2: Configuring Appliance Two Factor Authentication
- 4.7.5.2.1: Working with Automatic Per-User Shared-Secret
- 4.7.5.2.2: Working with Host-Based Shared-Secret
- 4.7.5.2.3: Working with Remote Authentication Dial-up Service (RADIUS) Authentication
- 4.7.5.2.4: Working with Shared-Secret Lifecycle
- 4.7.5.2.5: Logging in Using Appliance Two Factor Authentication
- 4.7.5.2.6: Disabling Appliance Two Factor Authentication
- 4.7.5.3: Working with Configuration Files
- 4.7.5.4: Working with File Integrity
- 4.7.5.5: Managing File Uploads
- 4.7.5.6: Configuring Date and Time
- 4.7.5.7: Configuring Email
- 4.7.5.8: Configuring Network Settings
- 4.7.5.8.1: Managing Network Interfaces
- 4.7.5.8.2: NIC Bonding
- 4.7.5.9: Configuring Web Settings
- 4.7.5.9.1: General Settings
- 4.7.5.9.2: Session Management
- 4.7.5.9.3: Shell in a box settings
- 4.7.5.9.4: SSL cipher settings
- 4.7.5.9.5: Updating a protocol from the ESA Web UI
- 4.7.5.10: Working with Secure Shell (SSH) Keys
- 4.7.5.10.1: Configuring the authentication type for SSH keys
- 4.7.5.10.2: Configuring inbound communications
- 4.7.5.10.3: Configuring outbound communications
- 4.7.5.10.4: Configuring known hosts
- 4.7.6: Managing Appliance Users
- 4.7.7: Password Policy for all appliance users
- 4.7.7.1: Managing Users
- 4.7.7.1.1: Adding users to internal LDAP
- 4.7.7.1.2: Importing users to internal LDAP
- 4.7.7.1.3: Password policy configuration
- 4.7.7.1.4: Edit users
- 4.7.7.2: Managing Roles
- 4.7.7.3: Configuring the proxy authentication settings
- 4.7.7.4: Working with External Groups
- 4.7.7.5: Configuring the Azure AD Settings
- 4.7.7.5.1: Importing Azure AD Users
- 4.7.7.5.2: Working with External Azure Groups
- 4.8: Trusted Appliances Cluster (TAC)
- 4.8.1: TAC Topology
- 4.8.2: Cluster Configuration Files
- 4.8.3: Deploying Appliances in a Cluster
- 4.8.4: Cluster Security
- 4.8.5: Reinstalling Cluster Services
- 4.8.6: Uninstalling Cluster Services
- 4.8.7: FAQs on TAC
- 4.8.8: Creating a TAC using the Web UI
- 4.8.9: Connection Settings
- 4.8.10: Joining an Existing Cluster using the Web UI
- 4.8.11: Managing Communication Methods for Local Node
- 4.8.12: Viewing Cluster Information
- 4.8.13: Removing a Node from the Cluster using the Web UI
- 4.9: Appliance Virtualization
- 4.9.1: Xen Paravirtualization Setup
- 4.9.1.1: Pre-Conversion Tasks
- 4.9.1.2: Paravirtualization Process
- 4.9.2: Xen Server Configuration
- 4.9.3: Installing Xen Tools
- 4.9.4: Xen Source – Xen Community Version
- 4.10: Appliance Hardening
- 4.10.1: Open listening ports
- 4.11: VMware tools in appliances
- 4.12: Increasing the Appliance Disk Size
- 4.13: Mandatory Access Control
- 4.13.1: Working with profiles
- 4.13.2: Analyzing events
- 4.13.3: AppArmor permissions
- 4.13.4: Troubleshooting for AppArmor
- 4.14: Accessing Appliances using Single Sign-On (SSO)
- 4.14.1: What is Kerberos
- 4.14.1.1: Implementing Kerberos SSO for Protegrity Appliances
- 4.14.1.1.1: Prerequisites
- 4.14.1.1.2: Setting up Kerberos SSO
- 4.14.1.1.3: Logging to the Appliance
- 4.14.1.1.4: Scenarios for Implementing Kerberos SSO
- 4.14.1.1.5: Viewing Logs
- 4.14.1.1.6: Feature Limitations
- 4.14.1.1.7: Troubleshooting
- 4.14.2: What is SAML
- 4.14.2.1: Setting up SAML SSO
- 4.14.2.1.1: Workflow of SAML SSO on an Appliance
- 4.14.2.1.2: Logging on to the Appliance
- 4.14.2.1.3: Implementing SAML SSO on Azure IdP - An Example
- 4.14.2.1.4: Implementing SSO with a Load Balancer Setup
- 4.14.2.1.5: Viewing Logs
- 4.14.2.1.6: Feature Limitations
- 4.14.2.1.7: Troubleshooting
- 4.15: Sample External Directory Configurations
- 4.16: Partitioning of Disk on an Appliance
- 4.16.1: Partitioning the OS in the UEFI Boot Option
- 4.16.2: Partitioning the OS with the BIOS Boot Option
- 4.17: Working with Keys
- 4.18: Working with Certificates
- 4.19: Managing policies
- 4.20: Working with Insight
- 4.20.1: Understanding the Audit Store node status
- 4.20.2: Accessing the Insight Dashboards
- 4.20.3: Working with Audit Store nodes
- 4.20.4: Working with Discover
- 4.20.5: Working with Audit Store roles
- 4.20.6: Understanding Insight Dashboards
- 4.20.7: Working with Protegrity dashboards
- 4.20.8: Working with Protegrity visualizations
- 4.20.9: Visualization templates
- 4.20.10: Insight Certificates
- 4.21: Maintaining Insight
- 4.21.1: Working with alerts
- 4.21.2: Index lifecycle management (ILM)
- 4.21.3: Viewing policy reports
- 4.21.4: Verifying signatures
- 4.21.5: Using the scheduler
- 4.22: Installing Protegrity Appliances on Cloud Platforms
- 4.22.1: Installing Protegrity Appliances on Amazon Web Services (AWS)
- 4.22.1.1: Obtaining the AMI
- 4.22.1.2: Loading the Protegrity Appliance from an Amazon Machine Image (AMI)
- 4.22.1.2.1: Creating an Instance of the Protegrity Appliance from the AMI
- 4.22.1.2.2: Configuring the Virtual Private Cloud (VPC)
- 4.22.1.2.3: Adding a Subnet to the Virtual Private Cloud (VPC)
- 4.22.1.2.4: Finalizing the Installation of Protegrity Appliance on the Instance
- 4.22.1.2.4.1: Logging to the AWS Instance using the SSH Client
- 4.22.1.2.4.2: Finalizing an AWS Instance
- 4.22.1.2.5: Connecting an ESA instance for DSG deployment
- 4.22.1.3: Backing up and Restoring Data on AWS
- 4.22.1.4: Increasing Disk Space on the Appliance
- 4.22.1.5: Best Practices for Using Protegrity Appliances on AWS
- 4.22.1.6: Running the Appliance-Rotation-Tool
- 4.22.1.7: Working with Cloud-based Applications
- 4.22.1.7.1: Configuring Access for AWS Resources
- 4.22.1.7.2: Working with CloudWatch Console
- 4.22.1.7.2.1: Integrating CloudWatch with Protegrity Appliance
- 4.22.1.7.2.2: Configuring Custom Logs on AWS CloudWatch Console
- 4.22.1.7.2.3: Toggling the CloudWatch Service
- 4.22.1.7.2.4: Reloading the AWS CloudWatch Integration
- 4.22.1.7.2.5: Viewing Logs on AWS CloudWatch Console
- 4.22.1.7.2.6: Working with AWS CloudWatch Metrics
- 4.22.1.7.2.7: Viewing Metrics on AWS CloudWatch Console
- 4.22.1.7.2.8: Disabling AWS CloudWatch Integration
- 4.22.1.7.3: Working with the AWS Cloud Utility
- 4.22.1.7.3.1: Storing Backup Files on the AWS S3 Bucket
- 4.22.1.7.3.2: Set Metrics Based Alarms Using the AWS Management Console
- 4.22.1.7.4: FAQs for AWS Cloud Utility
- 4.22.1.7.5: Working with AWS Systems Manager
- 4.22.1.7.5.1: Setting up AWS Systems Manager
- 4.22.1.7.5.2: FAQs on AWS Systems Manager
- 4.22.1.7.6: Troubleshooting for the AWS Cloud Utility
- 4.22.2: Installing Protegrity Appliances on Azure
- 4.22.2.1: Verifying Prerequisites
- 4.22.2.2: Azure Cloud Utility
- 4.22.2.3: Setting up Azure Virtual Network
- 4.22.2.4: Creating a Resource Group
- 4.22.2.5: Creating a Storage Account
- 4.22.2.6: Creating a Container
- 4.22.2.7: Obtaining the Azure BLOB
- 4.22.2.8: Creating Image from the Azure BLOB
- 4.22.2.9: Creating a VM from the Image
- 4.22.2.10: Accessing the Appliance
- 4.22.2.11: Finalizing the Installation of Protegrity Appliance on the Instance
- 4.22.2.12: Accelerated Networking
- 4.22.2.13: Backing up and Restoring VMs on Azure
- 4.22.2.14: Connecting to an ESA Instance
- 4.22.2.15: Deploying the Protegrity Appliance Instance with the Protectors
- 4.22.3: Installing Protegrity Appliances on Google Cloud Platform (GCP)
- 4.22.3.1: Verifying Prerequisites
- 4.22.3.2: Configuring the Virtual Private Cloud (VPC)
- 4.22.3.3: Obtaining the GCP Image
- 4.22.3.4: Converting the Raw Disk to a GCP Image
- 4.22.3.5: Loading the Protegrity Appliance from a GCP Image
- 4.22.3.5.1: Creating a VM Instance from an Image
- 4.22.3.5.2: Creating a VM Instance from a Disk
- 4.22.3.5.3: Accessing the Appliance
- 4.22.3.6: Finalizing the Installation of Protegrity Appliance on the Instance
- 4.22.3.7: Connecting to an ESA instance for DSG deployment
- 4.22.3.8: Deploying the Instance of the Protegrity Appliance with the Protectors
- 4.22.3.9: Backing up and Restoring Data on GCP
- 4.22.3.10: Increasing Disk Space on the Appliance
- 5: Data Security Gateway (DSG)
- 5.1: Protegrity Gateway Technology
- 5.2: Protegrity gateway product
- 5.3: Technical Architecture
- 5.3.1: Configuration over Programming (CoP) Architecture
- 5.3.2: Dynamic Configuration over Programming (CoP)
- 5.4: Deployment Scenarios
- 5.5: Protegrity Methodology
- 5.6: Planning for Gateway Installation
- 5.6.1: LDAP and SSO Configurations
- 5.6.2: Mapping of Sensitive Data Primitives
- 5.6.3: Network Planning
- 5.6.4: HTTP URL Rewriting
- 5.6.5: Clustering and Load Balancing
- 5.6.6: SSL Certificates
- 5.7: Installing the DSG
- 5.7.1: Installing the DSG On-Premise
- 5.7.2: Installing DSG on cloud installation
- 5.7.2.1: Finalizing the instance.
- 5.7.2.2: Launching an instance.
- 5.7.2.3: Prerequisites on cloud platforms
- 5.7.3: Updating the host details
- 5.7.4: Additional steps after changing the hostname or FQDN
- 5.8: Enhancements in DSG 3.3.0.0
- 5.9: Trusted appliances cluster
- 5.10: Upgrading to DSG 3.3.0.0
- 5.10.1: Post installation/upgrade steps
- 5.11: Web UI
- 5.11.1: Cluster
- 5.11.1.1: Monitoring
- 5.11.1.2: Log Viewer
- 5.11.2: Ruleset
- 5.11.2.1: Learn Mode
- 5.11.2.1.1: Learn Mode Scheduled Task
- 5.11.2.2: Ruleset Tab
- 5.11.2.2.1: Ruleset Versioning
- 5.11.3: Transport
- 5.11.3.1: Tunnels
- 5.11.3.1.1: Manage a Tunnel
- 5.11.3.1.2: Amazon S3 Tunnel
- 5.11.3.1.3: HTTP Tunnel
- 5.11.3.1.4: SFTP Tunnel
- 5.11.3.1.5: SMTP Tunnel
- 5.11.3.1.6: NFS/CIFS
- 5.11.3.1.6.1: NFS/CIFS
- 5.11.3.2: Certificates/Key Material
- 5.11.3.2.1: Certificates Tab
- 5.11.3.2.2: Delete Certificates and Keys
- 5.11.3.2.3: Keys Subtab
- 5.11.3.2.4: Other Files Subtab
- 5.11.3.2.5: Upload Certificate/Keys
- 5.11.4: Global Settings
- 5.11.4.1: Debug
- 5.11.4.2: Global Protocol Stack
- 5.11.4.3: Web UI
- 5.11.5: Tokenization Portal
- 5.12: Overview of Sub Clustering
- 5.13: Implementation
- 5.14: Transaction Metrics Logging
- 5.15: Error metrics logging
- 5.16: Usage Metrics Logging
- 5.17: Ruleset Reference
- 5.17.1: Services
- 5.17.1.1: Amazon S3 gateway
- 5.17.1.2: Mount file system out-of-band service
- 5.17.1.3: REST API
- 5.17.1.4: Secure Web socket (WSS)
- 5.17.1.5: SFTP gateway
- 5.17.1.6: SMTP gateway
- 5.17.2: Profile
- 5.17.3: Actions
- 5.17.3.1: Error
- 5.17.3.2: Exit
- 5.17.3.3: Extract
- 5.17.3.3.1: Adobe Action Message Format
- 5.17.3.3.2: Amazon S3 Object
- 5.17.3.3.3: Binary Payload
- 5.17.3.3.4: CSV Payload
- 5.17.3.3.5: Common Event Format (CEF)
- 5.17.3.3.6: XML Payload
- 5.17.3.3.7: Date Time Format
- 5.17.3.3.8: XML with Tree-of-Trees (ToT)
- 5.17.3.3.9: Fixed Width
- 5.17.3.3.10: HTML Form Media Payload
- 5.17.3.3.11: HTTP Message Payload
- 5.17.3.3.12: Enhanced Adobe PDF Codec
- 5.17.3.3.13: JSON Payload
- 5.17.3.3.14: JSON with Tree-of-Trees (ToT)
- 5.17.3.3.15: Microsoft Office Documents
- 5.17.3.3.16: Multipart Mime Payload
- 5.17.3.3.17: PDF Payload
- 5.17.3.3.18: Protocol Buffer Payload
- 5.17.3.3.19: Secure File Transfer Payload
- 5.17.3.3.20: Shared File
- 5.17.3.3.21: SMTP Message Payload
- 5.17.3.3.22: Text Payload
- 5.17.3.3.23: URL Payload
- 5.17.3.3.24: User Defined Extraction Payload
- 5.17.3.3.25: ZIP Compressed File Payload
- 5.17.3.4: Log
- 5.17.3.5: Profile Reference
- 5.17.3.6: Set User Identity
- 5.17.3.7: Set Context Variable
- 5.17.3.8: Transform
- 5.17.3.8.1: GNU Privacy Guard (GPG)
- 5.17.3.8.2: Protegrity Data Protection method
- 5.17.3.8.3: Regular expression replace
- 5.17.3.8.4: Security Assertion Markup Language (SAML) codec
- 5.17.3.8.5: User defined transformation
- 5.17.3.9: Dynamic Injection
- 5.18: DSG REST API
- 5.19: Enabling selective tunnel loading on DSG nodes
- 5.20: User Defined Functions (UDFs)
- 5.21: API for exporting the CoP
- 5.22: Best Practices
- 5.23: Known Limitations
- 5.24: Migrate UDFs to Python 3
- 5.25: Additional configurations in gateway.json
- 5.26: Auditing and logging
- 5.27: Verifying UDF Rules for blocked modules and methods
- 5.28: Managing PEP server configuration file
- 5.29: OpenSSL Curve Names, Algorithms, and Options
- 5.30: Configuring the DSG cluster
- 5.31: Encoding List
- 5.32: Configuring default gateway
- 5.33: Forward logs to Insight
- 5.34: Extending ESA with the DSG Web UI
- 5.35: Backing up and Restoring the Appliance OS from the Web UI
- 5.36: Setting up ESA communication
- 5.37: Deploying configurations to the cluster
- 5.38: Restarting a node
- 5.39: Deploy configurations to node groups
- 5.40: Codebook Reshuffling
- 5.41: Troubleshooting in DSG
- 6: Policy Management
- 6.1: Protegrity Data Security Methodology
- 6.2: Package Deployment in Protectors
- 6.3: Initializing the Policy Management
- 6.4: Components of a Policy
- 6.4.1: Working With Data Elements
- 6.4.1.1: Example - Creating Token Data Elements
- 6.4.1.2: Example - Creating a FPE Data Element
- 6.4.1.3: Example - Creating Data Elements for Unstructured Data
- 6.4.2: Working With Alphabets
- 6.4.2.1: Creating an Alphabet
- 6.4.3: Working With Masks
- 6.4.3.1: Creating a Mask
- 6.4.3.2: Masking Support
- 6.4.4: Working With Trusted Applications
- 6.4.4.1: Creating a Trusted Application
- 6.4.4.2: Linking Data Store to a Trusted Application
- 6.4.4.3: Deploying a Trusted Application
- 6.4.5: Creating a Data Store
- 6.4.5.1: Adding Allowed Servers for the Data Store
- 6.4.5.2: Adding Policies to the Data Store
- 6.4.5.3: Adding Trusted Applications to the Data Store
- 6.4.6: Working With Member Sources
- 6.4.6.1: Configuring Active Directory Member Source
- 6.4.6.2: Configuring File Member Source
- 6.4.6.3: Configuring LDAP Member Source
- 6.4.6.4: Configuring POSIX Member Source
- 6.4.6.5: Configuring Azure AD Member Source
- 6.4.6.6: Configuring Database Member Source
- 6.4.7: Working with Roles
- 6.4.7.1: Creating a Role
- 6.4.7.2: Mode Types for a Role
- 6.4.7.3: Adding Members to a Role
- 6.4.7.3.1: Filtering Members from AD and LDAP Member Sources
- 6.4.7.3.2: Filtering Members from Azure AD Member Source
- 6.4.7.4: Synchronizing, Listing, or Removing Members in a Role
- 6.4.7.5: Searching User
- 6.5: Creating and Deploying Policies
- 6.5.1: Creating Policies
- 6.5.2: Adding Data Elements to Policy
- 6.5.3: Adding Roles to Policy
- 6.5.4: Adding Permissions to Policy
- 6.5.5: Deploying Policies
- 6.5.6: Policy Management using the Policy API
- 6.6: Deploying Data Stores to Protectors
- 6.7: Managing Policy Components
- 6.8: Policy Management Dashboard
- 6.9: Exporting Package for Resilient Protectors
- 6.10: Legacy Features
- 7: Key Management
- 7.1: Protegrity Key Management
- 7.2: Key Management Web UI
- 7.3: Working with Keys
- 7.4: Key Points for Key Management
- 7.5: Keys-Related Terminology
- 8: Certificate Management
- 8.1: Certificates in the ESA
- 8.2: Certificate Management in ESA
- 8.2.1: Certificate Repository
- 8.2.2: Uploading Certificates
- 8.2.3: Uploading Certificate Revocation List
- 8.2.4: Manage Certificates
- 8.2.5: Changing Certificates
- 8.2.6: Changing CRL
- 8.3: Certificates in DSG
- 8.4: Replicating Certificates in a Trusted Appliance Cluster
- 8.5: Insight Certificates
- 8.6: Validating Certificates
- 9: Protegrity Data Security Platform Licensing
- 10: Troubleshooting
- 10.1: Known issues for the Audit Store
- 10.2: ESA Error Handling
- 10.2.1: Common ESA Logs
- 10.2.2: Common ESA Errors
- 10.2.3: Understanding the Insight indexes
- 10.2.4: Understanding the index field values
- 10.2.5: Index entries
- 10.2.6: Log return codes
- 10.2.7: Protectors security log codes
- 10.2.8: Policy audit codes
- 10.2.9: Additional log information
- 10.3: Known Issues for the td-agent
- 10.4: Known Issues for Protegrity Analytics
- 10.5: Known Issues for the Log Forwarder
- 10.6: Deprecations
- 10.6.1: Deprecations
- 11: Supplemental Guides
- 12: PDF Resources
- 13: Intellectual Property Attribution Statement
1 - Installation
1.1 - Overview of installation
Audience
The installation steps are intended for the following stakeholders:
- Security professionals, such as security officers, who are responsible for protecting business systems in their organizations. They plan security arrangements and ensure that they are executed.
- System administrators and other technical personnel who are responsible for implementing data security solutions in their organization.
- System architects who are responsible for providing expert guidance in designing, developing, and implementing an enterprise data security solution architecture for their business requirements.
Protegrity Data Security Platform
The Protegrity Data Security Platform is a comprehensive source of enterprise data protection solutions. Its design is based on a hub and spoke deployment architecture.
The Protegrity Data Security Platform has the following components:
Enterprise Security Administrator (ESA) – handles the management of policies and keys, as well as monitoring, auditing, and reporting for protected systems in the enterprise.
Data Protectors – protect sensitive data in the enterprise and enforce the security policy on each installed system. Policies are deployed from the ESA to the Data Protectors. Audit logs of all activity on sensitive data are reported and stored in the Audit Store cluster on the ESA.
General architecture
The following diagram shows the general architecture of the Protegrity Data Security Platform.
1.2 - System Requirements
The following table lists the supported components and their compatibility settings.
Component | Compatibility |
---|---|
Application Protocols | HTTP 1.0, HTTP 1.1, SSL/TLS |
WebServices | SOAP 1.1 and WSDL 1.1 |
Web Browsers | Minimum supported Web Browser versions are as follows: - Google Chrome version 129.0.6668.58/59 (64-bit) - Mozilla Firefox version 130.0.1 (64-bit) or higher - Microsoft Edge version 128.0.2739.90 (64-bit) |
The following table lists the minimum hardware configurations.
Hardware Components | Configuration |
---|---|
CPU | Multicore Processor, with minimum 8 CPUs |
RAM | 32 GB |
Hard Disk | 320 GB |
CPU Architecture | x86 |
Certificate Requirements
Certificates are used for secure communication between the ESA and protectors. The certificate-based communication and authentication involves a client certificate, server certificate, and a certifying authority that authenticates the client and server certificates.
The various components within the Protegrity Data Security Platform that communicate with and authenticate each other through digital certificates are:
- ESA Web UI and ESA
- Insight
- ESA and Protectors
- Protegrity Appliances and external REST clients
Protegrity client and server certificates are self-signed by Protegrity. However, you can replace them with certificates signed by a trusted commercial CA. These certificates are used for communication between the various components of the ESA.
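If you replace the Protegrity certificates with CA-signed certificates, you can inspect a certificate before uploading it. The following is a minimal sketch using the standard openssl command; the file name cert.pem is only an example and must be replaced with your certificate file.
openssl x509 -in cert.pem -noout -subject -issuer -dates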
Licensing Requirements
Ensure that a valid license is available before upgrading. After migration, if the license status is invalid, then contact Protegrity Support.
1.3 - Partitioning of Disk on an Appliance
BIOS is one of the oldest firmware interfaces used to initialize the hardware and start the boot loader. UEFI is a newer specification that defines a software interface between the operating system and the platform firmware. UEFI is more advanced than BIOS, and most systems are built with support for both UEFI and BIOS.
Disk Partitioning is a method of dividing the hard drive into logical partitions. When a new hard drive is installed on a system, the disk is segregated into partitions. These partitions are utilized to store data, which the operating system reads in a logical format. The information about these partitions is stored in the partition table.
There are two types of partition tables, the Master Boot Record (MBR) and the GUID Partition Table (GPT). These form a special boot section in the drive that provides information about the various disk partitions. They help in reading the partition in a logical manner.
The following table lists the differences between the GPT and the MBR.
GUID Partition Table (GPT) | Master Boot Record (MBR) |
---|---|
Supported on UEFI. | Supported on BIOS. Can also be compatible with UEFI. |
Supports partitions up to 9 ZB. | Supports partitions up to 2 TB. |
Number of primary partitions can be extended to 128. | Maximum number of primary partitions is 4. |
Runs in 32-bit and 64-bit OS. | Runs in 16-bit OS. |
Provides discrete driver support in the form of executables. | Stores driver support in its ROM; therefore, updating the BIOS firmware is difficult. |
Offers features such as Secure Boot to limit the initialization of the boot process by unauthorized applications. | Boots in the normal mode. |
Has a faster boot time. | Has a standard boot time. |
Depending on the requirements, you can extend the size of the partitions in a physical volume to accommodate all the logs and other appliance related data. You can utilize the Logical Volume Manager (LVM) to increase the partitions in the physical volume. Using LVM, you can manage hard disk storage to allocate, mirror, or resize volumes.
In an appliance, the physical volume is divided into the following three logical volume groups:
Partition | Description |
---|---|
Boot | Contains the boot information. |
PTYVG | Contains the files and information about OS and logs. |
Data Volume Group | Contains the data that is in the /opt directory. |
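The volume group and logical volume names vary between installations. The following commands are a minimal sketch of how LVM storage is typically inspected and extended; the names datavg and opt and the size +20G are assumptions used only to illustrate the commands.
vgs    # list volume groups and the free space available in each
lvs    # list logical volumes and their current sizes
lvextend -r -L +20G /dev/datavg/opt    # grow the example volume by 20 GB and resize its file system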
1.4 - Installing the ESA On-Premise
1. Starting the installation
To install the ESA appliance:
Insert the appliance installation media in the system disk drive.
Boot the system from the disk drive.
The following screen appears.
Press ENTER to start the installation.
The following screen appears.
The system will detect the number of hard drives that are present. If there are multiple hard drives, then it will allow you to choose the hard drive where you want to install the OS partition and the /opt partition.
If there are multiple hard drives, then the following screen appears.
For storing the operating system-related data, select the hard drive where you want to install the OS partition and select OK.
The following screen appears.
For storing the logs, configuration data, and so on, select the hard drive where you want to install the /opt partition and select OK.
2. Selecting Network Interface Cards (NICs)
The Network Interface Card (NIC) is a device through which appliances, such as the ESA or the DSG, connect to each other on a network. You can configure multiple network interface cards (NICs) on the appliance.
The ethMNG interface is generally used for managing the appliance, while the ethSRV interface is used for binding the appliance's other services.
For example, the appliance can use the ethMNG interface for the ESA Web UI and the ethSRV interface for enabling communication with different applications in an enterprise.
The following task describes how to select management interfaces.
To select multiple NICs:
If there are multiple NICs, then the following screen appears.
Select the required NIC for management interface.
Choose Select and press ENTER.
3. Configuring Network Settings
After selecting the NIC for management, you configure the network for your appliance. During the network configuration, the system tries to connect to a DHCP server to obtain the hostname, default gateway, and IP addresses for the appliance. If the DHCP is not available, then you can configure the network information manually.
To configure the network settings:
If the DHCP server is configured, then the following screen containing the network information appears.
If the DHCP server is not available, then the following screen appears.
The Network Configuration Information screen appears.
Select Manual and press ENTER.
The following screen appears.
Select DHCP / Static address to configure the DHCP / Static address for the appliance and choose Edit.
Select Static address and choose Update.
If you want to change the hostname of the appliance, then perform the following steps.
- Select Hostname and select Edit.
- Change the Hostname and select OK.
Select Management IP to configure the management IP address for the appliance and choose Edit.
- Add the IP address assigned to the ethMNG interface. You use this IP address to access the ESA Web UI.
- Enter the Netmask. The ethMNG interface must be connected to the LAN with this Netmask value.
- Select OK.
Select Default Route to configure the default route for the appliance and press Edit.
- Enter the IP address for the default network traffic.
- Select Apply.
Select Domain Name and press Edit.
- Enter the Appliance Domain Name. For example, protegrity.com.
- Press Apply.
Select Name Servers and press Edit.
- Add the IP address of the name server.
- Press OK.
If you want to configure the NTP, then perform the following steps.
- Select Time Server (NTP), and press Edit.
- Add NTP time server on a TCP/IP network.
- Select Apply.
Select Apply.
The network settings are configured.
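The values depend on your network environment. The following settings are illustrative assumptions that show the expected format only:
Hostname: esa01
Management IP: 10.10.10.21
Netmask: 255.255.255.0
Default Route: 10.10.10.1
Domain Name: protegrity.com
Name Servers: 10.10.10.2
Time Server (NTP): 10.10.10.3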
4. Configuring Time Zone
After you configure the network settings, the Time Zone screen appears. This section explains how to set the time zone.
To set the Time Zone:
On the Time Zone screen, select the time zone.
Press Next.
The time zone is configured.
5. Configuring the Nearest Location
After configuring the time zone, the Nearest Location screen appears.
To Set the Nearest Location:
On the Nearest Location screen, enter the nearest location in GMT or UTC.
Press OK.
The following screen appears.
This screen also allows you to update the default settings of date and time, keyboard manufacturer, keyboard model, and keyboard layout.
6. Updating the Date and Time
To Update the Date and Time:
Press SPACE and select Update date and time.
Press ENTER.
The following screen appears.
Select the date.
Select Set Date and press ENTER.
The next screen appears.
Set the time.
Click Set Time and press ENTER.
The date and time settings are configured.
7. Updating the Keyboard Settings
To Update the Keyboard Settings:
Select Update Keyboard or Console settings.
Press ENTER.
Select the vendor and press the SPACEBAR.
Select Next.
If you select Generic, then a window with the list of generic keyboard models appears.
Select the model you use and press Next.
On the next window, select the keyboard language. The default is English (US).
Select Next.
On the next window, select the console font. The default is Lat15-Fixed16.
Press Next.
A confirmation message appears.
Press OK to confirm.
8. Configuring GRUB Settings
On the Protegrity appliances, GRUB version 2 (GRUB2) is used for loading the kernel. If you want to protect the boot configurations, then you can secure them by enforcing a username and password combination for the GRUB menu.
During the on-premise installation of the ESA, a screen to configure the GRUB credentials appears. You can secure the GRUB menu by creating a username and setting a password as described in the following task.
To configure GRUB settings:
From the GRUB Credentials page, press the SPACEBAR to select Enable.
Note:
By default, Disable is selected. If you keep Disable, then the GRUB menu is not secured. It is recommended to enable the GRUB credentials to secure the appliance.
You can enable this feature from the CLI Manager after the installation is completed. On the CLI Manager, navigate to Administration > GRUB Credential Settings to enable the GRUB settings.
For more information about GRUB, refer to Securing the GRand Unified Bootloader (GRUB).
Select OK.
The following screen appears.
Enter a username in the Username text box.
Note:
The requirements for the Username are as follows:
- It must contain a minimum of three and a maximum of 16 characters
- It must not contain numbers or special characters
Enter a password in the Password and Re-type Password text boxes.
Note:
The requirements for the Password are as follows:
- It must contain at least eight characters
- It must contain a combination of letters, numbers, and printable characters
Select OK and press ENTER.
The message "Credentials for the GRUB menu has been set successfully" appears.
Select OK.
9. Setting up Users and Passwords
Only authorized users can access the appliance. The Protegrity Data Security Platform defines a list of roles for each user who can access the appliance. These are system users and LDAP administrative users who have specific roles and permissions. When you install the appliance, the default users configured are as follows:
- root: Super user with access to all commands and files.
- admin: User with administrative privileges to perform all operations.
- viewer: User who can view, but does not have edit permissions.
- local_admin: Local administrator that can be used when the admin user is not accessible.
After you complete the server settings, the Users Passwords screen appears, which allows you to set the passwords for the users.
To set the LDAP Users Passwords:
Add the passwords of the users.
Note: Ensure that the passwords for the users comply with the password policies.
For more information about the password policies, refer Password Policy Configuration.
Select Apply.
The user passwords are set.
10. Licensing
After the appliance components are installed, the Temporary License screen appears. This step takes time. It is recommended to wait for a few minutes before proceeding.
Note: After the ESA Appliance is installed, you must apply for a valid license within 30 days.
For more information about licenses, refer Licensing.
11. Installing Products
In the final steps of installing the appliance, you are prompted to select the appliance components to install.
To select products to install:
Press SPACE and select the products that you want to install.
Click OK.
The selected products are installed.
After installation is completed, the following screen appears.
Select Continue to view the CLI Login screen.
1.5 - Configuring the ESA
Configuring authentication settings
User authentication is the process of identifying someone who wants to gain access to a resource. A server contains protected resources that are only accessible to authorized users. When you want to access any resource on the server, the server uses different authentication mechanisms to confirm your identity.
You can configure authentication using the following methods.
Configuring accounts and passwords
You can change your current password from the CLI Manager. The CLI Manager includes options to change passwords and permissions for multiple users.
For more information on configuring accounts and passwords, refer to section Accounts and Passwords Management.
Configuring Syslog
The Appliance Logs tool can be differentiated into appliance common logs and appliance-specific logs. Syslog is a log type that is common for all appliances.
For more information about configuring syslog, refer to the section Working with Logs.
Configuring external certificates
External certificates, or digital certificates, are used to secure online communications between two entities over the Internet. A digital certificate is a digitally signed statement that asserts the online identity of an individual, computer, or other entity on the network, using a Public Key Infrastructure (PKI). PKI is the standard cryptographic system used to facilitate the secure exchange of information between entities.
For more information on configuring certificates, refer here.
Configuring SMTP
The Simple Mail Transfer Protocol (SMTP) setting allows the system to send emails. You can set up an email server that supports the notification features in Protegrity Reports.
To configure SMTP from Web UI:
Log in to the ESA.
Navigate to Settings > Network.
Click the SMTP Settings tab.
The following screen appears.
For more information about configuring SMTP, refer to Email Setup.
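The values depend on your mail environment. The server name, port, and sender address below are illustrative assumptions that show the kind of information requested on the SMTP Settings tab:
SMTP server: smtp.example.com
Port: 587
Sender email address: esa-alerts@example.com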
Configuring SNMP
Using Simple Network Management Protocol (SNMP), you can query the appliance performance data.
For security reasons, the SNMP service is disabled by default. To enable the service and provide its basic configuration (listening address, community string), use the SNMP tool available in the CLI Manager.
To initialize SNMP configuration:
Log in to the CLI Manager.
Navigate to Networking > SNMP Configuration.
Enter the root password to execute the SNMP configuration and click OK.
The following screen appears.
You can also start the SNMP Service from the Web UI. Navigate to System > Services to start the SNMP service.
For more information about configuring SNMP, refer to Configure SNMP.
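After the SNMP service is enabled, you can verify it from a management host using the standard net-snmp tools. In the following sketch, the community string public and the IP address 10.10.10.21 are assumptions that must match your SNMP configuration and the appliance management IP:
snmpwalk -v2c -c public 10.10.10.21 system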
1.6 - Installing Appliances on Cloud Platforms
This section describes installing the appliances on cloud platforms, such as AWS, Azure, or GCP. To install an appliance on a cloud platform, you must mount the image containing the Protegrity appliance on a cloud instance or a virtual machine. After mounting the image, you must run the finalization procedure to install the appliance components.
The following steps must be completed to run an appliance on a cloud platform.
- Configure the cloud instance.
- Finalize installation.
1.7 - Verifying the ESA installation from the Web UI
To verify the ESA installation from the Web UI:
Log in to the ESA Web UI.
The ESA dashboard appears.
Navigate to System > Information.
The screen displaying the information of your system appears.
Under the Installed Patches area, the ESA_10.1.0 entry appears.
Navigate to System > Services and ensure that all the required services are running.
1.8 - Initializing the Policy Information Management (PIM) Module
To initialize the PIM module:
In a web browser, enter the ESA IP address in the address bar.
Enter the Username and Password.
Click Sign in.
The ESA dashboard appears.
Navigate to Policy Management > Dashboard.
The following screen to initialize PIM appears.
Click Initialize PIM.
A confirmation message appears.
Click OK.
The Policy management screen appears.
1.9 - Configuring the ESA in a Trusted Appliances Cluster (TAC)
The following figure illustrates the TAC setup.
TAC is established between the primary appliance ESA A and the secondary ESAs, ESA B and ESA C.
For more information about TAC, refer here.
Data replication for policies, forensics, or DSG configuration takes place between all the ESAs.
For more information about replication tasks, refer here.
The Audit Store cluster is enabled for the ESAs.
For more information about enabling Audit Store Cluster, refer here.
All the ESAs are added as a part of the Audit Store Cluster.
For more information about adding an ESA to the Audit Store Cluster, refer here.
1.10 - Creating an Audit Store Cluster
The Audit Store cluster is a collection of nodes that process and store data. The Audit Store is installed on the ESA nodes. The logs generated by the Appliance and Protector machines are stored in this Audit Store. The logs are useful for obtaining information about the nodes and the cluster on the whole. The logs can also be monitored for any data loss, system compromise, or any other issues with the nodes in the Audit Store cluster.
An Audit Store cluster must have a minimum of three nodes with the Master-eligible role, as illustrated by the following scenarios:
- 1 master-eligible node: If only one node with the Master-eligible role is available, then it is elected the Master, by default. In this case, if the node becomes unavailable due to some failure, then the cluster becomes unstable as there is no additional node with the Master-eligible role.
- 2 master-eligible nodes: In a cluster where only two nodes have the Master-eligible role, both nodes must be up and running for the cluster to remain functional. If either node becomes unavailable due to a failure, then the minimum number of nodes with the Master-eligible role is not met and the cluster becomes unstable. This setup is not recommended for a multi-node cluster.
- 3 master-eligible nodes and above: In this case, if any one node goes down, then the cluster can still remain functional because the cluster requires a minimum of two nodes with the Master-eligible role.
Completing the Prerequisites
Ensure that the following prerequisites are met before configuring the Audit Store Cluster. Protegrity recommends that the Audit Store Cluster has a minimum of three ESAs for creating a highly-available multi-node Audit Store cluster.
Prepare and set up three v10.1.0 ESAs.
Create the TAC on the first ESA. This will be the Primary ESA.
Add the remaining ESAs to the TAC. These will be the secondary ESAs in the TAC. For more information about installing the ESA, refer here.
Creating the Audit Store Cluster on the ESA
Initialize Insight to set up the Audit Store configuration on the first ESA, that is, the Primary ESA in the TAC. When this option is selected, Insight is configured to retrieve data from the local Audit Store. Additionally, the required processes, such as td-agent, are started and Protegrity Analytics is initialized. The Audit Store cluster is initialized on the local machine so that other nodes can join this Audit Store cluster.
Perform the following steps to initialize the Audit Store.
Log in to the ESA Web UI.
Navigate to Audit Store > Initialize Analytics.
The following screen appears.
Click Initialize Analytics.
Protegrity Analytics is now configured and retrieves data for the reports from the Audit Store. The Index Lifecycle Management screen is displayed. The data is available on the Audit Store > Dashboard tab.
Verify that the following Audit Store services are running by navigating to System > Services:
- Audit Store Management
- Audit Store Repository
- Audit Store Dashboards
- Analytics
- td-agent
Adding an ESA to the Audit Store Cluster
Add multiple ESAs to the Audit Store cluster to increase the cluster size. In this case, the current ESA is added as a node in the Audit Store cluster. After the configurations are completed, the required processes are started and the logs are read from the Audit Store cluster.
The Audit Store cluster information is updated when a node joins the Audit Store cluster. This information is updated across the Audit Store cluster. Hence, nodes must be added to an Audit Store cluster one at a time. Adding multiple nodes to the Audit Store at the same time using the ESA Web UI would make the cluster information inconsistent, make the Audit Store cluster unstable, and would lead to errors.
Ensure that the following prerequisites are met:
- Ensure that the SSH Authentication type on all the ESAs is set to Password + PublicKey. For more information about setting the authentication, refer here.
- Ensure that Insight is initialized and the Audit Store cluster is created on the node that must be joined.
- The health status of the target Audit Store node is green or yellow.
- The health status of the Audit Store node that must be added to the cluster is green or yellow.
To check the health status of a node, log in to ESA Web UI of the node, click Audit Store > Cluster Management > Overview, and view the Cluster Status from the upper-right corner of the screen. For more information about the health status, refer here.
Log in to the Web UI of the second ESA.
Navigate to Audit Store > Initialize Analytics.
The following screen appears.
Click Join Cluster.
The following screen appears.
Specify the IP address or the hostname of the Audit Store cluster to join. Use the hostname only if it can be resolved between the nodes.
Ensure that Protegrity Analytics is initialized and the Audit Store cluster is already created on the target node. A node cannot join the cluster if Protegrity Analytics is not initialized on the target node.
Specify the admin username and password for the Audit Store cluster. If required, select the Clear cluster data check box to clear the Audit Store data from the current node before joining the Audit Store cluster. The check box is enabled only if the node has data, that is, if Insight is installed and initialized on the node. Otherwise, the check box is disabled.
Click Join Cluster.
A confirmation message appears as shown in the following figure.
Click Dismiss.
The Index Lifecycle Management screen appears as shown in the following figure.
Repeat the steps to add the remaining ESAs as required. Add only one appliance at a time. After adding the appliance, wait till the cluster becomes stable. The cluster is stable when the cluster status indicator turns green.
Configuring td-agent in the Audit Store Cluster
Complete the following steps after adding the ESA node to the Audit Store cluster. This configuration is required for processing and storing the logs received by the Audit Store.
Perform this step on all the ESAs in the Audit Store cluster only for DSG deployments.
Before performing the steps provided here, verify that the Audit Store cluster health status is green on the Audit Store > Cluster Management > Overview screen of the ESA Web UI.
Log in to the CLI Manager of the ESA node.
Navigate to Tools > PLUG - Forward logs to Audit Store.
Enter the root password and select OK.
In the Setting ESA Communication screen, select OK.
Specify the IP addresses of all the ESA machines in the cluster, separated by commas.
Select OK.
Type y to fetch certificates for communicating with the ESA and select OK.
Enter the admin username and password and select OK.
Repeat the steps provided in this section on all the ESAs in the Audit Store Cluster.
Verifying the Audit Store Cluster
View the Audit Store Management page to verify that the configurations were completed successfully using the steps provided here.
Log in to the ESA Web UI.
Navigate to the Audit Store > Cluster Management > Overview page.
Verify that the nodes are added to the cluster. The health of the nodes must be either green or yellow.
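You can also check the cluster health from the command line. The following is a minimal sketch that assumes the Audit Store repository listens on the default OpenSearch port 9200 and that you supply valid Audit Store admin credentials; adjust the host, port, and user for your environment:
curl -k -u admin "https://<IP address of the ESA>:9200/_cluster/health?pretty"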
Updating the Priority IP List for Signature Verification
Signature verification jobs run on the ESA and use the ESA’s processing time. Update the priority IP list for the default signature verification jobs after setting up the system. By default, the primary ESA will be used for the priority IP. If there are multiple ESAs in the priority list, then additional ESAs are available to process the signature verifications jobs. This frees up the Primary ESA’s processor to handle other important tasks.
For example, if the maximum jobs to run on an ESA is set to 4 and 10 jobs are queued to run on 2 ESAs, then 4 jobs are started on the first ESA, 4 jobs are started on the second ESA, and 2 jobs will be queued to run till an ESA job slot is free to accept and run the queued job.
For more information about scheduling jobs, refer here.
For more information about signature verification jobs, refer here.
Use the steps provided in this section to update the priority IP list.
Log in to the ESA Web UI.
Navigate to Audit Store > Analytics > Scheduler.
From the Action column, click the Edit icon for the Signature Verification task.
Update the Priority IPs field with the list of available ESAs, separating the IPs using commas.
Click Save.
Enter the root password to apply the updates.
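The Priority IPs field expects a plain comma-separated list of ESA IP addresses. The addresses below are illustrative assumptions that show the format only:
10.10.10.21,10.10.10.22,10.10.10.23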
2 - Configuration
2.1 - Sending logs to an external security information and event management (SIEM)
This is an optional step.
In the default setup, the logs are sent from the protectors directly to the Audit Store on the ESA using the Log Forwarder on the protector.
For more information about the default flow, refer here.
To forward logs to Insight and the external SIEM, the td-agent is configured to listen for protector logs. The protectors are configured to send the logs to the td-agent on the ESA. Finally, the td-agent is configured to forward the logs to the required locations.
Ensure that the logs are sent to the ESA and the external SIEM using the steps provided in this section. The logs sent to the ESA are required by Protegrity support for troubleshooting the system in case of any issues. Also, ensure that the ESA hostname specified in the configuration files is updated when the hostname of the ESA is changed.
An overview architecture diagram for sending logs to Insight and the external SIEM is shown in the following figure.
The ESA v10.1.0 only supports protectors having the PEP server version 1.2.2+42 and later.
Forward the logs generated on the protector to Insight and the external SIEM using the following steps. Ensure that all the steps are completed in the order specified.
Set up td-agent to receive protector logs.
Send the protector logs to the td-agent.
Configure td-agent to forward logs to the external endpoint.
1. Setting up td-agent to receive protector logs
Configure the td-agent to listen to logs from the protectors and to forward the logs received to Insight.
To configure td-agent:
Add the port 24284 to the rule list on the ESA. This port is configured for the ESA to receive the protector logs over a secure connection.
For more information about adding rules, refer here.
Log in to the CLI Manager of the Primary ESA.
Navigate to Networking > Network Firewall.
Enter the password for the root user.
Select Add New Rule and select Choose.
Select Accept and select Next.
Select Manually.
Select TCP and select Next.
Specify 24284 for the port and select Next.
Select Any and select Next.
Select Any and select Next.
Specify a description for the rule and select Confirm.
Select OK.
Open the OS Console on the Primary ESA.
Log in to the CLI Manager of the Primary ESA.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Enable td-agent to receive logs from the protector.
Navigate to the config.d directory using the following command.
cd /opt/protegrity/td-agent/config.d
Enable the INPUT_forward_external.conf file using the following command. Ensure that the certificates exist in the directory that is specified in the INPUT_forward_external.conf file. If an IP address or host name is specified for the bind parameter in the file, then ensure that the certificates are updated to match the host name or IP address specified. If the host name, IP address, or domain name of the system is updated, then the bind value in this file must be updated. For more information about updating the bind value, refer here.
mv INPUT_forward_external.conf.disabled INPUT_forward_external.conf
Optional: Update the configuration settings to improve the SSL/TLS server configuration on the system.
Navigate to the config.d directory using the following command.
cd /opt/protegrity/td-agent/config.d
Open the INPUT_forward_external.conf file using a text editor.
Add the list of ciphers to the file. Update and use the ciphers that are required. Enter the entire line of code on a single line and retain the formatting of the file.
<source>
  @type forward
  bind <Hostname of the Primary ESA>
  port 24284
  <transport tls>
    ca_path /mnt/ramdisk/certificates/mng/CA.pem
    cert_path /mnt/ramdisk/certificates/mng/server.pem
    private_key_path /mnt/ramdisk/certificates/mng/server.key
    ciphers "ALL:!aNULL:!eNULL:!SSLv2:!SSLv3:!DHE:!AES256-SHA:!CAMELLIA256-SHA:!AES128-SHA:!CAMELLIA128-SHA:!TLS_RSA_WITH_RC4_128_MD5:!TLS_RSA_WITH_RC4_128_SHA:!TLS_RSA_WITH_3DES_EDE_CBC_SHA:!TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA:!TLS_RSA_WITH_SEED_CBC_SHA:!TLS_DHE_RSA_WITH_SEED_CBC_SHA:!TLS_ECDHE_RSA_WITH_RC4_128_SHA:!TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA"
  </transport>
</source>
Save and close the file.
Restart the td-agent service.
Log in to the ESA Web UI.
Navigate to System > Services > Misc > td-agent.
Restart the td-agent service.
Repeat the steps on all the ESAs in the Audit Store cluster.
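To confirm that td-agent is listening for protector logs, you can check the port from the OS Console. This is a minimal sketch using standard Linux tooling; the port number is the one configured above:
ss -tlnp | grep 24284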
2. Sending the protector logs to the td-agent
Configure the protector to send the logs to the td-agent on the ESA or appliance. The td-agent forwards the logs received to Insight and the external location.
To configure the protector:
Log in and open a CLI on the protector machine.
Back up the existing files.
Navigate to the config.d directory using the following command.
cd /opt/protegrity/logforwarder/data/config.d
Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/data/config.d path. Use the /opt/protegrity/fluent-bit/data/config.d path for protectors v9.1.0.0 and earlier.
Back up the existing out.conf file using the following command.
cp out.conf out.conf_backup
Protectors v9.2.0.0 and later use the out.conf file. Use the out_elastic.conf file for protectors v9.1.0.0 and earlier.
Back up the existing upstream.cfg file using the following command.
cp upstream.cfg upstream.cfg_backup
Protectors v9.2.0.0 and later use the upstream.cfg file. Use the upstream_es.cfg file for protectors v9.1.0.0 and earlier.
Update the out.conf file for specifying the logs that must be forwarded to the ESA.
Navigate to the /opt/protegrity/logforwarder/data/config.d directory. Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/data/config.d path. Use the /opt/protegrity/fluent-bit/data/config.d path for protectors v9.1.0.0 and earlier.
Open the out.conf file using a text editor.
Update the file contents with the following code.
Update the code blocks for all the options with the following information:
Update the Name parameter from opensearch to forward.
Delete the following Index, Type, and Time_Key parameters:
Index pty_insight_audit
Type _doc
Time_Key ingest_time_utc
Delete the Suppress_Type_Name and Buffer_Size parameters:
Suppress_Type_Name on
Buffer_Size false
The updated extract of the code is shown here.
[OUTPUT]
    Name forward
    Match logdata
    Retry_Limit False
    Upstream /opt/protegrity/logforwarder/data/config.d/upstream.cfg
    storage.total_limit_size 256M
    net.max_worker_connections 1
    net.keepalive off
    Workers 1
[OUTPUT]
    Name forward
    Match flulog
    Retry_Limit no_retries
    Upstream /opt/protegrity/logforwarder/data/config.d/upstream.cfg
    storage.total_limit_size 256M
    net.max_worker_connections 1
    net.keepalive off
    Workers 1
Ensure that the file does not have any trailing spaces or line breaks at the end of the file.
Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/data/config.d path and the upstream.cfg file. Use the /opt/protegrity/fluent-bit/data/config.d path and the upstream_es.cfg file for protectors v9.1.0.0 and earlier.
Save and close the file.
Update the upstream.cfg file for forwarding the logs to the ESA.
Navigate to the /opt/protegrity/logforwarder/data/config.d directory.
Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/data/config.d path. Use the /opt/protegrity/fluent-bit/data/config.d path for protectors v9.1.0.0 and earlier.
Open the upstream.cfg file using a text editor.
Update the file contents with the following code.
Update the code blocks for all the nodes with the following information:
Update the Port to 24284.
Delete the Pipeline parameter:
Pipeline logs_pipeline
The updated extract of the code is shown here.
[UPSTREAM]
    Name pty-insight-balancing

[NODE]
    Name node-1
    Host <IP address of the ESA>
    Port 24284
    tls on
    tls.verify off
The code shows information updated for one node. For multiple nodes, update the information for all the nodes.
Ensure that there are no trailing spaces or line breaks at the end of the file.
If the IP address of the ESA is updated, then update the Host value in the upstream.cfg file.
Protectors v9.2.0.0 and later use the Name parameter as pty-insight-balancing. Use the Name parameter as pty-es-balancing for protectors v9.1.0.0 and earlier.
Save and close the file.
Restart logforwarder on the protector using the following commands.
/opt/protegrity/logforwarder/bin/logforwarderctrl stop
/opt/protegrity/logforwarder/bin/logforwarderctrl start
Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/bin path. Use the /opt/protegrity/fluent-bit/bin path for protectors v9.1.0.0 and earlier.
If required, complete the configurations on the remaining protector machines.
Update the td-agent configuration to send logs to the external location.
3. Configuring td-agent to forward logs to the external endpoint
Depending on the setup and requirements, the forwarded logs can be formatted with syslog-related fields and sent over TLS to the SIEM. Alternatively, the logs can be sent without any formatting over a non-TLS syslog connection to the SIEM.
The ESA has logs generated by the appliances and the protectors connected to the ESA. Forward these logs to the syslog server and use the log data for further analysis as per requirements.
For a complete list of plugins for forwarding logs, refer to https://www.fluentd.org/plugins/all.
Before you begin: Ensure that the external syslog server is available and running.
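If the external syslog server runs rsyslog, a minimal UDP listener similar to the following sketch is sufficient for the non-TLS option. This is only an illustrative example; the file name and port are assumptions and the actual configuration depends on the SIEM in use.
```
# /etc/rsyslog.d/10-protegrity.conf (example file name)
module(load="imudp")            # accept syslog messages over UDP
input(type="imudp" port="514")  # port must match the port used in OUTPUT_syslog.conf
```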
The following options are available. Select one of the sections based on the requirements:
- Forwarding Logs to a Syslog Server
- Forwarding Logs to a Syslog Server Over TLS
To forward logs to the external SIEM:
Open the CLI Manager on the Primary ESA.
Log in to the CLI Manager of the Primary ESA where the td-agent was configured in Setting Up td-agent to Receive Protector Logs.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Navigate to the /products/uploads directory using the following command.
cd /products/uploads
Obtain the required plugin files using one of the following options, based on the setup.
If the appliance has Internet access, then run the following commands.
wget https://rubygems.org/downloads/syslog_protocol-0.9.2.gem
wget https://rubygems.org/downloads/remote_syslog_sender-1.2.2.gem
wget https://rubygems.org/downloads/fluent-plugin-remote_syslog-1.0.0.gem
If the appliance does not have Internet access, then complete the following steps.
- Download the three .gem setup files listed in the wget commands above from a system that has Internet access, and copy them to the /products/uploads directory on the appliance.
- Ensure that the downloaded files have the execute permission.
Prepare the required plugin files using the following commands.
Assign ownership of the .gem files to the td-agent user using the following command.
chown td-agent *.gem
Assign the required permissions to the software installation directory using the following command.
chmod -R 755 /opt/td-agent/lib/ruby/gems/3.2.0/
Assign ownership of the software installation directory to the required users using the following command.
chown -R td-agent:plug /opt/td-agent/lib/ruby/gems/3.2.0/
Install the required plugin files using one of the following options, based on the setup.
If the appliance has Internet access, then run the following commands.
sudo -u td-agent /opt/td-agent/bin/fluent-gem install syslog_protocol
sudo -u td-agent /opt/td-agent/bin/fluent-gem install remote_syslog_sender
sudo -u td-agent /opt/td-agent/bin/fluent-gem install fluent-plugin-remote_syslog
If the appliance does not have Internet access, then run the following commands.
sudo -u td-agent /opt/td-agent/bin/fluent-gem install --local /products/uploads/syslog_protocol-0.9.2.gem
sudo -u td-agent /opt/td-agent/bin/fluent-gem install --local /products/uploads/remote_syslog_sender-1.2.2.gem
sudo -u td-agent /opt/td-agent/bin/fluent-gem install --local /products/uploads/fluent-plugin-remote_syslog-1.0.0.gem
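Optionally, verify that the plugins are visible to the td-agent user before continuing. The fluent-gem list command is part of the td-agent distribution; the grep filter is only an example.
```
sudo -u td-agent /opt/td-agent/bin/fluent-gem list | grep -i syslog
```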
Update the configuration files using the following steps.
Navigate to the config.d directory using the following command.
cd /opt/protegrity/td-agent/config.d
Back up the existing output file using the following command.
cp OUTPUT.conf OUTPUT.conf_backup
Open the OUTPUT.conf file using a text editor.
Update the following contents in the OUTPUT.conf file.
Update the match tag in the file to <match *.*.* logdata flulog>.
Add the following code in the match tag in the file:
<store>
  @type relabel
  @label @syslog
</store>
The final OUTPUT.conf file with the updated content is shown here:
<filter **>
  @type elasticsearch_genid
  # to avoid duplicate logs
  # https://github.com/uken/fluent-plugin-elasticsearch#generate-hash-id
  hash_id_key _id # storing generated hash id key (default is _hash)
</filter>
<match *.*.* logdata flulog>
  @type copy
  <store>
    @type opensearch
    hosts <Hostname of the ESA>
    port 9200
    index_name pty_insight_audit
    type_name _doc
    pipeline logs_pipeline
    # adds new data - if the data already exists (based on its id), the op is skipped.
    # https://github.com/uken/fluent-plugin-elasticsearch#write_operation
    write_operation create
    # By default, all records inserted into Elasticsearch get a random _id. This option allows to use a field in the record as an identifier.
    # https://github.com/uken/fluent-plugin-elasticsearch#id_key
    id_key _id
    scheme https
    ssl_verify true
    ssl_version TLSv1_2
    ca_file /etc/ksa/certificates/plug/CA.pem
    client_cert /etc/ksa/certificates/plug/client.pem
    client_key /etc/ksa/certificates/plug/client.key
    request_timeout 300s # defaults to 5s https://github.com/uken/fluent-plugin-elasticsearch#request_timeout
    <buffer>
      @type file
      path /opt/protegrity/td-agent/es_buffer
      retry_forever true # Set 'true' for infinite retry loops.
      flush_mode interval
      flush_interval 60s
      flush_thread_count 8 # parallelize outputs https://docs.fluentd.org/deployment/performance-tuning-single-process#use-flush_thread_count-parameter
      retry_type periodic
      retry_wait 10s
    </buffer>
  </store>
  <store>
    @type relabel
    @label @triggering_agent
  </store>
  <store>
    @type relabel
    @label @syslog
  </store>
</match>
Ensure that there are no trailing spaces or line breaks at the end of the file.
Save and close the file.
Create and open the OUTPUT_syslog.conf file using a text editor.
Perform the steps from one of the following solutions as per the requirement.
Solution 1: Forward all logs to the external syslog server:
Add the following contents to the OUTPUT_syslog.conf file.
<label @syslog>
  <match *.*.* logdata flulog>
    @type copy
    <store>
      @type remote_syslog
      host <IP_of_the_syslog_server_host>
      port 514
      <format>
        @type json
      </format>
      protocol udp
      <buffer>
        @type file
        path /opt/protegrity/td-agent/syslog_tags_buffer
        retry_forever true # Set 'true' for infinite retry loops.
        flush_mode interval
        flush_interval 60s
        flush_thread_count 8 # parallelize outputs https://docs.fluentd.org/deployment/performance-tuning-single-process#use-flush_thread_count-parameter
        retry_type periodic
        retry_wait 10s
      </buffer>
    </store>
  </match>
</label>
Ensure that there are no trailing spaces or line breaks at the end of the file.
Solution 2: Forward only the protection logs to the external syslog server:
Add the following contents to the OUTPUT_syslog.conf file.
<label @syslog>
  <match logdata>
    @type copy
    <store>
      @type remote_syslog
      host <IP_of_the_syslog_server_host>
      port 514
      <format>
        @type json
      </format>
      protocol udp
      <buffer>
        @type file
        path /opt/protegrity/td-agent/syslog_tags_buffer
        retry_forever true # Set 'true' for infinite retry loops.
        flush_mode interval
        flush_interval 60s
        flush_thread_count 8 # parallelize outputs https://docs.fluentd.org/deployment/performance-tuning-single-process#use-flush_thread_count-parameter
        retry_type periodic
        retry_wait 10s
      </buffer>
    </store>
  </match>
</label>
Ensure that there are no trailing spaces or line breaks at the end of the file. Ensure that the <IP_of_the_syslog_server_host> is specified in the file.
To use a TCP connection, update the protocol to tcp. In addition, specify the port that is opened for TCP communication.
For more information about formatting the output, refer to https://docs.fluentd.org/configuration/format-section.
Save and close the file.
Update the permissions for the file using the following commands.
chown td-agent:td-agent OUTPUT_syslog.conf
chmod 700 OUTPUT_syslog.conf
Restart the td-agent service.
Log in to the ESA Web UI.
Navigate to System > Services > Misc > td-agent.
Restart the td-agent service.
Check the status and restart the rsyslog server on the remote SIEM system using the following commands.
systemctl status rsyslog
systemctl restart rsyslog
The logs are now sent to Insight on the ESA and the external SIEM.
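To confirm the path end to end, a quick check is to emit a test message from the ESA OS Console and watch for it on the SIEM. This assumes the util-linux logger utility is available on the appliance and that UDP port 514 is used, as configured above.
```
logger --server <IP_of_the_syslog_server_host> --port 514 --udp "Protegrity td-agent forwarding test"
```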
To forward logs to the external SIEM over TLS:
Open the CLI Manager on the Primary ESA.
Log in to the CLI Manager of the Primary ESA where the td-agent was configured in Setting Up td-agent to Receive Protector Logs.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Navigate to the /products/uploads directory using the following command.
cd /products/uploads
Obtain the required plugin file using one of the following commands based on the setup.
If the appliance has Internet access, then run the following command.
wget https://rubygems.org/downloads/fluent-plugin-syslog-tls-2.0.0.gem
If the appliance does not have Internet access, then complete the following steps.
- Download the fluent-plugin-syslog-tls-2.0.0.gem setup file from a system that has Internet access and copy it to the /products/uploads directory on the appliance.
- Ensure that the file downloaded has the execute permission.
Prepare the required plugin file using the following commands.
Assign ownership of the .gem file to the td-agent user using the following command.
chown td-agent *.gem
Assign the required permissions to the software installation directory using the following command.
chmod -R 755 /opt/td-agent/lib/ruby/gems/3.2.0/
Assign ownership of the software installation directory to the required users using the following command.
chown -R td-agent:plug /opt/td-agent/lib/ruby/gems/3.2.0/
Install the required plugin file using one of the following commands based on the setup.
If the appliance has Internet access, then run the following command.
sudo -u td-agent /opt/td-agent/bin/fluent-gem install fluent-plugin-syslog-tls
If the appliance does not have Internet access, then run the following command.
sudo -u td-agent /opt/td-agent/bin/fluent-gem install --local /products/uploads/fluent-plugin-syslog-tls-2.0.0.gem
Copy the required certificates to the ESA or the appliance.
Log in to the ESA or the appliance and open the CLI Manager.
Create a directory for the certificates using the following command.
mkdir -p /opt/protegrity/td-agent/new_certs
Update the ownership of the directory using the following command.
chown -R td-agent:plug /opt/protegrity/td-agent/new_certs
Log in to the remote SIEM system.
Using a command prompt, navigate to the directory where the certificates are located. For example,
cd /etc/pki/tls/certs
Connect to the ESA or appliance using a file transfer manager. For example,
sftp root@ESA_IP
Copy the CA and client certificates to the /opt/protegrity/td-agent/new_certs directory using the following commands.
put CA.pem /opt/protegrity/td-agent/new_certs
put client.* /opt/protegrity/td-agent/new_certs
Update the permissions of the certificates using the following commands.
chmod 744 /opt/protegrity/td-agent/new_certs/CA.pem
chmod 744 /opt/protegrity/td-agent/new_certs/client.pem
chmod 744 /opt/protegrity/td-agent/new_certs/client.key
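As an optional check, assuming the openssl command-line tool is installed on the appliance, confirm that the copied client certificate chains to the copied CA before updating the configuration.
```
openssl verify -CAfile /opt/protegrity/td-agent/new_certs/CA.pem /opt/protegrity/td-agent/new_certs/client.pem
```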
Update the configuration files using the following steps.
Navigate to the config.d directory using the following command.
cd /opt/protegrity/td-agent/config.d
Back up the existing output file using the following command.
cp OUTPUT.conf OUTPUT.conf_backup
Open the OUTPUT.conf file using a text editor.
Update the following contents in the OUTPUT.conf file.
Update the match tag in the file to <match *.*.* logdata flulog>.
Add the following code in the match tag in the file:
<store>
  @type relabel
  @label @syslogtls
</store>
The final OUTPUT.conf file with the updated content is shown here:
<filter **>
  @type elasticsearch_genid
  # to avoid duplicate logs
  # https://github.com/uken/fluent-plugin-elasticsearch#generate-hash-id
  hash_id_key _id # storing generated hash id key (default is _hash)
</filter>
<match *.*.* logdata flulog>
  @type copy
  <store>
    @type opensearch
    hosts <Hostname of the ESA>
    port 9200
    index_name pty_insight_audit
    type_name _doc
    pipeline logs_pipeline
    # adds new data - if the data already exists (based on its id), the op is skipped.
    # https://github.com/uken/fluent-plugin-elasticsearch#write_operation
    write_operation create
    # By default, all records inserted into Elasticsearch get a random _id. This option allows to use a field in the record as an identifier.
    # https://github.com/uken/fluent-plugin-elasticsearch#id_key
    id_key _id
    scheme https
    ssl_verify true
    ssl_version TLSv1_2
    ca_file /etc/ksa/certificates/plug/CA.pem
    client_cert /etc/ksa/certificates/plug/client.pem
    client_key /etc/ksa/certificates/plug/client.key
    request_timeout 300s # defaults to 5s https://github.com/uken/fluent-plugin-elasticsearch#request_timeout
    <buffer>
      @type file
      path /opt/protegrity/td-agent/es_buffer
      retry_forever true # Set 'true' for infinite retry loops.
      flush_mode interval
      flush_interval 60s
      flush_thread_count 8 # parallelize outputs https://docs.fluentd.org/deployment/performance-tuning-single-process#use-flush_thread_count-parameter
      retry_type periodic
      retry_wait 10s
    </buffer>
  </store>
  <store>
    @type relabel
    @label @triggering_agent
  </store>
  <store>
    @type relabel
    @label @syslogtls
  </store>
</match>
Ensure that there are no trailing spaces or line breaks at the end of the file.
Save and close the file.
Create and open the OUTPUT_syslogTLS.conf file using a text editor.
Perform the steps from one of the following solutions as per the requirement.
Solution 1: Forward all logs to the external syslog server:
Add the following contents to the OUTPUT_syslogTLS.conf file.
<label @syslogtls>
  <filter *.*.* logdata flulog>
    @type record_transformer
    enable_ruby true
    <record>
      severity "${ case record['level'] when 'Error' 'err' when 'ERROR' 'err' else 'info' end }"
      #local0 -Protection
      #local1 -Application
      #local2 -System
      #local3 -Kernel
      #local4 -Policy
      #local5 -User Defined
      #local6 -User Defined
      #local7 -Others
      #local5 and local6 can be defined as per the requirement
      facility "${ case record['logtype'] when 'Protection' 'local0' when 'Application' 'local1' when 'System' 'local2' when 'Kernel' 'local3' when 'Policy' 'local4' else 'local7' end }"
      #noHostName - can be changed by customer
      hostname ${record["origin"] ? (record["origin"]["hostname"] ? record["origin"]["hostname"] : "noHostName") : "noHostName" }
    </record>
  </filter>
  <match *.*.* logdata flulog>
    @type copy
    <store>
      @type syslog_tls
      host <IP_of_the_rsyslog_server_host>
      port 601
      client_cert /opt/protegrity/td-agent/new_certs/client.pem
      client_key /opt/protegrity/td-agent/new_certs/client.key
      ca_cert /opt/protegrity/td-agent/new_certs/CA.pem
      verify_cert_name true
      severity_key severity
      facility_key facility
      hostname_key hostname
      <format>
        @type json
      </format>
    </store>
  </match>
</label>
Ensure that there are no trailing spaces or line breaks at the end of the file.
Solution 2: Forward only the protection logs to the external syslog server:
Add the following contents to the OUTPUT_syslogTLS.conf file.
<label @syslogtls>
  <filter logdata>
    @type record_transformer
    enable_ruby true
    <record>
      severity "${ case record['level'] when 'Error' 'err' when 'ERROR' 'err' else 'info' end }"
      #local0 -Protection
      #local1 -Application
      #local2 -System
      #local3 -Kernel
      #local4 -Policy
      #local5 -User Defined
      #local6 -User Defined
      #local7 -Others
      #local5 and local6 can be defined as per the requirement
      facility "${ case record['logtype'] when 'Protection' 'local0' when 'Application' 'local1' when 'System' 'local2' when 'Kernel' 'local3' when 'Policy' 'local4' else 'local7' end }"
      #noHostName - can be changed by customer
      hostname ${record["origin"] ? (record["origin"]["hostname"] ? record["origin"]["hostname"] : "noHostName") : "noHostName" }
    </record>
  </filter>
  <match logdata>
    @type copy
    <store>
      @type syslog_tls
      host <IP_of_the_rsyslog_server_host>
      port 601
      client_cert /opt/protegrity/td-agent/new_certs/client.pem
      client_key /opt/protegrity/td-agent/new_certs/client.key
      ca_cert /opt/protegrity/td-agent/new_certs/CA.pem
      verify_cert_name true
      severity_key severity
      facility_key facility
      hostname_key hostname
      <format>
        @type json
      </format>
    </store>
  </match>
</label>
Ensure that the <IP_of_the_rsyslog_server_host> is specified in the file.
For more information about formatting the output, refer to https://docs.fluentd.org/configuration/format-section.
The logs are formatted using the commonly used RFC 3164 format.
For more information about RFC 3164, refer to https://datatracker.ietf.org/doc/html/rfc3164.
Ensure that there are no trailing spaces or line breaks at the end of the file.
Save and close the file.
Update the permissions for the file using the following commands.
chown td-agent:td-agent OUTPUT_syslogTLS.conf
chmod 700 OUTPUT_syslogTLS.conf
Restart the td-agent service.
Log in to the ESA Web UI.
Navigate to System > Services > Misc > td-agent.
Restart the td-agent service.
Check the status and restart the rsyslog server on the remote SIEM system using the following commands.
systemctl status rsyslog
systemctl restart rsyslog
The logs are now sent to Insight on the ESA and the external SIEM over TLS.
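If the remote SIEM runs rsyslog, the TLS listener on port 601 might look like the following sketch. This is not part of the product configuration; the gtls driver, certificate paths, and authentication mode are assumptions that must be adapted to the SIEM in use.
```
# example rsyslog receiver configuration on the SIEM (requires the gtls netstream driver)
global(
  DefaultNetstreamDriver="gtls"
  DefaultNetstreamDriverCAFile="/etc/pki/tls/certs/CA.pem"
  DefaultNetstreamDriverCertFile="/etc/pki/tls/certs/server.pem"
  DefaultNetstreamDriverKeyFile="/etc/pki/tls/certs/server.key"
)
module(load="imtcp" StreamDriver.Name="gtls" StreamDriver.Mode="1" StreamDriver.AuthMode="x509/name")
input(type="imtcp" port="601")
```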
When logs are sent to Insight and the external SIEM over TLS and the domain name for the system is updated, then update the bind address for the system.
For more information about updating the bind address, refer here.
2.2 - Configuring a Trusted Appliance Cluster (TAC) without Consul Integration
For more information about creating a TAC, refer to the section Trusted Appliances Cluster (TAC).
Note: If the node contains scheduled tasks associated with it, then you cannot uninstall the cluster services on it. Ensure that you delete all the scheduled tasks before uninstalling the cluster services.
Note: If you are uninstalling the Consul Integration services, then the Consul related ports and certificates are not required.
To uninstall cluster services, perform the following steps.
Remove the appliance from the TAC.
In the CLI Manager, navigate to Administration > Add/Remove Services.
Press ENTER.
Select Remove already installed applications.
Select Cluster-Consul-Integration v0.2 and select OK.
The integration service is uninstalled.
Select Consul v1.0 and select OK.
The Consul product is uninstalled from your appliance.
After the Consul Integration is successfully uninstalled, the cluster labels, such as Consul-Client and Consul-Server, are no longer available.
To manage the communication between various nodes in a TAC, you can use the communication blocking mechanism.
For more information about the communication blocking mechanism, refer to the section Connection Settings.
2.3 - Configuring the IP Address for the Docker Interface
From ESA v9.0.0.0, the default IP addresses assigned to the docker interfaces are in the 172.17.0.0/16 and 172.18.0.0/16 ranges. If a VPN or your organization's network is configured with IP addresses in the 172.17.0.0/16 or 172.18.0.0/16 ranges, then this might conflict with your organization's private or internal network, resulting in loss of network connectivity.
Note: Ensure that the IP addresses assigned to the docker interface do not conflict with the organization's private or internal network.
In such a case, you can reconfigure the IP addresses for the docker interface by performing the following steps.
To configure the IP address of the docker interfaces:
Leave the docker swarm using the following command.
docker swarm leave --force
Remove the docker_gwbridge network using the following command.
docker network rm docker_gwbridge
In the /etc/docker/daemon.json file, enter the non-conflicting IP address range.
Note:
You must separate the entries in the daemon.json file using a comma (,). Before adding new entries, ensure that the existing entries are separated by a comma (,). Also, ensure that the entries are listed in the correct format, as shown in the following example. A complete example of the daemon.json file is shown after the warning below.
"bip": "10.200.0.1/24", "default-address-pools": [ {"base":"10.201.0.0/16","size":24} ]
Warning:
If the entries in the file are not specified in the format shown in step 3, then the restart operation for the docker service fails.
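For reference, a complete daemon.json might look like the following sketch. The log-level entry only stands in for whatever entries already exist in your file, and the address ranges are examples.
```
{
  "log-level": "warn",
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    {"base": "10.201.0.0/16", "size": 24}
  ]
}
```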
Restart the docker service using the following command.
/etc/init.d/docker restart
The docker service is restarted successfully.
Check the status of the docker service using the following command.
/etc/init.d/docker status
Initialize docker swarm using the following command.
docker swarm init --advertise-addr=ethMNG --listen-addr=ethMNG --data-path-addr=ethMNG
The IP addresses of the docker interfaces are changed successfully.
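To confirm that the new ranges are in effect, the standard Docker and iproute2 tools can be used; both commands below assume a default installation.
```
ip addr show docker0                    # the address should come from the "bip" range
docker network inspect docker_gwbridge  # the subnet should come from "default-address-pools"
```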
2.4 - PEP Server configuration file
# Configuration file for the pepserver
#
# ----------------------------
# Application configuration
# ----------------------------
[application]
# Directory where the pepserver saves its temporary files etc.
workingdir = ./
# Directory where token elements are stored.
tokenelementdir = ./tokenelements
# Execute this program/script after the policy has been successfully updated in shared memory.
# Can be used to distribute a policy to multiple nodes/destinations.
# If nothing is set no execute is done.
#postdeploy = <path/script>
# Specifies the communication id to use. Default 0
# Teradata : Configurable.
# SQLServer : Must be set to 0.
# Oracle : Configurable. Must match the value specified in 'createobjects.sql'
# DB2 : Configurable.
# Valid values are in the range 0 to 255.
communicationid = 0
# Add the PEP Server's IP Address to request headers.
# This is needed when the PEP Server is communicating with ESA via a proxy.
addipaddressheader = yes
# ---------------------------------
# Logging configuration
# ---------------------------------
[logging]
# Logging level for pepserver application logs: OFF - No logging, SEVERE, WARNING, INFO, CONFIG, ALL
level = WARNING
# Set the output type for protections logs. Set to either tcp or stdout.
# tcp = (default) Logs are sent to fluent-bit using tcp
# stdout = Logs are sent to stdout
output = tcp
# Fluentbit host and port values (mostly localhost) where logs will be forwarded from the protector.
host = 127.0.0.1
port = 15780
# In case the connection to the fluentbit is lost, set how logs must be handled.
# This setting is only for the protector logs and not application logs, sent from pepserver
# drop = (default) Protector throws logs away if connection to the fluentbit is lost
# error = Protector returns error without protecting/unprotecting data if connection to the fluentbit is lost
mode = drop
# Interval in seconds for how often logs are sent from the protector to the logforwarder. (Default 1 sec)
# It can be set to a maximum of 86400 (i.e. 24 hours).
logsendinterval = 1
# -----------------------------
# Policy management
# -----------------------------
[policymanagement]
# The base URL to the HubController service.
url = https://10.10.100.5:8443
# Path to the CA certificate.
cafile = ./CA.pem
# Path to the certificate.
certfile = ./cert.pem
# Path to the private key for the certificate.
certkeyfile = ./cert.key
# Path to the credential file used to decrypt the private key.
keycredentialfile = ./certkeyup.bin
# Number of seconds between checks to refresh policy from ESA.
# Specify a value in the range 30 to 86400 seconds (default is 60 seconds). If this
# value is set to be larger than 300 then the node status might not be proper on ESA.
# Some random bias will be added to this value to spread the load from multiple pep servers.
policyrefreshinterval = 60
# Define what value to return if data to protect is an empty string
# null = Return a null value (Default)
# encrypt = Return an encrypted value
# empty = Return an empty string
emptystring = null
# -----------------------------------
# Application Protector configuration
# -----------------------------------
[applicationprotector]
# Listener port for Application Protector Client/Server.
#listener = tcp, 15910/127.0.0.1
# ----------------------------
# Administration configuration
# ----------------------------
[administration]
# Listener port for the administration interface.
# Only accessible on localhost.
listener = tcp, 16700
# The URI to the authentication API.
# Base URL and certificates are taken from the policymanagement section
uri = /api/v1/auth/login/checkcredentials
# -----------------------------
# Member management
# -----------------------------
[member]
# Specifies how policy users are checked against policy
# yes = (The default) policy users are treated in a case-sensitive manner
# no = policy users are treated in a case-insensitive manner.
case-sensitive = yes
# -----------------------------
# Shared Memory management
# -----------------------------
# This section appears only for the DSG. For other protectors, you must add the section manually.
[sharedmemory]
groupname = dsggroup
worldreadable = no
The following table helps you to understand the usage of the parameters listed in the pepserver.cfg configuration file.
Important: It is recommended that only the parameters listed in the following table are edited as per your requirement.
| Appliance/Protectors | Section | Parameter Name | Description |
|---|---|---|---|
| All Protectors | Application configuration | postdeploy | Set the path of any script that must be executed after the policy is deployed. |
| All Protectors | Logging configuration | level | Specifies the logging level. The log level set in this parameter determines how the data protection logs appear in the ESA forensics. |
| All Protectors | Logging configuration | host | Set the host IP of the Log Forwarder, generally localhost, where the protector sends the logs. |
| All Protectors | Logging configuration | port | Set the port number of the Log Forwarder where the protector sends the logs. |
| All Protectors | Logging configuration | mode | Set how the logs must be handled when the connection from the protector to the Log Forwarder is lost. Important: The default value is drop. For the MS SQL Database Protector, if you update the value of the mode setting in the pepserver.cfg file, then the changes are not reflected unless you restart the MS SQL Server or recreate the MS SQL Server objects. |
| All Protectors | Logging configuration | output | Set the output type for the aggregated security logs. CAUTION: Do not set the output=stdout setting in a production environment. This setting must be used only for debugging. If the output=stdout setting is configured, then the aggregated logs are not sent to Insight. |
| All Protectors | Logging configuration | logsendinterval | Set the time interval after which the logs are sent from the protector to the Log Forwarder. The default value is 1 second. |
| All Protectors | Policy management | emptystring | Defines the behavior when the data to protect is an empty string. The default value is null. The possible values are null (return a null value), encrypt (return an encrypted value), and empty (return an empty string). For more information about empty string handling by protectors, refer to the section Empty String Handling by Protectors in the Protection Methods Reference 9.2.0.0. |
| All Protectors | Member management | case-sensitive | If this parameter is set to no, then the PEP Server treats policy user names as case insensitive. If this parameter is set to yes, or if it is commented out in the file, then the PEP Server treats policy user names as case-sensitive. The default value is yes. |
| All Protectors | Shared Memory management. This section is seen in the pepserver.cfg for the DSG. For other protectors, you must add the section to the pepserver.cfg file. For more information about Shared Memory management in the Big Data Protector, refer to the section Updating the Configuration Parameters for the BDP PEP Service in an Open Hadoop Network. | groupname | Set the group name. For DSG, this is set to dsggroup. |
| All Protectors | Shared Memory management | worldreadable | Set to no as the default. |
| DSG | Policy management | shufflecodebooks | Set to yes when codebook reshuffling must be enabled. The default value is no. Note: Enabling this parameter requires careful consideration. For more information about codebook reshuffling, refer to 4.3.1 Codebook Re-shuffling in the PEP Server in the Data Security Gateway User Guide 3.0.0.0. |
| DSG | Policy management | randomfile | Path to the file that contains the random bytes for shuffling codebooks. |
| DSG | PKCS#11 configuration. Important: You must edit values under this section only if shufflecodebooks is enabled. For more information about codebook reshuffling, refer to 4.3.1 Codebook Re-shuffling in the PEP Server in the Data Security Gateway User Guide 3.0.0.0. | provider_library | Path to the PKCS#11 provider library. |
| DSG | PKCS#11 configuration | slot | The slot number to use on the HSM. |
| DSG | PKCS#11 configuration | userpin | The scrambled user pin file. |
| Application Protector | Shared Memory management. Note: This section is seen in the pepserver.cfg for the Application Protector. | groupname | Set the group name. For the Application Protector, the group name must be the same as that of the user who is authorized to perform the data security operations. |
| Application Protector | Shared Memory management | worldreadable | By default, this parameter is set to yes, that is, the shared memory segment permissions are set to 666, which is world-readable. Set this parameter to no to make it non world-readable. As a result, the permissions are changed to 660. |
| Database Protector | Shared Memory management. Note: This section is seen in the pepserver.cfg file for the Database Protector. See the note after this table for the procedure to modify these settings. | groupname | Set the group name. For the Database Protector, the group name must be the same as that of the user who is authorized to perform the data security operations. |
| Database Protector | Shared Memory management | worldreadable | By default, this parameter is set to yes, that is, the shared memory permissions are set to 666, which is world-readable. Set this parameter to no to make it non world-readable. As a result, the permissions are changed to 660. |

Note: To modify the Shared Memory management settings for the Database Protector, perform the following steps in sequence:
- Stop the PEP server.
- Change the Shared Memory management parameters in the pepserver.cfg file. For example,
```
groupname = oinstall
worldreadable = Yes
```
Here, oinstall is the group name that is authorized to perform the data security operations in the Database Protector.
- Remove the shared memories and the semaphores using the following commands.
```
ipcrm -M 0x000beda0
ipcrm -M 0x000abba4
ipcrm -M 0x000faffa
ipcrm -M 0x000dada0
ipcrm -M 0x000c0de4
ipcrm -S 0x000c0de4
ipcrm -S 0x000dada0
ipcrm -S 0x000faffa
ipcrm -S 0x000abba4
ipcrm -S 0x000beda0
```
- Start the PEP server.
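Before running the ipcrm commands in the note above, it can be useful to confirm that the shared memory segments and semaphores with the listed keys actually exist; the ipcs utility is standard on Linux.
```
ipcs -m   # list shared memory segments and their keys
ipcs -s   # list semaphore sets and their keys
```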
2.5 - Configuring ESA features
2.5.1 - Rotating Insight certificates
These steps are only applicable for the system-generated Protegrity certificate and keys. For rotating custom certificates, refer here. If the ESA keys are rotated, then the Audit Store certificates must be rotated.
Log in to the ESA Web UI.
Navigate to System > Services > Misc.
Stop the td-agent service. Skip this step if Analytics is not initialized.
On the ESA Web UI, navigate to System > Services > Misc.
Stop the Analytics service.
Navigate to System > Services > Audit Store.
Stop the Audit Store Management service.
Navigate to System > Services > Audit Store.
Stop the Audit Store Repository service.
Run the Rotate Audit Store Certificates tool on the system.
From the CLI, navigate to Tools > Rotate Audit Store Certificates.
Enter the root password and select OK.
Enter the admin username and password and select OK.
Enter the IP address of the local system in the Target Audit Store Address field and select OK to rotate the certificates.
After the rotation is complete, select OK.
The CLI screen appears.
Navigate to System > Services > Audit Store.
Start the Audit Store Repository service.
Navigate to System > Services > Audit Store.
Start the Audit Store Management service.
Navigate to Audit Store > Cluster Management and confirm that the cluster is functional and the cluster status is green or yellow. The cluster with green status is shown in the following figure.
Navigate to System > Services > Misc.
Start the Analytics service.
Navigate to System > Services > Misc.
Start the td-agent service. Skip this step if Analytics is not initialized.
The following figure shows all services started.
On a multi-node Audit Store cluster, the certificate rotation must be performed on every node in the cluster. First, rotate the certificates on a Lead node, which is the Primary ESA, and then use the IP address of this Lead node while rotating the certificates on the remaining nodes in the cluster. The services mentioned in this section must be stopped on all the nodes, preferably at the same time with minimum delay during certificate rotation. After certificate rotation, the services that were stopped must be started again on the nodes in the reverse order.
Log in to the ESA Web UI.
Stop the required services.
Navigate to System > Services > Misc.
Stop the td-agent service. This step must be performed on all the other nodes followed by the Lead node. Skip this step if Analytics is not initialized.
On the ESA Web UI, navigate to System > Services > Misc.
Stop the Analytics service. This step must be performed on all the other nodes followed by the Lead node.
Navigate to System > Services > Audit Store.
Stop the Audit Store Management service. This step must be performed on all the other nodes followed by the Lead node.
Navigate to System > Services > Audit Store.
Stop the Audit Store Repository service.
Attention: This is a very important step and must be performed on all the other nodes followed by the Lead node without any delay. A delay in stopping the service on the nodes will result in that node receiving logs. This will lead to inconsistency in the logs across nodes and logs might be lost.
Run the Rotate Audit Store Certificates tool on the Lead node.
From the ESA CLI Manager of the Lead node, that is the primary ESA, navigate to Tools > Rotate Audit Store Certificates.
Enter the root password and select OK.
Enter the admin username and password and select OK.
Enter the IP address of the local machine in the Target Audit Store Address field and select OK.
After the rotation is completed without errors, the following screen appears. Select OK to go to the CLI menu screen.
The CLI screen appears.
Run the Rotate Audit Store Certificates tool on all the remaining nodes in the Audit Store cluster one node at a time.
From the ESA CLI Manager of a node in the cluster, navigate to Tools > Rotate Audit Store Certificates.
Enter the root password and select OK.
Enter the admin username and password and select OK.
Enter the IP address of the Lead node in Target Audit Store Address and select OK.
Enter the admin username and password for the Lead node and select OK.
After the rotation is completed without errors, the following screen appears. Select OK to go to the CLI menu screen.
The CLI screen appears.
Start the required services.
Navigate to System > Services > Audit Store.
Start the Audit Store Repository service.
Attention: This step must be performed on the Lead node followed by all the other nodes without any delay. A delay in starting the service on a node will result in some nodes receiving logs while others do not. This will lead to inconsistency in the logs across nodes and logs might be lost.
Navigate to System > Services > Audit Store.
Start the Audit Store Management service. This step must be performed on the Lead node followed by all the other nodes.
Navigate to Audit Store > Cluster Management and confirm that the Audit Store cluster is functional and the Audit Store cluster status is green or yellow as shown in the following figure.
Navigate to System > Services > Misc.
Start the Analytics service. This step must be performed on the Lead node followed by all the other nodes.
Navigate to System > Services > Misc.
Start the td-agent service. This step must be performed on the Lead node followed by all the other nodes. Skip this step if Analytics is not initialized.
The following figure shows all services that are started.
Verify that the Audit Store cluster is stable.
On the ESA Web UI, navigate to Audit Store > Cluster Management.
Verify that the nodes are still a part of the Audit Store cluster.
2.5.2 - Configuring the disk space on the Log Forwarder
If the incoming logs are cached faster than they can be sent to Insight, then back pressure builds up on the Log Forwarder.
The following formula can be used to calculate the disk space on the Log Forwarder. The formula requires the estimated audit rate and time to sustain the audit rate, without logs being sent to Insight. Modify the values in this example as required. The default value of the disk space is 256 MB.
Disk Space in megabytes = (Audit Rate × Time in Seconds × 5.9) / 1024
- Audit Rate = Number of policy audits generated per second
- Time in Seconds = Time duration for which the disk can sustain the audit rate without the logs being sent to Insight.
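For example, assuming an audit rate of 1,000 audits per second that must be buffered for 60 seconds without logs being sent to Insight, the required disk space is (1000 × 60 × 5.9) / 1024 ≈ 346 MB, so the default of 256 MB would need to be increased.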
If the default or the configured value of the storage.total_limit_size setting is reached, then the Log Forwarder discards the oldest audits to create disk space for new audits.
Perform the following steps to configure the storage.total_limit_size setting in the out.conf file on the protector machine.
Log in and open a CLI on the protector machine.
Navigate to the config.d directory using the following command.
cd /opt/protegrity/logforwarder/data/config.d
Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/data/config.d path. Use the /opt/protegrity/fluent-bit/data/config.d path for protectors v9.1.0.0 and earlier.
Back up the existing out.conf file using the following command.
cp out.conf out.conf_backup
Open the out.conf file using a text editor.
Update the value of storage.total_limit_size setting in the output blocks. The default value of the storage.total_limit_size is 256 MB. The following snippet shows the extract of the code.
[OUTPUT]
    Name opensearch
    Match logdata
    Retry_Limit False
    Index pty_insight_audit
    Type _doc
    Time_Key ingest_time_utc
    Upstream /opt/protegrity/logforwarder/data/config.d/upstream.cfg
    storage.total_limit_size 256M

[OUTPUT]
    Name opensearch
    Match flulog
    Retry_Limit 1
    Index pty_insight_audit
    Type _doc
    Time_Key ingest_time_utc
    Upstream /opt/protegrity/logforwarder/data/config.d/upstream.cfg
    storage.total_limit_size 256M

[OUTPUT]
    Name opensearch
    Match errorlog
    Retry_Limit 1
    Index pty_insight_audit
    Type _doc
    Time_Key ingest_time_utc
    Upstream /opt/protegrity/logforwarder/data/config.d/upstream.cfg
    storage.total_limit_size 256M
Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/data/config.d path. Use the /opt/protegrity/fluent-bit/data/config.d path for protectors v9.1.0.0 and earlier.
Save and close the file.
Restart the Log Forwarder on the protector using the following commands.
/opt/protegrity/logforwarder/bin/logforwarderctrl stop
/opt/protegrity/logforwarder/bin/logforwarderctrl start
Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/bin path. Use the /opt/protegrity/fluent-bit/bin path for protectors v9.1.0.0 and earlier.
If required, complete the configurations on the remaining protector machines.
2.5.3 - Updating configurations after changing the domain name
Before you begin:
Ensure that the following prerequisites are complete:
The ESA is configured to forward logs to Insight and the external SIEM.
For more information about forwarding logs to a SIEM, refer here.
The external syslog server is available and running.
If certificates are used, ensure that the certificates are updated with the required information.
For more information about updating the certificates, refer here.
Ensure that the hostname does not contain the dot (.) special character.
Perform the following steps to update the configuration:
Open the CLI Manager on the Primary ESA.
Log in to the CLI Manager of the Primary ESA.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Run the following command to update the configuration files.
/opt/protegrity/td-agent/scripts/update_bindaddress_td_agent_INPUT_forward_external.sh $(hostname)
The bind address in INPUT_forward_external.conf is updated with the hostname.domainname.
Restart the td-agent service.
Log in to the ESA Web UI.
Navigate to System > Services > Misc > td-agent.
Restart the td-agent service.
Complete the steps on the remaining ESAs where the domain name must be updated.
If td-agent is used to receive logs on the ESA, or if an external SIEM is used, then update the upstream.cfg file on the protector using the following steps.
Log in and open a CLI on the protector machine.
Navigate to the config.d directory using the following command.
cd /opt/protegrity/logforwarder/data/config.d
Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/data/config.d path. Use the /opt/protegrity/fluent-bit/data/config.d path for protectors v9.1.0.0 and earlier.
Back up the existing upstream.cfg file using the following command.
cp upstream.cfg upstream.cfg_updating_host_backup
Protectors v9.2.0.0 and later use the upstream.cfg file. Use the upstream_es.cfg file for protectors v9.1.0.0 and earlier.
Open the upstream.cfg file using a text editor.
Update the Host value with the updated IP address of the ESA.
The extract of the code is shown here:
[UPSTREAM]
    Name pty-insight-balancing

[NODE]
    Name node-1
    Host <IP address of the ESA>
    Port 24284
    tls on
    tls.verify off
The code shows information updated for one node. If multiple nodes are present, then ensure that the information is updated for all the nodes.
Do not leave any trailing spaces or line breaks at the end of the file.
Protectors v9.2.0.0 and later use the Name parameter as pty-insight-balancing. Use the Name parameter as pty-es-balancing for protectors v9.1.0.0 and earlier.
Save and close the file.
Restart logforwarder on the Protector using the following commands.
/opt/protegrity/logforwarder/bin/logforwarderctrl stop
/opt/protegrity/logforwarder/bin/logforwarderctrl start
Protectors v9.2.0.0 and later use the /opt/protegrity/logforwarder/bin path. Use the /opt/protegrity/fluent-bit/bin path for protectors v9.1.0.0 and earlier.
Complete the configurations on the remaining protector machines.
2.5.4 - Updating the IP address of the ESA
Perform the steps on one system at a time if multiple ESAs must be updated.
Updating the IP address on the Primary ESA
Update the ESA configuration of the Primary ESA. This is the designated ESA that is used to log in for performing all configurations. It is also the ESA that is used to create and deploy policies.
Perform the following steps to refresh the configurations:
Recreate the Docker containers using the following steps.
Open the OS Console on the Primary ESA.
- Log in to the CLI Manager on the Primary ESA.
- Navigate to Administration > OS Console.
- Enter the root password.
Stop the containers using the following commands.
/etc/init.d/asrepository stop
/etc/init.d/asdashboards stop
Remove the containers using the following commands.
/etc/init.d/asrepository remove
/etc/init.d/asdashboards remove
Update the IP address in the config.yml configuration file.
In the OS Console, navigate to the /opt/protegrity/auditstore/config/security directory.
cd /opt/protegrity/auditstore/config/security
Open the config.yml file using a text editor.
Locate the internalProxies: attribute and update the IP address value for the ESA.
Save and close the file.
Start the containers using the following commands.
/etc/init.d/asrepository start
/etc/init.d/asdashboards start
Update the IP address in the asd_api_config.json configuration file.
In the OS Console, navigate to the /opt/protegrity/insight/analytics/config directory.
cd /opt/protegrity/insight/analytics/config
Open the asd_api_config.json file using a text editor.
Locate the x_forwarded_for attribute and update the IP address value for the ESA.
Save and close the file.
Rotate the Audit Store certificates on the Primary ESA.
For the steps to rotate Audit Store certificates, refer here.
Use the IP address of the local node, which is the Primary ESA and the Lead node, while rotating the certificates.
Monitor the cluster status.
Log in to the Web UI of the Primary ESA.
Navigate to Audit Store > Cluster Management.
Wait till the following updates are visible on the Overview page.
- The IP address of the Primary ESA is updated.
- All the nodes are visible in the cluster.
- The health of the cluster is green.
Alternatively, monitor the log files for any errors by logging into the ESA Web UI, navigating to Logs > Appliance, and selecting the following files from the Enterprise-Security-Administrator - Event Logs list:
- insight_analytics
- asmanagement
- asrepository
Updating the IP Address on the Secondary ESA
Ensure that the IP address of the ESA has been updated. Perform the steps on one system at a time if multiple ESAs must be updated.
Perform the following steps to refresh the configurations:
Recreate the Docker containers using the following steps.
Open the OS Console on the Secondary ESA.
- Log in to the CLI Manager on the Secondary ESA.
- Navigate to Administration > OS Console.
- Enter the root password.
Stop the containers using the following commands.
/etc/init.d/asrepository stop
/etc/init.d/asdashboards stop
Remove the containers using the following commands.
/etc/init.d/asrepository remove
/etc/init.d/asdashboards remove
Update the IP address in the config.yml configuration file.
In the OS Console, navigate to the /opt/protegrity/auditstore/config/security directory.
cd /opt/protegrity/auditstore/config/security
Open the config.yml file using a text editor.
Locate the internalProxies: attribute and update the IP address value for the ESA.
Save and close the file.
Start the containers using the following commands.
/etc/init.d/asrepository start
/etc/init.d/asdashboards start
Update the IP address in the asd_api_config.json configuration file.
In the OS Console, navigate to the /opt/protegrity/insight/analytics/config directory.
cd /opt/protegrity/insight/analytics/config
Open the asd_api_config.json file using a text editor.
Locate the x_forwarded_for attribute and update the IP address value for the ESA.
Save and close the file.
Rotate the Audit Store certificates on the Secondary ESA. Perform the steps on the Secondary ESA. However, use the IP address of the Primary ESA, which is the Lead node, for rotating the certificates.
For the steps to rotate Audit Store certificates, refer here.
Monitor the cluster status.
Log in to the Web UI of the Primary ESA.
Navigate to Audit Store > Cluster Management.
Wait till the following updates are visible on the Overview page.
- The IP address of the Secondary ESA is updated.
- All the nodes are visible in the cluster.
- The health of the cluster is green.
Alternatively, monitor the log files for any errors by logging into the ESA Web UI, navigating to Logs > Appliance, and selecting the following files from the Enterprise-Security-Administrator - Event Logs list:
- insight_analytics
- asmanagement
- asrepository
2.5.5 - Updating the host name or domain name of the ESA
Updating the host name or domain name on the Primary ESA
Update the configurations of the Primary ESA. This is the designated ESA that is used to log in for performing all configurations. It is also the ESA that is used to create and deploy policies.
Ensure that the host name or domain name of the ESA has been updated. Ensure that the host name does not contain the dot (.) special character.
Perform the steps on one system at a time if multiple ESAs must be updated.
Perform the following steps to refresh the configurations:
Update the host name or domain name in the configuration files.
Open the OS Console on the Primary ESA.
- Log in to the CLI Manager on the Primary ESA.
- Navigate to Administration > OS Console.
- Enter the root password.
Update the repository.json file for the Audit Store configuration.
Navigate to the /opt/protegrity/auditstore/management/config directory.
cd /opt/protegrity/auditstore/management/config
Open the repository.json file using a text editor.
Locate and update the hosts attribute with the new host name and domain name as shown in the following example.
"hosts": [ "protegrity-esa123.protegrity.com" ]
Save and close the file.
Update the repository.json file for the Analytics configuration.
Navigate to the /opt/protegrity/insight/analytics/config directory.
cd /opt/protegrity/insight/analytics/config
Open the repository.json file using a text editor.
Locate and update the hosts attribute with the new host name and domain name as shown in the following example.
"hosts": [ "protegrity-esa123.protegrity.com" ]
Save and close the file.
Update the opensearch.yml file for the Audit Store configuration.
Navigate to the /opt/protegrity/auditstore/config directory.
cd /opt/protegrity/auditstore/config
Open the opensearch.yml file using a text editor.
Locate and update the node.name, network.host, and the http.host attributes with the new host name and domain name as shown in the following example. Update the node.name only with the host name. If required, uncomment the line by deleting the number sign (#) character at the start of the line.
... <existing code> ...
node.name: protegrity-esa123
... <existing code> ...
network.host:
  - protegrity-esa123.protegrity.com
... <existing code> ...
http.host:
  - protegrity-esa123.protegrity.com
Save and close the file.
Update the opensearch_dashboards.yml file for the Audit Store Dashboards configuration.
Navigate to the /opt/protegrity/auditstore_dashboards/config directory.
cd /opt/protegrity/auditstore_dashboards/config
Open the opensearch_dashboards.yml file using a text editor.
Locate and update the opensearch.hosts attribute with the new host name and domain name as shown in the following example.
opensearch.hosts: [ "https://protegrity-esa123.protegrity.com:9200" ]
Save and close the file.
Update the OUTPUT.conf file for the td-agent configuration.
Navigate to the /opt/protegrity/td-agent/config.d directory.
cd /opt/protegrity/td-agent/config.d
Open the OUTPUT.conf file using a text editor.
Locate and update the hosts attribute with the new host name and domain name as shown in the following example.
hosts protegrity-esa123.protegrity.com
Save and close the file.
Update the INPUT_forward_external.conf file for the external SIEM configuration. This step is required only if an external SIEM is used.
Navigate to the /opt/protegrity/td-agent/config.d directory.
cd /opt/protegrity/td-agent/config.d
Open the INPUT_forward_external.conf file using a text editor.
Locate and update the bind attribute with the new host name and domain name as shown in the following example.
bind protegrity-esa123.protegrity.com
Save and close the file.
Recreate the Docker containers using the following steps.
Open the OS Console on the Primary ESA, if it is not opened.
- Log in to the CLI Manager on the Primary ESA.
- Navigate to Administration > OS Console.
- Enter the root password.
Stop the containers using the following commands.
/etc/init.d/asrepository stop
/etc/init.d/asdashboards stop
Remove the containers using the following commands.
/etc/init.d/asrepository remove
/etc/init.d/asdashboards remove
Start the containers using the following commands.
/etc/init.d/asrepository start
/etc/init.d/asdashboards start
Rotate the Audit Store certificates on the Primary ESA. Use the IP address of the local node, which is the Primary ESA and the Lead node, while rotating the certificates.
For the steps to rotate Audit Store certificates, refer here.
Update the unicast_hosts.txt file for the Audit Store configuration.
Open the OS Console on the Primary ESA.
Navigate to the /opt/protegrity/auditstore/config directory using the following command.
cd /opt/protegrity/auditstore/config
Open the unicast_hosts.txt file using a text editor.
Locate and update the host name and domain name.
protegrity-esa123
protegrity-esa123.protegrity.com
Save and close the file.
Monitor the cluster status.
Log in to the Web UI of the Primary ESA.
Navigate to Audit Store > Cluster Management.
Wait till the following updates are visible on the Overview page.
- The IP address of the Primary ESA is updated.
- All the nodes are visible in the cluster.
- The health of the cluster is green.
It is possible to monitor the log files for any errors by logging into the ESA Web UI, navigating to Logs > Appliance, and selecting the following files from the Enterprise-Security-Administrator - Event Logs list:
- insight_analytics
- asmanagement
- asrepository
Updating the host name or domain name on the Secondary ESA
Update the configurations of the Secondary ESA after the host name or domain name of the ESA has been updated.
Perform the steps on one system at a time if multiple ESAs must be updated.
Perform the following steps to refresh the configurations:
Update the host name or domain name in the configuration files.
Open the OS Console on the Secondary ESA.
- Log in to the CLI Manager on the Secondary ESA.
- Navigate to Administration > OS Console.
- Enter the root password.
Update the repository.json file for the Audit Store configuration.
Navigate to the /opt/protegrity/auditstore/management/config directory.
cd /opt/protegrity/auditstore/management/config
Open the repository.json file using a text editor.
Locate and update the hosts attribute with the new host name and domain name as shown in the following example.
"hosts": [ "protegrity-esa456.protegrity.com" ]
Save and close the file.
Update the repository.json file for the Analytics configuration.
Navigate to the /opt/protegrity/insight/analytics/config directory.
cd /opt/protegrity/insight/analytics/config
Open the repository.json file using a text editor.
Locate and update the hosts attribute with the new host name and domain name as shown in the following example.
"hosts": [ "protegrity-esa456.protegrity.com" ]
Save and close the file.
Update the opensearch.yml file for the Audit Store configuration.
Navigate to the /opt/protegrity/auditstore/config directory.
cd /opt/protegrity/auditstore/config
Open the opensearch.yml file using a text editor.
Locate and update the node.name, network.host, and the http.host attributes with the new host name and domain name as shown in the following example. Update the node.name only with the host name. If required, uncomment the line by deleting the number sign (#) character at the start of the line.
... <existing code> ...
node.name: protegrity-esa456
... <existing code> ...
network.host:
  - protegrity-esa456.protegrity.com
... <existing code> ...
http.host:
  - protegrity-esa456.protegrity.com
Save and close the file.
Update the opensearch_dashboards.yml file for the Audit Store Dashboards configuration.
Navigate to the /opt/protegrity/auditstore_dashboards/config directory.
cd /opt/protegrity/auditstore_dashboards/config
Open the opensearch_dashboards.yml file using a text editor.
Locate and update the opensearch.hosts attribute with the new host name and domain name as shown in the following example.
opensearch.hosts: [ "https://protegrity-esa456.protegrity.com:9200" ]
Save and close the file.
Update the OUTPUT.conf file for the td-agent configuration.
Navigate to the /opt/protegrity/td-agent/config.d directory.
cd /opt/protegrity/td-agent/config.d
Open the OUTPUT.conf file using a text editor.
Locate and update the hosts attribute with the new host name and domain name as shown in the following example.
hosts protegrity-esa456.protegrity.com
Save and close the file.
Update the INPUT_forward_external.conf file for the external SIEM configuration. This step is required only if an external SIEM is used.
Navigate to the /opt/protegrity/td-agent/config.d directory.
cd /opt/protegrity/td-agent/config.d
Open the INPUT_forward_external.conf file using a text editor.
Locate and update the bind attribute with the new host name and domain name as shown in the following example.
bind protegrity-esa456.protegrity.com
Save and close the file.
Recreate the Docker containers using the following steps.
Open the OS Console on the Secondary ESA, if it is not opened.
- Log in to the CLI Manager on the Secondary ESA.
- Navigate to Administration > OS Console.
- Enter the root password.
Stop the containers using the following commands.
/etc/init.d/asrepository stop
/etc/init.d/asdashboards stop
Remove the containers using the following commands.
/etc/init.d/asrepository remove
/etc/init.d/asdashboards remove
Start the containers using the following commands.
/etc/init.d/asrepository start
/etc/init.d/asdashboards start
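To confirm that the containers came back up, the running containers can be listed from the same console. This is only a sketch; the exact container names depend on the installed build.
# List running containers; the Audit Store repository and dashboards containers should show a recent "Up" status
docker ps --format 'table {{.Names}}\t{{.Status}}'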
Rotate the Audit Store certificates on the Secondary ESA. When performing the certificate rotation steps on the Secondary ESA, use the IP address of the Primary ESA, which is the Lead node.
For the steps to rotate Audit Store certificates, refer here.
Update the unicast_hosts.txt file for the Audit Store configuration.
Open the OS Console on the Primary ESA.
Navigate to the /opt/protegrity/auditstore/config directory using the following command.
cd /opt/protegrity/auditstore/config
Open the unicast_hosts.txt file using a text editor.
Locate and update the host name and domain name.
protegrity-esa456 protegrity-esa456.protegrity.com
Save and close the file.
Monitor the cluster status.
Log in to the Web UI of the Primary ESA.
Navigate to Audit Store > Cluster Management.
Wait until the following updates are visible on the Overview page.
- The IP address of the Secondary ESA is updated.
- All the nodes are visible in the cluster.
- The health of the cluster is green.
Monitor the log files for any errors by logging into the ESA Web UI, navigating to Logs > Appliance, and selecting the following files from the Enterprise-Security-Administrator - Event Logs list:
- insight_analytics
- asmanagement
- asrepository
2.5.6 - Updating Insight custom certificates
These steps are applicable only for custom certificates and keys. For rotating Protegrity certificates, refer here.
For more information about certificates, refer here.
Rotate custom certificates on the Audit Store cluster that has a single node in the cluster using the steps provided here.
On a multi-node Audit Store cluster, the certificate rotation must be performed on every node in the cluster. First, rotate the certificates on a Lead node, which is the Primary ESA, and then use the IP address of this Lead node while rotating the certificates on the remaining nodes in the cluster. The services mentioned in this section must be stopped on all the nodes, preferably at the same time with minimum delay during certificate rotation. After updating the certificates, the services that were stopped must be started again on the nodes in the reverse order.
Log in to the ESA Web UI.
Navigate to System > Services > Misc.
Stop the td-agent service. This step must be performed on all the other nodes followed by the Lead node.
On the ESA Web UI, navigate to System > Services > Misc.
Stop the Analytics service. This step must be performed on all the other nodes followed by the Lead node. The other nodes might not have Analytics installed. In this case, skip this step on those nodes.
Navigate to System > Services > Audit Store.
Stop the Audit Store Management service. This step must be performed on all the other nodes followed by the Lead node.
Navigate to System > Services > Audit Store.
Stop the Audit Store Repository service.
Attention: This is a very important step and must be performed on all the other nodes followed by the Lead node without any delay. A delay in stopping the service on the nodes will result in that node receiving logs. This will lead to inconsistency in the logs across nodes and logs might be lost.
Apply the custom certificates on the Lead ESA node.
For more information about certificates, refer here.
Complete any one of the following steps on the remaining nodes in the Audit Store cluster.
Apply the custom certificates on the remaining nodes in the Audit Store cluster.
For more information about certificates, refer here.
Run the Rotate Audit Store Certificates tool on all the remaining nodes in the Audit Store cluster one node at a time.
Log in to the ESA CLI Manager of a node in the Audit Store cluster.
Navigate to Tools > Rotate Audit Store Certificates.
Enter the root password and select OK.
Enter the admin username and password and select OK.
Enter the IP address of the Lead node in Target Audit Store Address and select OK.
Enter the admin username and password for the Lead node and select OK.
After the rotation is completed without errors, the following screen appears. Select OK to go to the CLI menu screen.
The CLI screen appears.
Navigate to System > Services > Audit Store.
Start the Audit Store Repository service.
Attention: This step must be performed on the Lead node followed by all the other nodes without any delay. A delay in starting the services on the nodes will result in that node receiving logs. This will lead to inconsistency in the logs across nodes and logs might be lost.
Navigate to System > Services > Audit Store.
Start the Audit Store Management service. This step must be performed on the Lead node followed by all the other nodes.
Navigate to Audit Store > Cluster Management and confirm that the Audit Store cluster is functional and the Audit Store cluster status is green or yellow as shown in the following figure.
Navigate to System > Services > Misc.
Start the Analytics service. This step must be performed on the Lead node followed by all the other nodes. The other nodes might not have Analytics installed. In this case, skip this step on those nodes.
Navigate to System > Services > Misc.
Start the td-agent service. This step must be performed on the Lead node followed by all the other nodes.
The following figure shows all services that are started.
On the ESA Web UI, navigate to Audit Store > Cluster Management.
Verify that the nodes are still a part of the Audit Store cluster.
2.5.7 - Removing an ESA from the Audit Store cluster
Before you begin:
Verify if the scheduler task jobs are enabled on the ESA using the following steps:
- Log in to the ESA Web UI.
- Navigate to System > Task Scheduler.
- Verify whether the following tasks are enabled on the ESA.
- Update Policy Status Dashboard
- Update Protector Status Dashboard
Perform the following steps to remove the ESA node:
From the ESA Web UI, click Audit Store > Cluster Management to open the Audit Store clustering page.
The Overview screen appears.
Click Leave Cluster.
A confirmation dialog box appears. The Audit Store cluster information is updated when a node leaves the Audit Store cluster. Hence, nodes must be removed from the Audit Store cluster one at a time. Removing multiple nodes from the Audit Store cluster at the same time using the ESA Web UI would lead to errors.
Click YES.
The ESA is removed from the Audit Store cluster. The Leave Cluster button is disabled and the Join Cluster button is enabled. The process takes time to complete. Stay on the same page and do not navigate to any other page while the process is in progress.
If the scheduler task jobs were enabled on the ESA that was removed, then enable the scheduler task jobs on another ESA in the Audit Store cluster. These tasks must be enabled on any one ESA in the Audit Store cluster. Enabling on multiple nodes might result in a loss of data.
- Log in to the ESA Web UI of any one node in the Audit Store cluster.
- Navigate to System > Task Scheduler.
- Enable the following tasks by selecting the task, clicking Edit, selecting the Enable check box, and clicking Save.
- Update Policy Status Dashboard
- Update Protector Status Dashboard
- Click Apply.
- Specify the root password and click OK.
After leaving the Audit Store cluster, the node's configuration and data are reset and the node is uninitialized. Before using the node again, initialize Protegrity Analytics on the node or add the node to another Audit Store cluster.
2.6 - Identifying the protector version
Perform the following steps to identify the PEP server version of the protector:
Log in to the ESA.
Navigate to Policy Management > Nodes.
View the Version field for all the protectors.
3 - Upgrading ESA to v10.1.0
3.1 - System and License Requirements
The following table lists the supported components and their compatibility settings.
Component | Compatibility |
---|---|
Application Protocols | HTTP 1.0, HTTP 1.1, SSL/TLS |
WebServices | SOAP 1.1 and WSDL 1.1 |
Web Browsers | Minimum supported Web Browser versions are as follows: - Google Chrome version 129.0.6668.58/59 (64-bit) - Mozilla Firefox version 130.0.1 (64-bit) or higher - Microsoft Edge version 128.0.2739.90 (64-bit) |
The following table lists the minimum hardware configurations.
Hardware Components | Configuration |
---|---|
CPU | Multicore Processor, with minimum 8 CPUs |
RAM | 32 GB |
Hard Disk | 320 GB |
CPU Architecture | x86 |
The following partition spaces must be available.
Partition | Minimum Space Required |
---|---|
OS(/) | 40% |
/opt | Twice the patch size |
/var/log | 20% |
The space used in the OS(/) partition should not be more than 60%. If the space used is more than 60%, then you must clean up the OS(/) partition before proceeding with the patch installation process. For more information about cleaning up the OS(/) partition, refer here.
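As a quick way to verify these requirements from the OS Console before starting the installation, the current usage of the relevant partitions can be checked; this is a minimal sketch and the thresholds are the ones listed above.
# Check free space; usage of the OS(/) partition should not exceed 60%
df -h / /opt /var/log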
Software Requirements
Ensure that the software requirements are met before upgrading the appliance.
- The ESA must be on v10.0.1.
- At least three ESAs must be in a Trusted Appliance Cluster (TAC).
- At least three ESAs must be in the Audit Store Cluster.
- If logs are forwarded to an external syslog server, then ensure that the syslog server is running during the upgrade.
Important: If only two ESAs are available in the Audit Store cluster, then remove the secondary ESA from the cluster. Upgrade the primary ESA, then upgrade the secondary ESA, and add the secondary ESA back to the Audit Store cluster. However, a minimum of three ESAs are recommended for the Audit Store cluster.
Installation Requirements
The ESA_PAP-ALL-64_x86-64_10.1.0.P.2467.pty patch file is available.
Ensure that you download the latest patch for the respective version from the My.Protegrity portal.
For more information about the latest build number and the patch details, refer to the Release Notes of the respective patch.
Licensing Requirements
Ensure that a valid license is available before upgrading. After migration, if the license status is invalid, then contact Protegrity Support.
3.2 - Upgrade Paths to ESA v10.1.0
The x in a version number indicates all the available hotfix and security patches on that platform version.
For example, to upgrade from the ESA v9.1.0.x to the ESA v10.1.0, install the patches as follows:
- ESA v9.2.0.0
- ESA v9.2.0.1
- ESA v10.0.1
- ESA v10.1.0
For more information about upgrading the ESA v10.0.1 to v10.1.0, refer here.
Before installing any patch, refer to the Release Notes from the My.Protegrity portal.
The following table provides the recommended upgrade paths to the ESA v10.1.0.
Current Version | Path to Upgrade the ESA to v10.1.0 |
---|---|
10.0.1 | Install the v10.1.0 patch. |
9.2.0.1 | 1. Install the v10.0.1 patch. 2. Install the v10.1.0 patch. |
9.2.0.0 | 1. Install the v9.2.0.1 patch. 2. Install the v10.0.1 patch. 3. Install the v10.1.0 patch. |
9.1.0.x | 1. Install the v9.2.0.0 patch. 2. Install the v9.2.0.1 patch. 3. Install the v10.0.1 patch. 4. Install the v10.1.0 patch. |
9.0.0.0 | 1. Install the v9.1.0.x patch. 2. Install the v9.2.0.0 patch. 3. Install the v9.2.0.1 patch. 4. Install the v10.0.1 patch. 5. Install the v10.1.0 patch. |
To check the current version of the ESA:
- On the ESA Web UI, navigate to System > Information. You can view the current patch installed on the ESA.
- Navigate to the About page to view the current version of the ESA.
For more information about:
- Upgrading the previous ESA versions, refer to the Protegrity Data Security Platform Upgrade Guide for the respective versions on the My.Protegrity portal.
- Applying the DSG patch on the ESA, refer to Extending ESA with DSG Web UI in the Protegrity Data Security Gateway User Guide for the respective version.
3.3 - Prerequisites
Verifying the License Status
Before upgrading the ESA, ensure that the license is not expired or invalid.
An expired or invalid license blocks the policy services on the ESA and the DevOps APIs. New or existing protectors will not receive any policies until a valid license is applied.
For more information about the license, refer to Protegrity Data Security Platform Licensing.
Configuring Keys and HSM
If the security keys, such as the master key or repository key, have expired or are due to expire within 30 days, then the upgrade fails. Rotate the keys before performing the upgrade. Additionally, ensure that the keys are active and in a running state.
For more information about rotating keys, refer to Working with Keys.
If you are using an HSM, ensure that the HSM is accessible and running.
For more information about HSM, refer to the corresponding HSM vendor document.
If the prerequisites are not met, the ESA upgrade process fails. In such a case, it is required to restore the ESA to its previous stable version.
Accounts
The administrative account used for upgrading the ESA must be active.
Backup and Restore
The OS backup procedure is performed to back up files, OS settings, policy information, and user information. Ensure that the latest backup is available before upgrading to the latest version.
If the patch installation fails, then you can revert the changes to a previous version. Ensure that you back up the complete OS or export the required files before initiating the patch installation process.
For more information about backup and restore, refer here.
- Ensure that you perform the backup on each ESA separately. The IP settings will cause an issue if the same backup is used to restore different nodes.
- Back up specific components of your appliance using the File Export option. Ensure that you create a backup of the Policy Management data, Directory Server settings, Appliance OS Configuration, Export Gateway Configuration Files, and so on.
- While upgrading an ESA with the DSG installed, select the Export Gateway Configuration Files option and perform the export operation.
Full OS backup
The entire OS must be backed up to prevent data loss. This allows the OS to be reverted to a previous stable configuration in case of a patch installation failure. This option is available only for the on-premise deployments.
Perform the following steps to back up the full OS configuration:
- Log in to the ESA Web UI.
- Navigate to System > Backup & Restore > OS Full to back up the full OS.
- Click Backup.
The backup process is initiated. After the OS Backup process is completed, a notification message appears on the ESA Web UI Dashboard.
Exporting data/configuration to remote appliance
Backup configurations can be exported to a remote appliance.
The following scenario illustrates the steps performed for a successful export of the backup configuration.
- Log in to the CLI Manager.
- Navigate to Administration > Backup/Restore Center.
- Enter the root password and select OK. The Backup Center dialog box appears.
- From the menu, select the Export data/configurations to a remote appliance(s) option and select OK.
- From the Select file/configuration to export dialog box, select Current (Active) Appliance Configuration package to export and select OK.
- Select the packages to export and select OK.
- Select the Import method. For more information on each import method, select Help.
- Type the IP address or hostname for the destination appliance.
- Type the administrative credentials of the remote appliance and select Add.
- In the information dialog box, press OK. The Backup Center screen appears.
Avoid importing all network settings to another machine. This action will create two machines with the same IP in the network. It is recommended to restart the appliance after receiving an appliance core configuration backup.
This item shows up only when exporting to a file.
Creating a snapshot for cloud-based services
A snapshot represents a state of an instance or disk at a point in time. You can use a snapshot of an instance or a disk to backup and restore information in case of failures. Ensure that you have the latest snapshot before upgrading the ESA.
You can create a snapshot of an instance or a disk on the AWS, Azure, and GCP platforms.
Validating Custom Configuration Files
Complete the following steps if you modified any configuration files.
- Review the contents of any configuration files. Verify that the code in the configuration file is formatted properly. Ensure that there are no additional spaces, tabs, line breaks, or control characters in the configuration file.
- Validate that the backup files are created with the details appended to the extension, for example, .conf_backup or .conf_bkup123.
- Back up any custom configuration files or modified configuration files. If required, use the backup files to restore settings after the upgrade is complete.
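A lightweight way to run these checks from the OS Console is sketched below; the file path is a placeholder for your modified configuration file, and the backup suffix follows the convention described above.
# Flag tabs, carriage returns, and trailing spaces with line numbers
grep -nP '\t|\r| +$' /path/to/custom.conf
# Keep a copy with details appended to the extension before the upgrade
cp /path/to/custom.conf /path/to/custom.conf_backup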
Trusted Appliance Cluster (TAC)
While upgrading an ESA appliance that is in a TAC setup, delete the cluster scheduled tasks and then remove the ESA appliance from the TAC.
For more information about TAC, refer here.
Deleting a Scheduled Task
Perform the following steps to delete a scheduled task:
- From the ESA Web UI, navigate to System > Task Scheduler. The Task Scheduler page displays the list of available tasks.
- Select the required task.
- Select Remove. A confirmation message to remove the scheduled task appears.
- Click OK.
- Select Apply to save the changes.
- Enter the root password and select OK. The task is deleted successfully.
Removing a Node from the Cluster
While upgrading an ESA appliance that is in a Trusted Appliance Cluster (TAC) setup, remove the ESA appliance from the TAC and then apply the upgrade patch.
If a node is associated with a cluster task, then the Leave Cluster operation does not remove the node from the cluster. Ensure to delete all such tasks before removing any node from the cluster.
Perform the following steps to remove a node from a cluster:
- From the ESA Web UI of the node that you want to remove from the cluster, navigate to System > Trusted Appliances Cluster. The screen displaying the cluster nodes appears.
- Navigate to Management > Leave Cluster. A confirmation message appears.
- Select OK. The node is removed from the cluster.
For more information about TAC, refer here.
Disabling the Audit Store Cluster Task
Perform the following steps to disable the task:
- Log in to the ESA Web UI.
- Navigate to System > Task Scheduler.
- Select the Audit Store Management - Cluster Config - Sync task.
- Click Edit.
- Clear the Enable check box.
- Click Save.
- Click Apply.
- Enter the root password and click OK.
- Repeat the steps on all the nodes in the Audit Store cluster.
Disabling Rollover Index Task
Perform the following steps to disable the Rollover Index task:
Log in to the ESA Web UI on any of the nodes in the Audit Store cluster.
Navigate to Audit Store > Analytics > Scheduler.
Click Enable for the Rollover Index task.
The slider moves to the off position and turns grey.
Enter the root password and click Submit to apply the updates.
Repeat steps 1-4 on all nodes in the Audit Store cluster, if required.
3.4 - Upgrading from v10.0.1
Before you begin
Ensure that you upgrade the ESA prior to upgrading the protectors.
Ensure that the ESA nodes in the Audit Store cluster are upgraded one at a time.
If only two ESAs are available in the Audit Store cluster, then remove the secondary ESA from the cluster. Upgrade the primary ESA, then upgrade the secondary ESA, and add the secondary ESA back to the Audit Store cluster. However, a minimum of three ESAs are recommended for the Audit Store cluster.
Uploading and Installing the ESA patch
The ESA patch can be uploaded using the Web UI or the CLI Manager but the patch should only be installed using the CLI Manager.
Uploading the patch using the Web UI
Perform the following steps to upload the patch from the Web UI:
Log in to the ESA Web UI with administrator credentials.
Navigate to Settings > System > File Upload. The File Upload page appears.
In the File Selection section, click Choose File. The file upload dialog box appears.
Select the patch file and click Open.
- You can only upload files with .pty and .tgz extensions.
- If the file uploaded exceeds the Max File Upload Size, then a password prompt appears. Only a user with the administrative role can perform this action. Enter the password and click Ok.
- By default, the Max File Upload Size value is set to 25 MB. To increase this value, refer here.
Click Upload.
After the file is uploaded successfully, choose the uploaded patch from the Uploaded Files area. The information for the selected patch appears.
Uploading the patch using the CLI Manager
Perform the following steps to upload the patch from the CLI Manager:
- Log in to the ESA CLI Manager with administrator credentials.
- Navigate to Administration > OS Console to upload the patch.
Enter the root password and click OK.
- Upload the patch to the /opt/products_uploads directory using the FTP or SCP command.
The patch file is uploaded.
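For example, the patch can be copied from a workstation over SCP; this is a sketch that assumes SSH access to the ESA with a user that can write to the upload directory, and the host name is a placeholder.
# Copy the patch file into the ESA upload directory
scp ESA_PAP-ALL-64_x86-64_10.1.0.P.2467.pty <user>@<esa-host>:/opt/products_uploads/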
Installing the ESA patch from CLI Manager
Perform the following steps to install the patch from the CLI Manager:
Log in to the ESA CLI Manager with administrator credentials.
Navigate to Administration > Patch Management to install the patch.
Enter the root password and click OK.
Select Install a Patch.
Select the ESA_PAP-ALL-64_x86-64_10.1.0.P.2467.pty patch file and select Install.
- For more information about the latest build number and the patch details, refer to the Release Notes of the respective patch.
After the patch is installed, select Reboot Now.
This screen has a timeout of 60 seconds. If Reboot Now is not selected manually, then the system automatically reboots after 60 seconds.
After the reboot is initiated, the message Patch has been installed successfully !! appears. Select Exit.
The patch is installed and the ESA is upgraded to v10.1.0.
Verifying the ESA Patch Installation
Perform the following steps to verify the patch installation:
- Log in to the ESA CLI Manager.
- Navigate to Administration > Patch Management.
- Enter the root password.
- Select List installed patches. The ESA_10.1.0 patch name appears.
- Log in to the ESA Web UI.
- Navigate to the System > Information page.
The ESA is upgraded to v10.1.0. The upgraded patch on the ESA is displayed in the Installed Patches section.
3.5 - Post upgrade steps
Before you begin
If only two ESAs are available in the Audit Store cluster, then remove the secondary ESA from the cluster, upgrade the primary ESA, upgrade the secondary ESA, and add the secondary ESA to the cluster. However, a minimum of three ESAs are recommended for the Audit Store cluster.
Upgrade each ESA to v10.1.0. After upgrading all the ESA appliances, add the ESA appliances to the TAC, and then create the scheduled tasks.
Verifying Upgrade Logs
During the upgrade process, logs describing the status of the upgrade process are generated. The logs describe the services that are initiated, restarted, or the errors generated.
To view the logs under the /var/log directory, navigate to Administration > OS Console from the CLI Manager.
- patch_ESA_<version>.log - Provides the logs for the upgrade.
- syslog - Provides collective information about the syslogs.
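For example, the upgrade log can be scanned for problems directly from the OS Console; the exact file name depends on the installed version, so the name below is illustrative.
# Search the upgrade log for errors or warnings, then review the full log if anything is reported
grep -inE 'error|warn' /var/log/patch_ESA_10.1.0.log
less /var/log/patch_ESA_10.1.0.log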
Enabling Rollover Index Task
Perform the following steps to enable Rollover Index task after upgrading all the ESA appliances:
Log in to the ESA Web UI on any of the nodes in the Audit Store cluster.
Navigate to Audit Store > Analytics > Scheduler.
Click Enable for the Rollover Index task. The slider moves to the on position and turns blue.
Enter the root password and click Submit to apply the updates.
Repeat steps 1-4 on all nodes in the Audit Store cluster, if required.
Enabling the Audit Store Management - Cluster Config - Sync task
Enable the Audit Store Management - Cluster Config - Sync task on all nodes after upgrading all the ESAs.
Log in to the ESA Web UI.
Navigate to System > Task Scheduler.
Click the Audit Store Management - Cluster Config - Sync task and click Edit.
Select the Enable check box for the Audit Store Management - Cluster Config - Sync task.
Click Save and then click Apply after performing the required changes.
Enter the root password and select OK.
Repeat the steps on all the nodes in the Audit Store cluster.
Joining nodes in ESA cluster
Perform this step only after upgrading all the ESA appliances.
To add the ESAs in a TAC setup using the ESA Web UI:
- From the primary ESA, create a TAC.
- Join all secondary ESAs one-by-one in the TAC.
Creating a Cluster Scheduled Task
Ensure to create a cluster scheduled task, only after upgrading all the ESA appliances.
Perform the following steps to create a cluster scheduled task:
From the ESA Web UI, navigate to System > Backup & Restore > Export.
Under Export, select the Cluster Export radio button.
Click Start Wizard.
The Wizard - Export Cluster screen appears.
In the Data to import section, customize the items that you need to export from this machine and import to the cluster nodes.
If the configurations must be exported on a different ESA, then clear the Certificates check box. For information about copying Insight certificates across systems, refer to Rotating Insight certificates.
Click Next.
In the Source Cluster Nodes, select the nodes that will run this task.
You can specify them by label or select individual nodes.
Click Next.
In the Target Cluster Nodes, select the nodes to import the data.
Click Review.
The New Task screen appears.
Enter the required information in the following sections.
- Basic Properties
- Frequencies
- Restriction
- Logging
Click Save.
A new scheduled task is created. Click Apply to apply the modifications to the task.
A dialog box to enter the root user password appears. Enter the root password and click OK.
The scheduled task is operational. Click Run Now to run the scheduled task immediately.
Optional: Restoring the Configuration for the External SIEM
If an external SIEM is used and logs are forwarded to the external SIEM, perform the following steps to restore the configuration:
Skip this step if you are upgrading from the ESA v10.0.x. However, this step is required when upgrading from earlier versions.
From the CLI Manager of the Primary ESA, perform the following steps.
Log in to the CLI Manager of the Primary ESA.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Update the configuration files.
Navigate to the config.d directory.
cd /opt/protegrity/td-agent/config.d
Identify the hostname of the machine.
- Open the OUTPUT.conf file using a text editor.
- Locate the hosts parameter in the file.
- Make a note of the hosts configuration value.
- Close the file.
Back up the existing OUTPUT.conf file.
mv OUTPUT.conf OUTPUT.conf_from_<build_number>
The system uses the .conf extension. To avoid configuration conflicts, ensure that the backup file name has the details appended to the extension, for example, .conf_backup or .conf_bkup123.
Open the backup file that is created during the upgrade using a text editor.
Obtain the build number from the readme file that is provided with the upgrade patch.
Update hosts localhost to the hosts configuration value identified from the OUTPUT.conf file in the earlier step. This step is required only if the hosts field has the localhost value assigned.
The extract of the updated file is shown in the following example.
<existing code>
.
.
hosts protegrity-esa123.protegrity.com
.
.
<existing code>
Save and close the file.
Rename the configuration file to OUTPUT.conf.
mv OUTPUT.conf.before_<build_number> OUTPUT.conf
Restart the td-agent service.
Log in to the ESA Web UI.
Navigate to System > Services > Misc > td-agent.
Restart the td-agent service.
Repeat these steps on all the ESAs where the external SIEM configuration was updated before the upgrade.
Installing the DSG patch on ESA
If you install the DSG v3.3.0.0 patch on the ESA v10.1.0, then perform the following operations:
Run the Docker commands
- On the ESA CLI Manager, navigate to Administration > OS Console.
- Run the following command. A list of all the available Docker containers is displayed as shown in the following example:
docker ps
Run the following commands to update the container configuration.
docker update --restart=unless-stopped --cpus 2 --memory 1g --memory-swap 1.5g dsg-ui-3.3.0.0.5-1
docker update --restart=always --cpus 1 --memory .5g --memory-swap .6g gpg-agent-3.3.0.0.5-1
The values of --cpus, --memory, and --memory-swap can be changed as per the requirements.
Update the evasive configuration file
- On the ESA CLI Manager, navigate to Administration > OS Console.
- Add a new mod_evasive configuration file using the following command.
nano /etc/apache2/mods-enabled/whitelist_evasive.conf
- Add the following parameters to the mod evasive configuration file.
<IfModule mod_evasive20.c>
DOSWhitelist 127.0.0.1
</IfModule>
- Save the changes.
- Set the required permissions for the evasive configuration file using the following command.
chmod 644 /etc/apache2/mods-enabled/whitelist_evasive.conf
- Reload the Apache service using the following command.
/etc/init.d/apache2 reload
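Optionally, the Apache configuration can be syntax-checked before or after the reload; this assumes the standard apache2ctl utility is available on the appliance.
# Verify that the Apache configuration, including the new whitelist_evasive.conf, parses cleanly
apache2ctl configtest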
3.6 - Restoring to the Previous Version of ESA
3.6.1 - Restoring to the previous version of ESA from a snapshot
In case of an upgrade failure, restore the ESA from the OS backup or by importing the backed-up files.
3.6.1.1 - Restoring to the Previous Version of ESA on AWS
On AWS, data can be restored by creating a volume from a snapshot. After the volume is created, it can be attached to the EC2 instance.
Creating a Snapshot of a Volume on AWS
Perform the steps to create a snapshot of a volume:
On the EC2 Dashboard screen, click Volumes under the Elastic Block Store section.
The screen with all the volumes appears.
Right-click the required volume and select Create Snapshot.
The Create Snapshot screen for the selected volume appears.
Enter the required description for the snapshot in the Description text box.
Select click to add a Name tag to add a tag.
Enter the tag in the Key and Value text boxes.
Click Add Tag to add additional tags.
Click Create Snapshot.
A message Create Snapshot Request Succeeded along with the snapshot id appears.
- Ensure that you note the snapshot id.
- Ensure that the status of the snapshot is completed.
Restoring a Snapshot on AWS
Before you begin
- Ensure that the status of the instance is Stopped.
- Ensure that an existing volume on the instance is detached.
Perform the steps to restore a snapshot on AWS:
On the EC2 Dashboard screen, click Snapshots under the Elastic Block Store section. The screen with all the snapshots appears.
Right-click on the required snapshot and select Create Volume from snapshot.
The Create Volume screen appears.
Select the type of volume from the Volume Type drop-down list.
Enter the size of the volume in the Size (GiB) text box.
Select the availability zone from the Availability Zone drop-down list.
Click Add Tag to add tags.
Click Create Volume.
A message Create Volume Request Succeeded along with the volume id appears. The volume with the snapshot is created.
Ensure that you note the volume id.
Under the EBS section, click Volume.
The screen displaying all the volumes appears.
Right-click on the volume that is created.
The pop-up menu appears.
Select Attach Volume.
The Attach Volume dialog box appears.
Enter the Instance ID or name of the instance in the Instance text box.
Enter /dev/xvda in the Device text box.
Click Attach to add the volume to the instance.
The snapshot is added to the EC2 instance as a volume.
3.6.1.2 - Restoring to the Previous Version of ESA on Azure
On Azure, data can be restored by creating a volume from a snapshot.
Creating a Snapshot of a Virtual Machine on Azure
Perform the steps to create a snapshot of a virtual machine:
Sign in to the Azure homepage.
On the Azure Dashboard screen, select Virtual Machine. The screen displaying the list of all the Azure virtual machines appears.
Select the required virtual machine.
The screen displaying the details of the virtual machine appears. On the left pane, under Settings, click Disks.
The details of the disk appear.
Select the disk and click Create Snapshot.
The Create Snapshot screen appears.
Enter the following information:
- Name: Name of the snapshot
- Subscription: Subscription account for Azure
Select the required resource group from the Resource group drop-down list.
Select the required account type from the Account type drop-down list.
Click Create.
The snapshot of the disk is created.
Restoring from a Snapshot on Azure
Perform the steps to restore a snapshot on Azure.
On the Azure Dashboard screen, select Virtual Machine.
The screen displaying the list of all the Azure virtual machines appears.
Select the required virtual machine.
The screen displaying the details of the virtual machine appears.
On the left pane, under Settings, click Disks.
Click Swap OS Disk.
The Swap OS Disk screen appears.
Click the Choose disk drop-down list and select the snapshot created.
Enter the confirmation text and click OK.
The machine is stopped and the disk is successfully swapped.
Restart the virtual machine to verify whether the snapshot is available.
3.6.1.3 - Restoring to the Previous Version of ESA on GCP
On GCP, data can be restored by creating a volume from a snapshot.
Creating a Snapshot of a Disk on GCP
Perform the steps to create a snapshot of a disk:
On the Compute Engine dashboard, click Snapshots.
The Snapshots screen appears.
Click Create Snapshot.
The Create a snapshot screen appears.
Enter information in the following text boxes.
- Name - Name of the snapshot.
- Description – Description for the snapshot.
Select the required disk for which the snapshot is to be created from the Source Disk drop-down list.
Click Add Label to add a label to the snapshot.
Enter the label in the Key and Value text boxes.
Click Add Label to add additional labels.
Click Create.
- Ensure that the status of the snapshot is set to completed.
- Ensure that you note the snapshot id.
Restoring from a Snapshot on GCP
Perform the steps to restore a snapshot:
- Navigate to Compute Engine > VM instances. The VM instances screen appears.
- Select the required instance.
The screen with instance details appears.
- Stop the instance.
- After the instance is stopped, click EDIT.
- Under the Boot Disk area, remove the existing disk.
- Click Add Item.
- Select the Name drop-down list and click Create a disk. The Create a disk screen appears.
- Under Source Type area, select the required snapshot.
- Enter the other details, such as, Name, Description, Type, and Size (GB).
- Click Create. The snapshot of the disk is added in the Boot Disk area.
- Click Save. The instance is updated with the new snapshot.
3.6.2 - Restoring to the Previous Version of ESA On-premise
To roll back the system to the previous version, perform the following steps to restore the system. This helps in cases such as when an upgrade fails.
Perform the steps to restore to the previous version of the ESA on-premise.
- From the CLI Manager, navigate to Administration > Reboot And Shutdown > Reboot to restart your system. A screen to enter the reason for restart appears.
- Enter the reason and select OK.
- Enter the root password and select OK. The appliance restarts and the following screen appears.
- Select System-Restore and press ENTER. The Welcome to System Restore Mode screen appears.
- Select Initiate OS-Restore Procedure and select OK. The restore procedure is initiated.
After the OS-Restore procedure is completed, the login screen appears.
4 - Enterprise Security Administrator (ESA)
The Protegrity Data Security Platform provides policy management and data protection. Its main component is the Enterprise Security Administrator (ESA). Working in combination with a Protegrity database protector, application protector, file protector, or big data protector, the ESA can be used for managing data security policies, key management, and auditing and reporting.
- ESA: The ESA Manager provides information on how to install specific components, work with policy management tools, manage keys and key rotation, switch between Soft HSM and Key Store, configure logging repositories, and use logging tools. This document contains details for all these features.
- Audit Store: The Audit Store is a repository for the logs generated from multiple sources, such as the kernel, policy management, member source, application logs, and protectors. The Audit Store supports clustering for scalability.
- Insight: This feature displays forensics from the Audit Store on the Audit Store Dashboards. It provides options to query and display data from the Audit Store. Predefined graphs are available for analyzing the data from the Audit Store. It provides options for generating and saving customized queries and reports. An enhanced alerting system tracks the data in the Audit Store to monitor the systems and alert users if required.
- Data Security Gateway: The Data Security Gateway (DSG) is a network intermediary that can be classified under Cloud Access Security Brokers (CASB) and Cloud Data Protection Gateway (CDPG). CASBs provide security administrators a central check point to ensure secure and compliant use of cloud services across multiple cloud providers. CDPG is a security policy enforcement check point that exists between cloud data consumer and cloud service provider to interject enterprise policies whenever the cloud-based resources are accessed.
4.1 - Architectures
4.1.1 - Logging architecture
Architecture overview
Logging follows a fixed routine. The system generates logs, which are collected and then forwarded to Insight. Insight stores the logs in the Audit Store. The Audit Store holds the logs and these log records are used in various areas, such as, forensics, alerts, reports, dashboards, and so on. This section explains the logging architecture.
The ESA v10.1.0 only supports protectors having the PEP server version 1.2.2+42 and later.
ESA:
The ESA has the td-agent service installed for receiving and sending logs to Insight. Insight stores the logs in the Audit Store that is installed on the ESA. From here, the logs are analyzed by Insight and used in various areas, such as, forensics, alerts, reports, dashboards, visualizations, and so on. Additionally, logs are collected from the log files generated by the Hub Controller and Membersource services and sent to Insight. By default, all Audit Store nodes have all node roles, that is, Master-eligible, Data, and Ingest. A minimum of three ESAs are required for creating a dependable Audit Store cluster to protect it from system crashes. The architecture diagram shows three ESAs.
Protectors:
The logging system is configured on the protectors to send logs to Insight on the ESA using the Log Forwarder.
DSG:
The DSG has the td-agent service installed. The td-agent forwards the appliance logs to Insight on the ESA. The Log Forwarder service forwards the data security operations-related logs, namely protect, unprotect, and reprotect, and the PEP server logs to Insight on the ESA.
Important: The gateway logs are not forwarded to Insight.
Container-based protectors
The container-based protectors are the Immutable Java Application Protector Container and the REST container. The Immutable Java Application Protector Container represents a new form factor for the Java Application Protector. The container is intended to be deployed on the Kubernetes environment.
The REST container represents a new form factor that is being developed for the Application Protector REST. The REST container is deployed on Kubernetes, residing on any of the Cloud setups.
Components of Insight
The solution for collecting and forwarding the logs to Insight makes up the logging architecture. The various components of Insight are installed on an appliance or an ESA.
A brief overview of the Insight components is provided in the following figure.
Understanding Analytics
Analytics is a component that is configured when setting up the ESA. After it is installed, the tools, such as, the scheduler, reports, rollover tasks, and signature verification tasks are available. These tools are used to maintain the Insight indexes.
Understanding the Audit Store Dashboards
The logs stored in the Audit Store hold valuable data. This information is very useful when used effectively. To view the information in an effective way, Insight provides tools such as dashboards and visualization. These tools are used to view and analyze the data in the Audit Store. The ESA logs are displayed on the Discover screen of the Audit Store Dashboards.
Understanding the Audit Store
The Audit Store is the database of the logging ecosystem. The main task of the Audit Store is to receive all the logs, store them, and provide the information when log-related data is requested. It is very versatile and processes data fast.
The Audit Store is a component that is installed on the ESA during the installation. The Audit Store is scalable, hence, additional nodes can be added to the Audit Store cluster.
Understanding the td-agent
The td-agent forms an integral part of the Insight ecosystem. It is responsible for sending logs from the appliance to Insight. It is the td-agent service that is configured to send and receive logs. The service is installed, by default, on the ESA and DSG.
Based on the installation, the following configurations are performed for the td-agent:
- Insight on the local system: In this case, the td-agent is configured to collect the logs and send it to Insight on the local system.
- Insight on a remote system: In this case, Insight is not installed locally, such as the DSG, but it is installed on the ESA. The td-agent is configured to forward logs securely to Insight in the ESA.
Understanding the Log Forwarder
The Log Forwarder is responsible for forwarding data security operation logs to Insight in the ESA. In cases when the ESA is unreachable, the Log Forwarder handles the logs until the ESA is available.
For Linux-based protectors, such as the Oracle Database Protector for Linux, if the connection to the ESA is lost, then the Log Forwarder starts collecting the logs in the memory cache. If the ESA is still unreachable after the cache is full, then the Log Forwarder continues collecting the logs and stores them on the disk. When the connection to the ESA is restored, the logs in the cache are forwarded to Insight. The default memory cache for collecting logs is 256 MB. If the filesystem for Linux protectors is not EXT4 or XFS, then the logs will not be saved to the disk after the cache is full.
For information about updating the cache limit, refer here.
The following table provides information about how the Log Forwarder handles logs in different situations.
If.. | then the Log Forwarder… |
---|---|
Connection to ESA is lost | Starts collecting logs in the in-memory cache based on the cache limit defined. |
Connection to ESA is lost and the cache is full | In case of Linux-based protectors, the Log Forwarder continues to collect the logs and stores them on disk. If the disk space is full, then all the cache files are emptied and the Log Forwarder continues to run. For Windows-based protectors, the Log Forwarder starts discarding the logs. |
Connection to ESA is restored | Forwards logs to Insight on the ESA. |
Understanding the log aggregation
The architecture, the configurations, and the workflow provided here describes the log aggregation feature.
Log aggregation happens within the protector.
- The protector flushes all security audits once every second, by default. The number of security logs generated every second varies with the users, data elements, and operations (protect, unprotect, or reprotect) involved.
- Fluent Bit, by default, sends one batch every 10 seconds and, when this occurs, it takes all security and application logs.
The following diagram describes the architecture and the workflow of the log aggregation.
- The security logs generated by the protectors are aggregated in the protectors.
- The application logs are not aggregated and they are sent directly to the Log Forwarder.
- The security logs are aggregated and flushed at specific time intervals.
- The aggregated security logs from the Log Forwarder are forwarded to Insight.
The following diagram illustrates how similar logs are aggregated.
The similar security logs are aggregated after the log send interval or when an application is stopped.
For example, if 30 similar protect operations are performed simultaneously, then a single log will be generated with a count of 30.
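Conceptually, the aggregated entry carries the shared attributes once along with a count instead of 30 separate records. The following illustration is purely hypothetical; the field names are not taken from the product documentation and only convey the idea of aggregation.
{
  "operation": "protect",
  "data_element": "ccn",
  "user": "app_user1",
  "count": 30
}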
4.1.2 - Overview of the Protegrity logging architecture
Overview of logging
Protegrity software generates comprehensive logs. The logging infrastructure generates a huge number of log entries that take time to read. The enhanced logging architecture consolidates logs in Insight. Insight stores the logs in the Audit Store and provides tools to make it easier to view and analyze log data.
When a user performs an operation using Protegrity software, or interacts with Protegrity software directly or indirectly through other software or an interface, a log entry is generated. This entry is stored in the Audit Store with other similar entries. A log entry contains valuable information about the interaction between the Protegrity software and a user or other systems.
A log entry might contain the following information:
- Date and time of the operation.
- User who initiated the operation.
- Operation that was initiated.
- Systems involved in the operation.
- Files that were modified or accessed.
As the transactions build up, the quantum of logs generated also increases. Every day a lot of business transactions, inter-process activities, interactivity-based activities, system level activities, and other transactions take place resulting in a huge number of log entries. All these logs take up a lot of time and space to store and process.
Evolution of logging in Protegrity
Every system had its own advantages and disadvantages. Over time, the logging system evolved to reduce the disadvantages of the existing system and also improve on the existing features. This ensured that a better system was available that provided more information and at the same time reduced the processing and storage overheads.
Logging in legacy products
In legacy Protegrity platforms, that is version 8.0.0.0 and earlier, security events were collected in the form of logs. These events were a list of transactions when the protection, unprotection, and reprotection operations were performed. The logs were delivered as a part of the customer Protegrity security solution. This system allowed tracking of the operations and also provided information for troubleshooting the Protegrity software.
However, this had disadvantages due to the volume and the granularity of the logs generated. When many security operations were performed, the volume of logs kept increasing, which made it difficult for the platform to keep track of everything. When the volume increased beyond a manageable limit, customers had to turn off logging for successful attempts to access protected data in the clear.
The logs collected reported the security operations performed. However, the exact number of operations performed was difficult to record. This inconsistency existed across the various protectors. The SDKs could provide individual counts of the operations performed while database protectors could not provide the exact count of the operations performed.
To solve this issue of obtaining exact counts, the capability called metering was added to the Protegrity products. Metering provided a count of the total number of security events. Even in the case of Metering, storage was an issue. This is because one audit log was generated in PostgreSQL in each ESA. Cross replications of logs across ESAs was a challenge because there was no way to automatically replicate logs across ESAs.
New logging infrastructure
Protegrity continues to improve on the products and solutions provided. Starting from version 8.1.0.0, a new and robust logging architecture is introduced on the ESA. This new system improves on the way audit logs are created and processed on the protectors. The logs are processed and aggregated according to the event being performed. For example, if 40 protect operations are performed on the protector, then one log with the count 40 is created instead of 40 individual logs for each operation. This reduces the number of logs generated and at the same time retains the quality of the information generated.
The audit logs that are created provide a lot of information about the event being performed on the protector. In addition to system events and protection events, the audit log also holds information for troubleshooting the protector and the security events on the protector. This solves the issue of granularity of the logs that existed in earlier systems. An advantage of this architecture is that the logs help track the working of the system. It also allows monitoring the system for any issues, both in the working of the system and from a security perspective.
The new architecture uses software, such as, Fluent Bit and Fluentd. These allow logs to be transported from the protector to the ESA over a secure, encrypted line. This ensures the safety and security of the information. The new architecture also used Elasticsearch for replicating logs across ESAs. It made the logging system more robust and protected the data from being lost in the case of an ESA failure. Over iterations, Elasticsearch was upgraded with additional security using Open Distro. From version 9.1.0.0, OpenSearch was introduced, which improved the logging architecture further. These components provided configuration flexibility for a better logging system.
From version 9.2.0.0, Insight is introduced, which allows the logs to be visualized and various reports to be created for monitoring the health and the security of the system. Additionally, from the ESA version 9.2.0.0 and protector versions 9.1.0.0 or 9.2.0.0, the logging system has been improved even further. It is now possible to view how many security operations the Protegrity solution has delivered and which Protegrity protectors are being used in the solution.
The audit logs generated are important for a robust security solution and are stored in Insight in the ESA. Since the volume of logs generated has been reduced in comparison to legacy solutions, the logs are always received by the ESA. Thus, the capability to turn off logging is no longer required and has been deprecated. The new logging architecture offers a wide range of tools and features for managing logs. In the case where the volume of logs is very large, the logs can be archived using the Index Lifecycle Management (ILM) to the short-term or long-term archive. This frees up the system and resources and at the same time makes the logs available when required in the future.
For more information about ILM, refer here.
The process for archiving logs can also be automated using the scheduler provided with the ESA. In addition to archiving logs, the processes for auto generating reports, rolling over the index, and performing signature verification can also be automated.
For more information about scheduling tasks, refer here.
4.2 - Protegrity Appliance Overview
Protegrity Appliance Overview
The Protegrity Data Security Platform provides policy management and data protection and has the following appliances.
- Enterprise Security Administrator (ESA) is the main component of the Data Security Platform. Working in combination with a Protegrity Protector, it can be used to encrypt or tokenize your data. Protectors include the Database Protector, Application Protector, File Protector, or Big Data Protector.
- The Data Security Gateway (DSG) is a network intermediary that can be classified under Cloud Access Security Brokers (CASB) and Cloud Data Protection Gateway (CDPG). CASBs provide security administrators a central check point to ensure secure and compliant use of cloud services across multiple cloud providers. CDPG is a security policy enforcement check point that exists between cloud data consumer and cloud service provider to interject enterprise policies whenever the cloud-based resources are accessed.
All Protegrity Appliances are based on the same framework with the base operating system (OS) as hardened Linux, which provides the platform for Protegrity products. This platform includes the required OS low-level components as well as higher-level components for enhanced security manageability.
All Protegrity Appliances have two basic interfaces: CLI Manager and Web UI. CLI Manager is a console-based environment and Web UI is a web-based environment. Most of the management features are shared by all appliances. Some examples of the shared management features are network settings management, date and time settings management, logs management, and appliance configuration facilities, among others.
The following guides provide the details of features specific to each Appliance:
- Protegrity Enterprise Security Administrator Guide
- Data Security Gateway User Guide
An organization can use a mix of these mandatory and may-use methods to secure data.
4.3 - Data Security Platform Overview
The Protegrity Data Security Platform is a comprehensive source of enterprise data protection solutions. Its design is based on a hub and spoke deployment architecture.
The Protegrity Data Security Platform has following components:
Enterprise Security Administrator (ESA) - Handles the management of policies, keys, monitoring, auditing, and reporting of protected systems in the enterprise.
Data Protectors – Protect sensitive data in the enterprise and deploy security policy for enforcement on each installed system. A policy is deployed from ESA to the Data Protectors and Audit Logs of all activity on sensitive data is forwarded to the appliances, such as, the ESA, or external logging systems.
4.4 - Installing ESA
You can install ESA on-premise or on a cloud platform such as AWS, GCP, or Azure. When you upgrade from a previous version, ESA is available as a patch. The following are the different ways of installing ESA:
Installing ESA
- ISO Installation: This installation is performed for an on-premise environment where ESA is installed on a local system using an ESA ISO provided by Protegrity. The installation of the ISO begins by installing the hardened version of Linux on your system, setting up the network, and configuring date/time. This is then followed by updating the location, setting up OS user accounts, and installing the ESA-related components. For more information about installing ESA using ISO, refer to Installing ESA using ISO.
- Cloud Platforms: On Cloud platforms such as, AWS, GCP, or Azure, ESA images for the respective cloud are generated and provided by Protegrity. In these images, ESA is installed with specific components. You must obtain the image from Protegrity and create an instance on the cloud platform. After creating the instance, you run certain steps for finalizing the installation. For more information about installing ESA on cloud platforms, refer to Installing ESA on Cloud Platforms.
A temporary license is provided by default when you first install the Appliance and is valid for 30 days from the date of this installation. To continue using Protegrity features, you have to obtain a validated license before your temporary license expires.
For more information about licensing, refer to Protegrity Data Security Platform Licensing.
4.5 - Logging In to ESA
The Enterprise Security Administrator (ESA) contains several components, such as Insight, Audit Store, Analytics, Policy Management, Key Management, Certificate Management, Clustering, Backup/Restore, Networking, and User Management. You must log in to the CLI Manager or the Web UI of the ESA to use these components and secure your data.
Logging in to the appliance can be categorized as follows:
Simple Login
Log in to ESA from the CLI or the Web UI by providing valid user credentials. You can log in to ESA as an appliance or LDAP user. For more information about users, refer to ESA users.
From your Web browser, enter the domain name or IP address of the ESA using the HTTPS protocol, for example, https://192.168.1.x/. The Web Interface splash screen appears. The following figure displays the login page of the ESA Web UI.
You can log in to the ESA CLI Manager using an SSH session.
Single Sign-On (SSO)
Single Sign-On (SSO) is a feature that enables users to authenticate to multiple applications by logging in only once. On the Protegrity appliances, you can use the Kerberos SSO mechanism to log in to the appliance. For more information about SSO, refer to Single Sign-On. The following figure displays the login page with SSO.
Two-Factor Authentication
Two-factor authentication is a verification process in which two recognized factors are used to identify you before granting access to a system or website. In addition to your password, you must correctly enter a numeric one-time passcode or verification code to finish the login process. This provides an extra layer of security over the traditional authentication method. For more information about two-factor authentication, refer to Two-Factor Authentication.
Protegrity supports Mozilla Firefox, Chrome, and Internet Explorer browsers for Web UI login.
4.6 - Command-Line Interface (CLI) Manager
Command-Line Interface (CLI) Manager is a Protegrity Platform tool for managing the Protegrity appliances. The CLI Manager is a text-based environment for managing the status, administration, configuration, preferences, and networking of your appliance. This section describes how to log in to the CLI Manager and use its features.
4.6.1 - Accessing the CLI Manager
You log on to the CLI Manager to manage the appliance settings and monitor your appliance. The CLI Manager is available using any of the following text consoles:
- Direct connection using local keyboard and monitor.
- Serial connection using an RS232 console cable.
- Network connection using a Secure Shell (SSH port 22) connection to the appliance management IP address.
To log on to the CLI Manager:
From the Web UI pane, click the window that appears at the bottom right.
A new CLI Manager window opens.
At the prompt, type the admin login credentials set during the appliance installation.
Press ENTER.
The CLI Manager screen appears.
First-time login
When you log in through the CLI or the Web UI for the first time with the password policy enabled, the Update Password screen appears. It is recommended that you change the password because the administrator sets the initial password.
Shell Accounts role with Shell Access
If you are a user associated with the Shell Accounts role that has Shell (non-CLI) Access permissions, you cannot access the CLI Manager or Web UI. The exception is when the password policy is enabled and you are required to change the password through the Web UI.
For more information about configuring the password policy, refer to section Password Policy Configuration.
CLI Manager Main Screen
The CLI Manager screen appears when you successfully log in to the CLI Manager. This screen lists the messages that relate to the logged-in user and indicates the priority of each message. The percentage value at the bottom-right of the screen indicates how much of the available information is displayed on the screen.
Click Continue to display the CLI Manager main screen.
The following figure illustrates the CLI Manager main screen.
CLI Manager Navigation
There are many common keystrokes that help you to navigate the CLI Manager. The following table describes the navigation keys.
Key | Description |
---|---|
UP ARROW / DOWN ARROW | Navigates up and down menu options |
ENTER | Selects an option or continues process |
Q | Quits the CLI Manager |
T | Goes to the top of the current menu |
U | Moves up one level |
H | Displays key settings and instructions |
TAB | Moves between multiple fields |
Page Up | Scrolls up |
Page Down | Scrolls down |
The following sections explain the main system menus in the CLI Manager in detail.
4.6.2 - CLI Manager Structure Overview
There are five main system menus in the CLI Manager which are common for the Protegrity appliances:
- Status and Logs
- Administration
- Networking
- Tools
- Preferences
Status and Logs
The Status and Logs menu includes four options that make the analysis of logs easier:
- System Monitor tool with real-time information on the CPU, network, and disk usage.
- Top Processes view with a list of the top 10 memory and CPU consumers. The information is updated periodically.
- Appliance Logs tool, divided into subcategories. These can be appliance common logs and appliance-specific logs. You can view system event logs that relate to, for example, syslog, installation, kernel, and web services engine logs, which are common for all four Protegrity appliances.
- User Notifications tool, which includes all the messages for a user. The latest notifications are also displayed on the screen after login.
For more information about status and logs, refer to section Working with Status and Logs.
Administration
The Administration menu is the same for all three appliances. Using this menu, you can perform most of the standard server administration tasks:
- Start/stop/restart services
- Change time/time zone/date/NTP server
- Change passwords for admin/viewer/root user/LDAP users and unlock locked users
- Backup/restore OS, appliance configuration
- Set up email (SMTP)
- JWT Configuration
- Azure AD Configuration
- Install/uninstall services and patches
- Set up communication with a directory server (Local/external LDAP, Active Directory) and monitor the LDAP
- Reboot and shut down
- Access appliance OS console
For more information about appliance administration, refer to section Working with Administration.
Networking
The Networking menu is the same for all four appliances. Using the Networking menu, you can configure the network settings as per your requirements:
- Change host name, appliance address, gateway, domain information
- Configure SNMP – refresh/start/set service or show/set string
- Specify management interface for Web UI and Web Services
- Configure network interface settings and assign services to multiple IP addresses
- Troubleshoot the network
- Manage Firewall settings
- Ports Allowlist
For more information about appliance networking, refer to section Working with Networking.
Tools
The Tools menu differs among the four appliances. However, most of the tools are common. Using this menu, you can perform the following tasks:
- Configure SSH mode to include known hosts/authorized keys/identities, and generate new server key
- Set up trusted appliances cluster
- Set up XEN paravirtualization
- View status of external hard drives
- Run antivirus and update signature file
- Configure Web services settings
For more information about common appliance tools, refer to section Working with Tools.
If you are using the DSG, then you have additional tools for configuring ESA communication. Refer to the appropriate appliance guide for details.
The additional tools for logging and reporting and for policy management mentioned in the list are specifically for configuring the ESA appliance.
Preferences
The Preferences menu is common for all four appliances. Using this menu, you can perform the following tasks:
- Set up local console settings
- Specify if root password is required for the CLI system tools
- Display the system monitor in OS console
- Minimize timing differences
- Set uniform response time for failed login
- Enable root credentials check limit
- Enable AppArmor
- Enable FIPS Mode
- Basic Authentication for REST APIs
For more information about appliance preferences, refer to section Working with Preferences.
4.6.3 - Working with Status and Logs
The Status and Logs screen allows you to access system monitor information, examine top memory and CPU usage, and view appliance logs. You can access it from the CLI Manager main screen. This screen shows the hostname to which you are connected, and it allows you to view and manage your audit logs.
The following figure shows the Status and Logs screen.
In addition to the existing logs, additional security logs are generated in the following cases:
- Users are added to or removed from the appliance's own LDAP.
- SUDO commands are issued from the shell.
- There are failed attempts to log in from SSH or the Web UI.
- Any shell command is run. Logging all shell commands is a PCI-DSS requirement.
4.6.3.1 - Monitoring System Statistics
Using the System Monitor, you can view the following system statistics:
- CPU usage
- RAM usage
- Free and used disk space
- Whether additional hard disks are required, and so on
To view the system information, log in to the CLI Manager and navigate to Status and Logs > System Monitor.
4.6.3.2 - Viewing the Top Processes
Using Top Processes, you can examine, in real time, the processes that are consuming memory or CPU.
To view the top processes, log in to the CLI Manager and navigate to Status and Logs > Top Processes.
4.6.3.3 - Working with System Statistics (SYSSTAT)
System Statistics (SYSSTAT) is a tool to monitor system resources and their performance on Linux/UNIX systems. It contains utilities that collect system information, report CPU statistics, report input-output statistics, and so on. The SYSSTAT tool provides extensive and detailed data for all the activities in your system.
SYSSTAT contains the following utilities for analyzing your system:
- sar
- iostat
- mpstat
- pidstat
- nfsiostat
- cifsiostat
These utilities collect, report, and save system activity information. Using the reports generated, you can check the performance of your system.
The SYSSTAT tool is available when you install the appliance.
On the Web UI, navigate to System > Task Scheduler to view the SYSSTAT tasks. You must run the following tasks to collect the system information:
- Sysstat Activity Report to collect information at short intervals
- Sysstat Activity Summary to collect information at a specific time daily
The following figure displays the SYSSTAT tasks on the Web UI.
The logs are stored in the /var/logs/sysstat directory.
The tasks are disabled by default. You must enable the tasks from the Task Scheduler for collecting the system information.
4.6.3.4 - Auditing Service
The Linux Auditing System is a utility that allows you to monitor events occurring in a system. It is integrated with the kernel to watch system operations. The events that must be monitored are added as rules, which also define to what extent each event must be tracked. If an event is triggered, then a detailed audit log is generated. Based on this log, you can track any violations of the system and improve security measures to prevent them.
In Protegrity appliances, the auditing tool is implemented to track certain events that can pose a security threat. The Audit Service is installed and running on the appliance for this purpose. On the Web UI, navigate to System > Services to view the status of the service. The Audit Service checks the following events:
- Update timezone
- Update AppArmor profiles
- Manage OS users and their passwords
If any of these events occur, then a low-severity log is generated and stored. The logs are available in the /var/log/audit/audit.log file. The logs that are generated by the auditing tool contain detailed information about modifications triggered by the events that are listed in the audit rules. This helps to differentiate between a simple log and an audit log generated by the auditing tool for monitoring potential risks to the appliance.
For example, consider a scenario where an OS user is added to the appliance. If the Audit Service is stopped, then details of the user addition are not displayed and logs contain entries as illustrated in the following figure.
If the Audit Service is running, then the same event triggers a detailed audit log describing the user addition. The logs are illustrated in the following figure.
As illustrated in the figure, the following are some audits that are triggered for the event:
- USER_CHAUTHTOK: A user attribute is modified.
- EOE: A multiple-record event ended.
- PATH: Records a path or file name.
Thus, based on the details provided in the type attribute, a potential threat to the system can be monitored.
For more information about the audit types, refer to the following link:
On the Web UI, an Audit Service Watchdog scheduled task is added to ensure that the Audit Service is running. This task is executed once every hour.
Caution: It is recommended to keep the Audit Service running for security purposes.
4.6.3.5 - Viewing Appliance Logs
Using Appliance Logs, you can view all logs that are gathered by the appliance.
To view the appliance logs, login to the CLI Manager, navigate to Status and Logs > Appliance Logs.
Table: Appliance Logs
Logs | Log Types | Description | ESA | DSG |
---|---|---|---|---|
System Event Logs | Syslog | All appliance logs. | ✓ | ✓ |
System Event Logs | Installation | Installation logs contain all of the information gathered during the installation procedure. These logs include all errors during installation and information on all the processes, resources, and settings used for installation. | ✓ | ✓ |
System Event Logs | Patches | Patches installed on the appliance. | ✓ | ✓ |
System Event Logs | Patch_SASL | Proxy Authentication (SASL) related logs. | | |
System Event Logs | Authentication | Authentication logs, such as user logins. | ✓ | ✓ |
System Event Logs | Web Services | Logs generated by the Web Services modules. | ✓ | ✓ |
System Event Logs | Web Management | Logs generated by the Appliance Web UI engine. | ✓ | ✓ |
System Event Logs | Current Event | Current event logs contain all the operations performed on the appliance. They gather all information from different services and appliance components. | ✓ | ✓ |
System Event Logs | Kernel | System kernel logs. | ✓ | ✓ |
System Event Logs | Web Services Server | Web Services Apache logs. | ✓ | ✓ |
System Event Logs | Patch_Logging | Logging server related logs, such as the installation log for the logging server, and so on. | ✓ | ✓ |
Web Services Engine | Web Services HTTP-Server logs | Appliance Web UI related logs. | ✓ | ✓ |
Service Dispatcher | Access Logs | Service Dispatcher access logs. | ✓ | ✓ |
Service Dispatcher | Server Logs | Service Dispatcher server logs. | ✓ | ✓ |
Logging | Startup | ESA logging and reporting mechanism specific logs. | ✓ | |
Logging | WatchDog | ESA logging and reporting mechanism specific logs. | ✓ | |
Logging | Database Access Layer | ESA logging and reporting mechanism specific logs. | ✓ | |
Logging | Database Engine | ESA logging and reporting mechanism specific logs. | ✓ | |
PEP Server | | Logs received from the PEP Server that is located on the FPV and DSG. | | ✓ |
Cluster Logs | Export Import Cluster | | ✓ | |
DSG Patch Installation | Cluster | Logs all operations performed during installation of the DSG patch. | | ✓ |
You can delete the desired logs using the Purge button and view them in real-time using the Real-Time View button. When you finish viewing the logs, press Done to exit.
4.6.3.6 - Viewing User Notifications
All the messages that are displayed when you log in to either the Web UI or the CLI Manager can also be viewed here.
To view the user notifications, log in to the CLI Manager and navigate to Status and Logs > User Notifications.
4.6.4 - Working with Administration
Appliance administration is the most important part of the appliance framework. Most of the administrative tools and tasks can be performed using the Administration menu of the CLI Manager.
The following screen illustrates the Administration screen on the CLI Manager.
Some of the administration tasks, such as creating a clustered environment or setting up virtualization, can be done only in the CLI Manager by selecting the Administration menu. Most of the other administration tasks can be performed using the Web UI.
4.6.4.1 - Working with Services
You can manually start and stop appliance services.
To view all appliance services and their statuses, log in to the CLI Manager and navigate to Administration > Services.
Use caution before stopping or restarting a particular service. Make sure that no important actions are being performed by other users of the service that you are about to stop or restart.
In the Services dialog box, you can start, stop, or restart the following services:
Table 1. Appliance Services
Service Group | Services | ESA | DSG |
---|---|---|---|
OS | Web UI, Secure Shell (SSH), Firewall, Real-time Graphs, SNMP Service, NTP Service, Cluster Status, Appliance Heartbeat Server, Appliance Heartbeat Client, Log Filter Server, Messaging System, Appliance Queues Backend, Docker, Rsyslog Service | ✓ | ✓ |
LDAP | LDAP Server, Name Service Cache Daemon | ✓ | ✓ |
Web Services Engine | Web Services Engine | ✓ | ✓ |
Service Dispatcher | Service Dispatcher | ✓ | ✓ |
Logging | Management Server, Management Server Database, Reports Repository, Reporting Engine | ✓ | |
Policy Management | Policy Repository, HubController, PIM Cluster, Soft HSM Gateway, Key Management Gateway, Member Source Service, Meteringfacade, DevOps, Logfacade | ✓ | |
Reporting Server | Reports Repository and Reporting Engine | ✓ | |
Distributed Filesystem File Protector | DFS Cache Refresh | ✓ | |
ETL Toolkit | ETL Server | | |
Cloud Gateway | Cloud Gateway Cluster | ✓ | |
td-agent | td-agent | ✓ | ✓ |
Audit Store | Audit Store Repository, Audit Store Management | ✓ | |
Analytics | Analytics, Audit Store Dashboards | ✓ | |
RPS | | ✓ | |
For more information about the Meteringfacade and Logfacade services, refer to the section Services.
You can change the status of any service by selecting it from the list and choosing Select. In the screen that follows the Service Management screen, select whether to stop, start, or restart the service, as required.
When you apply an action to a particular service, a status message appears describing the action applied. Press ENTER to continue.
You can also use the Web UI to start or stop services. In the Web UI Services, you have additional options for stopping/starting services, such as Enable/Disable Auto-start for most of the services.
Important: Although the services can be started or stopped from the Web UI, the start/stop/restart action is restricted for some services. These services can be operated from the OS Console. Run the following command to start/stop/restart a service.
/etc/init.d/<service_name> stop/start/restart
For example, to start the docker service, run the following command.
/etc/init.d/docker start
4.6.4.2 - Setting Date and Time
You can adjust the date and time settings of your appliance by navigating to Administration > Date and Time. You may need to do so if this information was entered incorrectly during initialization.
You can synchronize time with NTP Server using the Time Server (NTP) option (explained in the following paragraph), change time zone using the Set Time Zone option, change date using the Set Date option, or change time using the Set Time option. The information selected during installation is available beside each option.
Use an Up Arrow or Down Arrow key to change the values in the editable fields, such as Month/Year. Use any arrow key to navigate the calendar. Use the Tab key to navigate between the editable fields.
You can set the time and date using the Web UI as well.
For more information about setting the appliance time and date, refer to section Configuring Date and Time.
License, certificates, and date and time modifications
Date and time modifications may affect licenses and certificates. It is recommended to have time synchronized between Appliances and Protectors.
Configure NTP Time Server
You must enable or disable the NTP settings only from the CLI Manager or Web UI.
You can access the Configure NTP Time Server screen by navigating to Administration > Date and Time > Time Server.
To enable NTP synchronization, you need to specify the NTP Server first and then enable NTP. Once the NTP Server is specified, the new time will be applied immediately.
The NTP synchronization may take some time and while it is in progress, the Synchronization Status displays In Progress. When it is over, the Synchronization Status displays Time Synchronized.
4.6.4.3 - Managing Accounts and Passwords
The Appliance CLI Manager includes options to change passwords and permissions for multiple users through the CLI interface. The available options are as follows:
- Change My Password
- Manage Password and Local-Accounts
- Reset directory user-password
- Change OS root account password
- Change OS local_admin account password
- Change OS local_admin account permissions
- Manage internal Service-Accounts
- Manage local OS users
OS Users in Appliances
When you install an appliance, some users are installed to run specific services for the products.
When adding users, ensure that you do not add the OS users as policy users.
The following table describes the OS users that are available in your appliance.
OS Users | Description |
---|---|
alliance | Handles DSG processes |
root | Super user with access to all commands and files |
local_admin | Local administrator that can be used when an LDAP user is not accessible |
www-data | Daemon that runs the Apache, Service dispatcher, and Web services as a user |
ptycluster | Handles TAC-related services and communication between TAC nodes through SSH |
service_admin and service_viewer | Internal service accounts used for components that do not support LDAP |
clamav | Handles ClamAV antivirus |
rabbitmq | Handles the RabbitMQ messaging queues |
epmd | Daemon that tracks the listening address of a node |
openldap | Handles the openLDAP utility |
dpsdbuser | Internal repository user for managing policies |
Strengthening Password Policy
Passwords are a common way of maintaining the security of a user account. The strength and complexity of a password are among the primary requirements of an enterprise to prevent security vulnerabilities. A weak password increases the chances of a security breach. Thus, to ensure a strong password, different password policies are set to enhance the security of an account.
Password policies are rules that enforce validation checks to ensure a strong password. You can set your password policy based on your enterprise requirements. Some requirements of a strong password policy might include the use of numerals, characters, special characters, a minimum password length, and so on.
The default requirements of a strong password policy for an appliance OS user are as follows:
- The password must have at least 8 characters.
- All the printable ASCII characters are allowed.
- The password must contain at least one character each from any two of the following groups:
- Numeric: Includes numbers from 0-9.
- Alphabets: Includes capitals [A-Z] and small [a-z] alphabets.
- Special characters: Includes ! " # $ % & ( ) * + , - . / : ; < > = ? @ [ \ ] ^ _ ` { | } ~
You can enforce password policy rules for the LDAP and OS users by editing the check_password.py file. This file contains a Python function that validates a user password. The check_password.py file is run before you set a password for a user. The password for the user is applied only after it is validated using this Python function.
For more information about password policy for LDAP users, refer here.
Enforcing Password Policy
The following section describes how to enforce your policy restrictions for the OS and LDAP user accounts.
To enforce password policy:
Login to the CLI Manager.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Edit the check_password.py file using a text editor.
/etc/ksa/check_password.py
Define the password rules as per your organizational requirements.
For more information about the password policy examples, refer here.
Save the file.
The password rules for the users in ESA are updated.
Examples
The following section describes a few scenarios about enforcing validation checks for the LDAP and OS users.
The check_password.py file contains the def check_password(password) Python function. In this function, you can define your validations for the user password. The function returns a status code and a status message. On successful validation, the status code is zero and the status message is empty. On validation failure, the status code is non-zero and the status message contains the appropriate error message.
Scenario 1:
An enterprise wants to implement the following password rules:
- The password should contain at least 15 characters
- Password should contain digits
You must add the following snippet in the def check_password (password) function:
# Password length check
if len(password) < 15:
    return (1, "Password should contain at least 15 characters")
# Password digits check
password_set = set(password)
digits = set(string.digits)
if password_set.intersection(digits) == set([]):
    return (2, "Password must contain a digit")
Scenario 2:
An enterprise wants to implement the following password rule:
- Password should not contain 1234.
You must add the following snippet in the def check_password (password) function:
# Reject passwords that contain the sequence 1234
if "1234" in password:
    return (1, "Password must not contain 1234")
return (0, None)
Scenario 3:
An enterprise wants to implement the following password rules:
- Password should contain a combination of uppercase, lowercase, and numbers.
You must add the following snippet in the def check_password (password) function:
password_set = set(password)
# Force digits
digits = set(string.digits)
if password_set.intersection(digits) == set([]):
    return (2, "Password must contain numbers, upper, and lower case characters")
# Force lowercase
lower_letters = set(string.ascii_lowercase)
if password_set.intersection(lower_letters) == set([]):
    return (2, "Password must contain numbers, upper, and lower case characters")
# Force uppercase
upper_letters = set(string.ascii_uppercase)
if password_set.intersection(upper_letters) == set([]):
    return (2, "Password must contain numbers, upper, and lower case characters")
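The individual scenarios above can be combined into one validation routine. The following is a minimal consolidated sketch of a complete check_password function, assuming the string module is imported at the top of the file; the exact rules, status codes, and messages in your check_password.py should follow your own policy.
import string

def check_password(password):
    # Enforce a minimum length of 15 characters
    if len(password) < 15:
        return (1, "Password should contain at least 15 characters")
    password_set = set(password)
    # Require at least one digit, one lowercase, and one uppercase character
    for required in (string.digits, string.ascii_lowercase, string.ascii_uppercase):
        if not password_set.intersection(set(required)):
            return (2, "Password must contain numbers, upper, and lower case characters")
    # Reject passwords that contain the sequence 1234
    if "1234" in password:
        return (3, "Password must not contain 1234")
    # All checks passed: zero status code and empty message
    return (0, None)
Returning early with a non-zero status code and a message mirrors the behavior described above, where a zero status code and an empty message indicate a valid password.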
Changing Current Password
In situations where you need to change your current password due to suspicious activity or reasons other than password expiration, you can use the following steps.
For more information about appliance users, refer here.
To change the current password:
Login to the CLI Manager.
Navigate to Administration > Accounts and Passwords > Change My Password.
In the Current password field, type the current password.
In the New Password field, type the new password.
In the Retype Password field, retype the new password.
Select OK and press ENTER to save the changes.
Resetting Directory Account Passwords
You can change the password for any user existing in the internal LDAP directory. The user accounts and their security privileges as well as passwords are defined in the LDAP directory.
To be able to change the password for any LDAP user, you need to provide Administrative LDAP user credentials. You can also provide the old credentials of the LDAP user.
The LDAP Administrator is an admin user or the Directory Administrator assigned by admin. Admin can define Directory Administrators in the LDAP directory.
For more information about the internal LDAP directory, refer here.
To change a directory account password:
Login to the CLI Manager.
Navigate to Administration > Accounts and Passwords > Manage Passwords and Local-Accounts > Reset directory user-password.
In the displayed dialog box, in the Administrative LDAP user name or local_admin and Administrative user password fields, enter the Administrative LDAP user name and password. You can also use the local_admin credentials.
In the Target LDAP user field, enter the LDAP user name you wish to change the password for.
In the Old password field, enter the old password for the selected LDAP user. This step is optional.
In the New password field, enter a new password for the selected LDAP user.
In the Confirm new password field, re-enter a new password for the selected LDAP user.
Select OK and press ENTER to save the changes.
Changing the Root User Password
You may want to change the root user password for security reasons. This can be done only using the Appliance CLI Manager.
To change the root password:
Login to the CLI Manager.
Navigate to Administration > Accounts and Passwords > Manage Passwords and Local-Accounts > Change OS root account password.
In the Administrative user name and Administrative user password fields, enter the administrative user name and its valid password. You can also use the local_admin credentials.
In the Old root password field, enter the old password for the root user.
In the New root password field, enter the new password for the root user.
In the Confirm new password field, re-enter the new password for the root user.
Select OK and press ENTER to save the changes.
Changing the Local Admin Account Password
You can log into CLI Manager as a local_admin user if the LDAP is down or for LDAP maintenance. It is recommended that the local_admin account is not used for standard operations since it is primarily intended for maintenance tasks.
To change local_admin account password:
Login to the CLI Manager.
Navigate to Administration > Accounts and Passwords > Manage Passwords and Local-Accounts > Change OS local_admin account password.
In the Administrative user name and Administrative user password fields, enter the administrative user name and the old password for the local_admin. You can also use the Directory Server Administrator credentials.
In the New local_admin password field, enter new local_admin password.
In the Confirm new password field, re-enter the new local_admin password.
Select OK and press ENTER to save changes.
Changing the Local Admin Account Permission
By default, the local_admin user cannot log in to the CLI Manager using SSH or log in to the Web UI. However, you can configure this access using the Change OS local_admin account permissions tool.
To change local_admin account permissions:
Login to the CLI Manager.
Navigate to Administration > Accounts and Passwords > Manage Passwords and Local-Accounts > Change OS local_admin account permissions.
In the dialog box displayed, in the Password field, enter the local_admin password.
Select OK.
Specify the permissions for the local_admin. You can either select SSH Access, Web-Interface Access, or both.
Select OK.
Changing Service Accounts Passwords
The Service Account users are service_admin and service_viewer. They are used for internal operations of components that do not support LDAP, such as the Management Server internal users and the Management Server Postgres database. You cannot log in to the Appliance Web UI, Reports Management (for ESA), or the CLI Manager using service account users. Since service accounts are internal OS accounts, they must be modified only in special cases.
To change service accounts:
Login to the CLI Manager.
Navigate to Administration > Accounts and Passwords > Manage Passwords and Local-Accounts > Manage internal ‘Service-Accounts’.
In the Account name and Account password fields, enter the Administrative user name and password.
Select OK.
In the dialog box displayed, in the Admin Service Account section, in the New password field, enter the new admin service account password.
In the Confirm field, re-enter the new admin service account password.
In the Viewer Service Account section, in the New password field, enter the new viewer service account password.
In the Confirm field, re-enter the new viewer service account password.
Select OK.
Alternatively, in the Service Account details dialog box, click Generate-Random to generate the new passwords randomly, and then select OK.
Managing Local OS Users
The Manage local OS users option provides the ability to create users that need direct OS shell access. These users are allowed to perform non-standard functions, such as scheduling remote operations, running backup agents, or running health monitoring. This option also lets you manage passwords and permissions for the dpsdbuser, which is available by default when ESA is installed.
The password restrictions for OS users are as follows:
- For all OS users, you cannot repeat the last 10 passwords used.
- If an OS user signs in three times with an incorrect password, the account is locked for five minutes. You can unlock the account by providing the correct credentials after five minutes. If an incorrect password is provided in the subsequent sign-in attempt, the account is locked again for five minutes.
To manage local OS users:
Login to the CLI Manager.
Navigate to Administration > Accounts and Passwords > Manage Passwords and Local-Accounts > Manage local OS users.
Enter the root password and select OK.
In the dialog box displayed, select Add to add a new user or select an existing user as explained in following steps.
Select Add to create a new local OS user.
In the dialog box displayed, in the User name and Password fields, enter a user name and password for the new user. The & character is not supported in the Username field.
In the Confirm field, re-enter the password for the new user.
Select OK.
Select an existing user from the displayed list.
- You can select one of the following options from the displayed menu.
Table: User Options
Option | Description | Procedure |
---|---|---|
Check password | Validates the entered password. | In the dialog box displayed, enter the password for the local OS user. The Validation succeeded message appears. |
Update password | Changes the password for the user. | In the dialog box displayed, in the Old password field, enter the old password for the local OS user (this step is optional). In the New Password field, enter the new password for the local OS user. In the Confirm field, re-enter the new password. |
Update shell | Defines shell access for the user. | In the dialog box displayed, select one of the following options: No login access (/bin/false), Linux Shell (/bin/bash), or Custom. Note: The default shell is set to No login access (/bin/false). |
Toggle SSH access | Sets SSH access for the user. | Select the Toggle SSH access option and press ENTER to set SSH access to Yes. Note: The default is set to No when a user is created. |
Delete user | Deletes the local OS user and the related home directory. | Select the Delete user option and select Yes to confirm the selection. |
Select Close to exit.
4.6.4.4 - Working with Backup and Restore
Using the Backup/Restore Center tool, you can create backups of configuration files and settings. Use the backups to restore a stable configuration if changes have caused problems. Before the Backup Center dialog box appears, you are prompted to enter the root password. You can select from a list of packages to be backed up.
When you import files or configurations, ensure that each component is selected individually.
For more information about using backup and restore, refer here.
Exporting Data Configuration to Local File
Select the configurations to export to a local file. When you select Administration > Backup/Restore Center > Export data/configurations to a local file in the Backup Center screen, you will be asked to specify the packages to export. Before the Backup Center dialog box appears, you will be prompted to enter the root password.
Table: List of Appliance Specific Services
Services | Description | ESA | DSG |
---|---|---|---|
Appliance OS Configuration | Export the OS configuration (networking, passwords, and others) but not the security modules data. Note: In the OS configuration, the certificates component is classified as follows: | ✓ | ✓ |
Directory Server And Settings | Export the local directory server and authentication settings. | ✓ | ✓ |
Export Consul Configuration and Data | Export Consul configuration and data. | ✓ | ✓ |
Backup Policy-Management *2 | Export policy management configurations and data, such as policies, data stores, data elements, roles, certificates, keys, logs, and Key Store-specific files and certificates, among others, to a file. | ✓ | |
Backup Policy-Management Trusted Appliances Cluster *2 | Export policy management configurations and data, such as policies, data stores, data elements, roles, certificates, keys, and logs, among others, to a specific cluster node for a Trusted Appliances Cluster. Note: It is recommended to use this option with cluster export only. | ✓ | |
Backup Policy-Management Trusted Appliances Cluster without Key Store *1 | Export policy management configurations and data, such as policies, data stores, data elements, roles, certificates, keys, and logs, among others, but excluding the Key Store-specific files and certificates, to a specific cluster node for a Trusted Appliances Cluster. Note: This option excludes the backup of the Key Store-specific files and certificates. It is recommended to use this option with cluster export only. | ✓ | |
Policy Manager Web UI Settings | Export the Policy Management Web UI settings, which include the Delete permissions specified for content and audit logs. | ✓ | |
Export All PEP Server Configuration, Logs, Keys, Certs | Export the data (.db files, license, token elements, and so on), configuration files, keys, certificates, and log files. | | ✓ |
Export PEP Server Configuration Files | Export all PEP Server configuration files (.cfg). | | ✓ |
Export PEP Server Log Files | Export PEP Server log files (.log and .dat). | | ✓ |
Export PEP Server Key and Certificate Files | Export PEP Server key and certificate files (.bin, .crt, and .key). | | ✓ |
Export PEP Server Data Files | Export all PEP Server data files (.db), license, token elements, and log counter files. | | ✓ |
Application Protector Web Service | Export Application Protector Web Service configuration files. | | |
Export Storage and Share Configuration Files | Export all configuration files, including NFS, CIFS, FTP, iSCSI, and WebDAV. | | |
Export File Protector Configuration Files | Export all File Protector configuration files. | | |
Export ETL Jobs | Export all ETL job configuration files. | | |
Export Gateway Configuration Files | | | ✓ |
Export Gateway Log Files | | | ✓ |
Cloud Utility AWS | Exports Cloud Utility AWS CloudWatch configuration files. | ✓ | ✓ |
*1 Ensure that only one backup-related option is selected among the options Backup Policy-Management, Backup Policy-Management Trusted Appliances Cluster, and Backup Policy-Management Trusted Appliances Cluster without Key Store. The Backup Policy-Management option must be used to back up the data to a file. In this case, this backup file is used to restore the data to the same machine, at a later point in time.
*2 The Backup Policy-Management Trusted Appliances Cluster option must be used to replicate the data to a specific cluster node in the Trusted Appliances Cluster (TAC). This option excludes the backup of the metering data. It is recommended to use this option with cluster export only.
If you want to exclude the Key Store-specific files during the TAC replication, then use the Backup Policy-Management Trusted Appliances Cluster without Key Store option to replicate the data to a specific cluster node in the TAC. This option excludes the backup of the metering data and the Key Store-specific files and certificates. It is recommended to use this option with cluster export only.
For more information about the Backup Policy-Management Trusted Appliances Cluster option or the Backup Policy-Management Trusted Appliances Cluster without Key Store option, refer to the section **TAC Replication of Key Store-specific Files and Certificates** in the Protegrity Key Management Guide 9.1.0.0.
If the OS configuration export is selected, then only the network setting and passwords, among others, are exported. The data and configuration of the security modules are not included. This data is mainly used for replication or recovery.
Before you import the data, note the OS and network settings of the target machine. Ensure that you do not import the saved OS and network settings to the target machine as this creates two machines with the same IP address in your network.
If you need to import all appliance configuration and settings, then perform a full restore for the system configuration. The following will be imported:
- OS configuration and network
- SSH and certificates
- Firewall
- Services status
- Authentication settings
- File Integrity Monitor Policy and settings
To export data configurations to a local file:
Login to the CLI Manager.
Navigate to Administration > Backup/Restore Center.
Enter the root password and select OK.
The Backup Center dialog box appears.
From the menu, select the Export data/configurations to a local file option.
Select the packages to export and select OK.
In the Export Name field, enter the required export name.
In the Password field, enter the password for the backup file.
In the Confirm field, re-enter the specified password.
If required, enter a description for the file.
Select OK.
You can optionally save the logs for the export operation when the export is done:
Click the More Details button.
The export operation log will display.
Click the Save button to save the export log.
In the following dialog box, enter the export log file name.
Click OK.
Click Done to exit the More Details screen.
The newly created configuration file is saved in /products/exports. It can be accessed from the Exported Files and Logs menu in the CLI Manager, or from the Import tab on the Backup/Restore page in the Web UI.
The export log file can be accessed from the Exported Files and Logs menu in the CLI Manager, or from the Log Files tab on the Backup/Restore page in the Web UI.
Exporting Data/Configuration to Remote Appliance
You can export backup configurations to a remote appliance.
Important: When assigning a role to the user, ensure that the Can Create JWT Token permission is assigned to the role. If the Can Create JWT Token permission is unassigned to the role of the required user, then exporting data/configuration to a remote appliance fails. To verify the Can Create JWT Token permission, from the ESA Web UI navigate to Settings > Users > Roles.
Follow the steps in this scenario for a successful export of the backup configuration:
Login to the CLI Manager.
Navigate to Administration > Backup/Restore Center.
Enter the root password and select OK.
The Backup Center dialog box appears.
From the menu, select the Export data/configurations to a remote appliance(s) option and select OK.
From the Select file/configuration to export dialog box, select Current (Active) Appliance Configuration package to export and select OK.
In the following dialog box, select the packages to export and select OK.
Enter the password for this backup file.
Select the Import method.
For more information on each import method, select Help.
Type the IP address or hostname for the destination appliance.
Type the admin user credentials of the remote appliance and select Add.
In the information dialog box, press OK.
The Backup Center screen appears.
Exporting Appliance OS Configuration
When you import the appliance core configuration from another appliance, the second machine receives all the network settings, such as the IP address and default gateway, among others.
Do not import all network settings to another machine, since this creates two machines with the same IP address in your network. It is recommended to restart the appliance after receiving an appliance core configuration backup.
This option appears only when exporting to a file.
Importing Data/Configurations from a File
You can import (restore) data from a file if you need to restore a specific configuration that you previously saved. When you import files or configurations, ensure that each component is selected individually. During the data configuration import, you are asked to enter the file password set during the backup file creation. Export and import Insight certificates on the same ESA. If the configurations must be imported on a different ESA, then do not import certificates. For copying Insight certificates across systems, refer to Rotating Insight certificates.
To import data configurations from file:
Login to the CLI Manager.
Navigate to Administration > Backup/Restore Center.
Enter the root password and select OK.
The Backup Center dialog box appears.
From the menu, select the Import data/configurations from a file option and select OK.
In the following dialog box, select a file from the list which will be used for the configuration import.
Select OK.
In the following dialog box, enter the password for this backup file.
Select Import method.
Select OK.
In the information dialog box, select OK.
The Import Operation Has Been Completed Successfully message appears.
Consider a scenario where you import a policy management backup that includes the external Key Store data. If the external Key Store is not working, then the HubController service does not start after the restore process.
Select Done.
The Backup Center screen appears.
Reviewing Exported Files and Logs
You can review the exported files and logs.
To review exported files and logs:
Login to the CLI Manager.
Navigate to Administration > Backup/Restore Center.
Enter the root password and select OK.
The Backup Center dialog box appears.
From the menu, select the Exported Files and Logs option.
In the Exported Files and Logs dialog box, select Main Logfile to view the logs.
Select Review.
To view the Operation Logs or Exported Files, select it from the list of available exported files.
Select Review.
Select Back to return to the Backup Center dialog box.
Deleting Exported Files and Logs
To delete exported files and logs:
Login to the CLI Manager.
Navigate to Administration > Backup/Restore Center.
Enter the root password and select OK.
The Backup Center dialog box appears.
From the menu, select the Exported Files and Logs option.
In the Exported Files and Logs dialog box, select the Operation Logs and Exported Files.
Select Delete.
To confirm the deletion, select Yes.
Alternatively, to cancel the deletion, select No.
Backing Up/Restoring Local Backup Partition
The backup is created on the second partition of the local machine.
Thus, for example, if you make a full OS backup in PVM mode (both the appliance and the Xen Server are set to PVM), enable HVM mode, and then reboot the appliance, you will not be able to boot the system in system-restore mode.
XEN Virtualization
If you are using virtualization and have backed up the OS in HVM/PVM mode, then you can restore only in the mode in which you backed it up (refer here).
Backing up Appliance OS from CLI
It is recommended to perform a full OS backup before any important system changes, such as an appliance upgrade or creating a cluster.
To back up the appliance OS from CLI Manager:
Login to the Appliance CLI Manager.
Proceed to Administration > Backup/Restore Center.
The Backup Center screen appears.
Select Backup all to a local backup-partition.
The following screen appears.
Select OK.
The Backup Center screen appears and the OS backup process is initiated.
Login to the Appliance Web UI.
Navigate to Dashboard.
The following message appears after the OS backup completes.
CAUTION: The Restore from backup-partition option appears in the Backup Center screen, after the OS backup is complete.
Restoring Appliance OS from Backup
To restore the appliance OS from backup:
Login to the Appliance CLI Manager.
Navigate to the Administration > Reboot and Shutdown > Reboot.
The Reboot screen appears.
Enter the reason and select OK.
Enter the root password and select OK.
The appliance reboots and the following screen appears.
Select System-Restore.
The Welcome to System Restore Mode screen appears.
Select Initiate OS-Restore Procedure.
The OS restore procedure is initiated.
4.6.4.5 - Setting Up the Email Server
You can set up an email server that supports the notification features in Protegrity Reports. The Protegrity Appliance Email Setup tool guides you through the setup.
Keep the following information available before the setup process:
- SMTP server details.
- SMTP user credentials.
- Contact email account: This email address is used by the Appliance to send user notifications.
Remember to save the email settings before you exit the Email Setup tool.
To set up the Email Server:
Login to the Appliance CLI Manager.
Navigate to Administration > Email (SMTP) Settings.
The Protegrity Appliance Email Setup wizard appears.
Enter the root password and select OK.
The Protegrity Appliance Email Setup screen appears.
Select OK to continue. You can select Cancel to skip the Email Setup.
In the SMTP Server Address field, type the address to the SMTP server and the port number that the mail server uses.
For SMTP Server, the default port is 25.
In the SMTP Username field, enter the name of the user in the mail server.
Protegrity Reporting requires a full email address in the Username.
In the SMTP Password and Confirm Password fields, enter the password of the mail server user. SMTP Username/Password settings are optional. If your SMTP does not require authentication, then you can leave these fields empty.
In the Contact address field, enter the email recipient address.
In the Host identification field, enter the name of the computer hosting the mail server.
Select OK.
The tool tests the connectivity and the Secured SMTP screen appears.
Specify the encryption method. Select StartTLS or disable encryption. SSL/TLS is not supported.
Click OK.
In the SMTP Settings screen that appears, you can:
To… | Follow these steps… |
---|---|
Send a test email | |
Save the settings | |
Change the settings | Select Reconfigure. The SMTP Configuration screen appears. |
Exit the tool without saving | |
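If you want to verify the SMTP details before entering them in the Email Setup tool, a quick external check can be scripted with Python's standard smtplib module. This is only an illustrative sketch run from a separate workstation, not part of the appliance; the server address, port, credentials, and email addresses below are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "SMTP connectivity test"
msg["From"] = "appliance-notifications@example.com"   # placeholder contact address
msg["To"] = "recipient@example.com"                    # placeholder recipient
msg.set_content("Test email to verify the SMTP settings.")

# Placeholder SMTP server details; port 25 is the default mentioned above
with smtplib.SMTP("smtp.example.com", 25) as server:
    server.starttls()                                   # matches the StartTLS option in the tool
    server.login("smtp-user@example.com", "password")   # skip if your SMTP server does not require authentication
    server.send_message(msg)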
4.6.4.6 - Working with Azure AD
Azure Active Directory (Azure AD) is a cloud-based identity and access management service. It allows access to external (Azure portal) and internal resources (corporate appliances). Azure AD manages your cloud and on-premise applications and protects user identities and credentials.
When you subscribe to Azure AD, an Azure AD tenant is automatically created. After the Azure AD tenant is created, register your application in the App Registrations module. This acts as an endpoint for the appliance to connect to the tenant.
Using the Azure AD configuration tool, you can:
- Enable the Azure AD Authentication and manage user access to the appliance.
- Import the required users or groups to the appliance, and assign specific roles to them.
4.6.4.6.1 - Configuring Azure AD Settings
Before configuring Azure AD Settings on the appliance, you must have the following values that are required to connect the appliance with the Azure AD:
- Tenant ID
- Client ID
- Client Secret or Thumbprint
For more information about the Tenant ID, Client ID, Authentication Type, and Client Secret/Thumbprint, search for the text Register an app with Azure Active Directory on Microsoft’s Technical Documentation site at: https://learn.microsoft.com/en-us/docs/
The following is the list of API permissions that must be granted:
- Group.Read.All
- GroupMember.Read.All
- User.Read
- User.Read.All
For more information about configuring the application permissions in Azure AD, refer to https://learn.microsoft.com/en-us/graph/auth-v2-service?tabs=http
To configure Azure AD settings:
On the CLI Manager, navigate to Administration > Azure AD Configuration.
Enter the root password.
The Azure AD Configuration dialog box appears.
Select Configure Azure AD Settings.
The Azure AD Configuration screen appears.
Enter the information for the following fields.
Table: Azure AD Settings
Setting | Description |
---|---|
Set Tenant ID | Unique identifier of the Azure AD instance. |
Set Client ID | Unique identifier of an application created in Azure AD. |
Set Auth Type | Select one of the following authentication types: SECRET indicates password-based authentication, where the secrets are symmetric keys that both the client and the server must know. CERT indicates certificate-based authentication, where the certificate holds the private key used by the client and the server validates the certificate using the public key. |
Set Client Secret/Thumbprint | The client secret/thumbprint is the password of the Azure AD application. If the Auth Type selected is SECRET, then enter the Client Secret. If the Auth Type selected is CERT, then enter the Client Thumbprint. |
For more information about the Tenant ID, Client ID, Authentication Type, and Client Secret/Thumbprint, search for the text Register an app with Azure Active Directory on Microsoft’s Technical Documentation site at: https://learn.microsoft.com/en-us/docs/
Click Test to check the configuration/settings.
The message Successfully Done appears.
Click OK.
Click Apply to apply and save the changes.
The message Configuration saved successfully appears.
Click OK.
4.6.4.6.2 - Enabling/Disabling Azure AD
Using the Enable/Disable Azure AD option, you can enable or disable the Azure AD settings. You can import users or groups and assign roles when you enable the Azure AD settings.
4.6.4.7 - Accessing REST API Resources
User authentication is the process of identifying someone who wants to gain access to a resource. A server contains protected resources that are accessible only to authorized users. When you want to access any resource on the server, the server uses different authentication mechanisms to confirm your identity.
There are different mechanisms for authenticating and authorizing users in a system. In the ESA, REST API services are only accessible to authorized users. You can authorize or authenticate users using one of the following authentication mechanisms:
- Basic Authentication with username and password
- Client Certificates
- Tokens
4.6.4.7.1 - Using Basic Authentication
In the Basic Authentication mechanism, you provide only the user credentials to access protected resources on the server. You provide the user credentials in an authorization header to the server. If the credentials are accurate, then the server provides the required response to access the APIs.
If you want to access the REST API services on ESA, then the IP address of ESA with the username and password must be provided. The ESA matches the credentials with the LDAP or AD. On successful authentication, the roles of the users are verified. The following conditions are checked:
- If the role of the user is Security Officer, then the user can run GET, POST, and DELETE operations on the REST APIs.
- If the role of the user is Security Viewer, then the user can only run GET operation on the REST APIs.
When Basic Authentication is disabled, a list of APIs is affected. For more information about the list of APIs, refer here.
The following Curl snippet provides an example to access an API on ESA.
curl -i -X <METHOD> "https://<ESA IP address>:8443/<path of the API>" -d "loginname=<username>&password=<password>"
This command uses an SSL connection. If the server certificates are not configured on ESA, you can append --insecure to the curl command.
For example,
curl -i -X <METHOD> "https://<ESA IP address>:8443/<path of the API>" -d "loginname=<username>&password=<password>" --insecure
You must provide the username and password every time you access the REST APIs on ESA.
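The same call can also be scripted, for example with Python's requests library. The sketch below simply mirrors the curl example, using GET as the method; the ESA address, API path, and credentials are placeholders, and verify=False corresponds to the --insecure flag, so use it only when the server certificates are not configured.
import requests

esa_ip = "192.168.1.x"                      # placeholder ESA address
url = f"https://{esa_ip}:8443/<path of the API>"

# The loginname and password form fields mirror the curl example above
response = requests.get(
    url,
    data={"loginname": "<username>", "password": "<password>"},
    verify=False,                           # equivalent of curl --insecure
)
print(response.status_code, response.text)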
4.6.4.7.2 - Using Client Certificates
The Client Certificate authentication mechanism is a secure way of accessing protected resources on a server. In the authorization header, you provide the details of the client certificate. The server verifies the certificate and allows you to access the resources. When you use certificates as an authentication mechanism, then the user credentials are not stored in any location.
Note: As a security feature, it is recommended to use the client certificates that are protected with a passphrase.
On ESA, the Client Certificate authentication includes the following steps:
- In the authorization header, you must provide the details, such as, client certificate, client key, and CA certificate.
- The ESA retrieves the name of the user from the client certificate and authenticates it with the LDAP or AD.
- After authenticating the user, the role of that user is validated:
- If the role of the user is Security Officer, then the user can run read and write operations on the REST APIs.
- If the role of the user is Security Viewer, then the user can only run read operations on the REST APIs.
- On successful authentication, you can utilize the API services.
The following Curl snippet provides an example to access an API on ESA.
curl -k https://<ESA IP Address>/<path of the API> -X <METHOD> --key <client.key> --cert <client.pem> --cacert <CA.pem> -v
You must provide your certificate every time you access the REST APIs on ESA.
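The note above recommends passphrase-protected client certificates. If your client key is not yet protected, a passphrase can typically be added with OpenSSL; the following command is illustrative only, and the file names are placeholders rather than Protegrity-specific requirements.
# Illustrative only: add an AES-256 passphrase to an existing client key (you are prompted for the passphrase).
openssl rsa -aes256 -in client.key -out client_protected.key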
4.6.4.7.3 - Using JSON Web Token (JWT)
Tokens are reliable and secure mechanisms for authorizing and authenticating users. They are stateless objects created by a server that contain information to identify a user. Using a token, you can gain access to the server without having to provide the credentials for every resource. You request a token from the server by providing valid user credentials. On successive requests to the server, you provide the token as a source of authentication instead of providing the user credentials.
There are different mechanisms for authenticating and authorizing users using tokens. Authentication using JSON Web Tokens (JWT) is one of them. The JWT is an open standard that defines a secure way of transmitting data between two entities as JSON objects.
One of the common uses of JWT is as an API authentication mechanism that allows you to access the protected API resources on your server. You present the JWT generated from the server to access the protected APIs. The JWT is signed using a secret key. Using this secret key, the server verifies the token provided by the client. Any modification to the JWT results in an authentication failure. Information about tokens is not stored on the server.
Only a privileged user can create a JWT. To create a token, ensure that the Can Create JWT Token permission/privilege is assigned to the user role.
The JWT consists of the following three parts:
- Header: The header contains the type of token and the signing algorithm, such as HS512, HS384, or HS256.
- Payload: The payload contains the information about the user and additional data.
- Signature: Using a secret key, you create the signature to sign the encoded header and payload.
The header and payload are encoded using the Base64Url encoding. The following is the format of JWT:
<encoded header>.<encoded payload>.<signature>
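As an illustration of this structure, the first two parts of a token can be decoded with standard command-line tooling. The token below is a hypothetical example constructed for this illustration; it is not a token issued by an ESA.
# Hypothetical token used only to show the structure; the signature placeholder is not decodable.
TOKEN='eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJhZG1pbiJ9.<signature>'
# Decode the Base64Url-encoded header and payload.
echo "$TOKEN" | cut -d '.' -f 1 | base64 -d; echo
echo "$TOKEN" | cut -d '.' -f 2 | base64 -d; echo
The first command prints {"alg":"HS256","typ":"JWT"} and the second prints {"sub":"admin"}.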
Implementing JWT
On Protegrity appliances, you must have the required authorization to access the REST API services. The following figure illustrates the flow of JWT on the appliances.
As shown in the figure, you log in with your credentials to access the API. The credentials are validated against a local or external LDAP, and a check is performed to verify API access for the username. After the credentials are validated, a JWT is created and returned to the user as an authentication mechanism. Because the JWT is digitally signed, the information it carries can be verified and trusted. JWTs can be signed with a secret using the HMAC algorithm or with a private key pair using RSA. When you want to access a protected resource on the server, you send the JWT with the request in the headers.
Working with the Secret Key
The JWT is signed using a private secret key and sent to the client to ensure that the message is not changed during transmission. The secret key encodes the token sent to the client and is known only to the server, which uses it to generate new tokens. The client presents the token to access the APIs on the server, and the server validates the token received from the client using the secret key.
The secret key is generated when you install or upgrade your appliance. You can change the secret key from the CLI Manager. This secret key is stored in the appliance in a scrambled form.
For more information about setting the secret key, refer to section Configuring JWT.
For appliances in a TAC, the secret key is shared between appliances in the cluster. Using the export-import process for a TAC, secret keys are exported and imported between the appliances.
If you want to export the JWT configuration to a file or another machine, ensure that you select the Appliance OS Configuration option, in the Export screen. Similarly, if you want to import the JWT configurations between appliances in a cluster, from the Cluster Export Wizard screen, select the Appliances JWT Configuration check box, under Appliance OS Configuration.
For example, consider ESA 1 and ESA 2 in a TAC setup.
- JWT is created on ESA 1 for appliance using a secret key.
- ESA 1 and ESA 2 are added to TAC. The secret key of ESA 1 is shared with ESA 2.
- Client application requests API access from ESA 1. A JWT is generated and shared with the client application. The client accesses the APIs available in ESA 1.
- To access the APIs of ESA 2, the same token generated by ESA 1 is applicable for authentication.
Configuring JWT
You can configure the encoding algorithm, secret key, and JWT token expiry.
To configure the JWT settings:
On the CLI Manager, navigate to Administration > JWT Configuration.
A screen to enter the root credentials appears.
Enter the root credentials and select OK.
The JWT Settings screen appears.
Select Set JWT Algorithm to set the algorithm for validating a token.
The Set JWT Algorithm screen appears.
Select one of the following algorithms:
- HS512
- HS384
- HS256
Select OK.
Select Set JWT Secret to set the secret key.
The Set JWT Secret screen appears.
Enter the secret key in the New Secret and Confirm Secret fields.
Select OK.
Select Set Token Expiry to set the token expiry period.
In the Set Token Expiry field, enter the token expiry value and select OK.
Select Set Token Expiry Unit to set the unit for token expiry value.
Select second(s), minute(s), hour(s), day(s), week(s), month(s), or year(s) option and select OK.
Select Done.
Refreshing JWT
Tokens are valid for a certain period. When a token expires, you must request a new token by providing the user credentials. Instead of providing your credentials on every request, you can extend your access to the server resources by refreshing the token.
In the refresh token process, you request a new token from the server by presenting your current token instead of the username and password. The server checks the validity of the token to ensure that the current token is not expired. After the validity check is performed, a new token is issued to you for accessing the API resources.
In the Protegrity appliances, you can refresh the token by executing the REST API for token refresh.
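As an illustration of the flow described above, the following curl commands show a token being presented instead of credentials and then being refreshed. The API paths and the Bearer header format are placeholders based on common JWT usage; refer to the Protegrity REST API reference for the actual endpoints and header format.
# Present the JWT instead of the username and password (placeholder path and header format).
curl -i -X GET "https://<ESA IP address>:8443/<path of the API>" -H "Authorization: Bearer <JWT>" --insecure
# Request a new token by presenting the current, non-expired token to the token refresh API (placeholder path).
curl -i -X POST "https://<ESA IP address>:8443/<path of the token refresh API>" -H "Authorization: Bearer <JWT>" --insecure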
4.6.4.8 - Securing the GRand Unified Bootloader
When a system is powered on, it goes through a boot process before loading the operating system, where an initial set of operations are performed for the system to function normally. The boot process consists of different stages, such as, checking the system hardware, initializing the devices, and loading the operating system.
When the system is powered on, the BIOS performs the Power-On Self-Test (POST) process to initialize the hardware devices attached to the system. It then executes the Master Boot Record (MBR) that contains information about the disks and partitions. The MBR then executes the GRand Unified Bootloader (GRUB).
The GRUB is a bootloader that identifies the file systems and loads boot images. The GRUB then passes control to the kernel for loading the operating system. The entries in the GRUB menu can be edited by pressing e or c to access the GRUB command line. Some of the operations that you can perform using the GRUB are listed below:
- Loading kernel images.
- Switching kernel images.
- Logging into single user mode.
- Recovering root password.
- Setting default boot entries.
- Initiating boot sequences.
- Viewing devices and partitions, and so on.
In the Protegrity appliances, GRUB version 2 (GRUB 2) is used for loading the kernel. If the GRUB menu settings are modified by an unauthorized user with malicious intent, it can pose a threat to the system. Additionally, as per the CIS Benchmark, it is recommended to secure the boot settings. Thus, to enhance the security of the Protegrity appliances, the GRUB menu can be protected by setting a username and password.
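For background, on a generic GRUB 2 system this protection corresponds to configuration entries such as the following. The snippet is illustrative only; on Protegrity appliances, the CLI Manager generates the equivalent entries for you, as described in the following sections.
# Generic GRUB 2 superuser protection (illustrative; the username and hash are placeholders).
set superusers="<username>"
password_pbkdf2 <username> grub.pbkdf2.sha512.10000.<hashed password>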
- This feature is available only for on-premise installations.
- It is recommended to reset the credentials at regular intervals to secure the system.
The following sections describe setting user credentials for accessing the GRUB menu on the appliance.
4.6.4.8.1 - Enabling the Credentials for the GRUB Menu
You can set a username and password for the GRUB menu from the appliance CLI Manager.
The user created for the GRUB menu is neither a policy user nor an ESA user.
Note: It is recommended you ensure a backup of the system has completed before performing the following operation.
To enable access to GRUB menu:
Login to the appliance CLI manager as an administrative user.
Navigate to Administration > GRUB Credentials Settings.
The screen to enter the root credentials appears.
Enter the root credentials and select OK.
The GRUB Credentials screen appears.
Select Enable and press ENTER.
The following screen appears.
Enter a username in the Username text box.
The requirements for the Username are as follows:
- It should contain a minimum of three and maximum of 16 characters
- It should not contain numbers or special characters
Enter a password in the Password and Re-type Password text boxes.
The requirements for the Password are as follows:
- It must contain at least eight characters
- It must contain a combination of letters, numbers, and printable characters
Select OK and press ENTER.
A message Credentials for the GRUB menu has been set successfully appears.
Restart the system.
The following screen appears.
Press e or c.
The screen to enter the credentials appears.
Enter the credentials provided in steps 4 and 5 to modify the GRUB menu.
4.6.4.8.2 - Disabling the GRUB Credentials
You can disable the username and password that is set for accessing the GRUB menu. When you disable access to the GRUB, then the username and password that are set get deleted. You must enable the GRUB Credentials Settings option and set new credentials to secure the GRUB again.
To disable access to the GRUB menu:
Login to the appliance CLI Manager as an administrative user.
Navigate to Administration > GRUB Credentials Settings.
The screen to enter the root credentials appears.
Enter the root credentials and select OK.
The GRUB credentials screen appears.
Select Disable and press ENTER.
A message Credentials for the GRUB menu has been disabled appears.
4.6.4.9 - Working with Installations and Patches
Using the Installations and Patches menu, you can install or uninstall products. You can also view and manage patches from this menu.
4.6.4.9.1 - Add/Remove Services
Using Add/Remove Services tool, you can install the necessary products or remove already installed ones.
To install services:
Login to the Appliance CLI Manager.
Navigate to Administration > Installations and Patches > Add/Remove Services.
Enter the root password to execute the install operation and select OK.
Select Install applications and select OK.
Select products to install and select OK.
- If a new product is selected, the installation process starts.
- If the product is already installed, then refer to step 6.
Select an already installed product to upgrade, uninstall, or reinstall, and select OK.
The Package is already installed screen appears. This step is not applicable for the DSG appliance.
Select any one of the following options:
Option | Description |
---|---|
Upgrade | Installs a newer version of the selected product. |
Uninstall | Removes the selected product. |
Reinstall | Removes and installs the product again. |
Cancel | Returns to the Administration menu. |
Select OK.
4.6.4.9.2 - Uninstalling Products
To uninstall products:
Login to the Appliance CLI Manager.
Proceed to Administration > Installations and Patches > Add or Remove Services.
Enter the root password to execute the uninstall operation and select OK.
Select Remove already installed applications and select OK.
The Select products to uninstall screen appears.
Select the necessary products to uninstall and select OK.
The selected products are uninstalled.
4.6.4.9.3 - Managing Patches
You can install and manage your patches from the Patch Management screen.
It allows you to perform the following tasks.
Option | Description |
---|---|
List installed patches | Displays the list of all the patches which are installed in the system |
Install a patch | Allows you to install the patches |
Display log | Displays the list of logs for the patches |
Installing Patches
To install a patch:
Login to the Appliance CLI Manager.
Navigate to Administration > Patch Management.
Enter the root password and select OK.
The Patch Management screen appears.
Select Install a patch and select OK.
The Install Patch screen appears.
Select the required patch and select Install.
Viewing Patch Information
To view information of a patch:
Login to the Appliance CLI Manager.
Navigate to Administration > Patch Management.
Enter the root password and select OK.
Select Install a patch and select OK.
The Install Patch screen appears.
Select the required patch and select More Info.
The information for the selected patch appears.
Select OK.
4.6.4.10 - Managing LDAP
LDAP is an open industry standard application protocol that is used to access and manage directory information over IP. You can consider it as a central repository of usernames and passwords, thus providing applications and services the flexibility to validate users by connecting with the LDAP.
The security system of the Appliance distinguishes between two types of users:
- End users with specific access or no access to sensitive data. These users are managed through the User Management screen in the Web UI. For more information about user management, refer here.
- Administrative users who manage the security policies, for example, "Admin" users who grant or deny access to end users.
In this section, the focus is on managing administrative users. The Administrative users connect to the management interfaces in Web UI or CLI, while the end users connect to the specific security modules they have been allowed access to. For example, a database table may need to be accessed by the end users, while the security policies for access to the table are specified by the Administrative users.
LDAP Tools available in the Administration menu include three tools explained in the following table.
Tool | Description |
---|---|
Specify LDAP Server | Reconfigure all client-side components to use a specific LDAP. To authenticate users, the data security platform supports three modes for integration with directory services: Protegrity LDAP Server, Proxy Authentication, and Local LDAP Server. - Protegrity LDAP: In this mode, all administrative operations such as policy management, key management, etc. are handled by users that are part of the Protegrity LDAP. This mode can be used to configure or authenticate with either local or remote appliance product. - Proxy Authentication: In this mode, you can import users from an external LDAP to ESA. ESA is responsible for authorization of users, while the external LDAP is responsible for authentication of users. - Reset LDAP Server Settings: In this mode, an administrative user can reset the configuration to the default configuration using admin credentials. |
Configure Local LDAP settings | Configure your LDAP to be accessed from the other machines. |
Local LDAP Monitor | Examine how many LDAP operations per second are running. |
4.6.4.10.1 - Working with the Protegrity LDAP Server
Every appliance includes an internal directory service. This service can be utilized by other appliances for user authentication.
For example, a DSG instance might utilize the ESA LDAP for user authentication. In such cases, you can configure the LDAP settings of the DSG in the Protegrity LDAP Server screen. In this screen, you can specify the IP address of the ESA with which you want to connect.
You can add IP addresses of multiple appliances to enable fault tolerance. In this case, if the connection to the first appliance fails, the connection is transferred to the next appliance in the list.
If you are adding multiple appliances in the LDAP URI, ensure that the values of the Bind DN, Bind Password, and Base DN are the same for all the appliances in the list.
To specify Protegrity LDAP server:
Login to the Appliance CLI Manager.
Navigate to Administration > Specify LDAP Server.
Enter the root password and select OK.
In the LDAP Server Type screen, select Protegrity LDAP Server and select OK.
The following screen appears.
Enter information for the following fields.
Table 1. LDAP Server Settings
Setting | Description |
---|---|
LDAP URI | Specify the IP address of the LDAP server you want to connect to in the following format: ldap://host:port. You can configure to connect to the Protegrity Appliance LDAP. For example, ldap://192.168.3.179:389. For local LDAP, enter the following IP address: ldap://127.0.0.1:389. If you specify multiple appliances, ensure that the IP addresses are separated by the space character. For example, ldap://192.1.1.1 ldap://10.1.0.0 ldap://127.0.0.1:389 |
Base DN | The LDAP Server Base distinguished name. For example, ESA LDAP Base DN: dc=esa,dc=protegrity,dc=com. |
Group DN | Distinguished name of the LDAP Server group container. For example, ESA LDAP Group DN: ou=groups,dc=esa,dc=protegrity,dc=com. |
Users DN | Distinguished name of the user container. For example, ESA LDAP Users DN: ou=people,dc=esa,dc=protegrity,dc=com. |
Bind DN | Distinguished name of the LDAP Bind User. For example, ESA LDAP Bind User DN: cn=admin,ou=people,dc=esa,dc=protegrity,dc=com. |
Bind Password | The password of the specified LDAP Bind User. If you modify the bind user password, ensure that you use the Specify LDAP Server tool to update the changes in the internal LDAP. |
Bind User | The bind user account allows you to specify the user credentials used for LDAP communication. This user should have full read access to the LDAP entries in order to obtain accounts/groups/permissions. If you are using the internal LDAP and you change the bind username/password using the Change a directory account option, then you must update the actual LDAP user. Make sure that a user with the specified username/password exists. Run the Specify LDAP Server tool with the new password to update all the products with the new password. Refer to section Protegrity LDAP Server for details. |
Click Test to test the connection.
If the connection is established, then a Successfully Done message appears.
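Optionally, the same settings can be sanity-checked from the OS Console with the standard OpenLDAP client, assuming it is available on the appliance. The values below are the documentation examples from the table above; replace them with your own settings.
# Illustrative connectivity check using the example Bind DN and Base DN (you are prompted for the bind password).
ldapsearch -x -H ldap://127.0.0.1:389 -D "cn=admin,ou=people,dc=esa,dc=protegrity,dc=com" -W -b "dc=esa,dc=protegrity,dc=com" -s base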
4.6.4.10.2 - Changing the Bind User Password
The following section describes the steps to change the password for the ldap_bind_user using the CLI Manager.
To change the ldap_bind_user password:
Login to the Appliance CLI Manager.
Navigate to Administration > Specify LDAP server/s.
Enter the root password and select OK.
Select Reset LDAP Server settings and select OK.
The following screen appears.
Enter the admin username and password and select OK.
The following screen appears.
Select OK.
The following screen appears.
Select Manually enter a new password and select OK.
The following screen appears.
Enter the new password, confirm it, and select OK.
The following screen appears.
Select OK.
The password is successfully changed.
4.6.4.10.3 - Working with Proxy Authentication
Simple Authentication and Security Layer (SASL) is a framework that provides authentication and data security for Internet protocols. The data security layer offers data integrity and confidentiality services. It provides a structured interface between protocols and authentication mechanisms.
SASL enables ESA to separate authentication and authorization of users. The implementation is such that when users are imported, a user with the same name is recreated in the internal LDAP. When the user accesses the data security platform, ESA authorizes the user and communicates with the external LDAP for authenticating the user. This implementation ensures that organizations are not forced to modify their LDAP configuration to accommodate the data security platform. SASL is referred to as Proxy authentication in ESA CLI and Web UI.
To enable proxy authentication:
Login to the Appliance CLI Manager.
Navigate to Administration > LDAP Tools > Specify LDAP Server.
Enter the root password and select OK.
Select Set Proxy Authentication.
Specify the LDAP Server settings for proxy authentication with the external LDAP as shown in the following figure.
For more information about the LDAP settings, refer to Proxy Authentication Settings.
Select Test to test the settings provided. When Test is selected, the ESA verifies whether the connection to the external LDAP works, as per the Proxy Authentication settings provided.
The Bind Password is required when Bind DN is provided message appears.
Select OK.
Enter the LDAP user name and password provided as the bind user.
You can provide the username and password of any other LDAP user, as long as the attribute specified in the LDAP Filter field exists for both the bind user and that user.
A Testing Proxy Authentication-Completed successfully message appears.
Select OK in the following message screen.
The following confirmation message appears.
Select Apply to apply the settings. In ESA CLI, only one user is allowed to be imported. This user is granted admin privileges, such that importing users and managing users can be performed by the user in the User Management screen. The User Management Web UI is used to import users from the external LDAP.
In the Select user to grant administrative privileges screen, select a user and confirm selection.
In the Setup administrator privileges screen, enter the ESA admin user name and password and select OK.
The following message appears.
Navigate to Administration > Services to verify that the Proxy Authentication Service is running.
4.6.4.10.4 - Configuring Local LDAP Settings
The local LDAP settings are enabled on port 389 by default.
To specify local LDAP server configuration:
Login to the Appliance CLI Manager.
Navigate to Administration > Configure local LDAP settings.
Enter the root password and select OK.
The following screen appears.
In the LDAP listener IP address field, enter the LDAP listener IP address for local access. By default, it is 127.0.0.1.
In the LDAPS (SSL) listener IP address field, enter the LDAPS SSL listener IP address for remote access. It is 0.0.0.0 or a specific valid address for your remote LDAP directory.
Select OK.
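To confirm the listener addresses after the change, you can use the netstat command that is also used elsewhere in this guide, assuming it is run as root from the OS Console.
# Confirm which addresses and ports the local LDAP service (slapd) is listening on.
netstat -tunlp | grep slapd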
4.6.4.10.5 - Monitoring Local LDAP
The Local LDAP Monitor tool allows you to examine, in real time, how many LDAP operations per second are currently running, which is useful for enhancing performance. You can use this tool to perform the following tasks:
- Check LDAP Connectivity for LDAP Bind and LDAP Search.
- Modify or optimize LDAP cache, threading, and memory settings to improve performance and remove bottlenecks.
- Measure “number of changes” and “last modified date and time” on the LDAP server, which can be useful, for example, for verifying export/import operations.
4.6.4.10.6 - Optimizing Local LDAP Settings
When the Local LDAP receives excessive requests, the requests are cached. However, if the cache is overloaded, the LDAP becomes unresponsive. From v9.1.0.3, a standard set of values for the cache that is required for optimal handling of the LDAP requests is set in the system. After you upgrade to v9.1.0.3, you can tune the cache parameters for the Local LDAP configuration. The default values for the cache parameters are shown in the following list.
- The slapd.conf file in the /etc/ldap directory contains the following cache values:
- cachesize 10000 (10,000 entries)
- idlcachesize 30000 (30,000 entries)
- dbconfig set_cachesize 0 209715200 0 (200 MB)
- The DB_CONFIG file in the /opt/ldap/db* directory contains the following cache values:
- set_cachesize 0 209715200 0 (200 MB)
Based on the setup and the environment in the organization, you can choose to increase the parameters.
Ensure that you back up the files before editing the parameters.
- On the CLI Manager, navigate to Administration > OS Console.
- Edit the values for the required parameters.
- Restart the slapd service using the /etc/init.d/slapd restart command. A sketch of these steps is shown below.
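The following is a minimal sketch of these steps from the OS Console. The new cache value is illustrative only; size it for your environment.
# Back up the configuration before editing it.
cp /etc/ldap/slapd.conf /etc/ldap/slapd.conf.bak
# Example: raise the entry cache from the default 10,000 to 20,000 entries (illustrative value).
sed -i 's/^cachesize 10000/cachesize 20000/' /etc/ldap/slapd.conf
# Restart the LDAP service for the change to take effect.
/etc/init.d/slapd restart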
4.6.4.11 - Rebooting and Shutting down
You can reboot or shut down your appliance if necessary using Administration > Reboot and Shutdown. Make sure the Data Security Platform users are aware that the system is being rebooted or turned off and no important tasks are being performed at this time.
Cloud platforms and power off
For cloud platforms, it is recommended to shut down or power off the appliance from the CLI Manager or the Appliance Web UI. On cloud platforms, such as Azure, AWS, or GCP, the appliance runs on an instance. Powering off the instance from the cloud console might not shut down the appliance gracefully.
4.6.4.12 - Accessing the OS Console
You can access the OS Console using Administration > OS Console. Root user credentials are required to access the OS Console.
If you have System Monitor settings enabled in the Preferences menu, then the OS console will display the System Monitor screen upon entering the OS console.
To enable the System Monitor setting:
Login to the Appliance CLI Manager.
Navigate to Preferences.
Enter the root password and select OK.
The Preferences screen appears.
Select Show System-Monitor on OS-Console.
Press Select.
Select Yes and select OK.
Select Done.
4.6.5 - Working with Networking
Networking Management allows configuration of the appliance network settings, such as the host name, default gateway, and name servers. You can also configure SNMP settings, network bind services, and the network firewall.
From the Appliance CLI Manager, navigate to Networking to manage your network settings.
The following figure shows the Networking Management screen.
Option | Description |
---|---|
Network Settings | Customize the network configuration settings for your appliance. |
SNMP Configuration | Allow a remote machine to query different performance status of the appliance, such as start the service, set listening address, show or set community string, or refresh the service. |
Bind Services/ Addresses | Specify the network address or addresses for management and Web Services. |
Network Troubleshooting Tools | Troubleshoot network and connectivity problems using the following Linux commands – Ping, TCPing, TraceRoute, MTR, TCPDump, SysLog, and Show MAC. |
Network Firewall | Customize firewall rules for the network traffic. |
4.6.5.1 - Configuring Network Settings
When this option is selected, the network configuration details added during installation are displayed, along with the network connections for the appliance. You can modify the network configuration as per your requirements.
Changing Hostname
The hostname of the appliance can be changed. Ensure that the hostname does not contain the dot (.) special character.
To change the hostname:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Hostname and select Edit.
In the Set Hostname field, enter a new hostname.
Select OK.
The hostname is changed.
Configuring Management IP Address
You can configure the management IP address for your appliance from the networking screen.
To configure the management IP address:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Management IP and select Edit.
In the Enter IP field, enter the IP address for the management NIC.
In the Enter Netmask field, enter the subnet for the management NIC.
Select OK.
The management IP is configured.
Configuring Default Route
The default route is a setting that defines the packet forwarding rule for a specific route. This parameter is required only if the appliance is on a different subnet than the Web UI or for the NTP service connection. If necessary, request the default gateway address from your network administrator and set this parameter accordingly. Typically, the default route is the first IP address of the subnet for the management interface.
To configure the default route:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Default Route and press Edit.
Enter the default route and select Apply.
Configuring Domain Name
You can configure the domain name for your appliance from the networking screen.
To configure the domain name:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Domain Name and select Edit.
In the Set Domain Name field, enter the domain name.
Select Apply.
The domain name is configured.
Configuring Search Domain
You can configure a domain name that is used in the domain search list.
To configure the search domain:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Search Domains and select Edit.
In the Search Domains dialog box, select Edit.
In the Edit search domain field, enter the domain name and select OK.
Select Add to add another search domain.
Select Remove to remove a search domain.
Configuring Name Server
You can configure the IP addresses for your domain name.
To configure the domain IP address:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Name Servers and select Edit.
In the Domain Name Servers dialog box, select Edit to modify the server IP address.
Select Remove to delete the domain IP address.
Select Add to add another domain IP address.
In the Add new nameserver field, enter the domain IP address and select OK.
The IP address for the domain is configured.
Assigning a Default Gateway to the NIC
To assign a default gateway to the NIC:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Interfaces and select Edit.
The Network Interfaces dialog box appears.
Select the interface for which you want to add a default gateway.
Select Edit.
Select Gateway.
The Gateway Settings dialog box appears.
In the Set Default Gateway for Interface ethMNG field, enter the Gateway IP address and select Apply.
Selecting Management NIC
When you have multiple NICs, you can specify the NIC that functions as a management interface.
To select the management NIC:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Management interface and select Edit.
Select the required NIC.
Select Select.
The management NIC is changed.
Changing the Management IP on ethMNG
Follow these instructions to change the management IP on ethMNG. Be aware that changes to IP addresses take effect immediately. Changing the management IP on ethMNG while you are connected to the CLI Manager or Web UI causes the session to disconnect.
To change the management IP on ethMNG:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Interfaces and select Edit.
The Network Interfaces screen appears.
Select ethMNG and click Edit.
Select the network type and select Update.
In the Interface Settings dialog box, select Edit.
Enter the IP address and net mask.
Select OK.
At the prompt, press ENTER to confirm.
The IP address is updated, and the Address Management screen appears.
Identifying an Interface
To identify an interface:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Interfaces and select Edit.
The Network Interfaces screen appears.
Select the network interface and select Blink.
This causes an LED on the NIC to blink and the Network Interfaces screen appears.
Adding a service interface address
From ESA v9.0.0.0, the default IP addresses assigned to the docker interfaces are between 172.17.0.0/16 and 172.18.0.0/16. Ensure that the IP addresses assigned to the docker interface do not conflict with your organization’s private/internal IP addresses.
For more information about reconfiguring the docker interface addresses, refer to Configuring the IP address for the Docker Interface.
Be aware, changes to IP addresses are immediate.
To add a service interface address:
Login to the Appliance CLI Manager.
Navigate to Networking > Network Settings.
Select Interfaces and select Edit.
The Network Interfaces screen appears.
Navigate to the service interface to which you want to add an address and select Update.
Select Add.
At the prompt, type the IP address and the netmask.
Press ENTER.
The address is added, and the Address Management screen appears.
4.6.5.2 - Configuring SNMP
The Simple Network Management Protocol (SNMP) is used for monitoring appliances in a network. It consists of two entities, namely an agent and a manager, that work in a client-server mode. The manager performs the role of the server and the agent acts as the client. The manager collects and processes information about the network provided by the agent. For more information about SNMP, refer to the following link.
In Protegrity appliances, you can use this protocol to query the performance figures of an appliance. Typically, the ESA acts as a manager that monitors other appliances or Linux systems on the network. In ESA, the SNMP can be used in the following two methods:
snmpd: The snmpd is an agent that waits for and responds to requests sent by the SNMP manager. The requests are processed, the necessary information is collected, the requested operation is performed, and the results are sent to the manager. You can run basic SNMP commands, such as, snmpstart, snmpget, snmpwalk, snmpsync, and so on. In a typical scenario, an ESA monitors and requests a status report from another appliance on the network, such as, DSG or ESA. By default, the snmpd requests are communicated over the UDP port 161.
In the Appliance CLI Manager, navigate to Networking > SNMP Configuration > Protegrity SNMPD Settings to configure the snmpd settings. The snmpd.conf file in the /etc/snmp directory contains the configuration settings of the SNMP service.
snmptrapd: The snmptrapd is a service that sends messages to the manager in the form of traps. The SNMP traps are alert messages that are configured in the manager in a way that an event occurring at the client immediately triggers a report to the manager. In a typical scenario, you can create a trap in ESA to cold-start a system on the network in case of a power issue. By default, the snmptrapd requests are sent over the UDP port 162. Unlike snmpd, in the snmptrapd service, the agent proactively sends reports to the manager based on the traps that are configured.
In the CLI Manager, navigate to Networking > SNMP Configuration > Protegrity SNMPTRAPD Settings to configure the snmptrapd settings. The snmptrapd.conf file in the /etc/snmp directory can be edited to configure SNMP traps on ESA.
The following table describes the different settings that you configure for snmpd and snmptrapd services.
Setting | Description | Applicable to SNMPD | Applicable to SNMPTRAPD | Notes |
---|---|---|---|---|
Managing service | Start, stop, or restart the service | ✓ | ✓ | Ensure that the SNMP service is running. On the Web UI, navigate to the System → Services tab to check the status of the service. |
Set listening address | Set the port to accept SNMP requests | ✓ | ✓ | Note: You can change the listening address only once. |
Set DTLS/TLS listening port | Configure SNMP on DTLS over UDP or SNMP on TLS over TCP | ✓ | | The default listening port for SNMPD is set to TCP 10161. |
Set community string | String comprising of the user id and password to access the statistics of another device | ✓ | | |
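For reference, a read-only community string corresponds to an entry like the following in the snmpd.conf file mentioned above. The values are placeholders, and this is a generic net-snmp example rather than a mandated Protegrity configuration.
# Illustrative snmpd.conf entry: allow read-only queries from a specific manager using a community string.
rocommunity <community string> <manager IP address>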
SNMPv1 is used as the default protocol, but you can also configure SNMPv2 and SNMPv3 to monitor the status and collect information from network devices. The SNMPv3 protocol supports the following two security models:
- User Security Model (USM)
- Transport Security Model (TSM)
4.6.5.2.1 - Configuring SNMPv3 as a USM Model
Configuring SNMPv3 as a USM Model:
From the CLI manager navigate to Administration > OS Console.
The command prompt appears.
Perform the following steps to comment out the rocommunity string.
Edit the snmpd.conf using a text editor.
/etc/snmp/snmpd.conf
Prepend a # to comment the rocommunity string.
Save the changes.
Run the following command to set the path for the snmpd.conf file.
export datarootdir=/usr/share
Stop the SNMP daemon using the following command:
/etc/init.d/snmpd stop
Add a user with read-only permissions using the following command:
net-snmp-create-v3-user -ro -A <authentication password> -a MD5 -X <privacy password> -x DES snmpuser
For example,
net-snmp-create-v3-user -ro -A snmpuser123 -a MD5 -X snmpuser123 -x DES snmpuser
Start the SNMP daemon using the following command:
/etc/init.d/snmpd start
Verify if SNMPv1 is disabled using the following command:
snmpwalk -v 1 -c public <hostname or IP address>
Verify if SNMPv3 is enabled using the following command:
snmpwalk -u <username> [-A (authphrase)] [-a (MD5|SHA)] [-x DES] [-X (privaphrase)] (ipaddress)[:(dest_port)] [oid]
For example,
snmpwalk -u snmpuser -A snmpuser123 -a MD5 -X snmpuser123 -x DES -l authPriv 127.0.0.1 -v3
Unset the variable assigned to the snmpd.conf file using the following command.
unset datarootdir
4.6.5.2.2 - Configuring SNMPv3 as a TSM Model
Configuring SNMPv3 as a TSM Model:
From the CLI manager navigate to Administration > OS Console.
The command prompt appears.
Set up the CA certificates, Server certificates, Client certificates, and Server key on the server using the following commands:
ln -s /etc/ksa/certificates/CA.pem /etc/snmp/tls/ca-certs/CA.crt
ln -s /etc/ksa/certificates/server.pem /etc/snmp/tls/certs/server.crt
ln -s /etc/ksa/certificates/client.pem /etc/snmp/tls/certs/client.crt
ln -s /etc/ksa/certificates/mng/server.key /etc/ksa/certificates/server.key
Change the mode of the server.key file under /etc/ksa/certificates/ directory to read only using the following command:
chmod 600 /etc/ksa/certificates/server.key
Edit the snmpd.conf file under /etc/ksa directory.
Append the following configuration in the snmpd.conf file.
[snmp] localCert server
[snmp] trustCert CA
certSecName 10 client --sn <username>
Trouser -s tsm "<username>" AuthPriv
Alternatively, you can use a field from the certificate as the username by using the --cn flag, as follows:
certSecName 10 client --cn
Trouser -s tsm "Protegrity Client" AuthPriv
To use fingerprint as a certificate identifier, execute the following command:
net-snmp-cert showcerts --fingerprint 11
Restart the SNMP daemon using the following command:
/etc/init.d/snmpd restart
You can also restart the SNMP service using the ESA Web UI.
Deploy the certificates on the client side.
4.6.5.3 - Working with Bind Services and Addresses
The Bind Services/Addresses tool allows you to separate the Web services from management (Web UI and SSH). You can specify the network cards that are used for Web management and Web services. For example, the DSG appliance uses the ethMNG interface for the Web UI and the ethSRV interface for enabling communication with different applications in an enterprise. This article provides instructions for selecting network interfaces for management and services.
Ensure that all the NICs added to the appliance are configured in the Network Settings screen.
4.6.5.3.1 - Binding Interface for Management
If you have multiple NICs, you can specify the NIC that functions as a management interface.
To bind the management NIC:
Login to the CLI Manager.
Navigate to Networking > Bind Services/Address.
Enter the root password and select OK.
Select Management and choose Select.
In the interface for ethMNG, select OK.
Choose Select and press ENTER.
The NIC for Management is assigned.
Select Done.
A message Successfully done appears and the NIC for management is assigned.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Run the
netstat -tunlp
command to verify the status of the NICs.
4.6.5.3.2 - Binding Interface for Services
If you have multiple service NICs, you can specify the NICs that will function to accept the Web service requests on port 8443.
To bind the service NIC:
Login to the CLI Manager.
Navigate to Networking > Bind Services/Address.
Enter the root password and select OK.
Select Service and choose Select.
A list of service interfaces with their IP addresses is displayed.
Select the required interface(s) and select OK.
The following message appears.
Choose Yes and press ENTER.
Select Done.
A message Successfully done appears and the NIC for service requests are assigned.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Run the
netstat -tunlp
command to verify the status of the NICs.
4.6.5.4 - Using Network Troubleshooting Tools
Using the Network Troubleshooting Tools, you can check the health of your network and troubleshoot problems. This tool is composed of several utilities that allow you to test the integrity of your network. The following table describes the utilities that make up the Network Utilities tool.
Table 1. Network Utilities
Name | Using this tool you can... | How... |
---|---|---|
Ping | Tests whether a specific Host is accessible across the network. | In the Address field, type the IP address that you want to test. Press ENTER. |
TCPing | Tests whether a specific TCP port on a Host is accessible across the network. | In the Address field, type the IP address. In the Port field, type the port number. Select OK. |
TraceRoute | Tests the path of a packet from one machine to another. Returns timing information and the path of the packet. | At the prompt, type the IP address or Host name of the destination machine. Select OK. |
MTR | Tests the path of a packet and returns the list of routers traversed and some statistics about each. | At the prompt, type the IP address or Host name. Select OK. |
TCPDump | Tests network traffic, and examines all packets going through the machine. | To filter information by network interface, protocol, Host, or port, type the criteria in the corresponding text boxes. Select OK. |
SysLog | Sends syslog messages. Can be used to test syslog connectivity. | In the Address field, enter the IP address of the remote machine the syslogs will be sent to. In the Port field, enter a port number the remote machine is listening to. In the Message field, enter a test message. Select OK. On the remote machine, check if the syslog was successfully sent. Note that the appliance uses UDP syslog, so there is no way to validate whether the syslog server is accessible. |
Show MAC | Finds out the MAC address for a given IP address. Detects IP collision. | At the prompt, type the IP address or Host name. Select OK. |
4.6.5.5 - Managing Firewall Settings
Protegrity internal firewall provides a way to allow or restrict inbound access from the outside to Protegrity Appliances. Using the Network Firewall tool you can manage your Firewall settings. For example, you can allow access to the management-network interface only from a specific machine while denying access to all other machines.
To improve security in the appliance, the firewall in v9.2.0.0 is upgraded to use the nftables framework instead of the iptables framework. The nftables framework helps remedy issues, including those relating to scalability and performance.
The iptables framework allows the user to configure IP packet filter rules. The iptables framework has multiple pre-defined tables and base chains that define the treatment of network traffic packets. With the iptables framework, you must configure every single rule. You cannot combine the rules because there are several base chains.
The nftables framework is the successor of the iptables framework. With the nftables framework, there are no pre-defined tables or chains that define the network traffic. It uses simple syntax, combines multiple rules, and one rule can contain multiple actions. You can export the data related to the rules and chains to json or xml using the nft userspace utility.
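For example, assuming the nft utility is run as root from the OS Console, the current ruleset can be exported to JSON as follows; the output file path is arbitrary.
# Export the current firewall ruleset as JSON using the nft userspace utility.
nft -j list ruleset > /tmp/ruleset.json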
Verifying the nftables
This section provides the steps to verify the nftables.
To verify the nftables:
Log in to the CLI Manager.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Run the following command:
nft list ruleset
The nftables rules appear.
Listing the Rules
Using the Rules List option, you can view the available firewall rules.
To view the details of the rule:
Log in to the CLI Manager.
Navigate to Networking > Network Firewall.
Enter the root password and select OK.
The following screen appears.
From the menu, select Rules List to view the list of rules.
A list of rules appear.
Select a rule from the list and click More.
The policy, protocol, source IP address, interface, port, and description appear.
Click Delete to delete a selected rule. Once confirmed, the rule is deleted.
Log in to the Web UI.
Navigate to System > Information to view the rules.
Reordering the Rules List
Using the Reorder Rules List option, you can reorder the list of rules. With buttons Move up and Move down you can move the selected rule. When done, click Apply for the changes to take effect.
The order of the specified rules is important. When reordering the firewall rules, take into account that rules at the beginning of the list have the highest priority. Thus, if there are conflicting rules in the list, the one that appears first in the list is applied.
Specifying the Default Policy
The default policy determines what to do on packets that do not match any existing rule. Using the Specify Default Policy option, you can set the default policy for the input chains. You can specify one of the following options:
- Accept - Let the traffic pass through.
- Drop - Remove the packet from the wire and generate no error packet.
If an incoming packet is not matched by any rule, it is handled according to the default policy (for example, dropped if the default policy is Drop). If it is matched by a rule, the packet is allowed, denied, or dropped depending on the policy of the rule.
Adding a New Rule
Every new rule specifies the criteria for matching packets and the action required. You can add a new rule using the Add New Rule option. This section explains how to add a firewall rule.
Adding a new rule is a multi-stage process that includes:
- Specifying an action to be taken for matching incoming traffic:
- Accept - Allow the packets.
- Drop - Remove the packet from the wire and generate no error packet.
- Reject - Remove the packet from the wire and return an error packet.
- Specifying the local service for this rule.
- Specifying the local network interface. It can be any interface or a selected interface.
- Specifying the remote machine criteria.
- Providing a description for the rule. This is optional.
When a Firewall rule is added, it is added to the end of the Firewall list. If there is a conflicting rule in the beginning of the list, then the new rule may be ignored by the Firewall. Thus, it is recommended to move the new rule somewhere to the beginning of the Firewall rules list.
Adding a New Rule with the Predefined List of Functionality
Follow these instructions to add a new rule with the predefined list of functionality:
Select a policy for the rule, accept, drop, or reject, which will define how a package from the specific machine will be treated by the appliance Firewall.
Click Next.
Specify what will be affected by the rule. Two options are available: specify the affected functionality list (in this case, you do not need to specify the ports because they are already predefined), or specify the protocol and the port.
Select the local service affected by the rule. You can select one or more items to be affected by the firewall rule.
Click Next.
If you want to have a number of similar rules, then you can specify multiple items from the functionality list. Thus, for example, if you want to allow access from a certain machine to the appliance LDAP, SNMP, High Availability, SSH Management, or Web Services Management, you can specify these items in the list.
Click Manually.
In the following dialog box, select a protocol for the rule. You can select between TCP, UDP, ICMP, or any.
In the following screen, specify the port number and click Next.
In the following screen you are prompted to specify an interface. Select between ethMNG (Ethernet management interface), ethSRV0 (Ethernet security service interface), ethSRV1, or select Any.
In the following screen you are prompted to specify the remote machine. You can specify between single/IP with subnet or domain name.
When you select Single, you will be asked to specify the IP in the following screen.
When you select IP with Subnet, you will be asked to specify the IP first, and then to specify the subnet.
When you select Domain Name, you will be asked to specify the domain name.
When you have specified the remote machine, the Summary screen appears. You can enter the description of your rule if necessary.
Click Confirm to save the changes.
Click OK in the confirmation message listing the rules that will be added to the Rules list.
Disabling/Enabling the Firewall Rules
Using the Disable/Enable Firewall option, you can start your firewall. All rules that are available in the firewall rules list will be affected by the firewall when it is enabled. All new rules added to the list will be affected by the firewall. You can also restart, start, or stop the firewall using Appliance Web UI.
Resetting the Firewall Settings
Using the Reset Firewall Settings option, you can delete all firewall rules. If you use this option, then the firewall default policy becomes accept and the firewall is enabled.
If you require additional security, then change the default policy and add the necessary rules immediately after you reset the firewall.
4.6.5.6 - Using the Management Interface Settings
Using the Management Interface Settings option, you can specify the network interface that will be used for management (ethMNG). By default, the first network interface is used for management (ethMNG). The first management Ethernet is the one that is on-board.
If you change the network interface, then you are asked to reboot the appliance for the changes to take effect.
Note: The MAC address is stored in the appliance configuration. If the machine boots or reboots and this MAC address cannot be found, then the default, which is the first network card, is applied.
4.6.5.7 - Ports Allowlist
On the Proxy Authentication screen of the Web UI, you can add multiple AD servers for retrieving users. The AD servers are added as URLs that contain the IP address/domain name and the listening port number. You can restrict the ports on which the LDAP listens to by maintaining a port allowlist. This ensures that only those ports that are trusted in the organization are mentioned in the URLs.
On the CLI Manager, navigate to Networking > Ports Allowlist to set a list of trusted ports. By default, port 389 is added to the allowlist.
The following figure illustrates the Ports Allowlist screen.
This setting is applicable only to the ports entered in the Proxy Authentication screen of the Web UI.
Viewing list of allowed ports
You can view the list of ports that are specified in the allowlist.
On the CLI Manager, navigate to Networking > Ports Allowlist.
Enter the root credentials.
Select List allowed ports.
The list of allowed ports appears.
Adding ports to the allowlist
Ensure that multiple port numbers are comma-delimited and do not contain spaces between them.
On the CLI Manager, navigate to Networking > Ports Allowlist.
Enter the root credentials.
Select Add Ports.
Enter the required ports and select OK.
A confirmation message appears.
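For example, to trust both the standard LDAP port and the LDAPS port (illustrative values), enter the ports as a comma-delimited list without spaces:
389,636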
4.6.6 - Working with Tools
Protegrity appliances are equipped with a Tools menu. The following sections list and explain the available tools and their functionalities.
4.6.6.1 - Configuring the SSH
The SSH Configuration tool provides a convenient way to examine and manage the SSH configuration that would fit your needs. Changing the SSH configuration may be necessary for special needs, troubleshooting, or advanced non-standard scenarios. By default, the SSH is configured to deny any SSH communication with unknown remote servers. You can allow the authorized users with keys to communicate without passwords. Every time you add a remote host, the system obtains the SSH key for this host, and adds it to the known hosts.
Note: It is recommended to create a backup of the SSH settings/keys before you make any modifications.
For more information for Backup from CLI, refer to here.
For more information for Backup from Web UI, refer to here.
Using Tools > SSH Configuration, you can:
- Specify SSH Mode.
- Specify SSH configuration.
- Manage the hosts that the Appliance can connect to.
- Set the authorized keys.
- Manage the keys that belong to local accounts.
- Generate new SSH server keys.
4.6.6.1.1 - Specifying SSH Mode
Using the SSH Mode tool, you can set restrictions for SSH connections. The restrictions can be hardened or relaxed according to your needs. The available modes are described in the following table:
Mode | SSH Server | SSH Client |
---|---|---|
Paranoid | Disable root access | Disable password authentication, allows to connect only using public keys. Block connections to unknown hosts. |
Standard | Disable root access | Allow password authentication. Allow connections to new or unknown hosts, enforce SSH fingerprint of known hosts. |
Open | Allow root access. Accept connections using passwords and public keys. | Allow password authentication. Allow connection to all hosts; do not check host fingerprints. |
4.6.6.1.2 - Setting Up Advanced SSH Configuration
A user with administrative credentials can configure the SSH idle timeout and client authentication settings. The following screen shows the Advanced SSH Configuration.
In the Idle Timeout field, enter the idle timeout period in seconds. This allows the user to set idle timeout period for the SSH server before logout.
When you are working on the OS Console using an OpenSSH session, if the session is idle for the specified time, the OS Console session is closed and you are redirected to the Administration screen.
In the Client Authentications field, specify the order for trying the SSH authentication method. This allows you to prefer one method over another. The default for this option is publickey, password.
4.6.6.1.3 - Managing SSH Known Hosts
Using Known Hosts: Hosts I can connect to, you can manage the hosts that you can connect to using SSH. The following table explains the options in the Hosts that I connect to dialog box:
Using… | You can… |
---|---|
Display List | View the list of SSH allowed hosts you can connect to. |
Reset List | Clear the SSH allowed hosts list. Only the local host, which is the default, appears. |
Add Host | Add a new SSH allowed host. |
Delete Host | Delete a host from the list of SSH allowed hosts. |
Refresh (Sync) Host | Make sure that the available key is a correct key from each IP. To do this, go to each IP/host and re-obtain its key. |
4.6.6.1.4 - Managing Authorized Keys
SSH Authorized keys are used to specify SSH keys that are allowed to connect to this machine without entering the password. The system administrator can create such SSH keys and import the keys to this appliance. This is a standard SSH mechanism to allow secured access to machines without a need to enter a password.
Using the Authorized Keys tool, you can display the keys and delete the list of authorized keys from the Reset List option. This would reject all incoming connections that used the authorized keys reset with this tool.
Examine and manage the users that are authorized to access this host.
4.6.6.1.5 - Managing Identities
Using the Identities menu, you can manage and examine which users can start SSH communication from this host using SSH keys. You can:
- Display the list of such keys that already exist.
- Reset the SSH keys. This means that all SSH keys used for outgoing connections are deleted.
- Add an identity from the list already available by default or create one as required, using the Directory or Filter options.
- Delete an identity. This should be done with extreme care.
4.6.6.1.6 - Generating SSH Keys
Using the Generate SSH Keys option, you can create new SSH keys. If you recreate the SSH keys, then the remote machines that store the current SSH key will not be able to contact the appliance until you manually update the SSH keys on those machines.
4.6.6.1.7 - Configuring the SSH
SSH is a network protocol that ensures secure communication over an unsecured network. It comprises a utility suite that provides high-level authentication and encryption over unsecured communication channels. SSH utility suites provide a set of default rules that ensure the security of the appliances. These rules consist of various configurations, such as password authentication, log level info, port numbers info, login grace time, strict modes, and so on. These configurations are enabled by default when the SSH service starts. These rules are provided in the sshd_config.orig file under the /etc/ssh directory.
You can customize the SSH rules for your appliances as per your requirements. You can configure the rules in the sshd_config.append file under the /etc/ksa directory.
Warning: To add custom rules or configurations to the SSH configuration file, modify the sshd_config.append file only. It is recommended to use the console for modifying these settings.
For example, if you want to add a match rule for a test user, test_user with the following configurations:
- User can only login with a valid password.
- Only three incorrect password attempts are permitted.
- Requires host-based authentication.
You must add the following configuration for the match rule in the sshd_config.append file. Make sure to restart the SSH service to apply the updated configurations.
Match user test_user
PasswordAuthentication yes
MaxAuthTries 3
HostbasedAuthentication yes
Ensure that you enter valid configurations in the sshd_config.append file.
If the rule added to the file is incorrect, then the SSH service reverts to the default configurations provided in the sshd_config.orig file.
Consider an example where the SSH rule is incorrectly configured by replacing PasswordAuthentication with Password---Authentication. The following code snippet shows the incorrect configuration.
Match user test_user
Password---Authentication yes
MaxAuthTries 3
HostbasedAuthentication yes
The following message appears on the OS Console when the SSH service restarts.
root@protegrity-esa858:/var/www# /etc/init.d/ssh restart
[ ok ] Stopping OpenBSD Secure Shell server: sshd.
The configuration(s) added is incorrect. Reverting to the default configuration.
/etc/ssh/sshd_config: line 274: Bad configuration option: Password---Authentication
/etc/ssh/sshd_config line 274: Directive 'Password---Authentication' is not allowed within a Match block
[ ok ] Starting OpenBSD Secure Shell server: sshd.
If you want to configure the SSH settings for an HA environment, then you must add the rules to both nodes individually before creating the HA.
For more information about configuring rules to SSH, refer to here.
4.6.6.1.8 - Customizing the SSH Configurations
To configure SSH rules:
Login to the CLI Manager with the root credentials.
Navigate to Administration > OS Console.
Open the following file in a text editor.
/etc/ksa/sshd_config.append
Configure the required SSH rule and save the file.
Restart the SSH service through the CLI or Web UI.
- To restart the SSH service from the Web UI, navigate to System > Services > Secured Shell (SSH).
- To restart the SSH service from CLI Manager, navigate to Administration > Services > Secured Shell (SSH).
The SSH service starts with the customized rules or configurations.
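Putting these steps together, a minimal sequence from the OS Console might look as follows (vi is used only as an example editor; any text editor works):
# Edit the append file with the required rule, then restart the SSH service.
vi /etc/ksa/sshd_config.append
/etc/init.d/ssh restart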
4.6.6.1.9 - Exporting/Importing the SSH Settings
You can back up or restore the SSH settings. To export these configurations, select the Appliance OS configuration option while exporting the custom files.
To import the SSH configurations, select the SSH Settings option.
Warning: SSH settings and SSH identities can be server-specific. It is recommended not to export or import these SSH settings, as doing so may break the SSH services on the appliance.
For more information on Exporting Custom Files, refer to here.
4.6.6.1.10 - Securing SSH Communication
When a client communicates with the server using the SSH protocol, a key exchange process occurs for encrypting and decrypting the communication. During the key exchange, the client and server agree on the cipher suites that are used for communication. The cipher suites contain different algorithms for securing the communication. One of the algorithms that Protegrity appliances use is SHA1, which is vulnerable to collision attacks. To secure the SSH communication, it is therefore recommended to deprecate the SHA1 algorithm. The following steps describe how to remove the SHA1 algorithm from the SSH configuration.
To secure SSH communication:
On the CLI Manager, navigate to Administration > OS Console.
Navigate to the /etc/ssh directory.
Edit the sshd_config.orig file.
Remove the following entry:
MACs hmac-sha1,hmac-sha2-256,hmac-sha2-512
Remove the following entry:
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1
Save the changes and exit the editor.
Navigate to the /etc/ksa directory.
Edit the sshd_config.append file.
Append the following entries to the file.
MACs hmac-sha2-256,hmac-sha2-512
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Save the changes and exit the editor.
Restart the SSH service using the following command.
/etc/init.d/ssh restart
The SHA1 algorithm is removed from the SSH communication.
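To confirm that SHA1 is no longer offered, you can inspect the effective SSH server configuration, assuming the appliance's OpenSSH sshd supports the -T extended test mode:
# Print the effective MAC and key exchange algorithm lists; SHA1 entries should be absent.
sshd -T | grep -iE '^(macs|kexalgorithms)'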
4.6.6.2 - Clustering Tool
Using Tools > Clustering Tool, you can create a trusted cluster. The trusted cluster can be used to synchronize data from one server to another.
4.6.6.2.1 - Creating a TAC using the CLI Manager
About Creating a TAC using the CLI
Before creating a TAC, ensure that the SSH Authentication type is set to Public key or Password + PublicKey.
If you are using cloned machines to join a cluster, it is necessary to rotate the keys on all cloned nodes before joining the cluster.
If the cloned machines have proxy authentication, two-factor authentication, or TAC enabled, it is recommended to use new machines. This avoids limitations or conflicts, such as an inconsistent TAC, mismatched node statuses, conflicting nodes, and key rotation failures due to keys in use.
For more information about rotating the keys, refer here.
How to create the TAC using the CLI Manager
To create a cluster using the CLI Manager:
In the ESA CLI Manager, navigate to Tools > Clustering > Trusted Appliances Cluster.
The following screen appears.
Select Create: Create new cluster.
The screen to select the communication method appears.
Select Set preferred method to set the preferred communication method.
- Select Manage local methods to add, edit, or delete a communication method.
- For more information about managing communication methods for local node, refer here.
Select Done.
The Cluster Services screen appears and the cluster is created.
4.6.6.2.2 - Joining an Existing Cluster using the CLI Manager
If you are using cloned machines to join a cluster, it is necessary to rotate the keys on all cloned nodes before joining the cluster.
If the cloned machines have proxy authentication, two-factor authentication, or TAC enabled, it is recommended to use new machines. This avoids limitations or conflicts, such as an inconsistent TAC, mismatched node statuses, conflicting nodes, and key rotation failures due to keys in use.
For more information about rotating the keys, refer here.
Important: When assigning a role to the user, ensure that the Can Create JWT Token permission is assigned to the role. If the Can Create JWT Token permission is not assigned to the role of the required user, then the operation to join the cluster fails. To verify the Can Create JWT Token permission, from the ESA Web UI navigate to Settings > Users > Roles.
To join a cluster using the CLI Manager:
In the ESA CLI Manager, navigate to Tools > Clustering > Trusted Appliances Cluster.
In the Cluster Services screen, select Join: Join an existing cluster.
The following screen appears.
Enter the IP address of the target node in the Node text box.
Enter the credentials of the user of the target node in the Username and Password text boxes.
- Ensure that the user has administrative privileges.
- Select Advanced to manage communication or set the preferred communication method.
For more information about managing communication methods, refer here.
Select Join.
The node is joined to an existing cluster.
4.6.6.2.3 - Cluster Operations
Using Cluster Operations, you can execute the standard set of commands or copy files from the local node to other nodes in the cluster. You can only execute the commands or copy files to the nodes that are directly connected to the local node.
The following figure displays the Cluster Operations screen.
Executing Commands using the CLI Manager
This section describes the steps to execute commands using the CLI Manager.
To execute commands using the CLI Manager:
In the CLI Manager, navigate to Tools > Trusted Appliances Cluster > Cluster Operations: Execute Commands/Deploy Files.
Select Execute.
The Select command screen appears with the following list of commands:
- Display top 10 CPU Consumers
- Display top 10 memory Consumers
- Report free disk space
- Report free memory space
- Display TCP/UDP network information
- Display performance and system counters
- Display cluster tasks
- Manually enter a command
Select the required command and select Next.
The following screen appears.
Select the target node and select Next.
The Summary screen displaying the output of the selected command appears.
Copying Files from Local Node to Remote Node
This section describes the steps to copy files from local node to remote node.
To copy files from local node to remote nodes:
In the CLI Manager, navigate to Tools > Trusted Appliances Cluster > Cluster Operations: Execute Commands/Deploy Files.
The screen with the appliances connected to the cluster appears.
Select Put Files.
The list of files in the current directory appears. Select Directory to change the current directory.
Select the required file and select Next.
The Target Path screen appears.
Select the required option and select Next.
The following screen appears.
Select the target node and select Next.
The Summary screen confirming the file to be deployed appears.
Select Next.
The files are deployed to the target nodes.
4.6.6.2.4 - Managing a site
Using Site Management, you can perform the following operations:
- Obtain Site Information
- Add a site
- Remove sites added to the cluster, if more than one site exists in the cluster
- Rename a site
- Set the master site
The following screen shows the Site Management screen.
View a Site
You can view the information for all the sites in the cluster by selecting Show sites information. When a cluster is created, a master site named site1 is created by default. The following screen displays the Site Information screen.
Adding Sites to a Cluster
This section describes the steps to add multiple sites to a cluster from the CLI Manager.
To add a site to a cluster:
On the CLI Manager, navigate to Tools > Trusted Appliances Cluster > Site Management > Add Site.
The following screen appears.
Select OK.
The new site is added.
Renaming a Site
This section describes the steps to rename a site from the CLI Manager.
To rename a site:
On the CLI Manager, navigate to Tools > Trusted Appliances Cluster > Site Management > Update Cluster Site Settings.
Select the required site and select Rename.
The Rename Site screen appears.
Type the required site name and select OK.
The site is renamed.
Setting a Master Site from the CLI Manager
This section describes the steps to set a master site from the CLI Manager.
To set a master site from the CLI Manager:
On the CLI Manager, navigate to Tools > Trusted Appliances Cluster > Site Management > Set Master Site.
The Set Master Site screen appears.
Select the required site and select Set Master.
A message Operation has been completed successfully appears and the new master site is set. An empty cluster site does not contain any node. You cannot set an empty cluster site as a master site.
Deleting a Cluster Site
This section describes the steps to delete a cluster site from the CLI Manager. You can only delete an empty cluster site.
To delete a cluster site:
In the CLI Manager of the node hosting the appliance cluster, navigate to Tools > Trusted Appliances Cluster > Site Management > Remove: Remove Cluster sites(s).
The Remove Site screen appears.
Select the required site and select Remove.
Select OK.
The site is deleted.
4.6.6.2.5 - Node Management
Using Node Management, you can:
- List the nodes - The same option as List Nodes menu, refer here.
- Add a node to the cluster - If your appliance is a part of the cluster, and you want to add a remote node to this cluster.
- Update cluster information - For updating the identification entries.
- Manage communication method of the nodes.
- Remove a remote node from the cluster.
4.6.6.2.5.1 - Show Cluster Nodes and Status
The following table describes the fields that appear on the status screen.
Field | Description |
---|---|
Hostname | Hostname of the node |
Address | IP address of the node |
Label | Label assigned to the node |
Type | Build version of the node |
Status | Online/Blocked/Offline |
Node Messages | Messages that appear for the node |
Connection | Connection setting of the node (On/Off) |
4.6.6.2.5.2 - Viewing the Cluster Status using the CLI Manager
To view the status of the nodes in a cluster using the CLI Manager:
In the CLI Manager, navigate to Tools > Trusted Appliances Cluster > Node Management > List Nodes.
The screen displaying the status of the nodes appears.
Select Change View to change the view.
The list of different reports is as follows:
- List View: Displays the list of all the nodes.
- Labels View: Displays a grouped view of the nodes.
- Status View: Displays the status of the nodes.
- Report view: Displays the cluster diagnostics, network or connectivity issues, and generates error or warning messages if required.
4.6.6.2.5.3 - Adding a Remote Node to a Cluster
To add a remote node to the cluster:
In the CLI Manager of the node hosting the cluster, navigate to Tools > Trusted Appliances Cluster > Node Management > Add Node: Add a remote node to this cluster.
The Add Node screen appears.
Enter the credentials of the local node user, which must have administrative privileges, into the Username and Password text boxes.
Type the preferred communication method in the Preferred Method text box.
Type the accessible communication method of the target node in the Reachable Address text box.
Type the credentials of the target node user in the Username and Password text boxes.
Select OK.
The node is invited to the cluster.
4.6.6.2.5.4 - Updating Cluster Information using the CLI Manager
It is recommended not to change the name of the node after you create the cluster task.
To update cluster information:
In the CLI Manager of the node hosting the cluster, navigate to Tools > Trusted Appliances Cluster > Node Management > Update Cluster Information.
The Update Cluster Information screen appears.
Type the name of the node in the Name text box.
Type the information describing the node in the Description text box.
Type the required label for the node in the Labels text box.
Select OK.
The details of the node are updated.
4.6.6.2.5.5 - Managing Communication Methods for Local Node
Every node in a network is identified using a unique identifier. A communication method is a qualifier for the remote nodes in the network to communicate with the local node.
There are two standard methods by which a node is identified:
- Local IP Address of the system (ethMNG)
- Host name
The nodes joining a cluster use the communication method to communicate with each other. The communication between nodes in a cluster occurs over one of the accessible communication methods.
Adding a Communication Method from the CLI Manager
This section describes the steps to add a communication method from the CLI Manager.
To add a communication method from the CLI Manager:
In the ESA CLI Manager, navigate to Tools > Clustering > Trusted Appliances Cluster.
In the Cluster Services screen, select Node Management: Add/Remove Cluster Nodes/ Information.
In the Node Management screen, select Manage node’s local communication methods.
In the Select Communication Method screen, select Add.
Type the required communication method and select OK.
The new communication method is added.
Ensure that the length of the text is less than or equal to 64 characters.
Editing a Communication Method from the CLI Manager
This section describes the steps to edit a communication method from the CLI Manager.
To edit a communication method from the CLI Manager:
In the ESA CLI Manager, navigate to Tools > Clustering > Trusted Appliances Cluster.
In the Cluster Services screen, select Node Management: Add/Remove Cluster Nodes/ Information.
In the Node Management screen, select Manage node’s local communication methods.
In the Select Communication Method screen, select the communication method to edit and select Edit.
In the Edit method screen, enter the required changes and select OK.
The changes to the communication method are complete.
Deleting a Communication Method from the CLI Manager
This section describes the steps to delete a communication method from the CLI Manager.
To delete a communication method from the CLI Manager:
In the ESA CLI Manager, navigate to Tools > Clustering > Trusted Appliances Cluster.
In the Cluster Services screen, select Node Management: Add/Remove Cluster Nodes/ Information.
In the Node Management screen, select Manage node’s local communication methods.
In the Select Communication Method screen, select the required communication method and select Delete.
The communication method of the node is deleted.
4.6.6.2.5.6 - Managing Local to Remote Node Communication
You can select the method that a node uses to communicate with another node in a network. The communication methods of all the nodes are visible across the cluster. You can select the specific communication mode to connect with a specific node in the cluster. In the Node Management screen, you can set the communication between a local node and remote node in a cluster.
You can also set the preferred method that a node uses to communicate with other nodes in a network. If the selected communication method is not accessible, then the other available communication methods of the target node are used for communication.
Selecting a Local to Remote Node Communication Method
This section describes the steps to select a local to remote node communication method.
To select a local to remote node communication method:
In the ESA CLI Manager, navigate to Tools > Clustering > Trusted Appliances Cluster.
In the Cluster Services screen, select Node Management: Add/Remove Cluster Nodes/ Information.
In the Node Management screen, select Manage local to other nodes communication methods.
In the Manage local to other nodes communication method, select the required node for which you want to change the communication method.
Select Change.
Select the required communication method and select Choose. If a new communication method must be added so that it can be chosen, select Add New to add it.
Select OK.
The communication method is selected to communicate with the remote node in the cluster.
Changing a Local to Remote Node Communication Method
This section describes the steps to change a local to remote node communication method.
To change a local to remote node communication method:
In the ESA CLI Manager, navigate to Tools > Clustering > Trusted Appliances Cluster.
In the Cluster Services screen, select Node Management: Add/Remove Cluster Nodes/ Information.
In the Node Management screen, select Manage local to other nodes communication methods.
In the Manage local to other nodes communication method screen, select a remote node and select Change.
The following screen appears.
Select the required communication method.
Select Choose.
The new local to remote node communication method is set.
4.6.6.2.5.7 - Removing a Node from a Cluster using CLI Manager
Before attempting to remove a node, verify whether it is associated with a cluster task. If a node is associated with a cluster task that is based on the hostname or IP address, then the Remove a (remote) cluster node operation does not remove the node from the cluster. Ensure that you delete all such tasks before removing any node from the cluster.
To remove a node from a cluster using the CLI Manager:
In the ESA CLI Manager, navigate to Tools > Trusted Appliances Cluster.
In the Cluster Services screen, select Node Management: Add/Remove Cluster Nodes/Information.
The following screen appears.
Select Remove: Delete a (remote) cluster node and select OK.
The screen displaying the nodes in the cluster appears.
Select the required node and select OK.
The following screen appears.
Select OK.
Select REFRESH to view the updated status.
4.6.6.2.5.8 - Uninstalling Cluster Services
Before attempting to uninstall the cluster services on a node, verify whether the node is associated with a cluster task. If a node is associated with a cluster task that is based on the hostname or IP address, then the Uninstall Cluster Services operation does not uninstall the cluster services on the node. Ensure that you delete all such tasks before uninstalling the cluster services.
To uninstall the cluster services using the CLI Manager:
In the ESA CLI Manager, navigate to Tools > Trusted Appliances Cluster.
In the Cluster Services screen, select Uninstall: Uninstall Cluster Services.
A confirmation message appears.
Select Yes.
The cluster services are uninstalled.
4.6.6.2.6 - Trusted Appliances Cluster
A Trusted Appliances Cluster can be used to transfer data from one node to other nodes regardless of their location, as long as standard SSH access is supported. This mechanism allows you to run commands on remote cluster nodes, transfer files to remote nodes, and export configurations to remote nodes. Trusted appliances clusters are typically used for disaster recovery. The trusted appliances cluster can be configured and controlled using the Appliance Web UI as well as the Appliance CLI.
Clustering details are fully explained in the section Trusted Appliances Cluster (TAC). In that section you will find information on how to:
- Set up a trusted appliances cluster
- Add the appliance to an existing trusted appliances cluster
- Remove an appliance from the trusted appliances cluster
- Manage cluster nodes
- Run commands on cluster nodes
Using the cluster maintenance, you can perform the following functions:
- List cluster nodes
- Update cluster keys
- Redeploy local cluster configuration to all nodes
- Review cluster service interval
- Execute commands as OS root user
4.6.6.2.6.1 - Updating Cluster Key
Before you begin
Ensure that all the nodes in the cluster are active, before changing the cluster key.
If a new key is deployed while a node is unreachable, then that node can no longer connect to the cluster. In this scenario, remove the node from the cluster and join the cluster again.
Generating a new set of cluster SSH keys deploys the keys to the nodes that are directly connected to the local node. This ensures that the trusted appliances cluster remains secure.
To re-generate cluster keys:
In the ESA CLI Manager, navigate to Tools > Clustering > Trusted Appliances Cluster > Maintenance: Update Cluster Settings.
The following screen appears.
Select New Cluster Keys.
A message to re-generate the cluster keys appears.
Select Yes.
The new keys are deployed to the nodes that are directly connected.
4.6.6.2.6.2 - Redeploy Local Cluster Configuration to All Nodes
You can redeploy the local cluster configuration to force it to be applied on all connected nodes. Usually there is no need for such an operation, since the configurations are synchronized automatically. However, if the cluster status service is stopped or you want to force a specific configuration, then you can use this option.
When you select Redeploy local cluster configuration to all nodes in the Update Cluster dialog box, the operation is performed at once with no confirmation.
4.6.6.2.6.3 - Cluster Service Interval
The cluster provides an auto-update mechanism that runs as a background service, which is responsible for updating local and remote cluster configurations and for cluster health checks.
You can specify the cluster service interval in the Cluster Service Interval dialog box.
The interval (in seconds) specifies the sleep time between cluster background updates and operations. For example, if the specified value is 120 seconds, then every two minutes the cluster service updates its status and synchronizes its cluster configuration with the other nodes (if changes are identified).
4.6.6.2.6.4 - Execute Commands as OS Root User
By default, the cluster user is a restricted user, which means that the cluster commands are restricted by the OS. There are scenarios where you might want to disable these restrictions and allow the cluster user to run commands as the OS root user.
Using the details in the table below, you can specify whether to execute the commands as root or as a restricted user.
You can specify… | To… |
---|---|
Yes | Always execute commands as the OS root user. This is less secure and is risky if the wrong command is executed. |
No | Always execute commands as a non-root, restricted user. This is more secure, but is not suitable for many scenarios. |
Ask | Always ask for confirmation before a command is executed. |
4.6.6.3 - Working with Xen Paravirtualization Tool
Using the Xen paravirtualization tool, you can set up an appliance virtual environment. The default installation of a Protegrity appliance uses hardware virtualization mode (HVM). The appliance can be reconfigured to use paravirtualization mode (PVM) to optimize the performance of virtual guest machines.
Protegrity supports these virtual servers:
- Xen®
- Microsoft Hyper-V™
- KVM Hypervisor
Xen paravirtualization details are fully covered in the section Xen Paravirtualization Setup. In that section you will find information on how to:
- Set up Xen paravirtualization
- Follow the paravirtualization process
4.6.6.4 - Working with the File Integrity Monitor Tool
Using Tools > File Integrity Monitor, you can run a weekly integrity check. The PCI specifications require that sensitive files and folders on the Appliance are monitored, so the content modifications can be viewed by the Security Officer. These files include password, certificate, and configuration files. All changes made to these files can be reviewed by authorized users.
4.6.6.5 - Rotating Appliance OS Keys
When you install the appliance, it generates multiple security identifiers, such as keys, certificates, secrets, and passwords. These identifiers ensure that sensitive data is unique between two appliances in a network. When you receive a Protegrity appliance image or replicate an appliance image on-premise, the identifiers are generated with certain values. If you use the security identifiers without changing their values, then security is compromised and the system might be vulnerable to attacks. Using Rotate Appliance OS Keys, you can randomize the values of these security identifiers on an appliance. This tool must be run only when you finalize the ESA from a cloud instance.
Set ESA communication and key rotations
When an appliance, such as the DSG, communicates with the ESA, the Set ESA communication process must be performed. Before running the Set ESA communication process, ensure that the appliance OS keys are rotated.
For example, if the OS keys are not rotated, then you might not be able to add the appliances to a Trusted Appliances Cluster (TAC).
To rotate appliance OS keys:
From the CLI Manager, navigate to Tools > Rotate Appliance OS Keys.
Enter the root credentials.
The following screen appears.
Select Yes.
The following screen appears.
If you select No, then the Rotate Appliance OS Keys operation is discarded.
Enter the administrative credentials and select OK.
The following screen appears.
To update the user passwords, provide the credentials for the following users.
- root
- admin
- viewer
- local_admin
If you have deleted any of the default users, such as admin or viewer, those users will not be listed in the User’s Passwords screen.
Select Apply.
The user passwords are updated and the appliance OS keys are rotated.
After rotating the appliance keys, the hostname of the ESA changes. Update the hostname in the configuration files and rotate the Insight certificates using the steps from Updating the host name or domain name of the ESA.
4.6.6.6 - Managing Removable Drives
As a security feature, you can restrict access to the removable drives attached to your appliances. You can enable or disable access to removable disks, such as CD/DVD drives or USB flash drives.
The access to the removable disks is enabled by default.
Disabling CD or DVD drive
To disable CD or DVD drive:
On the CLI Manager, navigate to Tools > Removable Media Management > Disable CD/DVD Drives.
Press ENTER.
The following message appears.
Disabling USB Flash Drive
To disable USB flash drive:
On the CLI Manager, navigate to Tools > Removable Media Management > Disable USB Flash Drives.
Press ENTER.
The following message appears.
Enabling CD or DVD Drive
To enable CD/DVD drive:
On the CLI Manager, navigate to Tools > Removable Media Management > Enable CD/DVD Drives.
Press ENTER.
Enabling USB Flash Drive
To enable USB flash drive:
On the CLI Manager, navigate to Tools > Removable Media Management > Enable Flash Drives.
Press ENTER.
4.6.6.7 - Tuning the Web Services
Using Tools > Web Services Tuning, you can monitor and configure the Application Protector Web Service Sessions. You can view information such as Session Shared Memory ID, maximum open sessions, open sessions, free sessions, and session timeout.
CAUTION: It is recommended to contact Protegrity Support before applying any changes for Web Services.
In the Web Services Tuning screen you can find and configure the following fields.
Start Servers: In the StartServers field, you configure the number of child server processes created on startup. Since the number of processes is dynamically controlled depending on the load, there is usually no reason to adjust the default parameter.
Minimum Spare Servers: In the MinSpareServers field, you set the minimum number of child server processes not handling a request. If the number of such processes is less than configured in the MinSpareServers field, then the parent process creates new children at a maximum rate of 1 per second. It is recommended to change the default value only when dealing with very busy sites.
Maximum Spare Servers: In the MaxSpareServers field, you set the maximum number of child server processes not handling a request. When the number of such processes exceeds the number configured in MaxSpareServers, the parent process kills the excessive processes.
It is recommended to change the default value only when dealing with very busy sites. If you try to set the value lower than MinSpareServers, then it will automatically be adjusted to MinSpareServers value +1.
Maximum Clients: In the MaxClients field, you set the maximum number of connections to be processed simultaneously.
Maximum Requests per Child: In the MaxRequestsPerChild field, you set the limit on the number of requests that an individual child server will handle during its life. When the number of requests exceeds the value configured in the MaxRequestsPerChild field, the child process dies. If you set the MaxRequestsPerChild value to 0, then the process will never expire.
Maximum Keep Alive Requests: In the MaxKeepAliveRequests field, you can set the maximum number of requests that are allowed during a persistent connection. If you set 0, then the number of allowed requests is unlimited. For maximum performance, leave this number high.
Keep Alive Timeout: In the KeepAliveTimeout field, you can set the number of seconds to wait for the next request from the same client on the same connection.
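These fields correspond to standard Apache prefork-style directives. The following snippet is an illustration only; the values are hypothetical and are not the appliance defaults:
StartServers           5
MinSpareServers        5
MaxSpareServers        10
MaxClients             150
MaxRequestsPerChild    0
MaxKeepAliveRequests   100
KeepAliveTimeout       5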
4.6.6.8 - Tuning the Service Dispatcher
Using Tools > Service Dispatcher Tuning, you can configure the parameters to improve service dispatcher performance.
The Service Dispatcher parameters are the Apache Multi-Processing Module (MPM) worker parameters. The Apache MPM Worker module implements a multi-threaded multi-process web server that allows it to serve higher number of requests with limited system resources. For more information about the Apache MPM Worker parameters, refer to https://httpd.apache.org/docs/2.2/mod/worker.html.
The following table provides information about the configurable parameters and recommendations for Service Dispatcher performance.
Parameter | Default Value | Description |
---|---|---|
StartServers | 64 | The number of Apache server processes that are created at startup. It is recommended not to set the StartServers value higher than the MaxSpareThreads value, as this results in processes being terminated immediately after they are initialized. |
ServerLimit | 1600 | The maximum number of child processes. It is recommended to change the ServerLimit value only if the values in MaxClients and ThreadsPerChild need to be changed. |
MinSpareThreads | 512 | The minimum number of idle threads that are available to handle requests. It is recommended to keep the MinSpareThreads value higher than the estimated requests that will come in one second. |
MaxSpareThreads | 1600 | The maximum number of idle threads. It is recommended to reserve adequate resources to handle MaxClients. If MaxSpareThreads is insufficient, the web server frequently terminates and creates child processes, reducing performance. |
ThreadLimit | 512 | The upper limit of the configurable threads per child process. To avoid unused shared memory allocation, it is recommended not to set the ThreadLimit value much higher than the ThreadsPerChild value. |
ThreadsPerChild | 288 | The number of threads created by each child process. It is recommended to keep the ThreadsPerChild value such that it can handle common load on the server. |
MaxRequestWorkers | 40000 | The maximum number of requests that can be processed simultaneously. It is recommended to take into consideration the expected load when setting the MaxRequestWorkers values. Any connection that comes over the load, will drop, and the details can be seen in the error log. Error log file path - /var/log/apache2-service_dispatcher/errors.log |
MaxConnectionsPerChild | 0 | The maximum number of connections that a child server process can handle in its life. If the MaxConnectionsPerChild value is reached, this process expires. It is recommended to set the MaxConnectionsPerChild value to 0, so that this process never expires. |
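For reference, the defaults listed in the table would be expressed as follows in an Apache MPM worker-style configuration. This is a sketch only; the actual configuration file used by the Service Dispatcher may differ:
<IfModule mpm_worker_module>
    StartServers             64
    ServerLimit              1600
    MinSpareThreads          512
    MaxSpareThreads          1600
    ThreadLimit              512
    ThreadsPerChild          288
    MaxRequestWorkers        40000
    MaxConnectionsPerChild   0
</IfModule>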
4.6.6.9 - Working with Antivirus
The AntiVirus program uses ClamAV, an open-source, cross-platform antivirus engine designed to detect trojans, viruses, and other malware threats. A single file, a directory, or the whole system can be scanned. Infected files are logged and can be deleted or moved to a different location, as required.
The Antivirus option allows you to perform the following actions.
Option | Description |
---|---|
Scan Result | Displays the list of the infected files in the system. |
Scan now | Allows the scan to start. |
Options | Allows access to customize the antivirus scan options. |
View log | Displays the list of scan logs. |
Customizing Antivirus Scan Options from the CLI
To customize Antivirus scan options from the CLI:
Go to Tools > AntiVirus.
Select Options.
Press ENTER.
The following table provides a list of the choices available to you to customize scan options.
Table 1. List of all scan options
Option | Selection | Description |
---|---|---|
Action | Ignore | Ignore the infected file and proceed with the scan. |
Action | Move to directory | Move the infected files to a specific directory. In the text box, enter the path where the infected file should be moved. |
Action | Delete infected file | Remove the infected file from the directory. |
Recursive | True | Scan sub-directories. |
Recursive | False | Do not scan sub-directories. |
Scan directory | | Path of the directory to be scanned. |
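Because the scanner is ClamAV, these options roughly correspond to clamscan command-line flags. The following sketch is illustrative only; the paths are examples and the CLI Manager may invoke the scanner differently:
# Recursive scan of /opt/data, moving any infected files to /var/quarantine.
clamscan -r --move=/var/quarantine /opt/data
# Non-recursive scan that deletes infected files instead.
clamscan --remove /opt/data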
4.6.7 - Working with Preferences
You can set up your console preferences using the Preferences menu.
You can choose to configure the following preferences:
- Show system monitor on OS Console
- Require password for CLI system tools
- Show user Notifications on CLI load
- Minimize the timing differences
- Set uniform response time for failed login
- Enable root credentials check limit
- Enable AppArmor
- Enable FIPS Mode
- Basic Authentication for REST APIs
4.6.7.1 - Viewing System Monitor on OS Console
You can choose to show a performance monitor before switching to the OS Console. If you choose to show the monitor, then the dialog is displayed for one second before the OS Console initializes. The value must be set to Yes or No.
4.6.7.2 - Setting Password Requirements for CLI System Tools
Many CLI tools and utilities require different credentials, such as root and admin user credentials. You can choose to require or not to require a password for CLI system tools. The value must be set to Yes or No.
Specifying No here allows you to execute these tools without having to enter the system passwords. This can be useful when the system administrator is also the security manager. This setting is not recommended since it makes the Appliance less secure.
4.6.7.3 - Viewing user notifications on CLI load
You can choose to display notifications in the CLI home screen every time a user logs in to the Appliance. These notifications are specific to the user. The value must be set to Yes or No.
4.6.7.4 - Minimizing the Timing Differences
You sign in to the appliance to access the different features provided. When you sign in with incorrect credentials, the request is denied and the server sends a response indicating the reason for the login failure. The time taken to send the response varies based on the type of authentication failure, such as an invalid password, an invalid username, an expired username, and so on. Attackers can exploit these timing differences to identify valid users on the system. To mitigate such attacks, you can minimize the time interval so that the response time between an incorrect sign-in and the server response does not vary. To enable this setting, toggle the value of the Minimize the timing differences option in the CLI Manager to Yes.
The default value of the Minimize the timing differences option is No.
When you log in with a locked user account, a notification indicating that the user account is locked appears. This notification does not appear when the value of the Minimize the timing differences option is Yes. Instead, you get a notification indicating that the username or password is incorrect.
4.6.7.5 - Setting a Uniform Response Time
If you log in to the ESA Web UI with invalid credentials, then the time taken to respond varies between the different authentication failure scenarios, such as an invalid username, an invalid password, an expired username, and so on. This variable time interval may expose the system to a timing attack.
To reduce the risk of a timing attack, you reduce this variability by specifying a fixed response time for handling invalid credentials. The response time for the different authentication scenarios then remains the same.
The response time for the authentication scenarios are based on different factors such as, hardware configurations, network configurations, and system performance. Thus, the standard response time would differ between organizations. It is therefore recommended to set the response time based on the settings in your organization.
For example, if the response time for a valid login scenario is 5 seconds, then you can set the uniform response time as 5.
Enter the time interval in seconds and select OK to enable the feature. Alternatively, enter 0 in the text box to disable the feature.
4.6.7.6 - Limiting Incorrect root Login
If you log in to a system with an incorrect password, the permission to access the system is denied. Multiple attempts to log in with an incorrect password open a route to brute force attacks on the system. Brute force is an exhaustive hacking method, where a hacker guesses a user password over successive incorrect attempts. Using this method, a hacker gains access to a system for malicious purposes.
In our appliances, the root user has access to various operations in the system, such as accessing the OS console, uploading files, installing patches, and changing network settings. A brute force attack on this user might render the system vulnerable to other security attacks. Therefore, to secure the root login, you can limit the number of incorrect password attempts to the appliance. On the Preferences screen, enable the Enable root credentials limit check option to limit an LDAP user from entering incorrect passwords for the root login. The default value of the Enable root credentials limit check option is Yes.
If you enable the Enable root credentials limit check option, the LDAP user is allowed only a fixed number of successive incorrect attempts when logging in as root. After the limit on the number of incorrect attempts is reached, the LDAP user is blocked from logging in as root, thus preventing a brute force attack. After the locking period is over, the LDAP user can log in as root with the correct password.
When you enter an incorrect password for the root login, the events are recorded in the logs.
By default, the root login is blocked for a period of five minutes after three incorrect attempts. You can configure the number of incorrect attempts and the lock period for the root login.
For more information about configuring the lock period and successive incorrect attempts, contact Protegrity Support.
4.6.7.7 - Enabling Mandatory Access Control
For implementing Mandatory Access Control, the AppArmor module is introduced on Protegrity appliances. You can define profiles for protecting files that are present in the appliance.
4.6.7.8 - FIPS Mode
The Federal Information Processing Standards (FIPS) defines guidelines for data processing. These guidelines outline the usage of the encryption algorithms and other data security measures before accessing the data. Only a user with administrative privileges can access this functionality.
For more information about the FIPS, refer to https://www.nist.gov/standardsgov/compliance-faqs-federal-information-processing-standards-fips.
Enabling the FIPS Mode
To enable the FIPS mode:
Login to the appliance CLI Manager and navigate to Preferences.
Enter the root password and click OK.
The Preferences screen appears.
Select the Enable FIPS Mode.
Press Select.
The Enable FIPS Mode dialog box appears.
Select Yes and click OK.
The following screen appears.
For more information on the anti-virus settings, refer here.
Click OK.
The following screen appears. Click OK.
After the FIPS mode is enabled, restart the appliance to apply the changes.
Disabling the FIPS Mode
To disable the FIPS mode:
Login to the appliance CLI Manager and navigate to Preferences.
Enter the root password and click OK.
The Preferences screen appears.
Select the Enable FIPS Mode.
Press Select.
The Enable FIPS Mode dialog box appears.
Select No and click OK.
The following screen appears. Click OK.
After the FIPS mode is disabled, restart the appliance to apply the changes.
4.6.7.9 - Basic Authentication for REST APIs
The Basic Authentication mechanism provides only the user credentials to access protected resources on the server. The user credentials are provided in an authorization header to the server. If the credentials are accurate, then the server provides the required response to access the APIs.
For more information about the Basic Authentication, refer here.
Disabling the Basic Authentication
To disable the Basic Authentication:
Login to the appliance CLI Manager and navigate to Preferences.
Enter the root password and click OK.
The Preferences screen appears.
Select the Basic Authentication for REST APIs.
Press Select.
The Basic Authentication for REST APIs dialog box appears.
Select No and click OK.
The message Basic Authentication for REST APIs disabled successfully appears.
Click OK.
Important:
If the Basic Authentication is disabled, then the following APIs are affected:
- GetCertificate REST API: Fetch certificate to protector.
- DevOps API: Policy Management REST API.
- RPS REST API: Resilient Package Immutable REST API.
The GetCertificate REST API stops working for the 9.1.x protectors when Basic Authentication is disabled. However, the DevOps and RPS REST APIs can also use the Certificate and JWT authentication support.
Enabling the Basic Authentication
To enable the Basic Authentication:
Login to the appliance CLI Manager and navigate to Preferences.
Enter the root password and click OK.
The Preferences screen appears.
Select the Basic Authentication for REST APIs.
Press Select.
The Basic Authentication for REST APIs dialog box appears.
Select Yes and click OK.
The message Basic Authentication for REST APIs enabled successfully appears.
Click OK.
4.6.8 - Command Line Options
4.6.8.1 - Forwarding system logs to Insight
Log in to the CLI Manager on the ESA or the appliance.
Navigate to Tools > PLUG - Forward logs to Audit Store.
Enter the password for the root user and select OK.
Enter the IP addresses of all the nodes in the Audit Store cluster with the Ingest role and select OK. Specify multiple IP addresses separated by commas.
To identify the node with the Ingest roles, log in to the ESA Web UI and navigate to Audit Store > Cluster Management > Overview > Nodes.
Enter y to fetch certificates and select OK.
Specifying y fetches the td-agent certificates from the target node. These certificates can then be used to validate and connect to the target node. They are required to authenticate with Insight while forwarding logs to the target node. The passphrases for the certificates are stored in the /etc/ksa/certs directory.
Specify n if the certificates are already available on the system, fetching certificates is not required, or custom certificates are to be used.
Enter the credentials for the admin user of the destination machine and select OK.
The td-agent service is configured to send logs to Insight and the CLI menu appears.
4.6.8.2 - Forwarding audit logs to Insight
The example provided here is for DSG. Refer to the specific protector documentation for the protector configuration.
Log in to the CLI Manager on the appliance.
Navigate to Tools > ESA Communication.
Enter the password of the root user of the appliance and select OK.
Select the Logforwarder configuration option, press Tab to select Set Location Now, and press Enter.
The ESA Location screen appears.
Select the ESA to connect with, then press Tab to select OK, and press ENTER.
The ESA selection screen appears.
To enter the ESA details manually, select the Enter manually option. A prompt is displayed to enter the ESA IP address or hostname.
Enter the ESA administrator username and password to establish communication between the ESA and the appliance. Press Tab to select OK and press Enter.
The Enterprise Security Administrator - Admin Credentials screen appears.
Enter the IP address or hostname of the ESA. Press Tab to select OK and press ENTER. Specify multiple IP addresses separated by commas. To add an ESA to the list, specify the IP addresses of all the existing ESAs in the comma-separated list, and then specify the IP address of the additional ESA.
The Forward Logs to Audit Screen screen appears.
After successfully establishing the connection with the ESA, the following summary dialog box appears. Press Tab to select OK and press Enter.
Repeat step 1 to step 8 on all the appliance nodes in the cluster.
4.6.8.3 - Applying Audit Store Security Configuration
From the ESA Web UI, navigate to System > Services > Audit Store.
Start the Audit Store Repository service.
Open the ESA CLI.
Navigate to Tools.
Run Apply Audit Store Security Configs.
4.6.8.4 - Setting the total memory for the Audit Store Repository
The RAM allocated for the Audit Store on the appliance is set to an optimal default value. If this value does not meet your requirements, then use this tool to modify the RAM allocation. However, when certain operations are performed, such as when the role of the node is modified or a node is removed from the cluster, the value that you set is overwritten and the RAM allocation reverts to the optimal default value. In this case, perform these steps again to set the RAM allocation after modifying the role of the node or adding a node back to the Audit Store cluster.
From the ESA Web UI, navigate to System > Services > Audit Store.
Start the Audit Store Repository service.
Open the ESA CLI.
Navigate to Tools.
Run Set Audit Store Repository Total Memory.
Enter the password for the root user and select OK.
Specify the total memory that must be allocated for the Audit Store Repository and select OK.
Select Exit to return to the menu.
Repeat the steps on the remaining nodes, if required.
4.6.8.5 - Rotating Insight certificates
For more information about rotating the Insight certificates, refer here.
4.7 - Web User Interface (Web UI) Management
The Web UI is a web-based environment for managing status, policy, administration, networking, and so on. The operations that you perform using the CLI Manager can also be performed from the Web UI.
4.7.1 - Working with the Web UI
The following screen displays the ESA Web UI.
The following table describes the details of options available on the Web UI menu.
Options | Description |
---|---|
Dashboard | View user notifications, disk usage, alerts, server details, memory / CPU / network utilization, and cluster status |
Policy Management | Manage creating and deploying policies. For more information about policies, refer to Policy Management. |
Key Management | Manage keys, such as the Master Key and Data Store Keys. For more information about keys, refer to Key Management. |
System | Configure Trusted Appliances Cluster, set up backup and restore, view system statistics, graphs, information, and manage services. |
Logs | View logs that are generated for web services. |
Settings | Configure network settings, set up certificates, manage users, roles, and licenses. |
Audit Store | Manage the repository for all audit data and logs. For more information about the Audit Store, refer to Audit Store. View dashboards that display information using graphs and charts for a quick understanding of the protect, unprotect, and reprotect transactions performed. For more information about dashboards, refer to Dashboards. |
The following table describes the icons that are visible on the ESA Web UI.
Icon | Description |
---|---|
Help | Download support logs, view product documentation, and view the version information about the ESA and its components. |
Session | Extend session timeout |
Notification | Notifications and alerts |
User | Edit profile or sign out of the profile |
Power Option | Power off or restart the system |
Logging into the appliance Web UI
Log in to the Appliance Web User Interface (Web UI) to manage the Appliance settings and monitor your Appliance.
When you login through the CLI or the Web UI for the first time, with the password policy enabled, the Update Password screen appears. It is recommended that you change the password since the administrator sets the initial password.
If you log in to the Web UI for the first time with the Shell Accounts role and with access to Shell (non-CLI) Access permissions, you cannot access the Web UI, but you can update the password through the Update Password screen. For more information about configuring the password policy, refer to Password Policy Configuration.
It is recommended to configure your browser settings such that passwords are not saved. If you save the password, then on your next login you would start the session as a previously logged-in user.
The following screen displays the login screen of the Web UI.
To log into the Appliance Web UI:
- From your web browser, type the Management IP address for your appliance using the HTTPS protocol, for example, https://10.1.0.185/. The Web Interface splash screen appears.
- Enter your user credentials. If the credentials are approved, then the Appliance Dashboard appears.
Viewing user notifications
A message at the top of the screen shows the number of notifications that appear on this page and other web pages. If you click the notification, then you are directed to the Services and Status screen.
Alternatively, you can store the messages in the Audit Store and use Discover to view the detailed logs.
You can delete the messages after reading them.
The messages that are older than a year are automatically deleted from the User Notification list, but retained in the logs.
On the User Notifications area of the ESA, the notifications and events occurring on the appliances communicating with it are also visible.
For the notifications to appear on the ESA, ensure that the same set of client key pairs is present on the ESA and the appliances that communicate with the ESA. For more information about certificates, refer to Certificate Management.
Scheduled tasks generate some of these messages. To view a list of scheduled tasks that generate these messages, navigate to System > Task Scheduler.
Logging out of the appliance Web UI
There are two ways to log off the Appliance Web UI.
- Log off as a user, while the Appliance continues to run.
- Restart or shut down the Appliance.
In case of cloud platforms, such as Azure, AWS, or GCP, the instances run the appliance. Powering off the instance running the appliance might not shut down the appliance. You must power off the appliance only from the CLI Manager or the Appliance Web UI.
To log out as a user:
Click the second icon that appears on the right of the Appliance Toolbar.
Click Sign Out.
The login screen appears.
Shutting down the appliance
The Reboot option shuts down the appliance and restarts it again. Users will need to login again when the authentication screen appears.
With cloud platforms such as Azure, AWS, or GCP, the instances run the appliance. Powering off the instance from the cloud console may not shut down the appliance gracefully. It is recommended to power off from the CLI Manager or Web UI.
To shut down the appliance:
Click the last icon from Appliance Toolbar.
Click Shutdown.
Enter your password to confirm.
Provide the reason for the shut down and click OK.
The appliance server shuts down. The Web UI screen may remain displayed in the browser window; however, the Web UI does not work.
4.7.2 - Description of Appliance Web UI
The Appliance Web UI appears upon successful login. This page shows the Host to which you are attached, IP address, and the current user logged on.
The different menu options are given in the following table.
Option | Using this navigation menu you can | Applicable to |
---|---|---|
Dashboard | A view-only window, which provides status at a glance – service, server, notifications, disk usage, and graphical representation of CPU, memory, and network usage. | All |
Policy Management | Used to create data stores, data elements, masks, roles, keys, and deploy a policy. | ESA |
Key Management | Used to view information, rotate, or change the key states of the Master Key, Repository Key, and Data Store Keys. View information for the active Key Store or switch the Key Store. | ESA |
System | Configure Trusted Appliances Cluster, set up backup and restore, view system statistics, graphs, and information, and manage services. | All |
Logs | Used to view logs for separate tasks such as web services engine, policy management, DFSFP, and Appliance. | ESA |
Settings | Configure network settings, set up certificates, manage users, roles, and licenses. | All |
Audit Store | A repository for all audit data and logs. It is used to initialize analytics, view and analyze Insight dashboards, and manage cluster. | ESA |
Cloud Gateway | Used to create certificates, tunnels, services, and rules for traffic flow management; add DSG nodes to a cluster; monitor cluster health; and view logs. | DSG |
The following graphic illustrates the different panes in the ESA Web UI.
Component | Description |
---|---|
Navigation Pane | The number of options in the navigation menu depends on the installed Appliance. The functionality is also restricted based on the user permissions. You could have read-write or read-only permissions for certain options. Using the different options, you can create a policy, add a user, run a few security checks such as file integrity or scan for virus, review or change the network settings, among others. |
Workspace | This window on the right includes either displayed information or fields where information needs to be added. When an option is selected, the resulting window appears. |
Status Bar | The bar at the bottom displays the last refresh activity time. If you click the rectangle, a separate Appliance CLI screen opens. All these options are available in the CLI screen as well. |
Toolbar | This bar at the top displays the name of the currently open window on the left and icons on the right. |
The details of the icons in the toolbar is as follows:
Component | Icon Name | Description |
---|---|---|
1 | Notification | The number is the total number of unopened messages for you. |
2 | User | Change your password or log out as a user. ESA continues to run. |
3 | Help | Download the file(s) that are required by Protegrity Support for troubleshooting, view the product documentation, and view the version information about the ESA and its components. |
4 | Session | Extend the session without timing out. You have to enter your credentials again to log in. |
5 | Power Option | Reboot or shut down the Appliance, after ensuring that the Appliance is not being used. |
Support
The Help option on the toolbar allows you to download information about the status of the appliance and other services, which is sometimes required by Protegrity Support for troubleshooting.
Check the boxes that you require and optionally provide a prefix to the automatically generated file name. You may optionally add a description and protect the resulting zip file and all the xml files inside with a password.
Viewing Version of the Installed Components of the ESA
The Help > About option on the toolbar allows you to view the version information about the ESA and its components.
The following figure shows the version information of the installed components of the ESA.
For example, you can view the version of the Data Protection System (DPS) that is being used.
Extending Timeout from Appliance
The following icons are available for all Appliances on the top right corner of the Appliance Web UI page.
The hourglass icon enables you to extend the working time for the Appliance. To extend the timeout for the Appliance Web UI, click on the hourglass icon.
A message appears mentioning Session timeout extended successfully.
4.7.3 - Working with System
The System Information navigation folder includes all information about the appliance listed below.
- Services and their statuses
- The hardware and software information
- Performance statistics
- Graphs
- Real-time graphs
- Appliance logs
The System option in the left pane provides the following options:
System Options | Description |
---|---|
Services | View and manage OS, logging and reporting, policy management and other miscellaneous services. |
Information | View the health of the system. |
Trusted Appliances Cluster | View the status of trusted appliances clusters and saved files. |
System Statistics | View the performance of the hardware and networks. |
Backup and Restore | Take backups of files and restore these, as well as take backups of full OS and log files. |
Task Scheduler | Schedule tasks to run in the background such as anti-virus scans and password policy checks, among others. |
Graphs | View how the system is running in a graphical form. |
4.7.3.1 - Working with Services
You can manually start, restart, and stop services in the appliance. You can act upon all services at once, or select specific ones.
In the System > Services page, the tabs list the available services and their statuses. The Information tab appears with the system information like the hardware information, system properties, system status, and open ports.
Although the services can be started or stopped from the Web UI, the start/stop/restart action is restricted for some services. These services can be operated from the OS Console. Run the following command to start, stop, or restart a service.
/etc/init.d/<service_name> stop/start/restart
For example, to start the docker service, run the following command.
/etc/init.d/docker start
If you stop the Service Dispatcher service from the Web UI, you might not be able to access the ESA from the Web browser. Hence, it is recommended to stop the Service Dispatcher service from the CLI Manager only.
Web Interface Auto-Refresh Mode
You can set the auto-refresh mode to refresh the necessary information according to a set time interval. The Auto-Refresh option is available in the status bar of pages that show dynamically changing information, such as statuses and logs. For example, an Auto-Refresh pane is available at the bottom of the System > Services page.
The Auto-Refresh pane is not shown by default. You should click the Auto-Refresh button to view the pane.
To modify the auto-refresh mode, from the Appliance Web Interface, select the necessary value in the Auto-Refresh drop-down list. The refresh is applied in accordance with the set time.
4.7.3.2 - Viewing information, statistics, and graphs
Viewing System Information
All hardware information, system properties, system statuses, open ports and firewall rules are listed in the Information tab.
The information is organized into sections called Hardware, System Properties, System Status, Open Ports, and Firewall.
Hardware section includes information on system, chipset, processors, and amount of total RAM.
System Properties section appears with information on current Appliance, logging server, and directory server.
System Status section lists properties such as date and time, boot time, up time, number of logged-in users, and average load.
Open Ports section lists types, addresses, and names of services that are running.
Firewall section in System > Information lists all firewall rules, firewall status (enabled/disabled), and the default policy (drop/accept) which determines what to do on packets that do not match any existing rule.
Viewing System Statistics
Using System > System Statistics, you can view performance statistics to assess system usage and efficiency. The Performance page refreshes itself every few seconds and shows the statistics in real time.
The Performance page shows system information:
- Hardware - System, chipset, processors, total RAM
- System Status - Date/time, boot time, up-time, users connected, load average
- Networking - Interface, address, bytes sent/received, packets sent/received
- Partitions - Partition name and size, used and available space
- Kernel - Idle time, kernel time, I/O time, user time
- Memory - Memory total, swap cached, and inactive, among others
You can customize the page refresh rate, so that you are viewing the latest information at any time.
Viewing Performance Graphs
Using System > Graphs, you can view performance graphs and real-time graphs in addition to statistics. In the Performance tab you can view a graphical representation of performance statistics from the past 5 minutes or past 24 hours for these items:
- CPU application use - % CPU I/O wait, CPU system use
- Total RAM - Free RAM, used RAM
- Total Swap - Free Swap, used Swap
- Free RAM
- Used RAM
- System CPU usage
- Application CPU use, %
- Log space used - Log space available, log space total
- Application data used - Application data available space, application data total size
- Total page faults
- File descriptor usage
- ethMNG incoming/ethMNG outgoing
- ethSRV0 incoming/ethSRV0 outgoing
- ethSRV1 incoming/ethSRV1 outgoing
In the Realtime Graphs tab you can monitor current state of performance statistics for these items:
- CPU usage
- Memory Status - free and used RAM
The following figure illustrates the Realtime Graphs tab.
4.7.3.3 - Working with Trusted Appliances Cluster
The Clustering menu becomes available in the appliance Web Interface, System > Trusted Appliance Cluster. The status of the cluster is by default updated every minute, and it can be configured using Cluster Service Interval, available in the CLI Manager.
The Status tab displays information about the nodes in the cluster. In the Filter drop-down combo box, you can filter the nodes by name, address, and label.
In the Display drop-down combo box, you can select to display node summary, top 10 CPU consumers, top 10 Memory consumers, free disk report, TCP/UDP network information, system information, and display ALL.
The Saved Files tab displays the files that were saved in the CLI Manager. These files show the status of the appliance cluster node or the result of the command run on the cluster.
4.7.3.4 - Working with Backup and restore
The backup process copies or archives data. The restore process ensures that the original data is restored if data corruption occurs.
You can back up and restore configurations and the operating system from the Backup/Restore page. It is recommended to have a backup of all system configurations.
The Backup/Restore page includes Export, Import, OS Full, and Log Files tabs, which you can use to create configuration backups and restore them later.
Using Export, you can also export a configuration to a trusted appliances cluster, and schedule periodic replication of the configuration on all nodes that are in the trusted appliances cluster. Using export this way, you can periodically update the configuration on all, or just necessary nodes of the cluster.
Using Import, you can restore the created backups of the product configurations and appliance OS core configuration.
Using Full OS Backup, you can create backup of the entire appliance OS.
The Full OS Backup/Restore feature of the Protegrity appliances is not available on cloud platforms.
4.7.3.4.1 - Working with OS Full Backup and Restore
It is recommended to perform a full OS backup before any important system changes, such as an appliance upgrade or creating a cluster, among others.
Backing up the appliance OS
The backup process may take several minutes to complete.
Perform the following steps to back up the appliance OS.
Log in to the Appliance Web UI.
Proceed to System > Backup & Restore.
Navigate to the O.S Full tab and click Backup.
A confirmation message appears.
Press ENTER.
The Backup Center screen appears and the OS backup process is initiated.
Navigate to Appliance Dashboard. A notification O.S Backup has been initiated appears. After the backup is complete, a notification O.S Backup has been completed appears.
Restoring the appliance OS
Use caution when restoring the appliance OS. Consider a scenario where it is necessary to restore a full OS backup that includes the external Key Store data. If the external Key Store is not working, then the HubController service does not start after the restore process.
Perform the following steps to restore the appliance OS.
- Log in to the Appliance Web UI.
- Proceed to System > Backup & Restore.
- Navigate to the O.S Full tab and click Restore. A message that the restore process is initiated appears.
- Select OK. The restore process starts and the system restarts after the process is completed.
- Log in to the appliance and navigate to Appliance Dashboard. A notification O.S Restore has been completed appears.
4.7.3.4.2 - Backing up the data
Using the Export tab, you can create backups of the product configurations and/or appliance OS core configuration.
Before you begin
Starting from the Big Data Protector 7.2.0 release, the HDFS File Protector (HDFSFP) is deprecated. The HDFSFP-related sections are retained to ensure coverage for using an older version of Big Data Protector with the ESA 7.2.0.
If you plan to use ESAs in a Trusted Appliances Cluster, and you are using HDFSFP with the DFSFP patch installed on the ESA, then ensure that you clear the DFSFP_Export check box when exporting the configurations from the ESA, which will be designated as the Master ESA.
In addition, for the Slave ESAs, ensure that the HDFSFP datastore is not defined and the HDFSFP service is not added.
The HDFSFP data from the Master ESA should be backed up to a file and moved to a backup repository outside the ESA. This will help in retaining the data related to HDFSFP, in cases of any failures.
Backing up configuration to local file
Perform the following steps to backup the configuration to local file.
- Navigate to System > Backup & Restore > Export.
- In the Export Type area, select To File radio button.
- In the Data To Export area, select the items to be exported. Click more... for the description of every item.
- Click Export. The Output File screen appears.
- Enter information in the following fields:
  - Output File: Name of the file. If you want to replace an existing file on the system with this file, select the Overwrite existing file check box.
  - Password: Password for the file.
  - Export Description: Information about the file.
- Click Confirm. A message Export operation has been completed successfully appears. The created configuration is saved to your system.
Exporting Configuration to Cluster
You can export your appliance configuration to the trusted appliances cluster, which your appliance belongs to. The procedure of creating the backup is almost the same as exporting to a file.
You need to define what configurations to export, and which nodes in the cluster receive the configuration. You do not need to import the files as is required when backing up the selected configuration. The configuration will be automatically replicated on the selected nodes when you export the configuration to the cluster.
When you are exporting data from one ESA to another, ensure that you run separate tasks to export the LDAP settings first and then the OS settings.
Perform the following steps to export a configuration to a trusted appliances cluster.
Navigate to System > Backup & Restore > Export.
In the Export Type area, select the To Cluster radio button.
In the Data to import area, customize the items that you want to export from your machine and import to the cluster nodes.
If the configurations must be exported on a different ESA, then clear the Certificates check box. For information about copying Insight certificates across systems, refer to Rotating Insight certificates.
In the Target Cluster Nodes area, select which nodes you want to export the configuration to. You can specify them by label or select individual nodes. You can select to show command line, if necessary.
Click Export.
4.7.3.4.3 - Backing up custom files
In the ESA, you can export or import the files that cannot be exported using the cluster export task. The custom set of files include configuration files, library files, directories containing files, and any other files. On the ESA Web UI, navigate to Settings > System > Files to view the customer.custom file. That file contains the list of files to include for export and import.
The following figure displays a sample snippet of the customer.custom file.
If you include a file, then you must specify the full path of the file. The following snippet explains the format for exporting a file.
/<directory path>/<filename>.<extension>
For example, to export the abc.txt file that is present in the test directory, you must add the following line in the customer.custom file.
/test/abc.txt
If the file does not exist, then an error message appears and the import/export process terminates. In this case, you can add the prefix optional to the file path in the customer.custom file. This ensures that if the file does not exist, then the import/export process continues without terminating abruptly.
If the file exists and the optional prefix is added, then the file is exported to the other node. For example, with the following entry, if the file 123.txt is present in the test directory, then it is exported to the other node. If the file does not exist, then the export of this file is skipped and the other files are exported.
optional:/abc/test/123.txt
For more information about exporting files, refer to Editing the customer.custom File.
If you include a directory, then you must specify the full path for the directory. All the files present within the directory are exported. The following snippet explains the format for exporting all the files in a directory.
/<directory path>/*
For example, to export a directory test_dir that is present in the /opt directory, add the following line in the customer.custom file.
/opt/test_dir/*
You can also include all the files present under the subdirectories for export. If you prefix the directory path with the value recursive, then all the files within the subdirectories are also exported.
For example, to export all the subdirectories present in the test_dir directory, add the following line in the customer.custom file.
recursive:/opt/test_dir/
For more information about exporting directories, refer to the section Editing the customer.custom File to Include Directories.
You must export the custom files before importing them to a file or on the other nodes on a cluster.
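For reference, a hypothetical customer.custom file that combines the formats described above could look as follows (all paths are illustrative):
#Export a single file
/opt/test/abc.txt
#Skip the entry if the file does not exist
optional:/opt/test/pqr.txt
#Export all files directly inside a directory
/opt/test_dir/*
#Export all files in a directory and its subdirectories
recursive:/opt/test_dir/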
4.7.3.4.4 - Exporting the custom files
Perform the following steps to export the customer.custom file to a local file or to a cluster.
Exporting the customer.custom file to a local file
- Navigate to System > Backup & Restore > Export.
- In the Export Type area, select To File.
- In the Data To Export area, select Appliance OS Configuration.
- Click Export. The Output file screen appears.
- Enter the name of the file in the Export Name text box.
- Enter the required password in the Password text box.
- Click Confirm. The message Export operation has been completed successfully appears.
- Click the Done button. The file is exported and stored in the /products/exports directory.
- On the CLI Manager, navigate to Administration > Backup/Restore Center > Export data/configurations to a local file.
- Select Appliance OS Configuration and select OK. A screen to enter the export information appears.
- Enter the required name of the file in the Export Name text box.
- Enter the required password in the Password and Confirm text boxes.
- Select OK.
- Select Done after the export operation completes.
Exporting the customer.custom file on a cluster
On the Web UI, navigate to System > Backup & Restore > Export.
In the Export Type area, select the Cluster Export option.
If the configurations must be exported to a different ESA, then clear the Certificates check box. For information about copying Insight certificates across systems, refer to Rotating Insight certificates.
Click Start Wizard.
Select User custom list of files in the Data To Import tab.
Click Next.
Select the required options in the Source Cluster Nodes tab and click Next.
Select the required options in the Target Cluster Nodes tab and click Review.
Enter the required data in the Basic Properties, Frequency, Logging, and Restriction areas. For more information about the task details, refer to Schedule Appliance Tasks. The message Export operation has been completed successfully appears.
Click Save. A File saved message appears.
- On the CLI Manager, navigate to Administration > Backup/Restore Center > Export data/configurations to remote appliance(s).
- Select the required file or configuration to export and select OK.
- Enter the required password for the file or configuration.
- Select Custom Files and folders and select OK.
- Enter the required credentials for the target appliance on the Target Appliance(s) screen.
- Select OK. The custom files and configurations are exported to the target node.
- Click Save.
4.7.3.4.5 - Importing the custom files
Perform the following steps to import the customer.custom file to a local file.
Importing the customer.custom file to a local file
- On the Web UI, navigate to System > Backup & Restore > Import.
- From the dropdown menu, select the exported file.
- Click Import.
- On the following screen, select Custom Files and folders.
- Enter the password for the file in the Password text box and click Import. The message File has been imported successfully appears.
- Click Done.
- On the CLI Manager, navigate to Administration > Backup/Restore Center > Import configurations from a local file. The Select an item to import screen appears.
- Select the required file or configuration to import and select OK. The contents of the file appear.
- Select OK.
- Enter the required password on the following screen and select OK.
- Select the required components.
Warning: Ensure that you select each component individually.
- Select OK. The file import process starts.
- Select Done after the import process completes.
4.7.3.4.6 - Working with the custom files
Editing the customer.custom file
Administration privileges are required for editing the customer.custom file.
This section describes the various options that are applicable when you export a file.
Consider the following scenarios for exporting a file:
- Include a file abc.txt present in the /opt/test directory.
- Include files named abc with any extension in the /opt/test/check directory.
- Include multiple files using regular expressions.
To edit the customer.custom file from the Web UI:
- On the Web UI, navigate to Settings > System > Files.
- Click Edit beside the customer.custom file.
- Configure the following settings to export the file.
#To include the abc.txt file
/opt/test/abc.txt
#If the file does not exist, skip the export of the file
optional:/opt/test/pqr.txt
#To include all text files
/opt/test/*.txt
#To include all the file extensions for the file abc present in the /opt/test/check directory
/opt/test/check/abc.*
#To include the files file1.txt, file2.txt, file3.txt, file4.txt, and file5.txt
/opt/test/file[1-5].txt
- Click Save.
It is recommended to use the Cluster export task to export Appliance Configuration settings, SSH settings, Firewall settings, LDAP settings, and HA settings. Do not import Insight certificates using Certificates; instead, rotate the Insight certificates using the steps from Rotating Insight certificates. If the files exist at the target location, then they are overwritten.
Editing the customer.custom File to Include Directories
This section describes the various options that are applicable when you export a file.
Consider the following scenarios for exporting files in a directory:
- Export files in the directory abc_dir present in the /opt/test directory
- Export all the files present in subdirectories under the abc_dir directory
Ensure that the files mentioned in the customer.custom file are not specified in the exclude file. For more information about the exclude file, refer to the section Editing the Exclude File.
To edit the customer.custom file from the Web UI:
On the Web UI, navigate to Settings > System > Files.
Click Edit beside the customer.custom file. The following is a snippet listing the sample settings for exporting a directory.
#To include all the files present in the abc_dir directory
/opt/test/abc_dir/*
#To include all the files in the subdirectories present in the abc_dir directory
recursive:/opt/test/abc_dir
If you have a Key Store configured with ESA, then you can export the Key Store libraries and files using the customer.custom file. The following is a sample snippet listing the settings for exporting a Key Store directory.
#To include all the files present in the Safeguard directory
/opt/safeguard/*
#To include all the files present in the Safenet directory
/usr/safenet/*
The following is a sample snippet listing the settings for exporting the self-signed certificates.
#To include all the files present in the Certificates directory
/etc/ksa/certificates
Click Save.
Editing the customer.custom File to include files
The library files and other settings that are not exported using the cluster export task can be addressed using the customer.custom file.
Ensure that the files mentioned in the customer.custom file are not specified in the exclude file. For more information about the exclude file, refer to the section Editing the Exclude File.
To edit the customer.custom file from the Web UI:
On the Web UI, navigate to Settings > System > Files.
Click Edit beside the customer.custom file. If you have a Key Store configured with ESA, then you can export the Key Store libraries and files using the customer.custom file. The following is a sample snippet listing the settings for exporting a Key Store directory.
#To include all the files present in the Safeguard directory
/opt/safeguard/*
#To include all the files present in the Safenet directory
/usr/safenet/*
The following is a sample snippet listing the settings for exporting the self-signed certificates.
#To include all the files present in the Certificates directory
/etc/ksa/certificates
Click Save.
Editing the exclude files
The exclude file contains the list of system files and directories that you don’t want to export. You can access the exclude file from the CLI Manager only. The exclude file is present in the /opt/ExportImport/filelist directory.
- A user with root privileges is required to edit the exclude file, as it lists the system directories that you cannot import.
- If a file or directory is present in both the exclude file and the customer.custom file, then the file or directory is not exported.
The following directories are in the exclude file:
- /etc
- /usr
- /sys
- /proc
- /dev
- /run
- /srv
- /boot
- /mnt
- /OS_bak
- /opt_bak
The list of files mentioned in the exclude file affects only the customer.custom file and not the standard cluster export tasks.
If you want to export or import files, then ensure that these files are not listed in the exclude file.
To edit the exclude file:
- On the CLI Manager, navigate to Administration > OS Console.
- Navigate to the /opt/ExportImport/filelist/ directory.
- Edit the exclude file using an editor.
- Perform the required changes.
- Save the changes.
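For illustration, assuming the exclude file in that directory is a plain-text file (the exact file name may differ on your appliance), the edit from the OS Console could look like this:
cd /opt/ExportImport/filelist/
ls                 # identify the exclude file present in this directory
vi exclude         # file name is an assumption; open the actual exclude file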
4.7.3.4.7 - Restoring configurations
Using the Import tab, you can restore the created backups of the product configurations and appliance OS core configuration.
Using the Import tab, you also can:
- Upload a configuration file saved on your local machine to the appliance.
- Download a configuration file from the appliance and save it to your local machine.
Before importing
Before importing the configuration files, ensure that the required products are installed in the appliance. For example, if you are importing files related to Consul Configuration and Data, ensure that the Consul product is installed in the appliance.
When you import files or configurations on an appliance from another appliance, settings such as firewall, SSH, or OS settings are imported. During this import, the settings on the target appliance might change, which might cause a product or component on the target appliance to stop functioning. Thus, after an import of the files or settings is completed, ensure that settings such as ports, SSH, and firewall on the target machine are compatible with the latest features and components. For example, new features, such as Consul, were added in v7.1 MR2. When you import settings from previous versions, the settings in v7.1 MR2, such as firewall or ports, are overridden. You must therefore ensure that the rules required for the new features are added.
When you import files or configurations, ensure that each component is selected individually.
Restoring configuration from backup
To restore a configuration from backup:
Navigate to the System > Backup & Restore.
Navigate to the Import tab, select a saved configuration from the list and click Import.
Choose specific components from the exported configuration if you do not want to restore the whole package.
If the configurations must be imported on a different ESA, then clear the Certificates check box. For information about copying Insight certificates across systems, refer to Rotating Insight certificates.
In the Password field, enter the password for the exported file and click Import.
4.7.3.4.8 - Viewing Export/Import logs
When you export or import files using the Web UI, the operation log is saved automatically. These log files are displayed in Log Files tab. You can view, delete, or download the log files.
When you export or import files using the CLI Manager, the details of the files are logged.
4.7.3.5 - Scheduling appliance tasks
By navigating to System > Task Scheduler, you can schedule appliance tasks to run automatically. You can create and manage tasks from the ESA Web UI.
4.7.3.5.1 - Viewing the scheduler page
The following figure illustrates the default scheduled tasks that are available after you install the appliance.
The Scheduler page displays the list of available tasks.
To edit a task, click Edit. After performing the required changes, click Save, then click Apply and enter the root password.
To delete a task, select the required task and click Remove. Then, click Apply and enter the root password to remove the task.
On the ESA Web UI, navigate to Audit Store > Dashboard > Discover screen to view the logs of a scheduled task.
For creating a scheduled task, the following parameters are required.
- Basic properties
- Customizing frequency
- Execution
- Restrictions
- Logging
The following tasks must be enabled on any one ESA in the Audit Store cluster. Enabling the tasks on multiple nodes will result in a loss of data. If these scheduler task jobs are enabled on an ESA that was removed, then enable these tasks on another ESA in the Audit Store cluster.
- Update Policy Status Dashboard
- Update Protector Status Dashboard
Basic properties
In the Basic Properties section, you must specify the basic and mandatory attributes of the new task. The following table lists the basic attributes that you need to specify.
Attribute | Description |
---|---|
Name | A unique numeric identifier must be assigned. |
Description | The task displayed name, which should also be unique. |
Frequency | You can specify the frequency of the task: |
Customizing frequency
In the Frequency section of the new scheduled task, you can customize the frequency of the task execution. The following table lists the frequency parameters which you can additionally define.
Attribute | Description | Notes |
---|---|---|
Minutes | Defines the minutes when the task will be executed: | Every minute is the default. You can select several options, or clear the selection. For example, you can select to execute the task on the first, second, and 9th minute of the hour. |
Days | Defines the day of the month when the task will be executed | Every day is the default. You can select several options, or clear the selection. |
Days of the week | Defines the day of the week when the task will be executed: | Every DOW (day of week) is the default. You can select several options, or clear the selection. |
Hours | Defines the hour when the task will be executed | Every hour is the default. You can select several options, or clear the selection. If you select *, then the task will be executed each hour. If you select */6, then the task will be executed every six hours at 0, 6, 12, and 18. |
Month | Defines the month when the task will be executed | Every month is the default. You can select several options, or clear the selection. If you select *, then the task will be executed each month. |
The Description field of the Frequency section is automatically populated with the frequency details that you specified in the fields listed in the preceding table. Task Next Run indicates when the next run of the task will occur.
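For illustration only, these frequency fields correspond to standard cron-style scheduling. A hypothetical crontab-style line (the script path is a placeholder, not part of the product) for a task running at minute 0 of every sixth hour would be:
# minute hour day-of-month month day-of-week  command
0 */6 * * * /opt/scripts/example_task.sh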
Execution
In the Command Line section, you need to specify the command which will be executed, and the user who will execute this command. You can optionally specify the command parameters separately.
Command Line
In the Command Line edit field, specify a command that will be executed. Each command can include the following items:
- The task script/executable command.
- The user name to execute the task (optional).
- Parameters to the script as part of the command (optional); these can also be specified separately in the Parameters section.
Parameters
Using the Parameters section, you can specify the command parameters separately.
You can add as many parameters as you need using the Add Param button, and remove the unnecessary ones by clicking the Remove button.
For each new parameter you need to enter Name (any), Type (option), and Text (any).
Each parameter can be of the text (default) or system type. If you specify system, then the parameter is actually a script that is executed, and its output is passed as the parameter.
Username
In the Username edit field, specify the user who owns the task. If not specified, then tasks run as root.
Only root, local_admin, and ptycluster users are applicable.
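As a hypothetical example (the script path, parameter value, and user are illustrative, not part of the product), the Execution fields could be filled in as follows, and the scheduler would then run the equivalent command:
# Command Line : /opt/scripts/check_disk.sh
# Parameters   : Name=threshold, Type=text, Text=80
# Username     : local_admin
# Equivalent command run by the scheduler:
/opt/scripts/check_disk.sh 80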
Restrictions
In a Trusted Appliance cluster, Restrictions allow you to choose the sites on which the scheduled tasks will be executed. The following table lists the restrictions that you can select.
Attribute | Description |
---|---|
On master site | The scheduled tasks are executed on the Master site |
On non-master site | The scheduled tasks are executed on the non-Master site |
If you select both the options, On master site and On non-master site, then the scheduled task is executed on both sites.
Logging
In the Logging section, you should specify the logging details explained in the table below:
Logging Detail | Description | Notes |
---|---|---|
Show command line in logs? | Select a check-box to show the command line in the logs. | It is advisable not to select this option if the command includes sensitive data, such as passwords. |
SysLog/Log Server | Define the following details: | You should configure these fields to be able to easily analyze the incoming logs. Specifies whether to send an event to the Log Server (ESA) and the severity: No event, Lowest, Low, Medium, High, Critical for failed/success task execution. |
Log File | Specify the file names where the success and failed operations are logged. | Specifies whether to store the task execution details in local log files. You can specify to use the same file for successful and failed events. These files will be located in /var/log . You can also examine the success and failed logs in the Appliance Logs, in the appliance Web Interface. |
4.7.3.5.2 - Creating a scheduled task
Perform the following steps to create a scheduled task.
- On the ESA Web UI, navigate to System > Task Scheduler.
- Click New Task. The New Task screen appears.
- Enter the required information in the Basic Properties section.For more information about the basic properties, refer here.
- Enter the required information in the Frequencies section.For more information about customizing frequencies, refer here.
- Enter the required information in the Command Line section.For more information about executing command line, refer here.
- Enter the required information in the Restrictions section.For more information about restrictions, refer here.
- Enter the required information in the Logging section.For more information about logging, refer here.
- Click Save.A new scheduled task is created.
- Click Apply to apply the modifications to the task.A dialog box to enter the root user password appears.
- Enter the root password and click OK.The scheduled task is now operational.
Running the task
After completing the steps, select the required task and click Run Now to run the scheduled task immediately.
Additionally, you can create a scheduled task, for exporting a configuration to a trusted appliances cluster using System > Backup/Restore > Export.
4.7.3.5.3 - Scheduling Configuration Export to Cluster Tasks
You can schedule configuration export tasks to periodically replicate a specified configuration on the necessary cluster nodes.
The procedure of creating a configuration export task is almost the same as exporting a configuration to the cluster. There is a slight difference between these processes: exporting a configuration to the cluster is a one-time procedure that the user runs manually, whereas a scheduled task makes periodic updates and can run any number of times in accordance with the schedule that the user specifies.
To schedule a configuration export to a trusted appliances cluster:
From the ESA Web UI, navigate to System > Backup & Restore > Export.
Under Export, select the Cluster Export radio button.
If the configurations must be exported on a different ESA, then clear the Certificates check box during the export. For information about copying Insight certificates across systems, refer to Rotating Insight certificates.
Click Start Wizard.
The Wizard - Export Cluster screen appears.
In the Data to import, customize the items that you need to export from this machine and import to the cluster nodes.
Click Next.
In the Source Cluster Nodes, select the nodes that will run this task.
You can specify them by label or select individual nodes.
Click Next.
In the Target Cluster Nodes, select the nodes to import the data.
Click Review.
The New Task screen appears.
Enter the required information in the following sections.
- Basic Properties
- Frequencies
- Command Line
- Restriction
- Logging
Click Save.
A new scheduled task is created. Click Apply to apply the modifications to the task.
A dialog box to enter the root user password appears. Enter the root password and click OK.
The scheduled task is operational. Click Run Now to run the scheduled task immediately.
4.7.3.5.4 - Deleting a scheduled task
Perform the following steps to delete a scheduled task:
- From the ESA Web UI, navigate to System > Task Scheduler. The Task Scheduler page displays the list of available tasks.
- Select the required task.
- Click Remove. A confirmation message to remove the scheduled task appears.
- Click OK.
- Click Apply to save the changes.
- Enter the root password and select OK. The task is deleted successfully.
4.7.4 - Viewing the logs
Based on the products installed, you can view the logs in the Logs screen. Based on the components installed in the ESA, logs are generated in the following screens:
- Web Services Engine
- Service Dispatcher
- Appliance Logs
The information icon on the screen displays the order in which the new logs appear. If the new logs appear on top, you can scroll down through the screen to view the previously generated logs.
Viewing Web Services Engine Logs
In the Web Services screen, you can view the logs for all the Web services requests on ports such as 443 or 8443.
The Web Services logs are classified as follows:
- HTTP Server Logs
- SOAP Module Logs
The following figure illustrates the HTTP Server Logs.
Navigate to Logs > Web Services Engine > Web Services HTTP Server Logs to view the HTTP Server logs.
Viewing Service Dispatcher Logs
You can view the logs for the Service Dispatcher under Logs > Service Dispatcher > Service Dispatcher Logs.
The following figure illustrates the service dispatcher logs.
Viewing Appliance Logs
You can view logs of the events occurring in the appliance under Logs > Appliance. The Appliance Logs page lists logs for each event and provides options for managing the logs. The log files (.log extension) that are in the /var/log directory appear on the Appliance Logs screen. The logs can be categorized as all appliance component logs, installation logs, patch logs, kernel logs, and so on.
Current Event Logs are the most informative appliance logs and are displayed by default when you proceed to the Appliance Logs page. Depending on the logging level configuration (set in the appropriate configuration files of the appliance components), the Current Event Logs display the events in accordance with the selected level of severity (No logging, SEVERE, WARNING, INFO, CONFIG, ALL).
Based on the configuration set for the logs, they are rotated periodically.
If the fluentd component is installed in the appliance, logs are sent to the Log Server. If the fluentd component is not installed in the appliance, logs are stored in the audits_from_rsyslog.log file under the /var/log/pap directory.
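For example, assuming fluentd is not installed and the file exists at this path, you could follow that audit log from the OS Console:
tail -f /var/log/pap/audits_from_rsyslog.log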
The following figures illustrate the appliance logs.
The following table describes the actions you can perform on the appliance logs.
Action | Description |
---|---|
Print | Print the logs |
Download | Download the logs to a specific directory |
Refresh | Refresh the logs |
Save a copy | Save a copy of the current log with a timestamp |
Purge Log | Clear the logs |
If the logs are rotated, the following message appears: Logs have been rotated. Do you want to continue with new logs?
Select OK to view the new logs generated.
For more information about configuring log rotation and log retention, refer here.
4.7.5 - Working with Settings
The Settings menu on the appliance Web UI allows you to configure various features, such as antivirus, two-factor authentication, networking, file management, user management, and licenses.
4.7.5.1 - Working with Antivirus
The Antivirus program uses ClamAV, an open source and cross-platform Antivirus engine designed to detect Trojans, viruses, and other malware threats. A single file, a directory, or the whole system can be scanned. Infected files are logged and can be deleted or moved to a different location, as required.
You can use Antivirus to perform the following functions:
- Schedule the scans or run these on demand.
- Update the virus data signature or database files, or run the update on demand.
- View the logs generated for every virus found.
Simple user interfaces and standard configurations for both Web UI and CLI of the Appliance make viewing logs, running scans, or updating the virus signature file easy.
FIPS mode and Antivirus
If the FIPS mode is enabled, then the Antivirus is disabled on the appliance.
For more information on the FIPS Mode, refer here.
4.7.5.1.1 - Customizing Antivirus Scan Options
In the Antivirus section, you can customize the scan by setting the following options:
- Action: Ignore the scan result, move the file to a separate directory, or delete the infected files
- Recursive: Implement and scan directories, sub-directories, and files
- Scan Directory: Specify the directory
To customize Antivirus scan options:
Navigate to Settings > Security > Antivirus.
Click Options.
Choose the required options and click Apply. A message Option changes are accepted! appears.
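These options roughly correspond to standard ClamAV scan flags. For illustration only (the appliance runs the scan for you; the scan path and quarantine directory below are assumptions), an equivalent clamscan invocation would be:
# Recursive scan that moves infected files to a quarantine directory
clamscan -r --move=/tmp/quarantine /opt/data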
4.7.5.1.2 - Scheduling Antivirus Scan
An Antivirus scan can be scheduled only from the Web UI.
Navigate to System > Task Scheduler.
Search for Anti-Virus system scan. If it is present, then scanning is already scheduled. Verify the Frequency and update it if required.
If Antivirus system scan is not present, then follow these steps:
a. Click +New Task.
b. Add the details, such as the Name, Description, and Frequency.
c. Add the command line steps, and Logging details.
Click Save at the top right of the window.
The Antivirus scanning automatically begins at the scheduled time and logs are saved.
4.7.5.1.3 - Updating the Antivirus Database
You must update the Antivirus database or the signature files frequently. This ensures the Antivirus is updated so it can pick up any new threats to the appliance. The Antivirus database can either be updated from the official ClamAV website, local websites, mirrors, or using the signature files. The signature files are downloaded from the website and uploaded on the appliance Web UI. The following are the Antivirus signature database files that must be downloaded:
- main.cvd
- daily.cvd
- bytecode.cvd
The Antivirus signature database files can be updated in one of the following two ways:
- SSH/HTTP/HTTPS/FTP
- Official website/mirror/local sites
It is recommended that you update the signature database files directly from the official website.
Updating the Antivirus Database Manually
Perform the following steps to update the Antivirus database.
On the appliance Web UI, navigate to Settings > Security > Antivirus.
Click Database Update > Settings.
Select one of the following settings.
Settings | Description |
---|---|
Local/remote mirror server | Server containing the database update. Enter the URL of the server in Input the target URL text box. |
Official website through HTTP proxy server | Proxy server of ClamAV containing the database update. Enter the following information: |
Local directory | Local directory where the updated database signature files, such as, main.cvd, daily.cvd and bytecode.cvd are stored. Enter the directory path in Input the target directory text box. |
Remote host | Host containing the updated database signature files. Connect to this host using an SSH, HTTP, HTTPS, or FTP connection. Enter information in the required fields to establish a connection with the remote host. |
Select Confirm. The database update is initiated.
Updating the Antivirus Signature Files Manually
If the network is not available or the Internet connection is down, you can manually update the signature database files. The signature files are downloaded from the website and placed in a local directory. The following are the Antivirus signature database files that must be downloaded:
- main.cvd
- daily.cvd
- bytecode.cvd
It is recommended that you update the signature database files directly from the official website.
Perform the following steps to manually update the Antivirus database signature files.
Download the Antivirus signature database files: main.cvd, daily.cvd, and bytecode.cvd.
On the CLI Manager, navigate to Administration > OS Console.
Create the following directory in the appliance:
/home/admin/clam_update/
Save the downloaded signature database files in the /home/admin/clam_update/ directory.
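For illustration, assuming a separate machine with Internet access and SSH access to the appliance (the host name is a placeholder), downloading the signature files and copying them to that directory could look like this:
# On a machine with Internet access
wget https://database.clamav.net/main.cvd
wget https://database.clamav.net/daily.cvd
wget https://database.clamav.net/bytecode.cvd
# Copy the files to the directory created on the appliance
scp main.cvd daily.cvd bytecode.cvd admin@esa.example.com:/home/admin/clam_update/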
Scheduling Update of Antivirus Signature Files
Scheduling an update option is available only on the Web UI.
Go to System > Task Scheduler.
Select the Anti-Virus database update row.
Click Edit from the Scheduler task bar. For more information about scheduling appliance tasks, refer here.
Click Save at the top right corner of the workspace window.
4.7.5.1.4 - Working with Antivirus Logs
Log files are generated for all system and database activities. These logs are stored in the local log file, runtime.log, which is saved in the /etc/opt/Antivirus/ directory.
You can view and delete the local log files.
Viewing Antivirus Logs
The logs for the Antivirus can be viewed from the appliance Web UI. The logs consist of Antivirus database updates, scan results, infections found, and so on. These logs are also available on the Audit Store > Dashboard > Discover screen. You can view all logs, including those deleted, in the local file.
Perform the following steps to view logs.
- Navigate to Settings > Security > Antivirus.
- Click Log.
Deleting Logs from Local File Using the Web UI
Perform the following steps to delete logs from local file using the Web UI.
- Navigate to Settings > Security > Antivirus.
- Click Log.
- Click Purge.All existing logs in the local log file are deleted.
Viewing Logs from the CLI Manager
Perform the following steps to view logs using the CLI Manager.
- Navigate to Status and Logs > Appliance Logs.
- Select System event logs.
- Press View.
- From the list of available installed patches, select patches.
- Press Show. A detailed list of patch-related logs is displayed on the ESA Server window.
Configuring Log Rotation and Log Retention
Perform the following steps to configure log rotation and log retention.
Append the following configuration to the /etc/logrotate.conf file:
/var/log/clamav/*.log {
    missingok
    monthly
    size 10M
    rotate 1
}
For periodic log rotation, run the following commands:
cd /etc/opt/Antivirus/
mv /etc/opt/Antivirus/runtime.log /var/log/clamav
ln -s /var/log/clamav/runtime.log runtime.log
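Optionally, to verify the rotation configuration without rotating anything, logrotate can be run in debug mode (a standard logrotate option):
logrotate -d /etc/logrotate.conf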
4.7.5.2 - Configuring Appliance Two Factor Authentication
Two factor authentication is a verification process where two recognized factors are used to identify you before granting you access to a system or website. In addition to your password, you must correctly enter a different numeric one-time passcode or the verification code to finish the login process. This provides an extra layer of security to the traditional authentication method.
In order to provide this functionality, a trust is created between the appliance and the mobile device being used for authentication. The trust is simply a shared-secret or a graphic barcode that is generated by the system and is presented to the user upon first login.
The advantage of using the two-factor authentication feature is that even if a hacker manages to guess your password, they cannot access your system, because a device is also required to generate the verification code.
The verification code is a dynamic code that is generated by any smart device such as smartphone or tablet. The user enters the shared-secret or scans the barcode into the smart device, and from that moment onwards the smartphone generates a new verification-code every 30-60 seconds. The user is required to enter this verification code every time as part of the login process. For validating the one time password (OTP), ensure that the date and time on the ESA and your system are in sync.
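For illustration, assuming the shared-secret is a base32 string and the oath-toolkit package is available on a workstation (it is not part of the appliance), the same time-based code that an authenticator app displays can be generated as follows; the secret below is a placeholder:
oathtool --totp -b "JBSWY3DPEHPK3PXP"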
Protegrity appliances and authenticators
There are a few requirements for using two factor authentication with Protegrity appliances.
- For validating one time passwords (OTP), the date and time on the ESA and the validating device must be in sync.
- Protegrity appliances only support use of the Google, Microsoft, or Radius Authenticator apps.
- Download the appropriate app on a mobile device, or any other TOTP-compatible device or application.
The Security Officer configures the Appliance Two Factor Authentication by any one of the following three methods:
Automatic per-user shared-secret is the default and recommended method. It allows having a separate shared-secret for each user, which is generated by the system for them. The shared-secret will be presented to the user upon the first login.
Radius Authentication is the authentication using the RADIUS protocol.
Host-based shared-secret allows a common shared-secret for all users, which can be specified and distributed to the users by the Security Officer. Host-based shared-secret method is useful to force the same secret code for multiple appliances in clustered environments.
4.7.5.2.1 - Working with Automatic Per-User Shared-Secret
Automatic per-user shared-secret is the default and recommended method for configuring two factor authentication. It allows having a separate shared-secret for each user, which is generated by the system for them. The shared-secret will be presented to the user upon the first login.
Configuring Two Factor Authentication with Automatic Per-User Shared-Secret
The following section describes how to configure two factor authentication using automatic per-user shared-secret.
Perform the following steps to configure two factor authentication with automatic per-user shared-secret.
From the Appliance Web UI, navigate to Settings > Security > Two Factor Authentication.
Check the Enable Two-Factor-Authentication check box.
Select the Automatic per-user shared-secret option.
The following pane appears with the options to enable this authentication mode.
If required, then you can customize the message that will be presented to users upon their first login.
Check the Advanced Settings check box to display the Console Message button. By clicking Console Message, a new window appears where you can review and modify the message that will be presented to the user.
You can apply the following logging-settings in order to specify what to log:
- Log failed log-in attempts
- Log any successful log-ins
- Log only first-successful log-in
Click Apply to save the changes.
Logging in to the Web UI
Before beginning, be aware of time limits. When entering codes from the authenticator there is a time limit. Ensure codes are entered in the Enter Authentication code field within the displayed time limit.
The following section describes how to log in to the Web UI after configuring automatic per-user shared-secret.
Perform the following steps to login to the Web UI:
Navigate to the ESA Web UI login page.
In the Username and Password text boxes, enter the user credentials.
Click Sign in.The Two step authentication screen appears.
Scan the QR code using an authentication application. Alternatively, click the Can’t see QR code? link. A code is generated and displayed below it, as shown in the figure.
Enter the displayed code in the authentication app to generate a one-time password.
In the Enter authentication code field box, enter the one-time password, and click Verify.
After the code is validated, the ESA home page appears.
4.7.5.2.2 - Working with Host-Based Shared-Secret
Host-based shared-secret allows a common shared-secret for all users, which can be specified and distributed to the users by the Security Officer. Host-based shared-secret method is useful to force the same secret code for multiple appliances in clustered environments.
Configuring Two Factor Authentication with Host-Based Shared-Secret
The following section describes how to configure two factor authentication using host-based shared-secret.
Perform the following steps to configure Two Factor Authentication with Host-based shared-secret.
- On the ESA Web UI, navigate to Settings > Security > Two Factor Authentication.
- Check the Enable Two-Factor-Authentication check box.
- Select Host-based shared-secret from Authentication Mode.
- Click Modify.The Host-based shared-secret key appears.
If required, click Generate to modify the Host-based shared-secret key. Ensure that you note the Host-based shared-secret key to generate TOTP. - You can apply the following logging-settings in order to specify what to log:
- Log failed log-in attempts
- Log any successful log-ins
- Click Apply to save the changes. A confirmation message appears.
Logging in to the Web UI
Before beginning, be aware of time limits. When entering codes from the authenticator, there is a time limit. Ensure that codes are entered in the authenticator code box within the displayed time limit.
The following section describes how to log in to the Web UI after configuring host-based shared-secret.
To login to the Web UI:
Navigate to the ESA Web UI login page.
In the Username and Password text boxes, enter the user credentials.
Click Sign in.
The 2 step authentication screen appears.
Enter the Host-Based Shared-Secret key obtained during the configuration process in the authentication app to generate the authentication code.
In the authenticator code box, enter the authentication code, and click Verify.
After the code is validated, the ESA home page appears.
4.7.5.2.3 - Working with Remote Authentication Dial-up Service (RADIUS) Authentication
The Remote Authentication Dial-up Service (RADIUS) is a networking protocol for managing authentication, authorization, and accounting in a network. It defines a workflow for communication of information between the resources and services in a network. The RADIUS protocol uses the UDP transport layer for communication. The RADIUS protocol consists of two components, the RADIUS server and the RADIUS client. The server receives the authentication and authorization requests of users from the RADIUS clients. The communication between the RADIUS client and RADIUS server is authenticated using a shared secret key.
You can integrate the RADIUS protocol with an ESA for two-factor authentication. The following figure describes the implementation between ESA and the RADIUS server.
- The ESA is connected to the AD that contains user information.
- The ESA is a client to the RADIUS server that contains the network and connection policies for the AD users. It also contains a RADIUS secret key to connect to the RADIUS server. The communication between the ESA and the RADIUS server is through the Password Authentication Protocol (PAP).
- An OTP generator is configured with the RADIUS server. An OTP is generated for each user. Based on the secret key for each user, an OTP for the user is generated.
In ESA, the following two files are created as part of the RADIUS configuration:
- The dictionary file that contains the default list of attributes for the RADIUS server.
- The custom_attributes.json file that contains the customized list of attributes that you can provide to the RADIUS server.
Important: When assigning a role to the user, ensure that the Can Create JWT Token permission is assigned to the role. If the Can Create JWT Token permission is not assigned to the role of the required user, then remote authentication fails. To verify the Can Create JWT Token permission, from the ESA Web UI navigate to Settings > Users > Roles.
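To independently verify that the RADIUS server accepts requests with the configured shared secret, the standard FreeRADIUS test client can be used from any machine where it is installed (illustrative; the user, OTP, host, and secret below are placeholders):
# radtest <username> <password/OTP> <radius-server[:port]> <nas-port> <shared-secret>
radtest aduser 123456 radius.example.com 0 radiussecret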
Configuring Radius Two-Factor Authentication
To configure Radius two-factor authentication:
On the Appliance Web UI, navigate to Settings > Security > Two Factor Authentication.
Check the Enable Two-Factor-Authentication checkbox.
Select the Radius Server option as shown in the following figure.
Type the IP address or the hostname of the RADIUS server in the Radius Server text box.
Type the secret key in the Radius Secret text box.
Type the port of the RADIUS server in the Radius port text box. The default port is 1812.
Type the username that connects to the RADIUS server in the Validation User Name text box.
Type the OTP code for the user in the Validation OTP text box.
Click Validate to validate the configuration. A message confirming the configuration appears.
Click Apply to apply the changes.
Logging in to the Web UI
Perform the following steps to log in to the Web UI:
Open the ESA login page.
Type the user credentials in the Username and Password text boxes.
Click Sign in. The following screen appears.
Type the OTP code and select Verify. After the OTP is validated, the ESA home page appears.
Editing the Radius Configuration Files
To edit the configuration files:
On the Appliance Web UI, navigate to Settings > System.
Under the OS - Radius Server tab, click Edit corresponding to the custom_attributes.json or dictionary file to edit the attributes.
Modify the attributes to the required values.
Click Save. The changes are saved.
Logging in to the CLI
Perform the following steps to log in to the CLI Manager:
Open the Appliance CLI Manager.
Enter the user credentials.
Press ENTER. The following screen appears.
Type the verification code and select OK. After the code is validated, the main screen for the CLI Manager appears.
4.7.5.2.4 - Working with Shared-Secret Lifecycle
All users of appliance two-factor authentication get a shared-secret for verification. This shared-secret for a user remains in the two-factor authentication group list until it is manually deleted. Even if a user becomes ineligible to access the system, the username remains linked to the shared-secret.
This behavior applies to users who opt for per-user authentication.
If the same user, or another user with the same name, is added to the system again, then that user becomes eligible to use the already existing shared-secret.
To prevent this, ensure that an ineligible user is manually removed from the Two Factor Authentication group.
Revoking Shared-Secret for the User
The option to revoke a shared-secret is useful when a user needs to switch to another mobile device or the previous shared-secret cannot be retrieved from the earlier device.
Perform the following steps to revoke the shared-secret for a user:
On the Appliance Web UI, navigate to Settings > Security > Two Factor Authentication.
Ensure that the Enable Two-Factor-Authentication and Automatic per-user shared-secret checkboxes are selected.
Inspect the Users Shared Secrets area to identify the user account to revoke. You can revoke users who have already logged in to the appliance.
Click Revoke.
Select the user to revoke by clicking the checkbox next to the username.
Click Apply to save the changes. A new shared-secret is created for the revoked user and is presented on the next login.
4.7.5.2.5 - Logging in Using Appliance Two Factor Authentication
Perform the following steps to log in using Appliance Two Factor Authentication:
Navigate to ESA login page.
Enter your username.
Enter your password.
Click Sign in. After verification, a separate login dialog appears.
As a prerequisite, a new user must set up an account on Google Authenticator. Download the Google Authenticator app on your device and follow the instructions to create a new account.
Enter the shared-secret in your device. If the system is configured for per-user shared-secret, then this secret code is made available. If this is a web session, then you are presented with a barcode and the applications that support it.
After you accept the shared-secret, the device displays a verification code.
Enter this verification code in the screen displayed in step 4.
Click Verify.
4.7.5.2.6 - Disabling Appliance Two Factor Authentication
Perform the following steps to disable Appliance Two Factor Authentication:
Using the Appliance Web UI, navigate to Settings > Security > Two Factor Authentication.
Clear the Enable Two-Factor-Authentication checkbox.
Click Apply to save the changes.
Disable Two Factor through local console
You can also disable two-factor authentication from the local console. Switch to the OS console and execute the following command.
# /etc/opt/2FA/2fa.sh --disable
4.7.5.3 - Working with Configuration Files
The Product Files screen displays the configuration files of all the products that are installed in ESA. You can view, modify, delete, upload, or download the configuration files from this screen. In the ESA Web UI, navigate to Settings > System > Files to view the configuration files.
The following table describes the different products and their respective configuration files that are available in ESA.
Product | Configuration Files | Description |
---|---|---|
OS – Radius Server | Dictionary | Contains the dictionary translations for analyzing requests and generating responses for the RADIUS server. |
| custom_attributes.json | Contains the configuration settings of the header data for the RADIUS server. |
OS – Export/Import | Customer.custom | Lists the custom files that can be exported or imported. For more information about exporting custom files, refer here. |
Audit Store – SMTP Config Files | smtp_config.json | Contains the SMTP configuration settings for sending email alerts. |
| smtp_config.json.example | Contains SMTP configuration settings and example values for sending email alerts. This is a template file. |
Policy Management – Member Source Service User Files | exampleusers.txt | Lists the users that can be used in a policy. For more information about policy users, refer Policy Management. |
Policy Management – Member Source Service Group Files | examplegroups.txt | Lists the user groups that can be used in a policy. For more information about policy user groups, refer Policy Management. |
Settings → System → Files → Downloads - Other files | contractual.htm | Lists all the third-party software licenses that are utilized in ESA. Note: You cannot modify this file. |
Distributed Filesystem File Protector – Configuration Files | dfscacherefresh.cfg | Contains the DFSFP configuration settings such as logging, SSL, and security. For more information about the dfscacherefresh.cfg file, refer to the Protegrity Big Data Protector Guide 9.2.0.0. Note: Starting from the Big Data Protector 7.2.0 release, the HDFS File Protector (HDFSFP) is deprecated. The HDFSFP-related sections are retained to ensure coverage for using an older version of Big Data Protector with the ESA 7.2.0. |
Cloud Gateway – Settings | gateway.json | Lists the log level settings for the Data Security Gateway. For more information about the gateway.json file, refer to the Protegrity Data Security Gateway User Guide 3.2.0.0. |
| alliance.conf | Configuration file to direct syslog events between servers over TCP or UDP. |
The following figure illustrates various actions that you can perform on the Product Files screen.
Callout | Description | Action |
---|---|---|
1 | Collapse/Expand | Collapse or expand to view the configuration files. |
2 | Edit | Edit the configuration file. |
3 | Upload | Upload a configuration file. Note: When you upload a file, it replaces the existing file in the system. |
4 | Download | Download the file to your local system. |
5 | Delete | Delete the file from the system. |
6 | Download | Download all the files of the product to your local system. |
7 | Reset | Reset the configuration to the previously saved settings. |
Viewing a Configuration File
You can view the contents of the configuration file from the Web UI. If the file size is greater than 5 MB, you must download the file to view the contents.
Perform the following steps to view a file:
Navigate to Settings > System > Files.The screen with the files appears.
Click on the required file. The contents of the file appear.
You can modify, download, or delete the file using the Edit, Download, and Delete icons, respectively.
Uploading a Configuration File
Perform the following steps to upload a file.
- Navigate to Settings > System > Files.The screen with the files appears.
- Click on the upload icon.The file browser icon appears.
- Select the configuration file and click Upload File.A confirmation message appears.
- Click Ok.A message confirming the upload appears.
Modifying a Configuration File
In addition to editing the file from the Files screen, you can also modify the content of the file from the view option. If you want to modify the content of a file whose size is greater than 5 MB, you must download the file to the local machine, modify the content, and then upload the file through the Web UI.
For instructions to download a configuration file, refer here.
Perform the following steps to modify a file.
- Navigate to Settings > System > Files.The screen with the files appears.
- Click on the required file.The contents of the file appear.
- Click the Edit icon to modify the file.
- Perform the required changes and click Save.A message confirming the changes appears.
Deleting a Configuration File
In addition to deleting the file from the Files screen, you can also delete the file from the view option. After you delete the file, an exclamation icon appears indicating that the file does not exist on the server. Using the reset functionality, you can restore the deleted file.
Perform the following steps to delete a file.
- Navigate to Settings > System > Files.The screen with the files appears.
- Click on the required file.The contents of the file appear.
- Click the Delete icon to delete the file. A message confirming the deletion appears.
- Select Yes.
Resetting a File
The Reset functionality is used to restore changes that were made to your file. The Reset icon is disabled for every configuration file by default. This icon is enabled when you perform any of the following changes:
- Modify the configuration file
- Delete the configuration file
When you modify or delete a file, the original file is backed up in the /etc/configuration-files-backup directory. For every modification, the file in the directory is overwritten. When you click the Reset icon, the file is retrieved from the directory and restored on the Files screen.
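As a sketch, the backed-up copies can also be listed from the OS console; this assumes root access to the appliance console and uses the backup directory named above.
# ls -l /etc/configuration-files-backup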
Perform the following steps to restore a file.
- Navigate to Settings > System > Files.The screen with the files appears.
- Click the Reset icon to restore a file.The file that is edited or deleted is restored.
Limits on resetting files
Only the changes that are performed on the files through the Web UI are backed up. Changes performed on the files through the CLI Manager are not backed up and cannot be restored.
4.7.5.4 - Working with File Integrity
Because the PCI specifications require that sensitive files and folders in the appliance are monitored, content modifications can be reviewed by the Security Officer. The monitored information includes password, certificate, and configuration files. The File Integrity Monitor performs a weekly check, and all changes made to these files can be reviewed by authorized users.
To check file modifications at any given time, click Settings > Security > File Integrity > Check. The Security Officer views and accepts the changes, writing comments as necessary, in the comment box. Accepting changes means that the changes are removed from the viewable list. Changes cannot be rejected. You must not accept deletion of system files. These files must be available.
Only the last modification made to a file appears.
All the changes can also be viewed on the Audit Store > Dashboard > Discover screen. Another report shows all accepted changes. For more information about Discover, refer Discover.
Before applying a patch, it is recommended to check the files and accept the required changes under Settings > Security > File Integrity > Check.
After installing the patches for appliances such as ESA or DSG, check the files and accept the required changes again under Settings > Security > File Integrity > Check.
4.7.5.5 - Managing File Uploads
You can upload a patch file of any size from the File Upload screen in the ESA Web UI. The files uploaded from the Web UI are available in the /opt/products_uploads directory.
After the file is uploaded, in the Uploaded Files section, select the file to view the file information, download it, or delete it.
To upload a file:
Navigate to Settings > System > File Upload.The File Upload page appears.
In the File Selection section, click Choose File.The file upload dialog box appears.
Select the required file and click Open.
- You can only upload files with .pty and .tgz extensions.
- If the file uploaded exceeds the Max File Upload Size, then a password prompt appears. Only a user with the administrative role can perform this action. Enter the password and click Ok.
- By default, the Max File Upload Size value is set to 25 MB. To increase this value, refer here.
Click Upload. The file is uploaded to the /opt/products_uploads location.
- If a file name contains spaces, they are automatically replaced with the underscore character (_).
- The files are scanned by the internal AntiVirus before they are uploaded in the appliance.
- If the FIPS mode is enabled, then the anti-virus scan is skipped during the file upload.
- The SHA512 checksum value is validated during the upload process.
- If the network is interrupted while uploading the file, then the appliance retries the upload. The retry process is attempted ten times, and each attempt lasts for ten seconds.
After the file is uploaded successfully, then from the Uploaded Files area, choose the uploaded patch.The information for the selected patch appears.
Verifying uploaded file integrity
To verify the integrity of the uploaded file, compare the checksum values displayed on the screen with the checksum values of the downloaded patch file. You can obtain the checksum values from My.Protegrity or by contacting Protegrity Support.
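For example, the checksum of the downloaded patch can be computed on a Linux workstation before the comparison; this is a minimal sketch and the file name is a placeholder.
sha512sum ESA_patch_file.tgz
The printed value should match the SHA512 checksum displayed on the File Upload screen.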
4.7.5.6 - Configuring Date and Time
You can use the Date/Time tab to change the date and time settings. To update the date and time, navigate to Settings > System > Date/Time.
The Date and Time screen with the Update Time Periodically option enabled is shown in the following figure.
The date and time options are described in the following table.
Setting | Details | How to configure/change |
---|---|---|
Update Time Periodically | Synchronize the time with the specified NTP Server, upon boot and once an hour. | You can enable this option using Enable button and disable it using Disable. Only enable or disable NTP settings from the CLI Manager or the Web UI. |
Current Appliance Date/Time | Manually synchronize the time with the specified NTP Server. You can use NTP Server synchronization only if NTP service is running. | You can force and restart time synchronization using Reset NTP Sync. You can display NTP analysis using NTP Query button. |
Set Time Zone | Specify the time zone for your appliance. | Select your local time zone from the Set Time Zone list and click Set Time Zone. |
Set Manually Date/Time (mm/dd/yyyy hh:mm) | Set the time manually. | Type the date and time using the format mm/dd/yyyy hh:mm. Click Set Date/Time. Note: The Set Manually Date/Time (mm/dd/yyyy hh:mm) text box appears only if the Update Time Periodically functionality is disabled. |
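If time synchronization appears stale, the NTP peers can also be inspected from the OS console. This is a sketch that assumes the standard ntpq utility is installed on the appliance; the output is comparable to the analysis shown by the NTP Query button.
# ntpq -p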
4.7.5.7 - Configuring Email
The SMTP setting allows the system to send emails.
You can test that the email works by clicking Test. Error logs can be viewed on the Audit Store > Dashboard > Discover screen.
For more information about Discover, refer Working with Discover.
Some scripts run after you click Save. Ensure that you save the details only when the connection is intact.
If the email address cannot be authenticated, then the Show Test Communication area displays the communication between the appliance and the SMTP server for debugging.
4.7.5.8 - Configuring Network Settings
On the Network Settings screen, you can configure the network details for the appliance. The following table explains the different settings that can be configured.
Information in the following table is specific to the Web UI. For information on the same features and configuring them in the CLI, refer here.
Setting | Details | How to configure/change |
---|---|---|
Hostname | The hostname is a unique name for a system or a node in a network.Ensure that the hostname does not contain the dot(.) special character. | Click Apply on the Web UI or change the hostname of the appliance from the Network Settings screen in the CLI Manager. |
Management IP | The management IP, which is the IP address of the appliance, is defined through the CLI Manager. | Select Blink to identify the interface; an LED on the NIC blinks. Then click Change. |
Default Route | The default route is an optional destination for all network traffic that does not belong to the LAN segment, for example, the IP address of your LAN router, such as 172.16.8.12. It is required only if the appliance is on a different subnet than the Appliance Web Interface. | Click Apply to set the default route. |
Domain | The appliance domain name specified during appliance installation. | You can change it by specifying a new name and clicking Apply. |
Search Domains | The appliance can belong to one domain and search an additional three domains. | You can add them using the Add button. |
Domain Name Servers | If your appliance uses domain names and IP addresses, then you must configure a domain name server (DNS) to help resolve Internet name addresses. The domain name should be for your local network, like Protegrity.com or math.mit.edu, and the name servers should be IP addresses. The appliance can use up to three DNS servers for name resolution. Once you have configured a DNS, the system can be managed using an SSH connection. | You can add them using the Add button and remove them using Remove. You can apply them using the Apply button. |
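After applying the DNS settings, name resolution can be verified from the OS console. This is a minimal sketch, assuming the standard host utility is available on the appliance; the domain name is a placeholder.
# host esa.example.com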
4.7.5.8.1 - Managing Network Interfaces
Using Settings > Network > Network Settings, you can view the appliance network interface names and addresses, and add addresses from the Interfaces page.
Changes to IP addresses
Changes to IP addresses are immediate. Changing the management IP (on ethMNG) while connected via SSH or the Appliance Web Interface causes the session to disconnect.
Assigning an Address to an Interface
Perform the following steps to assign an address to an interface.
- Navigate to Settings > Network.
- Click Network Settings.The Interfaces page appears.
- Identify the interface on the appliance by clicking Blink for the interface you want to identify; an LED on the NIC blinks to indicate that interface.
- In the Interface row, type the address and Net mask of the interface, and then click Add.
Assigning an Address to an Interface Using Web UI
Perform the following steps to assign an address to an interface.
- In the Web UI, navigate to Settings > Network > Network Settings.The Network Settings page appears.
- In the Network Interfaces area, select Add New IP in the Gateway column. Ensure that the IP address for the NIC is added.
- Enter the IP address of the default gateway and select OK. The default gateway for the interface is added.
4.7.5.8.2 - NIC Bonding
The Network Interface Card (NIC) is a device through which appliances, such as ESA or DSG, on a network connect to each other. If the NIC stops functioning or is under maintenance, the connection is interrupted, and the appliance is unreachable. To mitigate the issues caused by the failure of a single network card, Protegrity leverages the NIC bonding feature for network redundancy and fault tolerance. In NIC bonding, multiple NICs are configured on a single appliance. You then bind the NICs to increase network redundancy. NIC bonding ensures that if one NIC fails, the requests are routed to the other bonded NICs. Thus, failure of a NIC does not affect the operation of the appliance. You can bond the configured NICs using different bonding modes.
Bonding Modes
The bonding modes determine how traffic is routed across the NICs. The MII monitoring (MIIMON) is a link monitoring feature that is used for inspecting the failure of NICs added to the appliance. The frequency of monitoring is 100 milliseconds. The following modes are available to bind NICs together:
- Mode 0/Balance Round Robin
- Mode 1/Active-backup
- Mode 2/Exclusive OR
- Mode 3/Broadcast
- Mode 4/Dynamic Link Aggregation
- Mode 5/Adaptive Transmit Load Balancing
- Mode 6/Adaptive Load Balancing
The following two bonding modes are supported for appliances:
- Mode 1/Active-backup policy: In this mode, multiple NICs, which are slaves, are configured on an appliance. However, only one slave is active at a time. The slave that accepts the requests is active and the other slaves are set as standby. When the active NIC stops functioning, the next available slave is set as active.
- Mode 6/Adaptive load balancing: In this mode, multiple NICs are configured on an appliance. All the NICs are active simultaneously. The traffic is distributed sequentially across all the NICs in a round-robin method. If a NIC is added or removed from the appliance, the traffic is redistributed accordingly among the available NICs. The incoming and outgoing traffic is load balanced and the MAC address of the actual NIC receives the request. The throughput achieved in this mode is high as compared to Mode 1/Active-backup policy.
Prerequisites
Ensure that you complete the following pre-requisites when binding interfaces:
- The IP address is assigned only to the NIC on which the bond is initiated. You must not assign an IP address to the other NICs.
- The NIC is not configured on an HA setup.
- The NICs are on the same network.
Creating a Bond
The following procedure describes the steps to create a bond between NICs. For more information about the bonding nodes, refer here.
Ensure that the IP addresses of the slave NICs are static.
Perform the following steps to create a bond.
On the Web UI, navigate to Settings > Network > Network Settings.The Network Settings screen appears.
Under the Network Interfaces area, click Create Bond corresponding to the interface on which you want to initiate the bond.The following screen appears.
Ensure that the IP address is assigned to the interface on which you want to initiate the bond.
Select one of the following modes from the drop-down list:
- Active-backup policy
- Adaptive Load Balancing
Select the interfaces with which you want to create a bond.
Select Establish Network Bonding.A confirmation message appears.
Click OK.The bond is created, and the list appears on the Web UI.
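After the bond is created, its state can also be checked from the OS console. This is a sketch that assumes the Linux bonding driver exposes the standard procfs entry and that the bond interface is named bond0 (the name is a placeholder).
# cat /proc/net/bonding/bond0
The output lists the bonding mode, the MII status, and the slave interfaces that belong to the bond.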
Removing a Bond
Perform the following steps to remove a bond:
- On the Web UI, navigate to Settings > Network > Network Settings.The Network Settings screen appears with all the created bonds as shown in the following figure.
Under the Network Interfaces area, click Remove Bond corresponding to the interface on which the bonding is created.A confirmation screen appears.
Select OK.The bond is removed and the interfaces are visible on the IP/Network list.
Viewing a Bond
Using the DSG CLI Manager, you can view the bonds that are created between all the interfaces.
Perform the following steps to view a bond:
On the DSG CLI Manager, navigate to Networking > Network Settings.The Network Configuration Information Settings screen appears.
Navigate to Interface Bonding and select Edit.The Network Teaming screen displaying all the bonded interfaces appears as shown in the following figure.
Resetting the Bond
You can reset all the bonds that are created for an appliance. When you reset the bonds, all the bonds created are disabled. The slave NICs are reset to their initial state, where you can configure the network settings for them separately.
Perform the following steps to reset all the bonds:
On the DSG CLI Manager, navigate to Networking > Network Settings.The Network Configuration Information Settings screen appears.
Navigate to Interface Bonding and select Edit.The Network Teaming screen displaying all the bonded interfaces appears.
Select Reset.The following screen appears.
- Select OK.The bonding for all the interfaces is removed.
4.7.5.9 - Configuring Web Settings
Navigate to Settings > Network > Web Settings. The Web Settings page contains the following sections:
- General Settings
- Session Management
- Shell In A Box Settings
- SSL Cipher Settings
4.7.5.9.1 - General Settings
The General Settings contains the following file upload configurations:
- Max File Upload Size
- File Upload Chunk Size
Increasing Maximum File Upload Size
By default, the maximum file upload size is set to 25 MB. You can increase the limit up to 4096 MB.
Perform the following steps to increase the maximum file upload size:
- From the Appliance Web UI, proceed to Settings > Network > Web Settings.The Web Settings screen appears.
Move the Max File Upload Size slider to the right to increase the limit.
Click Update.
Increasing File Upload Chunk Size
By default, the file upload chunk size is set to 100 MB. You can increase the limit up to 512 MB.
Perform the following steps to increase the file upload chunk size:
- From the Appliance Web UI, proceed to Settings > Network > Web Settings.The Web Settings screen appears.
Move the File Upload Chunk Size slider to the right to increase the limit.
Click Update.
4.7.5.9.2 - Session Management
Only the admin user can extend the time using this option. The extended time becomes applicable to all users of the Appliance.
Managing the session settings
Perform the following steps to adjust the session timeout using the Appliance Web UI:
From the Appliance Web UI, proceed to Settings > Network.
Click Web Settings.The following screen appears.
Move the Session Timeout slider to the right to increase the time, in minutes.
Click Update.
Fixing the Session Timeout
There may be cases where the session timeout should be enforced strictly, so that the Appliance logs out even if the session is active. Perform the following steps to set a hard session timeout:
From the Appliance Web UI, proceed to Settings > Network.
Click Web Settings.The following screen appears.
Move the Session Timeout slider to the right or left to increase or decrease the time, in minutes.
Select the Is hard timeout check box.
Click Update.
4.7.5.9.3 - Shell in a box settings
This setting allows a user with Appliance Web Manager permission to configure access to the Shell In A Box feature which is available through the Web UI. This setting applies to all the users that have access to the Web UI.
When enabled, users can view the CLI icon on the bottom-right corner of the web page.
Perform the following steps to enable/disable Shell In A Box Settings.
From the Appliance Web UI, proceed to Settings > Network.
Click Web Settings.The following screen appears.
To enable or disable the Shell In A Box feature, select or clear the Allow Shell In a Box check box.
Click Update.
4.7.5.9.4 - SSL cipher settings
The ESA appliance uses the OpenSSL library to encrypt and secure connections. You can configure an encrypted connection using the following two strings:
- SSL Protocols
- SSL Cipher Suites
The protocols and the list of ciphers supported by the appliance are included in the SSLProtocol and SSLCipherSuite strings respectively. The SSLProtocol supports SSL v2, SSL v3, TLS v1, TLS v1.1, TLS v1.2, and TLS v1.3 protocols.
To disable any protocol from the SSLProtocol string, prepend a hyphen (-) to the protocol. To disable any cipher suite from the SSLCipherSuite string, prepend an exclamation (!) to the cipher suite.
For more information about the OpenSSL library, refer to http://www.openssl.org/docs.
Using TLS v1.3
The TLS v1.3 protocol is introduced from v8.1.0.0. If you want to use this protocol, then ensure that you append the following cipher suite in the SSLCipherSuite text box.
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
4.7.5.9.5 - Updating a protocol from the ESA Web UI
Perform the following steps to update a protocol from the ESA Web UI:
In the ESA Web UI, navigate to Settings > Network > Web Settings.The Web Settings page appears.
Under the SSL Cipher Settings tab, the SSLProtocol text box contains the value ALL-SSLv2-SSLv3.
Prepend a hyphen (-) to the required protocol. For example, to disable TLSv1.1, type -TLSv1.1 in the SSLProtocol text box.
- Click Update to save the changes.
To re-enable TLSv1.1 using the Web UI, remove -TLSv1.1 from the SSLProtocol text box.
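As a sketch, you can confirm from a remote Linux host that a disabled protocol is rejected, assuming the openssl command-line client is installed on that host and the ESA Web UI listens on port 443; the host name is a placeholder.
openssl s_client -connect esa.example.com:443 -tls1_1
openssl s_client -connect esa.example.com:443 -tls1_2
The first command should fail to complete the handshake after TLSv1.1 is disabled, while the second should report a negotiated TLSv1.2 session.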
4.7.5.10 - Working with Secure Shell (SSH) Keys
The Secure Shell (SSH) is a network protocol that ensures secure communication over an unsecured network. A user connects to the SSH server using an SSH client. The SSH protocol comprises a suite of utilities that provide strong authentication and encryption over unsecured communication channels.
A typical SSH setup consists of a host machine and a remote machine. A key pair is required to connect to the host machine through any remote machine. A key pair consists of a Public key and a Private key. The key pair allows the host machine to securely connect to the remote machine without entering a password for authentication.
For enhancing security, a Private key is secured using a passphrase. This ensures that only the rightful recipient can have access to the decrypted data. You can either generate key pairs or work with existing key pairs.
If you add a Private key without a passphrase, it is encrypted with a random passphrase. This passphrase is scrambled and stored.
If you choose a Private key with a passphrase, then the Private key is stored as it is. This passphrase is scrambled and stored.
For more information about generating the SSH key pairs, refer Adding a New Key.
The SSH protocol allows an authorized user to connect to the host machines from the remote machines. Both inbound communication and outbound communication are supported using the SSH protocol. An authorized user is a combination of an appliance user associated with a valid key pair. An authorized user must be listed as a valid recipient to connect using the SSH protocol.
The SSH protocol allows the authorized users to run tasks securely on the remote machine. When the users connect to the appliance using the SSH protocol, then the communication is known as inbound communication.
For more information about inbound SSH configuration, refer here.
When the users connect to a known host using their private keys, then the communication is known as outbound communication. The authorized users are allowed to initiate the SSH communication from the host.
For more information about outbound SSH configuration, refer here.
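As a sketch, once an identity key and a known host are configured, the outbound connection can be tested from the appliance OS console using the standard OpenSSH client; the key path, user name, and host name below are placeholders.
# ssh -i /root/.ssh/id_rsa admin_user@knownhost.example.com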
On the ESA Web UI, you can configure all the following standard aspects of SSH:
- Authorized Keys
- Identities Keys
- Known Hosts
SSH pane: With the SSH Configuration Manager, you can examine and manage the SSH configuration. The SSH keys can be configured in the Authentication Configuration pane on the ESA Web UI.
The following figure shows the SSH Configuration Manager pane.
Authentication Type: The SSH server can be configured with the following three authentication types:
- Password
- Public Key
- Password + publickey
Authentication Type | Description |
---|---|
Password | In this authentication type, only the password is required for authentication to the SSH server. The public key is not required on the server for authentication. |
Public Key | In this authentication type, the server requires only the public key for authentication. The password is not required for authentication. |
Password + Public key | In this authentication type, the server can accept both, the keys and the password, for authentication. |
SSH Mode:
From the Web UI, navigate to Settings > Network > SSH. Using the SSH mode, restrictions for SSH connections can be set. The restrictions can be hardened or loosened based on your needs. The available SSH modes are shown below.
Mode | SSH Server | SSH Client |
---|---|---|
Paranoid | Disable root access | Disable password authentication, that is, allow to connect only using public keys. Block connections to unknown hosts. |
Standard | Disable root access | Allow password authentication. Allow connections to new (unknown) hosts, enforce SSH fingerprint of known hosts. |
Open | Allow root access. Accept connections using passwords and public keys. | Allow password authentication. Allow connection to all hosts – do not check host fingerprints. |
4.7.5.10.1 - Configuring the authentication type for SSH keys
Perform the following steps to configure the SSH Key Authentication Type.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the authentication type from the Authentication Type drop down menu.
Select the SSH mode from the SSH Mode drop down menu.
Click Apply.A message Configuration saved successfully appears.
4.7.5.10.2 - Configuring inbound communications
The users who are allowed to connect to the appliance using SSH are listed in the Authorized Keys (Inbound) tab.
The following screen shows the Authorized Keys.
Adding a New Key
An authorized key has to be created for a user or a machine to connect to an appliance on the host machine.
Perform the following steps to add a new key.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Authorized Keys (Inbound) tab.
Click Add New Key.The Add New Authorized Key dialog box appears.
Select a user.
Select Generate new public key.
The Root password is required to create Authorized Key prompt appears. Enter the root password and click Ok.
If the private key is to be saved, then select Click To Download Private Key.The private key is saved to the local machine.
If the public key is to be saved, then select Click To Download Public Key.The public key is saved to the local machine.
Click Finish.The new authorized key is added.
Uploading a Key
You can assign a public key to a user by uploading the key from the Web UI.
Perform the following steps to upload a key.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Authorized Keys (Inbound) tab.
Click Add New Key.The Add New Authorized Key dialog box appears.
Select a user.
Select Upload public key.The file browser dialog box appears.
Select a public key file.
Click Open.
The Root password is required to create Authorized Key prompt appears. Enter the root password and click Ok.The key is assigned to the user.
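The public key uploaded in this procedure is typically a standard OpenSSH public key. As a sketch, a suitable key pair can be generated on a Linux workstation beforehand; the file name and comment are placeholders.
ssh-keygen -t rsa -b 4096 -f ./esa_user_key -C "user@esa"
The resulting esa_user_key.pub file is the public key to upload, while esa_user_key is the private key that stays on the workstation.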
Reusing public keys between users
The public key of one user can be assigned as a public key of another user.
Perform the following steps to upload an existing key.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Authorized Keys (Inbound) tab.
Click Add New Key.The Add New Authorized Key dialog box appears.
Select a user.
Select Choose from existing keys.
Select the public key.
The Root password is required to create Authorized Key prompt appears. Enter the root password and click Ok.The public key is assigned to the user.
Downloading a Public Key
From the Web UI, you can download the public key of a user to the local machine.
Perform the following steps to download a key.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Authorized Keys (Inbound) tab.
Select a user.
Select Download Public Key.The public key is saved to the local directory.
Deleting an Authorized Key
You can remove a key from the authorized users list. Once the key is removed from the list, the remote machine will no longer be able to connect to the host machine.
Perform the following steps to delete an authorized key:
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Authorized Keys (Inbound) tab.
Select a user.
Select Delete Authorized Key.A message confirming the deletion appears.
Click Yes.
The Root password is required to delete Authorized Key prompt appears. Enter the root password and click Ok.The key is deleted from the authorized keys list.
Clearing all Authorized Keys
You can remove all the public keys from the authorized keys list.
Perform the following steps to clear all keys:
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Authorized Keys (Inbound) tab.
Click Reset List.A message confirming the deletion of all authorized keys appears.
Click Yes.
The Root password is required to delete all Authorized Keys prompt appears. Enter the root password and click Ok.All the keys are deleted.
4.7.5.10.3 - Configuring outbound communications
The users who can connect to the known hosts with their private keys are listed in the Identities Keys (Outbound) tab.
The following screen shows the Identities.
Adding a New Key
A new public key can be generated for the host machine to connect with another machine.
Perform the following steps to add a new key.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Identities Keys (Outbound) tab.
Click Add New Key.The Add New Identity Key dialog box appears.
Select a user.
Select Generate new keys.
The Root password is required to create Identity Key prompt appears. Enter the root password and click Ok.
If the public key is to be saved, then select Click to Download Public Key .The public key is saved to the local machine.
Click Finish. The new identity key is added.
Downloading a Public Key
You can download the host’s public key from the Web UI.
Perform the following steps to download a key.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Identities Keys (Outbound) tab.
Select a user.
Select Download Public Key.The public key is saved to the local machine.
Uploading Keys
Perform the following steps to upload an existing key.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Identities Keys (Outbound) tab.
Click Add New Key.The Add New Identity Key dialog box appears.
Select a user.
Select Upload Keys.The list of public keys with the users that they are assigned to appears.
Select Upload Public Key.The file browser dialog box appears.
Select a public key file from your local machine.
Click Open.The public key is assigned to the user.
Select Upload Private Key.The file browser dialog box appears.
Select a private key file from your local machine.
Click Open.
If the private key is protected by a passphrase, then the text field Private Key Passphrase appears.Enter the private key passphrase.
- Click Finish.The new identity key is added.
Reusing public keys between users
The public and private key pair of one user can be assigned as the public and private key pair of another user.
Perform the following steps to choose from an existing key.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Identities Keys (Outbound) tab.
Click Add New Key.The Add New Identity Key dialog box appears.
Select a user.
Select Choose from existing keys.
Select the public key.
The Root password is required to create Identity Key prompt appears. Enter the root password and click Ok.The public key is assigned to the user.
Deleting an Identity
You can delete an identity for a user. Once the identity is removed, the user will no longer be able to connect to another machine.
Perform the following steps to delete an identity:
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Identities Keys (Outbound) tab.
Select a user.
Click Delete Identity.A message confirming the deletion appears.
Click Yes.
The Root password is required to delete the Identity Key prompt appears. Enter the root password and click Ok.The identity is deleted.
Clearing all Identities
You can remove all the identity keys from the identities list.
Perform the following steps to clear all identities.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Identities Keys (Outbound) tab.
Click Reset Identity List.A message confirming the deletion of all identities appears.
Click Yes.
The Root password is required to delete all Identity Keys prompt appears. Enter the root password and click Ok.All the identities are deleted.
4.7.5.10.4 - Configuring known hosts
By default, SSH is configured to deny all communications to unknown remote servers. Known hosts list the machines or nodes to which the host machine can connect. The SSH servers with which the host can communicate are added under Known Hosts.
Adding a New Host
You can add a host to the list of known hosts to which a connection can be established.
Perform the following steps to add a host.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Known Hosts tab.
Click Add Host.The Enter the ip/hostname dialog box appears.
Enter the IP address or hostname in the Enter the ip/hostname text box.
Click Ok. The host is added to the known hosts list.
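As a sketch, you can preview the public key that the appliance records for a host by querying that host from a Linux workstation, assuming the ssh-keyscan utility is available; the host name is a placeholder.
ssh-keyscan -t rsa knownhost.example.com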
Updating the Host Keys
You can refresh the hostnames to check for updates to the hosts' public keys.
Perform the following steps to update a host key.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Known Hosts tab.
Select a host name.
Click Refresh Host Key.The key for the host name is updated.
Deleting a Host
If a connection to a host is no longer required, then you can delete the host from the known host list.
Perform the following steps to delete a known host.
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Known Hosts tab.
Select a host name.
Click Delete Host.A message confirming the deletion appears.
Click Yes.The host is deleted.
Resetting the Host Keys
You can set the keys of all the hosts to a default value.
Perform the following steps to reset all the host keys:
From the ESA Web UI, navigate to Settings > Network.The Network Settings pane appears.
Select the SSH tab.The SSH Configuration Manager pane appears.
Select the Known Hosts tab.
Select Reset Host Keys.A message confirming the reset appears.
Click Yes. The host keys for all the hostnames are reset to the default value.
4.7.6 - Managing Appliance Users
Only authorized users can access the Appliances. These users are system users and LDAP administrative users. The roles of these users are explained in detail in the following sections.
Appliance Users
The root and local_admin users are appliance system users. These users are initialized during installation.
root and local_admin
As a root user, you can be asked to provide the root account password to log in to some CLI Manager tools. For example, Change Accounts and Passwords tool or Configure SSH tool.
The root account is used to exit the appliance command line interface and go directly into the host operating system command line. This gives the system administrator full control over the machine.
The local_admin is necessary for LDAP maintenance when the LDAP is not working or is not accessible.
LDAP Users
The admin and viewer user accounts are LDAP users that are initialized during installation.
For more information about users, refer here.
admin and viewer Accounts
The admin and viewer accounts are used to log onto CLI Manager or Appliance Web UI. These user accounts can be modified using:
- CLI Manager, for instructions refer to section Accounts and Passwords.
- Web UI, where these accounts are the part of the LDAP.
- Policy management.
When these passwords are changed in the CLI Manager or Appliance Web UI, the change applies to all other installed components, thus synchronizing the passwords automatically.
LDAP Target Users
When you have your appliance installed and configured, you can create LDAP users and assign necessary permissions to these users. You can also create groups of users. The system users are by default predefined in the internal LDAP directory.
For more information about creating users in LDAP and defining their security permissions, refer here.
System Roles
Protegrity Data Security Platform role-based access defines a list of roles, including a list of operations that a role can perform. Each user is assigned to one or more roles. User-based access defines a user to whom the operations are granted. There are several predefined roles on ESA.
The following table describes these roles.
Role | Is used by… |
---|---|
root user | The OS system administrator who maintains the Appliance machine, which could be ESA or DSG. |
admin user | The user who specifically manages the creation of roles and members in the LDAP directory. This user could also be the DBA, System Administrator, Programmer, and others. This user is responsible for installing, integrating, or monitoring Protegrity platform components into their corporate infrastructure for the purpose of implementing the Protegrity-based data protection solution. |
viewer user | Personnel who can only view and not create or make changes. |
4.7.7 - Password Policy for all appliance users
The password policy applies to all LDAP users.
The LDAP user password should:
- Be at least 8 characters long
- Contain at least two of the following three character groups:
- Numeric [0-9]
- Alphabetic [a-z, A-Z]
- Special symbols, such as: ~ ! @ # $ % ^ & * ( ) _ + { } | : " < > ? ` - = [ ] \ ; ' , . /
Thus, your password should look like one of the following examples:
- Protegrity123 (alphabetic and numeric)
- Protegrity!@#$ (alphabetic and special symbols)
- 123!@#$ (numeric and special symbols)
The strength of the password is validated by default. This strength validation can also be customized by creating a script file to meet the requirements of your organization.
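As an illustration only (this is not the appliance's built-in validation script), the following shell sketch checks a candidate password against the minimum length and the two-of-three character-group rule described above:
# Hypothetical illustration of the default rules; not shipped with the appliance.
check_password() {
  pw="$1"; groups=0
  [ ${#pw} -ge 8 ] || { echo "too short"; return 1; }
  case "$pw" in *[0-9]*) groups=$((groups+1));; esac        # numeric
  case "$pw" in *[A-Za-z]*) groups=$((groups+1));; esac     # alphabetic
  case "$pw" in *[!A-Za-z0-9]*) groups=$((groups+1));; esac # special symbols
  [ "$groups" -ge 2 ] && echo "ok" || { echo "needs two character groups"; return 1; }
}
check_password 'Protegrity123'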
From the CLI Manager, navigate to Administration > Accounts and Passwords > Manage Passwords and Local-Accounts. Select the correct Change option and update the password.
You can enforce organization rules for password validity from the Web UI, from Settings > Users > User Management, where the following can be configured:
- Minimum period for changeover
- Password expiry
- Lock on maximum failures
- Password history
For more information about configuring the password policy, refer here.
4.7.7.1 - Managing Users
You require users in every system to run the business application. The first step in any system involves setting up the users that operate on the different facets of the application.
In ESA, setting up a user involves operations such as assigning roles, setting up password policies, setting up Active Directories (ADs), and so on. This section describes the various activities that constitute user management for ESA. In ESA, you can add the following users:
- OS users: Users for managing and debugging OS-related operations.
- Appliance users: Users who perform various operations based on the roles assigned to them. These users can also be created in, or imported from, other directory services.
Understanding ESA Users
In any given environment, users are entities that consume services provided by a system. Only authorized users can access the system. In Protegrity appliances, users are created to manage ESA for various purposes. These users are system users and LDAP administrative users.
On ESA, the users navigate to Settings > Users > User Management to view the list of the users that are available in the appliance.
In ESA, users can be categorized as follows:
Internal Appliance Users
These are the users created by default when the ESA is installed. These users are used to perform various operations on the Web UI, such as managing cluster, managing LDAP, and so on. On ESA Web UI, navigate to Settings > Users > User Management to view the list of the users that are available in the appliance.
The following is the list of users that are created when ESA is installed.
User Name | Description | Role |
---|---|---|
admin | Administrator account with access to the Web UI and CLI Manager options. | Security Administrator |
viewer | User with view only access to the Web UI and CLI Manager options. | Security Administrator Viewer |
ldap_bind_user | Created when local LDAP is installed | N/A |
samba_admin_user | Access folders shared by CIFS service running on File Protector Vault. | N/A |
PolicyUser | Perform security operations on the protector node. | Policy User |
ProxyUser | Perform security operations on behalf of other policy users. | ProxyUser |
OS users
These are the users that contain access to all the CLI operations in the appliance. You can create local OS users from the CLI Manager. On CLI Manager, navigate to Administration > Accounts and Passwords > Manage Passwords and Local Accounts to view and manage the OS users in the appliance.
The following is the list of OS users in the appliance.
OS Users | Description |
---|---|
alliance | Handles DSG processes |
root | Super user with access to all commands and files |
local_admin | Local administrator that can be used when an LDAP user is not accessible |
www-data | Daemon that runs the Apache, Service dispatcher, and Web services as a user |
ptycluster | Handles TAC related services and communication between TAC through SSH. |
service_admin and service_viewer | Internal service accounts used for components that do not support LDAP |
clamav | Handles ClamAV antivirus |
rabbitmq | Handles the RabbitMQ messaging queues |
epmd | Daemon that tracks the listening address of a node |
openldap | Handles the openLDAP utility |
dpsdbuser | Internal repository user for managing policies |
Policy Users
These users are imported from a file or an external source for managing policy operations on ESA. Policy users are used by protectors that communicate with ESA for performing security operations.
External Appliance users
These are external users that are added to the appliance for performing various operations on the Web UI. The LDAP users are imported by using External Groups or by importing users. You can also add new users to the appliances from the User Management screen.
Ensure that the Proxy Authentication Settings are configured before importing the users.
Managing Appliance Users
After you configure the LDAP server, you can either add users to internal LDAP or import users from the external LDAP. The users are then assigned to roles based on the permissions you want to grant them.
Default users
The default users packaged with ESA that are common across appliances are provided in the following table. You can edit each of these roles to provide additional privileges.
User Name | Description | Role |
---|---|---|
admin | Administrator account with full access to the Web UI and CLI Manager options. | Security Administrator |
viewer | User with view only access to the Web UI and CLI Manager options. | Security Administrator Viewer |
ldap_bind_user | User who accesses the local LDAP in ESA or other appliances. | n/a |
PolicyUser | Users who can perform security operations on the DSG Test Utility. | Policy User |
ProxyUser | Users who can perform security operations on behalf of other policy users on the Protection Server.Note: The Protection Server is deprecated. This user should not be used. | ProxyUser |
Proxy users
The following table describes the three types of proxy users in ESA:
Type | Description |
---|---|
Local | Users that are authenticated using the local LDAP or created during installation. |
Manual | Users that are manually created or imported manually from an external directory service. |
Automatic | Users that are imported automatically from an external directory service and a part of different External Groups. For more information about External Groups, refer here. |
User Management Web UI
The user management screen allows you to add, import, and modify permissions for the users. The following screen displays the ESA User Management Web UI.
Callout | Column | Description |
---|---|---|
1 | User Name | Name of the user. This user can either be added to the internal LDAP server or imported from an external LDAP server. |
2 | Password Policy | Enable the password policy for the selected user. This option is available only for local users. For more information about defining the password policy for users, refer Password Policy. |
3 | User Password Status | Indicates the status of the user. The available states are as follows: Valid – the user is active and ready to use ESA. Warning – the user must change the password to gain access to ESA. When the user tries to log in after this status is flagged, it is mandatory for the user to change the password to access the appliance. Note: As the administrator sets the initial password, it is recommended to change your password at the first login for security reasons. |
4 | Lock Status | User status based on the defined password policy. The available states are as follows: Locked – users who are locked after a series of incorrect attempts to log in to ESA. Unlocked – users who can access ESA. <value> – number of attempts remaining for a user after entering an incorrect password. |
5 | Expiration Date | Indicates the expiry status for a user. The available statuses are as follows: Time left for expiry – Displays |
6 | User Type | Indicates if the user is a local, manual, or automatically imported user. |
7 | Last Unsuccessful Login (UTC) | Indicates the time of the last unsuccessful login attempted by the user. The time displayed is in UTC. Note: If a user successfully logs in through the Web UI or the CLI Manager, then the time stamp for any previous unsuccessful attempts is reset. |
8 | Roles | Roles linked to that user. |
9 | Add User | Add a new internal LDAP user. |
10 | Import User | Import users from the external LDAP server. Note: This option is available only when Proxy Authentication is enabled. |
11 | Action | The following actions are available: Reset password – when you reset the password for a user, the Enter your password prompt appears; enter the password and click Ok. Remove – when you remove a user, the Enter your password prompt appears; enter the password and click Ok. Convert to Local user – when you convert a user to a local LDAP user, ESA creates the user in its local LDAP server. Note: If the number of unsuccessful password attempts exceeds the defined value in the password policy, the account gets locked. |
12 | Page Navigation | Navigate through pages to view more users. |
13 | View Entries | Select the number of users to be displayed in a single view. You can select to view up to 50 users. |
14 | Search User Name | Enter the name of the user you want to filter from the list of users. |
4.7.7.1.1 - Adding users to internal LDAP
You can create users with custom permissions and roles, and add them to the internal LDAP server.
If you are trying to add users and are not authorized to do so, you can temporarily add users by providing the credentials of a user with LDAP Manager permissions. This session remains active and lets you add users for a timeout period of 5 minutes. During the active session, if you need to revoke this user and return to your session, you can click Remove.
Perform the following steps to add users to internal LDAP. In these steps, we will use the name “John Doe” as the name of the user being added to the internal LDAP.
In the Web UI, navigate to Settings > Users > User Management.
Click Add User to add new users.
- Click Cancel to exit the adding user screen.
- The & character is not supported in the Username field.
Enter John as First Name, Doe as Last Name, and provide a Description. The User Name text box is auto-populated. You can edit it, if required.
- The maximum number of characters that you can enter in the First Name, Last Name, and User Name fields is 100.
- The maximum number of characters that you can enter in the Description field is 200.
Click Continue to configure the password.
Enter the password and confirm it in the next text box.
Verify that the Enable Password Policy toggle button is enabled to apply the password policy for the user.
The Enable Password Policy toggle button is enabled by default. For more information about password policy, refer here.
Click Continue to assign a role to the user.
Select the role you want to assign to the user. You can assign the user to multiple roles.
Click Add User.
The Enter your password prompt appears. Enter the password and click Ok. If the number of unsuccessful password attempts exceeds the defined value in the password policy, the account gets locked.
For more information about Password Policy, refer here.
After 5 minutes, the session ends, and you can no longer add users. The following figure shows this feature in the Web UI.
4.7.7.1.2 - Importing users to internal LDAP
In the User Management screen, you can import users from an external LDAP to the internal LDAP. This option gives you the flexibility to add selected users from your LDAP to the ESA.
Ensure that Proxy Authentication is enabled before importing users from an external directory service.
For more information about working with Proxy Authentication, refer here.
The username in the local LDAP is case-sensitive and the username in Active Directory is case-insensitive. It is recommended not to import users from the external LDAP when the username in the local LDAP and the username in the external LDAP are the same.
The users imported are not local users of the internal LDAP. You cannot apply password policy to these users. To convert the imported user to a local user, navigate to Settings > Users > User Management, select the user, and then click Convert to Local user. When you convert a user to a local LDAP user, ESA creates the user in its local LDAP server.
Perform the following steps to import users to internal LDAP.
In the Web UI, navigate to Settings> Users > User Management.
Click Import Users to add an external LDAP user to the internal LDAP. The Import Users screen appears.
Select Search by Username to search the users by username or select Search by custom filter to search the users using the LDAP filter.
Type the required number of results to display in the Display Number of Results text box.
If you want to overwrite existing users, click Overwrite Existing Users.
Click Next. The users matching the search criteria appear on the screen.
Select the required users and click Next. The screen to select the roles appears.
Select the required roles for the selected users and click Next.
The Enter your password prompt appears. Enter the password and click Ok. If the number of unsuccessful password attempts exceeds the defined value in the password policy, the account gets locked.
For more information about Password Policy, refer here.
The screen displaying the roles imported appears.
The users, along with the roles, are imported to the internal LDAP.
4.7.7.1.3 - Password policy configuration
A user with administrative privileges can define the password policy rules. PolicyUser and ProxyUser have the Password Policy option disabled by default.
Defining a Password Policy
If the number of unsuccessful password attempts exceeds the defined value in the password policy, the account gets locked.
For more information about Password Policy, refer here.
Perform the following steps to define a password policy.
From the ESA Web UI, navigate to Settings > Users.
On the User Management tab, under the Define Password Policy area, click the Edit icon.
Select the password policy options for users as described in the following table:
Password Policy Option | Description | Default Value | Possible Values |
---|---|---|---|
Minimum period for changeover | Number of days since the last password change. | 1 | 0-29 |
Password expiry | Number of days a password remains valid. | 30 | 0-720 |
Lock on maximum failures | Number of attempts a user makes before the account is locked and requires Admin help for unlocking. | 5 | 0-10 |
Password history | Number of older passwords that are retained and checked against when a password is updated. | 1 | 0-64 |

Click Apply Changes.
The Enter your password prompt appears. Enter the password and click Ok.
Resetting the password policy to default settings
If the number of unsuccessful password attempts exceeds the defined value in the password policy, the account gets locked.
For more information about Password Policy, refer here.
The password policy is set to default values as mentioned in the Password Policy Configuration table.
The users imported into LDAP have Password Policy disabled, by default. This option cannot be enabled for imported users.
Perform the following steps to reset the password policy to default settings.
Click Reset. A confirmation message appears.
Click Yes.
The Enter your password prompt appears. Enter the password and click Ok.
Enabling password policy for Local LDAP users
Perform the following steps to enable password policy for Local LDAP users.
From the ESA Web UI, navigate to Settings > Users.
In the Manage Users area, click the Password Policy toggle for the user. A dialog box appears requesting LDAP credentials.
The Enter your password prompt appears. Enter the password and click Ok.
After successful validation, password policy is enabled for the user.
Users locked out from too many password failures
If the number of unsuccessful password attempts exceeds the defined value in the password policy, the account gets locked. Users who have been locked out receive the error message “Login Failure: Account locked” when trying to log in. To unlock the user, a user with administrative privileges must reset their password.
When an Admin user is locked, the local_admin user can be used to unlock the Admin user from the CLI Manager. Note that the local_admin is not part of LDAP, so it cannot be locked. For more information about Password Policy, including resetting passwords, refer Password Policy.
4.7.7.1.4 - Edit users
For every change made to a user, the Enter your password prompt appears. Enter the password and click Ok.
Perform the following steps to edit the user.
Navigate to Settings > Users > User Management. Click on a User Name.
Under the General Info section, edit the Description.
Under the Password Policy section, toggle to enable or disable the Password Policy.
Under the Roles section, select role(s) from the list for the user.
Click Reset Password to reset password for the user.
Click the Delete icon to delete the user.
Users locked out from too many password failures
If the number of unsuccessful password attempts exceeds the defined value in the password policy, the account gets locked.
For more information about Password Policy, refer here.
4.7.7.2 - Managing Roles
Roles are templates that include permissions, and users can be assigned to one or more roles. Users in the appliance must be attached to a role.
The default roles packaged with ESA are as follows:
Roles | Description | Permissions |
---|---|---|
Policy Proxy User | Allows a user to connect to DSG via SOAP/REST and access web services using Application Protector (AP). | Proxy-User |
Policy User | Allows user to connect to DSG via SOAP/REST and perform security operations using Application Protector (AP). | Policy-User |
Security Administrator Viewer | Role that can view the ESA Web UI, CLI, and reports. | Security Viewer, Appliance CLI Viewer, Appliance web viewer, Reports Viewer |
Shell Accounts | Role that has direct SSH access to the Appliance OS shell. Note: It is recommended that careful consideration is taken when assigning the Shell Accounts role and permission to a user. Ensure that if a user is assigned to the Shell Accounts role, no other role is linked to the same user. The user has no access to the Web UI or CLI, except when the user has password policy enabled and is required to change the password through the Web UI. | Shell (non-CLI) Access. Note: The user can access SSH directly if the permission is tied to this role. |
Security Administrator | Role that is responsible for setting up data security using ESA policy management, which includes but is not limited to creating, managing, and deploying policy. | Security Officer, Reports Manager, Appliance Web Manager, Appliance CLI Administrator, Export Certificates, DPS Admin, Directory Manager, Export Keys, RLP Manager |
The capabilities of a role are defined by the permissions attached to the role. Though roles can be created, modified, or deleted from the appliance, permissions cannot be edited. The permissions that are available to map with a user and packaged with ESA as default permissions are as follows:
Permissions | Description |
---|---|
Appliance CLI Administrator | Allows users to perform all operations available as part of ESA CLI Manager. |
Appliance Web Manager | Allows user to perform all operations available as part of the ESA Web UI. |
Audit Store Admin | Allows user to manage the Audit Store. |
Can Create JWT Token | Allows user to create JWT token for communication. |
Customer Business manager | Allows users to retrieve metering reports. |
DPS Admin | Allows user to use the DPS admin tool on the protector node. |
Export Certificates | Allows user to download certificates from ESA. |
Key Manager | Allows user to access the Key Management Web UI, rotate ERK or DSK, and modify ERK states. |
Policy-User | Allows user to connect to Data Security Gateway (DSG) via REST and perform security operations using Application Protector (AP). |
RLP Manager | Allows user to manage rules stored on Row-Level Security Administrator (ROLESA). Manage includes accessing, viewing, creating, etc. |
Reports Viewer | Allows user to only view reports. |
Security Viewer | Allows user to have read only access to policy management in the Appliance. |
Appliance CLI Viewer | Allows user to login to the Appliance CLI as a viewer and view the appliance setup and configuration. |
Appliance web viewer | Allows user to login to the Appliance web-interface as a viewer. |
AWS Admin | Allows user to configure and access AWS tools if the AWS Cloud Utility product is installed. |
Directory Manager | Allows user to manage the Appliance LDAP Directory Service. |
Export Keys | Allows user to export keys from ESA. |
Reports Manager | Allows user to manage reports and do functions related to reports. Manage includes accessing, viewing, creating, scheduling, etc. |
Security Officer | Allows user to manage policy, keys, and do functions related to policy and key management. Manage includes accessing, viewing, creating, deploying, etc. |
Shell (non-CLI) Access | Allows user to get direct access to the Appliance OS shell via SSH. It is recommended that careful consideration is taken when assigning the Shell Accounts role and permission to a user. Ensure that if a user is assigned to the Shell Account role, no other role is linked to the same user. |
Export Resilient Package | Allows user to export package from the ESA by using the RPS API. |
Can Create JWT Token | Allows user to create a JSON Web Token (JWT) for user authentication. |
ESA Admin | Allows user to perform operations on Audit Store Cluster Management. |
Insight Admin | Allows user to perform operations on the Discover Web UI. |
Proxy-User | Allows user to connect to DSG via REST and perform security operations using Application Protector (AP). |
SSO Login | Allows user to login to the system using the Single Sign-On (SSO) mechanism. |
The ESA Roles web UI is as seen in the following image.
Callout | Column | Description |
---|---|---|
1 | Role Name | Name of the role available on ESA. Note: If you want to edit an existing role, click the role name from the displayed list. After making required edits, click Save to save the changes. |
2 | Description | Brief description about the role and its capabilities. |
3 | Permissions | Permission mapped to the role. The tasks that a user mapped to a role can perform is based on the permissions enabled. |
4 | Action | The following actions are available. |
5 | Add Role | Add a custom role to ESA. |
Duplicating and deleting roles
Keep the following in mind when duplicating and deleting roles.
- It is recommended to delete a role from the Web UI only. This ensures that the updates are reflected correctly across all the users that were associated with the role.
- When you duplicate or delete a role, the Enter your password prompt appears. Enter the password and click Ok to complete the task.
Adding a Role
You can create a custom business role with permissions and privileges that you want to map with that role. Custom templates provide the flexibility to create additional roles with ease.
Perform the following steps to add a role. In these steps, we will use an example role named “Security Viewer”.
In the Web UI, navigate to Settings > Users > Roles.
If you want to edit an existing role, click the role name from the displayed list. After making required edits, click Save to save the changes.
Click Add Role to add a business role.
Enter Security Viewer as the Name.
Enter a brief description in the Description text box.
Select custom as the template from the Templates drop-down.
Under the Role Permissions and Privileges area, select the permissions that you want to grant to the role. Click Uncheck All to clear all the check boxes. Ensure that you do not select the Shell (non-CLI) Access permission for users who require Web UI and CLI access.
Click Save to save the role.
The Enter your password prompt appears. Enter the password and click Ok.
4.7.7.3 - Configuring the proxy authentication settings
To configure the proxy authentication from the Web UI, the directory_administrator permission must be associated with the required role. It is also possible to do this through the CLI Manager. For more information about configuring LDAP from the CLI Manager, refer here.
Perform the following steps to configure proxy authentication settings.
In the Web UI, navigate to Settings > Users > Proxy Authentication. The following figure shows example LDAP configuration.
Enter the IP address of the external LDAP in LDAP URI. The accepted format is ldap://host:port.
- Click the add icon to add multiple LDAP servers.
- Click the remove icon to remove an LDAP server from the list.
Enter data in the fields as shown in the following table:
Fields | Description |
---|---|
Base DN | The LDAP Server Base distinguished name. For example, Base DN: dc=sherwood, dc=com. |
Bind DN | Distinguished name of the LDAP Bind User. It is recommended that this user is granted viewer permissions. For example, Bind DN: administrator@sherwood.com |
Bind Password | The password of the specified LDAP Bind User. |
StartTLS Method | Set this value based on the configuration at the customer LDAP. |
Verify Peer | Enable this setting to validate the certificate from an AD. If this setting is enabled, ensure that the following points are considered: a CA certificate is required to verify the server certificate from the AD (for more information about certificates, refer Certificate Management); the LDAP URI matches the hostname in the server and CA certificates; and the LDAP AD URI hostname is resolved in the hosts file. |
LDAP Filter | Provide the attribute to be used for filtering users in the external LDAP. For example, you can use the default attribute, sAMAccountName, to authenticate users in a single AD. Note: In case of the same usernames across multiple ADs, it is recommended to use an LDAP filter such as UserPrincipalName to authenticate users. |

Click Test to test the provided configuration. An LDAP test connectivity passed successfully message appears.
Click Apply to apply and save the configuration settings.
The Enter your password prompt appears. Enter the password and click Ok. A Proxy Authentication was ENABLED and configuration were saved successfully message appears.
Navigate to System > Services and verify that the Proxy Authentication Service is running.
If you make any changes to the existing configuration, click Save to save and apply the changes. Click Disable to disable the proxy authentication.
After the Proxy Authentication is enabled, the user egsyncd_service_admin is enabled. It is recommended not to change the password for this user.
After enabling Proxy Authentication, you can proceed to adding users and mapping roles to the users. For more information about importing users, refer here.
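If you want to sanity-check the external LDAP connection details before or after configuring proxy authentication, a standard LDAP client can be used from any machine that can reach the directory server. The following is a minimal sketch, assuming the OpenLDAP ldapsearch utility is available; the host name, Bind DN, Base DN, and account name are placeholder examples, not values shipped with ESA.

# Bind as the Bind DN over StartTLS (-ZZ) and search the Base DN
# for a single account by its sAMAccountName attribute.
ldapsearch -H ldap://ad.sherwood.com:389 -ZZ \
  -D "administrator@sherwood.com" -W \
  -b "dc=sherwood,dc=com" "(sAMAccountName=jdoe)" cn mail

A successful bind that returns an entry indicates that the URI, Bind DN, and Base DN are usable. The Test option in the Web UI performs its own connectivity check against the same settings.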
4.7.7.4 - Working with External Groups
Directory service providers, such as Active Directory (AD) or Oracle Directory Server Enterprise Edition (ODSEE), are identity management systems that contain information about the enterprise users. You can map the users in the directory service providers to the various roles defined in the Appliances. The External Groups feature enables you to associate users or groups with the roles.
You can import users from a directory service to assign roles for performing various security and administrative operations in the appliances. Using External Groups, you connect to an external source, import the required users or groups, and assign the appliance-specific roles to them. The appliances automatically synchronize with the directory service provider at regular time intervals to update user information. If any user or group in a source directory service is updated, it is reflected across the users in the external groups. The updates made to the local LDAP do not affect the source directory service provider.
If any changes occur to the roles or users in the external groups, an audit event is triggered.
Ensure that Proxy Authentication is enabled to use an external group.
The following screen displays the External Groups screen.
Only users with the Directory Manager role can configure the External Groups screen.
The following table describes the actions you can perform on the External Groups screen.
Icon | Description |
---|---|
List Users | List the users present for the external group. |
Synchronize | Synchronize with the external group to update the users. |
Delete | Delete the external group. |
Required fields for External Groups
Listed below are the required fields for creating an External Group.
Title: Name designated to the External Group
Description: Additional text describing the External Group
Group DN: Distinguished name where groups can be found in the directory
Query by: To pull users from the directory server to the appliance, you must query the directory server using required parameters. This can be achieved using one of the following two methods:
Query by User: This method allows you to add a specific set of users from a directory server.
Group Properties: The search is based on the values entered in the Group DN and Member Attribute Name text boxes. Consider an example where the values in Group DN and Member Attribute Name are cn=esa,ou=groups,dc=sherwood,dc=com and memberOf, respectively. In this case, the search is performed on every user that is available in the directory server. The memberOf value of each user is matched with the specified Group DN, and only those users whose memberOf value matches the Group DN are returned.
Search Filter: This field facilitates searching for multiple users using patterns. For example, if the value in the Search Filter is cn=S*, all the users whose cn begins with S in the directory server are retrieved.
Query by Group: Using this method, you can search for and add the users of a group in the directory server. All the users belonging to the group are retrieved in the search process.
Group Properties: The search is based on the values entered in the Group DN and Member Attribute Name text boxes. Consider an example where the values in Group DN and Member Attribute Name are cn=hr,ou=groups,dc=sherwood,dc=com and member, respectively. The search is performed in the directory server for the group mentioned in the Group DN text box. If the group is available, then all the users listed in the member attribute of that group are retrieved.
Search Filter: This field facilitates searching for multiple groups across the directory server. The users are retrieved based on the values provided in the Search Filter and Member Attribute Name text boxes. A search is performed for the groups mentioned in the Search Filter, and the value of the Member Attribute Name attribute of each group is fetched. Consider an example where the value in the Search Filter is cn=accounts and the value in Member Attribute Name is member. All the groups that match cn=accounts are searched, and the values in the member attribute of those groups are retrieved as the search result.
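As an illustration of how these settings translate into directory queries, the following sketch expresses the two Group Properties lookups described above as LDAP searches, using the example values from this section; the host name and bind credentials are placeholders, and the ldapsearch utility is assumed to be available.

# Query by User with Group Properties: return users whose memberOf
# attribute matches the specified Group DN.
ldapsearch -H ldap://ad.sherwood.com -D "administrator@sherwood.com" -W \
  -b "dc=sherwood,dc=com" "(memberOf=cn=esa,ou=groups,dc=sherwood,dc=com)"

# Query by Group with Group Properties: read the member attribute of the
# group entry itself to list its users.
ldapsearch -H ldap://ad.sherwood.com -D "administrator@sherwood.com" -W \
  -b "cn=hr,ou=groups,dc=sherwood,dc=com" -s base "(objectClass=*)" member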
Adding an External Group
You can add an external group to assign roles to a group of users. For example, consider a scenario where you add an external group with data entered in the Search Filter text box.
Perform the following steps to add an external group.
In the ESA Web UI, navigate to Settings > Users > External Groups.
Click Create.
Enter the required information in the Title and Description fields.
If you select Group Properties, then enter the Group DN and Member Attribute Name. For example:
Enter the following DN in the Group DN text box:
cn=Joe,ou=groups,dc=sherwood,dc=com
Enter the following attribute in the Member Attribute Name text box:
memberOf
This text box is not applicable for ODSEE.
If you select Search Filter, enter the search criteria in the Search Filter text box.
For example,
For AD, you can enter the search filter as follows:
(&(memberOf=cn=John,dc=Bob,dc=com))
For ODSEE, you can enter the search filter as follows:
isMemberOf=cn=Alex,ou=groups,dc=sherwood,dc=com
Click Preview Users to view the list of users for the selected search criteria.
Select the required roles from the Roles tab.
Click Save.
An external group is added.
The Users tab is visible, displaying the list of users added as a part of the external group.
Importing from ODSEE and special characters
If you are importing users from ODSEE, usernames containing special characters are not supported. Special characters include the semicolon, forward slash, curly brackets, parentheses, angle brackets, and plus sign, that is: ; / {} () <> +.
Editing an External Group
You can edit an external group to modify fields such as Description, Mode, Roles, or Group Properties. If any updates are made to the roles of the users in the external groups, the modifications are applicable immediately to the users existing in the local LDAP.
Ensure that you synchronize with the source directory service if you update the Group DN or the search filter.
Perform the following steps to edit an external group:
In the ESA Web UI, navigate to Settings > Users > External Groups.
Select the required external group.
Edit the required fields.
Click Save.
The Enter your password prompt appears. Enter the password and click Ok. The changes to the external group are updated.
Deleting an External Group
When you delete an external group, the following scenarios are considered while removing a user from an external group:
- If the users are not part of other external groups, the users are removed from the local LDAP.
- If the users are a part of multiple external groups, only the association with the deleted external group and roles is removed.
Perform the following steps to remove an External Group:
In the ESA Web UI, navigate to Settings > Users > External Groups.
Select the required external group and click the Delete icon.
The Enter your password prompt appears. Enter the password and click Ok. The external group is deleted.
Synchronizing the External Group
When the proxy authentication is enabled, the External Groups Sync Service is started. This service is responsible for the automatic synchronization of the external groups with the directory services. The time interval for automatic synchronization is 24 hours.
You can manually synchronize the external groups with the directory services using the Synchronize icon.
After clicking the Synchronize icon, the Enter your password prompt appears. Enter the password and click Ok.
The following scenarios occur when synchronization is performed between the external groups and the directory services.
- Users are added to ESA and roles are assigned.
- Roles of existing users in ESA are updated.
- Users are deleted from the ESA if they are no longer associated with any external group.
Based on the scenarios, the messages appearing in the Web UI, when synchronization is performed, are described in the following table.
Message | Description |
---|---|
Added | Users are added to the ESA and the roles mentioned in the external groups are assigned to the users. |
Updated | Roles pertaining to the users are updated in ESA. |
Removed | Roles corresponding to the deleted external group are removed from the users. Users are not deleted from ESA. |
Deleted | Users are deleted from ESA as they are not associated to any external group. |
Failed | Updates to the user fail. The reason for the failure in update appears in the Web UI. |
If a GroupDN for an external group is not available during synchronization, the users are removed or deleted. The following log appears in the Insight logs:
Appliance Warning: GroupDN is missing in external Source.
Also, in the Appliance logs, the following message appears:
External Group: <Group name>, GroupDN: <domain name> could not be found on the external source
4.7.7.5 - Configuring the Azure AD Settings
You can configure the Azure AD settings from the Web UI. Using the Web UI, you can enable the Azure AD settings to manage user access to cloud applications, import users or groups, and assign specific roles to them.
For more information about configuring Azure AD Settings from the CLI Manager, refer here.
Before configuring Azure AD Settings on the appliance, you must have the following information that is required to connect the appliance with the Azure AD:
- Tenant ID
- Client ID
- Client Secret or Thumbprint
For more information about the Tenant ID, Client ID, Authentication Type, and Client Secret/Thumbprint, search for the text Register an app with Azure Active Directory on Microsoft’s Technical Documentation site at https://learn.microsoft.com/en-us/docs/
The following is the list of API permissions that must be granted.
- Group.Read.All
- GroupMember.Read.All
- User.Read
- User.Read.All
For more information about configuring the application permissions in the Azure AD, refer to https://learn.microsoft.com/en-us/graph/auth-v2-service?tabs=http.
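If you want to verify the Tenant ID, Client ID, and Client Secret before entering them in the Web UI, you can request a token from the Microsoft identity platform using the client credentials flow. The following is a hedged sketch using curl; the tenant, client, and secret values are placeholders, and this check only applies to the SECRET authentication type.

# Request an application token for Microsoft Graph using the
# client credentials grant.
curl -s -X POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "scope=https://graph.microsoft.com/.default"

A JSON response containing an access_token field indicates that the credentials are valid. The Test option in the Web UI performs the appliance-side validation of the same settings.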
Perform the following steps to configure Azure AD settings:
- On the Web UI, navigate to Settings > Users > Azure AD. The following figure shows an example of Azure AD configuration.
Enter the data in the fields as shown in the following table:
Setting | Description |
---|---|
Tenant ID | Unique identifier of the Azure AD instance. |
Client ID | Unique identifier of an application created in Azure AD. |
Auth Type | Select one of the Auth Types: SECRET indicates a password-based authentication; the secrets are symmetric keys that both the client and the server must know. CERT indicates a certificate-based authentication; the client uses a certificate and its private key, and the server validates the certificate using the public key. |
Client Secret/Thumbprint | The client secret/thumbprint is the password of the Azure AD application. If the Auth Type selected is SECRET, then enter the Client Secret. If the Auth Type selected is CERT, then enter the Client Thumbprint. |
For more information about the Tenant ID, Client ID, Authentication Type, and Client Secret/Thumbprint, search for the text Register an app with Azure Active Directory on Microsoft’s Technical Documentation site at https://learn.microsoft.com/en-us/docs/.
Click Test to test the provided configuration. The Azure AD settings are authenticated successfully. To save the changes, click ‘Apply/Save’. message appears.
Click Apply to apply and save the configuration settings. The Azure AD settings are saved successfully message appears.
4.7.7.5.1 - Importing Azure AD Users
Before importing Azure users, ensure that the following prerequisites are considered:
- Ensure that the user is not present in the nested group. If the user is present in the nested group, then the nested group will not be synced on the appliance.
- Check the user status before importing them to the appliance. If a user with the Disabled status is imported, then that user will not be able to log in to the appliance.
- Ensure that an external user is not added to the group. If an external user is added to the group, then that user will not be synced on the appliance.
- Ensure that the special character # (hash) is not used while creating the username. If you are importing users from the Azure AD, then the usernames containing the special character # (hash) will not be able to log in to the appliance. Usernames containing the following special characters are supported in the appliance:
- ’ (single quote)
- . (period)
- ^ (caret)
- ! (exclamation)
- ~ (tilde)
- - (minus)
- _ (underscore)
- Ensure that the Azure AD settings are enabled before importing the users.
You can import users from the Azure AD to the appliance, on the User Management screen.
For more information about configuring the Azure AD settings, refer here.
Perform the following steps to import Azure AD users.
On the Web UI, navigate to Settings > Users > User Management.
Click Import Azure Users.
The Enter your password prompt appears. Enter the password and click Ok. The Import Users screen appears.
Search for a user by entering the name in the Username/Filter box.
If required, toggle the Overwrite Existing Users option to ON to overwrite users that are already imported to the appliance.
Click Next. The users matching the search criteria appear on the screen.
Select the required users and click Next. The screen to select the roles appears.
Select the required roles for the selected users and click Next. The screen displaying the imported users appears.
Click Close. The users, with their roles, are imported to the appliance.
4.7.7.5.2 - Working with External Azure Groups
The Azure AD is an identity management system that contains information about the enterprise users. You can map the users in the Azure AD to the various roles defined in the Appliances. The External Azure Groups feature enables you to associate users or groups to the roles.
You can import users from the Azure AD to assign roles for performing various security and administrative operations on the appliances. Using External Azure Groups, you connect to Azure AD, import the required users or groups, and assign the Appliance-specific roles to them.
Ensure that Azure AD is enabled to use external Azure group.
The following screen displays the External Azure Groups screen.
Only users with the Directory Manager permissions can configure the External Groups screen.
The following table describes the actions that you can perform on the External Groups screen.
Icon | Description |
---|---|
List Users | List the users present for the Azure External Group. |
Synchronize | Synchronize with the Azure External Group to update the users. |
Delete | Delete the Azure External Group. |
Adding an Azure External Group
You can add an Azure External Group to assign roles for a group of users.
Perform the following steps to add an External Group.
From the ESA Web UI, navigate to Settings > Users > Azure External Groups.
Click Add External Group.
Enter the group name in the Groupname/Filter field.
Click Search Groups to view the list of groups.
Select one group from the list, and click Submit.
Enter a description in the Description field.
Select the required roles from the Roles tab.
Click Save. The External Group has been created successfully message appears.
Editing an Azure External Group
You can edit an Azure external group to modify Description and Roles. If any updates are made to the roles of the users in the Azure External Groups, then the modifications are applicable immediately to the users existing on the Appliance.
Perform the following steps to edit an External Group:
On the ESA Web UI, navigate to Settings > Users > Azure External Groups.
Select the required external group.
Edit the required fields.
Click Save.
The Enter your password prompt appears. Enter the password and click Ok. The changes to the external group are updated.
Synchronizing the Azure External Groups
When Azure AD is enabled, the Azure External Groups feature is started. You can manually synchronize the Azure External Groups using the Synchronize icon.
After clicking the Synchronize icon, the Enter your password prompt appears. Enter the password and click Ok.
Note: If the number of unsuccessful password attempts exceeds the defined value in the password policy, then the user account gets locked.
For more information about Password Policy, refer here.
The messages appearing on the Web UI, when synchronization is performed between Azure External Groups and the appliance, are described in the following table.
Message | Description |
---|---|
Success | Users are added, updated, or deleted on the appliance, and the roles mentioned in the Azure External Groups are assigned or updated accordingly. |
Failed | Updates to the user failed. Note: The reason for the failure in updating the user appears on the Web UI. |
Deleting Azure External Groups
When you delete an Azure External Group, the following scenarios are considered while removing a user from the Azure External Group:
- If the users are not part of other external groups, then the users are removed from the Appliance.
- If the users are a part of multiple external groups, then only the association with the deleted Azure External Group and its roles is removed.
Perform the following steps to remove an Azure External Group.
From the ESA Web UI, navigate to Settings > Users > Azure External Groups.
Select the required external group and click the Delete icon.
The Enter your password prompt appears. Enter the password and click Ok. The Azure External Group is deleted.
4.8 - Trusted Appliances Cluster (TAC)
Network clustering organizes a group of computers so that they function together, providing highly available resources. Clustering is highly desirable for disaster recovery: the failure of one system does not affect business continuity, and the performance of resources is maintained.
A Trusted Appliances Cluster (TAC) is a tool where appliances, such as ESA or DSG, replicate and maintain information. In a TAC, multiple appliances are connected using SSH. A trusted channel is created to transfer data between the appliances in the cluster. You can also run remote commands, back up data, synchronize files and configurations across multiple sites, or import/export configurations between appliances that are directly connected to each other.
In a TAC, all the systems in the cluster are in an active state. The requests for security operations are handled across the active appliances in the cluster. Thus, if an appliance fails, the requests are balanced across the other appliances in the cluster.
4.8.1 - TAC Topology
The TAC is a connected graph with a fully connected cluster. In a fully connected cluster, every node directly communicates with other nodes in the cluster.
The following figure shows a connected graph with four nodes A, B, C, and D that are directly connected to each other.
In a TAC, each appliance is classified either as a client or a server.
- Client: A client is a stateless agent that requests information from a server.
- Server: A server maintains information about all the appliances in the cluster, performs regular health checks, and responds to queries from the clients.
A server can be further classified as a leader or a follower. The leader is responsible for maintaining the status of the cluster and replicating cluster-related information among the other servers in the cluster. The first appliance that is added to the cluster is the leader. The other appliances added to the cluster are followers.
It is important to maintain the number of servers to keep the cluster available. For a cluster to be available, the number of servers available must be (N/2) + 1, where N is the number of servers in the cluster. For example, in a five-server cluster, at least three servers must be available. Thus, it is recommended to have a minimum of three servers in your cluster for fault tolerance.
4.8.2 - Cluster Configuration Files
In a cluster, you can deploy an appliance as a server or a client by modifying the cluster configuration files. The following configuration files are available for an appliance in a cluster.
agent.json
The agent.json file specifies the role of an appliance in the cluster. The file is available in the /opt/cluster-consul-integration/configure directory.
The following table describes the attributes that can be configured in the agent.json file.
Attribute | Description | Values |
---|---|---|
type | The role of the appliance in the cluster. | server, client, or auto (default) |
agent_auto.json
This file is considered only if the type attribute in the agent.json file is set to auto. The agent_auto.json file specifies the maximum number of servers allowed in a cluster. Additionally, you can also specify which appliances can be added to the cluster as servers.
The agent_auto.json file is available in the /opt/cluster-consul-integration/configure directory.
The following table describes the attributes that can be configured in the agent_auto.json file.
Attribute | Description | Values |
---|---|---|
maximum_servers | The maximum number of servers that can be deployed in a cluster. | 5 (default) |
PAP_eligible_servers | The list of appliance codes that can be deployed as servers. | ESA (default) |
config.json
This file contains the cluster-related information for an appliance, such as the data center, ports, Consul certificates, bind address, and so on. The config.json file is available in the /opt/consul/configure directory.
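The following is an illustrative sketch of the attributes described above as they might appear in the agent.json and agent_auto.json files. The exact structure of the files on your appliance may differ, so treat this as an example of the attributes rather than a template to copy.

/opt/cluster-consul-integration/configure/agent.json:
{
  "type": "auto"
}

/opt/cluster-consul-integration/configure/agent_auto.json:
{
  "maximum_servers": 5,
  "PAP_eligible_servers": ["ESA"]
}

With these values, an appliance joins as a server while the cluster has fewer than five servers and the appliance code is listed in PAP_eligible_servers; otherwise, it joins as a client.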
4.8.3 - Deploying Appliances in a Cluster
You can deploy the appliances in a cluster as a server or a client. The type attribute in the agent.json file and the PAP_eligible_servers and maximum_servers attributes in the agent_auto.json file determine how the appliance is deployed in the cluster.
The following flowchart illustrates how an appliance is deployed in a cluster.
Example process for deploying appliances in a cluster
Consider an ESA appliance, ESA001, on which you create a cluster. As this is the first appliance in the cluster, ESA001 becomes the leader of the cluster. The following are the default values of the attributes in the agent.json and agent_auto.json files on ESA001.
- type: auto
- maximum_servers: 5
- PAP_eligible_servers: ESA
Now, you want to add another ESA appliance, ESA002, to this cluster as a server. In this case, you must ensure that the type attribute in the agent.json file of ESA002 is set as server.
If you want to add an ESA003 to the cluster as a client, you must ensure that the type attribute in the agent.json file of ESA003 is set as client.
The following figure illustrates the cluster comprising of nodes ESA001, ESA002, and ESA003.
Now, you add another ESA appliance, ESA004, to this cluster with the following attributes:
- type: auto
- maximum_servers: 5
- PAP_eligible_servers: ESA
In this case, the following checks are performed:
- Is the value of maximum_servers greater than zero? Yes.
- Is the number of servers in the cluster exceeding the maximum_servers? No.
- Is the appliance code of ESA004 in the PAP_eligible_servers list? Yes.
The name or appliance code of appliances can be viewed in the Appliance_code file in the /etc directory.
As long as the limit of the number of servers on the cluster is not exceeded and the appliance is a part of the server list, ESA004 is added as a server as shown in the following figure.
Now add a DSG appliance named CG001 to this cluster with the following attributes:
- type: auto
- maximum_servers: 5
- PAP_eligible_servers: CG
In this case, the following checks are performed:
- Is the maximum_servers greater than zero? Yes.
- Is the number of servers in the cluster exceeding the maximum_servers? No.
- Is the appliance code of CG001 in the PAP_eligible_servers list? Yes.
Thus, CG001 is added to the cluster as a server.
Now, consider a cluster with five servers, ESA001, ESA002, ESA003, ESA004, and ESA006 as shown in the following figure.
You now add another ESA appliance, ESA007 to this cluster, with the following attributes:
- type: auto
- maximum_servers: 5
- PAP_eligible_servers: ESA
In this case, the following checks are performed:
- Is the maximum_servers greater than zero? Yes.
- Is the number of servers in the cluster exceeding the maximum_servers? Yes.
- Is the appliance code of ESA007 in the PAP_eligible_servers list? Yes.
Thus, as the limit of the number of servers in a cluster is exceeded, ESA007 is added as a client.
4.8.4 - Cluster Security
This section describes Cluster Security.
Gossip Key
In the cluster, the appliances communicate using the Gossip protocol. The cluster supports encrypting the communication using the gossip key. This key is generated during the creation of the cluster. The gossip key is then shared across all the appliances in the cluster.
SSL Certificates
SSL certificates are used to authenticate the appliances on the cluster. Every appliance contains the following default cluster certificates in the certificate repository:
- Server certificate and key for Consul
- Certificate Authority (CA) certificate and key for Consul
In a cluster, the server certificates of the appliances are validated by the CA certificate of the appliance that initiated the cluster. This CA certificate is shared across all the appliances on the cluster for SSL communication.
You can also upload your custom CA and server certificates to the appliances on the cluster. The CA.key file is not mandatory when you deploy custom certificates for an appliance.
Ensure that you apply a single CA certificate on all the appliances in the cluster.
If the CA.key is available, the appliances that are added to the cluster download the CA certificate and key. The new server certificates for the appliance are generated using the CA key file.
If the CA.key is not available, all the keys and certificates are shared among the appliances in the cluster.
Ensure that the custom certificates match the following requirements:
The CN attribute of the server certificate is set in the following format:
server.<datacenter name>.<domain>
The domain and datacenter name must be equal to the values mentioned in the config.json file. For example, server.ptydatacenter.protegrity.
The custom certificates contain the following entries:
- localhost
- 127.0.0.1
- FQDN of the local servers in the cluster
For example, an SSL certificate with the SAN extension for servers ESA1, ESA2, and ESA3 in a cluster has the following entries:
- localhost
- 127.0.0.1
- ESA1.protegrity.com
- ESA2.protegrity.com
- ESA3.protegrity.com
The following figure illustrates the certificates.
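Before uploading custom certificates, you can inspect the CN and SAN entries of a server certificate to confirm that they meet the requirements listed above. The following is a minimal sketch, assuming the certificate is available in PEM format as server.pem and the openssl utility is installed.

# Print the subject (CN) of the server certificate.
openssl x509 -in server.pem -noout -subject
# Print the Subject Alternative Name entries.
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"

The subject should show a CN in the server.<datacenter name>.<domain> format, and the SAN list should include localhost, 127.0.0.1, and the FQDNs of the cluster servers.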
Ports
The following ports are used for enabling communication between appliances:
- TCP port 8300 – Used by servers to handle incoming requests
- TCP and UDP port 8301 – Used by appliances to gossip on the LAN
- TCP and UDP port 8302 – Used by appliances to gossip on the WAN
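If appliances fail to join the cluster or to gossip, a quick reachability check of these ports between appliances can help isolate firewall issues. The following is a sketch, assuming a netcat (nc) variant that supports the -z scan option; only the TCP ports can be checked reliably this way.

# Check the Consul server port and the LAN/WAN gossip TCP ports on a peer appliance.
nc -zv <peer-appliance-ip> 8300
nc -zv <peer-appliance-ip> 8301
nc -zv <peer-appliance-ip> 8302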
Appliance Key Rotation
If you are using cloned machines to join a cluster, it is necessary to rotate the keys on all cloned nodes before joining the cluster. If the cloned machines have proxy authentication, two-factor authentication, or TAC enabled, it is recommended to use new machines. This avoids any limitations or conflicts, such as inconsistent TAC, mismatched node statuses, conflicting nodes, and key rotation failures due to keys in use.
For more information about rotating the keys, refer here.
4.8.5 - Reinstalling Cluster Services
If the configuration files for the TAC are corrupted, you can reinstall the Consul service.
Before you begin
Ensure that the Cluster-Consul-Integration v1.0.0 service is uninstalled before reinstalling the Consul v2.4.0 service.
To reinstall the cluster services:
In the CLI Manager, navigate to Administration > Add/Remove Services.
Press ENTER.
Enter the root password and select OK.
Select Install applications.
Select only Consul v2.4.0 and select OK.
Select Yes.
The Consul product is reinstalled on your appliance.
Install the Cluster-Consul-Integration v1.0.0 service.
For more information about installing services, refer to the Protegrity Installation.
4.8.6 - Uninstalling Cluster Services
If there is a cluster with a maximum of ten nodes and you do not want to continue with the integrated cluster services, then uninstall the cluster services.
To uninstall cluster services:
Remove the appliance from the TAC.
In the CLI Manager, navigate to Administration > Add/Remove Services.
Press ENTER.
Enter the root password and select OK.
Select Remove already installed applications.
Select Cluster-Consul-Integration v1.0.0 and select OK.
The integration service is uninstalled.
Select Consul v2.4.0 and select OK.
The Consul product is uninstalled from your appliance.
If the node contains scheduled tasks associated with it, then you cannot uninstall the cluster services on it. Ensure that you delete all the scheduled tasks before uninstalling the cluster services.
4.8.7 - FAQs on TAC
This section lists the FAQs on TAC.
Question | Answer |
---|---|
Can I block communication between appliances? | No. Blocking communication between appliances is disabled from release v7.1.0 MR2. |
What is the recommended minimum quorum of servers required in a cluster? | The recommended minimum quorum of servers required in a cluster is three. |
How to determine which appliance is the leader of the cluster? | In the OS Console of an appliance, run the following command:/usr/local/consul operator raft list-peers -http-addr https://localhost:9000 -ca-file /opt/consul/ssl/ca.pem -client-cert /opt/consul/ssl/cert.pem -client-key /opt/consul/ssl/cert.key |
Can I change the certificates of an appliance that is added to a cluster? | Yes. Ensure that the certificates are valid. For more information about the validity of the certificates, refer here. |
Can I remove the last server from the cluster? | No, you cannot remove the last server from the cluster. The clients depend on this server for cluster-related information. If you remove this server, then you risk destabilizing the cluster. |
How to determine the role of an appliance in a cluster? | In the Web UI, navigate to the Trusted Appliance Cluster. On the screen, the labels for the appliances appear. The label for the server is Consul Server and that of the client is Consul Client. |
Can I add an appliance other than ESA as server? | Yes. Ensure that the value of the type attribute in the agent.json file under the /opt/cluster-consul-integration/configure directory is set as server. |
Can I clone a machine and join it to the cluster? | Yes, you can clone a machine and join it to the cluster. However, if you are using cloned machines to join a cluster, it is necessary to rotate the keys on all cloned nodes before joining the cluster. If the cloned machines have proxy authentication, two-factor authentication, or TAC enabled, it is recommended to use new machines. This avoids any limitations or conflicts, such as inconsistent TAC, mismatched node statuses, conflicting nodes, and key rotation failures due to keys in use. For more information about rotating the keys, refer here. |
4.8.8 - Creating a TAC using the Web UI
You can create a TAC, in which you add appliances to the cluster.
Before you begin
When setting up or adding appliances to your cluster, you may be required to request a license for new nodes from Protegrity.
For more information about licensing, refer to the Protegrity Data Security Platform Licensing and your license agreement with Protegrity.
Before creating a TAC, ensure that the SSH Authentication type is set to Password + PublicKey.
If you are using cloned machines to join a cluster, it is necessary to rotate the keys on all cloned nodes before joining the cluster.
If the cloned machines have proxy authentication, two-factor authentication, or TAC enabled, it is recommended to use new machines. This avoids any limitations or conflicts, such as inconsistent TAC, mismatched node statuses, conflicting nodes, and key rotation failures due to keys in use.
For more information about rotating the keys, refer here.
Creating a TAC
In the ESA Web UI, navigate to System > Trusted Appliances Cluster.
The Join Cluster screen appears.
Select Create a new cluster.
The following screen appears.
Select the preferred communication method.
Select Add New to add, edit, or delete a communication method.
For more information about managing communication methods, refer here.
Click Save.
A cluster is created.
4.8.9 - Connection Settings
In a TAC, you can create a partially connected cluster using the Connection Settings feature. In a partially connected cluster, the nodes selectively communicate with other nodes in the cluster without disconnecting the graph. If you want to avoid redundant information between certain nodes in the cluster, you can block the direct communication between them.
This feature is only supported if the Cluster-Consul-Integration and Consul components are not installed on your system.
The following figure shows a partially connected cluster connected graph with four nodes, where the nodes selectively communicate with some nodes in the cluster.
As shown in the figure, the direct communication between nodes C and D, A and D, and B and C is blocked. If node B requires information about node C, it receives the information through node A. The cluster remains a connected graph, where every node can communicate with every other node either directly or indirectly.
In a disconnected graph, there is no communication path between one node and other nodes in the cluster. You cannot create a TAC with a disconnected graph.
In a partially connected cluster, as some nodes are not connected to each other directly, there might be a delay in propagating data, depending on the path that the data needs to traverse.
Connection Settings for Nodes
This section describes the steps to set the connection settings for nodes in a cluster.
To set connection settings for nodes in the cluster:
In the CLI Manager, navigate to Tools > Trusted Appliances Cluster > Connection Management: Set connection settings for cluster nodes.
The following screen appears.
Select the required node in the cluster.
Select Choose.
The list of connection settings between the node and other nodes in the cluster appears.
Press SPACEBAR to toggle the connection setting for a particular node.
Select Apply.
The connection settings for the node are saved.
Caution: You can only create cluster export tasks between nodes that are directly connected to each other.
4.8.10 - Joining an Existing Cluster using the Web UI
If your appliance is not a part of any trusted appliances cluster, then you can add it to an existing cluster. This section describes the steps to join a TAC using the Web UI.
Before you begin
If you are using cloned machines to join a cluster, it is necessary to rotate the keys on all cloned nodes before joining the cluster.
If the cloned machines have proxy authentication, two-factor authentication, or TAC enabled, it is recommended to use new machines. This avoids any limitations or conflicts, such as inconsistent TAC, mismatched node statuses, conflicting nodes, and key rotation failures due to keys in use.
For more information about rotating the keys, refer here.
Important: When assigning a role to the user, ensure that the Can Create JWT Token permission is assigned to the role. If the Can Create JWT Token permission is not assigned to the role of the required user on the target node, then the join cluster operation fails. To verify the Can Create JWT Token permission, from the ESA Web UI navigate to Settings > Users > Roles.
Adding to an existing cluster
On the ESA Web UI, navigate to System > Trusted Appliances Cluster.
The following screen appears.
Enter the IP address of the target node in the Node text box.
Enter the credentials of the user of the target node in the Username and Password text boxes.
Click Connect.
The Site drop-down list and the Communication Methods options appear.
If you need to add a new communication method, click Add New. Otherwise, continue on to the next step.
Select the site and the preferred communication method.
Click Join.
The node is added to the cluster and the following screen appears.
Handling Consul certificates after adding an appliance to the cluster
After joining an appliance to the cluster, during replication, the Consul certificates are copied from the source to the target appliance. In this case, it is recommended to delete the Consul certificates pertaining to the target node from the Certificate Management screen. Navigate to Settings > Network > Certificate Repository. Click the delete icon next to Server certificate and key for Consul.
4.8.11 - Managing Communication Methods for Local Node
Every node in a network is identified using a unique identifier. A communication method is a qualifier for the remote nodes in the network to communicate with the local node.
There are two standard methods by which a node is identified:
- Local IP Address of the system (ethMNG)
- Host name
The nodes joining a cluster use the communication method to communicate with each other. The communication between nodes in a cluster occurs over one of the accessible communication methods.
Adding a Communication Method from the Web UI
This section describes the steps to add a communication method from the Web UI.
In the Web UI, you can add a communication method only before creating a cluster. Perform the following steps to add a communication method from the Web UI.
In the Web UI, navigate to System > Trusted Appliances Cluster.
The Join Cluster Screen appears.
Click Create a new Cluster.
Click Create.
Click Add New.
The Add Communication Method text box appears.
Type the communication method and select OK.
The communication method is added.
Editing a Communication Method from the Web UI
This section describes the steps to edit a communication method from the Web UI.
In the Web UI, you can edit a communication method only before you create a cluster. Perform the following steps to edit a communication method from the Web UI.
In the Web UI, navigate to System > Trusted Appliances Cluster.
The Join Cluster Screen appears.
Click Create a new Cluster.
The Create New Cluster screen appears.
Click Create.
Click the Edit
icon corresponding to the communication method to be edited.
The Edit Communication Method text box appears.
Type the communication method and select OK.
The communication method is edited.
Deleting a Communication Method from the Web UI
This section describes the steps to delete a communication method from the Web UI.
To delete a communication method from the Web UI:
In the Web UI, navigate to System > Trusted Appliances Cluster.
The Join Cluster Screen appears.
In the Web UI, you can delete a communication method before you create a cluster.
Click Create New Cluster.
The Create New Cluster screen appears.
Click Create.
Click the Delete
icon corresponding to the communication method to be deleted.
A message confirming the delete operation appears.
Select OK.
The communication method is deleted.
4.8.12 - Viewing Cluster Information
This section describes how to view cluster information using the Web UI.
To view cluster information using the Web UI:
In the Web UI, navigate to System > Trusted Appliances Cluster.
The screen with the appliances connected to the cluster appears.
Select All in the drop-down list.
The following options appear:
- Node Summary
- Cluster Tasks
- DiskFree
- MemoryFree
- Network
- System Info
- Top 10 CPU
- Top 10 Memory
- All
Select the required option.
The selected information for the appliances appears in the right pane.
4.8.13 - Removing a Node from the Cluster using the Web UI
This section describes the steps to remove a node from a cluster using the Web UI.
Before you begin
If a node is associated with a cluster task that is based on the hostname or IP address, then the Leave Cluster operation will not remove the node from the cluster. Ensure that you delete all such tasks before removing any node from the cluster.
Removing the node
On the Web UI of the node that you want to remove from the cluster, navigate to System > Trusted Appliances Cluster.
The screen displaying the cluster nodes appears.
Navigate to Management > Leave Cluster.
The following screen appears.
A confirmation message appears.
Select Ok.
The node is removed from the cluster.
Scheduled tasks and removed nodes
If the scheduled tasks are created between the nodes in a cluster, then ensure that after you remove a node from the cluster, all the scheduled tasks related to the node are disabled or deleted.
4.9 - Appliance Virtualization
The default installation of Protegrity appliances uses hardware virtualization mode (HVM). An appliance can be reconfigured to use paravirtualization mode (PVM) to optimize the performance of virtual guest machines. Protegrity supports the following virtual servers:
- Xen
- Microsoft Hyper-V
- Linux KVM Hypervisor
This section provides details on appliance virtualization. Understanding some of the instructions and details requires some Xen knowledge and technical skills. The virtual server configuration is done with its own tools. The examples shown later in this section use paravirtualization with Xen. The Xen hypervisor is a thin software layer that is inserted between the server hardware and the operating system. It provides an abstraction layer that allows each physical server to run one or more virtual servers, effectively decoupling the operating system and its applications from the underlying physical server. Xen hypervisor changes are facilitated by the Xen Paravirtualization Tool.
For more information about Xen and the Xen hypervisor, refer to http://www.xen.org/.
About switching from HVM to PVM
This section will also show how to switch from HVM to PVM. The following two main tasks are involved:
- Configuration changes on the guest machine, the appliance.
- Configuration changes on the virtual server.
The appliance configuration changes are facilitated by the Xen Paravirtualization tool, which is available in the appliance Tools menu, in the CLI Manager.
4.9.1 - Xen Paravirtualization Setup
This section describes the paravirtualization process, from preparation to running the tools and rebooting into PVM mode.
The paravirtualization tool provides an easy way to convert HVM to PVM and back again. It automates changes to configuration files and XenServer parameters.
This section describes the actual configuration changes on both the Appliance and XenServer in case you need or want to understand the low-level mechanisms involved.
Before you begin
It is recommended that you consult Protegrity Support before using the information in this Technical Reference section to manually change your configurations.
4.9.1.1 - Pre-Conversion Tasks
Before switching from HVM to PVM you should perform a system check, interface check, and system backup.
System Check
The Protegrity software appliance is installed with HVM. This means the appliance operating system does not know that it is running on a hypervisor.
To check the system:
Use the following Linux command to check whether the Linux kernel supports paravirtualization and examine the hypervisor.
# dmesg | grep -i boot
If the following message does not appear, then the kernel does not support paravirtualization:
Booting paravirtualized kernel
The rest of the output shows the hypervisor name, for example, Xen. If you are running on physical hardware, or the hypervisor was not configured to use PVM, then the following output appears:
bare hardware
Interface Check
The conversion tools and tasks assume that the Protegrity Appliance virtual hard disk is using the IDE interface, which is the default interface. Check that the device name used by the Linux Operating System is hda, and not sda or other devices.
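A quick, generic way to confirm which device name the guest OS is using is to list the block devices from the OS Console; this check is not Protegrity-specific and is provided only as a sketch.

```
# The appliance root disk should appear as hda (IDE), not sda
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# Alternatively, check which device backs the root filesystem
mount | grep ' / '
```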
System Backup
Switching from HVM to PVM requires changes in many configuration files, so it is very important to back up the system before applying the changes. Use the XenServer snapshot functionality to back up the system.
For more information about the snapshot functionality, refer to the XenServer documentation.
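For example, a snapshot can be taken from the XenServer console with the xe CLI; the VM name below is a placeholder, and this command is only a sketch of the XenServer snapshot functionality.

```
# On the XenServer console: snapshot the appliance VM before the conversion
xe vm-snapshot vm="NAME_OF_VM_MACHINE" new-name-label="pre-pvm-conversion"
```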
It is also recommended that you back up the appliance data and configuration files using the standard appliance backup mechanisms.
For more information about backing up from CLI Manager, refer here.
The Manage local OS users option provides the ability to create users that need direct OS shell access. These users are allowed to perform non-standard functions, such as scheduling remote operations, running backup agents, running health monitoring, and so on. This option also lets you manage passwords and permissions for the dpsdbuser, which is available by default when ESA is installed.
Managing Local OS Users
This section describes the steps to manage the local OS users.
To manage local OS users:
Navigate to Administration > Accounts and Passwords > Manage Passwords and Local-Accounts > Manage local OS users.
In the dialog displayed, enter the root password and confirm selection.
Add a new user or select an existing user as explained in following steps.
Select Add to create a new local OS user.
In the dialog box displayed, enter a User name and Password for the new user. The & character is not supported in the Username field.
Confirm the password in the required text boxes.
Select OK and press Enter to save the user.
Select an existing user from the list displayed.
You can select one of the following options from the displayed menu.
Options | Description | Procedure |
---|---|---|
Check password | Validate the entered password. | In the dialog box displayed, enter the password for the local OS user. A "Validation succeeded" message appears. |
Update password | Change the password for the user. | In the dialog box displayed, enter the Old password for the local OS user (this step is optional). Enter the New Password and confirm it in the required text boxes. |
Update shell | Define shell access for the user. | In the dialog box displayed, select one of the following options: No login access, Linux Shell - /bin/sh, or Custom. Note: The default shell is set as No login access (/bin/false). |
Toggle SSH access | Set SSH access for the user. | Select the Toggle SSH access option and press Enter to set SSH access to Yes. Note: The default is set as No when a user is created. |
Delete user | Delete the local OS user and related home directory. | Select the Delete user option and confirm the selection. |
Select Close to exit the option.
Backup and Restore
If you backed up the OS in HVM/PVM mode, then you will be able to restore only in the mode in which you backed it up. For more information about backing up from the Web UI, refer to section System Backup and Restore.
4.9.1.2 - Paravirtualization Process
There are several tasks you must perform to switch from HVM to PVM.
The following figure shows the overall task flow.
The installed Appliance comes with the Appliance Paravirtualization Support Tool, which does the following:
- Displays the current paravirtualization status of the appliance.
- Displays Next Boot paravirtualization status of the appliance.
- Converts from HVM to PVM and back again.
- Connects to the XenServer and configures the Xen hypervisor for HVM or PVM.
Starting the Appliance Paravirtualization Support Tool
You can use Appliance Paravirtualization Support Tool to configure the local appliance for PVM.
To start the Appliance Paravirtualization Support Tool:
Access the ESA CLI Manager.
Navigate to Tools > Xen ParaVirtualization screen.
The root permission is required for entering the tool menu.
When you launch the tool, the main screen shows the current system status and provides options for managing virtualization.
Enabling Paravirtualization
When you convert your appliance to PVM mode, the internal configuration is modified and the Next Boot status changes to support paravirtualization. Both virtual block device and virtual console support is enabled as well.
To enable Paravirtualization:
To enable PVM on the appliance, you need to configure both XenServer and the appliance.
You can configure XenServer in two ways:
- Copy the tool to the XenServer and execute it locally, not using the appliance.
- Execute the commands manually using the xe command of the Xen console.
To configure the local appliance for PVM from the Appliance Paravirtualization Support Tool main screen, select Enable paravirtualization settings.
The status indicators in the Next boot configuration section of the main screen change from Disabled to Enabled.
Configuring Host for PVM
To configure the Host for PVM, you need to have access to the XenServer machine.
Once the local Appliance is configured to use PVM, you connect to the XenServer to run the Xen ParaVirtualization Support Tool. This configures changes on the Xen hypervisor so that it runs in Host PVM mode. You will be asked for a root password upon launching the tool.
The following figure shows the main screen of the Xen Paravirtualization Support Tool.
To configure the Host for PVM:
From the Appliance ParaVirtualization Support Tool main screen, select Connect to XenServer hypervisor and execute tool.
Select OK.
The XenServer hypervisor interface appears.
At the prompt, type the IP or host name of the XenServer.
Press ENTER.
At the prompt, type the user name for SCP/SSH connection.
Press ENTER.
At the prompt, type the password to upload the file.
Press ENTER.
The tool is uploaded to the /tmp directory.
At the prompt, type the password to remotely run the tool.
Press ENTER.
An introduction message appears.
At the prompt, type the name of the target virtual machine.
Alternatively, press ENTER to list available virtual machines.
The Xen ParaVirtualization Support Tool Main Screen appears and shows the current virtual machine information and status.
Type 4 to enable paravirtualization settings.
Press ENTER.
The following screen appears.
At the prompt, type Y to save the configuration.
Press ENTER.
You can use option 3 to back up the entries that will be modified.
The backup is stored in the /tmp directory on the XenServer machine as a rollback script that can be executed later to revert the configuration from PVM back to HVM.
Type q to exit the Appliance Paravirtualization Support Tool.
Rebooting the Appliance for PVM
After configuring the appliance and the Host for PVM, the appliance must be restarted. When it restarts, it will come up and run in PVM mode.
Before you begin
Before rebooting the appliance:
Exit both local and remote Paravirtualization tools before rebooting the appliance.
In PVM mode, the system might not boot if there are two bootable devices. Be sure to eject any bootable CD/DVD on the guest machine.
If you encounter console issues after reboot, then close the XenCenter and restart a new session.
Booting into System Restore mode
You cannot boot into System Restore mode while in Xen Server PVM mode, because the System Restore option does not appear when the appliance starts; it appears only if you have previously backed up the OS. However, you can boot into System Restore mode while in Xen Server HVM mode.
How to reboot the appliance for PVM
To reboot appliance for PVM:
To reboot the appliance for PVM, navigate to Administration > Reboot and Shutdown > Reboot.
Restart the Appliance Paravirtualization Support Tool and check the main screen to verify the current mode.
Disabling Paravirtualization
To disable Paravirtualization:
To revert the appliance back to HVM, you need to disable paravirtualization on the guest appliance OS and on the XenServer.
To return the appliance to HVM, use the Disable Paravirtualization Settings option, available in the Appliance Paravirtualization Support Tool.
The status indicators in the Next boot configuration section on the main screen change from Enabled to Disabled.
To return the XenServer to HVM, perform one of the following tasks to revert the XenServer configuration to HVM:
If… | Then… |
---|---|
You backed up the XenServer configuration by creating a rollback script while switching from HVM to PVM, using option 3 on the Xen Paravirtualization Support Tool | Execute the rollback script. |
You want to use the Xen Paravirtualization Support Tool | Use the Xen Paravirtualization Support Tool to connect to the XenServer, and then type 5 to select Disable paravirtualization Setting (enable HVM). For more information about connecting to the XenServer, refer to section Configure Host for PVM. |
You want to perform a manual conversion | Manually convert from PVM to HVM. For more information about converting from PVM to HVM, refer to section Manual Configuration of Xen Server. |
4.9.2 - Xen Server Configuration
This section describes how to configure the Xen Server.
Appliance Configuration Files for PVM
The following table describes the appliance configuration files that are affected by the appliance Xen Paravirtualization tool.
File Name | Description | HVM | PVM |
---|---|---|---|
/boot/grub/menu.lst | Boot Manager. The root partition is affected and the console parameters. | root=/dev/hda1 | root=/dev/xvda1 console=hvc0 xencons=hvc0 |
/etc/fstab | Mounting table | Using the hda device name (/dev/hda1,/dev/hda2,…) | Using the xvda device-name (/dev/xvda1,…) |
/etc/inittab | Console | tty1 | hvc0 |
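For orientation, the following sketch shows roughly how the PVM values from the table above appear in the affected files; only the parameters listed in the table come from the appliance, the surrounding lines are illustrative.

```
# /boot/grub/menu.lst (PVM): the kernel line carries the xvda root and hvc0 console
#   root=/dev/xvda1 console=hvc0 xencons=hvc0
# /etc/fstab (PVM): mount entries reference the xvda device names
#   /dev/xvda1  /  ext3  defaults  0  1          # illustrative line
# /etc/inittab (PVM): the console entry uses hvc0 instead of tty1
#   1:2345:respawn:/sbin/getty 38400 hvc0        # illustrative line
```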
Xen Server Parameters for PVM
This section lists the Xen Server Parameters for PVM.
The following settings are affected by the Appliance Paravirtualization Support Tool.
Parameter Name | Description | HVM | PVM |
---|---|---|---|
HVM-boot-policy | VM parameter: boot-loader | BIOS Order | “” (empty) |
PV-bootloader | VM Parameter: paravirtualization loader | “” (empty) | Pygrub |
Bootable | Virtual Block Device parameter | false | “true” |
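To inspect the current values of these parameters before converting, you can query the XenServer with the xe CLI; the VM name below is a placeholder, and this check is not part of the documented procedure.

```
# Look up the VM UUID, then print the virtualization-related parameters
TARGET_VM_UUID=$(xe vm-list name-label="NAME_OF_VM_MACHINE" params=uuid --minimal)
xe vm-param-get uuid=$TARGET_VM_UUID param-name=HVM-boot-policy
xe vm-param-get uuid=$TARGET_VM_UUID param-name=PV-bootloader
```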
Manual Configuration of Xen Server
This section describes how to configure the Xen Server manually.
It is recommended that you use the Xen Paravirtualization Support Tool to switch between HVM and PVM. However, you sometimes might need to manually configure the XenServer. This section describes the commands you use to switch between the two modes.
It is recommended that you consult Protegrity Support before manually applying the commands. Back up your data prior to configuration changes. Read the XenServer documentation to avoid errors.
Converting HVM to PVM
This section describes the steps to convert HVM to PVM.
Use the following commands to convert from HVM to PVM, where NAME_OF_VM_MACHINE is the name of the virtual machine.
```
TARGET_VM_NAME="NAME_OF_VM_MACHINE"
TARGET_VM_UUID=$(xe vm-list name-label="$TARGET_VM_NAME" params=uuid --minimal)
TARGET_VM_VBD=$(xe vm-disk-list uuid=$TARGET_VM_UUID | grep -A1 VBD | tail -n 1 | cut -f2 - | sed "s/ *//g")
xe vm-param-set uuid=$TARGET_VM_UUID HVM-boot-policy=""
xe vm-param-set uuid=$TARGET_VM_UUID PV-bootloader="pygrub"
xe vbd-param-set uuid=$TARGET_VM_VBD bootable="true"
```
Converting PVM to HVM
This section describes the steps to convert PVM to HVM.
Use the following commands to convert from PVM to HVM, where NAME_OF_VM_MACHINE is the name of the virtual machine.
```
TARGET_VM_NAME="NAME_OF_VM_MACHINE"
TARGET_VM_UUID=$(xe vm-list name-label="$TARGET_VM_NAME" params=uuid --minimal)
TARGET_VM_VBD=$(xe vm-disk-list uuid=$TARGET_VM_UUID | grep -A1 VBD | tail -n 1 | cut -f2 - | sed "s/ *//g")
xe vm-param-set uuid=$TARGET_VM_UUID HVM-boot-policy="BIOS order"
xe vm-param-set uuid=$TARGET_VM_UUID PV-bootloader=""
xe vbd-param-set uuid=$TARGET_VM_VBD bootable="false"
```
4.9.3 - Installing Xen Tools
Protegrity uses Xen tools to enhance and improve the virtualization environment with better management and performance monitoring. The appliance is a hardened machine, so you must send the Xen tools (.deb) package to Protegrity. In turn, Protegrity provides you with an installable package for your Xen Server environment. You must upload the package to the appliance and install it from within the OS Console.
To install Xen tools:
Mount the Xen tools CDROM to the guest machine:
Using the XenCenter, mount the XenTools (xs-tools.iso file) as a CD to the VM.
Log in to the appliance, and then switch to OS Console.
To manually mount the device, run the following command:
# mount /dev/xvdd /cdrom
Copy the Xen tools .deb package to your desktop machine. You can do that in one of the following ways:
Using scp to copy the file to a Linux machine, for example:
# scp -F /dev/null /cdrom/Linux/*_i386.deb YOUR_TARGET_MACHINE:/tmp
Using the Web UI: create the following soft link, and then download the file from https://YOUR_IP/xentools.
# ln -s /cdrom/Linux /var/www/xentools
When you are done, delete the soft link (/var/www/xentools).
Send the xe-guest-utilities_XXXXXX_i386.deb file to Protegrity.
Protegrity will provide you with this package in a .tgz file.
Upload the package to the appliance using the Web UI.
Extract the package and execute the installation:
# cd /products/uploads
# tar xvfz xe-guest-utilities_XXXXX_i386.tgz
# cd xe-guest-utilities_XXXXX_i386
# ./install.sh
Unmount the /cdrom on the appliance.
Eject the mounted ISO.
Reboot the Appliance to clean up references to temporary files and processes.
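For the unmount step earlier in this procedure, a minimal sketch of the command run from the appliance OS Console (assuming the Xen tools CD is still mounted at /cdrom):

```
# Unmount the Xen tools CD before ejecting the ISO
umount /cdrom
```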
4.9.4 - Xen Source – Xen Community Version
Unlike XenServer, which provides an integrated UI to configure the virtual machines, Xen Source® does not provide one. Therefore, the third step of switching from HVM to PVM must be done manually by changing configuration files.
This section provides examples of basic Xen configuration files that you can use to initialize Protegrity Appliance on Xen Source hypervisor.
For more information about Xen Source, refer to Protegrity Support, Xen Source documentation, and forums.
HVM Configuration
The following example configuration file is used to manually configure the appliance for full virtualization.
import os, re
arch_libdir = 'lib'
arch = os.uname()[4]
if os.uname()[0] == 'Linux' and re.search('64', arch):
arch_libdir = 'lib64'
kernel = "/usr/lib/xen/boot/hvmloader"
builder='hvm'
boot="cda"
memory = 1024
name = "ESA"
vif = [ 'type=ioemu, bridge=xenbr0' ]
disk = [ 'file:/etc/xen/ESA.img,hda,w', 'file:/media/ESA.iso,hdc:cdrom,r' ]
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
sdl=0
opengl=0
vnc=1
vncunused=0
vncpasswd=''
stdvga=0
serial='pty'
PVM Configuration
The following example configuration file is used to manually configure the appliance for paravirtualization.
kernel = "/usr/lib/xen/boot/pv-grub-x86_64.tgz"
extra = "(hd0,0)/boot/grub/menu.lst"
memory = 1024
name = "ESA"
vif = [ 'bridge=xenbr0' ]
disk = [ 'file:/etc/xen/ESA.img,xvda,w']
#vfb = [ 'vnc=1' ]# Enable this for graphical GRUB splash-screen
Modify the configuration file names, locations, and resources to suit your own environment and requirements.
Virtual Appliance
Create a new (minimum) virtual appliance on XEN Source after creating the configuration files as /etc/xen/ESA.hvm.cfg and /etc/xen/ESA.pv.cfg.
# xm info
# dd if=/dev/zero of=/etc/xen/ESA.img bs=1 count=1 seek=15G
# xm create -c /etc/xen/ESA.hvm.cfg
… Install machine… configure PVM …
# xm shutdown ESA
# xm create -c /etc/xen/ESA.pv.cfg
Paravirtualization FAQ and Troubleshooting
This section lists some Paravirtualization Frequently Asked Questions and Answers.
Frequently Asked Questions | Answers |
---|---|
Why are XenTools not provided with the appliance? | In addition to the distribution issues, the XenTools depend on the exact version of your XenServer. |
I cannot boot the virtual machine in PVM mode. | Ensure that no CD/DVD (ISO image) is inserted in the machine. Eject all CD/DVDs, and then reboot. Make sure that PVM is enabled on the hypervisor itself. For more information about PVM, refer to section Manual Configuration of Xen Server. The last resort is to use a Live-CD, for example, Knoppix, to modify the appliance files. |
I cannot initialize High-Availability. | You have probably installed the XenTools but have not rebooted the system after the XenTools installation. Reboot the system and retry. |
I need to set up a cloned virtual machine as soon as possible. | Cloning a virtual appliance is currently a risk and is not recommended. |
After switching to PVM mode, I cannot use the XenCenter. | Close the XenCenter and open a new instance. |
4.10 - Appliance Hardening
The Protegrity Appliance provides the framework for its appliance-based products. The base Operating System (OS) used for Protegrity Appliances is Linux, which provides the platform for Protegrity products. This platform includes the required OS low-level components as well as higher-level components for enhanced security management. Linux is widely accepted as the preferred base OS for many customized solutions, such as in firewalls and embedded systems, among others.
Linux was selected for the following reasons:
- Open Source: Linux is an Open Source solution.
- Stable: The OS is a stable platform due to its R&D and QA cycles.
- Customizable: The OS can be customized up to a high level.
- Proven system: The OS has already been proven in many production environments and systems.
For a list of installed components, refer to the Contractual.htm document available in the appliance Web UI under Settings > System > Files pane.
Protegrity takes several measures to harden this Linux-based system and make it more secure. For example, many non-essential packages and components are removed. If you want to install external packages on the appliances, the packages must be certified by Protegrity.
For more information about installing external packages, contact Protegrity Support.
The following additional hardening measures are described in this section:
- Linux Kernel
- Restricted Logins
- Enhanced Logging
- Open Listening TCP Ports
- Packages and Services
Several major components, services, or packages are disabled or removed for appliance hardening. The following table lists the removed packages.
Removed Object | Examples |
---|---|
Network Services (except SSH/Apache) | telnet client/server |
Package Managers | apt |
Additional Packages | Man pages, documents |
Linux Kernel
The appliance kernels are optimized for hardening. The Protegrity appliances are currently equipped with a modular patched Linux Kernel version 4.9.38. These kernels are patched to enhance some capabilities and to optimize them for server-side usage. Standard server-side features, such as scheduler and TCP settings, are available.
Logging in
Restricted Logins
Every Protegrity Appliance is equipped with an internal LDAP directory service, OpenLDAP. Appliances may use this internal LDAP for authentication, or an external one.
The ESA Server provides directory services to all the other appliances. However, to avoid single point of failure you can use multiple directory services.
Four users are predefined and available after the appliance is installed. Unlike in standard Linux, the root user is blocked and cannot access the system without permission from the admin user. The admin user cannot access the Linux Shell Console without permission from the root user. This design provides extra security: to perform any OS-related or security-related operations, both the root and admin users must cooperate. The operations include upgrades and patches. The same design applies to SSH connectivity.
The main characteristics of the four users are described here.
root user
- Local OS user.
- By default, can only access machine’s console.
- All other access requires additional admin user login to ensure isolation of duties.
- If required, then login using SSH can be allowed, which is blocked by default.
- No Web UI access.
admin user
- LDAP directory management user.
- Usually this user is the Chief Security Officer.
- Can access and manage Web UI or CLI menu using machine’s console or SSH.
- Can create additional users.
- If required, then root user login for OS related activities can be allowed.
viewer user
- LDAP directory user.
- By default, has read-only access to Appliance features.
- Can access Web UI and CLI menu using machine’s console or SSH but cannot modify settings/server.
local_admin user
- Local OS user.
- Emergency or maintenance user with limited admin user permission.
- Handles cases where the directory server is not accessible.
- By default, has SSH and Web UI blocked, and only machine’s console is accessible.
The appliance login design facilitates appliance hardening. The following two OS users are defined:
- root: The standard system administrator user.
- local_admin: Administrative OS user for maintenance, in case the LDAP is not accessible. By default, SSH and Web UI access are blocked, and only the machine's console is accessible.
These are the basic login rules:
- The root user can never log in directly.
- The admin user can connect to the CLI Manager, locally or through SSH.
- A root shell can be accessed from within the admin CLI Manager.
Enhanced Logging
The logging capabilities are enhanced for appliance hardening. In addition to the standard OS logs or syslogs that are available by default, many other operations are logged as well.
Logs that are considered important are sent to the Protegrity ESA logging facility, which can be local or remote. This means that in addition to the standard syslog repository, Protegrity provides a secured repository for important system logs.
The following events are among the logs that are escalated to the ESA logging facility:
- System startup logs
- Protegrity product or service is started or stopped
- System backup and restore operations
- High Availability events
- User logins
- Configuration changes
Configuring user limits
In Linux, a user utilizes the system resources to perform different operations. When a user with minimal privileges runs operations that consume most of the system resources, it can result in the unavailability of resources for other users, which amounts to a Denial-of-Service (DoS) attack on the system. To mitigate this attack, you can restrict how users or groups utilize the system resources. For Protegrity appliances, using the ulimit functionality, you can limit the number of processes that a user can create. The ulimit functionality cannot be applied to usernames that contain the space character.
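As a minimal sketch of how such a per-user process limit is typically expressed on a Linux system (the user name and limit values below are illustrative, not Protegrity defaults):

```
# Illustrative /etc/security/limits.conf entries: cap the number of processes (nproc)
# that the user "appuser" may create
#   appuser  soft  nproc  200
#   appuser  hard  nproc  250

# A logged-in user can check the process limit that applies to their session
ulimit -u
```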
4.10.1 - Open listening ports
The ports in a network are communication channels which information flows through, from one system to another. This section provides the list of ports that must be configured in your environment to access the features and services on the Protegrity appliances.
Ports for accessing ESA
The following ports must be configured for the system users to access ESA.
Port Number | Protocol | Source | Destination | NIC | Description |
---|---|---|---|---|---|
22 | TCP | System User | ESA | Management NIC (ethMNG) | Access to CLI Manager |
443 | TCP | System User | ESA | Management NIC (ethMNG) | Access to Web UI for Security Officer or ESA administrator |
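As a quick, generic reachability check from a system user's workstation (the host name below is a placeholder for your ESA), standard TCP probes can confirm that the ports are open:

```
# Probe the CLI Manager (SSH) and Web UI (HTTPS) ports on the ESA
nc -zv esa.example.com 22
nc -zv esa.example.com 443
```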
Ports for accessing Insight
The following ports must be configured for the system users to access Insight.
Port Number | Protocol | Source | Destination | NIC | Description |
---|---|---|---|---|---|
22 | TCP | System User | Insight | Management NIC (ethMNG) | Access to CLI Manager |
443 | TCP | System User | Insight | Management NIC (ethMNG) | Access to Web UI for Security Officer or Audit Store administrator |
Ports for accessing Protectors
The following ports must be configured between the ESA and non-appliance-based protectors, such as Big Data Protector (BDP), Application Protector (AP), and so on.
Port Number | Protocol | Source | Destination | NIC | Description | Notes (If any) |
---|---|---|---|---|---|---|
8443 | TCP | Non-appliance-based Protectors, such as Big Data Protector (BDP), Application Protector (AP), z/OS, and so on. | ESA | Management NIC (ethMNG) | | |
25400 | TCP | Non-appliance-based Protectors, such as Big Data Protector (BDP), Application Protector (AP), z/OS, and so on. | Resilient Package Proxy (RPP) in the ESA | Management NIC (ethMNG) | | The protectors need to access this port. Ensure that the firewall of the customer is not blocking this port. |
6379 | TCP | ESA | BDP Lead Node | Management NIC (ethMNG) | Communication between ESA and BDP lead node. | If HDFSFP is used, this port must be opened. Starting from the Big Data Protector 7.2.0 release, the HDFS File Protector (HDFSFP) is deprecated. The HDFSFP-related sections are retained to ensure coverage for using an older version of Big Data Protector with the ESA 7.2.0. If a port other than 6379 is configured while installing BDP, ensure that the configured port is open. |
9200 | TCP | Log Forwarder | Insight | Management NIC (ethMNG) of ESA | To send audit logs received from the Log Server and forward them to the ESA/Insight. | |
Ports for ESA on TAC
The following ports must be configured for the ESA appliances in a Trusted Appliances Cluster (TAC).
Port Number | Protocol | Source | Destination | NIC | Description | Notes (If any) |
22 | TCP | Primary ESA | Secondary ESA | Management NIC (ethMNG) | Communication in TAC | |
22 | TCP | Secondary ESA | Primary ESA | Management NIC (ethMNG) | Communication in TAC | |
443 | TCP | Primary ESA | Secondary ESA | Management NIC (ethMNG) | Communication in TAC | |
443 | TCP | Secondary ESA | Primary ESA | Management NIC (ethMNG) | Communication in TAC | |
443 | TCPing | Primary ESA | Secondary ESA | Management NIC (ethMNG) | Communication in TAC | Used for joining a cluster. |
443 | TCPing | Secondary ESA | Primary ESA | Management NIC (ethMNG) | Communication in TAC | Used for joining a cluster. |
10100 | UDP | Primary ESA | Secondary ESA | Management NIC (ethMNG) | Communication in TAC | This port is optional. If the appliance heartbeat services are stopped, this port can be disabled. |
10100 | UDP | Secondary ESA | Primary ESA | Management NIC (ethMNG) | Communication in TAC | This port is optional. If the appliance heartbeat services are stopped, this port can be disabled. |
8300 | TCP | Primary ESA | Secondary ESA | Management NIC (ethMNG) | Used by servers to handle incoming requests. | This is used by servers to handle incoming requests from other agents. |
8300 | TCP | Secondary ESA | Primary ESA | Management NIC (ethMNG) | Handle incoming requests | This is used by servers to handle incoming requests from other agents. |
8301 | TCP and UDP | Primary ESA | Secondary ESA | Management NIC (ethMNG) | Gossip on LAN. | This is used to handle gossip in the LAN. Required by all agents. |
8301 | TCP and UDP | Secondary ESA | Primary ESA | Management NIC (ethMNG) | Gossip on LAN. | This is used to handle gossip in the LAN. Required by all agents. |
8302 | TCP and UDP | Primary ESA | Secondary ESA | Management NIC (ethMNG) | Gossip on WAN. | This is used by servers to gossip over the WAN, to other servers. As of Consul 0.8 the WAN join flooding feature requires the Serf WAN port (TCP/UDP) to be listening on both WAN and LAN interfaces. |
8302 | TCP and UDP | Secondary ESA | Primary ESA | Management NIC (ethMNG) | Gossip on WAN. | This is used by servers to gossip over the WAN, to other servers. As of Consul 0.8 the WAN join flooding feature requires the Serf WAN port (TCP/UDP) to be listening on both WAN and LAN interfaces. |
8600 | TCP and UDP | ESA | DSG | Management NIC (ethMNG) | Listens to the DNS server port. | Used to resolve DNS queries. |
8600 | TCP and UDP | DSG | ESA | Management NIC (ethMNG) | Listens to the DNS server port. | Used to resolve DNS queries. |
9000 | TCP and UDP | ESA | DSG | Management NIC (ethMNG) | Checks local certificates. | If your TAC utilizes Consul services, you must enable this port. |
9000 | TCP and UDP | DSG | ESA | Management NIC (ethMNG) | Checks local certificates. | If your TAC utilizes Consul services, you must enable this port. |
Additional Ports
Based on the firewall rules and network infrastructure of your organization, you must open ports for the services listed in the following table.
Port Number | Protocol | Source | Destination | NIC | Description | Notes (If any) |
123 | UDP | ESA | Time servers | Management NIC (ethMNG) of ESA | NTP Time Sync Port | This port can be configured based on the enterprise network policies or according to your use case. |
389 | TCP | ESA | Active Directory server | Management NIC (ethMNG) of ESA | Authentication for External AD and synchronization with External Groups. | This port can be configured based on the enterprise network policies or according to your use case. |
389 | TCP | ESA | Active Directory server | Management NIC (ethMNG) of ESA | Synchronization with External AD Groups for policy users. | This port can be configured based on the enterprise network policies or according to your use case. |
636 | TCP | ESA | Active Directory server | Management NIC (ethMNG) of ESA | Authentication for External AD and synchronization with External Groups. | This port is for LDAPS. It can be configured based on the enterprise network policies or according to your use case. |
636 | TCP | ESA | Active Directory server | Management NIC (ethMNG) of ESA | Synchronization with External AD Groups for policy users. | This port is for LDAPS. It can be configured based on the enterprise network policies or according to your use case. |
1812 | TCP | ESA | RADIUS server | Management NIC (ethMNG) of ESA | Authentication with RADIUS server. | This port can be configured based on the enterprise network policies or according to your use case. |
514 | UDP | ESA | Syslog servers | Management NIC (ethMNG) of ESA | Storing logs | This port can be configured based on the enterprise network policies or according to your use case. |
FutureX (9111) | TCP | ESA | HSM server | Management NIC (ethMNG) of ESA | HSM communication | This port can be configured based on the enterprise network policies or according to your use case. |
Safenet (1792) | TCP | ESA | HSM server | Management NIC (ethMNG) of ESA | HSM communication | This port must be opened and configured based on the enterprise network policies or according to your use case. |
nCipher non-privileged port (8000) | TCP | ESA | HSM server | Management NIC (ethMNG) of ESA | HSM communication | This port must be opened and configured based on the enterprise network policies or according to your use case. |
nCipher privileged port (8001) | TCP | ESA | HSM server | Management NIC (ethMNG) of ESA | HSM communication | This port must be opened and configured based on the enterprise network policies or according to your use case. |
Utimaco (288) | TCP | ESA | HSM server | Management NIC (ethMNG) of ESA | HSM communication | This port must be opened and configured based on the enterprise network policies or according to your use case. |
Ports for Users
If you are utilizing the DSG appliance, the following ports must be configured in your environment.
Port Number | Protocol | Source | Destination | NIC | Description |
22 | TCP | System User | DSG | Management NIC (ethMNG) | Access to CLI Manager. |
443 | TCP | System User | DSG | Management NIC (ethMNG) | Access to Web UI. |
Ports for Communication with ESA
The following ports must be configured for communication between the DSG and the ESA.
Port Number | Protocol | Source | Destination | NIC | Description | Notes (If any) |
22 | TCP | ESA | DSG | Management NIC (ethMNG) | | |
443 | TCP | ESA | DSG | Management NIC (ethMNG) | Communication in TAC | |
443 | TCP | DSG | ESA and Virtual IP address of ESA | Management NIC (ethMNG) | Downloading certificates from ESA | |
8443 | TCP | DSG | ESA and Virtual IP address of ESA | Management NIC (ethMNG) | | |
389 | TCP | DSG | Virtual IP address of ESA | Management NIC (ethMNG) | Authentication and authorization by ESA | |
5671 | TCP | DSG | ESA | Management NIC (ethMNG) | Messages sent from DSG to ESA | This port is required to support backward compatibility, where ESA v7.2.1 communicates with the earlier versions of appliances other than ESA. For example, port 5671 is required for user notifications from a DSG system to appear on the ESA v7.2.1 Dashboard. |
10100 | UDP | DSG | ESA | Management NIC (ethMNG) | | This port is optional. If the appliance heartbeat services are stopped, this port can be disabled. |
DSG Ports for Communication in TAC
The following ports must also be configured when the DSG is configured in a TAC.
Port Number | Protocol | Source | Destination | NIC | Description | Notes (If any) |
22 | TCP | DSG | ESA | Management NIC (ethMNG) | Communication in TAC | |
8585 | TCP | ESA | DSG | Management NIC (ethMNG) | Cloud Gateway cluster | |
443 | TCP | ESA | DSG | Management NIC (ethMNG) | Communication in TAC | |
10100 | UDP | ESA | DSG | Management NIC (ethMNG) | Communication in TAC | This port is optional. If the Appliance Heartbeat services are stopped, this port can be disabled. |
10100 | UDP | DSG | ESA | Management NIC (ethMNG) | | This port is optional. If the Appliance Heartbeat services are stopped, this port can be disabled. |
10100 | UDP | DSG | DSG | Management NIC (ethMNG) | Communication in TAC | This port is optional. |
8300 | TCP | ESA | DSG | Management NIC (ethMNG) | Used by servers to handle incoming request. | This is used by servers to handle incoming requests from other agents. |
8300 | TCP | DSG | ESA | Management NIC (ethMNG) | Handle incoming requests | This is used by servers to handle incoming requests from other agents. |
8300 | TCP | DSG | DSG | Management NIC (ethMNG) | Handle incoming requests | This is used by servers to handle incoming requests from other agents. |
8301 | TCP and UDP | ESA | DSG | Management NIC (ethMNG) | Gossip on LAN. | This is used to handle gossip in the LAN. Required by all agents. |
8301 | TCP and UDP | DSG | ESA | Management NIC (ethMNG) | Gossip on LAN. | This is used to handle gossip in the LAN. Required by all agents. |
8301 | TCP and UDP | DSG | DSG | Management NIC (ethMNG) | Gossip on LAN. | This is used to handle gossip in the LAN. Required by all agents. |
8302 | TCP and UDP | ESA | DSG | Management NIC (ethMNG) | Gossip on WAN. | This is used by servers to gossip over the WAN, to other servers. As of Consul 0.8 the WAN join flooding feature requires the Serf WAN port (TCP/UDP) to be listening on both WAN and LAN interfaces. |
8302 | TCP and UDP | DSG | ESA | Management NIC (ethMNG) | Gossip on WAN. | This is used by servers to gossip over the WAN, to other servers. As of Consul 0.8 the WAN join flooding feature requires the Serf WAN port (TCP/UDP) to be listening on both WAN and LAN interfaces. |
8302 | TCP and UDP | DSG | DSG | Management NIC (ethMNG) | Gossip on WAN. | This is used by servers to gossip over the WAN, to other servers. As of Consul 0.8 the WAN join flooding feature requires the Serf WAN port (TCP/UDP) to be listening on both WAN and LAN interfaces. |
Additional Ports for DSG
In DSG, service NICs are not assigned a specific port number. You can configure a port number as per your requirements.
Based on the firewall rules and network infrastructure of your organization, you must open ports for the services listed in the following table.
Port Number | Protocol | Source | Destination | NIC | Description | Notes (If any) |
123 | UDP | DSG | Time servers | Management NIC (ethMNG) of ESA | NTP Time Sync Port | This port can be configured based on the enterprise network policies or according to your use case. |
514 | UDP | DSG | Syslog servers | Management NIC (ethMNG) of ESA | Storing logs | This port can be configured based on the enterprise network policies or according to your use case. |
N/A* | N/A* | DSG | Applications/Systems | Service NIC (ethSRV) of DSG | Enabling communication for DSG with different applications in the organization. | This port can be configured based on the enterprise network policies or according to your use case. |
N/A* | N/A* | Applications/System | DSG | Service NIC (ethSRV) of DSG | Enabling communication for DSG with different applications in the organization. | This port can be configured based on the enterprise network policies or according to your use case. |
Ports for the Internet
The following ports must be configured on ESA for communication with the Internet.
Port Number | Protocol | Source | Destination | NIC | Description |
80 | TCP | ESA | ClamAV Database | Management NIC (ethMNG) of ESA | Updating the Antivirus database on ESA. |
Recommended Ports for Strengthening Firewall Rules
The following ports are recommended for strengthening the firewall configurations.
Port Number | Protocol | Source | Destination | NIC | Description |
67 | UDP | Appliance/System | DHCP server | Management NIC (ethMNG) | Allows server requests from the DHCP server. |
68 | UDP | Appliance/System | DHCP server | Management NIC (ethMNG) | Allows client requests on the DHCP server. |
161 | UDP | ESA/DSG | SNMP | Management NIC (ethMNG) | Allows SNMP requests. |
10161 | TCP and UDP | ESA/DSG | SNMP | Management NIC (ethMNG) | Allows SNMP requests over DTLS. |
Insight Ports
The following ports must be configured for communication between the ESA and Insight.
Port Number | Protocol | Source | Destination | NIC | Description | Notes (If any) |
9200 | TCP | ESA | ESA | Management NIC (ethMNG) of ESA / Insight | Audit Store REST communication. | This port can be configured based on the enterprise network policies or according to your use case. |
9300 | TCP | ESA | ESA | Management NIC (ethMNG) of ESA / Insight | Internode communication between the Audit Store nodes. | This port can be configured based on the enterprise network policies or according to your use case. |
24224 | UDP | ESA | ESA | Management NIC (ethMNG) of ESA / Insight | Communication between td-agent and Audit Store. | This port can be configured according to your use case when forwarding logs to an external Security information and event management (SIEM). |
24284 | TCP | Protector | ESA | Management NIC (ethMNG) of ESA / Insight | Communication between protector and td-agent . | This port can be configured according to your use case when forwarding logs to an external Security information and event management (SIEM) over TLS. |
4.11 - VMware tools in appliances
The VMware tools are used to access the utilities that enable you to monitor and improve management of the virtual machines that are part of your environment. When you install or upgrade your appliance, the VMware tools are automatically installed.
4.12 - Increasing the Appliance Disk Size
If you need to increase the total disk size of the Appliance, then you can add additional hard disks to the Appliance. The Appliance refers to the added hard disks as logical volumes, or partitions, which offer additional disk capacity.
As required, partitions can be added, removed, or moved from one hard disk to another. It is possible to create smaller partitions on a hard disk and combine multiple hard disks to form a single large partition.
Configuration of Appliance for Adding More Disks
Hard disks or volumes can be added to the appliance at two different times:
Add the hard disk during installation of the Appliance.
For more information about adding and configuring the hard disk, refer to the Protegrity Installation Guide.
Add the hard disks later when required.
Steps have been separately provided for a single hard disk installation and more than one hard disk installation later in this section.
Installation of Additional Hard Disks
Ensure that the Appliance is installed and working and the hard disks to be added are readily available.
To install one or more hard disks:
If the Appliance is working, then log out of the Appliance and turn it off.
Add the required hard disk.
Turn on the appliance.
Login to the CLI console with admin credentials.
Navigate to Tools > Disk Management.
Search for the new device name, for example, /dev/sda, and note down the capacity and the partitions in the device.
Select Refresh.
The system recognizes any added hard disks.
Select Extend to add more hard disks to the existing disk size.
Select the newly added hard disk.
Click Extend again to confirm that the newly added hard disk has been added to the Appliance disk size.
A dialog appears asking for confirmation with the following message.
Warning! All data on the /dev/sda will be removed! Press YES to continue…
Select Continue.
The newly added hard disk is added to the existing disk size of the Appliance.
Navigate to Tools > Disk Management.
The following screen appears confirming addition of the hard disk to the Appliance disk size.
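If you want to double-check the new capacity from the OS Console as well, generic Linux commands can be used; this is an optional verification and not part of the Disk Management procedure.

```
# Confirm that the appliance sees the additional capacity
lsblk
df -h
```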
Rolling Back Addition of New Hard Disks
If the Appliance has been upgraded, then rolling back to the setup of the previous version is possible. The roll back option is unavailable if you have upgraded your system to Appliance v8.0.0 and have not finalized the upgrade. When you finalize the upgrade, you confirm that the system is functional. Only then does the roll back feature become available.
For more information about upgrade, refer to the Protegrity Upgrade Guide.
4.13 - Mandatory Access Control
Mandatory Access Control (MAC) is a security approach that allows or denies an individual access to resources in a system. With MAC, you can set polices that can be enforced on the resources. The policies are defined by the administrator and cannot be overridden by other users.
Among many implementations of MAC, Application Armor (AppArmor) is a CIS recommended Linux security module that protects the operating system and its applications from threats. It implements MAC for constraining the ability of a process or user on operating system resources.
AppArmor allows you to define policies for protecting the executable files and directories present in the system. It applies these policies through profiles. Profiles are groups where restrictions on specific actions for files or directories are defined. The following are the two modes of applying policies on profiles:
Enforce: The profiles are monitored to either permit or deny a specific action.
Complain: The profiles are monitored, but actions are not restricted. Instead, actions are logged in the audit events.
For more information about AppArmor, refer to http://wiki.apparmor.net
AppArmor in Protegrity appliances
AppArmor increases security by restricting actions on the executable files in the system. It is added as another layer of security to protect custom scripts and prevent information leaks in case of a security breach. On Protegrity appliances, AppArmor is enabled to protect the different OS features, such as antivirus, firewall, scheduled tasks, trusted appliances cluster, proxy authentication, and so on. Separate profiles are created for appliance-specific features. For more information about the list of profiles, refer to Viewing profiles. In the event of a security breach on the appliances, any attempt to modify the protected profiles is blocked by AppArmor. The logs for the denials are generated and appear under the system logs, where they can be analyzed.
After AppArmor is enabled, all profiles that are defined in it are protected. Although it is enabled, if a new executable script is introduced in the appliance, AppArmor does not automatically protect this script. For every new script or file to be protected, a separate AppArmor profile must be created and permissions must be assigned to it.
The following sections describe the various tasks that you can perform on the Protegrity appliances using AppArmor.
4.13.1 - Working with profiles
Creating a Profile
In addition to the existing profiles in the appliances, AppArmor allows creating profiles for other executable files present in the system. Using the aa-genprof command, you can create a profile to protect a file. When this command is run, AppArmor loads that file in complain mode and provides an option to analyze all the activities that might arise. It learns about all the activities that are present in the file and suggests the permissions that can be applied on them. After the permissions are assigned to the file, the profile is created and set in the enforce mode.
As an example, consider an executable file apparmor_example.sh in your system for which you want to create a profile. The script is copied in the /etc/opt/ directory and contains the following actions:
- Creating a file sample1.txt in the /etc/opt/ directory
- Changing permissions for the sample1.txt file
- Removing sample1.txt file
Ensure that apparmor_example.sh file has a 755 permission set to it.
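For example, the permission can be set from the OS Console with a standard chmod command:

```
# Make the example script executable (owner: rwx, group/others: r-x)
chmod 755 /etc/opt/apparmor_example.sh
```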
Generating a profile for a file
The following steps describe how to generate a profile for the apparmor_example.sh file.
Perform the following steps to create a profile.
Login to the CLI Manager of the appliance.
Navigate to Administration > OS Console.
Navigate to the /etc/opt directory.
Run the following command to view the commands in the apparmor_example.sh file.
cat apparmor_example.sh
The following commands appear.
#!/bin/bash
touch /etc/opt/sample1.txt
chmod 400 /etc/opt/sample1.txt
rm /etc/opt/sample1.txt
Replicate the SSH session. Navigate to the OS Console and run the following command:
aa-genprof /etc/opt/apparmor_example.sh
The following screen appears.
Switch to the first SSH session and run the following script.
./apparmor_example.sh
The commands are run successfully.
Switch to the second SSH session. Type S to scan and create a profile for the apparmor_example.sh file.
AppArmor reads the first command. It provides different permissions based on what the command does, and assigns a severity to it.
Profile: /etc/opt/apparmor_example.sh
Execute: /bin/touch
Severity: unknown
(I)nherit / (C)hild / (N)amed / (X) ix On / (D)eny / Abo(r)t / (F)inish
Type I to assign the inherit permissions.
After selecting the option for the first command, AppArmor reads each action and provides a list of permissions for each action. Type the required character that needs to be assigned for the permissions.
Type F to finish the scanning and S to save the change to the profile.
The following message appears.
Setting /etc/opt/apparmor_example.sh to enforce mode.
Reloaded AppArmor profiles in enforce mode.
Please consider contributing your new profile! See the following wiki page for more information:
http://wiki.apparmor.net/index.php/Profiles
Finished generating profile for /etc/opt/apparmor_example.sh.
Restart the AppArmor service using the following command.
/etc/init.d/apparmor restart
Navigate to the /etc/apparmor.d directory to view the profile.
The profile appears as follows.
etc.opt.apparmor_example.sh
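The generated profile content is not reproduced in this guide; as a rough, illustrative sketch only (the actual rules depend on the permissions you selected during the scan), a profile for the example script could look similar to the following:

```
# Illustrative sketch of /etc/apparmor.d/etc.opt.apparmor_example.sh; not the exact generated file
#include <tunables/global>

/etc/opt/apparmor_example.sh {
  #include <abstractions/base>
  #include <abstractions/bash>

  /bin/bash ix,              # interpreter runs with the inherited profile
  /bin/touch ix,             # commands granted (I)nherit during the scan
  /bin/chmod ix,
  /bin/rm ix,
  /etc/opt/sample1.txt rw,   # file created, re-permissioned, and removed by the script
}
```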
Setting a Profile on Complain Mode
To ease the restrictions applied to a profile, you can apply the complain mode to it. AppArmor allows the actions to be performed, but logs all the activities that occur for that profile. AppArmor provides the aa-complain command to perform this task. The following task describes the steps to set the apparmor_example.sh file in the complain mode.
Perform the following steps to set a profile in complain mode.
Login to the CLI Manager of the appliance.
Navigate to Administration > OS Console.
Run the complain command as follows:
aa-complain /etc/apparmor.d/etc.opt.apparmor_example.sh
Run the ./apparmor_example.sh script.
View the logs in the /var/log/syslog file.
Even though an event has a certain restriction, the logs show that AppArmor allowed it to occur and logged it for the apparmor_example.sh script.
Setting a Profile on Enforce Mode
When the appliance is installed in your system, the enforce mode is applied to the profiles by default. If you want to set a profile to enforce mode, AppArmor provides the aa-enforce command to perform this task. The following task describes the steps to set the apparmor_example.sh file in enforce mode.
Perform the following steps to set a profile in enforce mode.
Login to the CLI Manager of the appliance.
Navigate to Administration > OS Console.
Run the enforce command as follows:
aa-enforce /etc/apparmor.d/etc.opt.apparmor_example.sh
Run the ./apparmor_example.sh script.
Based on the permissions that are assigned while creating the profile for the script, the following message is displayed on the screen.
The Deny permission is assigned to all the commands in this script.
Modifying an Existing Profile
In an appliance, Protegrity provides a default set of profiles for appliance-specific features. These include profiles for Two-factor authentication, Antivirus, TAC, Networking, and so on. The profiles contain appropriate permissions that require the feature to run smoothly without compromising its security. However, access-denial logs for some permissions may appear when these features are run. This calls for modifying the profile of a feature by appending the permissions to it.
Consider the usr.sbin.apache2 profile that is related to the networking services. When this feature is executed, based on the permissions that are defined, AppArmor allows the required operations to run. If it encounters a new action on this profile, it generates a Denied error and halts the task from proceeding.
For example, the following log appears for the usr.sbin.apache2 profile after the host name of the system is changed from the Networking screen on the CLI Manager.
type=AVC msg=audit(1593004864.290:2492): apparmor="DENIED" operation="exec" profile="/usr/sbin/apache2" name="/sbin/ethtool" pid=32518 comm="sh" requested_mask="x" denied_mask="x" fsuid=0 ouid=0FSUID="root" OUID="root"
As described in the log, AppArmor denied an execute permission for this profile. Every time you change the host name from the CLI manager, AppArmor will not permit that operation to be performed. This can be mitigated by modifying the profile from the /etc/apparmor.d/custom directory. Thus, the additional permission must be added to the usr.sbin.apache2 profile that is present in the /etc/apparmor.d/custom directory. This ensures that the new permissions to the profile are considered and existing permissions are not overwritten when the feature is executed. If you get a permission error log on the Appliance Logs screen, then perform the following steps to update the usr.sbin.apache2 profile with a new permission.
Updating profile permissions
Perform the steps in the instructions below to update profile permissions.
Those steps are also applicable for permission denial logs that appear for other default profiles provided by Protegrity. Based on the permissions that are denied, update the respective profiles with the new operations.
To update profile permissions:
On the CLI Manager, navigate to Administration > OS Console.
Navigate to the /etc/apparmor.d/custom directory.
Open the required profile on the editor.
For example, open the usr.sbin.apache2 profile in the editor.
Add the following permission.
<Value in the name parameter of the denial log> rix,
For example, the command for usr.sbin.apache2 denial log is as follows.
/sbin/ethtool rix,
Save the changes and exit the editor.
Run the following command to update the changes to the AppArmor profile.
apparmor_parser -r /etc/apparmor.d/<Profile>
For example,
apparmor_parser -r /etc/apparmor.d/usr.sbin.apache2
Now, change the host name of the system from the CLI Manager. The denial logs are not observed.
Viewing Status of Profiles
Using the aa-status command, AppArmor loads and displays all the profiles that are configured in the system. It displays all the profiles that are in enforce and complain modes.
Perform the following steps to view the status for the profiles.
Login to the CLI Manager of the appliance.
Navigate to Administration > OS Console.
Run the status command as follows:
aa-status
The screen with the list of all profiles appears.
4.13.2 - Analyzing events
AppArmor provides an interactive tool to analyze the events occurring in the system. The aa-logprof utility scans the logs for the events in your system and provides a set of actions for modifying a profile.
Consider the apparmor_example.sh script that is in the enforce mode. After a certain period of time, you modify the script and insert a command to list all the files in the directory. When you run the apparmor_example.sh script, a Permission denied error appears on the screen. As a new command is added to this script and permissions are not assigned to the updated entry, AppArmor does not allow the script to run. The permissions must be assigned before the script is executed. To evaluate the permissions that can be applied to the new entries, you can view the logs for details. On the appliance CLI Manager, the logs are available in the audit.log file in the /var/log/ directory. The following figure displays the logs that appear for the apparmor_example.sh script.
In the figure, the logs describe the profile for apparmor_example.sh. The logs contain the following information:
- AppArmor has denied an open operation for the profile that contains a new command.
- The script does not have access to the /dev/tty device with the requested_mask=“r” permission, as it is not defined for the new command.
Thus, the logs provide insight into the different operations that occur when the script is executed. After analyzing the logs and evaluating the permissions, you can run the aa-logprof command to update the permissions for the script.
The changes that are applied on the profiles are audited and logs are generated for it. For more information about the audit logs, refer to System Auditing.
Important: It is not recommended to use the aa-logprof command for profiles defined by Protegrity. If you want to modify an existing profile, refer to Modifying an Existing Profile.
Updating profile permissions
Perform the following steps to update profile permissions.
Login to the CLI Manager of the appliance.
Navigate to Administration > OS Console.
Run the aa-logprof command.
Reading log entries from /var/log/syslog.
Updating AppArmor profiles in /etc/apparmor.d.
Complain-mode changes:
Profile: /etc/opt/apparmor_examples.sh
Path: /bin/rm
Old Mode: r
New Mode: mr
Severity: unknown
[1 - /bin/rm mr,]
(A)llow / [(D)eny] / (I)gnore / (G)lob / Glob with (E)xtension / (N)ew / Audi(t) / Abo(r)t / (F)inish
Type the required permissions. Type F to finish scanning.
After the permissions are granted, the following screen appears.
= Changed Local Profiles =
The following local profiles were changed. Would you like to save them?
[1 - /etc/opt/apparmor_examples.sh]
(S)ave Changes / Save Selec(t)ed Profile / [(V)iew Changes] / View Changes b/w (C)lean profiles / Abo(r)t
Type S to save the changes.
Writing updated profile for /etc/opt/apparmor_examples.sh.
Navigate to the /etc/apparmor.d directory to view the profile.
4.13.3 - AppArmor permissions
The following table describes the different permissions that AppArmor lists when creating a profile or analyzing events.
Permission | Description |
---|---|
(I)nherit | Inherit the permissions from the parent profile. |
(A)llow | Allow access to a path. |
(I)gnore | Ignore the prompt. |
(D)eny | Deny access to a path. |
(N)ew | Create a new profile. |
(G)lob | Select a specific path or create a general rule using wild cards that match a broader set of paths. |
Glob with (E)xtension | Modify the original directory path while retaining the filename extension. |
(C)hild | Creates a rule in a sub-profile (child) within the parent profile. Rules must be generated separately for this child. |
Abo(r)t | Exit AppArmor without saving the changes. |
(F)inish | Finish scanning for the profile. |
(S)ave | Save the changes for the profile. |
4.13.4 - Troubleshooting for AppArmor
The following table describes solutions to issues that you might encounter while using AppArmor.
Issue | Reason | Solution |
---|---|---|
After you run the File Export or File Import operation in the appliance, the following message appears in the logs: type=AVC msg=audit(1594813145.658:7306): apparmor="DENIED" operation="exec" profile="/usr/sbin/apache2" name="/usr/lib/sftp-server" pid=58379 comm="bash" requested_mask="x" denied_mask="x" fsuid=0 ouid=0 FSUID="root" OUID="root" | | Perform the following steps: |
If a scheduler task containing a customized script is run, then the scheduled task is not executed and a denial message appears in the log. For example, if a task scheduler contains the /demo.sh script in the command line, the following message appears in the logs: type=AVC msg=audit(1598429205.615:35253): apparmor="DENIED" operation="exec" profile="/usr/sbin/apache2" name="/demo.sh" pid=32684 comm=".taskV5FLVl.tmp" requested_mask="x" denied_mask="x" fsuid=0 ouid=0 FSUID="root" OUID="root" | AppArmor restricts running any custom scripts from the scheduled task. | Perform the following steps: |
If you run the Put Files operation between two machines in a TAC, the following messages appear as logs in the source and target appliances. Source appliance: type=AVC msg=audit(1598288495.530:5168): apparmor="DENIED" operation="mknod" profile="/etc/opt/Cluster/cluster_helper" name="/dummyfilefortest.sh" pid=62621 comm="mv" requested_mask="c" denied_mask="c" fsuid=0 ouid=0 FSUID="root" OUID="root" Target appliance: type=AVC msg=audit(1598288495.950:2116): apparmor="DENIED" operation="chown" profile="/etc/opt/Cluster/cluster_helper" name="/dummyfilefortest.sh" pid=17413 comm="chown" requested_mask="w" denied_mask="w" fsuid=0 ouid=0 FSUID="root" OUID="root" | | Perform the following steps: |
4.14 - Accessing Appliances using Single Sign-On (SSO)
What is SSO?
Single Sign-on (SSO) is a feature that enables users to authenticate to multiple applications by logging in to a system only once. It provides federated access, where a ticket or token is trusted across multiple applications in a system. Users log in using their credentials and are authenticated through authentication servers, such as Active Directory (AD) or LDAP, that validate the credentials. After successful authentication, a ticket is generated for accessing different services.
Consider an enterprise user who has access to multiple applications, each offering a variety of services. The applications might each require the user to provide a username and password. The more applications the user accesses, the more credentials they are asked to provide and must remember. To avoid this, the Single Sign-On (SSO) mechanism can be used to facilitate access to multiple applications by logging in to the system only once.
4.14.1 - What is Kerberos
One of the protocols that SSO uses for authentication is Kerberos. Kerberos is an authentication protocol that uses secret key cryptography for secure communication over untrusted networks. Kerberos is a protocol used in a client-server architecture, where the client and server verify each other’s identities. The messages sent between the client and server are encrypted, thus preventing attackers from snooping.
For more information about Kerberos, refer to https://web.mit.edu/kerberos/
Key Entities in Kerberos
There are a few key entities involved in Kerberos communication.
- Key Distribution Center (KDC): Third-party system or service that distributes tickets.
- Authentication Server (AS): Server that validates the user logging into a system.
- Ticket Granting Server (TGS): Server that grants clients a ticket to access the services.
- Encrypted Keys: Symmetric keys that are shared between entities such as the authentication server, the TGS, and the main server.
- Simple and Protected GSS-API Negotiation (SPNEGO): The Kerberos SPNEGO mechanism is used in a client-server architecture for negotiating an authentication protocol in an HTTP communication. This mechanism is utilized when the client and the server want to authenticate each other, but are not sure about the authentication protocols that are supported by each of them.
- Service Principal Name (SPN): SPN represents a service on a network. Every service must be defined in the Kerberos database.
- Keytab File: A file that contains an Active Directory account and the keys for decrypting Kerberos tickets. Using the keytab file, you can authenticate remote systems without entering a password.
4.14.1.1 - Implementing Kerberos SSO for Protegrity Appliances
In Protegrity appliances, you can use the Kerberos SSO mechanism to log in to the appliance. The user logs in to the system with their domain credentials to access appliances such as the ESA or the DSG. The appliance validates the user and, on successful validation, allows the user access to the appliance. To use the SSO mechanism, you must configure certain settings on different entities, such as the AD, the Web browser, and the ESA appliance. The following sections describe a step-by-step approach for setting up SSO.
Protegrity supported directory services
For Protegrity appliances, only Microsoft AD is supported.
4.14.1.1.1 - Prerequisites
For implementing Kerberos SSO, ensure that the following prerequisites are considered:
- The appliances, such as, the ESA, or DSG are up and running.
- The AD is configured and running.
- The IP addresses of the appliances are resolved to a Fully Qualified Domain Name (FQDN).
4.14.1.1.2 - Setting up Kerberos SSO
This section describes the different tasks that an administrative user must perform for enabling the Kerberos SSO feature on the Protegrity appliances.
Order | Platform | Step | Reference |
---|---|---|---|
1 | Appliance Web UI | On the appliance Web UI, import the domain users from the AD to the internal LDAP of the appliance. Assign SSO Login permissions to the required user role. | Importing Users and assigning role |
2 | Active Directory | On the AD, map the Kerberos SPN to a user account. | Configuring SPN |
3 | Active Directory | On the AD, generate a keytab file. | Generating keytab file |
4 | Appliance Web UI | On the appliance Web UI, upload the generated keytab file. | Uploading keytab file |
5 | Web Browser | On the user’s machine, configure the Web browsers to handle SPNEGO negotiation. | Configuring browsers |
Importing Users and Assigning Role
In the initial steps for setting up Kerberos SSO, a user with administrative privileges must import users from an AD to the appliance. After importing, assign the required permissions to the users for logging in with SSO.
To import users and assign roles:
On the appliance Web UI, navigate to Settings > Users > Proxy Authentication.
Enter the required parameters for connecting to the AD.
For more information about setting AD parameters, refer here.
Navigate to the Roles tab.
Create a role or modify an existing role.
Select the SSO Login permission check box for the role and click Save.
If you are configuring SSO on the DSG, then ensure the user is also granted the required cloud gateway permissions.
Navigate to the User Management tab.
Click Import Users to import the required users to the internal LDAP.
For more information about importing users, refer here.
Assign the role with the SSO Login permissions to the required users.
Creating Service Principal Name (SPN)
A Service Principal Name (SPN) is an entity that represents a service mapped to an instance on a network. For Kerberos-based authentication, the SPN must be configured in Active Directory (AD); for Protegrity appliances, only Microsoft AD is supported. The SPN is registered with the AD so that the service can associate itself with the AD for authentication requests.
For Protegrity, the instance is represented by appliances such as the ESA or the DSG. The appliance uses SPNEGO authentication over the HTTP service to authenticate users for SSO. The SPN is configured for the appliances in the following format.
service/instance@domain
Ensure an SPN is created for every ESA appliance involved in the Kerberos SSO implementation.
Example SPN creation
Consider an appliance with host name esa1.protegrity.com on the domain protegrity.com. The SPN must be set in the AD as HTTP/esa1.protegrity.com@protegrity.com.
The SPN of the appliance can be configured in the AD using the setspn command. Thus, to create the SPN for esa1.protegrity.com, run the following command.
setspn -A HTTP/esa1.protegrity.com@protegrity.com
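To verify the registration, you can list the SPNs that are associated with the mapped AD account. The account name svc-esa below is only an illustration; use the account to which the SPN was mapped.
setspn -L svc-esa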
Creating the Keytab File
The keytab is an encrypted file that contains the Kerberos principals and keys. It allows an entity to use a Kerberos service without being prompted for a password on every access. The keys in the keytab file are used to decrypt and authenticate every Kerberos service request.
For Protegrity appliances, a user's SSO authentication request from an appliance to the AD is validated using the keytab file. In this file, you map a user account to the SPN of the appliance. The keytab file is created using the ktpass command. The following is the syntax for this command:
ktpass -out <Location where to generate the keytab file> -princ HTTP/<SPN of the appliance> -mapUser <username> -mapOp set -pass <Password> -crypto All -pType KRB5_NT_PRINCIPAL
The following sample snippet describes the ktpass command for mapping a user in the keytab file. Consider an ESA appliance with host name esa1.protegrity.com on the domain protegrity.com. The SPN for the appliance is set as HTTP/esa1.protegrity.com@protegrity.com. Thus, to create a keytab file and map the user Tom, run the following command.
ktpass -out C:\esa1.keytab -princ HTTP/esa1.protegrity.com@protegrity.com -mapUser Tom@protegrity.com -mapOp set -pass Test@1234 -crypto All -pType KRB5_NT_PRINCIPAL
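Optionally, on a machine where the MIT Kerberos client utilities are installed, you can inspect the principals stored in the generated keytab before uploading it; the file name below is the one from the example above.
klist -k -t esa1.keytab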
Uploading Keytab File
After creating the keytab file from the AD, you must upload it on the appliance. You must upload the keytab file before enabling Kerberos SSO.
To upload the keytab file:
On the Appliance Web UI, navigate to Settings > Users > Single Sign-On.
The Single Sign On screen appears.
From the Keytab File field, upload the keytab file generated.
Click the Upload Keytab icon.
A confirmation message appears.
Select Ok.
Click the Delete icon to delete the keytab file. You can delete the keytab file only when the Kerberos for single sign-on (Spnego) option is disabled.
Under the Kerberos for single sign-on (Spnego) tab, click the Enable toggle switch to enable Kerberos SSO.
A confirmation message appears.
Select Ok.
A message Kerberos SSO was enabled successfully appears.
Configuring SPNEGO Authentication on the Web Browser
Before implementing Kerberos SSO for Protegrity appliances, you must ensure that the Web browsers are configured to perform SPNEGO authentication. The tasks in this section describe the configurations that must be performed on the Web Browsers. The recommended Web browsers and their versions are as follows:
- Google Chrome version 129.0.6668.58/59 (64-bit)
- Mozilla Firefox version 130.0.1 (64-bit) or higher
- Microsoft Edge version 128.0.2739.90 (64-bit)
The following sections describe the configurations on the Web browsers.
Configuring SPNEGO Authentication on Firefox
The following steps describe the configurations on Mozilla Firefox.
To configure on the Firefox Web browser:
Open Firefox on the system.
Enter about:config in the URL.
Type negotiate in the Search bar.
Double click on network.negotiate-auth.trusted-uris parameter.
Enter the FQDN of the appliance and exit the browser.
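For example, for the appliance used in this section, the preference would contain a value such as the following; multiple appliances can be listed as a comma-separated value.
network.negotiate-auth.trusted-uris = esa1.protegrity.com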
Configuring SPNEGO Authentication on Chrome
With Google Chrome, you must configure the whitelist of servers that Chrome will negotiate with. If you are using a Windows machine to log in to the appliances, then the configuration entered for other browsers is shared with Chrome, and you do not need to add a separate configuration.
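On platforms where Chrome does not inherit the system authentication settings, the allowed servers are typically supplied through an enterprise policy or a startup flag. The following invocation is an illustrative assumption and should be verified against the documentation for your Chrome version.
google-chrome --auth-server-allowlist="esa1.protegrity.com"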
4.14.1.1.3 - Logging to the Appliance
After configuring the required SSO settings, you can login to the appliance using Kerberos SSO.
To login to the appliance using SSO:
Open the Web browser and enter the FQDN of the ESA or DSG in the URL.
Click Sign in with Kerberos SSO.
The Dashboard of the ESA/DSG appliance appears.
4.14.1.1.4 - Scenarios for Implementing Kerberos SSO
This section describes the different scenarios for implementing Kerberos SSO.
Implementing Kerberos SSO on an Appliance Connected to an AD
This section describes the process of implementing Kerberos SSO when an appliance utilizes authentication services of the local LDAP.
You can also login to the appliance without SSO by providing valid user credentials.
Steps to configure Kerberos SSO with a Local LDAP
Consider an appliance for which you are configuring SSO. Ensure that you perform the following steps to implement it.
- Import users from an external directory and assign SSO permissions.
- Configure SPN for the appliance.
- Create and upload the keytab file on the appliance.
- Configure the browser to support SSO.
Logging in with Kerberos SSO
After configuring the required settings, the user enters the appliance domain name in the Web browser and clicks Sign in with SSO to access the appliance. On successful authentication, the Dashboard of the appliance appears.
Example process
The following figure illustrates the SSO process for appliances that utilize the local LDAP.
The user logs in to the domain with their credentials.
For example, a user, Tom, logs in to the domain abc.com as tom@abc.com and password *********.
Tom is authenticated on the AD. On successful authentication, he is logged in to the system.
For accessing the appliance, the user enters the FQDN of the appliance on the Web browser.
For example, esa1.protegrity.com.
If Tom wants to access the appliance using SSO, then he clicks Sign in with SSO on the Web browser.
A message is sent to the AD requesting a token for Tom to access the appliance.
The AD generates a SPNEGO token and provides it to Tom.
This SPNEGO token is then provided to the appliance to authenticate Tom.
The appliance performs the following checks.
- It receives the token and decrypts it. If the decryption is successful, then the token is valid.
- Retrieves the username from the token.
- Validates Tom with the internal LDAP.
- Retrieves the role for Tom and verifies that the role has the SSO Login permissions. After successfully validating the token and the role permissions, Tom can access the appliance.
Implementing Kerberos SSO on other Appliances Communicating with ESA
This section describes the process of implementing Kerberos SSO when an appliance utilizes authentication services of another appliance. Typically, the DSG depends on ESA for user management and LDAP connectivity. This section explains the steps that must be performed to implement SSO on the DSG.
Implementing Kerberos SSO on DSG
This section explains the process of SSO authentication between the ESA and the DSG. It also includes information about the order of set up to enable SSO authentication on the DSG.
The DSG depends on the ESA for user and access management. The DSG can leverage the users and user permissions that are defined in the ESA only if the DSG is set to communicate with the ESA.
The following figure illustrates the SSO process for appliances that utilize the LDAP of another appliance.
Example process
The user logs in to the system with their credentials.
For example, John logs in to the domain abc.com as john@abc.com with password *********. The user is authenticated on the AD. On successful authentication, the user is logged in to the system.
To access the DSG Web UI, John enters the FQDN of the DSG in the Web browser.
For example, dsg.protegrity.com.
If John wants to access the DSG Web UI using SSO, he clicks Sign in with SSO on the Web browser.
The username of John and the URL of the DSG are forwarded to the ESA.
The ESA sends the request to the AD for generating a SPNEGO token.
The AD generates a SPNEGO token to authenticate John and sends it to the ESA.
The ESA performs the following steps to validate John.
Receives the token and decrypts it. If the decryption is successful, then the token is valid.
Retrieves the username from the token.
Validates John with the internal LDAP.
Retrieves the role for John and verifies that the role has the SSO Login permission.
If the ESA encounters any error related to the role, username, or token, an error is displayed on the Web UI. For more information about the errors, refer to Troubleshooting.
On successful authentication, the ESA generates a service JWT.
The ESA sends this service JWT and the URL of the DSG to the Web browser.
The Web browser presents this JWT to the DSG for validation.
The DSG validates the JWT based on the secret key shared with ESA. On successful validation, John can login to the DSG Web UI.
Before You Begin:
Ensure that you complete the following steps to implement SSO on the DSG.
Ensure that the Set ESA Communication process is performed on the DSG for establishing communication with the ESA.
For more information about setting ESA communication, refer Setting up ESA Communication.
Import users from an external directory on the ESA and assign SSO and cloud gateway permissions.
Configure SPN for the ESA.
Enable Single Sign-on on the ESA.
Export the JWT settings to all the DSG nodes in the cluster.
Next Steps:
After ensuring that the prerequisites for SSO in the DSG implementation are completed, you must complete the configuration on the DSG Web UI.
For more information about completing the configuration, refer LDAP and SSO Configurations.
Exporting the JWT Settings to the DSG Nodes in the Cluster
As part of SSO implementation for the DSG, the JWT settings must be exported to all the DSG nodes that will be configured to use SSO authentication.
Ensure that the ESA, where SSO is enabled, and the DSG nodes are in a cluster.
To export the JWT settings:
Log in to the ESA Web UI.
Navigate to System > Backup & Restore.
On the Export, select the Cluster Export option, and click Start Wizard.
On the Data to import tab, select only Appliance JWT Configuration. Ensure that Appliance JWT Configuration is the only check box selected, and then click Next.
On the Source Cluster Nodes tab, select Create and Run a task now, and click Next.
On the Target Cluster Nodes tab, select all the DSG nodes where you want to export the JWT settings, and click Execute.
Implementing Kerberos SSO with a Load Balancer Setup
This section describes the process of implementing SSO with a Load Balancer that is setup between the appliances.
Steps to configure SSO in a load balancer setup
Consider two appliances, L1 and L2, that are configured behind a load balancer. Ensure that you perform the following steps to implement it.
- Import users from an external directory on the L1 and L2 and assign SSO login permissions.
- Ensure that the FQDN is resolved to the IP address of the load balancer.
- Configure SPN for the load balancer.
- Create and upload the keytab file on L1 and L2.
- Configure the browser to support SSO.
Logging in with SSO
After configuring the required settings, the user enters the FQDN of load balancer on the Web browser and clicks Sign in with Kerberos SSO to access it. On successful authentication, the Dashboard of the appliance appears.
4.14.1.1.5 - Viewing Logs
You can view the logs that are generated when the Kerberos SSO mechanism is used. The logs are generated for the following events:
- Uploading keytab file on the appliance
- Deleting the keytab file on the appliance
- User logging to the appliance through SSO
- Enabling or disabling SSO
Navigate to Logs > Appliance Logs to view the logs.
You can also navigate to the Discover screen to view the logs.
4.14.1.1.6 - Feature Limitations
This section covers some known limitations of the Kerberos SSO feature.
Trusted Appliances Cluster
The keytab file is specific to an SPN. A keytab file assigned to one appliance is not applicable to another appliance. Thus, if your appliance is in a TAC, it is recommended not to replicate the keytab file between appliances.
4.14.1.1.7 - Troubleshooting
This section describes the issues and their solutions while utilizing the Kerberos SSO mechanism.
Table: Kerberos SSO Troubleshooting
Issue | Reason | Solution |
---|---|---|
The following message appears while logging in with SSO: Login Failure: SPNEGO authentication is not supported on this client. | The browser is not configured to handle SPNEGO authentication. | Configure the browser to perform SPNEGO authentication. For more information about configuring the browser settings, refer Configuring browsers. |
The following message appears while logging in with SSO: Login Failure: Unauthorized to SSO Login. | | Ensure that the following points are considered. For more information about configuring the user role, refer Importing Users and assigning role. |
The following error appears while logging in with SSO: Login Failure: Please contact System Administrator | The JWT secret key is not the same between the appliances. | If an appliance is using an LDAP of another appliance for user authentication, then ensure that the JWT secret is shared between them. |
The following error appears while logging in with SSO: Login Failure: SSO authentication disabled | This error might occur when you are using the LDAP of another appliance for authentication. If SSO is disabled on the appliance that contains the LDAP information, this error message appears. | On the ESA Web UI, navigate to System > Settings > Users > Advanced and select the Enable SSO check box. |
When you are using an LDAP of another appliance for authentication and logging in using SSO, a Service not available message appears on the Web browser. | | Ensure the following: |
4.14.2 - What is SAML
About SAML
Security Assertion Markup Language (SAML) is an open standard for communication between an identity provider (IdP) and an application. It is a way to authenticate users in an IdP to access the application.
SAML SSO leverages SAML for seamless user authentication. It uses the XML format to transfer authentication data between the IdP and the application. Once users log in to the IdP, they can access multiple applications without providing their user credentials every time. For SAML SSO to function, both the IdP and the application must support the SAML standard.
Key Entities in SAML
There are a few key entities involved in SAML communication:
- Identity Provider (IdP): A service that manages user identities.
- Service Provider (SP): An entity connecting to the IdP for authenticating users.
- Metadata: A file containing the information required to connect an SP to an IdP.
Implementing SAML SSO for Protegrity Appliances
In Protegrity appliances, you can use the SAML SSO mechanism to log in to the appliance. To use this feature, you log in to an IdP, such as AWS, Azure, or GCP. After you are logged in to the IdP, you can access appliances such as the ESA or the DSG. The appliance validates the user and, on successful validation, allows the user access to the appliance. The following sections describe a step-by-step approach for setting up SAML SSO.
4.14.2.1 - Setting up SAML SSO
Prerequisites
For implementing SAML SSO, ensure that the following prerequisites are met:
- The SPs, such as, the ESA or the DSG are up and running.
- The users are available in IdPs, such as, AWS, Azure, or GCP.
- The IdP contains a SAML application for your appliance.
- The users that will leverage the SAML SSO feature are added to the appliance from the User Management screen.
- The IP addresses of the appliances are resolved to a Fully Qualified Domain Name (FQDN).
Setting up SAML SSO
This section describes the different tasks that an administrative user must perform for enabling the SAML SSO feature on the Protegrity appliances.
As part of this process, changes might be required to a user's roles and to the LDAP settings. For more information, refer to the sections Adding Users to Internal LDAP and Managing Roles.
Table 1. Setting up SSO
Order | Platform | Step | Reference |
---|---|---|---|
1 | Appliance Web UI | Add the users that require SAML SSO. Assign SSO Login permissions to the required user role. Ensure that the passwords of the users are changed after the first login to the appliance. | |
2 | Appliance Web UI | Provide the FQDN and entity ID. This is retrieved from the IdP in which a SAML enterprise application is created for your appliance. | Configuring Service Provider (SP) Settings |
3 | Appliance Web UI | Provide the metadata information that is generated on the IdP. | Configuring IdP Settings |
Configuring Service Provider (SP) Settings
Before enabling SAML SSO on the appliance, you must provide the following values that are required to connect the appliance with the IdP.
Fully Qualified Domain Name (FQDN)
The appliance Web UI must have an FQDN so that it can be accessed from a web browser. While configuring SSO on the IdP, you are required to provide a URL that maps to your application on the IdP. Ensure that the URL specified in the IdP matches the FQDN specified on the appliance Web UI. Also, ensure that the IP address of your appliance resolves to a reachable domain name.
Entity ID
The entity ID is a unique value that identifies your SAML application on the IdP. This value is assigned/generated on the IdP after registering your SAML enterprise application on it.
The nomenclature of the entity ID might vary between IdPs.
To enter the SP settings:
On the appliance Web UI, navigate to Settings > Users > Single Sign-On > SAML SSO.
Under the SP Settings section, enter the FQDN that is resolved to the IP address of the appliance in the FQDN text box.
Enter the unique value that is assigned to the SAML enterprise application on the IdP in the Entity ID text box.
If you want to allow access to User Management screen, enable the Access User Management screen option.
- User Management screens require users to provide local user password while performing any operation on it.
- Enabling this option will require users to remember and provide the password created for the user on the appliance.
Click Save.
The SP settings are configured.
Configuring IdP Settings
After configuring the SP settings, you provide the metadata, which is an important parameter in SAML SSO. The metadata is the link between the appliance and the IdP. It is an XML structure that contains information such as keys, certificates, and the entity ID URL. This information is required for communication between the appliance and the IdP. The metadata can be provided in either of the following ways:
- Metadata URL: Provide the URL of the metadata that is retrieved from the IdP.
- Metadata File: Provide the metadata file that is downloaded from the IdP and stored on your system. If you edit the metadata file, then ensure that the information in the metadata is correct before uploading it on the appliance.
To enter the metadata settings:
On the appliance Web UI, navigate to Settings > Users > Single Sign-On > SAML SSO.
Click Enable to enable SAML SSO.
If the metadata URL is available, then under the IdP Settings section, select Metadata URL from the Metadata Settings drop-down list and enter the URL of the metadata.
If the metadata file is downloaded, then under the IdP Settings section, select Metadata File from the Metadata Settings drop-down list and upload the metadata file.
If you want to allow access to the User Management screen, enable the Access User Management screen option.
- User Management screens require users to provide local user password while performing any operation on it.
- Enabling this option will require users to remember and provide the password created for the user on the appliance.
Click Save.
The metadata settings are configured.
- If you upload a new metadata file over the existing file, the changes are overridden by the new file.
- If you edit the metadata file, then ensure that the information in the metadata is correct before uploading it on the appliance.
4.14.2.1.1 - Workflow of SAML SSO on an Appliance
After entering all the required data, you are ready to log in to the appliance with SAML SSO. Before the login procedure is explained, the general flow of information is illustrated in the following figure.
Follow the process below to log in to the appliance. Additionally, you can log in to the appliance without SSO by providing valid user credentials.
Process
Follow these steps to login with SSO:
The user provides the FQDN of the appliance on the Web browser.
For example, the user enters esa.protegrity.com and clicks SAML Single Sign-On.
- Ensure that the user session on the IdP is active.
- If the session is idle or inactive, then a screen to enter the IdP credentials will appear.
The browser generates an authorization request and sends it to the IdP for verification.
If the user is authorized, then the IdP generates a SAML token and returns it to the Web browser.
This SAML token is then provided to the appliance to authenticate the user.
The appliance receives the token. If the token is valid, then the permissions of the user are checked.
Once these are validated, the Web UI of the appliance appears.
4.14.2.1.2 - Logging on to the Appliance
After configuring the required SSO settings, you can login to the appliance using SSO. Ensure that the user session on the IdP is active. If the session is idle or inactive, then a screen to enter the IdP credentials will appear.
To login to the appliance using SSO:
Open the Web browser and enter the FQDN of the ESA or the DSG in the URL.
The following screen appears.
Click Sign in with SAML SSO.
The Dashboard of the ESA/DSG appliance appears.
4.14.2.1.3 - Implementing SAML SSO on Azure IdP - An Example
This section provides a step-by-step sample scenario for implementing SAML SSO on the ESA with the Azure IdP.
Prerequisites
An ESA is up and running.
Ensure that the IP address of ESA is resolved to a reachable FQDN.
For example, resolve the IP address of ESA to esa.protegrity.com.
On the Azure IdP, perform the following steps to retrieve the entity ID and metadata.
Log in to the Azure Portal. Navigate to Azure Active Directory. Select the tenant for your organization. Add the enterprise application in the Azure IdP. Note the value of Application Id for your enterprise application.
For more information about creating an enterprise application, refer to https://docs.microsoft.com/.
Select Single sign-on > SAML. Edit the Basic SAML configuration and enter the Reply URL (Assertion Consumer Service URL). The format for this text box is https://<FQDN of the appliance>/Management/Login/SSO/SAML/ACS.
For example, the value in the Reply URL (Assertion Consumer Service URL) is, https://esa.protegrity.com/Management/Login/SSO/SAML/ACS
Under the SAML Signing Certificate section, copy the Metadata URL or download the Metadata XML file.
Users leveraging the SAML SSO feature are available in the Azure IdP tenant.
Steps
Log in to ESA as an administrative user. Add all the users for which you want to enable SAML SSO. Assign the roles to the users with the SSO Login permission.
For example, add the user Sam from the User Management screen on the ESA Web UI. Assign a Security Administrator role with SSO Login permission to Sam.
Ensure that the user Sam is present in the Azure AD.
Navigate to Settings > Users > Single Sign-On > SAML Single Sign-On. In the Service Provider (SP) settings section, enter esa.protegrity.com and the Appliance ID in the FQDN and Entity ID text boxes respectively. Click Save.
In the Identity Provider (IdP) Settings section, enter the Metadata URL in the Metadata Settings text box. If the Metadata XML file is downloaded on your system, then upload it. Click Save.
Select the Enable option to enable SAML SSO.
If you want to allow access to User Management screen, enable the Access User Management screen option.
Log out from ESA.
Open a new Web browser session. Log in to the Azure portal as Sam with the IdP credentials.
Open another session on the Web browser and enter the FQDN of ESA. For example, esa.protegrity.com.
Ensure that the user session on the IdP is active. If the session is idle or inactive, then a screen to enter the IdP credentials will appear.
Click Sign in with SAML SSO. You are automatically directed to the ESA Dashboard without providing the user credentials.
4.14.2.1.4 - Implementing SSO with a Load Balancer Setup
This section describes the process of implementing SSO with a Load Balancer that is setup between the appliances.
Steps to configure SSO in a Load Balancer setup
Consider two appliances, L1 and L2, that are configured behind a load balancer. Ensure that you perform the following steps to implement it.
- Add the users to the internal LDAP and assign SSO login permissions.
- Ensure that the FQDN is resolved to the IP address of the load balancer.
Logging in with SSO
After configuring the required settings, the user enters the FQDN of load balancer on the Web browser and clicks Sign in with SAML SSO to access it. On successful authentication, the appliance Dashboard appears.
4.14.2.1.5 - Viewing Logs
You can view the logs that are generated when the SAML SSO mechanism is used. The logs are generated for the following events:
- Uploading the metadata
- User logging to the appliance through SAML SSO
- Enabling or disabling SAML SSO
- Configuring the Service Provider and IdP settings
Navigate to Logs > Appliance Logs to view the logs.
You can also navigate to the Discover screen to view the logs.
4.14.2.1.6 - Feature Limitations
There are some known limitations of the SAML SSO feature.
- Exporting the SAML SSO settings using the Configuration export to Cluster Tasks and the Export data configuration to remote appliance options is not supported. The SAML SSO settings include the hostname, so importing the SAML settings on another machine would replace that machine's hostname.
- After logging in to the appliance through SAML SSO, if you have the Directory Manager permissions, you can access the User Management screen. A prompt to enter the user password appears when a user management operation is performed. In this case, you must enter the password that you set on the appliance; the password that is set on the IdP is not applicable here.
4.14.2.1.7 - Troubleshooting
This section describes the issues and their solutions while utilizing the SAML SSO mechanism.
Issue | Reason | Solution |
---|---|---|
The following message appears while logging in with SSO: Login Failure: Unauthorized to SSO Login. | | Ensure that the following points are considered. For more information about configuring the user role, refer here. |
4.15 - Sample External Directory Configurations
In appliances, external directory servers, such as Active Directory (AD) or Oracle Directory Server Enterprise Edition (ODSEE), are accessed over the LDAP protocol to authenticate users. The following sections describe the parameters that you must configure to connect to an external directory.
Sample AD configuration
The following example describes the parameters for setting up an AD connection.
LDAP Uri: ldap://192.257.50.10:389
Base DN: dc=sherwood,dc=com
Bind DN: administrator@sherwood.com
Bind Password: <Password for the Bind User>
StartTLS Method: Yes
Verify Peer: Yes
LDAP Filter: sAMAccountName
Same usernames across multiple ADs
If the same usernames exist across multiple ADs, it is recommended to use an LDAP Filter such as UserPrincipalName to authenticate users.
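Assuming the OpenLDAP client tools are available, the connection parameters above can be sanity-checked with a query similar to the following; the account jsmith is illustrative.
ldapsearch -H ldap://192.257.50.10:389 -ZZ -D "administrator@sherwood.com" -W -b "dc=sherwood,dc=com" "(sAMAccountName=jsmith)"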
Sample ODSEE configuration
The following example describes the parameters for setting up an ODSEE connection.
Protegrity appliances support ODSEE v11.1.1.7.0
LDAP Uri: ldap://192.257.50.10:389
Base DN: dc=sherwood,dc=com
Bind DN: cn=Directory Manager or cn=admin,cn=Administrators,cn=config
Bind Password: <Password for the Bind User>
StartTLS Method: Yes
Verify Peer: Yes
LDAP Filter: User attributes such as uid, cn, sn, and so on.
Sample SAML Configuration
The following example describes the parameters for setting up a SAML connection.
SAML Single Sign-On:
Enable: Yes
Access User Management Screen: No
Service Provider (SP) Settings:
FQDN: appliancefqdn.com
Entity ID: e595ce43-c50a-4fd2-a3ef-5a4d93a602ae
Identity Provider (IdP) Settings:
Metadata Settings: Metadata URL
Sample SAML File: FQDN_EntityID_Metadata_user_credentials 1.csv
Sample Content of the SAML File:
Sample Kerberos Configuration
The following example describes the parameters for setting up a Kerberos connection. The Kerberos for Single Sign-On uses Simple and Protected GSSAPI Negotiation Mechanism (SPNEGO).
Kerberos for Single Sign-On using (Spnego):
Enable: Yes
Service Principal Name: HTTP/<username>.esatestad.com@ESATESTAD.COM
Sample Keytab File: <username> 1.keytab
Sample Azure AD Configuration
The following example describes the parameters for setting up an Azure AD connection.
Azure AD Settings: Enabled
Tenant ID: 3d45143b-6c92-446a-814b-ead9ab5c5e0b
Client ID: a1204385-00eb-44d4-b352-e4db25a55c52
Auth Type: Secret
Client Secret: xxxx
4.16 - Partitioning of Disk on an Appliance
Firmware is low-level software that initializes the hardware components of a system during the boot process and provides runtime services for the operating system and the programs on the system. There are two types of boot modes in the system setup: Basic Input/Output System (BIOS) and Unified Extensible Firmware Interface (UEFI).
BIOS is among the oldest systems used as a boot loader to initialize the hardware. UEFI is a comparatively newer system that defines a software interface between the operating system and the platform firmware. UEFI is more advanced than BIOS, and most systems are built with support for both UEFI and BIOS.
Disk Partitioning is a method of dividing the hard drive into logical partitions. When a new hard drive is installed on a system, the disk is segregated into partitions. These partitions are utilized to store data, which the operating system reads in a logical format. The information about these partitions is stored in the partition table.
There are two types of partition tables, the Master Boot Record (MBR) and the GUID Partition Table (GPT). These form a special boot section in the drive that provides information about the various disk partitions. They help in reading the partition in a logical manner.
The following table lists the differences between the GPT and the MBR.
GUID Partition Table (GPT) | Master Boot Record (MBR) |
---|---|
Supported on UEFI. | Supported on BIOS. Can also be compatible with UEFI. |
Supports partitions up to 9 ZB. | Supports partitions up to 2 TB. |
Number of primary partitions can be extended to 128. | Maximum number of primary partitions is 4. |
Runs in 32-bit and 64-bit OS. | Runs in 16-bit OS. |
Provides discrete driver support in the form of executable. | Stores the drive support in its ROM, therefore, updating the BIOS firmware is difficult. |
Offers features such as Secure Boot to limit the initialization of boot process using unauthorized applications. | Boots in the normal mode |
Has a faster boot time. | Has a standard boot time. |
Depending on the requirements, you can extend the size of the partitions in a physical volume to accommodate all the logs and other appliance related data. You can utilize the Logical Volume Manager (LVM) to increase the partitions in the physical volume. Using LVM, you can manage hard disk storage to allocate, mirror, or resize volumes.
In an appliance, the physical volume is divided into the following three logical volume groups:
Partition | Description |
---|---|
Boot | Contains the boot information. |
PTYVG | Contains the files and information about OS and logs. |
Data Volume Group | Contains the data that is in the /opt directory. |
4.16.1 - Partitioning the OS in the UEFI Boot Option
The PTYVG volume partition is divided into three logical volumes. These are the PTYVG-OS, the PTYVG-OS_bak, and the PTYVG-LOGS volume. The PTYVG volume partition contains the OS information.
The following table illustrates the partitioning of the volumes in the PTYVG directory.
Logical Volume | Description | Default Size |
---|---|---|
PTYVG-OS | The root partition | 16 GB |
PTYVG-OS_bak | The backup for the root partition | 16 GB |
PTYVG-LOGS | The logs that are in the /var/log directory | 12 GB |
In the UEFI mode, the sda1 is the EFI partition which stores the UEFI executables required to perform the booting process for the system. This .efi file points to the sda3 partition where the GRUB configurations are stored. The grub.cfg file initiates the boot process.
The following table illustrates the partitioning of all the logical volume groups in a single hard disk system.
Table: Partition of Logical Volume Groups
Partition | Partition Name | Physical Volume | Volume Group | Directory | Directory Path | Size |
/dev/sda | sda1 | EFI Partition | 400M | |||
sda2 | 100M | |||||
sda3 | BOOT | 900M | ||||
sda4 | Physical Volume 1 | PTYVG | OS | / | 16G | |
OS_bak | 16G | |||||
logs | /var/log | 12G | ||||
sda5 | Physical Volume 2 | PTYVG_DATA | opt | 50% of rest | ||
opt_bak | 50% of rest |
As shown in the table, the sda1 is the EFI Partition and contains information up to 400 MB. The sda2 is the Unallocated Partition, which is required for supporting the GPT, and occupies 100 MB. The sda3 is the Boot Partition volume group and can contain information up to 900 MB. The sda4 is the PTYVG partition and uses 44 GB of hard disk space to store information about the OS and the logs. The remaining partition size is allotted to the data volume group.
For Cloud-based platforms, the opt_bak and the OS_bak directories are not available in the data volume group. The data in the PTYVG_DATA partition is available in the opt directory only.
If you want to use the EFI Boot Option for the ESA, then select the required option while creating the machine.
4.16.2 - Partitioning the OS with the BIOS Boot Option
Depending on the requirements, you can extend the size of the partitions in a physical volume to accommodate all the logs and other appliance related data. You can utilize the Logical Volume Manager (LVM) to increase the partitions in the physical volume. Using LVM, you can manage hard disk storage to allocate, mirror, or resize volumes.
In an appliance, the physical volume is divided into the following three logical volume groups:
Partition | Description |
---|---|
Boot | Contains the boot information |
PTYVG | Contains the files and information about OS and logs |
Data Volume Group | Contains the data that in the /opt directory |
The PTYVG volume partition contains the OS information. You must increase the PTYVG volume group to extend the root partition. The following table describes the different logical volumes in the PTYVG volume group.
Logical Volume | Description | Default Size |
---|---|---|
OS | The root partition | |
OS-bak | The backup for the root partition | |
LOGS | The logs that are in the /var/log directory | |
SWAP | The swap partition | The swap partition for ESA is 8 GB. |
The following table illustrates the partitioning of all the logical volume groups in a single hard disk system.
Table: Partition of Logical Volume Groups
Partition | Partition Name | Physical Volume | Volume Group | Directory | Directory Path | Upgraded Appliances Size | ISO and Cloud Installation Size |
/dev/sda | sda1 | /boot | 400M | ||||
sda2 | 100M | ||||||
sda3 | Physical Volume 1 | PTYVG | OS | / | 8G | 16G | |
OS_bak | 8G | 16G | |||||
logs | /var/log | 6G | 12G | | | | |
swap | [SWAP] | By default, 2G for appliance products. The swap partition for ESA is 8G. | | | | |
sda4 | Physical Volume 2 | PTYVG_DATA | opt | /opt/docker/lib | 50% of rest | ||
opt_bak | 50% of rest |
- For Cloud-based platforms, the OS_bak directory is not available in the data volume group. The data in the PTYVG partition is available in the OS directory only.
- For Cloud-based platforms, the opt_bak directory is not available in the data volume group. The data in the PTYVG_DATA partition is available in the opt directory only.
If multiple hard disks are installed on an appliance, then you can select the required hard disks for configuring the OS volume and the data volume. You can also extend the OS partition or the disk partition across the hard disks that are installed on the appliance.
The following table illustrates an example of partitioning in multiple hard disks.
Table: Partitioning in Multiple Hard Drives
Partition | Partition Name | Physical Volume | Volume Group | Directory | Directory Path | Upgraded Appliances Size | ISO and Cloud Installation Size |
/dev/sda | sda1 | /boot | 400M | ||||
sda2 | 100M | ||||||
sda3 | Physical Volume 1 | PTYVG | OS | / | 8G | 16G | |
OS_bak | 8G | 16G | |||||
logs | /var/log | 6G | 12G | ||||
swap | [SWAP] | By default, 2G for appliance products. The swap partition for ESA is 8G. | | | | |
sda4 | Physical Volume 2 | PTYVG_DATA | opt | /opt/docker/lib | 50% of rest | ||
opt_bak | 50% of rest |
Partition | Physical Volume | Volume Group | Directory | Directory Path | Size |
/dev/sdb | Physical Volume 1 | PTYVG_DATA | opt | /opt/docker/lib | 50% |
opt_bak | 50% |
- For Cloud-based platforms, the OS_bak directory is not available in the data volume group. The data in the PTYVG partition is available in the OS directory only.
- For Cloud-based platforms, the opt_bak directory is not available in the data volume group. The data in the PTYVG_DATA partition is available in the opt directory only.
The hard disk, sda, contains the partitions for the root and the PTYVG volumes. The hard disk, sdb contains the partition for the data volume group.
Extending the OS partition
The following sections describe the procedures to extend the OS partition.
Before you begin
Before extending the OS partition, it is recommended to back up your appliance. It ensures that you can roll back your changes in case of an error.
When you add a new hard disk to the appliance, you should restart the system. This ensures that all the hard disks appear.
- For Cloud-based platforms, the names of the hard disks may get updated after restarting the system.
- Ensure that you verify the names of the hard disks before proceeding further.
Starting in Single User Mode
You must boot into Single User Mode to change the kernel command line.
For Cloud-based platforms, the Single User Mode is unavailable. It is recommended to perform the following operations from the OS Console. While performing these operations, ensure that the system is accessible by only a single user.
To boot into Single User Mode:
Install a new hard disk on the appliance.
For more information about installing a new hard disk, refer here.
Boot the appliance in Single User Mode.
If the GRUB Credentials are enabled, the screen to enter the GRUB credentials appears. Enter the credentials and press ENTER.
The following screen appears.
Select Normal and press E.
The following screen appears.
Select the linux/generic line and append <SPACE>S to the end of the line as shown in the following figure.
Press F10 to restart the appliance.
After the appliance is restarted, a prompt to enter the root password appears.
Enter the root password and press ENTER.
Creating a Partition
After editing the kernel command line, you must create the required partitions.
The following procedure describes how to create a partition on a new hard disk, sdb. You can add multiple hard disks to the appliance.
If you add multiple hard disks to the appliance, then the devices are created as /dev/sdb, /dev/sdc, /dev/sdd, and so on. You can select the required hard disk based on the storage space available.
For Cloud-based platforms, the names of the hard disk might differ. Based on the cloud platform, the hard disk names may appear as nvme1n1, xvdb, or so on.
To create a partition:
Run the following command to list the hard disks that are available.
lsblk
Run the following command to partition the disk.
fdisk /dev/sdb
Type o to create a partition table and press ENTER.
Type n to create a new partition and press ENTER.
Type p to create a primary partition and press ENTER.
In the following prompt, assign a partition number to the new partition.
If you want to enter the default number for the partition, then press ENTER.
Type the required starting partition sector for the partition.
If you want to enter the default sector for the partition, then press ENTER.
Type the last sector for the partition and press ENTER.
If you want to enter the default sector for the partition, then press ENTER.
Type t to change the type of the new partition and press ENTER.
Type 8e to convert the disk partition to Linux LVM and press ENTER.
Type w to save the changes and press ENTER.
A message The partition table has been altered! appears.
Run the following command to initialize the disk partition that is used with LVM.
pvcreate /dev/sdb1
For Cloud-based platforms, you should use the name of the disk partition only. For instance, if the name of the hard disk on the Cloud-based platform is nvme0n1, then run the following command to initialize the disk partition that is used with LVM.
pvcreate /dev/nvme0n1
If the following confirmation message appears, then press y.
WARNING: dos signature detected on /dev/sdb1 at offset 510. Wipe it? [y/n]: y
A message Physical volume “/dev/sdb1” is successfully created appears.
Run the following command to extend the PTYVG volume.
vgextend PTYVG /dev/sdb1
A message Volume group “PTYVG” successfully extended appears.
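As a quick reference, the overall sequence for a hypothetical second disk /dev/sdb is shown below; the device name on your system may differ, particularly on cloud platforms.
lsblk                       # identify the new disk
fdisk /dev/sdb              # create the partition table and a primary Linux LVM (8e) partition
pvcreate /dev/sdb1          # initialize the new partition for use with LVM
vgextend PTYVG /dev/sdb1    # add the new physical volume to the PTYVG volume group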
Extending the OS and the Backup Volume
After extending the PTYVG volume, you can resize the OS and the OS_bak volumes using the lvextend and resize2fs commands.
Ensure that you consider the following points before extending the partitions in the PTYVG volume group:
- Back up the OS partition before extending the partition.
- Back up the policy, LDAP, and other required data to the /opt directory before extending the volume.
The following procedure describes how to extend the OS and the OS_bak volumes by 4 GB.
Ensure that there is enough free space available while extending the size of the OS, the OS_bak, and the log volumes. For instance, if you extend the hard disk by 1 GB and if the space is less than the required level, then the following error appears.
Insufficient free space: 1024 extents needed, but only 1023 available
To resolve this error, you must increase the partition size by 0.9 GB.
To extend the OS and the OS_bak volumes:
Run the following command to extend the OS_bak volume.
# lvextend -L +4G /dev/PTYVG/OS_bak
A message Logical Volume OS_bak successfully resized appears.
Ensure that you extend the size of the OS and the OS_bak volumes to the same value.
Run the following command to resize the file system in the OS_bak volume.
# resize2fs /dev/mapper/PTYVG-OS_bak
A message resize2fs: On-Line resizing finished successfully appears.
Run the following command to extend the OS volume.
# lvextend -L +4G /dev/PTYVG/OS
A message Logical Volume OS successfully resized appears.
Run the following command to resize the file system in the OS volume.
# resize2fs /dev/mapper/PTYVG-OS
A message resize2fs: On-Line resizing finished successfully appears.
Restart the appliance.
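To confirm the new sizes, you can inspect the logical volumes and the mounted root file system; the output varies per appliance.
lvs PTYVG
df -h /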
Extending the Logs Volume
You can resize the logs volume using the lvextend and resize2fs commands. This ensures that you provision the required space for the logs that are generated. You must back up the current logs to the /opt directory before extending the logs volume.
Before extending the logs volume, ensure that you start the appliance in Single User Mode and create a partition.
For more information about Single User Mode, refer here.
For more information about creating a partition, refer here.
The following procedure describes how to extend the logs volume by 4 GB.
To extend the logs volume:
Run the following command to create a temporary folder in the /opt directory.
# mkdir /opt/tmp/logs
Run the following command to copy the files from the logs volume to the /opt directory.
# /usr/bin/rsync -axzHS --delete-before /var/log/ /opt/tmp/logs/
While copying the logs from the /var/log/ directory to the /opt directory, ensure that the space available in the /opt directory is more than the size of the logs.
Run the following command to extend the logs volume.
# lvextend -L +4G /dev/PTYVG/logs
A message Logical Volume logs successfully resized appears.
Run the following command to resize the file system in the logs volume.
# resize2fs /dev/mapper/PTYVG-logs
A message resize2fs: On-Line resizing finished successfully appears.
Run the following command to copy the files from /opt directory to the logs volume.
# /usr/bin/rsync -axzHS --delete-before /opt/tmp/logs/ /var/log/
Run the following command to remove the temporary folder created in the /opt directory.
# rm -r /opt/tmp/logs
Restart the appliance.
4.17 - Working with Keys
The Protegrity Data Security Platform uses many keys to protect your sensitive data. The Protegrity Key Management solution manages these keys, and this system is embedded into the fabric of the Protegrity Data Security Platform. For example, creating a cryptographic or data protection key is part of the process of defining how sensitive data is to be protected; there is no specific user-visible function to create a data protection key.
With key management as a part of the platform’s core infrastructure, the security team can focus on protecting data and not the low-level mechanics of key management. This platform infrastructure-based key management technique eliminates the need for any human to be a custodian of keys. This holds true for any of the functions included in key management.
The keys that are part of the Protegrity Key Management solution are:
Key Encryption Key (KEK): The cryptographic key used to protect other keys. The KEKs are categorized as follows:
- Master Key - It protects the Data Store Keys and Repository Key. In the ESA, only one active Master Key is present at a time.
- Repository Key - It protects policy information in the ESA. In the ESA, only one active Repository Key is present at a time.
- Data Store Key - It encrypts the audit logs on the protection endpoint. In the ESA, multiple active Data Store Keys can be present at a time. This key applies only to v8.0.0.0 and earlier protector versions.
Signing Key: The protector utilizes the Signing Key to sign the audit logs for each data protection operation. The signed audit log records are then sent to the ESA, which authenticates and displays the signature details received for the log records.
For more information about the signature details for the log records, refer to the Protegrity Log Forwarding Guide 9.2.0.0.
Data Encryption Key (DEK): The cryptographic key used to encrypt the sensitive data for the customers.
Codebooks: The lookup tables used to tokenize the sensitive data.
For more information about managing keys, refer to the Protegrity Key Management Guide 9.2.0.0.
4.18 - Working with Certificates
Digital certificates are used to encrypt online communication and to authenticate two entities to each other. For two entities exchanging sensitive information, the one that initiates the request for the exchange can be called the client, and the one that receives the request can be called the server.
The authentication of both the client and the server involves the use of digital certificates issued by the trusted Certificate Authorities (CAs). The client authenticates itself to a server using its client certificate. Similarly, the server also authenticates itself to the client using the server certificate. Thus, certificate-based communication and authentication involves a client certificate, server certificate, and a certifying authority that authenticates the client and server certificates.
Protegrity client and server certificates are self-signed by Protegrity. However, you can replace them with certificates signed by a trusted, commercial CA. These certificates are used for communication between various components in ESA.
The certificate support in Protegrity involves the following:
ESA supports the upload of certificates with a key strength of 4096 bits. You can upload a certificate with a strength lower than 4096 bits, but the system displays a warning message. Custom certificates for Insight must be generated using a 4096-bit key (see the example after this list).
The ability to replace the self-signed Protegrity certificates with the CA based certificates.
The retrieval of username from client certificates for authentication of user information during policy enforcement.
The ability to download the server’s CA certificate and upload it to a certificate trust store to trust the server certificate for communication with ESA.
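As an illustration of the 4096-bit requirement, the following OpenSSL command sketches how a self-signed certificate with a 4096-bit RSA key could be generated. The file names and the Common Name (esa.example.com) are placeholders, and your environment may require additional attributes, such as subject alternative names, or signing by your own CA.
# openssl req -x509 -newkey rsa:4096 -keyout insight.key -out insight.crt -days 365 -nodes -subj "/CN=esa.example.com"
The resulting insight.key and insight.crt files are only an example of a key pair that meets the 4096-bit strength described above.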
The various components within the Protegrity Data Security Platform that communicate with and authenticate each other through digital certificates are:
- ESA Web UI and ESA
- ESA and Protectors
- Protegrity Appliances and external REST clients
As illustrated in the figure, the use of certificates within the Protegrity systems involves the following:
Communication between ESA Web UI and ESA
In case of a communication between the ESA Web UI and ESA, ESA provides its server certificate to the browser. In this case, it is only server authentication that takes place in which the browser ensures that ESA is the trusted server.
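One way to inspect the server certificate that ESA presents is with the OpenSSL client. This is a minimal sketch; replace esa.example.com with the host name or IP address of your ESA, and note that the Web UI port may differ in your installation.
# openssl s_client -connect esa.example.com:443 -showcerts </dev/null
The output shows the certificate chain, which you can use to confirm the issuer and the validity period of the server certificate.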
Communication between ESA and Protectors
In case of a communication between ESA and Protectors, certificates are used to mutually authenticate both entities. The server and the client, that is, ESA and the Protector respectively, ensure that both are trusted entities. The Protectors can be hosted on customer business systems or on a Protegrity appliance.
Communication between Protegrity Appliances and external REST clients
Certificates ensure the secure communication between the customer client and Protegrity REST server or between the customer client and the customer REST server.
4.19 - Managing policies
The policy each organization creates within ESA is based on its requirements and the relevant regulations. A policy helps determine, specify, and enforce certain data security rules. These data security rules are shown in the following figure.
Classification
This section discusses the classification step of policy management in ESA.
What do you want to protect?
The data that is to be protected needs to be classified. This step determines the type of data that the organization considers sensitive. The compliance or security team chooses to meet the compliance requirements of a specific law or regulation, such as the Payment Card Industry Data Security Standard (PCI DSS) or the Health Insurance Portability and Accountability Act (HIPAA).
In ESA, you classify the sensitive data fields by creating ‘Data Elements’ for each field or type of data.
Why do you need to protect?
The fundamental goal of all IT security measures is the protection of sensitive data. The improper disclosure of sensitive data can cause serious harm to the reputation and business of the organization. Hence, protecting sensitive data, thereby preventing identity theft and preserving privacy, is to everyone's advantage.
Discovery
This section discusses the discovery step of policy management in ESA.
Find where the data is located in the enterprise
The data protection systems are the locations in the enterprise to focus on when the data security solution is designed. Any data security solution must identify the systems that contain the sensitive data.
How do you want to protect it?
Data protection has different scenarios that require different forms of protection. For example, tokenization is preferred over encryption for credit card protection. The technology used must be understood to identify a suitable protection method. For example, if a database is involved, Protegrity identifies a Protector that matches the technology used to protect the sensitive data.
Who is authorized to view it in the clear?
In any organization, access to unprotected sensitive data must be given only to the stakeholders who are authorized and need it to accomplish their jobs. A policy defines the authorization criteria for each user. Users are defined as members of roles. A level of authorization is associated with each role, which assigns data access privileges to all members of the role.
Protection
The Protegrity Data Security Platform delivers protection through a set of Data Protectors. The Protegrity Protectors meet the governance requirements to protect sensitive data in any kind of environment. ESA delivers the centrally managed policies, and the Protectors enforce them locally. The Protectors also collect audit logs of all activity in their systems and send them back to ESA for reporting.
Enforcement
The value of any company or business is in its data. The company or business suffers serious issues if an unauthorized user gets access to that data. Therefore, it becomes necessary for any company or business to protect its data. The policy is created to enforce the data protection rules that fulfill the requirements of the security team. It is deployed to all Protegrity Protectors that are protecting sensitive data at protection points.
Monitoring
As a policy is enforced, the Protegrity Protectors collect audit logs in their systems and report back to ESA. Audit logs help capture authorized and unauthorized attempts to access sensitive data at all protection points. They also capture logs of all changes made to policies. You can specify what types of audit records are captured and sent back to ESA for analysis and reporting.
4.20 - Working with Insight
Logging follows a fixed routine. The system generates logs, which are collected and then forwarded to Insight. The Audit Store holds the logs and these log records are used in various areas, such as, alerts, reports, dashboards, and so on. This section explains Insight in ESA.
4.20.1 - Understanding the Audit Store node status
Viewing cluster status
The Overview screen shows information about the Audit Store cluster. Use this information to understand the health of the Audit Store cluster. Access the Overview screen by navigating to Audit Store > Cluster Management > Overview. The Overview screen is shown in the following figure.
The following information is shown on the Overview screen:
- Join Cluster: Click to add a node to the Audit Store cluster. The node can be added to only one Audit Store cluster. This button is disabled after the node is added to the Audit Store cluster.
- Leave Cluster: Click to remove a node from the Audit Store cluster. This button is disabled after the node is removed from an Audit Store cluster.
- Cluster Name: The name of the Audit Store cluster.
- Cluster Status: The cluster status displays the index status of the worst shard in the Audit Store cluster. Accordingly, the following status information appears:
- Red status indicates that the specific shard is not allocated in the Audit Store cluster.
- Yellow status indicates that the primary shard is allocated but replicas are not allocated.
- Green status indicates that all shards are allocated.
- Number of Nodes: The count of active nodes in the Audit Store cluster.
- Number of Data Nodes: The count of nodes that have a data role.
- Active Primary Shards: The count of active primary shards in the Audit Store cluster.
- Active Shards: The total of active primary and replica shards.
- Relocating Shards: The count of shards that are being relocated.
- Initializing Shards: The count of shards that are under initialization.
- Unassigned Shards: The count of shards that are not allocated.
- OS Version: The version of OpenSearch used for the Audit Store.
- Current Master: The IP address of the current Audit Store node that is elected as master.
- Indices Count: The count of indices in the Audit Store cluster.
- Total Docs: The document count of all indices in the Audit Store cluster, excluding security index docs.
- Number of Master Nodes: The count of nodes that have the master-eligible role.
- Number of Ingest Nodes: The count of nodes that have the ingest role.
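Most of the fields above map to the OpenSearch cluster health API. The following request is a minimal sketch; it assumes the Audit Store's OpenSearch REST interface is reachable on the default port 9200 of the ESA and that you supply valid administrator credentials. The exact host, port, credentials, and TLS options depend on your installation.
# curl -k -u <admin_user>:<password> "https://localhost:9200/_cluster/health?pretty"
The response includes fields such as status, number_of_nodes, number_of_data_nodes, active_primary_shards, active_shards, relocating_shards, initializing_shards, and unassigned_shards, which correspond to the values shown on the Overview screen.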
Viewing the node status
The Nodes tab on the Overview screen shows the status of the nodes in the Audit Store cluster. This tab displays important information about the node. The Nodes tab is shown in the following figure.
The following information is shown on the Nodes tab:
- Node IP: The IP address of the node.
- Role: The roles assigned to the node. By default, nodes are assigned all the roles. The following roles are available:
- Master: This is the master-eligible role. The nodes having this role can be elected as the cluster master to control the Audit Store cluster.
- Data: The nodes having the data role hold data and perform data-related operations.
- Ingest: The nodes having the ingest role process the logs received before the logs are stored in the Audit Store.
- Action: The button to edit the roles for the current node.
- Name: The name for the node.
- Up Time: The uptime for the node.
- Disk Total (Bytes): The total disk space in bytes.
- Disk Used (Bytes): The disk space used in bytes.
- Disk Avail (Bytes): The available disk space in bytes.
- RAM Max (Bytes): The total RAM available in bytes.
- RAM Current (Bytes): The current RAM used in bytes.
Viewing the index status
The Indices tab on the Overview screen shows the status of the indexes on the Audit Store cluster. This tab displays important information about the indexes. The Indices tab is shown in the following figure.
The following information is shown on the Indices tab:
- Index: The index name.
- Doc Count: The number of documents in the index.
- Health Status: The health status per index. The index-level health status is controlled by the worst shard status. Accordingly, the following status information appears:
- Red status indicates that the specific shard is not allocated in the Audit Store cluster.
- Yellow status indicates that the primary shard is allocated but replicas are not allocated.
- Green status indicates that all shards are allocated.
- Pri Store Size (Bytes): The store size in bytes of the primary shards of the index.
- Store Size (Bytes): The total store size in bytes for all shards, including shard replicas of the index.
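Similar index-level information can be retrieved from the OpenSearch _cat API. This is a minimal sketch under the same assumptions as the cluster health example above; the host, port, and credentials depend on your installation.
# curl -k -u <admin_user>:<password> "https://localhost:9200/_cat/indices?v"
The output lists each index with its health, document count, and store sizes, matching the columns of the Indices tab.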
4.20.2 - Accessing the Insight Dashboards
Log in to the ESA Web UI.
Click Audit Store > Dashboard. If pop-ups are blocked in the browser, then click Open in a new tab to view the Audit Store Dashboards.
The Audit Store Dashboards is displayed in a new tab of the browser.
4.20.3 - Working with Audit Store nodes
Registering a node
When a node that was down and unregistered starts again, it retains the Audit Store configuration. Similarly, when an external machine, such as an external OpenSearch machine, is connected to the Audit Store cluster without using the Join Cluster button on the Audit Store Cluster Management page, the node appears with an orange icon. In this case, register the node using the Register button.
Perform the following steps to register a node:
Navigate to Audit Store > Cluster Management > Overview > Nodes.
Click Register.
The node will be a part of the cluster and a black node icon will appear.
Unregistering a node
When a node goes down, such as due to a crash or for maintenance, the node is greyed out. Disconnect the node from the cluster using the Unregister button. A disconnected node can be added back to the cluster later, if required.
Perform the following steps to remove the disconnected node:
Navigate to Audit Store > Cluster Management > Overview > Nodes.
Click Unregister.
The node is still a part of the cluster; however, it is not visible in the list.
4.20.4 - Working with Discover
For more information about Discover, refer to https://opensearch.org/docs/latest/dashboards/.
Viewing logs
The logs aggregated and collected are sent to Insight. Insight stores the logs in the Audit Store. The logs from the Audit Store are displayed on the Audit Store Dashboards. Here, the different fields and the data logged is visible. In addition to viewing the data, these logs serve as input for Analytics to analyze the health of the system and to monitor the system for providing security.
View the logs by logging in to the ESA, navigating to Audit Store > Dashboard > Open in new tab, selecting Discover from the menu, and selecting a time period, such as Last 30 days.
Use the default index to view the log data. Alternatively, select an index pattern or alias for the entries to view the data from a different index. Indexes can be created or deleted. However, deleting an index leads to a permanent loss of the data in that index. If the index was not backed up earlier, the logs from the deleted index cannot be recreated or retrieved.
Saved queries
Run a query and customize the log details displayed. Save the query and the settings for running a query, such as, the columns, row count, tail, and indexes for the query. The saved queries created are user-specific.
The following saved queries are provided to view information; an example query expression is shown after this list:
- Policy: This query is available to view policy logs.
- Security: This query is available to view security operation logs.
- Unsuccessful Security Operations: This query is available to view unsuccessful security operation-related logs.
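As an example of the kind of query that can be entered in the Search field, the following expression is a minimal sketch that restricts the results to unsuccessful security operations, using the same fields that the Unsuccessful Security Operations visualization filters on.
logtype:protection and not level:Success
Save such a query with a descriptive name so that you can rerun it later with the same columns, row count, and index settings.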
In ESA, navigate to Audit Store > Dashboard > Open in new tab, select Discover from the menu, and optionally select a time period, such as Last 30 days.
A user with the viewer role can only view and run saved queries. Admin rights are required to create or modify query filters.
Select the index for running the query.
Enter the query in the Search field.
Optionally, select the required fields.
Click the See saved queries icon to save the query.
The Saved Queries list appears.
Click Save current query.
The Save query dialog box appears.
Specify a name for the query.
Click Save to save the query information, including the configurations specified, such as, the columns, row count, tail, indexes, and query.
The query is saved.
Click the See saved queries icon to view the saved queries.
4.20.5 - Working with Audit Store roles
A node can have one role or multiple roles. A cluster needs at least one node with each role. Hence, roles of the node in a single-node cluster cannot be removed. Similarly, if the node is the last node in the cluster with a particular role, then the role cannot be removed. By default, all the nodes must have the master-eligible, data, and ingest roles:
- Master-eligible: This is the master-eligible node. It is eligible to be elected as the master node that controls the Audit Store cluster. A minimum of 3 nodes with the master-eligible role are required in the cluster to make the Audit Store cluster stable and resilient.
- Data: This node holds data and can perform data-related operations. A minimum of 2 nodes with the data role are required in the Audit Store cluster to reduce data loss when a node goes down.
- Ingest: This node processes received logs before they are indexed for further storage and processing. A minimum of 2 nodes with the ingest role are required in the Audit Store cluster.
The Audit Store uses the following formula to determine the minimum number of nodes with the Master-eligible role that should be running in the cluster:
Minimum number of running nodes with the Master-eligible role in a cluster = (Total number of nodes with the Master-eligible role in a cluster / 2) + 1
For example, if the cluster has 5 nodes that have the Master-eligible role, then the minimum number of nodes with the Master-eligible role that needs to be running for the cluster to remain functional is 3.
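The division in the formula is an integer division. As an illustration only, the following shell arithmetic reproduces the calculation for a cluster with 5 master-eligible nodes.
# echo $(( 5 / 2 + 1 ))
The command prints 3, which matches the minimum of 3 running master-eligible nodes in the example above.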
An Audit Store cluster must have a minimum of 3 nodes with the Master-eligible role due to the following scenarios:
- 1 master-eligible node: If only one node has the Master-eligible role, it is elected the master by default because it is the only node with the required role. In this case, if the node becomes unavailable due to some failure, the cluster becomes unstable because there is no additional node with the Master-eligible role.
- 2 master-eligible nodes: In a cluster where only 2 nodes have the Master-eligible role, both nodes must be up and running for the cluster to remain functional, because the formula requires (2 / 2) + 1 = 2 running master-eligible nodes. If either node becomes unavailable due to some failure, the minimum condition for nodes with the Master-eligible role is not met and the cluster becomes unstable.
- 3 master-eligible nodes and above: In this case, if any one node goes down, then the cluster can still remain functional because this cluster requires two nodes with the Master-eligible role to be running at the minimum, as per the minimum Master-eligible role formula.
For more information about nodes and roles, refer to https://opensearch.org/docs/latest/tuning-your-cluster/.
Based on the requirements, modify the roles of a node using the following steps.
Log in to the Web UI of the system to change the role.
Click Audit Store > Cluster Management > Overview to open the Audit Store clustering page.
Click Edit Roles.
Select the check box to add a role. Alternatively, clear the check box to remove a role.
Click Update Roles.
Click Dismiss in the message box that appears after the role update.
4.20.6 - Understanding Insight Dashboards
Viewing the graphs provides an easier and faster method for reading the log information. This helps you understand how the system is working and make decisions faster, such as assessing the processing load on the ESAs and expanding the cluster by adding nodes, if required.
For more information about the dashboards, navigate to https://opensearch.org/docs/latest/dashboards/.
Accessing the Insight Dashboards
The Insight Dashboards appears on a separate tab from the ESA Web UI. However, it uses the same session as the ESA Web UI. Signing out from the ESA Web UI also signs out from the Insight Dashboards. Complete the steps provided here to view the Insight Dashboards.
Log in to the ESA Web UI.
Click Audit Store > Dashboard. If pop-ups are blocked in the browser, click Open in a new tab to view the Audit Store Dashboards, also known as Insight Dashboards.
The Audit Store Dashboards is displayed in a new tab of the browser.
Overview of the Insight Dashboards Interface
An overview of the various parts of the Insight Dashboards, also known as the Audit Store Dashboards, is provided here.
The Audit Store Dashboard appears as shown in the following figure.
The following components are displayed on the screen.
Callout | Element | Description |
---|---|---|
1 | Navigation panel | The menu displays the different Insight applications, such as, dashboards, reports, and alerts. |
2 | Search bar | The search bar helps find elements and run queries. Use filters to narrow the search results. |
3 | Bread crumb | The menu is used to quickly navigate across screens. |
4 | Panel | The panel is the area to create and view visualizations and log information. |
5 | Toolbar | The toolbar lists the commands and shortcuts for performing tasks. |
6 | Time filter | The time filter specifies the time window for viewing logs. Update the filter if logs are not visible. Use the Quick Select menu to select predefined time periods. |
7 | Help | The help menu provides access to the online help and to view the community forums. |
Accessing the help
The Insight Dashboard helps visualize log data and information. Use the help documentation provided by Insight to configure and create visualizations.
To access the help:
Open the Audit Store Dashboards.
Click the Help icon from the upper-right corner of the screen.
Click Documentation.
Alternatively, navigate to https://opensearch.org/docs/latest/dashboards/.
4.20.7 - Working with Protegrity dashboards
The configuration of dashboards created in the earlier versions of Insight Dashboards is retained after the ESA is upgraded. Protegrity provides default dashboards with version 10.1.0. If the title of an existing dashboard matches the new dashboard provided by Protegrity, then a duplicate entry is visible. Use the date and time stamp to identify and rename the earlier dashboards. The Protector status interval is used for presenting the data on some dashboards. The information presented on the dashboard might not have the correct values if the interval is updated.
Do not clone, delete, or modify the configuration or details of the dashboards that are provided by Protegrity. To create a customized dashboard, first clone and customize the required visualizations, then create a dashboard, and place the customized visualizations on the dashboard.
To view a dashboard:
Log in to the ESA.
Navigate to Audit Store > Dashboard.
From the navigation panel, click Dashboards.
Click the dashboard.
Viewing the Security Operation Dashboard
The security operation dashboard displays the counts of individual and total security operations for successful and unsuccessful operations. The Security Operation Dashboard has a table and pie charts that summarize the security operations performed by a specific data store, protector family, and protector vendor. This dashboard shows different visualizations for the Successful Security Operations, Security Operations, Reprotect Counts, Successful Security Operation Counts, Security Operation Counts, Security Operation Table, and Unsuccessful Security Operations.
This dashboard cannot be deleted. The dashboard is shown in the following figure.
The dashboard has the following panels:
- Total Security Operations: Displays pie charts for the successful and unsuccessful security operations:
- Successful: Total number of security operations that succeeded.
- Unsuccessful: Total number of security operations that were unsuccessful.
- Successful Security Operations: Displays a pie chart for the following security operations:
- Protect: Total number of protect operations.
- Unprotect: Total number of unprotect operations.
- Reprotect: Total number of reprotect operations.
- Unsuccessful Security Operations: Displays a pie chart for the following security operations:
- Error: Total number of operations that were unsuccessful due to an error.
- Warning: Total number of operations that were unsuccessful due to a warning.
- Exception: Total number of operations that were unsuccessful due to an exception.
- Total Security Operation Values: Displays the following information:
- Successful - Count: Total number of security operations that succeeded.
- Unsuccessful - Count: Total number of security operations that were unsuccessful.
- Successful Security Operation Values: Displays the following information:
- Protect - Count: Total number of protect operations.
- Unprotect - Count: Total number of unprotect operations.
- Reprotect - Count: Total number of reprotect operations.
- Unsuccessful Security Operation Values: Displays the following information:
- ERROR - Count: Total number of error logs.
- WARNING - Count: Total number of warning logs.
- EXCEPTION - Count: Total number of exception logs.
- Security Operation Table: Displays the number of security operations done for a data store, protector family, protector vendor, and protector version.
- Unsuccessful Security Operations: Displays a list of unsuccessful security operations with details, such as, time, data store, protector family, protector vendor, protector version, IP, hostname, level, count, description, and source.
Viewing the Protector Inventory Dashboard
The protector inventory dashboard displays, through bar graphs and tables, details of the protectors connected to the ESA. This dashboard has the Protector Family, Protector Version, Protector Count, and Protector List visualizations. It is useful for understanding information about the installed Protectors.
Only protectors that perform security operations show up on the dashboard. Updating the IP address or the host name of the Protector shows both the old and the new entry for the protector.
This dashboard cannot be deleted. The dashboard is shown in the following figure.
The dashboard has the following panels:
- Protector Family: Displays bar charts with information for the protector family based on the installation count of the protector.
- Protector Version: Displays bar charts with information of the protector version based on the installation count of the protector.
- Protector Count: Displays the count of the deployed protectors for the corresponding Protector Family, Protector Vendor, and Protector Version.
- Protector List: Displays the list of protectors installed with information, such as, Protector Vendor, Protector Family, Protector Version, Protector IP, Hostname, Core Version, PCC Version, and URP count. The URP shows the security operations performed, that is, the unprotect, reprotect, and protect operations.
Viewing the Protector Status Dashboard
The protector status dashboard displays the protector connectivity status through a pie chart and a table visualization. This information is available only for v10.0.0 protectors. It is useful for understanding information about the installed v10.0.0 protectors. This dashboard uses status logs sent by the protector, so only a protector that has performed at least one security operation shows up on this dashboard. A protector is shown in one of the following states on the dashboard:
- OK: The latest logs were sent from the protector to the ESA within the last 15 minutes.
- Warning: The latest logs were sent from the protector to the ESA between 15 and 60 minutes ago.
- Error: The latest logs were sent from the protector to the ESA more than 60 minutes ago.
Updating the IP address or the host name of the protector shows the old and new entry for the protector.
This dashboard shows the v10.0.0 protectors that are connected to the ESA. The status of earlier protectors is available by logging into the ESA and navigating to Policy Management > Nodes.
This dashboard cannot be deleted. The dashboard is shown in the following figure.
The dashboard has the following panels:
- Connectivity status pie chart: Displays a pie chart of the different states with the number of protectors that are in each state.
- Protector Status: Displays the list of protectors connectivity status with information, such as, Datastore, Node IP, Hostname, Protector Platform, Core Version, Protector Vendor, Protector Family, Protector Version, Status, and Last Seen.
Viewing the Policy Status Dashboard
The policy status dashboard displays the Policy and Trusted Application connectivity status with respect to a data store. It is useful for understanding the deployment of the data store on all protector nodes. This dashboard displays the Policy Deploy Status, Trusted Application Deploy Status, Policy Deploy Details, and Trusted Application Details visualizations. This information is available only for v10.0.0 protectors.
The policy status logs are sent to Insight. These logs are stored in the policy status index, that is, pty_insight_analytics_policy. The policy status index is analyzed using the correlation ID to identify the unique policies received by the ESA. The time duration and the correlation ID are then analyzed to determine the policy status.
The dashboard uses status logs sent by the protectors about the deployed policy, so the Policy or Trusted Application used for at least one security operation shows up on this dashboard. A Policy and Trusted Application can be shown in one of the following states on the dashboard:
- OK: The latest correlation value of the logs sent for the Policy or Trusted Application to the ESA is within the last 15 minutes.
- Warning: The latest correlation value of the logs sent for the Policy or Trusted Application to the ESA is more than 15 minutes old.
This dashboard cannot be deleted. The dashboard is shown in the following figure.
The dashboard has the following panels:
- Policy Deploy Status: Displays a pie chart of the different states with the number of policies that are in each state.
- Trusted Application Status: Displays a pie chart of the different states with the number of trusted applications that are in each state.
- Policy Deploy Details: Displays the list of policies and details, such as, Datastore Name, Node IP, Hostname, Last Seen, Policy Status, Process Name, Process Id, Platform, Core Version, PCC Version, Vendor, Family, Version, Deployment Time, and Policy Count.
- Trusted Application Details: Displays the list of policies for Trusted Applications and details, such as, Datastore Name, Node IP, Hostname, Last Seen, Policy Status, Process Name, Process Id, Platform, Core Version, PCC Version, Vendor, Family, Version, Authorize Time, and Policy Count.
Data Element Usage Dashboard
The dashboard shows the security operations performed by users according to data elements. It displays the top 10 data elements used by the top five users.
The following visualizations are displayed on the dashboard:
- Data Element Usage Intensity Of Users per Protect operation
- Data Element Usage Intensity Of Users per Unprotect operation
- Data Element Usage Intensity Of Users per Reprotect operation
The dashboard is displayed in the following figure.
Sensitive Activity Dashboard
The dashboard shows the daily count of security events by data elements for a specific time period.
The following visualization is displayed on the dashboard:
- Sensitive Activity By Date
The dashboard is displayed in the following figure.
Server Activity Dashboard
The dashboard shows the daily count of all events by servers for a specific time period. The older Audit index entries are not displayed on a new installation.
The following visualizations are displayed on the dashboard:
- Server Activity of Troubleshooting Index By Date
- Server Activity of Policy Index By Date
- Server Activity of Audit Index By Date
- Server Activity of Older Audit Index By Date
The dashboard is displayed in the following figure.
High & Critical Events Dashboard
The dashboard shows the daily count of system events of high and critical severity for a selected time period. The older Audit index entries are not displayed on a new installation.
The following visualizations are displayed on the dashboard:
- System Report - High & Critical Events of Troubleshooting Index
- System Report - High & Critical Events of Policy Index
- System Report - High & Critical Events of Older Audit Index
The dashboard is displayed in the following figure.
Unauthorized Access Dashboard
The dashboard shows the cumulative counts of unauthorized access to, and activity in, Protegrity appliances and protectors by users.
The following visualization is displayed on the dashboard:
- Unauthorized Access By Username
The dashboard is displayed in the following figure.
User Activity Dashboard
The dashboard shows the cumulative transactions performed by users over a date range.
The following visualization is displayed on the dashboard:
- User activity across Date range
The dashboard is displayed in the following figure.
4.20.8 - Working with Protegrity visualizations
The configuration of visualizations created in the earlier versions of the Audit Store Dashboards is retained after the ESA is upgraded. Protegrity provides default visualizations with version 10.1.0. If the title of an existing visualization matches the new visualization provided by Protegrity, then a duplicate entry is visible. Use the date and time stamp to identify and rename the existing visualizations.
Do not delete or modify the configuration or details of the visualizations provided by Protegrity. To customize the visualization, create a copy of the visualization and perform the customization on the copy of the visualization.
To view visualizations:
Log in to the ESA.
Navigate to Audit Store > Dashboard.
The Audit Store Dashboards appear in a new window. Click Open in a new tab if the dashboard is not displayed.
From the navigation panel, click Visualize.
Create and view visualizations from here.
Click a visualization to view it.
User Activity Across Date Range
Description: The user activity during the date range specified.
- Type: Heat Map
- Filter: Audit Index Logtypes
- Configuration:
- Index: pty_insight_*audit*
- Metrics:
- Value: Sum
- Field: cnt
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Day
- Y-axis
- Sub aggregation: Terms
- Field: protection.policy_user.keyword
- Order by: Metric:Sum of cnt
- Order: Descending
- Size: 1
- Custom label: Policy Users
- X-axis
Sensitive Activity by Date
Description: The data element usage on a daily basis.
- Type: Line
- Filter: Audit Index Logtypes
- Configuration:
- Index: pty_insight_*audit*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Day
- Custom label: Date
- Split series
- Sub aggregation: Terms
- Field: protection.dataelement.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10
- Custom label: Operation Count
- X-axis
Unauthorized Access By Username
Description: Top 10 Unauthorized Protect and Unprotect operation counts per user.
- Type: Vertical Bar
- Filter 1: Audit Index Logtypes
- Filter 2: protection.audit_code: 3
- Configuration:
- Index: pty_insight_*audit*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Terms
- Field: protection.policy_user.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10
- Custom label: Top 10 Policy Users
- Split series
- Sub aggregation: Filters
- Filter 1-Protect: level=‘Error’
- Filter 2-Unprotect: level=‘WARNING’
- X-axis
System Report - High & Critical Events of Audit Indices
Description: The chart reporting high and critical events from the Audit index.
- Type: Vertical Bar
- Filter: Severity Level : (High & Critical)
- Configuration:
- Index: pty_insight_analytics*audits_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum Interval: Auto
- Custom label: Date
- Split series
- Sub aggregation: Terms
- Field: level.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 20
- Split series
- Sub aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Server
- X-axis
System Report - High & Critical Events of Policy Logs Index
Description: The chart reporting high and critical events from the Policy index.
- Type: Vertical Bar
- Filter: Severity Level : (High & Critical)
- Configuration:
- Index: pty_insight_analytics*policy_log_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum Interval: Auto
- Custom label: Date
- Split series
- Sub aggregation: Terms
- Field: level.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 20
- Split series
- Sub aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Server
- X-axis
System Report - High & Critical Events of Troubleshooting Logs Index
Description: The chart reporting high and critical events from the Troubleshooting index.
- Type: Vertical Bar
- Filter: Severity Level : (High & Critical)
- Configuration:
- Index: pty_insight_analytics*troubleshooting_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum Interval: Auto
- Custom label: Date
- Split series
- Sub aggregation: Terms
- Field: level.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 20
- Split series
- Sub aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Server
- X-axis
Data Element Usage Intensity Of Users per Protect operation
Description: The chart shows the data element usage intensity of users per protect operation. It displays the top 10 data elements used by the top five users.
- Type: Heat Map
- Filter 1: protection.operation.keyword: Protect
- Filter 2: Audit Index Logtypes
- Configuration:
- Index: pty_insight_*audit*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Terms
- Field: protection.policy_user.keyword
- Order by: Metric: Count
- Order: Descending
- Size: 5
- Y-axis
- Sub aggregation: Terms
- Field: protection.dataelement.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10
- X-axis
Data Element Usage Intensity Of Users per Reprotect operation
Description: The chart shows the data element usage intensity of users per reprotect operation. It displays the top 10 data elements used by the top five users.
- Type: Heat Map
- Filter 1: protection.operation.keyword: Reprotect
- Filter 2: Audit Index Logtypes
- Configuration:
- Index: pty_insight_*audit*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Terms
- Field: protection.policy_user.keyword
- Order by: Metric: Count
- Order: Descending
- Size: 5
- Y-axis
- Sub aggregation: Terms
- Field: protection.dataelement.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10
- X-axis
Data Element Usage Intensity Of Users per Unprotect operation
Description: The chart shows the data element usage intensity of users per unprotect operation. It displays the top 10 data elements used by the top five users.
- Type: Heat Map
- Filter 1: protection.operation.keyword: Unprotect
- Filter 2: Audit Index Logtypes
- Configuration:
- Index: pty_insight_*audit*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Terms
- Field: protection.policy_user.keyword
- Order by: Metric: Count
- Order: Descending
- Size: 5
- Y-axis
- Sub aggregation: Terms
- Field: protection.dataelement.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10
- X-axis
Server Activity of Older Audit Indices By Date
Description: The chart shows the daily count of all events by servers for a specific time period from the older audit index.
- Type: Line
- Configuration:
- Index: pty_insight_*audit_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Day
- Split series
- Sub aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- X-axis
Server Activity of Audit Index By Date
Description: The chart shows the daily count of all events by servers for a specific time period from the audit index.
- Type: Line
- Configuration:
- Index: pty_insight_analytics*audits_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Day
- Split series
- Sub aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- X-axis
Server Activity of Policy Index By Date
Description: The chart shows the daily count of all events by servers for a specific time period from the policy index.
- Type: Line
- Configuration:
- Index: pty_insight_analytics*policy_log_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Day
- Split series
- Sub aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- X-axis
Server Activity of Troubleshooting Index By Date
Description: The chart shows the daily count of all events by servers for a specific time period from the troubleshooting index.
- Type: Line
- Configuration:
- Index: pty_insight_analytics*troubleshooting_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Day
- Split series
- Sub aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- X-axis
Connectivity status
Description: This pie chart displays the connectivity status for the protectors.
- Type: Pie
- Configuration:
- Index: pty_insight_analytics*protector_status_dashboard_*
- Metrics:
- Slice size
- Aggregation: Unique Count
- Field: origin.ip
- Custom label: Number
- Slice size
- Buckets:
- Split slices
- Aggregation: Terms
- Field: protector_status.keyword
- Order by: Metric:Number
- Order: Descending
- Size: 10000
- Split slices
Policy_Deploy_Status_Chart
Description: This pie chart displays the deployment status of the policy.
- Type: Pie
- Filter: policystatus.type.keyword: POLICY
- Configuration:
- Index: pty_insight_analytics*policy_status_dashboard_*
- Metrics:
- Slice size
- Aggregation: Unique Count
- Field: _id
- Slice size
- Buckets:
- Split slices
- Aggregation: Terms
- Field: policystatus.status.keyword
- Order by: Metric:Unique Count of _id
- Order: Descending
- Size: 50
- Custom label: Policy Status
- Split slices
Policy_Deploy_Status_Table
Description: This table displays the policy deployment status and uniquely identifying information for the data store, protector, process, platform, node, and so on.
- Type: Data Table
- Filter: policystatus.type.keyword: POLICY
- Configuration:
- Index: pty_insight_analytics*policy_status_dashboard_*
- Metrics:
- Aggregation: Count
- Custom label: Metrics Count
- Buckets:
- Split rows
- Aggregation: Terms
- Field: protector.datastore.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Data Store Name
- Split rows
- Aggregation: Terms
- Field: origin.ip
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Node IP
- Split rows
- Aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Host Name
- Split rows
- Aggregation: Terms
- Field: policystatus.status.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Status
- Split rows
- Aggregation: Terms
- Field: origin.time_utc
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Last Seen
- Split rows
- Aggregation: Terms
- Field: process.name.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Process Name
- Split rows
- Aggregation: Terms
- Field: process.id.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Process Id
- Split rows
- Aggregation: Terms
- Field: process.platform.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Platform
- Split rows
- Aggregation: Terms
- Field: process.core_version.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Core Version
- Split rows
- Aggregation: Terms
- Field: process.pcc_version.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: PCC Version
- Split rows
- Aggregation: Terms
- Field: protector.version.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Protector Version
- Split rows
- Aggregation: Terms
- Field: protector.vendor.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Vendor
- Split rows
- Aggregation: Terms
- Field: protector.family.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Family
- Split rows
- Aggregation: Terms
- Field: policystatus.deployment_or_auth_time
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 50
- Custom label: Deployment Time
- Split rows
Protector Count
Description: This table displays the number of protectors for each family, vendor, and version.
- Type: Data Table
- Configuration:
- Index: pty_insight_*audit*
- Metrics:
- Aggregation: Unique Count
- Field: origin.ip
- Custom label: Deployment Count
- Buckets:
- Split rows
- Aggregation: Terms
- Field: protector.family.keyword
- Order by: Metric: Deployment Count
- Order: Descending
- Size: 10000
- Custom label: Protector Family
- Split rows
- Aggregation: Terms
- Field: protector.vendor.keyword
- Order by: Metric: Metrics Count
- Order: Descending
- Size: 10000
- Custom label: Protector Vendor
- Split rows
- Aggregation: Terms
- Field: protector.version.keyword
- Order by: Metric: Deployment Count
- Order: Descending
- Size: 10000
- Custom label: Protector Version
- Split rows
Protector Family
Description: This chart displays the counts of protectors installed for each protector family.
- Type: Vertical Bar
- Configuration:
- Index: pty_insight_*audit*
- Metrics: Y-axis
- Aggregation: Unique Count
- Field: origin.ip
- Custom label: Number
- Buckets:
- X-axis
- Aggregation: Terms
- Field: protector.family.keyword
- Order by: Metric:Number
- Order: Descending
- Size: 10000
- Custom label:Protector Family
- X-axis
Protector List
Description: This table displays details of the protector.
- Type: Data Table
- Filter: NOT protection.audit_code: is one of 27, 28
- Configuration:
- Index: pty_insight_*audit*
- Metrics:
- Aggregation: Sum
- Field: cnt
- Custom label: URP
- Buckets:
- Split rows
- Aggregation: Terms
- Field: protector.vendor.keyword
- Order by: Metric:URP
- Order: Descending
- Size: 10000
- Custom label: Protector Vendor
- Split rows
- Aggregation: Terms
- Field: protector.family.keyword
- Order by: Metric:URP
- Order: Descending
- Size: 10000
- Custom label: Protector Family
- Split rows
- Aggregation: Terms
- Field: protector.version.keyword
- Order by: Metric:URP
- Order: Descending
- Size: 10000
- Custom label: Protector Version
- Split rows
- Aggregation: Terms
- Field: origin.ip
- Order by: Metric:URP
- Order: Descending
- Size: 10000
- Custom label: Protector IP
- Split rows
- Aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric:URP
- Order: Descending
- Size: 10000
- Custom label: Hostname
- Split rows
- Aggregation: Terms
- Field: protector.core_version.keyword
- Order by: Metric:URP
- Order: Descending
- Size: 10000
- Custom label: Core Version
- Split rows
- Aggregation: Terms
- Field: protector.pcc_version.keyword
- Order by: Metric:URP
- Order: Descending
- Size: 10000
- Custom label: Pcc Version
- Split rows
Protector Status
Description: This table displays protector status information.
- Type: Data Table
- Configuration:
- Index: pty_insight_analytics*protector_status_dashboard_*
- Metrics:
- Aggregation: Top Hit
- Field: origin.time_utc
- Aggregate with: Concatenate
- Size: 100
- Sort on: origin.time_utc
- Order: Descending
- Custom label: last seen
- Buckets:
- Split rows
- Aggregation: Terms
- Field: protector.datastore.keyword
- Order by: Alphabetically
- Order: Descending
- Size: 10000
- Custom label: Datastore
- Split rows
- Aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Alphabetically
- Order: Descending
- Size: 10000
- Custom label: Hostname
- Split rows
- Aggregation: Terms
- Field: process.platform.keyword
- Order by: Alphabetically
- Order: Descending
- Size: 10000
- Custom label: Protector Platform
- Split rows
- Aggregation: Terms
- Field: process.core_version.keyword
- Order by: Alphabetically
- Order: Descending
- Size: 10000
- Custom label: Core Version
- Split rows
- Aggregation: Terms
- Field: protector.vendor.keyword
- Order by: Alphabetically
- Order: Descending
- Size: 10000
- Custom label: Protector Vendor
- Split rows
- Aggregation: Terms
- Field: protector.family.keyword
- Order by: Alphabetically
- Order: Descending
- Size: 10000
- Custom label: Protector Family
- Split rows
- Aggregation: Terms
- Field: protector.version.keyword
- Order by: Alphabetically
- Order: Descending
- Size: 10000
- Custom label: Protector Version
- Split rows
- Aggregation: Terms
- Field: protector_status.keyword
- Order by: Alphabetically
- Order: Descending
- Size: 10000
- Custom label: Protector Status
- Split rows
Protector Version
Description: This chart displays the protector count for each protector version.
- Type: Vertical Bar
- Configuration:
- Index: pty_insight_*audit*
- Metrics: Y-axis
- Aggregation: Unique Count
- Field: origin.ip
- Custom label: Number
- Buckets:
- X-axis
- Aggregation: Terms
- Field: protector.version.keyword
- Order by: Metric:Number
- Order: Descending
- Size: 10000
- Custom label: Protector Version
- Y-axis
- Sub aggregation: Terms
- Field: protection.dataelement.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- X-axis
- Filter: protection.operation.keyword: Unprotect
Security Operation Table
Description: The table displays the number of security operations grouped by data stores, protector vendors, and protector families.
- Type: Data Table
- Filter: NOT protection.audit_code: is one of 27, 28
- Configuration:
- Index: pty_insight_*audit_*
- Metrics:
- Aggregation: Sum
- Field: cnt
- Custom label: Security Operations Count
- Buckets:
- Split rows
- Aggregation: Terms
- Field: protection.datastore.keyword
- Order by: Metric:Security Operation Count
- Order: Descending
- Size: 10000
- Custom label: Data Store Name
- Split rows
- Aggregation: Terms
- Field: protector.family.keyword
- Order by: Metric:Security Operation Count
- Order: Descending
- Size: 10000
- Custom label: Protector Family
- Split rows
- Aggregation: Terms
- Field: protector.vendor.keyword
- Order by: Metric:Security Operation Count
- Order: Descending
- Size: 10000
- Custom label: Protector Vendor
- Split rows
- Aggregation: Terms
- Field: protector.version.keyword
- Order by: Metric:Security Operation Count
- Order: Descending
- Size: 10000
- Custom label: Protector Version
- Split rows
Successful Security Operation Values
Description: The visualization displays only successful protect, unprotect, and reprotect operation counts.
- Type: Metric
- Configuration:
- Index: pty_insight_*audit*
- Metrics:
- Aggregation: Sum
- Field: cnt
- Custom label: Count
- Buckets:
- Split group
- Aggregation: Filters
- Filter 1-Protect: protection.operation: protect and level: success
- Filter 2-Unprotect: protection.operation: unprotect and level: success
- Filter 3-Reprotect: protection.operation: reprotect and level: success
- Split group
Successful Security Operations
Description: The pie chart displays only successful protect, unprotect, and reprotect operations.
- Type: Pie
- Configuration:
- Index: pty_insight_*audit*
- Metrics:
- Aggregation: Sum
- Field: cnt
- Custom label: URP
- Buckets:
- Split slices
- Aggregation: Filters
- Filter 1-Protect: protection.operation: protect and level: Success
- Filter 2-Unprotect: protection.operation: unprotect and level: Success
- Filter 3-Reprotect: protection.operation: reprotect and level: Success
- Split slices
Total Security Operation Values
Description: The visualization displays successful and unsuccessful security operation counts.
- Type: Metric
- Configuration:
- Index: pty_insight_*audit*
- Metrics:
- Aggregation: Sum
- Field: cnt
- Custom label: Count
- Buckets:
- Split group
- Aggregation: Filters
- Filter 1-Successful: logtype:protection and level: Success and not protection.audit_code: 27
- Filter 2-Unsuccessful: logtype:protection and not level: Success and not protection.audit_code: 28
- Split group
Total Security Operations
Description: The pie chart displays successful and unsuccessful security operations.
- Type: Pie
- Configuration:
- Index: pty_insight_*audit*
- Metrics:
- Aggregation: Sum
- Field: cnt
- Custom label: URP
- Buckets:
- Split slices
- Aggregation: Filters
- Filter 1-Successful: logtype:protection and level: Success and not protection.audit_code: 27
- Filter 2-Unsuccessful: logtype:protection and not level: Success and not protection.audit_code: 28
- Split slices
Trusted_App_Status_Chart
Description: The pie chart displays the trusted application deployment status.
- Type: Pie
- Filter: policystatus.type.keyword: TRUSTED_APP
- Configuration:
- Index: pty_insight_analytics*policy_status_dashboard_*
- Metrics:
- Slice size:
- Aggregation: Unique Count
- Field: _id
- Custom label: Trusted App
- Slice size:
- Buckets:
- Split slices
- Aggregation: Terms
- Field: policystatus.status.keyword
- Order by: Metric: Trusted App
- Order: Descending
- Size: 100
- Custom label: Trusted App Status
- Split slices
Trusted_App_Status_Table
Description: This table displays the trusted application deployment status and uniquely identifies the data store, protector, process, platform, node, and so on.
- Type: Data Table
- Filter: policystatus.type.keyword: TRUSTED_APP
- Configuration:
- Index: pty_insight_analytics*policy_status_dashboard_*
- Metrics:
- Aggregation: Count
- Custom label: Metrics Count
- Buckets:
- Split rows
- Aggregation: Terms
- Field: policystatus.application_name.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 5
- Custom label: Application name
- Split rows
- Aggregation: Terms
- Field: protector.datastore.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Data Store Name
- Split rows
- Aggregation: Terms
- Field: origin.ip
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Node IP
- Split rows
- Aggregation: Terms
- Field: origin.hostname.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Host Name
- Split rows
- Aggregation: Terms
- Field: policystatus.status.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Status
- Split rows
- Aggregation: Terms
- Field: origin.time_utc
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Last Seen
- Split rows
- Aggregation: Terms
- Field: process.name.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Process Name
- Split rows
- Aggregation: Terms
- Field: process.id.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Process Id
- Split rows
- Aggregation: Terms
- Field: process.platform.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Platform
- Split rows
- Aggregation: Terms
- Field: process.core_version.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Core Version
- Split rows
- Aggregation: Terms
- Field: process.pcc_version.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: PCC Version
- Split rows
- Aggregation: Terms
- Field: protector.version.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Protector Version
- Split rows
- Aggregation: Terms
- Field: protector.vendor.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Vendor
- Split rows
- Aggregation: Terms
- Field: protector.family.keyword
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Family
- Split rows
- Aggregation: Terms
- Field: policystatus.deployment_or_auth_time
- Order by: Metric: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Authorize Time
- Split rows
Unsuccessful Security Operation Values
Description: The metric displays unsuccessful security operation counts.
- Type: Metric
- Filter 1: logtype: Protection
- Filter 2: NOT level: success
- Filter 3: NOT protection.audit_code: 28
- Configuration:
- Index: pty_insight_*audit*
- Metrics:
- Aggregation: Sum
- Field: cnt
- Custom label: Count
- Buckets:
- Split group
- Aggregation: Terms
- Field: level.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10000
Unsuccessful Security Operations
Description: The pie chart displays unsuccessful security operations.
- Type: Pie
- Filter 1: logtype: protection
- Filter 2: NOT level: success
- Configuration:
- Index: pty_insight_*audit*
- Metrics:
- Slice size:
- Aggregation: Sum
- Field: cnt
- Custom label: Counts
- Buckets:
- Split slices
- Aggregation: Terms
- Field: level.keyword
- Order by: Metric: Counts
- Order: Descending
- Size: 10000
4.20.9 - Visualization templates
The configuration of visualizations created in earlier versions of the Audit Store Dashboards is retained after the ESA is upgraded. Protegrity provides default visualizations with version 10.1.0. If the title of an existing visualization matches a new visualization provided by Protegrity, then a duplicate entry is visible. Use the date and time stamp to identify and rename the existing visualizations.
Do not delete or modify the configuration or details of the new visualizations provided by Protegrity. To customize the visualization, create a copy of the visualization and perform the customization on the copy of the visualization.
Activity by data element usage count
Description: This graph displays the security operation count for each data element.
- Type: Vertical Bar
- Configuration:
- Index: pty_insight_*audit_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Terms
- Field: protection.dataelement.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10
- Custom label: Data Elements
- Split series
- Sub aggregation: Terms
- Field: protection.operation.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10
All activity by date
Description: This chart displays trends for all logs by date.
- Type: Line
- Configuration:
- Index: pty_insight_*audit_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Auto
Application protector audit report
Description: This report uses the audit logs generated by AP Python.
- Type: Data Table
- Configuration:
- Index: pty_insight_*audit_*
- Metrics: Y-axis: Count
- Buckets:
- Split rows
- Aggregation: Terms
- Field: protection.dataelement.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Split rows
- Sub aggregation: Terms
- Field: protection.policy_user.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Split rows
- Sub aggregation: Terms
- Field: origin.ip
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Split rows
- Sub aggregation: Terms
- Field: protection.operation.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Split rows
- Sub aggregation: Terms
- Field: additional_info.description.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Split rows
- Sub aggregation: Terms
- Field: origin.time_utc
- Order by: Metric:Count
- Order: Descending
- Size: 50
Policy report
Description: The policy report for the last 30 days.
Type: Data Table
Configuration:
- Index: pty_insight_*audit_*
- Metrics: Metric: Count
- Buckets:
- Split rows
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Auto
- Custom label: Date & Time
- Split rows
- Sub aggregation: Terms
- Field: client.ip.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Client IP
- Split rows
- Sub aggregation: Terms
- Field: client.username.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Client Username
- Split rows
- Sub aggregation: Terms
- Field: additional_info.description.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Additional Info
- Split rows
- Sub aggregation: Terms
- Field: level.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Custom label: Severity Level
Protection activity across datastore
Description: The protection activity across data stores and the types of protectors used.
- Type: Pie
- Configuration:
- Index: pty_insight_*audit_*
- Metrics: Slice size: Count
- Buckets:
- Split chart
- Aggregation: Terms
- Field: protection.datastore.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 5
- Split slices
- Sub aggregation: Terms
- Field: protection.operation.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 5
System daily activity
Description: This shows the system activity for the day.
- Type: Line
- Configuration:
- Index: pty_insight_*audit_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Auto
- Split series
- Sub aggregation: Terms
- Field: logtype.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10
Top 10 unauthorized access by data element
Description: The top 10 unauthorized access by data element for Protect and Unprotect operations for the last 30 days.
- Type: Horizontal Bar
- Configuration:
- Index: pty_insight_*audit_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Terms
- Field: protection.dataelement.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 10
- Custom label: Data elements
- Split series
- Sub aggregation: Filters
- Filter 1 - Protect: level='Error'
- Filter 2 - Unprotect: level='WARNING'
Total security operations per five minutes
Description: The total security operations generated, grouped into five-minute intervals.
- Type: Line
- Configuration:
- Index: pty_insight_*audit_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Date Histogram
- Field: origin.time_utc
- Minimum interval: Day
- Split series
- Sub aggregation: Terms
- Field: protection.operation.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 5
- Split chart
- Sub aggregation: Terms
- Field: protection.datastore.keyword
- Order by: Alphabetical
- Order: Descending
- Size: 5
- Custom label: operations
User activity operation count
Description: The count of total operations performed per user.
- Type: Vertical Bar
- Configuration:
- Index: pty_insight_*audit_*
- Metrics: Y-axis: Count
- Buckets:
- X-axis
- Aggregation: Terms
- Field: protection.policy_user.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 50
- Split series
- Sub aggregation: Terms
- Field: protection.operation.keyword
- Order by: Metric:Count
- Order: Descending
- Size: 5
4.20.10 - Insight Certificates
The default certificates provided are signed using the system-generated Protegrity-CA certificate. However, after installation, custom certificates can be used. Ensure that all the certificates are signed by the same CA, as shown in the following diagram.
Update the certificates in the following order:
- Audit Store Cluster certificate
- Audit Store REST certificate
- PLUG client certificate for Audit Store
- Analytics client certificate for Audit Store
The various certificates used for communication between the nodes, with their descriptions, are provided here. The passphrases for the certificates are stored in the /etc/ksa/certs directory.
Management & Web Services: These services manage certificate-based communication and authentication between the ESA and its internal components, and between the ESA and external clients (REST).
For more information about Management and Web Services certificates, refer here.
Audit Store Cluster: This is used for the Insight inter-node communication that takes place over the port 9300. These certificates are stored in the /esa/ksa/certificates/as_cluster directory on the ESA.
Server certificate: The server certificate is used for inter-node communication. The nodes identify each other using this certificate. The Audit Store Cluster and Audit Store REST server certificate must be the same.
Client certificate: The client certificate is used for applying and maintaining security configurations for the Audit Store cluster.
Audit Store REST: This is used for the Audit Store REST API communication over the port 9200. These certificates are stored in the /esa/ksa/certificates/as_rest directory on the ESA.
Server certificate: The server certificate is used for mutual authentication with the client. The Audit Store Cluster and Audit Store REST server certificate must be the same.
Client certificate: The client certificate is used by the Audit Store nodes to authenticate and communicate with the Audit Store.
Analytics Client for Audit Store: This is used for communication between Analytics and the Audit Store. These certificates are stored in the /esa/ksa/certificates/ian directory on the ESA.
Client certificate: The client certificate is used by Analytics to authenticate and communicate with the Audit Store.
PLUG Client for Audit Store: This is used for communication between Insight and the Audit Store. These certificates are stored in the /esa/ksa/certificates/plug directory on the ESA.
Client certificate: The client certificate is used by the Log Forwarder to authenticate and communicate with the Audit Store.
Using custom certificates in Insight
The certificates used for Insight are system-generated Protegrity certificates. If required, upload and use custom CA, Server, and Client certificates for Insight.
For custom certificates, ensure that the following prerequisites are met:
Ensure that all certificates share a common CA.
Ensure that the following requirements are met when creating the certificates:
The CN attribute of the Audit Store Server certificate is set to insights_cluster.
The CN attribute of the Audit Store Cluster Client certificate is set to es_security_admin.
The CN attribute of the Audit Store REST Client certificate is set to es_admin.
The CN attribute of the PLUG client certificate for the Audit Store is set to plug.
The CN attribute of the Analytics client certificate for the Audit Store is set to insight_analytics.
The Audit Store Server certificate must contain the following in the Subject Alternative Name (SAN) field:
- Required: FQDN of all the Audit Store nodes in the cluster
- Optional: IP addresses of all the Audit Store nodes in the cluster
- Optional: Hostname of all the Audit Store nodes in the cluster
For a DNS server, include the hostname and FQDN details from the DNS server in the certificate.
Ensure that the certificates are generated using a 4096-bit key.
For example, an SSL certificate with the SAN extension of servers ES1, ES2, and ES3 in a cluster will have the following entries:
- ES1
- ES2
- ES3
- ES1.protegrity.com
- ES2.protegrity.com
- ES3.protegrity.com
- IP address of ES1
- IP address of ES2
- IP address of ES3
When upgrading from an earlier version to ESA 8.1.0.0 or later with custom certificates, perform the following steps after the upgrade is complete and custom certificates are applied for Insight, that is, for td-agent, the Audit Store, and Analytics, if installed.
From the ESA Web UI, navigate to System > Services > Audit Store.
Ensure that the Audit Store Repository service is not running. If the service is running, then stop the service.
Configure the custom certificates and upload them to the Certificate Repository.
Set the custom certificates for the logging components as Active.
From the ESA Web UI, navigate to System > Services > Audit Store.
Start the Audit Store Repository service.
Open the ESA CLI.
Navigate to Tools.
Run Apply Audit Store Security Configs.
Continue the installation to create an Audit Store cluster or join an existing Audit Store cluster.
For more information about creating the Audit Store cluster, refer here.
4.21 - Maintaining Insight
Logging follows a fixed routine. The system generates logs, which are collected and then forwarded to Insight. Insight stores the logs in the Audit Store. These log records are used in various areas, such as, alerts, reports, dashboards, and so on. This section explains the logging architecture.
4.21.1 - Working with alerts
Viewing alerts
Generated alerts are displayed on the Audit Store Dashboards. View and acknowledge the alerts from the alerting dashboard by navigating to OpenSearch Plugins > Alerting > Alerts. The alerting dashboard is shown in the following figure.
Destinations for alerts are moved to channels in Notifications. For more information about working with Monitors, Alerts, and Notifications, refer to the section Monitors in https://opensearch.org/docs/latest/dashboards/.
Creating notifications
Create notification channels to receive alerts as per individual requirements. The alerts are sent to the destination specified in the channel.
Creating a custom webhook notification
A webhook notification sends the alerts generated by a monitor to a destination, such as, a web page.
Perform the following steps to configure the notification channel for generating webhook alerts:
Log in to the ESA Web UI.
Navigate to Audit Store > Dashboard.
The Audit Store Dashboards appears. If a new tab does not automatically open, click Open in a new tab.
From the menu, navigate to Management > Notifications > Channels.
Click Create channel.
Specify the following information under Name and description.
- Name: Http_webhook
- Description: For generating http webhook alerts.
Specify the following information under Configurations.
- Channel type: Custom webhook
- Method: POST
- Define endpoints by: Webhook URL
- Webhook URL: Specify the URL that receives the alert. For example https://webhook.site/9385a259-3b82-4e99-ad1e-1eb875f00734.
- Webhook headers: Specify the key value pairs for the webhook.
Click Send test message to send a test message to the configured webhook destination.
Click Create to create the channel.
The webhook is set up successfully.
Proceed to create a monitor and attach the channel created using the steps from Creating the monitor.
Creating email alerts using custom webhook
An email notification sends alerts generated by a monitor to an email address. It is also possible to configure the SMTP channel for sending an email alert. However, it is recommended to send email alerts using custom webhooks, which offer added security. The email alerts can be encrypted or non-encrypted. Accordingly, the required SMTP settings for email notifications must be configured on the ESA.
Perform the following steps to configure the notification channel for generating email alerts using custom webhooks:
Ensure that the following is configured as per the requirement:
- Configuring SMTP on the ESA, refer here.
Log in to the ESA Web UI.
Navigate to Audit Store > Dashboard.
The Audit Store Dashboards appears. If a new tab does not automatically open, click Open in a new tab.
From the menu, navigate to OpenSearch Plugins > Notifications > Channels.
Click Create channel.
Specify the following information under Name and description.
- Name: Unsecure_smtp_email
- Description: For generating unsecured SMTP email alerts.
Specify the following information under Configurations.
- Channel type: Custom webhook
- Define endpoints by: Custom attributes URL
- Type: HTTP
- Host: <ESA_IP>
- Port: 8588
- Path: rest/alerts/alerts/send_smtp_email_alerts
Under Query parameters, click Add parameter and specify the following information. If required, click Add parameter again to add the cc and bcc parameters.
- Key: to
- Value: <email_ID>
Under Webhook headers, click Add header and specify the following information.
- Key: Pty-Username
- Value:
%internal_scheduler;
Under Webhook headers, click Add header and specify the following information.
- Key: Pty-Roles
- Value: auditstore_admin
Click Create to save the channel configuration.
CAUTION: Do not click Send test message because the configuration for the channel is not complete.
The success message appears and the channel is created. The webhook for the email alerts is set up successfully.
Proceed to create a monitor and attach the channel created using the steps from Creating the monitor.
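For orientation, the channel configured above effectively posts to an endpoint of the following form with the listed headers. This is only a sketch that restates the values entered in the UI; replace <ESA_IP> and <email_ID> with values for your environment.
```
{
  "url": "http://<ESA_IP>:8588/rest/alerts/alerts/send_smtp_email_alerts?to=<email_ID>",
  "headers": {
    "Pty-Username": "%internal_scheduler;",
    "Pty-Roles": "auditstore_admin"
  }
}
```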
Perform the following steps to configure the notification channel for generating secure email alerts using custom webhooks:
Ensure that the following is configured as per the requirement:
- Configuring SMTP on the ESA, refer here.
Configure the certificates, if not already configured.
Download the CA certificate of your SMTP server.
Log in to the ESA Web UI.
Upload the SMTP CA certificate on the ESA.
Navigate to Settings > Network > Certificate Repository.
Upload your CA certificate to the ESA.
Select and activate your certificates in Management & Web Services from Settings > Network > Manage Certificates. For more information about ESA certificates, refer here.
Update the smtp_config.json configuration file.
Navigate to Settings > System > Files > smtp_config.json.
Click the Edit the product file icon.
Update the following SMTP settings and the certificate information in the file. Sample values are provided in the following code; ensure that you use values as per individual requirements. A consolidated sketch of the edited settings is shown after this list.
Set enabled to true to enable SMTP settings.
"enabled": true,
Specify the host address for the SMTP connection.
"host": "192.168.1.10",
Specify the port for the SMTP connection.
"port": "25",
Specify the email address of the sender for the SMTP connection.
"sender_email_address": "<Email_ID>",
Enable STARTTLS.
"use_start_tls": "true",
Enable server certificate validation.
"verify_server_cert": "true",
Specify the location for the CA certificate.
"ca_file_path": "/etc/ksa/certificates/mng/CA.pem",
Click Save.
Repeat the steps on the remaining nodes of the Audit Store cluster.
Navigate to Audit Store > Dashboard.
The Audit Store Dashboards appears. If a new tab does not automatically open, click Open in a new tab.
From the menu, navigate to OpenSearch Plugins > Notifications > Channels.
Click Create channel.
Specify the following information under Name and description.
- Name: Secure_smtp_email
- Description: For generating secured SMTP email alerts.
Specify the following information under Configurations.
- Channel type: Custom webhook
- Define endpoints by: Custom attributes URL
- Type: HTTP
- Host: <ESA_IP>
- Port: 8588
- Path: rest/alerts/alerts/send_secure_smtp_email_alerts
Under Query parameters, click Add parameter and specify the following information. If required, click Add parameter again to add the cc and bcc parameters.
- Key: to
- Value: <email_ID>
Under Webhook headers, click Add header and specify the following information.
- Key: Pty-Username
- Value:
%internal_scheduler;
Under Webhook headers, click Add header and specify the following information.
- Key: Pty-Roles
- Value: auditstore_admin
Click Create to save the channel configuration.
CAUTION: Do not click Send test message because the configuration for the channel is not complete.
The success message appears and the channel is created. The webhook for the email alerts is set up successfully.
Proceed to create a monitor and attach the channel created using the steps from Creating the monitor.
Creating an email notification
Perform the following steps to configure the notification channel for generating email alerts:
Log in to the ESA Web UI.
Navigate to Audit Store > Dashboard.
The Audit Store Dashboards appears. If a new tab does not automatically open, click Open in a new tab.
From the menu, navigate to Management > Notifications > Channels.
Click Create channel.
Specify the following information under Name and description.
- Name: Email_alert
- Description: For generating email alerts.
Specify the following information under Configurations.
- Channel type: Email
- Sender type: SMTP sender
- Default recipients: Specify the list of email addresses for receiving the alerts.
Click Create SMTP sender and add the following parameters.
- Sender name: Specify a descriptive name for sender.
- Email address: Specify the email address that must receive the alerts.
- Host: Specify the host name of the email server.
- Port: 25
- Encryption method: None
Click Create.
Click Send test message to send a message to the email recipients.
Click Create to create the channel.
The email alert is set up successfully.
Proceed to create a monitor and attach the channel created using the steps from Creating the monitor.
Creating the monitor
A monitor tracks the system and sends an alert when a trigger is activated. Triggers cause actions to occur when certain criteria are met. Those criteria are set when a trigger is created. For more information about monitors, actions, and triggers, refer to Alerting.
Perform the following steps to create a monitor. The configuration specified here is an example; create the configuration as per individual requirements:
Ensure that a notification is created using the steps from Creating notifications.
From the menu, navigate to OpenSearch Plugins > Alerting > Monitors.
Click Create Monitor.
Specify a name for the monitor.
For the Monitor defining method, select Extraction query editor.
For the Schedule, select 30 Minutes.
For the Index, select the required index.
Specify the following query for the monitor. Modify the query as per the requirement.
{ "size": 0, "query": { "match_all": { "boost": 1 } } }
Click Add trigger and specify the information provided here.
Specify a trigger name.
Specify a severity level.
Specify the following code for the trigger condition:
ctx.results[0].hits.total.value > 0
Click Add action.
From the Channels list, select the required channel.
Add the following code in the Message field. The default message displayed might not be formatted properly; update the message by replacing the line spaces with the \n escape code. The message value is a JSON value, so use escape characters to structure the email properly using valid JSON syntax.
```
{
"message": "Please investigate the issue.\n - Trigger: {{ctx.trigger.name}}\n - Severity: {{ctx.trigger.severity}}\n - Period start: {{ctx.periodStart}}\n - Period end: {{ctx.periodEnd}}",
"subject": "Monitor {{ctx.monitor.name}} just entered alert status"
}
```
Select the Preview message check box to view the formatted email message.
Click Send test message and verify the recipient's inbox for the message.
Click Save to update the configuration.
4.21.2 - Index lifecycle management (ILM)
In earlier versions of the ESA, the UI for Index Lifecycle Management was named Information Lifecycle Management.
The following figure shows the ILM system components and the workflow.
The ILM log repository is divided into the following parts:
- Active logs that may be required for immediate reporting. These logs are accessed regularly for high frequency reporting.
- Logs that are pushed to Short Term Archive (STA). These logs are accessed occasionally for moderate reporting frequency.
- Logs that are pushed to Long Term Archive (LTA). These logs are accessed rarely for low reporting frequency. The logs are stored where they can be backed up by the backup mechanism used by the enterprise.
The ILM feature in Protegrity Analytics is used to archive the log entries from the index. The logs generated for the ILM operations appear on this page. Only logs generated by ILM operation on the ESA v9.2.0.0 and above appear on the page after upgrading to the latest version of the ESA. For ILM logs generated on an earlier version of the ESA, navigate to Audit Store > Dashboard > Open in new tab, select Discover from the menu, select the time period, and search for the ILM logs using keywords for the additional_info.procedure field, such as, export, process_post_export_log, or scroll_index_for_export.
Use the search bar to filter logs. Click the Reset Search icon to clear the search filter and view all the entries. To search for the ILM logs using the origin time, specify the Origin Time(UTC) term within double quotes.
Move entries out of the index when not required and import them back into the index when required using the export and import feature. Only one operation can be run at a time for each node for exporting logs or importing logs. The ILM screen is shown in the following figure.
A user with the viewer role can only view data on the ILM screen. Admin rights are required to use the import, export, migrate, and delete features of the ILM.
Use the ILM for managing indexes, such as, the audit index, the policy log index, the protector status index, and the troubleshooting index. The Audit Store Dashboards has the ISM feature for managing the other indexes. Using the ISM feature might result in a loss of logs, so use the ILM feature where possible.
Exporting logs
As log entries fill the Audit Store, the size of the log index increases. This slows down log operations for searching and retrieving log entries. To speed up these operations, export log entries out of the index and store them in an external file. If required, import the entries again for audit and analysis.
Moving index entries out of the index file removes the entries from the index file and places them in a backup file. This backup file is the STA and reduces the load and processing time for the main index. The backup file is created in the /opt/protegrity/insight/archive/ directory. To store the file at a different location, mount the destination in the /opt/protegrity/insight/archive/ directory and specify that directory name when exporting. If the location is on the same drive or volume as the main index, then the size of the index is reduced. However, this is not an effective solution for saving space on the current volume. To save space, move the backup file to a remote system or into LTA.
Only one export operation can be run at a time. Empty indexes cannot be exported and must be manually deleted.
On the ESA, navigate to Audit Store > Analytics > Index Lifecycle Management.
Click Export.
The Export Data screen appears.
Complete the fields for exporting the log data from the default index.
The available fields are:
- From Index: Select the index to export data from.
- Password: Specify the password for securing the backup file.
- Confirm Password: Specify the password again for reconfirmation.
- Directory (optional): Specify the location to save the backup file. If a value is not specified, then the default directory /opt/protegrity/insight/archive/ is used.
Click Export.
Specify the root password.
Click Submit.
The log entries are extracted, then copied to the backup file, and protected using the password. After a successful export, the exported index will be deleted from Insight.
After the export is complete, move the backup file to a different location until the log entries are required. Import the entries into the index again for analysis or audit.
Importing logs
The exported log entries and secondary indexes are stored in a separate file. If these entries are required for analysis, then import them back into Insight. To be able to import, the archive file must be inside the archive directory or within a directory inside the archive directory.
Keep the passwords handy if the log entries were exported with password protection. Do not rename the exported file, otherwise the import does not work. Imported indexes are excluded and are not exported when the auto-export task is run from the scheduler.
On the ESA, navigate to Audit Store > Analytics > Index Lifecycle Management.
Click Import.
The Import Data screen appears.
Complete the fields for importing the log data to the default index or secondary index.
The available fields are:
- File Name: Select the file name of the backup file.
- Password: Specify the password for the backup file.
Click Import.
Data will be imported to an index that is named using the file name or the index name. When importing a file which was exported in version 8.0.0.0 or later, the new index name will be the date range of the entries in the index file using the format pty_insight_audit_ilm_(from_date)-(to_date). For example, pty_insight_audit_ilm_20191002_113038-20191004_083900.
Deleting indexes
Use the Delete option to delete indexes that are not required. Only delete custom indexes that are created and listed in the Source list. Deleting the index leads to a permanent loss of data in the index. If the index was not archived earlier, then the logs from the deleted index cannot be recreated or retrieved.
On the ESA, navigate to Audit Store > Analytics > Index Lifecycle Management.
Click Delete.
The Delete Index screen appears.
Select the index to delete from the Source list.
Select the Data in the selected index will be permanently deleted. This operation cannot be undone. check box.
Click Delete.
The Authentication screen appears.
Enter the root password.
Click Submit.
4.21.3 - Viewing policy reports
If a report is generated for a period when policies were not intentionally modified, then a breach might have occurred. These instances can be further analyzed to find and patch security issues. A new policy report is generated when this reporting agent is first installed on the ESA. This ensures that the initial state of all the policies on all the data stores in the ESA is recorded. A user can then use Protegrity Analytics to list all the reports that were saved over time and select the required reports.
Ensure that the required policies that must be displayed in the report are deployed. Perform the following steps to view the policies deployed.
- Log in to the ESA Web UI.
- Navigate to Policy Management > Policies & Trusted Application > Policies.
- Verify that the policies to track are deployed and have the Deploy Status as OK.
If the reporting tool is installed when a policy is being deployed, then the policy status in the report might show up as Unknown or as a warning. In this case, manually deploy the policy again so that it is displayed in the Policy Report.
Perform the following steps to view the policy report.
In the ESA, navigate to Audit Store > Analytics > Policy Report.
The Policy screen appears.
Select a time period for the reports using the From and To date picker. This is an optional step. The time period narrows the search results for the number of reports displayed for the selected data store.
Select a data store from the Deployed Datastore list.
Click Search.
The reports are filtered and listed based on the selection.
Click the link for the report to view.
For every policy deployed, the following information is displayed:
- Policy details: This section displays the name, type, status, and last modified time for the policy.
- List of Data Elements: This table displays the name, description, type, method, and last modified date and time for a data element in the policy.
- List of Data Stores: This table lists the name, description, and last modified date and time for the data store.
- List of Roles: This table lists the name, description, mode, and last modified date and time for a role.
- List of Permissions: This table lists the various roles and the permissions applicable with the role.
Print the report for comparing and analyzing the different reports that are generated when policies are deployed or undeployed. Alternatively, click the Back button to go back to the search results. Print the report using the landscape mode.
4.21.4 - Verifying signatures
The log entries having checksums are identified. These entries are then processed using the signature key, and the checksum received in the log entry from the protector is checked. If both the checksum values match, then the log entry has not been tampered with. If a mismatch is found, then the log entry might have been tampered with, or there is an issue receiving logs from a protector. These can be viewed on the Discover screen by using the following search criteria.
logtype:verification
The Signature Verification screen is used to create jobs. These jobs can be run as per a schedule using the scheduler.
For more information about scheduling signature verification jobs, refer here.
To view the list of signature verification jobs created, from the Analytics screen, navigate to Signature Verification > Jobs.
The lifecycle of an Ad-Hoc job is shown in the following figure.
The Ad-Hoc job lifecycle is described here.
A job is created.
If Run Now is selected while creating the job, then the job enters the Queued to Run state.
If Run Now is not selected while creating the job, then the job enters the Ready state. The job will only be processed and enters the Queued to Run state by clicking the Start button.
When the scheduler runs, based on the scheduler configuration, the Queued to Run jobs enter the Running state.
After the job processing completes, the job enters the Completed state. Click Continue Running to move the job to the Queued to Run state for processing any new logs generated.
If Stop is clicked while the job is running, then the job moves to the Queued to Stop state, and then moves to the Stopped state.
Click Continue Running to re-queue the job and move the job to the Queued to Run state.
A System job is created by default for verifying signatures. This job runs as per the signature verification schedule to process the audit log signatures.
The logs that fail verification are displayed in the following locations for analysis.
- In Discover using the query logtype:verification.
- On the Signature Verification > Logs tab.
When the signature verification for an audit log fails, the failure logs are logged in Insight. Alerts can be generated by using monitors that query the failed logs.
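For example, a monitor that raises an alert when verification log entries appear can start from the following query sketch, which reuses the logtype:verification criterion shown above. Narrow the query further to the failed entries as appropriate for your log mapping.
```
{
  "size": 0,
  "query": {
    "match": { "logtype": "verification" }
  }
}
```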
The lifecycle of a System job is shown in the following figure.
The System job lifecycle is described here.
- The System job is created when Analytics is initialized or the ESA is upgraded and enters the Queued to Run state.
- When the scheduler runs, then the job enters the Running state.
- After processing is complete, then the job returns to the Queued to Run state because it is a system job that needs to keep processing records as they arrive.
- While the job is running, clicking Stop moves the job to the Queued to Stop state followed by the Stopped state.
- If the job is in the Stopped state, then clicking Continue Running moves the job to the Queued to Run state.
Working with signatures
The list of signature verification jobs created is available on the Signature Verification tab. From this tab, view, create, edit, and execute the jobs. Jobs can also be stopped or continued from this tab.
To view the list of signature verification jobs, from the Analytics screen, navigate to Signature Verification > Jobs.
A user with the viewer role can only view the signature verification jobs. Admin rights are required to create or modify signature verification jobs.
After initializing Analytics during a fresh installation, ensure that the priority IP list for the default signature verification jobs is updated. The list is updated by editing the task from Analytics > Scheduler > Signature Verification Job. During the upgrade from an earlier version of the ESA, if Analytics is initialized on an ESA, then that ESA is used for the priority IP; otherwise, update the priority IP for the signature verification job after the upgrade is complete. If multiple ESAs are present in the priority list, then more ESAs are available to process the signature verification jobs.
For example, if the max jobs to run on an ESA is set to 4 and 10 jobs are queued to run on 2 ESAs, then 4 jobs are started on the first ESA, 4 jobs are started on the second ESA, and 2 jobs will be queued to run till an ESA job slot gets free to accept and run the queued job.
Use the search field to filter and find the required verification job. Click the Reset Search icon to clear the filter and view all jobs. Use the following information while using the search function:
- Type the entire word to view results containing the word.
- Use wildcard characters for searching. This is not applicable for wildcard characters used within double quotes.
- Search for a specific word by specifying the word within double quotes. This is required for words having the hyphen (-) character that the system treats as a space.
- Specify the entire word, if the word contains the underscore (_) character.
The following columns are available on this screen. Click a label to sort the items in the ascending or descending order. Sorting is available for the Name, Created, Modified, and Type columns.
Column | Description |
---|---|
Name | A unique name for the signature verification job. |
Indices | A list of indexes on which the signature verification job will run. |
Query | The signature verification query. |
Pending | The number of logs pending for signature verification. |
Processed | The current number of logs processed. |
Not-Verified | The number of logs that could not be verified. Only protector and PEP server logs for version 8.1.0.0 and higher can be verified. |
Success | The number of verifiable logs where signature verification succeeded. |
Failure | The number of verifiable logs where signature verification failed. |
Created | The creation date of the signature verification job. |
Modified | The date on which the signature verification job was modified. |
Type | The type of the signature verification job. The available options are SYSTEM where the job is created by the system and ADHOC where the custom job is created by a user. |
State | Shows the job status. |
Action | The actions that can be performed on the signature verification job. |
The root or admin rights are required to create or modify signature verification jobs.
The available statuses are:
- Queued to run: The job will run soon.
- Ready: The job will run when the scheduler initiates the job.
- Running: The job is running. Click Stop from Actions to stop the job.
- Queued to stop: The job processing will stop soon.
- Stopped: The job has been stopped. Click Continue Running from Actions to continue the job. If a signature verification scheduler job is stopped from the Scheduler > Monitor page, then the status might be updated on this page after about 5 minutes.
- Completed: The job is complete. Click Continue Running from Actions to run the job again.
The available actions are:
- Click the Edit icon to update the job.
- Click the Start icon to run the job.
- Click the Stop icon to stop the job.
- Click the Continue Running icon to resume the job.
Creating a signature verification job
Specify a query for creating the signature verification job. Additionally, select the indexes that the signature verification job needs to run on.
In Analytics, navigate to Signature Verification > Jobs.
The Signature Verification Jobs screen is displayed.
Click New Job.
The Create Job screen is displayed.
Specify a unique name for the job in the Name field.
Select the index or alias to query from the Indices list. An alias is a reference to one or more indexes available in the Indices list. The alias is generated and managed by the system and cannot be created or deleted.
Specify a description for the job in the Description field.
Select the Run Now check box to run the job after it is created.
Use the Query field to specify a JSON query. Errors in the code, if any, are marked with a red cross before the code line.
The following options are available for working with the query:
- Indent code: Click to format the code using tab spaces.
- Remove white space from code: Click to format the code by removing the white spaces and displaying the query in a continuous line.
- Undo: Click to undo the last change made.
- Redo: Click to redo the last change made.
- Clear: Click to clear the query text.
Specify the contents of the query tag for creating the JSON query. For example, specify the query
```
{
"query":{
"match" : {
"*field\_name*":"*field\_value*"
}
}
}
```
as
```
{
"match" : {
"*field\_name*":"*field\_value*"
}
}
```
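As a concrete example, a job that verifies only the log entries of a particular data store could use the following query contents. The protection.datastore.keyword field is the same field used by the dashboards in this guide; <datastore_name> is a placeholder for the data store to verify.
```
{
  "match" : {
    "protection.datastore.keyword" : "<datastore_name>"
  }
}
```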
Click Run to test the query.
View the result displayed in the Query Response field.
The following options are available to work with the output:
- Expand all fields: Click to expand all fields in the result.
- Collapse all fields: Click to collapse all fields in the result.
- Switch Editor Mode: Click to select the editor mode. The following options are available:
  - View: Switch to the tree view.
  - Preview: Switch to the preview mode.
- Copy: Click to copy the contents of the output to the clipboard.
- Search fields and values: Search for the required text in the output.
- Maximize: Click to maximize the Query Response field. Click Minimize to restore the field to its original size when maximized.
Click Save to save the job and return to the Signature Verification Jobs screen.
Editing a signature verification job
Edit an adhoc signature verification job to update the name and the description of the job.
In Analytics, navigate to Signature Verification > Jobs.
The Signature Verification Jobs screen is displayed.
Locate the job to update.
From the Actions column, click the Edit icon.
The Job screen is displayed.
Update the name and description as required.
The Indices and Query options can be edited if the job is in the Ready state, else they are available in the read-only mode.
View the JSON query in the Query field.
The following options are available for working with the query:
- Indent code: Click to format the code using tab spaces.
- Remove white space from code: Click to format the code by removing the white spaces and displaying the query in a continuous line.
- Undo: Click to undo the last change made.
- Redo: Click to redo the last change made.
Click Run to test the query, if required.
View the result displayed in the Query Response field.
The following options are available to work with the output:
- Expand all fields: Click to expand all fields in the result.
- Collapse all fields: Click to collapse all fields in the result.
- Switch Editor Mode: Click to select the editor mode. The following options are available:
  - View: Switch to the tree view.
  - Preview: Switch to the preview mode.
- Copy: Click to copy the contents of the output to the clipboard.
- Search fields and values: Search for the required text in the output.
- Maximize: Click to maximize the Query Response field. Click Minimize to restore the field to its original size when maximized.
Click Save to update the job and return to the Signature Verification Jobs screen.
4.21.5 - Using the scheduler
To view the list of tasks that are scheduled, from the Analytics screen, navigate to Scheduler > Tasks. A user with the viewer role can only view logs and history related to the Scheduler. Admin rights are required to create or modify schedules.
The following tasks are available by default:
Task | Description |
---|---|
Export Troubleshooting Indices | Scheduled task for exporting logs from the troubleshooting index. |
Export Policy Log Indices | Scheduled task for exporting logs from the policy index. |
Export Protectors Status Indices | Scheduled task for exporting logs from the protector status index. |
Delete Miscellaneous Indices | Scheduled task for deleting old versions of the miscellaneous index that are rolled over. |
Delete DSG Error Indices | Scheduled task for deleting old versions of the DSG error index that are rolled over. |
Delete DSG Usage Indices | Scheduled task for deleting old versions of the DSG usage matrix index that are rolled over. |
Delete DSG Transaction Indices | Scheduled task for deleting old versions of the DSG transaction matrix index that are rolled over. |
Signature Verification | Scheduled task for performing signature verification of log entries. |
Export Audit Indices | Scheduled task for exporting logs from the audit index. |
Rollover Index | Scheduled task for performing an index rollover. |
Ensure that the scheduled tasks are disabled on all the nodes before upgrading the ESA.
The scheduled task values on a new installation and an upgraded machine might differ. This is done to preserve any custom settings and modifications for the scheduled task. After upgrading the ESA, revisit the scheduled task parameters and modify them if required.
The list of scheduled tasks are displayed. You can create tasks, view, edit, enable or disable, and modify scheduled task properties from this screen. The following columns are available on this screen.
Column | Description |
---|---|
Name | A unique name for the scheduled task. |
Schedule | The frequency set for executing the task. |
Task Template | The task template for creating the schedule. |
Priority IPs | A list of IP addresses of the machines on which the task must be run. |
Params | The parameters for the task that must be executed. |
Enabled | Use this toggle switch to enable or disable the task from running as per the schedule. |
Action | The actions that can be performed on the scheduled task. |
The available action options are:
- Click the Edit icon to update the task.
- Click the Delete icon to delete the task.
Creating a Scheduled Task
Use the repository scheduler to create scheduled tasks. You can set a scheduled task to run after a fixed interval, every day at a particular time, a fixed day every week, or a fixed day of the month.
Complete the following steps to create a scheduled task.
From the Analytics screen, navigate to Scheduler > Tasks.
Click Add New Task.
The New Task screen appears.
Complete the fields for creating a scheduled task.
The following fields are available:
- Name: Specify a unique name for the task.
- Schedule: Specify the template and time for running the command using cron. The date and time when the command will be run appears in the area below the Schedule field. The following settings are available:
Select Template: Select a template from the list. The following templates are available:
- Custom: Specify a custom schedule for executing the task.
- Every Minute: Set the task to execute every minute.
- Every 5 Minutes: Set the task to execute after every 5 minutes.
- Every 10 Minutes: Set the task to execute after every 10 minutes.
- Every Hour: Set the task to execute every hour.
- Every 2 Hours: Set the task to execute every 2 hours.
- Every 5 Hours: Set the task to execute every 5 hours.
- Every Day: Set the task to execute every day at 12 am.
- Every Alternate Day: Set the task to execute every alternate day at 12 am.
- Every Week: Set the task to execute once every week on Sunday at 12 am.
- Every Month: Set the task to execute at 12 am on the first day of every month.
- Every Alternate Month: Set the task to execute at 12 am on the first day of every alternate month.
- Every Year: Set the task to execute at 12 am on the first of January every year.
If a template is selected and the date and time settings are modified, then the Custom template is used.
The scheduler runs only one instance of a particular task. If the task is already running, then the scheduler skips running the task again. For example, if a task is set to run every 1 minute, and the earlier instance is not complete, then the scheduler skips running the task. The scheduled task will be run again at the scheduled time after the current task is complete.
- Date and time: Specify the date and the time when the command must be executed. The following fields are available:
- **Min**: Specify the time settings in minutes for executing the command.
- **Hrs**: Specify the time settings in hours for executing the command.
- **DOM**: Specify the day of the month for executing the command.
- **Mon**: Specify the month for executing the command.
- **DOW**: Specify the day of the week for executing the command.
Some of the fields also accept the special syntax. For the special syntax, refer [here](#special-syntax).
- **Task Template**: Select a task template to view and specify the parameters for the scheduled task. The following task templates are available:
- **ILM Multi Delete**
- **ILM Multi Export**
- **Audit index Rollover**
- **Signature Verification**
- **Priority IPs**: Specify a list of the ESA IP addresses in the order of priority for execution. The task is executed on the first IP address that is specified in this list. If the IP is not available to execute the task, then the job is executed on the next prioritized IP address in the list.
- **Use Only Priority IPs**: Enable this toggle switch to only execute the task on any one node from the list of the ESA IP addresses specified in the priority field. If this toggle switch is disabled, then the task execution is first attempted on the list of IPs specified in the **Priority IPs** field. If a machine is not available, then the task is run on any machine that is available on the Audit Store cluster which might not be mentioned in the **Priority IPs** field.
- **Multi node Execution**: If disabled, then the task is run on a single machine. Enable this toggle switch to run the task on all available machines.
- **Enabled**: Use this toggle switch to enable or disable the task from running as per the schedule.
- Specify the parameters for the scheduled task and click Save. The parameters are based on the OR condition. The task is run when any one of the conditions specified is satisfied.
The scheduled task is created and enabled. The job executes on the date and time set.
ILM Multi Delete:
This task is used for automatically deleting indexes when the criteria specified is fulfilled. It displays the required fields for specifying the criteria parameters for deleting indexes. You can use a regex expression for the index pattern.
- Index Pattern: A regex pattern for specifying the indexes that must be monitored.
- Max Days: The maximum number of days to retain the index after which they must be deleted. The default is 365 (365 days).
- Max Docs: The maximum document limit for the index. If the number of docs exceeds this number, then the index is deleted. The default is 1000000000 (1 Billion).
- Max MB(size): The maximum size of the index in MB. If the size of the index exceeds this number, then the index is deleted. The default is 150000 (150 GB).
Specify one or multiple options for the parameters.
The fields for the ILM entries are shown in the following figure.
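As an illustration, a delete task that removes rolled-over audit indexes older than 180 days or larger than 50 GB would use values equivalent to the following sketch. The key names are only illustrative; in the UI, each value is entered in its own field.
```
{
  "index_pattern": "pty_insight_audit*",
  "max_days": 180,
  "max_mb_size": 50000
}
```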
ILM Multi Export:
This task is used for automatically exporting logs when the criteria specified is fulfilled. It displays the required fields for specifying the criteria parameters for exporting indexes. This task is disabled by default after it is created. To improve performance, enable the Use Only Priority IPs toggle switch and specify specific ESA machines in the Priority IPs field after this task is created. Any indexes imported into ILM are not exported using this scheduled task. The Audit index export task is enhanced to support multiple indexes and is renamed to ILM Multi Export.
This task is available for processing the audit, troubleshooting, policy log, and protector status indexes.
- Index Pattern: The pattern for the indexes that must be exported. Use regex to specify multiple indexes.
- Max Days: The number of days to store indexes. Any index beyond this age is exported. The default age specified is 365 days.
- Max Docs: The maximum docs present over all the indexes. If the number of docs exceeds this number, then the indexes are exported. The default is 1000000000 (1 Billion).
- Max MB(size): The maximum size of the index in MB. If the size of the index exceeds this number, then the index is exported. The default is 150000 (150 GB).
- File password: The password for the exported file. The password is hidden. Keep the password safe. A lost password cannot be retrieved.
- Retype File password: The password confirmation for the exported file.
- Dir Path: The directory for storing the exported index in the default path. The default path specified is /opt/protegrity/insight/archive/. You can specify and create nested folders using this parameter. Also, if the directory specified does not exist, then the directory is created in the /opt/protegrity/insight/archive/ directory.
You can specify one or multiple options for the Max Days, Max Docs, and Max MB(size) parameters.
The fields for the entries are shown in the following figure.
Audit Index Rollover:
This task performs an index rollover on the index referred to by the alias when any of the specified conditions is fulfilled, that is, when the index age, the number of documents in the index, or the index size crosses the specified value.
This task is available for processing the audit, troubleshooting, policy log, protector status, and DSG-related indexes.
- Max Age: The maximum age after which the index must be rolled over. The default is 30d, that is, 30 days. The values supported are y for years, M for months, w for weeks, d for days, h or H for hours, m for minutes, and s for seconds.
- Max Docs: The maximum number of docs that an index can contain. An index rollover is performed when this limit is reached. The default is 200000000, that is 200 million.
- Max Size: The maximum index size of the index that is allowed. An index rollover is performed when the size limit is reached. The default is 5gb. The units supported are, b for bytes, kb for kilobytes, mb for megabytes, gb for gigabytes, tb for terabytes, and pb for petabytes.
The fields for the Audit Index Rollover entries are shown in the following figure.
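For orientation, the default values above are equivalent to rollover conditions of the following shape when written in the OpenSearch rollover condition syntax. This is a reference sketch only; the task itself is configured through the fields above.
```
{
  "conditions": {
    "max_age": "30d",
    "max_docs": 200000000,
    "max_size": "5gb"
  }
}
```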
Signature Verification:
This task runs the signature verification tasks after the time interval that is set. It runs the default signature-related job and the ad-hoc jobs created on the Signature Verification tab.
- Max Job Idle Time Minutes: The maximum time to keep the jobs idle. After the jobs are idle for the time specified, the idle jobs are cleared and re-queued. The default specified is 2 minutes.
- Max Parallel Jobs Per Node: The maximum number of signature verification jobs to run in parallel on each system. If the number of jobs specified here is reached, then new scheduled jobs are not started. The default is 4 jobs. For example, if 10 jobs are queued to run on 2 ESAs, then 4 jobs are started on the first ESA, 4 jobs are started on the second ESA, and 2 jobs are queued to run until an ESA job slot gets free to accept and run the queued job.
The fields for the Manage Signature Verification Jobs entries are shown in the following figure.
Working with scheduled tasks
After creating a scheduled task, specify whether the task must be enabled or disabled for running. You can edit the task to modify the commands or the task schedule.
Complete the following steps to modify a task.
From the Analytics screen, navigate to Scheduler > Tasks.
The list of scheduled tasks appears.
Use the search field to search for a specific task from the list.
Click the Enabled toggle switch to enable or disable the task for running as per the schedule.
Alternatively, clear the Enabled toggle switch to prevent the task from running as per the schedule.
Click the Edit icon to update the task.
The Edit Task page is displayed.
Update the task as required and click Save.
The task is saved and run as per the defined schedule.
Viewing the scheduler monitor
The Monitor screen shows a list of all the scheduled tasks. It also displays whether the task is running or was executed successfully. You can also stop a running task or restart a stopped task from this screen.
Complete the following steps to monitor the tasks.
From the Analytics screen, navigate to Scheduler > Monitor.
The list of scheduled tasks appears.
The Tail option can be set from the upper-right corner of the screen. Setting the Tail option to ON updates the scheduler history list with the latest scheduled tasks that are run.
You can use the search field to search for specific tasks from the list.
Scroll to view the list of scheduled tasks executed. The following information appears:
- Name: This is the name of the task that was executed.
- IP: This is the host IP of the system that executed the task.
- Start Time: This is the time when the scheduled task started executing.
- End Time: This is the end time when the scheduled task finished executing.
- Elapsed Time: This is the execution time in seconds for the scheduled task.
- State: This is the state displayed for the task. The available states are:
  - Running: The task is running. You can click Stop from Actions to stop the task.
  - Queued to stop: The task processing will stop soon.
  - Stopped: The task has been stopped. The job might take about 20 seconds to stop the process. If an ILM Multi Export job is stopped, then the next ILM Multi Export job cannot be started within 2 minutes of stopping the previous running job. If a signature verification scheduler job is stopped from the Scheduler > Monitor page, then the status might be updated on this page after about 5 minutes.
  - Completed: The task is complete.
- Action: Click Stop to abort the running task. This button is only displayed for tasks that are running.
Using the Index State Management
Use the scheduler and the Analytics ILM for managing indexes. Index State Management can be used to manage indexes that are not supported by the scheduler or ILM. However, using Index State Management is not recommended for indexes that the scheduler or ILM can manage. Index State Management provides configurations and settings for rotating the index.
Perform the following steps to configure the index:
- Log in to the ESA Web UI.
- Navigate to Audit Store > Dashboard. The Audit Store Dashboards screen appears. If a new tab does not automatically open, click Open in a new tab.
- Update the index definition.
- From the menu, navigate to Index Management.
- Click the required index entry.
- Click Edit.
- Select JSON editor.
- Click Continue.
- Update the required configuration under rollover. A sample rollover configuration is shown after this procedure.
- Click Update.
- Update the policy definition for the index.
- From the menu, navigate to Index Management.
- Click Policy managed indexes.
- Select the check box for the index that was updated.
- Click Change Policy.
- Select the index from the Managed indices list.
- From the State filter, select Rollover.
- Select the index from the New policy list.
- Ensure that the Keep indices in their current state after the policy takes effect option is selected.
- Click Change.
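The exact JSON structure shown in the editor depends on the Audit Store version in use, so the following rollover block is only an illustrative sketch. The field names min_size and min_index_age, and their values, are assumptions that you must verify in the JSON editor before applying them.
{
  "rollover": {
    "min_size": "5gb",
    "min_index_age": "30d"
  }
}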
Special syntax
The special syntax for specifying the schedule is provided in the following table.
Character | Definition | Fields | Example |
---|---|---|---|
, | Specifies a list of values. | All | 1, 2, 5, 6. |
- | Specifies a range of values. | All | 3-5 specifies 3, 4, 5. |
/ | Specifies the values to skip. | All | */4 specifies 0, 4, 8, and so on. |
* | Specifies all values. | All | * specifies all the values in the field where it is used. |
? | Specifies no specific value. | DOM, DOW | 4 in the day-of-month field and ? in the day-of-week field specifies to run on the 4th day of the month. |
# | Specifies the nth occurrence of the day of the week in the month. | DOW | 2#4 specifies the 4th Monday of the month, where 2 is Monday and 4 is the 4th week. |
L | Specifies the last day in the week or month. | DOM, DOW | 7L specifies the last Saturday in the month. |
W | Specifies the weekday closest to the specified day. | DOM | 12W specifies to run on the 12th of the month. If the 12th is a Saturday, then the task runs on Friday the 11th. If the 12th is a Sunday, then the task runs on Monday the 13th. |
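The following illustrative schedule expressions combine the special characters from the table. They assume a five-field layout of minute, hour, day-of-month, month, and day-of-week; verify the field order and the day-of-week numbering that your scheduler uses before reusing them.
*/15 * * * *     Runs every 15 minutes.
0 2 * * ?        Runs at 02:00 every day, with no specific day of the week.
0 6 ? * 2#1      Runs at 06:00 on the first Monday of every month (2 is Monday, #1 is the first week).
0 23 L * ?       Runs at 23:00 on the last day of every month.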
4.22 - Installing Protegrity Appliances on Cloud Platforms
This section describes the procedure for installing appliances on cloud platforms, such as, Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
4.22.1 - Installing Protegrity Appliances on Amazon Web Services (AWS)
Amazon Web Services (AWS) is a cloud-based computing service. It provides several services, such as, computing power through Amazon Elastic Compute Cloud (EC2), storage through Amazon Simple Storage Service (S3), and so on.
The AWS stores Amazon Machine Images (AMIs), which are templates or virtual images containing an operating system, applications, and configuration settings.
Protegrity appliances offer flexibility and can run in the following environments:
- On-premise: The appliance is installed and runs on dedicated hardware.
- Virtualized: The appliance is installed and runs on a virtual machine.
- Cloud: The appliance is installed and runs on or as part of a Cloud-based service.
Protegrity provides AMIs that contain the appliance image, running on a customized and hardened Linux distribution.
This section describes the prerequisites and tasks for installing Protegrity appliances on AWS. In addition, it describes some best practices for using the Protegrity appliances on AWS effectively.
The Full OS Backup/Restore feature of the Protegrity appliances is not available on the AWS platform.
Verifying Prerequisites
The following prerequisites are essential to install the Protegrity appliances on AWS:
- Login URL for the AWS account
- AWS account with the authentication credentials
- Access to the My.Protegrity portal
Hardware Requirements
As the Protegrity appliances are hosted and run on AWS, the hardware requirements depend on the configurations provided by Amazon. However, the configuration can be scaled as per customer requirements and budget.
The minimum recommendation for an appliance is 8 CPU cores and 32 GB memory. On AWS, this configuration is available in the t3a.2xlarge option.
For more information about the hardware requirements of the ESA, refer to the section System Requirements.
Network Requirements
Protegrity appliances on AWS are provided with an Amazon Virtual Private Cloud (VPC) networking environment. Amazon VPC enables you to access other AWS resources, such as other instances of Protegrity appliances on AWS.
You can configure the Amazon VPC by specifying its usable IP address range. You can also create and configure subnets, network gateways, and the security settings.
For more information about the Amazon VPC, refer to the Amazon VPC documentation at: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Introduction.html.
If you are using the ESA or the DSG appliance with AWS, then ensure that the inbound and outbound ports of the appliances are configured in the Amazon Virtual Private Cloud (VPC). This ensures that they are able to interact with the other required components.
For more information about the list of inbound and outbound ports to be configured based on the appliance, refer to Open Listening Ports.
Accessing the Internet
The following points list the ways in which you can provide or limit Internet access for an appliance instance in the VPC:
- If you need to connect the appliance to the Internet, then ensure that the appliance is on the default subnet so that it uses the Internet gateway that is included in the VPC.
- If you need to allow the appliance to initiate outbound connections to the Internet while preventing inbound connections from the Internet, then ensure that you use a Network Address Translation (NAT) device.
- If you want to block the connection of the appliance to the Internet, then ensure that the appliance is on a private subnet.
Accessing a Corporate Network
If you need to connect the appliance to a corporate network, then ensure that you use an IPSec hardware VPN connection.
4.22.1.1 - Obtaining the AMI
Before creating the instance on AWS, you must obtain the image from the My.Protegrity portal. On the portal, you select the required ESA version and choose AWS as the target cloud platform. You then share the product with your cloud account. The following steps describe how to share the AMI with your cloud account.
To obtain and share the AMI:
Log in to the My.Protegrity portal with your user account.
Click Product Management > Explore Products > Data Protection.
Select the required ESA Platform Version from the drop-down.
The Product Family table will update based on the selected ESA Platform Version.
The ESA Platform Versions listed in the drop-down menu reflect all versions. These include versions previously downloaded or shipped within the organization, along with any newer versions available thereafter. Navigate to Product Management > My Product Inventory to check the list of products previously downloaded.
The images in this section use the ESA as a reference. Ensure that you select the required image.
Select the Product Family.
The description box will populate with the Product Family details.
Click View Products to advance to the product listing screen.
Callout | Element Name | Description |
---|---|---|
1 | Target Platform Details | Shows details about the target platform. |
2 | Product Name | Shows the product name. |
3 | Product Family | Shows the product family name. |
4 | OS Details | Shows the operating system name. |
5 | Version | Shows the product version. |
6 | End of Support Date | Shows the final date that Protegrity will provide support for the product. |
7 | Action | Click the View icon ( ) to open the Product Detail screen. |
8 | Export as CSV | Downloads a .csv file with the results displayed on the screen. |
9 | Search Criteria | Type text in the search field to specify the search filter criteria, or filter the entries using the following options: OS, Target Platform. |
10 | Request one here | Opens the Create Certification screen for a certification request. |
Select the AWS cloud target platform you require and click the View icon (
) from the Action column.
The Product Detail screen appears.
Callout | Element Name | Description |
---|---|---|
1 | Product Detail | Shows the following information about the product: Product name, Family name, Part number, Version, OS details, Hardware details, Target platform details, End of support date, Description. |
2 | Product Build Number | Shows the product build number. |
3 | Release Type Name | Shows the type of build, such as, release, hotfix, or patch. |
4 | Release Date | Shows the release date for the build. |
5 | Build Version | Shows the build version. |
6 | Actions | Shows the following options for download: Click the Share Product icon ( ) to share the product through the cloud. Click the Download Signature icon ( ) to download the product signature file. Click the Download Readme icon ( ) to download the Release Notes. |
7 | Download Date | Shows the date when the file was downloaded. |
8 | User | Shows the user name who downloaded the build. |
9 | Active Deployment | Select the check box to mark the software as active. Clear the check box to mark the software as inactive. This option is available only after you download a product. |
Click the Share Product icon (
) to share the desired cloud product.
If the access to the cloud products is restricted and the Customer Cloud Account details are not available, then a message appears. The message displays the information that is required and the contact information for obtaining access to cloud share.
A dialog box appears and your available cloud accounts will be displayed.
Select your required cloud account in which to share the Protegrity product.
Click Share.
A message box is displayed with the command line interface (CLI) instructions with the option to download a detailed PDF containing the cloud web interface instructions. Additionally, the instructions for sharing the cloud product are sent to your registered email address and to your notification inbox in My.Protegrity.
Click the Copy icon (
) to copy the command for sharing the cloud product and run the command in CLI. Alternatively, click Instructions to download the detailed PDF instructions for cloud sharing using the CLI or the web interface.
The cloud sharing instruction file is saved in the .pdf format. You need a reader, such as Acrobat Reader, to view the file. The cloud product will be shared with your cloud account for seven (7) days from the original share date in the My.Protegrity portal.
After the seven (7) day period, you need to request a new share of the cloud product through My.Protegrity.com.
4.22.1.2 - Loading the Protegrity Appliance from an Amazon Machine Image (AMI)
This section describes the tasks that need to be performed for loading the Protegrity appliance from an AMI, which is provided by Protegrity.
4.22.1.2.1 - Creating an Instance of the Protegrity Appliance from the AMI
Perform the following steps to create an instance of the Protegrity appliance using an AMI.
Access AWS at the following URL:
The AWS home screen appears.
Click the Sign In to the Console button.
The AWS login screen appears.
On the AWS login screen, enter the following details:
- Account Number
- User Name
- Password
Click the Sign in button.
After successful authentication, the AWS Management Console screen appears.
Click Services.
Navigate to Compute > EC2.
The EC2 Dashboard screen appears.
Contact Protegrity Support and provide your Amazon Account Number so that the required Protegrity AMIs can be made accessible to the account.
Click on AMIs under the Images section.
The AMIs that are accessible to the user account appear in the right pane.
Select the AMI of the required Protegrity appliance in the right pane.
Click the Launch instance from AMI button to launch the selected Protegrity appliance.
The Launch an instance screen appears.
Depending on the performance requirements, choose the required instance type.
For the ESA appliance, an instance with 32 GB RAM is recommended.
If you need to configure the details of the instance, then click the Next: Configure Instance Details button.
The Configure Instance Details screen appears.
Specify the following parameters on the Configure Instance Details screen:
Number of Instances: The number of instances that you want to launch at a time.
Purchasing option: The option to request Spot instances, which are unused EC2 instances. If you select this option, then you need to specify the maximum price that you are willing to pay for each instance on an hourly basis.
Network: The VPC to launch the appliance in. If you need to create a VPC, then click the Create new VPC link. For more information about creating a VPC, refer to the section Configuring VPC.
Subnet: The Subnet to be used to launch the appliance. A subnet resides in one Availability zone.
If you need to create a Subnet, then click the Create new subnet link.
For more information about creating a subnet, refer to the section Adding a Subnet to the Virtual Private Cloud (VPC).
Auto-assign Public IP: The IP address from where your instance can be accessed over the Internet. You need to select Enable from the list.
Availability Zone: A location within a region that is designed to be isolated from failures in other Availability Zones.
IAM role: This option is disabled by default.
Shutdown behaviour: The behaviour of the appliance when an OS-level shut down command is initiated.
Enable Termination Protection: The option to prevent accidental termination of the appliance instance.
Monitoring: The option to monitor, collate, and analyze the metrics for the instance of your appliance.
If you need to add additional storage to the instance of the appliance, then click the Next: Add Storage button.
The Add Storage screen appears.
You can provision additional storage for the appliance by clicking the Add New Volume button. Root is the default volume for your instance.
Alternatively, you can provision additional storage for the appliance later too.
For more information on configuring the additional storage on the instance of the appliance, refer to the section Increasing Disk Space on the Appliance.
If you need to create a key-value pair, then click the Add additional tags button.
Enter the Key and Value information and select the Resource types from the drop-down.
Select the Existing Key Pair option and choose a key from the list of available key pairs.
- Alternatively, you can select the Create a new Key Pair, to create a new key pair.
- If you proceed without a key pair, then the system will not be accessible.
If you need to configure the Security Group, then click the Next: Configure Security Group button.
The Configure Security Group screen appears.
You can assign a security group from the available list.
Alternatively, you can create a security group with rules for the required inbound and outbound ports.
The Summary section lists all the details related to the instance of the appliance. You can review the required sections before you launch your instance.
Click the Launch instance button.
The instance of the required Protegrity appliance is launched and the Launch Status screen appears.
Click the View Instances button.
The Instances screen appears listing the instance of the appliance.
If you need to use the instance of the appliance, then access the appliance CLI Manager using the IP address of the appliance.
4.22.1.2.2 - Configuring the Virtual Private Cloud (VPC)
If you need to connect two Protegrity appliances to each other, to the Internet, or to a corporate network using a private IP address, then you might need to configure the VPC.
For more information about the various inbound and outbound ports to be configured in the VPC, refer to section Open Listening Ports.
Perform the following steps to configure the VPC for the instance of the Protegrity appliance.
Ensure that you are logged in to AWS and at the AWS Management Console screen.
On the AWS Management Console, click VPC under the Networking section.
The VPC Dashboard screen appears.
Click on Your VPCs under the Virtual Private Cloud section.
The Create VPC screen appears listing all available VPCs in the right pane.
Click the Create VPC button.
The Create VPC dialog box appears.
Specify the following parameters on the Create VPC dialog box:
- Name tag: The name of the VPC.
- CIDR block: The range of the IP addresses for the VPC in x.x.x.x/y form, where x.x.x.x is the IP address and y is the netmask size between /16 and /28, for example, 10.0.0.0/16.
- Tenancy: This parameter can be set to Default or Dedicated. If the value is set to Default, then it selects the tenancy attribute specified while launching the instance of the appliance for the VPC.
Click the Yes, Create button.
The VPC is created.
4.22.1.2.3 - Adding a Subnet to the Virtual Private Cloud (VPC)
You can add Subnets to your VPC. A subnet resides in an Availability zone. When you create a subnet, you can specify the CIDR block.
Perform the following steps to create the subnet for your VPC.
Ensure that you are logged in to AWS and at the AWS Management Console screen.
On the AWS Management Console, click VPC under the Networking section.
The VPC Dashboard screen appears.
Click Subnets under the Virtual Private Cloud section.
The Create Subnet screen appears listing all available subnets in the right pane.
Click the Create Subnet button.
The Create Subnet dialog box appears.
Specify the following parameters on the Create Subnet dialog box.
- Name tag: The name for the Subnet.
- VPC: The VPC for which you want to create a subnet.
- Availability Zone: The Availability zone where the subnet resides.
- CIDR block: The range of the IP addresses for the subnet in x.x.x.x/y form, where x.x.x.x is the IP address and y is the netmask size.
Click the Yes, Create button.
The Subnet is created.
4.22.1.2.4 - Finalizing the Installation of Protegrity Appliance on the Instance
When you install the appliance, it generates multiple security identifiers such as, keys, certificates, secrets, passwords, and so on. These identifiers ensure that sensitive data is unique between two appliances in a network. When you receive a Protegrity appliance image, the identifiers are generated with certain values. If you use the security identifiers without changing their values, then security is compromised and the system might be vulnerable to attacks.
Rotating Appliance OS keys to finalize installation
Using the Rotate Appliance OS Keys tool, you can randomize the values of these security identifiers for an appliance. During the finalization process, you run the key rotation tool to secure your appliance.
If you do not complete the finalization process, then some features of the appliance may not be functional including the Web UI.
For example, if the OS keys are not rotated, then you might not be able to add appliances to a Trusted Appliances Cluster (TAC).
For information about the default passwords, refer to the section Launching the ESA instance on Amazon Web Services in the Release Notes 10.1.0 on the My.Protegrity portal.
4.22.1.2.4.1 - Logging in to the AWS Instance using the SSH Client
After installing the Protegrity Appliance on AWS, you must log in to the AWS instance using the SSH Client.
To log in to the AWS instance using the SSH Client:
Start the local SSH Client.
Perform the SSH operation on the AWS instance using the key pair and the following command. Ensure that you use the local_admin user to perform the SSH operation.
ssh -i <path of the private key pair> local_admin@<IP address of the AWS instance>
Press Enter.
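For example, assuming a private key file named esa-key.pem in the current directory and an AWS instance with the public IP address 203.0.113.25 (both hypothetical values), the command would be:
ssh -i esa-key.pem local_admin@203.0.113.25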
4.22.1.2.4.2 - Finalizing an AWS Instance
You can finalize the installation of the ESA after signing in to the CLI Manager.
Before you begin
Before finalizing the AWS instance, consider the following:
The SSH Authentication Type is set to Public key by default. Ensure that you use the Public key for accessing the CLI. You can change the authentication type from the ESA Web UI after the finalization is completed.
Ensure that the finalization process is initiated from a single session only. If you start finalization simultaneously from a different session, then the “Finalization is already in progress.” message appears. You must wait until the finalization of the instance is successfully completed.
Ensure that the appliance session is not interrupted. If the session is interrupted, then the instance becomes unstable and the finalization process is not completed on that instance.
Finalizing the AWS instance
Perform the following steps to finalize the AWS instance:
Sign in to the ESA CLI Manager of the instance created using the default local admin credentials.
The following screen appears.
Select Yes to initiate the finalization process.
If you select No, then the finalization process is not initiated.
To manually initiate the finalization process, navigate to Tools > Finalize Installation and press ENTER.
A confirmation screen to rotate the appliance OS keys appears. Select OK to rotate the appliance OS keys.
The following screen appears.
To update the user passwords, provide the credentials for the following users:
- root
- admin
- viewer
- local_admin
Select Apply.
The user passwords are updated and the appliance OS keys are rotated.
The finalization process is completed.
Default products installed on appliances
The appliance comes with some products installed by default. If you want to verify the installed products or install additional products, then navigate to Administration > Installations and Patches > Add/Remove Services.
For more information about installing products, refer here.
4.22.1.2.5 - Connecting an ESA instance for DSG deployment
If you are using an instance of the DSG appliance, then you need to provide the connectivity details of an ESA appliance instance using the CLI Manager of the DSG appliance.
For more information about connecting to an instance of the ESA appliance, refer to Setting up ESA Communication.
Before you begin
Ensure that you run the Appliance-rotation-tool on the ESA before you set up the communication of the DSG appliance with the ESA appliance.
For more information about running the Appliance-rotation-tool on the ESA, refer to section Running the Appliance-Rotation-Tool.
Deploying the Instance of the Protegrity Appliance with the Protectors
You can configure the various protectors that are a part of the Protegrity Data Security Platform with the instance of the ESA appliance running on AWS.
Depending on the Cloud-based environment which hosts the protectors, the protectors can be configured with the instance of the ESA appliance in one of the following ways:
- If the protectors and the ESA are running on the same VPC, then configure the protectors using the internal IP address. This IP address must be of the appliance within the same VPC.
- If protectors and ESA are running on different VPCs, then the VPC of the ESA instance must be configured to connect to the VPC of the protectors.
4.22.1.3 - Backing up and Restoring Data on AWS
A snapshot represents the state of an instance or disk at a point in time. You can use a snapshot of an instance or a disk to back up or restore information in case of failures.
Creating a Snapshot of a Volume on AWS
In AWS, you can create a snapshot of a volume.
To create a snapshot on AWS:
On the EC2 Dashboard screen, click Volumes under the Elastic Block Store section.
The screen with all the volumes appears.
Right-click on the required volume and select Actions > Create Snapshot.
The Create Snapshot screen for the selected volume appears.
Enter the required description for the snapshot in the Description text box.
Select Add tag to add a tag.
Enter the tag in the Key and Value text boxes.
Click Add Tag to add additional tags.
Click Create Snapshot.
A message Create Snapshot Request Succeeded appears, along with the snapshot ID.
Ensure that you note the snapshot ID.
Ensure that the status of the snapshot is completed.
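If the AWS CLI is configured, a snapshot can also be created from the command line. The following command is a sketch with a hypothetical volume ID and description:
aws ec2 create-snapshot --volume-id vol-0abc1234def567890 --description "ESA backup snapshot"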
Restoring a Snapshot on AWS
On AWS, you can restore data by creating a volume of a snapshot. You then attach the volume to an EC2 instance.
Before you begin
Ensure that the status of the instance is Stopped.
Ensure that you detach an existing volume on the instance.
Restoring a Snapshot
To restore a snapshot on AWS:
On the EC2 Dashboard screen, click Snapshots under the Elastic Block Store section.
The screen with all the snapshots appears.
Right-click on the required snapshot and select Create Volume.
The Create Volume screen form appears.
Select the type of volume from the Volume Type drop-down list.
Enter the size of the volume in the Size (GiB) textbox.
Select the availability zone from the Availability Zone* drop-down list.
Click Add Tag to add tags.
Click Create Volume.
A Create Volume Request Succeeded message appears, along with the volume ID. The volume with the snapshot is created.
Ensure that you note the volume ID.
Under the EBS section, click Volume.
The screen displaying all the volumes appears.
Right-click on the volume that is created.
The pop-up menu appears.
Select Attach Volume.
The Attach Volume dialog box appears.
Enter the Instance ID or name of the instance in the Instance text box.
Enter /dev/xvda in the Device text box.
Click Attach to add the volume to an instance.
The snapshot is added to the EC2 instance as a volume.
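If you prefer the AWS CLI, the restore can be performed with commands similar to the following. The snapshot ID, Availability Zone, volume ID, and instance ID shown are hypothetical values:
aws ec2 create-volume --snapshot-id snap-0abc1234def567890 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0abc1234def567890 --instance-id i-0abc1234def567890 --device /dev/xvda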
4.22.1.4 - Increasing Disk Space on the Appliance
After an instance of the appliance is created, you can increase the disk space on the appliance. Ensure that the instance is powered off before performing the following steps.
To increase disk space for the Appliance on AWS:
On the EC2 Dashboard screen, click Volumes under the Elastic Block Store section.
The screen with all the volumes appears.
Click the Create Volume button.
The Create Volume dialog box appears.
Enter the required size of the additional disk space in the Size (GiB) text box.
Enter the snapshot ID of the instance, for which the additional disk space is required in the Snapshot ID text box.
Click the Create button.
The required additional disk space is created as a volume.
Right-click on the additional disk, which is created.
The pop-up menu appears.
Select Attach Volume.
The Attach Volume dialog box appears.
Enter the Instance ID or name tag of the appliance to add the disk space in the Instance text box.
Click the Attach button to add the disk space to the required appliance instance.
The disk space is added to the appliance instance.
After the disk space on the appliance instance is added, navigate to Instances under the Instances section.
Right-click on the appliance instance in which the disk space was added.
Select Instance State > Start.
The appliance instance is started.
After the appliance instance is started, configure the additional storage on the appliance using the CLI Manager on the appliance.
For more information on configuring the additional storage on the appliance, refer to section Installation of Additional Hard Disks.
4.22.1.5 - Best Practices for Using Protegrity Appliances on AWS
The following are recommended best practices for using Protegrity appliances on AWS.
Force SSH Keys
Configure the appliance to enable SSH keys and disable SSH passwords for all users.
If you need to create or join a Trusted Appliance cluster, then ensure that SSH passwords are enabled when you are creating or joining the cluster, and then disabled.
For more information about the SSH keys, refer to section Working with Secure Shell (SSH) Keys.
Install Upgrades
After you run the Appliance-rotation tool, it is recommended that you install all the latest Protegrity updates.
Configure your VPC or Security Group
This ensures successful communication between the appliance and the other entities connected to it.
For more information about the list of inbound and outbound ports for the appliances, refer to section Open Listening Ports.
4.22.1.6 - Running the Appliance-Rotation-Tool
The Appliance-rotation-tool modifies the required keys, certificates, credentials, and passwords for the appliance. This helps to differentiate the sensitive data on the appliance from other similar instances.
Before you begin
If you are configuring an ESA appliance instance, then you must run the Appliance-rotation-tool after creating the instance of the appliance.
Ensure that you do not run the appliance rotation tool when the appliance OS keys are in use.
For example, you must not run the appliance rotation tool when a cluster is enabled, two-factor authentication is enabled, external users are enabled, and so on.
How to run the Appliance-Rotation-Tool
Perform the following steps to rotate the required keys, certificates, credentials, and passwords for the appliance.
To use the Appliance-rotation-tool:
On the ESA, navigate to CLI Manager > Tools > Rotate Appliance OS Keys.
The root password dialog box appears.
Enter the root password.
Press ENTER.
The Appliance OS Key Rotation dialog box appears.
Select Yes.
Press ENTER.
The administrative credentials dialog box appears.
Enter the Account name and Account password on the appliance.
Select OK.
To update the user passwords, provide the credentials for the users on the User’s Passwords screen. If default users such as root, admin, viewer, and local_admin have been manually deleted, they will not be listed on the User’s Passwords screen. Otherwise, to update the passwords, provide credentials for the following default users:
- root
- admin
- viewer
- local_admin
Select Apply. The user passwords are updated.
The process to rotate the required keys, certificates, credentials, and other identifiers on the appliance starts.
4.22.1.7 - Working with Cloud-based Applications
Cloud-based applications are products or services for storing data on the cloud. In cloud-based applications, the computing and processing of data is handled on the cloud. Local applications interact with the cloud services for various purposes, such as, data storage, data computing, and so on. Cloud-based applications are allocated resources dynamically and aim at reducing infrastructure cost, improving network performance, easing information access, and scaling of resources.
AWS offers a variety of cloud-based products for computing, storage, analytics, networking, and management. Using the Cloud Utility product, the Protegrity appliances leverage services such as CloudWatch and the AWS CLI.
Prerequisites
The following prerequisites are essential for AWS Cloud Utility.
The Cloud Utility AWS v2.3.0 product must be installed.
From v8.0.0.0, if an instance is created on AWS using the cloud image, then Cloud Utility AWS is preinstalled on the instance.
For more information about installing the Cloud Utility AWS v2.3.0, refer to the Protegrity Installation Guide.
If you are launching a Protegrity appliance on an AWS EC2 instance, then you must have a valid IAM Role.
For more information about IAM Role, refer to Configuring Access for AWS Resources.
If you are launching a Protegrity appliance on a non-AWS instance, such as on-premise, Microsoft Azure, or GCP instance, then the AWS Configure option must be set up.
For more information about configuring AWS credentials, refer to AWS Configure.
The user accessing the Cloud Utility AWS Tools must have AWS Admin permission assigned to the role.
For more information about AWS admin, refer to Managing Roles.
4.22.1.7.1 - Configuring Access for AWS Resources
A server might contain resources that only the authorized users can access. For accessing a protected resource, you must provide valid credentials to utilize the services of the resource. Similarly, on the AWS platform, only privileged users can access and utilize the AWS cloud applications. The Identity and Access Management (IAM) is the mechanism for securing access to your resources on AWS.
The two types of IAM mechanisms are as follows:
IAM user is an entity that represents users on AWS. To access the resources or services on AWS, the IAM user must have the privileges to access these resources. By default, you have to set up all required permissions for a user. Each IAM user can have specific defined policies. An IAM user account is beneficial as it can have special permissions or privileges associated with a user.
For more information about creating an IAM user, refer to the following link:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html
An IAM user can access the AWS services on the required Protegrity appliance instances with the access keys. The access keys are the authentication mechanisms that authorize AWS CLI requests. The access keys can be generated when you create the IAM user account. Similar to the username and password, the access keys consist of access key ID and the secret access key. The access keys validate a user to access the required AWS services.
For more information about setting up an IAM user to use AWS Configure, refer to AWS Configure.
IAM role is the role for your AWS account and has specific permissions associated with it. An IAM role has defined permissions and privileges which can be given to multiple IAM users. For users that need the same permissions to access the AWS services, associate an IAM role with the user accounts.
If you want a Protegrity appliance instance to utilize the AWS resources, the instance must be provided with the required privileges. This is achieved by attaching an IAM role to the instance. The IAM role must have the required privileges to access the AWS resources.
For more information about creating an IAM role, refer to the following link:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create.html
For more information about IAM, refer to the following link.
https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html
AWS Configure
The AWS Configure operation is a process for configuring an IAM user to access the AWS services on the Protegrity appliance instance. These AWS services include CloudWatch, CloudTrail, S3 bucket, and so on.
To utilize AWS resources and services, you must set up AWS Configure if you have an IAM User.
To set up AWS Configure on a non-AWS instance, such as on-premise, Microsoft Azure, or GCP instance, you must have the following:
A valid IAM User
Secret key associated with the IAM User
Access key ID for the IAM User
The AWS Region to whose servers you want to send the default service requests
For more information about the default region name, refer to the following link.
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
If the access keys or the IAM role do not have the required privileges, then the user cannot utilize the corresponding AWS resources.
For AWS Configure, only one IAM user can be configured for an appliance at a time.
Configuring AWS Services
Below are instructions for configuring AWS services.
Before you begin
It is recommended to configure the AWS services from the Tools > Cloud Utility AWS Tools > AWS Configure menu.
On the Appliance Web UI, ensure that the AWS Admin privilege is assigned to the user role for configuring AWS on a non-AWS instance.
How to configure AWS Services
To configure the AWS services:
Login to the Appliance CLI Manager.
To configure the AWS services, navigate to Tools > Cloud Utility AWS Tools > AWS Configure.
Enter the root credentials.
The following screen appears.
Select Edit and press ENTER.
Enter the AWS credentials associated with your IAM user in the AWS Access Key ID and AWS Secret Access Key text boxes.
Enter the region name in the Default Region Name text box. This field is case sensitive. Ensure that the values are entered in lowercase.
For more information about the default region name, refer to the following link:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Enter the output format in the Default Output Format text box. This field is case sensitive. Ensure that the values are entered in lowercase.
If the field is left empty, the Default Output Format is json. The supported Default Output Formats are json, table, and text.
For more information about the default output format, refer to the following link:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Select OK and press ENTER.
A validation screen appears.
Select OK and press ENTER.
A confirmation screen appears.
Select OK.
The configurations are applied successfully.
4.22.1.7.2 - Working with CloudWatch Console
AWS CloudWatch tool is used for monitoring applications. Using CloudWatch, you can monitor and store the metrics and logs for analyzing the resources and applications.
CloudWatch allows you to collect metrics and track them in real-time. Using this service you can configure alarms for the metrics. CloudWatch provides visibility into the various aspects of your services including the operational health of your device, performance of the applications, and resource utilization.
For more information about AWS CloudWatch, refer to the following link:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html
CloudWatch logs help you to monitor a cumulative list of all the logs from different applications on a single dashboard. This provides a central point to view and search the logs which are displayed in the order of the time when they were generated. Using CloudWatch you can store and access your log files from various sources. CloudWatch allows you to query your log data, monitor the logs which are originating from the instances and events, and retain and archive the logs.
For more information about CloudWatch logs, refer to the following link:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
Prerequisites
For using the AWS CloudWatch console, ensure that the IAM role or IAM user that you want to integrate with the appliance has the CloudWatchAgentServerPolicy policy assigned to it.
For more information about using the policies with the IAM Role or IAM User, refer to the following link:
4.22.1.7.2.1 - Integrating CloudWatch with Protegrity Appliance
You must enable CloudWatch integration to use the AWS CloudWatch services. This helps you to send the metrics and the logs from the appliances to the AWS CloudWatch Console.
The following section describes the steps to enable CloudWatch integration on Protegrity appliances.
To enable AWS CloudWatch integration:
Login to the ESA CLI Manager.
To enable AWS CloudWatch integration, navigate to Tools > Cloud Utility AWS Tools > CloudWatch Integration.
Enter the root credentials.
The following screen appears.
The warning message is displayed because of the costs charged by AWS for the CloudWatch service.
For more information about the cost of integrating CloudWatch, refer to the following link:
Select Yes and press ENTER.
A screen listing the logs that are being sent to the CloudWatch Console appears.
Select Yes.
Wait until the following screen appears.
Select OK.
CloudWatch integration is enabled successfully. The CloudWatch service is enabled on the Web UI and CLI.
4.22.1.7.2.2 - Configuring Custom Logs on AWS CloudWatch Console
You can send logs from an appliance which is on-premise or launched on any of the cloud platforms, such as, AWS, GCP, or Azure. The logs are sent from the appliances and stored on the AWS CloudWatch Console. By default, the following logs are sent from the appliances:
- Syslogs
- Current events logs
- Apache2 error logs
- Service dispatcher error logs
- Web services error logs
You can send custom log files to the AWS CloudWatch Console. To send custom log files to the AWS CloudWatch Console, you must create a file in the /opt/aws/pty/cloudwatch/config.d/ directory. You can add or edit the log streams in this file to generate the custom logs with the following parameters.
You must not edit the default configuration file, appliance.conf, in the /opt/aws/pty/cloudwatch/config.d/ directory.
The following table explains the parameters that you must use to configure the log streams.
Parameter | Description | Example |
---|---|---|
file_path | Location where the file or log is stored | “/var/log/appliance.log” |
log_stream_name | Name of the log that will appear on the AWS CloudWatch Console | “Appliance_Logs” |
log_group_name | Name under which the logs are displayed on the CloudWatch Console | On the CloudWatch Console, the logs appear under the hostname of the ESA instance. Ensure that you do not modify the parameter log_group_name or its value {hostname}. |
Sample configuration files
Do not edit the appliance.conf configuration file in the /opt/aws/pty/cloudwatch/config.d/ directory.
If you want to configure a new log stream, then you must use the following syntax:
[
{
"file_path": "<path_of_the_first_log_file>",
"log_stream_name": "<Name_of_the_log_stream_to_be_displayed_in_CloudWatch>",
"log_group_name": "{hostname}"
},
.
.
.
{
"file_path": "<path_of_the_nth_log_file>",
"log_stream_name": "<Name_of_the_log_stream_to_be_displayed_in_CloudWatch>",
"log_group_name": "{hostname}"
}
]
The following snippet displays the sample configuration file, configuration_filename.conf, that sends appliance logs to the AWS CloudWatch Console.
[
{
"file_path": "/var/log/syslog",
"log_stream_name": "Syslog",
"log_group_name": "{hostname}"
},
{
"file_path": "/var/log/user.log",
"log_stream_name": "Current_Event_Logs",
"log_group_name": "{hostname}"
}
]
If you configure custom log files to send to CloudWatch Console, then you must reload the CloudWatch integration or restart the CloudWatch service. Also, ensure that the CloudWatch integration is enabled and running.
For more information about Reloading AWS CloudWatch Integration, refer to Reloading AWS CloudWatch Integration.
4.22.1.7.2.3 - Toggling the CloudWatch Service
In the Protegrity appliances, the CloudWatch service enables the transmission of logs from the appliances to the AWS CloudWatch Console. Enabling the AWS CloudWatch integration also enables this service, which you can use to start or stop sending logs to the AWS CloudWatch Console. The following sections describe how to toggle the CloudWatch service to pause or resume log transmission. The toggling can be performed from either the CLI Manager or the Web UI.
Before you begin
Ensure that the valid AWS credentials are configured before toggling the CloudWatch service.
For more information about configuring AWS credentials, refer to AWS Configure.
Starting or Stopping the CloudWatch Service from the Web UI
If you want to temporarily stop the transmission of logs from the appliance to the AWS Console, then you can stop the CloudWatch Service.
To start or stop the AWS CloudWatch service from the Web UI:
Login to the Appliance Web UI.
Navigate to System > Services.
Locate the CloudWatch service to start or stop. Select the appropriate icon, either Start or Stop, to perform the desired action.
- Select Stop to stop the transmission of logs and metrics.
- Select Start or Restart to start the CloudWatch service.
Starting or Stopping the CloudWatch Service from the CLI Manager
If you want to temporarily stop the transmission of logs from the appliance to the AWS Console, then you can stop the CloudWatch Service.
To start or stop the AWS CloudWatch service from the CLI Manager:
Login to the appliance CLI Manager.
Navigate to Administration > Services.
Locate the CloudWatch service to start or stop. Select the appropriate icon, either Start or Stop, to perform the desired action.
- Select Stop to stop the transmission of logs and metrics.
- Select Start to start the CloudWatch service.
4.22.1.7.2.4 - Reloading the AWS CloudWatch Integration
If you want to update the existing configurations in the /opt/aws/pty/cloudwatch/config.d/ directory, then you must reload the CloudWatch integration.
To reload the AWS CloudWatch integration:
Login to the ESA CLI Manager.
To reload CloudWatch, navigate to Tools > Cloud Utility AWS Tools > CloudWatch Integration.
Enter the root credentials.
The following screen appears.
Select Reload and press ENTER.
The logs are updated and sent to the AWS CloudWatch Console.
4.22.1.7.2.5 - Viewing Logs on AWS CloudWatch Console
After performing the required changes on the CLI Manager, the logs are visible on the CloudWatch Console.
To view the logs on the CloudWatch console:
Login to the AWS Web UI.
From the Services tab, navigate to Management & Governance > CloudWatch.
To view the logs, from the left pane navigate to Logs > Log groups.
Select the required log group. The name of the log group is the same as the hostname of the appliance.
To view the logs, select the required log stream from the following screen.
4.22.1.7.2.6 - Working with AWS CloudWatch Metrics
The metrics for the following entities in the appliances are sent to the AWS CloudWatch Console.
Metrics | Description |
---|---|
Memory Use Percent | Percentage of the memory that is consumed by the appliance. |
Disk I/O | Bytes and packets read and written by the appliance. You can view the following parameters: write_bytes, read_bytes, writes, reads. |
Network | Bytes and packets sent and received by the appliance. You can view the following parameters: bytes_sent, bytes_received, packets_sent, packets_received. |
Disk Used Percent | Percentage of the disk space that is consumed by the appliance. |
CPU Idle | Percentage of time for which the CPU is idle. |
Swap Memory Use Percent | Percentage of the swap memory that is consumed by the appliance. |
Unlike logs, you cannot customize the metrics that you want to send to CloudWatch. If you want to customize these metrics, then contact Protegrity Support.
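If the AWS CLI is configured, you can also list the metrics reported under the CWAgent custom namespace from the command line, for example:
aws cloudwatch list-metrics --namespace CWAgent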
4.22.1.7.2.7 - Viewing Metrics on AWS CloudWatch Console
To view the metrics on the CloudWatch console:
Login to the AWS Web UI.
From the Services tab, navigate to Management & Governance > CloudWatch.
To view the metrics, from the left pane navigate to Metrics > All metrics.
Navigate to AWS namespace.
The following screen appears.
Select EC2.
Select the required metrics from the following screen.
To view metrics of the Protegrity appliances that are on-premise or other cloud platforms, such as Azure or GCP, navigate to Custom namespace > CWAgent.
The configured metrics appear.
4.22.1.7.2.8 - Disabling AWS CloudWatch Integration
If you want to stop the logs and metrics from being sent to the AWS CloudWatch Console and remove the CloudWatch service from the appliance, disable the AWS CloudWatch integration from the appliance. As a result, the CloudWatch service is removed from the Services screen of the Web UI and the CLI Manager.
To disable the AWS CloudWatch integration:
Login to the ESA CLI Manager.
To disable CloudWatch, navigate to Tools > Cloud Utility AWS Tools > CloudWatch Integration.
The following screen appears.
Select Disable and press ENTER.
A warning screen with the message Are you sure you want to disable CloudWatch integration? appears. Select Yes and press Enter.
The CloudWatch integration disabled successfully message appears. Click Ok.
The AWS CloudWatch integration is disabled. The logs from the appliances are no longer sent to the AWS CloudWatch Console.
After disabling CloudWatch integration, you must delete the Log groups and Log streams from the AWS CloudWatch console.
4.22.1.7.3 - Working with the AWS Cloud Utility
You can work with the AWS Cloud Utility in various ways. This section contains usage examples for using the AWS Cloud Utility. However, the scope of working with Cloud Utility is not limited to the scenarios covered in this section.
The following scenarios are explained in this section:
- Encrypting and storing the backed up files on the AWS S3 bucket.
- Setting metrics-based alarms using the AWS Management Console.
4.22.1.7.3.1 - Storing Backup Files on the AWS S3 Bucket
If you want to store backed up files on the AWS S3 bucket, you can use the Cloud Utility feature. You can transfer these files from the Protegrity appliance to the AWS S3 bucket.
The following tasks are explained in this section:
- Encrypting the backed up .tgz files using the AWS Key Management Services (KMS).
- Storing the encrypted files in the AWS S3 bucket.
- Retrieving the encrypted files stored in the S3 bucket.
- Decrypting the retrieved files using the AWS KMS.
- Importing the decrypted files on the Protegrity appliance.
About the AWS S3 bucket and usage
The AWS S3 bucket is a cloud resource that helps you securely store your data. It enables you to keep data backups at multiple locations, such as, on-premise and on cloud. For easy accessibility, you can back up and store data from one machine and import the same data to another machine using the AWS S3 bucket. It also provides an additional layer of security by helping you encrypt the data before uploading it to the cloud.
Using the OS Console option in the CLI Manager, you can store your backed up files in the AWS S3 bucket. You can encrypt your files using the AWS Key Management Services (KMS) before storing them in the AWS S3 bucket.
The following figure shows the flow for storing your data on the AWS S3 bucket.
Prerequisites
Ensure that you complete the following prerequisites for uploading the backed up files to the S3 bucket:
The Configured AWS user or the attached IAM role must have access to the S3 bucket.
For more information about configuring access to the AWS resources, refer to Configuring access for AWS resources.
The Configured AWS user or the attached IAM role must have AWSKeyManagementServicePowerUser permission to use the KMS.
For more information about configuring AWS resources, refer to Configuring access for AWS resources.
For more information about KMS, refer to the following link.
https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies.html
The backed up .tgz file should be present in the /products/exports folder.
For more information about exporting the files, refer to Export Data Configuration to Local File.
You must have the KMS keys present in the AWS Key Management Service.
For more information about KMS keys, refer to the following link:
https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html.
Encrypting and Storing Files
To encrypt and upload the exported file from /products/exports to the S3 bucket:
Login to the Appliance CLI manager.
To encrypt and upload files, navigate to Administration > OS Console.
Enter the root credentials.
Change the directory to /products/exports using the following command.
cd /products/exports
Encrypt the required file using the aws-encryption-cli command.
aws-encryption-cli --encrypt --input <file_to_encrypt> --master-keys key=<Key_ID> region=<region-name> --output <encrypted_output_filename> --metadata-output <metadata_filename> --encryption-context purpose=<purpose_for_performing_encryption>
Parameter | Description |
---|---|
file_to_encrypt | The backed up file that needs to be encrypted before uploading to the S3 bucket. |
Key_ID | The key ID of the KMS key that needs to be used for encrypting the file. |
region-name | The region where the KMS key is stored. |
encrypted_output_filename | The name of the file after encryption. |
metadata_filename | The name of the file where the metadata needs to be stored. |
purpose_for_performing_encryption | The purpose of encrypting the file. |
For more information about encrypting data using the KMS, refer to the following link.
https://docs.aws.amazon.com/cli/latest/reference/kms/encrypt.html
The file is encrypted.
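For example, assuming a backed up file named export.tgz, a KMS key ID of 1234abcd-12ab-34cd-56ef-1234567890ab, and the us-east-1 region (all hypothetical values), the command would be similar to the following:
aws-encryption-cli --encrypt --input export.tgz --master-keys key=1234abcd-12ab-34cd-56ef-1234567890ab region=us-east-1 --output export.tgz.enc --metadata-output export_metadata.json --encryption-context purpose=backup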
Upload the encrypted file to the S3 bucket using the following command.
aws s3 cp <encrypted_output_filename> <s3Uri>
The file is uploaded in the S3 bucket.
For example, if you have an encrypted file test.enc and you want to upload it to your personal bucket, mybucket, in s3 bucket, then use the following command:
aws s3 cp test.enc s3://mybucket/test.enc
For more information about the S3 bucket, refer to the following link:
Decrypting and Importing Files
To decrypt and import the files from the S3 bucket:
Login to the Appliance CLI manager.
To decrypt and import the file, navigate to Administration > OS Console.
Enter the root credentials.
Change the directory to /products/exports using the following command:
cd /products/exports
Download the encrypted file using the following command:
aws s3 cp <s3Uri> <local_file_name(path)>
For example, if you want to download the file test.txt to your local machine as test2.txt, then use the following command:
aws s3 cp s3://mybucket/test.txt test2.txt
Decrypt the downloaded file using the following command:
aws-encryption-cli --decrypt --input <file_to_decrypt> --output <decrypted_file_name> --metadata-output <metadata_filename>
Parameter | Description |
---|---|
file_to_decrypt | The backed up file that needs to be decrypted after downloading from the S3 bucket. |
decrypted_file_name | The name with which the file is saved after decryption. |
metadata_filename | The name of the file where the metadata needs to be stored. Ensure that the metadata_filename is the same filename that was used during encryption of the file. |
The file is decrypted.
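For example, assuming the encrypted file export.tgz.enc and the metadata file export_metadata.json that was used during encryption (both hypothetical names), the command would be similar to the following:
aws-encryption-cli --decrypt --input export.tgz.enc --output export.tgz --metadata-output export_metadata.json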
For more information about decrypting the downloaded file, refer to the following link.
Import the decrypted file to the local machine.
For more information about importing the decrypted file, refer to Import Data/Configurations from a File.
4.22.1.7.3.2 - Set Metrics Based Alarms Using the AWS Management Console
If you want to set alarms and alerts for your Protegrity appliances, you can send logs and metrics to the AWS Console. The AWS Management Console enables you to set alerts and configure SNS events as per your requirements.
You can create alerts based on the following metrics:
- Memory Use Percent
- Disk I/O
- Network
- Disk Used Percent
- CPU Idle
- Swap Memory Use Percent
Prerequisite
Ensure that the CloudWatch integration is enabled.
For more information about enabling the CloudWatch integration, refer to Enabling AWS CloudWatch Integration.
Creating an SNS Event
The following steps explain how to create an SNS event for an email-based notification.
To create an SNS event:
Login to the Amazon Management Console.
To create an SNS event, navigate to Services > Application Integration > Simple Notification Services > Topics.
Select Create topic.
The following screen appears.
Enter the required details.
Click Create topic.
The following screen appears.
Ensure that you remember the Amazon Resource Name (ARN) associated with your topic.
For more information about the ARN, refer to the following link.
https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
The topic is created.
From the left pane, click Subscriptions.
Click Create subscription.
Enter the Topic ARN of the topic created in the above step.
From the Protocol field, select Email.
In the Endpoint, enter the required email address where you want to receive the alerts.
Enter the optional details.
Click Create subscription.
An SNS event is created and a confirmation email is sent to the subscribed email address.
To confirm the email subscription, click the Confirm Subscription link from the email received on the registered email address.
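If you prefer the AWS CLI, an equivalent topic and email subscription can be created with commands similar to the following. The topic name and email address are hypothetical values, and the topic ARN is returned by the create-topic command:
aws sns create-topic --name esa-alerts
aws sns subscribe --topic-arn <topic_ARN> --protocol email --notification-endpoint admin@example.com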
Creating Alarms
The following steps explain the procedure to set an alarm for CPU usage.
To create an alarm:
Login to the Amazon Management Console.
To create an alarm, navigate to Services > Management & Governance > CloudWatch.
From the left pane, select Alarms > In alarm.
Select Create alarm.
Click Select metric.
The Select metric window appears.
From the Custom Namespaces, select CWAgent.
Select cpu, host.
Select the required metric and click Select metric.
Configure the required metrics.
Configure the required conditions.
Click Next.
The Notification screen appears.
Select the alarm state.
From Select SNS topic, choose Select an existing SNS topic.
Enter the required email address in the Send a notification to… dialog box.
Select Next.
Enter the Name and Description.
Select Next.
Preview the configuration details and click Create alarm.
An alarm is created.
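The same alarm can also be created from the AWS CLI. The following command is a sketch; the alarm name, threshold, and SNS topic ARN are hypothetical, and the metric name cpu_usage_idle is illustrative, so use the exact metric name that appears under the CWAgent namespace:
aws cloudwatch put-metric-alarm --alarm-name esa-cpu-idle-low --namespace CWAgent --metric-name cpu_usage_idle --statistic Average --period 300 --evaluation-periods 2 --threshold 10 --comparison-operator LessThanThreshold --alarm-actions <SNS_topic_ARN>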
4.22.1.7.4 - FAQs for AWS Cloud Utility
This section lists the FAQs for the AWS Cloud Utility.
Where can I install the AWS Cloud/CloudWatch/Cloud Utilities?
AWS Cloud Utility can be installed on any appliance-based product. It is compatible with the ESA and the DSG that are installed on-premise or on cloud platforms, such as, AWS, Azure, or GCP.
If an instance is created on the AWS using the cloud image, then Cloud Utility AWS is preinstalled on this instance.
Which version of AWS CLI is supported by the AWS Cloud Utility product v2.3.0?
AWS CLI 2.15.41 is supported by the Cloud Utility AWS product v2.3.0.
What is the Default Region Name while configuring AWS services?
The Default Region Name is the AWS Region to whose servers you want to send the default service requests.
For more information about Default Region Name, refer to the following link: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
Can I configure multiple accounts for AWS on a single appliance?
No, you cannot configure multiple accounts for AWS on a single appliance.
How do I determine the Log group name?
The Log group name is the same as the hostname of the appliance.
Can I change the Log group name?
No, you cannot change the Log group name.
Can I change the appliance hostname after enabling CloudWatch integration?
If you change the appliance hostname after enabling CloudWatch integration, then:
- A new Log Group is created with the updated hostname.
- Only the logs generated after the change are present in the new Log Group.
- It is recommended to manually delete the previous Log Group from the AWS CloudWatch Console.
Are there any configuration files for AWS CloudWatch?
Yes, there are configuration files for CloudWatch. The configuration files are present in the /opt/aws/pty/cloudwatch/config.d/ directory.
The config.json file for CloudWatch is located at /opt/aws/pty/cloudwatch/config.json.
It is recommended not to edit the default configuration files.
What happens if I enable CloudWatch integration with a corrupt file?
The invalid configuration file is listed in a dialog box.
The logs corresponding to all other valid configurations will be sent to the AWS CloudWatch Console.
What happens if I edit only the default configuration files, such as, the files in /opt/aws/pty/cloudwatch/config.d/, with invalid data for CloudWatch integration?
In this case, only metrics will be sent to the AWS CloudWatch Console.
How can I export or import the CloudWatch configuration files?
You can export or import the CloudWatch configuration files either through the CLI Manager or through the Web UI.
For more information about exporting or importing the configuration files through the CLI manager, refer to Exporting Data Configuration to Local File.
For more information about exporting or importing the configuration files through the Web UI, refer to Backing Up Data.
What are the compatible Output Formats while configuring the AWS?
The following Default Output Formats are compatible:
- json
- table
- text
If I use an IAM role, what is the Default Output Format?
The Default Output Format is json.
If I disable the CloudWatch integration, why do I need to delete Log Groups and Log Streams manually?
You should delete Log Groups and Log Streams manually because retained Log Groups and Log Streams continue to contribute to your billing cost.
Protegrity will only disable sending logs and metrics to the CloudWatch Console.
How can I check the status of the CloudWatch agent service?
You can view the status of the CloudWatch service using one of the following options.
On the Web UI, navigate to System > Services.
On the CLI Manager, navigate to Administration > Services.
On the CLI Manager, navigate to Administration > OS Console and run the following command:
/etc/init.d/cloudwatch_service status
Can I customize the metrics that I want to send to the CloudWatch console?
No, you cannot customize the metrics to send to the CloudWatch console. If you want to customize the metrics, then contact Protegrity Support.
How often are the metrics collected from the appliances?
The metrics are collected at 60-second intervals from the appliance.
How much does Amazon CloudWatch cost?
For information about the billing and pricing details, refer to https://aws.amazon.com/cloudwatch/pricing/.
Can I provide the file path as <foldername>/* to send logs to the folder?
No, you cannot provide the file path as <foldername>/*.
Regex is not allowed in the CloudWatch configuration file. You must specify the absolute file path.
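For reference, a log-collection entry in the standard Amazon CloudWatch agent configuration schema looks similar to the following sketch. The file path, log group, and log stream names are example values, and the exact structure of the Protegrity configuration files under /opt/aws/pty/cloudwatch/config.d/ may differ. Note that the file_path value is an absolute path without wildcards.
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/application.log",
            "log_group_name": "<appliance-hostname>",
            "log_stream_name": "myapp"
          }
        ]
      }
    }
  }
}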
Can I configure AWS from OS Console?
No, you cannot. If you configure AWS from the OS Console, it changes the expected behaviour of the AWS Cloud Utility.
What happens to the custom configurations if I uninstall or remove the AWS Cloud Utility product?
The custom configurations are retained.
What happens to CloudWatch if I delete AWS credentials from ESA after enabling CloudWatch integration?
You cannot change the status of the CloudWatch service. You must reconfigure the ESA with valid AWS credentials to perform the CloudWatch-related operations.
Why are some of the log files world-readable?
The files with the .log extension present in the /opt/aws/pty/cloudwatch/logs/state folder are not log files. These files are used by the CloudWatch utility to monitor the logs.
Why is the CloudWatch service stopped when the patch is installed? How do I restart the service?
The CloudWatch service is stopped when the patch is installed and remains in the stopped state after the Cloud Utility Patch (CUP) installation. You must therefore restart the CloudWatch service manually. To restart the CloudWatch service manually, perform the following steps.
- Login to the OS Console.
- Restart the CloudWatch service using the following command.
/etc/init.d/cloudwatch_service restart
4.22.1.7.5 - Working with AWS Systems Manager
The AWS Systems Manager allows you to manage and operate the infrastructure on AWS. Using the Systems Manager console, you can view operational data from multiple AWS services and automate operational tasks across the AWS services.
For more information about AWS Systems Manager, refer to the following link:
https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html
Prerequisites
Before using the AWS Systems Manager, ensure that the IAM role or IAM user to integrate with the appliance has a policy assigned to it. You can attach one or more IAM policies that define the required permissions for a particular IAM role.
For more information about the IAM role, refer to section Configuring Access for AWS Instances.
For more information about creating an IAM instance profile for Systems Manager, refer to the following link:
https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-instance-profile.html
4.22.1.7.5.1 - Setting up AWS Systems Manager
You must set up AWS Systems Manager to use the Systems Manager Agent (SSM Agent).
You can set up Systems Manager for:
- An AWS instance
- A non-AWS instance or an on-premise platform
After the SSM Agent is installed on an instance, ensure that the auto-update option is disabled, as auto-update is not supported. If the SSM Agent is auto-updated, the service gets corrupted.
For more information about automatic updates for SSM Agent, refer to the following link:
Setting up Systems Manager for AWS Instance
To set up Systems Manager for an AWS instance:
Assign the IAM Role created in the section Prerequisites.
For more information about attaching an IAM role to an instance, refer to the following link:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#attach-iam-role
Start the Amazon SSM-Agent from the Services menu or run the following command to start the SSM-Agent.
/etc/init.d/amazon-ssm-agent start
Setting up Systems Manager for non-AWS Instance
To set up Systems Manager for a non-AWS instance:
Create a hybrid activation for the Linux instances.
For more information about creating a managed instance activation for a hybrid environment, refer to the following link:
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-managed-instance-activation.html
Important: After you successfully complete the activation, an Activation Code and Activation ID appears. Copy this information and save it. If you lose this information, then you must create a new activation.
Login to the CLI as an admin user and open the OS Console.
Using the Activation Code and Activation ID obtained in Step 1, run the following command to activate and register the SSM-Agent.
amazon-ssm-agent -register -code <activation-code> -id <activation-id> -region <region>
Here, <region> is the identifier of the instance region. Note the instance-id; it is used to perform operations from the SSM-Agent.
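For example, with illustrative placeholder values (replace the code, ID, and region with the values from your own activation):
amazon-ssm-agent -register -code "ABCD1234efgh5678" -id "12345678-abcd-40ef-9a1b-1234567890ab" -region "us-east-1"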
For more information on how to register a managed instance, refer to the following link:
Start the Amazon SSM-Agent from the Services menu or run the following command to start the SSM-Agent.
/etc/init.d/amazon-ssm-agent start
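After the agent is registered and started, you can optionally confirm that the appliance appears as a managed instance. The following sketch assumes that the AWS CLI is configured on a machine with the required Systems Manager permissions; the region is an example value.
aws ssm describe-instance-information --region us-east-1
The output lists the managed instances, including the mi-prefixed instance ID assigned to the registered appliance.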
4.22.1.7.5.2 - FAQs on AWS Systems Manager
This section lists the FAQs on AWS Systems Manager.
What can I do when there is a problem with starting the service or the service is automatically updated?
Uninstall and reinstall the Cloud Utility AWS product.
For more information on installing and uninstalling the services, refer to Add/Remove Services.
What is the name of the service?
The service name is Amazon SSM-Agent.
What can I do if the AWS Systems Manager shows a permission denied message after attaching the correct IAM Role?
Restart the service after attaching the IAM role for new permissions to take effect.
Is the Amazon SSM-Agent service available in the Services menu in the Web UI and the CLI?
Yes.
Can I manage the Amazon SSM-Agent service from the Menu option in the Web UI?
Yes, you can start or stop and restart the Amazon SSM Agent service from the Menu option in the Web UI.
4.22.1.7.6 - Troubleshooting for the AWS Cloud Utility
This section lists the troubleshooting for the AWS Cloud Utility.
While using AWS services, the following error appears: UnknownRegionError("No default region found...")
Issue: The service is unable to retrieve the AWS Region from the system.
Workaround: The service is region specific. Include the region name in the command.
region=<region-name>
The CloudWatch service was running, but it stopped after the system was restarted.
Issue: The CloudWatch Service Mode is set to Manual.
Workaround: You should restart the service manually.
If the CloudWatch Service Mode is set to Automatic, then wait until all the services start.
The CloudWatch integration is enabled, but the log group/log stream is not created or logs are not being updated.
Issue: This issue occurs because the associated IAM Role or IAM User does not have the required permissions to perform CloudWatch-related operations.
To verify the error, check the log file by using a text editor.
/var/log/amazon/amazoncloudwatch-agent/amazoncloudwatch-agent.log
You can see one of the following errors:
E! WriteToCloudWatch failure, err: AccessDenied: User: arn:aws:sts:**** is not authorized to perform: cloudwatch:PutMetricData
E! cloudwatchlogs: code: AccessDeniedException, message: User: arn:aws:sts:**** is not authorized to perform: logs:PutLogEvents
E! CreateLogStream / CreateLogGroup AccessDeniedException: User: arn:aws:sts:**** is not authorized to perform: logs:CreateLogStream
Workaround: Assign CloudWatchAgentServerPolicy permissions to the associated IAM Role or IAM User and restart the service.
I can see the error message: Unable to locate valid credentials for CloudWatch
Issue: The error message can occur because of one of the following reasons:
- If you are using an AWS instance, then the IAM Role is not configured for the AWS instance.
- If you are using a non-AWS instance, then the IAM User is configured with invalid AWS credentials.
Workaround: On an AWS instance, navigate to the AWS console and attach the IAM role to the instance.
For more information about attaching the IAM role, refer to https://aws.amazon.com/blogs/security/easily-replace-or-attach-an-iam-role-to-an-existing-ec2-instance-by-using-the-ec2-console/.
On a non-AWS instance, to configure the IAM user with valid credentials, navigate to Tools > CloudWatch Utility AWS Tools > AWS Configure.
I am unable to see AWS Tools section under Tools in the CLI Manager
Issue: The AWS Admin role is not assigned to the instance.
Workaround: For more information about the AWS Admin role, refer to Managing Roles.
I can see one of the following error messages: CloudWatch Service started failed
or CloudWatch Service stopped failed
Issue: The ESA is configured with invalid AWS credentials.
Workaround: You must reconfigure the ESA with valid AWS credentials.
4.22.2 - Installing Protegrity Appliances on Azure
Azure is a cloud computing service offered by Microsoft, which provides services for compute, storage, and networking. It also provides software, platform, and infrastructure services along with support for different programming languages, tools, and frameworks.
The following sections describe the prerequisites and tasks for installing Protegrity appliances on Azure.
4.22.2.1 - Verifying Prerequisites
This section describes the prerequisites, including the hardware and network requirements, for installing and using Protegrity appliances on Azure.
Prerequisites
The following prerequisites are essential to install the Protegrity appliances on Azure:
- Sign in URL for the Azure account
- Authentication credentials for the Azure account
- Working knowledge of Azure
- Access to the My.Protegrity portal
Before you begin:
Ensure that you use the following order to create a virtual machine on Azure:
Order | Description |
---|---|
1 | Create a Resource Group |
2 | Create a Storage Account |
3 | Create a Container |
4 | Obtain the Azure BLOB |
5 | Create an image from the BLOB |
6 | Create a VM from the image |
Hardware Requirements
As the Protegrity appliances are hosted and run on Azure, the hardware requirements are dependent on the configurations provided by Microsoft. However, these requirements can change based on the customer requirements and budget. The actual hardware configuration depends on the actual usage or amount of data and logs expected.
The minimum recommendation for an appliance is 8 CPU cores and 32 GB memory. This option is available under the Standard_D8s_v3 option on Azure.
For more information about the hardware requirements of ESA, refer here.
Network Requirements
The Protegrity appliances on Azure are provided with an Azure virtual networking environment. The virtual network enables you to access other instances of Protegrity resources in your project.
For more information about configuring Azure virtual network, refer here.
4.22.2.2 - Azure Cloud Utility
The Azure Cloud Utility is an appliance component that supports features specific to the Azure Cloud Platform. For Protegrity appliances, this component must be installed to utilize the services of Azure Accelerated Networking and the Azure Linux VM agent. If you are utilizing Azure Accelerated Networking or the Azure Linux VM agent, then it is recommended not to uninstall this component.
When you upgrade or install the appliance from an Azure v10.1.0 blob, the Azure Cloud Utility is installed automatically in the appliance.
4.22.2.3 - Setting up Azure Virtual Network
The Azure virtual network is a service that provides connectivity to the virtual machine and services on Azure. You can configure the Azure virtual network by specifying usable IP addresses. You can also create and configure subnets, network gateways, and security settings.
For more information about setting up Azure virtual network, refer to the Azure virtual network documentation at:
https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview
If you are using the ESA or the DSG appliance with Azure, ensure that the inbound and outbound ports of the appliances are configured in the virtual network.
For more information about the list of inbound and outbound ports, refer to section Open Listening Ports.
4.22.2.4 - Creating a Resource Group
Resource Groups in Azure are a collection of multiple Azure resources, such as virtual machines, storage accounts, virtual networks, and so on. The resource groups enable managing and maintaining the resources as a single entity.
For more information about creating resource groups, refer to the Azure resource group documentation at:
https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-portal
4.22.2.5 - Creating a Storage Account
Azure storage accounts contain all the Azure storage data objects, such as disks, blobs, files, queues, and tables. The data in the storage accounts are scalable, secure, and highly available.
For more information about creating storage accounts, refer to the Azure storage accounts documentation at:
https://docs.microsoft.com/en-us/azure/storage/common/storage-quickstart-create-account
4.22.2.6 - Creating a Container
The data storage objects in a storage account are stored in a container. Similar to directories in a file system, containers in Azure contain BLOBs. You add a container in Azure to store the ESA BLOB.
For more information about creating a container, refer to the following link:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal
4.22.2.7 - Obtaining the Azure BLOB
In Azure, you can share files across different storage accounts. The ESA, which is packaged as a BLOB, is shared across storage accounts on Azure. A BLOB is a data type that is used to store unstructured file formats. Azure supports BLOB storage to store unstructured data, such as audio, text, images, and so on. The BLOB of the appliance is shared by Protegrity to the client’s storage account.
Before creating the instance on Azure, you must obtain the BLOB from the My.Protegrity portal. On the portal, you select the required ESA version and choose Azure as the target cloud platform. You then share the product to your cloud account. The following steps describe how to share the BLOB to your cloud account.
To obtain and share the BLOB:
Log in to the My.Protegrity portal with your user account.
Click Product Management > Explore Products > Data Protection.
Select the required ESA Platform Version from the drop-down.
The Product Family table will update based on the selected ESA Platform Version.
The ESA Platform Versions listed in drop-down menu reflect all versions. These include versions that were either previously downloaded or shipped within the organization along with any newer versions available thereafter. Navigate to Product Management > My Product Inventory to check the list of products previously downloaded.
The images in this section consider the ESA as a reference. Ensure that you select the required image.
Select the Product Family.
The description box will populate with the Product Family details.
Click View Products to advance to the product listing screen.
Callout | Element Name | Description |
---|---|---|
1 | Target Platform Details | Shows details about the target platform. |
2 | Product Name | Shows the product name. |
3 | Product Family | Shows the product family name. |
4 | OS Details | Shows the operating system name. |
5 | Version | Shows the product version. |
6 | End of Support Date | Shows the final date that Protegrity will provide support for the product. |
7 | Action | Click the View icon to open the Product Detail screen. |
8 | Export as CSV | Downloads a .csv file with the results displayed on the screen. |
9 | Search Criteria | Type text in the search field to specify the search filter criteria, or filter the entries using the following options: OS, Target Platform. |
10 | Request one here | Opens the Create Certification screen for a certification request. |

Select the Azure cloud target platform you require and click the View icon from the Action column.
The Product Detail screen appears.
Callout | Element Name | Description |
---|---|---|
1 | Product Detail | Shows the following information about the product: Product name, Family name, Part number, Version, OS details, Hardware details, Target platform details, End of support date, Description. |
2 | Product Build Number | Shows the product build number. |
3 | Release Type Name | Shows the type of build, such as, release, hotfix, or patch. |
4 | Release Date | Shows the release date for the build. |
5 | Build Version | Shows the build version. |
6 | Actions | Shows the following options for download: Click the Share Product icon to share the product through the cloud. Click the Download Signature icon to download the product signature file. Click the Download Readme icon to download the Release Notes. |
7 | Download Date | Shows the date when the file was downloaded. |
8 | User | Shows the user name who downloaded the build. |
9 | Active Deployment | Select the check box to mark the software as active. Clear the check box to mark the software as inactive. This option is available only after you download a product. |
Click the Share Product icon to share the desired cloud product.
If the access to the cloud products is restricted and the Customer Cloud Account details are not available, then a message appears. The message displays the information that is required and the contact information for obtaining access to cloud share.
A dialog box appears and your available cloud accounts will be displayed.
Select your required cloud account in which to share the Protegrity product.
Click Share.
A message box is displayed with the command line interface (CLI) instructions with the option to download a detailed PDF containing the cloud web interface instructions. Additionally, the instructions for sharing the cloud product are sent to your registered email address and to your notification inbox in My.Protegrity.
Click the Copy icon to copy the command for sharing the cloud product and run the command in the CLI. Alternatively, click Instructions to download the detailed PDF instructions for cloud sharing using the CLI or the web interface.
The cloud sharing instruction file is saved in the .pdf format. You need a reader, such as, Acrobat Reader to view the file. The Cloud Product will be shared with your cloud account for seven (7) days from the original share date in the My.Protegrity portal.
After the seven (7) day time period, you need to request a new share of the cloud product through My.Protegrity.com.
4.22.2.8 - Creating Image from the Azure BLOB
After you obtain the BLOB from Protegrity, you must create an image from the BLOB. The following steps describe the parameters that must be selected to create an image.
To create an image from the BLOB:
Log in to the Azure portal.
Select Images and click Create.
Enter the details in the Resource Group, Name, and Region text boxes.
In the OS disk option, select Linux.
In the VM generation option, select Gen 1.
In the Storage blob drop-down list, select the Protegrity Azure BLOB.
Enter the appropriate information in the required fields and click Review + create.
The image is created from the BLOB.
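Alternatively, the image can be created from the BLOB using the Azure CLI. The following is a minimal sketch; the resource group, image name, and BLOB URL are placeholder values.
az image create --resource-group <ResourceGroupName> --name <ImageName> --os-type Linux --hyper-v-generation V1 --source https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/<BlobName>.vhd
The --os-type and --hyper-v-generation values correspond to the Linux and Gen 1 options selected in the portal.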
4.22.2.9 - Creating a VM from the Image
After obtaining the image, you can create a VM from it. For more information about creating a VM from the image, refer to the following link.
To create a VM:
Log in to the Azure homepage.
Click Images.
The list of all the images appear.
Select the required image.
Click Create VM.
Enter details in the required fields.
Select SSH public key in the Authentication type option. Do not select the password-based mechanism as the authentication type; as a security measure, Protegrity recommends not using it.
In the Username text box, enter the name of a user. Be aware that this user will not have SSH access to the appliance. For more details, refer to the section Created OS user and SSH access to appliance below.
This user is added as an OS level user in the appliance. Ensure that the following usernames are not provided in the Username text box:
Select the required SSH public key source.
Enter the required information in the Disks, Networking, Management, and Tags sections.
Click Review + Create.
The VM is created from the image.
After the VM is created, you can access the appliance from the CLI Manager or Web UI.
Created OS user and SSH access to appliance
The OS user that is created in step 7 does not have SSH access to the appliance. If you want to provide SSH access to this user, login to the appliance as another administrative user and toggle SSH access. In addition, update the user to permit Linux shell access (/bin/sh).
4.22.2.10 - Accessing the Appliance
After setting up the virtual machine, you can access the appliance through the IP address that is assigned to the virtual machine. It is recommended to access the appliance with the administrative credentials.
If the number of unsuccessful password attempts exceeds the defined value in the password policy, then the account gets locked.
For more information on the password policy for the admin and viewer users, refer here, and for the root and local_admin OS users, refer here.
4.22.2.11 - Finalizing the Installation of Protegrity Appliance on the Instance
When you install the appliance, it generates multiple security identifiers such as, keys, certificates, secrets, passwords, and so on. These identifiers ensure that sensitive data is unique between two appliances in a network. When you receive a Protegrity appliance image, the identifiers are generated with certain values. If you use the security identifiers without changing their values, then security is compromised and the system might be vulnerable to attacks.
Rotating Appliance OS keys to finalize installation
Using Rotate Appliance OS Keys, you can randomize the values of these security identifiers for an appliance. During the finalization process, you run the key rotation tool to secure your appliance.
If you do not complete the finalization process, then some features of the appliance, including the Web UI, may not be functional.
For example, if the OS keys are not rotated, then you might not be able to add appliances to a Trusted Appliances Cluster (TAC).
For information about the default passwords, refer to the Release Notes 10.1.0.
Finalizing ESA Installation
You can finalize the installation of the ESA after signing in to the CLI Manager.
Before you begin
Ensure that the finalization process is initiated from a single session only. If you start finalization simultaneously from a different session, then the "Finalization is already in progress." message appears. You must wait until the finalization of the instance is successfully completed.
Additionally, ensure that the appliance session is not interrupted. If the session is interrupted, then the instance becomes unstable and the finalization process is not completed on that instance.
To finalize ESA installation:
Sign in to the ESA CLI Manager of the instance created using the default administrator credentials.
The following screen appears.
Select Yes to initiate the finalization process.
The screen to enter the administrative credentials appears.
If you select No, then the finalization process is not initiated.
To manually initiate the finalization process, navigate to Tools > Finalize Installation and press ENTER.
Enter the credentials for the admin user and select OK.
A confirmation screen to rotate the appliance OS keys appears.
Select OK to rotate the appliance OS keys.
The following screen appears.
To update the user passwords, provide the credentials for the following users:
- root
- admin
- viewer
- local_admin
Select Apply.
The user passwords are updated and the appliance OS keys are rotated.
The finalization process is completed.
Default products installed on appliances
The appliance comes with some products installed by default. If you want to verify the installed products or install additional products, then navigate to Administration > – Installations and Patches – > Add/Remove Services.
For more information about installing products, refer to the section Working with Installation and Packages in the Protegrity Installation Guide.
4.22.2.12 - Accelerated Networking
Accelerated networking is a feature provided by Microsoft Azure that improves network performance. This is achieved by enabling single-root input/output virtualization (SR-IOV) on a virtual machine.
In a virtual environment, SR-IOV specifies the isolation of PCIe resources to improve manageability and performance. The SR-IOV interface helps to virtualize, access, and share the PCIe resources, such as, the connection ports for graphic cards, hard drives, and so on. This reduces latency, network jitter, and CPU utilization.
As shown in figure below, the virtual switch is an integral part of a network for connecting the hardware and the virtual machine. The virtual switch helps in enforcing the policies on the virtual machine. These policies include access control lists, isolation, network security controls, and so on, and are implemented on the virtual switch. The network traffic routes through the virtual switch and the policies are implemented on the virtual machine. This results in higher latency, network jitters, and higher CPU utilization.
However, in an accelerated network, the policies are applied on the hardware. The network traffic only routes through the network cards directly forwarding it to the virtual machine. The policies are applied on the hardware instead of the virtual switch. This helps the network traffic to bypass the virtual switch and the host while maintaining the policies applied at the host. Reducing the layers of communication between the hardware and the virtual machine helps to improve the network performance.
Following are the benefits of accelerated networking:
- Reduced Latency: Bypassing the virtual switch in the data path increases the number of packets that are processed in the virtual machine.
- Reduced Jitter: Bypassing the virtual switch and host from the network reduces the processing time for the policies. The policies are directly implemented on the virtual machine, thereby reducing the network jitter caused by the virtual switch.
- Reduced CPU Utilization: Applying the policies to the hardware and implementing them directly on the virtual machine reduces the workload on the CPU to process these policies.
Prerequisites
The following prerequisites are essential to enable or disable the Azure Accelerated Networking feature.
A separate Windows or Linux machine with the Azure CLI installed and configured.
For more information about installing the Azure CLI, refer to the following link.
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
The Protegrity appliance must be in the stop (deallocated) state.
The virtual machine must use the supported instance size.
For more information about the supported series of virtual machines for the accelerated networking feature, refer to the Supported Instance Sizes for Accelerated Networking.
Supported Instance Sizes for Accelerated Networking
There are several series of instance sizes used on the virtual machines that support the accelerated networking feature.
These include the following:
- D/DSv2
- D/DSv3
- E/ESv3
- F/FS
- FSv2
- Ms/Mms
For general-purpose and compute-optimized instance sizes, the accelerated networking feature is supported on instances with 2 or more vCPUs. However, on instance sizes that support hyperthreading, the accelerated networking feature requires 4 or more vCPUs.
For more information about the supported instance sizes, refer to the following link.
Creating a Virtual Machine with Accelerated Networking Enabled
If you want to enable accelerated networking while creating the instance, you can do so only from the Azure CLI. The Azure portal does not provide the option to create an instance with accelerated networking enabled.
For more information about creating a virtual machine with accelerated networking, refer to the following link.
To create a virtual machine with the accelerated networking feature enabled:
From the machine on which the Azure CLI is installed, login to Azure using the following command.
az login
Create a virtual machine using the following command.
az vm create --image <name of the Image> --resource-group <name of the resource group> --name <name of the new instance> --size <configuration of the instance> --admin-username <administrator username> --ssh-key-values <SSH key path> --public-ip-address "" --nsg <Azure virtual network> --accelerated-networking true
For example, the table below lists values to create a virtual machine with the following parameters.
Parameter | Value |
---|---|
Name of the image | ProtegrityESAAzure |
name-of-resource-group | MyResourcegroup |
size | Standard_DS3_v2 |
admin-username | admin |
nsg | TierpointAccessDev |
ssh-key-value | ./testkey.pub |

The virtual machine is created with the accelerated networking feature enabled.
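For reference, substituting these values into the command above gives the following example; the instance name MyESAInstance is a placeholder.
az vm create --image ProtegrityESAAzure --resource-group MyResourcegroup --name MyESAInstance --size Standard_DS3_v2 --admin-username admin --ssh-key-values ./testkey.pub --public-ip-address "" --nsg TierpointAccessDev --accelerated-networking true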
Enabling Accelerated Networking
Perform the following steps to enable the Azure Accelerated Networking feature on the Protegrity appliance.
To enable accelerated networking:
From the machine on which the Azure CLI is installed, login to Azure using the following command.
az login
Stop the Protegrity appliance using the following command.
az vm deallocate --resource-group <ResourceGroupName> --name <InstanceName>
Parameter | Description |
---|---|
ResourceGroupName | Name of the resource group where the instance is located. |
InstanceName | Name of the instance that you want to stop. |

Enable accelerated networking on your virtual machine’s network card using the following command.
az network nic update --name <nic-name> --resource-group <ResourceGroupName> --accelerated-networking true
Parameter | Description |
---|---|
nic-name | Name of the network interface card attached to the instance where you want to enable accelerated networking. |
ResourceGroupName | Name of the resource group where the instance is located. |

Start the Protegrity appliance.
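To start the appliance again from the Azure CLI, you can use a command similar to the following.
az vm start --resource-group <ResourceGroupName> --name <InstanceName>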
Disabling Accelerated Networking
Perform the following steps to disable the Azure Accelerated Networking features on the Protegrity appliance.
To disable accelerated networking:
From the machine on which the Azure CLI is installed, login to Azure using the following command.
az login
Stop the Protegrity appliance using the following command.
az vm deallocate --resource-group <ResourceGroupName> --name <InstanceName>
Parameter | Description |
---|---|
ResourceGroupName | Name of the resource group where the instance is located. |
InstanceName | Name of the instance that you want to stop. |

Disable accelerated networking on your virtual machine’s network card using the following command.
az network nic update --name <nic-name> --resource-group <ResourceGroupName> --accelerated-networking false
Parameter | Description |
---|---|
nic-name | Name of the network interface card attached to the instance where you want to disable accelerated networking. |
ResourceGroupName | Name of the resource group where the instance is located. |

Start the Protegrity appliance.
Troubleshooting and FAQs for Azure Accelerated Networking
This section lists the Troubleshooting and FAQs for the Azure Accelerated Networking feature.
What is the recommended number of virtual machines required in the Azure virtual network?
It is recommended to have at least two or more virtual machines in the Azure virtual network.
Can I stop or deallocate my machine from the Web UI?
Yes. You can stop or deallocate your machine from the Web UI. Navigate to the Azure instance details page and click Stop from the top ribbon.
Can I uninstall the Cloud Utility Azure if the accelerated networking feature is enabled?
It is recommended to disable the accelerated networking feature before uninstalling the Cloud Utility Azure.
How do I verify that the accelerated networking is enabled on my machine?
Perform the following steps:
Login to the CLI manager.
Navigate to Administration > OS Console.
Enter the root credentials.
Verify that the Azure Accelerated Networking feature is enabled by using the following commands.
# lspci | grep "Virtual Function"
Confirm the Mellanox VF device is exposed to the VM with the lspci command.
The following is a sample output:
001:00:02.0 Ethernet controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
# ethtool -S ethMNG | grep vf
Check for activity on the virtual function (VF) with the ethtool -S ethMNG | grep vf command. If you receive an output similar to the following sample output, accelerated networking is enabled and working. The value of the packets and bytes should not be zero.
vf_rx_packets: 992956
vf_rx_bytes: 2749784180
vf_tx_packets: 2656684
vf_tx_bytes: 1099443970
vf_tx_dropped: 0
How do I verify from the Azure Web portal that the accelerated networking is enabled on my machine?
Perform the following steps:
- From the Azure Web portal, navigate to the virtual machine’s details page.
- From the left pane, navigate to Networking.
- If there are multiple NICs, then select the required NIC.
- Verify that the accelerated networking feature is enabled from the Accelerated Networking field.
Can I use the Cloud Shell on the Azure portal for enabling or disabling the accelerated networking feature?
Yes, you can use the Cloud Shell for enabling or disabling the accelerated networking. For more information about the pricing of the cloud shell, refer to the following link.
https://azure.microsoft.com/en-in/pricing/details/cloud-shell
How can I enable the accelerated networking feature using the Cloud Shell?
Perform the following steps to enable the accelerated networking feature using the Cloud Shell:
From the Microsoft Azure portal, launch the Cloud Shell.
Stop the Protegrity appliance using the following command.
az vm deallocate --resource-group <ResourceGroupName> --name <InstanceName>
Enable accelerated networking on your virtual machine’s network card using the following command.
az network nic update --name <nic-name> --resource-group <ResourceGroupName> --accelerated-networking true
Start the Protegrity appliance.
How can I disable the accelerated networking feature using the Cloud Shell?
Perform the following steps to disable the accelerated networking feature using the Cloud Shell:
From the Microsoft Azure portal, launch the Cloud Shell.
Stop the Protegrity appliance using the following command.
az vm deallocate --resource-group <ResourceGroupName> --name <InstanceName>
Disable accelerated networking on your virtual machine’s network card using the following command.
az network nic update --name <nic-name> --resource-group <ResourceGroupName> --accelerated-networking false
Start the Protegrity appliance.
Are there any specific regions where the accelerated networking feature is supported?
The accelerated networking feature is supported in all public Azure regions and Azure government clouds. For more information about the supported regions, refer to the following link:
https://docs.microsoft.com/en-us/azure/virtual-network/create-vm-accelerated-networking-cli#regions
Is it necessary to stop (deallocate) the machine to enable or disable the accelerated networking feature?
Yes. It is necessary to stop (deallocate) the machine to enable or disable the accelerated networking feature. If the machine is not in the stop (deallocated) state, then the value of the vf packets may freeze, which results in unexpected behaviour of the machine.
Is there any additional cost for using the accelerated networking feature?
No. There is no additional cost for using the accelerated networking feature. For more information about the costing, contact Protegrity Support.
4.22.2.13 - Backing up and Restoring VMs on Azure
On Azure, you can prevent unintended loss of data by backing up your virtual machines. Azure allows you to optimize your backup by providing different levels of consistency. Similarly, the data on the virtual machines can be easily restored to a stable state. You can back up a virtual machine using the following two methods:
- Creating snapshots of the disk
- Using recovery services vaults
The following sections describe how to create and restore backups using the two mentioned methods.
Backing up and Restoring using Snapshots of Disks
The following sections describe how to create snapshots of disks and recover them on virtual machines. This procedure of backup and recovery is applicable for virtual machines that are created from disks and custom images.
Creating a Snapshot of a Virtual Machine on Azure
To create a snapshot of a virtual machine:
Sign in to the Azure homepage.
On the left pane, select Virtual machines.
The Virtual machines screen appears.
Select the required virtual machine and click Disks.
The details of the disk appear.
Select the disk and click Create Snapshot.
The Create Snapshot screen appears.
Enter the following information:
- Name: Name of the snapshot
- Subscription: Subscription account for Azure
Select the required resource group from the Resource group drop-down list.
Select the required account type from the Account type drop-down list.
Click Create.
The snapshot of the disk is created.
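If you prefer the Azure CLI, a snapshot of the OS disk can be created with a command similar to the following sketch; the resource group, snapshot name, and disk name are placeholder values.
az snapshot create --resource-group <ResourceGroupName> --name <SnapshotName> --source <OSDiskNameOrID>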
Restoring from a Snapshot on Azure
This section describes the steps to restore a snapshot of a virtual machine on Azure.
Before you begin
Ensure that the snapshot of the machine is taken.
How to restore from a snapshot on Azure
To restore a virtual machine from a snapshot:
On the Azure Dashboard screen, select Virtual Machine.
The screen displaying the list of all the Azure virtual machines appears.
Select the required virtual machine.
The screen displaying the details of the virtual machine appears.
On the left pane, under Settings, click Disks.
Click Swap OS Disk.
The Swap OS Disk screen appears.
Click the Choose disk drop-down list and select the snapshot created.
Enter the confirmation text and click OK.
The machine is stopped and the disk is successfully swapped.
Restart the virtual machine to verify whether the snapshot is available.
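The same restore can also be scripted with the Azure CLI by first creating a managed disk from the snapshot and then swapping it in as the OS disk. The following is a sketch under the assumption that the virtual machine is stopped (deallocated); all names are placeholders.
az disk create --resource-group <ResourceGroupName> --name <NewDiskName> --source <SnapshotNameOrID>
az vm update --resource-group <ResourceGroupName> --name <VMName> --os-disk <NewDiskName>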
Backing up and Restoring using Recovery Services Vaults
A Recovery Services vault is an entity that stores backup and recovery points. It enables you to copy the configuration and data from virtual machines. The benefit of using Recovery Services vaults is that they help organize your backups and minimize the overhead of management. They come with enhanced capabilities for backing up data without compromising data security. These vaults also allow you to create backup policies for virtual machines, thus ensuring integrity and protection. Using Recovery Services vaults, you can retain recovery points of protected virtual machines to restore them at a later point in time.
For more information about Recovery services vaults, refer to the following link:
https://docs.microsoft.com/en-us/azure/backup/backup-azure-recovery-services-vault-overview
Before you begin
This process of backup and restore is applicable only for virtual machines that are created from a custom image.
Creating Recovery Services Vaults
Before starting with the backup procedure, you must create a recovery services vault.
Before you begin
Ensure that you are aware of the pricing and role-based access before proceeding with the backup.
For more information about the pricing and role-based access, refer to the following links:
To create a recovery services vault:
Sign in to the Azure homepage.
On the Azure Dashboard screen, search Recovery Services vaults.
The screen displaying all the services vaults appears.
Click Add.
The Create Recovery Services vault screen appears.
Populate the following fields:
- Subscription: Account name under which the recovery services vault is created.
- Resource group: Associate a resource group to the vault.
- Vault name: Name of the vault.
- Region: Location where the data for recovery vault must be stored.
The Welcome to Azure Backup screen appears on the right pane.
Click Review + create.
The recovery services vault is created.
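A Recovery Services vault can also be created from the Azure CLI with a command similar to the following sketch; the resource group, vault name, and location are placeholder values.
az backup vault create --resource-group <ResourceGroupName> --name <VaultName> --location <Region>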
Backing up Virtual Machine using Recovery Services Vault
This section describes how to create a backup of a virtual machine using a Recovery Services Vault. For more information about the backup, refer to the link, https://docs.microsoft.com/en-us/azure/backup/backup-azure-vm-backup-faq
To create a backup of a virtual machine:
Sign in to the Azure homepage.
On the left pane, select Virtual machines.
The Virtual machines screen appears.
Select the required virtual machine.
On the left pane, under the Operations tab, click Backup.
The Welcome to Azure Backup screen appears on the right pane.
From the Recovery Services vault option, choose Select existing and select the required vault.
In the backup policy, you specify the frequency, backup schedule, and so on. From the Choose backup policy option, select a policy from the following options:
- DailyPolicy: Retain the daily backup taken at 9.00 AM UTC for 180 days.
- DefaultPolicy: Retain the daily backup taken at 10.30 AM UTC for 30 days.
- Create backup policy: Customize the backup policy as per your requirements.
Click Enable backup.
A notification stating that backup is initiated appears.
On the Azure Dashboard screen, search Recovery Services vaults.
The screen displaying all the services vaults appears.
Select the required services vault.
The screen displaying the details of the virtual machine appears.
On the center pane, under Protected items, click Backup items.
The screen displaying the different backup management types appears.
Select the required management type.
After the backup is completed, the list displays the virtual machine for which the backup was initiated.
Restoring a Virtual Machine using Recovery Services Vaults
In Azure, when restoring a virtual machine using Recovery Services vaults, you have the following two options:
- Creating a virtual machine: Create a virtual machine with the backed up information.
- Replacing an existing disk: Replace an existing disk on the virtual machine with the backed up information.
Restoring by Creating a Virtual Machine
This section describes how to restore a backup on a virtual machine by creating a virtual machine.
Before you begin
Ensure that the backup process for the virtual machine is completed.
How to restore by creating a virtual machine
To restore a virtual machine by creating a virtual machine:
On the Azure Dashboard screen, search Recovery Services vaults.
The screen displaying all the services vaults appears.
Select the required services vault.
The screen displaying the details of the services vault appears.
On the center pane, under Protected items, click Backup items.
The screen displaying the different backup management types appears.
Select the required management type.
The virtual machines for which backup has been initiated appear.
Select the virtual machine.
The screen displaying the backup details and restore points appears.
Click Restore VM.
The Select Restore point screen appears.
Choose the required restore point and click OK.
The Restore Configuration screen appears.
If you want to create a virtual machine, click Create new.
Populate the following fields for the respective options:
- Restore type: Create a new virtual machine without overwriting an existing backup.
- Virtual machine name: Name for the virtual machine.
- Resource group: Associate vault to a resource group.
- Virtual network: Associate vault to a virtual network.
- Storage account: Associate vault to a storage account.
Click OK.
Click Restore.
The restore process is initiated. A virtual machine is created with the backed up information.
Restoring a Virtual Machine by Restoring a Disk
This section describes how to restore a backup on a virtual machine by restoring a disk on a virtual machine.
Before you begin
Ensure that the backup process for the virtual machine is completed. Also, ensure that the VM is stopped before performing the restore process.
How to restore a virtual machine by replacing an existing disk
To restore a virtual machine by replacing an existing disk:
On the Azure Dashboard screen, search Recovery Services vaults.
The screen displaying all the services vaults appears.
Select the required services vault.
The screen displaying the details of the services vault appears.
On the center pane, under Protected items, click Backup items.
The screen displaying the different backup management types appears.
Select the required management type.
The virtual machines for which backup has been initiated appear.
Select the virtual machine.
The screen displaying the backup details and restore points appears.
Click Restore VM.
The Select Restore point screen appears.
Choose the required restore point and click OK.
The Restore Configuration screen appears.
Click Replace existing.
Populate the following fields:
- Restore type: Replace the disk from a selected restore point.
- Staging location: Temporary location used during the restore process.
Click OK.
Click Restore.
The restore process is initiated. The backup is restored by replacing an existing disk on the machine with the disk containing the backed up information.
4.22.2.14 - Connecting to an ESA Instance
If you are using an instance of the DSG appliance on Azure, you must connect it to an instance of the ESA appliance. Using the CLI manager, you must provide the connectivity details of the ESA appliance in the DSG appliance.
For more information about connecting a DSG instance with ESA, refer to Setting up ESA Communication.
4.22.2.15 - Deploying the Protegrity Appliance Instance with the Protectors
You can configure the various protectors that are a part of the Protegrity Data Security Platform with an instance of the ESA appliance running on Azure.
Depending on the cloud-based environment that hosts the protectors, the protectors can be configured with the instance of the ESA appliance in one of the following ways:
- If the protectors are running on the same virtual network as the instance of the ESA appliance, then the protectors need to be configured using the internal IP address of the ESA appliance within the virtual network.
- If the protectors are running on a different virtual network than that of the ESA appliance, then the virtual network of the ESA instance needs to be configured to connect to the virtual network of the protectors.
4.22.3 - Installing Protegrity Appliances on Google Cloud Platform (GCP)
The Google Cloud Platform (GCP) is a cloud computing service offered by Google, which provides services for compute, storage, networking, cloud management, security, and so on. The following products are available on GCP:
- Google Compute Engine provides virtual machines for instances.
- Google App Engine provides a Software Developer Kit (SDK) to develop products.
- Google Cloud Storage is a storage platform to store large data sets.
- Google Container Engine is a cluster-oriented container to develop and manage Docker containers.
Protegrity provides the images for GCP that contain either the Enterprise Security Administrator (ESA), or the Data Security Gateway (DSG).
This section describes the prerequisites and tasks for installing Protegrity appliances on GCP. In addition, it describes some best practices for using the Protegrity appliances on GCP effectively.
4.22.3.1 - Verifying Prerequisites
This section describes the prerequisites including the hardware, software, and network requirements for installing and using Protegrity appliances on GCP.
Prerequisites
The following prerequisite is essential to install the Protegrity appliances on GCP:
- A GCP account and the following information:
- Login URL for the GCP account
- Authentication credentials for the GCP account
- Access to the My.Protegrity portal
Hardware Requirements
As the Protegrity appliances are hosted and run on GCP, the hardware requirements are dependent on the configurations provided by GCP. The actual hardware configuration depends on the actual usage or amount of data and logs expected. However, these requirements can scale based on the customer requirements and budget.
The minimum recommendation for an appliance is 8 CPU cores and 32 GB memory. On GCP, this configuration is available under the Machine type drop-down list in the n1-standard-8 option.
For more information about the hardware requirements of ESA, refer to section System Requirements.
Network Requirements
The Protegrity appliances on GCP are provided with a Google Virtual Private Cloud (VPC) networking environment. The Google VPC enables you to access other instances of Protegrity resources in your project.
You can configure the Google VPC by specifying the IP address range. You can also create and configure subnets, network gateways, and the security settings.
For more information about the Google VPC, refer to the VPC documentation at: https://cloud.google.com/vpc/docs/vpc
If you are using the ESA or the DSG appliance with GCP, then ensure that the inbound and outbound ports of the appliances are configured in the VPC.
For more information about the list of inbound and outbound ports, refer to the section Open Listening Ports.
4.22.3.2 - Configuring the Virtual Private Cloud (VPC)
You must configure your Virtual Private Cloud (VPC) to connect to different Protegrity appliances.
To configure a VPC:
Ensure that you are logged in to the GCP Console.
Navigate to the Home screen.
Click the navigation menu on the Home screen.
Under Networking, navigate to VPC network > VPC networks.
The VPC networks screen appears.
Click CREATE VPC NETWORK.
The Create a VPC network screen appears.
Enter the name and description of the VPC network in the Name and Description text boxes.
Under the Subnets area, click Custom to add a subnet.
Enter the name of the subnet in the Name text box.
Click Add a Description to enter a description for the subnet.
Select the region where the subnet is placed from the Region drop-down menu.
Enter the IP address range for the subnet in the IP address range text box.
For example, 10.1.0.0/24.
Select On or Off from the Private Google Access options to set access for VMs on the subnet to access Google services without assigning external IP addresses.
Click Done. Additionally, click Add Subnet to add another subnet.
Select Regional from the Dynamic routing mode option.
Click Create to create the VPC.
The VPC is added to the network.
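The same VPC and subnet can also be created from the gcloud CLI. The following is a minimal sketch with placeholder names; the region and IP address range are example values.
gcloud compute networks create <vpc-name> --subnet-mode=custom
gcloud compute networks subnets create <subnet-name> --network=<vpc-name> --region=<region> --range=10.1.0.0/24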
Adding a Subnet to the Virtual Private Cloud (VPC)
You can add a subnet to your VPC.
To add a subnet:
Ensure that you are logged in to the GCP Console.
Under Networking, navigate to VPC network > VPC networks.
The VPC networks screen appears.
Select the VPC.
The VPC network details screen appears.
Click EDIT.
Under Subnets area, click Add Subnet.
The Add a subnet screen appears.
Enter the subnet details.
Click ADD.
Click Save.
The subnet is added to the VPC.
4.22.3.3 - Obtaining the GCP Image
Before creating the instance on GCP, you must obtain the image from the My.Protegrity portal. On the portal, you select the required ESA version and choose GCP as the target cloud platform. You then share the product to your cloud account. The following steps describe how to share the image to your cloud account.
To obtain and share the image:
Log in to the My.Protegrity portal with your user account.
Click Product Management > Explore Products > Data Protection.
Select the required ESA Platform Version from the drop-down.
The Product Family table will update based on the selected ESA Platform Version.
The ESA Platform Versions listed in drop-down menu reflect all versions. These include versions that were either previously downloaded or shipped within the organization along with any newer versions available thereafter. Navigate to Product Management > My Product Inventory to check the list of products previously downloaded.
The images in this section consider the ESA as a reference. Ensure that you select the required image.
Select the Product Family.
The description box will populate with the Product Family details.
Click View Products to advance to the product listing screen.
Callout | Element Name | Description |
---|---|---|
1 | Target Platform Details | Shows details about the target platform. |
2 | Product Name | Shows the product name. |
3 | Product Family | Shows the product family name. |
4 | OS Details | Shows the operating system name. |
5 | Version | Shows the product version. |
6 | End of Support Date | Shows the final date that Protegrity will provide support for the product. |
7 | Action | Click the View icon to open the Product Detail screen. |
8 | Export as CSV | Downloads a .csv file with the results displayed on the screen. |
9 | Search Criteria | Type text in the search field to specify the search filter criteria, or filter the entries using the following options: OS, Target Platform. |
10 | Request one here | Opens the Create Certification screen for a certification request. |

Select the GCP cloud target platform you require and click the View icon from the Action column.
The Product Detail screen appears.
Callout | Element Name | Description |
---|---|---|
1 | Product Detail | Shows the following information about the product: Product name, Family name, Part number, Version, OS details, Hardware details, Target platform details, End of support date, Description. |
2 | Product Build Number | Shows the product build number. |
3 | Release Type Name | Shows the type of build, such as, release, hotfix, or patch. |
4 | Release Date | Shows the release date for the build. |
5 | Build Version | Shows the build version. |
6 | Actions | Shows the following options for download: Click the Share Product icon to share the product through the cloud. Click the Download Signature icon to download the product signature file. Click the Download Readme icon to download the Release Notes. |
7 | Download Date | Shows the date when the file was downloaded. |
8 | User | Shows the user name who downloaded the build. |
9 | Active Deployment | Select the check box to mark the software as active. Clear the check box to mark the software as inactive. This option is available only after you download a product. |
Click the Share Product icon to share the desired cloud product.
If access to the cloud products is restricted and the Customer Cloud Account details are not available, then a message appears. The message lists the information that is required and the contact details for obtaining access to cloud share.
A dialog box appears, displaying your available cloud accounts.
Select the cloud account with which you want to share the Protegrity product.
Click Share.
A message box displays the command line interface (CLI) instructions, along with an option to download a detailed PDF containing the cloud web interface instructions. Additionally, the instructions for sharing the cloud product are sent to your registered email address and to your notification inbox in My.Protegrity.
Click the Copy icon to copy the command for sharing the cloud product and run the command in the CLI. Alternatively, click Instructions to download the detailed PDF instructions for cloud sharing using the CLI or the web interface.
The cloud sharing instruction file is saved in the .pdf format. You need a reader, such as Acrobat Reader, to view the file.
The cloud product is shared with your cloud account for seven (7) days from the original share date in the My.Protegrity portal. After the seven (7) day period, you must request a new share of the cloud product through My.Protegrity.com.
4.22.3.4 - Converting the Raw Disk to a GCP Image
After obtaining the image from Protegrity, you can proceed to create the instance. However, the image is provided as a disk in a raw format, which must be converted to a GCP-specific image before you create an instance. The following steps describe how to convert the raw image to a GCP-specific image.
To convert the image:
Log in to the GCP Console.
Run the following command.
gcloud compute images create <Name for the new GCP image> --source-uri gs://<Name of the storage location where the raw image is stored>/<Name of the raw image file>
For example,
gcloud compute images create esa80 --source-uri gs://stglocation80/esa-pap-all-64-x86-64-gcp-8-0-0-0-1924.tar.gz
The raw image is converted to a GCP-specific image. You can now create an instance using this image.
4.22.3.5 - Loading the Protegrity Appliance from a GCP Image
This section describes the tasks that you must perform to load the Protegrity appliance from an image that is provided by Protegrity. You can create a VM instance from the provided image using one of the following two methods:
- Creating a VM instance from the Protegrity appliance image provided
- Creating a VM instance from a disk that is created with an image of the Protegrity appliance
4.22.3.5.1 - Creating a VM Instance from an Image
This section describes how to create a Virtual Machine (VM) from an appliance image provided to you.
To create a VM from an image:
Ensure that you are logged in to the GCP.
Click Compute Engine. The Compute Engine screen appears.
Click CREATE INSTANCE. The Create an instance screen appears.
Enter the following information:
- Name: Name of the instance
- Description: Description for the instance
Select the region and zone from the Region and Zone drop-down menus respectively.
Under the Machine Type area, select the processor and memory configurations based on the requirements.
Click Customize to customize the memory, processor, and core configuration.
Under the Boot disk area, click Change to configure the boot disk.
The Boot disk screen appears.
Click Custom Images.
Under the Show images from drop-down menu, select the project where the image of the appliance is provided.
Select the image for the root partition.
Select the required disk type from the Boot disk type drop-down list.
Enter the size of the disk in the Size (GB) text box.
Click Select.
The disk is configured.
Under the Identity and API access area, select the account from the Service Account drop-down menu to access the Cloud APIs.
Depending on the selection, select the access scope from the Access Scope option.
Under the Firewall area, select the Allow HTTP traffic or Allow HTTPS traffic checkboxes to permit HTTP or HTTPS requests.
Click Networking to set the networking options.
Enter data in the Network tags text box.
Click Add network interface to add a network interface.
If you want to edit a network interface, then click the edit icon.
Click Create to create and start the instance.
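For reference, an equivalent instance can also be created using the gcloud CLI instead of the console. The following command is a minimal sketch; the instance name, zone, machine type, image name, image project, and boot disk settings are placeholders that you must adapt to your environment:
gcloud compute instances create <instance-name> --zone=<zone> --machine-type=<machine-type> --image=<gcp-image-name> --image-project=<image-project-id> --boot-disk-size=<size-in-GB> --boot-disk-type=pd-standard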
4.22.3.5.2 - Creating a VM Instance from a Disk
You can create disks using the image provided for your account. You must create a boot disk using the OS image. After creating the disk, you can attach it to an instance.
This section describes how to create a disk using an image. Using this disk, you then create a VM instance.
Creating a Disk from the GCP Image
Perform the following steps to create a disk using an image.
Before you begin
Ensure that you have access to the Protegrity appliance images.
How to create a disk using a GCP Image
To create a disk of the Protegrity appliance:
Access the GCP domain at the following URL: https://cloud.google.com/
The GCP home screen appears.
Click Console.
The GCP login screen appears.
On the GCP login screen, enter the following details:
- User Name
- Password
Click Sign in.
After successful authentication, the GCP management console screen appears.
Click Go to the Compute Engine dashboard under the Compute Engine area.
The Dashboard screen appears.
Click Disks on the left pane.
The Disks screen appears.
Click CREATE DISK to create a new disk.
The Create a disk screen appears.
Enter the following details:
- Name: Name of the disk
- Description: Description for the disk
Select one of the following options from the Type drop-down menu:
- Standard persistent disk
- SSD persistent disk
Select the region and zone from the Region and Zone drop-down menus respectively.
Select one of the following options from the Source Type option:
- Image: The image of the Protegrity appliance that is provided. Select the image from the Source Image drop-down menu.
- Snapshot: The snapshot of a disk.
- Blank: Create a blank disk.
Enter the size of the disk in the Size (GB) text box.
Select Google-managed key from the Encryption option.
Click Create.
The disk is created.
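For reference, the disk can also be created using the gcloud CLI. The following command is a minimal sketch; the disk name, image name, image project, zone, size, and type are placeholders:
gcloud compute disks create <disk-name> --image=<gcp-image-name> --image-project=<image-project-id> --zone=<zone> --size=<size-in-GB> --type=pd-standard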
Creating a VM Instance from a Disk
This section describes how to create a VM instance from a disk that is created from an image.
For more information about creating a disk, refer to section Creating a Disk from the GCP Image.
To create a VM instance from a disk:
Ensure that you are logged in to the GCP Console.
Click Compute Engine.
The Compute Engine screen appears.
Click CREATE INSTANCE.
The Create an instance screen appears.
Enter information in the following text boxes:
- Name
- Description
Select the region and zone from the Region and Zone drop-down menus respectively.
Under the Machine Type section, select the processor and memory configuration based on the requirements.
Click Customize to customize your memory, processor and core configuration.
Under Boot disk area, click Change to configure the boot disk.
The Boot disk screen appears.
- Click Existing Disks.
- Select the required disk created with the Protegrity appliance image.
- Click Select.
Under Firewall area, select the Allow HTTP traffic or Allow HTTPS traffic checkboxes to permit HTTP or HTTPS requests.
Click Create to create and start the instance.
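For reference, the instance can also be created using the gcloud CLI by attaching the existing disk as the boot disk. The following command is a minimal sketch; the instance name, zone, machine type, and disk name are placeholders:
gcloud compute instances create <instance-name> --zone=<zone> --machine-type=<machine-type> --disk=name=<disk-name>,boot=yes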
4.22.3.5.3 - Accessing the Appliance
After setting up the virtual machine, you can access the appliance through the IP address that is assigned to the virtual machine. It is recommended that you access the appliance with the administrative credentials.
If the number of unsuccessful password attempts exceeds the value defined in the password policy, the account gets locked.
For more information on the password policy for the admin and viewer users, refer here, and for the root and local_admin OS users, refer here.
4.22.3.6 - Finalizing the Installation of Protegrity Appliance on the Instance
When you install the appliance, it generates multiple security identifiers, such as keys, certificates, secrets, and passwords. These identifiers ensure that sensitive data is unique between two appliances in a network. When you receive a Protegrity appliance image, the identifiers are generated with certain preset values. If you use the security identifiers without changing their values, then security is compromised and the system might be vulnerable to attacks.
Rotating Appliance OS keys to finalize installation
Using the Rotate Appliance OS Keys tool, you can randomize the values of these security identifiers for an appliance. During the finalization process, you run the key rotation tool to secure your appliance.
If you do not complete the finalization process, then some features of the appliance, including the Web UI, may not be functional.
For example, if the OS keys are not rotated, then you might not be able to add appliances to a Trusted Appliances Cluster (TAC).
For information about the default passwords, refer to the Release Notes 10.1.0.
Finalizing ESA Installation
You can finalize the installation of the ESA after signing in to the CLI Manager.
Before you begin
Ensure that the finalization process is initiated from a single session only. If you start the finalization simultaneously from a different session, then the message "Finalization is already in progress." appears. You must wait until the finalization of the instance is successfully completed.
Additionally, ensure that the appliance session is not interrupted. If the session is interrupted, then the instance becomes unstable and the finalization process is not completed on that instance.
To finalize ESA installation:
Sign in to the CLI Manager of the newly created ESA instance using the default administrator credentials.
The following screen appears.
Select Yes to initiate the finalization process.
The screen to enter the administrative credentials appears.
If you select No, then the finalization process is not initiated.
To manually initiate the finalization process, navigate to Tools > Finalize Installation and press ENTER.
Enter the credentials for the admin user and select OK.
A confirmation screen to rotate the appliance OS keys appears.
Select OK to rotate the appliance OS keys.
The following screen appears.
To update the user passwords, provide the credentials for the following users:
- root
- admin
- viewer
- local_admin
Select Apply.
The user passwords are updated and the appliance OS keys are rotated.
The finalization process is completed.
Default products installed on appliances
The appliance comes with some products installed by default. If you want to verify the installed products or install additional products, then navigate to Administration > Installations and Patches > Add/Remove Services.
For more information about installing products, refer to the section Working with Installation and Packages in the Protegrity Installation Guide.
4.22.3.7 - Connecting to an ESA instance for DSG deployment
If you are using an instance of the DSG appliance on GCP, you must connect it to the instance of the ESA appliance. Using the CLI manager, you must provide the connectivity details of the ESA appliance in the DSG appliance.
For more information about connecting to an instance of the ESA appliance, refer to the section Setting up ESA Communication in the Data Security Gateway Guide 3.0.0.0.
4.22.3.8 - Deploying the Instance of the Protegrity Appliance with the Protectors
You can configure the various protectors that are a part of the Protegrity Data Security Platform with the instance of the ESA appliance running on GCP.
Depending on the Cloud-based environment which hosts the protectors, the protectors can be configured with the instance of the ESA appliance in one of the following ways:
- If the protectors are running on the same VPC as the instance of the ESA appliance, then the protectors need to be configured using the internal IP address of the appliance within the VPC.
- If the protectors are running on a different VPC than that of the instance of the ESA appliance, then the VPC of the instance of the ESA needs to be configured to connect to the VPC of the protectors.
4.22.3.9 - Backing up and Restoring Data on GCP
You can use a snapshot of an instance or a disk to back up or restore information in case of failures. A snapshot represents the state of an instance or disk at a point in time.
Creating a Snapshot of a Disk on GCP
This section describes the steps to create a snapshot of a disk.
To create a snapshot on GCP:
On the Compute Engine dashboard, click Snapshots.
The Snapshots screen appears.
Click Create Snapshot.
The Create a snapshot screen appears.
Enter information in the following text boxes.
- Name - Name of the snapshot.
- Description – Description for the snapshot.
Select the required disk for which the snapshot is to be created from the Source Disk drop-down list.
Click Add Label to add a label to the snapshot.
Enter the label in the Key and Value text boxes.
Click Add Label to add additional tags.
Click Create.
Ensure that the status of the snapshot is set to completed.
Ensure that you note the snapshot id.
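For reference, the snapshot can also be created using the gcloud CLI. The following command is a minimal sketch; the disk name, zone, and snapshot name are placeholders:
gcloud compute disks snapshot <disk-name> --zone=<zone> --snapshot-names=<snapshot-name>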
Restoring from a Snapshot on GCP
This section describes the steps to restore data using a snapshot.
Before you begin
Ensure that a snapshot of the disk was created before beginning this process.
How to restore data using a snapshot
To restore data using a snapshot on GCP:
Navigate to Compute Engine > VM instances.
The VM instances screen appears.
Select the required instance.
The screen with instance details appears.
Stop the instance.
After the instance is stopped, click EDIT.
Under the Boot Disk area, remove the existing disk.
Click Add Item.
Select the Name drop-down list and click Create a disk.
The Create a disk screen appears.
Under Source Type area, select the required snapshot.
Enter the other details, such as Name, Description, Type, and Size (GB).
Click Create.
The snapshot of the disk is added in the Boot Disk area.
Click Save.
The instance is updated with the new snapshot.
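For reference, the replacement disk can also be created from the snapshot using the gcloud CLI before you swap it in through the console steps above. The following command is a minimal sketch; the disk name, snapshot name, and zone are placeholders:
gcloud compute disks create <new-disk-name> --source-snapshot=<snapshot-name> --zone=<zone>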
4.22.3.10 - Increasing Disk Space on the Appliance
After creating an instance on GCP, you can add a disk to your appliance.
To add a disk to a VM instance:
Ensure that you are logged in to the GCP Console.
Click Compute Engine.
The Compute Engine screen appears.
Select the instance.
The VM instance details screen appears.
Click EDIT.
Under Additional disks, click Add new disk.
Enter the disk name in the Name field.
Select the disk permissions from the Mode option.
Select the required option from the Deletion rule option to specify whether the disk is deleted or kept when the instance is deleted.
Enter the disk size in GB in the Size (GB) field.
Click Done.
Click Save.
The disk is added to the VM instance.
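For reference, an additional disk can also be created and attached using the gcloud CLI. The following commands are a minimal sketch; the disk name, instance name, zone, and size are placeholders:
gcloud compute disks create <disk-name> --zone=<zone> --size=<size-in-GB>
gcloud compute instances attach-disk <instance-name> --disk=<disk-name> --zone=<zone>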
5 - Data Security Gateway (DSG)
5.1 - Protegrity Gateway Technology
This section provides an insight into the Protegrity gateway technology, which lets you protect data at rest as well as on the fly.
Background
The most important asset for organizations today is data. Data is being collected at an unprecedented rate. Data analysts and data mining scientists develop analytical processes to gain transformative insights from the collected data to gain corporate advantages, growth, and innovation.
This rich pool of data is commonly tied to individuals, such as employees, customers, and patients, making it a target for identity theft. The ever-increasing number of data breaches is proof that stealing data is a large and lucrative business for hackers. In an effort to stop data theft, organizations are constantly looking for innovative solutions for protecting sensitive data without affecting the use and analysis of this data.
Audience
Multiple stakeholders collaborate to deliver enterprise level data security solutions. Some are responsible for setting corporate business requirements while others own the responsibility of designing and implementing data security solutions.
The audience for this document is the following stakeholders who play a role in the data security ecosystem:
Business Owners: Focused on maximizing the value and growth delivered by their business systems. Data security concerns and security solutions may prevent business owners from executing their plans. These stakeholders are the advocates for the data and its untapped potential.
Security Professionals (CISO, Security Officers, Governance, Risk, etc.): Responsible for keeping business systems secure. They must understand the goals of the business owners and design and deliver data security solutions that offer a balance between protecting the data and enabling business usage. These security professionals:
- Set the security risk tolerance for the organization.
- Identify the data that is deemed sensitive in an organization.
- Design and implement the data security solution that meets business requirements.
- Establish the ongoing monitoring and alerting of sensitive data.
IT (DBAs, Developers, etc.): Responsible for implementing and deploying the business and data security solutions. Some organizations have a specialized IT team that is part of the security organization. In this document, this team is identified as the team that implements and deploys the data security solution, irrespective of its location in the organization chart.
System Architects: Their deep knowledge of the business infrastructure and of the corporate data security requirements makes them the central authority responsible for the technical architecture of the data security solution.
These stakeholders are involved from the initial stages of vetting data security vendors to the eventual design of the data security architecture implemented by the IT stakeholders.
What is Protegrity Gateway Technology
Protegrity Gateway Technology is an umbrella term for the new and innovative push to deliver data security solutions from Protegrity that are highly transparent to corporate infrastructures.
When adopting data security solutions, companies expect minimal impact on existing business systems and processes. In the past, data security solutions have been integrated into business applications and databases.
These approaches require changes to these systems.
The gateway is a network intermediary between systems that communicate with each other through the network. By delivering data security solutions on the network, changes to the existing systems are avoided or minimized.
The Protegrity Gateway Technology protects data on the network.
Why the Protegrity gateway technology?
The Protegrity Gateway Technology represents an extension to the Protegrity Data-Centric Audit and Protection (DCAP) platform and Protegrity Vaultless Tokenization. The largest enterprises worldwide are using these today to protect sensitive data.
The combination of the Protegrity DCAP platform, Protegrity Vaultless Tokenization, and the Protegrity Gateway Technology delivers many benefits:
Enterprise: As yet another protector in the Protegrity Data Centric Audit and Protection platform family, the Protegrity Gateways can receive and use policies from the Enterprise Security Administrator (ESA). A policy defines rules for how you want to protect sensitive data throughout your enterprise. The protected data is interoperable with other protectors in the DCAP family.
Transparent: Delivering data protection on the network eliminates the need to modify the source or destination systems. This makes the implementation of a security solution easier than if you had to modify application code or database schema.
Fast: The gateway provides the fastest mechanism to protect and unprotect data at line speed. The granularity of the security operations is very high since the operations are applied very close to the data with no latency.
Scalability: The gateways can scale vertically as well as horizontally. The vertical scaling is enabled through the addition of CPU and RAM while horizontal scaling is enabled by adding more nodes to a gateway cluster.
Configuration over Programming (CoP): The implementation of data security with the Protegrity Gateways does not require programming. Implementation is configured through an easy to use web interface. This practice is called Configuration over Programming (CoP).
Deployment Flexibility: The Protegrity Gateways as well as the Protegrity DCAP platform can be deployed on-premise, in the cloud deployment, or in a hybrid deployment architecture.
Use Cases: The Protegrity Gateway Technology is a kind of “Swiss Army Knife” for applying data security across many use cases that are described in detail in this document.
Extensibility: While CoP delivers virtually all you need to implement data security solutions using the Protegrity Gateways, you may extend the functionality through User Defined Functions written in the Python programming language.
SaaS Protection Agility: SaaS applications (Salesforce, Workday, Box, etc.) have gained popularity due to the ease with which you can add business functionality to your organization. Because of their cloud-based deployment model, SaaS applications can change quickly. The approach of implementing data security solutions with Configuration over Programming (CoP) makes it easy to keep up with these changes and avoid outages.
How Protegrity gateway protects data
Protegrity gateways deliver security operations on sensitive data by peering into the payloads that are being transmitted through the network.
The gateway intercepts standard protocols such as TCP/IP. The payloads on the backs of these protocols are scanned for sensitive data and security operations (protection or un-protection) are applied to the sensitive data as it passes through the gateway.
With the gateway approach to delivering security operations, impact to existing systems is eliminated or minimized.
Data-centric auditing and protection platform
Protegrity Gateway Technology is part of the Protegrity Data-Centric Audit and Protection family of protectors. Protegrity Gateway Technology, together with the Enterprise Security Administrator (ESA), makes up the Protegrity Data-Centric Audit and Protection Platform.
The Enterprise Security Administrator (ESA) is the central point of management for the data security policies enforced by the various protectors. Each protector is designed to accept an ESA policy that provides the rules for protecting and unprotecting sensitive data.
Security operations on sensitive data performed by any of the protectors can be audited. Audit logs based on invoked security operations are sent back to ESA for reporting and alerting purposes.
You can protect sensitive data in any business system that is secured with Protegrity. The protected data can travel in a protected state between different business systems and then be unprotected in a target system for authorized users.
Fine Grained and Coarse Grained Data Security:
Data security can span many different types of security that are differentiated based on the point where security policies are applied. Security can be applied to the perimeter by only letting some users in.
Security applied to data can be delivered in the following forms:
- It can be applied to the data as an access control layer by hiding data from unauthorized users.
- It can be applied to data by encrypting the raw storage associated with data stores. This is sometimes called coarse-grained data protection, tablespace protection, or Transparent Database Encryption.
- It can also be applied to the specific data itself with encryption or tokenization, such as the Social Security number, the email address, the name, and so on.
Protegrity Vaultless Tokenization enables you to reduce the scope of systems where sensitive data exists in the clear, with minimum to no impact of its business usage.
Together with other protectors contained in the Protegrity Data-Centric Audit and Protection platform, the Protegrity Gateway Technology products deliver on these approaches with a single product.
Configuration over Programming (CoP) brings it all together
The Protegrity Gateway Technology brings together the ability to peer into networks payloads and the security policy-based rules for protecting sensitive data with a concept called Configuration over Programming (CoP).
The diagram above depicts the gateway along with the components that together constitute CoP. Data is transmitted bidirectionally between System A and System B. The gateway acts as a network intermediary through which the transmissions pass. A set of rules called CoP Profiles defines the transformations performed on that data.
Security Officers set rules that define how the security team would like corporate security solutions to treat sensitive data. Having the security team define these rules across the corporate data asset delivers consistency in the security treatment. This helps both security and the usability of the data. Having the security team responsible for this task also delivers separation of duties. This helps to reduce or eliminate a conflict of interest in who sets the rules for protecting sensitive data and who can see the sensitive data in the clear.
IT technical resources bring it all together through CoP. CoP enables a technical resource, a CoP Administrator, to create a set of CoP profiles that blend the different aspects of delivering security or other transformations on data. A CoP Profile includes:
Data Collection: The data collection profile rules define the network protocols that are being inspected. For example, you can instruct the gateway that it will inspect HTTP or SFTP. These are standard protocols on top of which the transmission of data across networks are built.
Extend Gateway: If you have a custom protocol, the gateway can be extended and configured to accept that as well.
Data Extraction: Protocols carry many kinds of payloads that are commonly used. They carry web pages or document content that are used to transfer business system data from one application to another. CoP Profiles are configured to identify specific data within these commonly used payloads.
Extend Codecs: A set of extraction codecs are included with the gateway. As with the protocols, the data extraction codecs can also be extended to include new standard or custom codecs.
Actions on Data: Once you have defined the protocol and identified specific data within a payload, you can then apply an action or a transformation rule on the data. In the context of data security, this action is a security operation (protect or unprotect). The rules for performing the security action come from the policy rules identified by the security team.
5.2 - Protegrity gateway product
The Protegrity Gateway Technology applies security operations on the network. The Data Security Gateway (DSG) offers flexibility and can run in the following environments:
- On-premise: Appliance runs on dedicated hardware.
- Virtualized: The appliance runs on a virtual machine.
- Cloud: The appliance runs on, or as part of, a Cloud-based service.
SaaS-based business applications are being adopted at a rapid pace. Organizations use SaaS-based applications to fulfill different business needs. For example, SaaS-based CRM applications can be used to manage customer relationships. Companies purchasing SaaS applications outsource the burden of development, maintenance, and infrastructure to the SaaS vendors. A subscription contract and a browser are all that is required.
These SaaS applications store corporate data in the cloud, some of which may be sensitive. The DSG can be used to protect sensitive data as it moves from a corporate environment to the SaaS application storage in the cloud. When the data is returned, the protected sensitive data is unprotected and delivered to the intended user in a usable form.
CoP Profiles are available for certain SaaS applications. Profiles can also be created to configure the DSG to protect and unprotect sensitive data for a specific SaaS application. The DSG differentiates itself from other vendors in the following ways:
Security: DSG protects sensitive data that is stored in the SaaS cloud storage. This data is rendered in an unusable form to the system administrators who maintain the SaaS application. If the security imposed by the SaaS application is breached, the data still stays protected. Cryptographic keys used to protect and unprotect sensitive data are stored on premise. This gives the organization control over the sensitive data.
This platform can be used for Data Residency Applications that have been recently highlighted by the changes in the European safe harbor requirements.
Enterprise data security: CoP profiles use the data security policy rules to control how sensitive data is protected throughout an organization. These rules are created in ESA by the security professionals. Data that is protected in the SaaS application and that moves between business systems realizes the benefit of a consistent enterprise data security model. Secured data is interoperable between business systems. Data is thus secured at rest and in transit.
Agility: SaaS applications are built on a SaaS model. This model allows SaaS vendors to add functionality without installing new software or other complexities. A new feature is developed and made available the next time a user loads the application in their browser. Thus, the applications are subject to change.
Data security solutions that protect SaaS applications must be agile in reacting to these changes. Certain SaaS data security vendors require building new products to accommodate these changes in their security solution. DSG uses Configuration over Programming (CoP) to manage these changes. The CoP model uses profiles, which require no programming, to configure the changes.
SaaS change monitoring: Companies subscribe to SaaS applications because they readily let them add business functionality. These SaaS applications are subject to change, and these changes should not break the security solution. Protegrity maintains systems that track these SaaS changes and test them against the CoP Profiles that have been created for a specific SaaS application. These changes are identified and CoP Profile configuration modifications are provided to keep the business secure and running.
Pricing model: The DSG can be purchased with the same model that is used to purchase the SaaS applications themselves. This makes the process of adding security to the SaaS applications as seamless as possible.
Third-party integration: Often, SaaS applications communicate with other SaaS applications and business processes through RESTful APIs. DSG can be used to perform security operations between these systems. For example, data in a SaaS application like Salesforce may require protection. When data is pulled from Salesforce into another third-party application, the data may need to be in the clear. DSG can intercept these APIs and unprotect data that needs to be used in the third-party business process.
The DSG applies data security operations on the network and uses the CoP Profiles to configure a data security solution.
When building modern business applications, browsers are the primary means of interaction between users and the business system. Network based protocols, such as web services or RESTful APIs, form the way of communicating with business functions or systems.
Flexibility: DSG can be applied to several use cases that are outside the realm of SaaS applications. This “Swiss Army Knife” type of product can single-handedly account for different data security scenarios. These scenarios together will constitute a holistic and complete solution. This is a powerful addition to the development set of tools.
Transparency: Today, most data security solutions require integration with the host business system components. These components include business applications and databases. By applying security operations on the network, DSG reduces the amount of work required to protect the organization.
Enterprise: Data protection with DSG can be combined with any other protector in the Protegrity Data-Centric Audit and Protection platform. If the gateway does not meet a specific requirement, the data security solution can be fulfilled with other protectors in the platform.
The set of security scenarios described in the following section gives a glimpse of the flexibility of this product.
DSG used to protect web applications
Most applications today are based on a web interface. Web interfaces have an architecture that protects data from the browser to the Web Server with HTTPS. The Web Server terminates HTTPS and the traffic flows through Application Servers. Finally, the traffic flows into a data store. Sometimes there are business systems that process data from the databases.
From a security point of view, the data that flows after HTTPS is terminated is in the clear. A DSG added before the Web Server can terminate HTTPS and execute the CoP Profile to extract and protect any sensitive data. For example, if an SSN is entered in a web form, a specific CoP Profile rule can be created. This rule picks out the HTML label associated with the value entered and protects the SSN.
Like Web Servers and App Servers, the DSGs are stateless and can be placed behind a load balancer. Vertical and horizontal scaling can manage the throughput required to maintain business performance.
Consider the following cases:
- Business functions are performed in the App Server on sensitive data.
- Business functions are performed in the database itself within stored procedures.
In such cases, DSG provides a RESTful API Server. Database UDFs from Protegrity Database Protector can also be used.
Data security gateway as an API security gateway
APIs are the interoperability language of modern architectures. The classic use of APIs is for interfaces internal to applications. Increasingly, APIs also move data and execute processes between disparate applications.
These APIs are implemented in the form of SOAP-based Web Services or RESTful APIs. In these scenarios, various techniques can be used to implement the client and the server components, and the server component may come from a different vendor.
Irrespective of how this architecture is implemented, the DSG can be placed between the client and the server. Here, a CoP Profile can be used to specify what and how sensitive data passing in either direction will be protected or un-protected.
DSG for files
A common scenario found in enterprises today is the delivery of files into an organization from business partners. Often, the security approach to protecting data in transit from the source to the destination is SFTP.
However, when the file lands on the SFTP Server, the encryption is terminated and the sensitive fields are in the clear. This is similar to the web application scenario for the HTTPS server. The data downstream is exposed in the clear.
The DSG can be placed before the file lands on the SFTP Server. It terminates the transport security, making the payload visible to the gateway. A CoP Profile can be created to perform diverse types of security operations on the file.
Coarse Grained Protection: CoP Profile rules can be created to encrypt the entire file before it lands on the SFTP Server.
Fine Grained Protection: CoP Profile rules can be created to encrypt or tokenize specific data elements contained within the file.
DSG as an on-demand RESTful API server
RESTful APIs can be used to perform certain functions that are delivered by the REST Server. In the context of data security, business applications can make requests to the DSG as a RESTful Server for data protection.
Protecting or unprotecting data is easy. Send the XML or JSON document that contains the data to be acted on and the CoP Profile rules take care of the rest. The rule will identify the exact part of the document and act on that data. There is no need to parse that data out and then reconstruct the document after the security operation is performed.
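As a purely illustrative sketch, such an on-demand protect request could be sent with a command-line HTTP client as shown below. The host, port, URI, credentials, and payload field are hypothetical placeholders; the actual URI and behavior depend entirely on the CoP Ruleset configured on your DSG:
curl -X POST "https://<dsg-host>:<port>/<ruleset-uri>" -u <policy-user>:<password> -H "Content-Type: application/json" -d '{"name": "Joe Smith"}'
The response returns the same document structure with the configured fields protected, so the calling application does not need to parse or reconstruct the document itself.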
5.3 - Technical Architecture
System architecture
Protegrity Gateway Technology products are assembled on a layered architecture. The lower layers provide the foundational aspects of the system, such as clustering and protocol stacks. The higher layers are specialized and provide various business functions. They are building blocks that define how the gateway acts on data. Some of these building blocks include functions such as decoders for various data formats as well as data transformations for cryptography.
The gateway architecture provides standard out-of-the-box building blocks. These building blocks can be extended by the customer at each layer as per their requirements. These requirements can be security-related or requirements that will aid the customer in processing data.
The following figure shows a view of the gateway system architecture.
Platform
The Platform Layer runs on top of customer-provided hardware or virtualization resources. It includes an operating system that has been security-hardened by Protegrity. The infrastructure layer running above it is called the Protegrity Appliance Framework.
The Protegrity Appliance Framework is responsible for common services, such as inter-node communications mechanisms and clustering. Data communicated through the platform layer is passed onto the Data Collection Layer for further processing.
Data collection
The Data Collection Layer is the glue between the higher layers of the gateway and the external world. It is responsible for ingesting data into the gateway and passing it on to higher layers for further processing. Likewise, it is responsible for receiving data from the higher layers and outputting it to the external world. In TCP/IP architecture terms, this is the transport/application protocol layer of the gateway architecture.
The primary method through which the gateway interfaces with the external world is over the network. Data is typically delivered to and from the gateway by means of application-layer protocols such as HTTP, SFTP, and SMTP. The gateway terminates these protocol stacks. These protocols can be extended to any protocol that a company has created for its own requirements. Custom protocols can be created using the gateway's User Defined Functions (UDFs).
Data delivered through these protocols is passed to the Data Extraction Layer for further processing.
Data extraction layer
The Data Extraction Layer is at the heart of fine-grained data inspection capabilities of the gateway. The Data Extraction layer is split into two logical functions:
Codecs: These are the parsers or data encoders/decoders targeted at the following native formats:
- XML
- JSON
- ZIP
- Open-Office file formats such as DOCX, PPTX, and XLSX.
Extractors: These are responsible for fine-grained extraction of select data from data outputted by the codec components. These include mechanisms such as Regular Expressions, XPath, and JSONPath.
The subsets of data extracted by the Data Extraction Layer are passed up to the Action Layer. Here, they may be transformed for data security or acted upon for some other business logic. Transformed data subsets received from the Action Layer are substituted in their original place in the original payload. The modified payload is encoded and delivered down to the Data Collection layer for outputting to the external world.
The building blocks in this layer can be extended to include custom requirements through UDFs. UDFs enable customers to build and extend the gateway with their own data decoding and extraction logic using the Python programming language.
Data extracted from payloads is passed to the Action Layer for further processing.
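As a rough conceptual illustration only (this is not DSG syntax), the combination of a codec and an extractor behaves much like chaining a format-aware parser with a pattern matcher on the command line; the file name and XPath expression below are hypothetical:
xmllint --xpath 'string(/person/name)' person.xml | grep -oE '[A-Za-z]+'
Here the XPath expression plays the role of the codec-level parsing and extraction, and the regular expression splits the extracted value into individual words for downstream processing.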
Action layer
The Action Layer is responsible for operating on the data passed to it by the Data Extraction Layer. The extracted data is acted upon by actions in the Action Layer.
Operating on this data may include transforming the data for security purposes. This includes the following data security capabilities offered by the core Protegrity technology:
- protection by means of encryption or tokenization
- un-protection
- re-protection
- hashing
- masking
This layer also includes a UDF component. It enables customers to extend the system with their own action transformation logic using Python programming language.
5.3.1 - Configuration over Programming (CoP) Architecture
CoP overview
The CoP is a key paradigm used in the Protegrity Gateway Technology. The CoP technology enables a CoP administrator to create a set of rules that instructs the gateway on how to process data that traverses it.
The CoP technology is also key from a user experience perspective. The structure of the rules is as important as the rules themselves. The set of rules, their structure, and an easy-to-use interface result in a powerful concept called CoP.
The DSG is fundamentally architected on the CoP principle. CoP suggests that configuration should be the preferred way of extending or customizing a system as opposed to programming. Users configure rules in the UI to define step-by-step processing of incoming messages. This allows DSG users to manage any kind of input message so long as they have corresponding rules configured in the DSG. The rules are generally categorized as extraction and transformation.
The DSG product evolution started with Static CoP, where the request processing rules are configured ahead of time. However, DSG now also offers a concept called Dynamic CoP. This allows JSON structure rule definitions to be dynamically injected in the request messages and executed on the fly.
DSG users configure the CoP Rulesets to construct a REST API that is suitable to their environment. DSG's RESTful interface is high-level. Its API users are not exposed to low-level underlying crypto API message sequences, such as open and close session. Further, low-level parameters, such as the data element name or session handle, are not exposed either. The user identity can be obtained in one of the following ways:
- Pre-configured in DSG, derived from the HTTP basic authentication user.
- Dynamically provided through the API as an HTTP header. The name of the header is user-configurable.
- Derived from some part of the HTTP message body.
The following figure shows high-level functionality of the DSG RESTful interface.
For simplicity, the DSG example above shows a plain text string that is tokenized word by word. The tokens are returned in the 200 OK response. DSG comes with a whole battery of codecs. Codecs are message parsers that allow DSG to parse and process complex payload bodies. DSG's codecs include the following payload formats:
- XML
- JSON
- Text
- Binary
- CSV
- Fixed-Width
- MS-Office
- Google Protocol Buffers
- HPE ArcSight CEF
- Date-Time
- PGP
Further, DSG allows custom extraction and transformation rules to be written in Python and plugged-in within DSG CoP Rulesets.
The following sections describe the Ruleset, the Ruleset structure, and the Ruleset execution engine, followed by an example.
CoP Ruleset
The DSG includes built-in standard protocol codecs. These allow configuration-driven payload parsing and processing for most data security use cases.
The Ruleset describes a set of instructions that the gateway uses to transform data as it traverses the gateway in any direction. The various kinds of Rule objects currently available in the gateway are illustrated in the following figure.
A typical Ruleset is constructed from the Extract and Transform rules.
The core rules available today are as follows:
Extract: Extraction rules are responsible for extracting smaller pieces of data from larger bodies of data. By engaging existing codecs, they are also capable of interpreting data per predefined encoding schemes. While the Extraction rules function as data filters, they do not actually manipulate data. Therefore, they are branch nodes in the Ruleset tree and have child rules below them.
Transform: Transformation rules are responsible for manipulating data passed into them. Typical data security use cases will employ Transformation rules for the following:
- protection
- un-protection
- re-protection
- masking
- hashing
Transformations are not limited to the out-of-the-box security actions. Customers can build their own actions, and extend the out-of-the-box transformations, with Transformation User Defined Functions (UDFs).
Log: The Log rule object allows you to add log entries to the DSG log. Users can define the level of logging that needs to be reflected in the log. The decision of where to save the log can also be made in this rule.
Exit: The Exit option acts as a terminating action and the rules are not processed further.
Set User Identity: The Set User Identity rule object comes into effect if username details are part of the payload. The Protegrity Data Protection transformation leverages the value set in this rule so that the subsequent transformation action calls are performed as the set user.
Profile Reference: An external profile can be referenced using the Profile Reference action. This rule transfers the control to a separate batch of rules grouped in a profile.
Error: Use this action to add a custom response message for any invalid content.
Dynamic Injection: Use Dynamic CoP to send rules for extraction and transformation as part of a request header along with the data for protection in request message body.
Set Context Variable: Use this action type to set a variable to any value that can then be used as an input to other rules. The value set by this rule is kept throughout the rule lifecycle.
Ruleset Structure
Rulesets are organized in a hierarchical structure where Extract rules are branch nodes and other rules, such as Transform rules, are leaf nodes. In other words, an Extract rule extracts specific data from the payload, and a Transform action is then performed on the extracted data.
Rules are compartmentalized into Profile containers. Profile containers can be enabled or disabled and they can also be referenced by a Profile Reference rule.
Ruleset Tree of Trees (ToT)
A typical Ruleset is recursed and processed in sequence. With this mechanism, the sibling rules that belong to a given parent, and all the child rules that belong to a sibling rule, are recursed and executed sequentially. This occurs from top to bottom with no provision for conditional branching.
However, this disallows decision-based, mutually exclusive execution of individual child rules on various parts of the extracted data within the same extraction context. Examples include a row in a CSV file, groups within a regular expression, or multiple XPaths within an XML document. This leads to extraction or parsing of the same data multiple times, even though various parts of the extracted data within the same extraction context may need to be processed differently.
The RuleSet ToT feature is an enhancement to the RuleSet algorithm that addresses this drawback. With the RuleSet ToT feature, an extraction parent rule can have multiple child rules that are executed mutually exclusively, based on a condition applied in the parent rule. The feature allows various parts of the extracted data to be processed downstream using different profile references. Since the profile references are sub-trees in and of themselves, this feature adds a Tree-of-Trees structural notation to the CoP rulesets.
The following compares the layout and execution paths of traditional rulesets with the ToT rulesets:
In the above example, a CSV payload needs to be processed as per the following requirements:
- Column 1 needs to be protected using an Alphanumeric data element.
- Column 6 needs to be protected using a Date data element.
- Column 9 needs to be protected using a Unicode data element.
The traditional RuleSet strategy involves extracting or parsing the same CSV payload three times, once for each column that needs protection using a different data element, as shown on the left side. In contrast, a ToT-enabled RuleSet requires extracting the CSV payload only once; the values extracted from different columns can be sent down different child rules that apply different protection data elements. Consequently, the overall CSV payload processing time is reduced substantially.
In this release, the Ruleset ToT feature supports the payloads:
Ruleset execution engine
Rulesets are executed by the Ruleset engine that is built into the gateway. The Ruleset engine is responsible for the cascaded execution of the Ruleset. The behaviors of Rule objects range from data processing (extract and transform) to controlling the execution flow of the rule tree (exit), along with supplementary activities that are performed and logged.
The Ruleset engine will recursively traverse the Ruleset node by node. For example, Extract nodes will extract data that will be transformed with a Transform rule node. Following this, the recursion stack is rolled up and the reverse process happens. Here, data is encoded and packaged back to its original format and sent to the intended recipient.
Ruleset and ruleset execution example
The following example illustrates the Ruleset, the Ruleset structure, and the Ruleset execution. The example starts with an HTTP POST with an XML payload containing a person's information. The Ruleset is a hierarchy of three Extract nodes with the Transform rule as the end leaf node.
Extract Rule: The Extract Rule extracts the XML document from the message body.
Extract Rule: A second Extract Rule takes the XML document and parses out the data that is to be transformed, the person's name. This is done by using XPath.
Extract Rule: A third Extract Rule splits the name into individual words, in this example the first and the last name. This is done by using REGEX.
Transform Rule: The Transform Rule takes each word and applies an action. In this example, both the first name and the last name are protected.
The next set of rules performs operations in reverse and prepares the contents to go back to the sender. The same Extraction rules perform reverse processing as the recursion unwinds.
Extract Rule: On the return trip, an Extract Rule is used to combine the protected first and last name into a single string – Name.
Extract Rule: This rule will place the Name back into the XML document.
Extract Rule: The final Extract rule will place the XML document back into the message body to be sent back to the sender with the name protected.
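As a purely illustrative sketch, the request that triggers this Ruleset could be sent as follows; the host, port, URI, and payload are hypothetical and depend on your tunnel and Ruleset configuration:
curl -X POST "https://<dsg-host>:<port>/<ruleset-uri>" -H "Content-Type: application/xml" -d '<person><name>Joe Smith</name></person>'
The response carries the same XML document back to the sender with the name protected.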
5.3.2 - Dynamic Configuration over Programming (CoP)
Ruleset execution can be segregated into Static CoP and Dynamic CoP. In some cases, the payload type and structure are predictable and known. Rulesets for such payloads, and the processing of their data, are defined using Static CoP. A user who defines Rulesets using Static CoP must be authorized and have permissions to access the DSG nodes.
Organizations may be divided into disparate systems or applications. Each system user may send custom payloads on the fly to the DSG nodes, with little scope for predictability. Providing such users with access to DSG nodes for defining Rulesets is risky. In such situations, you can use Dynamic CoP for protection. You send the rules for extraction and transformation as part of a request header, along with the data for protection in the request message body.
While creating Rulesets for Dynamic CoP, use the Profile Reference rule for data transformation instead of the Transform rule. The security benefits of using the Profile Reference rule are higher than those of the Transform rule, because the requests can be triggered from outside the secure network perimeter of an organization.
Dynamic CoP provides the following advantages:
- Flexibility to send custom requests based on the payload at hand, without prior customization of the configured ruleset.
- Restrict or configure the allowed actions that users can send in the request header.
The following figure illustrates how Static CoP RuleSets are combined with Dynamic CoP Rulesets as part of a given REST API or Gateway transaction:
- The Static CoP Administrator creates the tunnel configurations and Ruleset for the Static CoP rule execution. This static rule forms the base for the Dynamic rule to follow. Based on the URI defined in both the Static CoP rule and Dynamic CoP rule, the entire Ruleset structure is executed when a request is received.
- The REST API or gateway clients can be application developers of multiple applications in an organization who want to protect their data on the fly. The developers create a Dynamic CoP structure based on the allowed list of action types and payloads that they are permitted to send in the request header.
- The Dynamic CoP structure provides an outline of how the request header must be constructed.
- When the request is sent, the header hooks to the Dynamic Injection action type that is part of the Ruleset structure. The Ruleset executes successfully and protected data is sent as a response.
Dynamic CoP structure
Based on the type of Ruleset execution to be achieved, Dynamic CoP can either be implemented with ToT or without ToT.
The following structure explains Ruleset structure when Dynamic CoP is implemented without ToT.
The following structure explains Ruleset structure when Dynamic CoP is implemented with ToT.
In the figure, profileName is the profile reference to the profile that the ToT structure follows. Ensure that you understand the Ruleset structure and hierarchy on the DSG node before configuring the Dynamic CoP with ToT rule. Refer to Dynamic rule and Dynamic rule injection.
Use case implemented using Static CoP
The following image explains how the use case would be implemented if static CoP is used.
The individual steps are described as follows.
Step 1 – This step extracts the body of the HTTP request message. The extracted body content is the entire JSON document in our example. The extracted output of this rule is fed to all its children sequentially. In this example, there is only one child of this extraction rule, which is step 2.
Step 2 – This step parses the JSON input as a text document, so that a regular expression can be evaluated to find sensitive data in the document. This step yields the person name strings “Joe Smith” and “Alice Miller” to its child rule. In this example, there is only one child of this extraction rule, which is step 3.
Step 3 – This step splits the extracted data from the previous rule into words. Step 2 yielded all person names in the document as strings, and this rule splits those strings into individual names that can then be protected word by word. This is done by running a simple REGEX on the input. Each word, such as “Joe”, “Smith”, and “Alice”, is fed into the child rule nodes of this rule one by one. In this use case, there is only one child of this rule, which is step 4.
Step 4 – This step does the actual data protection. Since this rule is a transformation node, a leaf node without any children, the rule returns the resulting ciphertext or token to its parent.
At the end of Step 4, the RuleSet recursion stack will unwind. Each branch Rule node will reverse its previous action such that the overall data can be returned to its original format. Going back in the reverse direction, Step 4 will return tokens to Step 3 which will concatenate them together into a string. Step 2 will substitute the strings yielded from Step 3 into the original JSON document in place of the original plaintext strings. Step 1 that was responsible for extracting the body of the HTTP request will replace what has been extracted with the modified JSON document. A layer of platform logic outside the RuleSet tree execution will create an HTTP response message. This message will convey the modified JSON document back to the client.
Use case implemented using Dynamic CoP
The following image explains how the use case would be implemented if dynamic CoP is used.
Among the 4 steps described in implementing Static CoP, steps 2 and 3 are the ones that dictate the real business logic. The steps may change on a request-by-request basis. Step 1 defines extraction of HTTP request message body, which is standard in any REST API request processing. Step 2 defines how sensitive data is extracted from input JSON message. Step 3 defines how a string is split into words for word-by-word protection. Step 4 defines the data protection parameters.
The logic for step 4 can either be injected through Dynamic CoP or used through Static CoP using the profile references. The protection rule is statically configured in the system and can be referenced from step 3’s Dynamic CoP JSON rule. Users may choose to use statically configured protection rules. Profile references can be used for an added layer of security controls and governance.
In the example, step 4’s logic will be injected through Dynamic CoP. It shows how to convey data element name and policy user’s identity through Dynamic CoP.
Dynamic CoP Ruleset Configurations
The Dynamic CoP JSON uses the same JSON structure as the Static CoP JSON. The only difference is that Dynamic CoP JSON is dynamically injected. To start off with our Dynamic CoP JSON, parts of the corresponding Static CoP JSON have been copied. Dynamic CoP JSON can be created programmatically. Also, one can use canned JSON template strings and substitute the variable values in it on a request-by-request basis.
The RuleSet JSON fragment for steps 2, 3, and 4 is shown in the following figure. This JSON is delivered as-is in an HTTP header, configured as “X-Protegrity-DCoP-Rules” in our example. The DSG extracts the value of this configured header and injects it while executing the RuleSet tree.
The following figure shows the skeletal Static CoP RuleSet configuration in ESA WebUI for enabling Dynamic CoP.
The following figure shows how the Dynamic CoP rules are conveyed to DSG in an HTTP header field and the JSON response output in the Postman tool.
The JSON response output is the same in both our Static and Dynamic CoP examples.
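As an illustration of how the dynamically injected rules could be delivered at request time, the following sketch sends a request through the DSG with the Dynamic CoP JSON in the configured header. It is a minimal, hypothetical example: the DSG endpoint URL, the request payload file, and the rules file are placeholders, and the contents of dcop-rules.json must follow the RuleSet JSON structure deployed in your environment.

```
# Hypothetical sketch: deliver the Dynamic CoP rules in the configured HTTP header.
# dsg.biloxi.com, request.json, and dcop-rules.json are placeholder names.
# The rules JSON should be compacted to a single line so that it fits in a header value.
curl -X POST "https://dsg.biloxi.com/api/customers" \
  -H "Content-Type: application/json" \
  -H "X-Protegrity-DCoP-Rules: $(cat dcop-rules.json)" \
  --data @request.json
```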
5.4 - Deployment Scenarios
The Protegrity Gateway Technology has the flexibility to be deployed in any of the following environments:
- On-premise.
- Private cloud.
- Public cloud.
- Hybrid combinations, provided the necessary network communication is available to its consumers.
The deployment approach is based on the business and security requirements for your use cases and organization. For example, data security may require an on-premise deployment. This might be necessary to keep the key material within physical geography or corporate logical borders. These borders mark the security perimeter that unprotected sensitive data should not cross.
Security domain borders may also be subject to the type of sensitive data. Functioning as a demarcation point, the gateway can protect and unprotect sensitive data crossing these borders. A business data security team will require the gateway to be deployed within their secured domain for the subject sensitive data.
The following diagrams depict different deployment scenarios.
When protecting SaaS Applications, DSG is deployed on-premise of the company that is using the SaaS application as a business system.
When protecting Web Applications that are deployed on-premise, the DSG is deployed on-premise of the company that is hosting the web application infrastructure.
Companies have the option of deploying the DSG on the private or public cloud environment.
5.5 - Protegrity Methodology
The Protegrity Methodology helps organizations implement a data security solution through a set of steps that starts with data governance and ends with rolling out the implemented solution.
Data governance
Corporate Data Governance, often based on a board level directive, will specify the data that is sensitive to an organization. The source of these data elements may come from regulatory requirements or from internal corporate security goals that go beyond standard compliance. These are the data elements that will be the focus of designing and delivering a data security solution.
Discovery
During the Discovery step, Protegrity Solution Architects will collaborate with the customer corporate IT and Corporate Security stakeholders. They will identify the location and use of the sensitive data that has been identified by Data Governance.
A Discovery document is created that contains the data flows, technologies used (databases, applications, etc.), performance, SLA requirements, and who is authorized to view protected sensitive data in the clear.
Solution design
Based on the results of the Discovery Step, Solution Architects will work with the customer Architecture stakeholders to design and document a data security solution. This solution will meet the requirements of Data Governance.
This step involves methodically tracing through the Discovery document, following the path of sensitive data as it flows through different technologies. The goal is to deliver end-to-end data security from the point of entry or creation, through business processes, and ultimately until the data is archived or deleted.
At different points during this step, prototyping may be used to assess the impact of a solution over another. The data security solution is recorded in a Solution Design document.
Protegrity Data Security Solutions have the goal of delivering security to match the risk tolerance of the organization while recognizing the trade-off between security and usability.
Product installation
The Solution Design document will identify the list of Protegrity products that will be used to satisfy the customer data security requirements. These products need to be installed on the target environments.
The installation step also involves basic configuration and verification of connectivity among the product components of the designed solution.
Solution configuration
The Protegrity platform has the flexibility to protect whatever data your organization deems sensitive and to use the most appropriate protection method. Configuring the solution means that data security policies will be created and deployed to the Protegrity protectors. The policies identify the data that needs to be protected, how that data is to be protected, and who should have access to that data. These policies are deployed to all Protegrity protection agents and guide the protectors in all data security operations.
In addition to the data security policy, the protectors are configured to bind the data protection operations to a target layer, system or environment. The Data Security Gateway (DSG) is integrated at the network level. Therefore, it is likely that the configuration step will also involve network firewall, load balancer, and IDP configuration or integration. Specific Gateway Rulesets for the designed solution will also be identified and set as part of this step.
Initial migration
With any data security solution where sensitive data is transformed, that is, protected, all existing data needs to be protected as well. This process is known as Initial Migration. Initial migration replaces the sensitive data that already exists in the system in unprotected form with its protected form. This step exists to avoid having unprotected and protected data mixed together.
Testing
A data security solution adds security functions that protect and unprotect sensitive data. These security operations may be constrained to certain individuals or processes. This step in the Protegrity Methodology requires testing of the data security solution before rolling it out.
This step ensures that data is protected or unprotected when it should be, and that business systems continue to function as usual. This behavior is controlled by the data security policy.
Production rollout
The final step is to roll the solution out and make it available for users.
5.6 - Planning for Gateway Installation
This section provides information about prerequisites that must be met before DSG installation can be started.
Planning Overview
This section can be used as a guide and a checklist for what needs to be considered before the gateway is installed.
This document contains many examples of technical concepts and activities, like the ones described in this section, that are part of using and configuring the gateway. To facilitate the explanation of these concepts and activities, a fictitious organization called Biloxi Corp is used. Biloxi Corp has purchased a SaaS called ffcrm.com, and the Protegrity gateway is used to protect Biloxi data that is stored in ffcrm.com.
Minimum Hardware Requirements
The performance of the gateway nodes is primarily dependent on the capabilities of the hardware they are installed on. While optimal hardware specifications depend on the individual product usage environment, the recommended minimum hardware specifications for production environments are as follows:
- CPU: 4 Cores
- Disk Size: 320 GB
- RAM: 16 GB
- Network Interfaces: 2
Note: The hardware configuration required might vary based on the actual usage or amount of data and logs expected.
The gateway software appliances are certified on the following server platforms.
ESA
As with all Protegrity protectors, gateway instances are centrally managed and controlled from the ESA. As a prerequisite to gateway installation, a working instance of the ESA is required.
Note: For information about the ESA version supported by this release of the DSG, refer to the Data Security Gateway v3.2.0.0 Release Notes.
The ESA is the central management component that holds the policy-related data, data store, key material, and the DSG configurations, such as Certificates, Rulesets, Tunnels, Global Settings, and some additional configurations in the gateway.json file. By design, the ESA is responsible for pushing the DSG configuration to all the DSG nodes in a cluster.
If you create a configuration on a DSG node and a deploy operation is then performed on the ESA, the configuration on the DSG node is overwritten by the configuration on the ESA and is lost. It is therefore recommended that you create all DSG configurations on the ESA, because the same configurations are pushed to all the DSG nodes in the cluster. This ensures that the configurations on all the DSG nodes in a cluster are identical.
Ensure that you push the DSG configurations by clicking Deploy or Deploy to Node Groups from the ESA Web UI. These options are available on the Cluster and Ruleset screens, and clicking either of them pushes all the DSG configurations from the ESA to the DSG nodes in the cluster.
Forwarding Logs in DSG
The log management mechanism for Protegrity products forwards the logs to Insight on the ESA.
The following services forward the logs to Insight:
- td-agent: Forwards the appliance logs to Insight on the ESA.
- Log Forwarder: Forwards the data security operation logs, such as protect, unprotect, and reprotect, and the PEP server logs to Insight on the ESA.
Ensure that Analytics is initialized on the ESA. The initialization of Analytics is required for displaying the Audit Store information on the Audit Store Dashboards. Refer to Initializing analytics on the ESA for initializing Analytics. Refer to Forwarding logs to Audit Store for configuring the DSG to forward appliance logs to the ESA.
5.6.1 - LDAP and SSO Configurations
The DSG is dependent on the ESA for user management. The users that are part of an organization's AD are configured in the ESA internal LDAP.
If your organization plans to implement SSO authentication across all the Protegrity appliances, then you must enable SSO on the ESA and the DSG. The DSG depends on the ESA for user and access management and it is recommended that user management is performed on the ESA.
Before you can configure SSO with the DSG, you must complete the prerequisites on the ESA.
For more information about completing prerequisites on the ESA, refer to section Implementing SSO on DSG in the Protegrity Appliances Overview Guide 9.2.0.0.
After completing the prerequisites, ensure that the following order of SSO configuration on the DSG nodes is followed.
- Enable SSO on the DSG node.
- Configure the Web browser to add the site to trusted sites.
- Login to the DSG appliance.
Enabling SSO on DSG
This section provides information about enabling SSO on the DSG nodes. It involves setting the ESA FQDN and enabling the SSO option.
Before SSO is enabled, ensure that the following prerequisite is completed.
- Ensure that the ESA FQDN is available.
To enable SSO on the DSG node:
Login to the DSG Web UI.
Navigate to Settings > Users.
Click the Advanced tab.
In the Authentication Server field, enter the ESA FQDN.
Click Update to save the server details.
Click the Enable toggle switch to enable the Kerberos SSO.
Repeat step 1 to step 6 on all the DSG nodes in the cluster.
Configuring SPNEGO Authentication on the Web Browser
Before implementing Kerberos SSO for Protegrity appliances, you must ensure that the Web browsers are configured to perform SPNEGO authentication. The tasks in this section describe the configurations that must be performed on the Web Browsers. The recommended Web browsers and their versions are as follows:
- Google Chrome version 84.0.4147.135 (64-bit)
- Mozilla Firefox version 79.0 (64-bit) or higher
- Microsoft Edge version 84.0.522.63 (64-bit)
The following sections describe the configurations on the Web browsers.
Configuring SPNEGO Authentication on Firefox
The following steps describe the configurations on Mozilla Firefox.
To configure on the Firefox Web browser:
Open Firefox on the system.
Enter about:config in the URL.
Type negotiate in the Search bar.
Double click on network.negotiate-auth.trusted-uris parameter.
Enter the FQDN of the appliance and exit the browser.
Configuring SPNEGO Authentication on Internet Explorer
The following steps describe the configurations on Internet Explorer 11.
To configure on the Internet Explorer Web browser:
Open Internet Explorer on the machine.
Navigate to Tools > Internet options > Security.
Select Local intranet.
Enter the FQDN of the appliance under sites that are included in the local intranet zone.
Select OK.
Configuring SPNEGO Authentication on Chrome
With Google Chrome, you must set the allowlist of servers that Chrome will negotiate with. If you are using a Windows machine to log in to the appliances, then the configurations entered in other browsers are shared with Chrome, and you need not add a separate configuration.
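On Linux or macOS, where Chrome does not inherit the Windows Internet settings, the negotiation allowlist can be supplied when starting the browser. The following is a hedged sketch: the domain is a placeholder, the exact flag name may differ across Chrome versions, and the same setting can usually be managed centrally through enterprise policy instead.

```
# Hypothetical sketch: allow SPNEGO/Kerberos negotiation with appliances under biloxi.com.
google-chrome --auth-server-whitelist="*.biloxi.com"
```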
Logging In to the Appliance
After configuring the required SSO settings, you can login to the DSG using SSO.
To login to the DSG using SSO:
Open the Web browser and enter the FQDN of the DSG in the URL.
The following screen appears.
Click Sign in with SSO.
The Dashboard of the DSG appliance appears.
5.6.2 - Mapping of Sensitive Data Primitives
Corporate Governance will typically identify the data that is deemed sensitive to an organization. Examples of this data include PCI DSS data such as credit cards, Personally Identifiable Information (PII), and Protected Health Information (PHI). PII can include data elements such as First Name, Last Name, Social Security Numbers, E-mail Addresses, or any data element that can identify an individual.
When using the gateway to protect sensitive data, the data must be identified through techniques exposed in a CoP Profile. For example, if the requirement is to protect sensitive data in a public SaaS, the identified sensitive data will need to be mapped to the corresponding fields in web forms rendered by the SaaS. These web forms are typically part of SaaS web pages where end users input sensitive data in SaaS for adding new data or searching existing data. A later section on the gateway configuration describes how the form fields will be targeted for protection through configuration rules.
5.6.3 - Network Planning
Connecting the gateway to a network involves address allocation and network communication routing for the service consumers. Network planning also includes gateway cluster sizing and addition of Load Balancers (LB) in front of the gateway cluster.
To protect data in a SaaS application, you gather a list of public domain and host names through which the SaaS is accessed over the Internet.
In case of internal enterprise applications, this relates to identifying networking address (IP addresses or host names) of relevant applications.
Gateway network interfaces can be divided into two categories: administrative and service. Administrative interfaces, such as the Web UI and command line (SSH), are used to control and manage the gateway configuration and monitor its state, while service interfaces are used to deliver the service the gateway is set up to provide. It is important that two NICs are created before you install the DSG.
For network security reasons, the DSG isolates the administrative interfaces from the service interfaces by allocating each a separate network address. This enables physical separation when more than one physical NIC is available; otherwise, logical separation is achieved by designating two different IP addresses for admin and service use. Production implementations may strive to achieve further isolation for the service interface by separating inbound and outbound channels, in which case three IP addresses will be required.
Network firewalls situated between the consumers and the gateway interfaces (admin or service), and between the gateway and the systems it is expected to communicate with, will need to be adjusted to allow this traffic.
Note: The supported SSL/TLS versions are SSLv3, TLSv1.0, TLSv1.1, TLSv1.2, and TLSv1.3.
If you are utilizing the DSG appliance, the following ports must be configured in your environment.
Port Number/TYPE (ECHO) | Protocol | Source | Destination | NIC | Description |
---|---|---|---|---|---|
22 | TCP | System User | DSG | Management NIC (ethMNG) | Access to CLI Manager |
443 | TCP | System User | DSG | Management NIC (ethMNG) | Access to Web UI |
The following ports must be configured for communication between the DSG and the ESA.
Port Number/TYPE (ECHO) | Protocol | Source | Destination | NIC | Description | Notes (If any) |
---|---|---|---|---|---|---|
22 | TCP | ESA | DSG | Management NIC (ethMNG) | | |
443 | TCP | ESA | DSG | Management NIC (ethMNG) | Communication in TAC | |
443 | TCP | DSG | ESA and Virtual IP address of ESA | Management NIC (ethMNG) | Downloading certificates from ESA | |
8443 | TCP | DSG | ESA and Virtual IP address of ESA | Management NIC (ethMNG) | | |
15600 | TCP | DSG | Virtual IP address of ESA | Management NIC (ethMNG) | Sending audit events from DSG to ESA | |
389 | TCP | DSG | Virtual IP address of ESA | Management NIC (ethMNG) | Authentication and authorization by ESA | |
10100 | UDP | DSG | ESA | Management NIC (ethMNG) | | This port is optional. If the appliance heartbeat services are stopped, this port can be disabled. |
5671 | TCP | DSG | Virtual IP address of ESA | Management NIC (ethMNG) | Messaging between Protegrity appliances | While establishing communication with ESA, if the user notification is not set, you can disable this port. |
The following ports must also be configured when the DSG is configured in a TAC.
Port Number/TYPE (ECHO) | Protocol | Source | Destination | NIC | Description | Notes (If any) |
---|---|---|---|---|---|---|
22 | TCP | DSG | ESA | Management NIC (ethMNG) | Communication in TAC | |
8585 | TCP | ESA | DSG | Management NIC (ethMNG) | Cloud Gateway cluster | |
443 | TCP | ESA | DSG | Management NIC (ethMNG) | Communication in TAC | |
10100 | UDP | ESA | DSG | Management NIC (ethMNG) | Communication in TAC | This port is optional. If the Appliance Heartbeat services are stopped, this port can be disabled. |
10100 | UDP | DSG | ESA | Management NIC (ethMNG) | | This port is optional. If the Appliance Heartbeat services are stopped, this port can be disabled. |
10100 | UDP | DSG | DSG | Management NIC (ethMNG) | Communication in TAC | This port is optional. |
Based on the firewall rules and network infrastructure of your organization, you must open ports for the services listed in the following table.
Port Number/TYPE (ECHO) | Protocol | Source | Destination | NIC | Description | Notes (If any) |
---|---|---|---|---|---|---|
123 | UDP | DSG | Time servers | Management NIC (ethMNG) of ESA | NTP Time Sync Port | You can change the port as per your organizational requirements. |
514 | TCP | DSG | Syslog servers | Management NIC (ethMNG) of ESA | Storing logs | You can change the port as per your organizational requirements. |
N/A* | N/A* | DSG | Applications/Systems | Service NIC (ethSRV) of DSG | Enabling communication for DSG with different applications in the organization | You can change the port as per your organizational requirements. |
N/A* | N/A* | Applications/System | DSG | Service NIC (ethSRV) of DSG | Enabling communication for DSG with different applications in the organization | You can change the port as per your organizational requirements. |
Note: N/A* - In DSG, service NICs are not assigned a specific port number. You can configure a port number as per your requirements.
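Once the firewall rules are in place, basic reachability of the required ports can be confirmed from the DSG OS console. The sketch below is hypothetical: esa.example.com stands in for the ESA host name or virtual IP address, and the port list follows the DSG-to-ESA table above.

```
# Hypothetical sketch: verify that the DSG can reach the ESA on the required TCP ports.
for port in 443 8443 15600 389 5671; do
  nc -zv esa.example.com "$port"
done
```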
NIC Bonding
The NIC is a device through which appliances, such as ESA or DSG, on a network connect to each other. If the NIC stops functioning or is under maintenance, the connection is interrupted, and the appliance is unreachable. To mitigate the issues caused by the failure of a single network card, Protegrity leverages the NIC bonding feature for network redundancy and fault tolerance.
In NIC bonding, multiple NICs are configured on a single appliance. You then bind the NICs to increase network redundancy. NIC bonding ensures that if one NIC fails, the requests are routed to the other bonded NICs. Thus, failure of a NIC does not affect the operation of the appliance.
You can bond the configured NICs using different bonding modes.
CAUTION: The NIC bonding feature is applicable only for the DSG nodes that are configured on the on-premise platform. The DSG nodes that are configured on cloud platforms, such as AWS, GCP, or Azure, do not support this feature.
Bonding Modes
The bonding modes determine how traffic is routed across the NICs. MII monitoring (MIIMON) is a link monitoring feature that is used to detect the failure of NICs added to the appliance. The monitoring frequency is 100 milliseconds. The following modes are available to bind NICs together:
- Mode 0/Balance Round Robin
- Mode 1/Active-backup
- Mode 2/Exclusive OR
- Mode 3/Broadcast
- Mode 4/Dynamic Link Aggregation
- Mode 5/Adaptive Transmit Load Balancing
- Mode 6/Adaptive Load Balancing
The following two bonding modes are supported for appliances:
- Mode 1/Active-backup policy: In this mode, multiple NICs, which are slaves, are configured on an appliance. However, only one slave is active at a time. The slave that accepts the requests is active and the other slaves are set as standby. When the active NIC stops functioning, the next available slave is set as active.
- Mode 6/Adaptive load balancing: In this mode, multiple NICs are configured on an appliance. All the NICs are active simultaneously. The traffic is distributed sequentially across all the NICs in a round-robin method. If a NIC is added or removed from the appliance, the traffic is redistributed accordingly among the available NICs. The incoming and outgoing traffic is load balanced and the MAC address of the actual NIC receives the request. The throughput achieved in this mode is high as compared to mode 1.
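When OS console access is available, the currently active bonding mode and the state of the slave NICs can be inspected through the kernel's bonding interface. This is a hedged sketch that assumes the bond interface is named bond0; the actual interface name may differ in your environment.

```
# Hypothetical sketch: inspect the bonding mode, MII status, and slave NICs for bond0.
cat /proc/net/bonding/bond0
```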
Prerequisites
Ensure that you complete the following prerequisites before binding the interfaces:
- The IP address is assigned only to the NIC on which the bond is initiated. You must not assign an IP address to the other NICs.
- The NICs are on the same network.
Creating a Bond
This section describes the procedure to create a bond between NICs.
Note: Ensure that the IP addresses of the slave nodes are static.
Note: Ensure that you have added a default gateway for the Management NIC (ethMNG) and Service NIC (ethSRV0). For more information about adding a default gateway to the Management NIC and Service NIC, refer to the section Configuring Default Gateway for Network Interfaces.
Note: When a bond is created with any service NIC (ethSRVX) in the Web UI, its status indicator appears red - which may indicate it is not functioning properly - even though the service NIC (ethSRVX) is active. To change the service NIC (ethSRVX) status indicator to green, click Refresh.
To create a bond:
On the DSG Web UI, navigate to Settings > Network > Network Settings.
The Network Settings screen appears.
Under the Network Interfaces area, click Create Bond corresponding to the interface on which you want to initiate the bond.
The following screen appears.
Note: Ensure that the IP address is assigned to the interface on which you want to initiate the bond.
Select one of the following modes from the drop-down list:
- Active-backup policy
- Adaptive Load Balancing
Select the interfaces with which you want to create a bond.
Select Establish Network Bonding.
A confirmation message appears.
Click OK.
The bond is created and the list appears on the Web UI.
Removing a Bond
The following procedure describes the steps to remove a bond between NICs.
To remove a bond:
On the DSG Web UI, navigate to Settings > Network > Network Settings.
The Network Settings screen appears with all the created bonds as shown in the following figure.
Under the Network Interfaces area, click Remove Bond corresponding to the interface on which the bonding is created.
A confirmation screen appears.
Select OK.
The bond is removed and the interfaces are visible on the IP/Network list.
Viewing a Bond
Using the DSG CLI Manager, you can view the bonds that are created between all the interfaces.
To view a bond:
On the DSG CLI Manager, navigate to Networking > Network Settings.
The Network Configuration Information Settings screen appears.
Navigate to Interface Bonding and select Edit.
The Network Teaming screen displaying all the bonded interfaces appears as shown in the following figure.
Resetting the Bond
You can reset all the bonds that are created for an appliance. When you reset the bonds, all the bonds created are disabled. The slave NICs are reset to their initial state, where you can configure the network settings for them separately.
To reset all the bonds:
On the DSG CLI Manager, navigate to Networking > Network Settings.
The Network Configuration Information Settings screen appears.
Navigate to Interface Bonding and select Edit.
The Network Teaming screen displaying all the bonded interfaces appears.
Select Reset.
The following screen appears.
Select OK.
The bonding for all the interfaces is removed.
5.6.4 - HTTP URL Rewriting
Operating in the in-band mode of data protection against SaaS applications, DSG is placed between the end-user’s client devices and the SaaS servers on the public Internet. For DSG to intercept the traffic between end-user devices and SaaS servers, the top level public Internet Fully Qualified Domain Names (FQDN) that are made accessible by the SaaS need to be identified. Once identified, these FQDNs shall be mapped to internal URLs pointed at DSG and the corresponding URL mappings shall be configured in DSG.
Like most websites on the public Internet, SaaS applications are accessed by their users through one or more Uniform Resource Locators (URL). These are typically HTTP(S) URLs that are made up of FQDNs, e.g. https://www.ffcrm.com
, which are uniquely routable on the public Internet. A SaaS may be accessible on the public Internet through many public facing URLs. An identification of all such public URLs is essential for ensuring that all the traffic between the end users and the SaaS can be routed through DSG. A list of top level Internet facing FQDNs of a SaaS may be gathered from the following sources:
- SaaS Support Documentation: SaaS providers typically provide publicly available documentation where they publish their externally visible FQDNs. This information typically exists so that the IT teams of customer enterprises can allowlist access to these FQDNs through their corporate firewalls.
- Using Browser Tools or Network Sniffers: As an alternative or in addition, the IT team at Biloxi Corp may attempt to find the public FQDNs of ffcrm.com themselves. This can be achieved by making use of network sniffers - possibly an embedded function within Biloxi Corp’s corporate firewall or a forward proxy.
An alternative is to use ‘developer tools’ in the user’s web browser. Browser developer tools show a complete trace of HTTP messaging between the browser and the SaaS. If all relevant sections of ffcrm.com SaaS have been accessed, this trace will reveal the relevant public FQDNs made visible by ffcrm.com.
As a result of performing the above steps, let’s consider that the IT team at Biloxi Corp has identified the following top level public FQDNs exposed by ffcrm.com.
www.ffcrm.com
login.ffcrm.com
crm.ffcrm.com
analytics.ffcrm.com
For DSG to interwork traffic between its two sides (end users and SaaS), DSG relies on FQDN translation. The following figure shows FQDN translation performed by DSG.
The above domain names will be mapped to internal domain names pointed at DSG. For instance, DSG will be configured with following URL mappings.
Incoming Request URL | Outgoing Request URL |
---|---|
https://www.ffcrm.biloxi.com | https://www.ffcrm.com |
https://login.ffcrm.biloxi.com | https://login.ffcrm.com |
https://crm.ffcrm.biloxi.com | https://crm.ffcrm.com |
https://analytics.ffcrm.biloxi.com | https://analytics.ffcrm.com |
This domain name mapping can be generalized by configuring a Domain Name Service (DNS) with a global mapping of *.ffcrm.biloxi.com to the DSG, which will apply to any of the subdomains www, login, crm, analytics, or any other that might be added by ffcrm.com in the future.
Ultimately, end users will be consuming the service through the internal host names. Techniques like Single Sign On (SSO) using Security Assertion Markup Language (SAML) can be used to force users to use internal host names even if direct access to the external ones is attempted.
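Before the internal DNS mapping is rolled out broadly, routing of an internal FQDN through the DSG can be spot-checked from a client machine. The sketch below is hypothetical: 203.0.113.10 is a placeholder for the service IP address of the DSG (or of the load balancer fronting it), and the host name follows the Biloxi Corp mapping above.

```
# Hypothetical sketch: force the internal FQDN to resolve to the DSG and confirm the response.
curl -v --resolve www.ffcrm.biloxi.com:443:203.0.113.10 https://www.ffcrm.biloxi.com/
```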
5.6.5 - Clustering and Load Balancing
The DSG deployed as a cluster of appliance nodes provides the necessary overall system capacity as well as high availability through redundancy. Nodes within a DSG cluster operate autonomously in an active/active arrangement.
Depending on the capabilities of the underlying server hardware, traffic patterns, and a few other factors, a single DSG node can process a certain amount of traffic. The size of a DSG cluster is determined by comparing the capacity of a single node against the customer's performance requirements. For more information about the specific metrics collected in a controlled performance test environment, contact Protegrity Support for the DSG Performance Report.
Let’s consider that the IT team at Biloxi Corp has established that they need three DSG nodes to meet their capacity and availability requirements. To hide DSG cluster topology from the end-users, the cluster is fronted by an off-the-shelf Load Balancer.
While considering load balancing of HTTP traffic, since DSG nodes are stateless in and of themselves and across HTTP transactions, the DSG places minimal requirements on Load Balancers (LBs). For instance, LBs fronting a DSG cluster are not required to maintain session stickiness or affinity. In fact, these LBs may be configured to operate at the lower layers of the TCP/IP protocol stack, such as the network or transport layer, while being unaware of the application layer (HTTP).
Note: When available, DSG logging will leverage X-Real-IP HTTP Header added by Load Balancers to represent the actual client address.
The following figure shows a DSG cluster comprised of three gateway nodes fronted by a Load Balancer.
5.6.6 - SSL Certificates
The use of Secure Sockets Layer (SSL) prevents a man-in-the-middle from tampering with or eavesdropping on the communication between two parties. Though it may not be a requirement, it is certainly a best practice to secure all communication channels that may be used to transmit sensitive data. The DSG's function is to transform data transmitted through it. To achieve that over a secured communication channel, the DSG must terminate the inbound TLS/SSL communication. This step may be skipped when no inbound SSL is used; otherwise, an SSL server certificate and key are needed for the DSG to properly terminate inbound SSL connections.
During the installation of the DSG, a set of self-signed SSL certificates is generated for your convenience. You may use them in non-production environments. It is recommended, however, to use your own certificates for production use.
No further action is required if you choose to use the service certificate generated during install time.
Certificates and keys can be uploaded for use with the DSG cluster after the installation. Should you choose to use certificates generated elsewhere, be prepared to upload both the certificate and the associated key. Supported certificate file formats are .crt and .pem.
You may need to generate your own self-signed certificate with specific attributes such as hostname, key strength, or expiration date.
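If you generate your own self-signed certificate, OpenSSL can produce the certificate and key pair in the supported .pem format. The following is a minimal sketch under assumed values: the common name, key size, and validity period are placeholders and should be adjusted to match your hostname and security policy.

```
# Hypothetical sketch: generate a self-signed certificate and key for the DSG service interface.
# dsg.biloxi.com, the 2048-bit key size, and the 365-day validity are placeholder values.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout dsg-service.key -out dsg-service.pem \
  -days 365 -subj "/CN=dsg.biloxi.com"
```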
5.7 - Installing the DSG
Installing the DSG On-Premise
The Data Security Gateway (DSG) installation requires an existing ESA. It serves as a single point of management for the data security policy, rules configuration, and on-going monitoring of the system. This section provides information about the recommended order of the steps to install a DSG appliance.
Before you begin
Ensure that an ESA 10.0.0 is installed.
For more information about installing the ESA 10.0.0, refer to the sections Installing Appliance On-Premise and Installing Appliances on Cloud Platforms.
Ensure that HubController service is running on the ESA.
Ensure that Policy Management (PIM) has been initialized on the ESA. The initialization of PIM ensures that cryptographic keys for protecting data and the policy repository have been created.
For more information about initializing the PIM, refer here.
Ensure that the FIPS mode is disabled on the ESA.
Ensure that Analytics component is initialized on the ESA. The initialization of Analytics component is required for displaying the Audit Store information on Audit Store Dashboards.
For more information about initializing Analytics, refer here.
Ensure that the list of patches is available for the DSG 3.3.0.0 release. The following table describes the patch details.
Patch | Description |
---|---|
ESA_PAP-ALL-64_x86-64_10.0.0.xxxx.DSGUP.pty | This patch is applied on the ESA to extend the ESA with the DSG Web UI. |
DSG_PAP-ALL-64_x86-64_3.3.0.0.x.iso | The .iso image is used to install the DSG appliance. |
Installation Order
To set up the DSG, it is recommended to use the following installation order.
Order of installation | Description | Affected Appliance | Reference |
---|---|---|---|
1 | Install the DSG v3.3.0.0 ISO. | DSG | Installing DSG |
2 | Configure the Default Gateway for the Service NIC using the DSG CLI Manager. | DSG | Configuring Default Gateway for Service NIC (ethSRV0) using the DSG CLI Manager |
3 | Create a Trusted Appliance Cluster (TAC) on the DSG | DSG | Create a TAC |
4 | Add the DSG nodes to the existing Trusted Appliance Cluster from the Cluster tab. | DSG | Adding a DSG node |
5 | Update the host details of ESA on DSG. If the DNS and name server are properly configured, then this step is optional. | DSG | Updating the host details |
6 | Create a Trusted Appliance Cluster (TAC) on the ESA | ESA | Create a TAC |
7 | Add the ESA nodes to the existing Trusted Appliance Cluster from the TAC screen. | ESA | Adding an ESA node |
8 | Update the host details of DSG on ESA. If the DNS and name server are properly configured, then this step is optional. | ESA | Updating the host details |
9 | Set ESA communication between the DSGs and ESA. | DSG | Set communication |
10 | Apply the DSG v3.3.0.0 patch (ESA_PAP-ALL-64_x86-64_10.0.0.xxxx-DSGUP.pty) on the ESA v10.0.0. Before applying a patch on the ESA, it is recommended to take a full OS backup from the ESA Web UI. For more information about taking a full OS backup from the ESA Web UI, refer here. | ESA | Extending ESA with DSG Web UI |
11 | Enable Scheduled Tasks on Insight | ESA | Enabling Scheduled Tasks on Insight |
12 | Configure the DSG to forward the logs to Insight* on the ESA. | DSG | Forwarding Logs to Insight |
Note: By default, the default_80 HTTP tunnel is disabled for security reasons. If you want to use the default_80 HTTP tunnel in any service, then you must enable it from the Web UI.
To enable the default_80 HTTP tunnel, on the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Transport. Then, click the Tunnels tab. Select the default_80 HTTP tunnel and click Edit.
After the default_80 tunnel is enabled, you must restart the gateway. On the Tunnels tab, click Deploy to All Nodes to restart the gateway.
5.7.1 - Installing the DSG On-Premise
Installing DSG
This section provides information about installing the DSG ISO.
To install the DSG:
Insert and mount the DSG installation media ISO.
Restart the machine and ensure that it boots up from the installation media.
The following DSG splash screen appears.
Press ENTER to start the install procedure.
The following screen appears only when you reimage to the DSG v3.3.0.0 from any older DSG version. You must enter YES to proceed with the installation, press Tab to select OK, and then press ENTER.
The following installation options screen appears. Select INSTALL Destroy content and install a new copy, press Tab to select OK, and then press ENTER.
Note: If an existing file system is detected on the host hard drive, then a warning may appear prior to this step. Users who choose to proceed with the install will be prompted with additional confirmation before the data on the file system is lost.
The system restarts and the Network interfaces screen with the network interfaces detected appears.
Press Tab to select the management network interface, which will be the management NIC for the DSG node. Then, proceed by pressing Tab to select Select and press Enter.
Note: The selected network interface will be used for communication between the ESA and the DSG.
The DSG appliance attempts to detect a DHCP server to setup the network configuration. If the DHCP server is detected, then the Network Configuration Information screen appears with the settings provided by the DHCP server. If you want to modify the auto-detected settings, then press Tab to select Edit and press Enter to update the information.
Press Tab to select Apply and press ENTER.
Note: If a DHCP server is detected, then the Select a node screen appears. Select the ESA IP address that you want to use for the ESA communication from the list.
The following dialog appears when the DHCP server is not detected. Press Tab to select Manual, and press Enter to provide the IP address manually or Retry to attempt locating the DHCP server again.
The Network Configuration Information dialog appears. You must enter the network configuration and select Apply to continue.
Note: On the DSG, the Management Interface is used for communication between the ESA and the DSG, and accessing the DSG Web UI. The Service Interface is used for handling the network traffic traversing through the DSG.
For more information about the management interface and the service interface, refer to the section Network Planning.
Press Tab to select the time zone of the host, press Tab to select Next, and then press ENTER.
Press Tab to select the nearest location, press Tab to select Next, and then press ENTER.
Press Tab to select the required option, press Tab to select OK, and then press ENTER.
Press Tab to select the required option and then press ENTER.
Press Tab to select the required option and then press ENTER.
Select Enable, press Tab to select OK, and then press ENTER to provide the credentials for securing the GRand Unified Bootloader (GRUB).
Note: GRUB is used to provide enhanced security for the DSG appliance using a username and password combination.
For more information about using GRUB, refer to the section Securing the GRand Unified Bootloader (GRUB) in the Protegrity Appliances Overview Guide 9.2.0.0.
CAUTION: The GRUB option is available only for on-premise installations.
Enter a username, password, and password confirmation on the screen, select OK and press ENTER.
Note: The requirements for the Username are as follows:
- It should contain a minimum of three and maximum of 16 characters.
- It should not contain numbers or special characters.
Note: The requirements for the Password are as follows:
- It must contain at least eight characters.
- It must contain a combination of letters, numbers, and printable characters.
Press Tab to set the user passwords, and then press Tab to select Apply and press Enter.
Note: It is recommended that strong passwords are set for all the users.
For more information about password policies, refer to the section Strengthening Password Policy in the Protegrity Appliances Overview Guide 9.2.0.0.
Enter the IP address or hostname for the ESA. Press Tab to select OK and press ENTER. You can specify multiple IP addresses separated by commas.
The Forward Logs to Audit Store screen appears.
Note: If the IP address or hostname of ESA is not provided while installing the DSG, then the user can add the ESA through ESA Communication.
Select the ESA that you want to connect with, and then press Tab to select OK and press ENTER.
The ESA Selection screen appears.
Note: If you want to enter the ESA details manually, then select the Enter manually option. You must enter the ESA IP address when this option is selected.
Provide the username and password for the ESA that you want to communicate with, press Tab to select OK, and then press ENTER.
Enter the IP Address and Network Mask to configure the service interface and press Tab to select OK and press ENTER.
CAUTION: For ensuring network security, the DSG isolates the management interface from the service interface by allocating each with a separate network address. Ensure that two NICs are added to the DSG.
Select the Cloud Utility AWS v2.2, press Tab to select OK, and then press ENTER to install the utility. The Cloud Utility AWS v2.2 utility must be selected if you plan to forward the DSG logs to AWS CloudWatch. If you choose to install the Cloud Utility AWS v2.2 utility later, you can install this utility from the DSG CLI using the Add or Remove Services option after installing the DSG.
Note: For more information about forwarding the DSG logs to AWS CloudWatch, refer to the section Forwarding logs to AWS CloudWatch.
Select all the options except Join Cloud Gateway Cluster, press Tab to select Set Location Now and press ENTER.
CAUTION: Ensure that the Join Cloud Gateway Cluster option is deselected. It is recommended that the user adds the DSG node from the Cluster tab after the DSG installation is completed.
For more information about adding a node to a cluster, refer to the section Adding a DSG node.
Select the ESA that you want to connect with, and then press Tab to select OK and press ENTER.
The ESA selection screen appears.
Note: If you want to enter the ESA details manually, then select the Enter manually option. You will be asked to enter the ESA IP address or hostname when this option is selected.
Enter the ESA administrator username and password to establish communication between the ESA and the DSG. Press Tab to select OK and press Enter.
The Enterprise Security Administrator - Admin Credentials screen appears.
Enter the IP address or hostname for the ESA. Press Tab to select OK and press ENTER. You can specify multiple IP addresses separated by comma.
The Forward Logs to Audit Store screen appears.
After successfully establishing the connection with the ESA, the following Summary dialog box appears.
Press Tab to select Continue and press Enter to continue to the DSG CLI manager.
A Welcome to Protegrity Appliance dialog box appears.
Login to the DSG CLI Manager.
Navigate to Administration > Reboot and Shutdown.
Select Reboot and press Enter.
Provide a reason for restarting the DSG node, select OK and press Enter.
Enter the root password, select OK and press Enter.
The DSG node is restarted.
Login to the DSG Web UI.
Click the (Help) icon, and then click About.
Verify that the DSG version is reflected as DSG 3.3.0.0.
Login to the ESA Web UI using the administrator credentials.
Navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster.
Click Deploy to push the DSG Ruleset configurations to the DSG nodes.
Note: The DSG Ruleset will be pushed only if you Add the DSG node after the installation.
Login to the DSG Web UI using the administrator credentials.
Navigate to Logs > Appliance.
Click Cloud Gateway - Event Logs, and select Gateway.
Verify that the startup logs do not display any errors.
5.7.2 - Installing DSG on cloud platforms
Installation of the DSG on cloud platforms involves the following steps:
5.7.2.1 - Finalizing the instance.
After the DSG instance is launched, you must finalize the DSG installation. Finalization rotates the Protegrity-provided keys and certificates so that they are regenerated, as a security best practice.
Note: It is recommended to finalize the installation of the DSG instance using the Serial Console provided by the cloud platform. Do not finalize the installation of the DSG instance using an SSH connection.
To finalize the DSG installation:
Connect to the serial console of the instance on the cloud platform.
Login using the administrator credentials for the DSG. Refer to the release notes for the credentials.
Press Tab to select Yes and press Enter to finalize the installation.
The finalize installation confirmation screen appears.
If you select No during finalization, then the DSG installation does not complete.
Perform the following steps to complete the finalization of the DSG installation on the DSG CLI manager.
Enter the administrator credentials for the DSG instance, press Tab to select OK and press Enter.
Note: The credentials for logging in to the DSG are provided in the DSG 3.1.0.0 readme.
Press Tab to select Yes and press Enter to rotate the required keys, certificates, and credentials for the appliance.
Configure the default users' passwords, press Tab to select Apply, and press Enter to continue. Set strong passwords as per the password policy.
Press Tab to select Continue and press Enter to complete the finalization of the DSG installation.
Login to the DSG CLI Manager.
Navigate to Administration > Reboot and Shutdown.
Select Reboot and press Enter.
Provide a reason for rebooting the DSG, select OK and press Enter.
Enter the root password, select OK and press Enter.
The DSG node is rebooted.
Log in to DSG with the new passwords.
The finalization of the DSG installation completes successfully.
5.7.2.2 - Launching an instance.
AWS
Launch the instance on AWS using the following steps.
Log in to the AWS UI.
Launch an instance from the AMI provided by Protegrity. Refer to .
Select an instance type that covers the following minimum configurations:
- 4 core CPU
- 16 GB memory
Select the required network configurations.
Add a storage with a minimum of 64 GB.
Configure the required details and launch the instance.
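If you prefer scripting the launch instead of using the AWS console, the following is a hedged sketch using the AWS CLI. Every identifier shown (AMI ID, instance type, key pair, security group, subnet, and device name) is a placeholder; choose values that satisfy the minimum configuration listed above and your network design.

```
# Hypothetical sketch: launch a DSG instance from the Protegrity-provided AMI.
# All IDs and names below are placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.xlarge \
  --key-name dsg-keypair \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":64}}]'
```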
Azure
Launch the instance on Azure using the following steps.
Log in to the Azure UI.
Create a disk from the BLOB provided by Protegrity. Refer to .
Create a virtual machine from the disk created in step 2. Refer to .
While creating the virtual machine, select an instance type that covers the following minimum configurations:
- 4 core CPU
- 16 GB memory
Select SSH public key in the Authentication type option. Select the required network configurations.
Add a storage with a minimum of 64 GB.
Configure the required details and create the virtual machine.
GCP
Launch the instance on GCP using the following steps.
Log in to the GCP UI.
While creating the virtual machine, select a Machine Configuration type that covers the following minimum configurations:
- 4 core CPU
- 16 GB memory
Under Boot disk, select the image provided by Protegrity.
Set the boot size to a minimum of 64 GB.
Configure the required details and create the virtual machine.
5.7.2.3 - Prerequisites on cloud platforms
AWS
The following prerequisites apply for installing DSG on AWS.
- ESA of the compatible version is installed and available.
- A trusted appliance cluster is created on ESA.
- Analytics component is initialized on ESA.
- PIM is initialized on ESA.
- An Amazon account with the required credentials and permissions is available.
- The AMI generated by Protegrity for installing the DSG is available.
- Create an instance with two network interfaces.
Azure
The following prerequisites apply for installing DSG on Azure.
- ESA of the compatible version is installed and available.
- A trusted appliance cluster is created on ESA.
- Analytics component is initialized on ESA.
- PIM is initialized on ESA.
- An Azure account with the required credentials and permissions is available.
- The BLOB generated by Protegrity for installing the DSG is available.
- Create an instance with two network interfaces.
GCP
The following prerequisites apply for installing DSG on GCP.
- ESA of the compatible version is installed and available.
- A trusted appliance cluster is created on ESA.
- Analytics component is initialized on ESA.
- PIM is initialized on ESA.
- A GCP account with the required credentials and permissions is available.
- The disk generated by Protegrity for installing the DSG is available.
- Create an instance with two network interfaces.
5.7.3 - Updating the host details
Updating the host details of DSG on the ESA node
Perform the following steps to update the host details of DSG on the ESA node:
Note the FQDN of DSG.
On the ESA CLI manager, go to Administration > OS Console.
Edit the hosts file using the following command.
vi /etc/ksa/hosts.append
Add the FQDN of DSG.
Run the following command.
/etc/init.d/networking restart
Exit the command prompt.
Updating the host details of ESA on the DSG node
Perform the following steps to update the host details of ESA on the DSG node:
Note the FQDN of ESA.
On the DSG CLI manager, go to Administration > OS Console.
Edit the hosts file using the following command.
vi /etc/ksa/hosts.append
Add the FQDN of ESA.
Run the following command.
/etc/init.d/networking restart
Exit the command prompt.
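For reference, the entries added to /etc/ksa/hosts.append are assumed here to follow the standard hosts-file layout of an IP address followed by the FQDN. The addresses and names below are hypothetical placeholders; on the ESA you add the DSG nodes, and on the DSG you add the ESA nodes.

```
# Hypothetical sketch: example entries for /etc/ksa/hosts.append (placeholder values).
203.0.113.20  esa01.biloxi.com
203.0.113.30  dsg01.biloxi.com
```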
5.7.4 - Additional steps after changing the hostname or FQDN
Perform the following steps if the hostname or FQDN of a DSG node is changed.
- Remove this DSG node from the DSG TAC.
- On the DSG Web UI, navigate to System > Trusted Appliances Cluster.
- Under the Management drop down, select Leave Cluster.
- If the custom certificates are used, regenerate these certificates with the updated hostname.
- If the custom certificates are not used, regenerate the certificate using the following command.
python3 -m ksa.cert.manager --installation-apply
Caution: Run this command only on the DSG node where the hostname or FQDN has changed.
- Create a TAC on the DSG node or join an existing TAC.
For more information about creating a TAC, refer to Create a TAC. - Run the ESA communication process between DSG and ESA.
For more information about set ESA communication, refer to Set communication process. - Update the host details of DSG on ESA.
For more information about updating the host file, refer to Updating the host details. - Restart the DSG UI service from the ESA Web UI.
- Navigate to System > Services.
- Restart the DSG UI service.
- To register the DSG node with an existing TAC, run the following script.
/opt/protegrity/alliance/3.3.0.0.5-1/bin/scripts/register_dsg_tac.sh
If the hostname or FQDN of the ESA node is changed, perform the following steps.
- Ensure that the updated hostname or FQDN of ESA is mapped to the DSG node.
For more information about updating the host file, refer to Updating the host details. - If the custom certificates are used, regenerate these certificates with the updated hostname.
- If the custom certificates are not used, regenerate the certificate using the following command.
python3 -m ksa.cert.manager --installation-apply
Caution: Run this command only on the ESA node where the hostname or FQDN has changed.
- To choose the latest certificates, run the following steps.
- On the DSG Web UI, navigate to Settings > Network > Manage Certificates.
- Under the Management area, select Change Certificates.
- Under CA Certificate(s), unselect the CA certificate from ESA. Click Next.
- Under Server Certificates, retain the default settings. Click Next.
- Under Client Certificates, ensure that only the latest DSG System client certificate and key is selected.
- Click Apply.
- After updating the certificates, ensure that the set ESA communication steps are performed on the DSG node.
For more information about set ESA communication, refer to Set communication process.
5.8 - Enhancements in DSG 3.3.0.0
One of the key takeaways for 3.3 is the concept of backward compatibility. Previous versions of DSG were compatible only with a subsequent ESA version. Version 3.3.0.0 forms a base that allows future versions of DSG to be compatible with multiple ESA versions. A version of DSG can be compatible with the current and earlier versions of ESA. For this release, DSG is only compatible with ESA v10.0.0. Some improvements for DSG 3.3.0.0 are as follows.
Upgrade
Similar to the earlier releases, the upgrade to v3.3.0.0 is done through a patch. The v3.3.0.0 patch must be installed on a v3.2.0.0 machine. Note that a direct upgrade to v3.3.0.0 is possible only from v3.2.0.0. Other versions of DSG must be upgraded to v3.2.0.0 before proceeding with the v3.3.0.0 upgrade. An intermediate v3.2.0.0 hotfix must be installed on v3.2.0.0 before installing the v3.3.0.0 patch. For more information about the upgrade, refer here.
Trusted appliances cluster
The creation of a Trusted appliances cluster (TAC) in 3.3.0.0 is different from that of previous releases. In releases prior to 3.3.0.0, a single cluster was created with the ESA and DSG nodes together. From v3.3.0.0, the process to create a cluster changes: separate clusters must be created for ESAs and DSGs. For information about TAC, refer to Trusted appliances cluster.
GPG agent
In 3.3.0.0, the GPG agent runs as a separate application. A separate service for the GPG agent is added in the UI. The commands to create keys are different from the earlier versions. For more information about the GPG agent, refer here.
Install
A new option for updating the host details appears while setting up ESA communication. For more information, refer to Updating the host details.
If the certificates are already synchronized, they are not synchronized again.
Backward compatibility upgrades are delivered through the DSG patch applied on the ESA.
5.9 - Trusted appliances cluster
The Trusted appliances cluster (TAC) in v3.3.0.0 is markedly different from that of the earlier versions. The following figure illustrates a sample TAC.
Starting with DSG 3.3.0.0, separate clusters are created for ESAs and DSGs. A separate cluster is required for each unique DSG major/minor version, and different major/minor versions of DSGs must not be combined in a single TAC. DSGs and ESAs must not be combined in a TAC. Use the set ESA communication utility to link the DSGs to the ESA.
While running the install or the upgrade process, add the FQDN of the ESAs and DSGs in the hosts file of every node in the cluster. In the upcoming releases, multiple clusters can be created. Using TAC labels, one can identify which cluster a node belongs to. A TAC label can be added from the CLI Manager. For more information about adding a TAC label, refer to Updating Cluster Information using the CLI Manager.
The DSG cluster can be viewed from the Cluster screen on the ESA UI. On the UI, go to Cloud Gateway > 3.3.0.0 {build number} > Cluster. The DSG nodes in the cluster are displayed.
This setup of TAC sets a stage for the upcoming releases, where DSGs can communicate with various versions of ESAs.
In a cluster, the ESA communicates with a healthy DSG. However, if that DSG becomes unhealthy or is removed from the cluster, the communication might be lost. The ESA must connect to a DSG to deploy the policies and CoP packages. If a connection attempt fails, the ESA tries to reconnect with another healthy DSG in the cluster. Once connected, the communication is established again and configurations can be deployed in the cluster. The maximum number of connection attempts the ESA makes is configured in the ksa.json file; the default is 3, and it can be adjusted by updating the retry_count value in that file.
5.10 - Upgrading to DSG 3.3.0.0
The upgrade of DSG is performed by installing the v3.3.0.0 patch. Note that a direct upgrade to v3.3.0.0 is possible only from v3.2.0.0. The following versions must be upgraded to v3.2.0.0 before upgrading to v3.3.0.0.
- 3.0.0.0
- 3.1.0.1
- 3.1.0.2
- 3.1.0.3
- 3.1.0.4
- 3.1.0.5
Upgrading DSG and ESA
The DSG patch is installed on the ESA to extend the DSG functionality on the ESA. It allows the ESA to deploy configurations to the other DSG nodes. The following figure illustrates the DSG component upgrade on the ESA.
The following figure illustrates the DSG upgrade.
Upgrade process
Upgrading the DSG version involves a series of steps that must be performed on ESA and DSG. The following order illustrates the upgrade process.
Before you begin
- Ensure that the ESA is on v9.2.0.0.
- If the ESA is on a version prior to v9.2.0.0, upgrade the ESA to v9.2.0.0.
- Extend the DSG on ESA by installing the v3.2.0.0 patch on the ESA.
- Ensure that the DSGs are on v3.2.0.0. If a DSG is on a version prior to v3.2.0.0, upgrade the DSG to v3.2.0.0.
- Ensure that the 3.2.0.0 HF-1 patch is installed on all the DSGs and the ESA.
- Ensure that communication is established between the ESA and the DSGs.
- Ensure that the DSGs and the ESA are in a cluster and that all the DSGs in the cluster are healthy.
- Ensure that the ESAs and DSGs are accessible through their hostnames. If not, update the host details of the ESA and all the DSGs in the hosts file of each node.
Upgrade Procedure
The following figure illustrates a sample setup that will be upgraded to v3.3.0.0.
- Run the following commands on any one DSG in the cluster. These commands gather and store the details of all DSGs in the cluster. The details are then used to create a cluster of DSGs.
tar zxvfp DSG_PAP-ALL-64_x86-64_3.3.*UP.pty --strip-components=1 -C . installer/alliance_gateway_migration_scripts_v3.9.2.tgz
tar zxvfp alliance_gateway_migration_scripts_v3.*.tgz ./create_tac_nodes.pyc
python create_tac_nodes.pyc --save-dsg-details -f FILE
In reference to the figure, run these commands on node A.
- Install the v3.3.0.0 patch sequentially on all the DSGs in the cluster.
- Run the following command on the DSG to create a cluster. This command must be run only on the DSG where step 1 was performed. In reference to the figure, run this command only on node A.
python create_tac_nodes.pyc --create-dsg-tac -f FILE
- Run the set communication process between an upgraded DSG and the ESA. Ensure that only Upgrade host settings for DSG is selected. For more information about set ESA communication, refer to the set communication process.
- Install the v10.0.0 patch to upgrade the ESA.
- Install the v3.3.0.0 patch on the ESA to extend the DSG capabilities on the ESA.
Canary Upgrade
In a canary upgrade, the DSG nodes are re-imaged to v3.3.0.0. The DSG image is installed afresh on an existing or a new system.
Before proceeding with the upgrade, back up the PEP server configuration from the DSG nodes. Run the following steps.
- Login to the DSG Web UI.
- Navigate to Settings > System.
- Under the Files tab, download the pepserver.cfg file.
- Repeat step 1 to step 3 on each DSG node in the cluster.
Consider the following figure.
Run the following steps on ESA A.
- Remove ESA A from the cluster.
- Upgrade the ESA A to v10.0.0.
- If the ESA is not accessible through host name, add the following host details:
- Host details of ESA B.
- Host details of node A, node B, node C, and node D.
- Create a cluster.
Run the following steps on node A.
- Remove node A from the cluster.
- Re-image node A to v3.3.0.0.
- If the ESA is not accessible through host name, add the following host details:
- Host details of ESA A and B.
- Host details of node B, node C, and node D.
- Run the set communication for node A to communicate with the upgraded ESA A.
- Restore the pepserver.cfg file that was backed up for node A.
- Create a cluster on the node.
Run the following steps on node B.
- Remove node B from the cluster.
- Re-image node B to v3.3.0.0.
- If the ESA is not accessible through host name, add the following host details:
- Host details of ESA A and B.
- Host details of node A, node C, and node D.
- Restore the pepserver.cfg file backed up for node B.
- Join node B to the cluster created on node A.
Run the following steps on node C.
- Remove node C from the cluster.
- Re-image node C to v3.3.0.0.
- If the ESA is not accessible through host name, add the following host details.
- Host details of ESA A and B.
- Host details of node A, node B, and node D.
- Restore the pepserver.cfg file backed up for node C.
- Join node C to the cluster created on node A.
Run the following steps on node D.
- Remove node D from the cluster.
- Re-image node D to v3.3.0.0.
- If the ESA is not accessible through host name, add the following host details.
- Host details of ESA A and B
- Host details of node A, node B, and node C.
- Restore the pepserver.cfg file that was backed up for node D.
- Join node D to the cluster created on node A.
Run the following steps on ESA B.
- Remove ESA B from the cluster.
- Upgrade ESA B to v10.0.0.
- If the ESA is not accessible through host name, add the following host details:
- Host details of ESA A.
- Host details of node A, node B, node C, and node D.
- Join ESA B to the cluster created on ESA A.
The following figure illustrates the upgraded setup.
5.10.1 - Post installation/upgrade steps
Running Docker commands
Run the following commands after installing or upgrading the DSG to 3.3.0.0.
- On the ESA CLI Manager, navigate to Administration > OS Console.
- Run the docker ps command. A list of all the available docker containers is displayed. For example,
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
018a569bd7a6 gpg-agent:3.3.0.0.5 "gpg-agent --server …" 9 days ago Up 2 hours gpg-agent-3.3.0.0.5-1
5a30bd37e576 dsg-ui:3.3.0.0.5 "/opt/start-httpd.sh" 9 days ago Up 3 hours dsg-ui-3.3.0.0.5-1
Run the following commands to update the container configuration.
docker update --restart=unless-stopped --cpus 2 --memory 1g --memory-swap 1.5g dsg-ui-3.3.0.0.5-1
docker update --restart=always --cpus 1 --memory .5g --memory-swap .6g gpg-agent-3.3.0.0.5-1
The values of --cpus, --memory, and --memory-swap can be changed as per your requirements.
Updating the evasive configuration file
Run the following commands only if ESA is upgraded from 9.1.0.x to 10.0.0.
- On the ESA CLI Manager, navigate to Administration > OS Console.
- Add a new mod_evasive configuration file using the following command.
nano /etc/apache2/mods-enabled/whitelist_evasive.conf
- Add the following parameters to the mod_evasive configuration file.
<IfModule mod_evasive20.c>
DOSWhitelist 127.0.0.1
</IfModule>
- Save the changes.
- Set the required permissions for the evasive configuration file using the following command.
chmod 644 /etc/apache2/mods-enabled/whitelist_evasive.conf
- Reload the apache service using the following command.
/etc/init.d/apache2 reload
Verifying the DSG installation
This section describes the steps to verify the version details of the DSG instance.
To verify the version details of the DSG instance:
Login to the DSG Web UI.
Click the Information icon, and then click About.
Verify that the DSG version is reflected as DSG 3.3.0.0.
The DSG version details appear on the DSG Web UI successfully.
Pushing the DSG rulesets
This section describes the steps to push the DSG Rulesets in a cluster.
To push the DSG Rulesets:
Login to the ESA Web UI using the administrator credentials.
Navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster.
Go to Actions and click Deploy or Deploy to Node Groups to push the DSG Ruleset configurations to the DSG nodes. For more information about deploying the configurations, refer to the sections Deploying the Configurations to Entire Cluster or Deploying the Configurations to Node Groups.
The DSG Rulesets are pushed to the DSG nodes in a cluster or node groups.
Verifying the Startup Logs
This section describes the steps to verify the startup logs.
To verify the DSG startup logs:
Login to the DSG Web UI using the administrator credentials.
Navigate to Logs > Appliance.
Click Cloud Gateway - Event Logs, and select gateway.
Verify that the startup logs do not display any errors.
The DSG startup logs are displayed on the DSG Web UI.
Enabling Scheduled Tasks on Insight
The DSG metrics logs that are generated over time can be scheduled for cleanup regularly. You can click Audit Store > Analytics > Scheduler, select the Delete DSG Error Indices, Delete DSG Usage Indices, or Delete DSG Transaction Indices, and then click Edit to modify the scheduled task that initiates the Indices file cleanup at regular intervals. The scheduled task can be set to n days based on your preference.
For more information about the audit indexes, refer to Understanding the index field values.
For more information about scheduled tasks, refer to Using the scheduler.
5.11 - Web UI
The DSG Web UI is a collection of DSG-specific UI screens under Cloud Gateway menu that are part of the ESA Web UI. The Cloud Gateway menu is enabled after the ESA patch for Cloud Gateway is installed in ESA.
The ESA dashboard is as seen in the following figure.
The Cloud Gateway menu contains the following sub-menus:
Cluster
- Monitoring: View all the nodes in a cluster. Add, delete, start, stop, and apply patches to node(s) in the cluster.
- Log Viewer: View consolidated logs across all DSG nodes.
Ruleset
- Ruleset: View the hierarchical set of rules. The rules are defined for each profile that is created under a service.
- Learn Mode: View the hierarchical processing of a rule that is affected due to a triggered transaction request or response.
Transport
- Certificates: View the certificates generated by or uploaded to the DSG.
- Tunnels: View the list of protocol-specific tunnels configured on the DSG.
Global Settings
- Debug: Configure log settings, Learn mode settings, and set configurations that enable administrative queries.
- Global Protocol Stack: Apart from the settings that you configure for each service type, some settings affect all services that relate to a protocol type.
- Web UI: The Web UI tab lets you configure additional settings that impact how the UI is displayed.
Test Utilities: The test utilities provide an interface where you can select the data security operation you want to perform, along with the DSG node, data elements available in the policies deployed at that node, and an external IV value for an added layer of security. This menu is available only to users with the policy user permission.
Note: The Tunnel and Ruleset configurations can be created on the ESA and the DSG. However, it is recommended to create the Tunnel and Ruleset configurations on the ESA. This allows the same configuration to be pushed simultaneously to all the ESA and DSG nodes in the cluster. If these configurations are created only on the DSG, they can be overridden by the configurations created on the ESA.
5.11.1 - Cluster
The Cluster menu includes the Monitoring and Log Viewer tabs.
Monitoring: The Monitoring tab displays information about the available nodes in a DSG cluster.
Log Viewer: Unified view of the log messages across all the nodes in the cluster.
5.11.1.1 - Monitoring
The individual nodes in the cluster can be monitored and managed. The following actions are available on the monitoring screen:
Add nodes to the cluster
Deploy configurations to the nodes in the cluster
Deploy configurations to specific node groups
Change groups in the cluster
Change groups on selected nodes
Refresh nodes in the cluster
Start, stop, or restart individual nodes
The following figure illustrates the Monitoring screen:
1 Cluster Health - Cluster health indicator in Green, Orange, or Red.
2 Hostname - Hostname of the DSG.
3 IP - IP address of the DSG.
4 PAP version - Build version of the DSG.
5 Health - Status of DSG.
6 Node Group - Node group assigned to the DSG node. If no node group is specified, the node group is assigned as default.
7 Config Version - The tag name provided while deploying the configuration to a particular node group.
8 DSG Version - Build version of DSG.
9 Uptime - Time that the DSG has been running.
10 Load Avg - Average load on a process in the last five, ten, and fifteen minutes.
11 Utilization - Number of DSG processes and CPU cores utilized.
12 Connections - Total number of active connections for the node.
13 Select for Patch Installation or Node Group Update - Select the option to install a patch or update the node group on the node.
14 Socket - Total number of open sockets for the node.
15 Config Version Description - The additional information about the configuration version. If the description is not provided while deploying the configurations to a particular node group, then this field will be empty.
Cluster and node health status color indication reference:
Color | Description |
---|---|
Green | Node is healthy and services are running. |
Orange | Warning. Some services related to a node need attention. |
Red | Not running or unreachable. |
The following figure illustrates the actions for the Cluster Monitoring screen.
The callouts in the figure are explained as follows.
1 Expand - Expand to view the other columns.
2 Refresh - Refresh all the nodes in the cluster. The Refresh drop down list provides the following options:
- Start: Starts all nodes.
- Stop: Stops all nodes.
- Refresh: Refreshes details related to all nodes.
- Restart: Restarts all nodes. The restart operation will not export the configurations, it will only restart all the nodes.
- Deploy: Deploys the configuration changes on all the DSG nodes in the cluster. The configurations are pushed and the node is restarted. For more information, refer here.
- Deploy to Node Groups: Deploy the configurations to the selected node groups in the cluster. The configurations are pushed and the node is restarted. For more information, refer here.
3 Actions - The Actions drop down list at the cluster level provides the following options:
- Apply Patch on Cluster: Applies a patch to all nodes in a cluster including the ESA.
- Apply Patch on Selected Nodes: Apply the same patch simultaneously on all selected nodes.
- Change Groups on Entire Cluster: Change the node group of all the DSG nodes in the cluster.
- Change Groups on Selected Nodes: Select the node and change the node group on that particular DSG node in the cluster.
- Add Node: Adding a Node to the Cluster.
4 Order - Sort the nodes in ascending or descending order.
5 Actions - The Actions drop down list at the individual node level provides the following options:
- Start: Start a node.
- Stop: Stop a node.
- Restart: Restart a node.
- Apply Patch: Applies a patch to a single node. Before applying a patch on a single DSG node, ensure that the same patch is applied on the ESA.
- Change Groups: Changes the node group on an individual DSG node.
- Delete: Delete a node.
Deploying the Configurations to Entire Cluster
The configurations can be pushed to all the DSG nodes in the cluster. This action can be performed by clicking the Deploy option on the Cluster page or from the Ruleset page.
To deploy the configurations to entire cluster:
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster.
Select the Refresh drop down menu and click Deploy.
The following pop-up message appears on the Cluster screen.
Click YES to push the configurations to all the node groups and nodes.
The configurations will be deployed to all the nodes in the entire cluster.
Deploying the Configurations to Node Groups
The configurations can be pushed to the selected node groups. The configuration will only be pushed to the DSG nodes associated with the node groups. This action can be performed by clicking the Deploy to Node Groups option on the cluster page or from the Ruleset page.
To deploy the configurations to the selected node groups:
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster.
Select the Refresh drop down menu and click Deploy to Node Groups.
Perform the following steps to deploy the configurations to the node groups.
Click Deploy to Node Groups.
The Select node groups for deploy screen appears.
The default and lob1 are the node groups associated with the DSG nodes. When you add a node to the cluster, a node group is assigned to that node. For more information about adding a node and node group to the cluster, refer here.
Enter the name for the configuration version in the Tag Name field.
The tag name is the version name of a configuration that is deployed to a particular node group. The tag name must be alphanumeric, separated by spaces or underscores. If the tag name is not provided, a name is automatically generated in the YYYY_mm_dd_HH_MM_SS format.
Enter the description for the configuration in the Description field. You can provide additional information about the configuration that is to be deployed.
On the Deployment Node Groups option, select the node group to which the configurations must be deployed.
Click Submit.
5.11.1.2 - Log Viewer
The Log Viewer screen provides a unified view of all the logs. The logs are classified in the following levels:
Debug: Debugging trace.
Verbose: Additional information that can help a user with detailed troubleshooting
Information: Log entry for information purposes
Warning: Non-critical problem. Appears in orange.
Error: Issue that requires user’s attention. Appears in red.
The following figure illustrates the column for the Log Viewer screen.
1 Host - Host name or IP address of the DSG node where the log message was generated.
2 PID - Captures the process identifier of the DSG daemons that generated the log message.
3 Timestamp (UTC) - Time recorded when an event for the log was generated. The time recorded is displayed in the Coordinated Universal Time (UTC) format.
4 Level - Severity level of the log message.
5 Module - Part of the program that generated the log.
6 Source - Procedure in the module that generated the log.
7 Message - A textual description of the event logged.
The following figure illustrates the actions for the Log Viewer screen.
Search logs
Click the search box to scan through the log archive that is collectively maintained across all the DSG nodes within the cluster. The search for logs is not limited to the records that appear on the screen. When a user clicks search, all the log records that are present on the screen as well as on the server are retrieved.
Clear records
Clearing the screen removes the entries that are currently displayed. You can view all the archived logs even after the records are cleared. Click Clear Screen to clear the logs. However, the logs are cleared only from the Log Viewer screen. They are not deleted from the appliance logs and remain available for reference.
Retrieve archived logs
When the logs are cleared from the screen, the records are archived and can be viewed later. After clearing the records, click the Refresh icon. A link displaying the timestamp of the last updated record appears. Click the Showing logs after <timestamp> link to view the archived records.
Fetching records
Click the Refresh icon to view new logs generated.
5.11.2 - Ruleset
The RuleSet menu includes the RuleSet and the Learn Mode tabs.
- RuleSet tab: The Ruleset tab provides the capability to create a hierarchical rule pattern based on the service type. The changes made to the Ruleset tree require deployment of the configuration to take effect.
- Learn Mode: The Learn Mode tab provides a consolidated view of all messages recorded by the DSG cluster. It allows you to examine messages exchanged through the DSG nodes and study the payloads as they are seen by the DSG. Understanding how messages are structured enables you to set the appropriate rules, which will transform the relevant parts before the message is forwarded.
5.11.2.1 - Learn Mode
Learn mode allows you to examine messages exchanged through the DSG nodes and study the payloads as they are seen by the DSG. Understanding how messages are structured enables you to set the appropriate rules, which will transform the relevant parts before the message is forwarded.
The Learn Mode tab is shown in the following figure.
The following table provides the description for each column available on the Web UI.
1 Received (UTC) - Time when the transaction is triggered. The time recorded is displayed in the Coordinated Universal Time (UTC) format.
2 PID - Process Identifier that has carried the request or response transaction on the gateway machine.
3 Source - Source IP address or hostname in the request.
4 Destination - Destination IP address or hostname in the request.
5 Service - Service name to which the transaction belongs.
6 Hostname - DSG node hostname where the request was received and processed.
7 Message - Provides information about the type of message.
8 Processing Time (ms) - Time required to complete the transaction.
9 Rules Filters - Filter the rules based on the selected option for a transaction.
10 Filter Summary - Summary of rule details, such as, Elapsed time, result, and Action Count.
11 Message Difference - Difference between the message received by the rule and the message processed by the rule.
12 Wrap lines - Select to break the text to fit in the readable view.
13 View in Binary - View the message in hexadecimal format. Note: If you want to view a payload such as .zip, .pdf, and so on, you can use the View in Binary option.
14 Download Payload - Click to download large payloads that cannot be completely displayed on the screen.
** Failed Transaction (in red) - Any failed transaction is highlighted in red.
The following figure illustrates the actions on the Learn Mode screen.
The following table provides the description for each action available on the Web UI.
1 Search log - Search the learn mode content.
2 Column Filters - Apply column filters for each column to filter or search records based on the string and regex pattern match.
3 Refresh - Refresh the list.
4 Reset - Logs from the server are purged.
5 Collapse/Expand tree - Collapse or expand the rule tree.
You can select a record in the Learn Mode screen to view details regarding the matched and unmatched rules for that entry. If the size of the message exceeds the limit, then a message Contents of the selected record are too large to be displayed appears.
5.11.2.1.1 - Learn Mode Scheduled Task
Click System > Task Scheduler, select the Learn Mode Log Cleanup scheduled task, and then click Edit to modify the scheduled task that initiates the learnmodecleanup.sh
file at regular intervals. The scheduled task can be set to n hours or days based on your preference. The default recommended frequency is Daily-Every Midnight.
In addition to setting the task, you can define the duration for which you want to archive the Learn Mode logs. The following image displays the Learn Mode Log Cleanup scheduled task.
The following table provides sample configurations:
Frequency | Command line value | Retain the logs for | Default values |
---|---|---|---|
Daily-Every Midnight | /opt/protegrity/alliance/bin/scripts/learnmodecleanup.sh 10 DAYS | Last 10 DAYS | Days can be set between 1 to 66 |
Every hour | /opt/protegrity/alliance/bin/scripts/learnmodecleanup.sh 10 HOURS | Last 10 HOURS | Hours can be set between 1 to 23 |
Note: If a numeric value is set without the HOURS or DAYS qualifier, then DAYS is considered as the default.
5.11.2.2 - Ruleset Tab
The changes made to the Ruleset tree require deployment of configuration to take effect.
The RuleSet tab is shown in the following figure:
The following table provides the description for each of the available RuleSet options:
1 Search - Click to search for service, profile, or rules.
2 Search textbox - Provide service, profile, or rule name.
3 Add new service - Add a new service-level based on the service type used. Only one service can be created for every service type.
4 View Old Versions - Click to view archived Ruleset configuration backups.
5 Deploy - Deploy the configurations to all the DSG nodes in the cluster. The Deploy operation will export the configurations and restart all the nodes.
6 Deploy to Node groups - Deploy the configurations to the selected node groups in the cluster. This will export the configurations and restart the nodes associated with the node groups.
7 Import - Import the Ruleset tree to the Web UI. Files must be uploaded as a .zip archive.
- Ensure that the service exists as part of the Ruleset before you import a configuration exported at Profile level.
- Ensure that the directory structure that the exported .zip maintains is replicated when you repackage the files for import. Also, the JSON files must be valid.
- If you import a ruleset configuration .zip that was created using an older DSG version and includes a GPG ruleset with a key passphrase defined, the DSG does not encrypt the key passphrase.
8 Export All - Export the Ruleset tree configuration. The rules are downloaded in a .zip format.
9 Edit - Edit the service, profile, or rule details as per requirement.
10 Expand Rule - Expand the rule tree and view child rules.
If you want to work further with rules, right-click any rule to view a set of sub-menus. The sub-menu options are seen in the above figure. The options are described in the following table.
11 Duplicate - Duplicate a service, profile, or rule to create a copy of these Ruleset elements.
12 Export - Export the Ruleset tree configuration at Service or Profile level. All the child rules under the parent Service or Profile are exported. The rules are downloaded in the .zip format.
13 Create Rule - Add child rule under the parent rule.
14 Delete - Delete the selected rule.
15 Cut - Cut the selected rule from the parent rule.
16 Copy - Copy the selected rule under a parent.
17 View Configuration - View the configuration of the rule in the JSON format. You can copy the JSON format of the rule and pass it as parameter value in the header of the Dynamic CoP ruleset. This option is available only for the individual rules.
Instead of cutting and copying a rule to change its position among siblings, you can drag a sibling rule and change its position. When the drop is successful, a green tick icon is displayed as shown in the following figure.
When the drop is unsuccessful, a red cross icon is displayed as shown in the following figure.
A log is generated in the Forensics screen every time you cut, copy, delete, or reorder a rule from the Ruleset screen in the ESA.
The following figure shows a service with Warning indication.
The warning symbol is displayed on the service when a child rule is not created or when Learn Mode is enabled.
Deploy configurations to the Cluster
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Ruleset.
Click Deploy. A confirmation message appears.
Click Continue to push the configurations to all the node groups and nodes. The configurations will be deployed to the entire cluster.
Deploy configurations to node groups
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Ruleset.
Click Deploy > Deploy to Node Groups.
The Select node groups for deploy screen appears.
Enter the name for the configuration version in the Tag Name field. The tag name is the version name of a configuration that is deployed to a particular node group. The tag name must be alphanumeric, separated by spaces or underscores. If the tag name is not provided, a name is automatically generated in the YYYY_mm_dd_HH_MM_SS format.
Enter the description for the configuration in the Description field.
On the Deployment Node Groups option, select the node group to which the configurations must be deployed.
Click Submit.
The configurations are deployed to the node groups.
5.11.2.2.1 - Ruleset Versioning
What is it
After deploying a configuration to a particular node group or to the entire cluster, a backup of the configuration is saved in View Older Versions on the Ruleset page. The most recent deployed configuration for a particular node group is shown with the Deployed status when viewing the older versions. Both tagged and untagged versions are listed when viewing the older versions. You can create a tagged or an untagged version.
The following figure shows the Ruleset versioning screen.
The following table provides the description for the deployed configurations.
1 The configuration is deployed to the default node group and shows the Deployed status. This is the most recent deployed configuration version for the default node group. Each node group shows the Deployed status for its most recent configuration version.
2 The configuration is deployed to the lob1 node group and the configuration version is untagged. As the version is untagged, a name with a timestamp is automatically generated in the YYYY_mm_dd_HH_MM_SS format. Each node group archives the three most recent untagged versions. Refer to configuring the default value.
3 The configuration is deployed to the lob1 node group and the configuration version is tagged. While deploying the configuration to the node group, the lob1_fst_configuration tag name was provided for the configuration version. Each node group archives the ten most recent tagged versions. Refer to configuring the default value.
Working with ruleset versioning
Each time a configuration is changed and deployed, the DSG creates a backup configuration version. You can apply an earlier configuration version and make it active, in case you want to revert to the older configuration version.
On the DSG Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Ruleset.
The following figure shows the Ruleset versioning screen.
Click View Old Versions.
Click the Viewing drop-down to view the available versions.
Select a version.
The left pane displays the Services, Profiles, and Rules that are part of the selected version.
Click Apply Selected Version to make the version active or click Close Old Versions to exit the screen.
Click Deploy or Deploy to Node Groups to save changes.
For more information about deploying the configurations to entire cluster or the node groups, refer Deploying the Configurations to Entire Cluster and Deploying the Configurations to Node Groups.
It is recommended that any changes to the Ruleset configuration are made through the Cloud Gateway menu available on the ESA Web UI. Any changes made to the Ruleset configuration from the DSG Web UI of an individual node are overridden by the changes made to the Ruleset configuration from the ESA Web UI. After overriding, the older Ruleset configuration on individual nodes is displayed as active and no backup of this configuration is maintained.
Updating versions
If you want to change the number of tagged or untagged versions that a node can store, log in to the OS console on the DSG node. Navigate to the /opt/protegrity/alliance/version-1/config/webinterface directory. Edit the following parameters in the nodeGroupsConfig.json file.
no_of_node_group_deployed_archives = <number_of_untagged_versions_to_be_stored>
The default value for the untagged version is set at 3.
no_of_node_group_deployed_tag_archives = <number_of_tagged_versions_to_be_stored>
The default value for the tagged version is set at 10.
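For example, the corresponding entries in nodeGroupsConfig.json might look as follows. The flat key-value structure shown here is an assumption for illustration only; verify the actual structure of the file on your node before editing it.
{
"no_of_node_group_deployed_archives": 3,
"no_of_node_group_deployed_tag_archives": 10
}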
5.11.3 - Transport
The Transport Menu includes the Certificates and the Tunnels tabs.
The Certificates tab allows you to configure TLS/SSL certificates for SSL termination by the DSG.
The Tunnels tab allows you to define the DSG inbound communication channels.
5.11.3.1 - Tunnels
The changes made to Tunnels require cluster restart to take effect. You can either use the bundled default tunnels or create a tunnel based on your requirements.
The Tunnels tab is as seen in the following figure.
The following table provides the description of the columns available on the Web UI.
1 Name - Unique tunnel name.
2 Description - Unique description that describes port supported by the tunnel.
3 Protocol - Protocol type that the tunnel supports. The available Type values are HTTP, S3, SMTP, SFTP, NFS, and CIFS.
4 Enabled - Status of the tunnel. Displays status as true, if the tunnel is enabled.
5 Start without service - Select to start the tunnel if no service is configured or if no services are enabled.
6 Interface - IP address through which sensitive data enters the DSG. The available Listening Address options are as follows:
- ethMNG: The management interface on which the Web UI is accessible.
- ethSRV0: The service interface for communicating with an untrusted service.
- 127.0.0.1: The local loopback adapter.
- 0.0.0.0: The broadcast address for listening to all the available network interfaces over all IP addresses.
- Other: Manually add a listening address based on your requirements.
Note: The service interface, ethSRV0, listens on port 443. If you want to stop this interface from listening on this port, then edit the default_443 tunnel and disable it.
7 Port - Port linked to the listening address.
8 Certificate - Certificate applicable to a tunnel.
9 Deploy to All Nodes - Deploy the configurations to all the DSG nodes in the cluster. Deploy can also be performed from the Cluster tab or the Ruleset screen. In a scenario where an ESA and two DSG nodes are in a cluster, you can use the Selective Tunnel Loading functionality to load specific tunnel configurations on specific DSG nodes.
Click Deploy to All Nodes to push specific tunnel configurations from an ESA to specific DSG nodes in a cluster.
The following figure illustrates the actions for the Tunnels screen.
The following table provides the available actions:
1 Create Tunnel - Create a tunnel configuration as per your requirements.
2 Edit - Edit an existing tunnel configuration.
3 Delete - Delete an existing tunnel configuration
5.11.3.1.1 - Manage a Tunnel
Create a tunnel
You can create tunnels for custom ports that are not predefined in the DSG using the Create Tunnel option in the Tunnels tab. The Create Tunnel screen is as seen in the following figure.
The following table provides the description for each option available on the UI.
Callout | Column/Textbox | Description |
---|---|---|
1 | Name | Name of the tunnel. |
2 | Tunnel ID | Unique ID of the tunnel. |
3 | Description | Unique description that describes port supported by the tunnel. |
4 | Enabled | Select to enable the tunnel. The check box is selected by default. Clear the check box to disable the tunnel. |
5 | Start without service | Select to start the tunnel if no service is configured or if no services are enabled. |
6 | Protocol | Protocol type supported by the tunnel. |
The following types of tunnels can be created.
- HTTP
- SFTP
- SMTP
- Amazon S3
- CIFS/NFS
Edit a tunnel
Edit an existing tunnel configuration using the Edit option in the Tunnels tab. The Edit Tunnel screen is as seen in the following figure.
After editing the required field, click Update to save your changes.
Delete a tunnel
Delete an existing tunnel using the Delete option in the Tunnels tab. The Delete Tunnel screen is shown in the following figure.
The following table provides the description for each option available on the UI.
Callout | Column/Textbox/Button | Description |
---|---|---|
1 | Cancel | Cancel the process of deleting a tunnel. |
2 | Delete | Delete the existing tunnel from the Tunnels tab. |
5.11.3.1.2 - Amazon S3 Tunnel
Amazon Simple Storage Service (S3) is an online file storage web service. It lets you manage files through browser-based access as well as web services APIs. In DSG, the S3 tunnel is used to communicate with Amazon S3 cloud storage over the Amazon S3 REST API. The higher-layer S3 Service object, which sits above the tunnel object, configured at the RuleSet level is used to process file contents retrieved from S3.
A sample S3 tunnel configuration is shown in the following figure.
Amazon S3 uses buckets to store data, and data is classified as objects. Each object is identified with a unique key ID. Consider an example where john.doe is the bucket and incoming is a folder under the john.doe bucket. Assume that the requirement is that files landing in the incoming folder must be picked up and processed by the DSG nodes. The data pulled from the AWS online storage is available in the incoming folder under the source bucket. The Amazon S3 Service is used to perform data security operations on this data in the source bucket.
Note: The DSG supports four levels of nested folders in an Amazon S3 bucket.
After the rules are executed, the processed data may be stored in a separate bucket (e.g. the folder named outgoing under the same john.doe bucket), which is the target bucket. When the DSG nodes poll AWS for a file uploaded, whichever node accesses the file first places a lock on the file. You can specify if the lock files must be stored in a separate bucket or under the source bucket. If the file is locked, the other DSG nodes will stop trying to access the file.
If the data operation on a locked file fails, the lock file can be viewed for detailed log and error information. The lock files are automatically deleted if the processing completes successfully.
Consider the scenario where an incoming bucket contains two directories Folder1 and Folder2.
The DSG allows multiprocessing of files that are placed in the bucket. The lock files are created for every file processed. In the scenario mentioned, the lock files are created as follows:
- If the abc.csv file of Folder1 is processed, the lock file is created as Folder1.abc.csv.<hostname>.<Process ID>.lock.
- If the pqr.csv file of Folder2 is processed, the lock file is created as Folder2.pqr.csv.<hostname>.<Process ID>.lock.
Consider the following figure where files are nested in the S3 bucket.
The lock files are created as follows:
- If the abc.csv file of Folder1 is processed, the lock file is created as Folder1.abc.csv.<hostname>.<Process ID>.lock.
- If the pqr.csv file of Folder2 is processed, the lock file is created as Folder1.Folder2.pqr.csv.<hostname>.<Process ID>.lock.
- If the abc.csv file of Folder3 is processed, the lock file is created as Folder1.Folder2.Folder3.abc.csv.<hostname>.<Process ID>.lock.
If the multiprocessing of files is to be discontinued, remove the enhanced-lock-filename flag from the features.json file available under System > Files on the DSG Web UI.
The following image illustrates the options available for an S3 tunnel.
The options specific to the S3 Protocol type are described as follows:
Bucket list settings
1 Source Bucket Name: Bucket name as defined in AWS where the files that need to be processed are available.
2. Source File Name Pattern: Regex pattern for the filenames to be processed. For example, .csv.
Rename Processed Files: Regex logic for renaming processed file.
3. Match Pattern: Regex logic for renaming processed file.
4. Replace Value: Value to append or name that will be used to rename the original source file based on the pattern provided and grouping defined in regex logic.
5. Overwrite Target Object: Select to overwrite a file in the bucket with a newly processed file of the same name. Refer to Amazon S3 Object.
6. Lock Files Bucket: Name of the lock files folder, if you want the lock files to be stored in a separate bucket. If not defined, lock files are placed in the source bucket.
7. Interval: Time in seconds at which the DSG node polls AWS for files. You can also specify a cron job expression. Refer to the Cron documentation. The default value is 5. If you use the cron job expression “* * * * *”, the DSG polls AWS at a minimum interval of one minute.
Cron job format is also supported to schedule jobs.
AWS Settings
8. AWS Access Key Id: Access key id used to make secure protocol request to an AWS service API. Refer to Amazon Web Service documentation.
9. AWS Secret Access Key: Secret access key related to the access key id. The access key id and secret access key work together to sign into AWS and provide access to resources. Refer to Amazon Web Service documentation.
10. AWS Endpoint URL: Specify the endpoint URL if it is other than the Amazon S3 bucket. This parameter should be configured only if the DSG is used to connect to an endpoint other than the Amazon S3 bucket, that is, an on-premise S3, a Google Cloud bucket, and so on. If not defined, the DSG connects to the Amazon S3 bucket.
11. Path to CA Bundle: Specify the path to the CA bundle if the endpoint is other than the Amazon S3 bucket. If the on-premise S3 is installed using a self-signed certificate, specify that path to the CA bundle in this parameter. If the endpoint URL is the Amazon S3 bucket, an SSL certificate is used by default to connect to the S3 bucket.
Advanced Settings
12. Advanced settings: Set additional advanced options for tunnel configuration, if required, in the form of JSON in the following textbox. In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes.
The advanced settings that can be configured for S3 Protocol.
Options | Description |
---|---|
SSECustomerAlgorithm | If server-side encryption with a customer-provided encryption key was requested, the response will include this header confirming the encryption algorithm used. |
SSECustomerKey | Constructs a new customer provided server-side encryption key. |
SSECustomerKeyMD5 | If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide round-trip message integrity verification of the customer-provided encryption key. |
ACL | Allows controlling the ownership of uploaded objects in an S3 bucket. For example, if ACL or Access Control List is set to “bucket-owner-full-control”, new objects uploaded by other AWS accounts are owned by the bucket owner. By default, the objects uploaded by other AWS accounts are owned by them. |
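For example, the Advanced settings textbox might contain the following JSON, combining the options listed above. The flat structure and the values are illustrative only; the SSECustomerKey and SSECustomerKeyMD5 values shown are placeholders and must be replaced with your own key material.
{
"ACL": "bucket-owner-full-control",
"SSECustomerAlgorithm": "AES256",
"SSECustomerKey": "<base64-encoded customer key>",
"SSECustomerKeyMD5": "<base64-encoded MD5 digest of the customer key>"
}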
Using S3 tunnel to access files on Google Cloud Storage
Similar to AWS buckets, data stored on Google Cloud Storage can also be protected. You can use the S3 tunnel to access the files on the GCP storage. The incoming and processed files must be placed in the same storage in separate folders. For example, a storage named john.doe bucket contains a folder incoming that contains files to be picked up and processed by the DSG nodes. This acts as the source bucket. After the rules are executed, the data is stored in the processed bucket. Ensure that the following points are considered.
- AWS Endpoint URL contains the URL of the Google Cloud storage.
- AWS Access Key ID and AWS Secret Access Key contain the secret ID and HMAC keys.
Refer to Google docs for information about Access ID and HMAC keys.
5.11.3.1.3 - HTTP Tunnel
Based on the protocol selected, the dependent fields in the Tunnel screen vary. The following image illustrates the settings that are specific to the HTTP protocol.
The options for the Inbound Transport Settings field in the Tunnel Details screen specific to the HTTP Protocol type are described in the following table.
Network Settings
1 Listening Interface: IP address through which sensitive data enters the DSG. The following Listening Interface options are available:
- ethMNG: The management interface on which the DSG Web UI is accessible.
- ethSRV0: The service interface for communicating with an untrusted service.
- 127.0.0.1: The local loopback adapter.
- 0.0.0.0: The broadcast address for listening to all the available network interfaces.
- Other: Manually add a listening address based on your requirements.
2 Port: Port linked to the listening address.
TLS/SSL Security Settings
3 TLS Enabled: Select to enable TLS features.
4 Certificate: Certificate applicable for a tunnel.
5 Cipher Suites: Colon separated list of Ciphers.
6 TLS Mutual Authentication: CERT_NONE is selected as default. Use CERT_OPTIONAL to validate if a client certificate is provided or CERT_REQUIRED to process a request only if a client certificate is provided. If TLS mutual authentication is set to CERT_OPTIONAL or CERT_REQUIRED, then the CA certificate must be provided.
7 CA Certificates: A CA certificate chain. This option is applicable only if the client certificate value is set to 1 (optional) or 2 (required). Client certificates can be requested at the tunnel and the RuleSet level for authentication. On the Tunnels screen, you can configure the ca_reqs parameter in the Inbound Transport Settings field to request the client certificate. Similarly, on the Ruleset screen, you can toggle the Required Client Certificate checkbox to enable or disable client certificates. Based on the combination of the options in the tunnel and the RuleSet, the server executes the transaction. If the certificate is incorrect or not provided, then the server returns a 401 error response.
The following table explains the combinations for the client certificate at the tunnel and the RuleSet level.
TLS Mutual Authentication (Tunnel Screen) | Required Client Certificate (Enable/Disabled) (Ruleset Screen) | Result |
---|---|---|
CERT_NONE | Disabled | The transaction is executed |
Enabled | The server returns a 401 error response. | |
CERT_OPTIONAL | Disabled | The transaction is executed |
Enabled | If the client certificate is provided, then transaction is executed. If the client certificate is not provided, then the server returns a 401 error response. | |
CERT_REQUIRED | Disabled | The transaction is executed |
Enabled | The transaction is executed |
8 DH Parameters: The .pem filename that includes the DH parameters. Upload the .pem file from the Certificate/Key Material screen. The Diffie-Hellman (DH) parameters define the way OpenSSL performs the DH Key exchange.
9 ECDH Curve Name: Supported curve names for the ECDH key exchange. The Elliptic curve Diffie–Hellman (ECDH) protocol allows key agreement and leverages elliptic-curve cryptography (ECC) properties for enhanced security.
10 Certificate Revoke List: Path of the Certificate Revocation List (CRL) file. For more information about the CRL error message that appears when a revoked certificate is sent, refer to the CRL error. The ca.crl.pem file includes a list of certificates that are revoked. Based on the flags that you provide in the verify_flags setting, SSL identifies the certificate verification operations that need to be performed. The CRL verification operations can be VERIFY_CRL_CHECK_LEAF or VERIFY_CRL_CHECK_CHAIN.
When you try to access the DSG through HTTPS using such a revoked certificate, the DSG returns the following error message.
11 Verify Flags: Set to one of the following operations to verify the CRL:
- VERIFY_DEFAULT
- VERIFY_X509_TRUSTED_FIRST
- VERIFY_CRL_CHECK_LEAF
- VERIFY_CRL_CHECK_CHAIN
12 SSL Options: Set the required flags that reflect the TLS behavior at runtime. A single flag or multiple flags can be used. It is used to define the supported SSL options in the JSON format. The DSG supports TLS v1.2. For example, in the following JSON, TLSv1 and TLSv1.1 are disabled.
{
"options": ["OP_NO_SSLv2", "OP_NO_SSLv3", "OP_NO_TLSv1", "OP_NO_TLSv1_1"]
}
13 Advanced Settings: Set additional advanced options for tunnel configuration, if required, in the form of JSON. In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes.
Options | Description | Default (if any) | Notes |
---|---|---|---|
idle_connection_timeout | Timeout set for an idle connection. The datatype for this option is seconds. | 3600 | |
max_buffer_size | Maximum value of incoming data to a buffer. The datatype for this option is bytes. | 10240000 | |
max_write_buffer_size | Maximum value of outgoing data to a buffer. The datatype for this option is bytes. | 10240000 | This parameter is applicable only with REST streaming. |
no_keep_alive | If set to TRUE, then the connection closes after one request. | false | |
decompress_request | Decompress the gzip request body | false | |
chunk_size | Bytes to read at one time from the underlying transport. The datatype for this option is bytes. | 16384 | |
max_header_size | Maximum bytes for HTTP headers. The datatype for this option is bytes. | 65536 | |
body_timeout | Timeout set for wait time when reading request body. The datatype for this option is seconds. | ||
max_body_size | Maximum bytes for the HTTP request body. The datatype for this option is bytes. | 4194304 | Though the DSG allows to configure the maximum body size, the response body size will differ and cannot be configured on the DSG. The response body size that the gateway will send to the HTTP client is dependent on multiple factors, such as, the complexity of the rule, transform rule configured in case you use regex replace, size of response received from destination, and so on. If a request is sent to the client with the response body size greater than the value configured in the DSG, then the following response is returned and the DSG closes the connection: 400 Bad Request In earlier versions of the DSG, the DSG closed the connection and sent 200 as the response code. |
max_streaming_body_size | Maximum bytes for the HTTP request body when HTTP streaming with REST is enabled. The datatype for this option is bytes. | 52428800 | |
maximumBytes | This field is not supported for the DSG 3.0.0.0 release and will be supported in a later DSG release. | ||
maximumRequests | This field is not supported for the DSG 3.0.0.0 release and will be supported in a later DSG release. | ||
thresholdDelta | This field is not supported for the DSG 3.0.0.0 release and will be supported in a later DSG release. | ||
write_cache_memory_size | For an HTTP blocking client sending a REST streaming request, the DSG processes the request and tries to send the response back. If the client type is blocking, then the DSG stores the response in memory until the write_cache_memory_size limit is reached. The DSG then starts writing to the disk. The file size is managed using the write_cache_disk_size parameter. The value for this setting is defined in bytes. | | |
write_cache_disk_size | Set the file size that holds the response after the write_cache_memory_size limit is reached while processing the REST streaming request sent by an HTTP blocking client. After the write_cache_disk_size limit is reached, the DSG starts writing to the disk. The data on the disk always exists in an encrypted format and the disk cache file is discarded after the response is sent. The value for this setting is defined in bytes. | | |
additional_http_methods | Include additional HTTP methods, such as, PURGE LINK, LINE, UNLINK, and so on. | ||
cookie_attributes | Add a new HTTP cookie to the list of cookies that the DSG accepts. | [“expires”, “path”, “domain”, “secure”, “httponly”, “max-age”, “version”, “comment”, “priority”, “samesite”] | |
compress_response | Compresses the response sent to the client if the client supports gzip encoding, i.e. sends Accept-Encoding:gzip. | false |
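For example, the Advanced Settings field might contain the following JSON, combining a few of the options listed above. The flat structure mirrors the SFTP advanced settings example shown in the SFTP Tunnel section, and the values are illustrative only; adjust them to your requirements.
{
"idle_connection_timeout": 3600,
"no_keep_alive": false,
"max_body_size": 4194304,
"compress_response": true
}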
Generating ECDSA certificate and key
The dh_params parameter points to a .pem file. The .pem file includes the DH parameters that are required to enable DH key exchange for improved protection without compromising computational resources required at each end. The value accepted by this field is the file name with the extension (.pem). The DSG supports both RSA certificates and Elliptic Curve Digital Signature Algorithm (ECDSA) certificates for the ECDHE protocol. The RSA certificates are available as default when the DSG is installed, while to use ECDSA certificates in the DSG, you must generate an ECDSA certificate and the related key. The following procedure explains how to generate the ECDSA certificate and key.
To generate dhparam.pem file:
Set the SSL options in the Inbound Transport settings as given in the following example.
- DH Parameters: /opt/protegrity/alliance/config/dhparam/dhparam.pem
- ECDH Curve Name: prime256v1
- SSL Options: OP_NO_COMPRESSION
From the ESA CLI Manager, navigate to Administration > OS Console.
Execute the following command to generate the dhparam.pem file.
openssl dhparam -out /opt/protegrity/alliance/config/dhparam/dhparam.pem 2048
Note: Ensure that you create the dhparam directory in the given path. The path /opt/protegrity/alliance/config/dhparam is the location where you want to save the .pem file. The value 2048 is the key size.
- Execute the following command to generate the key.
openssl genpkey -paramfile dhparam.pem -out dhkey.pem
The ecdh_curve_name parameter is the curve type that is required for the key exchange. The OpenSSL curves that are supported by DSG are listed in Supported OpenSSL Curve Names.
You can configure additional inbound settings that apply to HTTP from the Global Settings page on the DSG Web UI.
5.11.3.1.4 - SFTP Tunnel
Based on the protocol selected, the dependent fields in the Tunnel screen vary. The following image illustrates the settings specific to the SFTP protocol.
The options specific to the SFTP Protocol type are described in the following table.
Callout | Column/Textbox/Button | Subgroup | Description | Notes |
---|---|---|---|---|
Network Settings | ||||
1 | Listening Interface* | IP address through which sensitive data enters the DSG. | ||
2 | Port | Port linked to the listening address. | ||
SSH Transport Security Options | SFTP-specific security options that are mandatory. Select a paired server host key or provide the key path. | | | |
3 | Server Host Key Filename | Paired server host public key, uploaded through the Certificate/Key material screen, that enables SFTP authentication. If the key includes an extension, such as *.key, enter the key name with the extension. For files that are not uploaded to the resources directory, you must provide the absolute path along with the key name. | The DSG only accepts private keys that are not passphrase encrypted. |
4 | Advanced Settings* | Set additional advanced options for tunnel configuration, if required, in the form of JSON. | In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes. |
**-The advanced settings that can be configured for SFTP Protocol.
Options | Description | Default (if any) |
---|---|---|
idle_connection_timeout | Timeout set for an idle connection.The datatype for this option is seconds. | 30 |
default_window_size | SSH Transport window size | 2097152 |
default_max_packet_size | Maximum packet transmission in the network. The datatype for this option is bytes. | 32768 |
use_compression | Toggle SSH Compression | True |
ciphers | List of supported ciphers | ‘aes128-ctr’, ‘aes256-ctr’, ‘3des-cbc’ |
kex | Key exchange algorithms | ‘diffie-hellman-group14-sha1’, ‘diffie-hellman-group-exchange-sha1’ |
digests | List of supported hash algorithms used in authentication. | ‘hmac-sha1’ |
The following snippet describes the example format for the SFTP Advanced settings:
{
"idle_connection_timeout": 30,
"default_window_size": 2097152,
"default_max_packet_size": 32768,
"use_compression": true,
"ciphers": [
"aes128-ctr",
"aes256-ctr",
"3des-cbc"
],
"kex": [
"diffie-hellman-group14-sha1",
"diffie-hellman-group-exchange-sha1"
],
"digests": [
"hmac-sha1"
]
}
5.11.3.1.5 - SMTP Tunnel
The DSG can perform data security operations on the sensitive data sent by a Simple Mail Transfer Protocol (SMTP) client before the data reaches the destination SMTP server.
SMTP is an Internet standard for sending emails. When an email is sent, it travels from an SMTP client to the SMTP server. For example, if an email is sent from john.doe@xyz.com to jane.smith@abc.com, the email first reaches xyz's SMTP server, then reaches abc's SMTP server, before it finally reaches the recipient, jane.smith@abc.com.
The DSG intercepts the communication between the SMTP client and server and performs data security operations on sensitive data. Sensitive data residing in email elements, such as the subject of an email, the body of an email, attachments, and filenames, is supported for the SMTP protocol.
When the DSG is used as an SMTP gateway, the Rulesets must use the SMTP service and the first child Extract rule must be SMTP Message.
The following image illustrates how the SMTP protocol is handled in the DSG. Consider an example where john.doe@xyz.com is sending an email to jane.smith@xyz.com. The xyz SMTP server is the same for the sender and the recipient.
The sender, john.doe@xyz.com, sends an email to the recipient, jane.smith@xyz.com. The Subject of the email contains sensitive data that must be protected before it reaches the recipient.
The DSG is configured with an SMTP tunnel such that it listens for incoming requests on the listening ports. The DSG is also configured with Rulesets such that an Extract rule extracts the Subject from the request. The Extract rule also defines a regex that extracts the sensitive data and passes it to the Transform rule. The Transform rule performs data security operations on the sensitive data.
The DSG forwards the email with the protected data in the Subject to the SMTP server.
The recipient SMTP client polls the SMTP server for any emails. The email is received and the sensitive data in the Subject appears protected.
The following image illustrates the settings specific to the SMTP protocol.
The options specific to the SMTP Tunnel are described in the following table.
Callout | Column/Textbox/Button | Description | Notes |
---|---|---|---|
Network Settings | |||
1 | Listening Interface* | Enter the service IP of the DSG, where the DSG listens for the incoming SMTP requests. | |
2 | Port | Port linked to the listening address. | |
Security Settings for SMTP | |||
3 | Certificate | Server-side Public Key Infrastructure (PKI) certificate to enable TLS/SSL security. | |
4 | Cipher Suites | Semi-colon separated list of Ciphers. | |
5 | Advanced Settings | Set additional advanced options for tunnel configuration, if required, in the form of JSON. | In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes. |
The ssl_options supported for the SMTP Tunnel are described in the following table.
Options | Description | Default |
---|---|---|
cert_reqs | Specifies whether a certificate is required for validating the SSL connection between the SMTP client and the DSG. The following values can be configured: | CERT_NONE |
ssl_version | Specifies the SSL protocol version used for establishing the SSL connection between the SMTP client and the DSG. | PROTOCOL_SSLv23 |
ca_certs | Path where the CA certificates (in PEM format only) are stored. | n/a |
* The following Listening Interface options are available:
- ethMNG: The management interface on which the DSG Web UI is accessible.
- ethSRV0: The service interface where the DSG listens for the incoming SMTP requests.
- 127.0.0.1: The local loopback adapter.
- 0.0.0.0: The broadcast address for listening to all the available network interfaces.
- Other: Manually add a listening address based on your requirements.
**-The advanced settings that can be configured for SMTP Protocol.
Options | Description | Default (if any) |
---|---|---|
idle_connection_timeout | Timeout set for an idle connection. The value is specified in seconds. | 30 |
default_window_size | SSH Transport window size | 2097152 |
default_max_packet_size | Maximum packet size transmitted on the network. The value is specified in bytes. | 32768 |
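The following snippet is a minimal sketch of the SMTP Advanced Settings JSON, following the same format as the SFTP example earlier and using the defaults listed above; the exact set of accepted keys may vary with your deployment:
{
"idle_connection_timeout": 30,
"default_window_size": 2097152,
"default_max_packet_size": 32768
}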
5.11.3.1.6 - NFS/CIFS
Though the files are accessed remotely, the behavior is the same as when files are accessed locally. The NFS file system follows a client/server model, where the server is responsible for authentication and permissions, while the client accesses data through the local disk systems.
A sample NFS/CIFS tunnel configuration is shown in the following figure.
Note: The Address format for an NFS tunnel is <ip address/hostname>:<mount_path> and for a CIFS tunnel is \\<ip address or hostname>\<share_path>.
Consider an example NFS/CIFS server with a folder structure that includes the folders /input, /output, and /lock. When a client accesses the NFS/CIFS server, the files are stored in the input folder. The Mounted File System Out-of-Band Service is used to perform data security operations on the files in the /input folder. A source file is processed only when a corresponding trigger file is created and found in the /input folder.
Note: Ensure that the trigger file time stamp is greater than or equal to the source file time stamp.
After the rules are executed, the processed files can be stored in a separate folder, such as, in this example, /output. When the DSG nodes poll the NFS/CIFS server for an uploaded file, whichever node accesses the file first places a lock on the file. You can specify whether the lock files must be stored in a separate folder, such as /lock, or under the source folder. If a file is locked, the other DSG nodes stop trying to access it.
If the data operation on a locked file fails, the lock file can be viewed for detailed log and error information. The lock files are automatically deleted if the processing completes successfully.
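For example, assuming the share is mounted on the client at the placeholder path /mnt/share and the trigger file pattern appends .ctl to the source file name, the client-side sequence could look like the following; touching the trigger file last ensures its time stamp is not older than the source file:
cp abc.csv /mnt/share/input/
touch /mnt/share/input/abc.csv.ctl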
5.11.3.1.6.1 - NFS/CIFS
The options for the NFS tunnel are illustrated in the following figure.
Mount settings
1 Mount Point - The Address format for an NFS tunnel is <IP address/hostname>:<mount_path>
2 Input Directory - The mount tunnel forwards the files present in this directory for further processing. This directory structure will be defined in the NFS/CIFS share.
3 Source File Name Pattern - Regex logic for identifying the source files that must be processed.
4 Overwrite Target File - Select to overwrite a file in the bucket with the newly processed file with the same name.
Rename processed files
Regex logic for renaming original source files after processed files are generated
5 Match Pattern - Exact pattern to match and filter the file.
6 Replace Value - Value to append or name that will be used to rename the original source file based on the pattern provided and grouping defined in regex logic.
Trigger File
File that triggers the rule. The rule is triggered only if this file exists in the input directory. Files in the NFS/CIFS share directory are not processed until the trigger criteria are met. Ensure that the trigger file is sent only after the files that need to be processed are placed in the source directory. After the trigger file is placed, you must touch the trigger file.
7 Trigger File Name Pattern - Identifier that will be appended to each source file to create a trigger control file. Consider a source file abc.csv, if you define the identifier as %.ctl, you must create a trigger file abc.csv.ctl to ensure that the source file is processed.
It is mandatory to provide a trigger file for each source file to ensure that it is processed. Files without a corresponding trigger file will not be processed.
The *, [, and ] characters are not accepted as part of the trigger file pattern.
8 Delete Trigger File - Enable to delete the trigger file after the source file is processed.
9 Lock Files Directory - Directory where the lock files will be stored. If this value is not provided as per the directory structure in the NFS/CIFS share, then the lock files will be stored in the mount point. Ensure that the lock directory name does not include spaces. The DSG will not process files under a lock directory whose name includes spaces.
10 Error Files directory - Files that fail to process are moved to this directory. The lock files generated for such files are also moved to this directory.
For example, the file is moved from the /input directory to the /error directory.
11 Error Files Extension - Extension that will be appended to each error file. If you do not specify an extension, then the .err extension will be used.
Mount Options
Parameters that will be used to mount the share.
12 Mount Type - Specify Soft if you want the mount point to report an error, if the server is unreachable after wait time crosses the Mount Timeout value. If you select Hard, ensure that the Interrupt Timeout checkbox is selected.
13 Mount Timeout - Number in seconds after which an error is reported. Default value is 60.
14 Options - Additional NFS options that can be provided as inbound settings. If the lock directory is not defined, then the lock files are automatically placed in the /input directory. For example, {"port": 1234, "nolock", "nfsvers": 3}. To enable enhanced security for the mounted share, it is recommended that the following options are set:
noexec,nosuid,nodev
where:
- noexec: Disallow execution of executable binaries on the mounted file system.
- nosuid: Disallow creation of set user id files on the file system.
- nodev: Disallow mounting of special devices, such as, USB, printers, etc.
15 Advanced Settings - Set additional advanced options for tunnel configuration, if required, in the form of JSON in the Advanced Settings textbox. For example, {"interval":5, "fileChunkSize": 4096}. In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes.
Note: Ensure that the NFS share options are configured in the exports configuration file for each mount that the DSG will access. The all_squash option must be set to specify the anonuid and anongid with the user ID and group ID of the non-root user respectively.
This prevents the DSG from changing user and group permissions of the mount directories on the NFS server.
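As an illustration only, a hypothetical /etc/exports entry on the NFS server that applies this recommendation could look like the following, where the export path, client subnet, and the anonuid/anongid values of the non-root user are placeholders:
/export/dsg 192.0.2.0/24(rw,sync,all_squash,anonuid=1001,anongid=1001)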
5.11.3.2 - Certificates/Key Material
This tab displays key material and other files in three different subtabs.
The Certificates/Key Material tab and subtabs are illustrated in the following figure.
The following table describes the available subtabs:
Callout | Column/Textbox | Description |
---|---|---|
1 | Certificates | View self-generated or trusted certificates. |
2 | Keys | View paired keys associated with certificates and unpaired keys. |
3 | Other Files | View other files such as GPG data, etc. |
4 | Upload | Upload a certificate, key, or other files. |
The following subtabs are available:
Certificates
Keys
Other Files
5.11.3.2.1 - Certificates Tab
The certificates uploaded to DSG are displayed in this section. Other information such as paired key, validity, and last modified date is also displayed.
A certificate and key that are paired display a ( ) icon indicating that the certificate is ready to use. A certificate or key without any pairing is indicated with a ( ) icon. If a certificate or key has expired, it is indicated with a ( ) icon. Files available in the Other Files subtab are always marked with a ( ) icon.
The Cloud Gateway Certificate Expiration Check scheduled task is created by default to alert about certificates that are due to expire in the next 30 days.
Before you regenerate any default expired certificates, ensure that the best practices for certificates and keys are noted.
The Certificates subtab is shown in the following figure.
The following table describes the available options:
Callout | Icon (if any) | Column/Textbox | Description |
---|---|---|---|
1 | ![]() | Information | View Certificate details. |
2 | ![]() | Download | Download a certificate. |
3 | ![]() | Delete | Delete a Certificate. |
5.11.3.2.2 - Delete Certificates and Keys
Click the Delete option in the Certificates/Key Material tab.
The Delete Certificate screen is shown in the following figure:
The following table describes the available options:
Callout | Column/Textbox/Button | Description |
---|---|---|
1 | Cancel | Cancel the process of deleting a certificate |
2 | Delete | Delete the certificate, key, or other files. |
5.11.3.2.3 - Keys Subtab
Keys cannot be downloaded, but the key information can be viewed ( ) or a key can be deleted ( ).
A certificate and key that are paired display a ( ) icon indicating that the certificate is ready to use. A certificate or key without any pairing is indicated with a ( ) icon. If a certificate or key has expired, then it is indicated with a ( ) icon. Files available in the Other Files subtab are always marked with a ( ) icon.
The supported key formats that can be uploaded are .crt, .csr, .key, .gpg, .pub, and .pem. For any private key without an extension, when you click Deploy to All Nodes, the permissions for the key change to 755, making it world-readable. To restrict the permissions, ensure that you generate the key with the .key extension.
The keys uploaded to the DSG can either be a non-encrypted private key or an encrypted private key. For either of the key types uploaded, the DSG ensures that the keys in the DSG ecosystem are always present in an encrypted format. When a non-encrypted private key is uploaded to the DSG, you are presented with an option to encrypt the key. If you choose to encrypt the key, the DSG requests a password for encrypting the key before it is stored on the DSG.
It is recommended that any non-encrypted private key is encrypted before it is uploaded to the DSG. It is also recommended that any key uploaded to the DSG is of RSA type with a minimum length of 3072 bits for optimum security.
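For example, assuming OpenSSL is available on the system used to prepare the key material, a passphrase-encrypted 3072-bit RSA private key with the .key extension (the file name is a placeholder) can be generated as follows; the -aes256 flag prompts for a passphrase so that the key is stored encrypted before it is uploaded to the DSG:
openssl genrsa -aes256 -out dsg_server.key 3072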
5.11.3.2.4 - Other Files Subtab
A certificate and key that are paired display a ( ) icon indicating that the certificate is ready to use. A certificate or key without any pairing is indicated with a ( ) icon. If a certificate or key has expired, it is indicated with a ( ) icon. Files available in the Other Files subtab are always marked with a ( ) icon.
The following table describes the available options:
Callout | Icon (if any) | Column/Textbox | Description |
---|---|---|---|
1 | ![]() | Information | View Certificate details. |
2 | ![]() | Download | Download a certificate. |
3 | ![]() | Delete | Delete a Certificate. |
5.11.3.2.5 - Upload Certificate/Keys
Click the Upload option in the Certificates tab to upload the certificate.
After clicking Upload Certificate, you can either upload a key or a certificate. When you upload a certificate, the password field does not appear.
After you click Choose File to select the key file, click Upload Certificate, enter the password, and then click Upload Certificate again.
It is recommended that the upload of any certificate or key is performed on the ESA. If the certificate is uploaded to a DSG node and the configuration is deployed from the ESA, then the changes made on the DSG node are overwritten by the configuration pushed by the ESA.
Note: Ensure that the passphrase for any key that is uploaded to the DSG Web UI is at least 8 characters long.
If the key you uploaded is an encrypted private key, then you must enter the password for the key.
If the key you uploaded is a non-encrypted private key, an option is presented to encrypt the private key. If you select the option, you must provide a password that the DSG uses to encrypt the non-encrypted private key before it is stored internally.
The following figure illustrates the Upload Certificate/Key screen.
The following table describes the available options:
Callout | Column/Textbox/Button | Description | Notes |
---|---|---|---|
1 | Choose File | Select certificate and key files to upload. | You cannot upload multiple files in an instance. You must first upload the certificate file, and then the paired .key file. If you upload unpaired keys or certificates, then they are not displayed on the Certificate screen. |
2* | Do you want to encrypt the private key | Select the check box to encrypt a non-encrypted private key. If you clear the check box, then the private key will be uploaded without encryption. | It is recommended that any non-encrypted private key is encrypted when uploaded to the DSG. |
3* | Password | Enter the password for an encrypted private key. For a non-encrypted private key, provide a password that will be used to encrypt the key. | The DSG supports ASCII passwords for keys. If your private key is encrypted with any other character password, then ensure that it is changed to an ASCII password. |
4* | Confirm Password | Re-enter the password | |
5 | Upload Certificate | Upload the certificate or .key file. | If you upload a private key without an extension, then ensure that you append the .key extension to the key. |
*-Appears only when a key is uploaded. |
5.11.4 - Global Settings
The Global Settings tab allows you to configure debug options, global protocol settings, and Web UI settings that impact the DSG.
The following image illustrates the UI options on the Global Settings tab.
The following table provides the description for each of the available Global Settings options:
Callout | Icon | Column/Textbox/Button | Description | Notes |
---|---|---|---|---|
1 | Deploy to All Nodes | Deploy the configurations to all the DSG nodes in the cluster. Note: Deploy can also be performed from the Cluster tab. | In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes. Click Deploy to All Nodes to push specific tunnel configurations from an ESA to specific DSG nodes in a cluster. | |
2 | ![]() | Expand | Expand the subtab and view available options. | |
![]() | Collapse | Collapse the subtab to hide the available options. | ||
3 | ![]() | Edit | Edit the available options in the subtab. |
5.11.4.1 - Debug
The following figure illustrates the Debug tab.
The following table provides information about fields in the Debug tab.
Sub tab | Fields | Description |
---|---|---|
Log Settings | Log Level | Set the logging level at the node level. |
Admin Interface | Listening Address | Listening address for the admin tunnel. The DSG listens for requests such as learn mode settings that are sent through the admin tunnel. |
Admin tunnel is a system tunnel that lets you send administrative requests to individual DSG nodes. | ||
Listening Port | Listening port for the admin tunnel. | |
SSL Certificate | The DSG admin certificate to authenticate inbound requests. | |
SSL Certificate Key | Paired DSG admin key used with the admin certificate. | |
Client CA Certificate | The .pem file against which the client certificate will be validated. | |
Client Certificate | Client certificate (.pem) file that is required for establishing communication between the ESA-DSG nodes and the DSG-DSG nodes. | |
Client Certificate Key File | Paired client certificate key. | |
Common Name | Common name defined in the client certificate. Ensure that the Common Name (CN) defined in the client certificate matches the name defined in this field. | |
OpenSSL Cipher Lists | Semi-colon separated list of Ciphers. | |
SSL Options | Options you must set for successful communication between the ESA-DSG nodes and the DSG-DSG nodes. | |
Stats Log Settings | Stats Logging Enabled | Enable stats logging to get information about the connections established and closed for any service on all or individual DSG nodes in a cluster. |
Global Learn Mode Settings* | Enabled | Select to enable Learn Mode at node level. |
Exclude Payload Types | Resources matching this regex pattern are excluded from the Learn Mode logging. | |
Exclude Content-Type | Protocol messages with Content-type headers are excluded from the Learn Mode logging. | |
Include Resource | Resources matching this regex pattern are included in the Learn Mode logging. | |
Include Content-Type | Protocol messages with Content-type headers are included in the Learn Mode logging. | |
Free Disk Space Threshold | Minimum free disk space required for the Learn Mode feature to remain enabled. The feature is automatically disabled if free disk space falls below this threshold. You must enable the feature manually after it has been disabled. | |
Long Running Routines Tracing | Enabled | Enable stack trace for processes that exceed the defined timeout. |
Timeout | Value in seconds after which a stack trace is logged for processes that do not complete within the timeout. The default value is 20 seconds. |
* - Settings provided in these fields can be overwritten by the settings provided at Service/Ruleset level.
5.11.4.2 - Global Protocol Stack
The following figure illustrates the Global Protocol Stack tab.
The following table provides information about the fields in the Global Protocol Stack tab.
Sub tab | Fields | Description | Default | Notes |
---|---|---|---|---|
HTTP | Max Clients | If you want to increase the number of simultaneous outbound connections that the DSG can create to serve the incoming requests, then modify this setting. | The default value for this setting is 100. | |
User defined server header | If you want to change the value of the server header in an application response, then you can use this parameter. | |||
Outbound Connection Cache TTL | In situations where you want to keep a TCP connection persistent beyond the default limit of inactivity that the firewall allows, you must configure this setting to a timeout value. The timeout value must be defined in seconds. | -1 This value indicates that the feature is disabled. The connection remains active and stored in cache until the DSG node is restarted. | |
NFS | Enabled | Set as true to enable the NFS tunnel and service. | ||
Interval | Time in seconds at which the DSG node polls the NFS server for fetching files. You can also specify a cron job expression. For more information about how to schedule cron jobs, refer to the cron documentation. | The cron job format is also supported to schedule jobs. If you use the cron job expression “* * * * *”, then the DSG polls the NFS server at the minimum interval of one minute. |
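For example, assuming the standard five-field cron syntax applies to the Interval field, polling every 15 minutes instead of using a fixed number of seconds could be expressed as:
*/15 * * * *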
5.11.4.3 - Web UI
The following figure illustrates the Web UI tab.
The following table provides information about fields in the Web UI tab.
Sub tab | Fields | Description | Default |
---|---|---|---|
Learn Mode UI Performance Settings | Max Worker Threads | Maximum number of worker threads that render Learn mode flow dumps on screen. | 15 |
Flow Dump Filesize | The rules displayed in the Learn mode screen and the payload message difference are stored in a separate file in DSG. If the payloads and rules in your configuration are high in volume, you can configure this file size. | 10 MB |
5.11.5 - Tokenization Portal
After you set up a cluster between the ESA and multiple DSG nodes, the policies are deployed on respective DSG nodes. Each policy has related data elements, data stores, and roles. You can use the Tokenization Portal menu to examine protection or unprotection of data when a protection data element is used.
The Tokenization Portal provides an interface where you can select the data security operation you want to perform, along with the DSG node, data elements available in the policies deployed at that node, and an external IV value. Every protection operation performed through the Tokenization Portal is recorded as an event in Forensics.
To access the test utilities, enter https://<ESA/DSG_IP_Address>/TokenizationPortal in the browser.
Before you access the Tokenization Portal, ensure that the following pre-requisites are met:
- Ensure that any user who wants to access the test utilities is granted the Cloud Gateway User and Policy User permissions.
- Ensure that the ESA where you are accessing Tokenization Portal is a part of the cluster.
- Ensure that the policy on the ESA is deployed to all the DSG nodes in the cluster.
- Ensure that the policy is synchronized across all the ESAs in the cluster.
The following image illustrates the UI options on the Tokenization Portal tab.
The following table provides the description for each of the available Tokenization tab options:
Callout | Column/Textbox/Button | Sub-Columns | Description | Notes |
---|---|---|---|---|
1 | Input Text | Enter the data you want to protect or unprotect. | ||
2 | Output Text | Transformed data, either protected or unprotected based on operation selected. | ||
3 | Output Encoding* | Select the type of encoding you want to use. Though the Output Encoding option is part of the Input Text area, remember that encoding is applied to Input data during protection and reprotection, and output data during unprotection. | ||
4 | Clear | Clear the text in the Input Text or Output Text box. | ||
5 | Unprotect/Protect/Reprotect | Click to perform security operation on the Input Text. You can either Protect, Unprotect, or Reprotect Data. | This option changes to Protect, Unprotect, or Reprotect based on the Action selected for data security operation. | |
6 | Clear | Clear the text in the Input Text or Output Text box. | ||
Action Logs | Logs related to the data protection or unprotection are displayed under this area. These logs are cached in the browser session and are not persisted in the system. | |||
7 | Status | Displays the status of the data security operation. | |
8 | DateTime | Date and time when the data security operation was performed. | ||
9 | External IV | External IV value that was used for the data security operation. | ||
10 | Data Element | Data element used. | ||
11 | DSG Node | The DSG node where the data security operation was performed. | ||
12 | Output | Transformed data. | ||
13 | Input | Input data. | ||
14 | Action | Data security operation performed. | ||
15 | New External IV | New external IV value that will be used along with the protect or unprotect algorithm to create more secure encrypted data. | This field applies to the Reprotect option only. | |
16 | New Data Element | New data element that will be used to perform the data security operation. | This field applies to the Reprotect option only. | |
17 | External IV | External IV value that will be used along with the protect or unprotect algorithm to create more secure encrypted data. | ||
18 | Data Element | Data element that will be used to perform data security operation. | ||
19 | DSG Node | The DSG node where the data security operation will be performed. | ||
20 | Action | Data security operation, Protect, Unprotect, or Reprotect that you want to perform. |
* - The available encoding options are as follows:
- Output Encoding: The field appears when action selected is either Protect or Reprotect.
- Input Encoding: The field appears when action selected is Unprotect.
5.12 - Overview of Sub Clustering
In a TAC where the ESA and DSGs are set up, the configuration files are pushed from the ESA to the DSG nodes in the cluster. However, in versions prior to DSG 3.0.0.0, only a single copy of the gateway configurations could be pushed to the DSG nodes. The ability to push specific rulesets to specific nodes in the cluster was unavailable. From v3.0.0.0, this limitation has been eliminated by introducing the sub-clustering feature.
Sub-clustering allows the user to create separate clusters of the DSG nodes in a TAC. All the nodes in the sub-cluster contain the same rulesets. This enables the user to maintain different copies of rulesets for different sub-clusters. Sub-clusters can be used to define various Lines-of-Business (LOB) of the organization. The user can then create logical node groups to deploy the rulesets on different DSG nodes (LOBs) successfully.
For example, if an XYZ company is spread across the globe with multiple LOBs, then they can use sub-clustering feature to deploy the configurations to a particular node group, i.e. LOB1, LOB2, LOB3, and so on.
The following image illustrates how the sub-clustering feature is implemented for the DSG nodes.
The figure depicts three node groups: LOB1, LOB2, and LOB3. Consider LOB1, which caters only to the HTTP and S3 services. The common service is shared among all the LOBs; it includes the protection profiles, unprotection profiles, and so on, that can be used by the user. Only the tunnels that are used for the enabled services are loaded; the other tunnels are not. Thus, for LOB1, only the HTTP and S3 tunnels are loaded.
Perform the following steps to push the configurations to the LOB1 node group.
Add four DSG nodes from the cluster page and set the node group as LOB1.
For more information about adding a node and node group to the cluster, refer to the section Adding a Node to the Cluster.
Enable the Office 365 and S3 service1 services. These services will be pushed to the LOB1 node group. Disable the NFS service 1, SFTP service, restapi, Adlabs, salesforce, and S3 service 2 services.
Select LOB1 to push the rulesets to all the four DSG nodes in the LOB1 node group.
Note: For more information about deploying the configurations to node groups, refer to the section Deploying the Configurations to Node Groups.
The following figure describes a sample use case for sub-clustering.
As shown in the figure, consider LOB1, LOB2, and LOB3 as different lines of business that belong to an XYZ company. The LOBs are as follows:
- The LOB1 will use the S3 bucket’s folder 1 and office 365 SaaS services. This LOB is assigned to nodes D1, D2, D3, and D4.
- The LOB2 will use the Salesforce SaaS, SFTP server, and Adlabs SaaS services. This LOB is assigned to nodes D5, D6, D7, and D8.
- The LOB3 will use the NFS share and S3 bucket’s folder 2 services. This LOB is assigned to nodes D9, D10, D11, and D12.
All other services in the RuleSet page will be disabled to deploy the configurations to LOB1 node group.
Important Notes
The sub-clustering feature can only be used when the DSG node is added from the Cluster screen of the DSG Web UI. It is recommended to add the node to the cluster only from this screen. While adding a node, a node group can be assigned to the DSG node. If a node group is not assigned, then a default node group is assigned to that DSG node.
For more information about adding the DSG node from cluster page, refer to the section Adding a Node to the Cluster.
The tunnels, certificates/keys, and gateway configuration files are common to all the DSGs in the cluster.
If the user is using the Selective Tunnel Reloading feature with sub-clustering, then ensure that the node group name is prefixed with dsg_ while setting the tunnel configuration.
For DSG Appliances, rulesets are deployed based on the node groups that are mapped to it.
For DSG containers, the user can use CoP export API to export the configurations for a particular node group and then deploy it to the containers. This is achieved by passing the Node Group as a parameter to the export API.
For more information about CoP export API, refer to the section CoP Export API for deploying the CoP (Containers Only).
Sub-Clustering FAQs
Questions | Answers |
---|---|
What if I have to change the node group assigned to a DSG node? | If you have to change the node group of a node, then log in to the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster, then on the node click the Actions drop-down list and select the Change Groups option. Specify the required node group name and save it. |
What if I have to change the node group on multiple DSG nodes at a time? | If you have to change the node group of multiple DSG nodes at a time, then log in to the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster, select the check box for each individual node on which the node group must be changed, click the Actions drop-down list, and select the Change Groups on Selected Nodes option. Specify the node group name and save the changes. |
From where should the DSG nodes be added to the cluster? | The DSG node must only be added from the Cluster page. Log in to the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster, click the Actions drop-down list, and select the Add Node option. For more information about adding a node to the cluster, refer to the section Add Node. |
What if while adding a node to cluster, the deployment node group is not specified? | If the deployment node group is not specified, then by default it will get assigned to the default node group. |
Can the DSG node be assigned to different node groups at a time? | No, the DSG node can be assigned to only one node group at a time. If required, you can change the node group, but the node is associated with only one node group at any given time. |
What would happen if you add the DSG node from the CLI or TAC? | It is not recommended to add the DSG node from the CLI or TAC. The sub-clustering feature would not work with all the functionalities. |
Can we deploy the configurations to multiple node groups? | Yes. If you have different node groups and you click the Deploy to Node Groups option on the Cluster tab or Ruleset screen, then all the node groups created are shown. Select the check box of the node groups to which the configurations must be pushed. |
How do I configure the services on the Ruleset page to push them to a particular node group? | Enable the required services and deploy them to the intended node group. Note: Disable all the services that are not intended to be pushed to the node group. |
Can I have separate node groups as LOB1, lob1, or any combination of letters for this node group name? | No. All letter-case combinations of the same node group name are considered the same group and are displayed in lowercase. |
Can we deploy the configuration to the node group without providing the tag name or description? | Yes, the tag name and description are not mandatory fields. If you do not provide the tag name, then the configuration version will be untagged. |
What would happen if the node group has a recently deployed configuration and you are assigning that node group to a DSG node? | In this scenario, the recently deployed configuration for that node group will be redeployed to the DSG node. |
5.13 - Implementation
The implementation includes RuleSets Configuration, Migration, Testing and Production Rollout.
The configuration of a Ruleset depends on the data and how it is exchanged between the systems communicating through the DSG. A common SaaS deployment example is followed and additional generic examples, where applicable, are provided.
The process involves the following steps:
Network Setup
- Domain Name System (DNS)
- Connectivity
- Upload or generate SSL Certificates
- Configure Tunnels
Configuring Ruleset
- Create a Service under Ruleset
- Use Learn Mode to find message carrying sensitive data
- Create a Profile under Service and define rules to intercept the message and transform it
Optional: Forwarding logs to external SIEM systems
Rolling Out to Production
The implementation section continues to follow the example of the prototypical organization, Biloxi Corp, an enterprise that chose to use a SaaS called ffcrm.com. This story is followed to configure the Ruleset profile.
Network Setup
As part of setting up the CRM, you must set up domain names and SSL certificates.
Domain Name System
The CRM SaaS selected by Biloxi Corp is accessible through the public domain name www.ffcrm.com. To integrate the gateway in front of the CRM, the consumers need to be directed to the gateway’s load balancer. Because a public domain like www.ffcrm.com is owned by a third party, it cannot be changed to resolve to the address of the gateway’s load balancer. Therefore, a domain name, www.ffcrm.biloxi.com, is used to internally represent the external www.ffcrm.com one. DNS would then be configured such that all sub domain names (i.e. www.ffcrm.biloxi.com or anything.ffcrm.biloxi.com) always point to the same gateway’s load balancer address.
The public domain name www.ffcrm.com remains accessible to Biloxi Corp users. However, that concern is addressed by preventing them from successfully authenticating with the service should they attempt to bypass the gateway, intentionally or by error.
In a lab or testing environment, you can alternatively modify the local system hosts file to achieve a similar goal. This approach only affects the local machine and does not support wildcard configuration, hence each domain name must be specified explicitly.
For example: 2.10.1.53 ffcrm.biloxi.com www.ffcrm.biloxi.com
This part is completed when all SaaS domain names resolve to your gateway’s load balancer IP address.
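As an illustration only, assuming a BIND-style zone file and reusing the placeholder load balancer address 2.10.1.53 from the hosts file example above, the wildcard mapping could be expressed as:
ffcrm.biloxi.com.    IN A 2.10.1.53
*.ffcrm.biloxi.com.  IN A 2.10.1.53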
Network Connectivity
Biloxi Corp is following network segregation best practices using network firewalls. Thus, communication between systems in different parts of the network is strictly controlled.
Depending on where the gateway is installed, some of the firewall’s configuration must be modified so that the gateway cluster is reachable and accessible. The public domain www.ffcrm.com must also be accessible to the gateway cluster.
Administrative web interface and SSH access to the ESA and the gateway cluster may require adjusting network firewall settings.
Verify that the network is properly configured to allow users connectivity to the gateway cluster. This can be validated using tools like ping and telnet. In addition, you’ll need to confirm the connectivity between the gateway nodes themselves and the public SaaS system.
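For example, assuming the gateway load balancer resolves to www.ffcrm.biloxi.com and the service listens on port 443, basic reachability can be checked from a user workstation as follows:
ping www.ffcrm.biloxi.com
telnet www.ffcrm.biloxi.com 443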
SSL Certificates
The load balancer in front of the gateway cluster at Biloxi Corp is equipped with the appropriate SSL Certificate. However, another certificate needs to be generated to secure the connectivity between the load balancer and the gateway cluster. You could generate the certificate elsewhere and add it to the list along with its key using the upload button.
Configuring Ruleset
This section explains steps to create rules and how the rule trajectory can be tracked with Learn Mode.
Creating a Service under RuleSet
At this point it is not known how sensitive data is exchanged with the CRM application. To find out the details, the Learn mode functionality is used, which is controlled at the Service level. Before you begin creating services, ensure that you read the best practices for defining services. For more information, refer to Best practices.
Using the main menu, navigate to Cloud Gateway > 3.3.0.0 {build number} > RuleSet > RuleSet. Click the Add New Service link to create a new service. After giving it an appropriate name and description, associate the new HTTP Gateway Service with one or more tunnels. Through these tunnels, the www.ffcrm.com application network message exchange will occur. Use the (+) icon next to the Tunnels and Hostnames fields to associate the service with the default_443 tunnel and the internal hostname to external forwarding address mapping.
The hostname is how DSG identifies which service applies to an incoming message.
The transaction metric logging feature available at the service level provides detailed metrics of the transactions occurring in the DSG. These details can be logged as part of the gateway.log whenever a protect operation HTTP request is made to the DSG. For more information about transaction metrics options, refer to Transaction Metrics logging.
The sample metrics as seen in the log file are as follows:
Restart the gateway cluster for this change to take effect.
Trusting Self-Signed Certificates for an Outbound Communication
Learn mode is used to find the messages carrying sensitive data, examine their structure, and design rules to protect the data before it is forwarded on to www.ffcrm.com. For TLS-based outbound communications in the DSG, it is expected that the server or SaaS uses a certificate that is signed by a trusted certification authority. In some cases, self-signed certificates are used for outbound communications. For such cases, the DSG is required to trust the server’s certificate to enable TLS-based outbound communications.
Perform the following steps to trust self-signed certificates in the ESA node.
In the ESA Web UI, navigate to Settings > System > File Upload.
Click Browse to upload the self-signed certificates.
From the ESA CLI Manager, navigate to Administration > OS Console. Ensure that the certificate is in PEM (ASCII BASE64) format. This may be done by inspecting the certificate file using the cat command to verify whether the contents of the certificate are in a readable (BASE64-encoded) or binary format.
If the contents of the certificate are in binary (DER) format, execute the following command to convert the contents to PEM (BASE64 encoding); otherwise, skip this conversion step.
openssl x509 -inform der -in <certificate name>.cer -out <certificate name>.pem
Create the directory /opt/protegrity/alliance/config/outbound_certs if it does not exist.
Create a file trusted.cer under /opt/protegrity/alliance/config/outbound_certs if it does not exist.
Copy the contents of the Base64-converted certificate (.pem) file to the trusted.cer file.
cat <certificate name>.pem >> /opt/protegrity/alliance/config/outbound_certs/trusted.cer
If you started with a different certificate file extension name that was already ASCII Base64 encoded, then use the same extension name instead of the “.pem” file extension name in the command example above.
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > RuleSet.
Select the service to which the certificate is assigned and click Edit.
Add the following option in the Outbound Transport Settings textbox.
{"ca_certs": "/opt/protegrity/alliance/config/outbound_certs/trusted.cer"}
Restart the Cluster by navigating to Cloud Gateway > 3.3.0.0 {build number} > Cluster to replicate certificates to all DSG nodes.
Use Learn Mode to find message carrying sensitive data
When enabled, Learn mode logs all message exchanges to host names matching the service configuration. This allows you to examine the message payloads and learn how they are structured, so appropriate rules can be set to transform the relevant parts before the message is forwarded on.
Let’s first check that Learn mode is functioning properly by verifying that new entries are shown as the www.ffcrm.biloxi.com application is accessed. Navigate to the main menu Cloud Gateway > 3.3.0.0 {build number} > RuleSet > Learn Mode. If this is the first time you’re visiting the Learn mode page, then chances are it will come up with no data. You can always click the reset ( ) button to purge the Learn mode data.
Navigating to the www.ffcrm.biloxi.com application home page will generate new entries in the Learn mode page. It is recommended to use a separate window so that going back to the Learn mode page becomes easier.
Switch back to the Learn page and refresh the list by clicking the refresh ( ) button. New entries will indicate that Learn mode is functioning as expected.
Password Masking
The www.ffcrm.biloxi.com application requires users to authenticate with their username and password before they can use the application. To prevent Learn mode from logging users’ passwords, the Service’s Password Masking needs to be configured. The Password Masking configuration comprises a regex pattern to identify the URL(s), another regex pattern to identify the password in the payload, and a masking value that will replace the password before the message is logged by Learn mode.
To do that, the request message carrying the password needs to be identified when the authentication is submitted. Using a unique value for the password will make it easier to find in Learn mode. There is no need to use a real username and password; instead, let’s use testusertest for the username and testpasswordtest for the password.
Submit the login form, switch to the Learn mode screen and refresh it.
Additional messages now appear in the list. Let’s examine the first entry listed right after submitting the fake credentials by clicking on it. Additional details will appear at the bottom of the page. Click any entry on the left, select the Messages Difference tab on the right to see the message payload and scroll all the way down.
The rules displayed in the Learn mode screen and the payload message difference are stored in a separate file in DSG. The default file size limit for this file is 3MB. If the payloads and rules in your configuration are high in volume, you can configure this file size.
In the ESA CLI Manager, navigate to Administration > OS Console. Edit the webuiConfig.json
file in the /opt/protegrity/alliance/config/webinterface
directory to add the Learn mode file size parameter as follows:
{
"learnMode": {
"MAX_WORKER_THREADS": 5,
"LEARNMODE_FLOW_DUMP_FILESIZE": 10
}
}
You can now see that the characteristics of the message carrying the authentication password are the request message Method and URL. It is evident that the authentication with www.ffcrm.com is handled by a POST request to the /authentication URI. The password is carried in this message body as a value for the parameter name authentication%5Bpassword%5D. This is URI/www-form encoding for authentication[password].
In some cases, you may need to repeat this test several times to confirm the consistency of the URL, parameter names, etc.
With these details, adjust the service configuration with the details that were found to mask the authentication password or any other data that Learn mode is not required to log in the clear. Both the pattern and the resource are regular expressions, so let’s use /authentication$ for the resource and (?<=(\bauthentication%5Bpassword%5D=))(.+?)(?=($|&)) for the pattern.
Save the changes and restart the cluster for these changes to take effect. You can do that from the Cloud Gateway > 3.3.0.0 {build number} > Cluster page, Monitoring tab.
Test the change made to the service settings by repeating the authentication attempt as done before. Again, no need to use real credentials, as this step is to ensure the password gets masked in Learn mode.
Note that resetting the Learn mode list makes it easier to find the messages you are looking for in a smaller set of messages.
Protecting Sensitive Data
Data protection is achieved by finding the target data in a message and transforming it from its clear state, protecting or unprotecting it depending on the flow direction of the message. In the www.ffcrm.com application, the account names need to be protected. So, the gateway will be configured to protect account names in messages transmitting them from the user’s browser to the application backend. The gateway will also be configured to unprotect the account names when they are consumed, by transforming them in message payloads transmitted from the application backend back to the user’s browser.
Following the Protegrity methodology, the discovery phase will uncover all the sensitive data entry points in our target application. One obvious entry point is the page in the application where users can create new accounts. Navigate to the page in www.ffcrm.biloxi.com where new accounts are created and purge the Learn mode list before creating a new testing account. A sample name like TestAccount123 is unique enough to easily find in Learn mode.
Switching back to the Learn mode page, the message posted with the account name TestAccount123 that was created can be found.
Learn mode is showing that DSG sees this information sent as a value to the account[name] parameter in the POST request message body to /accounts URI. The message body is encoded using URL Encoding which is again why account[name] is displayed as account%5Bname%5D.
The details needed to create a set of rules under Ruleset to extract the posted account name and protect it using a Protegrity Data Protection transformation are now available. Create a profile under the existing service and call it Account Name.
Now create a new rule which will extract the body of the create account message. This rule will look for a POST HTTP Request message with URI matching the regular expression /accounts$.
Under the Create Account Message rule, let’s add another rule to extract the value of the parameter which carries the account name; as found earlier, this parameter name is account[name].
Last, the account name needs to be protected using a Protegrity Data Protection Transformation rule. This rule requires us to specify and configure how to protect the data. Data Element is a value defined in data security policies managed by Security Officers using the Protegrity Enterprise Security Administrator (ESA), which represents data security characteristics like encryption or tokenization. Our security officer provided us with a data element named PTY_ACCOUNT_NAME, which was created for protecting account names in Biloxi Corp.
The alliance user is an OS user and is responsible for managing DSG processes. Ensure that you do not add this user as a policy user.
In addition to the data protection, an encoding codec with a unique prefix and suffix will be used. The reason is that the protected data will then be easy to find by looking for data which begins and ends with the specified prefix and suffix. The ability to find it easily comes in handy when the data is consumed and a set of rules must be created to remove the protection. Instead of creating a rule for every message that might carry it, we can scan every message coming back from www.ffcrm.biloxi.com and unprotect every protected data item we find corresponding to our prefix and suffix.
Restart the gateway cluster and test the new set of rules that was created by creating a new account name TestAccount456.
The expected result is that the new account name will appear protected (encrypted/tokenized and encoded) as opposed to the way it was entered.
Lastly, the data protection applied on Account Names needs to be reversed. This allows the authorized Biloxi users consuming this information through www.ffcrm.biloxi.com to see it in clear.
The prefix and suffix will help searching the protected account names in every payload coming back from the application. Therefore, a generic rule for textual payloads can be created. This will look for protected account names and unprotect them such that users will get the HTML payload with the account name in the clear.
The top-level branch in this case will target an HTML Response message.
In the extracted HTML Response message body, we will look for the protected account names by searching for values matching a regex pattern comprised of our prefix and suffix.
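As an illustration, assuming a hypothetical prefix PFX_ and suffix _SFX were configured on the encoding codec, the extraction pattern could look like the following; the actual pattern depends on the prefix and suffix you chose:
(?<=PFX_)(.+?)(?=_SFX)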
The leaf rule of this branch will unprotect the extracted protected account names found in the HTML response payload.
Restart the gateway cluster and test the new rules branch created by revisiting the page where TestAccount456 account name appeared protected earlier. A simple refresh of the web page may not be enough as local caching may be used by the browser. Clearing cache, local storage and session information may be required.
Other payload types such as JSON or XML may be used by the application; therefore, additional generic unprotect rules for these payload types may be required.
Forwarding logs to SIEM systems
You can forward all ESA and gateway logs to an external Security Information and Event Management (SIEM) system, such as Splunk or AWS Cloudwatch.
Forwarding logs to an external SIEM
If you plan to forward all ESA and gateway logs to an external SIEM system, configure the alliance.conf file.
To forward logs to a SIEM system:
In the ESA Web UI, navigate to Settings > System.
Navigate to the Cloud Gateway > 3.3.0.0 {build number} > Settings area.
Click Edit to edit the alliance.conf file. You can either send all logs to the SIEM system or just the gateway logs.
To send all logs using TCP, add the following rule to the file.
*.* @@(<IP_ADDRESS> or <HOSTNAME>):<PORT>
To send only the gateway or DSG logs using TCP, add the following rule to the file.
:msg, contains, "PCPG:" @@(<IP_ADDRESS> or <HOSTNAME>):<PORT>
Ensure that the rule is the first entry in the alliance.conf file. The configurations must be made in the ESA. After you deploy the configuration, ensure that the rsyslog service is restarted on each DSG node. If you are using the UDP protocol instead of the TCP protocol, the rules to be defined are as follows:
- To send all logs using UDP, add the following rule to the file.
*.* @(<IP_ADDRESS> or <HOSTNAME>):<PORT>
- To send only the gateway or DSG logs using UDP, add the following rule to the file.
:msg, contains, "PCPG:" @(<IP_ADDRESS> or <HOSTNAME>):<PORT>
The following file is a sample alliance.conf file that shows the SIEM configurations.
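As an illustration only, assuming a SIEM collector at the placeholder address 192.0.2.50 listening on TCP port 514, the first line added to alliance.conf for forwarding only the gateway logs could look like:
:msg, contains, "PCPG:" @@192.0.2.50:514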
Forwarding logs to AWS CloudWatch
AWS CloudWatch must be configured with each DSG node in the cluster to forward the logs from a DSG appliance. After AWS CloudWatch is configured with a DSG node, you must add the path of the required DSG appliance log files to the appliance.conf file in the appliance. For more information about AWS CloudWatch, refer to AWS CloudWatch.
Rolling Out to Production
Following the Protegrity methodology, production rollout requires deploying the DSG with all the necessary rules to protect and unprotect sensitive data. You also need to prevent users and applications from bypassing the DSG so that sensitive data in the clear does not reach the target application directly. Lastly, migrate any historical data that may already exist in the target system.
After the Ruleset is created and tested in a sandbox or testing environment, back up and export the entire configuration. Then, import and restore it on the production DSG cluster. Addresses, certificates, and data element names may differ from the non-production system, which may require some modifications that apply to the production environment only. Before going live, it is recommended to test and verify the Ruleset configuration for all sensitive data in scope of the workflow mapped during the discovery phase. After successful testing of the workflows, temporarily bypass the DSG to confirm that all sensitive data resides protected on the application backend side and that no backend instance of the target sensitive data exists in the clear.
The target application may still be accessible directly, especially when the target application is a public SaaS. For example, after production rollout, Biloxi users may still attempt to access www.ffcrm.com directly. The risk is not so much in these users seeing protected data as it is in them submitting data which otherwise would have been protected by the DSG before reaching the application backend. There are several ways to prevent that, and it may not even be a challenging factor when the target application is owned by the same organization implementing the DSG. If the application is external, i.e. a SaaS, controlling the authentication can be used to solve this problem.
Authentication may be offered by the SaaS using SAML, or the organization may own the authentication process itself. Most organizations would prefer Single Sign-On (SSO) using SAML, which the DSG can be configured to proxy or re-sign, essentially establishing a trust between the DSG and the SaaS. If SAML is not an available option, the DSG can be configured to treat the username/password as sensitive data as well. This means that users will be known to the SaaS by their protected username and password. An attempt to bypass the DSG in such a case will produce the same result as it would for users who are not known to the application at all.
It is not uncommon to introduce data protection to a system already in use. In such cases, it is likely that data designated for protection has already been added to the system in the clear. Historical data migration is the process of replacing the sensitive data from its clear state with a protected state. Applications may offer different ways of achieving this goal, normally by exporting the data, transforming it, and importing it back into the system. The DSG REST API can be used to perform the transformation, which will require a Ruleset to be configured for the exported payload format.
Homegrown and third-party applications change all the time. Hence, it is highly recommended to maintain a testing environment for future Ruleset or other configuration modifications that may be needed.
5.14 - Transaction Metrics Logging
Transaction metrics allow the user to view detailed information about the operations performed by the DSG. The transaction metrics logging feature can be enabled at the service level.
For more information about enabling the transaction metrics logging feature, refer to the Table: Service Fields.
The transaction metrics are logged in the gateway.log file in JSON format.
The sample transaction metrics for a REST request are shown in the following snippet.
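As an illustrative sketch only, assembled from the parameter names and example values in the table that follows (an actual log entry contains additional fields and values specific to your deployment), a transaction metrics entry for a REST request might resemble:
{
"dsg_version": "3.1.0.0.103",
"client_ip": "192.168.1.10",
"http_method": "POST",
"http_status_code": 200,
"http_reason_phrase": "OK",
"auth_user_name": "admin",
"auth_cache_hit": false,
"auth_total_time": 0.016147,
"data_element_name": "PTY_Unicode",
"learn_mode_enabled": false
}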
The following table describes the parameters available in the transaction metrics for different services.
Parameter | Service Supported | Data Type in DSG | Data Type in the Audit Store | Description | Examples |
---|---|---|---|---|---|
auth_cache_hit | HTTP, REST | boolean | boolean | The credential cache status. True indicates that the basic authentication credentials were cached and False indicates that the credentials were not cached. | False |
auth_end_time | HTTP, REST | string | string | The timestamp when the basic authentication was completed. | 2024-02-28T11:26:17.482491732+00:00 |
auth_start_time | HTTP, REST | string | string | The timestamp when the basic authentication was started. | 2024-02-28T11:26:17.466345072+00:00 |
auth_total_time | HTTP, REST | float | double | The difference in seconds between the auth_time_end and auth_time_start parameters. | 0.016147 |
auth_user_name | HTTP, REST | string | string | The username used for basic authentication. | admin |
bucket_name | S3 Out-of-Band | string | string | The name of the S3 bucket from where the DSG reads the object to be processed. | dsg-s3/incoming |
client_correlation_handle | All | string | string | The ID used to uniquely identify a request. It is usually a UUID. This parameter is optional. | 31373039313139363333353837 |
client_ip | All | string | string | The IP address of the client that sent the request to the DSG. | 192.168.1.10 |
data_element_name | All | string | string | The name of the data element used to transform the sensitive data. | PTY_Unicode |
data_protection | All | object | object | The object representing the Protegrity Data Protection transformation rule. | {"data_protection":{"data_elements":[{"data_element_name":"TE_A_N_S13_L1R3_N","num_unprotect":20,"len_unprotect":428}]}} |
dsg_version | All | string | string | The version number of the gateway process. | 3.1.0.0.103 |
file_name | S3 Out-of-Band, Mounted Out-of-Band | string | string | The name of the file that has been processed. | Sample_S3.csv |
http_method | HTTP, REST | string | string | The HTTP method associated with the request. | POST |
http_outbound_available_clients | HTTP | integer | long | The number of outbound HTTP clients available for the requests. | 100 |
http_outbound_count_new_connections | HTTP | integer | long | The number of new connections created to process the request. | 1 |
http_outbound_count_redirect | HTTP | integer | long | The number of redirects encountered while processing a request. | 0 |
http_outbound_local_port | HTTP | integer | integer | The local port used for the outbound connection. | 60084 |
http_outbound_response_code | HTTP | integer | integer | The HTTP status response code from the downstream system. | 200 |
http_outbound_size_download | HTTP | float | double | The size of the data received from the downstream system, in bytes. | 76.00 |
http_outbound_size_queue | HTTP | float | double | The number of requests waiting to be sent to downstream systems. | 0 |
http_outbound_size_upload | HTTP | float | double | The size of the data sent to the downstream system, in bytes. | 76.00 |
http_outbound_speed_download | HTTP | float | double | The average download speed, in bytes per second. | 4.00 |
http_outbound_speed_upload | HTTP | float | double | The average upload speed, in bytes per second. | 75697.00 |
http_outbound_time_appconnect | HTTP | float | double | The time taken to complete the SSH/TLS handshake. | 0.000000000 |
http_outbound_time_connect | HTTP | float | double | The time taken to connect to the remote host. | 0.000374000 |
http_outbound_time_namelookup | HTTP | float | double | The time taken to resolve the name. | 0.000161000 |
http_outbound_time_pretransfer | HTTP | float | double | The time from the start until before the first byte is sent. | 0.000397000 |
http_outbound_time_queue | HTTP | float | double | The time that the requests spent in the queue before being processed. | 0.000008821 |
http_outbound_time_request | HTTP | float | double | The time from when the request was popped off the queue to be processed to the time a response was sent back to the caller. | 0.001168013 |
http_outbound_time_starttransfer | HTTP | float | double | The time taken from the start of the request until the first byte was received from the server. | 0.000398000 |
http_outbound_time_total | HTTP | float | double | Total time that the client library took to process the HTTP request. | 0.001004000 |
http_outbound_url | HTTP | string | string | The destination URL used for the outbound request. | http://tornadoserver:8889/passthrough |
http_reason_phrase | HTTP, REST | string | string | The reason phrase associated with the HTTP status code. | OK |
http_status_code | HTTP, REST | integer | integer | The HTTP status code sent to the HTTP client. | 200 |
input_etag | S3 Out-of-Band | string | string | The Etag of the input object processed by the DSG. | a0b00e60cc87fff8537e68827c3f329a |
input_size | S3 Out-of-Band | integer | long | The size of the input object, in bytes, processed by the DSG. | 81 |
learn_mode_enabled | All | boolean | boolean | Indicates if the Learn mode is enabled. | false |
len_protect | All | integer | long | The length of the sensitive data that is protected. | 30 |
local_port | HTTP, REST | integer | integer | The local port used for the inbound connection. It can be used with the open_connections parameter to identify new and unique connections. | 43004 |
logtype* | All | NA | string | The value that identifies the type of metric, such as dsg_metrics_transaction. | dsg_metrics_transaction |
method | SFTP | string | string | The SFTP method associated with the request. The method can be either GET or PUT. | download |
node_hostname | All | string | string | The hostname of the DSG. | protegrity-cg123 |
node_pid | All | integer | integer | The process ID of the gateway process that processed the request. | 56532 |
num_protect | All | integer | long | The number of protect operations performed. | 3 |
num_replace | All | integer | long | The number of regex replace operations performed. | 2 |
open_connections | HTTP, REST | integer | long | The number of open connections associated with the tunnel in a process. | 1 |
origin_time_utc* | All | NA | date | The time in UTC at which this log is ingested. | Feb 26, 2024 @ 03:51:54.416 |
output_bucket_name | S3 Out-of-Band | string | string | The name of the S3 bucket where the DSG writes the processed object. | dsg-s3/incoming |
output_etag | S3 Out-of-Band | string | string | The Etag of the output object processed by the DSG. | a0b00e60cc87fff8537e68827c3f329a |
output_file_name | S3 Out-of-Band | string | string | The name of the object that the DSG writes to the output S3 bucket (that is, the value of the output_bucket_name parameter). | Sample_s3.csv |
output_size | S3 Out-of-Band | integer | long | The size of the object, in bytes, written to the output S3 bucket. | 81 |
processing_time_downstream | HTTP, SMTP, SFTP | float | double | The difference between the start time of processing a response and the end time of processing a request. | 0.003696442 |
processing_time_request | All | float | double | The time taken for the ruleset to process the request data. | 0.001080275 |
processing_time_response | HTTP, SMTP, SFTP, S3 Out-of-Band | float | double | The time taken for the ruleset to process the response data. It is only applicable to the protocols where a response is expected from a downstream system. | 0.000162601 |
regex_replace | All | object | object | The object representing the Regex Replace transformation rule. | {"regex_replace":{"replace_rules":[{"rule_name":"Hello -> HELLO","num_replace":6},{"rule_name":"World -> dlroW","num_replace":6}]}} |
request_uri | HTTP, REST | string | string | The URI of the request being processed by the DSG. | http://httpservice/passthrough |
rule_name | All | string | string | The name of the rule used to transform the sensitive data. | Sample Rule1 |
server_ip | SFTP | string | string | The IP address or hostname of the SFTP server that the DSG is communicating with. | sftp.server.com |
service_name | All | string | string | The name of the service processing the request. | Passthrough |
service_type | All | string | string | The type of the service processing the request. | HTTP-GW |
time_pre_processing | HTTP, REST | float | double | The time an HTTP or REST request waited before it was processed. | 0.010870 |
time_start | All | date | date | The timestamp when the DSG received a request. | 2024-02-28T11:27:13.515926838+00:00 |
time_end | All | date | date | The timestamp representing when a request was completed. | 2024-02-28T11:27:13.519971132+00:00 |
time_lock | S3 Out-of-Band | float | double | The time taken to process the file from the time the lock was created. | 1708963670.43 |
time_total | All | float | double | The difference, in seconds, between the time_end and time_start parameters. | 0.005429983 |
transformations | All | object | object | The object representing the Regex Replace and Protegrity Data Protection transformation rules. | "transformations":{"data_protection":{"data_elements":[{"data_element_name":"TE_A_N_S13_L1R3_N","num_unprotect":20,"len_unprotect":428}]}} |
tunnel_name | All | string | string | The name of the tunnel processing the request. | default_80 |
user_name | All | string | string | The username used for the protection, unprotection, or reprotection. | jack123 |
* The origin_time_utc and logtype parameters are only displayed on the Audit Store Dashboards.
The normalize-time-labels flag is configured in the features.json file by default. When this flag is configured, the default timestamp parameters are converted to normalized timestamp parameters, as shown in the Table: Default and Normalized timestamp parameters.
To access the features.json file, navigate to Settings > System > Files, and under the Cloud Gateway - Settings area, access the features.json file.
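A minimal sketch of the flag in the features.json file is shown below; this assumes the flag is a top-level boolean entry, so verify the exact structure of the features.json file on your appliance.
{
  "normalize-time-labels": true
}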
The following table shows the default timestamp parameters and the normalized timestamp parameters.
Default Timestamp Parameters | Normalized Timestamp Parameters |
---|---|
auth_end_time | auth_time_end |
auth_start_time | auth_time_start |
auth_total_time | auth_time_total |
end_time | time_end |
start_time | time_start |
total_time | time_total |
pre_processing_time | time_pre_processing |
Forwarding Transaction Metrics to Insight
The transaction metrics are also forwarded to Insight and can be viewed on the Audit Store Dashboards.
Ensure that the following prerequisites are met before you view the logs on the Audit Store Dashboards:
The Analytics component is initialized on the ESA. The initialization of Analytics is required for displaying the Audit Store information on the Audit Store Dashboards.
For more information about initializing the Analytics, refer to the section Initializing analytics on the ESA in the Protegrity Installation Guide.
For more information about the audit indexes, refer to the section Understanding the audit index fields in the Protegrity Insight Guide.
The logs are forwarded to the Audit Store.
For more information about forwarding the logs, refer to the section Forwarding Audit Logs to Insight.
The following figure shows the sample transaction metrics on the Discover screen of the Audit Store Dashboards.
Note: The index_node, tiebreaker, and index_time_utc parameters are only logged on the Audit Store Dashboards.
For more information about these parameters, refer to the section Understanding the audit index fields in the Protegrity Insight Guide.
The DSG transaction logs are stored in the pty_insight_analytics_dsg_transaction_metrics_9.2 index file. It is recommended to enable the scheduled task to free up the space used by old index files that you do not require. For transaction metrics, edit the Delete DSG Transaction Indices task and enable the task. The scheduled task can be set to n days based on your preference.
For more information about scheduled tasks, refer to the section Using the scheduler in the Protegrity Insight Guide.
Total Time Breakdown for HTTP Request
This section describes the total time taken for processing the HTTP request.
The total_time value is calculated by adding the time taken by the following parameters:
- time_pre_processing: The time an HTTP or REST request waited before it was processed.
- processing_time_request: The time taken for the ruleset to process the request data.
- processing_time_downstream: The time taken to send a request to a downstream system and receive a response from it.
- processing_time_response: The time taken for the ruleset to process the response data.
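As an illustration with hypothetical values (not actual DSG output), the four components add up to the total_time value:
"time_pre_processing": 0.010870,
"processing_time_request": 0.001080,
"processing_time_downstream": 0.003696,
"processing_time_response": 0.000163,
"total_time": 0.015809
Here, 0.010870 + 0.001080 + 0.003696 + 0.000163 = 0.015809 seconds.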
The following chart depicts the breakdown of the total time taken for an HTTP request.
The processing_time_downstream value is the difference between the start time of processing the response and the end time of processing a request. The processing_time_downstream is calculated by considering the time taken by any of the following parameters:
- http_outbound_time_queue: The time that the request spent in the queue before being processed.
- http_outbound_time_namelookup: The time taken to resolve the name.
- http_outbound_time_connect: The time taken to connect to the remote host.
- http_outbound_time_appconnect: The time taken to complete the SSH/TLS handshake.
- http_outbound_time_pretransfer: The time from the start until before the first byte is sent.
- http_outbound_time_starttransfer: The time taken from the start of the request until the first byte was received from the server.
- http_outbound_time_total: Total time that the client library took to process the HTTP request.
- http_outbound_time_redirect: The time, in seconds, it took for all redirection steps including name lookup, connect, pretransfer, and transfer before the final transaction was started.
The following chart depicts the processing time downstream for an HTTP request.
5.15 - Error metrics logging
Error metrics allow a user to view details about errors that are encountered while processing a file. The error metrics logging feature can be enabled at the service level.
When the Protegrity Data Protection Transform Rule is configured, certain error conditions can occur during processing. The following are some of the error conditions:
- The input is too short or long for a particular data element
- Invalid Email ID
- Invalid Data Type
- Invalid Credit Card Details
When these error conditions occur, the rule stops processing. The permissive error handling feature is therefore used to handle the errors and continue processing the erroneous input file.
For more information about permissive error handling, refer to the Table: Protegrity Data Protection Method.
If there is a lot of erroneous data in an input file, it can be difficult to identify and categorize errors. In that situation, the error metrics can be used to understand the total number of errors, the offset of where the error was encountered, the reasons why the error was encountered, the ruleset details, and so on.
Error metrics are written to the gateway.log file and the Log Viewer screen in JSON format.
In the case of NFS and CIFS protocols, a lock file is created for each file that is to be processed. Error metrics will also be appended to the lock files alongside the update details for each file. For example, if there are ten files to be processed, namely, Test1 to Test10, then ten respective lock files will be created, namely, Test1.lock to Test10.lock. If there are any errors encountered in the Test1 file, then the error metrics for this file will be appended to the Test1.lock file.
Important: The error metrics is only supported for the following payloads:
- CSV Payload
- Fixed Width
For more information about the CSV and Fixed Width payloads, refer to the sections CSV Payload and Fixed Width.
Important: The error metrics support is only available for the following services:
- HTTP
- REST
- NFS
- CIFS
Important: The following conditions should be met to use the error metrics logging feature:
- Users must use the Protegrity Data Protection method to transform the data.
- Permissive error handling should be configured at the transform rule.
- Error Metrics Logging field must be enabled at the service level.
Note: If the permissive error handling is disabled or a different transformation method is used, then the total_error_count will be 1 in the error metrics.
Sample error metrics for a REST request are shown in the following log file snippet:
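The following is an illustrative example only, assembled from the example values in the parameter table below; an actual log entry may contain additional fields depending on the payload and service.
{
  "id": "a272d51b14df435eb67fdf46d2ecff83",
  "node_hostname": "protegrity-cg123",
  "node_pid": 5577,
  "tunnel_name": "Tunnel_8081",
  "service_name": "REST Demo Service",
  "request_uri": "http://testservice:8081/echo",
  "time_start": "2024-02-28T11:48:18.148773909+00:00",
  "time_end": "2024-02-28T11:48:26.318685532+00:00",
  "total_time": 8.17,
  "total_error_count": 7,
  "reasons": [{"reason": "The input is too short (returnCode 0)", "rulesets": [{"ruleset": "CSV Rule", "offset": [{"columns": [{"2": {"rows": [2, 5, 8, 11, 14, 17, 20], "trimmed": false}}]}], "error_count": 7}]}]
}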
The following table describes the parameters available in the error metrics for different services.
Parameter | Services Supported | Data Type in DSG | Data Type in the Audit Store | Description | Example |
---|---|---|---|---|---|
column_info | HTTP, REST, NFS, CIFS | integer, string | integer, string | For the CSV payload, a list of column numbers and column names is logged when errors are encountered. Note: The header is taken from the CSV or the Fixed Width extract rule. While configuring the CSV extract rule, if the Header is set to -1, then the column_info parameter is not logged in the error metrics. The following snippet shows the column_info parameter for the Fixed Width payload: "column_info": { "1": { "column_start": 3, "column_width": 17 } } | {"column_info":{"2":"first_name"}} |
columns | HTTP, REST, NFS, CIFS | integer | long | The column numbers where the error is encountered. | 2 |
error_count | HTTP, REST, NFS, CIFS | integer | long | The total number of errors encountered for a particular reason. | 1 |
file_name | NFS, CIFS | string | string | The name of the file that is being processed by the DSG. | Sample_NFS.csv |
id | HTTP, REST, NFS, CIFS | string | string | A unique ID to identify the transaction. | a272d51b14df435eb67fdf46d2ecff83 |
logtype* | All | NA | string | The value that identifies the type of metric, such as dsg_metrics_error. | dsg_metrics_error |
node_hostname | HTTP, REST, NFS, CIFS | string | string | The hostname of the DSG. | protegrity-cg123 |
node_pid | HTTP, REST, NFS, CIFS | integer | integer | The process ID of the gateway process that processed the request. | 5577 |
origin_time_utc* | All | NA | date | The time in UTC at which this log is ingested. | Feb 26, 2024 @ 03:51:54.416 |
reasons | All | object | object | The object representing the set of error reasons, ruleset details, offsets of columns or rows, and error counts. | "reasons":[{"reason":"The input is too short (returnCode 0)","rulesets":[{"ruleset":"CSV Rule","offset":[{"columns":[{"2":{"rows":[2,5,8,11,14,17,20],"trimmed":false}}]}],"error_count":7}]}] |
reason | HTTP, REST, NFS, CIFS | string | string | The reason for a particular error will be displayed. | The input is too short (returnCode 0) |
request_uri | HTTP, REST | string | string | The URI of the request being processed by the DSG. | http://testservice:8081/echo |
rows | HTTP, REST, NFS, CIFS | integer | integer | The row numbers where the error is encountered. | 0 |
ruleset | HTTP, REST, NFS, CIFS | string | string | The traversal path of the transform rule that induced the error. | Text Protection/Word by word data extraction/Data Protection |
service_name | HTTP, REST, NFS, CIFS | string | string | The name of the service processing the request. | REST Demo Service |
time_end | HTTP, REST, NFS, CIFS | string | date | The timestamp representing when a request was completed. | 2024-02-28T11:48:26.318685532+00:00 |
total_error_count | HTTP, REST, NFS, CIFS | integer | long | The total number of errors encountered while processing a request. | 1 |
time_start | HTTP, REST, NFS, CIFS | string | date | The timestamp when the DSG received a request. | 2024-02-28T11:48:18.148773909+00:00 |
total_time | HTTP, REST, NFS, CIFS | float | double | The difference in seconds between the time_end and time_start parameters. | 8.17 |
trimmed | HTTP, REST, NFS, CIFS | boolean | boolean | True indicates that the error metrics is trimmed. The trimming of the error metrics depends on the checkErrorLogAfterCount parameter, which is configurable in the gateway.json file. For more information about the checkErrorLogAfterCount parameter, refer to the Table: gateway.json configurations in the Protegrity Data Security Gateway User Guide 3.2.0.0. | false |
tunnel_name | HTTP, REST, NFS, CIFS | string | string | The name of the tunnel processing the request. | Tunnel_8081 |
Forwarding Error Metrics to Insight
The error metrics are also forwarded to the Audit Store and can be viewed on the Audit Store Dashboards.
Ensure that the following prerequisites are met before you view the logs on the Audit Store Dashboards:
The Analytics component is initialized on the ESA. The initialization of Analytics is required for displaying the Audit Store information in the Audit Store Dashboards.
For more information about initializing the Analytics, refer to the section Initializing analytics on the ESA in the Protegrity Installation Guide.
For more information about the audit indexes, refer to the section Understanding the audit index fields in the Protegrity Insight Guide.
The logs are forwarded to Insight.
For more information about forwarding the logs, refer to the section Forwarding Audit Logs to Insight.
The following figure shows the sample error metrics on the Discover screen of the Audit Store Dashboards.
Note: The index_node, tiebreaker, and index_time_utc parameters are only logged on the Audit Store Dashboards.
For more information about these parameters, refer to the section Understanding the audit index fields in the Protegrity Insight Guide 9.2.0.0.
The DSG error metrics logs are stored in the pty_insight_analytics_dsg_error_metrics_9.2 index file. You can configure and enable the scheduled task to free up the space used by old index files that you do not require. For error metrics, edit the Delete DSG Error Indices task and enable the task. The scheduled task can be set to n days based on your preference.
For more information about scheduled tasks, refer to the section Using the scheduler in the Protegrity Insight Guide 9.2.0.0.
Error Metrics with Non-Permissive Errors
This section describes how the non-permissive errors are logged in the error metrics.
When permissive error handling is disabled and an error is encountered while transforming data, the DSG captures the error once and stops processing the entire file. The error could be in the input file, the ruleset, or any other configuration. In such scenarios, the error metrics are always logged with total_error_count = 1.
The sample error metrics with non-permissive errors are as seen in the following log file snippet:
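The following is an illustrative sketch only; it reuses example values from the parameter table above and shows how total_error_count is fixed at 1 when permissive error handling is disabled.
{
  "node_hostname": "protegrity-cg123",
  "node_pid": 5577,
  "tunnel_name": "Tunnel_8081",
  "service_name": "REST Demo Service",
  "total_error_count": 1,
  "reasons": [{"reason": "The input is too short (returnCode 0)", "rulesets": [{"ruleset": "CSV Rule", "error_count": 1}]}]
}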
Configuring the HTTP Status Codes
This section describes how to configure the HTTP status codes for the errors that may occur while processing a file.
When errors are encountered and the user wants to handle them permissively, then different HTTP status codes can be configured in the Error Code field from the DSG Web UI. At the service level of the RuleSet page, an Error Code field is added for the HTTP and REST protocols to handle errors permissively.
Note: The Error Code field is not supported for the NFS and CIFS protocols.
The following are the HTTP status codes that can be configured from the Web UI:
- 200 OK
- 201 Created
- 202 Accepted
- 203 Non-Authoritative Information
- 205 Reset Content
- 206 Partial Content
- 400 Bad Request
- 401 Unauthorized
- 403 Forbidden
- 422 Unprocessable Entity
- 500 Internal Server Error
- 503 Service Unavailable
Note: By default, the Error Code is set to 200 OK.
The error metrics options for different protocols are as seen in the following figures:
5.16 - Usage Metrics Logging
Usage metrics provide information about the usage of tunnels, services, profiles, and rules. By default, the usage metrics feature is enabled in the gateway.json file.
For more information about the gateway.json file, refer to the section Gateway.json file.
The following snippet shows how the usage metrics feature is enabled in the gateway.json file.
"stats": {
"enabled": true
}
The metrics are recorded in CSV format in the gateway.log file, and then parsed to JSON and sent to Insight.
For more information about viewing the usage metrics on Insight, refer to the section Forwarding Usage Metrics to Insight.
The logs are emitted at a default interval of 120 seconds. During this 120-second window, all requests processed by the gateway process are recorded. To modify the time window, configure the usageLogInterval parameter in the stats setting in the gateway.json file, as shown in the example after the following note.
Note:
The time interval is calculated once the gateway restarts.
The usage metrics will only be logged when there is a transaction.
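For example, the stats section in the gateway.json file might look like the following; the usageLogInterval value is assumed here to be in seconds, so verify the unit and placement for your DSG release.
"stats": {
  "enabled": true,
  "usageLogInterval": 120
}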
The following table describes the usage metrics for Tunnels.
Parameter | Data Type in DSG | Data Type in Insight | Description | Example |
---|---|---|---|---|
metrics_type | string | string | The metric type is displayed as tunnel. | Tunnel |
version | integer | integer | A version for the tunnel metric type. | 0 |
tunnel_type | string | string | The type of tunnel used to process a request is displayed. | HTTP |
logtype | string | string | The value that identifies the type of metric, such as dsg_metrics_usage_tunnel. | dsg_metrics_usage_tunnel |
log_time | epoch_millis | date | The time when the usage is reported. | 1707242725766 |
log_interval | integer | long | The time difference between the current and previous logs. | 30003 |
tunnel_id | string | string | The unique ID of the tunnel. | t-808715e0-b725-4781-8bf3-429220dd46d5 |
uptime | float | double | The time in seconds since the tunnel loaded. | 22670.46733236313 |
bytes_processed | integer | long | The frontend and backend bytes the tunnel processed since the last time usage was reported. | 38 |
frontend_bytes_processed | integer | long | The frontend bytes the tunnel has processed since the last time usage was reported. | 38 |
backend_bytes_processed | integer | long | The backend bytes the tunnel has processed since the last time usage was reported. | 0 |
total_bytes_processed | integer | long | The total number of frontend and backend bytes the tunnel has processed during the time the tunnel has been loaded. | 38 |
total_frontend_bytes_processed | integer | long | The total number of frontend bytes the tunnel has processed during the time the tunnel has been loaded. | 38 |
total_backend_bytes_processed | integer | long | The total number of backend bytes the tunnel has processed during the time the tunnel has been loaded. | 0 |
message_count | integer | long | The number of requests the tunnel received since the last time usage was reported. | 1 |
total_message_count | integer | long | The total number of requests the tunnel received during the time the tunnel has been loaded. | 1 |
origin_time_utc* | NA | date | The time in UTC at which this log is ingested. | Feb 26, 2024 @ 03:51:54.416 |
* The origin_time_utc parameter will only be displayed on the Insight Dashboards.
The following table describes the usage metrics for Services.
Parameter | Data Type in DSG | Data Type in Insight | Description | Example |
---|---|---|---|---|
metrics_type | string | string | The metric type is displayed as Service. | Service |
version | integer | integer | A version for the service metric type. | 0 |
service_type | string | string | The type of service used to process a request is displayed. | REST-API |
logtype | string | string | The value that identifies the type of metric, such as dsg_metrics_usage_service. | dsg_metrics_usage_service |
log_time | epoch_millis | date | The time when the usage is reported. | 1707242725766 |
log_interval | integer | long | The time difference between the current and previous logs. | 30003 |
service_id | string | string | The unique ID of the service. | s-62a3a161-6bd7-42fa-a9c6-6357d77824ca |
parent_id | string | string | The unique ID of the tunnel rule. | t-808715e0-b725-4781-8bf3-429220dd46d5 |
calls | integer | long | The number of times the service processed frontend and backend requests since the time the usage was last reported. | 38 |
frontend_calls | integer | long | The number of times the service processed frontend requests since the time the usage was last reported. | 38 |
backend_calls | integer | long | The number of times the service processed backend requests since the time the usage was last reported. | 0 |
total_calls | integer | long | The total number of times the service processed frontend and backend requests since the service has been loaded. | 38 |
total_frontend_calls | integer | long | The total number of times the service processed frontend requests since the service has been loaded. | 38 |
total_backend_calls | integer | long | The total number of times the service processed backend requests since the service has been loaded. | 0 |
bytes_processed | integer | long | The frontend and backend bytes the service processed since the last time usage was reported. | 2 |
frontend_bytes_processed | integer | long | The frontend bytes the service processed since the last time usage was reported. | 1 |
backend_bytes_processed | integer | long | The backend bytes the service processed since the last time usage was reported. | 1 |
total_bytes_processed | integer | long | The total number of frontend and backend bytes the service has processed during the time the service has been loaded. | 2 |
total_frontend_bytes_processed | integer | long | The total number of frontend bytes the service has processed during the time the service has been loaded. | 1 |
total_backend_bytes_processed | integer | long | The total number of backend bytes the service has processed during the time the service has been loaded. | 1 |
origin_time_utc* | NA | date | The time in UTC at which this log is ingested. | Feb 26, 2024 @ 03:51:54.416 |
* The origin_time_utc parameter will only be displayed on the Insight Dashboards.
The following table describes the usage metrics for Profile.
Parameter | Data Type in DSG | Data Type in Insight | Description | Example |
---|---|---|---|---|
metrics_type | string | string | The metric type is displayed as Profile. | Profile |
version | integer | integer | A version for the profile metric type. | 0 |
log_time | epoch_millis | date | The time when the usage is reported. | 1707242725766 |
log_interval | integer | long | The time difference between the current and previous logs. | 339439 |
logtype | string | string | The value that identifies the type of metric, such as dsg_metrics_usage_profile. | dsg_metrics_usage_profile |
parent_id | string | string | The unique ID of the service rule. | s-62a3a161-6bd7-42fa-a9c6-6357d77824ca |
profile_id | string | string | The unique ID of the profile. | p-b335795f-8e77-4b15-9ba0-06002cc29bb9 |
calls | integer | long | The number of times the profile processed a request since the time usage was last reported. | 1 |
total_calls | integer | long | The total number of times the profile processed a request since the profile has been loaded. | 1 |
profile_reference_count | integer | long | The number of times this profile has been called through a profile reference since the time the usage was last reported. | 0 |
total_profile_reference_count | integer | long | The total number of times this profile has been called through a profile reference since the profile has been loaded. | 0 |
bytes_processed | integer | long | The bytes the profile processed since the last time the usage was reported. | 38 |
total_bytes_processed | integer | long | The total bytes the profile processed since the profile has been loaded. | 38 |
elapsed_time_sample_count | integer | long | The number of times the profile was sampled since the last time the usage was reported. | 1 |
elapsed_time_average | integer | long | The average amount of time, in nanoseconds, it took to process a request, based on elapsed_time_sample_count. | 13172454 |
total_elapsed_time_sample_count | integer | long | The number of times the profile was sampled since the profile has been loaded. | 1 |
total_elapsed_time_sample_average | integer | long | The average amount of time, in nanoseconds, it took to process a request, based on total_elapsed_time_sample_count. | 13172454 |
origin_time_utc* | NA | date | The time in UTC at which this log is ingested. | Feb 26, 2024 @ 03:51:54.420 |
* The origin_time_utc parameter will only be displayed on the Insight Dashboards.
The following table describes the usage metrics for Rules.
Name | Data Type in DSG | Data Type in Insight | Description | Example |
---|---|---|---|---|
version | integer | integer | A version for the rule metric type. | 0 |
rule_type | string | string | The type of rule used to process a request is displayed. | Extract |
codec | string | string | It will display the type of payload extracted or the method used for data transformation. | Text |
logtype | string | string | The value that identifies the type of metric, such as dsg_metrics_usage_rule. | dsg_metrics_usage_rule |
log_time | epoch_millis | date | The time when the usage is reported. | 1707242725766 |
log_interval | integer | long | The time difference between the current and previous logs. | 22670424 |
broken | boolean | boolean | It indicates whether the rule is broken or not. | false |
domain_name_rewrite | boolean | boolean | Indicates whether the rule is a domain name rewrite rule. | false |
rule_id | string | string | The unique ID of the rule. | r-38b72f16-f838-4602-81aa-cd881a76e418 |
parent_id | string | string | The unique ID of the profile rule. | p-b335795f-8e77-4b15-9ba0-06002cc29bb9 |
calls | integer | long | The number of times the rule processed a request since the time the usage was last reported. | 1 |
total_calls | integer | long | The total number of times the rule processed a request since the rule has been loaded. | 1 |
profile_reference_count | integer | long | The number of times this rule has been called via a profile reference since the time the usage was last reported. | 0 |
total_profile_reference_count | integer | long | The total number of times this rule has been called via a profile reference since the rule has been loaded. | 0 |
bytes_processed | integer | long | The bytes the rule processed since the last time the usage was reported. | 1 |
total_bytes_processed | integer | long | The total bytes the rule processed since the rule has been loaded. | 1 |
elapsed_time_sample_count | integer | long | The number of times the rule was sampled since the last time usage was reported. | 1 |
elapsed_time_sample_average | integer | long | The average amount of time, in nanoseconds, it took to process data, based on elapsed_time_sample_count. | 13137378 |
total_elapsed_time_sample_count | integer | long | The number of times the rule was sampled since the rule has been loaded. | 1 |
total_elapsed_time_sample_average | integer | long | The average amount of time, in nanoseconds, it took to process data, based on total_elapsed_time_sample_count. | 13137378 |
origin_time_utc* | NA | date | The time in UTC at which this log is ingested. | Feb 26, 2024 @ 03:51:54.420 |
* The origin_time_utc parameter will only be displayed on the Insight Dashboards.
Forwarding Usage Metrics to Insight
The usage metrics are also forwarded to the Audit Store and can be viewed on the Insight Dashboards.
Ensure that the following prerequisites are met before you view the logs on the Insight Dashboards:
The Analytics component is initialized on the ESA. The initialization of Analytics is required for displaying the Audit Store information on the Audit Store Dashboards.
For more information about initializing the Analytics, refer to the section Initializing analytics on the ESA in the Protegrity Installation Guide.
For more information about the audit indexes, refer to the section Understanding the audit index fields in the Protegrity Insight Guide.
The logs are forwarded to Insight.
For more information about forwarding the logs, refer to the section Forwarding Audit Logs to Insight.
The following figure shows the sample usage metrics on the Discover screen of the Audit Store Dashboards.
Note: The index_node, tiebreaker, and index_time_utc parameters are only logged on the Audit Store Dashboards.
For more information about these parameters, refer to the section Understanding the audit index fields in the Protegrity Insight Guide.
The DSG usage metrics logs are stored in the pty_insight_analytics_dsg_usage_metrics_9.2 index file. You can configure and enable the scheduled task to free up the space used by old index files that you do not require. For usage metrics, edit the Delete DSG Usage Indices task and enable the task. The scheduled task can be set to n days based on your preference.
For more information about scheduled tasks, refer to the section Using the scheduler in the Protegrity Insight Guide 9.2.0.0.
5.17 - Ruleset Reference
A ruleset comprises a service, a profile, and a rule. The object defined at the beginning is called a service. Under the service, a profile is created. A service can contain one or more profiles. Profiles are containers for rules. The rules are applied on messages transmitted through the DSG.
5.17.1 - Services
In DSG, the following service types are available:
REST API Service: DSG acts as a REST API Server, protecting or unprotecting application data in a trusted domain.
Gateway Service: DSG acts as a gateway to protect sensitive information before it reaches an untrusted domain. The following are the different gateway services:
- REST API
- HTTP
- WebSocket Secure (WSS)
- SMTP
- SFTP
- Amazon S3
- Mounted File System
Gateway service fields
The following figure illustrates all the common fields for the available service types.
The following table describes all the common fields for the available Service Types.
Field | Sub field | Description | Notes |
---|---|---|---|
Service Type | Specify the role of this service i.e. whether to act as REST API or act as a gateway for a specific protocol. | ||
Name | Name for the Service. | ||
Description | Description for the Service. | ||
Enabled | Enable or disable the Service. | ||
Tunnels | List of tunnels lying below the service instance. | ||
Hostnames | List of hostname to forwarding address mappings | ||
Hostname | Hostname or the IP address for an inbound request received by the gateway. | ||
Forwarding Address | Hostname or the IP address for an outbound request forwarded by the gateway. | ||
Password Masking | List of parameter values to be masked before the output is sent to the log files. | |
Pattern | Regular expression to find text to replace in the parameter. | ||
Resource | Regular expression to look for in the parameter before masking it. | ||
Mask | The replacement text which acts as a mask for the pattern. | ||
Learn Mode Settings | Filters for capturing details to be presented in the learn mode. | ||
Enabled | Enable or disable learn mode settings. | ||
Exclude Resource | Values in the field are excluded from the Learn Mode logging. | ||
Exclude Content Type | Content type specified in the field is excluded from the Learn Mode logging. | ||
Include Resource | Values in the field are included in the Learn Mode logging. | ||
Include Content-Type | Content type specified in the field is included in the Learn Mode logging. | ||
Transaction Metrics Logging | Define if you want to log detailed transaction metrics, such as the protect operation performed, length of the data, service used to perform protection, tunnel used, and so on. | |
Enabled | Enable or disable transaction metrics to be logged in the log file. | ||
Log Level | Select from the following logging levels. | Ensure that the log level you select is the same or part of a higher log subset that you defined in the gateway log level. |
Transaction Metrics in HTTP Response Header | |||
HTTP Response Header Reporting Enabled | Enable or disable detailed transaction metrics such as, data security operation performed, length of the data, service used to perform protection, tunnel used, and so on in the HTTP Response Header. | If the HTTP Response Header Reporting Enabled option is selected and streaming is enabled, the transaction metrics data will not be displayed in the HTTP Response Header. | |
HTTP Response Header Name | Name of the HTTP Response Header carrying the transaction metrics data. The default value for this option is X-Protegrity-Transaction-Metrics. You can change the default value as per your requirements. | The name of the HTTP Response Header must be defined with valid characters. An HTTP Response Header name defined with invalid characters is automatically modified to the default value X-Protegrity-Transaction-Metrics. |
- The Transaction Metrics in HTTP Response Header option is only available for the REST API and HTTP services.
5.17.1.1 - Amazon S3 gateway
The fields for the Amazon S3 Gateway service are as seen in the following figure.
The following table describes the additional fields relevant for the Amazon S3 Gateway service.
Field | Sub-Field | Description | Notes |
---|---|---|---|
Object Mapping | List of source and target objects that the service will use. | ||
Source | Bucket path where data that needs to be protected is stored. For example, john.doe/incoming . | The DSG supports four levels of nested folders in an Amazon S3 bucket. | |
Target | Bucket path where protected data is stored. For example, john.doe/outgoing . | ||
Streaming | List of file processing delimiters to process files using streaming. Note: The Text, CSV, and Binary payloads are supported. If you want to use XML/JSON payload with HTTP streaming, ensure you use the Text payload for the extract rule. | |
Filename | Regular Expression to look for in the file’s name and path before applying streaming (e.g. \.csv$) | ||
Delimiter | Regular Expression used to delimit stream. Rules will be invoked on delimited streams. | If the delimiter value is not matched, then the data will be processed in non-streaming mode. | |
The options for the Outbound Transport Settings field in the Amazon S3 Gateway are described in the following table.
Options | Description |
---|---|
SSECustomerAlgorithm | If server-side encryption with a customer-provided encryption key was requested, the response will include this header confirming the encryption algorithm used. |
SSECustomerKey | Constructs a new customer provided server-side encryption key. |
SSECustomerKeyMD5 | If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide round trip message integrity verification of the customer-provided encryption key. |
ServerSideEncryption | The Server-side encryption algorithm used when storing this object in S3 (e.g., AES256, aws:kms). |
StorageClass | Specifies constants that define Amazon S3 storage classes. |
SSEKMSKeyId | Specifies the ID of the AWS Key Management Service (KMS) master encryption key that was used for the object. |
ACL | Allows controlling the ownership of uploaded objects in an S3 bucket. For example, if ACL (Access Control List) is set to “bucket-owner-full-control”, new objects uploaded by other AWS accounts are owned by the bucket owner. By default, the objects uploaded by other AWS accounts are owned by them. |
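As an illustration, the Outbound Transport Settings for the Amazon S3 Gateway use the same name-value format as the other services. The keys below are taken from the table above; the values are placeholders and must be replaced with the settings that apply to your bucket.
{
  "ServerSideEncryption": "AES256",
  "StorageClass": "STANDARD",
  "ACL": "bucket-owner-full-control"
}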
5.17.1.2 - Mount file system out-of-band service
The additional fields for the NFS and CIFS are as seen in the following figure.
The following table describes the additional fields relevant for the Mounted File System service.
Field | Sub-Field | Description | Notes |
---|---|---|---|
File Mapping | List of source and target files that the service will process. | ||
Source | Regex logic that includes the source path where data that needs to be protected is stored along with the filter to identify specific files. For example, if you set (.*\/)input\/(.*) as the value, all the files in the input folder will be selected for processing. | Click Test Regex to verify if the regex expression is valid. | |
Target | Regex logic that includes the target path where processed data is stored along with other identifiers, such as appending an additional tag. For example, if you set \1output/\2.processed as the value, the processed files are moved to the output folder with .processed appended to them. | Click Test Regex to verify if the regex expression is valid. |
Streaming | Enabling streaming lets you process a payload in smaller chunks that are broken based on the delimiters defined and processed as they are chunked. With streaming, you no longer have to wait for the entire payload to be processed before it is transmitted. List of file processing delimiters to process files using streaming. | The Text, CSV, and Binary payloads are supported. If you want to use XML/JSON payload with streaming, ensure you use the Text payload for the extract rule. |
File Key | Regular Expression to look for in the payload before applying streaming (e.g. \.csv$). Streaming is applied only to requests where File Key matches the regex pattern. | Click Test Regex to verify if the regex expression is valid. | |
Delimiter | Regular Expression used to delimit stream. Rules will be invoked on delimited streams. | Click Test Regex to verify if the regex expression is valid. If the delimiter value is not matched, then the data will be processed in non-streaming mode. | |
Error Metrics Logging | Log the metrics for errors, such as the total number of errors, error offset, reason for the error, and so on. | |
Enabled | Enable or disable error metrics to be logged in the log file. | ||
Log level | | |
The following example snippet describes the format for the Outbound Transport Settings field for NFS service:
{
"filePermissions":"770",
"createMissingDirectory":"true"
}
The options for the Outbound Transport Settings field are described in the following table.
Options | Description | Default (if any) |
---|---|---|
filePermissions | Set the file permissions. Note: This setting applies only to the NFS service. | n/a |
createMissingDirectory | Set to true if you want to create lock, error, and output directory automatically. | n/a |
Note: Before you start using the NFS/CIFS Tunnel or Service, ensure that the rpcbind service is running on the NFS/CIFS server.
5.17.1.3 - REST API
The fields for the REST API service are as seen in the following figure.
The following table describes the additional fields for the REST API Gateway service.
Field | Sub-Field | Description | Default (if any) | Notes |
---|---|---|---|---|
Dynamic Learn Mode Header | The header that will be used to send a request to enable the learn mode for a particular URI. | |||
Dynamic Streaming Configuration* | HTTP header that will be used to send a request. | |||
Streaming | Enabling streaming lets you process a payload in smaller chunks that are broken based on the delimiters defined and processed as they are chunked. With streaming, you no longer have to wait for the entire payload to be processed before it is transmitted. The chunk size must be entered in bytes. List of file processing delimiters to process files using streaming. | Chunk size - 65536 | The Text, CSV, and Binary payloads are supported. If you want to use XML/JSON payload with streaming, ensure you use the Text payload for the extract rule. | |
Authentication Cache Timeout | Define the amount of time for which the username and password in the REST request is stored in cache. | 900 seconds | ||
Asynchronous Client Configuration | If streaming is enabled and you plan to use an asynchronous HTTP client, then these settings must be configured. The DSG is optimized to handle asynchronous requests. | This parameter is applicable only with REST streaming. | ||
HTTP Async Client Enabled | Select to enable when an asynchronous HTTP client will send requests to the DSG. | False | The HTTP Async Client Header Name header must be sent as part of the HTTP request for the DSG to understand that the incoming requests are sent from an asynchronous client. If the header is not sent as part of the request, then the DSG assumes that the request is sent from a synchronous client. This parameter is applicable only with REST streaming. |
HTTP Async Client Header Name | Provide the header name that must be set in an HTTP request in the client such that DSG understands that the request is sent from an asynchronous HTTP client. For example, if the header name is set to X-Protegrity-Async-Client in the service, then when a request is sent to the DSG, the header value must be set to either ‘yes’, ’true’, or ‘1’. | This parameter is applicable only with REST streaming. | ||
Error Metrics Logging | Log the metrics for errors, such as the total number of errors, error offset, reason for the error, and so on. | | |
Enabled | Enable or disable error metrics to be logged in the log file. | |||
Log level | | Ensure that the log level you select is the same or part of a higher log subset that you defined in the gateway log level. | |
Error | Set one HTTP status code for the errors that may occur in the file while processing it. For the list of supported HTTP status codes, refer to the section Configuring the HTTP Status Codes. | | |
* The dynamic streaming configuration can be explained as follows:
If you want to send dynamic requests to enable streaming on a given URI, you can use this field. Consider an example, where you set this value as X-Protegrity-Rest-Header. When you send an HTTP request with the X-Protegrity-Rest-Header header value, DSG will begin the data protection for that URI based on the parameters provided in the request.
A typical format for the value in the header is as follows:
"{"streaming":{"uri":"/echo","delimiter":"(?ms)(^.*\\r?\\n)", "chunk_size": 5000}}"
Parameter | Description | Default | Notes |
---|---|---|---|
delimiter | Regular Expression used to delimit the stream. Rules will be invoked on delimited streams. | (?ms)(^.*\\r?\\n) | If the delimiter value is not matched, then the data will be processed in non-streaming mode. |
uri | Regular Expression to look for in the payload before applying streaming (e.g. \.csv$). Streaming is applied only to requests where the URI matches the regex pattern. | | |
chunk_size | Size of the smaller chunks that the data must be broken into. The chunk size must be entered in bytes. | 65536 |
Note: The delimiter parameter must be sent as part of the HTTP header information. The uri and chunk_size parameters are optional. If uri is not provided, the request URI is considered, while if the chunk_size is not provided, the chunk size defined in HTTP tunnel configuration is considered.
5.17.1.4 - Secure Web socket (WSS)
In the DSG, the WSS service can be used by configuring the HTTP Tunnel. The WSS service is designed for listening to traffic on HTTP and HTTPS ports 80 and 443 respectively.
Caution: In this release, the DSG uses the WSS service to pass through data as-is without performing any data protection operation, such as, protect, unprotect, and reprotect. You cannot invoke any child rules using the WSS service.
The fields for the WSS Gateway service are as seen in the following figure.
The following table describes the additional fields for the WSS Gateway service.
Field | Sub-Field | Description | Default (if any) |
---|---|---|---|
URI | List the required URI to receive the request. | ||
Origin Checking | Checks the websocket handshake origin header. | ||
Auto Handle Domain Name Rewrite | Adds the domain name, rewrites the filters and the rules that replace the host name in the forwarded requests or responses as per the target or source hostname. | ||
Outbound Transport Settings | Name-Value pairs used with the outbound transport. | ||
Authentication Cache Timeout | Define the amount of time for which the username and password in the REST request is stored in cache. | 900 seconds |
5.17.1.5 - SFTP gateway
The SFTP Gateway service can be implemented with either Password authentication or Public Key exchange authentication.
The fields for the SFTP Gateway service are as seen in the following figure.
The additional fields for the SFTP Gateway service when authentication method is Public Key are as seen in the following figure.
Before you begin
Ensure that the following prerequisites are complete before you start using the SFTP gateway with the Public Key authentication method.
- The SFTP client Public Key must be available and uploaded to the Certificates screen in the ESA Web UI.
- The DSG Public Key and Private Key must be generated and uploaded to the Certificates screen in the ESA Web UI.
- The DSG Public Key must be uploaded to the SFTP server.
- Ensure that the DSG Public Key is granted 644 permissions on the SFTP server.
- The DSG supports RSA keys. Ensure that only RSA keys are uploaded to the ESA/DSG Web UI.
The following table describes the additional fields relevant for the SFTP Gateway service.
The SFTP tunnel automatically sets the user identity with an authenticated username. Thus, subsequent calls to Protegrity Data Protection transformation actions are done on behalf of the authenticated user.
The following SFTP commands are not supported:
- df
- chgrp
- chown
Field | Sub-Field | Description | Default (if any) | Notes |
---|---|---|---|---|
Streaming | Enabling streaming lets you process a payload in smaller chunks that are broken based on the delimiters defined and processed as they are chunked. With streaming, you no longer have to wait for the entire payload to be processed before it is transmitted. List of file processing delimiters to process files using streaming. | Chunk size - 64 KB. If you want to change the chunk size, modify the chunk_size parameter in the Inbound Settings for the tunnel. | The Text, CSV, and Binary payloads are supported. If you want to use XML/JSON payload with streaming, ensure you use the Text payload for the extract rule. | |
Filename | Regular Expression to look for in the payload before applying streaming (e.g. \.csv$). Streaming is applied only to requests where URI matches the regex pattern. | Click Test Regex to verify if the regex expression is valid. | ||
Delimiter | Regular Expression used to delimit stream. Rules will be invoked on delimited streams. | Click Test Regex to verify if the regex expression is valid. If the delimiter value is not matched, then the data will be processed in non-streaming mode. | ||
User Authentication Method | SFTP authentication method used to communicate between client and server. | |||
Password | Enables password authentication for communication. You must enter the password, when prompted, while initiating a connection with the SFTP server. | | |
Public Key | Enable the Public Key method for communication. The SFTP client shares its Public Key with the gateway and the gateway shares its Public Key with the SFTP server. This enables password-less communication between the SFTP client and server when the gateway is the intermediary. Ensure that the prerequisites are completed before you start using the SFTP gateway. | | |
Inbound Push Public Keys file | Specifies the file name for the SFTP client Public Key. | |||
Outbound Push Private Key file | Specifies the file name for the Gateway Private Key. | |||
Outbound Push Private Keys file passphrase | Enter the passphrase for the DSG Private Key. If no value is entered for encrypting the private key, the passphrase value is null. | | |
Outbound Transport Settings | Additional outbound settings that you want to parse during SFTP communication. |
The options for the Outbound Transport Settings field in the SFTP Gateway are described in the following table.
Options | Description | Default (if any) |
---|---|---|
window_size | SSH Transport window size. The datatype for this option is bytes. | 3145728 |
use_compression | Toggle SSH transport compression. | TRUE |
max_request_size | Set the maximum size of the message that is sent during transmission of a file. The maximum limit for servers that accept a message size larger than the default value is 250 KB. | 32768 |
enable_setstat | Set to False when using the AWS Transfer for SFTP as the SFTP server. | True |
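A hypothetical example of the Outbound Transport Settings for the SFTP Gateway is shown below; the values simply restate the defaults from the table above (with enable_setstat set to false for AWS Transfer for SFTP) and should be adjusted for your environment.
{
  "window_size": 3145728,
  "use_compression": true,
  "max_request_size": 32768,
  "enable_setstat": false
}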
5.17.1.6 - SMTP gateway
The SMTP Gateway service provides options that must be configured to define the level of extraction that must be performed on the incoming requests on the DSG. Based on the requirements, data security operations are performed on the extracted sensitive data.
The fields for the SMTP Gateway service are as shown in the following figure.
The following table describes the additional fields for the SMTP Gateway service.
Field | Sub-field | Description |
---|---|---|
Hostnames | ||
Host Address | Hostname or the IP address for an inbound request received by the gateway. The service IP of the DSG must be specified. For example, secured-smtp.abc.com . | |
Forwarding Address | Hostname or the IP address for an outbound request forwarded by the gateway. The hostname or IP address of the SMTP server must be specified. For example, smtp.abc.com . | |
Outbound Transport Settings | Name-Value pairs used with the outbound transport. |
The ssl_options supported for the Outbound Transport Settings in the SMTP Gateway are described in the following table.
Options | Description | Default |
---|---|---|
certfile | Path of the certificate stored in the DSG to be sent to the SMTP server. | n/a |
keyfile | Path of the key stored in the DSG to be sent to the SMTP server. | n/a |
cert_reqs | Specifies whether a certificate is required for validating the TLS/SSL connection between the DSG and the SMTP server. | CERT_NONE |
ssl_version | Specifies the SSL protocol version used for establishing the SSL connection between the DSG and the SMTP server. | PROTOCOL_SSLv23 |
ciphers | Specifies the list of supported ciphers. | ‘ECDH+AESGCM’,‘DH+AESGCM’,‘ECDH+AES256’,‘DH+AES256’,‘ECDH+AES128’,‘DH+AES’,‘RSA+AESGCM’,‘RSA+AES’ |
ca_certs | Path where the CA certificates (in PEM format only) are stored. | n/a |
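As an illustration, the ssl_options might be supplied in the Outbound Transport Settings as shown below; the certificate and key paths are placeholders, and the exact nesting under the outbound transport settings should be verified for your DSG release.
{
  "ssl_options": {
    "certfile": "/path/to/dsg_certificate.pem",
    "keyfile": "/path/to/dsg_key.pem",
    "ca_certs": "/path/to/ca_certs.pem",
    "cert_reqs": "CERT_NONE"
  }
}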
5.17.2 - Profile
Based on the data type, rules can be grouped in a single profile. This grouping assists you in managing the rules created for the data type.
You can also refer to another profile in a RuleSet when executing a rule as a part of an action.
The following figure illustrates the fields for a Gateway Profile.
The following table describes the fields for the Gateway Profile.
Field | Description |
---|---|
Name | Unique name for the Profile. |
Description | Description for the Profile. |
Enabled | Enable or disable the Profile. |
5.17.3 - Actions
The following are the different types of actions in DSG:
- Error
- Exit
- Extract
- Log
- Profile Reference
- Set User Identity
- Set Context Variable
- Transform
- Dynamic Rule Injection
5.17.3.1 - Error
In the HTTP protocol, you can send custom response messages for requests with invalid content, while for other protocols, such as SFTP or SMTP, the connection is terminated.
The fields for the Error action are as seen in the following figure.
The following table describes the fields applicable to the Error action.
Field | Description |
---|---|
Message | Add a custom response message for any invalid content, using one of the following options.
|
The Error action type must always be created as a leaf node - a rule without any child nodes.
5.17.3.2 - Exit
The Exit option acts as a terminating action and the rules are not processed further. The exit option must always be created as a leaf node, a rule without a child node.
The fields for the Exit action are as seen in the following figure.
The following table describes the fields for the Exit Action.
Field | Description |
---|---|
Has to Match | Regular Expression for the input to match to Exit. |
Must Not Match | Negative Regular expression for the above. |
Scope* | Specifies the scope of the Exit Rule. |
* The following are the available options for the Scope field.
- Branch: Stop processing the child rules.
- Profile: Stop processing the rules under the same profile.
- RuleSet: Stop processing all the rules.
5.17.3.3 - Extract
The Extract action defines the payloads supported by the DSG.
The following payloads are supported in DSG.
Adobe Action Message Format (AMF)
Binary
Character Separated Values (CSV)
Common Event Format (CEF)
eXtensible Markup Language (XML)
eXtensible Markup Language (XML) with Tree-of-Trees (ToT)
Fixed Width
HTML Form Media Type (X-WWW-FORM-URLENCODED)
HTTP Message
JavaScript Object Notation (JSON)
JavaScript Object Notation (JSON) with Tree-of-Trees (ToT)
Multipart Mime
Microsoft Office 2007 Excel Document
Microsoft Office 2013 Document
Adobe Portable Document Format (PDF)
Enhanced Adobe Portable Document Format (PDF)
Protocol Buffer (protobuf)
Secured File Transfer
Amazon S3 Object
SMTP Message
Text
Uniform Resource Locator
User Defined Extraction
Note: A JWT token that is base64 encoded without padding characters can only be extracted using the UDF extraction rule.
ZIP Compressed File
5.17.3.3.1 - Adobe Action Message Format
This payload extracts AMF format from the request and lets you define regex to control precise extraction.
The fields for Adobe Action Message Format (AMF) payload are as seen in the following figure.
The properties for the AMF payload are explained in the following table.
Field | Description |
---|---|
Method* | Specifies the method of extraction for AMF payloads. |
Pattern | Regular Expression pattern to match and extract from the string value of the AMF payload. |
* The following options are available for the Method field.
- Serialize: Configure the AMF payload only to be exposed in learn mode. This will be useful for debugging while creating rules for learn mode.
- Serialized String Value: Configure the AMF payload as string and extract the matched Pattern.
- String Value: Configure the data using the matched Pattern. The data is not serialized ahead of the pattern matching.
- String Value by Key Name: The data is expected to come in key-value pairs. The parameters are matched using the Pattern. The value for the matched parameter is extracted.
5.17.3.3.2 - Amazon S3 Object
This payload extracts Amazon S3 object from the request and lets you define regex to control precise extraction. It is generally used with the Amazon S3 service.
The following figure illustrates the Amazon S3 Object payload fields.
The properties for the Amazon S3 Object payload are explained in the following table.
Properties | Description |
---|---|
Object Key | Regex logic to identify source object key to be extracted. |
Target Object | Object attribute that will be extracted from the following options.
|
5.17.3.3.3 - Binary Payload
This payload extracts binary data from the request and lets you define regex to control precise extraction.
The fields for Binary payload are as seen in the following figure.
The properties for the Binary payload are explained in the following table.
Field | Sub-Field | Description |
---|---|---|
Prerequisite Match Pattern | A regular expression to be searched for in the input is specified in the field. | |
Pattern | The regular expression pattern on which the extraction is applied is specified in this field. For example, if the text is “Hello World”, then the pattern would be “\w+”. | |
Pattern Group Id | The grouping number to extract from the Regular Expression Pattern. For example, for the text “Hello World”, the Group Id would be 0 to match the characters in the first group of the regex. | |
Profile Name | Profile to be used to perform transform operations on the matched content. | |
User Comments | Additional information related to the action performed by the group processing. | |
Encoding | The encoding method used while extracting binary payload. | |
Prefix | Prefix text to be padded before the protected value. This helps in identifying a protected text from a clear one. | |
Suffix | Suffix text to be padded after the protected value. Its use is the same as for the Prefix. | |
Padding Character | Characters added to pad the input to the minimum size required by the protection method. | |
Minimum Input length | Number of characters below which the input is considered too short for the protection method and is padded with the Padding Character. |
The following table describes the fields for Encoding.
Field | Description |
---|---|
Codec | Select the appropriate codec based on the selection of Encoding |
The following options are available for the Encoding field:
- No Encoding
- Standard
- External
- Proprietary
No Encoding
If the No Encoding option is selected, then no encoding is applied.
Standard
The Standard Encoding consists of built-in codecs of standard character encodings or mapping tables, including UTF-8, UTF-16, ASCII and more.
For more information about the complete list of encoding methods, refer to the section Standard Encoding Method List.
External
When external encoding is applied, you must select a codec.
The following table describes the codecs for the External encoding.
Codec | Description |
---|---|
Base64 | Binary-to-text encoding to represent binary data in ASCII format. |
HTML Encoding | Replaces the special characters "&", "<", and ">" with HTML-safe sequences. |
JSON Escape | Escapes special JSON characters, such as the double quote ("), in JSON string values to make them JSON-safe sequences. |
URI Encoding | RFC 2396 Uniform Resource Identifiers (URI) requires each part of a URL to be quoted. It does not encode '/'. |
URI Encoding Plus | Similar to URI Encoding, except that ' ' is replaced with '+'. |
XML Encoding | Escapes &, <, and > in a string of data, and then quotes it for use as an XML attribute value, producing XML-safe sequences. |
Quoted Printable | Converts to/from the quoted-printable transport encoding as per RFC 1521. |
SQL Escape | Performs SQL statement string escaping by replacing a single quote (') with two single quotes ('') and a double quote (") with two double quotes (""). |
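Several of these codec names correspond to encodings available in Python's standard library. The following sketch is purely illustrative, showing the kind of transformation each codec performs on a small sample string; it is not the DSG code path.

```python
import base64
import html
import json
import quopri
import urllib.parse
from xml.sax.saxutils import quoteattr

raw = 'Joe & "Co" <sales/orders>'

print(base64.b64encode(raw.encode()).decode())     # Base64: binary-to-text ASCII representation
print(html.escape(raw, quote=False))               # HTML Encoding: &, <, > become entities
print(json.dumps(raw))                             # JSON Escape: quotes escaped inside a JSON string
print(urllib.parse.quote(raw))                     # URI Encoding: quotes each part, leaves '/' as-is
print(urllib.parse.quote_plus(raw))                # URI Encoding Plus: same, but ' ' becomes '+'
print(quoteattr(raw))                              # XML Encoding: escaped and quoted attribute value
print(quopri.encodestring(raw.encode()).decode())  # Quoted Printable per RFC 1521
```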
Proprietary
When the Proprietary encoding is selected, the associated codecs are displayed.
The following table describes the codecs for the Proprietary encoding.
Codec | Description |
---|---|
Base128 Unicode CJK | Base128 encoding in Chinese, Japanese and Korean characters. |
High ASCII | Character encodings for eight-bit or larger character sets. |
The following encryption methods are not supported for the High ASCII codec and the Base128 Unicode CJK codec:
- AES-128
- AES-256
- 3DES
- CUSP AES-128
- CUSP AES-256
- CUSP 3DES
- FPE NIST 800-38G Unicode (Basic Latin and Latin-1 Supplement Alpha)
- FPE NIST 800-38G Unicode (Basic Latin and Latin-1 Supplement Alpha-Numeric)
The following tokenization data types are not supported for the High ASCII codec and the Base128 Unicode CJK codec:
- Binary
- Printable
The input data for the Base128 Unicode CJK and High ASCII codecs must contain only ASCII characters. For example, if input data consisting of non-English characters is tokenized using the Alpha tokenization, then the Alpha tokenization treats the non-English characters as delimiters and the tokenized output will include the non-English characters. As a result, the protection or unprotection operation will fail.
5.17.3.3.4 - CSV Payload
This payload extracts CSV format from the request and lets you define regex to control precise extraction.
With the Row and Column Index, you can define how the column positions are calculated. For example, consider the CSV input provided in the following snippet. With 0-based indexing, the first column, first, begins at field 0 in the row and the next column, ex5, begins at field 1. If you choose to use 1-based indexing, the first column begins at 1 and subsequent fields are 2, 3, and so on. Based on these definitions, you can define the rule and its properties.
first, ex5, last, pick, ex6, city, ex1, ex2
John, wwww, Smith, mister, wwww, stamford, 333, 444
Adam, wwww, Martin, mister, wwww, fairfield, 333, 444
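To illustrate the indexing only (the DSG performs this parsing internally), the following sketch reads the snippet above with Python's csv module; with 0-based indexing the first column is at index 0, ex5 at index 1, and so on.

```python
import csv
import io

sample = """first, ex5, last, pick, ex6, city, ex1, ex2
John, wwww, Smith, mister, wwww, stamford, 333, 444
Adam, wwww, Martin, mister, wwww, fairfield, 333, 444
"""

reader = csv.reader(io.StringIO(sample), skipinitialspace=True)
header = next(reader)              # header row: first=0, ex5=1, last=2, ...
city_index = header.index("city")  # 5 with 0-based indexing (6 with 1-based)
for row in reader:
    print(row[city_index])         # stamford, fairfield
```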
It is recommended to use the External Base64 encoding method in the Transform action type for the CSV codec. If the Standard Base64 method is used, then additional newline feeds are generated in the output.
The CSV implementation in the DSG does not support the following:
- Fields that contain line breaks.
- Escaping double quotes that appear inside a field by preceding them with another double quote, when double quotes are used to enclose fields.
The fields for the CSV payload are as seen in the following figure.
If the CSV input includes non-ASCII or Unicode data, then the Binary extract rule must be used before the CSV extract rule.
If the CSV input file includes non-printable special characters, then to transform the data successfully, the user must add the csv-bytes-parsing parameter in the features.json file.
To add the parameter in the features.json file, perform the following steps.
- Log in to the ESA Web UI.
- Navigate to Settings > System > Files.
- Open the features.json file for editing.
- Add the csv-bytes-parsing parameter in the features.json file. The csv-bytes-parsing parameter must be added in the following format:
{ "features": [ "csv-bytes-parsing" ] }
The properties for the CSV payload are explained in the following table.
Properties | Sub-Field | Description | Additional Information |
---|---|---|---|
Line Separator | Separator that defines where a new line begins. | ||
Skip Lines Matching Pattern | Regex pattern that defines the lines that need to be skipped. | For example, consider the following lines in the file: User, Admin, Full Access, Viewer Partial Access, User, Viewer, Admin No Access, Viewer, User, Root No Acess, Partial Access, Root, Admin
| |
Preserve Number of Columns | Select to check whether the number of columns is equal to the number of column headers in a CSV file. If there is a mismatch between the actual number of columns and the number of column headers, then the rule stops processing further and an error appears in the log. If you clear this check box and a mismatch is detected, then the rule still continues to process the data and a warning appears in the log. | If the check box is selected, ensure that the data does not contain two or more consecutive Line Separators. For example, if the Line Separator is set to \n, the following syntax must be corrected: name, city, pin\n Joe, NY, 10\n Smith, LN, 12\n \n Remove the consecutive occurrences of \n. | |
Row and Column Index | Select 0 if row and column counting begins at 0 or 1 if it begins at 1. | 0 | |
Header Line Number | Line number with column headers. |
| |
Data Starts at Line | Line number from which the data begins. | Value calculated as Header Line Number +1 | |
Column Separator | Value by which the columns are separated. | ||
Columns | List of columns to be extracted and whose values the action is applied to. For example, consider a .csv file with multiple columns, such as SSN, Name, and so on, that need to be processed. | |
Column Name/Index | Column Name or index number of the column that will be processed. For example, with the Row and Column Index set to 1, if the name of the first column is “Name”, the value in Column Name/Index would be either 1 or Name. With the Row and Column Index set to 0, the value would be either 0 or Name. | |
Profile Name | Profile to be used to perform transform operations on the matched content. | ||
User Comments | Additional information related to the action performed by the column processing. | ||
Text Qualifier | Pattern that allows cells to be combined. | ||
Pattern | Pattern that applies to the cells, after the lines and columns have been separated. | ||
Advanced Settings | Define the quote handling for unbalanced quotes in CSV records.
| If quoteHandlingMode is set as DEFAULT, the unbalanced quotes are balanced. However, if the quote is followed by a string, the unbalanced quotes are not corrected by the DSG. For example, in the following CSV text, the quotes are not balanced by DSG: 'Joe,03/11/2024 or "Joe,13/11/2024 The output of this entry remains unchanged. |
5.17.3.3.5 - Common Event Format (CEF)
If you want to protect fields that are part of a CEF log file, you can use the CEF payload to extract the required fields.
The properties for the Common Event Format (CEF) payload are explained in the following table.
Properties | Sub-Field | Description |
---|---|---|
Line Separator | Regex pattern to identify field separation. | |
Fields | CEF names and profile references must be selected. | |
Field Name | Comma separated list of CEF key names that need to be transformed (protected or unprotected). | |
Profile Name | Profile to be used to perform transform operations on the matched content. | |
User Comments | Additional information related to the action performed by the column processing. |
5.17.3.3.6 - XML Payload
This payload extracts the XML format content from the request and lets you extract the exact XML element value with it.
The fields for the XML payload are as seen in the following figure.
The properties for the XML payload are explained in the following table.
Properties | Description |
---|---|
XPath List | The XML element value to be extracted is specified in this field. Note: Ensure that you enter the XPath by following proper syntax for extracting the XML element value. If you enter incorrect syntax, then the service which has this XML payload definition in the rule fails to load and process the request. |
Advance XML Parser options* | Configure advanced parsing parameter options for the XML payload. This field accepts parsing options in the JSON format. The parsing options are of the Boolean data type. For example, the parsing parameter, remove_comments , accepts the values as true or false . |
* The Advance XML Parser options field provides the following parsing parameters that can be configured.
Options | Description | Default |
---|---|---|
remove_blank_text | Boolean value used to remove the whitespaces for indentation in the XML payload. | False |
remove_comments | Boolean value used to remove comments from the XML payload. In the XML format, comments are entered in the <!-- --> tag. | False |
remove_pis | Boolean value used to remove Processing Instructions (pi) from the XML payload. In the XML format, processing instructions are entered in the <? -- ?> tag. | False |
strip_cdata | Boolean value used to replace content in the cdata, Character data, or tag by normal text content. | True |
resolve_entities | Boolean value used to replace the entity value by their textual data value. | False |
no_network | Boolean value used to prevent network access while searching for external documents. | True |
ns_clean | Boolean value used to remove redundant namespace declarations. | False |
Consider the following example to understand the Advance XML Parser options available in the XML codec. In this example, a request is sent from a client to remove the whitespaces between the XML tags from a sample XML payload in the message body of the HTTP/REST request. The following Ruleset is created for this example.
Create an extract rule for the HTTP message payload using the default RuleSet template defined under the REST API service.
Consider the following sample XML payload in the HTTP message body.
<?xml version = "1.0" encoding = "ASCII" ?>
<class_list>
   <!--Students grades are uploaded by months-->
   <student>
      <name>John Doe</name>
      <grade>A</grade>
   </student>
</class_list>
In the example, a lot of white space is used for indentation. The payload contains spaces, carriage returns, and line feeds between the `<class_list>`, `<student>`, and `<name>` XML tags. The extract rule for extracting the HTTP message body is as seen in the following figure.
Under the Extract rule, create another child rule to extract the XML payload from the HTTP Message. In this child rule, provide `/class_list/student/name` in the XPath List field to parse the XML payload, and set the remove_blank_text parameter to true, in the JSON format, in the Advance XML Parser options field.
Under this extract rule, create another child rule to extract the sensitive data between the `<name>` and `</name>` tags. The fields for this child extract rule are as seen in the following figure.
Under the extract rule, create a transform rule to protect the sensitive data between the `<name>` and `</name>` tags using Regex Replace with a pattern `xxxxx`. The fields for the transform rule are as seen in the following figure.
Click Deploy or Deploy to Node Groups to apply the configuration changes.
When a request is sent to the configured URI, the DSG processes the request and the following response appears with the whitespaces removed from the XML payload. In addition, the sensitive data between the `<name>` and `</name>` tags is protected.
<?xml version='1.0' encoding='ASCII'?>
<class_list><!--Students grades are uploaded by months--><student><name>xxxxx xxxxx</name><grade>A</grade></student></class_list>
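The parser option names match those of the widely used lxml XML parser, so the effect of remove_blank_text can be reproduced outside the DSG for comparison. A minimal sketch, assuming the lxml package is available (illustration only, not the DSG code path):

```python
from lxml import etree

xml_input = b"""<?xml version="1.0" encoding="ASCII"?>
<class_list>
   <!--Students grades are uploaded by months-->
   <student>
      <name>John Doe</name>
      <grade>A</grade>
   </student>
</class_list>"""

# remove_blank_text drops the whitespace-only text nodes used for indentation
parser = etree.XMLParser(remove_blank_text=True)
root = etree.fromstring(xml_input, parser)
print(etree.tostring(root).decode())
# <class_list><!--Students grades are uploaded by months--><student><name>John Doe</name>...</class_list>
```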
5.17.3.3.7 - Date Time Format
The Datetime format payload is used to convert custom datetime formats, which are not supported by the tokenization datetime or date data element, into a supported format that can be processed by the DSG.
Consider an example where you provide a datetime format, such as DD/MM/YYYY HH:MM:SS, as input to an Extract rule with the Datetime payload. The given format is not supported by the datetime or date tokenization data element. The Extract rule converts the format to an acceptable format, and a Transform rule protects the datetime. The Datetime payload then converts the protected value back to the input format and returns this value to the user.
When you request the DSG to unprotect the protected datetime value, an extract rule identifies the protected datetime value, and a subsequent transform rule unprotects the value and returns the original datetime format, which is DD/MM/YYYY HH:MM:SS.
Ensure that the input sent to the extract rule for Date Time extraction is in exactly the same input format as configured in the rule. If you are unsure of the input that might be sent to the extract rule, then thoroughly verify the Ruleset configuration before you roll out to production.
The following figure illustrates the Date Time format payload fields.
Before you begin:
Ensure that the following prerequisite is completed:
- The datetime data element defined in the policy on the ESA is used to perform the protect or unprotect operation.
The following table describes the fields for Datetime codec.
Field | Description |
---|---|
Input Date Time Format | Format in which the input is provided to the DSG. Note: This field accepts numeric values only in the input request sent to the DSG. |
Data Element Date Time Format | Format to which input must be converted. Note: Ensure that the Transform rule that follows the Extract rule uses the same data element that is used to configure the Date Time Format codec. |
Mode of Operation | Data security operation that needs to be performed. You can select Protect or Unprotect. Note: The mode of operation must be same as the data security operation that you want to perform in the Transform rule. |
DistinguishableDate* | Select this check box if the data element used to protect the datetime includes this setting. |
*These fields appear only when Unprotect is selected as Mode of Operation.
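As an illustration of the conversion described above, the sketch below re-parses and re-formats a timestamp with Python's datetime module. It assumes that DD/MM/YYYY HH:MM:SS corresponds to the %d/%m/%Y %H:%M:%S notation and that YYYY-MM-DD HH:MM:SS is the supported target format; both formats are examples only, and the real formats are configured in the rule.

```python
from datetime import datetime

INPUT_FORMAT = "%d/%m/%Y %H:%M:%S"    # DD/MM/YYYY HH:MM:SS as received by the Extract rule
TARGET_FORMAT = "%Y-%m-%d %H:%M:%S"   # assumed data-element-friendly format (example only)

value = "09/05/2018 11:48:47"
converted = datetime.strptime(value, INPUT_FORMAT).strftime(TARGET_FORMAT)
print(converted)                      # 2018-05-09 11:48:47, handed to the Transform rule

# After the Transform rule returns, the value is rendered back in the input format
restored = datetime.strptime(converted, TARGET_FORMAT).strftime(INPUT_FORMAT)
print(restored)                       # 09/05/2018 11:48:47
```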
5.17.3.3.8 - XML with Tree-of-Trees (ToT)
The XML with Tree-of-Trees (ToT) codec extracts the XML elements defined in the XPath field. The XML with ToT codec allows you to process multiple XML elements in a single extract rule.
The fields for the XML with ToT payload are as seen in the following figure.
To understand the XML with ToT payload, consider the following example where the student details, such as, name, age, subject, and gender can be sent as a part of the request. In this example, the XML with ToT rule extracts and protects the name and the age element.
<?xml version='1.0' encoding='UTF-8'?>
<students>
<student>
<name>Rick Grimes</name>
<age>35</age>
<subject>Maths</subject>
<gender>Male</gender>
</student>
<student>
<name>Daryl Dixon </name>
<age>33</age>
<subject>Science</subject>
<gender>Male</gender>
</student>
<student>
<name>Maggie</name>
<age>36</age>
<subject>Arts</subject>
<gender>Female</gender>
</student>
</students>
The following figure illustrates one extraction rule for multiple XML elements.
In Figure 14-25, the XML ToT Extract rule extracts two different XML elements, name and age. The `/students/student/name` path extracts the name element and protects it with the transform rule. Similarly, the `/students/student/age` path extracts the age element and protects it with the transform rule. The same data element is used to protect both XML elements; you can use different data elements to transform the XML elements as per your requirements. It is recommended to use the Profile reference from the drop-down that appears in the Profile Name field. This helps to process the extraction and transformation in one rule. In addition, it reduces the transform overhead of defining one element at a time for the same XML file. If the Profile Name field is left empty, then the extracted value is passed to the child rule for transformation.
For more information about profile referencing, refer to the section Profile Reference.
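For reference, the two XPaths in this example select the name and age nodes of every student. The sketch below uses lxml (assumed to be installed) on a trimmed copy of the sample document purely to show which values the paths address; it is not the DSG processing itself.

```python
from lxml import etree

xml_input = b"""<students>
  <student><name>Rick Grimes</name><age>35</age><subject>Maths</subject></student>
  <student><name>Maggie</name><age>36</age><subject>Arts</subject></student>
</students>"""

root = etree.fromstring(xml_input)
# The same XPaths that are configured in the XML ToT rule
for name, age in zip(root.xpath("/students/student/name"),
                     root.xpath("/students/student/age")):
    print(name.text, age.text)   # Rick Grimes 35, Maggie 36
```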
The properties for the XML with ToT payload are explained in the following table.
Properties | Subfield | Description |
---|---|---|
XPaths with Profile Reference | Define the required XPath and Profile reference. Note: Ensure that you enter the XPath by following the required syntax for extracting the XML element value. For example, in Figure 14-25, the /students/student/name path is defined for the name element; follow the same syntax to extract the XML element. If you enter incorrect syntax, then the defined rule is disabled. | |
XPath | Define the required XML element. | |
Profile Name | Select the required transform rule. | |
User Comments | Add additional information for the action performed if required. | |
Advance XML Parser options | Configure advanced parsing parameter options for the XML payload. This field accepts parsing options in the JSON format. The parsing options are of the Boolean data type. For example, the parsing parameter, remove_comments , accepts the values as true or false .Note: The Advance XML Parser options that apply to the XML codec also apply to the XML with ToT codec. For more information about the additional XML Parser, refer to the Table: Advance XML Parser. |
5.17.3.3.9 - Fixed Width
In scenarios where the input data is sent to DSG in a fixed width format, the Fixed Width codec is used. In a fixed width input, the data columns are specified in terms of exact column start character offset and fixed column width in terms of number of characters that define column width.
For example, consider a fixed width input as provided in the following snippet. The Name column begins at the 0th character in a row, has a fixed width of 20 characters and is padded with spaces until the next column Number begins. The Number column begins at the 20th character in a row and has a fixed width of 12 characters.
With the Row and Column Index, you can define how the column positions are calculated. If you choose to use 1-based indexing, the Name column begins at 1 and, with a fixed width of 20 characters, the subsequent column begins at the 21st character. If you use 0-based indexing, the Name column begins at 0 and, with a fixed width of 20 characters, the subsequent column begins at the 20th character. Based on these definitions, you can define the rule and its properties.
Name Number
John Smith 418-Y11-4111
Mary Hartford 319-Z19-4341
Evan Nolan 465-R45-4567
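Purely to illustrate the offset-and-width arithmetic (the DSG performs this internally), the following sketch slices the Number column out of the sample rows above, using a 0-based column position of 20 and a width of 12.

```python
rows = [
    "John Smith          418-Y11-4111",
    "Mary Hartford       319-Z19-4341",
    "Evan Nolan          465-R45-4567",
]

COLUMN_POSITION = 20   # Number column starts at character offset 20 (0-based)
COLUMN_WIDTH = 12      # fixed width of the Number column

for row in rows:
    number = row[COLUMN_POSITION:COLUMN_POSITION + COLUMN_WIDTH]
    print(number)      # the value that would be passed to the transform profile
```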
The fields for the Fixed Width payload are as seen in the following figure.
Note: If the input file includes non-printable special characters, then to transform the data successfully, the user must add the fw-bytes-parsing parameter in the features.json file.
To add the parameter in the features.json file, perform the following steps.
- Log in to the ESA Web UI.
- Navigate to Settings > System > Files.
- Open the features.json file for editing.
- Add the fw-bytes-parsing parameter in the features.json file. The fw-bytes-parsing parameter must be added in the following format:
{ "features": [ "fw-bytes-parsing" ] }
The properties for the Fixed Width payload are explained in the following table.
Properties | Sub-Field | Description |
---|---|---|
Line Separator | Separator that defines where a new line begins. | |
Skip Lines Matching Pattern | Regex pattern that defines the lines that need to be skipped. For example, consider the following lines in the file: User, Admin, Full Access, Viewer Partial Access, User, Viewer, Admin No Access, Viewer, User, Root No Acess, Partial Access, Root, Admin
| |
Preserve Input Length | Select to perform a check for the input and output length. If a mismatch is detected, then the rule stops processing further and an error appears in the log. If you clear this check box and a mismatch is detected, then the rule still continues to process the data and a warning appears in the log. | |
Row and Column Index | Select 0 if row and column counting begins at 0 or 1 if it begins at 1. | |
Data Starts at Line | Line number from which the data begins. | |
Fixed Width Columns | ||
Column Position | Column position where the data begins. For example, if you are protecting the first column with 20 characters fixed width, then the value in this field will be 0. This value differs based on the Row and Column Index defined. For example, if you choose to use 0-based indexing, then the first column begins at 0, and the value in this field will be 0. | |
Column Width | The fixed width of the column that must be protected. For example, if you are protecting the first column with 20 characters fixed width, then the value in this field will be 20. | |
Profile Name | Profile to be used to perform transform operations on the matched content. Note: Ensure that the data element used to perform the transform operation is of the Length Preserving type. | |
User Comments | Additional information related to the action performed by the column processing. |
5.17.3.3.10 - HTML Form Media Payload
This payload extracts HTML form media format from the request and lets you define regex to control precise extraction.
The fields for the HTML Form Media Payload (X-WWW-FORM-URLENCODED) payload are as seen in the following figure.
The properties for the X-WWW-FORM-URLENCODED payload are explained in the following table.
Properties | Description |
---|---|
Name | The regular expression to match the parameter name is specified in this field. |
Value | The value to be extracted is specified in this field. |
Target Object | The parameter object to be extracted is specified in this field. |
Encoding Mode | Encoding mode that will be used for URI encoding handling. |
Encoding Reserve Characters | Characters other than uppercase and lowercase letters, underscore, dot, and hyphen. |
5.17.3.3.11 - HTTP Message Payload
This payload extracts HTTP message format from the request and lets you define regex to control precise extraction.
The following figure illustrates the HTTP Message payload fields.
The properties for the HTTP Message payload are explained in the following table.
Properties | Description | |
---|---|---|
HTTP Message Type | Type of HTTP Message to be matched. | |
Method | The value to be extracted is specified in this field. | |
Request URI | The regular expression to be matched with the request URI is specified in this field. | |
Request Headers | The list of name and value as regular expression to be matched with the request headers is specified in this field. | |
Message Body | The parameter object to be extracted is specified in this field. | |
Require Client Certificate | If checked, the client must present a certificate for authentication. If no certificate is provided, a 401 or 403 response appears. | |
Authentication | Authentication rule required for the rule to execute. Authentication mode can be none or basic authentication. | |
Target Object | The target message body to be extracted is specified in this field. The following Target Object options are available: | |
Client Certificate* | The following fields are displayed if the Client Certificate option is selected in the Target Object drop-down menu: | |
Attribute | The client certificate attributes to be extracted are specified in this field. The following attribute options are available:
| |
Value | Regular expression to identify the client certificate attributes to be extracted. The default value is (.*). | |
Target Object | The value or the attribute of the client certificate to be extracted is specified in this field. The following Target Object options are available:
|
5.17.3.3.12 - Enhanced Adobe PDF Codec
The Enhanced Adobe PDF codec extracts the PDF payload from the request and lets you define Regex to control precise extraction. This payload is available when the Action type is selected as Extract.
As part of the ruleset construction for this codec, it is mandatory to include a child Text extract rule under the Enhanced Adobe PDF codec extract rule. You must not use any other rule apart from the child Text extract rule under the Enhanced Adobe PDF codec extract rule.
In the DSG, some font files are already added to the /opt/protegrity/alliance/config/pdf_fonts directory. By default, the following font file is set in the gateway.json file.
"pdf_codec_default_font":{
"name": "OpenSans-Regular.ttf"
}
Note: The Advanced Settings can be used to configure the default font file for a specific rule.
If you want to process a PDF file that contains custom fonts, then upload them to the /opt/protegrity/alliance/config/pdf_fonts directory. If the custom fonts are not uploaded to this directory, then the OpenSans-Regular.ttf font file is used to process the PDF file.
For more information about how-to examples to detokenize a PDF, refer to the section Using Amazon S3 to Detokenize a PDF and Using HTTP Tunnel to Detokenize a PDF in the Protegrity Data Security Gateway How-to Guide.
The following figure displays the Enhanced Adobe PDF payload fields.
The properties for the Enhanced Adobe PDF payload are explained in the following table.
Note: The configurations in the Advanced Settings are only applicable for that specific rule.
Properties | Description |
---|---|
Pattern | Pattern to be matched is specified in this field. If no pattern is specified, then the whole input is considered for matching. |
Advanced Settings | Set the following additional configurations for the Enhanced Adobe PDF codec. Set the margins to determine whether text is treated as a line or a paragraph in the PDF file. Note: The {"layout_analysis_config" : {"char_margin": 0.1, "line_margin": 0.1}} settings can also be configured in the gateway.json file. Set the default font file to process the PDF file. |
Known Limitations
The following list describes the known limitations for this release.
- The Enhanced Adobe PDF codec does not support detokenization for sensitive data that splits into multiple lines. It is expected that the prefix, data to be detokenized, and the suffix are in a single line and do not break into multiple lines.
- The embedded fonts are not supported. Ensure that when you are uploading the fonts, the entire character set for that font family is uploaded to the DSG.
- The prefix and suffix used to identify the data to be detokenized must be unique and not a part of the data.
- The PDFs created with the rotate operator are not supported for detokenization.
- The Enhanced Adobe PDF codec does not process password protected PDFs.
- The detokenized data appears spaced out with extra white spaces.
5.17.3.3.13 - JSON Payload
This codec extracts the JSON element from the JSON request as per the JSONPATH defined.
Consider the following sample input that will be processed using the JSON codec to extract a unique JSON element:
{
"entities":[
{
"entity_type":"CostCenter",
"properties":{
"Id":"10097",
"LastUpdateTime":1455383881190,
"Currency":"USD",
"ApproveThresholdAmount":100000,
"AccountingCode":"5555",
"CostCenterAttachments":"{\"complexTypeProperties\":[]}"
}
}
],
"operation":"UPDATE"
}
In the Extract rule, assuming that the AccountingCode needs to be protected, the JSONPath that will be set is entities[*].properties.AccountingCode. Based on the input JSON structure, the JSONPath value differs.
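To show which element the path entities[*].properties.AccountingCode addresses, the following sketch walks a trimmed copy of the sample in plain Python (illustration only; the DSG evaluates the JSONPath itself).

```python
import json

payload = json.loads("""
{
  "entities": [
    {"entity_type": "CostCenter",
     "properties": {"Id": "10097", "AccountingCode": "5555"}}
  ],
  "operation": "UPDATE"
}
""")

# entities[*].properties.AccountingCode: the AccountingCode of every entity
for entity in payload["entities"]:
    print(entity["properties"]["AccountingCode"])   # 5555, the value the child rule receives
```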
The following figure illustrates the JSON payload fields.
The properties for the JSON payload are explained in the following table.
Properties | Sub-Field | Description |
---|---|---|
JSONPath | The JSON element value to be extracted is specified in this field. Note: Ensure that you enter the JSONPath by following the proper syntax for extracting the JSON element value. If you enter incorrect syntax, then the service that has this JSON payload definition in the rule fails to load and process the request. | |
Allow Empty String | Enable to pass values that are defined as only whitespaces, such as, value: " “, that are part of the JSON payload and continue processing of the sequential rules. If this check box is disabled, then the Extract rule does not process values that are defined as only whitespaces. | |
Preserve Element Order | Select to preserve the order of key-value pairs in the JSON response. | |
Fail Transaction | Select to fail transaction when an error is encountered during tokenization. The error might occur due to use of incorrect token element to protect input data. For example, when handling integer input data, the accurate token element would be an integer token element. The DSG uses tokenization logic to perform data protection. This logic can work to its optimum only if the correct token element is used to protect the input data. If you fail to perform careful analysis of your input data and identify the accurate token element that must be used, then it will result in issues when data is protected using the tokenization logic. To avoid this issue, it is recommended that before defining rules, analyze the input data and identify the accurate token element to be used to protect the data. For more information about identifying the token element that will best suit the input data, refer to Protegrity Protection Methods Reference Guide 9.0.0.0. | |
Minimize Output | Select to minify the JSON response. The JSON response is displayed in a compact form as opposed to the indented JSON response that the DSG sends. Note: It is recommended that this option is selected when the JSON input includes deeply nested key-value pairs. | |
Process Mode | Select to parse JSON data types. This field includes the following three options:
| |
Complex - Stringify | Select to process the complex JSON data type, such as, arrays and objects, to string values and serializing to JSON values before being passed to the child rule. This option is displayed by default. | |
Simple - Primitive | Select to process primitive data types, namely, string, int, float, and boolean. It does not support the processing of complex data types, such as, arrays and objects when it matches the JSON data type; the processing fails and an error message is displayed. | |
Complex - Recurse | Select to process the complex JSON data type and iterate through the JSON array or object recursively. |
The following table describes the additional configuration option for the Recurse mode.
Options | Description | Default |
---|---|---|
recurseMaxDepth | Maximum recursion depth that can be set for iterating matched arrays or objects. CAUTION: This parameter comes in effect only when the Complex - Recurse mode is selected. It is not supported for the Complex - Stringify and the Simple - Primitive modes. | 25 |
JSONPath Examples
This section provides guidance on the type of JSONPath expressions that DSG understands. This guidance must be considered before you define the acceptable JSONPath to be extracted when using the JSON codec.
The DSG supports the following operators.
CAUTION: The
$
operator is not supported.
Operator | Description | Example |
---|---|---|
* | Wildcard to select all elements in scope. | foo.*.baz ["foo"][*]["baz"] |
.. | Skip any number of elements in the path. | foo..baz |
[] | Access arrays or names with spaces in them. | ["foo"]["bar"]["baz"] array[-1].attr [3] |
array[1:-1:2] | Slicing arrays. | array[1:-1:2] |
=, >, <, >=, <= and != | Filter using these elements. | foo(bar.baz=true) foo.bar(baz>0).baz foo(bar="yawn").bar |
To understand the JSONPath, consider the following example JSON. The subsequent table provides JSONPath examples that can be used with the example JSON.
{
"store": {
"book": [
{
"category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"price": 8.95
},
{
"category": "fiction",
"author": "J. R. R. Rowling",
"title": "Harry Potter and Chamber of Secrets",
"isbn": "0-395-12345-8",
"price": 29.99
},
{
"category": "fiction",
"author": "J. R. R. Tolkien ",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99
},
{
"category": "fiction",
"author": "Arthur Conan Doyle ",
"title": "Sherlock Homes",
"isbn": "0-795-19395-8",
"price": 9
}
]
}
}
The following table provides the JSONPath examples based on the JSON example.
JSONPath | Description | Notes |
---|---|---|
store..title | All titles are displayed. | The given JSONPath examples are different in construct but provide the same result. |
store.book[*].title | ||
store.book..title | |
["store"]["book"][*]["title"] | ||
store.book[0].title | The first title is displayed. | The given JSONPath examples are different in construct but provide the same result. |
["store"]["book"][0]["title"] | ||
store.book[1:-1].title | All titles except first and last title are displayed. | The given JSONPath examples are different in construct but provide the same result. |
["store"]["book"][1:-1]["title"] | ||
["store"]["book"](price>=9)["title"] | All titles with book price greater than or equal to 9 or 9.0. | |
["store"]["book"](price>9)["title"] | All titles with book price greater than 9 or 9.0. | |
["store"]["book"](price<9)["title"] | All titles with book price less than 9 or 9.0. | |
["store"]["book"](price<=9)["title"] | All titles with book price less than or equal to 9 or 9.0. |
5.17.3.3.14 - JSON with Tree-of-Trees (ToT)
This section provides an overview of the JSON with Tree-of-Trees (ToT) payload. The JSON ToT payload allows you to use the advantages offered by Tree of Trees to extract the JSON payload from the request and provide protection according to the data element defined. Profile Reference can also be used to process different elements of the JSON.
The following figure illustrates the JSON ToT fields.
The properties for the JSON ToT payload are explained in the following table:
Properties | Sub-Field | Description | |
---|---|---|---|
Allow Empty String | Enable to pass values that are defined as only whitespaces, such as value: " “, that are part of the JSON payload and continue processing of the sequential rules. If this check box is disabled, then the Extract rule does not process values that are defined as only whitespaces. | ||
JSON Paths with Profile Reference | JSON path and profile references must be selected. | ||
JSON Path | JSON path representing the JSON field targeted for extraction. | ||
Profile Name | Profile to be used to perform transform operations on the matched content. | ||
User Comments | Additional information related to the action performed by the group processing. | ||
Process Mode | Select to parse JSON data types. This field includes the following three options:
| ||
Complex - Stringify | Select to process the complex JSON data type, such as, arrays and objects, to string values and serializing to JSON values before being passed to the child rule. This option is displayed by default. | ||
Simple - Primitive | Select to process primitive data types, namely, string, int, float, and boolean. It does not support the processing of complex data types, such as, arrays and objects when it matches the JSON data type; the processing fails and an error message is displayed. | ||
Complex - Recurse | Select to process the complex JSON data type and iterate through the JSON array or object recursively. | ||
Preserve Element Order | Select to preserve the order of key-value pairs in the JSON response. This option is selected by default when you create the JSON ToT rule. | ||
Fail Transaction | Select to fail transaction when an error is encountered during tokenization. The error might occur due to use of incorrect token element to protect input data. For example, when handling integer input data, the accurate token element would be an integer token element. The DSG uses tokenization logic to perform data protection. This logic can work to its optimum only if the correct token element is used to protect the input data. If you fail to perform careful analysis of your input data and identify the accurate token element that must be used, then it will result in issues when data is protected using the tokenization logic. To avoid this issue, it is recommended that before defining rules, analyze the input data and identify the accurate token element to be used to protect the data. This option is selected by default when you create the JSON ToT rule. For more information about identifying the token element that will best suit the input data, refer to Protegrity Protection Methods Reference Guide 9.0.0.0. | ||
Minimize Output | Select to minify the JSON response. The JSON response is displayed in a compact form as opposed to the indented JSON response that the DSG sends. This option is deselected by default when you create the JSON ToT rule. Note: It is recommended that this option is selected when the JSON input includes deeply nested key-value pairs. |
The following table describes the additional configuration option for the Recurse mode.
Options | Description | Default |
---|---|---|
recurseMaxDepth | Maximum recursion depth that can be set for iterating matched arrays or objects. CAUTION: This parameter comes in effect only when the Complex - Recurse mode is selected. It is not supported for the Complex - Stringify and the Simple - Primitive modes. | 25 |
5.17.3.3.15 - Microsoft Office Documents
This payload extracts Microsoft Office documents from the request and lets you define regex to control precise extraction.
The following figure illustrates the MS Office payload fields.
The properties for the Microsoft Office documents payload are explained in the following table.
Properties | Sub-Field | Description |
---|---|---|
Pattern | The regular expression pattern on which the extraction is applied is specified in this field. For example, if the text is “Hello World”, then the pattern would be “\w+”. | |
Pattern Group Id | The grouping number to extract from the Regular Expression Pattern. For example, for the text “Hello World”, the Group Id would be 0 to match the characters in the first group of the regex. | |
Profile Name | Profile to be used to perform transform operations on the matched content. | |
User Comments | Additional information related to the action performed by the group processing. | |
Length Preservation | Data transformation output is padded with spaces to make the output length equal to the input length. |
5.17.3.3.16 - Multipart Mime Payload
This payload extracts mime payload from the request and lets you define regex to control precise extraction.
The following figure illustrates the Multipart Mime payload.
The properties for the Multipart Mime payload are explained in the following table.
Properties | Description |
---|---|
Headers | Name-Value pair of the headers to be intercepted. |
Message Body | Intercept the message matching the regular expression. |
Target Object | Target message to be extracted. |
5.17.3.3.17 - PDF Payload
This payload extracts PDF payload from the request and lets you define regex to control precise extraction.
The following figure illustrates the PDF payload fields.
The properties for the PDF payload are explained in the following table.
Properties | Description |
---|---|
Pattern | Pattern to be matched is specified in this field. If no pattern is specified, then the whole input is considered for matching. |
Note: The DSG PDF codec supports only text formats in PDFs.
For any assistance in supporting additional text formats, contact Protegrity Professional Services.
5.17.3.3.18 - Protocol Buffer Payload
The PBpath defines a way to address fields in binary-encoded protocol buffer (protobuf) messages. It uses field IDs to construct an address for messages or fields in a nested message hierarchy.
An example for the PBpath field is shown as follows:
1.101.2.201.301.2.401.701.2.802
In DSG, protocol buffer version 2 is used.
The following figure illustrates the Protocol Buffer (protobuf) payload fields.
The properties for the Protocol Buffer payload are explained in the following table.
Properties | Description |
---|---|
PBPath List | The PB element value to be extracted is specified in this field. Note: Ensure that you enter the PBPath by following the proper syntax for extracting the protobuf messages. If you enter incorrect syntax, then the service that has this protobuf payload definition in the rule fails to load and process the request. |
5.17.3.3.19 - Secure File Transfer Payload
This payload extracts the SFTP message from the request and lets you perform further processing on the files.
The following figure illustrates the Secure File Transfer payload fields.
The properties for the Secured File Transfer payload are explained in the following table.
Properties | Description |
---|---|
File Name | Name of the file to be matched. If the field is left blank, then all the files are matched. |
Method | Rule to be applied on the download or the upload of files. |
5.17.3.3.20 - Shared File
This payload extracts file from the request and lets you define regex to control precise extraction. It is generally used with Mounted services, namely NFS and CIFS.
The following figure illustrates the NFS/CIFS share-related Shared File payload fields.
The properties for the Shared File payload are explained in the following table.
Properties | Description |
---|---|
File Key | Regex logic to identify the source file key to be extracted. Note: Click Test Regex to verify that the regex expression is valid. |
Target File | Attribute that will be extracted from the payload. The options are:
|
5.17.3.3.21 - SMTP Message Payload
This payload extracts SMTP payload from the request and lets you define regex to control precise extraction.
The following figure illustrates the SMTP message payload fields.
The properties for the SMTP payload are explained in the following table.
Properties | Description |
---|---|
SMTP Message Type | Type of SMTP message to be intercepted. |
Method | Condition that specifies whether matching is performed on the files that are uploaded or the files that are downloaded. |
Command | Regular expression to be matched with a command. |
Target Object | Attribute to be extracted. |
5.17.3.3.22 - Text Payload
This payload extracts text payload from the request and lets you define regex to control precise extraction.
The following figure illustrates the Text payload fields.
The properties for the Text payload are explained in the following table.
Properties | Sub-Field | Description |
---|---|---|
Prerequisite Match Pattern | Regular expression to be matched before the action is executed. | |
Pattern | The regular expression pattern on which the extraction is applied is specified in this field. For example, if the text is “Hello World”, then the pattern would be “\w+”. | |
Pattern Group Id | The grouping number to extract from the Regular Expression Pattern. For example, for the text “Hello World”, the Group Id would be 0 to match the characters in the first group of the regex. | |
Profile Name | Profile to be used to perform transform operations on the matched content. | |
User Comments | Additional information related to the action performed by the group processing. | |
Encoding | Type of encoding to be used. | |
Codec | The encoding method used is specified in this field. For more information about codec types, refer to the section Standard Encoding Method List. |
5.17.3.3.23 - URL Payload
This payload extracts the URL payload from the request and extracts a precise object based on the selection.
The following figure illustrates the URL payload fields.
The properties for the URL payload are explained in the following table.
Properties | Description |
---|---|
Target Object | Object attribute to be extracted. |
5.17.3.3.24 - User Defined Extraction Payload
This codec lets you define custom extraction logic and pass arguments to the next rule. The language that is currently supported for extraction is Python.
From DSG 3.0.0.0, the Python version is upgraded to Python 3. UDFs written in Python v2.7 will not be compatible with Python v3.10. To migrate the UDFs from Python 2 to Python 3, refer to the section Migrating the UDFs to Python 3.
The following figure illustrates the User Defined Extraction payload fields.
The properties for the User Defined Extraction payload are explained in the following table.
Properties | Description |
---|---|
Programming Language | Programming language used for data extraction is selected. The language that is currently supported for extraction is Python. |
Source Code | Source code for the selected programming language. CAUTION: Ensure that the class name UserDefinedExtraction is not changed while creating the UDF. Note: For more information about the supported libraries apart from the default Python modules, refer to the section Supported Libraries. |
Initialization Arguments | The list of arguments passed to the constructor of the user defined extraction code is specified in this field. |
Rule Advanced Settings | Provide a specific blocked module that must be overruled. The module will be overruled only for that extract rule. The parameter must be set to the name of the module that must be overruled in the following format: {"override_blocked_modules": ["<name of module>", "<name of module>"]} Note: Currently, methods cannot be overruled using Advanced settings. For more information about the allowed methods and modules, refer to the section Allowed Modules and Methods in UDF. Using the Rule Advanced Settings option, any module that is blocked can be overruled to be unblocked. For example, the following are the modules that are allowed in the gateway.json file. "globalUDFSettings" : { "allowed_modules":["bs4", "common.logger", "re", "gzip", "fromstring", "cStringIO","struct", "traceback"] } The os module is not listed as part of the allowed_modules parameter in the gateway.json file, so it is blocked. To allow the use of the os module in the Source Code of UDF rules, you can set {"override_blocked_modules": ["os"]} in the Advanced Settings of the extract rule. Note: By overriding blocked modules, you risk introducing security risks to the DSG system. |
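The required class name and the role of the Initialization Arguments follow from the descriptions above; the extraction method name and signature shown below are placeholders, not the DSG API. A purely hypothetical skeleton (refer to the section User Defined Functions (UDF) for the actual interface):

```python
import re


# Hypothetical outline only: the class name UserDefinedExtraction is mandatory,
# but the method name and signature here are placeholders, not the DSG interface.
class UserDefinedExtraction:
    def __init__(self, *init_args):
        # Receives the values entered in the Initialization Arguments field
        pattern = init_args[0] if init_args else r".*"
        self.regex = re.compile(pattern)

    def extract(self, data):  # placeholder name; see the UDF section for the real one
        # Return the portion of the input that the next rule should receive
        match = self.regex.search(data)
        return match.group(0) if match else data
```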
Note: The DSG supports the usage of the PyJwt python library in custom UDF creations. PyJWT is a python library that is used to implement Open Authentication (OAuth) using JSON Web Tokens (JWT). JSON Web Tokens (JWT) is an open standard that defines how to transmit information between a sender and a receiver as a JSON object. To authenticate JWT for OAuth, you must write a custom UDF. The PyJwt library version supported by the DSG is 1.7.1.
For more information about writing a custom UDF on the DSG, refer to the section User Defined Functions (UDF).
Note: The DSG supports the usage of the Kafka python library in custom UDF creations. Kafka is a python library that is used for storing, processing, and forwarding for applications in a distributed environment. For example, the DSG uses the Kafka library to forward Transaction Metrics logs to external applications. The Kafka library version supported by the DSG is 2.0.2.
For more information about writing a custom UDF on the DSG, refer to the section User Defined Functions (UDF).
Note: The DSG supports the usage of the Openpyxl Python library in custom UDF creations. Openpyxl is a Python library that is used to parse Excel xlsx, xlsm, xltx, xltm files. This library enables column-based transformation for Microsoft Office Excel. The Openpyxl library version supported by the DSG is 2.6.4.
Note: The DSG uses the in-built tarfile python module for custom UDF creation. This module is used in the DSG to parse .tar and .tgz packages. Using the tarfile module, you can extract and decompress .tar and .tgz packages.
5.17.3.3.25 - ZIP Compressed File Payload
This payload extracts ZIP file from the request and lets you extract file name or file content.
The following figure illustrates the ZIP Compressed File payload fields.
The properties for the ZIP payload are explained in the following table.
Properties | Description |
---|---|
File Name | Name of the file on which action is to be performed. |
Target Object | File name or the file content to be extracted. |
5.17.3.4 - Log
Create a rule in the child node of the extract rule defined for the payload. The payload can be defined for services such as HTTP, SFTP, and REST. When the extract rule is invoked for the payload, the log rule displays the contents of the payload in the gateway log.
The fields for the Logs action type are as seen in the following figure.
The properties for the Log action are explained in the following table.
Properties | Description |
---|---|
Level* | Type of log to be generated. The type of log selected in this field decides the log entry that is displayed on the Audit screen in Forensics. |
Destination** | Location where the log file is sent to is specified in this field. |
Title Format | Used to format the log message title, using the rule name, %(name), as a variable. Only applicable to destinations that support a title. |
Message Format | Used to format the log message, using %(value) as a variable. Value represents the data extracted by the parent rule. |
* In the Log action, depending on the severity of the message, you can define different types of log levels. The following table shows the DSG log level setting and the corresponding log entry that is seen in Forensics.
DSG Log Level | Logging in Forensics |
---|---|
Warning | Normal |
Error | High |
Verbose | Lowest |
Information | Low |
Debug | Debug |
** The following options are available for the Destination field:
- Forensics: Log is sent to the Forensics and the DSG internal log.
- Internal: Log is sent only to the DSG internal log.
Consider the following example to understand the functionality of the Log action type. In this example, the Log action type functionality is used to find the transactionID from a sample text message in the HTTP payload. The following RuleSet structure is created to find the transactionID:
Create an extract rule for the HTTP payload using the default RuleSet template defined under the REST API service. The HTTP payload consists of the following sample text message:
{"source":{"merchantId":"LCPLC-lphf21","storeId":"LPHF2100303086", "terminalId":"0","operatorId":"10","terminalType":"GASPRO", "offline":false}, "message":{"transactionType":"WALLET","messageType":"VERIFYPARTNER", "transactionTime":"2018-05-09T11:48:47+0000", "transactionId":"20180509114847LPHF2100301237000090"}, "identity":{"key":"4169671111","entryType":"KEYEDOPERATOR"}}
The fields for the extract rule for the HTTP message payload are as seen in the following figure:
Create a child extract rule to extract the text message in the payload.
Create a RegEx expression, “transactionID”:(.*)", to match the pattern for the transactionID. The fields for the child extract rule are as seen in the following figure:
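As a quick illustration of what that child extract rule captures from the sample message, the following sketch applies an equivalent pattern with Python's re module (the pattern is adjusted to the transactionId key and quoting used in the sample; illustration only).

```python
import re

message = ('{"message":{"transactionType":"WALLET",'
           '"transactionId":"20180509114847LPHF2100301237000090"}}')

# Capture group 1 holds the value that the Log rule then writes to the gateway log
match = re.search(r'"transactionId":"(.*?)"', message)
if match:
    print(match.group(1))   # 20180509114847LPHF2100301237000090
```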
Create another child rule for the Log action type. The fields for the Log action type are as seen in the following figure:
Add the following values in the Log action type.
- Action: Log
- Level: Warning
- Destination: Internal
- Title Format: %(name)s
- Message Format: %(value)s
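The following standalone Python sketch (illustration only, outside the DSG) shows what the child extract rule's regular expression captures from the sample message; the pattern is tightened here so that the match stops at the closing quote:

```
import re

# Abbreviated version of the sample HTTP payload shown above
sample = ('{"message":{"transactionType":"WALLET",'
          '"transactionId":"20180509114847LPHF2100301237000090"},'
          '"identity":{"key":"4169671111"}}')

# Tightened variant of the child extract rule's pattern; [^"]* stops the
# match at the closing quote of the transactionId value
match = re.search(r'"transactionId":"([^"]*)"', sample)
if match:
    print(match.group(1))  # 20180509114847LPHF2100301237000090
```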
After the log rule is processed, the following message will be displayed in the DSG internal log:
5.17.3.5 - Profile Reference
While creating Rulesets for Dynamic CoP, use the Profile Reference rule for data transformation instead of the Transform rule. The security benefits of using the Profile Reference rule are higher than those of the Transform rule as the requests are triggered on the fly.
The fields for the Profile Reference action are as seen in the following figure.
The following table describes the fields for the Profile Reference action.
Field | Description |
---|---|
Profile Name | Select a reference to an external profile from the drop down |
Note: The Profile Reference action type must always be created as a leaf node (a rule without any child nodes).
5.17.3.6 - Set User Identity
The fields for the Set User Identity action are as seen in the following figure.
Note: If the Set User Identity rule is followed by a Transform rule, then any data processing logs generated in Forensics are logged with the user set using the Set User Identity rule.
In addition, the user set using the Set User Identity rule overrides the user set in the Transform rules for auditing any data processing logs in Forensics.
Note: If the “Set User Identity” rule is configured along with Basic Authentication, then the “user_name” field in the transaction metrics will be set with the username configured in the “Set User Identity” rule.
If Basic Authentication is configured without the “Set User Identity” rule, then the “user_name” field in the transaction metrics displays the username configured in the Basic Authentication.
5.17.3.7 - Set Context Variable
You can use this rule to set a context variable. The value set by this rule is maintained throughout the rule lifecycle.
The following table describes the Variable Name type supported by the Set Context Variable option.
Field | Description |
---|---|
User IP Addr | Captures the client IP address forwarded by the load balancer that distributes client requests among DSG nodes. This IP address is displayed in the audit log. |
Value-External IV Protect, Unprotect | Uses the External IV value that is sent in the header to protect or unprotect data. This value overrides the value set in the Default External IV field in the Transform rule. |
Value-External IV Reprotect | Uses the External IV value that is sent in the header to reprotect data. This value overrides the value set in the Reprotect External IV field in the Transform rule. |
Dynamic Rule | Used when Dynamic CoP is implemented for the given Ruleset hierarchy. A request header with Dynamic CoP rule accesses the URI to complete the Ruleset execution. |
Client Correlation Handle | Captures the Linux epoch time when the protect or unprotect operation is successful. |
User Defined Headers | Extracts JSON data from the input and sets it into the response header. The JSON data is extracted into key-value pairs and appended in the response header. This field also accepts a list of lists as input. For example, [["access-id","asds62231231"],["secret-access-token","sdas1353412"]]. Consider an example where some sample JSON data, {"access-id":"asds62231231", "secret-access-token":"sdas1353412"}, is sent from a server to the DSG. After the DSG processes the request, the JSON data is extracted into key-value pairs and appended in the response header. The key will be the header name and the value will be the corresponding header value. The following snippet is displayed in the response header: access-id -> asds62231231 secret-access-token -> sdas1353412 |
The Set Context Variable action type must always be created as a leaf node - a rule without any child nodes.
User IP address
Records the IP address of the client that sends a request to a DSG node in the audit log. When a client request is sent to the load balancer that distributes incoming requests to the cluster of DSG nodes, the load balancer appends a header to the request. This header captures the client IP address.
The types of headers can be X-Forwarded-For, which is most commonly used, or X-Client-IP, User-Agent, and so on.
Before you create a Set Context Variable rule with the User IP Addr Variable Name type, create an extract rule that extracts the header with the given header name, such as X-Forwarded-For, from the request.
If a request header sends an IP address 192.168.0.0 as the X-Forwarded-For value, the following image shows the client IP in the Forensics log displaying this IP address value.
The fields for the Variable Name type are as seen in the following figure.
The following table describes the fields.
Field | Description | Default (if any) |
---|---|---|
Truncate Input | Select this check box to truncate any context variable value passed in the header that exceeds the maximumInputLength set in the Rule Advanced Settings.If this check box is not selected and the value set in the context variable exceeds the length set in the maximumInputLength parameter, then the transaction fails with an error. | |
Rule Advanced Settings | Set the parameter maximumInputLength such that data beyond this length is not set as the context variable. The datatype for this option is bytes. | 512 |
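To test the rule without a load balancer, you can append the forwarded-IP header yourself. The following sketch reuses the Python client style from the REST API examples later in this section; the host name and URI are placeholders, not part of any shipped configuration:

```
import http.client

# Simulate the header that a load balancer would normally append to the request.
headers = {
    "Content-Type": "text/plain",
    "X-Forwarded-For": "192.168.0.0",   # client IP shown in the Forensics example above
}

conn = http.client.HTTPConnection("restapi")                     # placeholder DSG host
conn.request("POST", "/protect/sample", "Joe Smith", headers)    # placeholder URI
print(conn.getresponse().status)
```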
Value External IV protect
You can send an external IV value, which will be used along with the protect or unprotect algorithm, in the request header to create more secure encrypted data. External IV values add an additional layer of randomness and help in creating secure tokens.
Note: This value overrides the value set in the Default External IV field in the Transform rule.
The fields for the Variable Name type are as seen in the following figure.
The following table describes the fields.
Field | Description | Default (if any) |
---|---|---|
Truncate Input | Select this check box to truncate any context variable value passed in the header that exceeds the maximumInputLength set in the Rule Advanced Settings.If this check box is not selected and the value set in the context variable exceeds the length set in the maximumInputLength parameter, then the transaction fails with an error. | |
Rule Advanced Settings | Set the parameter maximumInputLength such that data beyond this length is not set as the context variable.The datatype for this option is bytes. | 512 |
Note: If an External IV value is sent in the header to protect or unprotect sensitive data, with the case-preserving and position-preserving property enabled in the Alpha-Numeric (0-9, a-z, A-Z) token type, then the External IV property is not supported.
Value-External IV Reprotect
You can send an external IV value, which will be used along with the reprotect algorithm, in the request header to create more secure encrypted data. External IV values add an additional layer of randomness and help in creating secure tokens.
Note: This value overrides the value set in the Default External IV field in the Transform rule.
The fields for the Variable Name type are as seen in the following figure.
The following table describes the fields.
Field | Description | Default (if any) |
---|---|---|
Truncate Input | Select this check box to truncate any context variable value passed in the header that exceeds the maximumInputLength set in the Rule Advanced Settings.If this check box is not selected and the value set in the context variable exceeds the length set in the maximumInputLength parameter, then the transaction fails with an error. | |
Rule Advanced Settings | Set the parameter maximumInputLength such that data beyond this length is not set as the context variable.The datatype for this option is bytes. | 512 |
Note: If an External IV value is sent in the header to protect or unprotect sensitive data, with the case-preserving and position-preserving property enabled in the Alpha-Numeric (0-9, a-z, A-Z) token type, then the External IV property is not supported.
Dynamic Rule
The Dynamic Rule provides a hook in the form of the URI in the preceding rule and a logical endpoint for the Dynamic CoP header request to join the rule tree.
After you define the Dynamic Rule variable name type, you can proceed with creating the Dynamic Injection action type.
The fields for the Variable Name type are as seen in the following figure.
The following table describes the fields.
Field | Description | Default (if any) |
---|---|---|
Truncate Input | Select this check box to truncate any context variable value passed in the header that exceeds the maximumInputLength set in the Rule Advanced Settings.If this check box is not selected and the value set in the context variable exceeds the length set in the maximumInputLength parameter, then the transaction fails with an error. | |
Rule Advanced Settings | Set the parameter maximumInputLength such that data beyond this length is not set as the context variable.The datatype for this option is bytes. | 4096 |
Client Correlation Handle
The client correlation handle captures the Linux epoch time when the protect or unprotect operation is successful.
When you define rulesets, the rules are structured such that the extract rule identifies the protect successful event from the input message. This rule is followed by the extraction of the timestamp using a UDF rule.
The set context variable rule follows next to set the variable to the extracted timestamp. You can further create a rule that converts this timestamp to a hex value followed by a Log rule to display the exact time of protect and unprotect operation in the ESA Forensics or DSG logs.
The fields for the Variable Name type are as seen in the following figure.
The following table describes the fields.
Field | Description | Default (if any) |
---|---|---|
Truncate Input | Select this check box to truncate any context variable value passed to the Set Context Variable rule that exceeds the maximumInputLength parameter value set in the Rule Advanced Settings. Note: The maximum value that can be set for the maximumInputLength parameter is 20. If this parameter is set to a value greater than 20, then the warning message "Value configured by user is ignored as it exceeds 20 Characters (Maximum Limit)" appears in the gateway startup logs and the context variable value is truncated to 20 characters. If this parameter is not configured, the context variable value is truncated to 20 characters by default. If this check box is not selected and the context variable value passed to the Set Context Variable rule exceeds the maximumInputLength parameter value set in the Rule Advanced Settings, then the transaction fails with an error. | |
Rule Advanced Settings | Set the parameter maximumInputLength such that data beyond this length is not set as the context variable. The datatype for this option is number of characters. | 20 |
User Defined Headers
User Defined Headers provide additional information in an HTTP response header that can be helpful for troubleshooting purposes. The User Defined Headers can include information such as custom cookies and state information, and can provide information to the load balancer, for example, the CPU utilization of a particular node behind the load balancer.
The fields for the Variable Name type are as seen in the following figure.
The following table describes the fields.
Field | Description | Default (if any) |
---|---|---|
Truncate Input | Select this check box to truncate any context variable value passed in the header that exceeds the maximumInputLength set in the Rule Advanced Settings.If this check box is not selected and the value set in the context variable exceeds the length set in the maximumInputLength parameter, then the transaction fails with an error. | |
Rule Advanced Settings | Set the parameter maximumInputLength such that data beyond this length is not set as the context variable.The datatype for this option is bytes. | 4096 |
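The following plain Python sketch (outside the DSG) illustrates the mapping described above, turning the sample JSON into header name and value pairs:

```
import json

# Sample JSON data as described in the Set Context Variable table above
payload = '{"access-id": "asds62231231", "secret-access-token": "sdas1353412"}'

# Each JSON key becomes a response header name and each value the header value
for name, value in json.loads(payload).items():
    print(f"{name} -> {value}")
# access-id -> asds62231231
# secret-access-token -> sdas1353412
```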
5.17.3.8 - Transform
For any Transform action, if you click the add icon to add a new rule, the message TRANSFORM rule cannot have a child rule under it appears.
The fields for the Transform action are as seen in the following figure.
The following table describes the methods applicable to the Transform action.
Field | Description |
---|---|
Protegrity Data Protection | Protect data using Protegrity Data Protection methods such as tokenization or encryption. |
Regular Expression Replace | List the patterns to replace for regular expression transformation. |
User Defined Transformation | Provide User Defined Functions (UDFs) for transformation. |
GNU Privacy Guard (GPG) | Enable GPG encryption and decryption for data transformation. |
SAML Codec | Enable Security Assertion Markup Language (SAML) support. |
The Transform action type must always be created as a leaf node (a rule without any child nodes).
If an External IV value is configured in the Transform rule, with the case-preserving and position-preserving property enabled in Alpha-Numeric (0-9, a-z, A-Z) token type, then the External IV property cannot be used to transform sensitive data.
5.17.3.8.1 - GNU Privacy Guard (GPG)
With GPG, the data is first optionally compressed, then encrypted with a one-time generated session key, and this session key is in turn encrypted with the public key. The extracted data from the execution of a RuleSet can be transformed using the GPG method in the Transform action.
From the DSG Web UI, in the Operation field, you can select either the Encrypt or the Decrypt operation. The options for each operation vary based on the selection. The DSG appliance is compatible with GPG v2.2. Refer to the GPG documentation at https://www.gnupg.org/faq/gnupg-faq.html.
Importing keys
Perform the following steps to import public and private keys generated outside the DSG.
To import keys:
Upload the public key from the ESA Web UI. Navigate to Cloud Gateway > 3.3.0.0 {build number} > Transport > Certificate/Key Material.
The Certificate/Key Material screen appears.
On the Certificate/Key Material screen, click Upload.
Click Choose File and select the public key to be uploaded.
Upload the private key to ESA using an FTP tool.
On the DSG CLI Manager, navigate to the
/opt/protegrity/alliance/3.3.0.0.<build number>-1/config/resources/
directory. Verify that the private key and public key are available in this directory. Run the following command.
docker ps
A list of all the running Docker containers is displayed. For example:
CONTAINER ID   IMAGE                  COMMAND                  CREATED        STATUS          PORTS   NAMES
28791aa86a02   gpg-agent:3.3.0.0.51   "gpg-agent --server …"   15 hours ago   Up 25 minutes           gpg-agent-3.3.0.0.51-1
Under the NAMES column, note the name of the container corresponding to the gpg-agent:3.3.0.0.<build number> image.
Run the following command to import the public key.
docker exec -it <Name of the GPG container> gpg --homedir /opt/protegrity/alliance/config/resources --import /opt/protegrity/alliance/config/resources/<public_key_file_name>
For example,
docker exec -it gpg-agent-3.3.0.0.51-1 gpg --homedir /opt/protegrity/alliance/config/resources --import /opt/protegrity/alliance/config/resources/test.gpg
Import the private key by running the following command:
docker exec -it <Name of the GPG container> gpg --homedir /opt/protegrity/alliance/config/resources --allow-secret-key-import --pinentry-mode loopback --import /opt/protegrity/alliance/config/resources/<private_key_file_name>
For example,
docker exec -it gpg-agent-3.3.0.0.51-1 gpg --homedir /opt/protegrity/alliance/config/resources --allow-secret-key-import --pinentry-mode loopback --import /opt/protegrity/alliance/config/resources/secret.gpg
Trust the imported keys as ultimate keys by running the following command:
docker exec -it <Name of the GPG container> gpg --homedir /opt/protegrity/alliance/config/resources --edit-key <Name>
For example,
```
docker exec -it gpg-agent-3.3.0.0.51-1 gpg --homedir /opt/protegrity/alliance/config/resources --edit-key test.user@sample.com
gpg> trust
#enter 5<RETURN>
#enter y<RETURN>
gpg> quit
```
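Optionally, verify that both keys were imported. The following commands assume the same container name and home directory used in the commands above:
docker exec -it gpg-agent-3.3.0.0.51-1 gpg --homedir /opt/protegrity/alliance/config/resources --list-keys
docker exec -it gpg-agent-3.3.0.0.51-1 gpg --homedir /opt/protegrity/alliance/config/resources --list-secret-keys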
Generating GPG keys
Steps to generate the GPG keys on ESA.
Log in to the ESA CLI Manager.
Run the following command to generate the key.
docker exec -it <Name of GPG container> gpg --homedir /opt/protegrity/alliance/config/resources/ --pinentry-mode=loopback --full-generate-key
For example,
docker exec -it gpg-agent-3.3.0.0.51-1 gpg --homedir /opt/protegrity/alliance/config/resources/ --pinentry-mode=loopback --full-generate-key
Select the type of key that you want to generate from the following options.
- (1) RSA and RSA (default)
- (2) DSA and Elgamal
- (3) DSA (sign only)
- (4) RSA (sign only)
Enter the key size for the key. The key size can range from 1024 to 4096 bits.
Select the validity of the key from the following options.
- 0 = key does not expire
- <n> = key expires in n days
- <n>w = key expires in n weeks
- <n>m = key expires in n months
- <n>y = key expires in n years
Enter the real name that identifies the key.
Enter the email address for the key.
Enter a comment for the key. The public key in GPG includes a key and user ID information that identifies the key with the user ID.
Select (O) to confirm the user ID details.
Press Enter or provide a passphrase. The passphrase is used during decryption.
Run the following command to verify the key is generated.
docker exec -it <Name of the container> gpg --homedir /opt/protegrity/alliance/config/resources/ --list-keys
For example,
docker exec -it gpg-agent-3.3.0.0.51-1 gpg --homedir /opt/protegrity/alliance/config/resources/ --list-keys
The gpg directory must include the following files after you generate a GPG key successfully:
- pubring.gpg
- secring.gpg
- trustdb.gpg
- random_seed
- s.gpg-agent
- s.gpg-agent.ssh
- s.gpg-agent.extra
- s.gpg-agent.browser
- private-keys-v1.d
- openpgp-revocs.d
Encrypt operation
The Transform rule options related to the Encrypt operation for the GPG rule implementation are listed in this section.
The following table describes the fields for Encrypt operation in the GNU Privacy Guard method.
Field | Description | Restrictions (if any) |
---|---|---|
Recipient Name | Encrypt data for the user provided. Recipient name is defined when the public key is generated. You can either provide the email id or the key id. | |
ASCII Armor* | Enable to generate ASCII format output. This option can be used when the output data needs to be created in a format that can be safely sent via email or be printed. | |
Custom Arguments | Provide additional arguments that you want to pass to the GPG command line apart from the given arguments. Ensure that the syntax is correct. | |
Decrypt operation
The decrypt operation transform rule-related options for the GNU Privacy Guard (GPG) rule implementation are listed in this section.
The following table describes the fields for the Decrypt operation in the GPG method.
Field | Description | Notes |
---|---|---|
Passphrase | Provide the private key passphrase as a string or the name of the file placed in the /config/resources directory that contains the passphrase. A null value means that the private key is not passphrase protected. | |
Delimiter | Regular expression used to delimit the stream. Rules will be invoked on the delimited streams. | |
Custom Arguments | Provide additional arguments that you want to pass to the GPG command line apart from the given arguments. Ensure that the syntax is accurate. | |
5.17.3.8.2 - Protegrity Data Protection method
The following table describes the fields for Protegrity Data Protection method.
Field | Sub-Field | Description | Notes |
---|---|---|---|
Protection Method | Specify the action performed (protection, unprotection, or re-protection). | ||
Data Element Name | Specify the Data element used (for protection, unprotection, or re-protection). | The Data Element Name drop down list populates data elements from the deployed policy. | |
Encryption Data Element | Select to process the encryption data element | ||
Default External IV | Default value to be used as an external initialization vector. | ||
Reprotect New Data Element Name | New data element name that will be used to reprotect data. | The Data Element Name drop down list populates data elements from the deployed policy. | |
Reprotect New Default External IV | New default value to be used as an external initialization vector. | ||
Default Username | Policy Username used for the user. | |
Encoding | Encoding method to be used. | ||
Codec | Based on the encoding selected, select the codec to be used.For more information about codec types, refer to the section Codecs. | ||
Prefix | Prefix text to be padded before the protected value. This helps in identifying protected text from clear text. | ||
Suffix | Suffix text to be padded after the protected value. This helps in identifying protected text from clear text. | ||
Padding Character | Characters to be added to raise the number of characters to the minimum required size by the Protection method. | ||
Minimum Input Length | Number of characters that define if the input is too short for the Protection method and must be padded with the Padding Character. | |
Advanced Settings | |||
Permissive Error Handling | Click the expand icon to view the permissive error handling settings. | |
Enabled | Select to enable permissive handling of error generated due to distorted input. | ||
Error strings | Regex pattern to identify the errors that need to be handled permissively. You can also provide the exact error message.For example, if the error message on the Log viewer screen is “The input is too short”, then you can enter the exact message “The input is too short” in this field. Other error message examples are “The input is too long”, “License is invalid”, “Permission denied”, “Policy not available”, and so on. Based on the error message that you encounter and want to handle differently, the value in this field should be adjusted accordingly. For example, a pattern, such as, too short, too long, Permission denied can be used to gracefully handle the respective three errors. | ||
Output Data | Regex substitution pattern that dictates how output values for erroneous input values are substituted. For example, if this value is set to "????", then the distorted input will be replaced with this value, thus allowing the rule to process instead of failing due to distorted input. Users may choose such fixed substitution strings to spot individual erroneous input data values post processing of data. You can also add a prefix and suffix to the input. The regex must follow the "<prefix>\g<0><suffix>" REGEX substitution pattern. For example, if you want the input to be identified with "#*_" as the input prefix and "_#*" as the input suffix, the regex pattern will be "#*_\g<0>_#*". |
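The following Python sketch shows how such substitution patterns behave on an erroneous input value (illustration only; the DSG applies the configured pattern internally):

```
import re

erroneous_input = "4111"     # e.g. an input rejected as "The input is too short"

# Fixed substitution: every erroneous value is replaced with "????"
print(re.sub(r".+", "????", erroneous_input))          # ????

# Prefix/suffix substitution following the <prefix>\g<0><suffix> pattern
print(re.sub(r".+", r"#*_\g<0>_#*", erroneous_input))  # #*_4111_#*
```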
5.17.3.8.3 - Regular expression replace
The following table describes the fields for Regular Expression Replace method.
Field | Sub-Field | Description | Notes |
---|---|---|---|
Replace Pattern | List of patterns to be matched and replaced for regular expression transformation. | ||
Match Pattern | Regex logic that defines pattern to be matched. | Instead of using .*, use .+ to match the sequence of characters. | |
Replace Value | Value to replace the matched pattern. |
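As a standalone illustration of a Match Pattern and Replace Value pair (hypothetical values, not DSG configuration syntax):

```
import re

text = "card=4111111111111111;exp=12/26"   # hypothetical input

# Match Pattern: card=\d+    Replace Value: card=****   (hypothetical rule values)
print(re.sub(r"card=\d+", "card=****", text))
# card=****;exp=12/26
```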
5.17.3.8.4 - Security Assertion Markup Language (SAML) codec
Any SAML implementation involves two entities: the Single Sign-On (SSO) application or Service Provider that the user is trying to access, and the Identity Provider (IDP) responsible for authenticating the user.
A typical SAML implementation is illustrated in the following diagram.
Consider that the user wants to access the Service Provider (SP). The SP redirects the user’s authentication request to the Identity Provider (IDP) through the DSG.
When a user tries to access a Service Provider (SP), an authentication request is generated. The DSG receives and forwards the request to the IDP. The IDP authenticates the user and reverts with a SAML assertion response. The DSG updates the domain names in the SAML assertion response, verifies the inbound SAML signature, and re-signs the modified SAML response to ensure that the response is secure and not tampered. The response is then forwarded to the SP. The SP verifies the response and authenticates the user.
After this authentication is complete, when the same user tries to access any other SP on the same network, authentication is no longer required. The IDP identifies the user from the previous active session and reverts with a SAML response that user is already authenticated.
Note: The SAML codec is tested with the SalesForce SP and the Microsoft Azure AD IDP.
Before you begin:
Ensure that the following prerequisites are met:
- The IDP’s public certificate, which was used to sign the message response, is uploaded to the ESA using the Certificate screen.
- The certificate and private key that the customer wants to use to re-sign the SAML response using the DSG must be uploaded to the ESA.
- The certificate that is used to re-sign the message must be uploaded to the SP for validating the response.
- As the user requests to access the SP are redirected to the IDP through the DSG, ensure that in the SP configurations, the IDP redirect URLs are configured.
For example, if the IDP is https://login.abc.com, then in the SP configurations ensure that the redirect URLs are set to https://secure.login.abc.com.
As the SAML response from the IDP, that is used to authenticate the user, is redirected through the DSG to the SP, ensure that the IDP configurations are set as required.
For example, if the SP is https://biloxi.com, then in the IDP configurations, ensure that the redirect URLs are set to https://secure.biloxi.com.
Note: After you upload the certificates to the ESA, ensure that you deploy configurations from the ESA such that the certificates and keys are pushed to all the DSG nodes in the cluster.
The SAML codec fields can be seen in the following image.
The following table describes the fields for the SAML codec.
Field | Sub-Field | Description | Notes |
---|---|---|---|
Message Verification Certificate | Select the IDP’s public certificate from the list of available certificates drop down. | Ensure that the certificates are uploaded to the ESA from the Certificate screen. | |
Message Signing Certificate | Select the certificate that will be used to re-sign the response from the list of available certificates drop down list. Both message and assertion signing is supported. | Ensure that the certificates are uploaded to the ESA from the Certificate screen. | |
Assertion Namespace | Assertion namespace value defined in the SAML response. | This field is redundant. Any value that you enter in this field will be bypassed. | |
Key Passphrase | The passphrase for the encrypted signing key that will be used to re-sign the certificate. | This field is redundant. Any value that you enter in this field will be bypassed. | |
Replace Regex | Regex pattern to identify the forwarded hostname received from the IDP. You can also provide the exact hostname. | ||
Replace value | Hostname that will be used to forward the SAML response. | ||
Assertion settings | Use Message Certificates | Select this checkbox to use the same certificates that were used for verification of the SAML response to re-sign the assertions. | This field is redundant. Any value that you enter in this field will be bypassed. |
Verification Certificate | If you choose to use a certificate other than the one used to re-sign the message response, then select a certificate from the list of available certificates drop down list. | This field is redundant. Any value that you enter in this field will be bypassed. | |
Signing Certificate | If you choose to use a certificate other than the one used to re-sign the message response, then select a certificate from the list of available certificates drop down list. | This field is redundant. Any value that you enter in this field will be bypassed. | |
5.17.3.8.5 - User defined transformation
The following figure illustrates the User Defined Transformation payload fields.
The following table describes the fields for User Defined Transformation method.
Properties | Description | Notes |
---|---|---|
Programming Language | Programming language used for data transformation is selected. The language that is currently supported for transformation is Python. | |
Source Code | Source code for the selected programming language. | Ensure that the class name UserDefinedTransformation is not changed while creating the UDF. |
Initialization Arguments | The list of arguments passed to the constructor of the user defined transformation code is specified in this field. | |
Rule Advanced Settings | As part of the security enhancements, the gateway.json file includes the globalUDFSettings key. This key and the corresponding value define a list of vulnerable modules and methods that are blocked. Provide the specific module that must be overruled. The module will be overruled only for the extract rule. The parameter must be set to the name of the module that must be overruled in the following format: {"override_blocked_modules": ["<name of module>", "<name of module>"]} Using the Rule Advanced Settings option, any module that is set as blocked can be overruled to be unblocked. For example, setting the value as {"override_blocked_modules": ["os"]} allows the os module to be used in the code in spite of it being blocked in the gateway.json file. | Currently, methods cannot be overruled using Advanced settings. |
The DSG supports the usage of the PyJwt python library in custom UDF creations. PyJWT is a python library that is used to implement Open Authentication (OAuth) using JSON Web Tokens (JWT). JSON Web Tokens (JWT) is an open standard that defines how to transmit information between a sender and a receiver as a JSON object. To authenticate JWT for OAuth, you must write a custom UDF. The PyJwt library version supported by the DSG is 1.7.1.
The DSG supports the usage of the Kafka Python library in custom UDF creations. Kafka is a Python library that is used for storing, processing, and forwarding data for applications in a distributed environment. For example, the DSG uses the Kafka library to forward Transaction Metrics logs to external applications. The Kafka library version supported by the DSG is 2.0.2.
The DSG supports the usage of the Openpyxl Python library in custom UDF creations. Openpyxl is a Python library that is used to parse Excel xlsx, xlsm, xltx, xltm files. This library enables column-based transformation for Microsoft Office Excel. The Openpyxl library version supported by the DSG is 2.6.4.
5.17.3.9 - Dynamic Injection
After defining a Dynamic Rule variable name type to store the request header that sends the Dynamic CoP structure, configure the Dynamic Injection action type to process Dynamic rules and protect data on-the-fly.
While creating Rulesets for Dynamic CoP, use the Profile Reference rule for data transformation instead of the Transform rule. The security benefits of using the Profile Reference rule are higher than those of the Transform rule as the requests can be triggered outside the secure network perimeter of an organization.
The fields for the Dynamic Injection action are as seen in the following figure.
The following table describes the Authorized Rule Types applicable to the Dynamic Injection action.
Field | Description |
---|---|
Rule Type | Allowed Action type that will be parsed from the Dynamic Rule request header. Select from the available Action type options. |
Payload | Select the allowed Payload type that will be parsed from the Dynamic Rule request header. This field also accepts Regex patterns for selecting payload that needs to be parsed.Note: To select all payloads, click the Payload drop down and select ALL. |
Note: The Dynamic Injection action type must always be created as a leaf node (a rule without any child nodes).
Creating a Dynamic Injection rule
The dynamic injection rule must be created to send dynamic Ruleset requests.
To create a Dynamic Injection Rule:
Create an extract rule under an existing profile or a new profile to extract the request header with the Dynamic Rule.
Under this extract rule, create a Set Context Variable rule to extract the Dynamic rule and set as a variable.
Under the same profile, create an extract rule to extract the request message.
Under the extract rule created in step 3, create the Dynamic Injection rule. The Dynamic rule received in the request header in step 2 will be hooked to this rule for further processing.
Click Deploy or Deploy to Node Groups to apply configuration changes.
When a request is received at the configured URI, the request header and request body are processed, and then the response is sent.
5.18 - DSG REST API
In addition to providing a conceptual overview, the discussion steps through a use case where DSG is configured with a RuleSet object behind a REST API URL end-point.
The RuleSet has been designed to provide fine-grained protection of an XML document where the sensitive data is identified through an XPath. The sections also contain code samples in Java, Python and Scala that show example invocation of the API from client applications written in these languages.
Overview
In addition to offering an In-Band mechanism for data security, DSG offers a RESTful Web Service API. This mechanism of utilizing DSG’s capabilities is referred to as On-Demand data security.
Client applications invoke DSG’s REST API by sending it HTTP requests pointed at pre-configured URLs in the DSG. These URLs are internally backed by request processing behaviors which are specified through RuleSets configuration objects. In general, RuleSets allow a user to define hierarchical, step-by-step processing of data conveyed in any Layer-7 protocol messages. Example Rule nodes of a RuleSet tree might include extraction of HTTP POST request body, parsing of the body content according to a certain format (typically specified in HTTP Content-Type header), extraction of sensitive data within the message body and protection of extracted data.
While invoking a REST API call on DSG, a client conveys a document of certain format embedded in an HTTP request method (e.g. HTTP POST) to a pre-configured DSG REST URL. As an outcome of processing the request, DSG sends a 200 response to the client. The response message carries a modified version of the original document wherein select sensitive data pieces in plaintext are substituted with their cryptographic equivalent (either cipher text or tokens).
The following figure shows an example usage of DSG’s REST API. It illustrates a fine-grained data security scenario where DSG accepts an incoming request from a client where the request carries a document with certain sensitive data elements embedded in it. DSG parses the input document, extracts sensitive data pieces in it and protects those extracted data fragments as per preconfigured data security policies.
The protected data fragments are substituted in-place at the same location of their plaintext equivalents in the original decoded document. The decoded document is then encoded back to its original format and delivered back to the client as part of the API response.
While the illustration above shows data protection, one can certainly conclude that the same mechanism can be used for any other form of data transformation natively available in Protegrity core data security platform including data un-protection, re-protection, masking and hashing.
REST API Authentication
DSG supports basic authentication and mutual authentication.
Basic Authentication
DSG supports user authentication using the HTTP basic authentication mechanism.
The user credentials (username and password) are validated against ESA LDAP, which may be connected to any external directory source. Successful user authentication results are cached for a configurable period thus saving the authentication round trips for performance efficiency reasons. The identity of the authenticated users is automatically used as their ‘policy user’ identity for performing data security operations.
In this authentication type, the username and password are included in the authorization header.
For permissions and access control, there are three types of roles in DSG:
- Cloud Gateway Admin
- Cloud Gateway Viewer
- Cloud Gateway Auth
The following table describes each of these roles.
Role | Description |
---|---|
Cloud Gateway Admin | Role that allows modifying the configurations |
Cloud Gateway Viewer | Role that allows only reading the configurations |
Cloud Gateway Auth | Role to define user authentication for REST and HTTP protocols |
In DSG, you can allow users linked to the Cloud Gateway Authentication role to perform REST API authentication. For a REST API call, the client sends the username and password in the authorization header to the server. After the authentication is successful, all the rules that are part of the REST API call are processed. The authentication is performed at the parent level of every rule and the authentication is cached for every child rule.
The following steps explain the process for the REST API authentication:
- The user makes a REST API call to the gateway.
- The authorization parameters, username and password, are verified against LDAP.
- On successful authentication, the username-password combination is cached for the transaction.
- The gateway then sends a response to the REST API call sent by the user.
- If the authentication fails, the server sends a 401 error response stating that the authorization parameters are invalid. The REST API call then terminates.
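A client call with basic authentication might look like the following sketch, which reuses the Python client style from the REST API examples later in this section; the host, URL, and credentials are placeholders:

```
import base64
import http.client

# Placeholder credentials for a user linked to the Cloud Gateway Auth role
credentials = base64.b64encode(b"dsguser:password").decode()

headers = {
    "Content-Type": "application/xml",
    "Authorization": "Basic " + credentials,
}

conn = http.client.HTTPConnection("restapi")     # placeholder DSG REST API host
conn.request("POST", "/protect/xml/xpath",
             "<Person><Name>Joe Smith</Name></Person>", headers)
resp = conn.getresponse()
print(resp.status)   # 401 is returned if the credentials are invalid
```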
To enable REST API authentication from the ESA Web UI, ensure that the user is linked to the Cloud Gateway Auth role as shown in the following figure.
Enabling REST API Authentication
You can enable REST authentication using the HTTP message payload.
To enable REST API Authentication:
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > RuleSet.
Select the required rule.
In the Authentication drop down list, select Basic.
Save the changes.
You can also request client certificates as a part of authorization to make a REST API call. After the certificate is verified successfully, all the rules that are a part of the REST API call are executed. If the certificate is invalid, a 401 error response is sent to the user.
Enabling a Client Certificate for a Rule
Apart from specifying client certificate at tunnel level, you can also specify client certificate at service level.
To enable Client Certificate for a Rule
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > RuleSet.
Select the required rule.
Check the Require Client Certificate checkbox.
Save the changes.
TLS Mutual Authentication
DSG can be configured with trusted root CAs and/or the individual client machine certificates for the machines that will be allowed to connect to DSG’s REST API TLS tunnel.
Client machines that fail to offer a valid client certificate will not be able to connect to DSG’s REST API TLS ports.
When communicating with DSG, you can add an additional layer of security by requesting the HTTP client to present a client certificate. You can do so by enabling mutual authentication in DSG. The certificate presented by the client must be trusted by DSG in order for the TLS connection to be successfully established.
Before rolling out to production, in the testing environment, you can use DSG to generate the client key and certificate. The client key and certificate are self-signed by DSG using a self-generated CA.
Enabling Mutual Authentication
DSG supports TLS mutual authentication and you can use steps in this section to enable it.
Ensure that the following prerequisites are completed in the given order.
- Ensure that you generate custom CA certificates and keys, for example, a certificate file
ca_test.crt
, a key fileca_test.key
, a pem fileca_test.pem
and upload it from the Certificate/Key Material screen on the DSG. - Ensure that you generate custom server certificates and keys, for example, a certificate file
server_test.crt
, a key fileserver_test.key
, a pem fileserver_test.pem
, and sign it with the CA certificates generated in step 1. After creating the certificates and keys, upload it from the Certificate/Key Material screen on the DSG. - Ensure that you generate custom client certificates and keys, for example, a certificate file
client_test.crt
, a key fileclient_test.key
, a pem fileclient_test.pem
, and sign it with the CA certificates generated in step 1. You must use the client certificate while sending a request to the DSG.
To enable Mutual Authentication:
Configuring Tunnel to enable Mutual Authentication.
On the ESA Web UI, navigate to the Tunnels screen and click the HTTP tunnel that you want to edit.
In TLS Mutual Authentication, set the value as CERT_OPTIONAL.
In CA Certificates, provide the absolute path of the ca_test.pem certificate.
Configuring rule to enable Mutual Authentication.
- In the extract rule, ensure that you select the Require Client Certificate check box.
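On the client side, the connection must present the client certificate generated in the prerequisites. The following Python sketch shows one way to do this; the file names follow the examples above, while the host name, port, and URI are placeholders:

```
import http.client
import ssl

# Trust the custom CA that signed the DSG server certificate
context = ssl.create_default_context(cafile="ca_test.pem")
# Present the client certificate and key signed by the same CA
context.load_cert_chain(certfile="client_test.crt", keyfile="client_test.key")

conn = http.client.HTTPSConnection("restapi", 443, context=context)  # placeholder host and port
conn.request("POST", "/protect/xml/xpath",
             "<Person><Name>Joe Smith</Name></Person>",
             {"Content-Type": "application/xml"})
print(conn.getresponse().status)
```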
Protecting an XML Document through DSG REST API
Let’s review a use case where a client requests protection of certain sensitive information within an XML document through DSG’s REST API.
While this use case describes fine-grained protection of an XML document, one can easily translate this into other structured data formats such as CSV, JSON etc. or even unstructured data formats.
As a precursor to understanding the DSG RuleSet configuration, let’s first review the REST API input and the expected output. The following snippet shows sample REST API request and response messages. The client sends an HTTP POST request message that carries an XML document in it. The expectation is that the content of the <Person><Name>…</Name></Person> XML hierarchy be protected at word boundaries. DSG responds back with an updated XML document wherein the specified sensitive data has been substituted with tokens.
Request:
POST /protect/xml/xpath HTTP/1.1
Host: restapi
Content-Length: 85
Content-Type: application/xml
<Person>
<Title>Mr.</Title>
<Name>Joe Smith</Name>
<Gender>Male</Gender>
</Person>
Response:
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: application/xml
Server: Protegrity Cloud Gateway 1.0.0.170
<Person>
<Title>Mr.</Title>
<Name>nM9M 4NFuRl9</Name>
<Gender>Male</Gender>
</Person>
To produce the API output shown above, DSG is configured with a RuleSet object ahead of time. The RuleSet object is tied to a REST API URL (/protect/xml/xpath in this example). As mentioned earlier, a RuleSet object is a collection of all the rules responsible for handling requests arriving on a specific URL. The handling of this request is decomposed into four cascading steps as shown in the following figure.
Step 1: Extract body of the HTTP request message. The extracted body content will be the entire XML document. The extracted output of this Rule will be fed to all its children sequentially. In this example, there is only one child of this Rule node.
Step 2: Parse the input as an XML document such that an XPath can be evaluated on it to find position offsets of the sensitive data content.
Note: One may choose to treat the XML document as a flat text file and run a REGEX on it instead and get the same results. The choice depends on studying the XML and where all sensitive data lies and which mechanism will yield more accurate, maintainable and high performant results.
Step 3: Split the extracted data from the previous rule into words. This will be done by running a simple REGEX on the input. Each word will be fed into the child rule nodes of this rule one by one. In this use case there is only one child to this Rule.
Step 4: Protect input content. Since this Rule is a leaf node (a node without any children), return resulting ciphertext or token to the parent.
At the end of Step 4, the RuleSet traversal stack will unwind and each branch Rule node will reverse its previous action such that the overall data can be brought back to its original format. Going back in the reverse direction, Step 4 will return tokens to Step 3 which will concatenate them together into a string. Step 2 will substitute the string yielded back from Step 3 into the original XML document in place of the original plaintext string pointed at by the configured XPath. Step 1 that was originally responsible for extracting body of the HTTP request will now replace what has been extracted with the modified XML document. A layer of platform logic outside of RuleSet tree execution will create an HTTP response message which will convey the modified XML document to the client.
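To make the four steps and the unwinding concrete, the following standalone Python sketch mimics the flow on the sample document; the protection step is a mock (not the Protegrity protection API) and Step 1, the HTTP body extraction, is omitted:

```
import re
import xml.etree.ElementTree as ET

def mock_protect(word):
    # Stand-in for Step 4; the DSG would tokenize or encrypt the word here
    return word[::-1]

# XML document carried in the HTTP POST body (Step 1 is omitted)
body = "<Person><Title>Mr.</Title><Name>Joe Smith</Name><Gender>Male</Gender></Person>"

# Step 2: parse the XML and locate the sensitive node (the XPath target)
root = ET.fromstring(body)
name = root.find("Name")

# Step 3: split the extracted text into words; Step 4: protect each word
protected = " ".join(mock_protect(w) for w in re.split(r"\s+", name.text))

# Unwinding: substitute the protected string back and re-encode the document
name.text = protected
print(ET.tostring(root, encoding="unicode"))
```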
Let us translate the request handling design described above into real DSG configuration. The figures from Step 2 Rule Node Configuration to Postman Client Example depict Rule nodes creation for Step 1 through Step 4 in the ESA Web UI under the Cloud Gateway section.
Java Client using Native java.net.HttpURLConnection
This section describes REST API example with JAVA client.
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
/*
* DSG REST API Client Example:
*
* This example simulates a DSG REST API client. The example uses Java native
* HttpURLConnection utility for sending an HTTP POST request to a configured
* REST API end-point URL in DSG ("http://restapi/protect/xml/regex" or
* "http://restapi/protect/xml/xpath").
*
* The example sends an XML document with sensitive content in plain-text and
* expects to receive an HTTP response (200) where such content has been
* substituted by cipher-text or tokens. In order to support this example, the
* DSG has been configured with a corresponding REST API RuleSet that protects
* information contained in <Name>...</Name> XML tag.
*
* The DSG REST API consumes the entire XML document, finds the sensitive
* information in it either based on a configured REGEX or XPath, and responds
* back with a modified XML document where the sensitive information has been
* protected. No other part of the input document is modified.
*
* While this example demonstrates XML as the subject content, DSG supports a
* variety of content types including all textual formats (XML, JSON, HTML, CSV,
* JS, HTML etc.) as well as complex formats such as XLS(X), PPT(X), DOC(X),
* TSV, MHT, ZIP and PDF.
*/
public class RestApiSampleHttpURLConnection {
public static void main(String[] args) {
// DSG's REST API end point URL - tied to REGEX or XPath based RuleSet
String dsgRestApiUrl = "http://restapi/protect/xml/xpath";
// String dsgRestApiUrl = "http://restapi/protect/xml/regex";
HttpURLConnection conn = null;
try {
// Create connection
URL url = new URL(dsgRestApiUrl);
conn = (HttpURLConnection)url.openConnection();
// Set headers and body
conn.setRequestMethod("POST");
conn.addRequestProperty("Content-Type", "application/xml");
conn.setDoOutput(true);
// Send request
DataOutputStream wr = new DataOutputStream (conn.getOutputStream());
String inXml = "<Person>"
+ "<Title>Mr.</Title>"
+ "<Name>Joe Smith</Name>"
+ "<Gender>Male</Gender>"
+ "</Person>";
wr.writeBytes(inXml);
wr.close();
System.out.println("REQUEST: \n" + "POST " + dsgRestApiUrl + "\n" + inXml);
// Receive response line by line
BufferedReader in = new BufferedReader(
new InputStreamReader(conn.getInputStream()));
String inputLine;
StringBuffer response = new StringBuffer();
while ((inputLine = in.readLine()) != null) {
response.append(inputLine);
}
in.close();
System.out.println("RESPONSE: \n" + response.toString());
} catch (Exception e) {
e.printStackTrace();
}
}
}
Python Client
This section showcases REST API example with Python client.
#!/usr/bin/python
import http.client
reqBody = "<Person> \
<Title>Mr.</Title> \
<Name>Joe Smith</Name> \
<Gender>Male</Gender> \
</Person>"
headers = {'Content-type': 'application/xml'}
conn = http.client.HTTPConnection('restapi')
conn.request('POST', '/protect/xml/xpath', reqBody, headers)
print(conn.getresponse().read())
Scala Client using Apache HTTP Client
This section showcases REST API example with SCALA client.
import org.apache.http.client.methods.HttpPost
import org.apache.http.impl.client.DefaultHttpClient
import org.apache.http.entity.StringEntity
import org.apache.http.util.EntityUtils;
object RestApiClient {
def main(args: Array[String]): Unit = {
val post = new HttpPost("http://restapi/protect/xml/xpath")
post.addHeader("Content-Type","application/xml")
val inXml = "<Person>" +
"<Title>Mr.</Title>" +
"<Name>Joe Smith</Name>" +
"<Gender>Male</Gender>" +
"</Person>";
val client = new DefaultHttpClient
val params = client.getParams
post.setEntity(new StringEntity(inXml))
val response = client.execute(post)
val rspBodyString = EntityUtils.toString(response.getEntity)
println("RESPONSE: \n" + rspBodyString)
}
}
Postman (Chrome Plugin) Client
This section describes a REST API example with the Postman client.
5.19 - Enabling selective tunnel loading on DSG nodes
When the DSG nodes are in a cluster, generally, the ESA is responsible for pushing the Ruleset configurations to all the nodes in the cluster. In a situation where it is required that only specific tunnels are loaded on specific DSG nodes, the selective tunnel loading feature can be used.
The following figure describes how labels work with the tunnel and the DSG node definitions.
In the above figure, consider an example TAC consisting of an ESA and two DSG nodes, namely DSG Node 1 and DSG Node 2. The DSG Node 1 is an on-premise DSG, while the DSG Node 2 is a DSG on cloud. The DSG Ruleset configuration defined on the ESA includes multiple tunnel configurations, for instance, Tunnel A that must be loaded only on the DSG Node 1 and Tunnel B that must be loaded only on the DSG Node 2. The Tunnel C is common to both the DSG nodes and hence is loaded on both the nodes.
With the selective tunnel load feature, labels can be set for the tunnel and the nodes in a cluster such that only the required tunnel is loaded on the DSG node.
Perform the following steps to achieve selective tunnel loading.
Define the labels for each DSG node in a TAC.
Define the labels for each tunnel. This label name must match the label name defined in the DSG node label definition.
Deploy the DSG configuration from the ESA by performing the following steps.
- Login to the ESA Web UI.
- Navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster.
- Select the Refresh drop down menu and click Deploy.
Based on these definitions, when the DSG configuration is deployed, the Tunnel A configurations are loaded in the DSG Node 1 since the label, node1, defined in both the configurations is the same. Similarly, the Tunnel B configurations are loaded in the DSG Node 2 since the label, node2, defined in both the configurations is the same. Since no labels are defined for the Tunnel C, the Tunnel C configurations are loaded in the DSG Node 1 and DSG Node 2.
The default behavior of the Deploy functionality does not change with the enhancements provided by Selective Tunnel Loading. In a TAC, if a configuration is pushed from the ESA, then it will be pushed to all the DSG nodes that are a part of the TAC. The configuration may include a Data Security Policy, RuleSet, Tunnels, and Services associated with a Tunnel.
Adding a label to a DSG node for selective tunnel loading
Labels help you organize your nodes into groups or categories. By specifying a label for a node, you ensure that the node is a member of the label group. As part of enabling selective tunnel loading, the same label must be set for the DSG node in a Cluster and the Tunnel configuration. This section provides information about adding a label to the DSG node.
Ensure that the following prerequisites are met:
- The DSG nodes where the labels are defined must be in the same TAC.
- The TAC must be healthy.
- Ensure that the label defined for the DSG node is same as defined in the tunnel configuration.
To add a label to a DSG node for selective tunnel reloading:
Login to the DSG CLI Manager.
Navigate to Tools > Clustering > Trusted Appliances Cluster.
On the Cluster Services screen, navigate to Node Management : Add/Remove Cluster Nodes/Information, highlight OK and press Enter.
On the Node Management screen, navigate to Update Cluster Information, highlight OK and press Enter.
On the Update Cluster Information screen, navigate to Labels, and add the following label.
;dsg_onpremise
For example, if the NFS tunnel must be deployed only for the on-premise DSG node in a cluster, then ensure that the label parameter is set to the same label, such as dsg_onpremise, set for the on-premise DSG.
Highlight OK and press Enter.
Navigate to the Cluster Services screen and select List Nodes: Show Cluster Nodes & Status to verify that the label has been created successfully.
Removing a label from a DSG node for selective tunnel loading
This section describes the steps to remove a label from the DSG node for Selective Tunnel Loading. It is recommended to be cautious before removing a label from the DSG node. By removing a label for a node, you ensure that the node is not a member of the label group. For example, if the NFS tunnel must be loaded only for the on-premise DSG node in a cluster, and a label parameter, such as dsg_onpremise, is removed for the on-premise DSG node, then the NFS tunnel will not be loaded on the on-premise DSG node in a cluster.
Ensure that the following prerequisites are met:
- The DSG nodes where the labels are removed must be in the same TAC.
- The TAC must be healthy.
To remove a label from a DSG node for selective tunnel loading:
Remove the Tunnel label from the DSG node by performing the following steps.
Login to the DSG Web UI.
Click Transport > Tunnels.
Click the edit icon to edit the tunnel.
Under Advanced Settings, remove the following key-value pair.
{"label":"dsg_onpremise"}
Remove the TAC label from the DSG node by performing the following steps.
Login to the DSG CLI Manager.
Navigate to Tools > Clustering > Trusted Appliances Cluster.
On the Cluster Services screen, navigate to Node Management : Add/Remove Cluster Nodes/Information, highlight OK and press Enter.
On the Node Management screen, navigate to Update Cluster Information, highlight OK and press Enter.
On the Update Cluster Information screen, navigate to Labels, and remove the following label.
dsg_onpremise
Adding a label to a tunnel for selective tunnel loading
As part of enabling selective tunnel reloading, the advanced settings for a tunnel configuration must be modified such that only the specific tunnel is reloaded when the matching label is found configured for a DSG node in a cluster.
Ensure that the following prerequisites are met:
The DSG nodes where the labels are defined must be in the same TAC.
The TAC must be healthy.
For more information about checking cluster health, refer to the section Monitoring tab.
Ensure that the label defined for the DSG node is same as defined in the tunnel configuration.
For more information about settings the label in the DSG node, refer to the section Adding a Label to a DSG Node for Selective Tunnel Loading.
To add a label to a Tunnel for selective tunnel reloading:
Login to the DSG Web UI.
Click Transport > Tunnels.
Click the edit icon to edit the tunnel.
Under Advanced Settings, add the following key-value pair.
{"label":"dsg_onpremise"}
For example, if the NFS tunnel must be reloaded only for the on-premise DSG node in a cluster, then ensure that the label parameter is set to the same label, such as dsg_onpremise, set for the on-premise DSG.
For more information about adding a label for a DSG node in a cluster, refer to the section Adding a Label to a DSG Node for Selective Tunnel Loading.
5.20 - User Defined Functions (UDFs)
The DSG provides built-in standard protocol CoP blocks. These blocks allow configuration-driven handling for most data security use cases. In addition, the DSG UDF capability is designed for addressing unique customer requirements that are otherwise not possible to address through configuration only. Such requirements may include extracting relevant data from proprietary application layer protocols and payload formats or altering data in some custom way.
The DSG UDF mechanism is designed for customizations and extensibility of the DSG product deployments in the field. Any UDF code is part of the customer-specific deployment and is not a part of the base DSG product delivery from Protegrity. Customers are responsible for the functionality, quality, and on-going maintenance of their DSG UDF code.
User Defined Functions
The Extraction and Transformation rules are responsible for actual data processing. Thus, they are enabled with UDF functionality.
The concept of UDFs is not new. They are prevalent in the RDBMS world as a means of inserting custom logic in database queries or stored procedures. In comparison to APIs with strict client/server semantics, UDFs allow inserting a small piece of logic into an existing execution flow. UDFs are basically call-backs, which means that they must comply with the calling program’s interface. They must not negatively affect the overall execution flow in terms of their added latency.
The DSG Extraction and Transformation UDFs are user-written pieces of logic. These must comply with the DSG Rules Engine interfaces to allow switching control and data between the main program and the UDF. However, beyond complying with the DSG’s interface, UDF writers have complete freedom in what they want to achieve within the UDF.
The following figure shows an example RuleSet tree with an Extraction and a Transformation Rule object that are defined as UDFs. In the example, the Extraction UDF performs word by word extraction of input data. The Transformation UDF toggles alphabet cases for each word passed into it.
RuleSet Tree Recursion and Generators
The DSG Rules Engine is responsible for executing the RuleSet tree. However, the actual DSG data processing behavior is an outcome of tree recursion where Rule behaviors are executed in the order laid out in the tree. Since the design of the RuleSet tree is completely configurable, this approach is referred to as Configuration-over-Programming (CoP).
Extraction rules are branch nodes responsible for mining data, whereas Transformation rules are leaf nodes responsible for manipulating data. To achieve loose coupling between Rule objects, lazy searches over data, and simplicity of programming, Extraction rules are implemented as Generators. Currently, the DSG UDFs are programmable in Python. This means that Extraction UDFs are written with the Python yield keyword. It allows Extraction UDFs to be performance efficient. At the same time, it supports an iterator interface without returning an iterator as a data structure collection. The following figure shows how an Extraction rule works as a Generator.
Transformation UDFs require a simple Python class and typically only one method to be implemented. Users implement a Python class called UserDefinedTransformation and implement a transform method in it. The transform method takes a dictionary as input (named context in the following example). This dictionary uses the following two keys:
context["input"] – Data input into the UDF
context["output"] – Data output from the UDF (transformed in some way)
The input and the output data must be in bytes.
class UserDefinedTransformation(object):
    def transform(self, context):
        input = context["input"]
        # Transform input in some way and return it in output
        output = input  # placeholder: replace with custom transformation logic
        context["output"] = output
Implementing an Extraction
Extraction UDF writers implement a Python class called UserDefinedExtraction with an extract method in it. The extract method must be implemented as a Python generator. Similar to Transformation UDFs, Extraction UDFs take a dictionary with input and output keys. In addition, Extraction UDFs use another dictionary, named item in the following example, for returning Generator output with a value key. The code listing with comments in the following snippet describes the interfaces with the calling program.
class UserDefinedExtraction(object):
    def extract(self, context):
        input = context["input"]
        output = list()
        # Extract desired pieces of data from input and
        # return/yield the extracted pieces (one by one) to the caller
        for extractedData in [input]:  # placeholder iteration: replace with custom extraction logic
            # Populate the item dict. with a value key for each extracted piece of data
            item = { "value": extractedData }
            # Yield the extracted piece of data. It is passed on to Transformation rules
            yield item
            # Transformed data is available in the item dictionary with the value key
            transformedData = item["value"]
            output.append(transformedData)
        # Transformed data is assembled back in output and returned to the caller
        context["output"] = b"".join(output)
User Defined Variables in the UDFs
The Extraction and Transformation UDFs allow users to define their own variables that are maintained throughout the scope of RuleSet execution. This is useful in passing information across different UDFs, for example, setting a variable in one UDF and retrieving it in another UDF. A specific key called cookies has been reserved in the context dictionary for this purpose.
For example, users may use the cookies key to set their own dictionary of parameters and retrieve in a UDF called subsequently.
context["cookies"] = { “customAuthCode”: authCode }
authCode = context["cookies"] [“customAuthCode”]
Passing input arguments in UDFs
The Transformation and Extraction UDF classes allow users to pass in a variable number of statically configured input arguments in their __init__() method, as shown in the following example.
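A minimal sketch of the *args interface; the argument and the uppercase/lowercase mode flag are illustrative assumptions and are not part of the DSG product.

class UserDefinedTransformation(object):
    def __init__(self, *args):
        # Statically configured input arguments from the rule definition arrive as positional arguments
        self.mode = args[0] if args else "upper"   # illustrative mode flag

    def transform(self, context):
        data = context["input"].decode()
        result = data.upper() if self.mode == "upper" else data.lower()
        context["output"] = result.encode()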
Advanced Rule Settings in UDFs
The gateway.json file includes a configuration where vulnerable methods and modules are blocked from being imported as part of the Extract and Transform UDFs. This default behavior can be overruled by setting the Rule Advanced Settings parameter. For more information, refer here.
In the following example, the source code requests to import the os module. This module is part of the default blocked modules in the gateway.json file. If, as part of the UDF rule configuration, the os module must be unblocked, then the Rule Advanced Settings parameter must be set as shown below.
Currently, blocked methods cannot be overridden using Advanced settings.
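A minimal sketch of the Rule Advanced Settings key-value pair for this case; the key name follows the override_blocked_modules parameter referenced later in this guide, while the list syntax is an assumption that should be confirmed against the Rule Advanced Settings documentation.
{"override_blocked_modules": ["os"]}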
Python code listing of Sample UDFs
This section provides a Python code listing of sample UDFs.
"""
Example custom extraction implementation. Extracts words from an
input string.
"""
class UserDefinedExtraction(object):
def __init__(self, *args):
"""
Import Python RE module and compile the RE at object creation
time.
"""
import re
self.pattern = re.compile(b"\w+")
def extract(self, context):
"""
Generator implementation. Takes an input string and splits it
into words using RE module. Words are yielded one at a time.
"""
input = context["input"]
cursor = 0
output = list()
for wordMatch in self.pattern.finditer(input):
output.append(input[cursor:wordMatch.start()])
item = { "value": wordMatch.group() }
yield item
output.append(item["value"])
cursor = wordMatch.end()
output.append(input[cursor:])
context["output"] = b"".join(output)
"""
Custom Transformation UDF: Toggles alphabet cases.
"""
class UserDefinedTransformation(object):
def transform(self, context):
output = []
for c in context["input"].decode():
if c.islower():
output.append(c.upper())
else:
output.append(c.lower())
context["output"] = "".join(output).encode()
Blocked Modules and Methods in UDFs
The modules and methods that are vulnerable to run a UDF can be added to the blocked_modules and blocked_methods parameters respectively in the gateway.json file.
The following snippet shows how to add the vulnerable modules and methods to the gateway.json file.
"globalUDFSettings": {
"blocked_methods": [
"eval",
"exec",
"dir",
"__import__",
"memoryview"
],
"blocked_modules": [
"pip",
"install",
"commands",
"subprocess",
"popen2",
"sys",
"os",
"platform",
"signal",
"asyncio"
]
}
When you reimage to the DSG v3.2.0.0, the blocked modules and methods are not part of the gateway.json file; instead, allowed modules and methods are listed. The blocked modules and methods are still supported, but it is recommended to use the allowed list approach.
Allowed Modules and Methods in UDF
The modules and methods that are safe to run in a UDF are added to the allowed list. All other modules and methods that are not on the list are blocked.
These configurations are added to the globalUDFSettings parameter in the gateway.json file. By default, the following modules and methods are allowed in the gateway.json file.
"globalUDFSettings" : {
"allowed_modules":["bs4", "common.logger", "re", "gzip", "fromstring", "cStringIO","struct", "traceback"] ,
"allowed_methods" : ["BeautifulSoup", "find_all", "fromstring", "format_exc", "list", "dict", "str", "warning"]
}
If the source code in the UDF rule uses any other modules or methods, it is necessary to add them to the allowed list. If you want to allow any vulnerable modules or methods, then it is recommended to use the Rule Advanced Settings option instead.
The blocked and allowed lists are mutually exclusive. If methods or modules are listed in both the blocked and allowed list parameters, then the following error appears in the gateway.log file:
allowed module/methods '<module/method name>' cannot be used with blocked module/method
5.21 - API for exporting the CoP
The CoP technology enables a CoP Administrator to create a set of rules. These rules instruct the gateway on how to process data that traverses it. Using an API, you can export the CoP from the ESA to the local system. The CoP API exports the basic configurations from the ESA's config directory.
Using the CoP API, the following directories and files can be retrieved from the config directory:
- Directories
- Resources
- Tunnels
- Services
- Log
- Files
- gateway.json
- features.json
After the required data is retrieved, the CoP API will create a CoP package. This package is encrypted with the PKCS5 standard and is Base64 encoded.
Supported Authentication Methods for CoP API
The following authentication methods can be used to establish a connection with the ESA or DSG:
- Basic authentication with the ESA or DSG user name and password.
- Client Certificate-based authentication.
- Token-based authentication.
API for Exporting the CoP Configurations
DSG offers two versions of the CoP API. The following are the two base URLs:
- https://{IP address of ESA}/3.3.0.0.{build number}/1/exportGatewayConfigFile: This is the existing version of the API.
- https://{IP address of ESA}/3.3.0.0.{build number}/1/v1/exportGatewayConfigFiles: This API is introduced from version 3.3.0.0. It uses a stronger cipher and key.
This API request exports the CoP package that can only be used with DSG containers.
Exporting CoP using the Existing version
The following sections describe the API for version 1.
Base URL
https://{IP address of ESA}/3.3.0.0.{build number}/1/exportGatewayConfigFile
Method
POST
Curl request syntax
Export API- PKCS5
```
curl --location --request POST 'https://<IP address of ESA>/3.3.0.0.{build number}/1/exportGatewayConfigFile' \
--header 'Authorization: Basic <base64 encoded admin credentials>' \
--header 'Content-Type: text/plain' \
--data-raw '{
"nodeGroup": "<node group name>",
"kek":{
"pkcs5":{
"passphrase": "<passphrase>",
"salt": "<salt>"
}}
}' -k -o <Response file name in .json format>
```
Authentication credentials
- username: Basic authentication user name
- password: Basic authentication password.
As per the requirement, Client Certificate-based authentication or Token-based authentication can also be used.
Request body elements
- nodeGroup name and version: Set the node group name and version for which the configurations should be exported. In the request body element, the user can specify the node group name and version or only the node group name. If the node group name and version parameter is removed from the request body elements, then the active configurations are provided.
Encryption Method
The pkcs5 encryption is used to protect the exported file. In this release, the publicKey encryption is not supported.
Encryption Method | Sub-elements | Description |
---|---|---|
pkcs5 | salt | Set the salt that will be used to encrypt the exported CoP. |
pkcs5 | passphrase | Set the passphrase that will be used to encrypt the exported CoP. The passphrase must be at least ten characters long and must contain at least one uppercase letter, one numeric value, and one special character. |
Exporting CoP using the new version
The following sections describe the API for the new version.
Base URL
https://{IP address of ESA}/3.3.0.0.{build number}/1/v1/exportGatewayConfigFiles
Method
POST
Curl request syntax
Export API- PKCS5
```
curl --location --request POST 'https://{IP address of ESA}/3.3.0.0.{build number}/1/v1/exportGatewayConfigFiles' \
--header 'Authorization: Basic <base64 encoded admin credentials>' \
--header 'Content-Type: text/plain' \
--data-raw '{
"nodeGroup": "<node group name>",
"kek":{
"pkcs5":{
"passphrase": "<passphrase>",
"salt": "<salt>"
}}
}' -k -o <Response file name in .json format>
```
Authentication credentials
- username: Basic authentication user name
- password: Basic authentication password.
As per the requirement, Client Certificate-based authentication or Token-based authentication can also be used.
Request body elements
- nodeGroup name and version: Set the node group name and version for which the configurations should be exported. In the request body element, the user can specify the node group name and version or only the node group name. If the node group name and version parameter is removed from the request body elements, then the active configurations are provided.
Encryption Method
The pkcs5 encryption is used to protect the exported file. In this release, the publicKey encryption is not supported.
Encryption Method | Sub-elements | Description |
---|---|---|
pkcs5 | salt | Set the salt that will be used to encrypt the exported CoP. |
pkcs5 | passphrase | Set the passphrase that will be used to encrypt the exported CoP. The passphrase must be at least ten characters long and must contain at least one uppercase letter, one numeric value, and one special character. |
Scenarios
The CoP package can be exported for the following three scenarios.
Scenario 1
Exporting the active CoP configuration.
CURL request
curl --location --request POST 'https://{IP address of ESA}/3.3.0.0.{build number}/1/v1/exportGatewayConfigFiles' \
--header 'Authorization: Basic YWRtaW46YWRtaW4xMjM0' \
--header 'Content-Type: text/plain' \
--header 'Cookie: SameSite=Strict' \
--data-raw '{
"kek":{
"pkcs5":{
"passphrase": "xxxxxxxxxx",
"salt": "salt"
}
}
}'
-k -o cop_demo.json
CURL response
{
"cop_version": "1.0.0.0.1",
"timestamp": 1638782764,
"cop": {
"dsg_version": "3.1.0.0.x",
"copPackage":"<Contents of encrypted CoP package in base64 format>"
}
}
Scenario 2
Exporting the CoP configuration for lob1 node group.
CURL request
curl --location --request POST 'https://{IP address of ESA}/3.3.0.0.{build number}/1/v1/exportGatewayConfigFiles' \
--header 'Authorization: Basic YWRtaW46YWRtaW4xMjM0' \
--header 'Content-Type: text/plain' \
--header 'Cookie: SameSite=Strict' \
--data-raw '{
"nodeGroup": "lob1",
"kek":{
"pkcs5":{
"passphrase": "xxxxxxxxxx",
"salt": "salt"
}
}
}'
-k -o cop_demo.json
CURL response
{
"nodeGroup": "lob1",
"cop_version": "1.0.0.0.1",
"timestamp": 1638796249,
"cop": {
"dsg_version": "3.1.0.0.x",
"copPackage":"<Contents of encrypted CoP package in base64 format>"
}
}
Scenario 3
Exporting the CoP configuration for a particular configuration version of lob1 node group.
CURL request
curl --location --request POST 'https://{IP address of ESA}/3.3.0.0.{build number}/1/v1/exportGatewayConfigFiles' \
--header 'Authorization: Basic YWRtaW46YWRtaW4xMjM0' \
--header 'Content-Type: text/plain' \
--header 'Cookie: SameSite=Strict' \
--data-raw '{
"nodeGroup": "lob1:2021_11_24_03_24_36",
"kek":{
"pkcs5":{
"passphrase": "xxxxxxxxxx",
"salt": "salt"
}
}
}'
-k -o cop_demo.json
CURL response
{
"nodeGroup": "lob1",
"cop_version": "1.0.0.0.1",
"timestamp": 1638796804,
"cop": {
"dsg_version": "3.1.0.0.x",
"copPackage":"<Contents of encrypted CoP package in base64 format>"
}
}
5.22 - Best Practices
Naming convention
It is recommended to keep the names of rules short but descriptive. Using the description field of a rule is highly recommended for the benefit of whoever might be maintaining it in the future. Rule names may appear in logs; therefore, choosing an appropriate name makes a rule easier to find in the Ruleset tree.
Keeping the same host names, addresses, and data element names across environments, where possible, makes the configuration environment agnostic. Keeping the environment configurations in sync is common for testing and diagnosing. This can be done by backing up, exporting, and restoring the configuration of one environment to another. The fewer changes needed after import and restore, the better.
Learn mode
Learn mode is useful for studying the payload of an application for creating the appropriate data protection transformation rules, diagnosing a problem, or analyzing performance. However, it impacts performance due to the disk I/O involved in the process, and it occupies disk space that otherwise would not have been used. Some regulations may require certain handling of such log information in a production system, which the DSG may not comply with. It is therefore highly recommended to keep Learn mode disabled for the services used in a production system.
Note: The rules displayed in the Learn mode screen and the payload message difference are stored in a separate file in DSG. The default file size limit for this file is 3MB. If the payloads and rules in your configuration are high in volume, you can configure this file size.
In the ESA CLI Manager, navigate to Administration > OS Console. Edit the webuiConfig.json file in the /opt/protegrity/alliance/config/webinterface
directory to add the Learn mode file size parameter as follows:
{
"learnMode": {
"MAX_WORKER_THREADS": 5,
"LEARNMODE_FLOW_DUMP_FILESIZE": 10
}
}
Using prefix and suffix
The optional use of a prefix and suffix to “mark” protected data makes the Ruleset more generic, optimized, and resilient to potential modifications. This is because the unprotection set of rules targets a payload type rather than a specific message. This means that the rule remains useful for any new message that the application may be extended to, as long as it uses the same payload type the system is configured to manage.
To make the use of a prefix and suffix efficient, apply the data protection on a word-by-word basis rather than on a complete sentence, as illustrated in the sketch below. The reason behind this recommendation is that applications may use spaces to break down sentences into words and index each word separately. Using a prefix and suffix on each word individually maintains its value should the application do so in the backend.
By using different sequences for different classes, the prefix and suffix may also be used to distinguish between different types of protected data classification.
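A minimal sketch of the word-by-word reasoning; the protect() placeholder and the bracket markers are hypothetical and do not represent DSG functions or actual token formats.

def protect(word):
    # Hypothetical stand-in for the actual protect operation
    return word[::-1]

PREFIX, SUFFIX = "[[", "]]"   # hypothetical markers

sentence = "John Smith London"
# Protecting word by word keeps each marked token intact even if the consuming
# application later splits the sentence on spaces and indexes each word separately.
protected = " ".join(PREFIX + protect(word) + SUFFIX for word in sentence.split(" "))
print(protected)  # [[nhoJ]] [[htimS]] [[nodnoL]]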
Profile reference
Profile references can be used to point at a specific profile as if the rules in the profile existed in place of the reference. A single copy of repeatedly used rules hierarchy can then be maintained and referenced from other profiles.
For example, creating a generic rule to protect an SSN in a certain way might be useful in more than one place. It is recommended to create a separate profile and reference it from where it is needed. That way should protection of SSN ever change in the future, adjustment will be required in one place only.
Note that references can be made across services. One may choose to create a service which is not associated with any tunnel (dormant) and use it as a container for profiles referenced from other services.
Modifying out-of-the-box profiles and rules
DSG does not block users from updating any of the rules provided out-of-the-box. Customers are however encouraged to avoid such changes in profiles used in production environments. Updates that Protegrity may release for these out-of-the-box profiles may conflict with customer modifications, and customer modifications in this case may be overwritten. It is recommended to make a copy of the profile and disable the out-of-the-box profile/branch as a means to avoid such conflicts.
Defining services
Services are fundamental containers that further define the rules used for data security operations. When designing DSG implementations, you must ensure that only one service type per protocol-hostname combination is created. You can further divide all applicable rules under this service in the form of profiles.
If you create multiple services per protocol-hostname combination, DSG might never process the rules under the second service.
For example, consider a situation where you want certain payloads, such as text, to use tokenization data elements as the chosen form of data security, and other payloads, such as CSV, to use encryption data elements. You create Service A to use the tokenization data elements and Service B to use the encryption data elements. In the given scenario, if both services are enabled, DSG will execute the first service, Service A.
To avoid this situation, the correct approach would be creating a Profile A that includes subsequent rules to use tokenization data elements, and Profile B that includes rules to use encryption data elements.
Caution: It is recommended that when creating Rulesets for Dynamic CoP, the Profile Reference rule is used for data transformation instead of the Transform rule. The security benefits of using Profile Reference rule are higher than the Transform rule since the requests can be triggered out of the secure network perimeter of an organization.
Default certificates and keys
The DSG generates default admin certificates and keys that are required for internal communication between DSG nodes and ESA.
If any certificates or keys in the DSG expire, then you must note the following points before regenerating them.
It is recommended that you do not use any default certificates in production environments. Ensure that you use custom admin certificates and keys and upload them to the DSG using the ESA Web UI.
For any other non-production environments, if you have used default admin certificates and keys, you must generate them from any machine other than the DSG.
The admin tunnel, which is used by DSG for internal communication between appliances (ESA and DSG), requires the .pem files. Ensure that for default DSG certificates, the .pem files are also generated along with the .crt and .key files.
Note: Ensure that the common name is set to protegrityClient in the openssl command to generate the default certificates.
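The following is a minimal sketch of generating a self-signed certificate, key, and .pem file with the required common name; the key size, validity period, and file names are assumptions and must be aligned with the names defined in the gateway.json file as described below.
```
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=protegrityClient" \
    -keyout admin_client.key -out admin_client.crt

# Concatenate the certificate and key to produce the .pem file
cat admin_client.crt admin_client.key > admin_client.pem
```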
Ensure that after you generate the default certificates and keys, they are renamed as defined in the gateway.json file. You can access the gateway.json file from the ESA Web UI, by navigating to Settings > System. The following image highlights the names of the certificates, keys, and commonName settings in the gateway.json file.
If you have used custom certificates for the admin certificates and keys, then you must ensure that the common name that was used to generate the .csr file is set in the gateway.json file. You must edit the gateway.json file to change the commonName parameter as required.
Migration of Data Tokenized by DSG
If you want to migrate any data tokenized in the DSG, then the following point must be considered. The DSG only generates UTF-8 encoded tokens. If the tokens generated by the DSG need to be used in other ecosystems that are configured for an encoding other than UTF-8, a translation to the encoding of the target ecosystem is required.
5.23 - Known Limitations
Protegrity data protection
Data element configuration: During runtime, Protegrity Data Protection action rules validate the input against restrictions imposed by the Data Element configuration. For example, a Data Element may be configured to handle a certain type or format of data (for example, date, textual, or binary data). Input that does not match these restrictions results in an error.
Input length: Input length restrictions are subject to the Data Element configuration as well. Tokenization of alphanumeric input is limited to ~4 KB, while encryption is limited to ~2 GB. More information can be found in the ESA and policy management documentation.
Null values: Payload structures such as JSON and XML can reference empty and null values. Extraction and transformation of null or empty values are not currently supported.
Hardware sizing
When calculating memory sizing for a DSG node, you must take into consideration the maximum payload size expected to be handled. To determine the minimum RAM required, the maximum payload size should be multiplied by the number of rules and the number of CPU cores, times two. For example, a 32-core CPU machine with 128 GB of RAM can handle execution of up to 25 rules back to back to process a 20 MB payload message for up to 200 concurrent connections.
Max Payload = (Total RAM - (3 GB + (500 MB * CPU Cores))) / (Concurrent Users * Rules Count)
where,
- 3 GB represents base system operation, including the OS and its supporting services.
- 500 MB represents a worker process, which is executed for each available CPU core.
- Concurrent Users represents the maximum number of concurrent connections.
- Rules Count represents the number of rules that will be engaged during runtime to process the payload.
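As a rough worked check of the formula, using the values from the example above (a sketch only; it does not replace proper capacity planning):

total_ram_mb = 128 * 1024        # 128 GB of RAM expressed in MB
cpu_cores = 32
concurrent_users = 200
rules_count = 25

# Max Payload = (Total RAM - (3 GB + (500 MB * CPU Cores))) / (Concurrent Users * Rules Count)
max_payload_mb = (total_ram_mb - (3 * 1024 + 500 * cpu_cores)) / (concurrent_users * rules_count)
print(round(max_payload_mb, 1))  # 22.4, consistent with the ~20 MB payload in the example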
An ideal configuration, where only warnings and errors are logged, should be well within the minimum hardware requirement of 320 GB of disk space. This however may not be enough for certain diagnostics scenarios where Learn mode log files keep a copy of every rule's input/output payload. Learn mode shuts off automatically should the free disk space fall below the minimum threshold of 1 GB.
Network
- The DSG uses software-based SSL termination. The cost is approximately 10% CPU overhead relative to using a clear communication channel.
- SFTP commands in DSG also have some limitations.
- Commands such as chgrp and chown are not yet supported through the gateway.
- A warning log is generated for every outbound SFTP connection. This is because the outbound host key trust/caching list is not yet persistent.
- SFTP session negotiation is expected to be initiated within 10 seconds. Client applications that open an SFTP connection but delay the session negotiation process may suffer connection termination due to timeout. This timeout is not yet configurable.
Ruleset engine
For the XML payload extractor, the order of XML tag attributes may change. The CDATA tag and closing tags may be optimized by the internal libxml library used to parse the XML payload. Thus, the output XML may be structured differently compared to the input it is sourced from.
5.24 - Migrate UDFs to Python 3
In DSG, UDFs are scripted through various codecs. These codecs are scripted in Python that supports the extraction and transformation rules. In versions prior to the DSG 3.0.0.0, UDFs were scripted in Python version 2. However, from 3.0.0.0, Python is upgraded to version 3. While upgrading DSG to version 3.0.0.0 or beyond, scripts coded in Python version 2 must be migrated to Python version 3. This ensures that the UDFs are compatible with the latest version of DSG and run correctly.
Perform the following steps to migrate the UDFs to Python 3.
- Extract Python scripts from any UDFs in older DSG versions.
- Using a tool such as 2to3, convert the exported UDFs written in Python 2 to Python 3 (see the example after this list). Refer to https://docs.python.org/3/library/2to3.html#module-lib2to3 for the detailed procedure.
- In the DSG 3.0.0.0, import the converted Python scripts in the UDF codec.
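For example, assuming an exported UDF script saved as udf_script.py (a hypothetical file name), the standard 2to3 tool can rewrite it in place; the -w option writes the converted code back to the file and keeps a .bak backup.
```
2to3 -w udf_script.py
```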
5.25 - Additional configurations in gateway.json
The Global Settings tab provides information about the different configurations that, when set, are enforced across all the DSG nodes. In addition to the settings that are part of the Global Settings tab, the gateway.json file also includes additional settings.
The gateway.json file includes configurations such as setting the log levels, enabling learn mode, and so on. A sample configuration is illustrated below:
{
"log": {
"logLevel": "Warning",
"logFacility": [
{
"enabled": false,
"facilityName": "Tunnel",
"logLevel": "Information"
},
{
"enabled": false,
"facilityName": "DiskBuffer",
"logLevel": "Warning"
},
{
"enabled": false,
"facilityName": "Admin",
"logLevel": "Warning"
},
{
"enabled": false,
"facilityName": "RuleSet",
"logLevel": "Verbose"
},
{
"enabled": false,
"facilityName": "Service",
"logLevel": "Warning"
}
]
},
"mountManager": {
"enabled": true,
"interval" : "*/3 * * * *"
},
"admin": {
"listenAddress": "ethMNG",
"listenPort": 8585,
"certificateFilename": "admin.pem",
"certificateKeyFilename": "admin.key",
"ciphers": "ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS!SSLv2:!SSLv3!TLSv1!TLSv1.1",
"clientCACertificateFileName" : "ca.pem",
"clientCertificateFileName" : "admin_client.pem",
"clientCertificateKeyFileName" : "admin_client.key",
"commonName" : "protegrityClient",
"ssl_options":"{\"cert_reqs\":\"CERT_REQUIRED\"}"
},
"learnModeDefault": {
"enabled": false,
"excludeResource": "\\.(css|png|gif|jpg|ico|woff|ttf|svg|eot)(\\?|\\b)",
"excludeContentType": "\\bcss|image|video|svg\\b",
"freeDiskSpaceThreashold": 1024000000
},
"globalUDFSettings" : {
"allowed_modules":["bs4", "common.logger", "re", "gzip", "fromstring", "cStringIO","struct", "traceback"] ,
"allowed_methods" : ["BeautifulSoup", "find_all", "fromstring", "format_exc", "list", "dict", "str", "warning"]
},
"globalProtocolStackSettings": {
"http": {
"max_clients": 100,
"connection_cache_ttl": -1,
"max_body_size": 4194304,
"max_streaming_body_size": 52428800,
"include_hostname_in_header": true
}
},
"longRunningRoutinesTracing": {
"enabled": false,
"timeout": 20
},
"pdf_codec_default_font":{
"name": "OpenSans-Regular.ttf"
},
"stats" :{
"enabled" : true
}
}
It is recommended that any settings that must be changed are edited on the ESA and then pushed to the DSG nodes in the cluster. To access the gateway.json file, on the ESA Web UI, navigate to Settings > System.
log
The snippet for ’log’ is as follows:
"log": {
"logLevel": "Warning",
"logFacility": [
{
"enabled": false,
"facilityName": "Tunnel",
"logLevel": "Information"
},
{
"enabled": false,
"facilityName": "DiskBuffer",
"logLevel": "Warning"
},
{
"enabled": false,
"facilityName": "Admin",
"logLevel": "Warning"
},
{
"enabled": false,
"facilityName": "RuleSet",
"logLevel": "Verbose"
},
{
"enabled": false,
"facilityName": "Service",
"logLevel": "Warning"
}
]
},
Settings to control logging level. The following configurations are available for the log configuration:
logLevel: Set the logging level. The available logging levels are as follows:
- Warning (default)
- Info
- Debug
- Verbose
logFacility: Set the logging level for the following modules:
- Ruleset
- Services
- Tunnel
- DiskBuffer
- Admin
checkErrorLogAfterCount: Decides the trimming factor that is a part of the error metrics. You can set this value in the range of -1 to 1000.
- If the value set is greater than -1 and the log size of the error metrics is greater than 4k, then the error_metrics are trimmed in such a way that all the parameters are displayed accurately and only the row number information is trimmed.
- If the log size does not exceed 4k, then the error metrics are displayed as is.
- If the value is set to -1 and the log size of the error metrics is greater than 4k, then all the characters after the 4k limit are trimmed from the log file.
- If the logs are not repetitive, additional rows are reported in separate logs. This parameter is not present in the gateway.json file. Add the checkErrorLogAfterCount parameter to enable this feature, as shown in the snippet after this list.
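A minimal sketch of adding the parameter; its placement directly under the log section and the value 100 are assumptions, so confirm the location against your gateway.json before applying (the logFacility entries are omitted here for brevity).

"log": {
    "logLevel": "Warning",
    "checkErrorLogAfterCount": 100
},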
mountManager
Settings related to NFS mounts. The following configurations are available for the mountManager configuration:
enabled: Enable or disable mount management.
interval: Time in seconds when the DSG node polls the NFS shares for pulling files. A cron job expression can also be specified to schedule jobs. If you use the cron job expression "* * * * *", then the DSG polls the NFS shares at the minimum interval of one minute.
admin
Settings related to admin tunnel are listed. DSG uses this tunnel for internal communication with ESA and other DSG nodes.
listenAddress: Listening interface name, typically ethMNG.
listenPort: Port on which the interface listens.
certificateFilename: Admin tunnel certificate file name with the .pem extension. The default certificates and keys are set after the DSG is installed.
certificateKeyFilename: Admin tunnel key file name with the .key extension.
ciphers: Colon-separated list of ciphers.
clientCACertificateFilename: Admin tunnel CA certificate filename with the .pem extension.
clientCertificateFilename: Admin tunnel client certificate filename with the .pem extension.
clientCertificateKeyFilename: Admin tunnel Client key file name with the .key extension.
commonName: Common name as defined while creating the admin tunnel client certificates.
ssl_options: Set the SSL options to be enforced. For a secure communication between DSG and ESA, it is recommended not to modify this option. Default value is "cert_reqs":"CERT_REQUIRED".
learnModeDefault
Settings for the Learn Mode.
enabled: Enable or disable Learn Mode on the DSG node. Default value is true.
excludeResource: Values in the field are excluded from the Learn Mode logging. Default value is \.(css|png|gif|jpg|ico|woff|ttf|svg|eot)(\?|\b).
excludeContentType: Content type specified in the field is excluded from the Learn Mode logging. Default value is \bcss|image|video|svg\b.
freeDiskSpaceThreshold: Minimum free disk space required so that the Learn Mode feature remains enabled. The feature is automatically disabled if free disk space falls below this threshold; if the feature is disabled this way, you must enable it again manually. Default value is 1024000000.
globalUDFSettings
Settings that apply to any rules defined with custom UDF implementation for a DSG node.
allowed_modules: List of modules that can be used in the UDF. Default value is bs4, common.logger, re, gzip, fromstring, cStringIO, struct, traceback.
allowed_methods: List of methods that can be used in the UDF. Default value is BeautifulSoup, find_all, fromstring, format_exc, list, dict, str, warning.
globalProtocolStackSettings (http)
Settings for incoming HTTP requests management.
max_clients: Set the maximum number of concurrent outbound connections every gateway process can establish with each host. Default value is 100.
include_hostname_in_header: By default, the hostname is visible in the response header. Set the parameter to false to remove the hostname from the response header. Default value is true.
connection_cache_ttl: Timeout value up to which an HTTP request connection persists. The following values can be set:
- -1: Set to enable caching (default).
- 0: Set to disable caching.
- Any other value: Set a timeout value in seconds.
max_body_size: Maximum bytes for the HTTP request body. The datatype for this option is bytes. Default value is 4194304.
max_streaming_body_size: Maximum bytes for the HTTP request body when REST with streaming is enabled. The datatype for this option is bytes. Default value is 52428800.
longRunningRoutinesTracing
enabled: Enable or disable tracing. Default value is false.
timeout: Define the value in seconds after which a stack trace is logged for routines that do not complete within the given timeout interval. Default value is 20.
pdf_codec_default_font
- name: Set the default font file to process the PDF file under the Enhanced Adobe PDF codec extract rule. Default value is OpenSans-Regular.ttf.
stats
- enabled: Enable or disable the usage metrics. Default value is true.
5.26 - Auditing and logging
For every configuration change that occurs on the DSG, such as the creation of tunnels and Rulesets, deploying of the configuration, and so on, the DSG generates an audit log. Though most of the logs are forwarded to the ESA and are visible on Forensics, some DSG logs serve a specific purpose and are available only on the individual DSG nodes.
Discover
The log management mechanism for Protegrity products forwards the logs to Insight on the ESA. Insight stores the logs in the Audit Store. The following services forward the logs to Insight:
- td-agent service: forwards the appliance logs to Insight on the ESA.
- Log Forwarder service: forwards the data security operation-related logs, namely protect, unprotect, and reprotect, and the PEP server logs to Insight on the ESA.
Note: Before you can access Discover, you must configure the DSG to forward logs to Insight on the ESA. You must also verify that the td-agent and the Log Forwarder services are running on the DSG. To verify the service status, navigate to System > Services on the DSG Web UI.
For more information about configuring the DSG to forward appliance logs to the ESA, refer to Forwarding Appliance Logs to Insight.
For more information about configuring Log Forwarder to forward the audit logs, refer to Forwarding Audit Logs to Insight.
Note: The Log Forwarder configuration can be modified in the pepserver.cfg file. If the Log Forwarder mode in the pepserver.cfg file is modified to error mode or drop mode, then the Pepserver service and the Cloud Gateway service must be restarted.
To restart the services, login to the DSG Web UI, navigate to System > Services, restart the Pepserver service, and then restart the Cloud Gateway service.
For more information about logging configuration in the pepserver.cfg file, refer to the section PEP Server Configuration File in the Protegrity Installation Guide 9.1.0.0.
To access the Discover logs, on the ESA Web UI, navigate to Audit Store > Dashboard > Discover. The Discover page displays audit logs for the following events:
- Tunnel creation, deletion, and updates
- Ruleset creation, deletion, and updates
- Certificate upload, deletion, and downloads
- Key upload
- Deploying the DSG configurations
- Configuration changes made in the Global Settings tab
- Data security operations, such as, protect, unprotect, and re-protect
- System logs
- PEP server logs
Log Viewer
The Log Viewer displays the aggregation of the gateway logs for all the DSG nodes in the cluster. To access the Log Viewer file, on the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Log Viewer.
Important: The gateway logs are not forwarded to Insight.
The Log Viewer file details log messages that can be used to analyze the following events:
- UDF-related compilation errors
- Transaction metrics logs
- Stack traces to debug exceptions
Note: The
gateway.log
file can also be forwarded to any log forwarding system, such as, a Security Information and Event Management (SIEM) system or AWS CloudWatch utility.
For more information about log forwarding, refer to Forwarding logs to SIEM systems.
Audit log representation
The DSG has the Log Forwarder service that forwards the logs related to the data security operations, namely protect, unprotect, and reprotect, and the PEP server logs to Insight on the ESA. The logs generated from the DSG are collected and forwarded to Insight. Insight stores the logs in the Audit Store.
The Audit Store holds the logs, and these log records are used in various areas, such as Insight, forensics, alerts, reports, dashboards, and so on. Insight is the component that provides the interface for viewing the data from the Audit Store. When data security operations are performed to protect the sensitive data, an aggregated audit log is generated and displayed on the Discover page in the Audit Store Dashboards.
Before you begin:
Ensure that the Analytics component is initialized on the ESA Web UI. On the ESA Web UI, you can access the logs on the Discover page only after initializing the Analytics component.
For more information about initializing the Analytics component, refer to Initializing the Audit Store Cluster on the ESA.
To understand auditing and logging for the DSG, consider the following example that will be processed using the CSV codec to extract the sensitive data.
firstname,lastname,city,country
John,Smith,London,Uk
Adam,Martin,London,Uk
Jae,Crowder,London,Uk
John,Holland,Bern,Switzerland
Marcus,Smart,Paris,France
Johnson,Mickey,Ottawa,Canada
For more information about extracting the CSV payload, refer to the section CSV Payload.
The CSV extract rule is defined to process all the rows and columns. When a request is sent, the DSG processes the request and the data is protected.
firstname,lastname,city,country
5HnMc6vZ,G8bcRG7J1X,SQSsyxEgBKw,ATJuBh
CMgcxkSL,dlyfZKMIt5H,SQSsyxEgBKw,ATJuBh
Iqj0jjq,RgbFVD6GnOjT,SQSsyxEgBKw,ATJuBh
5HnMc6vZ,SQtul5Lqymz0,1dC18Ciy,jTFgvSyjjROCx9QZOw
6Tz3mgUy3aD,pqDuxmLouR,49HA83v7PO,Jb3kzS8gcyk
4iILZXVL06xs,nXhtMyK6vx8,TiRDIPY1Ik5,Elc5GhObzFF
After the protection operation is completed, a log is generated on the Forensics page on the ESA Web UI. An aggregated log is generated for all the protect operations performed by the CSV codec. In versions prior to the DSG 2.6.0.0, 24 different audit records were generated for each protect operation. Logging is now enhanced on the DSG and a single log entry with the count 24 is generated for the example. A log with the count is only displayed when the protect operation is completed successfully. In case of failure, the individual audit records will be displayed on the Forensics page on the ESA Web UI.
The following figure illustrates the audit log representation for the protect operation.
5.27 - Verifying UDF Rules for blocked modules and methods
If you are using UDFs in Rule definitions, then it is important to verify whether you are using any of the blocked modules and methods. The introduction of blocking is a security best practice that restricts UDF code instructions to use safe modules and methods.
After installing the DSG, ensure that you note the following points:
- Verify if any of the following blocked modules and methods are defined in the Source Code option in the UDF rules:
- blocked_modules: pip, install, commands, subprocess, popen2, sys, os, platform, signal, asyncio
- blocked_methods: eval, exec, dir, __import__, memoryview
- If any of the blocked modules or methods are defined in the Source Code option in the UDF rules, then use either of the following options:
Option 1: Remove the module/method from the gateway.json file.
Note: By removing blocked modules and methods, you risk introducing security risks to the DSG system should any UDF code misuse these otherwise blocked modules or methods.
Option 2: Edit the UDF rule to override the blocked module using the override_blocked_modules parameter.
Note: By overriding blocked modules, you risk introducing security risks to the DSG system should any UDF code misuse these otherwise blocked modules.
5.28 - Managing PEP server configuration file
Login to the DSG Web UI.
Navigate to Settings > System.
Under the Files tab, download the pepserver.cfg file.
Repeat step 1 to step 3 on each DSG node in the cluster.
5.29 - OpenSSL Curve Names, Algorithms, and Options
Curve Name | Description |
---|---|
secp112r1 | SECG/WTLS curve over a 112-bit prime field |
secp112r2 | SECG curve over a 112-bit prime field |
secp128r1 | SECG curve over a 128-bit prime field |
secp128r2 | SECG curve over a 128-bit prime field |
secp160k1 | SECG curve over a 160-bit prime field |
secp160r1 | SECG curve over a 160-bit prime field |
secp160r2 | SECG/WTLS curve over a 160-bit prime field |
secp192k1 | SECG curve over a 192-bit prime field |
secp224k1 | SECG curve over a 224-bit prime field |
secp224r1 | NIST/SECG curve over a 224-bit prime field |
secp256k1 | SECG curve over a 256-bit prime field |
secp384r1 | NIST/SECG curve over a 384-bit prime field |
secp521r1 | NIST/SECG curve over a 521-bit prime field |
prime192v1 | NIST/X9.62/SECG curve over a 192-bit prime field |
prime192v2 | X9.62 curve over a 192-bit prime field |
prime192v3 | X9.62 curve over a 192-bit prime field |
prime239v1 | X9.62 curve over a 239-bit prime field |
prime239v2 | X9.62 curve over a 239-bit prime field |
prime239v3 | X9.62 curve over a 239-bit prime field |
prime256v1 | X9.62/SECG curve over a 256-bit prime field |
sect113r1 | SECG curve over a 113-bit binary field |
sect113r2 | SECG curve over a 113-bit binary field |
sect131r1 | SECG/WTLS curve over a 131-bit binary field |
sect131r2 | SECG curve over a 131-bit binary field |
sect163k1 | NIST/SECG/WTLS curve over a 163-bit binary field |
sect163r1 | SECG curve over a 163-bit binary field |
sect163r2 | NIST/SECG curve over a 163-bit binary field |
sect193r1 | SECG curve over a 193-bit binary field |
sect193r2 | SECG curve over a 193-bit binary field |
sect233k1 | NIST/SECG/WTLS curve over a 233-bit binary field |
sect233r1 | NIST/SECG/WTLS curve over a 233-bit binary field |
sect239k1 | SECG curve over a 239-bit binary field |
sect283k1 | NIST/SECG curve over a 283-bit binary field |
sect283r1 | NIST/SECG curve over a 283-bit binary field |
sect409k1 | NIST/SECG curve over a 409-bit binary field |
sect409r1 | NIST/SECG curve over a 409-bit binary field |
sect571k1 | NIST/SECG curve over a 571-bit binary field |
sect571r1 | NIST/SECG curve over a 571-bit binary field |
c2pnb163v1 | X9.62 curve over a 163-bit binary field |
c2pnb163v2 | X9.62 curve over a 163-bit binary field |
c2pnb163v3 | X9.62 curve over a 163-bit binary field |
c2pnb176v1 | X9.62 curve over a 176-bit binary field |
c2tnb191v1 | X9.62 curve over a 191-bit binary field |
c2tnb191v2 | X9.62 curve over a 191-bit binary field |
c2tnb191v3 | X9.62 curve over a 191-bit binary field |
c2pnb208w1 | X9.62 curve over a 208-bit binary field |
c2tnb239v1 | X9.62 curve over a 239-bit binary field |
c2tnb239v2 | X9.62 curve over a 239-bit binary field |
c2tnb239v3 | X9.62 curve over a 239-bit binary field |
c2pnb272w1 | X9.62 curve over a 272-bit binary field |
c2pnb304w1 | X9.62 curve over a 304-bit binary field |
c2tnb359v1 | X9.62 curve over a 359-bit binary field |
c2pnb368w1 | X9.62 curve over a 368-bit binary field |
c2tnb431r1 | X9.62 curve over a 431-bit binary field |
wap-wsg-idm-ecid-wtls1 | WTLS curve over a 113-bit binary field |
wap-wsg-idm-ecid-wtls3 | NIST/SECG/WTLS curve over a 163-bit binary field |
wap-wsg-idm-ecid-wtls4 | SECG curve over a 113-bit binary field |
wap-wsg-idm-ecid-wtls5 | X9.62 curve over a 163-bit binary field |
wap-wsg-idm-ecid-wtls6 | SECG/WTLS curve over a 112-bit prime field |
wap-wsg-idm-ecid-wtls7 | SECG/WTLS curve over a 160-bit prime field |
wap-wsg-idm-ecid-wtls8 | WTLS curve over a 112-bit prime field |
wap-wsg-idm-ecid-wtls9 | WTLS curve over a 160-bit prime field |
wap-wsg-idm-ecid-wtls10 | NIST/SECG/WTLS curve over a 233-bit binary field |
wap-wsg-idm-ecid-wtls11 | NIST/SECG/WTLS curve over a 233-bit binary field |
wap-wsg-idm-ecid-wtls12 | WTLS curve over a 224-bit prime field |
Options | Description |
---|---|
OP_ALL | Enables workarounds for various bugs present in other SSL implementations. This option is set by default. It does not necessarily set the same flags as OpenSSL’s SSL_OP_ALL constant. |
OP_NO_SSLv2 | Prevents an SSLv2 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing SSLv2 as the protocol version. |
OP_NO_SSLv3 | Prevents an SSLv3 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing SSLv3 as the protocol version. |
OP_NO_TLSv1 | Prevents a TLSv1 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing TLSv1 as the protocol version. |
OP_NO_TLSv1_1 | Prevents a TLSv1.1 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing TLSv1.1 as the protocol version. Available only with openSSL version 1.0.1+. |
OP_NO_TLSv1_2 | Prevents a TLSv1.2 connection. This option is only applicable in conjunction with PROTOCOL_SSLv23. It prevents the peers from choosing TLSv1.2 as the protocol version. Available only with openSSL version 1.0.1+. |
OP_CIPHER_SERVER_PREFERENCE | Use the server’s cipher ordering preference, rather than the client’s. This option has no effect on client sockets and SSLv2 server sockets. |
OP_SINGLE_DH_USE | Prevents re-use of the same DH key for distinct SSL sessions. This improves forward secrecy but requires more computational resources. This option only applies to server sockets. |
OP_SINGLE_ECDH_USE | Prevents re-use of the same ECDH key for distinct SSL sessions. This improves forward secrecy but requires more computational resources. This option only applies to server sockets. |
OP_NO_COMPRESSION | Disable compression on the SSL channel. This is useful if the application protocol supports its own compression scheme. This option is only available with OpenSSL 1.0.0 and later |
5.30 - Configuring the DSG cluster
Configuring the DSG Cluster
Creating a cluster
On the DSG Web UI, navigate to System > Trusted Appliances Cluster.
The Join Cluster screen appears.
Click Create a new Cluster to create a DSG cluster.
The Create Cluster screen appears.
Select a preferred communication method.
Click Save to create a cluster.
Adding a Node to the Cluster
On the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster > Monitoring.
The Cluster screen appears. No nodes are added to the cluster.
Select the Actions drop down list in the Cluster Health pane.
The following options appear:
- Apply Patch on Cluster
- Apply Patch on selected Nodes
- Change Groups on Entire Cluster
- Change Groups on Selected Nodes
- Add Node
Perform the following steps to add a node.
Click Add Node.
The Add new node to cluster screen appears.
Enter the FQDN or IP address of the DSG node to be added in the cluster in the Node IP field. It is recommended to enter the FQDN of the DSG instead of the IP address.
Enter the administrator user name for the ESA node user in the Node User Name field.
Enter the administrator password for the ESA node user in the Node Password field.
Enter the node group name in the Deployment Node Group field.
Note: If the deployment node group is not specified, by default it will get assigned to the default node group.
Click Submit.
Click Refresh > Deploy or Deploy to Node Groups.
For more information about deploying the configurations to entire cluster or the node groups, refer to the section Deploying the Configurations to Entire Cluster and Deploying the Configurations to Node Groups.
The node is added to the cluster.
The following figure is the Trusted Appliances Cluster (TAC) page after adding the nodes to the cluster.
Remove a node from the cluster
If a node in the cluster is not required, perform the following steps to remove it from the cluster.
To remove a Node from the Cluster:
On the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster > Monitoring.
Click Actions next to the node you want to remove.
Click Delete. The node is removed from the cluster.
5.31 - Encoding List
The rules that use the listed encoding are as follows:
- Binary
- HTML Form Media Type
- Text
- ProtegrityDataProtection
Standard encoding list
Method | Description |
---|---|
ascii | English (646, us-ascii) |
base64 | Base64 multiline MIME conversion (the result always includes a trailing ‘\n’) |
big5 | Traditional Chinese (big5-tw, csbig5) |
big5hkscs | Traditional Chinese (big5-hkscs, hkscs) |
bz2 | Compression using bz2 |
cp037 | English (IBM037, IBM039) |
cp424 | Hebrew (EBCDIC-CP-HE, IBM424) |
cp437 | English (437, IBM437) |
cp500 | Western Europe (EBCDIC-CP-BE, EBCDIC-CP-CH, IBM500) |
cp720 | Arabic (cp720) |
cp737 | Greek (cp737) |
cp775 | Baltic languages (IBM775) |
cp850 | Western Europe (850, IBM850) |
cp852 | Central and Eastern Europe (852, IBM852) |
cp855 | Bulgarian, Byelorussian, Macedonian, Russian, Serbian (855, IBM855) |
cp856 | Hebrew (cp856) |
cp857 | Turkish (857, IBM857) |
cp858 | Western Europe (858, IBM858) |
cp860 | Portuguese (860, IBM860) |
cp861 | Icelandic (861, CP-IS, IBM861) |
cp862 | Hebrew (862, IBM862) |
cp863 | Canadian (863, IBM863) |
cp864 | Arabic (IBM864) |
cp865 | Danish, Norwegian (865, IBM865) |
cp866 | Russian (866, IBM866) |
cp869 | Greek (869, CP-GR, IBM869) |
cp874 | Thai (cp874) |
cp875 | Greek (cp875) |
cp932 | Japanese (932, ms932, mskanji, ms-kanji) |
cp949 | Korean (949, ms949, uhc) |
cp950 | Traditional Chinese (950, ms950) |
cp1006 | Urdu (cp1006) |
cp1026 | Turkish (ibm1026) |
cp1140 | Western Europe (ibm1140) |
cp1250 | Central and Eastern Europe (windows-1250) |
cp1251 | Bulgarian, Byelorussian, Macedonian, Russian, Serbian (windows-1251) |
cp1252 | Western Europe (windows-1252) |
cp1253 | Greek (windows-1253) |
cp1254 | Turkish (windows-1254) |
cp1255 | Hebrew (windows-1255) |
cp1256 | Arabic (windows-1256) |
cp1257 | Baltic languages (windows-1257) |
cp1258 | Vietnamese (windows-1258) |
euc_jp | Japanese (eucjp, ujis, u-jis) |
euc_jis_2004 | Japanese (jisx0213, eucjis2004) |
euc_jisx0213 | Japanese (eucjisx0213) |
euc_kr | Korean (euckr, korean, ksc5601, ks_c-5601, ks_c-5601-1987, ksx1001, ks_x-1001) |
gb2312 | Simplified Chinese (chinese, csiso58gb231280, euc-cn, euccn, eucgb2312-cn, gb2312-1980, gb2312-80, iso-ir-58) |
gbk | Unified Chinese (936, cp936, ms936) |
gb18030 | Unified Chinese (gb18030-2000) |
hex | Hexadecimal representation conversion (two digits per byte) |
hz | Simplified Chinese (hzgb, hz-gb, hz-gb-2312) |
iso2022_jp | Japanese (csiso2022jp, iso2022jp, iso-2022-jp) |
iso2022_jp_1 | Japanese (iso2022jp-1, iso-2022-jp-1) |
iso2022_jp_2 | Japanese, Korean, Simplified Chinese, Western Europe, Greek (iso2022jp-2, iso-2022-jp-2) |
iso2022_jp_2004 | Japanese (iso2022jp-2004, iso-2022-jp-2004) |
iso2022_jp_3 | Japanese (iso2022jp-3, iso-2022-jp-3) |
iso2022_jp_ext | Japanese (iso2022jp-ext, iso-2022-jp-ext) |
iso2022_kr | Korean (csiso2022kr, iso2022kr, iso-2022-kr) |
latin_1 | West Europe (iso-8859-1, iso8859-1, 8859, cp819, latin, latin1, L1) |
iso8859_2 | Central and Eastern Europe (iso-8859-2, latin2, L2) |
iso8859_3 | Esperanto, Maltese (iso-8859-3, latin3, L3) |
iso8859_4 | Baltic languages (iso-8859-4, latin4, L4) |
iso8859_5 | Bulgarian, Byelorussian, Macedonian, Russian, Serbian (iso-8859-5, cyrillic) |
iso8859_6 | Arabic (iso-8859-6, arabic) |
iso8859_7 | Greek (iso-8859-7, greek, greek8) |
iso8859_8 | Hebrew (iso-8859-8, hebrew) |
iso8859_9 | Turkish (iso-8859-9, latin5, L5) |
iso8859_10 | Nordic languages (iso-8859-10, latin6, L6) |
iso8859_11 | Thai languages (iso-8859-11, thai) |
iso8859_13 | Baltic languages (iso-8859-13, latin7, L7) |
iso8859_14 | Celtic languages (iso-8859-14, latin8, L8) |
iso8859_15 | Western Europe (iso-8859-15, latin9, L9) |
iso8859_16 | South-Eastern Europe (iso-8859-16, latin10, L10) |
johab | Korean (cp1361, ms1361) |
koi8_r | Russian () |
koi8_u | Ukrainian () |
mac_cyrillic | Bulgarian, Byelorussian, Macedonian, Russian, Serbian (maccyrillic) |
mac_greek | Greek (macgreek) |
mac_iceland | Icelandic (maciceland) |
mac_latin2 | Central and Eastern Europe (maclatin2, maccentraleurope) |
mac_roman | Western Europe (macroman) |
mac_turkish | Turkish (macturkish) |
ptcp154 | Kazakh (csptcp154, pt154, cp154, cyrillic-asian) |
shift_jis | Japanese (csshiftjis, shiftjis, sjis, s_jis) |
shift_jis_2004 | Japanese (shiftjis2004, sjis_2004, sjis2004) |
shift_jisx0213 | Japanese (shiftjisx0213, sjisx0213, s_jisx0213) |
utf_32 | Unicode Transformation Format (U32, utf32) |
utf_32_be | Unicode Transformation Format (big endian) |
utf_32_le | Unicode Transformation Format (little endian) |
utf_16 | Unicode Transformation Format (U16, utf16) |
utf_16_be | Unicode Transformation Format (big endian BMP only) |
utf_16_le | Unicode Transformation Format (little endian BMP only) |
utf_7 | Unicode Transformation Format (U7, unicode-1-1-utf-7) |
utf_8 | Unicode Transformation Format (U8, UTF, utf8) |
utf_8_sig | Unicode Transformation Format (with BOM signature) |
zlib | Gzip compression (zip) |
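The standard encoding names above correspond to Python codec names. As an illustration only (not a DSG interface), their behavior can be previewed locally with Python's codecs module:

import codecs

data = b"Protegrity"
print(codecs.encode(data, "hex"))     # b'50726f74656772697479'
print(codecs.encode(data, "base64"))  # b'UHJvdGVncml0eQ==\n' (trailing newline, as noted above)
print("München".encode("utf_8"))      # b'M\xc3\xbcnchen'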
External encoding list
- Base64
- HTML Encoding
- JSON Escape
- URI Encoding
- URI Encoding Plus
- XML Encoding
- Quoted Printable
- SQL Escape
Proprietary
- Base128
- Unicode
- CJK
- High ASCII
5.32 - Configuring default gateway
When you install the DSG, you can add an individual default gateway to each of the network interfaces in the same network, using either the Static IP mode or the DHCP server, to route network traffic. Mapping unique default gateways ensures that requests intended for a given IP address are routed through the default gateway assigned to that IP address.
Configure default gateway for management NIC
Configure the default gateway for the management NIC (ethMNG) using the DSG CLI manager:
On the DSG CLI Manager, navigate to Networking > Network Settings.
On the Network Configuration Information screen, press Tab to select Interfaces and press Enter.
Press Tab to select ethMNG and press Enter to add a default gateway.
Press Tab to select Gateway and press Enter.
Enter the Gateway IP address.
Press Tab to select Apply and press Enter.
Configure default gateway for service NIC
Configure the default gateway for the service NIC (ethSRV0) using the DSG CLI manager.
On the DSG CLI Manager, navigate to Networking > Network Settings.
In the Network Configuration Information screen, press Tab to select Interfaces and press Enter.
Press Tab to select ethSRV0 and press Enter to add a default gateway.
Press Tab to select Gateway and press Enter.
Enter the Gateway IP address.
Press Tab to select Apply and press Enter.
5.33 - Forward logs to Insight
After installing or upgrading to the DSG 3.3.0.0, you must configure the DSG to forward the DSG logs to the Audit Store on the ESA using the steps provided in this section.
Ensure that you have configured the Insight component on the ESA. Configuring this component allows Insight to retrieve the DSG appliance and audit logs.
For more information about Insight, refer to the section Components of Insight.
Forwarding appliance logs to Insight
The appliance logs (syslog), transaction metrics, error metrics, and usage metrics are forwarded through the td-agent service to Insight on the ESA.
To forward appliance logs to Insight:
Login to the DSG CLI Manager.
Navigate to Tools > PLUG - Forward logs to Audit Store.
Enter the password of the DSG root user and select OK.
Enter the username and password of the DSG administrator user and select OK.
Select OK.
Enter the IP address for the ESA and select OK. You can specify multiple IP addresses separated by commas.
Enter y to fetch certificates and select OK.
These certificates are used to validate and connect to the target node. They are required to authenticate with the Audit Store while forwarding logs to the target node.
If the certificates already exist on the system, then specify n in this screen.
Enter the username and password of the ESA administrator user and select OK.
The td-agent service is configured to send logs to the Audit Store and the CLI menu appears.
Repeat step 1 to step 8 on all the DSG nodes in the cluster.
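As an optional, hedged sanity check after the configuration, and assuming the appliance exposes systemd from the OS Console and registers the forwarder under the service name td-agent, you can confirm that the service is running on each DSG node:
# Check that the td-agent log forwarder is active
systemctl status td-agent
# Review its recent output to confirm that logs are being shipped to the ESA
journalctl -u td-agent --since "10 minutes ago"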
Forwarding audit logs to Insight
The audit logs are the data security operation-related logs, namely the protect, unprotect, and reprotect logs, and the PEP server logs. The audit logs are forwarded through the Log Forwarder service to Insight. Insight stores the logs in the Audit Store on the ESA.
To forward audit logs to Insight:
Login to the DSG CLI Manager.
Navigate to Tools > ESA Communication.
Enter the password of the DSG root user and select OK.
Select the Logforwarder configuration option. Press Tab to select Set Location Now and press Enter.
The ESA Location screen appears.
Select the ESA that you want to connect with, and then press Tab to select OK and press ENTER.
The ESA selection screen appears.
Note: If you want to enter the ESA details manually, then select the Enter manually option. You will be asked to enter the ESA IP address or hostname when this option is selected.
Enter the ESA administrator username and password to establish communication between the ESA and the DSG. Press Tab to select OK and press Enter.
The Enterprise Security Administrator - Admin Credentials screen appears.
Enter the IP address or hostname for the ESA. Press Tab to select OK and press ENTER. You can specify multiple IP addresses separated by commas.
The Forward Logs to Audit Store screen appears.
After successfully establishing the connection with the ESA, a Summary dialog box appears. Press Tab to select OK and press Enter.
Repeat step 1 to step 8 on all the DSG nodes in the cluster.
5.34 - Extending ESA with the DSG Web UI
Login to the ESA Web UI.
Navigate to Settings > System > File Upload.
Click Choose File to upload the DSG patch file.
Select the file and click Upload.
The uploaded patch appears on the Web UI.
On the ESA CLI Manager, navigate to Administration > Installation and Patches > Patch Management.
Enter the root password.
Select Install a Patch and press OK.
Select the uploaded patch.
Press Install.
The patch is successfully installed.
The DSG component is installed on the ESA. To verify the details of the DSG component from the About screen on the ESA, complete the following steps.
- Login to the ESA Web UI.
- Click the (Information) icon, and then click About.
- Verify that the DSG version is reflected as DSG 3.3.0.0.
5.35 - Backing up and Restoring the Appliance OS from the Web UI
Back up the appliance OS.
Backing up the OS on a DSG appliance.
Login to the DSG Web UI.
Navigate to System > Backup & Restore.
Navigate to the OS Full tab and click Backup.
The message The backup process may take several minutes to complete. It is recommended to stop all services prior to starting the backup is displayed.
Press ENTER.
The Backup Center screen appears and the OS backup process is initiated.
Navigate to the DSG Dashboard.
The notification O.S Backup has been initiated is displayed. After the backup is complete, the notification O.S Backup has been completed is displayed.
Restore backup
Restoring the backup on a DSG appliance.
Login to the DSG CLI.
Navigate to Administration > Reboot and Shutdown.
Select Reboot and press Enter.
Provide a reason for rebooting the DSG node, select OK and press Enter.
Enter the root password, select OK and press Enter.
The reboot operation is initiated.
During start up, select the System Restore Mode.
Select Initiate OS-Restore Procedure.
Select OK.
The restore process is initiated.
After the restore process is completed, the login screen appears.
5.36 - Setting up ESA communication
Set up ESA communication in cases such as a change of the ESA IP address, a change of the ESA certificates, adding the ESA IP address for cloud platforms, or joining the DSG to an existing cluster in the ESA.
On the DSG CLI Manager, navigate to Tools > ESA Communication.
Enter the root password, press Tab to select OK and press Enter.
Select all the options, press Tab to select Set Location Now and press ENTER.
Enter the FQDN of ESA. Press Tab to select OK and press Enter.
The ESA selection screen appears.
Note: If you want to enter the ESA details manually, then select the Enter manually option. You will be asked to enter the ESA IP address or hostname when this option is selected.
Enter the ESA administrator credentials in the Username and Password text boxes. Press Tab to select OK and press Enter.
The Enterprise Security Administrator - Admin Credentials screen appears.
Enter the FQDN for the ESA. Press Tab to select OK and press ENTER. You can specify multiple IP addresses separated by commas.
The Forward Logs to Audit Store screen appears.
Note: You must specify the IP addresses for all the ESAs already configured and then enter the IP for the additional ESA.
After successfully establishing the connection with the ESA, a Summary dialog box appears. Press Tab to select OK and press Enter.
The ESA communication is established successfully between the DSG and the ESA.
5.37 - Deploying configurations to the cluster
Using the Cluster screen or the Ruleset screen, the configurations can be pushed to all the nodes in the cluster.
Deploy configurations to cluster from the Cluster screen
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster.
Select the Refresh drop down menu and click Deploy. A confirmation message to deploy the configurations to all nodes is displayed.
Click YES to push the configurations to all the node groups and nodes. The configurations will be deployed to all the nodes in the entire cluster.
Deploy configurations to cluster from the Ruleset screen
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Ruleset.
Click Deploy. A confirmation message appears.
Click Continue to push the configurations to all the node groups and nodes. The configurations will be deployed to the entire cluster.
5.38 - Restarting a node
On the CLI Manager, navigate to Administration > Reboot and Shutdown.
Select Reboot and press Enter.
Provide a reason for restarting, select OK and press Enter.
Enter the root password, select OK and press Enter.
5.39 - Deploy configurations to node groups
The configuration will only be pushed to the DSG nodes associated with the node groups. Click Deploy to Node Groups on the RuleSet page or the Cluster tab to perform this operation. Ensure that the node groups are created.
Deploying to node groups from Ruleset screen
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Ruleset.
Click Deploy > Deploy to Node Groups.
The Select node groups for deploy screen appears.
Enter the name for the configuration version in the Tag Name field. The tag name is the version name of a configuration that is deployed to a particular node group. The tag name must be alphanumeric, separated by spaces or underscores. If the tag name is not provided, then the name is automatically generated in the YYYY_mm_dd_HH_MM_SS format.
Enter the description for the configuration in the Description field.
On the Deployment Node Groups option, select the node group to which the configurations must be deployed.
Click Submit.
The configurations are deployed to the node groups.
Deploying to node groups from Cluster screen
The configuration will only be pushed to the DSG nodes associated with the node groups. Click Deploy to Node Groups on the RuleSet page or the Cluster tab to perform this operation.
In the ESA Web UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Cluster.
Select the Refresh drop down menu and click Deploy to Node Groups.
Click Deploy to Node Groups.
The Select node groups for deploy screen is displayed.
Enter the name for the configuration version in the Tag Name field. The tag name is the version name of a configuration that is deployed to a particular node group. The tag name must be alphanumeric, separated by spaces or underscores. If the tag name is not provided, then the name is automatically generated in the YYYY_mm_dd_HH_MM_SS format.
Enter the description for the configuration in the Description field.
On the Deployment Node Groups option select the node group to which the configurations must be deployed.
Click Submit.
The configurations are deployed to the node groups.
5.40 - Codebook Reshuffling
Codebook reshuffling is a feature that provides an organization the ability to share protected data outside of its tokenization domain to meet data privacy, analysis, and regulatory requirements. A tokenization domain or a token domain can be defined as a business unit, a geographical location, or a subsidiary organization where protected data is stored. The data protected by enabling Codebook Reshuffling cannot be unprotected outside the tokenization domain.
Codebook reshuffling provides support for the following tokenization data elements:
- Alpha (a-z, A-Z)
- Alpha-Numeric (0-9, a-z, A-Z)
- Binary
- Credit Card (0-9)
- Date (YYYY-MM-DD)
- Date (DD/MM/YYYY)
- Date (MM/DD/YYYY)
- DateTime Date (YYYY-MM-DD HH:MM:SS MMM)
- Decimal (numeric with decimal point and sign)
- Integer
- Lower ASCII (Lower part of ASCII table)
- Numeric (0-9)
- Printable
- Uppercase Alpha (A-Z)
- Uppercase Alpha-Numeric (0-9, A-Z)
- Unicode
- Unicode Base64
- Unicode Gen2
If the shufflecodebooks parameter in the pepserver.cfg file is set to yes, ensure that you do not unprotect historically protected data using any token element, as doing so causes data corruption.
For example, if you have protected sensitive data using the Credit Card token in earlier releases of DSG, where Codebook Reshuffling is not supported for any tokens, and upgrade to the latest version of the DSG, then unprotecting the sensitive data using the same Credit Card token causes data corruption if the parameters in the pepserver.cfg file are configured for Codebook Reshuffling.
Note: As the Codebook Reshuffling feature is an advanced functionality, you must contact the Protegrity Professional Services team for more information about its usage.
Codebook Reshuffling can be enabled on the DSG for all the supported tokenization data elements to generate unique tokens for protected values across the tokenization domains. The following generic example will help you to understand more about the functionality of Codebook Reshuffling.
Consider a scenario where an organization has two tokenization domains, Tokenization Domain 1 and Tokenization Domain 2. Tokenization Domain 1 contains an ESA connected to multiple DSG nodes. A separate TAC for the ESAs and the DSGs is created.
On the ESA, create a tokenization data element. Codebooks are generated on the ESA when a tokenization data element is created.
Add the newly created tokenization data element to a policy.
Create a Binary Large Object (BLOB) file on each DSG node using the BLOB creation utility (BCU) that contains random bytes. The BLOB file will be encrypted automatically using an AES encryption key fetched from the HSM.
Deploy the policy created on the master ESA to the DSG nodes in Token Domain 1 and Token Domain 2. The DSG nodes download the policy information from the ESA. After the policy is deployed on the DSG, if the codebook reshuffling parameter is enabled, then the codebook will be shuffled again by using the BLOB file created on the DSG.
Create a Ruleset on the DSG nodes to protect the sensitive data.
After a request is sent from the client, the DSG processes and protects the sensitive data. It generates unique tokens for protected values across the tokenization domains.
The advantage offered by codebook reshuffling is that sensitive data in Token Domain 1 cannot be accessed from Token Domain 2, because it is protected by a different codebook that is available only in Token Domain 1. The unique tokens generated for the protected data in Token Domain 1 can still be used to derive insightful analysis for an organization without compromising data security compliance and regulatory norms.
Codebook Reshuffling in the PEP Server
Codebook Reshuffling in the PEP server uses the BCU to create a Binary Large Object (BLOB) file that contains random bytes. The HSM encrypts these random bytes and sends them to the DSG. The DSG saves the encrypted random bytes in a file. When the PEP server starts, it sends the encrypted random bytes to the HSM for decryption. The HSM decrypts the random bytes and sends them back to the DSG. The DSG uses these random bytes to reshuffle the codebooks.
The file with random bytes is encrypted using an AES encryption key from the HSM and saved to the disk.
The PEP server loads the BLOB file from the disk and decrypts it using the key from the HSM. It then re-shuffles the supported codebooks, whenever a policy is published to shared memory. Based on the random bytes decrypted from the BLOB, codebook Reshuffling generates unique tokens for protected values across the tokenization domains.
If the shufflecodebooks parameter in the pepserver.cfg file is set to yes, ensure that you do not unprotect historically protected data using any token element, as doing so causes data corruption.
After the data is protected by enabling Codebook Reshuffling on the DSG, you must perform data security operations, such as protect, unprotect, and reprotect, on the sensitive data only using the Data Security Gateway (DSG) protector. Also, when the shufflecodebooks parameter in the pepserver.cfg file is set to yes, the DSG does not support migrating protected data from the DSG to other protectors. Attempting to migrate the protected data and unprotect it may cause data corruption.
The Codebook Reshuffling feature is tested and supported for the Safenet Luna 7.4 HSM devices. The procedure provided in this section is for the Safenet Luna 7.4 HSM devices.
To enable the reshuffling of codebooks in the PEP server:
Download the HSM library files. In this procedure, it is assumed that the HSM files are added to the /opt/protegrity/hsm directory.
Note: The HSM directory is not created on the DSG by default after DSG installation. Ensure that you use the following command to create the HSM directory.
mkdir /opt/protegrity/hsm
Ensure that the required ownership and permissions are set for the HSM library files in the /opt/protegrity/hsm directory by running the following commands.
chown -R service_admin:service_admin /opt/protegrity/hsm
chmod -R 744 /opt/protegrity/hsm
Ensure that the HSM library configuration files are available on the DSG.
Create a soft link to link the HSM shared library file using the following commands.
cd /opt/protegrity/defiance_dps/data
su -s /bin/sh service_admin -c "ln -s /opt/protegrity/hsm/<HSM shared library file> pkcs11.plm"
Create the environment settings file using the following commands.
echo "export <Variable Name>=<HSM Configuration file path> >> /opt/protegrity/defiance_ dps/bin/dps.env chown service_admin:service_admin /opt/protegrity/defiance_dps/bin/dps.env chmod 644 /opt/protegrity/defiance_dps/bin/dps.env
Export the HSM Configuration file parameter in the current session using the following command.
source ../bin/dps.env
Create the AES encryption key in the HSM using the following commands.
cd /opt/protegrity/defiance_dps/data
../bin/bcu -lib ./pkcs11.plm -op createkey -label <labelname> -slot <slotnumber> -userpin <SOME_USER_PIN>
Create a BLOB file using the BLOB creation utility (BCU), encrypt the BLOB file with the created encryption key, and save the BLOB file to the disk using the following command.
../bin/bcu -lib ./pkcs11.plm -op createblob -label <labelname> -slot <slotnumber> -size <size> -userpin <password> -file random.dat
Create the credentials file for the PEP server to connect to the HSM using the following command.
../bin/bcu -op savepin -userpin <password> -file userpin.bin
Ensure that the required ownership and permissions are set for random.dat and userpin.bin files by using the following command.
chown service_admin:service_admin userpin.bin random.dat
CAUTION: Ensure that you backup the BLOB file and the user pin file and save it on your local machine. The BLOB file and the user pin files must not be saved on the DSG.
Ensure that you have set the shufflecodebooks configuration parameter to yes and the path to the file containing the random bytes in the pepserver.cfg configuration file using the following code snippet.
# shuffle token codebooks after they are downloaded.
# yes, no. default no.
shufflecodebooks = yes
# Path to the file that contains the random bytes for shuffling codebooks.
randomfile = ./random.dat
Note: The shufflecodebooks configuration parameter is available in the Policy Management section of the pepserver.cfg file.
Ensure that you have set the required path to the PKCS#11 provider library, slot number to be used on the HSM, and the required path to the userpin.bin file in the pepserver.cfg configuration file using the following code snippet.
# -----------------------------------
# PKCS#11 configuration
# Values in this section is only used
# when shufflecodebooks = yes
# -----------------------------------
[pkcs11]
# The path to the PKCS#11 provider library.
provider_library = ./pkcs11.plm
# The slot number to use on the HSM.
# Enter the slot number used at the time of the creation of the key.
slot = 1
# The scrambled user pin file.
userpin = ./userpin.bin
Note: The PKCS#11 configuration parameter is available in the PKCS#11 configuration section of the pepserver.cfg file.
Note: Ensure that the slot number used when creating the AES encryption key in the HSM and the slot number specified in the [pkcs11] section of the pepserver.cfg file are the same.
On the DSG Web UI, navigate to System > Services to restart the PEP server.
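For orientation, the following is a minimal worked sketch of the key, BLOB, and credentials steps from this procedure, run from the /opt/protegrity/defiance_dps/data directory. The label cbr_key_01, the slot number 1, and the placeholder PIN and size values are illustrative assumptions only; use the values that match your HSM partition and your organization's sizing guidance.
cd /opt/protegrity/defiance_dps/data
# Create the AES encryption key in the HSM (label and slot are examples)
../bin/bcu -lib ./pkcs11.plm -op createkey -label cbr_key_01 -slot 1 -userpin <userpin>
# Create the BLOB, encrypt it with that key, and save it to disk (size is a placeholder)
../bin/bcu -lib ./pkcs11.plm -op createblob -label cbr_key_01 -slot 1 -size <size> -userpin <userpin> -file random.dat
# Create the scrambled credentials file used by the PEP server
../bin/bcu -op savepin -userpin <userpin> -file userpin.bin
# Assign ownership of the generated files to the service account
chown service_admin:service_admin userpin.bin random.dat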
Re-protecting the Existing BLOB
This section describes the steps to reprotect the existing BLOB file with a new key. The user can encrypt the BLOB file by using the steps mentioned in the Codebook Reshuffling in the PEP Server section. These steps will allow the user to encrypt the BLOB using a key label of an AES key. After performing these operations, if the user wants to reprotect the BLOB with another key, then the user should create a new key and use the new key label to encrypt the BLOB. The reprotect operation provided will only reprotect the existing BLOB with a new key label and will not change the content in the BLOB.
To reprotect the existing BLOB:
Create the new AES encryption key in the HSM using the following commands.
./bcu -lib <safenet lib so file path> -slot <slotId> -userpin <userpin> -op createkey -label <new_labelname>
Run the following command to reprotect the existing BLOB.
./bcu -lib <safenet lib so file path> -slot <slotId> -userpin <userpin> -op reprotectblob -label <new_labelname> -file <blob filename>
Note: The BCU utility will not perform the reprotect operation of the BLOB, if the keys are used from different HSMs or from different slots or partitions.
Ensure that the required ownership and permission are set for the random.dat file by using the following commands.
chown service_admin:service_admin random.dat
chmod 640 random.dat
To apply the changes made in the above steps, login to the DSG Web UI and navigate to System > Services to restart the PEP server. You can view the success or failure logs of the reshuffling process on the PepServerLog screen on the DSG Web UI. To view the PepServerLog screen, navigate to Logs > PepServer.
To view the logs on the PepServerLog screen, ensure that you have set the log level to ALL in the pepserver.cfg file.
# ---------------------------------
# Logging configuration,
# Write application log to file as trace
# ---------------------------------
[logging]
# Logging level: OFF - No logging, SEVERE, WARNING, INFO, CONFIG, ALL
level = ALL
The following figure shows the PepServerLog after the BLOB file is encrypted with a new key.
5.40.1 - Restore Backed up Files for Codebook Reshuffling
It is recommended to configure the HSM before restoring the backed up codebook re-shuffling configuration files.
The Codebook Re-shuffling feature is tested and supported for the Utimaco HSM and Safenet Luna 7.4 HSM devices. The procedure provided in this section is for the Utimaco HSM and Safenet Luna 7.4 HSM devices.
Login to the DSG Web UI.
On the DSG Web UI, navigate to Settings > System > File Upload.
Note: By default, the Max File Upload size is set to 25 MB on the DSG appliances. If the <filename>.tgz file size is more than 25 MB, the Max File Upload size must be changed. If this value is already set to 2 GB, then the following steps can be skipped.
Perform the following steps to increase the Max File Upload size:
- On the DSG Web UI, navigate to Settings > Network > Web Settings.
- Under General Settings, ensure that the Max File Upload is set to 2 GB to accommodate the patch upload.
- Ensure that the steps 1 and 2 are performed on each DSG node in the cluster.
On the File Selection screen, select the <filename>.tgz file, which consists of the following backed up codebook re-shuffling files, and click Upload:
- BLOB (random.dat)
- dps.env
- User PIN (userpin.bin)
Login to the DSG CLI Manager.
Navigate to Administration > OS Console.
Enter the root password.
Navigate to the /products/uploads directory by running the following command.
cd /products/uploads
Run the following command to extract the contents of the <filename>.tgz file.
tar -xvpf <filename>.tgz -C /
The contents of the <filename>.tgz file are extracted.
Setup the Token Domain for Codebook Re-shuffling by running the following commands.
cd /opt/protegrity/defiance_dps/data
su -s /bin/sh service_admin -c "ln -s /opt/protegrity/hsm/libCryptoki2_64.so pkcs11.plm"
Run the following command to source the dps.env file.
. /opt/protegrity/defiance_dps/bin/dps.env
Note: The command has a dot followed by a space and then the path.
Ensure that you have set the shufflecodebooks configuration parameter to yes and the path to the file containing the random bytes in the pepserver.cfg configuration file using the following code snippet.
# shuffle token codebooks after they are downloaded.
# yes, no. default no.
shufflecodebooks = yes
# Path to the file that contains the random bytes for shuffling codebooks.
randomfile = ./random.dat
Ensure that you have set the required path to the PKCS#11 provider library, slot number to be used on the HSM, and the required path to the userpin.bin file in the pepserver.cfg configuration file using the following code snippet.
# -----------------------------------
# PKCS#11 configuration
# Values in this section is only used
# when shufflecodebooks = yes
# -----------------------------------
[pkcs11]
# The path to the PKCS#11 provider library.
provider_library = ./pkcs11.plm
# The slot number to use on the HSM.
slot = 1
# The scrambled user pin file.
userpin = ./userpin.bin
Note: The PKCS#11 configuration parameter is available in the PKCS#11 configuration section of the pepserver.cfg file.
On the DSG Web UI, navigate to System > Services to restart the PEP server.
The backed up Codebook Reshuffling configuration files are restored.
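As an optional, hedged verification before restarting the PEP server, and assuming OS console access, you can confirm that the restored files are in place, owned by service_admin, and referenced by the configuration. The pepserver.cfg path below is a placeholder; use the location of the file on your DSG.
# Confirm the restored files and their ownership
ls -l /opt/protegrity/defiance_dps/data/random.dat /opt/protegrity/defiance_dps/data/userpin.bin
ls -l /opt/protegrity/defiance_dps/bin/dps.env /opt/protegrity/defiance_dps/data/pkcs11.plm
# Confirm the reshuffling and PKCS#11 settings
grep -E "shufflecodebooks|randomfile|provider_library|userpin" <path to pepserver.cfg>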
5.41 - Troubleshooting in DSG
DSG UI issues
DSG UI is not loading with an Internal Server error.
Issue: An Internal Server Error is displayed while accessing the DSG UI from the ESA.
This issue occurs due to one of the following reasons:
- All the DSGs in the TAC are deleted
- The DSG node that is used to communicate with ESA is unhealthy. ESA then attempts to connect with another healthy node in the cluster. After multiple retries, if no healthy node with which ESA can communicate is found, this error is displayed on the screen.
Resolution:
- Recreate the TAC by adding all the required DSGs to the cluster.
- Run the set ESA communication process from all the DSG nodes in the cluster.
- Run the following script on ESA.
/opt/protegrity/alliance/3.3.0.0.{build number}-1/bin/scripts/register_dsg_tac.sh
DSG UI is not loading with a certificate error.
Issue: The DSG UI does not load and a [SSL: TLSV1_ALERT_UNKNOWN_CA] entry is displayed in the logs.
This might occur because the certificates are not synchronized. The following are a few reasons for the issue.
- The set ESA communication process is not run. Resolution: Run the set ESA communication process between the DSGs and ESA.
- The TAC is deleted and recreated. Resolution: If the TAC is deleted and recreated, run the set ESA communication process between the DSGs and ESA.
- The set ESA communication process is run multiple times, so the certificates are synchronized multiple times. Resolution: Perform the following steps:
- On the DSG UI, navigate to Cloud Gateway > 3.3.0.0 {build number} > Transport > Manage Certificates.
- Click Change Certificates. A screen with the list of certificates is displayed.
- Based on the timestamp, select only the latest CA certificate from ESA.
- Unselect the other CA certificates from ESA. Ensure that you do not unselect other certificates in the list.
- Select Next. Click Apply.
DSG UI not loading with a NameResolutionError.
Issue: The DSG UI does not load and a NameResolutionError entry is displayed in the logs.
This might occur if the DSG or ESA are not accessible through their host names.
Resolution: If a DNS name server is not configured, ensure that the FQDN of the DSG is present in the /etc/hosts file of the ESA. Also, ensure that the FQDN of the ESA is present in the /etc/hosts file of the DSG.
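For illustration, a minimal /etc/hosts sketch is shown below; the IP addresses and host names are placeholders. The DSG's FQDN is added to the ESA's /etc/hosts file, and the ESA's FQDN is added to the DSG's /etc/hosts file.
# Example entry in the ESA's /etc/hosts (placeholder address and name)
10.0.0.21   protegrity-dsg01.example.com   protegrity-dsg01
# Example entry in the DSG's /etc/hosts (placeholder address and name)
10.0.0.11   protegrity-esa01.example.com   protegrity-esa01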
DSG UI not loading as the DNS is not configured correctly.
Issue: The DSG UI does not load and a Failed to resolve 'protegrity-cg***.ec2.internal' ([Errno -2] Name or service not known)")) entry is displayed in the logs.
This might occur if the DSG or ESA are not accessible through their host names.
Resolution:
- Ensure that the DNS Name server is configured correctly.
DSG UI not loading with a certificate error.
Issue: A CERTIFICATE_VERIFY_FAILED error appears in the DSG logs.
This might occur if the DSG or ESA are not accessible through their host names. The issue can be mitigated as follows:
- Ensure that the DNS Name server is configured correctly.
- If a DNS name server is not configured, ensure that the FQDN of the DSG is present in the /etc/hosts file of the ESA. Also, ensure that the FQDN of the ESA is present in the /etc/hosts file of the DSG.
DSG UI not loading with a KSA host error.
Issue: An error Failed to find new KSA host from the TAC is displayed in the logs.
The ESA reaches out to the DSG that is registered in the ksa.json file. If this DSG is not reachable, it attempts to connect with another healthy DSG in the cluster. If the attempt to connect with any healthy DSG node in the cluster fails, the issue occurs.
Resolution: Run the following steps:
- Check the health of all the nodes in the cluster.
- Check if the DSGs in the TAC are accessible.
- Check whether the set ESA communication between the DSG nodes and ESA was completed.
DSG UI not loading with a HTTP connection error
Issue: An error Request to X.X.X.X failed with error HTTPSConnectionPool(host='X.X.X.X', port=443): Max retries exceeded with url: /cpg/v1/ksa is displayed in the logs.
The ESA is not able to reach the DSG.
Resolution: Run one of the following steps:
- Re-register the ESA with appropriate online DSG node
- Increase max retry count in the ksa.json file.
Unable to register DSG on ESA
Issue: An error Unable to add ptycluster user's SSH public key, Request failed due to 'Internal Server Error'. Please make sure host(protegrity-esa***.protegrity.com) have TAC enabled. is displayed in the logs.
Resolution: Ensure that the TAC is created on the DSG or ESA. Run the set ESA communication process for the DSG in the cluster.
Ruleset deployment
Rulesets are not deployed from ESA
Issue: When a ruleset is deployed from an ESA to DSG, the operation fails. A failure message is displayed in the logs.
This issue might occur due to one of the following reasons:
- One node in the TAC is deleted or unhealthy.
- TAC is deleted and recreated. Resolution: If the TAC is deleted and recreated, run the set ESA communication process again. Ensure that the certificates between ESA and DSG are synchronized.
Miscellaneous
Support logs are empty
Issue: When the support logs from a DSG Web UI are downloaded, the downloaded .tgz file is empty.
Resolution:
- On the ESA Web UI, ensure that the DSG Container UI service is up and running. If the service is stopped, restart the service. Download the support logs and check the entries.
- While installing a DSG patch on ESA, the details of a DSG node must be provided. Ensure that this DSG node is healthy. This DSG node must be accessible through its host name.
Common issues
Issue: The usage metrics are not forwarded to Insight.
- Reason: The /var/log partition is full.
- Recovery Action:
Perform the following steps.
- Back up the gateway.log files.
- Ensure that the partition space is cleared. To free up the space, you can remove the rotated gateway log files.
- Delete or purge the usagemetrics.pos file from the /opt/protegrity/usagemetrics/bin directory.
- On the Web UI, navigate to System > Services. Restart the Usage Metrics Parser Service.
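A hedged command-line sketch of these recovery steps, run from the OS console, is shown below. The gateway.log location and the backup destination are placeholders; adjust them to your environment before running anything.
# Check how much of the /var/log partition is used
df -h /var/log
# Back up the gateway.log files before removing anything (paths are placeholders)
cp -p <path to gateway.log files>/gateway.log* <backup location>
# Free space by removing only the rotated gateway log files
rm <path to gateway.log files>/gateway.log.*
# Delete the usage metrics position file
rm /opt/protegrity/usagemetrics/bin/usagemetrics.pos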
Issue: When SaaS is accessed through the gateway, the following error is displayed.
HTTP Response Code 599: Unknown.
- Reason 1: The SaaS server certificate is invalid.
- Recovery Action:
Perform one of the following steps.
- Ensure that the forwarding address is correct.
- Add the SaaS server certificate to the gateway’s trusted store.
- Reason 2: The system time on the DSG nodes is not in sync with the ESA.
- Recovery Action: Synchronize the system time for all the DSG nodes by performing the following steps.
- From the CLI Manager, navigate to Tools > ESA communication.
- Select Use ESA’s NTP to synchronize the system time of the node with ESA.
- Consider using an NTP server for system time across all DSG nodes and the ESA.
- Reason 3: The DNS configuration might be incorrect.
- Recovery Action:
Perform one of the following steps.
- Verify that the DNS configuration for the DSG node is set as required.
- Verify that the hostname addresses mentioned in the service configuration are accessible by the DSG node.
Issue: The SaaS web interface is not accessible through the browser. The following error is displayed.
HTTP Response Code 500: Internal Server Error.
- Reason: The DSG node is not configured to service the requested host name.
- Recovery Action: Verify if the Cloud Gateway profiles and services are configured to accept and serve the requested hostname.
Issue: The following error message appears on the client application while accessing DSG.
404 : Not Found
- Reason: The HTTP Extract Message rule configured on the DSG node cannot be invoked.
- Recovery Action:
Perform one of the following steps.
- Ensure that you have sent the request to the URI configured on the DSG. If the request is sent to the incorrect URI, then the request will not be processed.
- Verify the HTTP Method in the HTTP request.
Issue: The following error message appears in the gateway logs.
Error;MountCIFSTunnel;check_for_new_files;error checking for new files, Connection timed out. Server did not respond within timeout.
- Reason: The connection between the DSG and CIFS server is interrupted.
- Recovery Action: Restart the CIFS server and process the data.
Issue: Learn mode is not working.
- Reason: Learn mode is not enabled.
- Recovery action: Perform one of the following steps.
- Enable learn mode for the required service.
- Configure the following learn mode settings while creating the service.
- Mention the contents to be included in the includeResource and the includeContentType parameters.
For example, you can include the following resources and content types:
"includeResource": "\\.(css|png|gif|jpg|ico|woff|ttf|svg|eot)(\\?|\\b)",
"includeContentType": "\\bcss|image|video|svg\\b"
- Mention the contents to be excluded in the excludeResource and the excludeContentType parameters.
For example, you can exclude the following resources and content types:
"excludeResource": "\\.(css|png|gif|jpg|ico|woff|ttf|svg|eot)(\\?|\\b)",
"excludeContentType": "\\bcss|image|video|svg\\b"
Issue: The following message is displayed in the log.
WarningPolicy;missing_host_key;Unknown ssh-rsa host key for :f1b2e0bde5d34244ba104bab1ce66f96
- Reason: The gateway issues an outbound request to an SFTP server.
- Recovery action: The functionality of the DSG node is not affected. No action is required.
Set ESA communication is failing
Issue: While running the set ESA communication tool, the process fails. The following can be some of the reasons for the failure:
- PIM initialization is not done on ESA. Workaround: Initialize the PIM on the ESA.
- A TAC is not created on DSG. Workaround: Create a cluster on a DSG and add the required nodes to the cluster.
6 - Policy Management
The value of any company or its business is in its data. The company or business suffers serious issues if an unauthorized user gets access to the data. Therefore, it becomes necessary for any company or business to protect its data.
The data may contain sensitive information, such as personally identifiable information and company secrets like pricing information or intellectual property. The process of protecting sensitive data to preserve the privacy and personal identity of individuals is called De-Identification.
When de-identifying data, the analysis consists of:
- Anonymization – In anonymization, the intent is to protect privacy by sanitizing any information that could lead to the individual being identified. The de-identified data cannot be re-identified. It includes methods such as encryption and masking.
- Pseudonymization – In pseudonymization, artificial identifiers or pseudonyms replace the identifying data within a data record. The de-identified data can be re-identified only by authorized users. It includes methods such as vaultless tokenization.
The Protegrity methodology together with policy management provides a framework for designing and delivering enterprise data security solutions. Data security solutions, when adopted within an organization, ensure the security of information assets. One of the key components of data security is a policy.
Policy is a set of rules that defines how sensitive data needs to be protected. These policies are designed or created and then distributed to locations in the enterprise, where data needs to be protected.
Policy management is a set of capabilities for creating, maintaining, and distributing the policies.
6.1 - Protegrity Data Security Methodology
The data security policy each organization creates within ESA is based on the requirements of the relevant regulations. A policy helps you to determine, specify, and enforce certain data security rules. These data security rules are as shown in the following figure.
Classification
This section discusses the classification of Policy Management in ESA.
What do you want to protect?
The data that is to be protected needs to be classified. This step determines the type of data that the organization considers sensitive. The compliance or security team chooses to meet certain standard compliance requirements associated with a specific law or regulation. For example, the Payment Card Industry Data Security Standard (PCI DSS) or the Health Information Portability and Accessibility Act (HIPAA).
In ESA, you classify the sensitive data fields by creating ‘Data Elements’ for each field or type of data.
Why do you need to protect?
The fundamental goal of all IT security measures is the protection of sensitive data. The improper disclosure of sensitive data can cause serious harm to the reputation and business of the organization. Hence, protecting sensitive data by avoiding identity theft and preserving privacy is to everyone’s advantage.
Discovery
This section discusses the discovery of Policy Management in ESA.
Where is the data located in the enterprise?
The data protection systems are the locations in the enterprise to focus on as the data security solution is designed. Any data security solution identifies the systems that contain the sensitive data.
In ESA, you specify locations by creating a Data Store.
How do you want to protect it?
Data protection has different scenarios which require different forms of protection. For example, tokenization is preferred over encryption for credit card protection. The technology used must be understood to identify a protection method. For example, if a database is involved, Protegrity identifies a Protector to match up with the technology used to achieve protection of sensitive data.
Who is authorized to view it in the clear?
In any organization, the access to unprotected sensitive data must be given only to the authorized stakeholders to accomplish their jobs. A policy defines the authorization criteria for each user. The users are defined in the form of members of roles. A level of authorization is associated with each role which assigns data access privileges to all members in the role.
Protection
This section discusses the protection aspect of Policy Management in ESA.
The Protegrity Data Security Platform delivers the protection through a set of Data Protectors. The Protegrity Protectors meet the governance requirements to protect sensitive data in any kind of environment. ESA delivers the centrally managed data security policies as part of a package and the Protectors locally enforce them. The Protectors also collect audit logs of all activity in their systems and send them back to ESA for reporting.
Enforcement
This section discusses the enforcement aspect of Policy Management in ESA.
The policy is created to enforce the data protection rules that fulfil the requirements of the security team. It is deployed to all Protegrity Protectors that are protecting sensitive data at protection points.
Monitoring
This section discusses monitoring audits related to Policy Management in ESA.
As a policy is enforced, the Protegrity Protectors collect audit logs in their systems and report back to Insight. Audit logs help you capture authorized and unauthorized attempts to access sensitive data at all protection points. Logs are also captured for all changes made to policies.
6.2 - Package Deployment in Protectors
This section describes Package Deployment in protectors.
Protegrity enables you to deploy packages to protectors. A package can be a single policy, a standalone entity such as a CoP ruleset, or a combination of a policy and other entities. For example, a package can include the following entities:
Data Security Policy - Security policies used to protect, unprotect, and reprotect data.
CoP Ruleset - Instructions used by the Protegrity Data Security Gateway (DSG) to transform data.
For more information about the CoP Ruleset, refer to Ruleset reference.
The following image illustrates how the Data Security Policy that is defined in the ESA reaches the protectors as part of a package. A Data Security Policy is created and deployed in the ESA either using the ESA Web UI or the DevOps API. When the protector sends a request to the ESA, the ESA creates a package containing the policy. The protector then pulls the package and the related metadata. If a change is made to any of the policies that are part of the package, the protector pulls the updated package from the ESA. There can be multiple scenarios when any change in policy is made.
Important: The deployment scenario explained in this section applies to 10.0.0 protectors and later.
6.3 - Initializing the Policy Management
When you install and log in to the ESA Web UI for the first time, you must initialize the Policy Management (PIM). This initialization creates the keys-related data and the policy repository. This section describes the steps to initialize the Policy Management (PIM) to load the Policy Management-specific information on the ESA Web UI. When you try to access any of the Policy Management or Key Management screens on the ESA Web UI, a request to initialize the PIM appears.
Prior to the installation of protectors, ensure that you perform the following steps to initialize the Policy Management.
To initialize the Policy Management:
On the ESA Web UI, click Policy Management or Key Management.
Click any option available in the Policy Management or Key Management area.
The following screen to initialize PIM appears.
Click Initialize PIM.
A confirmation message box appears.
Click Ok.
The policy management information appears in the Policy Management area.
You can also initialize the PIM using the Policy Management REST API.
6.4 - Components of a Policy
A policy contains multiple components that work together to enforce protection at the protection endpoints. The role component and the policy are tied closely. Any changes made to the organization LDAP, such as when a user linked to a role is added or deleted, result in an update to the policy. The automatic deployment of policy is only applicable for automatic roles. When a policy is deployed in ESA, the protectors send a request to the ESA to retrieve the updated policy. The ESA creates a package containing the updated policy, and the protector pulls the package and the related metadata. So, when a change in the package is detected due to one of the following reasons, the protector pulls the package:
- Security Role changes.
- Rotation of keys. This is only applicable when either the Signing Key or the Data Element Key for an encryption data element with Key ID is rotated.
- Changes in permissions.
- Addition or deletion of data elements.
- Updating of the individual components of a package, such as, the data security policy or the CoP.
You can also create a resilient package that is immutable or static by exporting the package using the RPS API. For more information about the RPS API, refer to section APIs for Resilient Protectors in the Protegrity APIs, UDFs, Commands Reference Guide.
6.4.1 - Working With Data Elements
Data elements consist of a set of data protection properties to protect sensitive data. This set consists of different token types, encryption algorithms, and encryption options. The most important of these properties are the methods that you use to protect sensitive data.
For more information about the protection methods, refer to Protection Methods Reference Guide from the Legacy Documents section.
You can create data elements for the following data types:
Structured Data: Structured Data provides the properties that support column-level database protection, and capabilities to integrate policies into applications, with an API. The Structured Data can also be used by the CoP Ruleset to transform the data.
Unstructured Data: Unstructured Data provides the properties supporting file protection. The file protection capabilities enable the protection of sensitive data as it traverses the enterprise or as it rests within files.
Important: 10.0.0 Protectors do not support policies with Unstructured data elements.
The following figure shows the New Data Element screen.
The following table provides the description for each element available on the New Data Element screen of the ESA Web UI.
Callout | UI Element | Description |
---|---|---|
1 | Type | Type of data element you require to create, structured or unstructured. |
2 | Name | Unique name identifying the data element. |
3 | Description | Text describing the data element. |
4 | Method | Tokenization, encryption, masking, and monitoring methods. |
5 | Data Type | If you have selected the Tokenization method, then you need to specify the data type. For example, Numeric, Alpha-Numeric, UnicodeGen2, and so on. |
6 | Tokenizer | If you have selected the Tokenization method, you need to select the Tokenizer. For example, SLT_1_6, SLT_2_6, SLT_1_3 and SLT_2_3. |
7 | Encryption Options/Tokenize Options | Based on the method selected, the tokenization or the encryption options change. |
6.4.1.1 - Example - Creating Token Data Elements
This example shows how to create numeric tokenization data element that is used to tokenize numerical data.
To create a structured data element:
On the ESA Web UI, navigate to Policy Management > Data Elements & Masks > Data Elements.
Click Add New Data Element.
The New Data Element screen appears.
Select Structured from Type.
Type a unique name for the data element in the Name textbox.
The maximum length of the data element name is 55 characters.
Type the description for the data element in the Description textbox.
Select the protection method from the Method drop-down. In this example, select Tokenization.
Select the tokenization data type from the Data Type drop down. In this example, select Numeric (0-9).
For more information about the different data types, refer to the Protection Methods Reference Guide.
Select the tokenizer from the Tokenizer drop-down.
For more information about the different token elements, refer to the Protection Methods Reference Guide.
If the Tokenizer should leave characters in clear, then set the number of characters from left and from right in the From Left and From Right text boxes.
For more information on the maximum and minimum input values for these fields, refer to the section Minimum and Maximum Input Length in the Protection Methods Reference Guide.
If the token length needs to be equal to the provided input, then select the Preserve length check box.
If you select the Preserve length option, then you can also choose the behavior for short data tokenization in the Allow Short Data drop-down.
If you require short data tokenization with existing data that was previously not protected using short data enabled data element, then you must do the following:
- Unprotect the existing data with the old data element.
- Create a new data element that is enabled with short data.
- Reprotect the unprotected data with the new short data enabled data element. For more information about length preservation and short data tokenization, refer to section Length Preserving and Short Data Tokenization in Protection Methods Reference Guide.
Click Save.
A message Data Element has been saved successfully appears.
6.4.1.2 - Example - Creating a FPE Data Element
This example shows how to create an FPE data element that is used to encrypt Plaintext Alphabet data.
To create a structured FPE data element:
On the ESA Web UI, navigate to Policy Management > Data Elements & Masks > Data Elements.
Click Add New Data Element.
The New Data Element screen appears.
Select Structured from Type.
Enter a unique name for the data element in the Name textbox.
The maximum length of the data element name is 55 characters.
Type the description for the data element in the Description textbox.
Select FPE NIST 800-38G from the Method drop-down.
Select a data type from the Plaintext Alphabet drop-down.
Configure the minimum input length from the Minimum Input Length text box.
Select the tweak input mode from the Tweak Input Mode drop-down.
For more information about the tweak input mode, refer to the section Tweak Input in the Protection Methods Reference Guide.
Select the short data configuration from the Allow Short Data drop-down.
Note: FPE does not support data less than 2 bytes, but you can set the minimum message length value accordingly.
For more information about length preservation and short tokens, refer to section Length Preserving in Protection Methods Reference Guide from the Legacy Documents section.
If you create a short data token in a policy and then deploy the policy, Forensics displays a policy deployment warning indicating that the data element has unsupported settings.
Enter the required input characters to be retained in the clear in the From Left and From Right text box.
For more information about this setting, refer to the section Left and Right Settings in the Protection Methods Reference Guide from the Legacy Documents section.
Configure any special numeric data handling request, such as Credit Card Number (CCN), in the Special numeric alphabet handling drop-down.
For more information about handling special numeric data, refer to the section Handling Special Numeric Data in the Protection Methods Reference Guide from the Legacy Documents section.
Click Save.
A message Data Element has been created successfully appears.
6.4.1.3 - Example - Creating Data Elements for Unstructured Data
This example shows how to create an AES-256 data element that is used to encrypt a file.
To create an unstructured data element:
On the ESA Web UI, navigate to Policy Management > Data Elements & Masks > Data Elements.
Click Add New Data Element.
The New Data Element screen appears.
Select Unstructured from Type.
Type a unique name for the data element in the Name textbox.
Note: Ensure that the length of the data element name does not exceed 55 characters.
Type the required description for the data element in the Description textbox.
Select AES-256 from the Method drop-down list.
If you want to enable multiple instances of keys with the data element, then check the Use Key ID (KID) checkbox.
Click Save.
A message Data Element has been saved successfully appears.
6.4.2 - Working With Alphabets
The Unicode Gen2 token type gives you the liberty to customize what Unicode data to protect and how the protected token value is returned. It allows you to leverage existing internal alphabets or create custom alphabets by defining Unicode code points. This flexibility allows you to create token values in the same Unicode character set as the input data.
For more information about the code points and considerations around creating alphabets, refer to the section Code Points in Unicode Gen2 Token Type in the Protegrity Protection Methods and Reference Guide.
The following figure shows the Alphabet screen.
6.4.2.1 - Creating an Alphabet
An alphabet can include multiple alphabet templates, for example, an alphabet can be created to generate a token value, which is a mix of Numeric and Cyrillic characters.
To create an alphabet:
On the ESA Web UI, navigate to Policy Management > Data Elements & Masks > Alphabets.
Click Add New Alphabet.
The New Alphabet screen appears.
Enter a unique name for the alphabet in the Name text box.
Under the Alphabet tab, click Add to add existing alphabets or custom code points to the new alphabet.
The Add Alphabet entry screen appears.
If you plan to use multiple alphabet entries to create a token alphabet, then click Add again to add other alphabet entries.
Ensure that code points in the alphabet are supported by the protectors using this alphabet.
Select an existing alphabet, a custom code point, or a range of custom code points.
The following options are available for creating an alphabet.
Important: For the SLT_1_3 tokenizer, you must include a minimum of 10 code points and a maximum of 160 code points.
Important: For the SLT_X_1 tokenizer, you must include a minimum of 161 code points and a maximum of 100k code points.
Alphabet option | Description |
---|---|
Existing Alphabets | Select one of the existing alphabets. The list includes internal and custom alphabets. |
Custom code point in hex (0020-3FFFF) | Add custom code points that will be used to generate the token value. |
Custom code point range in hex (0020-3FFFF) | Add a range of code points that will be used to generate the token value. |
Note: When creating an alphabet using the code point range option, note that the code points are not validated.
For more information about considerations related to defining code point ranges, refer to the section Code Point Range in Unicode Gen2 Token Type in the Protegrity Protection Methods and Reference Guide.
Click Add to add the alphabet entry to the alphabet.
Click Save to save the alphabet.
Important: Only the alphabet characters that are supported by the OS fonts are rendered properly on the Web UI.
A message Alphabet has been created successfully appears.
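When defining custom code points or ranges, it can help to confirm the hexadecimal code point of a character before entering it. The following is an illustrative sketch only, assuming a bash shell with a UTF-8 locale; it is not part of the ESA workflow.
# Print the code point (in hex) of a character, for example the Cyrillic letter Д
printf '%x\n' "'Д"
# Expected output: 414, so the code point to enter is 0414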
6.4.3 - Working With Masks
Masks are a pattern of symbols and characters that, when imposed on a data field, obscures its actual value from the viewer of that data. For example, you might want to mask out characters from credit cards and Social Security numbers. Masks can obscure data completely or partially. For example, a partial mask might display the last four digits of a credit card on a grocery store receipt.
For more information about the Masks option, refer to the section Masks in the Protegrity Protection Methods and Reference Guide.
The following figure shows the New Mask screen.
The following table provides the description for each element available on the Web UI.
Callout | UI Element | Description |
---|---|---|
1 | Name | Unique name to identify the mask. |
2 | Description | Text describing the mask. |
3 | Mask Template | Mask templates for masking the data.For more information about the mask templates, refer to Table 3-5 Mask Templates. |
4 | Mask Options | The options that are available to mask the data. |
5 | From Left / From Right | Select the text to be masked from left and from the right. |
6 | Mask Mode | If the masking is in clear or masked. |
7 | Mask Character | The character to mask the data. |
8 | Sample Output | The output based on the select template and mask mode. |
The following table shows the different mask mode templates.
Mask Template | Mask Mode-Clear | Mask Mode-Mask |
---|---|---|
CCN- 6*4 | Template to retain six characters from left and four characters from right in clear. For example, 123456******3456 123-12*1234 A123-1*******4-12 | Template to mask six characters from left and four characters from right. For example, ******789012**** ******-**** ******234-123**** |
CCN 12*0 | Template to retain 12 characters from left and no characters from right in clear. For example, 123456789012**** 123-12-1234 A123-1234-12***** | Template to mask 12 characters from left but no characters from right. For example, ************3456 *********** ************34-12 |
CCN 4*4 | Template to retain four characters from left and four characters from right in clear. For example, 1234********3456 123-***1234 A123*********4-12 | Template to mask four characters from left and four characters from right. For example, ****56789012**** ****12-**** ****-1234-123**** |
SSN x-4 | Template to retain no characters from left but only four characters from right in clear. For example, ************3456 *******1234 *************4-12 | Template to mask no characters from left but four characters from right. For example, 123456789012**** 123-12-**** A123-1234-123**** |
SSN 5-x | Template to retain five characters from left but no characters from right. For example, 12345*********** 123-1****** A123-************ | Template to mask five characters from left but no characters from right. For example, *****67890123456 *****2-1234 *****1234-1234-12 |
6.4.3.1 - Creating a Mask
To create a mask:
On the ESA Web UI, navigate to Policy Management > Data Elements & Masks > Masks.
Click Add New Mask.
The New Mask screen appears.
Enter a unique name for the mask in the Name text box.
Enter the description for the mask in the Description textbox.
Select CCN 6X4 from the Mask Template drop-down.
Select Clear from Mask Mode.
Select the masking character from the Character drop-down.
Click Save.
A message Mask has been saved successfully appears.
6.4.3.2 - Masking Support
Data Element Method | Data Type | Masking Support |
---|---|---|
Tokenization | Numeric (0-9) | Yes |
Tokenization | Alpha (a-z, A-Z) | Yes |
Tokenization | Uppercase Alpha (A-Z) | Yes |
Tokenization | Uppercase Alpha-Numeric (0-9, A-Z) | Yes |
Tokenization | Printable | Yes |
Tokenization | Date (YYYY-MM-DD, DD/MM/YYYY, MM.DD.YYYY) | No |
Tokenization | DateTime | No |
Tokenization | Decimal | No |
Tokenization | Unicode | No |
Tokenization | Unicode Base64 | No |
Tokenization | Unicode Gen2 | No |
Tokenization | Binary | No |
Tokenization | Credit Card (0-9) | Yes |
Tokenization | Lower ASCII | Yes |
Tokenization | | Yes |
Tokenization | Integer | No |
Encryption Algorithm: 3DES, AES-128, AES-256, CUSP 3DES, CUSP AES-128, CUSP AES-256 | | Yes |
Format Preserving Encryption (FPE) Note: Masking is supported only for FPE data elements without Left and Right settings and with ASCII plaintext encoding that are upgraded from the previous versions of ESA to the 10.1.0 version. | | No |
No Encryption | | Yes |
Masking | | Yes |
6.4.4 - Working With Trusted Applications
A Trusted Application (TA) is an entity that defines which system users and applications are authorized to run the Application Protector to protect data.
A single Trusted Application instance cannot contain multiple users and applications. Each Trusted Application can include only one application and its corresponding system user. If you want to cover multiple users and applications, then you must create a separate Trusted Application for each application and its corresponding system user.
6.4.4.1 - Creating a Trusted Application
To make an application trusted:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Trusted Applications.
Click Add New Trusted Application.
The New Trusted Application screen appears.
Type a unique name for the trusted application in the Name textbox.
Type the required description for the trusted application in the Description textbox.
Type the name of the application in the Application Name textbox.
The maximum length of the application name is 64 characters.
Important: For AP Java and AP Go applications, ensure that you specify the complete module or package name.
In the application name, you can type the asterisk (*) wild card character to represent multiple characters or the question mark (?) wild card character to represent a single character. You can also use multiple wild card characters in the application name.
For example, if you specify Test_App* as the application name, then applications with names such as Test_App1 or Test_App123 can perform security operations (see the illustrative sketch after this procedure).
Caution: Use wild card characters with discretion, as they can potentially lead to security threats.
Type the name of the application user in the Application User textbox.
In the application user name, you can type the asterisk (*) wild card character to represent multiple characters or the question mark (?) character to represent a single character. You can also use multiple wild card characters in the application user name.
For example, if you specify User* as the application user name, then users with names such as User1 or User123 can perform security operations.
Caution: Use wild card characters with discretion, as they can potentially lead to security threats.
Click Save.
A message Trusted Application has been created successfully appears.
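The asterisk (*) and question mark (?) wildcards described in the steps above can be pictured with shell-style pattern matching. The following Python sketch only illustrates that idea using the examples from this section; it does not reproduce the protector's actual matching logic, and details such as case sensitivity may differ.

```python
from fnmatch import fnmatchcase

# Illustrative checks of the wildcard patterns from the examples above.
print(fnmatchcase("Test_App123", "Test_App*"))  # True: '*' matches any number of characters
print(fnmatchcase("User1", "User?"))            # True: '?' matches exactly one character
print(fnmatchcase("User12", "User?"))           # False: two characters remain after 'User'
```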
6.4.4.2 - Linking Data Store to a Trusted Application
To link a trusted application to a data store:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Trusted Applications.
The list of all the trusted applications appears.
Select the required trusted application.
The screen to edit the trusted application appears.
Under the Data Stores tab, click Add.
The screen to add the data stores appears.
Select the required data stores.
Click Add.
A message Selected Data Stores have been added to Trusted Application successfully appears.
6.4.4.3 - Deploying a Trusted Application
Deploying a trusted application consists of the following two stages:
Making the Trusted Application ready for deployment
After you link data stores to your trusted application, you can make it ready to be deployed to the protector nodes.
The following procedure describes the steps to make a trusted application ready for deployment.
To make a trusted application ready for deployment:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Trusted Applications.
The list of all the trusted applications appears.
Select the required trusted application.
The screen to edit the trusted application appears.
Click Ready to Deploy.
A message Trusted Application has been marked ready to deploy appears.
The Deploy action is active.
Deploying the Trusted Application
The following procedure describes the steps to deploy the trusted application.
You deploy the trusted application to the data store after it is marked ready for deployment. If no data stores are linked to the trusted application, then the deployment of the trusted application fails.
To deploy the trusted application:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Trusted Applications.
The list of all the trusted applications appears.
Select the required trusted application that is in the Ready to Deploy state.
The screen to edit the trusted application appears.
Click Deploy.
A message Trusted application has been successfully deployed appears.
During deployment, the Application Protector validates the trusted application. If the validation fails, then the Protector generates an audit entry with the detailed information.
You can also deploy the trusted application by deploying the data store. In this case, all the policies and trusted applications that are linked to the data store are prepared to be distributed to the protection points.
For more information about deploying a data store, refer to Deploying Data Stores to Protectors.
6.4.5 - Creating a Data Store
You create data stores to specify the protectors in your enterprise to which you want to deploy policies and trusted applications. The protectors are identified by their IP addresses, which must be unique across the enterprise. Using the data store, you can define the list of protector nodes that can pull the packages. A data store consists of information on policies and trusted applications. You can create a default data store that deploys policies to the protectors that are not a part of the allowed servers list of any data store. Thus, when a new protector is added that is not a part of any data store, the protector inherits the policy information pertaining to the default data store.
You cannot create two data stores with the same name. You can create only one default data store for a single instance of ESA.
To create a data store:
On the ESA Web UI, navigate to Policy Management > Data Stores.
The list of all the data stores appears.
Click Add New Data Store.
The New Data Store screen appears.
Enter a unique name identifying the data store in the Name textbox.
The maximum length of the data store name is 55 characters.
Enter the description describing the data store in the Description textbox.
To make this data store the default, click the Select as Default Data Store option.
If a default data store already exists and you are updating another data store as the default data store, then the following message appears.
A default Data Store already exists, Please confirm to make this the new default Data Store.
Click Ok.
Click Save.
A message Data Store has been created successfully appears.
The following tabs are visible after the data store has been saved, as per the type of data store:
- The Policies and Trusted Applications tabs are visible in case of a default data store.
- The Allowed Servers, Policies, and Trusted Applications tabs are visible in case of a non-default data store.
You can also create a Data Store using the Policy Management REST API.
6.4.5.1 - Adding Allowed Servers for the Data Store
Specifying Allowed Servers for the Data Store
This section describes the steps to specify allowed servers for a data store.
To specify allowed servers for a data store:
On the ESA Web UI, navigate to Policy Management > Data Stores.
The list of all the data stores appears.
From the Allowed Servers tab for the data store, click Add.
The Add Allowed Servers screen appears.
If you want to add a single server, then select Single Server and specify the server IP address.
If you want to add a range of servers, then select Multiple Servers. Enter the range in the From and To text boxes.
Click Add.
The servers are added to the list.
6.4.5.2 - Adding Policies to the Data Store
To add a policy to a data store:
On the ESA Web UI, navigate to Policy Management > Data Stores.
The list of all the data stores appears.
Select the data store.
The screen to edit the data store appears.
Click the Policies tab.
Click Add.
The list of created policies appears.
Select the policies.
Click Add.
A message Selected Policies have been added to the Data Store successfully appears.
For more information on creating policies, refer to section Creating and Deploying Policies.
6.4.5.3 - Adding Trusted Applications to the Data Store
To add a trusted application to a data store:
On the ESA Web UI, navigate to Policy Management > Data Stores.
The list of all the data stores appears.
Select the data store.
The screen to edit the data store appears.
Click the Trusted Applications tab.
Click Add.
The list of created trusted applications appears.
Select the trusted applications.
Click Add.
A message Selected Trusted Applications have been added to the Data Store successfully appears.
6.4.6 - Working With Member Sources
The users can come from the following types of sources:
- User directory, such as:
- LDAP
- Posix LDAP
- Active Directory
- Azure AD
- Database
- Teradata
- Oracle
- SQL Server
- DB2
- PostgreSQL
- File
Using these source types, you configure the connection to a directory to retrieve information on the users and the user groups available.
When you configure the connection to any member source, you can verify the following:
- The connection parameters that you have specified are correct.
- The users and groups can be retrieved from the member source.

To run the verification, click Test adjacent to the member source entry in the list or on the respective member source screen. The Test Member Source Connection dialog box displays the status of the following parameters:
- connection
- authentication
- groups
- users
Note: The password length of a member source may be limited on some platforms.
6.4.6.1 - Configuring Active Directory Member Source
To create an Active Directory member source:
On the ESA Web UI, navigate to Policy Management > Roles & Member Source > Member Sources.
Click Add New Member Source.
The New Member Source screen appears.
Enter a unique name for the Active Directory member source in the Name textbox.
Type the description in the Description textbox.
Select Active Directory from the Source Type drop-down list.
The Active Directory Member Source screen appears.
Enter the information in the directory fields.
The following table describes the directory fields for Active Directory member sources.
Field Name | Description |
---|---|
Host | The Fully Qualified Domain Name (FQDN), or IP, of the directory server. |
Port | The network port on the directory server where the service is listening. |
TLS Options | The Use TLS option can be enabled to create secure communication to the directory server. The Use LDAPS option can be enabled to create secure communication to the directory server; LDAPS uses TLS/SSL as a transmission protocol. Note: Selection of the LDAPS option is dependent on selecting the TLS option. If the TLS option is not selected, then the LDAPS option is not available for selection. |
Recursive Search | The recursive search can be enabled to search the user groups in the Active Directory recursively. For example, consider a user group U1 with members User1, User2, and Group1, and Group1 with members User3 and User4. If you list the group members in user group U1 with recursive search enabled, then the search result displays User1, User2, User3, and User4. |
Base DN | The base distinguished name where users can be found in the directory. |
Username | The username of the Active Directory server. |
Password/Secret | The password of the user binding to the directory server. |

Click Save.
A message Member Source has been created successfully appears.
6.4.6.2 - Configuring File Member Source
In Policy Management, the exampleusers.txt and examplegroups.txt files are sample member source files that contain a list of users or groups respectively. These files are available on the ESA Web UI. You can edit them to add multiple user names or user groups. You can also create a File member source by adding a custom file.
The examplegroups.txt file has the following format.
[Examplegroups]
<groupusername1>
<groupusername2>
<groupusername3>
Note: Ensure that the file has read permission set for Others.
Important: The exampleusers.txt or examplegroups.txt files do not support Unicode characters, which are characters with the \U prefix.
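For illustration, a group file that follows the format above could look like the following; the user names are placeholders that match the sample names used later in the Search Criteria example.
[Examplegroups]
examplegroupuser1
examplegroupuser2
On the appliance, read permission for Others can be granted with a standard command such as chmod o+r examplegroups.txt (shown only as an illustration; adjust the file path to your environment).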
Viewing the List of Users and Groups in the Sample Files
This section describes the steps to view the list of users and groups in the sample files.
To view list of users and groups in the sample files:
On the ESA Web UI, navigate to Settings > Systems > Files.
Click View corresponding to exampleusers.txt or examplegroups.txt under Policy Management-Member Source Service User Files and Policy Management-Member Source Service Group Files respectively.
The list of users in the exampleusers.txt file or the examplegroups.txt file appears.
Creating File Member Source
This section describes the procedure on how to create a file member source.
To create file member source:
On the ESA Web UI, navigate to Policy Management > Roles & Member Source > Member Sources.
Click Add New Member Source.
The New Member Source screen appears.
Enter a unique name for the file member source in the Name textbox.
Type the description in the Description textbox.
Select File from the Source Type drop-down list.
Select Upload file from the User File drop-down list.
Click the Browse.. icon to open the file browser.
Select the user file.
Click the Upload File icon.
A message User File has been uploaded successfully appears.
Select Upload file from the Group File drop-down list.
Click the Browse.. icon to open the file browser.
Select the group file.
Click the Upload File icon.
A message Group File has been uploaded successfully appears.
Click Save.
A message Member Source has been created successfully appears.
6.4.6.3 - Configuring LDAP Member Source
To create an LDAP member source:
On the ESA Web UI, navigate to Policy Management > Roles & Member Source > Member Sources.
Click Add New Member Source.
The New Member Source screen appears.
Enter a unique name for the LDAP member source in the Name textbox.
Type the description in the Description textbox.
Select LDAP from the Source Type drop-down list.
The LDAP Member Source screen appears.
Enter the information in the LDAP member source fields.
The following table describes the directory fields for LDAP member sources.
Field Name | Description |
---|---|
Host | The Fully Qualified Domain Name (FQDN), or IP, of the directory server. |
Port | The network port on the directory server where the service is listening. |
Use TLS | TLS is enabled to create a secure communication to the directory server. LDAPS, which is deprecated, is no longer a supported protocol. TLS is the only supported protocol. |
User Base DN | The base distinguished name where users can be found in the directory. The user base DN is used as the user search criterion in the directory. |
Group Base DN | The base distinguished name where groups can be found in the directory. The group base DN is used as the group search criterion in the directory. |
User Attribute | The Relative Distinguished Name (RDN) attribute of the user distinguished name. |
Group Attribute | The RDN attribute of the group distinguished name. |
User Object Class | The object class of entries where user objects are stored. Results from a directory search of users are filtered using the user object class. |
Group Object Class | The object class of entries where group objects are stored. Results from a directory search of groups are filtered using the group object class. |
User Login Attribute | The attribute intended for authentication or login. |
Group Members Attribute | The attribute that enumerates members of the group. |
Group Member is DN | The members may be listed using their fully qualified name, for example, their distinguished name, or, as in the case with the Posix user attribute, the cn value. |
Timeout | The timeout value when waiting for a response from the directory server. |
Bind DN | The DN of a user that has read access and rights to query the directory. |
Password/Secret | The password of the user binding to the directory server. |

Parsing users from a DN instead of querying the LDAP server: By default, a user is not resolved by querying the external LDAP server. Instead, the user is resolved by parsing the User Login Attribute from the Distinguished Name that has been initially retrieved by the Member Source Service. This option is applicable only if the Group Member is DN option is enabled while configuring the member source. In this case, the members must be listed using their fully qualified name, such as their Distinguished Name. If the ESA is unable to parse the DN or the DN is not available in the specified format, the user is resolved by querying the external LDAP server (see the illustrative sketch after this procedure).
Click Save.
A message Member Source has been created successfully appears.
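The "Parsing users from a DN" behavior described above can be pictured with a small sketch. The snippet below is only an illustration under the assumption that the User Login Attribute is uid; it is not the Member Source Service implementation and does not handle every DN form (for example, multi-valued RDNs).

```python
def parse_login_from_dn(dn, login_attribute="uid"):
    """Illustrative only: extract the login attribute value from a member's
    Distinguished Name instead of querying the external LDAP server."""
    for rdn in dn.split(","):
        attribute, _, value = rdn.strip().partition("=")
        if attribute.lower() == login_attribute.lower():
            return value
    return None  # no match: fall back to querying the external LDAP server

print(parse_login_from_dn("uid=jdoe,ou=people,dc=example,dc=com"))  # jdoe
```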
6.4.6.4 - Configuring POSIX Member Source
You can retrieve users and user groups from any external LDAP or Posix LDAP. The internal LDAP available on the ESA uses the Posix schema. Therefore, when configuring the connection with the internal ESA LDAP, it is recommended to use the Posix LDAP source type.
To create a Posix LDAP member source:
On the ESA Web UI, navigate to Policy Management > Roles & Member Source > Member Sources.
Click Add New Member Source.
The New Member Source screen appears.
Enter a unique name for the Posix LDAP member source in the Name textbox.
Type the description in the Description textbox.
Select Posix LDAP from the Source Type drop-down list.
The Posix LDAP Member Source screen appears.
Enter the information in the directory fields.
The following table describes the directory fields for Posix LDAP member source.
Field Name | Description |
---|---|
Host | The Fully Qualified Domain Name (FQDN), or IP, of the directory server. |
Port | The network port on the directory server where the service is listening. |
Use TLS | TLS can be enabled to create a secure communication to the directory server. |
Base DN | The base distinguished name where users can be found in the directory. |
Username | The username of the Posix LDAP server. |
Password/Secret | The password of the user binding to the directory server. |

Click Save.
A message Member Source has been created successfully appears.
6.4.6.5 - Configuring Azure AD Member Source
To create an Azure AD member source:
On the ESA Web UI, navigate to Policy Management > Roles & Member Sources > Member Sources.
Click Add New Member Source.
The New Member Source screen appears.
Enter a unique name for the Azure AD member source in the Name textbox.
Type the description in the Description textbox.
Select Azure AD from the Source Type drop-down list.
The Azure AD Member Source screen appears.
Enter the information in the directory fields.
The following fields describe the directory settings for the Azure Active Directory member source.

Recursive Search: The recursive search can be enabled to search the user groups in the Azure AD recursively.

Tenant ID: The unique identifier of the Azure AD instance.

Client ID: The unique identifier of an application created in Azure AD.

User Attribute: The Relative Distinguished Name (RDN) attribute of the user distinguished name. The following user attributes are available:
- displayName - The name displayed in the address book for the user.
- userPrincipalName - The user principal name (UPN) of the user.
- givenName - The given name (first name) of the user.
- employeeId - The employee identifier assigned to the user by the organization.
- id - The unique identifier for the user.
- mail - The SMTP address for the user.
- onPremisesDistinguishedName - Contains the on-premises Active Directory distinguished name (DN).
- onPremisesDomainName - Contains the on-premises domain FQDN, also called dnsDomainName, synchronized from the on-premises directory.
- onPremisesSamAccountName - Contains the on-premises samAccountName synchronized from the on-premises directory.
- onPremisesSecurityIdentifier - Contains the on-premises security identifier (SID) for the user that was synchronized from the on-premises setup to the cloud.
- onPremisesUserPrincipalName - Contains the on-premises userPrincipalName synchronized from the on-premises directory.
- securityIdentifier - Security identifier (SID) of the user, used in Windows scenarios.

Group Attribute: The RDN attribute of the group distinguished name. The following group attributes are available:
- displayName - The display name for the group.
- id - The unique identifier for the group.
- mail - The SMTP address for the group.
- onPremisesSamAccountName - Contains the on-premises SAM account name synchronized from the on-premises directory.
- onPremisesSecurityIdentifier - Contains the on-premises security identifier (SID) for the group that was synchronized from the on-premises setup to the cloud.
- securityIdentifier - Security identifier of the group, used in Windows scenarios.

Group Members Attribute: The attribute that enumerates members of the group.
Note: Ensure that you select the same Group Members Attribute as the User Attribute.
The following group members attributes are available:
- displayName - The name displayed in the address book for the user.
- userPrincipalName - The user principal name (UPN) of the user.
- givenName - The given name (first name) of the user.
- employeeId - The employee identifier assigned to the user by the organization.
- id - The unique identifier for the user.
- mail - The SMTP address for the user.
- onPremisesDistinguishedName - Contains the on-premises Active Directory distinguished name (DN).
- onPremisesDomainName - Contains the on-premises domain FQDN, also called dnsDomainName, synchronized from the on-premises directory.
- onPremisesSamAccountName - Contains the on-premises samAccountName synchronized from the on-premises directory.
- onPremisesSecurityIdentifier - Contains the on-premises security identifier (SID) for the user that was synchronized from the on-premises setup to the cloud.
- onPremisesUserPrincipalName - Contains the on-premises userPrincipalName synchronized from the on-premises directory.
- securityIdentifier - Security identifier (SID) of the user, used in Windows scenarios.

Password/Secret: The client secret is the password/secret of the Azure AD application.

Click Save.
A message Member Source has been created successfully appears.
6.4.6.6 - Configuring Database Member Source
You use the Database source type to obtain users from a database, such as SQL Server, Teradata, DB2, PostgreSQL, or Oracle. An ODBC connection to the database must be set up to retrieve user information.
The following table describes the connection variable settings for the databases supported in Policy Management.
Database Type | Connection Variable |
---|---|
SQLSERVER | System DSN Name (ODBC) For example, SQLSERVER_DSN. |
TERADATA | System DSN Name (ODBC) For example, TD_DSN. |
ORACLE | Transparent Network Substrate Name (TNSNAME). |
DB2 | System DSN Name (ODBC) For example, DB2DSN. |
POSTGRESQL | System DSN Name For example, POSTGRES. |
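For reference, a DSN entry in the odbc.ini file referenced in the procedure below (located in the /opt/protegrity/mbs/conf/ directory) pairs the DSN name with driver and connection details. The following snippet is a hypothetical sketch only; the driver path and connection values are placeholders that must match your environment and database client.

```ini
; Hypothetical example of a System DSN entry; all values are placeholders.
[SQLSERVER_DSN]
Driver   = /path/to/sqlserver/odbc/driver.so
Server   = dbhost.example.com
Port     = 1433
Database = membersourcedb
```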
Creating Database Member Source
This section describes the procedure on how to create a database member source.
To create a Database Member Source:
On the ESA Web UI, navigate to Policy Management > Roles & Member Source > Member Sources.
Click Add New Member Source.
The New Member Source screen appears.
Enter a unique name for the database member source in the Name text box.
Type the description in the Description text box.
Select Database from the Source Type drop-down list.
Select one of the following databases from the Source drop-down list:
- Teradata
- Oracle
- SQL Server
- DB2
- PostgreSQL
To use a custom data source name, turn on the Use Custom DSN toggle.
- Enter the custom data source name in the DSN text box.
- Ensure that the specified DSN is present in the odbc.ini configuration file located in the /opt/protegrity/mbs/conf/ directory.
If you are selecting the Oracle database as the source database, then enter the service name in the Service Name text box.
Note: This step is applicable for the Oracle database only.
If you are not using Custom DSN, then the following steps are applicable.
Enter the database name in the Database text box.
Enter the host name in the Host text box.
Enter the port to connect to the database in the Port text box.
Enter the username in the Username text box.
Enter the password in the Password text box.
Click Save.
The message Member Source has been created successfully appears.
6.4.7 - Working with Roles
The authorization criteria for each user are defined in the form of role membership. Roles determine and define the unique data access privileges for each member. Each role is associated with a level of authorization granted to all its members, including specific data access privileges.
The following figure shows the New Role screen.
The following table provides the description for each element available on the Web UI.
Callout | UI Element | Description |
---|---|---|
1 | Name | The unique name of the role. |
2 | Description | The description of the role. |
3 | Mode | The refresh mode for that role. For more information about refresh mode, refer to section Mode Types for a Role. |
4 | Applicable to all members | If enabled, the specific role will be applied to any member that does not belong to any other role. |
6.4.7.1 - Creating a Role
To create a role:
On the ESA Web UI, navigate to Policy Management > Roles & Member Source > Roles.
Click Add New Role.
The New Role screen appears.
Enter a unique name for the role in the Name textbox.
Note: Ensure that the length of the role name does not exceed 55 characters.
Enter the required description for the role in the Description textbox.
In the Mode drop-down, select a refresh mode.
For more information about mode types for a role, refer to section Mode Types for a Role.
If you want to apply this role to all members in all the member sources, click Applicable to all members. If enabled, the role is applied to all members in users or groups that do not belong to any other role.
Click Save.
6.4.7.2 - Mode Types for a Role
The modes that are available are Automatic, Semi-automatic, and Manual.
The synchronization of members can be described as follows:
- Synchronization between the Hub Controller and the Member Source: The Member Source component is responsible for synchronizing the latest changes made in the external sources, such as LDAP, AD, file, or database. In the ESA, the HubController synchronizes with the Member Source once an hour to update the policy with any changes detected in roles.
- Automatic Mode
- In automatic mode, groups from the member sources are synchronized periodically without user intervention. The synchronization happens every hour. The updated policy is deployed automatically after the synchronization.
- Semi-Automatic Mode
- Semi-Automatic mode is similar to the automatic mode with the exception that you must synchronize the groups manually. The updated policy is deployed automatically after the manual synchronization.
For a new member added to a group, you can manually synchronize the changes by setting the mode to semi-automatic. Then, you can use the Synchronize Members button from the Members tab of a Role screen.
- Manual Mode
- The roles with mode type as Manual can accept both groups and users. You must manually synchronize the groups. After manual synchronization of members, you must set the policy as Ready to Deploy followed by deploying the policy manually.
For a new member added to a group, you can manually synchronize the changes by clicking the Synchronize Members button from the Members tab of a Role screen.
Note: If a user having the same name but with different letter case appears in multiple roles within a policy, then it can cause permission issues when the policy is deployed. This can happen if the user has different permissions in each role.
To avoid this issue, when the members are automatically synchronized, and users having the same name but different letter case appear in roles, an error is generated. This error appears in the Notifications section of the ESA dashboard to inform you that such conflicting users have been found. The error specifies the correlation ID of the HubController audit log that has been generated. To identify the conflicting users, navigate to the Discover page in the Audit Store Dashboards and search for the specified correlation ID.
6.4.7.3 - Adding Members to a Role
This section describes the steps to add members to a role.
To add members to a role:
On the ESA Web UI, navigate to Policy Management > Roles & Member Source > Roles.
Click on the role name link to which you want to add members.
The selected role screen appears.
In the Members tab, click Add.
The Add Members screen appears.
In the Choose Member Source drop-down, select the Member Source.
In the Display Member Type drop-down, select the member type.
For the Automatic or Semi-Automatic mode, members of type Users are removed from the role. In this case, the Display Member Type drop-down is disabled and set to the default Group member type.
Enter the filter parameter in the Filter Members text box.
It accepts characters such as ‘*’ to display all results, or a word to search for that word.
For more information about filtering members from AD and LDAP member sources, refer to the sections Filtering Members from AD and LDAP Member Sources and Filtering Members from Azure AD Member Source.
Select the number of display results in the Display Number of Results spin box.
Click Next.
Step 2 of the Add Members dialog box appears.
Note: The ID column displays the unique identifier for the Azure AD, Posix LDAP and Active Directory member sources.
Select the check box next to each member you want to add.
Click Add.
The selected members are added to the role.
Click Save to save the role.
In addition to the Members tab, you can find:
- Policies: It displays all the policies that are linked to this role.
- Data Stores: It displays all the data stores that are linked to this role.
6.4.7.3.1 - Filtering Members from AD and LDAP Member Sources
The following table lists some examples using different AD and LDAP search criteria to filter the members.
Search Criteria | Description |
---|---|
* | Retrieves all users and groups |
Character or word search | Retrieves the results that contain the specified character or word |
(cn=*protegrity*) | Retrieves all common names that contain the term protegrity in it |
(sn=abc*) | Retrieves all surnames that start with abc |
(objectClass=*) | Retrieves all the results |
(&(objectClass=user)(!(cn=protegrity))) | Retrieves all the users without the common name as protegrity |
(&(cn=protegrity)(objectClass=user)(email=*)) | Retrieves all the users with an email attribute and with common name as protegrity |
(!(email=*)) | Retrieves all the users without an email attribute |
(&(objectClass=user)(| (cn=protegrity*)(cn=admin*))) | Retrieves all the users with common name that starts with protegrity or admin |
If the input in the search filter includes special characters, then you must use the escape sequence in place of the special character to make it a valid input in the search filters.
The following table lists the escape sequence for each of the special characters.
ASCII Character | Escape Sequence |
---|---|
( | \28 |
) | \29 |
* | \2A |
\ | \5C |
The following table lists some examples of search filters with the usage of escape sequences to include special characters in the search input.
Input with Special Character | Input with Escape Sequence | Description |
---|---|---|
(cn=protegrity*)) | (cn=protegrity\2A\29) | The search filter retrieves the values that contain protegrity*). In this case, the parenthesis requires an escape sequence because it is unmatched. |
(cn= abc (xyz) abc) | | The search filter retrieves the values that contain abc (xyz) abc. In this case, the escape sequence is not required because the parentheses are matched. |
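As an illustration of the escape sequences listed above, the following sketch escapes the special characters in a literal value before it is placed inside a search filter. It is only an example, not part of the product, and it should be applied to the literal value, not to the filter's own parentheses and operators.

```python
LDAP_ESCAPES = {"\\": r"\5C", "(": r"\28", ")": r"\29", "*": r"\2A"}

def escape_ldap_literal(value):
    """Illustrative only: replace special characters with their escape sequences."""
    value = value.replace("\\", LDAP_ESCAPES["\\"])   # escape the backslash first
    for char in "()*":
        value = value.replace(char, LDAP_ESCAPES[char])
    return value

print(escape_ldap_literal("protegrity*)"))  # protegrity\2A\29
```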
6.4.7.3.2 - Filtering Members from Azure AD Member Source
The following table lists an example for the Azure AD search criteria to filter the members.
Search Criteria | Description |
---|---|
startsWith(displayname,‘xyz’) | Retrieves all groups and users that start with xyz Note: For more information and examples about the filter criteria for the Azure AD member source, search for the text Advanced query capabilities on Azure AD on Microsoft’s Technical Documentation site at: https://learn.microsoft.com/en-us/docs/ |
6.4.7.4 - Synchronizing, Listing, or Removing Members in a Role
The following figure explains the steps to synchronize, list or remove members in a role.
Note: The ID column displays the unique identifier for the Azure AD, Posix LDAP and Active Directory member sources.
The following table provides the description for each element available on the Web UI.
Callout | Task Name | Steps |
---|---|---|
1 | Synchronize Members | 1. Select the role you want to update by clicking on it in the ESA Web UI, under Policy Management > Roles & Member Sources > Roles. 2. Click the Synchronize Members icon. A status message appears. |
2 | List Group Members | 1. Select the role you want to update by clicking on it in the ESA Web UI, under Policy Management > Roles & Member Sources > Roles. 2. Click the List Group Members icon. The dialog box appears with the list of all members in the group. |
3 | Remove Members | 1. Select the role you want to update by clicking on it in the ESA Web UI, under Policy Management > Roles & Member Sources > Roles. 2. Click Remove. A confirmation dialog box appears. 3. Click Ok. |
6.4.7.5 - Searching User
The Search Member screen, available from the Roles & Member Sources screen, provides a way to search for users and their associated roles. It also provides additional details such as the time the user was added, and the role and member source to which the user belongs.
For example, if you want to check the role of a user along with its member source, then the search results provide a way to troubleshoot or verify the user-role mapping.
To search a member:
On the ESA Web UI, navigate to Policy Management > Roles & Member Source > Search Member.
Enter the search criteria in the Member Name textbox.
For more on valid search criteria, refer to Search Criteria.
Click Search.
The search results appear.
Search Criteria

Consider the following scenario:
- You have created a file member source named MemberSource1 which includes:
  - A Group File named examplegroups with users examplegroupuser1 and examplegroupuser2
  - A User File named exampleusers with users exampleuser1 and exampleuser2
- You have created a role named Role1.
- You have added all users from MemberSource1 to Role1.

For the given example, the following table lists the search results with different search criteria.

Table: Search Criteria

Search Criteria | Description | Output |
---|---|---|
Wild card | Search with ‘*’ | It displays all the members. |
Character search | Search with ‘1’ | It displays examplegroupuser1 and exampleuser1. |
Word search | Search with ‘group’ | It displays examplegroupuser1 and examplegroupuser2. |

You can perform additional actions on the search results, such as:
- Clicking on the Role or Source column values redirects you to the Roles or Member Sources page respectively.
- Members can be sorted based on the Name, Added Time, Role, or Source columns.
- Search results can also be filtered with another search option, which is provided in the search results.
6.5 - Creating and Deploying Policies
The following figure displays a sample policy.
You can add data elements and roles, link the policy to a data store, and deploy the policy to the protector nodes. You can also set different permissions to control the content restrictions for a policy.
You can create two types of policies:
- Structured Policy - Policy that supports column-level database protection and integrates policies into applications using an API. This policy type contains only structured data elements.
- Unstructured Policy - Policy that provides support for file protection. This policy type contains only unstructured data elements. It is only supported for File Protectors. The unstructured policy is not applicable for 10.0.0 protectors.
A policy is in one of the following states:
- Ready to Deploy – The policy is created with the required information and ready for deployment.
- Deployed – The policy is ready to be distributed to the protectors.
You can modify a policy at any point in time. If a policy that is deployed is modified, then the policy returns to the Ready to Deploy state.
The Deploy Status is only applicable for 9.x.x.x protectors and earlier. It is not applicable for 10.0.0 protectors and later.
For 10.0.0 protectors and later, you can access this information from the Protegrity Dashboard.
The Policy Management Web UI is primarily used to create policies and related metadata.
6.5.1 - Creating Policies
Creating a Structured Policy
This section describes the steps to create a structured policy.
To create a structured policy:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Click Add New Policy.
The New Policy screen appears.
Select Structured from Type.
Type a unique name for the policy in the Name textbox.
Note: The maximum length of the policy name is 55 characters.
Type the description for the policy in the Description textbox.
Under the Permissions tab, select the required permissions.
For more information about adding permissions, refer to Adding Permissions to Policy.
Under the Data Elements tab, add the required data elements.
For more information about adding data elements, refer to Working with Data Elements.
Under the Roles tab, add the required roles.
For more information about adding roles, refer to Working with Roles.
Under the Data Stores tab, add the required Data Stores.
For more information about adding data stores, refer to Working with Data Stores.
Click Save.
A message Policy has been created successfully appears.
Creating an Unstructured Policy
This section describes the steps to create an unstructured policy.
To create an unstructured policy:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Click Add New Policy.
The New Policy screen appears.
Select Unstructured from Type.
Type a unique name for the policy in the Name textbox.
The maximum length of the policy name is 55 characters.
Type the description for the policy in the Description textbox.
Under the Permissions tab, select the required permissions.
For more information about adding permissions, refer to Adding Permissions to Policy.
Under the Data Elements tab, add the required data elements.
For more information about adding data elements, refer to Working with Data Elements.
Under the Roles tab, add the required roles.
For more information about adding roles, refer to Working with Roles.
Under the Data Stores tab, add the required Data Stores.
For more information about adding data stores, refer to Working with Data Stores.
Click Save.
A message Policy has been created successfully appears.
6.5.2 - Adding Data Elements to Policy
Adding Data Elements for Structured Policies
This section describes the steps to add data elements for structured policies.
To add data elements for structured policies:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Select the policy.
The screen to edit the policy appears.
Click the Data Elements tab.
Click Add.
The list of all the available data elements appears.
Select the data elements.
Click Add.
A message Selected Data Elements have been added to the policy successfully appears.
Adding Data Elements for Unstructured Policies
This section describes the steps to add data elements for unstructured policies.
To add data elements for unstructured policies:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Select the policy.
The screen to edit the policy appears.
Click the Data Elements tab.
Click Add.
The list of data elements created for unstructured data appears.
Select the data elements.
Click Add.
A message Selected Data Elements have been added to the policy successfully appears.
6.5.3 - Adding Roles to Policy
Adding Roles to Policies
You add roles to a policy to enforce user access to data. The roles can be set up to enable granular access control on the sensitive enterprise data. You can add one or more roles to a policy.
To add roles to policies:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Select the policy.
The screen to edit the policy appears.
Click the Roles tab.
Click Add.
The list of roles created appears.
Select the roles.
Click Add.
A message Selected Roles have been added to the policy successfully appears.
6.5.4 - Adding Permissions to Policy
Using the policy permissions, the system determines what is returned to a user who wants to view protected data. If the user has the appropriate permissions, then the data is decrypted or detokenized. If permission is denied, then a NULL value is returned by default. Depending on your data element and policy settings, the system can instead return a no-access value (such as an Exception or Protected value). The permissions are always defined in the context of a role and a data element.
You can set a no-access value, such as an Exception or Protected value, by editing the permission settings for a role or a data element.
For more information about editing the permission settings of a role or data element, refer to the Customizing Permissions for Data Element in a Policy or Customizing Permissions for Role in a Policy.
The following table describes the different permissions that you can set for structured data.
Permission | Options | Permission Description |
---|---|---|
Content | Unprotect | Allow members to get protected data in cleartext. |
Protect | Allow members to add and protect the data. Note: From 10.1.0, if you have selected the HMAC-SHA256 data elements, then only the Protect option is enabled. The other options, such as Reprotect and Unprotect, are grayed out. | |
Reprotect | Allow members to reprotect the protected data with a new data element. |
The following table describes the permissions that you can set for unstructured data. These permissions are applicable only for the File Protector.
Permission | Options | Permission Description |
---|---|---|
Content | Unprotect | Allow members to get protected data in cleartext. |
Protect | Allow members to add data and protect it as needed. | |
Reprotect | Allow members to reprotect the protected data with a new data element. | |
Object | Create | Allow members to create a file or directory. |
Admin Permissions | Manage Protection | Allow members to add or remove protection. |
You can also set permissions or rules using the Policy Management REST API.
Setting Default Permissions for a Policy
This section describes the steps to set the default permissions for a policy.
To set default permissions for a policy:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Select the required policy.
The screen to edit the policy appears.
Click the Permissions tab.
The following screen appears.
Select the required permissions.
For more information about the permissions, refer to the tables Permissions for Structured Data and Permissions for Unstructured Data.
Click Save.
The permissions are set for the policy.
Customizing Permissions for Data Elements in a Policy
You can edit the permissions for an individual data element. When you edit the permissions for a data element, then you change the permissions for the roles associated with the data element.
To customize permissions for data element in a policy:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Select the required policy.
The screen to edit the policy appears.
Click the Data Elements tab.
Click Edit Permissions.
The screen to update the permissions for the role appears.
Select the required permissions.
Note: If you are using masks with any data element, then ensure that masks are created before editing permissions.
Click Save.
A message Permissions have been updated successfully appears.
Note: The customized permissions, if any, override the default permissions for any policy.
Customizing Permissions for Roles in a Policy
You can edit the permissions for individual roles. When you edit the permissions for a role, then you change the permissions for the data elements associated with the role.
To customize permissions for role in a policy:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Select the policy.
The screen to edit the policy appears.
Click the Roles tab.
Click Edit Permissions.
The screen to update the permissions for the role appears.
Select the permissions.
Click Save.
A message Permissions have been updated successfully appears.
Note: The customized permissions, if any, override the default permissions for any policy.
6.5.4.1 - Permission Conflicts and Collision Behavior
Masking Rules for Users in Multiple Roles
If the mask settings, which are applied along with the permission settings, for a user in multiple roles result in a conflict, then the resultant output differs.
Consider a scenario where user U1, under policy P1, is associated with roles R1, R2, and R3 that are connected to the data element DE1 with different mask settings (Left, Right) and output formats. The resultant output is determined as per the following table.
Role | User | Data Element | Output Format | Mask Settings | Resultant Output |
---|---|---|---|---|---|
R1 | U1 | DE1 | MASK | Left: 1, Right: 2 | Left: 1, Right: 2 |
R1 | U1 | DE1 | MASK | Left: 1, Right: 2 | Left: 1, Right: 2 |
R2 | U1 | DE1 | MASK | Left: 1, Right: 2 | |
R1 | U1 | DE1 | MASK | Left: 1, Right: 2 | There is conflict in the mask settings (Left, Right) and thus, the Unprotect access is revoked with NULL as the output. |
R2 | U1 | DE1 | MASK | Left: 0, Right: 5 | |
R1 | U1 | DE1 | MASK | Left: 1, Right: 2 with mask character ‘*’ | There is conflict in the mask character settings and thus, the Unprotect access is revoked with NULL as the output. |
R2 | U1 | DE1 | MASK | Left: 1, Right: 2 with mask character ‘/’ | |
R1 | U1 | DE1 | MASK | Left: 1, Right: 2 | There is conflict in the mask settings (Left, Right) and thus, the Unprotect access is revoked with NULL as the output. |
R2 | U1 | DE1 | MASK | Left: 1, Right: 2 | |
R3 | U1 | DE1 | MASK | Left: 0, Right: 5 | |
R1 | U1 | DE1 | MASK | Left: 1, Right: 1 with masked mode | There is conflict in the mask settings and thus, the Unprotect access is revoked with NULL as the output. For example: If the value 12345 is masked with Left: 1, Right: 1 settings in masked mode, then it results in *234*. If the value 12345 is masked with Left: 1, Right: 1 settings in clear mode, then it results in 1***5. As the resultant values are conflicting, the Unprotect access is revoked with NULL as the output. |
R2 | U1 | DE1 | MASK | Left: 1, Right: 1 with clear mode | |
R1 | U1 | DE1 | MASK | Left: 1, Right: 2 | There is conflict in the output formats. The resultant output is most permissive, which is CLEAR. |
R2 | U1 | DE1 | CLEAR | ||
R1 | U1 | DE1 | MASK | Left: 1, Right: 2 | There is conflict in the output formats due to conflicting MASK settings. However, with the CLEAR setting applicable in the order of access as per the role R3, the resultant output is most permissive. In this case, it is CLEAR. |
R2 | U1 | DE2 | MASK | Left: 0, Right: 5 | |
R3 | U1 | DE3 | CLEAR |
A data element-role connection with disabled permission for unprotect operation results in a NULL value, by default, and can be set to other no-access values, such as Exception or Protected value.
The following table explains how no-access values work with different output formats for users in multiple roles.
Sr. No. | Role | User | Data Element | No Access Operation | Output Format | Mask Settings | Resultant Output |
---|---|---|---|---|---|---|---|
1 | R1 | U1 | DE1 | MASK | Left: 1, Right: 2 | There is conflict in the output formats. If one of the roles has access, then the output format is used. The resultant output is most permissive, which is MASK. | |
R2 | U1 | DE1 | NULL | ||||
2 | R1 | U1 | DE1 | MASK | Left: 1, Right: 2 | ||
R2 | U1 | DE1 | Protected | ||||
3 | R1 | U1 | DE1 | MASK | Left: 1, Right: 2 | ||
R2 | U1 | DE1 | Exception | ||||
4 | R1 | U1 | DE1 | CLEAR | If one of the roles has access, then the output format is used. The resultant output is most permissive, which is CLEAR. | ||
R2 | U1 | DE1 | NULL | ||||
5 | R1 | U1 | DE1 | CLEAR | |||
R2 | U1 | DE1 | Protected | ||||
6 | R1 | U1 | DE1 | CLEAR | |||
R2 | U1 | DE1 | Exception |
No Access Permissions
If the Unprotect access permission is not assigned to a user, then either the NULL value or a no-access value, such as the Protected or Exception value, is returned. The returned value is based on the permission settings for a role or a data element. If a user is assigned to multiple roles with different permission settings for the data element, then the following no-access permission resolution applies on the protector.
No Access Permission 1 | No Access Permission 2 | Resultant Permission on the Protector |
---|---|---|
Protected | NULL | Protected |
Protected | EXCEPTION | Protected |
Protected | Mask | Mask |
Protected | Clear | Clear |
NULL | EXCEPTION | EXCEPTION |
NULL | Mask | Mask |
NULL | Clear | Clear |
EXCEPTION | Mask | Mask |
EXCEPTION | Clear | Clear |
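The resolution in the table can be read as "the most permissive setting wins". The following sketch is only an illustrative model of that precedence; the ordering is inferred from the rows above and from the earlier output-format table, and it is not the protector's implementation.

```python
# Most permissive first, as implied by the tables above (illustrative model only).
PRECEDENCE = ["Clear", "Mask", "Protected", "Exception", "NULL"]

def resolve(setting_a, setting_b):
    """Return the setting applied for a user whose roles specify both settings."""
    return min(setting_a, setting_b, key=PRECEDENCE.index)

print(resolve("Protected", "NULL"))   # Protected
print(resolve("NULL", "Exception"))   # Exception
print(resolve("Exception", "Mask"))   # Mask
print(resolve("Protected", "Clear"))  # Clear
```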
6.5.4.1.1 - Inheriting Permissions for Users in Multiple Policies and Roles
If a policy user is assigned a role that does not have a specific data element associated with the role, then the user cannot use the data element for performing security operations. If the user tries to use the data element, then an error is generated.
However, consider a policy where you have created a default role that is applicable to all the users. You associate a specific data element with this default role. In this case, the policy user, who is included in another role that is not associated with the specific data element, inherits the permissions for this data element from the default role. This scenario is applicable only if the users are a part of the same policy or a part of multiple policies that are applied to the same data store.
Example:
Policy 1 contains the role R1, which is assigned to the user U1. The role R1 is associated with a data element DE1, which has the permissions Unprotect, Protect, and Reprotect. The user U1 can unprotect, protect, and reprotect the data using the data element DE1.
Policy 2 contains the role R2, which is assigned to the user U2. The role R2 is associated with a data element DE2, which has the permissions Unprotect, Protect, and Reprotect. The user U2 can unprotect, protect, and reprotect the data using the data element DE2.
Policy P3 contains the role R3, which is applicable to all the users. The role R3 is associated with two data elements, DE1 and DE2. Both the data elements have the Unprotect permission associated with them.
Note: ALL USERS denotes a default role that is applicable to all the users. To enable the default role, click the Applicable to all members toggle button on the ESA Web UI. For more information about Applicable to all members, refer to the section Working with Roles.
Use Case 1
The Use Case 1 table demonstrates that roles R1 and R2 have an indirect relationship with data elements DE1 and DE2; that is, they are part of different policies that are deployed to the same data store. Therefore, they inherit the permissions of the default role for data elements DE1 and DE2.
Table 1. Use Case 1
Policy structure in the ESA | Protector side permissions after deploying the policy | ||||||
P1 | R1 | U1 | DE1 | URP | U1 | DE1 | URP |
DE2 | U | ||||||
P2 | R2 | U2 | DE2 | URP | U2 | DE1 | U |
DE2 | URP | ||||||
P3 | R3 | ALL USERS | DE1 | U | ALL USERS | DE1 | U |
DE2 | U | DE2 | U |
As shown in the table, in the case of the old behavior, no permissions have been inherited from the role R3 that is applicable to the data elements DE1 and DE2 for all the users.
The Unprotect permissions highlighted in bold in the table for the new behavior column indicate the permissions that have been inherited from the role R3 that is applicable to the data elements DE1 and DE2 for all the users.
Use Case 2
The Use Case 2 table demonstrates that if roles R1 and R2 have a direct relationship with data elements DE1 and DE2, then they do not inherit the permissions of the default role. In this case, the protector side permissions after deploying the policy are the same as shown in the old behavior and new behavior columns.
Table 2. Use Case 2
Policy structure in the ESA | Protector side permissions after deploying the policy | ||||||
P1 | R1 | U1 | DE1 | URP | U1 | DE1 | URP |
DE2 | - | DE2 | - | ||||
R3 | ALL USERS | DE1 | U | U2 | DE1 | - | |
DE2 | U | DE2 | URP | ||||
P2 | R2 | U2 | DE1 | - | ALL USERS | DE1 | UR |
DE2 | URP | ||||||
P3 | R4 | ALL USERS | DE1 | R | DE2 | UR | |
DE2 | R |
Use Case 3
The Use Case 3 table demonstrates that if roles R1 and R2 have a direct relationship with data elements DE1 and DE2, then they do not inherit the permissions of the default role. In this case, the protector side permissions after deploying the policy are the same as shown in the old behavior and new behavior columns.
Table 3. Use Case 3
Policy structure in the ESA | Protector side permissions after deploying the policy | ||||||
P1 | R1 | U1 | DE1 | URP | U1 | DE1 | URP |
DE2 | - | DE2 | - | ||||
R2 | U2 | DE1 | - | U2 | DE1 | - | |
DE2 | URP | DE2 | URP | ||||
R3 | ALL USERS | DE1 | U | ALL USERS | DE1 | UR | |
DE2 | U | ||||||
R4 | ALL USERS | DE1 | R | DE2 | UR | ||
DE2 | R |
Use Case 4
The Use Case 4 table demonstrates that, as role R2 has an indirect relationship with data element DE1, it has inherited the permissions of the default role. The role R1 has a direct relationship with data element DE1, and it has not inherited the permissions of the default role.
Table 4. Use Case 4
Policy structure in the ESA | Protector side permissions after deploying the policy | ||||||
P1 | R1 | U1 | DE1 | - | U1 | DE1 | - |
DE2 | - | ||||||
R3 | ALL USERS | DE1 | U | U2 | DE1 | U | |
DE2 | URP | ||||||
P2 | R2 | U2 | DE2 | URP | ALL USERS | DE1 | U |
DE2 | - |
As shown in the table, in the case of the old behavior, no permissions have been inherited from the role R3 that is applicable to the data element DE1 for all the users.
The Unprotect permission highlighted in bold in the table for the new behavior column indicates the permission that has been inherited from the role R3 that is applicable to the data element DE1 for all the users.
Use Case 5
The Use Case 5 table demonstrates that Role R1 has inherited the permissions of the default role for the data element DE2.
Table 5. Use Case 5
Policy structure in the ESA | Protector side permissions after deploying the policy | ||||||
P1 | R1 | U1 | DE1 | URP | U1 | DE1 | URP |
DE2 | UP | ||||||
P2 | R3 | ALL USERS | DE2 | U | ALL USERS | DE1 | - |
P3 | R4 | ALL USERS | DE2 | P | DE2 | UP |
As shown in the table, in the case of the old behavior, no permissions have been inherited from the roles R3 and R4 that are applicable to the data element DE2 for all the users.
The resultant permissions highlighted in bold in the table for the new behavior column indicate the permissions that have been inherited from the roles R3 and R4 that are applicable to the data element DE2 for all the users.
Use Case 6
The Use Case 6 table demonstrates that role R1 will inherit the permissions of the default role for the data element DE2.
Table 6. Use Case 6
Policy structure in the ESA | Protector side permissions after deploying the policy | ||||||
P1 | R1 | U1 | DE1 | U | U1 | DE1 | UP |
DE2 | URP | ||||||
P2 | R5 | U1 | DE1 | P | ALL USERS | DE1 | - |
P3 | R3 | ALL USERS | DE2 | URP | DE2 | URP |
As shown in the table, in the case of the old behavior, no permissions have been inherited from the role R3 that is applicable to the data element DE2 for all the users.
The resultant permissions highlighted in bold in the table for the new behavior column indicate the permissions that have been inherited from the role R3 that is applicable to the data element DE2 for all the users.
Use Case 7
The Use Case 7 table demonstrates that if role R1 is related to data element DE1 in policy P1 and role R3 is related to data element DE1 in policy P3, then roles R1 and R3 do not inherit the permissions of the default role. In this case, the protector-side permissions after deploying the policy are the same in the old behavior and new behavior columns.
Table 7. Use Case 7
Policy structure in the ESA | Protector side permissions after deploying the policy | ||||||
P1 | R1 | U1 | DE1 | U | U1 | DE1 | U |
DE2 | - | ||||||
P2 | R1 | U1 | DE2 | - | ALL USERS | DE1 | URP |
P3 | R3 | ALL USERS | DE1 | URP | DE2 | - |
6.5.5 - Deploying Policies
Following are the steps to deploy the policy:
- Make the policy ready for deployment.
- Deploy the policy.
Preparing the Policy for Deployment
This section describes the steps to prepare the policy for deployment.
If all the parameters are valid, then the policy is set to the Ready to Deploy state.
To make the policy ready for deployment:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Select the required policy.
The screen to edit the policy appears.
Click Ready to Deploy.
A message confirming the deployment appears. The Ready to Deploy button becomes inactive and the Deploy button becomes active. The ESA must be up and running to deploy the package to the protectors.
Note:
- When the ESA is offline and the Protector, Resilient Package Proxy (RPP), or the Resilient Package Agent (RPA) is switched off and then switched on: The Protector, RPP, or RPA fails to download the package from the ESA after it is turned on. It retries the download until the package is successfully downloaded.
- When the Protector, RPP, or RPA is running and then the ESA goes offline: The Protector, RPP, or RPA continues to ping the ESA once every minute to get the latest package. If there is no response from the ESA, then the Protector, RPP, or RPA sends a log to inform that it failed to download or check for any possible updates to the package. The package remains published and the protect operation continues. If the RPP is not installed on the ESA, then the RPP provides the cached package to the Protector. Similarly, the RPA provides the downloaded package from the shared memory to the Protector.
Deploying the Policy
This section describes how to deploy the policy after it has been prepared for deployment.
To deploy the policy:
On the ESA Web UI, navigate to Policy Management > Policies & Trusted Applications > Policies.
The list of all the policies appears.
Select the required policy.
The screen to edit the policy appears.
Click Deploy.
A message Policy has been deployed successfully appears.
Note: Redeploy the policy only when there are changes to an existing policy.
If you deploy a policy to a data store, which contains additional policies that have already been deployed, then the policy user inherits the permissions from the multiple policies.
For more information about inheriting permissions, refer to Inheriting Permissions for Users in Multiple Policies and Roles.
6.5.6 - Policy Management using the Policy API
The Policy Management REST APIs are used to create or manage the policies. The policy management functions performed from the ESA Web UI can also be performed using the REST APIs. In addition, the read-only information about the appliance is also available using the REST API. The REST API does not support unstructured data elements and policies.
For more information about the Policy Management APIs, refer to the section Using the DevOps REST APIs in the Protegrity APIs, UDFs, Commands Reference Guide from the Legacy Documents section.
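As an illustration only, a REST client typically calls the ESA over HTTPS using one of the supported authentication methods. In the following sketch, the host name, credentials, and endpoint path are placeholders, not actual Protegrity endpoints; refer to the guide referenced above for the real API paths.
```
# Illustrative only: the host, credentials, and endpoint path below are placeholders.
# Refer to the Protegrity APIs, UDFs, Commands Reference Guide for the actual endpoints.
curl --cacert CA.pem \
     --user "apiuser:password" \
     "https://esa.example.com/<policy-management-endpoint>"
```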
6.6 - Deploying Data Stores to Protectors
After you create a data store with all the required components, you deploy the policy to the nodes.
For more information about Data Stores, refer to section Working with Data Stores.
To deploy the data store:
On the ESA Web UI, navigate to Policy Management > Data Stores.
The list of all the data stores appears.
Select the data store.
The screen to edit the data store appears.
Click Deploy.
A message **Data Store has been deployed successfully** appears.
When the Protector pulls the package that contains a policy added to the data store, it connects to ESA to retrieve the necessary policy information such as members for each role in the policy, token elements, and so on.
6.7 - Managing Policy Components
You can edit or delete the following policy components from Policy Management:
- Data Elements
- Masks
- Alphabets
- Member Sources
- Data Stores
- Trusted Applications
- Policies
- Roles
The following table describes the editable fields for each policy component.
Policy Component | Fields |
---|---|
Data Elements | - Description |
Masks | - Name - Description - Mask Template - Mask Mode - Character |
Alphabets | No Fields Note: After an alphabet entry is added to the Alphabet, the alphabet entry cannot be edited. |
Roles | All fields |
Policies | - Name - Description - Password - Permissions - Data Elements - Roles - Data Stores |
Trusted Applications | - Name - Description - Application Name - Application User - Data Stores |
Member Sources | All fields |
When you click the link for the name of a policy component, you can edit the fields from the component edit panel. Click the Delete icon to delete the policy component.
If a policy component is already added to a policy, then you cannot delete it.
6.8 - Policy Management Dashboard
The following figure shows the Dashboard screen for Policy Management.
The Policy Management dashboard consists of the following three areas:
- Summary
- Keys
- Statuses - The Statuses area is a legacy feature that is not applicable for 10.0.0 protectors and later.
Summary
This section provides information about the Summary area in the Policy Management dashboard.
The Summary area displays an overview of the number of policy components created using the Policy Management Web UI. The following policy components appear under the Summary area:
- Roles
- Data Elements
- Member Sources
You can navigate to the respective policy components by clicking the corresponding number.
For example, click 2 corresponding to Data Elements to view the list of data elements.
Keys
This section provides information about the Keys area in the Policy Management dashboard.
The Keys tab displays an overview of the number of keys managed and created using the Key Management Web UI. The following key components appear under the Keys area:
- Keys Managed: Total number of keys present in the Policy Management system. This includes the Master Keys, Repository Keys, Signing Keys, Data Store Keys, and Data Element Keys. This number also includes all the rotated keys.
- Data Element Keys Managed: Total number of Data Element Keys currently present in the Policy Management system. Each key is connected to an existing Data Element. This number also includes the rotated Data Element Keys. It is important to note that a maximum number of 8191 keys can be present in the system.
- Data Element Keys Created: Total number of Data Element keys that have been ever created in the Policy Management system. This number also includes the keys that you have created earlier but have subsequently removed after deleting a data element. It is important to note that you can create a maximum number of 8191 keys.
The following figure shows the Keys area.
6.9 - Exporting Package for Resilient Protectors
The following table provides information about each component that must be used or configured to export the package from the ESA.
Policy Components | Description | Reference |
---|---|---|
RPS API | After the package is configured on the ESA, the RPS API helps in exporting the package that can be imported to the resilient protectors. For more information about how resilient packages are imported to protectors, refer to the respective Resilient Protector documentation. | For more information about the RPS API, refer to section APIs for Resilient Protectors in the Protegrity APIs, UDFs, Commands Reference Guide. |
RPS Service | The RPS service is installed on the ESA. This service must be up and running before any resource is requested using the RPS API. | - Verifying the RPS service - section Verifying the RPS Service for Policy Deployment on Resilient Protectors in the Installation Guide. - RPS service - section Start and Stop Services in the Appliance Overview Guide. |
Export Resilient Package permission | The Export Resilient Package permission must be assigned to the role that will be granted to the user exporting the package. | For more information about the permission, refer to Managing Roles in the Appliance Overview Guide. |
6.10 - Legacy Features
The following Policy Management features are not applicable for 10.0.0 protectors and later:
- Policy Management > Nodes
- Policy Management > Dashboard > Statuses
For more information about these Legacy features for your supported Protectors earlier than 10.0.0, refer to a corresponding version of the Policy Management Guide from the My.Protegrity website.
For 10.0.0 protectors and later, you can access the information about the protector nodes and statuses from the Protegrity Dashboard.
7 - Key Management
A Key Management solution that an enterprise selects must ensure that data encryption does not disrupt organizational functions. Key Management solutions must provide secure administration of keys through their life cycle. This includes generation, use, distribution, storage, recovery, rotation, termination, auditing, and archival.
7.1 - Protegrity Key Management
The following keys are a part of the Protegrity Key Management solution:
- Key Encryption Key (KEK): The cryptographic key used to protect other keys. It is also known as the Master Key (MK). It protects the Data Store Keys, Repository Keys, Signing Keys, and Data Element Keys.
- Data Encryption Key (DEK): The cryptographic keys used to protect sensitive data. The Data Encryption Keys (DEKs) are categorized as follows:
- Repository Key: It protects the policy information in the ESA.
- Signing Key: The protector utilizes the signing key to sign the audit logs for each data protection operation.
- Data Store Key: These keys are no longer used and are only present due to backward compatibility.
- Data Element Key: The cryptographic key that is used to protect sensitive data linked to an encryption data element.
Key Encryption Key (KEK) in Protegrity
- The Protegrity Key Encryption Key (KEK) is known as the Master Key (MK).
- It uses AES with a 256-bit key.
- The MK is non-exportable and is generated and stored within the active Key Store.
- The MK is responsible for protecting all the DEKs.
- The MK, RK, and signing key are generated when the ESA Policy and Key Management is initialized.
Data Encryption Keys (DEKs) in Protegrity
- The Repository Key (RK), Signing Key, Data Store Keys (DSKs), and Data Element Keys are collectively referred to as the DEKs.
- The RK, Signing Key, and DSK are AES 256-bit keys.
- The Data Element Keys can be both 128-bit and 256-bit keys depending on the protection method used.
- The DEKs are generated by the active Key Store.
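As a purely conceptual aid, the relationship between a KEK and a DEK can be sketched with standard OpenSSL commands. This is not how the ESA or the Key Store implements key protection; it only shows one 256-bit key (acting as a KEK) wrapping and unwrapping another key (acting as a DEK), with illustrative file names.
```
# Conceptual sketch only; not the Protegrity implementation.
# Generate a 256-bit key (hex-encoded) to act as the KEK.
openssl rand -hex 32 > kek.key

# Generate a 256-bit key (hex-encoded) to act as a DEK.
openssl rand -hex 32 > dek.key

# Protect (wrap) the DEK with the KEK using AES-256.
openssl enc -aes-256-cbc -pbkdf2 -in dek.key -out dek.key.enc -pass file:kek.key

# Unprotect (unwrap) the DEK with the KEK.
openssl enc -d -aes-256-cbc -pbkdf2 -in dek.key.enc -out dek.key.plain -pass file:kek.key
```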
Key Usage Overview
In the Protegrity Data Security Platform, endpoint protection is implemented through policies. The keys form a part of the underlying infrastructure of a policy and are not explicitly visible.
The following figure provides an overview of the key management workflow.
- All MKs are stored in the Key Store.
- In the ESA, all DEKs are stored and protected by the MK.
- In the Protector, the Signing key and Data Element Keys are stored in the memory.
Certificates
Certificates in Protegrity are generated when the ESA is installed. These certificates are used for internal communication between various components in the ESA. Their related keys are used for communication between the ESA and protectors.
For more information about certificates, refer to the section Certificates in ESA in the Certificate Management.
Key Store
A Key Store is a device used to generate keys, store keys, and perform cryptographic operations. The MK is stored in the Key Store and it is used to protect and unprotect DEKs.
When an enterprise implements a data protection solution in their infrastructure, they must carefully consider the type of Key Store to use as part of the implementation strategy. The Key Store can be connected to the Soft HSM, HSM, or KMS.
When the ESA is installed, the internal Protegrity Soft HSM generates the Master Key (MK). When the Key Store is switched, a new MK is generated in the new Key Store. The existing DEKs are re-protected using this new MK and the old MK is deactivated.
Protegrity Soft HSM: The Protegrity Soft HSM is an internal Soft HSM bundled with the ESA. The Protegrity Soft HSM provides all the functionalities that are provided by an HSM. Using the Protegrity Soft HSM ensures that keys remain within the secure perimeter of the data security solution (ESA).
HSM: The Protegrity Data Security Platform provides you the flexibility, if needed, to switch to an HSM.
Ensure that the HSM supports the PKCS #11 interface.
For more information about switching from the Key Store, refer to the HSM Integration.
Cloud HSM: Cloud-hosted Hardware Security Module (Cloud HSM) service enables you to host encryption keys in a cloud-hosted HSM cluster. You can perform cryptographic operations using this service as well. Protegrity supports both Amazon Web Service (AWS) and Google Cloud Platform (GCP) Cloud HSM. The GCP console is used to define a project where keyrings, keys, and key versions can be created. You can use GCP Cloud HSM to ensure the same key life cycles as on-premise.
Warning: Ensure that the project location supports creating HSM level keys.
For more information about switching Protegrity Soft HSM to Cloud HSM and Configuring the Keystore with the AWS Customer Managed Keys, refer to the Key Management Service (KMS) Integration on the Cloud Platforms.
7.2 - Key Management Web UI
Master Keys Web UI
Information related to Master Keys, such as the state, timestamps for key creation and modification, and so on, is available on the Master Keys Web UI.
The following image shows the Master Keys UI.
Repository Keys Web UI
Information related to Repository Keys, such as the state, timestamps for key creation and modification, and so on, is available on the Repository Keys Web UI.
The following image shows the Repository Keys UI.
Data Store Keys Web UI
Information related to DSKs, such as the state, timestamps for key creation and modification, and so on, is available on the Data Store Keys Web UI.
The following image shows the Data Store Keys UI.
The options available as part of the UI are explained in the following table.
No. | Option | Description |
---|---|---|
1 | Select multiple Data Stores | Select the check box to rotate multiple Data Stores at the same time. |
2 | Data Store Name | Click to view information related to Active DSK and older keys. |
3 | UID | Unique identifier of the key. |
4 | State | Current State of the DSK linked to the Data Store. |
5 | OUP | The period of time in the cryptoperiod of a symmetric key during which cryptographic protection may be applied to data. |
6 | RUP | The period of time during the cryptoperiod of a symmetric key during which the protected information is processed. |
7 | Generated By | Indicates which Key Store generated the key. |
8 | Action | Rotate - Click to rotate the DSK for a Data Store. |
9 | Rotate | Click to rotate the DSKs for multiple selected Data Stores. |
If you click the Data Store name, for example DS1, you can view detailed information about the active key and older keys.
The Action column provides an option to change the state of a key to Destroyed or mark the key as Compromised. For more information about the options available for DSK states, refer to Changing Key States.
Signing Key Web UI
The Signing Key is used to add a signature to log records generated for each data protection operation. These signed log records are then sent from the Protector to the ESA. The Signing Key is used to identify if log records have been tampered with and that they are received from the required protection endpoint or Protector.
A single Signing Key is linked and deployed with all the data stores. At a time, only one Signing Key can be in the Active state.
The following image shows the Signing Keys UI.
Data Element Keys Web UI
Information related to Data Element Keys, such as the state, OUP, RUP, and so on, is available on the Data Element Keys Web UI.
The following image shows the Data Element Keys UI.
The options available as part of the UI are explained in the following table.
No | Option | Description |
---|---|---|
1 | Data Element Name | Click to view information related to the Active Data Element Key and older keys. |
2 | UID | Unique identifier of the key. |
3 | State | Current State of the Data Element Key linked to the Data Element. |
4 | OUP | The period of time in the cryptoperiod of a symmetric key during which cryptographic protection may be applied to data. |
5 | RUP | The period of time during the cryptoperiod of a symmetric key during which the protected information is processed. |
6 | Generated By | Indicates the source of key generation: - Soft HSM - The key has been generated by Protegrity Soft HSM. - Key Store - The Key Store used to generate the key. - Software - This option appears if you have generated the Data Element Key in an earlier version of the ESA before upgrading the ESA to v10.1.0. |
7 | Action | Create New Key: Click to create a new key. This option is available only if you have created a Data Element Key with a key ID. |
If you click the Data Element name, for example AeS256KeyID, then you can view detailed information about an active key and older keys.
If key ID is enabled for a data element, then you click Create New Key to create a new key for the data element.
Important: Starting from the ESA v10.1.0, the Data Element Keys are generated by the active Key Store. When the MK is rotated, all DEKs need to be re-protected. As a result, if you are using a large number of keys, then your system might slow down in the following scenarios:
- You are frequently rotating the keys.
- You are using too many encryption data elements where the Key ID is enabled. This allows you to create multiple keys for the same encryption data element.
- You are using too many data stores.
- Your connection to the HSM is slow.
You can find out the total number of keys currently in use from the Keys area in the Policy Management Dashboard.
7.3 - Working with Keys
Key rotation involves putting the new encryption key into active use. Key rotation can take place when the key is about to expire or when it needs to be deactivated due to malicious threats.
Master Key (MK), Repository Key (RK), Data Store Key (DSK), and Signing Key
The key rotation for KEKs and DEKs in the Protegrity Data Security Platform can be described as follows:
- The Master Key (MK), Repository Key (RK), Data Store Key (DSK), and Signing Key can be rotated using the ESA Web UI.
- The supported states for the MK, RK, DSK, and Signing Key are Active, Deactivated, Compromised, and Destroyed.
- When the ESA is installed, the MK, RK, and Signing Key are in the Active state.
- When the MK, RK, DSK, or Signing Key is rotated, the old key state changes from Active to Deactivated, while the new key becomes Active.
- By default, the MK, RK, DSK, and Signing Key are set for automatic rotation ten days prior to the Originator Usage Period (OUP) expiration date.
- On the ESA Web UI, navigate to Key Management > Key Name to view the key details.
- The rotation of the MK, RK, DSK, and Signing Key from the ESA Web UI requires the user to be assigned the KeyManager role.
Viewing the Key Information
You can view the MK, RK, DSK, and Signing Key information, such as the state, OUP, RUP, and other details, using the Web UI. To view the key information:
- On the ESA Web UI, click Key Management > Required Key tab.
For example, if you select MK, the MK screen appears.
- In the Current key info section, view the current key information.
- The table displays the information related to the older Master Key.
You can rotate the MK, RK, DSK, and Signing Key by clicking the Rotate button.
Changing the Key States
The following table provides information about the possible key states for MK, RK, DSK, and Signing Key that you can change based on their current state.
Current Key State | Can change state to | Reason | State Change |
---|---|---|---|
Active | Deactivated | | Auto |
Deactivated | Compromised | Key is compromised. | Manual |
Deactivated | Destroyed | Organization requirement | Manual |
In the Deactivated key state, you can:
- Click Compromised to mark the key as Compromised and display a Compromised label next to the state.
- Click Destroy to mark the key as Destroyed and display a Destroyed label next to the state.
Data Element Keys
Data elements can have key IDs associated with them. Key IDs are a way to correlate a data element with its encrypted data. When a data element is created, and if the protection method is key based, a unique Data Element Key is generated. This key is seen in the Key Management Web UI.
Information related to Data Element Keys, such as the state, OUP, RUP, and so on, is available on the Data Element Keys Web UI.
To view information about the Data Element Key:
- On the ESA Web UI, click Key Management > Data Element Keys.
The View and manage Data Element Keys screen displays the list of data element keys.
- Click a data element name, for example DE1.
The Data Elements tab appears, which displays the current information about the Data Element Key.
- The table displays the information related to the older Data Element Keys.
Data Element Key States
This section describes the key states for the Data Element Keys.
The following table provides information about the possible key states for the Data Element Keys that you can change based on their current state.
Current Key State | Can change state to | Reason | State Change |
---|---|---|---|
Preactive | Active | Deploying a policy | Auto |
Active | Deactivated | Adding a new key to the data element. If you click the Data Element name, for example AES256KeyID, then you can click the Create New Key button to create a new key for the data element. | Auto |
When you create a new key, its state is set to Preactive.
Key Cryptoperiod and States
Cryptoperiods can be defined as the time span for which the key remains available for use across an enterprise. Setting cryptoperiods ensures that the probability of key compromise by external threats is limited. Shorter cryptoperiods provide stronger security.
In the ESA, the Master Key, Repository Key, Signing Key, Data Store Key, and the Data Element Keys are governed by cryptoperiods. For these keys in the ESA, the validity is dictated by the Originator Usage Period (OUP) and the Recipient Usage Period (RUP). The OUP is the period until when the key can be used for protection, while the RUP is the period during which the key can be used only to unprotect data.
For keys in Protegrity, the following table provides the OUP and RUP information.
Key Name | OUP | RUP |
---|---|---|
Master Key | 1 Year | 1 Year |
Repository Key | <=2 Years | <=5 Years |
Data Store Key | <=2 Years | <=5 Years |
Signing Key | <=2 Years | <=5 Years |
Data Element Key | <=2 Years | <=5 Years |
For more information about key states, refer to Changing Key States.
7.4 - Key Points for Key Management
- The user must have the KeyManager role to rotate the Repository Key (RK), Master Key (MK), Signing Key, and Data Store Key (DSK).
- Key Rotation must be performed only after reviewing the existing policies or regulatory compliances followed by your organization.
- It is essential that a Corporate Incident Response Plan is drafted to:
- Understand the security risks of key rotation.
- Handle situations where keys might be compromised.
- Consult with security professionals, such as Protegrity, to understand how to enable key rotation. Minimize the impact on business processes affected by the keys during this process.
7.5 - Keys-Related Terminology
The following table provides an introduction to terminology related to keys that can help you understand Protegrity Key Management.
Term | Definition |
---|---|
Master Key (MK) | It is generated and stored in the Key Store when the Key Management is initialized, when the Key Store is switched, or when the active key is rotated. The MK protects all DEKs in the Policy Repository. |
Repository Key (RK) | It is generated in the configured Key Store when the Key Management is initialized or when the active key is rotated. It is protected by the MK. It protects the Policy Repository in the ESA. |
Data Store Keys (DSK) | It is generated in the configured Key Store when a Data Store is created. It is protected by the MK. It is used only to protect staging located on the ESA. |
Signing Key | It is generated in the configured Key Store when the ESA is installed and key management is initialized. It is protected by the MK. It is used by the Protector to add a signature to the log records generated for each data protection operation, which are then sent from the Protector to the ESA. The Signing Key helps to identify that the log records have not been tampered with and are received from the required protection endpoint or Protector. |
Key Encryption Keys (KEK) | It protects other keys. In Protegrity Data Security Platform, the MK is the KEK. |
Data Encryption Keys (DEK) | It is used to protect data. In the Protegrity Data Security Platform, the RK, Signing Key, DSK, and Data Element Keys are the DEKs. |
Data Element Keys | It is generated when a data element is created. This key protects the sensitive data. |
Protegrity Soft HSM | It is internally housed in the ESA. It is used to generate keys and to store the Master Key. |
Key Store - HSM or KMS | The Key Store can be a Hardware Security Module (HSM), or other supported Key Management Service (KMS) that can store keys and perform cryptographic operations. |
NIST 800-57 | NIST Special Publication 800-57 defines best practices and recommendations for the Key Management. |
FIPS 140-2 | Federal information process standard (FIPS) used to accredit cryptographic modules. |
PKCS#11 Interface | Standard API for Key Management. |
Key States | The state of a key during the key life cycle. |
Cryptoperiods | The time span during which a specific key is authorized for use or in which the keys for a given system or application may remain in effect. |
Originator Usage Period (OUP) | The period of time in the cryptoperiod of a symmetric key during which cryptographic protection may be applied to data |
Recipient Usage Period (RUP) | The period of time during the cryptoperiod of a symmetric key during which the protected information is processed. |
Endpoint | It is the protection endpoint. In most cases, it is the Protector. |
Policy Repository | Internal storage in ESA, which stores policy information including the Master key properties and all DEK properties. |
8 - Certificate Management
8.1 - Certificates in the ESA
Digital certificates are used to encrypt online communication and authentication between two entities. For two entities exchanging sensitive information, the one that initiates the request for exchange can be called the client. The one that receives the request and constitutes the other entity can be called the server.
The authentication of both the client and the server involves the use of digital certificates issued by the trusted Certificate Authorities (CAs). The client authenticates itself to a server using its client certificate. Similarly, the server also authenticates itself to the client using the server certificate. Thus, certificate-based communication and authentication involves a client certificate, server certificate, and a certifying authority that authenticates the client and server certificates.
Protegrity client and server certificates are self-signed by Protegrity. However, you can replace them with certificates signed by a trusted commercial CA. These certificates are used for communication between various components in the ESA.
The certificate support in Protegrity involves the following:
- The ability to replace the self-signed Protegrity certificates with CA-based certificates. For more information about replacing the self-signed Protegrity certificates with CA-based certificates, refer to the section Changing Certificates.
- The retrieval of the username from client certificates for the authentication of user information during policy enforcement.
- The ability to download the server’s CA certificate and upload it to a certificate trust store to trust the server certificate for communication with the ESA.
Points to remember when uploading the certificates (a verification example follows this list):
- The ESA supports the upload of certificates with a strength of 4096 bits. You can upload a certificate with a strength of less than 4096 bits, but the system shows a warning message.
- Custom certificates for Insight must be generated using a 4096-bit key.
- When uploading, make sure the certificate version is v3. Uploading certificates with a version lower than v3 is not supported.
- When uploading, make sure that the certificate uses RSA keys. Certificates with other keys are not supported.
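As a convenience, you can check these properties of a certificate with a standard OpenSSL command before uploading it; the file name below is a placeholder.
```
# Display the version (must be v3), key algorithm (must be RSA), and key size (4096 bits recommended).
openssl x509 -in my_certificate.pem -noout -text | grep -E "Version|Public Key Algorithm|Public-Key"
```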
The various components within the Protegrity Data Security Platform that communicate with and authenticate each other through digital certificates are:
- ESA Web UI and ESA
- ESA and Protectors
- Protegrity Appliances and external REST clients
As illustrated in the figure, the use of certificates within the Protegrity systems involves the following:
Communication between ESA Web UI and ESA
When the ESA Web UI communicates with the ESA, the ESA provides its server certificate to the browser. In this case, only server authentication takes place: the browser verifies that the ESA is the trusted server.
Communication between ESA and protectors
When the ESA and the protectors communicate, certificates are used to mutually authenticate both entities. The server and the client, that is, the ESA and the protector respectively, ensure that both are trusted entities. The protectors can be hosted on customer business systems or on a Protegrity Appliance.
Communication between Protegrity Appliances and external REST clients
Certificates ensure the secure communication between the customer client and Protegrity REST server or between the customer client and the customer REST server.
8.2 - Certificate Management in ESA
When ESA is installed, it generates default self-signed certificates in X.509 v3 PEM format. These certificates are:
- CA Certificate – This consists of the CA.pem and CA.key file.
- Server Certificate - This consists of the server.pem and server.key file.
- Client Certificate - This consists of the client.pem and client.key file.
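If needed, the subject, issuer, and validity period of these default certificates can be inspected from the OS Console with a standard OpenSSL command. The path below is the certificate location used elsewhere in this guide.
```
# Inspect the default self-signed CA certificate that is generated when the ESA is installed.
openssl x509 -in /etc/ksa/certificates/CA.pem -noout -subject -issuer -dates
```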
The services that use and manage these certificates are:
- Management – It is the service that manages certificate-based communication and authentication between the ESA and its internal components, such as LDAP, the Appliance queue, and protectors.
- Web Services – It is the service that manages certificate-based communication and authentication between the ESA and external clients (REST).
- Consul – It is the service that manages certificates between the Consul server and the Consul client.
The ESA provides a certificate manager where you can manage the default certificates and also upload your own CA-signed certificates. This manager comprises the following two components:
- Certificate Repository
- Manage Certificates
Note: When creating a CA-signed client certificate that you want to use in the ESA, it is mandatory that you set the CN attribute of the client certificate to "Protegrity Client".
If there are CA cross-signed certificates with the AddTrust legacy, then you must upload the active intermediate certificates from the Manage Certificates page. If expired certificates are present in the certificate chain, then it might lead to failures.
For more information about uploading the updated certificates, refer to the section Changing Certificates.
For more information about the CA cross-sign certificates with the AddTrust legacy, refer to https://support.sectigo.com/articles/Knowledge/Sectigo-AddTrust-External-CA-Root-Expiring-May-30-2020.
If other attributes, such as the email address or name, are appended to the CN attribute, then perform the following steps to set the CN attribute to Protegrity Client.
For example, if the CN attribute is set as Protegrity Client/emailAddress=user@abc.com
, then the attributes appended after the / delimiter must be removed.
In the ESA CLI Manager, navigate to Administration > OS Console
Open the pty_get_username_from_certificate.py file using a text editor.
/etc/ksa/pty_get_username_from_certificate.py
Comment the line containing the CN attribute and enter the following regular expression:
REG_EX_GET_VAL_AFTER_CN = "CN=(.*?)\/"
Save the changes.
Navigate to Administration > Services
Restart the Service Dispatcher service.
8.2.1 - Certificate Repository
A Certificate Revocation List (CRL) is a list containing entries of digital certificates that are no longer trusted as they are revoked by the issuing Certificate Authority (CA). The digital certificates can be revoked for one of the following possible reasons:
- The certificate is expired.
- The certificate is compromised.
- The certificate is lost.
- The certificate is breached.
CRLs are used to avoid the usage of certificates that are revoked and are used at various endpoints, including web browsers. When a browser makes a connection to a site, the identity of the site owner is checked using the server’s digital certificate. Also, the validity of the digital certificate is verified by checking that it is not listed in the Certificate Revocation List. If the certificate entry is present in this list, then the authentication for that revoked certificate fails.
To access the Certificate Repository screen, on the ESA Web UI, navigate to Settings > Network > Certificate Repository. The following figure and table provide the details about the Certificate Repository screen.
Callout | Action | Description |
---|---|---|
1 | ID | ESA generated ID for the certificate and key file. |
2 | Type | Specifies the type of the file i.e. certificate, key, or CRL. |
3 | Archive time | It is the timestamp when the certificate was uploaded to the certificate repository. |
4 | Status | This column shows the status of the certificate in the Certificate Repository. |
5 | Description | Displays the description given by the user when the certificate is uploaded to Certificate Repository. It is recommended to provide a meaningful description while uploading a certificate. |
6 | ![]() | Allows you to delete multiple selected certificates or CRLs from the Certificate Repository. Note: Only expired certificates or CRLs can be deleted. |
7 | ![]() | Provides additional information or details about a certificate and its private key (if uploaded). |
8 | ![]() | Allows you to delete the certificate or CRL from the Certificate Repository. Note: Only expired certificates or CRLs can be deleted. |
8.2.2 - Uploading Certificates
To upload certificates:
On the ESA Web UI, navigate to Settings > Network > Certificate Repository.
Click Upload New Files.
The Upload new file to repository dialog box appears.
Click Certificate/Key to upload a certificate file and a private key file.
CAUTION: Certificates have a public and a private key. The public key is included in the certificate and, as a best practice, the private key is maintained as a separate file. In the ESA, you can upload either the certificate file alone or both the certificate and the private key file together. In the ESA Certificate Repository, it is mandatory to upload the certificate file.
CAUTION: If the private key is inside the certificate file, then you have the option to upload just the certificate file.
Note: It is recommended to use a private key with a length of 4096 bits.
Click Choose File to select both certificate and key files.
Enter the required description in the Description text box.
Click Upload.
CAUTION: If the private key is encrypted, a prompt to enter the passphrase will be displayed.
The certificate and the key file are saved in the repository and the Certificate Repository screen is updated with the details.
When you upload a private key that is protected with a passphrase, the key and the passphrase are stored in the hard disk. The passphrase is stored in an encrypted form using a secure algorithm. The passphrase and the private key are also stored in the system memory. The services, such as Apache, RabbitMQ, or LDAP, access this system memory to load the certificates.
If you upload a private key that does not have a passphrase, the key is stored in the system memory. The services, such as Apache, RabbitMQ, or LDAP access the system memory to load the certificates.
If you are using a proxy server to connect to the Internet, ensure that you upload the required custom certificates of that server in ESA from the Certificate Repository screen.
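Before uploading a certificate and private key pair, you can optionally confirm on any machine with OpenSSL that they belong together and check the key length. This sketch assumes an RSA key, and the file names are placeholders.
```
# The two digests must match if the certificate and the private key belong together.
openssl x509 -noout -modulus -in my_certificate.pem | openssl md5
openssl rsa -noout -modulus -in my_private.key | openssl md5

# Check the key length; 4096 bits is recommended.
openssl rsa -in my_private.key -noout -text | grep "Private-Key"
```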
8.2.3 - Uploading Certificate Revocation List
To upload CRL:
On the ESA Web UI, navigate to Settings > Network > Certificate Repository.
Click Upload New Files.
The Upload new file to repository dialog box appears.
Click Certificate Revocation List to upload a CRL file.
Click Choose File to select a CRL file.
Enter the required description in the Description text box.
Click Upload.
A confirmation message appears and the CRL is uploaded to the ESA.
8.2.4 - Manage Certificates
The following figure and table provide the actions available from the Manage Certificates screen.
Callout | Action | Description |
---|---|---|
1 | ![]() | Gives information about Management and Web Services groups. |
2 | ![]() | Download the server’s CA certificate. You can download only the server’s CA certificate and upload it to another certificate trust store to trust the server certificate for communication with ESA. |
3 | ![]() | Gives additional information or details about a certificate. |
8.2.5 - Changing Certificates
To change certificates:
On the ESA Web UI, navigate to Settings > Network > Manage Certificates.
Click Change Certificates.
The Certificate Management wizard appears with the CA Certificate(s) section.
Select the check box next to the CA Certificate that you want to set as active.
CAUTION: This section shows the server, client, and CA certificates together. However, ensure that you select only the required certificates in their respective screens. You can select multiple CA certificates for the ESA Management and Web Services sections. The ESA allows you to have only one server certificate and one client certificate active at any given time.
Click Next.
The Server Certificate section appears.
Select the check box next to the Server Certificate that you want to set as active.
Click Next.
The Client Certificate section appears.
Select the check box next to the Client Certificate that you want to set as active.
Click Apply.
The following message appears:
The system Management Certificates will be changed and a re-login maybe required. Do you want to continue?
Click Yes.
A confirmation message appears and the active certificates are displayed on the screen.
CAUTION: When you upload a server certificate to the ESA and activate it, you are logged out of the ESA Web UI. This happens because the browser does not trust the new CA-signed server certificate. You must log in again for the browser to get the new server certificate and to use it for all further communications.
8.2.6 - Changing CRL
To change CRL:
On the ESA Web UI, navigate to Settings > Network > Manage Certificates.
Click Revocation List.
The Certificate Revocation List dialog box appears.
Select the Enable Certificate Revocation List check box.
Select the check box next to the CRL file that you want to set as active.
Click Apply.
A confirmation message appears.
8.3 - Certificates in DSG
During the DSG installation process, a series of self-signed SSL certificates is generated. You may use them in a non-production environment. It is recommended to use your own certificates for production use.
When you install a DSG node, the following types of certificates and keys are generated:
- CA certificate – This consists of the CA.pem, CA.crt, and CA.key files.
- Server Certificate – This consists of the service.pem, service.crt, and service.key files.
- Client Certificate – This consists of the client.pem, client.crt, and client.key files.
- Admin Certificate – This consists of the admin.pem, admin.crt, and admin.key files.
- Admin Client Certificate – This consists of the admin_client.crt and admin_client.key files.
The certificates in DSG are classified as Inbound Certificates and Outbound Certificates. You must use Inbound certificates for secure communication between the client and the DSG. In setups, such as Software as a Service (SaaS), where the DSG communicates with a SaaS that is not part of the on-premise setup or governed by an enterprise networking perimeter, Outbound certificates are employed.
The following image illustrates the flow of certificates in DSG.
Based on the protocol used, the certificates that the client must present to the DSG and that the DSG must present to the destination differ. For the figure, consider that the HTTPS protocol is used.
- Step 1:
- When a client tries to access the SaaS through the DSG, DSG uses the certificate configured as part of tunnel configuration to communicate with the client. The client must trust the certificate to initiate the communication between client and DSG.
- Step 2:
- Step 2 involves the DSG forwarding the request to the destination. In TLS-based outbound communication in the DSG, it is expected that the destination uses a certificate that is signed by a trusted certification authority. However, in the case of SaaS, the destination might use self-signed certificates. In this case, the DSG must trust the server’s certificate to initiate TLS-based outbound communication.
- Step 3:
- When the REST API client tries to communicate with the DSG, DSG uses the certificate configured as part of tunnel configuration to communicate with the client. The client browser must accept and trust the certificate to initiate the communication.
Inbound Certificates
The inbound certificate differs based on the protocol that is used to communicate with the DSG. This section covers certificates involved when using HTTPS using default certificates, TLS mutual authentication, and SFTP protocols.
HTTPS using default certificates
Consider a setup where a client is accessing the destination with DSG in between using the HTTPS protocol. In this case, DSG uses the certificate configured as part of tunnel configuration to communicate with the client.
In non-production environment, you can continue to use the default certificates that are generated when DSG is installed. In case of production deployment, it is recommended that you use your own certificates that are signed by a trusted certification authority.
If you are using your own certificates and keys, ensure that you replace the default CA certificates/keys and the other certificates/keys with the signed certificates/keys.
TLS Mutual Authentication
DSG can be configured with trusted root CAs and/or the individual client machine certificates for the machines that are allowed to connect to the DSG. The client presents a client certificate to the DSG, the DSG verifies it against the CA certificate, and, once validated, lets the client machine communicate with the destination through the DSG.
Ensure that you replace the default CA certificates and keys and other certificates and keys with the signed certificates and keys.
Along with these certificates, every time a request is made to the DSG node, the client machine presents the client certificate that was generated using the CA certificate. The DSG validates the client certificate so that the client machine can communicate with the DSG. The clients that fail to present a valid client certificate cannot connect to the destination.
Apart from presenting the certificate, at Tunnel level, ensure that the TLS Mutual Authentication is set to CERT_OPTIONAL or CERT_MANDATORY. Also, in the Extract rule at Ruleset level, ensure that the Require Client Certificate check box is selected if you want to perform this check at service level.
For more information about enabling TLS mutual authentication, refer to Enabling Mutual Authentication.
SFTP
DSG can be configured to work as an intermediary between an SFTP client and server when accessing files using SFTP protocol. With SFTP, credentials are never transmitted in clear and information flows over an SSH tunnel.
If you are using SFTP, ensure that the SFTP server key is uploaded using the Certificates screen on the DSG node. At tunnel level, for an SFTP tunnel, you must specify this server key.
At the rule level, you can add a layer of security using the authentication method option. Using DSG, an SFTP client can communicate with the destination using either Password or SSH keys. Ensure that the SSH keys are trusted.
If you select Password as the authentication method, the client must provide the password when prompted. If you are using Publickey as the authentication method, the SFTP client must trust the DSG public key and the DSG must trust the SFTP client public key.
For more information about SFTP rule level settings and enabling password-less authentication, refer to SFTP Gateway.
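As an illustration, an SSH key pair for public key authentication can be generated with standard tooling; the file name is a placeholder, and the steps to upload and trust the keys are described in the SFTP Gateway documentation referenced above.
```
# Generate a 4096-bit RSA key pair for SFTP public key authentication (illustrative file name).
ssh-keygen -t rsa -b 4096 -f sftp_client_key -N ""
```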
Outbound Certificates
The DSG can be used as an intermediary between client and destination. For example, in case of SaaS as destination, it is important that the self-signed certificates that a destination uses are trusted by DSG.
The SaaS certificates might be in the DER format. The DSG accepts certificates in the PEM or CRT format; hence, you must convert the DER format to an acceptable PEM format.
For more information about trusting the self-signed certificates and converting the DER format to PEM format, refer to Creating a Service under Ruleset.
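For reference, a DER-encoded certificate can typically be converted to the PEM format with a standard OpenSSL command; the file names below are placeholders.
```
# Convert a DER-encoded (binary) certificate to the PEM (Base64) format.
openssl x509 -inform der -in saas_certificate.der -out saas_certificate.pem
```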
8.4 - Replicating Certificates in a Trusted Appliance Cluster
The following figure illustrates the replication of certificates between two ESAs in a TAC.
The figure depicts two ESAs in a TAC. The ESA1 contains the server and the client certificates. The certificates in ESA1 are signed by CA1. The Protectors communicate with ESA1 to retrieve the client certificate.
Note: The Subject attribute for the server certificates is CN=<hostname> and that of the client certificate is CN= Protegrity Client.
In a TAC, when replication between ESA1 and ESA2 happens, the CA, server, and client certificates from ESA1 are copied to ESA2. However, when the certificates are replicated from ESA1 to ESA2, the Subject attribute is not updated to the hostname of ESA2. Due to this mismatch, the protectors are not able to communicate with ESA2.
- Solution:
- To ensure the communication of protectors with the ESA, perform one of the following methods:
- Use a Subject Alternative Name (SAN) certificate to add additional hostnames. You can configure multiple ESA domains using a SAN certificate.
- Use wildcard for domain names in certificates to add multiple domains.
8.5 - Insight Certificates
Note: The default certificates provided are signed using the system-generated Protegrity-CA certificate. However, after installation, you can use custom certificates. Also, ensure that all the certificates are signed by the same CA, as shown in the following diagram.
Note: When you are updating certificates, ensure that the certificates are updated in the following order:
- Audit Store Cluster certificate.
- Audit Store REST certificate.
- PLUG client certificate for Audit Store.
- Analytics client certificate for Audit Store.
The various certificates used for communication between the nodes with their descriptions are provided here.
Management & Web Services: These services manage certificate-based communication and authentication between the ESA and its internal components, and between the ESA and external clients (REST).
For more information about Management & Web Services certificates, refer to Certificate Management in ESA.
Audit Store Cluster: This is used for the Audit Store inter-node communication that takes place over the port 9300.
Server certificate:
The server certificate is used for inter-node communication. The nodes identify each other using this certificate.
Note: The Audit Store Cluster and Audit Store REST server certificate must be the same.
Client certificate:
The client certificate is used for applying and maintaining security configurations for the Audit Store cluster.
Audit Store REST: This is used for the Audit Store REST API communication over the port 9200.
Server certificate:
The server certificate is used for mutual authentication with the client.
Note: The Audit Store Cluster and Audit Store REST server certificate must be the same.
Client certificate:
The client certificate is used by the Audit Store nodes to authenticate and communicate with the Audit Store.
Analytics Client for Audit Store: This is used for communication between Analytics and the Audit Store.
Client certificate:
The client certificate is used by Analytics to authenticate and communicate with the Audit Store.
PLUG Client for Audit Store: This is used for communication between the logging components and the Audit Store.
Client certificate:
The client certificate is used by the Log Forwarder to authenticate and communicate with the Audit Store.
Using Custom Certificates in Insight
The certificates used for the Insight component are system-generated Protegrity certificates. If required, you can upload and use your custom CA, Server, and Client certificates for Insight.
When you use custom certificates, ensure that they meet the following prerequisites:
Ensure that all certificates share a common CA.
Ensure that the following requirements are met when creating the certificates:
- The CN attribute of the Audit Store Server certificate is set to insights_cluster.
- The CN attribute of the Audit Store Cluster Client certificate is set to es_security_admin.
- The CN attribute of the Audit Store REST Client certificate is set to es_admin.
- The CN attribute of the PLUG client certificate for the Audit Store is set to plug.
- The CN attribute of the Analytics client certificate for the Audit Store is set to insight_analytics.
- The Audit Store Server certificates’ must contain the following in the Subject Alternative Name (SAN) field:
Required: FQDN of all the Audit Store nodes in the cluster.
Optional: IP addresses of all the Audit Store nodes in the cluster.
Optional: Hostname of all the Audit Store nodes in the cluster.
Note: If you are using a DNS server, then also include the hostname and FQDN details from the DNS server in the certificate.
Ensure that the certificates are generated using a 4096 bit key.
For example, an SSL certificate with the SAN extension for the servers ES1, ES2, and ES3 in a cluster will have the following entries (a sketch of generating such a certificate follows the list):
- ES1
- ES2
- ES3
- ES1.protegrity.com
- ES2.protegrity.com
- ES3.protegrity.com
- IP address of ES1
- IP address of ES2
- IP address of ES3
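The following is a sketch, assuming OpenSSL 1.1.1 or later, of generating a 4096-bit key and a certificate signing request with this CN and these SAN entries. The host names and IP addresses are the example values above, the file names are placeholders, and the signing CA must copy the SAN extension into the issued certificate.
```
# Illustrative CSR for the Audit Store server certificate (CN=insights_cluster).
# Replace the example host names and IP addresses with your cluster details.
openssl req -new -newkey rsa:4096 -nodes \
  -keyout audit_store_server.key \
  -out audit_store_server.csr \
  -subj "/CN=insights_cluster" \
  -addext "subjectAltName=DNS:ES1,DNS:ES2,DNS:ES3,DNS:ES1.protegrity.com,DNS:ES2.protegrity.com,DNS:ES3.protegrity.com,IP:10.0.0.1,IP:10.0.0.2,IP:10.0.0.3"
```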
Note: If you are upgrading from an earlier version to ESA 8.1.0.0 or later and use custom certificates, then perform the following steps after the upgrade is complete. Custom certificates are applied for td-agent, the Audit Store, and Analytics, if installed.
From the ESA Web UI, navigate to System > Services > Audit Store.
Ensure that the Audit Store Repository service is not running. If the service is running, then stop the service.
Configure the custom certificates and upload them to the Certificate Repository.
Set the custom certificates for the logging components as Active.
From the ESA Web UI, navigate to System > Services > Audit Store.
Start the Audit Store Repository service.
Open the ESA CLI.
Navigate to Tools.
Run Apply Audit Store Security Configs.
Continue the installation to create an Audit Store cluster or join an existing Audit Store cluster.
For more information, refer to the Connecting to the Audit Store topic in the Protegrity Analytics Guide 8.1.0.0.
8.6 - Validating Certificates
Verifying the validity of a certificate
You can verify a client or a server certificate using the following commands:
```
openssl verify -CAfile /etc/ksa/certificates/CA.pem /etc/ksa/certificates/client.pem
openssl verify -CAfile /etc/ksa/certificates/CA.pem /etc/ksa/certificates/ws/server.pem
```
If the client or server certificate is signed by the provided CA certificate, then the certificate is valid. The message **OK** appears.
Verifying the purpose of a certificate
You can verify if the certificate is a client, a server, or a CA certificate using the following command:
```
openssl x509 -in <Certificate name> -noout -purpose
```
For example, run the following command to verify the purpose of the client certificate:
```
openssl x509 -in /etc/ksa/certificates/client.pem -noout -purpose
```
Extracting the CN of a certificate
To extract the username from a certificate, you must pass the DN value to the pty_get_username_from_certificate function. The following steps explain how to extract the CN of a certificate.
In the CLI Manager, navigate to the OS Console.
Run the following command to extract the value that is in the Subject attribute of the certificate.
openssl x509 -noout -subject -nameopt compat -in /etc/ksa/certificates/client.pem
Run the following command to extract the username from the Subject attribute of the client certificate.
/etc/ksa/pty_get_username_from_certificate.py "<Value in the Subject attribute of the client certificate>"
For example,
/etc/ksa/pty_get_username_from_certificate.py "/O=Acme Inc./C=US/CN=Protegrity Client"
The CN attribute in a certificate can contain the Fully Qualified Domain Name (FQDN) of the client or server. If the length of the FQDN is greater than 64 characters, the hostname is used as the CN to generate the certificate.
Working with intermediate CA certificates
A root certificate is a public key certificate that identifies the root CA. The chain of certificates that exist between the root certificate and the certificate issued to you are known as intermediate CA certificates. You can use an intermediate CA certificate to sign the client and server certificates.
If you have multiple intermediate CA certificates, then you must link all the intermediate certificates and the root CA certificates into a single chain before you upload to the Certificate repository.
The following figure illustrates an example of two intermediate certificates and a root certificate.
In the figure, the server certificate is signed by an intermediate certificate CA2. The intermediate certificate CA2 is signed by CA1, which is signed by the root CA.
You can merge the CA certificates using the following command in the OS Console:
cat ./CA2.pem ./CA1.pem ./rootCA.pem > ./newCA.pem
You must then upload the newCA.pem certificate to the Certificate Repository.
Ensure that you link the CA certificates in the appropriate hierarchy.
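You can optionally verify the merged chain against your server certificate before uploading it. This sketch reuses the newCA.pem file from the command above and assumes a server certificate named server.pem.
```
# Verify that the server certificate validates against the merged CA chain.
openssl verify -CAfile ./newCA.pem ./server.pem
```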
Increasing the Log Level to view errors for certificates:
If you want to view the errors and warnings generated for certificates, then you can increase the LogLevel attribute.
In the CLI Manager, navigate to the OS Console.
Open the apache.mng.conf file using a text editor.
/etc/ksa/service_dispatcher/servers/apache.mng.conf
Update the value of the LogLevel parameter from warn to debug and exit the editor.
Open the apache.ws.conf file using a text editor.
/etc/ksa/service_dispatcher/servers/apache.ws.conf
Update the value of the LogLevel parameter from warn to debug.
Restart the Service Dispatcher service.
Navigate to the /var/log/apache2-service_dispatcher directory.
Open the error.log file to view the required logs.
After debugging the errors, ensure that you revert the value of the LogLevel parameter to warn and restart the Service Dispatcher service.
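For reference, while the log level is set to debug, you can follow the error log from the OS Console with a standard command.
```
# Follow the Service Dispatcher error log while reproducing the issue.
tail -f /var/log/apache2-service_dispatcher/error.log
```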
9 - Protegrity Data Security Platform Licensing
The Licensing content answers the following questions:
- What is the difference between a temporary and validated license?
- How can you request a validated license?
- What happens if the license expires?
- How are you notified when your license is due to expire?
- What are the features included in a validated license?
It is strongly recommended that you read all the Licensing sections. Ensure that you understand how the licensing affects the ESA installation, ESA upgrade, trusted appliances cluster licensing, and protectors’ licensing.
To prevent unauthorized use of the Protegrity Data Security Platform and prevent illegal copying and distribution, Protegrity supports licensing. The licenses provided by Protegrity are unique and non-transferable. They permit product usage for the term specified in your agreement with Protegrity.
The benefit to you, as our customer, is that the Protegrity license provides additional security to the product. The license also supports a legal agreement stipulating the rights and liabilities of both parties.
License Agreement
The License Agreement is a contract between the licensor and purchaser, establishing the purchaser’s right to use the software.
The Protegrity License Agreement stipulates the license expiration date and the functionality which is available before and after the license expiration date.
For specific details about your particular licensing terms, refer to your License Agreement provided by Protegrity.
License Types
When your Enterprise Security Administrator (ESA) is installed and Policy Management is initialized, a temporary license is applied to it by default.
The temporary license which is created during initialization allows you to use ESA and Policy management for 30 days starting from the day you initialized Policy Management. When the temporary license expires, you are able to log on to ESA, but you have restricted permissions for using it.
For more information, refer to Expired License.
To continue using the ESA with full administrative permissions, you must obtain a validated license provided by Protegrity. The validated license has an expiration date which is determined by the License Agreement between your company and Protegrity.
Temporary and Validated License Characteristics
This section explains types of licenses and characteristics of each license.
The following table describes the characteristics of each type of license. The license characteristics explain the key points of each license and show how they differ from each other.
Table - Characteristics of Different License Types
Characteristics | Temporary License | Validated License |
---|---|---|
Obtaining a license | Installed by default during ESA installation. | Requested using the ESA Web UI. |
Updating a license | Not applicable. | Requested using the ESA Web UI. |
Warning alerts before expiration of the license date | 30 days prior to the expiration date. The alerts appear as start-up messages when you log in to ESA. The alerts can also be configured using email, and are available in the ESA logs and logs received from the Protection Points. In addition, if your license is due to expire in 90 days or fewer, the License Information dialog box appears when you log in to the ESA. | |
Cluster licensing | Can only be used on the particular node where it was created during installation. | Stipulated by the License Agreement. For details, refer to Cluster Licensing. |
9.1 - Obtaining a Validated License
You can validate the license from the Licenses pane, available in the ESA Web UI. Only a user with ESA Administrative permissions can request and activate the license.
Requesting a License
You can request a validated ESA license while your temporary license is valid, invalid, or already expired.
To request an ESA license:
As an ESA administrator, proceed to Settings > Licenses.
In the Licenses pane, click the Generate button.
Save the automatically generated licenserequest.xml file to the local disk.
Send the license request file to licensing@protegrity.com.
Activating a License
After submitting your license request, you receive an email with a license file called license.xml. This file includes the original data retrieved from the license request, expiration date, and additional information, if required.
Note: If there is a License Agreement between your company and Protegrity, you will receive the validated license by the end of the following business day.
To activate an ESA license:
Save the license.xml file to your local disk when you receive it from Protegrity.
As an ESA administrator, proceed to Settings > Licenses.
Click Activate License.
In the Licenses pane, click Browse.
Select the license.xml file.
You are notified about success or failure of the activation process.
Note: You do not need to restart the ESA or any data protectors to activate the validated license. However, if you have policies deployed to protection points with a temporary license, then you must redeploy the policies after the validated license is applied.
The license file is stored in an encrypted format on the ESA file system after it is activated.
CAUTION: Modifying either the temporary or validated license file leads to license deactivation.
Updating a License
You need to update your current license before it expires. You may also update the license in case your needs have changed.
The process of updating the license is the same as when you apply for a new license. You need to submit a new license request and send an email to licensing@protegrity.com with the information about what you would like to change in your current license.
For details, refer to Requesting a License.
9.2 - Non-Licensed Product
A license expires when the end of the term for that license has passed. A corrupted license is not valid. For details about expired licenses, refer to Expired License. For more about corrupted licenses, refer to Corrupted (Invalid) License.
Warning: From 10.0.0 onwards, the protectors display the following behavior with regard to ESA licensing:
An expired or invalid license blocks the policy, key, and deployment services on the ESA and through the DevOps APIs. An existing protector continues to perform security operations. However, if you add a new protector or restart an existing protector, the protector does not receive any policy until a valid license is applied. In addition, you cannot perform any other task from the Policy Management UI until you obtain a valid license. On performing any action in the Policy Management UI, you are automatically navigated to the License Manager screen.
For more information about obtaining a valid license, refer to Obtaining a Validated License.
License Expiration Notification
If your license is due to expire in 90 days or fewer, the License Information dialog box appears when you log in to the ESA.
This dialog box specifies the number of days remaining before your license expires. Click Acknowledge to continue accessing the ESA Web UI.
On the ESA Web UI, a message in the Notification pane reminds you that your license is due to expire. This reminder message appears every day from one month prior to the expiration date.
Expired License
A license expires depending on the expiration time and date settings in the license file. In the Notification pane of the ESA Web UI, Expired license status is displayed.
Corrupted (Invalid) License
If a license has been corrupted, the Invalid license status is displayed in the Licenses pane of the ESA Web UI.
A license may be corrupted in the following cases:
- License file has been changed manually.
- License file has been deleted.
- The system date and time has been set to a date and time earlier than when the license was applied to the product.
CAUTION: You MUST NOT change the system date and time to a date and time earlier than when the license was generated. Doing so can lead to license deactivation. Daylight saving time changes are applied automatically.
CAUTION: You MUST NOT edit or delete the license file saved on ESA since it can lead to license deactivation.
License Alerts
The Hub Controller generates warning logs at start-up, and once per day, when a license is about to expire, has expired, or is invalid. The ESA Web UI and Policy Management generate alert notifications about license status.
Once the system detects that the current system date is less than or equal to 30 days from the expiration date, an audit event is generated. For a temporary license, the system generates alerts once the ESA is installed.
Once the license is expired or becomes invalid, the Data Security Platform produces logs and notifications. These logs and notifications inform you about the change in the state of the license. You can refer to the Alerts and Notifications when License at Risk table for more details.
The Expiration Date field shows the current license status and the number of days left before the license expires.
You can also set up separate email notification alerts when licenses are about to expire using the ESA Web UI. For more information about setting up separate email notification alerts, refer to the Enterprise Security Administrator Guide.
The following table lists the system notifications and alerts about the status of the license at risk.
Table - Alerts and Notifications when License at Risk
Alert type | ESA alerts | Protection point alerts | Cumulative alerts information |
---|---|---|---|
 | License Information dialog box in the ESA Web UI home page. License alert in the Notifications tab of the ESA Web UI. WARNING generated by the Hub Controller in the application log once per day and upon Hub Controller restart. | For 10.0.0 protectors, a warning is not generated. For protectors earlier than 10.0.0, a WARNING is generated in the PEP Server application log once per hour and upon PEP Server restart. | The license alerts and audits are sent to the ESA Audit Store. |
9.3 - Cluster Licensing
There are two types of restrictions that can be applied to your Protegrity license. A Configuration Lock is not machine specific and therefore can be used on other nodes in a cluster. A Node Lock is specific to the machine address of the node, and therefore cannot be used on other nodes. Node Lock is the stronger of the two restrictions and it will always take precedence when applied.
The descriptions of these restrictions follow:
License Agreement | Configuration Lock | Node Lock |
---|---|---|
Perpetual License | Always applied | Not applied. |
Term License | Always applied | Applied as stipulated by your License Agreement with Protegrity. |
The procedure you follow for requesting license files for your cluster is explained in the following sections.
CAUTION: These procedures must be followed ONLY when your Protegrity license agreement stipulates that the Node Lock is applied. If your license agreement only has the Configuration Lock applied, then you can use the same license file for all nodes.
Licensing Trusted Appliance Cluster
From Release 6.6.x onwards, we offer customers the functionality to create an appliance cluster, primarily for use in disaster recovery (DR) scenarios. This allows you to create a trusted network of appliances with replication between appliance hosts. Depending upon the type of license agreement you have with Protegrity, you may be required to request a new validated license file when adding nodes to your appliance cluster. Refer to your Protegrity License Agreement for specific terms.
To obtain a license for an ESA cluster:
Create an ESA cluster as explained in the Protegrity Appliances Overview Guide.
Generate a license request file by using the Web Interface on each individual node.
Save the license request file on your local disk with a different name than the default name. For example, licenserequest2.xml.
Send an email to licensing@protegrity.com including all license request files obtained in step 2. In the email, state that you need a license for an ESA trusted appliances cluster.
When you receive the single Protegrity license, activate it on one of the ESA nodes as explained in Activating a License.
Export the policy to all other ESA nodes in the cluster.
Note: Ensure that you create a new license request for each node in the cluster, including any new node that you add to an existing cluster. After the request files are generated, send them to Protegrity.
10 - Troubleshooting
10.1 - Known issues for the Audit Store
Known Issue: The Audit Store node security remains uninitialized and the message Audit Store Security is not initialized. appears on the Audit Store Cluster Management page.
Resolution:
Run the following steps to resolve the issue.
- From the ESA Web UI, navigate to System > Services > Audit Store.
- Ensure that the Audit Store Repository service is running.
- Open the ESA CLI.
- Navigate to Tools.
- Run Apply Audit Store Security Configs.
Known Issue: The Audit Store Repository service stops when the user logs out from the ESA admin console after changing the hostname from the admin console tools menu.
Issue:
The Audit Store Repository service stops when the user logs out of the admin console after rotating the Insight certificates, finalizing an appliance (as in the case of cloud environments), changing the hostname, or starting or stopping the Audit Store Repository service from the admin console.
Resolution:
Manually start the Audit Store Repository service from System > Services on the ESA Web UI.
Known Issue: Logs sent to the Audit Store do not get saved and errors might be displayed.
Issue:
The Audit Store cannot receive and store logs when the disk space available on the ESA is low. In this case, errors or warnings similar to high disk watermark [90%] exceeded are displayed in the logs.
Resolution:
Perform one of the following steps to resolve the issue:
- Delete old indices that are not required using ILM in Analytics.
- Increase the disk space on all nodes.
- Add new nodes to the cluster.
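To see how close each node is to the disk watermark before choosing one of these options, you can query the disk allocation. The following is a sketch, assuming the same cluster certificates that are used in the other troubleshooting commands in this section:
# Show disk usage and shard counts per Audit Store node
wget -q --ca-cert=/etc/ksa/certificates/as_cluster/CA.pem --certificate=/etc/ksa/certificates/as_cluster/client.pem --private-key=/etc/ksa/certificates/as_cluster/client.key -O - "https://localhost:9200/_cat/allocation?v"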
Known Issue: The Audit Store Repository fails to start after updating the domain name.
Issue:
After updating the domain name, the Audit Store Repository service fails to start.
Resolution:
The Audit Store Repository depends on the domain name. However, the domain name configuration in the Audit Store does not get updated automatically when there is a change in the domain name.
Perform the following steps to apply the updated domain name to the Audit Store:
Log in to the CLI Manager of the ESA or the Appliance.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Navigate to the /opt/protegrity/auditstore/config directory using the following command.
cd /opt/protegrity/auditstore/config
Open the opensearch.yml file in a text editor.
Replace the existing domain name attribute with your domain name for the following configuration attributes.
- network.host
- http.host
Consider the following example where the domain name of protegrity.com is updated to example.com.
Existing configuration:
network.host: ["localhost","127.0.0.1","192.168.1.120","protegrity-esa123","protegrity-esa123.protegrity.com"]
Updated configuration:
network.host: ["localhost","127.0.0.1","192.168.1.120","protegrity-esa123","protegrity-esa123.example.com"]
Save and close the file.
Rotate the Insight certificates.
For more information about rotating certificates, refer here.
Log in to the CLI Manager of the ESA or the Appliance.
Navigate to Administration > OS Console.
Enter the root password and select OK.
Run the following commands to refresh the network settings.
/etc/opt/scripts/on-hostname-set/90_update_asrepository.sh
/etc/opt/scripts/on-hostname-set/91_update_cluster_user_ssh_keys.sh
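The opensearch.yml edit described above can also be scripted. The following is a hedged sketch, assuming the old and new domain names match the example and that sed is available in the OS Console:
cd /opt/protegrity/auditstore/config
# Keep a backup copy before editing (backup file name is illustrative)
cp opensearch.yml opensearch.yml.bak
# Replace the old domain with the new one in the network.host and http.host lists
sed -i 's/protegrity\.com/example\.com/g' opensearch.yml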
Known Issue: The Upgrade cannot continue as the cluster health status is red. Check out the Troubleshooting Guide for info on how to proceed. error message appears.
Issue: A red cluster status means that at least one primary shard and its replicas are not allocated to a node; that is, the Audit Store cluster contains indices whose index health status is red.
Workaround
Complete the following steps to resolve the cluster health with the red status.
From the Web UI of the Appliance, navigate to System > Services > Audit Store.
Ensure that the Audit Store Repository service is running.
Log in to the CLI Manager of the Appliance.
Navigate to Administration > OS Console.
Identify the indices with the health status as red using the following command.
wget -q --ca-cert=<Path_to_CA_certificate>/CA.pem --certificate=<Path_to_client_certificate>/client.pem --private-key=<Path_to_client_key>/client.key -O - https://<Appliance_IP>:9200/_cat/indices | grep red
Ensure that you update the variables before running the command. An example of the command is provided here.
wget -q --ca-cert=/etc/ksa/certificates/as_cluster/CA.pem --certificate=/etc/ksa/certificates/as_cluster/client.pem --private-key=/etc/ksa/certificates/as_cluster/client.key -O - https://localhost:9200/_cat/indices | grep red
A list of indices containing the health status as red appears as shown in the following example.
red open pty_insight_audit_vx.x-xxxx.xx.xx-000014 dxmEWom8RheqOhnaFeM3sw 1 1
In the example, pty_insight_audit_vx.x-xxxx.xx.xx-000014 is the index having a red index health status where the index’s primary shard and replicas are not available or allocated to any node in the cluster.
Identify the reason for unassigned shards using the following command.
wget -q --ca-cert=<Path_to_CA_certificate>/CA.pem --certificate=<Path_to_client_certificate>/client.pem --private-key=<Path_to_client_key>/client.key -O - https://<Appliance_IP>:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep UNASSIGNED
Ensure that you update the variables before running the command. An example of the command is provided here.
wget -q --ca-cert=/etc/ksa/certificates/as_cluster/CA.pem --certificate=/etc/ksa/certificates/as_cluster/client.pem --private-key=/etc/ksa/certificates/as_cluster/client.key -O - https://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep UNASSIGNED
The reasons for the shards being unassigned appear. This example shows one of the reasons for the unassigned shard.
pty_insight_audit_vx.x-xxxx.xx.xx-000014 0 p UNASSIGNED NODE_LEFT
pty_insight_audit_vx.x-xxxx.xx.xx-000014 0 r UNASSIGNED NODE_LEFT
In the example, the 0th p and r shards of the pty_insight_audit_vx.x-xxxx.xx.xx-000014 index are unassigned due to the NODE_LEFT reason, that is, because the node left the Audit Store cluster. The p indicates a primary shard and the r indicates a replica shard.
Retrieve the details for the shard being unassigned using the following command.
wget -q --ca-cert=<Path_to_CA_certificate>/CA.pem --certificate=<Path_to_client_certificate>/client.pem --private-key=<Path_to_client_key>/client.key --header='Content-Type:application/json' --method=GET --body-data='{ "index": "<Index_name>", "shard": <Shard_ID>, "primary":<true or false> }' -O - https://<Appliance_IP>:9200/_cluster/allocation/explain?pretty
Ensure that you update the variables before running the command. An example of the command with the index name as pty_insight_audit_vx.x-xxxx.xx.xx-000014, shard ID as 0, and primary shard as true is provided here.
wget -q --ca-cert=/etc/ksa/certificates/as_cluster/CA.pem --certificate=/etc/ksa/certificates/as_cluster/client.pem --private-key=/etc/ksa/certificates/as_cluster/client.key --header='Content-Type:application/json' --method=GET --body-data='{ "index": "pty_insight_audit_vx.x-xxxx.xx.xx-000014", "shard": 0, "primary": true }' -O - https://localhost:9200/_cluster/allocation/explain?pretty
The details of the unassigned shard appear. This example shows one of the reasons for the unassigned shard.
{
  "index": "pty_insight_audit_vx.x-xxxx.xx.xx-000014",
  "shard": 0,
  "primary": true,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "NODE_LEFT",
    "at": "2022-03-28T05:05:25.631Z",
    "details": "node_left [gJ38FzlDSEmTAPcP0yw57w]",
    "last_allocation_status": "no_valid_shard_copy"
  },
  "can_allocate": "no_valid_shard_copy",
  "allocate_explanation": "cannot allocate because all found copies of the shard are either stale or corrupt",
  "node_allocation_decisions": [
    {
      "node_id": "3KXS1w9HTOeMH1KbDShGIQ",
      "node_name": "ESA1",
      "transport_address": "xx.xx.xx.xx:9300",
      "node_attributes": {
        "shard_indexing_pressure_enabled": "true"
      },
      "node_decision": "no",
      "store": {
        "in_sync": false,
        "allocation_id": "HraOWSZlT3KNXxOHDhZL5Q"
      }
    }
  ]
}
In this example, the shard is not allocated because all found copies of the shard are either stale or corrupt. There are no valid shard copies that can be allocated for this index. This is a data loss scenario, where the data is unavailable because the node or nodes that had the data have disconnected from the cluster. In such a scenario, if the disconnected nodes are brought back in the cluster, then the cluster can reconstruct itself and become healthy again. If bringing the nodes back is not possible, then deleting indices with the red index health status is the only way to fix a red cluster health status.
Complete one of the following options to stabilize the cluster.
Troubleshoot the cluster:
- Verify that the Audit Store services are running. Restart any Audit Store service that is in the stopped state.
- Ensure that the disconnected nodes are running.
- Try to add any disconnected nodes back to the cluster.
- Restart the system or restore the system from a backup.
Delete the index:
Delete the indices with the index health status as red using the following command. Execute the command from any Audit Store node that is running.
wget -q --ca-cert=<Path_to_CA_certificate>/CA.pem --certificate=<Path_to_client_certificate>/client.pem --private-key=<Path_to_client_key>/client.key --header='Content-Type:application/json' --method=DELETE -O - https://<Appliance_IP>:9200/<Index_name>
Ensure that you update the variables before running the command. An example of the command to delete the pty_insight_audit_vx.x-xxxx.xx.xx-000014 index is provided here.
wget -q --ca-cert=/etc/ksa/certificates/as_cluster/CA.pem --certificate=/etc/ksa/certificates/as_cluster/client.pem --private-key=/etc/ksa/certificates/as_cluster/client.key --header='Content-Type:application/json' --method=DELETE -O - https://localhost:9200/pty_insight_audit_vx.x-xxxx.xx.xx-000014
CAUTION:
This command deletes the index and must be used carefully.
Known Issue: The Authentication failure for the user during JWT token generation while downloading ssh key from node error appears while performing any Audit Store cluster-related operation.
Issue: The Can create JWT token permission is required for a role to perform Audit Store cluster-related operations. The error appears if the permission is not assigned to the user.
Workaround
Use a user with the appropriate permissions for performing Audit Store cluster-related operations. Alternatively, verify and add the Can create JWT token permission. To verify and add the Can Create JWT Token permission, from the ESA Web UI, navigate to Settings > Users > Roles.
10.2 - ESA Error Handling
The ESA appliance collects all logs that come from the different Protegrity servers. The following sections explain the logs that you may find and the errors that you may encounter on the ESA.
10.2.1 - Common ESA Logs
Log type | Details | Logs Description |
---|---|---|
Appliance logs (ESA Web Interface) | Here you can view the appliance system logs. These logs are saved for two weeks, and then they are automatically deleted. | The ESA appliance logs the appliance-specific system events. |
Data Management Server (DMS) logs (ESA Web Interface) | Here you can view the DMS system-related logs. | System logs related to monitoring and maintenance of the Logging Repository (DMS). |
10.2.2 - Common ESA Errors
Patch Signing
From v10.1.0, all the packages, including the Protegrity developed packages, are signed by Protegrity. This ensures the integrity of the software being installed.
The following errors may occur while uploading the patch using Web UI or CLI Manager.
The patch is signed by the Protegrity signing key and the verification key is expired
Issue: If the verification key has expired, the following error message appears: Error: Patch signature(s) expired. Would you like to continue installation?
Workaround:
- Click Yes to install the patch. The patch is installed successfully.
- Click No to terminate the patch installation.
For more information about the Protegrity signed patch, contact Protegrity Support.
The patch is not signed by the Protegrity signing key
Issue: This issue occurs if the patch is not signed by the Protegrity signing key. Error: Signatures not found. Aborting
Workaround: Click Exit to terminate the installation process. It is recommended to use a Protegrity-signed patch.
For more information about the Protegrity signed patch, contact Protegrity Support.
Disk Space
Insufficient disk space in the /var/log directory
Issue: This issue occurs if the disk space in the /var/log directory is insufficient. Error: Unable to install the patch. The required disk space is insufficient for the following partition: /var/log/
Workaround: Ensure that at least 20% disk space in the /var/log directory is available to install the patch successfully.
Insufficient disk space in the /opt directory
Issue: This issue occurs if the disk space in the /opt directory is insufficient. Error: Unable to install the patch. The required disk space is insufficient for the following partition: /opt/
Workaround: Ensure that the available disk space in the /opt/tmp directory is at least twice the patch size.
Insufficient disk space in the OS (/) partition
Issue: This issue occurs if the disk space in the OS (/) partition is insufficient.
Workaround: Ensure that at least 40% disk space in the OS (/) partition is available to install the patch successfully.
The space used in the OS (/) partition should not be more than 60%. If the space used is more than 60%, then you must clean up the OS (/) partition before proceeding with the patch installation process. For more information about cleaning up the OS (/) partition, refer to the documentation available at the following link: https://my.protegrity.com/knowledge/ka04W000000nSxJQAU/
Miscellaneous
Unable to export the information while executing the cluster task using the IP address of the node.
Issue: This might occur if the task is executed using the IP address of the cluster node instead of the hostname.
Workaround: To resolve this issue, ensure that the IP address of the cluster node is replaced with the Hostname in the task.
For more information about executing the cluster task, refer to Scheduling Configuration Export to Cluster Tasks.
Basic Authentication
If you try to perform operations, such as joining a cluster or exporting data/configuration to a remote appliance, the operation fails with the following error: Errorcode: 403
Issue: This issue occurs if the Basic Authentication is disabled, and you try to perform any of the following operations.
- Joining an existing cluster
- Establishing set ESA Communication
- Exporting data/configuration to a remote appliance
- Working with RADIUS authentication
Workaround: Ensure that the Can Create JWT Token permission is assigned to the role. If the Can Create JWT Token permission is not assigned to the role of the required user, then the operation fails.
To verify the Can Create JWT Token permission, from the ESA Web UI navigate to Settings > Users > Roles.
10.2.3 - Understanding the Insight indexes
All the Appliances and Protectors send logs to Insight. The logs from the Audit Store are displayed on the Discover screen of the Audit Store Dashboards. Here, you can view the different fields logged. In addition to viewing the data, these logs serve as input for Insight to analyze the health of the system and to monitor the system for providing security. These logs are stored in the audit index with a name such as pty_insight_analytics_audit_9.2-*. To refer to old and new audit indexes, the alias pty_insight_*audit_* is used.
The /var/log/asdashboards.log file is empty. The init.d logs for the Audit Store Dashboards are available in /var/log/syslog. The container-related logs are available in /var/log/docker/auditstore_dashboards.log.
To view the Discover screen, log in to the ESA, navigate to Audit Store > Dashboard > Open in new tab, select Discover from the menu, and select a time period, such as Last 30 days. The Discover screen appears.
The following table lists the various indexes and information about the data contained in each index. You can view the index list by logging into the ESA and navigating to Audit Store > Cluster Management > Overview > Indices. Indexes can be created or deleted. However, deleting an index leads to a permanent loss of the data in that index. If the index was not backed up earlier, then the logs from the deleted index cannot be recreated or retrieved.
Index Name | Origin | Description |
---|---|---|
.kibana_1 | Audit Store | This is a system index created by the Audit Store. This holds information about the dashboards. |
.opendistro_security | Audit Store | This is a system index created by the Audit Store. This holds information about the security, roles, mapping, and so on. |
.opendistro-job-scheduler-lock | Audit Store | This is a system index created by the Audit Store. |
.opensearch-notifications-config | Audit Store | This is a system index created by the Audit Store. |
.opensearch-observability | Audit Store | This is a system index created by the Audit Store. |
.plugins-ml-config | Audit Store | This is a system index created by the Audit Store. |
.ql-datasources | Audit Store | This is a system index created by the Audit Store. |
pty_auditstore_cluster_config | ESA | This index logs information about the Audit Store cluster. |
pty_insight_analytics_audit | ESA | This index logs the audit data for all the URP operations and the DSG appliance logs. It also captures all logs with the log type protection, metering, audit, and security. |
pty_insight_analytics_autosuggestion | ESA | This index holds the autocomplete information for querying logs in Insight. The index was used in earlier versions of ESA. |
pty_insight_analytics_crons | ESA | This index logs information about the cron scheduler jobs. |
pty_insight_analytics_crons_logs | ESA | This index stores the logs for the cron scheduler when the jobs are executed. |
pty_insight_analytics_dsg_error_metrics | DSG | This index logs the DSG error information. |
pty_insight_analytics_dsg_transaction_metrics | DSG | This index logs the DSG transaction information. |
pty_insight_analytics_dsg_usage_metrics | DSG | This index logs the DSG usage information. |
pty_insight_analytics_encryption_store | ESA | This index encrypts and stores the password specified for the jobs. |
pty_insight_analytics_forensics_custom_queries | ESA | This index stores the custom queries created for forensics. The index was used in earlier versions of ESA. |
pty_insight_analytics_ilm_export_jobs | ESA | This index logs information about the running ILM export jobs. |
pty_insight_analytics_ilm_status | ESA | This index logs the information about the running ILM import and delete jobs. |
pty_insight_analytics_kvs | ESA | This is an internal index for storing the key-value type information. |
pty_insight_analytics_miscellaneous | ESA | This index logs entries that are not categorized in the other index files. |
pty_insight_analytics_policy | ESA | This index logs information about the ESA policy. It is a system index created by the ESA. |
pty_insight_analytics_policy_log | ESA | This index stores the logs for the ESA policy when the jobs are executed. |
pty_insight_analytics_policy_status_dashboard | ESA | The index holds information about the policy of the protectors for the dashboard. |
pty_insight_analytics_protector_status_dashboard | ESA | This index holds information about the 10.0.0 protectors for the dashboard. |
pty_insight_analytics_protectors_status | Protectors | This index holds the status logs of version 10.0.0 protectors. |
pty_insight_analytics_report | ESA | This index holds information for the reports created. The index was used in earlier versions of ESA. |
pty_insight_analytics_signature_verification_jobs | ESA | This index logs information about the signature verification jobs. |
pty_insight_analytics_signature_verification_running_jobs | ESA | This index logs information about the signature verification jobs that are currently running. |
pty_insight_analytics_troubleshooting | ESA | This index logs the log type application, kernel, system, and verification. |
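The same index list can also be retrieved from the OS Console. The following is a sketch, assuming the cluster certificates referenced in the troubleshooting section:
# List all Audit Store indices with their health, document count, and size
wget -q --ca-cert=/etc/ksa/certificates/as_cluster/CA.pem --certificate=/etc/ksa/certificates/as_cluster/client.pem --private-key=/etc/ksa/certificates/as_cluster/client.key -O - "https://localhost:9200/_cat/indices?v"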
10.2.4 - Understanding the index field values
Common Logging Information
These logging fields are common with the different log types generated by Protegrity products.
Note: These common fields are used across all log types.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
cnt | Integer | The aggregated count for a specific log. | Protector | 5 |
logtype | String | The type of log. For example, Protection, Policy, Application, Audit, Kernel, System, or Verification. For more examples about the log types, refer here. | Protector | Protection |
level | String | The level of severity. For example, SUCCESS, WARNING, ERROR, or INFO. These are the results of the logging operation. For more information about the log levels, refer here. | Protector | SUCCESS |
starttime | Date | This is an unused field. | Protector | |
endtime | Date | This is an unused field. | Protector | |
index_time_utc | Date | The time the Log Forwarder processed the logs. | Audit Store | Sep 8, 2024 @ 12:55:24.733 |
ingest_time_utc | Date | The time the log was inserted into the Audit Store. | Log Forwarder | Sep 8, 2024 @ 12:56:22.027 |
uri | String | The URI for the log. This is an unused field. | ||
correlationid | String | A unique ID that is generated when the policy is deployed. | Hubcontroller | clo5nyx470bi59p22fdrsr7k3 |
filetype | String | This is the file type, such as, regular file, directory, or device, when operations are performed on the file. This displays the value ISREG for files and ISDIR for directories. This is only used in File Protector. | File Protector | ISDIR |
index_node | String | The index node that ingested the log. | Audit Store | protegrity-esa746/192.168.2.20 |
operation | String | This is an unused field. | ||
path | String | This field is provided for Protector-related data. | File Protector | /hmount/source_dir/postmark_dir/postmark/1 |
system_nano_time | Long | This displays the time in nano seconds for the Signature Verification job. | Signature Verification | 255073580723571 |
tiebreaker | Long | This is an internal field that is used with the index time to make a record unique across nodes for sorting. | Protector, Signature Verification | 2590230 |
_id | String | This is the entry id for the record stored in the Audit Store. | Log Forwarder, td-agent | NDgyNzAwMDItZDI5Yi00NjU1LWJhN2UtNzJhNWRkOWYwOGY3 |
_index | String | This is the index name of the Audit Store where the log is stored. | Log Forwarder, td-agent | pty_insight_analytics_audits_10.0-2024.08.30-000001 |
Additional_Info
These descriptions are used for all types of logs.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
description | String | Description about the log generated. | All modules | Data protect operation was successful, Executing attempt_rollover for |
module | String | The module that generated the log. | All modules | .signature.job_runner |
procedure | String | The method in the module that generated the log. | All modules | create_job |
title | String | The title for the audit log. | DSG | DSG’s Rule Name INFO : DSG Patch Installation - User has chosen to reboot system later., Cloud Gateway service restart, and so on. |
Process
This section describes the properties of the process that created the log. For example, the protector or the rputils.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
thread_id | String | The thread_id of the process that generated the log. | PEP Server | 3382487360 |
id | String | The id of the process that generated the log. | PEP Server | 41710 |
user | String | The user that runs the program that generated the log. | All modules | service_admin |
version | String | The version of the program or Protector that generated the log. | All modules | 1.2.2+49.g126b2.1.2 |
platform | String | The platform that the program that generated the log is running on. | PEP Server | Linux_x64 |
module | String | The module that generated the log. | ESA, Protector | rpstatus |
name | String | The name of the process that generated the log. | All modules | Protegrity PEP Server |
pcc_version | String | The core pcc version. | PEP Server | 3.4.0.20 |
Origin
This section describes the origin of the log, that is, where the log came from and when it was generated.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
time_utc | Date | The time in the Coordinated Universal Time (UTC) format when the log was generated. | All modules | Sep 8, 2024 @ 12:56:29.000 |
hostname | String | The hostname of the machine where the log was generated. | All modules | ip-192-16-1-20.protegrity.com |
ip | IP | The IP of the machine where the log was generated. | All modules | 192.168.1.20 |
Protector
This section describes the Protector that generated the log. For example, the vendor and the version of the Protector.
Note: For more information about the Protector vendor, family, and version, refer here.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
vendor | String | The vendor of the Protector that generated the log. This is specified by the Protector. | Protector | DSG |
family | String | The Protector family of the Protector that generated the logs. This is specified by the Protector. For more information about the family, refer here. | Protector | gwp |
version | String | The version of the Protector that generated the logs. This is specified by the Protector. | Protector | 1.2.2+49.g126b2.1.2 |
core_version | String | This is the Core component version of the product. | Protector | 1.2.2+49.g126b2.1.2 |
pcc_version | String | This is the PCC version. | Protector | 3.4.0.20 |
Protection
This section describes the protection operation: what was done, the result of the operation, where it was done, and so on.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
policy | String | The name of the policy. This is only used in File Protector. | Protector | aes1-rcwd |
role | String | This field is not used and will be deprecated. | Protector | |
datastore | String | The name of the datastore used for the security operation. | Protector | Testdatastore |
audit_code | Integer | The return code for the operation. For more information about the return codes, refer here. | Protector | 6 |
session_id | String | The identifier for the session. | Protector | |
request_id | String | The ID of the request that generated the log. | Protector | |
old_dataelement | String | The old dataelement value before the reprotect to a new dataelement. | Protector | AES128 |
mask_setting | String | The mask setting used to protect data. | Protector | Mask Left:4 Mask Right:4 Mark Character: |
dataelement | String | The dataelement used when protecting or unprotecting data. This is passed by the Protector performing the operation. | Protector | PTY_DE_CCN |
operation | String | The operation, for example Protect, Unprotect, or Reprotect. This is passed in by the Protector performing the operation. | Protector | Protect |
policy_user | String | The policy user for which the operation is being performed. This is passed in by the Protector performing the operation. | Protector | exampleuser1 |
devicepath | String | The path to the device. This is only used in File Protector. | Protector | /hmount/fuse_mount |
filetype | String | The type of file that was protected or unprotected. This displays the value ISREG for files and ISDIR for directories. This is only used in File Protector. | Protector | ISREG |
path | String | The path to the file protected or unprotected by the File Protector. This is only used in File Protector. | Protector | /testdata/src/ez/audit_log(13).csv |
Client
This section describes where the log came from.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
ip | String | The IP of the client that generated the log. | DSG | 192.168.2.10 |
username | String | The username that ran the Protector or Server on the client that created the log. | Hubcontroller | johndoe |
Policy
This section describes the information about the policy.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
audit_code | Integer | This is the policy audit code for the policy log. | PEP Server | 198 |
policy_name | String | This is the policy name for the policy log. | PEP Server | AutomationPolicy |
severity | String | This is the severity level for the policy log entry. | PEP Server | Low |
username | String | This is the user who modified the policy. | PEP Server | johndoe |
Metering
This section describes the metering log information.
Note: These fields are applicable for Protectors up to v7.2.1. If you upgraded your ESA from v7.2.1 to v9.1.0.0 and migrated the metering audits, then these fields contain data.
Metering is not supported for Protectors v8.0.0.0 and above, and these fields will be blank.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
meteringmode | String | This is the mode for metering logs, such as, delta or total. | PEP Server | total |
origin | String | This is the IP from where metering data originated. | PEP Server | 192.168.0.10 |
protection_count | Double | This is the number of protect operations metered. | PEP Server | 10 |
reprotection_count | Double | This is the number of reprotect operations metered. | PEP Server | 5 |
timestamp | Date | This is the UTC timestamp when the metering log entry was generated. | PEP Server | Sep 8, 2020 @ 12:56:29.000 |
uid | String | This is the unique ID of the metering source that generated the log. | PEP Server | Q2XJPGHZZIYKBPDX5K0KEISIV9AX9V |
unprotection_count | Double | This is the number of unprotect operations metered. | PEP Server | 10 |
Signature
This section describes the signing of the log: the key that was used to sign the log and the checksum that was generated.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
key_id | String | The key ID of the signing key that signed the log record. | Protector | cc93c930-2ba5-47e1-9341-56a8d67d55d4 |
checksum | String | The checksum that was the result of signing the log. | Protector | 438FE13078719ACD4B8853AE215488ACF701ECDA2882A043791CDF99576DC0A0 |
counter | Double | This is the chain of custody value. It helps maintain the integrity of the log data. | Protector | 50321 |
Verification
This section describes the log information generated for a failed signature verification job.
Field | Data Type | Description | Source | Example |
---|---|---|---|---|
doc_id | String | This is the document ID for the audit log where the signature verification failed. | Signature Verification | N2U2N2JkM2QtMDhmYy00OGJmLTkyOGYtNmRhYzhhMGExMTFh |
index_name | String | This is the index name where the log signature verification failed. | Signature Verification | pty_insight_analytics_audits_10.0-2024.08.30-000001 |
job_id | String | This is the job ID of the signature verification job. | Signature Verification | 1T2RaosBEEC_iPz-zPjl |
job_name | String | This is the job name of the signature verification job. | Signature Verification | System Job |
reason | String | This is the audit log specifying the reason of the signature verification failure. | Signature Verification | INVALID_CHECKSUM | INVALID_KEY_ID | NO_KEY_AND_DOC_UPDATED |
10.2.5 - Index entries
Audit index
The log types of protection, metering, audit, and security are stored in the audit index. These logs are generated during security operations. The logs generated by protectors are stored in the audit index with the name shown in the following table for the respective version.
ESA version | Index pattern | Description | Example |
---|---|---|---|
ESA v10.1.0 | pty_insight_analytics_*audits* | Use in the Audit Store Dashboards for viewing v10.1.0 logs on the dashboard. | pty_insight_analytics_audits_10.0-2024.08.30-000001 |
v9.2.0.0 and earlier | pty_insight_*audit_* | Use in the Audit Store Dashboards for viewing older release logs on the dashboard. | pty_insight_analytics_audit_9.2-2024.08.07-000001, pty_insight_audit_v9.1-2028.02.10-000019, pty_insight_audit_v2.0-2022.02.19-000006, pty_insight_audit_v1.1-2021.02.17-000001, pty_insight_audit_v1-2020.12.21-000001 |
v8.0.0.0 and above | pty_insight_*audit* | Use in the Audit Store Dashboards for viewing all logs. | pty_insight_analytics_audits_10.0-2024.08.30-000001, pty_insight_analytics_audit_9.2-2024.08.07-000001, pty_insight_audit_v9.1-2028.02.10-000019, pty_insight_audit_v2.0-2022.02.19-000006, pty_insight_audit_v1.1-2021.02.17-000001, pty_insight_audit_v1-2020.12.21-000001 |
The following parameters are configured for the index rollover in v10.1.0:
- Index age: 30 days
- Document count: 200,000,000
- Index size: 5 GB
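You can inspect the rollover conditions that are actually configured on your system. The following is a hedged sketch, assuming the Audit Store exposes the standard OpenSearch Index State Management endpoint and using the same cluster certificates as in the troubleshooting section:
# List the configured ISM policies, including the rollover conditions applied to the audit indices
wget -q --ca-cert=/etc/ksa/certificates/as_cluster/CA.pem --certificate=/etc/ksa/certificates/as_cluster/client.pem --private-key=/etc/ksa/certificates/as_cluster/client.key -O - "https://localhost:9200/_plugins/_ism/policies?pretty"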
Protection logs
These logs are generated by protectors, such as the DSG, during protect, unprotect, and reprotect data operations.
Use the following query in Discover to view these logs.
logtype:protection
A sample log is shown here:
{
"process": {
"thread_id": "1227749696",
"module": "coreprovider",
"name": "java",
"pcc_version": "3.6.0.1",
"id": "4190",
"user": "user4",
"version": "10.0.0-alpha+13.gef09.10.0",
"core_version": "2.1.0+17.gca723.2.1",
"platform": "Linux_x64"
},
"level": "SUCCESS",
"signature": {
"key_id": "11a8b7d9-1621-4711-ace7-7d71e8adaf7c",
"checksum": "43B6A4684810383C9EC1C01FF2C5CED570863A7DE609AE5A78C729A2EF7AB93A"
},
"origin": {
"time_utc": "2024-09-02T13:55:17.000Z",
"hostname": "hostname1234",
"ip": "10.39.3.156"
},
"cnt": 1,
"protector": {
"vendor": "Java",
"pcc_version": "3.6.0.1",
"family": "sdk",
"version": "10.0.0-alpha+13.gef09.10.0",
"core_version": "2.1.0+17.gca723.2.1"
},
"protection": {
"dataelement": "TE_A_S13_L1R2_Y",
"datastore": "DataStore",
"audit_code": 6,
"operation": "Protect",
"policy_user": "user1"
},
"index_node": "protegrity-esa399/10.39.1.23",
"tiebreaker": 210,
"logtype": "Protection",
"additional_info": {
"description": "Data protect operation was successful"
},
"index_time_utc": "2024-09-02T13:55:24.766355224Z",
"ingest_time_utc": "2024-09-02T13:55:17.678Z",
"client": {},
"correlationid": "cm0f1jlq700gbzb19cq65miqt"
},
"fields": {
"origin.time_utc": [
"2024-09-02T13:55:17.000Z"
],
"index_time_utc": [
"2024-09-02T13:55:24.766Z"
],
"ingest_time_utc": [
"2024-09-02T13:55:17.678Z"
]
},
"sort": [
1725285317000
]
The above example contains the following information:
- additional_info
- origin
- protector
- protection
- process
- client
- signature
For more information about the various fields, refer here.
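The Discover query above can also be run directly against the Audit Store REST API. The following is a sketch using a URI search, assuming the cluster certificates from the troubleshooting section and the v10.1.0 audit index pattern:
# Return protection logs from the audit indices, pretty-printed
wget -q --ca-cert=/etc/ksa/certificates/as_cluster/CA.pem --certificate=/etc/ksa/certificates/as_cluster/client.pem --private-key=/etc/ksa/certificates/as_cluster/client.key -O - "https://localhost:9200/pty_insight_analytics_audits*/_search?q=logtype:protection&pretty"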
Metering logs
These logs are generated by protectors prior to v8.0.0.0. These logs are not generated by the latest protectors.
Use the following query in Discover to view these logs.
logtype:metering
For more information about the various fields, refer here.
Audit logs
These logs are generated when the rule set of the DSG protector gets updated.
Use the following query in Discover to view these logs.
logtype:audit
A sample log is shown here:
{
"additional_info.description": "User admin modified default_80 tunnel successfully ",
"additional_info.title": "Gateway : Tunnels : Tunnel 'default_80' Modified",
"client.ip": "192.168.2.20",
"cnt": 1,
"index_node": "protegrity-esa746/192.168.1.10",
"index_time_utc": "2024-01-24T13:30:17.171646Z",
"ingest_time_utc": "2024-01-24T13:29:35.000000000Z",
"level": "Normal",
"logtype": "Audit",
"origin.hostname": "protegrity-cg406",
"origin.ip": "192.168.2.20",
"origin.time_utc": "2024-01-24T13:29:35.000Z",
"process.name": "CGP",
"process.user": "admin",
"tiebreaker": 2260067,
"_id": "ZTdhNzFmMTUtMWZlOC00MmY4LWJmYTItMjcwZjMwMmY4OGZh",
"_index": "pty_insight_audit_v9.1-2024.01.23-000006"
}
This example includes data from each of the following groups defined in the index:
- additional_info
- client
- origin
- process
For more information about the various fields, refer here.
Security logs
These logs are generated by security events of the system.
Use the following query in Discover to view these logs.
logtype:security
For more information about the various fields, refer here.
Troubleshooting index
The log types of application, kernel, system, and verification are stored in the troubleshooting index. These logs help you understand the working of the system. The logs stored in this index are essential when the system is down or has issues. This is the pty_insight_analytics_troubleshooting index. The index pattern for viewing these logs in Discover is pty_insight_*troubleshooting_*.
The following parameters are configured for the index rollover:
- Index age: 30 days
- Document count: 200,000,000
- Index size: 5 GB
Application Logs
These logs are generated by Protegrity servers and Protegrity applications.
Use the following query in Discover to view these logs.
logtype:application
A sample log is shown here:
{
"process": {
"name": "hubcontroller"
},
"level": "INFO",
"origin": {
"time_utc": "2024-09-03T10:02:34.597000000Z",
"hostname": "protegrity-esa503",
"ip": "10.37.4.12"
},
"cnt": 1,
"index_node": "protegrity-esa503/10.37.4.12",
"tiebreaker": 16916,
"logtype": "Application",
"additional_info": {
"description": "GET /dps/v1/deployment/datastores | 304 | 127.0.0.1 | Protegrity Client | 8ms | "
},
"index_time_utc": "2024-09-03T10:02:37.314521452Z",
"ingest_time_utc": "2024-09-03T10:02:36.262628342Z",
"correlationid": "cm0m9gjq500ig1h03zwdv6kok"
},
"fields": {
"origin.time_utc": [
"2024-09-03T10:02:34.597Z"
],
"index_time_utc": [
"2024-09-03T10:02:37.314Z"
],
"ingest_time_utc": [
"2024-09-03T10:02:36.262Z"
]
},
"highlight": {
"logtype": [
"@opensearch-dashboards-highlighted-field@Application@/opensearch-dashboards-highlighted-field@"
]
},
"sort": [
1725357754597
]
The above example contains the following information:
- additional_info
- origin
- process
For more information about the various fields, refer here.
Kernel logs
These logs are generated by the kernel and help you analyze the working of the internal system. Some of the modules that generate these logs are CRED_DISP, KERNEL, USER_CMD, and so on.
Use the following query in Discover to view these logs.
logtype:Kernel
For more information and description about the components that can generate kernel logs, refer here.
For a list of components and modules and the type of logs they generate, refer here.
A sample log is shown here:
{
"process": {
"name": "CRED_DISP"
},
"origin": {
"time_utc": "2024-09-03T10:02:55.059999942Z",
"hostname": "protegrity-esa503",
"ip": "10.37.4.12"
},
"cnt": "1",
"index_node": "protegrity-esa503/10.37.4.12",
"tiebreaker": 16964,
"logtype": "Kernel",
"additional_info": {
"module": "pid=38236",
"description": "auid=4294967295 ses=4294967295 subj=unconfined msg='op=PAM:setcred grantors=pam_rootok acct=\"rabbitmq\" exe=\"/usr/sbin/runuser\" hostname=? addr=? terminal=? res=success'\u001dUID=\"root\" AUID=\"unset\"",
"procedure": "uid=0"
},
"index_time_utc": "2024-09-03T10:02:59.315734771Z",
"ingest_time_utc": "2024-09-03T10:02:55.062254541Z"
},
"fields": {
"origin.time_utc": [
"2024-09-03T10:02:55.059Z"
],
"index_time_utc": [
"2024-09-03T10:02:59.315Z"
],
"ingest_time_utc": [
"2024-09-03T10:02:55.062Z"
]
},
"highlight": {
"logtype": [
"@opensearch-dashboards-highlighted-field@Kernel@/opensearch-dashboards-highlighted-field@"
]
},
"sort": [
1725357775059
]
This example includes data from each of the following groups defined in the index:
- additional_info
- origin
- process
For more information about the various fields, refer here.
System logs
These logs are generated by the operating system and help you analyze and troubleshoot the system when errors are found.
Use the following query in Discover to view these logs.
logtype:System
For a list of components and modules and the type of logs they generate, refer here.
A sample log is shown here:
{
"process": {
"name": "ESAPAP",
"version": "10.0.0+2412",
"user": "admin"
},
"level": "Low",
"origin": {
"time_utc": "2024-09-03T10:00:34.000Z",
"hostname": "protegrity-esa503",
"ip": "10.37.4.12"
},
"cnt": "1",
"index_node": "protegrity-esa503/10.37.4.12",
"tiebreaker": 16860,
"logtype": "System",
"additional_info": {
"description": "License is due to expire in 30 days. The validity of license has been acknowledged by the user. (web-user 'admin' , IP: '10.87.2.32')",
"title": "Appliance Info : License is due to expire in 30 days. The validity of license has been acknowledged by the user. (web-user 'admin' , IP: '10.87.2.32')"
},
"index_time_utc": "2024-09-03T10:01:10.113708469Z",
"client": {
"ip": "10.37.4.12"
},
"ingest_time_utc": "2024-09-03T10:00:34.000000000Z"
},
"fields": {
"origin.time_utc": [
"2024-09-03T10:00:34.000Z"
],
"index_time_utc": [
"2024-09-03T10:01:10.113Z"
],
"ingest_time_utc": [
"2024-09-03T10:00:34.000Z"
]
},
"highlight": {
"logtype": [
"@opensearch-dashboards-highlighted-field@System@/opensearch-dashboards-highlighted-field@"
]
},
"sort": [
1725357634000
]
This example includes data from each of the following groups defined in the index:
- additional_info
- origin
- process
For more information about the various fields, refer here.
Verification logs
These logs are generated by Insight on the ESA when a signature verification fails.
Use the following query in Discover to view these logs.
logtype:Verification
For a list of components and modules and the type of logs they generate, refer here.
A sample log is shown here:
{
"process": {
"name": "insight.pyc",
"id": 45277
},
"level": "Info",
"origin": {
"time_utc": "2024-09-03T10:14:03.120342Z",
"hostname": "protegrity-esa503",
"ip": "10.37.4.12"
},
"cnt": 1,
"index_node": "protegrity-esa503/10.37.4.12",
"tiebreaker": 17774,
"logtype": "Verification",
"additional_info": {
"module": ".signature.job_executor",
"description": "",
"procedure": "__log_failure"
},
"index_time_utc": "2024-09-03T10:14:03.128435514Z",
"ingest_time_utc": "2024-09-03T10:14:03.120376Z",
"verification": {
"reason": "SV_VERIFY_RESPONSES.INVALID_CHECKSUM",
"job_name": "System Job",
"job_id": "9Vq1opEBYpV14mHXU9hW",
"index_name": "pty_insight_analytics_audits_10.0-2024.08.30-000001",
"doc_id": "JI5bt5EBMqY4Eog-YY7C"
}
},
"fields": {
"origin.time_utc": [
"2024-09-03T10:14:03.120Z"
],
"index_time_utc": [
"2024-09-03T10:14:03.128Z"
],
"ingest_time_utc": [
"2024-09-03T10:14:03.120Z"
]
},
"highlight": {
"logtype": [
"@opensearch-dashboards-highlighted-field@Verification@/opensearch-dashboards-highlighted-field@"
]
},
"sort": [
1725358443120
]
This example includes data from each of the following groups defined in the index:
- additional_info
- process
- origin
- verification
For more information about the various fields, refer here.
Policy log index
The log type of policy is stored in the policy log index. They include logs for the policy-related operations, such as, when the policy is updated. The index pattern for viewing these logs in Discover is pty_insight_*policy_log_*.
The following parameters are configured for the policy log index:
- Index age: 30 days
- Document count: 200,000,000
- Index size: 5 GB
Use the following query in Discover to view these logs.
logtype:policyLog
For a list of components and modules and the type of logs they generate, refer here.
A sample log is shown here:
{
"process": {
"name": "hubcontroller",
"user": "service_admin",
"version": "1.8.0+6.g5e62d8.1.8"
},
"level": "Low",
"origin": {
"time_utc": "2024-09-03T08:29:14.000000000Z",
"hostname": "protegrity-esa503",
"ip": "10.37.4.12"
},
"cnt": 1,
"index_node": "protegrity-esa503/10.37.4.12",
"tiebreaker": 10703,
"logtype": "Policy",
"additional_info": {
"description": "Data element created. (Data Element 'TE_LASCII_L2R1_Y' created)"
},
"index_time_utc": "2024-09-03T08:30:31.358367506Z",
"client": {
"ip": "10.87.2.32",
"username": "admin"
},
"ingest_time_utc": "2024-09-03T08:29:30.017906235Z",
"correlationid": "cm0m64iap009r1h0399ey6rl8",
"policy": {
"severity": "Low",
"audit_code": 150
}
},
"fields": {
"origin.time_utc": [
"2024-09-03T08:29:14.000Z"
],
"index_time_utc": [
"2024-09-03T08:30:31.358Z"
],
"ingest_time_utc": [
"2024-09-03T08:29:30.017Z"
]
},
"highlight": {
"additional_info.description": [
"(Data Element '@opensearch-dashboards-highlighted-field@DE@/opensearch-dashboards-highlighted-field@' created)"
]
},
"sort": [
1725352154000
]
The example contains the following information:
- additional_info
- origin
- policy
- process
For more information about the various fields, refer here.
Policy Status Dashboard index
The policy status dashboard index contains information for the Policy Status Dashboard. It holds the policy and trusted application deployment status information. The index pattern for viewing these logs in Discover is pty_insight_analytics*policy_status_dashboard_*.
{
"logtype": "Status",
"process": {
"thread_id": "2458884416",
"module": "rpstatus",
"name": "java",
"pcc_version": "3.6.0.1",
"id": "2852",
"user": "root",
"version": "10.0.0-alpha+13.gef09.10.0",
"core_version": "2.1.0+17.gca723.2.1",
"platform": "Linux_x64"
},
"origin": {
"time_utc": "2024-09-03T10:24:19.000Z",
"hostname": "ip-10-49-2-49.ec2.internal",
"ip": "10.49.2.49"
},
"cnt": 1,
"protector": {
"vendor": "Java",
"datastore": "DataStore",
"family": "sdk",
"version": "10.0.0-alpha+13.gef09.10.0"
},
"ingest_time_utc": "2024-09-03T10:24:19.510Z",
"status": {
"core_correlationid": "cm0f1jlq700gbzb19cq65miqt",
"package_correlationid": "cm0m1tv5k0019te89e48tgdug"
},
"policystatus": {
"type": "TRUSTED_APP",
"application_name": "APJava_sample",
"deployment_or_auth_time": "2024-09-03T10:24:19.000Z",
"status": "WARNING"
}
},
"fields": {
"policystatus.deployment_or_auth_time": [
"2024-09-03T10:24:19.000Z"
],
"origin.time_utc": [
"2024-09-03T10:24:19.000Z"
],
"ingest_time_utc": [
"2024-09-03T10:24:19.510Z"
]
},
"sort": [
1725359059000
]
The example contains the following information:
- additional_info
- origin
- protector
- policystatus
- policy
- process
Protectors status index
The protector status logs generated by protectors of v10.0.0 are stored in this index. The index pattern for viewing these logs in Discover is pty_insight_analytics_protectors_status_*.
The following parameters are configured for the index rollover:
- Index age: 30 days
- Document count: 200,000,000
- Index size: 5 GB
Use the following query in Discover to view these logs.
logtype:status
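For example, to narrow the results to status logs from a particular protector type, you can combine the log type with fields from the protector group; the values below are taken from the sample log that follows and are only illustrative:
logtype:status AND protector.family:sdk AND protector.vendor:Java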
A sample log is shown here:
{
"logtype":"Status",
"process":{
"thread_id":"2559813952",
"module":"rpstatus",
"name":"java",
"pcc_version":"3.6.0.1",
"id":"1991",
"user":"root",
"version":"10.0.0.2.91.5ec4b8b",
"core_version":"2.1.0-alpha+24.g7fc71.2.1",
"platform":"Linux_x64"
},
"origin":{
"time_utc":"2024-07-30T07:22:41.000Z",
"hostname":"ip-10-39-3-218.ec2.internal",
"ip":"10.39.3.218"
},
"cnt":1,
"protector":{
"vendor":"Java",
"datastore":"ESA-10.39.2.7",
"family":"sdk",
"version":"10.0.0.2.91.5ec4b8b"
},
"ingest_time_utc":"2024-07-30T07:22:41.745Z",
"status":{
"core_correlationid":"clz79lc2o004jmb29neneto8k",
"package_correlationid":"clz82ijw00037k790oxlnjalu"
}
}
The example contains the following information:
- additional_info
- origin
- policy
- protector
Protector Status Dashboard index
The protector status dashboard index contains information for the Protector Status Dashboard. It holds the protector status information. The index pattern for viewing these logs in Discover is pty_insight_analytics*protector_status_dashboard_*.
A sample log is shown here:
{
"logtype": "Status",
"process": {
"thread_id": "2458884416",
"module": "rpstatus",
"name": "java",
"pcc_version": "3.6.0.1",
"id": "2852",
"user": "root",
"version": "10.0.0-alpha+13.gef09.10.0",
"core_version": "2.1.0+17.gca723.2.1",
"platform": "Linux_x64"
},
"origin": {
"time_utc": "2024-09-03T10:24:19.000Z",
"hostname": "ip-10-49-2-49.ec2.internal",
"ip": "10.49.2.49"
},
"cnt": 1,
"protector": {
"vendor": "Java",
"datastore": "DataStore",
"family": "sdk",
"version": "10.0.0-alpha+13.gef09.10.0"
},
"ingest_time_utc": "2024-09-03T10:24:19.510Z",
"status": {
"core_correlationid": "cm0f1jlq700gbzb19cq65miqt",
"package_correlationid": "cm0m1tv5k0019te89e48tgdug"
},
"protector_status": "Warning"
},
"fields": {
"origin.time_utc": [
"2024-09-03T10:24:19.000Z"
],
"ingest_time_utc": [
"2024-09-03T10:24:19.510Z"
]
},
"sort": [
1725359059000
]
The example contains the following information:
- additional_info
- origin
- protector
- process
DSG transaction metrics
The table in this section lists the details for the various parameters generated by DSG transactions. The DSG transaction logs are stored in the pty_insight_analytics_dsg_transaction_metrics_9.2 index file. The index pattern for viewing these logs in Discover is pty_insight_analytics_dsg_transaction_metrics_*. The following parameters are configured for the index rollover:
- Index age: 1 day
- Document count: 10,000,000
- Index size: 1 GB
This index stores the following fields.
* -The origin_time_utc and logtype parameters will only be displayed on the Audit Store Dashboards.
For more information about the transaction metric logs, refer to the section Transaction Metrics Logging in the Protegrity Data Security Gateway User Guide 3.2.0.0.
Scheduled tasks are available for deleting this index. You can configure and enable the scheduled task to free up the space used by old index files that you do not require.
For more information about scheduled tasks, refer here.
DSG usage metrics
This section describes the codes associated with the following DSG usage metrics:
- Tunnels usage data (Version 0)
- Service usage data (Version 0)
- Profile usage data (Version 0)
- Rules usage data (Version 0)
The tables in the following subsections list the details for the various parameters generated while using the DSG. The DSG usage metrics logs are stored in the pty_insight_analytics_dsg_usage_metrics_9.2 index file. The index pattern for viewing these logs in Discover is pty_insight_analytics_dsg_usage_metrics_*. The following parameters are configured for the index rollover:
- Index age: 1 day
- Document count: 3,500,000
- Index size: 1 GB
For more information about the usage metrics, refer to the Protegrity Data Security Gateway User Guide 3.2.0.0.
Scheduled tasks are available for deleting this index. You can configure and enable the scheduled task to free up the space used by old index files that you do not require.
For more information about scheduled tasks, refer here.
Tunnels usage data
The table in this section describes the usage metric for Tunnels.
Position | Name | Data Type | Description |
---|---|---|---|
0 | metrics type | integer | 0 for Tunnels |
1 | metrics version | integer | 0 |
2 | tunnel-type | string | the tunnel type: CIFS, HTTP, NFS, S3, SFTP, or SMTP |
3 | timestamp | string | time usage is reported |
4 | tunnel-id | string | address of the tunnel instance; a unique ID generated when the tunnel is created |
5 | uptime | float | time in seconds since the tunnel loaded |
6 | bytes-processed | integer | frontend and backend bytes the tunnel processed since the last time usage was reported |
7 | frontend-bytes-processed | integer | frontend bytes the tunnel has processed since the last time usage was reported |
8 | backend-bytes-processed | integer | backend bytes the tunnel has processed since the last time usage was reported |
9 | total-bytes-processed | integer | total number of frontend and backend bytes the tunnel has processed during the time the tunnel has been loaded |
10 | frontend-bytes-processed | integer | total number of frontend bytes the tunnel has processed during the time the tunnel has been loaded |
11 | backend-bytes-processed | integer | total number of backend bytes the tunnel has processed during the time the tunnel has been loaded |
12 | message-count | integer | number of requests the tunnel received since the last time usage was reported |
13 | total-message-count | integer | total number of requests the tunnel received during the time the tunnel has been loaded |
14 | ingest_time_utc | string | Time in UTC at which this log is ingested |
15 | logtype | string | Value identifying the type of metric: dsg_metrics_usage_tunnel |
A sample is provided here:
{"metrics_type":"Tunnel","version":0,"tunnel_type":"HTTP","cnt":1,"logtype":"Application","origin":{"time_utc":"2023-04-13T12:28:18Z"},"previous_timestamp":"2023-04-13T12:28:08Z","tunnel_id":"140361619513360","checksum":"4139677074","uptime":620.8048927783966,"bytes_processed":401,"frontend_bytes_processed":401,"backend_bytes_processed":0,"previous_bytes_processed":401,"previous_frontend_bytes_processed":401,"previous_backend_bytes_processed":0,"total_bytes_processed":1203,"total_frontend_bytes_processed":1203,"total_backend_bytes_processed":0,"message_count":1,"previouse_message_count":1,"total_message_count":3}
Services usage data
The table in this section describes the usage metric for Services.
Position | Name | Data Type | Description |
---|---|---|---|
0 | metrics type | integer | 1 for Services |
1 | metrics version | integer | 0 |
2 | service-type | string | the service type HTTP-GW, MOUNTED-OOB, REST-API, S3-OOB, SMTP-GW, SFTP-GW, WS-GW |
3 | timestamp | string | time usage is reported |
4 | service-id | string | UUID of service name |
5 | tunnel-id | string | UUID of tunnel name |
6 | calls | integer | number of times service processed frontend and backend requests since the time usage was last reported |
7 | frontend-calls | integer | number of times service processed frontend requests since the time usage was last reported |
8 | backend-calls | integer | number of times service processed backend requests since the time usage was last reported |
9 | total-calls | integer | total number of times the service processed frontend and backend requests since the service has been loaded |
10 | total-frontend-calls | integer | total number of times the service processed frontend requests since the service has been loaded |
11 | total-backend-calls | integer | total number of times the service processed backend requests since the service has been loaded |
12 | bytes-processed | integer | frontend and backend bytes the service processed since the last time usage was reported |
13 | frontend-bytes-processed | integer | frontend bytes the service processed since the last time usage was reported |
14 | backend-bytes-processed | integer | backend bytes the service processed since the last time usage was reported |
15 | total-bytes-processed | integer | total number of frontend and backend bytes the service has processed during the time the service has been loaded |
16 | total-frontend-bytes-processed | integer | total number of frontend bytes the service has processed during the time the service has been loaded |
17 | total-backend-bytes-processed | integer | total number of backend bytes the service has processed during the time the service has been loaded |
18 | ingest_time_utc | string | Time in UTC at which this log is ingested |
19 | logtype | string | Value identifying the type of metric: dsg_metrics_usage_service |
A sample is provided here:
{"metrics_type":"Service","version":0,"service_type":"REST-API","cnt":1,"logtype":"Application","origin":{"time_utc":"2023-04-13T12:28:18Z"}, "previous_timestamp":"2023-04-13T12:28:08Z", "service_id":"140361548704016","checksum":"3100121694","tunnel_checksum":"4139677074","calls":401,"frontend_calls":401,"backend_calls":0,"previous_calls":401,"previous_frontend_calls":401,"previous_backend_calls":0,"total_calls":1203,"total_frontend_calls":1203,"total_backend_calls":0,"bytes_processed":2,"frontend_bytes_processed":1,"backend_bytes_processed":1,"previous_bytes_processed":2,"previous_frontend_bytes_processed":1,"previous_backend_bytes_processed":1,"total_bytes_processed":6,"total_frontend_bytes_processed":3,"total_backend_bytes_processed":3}
Profile usage data
The table in this section describes the usage metric for Profile.
Position | Name | Data Type | Description |
---|---|---|---|
0 | metrics type | integer | 2 for Profile |
1 | metrics version | integer | 0 |
2 | timestamp | string | time usage is reported |
3 | prev-timestamp | string | the previous time usage was reported |
4 | profile-id | string | address of the profile instance; a unique ID generated when the profile is created |
5 | parent-id | string | checksum of profile or service calling this profile |
6 | calls | integer | number of times the profile processed a request since the time usage was last reported |
7 | total-calls | integer | total number of times the profile processed a request since profile has been loaded |
8 | profile-ref-count | integer | the number of times this profile has been called via a profile reference since the time usage was last reported |
9 | prev-profile-ref-count | integer | the number of times this profile had been called via a profile reference as of the last time usage was reported |
10 | total-profile-ref-count | integer | total number of times this profile has been called via a profile reference since the profile has been loaded |
11 | bytes-processed | integer | bytes the profile processed since the last time usage was reported |
12 | total-bytes-processed | integer | total bytes the profile processed since the profile has been loaded |
13 | elapsed-time-sample-count | integer | the number of times the profile was sampled since the last time usage was reported |
14 | elapsed-time-mean | integer | the average amount of time in nanoseconds it took to process a request based on elapsed-time-sample-count |
15 | total-elapsed-time-sample-count | integer | the number of times the profile was sampled since the profile has been loaded |
16 | total-elapsed-time-sample-mean | integer | the average amount of time in nanoseconds it took to process a request based on total-elapsed-time-sample-count |
17 | ingest_time_utc | string | Time in UTC at which this log is ingested |
18 | logtype | string | Value identifying the type of metric: dsg_metrics_usage_profile |
A sample is provided here:
{"metrics_type":"Profile","version":0,"cnt":1,"logtype":"Application","origin":{"time_utc":"2023-04-13T12:28:18Z"},"previous_timestamp":"2023-04-13T12:28:08Z","profile_id":"140361548999248","checksum":"3504922421","parent_checksum":"3100121694","calls":2,"previous_calls":2,"total_calls":6,"profile_reference_count":0,"previous_profile_reference_count":0,"total_profile_reference_count":0,"bytes_processed":802,"previous_bytes_processed":802,"total_bytes_processed":2406,"elapsed_time_sample_count":2,"elapsed_time_average":221078.5,"total_elapsed_time_sample_count":6,"total_elapsed_time_sample_average":245797.0}
Rules usage data
The table in this section describes the usage metric for Rules.
Position | Name | Data Type | Description |
---|---|---|---|
0 | metrics type | integer | 3 for Rules |
1 | metrics version | integer | 0 |
2 | rule-type | string | rule is one of Dynamic Injection, Error, Exit, Extract, Log, Profile Reference, Set Context Variable, Set User Identity, Transform |
3 | codec | string | only applies to Extract |
4 | timestamp | string | time usage is reported |
5 | flag | boolean | indicates whether the rule is broken or is a domain name rewrite |
6 | rule-id | string | address of the rule instance; a unique ID generated when the rule is created |
7 | parent-id | string | checksum of rule or profile calling this rule |
8 | calls | integer | number of times the rule processed a request since the time usage was last reported |
9 | total-calls | integer | total number of times the rule processed a request since rule has been loaded |
10 | profile-ref-count | integer | the number of times this rule has been called via a profile reference since the time usage was last reported |
11 | prev-profile-ref-count | integer | the number of times this rule had been called via a profile reference as of the last time usage was reported |
12 | total-profile-ref-count | integer | total number of times this rule has been called via a profile reference since the rule has been loaded |
13 | bytes-processed | integer | bytes the rule processed since the last time usage was reported |
14 | total-bytes-processed | integer | total bytes the rule processed since the rule has been loaded |
15 | elapsed-time-sample-count | integer | the number of times the rule was sampled since the last time usage was reported |
16 | elapsed-time-sample-mean | integer | the average amount of time in nanoseconds it took to process data based on elapsed-time-sample-count |
17 | total-elapsed-time-sample-count | integer | the number of times the rule was sampled since the rule has been loaded |
18 | total-elapsed-time-sample-mean | integer | the average amount of time in nanoseconds it took to process data based on total-elapsed-time-sample-count |
19 | ingest_time_utc | string | Time in UTC at which this log is ingested |
20 | logtype | string | Value identifying the type of metric: dsg_metrics_usage_rule |
A sample is provided here:
{"metric_type":"Rule","version":0,"rule_type":"Extract","codec":"Set User Identity","cnt":1,"logtype":"Application","origin":{"time_utc":"2023-04-13T12:28:18Z"},"previous_timestamp":"2023-04-13T12:28:08Z","broken":false,"domain_name_rewrite":false,"rule_id":"140361553016464","rule_checksum":"932129179","parent_checksum":"3504922421","calls":1,"previous_calls":1,"total_calls":3,"profile_reference_count":0,"previous_profile_reference_count":0,"total_profile_reference_count":0,"bytes_processed":1,"previous_bytes_processed":1,"total_bytes_processed":3,"elapsed_time_sample_count":1,"elapsed_time_sample_average":406842.0,"total_elapsed_time_sample_count":3,"total_elapsed_time_sample_average":451163.6666666667}
DSG error metrics
The table in this section lists the details for the various parameters generated for the DSG Error Metrics. The DSG Error Metrics logs are stored in the pty_insight_analytics_dsg_error_metrics_9.2 index file. The index pattern for viewing these logs in Discover is pty_insight_analytics_dsg_error_metrics_*. The following parameters are configured for the index rollover:
- Index age: 1 day
- Document count: 3,500,000
- Index size: 1 GB
This index stores the following fields.
* -The origin_time_utc and logtype parameters will only be displayed on the Audit Store Dashboards.
For more information about the error metric logs, refer to the Protegrity Data Security Gateway User Guide 3.2.0.0.
Scheduled tasks are available for deleting this index. You can configure and enable the scheduled task to free up the space used by old index files that you do not require.
For more information about scheduled tasks, refer here.
Miscellaneous index
The logs that are not added to the other indexes are captured and stored in the miscellaneous index. The index pattern for viewing these logs in Discover is pty_insight_analytics_miscellaneous_*.
This index should not contain any logs. If any logs are visible in this index, contact Protegrity support.
The following parameters are configured for the index rollover:
- Index age: 7 days
- Document count: 3,500,000
- Index size: 200 MB
Use the following query in Discover to view these logs.
logtype:miscellaneous
Scheduled tasks are available for deleting this index. You can configure and enable the scheduled task to free up the space used by old index files that you do not require.
For more information about scheduled tasks, refer here.
10.2.6 - Log return codes
Return Code | Description |
---|---|
0 | Error code for no logging |
1 | The username could not be found in the policy |
2 | The data element could not be found in the policy |
3 | The user does not have the appropriate permissions to perform the requested operation |
4 | Tweak is null |
5 | Integrity check failed |
6 | Data protect operation was successful |
7 | Data protect operation failed |
8 | Data unprotect operation was successful |
9 | Data unprotect operation failed |
10 | The user has appropriate permissions to perform the requested operation but no data has been protected/unprotected |
11 | Data unprotect operation was successful with use of an inactive keyid |
12 | Input is null or not within allowed limits |
13 | Internal error occurring in a function call after the provider has been opened |
14 | Failed to load data encryption key |
15 | Tweak input is too long |
16 | The user does not have the appropriate permissions to perform the unprotect operation |
17 | Failed to initialize the PEP: this is a fatal error |
19 | Unsupported tweak action for the specified fpe data element |
20 | Failed to allocate memory |
21 | Input or output buffer is too small |
22 | Data is too short to be protected/unprotected |
23 | Data is too long to be protected/unprotected |
24 | The user does not have the appropriate permissions to perform the protect operation |
25 | Username too long |
26 | Unsupported algorithm or unsupported action for the specific data element |
27 | Application has been authorized |
28 | Application has not been authorized |
29 | The user does not have the appropriate permissions to perform the reprotect operation |
30 | Not used |
31 | Policy not available |
32 | Delete operation was successful |
33 | Delete operation failed |
34 | Create operation was successful |
35 | Create operation failed |
36 | Manage protection operation was successful |
37 | Manage protection operation failed |
38 | Not used |
39 | Not used |
40 | No valid license or current date is beyond the license expiration date |
41 | The use of the protection method is restricted by license |
42 | Invalid license or time is before license start |
43 | Not used |
44 | The content of the input data is not valid |
45 | Not used |
46 | Used for z/OS query default data element when policy name is not found |
47 | Access key security groups not found |
48 | Not used |
49 | Unsupported input encoding for the specific data element |
50 | Data reprotect operation was successful |
51 | Failed to send logs, connection refused |
52 | Return code used by bulkhandling in pepproviderauditor |
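Because the return codes are plain integers, downstream tooling can group them when post-processing exported log records. The following Python sketch is only an illustration; the field name return_code is a hypothetical placeholder for wherever the code appears in your exported records, and the groupings are taken from the table above.

# Success and not-used codes taken from the return code table above; everything
# else is treated as a failure. The categorization itself is illustrative only.
SUCCESS_CODES = {6, 8, 10, 11, 27, 32, 34, 36, 50}
NOT_USED_CODES = {30, 38, 39, 43, 45, 48}

def classify(return_code: int) -> str:
    """Map a numeric log return code to a coarse category."""
    if return_code in SUCCESS_CODES:
        return "success"
    if return_code in NOT_USED_CODES:
        return "not-used"
    return "failure"

# Example: count outcomes in a batch of exported records ("return_code" is hypothetical).
records = [{"return_code": 6}, {"return_code": 9}, {"return_code": 3}]
summary = {}
for rec in records:
    category = classify(rec["return_code"])
    summary[category] = summary.get(category, 0) + 1
print(summary)  # {'success': 1, 'failure': 2}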
10.2.7 - Protectors security log codes
The security logging level can be configured when a data security policy is created in Policy management in ESA. If the logging level is set to audit successful and audit failed, then both successful and failed Unprotect, Protect, Reprotect, and Delete operations are logged.
You can define the server to which these security audit logs are sent by modifying the Log Server configuration section in the pepserver.cfg file.
If you configure protector security logs to be sent to the ESA, you can view them in Discover by logging in to the ESA, navigating to Audit Store > Dashboard > Open in new tab, selecting Discover from the menu, and selecting a time period such as Last 30 days. The following table displays the logs sent by protectors.
Log Code | Severity | Description | Error Message | DB / AP Operations | MSSQL | Teradata | Oracle | DB2 | XC API Definitions | Recovery Actions |
---|---|---|---|---|---|---|---|---|---|---|
0 | S | Internal ID when audit record should not be generated. | - | - | - | - | - | - | XC_LOG_NONE | No action is required. |
1 | W | The username could not be found in the policy in shared memory. | No such user | URPD | 1 | 01H01 or U0001 | 20101 | 38821 | XC_LOG_USER_NOT_FOUND | Verify that the user that calls a PTY function is in the policy. Ensure that your policy is synchronized across all Teradata nodes. Make sure that the ESA connectivity information is correct in the pepserver.cfg file. |
2 | W | The data element could not be found in the policy in shared memory. | No such data element | URPD | 2 | U0002 | 20102 | 38822 | XC_LOG_DATA_ELEMENT_NOT_FOUND | Verify that you are calling a PTY function with data element that exists in the policy. |
3 | W | The data element was found, but the user does not have the appropriate permissions to perform the requested operation. | Permission denied | URPD | 3 | 01H03 or U0003 | 20103 | 38823 | XC_LOG_PERMISSION_DENIED | Verify that you are calling a PTY function with a user having access permissions to perform this operation according to the policy. |
4 | E | Tweak is null. | Tweak null | URPD | 4 | 01H04 or U0004 | 20104 | 38824 | XC_LOG_TWEAK_NULL | Ensure that the tweak is not a null value. |
5 | W | The data integrity check failed when decrypting using a Data Element with CRC enabled. | Integrity check failed | U— | 5 | U0005 | 20105 | 38825 | XC_LOG_INTEGRITY_CHECK_FAILED | Check that you use the correct data element to decrypt. Check that your data was not corrupted, restore data from the backup. |
6 | S | The data element was found, and the user has the appropriate permissions for the operation. Data protection was successful. | -RP- | 6 | U0006 | 20106 | 38826 | XC_LOG_PROTECT_SUCCESS | No action is required. | |
7 | W | The data element was found, and the user has the appropriate permissions for the operation. Data protection was NOT successful. | -RP- | 7 | U0007 | 20107 | 38827 | XC_LOG_PROTECT_FAILED | Failed to create Key ID crypto context. Verify that your data is not corrupted and you use valid combination of input data and data element to encrypt. | |
8 | S | The data element was found, and the user has the appropriate permissions for the operation. Data unprotect operation was successful. If mask was applied to the DE, then the appropriate record is added to the audit log description. | U— | 8 | U0008 | 20108 | 38828 | XC_LOG_UNPROTECT_SUCCESS | No action is required. | |
9 | W | The data element was found, and the user has the appropriate permissions for the operation. Data unprotect operation was NOT successful. | U— | 9 | U0009 | 20109 | 38829 | XC_LOG_UNPROTECT_FAILED | Failure to decrypt data with Key ID by data element without Key ID. Verify that your data is not corrupted and you use valid combination of input data and data element to decrypt. | |
10 | S | Policy check OK. The data element was found, and the user has the appropriate permissions for the operation. NO protection operation is done. | —D | 10 | U0010 | 20110 | 38830 | XC_LOG_OK_ACCESS | No action is required. Successful DELETE operation was performed. | |
11 | W | The data element was found, and the user has the appropriate permissions for the operation. Data unprotect operation was successful with use of an inactive key ID. | U— | 11 | U0011 | 20111 | 38831 | XC_LOG_INACTIVE_KEYID_USED | No action is required. Successful UNPROTECT operation was performed. | |
12 | E | Input parameters are either NULL or not within allowed limits. | URPD | 12 | U0012 | 20112 | 38832 | XC_LOG_INVALID_PARAM | Verify the input parameters are correct. | |
13 | E | Internal error occurring in a function call after the PEP Provider has been opened. For instance: - failed to get mutex/semaphore, - unexpected null parameter in internal (private) functions, - uninitialized provider, etc. | URPD | 13 | U0013 | 20113 | 38833 | XC_LOG_INTERNAL_ERROR | Restart PEP Server and re-deploy the policy. | |
14 | W | A key for a data element could not be loaded from shared memory into the crypto engine. | Failed to load data encryption key - Cache is full, or Failed to load data encryption key - No such key, or Failed to load data encryption key - Internal error. | URP- | 14 | U0014 | 20114 | 38834 | XC_LOG_LOAD_KEY_FAILED | If return message is ‘Cache is full’, then logoff and logon again, clear the session and cache. For all other return messages restart PEP Server and re-deploy the policy. |
15 | Tweak input is too long. | |||||||||
16 | The user does not have the appropriate permissions to perform the unprotect operation. | |||||||||
17 | E | A fatal error was encountered when initializing the PEP. | URPD | 17 | U0017 | 20117 | 38837 | XC_LOG_INIT_FAILED | Re-install the protector, re-deploy policy. | |
19 | Unsupported tweak action for the specified fpe data element. | |||||||||
20 | E | Failed to allocate memory. | URPD | 20 | U0020 | 20120 | 38840 | XC_LOG_OUT_OF_MEMORY | Check what uses the memory on the server. | |
21 | W | Supplied input or output buffer is too small. | Buffer too small | URPD | 21 | U0021 | 20121 | 38841 | XC_LOG_BUFFER_TOO_SMALL | Token specific error about supplied buffers. Data expands too much, using non-length preserving Token element. Check return message for specific error, and verify you use correct combination of data type (encoding), and token element. Verify supported data types according to Protegrity Protection Methods Reference 7.2.1. |
22 | W | Data is too short to be protected or unprotected. E.g. Too few characters were provided when tokenizing with a length-preserving token element. | Input too short | URPD | 22 | U0022 | 20122 | 38842 | XC_LOG_INPUT_TOO_SHORT | Provide the longer input data. |
23 | W | Data is too long to be protected or unprotected. E.g. Too many characters were provided. | Input too long | URPD | 23 | U0023 | 20123 | 38843 | XC_LOG_INPUT_TOO_LONG | Provide the shorter input data. |
24 | The user does not have the appropriate permissions to perform the protect operation. | |||||||||
25 | W | Unauthorized Username too long. | Username too long. | UPRD | - | U0025 | - | - | Run query by user with Username up to 255 characters long. | |
26 | E | Unsupported algorithm or unsupported action for the specific data element or unsupported policy version. For example, unprotect using HMAC data element. | URPD | 26 | U0026 | 20126 | 38846 | XC_LOG_UNSUPPORTED | Check the data elements used for the crypto operation. Note that HMAC data elements cannot be used for decrypt and re-encrypt operations. | |
27 | Application has been authorized. | |||||||||
28 | Application has not been authorized. | |||||||||
29 | The JSON type is not serializable. | |||||||||
30 | W | Failed to save audit record in shared memory. | Failed to save audit record | URPD | 30 | U0030 | 20130 | 38850 | XC_LOG_AUDITING_FAILED | Check if PEP Server is started. |
31 | E | The policy shared memory is empty. | Policy not available | URPD | 31 | U0031 | 20131 | 38851 | XC_LOG_EMPTY_POLICY | No policy is deployed on PEP Server. |
32 | Delete operation was successful. | |||||||||
33 | Delete operation failed. | |||||||||
34 | Create operation was successful. | |||||||||
35 | Create operation failed. | |||||||||
36 | Manage protection operation was successful. | |||||||||
37 | Manage protection operation failed. | |||||||||
39 | E | The policy in shared memory is locked. This is the result of a disk full alert. | Policy locked | URPD | 39 | U0039 | 20139 | 38859 | XC_LOG_POLICY_LOCKED | Fix the disk space and restart the PEP Server. |
40 | E | No valid license or current date is beyond the license expiration date. | License expired | -RP- | 40 | U0040 | 20140 | 38860 | XC_LOG_LICENSE_EXPIRED | ESA System Administrator should request and obtain a new license. Re-deploy policy with renewed license. |
41 | E | The use of the protection method is restricted by the license. | Protection method restricted by license. | URPD | 41 | U0041 | 20141 | 38861 | XC_LOG_METHOD_RESTRICTED | Perform the protection operation with the protection method that is not restricted by the license. Request license with desired protection method enabled. |
42 | E | Invalid license or time is before license start time. | License is invalid. | URPD | 42 | U0042 | 20142 | 38862 | XC_LOG_LICENSE_INVALID | ESA System Administrator should request and obtain a new license. Re-deploy policy with renewed license. |
44 | W | Content of the input data to protect is not valid (e.g. for Tokenization). E.g. Input is alphabetic when it is supposed to be numeric. | Invalid format | -RP- | 44 | U0044 | 20144 | 38864 | XC_LOG_INVALID_FORMAT | Verify the input data is of the supported alphabet for specified type of token element. |
46 | E | Used for z/OS Query Default Data element when policy name is not found. | No policy. Cannot Continue. | 46 | n/a | n/a | n/a | XC_LOG_INVALID_POLICY | Specify the valid policy. Policy name is case sensitive. | |
47 | Access Key security groups not found. | |||||||||
48 | Rule Set not found. | |||||||||
49 | Unsupported input encoding for the specific data element. | |||||||||
50 | S | The data element was found, and the user has the appropriate permissions for the operation. The data Reprotect operation is successful. | -R- | n/a | n/a | n/a | n/a | No action is required. Successful REPROTECT operation was performed. | ||
51 | Failed to send logs, connection refused! |
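For the rows in the table where database-specific codes are listed, those codes follow a fixed offset from the log code: the Oracle code is 20100 plus the log code, the DB2 code is 38820 plus the log code, and the Teradata code is the log code zero-padded with a U prefix. The following Python sketch only illustrates that relationship and is not a product API:

def db_error_codes(log_code: int) -> dict:
    """Derive the per-database error codes for a protector security log code.

    The offsets are inferred from the table above and only apply to rows
    where a database-specific code is listed.
    """
    return {
        "mssql": log_code,
        "teradata": f"U{log_code:04d}",
        "oracle": 20100 + log_code,
        "db2": 38820 + log_code,
    }

print(db_error_codes(3))
# {'mssql': 3, 'teradata': 'U0003', 'oracle': 20103, 'db2': 38823}

For example, log code 3 (permission denied) maps to MSSQL 3, Teradata U0003, Oracle 20103, and DB2 38823, matching the table above.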
10.2.8 - Policy audit codes
Log Code | Log Description | Description of the event |
---|---|---|
50 | Policy created | Generated when a new policy was created in Policy management in ESA. |
51 | Policy updated | Generated when an existing policy was updated in Policy management in ESA. |
52 | Policy deleted | Generated when an existing policy was deleted from Policy management in ESA. |
56 | Policy role created | Generated when a new policy role was created in Policy management in ESA. |
71 | Policy deployed | Generated when a policy was sent for deployment to a PEP Server. |
75 | Policy data store added | Generated when a new data store was added to a policy. |
76 | Policy changed state | Generated when a policy has changed its state to Ready to Deploy, or Deployed. |
78 | Key created | Generated when new key was created for the Data Element. |
80 | Policy deploy failed | Generated when a policy failed to be deployed. |
83 | Token deploy failed | Generated when the token failed to be deployed. |
84 | Token deployed successfully | Generated when the token deployed successfully. |
85 | Data Element Key(s) exported | Generated when export keys API executes successfully. Lists each Data Element that was successfully exported. |
86 | Policy deploy warning | Generated when the Policy deploy operation fails. |
100 | Password changed | Generated when the password of the admin user was changed. |
101 | Data store created | Generated when a new data store was created. |
102 | Data store updated | Generated when an existing data store was updated. |
103 | Data store deleted | Generated when a data store was deleted. |
107 | Mask created | Generated when a new mask was created. |
108 | Mask deleted | Generated when any mask was deleted. |
109 | Security coordinate deleted | Generated when an existing security coordinate was deleted. |
110 | Security coordinate created | Generated when a new security coordinate is created. |
111 | Role created | Generated when a new role is created in Policy management in ESA. |
112 | Role deleted | Generated when any role was deleted from Policy management in ESA. |
113 | Member source created | Generated when a new external source is created in Policy management in ESA. |
114 | Member source updated | Generated when an existing member source is updated in Policy management in ESA. |
115 | Member source deleted | Generated when any external source was deleted from Policy management in ESA. |
116 | All roles resolved | Generated when the members in the automatic roles are synchronized. |
117 | Role resolved | Generated when it has fetched all members from a certain role into the policy. |
118 | Role group member resolved | Generated when it has fetched all group members from a role into the policy. |
119 | Trusted application created | Generated when a new trusted application is created in Policy management in ESA. |
120 | Trusted application deleted | Generated when a trusted application is deleted in Policy management in ESA. |
121 | Trusted application updated | Generated when an existing trusted application is updated in Policy management in ESA. |
126 | Mask updated | Generated when a mask is updated in Policy management in ESA. |
127 | Role updated | Generated when a role is updated in Policy management in ESA. |
129 | Node registered | Generated when a node is registered with ESA. |
130 | Node updated | Generated when a node is updated. |
131 | Node unregistered | Generated when a node is unregistered from the ESA. |
140 | Disk full alert | Generated when the disk is full. |
141 | Disk full warning | Generated to warn the IT administrator that the disk is almost full. |
144 | Login success | Generated when Security Officer logs into Policy management in ESA. |
145 | Login failed | Generated when Security Officer failed to log into Policy management in ESA. |
146 | Logout success | Generated when Security Officer logs out from Policy management in ESA. |
149 | Data element key updated | Generated when a data element with a key is updated in Policy management in ESA. |
150 | Data element key created | Generated when a new data element with a key is created in Policy management in ESA. |
151 | Data element key deleted | Generated when a data element (and its key) was deleted from Policy management in ESA. |
152 | Too many keys created | Generated when the number of data element key IDs has reached its maximum. |
153 | License expire warning | Generated once per day and upon HubController restart when less than 30 days are left before license expiration. |
154 | License has expired | Generated when the license becomes expired. |
155 | License is invalid | Generated when the license becomes invalid. |
156 | Policy is compromised | Generated when integrity of pepserver.db has been compromised. |
157 | Failed to import some users | Generated when users having names longer than 255 characters were not fetched from an external source. |
158 | Policy successfully imported | Generated when the policy is successfully retrieved from the pepserver.imp file (configured in pepserver.cfg), imported and decrypted. |
159 | Failed to import policy | Generated when the policy fails to be imported from the pepserver.imp file (configured in pepserver.cfg). |
170 | Data store key exported | Generated when the Data store key is exported. |
171 | Key updated | Generated when the key rotation is successful. |
172 | Key deleted | Generated when the key is deleted. |
173 | Datastore key has expired | Generated when the Data store key expires. |
174 | Datastore key expire warning | Generated when the Data store key is about to expire. |
176 | Datastore key rotated | Generated when the Data store key is rotated. |
177 | Master key has expired | Generated when the Master key expires. |
178 | Master key expire warning | Generated when the Master key is about to expire. |
179 | Master key rotated | Generated when the Master key is rotated. |
180 | New HSM Configured | Generated when a new HSM configuration is created. |
181 | Repository key has expired. | Generated when the Repository Key has expired. |
182 | Repository key expiry warning. | Generated when the Repository Key is on the verge of expiry. The warning message mentions the number of days left for the Repository Key expiry. |
183 | Repository key rotated. | Generated when the Repository Key has been rotated. |
184 | Metering created | Generated when metering data is created. |
185 | Metering updated | Generated when metering data is updated. |
186 | Metering deleted | Generated when metering data is deleted. |
187 | Integrity created | Generated when an integrity check is created. |
188 | Integrity updated | Generated when the integrity check is updated. |
189 | Integrity deleted | Generated when an integrity check is deleted. |
195 | Signing key has expired | Generated when the signing key has expired. |
196 | Signing key expire warning | Generated for the signing key expiry warning. |
197 | Signing key rotated | Generated when the signing key is rotated. |
198 | Signing key exported | Generated when the signing key is exported. |
199 | Case sensitive data element created | Generated when a case sensitive data element is created. |
210 | Data Element key has expired | Generated when the data element key has expired. |
211 | Data Element key expire warning | Generated for the data element key expiry warning. |
212 | Conflicting policy users found | Generated when conflicting policy users are found. |
220 | Data Element deprecated | Generated when the data element is deprecated. |
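These audit codes appear in the policy.audit_code field of the policy log index described earlier, so they can be combined with the policy log query in Discover. For example, the following illustrative query lists policy deployments and deployment failures (codes 71 and 80):
logtype:policyLog AND policy.audit_code:(71 OR 80)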
10.2.9 - Additional log information
This section describes the values that are displayed in the log records.
Log levels
Most events on the system generate logs. The level of a log helps you understand whether the log is just an informational message or denotes an issue with the system. The log message and the log level allow you to understand more about the working of the system and also help you identify and troubleshoot any system issues.
Protection logs: These logs are generated for Unprotect, Reprotect, and Protect (URP) operations.
- SUCCESS: This log is generated for a successful URP operation.
- WARNING: This log is generated if a user does not have access and the operation is unprotect.
- EXCEPTION: This log is generated if a user does not have access, the operation is unprotect, and the return exception property is set.
- ERROR: This log is generated for all other issues.
Application logs: These logs are generated by the application. The log level denotes the severity of the log; however, levels 1 and 6 are used only for the log configuration.
- 1: OFF. This level is used to turn logging off.
- 2: SEVERE. This level indicates a serious failure that prevents normal program execution.
- 3: WARNING. This level indicates a potential problem or an issue with the system.
- 4: INFO. This level is used to display information messages about the application.
- 5: CONFIG. This level is used to display static configuration information that is useful during debugging.
- 6: ALL. This level is used to log all messages.
Policy logs: These logs are used for the policy logs.
- LOWEST
- LOW
- NORMAL
- HIGH
- CRITICAL
- N/A
Protector information
The information displayed in the Protector-related fields of the audit log are listed in the table.
protector.family | protector.vendor | protector.version |
---|---|---|
DATA SECURITY GATEWAY | ||
gwp | DSG | 3.3.0.0.x |
APPLICATION PROTECTORS | ||
sdk | C | 9.1.0.0.x |
sdk | Java | 10.0.0+x, 9.1.0.0.x |
sdk | Python | 9.1.0.0.x |
sdk | Go | 9.1.0.0.x |
sdk | NodeJS | 9.1.0.0.x |
sdk | DotNet | 9.1.0.0.x |
TRUSTED APPLICATION LOGS IN APPLICATION PROTECTORS | ||
<process.name> | C | 9.1.0.0.x |
<process.name> | Java | 9.1.0.0.x |
<process.name> | Python | 9.1.0.0.x |
<process.name> | Go | 9.1.0.0.x |
<process.name> | NodeJS | 9.1.0.0.x |
<process.name> | DotNet | 9.1.0.0.x |
DATABASE PROTECTOR | ||
dbp | SqlServer | 9.1.0.0.x |
dbp | Oracle | 9.1.0.0.x |
dbp | Db2 | 9.1.0.0.x |
dwp | Teradata | 10.0.0+x, 9.1.0.0.x |
dwp | Exadata | 9.1.0.0.x |
BIG DATA PROTECTOR | ||
bdp | Impala | 9.2.0.0.x, 9.1.0.0.x |
bdp | Mapreduce | 9.2.0.0.x, 9.1.0.0.x |
bdp | Pig | 9.2.0.0.x, 9.1.0.0.x |
bdp | HBase | 9.2.0.0.x, 9.1.0.0.x |
bdp | Hive | 9.2.0.0.x, 9.1.0.0.x |
bdp | Spark | 9.2.0.0.x, 9.1.0.0.x |
bdp | SparkSQL | 9.2.0.0.x, 9.1.0.0.x |
Protectors having CORE version 1.2.2+42.g01eb3.1.2 and higher are compatible with ESA v10.1.0. For more version-related information, refer to the Product Compatibility on My.Protegrity. The protector family might display the process.name for some protectors. This will be fixed in a later release.
Modules and components and the log type
Some of the components and modules and the logtype that they generate are provided in the following table.
Module / Component | Protection | Policy | Application | Audit | Kernel | System | Verification |
---|---|---|---|---|---|---|---|
as_image_management.pyc | ✓ | ||||||
as_memory_management.pyc | ✓ | ||||||
asmanagement.pyc | ✓ | ||||||
buffer_watch.pyc | ✓ | ||||||
devops | ✓ | ||||||
DSGPAP | ✓ | ||||||
ESAPAP | ✓ | ||||||
fluentbit | ✓ | ||||||
hubcontroller | ✓ | ||||||
imps | ✓ | ||||||
insight.pyc | ✓ | ||||||
insight_cron_executor.pyc | ✓ | ||||||
insight_cron_job_method_executor.pyc | ✓ | ||||||
kmgw_external | ✓ | ||||||
kmgw_internal | ✓ | ||||||
logfacade | ✓ | ||||||
membersource | ✓ | ||||||
meteringfacade | ✓ | ||||||
PIM_Cluster | ✓ | ||||||
Protegrity PEP Server | ✓ | ||||||
TRIGGERING_AGENT_policy_deploy.pyc | ✓ |
For more information and description about the components that can generate kernel logs, refer here.
Kernel logs
This section lists the various kernel logs that are generated.
Note: This list is compiled using information from https://pmhahn.github.io/audit/.
User and group account management:
- ADD_USER: A user-space user account is added.
- USER_MGMT: The user-space management data.
- USER_CHAUTHTOK: A user account attribute is modified.
- DEL_USER: A user-space user is deleted.
- ADD_GROUP: A user-space group is added.
- GRP_MGMT: The user-space group management data.
- GRP_CHAUTHTOK: A group account attribute is modified.
- DEL_GROUP: A user-space group is deleted.
User login life cycle events:
- CRYPTO_KEY_USER: The cryptographic key identifier used for cryptographic purposes.
- CRYPTO_SESSION: The parameters set during a TLS session establishment.
- USER_AUTH: A user-space authentication attempt is detected.
- LOGIN: The user log in to access the system.
- USER_CMD: A user-space shell command is executed.
- GRP_AUTH: The group password is used to authenticate against a user-space group.
- CHUSER_ID: A user-space user ID is changed.
- CHGRP_ID: A user-space group ID is changed.
- Pluggable Authentication Modules (PAM) Authentication:
- USER_LOGIN: A user logs in.
- USER_LOGOUT: A user logs out.
- PAM account:
- USER_ERR: A user account state error is detected.
- USER_ACCT: A user-space user account is modified.
- ACCT_LOCK: A user-space user account is locked by the administrator.
- ACCT_UNLOCK: A user-space user account is unlocked by the administrator.
- PAM session:
- USER_START: A user-space session is started.
- USER_END: A user-space session is terminated.
- Credentials:
- CRED_ACQ: A user acquires user-space credentials.
- CRED_REFR: A user refreshes their user-space credentials.
- CRED_DISP: A user disposes of user-space credentials.
Linux Security Model events:
- DAC_CHECK: The record discretionary access control (DAC) check results.
- MAC_CHECK: The user space Mandatory Access Control (MAC) decision is made.
- USER_AVC: A user-space AVC message is generated.
- USER_MAC_CONFIG_CHANGE:
- SELinux Mandatory Access Control:
- AVC_PATH: The dentry and vfsmount pair recorded during an SELinux permission check.
- AVC: SELinux permission check.
- FS_RELABEL: file system relabel operation is detected.
- LABEL_LEVEL_CHANGE: object’s level label is modified.
- LABEL_OVERRIDE: administrator overrides an object’s level label.
- MAC_CONFIG_CHANGE: SELinux Boolean value is changed.
- MAC_STATUS: SELinux mode (enforcing, permissive, off) is changed.
- MAC_POLICY_LOAD: SELinux policy file is loaded.
- ROLE_ASSIGN: administrator assigns a user to an SELinux role.
- ROLE_MODIFY: administrator modifies an SELinux role.
- ROLE_REMOVE: administrator removes a user from an SELinux role.
- SELINUX_ERR: internal SELinux error is detected.
- USER_LABELED_EXPORT: object is exported with an SELinux label.
- USER_MAC_POLICY_LOAD: user-space daemon loads an SELinux policy.
- USER_ROLE_CHANGE: user’s SELinux role is changed.
- USER_SELINUX_ERR: user-space SELinux error is detected.
- USER_UNLABELED_EXPORT: object is exported without SELinux label.
- AppArmor Mandatory Access Control:
- APPARMOR_ALLOWED
- APPARMOR_AUDIT
- APPARMOR_DENIED
- APPARMOR_ERROR
- APPARMOR_HINT
- APPARMOR_STATUS
Audit framework events:
- KERNEL: Record the initialization of the Audit system.
- CONFIG_CHANGE: The Audit system configuration is modified.
- DAEMON_ABORT: An Audit daemon is stopped due to an error.
- DAEMON_ACCEPT: The auditd daemon accepts a remote connection.
- DAEMON_CLOSE: The auditd daemon closes a remote connection.
- DAEMON_CONFIG: An Audit daemon configuration change is detected.
- DAEMON_END: The Audit daemon is successfully stopped.
- DAEMON_ERR: An auditd daemon internal error is detected.
- DAEMON_RESUME: The auditd daemon resumes logging.
- DAEMON_ROTATE: The auditd daemon rotates the Audit log files.
- DAEMON_START: The auditd daemon is started.
- FEATURE_CHANGE: An Audit feature changed value.
Networking related:
- IPSec:
- MAC_IPSEC_ADDSA
- MAC_IPSEC_ADDSPD
- MAC_IPSEC_DELSA
- MAC_IPSEC_DELSPD
- MAC_IPSEC_EVENT: The IPSec event, when one is detected, or when the IPSec configuration changes.
- NetLabel:
- MAC_CALIPSO_ADD: The NetLabel CALIPSO DoI entry is added.
- MAC_CALIPSO_DEL: The NetLabel CALIPSO DoI entry is deleted.
- MAC_MAP_ADD: A new Linux Security Module (LSM) domain mapping is added.
- MAC_MAP_DEL: An existing LSM domain mapping is deleted.
- MAC_UNLBL_ALLOW: Unlabeled traffic is allowed.
- MAC_UNLBL_STCADD: A static label is added.
- MAC_UNLBL_STCDEL: A static label is deleted.
- Message Queue:
- MQ_GETSETATTR: The mq_getattr and mq_setattr message queue attributes.
- MQ_NOTIFY: The arguments of the mq_notify system call.
- MQ_OPEN: The arguments of the mq_open system call.
- MQ_SENDRECV: The arguments of the mq_send and mq_receive system calls.
- Netfilter firewall:
- NETFILTER_CFG: The Netfilter chain modifications are detected.
- NETFILTER_PKT: The packets traversing Netfilter chains.
- Commercial Internet Protocol Security Option:
- MAC_CIPSOV4_ADD: A user adds a new Domain of Interpretation (DoI).
- MAC_CIPSOV4_DEL: A user deletes an existing DoI.
Linux Cryptography:
- CRYPTO_FAILURE_USER: A decrypt, encrypt, or randomize cryptographic operation fails.
- CRYPTO_IKE_SA: The Internet Key Exchange Security Association is established.
- CRYPTO_IPSEC_SA: The Internet Protocol Security Association is established.
- CRYPTO_LOGIN: A cryptographic officer login attempt is detected.
- CRYPTO_LOGOUT: A cryptographic officer logout attempt is detected.
- CRYPTO_PARAM_CHANGE_USER: A change in a cryptographic parameter is detected.
- CRYPTO_REPLAY_USER: A replay attack is detected.
- CRYPTO_TEST_USER: The cryptographic test results as required by the FIPS-140 standard.
Process:
- BPRM_FCAPS: A user executes a program with a file system capability.
- CAPSET: Any changes in process-based capabilities.
- CWD: The current working directory.
- EXECVE: The arguments of the execve system call.
- OBJ_PID: The information about a process to which a signal is sent.
- PATH: The file name path information.
- PROCTITLE: The full command-line of the command that was used to invoke the analyzed process.
- SECCOMP: A Secure Computing event is detected.
- SYSCALL: A system call to the kernel.
Special system calls:
- FD_PAIR: The use of the pipe and socketpair system calls.
- IPC_SET_PERM: The information about new values set by an IPC_SET control operation on an Inter-Process Communication (IPC) object.
- IPC: The information about an IPC object referenced by a system call.
- MMAP: The file descriptor and flags of the mmap system call.
- SOCKADDR: Record a socket address.
- SOCKETCALL: Record arguments of the sys_socketcall system call (used to multiplex many socket-related system calls).
Systemd:
- SERVICE_START: A service is started.
- SERVICE_STOP: A service is stopped.
- SYSTEM_BOOT: The system is booted up.
- SYSTEM_RUNLEVEL: The system’s run level is changed.
- SYSTEM_SHUTDOWN: The system is shut down.
Virtual Machines and Container:
- VIRT_CONTROL: The virtual machine is started, paused, or stopped.
- VIRT_MACHINE_ID: The binding of a label to a virtual machine.
- VIRT_RESOURCE: The resource assignment of a virtual machine.
Device management:
- DEV_ALLOC: A device is allocated.
- DEV_DEALLOC: A device is deallocated.
Trusted Computing Integrity Measurement Architecture:
- INTEGRITY_DATA: The data integrity verification event run by the kernel.
- INTEGRITY_EVM_XATTR: The EVM-covered extended attribute is modified.
- INTEGRITY_HASH: The hash type integrity verification event run by the kernel.
- INTEGRITY_METADATA: The metadata integrity verification event run by the kernel.
- INTEGRITY_PCR: The Platform Configuration Register (PCR) invalidation messages.
- INTEGRITY_RULE: A policy rule.
- INTEGRITY_STATUS: The status of integrity verification.
Intrusion Prevention System:
- Anomaly detected:
- ANOM_ABEND
- ANOM_ACCESS_FS
- ANOM_ADD_ACCT
- ANOM_AMTU_FAIL
- ANOM_CRYPTO_FAIL
- ANOM_DEL_ACCT
- ANOM_EXEC
- ANOM_LINK
- ANOM_LOGIN_ACCT
- ANOM_LOGIN_FAILURES
- ANOM_LOGIN_LOCATION
- ANOM_LOGIN_SESSIONS
- ANOM_LOGIN_TIME
- ANOM_MAX_DAC
- ANOM_MAX_MAC
- ANOM_MK_EXEC
- ANOM_MOD_ACCT
- ANOM_PROMISCUOUS
- ANOM_RBAC_FAIL
- ANOM_RBAC_INTEGRITY_FAIL
- ANOM_ROOT_TRANS
- Responses:
- RESP_ACCT_LOCK_TIMED
- RESP_ACCT_LOCK
- RESP_ACCT_REMOTE
- RESP_ACCT_UNLOCK_TIMED
- RESP_ALERT
- RESP_ANOMALY
- RESP_EXEC
- RESP_HALT
- RESP_KILL_PROC
- RESP_SEBOOL
- RESP_SINGLE
- RESP_TERM_ACCESS
- RESP_TERM_LOCK
Miscellaneous:
- ALL: Matches all types.
- KERNEL_OTHER: The record information from third-party kernel modules.
- EOE: An end of a multi-record event.
- TEST: The success value of a test message.
- TRUSTED_APP: Records of this type can be used by third-party applications that require auditing.
- TTY: The TTY input that was sent to an administrative process.
- USER_TTY: An explanatory message about TTY input to an administrative process that is sent from the user-space.
- USER: The user details.
- USYS_CONFIG: A user-space system configuration change is detected.
- TIME_ADJNTPVAL: The system clock is modified.
- TIME_INJOFFSET: A Timekeeping offset is injected to the system clock.
10.3 - Known Issues for the td-agent
Known Issue: The Buffer overflow error appears in the /var/log/td-agent/td-agent.log file.
Description: When the total size of the files in the td-agent buffer directory /opt/protegrity/td-agent/es_buffer reaches the default maximum limit of 64 GB, the Buffer overflow error appears.
Resolution:
Add the total_limit_size parameter to increase the buffer limit in the OUTPUT.conf file using the following steps.
Log in to the CLI Manager of the ESA node.
Navigate to Administration > OS Console.
Stop the td-agent service using the following command:
/etc/init.d/td-agent stop
Alternatively, stop the service by logging into the ESA Web UI, navigating to System > Services, and stopping the td-agent service under Misc.
Navigate to the /opt/protegrity/td-agent/config.d directory.
Open the OUTPUT.conf file.
Add the total_limit_size parameter in the buffer section of the OUTPUT.conf file. In this example, the total_limit_size is doubled to 128 GB, as shown in the sketch after this procedure.
Save the file.
Start the td-agent service using the following command:
/etc/init.d/td-agent start
Alternatively, start the service by logging into the ESA Web UI, navigating to System > Services, and starting the td-agent service under Misc.
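A minimal sketch of the change is shown below, assuming a standard fluentd-style buffer section; leave the parameters that already exist in your OUTPUT.conf unchanged and only add the total_limit_size line.
<buffer>
  # Existing buffer parameters remain unchanged; only total_limit_size is added.
  total_limit_size 128GB
</buffer>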
Known Issue: The Too many open files error appears in the /var/log/td-agent/td-agent.log file.
Description: When the total number of files in the td-agent buffer directory /opt/protegrity/td-agent/es_buffer reaches the maximum limit, the Too many open files error appears.
Resolution:
Change the limit for the maximum number of open files for the td-agent service in the /etc/init.d/td-agent file using the following steps.
Log in to the CLI Manager of the ESA node.
Navigate to Administration > OS Console.
Stop the td-agent service using the following command:
/etc/init.d/td-agent stop
Alternatively, stop the service by logging into the ESA Web UI, navigating to System > Services, and stopping the td-agent service under Misc.
Navigate to the /etc/init.d directory.
Open the td-agent file.
Change the ulimit. In this example, the ulimit is increased to 120000, as shown in the sketch after this procedure.
Save the file.
Start the td-agent service using the following command:
/etc/init.d/td-agent start
Alternatively, start the service by logging into the ESA Web UI, navigating to System > Services, and starting the td-agent service under Misc.
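A minimal sketch of the change is shown below; the exact line and its original value in your /etc/init.d/td-agent file may differ, so adjust the existing ulimit entry rather than adding a duplicate.
# Raise the maximum number of open files available to the td-agent process.
ulimit -n 120000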
10.4 - Known Issues for Protegrity Analytics
Known Issue: Client side validation is missing on the Join an existing Audit Store Cluster page.
Issue:
Log in to the ESA Web UI and navigate to Audit Store > Cluster Management > Overview > Join Cluster. If you specify an invalid IP address, or enter a username or password longer than the 36-character limit accepted on the Appliance, and click Join Cluster, no errors are displayed and the request is processed.
Observation:
The request on the Join an existing Audit Store Cluster page is processed without any client-side validation. Hence, an invalid IP address, or a username or password longer than 36 characters, does not display any error.
Known Issue: High memory usage on the ESA.
Issue:
When using the Audit Store, the memory usage is high on the ESA.
Workaround:
Reduce the memory usage by updating the memory allocated to the Audit Store on the ESA to 4 GB using the Set Audit Store Repository Total Memory CLI option.
For more information about the Set Audit Store Repository Total Memory CLI option, refer here.
Known Issue: In Analytics, on the Index Lifecycle Management page, after exporting, importing, or deleting an index one of the following scenarios occurs:
- The index operation performed does not appear in the other operation lists. For example, an exported index does not appear in the import index list.
- Performing the same operation on the same index again displays an error message.
- If the index appears in another operation list, performing the operation displays an error message. For example, an exported index appears in the delete index list and deleting the index displays an error.
Issue: After performing an operation, the index list for the export, import, and delete operations does not refresh automatically.
Workaround: Refresh the Index Lifecycle Management page after performing an export, import, or delete operation.
10.5 - Known Issues for the Log Forwarder
Known Issue: The Protector is unable to reconnect to a Log Forwarder after it is restarted.
Description: This issue occurs whenever a Proxy server sits between a Protector and a Log Forwarder. When the Log Forwarder is stopped, the connection between the Protector and the Proxy server remains open, even though the connection between the Proxy server and the Log Forwarder is closed. As a result, the Protector continues sending audit files to the Proxy server, and those audit files are lost. When the Log Forwarder is restarted, the Protector is unable to reconnect to it.
This issue applies to all Protectors where the Log Forwarder is not running on the local host machine, for example, AIX or z/OS protectors, because the Log Forwarder does not run on the same machine where those Protectors are installed. The issue also occurs if a Load Balancer or a Firewall, instead of a Proxy server, sits between the Protector and the Log Forwarder.
Resolution: Remove the Proxy server, or configure the Proxy server so that the connection between the Protector and the Proxy server is closed as soon as the Log Forwarder is stopped. This ensures that whenever the Log Forwarder is restarted, the Protector reconnects to the Log Forwarder and continues to send audits to it without any data loss.
For more information about configuring the Proxy server, contact your IT administrator.
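The configuration depends on the intermediary in use. As one hedged illustration only, if the intermediary were HAProxy, a TCP listener can be configured to drop client-side sessions as soon as its health check marks the Log Forwarder as down, which forces the Protector to reconnect after the Log Forwarder restarts. The bind port, Log Forwarder address and port, and check interval below are placeholders, not Protegrity defaults.
# Hypothetical HAProxy snippet: close Protector connections when the Log Forwarder
# behind the proxy goes down. All addresses and ports are placeholders; replace them
# with the values used in your environment.
listen log_forwarder_proxy
    mode tcp
    bind *:15000
    option tcp-check
    server logforwarder 192.0.2.10:15000 check inter 2s on-marked-down shutdown-sessions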
10.6 - Deprecations
10.6.1 - Deprecations
Deprecated Functions
- Extract KeyID from Data
- getCurrentKeyId
- getDefaultDataElement
- Byte API
Database Protector
- native database access control
Deprecated Products
- Protection Server
- File Protector Gateway
- File Protector: Linux Kernel Implementation
- File Protector: Windows Based Volume Encryption
- File Protector for AIX
- XC Client, XC Server, XC Lite
- Solaris operating system on all the Protectors
- HAWQ Big Data Component
- Linux Based Volume Encryption
Deprecated Capabilities
- HDFS FP
- ESA OS Based High Availability
- Samba
- Protegrity Storage Unit (PSU)
- Change in Insight behavior (from v9.0.0.0)
- Jasper Reporting
- Metering
- MapReduce
- HMAC API in BDP
- Gate in Upgrade
Deprecated Data Elements
Deprecated data elements can still be used. However, users cannot create these data elements using the GUI. Users can use the DevOps APIs to create them; in that case, the system triggers a warning and creates a log entry to indicate that the data element is deprecated. In addition, the system triggers a notification if the policy contains any of the deprecated data types. This notification is triggered when the hubcontroller service starts.
- Printable Characters Tokenization
- Unicode Gen 1 Tokenization
- Date Tokenization
- DTP 2 Encryption
- 3DES Encryption
- CUSP 3DES Encryption
- SHA1 Hashing
- Date tokens (DATE-YYYY-MM-DD, DATE-DD/MM/YYYY, DATE-MM/DD/YYYY )
- UNICODE tokens
- PRINTABLE tokens
- UNICODE-Base 64 tokens
- SHA1
- 3DES
Removed Data Elements
Support for the Format Preserving Encryption (FPE) data element was introduced in ESA v7.2.x.
If an FPE left/right data element was created in any version of the ESA from 7.2.x to 9.2.x, then there is a risk of data loss.
The data encoding can affect the resulting token when the Left and Right properties are in use. This means that FPE tokens with this property cannot always be moved “as-is” to other systems where the encoding of the data changes.
In ESA v10.0.1:
- The upgrade is blocked if an FPE left/right data element has been created.
- A new solution is available to support the FPE left/right data element. This solution works only when protectors of version 10.0.0 or newer are used.
11 - Supplemental Guides
KMS Integration on Cloud Platforms Guide
This guide describes the integration of cloud Key Management Services (KMS) with the Protegrity Enterprise Security Administrator (ESA) appliance. The integration supports Amazon Web Services (AWS), Azure, and Google Cloud Platform (GCP).
- AWS KMS - An encryption and key management service that provides keys and functionality for AWS KMS operations.
- Azure KMS - A part of Azure Key Vault that safeguards cryptographic keys and enables cryptographic operations.
- GCP KMS - Used to perform cryptographic operations to protect data stored in the cloud or on-premises.
For more information about KMS integration with the ESA on cloud platforms, refer to KMS Integration on Cloud Platforms.
12 - PDF Resources
Use the following links to access documentation for earlier releases of the products.
Appliances Guides
Discover Guide
Protegrity Discover provides a solution for data discovery needs. It is a data discovery system that inspects, locates, and tracks sensitive information in a data-dynamic world. It helps identify sensitive information that must be protected to secure the business.
For more information about data discovery products, refer to Protegrity Discover.
Other Guides
Data Security Platform Feature Guide
Protegrity Data Security Platform Feature Guide provides a general overview of the updated features in the current release. It also includes guidelines on how to use these features. This guide covers the new and deprecated features from release 7.0 through release 9.2.0.0. Additionally, it describes the new protectors, starting from release 7.1.
For more information about the new and deprecated features, refer to Data Security Platform Feature Guide 9.2.0.0.
Data Security Platform Licensing Guide
Protegrity Data Security Platform Licensing Guide provides a general overview of licensing and its importance to Protegrity products. It explains the difference between temporary and validated licenses, how to request a validated license, and what happens if a license expires.
For more information about licensing, refer to Data Security Platform Licensing 9.2.0.0.
Troubleshooting Guide
Protegrity Troubleshooting Guide provides first-level support information for Protegrity customers. It includes information about logging levels for Policy Management DPS Servers, HDFSFP, and common cluster issues. The document also includes information on resetting the administrator password, special utilities, and frequently asked questions.
For more information about troubleshooting, refer to Troubleshooting Guide 9.2.0.0.
Protector Guides
APIs, UDFs, and Commands Reference Guide
Protegrity APIs, UDFs, and Commands Reference Guide provides information about all Protegrity Protectors APIs, UDFs, and commands. It details the API elements and their parameters, including data types and usage, and describes the APIs available in Big Data Protector. It provides information about available UDFs in Database Protector. It details UDFs for Mainframe z/OS and describes using policy management functions through REST APIs. It details how to use Protegrity APIs for Immutable Protectors.
For more information about APIs, UDFs, and commands, refer to APIs, UDFs, and Commands Reference Guide 9.1.0.0.
Application Protector Guide
Protegrity Application Protector Guide introduces the concepts of the Application Protector, its configuration, and its APIs. It explains the three variants of the AP C: AP Standard, AP Client, and AP Lite. It describes AP Java, AP Python, AP Golang, AP NodeJS, and AP .Net, covering their architecture, features, configuration, and available APIs. It also includes a sample application demonstrating how to use the Protector to protect, reprotect, and unprotect data. It describes the multi-node Application Protector architecture, including its components and how logs are collected using the new Log Forwarder. It also explains how to use the Go module.
For more information about Application Protectors, refer to Application Protector Guide 9.1.0.0.
Application Protector On-Premises Immutable Policy User Guide
Protegrity Application Protector On-Premises Immutable Policy User Guide discusses the Immutable Application Protector and its uses. It covers the protector’s concepts, the business problem it solves, and its architecture and workflow. It describes the installation and uninstallation processes for IAP C, IAP Java, IAP Python, and IAP Go on various platforms. It also explains how to run the Immutable Application Protector with sample applications.
For more information about Immutable Application Protector, refer to Protegrity Application Protector On-Premises Immutable Policy User Guide 9.1.0.0.
Big Data Protector Guide
Protegrity Big Data Protector Guide provides information on configuring and using BDP for Hadoop. It also details the protection coverage of various Hadoop ecosystem applications, including MapReduce, Hive, and Pig. It provides information about various data protection solutions. These solutions include Hadoop Application Protector, Protegrity HBase Protector, Protegrity Impala Protector, Protegrity Spark Java and Spark SQL protectors, and Spark Scala. It provides information about error codes and descriptions for Big Data Protector. It also describes procedures for migrating tokenized Unicode data between a Teradata database and other systems.
For more information about Big Data Protectors, refer to Big Data Protector Guide 9.1.0.0.
Database Protector Guide
Protegrity Database Protector Guide explains the Database Protector and its uses. It provides detailed information about installing and configuring the Database Protector for MS SQL, Netezza, Oracle, Greenplum databases, Teradata databases, DB2 Open Systems databases, and Trino. It provides information about supported database versions and platforms for database protectors.
For more information about database protectors, refer to Database Protector Guide 9.1.0.0.
File Protector Guide
Protegrity File Protector Guide provides an overview of the File Protector, including its architecture, features, key components, and modules. It also details the procedures for installing, upgrading, and uninstalling the File Protector. It describes the configuration files of the File Protector and their usage. It also explains the dfpshell, a privileged shell for system administrators to manage the File Protector. Additionally, it details the File Protector’s licensing and the creation and management of policies using the ESA. It lists all File Protector commands and their usage. It details the features supported by File Protector, including backup and restore procedures for protected data.
For more information about file protector, refer to File Protector Guide 9.1.0.0.
FPVE-Core User Guide
Protegrity File Protector Volume Encryption (FPVE)-Core Guide provides information about the FPVE-Core’s concepts, architecture, supported platforms, and features on Windows. It provides instructions for installing, upgrading, and uninstalling FPVE-Core. It also explains the commands and their usage within FPVE-Core, along with a list of supported features. Additionally, it describes how to migrate the kernel-based FPVE to FPVE-Core. It also describes the metering feature, which counts successful protect and unprotect operations on a file basis.
For more information about FPVE-Core, refer to Protegrity FPVE-Core User Guide 9.0.0.0.
FUSE File Protector Guide
Protegrity FUSE File Protector Guide provides information on FUSE FP’s architecture, workflow, features, supported platforms, and library versions. It also details the installation, configuration, and uninstallation of FUSE FP, along with the necessary configuration settings for its services. It explains how to manage the dfpshell. It explains how to create and manage policies for the FUSE File Protection (FUSE FP). It also details the commands for FUSE FP, including how to back up and restore data encrypted with File Encryption (FE) and Access Control (AC). It explains the Metering feature, which counts successful protect, unprotect, and re-protect operations on a file basis.
For more information about FUSE FP, refer to Protegrity FUSE File Protector Guide 9.0.0.0.
Protection Methods Reference Guide
Protegrity Protection Methods Reference Guide provides an overview of Protegrity’s protection methods. It explains the properties of protection methods, including examples of their usage. It provides details about tokenization, tokenization types, encryption algorithms, no encryption, monitoring, hashing, and masking. It provides a table listing ASCII character codes. It provides information about 3DES encryption column sizes, protectors, hashing functions, and codebook reshuffling.
For more information about protection methods, refer to Protection Methods Reference 9.2.0.0.
Row Level Protector Guide
Protegrity Row Level Protector Guide provides a summary of Row Level Protection components and their functionality. It details installation, configuration, and basic control instructions for the Row Level Protection servers in a Linux environment.
For more information about Row Level Protection, refer to Row Level Protector Guide 7.1.
zOS Protector Guide
Protegrity zOS Protector Guide provides information on installing, configuring, and using Protegrity z/OS Protectors. It covers the general architecture, components, features, and limitations of the z/OS Protector. It details Application Protector variants on z/OS, covering installation and configuration for both stateful and stateless protocols. It also explains the deployment and management of permissions for File Protector on the z/OS platform. It discusses Database Solutions on z/OS Protectors. It also describes how to install and configure the Mainframe z/OS Protector, and provides a list of all UDFs with their syntax. Additionally, it describes IMS Protector for z/OS, covering installation and configuration. It also details IMS segment edit/compression exit routines, including restrictions, usage, return codes, and sample programs.
For more information about z/OS Protectors, refer to zOS Protector Guide 7.0.1.
13 - Intellectual Property Attribution Statement
Copyright © 2004-2025 Protegrity Corporation. All rights reserved.
Protegrity products are protected by and subject to patent protections;
Patent: https://support.protegrity.com/patents/.
Protegrity® and the Protegrity logo are the registered trademarks of Protegrity Corporation.
NOTICE TO ALL PERSONS RECEIVING THIS DOCUMENT
Some of the product names mentioned herein are used for identification purposes only and may be trademarks and/or registered trademarks of their respective owners.
Windows, Azure, MS-SQL Server, Internet Explorer and Internet Explorer logo, Active Directory, and Hyper-V are registered trademarks of Microsoft Corporation in the United States and/or other countries.
Linux is a registered trademark of Linus Torvalds in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SCO and SCO UnixWare are registered trademarks of The SCO Group.
Sun, Oracle, Java, and Solaris are the registered trademarks of Oracle Corporation and/or its affiliates in the United States and other countries.
Teradata and the Teradata logo are the trademarks or registered trademarks of Teradata Corporation or its affiliates in the United States and other countries.
Hadoop or Apache Hadoop, Hadoop elephant logo, Hive, Presto, and Pig are trademarks of Apache Software Foundation.
Cloudera and the Cloudera logo are trademarks of Cloudera and its suppliers or licensors.
Hortonworks and the Hortonworks logo are the trademarks of Hortonworks, Inc. in the United States and other countries.
Greenplum Database is the registered trademark of VMware Corporation in the U.S. and other countries.
Pivotal HD is the registered trademark of Pivotal, Inc. in the U.S. and other countries.
PostgreSQL or Postgres is the copyright of The PostgreSQL Global Development Group and The Regents of the University of California.
AIX, DB2, IBM and the IBM logo, and z/OS are registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide.
Utimaco Safeware AG is a member of the Sophos Group.
Jaspersoft, the Jaspersoft logo, and JasperServer products are trademarks and/or registered trademarks of Jaspersoft Corporation in the United States and in jurisdictions throughout the world.
Xen, XenServer, and Xen Source are trademarks or registered trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries.
VMware, the VMware “boxes” logo and design, Virtual SMP and VMotion are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.
Amazon Web Services (AWS) and AWS Marks are the registered trademarks of Amazon.com, Inc. in the United States and other countries.
HP is a registered trademark of the Hewlett-Packard Company.
HPE Ezmeral Data Fabric is the trademark or registered trademark of Hewlett Packard Enterprise in the United States and other countries.
Dell is a registered trademark of Dell Inc.
Novell is a registered trademark of Novell, Inc. in the United States and other countries.
POSIX is a registered trademark of the Institute of Electrical and Electronics Engineers, Inc.
Mozilla and Firefox are registered trademarks of the Mozilla Foundation.
Chrome and Google Cloud Platform (GCP) are registered trademarks of Google Inc.
Kubernetes is a registered trademark of the Linux Foundation in the United States and/or other countries.
OpenShift is a trademark of Red Hat, Inc., registered in the United States and other countries.
Docker and the Docker logo are trademarks or registered trademarks of Docker, Inc. in the United States and/or other countries.