Policy Workflow

Explaining the policy workflow.

Summary

This section outlines a workflow for creating a policy. This workflow is used in examples and scripts that are provided in related sections of the documentation.

Here are the general steps of the Policy Workflow:

  1. Initialize Policy Management
  2. Prepare Data Element
  3. Create Member Source
  4. Create Role
  5. Assign Member Source to Role
  6. Create Policy Shell
  7. Define Rule with Data Element and Role
  8. Create Datastore
  9. Deploy Policy to a Datastore
  10. Confirm Deployment

Each step of the workflow has a sub-page that describes the workflow in detail, including a description, purpose, inputs, and outputs. Other sections of this documentation show examples of these steps for creating a policy, such as a policy to protect a credit card number.

Description

The workflow described in this section assumes that the reader is working in an environment where the core Protegrity platform components are already installed, accessible, and functioning correctly. The steps focus on policy creation and deployment, not on platform installation, infrastructure provisioning, or troubleshooting underlying system issues.

Assumptions

To execute any CLI or API command in this example, the following assumptions have been made:

  • You are operating on a new AI Team Edition setup.
    • Set up the AI Team Edition by installing the Protegrity Provisioned Cluster. For more information about installing the PPC, refer to the section Installing PPC.
  • You are connected to the Policy Manager container.
    • Connect to the Policy Manager container by deploying the Protegrity Policy Manager. For more information about deploying the Protegrity Policy Manager, refer to the section Installing Policy Workbench.

CLI Examples

To execute any CLI command in this example, the following additional assumption has been made:

API Examples

To execute any API command in this example, the following additional assumption has been made:

  • You have access to the Protegrity Policy Management REST APIs.

Purpose

To clearly establish the scope of the workflow and avoid ambiguity about what the documentation covers versus what is expected to be completed beforehand. By defining these assumptions up front, the workflow can focus on explaining policy behavior and intent, rather than environmental setup.

Outcome

With these assumptions satisfied, the reader can proceed through the workflow steps with the expectation that each command or configuration action will succeed without requiring additional environment preparation.

Tips

  • If any assumption is not met, resolve it before continuing with the workflow to avoid misleading errors later.
  • For environment setup, installation, or operational guidance, refer to the dedicated platform installation and operations documentation rather than this workflow.

1 - Initialize Policy Management

Workflow to initialize policy management.

Summary

Initialize the Policy Management environment so it can store keys, policies, and configuration data required for all subsequent steps.

Description

This step prepares the Policy Management subsystem by creating the internal key material and policy repository used by the API. Initialization ensures that the environment is in a valid state before you create any data elements, roles, policies, or datastores.

Purpose

To set up the foundational Policy Management environment so that all future API commands operate against a valid and initialized repository.

Prerequisites

None.

Initialization is the first action performed before any policy‑related configuration can occur.

Inputs

No inputs are required.

The initialization command runs with system defaults and prepares the environment automatically.

Outcome

Policy Management is fully initialized, and the system is ready to accept policy configuration commands. After this step completes, proceed to create data elements, roles, member sources, and policies using the API.

Conceptual Examples

  • Example 1: A new environment has just been installed. Initialize the internal structures needed so that the administrator can begin defining data protection policies.
  • Example 2: A test or sandbox environment is reset. Initialization is performed again to rebuild the policy repository before running new API‑based examples or scripts.

Tips

None.

2 - Prepare Data Element

Workflow to prepare data element.

Summary

Create a Data Element that defines the sensitive data type and how it will be protected. For example, whether the data is tokenized, encrypted, or masked.

Description

A Data Element describes a category of sensitive information, such as credit card numbers, Social Security numbers, names, or email addresses. It then defines the protection method that applies to the category. This includes the protection algorithm, formatting constraints, visibility rules, and validation options. A Data Element is the foundation of all policy rules. Policies reference Data Elements to determine how data is protected and under which circumstances it may be revealed or transformed.

Purpose

To formally define what data will be protected and how it should be processed. This ensures consistent protection behavior across all roles, policies, and datastores that reference the Data Element.

Prerequisites

None.

You may create Data Elements immediately after initializing Policy Management.

Inputs

Typical inputs may include:

  • Data Element name
  • Description
  • Protection method. For example, tokenization, encryption, and masking.
  • Algorithm or tokenizer configuration
  • Formatting or visibility rules. For example, keep last four digits.
  • Validation rules. For example, Luhn checks for credit cards.
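As an illustration of what a validation rule such as a Luhn check does, the following is a generic Python sketch of the checksum (this is standard Luhn logic for demonstration, not Protegrity code):

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in pan if c.isdigit()]
    if not digits or len(digits) != len(pan):
        return False
    total = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # well-known test PAN, passes the check
```

A Data Element configured with a Luhn validation rule would reject inputs that fail this kind of check before applying tokenization.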

Sub-tasks

Sometimes you might want to create a mask or use a special alphabet in your policy.

Create Mask
  • When and why
    • Create a Mask when you need to partially or fully hide sensitive data during presentation to end‑users. Masks allow you to obfuscate some or all characters. For example, showing only the last four digits. Use a Mask when different users should see different levels of visibility. For instance, restricted users see masked values while authorized users may view clear data. Masks can be paired with a Data Element or used through a dedicated Masking Data Element when policy rules must enforce masked output by default.
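The "show only the last four digits" behavior described above can be sketched conceptually in Python (a generic illustration of the masking idea, not the product's mask configuration):

```python
def mask_keep_last4(value: str, mask_char: str = "*") -> str:
    """Mask all but the last four characters of a value."""
    if len(value) <= 4:
        return value
    return mask_char * (len(value) - 4) + value[-4:]

print(mask_keep_last4("4111111111111111"))  # prints "************1111"
```

In a real policy, this output shape would be produced by a Mask paired with a Data Element rather than by application code.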

Create Alphabet

  • When and why
    • Create an Alphabet when the data you are protecting includes characters from specific languages or extended Unicode sets, such as Spanish, Polish, Korean, or other multilingual inputs. Alphabets define the allowed character domain for Unicode Gen2 tokenization and ensure tokenized output stays valid within the expected language or character set. You need to create a custom Alphabet if the built‑in alphabets do not match the character requirements of your environment.
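Conceptually, an Alphabet is the allowed character domain for tokenization. The sketch below models that idea in plain Python; the alphabet contents and names are illustrative assumptions, not built-in product alphabets:

```python
# Hypothetical character domain for Spanish lowercase text.
SPANISH_LOWER = "abcdefghijklmnñopqrstuvwxyzáéíóúü"

def in_alphabet(text: str, alphabet: str) -> bool:
    """Check that every character of text belongs to the alphabet."""
    allowed = set(alphabet)
    return all(c in allowed for c in text)

print(in_alphabet("niño", SPANISH_LOWER))   # True
print(in_alphabet("hello!", SPANISH_LOWER)) # False: "!" is outside the domain
```

A custom Alphabet serves the same purpose for Unicode Gen2 tokenization: tokenized output is constrained to stay inside the defined character set.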

Outcome

A Data Element is created and stored in the Policy Management environment. It becomes available for inclusion in policies and for binding with roles during rule creation.

Conceptual Examples

  • Example 1: Credit Card Tokenization
    • A Data Element named de_credit_card is created to tokenize credit card numbers using a chosen tokenizer. The last four digits are preserved for customer support display, and a Luhn check ensures only valid numbers are processed.
  • Example 2: Email Address Masking
    • A Data Element named de_email is created to enforce consistent masking of email addresses, such as replacing the user portion with asterisks while preserving the domain.

Tips

  • Use descriptive names so Data Elements are easy to identify when building policies.
  • Choose protection methods based on business use cases. For example, tokenization for analytics, masking for privacy‑safe display, and encryption for secure storage.
  • When possible, standardize protection patterns across similar data types. For example, all PAN fields follow the same tokenization rule.
  • Before creating many Data Elements, define a naming convention. For example, de_<datatype>_<method>.

3 - Create Member Source

Workflow to create a member source.

Summary

Create a Member Source that defines the external system from which user and group identities will be imported for use in roles and policies.

Description

A Member Source establishes a connection to an identity provider, such as a directory service, a database, or a simple user or group file. This ensures that real users and service accounts can be referenced within policy roles. Member Sources supply the identities that roles draw from, allowing the system to stay aligned with organizational updates to accounts, groups, and permissions.

Purpose

To provide a trusted and maintainable source of user and group information for policy enforcement. Member Sources ensure that roles are populated automatically or programmatically using authoritative identity data rather than manual user entry.

Prerequisites

None.

Member Sources can be created at any time, though they are typically defined before assigning them to roles.

Inputs

Inputs vary depending on the type of Member Source, but commonly include:

  • Source type. For example, file, directory, database, and API.
  • Location or connection settings. For example, paths, URLs, and hostnames.
  • User and group data. For example, lists, queries, or mappings.
  • Access credentials if required.
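For a file-based Member Source of the kind used in test environments, the backing file might look like the sketch below. The JSON structure is an assumption for illustration only; the actual file schema may differ:

```python
import json

# Hypothetical file contents for a sandbox Member Source.
members_file = json.dumps({
    "users": ["alice", "bob"],
    "groups": {"pci_analysts": ["alice"], "hr_analysts": ["bob"]},
})

source = json.loads(members_file)

def members_of(group):
    """Return the users that belong to a group in the file-based source."""
    return source["groups"].get(group, [])

print(members_of("pci_analysts"))  # ["alice"]
```

A directory-backed Member Source plays the same role, but the users and groups come from the identity provider instead of a local file.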

Outcome

A Member Source is created and available for assignment to one or more roles. Once assigned, the Member Source becomes the mechanism through which those roles obtain their user and group membership.

Conceptual Examples

  • Example 1: File‑Based Member Source for Testing
    • A small file containing sample users and groups is created for a development environment. A Member Source is configured to read from this file, populating roles without connecting to a production identity system.
  • Example 2: Directory‑Backed Member Source for Production
    • A Member Source is configured to point to an organization’s central directory service. When new employees join or leave teams, their group membership updates automatically in the Member Source, and corresponding roles inherit those changes.

Tips

  • Use file‑based Member Sources for demos, pilots, and sandbox environments. They are easy to set up and reset.
  • For production, use a centralized identity provider to avoid manually updating user lists.
  • Keep Member Source names descriptive. For example, ms_hr_directory and ms_test_users.
  • Confirm that users and groups in the Member Source align with your expected role design to avoid misconfiguration during rule creation.

4 - Create Role

Workflow to create a role.

Summary

Create a Role to represent a group of users or service accounts that will receive specific permissions in a policy.

Description

A Role is a logical container that defines who will receive access to a Data Element within a policy. Roles do not hold permissions on their own. Instead, they become meaningful when paired with Data Elements and permissions in policy rules. Roles allow you to centralize and standardize access behavior across multiple users by grouping identities into functional categories such as Data Analysts, Customer Support, or Payment Service Applications.

Purpose

To establish an authorization boundary that policies can reference when granting or restricting access to sensitive data. Roles allow policies to express business intent clearly. For example, "This group may tokenize credit card data," or "Only this role may unprotect values."

Prerequisites

None.

Roles can be created at any time, although they become active only after a Member Source is assigned in the next step.

Inputs

Typical inputs when creating a Role include:

  • Role name.
  • Description of its business purpose.
  • Assignment mode. For example, manual assignment versus assignment from a Member Source.

These inputs help clearly define the role’s identity and intended usage in policy rules.
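The inputs above might be gathered into a request body like the following sketch. All field names are assumptions for illustration and may differ from the actual Policy Management API; the naming-convention helper simply encodes the r_<domain>_<capability> pattern suggested in the Tips:

```python
# Hypothetical role-creation payload; field names are illustrative only.
role_request = {
    "name": "r_cc_protect",
    "description": "Payment services allowed to tokenize card numbers",
    "assignment": "member_source",  # vs. manual assignment
}

def follows_convention(name):
    """Check the suggested r_<domain>_<capability> naming pattern."""
    return name.startswith("r_") and len(name.split("_")) >= 3

print(follows_convention(role_request["name"]))  # True
```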

Outcome

A Role is created and ready to populate with members. It can now be linked to a Member Source and later associated with Data Elements and permissions within a policy.

Conceptual Examples

  • Example 1: Protection Role
    • A role named r_cc_protect is created for payment‑processing applications responsible for protecting credit card numbers using tokenization before storage.
  • Example 2: Limited‑Access Role
    • A role named r_customer_support_masked is created for agents who may view masked customer data but cannot unprotect or view clear‑text values.

Tips

  • Keep role names short but descriptive. For example, r_<domain>_<capability>.
  • Use separate roles for different permission levels, such as protect versus unprotect, to keep policies clean and auditable.
  • Avoid putting too many responsibilities in a single role. For example, smaller, purpose‑specific roles simplify long‑term maintenance.
  • If possible, design roles around business functions and not individuals, to avoid maintenance churn.
  • Note that a Role can also be created with the ALL_USERS option.

5 - Assign Member Source to Role

Workflow to assign member source to role.

Summary

Assign a user or group from a Member Source to a Role so the Role is backed by real identities that can receive policy permissions. This step links the Role to the identities it should represent and, when synchronized, imports current membership from the source into the Role.

Description

This step connects a previously created Role to a specific user or group that exists in a Member Source. For example, LDAP, Active Directory, Azure AD, a database, or a file-based source. Using the pim create roles members command, you define which source-backed identity should belong to the Role. After that, running a role sync updates the Role with membership information from the source.

This is the point where a Role stops being only a named container and becomes tied to actual enterprise identities. Once this binding exists, the Role can be used meaningfully in policy rules, because the system can map policy access decisions back to real users, groups, or service accounts.

Purpose

To bind a Role to authoritative identities from a Member Source so that policy permissions apply to real users or groups rather than to an empty logical object. This ensures policy enforcement reflects the organization’s existing identity model and can stay aligned with membership changes in the source system over time.

Prerequisites

  • A Role must already exist.
  • A Member Source must already be created and available.
  • The user or group to be assigned must exist in that Member Source. It should also be identifiable by name and type, with an optional synchronization identifier if required by the source.

Inputs

Typical inputs for this step include:

  • Role UID or Role identifier.
  • Member name from the source, such as user or group name.
  • Source UID identifying the Member Source.
  • Member type, such as USER or GROUP.
  • Optional synchronization identifier, depending on the source and membership model.

You may also optionally run a synchronization operation after assignment so that the Role reflects current membership from the source immediately.
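The assignment-then-sync flow can be modeled as plain data. The identifiers and field names below are assumptions for illustration, not documented API fields:

```python
# Hypothetical role-member assignment referencing a Member Source.
assignment = {
    "role_uid": "role-1234",
    "member_name": "pci_analysts",
    "source_uid": "ms-5678",
    "member_type": "GROUP",
}

def effective_members(assignment, source_groups):
    """After a sync, a group-backed Role holds the group's current members."""
    if assignment["member_type"] == "GROUP":
        return source_groups.get(assignment["member_name"], [])
    return [assignment["member_name"]]

# Simulated source state at sync time.
print(effective_members(assignment, {"pci_analysts": ["alice", "carol"]}))
```

This models why synchronization matters: the Role's effective membership is whatever the source reports at sync time, so membership changes in the source flow into the Role on the next sync.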

Outcome

The Role now has a source-backed member assignment and can be used as an identity-backed object in policy rules. After synchronization, the Role reflects the current membership information from the Member Source, allowing policy access to apply to actual users, groups, or service accounts. Without this step, policies may be defined correctly but still not grant access to anyone in practice.

Conceptual Examples

  • Example 1: Assigning an LDAP Group to a Protection Role
    • A Role named r_cc_protect is linked to a group such as pci_analysts from an LDAP Member Source. After the role is synchronized, all current members of that LDAP group become the effective identities behind the Role. This allows them to receive the permissions defined later in the policy.
  • Example 2: Assigning a Service Account User from a File-Based Source
    • In a test environment, a file-based Member Source contains sample users. A specific service account user is attached to a Role so that demo or automation workflows can exercise the policy. After synchronization, that Role can be referenced in rules just like a production role backed by a centralized identity provider.

Tips

  • Prefer assigning groups instead of individual users when possible. This reduces maintenance and keeps Role design aligned with business functions. This is consistent with the examples and scripts, which commonly model role membership using source groups such as pci_analysts or hr_analysts. Note that some of the examples will not use groups.
  • Run a role synchronization after assigning the member source so the Role reflects current source membership immediately. The example workflow explicitly marks sync as recommended.
  • Use clear naming and role design so the source membership aligns with the intended policy behavior. A mismatch between Role purpose and source membership can make later rule definitions misleading or ineffective. This follows the workflow guidance that roles should map to business purpose and member sources should align with expected role design.

6 - Create Policy Shell

Workflow to create a policy shell.

Summary

Create an empty Policy Shell that acts as the container for roles, data elements, rules, and deployment configuration.

Description

A Policy Shell is the foundational policy object that holds all components of a complete policy but initially contains no rules or assignments. It defines the policy’s identity, which is its name, description, and purpose, and prepares the environment for adding data elements, roles, permissions, and datastores. Creating a Policy Shell is the administrative starting point for constructing a full policy.

Purpose

To establish a dedicated policy container that will later be populated with rules governing how sensitive data is protected and who may access it. The Policy Shell provides organizational structure and acts as the anchor for all subsequent policy configuration steps.

Prerequisites

  • Policy Management must be initialized. For more information about the initialization step, refer to section Initialize Policy Management.
  • Any Data Elements, Roles, or Member Sources you plan to use may optionally be created beforehand, but are not required at this step.

Inputs

Typical inputs for this step include:

  • Policy name.
  • Policy description.
  • Optional metadata or tags for categorization.

At this stage, no data elements, roles, or permissions are defined. Only the policy container itself is defined.
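Conceptually, the shell is a container whose rule and datastore collections start empty. The sketch below is an illustrative data shape, not the actual policy object schema:

```python
# Hypothetical shape of a freshly created Policy Shell.
policy_shell = {
    "name": "policy_credit_card",
    "description": "Governs tokenization of credit card numbers",
    "tags": ["payments", "pci"],
    "rules": [],        # empty: no rules exist yet
    "datastores": [],   # deployment targets are attached later
}

def is_shell(policy):
    """A policy is still a shell if it has no rules and no datastores."""
    return not policy["rules"] and not policy["datastores"]

print(is_shell(policy_shell))  # True
```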

Outcome

A new, empty policy is created and ready to be configured. You can now begin attaching Data Elements, assigning Roles, defining permissions, and associating Datastores.

Conceptual Examples

  • Example 1: Credit Card Protection Policy
    • An administrator creates a new policy shell named policy_credit_card intended to govern how credit card numbers are tokenized and which users can unprotect them.
  • Example 2: Customer Support Access Policy
    • A policy shell named policy_support_data is created to organize rules that provide masked data to customer service roles while restricting access to full values.

Tips

  • Choose clear and descriptive names so the purpose of the policy is immediately recognizable.
  • Create separate policies for distinct business domains to simplify auditing and updates. For example payments, HR, or analytics.
  • Avoid overloading one policy with too many unrelated Data Elements. Smaller policies are easier to manage and review.
  • Think of the Policy Shell as the project folder for everything that will follow.

7 - Define Rule with Data Element and Role

Workflow to define rule with Data Element and Role.

Summary

Define a rule that specifies how a Role may interact with a Data Element by assigning permissions such as protect, unprotect, mask, or view.

Description

A Rule establishes the relationship between a Data Element and a Role within a policy. It defines which operations members of that Role are allowed to perform on the Data Element. For example, protecting the data using tokenization, viewing masked values, or unprotecting the data if permitted. Rules are the core of policy logic. They determine the behavior of the system when a user or application attempts to access or process sensitive data.

Purpose

To define who (the Role) can do what (the permission) to which data (the Data Element). Rules translate business intent into enforceable policy logic and ensure consistent application of protection standards across all datastores.

Prerequisites

  • A Policy Shell must exist. For more information about creating a policy shell, refer to the section Create Policy Shell.
  • A Data Element must be created. For more information about creating a data element, refer to the section Prepare Data Element.
  • A Role must be created and associated with a Member Source. For more information about creating a member source, creating a role, and assigning member source to the role, refer to the following sections:
    • Create Member Source
    • Create Role
    • Assign Member Source to Role

Inputs

Typical inputs for this step include:

  • Role to which the rule applies.
  • Data Element being controlled.
  • Permissions. For example, protect, unprotect, mask, and view.
  • Optional output behavior. For example, allow masked return only.
  • Optional masking configuration if applicable.
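A rule binding a Role to a Data Element can be sketched as data plus a small evaluation check. All field names are illustrative assumptions; the actual rule schema belongs to the Policy Management API:

```python
# Hypothetical protect-only rule, deliberately omitting "unprotect"
# to follow the principle of least privilege.
rule = {
    "role": "r_cc_protect",
    "data_element": "de_credit_card",
    "permissions": ["protect"],
    "output": "masked",  # optional output behavior
}

def allowed(rule, role, operation):
    """Evaluate whether a role may perform an operation under this rule."""
    return rule["role"] == role and operation in rule["permissions"]

print(allowed(rule, "r_cc_protect", "protect"))    # True
print(allowed(rule, "r_cc_protect", "unprotect"))  # False
```

This mirrors the Protect-Only conceptual example: applications using the role can store protected values but cannot retrieve clear text.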

Outcome

A rule is added to the policy, granting or restricting specific interactions between the designated Role and Data Element. The policy now contains enforceable access logic that dictates how protected data will behave for different types of users or applications.

Conceptual Examples

  • Example 1: Protect‑Only Rule
    • A rule is created to allow the r_cc_protect role to protect credit card numbers using tokenization but not unprotect them. Applications using this role can store sensitive data safely, but cannot retrieve clear values.
  • Example 2: Masked‑View Rule
    • A rule is created for the r_support_masked role, allowing customer support teams to view masked data but not access clear text or perform protection operations.

Tips

  • Define rules with the principle of least privilege. Only grant operations that are required for the role’s function.
  • Avoid giving unprotect permissions unless absolutely necessary. Restricting this keeps sensitive data safe.
  • Use naming conventions to help visually match roles to rule types. For example, r_<domain>_protect and r_<domain>_viewmasked.
  • For complex policies, document why each rule exists to simplify future audits or updates.

8 - Create Datastore

Workflow to create datastore.

Summary

Create a Datastore entry that represents the application, service, or infrastructure component where the policy will be deployed and enforced.

Description

A Datastore defines the environment in which a policy will operate, such as an application server, a database engine, an API endpoint, or another enforcement point. It represents the location where data is accessed or processed and where the policy rules, which have been defined earlier through roles and data elements, will be applied. Creating a Datastore registers this target environment with the policy management system so that policies can later be deployed to it.

Purpose

To identify and register a policy enforcement location so the policy system knows where the rules should run. Without a Datastore, a policy cannot be enforced, because the system has no target environment to push the configuration to.

Prerequisites

  • A Policy Shell must exist. For more information about creating a policy shell, refer to the section Create Policy Shell.
  • Rules that define how roles interact with data elements should already be created. For more information about defining rules, refer to the section Define Rule with Data Element and Role.
  • The environment where the Datastore will be mapped (for example, an application, service, or host) should be known.

Inputs

Typical Datastore inputs include:

  • Name of the Datastore.
  • Type. For example, application, service, and database.
  • Connection information. For example, hostnames, endpoints, and identifiers.
  • Optional metadata. For example, environment tags such as dev, test, or production.

Actual inputs depend on the type of enforcement point being registered.
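Putting the inputs together, a Datastore registration might look like the following sketch. The field names and the ds_<domain>_<env> naming pattern are assumptions taken from the Tips below, not a documented schema:

```python
# Hypothetical Datastore registration payload.
datastore = {
    "name": "ds_payments_prod",
    "type": "application",
    "connection": {"host": "payments.example.com"},
    "tags": ["production"],
}

def env_of(name):
    """Extract the environment suffix from a ds_<domain>_<env> name."""
    return name.rsplit("_", 1)[-1]

print(env_of(datastore["name"]))  # "prod"
```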

Outcome

A Datastore is created and available for policy deployment. Policies can now be associated with this Datastore so that enforcement can occur during real data access operations.

Conceptual Examples

None.

Tips

  • Use consistent naming to distinguish environments. For example, ds_payments_prod and ds_analytics_dev.
  • Create separate Datastores for different systems, even if they use the same policy, to maintain clear deployment boundaries.
  • Map Datastores to actual data‑flow locations. Wherever sensitive data is read, processed, or stored, a Datastore should exist.
  • Confirm the destination system is reachable or properly registered before deploying policies to avoid deployment failures.

9 - Deploy Policy to a Datastore

Workflow to deploy policy to datastore.

Summary

Deploy the completed policy to a Datastore so that its rules are actively enforced during real data access operations.

Description

Deploying a policy makes it operational on a specific Datastore, such as an application, service, database, or other enforcement point. Until deployment occurs, a policy exists only as a configuration object. Deployment pushes all rules, including Data Elements, Roles, and permissions, to the target Datastore. This ensures that the runtime environment can apply them when users or applications interact with sensitive data.

Purpose

To activate the policy in an environment where protected data is accessed. Deployment ensures that the Datastore enforces the correct behavior, such as tokenization, masking, unprotect permissions, or other rules, based on the policy definition.

Prerequisites

  • A Policy Shell must be created. For more information about creating a policy shell, refer to the section Create Policy Shell.
  • The policy must contain rules that bind Data Elements to Roles. For more information about defining rules, refer to the section Define Rule with Data Element and Role.
  • A Datastore must exist. For more information about creating a datastore, refer to the section Create Datastore.
  • Connectivity or registration between Policy Management and the Datastore should be confirmed.

Inputs

Typical deployment inputs include:

  • Policy name or ID.
  • Datastore name or ID.
  • Optional deployment parameters, depending on environment. For example, environment tags and version notes.
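The prerequisite checks before deployment can be sketched as a simple gate: a policy needs at least one rule, and its target must be a registered Datastore. The data shapes below are illustrative assumptions only:

```python
# Hypothetical policy ready for deployment.
policy = {
    "name": "policy_credit_card",
    "rules": [{"role": "r_cc_protect", "data_element": "de_credit_card"}],
    "target": "ds_payments_prod",
}

def can_deploy(policy, datastore_registry):
    """Deployment requires rules and a registered target Datastore."""
    return bool(policy["rules"]) and policy["target"] in datastore_registry

print(can_deploy(policy, {"ds_payments_prod"}))  # True
print(can_deploy(policy, set()))                  # False: target not registered
```

This mirrors the prerequisites above: a rule-less shell, or an unregistered target, should stop the deployment before it is attempted.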

Outcome

The policy is successfully deployed to the specified Datastore. The enforcement point is now configured to apply the defined data protection rules whenever sensitive data is read, written, or processed.

Conceptual Examples

Not applicable.

Tips

  • Always verify that the Datastore is correctly registered before deploying to avoid deployment errors.
  • If you maintain separate development, test, and production environments, use clearly named Datastores for each to avoid mis-deployment.
  • After deployment, test a few representative access scenarios to confirm enforcement works as intended.
  • Consider using versioning or descriptions on deployments for auditability and rollback clarity.

10 - Confirm Deployment

Workflow to confirm deployment.

Summary

Verify that the policy has been successfully deployed to the intended Datastore by retrieving deployment information.

Description

After deploying a policy, it is important to confirm that the system has registered the deployment correctly. The API provides a command to retrieve a list of all Datastores along with the policies currently connected to them. This verification step ensures that the deployment completed successfully and that the Datastore is now enforcing the appropriate policy rules.

Purpose

To confirm that the policy deployment is active and correctly associated with the target Datastore. This step provides assurance that the configuration is in effect and ready for runtime enforcement.

Prerequisites

A policy must have been deployed to a Datastore. For more information about deploying a policy, refer to the section Deploy Policy to a Datastore.

Inputs

No inputs are required. The confirmation command runs without arguments.

Outcome

You receive a deployment report listing each Datastore by UID and the policies associated with it. If the policy appears in the list, then deployment is confirmed.
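The confirmation check can be sketched as a lookup over such a report. The report shape and UID values below are assumptions for illustration; the actual response format is defined by the API:

```python
# Hypothetical deployment report: datastores by UID with attached policies.
report = [
    {"datastore_uid": "ds-001", "policies": ["policy_credit_card"]},
    {"datastore_uid": "ds-002", "policies": []},
]

def is_deployed(report, datastore_uid, policy_name):
    """Confirm a policy appears under the expected Datastore."""
    for entry in report:
        if entry["datastore_uid"] == datastore_uid:
            return policy_name in entry["policies"]
    return False

print(is_deployed(report, "ds-001", "policy_credit_card"))  # True: confirmed
print(is_deployed(report, "ds-002", "policy_credit_card"))  # False: not deployed
```

If the equivalent check against the real report comes back negative, re-run the deployment or review the configuration as described in the Tips.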

Conceptual Examples

Not applicable.

Tips

  • If the expected policy does not appear in the list, re‑run the deployment or check for configuration errors.
  • Use this command routinely when validating changes or troubleshooting application behavior.
  • Keep track of Datastore UIDs to avoid confusion in complex environments.