Credit Card

Details about the Credit Card token type.

The Credit Card token type helps maintain transparency: it provides ways to clearly distinguish a token from the real value, which is a PCI DSS recommendation. The Credit Card token type supports only numeric input (no separators are allowed as input).

Table: Credit Card Tokenization properties


Tokenization Type Properties | Settings
Name | Credit Card
Token type and Format | Digits 0 through 9 (no separators are allowed as input)

Tokenizer | Length Preservation | Minimum Length | Maximum Length
SLT_1_3, SLT_2_3 | Yes | 3 | 4096
SLT_1_6, SLT_2_6 | Yes | 6 | 4096

Possibility to set Minimum/Maximum Length | No
Left/Right settings | Yes
Internal IV | Yes, if Left/Right settings are non-zero
External IV | Yes
Return of Protected value | Yes
Token specific properties | Invalid LUHN Checksum, Invalid Card Type, Alphabetic Indicator

The real credit card number is distinguished from the tokenized value based on the token value validation properties.

Table: Specific Properties of the Credit Card Token Type

Credit Card Token Value Validation Property | Left in Clear | Right in Clear | Comments | Validation Properties Compatibility
Invalid Luhn Checksum (On/Off) | Yes | Yes | Right characters to be left in the clear can be specified; this usually requires specifying a group of up to four characters. | Can be used together.
Invalid Card Type (On/Off) | 0 | Yes | Left in Clear cannot be specified; it is zero by default. | Can be used together.
Alphabetic Indicator (On/Off) | Yes | Yes | The indicator appears in the token, so both Left in Clear and Right in Clear can be specified. | Can be used only separately from the other token validation properties.

You can create a Credit Card token element without selecting any validation property for it. In that case, the Credit Card token is handled similarly to a Numeric token. However, additional checks are applied to the input based on the general properties of the Credit Card token detailed in the table above.

To enable the Credit Card token properties, such as Invalid LUHN Checksum and Invalid Card Type, with the SLT Tokenizers, refer to Credit Card Properties with SLT Tokenizers.

Invalid Luhn Checksum

The purpose of the Luhn checksum is to detect incorrectly entered card details. If you enable the Invalid Luhn Checksum token validation, you must use valid credit card numbers; tokenization is denied for an invalid credit card number.

A valid credit card number has a valid Luhn checksum. Upon tokenization, the tokenized value has an invalid Luhn checksum. The following table shows an example of a tokenized credit card with an invalid Luhn digit.

Table: Credit Card Number with Luhn Checksum Examples

Credit Card Number | Tokenized Value | Comments
4067604564321453 | Token is not generated due to invalid input value. An error is returned. | The input value contains an invalid Luhn checksum. The value cannot be tokenized while Invalid Luhn Checksum is enabled.
4067604564321454 | 2009071778438613 | The Luhn checksum in the input value is correct, so the value is tokenized. The tokenized value has an invalid Luhn checksum.
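For reference, the gating logic in this example is the standard Luhn algorithm. The following Python sketch is for illustration only and is not part of the Protegrity API:

    def luhn_is_valid(number: str) -> bool:
        """Return True if a digit string passes the Luhn checksum."""
        total = 0
        for i, ch in enumerate(reversed(number)):
            digit = int(ch)
            if i % 2 == 1:       # double every second digit from the right
                digit *= 2
                if digit > 9:
                    digit -= 9   # equivalent to summing the two digits
            total += digit
        return total % 10 == 0

    print(luhn_is_valid("4067604564321453"))  # False: tokenization is denied
    print(luhn_is_valid("4067604564321454"))  # True: the value can be tokenized
    print(luhn_is_valid("2009071778438613"))  # False: the token has an invalid checksum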

Invalid Card Type

An invalid card type indicates an issue with the credit card details. With Invalid Card Type enabled, token values do not start with the digits that real credit card numbers begin with. The first digit of a real credit card number is the Major Industry Identifier: digits 3, 4, 5, 6, and 0 can be the first digit of a real credit card number, and this digit is substituted during tokenization.

Table: Real Credit Card Values with Tokenized Values

Real Credit Card Value | 3 | 4 | 5 | 6 | 0
Tokenized Value | 2 | 7 | 8 | 9 | 1

The following table shows an example of a tokenized credit card with an invalid card type.

Table: Credit Card Number with Invalid Card Type Examples

Credit Card Number | Tokenized Value | Comments
4067604564321454 | 7335610268467066 | The credit card type is valid; tokenization is successful.
2067604564321454 | Token is not generated due to invalid input value. An error is returned. | The credit card type is invalid because the first digit “2” does not belong to a real credit card number. The value cannot be tokenized.
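The first-digit rule can be sketched as follows; the mapping is taken from the substitution table above, and the function name is illustrative, not part of the Protegrity API:

    # Major Industry Identifier substitution, from the table above.
    MII_SUBSTITUTION = {"3": "2", "4": "7", "5": "8", "6": "9", "0": "1"}

    def has_valid_card_type(number: str) -> bool:
        """Return True if the first digit belongs to a real credit card number."""
        return number[:1] in MII_SUBSTITUTION

    print(has_valid_card_type("4067604564321454"))  # True: can be tokenized
    print(has_valid_card_type("2067604564321454"))  # False: tokenization is denied
    # The token for a valid input starts with the substituted digit, e.g. 4 -> 7.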

Alphabetic Indicator

The Alphabetic Indicator replaces one digit of the tokenized value with an alphabetic character. If you enable the Alphabetic Indicator validation, the resulting token value contains one alphabetic character.

You must choose the position of the alphabetic character before tokenizing a credit card number; otherwise, the resulting token has no alphabetic indicator.

The alphabetic indicator substitutes a tokenized digit according to the following rule:

Table: Alphabetic Indicator with Tokenized Digits

Tokenized digit | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Alphabetic indicator | A | B | C | D | E | F | G | H | I | J

In the following table, the Visa card number “4067604564321454” is tokenized. The tokenized value, “7594107411315001”, is substituted with an alphabetic character at the selected position.

Table: Examples of Credit Card Tokenization with Alphabetic Indicator

Credit Card Number (Input Value) | Position | Tokenized Value | Comments
4067604564321454 | - | 7594107411315001 | No substitution, since the position is undefined.
4067604564321454 | 14 | 7594107411315A01 | Digit “0” is substituted with character “A” at position 14.
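The substitution rule amounts to a digit-to-letter mapping at a 1-based position. The following sketch reproduces the tables above for illustration and is not the Protegrity implementation:

    def apply_alphabetic_indicator(token: str, position: int) -> str:
        """Replace the digit at a 1-based position with its letter (0 -> A ... 9 -> J)."""
        digit = int(token[position - 1])
        return token[:position - 1] + chr(ord("A") + digit) + token[position:]

    print(apply_alphabetic_indicator("7594107411315001", 14))  # 7594107411315A01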

Credit Card Properties with SLT Tokenizers

Credit Card Properties with SLT Tokenizers explains the minimum data length required for tokenization when the Credit Card token properties are used in combination with the SLT Tokenizers.

If you enable Credit Card token properties for tokenization, such as Invalid LUHN Checksum and Invalid Card Type, you must select an appropriate SLT Tokenizer to ensure that the minimum data length is available for successful tokenization.

The following table shows the minimum data length, in digits, required for tokenization for each combination of Credit Card token properties and SLT Tokenizers.

Table: Minimum Data Length - Credit Card Token Properties with SLT Tokenizers

Enabled Credit Card Token Property | SLT_1_3/SLT_2_3 | SLT_1_6/SLT_2_6
Invalid LUHN Checksum | 4 | 7
Invalid Card Type | 4 | 7
Invalid LUHN Checksum and Invalid Card Type | 5 | 8
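As a sketch, a pre-check of the input length against the table above could look like this; the values are copied from the table, and the helper is illustrative, not part of the Protegrity API:

    # Minimum input length (in digits), keyed by the enabled properties
    # (invalid_luhn, invalid_card_type) and the tokenizer family.
    MIN_LENGTH = {
        (True, False): {"SLT_1_3/SLT_2_3": 4, "SLT_1_6/SLT_2_6": 7},
        (False, True): {"SLT_1_3/SLT_2_3": 4, "SLT_1_6/SLT_2_6": 7},
        (True, True):  {"SLT_1_3/SLT_2_3": 5, "SLT_1_6/SLT_2_6": 8},
    }

    def meets_minimum_length(data: str, luhn: bool, card_type: bool, tokenizer: str) -> bool:
        """Return True if the input is long enough for the enabled properties."""
        return len(data) >= MIN_LENGTH[(luhn, card_type)][tokenizer]

    print(meets_minimum_length("4067", True, False, "SLT_1_3/SLT_2_3"))  # True
    print(meets_minimum_length("4067", True, True, "SLT_1_3/SLT_2_3"))   # False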

Credit Card Tokenization Properties for Different Protectors

Application Protector

The following table shows supported input data types for Application protectors with the Credit Card token.

Table: Supported input data types for Application protectors with Credit Card token

Application Protectors*2 | AP Java*1 | AP Python
Supported input data types | STRING, CHAR[], BYTE[] | STRING, BYTES

*1 - The API accepts and returns data in BYTE[] format. The customer application needs to convert the input into byte arrays before calling the API and, similarly, convert the output from byte arrays after receiving the response from the API.

*2 - The Protegrity Application Protector only supports bytes converted from the string data type. If any other data type is directly converted to bytes and passed as input to the Application Protector APIs that accept bytes as input and return bytes as output, data corruption might occur.
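To illustrate the conversion the footnotes describe, here is a minimal Python sketch; protect_bytes stands in for the actual byte-based protector call and is a placeholder, not a real API name:

    def protect_string(plaintext: str, protect_bytes) -> str:
        """Wrap a byte-based protect call with string <-> bytes conversion."""
        data = plaintext.encode("utf-8")   # convert the string input to bytes
        token = protect_bytes(data)        # byte-based protector call (placeholder)
        return token.decode("utf-8")       # convert the byte output back to a string

    # Demo with an identity stand-in for the real API:
    print(protect_string("4067604564321454", lambda b: b))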

For more information about Application protectors, refer to Application Protector.

Big Data Protector

Protegrity supports MapReduce, Hive, Pig, HBase, Spark, and Impala, which utilize the Hadoop Distributed File System (HDFS) or Ozone as the data storage layer. The data is protected from internal and external threats, while users and business processes can continue to utilize the secured data. Protegrity protects data inside the files using tokenization and strong encryption protection methods.

The following table shows supported input data types for Big Data protectors with the Credit Card token.

Table: Supported input data types for Big Data protectors with Credit Card token

Big Data Protectors | MapReduce*2 | Hive | Pig | HBase*2 | Impala | Spark*2 | Spark SQL | Trino
Supported input data types*1 | BYTE[] | STRING | CHARARRAY | BYTE[] | STRING | BYTE[], STRING | STRING | VARCHAR

*1 - If the input and output types of the API are BYTE[], then the customer application should convert the input to, and the output from, the byte array before calling the API.

*2 - The Protegrity MapReduce protector, HBase coprocessor, and Spark protector only support bytes converted from the string data type. Bytes that are not generated from the string data type might cause data corruption when:

  • Any other data type is directly converted to bytes and passed as input to the MapReduce or Spark API that accepts bytes as input and returns bytes as output.
  • Any other data type is directly converted to bytes and inserted into an HBase table that is configured with the Protegrity HBase coprocessor.

For more information about Big Data protectors, refer to Big Data Protector.

Data Warehouse Protector

The Protegrity Data Warehouse Protector is an advanced security solution designed to protect sensitive data at the column level. This enables you to secure your data while still permitting access to authorized users. Additionally, the Data Warehouse Protector integrates seamlessly with existing database systems using User-Defined Functions for enhanced security. Protegrity protects data inside the data warehouses using various tokenization and encryption methods.

The following table shows the supported input data types for the Teradata protector with the Credit Card token.

Table: Supported input data types for Data Warehouse protectors with Credit Card token

Data Warehouse Protectors | Teradata
Supported input data types | VARCHAR LATIN

For more information about Data Warehouse protectors, refer to Data Warehouse Protector.

Database Protectors

The following table shows supported input data types for Database protectors with the Credit Card token.

Table: Supported input data types for Database protectors with Credit Card token

Database Protectors | Oracle | MSSQL
Supported input data types | VARCHAR2, CHAR | VARCHAR, CHAR

For more information about Database protectors, refer to Database Protectors.

