Printable
Deprecated
Starting from v10.0.x, the Printable token type is deprecated.
It is recommended to use the Unicode Gen2 token type instead.
The Printable token type tokenizes ASCII printable characters from the ISO 8859-15 alphabet, which include letters, digits, punctuation marks, and miscellaneous symbols.
Table: Printable Tokenization Type properties
| Tokenization Type Properties | Settings |
|---|---|
| Name | Printable |
| Token type and Format | ASCII printable characters, which include letters, digits, punctuation marks, and miscellaneous symbols. Hex character codes from 0x20 to 0x7E and from 0xA0 to 0xFF. Refer to ASCII Character Codes for the list of ASCII characters supported by the Printable token. |
| Tokenizer*1*2 | Refer to the tokenizer settings table below. |
| Possibility to set Minimum/maximum length | No |
| Left settings | Yes |
| Internal IV | Yes, if Left/Right settings are non-zero |
| External IV | Yes |
| Return of Protected value | Yes |
| Token specific properties | Token tables are large in size, approximately 27 MB. Refer to SLT Tokenizer Characteristics for the exact numbers. |

| Tokenizer*1*2 | Length Preservation | Allow Short Data | Minimum Length | Maximum Length |
|---|---|---|---|---|
| SLT_1_3 | Yes | Yes | 1 | 4096 |
| SLT_1_3 | Yes | No, return input as it is | 3 | 4096 |
| SLT_1_3 | Yes | No, generate error | 3 | 4096 |
| SLT_1_3 | No | NA | 1 | 4091 |
*1 – The character column ("CHAR") to protect is configured to remove trailing spaces before tokenization, which means that space characters can be lost for Printable tokens. To avoid this, consider using the Lower ASCII token instead of Printable for CHAR columns and input data containing spaces.
*2 – Printable tokenization is not supported on databases where the character set is UTF.
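As a quick sanity check, the supported code ranges above (0x20 to 0x7E and 0xA0 to 0xFF) can be verified programmatically. The helper below is an illustrative sketch, not part of any Protegrity API:

```python
def is_printable_supported(value: str) -> bool:
    """Return True if every character falls in the code ranges that
    the Printable token type tokenizes: 0x20-0x7E and 0xA0-0xFF."""
    return all(0x20 <= ord(ch) <= 0x7E or 0xA0 <= ord(ch) <= 0xFF
               for ch in value)

# Letters, digits, spaces, and punctuation are in range.
print(is_printable_supported("La Scala 05698"))   # True
# Control characters such as TAB (0x09) are outside the ranges.
print(is_printable_supported("qw\tqwa"))          # False
```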
The following table shows examples of how values are tokenized with the Printable token.
Table: Examples of Tokenization for Printable Values
| Input Values | Tokenized Values | Comments |
|---|---|---|
| La Scala 05698 | F|ZpÙç|Ôä%s^¦4 | All characters in the input value, including spaces, are tokenized. |
| Ford Mondeo CA-0256TY M34 567 K-45 | §)%ß#)ðYjt{¬ÓÊEµV²ù² | All characters in the input value, including spaces, are tokenized. |
| qw | rD | Printable, SLT_1_3, Left=0, Right=0, Length Preservation=Yes, Allow Short Data=Yes. The input meets the minimum length requirement for the SLT_1_3 tokenizer when Length Preservation=Yes and Allow Short Data=Yes. |
| qw | Error. Input too short. | Printable, SLT_1_3, Left=0, Right=0, Length Preservation=Yes, Allow Short Data=No, generate error. The input has only two characters to tokenize, which is too short for the SLT_1_3 tokenizer with these settings, so an error is generated. |
| qw qwa | qw rDZ | Printable, SLT_1_3, Left=0, Right=0, Length Preservation=Yes, Allow Short Data=No, return input as it is. If the input value has fewer than three characters to tokenize, it is returned as is; otherwise, it is tokenized. |
Printable Tokenization Properties for Different Protectors
Application Protector
Printable tokenization is recommended for APIs that accept BYTE [] as input and provide BYTE [] as output. If uniform encoding is maintained across protectors, tokens generated by these APIs can be used across various protectors.
To ensure accurate tokenization results, users must use ISO 8859-15 character encoding when converting String data to bytes. This input should then be passed to the Byte APIs.
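For example, a client application might perform the conversion like this (a minimal Python sketch; the helper names are hypothetical, and the actual byte-in/byte-out protect call is omitted):

```python
# Hypothetical helpers; the real AP byte-in/byte-out API is not shown.
def to_protector_bytes(value: str) -> bytes:
    # ISO 8859-15 maps every supported character to the single-byte
    # codes (0x20-0x7E, 0xA0-0xFF) that the Printable token expects.
    return value.encode("iso-8859-15")

def from_protector_bytes(data: bytes) -> str:
    return data.decode("iso-8859-15")

# The euro sign is part of ISO 8859-15 (code 0xA4).
token_input = to_protector_bytes("Ford Mondeo \u20ac")
print(token_input)
```

The resulting bytes would then be passed to the Byte APIs, and the response decoded with the same encoding.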
Note: If Printable tokens are generated using APIs or UDFs that accept STRING or VARCHAR as input, then the protected values can only be unprotected using the protector with which they were protected. Unprotecting the protected data using any other protector could produce inconsistent results.
The following table shows supported input data types for Application protectors with the Printable token.
Table: Supported input data types for Application protectors with Printable token
| Application Protectors*2 | AP Java*1 | AP Python |
|---|---|---|
| Supported input data types | STRING, CHAR[], BYTE[] | STRING, BYTES |
*1 - The API accepts and returns data in BYTE[] format. The customer application must convert the input into byte arrays before calling the API and convert the output from byte arrays after receiving the response.
*2 - The Protegrity Application Protector only supports bytes converted from the string data type. If any other data type is directly converted to bytes and passed as input to Application Protector APIs that accept bytes as input and provide bytes as output, data corruption might occur.
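Footnote *2 can be illustrated with a short sketch: only bytes derived from a value's string form are safe, while a direct binary conversion of another data type produces byte values outside the supported character codes (the variable names below are illustrative):

```python
import struct

amount = 5698

# Safe: convert the value to its string form first, then encode.
safe_bytes = str(amount).encode("iso-8859-15")

# Unsafe: a direct binary conversion produces raw bytes (here 0x00,
# 0x00, 0x16, 0x42), including non-printable values that a
# byte-in/byte-out protect API may corrupt.
unsafe_bytes = struct.pack(">i", amount)

print(safe_bytes)    # b'5698'
print(unsafe_bytes)  # b'\x00\x00\x16B'
```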
For more information about Application protectors, refer to Application Protector.
Big Data Protector
Protegrity supports MapReduce, Hive, Pig, HBase, Spark, and Impala, which utilize the Hadoop Distributed File System (HDFS) or Ozone as the data storage layer. The data is protected from internal and external threats, while users and business processes can continue to utilize the secured data. Protegrity protects data inside the files using tokenization and strong encryption protection methods.
The following table shows supported input data types for Big Data protectors with the Printable token.
Table: Supported input data types for Big Data protectors with Printable token
| Big Data Protectors | MapReduce*4*5 | Hive | Pig | HBase*4*5 | Impala*2*3 | Spark*4*5 | Spark SQL | Trino |
|---|---|---|---|---|---|---|---|---|
| Supported input data types*1*6 | BYTE[] | Not supported | Not supported | BYTE[] | STRING | BYTE[]*5 | Not supported | VARCHAR |
*1 – If the input and output types of the API are BYTE[], then the customer application must convert the input to a byte array before calling the API and convert the output from a byte array afterwards.
*2 – Ensure that you use the Horizontal tab “\t” as the field or column delimiter when loading data that is tokenized using Printable tokens for Impala.
*3 – Though the tokenization results for Impala may not be formatted and displayed accurately, they will be unprotected to the original values, using the respective protector.
*4 – The Protegrity MapReduce protector, HBase coprocessor, and Spark protector only support bytes converted from the string data type. Data corruption might occur when:
- Any other data type is directly converted to bytes and passed as input to a MapReduce or Spark API that accepts bytes as input and provides bytes as output.
- Any other data type is directly converted to bytes and inserted into an HBase table that is configured with the Protegrity HBase coprocessor.
*5 – It is recommended to use Printable tokenization with APIs that accept BYTE[] as input and provide BYTE[] as output. If uniform encoding is maintained across protectors, Printable tokens generated by such APIs can be used across various protectors. To ensure accurate formatting and display of tokenization results, clients must convert String data to bytes using ISO 8859-15 character encoding before passing the input to the Byte APIs.
*6 – If Printable tokens are generated using APIs or UDFs that accept STRING or VARCHAR as input, then the protected values can only be unprotected using the protector with which they were protected. Unprotecting the protected data using any other protector could produce inconsistent results.
For more information about Big Data protectors, refer to Big Data Protector.
Data Warehouse Protector
The Protegrity Data Warehouse Protector is an advanced security solution designed to protect sensitive data at the column level. This enables you to secure your data while still permitting access to authorized users. Additionally, the Data Warehouse Protector integrates seamlessly with existing database systems using User-Defined Functions for enhanced security. Protegrity protects data inside data warehouses using various tokenization and encryption methods.
If Printable tokens are generated using APIs or UDFs that accept STRING or VARCHAR as input, then the protected values can only be unprotected using the protector with which they were protected. Unprotecting the protected data using any other protector could produce inconsistent results.
Important: Tokenizing XML or JSON data with Printable tokenization will not return valid XML or JSON format output.
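This behavior can be demonstrated with a stand-in tokenizer (the `fake_tokenize` function below is purely illustrative and simply shifts each code point by one; it is not the real Printable algorithm): because structural characters such as quotes and braces are tokenized along with the payload, the result no longer parses as JSON.

```python
import json

# Stand-in for a Printable protect call (NOT the real algorithm):
# shift every character by one code point.
def fake_tokenize(text: str) -> str:
    return "".join(chr(ord(c) + 1) for c in text)

doc = '{"car": "Ford Mondeo"}'
tokenized = fake_tokenize(doc)

try:
    json.loads(tokenized)
except json.JSONDecodeError:
    print("Tokenized output is no longer valid JSON")
```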
JSON and XML UDFs are supported for the Teradata Data Warehouse Protector.
The following table shows the supported input data types for the Teradata protector with the Printable token.
Table: Supported input data types for Data Warehouse protectors with Printable token
| Data Warehouse Protectors | Teradata |
|---|---|
| Supported input data types | VARCHAR LATIN |
For more information about Data Warehouse protectors, refer to Data Warehouse Protector.
Database Protectors
The following table shows supported input data types for Database protectors with the Printable token.
Table: Supported input data types for Database protectors with Printable token
| Protector | Oracle | MSSQL |
|---|---|---|
| Supported Input Data Types | VARCHAR2, CHAR | VARCHAR, CHAR |
For more information about Database protectors, refer to Database Protectors.