Upper-Case Alpha (A-Z)
The Upper-Case Alpha token type tokenizes alphabetic characters and produces uppercase tokens. De-tokenization also returns all alphabetic characters as uppercase, so the original and de-tokenized values will not match if the input contains lowercase letters.
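For applications that compare an original input with its de-tokenized result, this case normalization needs to be handled explicitly. The following minimal Java sketch does not call the Protegrity API; the values are hard-coded for illustration and show two common ways to handle the mismatch.

```java
// Illustrative only: shows why a lowercase input will not match its
// de-tokenized value character-for-character, and two ways to handle it.
public class UpperCaseAlphaRoundTrip {
    public static void main(String[] args) {
        String input = "abc";        // original value sent for protection
        String detokenized = "ABC";  // Upper-Case Alpha always returns uppercase

        // Direct comparison fails because the token type normalizes case.
        System.out.println(input.equals(detokenized));            // false

        // Option 1: normalize the value to uppercase before protecting it.
        String normalized = input.toUpperCase(java.util.Locale.ROOT);
        System.out.println(normalized.equals(detokenized));       // true

        // Option 2: compare case-insensitively after unprotecting.
        System.out.println(input.equalsIgnoreCase(detokenized));  // true
    }
}
```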
Table: Upper-Case Alpha Tokenization Type properties
| Tokenization Type Properties | Settings |
|---|---|
| Name | Upper-Case Alpha |
| Token type and format | Upper-case letters A through Z |
| Possibility to set minimum/maximum length | No |
| Left/Right settings | Yes |
| Internal IV | Yes, if the Left/Right settings are non-zero |
| External IV | Yes |
| Return of protected value | Yes |
| Token-specific properties | Lower-case characters are accepted in the input, but they are converted to upper-case in the output value. |

Tokenizer settings:

| Tokenizer | Length Preservation | Allow Short Data | Minimum Length | Maximum Length |
|---|---|---|---|---|
| SLT_1_3, SLT_2_3 | Yes | Yes | 1 | 4096 |
| SLT_1_3, SLT_2_3 | Yes | No, return input as it is | 3 | 4096 |
| SLT_1_3, SLT_2_3 | Yes | No, generate error | 3 | 4096 |
| SLT_1_3, SLT_2_3 | No | NA | 1 | 4049 |
The following table shows examples of how values are tokenized with the Upper-case Alpha token.
Table: Examples of Upper Case Alpha tokenization values
| Input Value | Tokenized Value | Comments |
|---|---|---|
| abc | OIM | Upper-case Alpha, SLT_2_3, Left=0, Right=0, Length Preservation=Yes. The value has the minimum length for the SLT_2_3 tokenizer. Lowercase characters in the input are converted to uppercase in the output. De-tokenization returns “ABC”. |
| NY | ZIZ | Upper-case Alpha, SLT_1_3, Left=0, Right=0, Length Preservation=No. The value is padded up to 3 characters, which is the minimum length for the SLT_1_3 tokenizer. |
| NY | Error. Input too short. | Upper-case Alpha, SLT_2_3, Left=0, Right=0, Length Preservation=Yes, Allow Short Data=No, generate error. The input value has only two alphabetic characters to tokenize, which is too short for the SLT_2_3 tokenizer when Length Preservation=Yes and Allow Short Data=No, generate error. |
| NY NYA | NY ZIO | Upper-case Alpha, SLT_2_3, Left=0, Right=0, Length Preservation=Yes, Allow Short Data=No, return input as it is. If the input value has fewer than three characters to tokenize, it is returned as is; otherwise, it is tokenized. |
| NY | ZI | Upper-case Alpha, SLT_2_3, Left=0, Right=0, Length Preservation=Yes, Allow Short Data=Yes. The input value has only two alphabetic characters to tokenize, which meets the minimum length requirement for the SLT_2_3 tokenizer when Length Preservation=Yes and Allow Short Data=Yes. |
| 131 Summer Street, Bridgewater | 131 ZBXDPW G FYTZP, CRTTPXPLYGCU | Upper-case Alpha, SLT_1_3, Left=0, Right=0, Length Preservation=No. Numeric characters, spaces, and the comma are treated as delimiters and are not tokenized. The output value is longer than the initial value. |
| Albert Einstein | AOALXO POHLFHMU | Upper-case Alpha, SLT_2_3, Left=0, Right=0, Length Preservation=Yes. The space is treated as a delimiter and is not tokenized. The output value is the same length as the initial value. |
| 704-BBJ | 704-GTU | Upper-case Alpha, SLT_1_3, Left=3, Right=0, Length Preservation=Yes. The three characters from the left are left in clear. The dash is treated as a delimiter. |
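The examples above follow two rules: characters outside A-Z act as delimiters and pass through unchanged, and positions covered by the Left/Right settings stay in clear. The following minimal Java sketch only imitates that character classification; it substitutes random uppercase letters for the real SLT codebook lookup and assumes that the Left/Right settings count positions of the whole input value (as the 704-BBJ example suggests). The method name and output are purely illustrative, not actual Protegrity tokens.

```java
import java.util.Random;

// Illustrative sketch only: mimics which characters the Upper-Case Alpha
// token type replaces. Random uppercase letters stand in for the real SLT
// codebook lookup, so the output is NOT a reproducible Protegrity token.
public class UpperCaseAlphaSketch {

    // Assumption: Left/Right count positions of the whole input value.
    static String pseudoTokenize(String input, int left, int right) {
        Random rnd = new Random();
        char[] out = input.toCharArray();
        for (int i = 0; i < out.length; i++) {
            boolean inClear = i < left || i >= out.length - right;
            if (inClear || !Character.isLetter(out[i])) {
                continue; // Left/Right positions and delimiters (digits, spaces, '-') pass through
            }
            out[i] = (char) ('A' + rnd.nextInt(26)); // stand-in for the table lookup; always uppercase
        }
        return new String(out);
    }

    public static void main(String[] args) {
        System.out.println(pseudoTokenize("704-BBJ", 3, 0));         // e.g. "704-QTX": "704" and the dash stay in clear
        System.out.println(pseudoTokenize("Albert Einstein", 0, 0)); // same length, space preserved
    }
}
```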
Upper-case Alpha Tokenization Properties for different protectors
Application Protector
The following table shows supported input data types for Application protectors with the Upper-case Alpha token.
Table: Supported input data types for Application protectors with Upper-case Alpha token
| Application Protectors*2 | AP Java*1 | AP Python |
|---|---|---|
| Supported input data types | BYTE[], CHAR[], STRING | BYTES, STRING |
*1 - The API accepts and returns data in the BYTE[] format. The customer application needs to convert the input into byte arrays before calling the API and, similarly, to convert the output from byte arrays after receiving the response from the API.
*2 - The Protegrity Application Protectors only support bytes converted from the string data type. If int, short, or long format data is directly converted to bytes and passed as input to the Application Protector APIs that support byte as input and provide byte as output, then data corruption might occur.
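As a hedged illustration of these notes, the following Java sketch shows how input can be prepared for a byte-based protect API: the value's string representation is encoded to UTF-8 bytes, and the response bytes are decoded back to a string. The protect/unprotect call itself is omitted (a placeholder stands in for the API response), and the example also shows why the raw byte encoding of an int is not a valid substitute for string-derived bytes.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Illustrative sketch only: demonstrates the conversion rules from the notes
// above. No Protegrity API is called; the protect step is a placeholder.
public class ByteInputPreparation {
    public static void main(String[] args) {
        // Correct: convert the string representation of the value to bytes.
        String value = "BRIDGEWATER";
        byte[] supportedInput = value.getBytes(StandardCharsets.UTF_8);

        // ... the byte-based protect call would go here ...
        byte[] protectedOutput = supportedInput; // placeholder for the API response

        // Convert the byte[] response back into a string for the application.
        String result = new String(protectedOutput, StandardCharsets.UTF_8);
        System.out.println(result);

        // Incorrect: converting a numeric type directly to raw bytes. These bytes
        // are not a string encoding, so passing them to the API risks corruption.
        int number = 704;
        byte[] rawIntBytes = ByteBuffer.allocate(4).putInt(number).array();
        byte[] stringBytes = String.valueOf(number).getBytes(StandardCharsets.UTF_8);
        System.out.println(Arrays.equals(rawIntBytes, stringBytes)); // false
    }
}
```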
For more information about Application protectors, refer to Application Protector.
Big Data Protector
Protegrity supports MapReduce, Hive, Pig, HBase, Spark, and Impala, which utilize the Hadoop Distributed File System (HDFS) or Ozone as the data storage layer. The data is protected from internal and external threats, and users and business processes can continue to utilize the secured data. Protegrity protects data inside the files using tokenization and strong encryption protection methods.
The following table shows supported input data types for Big Data protectors with the Upper-Case Alpha token.
Table: Supported input data types for Big Data protectors with Upper-Case Alpha token
| Big Data Protectors | MapReduce*2 | Hive | Pig | HBase*2 | Impala | Spark*2 | Spark SQL | Trino |
|---|---|---|---|---|---|---|---|---|
| Supported input data types*1 | BYTE[] | CHAR*3, STRING | CHARARRAY | BYTE[] | STRING | BYTE[], STRING | STRING | VARCHAR |
*1 – If the input and output types of the API are BYTE[], then the customer application should convert the input into a byte array before calling the API and convert the output from a byte array after receiving the response.
*2 – The Protegrity MapReduce protector, HBase coprocessor, and Spark protector only support bytes converted from the string data type. Data corruption might occur when:
- Any other data type is directly converted to bytes and passed as input to a MapReduce or Spark API that accepts byte input and returns byte output.
- Any other data type is directly converted to bytes and inserted into an HBase table that is configured with the Protegrity HBase coprocessor (see the sketch after these notes).
*3 – If you are using the Char tokenization UDFs in Hive, then ensure that the data elements have length preservation selected. Using data elements without length preservation selected is not supported with the Char tokenization UDFs.
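As an illustration of note *2 for HBase, the following sketch builds an HBase Put whose cell values are bytes converted from strings, which is the only supported form for columns protected by the coprocessor. The table layout (column family `cf`, qualifiers `city` and `zip`) is hypothetical.

```java
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative sketch only: cell values must be bytes derived from strings
// when the table is configured with the Protegrity HBase coprocessor.
public class HBasePutSketch {
    public static void main(String[] args) {
        Put put = new Put(Bytes.toBytes("row-1"));

        // Supported: bytes converted from the string representation of the value.
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("city"), Bytes.toBytes("BRIDGEWATER"));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("zip"), Bytes.toBytes(String.valueOf(2324)));

        // Risky: the raw 4-byte encoding of an int is not a string encoding and
        // may be corrupted if the column is protected by the coprocessor.
        // put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("zip"), Bytes.toBytes(2324));
    }
}
```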
For more information about Big Data protectors, refer to Big Data Protector.
Data Warehouse Protector
The Protegrity Data Warehouse Protector is an advanced security solution designed to protect sensitive data at the column level. This enables you to secure your data while still permitting access for authorized users. Additionally, the Data Warehouse Protector integrates seamlessly with existing database systems using User-Defined Functions (UDFs) for enhanced security. Protegrity protects data inside data warehouses using various tokenization and encryption methods.
The following table shows the supported input data types for the Teradata protector with the Upper-case Alpha token.
Table: Supported input data types for Data Warehouse protectors with Upper-case Alpha token
| Data Warehouse Protectors | Teradata |
|---|---|
| Supported input data types | VARCHAR LATIN |
For more information about Data Warehouse protectors, refer to Data Warehouse Protector.
Database Protectors
The following table shows supported input data types for Database protectors with the Upper-case Alpha token.
Table: Supported input data types for Database protectors with Upper-case Alpha token
| Protector | Oracle | MSSQL |
|---|---|---|
| Supported input data types | VARCHAR2, CHAR | VARCHAR, CHAR |
For more information about Database protectors, refer to Database Protectors.