Preparing the environment
1 - Setting up for the Bootstrap Installer
The procedures mentioned in this section are applicable only for the Bootstrap installer approach to prepare the environment for the Big Data Protector.
1.1 - Verifying the prerequisites
The content mentioned in this section is applicable only for the Bootstrap approach to install the Big Data Protector.
Ensure that the following prerequisites are met before installing the Big Data Protector on an Amazon EMR cluster:
- It is recommended that you are familiar with the following:
  - The Amazon EMR environment
  - The storage bucket used to store the Big Data Protector installation files
  - The Bootstrap Action used to invoke the installation of the Big Data Protector
  - Amazon Virtual Private Cloud (VPC)
- An ESA appliance v10.x.x is installed and running.
- An S3 bucket is available to copy the Big Data Protector installation files, which are created using the Configurator script. For more information about creating an S3 bucket, refer to the Amazon documentation.
- The following table lists the ports that must be configured on ESA and on the nodes in the cluster that run the Big Data Protector:
| Destination Port No. | Protocol | Source | Destination | Description |
| --- | --- | --- | --- | --- |
| 8443 | TCP | RPAgent on the Big Data Protector cluster node | ESA | The RPAgent communicates with ESA through port 8443 to download a Policy. |
| 9200 | TCP | Log Forwarder on the Big Data Protector cluster node | Protegrity Audit Store appliance | The Log Forwarder sends all the logs to the Protegrity Audit Store appliance through port 9200. |
| 15780 | TCP | Protector on the Big Data Protector cluster node | Log Forwarder on the Big Data Protector cluster node | The Big Data Protector writes Audit Logs to localhost through port 15780. The RPAgent Application Logs are also written to localhost through port 15780. The Log Forwarder reads the logs from that socket. |
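Before installing, it can help to confirm that a cluster node can actually reach the ports listed above. The following Bash sketch is an illustration only: the hostname `esa.example.com` is an assumption, so substitute your ESA and Audit Store hosts.

```shell
#!/usr/bin/env bash
# Probe whether a TCP port is reachable, using Bash's /dev/tcp pseudo-device.
# Succeeds (exit 0) if a connection can be opened within 3 seconds.
check_port() {
  local host="$1" port="$2"
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable:     ${host}:${port}"
  else
    echo "NOT reachable: ${host}:${port}" >&2
    return 1
  fi
}

# Illustrative probes; replace esa.example.com with your ESA hostname.
# '|| true' keeps the sketch from aborting when a port is unreachable.
check_port esa.example.com 8443 || true   # RPAgent -> ESA (policy download)
check_port esa.example.com 9200 || true   # Log Forwarder -> Audit Store
```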
1.2 - Extracting the Big Data Protector Package
The steps mentioned in this section are applicable only for the Bootstrap approach to install the Big Data Protector.
After receiving the Big Data Protector installation package from Protegrity, copy it to any Amazon EC2 instance or any node that has connectivity to ESA.
After downloading the Big Data Protector package, extract it to:
- Access the Configurator script.
- Install the Big Data Protector on all the nodes in an Amazon EMR cluster.
To extract the Configurator script from the installation package:
1. Log in to the CLI on a machine or an Amazon EC2 node that has connectivity to ESA.
2. Copy the Big Data Protector package `BigDataProtector_Linux-ALL-64_x86-64_EMR-<EMR_version>-64_<BDP_version>.tgz` to any directory. For example, `/opt/protegrity/`.
3. To extract the contents of the package, run the following command, and then press ENTER:
   ```shell
   tar -xvf BigDataProtector_Linux-ALL-64_x86-64_EMR-<EMR_version>-64_<BDP_version>.tgz
   ```
   The command extracts the installer package and the signature files:
   ```
   BigDataProtector_Linux-ALL-64_x86-64_EMR-<EMR_version>-64_<BDP_version>.tgz
   signatures/
   signatures/BigDataProtector_Linux-ALL-64_x86-64_EMR-<EMR_version>-64_<BDP_version>.tgz_<BDP_version>.sig
   ```
4. Verify the authenticity of the build using the `signatures` folder. For more information, refer to Verification of Signed Protector Build.
5. To extract the configurator script, run the following command, and then press ENTER:
   ```shell
   tar -xvf BigDataProtector_Linux-ALL-64_x86-64_EMR-<EMR_version>-64_<BDP_version>.tgz
   ```
   The command extracts the configurator script:
   ```
   BDP_Configurator_EMR-<EMR_version>_<BDP_version>.sh
   ```
1.3 - Executing the Configurator Script
The steps mentioned in this section are applicable only for the Bootstrap approach to install the Big Data Protector.
Execute the configurator script to create the installation files for installing the Big Data Protector on an Amazon EMR cluster. You can install the Big Data Protector on an Amazon EMR cluster using any one of the following methods:
- New EMR cluster: The configurator script will:
- Download the certificates and key encryption files from ESA.
- Create the Big Data Protector installation files for a new EMR cluster.
- Create the bootstrap installer and classpath configurator script for a new EMR cluster.
- Copy the Big Data Protector installation files, bootstrap installer, and the classpath configurator script to the S3 bucket.
- Existing EMR cluster: The configurator script will generate the installation package to install the Big Data Protector on an existing EMR cluster.
To execute the configurator script:
1. Log in to the staging environment.
2. Navigate to the directory that contains the `BDP_Configurator_EMR-<EMR_version>_<BDP_version>.sh` script.
3. To execute the configurator script, run the following command, and then press ENTER:
   ```shell
   ./BDP_Configurator_EMR-<EMR_version>_<BDP_version>.sh
   ```
The prompt to continue the installation of the Big Data Protector appears:
```
***********************************************************************
      Welcome to the Big Data Protector Configurator Wizard
***********************************************************************
This will create the Big Data Protector Installation files for AWS EMR.
Do you want to continue? [yes or no]:
```
To continue, type `yes` and press ENTER.
The prompt to create the Big Data Protector installation package, depending on the EMR cluster, appears:
```
Protegrity Big Data Protector Configurator started...
Enter the EMR cluster for which the Big Data Protector installation package needs to be created:
[ 1 ] : New EMR Cluster
[ 2 ] : Existing EMR cluster
[ 1 or 2 ]:
```
Depending on your requirement, select any one of the following options:
- To create the Big Data Protector installation package for a new EMR cluster, type `1`.
- To generate the Big Data Protector installation package, in a local directory, for an existing EMR cluster, type `2`. For more information about installing the Big Data Protector on an existing EMR cluster, refer to Using the Static Installer.

The steps that follow describe the new EMR cluster option: type `1` and press ENTER.
The prompt to enter the S3 URI to upload the Big Data Protector installation files appears:
```
Generating Big Data Protector for a new EMR cluster......
Enter the S3 URI where the BDP Installation files are to be uploaded. (E.g. s3://examplebucket/folder):
```
Type the path of the S3 storage bucket and press ENTER.

Note: Ensure that the path of the S3 storage bucket is in the following format: `s3://<bucket_name>/<folder_in_the_bucket>`, where:
- `<bucket_name>` specifies the name of the storage bucket.
- `<folder_in_the_bucket>` specifies the directory within the bucket.
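A malformed bucket path only surfaces later, when the upload fails, so it can be worth checking the URI shape up front. The following Bash sketch is not part of the configurator; the regular expression is an assumption based on the format above and common S3 bucket naming rules.

```shell
#!/usr/bin/env bash
# Check that a string matches s3://<bucket_name>/<folder_in_the_bucket>.
# Bucket names are assumed to be 3-63 characters of lowercase letters,
# digits, dots, and hyphens; the folder part must be non-empty.
validate_s3_uri() {
  local uri="$1"
  [[ "$uri" =~ ^s3://[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]/.+$ ]]
}

validate_s3_uri "s3://examplebucket/folder" && echo "looks valid"
validate_s3_uri "examplebucket/folder" || echo "rejected: missing s3:// scheme"
```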
The prompt to either upload the installation files to the S3 bucket or generate them locally appears:
```
Choose one option among the following for BDP Installation files:
[1] -> Upload files to 's3://<bucket_name>/<folder_in_the_bucket>' S3 URI.
[2] -> Generate files locally to current working directory.
       (You would have to manually upload the files to the specified S3 URI)
[ 1 or 2 ]:
```
To upload the installation files to the S3 storage bucket, type `1` and press ENTER.
The prompt to select the type of AWS access keys appears:
```
Choose the Type of AWS Access Keys from the following options:
[1] -> IAM User Access Keys (Permanent access key id & secret access key)
[2] -> Temporary Security Credentials (Temporary access key id, secret access key & session token)
[ 1 or 2 ]:
```
Depending on the type of AWS access keys you want to use, type `1` or `2`, and then press ENTER. For example, to use temporary security credentials, type `2`.
The prompt to enter the access key ID appears:
```
Enter the Access Key ID:
```
Enter the access key ID and press ENTER.

The prompt to enter the secret access key appears:
```
Enter the Secret Access Key:
```
Enter the secret access key and press ENTER.

The prompt to enter the security session token appears:
```
Enter the Security Session Token:
```
Enter the security session token and press ENTER.
The prompt to enter the ESA hostname or IP address appears:
```
Enter the ESA Hostname/IP Address:
```
Enter the hostname or the IP address of ESA and press ENTER.

The prompt to enter the listening port for ESA appears:
```
Enter ESA host listening port [8443]:
```
Enter the listening port for ESA and press ENTER. Alternatively, to use the default listening port, press ENTER.
The prompt to enter the JSON Web Token (JWT) appears:
```
If you have an existing ESA JSON Web Token (JWT) with Export Certificates role, enter it otherwise enter 'no':
```
Enter the JWT and press ENTER.
The prompt to select the audit store type appears:
```
Select the Audit Store type where Log Forwarder(s) should send logs to.
[ 1 ] : Protegrity Audit Store
[ 2 ] : External Audit Store
[ 3 ] : Protegrity Audit Store + External Audit Store
Enter the no.:
```
Depending on the audit store type, select any one of the following options, and then press ENTER:

| Option | Description |
| --- | --- |
| 1 | To use the default setting with the Protegrity Audit Store appliance, type `1`. The default Fluent Bit configuration files are used, and Fluent Bit forwards the logs to the Protegrity Audit Store appliances. |
| 2 | To use an external audit store, type `2`. The default Fluent Bit configuration files used for the Protegrity Audit Store (`out.conf` and `upstream.cfg` in the `/opt/protegrity/fluent-bit/data/config.d/` directory) are renamed (`out.conf.bkp` and `upstream.cfg.bkp`) so that they are not used by Fluent Bit. Additionally, the custom Fluent Bit configuration files for the external audit store are copied to the `/opt/protegrity/fluent-bit/data/config.d/` directory. |
| 3 | To use the Protegrity Audit Store together with an external audit store, type `3`. The default Fluent Bit configuration files used for the Protegrity Audit Store (`out.conf` and `upstream.cfg` in the `/opt/protegrity/fluent-bit/data/config.d/` directory) are not renamed. However, the custom Fluent Bit configuration files for the external audit store are copied to the `/opt/protegrity/fluent-bit/data/config.d/` directory. |
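The file handling described for option 2 can be pictured with a small Bash function. This is an illustration of the documented behavior, not the configurator's actual code; the directory arguments are supplied by the caller, and the usage line shows hypothetical paths.

```shell
#!/usr/bin/env bash
# Sketch of the option 2 behavior: set aside the default Protegrity Audit
# Store outputs with a .bkp suffix so Fluent Bit ignores them, then copy in
# the custom configuration files for the external audit store.
disable_default_outputs() {
  local config_dir="$1" custom_dir="$2"
  for f in out.conf upstream.cfg; do
    if [ -f "${config_dir}/${f}" ]; then
      mv "${config_dir}/${f}" "${config_dir}/${f}.bkp"
    fi
  done
  cp "${custom_dir}"/* "${config_dir}/"
}

# Hypothetical usage (option 3 would skip the renaming loop):
# disable_default_outputs /opt/protegrity/fluent-bit/data/config.d /tmp/custom_fluentbit
```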
The prompt to enter the comma-separated list of hostnames or IP addresses appears:
```
Enter comma-separated list of Hostnames/IP Addresses and/or Ports of Protegrity Audit Store.
Allowed Syntax: hostname[:port][,hostname[:port],hostname[:port]...]
(Default Value - <ESA_IP_Address>:9200)
Enter the list:
```
Enter the comma-separated hostnames or IP addresses, with optional ports, in the allowed syntax, and then press ENTER.
The prompt to enter the local directory path that stores the custom Fluent Bit configuration files appears:
```
Enter the local directory path on this node that stores the custom Fluent-Bit configuration files for External Audit Store:
```
Note: The configurator script displays this prompt only if you select option `2` or `3` in the audit store type step. In that case, the custom configuration files are copied to the `/<installation_directory>/fluent-bit/data/config.d/` directory during the execution of the bootstrap script on the EMR nodes.

Enter the local directory path that stores the custom Fluent Bit configuration files, and then press ENTER.
The prompt to generate the application logs for the RPAgent appears:
```
Do you want RPAgent's log to be generated in a file? [yes or no]:
```
To generate the logs in a file, type `yes` and press ENTER.
The script generates the installation files and uploads them to the specified S3 bucket:
```
RPAgent's log will be generated in a file.
************************************************************************************
Welcome to the RPAgent Setup Wizard.
************************************************************************************
Unpacking...................
Extracting files...
Unpacked rpagent compressed file...
Temporarily setting up rpagent directory structure on current node...
Unpacking...
Extracting files...
Downloading certificates from <ESA_IP_Address>:8443...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11264  100 11264    0     0   163k      0 --:--:-- --:--:-- --:--:--  164k
Extracting certificates...
Certificates successfully downloaded and stored in /<installation_dir>/rpagent/data
Protegrity RPAgent installed in /<installation_dir>/rpagent.
Retrieving the S3 bucket's AWS Region via AWS S3 REST API...
Successfully retrieved S3 bucket's AWS region: <AWS_region_name>
Started Uploading the generated installation files via AWS S3 REST API......
Uploading bdp_bootstrap_installer.sh to the S3 bucket.
File uploaded to s3://<bucket_name>/<folder_in_the_bucket>/bdp_bootstrap_installer.sh
Uploading bdp_classpath_configurator.py to the S3 bucket.
File uploaded to s3://<bucket_name>/<folder_in_the_bucket>/bdp_classpath_configurator.py
Uploading BigDataProtector_Linux-ALL-64_x86-64_EMR-<EMR_version>-64_<BDP_version>.tgz to the S3 bucket.
File uploaded to s3://<bucket_name>/<folder_in_the_bucket>/BigDataProtector_Linux-ALL-64_x86-64_EMR-<EMR_version>-64_<BDP_version>.tgz
Successfully Uploaded BigDataProtector_Linux-ALL-64_x86-64_EMR-<EMR_version>-64_<BDP_version>.tgz, bdp_bootstrap_installer.sh, bdp_classpath_configurator.py to S3 bucket 's3://<bucket_name>/<folder_in_the_bucket>'
Successfully Generated installation files at ./Installation_Files/ directory.
Successfully configured Big Data Protector for a new EMR cluster..
```
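Once the configurator has uploaded the files, a new EMR cluster can invoke the installation through a Bootstrap Action that points at the uploaded `bdp_bootstrap_installer.sh`. The following AWS CLI invocation is only an illustrative sketch: the release label, application list, instance type and count, and key pair name are placeholder assumptions, and your account's IAM roles, subnet, and security settings must be supplied as appropriate.

```shell
# Illustrative only: launch a new EMR cluster whose Bootstrap Action runs the
# uploaded Big Data Protector bootstrap installer. All values are placeholders.
aws emr create-cluster \
  --name "bdp-emr-cluster" \
  --release-label emr-<EMR_version> \
  --applications Name=Hadoop Name=Hive Name=Spark \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --ec2-attributes KeyName=<key_pair_name> \
  --bootstrap-actions Name="Install Big Data Protector",Path="s3://<bucket_name>/<folder_in_the_bucket>/bdp_bootstrap_installer.sh"
```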
2 - Setting up for the Static Installer
The procedures mentioned in this section are applicable only for the Static installer approach to prepare the environment for the Big Data Protector.
2.1 - Verifying the prerequisites for Static Installer
The content mentioned in this section is applicable only for the Static installer approach to install the Big Data Protector.
Ensure that the following prerequisites are met before installing the Big Data Protector:
- The EMR cluster is installed, configured, and running.
- The ESA v10.0.x instance is installed, configured, and running.
- The static installer for EMR uses utilities, such as pssh (parallel ssh) and pscp (parallel scp), which require Python to be installed on the Primary node. To verify whether Python is installed on the Primary node, run the following command:
  ```shell
  /usr/bin/env python --version
  ```
  The command returns the version of Python installed on the system. If Python is not detected on the Primary node, then ensure that a compatible version of Python (preferably Python 3.x) is installed, and that the utilities are able to detect it using the following command:
  ```shell
  /usr/bin/env python
  ```
- A `sudoer` user account with privileges to perform the following tasks:
  - Update the system by modifying the configuration, permissions, or ownership of directories and files.
  - Perform third-party configuration.
  - Create directories and files.
  - Modify the permissions and ownership of the created directories and files.
  - Set the required permissions on the created directories and files for the Protegrity Service Account.
  - Use the SSH service.
- The following user accounts are present to perform the required tasks:
  - `ADMINISTRATOR_USER`: The `sudoer` user account that is responsible for installing and uninstalling the Big Data Protector on the cluster. This user account must have `sudo` access to install the product.
  - `EXECUTOR_USER`: A user that has ownership of all Protegrity files, directories, and services.
  - `OPERATOR_USER`: A user that is responsible for performing tasks, such as starting or stopping tasks, monitoring services, updating the configuration, and maintaining the cluster while the Big Data Protector is installed on it. If you want to start, stop, or restart the Protegrity services, then this user requires `sudoer` privileges to impersonate the `EXECUTOR_USER`.

  Depending on the requirements, a single user on the system may perform multiple roles. If a single user is performing multiple roles, then ensure that the following conditions are met:
  - The user has the required permissions and privileges to impersonate the other user accounts, for performing their roles, and to perform tasks as the impersonated user.
  - The user is assigned the highest set of privileges from the required roles that it needs to perform. For example, if a single user is performing tasks as `ADMINISTRATOR_USER`, `EXECUTOR_USER`, and `OPERATOR_USER`, then ensure that the user is assigned the privileges of the `ADMINISTRATOR_USER`.
- A private key file (.pem file) for the `sudoer` user, which is used for enabling key-based authentication and for communicating with all the nodes in the EMR cluster, is present on the Master node.
- As key-based authentication for the `sudoer` user is required for installing and using the Big Data Protector on the EMR cluster, ensure that the `ADMINISTRATOR_USER` or the `OPERATOR_USER` has the value of the `NOPASSWD` parameter set to `ALL` in the sudoers file.
- The management scripts provided by the installer in the `cluster_utils` directory should be run only by the user (`OPERATOR_USER`) that has privileges to impersonate the `EXECUTOR_USER`.
- If the value of the `AUTOCREATE_PROTEGRITY_IT_USR` parameter in the `BDP.config` file is set to `No`, then ensure that a service group containing a user for running the Protegrity services already exists on all the nodes in the cluster.
- If the Hadoop cluster is configured with AD or LDAP for user management, then ensure that the `AUTOCREATE_PROTEGRITY_IT_USR` parameter in the `BDP.config` file is set to `No` and that the required service account user is created on all the nodes in the cluster.
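When `AUTOCREATE_PROTEGRITY_IT_USR` is set to `No`, the service group and service user must already exist on every node before the installer runs. The following is a hedged sketch of creating them manually; the names `protegrity_grp` and `protegrity_usr` are illustrative examples, not installer defaults, so substitute the values from your `BDP.config`.

```shell
# Run on every node in the cluster; requires sudo privileges.
sudo groupadd protegrity_grp
sudo useradd -g protegrity_grp -m protegrity_usr

# Confirm the service account exists before starting the installation.
id protegrity_usr
```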
The table lists the ports required for the EMR cluster.
| Destination Port No. | Protocol | Source | Destination | Description |
| --- | --- | --- | --- | --- |
| 8443 | TCP | RPAgent on the Big Data Protector cluster node | ESA | The RPAgent communicates with ESA through port 8443 to download a Policy. |
| 9200 | TCP | Log Forwarder on the Big Data Protector cluster node | Protegrity Audit Store appliance | The Log Forwarder sends all the logs to the Protegrity Audit Store appliance through port 9200. |
| 15780 | TCP | Protector on the Big Data Protector cluster node | Log Forwarder on the Big Data Protector cluster node | The Big Data Protector writes Audit Logs to localhost through port 15780. The RPAgent Application Logs are also written to localhost through port 15780. The Log Forwarder reads the logs from that socket. |
2.2 - Extracting the Installation Package
The steps mentioned in this section are applicable only for the Static installer approach to install the Big Data Protector.
To extract the files from the installation package:
1. Ensure that the installation package `BigDataProtector_Linux-ALL-64_x86-64_EMR-<emr_version>-64_<BDP_version>.tgz` is copied to the Master node on the EMR cluster in any temporary directory, such as `/opt/protegrity/`.
2. To extract the files from the installation package, run the following command, and then press ENTER:
   ```shell
   tar -xvf BigDataProtector_Linux-ALL-64_x86-64_EMR-<emr_version>-64_<BDP_version>.tgz
   ```
   The command extracts the following files:
   ```
   uninstall.sh
   ptyLogAnalyzer.sh
   ptyLog_Consolidator.sh
   PepHbaseProtector<HBase_version>Setup_Linux_emr-<emr_version>_<BDP_version>.sh
   bdp_classpath_deconfigurator.py
   PepSpark<Spark_version>Setup_Linux_emr-<emr_version>_<BDP_version>.sh
   JcoreLiteSetup_Linux_x64_<JcoreLite_version>.gadcc.release-<BDP_version>.sh
   PepPig<pig_version>Setup_Linux_emr-<emr_version>_<BDP_version>.sh
   bdp_common/
   bdp_common/bdp.properties.template
   bdp_common/config.ini.template
   Logforwarder_Setup_Linux_x64_<core_version>.sh
   node_uninstall.sh
   bdp_classpath_configurator.py
   RPAgent_Setup_Linux_x64_<core_version>.sh
   PepMapreduce<MapReduce_version>Setup_Linux_emr-<emr_version>_<BDP_version>.sh
   PepHive<Hive_version>Setup_Linux_emr-<emr_version>_<BDP_version>.sh
   BDP.config
   BdpInstallx.x.x_Linux_<BDP_version>.sh
   ```
2.3 - Updating the BDP.Config File
The steps mentioned in this section are applicable only for the Static Installer approach to install the Big Data Protector.
Note: Ensure that the `BDP.config` file is updated before the Big Data Protector is installed. Do not update the `BDP.config` file while the installation of the Big Data Protector is in progress.
To update the BDP.config file:
1. Create a `hosts` file containing the IP addresses of all the nodes in the cluster, except the Lead node, and specify it in the `BDP.config` file. The installation script uses this file to install the Big Data Protector on the nodes.
2. Open the `BDP.config` file in any text editor and modify the following parameter values:
   - `HADOOP_DIR`: The installation home directory for the Hadoop distribution.
   - `PROTEGRITY_DIR`: The directory where the Big Data Protector will be installed. The examples used in this document assume that the Big Data Protector is installed in the `/opt/protegrity/` directory.
   - `CLUSTERLIST_FILE`: The file that contains the host names or IP addresses of all the nodes in the cluster, except the Lead node, listing one host name or IP address per line. Ensure that you specify the file name with the complete path.
   - `SPARK_PROTECTOR`: Specifies one of the following values, as required:
     - `Yes`: Install the Spark protector. Set this value if you want to run Hive UDFs with Spark SQL, or use the Spark protector samples if the `INSTALL_DEMO` parameter is set to `Yes`.
     - `No`: Skip installing the Spark protector.
   - `AUTOCREATE_PROTEGRITY_IT_USR`: Determines the Protegrity service account. If this parameter is set to `Yes`, then the service group and service user specified in the `PROTEGRITY_IT_USR_GROUP` and `PROTEGRITY_IT_USR` parameters, respectively, are created. Specify one of the following values, as required:
     - `Yes`: Instructs the installer to create the service group `PROTEGRITY_IT_USR_GROUP` containing the user `PROTEGRITY_IT_USR` for executing the Protegrity services on all the nodes in the cluster. If the service group or service user is already present, then the installer exits. If you uninstall the Big Data Protector, then the service group and the service user are deleted.
     - `No`: Instructs the installer to skip creating the service group `PROTEGRITY_IT_USR_GROUP` with the service user `PROTEGRITY_IT_USR` for executing the Protegrity services on all the nodes in the cluster.
   - `PROTEGRITY_IT_USR_GROUP`: The service group required for running the Protegrity services on all the nodes in the cluster. All the Protegrity installation directories are owned by this service group.
   - `PROTEGRITY_IT_USR`: The service account user required for running the Protegrity services on all the nodes in the cluster; it is a part of the group `PROTEGRITY_IT_USR_GROUP`. All the Protegrity installation directories are owned by this service user.
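Putting the parameters above together, a filled-in `BDP.config` might look like the following sketch. Every value here is an illustrative assumption for a typical layout, not a shipped default:

```
HADOOP_DIR=/usr/lib/hadoop
PROTEGRITY_DIR=/opt/protegrity
CLUSTERLIST_FILE=/opt/protegrity/hosts
SPARK_PROTECTOR=Yes
AUTOCREATE_PROTEGRITY_IT_USR=Yes
PROTEGRITY_IT_USR_GROUP=protegrity_grp
PROTEGRITY_IT_USR=protegrity_usr
```

The `hosts` file referenced by `CLUSTERLIST_FILE` would then contain one host name or IP address per line (for example, `10.0.1.11` on the first line and `10.0.1.12` on the second), excluding the Lead node.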