Tunnels
The Tunnels tab lets you define the DSG inbound communication channels.
Changes made to tunnels require a cluster restart to take effect. You can either use the bundled default tunnels or create a tunnel based on your requirements.
The Tunnels tab is as seen in the following figure.
The following table provides the description of the columns available on the Web UI.
1 Name - Unique tunnel name.
2 Description - Description of the port supported by the tunnel.
3 Protocol - Protocol type that the tunnel supports. The available Type values are HTTP, S3, SMTP, SFTP, NFS, and CIFS.
4 Enabled - Status of the tunnel. Displays true if the tunnel is enabled.
5 Start without service - Select to start the tunnel if no service is configured or if no services are enabled.
6 Interface - IP address through which sensitive data enters the DSG. The available Listening Address options are as follows:
- ethMNG: The management interface on which the Web UI is accessible.
- ethSRV0: The service interface for communicating with an untrusted service.
- 127.0.0.1: The local loopback adapter.
- 0.0.0.0: The wildcard address for listening on all the available network interfaces over all IP addresses.
- Other: Manually add a listening address based on your requirements.
Note: The service interface, ethSRV0, listens on port 443. If you want to stop this interface from listening on this port, then edit the default_443 tunnel and disable it.
7 Port - Port linked to the listening address.
8 Certificate - Certificate applicable to a tunnel.
9 Deploy to All Nodes - Deploy the configurations to all the DSG nodes in the cluster. Deploy can also be performed from the Cluster tab or the Ruleset screen. In a scenario where an ESA and two DSG nodes are in a cluster, you can use the Selective Tunnel Loading functionality to load specific tunnel configurations on specific DSG nodes.
Click Deploy to All Nodes to push specific tunnel configurations from an ESA to specific DSG nodes in a cluster.
The following figure illustrates the actions for the Tunnels screen.
The following table provides the available actions:
1 Create Tunnel - Create a tunnel configuration as per your requirements.
2 Edit - Edit an existing tunnel configuration.
3 Delete - Delete an existing tunnel configuration.
1 - Manage a Tunnel
From the Tunnels tab, a tunnel can be created, edited, or deleted.
Create a tunnel
You can create tunnels for custom ports that are not predefined in the DSG using the Create Tunnel option in the Tunnels tab. The Create Tunnel screen is as seen in the following figure.
The following table provides the description for each option available on the UI.
Callout | Column/Textbox | Description |
---|---|---|
1 | Name | Name of the tunnel. |
2 | Tunnel ID | Unique ID of the tunnel. |
3 | Description | Description of the port supported by the tunnel. |
4 | Enabled | Select to enable the tunnel. The check box is selected by default. Clear the check box to disable the tunnel. |
5 | Start without service | Select to start the tunnel if no service is configured or if no services are enabled. |
6 | Protocol | Protocol type supported by the tunnel. |
The following types of tunnels can be created.
- HTTP
- SFTP
- SMTP
- Amazon S3
- CIFS/NFS
Edit a tunnel
Edit an existing tunnel configuration using the Edit option in the Tunnels tab. The Edit Tunnel screen is as seen in the following figure.
After editing the required field, click Update to save your changes.
Delete a tunnel
Delete an existing tunnel using the Delete option in the Tunnels tab. The Delete Tunnel screen is shown in the following figure.
The following table provides the description for each option available on the UI.
Callout | Column/Textbox/Button | Description |
---|---|---|
1 | Cancel | Cancel the process of deleting a tunnel. |
2 | Delete | Delete the existing tunnel from the Tunnels tab. |
2 - Amazon S3 Tunnel
About S3 tunnel fields.
Amazon Simple Storage Service (S3) is an online file storage web service. It lets you manage files through browser-based access as well as web services APIs. In DSG, the S3 tunnel is used to communicate with Amazon S3 cloud storage over the Amazon S3 REST API. The higher-layer S3 Service object, which sits above the tunnel object, configured at the RuleSet level is used to process file contents retrieved from S3.
A sample S3 tunnel configuration is shown in the following figure.
Amazon S3 uses buckets to store data, and data is classified as objects. Each object is identified with a unique key ID. Consider an example where john.doe is the bucket and incoming is a folder under the john.doe bucket. Assume that files landing in the incoming folder must be picked up and processed by the DSG nodes. The data pulled from the AWS online storage is available in the incoming folder under the source bucket. The Amazon S3 Service is used to perform data security operations on this data in the source bucket.
Note: The DSG supports four levels of nested folders in an Amazon S3 bucket.
After the rules are executed, the processed data may be stored in a separate bucket (for example, the folder named outgoing under the same john.doe bucket), which is the target bucket. When the DSG nodes poll AWS for an uploaded file, whichever node accesses the file first places a lock on the file. You can specify whether the lock files must be stored in a separate bucket or under the source bucket. If the file is locked, the other DSG nodes will stop trying to access the file.
If the data operation on a locked file fails, the lock file can be viewed for detailed log and error information. The lock files are automatically deleted if the processing completes successfully.
Consider the scenario where an incoming bucket contains two directories Folder1 and Folder2.
The DSG allows multiprocessing of files that are placed in the bucket. A lock file is created for every file processed. In this scenario, the lock files are created as follows:
- If the abc.csv file of Folder1 is processed, the lock file is created as Folder1.abc.csv.<hostname>.<Process ID>.lock.
- If the pqr.csv file of Folder2 is processed, the lock file is created as Folder2.pqr.csv.<hostname>.<Process ID>.lock.
Consider the following figure where files are nested in the S3 bucket.
The lock files are created as follows:
- If the abc.csv file of Folder1 is processed, the lock file is created as Folder1.abc.csv.<hostname>.<Process ID>.lock.
- If the pqr.csv file of Folder2 is processed, the lock file is created as Folder1.Folder2.pqr.csv.<hostname>.<Process ID>.lock.
- If the abc.csv file of Folder3 is processed, the lock file is created as Folder1.Folder2.Folder3.abc.csv.<hostname>.<Process ID>.lock.
To discontinue multiprocessing of files, remove the enhanced-lock-filename flag from the features.json file available under System > Files on the DSG Web UI.
The following image illustrates the options available for an S3 tunnel.
The options specific to the S3 Protocol type are described as follows:
Bucket list settings
1 Source Bucket Name: Bucket name as defined in AWS where the files that need to be processed are available.
2. Source File Name Pattern: Regex pattern for the filenames to be processed. For example, .csv.
Rename Processed Files: Regex logic for renaming processed files.
3. Match Pattern: Regex pattern to match in the original source file name.
4. Replace Value: Value to append or name that will be used to rename the original source file based on the pattern provided and grouping defined in regex logic.
5. Overwrite Target Object: Select to overwrite a file in the bucket with a newly processed file of the same name. Refer to Amazon S3 Object.
6. Lock Files Bucket: Name of the lock files folder, if you want the lock files to be stored in a separate bucket. If not defined, lock files are placed in the source bucket.
7. Interval: Time in seconds at which the DSG node polls AWS to pull files. The default value is 5. You can also specify a cron expression to schedule polling; refer to the Cron documentation. If you use the cron expression "* * * * *", the DSG polls AWS at the minimum interval of one minute.
AWS Settings
8. AWS Access Key Id: Access key ID used to make secure protocol requests to an AWS service API. Refer to the Amazon Web Services documentation.
9. AWS Secret Access Key: Secret access key related to the access key ID. The access key ID and secret access key are used together to sign requests to AWS and provide access to resources. Refer to the Amazon Web Services documentation.
10. AWS Endpoint URL: Specify the endpoint URL if the storage endpoint is not Amazon S3. Configure this parameter only if the DSG connects to an endpoint other than Amazon S3, such as an on-premise S3 store or a Google Cloud Storage bucket. If not defined, the DSG connects to Amazon S3.
11. Path to CA Bundle: Specify the path to the CA bundle if the endpoint is not Amazon S3. If an on-premise S3 store is installed with a self-signed certificate, specify the path to its CA bundle in this parameter. If the endpoint URL is Amazon S3, the default SSL certificate is used to connect to the S3 bucket.
Advanced Settings
12. Advanced settings: Set additional advanced options for tunnel configuration, if required, in the form of JSON in the following textbox. In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes.
The following advanced settings can be configured for the S3 protocol. An example follows the table.
Options | Description |
---|---|
SSECustomerAlgorithm | If server-side encryption with a customer-provided encryption key was requested, the response will include this header confirming the encryption algorithm used. |
SSECustomerKey | Constructs a new customer-provided server-side encryption key. |
SSECustomerKeyMD5 | If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide round-trip message integrity verification of the customer-provided encryption key. |
ACL | Allows controlling the ownership of uploaded objects in an S3 bucket. For example, if the ACL (Access Control List) is set to "bucket-owner-full-control", new objects uploaded by other AWS accounts are owned by the bucket owner. By default, the objects uploaded by other AWS accounts are owned by them. |
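For illustration only, the following sketch shows how a couple of these options might be entered in the Advanced Settings textbox. The option names are taken from the table above; the values, including the placeholder key, are assumptions for readability rather than documented defaults.
{
    "ACL": "bucket-owner-full-control",
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": "<base64-encoded 256-bit key>"
}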
Using S3 tunnel to access files on Google Cloud Storage
Similar to data in AWS buckets, data stored on Google Cloud Storage can also be protected. You can use the S3 tunnel to access files on the GCP storage. The incoming and processed files must be placed in the same storage bucket in separate folders. For example, a storage bucket named john.doe contains a folder named incoming that holds the files to be picked up and processed by the DSG nodes. This acts as the source bucket. After the rules are executed, the processed data is stored in the processed folder. Ensure that the following points are considered:
- AWS Endpoint URL contains the URL of the Google Cloud storage.
- AWS Access Key ID and AWS Secret Access Key contain the Google Cloud HMAC key's access ID and secret, respectively.
Refer to Google docs for information about Access ID and HMAC keys.
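As a hypothetical illustration, shown as a JSON fragment only for readability (these are UI field values, not a documented JSON schema), the relevant tunnel fields for Google Cloud Storage might look as follows, assuming the standard GCS S3-interoperability endpoint:
{
    "AWS Endpoint URL": "https://storage.googleapis.com",
    "AWS Access Key Id": "<GCS HMAC access ID>",
    "AWS Secret Access Key": "<GCS HMAC secret>",
    "Source Bucket Name": "john.doe"
}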
3 - HTTP Tunnel
HTTP tunnel configurations.
Based on the protocol selected, the dependent fields in the Tunnel screen vary. The following image illustrates the settings that are specific to the HTTP protocol.
The options for the Inbound Transport Settings field in the Tunnel Details screen specific to the HTTP Protocol type are described in the following table.
Network Settings
1 Listening Interface: IP address through which sensitive data enters the DSG. The following Listening Interface options are available:
- ethMNG: The management interface on which the DSG Web UI is accessible.
- ethSRV0: The service interface for communicating with an untrusted service.
- 127.0.0.1: The local loopback adapter.
- 0.0.0.0: The wildcard address for listening on all the available network interfaces.
- Other: Manually add a listening address based on your requirements.
2 Port: Port linked to the listening address.
TLS/SSL Security Settings
3 TLS Enabled: Select to enable TLS features.
4 Certificate: Certificate applicable for a tunnel.
5 Cipher Suites: Colon-separated list of ciphers.
6 TLS Mutual Authentication: CERT_NONE is selected by default. Use CERT_OPTIONAL to validate a client certificate if one is provided, or CERT_REQUIRED to process a request only if a client certificate is provided. If TLS mutual authentication is set to CERT_OPTIONAL or CERT_REQUIRED, then the CA certificate must be provided.
7 CA Certificates: A CA certificate chain. This option is applicable only if the client certificate value is set to 1 (optional) or 2 (required). Client certificates can be requested at the tunnel and the RuleSet level for authentication. On the Tunnels screen, you can configure the ca_reqs parameter in the Inbound Transport Settings field to request the client certificate. Similarly, on the Ruleset screen, you can toggle the Required Client Certificate checkbox to enable or disable client certificates. Based on the combination of the options in the tunnel and the RuleSet, the server executes the transaction. If the certificate is incorrect or not provided, then the server returns a 401 error response.
The following table explains the combinations for the client certificate at the tunnel and the RuleSet level.
TLS Mutual Authentication (Tunnel Screen) | Required Client Certificate (Enabled/Disabled) (Ruleset Screen) | Result |
---|---|---|
CERT_NONE | Disabled | The transaction is executed. |
CERT_NONE | Enabled | The server returns a 401 error response. |
CERT_OPTIONAL | Disabled | The transaction is executed. |
CERT_OPTIONAL | Enabled | If the client certificate is provided, then the transaction is executed. If the client certificate is not provided, then the server returns a 401 error response. |
CERT_REQUIRED | Disabled | The transaction is executed. |
CERT_REQUIRED | Enabled | The transaction is executed. |
8 DH Parameters: The .pem filename that includes the DH parameters. Upload the .pem file from the Certificate/Key Material screen. The Diffie-Hellman (DH) parameters define the way OpenSSL performs the DH Key exchange.
9 ECDH Curve Name: Supported curve names for the ECDH key exchange. The Elliptic curve Diffie–Hellman (ECDH) protocol allows key agreement and leverages elliptic-curve cryptography (ECC) properties for enhanced security.
10 Certificate Revoke List: Path of the Certificate Revocation List (CRL) file. For more information about the error message that appears when a revoked certificate is sent, refer to the CRL error. The ca.crl.pem file includes a list of certificates that are revoked. Based on the flags that you provide in the verify_flags setting, SSL identifies the certificate verification operations that need to be performed. The CRL verification operations can be VERIFY_CRL_CHECK_LEAF or VERIFY_CRL_CHECK_CHAIN. When you try to access the DSG through HTTPS using a revoked certificate, the DSG returns an error message.
11 Verify Flags: Set to one of the following operations to verify the CRL:
- VERIFY_DEFAULT
- VERIFY_X509_TRUSTED_FIRST
- VERIFY_CRL_CHECK_LEAF
- VERIFY_CRL_CHECK_CHAIN
The certificates are checked against the CRL file only for inbound connections to the DSG node.
12 SSL Options: Set the required flags that reflect the TLS behavior at runtime. A single flag or multiple flags can be used. It is used to define the supported SSL options in the JSON format. The DSG supports TLS v1.2. For example, in the following JSON, TLSv1 and TLSv1.1 are disabled.
{
    "options": [
        "OP_NO_SSLv2",
        "OP_NO_SSLv3",
        "OP_NO_TLSv1",
        "OP_NO_TLSv1_1"
    ]
}
13 Advanced Settings: Set additional advanced options for tunnel configuration, if required, in the form of JSON. In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes. The following advanced settings can be configured for the HTTP protocol; an example follows the table.
Options | Description | Default (if any) | Notes |
---|---|---|---|
idle_connection_timeout | Timeout set for an idle connection. The datatype for this option is seconds. | 3600 | |
max_buffer_size | Maximum value of incoming data to a buffer. The datatype for this option is bytes. | 10240000 | |
max_write_buffer_size | Maximum value of outgoing data to a buffer. The datatype for this option is bytes. | 10240000 | This parameter is applicable only with REST streaming. |
no_keep_alive | If set to TRUE, then the connection closes after one request. | false | |
decompress_request | Decompress the gzip request body. | false | |
chunk_size | Bytes to read at one time from the underlying transport. The datatype for this option is bytes. | 16384 | |
max_header_size | Maximum bytes for HTTP headers. The datatype for this option is bytes. | 65536 | |
body_timeout | Timeout set for wait time when reading the request body. The datatype for this option is seconds. | | |
max_body_size | Maximum bytes for the HTTP request body. The datatype for this option is bytes. | 4194304 | Though the DSG allows configuring the maximum body size, the response body size will differ and cannot be configured on the DSG. The response body size that the gateway sends to the HTTP client depends on multiple factors, such as the complexity of the rule, the transform rule configured in case you use regex replace, the size of the response received from the destination, and so on. If a request is sent to the client with a response body size greater than the value configured in the DSG, then a 400 Bad Request response is returned and the DSG closes the connection. In earlier versions of the DSG, the DSG closed the connection and sent 200 as the response code. |
max_streaming_body_size | Maximum bytes for the HTTP request body when HTTP streaming with REST is enabled. The datatype for this option is bytes. | 52428800 | |
maximumBytes | | | This field is not supported for the DSG 3.0.0.0 release and will be supported in a later DSG release. |
maximumRequests | | | This field is not supported for the DSG 3.0.0.0 release and will be supported in a later DSG release. |
thresholdDelta | | | This field is not supported for the DSG 3.0.0.0 release and will be supported in a later DSG release. |
write_cache_memory_size | For an HTTP blocking client sending a REST streaming request, the DSG processes the request and tries to send the response back. If the client type is blocking, then the DSG stores the response in memory until the write_cache_memory_size limit is reached. The DSG then starts writing to the disk. The file size is managed using the write_cache_disk_size parameter. The value for this setting is defined in bytes. | Min: 10485760, Default: 52428800, Max: 104857600 | |
write_cache_disk_size | Set the file size that holds the response after the write_cache_memory_size limit is reached while processing the REST streaming request sent by an HTTP blocking client. After the write_cache_disk_size limit is reached, the DSG starts writing to the disk. The data on the disk always exists in an encrypted format and the disk cache file is discarded after the response is sent. The value for this setting is defined in bytes. | Min: 52428800, Default: 104857600, Max: 314572800 | |
additional_http_methods | Include additional HTTP methods, such as PURGE, LINK, LINE, UNLINK, and so on. | | |
cookie_attributes | Add a new HTTP cookie attribute to the list of cookie attributes that the DSG accepts. | ["expires", "path", "domain", "secure", "httponly", "max-age", "version", "comment", "priority", "samesite"] | |
compress_response | Compresses the response sent to the client if the client supports gzip encoding, that is, sends Accept-Encoding: gzip. | false | |
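For illustration only, a minimal Advanced Settings sketch using a few of the options above might look as follows. The values are arbitrary examples rather than recommendations, and the list form of additional_http_methods is an assumption.
{
    "idle_connection_timeout": 1800,
    "no_keep_alive": false,
    "additional_http_methods": ["PURGE"],
    "compress_response": true
}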
Generating ECDSA certificate and key
The dh_params parameter points to a .pem file. The .pem file includes the DH parameters that are required to enable DH key exchange for improved protection without compromising computational resources required at each end. The value accepted by this field is the file name with the extension (.pem). The DSG supports both RSA certificates and Elliptic Curve Digital Signature Algorithm (ECDSA) certificates for the ECDHE protocol. The RSA certificates are available as default when the DSG is installed, while to use ECDSA certificates in the DSG, you must generate an ECDSA certificate and the related key. The following procedure explains how to generate the ECDSA certificate and key.
To generate the dhparam.pem file:
1. Set the SSL options in the Inbound Transport settings as given in the following example.
- DH Parameters: /opt/protegrity/alliance/config/dhparam/dhparam.pem
- ECDH Curve Name: prime256v1
- SSL Options: OP_NO_COMPRESSION
2. From the ESA CLI Manager, navigate to Administration > OS Console.
3. Execute the following command to generate the dhparam.pem file.
openssl dhparam -out /opt/protegrity/alliance/config/dhparam/dhparam.pem 2048
Note: Ensure that you create the dhparam directory in the given path. The path /opt/protegrity/alliance/config/dhparam is the location where you want to save the .pem file. The value 2048 is the key size.
4. Execute the following command to generate the key.
openssl genpkey -paramfile dhparam.pem -out dhkey.pem
The ecdh_curve_name parameter is the curve type that is required for the key exchange. The OpenSSL curves that are supported by DSG are listed in Supported OpenSSL Curve Names.
You can configure additional inbound settings that apply to HTTP from the Global Settings page on the DSG Web UI.
4 - SFTP Tunnel
Configure the SFTP tunnel.
Based on the protocol selected, the dependent fields in the Tunnel screen vary. The following image illustrates the settings specific to the SFTP protocol.
The options specific to the SFTP Protocol type are described in the following table.
Callout | Column/Textbox/Button | Subgroup | Description | Notes |
---|---|---|---|---|
| Network Settings | | | |
1 | | Listening Interface* | IP address through which sensitive data enters the DSG. | |
2 | | Port | Port linked to the listening address. | |
| SSH Transport Security Options | | SFTP-specific security options that are mandatory. Select a paired server host key or provide the key path. | |
3 | | Server Host Key Filename | Paired server host public key, uploaded through the Certificate/Key Material screen, that enables SFTP authentication. If the key includes an extension, such as *.key, enter the key name with the extension. For files that are not uploaded to the resources directory, you must provide the absolute path along with the key name. | The DSG only accepts private keys that are not passphrase encrypted. |
4 | Advanced Settings** | | Set additional advanced options for tunnel configuration, if required, in the form of JSON. | In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes. |
* The following Listening Interface options are available:
- ethMNG: The management interface on which the Web UI is accessible.
- ethSRV0: The service interface for communicating with an untrusted service.
- 127.0.0.1: The local loopback adapter.
- 0.0.0.0: The wildcard address for listening on all the available network interfaces over all IP addresses.
- Other: Manually add a listening address based on your requirements.
** The following advanced settings can be configured for the SFTP protocol.
Options | Description | Default (if any) |
---|---|---|
idle_connection_timeout | Timeout set for an idle connection. The datatype for this option is seconds. | 30 |
default_window_size | SSH transport window size. | 2097152 |
default_max_packet_size | Maximum packet transmission in the network. The datatype for this option is bytes. | 32768 |
use_compression | Toggle SSH compression. | True |
ciphers | List of supported ciphers. | 'aes128-ctr', 'aes256-ctr', '3des-cbc' |
kex | Key exchange algorithms. | 'diffie-hellman-group14-sha1', 'diffie-hellman-group-exchange-sha1' |
digests | List of supported hash algorithms used in authentication. | 'hmac-sha1' |
The following snippet describes the example format for the SFTP Advanced settings:
{
"idle_connection_timeout": 30,
"default_window_size": 2097152,
"default_max_packet_size": 32768,
"use_compression": true,
"ciphers": [
"aes128-ctr",
"aes256-ctr",
"3des-cbc"
],
"kex": [
"diffie-hellman-group14-sha1",
"diffie-hellman-group-exchange-sha1"
],
"digests": [
"hmac-sha1"
]
}
5 - SMTP Tunnel
Configure SMTP tunnel.
The DSG can perform data security operations on the sensitive data sent by a Simple Mail Transfer Protocol (SMTP) client before the data reaches the destination SMTP server.
SMTP is the Internet standard protocol for sending email. When an email is sent, it travels from an SMTP client to an SMTP server. For example, if an email is sent from john.doe@xyz.com to jane.smith@abc.com, the email first reaches xyz's SMTP server, then reaches abc's SMTP server, before it finally reaches the recipient, jane.smith@abc.com.
The DSG intercepts the communication between the SMTP client and server and performs data security operations on sensitive data. Sensitive data residing in email elements, such as the subject, body, attachments, file names, and so on, is supported for the SMTP protocol.
When the DSG is used as an SMTP gateway, the Rulesets must use the SMTP service and the first child Extract rule must be SMTP Message.
The following image illustrates how the SMTP protocol is handled in the DSG. Consider an example where john.doe@xyz.com is sending an email to jane.smith@xyz.com. The xyz SMTP server is the same for the sender and the recipient.
The sender, john.doe@xyz.com, sends an email to the recipient, jane.smith@xyz.com. The Subject of the email contains sensitive data that must be protected before it reaches the recipient.
The DSG is configured with an SMTP tunnel such that it listens for incoming requests on the listening ports. The DSG is also configured with Rulesets such that an Extract rule extracts the Subject from the request. The Extract rule also defines a regex that extracts the sensitive data and passes it to the Transform rule. The Transform rule performs data security operations on the sensitive data.
The DSG forwards the email with the protected data in the Subject to the SMTP server.
The recipient SMTP client polls the SMTP server for any emails. The email is received and the sensitive data in the Subject appears protected.
The following image illustrates the settings specific to the SMTP protocol.
The options specific to the SMTP Tunnel are described in the following table.
Callout | Column/Textbox/Button | Description | Notes |
---|---|---|---|
| Network Settings | | |
1 | Listening Interface* | Enter the service IP of the DSG, where the DSG listens for the incoming SMTP requests. | |
2 | Port | Port linked to the listening address. | |
| Security Settings for SMTP | | |
3 | Certificate | Server-side Public Key Infrastructure (PKI) certificate to enable TLS/SSL security. | |
4 | Cipher Suites | Semi-colon separated list of ciphers. | |
5 | Advanced Settings** | Set additional advanced options for tunnel configuration, if required, in the form of JSON. | In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes. |
The ssl_options supported for the SMTP Tunnel are described in the following table.
Options | Description | Default |
---|---|---|
cert_reqs | Specifies whether a certificate is required for validating the SSL connection between the SMTP client and the DSG. The following values can be configured. CERT_NONE: If the parameter is set to CERT_NONE, then the SMTP client certificate is not required for validating the SSL connection between the SMTP client and the DSG. CERT_OPTIONAL: If the parameter is set to CERT_OPTIONAL, then the SMTP client certificate is not required for validating the SSL connection between the SMTP client and the DSG; the SMTP client certificate is validated only if it is provided. CERT_REQUIRED: If the parameter is set to CERT_REQUIRED, then the SMTP client certificate is required for validating the SSL connection between the SMTP client and the DSG. | CERT_NONE |
ssl_version | Specifies the SSL protocol version used for establishing the SSL connection between the SMTP client and the DSG. | PROTOCOL_SSLv23 |
ca_certs | Path where the CA certificates (in PEM format only) are stored. | n/a |
* The following Listening Interface options are available:
- ethMNG: The management interface on which the DSG Web UI is accessible.
- ethSRV0: The service interface where the DSG listens for the incoming SMTP requests.
- 127.0.0.1: The local loopback adapter.
- 0.0.0.0: The wildcard address for listening on all the available network interfaces.
- Other: Manually add a listening address based on your requirements.
** The following advanced settings can be configured for the SMTP protocol. An example follows the table.
Options | Description | Default (if any) |
---|---|---|
idle_connection_timeout | Timeout set for an idle connection. The datatype for this option is seconds. | 30 |
default_window_size | SSH Transport window size | 2097152 |
default_max_packet_size | Maximum packet transmission in the network. The datatype for this option is bytes. | 32768 |
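For illustration only, the following sketch combines the SMTP advanced settings with the ssl_options described above. The nesting of ssl_options inside the Advanced Settings JSON and the example values, including the CA certificate path, are assumptions for readability rather than documented defaults.
{
    "idle_connection_timeout": 30,
    "ssl_options": {
        "cert_reqs": "CERT_REQUIRED",
        "ssl_version": "PROTOCOL_SSLv23",
        "ca_certs": "/path/to/ca_certs.pem"
    }
}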
6 - NFS/CIFS
The Network File System (NFS) enables users to store and access data from storage points such as disks and directories over a shared network. The Common Internet File System (CIFS) is a file sharing protocol for Windows OS-based systems.
Though the files are accessed remotely, the behavior is the same as when files are accessed locally. The NFS file system follows a client/server model, where the server is responsible for authentication and permissions, while the client accesses data through the local disk systems.
A sample NFS/CIFS tunnel configuration is shown in the following figure.
Note: The Address format for an NFS tunnel is <ip address/hostname>:<mount_path> and for a CIFS tunnel is \\<ip address or hostname>\<share_path>.
Consider an example NFS/CIFS server with a folder structure that includes the folders /input, /output, and /lock. When a client accesses the NFS/CIFS server, the files are stored in the input folder. The Mounted File System Out-of-Band Service is used to perform data security operations on the files in the /input folder. A source file is processed only when a corresponding trigger file is created and found in the /input folder.
Note: Ensure that the trigger file time stamp is greater than or equal to the source file time stamp.
After the rules are executed, the processed files can be stored in a separate folder, such as /output in this example. When the DSG nodes poll the NFS/CIFS server for an uploaded file, whichever node accesses the file first places a lock on the file. You can specify whether the lock files must be stored in a separate folder, such as /lock, or under the source folder. If the file is locked, the other DSG nodes will stop trying to access the file.
If the data operation on a locked file fails, the lock file can be viewed for detailed log and error information. The lock files are automatically deleted if the processing completes successfully.
6.1 - NFS/CIFS
The Network File System (NFS) enables users to store and access data from storage points such as disks and directories over a shared network.
The options for the NFS tunnel are illustrated in the following figure.
Mount settings
1 Mount Point - The Address format for an NFS tunnel is <IP address/hostname>:<mount_path>
2 Input Directory - The mount tunnel forwards the files present in this directory for further processing. This directory structure will be defined in the NFS/CIFS share.
3 Source File Name Pattern - Regex logic for identifying the source files that must be processed.
4 Overwrite Target File - Select to overwrite a file in the share with a newly processed file of the same name.
Rename processed files
Regex logic for renaming original source files after processed files are generated.
5 Match Pattern - Exact pattern to match and filter the file.
6 Replace Value - Value to append or name that will be used to rename the original source file based on the pattern provided and grouping defined in regex logic.
Trigger File
File that triggers the rule. The rule is triggered only if this file exists in the input directory. Files in the NFS/CIFS share directory are not processed until the trigger criteria are met. Ensure that the trigger file is sent only after the files that need to be processed are placed in the source directory. After the trigger file is placed, you must touch the trigger file.
7 Trigger File Name Pattern - Identifier that will be appended to each source file to create a trigger control file. For example, for a source file abc.csv with the identifier defined as %.ctl, you must create a trigger file abc.csv.ctl to ensure that the source file is processed.
It is mandatory to provide a trigger file for each source file to ensure that it is processed. Files without a corresponding trigger file will not be processed.
The *, [, and ] characters are not accepted as part of the trigger file pattern.
8 Delete Trigger File - Enable to delete the trigger file after the source file is processed.
9 Lock Files Directory - Directory where the lock files will be stored. If this value is not provided as per the directory structure in the NFS/CIFS share, then the lock files will be stored in the mount point. Ensure that the lock directory name does not include spaces; the DSG will not process files under a lock directory whose name includes spaces.
10 Error Files directory - Files that fail to process are moved to this directory. The lock files generated for such files are also moved to this directory.
For example, the file is moved from the /input directory to the /error directory.
11 Error Files Extension - Extension that will be appended to each error file. If you do not specify an extension, then the .err extension will be used.
Mount Options
Parameters that will be used to mount the share.
12 Mount Type - Specify Soft if you want the mount point to report an error when the server remains unreachable after the wait time crosses the Mount Timeout value. If you select Hard, ensure that the Interrupt Timeout checkbox is selected.
13 Mount Timeout - Number in seconds after which an error is reported. Default value is 60.
14 Options - Additional NFS options that can be provided as inbound settings. If the lock directory is not defined, then the lock files are automatically placed in the /input directory. For example, {"port": 1234, "nolock": true, "nfsvers": 3}. To enable enhanced security for the mounted share, it is recommended that the following options are set (see the example after this list):
- noexec: Disallow execution of executable binaries on the mounted file system.
- nosuid: Disallow creation of set user id files on the file system.
- nodev: Disallow mounting of special devices, such as, USB, printers, etc.
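The following is a hedged sketch only; the exact value syntax for flag-style options such as noexec, nosuid, and nodev is not shown in this guide and is assumed here.
{
    "nfsvers": 3,
    "noexec": true,
    "nosuid": true,
    "nodev": true
}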
15 Advanced Settings - Set additional advanced options for tunnel configuration, if required, in the form of JSON in the Advanced Settings textbox. For example, {"interval": 5, "fileChunkSize": 4096}. In a scenario where an ESA and two DSG nodes are in a cluster, by using the Selective Tunnel Loading functionality, you can load specific tunnel configurations on specific DSG nodes.
Note: Ensure that the NFS share options are configured in the exports configuration file for each mount that the DSG will access. The all_squash option must be set to specify the anonuid and anongid with the user ID and group ID of the non-root user, respectively. This prevents the DSG from changing user and group permissions of the mount directories on the NFS server.