
Posts Tagged ‘cloudera’

BDR Between Kerberos-Enabled Environments: Enabling Replication Between Clusters with Kerberos Authentication

April 23, 2018

To enable replication between clusters, additional setup steps are required to ensure that the source and destination clusters can communicate.

Note: If either the source or destination cluster is running Cloudera Manager 4.6 or higher, then both clusters (source and destination) must be running 4.6 or higher. For example, cross-realm authentication does not work if one cluster is running Cloudera Manager 4.5.x and one is running Cloudera Manager 4.6 or higher.

Continue reading:

  • Considerations for Realm Names
  • Configuration
  • Configuring a Peer Relationship

Considerations for Realm Names 

If the source and destination clusters each use Kerberos for authentication, use one of the following configurations to prevent conflicts when running replication jobs:

  • If the clusters do not use the same KDC (Kerberos Key Distribution Center), Cloudera recommends that you use different realm names for each cluster.
  • You can use the same realm name if the clusters use the same KDC or different KDCs that are part of a unified realm, for example where one KDC is the master and the other is a slave KDC.
  • Note: If you have multiple clusters that are used to segregate production and non-production environments, this configuration could result in principals that have equal permissions in both environments. Make sure that permissions are set appropriately for each type of environment.

 

Important: If the source and destination clusters are in the same realm but do not use the same KDC or the KDCs are not part of a unified realm, the replication job will fail.

Configuration 

 

  1. On the hosts in the source and destination clusters, ensure that the krb5.conf file (typically located at /etc/krb5.conf) on each host has the following information:
    • The KDC information for the source cluster’s Kerberos realm. For example:

 

[realms]
INTBDA.BIL.COM = {
  kdc = <–KDC__MASTER__NODE–>:88
  kdc = <–KDC__SLAVE__NODE–>:88
  admin_server = <–KDC__MASTER__NODE–>:749
  default_domain = bnet.luxds.net
}
DEVBDA.BIL.COM = {
  kdc = <–KDC__MASTER__NODE–>:88
  kdc = <–KDC__SLAVE__NODE–>:88
  admin_server = <–KDC__MASTER__NODE–>:749
  default_domain = bnet.luxds.net
}

  • Domain/host-to-realm mapping for the source cluster NameNode hosts. You configure these mappings in the [domain_realm] section. For example, to map two realms named SRC.MYCO.COM and DEST.MYCO.COM to the domains of hosts named hostname.src.myco.com and hostname.dest.myco.com, make the following mappings in the krb5.conf file:

[domain_realm]
.src.myco.com = SRC.MYCO.COM
src.myco.com = SRC.MYCO.COM
.dest.myco.com = DEST.MYCO.COM
dest.myco.com = DEST.MYCO.COM

 

BE CAREFUL!!!

But in my case, the scenario is completely different. Since the domain names all end with bnet.luxds.net, the mapping cannot be used directly as given. If we use it, we will face the error below:

PriviledgedActionException as:hdfs/<–HOST_NAME–>@DEVBDA.BIL.COM (auth:KERBEROS) cause:java.io.IOException: java.lang.IllegalArgumentException:

Server has invalid Kerberos principal:

hdfs/<–HOST_NAME–>@INTBDA.BIL.COM, expecting: hdfs/<–HOST_NAME–>@DEVBDA.BIL.COM

To handle this problem, we have to change the domain_realm as below:

From:

[domain_realm]
.bnet.luxds.net = INTBDA.BIL.COM
bnet.luxds.net = INTBDA.BIL.COM

To (also adding the other cluster’s information; it should contain all hosts that we have in the given environment):

[domain_realm]
NODE_1_CLUSTER_1 = INTBDA.BIL.COM
NODE_2_CLUSTER_1 = INTBDA.BIL.COM
NODE_3_CLUSTER_1 = INTBDA.BIL.COM
NODE_4_CLUSTER_1 = INTBDA.BIL.COM
NODE_5_CLUSTER_1 = INTBDA.BIL.COM
NODE_6_CLUSTER_1 = INTBDA.BIL.COM
NODE_1_CLUSTER_2 = DEVBDA.BIL.COM
NODE_2_CLUSTER_2 = DEVBDA.BIL.COM
NODE_3_CLUSTER_2 = DEVBDA.BIL.COM

 

I have to arrange domain_realm as above on both clusters.

Trust Creation

addprinc krbtgt/INTBDA.BIL.COM@DEVBDA.BIL.COM on the DEV KDC
addprinc krbtgt/DEVBDA.BIL.COM@INTBDA.BIL.COM on the INT KDC
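
A sketch of creating those principals with kadmin.local on each KDC (the -pw placeholder is illustrative; for a working cross-realm trust, each krbtgt pair must be created with the same password and kvno on both KDCs):

# on the DEV KDC
kadmin.local -q "addprinc -pw <trust_password> krbtgt/INTBDA.BIL.COM@DEVBDA.BIL.COM"
# on the INT KDC
kadmin.local -q "addprinc -pw <trust_password> krbtgt/DEVBDA.BIL.COM@INTBDA.BIL.COM"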

 

With these two principals in place, I will be able to reach both clusters.
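
Once the trust and the domain_realm mappings are in place, a quick cross-realm check (a sketch; the NameNode host and port 8020 are assumptions for your environment) is to take a ticket in one realm and list the other cluster’s HDFS:

kinit hdfs@DEVBDA.BIL.COM
hadoop fs -ls hdfs://<NODE_1_CLUSTER_1>:8020/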

Add dfs.namenode.kerberos.principal.pattern parameter to all clusters

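This parameter relaxes the client-side check that produced the “Server has invalid Kerberos principal” error above. A sketch of the entry as it would go into the HDFS advanced configuration snippet (safety valve) for hdfs-site.xml; the wildcard value is the common choice, but tighten it if your environment allows:

<property>
  <name>dfs.namenode.kerberos.principal.pattern</name>
  <value>*</value>
</property>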

  1. On the destination cluster, use Cloudera Manager to add the realm of the source cluster to the Trusted Kerberos Realms configuration property:
    1. Go to the HDFS service.
    2. Click the Configuration tab.
    3. In the search field, type “Trusted Kerberos” to find the Trusted Kerberos Realms property.
    4. Enter the source cluster realm.
    5. Click Save Changes to commit the changes.

 

Trusted Realm Addition on HDFS


 

Configuring a Peer Relationship 

  1. Go to the Peers page by selecting Administration > Peers. The Peers page displays. If there are no existing peers, you will see only an Add Peer button in addition to a short message. If you have existing peers, they are listed in the Peers list.
  2. Click the Add Peer button.
  3. In the Add Peer pop-up, provide a name, the URL (including the port) of the Cloudera Manager Server that will act as the source for the data to be replicated, and the login credentials for that server. Important: The role assigned to the login on the source server must be either a User Administrator or a Full Administrator. Cloudera recommends that SSL be used, and a warning is shown if the URL scheme is http instead of https. Once both peers have been configured to use SSL/TLS, add the remote source Cloudera Manager’s SSL certificate to the local Cloudera Manager truststore, and vice versa.
  4. Click the Add Peer button in the pop-up to create the peer relationship. The peer is added to the Peers list.
  5. To test the connectivity between your Cloudera Manager Server and the peer, select Actions > Test Connectivity.
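
You can also sanity-check the peer URL and credentials from the command line before adding it (a sketch; the host and port are placeholders, and /api/version simply returns the highest API version the peer’s Cloudera Manager supports):

curl -k -u admin:<password> https://<peer-cm-host>:7183/api/version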



Disable all SSL certificates and go back to the initial state

April 23, 2018


1. All steps are done as ‘root’ user.

2. If you have passwordless ssh setup on all nodes you can run dcli on any node, otherwise run the dcli commands on Node 1.

3. When you get to the point of restarting the CM server, do that on the node with the CM role (Node 3 by default).

4. Make sure to run the regenerate script on Node 3.
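
Before starting, it is worth a quick sanity check that passwordless ssh and dcli work across the cluster (a sketch; any harmless command will do):

# dcli -C hostname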

 

1. On Node 1, back up the existing security directory:

# dcli -C "cp -r -p /opt/cloudera/security /opt/cloudera/security.BAK_`date +%d%b%Y%H%M%S`"

 

2. Verify there is a backed up file:
# dcli -C ls -ltrd /opt/cloudera/security*

 

3. Execute the script to renew the default certificates:

*********Perform all steps as ‘root’ user on Node 3*****************

a) Download and copy the regenerate.sh script to the node with the Cloudera
Manager role; this is Node 3 by default.

You can download it to any directory. For example /tmp.

 

b) Give execute permissions to the script.

# chmod a+x /tmp/regenerate.sh

#########################################################################################################################
# The script should not be used for renewing a user’s self-signed certificates. It renews only the BDA default certificates. #
#########################################################################################################################

#!/usr/bin/bash -x
# Renews the BDA default self-signed certificates on every cluster node.
export CMUSR="admin"
if [[ -z $CMPWD ]]; then
export CMPWD="$1"
if [[ -z $CMPWD ]]; then
echo "INFO: Since no CM password was given nothing can be done"
exit 0
fi
fi
# Read the current keystore/truststore paths and passwords from bdacli
key_loc=`bdacli getinfo cluster_https_keystore_path`
key_password=`bdacli getinfo cluster_https_keystore_password`
trust_password=`bdacli getinfo cluster_https_truststore_password`
trust_loc=`bdacli getinfo cluster_https_truststore_path`
firstnode=(`json-select --jpx="MAMMOTH_NODE" /opt/oracle/bda/install/state/config.json`)
nodenames=(`json-select --jpx="RACKS/NODE_NAMES" /opt/oracle/bda/install/state/config.json`)
for node in "${nodenames[@]}"
do
# Extract the node's private key, self-sign a new 20-year (7300-day) certificate, and re-import it
ssh $node "keytool -importkeystore -srckeystore $key_loc -destkeystore /tmp/nodetmp.p12 -deststoretype PKCS12 -srcalias \$HOSTNAME -srcstorepass $key_password -srckeypass $key_password -destkeypass $key_password -deststorepass $key_password"
ssh $node "openssl pkcs12 -in /tmp/nodetmp.p12 -nodes -nocerts -out privateKey.pem -passin pass:$key_password -passout pass:$key_password"
ssh $node 'openssl req -x509 -new -nodes -key privateKey.pem -sha256 -days 7300 -out newCert.pem -subj "/C=/ST=/L=/O=/CN=${HOSTNAME}"'
ssh $node "keytool -import -keystore $key_loc -file newCert.pem -alias \$HOSTNAME -storepass $key_password -keypass $key_password"
# Stage a copy of the new certificate on the Mammoth node
ssh $node "/usr/java/latest/bin/keytool -exportcert -keystore $key_loc -alias \$HOSTNAME -storepass $key_password -file /opt/cloudera/security/jks/node.cert"
ssh $node "scp /opt/cloudera/security/jks/node.cert root@${firstnode}:/opt/cloudera/security/jks/node_\${HOSTNAME}.cert"
ssh $node "rm -f /tmp/nodetmp.p12; rm -f privateKey.pem; rm -f newCert.pem; rm -f /opt/cloudera/security/x509/node.key; rm -f /opt/cloudera/security/x509/node.cert; rm -f /opt/cloudera/security/x509/node_*pem"
# Recreate the x509 key/cert files used by the CM agent and Hue
ssh $node "/usr/java/latest/bin/keytool -importkeystore -srckeystore $key_loc -srcstorepass $key_password -srckeypass $key_password -destkeystore /tmp/\${HOSTNAME}-keystore.p12 -deststoretype PKCS12 -srcalias \$HOSTNAME -deststorepass $key_password -destkeypass $key_password -noprompt"
ssh $node "openssl pkcs12 -in /tmp/\${HOSTNAME}-keystore.p12 -passin pass:${key_password} -nokeys -out /opt/cloudera/security/x509/node.cert"
ssh $node "openssl pkcs12 -in /tmp/\${HOSTNAME}-keystore.p12 -passin pass:${key_password} -nocerts -out /opt/cloudera/security/x509/node.key -passout pass:${key_password}"
ssh $node "openssl rsa -in /opt/cloudera/security/x509/node.key -passin pass:${key_password} -out /opt/cloudera/security/x509/node.hue.key"
ssh $node "chown hue /opt/cloudera/security/x509/node.key"
ssh $node "chown hue /opt/cloudera/security/x509/node.cert"
ssh $node "chown hue /opt/cloudera/security/x509/node.hue.key"
done

# Rebuild the truststores on the Mammoth node and push them to all nodes
create=`ls /opt/cloudera/security/jks/ | grep "create"`
ssh $firstnode "rm -f $trust_loc"
ssh $firstnode "/opt/cloudera/security/jks/./${create} $trust_password"
ssh $firstnode "/opt/cloudera/security/x509/./create_hue.truststore.pl $trust_password"
ssh $firstnode "dcli -C -f $trust_loc -d $trust_loc"
ssh $firstnode "dcli -C -f /opt/cloudera/security/x509/hue.pem -d /opt/cloudera/security/x509/hue.pem"
# Regenerate agents.pem from this node's certificate and distribute it
rm -f /opt/cloudera/security/jks/cm_key.der
rm -f /opt/cloudera/security/x509/agents.pem
/usr/java/latest/bin/keytool -exportcert -keystore $key_loc -alias $HOSTNAME -storepass $key_password -file /opt/cloudera/security/jks/cm_key.der
openssl x509 -out /opt/cloudera/security/x509/agents.pem -in /opt/cloudera/security/jks/cm_key.der -inform der
scp /opt/cloudera/security/x509/agents.pem root@${firstnode}:/opt/cloudera/security/x509/agents.pem
ssh $firstnode dcli -C -f /opt/cloudera/security/x509/agents.pem -d /opt/cloudera/security/x509/agents.pem

c) Run the script, providing the Cloudera Manager admin password as an argument:

# ./regenerate.sh <cm_password>

d) Upload the output to the SR for review.

 

4. Once script execution is complete, restart the Cloudera Manager server
and agents.

a) Stop Cloudera Manager Agents.

# dcli -C service cloudera-scm-agent stop

b) Restart Cloudera Manager server (On Node 3)

# service cloudera-scm-server restart

c) Verify with:
# service cloudera-scm-server status

d) Start Cloudera Manager Agents.
# dcli -C service cloudera-scm-agent start

e) Verify with:
# dcli -C service cloudera-scm-agent status

 

5. Make sure there are no SSL warnings in the Cloudera Manager Server logs.

/var/log/cloudera-scm-server/cloudera-scm-server.log

You can also do:
tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log

and then also upload the
/var/log/cloudera-scm-server/cloudera-scm-server.log to the SR for review.

 

6. In CM:
a) Restart the Management services and wait until they are healthy.
b) Restart the Cluster services.

 

7. Certificate validity can be checked using keytool or openssl commands.

a) with keytool
# keytool -printcert -file /opt/cloudera/security/x509/agents.pem

b) with openssl:
echo | openssl s_client -connect <fqdn.of.cloudera.manager.webui>:7183 2>/dev/null | openssl x509 -noout -subject -dates

Enable SSL over the cluster via SAN (Subject Alternative Name)

April 23, 2018

Step-by-step guide

On BDA V4.5 and higher, using certificates signed by a user’s Certificate Authority for web consoles and for Hadoop network encryption is supported. This includes use of a client’s own certificates signed by the client’s Certificate Authority instead of the default, which is to use self-signed certificates generated on the BDA.

At a high level, users update the Mammoth-installed truststore with the CA public certificate and the keystores with keys/certificates signed by the customer CA, or create a new keystore and truststore on all nodes of the BDA and point Cloudera Manager to that new location.

User-provided signed certificates are not allowed for puppet. Puppet is used internally and is not intended for direct user usage.

The recommendation when using customer-provided signed certificates with web consoles, etc., is to use the keystores/truststores provided by Mammoth and to make the minimal changes possible.

The Mammoth installed values can be viewed with:

  • bdacli getinfo cluster_https_keystore_password – Display the password of the keystore used by CM.
  • bdacli getinfo cluster_https_keystore_path – Display the path of the keystore used by CM.
  • bdacli getinfo cluster_https_truststore_password – Display the password of the truststore used by CM.
  • bdacli getinfo cluster_https_truststore_path – Display the path of the truststore used by CM.

Note: This document and all examples use the existing passwords provided in the Mammoth commands above.  There are additional changes required if the passwords are changed.

The steps presented here can be performed in a cluster where Kerberos is or is not installed.

This document requires:

1) that a user provided CA public certificate is available for use on the BDA and

2) that the user will use the BDA node specific Certificate Signing Requests to create BDA node specific signed certificates and copy them to the BDA as specified in the document.

Prerequisites for setting up user-provided certificates for web consoles and Hadoop network encryption

On the BDA Cluster

  1. Identify the server running the Hue service.

In Cloudera Manager (CM) navigate: hue > Instances

Keep track of the server.

  2. Make sure the cluster is healthy:

a) Verify with:

bdacheckcluster

[root@host_1 cloudera]# bdacheckcluster
INFO: Logging results to /tmp/bdacheckcluster_1522049448/
Enter CM admin user to run dumpcluster
Enter username (admin):
Enter CM admin password to enable check for CM services and hosts
Press ENTER twice to skip CM services and hosts checks
Enter password:
Enter password again:
SUCCESS: Mammoth configuration file is valid.
SUCCESS: hdfs is in good health
SUCCESS: zookeeper is in good health
SUCCESS: yarn is in good health
SUCCESS: oozie is in good health
SUCCESS: hive is in good health
SUCCESS: hue is in good health
SUCCESS: yarn is in good health
SUCCESS: yarn is in good health
SUCCESS: sentry is in good health
SUCCESS: flume is in good health
SUCCESS: client is in good health
SUCCESS: Cluster passed checks on all hadoop services health check
SUCCESS: c39df580-32e2-4671-b2a4-5e47574aba5b is in good health
SUCCESS: 8b04b32a-d763-4817-a8e2-832ba024d52d is in good health
SUCCESS: dc7db617-3b2a-4517-a7a0-775df21c8be1 is in good health
SUCCESS: 4f4784b7-056d-4834-bcd2-a5b050e51a00 is in good health
SUCCESS: e406d9d0-951e-499a-90dc-a97ede1db51e is in good health
SUCCESS: 43e1382a-945a-49cd-8f56-a1670f09a6ca is in good health
SUCCESS: Cluster passed checks on all hosts health check
SUCCESS: All cluster host names are pingable
INFO: Starting cluster host hardware checks
SUCCESS: All cluster hosts pass hardware checks
INFO: Starting cluster host software checks
host_5:
host_6:
host_2:
host_1:
host_4:
host_3:
SUCCESS: All cluster hosts pass software checks
SUCCESS: All ILOM hosts are pingable
SUCCESS: All client interface IPs are pingable
SUCCESS: All admin eth0 interface IPs are pingable
SUCCESS: All private Infiniband interface IPs are pingable
INFO: All PDUs are pingable
SUCCESS: All InfiniBand switches are pingable
SUCCESS: Puppet master is running on host_1-master
SUCCESS: Puppet running on all cluster hosts
SUCCESS: Cloudera SCM server is running on host_3
SUCCESS: Cloudera SCM agent running on all cluster hosts
SUCCESS: Name Node is running on host_1
SUCCESS: Secondary Name Node is running on host_2
SUCCESS: Resource Manager is running on host_3
SUCCESS: Data Nodes running on all cluster hosts
SUCCESS: Node Managers running on all cluster slave hosts
INFO: Skipping Hadoop filesystem test because the hdfs user has no Kerberos ticket.
INFO: Use this command to get a Kerberos ticket for the hdfs user :
INFO: su hdfs -c "kinit hdfs@REALM.NAME"
SUCCESS: MySQL server is running on MySQL master node host_3
SUCCESS: MySQL server is running on MySQL backup node host_2
SUCCESS: Hive Server is running on Hive server node host_4
SUCCESS: Hive metastore server is running on Hive server node host_4
SUCCESS: Dnsmasq server running on all cluster hosts
INFO: Checking local DNS resolve of public hostnames on all cluster hosts
SUCCESS: All cluster hosts resolve public hostnames to private IPs
INFO: Checking local reverse DNS resolve of private IPs on all cluster hosts
SUCCESS: All cluster hosts resolve private IPs to public hostnames
SUCCESS: 2 virtual NICs available on all cluster hosts
SUCCESS: NTP service running on all cluster hosts
SUCCESS: At least one valid NTP server accessible from all cluster servers.
SUCCESS: Max clock drift of 0 seconds is within limits
SUCCESS: Big Data Appliance cluster health checks succeeded
[root@host_1 cloudera]#

b) Make sure services are healthy in CM.

c) Verify the output from the cluster verification checks is successful on Node 1 of the cluster:

mammoth -c

[root@host_1 cloudera]# mammoth -c
INFO: Logging all actions in /opt/oracle/BDAMammoth/bdaconfig/tmp/host_1-20180326093706.log and traces in /opt/oracle/BDAMammoth/bdaconfig/tmp/host_1-20180326093706.trc
INFO: This is the install of the primary rack
INFO: Creating nodelist files…
INFO: Checking if password-less ssh is set up
INFO: Executing checkRoot.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Executed checkRoot.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
INFO: Executing checkSSHAllNodes.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Executed checkSSHAllNodes.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
INFO: Checking passwordless ssh setup to host_1
host_1 <–fqdn–>
INFO: Checking if password-less ssh is set up
INFO: Executing checkRoot.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Executed checkRoot.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
INFO: Executing checkSSHAllNodes.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Executed checkSSHAllNodes.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
INFO: Reading component versions from /opt/oracle/BDAMammoth/bdaconfig/COMPONENTS
INFO: Getting factory serial numbers
INFO: Executing getserials.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Executed getserials.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Generated bdaserials on all nodes
SUCCESS: Ran /usr/bin/scp host_1:/opt/oracle/bda/factory_serial_numbers /opt/oracle/bda/install/log/factory_serial_numbers-host_1 and it returned: RC=0
SUCCESS: Ran /usr/bin/scp host_2:/opt/oracle/bda/factory_serial_numbers /opt/oracle/bda/install/log/factory_serial_numbers-host_2 and it returned: RC=0
SUCCESS: Ran /usr/bin/scp host_3:/opt/oracle/bda/factory_serial_numbers /opt/oracle/bda/install/log/factory_serial_numbers-host_3 and it returned: RC=0
SUCCESS: Ran /usr/bin/scp host_4:/opt/oracle/bda/factory_serial_numbers /opt/oracle/bda/install/log/factory_serial_numbers-host_4 and it returned: RC=0
SUCCESS: Ran /usr/bin/scp host_5:/opt/oracle/bda/factory_serial_numbers /opt/oracle/bda/install/log/factory_serial_numbers-host_5 and it returned: RC=0
SUCCESS: Ran /usr/bin/scp host_6:/opt/oracle/bda/factory_serial_numbers /opt/oracle/bda/install/log/factory_serial_numbers-host_6 and it returned: RC=0

INFO: Executing genTestUsers.sh on nodes host_1 #Step -1#
SUCCESS: Executed genTestUsers.sh on nodes host_1 #Step -1#
SUCCESS: Successfully set up Kerberos test users.
INFO: Executing copyKeytab.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Executed copyKeytab.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Successfully copied keytabs to Mammoth node.
INFO: Executing oracleUser.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Executed oracleUser.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
INFO: Doing post-cleanup operations
INFO: Running cluster validation checks and generating install summary
Enter CM admin password to enable check for CM services and hosts
Press ENTER twice to skip CM services and hosts checks
Enter password:
Enter password again:
INFO: Password saved. Doing Cloudera Manager health checks, please wait
Which Cluster Type? Hadoop Cluster
Is Kerberos enabled? Yes
INFO: Running validation tests may take up to 30 minutes depending on the size of the cluster, please wait

HttpFS Test
—————————————————————————————-
SUCCESS: Running httpfs server test succeeded. HTTP/1.1 200 OK
INFO: Test finished in 8 seconds. Details in httpfs_test.out
SUCCESS: HttpFS Test succeeded

Hive Server 2 Test
—————————————————————————————-
INFO: HiveServer2 test – Query database/table info via Beeline
INFO: Test finished in 14 seconds. Details in hiveserver2_test.out
SUCCESS: Hive Server 2 Test succeeded

Spark Test
—————————————————————————————-
INFO: final status: SUCCEEDED
SUCCESS: Pi is roughly 3.1395511395511395
INFO: Test finished in 33 seconds. Details in spark_test.out
SUCCESS: Spark Test succeeded

Spark2 Test
—————————————————————————————-
SUCCESS: Pi is roughly 3.1424471424471423
INFO: Test finished in 33 seconds. Details in spark2_test.out
SUCCESS: Spark2 Test succeeded

Orabalancer Test
—————————————————————————————-
SUCCESS: Oracle Perfect Balance test passed
INFO: Test finished in 77 seconds. Details in balancer_test.out
SUCCESS: Orabalancer Test succeeded

WebHCat Test
—————————————————————————————-
SUCCESS: creating a hcatlog database succeeded. HTTP/1.1 200 OK
SUCCESS: creating a table succeeded. HTTP/1.1 200 OK
SUCCESS: creating a partition succeeded. HTTP/1.1 200 OK
SUCCESS: creating a colum succeeded. HTTP/1.1 200 OK
SUCCESS: creating a property succeeded. HTTP/1.1 200 OK
SUCCESS: describing hcat table succeeded. HTTP/1.1 200 OK
SUCCESS: deleting hcat table succeeded. HTTP/1.1 200 OK
SUCCESS: deleting hcat database succeeded. HTTP/1.1 200 OK
INFO: Test finished in 97 seconds. Details in webhcat_test.out
SUCCESS: WebHCat Test succeeded

Hive Metastore Test
—————————————————————————————-
INFO: Query Hive Metastore Table Passed on node host_1
INFO: Query Hive Metastore Table Passed on node host_2
INFO: Query Hive Metastore Table Passed on node host_3
INFO: Query Hive Metastore Table Passed on node host_4
INFO: Query Hive Metastore Table Passed on node host_5
INFO: Query Hive Metastore Table Passed on node host_6
INFO: Test finished in 107 seconds. Details in metastore_test.out
SUCCESS: Hive Metastore Test succeeded

Teragen-sort-validate Test
—————————————————————————————-
INFO: Test finished in 234 seconds. Details in terasort.out
SUCCESS: Teragen-sort-validate Test succeeded

Oozie Workflow Test
—————————————————————————————-
INFO: Map Reduce Job Status: OK job_1521738483661_0031 SUCCEEDED
INFO: Pig Job Status: OK job_1521738483661_0019 SUCCEEDED
INFO: Hive Job Status: OK job_1521738483661_0023 SUCCEEDED
INFO: Sqoop Job Status: OK job_1521738483661_0026 SUCCEEDED
INFO: Streaming Job Status: OK job_1521738483661_0028 SUCCEEDED
INFO: Test finished in 245 seconds. Details in ooziewf_test.out
SUCCESS: Oozie Workflow Test succeeded

BDA Cluster Check
—————————————————————————————-
host_5:
host_2:
host_6:
host_1:
host_4:
host_3:
INFO: All PDUs are pingable
SUCCESS: Big Data Appliance cluster health checks succeeded
INFO: Test finished in 212 seconds. Details in bdacheckcluster.out
SUCCESS: BDA Cluster Check succeeded
========================================================================================
TEST LOG STATUS TIME(s)
—————————————————————————————-
BDA_Cluster_Check bdacheckcluster.out SUCCESS 212
Teragen-sort-validate_Test terasort.out SUCCESS 234
Oozie_Workflow_Test ooziewf_test.out SUCCESS 245
Hive_Metastore_Test metastore_test.out SUCCESS 107
Hive_Server_2_Test hiveserver2_test.out SUCCESS 14
WebHCat_Test webhcat_test.out SUCCESS 97
HttpFS_Test httpfs_test.out SUCCESS 8
Orabalancer_Test balancer_test.out SUCCESS 77
Spark_Test spark_test.out SUCCESS 33
Spark2_Test spark2_test.out SUCCESS 33
—————————————————————————————-
Total time : 457 sec.
========================================================================================
INFO: Executing oracleUserDestroy.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
SUCCESS: Executed oracleUserDestroy.sh on nodes /opt/oracle/BDAMammoth/bdaconfig/tmp/all_nodes #Step -1#
INFO: Executing remTestUsers.sh on nodes host_1 #Step -1#
SUCCESS: Executed remTestUsers.sh on nodes host_1 #Step -1#
SUCCESS: Successfully removed Kerberos test users.
SUCCESS: Ran /bin/cp -pr /opt/oracle/BDAMammoth/bdaconfig/tmp/* /opt/oracle/bda/install/log/clusterchk/summary-20180326093820 and it returned: RC=0
SUCCESS: Ran /bin/rm -rf /opt/oracle/BDAMammoth/bdaconfig/tmp/* and it returned: RC=0
SUCCESS: Ran /bin/cp -prf /tmp/bdacheckcluster* /opt/oracle/bda/install/log/clusterchk/summary-20180326093820 and it returned: RC=0
INFO: Install summary copied to /opt/oracle/bda/install/log/clusterchk/summary-20180326093820
INFO: Time spent in post-cleanup operations was 602 seconds
========================================================================================
SUCCESS: Cluster validation checks were all successful
INFO: Please download the install summary zipfile from /tmp/<–clustername–>-install-summary.zip
========================================================================================
[root@host_1 cloudera]#

3. On Node 1 of the cluster find the existing https keystore password. Create an environment variable for it so it can be used in the steps below.
a) Get the existing https keystore password with: “bdacli getinfo cluster_https_keystore_password”.

Output looks like:

bdacli getinfo cluster_https_keystore_password
Enter the admin user for CM (press enter for admin): admin
Enter the admin password for CM:
3ZnkFUO9rYKFqu0gcusnFwgDUuvlzN3wU0UJit6CuobaVSl67QuyAJUq4WaNSzki

 

b) Create an environment variable for use during the setup:
export PW=3ZnkFUO9rYKFqu0gcusnFwgDUuvlzN3wU0UJit6CuobaVSl67QuyAJUq4WaNSzki
echo $PW
3ZnkFUO9rYKFqu0gcusnFwgDUuvlzN3wU0UJit6CuobaVSl67QuyAJUq4WaNSzki

 

4. On Node 1 of the cluster find the existing https truststore password and https truststore path. Create environment variables for each so they can be used in the steps below.

a) Get the existing https truststore password with: “bdacli getinfo cluster_https_truststore_password”.
Output looks like:

bdacli getinfo cluster_https_truststore_password
Enter the admin user for CM (press enter for admin):
Enter the admin password for CM:
FghaVNdTCkMatGgOhZITygmOzqY5IFqBKBhLUsY40IPpezLx89TmQF61CmcBKEoS

b) Create an environment variable for use during the setup:

export TPW=FghaVNdTCkMatGgOhZITygmOzqY5IFqBKBhLUsY40IPpezLx89TmQF61CmcBKEoS
echo $TPW
FghaVNdTCkMatGgOhZITygmOzqY5IFqBKBhLUsY40IPpezLx89TmQF61CmcBKEoS

c) Get the existing https truststore path with: “bdacli getinfo cluster_https_truststore_path”.

Output looks like below for a cluster name of “<–clustername–>”.

# bdacli getinfo cluster_https_truststore_path
Enter the admin user for CM (press enter for admin):
Enter the admin password for CM:
/opt/cloudera/security/jks/<–clustername–>.truststore

d) Create the corresponding environment variables.

Example based on a cluster name of “<–clustername–>”:
export TPATH=/opt/cloudera/security/jks/<–clustername–>.truststore
echo $TPATH
/opt/cloudera/security/jks/<–clustername–>.truststore

Steps to setup user provided certificates for web consoles and hadoop network encryption

Perform all steps as ‘root’ user.  On the BDA cluster perform steps on on Node 1 unless specified otherwise.  This will only be the case for the Hue service updates and for starting/stopping the Cloudera Manager server.

1. Stop Cloudera Manager (CM) services.

a) Log into Cloudera Manager as ‘admin’ user.

b) Stop the cluster services: Home > <cluster_name> dropdown > Stop

c) Stop the Cloudera Management Service: Home > mgmt dropdown > Stop

d) Stop the Cloudera Manager agents:

dcli -C service cloudera-scm-agent stop

Verify with:

dcli -C service cloudera-scm-agent status

e) Stop the Cloudera Manager server from Node 3:

service cloudera-scm-server stop

Verify with:

service cloudera-scm-server status

 

  2. Create a new /opt/cloudera/security, backing up the existing directory first (in case you need to restore anything). Do this on all nodes of the cluster.

a) Back up the existing /opt/cloudera/security on all cluster nodes. /opt/cloudera/security is the base location for security-related files.

 

dcli -C "mv /opt/cloudera/security /opt/cloudera/security.BAK_`date +%d%b%Y%H%M%S`"

 

b) Create a new /opt/cloudera/security and related sub-directories on all cluster nodes. /opt/cloudera/security/jks is the location for the Java-based keystore and truststore files used by Cloudera Manager and Java-based cluster services, and /opt/cloudera/security/x509 is the location for the openssl key, cert, and cacerts files used by the Cloudera Manager Agent and Hue.

dcli -C mkdir -p /opt/cloudera/security/jks
dcli -C mkdir -p /opt/cloudera/security/x509

 

  3. Create a staging directory on the first server and upload the “.pfx” and “.cer” files. In our case there should be one “.pfx” file (for the private keys, containing all hostnames from both the client and management networks) and two “.cer” files (one with the public keys of the given pfx file, the other the root certificate).

[root@host_1 cloudera]# cd /root/staging/
[root@host_1 staging]# ls -lrt
-rw-r--r-- 1 root root 1500 Mar 20 17:42 <–root_cer–>.cer
-rw-r--r-- 1 root root 2928 Mar 20 18:19 <–certificate–>.cer
-rw-r--r-- 1 root root 3892 Mar 21 15:58 <–certificate–>.pfx
[root@host_1 staging]#

4. Since the given pfx file was sent with a generic password and the alias “certreq-2df655dc-bb52-4442-9612-bc0622375d2f”,
before starting I have to find the alias and also change the password.

[root@host_1 ~]# keytool -list -keystore /root/staging/<–certificate–>.pfx
Enter keystore password:
***************** WARNING WARNING WARNING *****************
* The integrity of the information stored in your keystore *
* has NOT been verified! In order to verify its integrity, *
* you must provide your keystore password. *
***************** WARNING WARNING WARNING *****************
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
certreq-2df655dc-bb52-4442-9612-bc0622375d2f, Mar 20, 2018, PrivateKeyEntry,
[root@host_1 ~]#

a) To change the alias, run the command below; it will also create “/opt/cloudera/security/jks/node.jks”:

-- to import
[root@host_1 ~]# keytool -importkeystore -srckeystore /root/staging/<–certificate–>.pfx -srcstoretype pkcs12 -destkeystore /opt/cloudera/security/jks/node.jks -deststoretype JKS -alias certreq-2df655dc-bb52-4442-9612-bc0622375d2f -destalias <–alias that you want to set–>

 

-- to check
keytool -keystore /opt/cloudera/security/jks/node.jks -list
[root@host_1 ~]# keytool -keystore /opt/cloudera/security/jks/node.jks -list
Enter keystore password:
***************** WARNING WARNING WARNING *****************
* The integrity of the information stored in your keystore *
* has NOT been verified! In order to verify its integrity, *
* you must provide your keystore password. *
***************** WARNING WARNING WARNING *****************
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 1 entry
<–alias that you want to set–>, Mar 20, 2018, PrivateKeyEntry,
Certificate fingerprint (SHA1): 02:24:BC:E1:9E:29:AE:C0:7B:F9:B3:8A:86:14:45:92:55:E6:03:DB
[root@host_1 ~]#

b) To change the password, run the command below:
-- change the password for alias <–alias that you want to set–>
keytool -keypasswd -keystore /opt/cloudera/security/jks/node.jks -alias <–alias that you want to set–>

[root@host_1 ~]# keytool -keypasswd -keystore /opt/cloudera/security/jks/node.jks -alias <–alias that you want to set–>
Enter keystore password:
Enter key password for <–alias that you want to set–>
New key password for <–alias that you want to set–>:
Re-enter new key password for <–alias that you want to set–>:
[root@host_1 ~]#

5. Next, the root certificate has to be imported into the node.jks keystore we created:

-- import the <–your companys root cer–> root certificate
keytool -keystore /opt/cloudera/security/jks/node.jks -alias <–alias of root cer–> -import -file /opt/cloudera/security/jks/<–your company root cer–>.cer -storepass $PW -keypass $PW -noprompt

 

[root@host_1 ~]# keytool -keystore /opt/cloudera/security/jks/node.jks -alias <–alias of root cer–> -import -file /opt/cloudera/security/jks/<–your company root cer–>.cer -storepass $PW -keypass $PW -noprompt
Certificate was added to keystore

 

-- check
[root@host_1 ~]# keytool -keystore /opt/cloudera/security/jks/node.jks -list
Enter keystore password:
***************** WARNING WARNING WARNING *****************
* The integrity of the information stored in your keystore *
* has NOT been verified! In order to verify its integrity, *
* you must provide your keystore password. *
***************** WARNING WARNING WARNING *****************
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 2 entries
<–alias of root cer–>, Mar 20, 2018, trustedCertEntry, ----> this is coming from the root certificate
Certificate fingerprint (SHA1): 66:73:3B:4D:90:0C:F1:B1:EA:D4:76:33:F2:74:37:07:8E:3A:E8:01
<–alias that you want to set–>, Mar 20, 2018, PrivateKeyEntry, ----> this is coming from the pfx file
Certificate fingerprint (SHA1): 02:24:BC:E1:9E:29:AE:C0:7B:F9:B3:8A:86:14:45:92:55:E6:03:DB
[root@host_1 ~]#

6. After importing all required certificates, we have to align the rest of the cluster:

-- copy to the whole environment
dcli -C -f /opt/cloudera/security/jks/<–your company root cer–>.cer -d /opt/cloudera/security/jks/
dcli -C -f /opt/cloudera/security/jks/node.jks -d /opt/cloudera/security/jks/
dcli -C -f /root/staging/<–certificate–> -d /opt/cloudera/security/jks/

 

-- check the whole environment
[root@host_1 ~]# dcli -C keytool -keystore /opt/cloudera/security/jks/node.jks -list
…..
xx.xx.xx.xx: Keystore type: JKS
xx.xx.xx.xx: Keystore provider: SUN
xx.xx.xx.xx:
xx.xx.xx.xx: Your keystore contains 2 entries
xx.xx.xx.xx:
xx.xx.xx.xx: <–alias of root cer–>, Mar 20, 2018, trustedCertEntry,
xx.xx.xx.xx: Certificate fingerprint (SHA1): 66:73:3B:4D:90:0C:F1:B1:EA:D4:76:33:F2:74:37:07:8E:3A:E8:01
xx.xx.xx.xx: <–alias that you want to set–>, Mar 20, 2018, PrivateKeyEntry,
xx.xx.xx.xx: Certificate fingerprint (SHA1): 02:24:BC:E1:9E:29:AE:C0:7B:F9:B3:8A:86:14:45:92:55:E6:03:DB
..... (the same two entries are listed for each of the remaining nodes)
[root@host_1 ~]#

7. Next, we have to create the truststore as below:

-- put the root certificate into $TPATH (which is /opt/cloudera/security/jks/<–clustername–>.truststore)

[root@host_1 ~]# keytool -keystore $TPATH -alias <–alias of root cer–> -import -file /opt/cloudera/security/jks/IS4F_ROOT_CA_B64.cer
Enter keystore password:
Re-enter new password:
Owner: CN=<–your company root cer name–>, O=<–your company root cer name–>, C=BE
Issuer: CN=<–your company root cer name–>, O=<–your company root cer name–>, C=BE
Serial number: 40000000001464238d3e9
Valid from: Wed May 28 12:00:00 CEST 2014 until: Sat May 28 12:00:00 CEST 2039
Certificate fingerprints:
MD5: 84:2C:DD:A9:D4:1A:C2:25:79:60:C7:23:24:44:06:43
SHA1: 66:73:3B:4D:90:0C:F1:B1:EA:D4:76:33:F2:74:37:07:8E:3A:E8:01
SHA256: AC:6C:06:2F:05:F3:0E:66:1D:58:6F:1D:D4:14:B5:A8:D8:47:8D:A5:DA:B6:C9:AB:88:91:30:92:0B:82:4B:73
Signature algorithm name: SHA1withRSA
Version: 3
Extensions:
#1: ObjectId: 2.5.29.19 Criticality=true
BasicConstraints:[
CA:true
PathLen:1
]
#2: ObjectId: 2.5.29.32 Criticality=false
CertificatePolicies [
[CertificatePolicyId: [1.3.6.1.4.1.8162.1.3.2.10.1.0]
[PolicyQualifierInfo: [
qualifierID: 1.3.6.1.5.5.7.2.2
qualifier: 0000: 30 36 1A 34 68 74 74 70 73 3A 2F 2F 72 6F 6F 74 06.4https://root
0010: 2E 49 53 34 46 70 6B 69 73 65 72 76 69 63 65 73 .IS4Fpkiservices
0020: 2E 63 6F 6D 2F 49 53 34 46 5F 52 6F 6F 74 43 41 .com/<–alias of root cer–>
0030: 5F 43 50 53 2E 70 64 66 _CPS.pdf
]] ]
]
#3: ObjectId: 2.5.29.15 Criticality=true
KeyUsage [
Key_CertSign
Crl_Sign
]
#4: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 4B 09 C5 83 63 B9 3D 54 5C 1B 60 A2 28 F9 1A 6D K…c.=T\.`.(..m
0010: 0F F8 1F 1C ….
]
]
Trust this certificate? [no]: yes
Certificate was added to keystore

[root@ ~]#

-- copy the truststore to the whole environment
[root@host_1 ~]# dcli -C -f $TPATH -d $TPATH

-- check
[root@host_1 ~]# dcli -C ls -lrt $TPATH
192.168.11.16: -rw-r--r-- 1 root root 1113 Mar 20 17:57 /opt/cloudera/security/jks/<–clustername–>.truststore
192.168.11.17: -rw-r--r-- 1 root root 1113 Mar 20 17:57 /opt/cloudera/security/jks/<–clustername–>.truststore
192.168.11.18: -rw-r--r-- 1 root root 1113 Mar 20 17:57 /opt/cloudera/security/jks/<–clustername–>.truststore
192.168.11.19: -rw-r--r-- 1 root root 1113 Mar 20 17:57 /opt/cloudera/security/jks/<–clustername–>.truststore
192.168.11.20: -rw-r--r-- 1 root root 1113 Mar 20 17:57 /opt/cloudera/security/jks/<–clustername–>.truststore
192.168.11.21: -rw-r--r-- 1 root root 1113 Mar 20 17:57 /opt/cloudera/security/jks/<–clustername–>.truststore
[root@host_1 ~]#

8. Copy the CA public certificate file to an agents.pem file to be used by Cloudera Manager agents and Hue on each node of the cluster.

-- copy the certificate as agents.pem

dcli -C -f /opt/cloudera/security/jks/<–alias of root cer–>.cer -d /opt/cloudera/security/x509/agents.pem
[root@host_1 ~]# dcli -C -f /opt/cloudera/security/jks/<–alias of root cer–>.cer -d /opt/cloudera/security/x509/agents.pem

-- check

[root@host_1 ~]# dcli -C ls -lrt /opt/cloudera/security/x509/agents.pem
xx.xx.xx.xx: -rw-r--r-- 1 root root 1500 Mar 20 17:58 /opt/cloudera/security/x509/agents.pem
xx.xx.xx.xx: -rw-r--r-- 1 root root 1500 Mar 20 17:58 /opt/cloudera/security/x509/agents.pem
xx.xx.xx.xx: -rw-r--r-- 1 root root 1500 Mar 20 17:58 /opt/cloudera/security/x509/agents.pem
xx.xx.xx.xx: -rw-r--r-- 1 root root 1500 Mar 20 17:58 /opt/cloudera/security/x509/agents.pem
xx.xx.xx.xx: -rw-r--r-- 1 root root 1500 Mar 20 17:58 /opt/cloudera/security/x509/agents.pem
xx.xx.xx.xx: -rw-r--r-- 1 root root 1500 Mar 20 17:58 /opt/cloudera/security/x509/agents.pem
[root@host_1 ~]#
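
The Cloudera Manager agents only use agents.pem if their configuration points at it. On BDA, Mammoth normally sets this already, but it is worth confirming (a sketch; verify_cert_file is the relevant setting in the agent’s config.ini):

# dcli -C "grep verify_cert_file /etc/cloudera-scm-agent/config.ini"

Each node should report verify_cert_file=/opt/cloudera/security/x509/agents.pem.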

9. Set up the CA public certificate, ca.crt, for the Hue service. All commands in this step are run from the Hue server node.
a) ssh to the Hue node.
b) Verify /opt/cloudera/security/jks/<–your company root cer name–>.cer:
-- for HUE, go to the HUE host

[root@hue_host ~]# ls -l /opt/cloudera/security/jks/<–your company root cer name–>.cer
-rw-r--r-- 1 root root 1500 Mar 20 17:53 /opt/cloudera/security/jks/<–your company root cer name–>.cer

c) Create the link to /opt/cloudera/security/x509/hue.pem.
-- create the link for hue
[root@hue_host ~]# ln /opt/cloudera/security/jks/<–your company root cer name–>.cer /opt/cloudera/security/x509/hue.pem

-- check
[root@hue_host ~]# ls -l /opt/cloudera/security/x509/hue.pem
-rw-r--r-- 2 root root 1500 Mar 20 17:53 /opt/cloudera/security/x509/hue.pem
[root@hue_host ~]#
d) Export the existing https keystore password collected in the “Prerequisite” Section on the Hue server node. For example:

# export PW=3ZnkFUO9rYKFqu0gcusnFwgDUuvlzN3wU0UJit6CuobaVSl67QuyAJUq4WaNSzki  

e) Run the keytool commands to import the key and create the required files for HUE:
-- import the key <–alias that you want to set–> for HUE

/usr/java/latest/bin/keytool -importkeystore -srckeystore /opt/cloudera/security/jks/node.jks -srcstorepass $PW -srckeypass $PW -destkeystore /tmp/hue_host-keystore.p12 -deststoretype PKCS12 -srcalias <–alias that you want to set–> -deststorepass $PW -destkeypass $PW -noprompt

-- run the “openssl pkcs12” command:

openssl pkcs12 -in /tmp/${HOSTNAME}-keystore.p12 -passin pass:${PW} -nocerts -out /opt/cloudera/security/x509/node.key -passout pass:${PW}
[root@hue_host ~]# openssl pkcs12 -in /tmp/hue_host-keystore.p12 -passin pass:${PW} -nocerts -out /opt/cloudera/security/x509/node.key -passout pass:${PW}
MAC verified OK
[root@hue_host ~]#

-- run the “openssl rsa” command:

openssl rsa -in /opt/cloudera/security/x509/node.key -passin pass:${PW} -out /opt/cloudera/security/x509/node.hue.key
[root@hue_host ~]# openssl rsa -in /opt/cloudera/security/x509/node.key -passin pass:${PW} -out /opt/cloudera/security/x509/node.hue.key
writing RSA key
[root@hue_host ~]#

-- run the “openssl pkcs12” command again, this time for the certificate:

openssl pkcs12 -in /tmp/${HOSTNAME}-keystore.p12 -passin pass:${PW} -nokeys -out /opt/cloudera/security/x509/node.cert
[root@hue_host ~]# openssl pkcs12 -in /tmp/${HOSTNAME}-keystore.p12 -passin pass:${PW} -nokeys -out /opt/cloudera/security/x509/node.cert
MAC verified OK
[root@hue_host~]#

f) Change the owner on the files to be “hue”

 

[root@hue_host ~]# cd /opt/cloudera/security/x509/
[root@hue_host x509]# ls -l
total 20
-rw-r--r-- 1 root root 1500 Mar 20 17:58 agents.pem
-rw-r--r-- 2 root root 1500 Mar 20 17:53 hue.pem
-rw-r--r-- 1 root root 3122 Mar 20 18:08 node.cert
-rw-r--r-- 1 root root 1675 Mar 20 18:07 node.hue.key
-rw-r--r-- 1 root root 1977 Mar 20 18:06 node.key

[root@hue_host x509]#
chown hue /opt/cloudera/security/x509/node.key
chown hue /opt/cloudera/security/x509/node.cert
chown hue /opt/cloudera/security/x509/node.hue.key

[root@hue_host x509]# ls -l
total 20
-rw-r--r-- 1 root root 1500 Mar 20 17:58 agents.pem
-rw-r--r-- 2 root root 1500 Mar 20 17:53 hue.pem
-rw-r--r-- 1 hue root 3122 Mar 20 18:08 node.cert
-rw-r--r-- 1 hue root 1675 Mar 20 18:07 node.hue.key
-rw-r--r-- 1 hue root 1977 Mar 20 18:06 node.key
[root@hue_host x509]#
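
Hue reads these files through its ssl_certificate and ssl_private_key settings. On BDA these are normally already set by Mammoth; you can confirm in CM (Hue > Configuration, search for “ssl”) or on the Hue host (a sketch; the live hue.ini path under the agent process directory is an assumption and can vary by release):

# grep -E "ssl_(certificate|private_key)" $(ls -dt /var/run/cloudera-scm-agent/process/*HUE_SERVER*/hue.ini | head -1)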

  10. Start everything, and check the Cloudera Manager server logs on the Cloudera Manager host (the 3rd server in all environments) to see whether there are any SSL errors.

a) Start the Cloudera Manager server from Node 3:

#  service cloudera-scm-server start

Verify:

# service cloudera-scm-server status

b) From Node 1, start the agents:

# dcli -C service cloudera-scm-agent start

Verify:

# dcli -C service cloudera-scm-agent status

c) Log into CM as the “admin” user. Note this is like a fresh first-time login.
d) Start the mgmt service: Home > mgmt dropdown > Start
e) Start the cluster: Home > <cluster-name> dropdown > Start

 

  11. Make sure the cluster is healthy.
a) Verify with:

# bdacheckcluster

b) Make sure services are healthy in CM.
c) Verify the output from the cluster verification checks is successful:

# cd /opt/oracle/BDAMammoth

# ./mammoth -c

 

IF YOU FACE ANY KIND OF PROBLEM, JUST ROLL BACK WHAT YOU HAVE DONE BY REPLACING YOUR DIRECTORY WITH THE ONE THAT YOU BACKED UP AT THE BEGINNING.

How-to: Quickly Configure Kerberos for Your Apache Hadoop Cluster (http://blog.cloudera.com/blog/2015/03/how-to-quickly-configure-kerberos-for-your-apache-hadoop-cluster/)

March 11, 2016

Use the scripts and screenshots below to configure a Kerberized cluster in minutes.

Kerberos is the foundation of securing your Apache Hadoop cluster. With Kerberos enabled, user authentication is required. Once users are authenticated, you can use projects like Apache Sentry (incubating) for role-based access control via GRANT/REVOKE statements.

Taming the three-headed dog that guards the gates of Hades is challenging, so Cloudera has put significant effort into making this process easier in Hadoop-based enterprise data hubs. In this post, you’ll learn how to stand up a one-node cluster with Kerberos enforcing user authentication, using the Cloudera QuickStart VM as a demo environment.

If you want to read the product documentation, it’s available here. You should consider this reference material; I’d suggest reading it later to understand more details about what the scripts do.

Requirements

You need the following downloads to follow along.

Initial Configuration

Before you start the QuickStart VM, increase the memory allocation to 8GB RAM and increase the number of CPUs to two. You can get by with a little less RAM, but we will have everything including the Kerberos server running on one node.

Start up the VM and activate Cloudera Manager as shown here:

Give this script some time to run; it has to restart the cluster.

KDC Install and Setup Script

The script goKerberos_beforeCM.sh does all the setup work for the Kerberos server and the appropriate configuration parameters. The comments are designed to explain what is going on inline. (Do not copy and paste this script! It contains unprintable characters that are pretending to be spaces. Rather, download it.)
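
The script itself is a download; as a rough sketch (not the actual goKerberos_beforeCM.sh, just the flavor of what it automates), a minimal MIT KDC setup on the VM looks like this:

yum install -y krb5-server krb5-workstation
# point /etc/krb5.conf and /var/kerberos/krb5kdc/kdc.conf at the CLOUDERA realm, then:
kdb5_util create -s -r CLOUDERA -P <master_password>
kadmin.local -q "addprinc -pw cloudera cloudera-scm/admin@CLOUDERA"
service krb5kdc start
service kadmin start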

Cloudera Manager Kerberos Wizard

After running the script, you now have a working Kerberos server and can secure the Hadoop cluster. The wizard will do most of the heavy lifting; you just have to fill in a few values.

To start, log into Cloudera Manager by going to http://quickstart.cloudera:7180 in your browser. The userid is cloudera and the password is cloudera. (Almost needless to say but never use “cloudera” as a password in a real-world setting.)

There are lots of productivity tools here for managing the cluster but ignore them for now and head straight for the Administration > Kerberos wizard as shown in the next screenshot.

Click on the “Enable Kerberos” button.

The four checklist items were all completed by the script you’ve already run. Check off each item and select “Continue.”

The Kerberos Wizard needs to know the details of what the script configured. Fill in the entries as follows:

  • KDC Server Host: quickstart.cloudera
  • Kerberos Security Realm: CLOUDERA
  • Kerberos Encryption Types: aes256-cts-hmac-sha1-96

Click “Continue.”

Do you want Cloudera Manager to manage the krb5.conf files in your cluster? Remember, the whole point of this blog post is to make Kerberos easier. So, please check “Yes” and then select “Continue.”

The Kerberos Wizard is going to create Kerberos principals for the different services in the cluster. To do that it needs a Kerberos Administrator ID. The ID created is: cloudera-scm/admin@CLOUDERA.

The screen shot shows how to enter this information. Recall the password is: cloudera.

The next screen provides good news. It lets you know that the wizard was able to successfully authenticate.

OK, you’re ready to let the Kerberos Wizard do its work. Since this is a VM, you can safely select “I’m ready to restart the cluster now” and then click “Continue.” You now have time to go get a coffee or other beverage of your choice.

How long does that take? Just let it work.

Congrats, you are now running a Hadoop cluster secured with Kerberos.

Kerberos is Enabled. Now What?

The old method of su - hdfs will no longer provide administrator access to the HDFS filesystem. Here is how you become the hdfs user with Kerberos:

Now validate you can do hdfs user things:

Next, invalidate the Kerberos token so as not to break anything:
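
The command screenshots did not survive this archive; the flow is roughly the following (a sketch, assuming the setup script created an hdfs@CLOUDERA principal):

kinit hdfs@CLOUDERA      # become the hdfs superuser
hdfs dfs -mkdir /test    # do hdfs-admin things
hdfs dfs -ls /
kdestroy                 # invalidate the ticket again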

The min.user.id parameter needs to be fixed per the message below:

This is the error message you get without fixing min.user.id:

Save the changes shown above and restart the YARN service. Now validate that the cloudera user can use the cluster:
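
A sketch of that validation (the jar path matches the parcel layout on the QuickStart VM):

kinit cloudera@CLOUDERA
hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 4 1000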

If you forget to kinit before trying to use the cluster you’ll get the errors below. The simple fix is to use kinit with the principal you wish to use.