How to Install Kerberos 5 KDC Server on Linux for Authentication

Kerberos is a network authentication protocol.

Kerberos provides strong cryptographic authentication between devices, which lets clients and servers communicate in a more secure manner. It is designed to address network security problems.

While firewalls act as a solution against intrusion from external networks, Kerberos is usually used to address intrusion and other security problems within the network.

The current version of Kerberos is version 5, commonly referred to as KRB5.

To implement Kerberos, we need a centralized authentication service running on a server.

This service is called the Key Distribution Center (KDC).

A server registered with the KDC is trusted by all other computers in the Kerberos realm.

Sample krb5.conf File

Here’s an example krb5.conf file that contains the realm and domain-to-realm mapping information:

# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM

Install Kerberos KDC server

For security reasons, it is recommended to run the Kerberos KDC on a dedicated server.

Download and install the krb5 server package.

# rpm -ivh krb5-server-1.10.3-10.el6_4.6.x86_64.rpm
Preparing...       ########################################### [100%]
   1:krb5-server   ########################################### [100%]
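
If your system has access to a configured yum repository, you can install the same packages without downloading the RPMs manually. The package names below assume a RHEL/CentOS 6 style system like the one used in this article:

# yum install krb5-server krb5-libs krb5-workstation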

Verify that the following RPMs are installed before configuring the KDC:

# rpm -qa | grep -i krb5
pam_krb5-2.3.11-9.el6.x86_64
krb5-server-1.10.3-10.el6_4.6.x86_64
krb5-workstation-1.10.3-10.el6_4.6.x86_64
krb5-libs-1.10.3-10.el6_4.6.x86_64

Modify /etc/krb5.conf File

Change /etc/krb5.conf to look like the example below, with the appropriate realm and domain_realm mappings.

# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = MYREALM.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 MYREALM.COM = {
  kdc = elserver1.example.com
  admin_server = elserver1.example.com
 }

[domain_realm]
 .myrealm.com = MYREALM.COM
 myrealm.com = MYREALM.COM

Modify kdc.conf File

The kdc.conf file should also be modified as shown below.

# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 MYREALM.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

Create KDC database

Next, create the KDC database using the kdb5_util command as shown below. At this stage, enter an appropriate password for the KDC database master key.

# /usr/sbin/kdb5_util create -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'MYREALM.COM',
master key name 'K/M@MYREALM.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:

Assign Administrator Privilege

Users can be granted administrator privileges on the database through the file /var/kerberos/krb5kdc/kadm5.acl.

# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@MYREALM.COM     *

In the above example, any principal in MYREALM.COM with an admin instance has all administrator privileges.
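
Not every administrator needs full rights. The kadm5.acl format also accepts individual permission letters; the second entry below is a hedged illustration (the helpdesk/admin principal is hypothetical) that grants only the add (a), list (l), and inquire (i) permissions:

# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@MYREALM.COM          *
helpdesk/admin@MYREALM.COM   ali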

Create a Principal

Create the principal using the following command. In this example, I created the principal with the user name “eluser”.

# kadmin.local -q "addprinc eluser/admin"
Authenticating as principal root/admin@MYREALM.COM with password.
WARNING: no policy specified for eluser/admin@MYREALM.COM; defaulting to no policy
Enter password for principal "eluser/admin@MYREALM.COM":
Re-enter password for principal "eluser/admin@MYREALM.COM":
Principal "eluser/admin@MYREALM.COM" created.

Start the Kerberos Service

Start the KDC and kadmin daemons as shown below.

# service krb5kdc start
Starting Kerberos 5 KDC:               [  OK  ]

# service kadmin start
Starting Kerberos 5 Admin Server:      [  OK  ]
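
On RHEL/CentOS 6 style init systems such as the one used in these examples, you will typically also want both daemons to start automatically at boot:

# chkconfig krb5kdc on
# chkconfig kadmin on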

Configure Kerberos Client

Now let’s see how to configure the krb5 client to authenticate against the Kerberos KDC database we created above.

Step 1: Install the krb5-libs and krb5-workstation packages on the client machine.

Step 2: Copy the /etc/krb5.conf from the KDC server to the client machine.
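
For example, assuming the hostnames used earlier in this article (elserver1.example.com as the KDC and elserver3.example.com as the client), the file can be copied from the KDC with scp:

# scp /etc/krb5.conf root@elserver3.example.com:/etc/krb5.conf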

Step 3: Now we need to create the principal for the client in the KDC/Kerberos database.

You can use the command below to create the principal for the client machine on the KDC master server. In this example, I am creating a host principal for the client elserver3.example.com on the master KDC server elserver1.example.com.

# kadmin.local -q "addprinc host/elserver3.example.com"
Authenticating as principal root/admin@MYREALM.COM with password.
WARNING: no policy specified for host/elserver3.example.com@MYREALM.COM; defaulting to no policy
Enter password for principal "host/elserver3.example.com@MYREALM.COM":
Re-enter password for principal "host/elserver3.example.com@MYREALM.COM":
Principal "host/elserver3.example.com@MYREALM.COM" created.

Step 4: Extract the krb5.keytab for the client on the KDC master server using the command below, then copy the resulting keytab file to /etc/krb5.keytab on the client machine:

# kadmin.local -q "ktadd -k /etc/krb5.keytab host/elserver3.example.com"
Authenticating as principal root/admin@MYREALM.COM with password.
Entry for principal host/elserver3.example.com with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/etc/krb5.keytab.

This completes the configuration. You are all done at this stage.

From now on, every time you establish an SSH or RSH connection, the host verifies its identity against the KDC database using the keytab file and establishes a secure, Kerberos-authenticated connection.
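
A quick way to verify the client setup is to request and display a ticket for the admin principal created earlier. The klist output below is a sketch; the exact cache location, timestamps, and flags will differ on your system:

# kinit eluser/admin
Password for eluser/admin@MYREALM.COM:
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: eluser/admin@MYREALM.COM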

The ktadd command is used to generate a new keytab or add a principal to an existing keytab from within kadmin.

The ktremove command is used to remove a principal from an existing keytab. The command to remove all keytab entries for the host principal that we created above is:

# kadmin.local -q "ktremove -k /etc/krb5.keytab host/elserver3.example.com all"

Delete a KDC database

If for some reason you have to delete the KDC database, use the following command:

# kdb5_util -r MYREALM.COM destroy
kdb5_util: Deleting KDC database stored in '/var/kerberos/krb5kdc/principal', are you sure?
(type 'yes' to confirm)? <== yes
OK, deleting database '/var/kerberos/krb5kdc/principal'...

Adding the -f option to the above command forces deletion of the KDC database without prompting for confirmation.

Backup and Restore KDC Database

To back up the KDC database to a file, use kdb5_util dump.

# kdb5_util dump kdcfile

# ls -l kdcfile
-rw-------. 1 root root 5382 Apr 10 07:25 kdcfile

To restore the KDC database from the dump file created in the above step, do the following:

# kdb5_util load kdcfile
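
Because the dump file contains all principal keys, keep it readable by root only. If you want periodic backups, a root crontab entry along the following lines is one option; the /root/kdc-backup destination is just an example, and the backslash before % is required because cron treats an unescaped % specially:

# crontab -e
0 2 * * * /usr/sbin/kdb5_util dump /root/kdc-backup/kdc-dump-$(date +\%F)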

How-to: Analyze Twitter Data with Apache Hadoop (http://blog.cloudera.com/blog/2012/09/analyzing-twitter-data-with-hadoop/)

Social media has gained immense popularity with marketing teams, and Twitter is an effective tool for a company to get people excited about its products. Twitter makes it easy to engage users and communicate directly with them, and in turn, users can provide word-of-mouth marketing for companies by discussing the products. Given limited resources, and knowing we may not be able to talk to everyone we want to target directly, marketing departments can be more efficient by being selective about whom we reach out to.

In this post, we’ll learn how we can use Apache Flume, Apache HDFS, Apache Oozie, and Apache Hive to design an end-to-end data pipeline that will enable us to analyze Twitter data. This will be the first post in a series. The posts to follow will describe, in more depth, how each component is involved and how the custom code operates. All the code and instructions necessary to reproduce this pipeline are available on the Cloudera GitHub.

Who is Influential?

To understand whom we should target, let’s take a step back and try to understand the mechanics of Twitter. A user – let’s call him Joe – follows a set of people, and has a set of followers. When Joe sends an update out, that update is seen by all of his followers. Joe can also retweet other users’ updates. A retweet is a repost of an update, much like you might forward an email. If Joe sees a tweet from Sue, and retweets it, all of Joe’s followers see Sue’s tweet, even if they don’t follow Sue. Through retweets, messages can get passed much further than just the followers of the person who sent the original tweet. Knowing that, we can try to engage users whose updates tend to generate lots of retweets. Since Twitter tracks retweet counts for all tweets, we can find the users we’re looking for by analyzing Twitter data.

Now we know the question we want to ask: Which Twitter users get the most retweets? Who is influential within our industry?

How Do We Answer These Questions?

SQL queries can be used to answer this question: We want to look at which users are responsible for the most retweets, in descending order of most retweeted. However, querying Twitter data in a traditional RDBMS is inconvenient, since the Twitter Streaming API outputs tweets in a JSON format which can be arbitrarily complex. In the Hadoop ecosystem, the Hive project provides a query interface which can be used to query data that resides in HDFS. The query language looks very similar to SQL, but allows us to easily model complex types, so we can easily query the type of data we have. Seems like a good place to start. So how do we get Twitter data into Hive? First, we need to get Twitter data into HDFS, and then we’ll be able to tell Hive where the data resides and how to read it.

At a high level, several of the CDH (Cloudera’s Distribution Including Apache Hadoop) components can be pieced together to build the data pipeline we need to answer these questions. The rest of this post will describe how these components interact and the purposes they each serve.

Gathering Data with Apache Flume

The Twitter Streaming API will give us a constant stream of tweets coming from the service. One option would be to use a simple utility like curl to access the API and then periodically load the files. However, this would require us to write code to control where the data goes in HDFS, and if we have a secure cluster, we will have to integrate with security mechanisms. It will be much simpler to use components within CDH to automatically move the files from the API to HDFS, without our manual intervention.

Apache Flume is a data ingestion system that is configured by defining endpoints in a data flow called sources and sinks. In Flume, each individual piece of data (tweets, in our case) is called an event; sources produce events, and send the events through a channel, which connects the source to the sink. The sink then writes the events out to a predefined location. Flume supports some standard data sources, such as syslog or netcat. For this use case, we’ll need to design a custom source that accesses the Twitter Streaming API, and sends the tweets through a channel to a sink that writes to HDFS files. Additionally, we can use the custom source to filter the tweets on a set of search keywords to help identify relevant tweets, rather than a pure sample of the entire Twitter firehose. The custom Flume source code can be found here.
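
To give a feel for how these pieces fit together, here is a hedged sketch of a Flume agent configuration for this pipeline. The custom source class name and its keywords property are assumptions for illustration; the real property names are defined by the custom source in the Cloudera repository.

TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

# Custom source that reads the Twitter Streaming API (class and keywords property assumed)
TwitterAgent.sources.Twitter.type = com.example.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics

# Memory channel buffers events between the source and the sink
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000

# HDFS sink writes the raw JSON events into hourly directories
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://namenode:8020/user/flume/tweets/%Y/%m/%d/%H
TwitterAgent.sinks.HDFS.hdfs.useLocalTimeStamp = true
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream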

Partition Management with Oozie

Once we have the Twitter data loaded into HDFS, we can stage it for querying by creating an external table in Hive. Using an external table will allow us to query the table without moving the data from the location where it ends up in HDFS. To ensure scalability, as we add more and more data, we’ll need to also partition the table. A partitioned table allows us to prune the files that we read when querying, which results in better performance when dealing with large data sets. However, the Twitter API will continue to stream tweets and Flume will perpetually create new files. We can automate the periodic process of adding partitions to our table as the new data comes in.

Apache Oozie is a workflow coordination system that can be used to solve this problem. Oozie is an extremely flexible system for designing job workflows, which can be scheduled to run based on a set of criteria. We can configure the workflow to run an ALTER TABLE command that adds a partition containing the last hour’s worth of data into Hive, and we can instruct the workflow to occur every hour. This will ensure that we’re always looking at up-to-date data.

The configuration files for the Oozie workflow are located here.
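
The coordinator details live in those configuration files, but conceptually the recurring step is just a Hive statement along these lines; the table name, partition column, and paths are assumptions that match the Flume sketch above:

ALTER TABLE tweets ADD IF NOT EXISTS
  PARTITION (datehour = '2012092618')
  LOCATION '/user/flume/tweets/2012/09/26/18';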

Querying Complex Data with Hive

Before we can query the data, we need to ensure that the Hive table can properly interpret the JSON data. By default, Hive expects that input files use a delimited row format, but our Twitter data is in a JSON format, which will not work with the defaults. This is actually one of Hive’s biggest strengths. Hive allows us to flexibly define, and redefine, how the data is represented on disk. The schema is only really enforced when we read the data, and we can use the Hive SerDe interface to specify how to interpret what we’ve loaded.

SerDe stands for Serializer and Deserializer, which are interfaces that tell Hive how it should translate the data into something that Hive can process. In particular, the Deserializer interface is used when we read data off of disk; it converts the data into objects that Hive knows how to manipulate. We can write a custom SerDe that reads the JSON data in and translates the objects for Hive. Once that is in place, we can start querying. The JSON SerDe code can be found here. The SerDe takes a tweet in its raw JSON form and translates the JSON entities into queryable columns, so nested fields such as the original author of a retweet or the list of hashtags can be selected directly in HiveQL.
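
As a hedged sketch of what such a table definition could look like (the SerDe class name and the column subset are assumptions; the full schema ships with the Cloudera example code):

CREATE EXTERNAL TABLE tweets (
  id BIGINT,
  created_at STRING,
  text STRING,
  retweeted_status STRUCT<text:STRING, user:STRUCT<screen_name:STRING>>,
  entities STRUCT<hashtags:ARRAY<STRUCT<text:STRING>>>
)
PARTITIONED BY (datehour STRING)
ROW FORMAT SERDE 'com.example.hive.serde.JSONSerDe'
LOCATION '/user/flume/tweets';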

We’ve now managed to put together an end-to-end system, which gathers data from the Twitter Streaming API, sends the tweets to files on HDFS through Flume, and uses Oozie to periodically load the files into Hive, where we can query the raw JSON data, through the use of a Hive SerDe.

Some Results

In my own testing, I let Flume collect data for about three days, filtering on a set of keywords:

hadoop, big data, analytics, bigdata, cloudera, data science, data scientist, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing

The collected data was about half a GB of JSON data, and here is an example of what a tweet looks like. The data has some structure, but certain fields may or may not exist. The retweeted_status field, for example, will only be present if the tweet was a retweet. Additionally, some of the fields may be arbitrarily complex. The hashtags field is an array of all the hashtags present in the tweets, but most RDBMS’s do not support arrays as a column type. This semi-structured quality of the data makes the data very difficult to query in a traditional RDBMS. Hive can handle this data much more gracefully.

The query needs to find usernames and the number of retweets they have generated across all the tweets that we have data for, in descending order of retweet count.
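
A hedged HiveQL sketch of such a query, using the table layout assumed in the previous section, counts how many retweets of each original author appear in the collected sample:

SELECT
  retweeted_status.user.screen_name AS username,
  COUNT(*) AS retweets
FROM tweets
WHERE retweeted_status.user.screen_name IS NOT NULL
GROUP BY retweeted_status.user.screen_name
ORDER BY retweets DESC
LIMIT 10;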

For the few days of data collected, the query showed which users were the most retweeted in the industry.

From these results, we can see whose tweets are getting heard by the widest audience, and also determine whether these people are communicating on a regular basis or not. We can use this information to more carefully target our messaging in order to get them talking about our products, which, in turn, will get other people talking about our products.

Conclusion

In this post we’ve seen how we can take some of the components of CDH and combine them to create an end-to-end data management system. This same architecture could be used for a variety of applications designed to look at Twitter data, such as identifying spam accounts, or identifying clusters of keywords. Taking the system even further, the general architecture can be used across numerous applications. By plugging in different Flume sources and Hive SerDes, this application can be customized for many other applications, like analyzing web logs, to give an example. Grab the code, and give it a shot yourself.

CDH 5.3: Apache Sentry Integration with HDFS

Starting in CDH 5.3, Apache Sentry integration with HDFS saves admins a lot of work by centralizing access control permissions across components that utilize HDFS.

It’s been more than a year and a half since a couple of my colleagues here at Cloudera shipped the first version of Sentry (now Apache Sentry (incubating)). This project filled a huge security gap in the Apache Hadoop ecosystem by bringing truly secure and dependable fine-grained authorization to Hadoop, with out-of-the-box integration for Apache Hive. Since then the project has grown significantly, adding support for Impala, Search, and the Hue App, to name a few significant additions.

In order to provide a truly secure and centralized authorization mechanism, Sentry deployments have been historically set up so that all Hive’s data and metadata are accessible only by HiveServer2 and every other user is cut out. This has been a pain point for Sqoop users as Sqoop does not use the HiveServer2 interface. Hence users with a Sentry-secured Hive deployment were forced to split the import task into two steps: simple HDFS import followed by manually loading the data into Hive.

With the inclusion of HDFS ACLs and the integration of Sentry into the Hive metastore in CDH 5.1, users were able to improve this situation and get the direct Hive import working again. However, this approach required manual administrator intervention to configure HDFS ACLs according to the Sentry configuration and needed a manual refresh to keep both systems in sync.

One of the large features included in the recently released CDH 5.3 is Sentry integration with HDFS, which enables customers to easily share data between Hive, Impala and all the other Hadoop components that interact with HDFS (MapReduce, Spark, Pig, and Sqoop, and so on) while ensuring that user access permissions only need to be set once, and that they are uniformly enforced.

The rest of this post focuses on the example of using Sqoop together with this Sentry feature. Sqoop data can now be imported into Hive without any additional administrator intervention. Because Sentry policies (which tables a user can select from and which tables they can insert into) are exposed directly in HDFS, Sqoop will reuse the same policies that have been configured via GRANT/REVOKE statements or the Hue Sentry App and will import data into Hive without any trouble.

Configuration

In order for Sqoop to seamlessly import into a Sentry Secured Hive instance, the Hadoop administrator needs to follow a few configuration steps to enable all the necessary features. First, your cluster needs to be using the Sentry Service as backend for storing authorization metadata and not rely on the older policy files.

If you are already using the Sentry service and GRANT/REVOKE statements, you can jump directly to step 3.

1. Make sure that you have the Sentry service running on your cluster. You should see it in your cluster's service list.


2. Verify that Hive is configured to use this service as the backend for Sentry metadata.


3. Finally, enable HDFS integration with Sentry.


Example Sqoop Import

Let’s assume that we have user jarcec who needs to import data into a Hive database named default. User jarcec is part of a group that is also called jarcec – in real life the name of the group doesn’t have to be the same as the username and that is fine.

With an unsecured Hive installation, the Hadoop administrator would have to jump in and grant write privileges to user jarcec on the directory /user/hive/warehouse or one of its subdirectories. With Sentry and HDFS integration, the Hadoop administrator no longer needs to step in. Instead, Sqoop will reuse the same authorization policies that have been configured through Hive SQL or via the Sentry Hue Application. Let's assume that user bc is jarcec's manager and already has privileges to grant privileges in the default database.

1. bc starts by invoking beeline and connecting to HiveServer2:

[bc@sqoopsentry-1 ~]$ beeline
1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> !connect jdbc:hive2://sqoopsentry-1.vpc.cloudera.com:10000/default;principal=hive/sqoopsentry-1.vpc.cloudera.com@ENT.CLOUDERA.COM

2. If user jarcec is not yet part of any role, we need to create a role for him:

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> CREATE ROLE jarcec_role;
No rows affected (0.769 seconds)

3. This new role jarcec_role needs to be granted to jarcec's group jarcec:

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> GRANT ROLE jarcec_role to GROUP jarcec;
No rows affected (0.651 seconds)

4. Finally, bc can grant access to database default (or any other database) to the role jarcec_role:

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> GRANT ALL ON DATABASE default TO ROLE jarcec_role;
No rows affected (0.16 seconds)

By executing the steps above, user jarcec has been given the privilege to do any action (insert or select) on all objects inside database default. That includes the ability to create new tables, insert data, or simply query existing tables. With those privileges, user jarcec can run the Sqoop command he is used to:

[jarcec@sqoopsentry-1 ~]$ sqoop import --connect jdbc:mysql://mysql.ent.cloudera.com/sqoop --username sqoop --password sqoop --table text --hive-import
14/12/14 15:37:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.3.0
…
14/12/14 15:38:58 INFO mapreduce.ImportJobBase: Transferred 249.7567 MB in 75.8448 seconds (3.293 MB/sec)
14/12/14 15:38:58 INFO mapreduce.ImportJobBase: Retrieved 1000000 records.
14/12/14 15:38:58 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `text` AS t LIMIT 1
14/12/14 15:39:09 INFO hive.HiveImport: Loading uploaded data into Hive
14/12/14 15:39:09 INFO hive.HiveImport: 14/12/14 15:39:09 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
14/12/14 15:39:09 INFO hive.HiveImport:
14/12/14 15:39:09 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.26/jars/hive-common-0.13.1-cdh5.3.0.jar!/hive-log4j.properties
14/12/14 15:39:12 INFO hive.HiveImport: OK
14/12/14 15:39:12 INFO hive.HiveImport: Time taken: 1.079 seconds
14/12/14 15:39:12 INFO hive.HiveImport: Loading data to table default.text
14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00000
14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00001
14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00002
14/12/14 15:39:13 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00003
14/12/14 15:39:13 INFO hive.HiveImport: Table default.text stats: [numFiles=4, numRows=0, totalSize=261888896, rawDataSize=0]
14/12/14 15:39:13 INFO hive.HiveImport: OK
14/12/14 15:39:13 INFO hive.HiveImport: Time taken: 0.719 seconds
14/12/14 15:39:13 INFO hive.HiveImport: Hive import complete.
14/12/14 15:39:13 INFO hive.HiveImport: Export directory is not empty, keeping it.

And jarcec can easily confirm in beeline that the data has indeed been imported into Hive:

0: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> show tables from default;
+-----------+--+
| tab_name  |
+-----------+--+
| text      |
+-----------+--+
1 row selected (0.177 seconds)

0: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> select count(*) from text;
+----------+--+
|   _c0    |
+----------+--+
| 1000000  |
+----------+--+
1 row selected (72.188 seconds)

If Hive is configured to inherit permissions, you might notice that Sqoop will print out several warnings similar to this one:

 

14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00000

As there is no need to inherit HDFS permissions when Sentry is enabled in HDFS, you can safely ignore such messages.