Tag Archives: Big Data

Hadoop fs Shell Commands Examples – Baby Steps

Hadoop file system (fs) shell commands are used to perform various file operations such as copying files, changing permissions, viewing the contents of a file, changing the ownership of files, creating directories, etc.

The syntax of the fs shell command is

hadoop fs <args>

All the fs shell commands take a path URI as an argument. The format of the URI is scheme://authority/path. The scheme and authority are optional. For HDFS the scheme is hdfs, and for the local file system the scheme is file. If you do not specify a scheme, the default scheme is taken from the configuration file. You can also specify directories in HDFS along with the URI, as hdfs://namenodehost/dir1/dir2 or simply /dir1/dir2.
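For example, assuming the default file system in the configuration points at namenodehost, the first two commands below list the same HDFS directory, while the third lists a local directory:

> hadoop fs -ls hdfs://namenodehost/dir1/dir2
> hadoop fs -ls /dir1/dir2
> hadoop fs -ls file:///tmp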

The hadoop fs commands behave much like the corresponding unix commands. Let's see each of the fs shell commands in detail with examples:

Hadoop fs Shell Commands

hadoop fs ls:

The hadoop ls command is used to list out the directories and files. An example is shown below:

> hadoop fs -ls /user/hadoop/employees
Found 1 items
-rw-r--r--   2 hadoop hadoop 2 2012-06-28 23:37 /user/hadoop/employees/000000_0

The above command lists out the files in the employees directory.

> hadoop fs -ls /user/hadoop/dir
Found 1 items
drwxr-xr-x   - hadoop hadoop  0 2013-09-10 09:47 /user/hadoop/dir/products

The output of the hadoop fs -ls command is very similar to that of the unix ls command. The only difference is in the second field: for a file, the second field indicates the number of replicas, and for a directory, it is empty.

hadoop fs lsr:

The hadoop lsr command recursively displays the directories, sub directories and files in the specified directory. The usage example is shown below:

> hadoop fs -lsr /user/hadoop/dir
Found 2 items
drwxr-xr-x   - hadoop hadoop  0 2013-09-10 09:47 /user/hadoop/dir/products
-rw-r--r--   2 hadoop hadoop    1971684 2013-09-10 09:47 /user/hadoop/dir/products/products.dat

The hadoop fs lsr command is similar to the ls -R command in unix.

hadoop fs cat:

Hadoop cat command is used to print the contents of the file on the terminal (stdout). The usage example of hadoop cat command is shown below:

> hadoop fs -cat /user/hadoop/dir/products/products.dat

cloudera book by amazon
cloudera tutorial by ebay

hadoop fs chgrp:

The hadoop chgrp shell command is used to change the group association of files. Optionally, you can use the -R option to apply the change recursively through the directory structure. The usage of hadoop fs -chgrp is shown below:

hadoop fs -chgrp [-R] <NewGroupName> <file or directory name>
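For example, the following command (using a hypothetical group named analysts) recursively changes the group of the dir directory used earlier:

> hadoop fs -chgrp -R analysts /user/hadoop/dir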

hadoop fs chmod:

The hadoop chmod command is used to change the permissions of files. The -R option can be used to recursively change the permissions of a directory structure. The usage is shown below:

hadoop fs -chmod [-R] <mode | octal mode> <file or directory name>
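For example, the following commands give the employees file rw-r--r-- permissions and recursively set rwxr-xr-x on the dir directory (paths reused from the ls examples above):

> hadoop fs -chmod 644 /user/hadoop/employees/000000_0
> hadoop fs -chmod -R 755 /user/hadoop/dir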

hadoop fs chown:

The hadoop chown command is used to change the ownership of files. The -R option can be used to recursively change the owner of a directory structure. The usage is shown below:

hadoop fs -chown [-R] <NewOwnerName>[:NewGroupName] <file or directory name>
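For example, the following command (with a hypothetical owner newuser and group analysts) recursively changes the ownership of the dir directory:

> hadoop fs -chown -R newuser:analysts /user/hadoop/dir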

hadoop fs mkdir:

The hadoop mkdir command is for creating directories in the hdfs. You can use the -p option for creating parent directories. This is similar to the unix mkdir command. The usage example is shown below:

> hadoop fs -mkdir /user/hadoop/hadoopdemo

The above command creates the hadoopdemo directory in the /user/hadoop directory.

> hadoop fs -mkdir -p /user/hadoop/dir1/dir2/demo

The above command creates the dir1/dir2/demo directory in /user/hadoop directory.

hadoop fs copyFromLocal:

The hadoop copyFromLocal command is used to copy a file from the local file system to the hadoop hdfs. The syntax and usage example are shown below:

Syntax:
hadoop fs -copyFromLocal <localsrc> URI

Example:

Check the data in local file
> ls sales
2000,iphone
2001, htc

Now copy this file to hdfs

> hadoop fs -copyFromLocal sales /user/hadoop/hadoopdemo

View the contents of the hdfs file.

> hadoop fs -cat /user/hadoop/hadoopdemo/sales
2000,iphone
2001, htc

hadoop fs copyToLocal:

The hadoop copyToLocal command is used to copy a file from hdfs to the local file system. The syntax and usage example are shown below:

Syntax
hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst>

Example:

hadoop fs -copyToLocal /user/hadoop/hadoopdemo/sales salesdemo

The -ignorecrc option is used to copy the files that fail the crc check. The -crc option is for copying the files along with their CRC.

hadoop fs cp:

The hadoop cp command is for copying the source into the target. The cp command can also be used to copy multiple files into the target. In this case the target should be a directory. The syntax is shown below:

hadoop fs -cp /user/hadoop/SrcFile /user/hadoop/TgtFile
hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 hdfs://namenodehost/user/hadoop/TgtDirectory

hadoop fs -put:

The hadoop put command is used to copy single or multiple sources from the local file system to the destination file system. The put command can also read input from stdin. The different syntaxes for the put command are shown below:

Syntax1: copy single file to hdfs

hadoop fs -put localfile /user/hadoop/hadoopdemo

Syntax2: copy multiple files to hdfs

hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdemo

Syntax3: Read input file name from stdin
hadoop fs -put - hdfs://namenodehost/user/hadoop/hadoopdemo

hadoop fs get:

Hadoop get command copies the files from hdfs to the local file system. The syntax of the get command is shown below:

hadoop fs -get /user/hadoop/hadoopdemo/hdfsFileName localFileName

hadoop fs getmerge:

hadoop getmerge command concatenates the files in the source directory into the destination file. The syntax of the getmerge shell command is shown below:

hadoop fs -getmerge <src> <localdst> [addnl]

The addnl option adds a newline character at the end of each file.
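For example, the following merges the files under the products directory from the lsr example into a single local file (the local file name here is just an illustration):

> hadoop fs -getmerge /user/hadoop/dir/products products_merged.dat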

hadoop fs moveFromLocal:

The hadoop moveFromLocal command moves a file from local file system to the hdfs directory. It removes the original source file. The usage example is shown below:

> hadoop fs -moveFromLocal products /user/hadoop/hadoopdemo

hadoop fs mv:

It moves files from an hdfs source to an hdfs destination. The hadoop mv command can also be used to move multiple source files into a target directory, in which case the target should be a directory. The syntax is shown below:

hadoop fs -mv /user/hadoop/SrcFile /user/hadoop/TgtFile
hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2 hdfs://namenodehost/user/hadoop/TgtDirectory

hadoop fs du:

The du command displays the aggregate length of the files contained in a directory, or the length of a single file if the path is just a file. The syntax and usage are shown below:

hadoop fs -du hdfs://namenodehost/user/hadoop

hadoop fs dus:

The hadoop dus command prints a summary of file lengths:

> hadoop fs -dus hdfs://namenodehost/user/hadoop
hdfs://namenodehost/user/hadoop 21792568333

hadoop fs expunge:

Used to empty the trash. The usage of expunge is shown below:

hadoop fs -expunge

hadoop fs rm:

Removes the specified list of files and empty directories. An example is shown below:

hadoop fs -rm /user/hadoop/file

hadoop fs -rmr:

Recursively deletes the files and sub directories. The usage of rmr is shown below:

hadoop fs -rmr /user/hadoop/dir

hadoop fs setrep:

Hadoop setrep is used to change the replication factor of a file. Use the -R option for recursively changing the replication factor.

hadoop fs -setrep -w 4 -R /user/hadoop/dir

hadoop fs stat:

The hadoop stat command returns status information about a path. The syntax of stat is shown below:

hadoop fs -stat URI

> hadoop fs -stat /user/hadoop/
2013-09-24 07:53:04

hadoop fs tail:

The hadoop tail command prints the last kilobyte of the file. The -f option can be used just as in unix.

> hadoop fs -tail /user/hadoop/sales.dat

12345 abc
2456 xyz

hadoop fs test:

The hadoop test command is used for file test operations. The syntax is shown below:

hadoop fs -test -[ezd] URI

Here “e” checks for the existence of the file, “z” checks whether the file is zero length, and “d” checks whether the path is a directory. When the check succeeds, the test command returns an exit status of 0; otherwise it returns a non-zero exit status.
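For example, to check whether the sales file copied earlier exists, run the test and inspect the shell exit status (0 means the file exists):

> hadoop fs -test -e /user/hadoop/hadoopdemo/sales
> echo $?
0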

hadoop fs text:

The hadoop text command displays the source file in text format. The allowed source file formats are zip and TextRecordInputStream. The syntax is shown below:

hadoop fs -text <src>

hadoop fs touchz:

The hadoop touchz command creates a zero byte file. This is similar to the touch command in unix. The syntax is shown below:

hadoop fs -touchz /user/hadoop/filename

How-to: Analyze Twitter Data with Apache Hadoop (http://blog.cloudera.com/blog/2012/09/analyzing-twitter-data-with-hadoop/)

Social media has gained immense popularity with marketing teams, and Twitter is an effective tool for a company to get people excited about its products. Twitter makes it easy to engage users and communicate directly with them, and in turn, users can provide word-of-mouth marketing for companies by discussing the products. Given limited resources, and knowing we may not be able to talk directly to everyone we want to target, marketing departments can be more efficient by being selective about whom they reach out to.

In this post, we’ll learn how we can use Apache Flume, Apache HDFS, Apache Oozie, and Apache Hive to design an end-to-end data pipeline that will enable us to analyze Twitter data. This will be the first post in a series. The posts to follow will describe, in more depth, how each component is involved and how the custom code operates. All the code and instructions necessary to reproduce this pipeline are available on the Cloudera GitHub.

Who is Influential?

To understand whom we should target, let’s take a step back and try to understand the mechanics of Twitter. A user – let’s call him Joe – follows a set of people, and has a set of followers. When Joe sends an update out, that update is seen by all of his followers. Joe can also retweet other users’ updates. A retweet is a repost of an update, much like you might forward an email. If Joe sees a tweet from Sue, and retweets it, all of Joe’s followers see Sue’s tweet, even if they don’t follow Sue. Through retweets, messages can get passed much further than just the followers of the person who sent the original tweet. Knowing that, we can try to engage users whose updates tend to generate lots of retweets. Since Twitter tracks retweet counts for all tweets, we can find the users we’re looking for by analyzing Twitter data.

Now we know the question we want to ask: Which Twitter users get the most retweets? Who is influential within our industry?

How Do We Answer These Questions?

SQL queries can be used to answer this question: We want to look at which users are responsible for the most retweets, in descending order of most retweeted. However, querying Twitter data in a traditional RDBMS is inconvenient, since the Twitter Streaming API outputs tweets in a JSON format which can be arbitrarily complex. In the Hadoop ecosystem, the Hive project provides a query interface which can be used to query data that resides in HDFS. The query language looks very similar to SQL, but allows us to easily model complex types, so we can easily query the type of data we have. Seems like a good place to start. So how do we get Twitter data into Hive? First, we need to get Twitter data into HDFS, and then we’ll be able to tell Hive where the data resides and how to read it.

At a high level, several CDH (Cloudera’s Distribution Including Apache Hadoop) components can be pieced together to build the data pipeline we need to answer the questions we have. The rest of this post will describe how these components interact and the purposes they each serve.

Gathering Data with Apache Flume

The Twitter Streaming API will give us a constant stream of tweets coming from the service. One option would be to use a simple utility like curl to access the API and then periodically load the files. However, this would require us to write code to control where the data goes in HDFS, and if we have a secure cluster, we will have to integrate with security mechanisms. It will be much simpler to use components within CDH to automatically move the files from the API to HDFS, without our manual intervention.

Apache Flume is a data ingestion system that is configured by defining endpoints in a data flow called sources and sinks. In Flume, each individual piece of data (tweets, in our case) is called an event; sources produce events, and send the events through a channel, which connects the source to the sink. The sink then writes the events out to a predefined location. Flume supports some standard data sources, such as syslog or netcat. For this use case, we’ll need to design a custom source that accesses the Twitter Streaming API, and sends the tweets through a channel to a sink that writes to HDFS files. Additionally, we can use the custom source to filter the tweets on a set of search keywords to help identify relevant tweets, rather than a pure sample of the entire Twitter firehose. The custom Flume source code can be found here.
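As a rough sketch of what such an agent configuration could look like (the custom source class name, the keywords property, and the HDFS path below are placeholders for illustration, not the exact names used in the Cloudera example code):

TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

TwitterAgent.sources.Twitter.type = com.example.flume.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics

TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000

TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.hdfs.path = /user/flume/tweets
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream

The sources/channels/sinks wiring here is ordinary Flume configuration; only the custom source type and its keyword filtering property are specific to this use case.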

Partition Management with Oozie

Once we have the Twitter data loaded into HDFS, we can stage it for querying by creating an external table in Hive. Using an external table will allow us to query the table without moving the data from the location where it ends up in HDFS. To ensure scalability, as we add more and more data, we’ll need to also partition the table. A partitioned table allows us to prune the files that we read when querying, which results in better performance when dealing with large data sets. However, the Twitter API will continue to stream tweets and Flume will perpetually create new files. We can automate the periodic process of adding partitions to our table as the new data comes in.

Apache Oozie is a workflow coordination system that can be used to solve this problem. Oozie is an extremely flexible system for designing job workflows, which can be scheduled to run based on a set of criteria. We can configure the workflow to run an ALTER TABLE command that adds a partition containing the last hour’s worth of data into Hive, and we can instruct the workflow to occur every hour. This will ensure that we’re always looking at up-to-date data.

The configuration files for the Oozie workflow are located here.
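The ALTER TABLE statement that the workflow runs each hour could look roughly like the following (the table name tweets and the datehour partition column are assumptions used for illustration):

ALTER TABLE tweets ADD IF NOT EXISTS PARTITION (datehour = 2013091009)
LOCATION '/user/flume/tweets/2013/09/10/09';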

Querying Complex Data with Hive

Before we can query the data, we need to ensure that the Hive table can properly interpret the JSON data. By default, Hive expects that input files use a delimited row format, but our Twitter data is in a JSON format, which will not work with the defaults. This is actually one of Hive’s biggest strengths. Hive allows us to flexibly define, and redefine, how the data is represented on disk. The schema is only really enforced when we read the data, and we can use the Hive SerDe interface to specify how to interpret what we’ve loaded.

SerDe stands for Serializer and Deserializer; these are interfaces that tell Hive how it should translate the data into something that Hive can process. In particular, the Deserializer interface is used when we read data off of disk, and it converts the data into objects that Hive knows how to manipulate. We can write a custom SerDe that reads JSON data in and translates the objects for Hive. Once that’s put into place, we can start querying. The JSON SerDe code can be found here. The SerDe takes a tweet in its raw JSON form and translates the nested JSON entities into queryable columns, so that fields such as the user and retweet details become directly addressable in HiveQL.
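A minimal sketch of such an external table, assuming a table named tweets, a hypothetical SerDe class com.example.hive.serde.JSONSerDe, and only a handful of the available fields, might look like this:

CREATE EXTERNAL TABLE tweets (
  id BIGINT,
  created_at STRING,
  text STRING,
  `user` STRUCT<screen_name:STRING, followers_count:INT>,
  retweeted_status STRUCT<text:STRING, user:STRUCT<screen_name:STRING>, retweet_count:INT>,
  entities STRUCT<hashtags:ARRAY<STRUCT<text:STRING>>>
)
PARTITIONED BY (datehour INT)
ROW FORMAT SERDE 'com.example.hive.serde.JSONSerDe'
LOCATION '/user/flume/tweets';

With a definition along these lines, nested attributes such as retweeted_status.user.screen_name can be referenced directly in queries.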

We’ve now managed to put together an end-to-end system, which gathers data from the Twitter Streaming API, sends the tweets to files on HDFS through Flume, and uses Oozie to periodically load the files into Hive, where we can query the raw JSON data, through the use of a Hive SerDe.

Some Results

In my own testing, I let Flume collect data for about three days, filtering on a set of keywords:

hadoop, big data, analytics, bigdata, cloudera, data science, data scientist, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing

The collected data was about half a GB of JSON data, and here is an example of what a tweet looks like. The data has some structure, but certain fields may or may not exist. The retweeted_status field, for example, will only be present if the tweet was a retweet. Additionally, some of the fields may be arbitrarily complex. The hashtags field is an array of all the hashtags present in the tweet, but most RDBMSs do not support arrays as a column type. This semi-structured quality of the data makes the data very difficult to query in a traditional RDBMS. Hive can handle this data much more gracefully.

The query we want will find usernames and the number of retweets they have generated across all the tweets that we have data for.
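A sketch of such a query, assuming the tweets table and the retweeted_status struct from the table sketch above, could look like this:

SELECT t.retweeted_screen_name, SUM(retweets) AS total_retweets, COUNT(*) AS tweet_count
FROM (
  SELECT retweeted_status.user.screen_name AS retweeted_screen_name,
         retweeted_status.text,
         MAX(retweeted_status.retweet_count) AS retweets
  FROM tweets
  GROUP BY retweeted_status.user.screen_name, retweeted_status.text
) t
GROUP BY t.retweeted_screen_name
ORDER BY total_retweets DESC
LIMIT 10;

The inner query takes the maximum observed retweet_count per retweeted tweet so that each one is counted once, and the outer query totals those counts per user.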

For the few days of data, this query surfaced the most retweeted users for the industry.

From these results, we can see whose tweets are getting heard by the widest audience, and also determine whether these people are communicating on a regular basis or not. We can use this information to more carefully target our messaging in order to get them talking about our products, which, in turn, will get other people talking about our products.

Conclusion

In this post we’ve seen how we can take some of the components of CDH and combine them to create an end-to-end data management system. This same architecture could be used for a variety of applications designed to look at Twitter data, such as identifying spam accounts, or identifying clusters of keywords. Taking the system even further, the general architecture can be used across numerous applications. By plugging in different Flume sources and Hive SerDes, this application can be customized for many other applications, like analyzing web logs, to give an example. Grab the code, and give it a shot yourself.

CDH 5.3: Apache Sentry Integration with HDFS

Starting in CDH 5.3, Apache Sentry integration with HDFS saves admins a lot of work by centralizing access control permissions across components that utilize HDFS.

It’s been more than a year and a half since a couple of my colleagues here at Cloudera shipped the first version of Sentry (now Apache Sentry (incubating)). This project filled a huge security gap in the Apache Hadoop ecosystem by bringing truly secure and dependable fine-grained authorization to Hadoop and providing out-of-the-box integration for Apache Hive. Since then the project has grown significantly, adding support for Impala, Search, and the wonderful Hue app, to name a few significant additions.

In order to provide a truly secure and centralized authorization mechanism, Sentry deployments have historically been set up so that all of Hive’s data and metadata are accessible only by HiveServer2, with every other user cut out. This has been a pain point for Sqoop users, as Sqoop does not use the HiveServer2 interface. Hence, users with a Sentry-secured Hive deployment were forced to split the import task into two steps: a plain HDFS import followed by manually loading the data into Hive.

With the inclusion of HDFS ACLs and the integration of Sentry into the Hive metastore in CDH 5.1, users were able to improve this situation and get the direct Hive import working again. However, this approach required manual administrator intervention to configure HDFS ACLs according to the Sentry configuration and needed a manual refresh to keep both systems in sync.

One of the large features included in the recently released CDH 5.3 is Sentry integration with HDFS, which enables customers to easily share data between Hive, Impala, and all the other Hadoop components that interact with HDFS (MapReduce, Spark, Pig, Sqoop, and so on), while ensuring that user access permissions only need to be set once and are uniformly enforced.

The rest of this post focuses on the example of using Sqoop together with this Sentry feature. Sqoop data can now be imported into Hive without any additional administrator intervention. By exposing Sentry policies—which tables a user can select from and which tables they can insert into—directly in HDFS, Sqoop re-uses the same policies that have been configured via GRANT/REVOKE statements or the Hue Sentry app and imports data into Hive without any trouble.

Configuration

In order for Sqoop to seamlessly import into a Sentry-secured Hive instance, the Hadoop administrator needs to follow a few configuration steps to enable all the necessary features. First, your cluster needs to be using the Sentry service as the backend for storing authorization metadata rather than relying on the older policy files.

If you are already using the Sentry service and GRANT/REVOKE statements, you can jump directly to step 3.

  1. Make sure that you have the Sentry service running on your cluster; you should see it in the service list.

  2. Make sure that Hive is configured to use this service as a backend for Sentry metadata.

  3. Finally, enable HDFS integration with Sentry.

Example Sqoop Import

Let’s assume that we have a user jarcec who needs to import data into a Hive database named default. User jarcec is part of a group that is also called jarcec; in real life the name of the group doesn’t have to be the same as the username, and that is fine.

With an unsecured Hive installation, the Hadoop administrator would have to jump in and grant write privileges to user jarcec on the directory /user/hive/warehouse or one of its subdirectories. With Sentry and HDFS integration, the Hadoop administrator no longer needs to step in. Instead, Sqoop will reuse the same authorization policies that have been configured through Hive SQL or via the Sentry Hue application. Let’s assume that user bc is jarcec's manager and already has the privilege to grant privileges in the default database.

    1. bc starts by invoking beeline and connecting to HiveServer2:

 

 

 

[bc@sqoopsentry-1 ~]$ beeline

 

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> !connect jdbc:hive2://sqoopsentry-1.vpc.cloudera.com:10000/default;principal=hive/sqoopsentry-1.vpc.cloudera.com@ENT.CLOUDERA.COM

    2. In case user jarcec is not part of any role yet, we need to create a role for him:

 

 

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> CREATE ROLE jarcec_role;

 

No rows affected (0.769 seconds)

    3. And this new role jarcec_role needs to be granted to jarcec's group jarcec:

 

 

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> GRANT ROLE jarcec_role to GROUP jarcec;

 

No rows affected (0.651 seconds)

    4. And finally, bc can grant access to database default (or any other database) to the role jarcec_role:

 

 

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> GRANT ALL ON DATABASE default TO ROLE jarcec_role;

 

No rows affected (0.16 seconds)

By executing the steps above, user jarcec has been given the privilege to perform any action (insert or select) on all objects inside the database default. That includes the ability to create new tables, insert data, or simply query existing tables. With those privileges, user jarcec can run the following Sqoop command just as he is used to:

 

 

 

[jarcec@sqoopsentry-1 ~]$ sqoop import --connect jdbc:mysql://mysql.ent.cloudera.com/sqoop --username sqoop --password sqoop --table text --hive-import

14/12/14 15:37:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.3.0

 …

14/12/14 15:38:58 INFO mapreduce.ImportJobBase: Transferred 249.7567 MB in 75.8448 seconds (3.293 MB/sec) 

14/12/14 15:38:58 INFO mapreduce.ImportJobBase: Retrieved 1000000 records.

14/12/14 15:38:58 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `text` AS t LIMIT 1

14/12/14 15:38:58 INFO hive.HiveImport: Loading uploaded data into Hive

14/12/14 15:39:09 INFO hive.HiveImport: 14/12/14 15:39:09 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

14/12/14 15:39:09 INFO hive.HiveImport:

14/12/14 15:39:09 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.26/jars/hive-common-0.13.1-cdh5.3.0.jar!/hive-log4j.properties 

14/12/14 15:39:12 INFO hive.HiveImport: OK

14/12/14 15:39:12 INFO hive.HiveImport: Time taken: 1.079 seconds

14/12/14 15:39:12 INFO hive.HiveImport: Loading data to table default.text

14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00000

14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00001

14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00002

14/12/14 15:39:13 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00003

14/12/14 15:39:13 INFO hive.HiveImport: Table default.text stats: [numFiles=4, numRows=0, totalSize=261888896, rawDataSize=0]

14/12/14 15:39:13 INFO hive.HiveImport: OK

14/12/14 15:39:13 INFO hive.HiveImport: Time taken: 0.719 seconds

14/12/14 15:39:13 INFO hive.HiveImport: Hive import complete.

14/12/14 15:39:13 INFO hive.HiveImport: Export directory is not empty, keeping it.

And jarcec can easily confirm in beeline that the data has indeed been imported into Hive:

 

 

0: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> show tables from default;

+------------+--+
|  tab_name  |
+------------+--+
| text       |
+------------+--+
1 row selected (0.177 seconds)

0: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> select count(*) from text;

+----------+--+
|   _c0    |
+----------+--+
| 1000000  |
+----------+--+
1 row selected (72.188 seconds)

If Hive is configured to inherit permissions, you might notice that Sqoop will print out several warnings similar to this one:

 

14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00000

As there is no need to inherit HDFS permissions when Sentry is enabled in HDFS, you can safely ignore such messages.

 

Making Hadoop Accessible to your Employees with LDAP

Hue easily integrates with your corporation’s existing identity management systems and provides authentication mechanisms for SSO providers. By changing a few configuration parameters, your employees can start doing big data analysis in their browser by leveraging an existing security policy.

This blog post details the various features and capabilities available in Hue for LDAP:

  1. Authentication
    1. Search bind
    2. Direct bind
  2. Importing users
  3. Importing groups
  4. Synchronizing users and groups
    1. Attributes synchronized
    2. Useradmin interface
    3. Command line interface
  5. LDAP search
  6. Case sensitivity
  7. LDAPS/StartTLS support
  8. Debugging
  9. Notes
  10. Summary

1.    Authentication

In the typical authentication scheme, Hue manages its own user accounts: passwords are saved in the Hue database.

With the Hue LDAP integration, users can use their LDAP credentials to authenticate and inherit their existing groups transparently. There is no need to save or duplicate any employee password in Hue.

There are several other ways to authenticate with Hue: PAM, SPNEGO, OpenID, OAuth, SAML2, etc. This section details how Hue can authenticate against an LDAP directory server.

When authenticating via LDAP, Hue validates login credentials against a directory service if configured with this authentication backend:


[desktop]

 [[auth]]

 backend=desktop.auth.backend.LdapBackend

The LDAP authentication backend will automatically create users that don’t exist in Hue by default. Hue needs to import users in order to properly perform the authentication. The password is never imported when importing users. The following configuration can be used to disable automatic import:


[desktop]

  [[ldap]]

  create_users_on_login=false

The purpose of disabling the automatic import is to allow only a predefined list of manually imported users to log in.

The case sensitivity of the authentication process is defined in the “Case sensitivity” section below.

Note

If a user was logging in as A before enabling LDAP auth and then logs in as B after enabling LDAP auth, all workflows, queries, etc. will be associated with user A and be unavailable. The old workflows would need to have their owner fields changed to B; this can be done in the Hue shell.

There are two different ways to authenticate with a directory service through Hue:

  1. Search bind
  2. Direct bind

1.1.    Search bind

The search bind mechanism for authenticating will perform an ldapsearch against the directory service and bind using the found distinguished name (DN) and password provided. This is, by default, used when authenticating with LDAP. The configurations that affect this mechanism are outlined in “LDAP search”.
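A minimal sketch of a search bind setup, using placeholder values for the directory server (bind_dn, bind_password, and base_dn are the same configurations referenced in the Notes section below):

[desktop]
    [[ldap]]
    ldap_url=ldap://ldap.example.com
    search_bind_authentication=true
    base_dn="DC=example,DC=com"
    bind_dn="uid=hue,ou=ServiceAccounts,DC=example,DC=com"
    bind_password=SomePassword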

1.2.    Direct bind

The direct bind mechanism for authenticating will bind to the ldap server using the username and password provided at login. There are two options that can be used to choose how Hue binds:

  1. nt_domain – Domain component for User Principal Names (UPN) in Active Directory. This Active Directory specific idiom allows Hue to authenticate with Active Directory without having to follow LDAP references to other partitions. This typically maps to the email address of the user, or the user's ID in conjunction with the domain.
  2. ldap_username_pattern – Provides a template for the DN that will ultimately be sent to the directory service when authenticating.

If ‘nt_domain’ is provided, then Hue will use a UPN to bind to the LDAP service:


[desktop]

  [[ldap]]

  nt_domain=example.com

Otherwise, the ‘ldap_username_pattern’ configuration is used (the <username> parameter will be replaced with the username provided at login):


[desktop]

    [[ldap]]

    ldap_username_pattern="uid=<username>,ou=People,DC=hue-search,DC=ent,DC=cloudera,DC=com"

Typical attributes to search for include:

  1. uid
  2. sAMAccountName

To enable direct bind authentication, the ‘search_bind_authentication’ configuration must be set to false:


[desktop]

    [[ldap]]

    search_bind_authentication=false

2.    Importing users

If an LDAP user needs to be part of a certain group and have a particular set of permissions, then this user can be imported via the Useradmin interface. There are two options available when importing:

  1. Distinguished name
  2. Create home directory

If ‘Create home directory’ is checked, when the user is imported their home directory in HDFS will automatically be created, if it doesn’t already exist.

If ‘Distinguished name’ is checked, then the username provided must be a full distinguished name (e.g. uid=hue,ou=People,dc=gethue,dc=com). Otherwise, the username provided should be a fragment of a Relative Distinguished Name (rDN) (e.g., the username “hue” maps to the rDN “uid=hue”). Hue will perform an LDAP search using the same methods and configurations as defined in the “LDAP search” section. Essentially, Hue will take the provided username and create a search filter using the ‘user_filter’ and ‘user_name_attr’ configurations. For more information on how Hue performs LDAP searches, see the “LDAP search” section.

The case sensitivity of the search and import processes are defined in the “Case sensitivity” section.

3.    Importing groups

Groups are importable via the Useradmin interface. Users can then be added to a group, which grants them a set of permissions (e.g. access to the Impala application). This function works almost exactly the same way as user importing, but has a couple of extra features.

Not only can groups be discovered via DN and rDN search, but users that are members of the group, and members of the group's subordinate groups, can be imported as well. Posix groups and their members are automatically imported if the group found has the object class "posixGroup".

4.    Synchronizing users and groups

Users and groups can be synchronized with the directory service via the Useradmin interface or via a command line utility. In the Useradmin interface, “Sync” indicates that when a user or group that already exists in Hue is being added, it will in fact be synchronized instead. In the case of importing users for a particular group, new users will be imported and existing users will be synchronized. Note: users that have been deleted from the directory service will not be deleted from Hue. Those users can be manually deactivated from Hue via the Useradmin interface.

A user's groups can be synced when they log in (to keep their permissions in sync):


[desktop]

  [[ldap]]

  # Synchronize a users groups when they login

  ## sync_groups_on_login=false

4.1.    Attributes synchronized

Currently, only the first name, last name, and email address are synchronized. Hue looks for the LDAP attributes ‘givenName’, ‘sn’, and ‘mail’ when synchronizing.  Also, the ‘user_name_attr’ config is used to appropriately choose the username in Hue. For instance, if ‘user_name_attr’ is set to “uid”, then the “uid” returned by the directory service will be used as the username of the user in Hue.

4.2.    Useradmin interface

The “Sync LDAP users/groups” button in the Useradmin interface will  automatically synchronize all users and groups.


4.3.    Command line interface

Here’s a quick example of how to use the command line interface to synchronize users and groups:

<hue root>/build/env/bin/hue sync_ldap_users_and_groups

5.    LDAP search

There are two configurations for restricting the search process:

  1. user_filter – General LDAP filter to restrict the search.
  2. user_name_attr – Which attribute will be considered the username to search against.

Here is an example configuration:


[desktop]

    [[ldap]]

    [[[users]]]

    user_filter=”objectClass=*”

    user_name_attr=uid

    # Whether or not to follow referrals

    ## follow_referrals=false

With the above configuration, the LDAP search filter will take on the form:

(&(objectClass=*)(uid=<user-entered username>))

6.    Case sensitivity

Hue can be configured to ignore the case of usernames as well as force usernames to lower case via the ‘ignore_username_case’ and ‘force_username_lowercase’ configurations. These two configurations are recommended to be used in conjunction with each other. This is useful when integrating with a directory service containing usernames in capital letters and unix usernames in lowercase letters (which is a Hadoop requirement). Here is an example of configuring them:


[desktop]

    [[ldap]]

    ignore_username_case=true

    force_username_lowercase=true

7.    LDAPS/StartTLS support

Secure communication with LDAP is provided via the SSL/TLS and StartTLS protocols. It allows Hue to validate the directory service it’s going to converse with. Practically speaking, if a Certificate Authority Certificate file is provided, Hue will communicate via LDAPS:


[desktop]

    [[ldap]]

    ldap_cert=/etc/hue/ca.crt

The StartTLS protocol can be used as well (step up to SSL/TLS):


[desktop]

    [[ldap]]

    use_start_tls=true

8.    Debugging

To get more information when querying LDAP, turn on debugging and use the ldapsearch tool:


[desktop]

    [[ldap]]

    debug=true

    # Sets the debug level within the underlying LDAP C lib.

    ## debug_level=255

    # Possible values for trace_level are 0 for no logging, 1 for only logging the method calls with arguments,

    # 2 for logging the method calls with arguments and the complete results and 9 for also logging the traceback of method calls.

    trace_level=0

Note

Make sure to add to the Hue server environment:


DESKTOP_DEBUG=true

DEBUG=true

9.    Notes

  1. Setting “search_bind_authentication=true” in the hue.ini will tell Hue to perform an LDAP search using the bind credentials specified in the hue.ini (bind_dn, bind_password). Hue will then search using the base DN specified in “base_dn” for an entry with the attribute, defined in “user_name_attr”, with the value of the short name provided in the login page. The search filter, defined in “user_filter” will also be used to limit the search. Hue will search the entire subtree starting from the base DN.
  2. Setting  ”search_bind_authentication=false” in the hue.ini will tell Hue to perform a direct bind to LDAP using the credentials provided (not bind_dn and bind_password specified in the hue.ini). There are two effective modes here:
    1. nt_domain is specified in the hue.ini: This is used to connect to an Active Directory directory service. In this case, the UPN (User Principal Name) is used to perform a direct bind. Hue forms the UPN by concatenating the short name provided at login and the nt_domain like so: “<short name>@<nt_domain>”. The ‘ldap_username_pattern’ config is completely ignored.
    2. nt_domain is NOT specified in the hue.ini: This is used to connect to all other directory services (can even handle Active Directory, but nt_domain is the preferred way for AD). In this case, ‘ldap_username_pattern’ is used and it should take on the form “cn=<username>,dc=example,dc=com” where <username> will be replaced with whatever is provided at the login page.
  3. The UserAdmin app will always perform an LDAP search when managing LDAP entries and will then always use the “bind_dn”, “bind_password”, “base_dn”, etc. as defined in the hue.ini.
  4. At this point in time, there are no bind semantics supported other than SIMPLE_AUTH. For instance, we do not yet support MD5-DIGEST, NEGOTIATE, etc. We definitely want to hear from folks what they use so we can prioritize these things accordingly!

10.    Summary

The Hue team is working hard on improving security. Upcoming LDAP features include importing nested LDAP groups and multi-domain support for Active Directory. We hope this brief overview of LDAP in Hue will help you make your system more secure, more compliant with current security standards, and open up big data analysis to many more users!

Group Synchronization Backends in Hue

Hue is the turn-key solution for Apache Hadoop. It hides the complexity of the ecosystem, including HDFS, Oozie, MapReduce, etc. Hue provides authentication and integrates with SAML, LDAP, and other systems. A new feature added in Hue is the ability to synchronize groups with a third-party authority provider. In this blog post, we’ll be covering the basics of creating a Group Synchronization Backend.

The Design

The purpose of the group synchronization backends is to keep Hue’s internal group lists fresh. The design was separated into two functional parts:

  1. A way to synchronize on every request.
  2. A definition of how and what to synchronize.

Image 1: Request cycle in Hue with a synchronization backend.

The first function is a Django middleware that is called on every request. It is intended to be immutable, but configurable. The second function is a backend that can be customized. This gives developers the ability to choose how their groups and user-group memberships can be synchronized. The middleware can be configured to use a particular synchronization backend and will call it on every request. If no backend is configured, then the middleware is disabled.

Creating Your Own Backend

A synchronization backend can be created by extending a class and providing your own logic. Here is an example backend that comes packaged with Hue:

class LdapSynchronizationBackend(DesktopSynchronizationBackendBase):
  USER_CACHE_NAME = 'ldap_use_group_sync_cache'

  def sync(self, request):
    user = request.user
    if not user or not user.is_authenticated():
      return

    if not User.objects.filter(username=user.username, userprofile__creation_method=str(UserProfile.CreationMethod.EXTERNAL)).exists():
      LOG.warn("User %s is not an Ldap user" % user.username)
      return

    # Cache should be cleared when user logs out.
    if self.USER_CACHE_NAME not in request.session:
      request.session[self.USER_CACHE_NAME] = import_ldap_users(user.username, sync_groups=True, import_by_dn=False)
      request.session.modified = True

In the above code snippet, the synchronization backend is defined by extending “DesktopSynchronizationBackendBase”. Then, the method “sync(self, request)” is overridden and provides the syncing logic.

Configuration

The synchronization middleware can be configured to use a backend by changing “desktop -> auth -> user_group_membership_synchronization_backend” to the full import path of your class. For example, setting this config to “desktop.auth.backend.LdapSynchronizationBackend” configures Hue to synchronize with the configured LDAP authority.
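In hue.ini, that configuration looks like the following minimal sketch (using the LDAP backend that ships with Hue):

[desktop]
    [[auth]]
    user_group_membership_synchronization_backend=desktop.auth.backend.LdapSynchronizationBackend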

Design Intelligently

Backends in Hue are extremely powerful and can affect the performance of the server. So, they should be designed in such a fashion that they do not do any operations that block for long periods of time. Also, they should manage the following appropriately:

  1. Throttling requests to whatever service contains the group information.
  2. Ensuring users are authenticated.
  3. Caching if appropriate.

Summary

Hue is enterprise-grade software ready to integrate with LDAP, SAML, etc. The newest feature, Group Synchronization, ensures that the corporate authority stays fresh in Hue. It’s easy to configure and create backends, and Hue comes with an LDAP backend.

Hue is undergoing heavy development and is welcoming external contributions! Have any suggestions? Feel free to tell us what you think through hue-user or @gethue.