Installing Oracle R Distribution and Oracle R Enterprise on Exadata

ORE Installation Documentation and Requirements

Check the official Oracle R Enterprise documentation for a description of the requirements and installation steps.

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36763/intro.htm#OREAD110

Check the support matrix to ensure that the R distribution supports the version of ORE to be installed.

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36762/toc.htm#ORERN105

Install the version of ORE that is compatible with the Client / Studio that the customer is using. In this case the customer is using R 2.15.3.

Install R 2.15.3 Distribution.

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36762/toc.htm#ORERN104

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36763.pdf

Software Download

http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/ore-downloads-1502823.html

Overview

  1. Install Oracle R Distribution (RPMs) on all Exadata Nodes. The Oracle R distribution is a prerequisite for installing Oracle R Enterprise.
  2. Execute the Oracle R Enterprise installation script on all Exadata nodes. The first execution, on the first node, creates the database repository and packages. Subsequent executions on the other nodes copy the required libraries onto that node.
  3. Install Supporting files on all Exadata Nodes.
  4. Create or grant access to the database user.

Obtaining RPMs

The method used for this installation is the RPM method. (Most Exadata systems will not have direct access to the public yum repositories.)

There are several options for obtaining the required RPMs. The method used in this case was to download the RPMs from the public yum server using a VM that has internet access. The RPMs were then transferred to Exadata.

Using the yum --downloadonly feature:

yum install yum-downloadonly

yum install <rpmfile> -y --downloadonly --downloaddir=/u01/downloads

or

yum reinstall <rpmfile> -y --downloadonly --downloaddir=/u01/downloads
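If many packages are needed, the download step can be scripted on the internet-connected VM and the staged files copied over in one pass; a minimal sketch (the package list and staging paths are illustrative):

for pkg in libXau-devel libX11-devel mesa-libGL-devel xorg-x11-proto-devel R-core R-devel; do
  # use 'yum reinstall' instead for packages already present on the VM
  yum install $pkg -y --downloadonly --downloaddir=/u01/downloads
done
scp /u01/downloads/*.rpm oracle@dm01dbadm01:/u01/downloads/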

Installing RPMs

Check Exadata for RPMs already installed (e.g. rpm -qv libXau-devel).

The list below is the order in which the RPMs were installed based on their dependencies.

(The required RPMs and the versions will change over time. Check documentation).

Install the RPMs on all Exadata DB nodes:

rpm -Uvh libXau-devel-1.0.1-3.1.x86_64.rpm

rpm -Uvh libX11-devel-1.0.3-11.el5_7.1.x86_64.rpm mesa-libGL-devel-6.5.1-7.11.el5_9.x86_64.rpm xorg-x11-proto-devel-7.1-13.el5.x86_64.rpm libXdmcp-devel-1.0.1-2.1.x86_64.rpm

rpm -Uvh libtiff-3.8.2-18.el5_8.x86_64.rpm

rpm -Uvh cups-libs-1.3.7-30.el5_9.3.x86_64.rpm

rpm -Uvh cairo-1.2.4-5.el5.x86_64.rpm

rpm -Uvh atk-1.12.2-1.fc6.x86_64.rpm

rpm -Uvh hicolor-icon-theme-0.9-2.1.noarch.rpm

rpm -Uvh bitstream-vera-fonts-1.10-7.noarch.rpm

rpm -Uvh pango-1.14.9-8.0.1.el5_7.3.x86_64.rpm

rpm -Uvh gtk2-2.10.4-29.el5.x86_64.rpm

rpm -Uvh poppler-0.5.4-19.el5.x86_64.rpm

rpm -Uvh poppler-utils-0.5.4-19.el5.x86_64.rpm

rpm -Uvh paps-0.6.6-20.el5.x86_64.rpm

rpm -Uvh dbus-python-0.70-9.el5_4.x86_64.rpm

rpm -Uvh avahi-0.6.16-10.el5_6.x86_64.rpm

rpm -Uvh avahi-compat-libdns_sd-0.6.16-10.el5_6.x86_64.rpm

rpm -Uvh cups-1.3.7-30.el5_9.3.x86_64.rpm

warning: user lp does not exist - using root

warning: user lp does not exist - using root

rpm -Uvh libgfortran-4.1.2-52.el5_8.1.i386.rpm

rpm -Uvh tetex-fonts-3.0-33.15.el5_8.1.x86_64.rpm

rpm -Uvh tetex-dvips-3.0-33.15.el5_8.1.x86_64.rpm

rpm -Uvh libgfortran-4.1.2-54.el5.x86_64.rpm

rpm -Uvh libfontenc-1.0.2-2.2.el5.x86_64.rpm

rpm -Uvh libXfont-1.2.2-1.0.4.el5_7.x86_64.rpm

rpm -Uvh ttmkfdir-3.0.9-23.el5.x86_64.rpm

rpm -Uvh xorg-x11-font-utils-7.1-3.x86_64.rpm

rpm -Uvh libFS-1.0.0-3.1.x86_64.rpm

rpm -Uvh chkfontpath-1.10.1-1.1.x86_64.rpm xorg-x11-xfs-1.0.2-5.el5_6.1.x86_64.rpm

rpm -Uvh urw-fonts-2.3-6.1.1.noarch.rpm

rpm -Uvh ghostscript-fonts-5.50-13.1.1.noarch.rpm ghostscript-8.70-14.el5_8.1.x86_64.rpm

rpm -Uvh netpbm-10.35.58-10.el5.x86_64.rpm

rpm -Uvh netpbm-progs-10.35.58-10.el5.x86_64.rpm

rpm -Uvh desktop-file-utils-0.10-7.x86_64.rpm

rpm -Uvh dialog-1.0.20051107-1.2.2.x86_64.rpm

rpm -Uvh ed-0.2-39.el5_2.x86_64.rpm

rpm -Uvh tetex-3.0-33.15.el5_8.1.x86_64.rpm

rpm -Uvh tetex-latex-3.0-33.15.el5_8.1.x86_64.rpm

rpm -Uvh R-core-2.15.3-1.el5.x86_64.rpm

rpm -Uvh bzip2-devel-1.0.3-6.el5_5.x86_64.rpm

rpm -Uvh gmp-4.1.4-10.el5.x86_64.rpm

rpm -Uvh gcc-gfortran-4.1.2-54.el5.x86_64.rpm

rpm -Uvh pcre-devel-6.6-6.el5_6.1.x86_64.rpm

rpm -Uvh tcl-devel-8.4.13-6.el5.x86_64.rpm

rpm -Uvh tk-devel-8.4.13-5.el5_1.1.x86_64.rpm

rpm -Uvh zlib-devel-1.2.3-7.el5.x86_64.rpm

rpm -Uvh texinfo-4.8-14.el5.x86_64.rpm

rpm -Uvh texinfo-tex-4.8-14.el5.x86_64.rpm

rpm -Uvh R-devel-2.15.3-1.el5.x86_64.rpm

rpm -Uvh libRmath-2.15.3-1.el5.x86_64.rpm

rpm -Uvh libRmath-devel-2.15.3-1.el5.x86_64.rpm

rpm -Uvh R-2.15.3-1.el5.x86_64.rpm

At this point, attempting to install the RPMs below simply reports that they are already installed:

rpm -Uvh zlib-1.2.3-7.el5.i386.rpm

rpm -Uvh zlib-1.2.3-7.el5.x86_64.rpm

rpm -Uvh libgcc-4.1.2-54.el5.i386.rpm


rpm -Uvh libstdc++-4.1.2-54.el5.x86_64.rpm

rpm -Uvh libstdc++-4.1.2-54.el5.i386.rpm

rpm -Uvh tcl-8.4.13-6.el5.x86_64.rpm

rpm -Uvh libstdc++-devel-4.1.2-54.el5.x86_64.rpm

rpm -Uvh libstdc++-devel-4.1.2-54.el5.i386.rpm

rpm -Uvh cpp-4.1.2-54.el5.x86_64.rpm

rpm -Uvh gcc-c++-4.1.2-54.el5.x86_64.rpm

rpm -Uvh gcc-4.1.2-54.el5.x86_64.rpm

rpm -Uvh mesa-libGL-6.5.1-7.11.el5_9.x86_64.rpm
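Repeating these commands by hand on every database node is error-prone; on Exadata the dcli utility can push them to all DB nodes at once. A hedged sketch (the group file and staging directory are assumptions for this environment):

dcli -g ~/dbs_group -l root 'rpm -qv libXau-devel'
dcli -g ~/dbs_group -l root 'rpm -Uvh /u01/downloads/libXau-devel-1.0.1-3.1.x86_64.rpm'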

Check the installed R software and its version:

[root@dm01dbadm04 ~]# R

Oracle Distribution of R version 2.15.3 (–) — “Security Blanket”

Copyright (C) The R Foundation for Statistical Computing

ISBN 3-900051-07-0

Platform: x86_64-unknown-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.

You are welcome to redistribute it under certain conditions.

Type ‘license()’ or ‘licence()’ for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.

Type ‘contributors()’ for more information and

‘citation()’ on how to cite R or R packages in publications.

Type ‘demo()’ for some demos, ‘help()’ for on-line help, or

‘help.start()’ for an HTML browser interface to help.

Type ‘q()’ to quit R.

You are using Oracle’s distribution of R. Please contact

Oracle Support for any problems you encounter with this

distribution.

> q()

Save workspace image? [y/n/c]: n

Install Oracle R Enterprise

  1. Download the ORE Server and Supporting zip packages for Linux 64-bit.

http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/ore-downloads-1502823.html

  2. Transfer both the Server and the Supporting files to each Exadata node and unzip.
  3. Login as the "oracle" user and set the ORACLE_SID and Oracle environment variables (a sketch follows this list).
  4. Execute the "install.sh" script within the "server" directory.
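A minimal sketch of the environment setup before running install.sh; the ORACLE_HOME and ORACLE_SID values are the ones that appear in the session output below, while the location of the unzipped Server package is an assumption:

[oracle@dm01dbadm02 ~]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
[oracle@dm01dbadm02 ~]$ export ORACLE_SID=phppovx2
[oracle@dm01dbadm02 ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@dm01dbadm02 ~]$ cd /u01/downloads/ore/server    # assumed unzip location of the ORE Server package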

[oracle@dm01dbadm02 server]$ ./install.sh

Oracle R Enterprise 1.3.1 Server Installation.

Copyright (c) 2012, 2013 Oracle and/or its affiliates. All rights reserved.

Checking R ………………. Pass

Checking R libraries ……… Pass

Checking ORACLE_HOME ……… Pass

Checking ORACLE_SID ………. Pass

Checking sqlplus …………. Pass

Checking ORACLE instance ….. Pass

Checking ORE …………….. Pass

Current configuration

R_HOME               = /usr/lib64/R

R_LIBS_USER         = /u01/app/oracle/product/11.2.0.3/dbhome_1/R/library

ORACLE_HOME         = /u01/app/oracle/product/11.2.0.3/dbhome_1

ORACLE_SID           = phppovx2

Do you wish to install ORE? [yes]

Choosing RQSYS tablespaces

PERMANENT tablespace to use for RQSYS [SYSAUX]: RQSYS

ERROR: PERMANENT tablespace RQSYS not found

PERMANENT tablespace to use for RQSYS [SYSAUX]:

TEMPORARY tablespace to use for RQSYS [TEMP]:

Tablespaces summary

PERMANENT tablespace = SYSAUX

TEMPORARY tablespace = TEMP

Installing libraries ……… Pass

Installing RQSYS data …….. Pass

Installing RQSYS code …….. Pass

Installing ORE packages …… Pass

Creating ORE script ………. Pass

NOTE: ORE has been enabled for all database users. Next, install the

supporting packages.

You may create an ORE user with the demo_user.sh script, which

automatically grants the required privileges. A complete list of

privileges is available in the script rquser.sql.

To use ORE Embedded R Execution functionality, grant the user

the RQADMIN role.

Please, consult the documentation for more information.

Done

[oracle@dm01dbadm02 server]$

  5. Change directory to the "supporting" directory from the Supporting zip file and install the supporting files:

ORE CMD INSTALL ROracle_1.1-9_R_x86_64-unknown-linux-gnu.tar.gz

ORE CMD INSTALL DBI_0.2-5_R_x86_64-unknown-linux-gnu.tar.gz

ORE CMD INSTALL png_0.1-4_R_x86_64-unknown-linux-gnu.tar.gz

Check the ORE installation:

ORE -e "library(ORE)"

  6. Execute the ORE server install.sh script on all other RAC nodes:

[oracle@dm01dbadm01 server]$ ./install.sh

Oracle R Enterprise 1.3.1 Server Installation.

Copyright (c) 2012, 2013 Oracle and/or its affiliates. All rights reserved.

Checking R ………………. Pass

Checking R libraries ……… Pass

Checking ORACLE_HOME ……… Pass

Checking ORACLE_SID ………. Pass

Checking sqlplus …………. Pass

Checking ORACLE instance ….. Pass

Checking ORE …………….. Pass

Current configuration

R_HOME               = /usr/lib64/R

R_LIBS_USER         = /u01/app/oracle/product/11.2.0.3/dbhome_1/R/library

ORACLE_HOME         = /u01/app/oracle/product/11.2.0.3/dbhome_1

ORACLE_SID           = osspovx1

Installing libraries ……… Pass

Installing ORE packages …… Pass

Creating ORE script ………. Pass

NOTE: ORE has been enabled for all database users. Next, install the

supporting packages.

You may create an ORE user with the demo_user.sh script, which

automatically grants the required privileges. A complete list of

privileges is available in the script rquser.sql.

To use ORE Embedded R Execution functionality, grant the user

the RQADMIN role.

Please, consult the documentation for more information.

Done

The following libraries are copied to the locations shown:

ls -l $ORACLE_HOME/lib/ore.so

ls -l $ORACLE_HOME/lib/librqe.so

ls -l $ORACLE_HOME/R/library

Install the supporting files on all other Exadata DB nodes.

[oracle@dm01dbadm01 supporting]$ ORE CMD INSTALL ROracle_1.1-9_R_x86_64-unknown-linux-gnu.tar.gz

* installing to library ‘/u01/app/oracle/product/11.2.0.3/dbhome_1/R/library’

* installing *binary* package ‘ROracle’ …

* DONE (ROracle)

[oracle@dm01dbadm01 supporting]$ ORE CMD INSTALL DBI_0.2-5_R_x86_64-unknown-linux-gnu.tar.gz

* installing to library ‘/u01/app/oracle/product/11.2.0.3/dbhome_1/R/library’

* installing *binary* package ‘DBI’ …

* DONE (DBI)

[oracle@dm01dbadm01 supporting]$ ORE CMD INSTALL png_0.1-4_R_x86_64-unknown-linux-gnu.tar.gz

* installing to library ‘/u01/app/oracle/product/11.2.0.3/dbhome_1/R/library’

* installing *binary* package ‘png’ …

* DONE (png)

[oracle@dm01dbadm01 supporting]$

  7. Grant the following privileges to existing database users that will be used for ORE:

grant RQADMIN to OSSDM;

grant create mining model to OSSDM;

Execute demo_user.sh to create new database users for ORE.

  8. Validate the ORE installation:

ORE -e "library(ORE)"

See the documentation for further validation checks:

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36763/postinstall.htm#OREAD190
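Beyond loading the library, a quick hedged check is to connect to the database and list the schema contents; the user below is the one granted above, while the SID, host, and password are placeholders for this environment:

ORE -e 'library(ORE); ore.connect("OSSDM", sid="osspovx1", host="localhost", password="<password>", all=TRUE); print(ore.is.connected()); print(ore.ls())'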


How to Install Kerberos 5 KDC Server on Linux for Authentication

Kerberos is a network authentication protocol.

Kerberos provides strong cryptographic authentication between devices, which lets clients and servers communicate in a more secure manner. It is designed to address network security problems.

While firewalls act as a solution against intrusion from external networks, Kerberos is typically used to address intrusion and other security problems within the network.

The current version of Kerberos is version 5, referred to as KRB5.

To implement Kerberos, we need a centralized authentication service running on a server. This service is called the Key Distribution Center (KDC). A server registered with the KDC is trusted by all other computers in the Kerberos realm.

Sample krb5.conf File

Here's an example krb5.conf file that contains all the REALM and domain-to-REALM mapping information:

# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM

Install Kerberos KDC server

For security reasons, it is recommended to run the Kerberos KDC on a dedicated server.

Download and install the krb5 server package.

# rpm -ivh krb5-server-1.10.3-10.el6_4.6.x86_64.rpm
Preparing...       ########################################### [100%]
   1:krb5-server   ########################################### [100%]

Verify that the following RPMs are installed before configuring the KDC:

# rpm -qa | grep -i krb5
pam_krb5-2.3.11-9.el6.x86_64
krb5-server-1.10.3-10.el6_4.6.x86_64
krb5-workstation-1.10.3-10.el6_4.6.x86_64
krb5-libs-1.10.3-10.el6_4.6.x86_64

Modify /etc/krb5.conf File

Change /etc/krb5.conf as shown below, with the appropriate REALM and DOMAIN_REALM mappings.

# cat /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = MYREALM.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 MYREALM.COM = {
  kdc = elserver1.example.com
  admin_server = elserver1.example.com
 }

[domain_realm]
 .myrealm.com = MYREALM.COM
 myrealm.com = MYREALM.COM

Modify kdc.conf File

The kdc.conf file should also be modified, as shown below.

# cat /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 MYREALM.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

Create KDC database

Next, create the KDC database using the kdb5_util command as shown below. At this stage, enter the appropriate password for the KDC database master key.

# /usr/sbin/kdb5_util create -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'MYREALM.COM',
master key name 'K/M@MYREALM.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:

Assign Administrator Privilege

Users can be granted administrator privileges on the database via the file /var/kerberos/krb5kdc/kadm5.acl.

# cat /var/kerberos/krb5kdc/kadm5.acl
*/admin@MYREALM.COM     *

In the above example, any principal in the MYREALM.COM realm with an admin instance has all administrator privileges.

Create a Principal

Create the principal using the following command. In this example, I created the principal with the user name “eluser”.

# kadmin.local -q "addprinc eluser/admin"
Authenticating as principal root/admin@MYREALM.COM with password.
WARNING: no policy specified for eluser/admin@MYREALM.COM; defaulting to no policy
Enter password for principal "eluser/admin@MYREALM.COM":
Re-enter password for principal "eluser/admin@MYREALM.COM":
Principal "eluser/admin@MYREALM.COM" created.

Start the Kerberos Service

Start the KDC and kadmin daemons as shown below.

# service krb5kdc start
Starting Kerberos 5 KDC:               [  OK  ]

# service kadmin start
Starting Kerberos 5 Admin Server:      [  OK  ]
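To have both daemons start automatically after a reboot, they can also be enabled at boot time (a small addition, assuming the EL6-style init scripts used above):

# chkconfig krb5kdc on
# chkconfig kadmin on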

Configure Kerberos Client

Now let's see how to configure the krb5 client to authenticate against the Kerberos KDC database we created above.

Step 1: Install the krb5-libs and krb5-workstation packages on the client machine.

Step 2: Copy the /etc/krb5.conf from the KDC server to the client machine.

Step 3: Now we need to create the principal for the client in the KDC/Kerberos database.

Use the command below to create the principal for the client machine on the KDC master server. In this example, I am creating a host principal for the client elserver3.example.com on the master KDC server elserver1.example.com.

# kadmin.local -q "addprinc host/elserver3.example.com"
Authenticating as principal root/admin@MYREALM.COM with password.
WARNING: no policy specified for host/elserver3.example.com@MYREALM.COM; defaulting to no policy
Enter password for principal "host/elserver3.example.com@MYREALM.COM":
Re-enter password for principal "host/elserver3.example.com@MYREALM.COM":
Principal "host/elserver3.example.com@MYREALM.COM" created.

Step 4: Extract the krb5.keytab for the client from the KDC master server using the below command:

# kadmin.local -q "ktadd -k /etc/krb5.keytab host/elserver3.example.com"
Authenticating as principal root/admin@MYREALM.COM with password.
Entry for principal host/elserver3.example.com with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/elserver3.example.com with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/etc/krb5.keytab.

This completes the configuration. From now on, every time you establish an SSH or RSH connection, the host verifies its identity against the KDC database using the keytab file and establishes a secure connection over Kerberos.
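A quick way to confirm the setup from the client is to request and list a ticket for the eluser/admin principal created earlier (a hedged check; kinit prompts for that principal's password):

# kinit eluser/admin
Password for eluser/admin@MYREALM.COM:
# klist

klist should then show a krbtgt/MYREALM.COM@MYREALM.COM ticket issued to eluser/admin@MYREALM.COM.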

ktadd is used to generate a new keytab or add a principal to an existing keytab from within kadmin.

ktremove is used to remove a principal from an existing keytab. The command to remove the principal that we created above is:

kadmin.local -q "ktremove -k /etc/krb5.keytab -q host/elserver3.example.com all"

Delete a KDC database

If for some reason you have to delete the KDC database, use the following command:

# kdb5_util -r MYREALM.COM destroy
kdb5_util: Deleting KDC database stored in /usr/local/var/krb5kdc/principal, are you sure
(type 'yes' to confirm)? <== yes
OK, deleting database '/usr/local/var/krb5kdc/principal'...

Adding the -f option to the destroy command forces the deletion of the KDC database without prompting for confirmation.

Backup and Restore KDC Database

To back up the KDC database to a file, use kdb5_util dump.

# kdb5_util dump kdcfile

# ls -l kdcfile
-rw-------. 1 root root 5382 Apr 10 07:25 kdcfile

To restore the KDC database from the dump file created in the above step, do the following:

# kdb5_util load kdcfile

How-to: Analyze Twitter Data with Apache Hadoop (http://blog.cloudera.com/blog/2012/09/analyzing-twitter-data-with-hadoop/)

Social media has gained immense popularity with marketing teams, and Twitter is an effective tool for a company to get people excited about its products. Twitter makes it easy to engage users and communicate directly with them, and in turn, users can provide word-of-mouth marketing for companies by discussing the products. Given limited resources, and knowing we may not be able to talk to everyone we want to target directly, marketing departments can be more efficient by being selective about whom we reach out to.

In this post, we'll learn how we can use Apache Flume, Apache HDFS, Apache Oozie, and Apache Hive to design an end-to-end data pipeline that will enable us to analyze Twitter data. This will be the first post in a series. The posts to follow will describe, in more depth, how each component is involved and how the custom code operates. All the code and instructions necessary to reproduce this pipeline are available on the Cloudera GitHub.

Who is Influential?

To understand whom we should target, let’s take a step back and try to understand the mechanics of Twitter. A user – let’s call him Joe – follows a set of people, and has a set of followers. When Joe sends an update out, that update is seen by all of his followers. Joe can also retweet other users’ updates. A retweet is a repost of an update, much like you might forward an email. If Joe sees a tweet from Sue, and retweets it, all of Joe’s followers see Sue’s tweet, even if they don’t follow Sue. Through retweets, messages can get passed much further than just the followers of the person who sent the original tweet. Knowing that, we can try to engage users whose updates tend to generate lots of retweets. Since Twitter tracks retweet counts for all tweets, we can find the users we’re looking for by analyzing Twitter data.

Now we know the question we want to ask: Which Twitter users get the most retweets? Who is influential within our industry?

How Do We Answer These Questions?

SQL queries can be used to answer this question: We want to look at which users are responsible for the most retweets, in descending order of most retweeted. However, querying Twitter data in a traditional RDBMS is inconvenient, since the Twitter Streaming API outputs tweets in a JSON format which can be arbitrarily complex. In the Hadoop ecosystem, the Hive project provides a query interface which can be used to query data that resides in HDFS. The query language looks very similar to SQL, but allows us to easily model complex types, so we can easily query the type of data we have. Seems like a good place to start. So how do we get Twitter data into Hive? First, we need to get Twitter data into HDFS, and then we’ll be able to tell Hive where the data resides and how to read it.

The diagram above shows a high-level view of how some of the CDH (Cloudera’s Distribution Including Apache Hadoop) components can be pieced together to build the data pipeline we need to answer the questions we have. The rest of this post will describe how these components interact and the purposes they each serve.

Gathering Data with Apache Flume

The Twitter Streaming API will give us a constant stream of tweets coming from the service. One option would be to use a simple utility like curl to access the API and then periodically load the files. However, this would require us to write code to control where the data goes in HDFS, and if we have a secure cluster, we will have to integrate with security mechanisms. It will be much simpler to use components within CDH to automatically move the files from the API to HDFS, without our manual intervention.

Apache Flume is a data ingestion system that is configured by defining endpoints in a data flow called sources and sinks. In Flume, each individual piece of data (tweets, in our case) is called an event; sources produce events, and send the events through a channel, which connects the source to the sink. The sink then writes the events out to a predefined location. Flume supports some standard data sources, such as syslog or netcat. For this use case, we’ll need to design a custom source that accesses the Twitter Streaming API, and sends the tweets through a channel to a sink that writes to HDFS files. Additionally, we can use the custom source to filter the tweets on a set of search keywords to help identify relevant tweets, rather than a pure sample of the entire Twitter firehose. The custom Flume source code can be found here.
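The exact agent definition lives in the linked repository; purely as a hedged sketch (the source class name and the keywords property are assumptions based on that repo, and the Twitter API credential properties are omitted), a flume.conf for this pipeline could look roughly like this:

TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

# custom source that reads the Twitter Streaming API and filters on keywords
TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, cloudera

# in-memory channel between the source and the HDFS sink
TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000

# HDFS sink that writes the raw JSON events into hourly directories
TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs:///user/flume/tweets/%Y/%m/%d/%H/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream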

Partition Management with Oozie

Once we have the Twitter data loaded into HDFS, we can stage it for querying by creating an external table in Hive. Using an external table will allow us to query the table without moving the data from the location where it ends up in HDFS. To ensure scalability, as we add more and more data, we’ll need to also partition the table. A partitioned table allows us to prune the files that we read when querying, which results in better performance when dealing with large data sets. However, the Twitter API will continue to stream tweets and Flume will perpetually create new files. We can automate the periodic process of adding partitions to our table as the new data comes in.

Apache Oozie is a workflow coordination system that can be used to solve this problem. Oozie is an extremely flexible system for designing job workflows, which can be scheduled to run based on a set of criteria. We can configure the workflow to run an ALTER TABLE command that adds a partition containing the last hour’s worth of data into Hive, and we can instruct the workflow to occur every hour. This will ensure that we’re always looking at up-to-date data.
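Concretely, the recurring step the Oozie workflow runs is just a Hive DDL statement along these lines (a sketch; the table name, partition column, and directory layout are assumptions that mirror the Flume sink sketch above):

ALTER TABLE tweets ADD IF NOT EXISTS PARTITION (datehour = 2012091509)
LOCATION '/user/flume/tweets/2012/09/15/09';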

The configuration files for the Oozie workflow are located here.

Querying Complex Data with Hive

Before we can query the data, we need to ensure that the Hive table can properly interpret the JSON data. By default, Hive expects that input files use a delimited row format, but our Twitter data is in a JSON format, which will not work with the defaults. This is actually one of Hive’s biggest strengths. Hive allows us to flexibly define, and redefine, how the data is represented on disk. The schema is only really enforced when we read the data, and we can use the Hive SerDe interface to specify how to interpret what we’ve loaded.

SerDe stands for Serializer and Deserializer, which are interfaces that tell Hive how it should translate the data into something that Hive can process. In particular, the Deserializer interface is used when we read data off of disk, and converts the data into objects that Hive knows how to manipulate. We can write a custom SerDe that reads the JSON data in and translates the objects for Hive. Once that's put into place, we can start querying. The JSON SerDe code can be found here. The SerDe takes a tweet in raw JSON form and translates the JSON entities into queryable columns.

We’ve now managed to put together an end-to-end system, which gathers data from the Twitter Streaming API, sends the tweets to files on HDFS through Flume, and uses Oozie to periodically load the files into Hive, where we can query the raw JSON data, through the use of a Hive SerDe.

Some Results

In my own testing, I let Flume collect data for about three days, filtering on a set of keywords:

hadoop, big data, analytics, bigdata, cloudera, data science, data scientist, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing

The collected data was about half a GB of JSON data, and here is an example of what a tweet looks like. The data has some structure, but certain fields may or may not exist. The retweeted_status field, for example, will only be present if the tweet was a retweet. Additionally, some of the fields may be arbitrarily complex. The hashtags field is an array of all the hashtags present in the tweets, but most RDBMS’s do not support arrays as a column type. This semi-structured quality of the data makes the data very difficult to query in a traditional RDBMS. Hive can handle this data much more gracefully.

The query below will find usernames, and the number of retweets they have generated across all the tweets that we have data for:
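The query itself is embedded as an image in the original post; a sketch of what such a query could look like against a tweets table exposed through the JSON SerDe (the nested column names are assumptions):

SELECT t.retweeted_screen_name,
       SUM(retweets) AS total_retweets,
       COUNT(*) AS tweet_count
FROM (SELECT retweeted_status.`user`.screen_name AS retweeted_screen_name,
             retweeted_status.text,
             MAX(retweeted_status.retweet_count) AS retweets
      FROM tweets
      GROUP BY retweeted_status.`user`.screen_name, retweeted_status.text) t
GROUP BY t.retweeted_screen_name
ORDER BY total_retweets DESC
LIMIT 10;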

For the few days of data, I found that these were the most retweeted users for the industry:

From these results, we can see whose tweets are getting heard by the widest audience, and also determine whether these people are communicating on a regular basis or not. We can use this information to more carefully target our messaging in order to get them talking about our products, which, in turn, will get other people talking about our products.

Conclusion

In this post we’ve seen how we can take some of the components of CDH and combine them to create an end-to-end data management system. This same architecture could be used for a variety of applications designed to look at Twitter data, such as identifying spam accounts, or identifying clusters of keywords. Taking the system even further, the general architecture can be used across numerous applications. By plugging in different Flume sources and Hive SerDes, this application can be customized for many other applications, like analyzing web logs, to give an example. Grab the code, and give it a shot yourself.

CDH 5.3: Apache Sentry Integration with HDFS

Starting in CDH 5.3, Apache Sentry integration with HDFS saves admins a lot of work by centralizing access control permissions across components that utilize HDFS.

It’s been more than a year and a half since a couple of my colleagues here at Cloudera shipped the first version of Sentry (now Apache Sentry (incubating)). This project filled a huge security gap in the Apache Hadoop ecosystem by bringing truly secure and dependable fine grained authorization to the Hadoop ecosystem and provided out-of-the-box integration for Apache Hive. Since then the project has grown significantly–adding support for Impala and Search and the wonderful Hue App to name a few significant additions.

In order to provide a truly secure and centralized authorization mechanism, Sentry deployments have been historically set up so that all Hive’s data and metadata are accessible only by HiveServer2 and every other user is cut out. This has been a pain point for Sqoop users as Sqoop does not use the HiveServer2 interface. Hence users with a Sentry-secured Hive deployment were forced to split the import task into two steps: simple HDFS import followed by manually loading the data into Hive.

With the inclusion of HDFS ACLs and the integration of Sentry into the Hive metastore in CDH 5.1, users were able to improve this situation and get the direct Hive import working again. However, this approach required manual administrator intervention to configure HDFS ACLs according to the Sentry configuration and needed a manual refresh to keep both systems in sync.

One of the large features included in the recently released CDH 5.3 is Sentry integration with HDFS, which enables customers to easily share data between Hive, Impala and all the other Hadoop components that interact with HDFS (MapReduce, Spark, Pig, and Sqoop, and so on) while ensuring that user access permissions only need to be set once, and that they are uniformly enforced.

The rest of this post focuses on the example of using Sqoop together with this Sentry feature. Sqoop data can now be imported into Hive without any additional administrator intervention. By exposing Sentry policies—what tables from which a user can select and to what tables they can insert—directly in HDFS, Sqoop will re-use the same policies that have been configured via GRANT/REVOKE statements or the Hue Sentry App and will import data into Hive without any trouble.

Configuration

In order for Sqoop to seamlessly import into a Sentry Secured Hive instance, the Hadoop administrator needs to follow a few configuration steps to enable all the necessary features. First, your cluster needs to be using the Sentry Service as backend for storing authorization metadata and not rely on the older policy files.

If you are already using Sentry Service and GRANT/REVOKE statements, you can directly jump to step 3).

  1. Make sure that you have Sentry service running on your cluster. You should see it in the service list:

Ekran Resmi 2015-10-12 01.58.03

    2.  And that Hive is configured to use this service as a backend for Sentry metadata:

Ekran Resmi 2015-10-12 01.58.14

     3.  Finally enable HDFS Integration with Sentry:

Ekran Resmi 2015-10-12 01.58.27

Example Sqoop Import

Let’s assume that we have user jarcec who needs to import data into a Hive database named default. User jarcec is part of a group that is also called jarcec – in real life the name of the group doesn’t have to be the same as the username and that is fine.

With an unsecured Hive installation, the Hadoop administrator would have to jump in and grant write privileges to user jarcec for the directory /user/hive/warehouse or one of its subdirectories. With Sentry and HDFS integration, the Hadoop administrator no longer needs to jump in. Instead, Sqoop will reuse the same authorization policies that have been configured through Hive SQL or via the Sentry Hue Application. Let's assume that user bc is jarcec's manager and already has privileges to grant privileges in the default database.

    1. bc starts by invoking beeline and connecting to HiveServer2:

[bc@sqoopsentry-1 ~]$ beeline

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> !connect jdbc:hive2://sqoopsentry-1.vpc.cloudera.com:10000/default;principal=hive/sqoopsentry-1.vpc.cloudera.com@ENT.CLOUDERA.COM

    2. If user jarcec is not part of any role yet, we need to create a role for him:

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> CREATE ROLE jarcec_role;

No rows affected (0.769 seconds)

    3. This new role jarcec_role needs to be granted to jarcec's group, jarcec:

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> GRANT ROLE jarcec_role to GROUP jarcec;

No rows affected (0.651 seconds)

    4. Finally, bc can grant access to database default (or any other database) to the role jarcec_role:

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> GRANT ALL ON DATABASE default TO ROLE jarcec_role;

No rows affected (0.16 seconds)

By executing the steps above, user jarcec has been given the privilege to perform any action (insert or select) on all objects inside database default. That includes the ability to create new tables, insert data, or simply query existing tables. With those privileges, user jarcec can run the usual Sqoop command:

[jarcec@sqoopsentry-1 ~]$ sqoop import --connect jdbc:mysql://mysql.ent.cloudera.com/sqoop --username sqoop --password sqoop --table text --hive-import

14/12/14 15:37:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.3.0

 …

14/12/14 15:38:58 INFO mapreduce.ImportJobBase: Transferred 249.7567 MB in 75.8448 seconds (3.293 MB/sec) 

14/12/14 15:38:58 INFO mapreduce.ImportJobBase: Retrieved 1000000 records.

14/12/14 15:38:58 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `text` AS t LIMIT 1

14/12/14 15:38:58 INFO hive.HiveImport: Loading uploaded data into Hive

14/12/14 15:39:09 INFO hive.HiveImport: 14/12/14 15:39:09 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

14/12/14 15:39:09 INFO hive.HiveImport:

14/12/14 15:39:09 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.26/jars/hive-common-0.13.1-cdh5.3.0.jar!/hive-log4j.properties 

14/12/14 15:39:12 INFO hive.HiveImport: OK

14/12/14 15:39:12 INFO hive.HiveImport: Time taken: 1.079 seconds

14/12/14 15:39:12 INFO hive.HiveImport: Loading data to table default.text

14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00000

14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00001

14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00002

14/12/14 15:39:13 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00003

14/12/14 15:39:13 INFO hive.HiveImport: Table default.text stats: [numFiles=4, numRows=0, totalSize=261888896, rawDataSize=0]

14/12/14 15:39:13 INFO hive.HiveImport: OK

14/12/14 15:39:13 INFO hive.HiveImport: Time taken: 0.719 seconds

14/12/14 15:39:13 INFO hive.HiveImport: Hive import complete.

14/12/14 15:39:13 INFO hive.HiveImport: Export directory is not empty, keeping it.

And jarcec can easily confirm in beeline that the data has indeed been imported into Hive:

0: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> show tables from default;

+------------+--+
|  tab_name  |
+------------+--+
| text       |
+------------+--+

1 row selected (0.177 seconds)

0: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> select count(*) from text;

+----------+--+
|   _c0    |
+----------+--+
| 1000000  |
+----------+--+

1 row selected (72.188 seconds)

If Hive is configured to inherit permissions, you might notice that Sqoop will print out several warnings similar to this one:


14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00000

As there is no need to inherit HDFS permissions when Sentry is enabled in HDFS, you can safely ignore such messages.
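With the integration enabled, the effect is easy to see on the warehouse directories themselves; for example (the path reflects the default warehouse location and the table imported above), the Sentry grants show up as HDFS ACL entries:

$ hdfs dfs -getfacl /user/hive/warehouse/text
# groups holding a Sentry role with privileges on default.text appear here as ACL entries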


Deploy the third application – includes Elastic Java Message Service Configuration (Part 5)

This version of the application has a Java Message Service (JMS) producer. Another tab is introduced into the application that allows you to send JMS messages. This tab also contains a table of processed messages, in which the browser displays the messages that were processed by the message-driven bean. The Enterprise Application that we will deploy later contains the JMS message-driven bean; this bean reads the messages from the JMS queue so that they appear in the table of processed messages.

  1. At this point you can restart your managed servers.

Click on 'View changes and restarts' in the Change Center, select the 'Restart Checklist' tab, check the box next to both managed servers, and click Restart.

Ekran Resmi 2015-09-22 13.05.51

  2. Click on 'Lock & Edit'. Deploy the application JMSProducer-1.0.war to the wlsdevCluster; you can find it at:

/u01/content/weblogic-innovation-seminars/WInS_Demos/weblogic-basics/lab3

Ekran Resmi 2015-09-22 13.06.01

Activate the changes and ensure that the application begins servicing requests.
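For reference, the same deployment can be scripted instead of using the console; a minimal sketch with the weblogic.Deployer tool (the admin URL and username are placeholders for this environment, the tool prompts for the password when it is omitted, and it assumes the WebLogic environment has been sourced via setWLSEnv.sh):

java weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic \
  -deploy -name JMSProducer-1.0 -targets wlsdevCluster \
  -source /u01/content/weblogic-innovation-seminars/WInS_Demos/weblogic-basics/lab3/JMSProducer-1.0.war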

  3. Make sure the application is running and working correctly by going to URL: http://localhost:7002/JMSProducer/

Ekran Resmi 2015-09-22 13.06.09

We will now use the WebLogic console to configure JMS; this is performed under Services > Messaging > JMS Servers. See the screenshot below:

Ekran Resmi 2015-09-22 13.06.16

  4. Click on 'Lock & Edit'. Create a JMS server by clicking New, using the name shown in the screenshot below.

Ekran Resmi 2015-09-22 13.06.22

Click 'Create a New Store'. Use the type 'File Store', as in the screenshot below:

Click Next.

Ekran Resmi 2015-09-22 13.06.32

  5. Use the following values in the next screen:

Name = cluster

Target to wlsdevCluster

Directory = /u01/projects/wlsdevday/jmsjta

Click OK

 Ekran Resmi 2015-09-22 13.06.44

 

Activate the changes, then click on 'Lock & Edit' again. This time the 'Persistent Store' dropdown will have been refreshed to include 'cluster'. Select the persistent store that you just created, as per the screenshot below:

Ekran Resmi 2015-09-22 13.06.51

Click Next.

Target the JMS Server to wlsdevCluster.

Ekran Resmi 2015-09-22 13.06.57

Click Finish.

  6. Activate your changes, and your JMS Server should look like this:

Ekran Resmi 2015-09-22 13.07.02

  7. We are now going to create a JMS Module to hold our Connection Factory and Queue.

To do this, go back to Services and click on 'Lock & Edit'.

Select JMS Modules

Ekran Resmi 2015-09-22 13.07.09

Create a new JMS Module

Note: A JMS Module and Subdeployment can be used to group resources, including queue and topic destinations, connection factories, JMS templates, destination sort keys, destination quota, distributed destinations, foreign servers, and store-and-forward parameters. Consider how much easier it is to manage and deploy using a Module and Subdeployments rather than remembering the relationships between the JMS resources you are creating and their targeting.

Call the JMS Module jmsDevDayModule.

Ekran Resmi 2015-09-22 13.07.16

Click Next

Now target the module to the cluster.

Ekran Resmi 2015-09-22 13.07.24

Check the box next to "Would you like to add resources to this JMS system module?"

Ekran Resmi 2015-09-22 13.07.31

And click Finish.

On activation of changes, you should now see a screen like the one below

Ekran Resmi 2015-09-22 13.07.37

Click on ‘Lock & Edit’ again and then click New to add artifacts to the module.

Select the Connection Factory radio button and click Next.

 

Ekran Resmi 2015-09-22 13.07.46

Enter the following information in the next screen:

Connection Name:                              DevDayCF

Connection Factory JNDI name:          DevDayCF

 Ekran Resmi 2015-09-22 13.07.52

Click Next

Click on the Advanced Targeting (all will become clear later!)

Ekran Resmi 2015-09-22 13.07.52

Click on Create a new Subdeployment

 Ekran Resmi 2015-09-22 13.08.03

Enter the following information in the next screen:

Name: DevDaySub

 Ekran Resmi 2015-09-22 13.08.09

Click OK.

Target the subdeployment to JMS Server created.

Ekran Resmi 2015-09-22 13.08.13

Click Finish.

Activate Changes.

Now we need to create a queue in the JMS Module. This queue will be a Distributed Destination, which means the queue will run across the whole cluster.

Click on ‘Lock & Edit’ again. Click New.

Check the Distributed Queue radio button and click Next.

 Ekran Resmi 2015-09-22 13.08.22

On the next screen enter the following information:

Queue Name:              DevDayQueue

Queue JNDI name:      DevDayQueue

Make sure Destination Type is Uniform.

Your distributed queue configuration should look like below:

Ekran Resmi 2015-09-22 13.08.29

Click Next

Click Advanced Targeting, and choose the Subdeployment created earlier (DevDaySub)

Ekran Resmi 2015-09-22 13.08.38

Click Finish

Activate your changes.

We now need to deploy the JMS Consumer JAR. Follow the same procedure as before to deploy the application. You’ll find the application here:

/u01/content/weblogic-innovation-seminars/WInS_Demos/weblogic-basics/lab3

The application is called EJBReceiver-1.0.jar (use the knowledge you have gained on the Developer Day so far to deploy the application to wlsdevCluster). Don't forget to activate the changes and start the application so it services all requests.

  8. Open the application in a browser, type any message, and click Send Message.

 Ekran Resmi 2015-09-22 13.08.45

Send another message.

Ekran Resmi 2015-09-22 13.08.53

Open two terminals and run the following commands, one in each terminal. One terminal will show the log of wlsdevManaged-1 and the other will show the log of wlsdevManaged-2.

tail -f /u01/projects/wlsdevday/domains/wlsdevdayDomain/servers/wlsdevManaged-1/logs/wlsdevManaged-1.out

tail -f /u01/projects/wlsdevday/domains/wlsdevdayDomain/servers/wlsdevManaged-2/logs/wlsdevManaged-2.out

Ekran Resmi 2015-09-22 13.09.00

As you can see, since we have a distributed queue, our messages are processed by both managed servers uniformly. Each new message we send is processed uniformly across the managed servers.

Now it is time to clean up. Click on 'Lock & Edit', stop and then delete the applications (ClustWLSessionSampleLBApp, JMSProducer-1.0 and EJBReceiver-1.0), and finally activate the changes.

Stop all the managed server instances. Ensure that only the AdminServer is up and running.