
Acquiring Big Data Using Apache Flume

No technology is more synonymous with Big Data than Apache Hadoop. Hadoop’s distributed filesystem and compute framework make possible cost-effective, linearly scalable processing of petabytes of data. Unfortunately, there are few tutorials devoted to how to get big data into Hadoop in the first place.
Some data destined for Hadoop clusters surely comes from sporadic bulk loading processes, such as database and mainframe offloads and batched data dumps from legacy systems. But what has made data really big in recent years is that most new data is contained in high-throughput streams. Application logs, GPS tracking, social media updates, and digital sensors all constitute fast-moving streams begging for storage in the Hadoop Distributed File System (HDFS). As you might expect, several technologies have been developed to address the need for collection and transport of these high-throughput streams. Facebook’s Scribe and Apache/LinkedIn’s Kafka both offer solutions to the problem, but Apache Flume is rapidly becoming a de facto standard for directing data streams into Hadoop.

This article describes the basics of Apache Flume and illustrates how to quickly set up Flume agents for collecting fast-moving data streams and pushing the data into Hadoop’s filesystem. By the time we’re finished, you should be able to configure and launch a Flume agent and understand how multi-hop and fan-out flows are easily constructed from multiple agents.

Anatomy of a Flume Agent

Flume deploys as one or more agents, each contained within its own instance of the Java Virtual Machine (JVM). Agents consist of three pluggable components: sources, sinks, and channels. An agent must have at least one of each in order to run. Sources collect incoming data as events. Sinks write events out, and channels provide a queue to connect the source and sink. (Figure 1.)

Apache Flume Agent
Figure 1: Flume Agents consist of sources, channels, and sinks.

Sources

Put simply, Flume sources listen for and consume events. Events can range from newline-terminated strings in stdout to HTTP POSTs and RPC calls — it all depends on what sources the agent is configured to use. Flume agents may have more than one source, but must have at least one. Sources require a name and a type; the type then dictates additional configuration parameters.

On consuming an event, Flume sources write the event to a channel. Importantly, sources write to their channels as transactions. By dealing in events and transactions, Flume agents maintain end-to-end flow reliability. Events are not dropped inside a Flume agent unless the channel is explicitly allowed to discard them due to a full queue.

Channels

Channels are the mechanism by which Flume agents transfer events from their sources to their sinks. Events written to the channel by a source are not removed from the channel until a sink removes that event in a transaction. This allows Flume sinks to retry writes in the event of a failure in the external repository (such as HDFS or an outgoing network connection). For example, if the network between a Flume agent and a Hadoop cluster goes down, the channel will keep all events queued until the sink can correctly write to the cluster and close its transactions with the channel.

Channels are typically of two types: in-memory queues and durable disk-backed queues. In-memory channels provide high throughput but no recovery if an agent fails. File or database-backed channels, on the other hand, are durable. They support full recovery and event replay in the case of agent failure.
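For reference, a durable channel definition might look like the following minimal sketch; the agent name, directories, and capacity are illustrative and should be adjusted for your deployment.

# durable, file-backed channel (illustrative names and paths)
agent1.channels = file-channel
agent1.channels.file-channel.type = file
agent1.channels.file-channel.checkpointDir = /var/flume/checkpoint
agent1.channels.file-channel.dataDirs = /var/flume/data
agent1.channels.file-channel.capacity = 1000000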

Sinks

Sinks provide Flume agents pluggable output capability — if you need to write to a new type of storage, you can write a Java class that implements the necessary interfaces. Like sources, sinks correspond to a type of output: writes to HDFS or HBase, remote procedure calls to other agents, or any number of other external repositories. Sinks remove events from the channel in transactions and write them to output. Transactions close only when the event is successfully written, ensuring that all events are committed to their final destination.

Setting up a Simple Agent for HDFS

A simple one-source, one-sink Flume agent can be configured with just a single configuration file. In this example, I’ll create a text file named sample_agent.conf — it looks a lot like a Java properties file. At the top of the file, I configure the agent’s name and the names of its source, sink, and channel.

hdfs-agent.sources = netcat-collect
hdfs-agent.sinks = hdfs-write
hdfs-agent.channels = memoryChannel

This defines an agent named hdfs-agent along with the names of its sources, sinks, and channels; keep the agent name in mind, because we’ll need it to start the agent. Multiple sources, sinks, and channels can be defined on these lines as whitespace-delimited lists of names. In this case, the source is named netcat-collect, the sink hdfs-write, and the channel memoryChannel. The names are indicative of what I’m setting up: events collected via netcat will be written to HDFS, and I will use a memory-only queue for transactions.

Next, I configure the source. I use a netcat source, as it provides a simple means of interactively testing the agent. A netcat source requires a type as well as an address and port to which it should bind. The netcat source will listen on localhost on port 11111; messages to netcat will be consumed by the source as events.

 

hdfs-agent.sources.netcat-collect.type = netcat
hdfs-agent.sources.netcat-collect.bind = 127.0.0.1
hdfs-agent.sources.netcat-collect.port = 11111

With the source defined, I’ll configure the sink to write to HDFS. HDFS sinks support a number of options, but by default the HDFS sink writes Hadoop SequenceFiles. In this example, I’ll specify that the sink write raw text files to HDFS so they can be easily inspected; I’ll also set a roll interval, which forces Flume to commit writes to HDFS every 30 seconds. File rolls can be configured based on time, size, or a combination of the two. File rolls are particularly important in environments where HDFS does not support appending to files.

hdfs-agent.sinks.hdfs-write.type = hdfs
hdfs-agent.sinks.hdfs-write.hdfs.path = hdfs://namenode_address:8020/path/to/flume_test
hdfs-agent.sinks.hdfs-write.hdfs.rollInterval = 30
hdfs-agent.sinks.hdfs-write.hdfs.writeFormat = Text
hdfs-agent.sinks.hdfs-write.hdfs.fileType = DataStream

Finally, I’ll configure a memory-backed channel to transfer events from source to sink and connect the two together. Keep in mind that if I exceed the channel capacity, Flume will drop events. If I need durability, a file- or JDBC-backed channel should be used instead.

 

hdfs-agent.channels.memoryChannel.type = memory
hdfs-agent.channels.memoryChannel.capacity = 10000
hdfs-agent.sources.netcat-collect.channels = memoryChannel
hdfs-agent.sinks.hdfs-write.channel = memoryChannel

With the configuration complete, I start the Flume agent from a terminal:

flume-ng agent -f /path/to/sample_agent.conf -n hdfs-agent

In the Flume agent’s logs, I look for indication that the source, sink, and channel have successfully started. For example:

INFO nodemanager.DefaultLogicalNodeManager: Starting Channel memoryChannel
INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: memoryChannel started
INFO nodemanager.DefaultLogicalNodeManager: Starting Sink hdfs-write
INFO nodemanager.DefaultLogicalNodeManager: Starting Source netcat-collect
INFO source.NetcatSource: Source starting

In a separate terminal, connect to the agent via netcat and enter a series of messages.

> nc localhost 11111
> testing
> 1
> 2
> 3

In the agent logs, you will see an HDFS file being created and committed every 30 seconds. If I print the contents of the files to standard out using hdfs dfs -cat, I’ll find the messages from netcat are stored.
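Concretely, something like the following works; the path matches the hdfs.path configured above, and FlumeData is the HDFS sink’s default file prefix (an assumption; adjust if hdfs.filePrefix has been overridden).

hdfs dfs -ls /path/to/flume_test
hdfs dfs -cat /path/to/flume_test/FlumeData.*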

More Advanced Deployments

Regardless of the source, a single agent writing directly to HDFS is too simplistic for many deployments: application servers may reside in the cloud while clusters are on premises, many streams of data may need to be consolidated, or events may need to be filtered during transmission. Fortunately, Flume makes reliable multi-hop event transmission easy. Fan-in and fan-out patterns are readily supported via multiple sources and channel selectors. Additionally, Flume provides the notion of interceptors, which allow events to be decorated and filtered in flight.

Multi-Hop Topologies

Flume provides multi-hop deployments via Apache Avro-serialized RPC calls. For a given hop, the sending agent implements an Avro sink directed to a host and port where the receiving agent is listening. The receiver implements an Avro source bound to the designated host-port combination. Reliability is ensured by Flume’s transaction model. The sink on the sending agent does not close its transaction until receipt is acknowledged by the receiver. Similarly, the receiver does not acknowledge receipt until the incoming event has been committed to its channel.

# sender configuration
avro-agent.sinks = avro-sink
avro-agent.sinks.avro-sink.type = avro
avro-agent.sinks.avro-sink.hostname = remote.host.com
avro-agent.sinks.avro-sink.port = 11111

# receiver configuration on remote.host.com
hdfs-agent.sources = avro-source
hdfs-agent.sources.avro-source.type = avro
hdfs-agent.sources.avro-source.bind = 0.0.0.0
hdfs-agent.sources.avro-source.port = 11111
hdfs-agent.sources.avro-source.channels = memoryChannel

Multihop event flow
Figure 2: Multihop event flows are constructed using RPCs between Avro sources and sinks.

Fan-In and Fan-Out

Fan-in is a common case for Flume agents. Agents may be run on many data collectors (such as application servers) in a large deployment, while only one or two writers to a remote Hadoop cluster are required to handle the total event throughput. In this case, the Flume topology is simple to configure. Each agent at a data collector implements the appropriate source and an Avro sink. All Avro sinks point to the host and port of the Flume agent charged with writing to the Hadoop cluster. The agent at the Hadoop cluster simply configures an Avro source on the designated host and port. Incoming events are consolidated automatically and are written to the configured sink.
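As a rough sketch, the consolidating agent on the Hadoop side can simply combine the Avro source shown in the multi-hop example above with the HDFS sink from the first walkthrough; all names here are illustrative.

# consolidating agent at the cluster edge (sketch)
hdfs-agent.sources = avro-source
hdfs-agent.sinks = hdfs-write
hdfs-agent.channels = memoryChannel
hdfs-agent.sources.avro-source.type = avro
hdfs-agent.sources.avro-source.bind = 0.0.0.0
hdfs-agent.sources.avro-source.port = 11111
hdfs-agent.sources.avro-source.channels = memoryChannel
hdfs-agent.sinks.hdfs-write.type = hdfs
hdfs-agent.sinks.hdfs-write.hdfs.path = hdfs://namenode_address:8020/path/to/flume_test
hdfs-agent.sinks.hdfs-write.channel = memoryChannel
hdfs-agent.channels.memoryChannel.type = memory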

Fan-out topologies are enabled via Flume’s source selectors. Selectors can be replicating — sending all events to multiple channels — or multiplexing. Multiplexed sources can be partitioned by mappings defined on events via interceptors. For example, a replicating selector may be appropriate when events need to be sent to HDFS and to flat log files or a database. Multiplexed selectors are useful when different mappings should be directed to different writers; for example, data destined for partitioned Hive tables may be best handled via multiplexing.

hdfs-agent.channels = mchannel1 mchannel2
hdfs-agent.sources.netcat-collect.selector.type = replicating
hdfs-agent.sources.netcat-collect.channels = mchannel1 mchannel2
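A multiplexing selector, by contrast, routes on an event header. The sketch below assumes a header named type has been set on each event (for example, by an interceptor, covered next) and falls back to mchannel1 when no mapping matches; the header name and values are illustrative.

hdfs-agent.sources.netcat-collect.selector.type = multiplexing
hdfs-agent.sources.netcat-collect.selector.header = type
hdfs-agent.sources.netcat-collect.selector.mapping.metrics = mchannel1
hdfs-agent.sources.netcat-collect.selector.mapping.logs = mchannel2
hdfs-agent.sources.netcat-collect.selector.default = mchannel1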

Interceptors

Flume provides a robust system of interceptors for in-flight modification of events. Some interceptors serve to decorate data with metadata useful in multiplexing the data or processing it after it has been written to the sink. Common decorations include timestamps, hostnames, and static headers. It’s a great way to keep track of when your data arrived and from where it came.
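As a sketch, the built-in timestamp, host, and static interceptors can be chained on a source like this; the static header key and value are made up for illustration.

hdfs-agent.sources.netcat-collect.interceptors = ts host env
hdfs-agent.sources.netcat-collect.interceptors.ts.type = timestamp
hdfs-agent.sources.netcat-collect.interceptors.host.type = host
hdfs-agent.sources.netcat-collect.interceptors.env.type = static
hdfs-agent.sources.netcat-collect.interceptors.env.key = datacenter
hdfs-agent.sources.netcat-collect.interceptors.env.value = dc-01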

More interestingly, interceptors can be used to selectively filter or decorate events. The Regex Filtering Interceptor allows events to be dropped if they match the provided regular expression. Similarly, the Regex Extractor Interceptor decorates event headers according to a regular expression. This is useful if incoming events require multiplexing, but static definitions are too inflexible.

hdfs-agent.sources.netcat-collect.interceptors = filt_int
hdfs-agent.sources.netcat-collect.interceptors.filt_int.type=regex_filter
hdfs-agent.sources.netcat-collect.interceptors.filt_int.regex=^echo.*
hdfs-agent.sources.netcat-collect.interceptors.filt_int.excludeEvents=true

Conclusion

There are lots of ways to acquire Big Data with which to fill up a Hadoop cluster, but many of those data sources arrive as fast-moving streams of data. Fortunately, the Hadoop ecosystem contains a component specifically designed for transporting and writing these streams: Apache Flume. Flume provides a robust, self-contained application which ensures reliable transportation of streaming data. Flume agents are easy to configure, requiring only a property file and an agent name. Moreover, Flume’s simple source-channel-sink design allows us to build complicated flows using only a set of Flume agents. So, while we don’t often address the process of acquiring Big Data for our Hadoop clusters, doing so is as easy and fun as taking a log ride.

Changing All Passwords on Exadata

All the components of an Exadata system have default passwords. We will look at each component and how to change the default passwords for each.

Database Server
An Exadata X5-2 full rack has eight database servers. Each server has the following IDs with default passwords:
•    root
•    oracle
•    grid

You can either log in to each server individually and change the passwords, or use the dcli utility that Oracle provides on Exadata to change all the passwords in parallel across the servers. Oracle supplies files that list the various server groups; for the database tier, the dbs_group file is used to change the root, grid and oracle passwords on all database servers.

# cd /opt/oracle.SupportTools/onecommand
[root@xex1dbadm01 onecommand]# cat dbs_group
xex1dbadm01
xex1dbadm02
xex1dbadm03
xex1dbadm04
xex1dbadm05
xex1dbadm06
xex1dbadm07
xex1dbadm08
ORAPASS=<oraclepassword>
ROOTPASS=<rootpassword>
GRIDPASS=<gridpassword>
dcli -l root -g dbs_group "echo ${ROOTPASS} | passwd --stdin root"
dcli -l root -g dbs_group "echo ${ORAPASS} | passwd --stdin oracle"
dcli -l root -g dbs_group "echo ${GRIDPASS} | passwd --stdin grid"

This executes the password change in parallel on every server listed in the dbs_group file, with the end result being new passwords on all of your database servers.
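As a convenience (not part of the required procedure), a quick sanity check along these lines confirms that root ssh equivalence works to every host in dbs_group and shows the password status of an account afterwards:

dcli -l root -g dbs_group hostname
dcli -l root -g dbs_group "passwd -S oracle"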
Database Server Service Processor
Each Oracle Exadata database server comes with an ILOM (Integrated Lights Out Manager) interface, also known as a service processor. Each service processor comes with a default password that should be changed immediately.

$ cd /opt/oracle.SupportTools/onecommand
HOSTLIST=`cat /opt/oracle.SupportTools/onecommand/dbs_group`
for TSOH in $HOSTLIST
do
  echo $TSOH
  ipmitool -H $TSOH-ilom -U root -P <old password> user set password 2 <new password>
done

Cell Server Password Change
A full Exadata X5-2 comes with 14 storage cells, so here, too, dcli is used to change the passwords for all of the accounts on the cell servers (i.e., root, celladmin and cellmonitor).

ROOTPASS=<rootpass>
CELLADMPASS=<celladminpassword>
CELLMONPASS=<cellmonpass>
dcli -l root -g ~/cell_group "echo ${CELLADMPASS} | passwd --stdin celladmin"
dcli -l root -g ~/cell_group "echo ${CELLMONPASS} | passwd --stdin cellmonitor"
dcli -l root -g ~/cell_group "echo ${ROOTPASS} | passwd --stdin root"

Storage Cell Service Processor
Each Exadata storage cell has a service processor just like the database servers do, and the same approach used for the database servers can be applied to change the cell ILOM passwords.

$ cd /opt/oracle.SupportTools/onecommand
HOSTLIST=`cat /opt/oracle.SupportTools/onecommand/cell_group`
for TSOH in $HOSTLIST
do
  echo $TSOH
  ipmitool -H $TSOH-ilom -U root -P <old password> user set password 2 <new password>
done

InfiniBand Switches
A full rack Exadata has three InfiniBand switches, and, as with the other components, it is important to change their passwords. Due to Oracle Bug 13494021, you might have to perform some extra steps on each InfiniBand switch before changing them.

ssh root@<infiniband switch>
# only needed if you hit bug 13494021
cd /conf
cp -p shadow shadow.backup
cd /etc
cp -p shadow /conf/shadow
ln -sf /etc/shadow.ilom shadow
ls -l shadow*
# end of bug workaround
passwd nm2user
passwd ilom-admin
passwd root
passwd ilom-operator

Cisco Switch

An Exadata system also contains a Cisco management switch. It is important to check at install time which access method is enabled on it; ssh is preferable to telnet, which ships as the default on the X5-2. My Oracle Support (MOS) Note 1415044.1 describes how to reconfigure the Cisco switch to ssh only. Once the configuration is complete, you can change the password from the default using the commands below.

ssh admin@<ciscoswitch>
Switch>enable
Password:
Switch#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Switch(config)#enable password <new password>
Switch(config)#enable secret <new password>
The enable secret you have chosen is the same as your enable password.
This is not recommended. Re-enter the enable secret.
Switch(config)#end
Switch#write memory
Switch#exit

Installing Oracle R Distribution and Oracle R Enterprise on Exadata

ORE Installation Documentation and Requirements

Check the official Oracle R Enterprise documentation for a description of the requirements and installation steps.

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36763/intro.htm#OREAD110

Check the support matrix to ensure that the R distribution supports the version of ORE to be installed.

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36762/toc.htm#ORERN105

Install the version of ORE that is compatible with the Client / Studio that the customer is using. In this case the customer is using R 2.15.3.

Install R 2.15.3 Distribution.

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36762/toc.htm#ORERN104

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36763.pdf

Software Download

http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/ore-downloads-1502823.html

Overview

  1. Install Oracle R Distribution (RPMs) on all Exadata nodes. The Oracle R Distribution is a prerequisite for installing Oracle R Enterprise.
  2. Execute the Oracle R Enterprise installation script on all Exadata nodes. The first execution (on the first node) creates the database repository and packages; subsequent executions on the other nodes copy the required libraries onto those nodes.
  3. Install the supporting files on all Exadata nodes.
  4. Create or grant access to the database user.

Obtaining RPMs

The method used for this installation is the RPM method (most Exadata systems will not have direct access to the public yum repositories).

There are several options for obtaining the required RPMs. The method used in this case was to download the RPMs from the public yum server using a VM that has internet access and then transfer them to Exadata.

Using the yum --downloadonly feature:

yum install yum-downloadonly

yum install <rpmfile> -y --downloadonly --downloaddir=/u01/downloads

or

yum reinstall <rpmfile> -y --downloadonly --downloaddir=/u01/downloads
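Once downloaded, the RPMs still need to reach every database server. One way to stage them (a sketch; the staging directory and group-file path are assumptions) is to copy them to the first database node and push them out from there with dcli:

# push the downloaded RPMs from the first DB node to all nodes (illustrative paths)
dcli -l root -g /opt/oracle.SupportTools/onecommand/dbs_group mkdir -p /tmp/ore_rpms
for rpmfile in /u01/downloads/*.rpm; do
  dcli -l root -g /opt/oracle.SupportTools/onecommand/dbs_group -f "$rpmfile" -d /tmp/ore_rpms
done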

Installing RPMs

Check Exadata for RPMs that are already installed (e.g., rpm -qv libXau-devel).
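A small loop like the one below makes it easy to see what is already present before you start; the package names here are only examples, and the dependency-ordered list further down is the authoritative set.

for pkg in libXau-devel libX11-devel mesa-libGL-devel cairo atk pango gtk2 R-core; do
  rpm -q "$pkg" >/dev/null 2>&1 && echo "installed: $pkg" || echo "missing:   $pkg"
done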

The list below is the order in which the RPMs were installed based on their dependencies.

(The required RPMs and the versions will change over time. Check documentation).

Install the RPMs on all Exadata DB nodes:

rpm -Uvh libXau-devel-1.0.1-3.1.x86_64.rpm

rpm -Uvh libX11-devel-1.0.3-11.el5_7.1.x86_64.rpm mesa-libGL-devel-6.5.1-7.11.el5_9.x86_64.rpm xorg-x11-proto-devel-7.1-13.el5.x86_64.rpm libXdmcp-devel-1.0.1-2.1.x86_64.rpm

rpm -Uvh libtiff-3.8.2-18.el5_8.x86_64.rpm

rpm -Uvh cups-libs-1.3.7-30.el5_9.3.x86_64.rpm

rpm -Uvh cairo-1.2.4-5.el5.x86_64.rpm

rpm -Uvh atk-1.12.2-1.fc6.x86_64.rpm

rpm -Uvh hicolor-icon-theme-0.9-2.1.noarch.rpm

rpm -Uvh bitstream-vera-fonts-1.10-7.noarch.rpm

rpm -Uvh pango-1.14.9-8.0.1.el5_7.3.x86_64.rpm

rpm -Uvh gtk2-2.10.4-29.el5.x86_64.rpm

rpm -Uvh poppler-0.5.4-19.el5.x86_64.rpm

rpm -Uvh poppler-utils-0.5.4-19.el5.x86_64.rpm

rpm -Uvh paps-0.6.6-20.el5.x86_64.rpm

rpm -Uvh dbus-python-0.70-9.el5_4.x86_64.rpm

rpm -Uvh avahi-0.6.16-10.el5_6.x86_64.rpm

rpm -Uvh avahi-compat-libdns_sd-0.6.16-10.el5_6.x86_64.rpm

rpm -Uvh cups-1.3.7-30.el5_9.3.x86_64.rpm

warning: user lp does not exist - using root

warning: user lp does not exist - using root

rpm -Uvh libgfortran-4.1.2-52.el5_8.1.i386.rpm

rpm -Uvh tetex-fonts-3.0-33.15.el5_8.1.x86_64.rpm

rpm -Uvh tetex-dvips-3.0-33.15.el5_8.1.x86_64.rpm

rpm -Uvh libgfortran-4.1.2-54.el5.x86_64.rpm

rpm -Uvh libfontenc-1.0.2-2.2.el5.x86_64.rpm

rpm -Uvh libXfont-1.2.2-1.0.4.el5_7.x86_64.rpm

rpm -Uvh ttmkfdir-3.0.9-23.el5.x86_64.rpm

rpm -Uvh xorg-x11-font-utils-7.1-3.x86_64.rpm

rpm -Uvh libFS-1.0.0-3.1.x86_64.rpm

rpm -Uvh chkfontpath-1.10.1-1.1.x86_64.rpm xorg-x11-xfs-1.0.2-5.el5_6.1.x86_64.rpm

rpm -Uvh urw-fonts-2.3-6.1.1.noarch.rpm

rpm -Uvh ghostscript-fonts-5.50-13.1.1.noarch.rpm ghostscript-8.70-14.el5_8.1.x86_64.rpm

rpm -Uvh netpbm-10.35.58-10.el5.x86_64.rpm

rpm -Uvh netpbm-progs-10.35.58-10.el5.x86_64.rpm

rpm -Uvh desktop-file-utils-0.10-7.x86_64.rpm

rpm -Uvh dialog-1.0.20051107-1.2.2.x86_64.rpm

rpm -Uvh ed-0.2-39.el5_2.x86_64.rpm

rpm -Uvh tetex-3.0-33.15.el5_8.1.x86_64.rpm

rpm -Uvh tetex-latex-3.0-33.15.el5_8.1.x86_64.rpm

rpm -Uvh R-core-2.15.3-1.el5.x86_64.rpm

rpm -Uvh bzip2-devel-1.0.3-6.el5_5.x86_64.rpm

rpm -Uvh gmp-4.1.4-10.el5.x86_64.rpm

rpm -Uvh gcc-gfortran-4.1.2-54.el5.x86_64.rpm

rpm -Uvh pcre-devel-6.6-6.el5_6.1.x86_64.rpm

rpm -Uvh tcl-devel-8.4.13-6.el5.x86_64.rpm

rpm -Uvh tk-devel-8.4.13-5.el5_1.1.x86_64.rpm

rpm -Uvh zlib-devel-1.2.3-7.el5.x86_64.rpm

rpm -Uvh texinfo-4.8-14.el5.x86_64.rpm

rpm -Uvh texinfo-tex-4.8-14.el5.x86_64.rpm

rpm -Uvh R-devel-2.15.3-1.el5.x86_64.rpm

rpm -Uvh libRmath-2.15.3-1.el5.x86_64.rpm

rpm -Uvh libRmath-devel-2.15.3-1.el5.x86_64.rpm

rpm -Uvh R-2.15.3-1.el5.x86_64.rpm

At this point, if we try to install the RPMs below, we will be told that they are already installed:

rpm -Uvh zlib-1.2.3-7.el5.i386.rpm

rpm -Uvh zlib-1.2.3-7.el5.x86_64.rpm

rpm -Uvh libgcc-4.1.2-54.el5.i386.rpm

rpm -Uvh libstdc++-4.1.2-54.el5.x86_64.rpm

rpm -Uvh libstdc++-4.1.2-54.el5.i386.rpm

rpm -Uvh tcl-8.4.13-6.el5.x86_64.rpm

rpm -Uvh libstdc++-devel-4.1.2-54.el5.x86_64.rpm

rpm -Uvh libstdc++-devel-4.1.2-54.el5.i386.rpm

rpm -Uvh cpp-4.1.2-54.el5.x86_64.rpm

rpm -Uvh gcc-c++-4.1.2-54.el5.x86_64.rpm

rpm -Uvh gcc-4.1.2-54.el5.x86_64.rpm

rpm -Uvh mesa-libGL-6.5.1-7.11.el5_9.x86_64.rpm

Check that the R software is installed and verify the version:

[root@dm01dbadm04 ~]# R

Oracle Distribution of R version 2.15.3 (–) -- "Security Blanket"
Copyright (C) The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-unknown-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

You are using Oracle's distribution of R. Please contact
Oracle Support for any problems you encounter with this
distribution.

> q()
Save workspace image? [y/n/c]: n

Install Oracle R Enterprise

  1. Download the ORE Server and Supporting zip packages for Linux 64-bit.

http://www.oracle.com/technetwork/database/options/advanced-analytics/r-enterprise/ore-downloads-1502823.html

  2. Transfer both the Server and the Supporting files to each Exadata node and unzip them.
  3. Log in as the “oracle” user and set the ORACLE_SID and Oracle environment variables.
  4. Execute the “install.sh” script within the “server” directory.

[oracle@dm01dbadm02 server]$ ./install.sh

Oracle R Enterprise 1.3.1 Server Installation.
Copyright (c) 2012, 2013 Oracle and/or its affiliates. All rights reserved.

Checking R ................. Pass
Checking R libraries ....... Pass
Checking ORACLE_HOME ....... Pass
Checking ORACLE_SID ........ Pass
Checking sqlplus ........... Pass
Checking ORACLE instance ... Pass
Checking ORE ............... Pass

Current configuration
R_HOME        = /usr/lib64/R
R_LIBS_USER   = /u01/app/oracle/product/11.2.0.3/dbhome_1/R/library
ORACLE_HOME   = /u01/app/oracle/product/11.2.0.3/dbhome_1
ORACLE_SID    = phppovx2

Do you wish to install ORE? [yes]

Choosing RQSYS tablespaces
PERMANENT tablespace to use for RQSYS [SYSAUX]: RQSYS
ERROR: PERMANENT tablespace RQSYS not found
PERMANENT tablespace to use for RQSYS [SYSAUX]:
TEMPORARY tablespace to use for RQSYS [TEMP]:

Tablespaces summary
PERMANENT tablespace = SYSAUX
TEMPORARY tablespace = TEMP

Installing libraries ....... Pass
Installing RQSYS data ...... Pass
Installing RQSYS code ...... Pass
Installing ORE packages .... Pass
Creating ORE script ........ Pass

NOTE: ORE has been enabled for all database users. Next, install the
supporting packages.

You may create an ORE user with the demo_user.sh script, which
automatically grants the required privileges. A complete list of
privileges is available in the script rquser.sql.

To use ORE Embedded R Execution functionality, grant the user
the RQADMIN role.

Please, consult the documentation for more information.

Done
[oracle@dm01dbadm02 server]$

  5. Change directory to the “supporting” directory from the Supporting zip file and install the supporting files:

ORE CMD INSTALL ROracle_1.1-9_R_x86_64-unknown-linux-gnu.tar.gz
ORE CMD INSTALL DBI_0.2-5_R_x86_64-unknown-linux-gnu.tar.gz
ORE CMD INSTALL png_0.1-4_R_x86_64-unknown-linux-gnu.tar.gz

Check the ORE installation:

ORE -e "library(ORE)"

  6. Execute the ORE server install.sh script on all other RAC nodes:

[oracle@dm01dbadm01 server]$ ./install.sh

Oracle R Enterprise 1.3.1 Server Installation.
Copyright (c) 2012, 2013 Oracle and/or its affiliates. All rights reserved.

Checking R ................. Pass
Checking R libraries ....... Pass
Checking ORACLE_HOME ....... Pass
Checking ORACLE_SID ........ Pass
Checking sqlplus ........... Pass
Checking ORACLE instance ... Pass
Checking ORE ............... Pass

Current configuration
R_HOME        = /usr/lib64/R
R_LIBS_USER   = /u01/app/oracle/product/11.2.0.3/dbhome_1/R/library
ORACLE_HOME   = /u01/app/oracle/product/11.2.0.3/dbhome_1
ORACLE_SID    = osspovx1

Installing libraries ....... Pass
Installing ORE packages .... Pass
Creating ORE script ........ Pass

NOTE: ORE has been enabled for all database users. Next, install the
supporting packages.

You may create an ORE user with the demo_user.sh script, which
automatically grants the required privileges. A complete list of
privileges is available in the script rquser.sql.

To use ORE Embedded R Execution functionality, grant the user
the RQADMIN role.

Please, consult the documentation for more information.

Done

The following libraries get copied to the locations shown:

ls -l $ORACLE_HOME/lib/ore.so
ls -l $ORACLE_HOME/lib/librqe.so
ls -l $ORACLE_HOME/R/library

Install the supporting files on all other Exadata DB nodes.

[oracle@dm01dbadm01 supporting]$ ORE CMD INSTALL ROracle_1.1-9_R_x86_64-unknown-linux-gnu.tar.gz
* installing to library '/u01/app/oracle/product/11.2.0.3/dbhome_1/R/library'
* installing *binary* package 'ROracle' ...
* DONE (ROracle)

[oracle@dm01dbadm01 supporting]$ ORE CMD INSTALL DBI_0.2-5_R_x86_64-unknown-linux-gnu.tar.gz
* installing to library '/u01/app/oracle/product/11.2.0.3/dbhome_1/R/library'
* installing *binary* package 'DBI' ...
* DONE (DBI)

[oracle@dm01dbadm01 supporting]$ ORE CMD INSTALL png_0.1-4_R_x86_64-unknown-linux-gnu.tar.gz
* installing to library '/u01/app/oracle/product/11.2.0.3/dbhome_1/R/library'
* installing *binary* package 'png' ...
* DONE (png)

[oracle@dm01dbadm01 supporting]$

  7. Grant the following privileges to existing database users that will be used for ORE:

grant RQADMIN to OSSDM;

grant create mining model to OSSDM;

Alternatively, execute demo_user.sh to create new database users for ORE (a rough manual equivalent is sketched below).
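For reference only, a manually created ORE user might look roughly like the sketch below; the authoritative privilege list is in rquser.sql shipped with the installer, so treat these grants as illustrative.

-- illustrative only; see rquser.sql for the complete privilege list
CREATE USER oreuser IDENTIFIED BY "ChangeMe_123"
  DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE PROCEDURE TO oreuser;
GRANT CREATE MINING MODEL TO oreuser;
GRANT RQADMIN TO oreuser;   -- only needed for Embedded R Execution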

  8. Validate the ORE installation:

ORE -e "library(ORE)"

See the documentation for further validation checks:

http://docs.oracle.com/cd/E36939_01/doc/doc.13/e36763/postinstall.htm#OREAD190
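Beyond loading the library, a quick end-to-end check is to connect to the local instance from ORE; the user, SID, and password below are placeholders (assumption: OSSDM is the ORE user granted above).

ORE -e "library(ORE); ore.connect(user='OSSDM', sid='<oracle_sid>', host='localhost', password='<password>', all=TRUE); print(ore.is.connected())"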

CDH 5.3: Apache Sentry Integration with HDFS

Starting in CDH 5.3, Apache Sentry integration with HDFS saves admins a lot of work by centralizing access control permissions across components that utilize HDFS.

It’s been more than a year and a half since a couple of my colleagues here at Cloudera shipped the first version of Sentry (now Apache Sentry (incubating)). The project filled a huge security gap in the Apache Hadoop ecosystem by bringing truly secure and dependable fine-grained authorization to it, with out-of-the-box integration for Apache Hive. Since then the project has grown significantly, adding support for Impala, Search, and the Hue Sentry App, to name a few notable additions.

In order to provide a truly secure and centralized authorization mechanism, Sentry deployments have been historically set up so that all Hive’s data and metadata are accessible only by HiveServer2 and every other user is cut out. This has been a pain point for Sqoop users as Sqoop does not use the HiveServer2 interface. Hence users with a Sentry-secured Hive deployment were forced to split the import task into two steps: simple HDFS import followed by manually loading the data into Hive.

With the inclusion of HDFS ACLs and the integration of Sentry into the Hive metastore in CDH 5.1, users were able to improve this situation and get the direct Hive import working again. However, this approach required manual administrator intervention to configure HDFS ACLs according to the Sentry configuration and needed a manual refresh to keep both systems in sync.

One of the large features included in the recently released CDH 5.3 is Sentry integration with HDFS, which enables customers to easily share data between Hive, Impala and all the other Hadoop components that interact with HDFS (MapReduce, Spark, Pig, and Sqoop, and so on) while ensuring that user access permissions only need to be set once, and that they are uniformly enforced.

The rest of this post focuses on the example of using Sqoop together with this Sentry feature. Sqoop data can now be imported into Hive without any additional administrator intervention. Because Sentry policies—which tables a user can select from and which tables they can insert into—are now exposed directly in HDFS, Sqoop re-uses the same policies that have been configured via GRANT/REVOKE statements or the Hue Sentry App and imports data into Hive without any trouble.

Configuration

In order for Sqoop to seamlessly import into a Sentry-secured Hive instance, the Hadoop administrator needs to follow a few configuration steps to enable all the necessary features. First, your cluster needs to be using the Sentry Service as the backend for storing authorization metadata, rather than relying on the older policy files.

If you are already using the Sentry Service and GRANT/REVOKE statements, you can jump directly to step 3.

  1. Make sure that you have the Sentry service running on your cluster. You should see it in the service list:

(screenshot)

  2. Confirm that Hive is configured to use this service as the backend for Sentry metadata:

(screenshot)

  3. Finally, enable HDFS integration with Sentry:

(screenshot)

Example Sqoop Import

Let’s assume that we have a user jarcec who needs to import data into a Hive database named default. User jarcec is part of a group that is also called jarcec; in real life the name of the group doesn’t have to be the same as the username, and that is fine.

With an unsecured Hive installation, the Hadoop administrator would have to jump in and grant user jarcec write privileges on the directory /user/hive/warehouse or one of its subdirectories. With Sentry and HDFS integration, the Hadoop administrator no longer needs to get involved. Instead, Sqoop will reuse the same authorization policies that have been configured through Hive SQL or via the Sentry Hue Application. Let’s assume that user bc is jarcec‘s manager and already has privileges to grant privileges in the default database.

    1. bc starts by invoking beeline and connecting to HiveServer2:

[bc@sqoopsentry-1 ~]$ beeline

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> !connect jdbc:hive2://sqoopsentry-1.vpc.cloudera.com:10000/default;principal=hive/sqoopsentry-1.vpc.cloudera.com@ENT.CLOUDERA.COM

    2. In case user jarcec is not part of any role yet, we need to create a role for him:

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> CREATE ROLE jarcec_role;
No rows affected (0.769 seconds)

    3. This new role jarcec_role then needs to be granted to jarcec‘s group, jarcec:

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> GRANT ROLE jarcec_role to GROUP jarcec;
No rows affected (0.651 seconds)

    4. Finally, bc can grant access to database default (or any other database) to the role jarcec_role:

1: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> GRANT ALL ON DATABASE default TO ROLE jarcec_role;
No rows affected (0.16 seconds)

By executing the steps above, user jarcec has been given the privilege to perform any action (insert or select) on all objects inside database default. That includes the ability to create new tables, insert data, or simply query existing tables. With those privileges, user jarcec can run the following Sqoop command just as he is used to:

 

 

 

[jarcec@sqoopsentry-1 ~]$ sqoop import --connect jdbc:mysql://mysql.ent.cloudera.com/sqoop --username sqoop --password sqoop --table text --hive-import

14/12/14 15:37:38 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.3.0
...
14/12/14 15:38:58 INFO mapreduce.ImportJobBase: Transferred 249.7567 MB in 75.8448 seconds (3.293 MB/sec)
14/12/14 15:38:58 INFO mapreduce.ImportJobBase: Retrieved 1000000 records.
14/12/14 15:38:58 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `text` AS t LIMIT 1
14/12/14 15:38:58 INFO hive.HiveImport: Loading uploaded data into Hive
14/12/14 15:39:09 INFO hive.HiveImport: 14/12/14 15:39:09 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
14/12/14 15:39:09 INFO hive.HiveImport:
14/12/14 15:39:09 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.3.0-1.cdh5.3.0.p0.26/jars/hive-common-0.13.1-cdh5.3.0.jar!/hive-log4j.properties
14/12/14 15:39:12 INFO hive.HiveImport: OK
14/12/14 15:39:12 INFO hive.HiveImport: Time taken: 1.079 seconds
14/12/14 15:39:12 INFO hive.HiveImport: Loading data to table default.text
14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00000
14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00001
14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00002
14/12/14 15:39:13 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00003
14/12/14 15:39:13 INFO hive.HiveImport: Table default.text stats: [numFiles=4, numRows=0, totalSize=261888896, rawDataSize=0]
14/12/14 15:39:13 INFO hive.HiveImport: OK
14/12/14 15:39:13 INFO hive.HiveImport: Time taken: 0.719 seconds
14/12/14 15:39:13 INFO hive.HiveImport: Hive import complete.
14/12/14 15:39:13 INFO hive.HiveImport: Export directory is not empty, keeping it.

And jarcec can easily confirm in beeline that the data has indeed been imported into Hive:

 

 

0: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> show tables from default;
+------------+--+
|  tab_name  |
+------------+--+
| text       |
+------------+--+
1 row selected (0.177 seconds)

0: jdbc:hive2://sqoopsentry-1.vpc.cloudera.co> select count(*) from text;
+----------+--+
|   _c0    |
+----------+--+
| 1000000  |
+----------+--+
1 row selected (72.188 seconds)

If Hive is configured to inherit permissions, you might notice that Sqoop will print out several warnings similar to this one:

 

14/12/14 15:39:12 INFO hive.HiveImport: setfacl: Permission denied. user=jarcec is not the owner of inode=part-m-00000

As there is no need to inherit HDFS permissions when Sentry is enabled in HDFS, you can safely ignore such messages.