Flatten complex nested parquet files on Hadoop with Herringbone

Herringbone

Herringbone is a suite of tools for working with Parquet files on HDFS, and with Impala and Hive: https://github.com/stripe/herringbone

Please visit my GitHub and this specific page for more details.

Installation:

Note: You must run this on a Hadoop machine; Herringbone needs a Hadoop environment.

Prerequisite: Thrift

  • Thrift 0.9.1 (you MUST use 0.9.1, as 0.9.3 and 0.10.0 give errors while packaging)
  • Get Thrift 0.9.1: Link
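Since only 0.9.1 works, a quick version guard before building can save a failed package run. A minimal sketch (the `check_thrift` helper is hypothetical, not part of Herringbone or Thrift):

```shell
# Hypothetical helper: warn unless the installed Thrift is exactly 0.9.1,
# since 0.9.3 and 0.10.0 are known to break "mvn package".
check_thrift() {
  case "$1" in
    *"0.9.1"*) echo "OK: Thrift 0.9.1 found" ;;
    *)         echo "WARNING: need Thrift 0.9.1, got: $1" ;;
  esac
}

# Feed it whatever the local thrift binary reports (or "none" if absent).
check_thrift "$(thrift -version 2>/dev/null || echo none)"
```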

Prerequisite: Impala

  • First, set up the Cloudera repo on your machine:
  • Install Impala
    • Install impala : $ sudo apt-get install impala
    • Install impala Server : $ sudo apt-get install impala-server
    • Install impala state-store : $ sudo apt-get install impala-state-store
    • Install impala shell : $ sudo apt-get install impala-shell
    • Verify : impala : $ impala-shell
impala-shell
Starting Impala Shell without Kerberos authentication
Connected to mr-0xd7-precise1.0xdata.loc:21000
Server version: impalad version 2.6.0-cdh5.8.4 RELEASE (build 207450616f75adbe082a4c2e1145a2384da83fa6)
Welcome to the Impala shell. Press TAB twice to see a list of available commands.

Copyright (c) 2012 Cloudera, Inc. All rights reserved.

(Shell build version: Impala Shell v1.4.0-cdh4-INTERNAL (08fa346) built on Mon Jul 14 15:52:52 PDT 2014)

Building: Herringbone from source

Here is the successful herringbone “mvn package” command log for your review:

[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] Herringbone Impala
[INFO] Herringbone Main
[INFO] Herringbone
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Herringbone Impala 0.0.2
[INFO] ------------------------------------------------------------------------
..
..
..
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Herringbone 0.0.1
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Herringbone Impala ................................. SUCCESS [ 2.930 s]
[INFO] Herringbone Main ................................... SUCCESS [ 13.012 s]
[INFO] Herringbone ........................................ SUCCESS [ 0.000 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 16.079 s
[INFO] Finished at: 2017-10-06T11:27:20-07:00
[INFO] Final Memory: 90M/1963M
[INFO] ------------------------------------------------------------------------

Using Herringbone

Note: Your files must be on Hadoop (HDFS), not on the local file system.

Verify the file on Hadoop:

  • ~/herringbone$ hadoop fs -ls /user/avkash/file-test1.parquet
  • -rw-r--r-- 3 avkash avkash 1463376 2017-09-13 16:56 /user/avkash/file-test1.parquet
  • ~/herringbone$ bin/herringbone flatten -i /user/avkash/file-test1.parquet
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/avkash/herringbone/herringbone-main/target/herringbone-0.0.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.8.4-1.cdh5.8.4.p0.5/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
17/10/06 12:06:44 INFO client.RMProxy: Connecting to ResourceManager at mr-0xd1-precise1.0xdata.loc/172.16.2.211:8032
17/10/06 12:06:45 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
17/10/06 12:06:45 INFO input.FileInputFormat: Total input paths to process : 1
17/10/06 12:06:45 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
1 initial splits were generated.
  Max: 1.34M
  Min: 1.34M
  Avg: 1.34M
1 merged splits were generated.
  Max: 1.34M
  Min: 1.34M
  Avg: 1.34M
17/10/06 12:06:45 INFO mapreduce.JobSubmitter: number of splits:1
17/10/06 12:06:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1499294366934_0707
17/10/06 12:06:45 INFO impl.YarnClientImpl: Submitted application application_1499294366934_0707
17/10/06 12:06:46 INFO mapreduce.Job: The url to track the job: http://mr-0xd1-precise1.0xdata.loc:8088/proxy/application_1499294366934_0707/
17/10/06 12:06:46 INFO mapreduce.Job: Running job: job_1499294366934_0707
17/10/06 12:06:52 INFO mapreduce.Job: Job job_1499294366934_0707 running in uber mode : false
17/10/06 12:06:52 INFO mapreduce.Job:  map 0% reduce 0%
17/10/06 12:07:22 INFO mapreduce.Job:  map 100% reduce 0%
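Herringbone writes its output to a new directory named by appending -flat to the input path, which is what we will verify next. A small wrapper (hypothetical; assumes you run it from the herringbone checkout with bin/herringbone on hand, on a Hadoop node) captures the flatten and verify steps together:

```shell
# Hypothetical wrapper: flatten a nested parquet path on HDFS, then list
# the output directory, which herringbone names "<input>-flat".
flatten_and_verify() {
  input="$1"
  bin/herringbone flatten -i "$input" || return 1
  hadoop fs -ls "${input}-flat"
}

# Usage (on a Hadoop node):
#   flatten_and_verify /user/avkash/file-test1.parquet
```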

Now verify the file:

~/herringbone$ hadoop fs -ls /user/avkash/file-test1.parquet-flat

Found 2 items
-rw-r--r--   3 avkash avkash          0 2017-10-06 12:07 /user/avkash/file-test1.parquet-flat/_SUCCESS
-rw-r--r--   3 avkash avkash    2901311 2017-10-06 12:07 /user/avkash/file-test1.parquet-flat/part-m-00000.parquet

That's it, enjoy!


Enterprise Hadoop Solution distributed by key Hadoop vendors

Let's start with Cloudera Enterprise Data Hub:

Cloudera-Ehadoop

Here is the offering from Hortonworks:

HW-enterprizehadoop

And this is how MapR packages Enterprise Hadoop:

mapr-hadoop

And finally Pivotal Enterprise Hadoop offering:

Pivotal-hadoop

Keywords: Apache Hadoop, Cloudera, Hortonworks, Pivotal, MapR, Big Data

Setting up Pivotal Hadoop (PivotalHD 1.1 Community Edition) Cluster in CentOS 6.5

Download Pivotal HD Package

http://bitcast-a.v1.o1.sjc1.bitgravity.com/greenplum/pivotal-sw/pivotalhd_community_1.1.tar.gz

The package consists of three tarballs:

  • PHD-1.1.0.0-76.tar.gz
  • PCC-2.1.0-460.x86_64.tar.gz
  • PHDTools-1.1.0.0-97.tar.gz

Untar the package above and start with PCC (Pivotal Command Center).

Install Pivotal Command Center:

$tar -zxvf PCC-2.1.0-460.x86_64.tar.gz
$PHDCE1.1/PCC-2.1.0-460/install

Log in using the newly created user gpadmin:
$ su - gpadmin
$  sudo cp /root/.bashrc .
$  sudo cp /root/.bash_profile .
$  sudo cp /root/.bash_logout .
$  sudo cp /root/.cshrc .
$  sudo cp /root/.tcshrc .

Logout and re-login:
$ exit
$ su - gpadmin

Make sure you have alias set for your localhost:
$  vi /etc/hosts
xx.xx.xx.xx pivotal-master.hadoopbox.com  pivotal-master
$ service network restart
$ ping pivotal-master
$ ping pivotal-master.hadoopbox.com
Now we will use the Pivotal HD package, so let's untar it into the PHD-1.1.0.0-76 folder.
Expand the PHD* package and then import it:
$  icm_client import -s PHD-1.1.0.0-76/

Get cluster specific configuration:
$ icm_client fetch-template -o ~/ClusterConfigDir

Edit cluster configuration based on your domain details:
$  vi ~/ClusterConfigDir/clusterConfig.xml
Replace every host.yourdomain.com with your domain name. Note that a '.' (dot) in the domain name is not accepted here.
Also select the services you want to install. You need at least the three base services, HDFS, YARN, and ZooKeeper, in Pivotal HD:

<services>hdfs,yarn,zookeeper</services> <!-- hbase,hive,hawq,gpxf,pig,mahout</services> -->
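The hostname replacement can be done in one pass with sed rather than by hand. A sketch (`fix_hosts` is a hypothetical helper; host.yourdomain.com is the placeholder in my template and the element name below is just illustrative):

```shell
# Hypothetical helper: rewrite the template's placeholder hostname.
fix_hosts() { sed 's/host\.yourdomain\.com/pivotal-master/g'; }

# Example on one line of template-like text:
echo '<namenode>host.yourdomain.com</namenode>' | fix_hosts
# prints: <namenode>pivotal-master</namenode>

# Against the real file you would run something like:
#   fix_hosts < ~/ClusterConfigDir/clusterConfig.xml > /tmp/clusterConfig.xml
#   mv /tmp/clusterConfig.xml ~/ClusterConfigDir/clusterConfig.xml
```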

Create password-less SSH configuration:

$ ssh-keygen -t rsa
$  cd .ssh
$  cat id_rsa.pub >> authorized_keys
$  cat authorized_keys
$  chmod 700 $HOME && chmod 700 ~/.ssh && chmod 600 ~/.ssh/*

[gpadmin@pivotal-master ~]$ icm_client deploy -c ClusterConfigDir
Please enter the root password for the cluster nodes:
PCC creates a gpadmin user on the newly added cluster nodes (if any). Please enter a non-empty password to be used for the gpadmin user:
Verifying input
Starting install
Running scan hosts
[RESULT] The following hosts do not meet PHD prerequisites: [ pivotal-master.hadoopbox.com ] Details…

Host: pivotal-master.hadoopbox.com
Status: [FAILED]
[ERROR] Please verify supported OS type and version. Supported OS: RHEL6.1, RHEL6.2, RHEL6.3, RHEL6.4, CentOS6.1, CentOS6.2, CentOS6.3, CentOS6.4
[OK] SELinux is disabled
[OK] sshpass installed
[OK] gpadmin user exists
[OK] gpadmin user has sudo privilege
[OK] .ssh directory and authorized_keys have proper permission
[OK] Puppet version 2.7.20 installed
[OK] Ruby version 1.9.3 installed
[OK] Facter rpm version 1.6.17 installed
[OK] Admin node is reachable from host using FQDN and admin hostname.
[OK] umask is set to 0002.
[OK] nc and postgresql-devel packages are installed or available in the yum repo
[OK] iptables: Firewall is not running.
[OK] Time difference between clocks within acceptable threshold
[OK] Host FQDN is configured correctly
[OK] Host has proper java version.
ERROR: Fetching status of the cluster failed
HTTP Error 500: Server Error
Cluster ID: 4

Because I have CentOS 6.5, let's edit the /etc/centos-release file so the Pivotal installer sees CentOS 6.4.
[gpadmin@pivotal-master ~]$ cat /etc/centos-release
CentOS release 6.5 (Final)
[gpadmin@pivotal-master ~]$ sudo mv /etc/centos-release /etc/centos-release-orig
[gpadmin@pivotal-master ~]$ sudo cp /etc/centos-release-orig /etc/centos-release
[gpadmin@pivotal-master ~]$ sudo vi /etc/centos-release

CentOS release 6.4 (Final)  <-- edited so the installer sees CentOS 6.4 even though the machine runs CentOS 6.5
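The same edit can be scripted with sed instead of vi. A sketch, demonstrated on a scratch file so nothing system-wide changes (on the real node you would run the sed against /etc/centos-release with sudo, keeping the backup as above):

```shell
# Demonstrate on a scratch copy of the release line:
echo "CentOS release 6.5 (Final)" > /tmp/centos-release

# Rewrite 6.5 -> 6.4 in place (GNU sed -i):
sed -i 's/release 6\.5/release 6.4/' /tmp/centos-release
cat /tmp/centos-release
# prints: CentOS release 6.4 (Final)

# On the real node the equivalent would be:
#   sudo cp /etc/centos-release /etc/centos-release-orig
#   sudo sed -i 's/release 6\.5/release 6.4/' /etc/centos-release
```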

[gpadmin@pivotal-master ~]$ icm_client deploy -c ClusterConfigDir
Please enter the root password for the cluster nodes:
PCC creates a gpadmin user on the newly added cluster nodes (if any). Please enter a non-empty password to be used for the gpadmin user:
Verifying input
Starting install
[====================================================================================================] 100%
Results:
pivotal-master… [Success]
Details at /var/log/gphd/gphdmgr/
Cluster ID: 5

$ cat /var/log/gphd/gphdmgr/GPHDClusterInstaller_1392419546.log
Updating Option : TimeOut
Current Value   : 60
TimeOut="180"
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
[INFO] Deployment ID: 1392419546
[INFO] Private key path : /var/lib/puppet/ssl-icm/private_keys/ssl-icm-1392419546.pem
[INFO] Signed cert path : /var/lib/puppet/ssl-icm/ca/signed/ssl-icm-1392419546.pem
[INFO] CA cert path : /var/lib/puppet/ssl-icm/certs/ca.pem
hostlist: pivotal-master
running: massh /tmp/tmp.jaDiwkIFMH bombed uname -n
sync cmd sudo python ~gpadmin/GPHDNodeInstaller.py --server=pivotal-master.hadoopbox.com --certname=ssl-icm-1392419546 --logfile=/tmp/GPHDNodeInstaller_1392419546.log --sync --username=gpadmin
[INFO] Deploying batch with hosts ['pivotal-master']
writing host list to file /tmp/tmp.43okqQH7Ji
[INFO] All hosts succeeded.

$ icm_client list
Fetching installed clusters
Installed Clusters:
Cluster ID: 5     Cluster Name: pivotal-master     PHD Version: 2.0     Status: installed

$ icm_client start -l pivotal-master
Starting services
Starting cluster
[====================================================================================================] 100%
Results:
pivotal-master… [Success]
Details at /var/log/gphd/gphdmgr/

Check HDFS:
$ hdfs dfs -ls /
Found 4 items
drwxr-xr-x   – mapred hadoop          0 2014-02-14 15:19 /mapred
drwxrwxrwx   – hdfs   hadoop          0 2014-02-14 15:19 /tmp
drwxrwxrwx   – hdfs   hadoop          0 2014-02-14 15:20 /user
drwxr-xr-x   – hdfs   hadoop          0 2014-02-14 15:20 /yarn

Now open a browser at https://your_domain_name:5443/
Username/Password: gpadmin/gpadmin


Pivotal Command Center Service Status:
$ service commander status
commander (pid  2238) is running…