Setting up Pivotal Hadoop (PivotalHD 1.1 Community Edition) Cluster in CentOS 6.5

Download Pivotal HD Package

http://bitcast-a.v1.o1.sjc1.bitgravity.com/greenplum/pivotal-sw/pivotalhd_community_1.1.tar.gz

The package consists of three tarballs:

  • PHD-1.1.0.0-76.tar.gz
  • PCC-2.1.0-460.x86_64.tar.gz
  • PHDTools-1.1.0.0-97.tar.gz

Untar the above package and start with PCC (Pivotal Command Center).

Install Pivotal Command Center:

$ tar -zxvf PCC-2.1.0-460.x86_64.tar.gz
$ PHDCE1.1/PCC-2.1.0-460/install

Log in as the newly created user gpadmin and copy the default shell profiles into its home directory:
$ su - gpadmin
$ sudo cp /root/.bashrc .
$ sudo cp /root/.bash_profile .
$ sudo cp /root/.bash_logout .
$ sudo cp /root/.cshrc .
$ sudo cp /root/.tcshrc .

Log out and log back in:
$ exit
$ su - gpadmin

Make sure you have an alias set for your host in /etc/hosts:
$ sudo vi /etc/hosts
xx.xx.xx.xx pivotal-master.hadoopbox.com  pivotal-master
$ sudo service network restart
$ ping pivotal-master
$ ping pivotal-master.hadoopbox.com
Now we will use the Pivotal HD package, so let's untar it (it expands into the PHD-1.1.0.0-76 folder) and then import it:
$ tar -zxvf PHD-1.1.0.0-76.tar.gz
$ icm_client import -s PHD-1.1.0.0-76/

Fetch the cluster configuration template:
$ icm_client fetch-template -o ~/ClusterConfigDir

Edit the cluster configuration based on your domain details:
$ vi ~/ClusterConfigDir/clusterConfig.xml
Replace every occurrence of host.yourdomain.com with your own hostname. In my setup, a hostname containing a dot (.) was not accepted, so use the short hostname.
Also select the services you want to install; the three base services HDFS, YARN, and ZooKeeper are required in Pivotal HD:

<services>hdfs,yarn,zookeeper</services> <!-- hbase,hive,hawq,gpxf,pig,mahout</services> -->
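If the template references the placeholder hostname in many places, a sed one-liner can do the replacement in one shot (a sketch; pivotal-master is my short hostname, substitute your own):

$ sed -i 's/host\.yourdomain\.com/pivotal-master/g' ~/ClusterConfigDir/clusterConfig.xml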

Create password-less SSH configuration:

$ ssh-keygen -t rsa
$ cd .ssh
$ cat id_rsa.pub >> authorized_keys
$ cat authorized_keys
$ chmod 700 $HOME && chmod 700 ~/.ssh && chmod 600 ~/.ssh/*
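
Before deploying, verify that password-less SSH actually works; this should print the hostname without prompting for a password:

$ ssh gpadmin@pivotal-master hostname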

[gpadmin@pivotal-master ~]$ icm_client deploy -c ClusterConfigDir
Please enter the root password for the cluster nodes:
PCC creates a gpadmin user on the newly added cluster nodes (if any). Please enter a non-empty password to be used for the gpadmin user:
Verifying input
Starting install
Running scan hosts
[RESULT] The following hosts do not meet PHD prerequisites: [ pivotal-master.hadoopbox.com ] Details…

Host: pivotal-master.hadoopbox.com
Status: [FAILED]
[ERROR] Please verify supported OS type and version. Supported OS: RHEL6.1, RHEL6.2, RHEL6.3, RHEL6.4, CentOS6.1, CentOS6.2, CentOS6.3, CentOS6.4
[OK] SELinux is disabled
[OK] sshpass installed
[OK] gpadmin user exists
[OK] gpadmin user has sudo privilege
[OK] .ssh directory and authorized_keys have proper permission
[OK] Puppet version 2.7.20 installed
[OK] Ruby version 1.9.3 installed
[OK] Facter rpm version 1.6.17 installed
[OK] Admin node is reachable from host using FQDN and admin hostname.
[OK] umask is set to 0002.
[OK] nc and postgresql-devel packages are installed or available in the yum repo
[OK] iptables: Firewall is not running.
[OK] Time difference between clocks within acceptable threshold
[OK] Host FQDN is configured correctly
[OK] Host has proper java version.
ERROR: Fetching status of the cluster failed
HTTP Error 500: Server Error
Cluster ID: 4

Because I am on CentOS 6.5, which is not in the supported list above, let's edit the /etc/centos-release file so the Pivotal installer sees CentOS 6.4.
[gpadmin@pivotal-master ~]$ cat /etc/centos-release
CentOS release 6.5 (Final)
[gpadmin@pivotal-master ~]$ sudo mv /etc/centos-release /etc/centos-release-orig
[gpadmin@pivotal-master ~]$ sudo cp /etc/centos-release-orig /etc/centos-release
[gpadmin@pivotal-master ~]$ sudo vi /etc/centos-release

CentOS release 6.4 (Final)  <-- edited to look like CentOS 6.4 even though the machine actually runs CentOS 6.5
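Alternatively, the same edit as a one-liner, in case you are scripting this workaround:

$ sudo sed -i 's/release 6.5/release 6.4/' /etc/centos-release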

[gpadmin@pivotal-master ~]$ icm_client deploy -c ClusterConfigDir
Please enter the root password for the cluster nodes:
PCC creates a gpadmin user on the newly added cluster nodes (if any). Please enter a non-empty password to be used for the gpadmin user:
Verifying input
Starting install
[====================================================================================================] 100%
Results:
pivotal-master… [Success]
Details at /var/log/gphd/gphdmgr/
Cluster ID: 5

$ cat /var/log/gphd/gphdmgr/GPHDClusterInstaller_1392419546.log
Updating Option : TimeOut
Current Value   : 60
TimeOut="180"
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
pivotal-master : Push Succeeded
[INFO] Deployment ID: 1392419546
[INFO] Private key path : /var/lib/puppet/ssl-icm/private_keys/ssl-icm-1392419546.pem
[INFO] Signed cert path : /var/lib/puppet/ssl-icm/ca/signed/ssl-icm-1392419546.pem
[INFO] CA cert path : /var/lib/puppet/ssl-icm/certs/ca.pem
hostlist: pivotal-master
running: massh /tmp/tmp.jaDiwkIFMH bombed uname -n
sync cmd sudo python ~gpadmin/GPHDNodeInstaller.py --server=pivotal-master.hadoopbox.com --certname=ssl-icm-1392419546 --logfile=/tmp/GPHDNodeInstaller_1392419546.log --sync --username=gpadmin
[INFO] Deploying batch with hosts ['pivotal-master']
writing host list to file /tmp/tmp.43okqQH7Ji
[INFO] All hosts succeeded.

$ icm_client list
Fetching installed clusters
Installed Clusters:
Cluster ID: 5     Cluster Name: pivotal-master     PHD Version: 2.0     Status: installed

$ icm_client start -l pivotal-master
Starting services
Starting cluster
[====================================================================================================] 100%
Results:
pivotal-master… [Success]
Details at /var/log/gphd/gphdmgr/

Check HDFS:
$ hdfs dfs -ls /
Found 4 items
drwxr-xr-x   - mapred hadoop          0 2014-02-14 15:19 /mapred
drwxrwxrwx   - hdfs   hadoop          0 2014-02-14 15:19 /tmp
drwxrwxrwx   - hdfs   hadoop          0 2014-02-14 15:20 /user
drwxr-xr-x   - hdfs   hadoop          0 2014-02-14 15:20 /yarn
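
As a quick smoke test you can also write a file into HDFS and read it back; /tmp is world-writable per the listing above:

$ hdfs dfs -put /etc/hosts /tmp/
$ hdfs dfs -cat /tmp/hosts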

Now open a browser at https://your_domain_name:5443/
Username/Password: gpadmin/gpadmin

Pivotal Command Center Service Status:
$ service commander status
commander (pid  2238) is running…

Troubleshooting Cloudera Manager installation and start issues

After Cloudera Manager is installed and running you can access its web UI at <Your_IP_Address>:7180. If you cannot access it, here are a few ways to troubleshoot the problem. These steps are especially helpful when you are installing Cloudera Hadoop on a remote machine and only have shell access to it over SSH.
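
A quick first check from the shell itself tells you whether the UI is being served locally at all; if this returns an HTTP response but remote access still fails, the problem is usually the network or firewall rather than Cloudera Manager itself:

[root@cloudera-master ~]# curl -sI http://localhost:7180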

Verify if Cloudera Manager is running: 

[root@cloudera-master ~]# service cloudera-scm-server status
cloudera-scm-server (pid 4652) is running…

Now let's check the Cloudera Manager processes for further verification:

[root@cloudera-master ~]# ps -ef | grep cloudera-scm
498        977     1  0 16:31 ?        00:00:01 /usr/bin/postgres -D /var/lib/cloudera-scm-server-db/data
root      3729     1  0 16:59 pts/0    00:00:00 su cloudera-scm -s /bin/bash -c nohup /usr/sbin/cmf-server
498       3731  3729 53 16:59 ?        00:00:47 /usr/java/jdk1.6.0_31/bin/java -cp .:lib/*:/usr/share/java/mysql-connector-java.jar -Dlog4j.configuration=file:/etc/cloudera-scm-server/log4j.properties -Dcmf.root.logger=INFO,LOGFILE -Dcmf.log.dir=/var/log/cloudera-scm-server -Dcmf.log.file=cloudera-scm-server.log -Dcmf.jetty.threshhold=WARN -Dcmf.schema.dir=/usr/share/cmf/schema -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dpython.home=/usr/share/cmf -Xmx2G -XX:MaxPermSize=256m com.cloudera.server.cmf.Main
root      3835  1180  0 17:00 pts/0    00:00:00 grep cloudera-scm

You can access the Cloudera Manager logs at the location shown on the command line above (-Dcmf.log.dir):

[root@cloudera-master ~]# tail -5 /var/log/cloudera-scm-server/cloudera-scm-server.log 
2014-02-06 16:59:29,596  INFO [WebServerImpl:cmon.JobDetailGatekeeper@71] ActivityMonitor configured to allow job details for all jobs.
2014-02-06 16:59:29,597  INFO [WebServerImpl:cmon.JobDetailGatekeeper@71] ActivityMonitor configured to allow job details for all jobs.
2014-02-06 16:59:29,601  INFO [WebServerImpl:mortbay.log@67] jetty-6.1.26.cloudera.2
2014-02-06 16:59:29,605  INFO [WebServerImpl:mortbay.log@67] Started SelectChannelConnector@0.0.0.0:7180
2014-02-06 16:59:29,606  INFO [WebServerImpl:cmf.WebServerImpl@280] Started Jetty server.
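
The last log line shows Jetty bound to 0.0.0.0:7180. You can also confirm the port is actually listening (a quick check, assuming net-tools is installed):

[root@cloudera-master ~]# netstat -tlnp | grep 7180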

Let's disable the Linux firewall to see whether it is blocking access:

First make sure it is acceptable to fiddle with the firewall in your environment; below I am disabling the firewall on my Linux machine:

[root@cloudera-master ~]# /etc/init.d/iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@cloudera-master ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@cloudera-master ~]# /etc/init.d/ip6tables save
ip6tables: Saving firewall rules to /etc/sysconfig/ip6table[  OK  ]
[root@cloudera-master ~]# /etc/init.d/ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter         [  OK  ]
ip6tables: Flushing firewall rules:                        [  OK  ]
ip6tables: Unloading modules:                              [  OK  ]
[root@cloudera-master ~]# chkconfig 
ip6tables           0:off     1:off     2:off     3:off     4:off     5:off     6:off
iptables            0:off     1:off     2:off     3:off     4:off     5:off     6:off

Once the firewall is completely stopped, restart Cloudera Manager:

If your machines are behind a perimeter firewall, you can go ahead and disable the host firewall on the machine running Cloudera Manager. If not, set up/open the specific ports Cloudera Manager needs, for example as shown below.
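To open just the web UI port instead of disabling the firewall entirely (a sketch; 7180 is the Cloudera Manager web UI port, add any other CM ports your deployment uses):

[root@cloudera-master ~]# iptables -I INPUT -p tcp --dport 7180 -j ACCEPT
[root@cloudera-master ~]# service iptables save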

[root@cloudera-master ~]# service --status-all | grep cloudera
cloudera-scm-server (pid  1082) is running…
/usr/bin/postgres “-D” “/var/lib/cloudera-scm-server-db/data”

[root@cloudera-master ~]# service cloudera-scm-server 
Usage: cloudera-scm-server {start|stop|restart|status}

[root@cloudera-master ~]# service cloudera-scm-server restart
Stopping cloudera-scm-server:                              [  OK  ]
Starting cloudera-scm-server:                              [  OK  ]

Note that cloudera-scm-server is listed by chkconfig, but all of its runlevels show off:

[root@cloudera-master ~]# chkconfig 
cloudera-scm-server     0:off     1:off     2:off     3:off     4:off     5:off     6:off
cloudera-scm-server-db     0:off     1:off     2:off     3:on     4:on     5:on     6:off

Once working and accessible, the Cloudera Manager login page looks as below:

[Screenshot: Cloudera Manager login page]

Hadoop HDFS Error: xxxx could only be replicated to 0 nodes, instead of 1

Sometimes when using Hadoop, either accessing HDFS directly or running a MapReduce job that accesses HDFS, you may get an error of the form: XXXX could only be replicated to 0 nodes, instead of 1.

Example (1): Copying a file from local file system to HDFS
$myhadoop$ ./currenthadoop/bin/hadoop fs -copyFromLocal ./b.txt /
14/02/03 11:59:48 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /b.txt could only be replicated to 0 nodes, instead of 1
Example (2): Running MapReduce Job:
$myhadoop$ ./currenthadoop/bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 1
 Number of Maps  = 10
 Samples per Map = 1
 14/02/03 12:02:11 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/henryo/PiEstimator_TMP_3_141592654/in/part0 could only be replicated to 0 nodes, instead of 1
The root cause of the above problem is that no DataNode is available: the DataNode process is not running at all.
You can verify this by running the jps command, as below, to make sure all the key processes for your HDFS/MR1 or HDFS/MR2 (YARN) setup are running.
Hadoop processes for HDFS/MR1:

$ jps
69269 TaskTracker
69092 DataNode
68993 NameNode
69171 JobTracker

Hadoop processes for HDFS/MR2 (YARN):

$ jps
43624 DataNode
44005 ResourceManager
43529 NameNode
43890 SecondaryNameNode
44105 NodeManager

If you look at the DataNode logs you will likely see the reason why the DataNode could not start, e.g.:

2014-02-03 17:50:37,334 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-datanode.properties, hadoop-metrics2.properties
2014-02-03 17:50:37,947 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /private/tmp/hdfs/datanode: namenode namespaceID = 1867802097; datanode namespaceID = 1895712546
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
Based on the log above, the problem is with the DataNode storage directory (/tmp/hdfs/datanode): the namespaceID stored there no longer matches the NameNode's. This typically happens when the NameNode is reformatted while the DataNode keeps its old data; the same startup failure can also occur if the directory does not exist, is unreadable, or is locked.
Solution:
To solve this problem, fix the DataNode storage directory; once it is properly configured (or cleared, as in the sketch below), start the DataNode/NameNode again.
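
A minimal fix sketch for the namespaceID mismatch shown above, assuming the block data on this DataNode is disposable (this deletes HDFS data; never do it on a cluster whose data you need). Paths follow the single-node Hadoop 1.x layout from the earlier examples:

$ ./currenthadoop/bin/stop-all.sh
$ rm -rf /tmp/hdfs/datanode/*
$ ./currenthadoop/bin/start-all.sh
$ jps    # DataNode should now appear in the list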

Troubleshooting YARN NodeManager – Unable to start NodeManager because mapreduce.shuffle value is invalid

With Hadoop 2.2.x you might find that the NodeManager is not running, with the following error reported when starting the YARN NodeManager:

2014-01-31 17:13:00,500 FATAL org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Failed to initialize mapreduce.shuffle
java.lang.IllegalArgumentException: The ServiceName: mapreduce.shuffle set in yarn.nodemanager.aux-services is invalid.The valid service name should only contain a-zA-Z0-9_ and can not start with numbers
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices.serviceInit(AuxServices.java:98)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:218)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:188)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:338)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:386)

If you check yarn-site.xml (in etc/hadoop/) you will see the following setting by default:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce.shuffle</value>
</property>

Solution:

To solve this problem you just need to change mapreduce.shuffle to mapreduce_shuffle as shown below:

<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

Note: with Hadoop 0.23.10 the value mapreduce.shuffle is still correct and works fine, so this change applies to Hadoop 2.2.x.
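
After editing yarn-site.xml, restart YARN and confirm the NodeManager stays up (a quick check; paths assume a standard Hadoop 2.2.x layout):

$ sbin/stop-yarn.sh && sbin/start-yarn.sh
$ jps | grep NodeManager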