Free ebook: Introducing Microsoft Azure HDInsight

New Free eBook by Microsoft Press:

Microsoft Press is thrilled to share another new free ebook with you: Introducing Microsoft Azure HDInsight, by Avkash Chauhan, Valentine Fontama, Michele Hart, Wee Hyong Tok, and Buck Woody.

Introduction (excerpt)

Microsoft Azure HDInsight is Microsoft’s 100 percent compliant distribution of Apache Hadoop on Microsoft Azure. This means that standard Hadoop concepts and technologies apply, so learning the Hadoop stack helps you learn the HDInsight service. At the time of this writing, HDInsight (version 3.0) uses Hadoop version 2.2 and Hortonworks Data Platform 2.0.

In Introducing Microsoft Azure HDInsight, we cover what big data really means, how you can use it to your advantage in your company or organization, and one of the services you can use to do that quickly—specifically, Microsoft’s HDInsight service. We start with an overview of big data and Hadoop, but we don’t emphasize only concepts in this book—we want you to jump in and get your hands dirty working with HDInsight in a practical way. To help you learn and even implement HDInsight right away, we focus on a specific use case that applies to almost any organization and demonstrate a process that you can follow along with.

We also help you learn more. In the last chapter, we look ahead at the future of HDInsight and give you recommendations for self-learning so that you can dive deeper into important concepts and round out your education on working with big data.

Here are the download links (the ebook excerpt above describes this offering):

Download the PDF (6.37 MB; 130 pages) from http://aka.ms/IntroHDInsight/PDF

Download the EPUB (8.46 MB) from http://aka.ms/IntroHDInsight/EPUB

Download the MOBI (12.8 MB) from http://aka.ms/IntroHDInsight/MOBI

Download the code samples (6.83 KB) from http://aka.ms/IntroHDInsight/CompContent

Hadoop 2.4.0 release (helpful links)

Kudos to the Hadoop community: the Hadoop 2.4.0 release is now available for everyone to consume. A short (but not exhaustive) list of improvements in HDFS and YARN, along with the overall framework, is below:

Hadoop 2.4.0 Highlights:

  • HDFS:
    • Full HTTPS support
    • ACL support in HDFS, which allows, for example, easier access to Apache Sentry-managed data by components that use it (see the example after this list)
    • Native support for rolling upgrades in HDFS
    • HDFS FSImage now uses protocol buffers for smoother operational upgrades
  • YARN:
    • ResourceManager HA automatic failover
    • YARN Timeline Server (preview) for storing and serving generic application history
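
For example, once ACLs are enabled on the namenode (dfs.namenode.acls.enabled set to true in hdfs-site.xml), you can manage them with the setfacl/getfacl subcommands; the path and user below are just placeholders for illustration:

$ hdfs dfs -setfacl -m user:alice:r-x /data/shared
$ hdfs dfs -getfacl /data/shared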

Hadoop 2.4.0 Release Notes:

http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/releasenotes.html

Hadoop 2.4.0 Source download:

http://apache.mirrors.tds.net/hadoop/common/hadoop-2.4.0/hadoop-2.4.0-src.tar.gz

Hadoop 2.4.0 Binary download:

http://apache.mirrors.tds.net/hadoop/common/hadoop-2.4.0/hadoop-2.4.0.tar.gz

Troubleshooting Cloudera Manager installation and start issues

After Cloudera Manager is installed and running, you can access its web UI at <Your_IP_Address>:7180. If you cannot access it, here are a few ways to troubleshoot the problem. These details are especially helpful when you are installing Cloudera Hadoop on a remote machine to which you only have shell access over SSH.

Verify if Cloudera Manager is running: 

[root@cloudera-master ~]# service cloudera-scm-server status
cloudera-scm-server (pid 4652) is running…

Now let's check the Cloudera Manager processes for further verification (the command line also reveals where the logs live):

[root@cloudera-master ~]# ps -ef | grep cloudera-scm
498        977     1  0 16:31 ?        00:00:01 /usr/bin/postgres -D /var/lib/cloudera-scm-server-db/data
root      3729     1  0 16:59 pts/0    00:00:00 su cloudera-scm -s /bin/bash -c nohup /usr/sbin/cmf-server
498       3731  3729 53 16:59 ?        00:00:47 /usr/java/jdk1.6.0_31/bin/java -cp .:lib/*:/usr/share/java/mysql-connector-java.jar -Dlog4j.configuration=file:/etc/cloudera-scm-server/log4j.properties -Dcmf.root.logger=INFO,LOGFILE -Dcmf.log.dir=/var/log/cloudera-scm-server -Dcmf.log.file=cloudera-scm-server.log -Dcmf.jetty.threshhold=WARN -Dcmf.schema.dir=/usr/share/cmf/schema -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dpython.home=/usr/share/cmf -Xmx2G -XX:MaxPermSize=256m com.cloudera.server.cmf.Main
root      3835  1180  0 17:00 pts/0    00:00:00 grep cloudera-scm

You can inspect the Cloudera Manager logs at the location shown in the command line above (the -Dcmf.log.dir setting):

[root@cloudera-master ~]# tail -5 /var/log/cloudera-scm-server/cloudera-scm-server.log 
2014-02-06 16:59:29,596  INFO [WebServerImpl:cmon.JobDetailGatekeeper@71] ActivityMonitor configured to allow job details for all jobs.
2014-02-06 16:59:29,597  INFO [WebServerImpl:cmon.JobDetailGatekeeper@71] ActivityMonitor configured to allow job details for all jobs.
2014-02-06 16:59:29,601  INFO [WebServerImpl:mortbay.log@67] jetty-6.1.26.cloudera.2
2014-02-06 16:59:29,605  INFO [WebServerImpl:mortbay.log@67] Started SelectChannelConnector@0.0.0.0:7180
2014-02-06 16:59:29,606  INFO [WebServerImpl:cmf.WebServerImpl@280] Started Jetty server.
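
The log above shows Jetty listening on 0.0.0.0:7180. Before blaming the firewall, you can quickly confirm on the machine itself that something is listening on that port (assuming netstat and curl are available):

[root@cloudera-master ~]# netstat -tlnp | grep 7180
[root@cloudera-master ~]# curl -I http://localhost:7180

If both succeed locally but the page is still unreachable remotely, the firewall is the likely culprit, which the next step addresses.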

Let's disable the Linux machine's firewall to see if that is the problem:

First, make sure you actually need to fiddle with your firewall; below I am disabling the firewall on my Linux machine:

[root@cloudera-master ~]# /etc/init.d/iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@cloudera-master ~]# /etc/init.d/iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@cloudera-master ~]# /etc/init.d/ip6tables save
ip6tables: Saving firewall rules to /etc/sysconfig/ip6table[  OK  ]
[root@cloudera-master ~]# /etc/init.d/ip6tables stop
ip6tables: Setting chains to policy ACCEPT: filter         [  OK  ]
ip6tables: Flushing firewall rules:                        [  OK  ]
ip6tables: Unloading modules:                              [  OK  ]
[root@cloudera-master ~]# chkconfig 
ip6tables           0:off     1:off     2:off     3:off     4:off     5:off     6:off
iptables            0:off     1:off     2:off     3:off     4:off     5:off     6:off

Once the firewall is completely stopped, restart Cloudera Manager:

If your machines already sit behind an external firewall, you can go ahead and disable the local firewall on the machine that runs Cloudera Manager. If not, please open the specific ports Cloudera Manager needs, as sketched below.
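
For example, on a machine that uses iptables like this one, the following sketch opens just the Cloudera Manager web UI port (7180, as used above) instead of disabling the firewall entirely; other Cloudera Manager ports may need similar rules, so consult the Cloudera documentation for the full list:

[root@cloudera-master ~]# iptables -I INPUT -p tcp --dport 7180 -j ACCEPT
[root@cloudera-master ~]# service iptables save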

[root@cloudera-master ~]# service --status-all | grep cloudera
cloudera-scm-server (pid  1082) is running...
/usr/bin/postgres "-D" "/var/lib/cloudera-scm-server-db/data"

[root@cloudera-master ~]# service cloudera-scm-server 
Usage: cloudera-scm-server {start|stop|restart|status}

[root@cloudera-master ~]# service cloudera-scm-server restart
Stopping cloudera-scm-server:                              [  OK  ]
Starting cloudera-scm-server:                              [  OK  ]

Note that cloudera-scm-server is listed by chkconfig; however, its status shows off for all runlevels.

[root@cloudera-master ~]# chkconfig 
cloudera-scm-server     0:off     1:off     2:off     3:off     4:off     5:off     6:off
cloudera-scm-server-db     0:off     1:off     2:off     3:on     4:on     5:on     6:off

Once working and accessible, the Cloudera Manager login page looks as shown below:

 

 

(Screenshot: Cloudera Manager login page)

Handling Hadoop Error "could only be replicated to 0 nodes, instead of 1" when copying data to HDFS or running MapReduce jobs

Sometimes, while copying files to HDFS or running a MapReduce job, you might receive an error like one of the following:

During a file copy to HDFS, the error and call stack look as below:

File /platfora/uploads/test.xml could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation

at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1339)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2198)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745) 
UTC Timestamp: 11/20 04:14 am    Version: 2.5.4-IQT-build.73

During a MapReduce job failure, the error message and call stack look as below:

DFSClient.java (line 2873) DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File ****/xyz.jar could only be replicated to 0 nodes, instead of 1

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:698)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:573)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1132)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

Various datanode-side problems can cause this issue (a quick diagnostic sketch follows the list below), such as:

  • Inconsistency in your datanodes
    • Restart your Hadoop cluster and see if this solves the problem.
  • Communication problems between datanodes and the namenode
    • Network issues
      • For example, if you run Hadoop on EC2 instances and the nodes cannot talk to each other because of security settings, this problem may occur. Placing all nodes in the same EC2 security group solves it.
    • Make sure that you can get datanode status from the HDFS web page or from the command line using the command below:
      • $ hadoop dfsadmin -report
  • Disk space full on a datanode
    • Verify disk space availability on your datanodes and make sure the Hadoop logs are not warning about disk space issues.
  • Busy or unresponsive datanode
    • Sometimes datanodes are busy scanning blocks or working on a maintenance job initiated by the namenode.
  • Invalid (for example, negative) block size configuration
    • Check the value of dfs.block.size in hdfs-site.xml and correct it for your Hadoop configuration.
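
A quick diagnostic pass along the lines above might look as below; the data directory and configuration paths are placeholders, so adjust them to your dfs.datanode.data.dir and Hadoop configuration directory:

$ hadoop dfsadmin -report
$ df -h /data/dfs/dn
$ grep -A1 dfs.block.size /etc/hadoop/conf/hdfs-site.xml

If the report shows zero live datanodes, fix connectivity or restart the datanodes before retrying the copy or the job.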

Unix/OSX trick to parse column based console text to get specific row and column text value

On OS X or other *nix operating systems, console output often arranges text data in columns. Sometimes you need to parse that column-based output to get a specific row or column value. In this article we use a combination of the sort, head, tail, tr, and cut commands to extract a particular row or column value from console output.

Let's start with the ls command, which returns folder and file names as shown below:

$ ls -l

total 8
drwxr-xr-x 2 avkash staff 68 Sep 17 00:10 Any
drwxr-xr-x 2 avkash staff 68 Sep 17 00:07 Apps
drwxr-xr-x 2 avkash staff 68 Sep 17 00:07 Books
drwxr-xr-x 2 avkash staff 68 Sep 17 00:08 Code
drwxr-xr-x 2 avkash staff 68 Sep 17 00:07 Music
drwxr-xr-x 2 avkash staff 68 Sep 17 00:07 Songs
drwxr-xr-x 2 avkash staff 68 Sep 17 00:08 Zeppos
-rw-r--r-- 1 avkash staff 46 Sep 17 00:23 data.txt

$ ls -l | head -n 2 | tr -s ' ' | cut -d' ' -f9

Any

$ ls -l | tail -n 1 | tr -s ' ' | cut -d' ' -f9
data.txt

You can also combine this trick with sorted output by using the sort command (-n, -M, -d, etc.):

$ ls -l | sort -n
-rw-r--r-- 1 avkash staff 46 Sep 17 00:23 data.txt
drwxr-xr-x 2 avkash staff 68 Sep 17 00:07 Apps
drwxr-xr-x 2 avkash staff 68 Sep 17 00:07 Books
drwxr-xr-x 2 avkash staff 68 Sep 17 00:07 Music
drwxr-xr-x 2 avkash staff 68 Sep 17 00:07 Songs
drwxr-xr-x 2 avkash staff 68 Sep 17 00:08 Code
drwxr-xr-x 2 avkash staff 68 Sep 17 00:08 Zeppos
drwxr-xr-x 2 avkash staff 68 Sep 17 00:10 Any
total 8

$ ls -l | sort -n | tail -n 2 | tr -s ' ' | cut -d' ' -f9
Any

$ ls -l | sort -n | head -n 1 | tr -s ' ' | cut -d' ' -f9
data.txt

This trick works with any text in the console, i.e., you could use the contents of a file as well, as Example 2 shows.

Example 2:

$ cat data.txt

Any
Apps
Books
Code
Gum
Music
Songs
Zeppos
X

$ cat data.txt | tail -n 1 | tr -s ' ' | cut -d' ' -f1

X

$ cat data.txt | head -n 1 | tr -s ' ' | cut -d' ' -f1

Any
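
As an aside (not part of the original trick), awk can replace the tr and cut pair in a single step; for example, this reproduces the directory-listing result from earlier:

$ ls -l | sort -n | head -n 1 | awk '{print $9}'
data.txt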

That's all. Have fun!

Thanks to my friend Aaron for showing this great trick.

Keywords: ls, sort, head, tail, tr, cut