Tagged: bigdata

How to create a Hive UDF in Scala

Source: https://community.hortonworks.com/articles/42695/how-to-create-a-hive-udf-in-scala.html   This article will focus on creating a custom Hive UDF in the Scala programming language. IntelliJ IDEA 2016 was used to create the project and artifacts. Creation and testing of the UDF was performed on the Hortonworks...
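For orientation, here is a minimal sketch of the kind of UDF the article builds – the package, class, and function names are illustrative (not taken from the article), and it assumes the hive-exec dependency is on the project's classpath:

package com.example.hive.udf

import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.hadoop.io.Text

// Classic Hive UDF API: Hive locates the logic by reflection on methods named "evaluate"
class ToUpper extends UDF {
  def evaluate(input: Text): Text =
    if (input == null) null else new Text(input.toString.toUpperCase)
}

Once packaged as a JAR, a UDF like this is registered from the Hive CLI with ADD JAR /path/to/your-udf.jar; followed by CREATE TEMPORARY FUNCTION to_upper AS 'com.example.hive.udf.ToUpper';.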

Permanently add jars to hadoop

Looking to add a custom SerDe and custom or third-party codecs to Hortonworks HDP? Only the auxlib folder trick worked for me after trying a lot of alternatives. The places where we need to add the auxlib folder containing the JARs are...
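As a rough sketch of the idea (the exact locations are listed in the full post and vary by HDP version – the path and JAR name below are assumptions, not taken from the article):

# hypothetical SerDe JAR and HDP path; adjust for your install
sudo mkdir -p /usr/hdp/current/hive-server2/auxlib
sudo cp my-custom-serde.jar /usr/hdp/current/hive-server2/auxlib/
# restart HiveServer2 (and the metastore) so the JAR lands on the classpath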

Best practices for Namenode and Datanode restarts

Problems: Following are some problems we might come across while working with a large Hadoop cluster setup: Namenode restarts taking a long time (http://nn-host:50070/dfshealth.html#tab-startup-progress); Namenode startup staying in safemode for a long time after restart. Best practices for Namenode &...
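While the full checklist is in the post itself, two stock HDFS commands are handy for confirming the second symptom:

hdfs dfsadmin -safemode get    # reports whether the Namenode is still in safemode
hdfs dfsadmin -report          # lists live and dead Datanodes and cluster capacity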

Setting up password-less ssh across all nodes in a cluster

Pre-requisites: the user account for which passwordless ssh will be set up should be present on all nodes; the password of the account should be the same across all nodes; the pdsh and ssh-copy-id commands should be available. Prepare 2 files: file_of_hosts.txt – containing all...
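The gist, as a sketch (assuming file_of_hosts.txt holds one hostname per line; the second file's name is cut off above, so it is not guessed here):

ssh-keygen -t rsa                      # run once as the target user, empty passphrase
while read host; do
  ssh-copy-id "$USER@$host"            # prompts for the (identical) account password on each node
done < file_of_hosts.txt
pdsh -w ^file_of_hosts.txt uptime      # verify: should complete with no password prompts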

Hive on Tez Performance Tuning – Determining Reducer Counts

Source: https://community.hortonworks.com/articles/22419/hive-on-tez-performance-tuning-determining-reducer.html   Short Description: Some practical steps in Hive on Tez tuning. How does Tez determine the number of reducers? How can I control this for performance? In this article, I will attempt to answer this while executing and tuning...
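For reference, the standard Hive settings that govern reducer counts on Tez are the ones below (values shown are the usual defaults, not recommendations from the article):

-- target bytes of input per reducer: a smaller value means more reducers
set hive.exec.reducers.bytes.per.reducer=268435456;
-- upper bound on the number of reducers
set hive.exec.reducers.max=1009;
-- let Tez shrink the reducer count at runtime based on actual intermediate data size
set hive.tez.auto.reducer.parallelism=true;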

Adding compression codec to Hortonworks data platform

Lately I tried installing the xz/lzma codec on my local VM setup. The compression ratios are pretty awesome. I won't do a benchmark here, try it out yourself 😉   Steps: download the codec JAR – https://github.com/yongtang/hadoop-xz or https://mvnrepository.com/artifact/io.sensesecure/hadoop-xz; copy the downloaded JAR to HDP's libs...
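The step that usually follows (truncated above) is registering the codec class with Hadoop; the class name below is taken from the hadoop-xz project and should be double-checked against its README, so treat this core-site.xml change as a sketch:

<property>
  <name>io.compression.codecs</name>
  <!-- append the XZ codec class to whatever codecs are already listed -->
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.SnappyCodec,io.sensesecure.hadoop.xz.XZCodec</value>
</property>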

Good looking .hiverc file

Following is the .hiverc from one of the Hadoop environments I work on:

-- additional .jar includes like the one below --
add jar hdfs://ualprod/tmp/json-serde-1.3.7-jar-with-dependencies.jar;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.auto.convert.join.noconditionaltask=true;
set hive.optimize.sort.dynamic.partition=true;
set hive.exec.max.dynamic.partitions=100000;
set hive.exec.max.dynamic.partitions.pernode=10000;
-- large mem??
set hive.tez.container.size=10240;
...

Kafka on OSX / macOS

Source: https://dtflaneur.wordpress.com/2015/10/05/installing-kafka-on-mac-osx/   Apache Kafka is a highly scalable publish-subscribe messaging system that can serve as the data backbone in distributed applications. With Kafka’s Producer-Consumer model it becomes easy to implement multiple data consumers that do live monitoring as well as persistent...
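For the impatient, the Homebrew route boils down to something like this (paths are the Intel-mac Homebrew defaults; the linked post has the full walkthrough):

brew install kafka                 # installs zookeeper as a dependency
zookeeper-server-start /usr/local/etc/kafka/zookeeper.properties &
kafka-server-start /usr/local/etc/kafka/server.properties &
# older Kafka releases address the topic tooling via zookeeper; newer ones use --bootstrap-server localhost:9092
kafka-topics --create --topic test --partitions 1 --replication-factor 1 --zookeeper localhost:2181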

Apache drill – No current connection

After reading multiple posts, it seems that this is a problem of conflicting JARs. My current setup has Apache Drill installed using $ brew install apache-drill, and upon executing $ drill-embedded or $ drill-localhost, I see the error below (line 10): robin@MacBook-Pro:~$ drill-localhost Java HotSpot(TM)...

Creating Hive tables on compressed files

Stuck with creating Hive tables on compressed files? Well, the documentation on apache.org suggests that Hive natively supports compressed files – https://cwiki.apache.org/confluence/display/Hive/CompressedStorage. Let's try that out. Store a snappy-compressed file on HDFS. … thinking, I do not have such a file… Wait!...
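The pattern being tested is roughly the one below (table name, columns, and paths are made up for illustration; gzip is used because a .gz text file is trivial to produce, and snappy behaves the same way once its codec is available):

gzip my_data.tsv
hdfs dfs -mkdir -p /tmp/compressed_demo
hdfs dfs -put my_data.tsv.gz /tmp/compressed_demo/

CREATE EXTERNAL TABLE compressed_demo (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/tmp/compressed_demo';
-- Hive picks the codec from the .gz file extension, so SELECTs read the data transparently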