Issue inserting data into a Hive partitioned table with over 100k partitions


I created a staging table with 20 million records and only two fields, viewerid and viewedid. From it I am trying to build a dynamically partitioned ORC table, partitioned by the viewerid column, but the map job never completes, as shown in the attached screenshot.
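For reference, here is a minimal sketch of the kind of statements I am running (the target table name bmviews_orc is a placeholder, not the actual name; the DDL is reconstructed from the description above, not copied from my script):

-- target table: ORC, dynamically partitioned by viewerid
CREATE TABLE bmviews_orc (
  viewedid INT
)
PARTITIONED BY (viewerid INT)
STORED AS ORC;

-- dynamic partitioning has to be enabled for the insert below
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- the partition column must come last in the SELECT list
INSERT OVERWRITE TABLE bmviews_orc PARTITION (viewerid)
SELECT viewedid, viewerid
FROM bmviews;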

**mapred-site.xml**

<configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>localhost:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>localhost:19888</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>8192</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3072m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx6144m</value>
</property>

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>4</value>
</property>
</configuration>


**yarn-site.xml**

 <property>
 <name>yarn.nodemanager.aux-services</name>
 <value>mapreduce_shuffle</value>
</property>
<property>
 <name>yarn.resourcemanager.scheduler.address</name>
 <value>hadoop-master:8030</value>
</property>
<property>
 <name>yarn.resourcemanager.address</name>
 <value>hadoop-master:8032</value>
</property>
<property>
 <name>yarn.resourcemanager.webapp.address</name>
 <value>hadoop-master:8088</value>
</property>
<property>
 <name>yarn.resourcemanager.resource-tracker.address</name>
 <value>hadoop-master:8031</value>
</property>

job status:

[screenshot of the job status showing the map phase not completing]

my staging table:

hive> desc formatted bmviews;
OK
# col_name              data_type               comment             

viewerid                int                                         
viewedid                int                                         

# Detailed Table Information         
Database:               bm                       
Owner:                  sudheer                  
CreateTime:             Tue Aug 29 18:22:34 IST 2017     
LastAccessTime:         UNKNOWN                  
Retention:              0                        
Location:               hdfs://hadoop-master:54311/user/hive/warehouse/bm.db/bmviews     
Table Type:             MANAGED_TABLE            
Table Parameters:        
    numFiles                9                   
    numRows                 0                   
    rawDataSize             0                   
    totalSize               539543256           
    transient_lastDdlTime   1504070146          

# Storage Information        
SerDe Library:          org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe   
InputFormat:            org.apache.hadoop.mapred.TextInputFormat     
OutputFormat:           org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat  

my partition table description:

[screenshot of the partitioned table's "desc formatted" output]

I have increased the allowed dynamic partitions per node to 200,000, but the job still fails. I have two data nodes with 8 GB and 6 GB of RAM respectively, and a name node with 16 GB of RAM.
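To be concrete about which limit I mean, this is presumably the per-node setting I raised (the second line is shown only because Hive also enforces a job-wide cap that must cover the total number of partitions created; the 300,000 value there is an assumption, not something taken from my actual session):

-- per-node limit, raised from Hive's default of 100
SET hive.exec.max.dynamic.partitions.pernode=200000;
-- job-wide cap (default 1000) must also be at least the total partition count; assumed value
SET hive.exec.max.dynamic.partitions=300000;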

How can I insert the data into my partitioned table?
