Optimize writes to a Hive table


I have an HQL job that reads from large source tables (over 500 TB) and writes to a static-partitioned Hive table, about 1 TB of data every day. The MapReduce processing itself is fine, but the write is very slow: the data-loading step takes anywhere from 10 to 28 hours.

I tried changing the table file format from SequenceFile (originally with Snappy compression) to ORC, which did not improve the write much. I also enabled parallel execution, auto map join, CBO, and vectorization to speed up processing in general. For the write specifically, I set hive.exec.scratchdir=/tmp/hive, hoping to turn the copy from the .hive-staging directory into a move/rename into the target directory, but that failed with the message below. Setting hive.exec.copyfile.maxsize=1099511627776 failed as well.

I am running MapReduce2 with YARN as the application master. How can I write directly to the target directory, or use a rename instead of the copy, which is what is taking so long?
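For reference, the settings I changed look roughly like this (the INSERT at the end is a simplified placeholder with made-up table and column names, not my actual query):

    -- general processing optimizations I enabled
    SET hive.exec.parallel=true;
    SET hive.auto.convert.join=true;
    SET hive.cbo.enable=true;
    SET hive.vectorized.execution.enabled=true;

    -- attempts to speed up the final load/move step
    SET hive.exec.scratchdir=/tmp/hive;
    SET hive.exec.copyfile.maxsize=1099511627776;

    -- simplified placeholder for the daily load into the static partition
    INSERT OVERWRITE TABLE target_table PARTITION (c='FULL', dt='20200922')
    SELECT col1, col2 FROM source_table WHERE load_dt = '20200922';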

Failed with exception 
Unable to move source /xxx/tmp/aaa/hive_hive_2020-10-02_15-06-33_205_3126778922824450411-1/-ext-10000 
to destination /xxx/ttt/temp/fff/c=FULL/dt=20200922 
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

The failing tasks also log:

Error: java.lang.RuntimeException: Hive Runtime Error while closing operators
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.close(ExecMapper.java:210)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename output from: /xxx/ttt/temp/fff/c=FULL/.hive-staging_hive_2020-10-02_16-52-38_613_8689698651254070943-1/_task_tmp.-ext-10000/_tmp.005299_3 to: /xxx/temp/fff/c=FULL/.hive-staging_hive_2020-10-02_16-52-38_613_8689698651254070943-1/_tmp.-ext-10000/005299_3
