How can I set the number or size of output files in an "insert" script?


I have a partitioned table "t1" in Hive with many data files of varying sizes (900 MB in total). I want to consolidate them so that fewer files end up in another table "t2". Both tables were created in this way:

SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;

USE xxx;
CREATE EXTERNAL TABLE tX PARTITIONED BY (a string, b string, c string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
WITH SERDEPROPERTIES (
'avro.schema.literal'='
{
    "type": "record",
    "name": "Event",
    "fields":[
        {
            "name": "headers",
            "type": {
                    "type": "map",
                    "values": ["null","string"]
                    }
        },
        {
            "name": "body",
            "type": "bytes"
        }
    ]
}')
STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/hive/xxx.db/tX';

To compact the files, I developed this script:

SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
SET hive.exec.dynamic.partition.mode=nonstrict;
-- When the average output file size falls below smallfiles.avgsize (128 MB),
-- run an extra merge job aiming for files of about size.per.task (256 MB).
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
SET hive.merge.size.per.task=268435456;
SET hive.merge.smallfiles.avgsize=134217728;
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c) SELECT * FROM xxx.t1 WHERE a=1 and b=2 and c=3;

On CDH4 with Hive 0.10, I got:

242106023 /hive/xxx.db/t2/a=1/b=2/c=3/000000_0
232866517 /hive/xxx.db/t2/a=1/b=2/c=3/000001_0
217161082 /hive/xxx.db/t2/a=1/b=2/c=3/000002_0
 37516541 /hive/xxx.db/t2/a=1/b=2/c=3/000003_0

Now I want to migrate to CDH5 with Hive 0.13.1. When I run the same script on CDH5, I get:

530348055 /hive/xxx.db/t2/a=1/b=2/c=3/000000_0

Execution plan CDH4:

ABSTRACT SYNTAX TREE:
  (TOK_QUERY (TOK_FROM (TOK_TABREF (TOK_TABNAME xxx t1))) (TOK_INSERT (TOK_DESTINATION (TOK_TAB (TOK_TABNAME xxx t2) (TOK_PARTSPEC (TOK_PARTVAL a) (TOK_PARTVAL b) (TOK_PARTVAL c)))) (TOK_SELECT (TOK_SELEXPR TOK_ALLCOLREF)) (TOK_WHERE (and (and (= (TOK_TABLE_OR_COL a) 1) (= (TOK_TABLE_OR_COL b) 2)) (= (TOK_TABLE_OR_COL c) 3)))))

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-7 depends on stages: Stage-1 , consists of Stage-4, Stage-3, Stage-5
  Stage-4
  Stage-0 depends on stages: Stage-4, Stage-3, Stage-6
  Stage-2 depends on stages: Stage-0
  Stage-3
  Stage-5
  Stage-6 depends on stages: Stage-5

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Alias -> Map Operator Tree:
        t1
          TableScan
            alias: t1
            Select Operator
              expressions:
                    expr: headers
                    type: map<string,string>
                    expr: body
                    type: array<tinyint>
                    expr: a
                    type: string
                    expr: b
                    type: string
                    expr: c
                    type: string
              outputColumnNames: _col0, _col1, _col2, _col3, _col4
              File Output Operator
                compressed: false
                GlobalTableId: 1
                table:
                    input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
                    output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
                    serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
                    name: xxx.t2

  Stage: Stage-7
    Conditional Operator

  Stage: Stage-4
    Move Operator
      files:
          hdfs directory: true
          destination: hdfs://node/tmp/hive-user/hive_2015-06-10_17-46-17_570_5009234087568150280-1/-ext-10000

  Stage: Stage-0
    Move Operator
      tables:
          partition:
            a
            b
            c
          replace: true
          table:
              input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
              output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
              serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
              name: xxx.t2

  Stage: Stage-2
    Stats-Aggr Operator

  Stage: Stage-3
    Map Reduce
      Alias -> Map Operator Tree:
        hdfs://node/tmp/hive-user/hive_2015-06-10_17-46-17_570_5009234087568150280-1/-ext-10002
            File Output Operator
              compressed: false
              GlobalTableId: 0
              table:
                  input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
                  output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
                  serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
                  name: xxx.t2

  Stage: Stage-5
    Map Reduce
      Alias -> Map Operator Tree:
        hdfs://node/tmp/hive-user/hive_2015-06-10_17-46-17_570_5009234087568150280-1/-ext-10002
            File Output Operator
              compressed: false
              GlobalTableId: 0
              table:
                  input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
                  output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
                  serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
                  name: xxx.t2

  Stage: Stage-6
    Move Operator
      files:
          hdfs directory: true
          destination: hdfs://node/tmp/hive-user/hive_2015-06-10_17-46-17_570_5009234087568150280-1/-ext-10000

Execution plan CDH5:

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-0 depends on stages: Stage-1
  Stage-2 depends on stages: Stage-0

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: t1
            Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
            Select Operator
              expressions: headers (type: map<string,string>), body (type: binary), a (type: string), b (type: string), c (type: string)
              outputColumnNames: _col0, _col1, _col2, _col3, _col4
              Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
              Reduce Output Operator
                key expressions: _col2 (type: string), _col3 (type: string), _col4 (type: string)
                sort order: +++
                Map-reduce partition columns: _col2 (type: string), _col3 (type: string), _col4 (type: string)
                Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
                value expressions: _col0 (type: map<string,string>), _col1 (type: binary), _col2 (type: string), _col3 (type: string), _col4 (type: string)
      Reduce Operator Tree:
        Extract
          Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
          File Output Operator
            compressed: false
            Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
            table:
                input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
                output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
                serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
                name: xxx.t2

  Stage: Stage-0
    Move Operator
      tables:
          partition:
            a
            b
            c
          replace: true
          table:
              input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
              output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
              serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
              name: xxx.t2

  Stage: Stage-2
    Stats-Aggr Operator

I tried modifying the script:

Script 1:

SET mapreduce.job.reduces=2;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c) SELECT * FROM xxx.t1 WHERE a=1 and b=2 and c=3;

Output 1:

Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 2

Script 2:

SET mapreduce.job.reduces=0;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c) SELECT * FROM xxx.t1 WHERE a=1 and b=2 and c=3;

Output 2 (in this case, SET mapreduce.job.reduces=0; has no effect):

Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 1

Script 3:

SET hive.exec.reducers.bytes.per.reducer=268435456;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c) SELECT * FROM xxx.t1 WHERE a=1 and b=2 and c=3;

Output 3:

Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 4

Regardless of the number of reducers, only one file (~500 MB) is written on CDH5.

Is something wrong with my script? Is it possible to set reducers=0? How can I set the number or size of output files in an "insert" script?

Thanks in advance.


1 Answer

Answer from txabez:

I have found the solution. The problem was a new property introduced in Hive 0.13, where it is enabled by default:

hive.optimize.sort.dynamic.partition

(https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties)

When this optimization is enabled, Hive sorts the rows by the dynamic partition columns in a reducer stage so that each reducer keeps only one record writer open at a time. Since my WHERE clause selects a single partition, every row is shuffled to the same reducer, which writes a single file. So I set the property to false.
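A minimal sketch of the working script, identical to the earlier ones except for the added SET line:

-- Disable the dynamic-partition sort optimization so the insert stays map-only.
SET hive.optimize.sort.dynamic.partition=false;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=snappy;
SET mapred.output.compression.type=BLOCK;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c) SELECT * FROM xxx.t1 WHERE a=1 and b=2 and c=3;

With the property disabled, the execution plan no longer needs a reducer: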

STAGE DEPENDENCIES:
  Stage-1 is a root stage
  Stage-7 depends on stages: Stage-1 , consists of Stage-4, Stage-3, Stage-5
  Stage-4
  Stage-0 depends on stages: Stage-4, Stage-3, Stage-6
  Stage-2 depends on stages: Stage-0
  Stage-3
  Stage-5
  Stage-6 depends on stages: Stage-5

STAGE PLANS:
  Stage: Stage-1
    Map Reduce
      Map Operator Tree:
          TableScan
            alias: t1
            Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
            Select Operator
              expressions: headers (type: map<string,string>), body (type: binary), a (type: string), b (type: string), c (type: string)
              outputColumnNames: _col0, _col1, _col2, _col3, _col4
              Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
              File Output Operator
                compressed: true
                Statistics: Num rows: 882980 Data size: 900640395 Basic stats: COMPLETE Column stats: NONE
                table:
                    input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
                    output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
                    serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
                    name: cassiopeia30_raw.t2

  Stage: Stage-7
    Conditional Operator

  Stage: Stage-4
    Move Operator
      files:
          hdfs directory: true
          destination: hdfs://dpbgr-cdh-clus02-ns/csipei/hive/cassiopeia30_raw.db/t2/.hive-staging_hive_2015-06-25_12-02-57_439_8862807801483314053-1/-ext-10000

  Stage: Stage-0
    Move Operator
      tables:
          partition:
            a
            b
            c
          replace: true
          table:
              input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
              output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
              serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
              name: cassiopeia30_raw.t2

  Stage: Stage-2
    Stats-Aggr Operator

  Stage: Stage-3
    Map Reduce
      Map Operator Tree:
          TableScan
            File Output Operator
              compressed: true
              table:
                  input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
                  output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
                  serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
                  name: cassiopeia30_raw.t2

  Stage: Stage-5
    Map Reduce
      Map Operator Tree:
          TableScan
            File Output Operator
              compressed: true
              table:
                  input format: org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat
                  output format: org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat
                  serde: org.apache.hadoop.hive.serde2.avro.AvroSerDe
                  name: cassiopeia30_raw.t2

  Stage: Stage-6
    Move Operator
      files:
          hdfs directory: true
          destination: hdfs://dpbgr-cdh-clus02-ns/csipei/hive/cassiopeia30_raw.db/t2/.hive-staging_hive_2015-06-25_12-02-57_439_8862807801483314053-1/-ext-10000

Time taken: 0.179 seconds, Fetched: 86 row(s)

The query runs without reducers:

Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0

I get as many output files as mappers, which is exactly what I wanted.
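If you also need to control how many files the mappers produce, you can additionally steer the input split size. This is an untested sketch on my side, assuming Hive's default CombineHiveInputFormat and the standard Hadoop 2 split-size property; 268435456 bytes (256 MB) is only an example target:

-- Larger splits mean fewer mappers and therefore fewer, larger output files.
SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
SET mapreduce.input.fileinputformat.split.maxsize=268435456;
SET hive.optimize.sort.dynamic.partition=false;
INSERT OVERWRITE TABLE xxx.t2 PARTITION (a, b, c) SELECT * FROM xxx.t1 WHERE a=1 and b=2 and c=3;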