Spring JdbcTemplate: how to send the complete batch to the DB2 server in one shot?


While jdbcTemplate.batchUpdate(...) is running, I can see the DB row count increasing gradually (by running count(*) on the table): first 2k, then 3k, and so on up to 10k. 2k and 3k are not exact numbers; sometimes I get 235 and then 4567.

I was expecting the 10k rows (the batch size) to be committed in one shot. In my understanding, if the row count is initially 0, the next count I see should be 10k. I don't want one-by-one inserts, for performance reasons; that's why I used the batch update feature, but it seems it also doesn't commit everything in one shot.

I want to send the data (10k rows) to the DB server only once per batch. Is there anything I should specify in the configuration for this?

Below is how I am calling jdbcTemplate batch update; the batch size is 10k.

public void insertRows(...) {
    ...
    jdbcTemplate.batchUpdate(query, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            ...
        }

        @Override
        public int getBatchSize() {
            if (data == null) {
                return 0;
            }
            return data.size();
        }
    });
}

Edit: I added @Transactional to the insertRows method and still see the same behavior. With @Transactional it does commit after 10k rows, but when I check the count with UR (select count(*) from mytable with ur) I can see the data arriving gradually (2k, 4k, and so on up to 10k). That means the data goes to the server in chunks (probably one by one). How can I send everything in one shot? This question suggests it is achieved with rewriteBatchedStatements in MySQL; is there anything similar in DB2?
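For reference, the transactional variant described in this edit looks roughly like the sketch below. The bean wiring and row shape are illustrative assumptions, and @Transactional only takes effect when the method is invoked through a Spring proxy (e.g. as a @Service bean):

    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    import org.springframework.jdbc.core.BatchPreparedStatementSetter;
    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class RowInserter {

        private final JdbcTemplate jdbcTemplate;

        public RowInserter(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        // All rows commit together at the end of the method, but the JDBC
        // driver may still ship the batch to the server in several round trips.
        @Transactional
        public void insertRows(String query, List<Object[]> data) {
            jdbcTemplate.batchUpdate(query, new BatchPreparedStatementSetter() {
                @Override
                public void setValues(PreparedStatement ps, int i) throws SQLException {
                    Object[] row = data.get(i);
                    for (int col = 0; col < row.length; col++) {
                        ps.setObject(col + 1, row[col]); // JDBC parameters are 1-based
                    }
                }

                @Override
                public int getBatchSize() {
                    return data == null ? 0 : data.size();
                }
            });
        }
    }

Even with this in place, the commit is atomic but the driver still decides how many statements travel per network round trip, which matches what the with ur query shows.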

I am using the DataSource implementation com.ibm.db2.jcc.DB2BaseDataSource.


There are 2 answers

Answer from sendon1982:

Maybe try the settings below, even though I have only seen them documented as MySQL-compatible. Also consider upgrading your DB2 JDBC driver version:

spring.datasource.hikari.data-source-properties.useConfigs=maxPerformance
spring.datasource.hikari.data-source-properties.rewriteBatchedStatements=true

Another way I can think of is to use CompletableFuture and submit multiple batch updates together, for example:
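A rough sketch of that idea, assuming the rows are independent and concurrent inserts are acceptable. The chunk size, thread pool, and class name are illustrative; note that each chunk runs on its own pooled connection and transaction:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.springframework.jdbc.core.JdbcTemplate;

    public class ParallelBatchInserter {

        private final JdbcTemplate jdbcTemplate;
        private final ExecutorService executor = Executors.newFixedThreadPool(4);

        public ParallelBatchInserter(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        public void insertAll(String sql, List<Object[]> rows, int chunkSize) {
            List<CompletableFuture<int[]>> futures = new ArrayList<>();
            // Split the rows into chunks and submit each chunk as its own batch.
            for (int from = 0; from < rows.size(); from += chunkSize) {
                List<Object[]> chunk =
                        rows.subList(from, Math.min(from + chunkSize, rows.size()));
                futures.add(CompletableFuture.supplyAsync(
                        () -> jdbcTemplate.batchUpdate(sql, chunk), executor));
            }
            // Wait for all chunks to finish before returning.
            CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        }
    }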

Answer from techi:

How about the method below? Specify nbUpdates = 10,000 in your case. I have not tested this, so please ignore my answer if it does not work.

    // The batch size is set by the BatchPreparedStatementSetter; the number of rows
    // we want to process is given by the nbUpdates parameter. Note: this is written
    // as a method of a JdbcTemplate subclass, since it calls execute(..) and uses its logger.
    public int[] batchUpdate(String sql, final long nbUpdates, final BatchPreparedStatementSetter pss) throws DataAccessException {
        if (logger.isDebugEnabled()) {
            logger.debug("Executing SQL batch update [" + sql + "]");
        }

        return execute(sql, new PreparedStatementCallback<int[]>() {
            public int[] doInPreparedStatement(PreparedStatement ps) throws SQLException {
                try {
                    int batchSize = pss.getBatchSize();
                    InterruptibleBatchPreparedStatementSetter ipss =
                            (pss instanceof InterruptibleBatchPreparedStatementSetter)
                                    ? (InterruptibleBatchPreparedStatementSetter) pss : null;
                    if (JdbcUtils.supportsBatchUpdates(ps.getConnection())) {
                        List<Integer> rowsAffected = new ArrayList<Integer>();
                        for (int i = 1; i <= nbUpdates; i++) {
                            pss.setValues(ps, i - 1);
                            if (ipss != null && ipss.isBatchExhausted(i - 1)) {
                                if (logger.isDebugEnabled()) {
                                    int batchIdx = (i % batchSize == 0) ? i / batchSize : (i / batchSize) + 1;
                                    logger.debug("Batch exhausted - Sending last SQL batch update #" + batchIdx);
                                }
                                int[] res = ps.executeBatch();
                                for (int j = 0; j < res.length; j++) {
                                    rowsAffected.add(res[j]);
                                }
                                break;
                            }
                            ps.addBatch();
                            if (i % batchSize == 0 || i == nbUpdates) {
                                if (logger.isDebugEnabled()) {
                                    int batchIdx = (i % batchSize == 0) ? i / batchSize : (i / batchSize) + 1;
                                    logger.debug("Sending SQL batch update #" + batchIdx);
                                }
                                int[] res = ps.executeBatch();
                                for (int j = 0; j < res.length; j++) {
                                    rowsAffected.add(res[j]);
                                }
                            }
                        }
                        int[] result = new int[rowsAffected.size()];
                        for (int i = 0; i < result.length; i++) {
                            result[i] = rowsAffected.get(i).intValue();
                        }
                        return result;
                    } else {
                        List<Integer> rowsAffected = new ArrayList<Integer>();
                        for (int i = 0; i < nbUpdates; i++) {
                            pss.setValues(ps, i);
                            if (ipss != null && ipss.isBatchExhausted(i)) {
                                break;
                            }
                            rowsAffected.add(ps.executeUpdate());
                        }
                        int[] rowsAffectedArray = new int[rowsAffected.size()];
                        for (int i = 0; i < rowsAffectedArray.length; i++) {
                            rowsAffectedArray[i] = rowsAffected.get(i);
                        }
                        return rowsAffectedArray;
                    }
                } finally {
                    if (pss instanceof ParameterDisposer) {
                        ((ParameterDisposer) pss).cleanupParameters();
                    }
                }
            }
        });
    }
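If this approach works for you, a hypothetical call site could look like the following; extendedTemplate (an instance of a JdbcTemplate subclass defining the method above) and rows (a List<Object[]> holding the 10,000 rows) are illustrative names, not from the question. getBatchSize() controls how many rows are sent per executeBatch() round trip, while nbUpdates is the total number of rows:

    // Hypothetical usage: process 10,000 rows; 'extendedTemplate', 'rows' and
    // 'loadRows()' are assumed names for illustration only.
    final List<Object[]> rows = loadRows(); // assumed helper returning 10,000 rows
    int[] updated = extendedTemplate.batchUpdate(
            "INSERT INTO mytable (col1, col2) VALUES (?, ?)",
            rows.size(), // nbUpdates: total number of rows to process
            new BatchPreparedStatementSetter() {
                @Override
                public void setValues(PreparedStatement ps, int i) throws SQLException {
                    ps.setObject(1, rows.get(i)[0]);
                    ps.setObject(2, rows.get(i)[1]);
                }

                @Override
                public int getBatchSize() {
                    // Return the whole row count so everything goes in a single
                    // executeBatch(); a smaller value would split it into chunks.
                    return rows.size();
                }
            });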