I have a Spring Boot application with several batch config classes. One of them reads a file from MinIO and splits it into three files, which are also created on MinIO. The config class contains a job called loadFactureItemsJob:
@Bean("loadFactureItemsJob")
public Job loadFactureItemsJob(JobBuilderFactory jobBuilderFactory,@Qualifier("loadFactureEnteteStep") Step loadFactureEnteteStep,
@Qualifier("loadFactureLigneStep") Step loadFactureLigneStep, @Qualifier("loadFacturePaiementStep") Step loadFacturePaiementStep,
@Qualifier("separateCsvStep") Step separateCsvStep,StepBuilderFactory stepBuilder,@Qualifier("errorFactureItemStep") Step stepError
) {
return jobBuilderFactory.get("loadFactureItemsJob")
.start(separateCsvStep)
.next(loadFactureEnteteStep)
.next(loadFactureLigneStep)
.next(loadFacturePaiementStep)
.build();
}
The job contains four steps; the problem is with the last three. Each of those steps has a listener, a reader and a writer, and is chunk-oriented, like this:
@Bean("loadFactureEnteteStep")
public Step loadFactureEnteteStep(StepBuilderFactory stepBuilderFactory,
@Qualifier("factureResource") Resource resource,
@Qualifier("factureWriter") FactureItemWriter factureWriter,
@Qualifier("listenerEntete") FetchFileToImportListener listenerEntete) throws IOException {
return stepBuilderFactory.get("loadFactureEnteteStep")
.<SapGenericElement, SapGenericElement> chunk(10000)
.reader(factureEnteteReader(resource))
.writer(factureWriter)
.listener(listenerEntete)
.build();
}
For example, this step reads a file from MinIO (3X XXX lines) and inserts the records into my database, while writing a report line for each record to another file on MinIO (up to this point, all good).
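For context, the reader behind factureEnteteReader(resource) is a plain FlatFileItemReader that streams straight from the injected S3 resource. A simplified sketch (the skipped header line, the delimiter and the column names below are placeholders, not my real mapping):

public FlatFileItemReader<SapGenericElement> factureEnteteReader(Resource resource) {
    // Streams the CSV directly from the MinIO/S3 resource, line by line
    return new FlatFileItemReaderBuilder<SapGenericElement>()
            .name("factureEnteteReader")
            .resource(resource)
            .linesToSkip(1)                      // placeholder: skip a header line
            .delimited()
            .delimiter(";")                      // placeholder delimiter
            .names("champ1", "champ2", "champ3") // placeholder column names
            .targetType(SapGenericElement.class)
            .build();
}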
Lately, when I execute this job, I get this error:
org.springframework.batch.item.file.NonTransientFlatFileException: Unable to read from resource: [Amazon s3 resource [bucket='bucket-app4155-8252-front-caisse-distrib' and object='xfb/in/dsie1021-sfr_distribution_loos/52941/FichierSepares_5686/LST_ENTETES_5686.csv']]
Caused by: java.net.SocketException: Connection reset
I tried decreasing my chunk size to 5000, then 500, then 100, but after some processing time I still get the same error.
I also set the timeouts on my AmazonS3 client:
@Autowired
@Qualifier("public")
private AmazonS3 amazonS3;

private void configureS3ClientTimeouts() {
    if (amazonS3 != null) {
        ClientConfiguration clientConfiguration = ((AmazonS3Client) amazonS3).getClientConfiguration();
        if (clientConfiguration != null) {
            clientConfiguration.setMaxErrorRetry(5);
            clientConfiguration.setSocketTimeout(7200000); // 2 hours
            clientConfiguration.withTcpKeepAlive(true);
        } else {
            LOGGER.info("ClientConfiguration is null!! S3 TIMEOUT NOT SET!!");
        }
    }
}
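For comparison, here is roughly what I would expect if the timeouts were passed in when the client is built, instead of being set on the already-created client (just a sketch: the property names, the region, the endpoint wiring and the path-style flag are assumptions about a MinIO setup, not my actual bean):

@Bean("public")
public AmazonS3 amazonS3(@Value("${minio.endpoint}") String endpoint,
                         @Value("${minio.access-key}") String accessKey,
                         @Value("${minio.secret-key}") String secretKey) {
    // Timeouts and retries applied at construction time instead of mutating the live client
    ClientConfiguration clientConfiguration = new ClientConfiguration()
            .withMaxErrorRetry(5)
            .withSocketTimeout(7200000)       // 2 hours
            .withConnectionTimeout(60000)
            .withTcpKeepAlive(true);

    return AmazonS3ClientBuilder.standard()
            .withClientConfiguration(clientConfiguration)
            .withEndpointConfiguration(
                    new AwsClientBuilder.EndpointConfiguration(endpoint, "us-east-1")) // placeholder region
            .withPathStyleAccessEnabled(true) // MinIO usually needs path-style access
            .withCredentials(new AWSStaticCredentialsProvider(
                    new BasicAWSCredentials(accessKey, secretKey)))
            .build();
}

I am not even sure the setters on the existing client (as in configureS3ClientTimeouts above) still take effect once the client has been created, which is why I am showing the construction-time variant for comparison.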
I also tried adding the @Retryable annotation:
@Retryable(value = { SocketException.class, Exception.class },
maxAttempts = 3,
backoff = @Backoff(delay = 1000, multiplier = 2))
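As far as I understand, @Retryable only does anything when retry is enabled and the annotated method is a public method on a Spring bean invoked through its proxy, so the wiring would look roughly like this (the configuration class and the wrapper service below are illustrative, not my actual classes):

@Configuration
@EnableRetry // without this, @Retryable annotations are never processed
public class RetryConfig {
}

@Service
public class FactureCsvService { // illustrative wrapper, not my real class

    @Retryable(value = { SocketException.class },
               maxAttempts = 3,
               backoff = @Backoff(delay = 1000, multiplier = 2))
    public SapGenericElement readNext(ItemReader<SapGenericElement> reader) throws Exception {
        // Only retried when called from another bean, through the Spring proxy;
        // a call from inside the same class bypasses the retry interceptor
        return reader.read();
    }
}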
but nothing really seems to work in my case!