PLC4X: Exception during scraping of Job


I'm currently developing a project that reads data from 19 Siemens S7-1500 PLCs and 1 Modicon. I have used the scraper tool, following this tutorial:

PLC4x scraper tutorial

but after the scraper has been running for a short time I get the following exception:

(screenshot of the exception stack trace)

I have changed the scheduled time between 1 and 100, and I always get the same exception once the scraper reaches the same number of received messages.

I have tested whether using PlcDriverManager instead of PooledPlcDriverManager could be a solution, but the same problem persists.
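For reference, this is roughly how I compared the two managers in isolation (the package names are the ones from 0.7.0 as far as I can tell, and the connection string is only a placeholder, not the real address of one of my PLCs):

    import org.apache.plc4x.java.PlcDriverManager;
    import org.apache.plc4x.java.api.PlcConnection;
    import org.apache.plc4x.java.utils.connectionpool.PooledPlcDriverManager;

    public class ManagerComparison {
        public static void main(String[] args) throws Exception {
            // Plain manager: every getConnection() opens a fresh connection to the PLC
            PlcDriverManager plainManager = new PlcDriverManager();
            try (PlcConnection connection = plainManager.getConnection("s7://192.168.0.10")) { // placeholder address
                System.out.println("Connected (plain): " + connection.isConnected());
            }

            // Pooled manager: connections are reused, close() only returns them to the pool
            PlcDriverManager pooledManager = new PooledPlcDriverManager();
            try (PlcConnection connection = pooledManager.getConnection("s7://192.168.0.10")) { // placeholder address
                System.out.println("Connected (pooled): " + connection.isConnected());
            }
        }
    }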

In my pom.xml I use the following dependency:

        <dependency>
            <groupId>org.apache.plc4x</groupId>
            <artifactId>plc4j-scraper</artifactId>
            <version>0.7.0</version>
        </dependency>

I have tried changing the version to an older one, such as 0.6.0 or 0.5.0, but the problem still persists.

If I use the Modicon (Modbus TCP) I also get this exception after a short amount of time.

Does anyone know why this error is happening? Thanks in advance.

Edit: With scraper version 0.8.0-SNAPSHOT I continue to have this problem.

Edit2: This is my code. I think the problem may be that my scraper is opening a lot of connections, and when it reaches 65526 messages it fails. But since all the processing happens inside the lambda function and I'm using a PooledPlcDriverManager, I think the scraper is using only one connection, so I don't know where the mistake is.

    try {
        // Create a new PooledPlcDriverManager
        PlcDriverManager S7_plcDriverManager = new PooledPlcDriverManager();

        // Trigger Collector
        TriggerCollector S7_triggerCollector = new TriggerCollectorImpl(S7_plcDriverManager);

        // Messages counter
        AtomicInteger messagesCounter = new AtomicInteger();

        // Configure the scraper by binding a scraper configuration, a ResultHandler and a TriggerCollector together
        TriggeredScraperImpl S7_scraper = new TriggeredScraperImpl(S7_scraperConfig, (jobName, sourceName, results) -> {
            LinkedList<Object> S7_results = new LinkedList<>();

            messagesCounter.getAndIncrement();

            S7_results.add(jobName);
            S7_results.add(sourceName);
            S7_results.add(results);

            logger.info("Array: " + String.valueOf(S7_results));
            logger.info("MESSAGE number: " + messagesCounter);

            // Producer topic routing: derive the topic from the source name (e.g. "S7_SourcePLC1" -> "s7PLC1")
            String topic = "s7" + S7_results.get(1).toString().substring(S7_results.get(1).toString().indexOf("S7_SourcePLC") + 9, S7_results.get(1).toString().length());
            String key = parseKey_S7("s7");
            String value = parseValue_S7(S7_results.getLast().toString(), S7_results.get(1).toString());
            logger.info("------- PARSED VALUE -------------------------------- " + value);

            // Create the record for my own Kafka producer
            ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, key, value);

            // Send data to Kafka - asynchronous
            producer.send(record, new Callback() {
                public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                    // executes every time a record is successfully sent or an exception is thrown
                    if (e == null) {
                        // the record was successfully sent
                        logger.info("Received new metadata. \n" +
                                "Topic:" + recordMetadata.topic() + "\n" +
                                "Partition: " + recordMetadata.partition() + "\n" +
                                "Offset: " + recordMetadata.offset() + "\n" +
                                "Timestamp: " + recordMetadata.timestamp());
                    } else {
                        logger.error("Error while producing", e);
                    }
                }
            });

        }, S7_triggerCollector);

        S7_scraper.start();
        S7_triggerCollector.start();

    } catch (ScraperException e) {
        logger.error("Error starting the scraper (S7_scraper)", e);
    }
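In case the problem is connections not being released, this is how I shut everything down when the application exits. It reuses the variables from the block above, and assumes the 0.7.x Scraper and TriggerCollector interfaces expose a stop() method:

    // Registered inside the try block above, after starting the scraper and the collector.
    // On JVM exit: stop scraping, stop the trigger collector and flush pending Kafka records.
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        S7_scraper.stop();           // assumption: stop() is available on the scraper
        S7_triggerCollector.stop();  // assumption: stop() is available on the trigger collector
        producer.flush();
        producer.close();
    }));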

1 Answer

Answered by Christofer Dutz:

So in the end it was indeed the PLC that was simply hanging up the connection randomly. However, the NiFi integration should have handled this situation more gracefully. I implemented a fix for this particular error ... could you please give version 0.8.0-SNAPSHOT a try (or use 0.8.0 if we happen to have released it already)?
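Pulling in the snapshot should look roughly like this in the pom.xml (assuming the standard Apache snapshots repository is not already configured in your build):

        <repositories>
            <repository>
                <id>apache-snapshots</id>
                <url>https://repository.apache.org/content/repositories/snapshots/</url>
                <snapshots>
                    <enabled>true</enabled>
                </snapshots>
            </repository>
        </repositories>

        <dependency>
            <groupId>org.apache.plc4x</groupId>
            <artifactId>plc4j-scraper</artifactId>
            <version>0.8.0-SNAPSHOT</version>
        </dependency>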