Memory leak when using open62541 to send a read request to an OPC UA server


I am setting up a C application that reads data from an OPC UA server and converts it to a JSON message for use elsewhere.

Currently the OPC UA client is set up as follows:

void setUpClient(struct ClientParameters *clientParameters, struct ClientOPCEndpointInfo* *clientInfos, int clientInfosLength)
{
    signal(SIGINT, stopHandler);

    client = UA_Client_new();
    UA_ClientConfig *clientConfig = UA_Client_getConfig(client);
    UA_ClientConfig_setDefault(clientConfig);

    clientConfig->securityMode = UA_MESSAGESECURITYMODE_NONE;

    opcServerAddress = clientParameters->endpointUrl;

    nodeCount = clientInfosLength;

    prepareClientNodes(clientInfos);
}
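
Both functions use some file-scope state that is not shown above. For completeness, here is a minimal sketch of those declarations, inferred from how the variables are used (the comments are assumptions; the connect call is assumed to happen elsewhere, e.g. via UA_Client_connect(client, opcServerAddress)):

#include <open62541/client.h>
#include <open62541/client_config_default.h>
#include <signal.h>

/* File-scope state used by the snippets (assumed from usage, not shown in the question) */
static UA_Client *client;
static char *opcServerAddress;   /* endpoint URL taken from the client parameters */
static int nodeCount;
static UA_NodeId *clientNodes;   /* filled by prepareClientNodes() */
static UA_ReadRequest request;   /* reused by readOPCData() below */

static void stopHandler(int sign)
{
    /* set a flag so the application can shut down cleanly */
}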

And data is read like this:

int readOPCData(struct ClientOPCEndpointInfo* *clientInfos, char *timestamp)
{
    UA_ReadValueId ids[nodeCount];
    
    for (int i = 0; i < nodeCount; i++)
    {
        UA_ReadValueId_init(&ids[i]);
        ids[i].attributeId = UA_ATTRIBUTEID_VALUE;
        ids[i].nodeId = clientNodes[i];
    }

    UA_ReadRequest_init(&request);
    request.nodesToRead = ids;
    request.nodesToReadSize = nodeCount;

    UA_ReadResponse response = UA_Client_Service_read(client, request);

    if (response.responseHeader.serviceResult != UA_STATUSCODE_GOOD)
    {
        logError("Error while reading OPC data from OPC server!", true);
        return 1;
    }

    UA_DateTime responseDateTime = response.responseHeader.timestamp;
    UA_Int64 responseTimestamp = UA_DateTime_toUnixTime(responseDateTime);
    sprintf(timestamp, "%li", responseTimestamp);

    for (int i = 0; i < nodeCount; i++)
    {
        UA_Variant *variant = &response.results[i].value;

        if (UA_Variant_hasScalarType(variant, &UA_TYPES[UA_TYPES_DOUBLE]))
        {
            UA_Double value = *(UA_Double*)variant->data;

            *clientInfos[i]->value = (double)value;
        }
        else if (UA_Variant_hasScalarType(variant, &UA_TYPES[UA_TYPES_FLOAT]))
        {
            UA_Float value = *(UA_Float*)variant->data;

            *clientInfos[i]->value = (double)value;
        }
        else if (UA_Variant_hasScalarType(variant, &UA_TYPES[UA_TYPES_UINT32]))
        {
            UA_UInt32 value = *(UA_UInt32*)variant->data;

            *clientInfos[i]->value = (double)value;
        }
        else if (UA_Variant_hasScalarType(variant, &UA_TYPES[UA_TYPES_BOOLEAN]))
        {
            UA_Boolean value = *(UA_Boolean*)variant->data;

            *clientInfos[i]->value = (double)value;
        }
    }

    return 0;
}

Functionally this works fine for my case. But when run on the production machine, the process gets killed by Ubuntu after a while, presumably by the OOM killer. I suspect the process is slowly running out of memory.

Valgrind reports the following problem after the program terminates:

==19032== 43,624 (40,320 direct, 3,304 indirect) bytes in 8 blocks are definitely lost in loss record 44 of 45
==19032==    at 0x484DA83: calloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==19032==    by 0x6A60F4: Array_decodeBinary (ua_types_encoding_binary.c:484)
==19032==    by 0x6A6ABB: decodeBinaryStructure.lto_priv.0 (ua_types_encoding_binary.c:1666)
==19032==    by 0x6B5B23: UA_decodeBinaryInternal (ua_types_encoding_binary.c:1817)
==19032==    by 0x69685D: processMSGResponse (ua_client.c:469)
==19032==    by 0x6B7A0F: UnknownInlinedFun (ua_securechannel.c:712)
==19032==    by 0x6B7A0F: UnknownInlinedFun (ua_securechannel.c:859)
==19032==    by 0x6B7A0F: UA_SecureChannel_processBuffer (ua_securechannel.c:975)
==19032==    by 0x698EB0: __Client_networkCallback (ua_client_connect.c:1489)
==19032==    by 0x6AD94E: TCP_connectionSocketCallback (eventloop_posix_tcp.c:217)
==19032==    by 0x6A142B: UA_EventLoopPOSIX_pollFDs (eventloop_posix_epoll.c:111)
==19032==    by 0x6A161B: UA_EventLoopPOSIX_run (eventloop_posix.c:287)
==19032==    by 0x699B3D: __Client_Service (ua_client.c:643)
==19032==    by 0x699C1E: __UA_Client_Service (ua_client.c:690)

This seems to be an internal problem in the open62541 library, but I can't tell whether it is caused by incorrect use on my end or by a bug in the library itself.


1 Answer

Answer by Renat (accepted):

The corresponding *_clear cleanup functions need to be called on open62541 structures before they go out of scope (the older *_deleteMembers functions are deprecated). Here the leaked memory is owned by the UA_ReadResponse: its results array is heap-allocated by the library while decoding the server's answer, so the response must be cleared on every return path:

int readOPCData(struct ClientOPCEndpointInfo* *clientInfos, char *timestamp)
{
    ...

    if (response.responseHeader.serviceResult != UA_STATUSCODE_GOOD)
    {
        logError("Error while reading OPC data from OPC server!", true);

        UA_ReadResponse_clear(&response);
        return 1;
    }

    ...

    /* Do NOT clear the request or the ids here: nodesToRead points at the
     * stack array ids, and each ids[i].nodeId is a shallow copy of
     * clientNodes[i]. UA_ReadRequest_clear(&request) would call free() on
     * stack memory, and UA_ReadValueId_clear(&ids[i]) would free node ids
     * still referenced by clientNodes. Nothing in the request was
     * heap-allocated, so only the response needs to be cleared. */
    UA_ReadResponse_clear(&response);
    return 0;
}
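
As a side note, for simple value reads open62541 also offers the high-level helper UA_Client_readValueAttribute, which builds and clears the request and response internally; only the returned UA_Variant has to be cleared. A minimal sketch for a single node, reusing the clientNodes and clientInfos names from the question:

UA_Variant value;
UA_Variant_init(&value);

/* Reads the Value attribute of one node; the variant owns a copy of the data. */
UA_StatusCode status = UA_Client_readValueAttribute(client, clientNodes[i], &value);
if (status == UA_STATUSCODE_GOOD &&
    UA_Variant_hasScalarType(&value, &UA_TYPES[UA_TYPES_DOUBLE]))
{
    *clientInfos[i]->value = *(UA_Double*)value.data;
}

UA_Variant_clear(&value);   /* releases the data decoded by the library */

This issues one service call per node, so the batched UA_ReadRequest above remains the better choice when many nodes are read at once.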