How to use the rti.connext python library connect to Cloud Discovery Service

95 views Asked by At

Problem

I want to use the rti.connext (or rticonnextdds-connector) Python API to connect to CDS using UDPv4 WAN. However, after running the publish program, I can't see the topic on other computers in a different LAN. I'm not sure if it's an issue with the XML configuration file. Using C++, I can successfully publish the topic to CDS by loading the same XML configuration file.

System info

  • Ubuntu 20.04
  • RTI version: 6.1.1
  • RTI python library version: 0.1.5
  • Python version: 3.9.18

XML Setting (base_Qos_Profile.xml)

<?xml version="1.0"?>
<dds xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    xsi:noNamespaceSchemaLocation="https://community.rti.com/examples/built-qos-profiles" 
    version="6.1.1">
    <!-- QoS Library containing the QoS profile used in the example.
        A QoS library is a named set of QoS profiles.
    -->
    <types>
        <struct name= "msg">
            <member name="x" type="int16"/>
        </struct>
    </types>

    <qos_library name="RTIProxyQosLibrary">
        
        <!-- Logging -->
        <!-- https://community.rti.com/kb/enabling-logging-xml-qos-file -->
        
        <qos_profile name="factoryLogging" is_default_participant_factory_profile="true">
            <participant_factory_qos>
            <logging>
                <!-- <verbosity>SILENT</verbosity> -->
                <verbosity>LOCAL</verbosity>
                <category>ALL</category>
                <print_format>VERBOSE_TIMESTAMPED</print_format>

            <!-- Logging Option  1: no log file -->

            <!-- Logging Option  2: logging to a single file -->
                <!-- <output_file>LogFile.log</output_file> -->

            <!-- Logging Option 3: logging to a file set -->
                <output_file>LogFile_</output_file>
                <output_file_suffix>.log</output_file_suffix>
                <max_bytes_per_file>10000000</max_bytes_per_file>
                <max_files>10</max_files>
            </logging>
            </participant_factory_qos>
        </qos_profile>

        <!-- This profile is used to set up transport settings for the maximum
           size allowed for UDP.  This is required to get the maximum possible
           throughput.  -->

        <qos_profile name="MaxTransportThroughput">

            <domain_participant_qos>
                <transport_builtin>
                    <mask>UDPv4_WAN</mask>
                    <udpv4_wan>
                        <message_size_max>1400</message_size_max>
                    </udpv4_wan>
                </transport_builtin>
                <!-- 11/01 add, by rti recommand -->

                <discovery>
                    <initial_peers>
                        <element>rtps@udpv4_wan://<my_CDS address>:<port></element>
                    </initial_peers>
                </discovery>
            </domain_participant_qos>

            <participant_qos>
                <receiver_pool>
                    <buffer_size>1048112</buffer_size>
                    <!-- 524056 - 512KB  -->
                </receiver_pool>
                <property>
                    <value>
                        <!--
                            Configure UDP transport for higher throughput:
                          -->

                        <!-- 11/01 add, by rti recommand -->
                        <element>
                            <name>dds.participant.protocol.rtps_overhead</name>
                            <value>196</value>
                        </element>

                        <!-- 11/01 change, by rti recommand 1048112->1400 -->
                        <element>
                            <name>dds.transport.UDPv4.builtin.parent.message_size_max</name>
                            <value>1400</value>
                            <!--<value>1048112</value>-->
                            <!-- 512 KB  -->
                        </element>

                        <!--
                          The next setting is to setup the buffers for sending and recieving data
                          on the transport to be at least double what our expected message size will be.
                          -->
                        <element>
                            <name>dds.transport.UDPv4.builtin.send_socket_buffer_size</name>
                            <!-- <value>2097152</value> -->
                            <value>50331648</value>
                            <!-- <value>32 MB</value> -->
                        </element>
                        <element>
                            <name>dds.transport.UDPv4.builtin.recv_socket_buffer_size</name>
                            <value>50331648</value>
                            <!-- 16 MB -->
                        </element>
                        <!-- Configure shared memory transport for higher
                             throughput: -->
                        <element>
                            <!--  Set the shared memory maximum message size to
                                  the same value that was set for UDP.   -->
                            <name>dds.transport.shmem.builtin.parent.message_size_max</name>
                            <value>1048112</value>
                            <!-- 512 KB - header sizes -->
                            
                            <!-- <value>65507</value> -->
                            <!-- 64 KB - header sizes -->
                        </element>
                        <element>
                            <!-- Set the size of the shared memory transport's
                                 receive buffer to some large value.  -->
                            <name>dds.transport.shmem.builtin.receive_buffer_size</name>
                            <!-- <value>4194304</value> -->
                            <value>33554432</value>
                            <!-- 16 MB -->
                        </element>
                        <element>
                            <!--  Set the maximum number of messages that the
                                  shared memory transport can cache while
                                  waiting for them to be read and deserialized.
                             -->
                            <name>dds.transport.shmem.builtin.received_message_count_max</name>
                            <value>100000</value>
                            <!-- 524056 - 512KB  -->
                        </element>
                    </value>
                </property>
            </participant_qos>
        </qos_profile>
    </qos_library>
</dds>

Publisher (msg_pub.py)

import rti.connextdds as dds
import time
import argparse

try:
    xrange
except NameError:
    xrange = range

def publisher_main(domain_id, sample_count):
    participant = dds.DomainParticipant(domain_id)
    participant.enable()
    
    provider = dds.QosProvider("base_Qos_Profile.xml")
    msg = provider.type('msg')
    
    topic = dds.DynamicData.Topic(participant, "Example msg", msg, )

    publisher = dds.Publisher(participant)
    writer = dds.DynamicData.DataWriter(publisher, topic)
    
    handle = dds.InstanceHandle.nil()
    # write samples in a loop, incrementing the 'x' field
    count = 0
    while (sample_count == 0) or (count < sample_count):
        time.sleep(0.5)

        instance = dds.DynamicData(msg)
        instance["x"] = count
        print("publish:\n",instance)
        writer.write(instance, handle)
        count += 1


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="RTI Connext DDS Example: Using Builtin Topics (Publisher)"
    )
    parser.add_argument("-d", "--domain", type=int, default=0, help="DDS Domain ID")
    parser.add_argument(
        "-c", "--count", type=int, default=0, help="Number of samples to send"
    )

    args = parser.parse_args()

    publisher_main(args.domain, args.count)

I've tried modifying the XML file, but I still can't establish a successful connection. I hope to be able to subscribe to topics published from different domains under different LANs.

0

There are 0 answers