I have a service that uses a load balancer to expose an IP externally. I am using MetalLB because my cluster is bare metal.
This is the configuration of the service: 
Inside the cluster, the running application binds a ZMQ socket (TCP transport) like:
m_zmqSock->bind(endpoint);
where endpoint = tcp://127.0.0.1:1234 and
m_zmqSock = std::make_unique<zmq::socket_t>(*m_zmqContext,zmq::socket_type::pair);
m_zmqSock->setsockopt(ZMQ_RCVTIMEO,1);
Then, from an application on my local computer (with access to the cluster), I am trying to connect and send data like:
zmqSock->connect(zmqServer);
where zmqServer = tcp://192.168.49.241:1234 and
zmq::context_t ctx;
auto zmqSock = std::make_unique<zmq::socket_t>(ctx,zmq::socket_type::pair);
Any idea how I could make the ZMQ socket connect from my host, so that I can send data to the application and also receive a response?
Welcome to ZeroMQ - let's sketch a work-plan:
- a PUSH-PULL pattern, being fed from the cluster side by aPushSIDE->send(...) with regularly spaced, timestamped messages, using also a resources-saving setup there, via aPushSIDE->setsockopt( ZMQ_COMPLETE, ... ) and aPushSIDE->setsockopt( ZMQ_CONFLATE, ... )
- localhost's PULL-end recv()-s the regular updates ( a minimal sketch of both sides follows this list )
- feel free to also add an up-stream link from localhost towards the cluster-hosted code, again using a PUSH-PULL pattern in the opposite direction.
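A minimal sketch of that downstream leg, assuming cppzmq ( zmq.hpp ) and the MetalLB-assigned address from the question; the function names are illustrative only, only ZMQ_CONFLATE is shown, and the bind is widened to 0.0.0.0, since a 127.0.0.1 bind inside the pod is reachable on loopback only, never through the Service:

#include <zmq.hpp>
#include <chrono>
#include <string>
#include <thread>

// cluster-side producer: PUSH, bound on all interfaces so the pod's port
// is reachable through the Service ( a 127.0.0.1 bind stays loopback-only )
void cluster_side_push()
{
    zmq::context_t ctx;
    zmq::socket_t aPushSIDE( ctx, zmq::socket_type::push );
    aPushSIDE.setsockopt( ZMQ_CONFLATE, 1 );                  // keep just the freshest message
    aPushSIDE.bind( "tcp://0.0.0.0:1234" );
    for (;;)
    {
        std::string payload = "timestamped update";           // placeholder payload
        aPushSIDE.send( zmq::buffer( payload ), zmq::send_flags::none );
        std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
    }
}

// localhost consumer: PULL, connecting through the load-balancer IP
void local_side_pull()
{
    zmq::context_t ctx;
    zmq::socket_t aPullSIDE( ctx, zmq::socket_type::pull );
    aPullSIDE.connect( "tcp://192.168.49.241:1234" );
    for (;;)
    {
        zmq::message_t update;
        auto rc = aPullSIDE.recv( update, zmq::recv_flags::none );
        if ( rc ) { /* consume update.to_string() */ }
    }
}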
Why a pair of PUSH-PULL-s here?

First, it helps isolate the root cause of the problem. Next, it allows you to separate concerns and control each of the flows independently of any other ( details of control loops with many interconnects, with different flows, different priority levels and different error-handling procedures, which are so common as to all use exclusively the non-blocking forms of the recv()-methods plus multi-level poll()-methods' soft-control of the maximum time permitted to be spent ( wasted ) on testing for a new message arrival, go beyond the scope of this Q/A text - feel free to seek further details on this formal event-handling framing and on using low-level socket-monitor diagnostics; a small sketch of the non-blocking idiom follows right below ).
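A minimal sketch of that non-blocking recv()/poll() idiom, assuming a reasonably recent cppzmq ( for socket_t::handle() and the std::chrono overload of zmq::poll() ); the helper name and the 10 ms budget are illustrative only:

#include <zmq.hpp>
#include <chrono>

// poll() soft-controls the maximum time "wasted" on waiting for an arrival,
// instead of blocking inside recv() itself
bool try_recv_with_budget( zmq::socket_t &sock, zmq::message_t &msg )
{
    zmq::pollitem_t items[] = { { sock.handle(), 0, ZMQ_POLLIN, 0 } };
    zmq::poll( items, 1, std::chrono::milliseconds( 10 ) );  // time budget: 10 ms
    if ( items[0].revents & ZMQ_POLLIN )
    {
        // non-blocking form of recv(): returns empty-handed instead of waiting
        auto rc = sock.recv( msg, zmq::recv_flags::dontwait );
        return rc.has_value();
    }
    return false;                                            // nothing arrived within the budget
}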
Last, but not least, the PAIR-PAIR archetype used to be reported in the ZeroMQ native API documentation as "experimental" for most of my ZeroMQ-related life ( since v2.1, yeah, that long ). Accepting that fact, I have never used a PAIR archetype on any other Transport Class but the pure in-RAM, network-protocol-stack-less inproc:// "connections" ( which are not actually connections at all, but a zero-copy, almost zero-latency smart pure pointer-to-memory-block passing trick among some co-operating threads of the same process ).
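For completeness, a minimal sketch of such a same-process PAIR-PAIR over inproc://, assuming cppzmq; the endpoint name inproc://pipe is illustrative only:

#include <zmq.hpp>
#include <string>
#include <thread>

int main()
{
    zmq::context_t ctx;                                 // one shared context: inproc needs it
    zmq::socket_t a( ctx, zmq::socket_type::pair );
    a.bind( "inproc://pipe" );                          // bind before connect ( required pre-4.2 libzmq )

    std::thread peer( [&ctx] {
        zmq::socket_t b( ctx, zmq::socket_type::pair );
        b.connect( "inproc://pipe" );
        std::string hello = "hello from the other thread";
        b.send( zmq::buffer( hello ), zmq::send_flags::none );
    } );

    zmq::message_t msg;
    auto rc = a.recv( msg, zmq::recv_flags::none );     // pointer-passing, no network stack involved
    peer.join();
    return rc.has_value() ? 0 : 1;
}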