Terminating QWebSocketServer with connected sockets


I am debugging a multithreaded console application written in C++/Qt 5.12.1, running on Linux Mint 18.3 x64.

The app has a SIGINT handler, a QWebSocketServer, and a table of QWebSocket instances. To handle termination, it calls close() on the QWebSocketServer and abort()/deleteLater() on each item in the QWebSocket table.
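Roughly, the teardown looks like this (a simplified sketch; the function and container names are illustrative, not my exact code):

#include <QList>
#include <QWebSocket>
#include <QWebSocketServer>

// Simplified sketch of the teardown described above; names are illustrative.
void shutdownSockets(QWebSocketServer *server, QList<QWebSocket *> &clients)
{
    server->close();          // stop the websocket server
    for (QWebSocket *ws : clients) {
        ws->abort();          // drop the connection immediately
        ws->deleteLater();    // deletion is deferred to the event loop
    }
    clients.clear();
}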

If a websocket client has connected to the app, termination fails because some thread keeps running (I suppose it is an internal QWebSocket thread). Termination succeeds if there were no connections.

How can I fix this so that the app exits gracefully?

There are 2 answers

Alexander V (Best Answer)

To quit the socket server gracefully, we can attempt the following:

The most important part is to allow the main thread's event loop to run and to wait on QWebSocketServer::closed(), so that the connected slot calls QCoreApplication::quit().

If we don't need a more detailed reaction, that can be done simply with:

connect(webSocketServer, &QWebSocketServer::closed,
        QCoreApplication::instance(), &QCoreApplication::quit);

Connect that signal before anything else, then call QWebSocketServer::pauseAccepting() to prevent more connections, and finally call QWebSocketServer::close().
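A minimal sketch of that sequence, assuming webSocketServer is an already-listening QWebSocketServer*:

#include <QCoreApplication>
#include <QWebSocketServer>

// Sketch of the graceful-close sequence described above.
void beginGracefulClose(QWebSocketServer *webSocketServer)
{
    // Quit the main event loop once the server reports it has closed.
    QObject::connect(webSocketServer, &QWebSocketServer::closed,
                     QCoreApplication::instance(), &QCoreApplication::quit);

    webSocketServer->pauseAccepting();  // stop accepting new connections
    webSocketServer->close();           // closed() is emitted when done
}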

The steps below may not be needed if the above is sufficient. Try the above first, and only if you still have problems, deal with the existing and pending connections. In my experience the behavior varied across platforms and with some unique websocket implementations in the server environment (which in your case is likely just Qt).

As long as we have some array of QWebSocket instances, we can try calling QWebSocket::abort() on all of them to release them immediately. This step seems to be what the question author already does.

Try iterating the pending connections with QWebSocketServer::nextPendingConnection() and calling abort() on them; call deleteLater() as well, if that works.
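A sketch of draining the pending connections (reusing the server pointer from the sketch above):

#include <QWebSocket>
#include <QWebSocketServer>

// Abort connections the server has accepted but that we never took ownership of.
void abortPendingConnections(QWebSocketServer *server)
{
    // nextPendingConnection() returns nullptr when nothing is queued.
    while (QWebSocket *pending = server->nextPendingConnection()) {
        pending->abort();        // tear the connection down immediately
        pending->deleteLater();  // deferred deletion, if the event loop still runs
    }
}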

Kuba hasn't forgotten Monica

There is no need to do anything. What do you mean by "graceful exit"? As soon as there's a request to terminate your application, you should terminate it immediately using exit(0) or a similar mechanism. That's what "graceful exit" should be.
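For illustration, a minimal SIGINT handler along those lines (a sketch; std::_Exit() is used here because it is async-signal-safe, unlike exit()):

#include <csignal>
#include <cstdlib>

// Terminate immediately on Ctrl+C: no stack unwinding, no flushing.
extern "C" void onSigint(int)
{
    std::_Exit(0);
}

int main()
{
    std::signal(SIGINT, onSigint);
    // ... run the application's event loop ...
}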

Note: I got reformed. I used to think that graceful exits were a good thing. Most often they are a waste of CPU resources, and they usually indicate problems in the architecture of the application.

A good rationale for why it should be so is given in the KJ framework (a part of Cap'n Proto).

Quoting Kenton Varda:

KJ_NORETURN(virtual void exit()) = 0;

Indicates program completion. The program is considered successful unless error() was called. Typically this exits with _Exit(), meaning that the stack is not unwound, buffers are not flushed, etc. -- it is the responsibility of the caller to flush any buffers that matter. However, an alternate context implementation e.g. for unit testing purposes could choose to throw an exception instead.

At first this approach may sound crazy. Isn't it much better to shut down cleanly? What if you lose data? However, it turns out that if you look at each common class of program, _Exit() is almost always preferable. Let's break it down:

  • Commands: A typical program you might run from the command line is single-threaded and exits quickly and deterministically. Commands often use buffered I/O and need to flush those buffers before exit. However, most of the work performed by destructors is not flushing buffers, but rather freeing up memory, placing objects into freelists, and closing file descriptors. All of this is irrelevant if the process is about to exit anyway, and for a command that runs quickly, time wasted freeing heap space may make a real difference in the overall runtime of a script. Meanwhile, it is usually easy to determine exactly what resources need to be flushed before exit, and easy to tell if they are not being flushed (because the command fails to produce the expected output). Therefore, it is reasonably easy for commands to explicitly ensure all output is flushed before exiting, and it is probably a good idea for them to do so anyway, because write failures should be detected and handled. For commands, a good strategy is to allocate any objects that require clean destruction on the stack, and allow them to go out of scope before the command exits. Meanwhile, any resources which do not need to be cleaned up should be allocated as members of the command's main class, whose destructor normally will not be called.

  • Interactive apps: Programs that interact with the user (whether they be graphical apps with windows or console-based apps like emacs) generally exit only when the user asks them to. Such applications may store large data structures in memory which need to be synced to disk, such as documents or user preferences. However, relying on stack unwind or global destructors as the mechanism for ensuring such syncing occurs is probably wrong. First of all, it's 2013, and applications ought to be actively syncing changes to non-volatile storage the moment those changes are made. Applications can crash at any time and a crash should never lose data that is more than half a second old. Meanwhile, if a user actually does try to close an application while unsaved changes exist, the application UI should prompt the user to decide what to do. Such a UI mechanism is obviously too high level to be implemented via destructors, so KJ's use of _Exit() shouldn't make a difference here.

  • Servers: A good server is fault-tolerant, prepared for the possibility that at any time it could crash, the OS could decide to kill it off, or the machine it is running on could just die. So, using _Exit() should be no problem. In fact, servers generally never even call exit anyway; they are killed externally.

  • Batch jobs: A long-running batch job is something between a command and a server. It probably knows exactly what needs to be flushed before exiting, and it probably should be fault-tolerant.