Can C/C++ do preemptive multitasking in a single thread?


Preemptive multitasking in C/C++: can a running thread be interrupted by some timer and switch between tasks?

Many VMs and other language runtimes with green threads are implemented in these terms; can C/C++ apps do the same?

If so, how?

This is going to be platform dependent, so please discuss this in terms of the support particular platforms have for this; e.g. if there's some magic you can do in a SIGALRM handler on Linux to swap some kind of internal stack (perhaps using longjmp?), that'd be great!


I ask because I am curious.

I have been working for several years making async IO loops. When writing async IO loops I have to be very careful not to put expensive computation into the loop, as it will effectively DoS the loop.

I therefore have an interest in the various ways an async IO loop can be made to recover, or even fully support, some kind of green-threading approach. For example: sample the active task and the loop iteration count in a SIGALRM handler, and if a task is detected to be blocking, move everything else to a new thread, or some cunning variation on this with the desired result.
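
A minimal sketch of that SIGALRM-sampling idea, assuming a single-threaded loop; the names (`loop_ticks`, `on_watchdog`) are made up for illustration, and a real implementation would migrate the remaining work to another thread instead of just reporting the stall:

```c
#include <signal.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

/* Bumped once per loop iteration; sig_atomic_t so the handler may read it. */
static volatile sig_atomic_t loop_ticks = 0;

/* Watchdog: if the tick count has not moved since the last alarm, the
 * current task is hogging the loop. A real implementation might hand the
 * rest of the work to another thread at this point. */
static void on_watchdog(int sig)
{
    static sig_atomic_t last_seen = 0;
    (void)sig;
    if (loop_ticks == last_seen)
        write(STDERR_FILENO, "loop appears blocked\n", 21);
    last_seen = loop_ticks;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_watchdog;
    sigaction(SIGALRM, &sa, NULL);

    /* Sample the loop every 100 ms. */
    struct itimerval it = { { 0, 100000 }, { 0, 100000 } };
    setitimer(ITIMER_REAL, &it, NULL);

    for (;;) {
        ++loop_ticks;
        /* ... poll for IO and run one task here ... */
        usleep(10000);   /* stand-in for real (hopefully short) work */
    }
}
```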

There were some complaints about node.js in this regard recently, and elsewhere I've seen tantalizing comments about other runtimes such as Go and Haskell. But let's not stray too far from the basic question of whether you can do preemptive multitasking in a single thread in C/C++.


There are 6 answers

Denis K (2 votes)

Windows has fibers that are user-scheduled units of execution sharing the same thread. http://msdn.microsoft.com/en-us/library/windows/desktop/ms682661%28v=vs.85%29.aspx
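
For illustration, a minimal sketch of this API with one worker fiber (the worker function and the ping-pong switching pattern are just an assumption of how one might use it). Note that fibers only switch when you call SwitchToFiber, so this is cooperative rather than preemptive:

```c
#include <windows.h>
#include <stdio.h>

static LPVOID main_fiber;   /* the fiber representing the original thread */

/* A task that yields back to the main fiber between steps. */
static VOID CALLBACK worker(LPVOID param)
{
    (void)param;
    for (int i = 0; i < 3; ++i) {
        printf("worker step %d\n", i);
        SwitchToFiber(main_fiber);   /* cooperative yield: nothing preempts us */
    }
    SwitchToFiber(main_fiber);       /* returning from a fiber routine would exit the thread */
}

int main(void)
{
    main_fiber = ConvertThreadToFiber(NULL);      /* current thread becomes a fiber */
    LPVOID task = CreateFiber(0, worker, NULL);   /* 0 = default stack size */

    for (int i = 0; i < 3; ++i)
        SwitchToFiber(task);                      /* "schedule" the task explicitly */

    DeleteFiber(task);
    return 0;
}
```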

UPD: More information about user-scheduled context switching can be found in the LuaJIT sources; it supports coroutines for different platforms, so looking at the sources can be useful even if you are not using Lua at all. Here is the summary: http://coco.luajit.org/portability.html

jalf (4 votes)

What you're asking makes no sense. What would your one thread be interrupted by? Any executing code has to be in a thread. And each thread is basically a sequential execution of code. For a thread to be interrupted, it has to be interrupted by something. You can't just jump around randomly inside your existing thread as a response to an interrupt. Then it's no longer a thread in the usual sense.

What you normally do is this:

  • either you have multiple threads, and one of your threads is suspended until the alarm is triggered,
  • alternatively, you have one thread, which runs in some kind of event loop, where it receives events from (among other sources) the OS. When the alarm is triggered, it sends a message to your thread's event loop. If your thread is busy doing something else, it won't immediately see this message, but once it gets back into the event loop and processes events, it'll see it and react.
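
As a concrete, Linux-specific sketch of the second pattern (the timerfd setup here is just one way to build such a loop), the alarm becomes a readable file descriptor that the loop only notices when it is back at its poll() call:

```c
#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void)
{
    /* The "alarm" is just another event source in the loop. */
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec ts = {
        .it_interval = { 1, 0 },   /* repeat every second */
        .it_value    = { 1, 0 }    /* first expiry after one second */
    };
    timerfd_settime(tfd, 0, &ts, NULL);

    struct pollfd fds[1] = { { .fd = tfd, .events = POLLIN } };

    for (;;) {
        /* If the loop is busy elsewhere, the expiration simply waits here. */
        poll(fds, 1, -1);
        if (fds[0].revents & POLLIN) {
            uint64_t expirations;
            read(tfd, &expirations, sizeof expirations);
            printf("timer fired %llu time(s)\n", (unsigned long long)expirations);
        }
    }
}
```
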
ninjalj (0 votes)

Userspace threading libraries are usually cooperative (e.g. GNU pth, SGI's State Threads, ...). If you want preemptiveness, you'd go to kernel-level threading.

You could probably use getcontext()/setcontext()... from a SIGALRM signal handler, but if it works, it would be messy at best. I don't see what advantage this approach has over kernel threading or event-based I/O: you get all the non-determinism of preemptiveness, and you don't have your program separated into sequential control flows.
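
For the curious, here is roughly what that messy approach looks like on Linux/glibc, as a sketch only: POSIX does not list swapcontext() among the async-signal-safe functions, so this relies on glibc tolerating the call from a handler in practice, and all names here are illustrative.

```c
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <ucontext.h>
#include <unistd.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t task_ctx[2];
static volatile int current = 0;

/* The "scheduler": on every timer tick, save whichever task is running
 * and resume the other one. Calling swapcontext() from a signal handler
 * is not sanctioned by POSIX; treat this as a demonstration only. */
static void on_alarm(int sig)
{
    (void)sig;
    int prev = current;
    current = 1 - current;
    swapcontext(&task_ctx[prev], &task_ctx[current]);
}

/* Two CPU-bound tasks that never yield voluntarily. */
static void task(int id)
{
    static const char msg[2][8] = { "task A\n", "task B\n" };
    for (unsigned long n = 0; ; ++n)
        if (n % 500000000UL == 0)
            write(STDOUT_FILENO, msg[id], 7);
}

int main(void)
{
    /* Give each task its own stack and entry point. */
    for (int i = 0; i < 2; ++i) {
        getcontext(&task_ctx[i]);
        task_ctx[i].uc_stack.ss_sp = malloc(STACK_SIZE);
        task_ctx[i].uc_stack.ss_size = STACK_SIZE;
        task_ctx[i].uc_link = NULL;
        /* makecontext passes integer arguments; the cast is the documented idiom. */
        makecontext(&task_ctx[i], (void (*)(void))task, 1, i);
    }

    /* Preempt every 10 ms. */
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval it = { { 0, 10000 }, { 0, 10000 } };
    setitimer(ITIMER_REAL, &it, NULL);

    /* Jump into the first task; control never comes back here. */
    ucontext_t main_ctx;
    swapcontext(&main_ctx, &task_ctx[0]);
    return 0;
}
```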

sehe (1 vote)

As others have outlined, preemptive multitasking is likely not very easy to do.

The usual pattern for this is using co-procedures.

Coprocedures are a very nice way to express finite state machines (e.g. text parsers, communication handlers).

You can 'emulate' the syntax of co-procedures with a modicum of preprocessor macro magic.
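
A rough sketch of that preprocessor trick, using the switch-on-__LINE__ idea behind protothreads and, in spirit, Asio's stackless coroutines; all the names here are made up for the example:

```c
#include <stdio.h>

/* Resume points are encoded as case labels inside one big switch. */
#define CORO_BEGIN(state)  switch (state) { case 0:
#define CORO_YIELD(state)  do { (state) = __LINE__; return; case __LINE__:; } while (0)
#define CORO_END(state)    (state) = -1; default:; }

struct counter {
    int state;  /* resume point; 0 means "start from the top" */
    int i;      /* "locals" must live here: the real stack is not preserved */
};

/* Produces one value per call, resuming where it last yielded. */
static void counter_step(struct counter *c)
{
    CORO_BEGIN(c->state);
    for (c->i = 0; c->i < 3; ++c->i) {
        printf("yield %d\n", c->i);
        CORO_YIELD(c->state);
    }
    printf("finished\n");
    CORO_END(c->state);
}

int main(void)
{
    struct counter c = { 0, 0 };
    for (int k = 0; k < 5; ++k)
        counter_step(&c);   /* prints yield 0..2, then finished, then nothing */
    return 0;
}
```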


Regarding optimal input/output scheduling

You could have a look at Boost Asio: The Proactor Design Pattern: Concurrency Without Threads

Asio also has a co-procedure 'emulation' model based on a single (IIRC) simple preprocessor macro, combined with some cunningly designed template facilities that bring things eerily close to compiler support for stackless co-procedures.

The sample HTTP Server 4 is an example of the technique.

The author of Boost Asio (Kohlhoff) explains the mechanism and the sample on his Blog here: A potted guide to stackless coroutines

Be sure to look for the other posts in that series!

Clifford (0 votes)

The title is an oxymoron: a thread is an independent execution path, and if you have two such paths, you have more than one thread.

You can do a kind of "poor-man's" multitasking using setjmp/longjmp, but I would not recommend it and it is cooperative rather than pre-emptive.

Neither C nor C++ intrinsically supported multi-threading before C11 and C++11 added standard thread libraries, but there are numerous libraries for it, such as native Win32 threads, pthreads (POSIX threads) and Boost threads; frameworks such as Qt and wxWidgets also have thread support.

Yahia (0 votes)

As far as I understand, you are mixing things that are usually not mixed:

  • Asynchronous Signals
    A signal is usually delivered to the program (thus, in your description, to the one thread) on the stack that is currently running, and it runs the registered signal handler there... in BSD Unix (and in POSIX, via sigaltstack()) there is an option to let the handler run on a separate, so-called "signal stack" (see the sketch after this list).

  • Threads and Stacks
    The ability to run a thread on its own stack requires the ability to allocate stack space and save and restore state information (that includes all registers...) - otherwise clean "context switch" between threads/processes etc. is impossible. Usually this is implemented in the kernel and very often using some form of assembler since that is a very low-level and very time-sensitive operation.

  • Scheduler
    AFAIK every system capable of running threads has some sort of scheduler... which is basically a piece of code running with the highest privileges. Often it is subscribed to some HW signal (clock or whatever) and makes sure that no other code ever registers directly (only indirectly) for that same signal. The scheduler thus has the ability to preempt anything on that system. Its main concern is usually to give the threads enough CPU cycles on the available cores to do their job. The implementation usually includes some sort of queues (often more than one), priority handling and several other things. Kernel-side threads usually have a higher priority than anything else.

  • Modern CPUs
    On modern CPUs the implementation is rather complicated since it involves dealing with several cores and even some "special threads" (i.e. hyperthreads)... since modern CPUs usually have several levels of cache etc. it is very important to deal with these appropriately to achieve high performance.
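
Here is a small sketch of the alternate "signal stack" mentioned in the first bullet, using the POSIX sigaltstack() call together with SA_ONSTACK (the handler and the choice of SIGUSR1 are just for illustration):

```c
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void on_signal(int sig)
{
    (void)sig;
    /* write() is async-signal-safe; printf() is not. */
    write(STDOUT_FILENO, "handler ran on the alternate stack\n", 35);
}

int main(void)
{
    /* Register a separate stack for signal delivery. */
    stack_t ss;
    ss.ss_sp = malloc(SIGSTKSZ);
    ss.ss_size = SIGSTKSZ;
    ss.ss_flags = 0;
    sigaltstack(&ss, NULL);

    /* SA_ONSTACK asks the kernel to deliver this signal on that stack. */
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_signal;
    sa.sa_flags = SA_ONSTACK;
    sigaction(SIGUSR1, &sa, NULL);

    raise(SIGUSR1);
    return 0;
}
```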

All the above means that your thread can, and most probably will, be preempted by the OS on a regular basis.

In C you can register signal handlers, which in turn preempt your thread on its same stack... BEWARE that signal handlers are problematic if re-entered... you can either put the processing into the signal handler or fill some structure (for example a queue) and have that queue content consumed by your thread...
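
A sketch of the "fill some structure and have the thread consume it" variant, reduced here to a single sig_atomic_t flag; a real program would use a queue or the self-pipe trick, which takes more care to stay async-signal-safe:

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

/* The one thing a handler can touch without extra machinery. */
static volatile sig_atomic_t alarm_seen = 0;

static void on_alarm(int sig)
{
    (void)sig;
    alarm_seen = 1;   /* record the event; do the real work elsewhere */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval it = { { 1, 0 }, { 1, 0 } };
    setitimer(ITIMER_REAL, &it, NULL);

    for (;;) {
        pause();   /* sleep until a signal arrives */
        if (alarm_seen) {
            alarm_seen = 0;
            /* The thread, not the handler, does the processing. */
            printf("handling the deferred alarm in the main loop\n");
        }
    }
}
```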

Regarding setjmp/longjmp, you need to be aware that they are prone to several problems when used with C++ (for example, longjmp skips the destructors of automatic objects on the frames it jumps over).

For Linux there is/was a "full preemption patch" available which allows you to tell the scheduler to run your thread(s) with an even higher priority than kernel threads (disk I/O...) get!
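
Independently of the preemption patch itself, the standard POSIX call for putting a process or thread into a real-time scheduling class looks roughly like this; it typically needs root or CAP_SYS_NICE, and the priority value 50 is arbitrary:

```c
#include <sched.h>
#include <stdio.h>

int main(void)
{
    /* Request the SCHED_FIFO real-time class for this process.
     * On a PREEMPT_RT kernel this can place it above most kernel work. */
    struct sched_param sp = { .sched_priority = 50 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");   /* typically EPERM without privileges */
        return 1;
    }
    printf("now running under SCHED_FIFO\n");
    return 0;
}
```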

For some references, and to see an actual implementation of a scheduler, check out the Linux source code at https://kernel.org .

Since your question isn't very specific, I am not sure whether this is a real answer, but I suspect it has enough information to get you started.

REMARK:

I am not sure why you might want to implement something already present in the OS... if it is for higher performance on some async I/O then there are several options, with maximum performance usually available at the kernel level (i.e. write kernel-mode code)... perhaps you can clarify so that a more specific answer is possible.