Nowadays the concept of "real-time" has many different interpretations. In this question, two definitions are provided:
The hard real-time definition considers any missed deadline to be a system failure. This scheduling is used extensively in mission critical systems where failure to conform to timing constraints results in a loss of life or property.
and
The soft real-time definition allows for frequently missed deadlines, and as long as tasks are executed in a timely manner, their results continue to have value. Completed tasks may have increasing value up to the deadline and decreasing value past it.
In my research I came to the following conclusions:
- The middleware supports hard real-time if it provides predictable and efficient end-to-end control over system resources, for example setting the thread priority of all the threads created by the middleware (see the sketch after this list).
- It appears to me that good performance is the most relevant factor in supporting soft real-time applications.
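
To make the thread-priority point concrete, here is a minimal sketch using plain POSIX pthreads (not any particular middleware's API; the function and parameter names are my own):

```cpp
#include <pthread.h>
#include <sched.h>

// Minimal sketch: move an existing thread into a fixed-priority real-time
// scheduling class. Requires suitable privileges (e.g., CAP_SYS_NICE on Linux).
bool make_realtime(pthread_t thread, int priority) {
    sched_param param{};
    param.sched_priority = priority;  // e.g., 1..99 for SCHED_FIFO on Linux
    // SCHED_FIFO: fixed priority, preemptive, runs until it blocks or yields.
    return pthread_setschedparam(thread, SCHED_FIFO, &param) == 0;
}
```
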
Is this true? Are there other relevant features of communication middleware that support soft real-time applications?
First, for precise definitions of real-time principles and terms, based on first principles and mental models, I refer you to real-time.org.
The real-time computing practitioner community uses a variety of inconsistent and incomplete "definitions" of "real-time," "hard real-time," and "soft real-time." The real-time computing research community has a consensus on "hard real-time" but is confused about "soft real-time."
The core of the research community's "hard" real-time computing model is that tasks have hard deadlines, and all these deadlines must not be missed, else the system has failed. Meeting the deadlines is the "timeliness" criterion, and guaranteeing that all deadlines will be met is the "predictability" criterion--that predictability is "deterministic."
(In some of these models, tasks without deadlines are allowed in the background if they do not interfere with the hard real-time tasks; they usually also are prevented from being starved.)
This model requires everything related to the hard real-time tasks to be static (known in advance)--i.e., it requires that the time evolution of the system is known in advance. This requirement is very strong, and in most cases, it is not feasible. There are important hard real-time systems in which this requirement is (or is at least presumed to be) satisfied. Well-known examples include digital avionics flight control, certain medical devices, power plant control, railroad crossing control, etc. These examples are safety-critical, but not all hard real-time systems are (and we will see below that most safety-critical systems are not and cannot be hard real-time, although some may include simple low-level hard real-time components).
Soft real-time refers to a class of real-time systems which are generalizations of hard real-time ones. The generalizations include weaker timeliness criteria and/or weaker predictability criteria.
For example, consider a model with tasks having deadlines as hard real-time ones do. In this particular model, the timeliness criterion is that any number of tasks are allowed to be up to 15% tardy, and the predictability criterion is that this must be guaranteed (i.e., deterministic) just like for hard real-time systems. If one or more of these tasks is more than 15% tardy, the system has failed. This model is not a conventional hard real-time one, although it may be a safety-critical one.
Consider another model: the timeliness criterion is that no more than 20% of the tasks can be more than 5% tardy, and the predictability criterion is that this is guaranteed to be satisfied with at least probability 0.9. Violation of the timeliness and/or predictability criteria means the system has failed. This is not a hard real-time system, although it may be a safety-critical one.
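To make those two notional models concrete, here is a small sketch (the task record is hypothetical and the thresholds are just the made-up numbers from above) of how one could check them against a single observed run:

```cpp
#include <algorithm>
#include <vector>

struct TaskRecord {
    double deadline;    // absolute deadline
    double completion;  // actual completion time
};

// Relative tardiness: 0 if on time, otherwise how far past the deadline,
// expressed as a fraction of the deadline.
double tardiness(const TaskRecord& t) {
    return std::max(0.0, (t.completion - t.deadline) / t.deadline);
}

// First model: every task may be up to 15% tardy (deterministic guarantee).
bool meets_first_model(const std::vector<TaskRecord>& tasks) {
    return std::all_of(tasks.begin(), tasks.end(),
                       [](const TaskRecord& t) { return tardiness(t) <= 0.15; });
}

// Second model: at most 20% of the tasks may be more than 5% tardy.
// (The probability-0.9 guarantee would be assessed across many runs.)
bool meets_second_model(const std::vector<TaskRecord>& tasks) {
    if (tasks.empty()) return true;
    auto late = std::count_if(tasks.begin(), tasks.end(),
                              [](const TaskRecord& t) { return tardiness(t) > 0.05; });
    return static_cast<double>(late) / tasks.size() <= 0.20;
}
```
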
But consider: what if the utility of that system degrades depending on which of those criteria are missed, and by how much--say, 23% of the tasks were more than 5% tardy, or fewer than 20% of the tasks were tardy but 10% of those were more than 5% tardy, or all of the criteria were met except that the predictability is only 0.8. There are many real-time systems having such dynamic properties.
We need to specify how that system degradation (say, the system's "utility" or "value") is related to how many and to what degree any of those timeliness and predictability criteria were or were not met. In fact, this model is a notional example of many actual existing real-time systems that are as safety-critical as possible--for example, defense against nuclear-armed hostile missiles (and numerous other military combat systems, because they all have various inherent dynamic uncertainties).
Now we return to that need to specify how a real-time system's timeliness and predictability are related to the system's utility. A successfully used solution to that is called "time/utility functions" (or "time/value functions"), and is described in great detail on real-time.org. The functions for each task are derived from the physical nature of the system application(s). The system's timeliness and predictability of timeliness are based on those of the tasks--for example, by weighted accrual of their individual utilities.
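Purely as an illustration (the shape and weights below are invented, not taken from real-time.org; a real function would be derived from the application's physics), a piecewise-linear time/utility function and a weighted accrual across tasks might look like this:

```cpp
#include <cstddef>
#include <vector>

// Illustrative time/utility function: a completed task's value ramps up
// toward the deadline and decays afterward, reaching zero at a "drop-dead"
// time. The shape here is invented purely for illustration.
double time_utility(double completion, double deadline, double drop_dead,
                    double max_value) {
    if (completion <= deadline) {
        return max_value * (completion / deadline);  // rising before the deadline
    }
    if (completion >= drop_dead) {
        return 0.0;                                  // no value past drop-dead
    }
    // linearly decaying value between the deadline and the drop-dead time
    return max_value * (drop_dead - completion) / (drop_dead - deadline);
}

// System-level timeliness as a weighted accrual of per-task utilities.
double accrued_utility(const std::vector<double>& utilities,
                       const std::vector<double>& weights) {
    double total = 0.0;
    for (std::size_t i = 0; i < utilities.size() && i < weights.size(); ++i) {
        total += weights[i] * utilities[i];
    }
    return total;
}
```
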
Soft real-time systems (in the precisely defined sense described on real-time.org) are the general case, and hard real-time systems are a special case that applies to a much smaller domain of real-time problems. All hard and soft real-time systems can be specified and created with time/utility functions.
All that clarified, now we can address your question about real-time middleware.
One obvious source for an answer is The Open Group Real-Time CORBA (RTC) standard (Google it; there is a great deal of detailed information available).
RTC can be implemented as a fixed-priority infrastructure, with a 15-bit system-wide priority that is mapped onto the node priorities. In that case, the minimum requirements are:
- respecting thread priorities between client and server for resolving resource contention during the processing of CORBA invocations;
- bounding the duration of thread priority inversions during end-to-end processing;
- bounding the latencies of operation invocations.
It is possible to build hard real-time RTC distributed systems according to those three requirements (and many exist)--but obviously the underlying network QoS affects the real-time behavior. So RTC provides for pluggable application-specific networking, such as networks having deterministic QoS (so hard real-time is possible at and below the RTC layers), and those having non-deterministic QoS (but still the RTC layers have the three essential fixed-priority real-time properties).
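To illustrate the priority-mapping part (this is only a sketch of the idea of a linear mapping, not the RTC PriorityMapping interface itself), mapping the 15-bit system-wide priority onto a node's native range could look like:

```cpp
// Hypothetical linear mapping from the RTC 15-bit system-wide priority
// (0..32767) onto a node's native priority range (e.g., 1..99 for Linux
// SCHED_FIFO). Real-Time CORBA defines a priority-mapping mechanism for
// exactly this purpose; this is only a sketch of the underlying idea.
int corba_to_native(int corba_priority, int native_min, int native_max) {
    if (corba_priority < 0) corba_priority = 0;
    if (corba_priority > 32767) corba_priority = 32767;
    const double span = static_cast<double>(native_max - native_min);
    return native_min + static_cast<int>(span * corba_priority / 32767.0);
}
```
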
More generally, RTC provides for soft real-time (in the technical sense defined on real-time.org) at the CORBA layers. It does that by providing a first-order scheduling abstraction called "distributed threads." And it provides a scheduling framework that supports not only fixed priorities but also time/utility functions, which are general enough to express a very general class of "utility accrual" soft real-time scheduling algorithms. Such algorithms (or, usually, heuristics) are needed for distributed systems consisting of application-specific soft real-time system models such as those I described above.
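As a rough sketch of the utility-accrual idea only (a generic greedy heuristic, not any of the specific algorithms the RTC scheduling framework supports), a scheduler might order ready tasks by expected utility per unit of remaining execution time:

```cpp
#include <algorithm>
#include <vector>

struct ReadyTask {
    int id;
    double expected_utility;    // utility predicted from the task's time/utility function
    double expected_exec_time;  // estimate of remaining execution time (> 0)
};

// Greedy utility-accrual heuristic: dispatch the task with the highest
// "utility density" (expected utility per unit of expected execution time) first.
void order_by_utility_density(std::vector<ReadyTask>& ready) {
    std::sort(ready.begin(), ready.end(),
              [](const ReadyTask& a, const ReadyTask& b) {
                  return a.expected_utility / a.expected_exec_time >
                         b.expected_utility / b.expected_exec_time;
              });
}
```
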
What if you don't want to use RTC? The good news is that RTC's principles first appeared publicly in a different distributed real-time system (described on real-time.org), and can be (and have been) transplanted to other real-time middleware for both hard and soft real-time systems.
For soft real-time (again, in the precisely defined sense from real-time.org) middleware, the principles of dynamic timeliness and predictability of timeliness must be applied to resource management at each node of the middleware's system--including being applied to scheduling the middleware's network (e.g., access, routing, etc.). Instances of this approach appear in several Ph.D. theses, and have also been implemented in a number of military combat distributed real-time systems.