Keeping your priorities straight: Part 1 - context
Date: 2009-08-18 Source: mawell
By William E. Lamie, Embedded.com (01/26/09, 11:17:00 EST)

Real-time applications consist of multiple threads, each performing a portion of the system's workload under the management of a real-time operating system (RTOS). In a real-time system, a thread's priority reflects the relative urgency of its work, and the RTOS strives at all times to run the most urgent work that is ready, swapping out lower-priority work.
Often, multiple portions of a system's workload are of equal importance, and none warrants a higher priority than another. In such cases, multiple threads operate at the same priority, and run sequentially, in a "round-robin" fashion. Whether due to greater urgency, or a round-robin sequence, whenever one thread gives way to another, the RTOS must perform what is called a "context switch."
A context switch is a complex procedure in which the RTOS saves all the information being used by the running thread (its "context") and loads the context of another thread in its place. A thread's context includes its working register set, program counter, and other thread-critical information. This context is saved on the thread's stack or in a thread control block (TCB) data structure, from which it is re-loaded when the RTOS next runs that thread.
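To make "context" concrete, here is a minimal sketch in C of what a saved context and a TCB might contain. The struct layout, field names, and the `context_switch` helper are hypothetical illustrations, not any particular RTOS's implementation; on real hardware the save and restore are done in assembly, pushing registers onto the thread's stack, but a plain struct copy models the same idea.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical, simplified CPU context: the state that must survive
   a switch so the thread can resume exactly where it left off. */
typedef struct {
    uint32_t regs[16];   /* working register set (e.g. r0-r15) */
    uint32_t pc;         /* program counter */
    uint32_t psr;        /* processor status flags */
    uint32_t *sp;        /* stack pointer */
} cpu_context_t;

/* Hypothetical thread control block: the saved context lives here. */
typedef struct {
    cpu_context_t context;
    int priority;
    const char *name;
} tcb_t;

/* The essence of a context switch: save the outgoing thread's state
   into its TCB, then restore the incoming thread's saved state. */
static void context_switch(cpu_context_t *cpu, tcb_t *from, tcb_t *to)
{
    memcpy(&from->context, cpu, sizeof *cpu);  /* save outgoing context */
    memcpy(cpu, &to->context, sizeof *cpu);    /* load incoming context */
}
```

On a real part this sequence also involves interrupt masking and stack manipulation, which is part of why it costs hundreds of cycles.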
Context switches generally are the single most time-consuming RTOS operation in a real-time system, often taking hundreds of cycles to execute. The amount of processing varies from RTOS to RTOS, but generally involves the operations shown in Figure 1.
Figure 1 - A typical context switch involves a number of operations, each one requiring a number of CPU cycles
As will be seen in the example below, when threads are assigned unique priorities, the order in which they become ready to run determines the number of context switches performed. If the order in which they become ready to run is in ascending priority order, then each time one becomes ready to run, it will immediately cause a preemption of the lower-priority thread that is running, and result in a context switch.
Conversely, if the order of activation is in descending priority order, then no activation causes preemption, since the running thread is always higher in priority. When threads are all of the same priority, however, the order in which they become ready has no impact on the number of preemptions, and always results in the same, minimal number of context switches.
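The activation-order effect described above can be sketched with a small counting function. This is an illustrative model, not RTOS code; it assumes the common convention (used by ThreadX, among others) that a smaller number means a higher priority, and that an equal-priority thread never preempts the running one.

```c
/* Count the preemptions caused by a burst of thread activations.
   order[] lists thread priorities in the order they become ready;
   smaller number = more urgent (a common RTOS convention). */
static int count_preemptions(const int order[], int n)
{
    int preemptions = 0;
    int running = order[0];          /* the first ready thread runs */

    for (int i = 1; i < n; i++) {
        if (order[i] < running) {    /* new thread is strictly more urgent */
            preemptions++;           /* it preempts: one context switch */
            running = order[i];
        }
        /* otherwise it just joins the ready list; no switch right now */
    }
    return preemptions;
}
```

Activating four unique-priority threads in ascending urgency (4, 3, 2, 1) yields three preemptions; the reverse order (1, 2, 3, 4) yields none; four equal-priority threads also yield none, regardless of order.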
Because of all the processing it requires, a context switch is one of the most important measures of real-time performance in embedded systems. While all RTOSes go to great lengths to optimize the performance of their context switch operation, the application developer must ensure that a system performs as few context switches as possible.
For a given application, the way priorities are assigned to individual threads can have a significant impact on the number of context switches performed. In particular, by running multiple threads at the same priority, rather than assigning them each a unique priority, the system designer can avoid unnecessary context switches and reduce RTOS overhead.
Assigning multiple threads the same priority also makes it possible to properly address priority inheritance, and to implement round-robin scheduling and time-slicing. Each of these mechanisms is important in a real-time system, and each is difficult, if not impossible, to implement without running multiple threads at the same priority. Each can be used to keep system overhead low and, perhaps more importantly, to keep system behavior understandable.
What's Prioritization About?
Before analyzing the relationship between priority assignment and system performance, it is important to understand what a thread's priority represents, and how it affects the way the RTOS schedules that thread to run.
Most RTOSes employ a priority-based, preemptive scheduler. In this type of scheduler, the highest priority thread that is "ready to run" (i.e., is not waiting for something else to happen) is the one that the RTOS runs on the CPU. A thread's "readiness" may change as the result of an interrupt or the action of another thread.
One simple but common scenario is for a thread to be "waiting" for a message to appear in a message queue; when the message appears, the waiting thread becomes "ready." The RTOS is responsible for keeping track of which threads are ready and which are waiting, and for recognizing when an event enables a waiting thread to become ready.
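The scheduler's core decision, picking the highest-priority ready thread, can be sketched in a few lines of C. The array representation and function name here are hypothetical; real RTOSes typically use priority bitmaps or sorted ready lists so this selection takes constant time, but the rule is the same. As above, a smaller number means a higher priority.

```c
/* Return the index of the highest-priority ready thread, or -1 if no
   thread is ready (the system idles). prio[i] is thread i's priority
   (smaller = more urgent); ready[i] is nonzero if thread i is ready. */
static int pick_next(const int prio[], const int ready[], int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (ready[i] && (best < 0 || prio[i] < prio[best]))
            best = i;
    }
    return best;
}
```

The scheduler re-runs this decision whenever readiness changes, e.g. when a message arrival moves a waiting thread to the ready list.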
When a thread with priority higher than the active thread becomes ready to run (e.g., because the message it was waiting for finally arrived), the RTOS preempts the active thread. This preemption results in a context switch in which the context of the active thread is saved, the context of the higher priority thread is loaded, and the higher priority thread then runs on the CPU.
Because of the complex relationship between priority assignment and context switches, many application developers might not realize how much control they have over the number of context switches an application must perform.
How Priorities Determine Context Switch Count
To illustrate the effect that various methods of priority assignment can have on context switching, consider a system with four threads (Figure 2, below), named A, B, C, and D.
Figure 2 - To measure the impact of priority assignment, we set up two cases - one where all threads have the same priority and one where they each have a unique priority.
In this example, the threads operate in a producer-consumer fashion, with Thread D as the producer, sending three messages into each of the queues of threads A, B, and C. Let's look at how the priorities of the threads affect the total number of context switches performed by this system.
To do this, we'll examine two cases. In Case 1, all threads are assigned the same priority (4) and execute in round-robin fashion. In Case 2, the four threads are assigned unique priorities of 1, 2, 3, and 4. In each case, we'll measure the number of context switches performed, as well as the time it takes to complete an equal amount of work.
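The two cases can be sketched with a simplified simulation. Everything in this model is an assumption for illustration: a consumer is assumed to drain its queue and block again immediately after preempting the producer, equal-priority sends never preempt, and a smaller priority number means greater urgency. The counts it produces follow from this model, not from the measurements discussed in the later parts of this series.

```c
#define NUM_THREADS    4   /* A=0, B=1, C=2, D=3 (D is the producer) */
#define MSGS_PER_QUEUE 3

/* Simulate D sending MSGS_PER_QUEUE messages to each of A, B, and C,
   counting context switches. prio[]: smaller number = more urgent.
   A send to a strictly more urgent consumer preempts immediately; the
   consumer processes the message, blocks on its empty queue, and the
   producer resumes (two switches per message, in this model). */
static int simulate(const int prio[NUM_THREADS])
{
    int queued[3] = {0, 0, 0};   /* undelivered messages for A, B, C */
    int switches = 0;
    int producer = 3;            /* D starts running */

    for (int target = 0; target < 3; target++) {
        for (int m = 0; m < MSGS_PER_QUEUE; m++) {
            queued[target]++;
            if (prio[target] < prio[producer]) {
                switches++;          /* producer -> consumer */
                queued[target] = 0;  /* consumer drains queue, blocks */
                switches++;          /* consumer -> producer */
            }
        }
    }
    /* Producer finishes; any consumers with queued messages now run
       in turn, one switch each. */
    for (int t = 0; t < 3; t++) {
        if (queued[t] > 0) {
            switches++;
            queued[t] = 0;
        }
    }
    return switches;
}
```

Under this model, unique priorities (Case 2) cost two switches per message, 18 in all for the nine messages, while equal priorities (Case 1) cost only one switch per consumer after the producer finishes, three in all: a sixfold difference from priority assignment alone.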
To read Part 2, go to: A context switch's operational flow - two examples
To read Part 3, go to: Understanding the implications
William E. Lamie is co-founder and CEO of Express Logic, Inc., and is the author of the ThreadX RTOS. Prior to founding Express Logic, Mr. Lamie was the author of the Nucleus RTOS and co-founder of Accelerated Technology, Inc. Mr. Lamie has over 20 years' experience in embedded systems development, over 15 of which are in the development of real-time operating systems for embedded applications.