Friday, May 21, 2010

OS Terminologies

Multitasking
Multitasking is the process of scheduling and switching the controller between several tasks. Using multitasking, a single controller can attend to several sequential tasks, which maximizes the utilization of the controller. One of the most important aspects of multitasking is that it allows the application programmer to manage the complexity inherent in real-time applications.

Kernel
The kernel is the part of a multitasking system responsible for the management of tasks and communication between tasks. The fundamental service provided by the kernel is context switching. The use of a real-time kernel will generally simplify the design of systems by allowing the application to be divided into multiple tasks managed by the kernel.
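As a sketch of what dividing an application into kernel-managed tasks looks like in code, the fragment below creates two tasks and then hands control to the kernel. The calls task_create() and kernel_start(), the priority convention, and the stack sizes are all hypothetical; real kernels (µC/OS-II, FreeRTOS, etc.) provide equivalent but differently named services.

#include <stdint.h>

/* Hypothetical kernel services -- illustrative only, not a real API. */
extern void task_create(void (*entry)(void), uint8_t priority,
                        uint32_t *stack, uint32_t stack_words);
extern void kernel_start(void);

void sensor_task(void);    /* samples an ADC every few ms    */
void display_task(void);   /* refreshes a display less often */

static uint32_t sensor_stack[128];    /* each task gets its own stack */
static uint32_t display_stack[128];

int main(void)
{
    task_create(sensor_task,  1, sensor_stack,  128);  /* more urgent */
    task_create(display_task, 2, display_stack, 128);  /* less urgent */
    kernel_start();          /* the kernel now schedules the two tasks */
    return 0;                /* normally never reached */
}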

Scheduler
The scheduler is the part of the kernel responsible for determining which task will run next. Most real-time kernels are priority based: each task is assigned a priority based on its importance, and the priority for each task is application specific. In a priority-based kernel, control of the microcontroller is always given to the highest-priority task that is ready to run.
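A minimal, runnable sketch of priority-based selection is shown below. It assumes a small fixed table of task control blocks; the names tcb_t and task_table and the convention that 0 is the highest priority are illustrative, not taken from any specific kernel.

#include <stdio.h>

#define NUM_TASKS 4

typedef enum { READY, RUNNING, WAITING } task_state_t;

typedef struct {
    const char   *name;
    unsigned      priority;   /* 0 = highest priority (illustrative convention) */
    task_state_t  state;
} tcb_t;

static tcb_t task_table[NUM_TASKS] = {
    { "logger",  3, READY   },
    { "control", 0, WAITING },   /* highest priority, but currently blocked */
    { "comms",   1, READY   },
    { "ui",      2, READY   },
};

/* Return the index of the highest-priority READY task, or -1 if none. */
static int schedule_next(void)
{
    int best = -1;
    for (int i = 0; i < NUM_TASKS; i++) {
        if (task_table[i].state == READY &&
            (best < 0 || task_table[i].priority < task_table[best].priority))
            best = i;
    }
    return best;
}

int main(void)
{
    int next = schedule_next();
    if (next >= 0)
        printf("next task to run: %s\n", task_table[next].name);  /* "comms" */
    return 0;
}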

Task
A task, also called a thread, is a part of the main program that executes independently. Designing a real-time application involves splitting the work to be done into tasks, each responsible for a portion of the application. Each task is assigned a priority, its own set of CPU registers, and its own stack area. A task is typically an infinite loop that can be in any one of several states: READY, RUNNING, WAITING, etc.
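A typical task body is just an infinite loop that does a piece of work and then blocks, letting the kernel run something else. The sketch below assumes a hypothetical task_delay() kernel call and illustrative hardware-access functions; the state names mirror the ones mentioned above.

#include <stdint.h>

/* Possible states a task can be in (the usual textbook names). */
typedef enum { TASK_READY, TASK_RUNNING, TASK_WAITING } task_state_t;

/* Hypothetical kernel call: block the calling task for 'ticks' clock ticks.
   While blocked the task is WAITING; when the delay expires it becomes READY. */
extern void task_delay(uint32_t ticks);

extern uint16_t read_adc(void);          /* illustrative hardware access  */
extern void     filter_sample(uint16_t); /* illustrative processing step  */

void sensor_task(void)
{
    for (;;) {                    /* a task is normally an infinite loop   */
        uint16_t sample = read_adc();
        filter_sample(sample);
        task_delay(10);           /* give up the CPU until the next period */
    }
}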

Context Switch
When a multitasking kernel switches from one task to another, the state of the current task (its processor registers and associated data) must be saved so that, when the task is later restarted, it continues as if it had never been interrupted. Restarting a task involves loading the processor registers and memory with all the previously saved data and resuming execution at the instruction that was about to be executed when the task was last interrupted. Once the current task's state has been saved, the new task's context is restored from its storage area and execution of the new task's code resumes. This operation is called a context switch, or task switch. Context switching adds overhead to the application, because every task requires memory, its own stack, in which its state is saved. The time required to perform a context switch is determined by how many registers have to be saved and restored by the CPU.
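On a microcontroller the actual save and restore of registers is written in assembly, but the idea can be demonstrated on a PC with the POSIX ucontext API, where swapcontext() saves the current register set into one context structure and restores another. This is only a host-side illustration of the concept, not RTOS code.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];           /* the task's private stack */

static void task_body(void)
{
    printf("task: running with my own stack and registers\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task state, resume main */
    printf("task: resumed exactly where I left off\n");
}

int main(void)
{
    getcontext(&task_ctx);                       /* initialise the context   */
    task_ctx.uc_stack.ss_sp   = task_stack;      /* give it its own stack    */
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;       /* where to go when it ends */
    makecontext(&task_ctx, task_body, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);   /* context switch: main -> task */
    printf("main: back again, switching a second time\n");
    swapcontext(&main_ctx, &task_ctx);   /* resume the task mid-function */
    printf("main: done\n");
    return 0;
}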

Preemptive Scheduling
Preemptive scheduling uses a real-time clock that generates interrupts at regular intervals (say, every 1/100th of a second). Each time an interrupt occurs, the processor may be switched from one task to another. Which task runs next is generally determined by the priorities assigned to the tasks: the highest-priority task that is ready to run is always given control of the CPU. When a task with a higher priority than the current one becomes ready to run, the current task is preempted (suspended) and the higher-priority task is started. Task priorities can also be changed in an ISR, depending on the application, so some tasks may execute more frequently than others.
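The sketch below shows what the periodic tick looks like in a preemptive kernel: the timer ISR updates delayed tasks and, if a higher-priority task has become ready, requests a context switch before returning. All names (tick_isr, task_table, request_context_switch) are illustrative; each real kernel has its own versions of these hooks.

#include <stdint.h>

#define NUM_TASKS 4

typedef enum { READY, RUNNING, WAITING } task_state_t;

typedef struct {
    task_state_t state;
    uint32_t     delay_ticks;   /* ticks left until the task becomes READY */
    uint8_t      priority;      /* 0 = highest                             */
} tcb_t;

extern tcb_t   task_table[NUM_TASKS];
extern uint8_t current_task;                 /* index of the running task */
extern void    request_context_switch(void); /* hypothetical kernel hook  */

/* Called by the hardware timer every tick (e.g. every 1/100th of a second). */
void tick_isr(void)
{
    int preempt = 0;

    for (int i = 0; i < NUM_TASKS; i++) {
        if (task_table[i].state == WAITING && task_table[i].delay_ticks > 0) {
            if (--task_table[i].delay_ticks == 0) {
                task_table[i].state = READY;
                /* A task with better priority than the running one is now
                   ready, so the running task must be preempted.           */
                if (task_table[i].priority < task_table[current_task].priority)
                    preempt = 1;
            }
        }
    }
    if (preempt)
        request_context_switch();   /* performed on exit from the ISR */
}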

Co-operative/Non-Preemptive Scheduling
In a co-operative (non-preemptive) scheduling policy, tasks are generally arranged in a round-robin queue. When the running task gives up the CPU, it goes to the end of the queue, the task at the head of the queue is executed, and all the other tasks in the queue move up one place. A new higher-priority task gains control of the CPU only when the current task gives it up. This provides a measure of fairness, as the tasks cooperate with each other to share the CPU. Asynchronous events are handled by ISRs; an ISR can make a higher-priority task ready to run, but the ISR always returns to the interrupted task. With non-preemptive scheduling, interrupt latency is kept low. At the task level, non-reentrant functions can be used by each task without risk of corruption by another task, because each task runs to completion before it relinquishes the CPU.
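Cooperative round-robin scheduling can be reduced to a loop over a queue of task functions: each task runs, returns (i.e. gives up the CPU), and the next one in the queue is called. The minimal, runnable sketch below uses ordinary function calls to stand in for "run until the task yields"; all the names are illustrative.

#include <stdio.h>

#define NUM_TASKS 3

/* Each task does a small amount of work and then returns,
   which is how it cooperatively gives up the CPU.          */
static void task_a(void) { printf("task A runs\n"); }
static void task_b(void) { printf("task B runs\n"); }
static void task_c(void) { printf("task C runs\n"); }

static void (*const run_queue[NUM_TASKS])(void) = { task_a, task_b, task_c };

int main(void)
{
    /* Round-robin: call each task in turn; after the last one,
       start again from the front of the queue.                 */
    for (int round = 0; round < 2; round++) {
        for (int i = 0; i < NUM_TASKS; i++)
            run_queue[i]();       /* runs to completion, then yields */
    }
    return 0;
}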
