Part 1 Background
Chapter 1 Computer System Overview
Chapter 2 Operating System Overview
Part 2 Processes
Chapter 3 Process Description and Control
3.1 What Is a Process?
- Background
- Processes and Process Control Blocks
In computing, a process is an instance of a computer program that is being executed. It contains the program code and its activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
We can also think of a process as an entity that consists of a number of elements. Two essential elements of a process are program code (which may be shared with other processes that are executing the same program) and a set of data associated with that code.
Identifier: A unique identifier associated with this process, to distinguish it from all other processes.
State: If the process is currently executing, it is in the running state.
Priority: Priority level relative to other processes.
Program counter: The address of the next instruction in the program to be executed.
Memory pointers: Includes pointers to the program code and data associated with this process, plus any memory blocks shared with other processes.
Context data: These are data that are present in registers in the processor while the process is executing.
I/O status information: Includes outstanding I/O requests, I/O devices (e.g., disk drives) assigned to this process, a list of files in use by the process, and so on.
Accounting information: May include the amount of processor time and clock time used, time limits, account numbers, and so on.
The significant point about the process control block is that it contains sufficient information so that it is possible to interrupt a running process and later resume execution as if the interruption had not occurred.
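The elements listed above can be pictured as a struct. The following is a minimal sketch only; field names and sizes are illustrative and do not match any real kernel's layout (Linux's `struct task_struct`, for instance, carries hundreds of fields):

```c
#include <stdint.h>

/* Hypothetical process states, for illustration. */
enum proc_state { PS_NEW, PS_READY, PS_RUNNING, PS_BLOCKED, PS_EXIT };

/* A minimal sketch of a process control block, mirroring the
   elements described in the text. */
struct pcb {
    int             pid;             /* identifier */
    enum proc_state state;           /* state */
    int             priority;        /* priority */
    uintptr_t       program_counter; /* address of next instruction */
    void           *code, *data;     /* memory pointers */
    uint64_t        regs[16];        /* context data (saved registers) */
    int             open_fds[16];    /* I/O status: files in use */
    uint64_t        cpu_time_used;   /* accounting information */
};
```

Because all of this state is saved in the PCB, the OS can interrupt the process, save the registers into `regs`, and later restore them to resume execution transparently.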
3.2 Process States
- A Two-State Process Model
A Two-State Process Model transition diagram
- The Creation and Termination of Processes
common events leading to the creation of a process
common events leading to the termination of a process
- A Five-State Model
five-state process model
- Suspended Processes
with one suspend state
with two suspend states
reasons for process suspension
3.3 Process Description
- Operating System Control Structures
general structure of operating system control tables
Memory tables are used to keep track of both main (real) and secondary (virtual) memory.
I/O tables are used by the OS to manage the I/O devices and channels of the computer system.
File tables provide information about the existence of files, their location on secondary memory, their current status, and other attributes.
Process tables are used by the OS to manage processes.
- Process Control Structures
Process Location
typical elements of a process image
Process Attributes
typical elements of a process control block
3.4 Process Control
- Modes of Execution
The less-privileged mode is often referred to as the user mode because user programs typically would execute in this mode. The more-privileged mode is referred to as the system mode, control mode, or kernel mode.
- Process Creation
- Assign a unique process identifier to the new process.
- Allocate space for the process.
- Initialize the process control block.
- Set the appropriate linkages.
- Create or expand other data structures.
- Process Switching
- When To Switch Process
mechanisms for interrupting the execution of a process
- Mode Switching
If an interrupt is pending, the processor does the following:
- It sets the program counter to the starting address of an interrupt handler program.
- It switches from user mode to kernel mode so that the interrupt processing code may include privileged instructions.
- Change Of Process State
3.5 Execution of the Operating System
- Nonprocess Kernel
separate kernel
- Execution within User Processes
OS functions execute within user processes
- Process-Based Operating System
OS functions execute as separate processes
3.6 Security Issues
- System Access Threats
- Countermeasures
3.7 UNIX SVR4 Process Management
Process States
Process Description
Process Control
Chapter 4 Threads
4.1 Processes and Threads
- Multithreading
Multithreading refers to the ability of an OS to support multiple, concurrent paths of execution within a single process.
Single-Threaded and Multithreaded Process Models
- It takes far less time to create a new thread in an existing process than to create a brand-new process.
- It takes less time to terminate a thread than a process.
- It takes less time to switch between two threads within the same process than to switch between processes.
- Threads enhance efficiency in communication between different executing programs. Because threads within the same process share memory and files, they can communicate with each other without invoking the kernel.
| Dimension | Multiple processes | Multithreading | Conclusion |
|---|---|---|---|
| Data sharing/sync | Difficult to share (requires IPC); easy to sync | Easy to share within the same process; difficult to sync | Depends |
| Memory/CPU | More memory, lower CPU utilization | Less memory, higher CPU utilization | Multithreading better |
| Create/terminate/switch | Context switch is costly; creation/termination take more time | Context switch is cheap; creation/termination take less time | Multithreading better |
| Programming/debugging | Easy | Difficult | Multiple processes better |
| Reliability | Processes are independent | A crash in one thread takes down the whole process | Multiple processes better |
| Distribution | Adapts to multi-core and multi-computer systems | Adapts to multi-core only | Multiple processes better |
- Thread Functionality
- Thread States: Running, Ready and Blocked.
- Thread Synchronization: All of the threads of a process share the same address space and other resources, such as open files. Any alteration of a resource by one thread affects the environment of the other threads in the same process. It is, therefore, necessary to synchronize the activities of the various threads so that they do not interfere with each other or corrupt data structures.
4.2 Types of Threads
- User-Level and Kernel-Level Threads
There are a number of advantages to the use of ULTs instead of KLTs, including the following:
- Thread switching does not require kernel mode privileges because all of the thread management data structures are within the user address space of a single process.
- Scheduling can be application specific.
- ULTs can run on any OS.
There are two distinct disadvantages of ULTs compared to KLTs:
- In a typical OS, many system calls are blocking.
- In a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing. A kernel assigns one process to only one processor at a time.
Two Solutions:
- Change multiple threads to multiple processes.
- Jacketing (convert a blocking system call into a nonblocking system call)
Thread and Process Operation Latencies(μs)
- Other Arrangements
Relationship between Threads and Processes
4.3 Multicore and Multithreading
- Performance of Software on Multicore
Amdahl's law
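Amdahl's law states that if a fraction f of a program's work is perfectly parallelizable and the remaining (1 - f) is inherently serial, the speedup on n processors is 1 / ((1 - f) + f/n). A small sketch (function name is illustrative):

```c
/* Amdahl's law: speedup on n processors when a fraction f of the
   work is perfectly parallelizable and (1 - f) is serial. */
double amdahl_speedup(double f, int n)
{
    return 1.0 / ((1.0 - f) + f / n);
}
```

Note the serial fraction dominates: even with unlimited processors, a program that is 50% serial can never run more than 2x faster.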
- Application Example: Valve Game Software
4.4 Windows 7 Thread and SMP Management
- Process and Thread Objects
- Multithreading
- Thread States
- Support for OS Subsystems
- Symmetric Multiprocessing Support
4.5 Solaris Thread and SMP Management
- Multithreaded Architecture
- Motivation
- Process Structure
- Thread Execution
- Interrupts as Threads
4.6 Linux Process and Thread Management
- Linux Tasks
Linux Process/Thread Model
- Linux Threads
Linux clone() flags
4.7 Mac OS X Grand Central Dispatch
Chapter 5 Synchronization
5.1 Principles of Concurrency
Some Key Terms Related to Concurrency
- A Simple Example
- Race Condition
- Operating System Concerns
- The OS must be able to keep track of the various processes.
- The OS must allocate and deallocate various resources for each active process. These resources include
- Processor time: This is the scheduling function, discussed in Part Four.
- Memory: Most operating systems use a virtual memory scheme. The topic is addressed in Part Three.
- Files: Discussed in Chapter 12
- I/O devices: Discussed in Chapter 11
- The OS must protect the data and physical resources of each process against unintended interference by other processes.
- The functioning of a process and the output it produces must be independent of the speed at which its execution is carried out relative to the speed of other concurrent processes.
- Process Interaction
- Processes unaware of each other
- Processes indirectly aware of each other
- Processes directly aware of each other
Process Interaction
Competition among processes for resources
In the case of competing processes, three control problems must be faced: mutual exclusion, deadlock, and starvation. In concurrent programming, concurrent accesses to shared resources can lead to unexpected or erroneous behavior, so the parts of the program where a shared resource is accessed are protected. Such a protected section is called the critical section (or critical region), and the resource it guards is a critical resource; a critical section cannot be executed by more than one process at a time.
Illustration of Mutual Exclusion
Cooperation among processes by sharing
Cooperation among processes by communication
- Requirements for Mutual Exclusion
- Mutual exclusion must be enforced: Only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object.
- A process that halts in its noncritical section must do so without interfering with other processes.
- It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation.
- When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.
- No assumptions are made about relative process speeds or the number of processors.
- A process remains inside its critical section for a finite time only.
5.2 Mutual Exclusion: Hardware Support
- Interrupt Disabling
In a uniprocessor system, concurrent processes cannot have overlapped execution; they can only be interleaved. Furthermore, a process will continue to run until it invokes an OS service or until it is interrupted. Therefore, to guarantee mutual exclusion, it is sufficient to prevent a process from being interrupted.
Interrupt Disabling
Because the critical section cannot be interrupted, mutual exclusion is guaranteed. The price of this approach, however, is high. Another problem is that this approach will not work in a multiprocessor architecture.
- Special Machine Instructions
Compare&Swap Instruction
The compare&swap instruction, also called a compare and exchange instruction, can be defined as follows:
compare&swap instruction
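In C-like notation, the semantics are as follows. This is a sketch only: the real instruction performs all three steps atomically in hardware, which this plain function does not.

```c
/* Semantics of compare&swap: if *word equals testval, store newval;
   in all cases, return the old value of *word.  The hardware
   instruction does this atomically. */
int compare_and_swap(int *word, int testval, int newval)
{
    int oldval = *word;
    if (oldval == testval)
        *word = newval;
    return oldval;
}
```

A process can then enforce mutual exclusion on a shared variable (conventionally called `bolt`, initialized to 0) with `while (compare_and_swap(&bolt, 0, 1) == 1) /* busy wait */;` before its critical section, and `bolt = 0;` after it.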
Exchange Instruction
The exchange instruction can be defined as follows:
exchange instruction
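Again in C-like notation (a sketch of the semantics; the hardware instruction swaps a register and a memory word atomically, which this function does not):

```c
/* Semantics of exchange: atomically swap the contents of a register
   with the contents of a memory location. */
void exchange(int *reg, int *mem)
{
    int temp = *mem;
    *mem = *reg;
    *reg = temp;
}
```

For mutual exclusion, each process keeps a local `key = 1` and repeats `exchange(&key, &bolt);` until `key == 0`, then exchanges again on exit to release the lock (`bolt` is a shared variable initialized to 0, as in the compare&swap example).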
Properties of the Machine-Instruction Approach
The use of a special machine instruction to enforce mutual exclusion has a number of advantages:
- It is applicable to any number of processes on either a single processor or multiple processors sharing main memory.
- It is simple and therefore easy to verify.
- It can be used to support multiple critical sections; each critical section can be defined by its own variable.
There are some serious disadvantages:
- Busy waiting is employed: Thus, while a process is waiting for access to a critical section, it continues to consume processor time.
- Starvation is possible: When a process leaves a critical section and more than one process is waiting, the selection of a waiting process is arbitrary. Thus, some process could indefinitely be denied access.
- Deadlock is possible: Consider the following scenario on a single-processor system. Process P1 executes the special instruction(e.g., compare&swap, exchange) and enters its critical section. P1 is then interrupted to give the processor to P2, which has higher priority. If P2 now attempts to use the same resource as P1, it will be denied access because of the mutual exclusion mechanism. Thus, it will go into a busy waiting loop. However, P1 will never be dispatched because it is of lower priority than another ready process, P2.
5.3 Semaphores
Common Concurrency Mechanisms
The fundamental principle is this: Two or more processes can cooperate by means of simple signals, such that a process can be forced to stop at a specified place until it has received a specific signal. To transmit a signal via semaphore s, a process executes the primitive semSignal(s). To receive a signal via semaphore s, a process executes the primitive semWait(s); if the corresponding signal has not yet been transmitted, the process is suspended until the transmission takes place.
To achieve the desired effect, we can view the semaphore as a variable that has an integer value upon which only three operations are defined:
1. A semaphore may be initialized to a nonnegative integer value.
2. The semWait operation decrements the semaphore value. If the value becomes negative, then the process executing the semWait is blocked. Otherwise, the process continues execution.
3. The semSignal operation increments the semaphore value. If the resulting value is less than or equal to zero, then a process blocked by a semWait operation, if any, is unblocked.
A definition of semaphore primitives
A definition of binary semaphore primitives
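The three operations above can be sketched in C. This is illustrative only: a real implementation would suspend and resume processes through the scheduler, and the primitives would be made atomic; here the queue simply records the ids of blocked processes in FIFO order.

```c
#define MAXQ 64

/* Sketch of a counting semaphore's state. */
typedef struct {
    int count;
    int queue[MAXQ];   /* pids of blocked processes, FIFO order */
    int head, tail;
} semaphore;

void sem_initialize(semaphore *s, int value)
{
    s->count = value;
    s->head = s->tail = 0;
}

/* semWait: decrement; if the value becomes negative, the calling
   process blocks (here: its pid is placed on s.queue). */
void semWait(semaphore *s, int pid)
{
    if (--s->count < 0)
        s->queue[s->tail++ % MAXQ] = pid;
}

/* semSignal: increment; if the value is still <= 0, one blocked
   process is unblocked (here: its pid is returned; -1 if none). */
int semSignal(semaphore *s)
{
    if (++s->count <= 0)
        return s->queue[s->head++ % MAXQ];
    return -1;
}
```

Note how the invariant matches the text: a negative count means |count| processes are suspended on the queue.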
Example of Semaphore Mechanism
Mutual Exclusion Using Semaphores
- Mutual Exclusion
Processes Accessing Shared Data Protected by a Semaphore
Process A executes semWait(lock); because the semaphore has a value of 1 at the time of the semWait operation, A can immediately enter its critical section, and the semaphore takes on the value 0. While A is in its critical section, both B and C perform a semWait operation and are blocked pending the availability of the semaphore. When A exits its critical section and performs semSignal(lock), B, which was the first process in the queue, can now enter its critical section. The program can equally well handle a requirement that more than one process be allowed in its critical section at a time; this requirement is met simply by initializing the semaphore to the specified value. Thus, at any time, the value of s.count can be interpreted as follows:
- s.count ≥ 0: s.count is the number of processes that can execute semWait(s) without suspension (if no semSignal(s) is executed in the meantime). Such situations allow semaphores to support synchronization as well as mutual exclusion.
- s.count < 0: The magnitude of s.count is the number of processes suspended in s.queue.
- A Solution to the Infinite-Buffer Producer/Consumer Problem Using Semaphores
The producer and consumer functions can be expressed as follows (the variables in and out are initialized to 0, and n is the size of the buffer):
Producer/Consumer
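The structure can be sketched with real POSIX semaphores (sem_wait/sem_post): s acts as a binary semaphore protecting the buffer, and n counts the items available to the consumer. The function and variable names are illustrative, and the "infinite" buffer is approximated here by a large array.

```c
#include <pthread.h>
#include <semaphore.h>

#define CAP 1024                 /* "infinite" for this small demo */
static int buffer[CAP];
static int in = 0, out = 0;
static sem_t s;                  /* binary semaphore (mutex), init 1 */
static sem_t n;                  /* counting semaphore (items), init 0 */
static int to_produce;
static long consumed_sum;

static void *producer(void *arg)
{
    (void)arg;
    for (int v = 0; v < to_produce; v++) {
        sem_wait(&s);
        buffer[in++ % CAP] = v;  /* append v to the buffer */
        sem_post(&s);
        sem_post(&n);            /* signal: one more item available */
    }
    return 0;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < to_produce; i++) {
        sem_wait(&n);            /* block until an item exists */
        sem_wait(&s);
        consumed_sum += buffer[out++ % CAP];  /* take an item */
        sem_post(&s);
    }
    return 0;
}

/* Runs one producer and one consumer; returns the sum consumed. */
long run_producer_consumer(int items)
{
    pthread_t p, c;
    to_produce = items;
    consumed_sum = 0;
    in = out = 0;
    sem_init(&s, 0, 1);
    sem_init(&n, 0, 0);
    pthread_create(&p, 0, producer, 0);
    pthread_create(&c, 0, consumer, 0);
    pthread_join(p, 0);
    pthread_join(c, 0);
    sem_destroy(&s);
    sem_destroy(&n);
    return consumed_sum;
}
```

The order of operations in the consumer matters: waiting on n before s avoids holding the mutex while blocked on an empty buffer.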
- Implementation of Semaphores
As was mentioned earlier, it is imperative that the semWait and semSignal operations be implemented as atomic primitives. The essence of the problem is one of mutual exclusion: only one process at a time may manipulate a semaphore with either a semWait or semSignal operation.
5.4 Monitors
- Monitor with Signal
A monitor is a software module consisting of one or more procedures, an initialization sequence, and local data. The chief characteristics of a monitor are the following:
- The local data variables are accessible only by the monitor's procedures and not by any external procedure.
- A process enters the monitor by invoking one of its procedures.
- Only one process may be executing in the monitor at a time; any other processes that have invoked the monitor are blocked, waiting for the monitor to become available.
A monitor supports synchronization by the use of condition variables that are contained within the monitor and accessible only within the monitor. Condition variables are a special data type in monitors, which are operated on by two functions:
- cwait(c): Suspend execution of the calling process on condition c. The monitor is now available for use by another process.
- csignal(c): Resume execution of some process blocked after a cwait on the same condition. If there are several such processes, choose one of them; if there is no such process, do nothing.
Structure of a Monitor
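A bounded-buffer monitor can be approximated with a POSIX mutex (playing the role of monitor entry/exit) and condition variables (playing the role of cwait/csignal). The type and function names are illustrative. Note that pthread condition variables follow notify (Mesa) semantics rather than Hoare signal semantics, which is why each wait is re-checked in a while loop.

```c
#include <pthread.h>

#define N 8

/* Sketch of a bounded-buffer monitor. */
typedef struct {
    pthread_mutex_t lock;              /* "one process in the monitor" */
    pthread_cond_t  notfull, notempty; /* condition variables */
    int buf[N];
    int count, in, out;
} bb_monitor;

void bb_init(bb_monitor *m)
{
    pthread_mutex_init(&m->lock, 0);
    pthread_cond_init(&m->notfull, 0);
    pthread_cond_init(&m->notempty, 0);
    m->count = m->in = m->out = 0;
}

void bb_append(bb_monitor *m, int x)
{
    pthread_mutex_lock(&m->lock);                  /* enter monitor */
    while (m->count == N)
        pthread_cond_wait(&m->notfull, &m->lock);  /* cwait(notfull) */
    m->buf[m->in] = x;
    m->in = (m->in + 1) % N;
    m->count++;
    pthread_cond_signal(&m->notempty);             /* csignal(notempty) */
    pthread_mutex_unlock(&m->lock);                /* leave monitor */
}

int bb_take(bb_monitor *m)
{
    pthread_mutex_lock(&m->lock);
    while (m->count == 0)
        pthread_cond_wait(&m->notempty, &m->lock); /* cwait(notempty) */
    int x = m->buf[m->out];
    m->out = (m->out + 1) % N;
    m->count--;
    pthread_cond_signal(&m->notfull);              /* csignal(notfull) */
    pthread_mutex_unlock(&m->lock);
    return x;
}
```

All shared state lives inside the struct and is touched only while holding the lock, mirroring the rule that monitor data is accessible only through monitor procedures.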
- Alternate Model of Monitor with Notify and Broadcast
There are two drawbacks to this approach:
- If the process issuing the csignal has not finished with the monitor, then two additional process switches are required: one to block this process and another to resume it when the monitor becomes available.
- Process scheduling associated with a signal must be perfectly reliable. When a csignal is issued, a process from the corresponding condition queue must be activated immediately and the scheduler must ensure that no other process enters the monitor before activation. Otherwise, the condition under which the process was activated could change.
5.5 Message Passing
When processes interact with one another, two fundamental requirements must be satisfied: synchronization and communication. Processes need to be synchronized to enforce mutual exclusion; cooperating processes may need to exchange information. One approach to providing both of these functions is message passing. Message passing has the further advantage that it lends itself to implementation in distributed systems as well as in shared-memory multiprocessor and uniprocessor systems.
The actual function of message passing is normally provided in the form of a pair of primitives:
- send(destination, message)
- receive(source, message)
Design Characteristics of Message Systems for Interprocess Communication and Synchronization
- Synchronization
Both the sender and receiver can be blocking or nonblocking. Three combinations are common, although any particular system will usually have only one or two combinations implemented:
- Blocking send, blocking receive
- Nonblocking send, blocking receive
- Nonblocking send, nonblocking receive
The nonblocking send is more natural for many concurrent programming tasks. For the receive primitive, the blocking version appears to be more natural for many concurrent programming tasks.
- Addressing
Clearly, it is necessary to have a way of specifying in the send primitive which process is to receive the message. Similarly, most implementations allow a receiving process to indicate the source of a message to be received. The various schemes for specifying processes to send and receive primitives fall into two categories: direct addressing and indirect addressing.
A strength of the use of indirect addressing is that, by decoupling the sender and receiver, it allows for greater flexibility in the use of messages. The relationship between senders and receivers can be one to one, many to one, one to many, or many to many.
Indirect Process Communication
- Message Format
General Message Format
- Queueing Discipline
The simplest queueing discipline is first-in-first-out, but this may not be sufficient if some messages are more urgent than others. An alternative is to allow the specifying of message priority, on the basis of the message type or by designation by the sender. Another alternative is to allow the receiver to inspect the message queue and select which message to receive next.
- Mutual Exclusion
Mutual Exclusion Using Messages
5.6 Readers/Writers Problem
The readers/writers problem is defined as follows: There is a data area shared among a number of processes. The data area could be a file, a block of main memory, or even a bank of processor registers. There are a number of processes that only read the data area (readers) and a number that only write to the data area (writers). The conditions that must be satisfied are as follows:
1. Any number of readers may simultaneously read the file.
2. Only one writer at a time may write to the file.
3. If a writer is writing to the file, no reader may read it.
- Readers Have Priority
- Writers Have Priority
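The readers-have-priority scheme can be sketched with POSIX semaphores. A counter readcount tracks the number of active readers; the first reader locks out writers, and the last reader readmits them. Function and variable names are illustrative.

```c
#include <semaphore.h>

/* Readers-have-priority solution to the readers/writers problem. */
static sem_t x;           /* protects readcount, initialized to 1 */
static sem_t wsem;        /* writer exclusion, initialized to 1 */
static int readcount = 0;
static int shared_data = 0;

void rw_init(void)
{
    sem_init(&x, 0, 1);
    sem_init(&wsem, 0, 1);
}

int read_data(void)
{
    int v;
    sem_wait(&x);
    if (++readcount == 1)
        sem_wait(&wsem);  /* first reader blocks out writers */
    sem_post(&x);
    v = shared_data;      /* any number of readers may be here */
    sem_wait(&x);
    if (--readcount == 0)
        sem_post(&wsem);  /* last reader readmits writers */
    sem_post(&x);
    return v;
}

void write_data(int v)
{
    sem_wait(&wsem);      /* only one writer at a time */
    shared_data = v;
    sem_post(&wsem);
}
```

Because wsem stays held as long as any reader is active, this scheme satisfies all three conditions but can starve writers under a steady stream of readers, which motivates the writers-have-priority variant.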
Chapter 6 Concurrency: Deadlock and Starvation
6.1 Principles of Deadlock
Deadlock can be defined as the permanent blocking of a set of processes that either compete for system resources or communicate with each other.
Illustration of Deadlock
- Reusable Resources
Two general categories of resources can be distinguished: reusable and consumable. A reusable resource is one that can be safely used by only one process at a time and is not depleted by that use. Examples of reusable resources include processors; I/O channels; main and secondary memory; devices; and data structures such as files, databases, and semaphores. An example of deadlock with a reusable resource: a multiprogramming system interleaves the execution of processes whose requests for main memory cannot all be satisfied.
- Consumable Resources
A consumable resource is one that can be created(produced) and destroyed(consumed).
Summary of Deadlock Detection, Prevention, and Avoidance Approaches for Operating Systems
- Resource Allocation Graphs
Examples of Resource Allocation Graphs
A useful tool in characterizing the allocation of resources to processes is the resource allocation graph.
- The Conditions for Deadlock
Three conditions of policy must be present for a deadlock to be possible:
- Mutual exclusion.
- Hold and wait.
- No preemption.
The first three conditions are necessary but not sufficient for a deadlock to exist. For a deadlock to actually take place, a fourth condition is required:
- Circular wait.
Deadlock
6.2 Deadlock Prevention
The strategy of deadlock prevention is, simply put, to design a system in such a way that the possibility of deadlock is excluded. An indirect method of deadlock prevention is to prevent the occurrence of one of the three necessary conditions listed previously. A direct method of deadlock prevention is to prevent the occurrence of a circular wait.
- Mutual Exclusion
In general, the first of the four listed conditions cannot be disallowed.
- Hold and Wait
The hold-and-wait condition can be prevented by requiring that a process request all of its required resources at one time, blocking the process until all requests can be granted simultaneously.
- No Preemption
This condition can be prevented in several ways. First, if a process holding certain resources is denied a further request, that process must release its original resources and, if necessary, request them again together with the additional resource. Alternatively, if a process requests a resource that is currently held by another process, the OS may preempt the second process and require it to release its resources.
- Circular Wait
The circular wait condition can be prevented by defining a linear ordering of resource types.
6.3 Deadlock Avoidance
In deadlock prevention, we constrain resource requests to prevent at least one of the four conditions of deadlock. This is either done indirectly, by preventing one of the three necessary policy conditions(mutual exclusion, hold and wait, no preemption), or directly, by preventing circular wait. This leads to inefficient use of resources and inefficient execution of processes. Deadlock avoidance, on the other hand, allows the three necessary conditions but makes judicious choices to assure that the deadlock point is never reached. As such, avoidance allows more concurrency than prevention. With deadlock avoidance, a decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to a deadlock. Two approaches to deadlock avoidance:
- Do not start progress if its demands might lead to deadlock.
- Do not grant an incremental resource request to a process if this allocation might lead to deadlock.
- Process Initiation Denial
Consider a system of n processes and m different types of resources. Define the vector R of total resources, the vector V of available resources, the claim matrix C (Cij = requirement of process Pi for resource Rj), and the allocation matrix A (Aij = current allocation of Rj to Pi). The following relationships hold:
1. Rj = Vj + the sum of Aij over i = 1..n: all resources are either available or allocated.
2. Cij ≤ Rj for all i, j: no process can claim more than the total amount of a resource.
3. Aij ≤ Cij for all i, j: no process is allocated more of a resource than it originally claimed.
With these quantities defined, we can define a deadlock avoidance policy that refuses to start a new process if its resource requirements might lead to deadlock. Start a new process Pn+1 only if, for all j,
Rj ≥ C(n+1)j + the sum of Cij over i = 1..n.
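The start-up test can be sketched as a small function. The array bounds and names here are illustrative, not part of any standard interface.

```c
#define MAXP 16   /* max processes (illustrative bound) */
#define MAXR 8    /* max resource types (illustrative bound) */

/* Process initiation denial: admit a new process with claim row
   newclaim only if, for every resource type j, the total resource
   R[j] covers the claims of all n current processes plus the new
   one. */
int can_start(int n, int m, const int R[MAXR],
              int C[MAXP][MAXR], const int newclaim[MAXR])
{
    for (int j = 0; j < m; j++) {
        int total = newclaim[j];
        for (int i = 0; i < n; i++)
            total += C[i][j];
        if (total > R[j])
            return 0;   /* claims could exceed R[j]: deny */
    }
    return 1;           /* safe to start the new process */
}
```

This policy is conservative: it assumes every process may claim its maximum simultaneously, so it can deny processes that would in practice run fine.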
That is, a process is only started if the maximum claim of all current processes plus those of the new process can be met.
- Resource Allocation Denial
The strategy of resource allocation denial is referred to as the banker's algorithm. Consider a system with a fixed number of processes and a fixed number of resources. At any time a process may have zero or more resources allocated to it. The state of the system reflects the current allocation of resources to processes; thus, the state consists of the two vectors, Resource and Available, and the two matrices, Claim and Allocation, defined earlier. A safe state is one in which there is at least one sequence of resource allocations to processes that does not result in a deadlock.
Deadlock avoidance has the advantage that it is not necessary to preempt and roll back processes, as in deadlock detection, and it is less restrictive than deadlock prevention. However, it does have a number of restrictions on its use:
- The maximum resource requirement for each process must be stated in advance.
- The processes under consideration must be independent; that is, the order in which they execute must be unconstrained by any synchronization requirements.
- There must be a fixed number of resources to allocate.
- No process may exit while holding resources.
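The safe-state test at the heart of the banker's algorithm can be sketched as follows. The state is safe if some ordering of the processes lets each one acquire its remaining claim (Claim minus Allocation), run to completion, and release everything it holds. Names and array bounds are illustrative.

```c
#include <string.h>

#define MP 16   /* max processes (illustrative bound) */
#define MR 8    /* max resource types (illustrative bound) */

/* Returns 1 if the state (Available, Claim C, Allocation A) is safe. */
int is_safe(int n, int m, const int avail0[MR],
            int C[MP][MR], int A[MP][MR])
{
    int avail[MR];
    int done[MP] = {0};
    memcpy(avail, avail0, sizeof avail);
    for (int finished = 0; finished < n; ) {
        int progress = 0;
        for (int i = 0; i < n; i++) {
            if (done[i])
                continue;
            int ok = 1;                 /* can Pi's remaining claim be met? */
            for (int j = 0; j < m; j++)
                if (C[i][j] - A[i][j] > avail[j]) { ok = 0; break; }
            if (ok) {                   /* Pi runs to completion and releases */
                for (int j = 0; j < m; j++)
                    avail[j] += A[i][j];
                done[i] = 1;
                finished++;
                progress = 1;
            }
        }
        if (!progress)
            return 0;                   /* no process can finish: unsafe */
    }
    return 1;
}
```

The banker's algorithm grants a request only if the state that would result still passes this test; otherwise the requesting process is suspended.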
6.4 Deadlock Detection
Deadlock prevention strategies are very conservative; they solve the problem of deadlock by limiting access to resources and by imposing restrictions on processes. At the opposite extreme, deadlock detection strategies do not limit resource access or restrict process actions.
- Deadlock Detection Algorithm
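The detection algorithm works by marking: initially mark every process with no allocated resources, then repeatedly mark any process whose outstanding request vector Q can be satisfied by a working vector W (initialized to Available), adding the marked process's allocation back into W. A sketch, with illustrative names and bounds:

```c
#include <string.h>

#define MP 16   /* max processes (illustrative bound) */
#define MR 8    /* max resource types (illustrative bound) */

/* Marks processes whose requests can be met; fills deadlocked[] with
   1 for each unmarked (deadlocked) process and returns their count. */
int detect_deadlocked(int n, int m, const int avail[MR],
                      int Q[MP][MR], int A[MP][MR],
                      int deadlocked[MP])
{
    int W[MR];
    int marked[MP];
    memcpy(W, avail, sizeof W);
    for (int i = 0; i < n; i++) {       /* mark processes holding nothing */
        marked[i] = 1;
        for (int j = 0; j < m; j++)
            if (A[i][j] != 0) { marked[i] = 0; break; }
    }
    int progress = 1;
    while (progress) {
        progress = 0;
        for (int i = 0; i < n; i++) {
            if (marked[i])
                continue;
            int ok = 1;                 /* can Pi's request be met by W? */
            for (int j = 0; j < m; j++)
                if (Q[i][j] > W[j]) { ok = 0; break; }
            if (ok) {                   /* yes: mark Pi, release into W */
                for (int j = 0; j < m; j++)
                    W[j] += A[i][j];
                marked[i] = 1;
                progress = 1;
            }
        }
    }
    int count = 0;
    for (int i = 0; i < n; i++) {
        deadlocked[i] = !marked[i];
        count += deadlocked[i];
    }
    return count;
}
```

Unlike avoidance, this uses actual outstanding requests rather than maximum claims, so it can be run lazily, e.g. only when a request cannot be granted immediately.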
A deadlock exists if and only if there are unmarked processes at the end of the algorithm. Each unmarked process is deadlocked.
- Recovery
Once deadlock has been detected, some strategy is needed for recovery. The following are possible approaches, listed in order of increasing sophistication:
- Abort all deadlocked processes. This is, believe it or not, one of the most common, if not the most common, solutions adopted in operating systems.
- Back up each deadlocked process to some previously defined checkpoint, and restart all processes. This requires that rollback and restart mechanisms be built into the system. The risk in this approach is that the original deadlock may recur. However, the indeterminacy of concurrent processing may ensure that this does not happen.
- Successively abort deadlocked processes until deadlock no longer exists. The order in which processes are selected for abortion should be on the basis of some criterion of minimum cost. After each abortion, the detection algorithm must be re-invoked to see whether deadlock still exists.
- Successively preempt resources until deadlock no longer exists. As in (3), a cost-based selection should be used, and reinvocation of the detection algorithm is required after each preemption. A process that has a resource preempted from it must be rolled back to a point prior to its acquisition of that resource.
6.5 An Integrated Deadlock Strategy
There are strengths and weaknesses to all of the strategies for dealing with deadlock. Rather than attempting to design an OS facility that employs only one of those strategies, it might be more efficient to use different strategies in different situations.
- Group resources into a number of different resource classes.
- Use the linear ordering strategy defined previously for the prevention of a circular wait to prevent deadlocks between resource classes.
- Within a resource class, use the algorithm that is most appropriate for that class.
As an example of this technique, consider the following classes of resources:
- Swappable space: Blocks of memory on secondary storage for use in swapping processes
- Process resources: Assignable devices, such as tape drives, and files
- Main memory: Assignable to processes in pages or segments
- Internal resources: Such as I/O channels
6.6 Dining Philosophers Problem
Dining Arrangement for Philosophers
- Solution Using Semaphores
- Solution Using a Monitor
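One classic semaphore solution admits at most four philosophers to the table at once, which makes the circular wait impossible: with five forks and at most four diners, at least one philosopher can always acquire both forks. A sketch with POSIX semaphores (names and the run harness are illustrative):

```c
#include <pthread.h>
#include <semaphore.h>

#define PHIL 5
static sem_t fork_sem[PHIL];  /* one semaphore per fork, init 1 */
static sem_t room;            /* admits at most PHIL - 1 diners */
static int meals[PHIL];
static int rounds;

static void *philosopher(void *arg)
{
    int i = (int)(long)arg;
    for (int k = 0; k < rounds; k++) {
        sem_wait(&room);                      /* at most 4 at the table */
        sem_wait(&fork_sem[i]);               /* pick up left fork */
        sem_wait(&fork_sem[(i + 1) % PHIL]);  /* pick up right fork */
        meals[i]++;                           /* eat */
        sem_post(&fork_sem[(i + 1) % PHIL]);  /* put down right fork */
        sem_post(&fork_sem[i]);               /* put down left fork */
        sem_post(&room);                      /* leave the table */
    }
    return 0;
}

/* Each philosopher eats k times; returns total meals eaten. */
int run_philosophers(int k)
{
    pthread_t t[PHIL];
    rounds = k;
    sem_init(&room, 0, PHIL - 1);
    for (int i = 0; i < PHIL; i++) {
        sem_init(&fork_sem[i], 0, 1);
        meals[i] = 0;
    }
    for (int i = 0; i < PHIL; i++)
        pthread_create(&t[i], 0, philosopher, (void *)(long)i);
    int total = 0;
    for (int i = 0; i < PHIL; i++) {
        pthread_join(t[i], 0);
        total += meals[i];
    }
    return total;
}
```

Without the room semaphore, all five philosophers could pick up their left forks simultaneously and deadlock waiting for their right forks.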