Operating System

Author: hshg | Published 2018-11-21 11:08

    Part 1 Background

    Chapter 1 Computer System Overview

    Chapter 2 Operating System Overview

    Part 2 Processes

    Chapter 3 Process Description and Control

    3.1 What Is a Process?
    1. Background
    2. Processes and Process Control Blocks
      In computing, a process is an instance of a computer program that is being executed. It contains the program code and its activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.
      We also think of a process as an entity that consists of a number of elements. Two essential elements of a process are program code (which may be shared with other processes that are executing the same program) and a set of data associated with that code.
    Simplified Process Control Block
    Identifier: A unique identifier associated with this process, to distinguish it from all other processes.
    State: If the process is currently executing, it is in the running state.
    Priority: Priority level relative to other processes.
    Program counter: The address of the next instruction in the program to be executed.
    Memory pointers: Includes pointers to the program code and data associated with this process, plus any memory blocks shared with other processes.
    Context data: These are data that are present in registers in the processor while the process is executing.
    I/O status information: Includes outstanding I/O requests, I/O devices(e.g., disk drives) assigned to this process, a list of files in use by the process, and so on.
    Accounting information: May include the amount of processor time and clock time used, time limits, account numbers, and so on.
    The significant point about the process control block is that it contains sufficient information so that it is possible to interrupt a running process and later resume execution as if the interruption had not occurred.
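      A hedged C sketch of how such a control block might be laid out (the field names are illustrative and not drawn from any particular kernel):

```c
/* Illustrative only: a simplified process control block.
   Real PCBs (e.g., Linux's task_struct) carry far more state. */
typedef enum { READY, RUNNING, BLOCKED } proc_state_t;

typedef struct pcb {
    int               pid;             /* Identifier                          */
    proc_state_t      state;           /* State                               */
    int               priority;        /* Priority                            */
    void             *program_counter; /* Address of next instruction         */
    void             *code, *data;     /* Memory pointers                     */
    long              registers[32];   /* Context data: saved register file   */
    struct io_status *io;              /* I/O status info (hypothetical type) */
    long              cpu_time_used;   /* Accounting information              */
    struct pcb       *next;            /* Linkage for scheduler queues        */
} pcb_t;
```

      Saving `registers` and `program_counter` on an interrupt, and restoring them later, is exactly what makes transparent resumption possible.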
    3.2 Process States
    1. A Two-State Process Model


      A Two-State Process Model transition diagram
    2. The Creation and Termination of Processes


      common events leading to the creation of a process
      common events leading to the termination of a process
    3. A Five-State Model


      five-state process model
    4. Suspended Processes


      with one suspend state
      with two suspend states
      reasons for process suspension
    3.3 Process Description
    1. Operating System Control Structures
      general structure of operating system control tables
      Memory tables are used to keep track of both main (real) and secondary (virtual) memory.
      I/O tables are used by the OS to manage the I/O devices and channels of the computer system.
      File tables provide information about the existence of files, their location on secondary memory, their current status, and other attributes.
      Process tables are used by the OS to manage processes.
    2. Process Control Structures
      Process Location
      typical elements of a process image
      Process Attributes
      typical elements of a process control block
    3.4 Process Control
    1. Modes of Execution
      The less-privileged mode is often referred to as the user mode because user programs typically would execute in this mode. The more-privileged mode is referred to as the system mode, control mode, or kernel mode.
    2. Process Creation
      1. Assign a unique process identifier to the new process.
      2. Allocate space for the process.
      3. Initialize the process control block.
      4. Set the appropriate linkages.
      5. Create or expand other data structures.
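      From the application side, these five steps hide behind a single system call; on a Unix-like OS the trigger is fork(). A minimal sketch (error handling abbreviated):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();   /* the kernel performs steps 1-5, then returns twice */
    if (pid == 0) {
        printf("child: pid=%d\n", (int)getpid());  /* runs with its own new PCB */
    } else if (pid > 0) {
        waitpid(pid, NULL, 0);                     /* parent waits for the child */
        printf("parent: child %d finished\n", (int)pid);
    } else {
        perror("fork");    /* creation failed, e.g., process table full */
    }
    return 0;
}
```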
    3. Process Switching
      • When To Switch Process
        mechanisms for interrupting the execution of a process
      • Mode Switching
        If an interrupt is pending, the processor does the following:
        1. It sets the program counter to the starting address of an interrupt handler program.
        2. It switches from user mode to kernel mode so that the interrupt processing code may include privileged instructions.
      • Change Of Process State
    3.5 Execution of the Operating System
    1. Nonprocess Kernel


      separate kernel
    2. Execution within User Processes


      OS functions execute within user processes
    3. Process-Based Operating System


      OS functions execute as separate processes
    3.6 Security Issues
    1. System Access Threats
    2. Countermeasures
    3.7 UNIX SVR4 Process Management

    Process States
    Process Description
    Process Control


    Chapter 4 Threads

    4.1 Process and Threads
    1. Multithreading
      Multithreading refers to the ability of an OS to support multiple, concurrent paths of execution within a single process.
      Single-Threaded and Multithreaded Process Models
      1. It takes far less time to create a new thread in an existing process than to create a brand-new process.
      2. It takes less time to terminate a thread than a process.
      3. It takes less time to switch between two threads within the same process than to switch between processes.
      4. Threads enhance efficiency in communication between different executing programs. Because threads within the same process share memory and files, they can communicate with each other without invoking the kernel.
    | Dimension | Multiple processes | Multithreading | Conclusion |
    | --- | --- | --- | --- |
    | Data sharing/sync | Difficult to share (requires IPC); easy to synchronize | Easy to share within the same process; difficult to synchronize | Depends |
    | Memory/CPU | More memory; lower CPU utilization | Less memory; higher CPU utilization | Multithreading better |
    | Creation/termination/switching | Context switches are costly; creation and termination take more time | Context switches are cheap; creation and termination take less time | Multithreading better |
    | Programming/debugging | Easy | Difficult | Multiple processes better |
    | Reliability | Processes are independent | A crash in one thread takes down the whole process | Multiple processes better |
    | Distribution | Adapted to multi-core and multi-computer | Adapted to multi-core only | Multiple processes better |
    2. Thread Functionality
      • Thread States: Running, Ready and Blocked.
      • Thread Synchronization: All of the threads of a process share the same address space and other resources, such as open files. Any alteration of a resource by one thread affects the environment of the other threads in the same process. It is, therefore, necessary to synchronize the activities of the various threads so that they do not interfere with each other or corrupt data structures.
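      A small POSIX threads sketch of these points: creating a thread is a single library call, the threads communicate through the shared counter without invoking the kernel, and a mutex provides the synchronization just described (compile with -pthread):

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);            /* synchronize access    */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);  /* far cheaper than fork() */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* 2000000, thanks to the lock */
    return 0;
}
```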
    4.2 Types of Threads
    1. User-Level and Kernel-Level Threads
      There are a number of advantages to the use of ULTs instead of KLTs, including the following:

      1. Thread switching does not require kernel mode privileges because all of the thread management data structures are within the user address space of a single process.
      2. Scheduling can be application specific.
      3. ULTs can run on any OS.

      There are two distinct disadvantages of ULTs compared to KLTs:

      • In a typical OS, many system calls are blocking.
      • In a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing. A kernel assigns one process to only one processor at a time.

      Two Solutions:

      • Change multiple threads to multiple processes.
      • Jacketing (converting a blocking system call into a nonblocking one), as sketched below
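      A rough sketch of a jacket routine (thread_yield is a hypothetical entry point into the user-level threads library's scheduler): the jacket probes the descriptor with a zero-timeout poll and yields instead of letting the whole process block:

```c
#include <poll.h>
#include <unistd.h>

extern void thread_yield(void);   /* hypothetical: run another ULT */

/* Jacket around the blocking read() system call. */
ssize_t jacketed_read(int fd, void *buf, size_t n) {
    struct pollfd p = { .fd = fd, .events = POLLIN };
    while (poll(&p, 1, 0) == 0)   /* timeout 0: nonblocking probe    */
        thread_yield();           /* not ready: let another ULT run  */
    return read(fd, buf, n);      /* data is ready; read won't stall */
}
```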


        Thread and Process Operation Latencies(μs)
    2. Other Arrangements


      Relationship between Threads and Processes
    4.3 Multicore and Multithreading
    1. Performance of Software on Multicore


      Amdahl's law: Speedup = 1 / [(1 - f) + f / N], where f is the fraction of the program that is parallelizable and N is the number of processors. For example, with f = 0.9 and N = 8, the speedup is 1 / (0.1 + 0.9/8) ≈ 4.7, far below 8.
    2. Application Example: Valve Game Software

    4.4 Windows 7 Thread and SMP Management
    1. Process and Thread Objects
    2. Multithreading
    3. Thread States
    4. Support for OS SubSystems
    5. Symmetric Multiprocessing Support
    4.5 Solaris Thread and SMP Management
    1. Multithreaded Architecture
    2. Motivation
    3. Process Structure
    4. Thread Execution
    5. Interrupts as Threads
    4.6 Linux Process and Thread Management
    1. Linux Tasks


      Linux Process/Thread Model
    2. Linux Threads


      Linux clone() flags
    4.7 Mac OS X Grand Central Dispatch

    Chapter 5 Synchronization

    5.1 Principles of Concurrency
    Some Key Terms Related to Concurrency
    1. A Simple Example
    2. Race Condition
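      For a concrete example, consider two threads incrementing a shared variable without synchronization; each `shared++` is a separate load, add, and store, so the sequences interleave and updates are lost (a C sketch):

```c
#include <pthread.h>
#include <stdio.h>

static int shared = 0;                 /* accessed by both threads */

static void *unsafe(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        shared++;                      /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, unsafe, NULL);
    pthread_create(&b, NULL, unsafe, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 200000, but some increments are lost; the final value
       depends on the relative timing of the threads (a race condition). */
    printf("shared = %d\n", shared);
    return 0;
}
```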
    3. Operating System Concerns
      1. The OS must be able to keep track of the various processes.
      2. The OS must allocate and deallocate various resources for each active process. These resources include
        • Processor time: This is the scheduling function, discussed in Part Four.
        • Memory: Most operating systems use a virtual memory scheme. The topic is addressed in Part Three.
        • Files: Discussed in Chapter 12
        • I/O devices: Discussed in Chapter 11
      3. The OS must protect the data and physical resources of each process against unintended interference by other processes.
      4. The functioning of a process and the output it produces must be independent of the speed at which its execution is carried out relative to the speed of other concurrent processes.
    4. Process Interaction
      • Processes unaware of each other
      • Processes indirectly aware of each other
      • Processes directly aware of each other
        Process Interaction
        Competition among processes for resources
        In the case of competing processes, three control problems must be faced: mutual exclusion, deadlock, and starvation. (Critical resource/critical section: in concurrent programming, concurrent accesses to a shared resource can lead to unexpected or erroneous behavior, so the parts of the program where the shared resource is accessed are protected. This protected section is the critical section or critical region; it cannot be executed by more than one process at a time.)
        Illustration of Mutual Exclusion
        Cooperation among processes by sharing
        Cooperation among processes by communication
    5. Requirements for Mutual Exclusion
      1. Mutual exclusion must be enforced: Only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object.
      2. A process that halts in its noncritical section must do so without interfering with other processes.
      3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation.
      4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.
      5. No assumptions are made about relative process speeds or the number of processors.
      6. A process remains inside its critical section for a finite time only.
    5.2 Mutual Exclusion: Hardware Support
    1. Interrupt Disabling
      In a uniprocessor system, concurrent processes cannot have overlapped execution; they can only be interleaved. Furthermore, a process will continue to run until it invokes an OS service or until it is interrupted. Therefore, to guarantee mutual exclusion, it is sufficient to prevent a process from being interrupted.


      Interrupt Disabling
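      The pattern from the lost figure, in C-style pseudocode (disable_interrupts and enable_interrupts stand in for privileged machine operations, not portable C):

```c
while (1) {
    disable_interrupts();   /* nothing can preempt us now                 */
    critical_section();     /* exclusive access guaranteed (uniprocessor) */
    enable_interrupts();    /* allow normal interleaving again            */
    remainder();            /* noncritical work                           */
}
```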

      Because the critical section cannot be interrupted, mutual exclusion is guaranteed. The price of this approach, however, is high. Another problem is that this approach will not work in a multiprocessor architecture.

    2. Special Machine Instructions
      Compare&Swap Instruction The compare&swap instruction, also called a compare and exchange instruction, can be defined as follows:
      compare&swap instruction
      Exchange Instruction The exchange instruction can be defined as follows:
      exchange instruction
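      The two lost definitions can be reconstructed roughly as follows (a sketch after Stallings; each routine is understood to execute atomically, as a single machine instruction):

```c
/* compare&swap: if *word equals testval, replace it with newval;
   in every case, return the old value of *word.                  */
int compare_and_swap(int *word, int testval, int newval) {
    int oldval = *word;
    if (oldval == testval)
        *word = newval;
    return oldval;
}

/* exchange: swap the contents of a register and a memory location. */
void exchange(int *reg, int *memory) {
    int temp = *memory;
    *memory = *reg;
    *reg = temp;
}
```

      Mutual exclusion then reduces to a busy-wait loop on a shared variable, say bolt, initialized to 0:

```c
while (compare_and_swap(&bolt, 0, 1) == 1)
    ;            /* busy waiting: someone else holds the lock */
/* critical section */
bolt = 0;        /* release */
```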
      Properties of the machine-instruction approach The use of a special machine instruction to enforce mutual exclusion has a number of advantages:
      • It is applicable to any number of processes on either a single processor or multiple processors sharing main memory.
      • It is simple and therefore easy to verify.
      • It can be used to support multiple critical sections; each critical section can be defined by its own variable.
        There are some serious disadvantages:
      • Busy waiting is employed: Thus, while a process is waiting for access to a critical section, it continues to consume processor time.
      • Starvation is possible: When a process leaves a critical section and more than one process is waiting, the selection of a waiting process is arbitrary. Thus, some process could indefinitely be denied access.
      • Deadlock is possible: Consider the following scenario on a single-processor system. Process P1 executes the special instruction(e.g., compare&swap, exchange) and enters its critical section. P1 is then interrupted to give the processor to P2, which has higher priority. If P2 now attempts to use the same resource as P1, it will be denied access because of the mutual exclusion mechanism. Thus, it will go into a busy waiting loop. However, P1 will never be dispatched because it is of lower priority than another ready process, P2.
    5.3 Semaphores
    Common Concurrency Mechanisms
    The fundamental principle is this: Two or more processes can cooperate by means of simple signals, such that a process can be forced to stop at a specified place until it has received a specific signal. To transmit a signal via semaphore s, a process executes the primitive semSignal(s). To receive a signal via semaphore s, a process executes the primitive semWait(s); if the corresponding signal has not yet been transmitted, the process is suspended until the transmission takes place.
    To achieve the desired effect, we can view the semaphore as a variable that has an integer value upon which only three operations are defined:
    1. A semaphore may be initialized to a nonnegative integer value.
    2. The semWait operation decrements the semaphore value. If the value becomes negative, then the process executing the semWait is blocked. Otherwise, the process continues execution.
    3. The semSignal operation increments the semaphore value. If the resulting value is less than or equal to zero, then a process blocked by a semWait operation, if any, is unblocked.
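    A sketch of these counting-semaphore primitives in C-style pseudocode (after the lost code figure; queueType is a hypothetical FIFO queue, and both operations are assumed to execute atomically):

```c
struct semaphore {
    int count;
    queueType queue;     /* processes blocked on this semaphore */
};

void semWait(struct semaphore *s) {
    s->count--;
    if (s->count < 0) {
        /* place the calling process in s->queue and block it */
    }
}

void semSignal(struct semaphore *s) {
    s->count++;
    if (s->count <= 0) {
        /* remove a process from s->queue and put it on the ready list */
    }
}
```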
    A definition of semaphore primitives
    A definition of binary semaphore primitives
    Example of Semaphore Mechanism
    Mutual Exclusion Using Semaphores
    1. Mutual Exclusion

      Processes Accessing Shared Data Protected by a Semaphore
      Process A executes semWait(lock); because the semaphore has a value of 1 at the time of the semWait operation, A can immediately enter its critical section, and the semaphore takes on the value 0. While A is in its critical section, both B and C perform a semWait operation and are blocked pending the availability of the semaphore. When A exits its critical section and performs semSignal(lock), B, which was the first process in the queue, can now enter its critical section. The program can equally well handle a requirement that more than one process be allowed in its critical section at a time: this requirement is met simply by initializing the semaphore to the specified value. Thus, at any time, the value of s.count can be interpreted as follows:
      • s.count ≥ 0: s.count is the number of processes that can execute semWait(s) without suspension (if no semSignal(s) is executed in the meantime). Such situations allow semaphores to support synchronization as well as mutual exclusion.
      • s.count < 0: The magnitude of s.count is the number of processes suspended in s.queue.
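      The program walked through above (lost with the original figure) can be sketched as follows, reusing the semWait/semSignal primitives from the previous sketch; the process bodies are schematic:

```c
semaphore lock = 1;           /* initialized to 1: one process at a time */

void process(int i) {         /* the body of A, B, and C alike */
    while (1) {
        semWait(&lock);       /* blocks while another process holds it     */
        /* critical section: access the shared data */
        semSignal(&lock);     /* wakes the longest-waiting process, if any */
        /* remainder */
    }
}
```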
    2. The Producer/Consumer Problem

      A Solution to the Infinite-Buffer Producer/Consumer Problem Using Semaphores
      The producer and consumer functions can be expressed as follows (variables in and out are initialized to 0, and n is the size of the buffer):
      Producer/Consumer
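      The lost Producer/Consumer figure can be reconstructed roughly as follows. Although the caption above says infinite buffer, the surrounding text (in, out, buffer size n) describes the bounded-buffer variant, so that is what this sketch shows; item, produce, and consume are placeholders:

```c
#define SIZE 10            /* the "n" above: capacity of the buffer   */
item b[SIZE];              /* shared circular buffer                  */
int in = 0, out = 0;       /* insert and remove positions             */
semaphore s = 1;           /* mutual exclusion on the buffer          */
semaphore n_items = 0;     /* number of items currently in the buffer */
semaphore e = SIZE;        /* number of empty slots                   */

void producer(void) {
    while (1) {
        item v = produce();
        semWait(&e);               /* wait for an empty slot  */
        semWait(&s);
        b[in] = v;
        in = (in + 1) % SIZE;
        semSignal(&s);
        semSignal(&n_items);       /* announce one more item  */
    }
}

void consumer(void) {
    while (1) {
        item w;
        semWait(&n_items);         /* wait for an item        */
        semWait(&s);
        w = b[out];
        out = (out + 1) % SIZE;
        semSignal(&s);
        semSignal(&e);             /* announce one more slot  */
        consume(w);
    }
}
```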
    3. Implementation of Semaphores
      As was mentioned earlier, it is imperative that the semWait and semSignal operations be implemented as atomic primitives. The essence of the problem is one of mutual exclusion: only one process at a time may manipulate a semaphore with either a semWait or semSignal operation.

    5.4 Monitors
    1. Monitor with Signal
      A monitor is a software module consisting of one or more procedures, an initialization sequence, and local data. The chief characteristics of a monitor are the following:
      1. The local data variables are accessible only by the monitor's procedures and not by any external procedure.
      2. A process enters the monitor by invoking one of its procedures.
      3. Only one process may be executing in the monitor at a time; any other processes that have invoked the monitor are blocked, waiting for the monitor to become available.
        A monitor supports synchronization by the use of condition variables that are contained within the monitor and accessible only within the monitor. Condition variables are a special data type in monitors, which are operated on by two functions:
      • cwait(c): Suspend execution of the calling process on condition c. The monitor is now available for use by another process.
      • csignal(c): Resume execution of some process blocked after a cwait on the same condition. If there are several such processes, choose one of them; if there is no such process, do nothing.
        Structure of a Monitor
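      C has no native monitors, but the pattern can be approximated with POSIX threads: a mutex plays the role of the monitor lock, and condition variables supply cwait/csignal. A hedged sketch of a one-slot buffer:

```c
#include <pthread.h>

static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER;  /* monitor lock */
static pthread_cond_t  notfull  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  notempty = PTHREAD_COND_INITIALIZER;
static int slot, occupied = 0;                            /* local data   */

void put(int v) {                     /* a monitor "procedure" */
    pthread_mutex_lock(&mon);         /* enter the monitor     */
    while (occupied)                  /* cwait(notfull)        */
        pthread_cond_wait(&notfull, &mon);
    slot = v;
    occupied = 1;
    pthread_cond_signal(&notempty);   /* csignal(notempty)     */
    pthread_mutex_unlock(&mon);       /* leave the monitor     */
}

int get(void) {
    pthread_mutex_lock(&mon);
    while (!occupied)                 /* cwait(notempty)       */
        pthread_cond_wait(&notempty, &mon);
    occupied = 0;
    pthread_cond_signal(&notfull);    /* csignal(notfull)      */
    pthread_mutex_unlock(&mon);
    return slot;
}
```

      Note the while loops around pthread_cond_wait: POSIX signaling follows the notify discipline described in the next subsection, so a woken thread must re-check its condition rather than assume it still holds.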
    2. Alternate Model of Monitor with Notify and Broadcast
      There are two drawbacks to this approach:
      1. If the process issuing the csignal has not finished with the monitor, then two additional process switches are required: one to block this process and another to resume it when the monitor becomes available.
      2. Process scheduling associated with a signal must be perfectly reliable. When a csignal is issued, a process from the corresponding condition queue must be activated immediately and the scheduler must ensure that no other process enters the monitor before activation. Otherwise, the condition under which the process was activated could change.
    5.5 Message Passing

    When processes interact with one another, two fundamental requirements must be satisfied: synchronization and communication. Processes need to be synchronized to enforce mutual exclusion; cooperating processes may need to exchange information. One approach to providing both of these functions is message passing. Message passing has the further advantage that it lends itself to implementation in distributed systems as well as in shared-memory multiprocessor and uniprocessor systems.
    The actual function of message passing is normally provided in the form of a pair of primitives:

    • send(destination, message)
    • receive(source, message)
      Design Characteristics of Message Systems for Interprocess Communication and Synchronization
    1. Synchronization
      Both the sender and receiver can be blocking or nonblocking. Three combinations are common, although any particular system will usually have only one or two combinations implemented:
    • Blocking send, blocking receive
    • Nonblocking send, blocking receive
    • Nonblocking send, nonblocking receive
      The nonblocking send is more natural for many concurrent programming tasks, while for the receive primitive the blocking version appears more natural.
    2. Addressing
      Clearly, it is necessary to have a way of specifying in the send primitive which process is to receive the message. Similarly, most implementations allow a receiving process to indicate the source of a message to be received. The various schemes for specifying processes to send and receive primitives fall into two categories: direct addressing and indirect addressing.
      A strength of the use of indirect addressing is that, by decoupling the sender and receiver, it allows for greater flexibility in the use of messages. The relationship between senders and receivers can be one to one, many to one, one to many, or many to many.


      Indirect Process Communication
    3. Message Format


      General Message Format
    4. Queueing Discipline
      The simplest queueing discipline is first-in-first-out, but this may not be sufficient if some messages are more urgent than others. An alternative is to allow the specifying of message priority, on the basis of the message type or by designation by the sender. Another alternative is to allow the receiver to inspect the message queue and select which message to receive next.

    5. Mutual Exclusion


      Mutual Exclusion Using Messages
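      The lost figure can be reconstructed roughly as follows (C-syntax pseudocode; send, receive, and the shared mailbox box are the primitives described above, with the mailbox initialized to hold a single token message):

```c
/* One token message circulates; only the process currently
   holding it may enter its critical section.               */
void P(int i) {
    message msg;
    while (1) {
        receive(box, &msg);    /* blocks until the token arrives */
        /* critical section */
        send(box, &msg);       /* pass the token back            */
        /* remainder */
    }
}

/* initialization (conceptual): create mailbox box, then send one
   null token message to it before starting the processes.        */
```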
    5.6 Readers/Writers Problem

    The readers/writers problem is defined as follows: There is a data area shared among a number of processes. The data area could be a file, a block of main memory, or even a bank of processor registers. There are a number of processes that only read the data area (readers) and a number that only write to the data area (writers). The conditions that must be satisfied are as follows:
    1. Any number of readers may simultaneously read the file.
    2. Only one writer at a time may write to the file.
    3. If a writer is writing to the file, no reader may read it.

    1. Readers Have Priority
    2. Writers Have Priority
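    A sketch of the readers-have-priority solution (case 1) using the semWait/semSignal primitives from Section 5.3; READUNIT and WRITEUNIT are placeholders for the actual data access:

```c
int readcount = 0;
semaphore x = 1;        /* protects readcount                */
semaphore wsem = 1;     /* exclusive access to the data area */

void reader(void) {
    while (1) {
        semWait(&x);
        if (++readcount == 1)    /* first reader locks writers out */
            semWait(&wsem);
        semSignal(&x);
        READUNIT();
        semWait(&x);
        if (--readcount == 0)    /* last reader lets writers in    */
            semSignal(&wsem);
        semSignal(&x);
    }
}

void writer(void) {
    while (1) {
        semWait(&wsem);
        WRITEUNIT();
        semSignal(&wsem);
    }
}
```

    Because readers contend for wsem only when readcount passes from 0 to 1, a steady stream of readers can starve writers; that is what motivates the writers-have-priority variant.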

    Chapter 6 Concurrency: Deadlock and Starvation

    6.1 Principles of Deadlock

    Deadlock can be defined as the permanent blocking of a set of processes that either compete for system resources or communicate with each other.


    Illustration of Deadlock
    1. Reusable Resources
      Two general categories of resources can be distinguished: reusable and consumable. A reusable resource is one that can be safely used by only one process at a time and is not depleted by that use. Examples of reusable resources include processors; I/O channels; main and secondary memory; devices; and data structures such as files, databases, and semaphores. An example of deadlock with a reusable resource arises in a multiprogramming system when two processes interleave execution and requests for main memory, each blocking while holding part of the memory the other needs.

    2. Consumable Resources
      A consumable resource is one that can be created (produced) and destroyed (consumed). Examples include interrupts, signals, messages, and information in I/O buffers.


      Summary of Deadlock Detection, Prevention, and Avoidance Approaches for Operating Systems
    3. Resource Allocation Graphs
      A useful tool in characterizing the allocation of resources to processes is the resource allocation graph.

      Examples of Resource Allocation Graphs
    4. The Conditions for Deadlock
      Three conditions of policy must be present for a deadlock to be possible:
      1. Mutual exclusion.
      2. Hold and wait.
      3. No preemption.
        The first three conditions are necessary but not sufficient for a deadlock to exist. For a deadlock to actually take place, a fourth condition is required:
      4. Circular wait.
        Deadlock
    6.2 Deadlock Prevention

    The strategy of deadlock prevention is, simply put, to design a system in such a way that the possibility of deadlock is excluded. An indirect method of deadlock prevention is to prevent the occurrence of one of the three necessary conditions listed previously. A direct method of deadlock prevention is to prevent the occurrence of a circular wait.

    1. Mutual Exclusion
      In general, the first of the four listed conditions cannot be disallowed.
    2. Hold and Wait
      The hold-and-wait condition can be prevented by requiring that a process request all of its required resources at one time and blocking the process until all requests can be granted simultaneously.
    3. No Preemption
      This condition can be prevented in several ways. First, if a process holding certain resources is denied a further request, that process must release its original resources and, if necessary, request them again together with the additional resource. Alternatively, if a process requests a resource that is currently held by another process, the OS may preempt the second process and require it to release its resources.
    4. Circular Wait
      The circular wait condition can be prevented by defining a linear ordering of resource types.
    6.3 Deadlock Avoidance

    In deadlock prevention, we constrain resource requests to prevent at least one of the four conditions of deadlock. This is either done indirectly, by preventing one of the three necessary policy conditions(mutual exclusion, hold and wait, no preemption), or directly, by preventing circular wait. This leads to inefficient use of resources and inefficient execution of processes. Deadlock avoidance, on the other hand, allows the three necessary conditions but makes judicious choices to assure that the deadlock point is never reached. As such, avoidance allows more concurrency than prevention. With deadlock avoidance, a decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to a deadlock. Two approaches to deadlock avoidance:

    • Do not start a process if its demands might lead to deadlock.
    • Do not grant an incremental resource request to a process if this allocation might lead to deadlock.
    1. Process Initiation Denial


      Consider a system of n processes and m different types of resources. Define the resource vector R (the total amount of each resource in the system), the available vector V (the amount of each resource currently unallocated), the claim matrix C (Cij is the maximum requirement of process i for resource j), and the allocation matrix A (Aij is the amount of resource j currently allocated to process i). The following relationships hold:

      1. Rj = Vj + Σ(i=1..n) Aij: all resources are either available or allocated.
      2. Cij ≤ Rj, for all i, j: no process can claim more than the total amount of a resource.
      3. Aij ≤ Cij, for all i, j: no process is allocated more than it originally claimed.

      With these quantities defined, we can define a deadlock avoidance policy that refuses to start a new process if its resource requirements might lead to deadlock. Start a new process P(n+1) only if

      Rj ≥ C(n+1)j + Σ(i=1..n) Cij, for all j

      That is, a process is started only if the maximum claim of all current processes plus that of the new process can be met.
    2. Resource Allocation Denial
      The strategy of resource allocation denial, referred to as the banker's algorithm, considers a system with a fixed number of processes and a fixed number of resources. At any time a process may have zero or more resources allocated to it. The state of the system reflects the current allocation of resources to processes; it consists of the two vectors, Resource and Available, and the two matrices, Claim and Allocation, defined earlier. A safe state is one in which there is at least one sequence of resource allocations to processes that does not result in a deadlock.
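      The core of the banker's algorithm is the safe-state test: provisionally apply a request, then check whether some completion order still exists. A minimal C sketch under the definitions above (N processes, M resource types; need = Claim - Allocation):

```c
#include <stdbool.h>

#define N 4    /* processes      */
#define M 3    /* resource types */

/* claim[i][j]: maximum demand; alloc[i][j]: currently held;
   avail[j]: currently unallocated units of resource j.      */
bool is_safe(int claim[N][M], int alloc[N][M], int avail[M]) {
    int  work[M];
    bool done[N] = { false };
    for (int j = 0; j < M; j++)
        work[j] = avail[j];

    bool progressed = true;
    while (progressed) {
        progressed = false;
        for (int i = 0; i < N; i++) {
            if (done[i]) continue;
            bool can_finish = true;
            for (int j = 0; j < M; j++)
                if (claim[i][j] - alloc[i][j] > work[j]) {
                    can_finish = false;     /* unmet need for resource j */
                    break;
                }
            if (can_finish) {               /* assume i runs to completion */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j]; /* ...and releases everything  */
                done[i] = progressed = true;
            }
        }
    }
    for (int i = 0; i < N; i++)
        if (!done[i]) return false;         /* someone can never finish */
    return true;
}
```

      A request is granted only if applying it to Allocation and Available leaves is_safe() true; otherwise the requesting process is suspended until it is safe to proceed.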
      Deadlock avoidance has the advantage that it is not necessary to preempt and rollback processes, as in deadlock detection, and is less restrictive than deadlock prevention. However, it does have a number of restrictions on its use:
      • The maximum resource requirement for each process must be stated in advance.
      • The processes under consideration must be independent; that is, the order in which they execute must be unconstrained by any synchronization requirements.
      • There must be a fixed number of resources to allocate.
      • No process may exit while holding resources.
    6.4 Deadlock Detection

    Deadlock prevention strategies are very conservative; they solve the problem of deadlock by limiting access to resources and by imposing restrictions on processes. At the opposite extreme, deadlock detection strategies do not limit resource access or restrict process actions.

    1. Deadlock Detection Algorithm
      A deadlock exists if and only if there are unmarked processes at the end of the algorithm. Each unmarked process is deadlocked.
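      The detection algorithm itself can be sketched as follows (after Stallings: Allocation and Available as before, plus a Request matrix of outstanding requests; a process is marked once the algorithm finds it could run to completion):

```c
#include <stdbool.h>

#define N 4    /* processes      */
#define M 3    /* resource types */

/* Returns true if the state contains a deadlock. */
bool deadlocked(int alloc[N][M], int request[N][M], int avail[M]) {
    int  work[M];
    bool marked[N];
    for (int j = 0; j < M; j++)
        work[j] = avail[j];
    for (int i = 0; i < N; i++) {          /* mark processes holding nothing */
        marked[i] = true;
        for (int j = 0; j < M; j++)
            if (alloc[i][j] != 0) { marked[i] = false; break; }
    }
    bool progressed = true;
    while (progressed) {
        progressed = false;
        for (int i = 0; i < N; i++) {
            if (marked[i]) continue;
            bool satisfiable = true;       /* request[i] <= work ?         */
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { satisfiable = false; break; }
            if (satisfiable) {             /* i could acquire, run, finish */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j];
                marked[i] = progressed = true;
            }
        }
    }
    for (int i = 0; i < N; i++)
        if (!marked[i]) return true;       /* unmarked process: deadlocked */
    return false;
}
```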
    2. Recovery
      Once deadlock has been detected, some strategy is needed for recovery. The following are possible approaches, listed in order of increasing sophistication:
      1. Abort all deadlocked processes. This is, believe it or not, one of the most common, if not the most common, solution adopted in operating systems.
      2. Back up each deadlocked process to some previously defined checkpoint, and restart all processes. This requires that rollback and restart mechanisms be built into the system. The risk in this approach is that the original deadlock may recur. However, the indeterminacy of concurrent processing may ensure that this does not happen.
      3. Successively abort deadlocked processes until deadlock no longer exists. The order in which processes are selected for abortion should be on the basis of some criterion of minimum cost. After each abortion, the detection algorithm must be re-invoked to see whether deadlock still exists.
      4. Successively preempt resources until deadlock no longer exists. As in (3), a cost-based selection should be used, and reinvocation of the detection algorithm is required after each preemption. A process that has a resource preempted from it must be rolled back to a point prior to its acquisition of that resource.
    6.5 An Integrated Deadlock Strategy

    There are strengths and weaknesses to all of the strategies for dealing with deadlock. Rather than attempting to design an OS facility that employs only one of those strategies, it might be more efficient to use different strategies in different situations.

    • Group resources into a number of different resource classes.
    • Use the linear ordering strategy defined previously for the prevention of a circular wait to prevent deadlocks between resource classes.
    • Within a resource class, use the algorithm that is most appropriate for that class.

    As an example of this technique, consider the following classes of resources:

    • Swappable space: Blocks of memory on secondary storage for use in swapping processes
    • Process resources: Assignable devices, such as tape drives, and files
    • Main memory: Assignable to processes in pages or segments
    • Internal resources: Such as I/O channels
    6.6 Dining Philosophers Problem
    Dining Arrangement for Philosophers
    1. Solution Using Semaphores
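    A sketch of the semaphore solution, after Stallings: one binary semaphore per fork, plus a room semaphore that admits at most four philosophers to the table, which breaks the circular wait:

```c
semaphore fork_sem[5] = {1, 1, 1, 1, 1};   /* one semaphore per fork   */
semaphore room = 4;                        /* at most 4 seated at once */

void philosopher(int i) {
    while (1) {
        think();
        semWait(&room);                    /* take a seat        */
        semWait(&fork_sem[i]);             /* pick up left fork  */
        semWait(&fork_sem[(i + 1) % 5]);   /* pick up right fork */
        eat();
        semSignal(&fork_sem[(i + 1) % 5]);
        semSignal(&fork_sem[i]);
        semSignal(&room);                  /* leave the table    */
    }
}
```

    With all five seated, each could hold one fork and wait forever for the other; capping the room at four guarantees at least one philosopher can obtain both forks.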
    2. Solution Using a Monitor
    6.7 UNIX Concurrency Mechanisms
    6.8 Linux Kernel Concurrency Mechanisms
    6.9 Solaris Thread Synchronization Primitives
    6.10 Windows 7 Concurrency Mechanisms
