Midterm - Quiz 4

By Carlo Baja
Questions: 15



Questions and Answers
  • 1. 

    Whenever the CPU is idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the ________

    Correct Answer
    SHORT-TERM SCHEDULER
    Explanation
    The short-term scheduler (also called the CPU scheduler) selects one of the processes in the ready queue to be executed whenever the CPU becomes idle. It determines which process runs next and allocates the CPU to it; because it runs frequently, it must make this decision quickly to keep the CPU well utilized.


  • 2. 

    The ________ is the module that gives control of the CPU to the process selected by the short-term scheduler.

    Correct Answer
    DISPATCHER
    Explanation
    The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. It performs the context switch and transfers control from the operating system to the selected process, so that the process is loaded onto the CPU and begins (or resumes) execution; a short sketch of this hand-off follows below.

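    To make questions 1 and 2 concrete, here is a minimal Python sketch of the hand-off; the names (ready_queue, short_term_scheduler, dispatcher) and the three process names are invented for illustration and are not part of the quiz.

        from collections import deque

        # Hypothetical ready queue; real PCBs are reduced to plain names here.
        ready_queue = deque(["P1", "P2", "P3"])

        def short_term_scheduler(queue):
            # Select the next process to run (simple FCFS order in this sketch).
            return queue.popleft() if queue else None

        def dispatcher(process):
            # Give the CPU to the selected process (the real context switch is elided).
            print(f"dispatching {process} to the CPU")

        while ready_queue:
            dispatcher(short_term_scheduler(ready_queue))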

  • 3. 

    If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called ________

    Correct Answer
    THROUGHPUT
    Explanation
    Throughput measures the amount of work done as the number of processes completed per unit of time. If the CPU is busy executing processes, work is being done, and counting how many processes finish per time unit gives the throughput.

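    As a worked example with invented numbers: if 10 processes complete in 5 seconds, the throughput is 10 / 5 = 2 processes per second; for long batch jobs it might instead be quoted as a few processes per hour.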

  • 4. 

    When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue. Which scheduling algorithm does the above describe? ________

    Correct Answer
    FIRST-COME, FIRST-SERVED (FCFS)
    Explanation
    The question describes the first-come, first-served (FCFS) scheduling algorithm. In FCFS, processes are executed in the order in which they arrive in the ready queue: when a process enters the ready queue, its process control block (PCB) is linked onto the tail of the queue, and when the CPU becomes free it is allocated to the process at the head of the queue, which is then removed. This follows the principle of "first come, first served" (see the sketch below).

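    A minimal, non-preemptive Python sketch of FCFS; the process names and burst times are made up for illustration.

        from collections import deque

        # Hypothetical (name, burst_ms) pairs, queued in arrival order.
        ready_queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])

        clock = 0
        while ready_queue:
            name, burst = ready_queue.popleft()   # take the process at the head
            print(f"{name} starts at t={clock} ms and runs for {burst} ms")
            clock += burst                        # non-preemptive: run to completion
        print(f"all processes finished at t={clock} ms")

    With these invented burst times the waiting times are 0, 24 and 27 ms, an average of 17 ms, which also hints at the convoy effect FCFS suffers from when a long process arrives first.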

  • 5. 

    Processor affinity states that a processor has affinity for the process which it is currently executing.

    • A.

      True

    • B.

      False

    Correct Answer
    B. False
    Explanation
    Processor affinity is usually stated the other way around: it is the process that has an affinity for the processor on which it is currently running, mainly because that processor's cache is already populated with the process's data and migrating would force the cache to be invalidated and repopulated. Operating systems may enforce this as soft or hard affinity, but in either case the affinity belongs to the process, not the processor, so the statement as written is false.


  • 6. 

    A major problem with priority scheduling algorithms is indefinite blocking. A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely. This problem is also called ________

    Correct Answer
    STARVATION
    Explanation
    The major problem with priority scheduling algorithms is indefinite blocking, where a process that is ready to run but waiting for the CPU can be considered blocked. This can result in low-priority processes waiting indefinitely, which is known as starvation. In other words, starvation occurs when a process is unable to proceed or complete its execution due to being constantly overshadowed by higher-priority processes.

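    A toy Python sketch of how starvation can arise under strict priority scheduling; the priorities, process names, and the stream of arrivals are all invented for illustration.

        import heapq

        # Lower number = higher priority, as in many textbooks.
        ready = [(5, "P_low")]                # (priority, name)
        heapq.heapify(ready)

        for t in range(10):
            heapq.heappush(ready, (1, f"P_high_{t}"))   # a high-priority job keeps arriving
            prio, name = heapq.heappop(ready)           # the scheduler always picks priority 1
            print(f"t={t}: running {name}")

        print("still waiting:", ready)        # P_low has never been scheduled

    The standard textbook remedy is aging: gradually raising the priority of processes that have been waiting for a long time.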

  • 7. 

    The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. A small unit of time, called a time quantum or ________ , is defined.

    Correct Answer
    TIME SLICE
    Explanation
    The round-robin (RR) scheduling algorithm is designed for time-sharing systems, where multiple processes are executed in a cyclical manner. Each process is given a small unit of time called a time quantum or time slice. This time slice determines how long a process can run before it is preempted and another process is given a chance to execute. The use of time slices allows for fair allocation of CPU time among processes and prevents any single process from monopolizing the CPU for too long.

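    A minimal Python sketch of round-robin with a hypothetical 4 ms quantum; the process names and remaining burst times are invented for illustration.

        from collections import deque

        QUANTUM = 4  # illustrative time slice, in ms

        # Hypothetical (name, remaining_burst_ms) pairs.
        ready = deque([("P1", 10), ("P2", 5), ("P3", 8)])

        clock = 0
        while ready:
            name, remaining = ready.popleft()
            run = min(QUANTUM, remaining)        # run for at most one quantum
            clock += run
            if remaining - run > 0:
                ready.append((name, remaining - run))   # preempted: rejoin at the tail
                print(f"t={clock}: {name} preempted with {remaining - run} ms left")
            else:
                print(f"t={clock}: {name} finished")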

  • 8. 

    A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm.

    • A.

      True

    • B.

      False

    Correct Answer
    A. True
    Explanation
    The statement is true because a multilevel queue scheduling algorithm indeed partitions the ready queue into separate queues. Each process is permanently assigned to one queue based on some property such as memory size, process priority, or process type. Each queue also has its own scheduling algorithm to determine the order in which processes are executed.

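    A minimal Python sketch of the idea, with two hypothetical queues and fixed priority between them; in a fuller implementation the foreground queue might run round-robin and the background queue FCFS.

        from collections import deque

        # Two illustrative queues; each process is permanently assigned to one.
        foreground = deque(["interactive_1", "interactive_2"])
        background = deque(["batch_1", "batch_2"])

        def pick_next():
            # Fixed priority between the queues: foreground always drains first.
            if foreground:
                return "foreground", foreground.popleft()
            if background:
                return "background", background.popleft()
            return None, None

        while foreground or background:
            queue_name, proc = pick_next()
            print(f"running {proc} from the {queue_name} queue")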

  • 9. 

    The multilevel feedback queue scheduling algorithm allows a process to move between queues. The idea is to separate processes according to the characteristics of their ________.

    Correct Answer
    CPU BURSTS
    Explanation
    The multilevel feedback queue scheduling algorithm allows a process to move between queues based on the characteristics of their CPU bursts. This means that processes with similar CPU burst characteristics are grouped together in the same queue. By doing this, the algorithm aims to improve the overall efficiency of the scheduling process by ensuring that processes with similar behavior are treated in a similar manner.

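    A rough Python sketch of the feedback idea with two queues; the quanta, process names, and burst lengths are invented. A process that exhausts its whole quantum (a long CPU burst) is demoted to the lower-priority queue, while one that finishes early stays put.

        from collections import deque

        QUANTA = [4, 8]          # illustrative quanta for queue 0 and queue 1
        queues = [deque([("interactive", 2), ("cpu_bound", 20)]), deque()]

        while any(queues):
            level = 0 if queues[0] else 1
            name, remaining = queues[level].popleft()
            used = min(QUANTA[level], remaining)
            remaining -= used
            if remaining == 0:
                print(f"{name} finished in queue {level}")
            elif level == 0 and used == QUANTA[level]:
                queues[1].append((name, remaining))    # long CPU burst: demote
                print(f"{name} demoted to queue 1 with {remaining} ms left")
            else:
                queues[level].append((name, remaining))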

  • 10. 

    On operating systems that support threads, it is ________ threads, not processes, that are being scheduled by the operating system.

    Correct Answer
    KERNEL LEVEL
    Explanation
    On operating systems that support them, it is kernel-level threads, not processes, that the operating system actually schedules. User-level threads are managed by a thread library, and the kernel is unaware of them; to run on a CPU, a user-level thread must ultimately be mapped to an associated kernel-level thread. Scheduling kernel-level threads lets the kernel allocate CPU time directly to the units of execution, which improves multitasking and responsiveness.


  • 11. 

    Processor affinity takes several forms. When an operating system has a policy of attempting to keep a process running on the same processor, but not guaranteeing that it will do so, we have a situation known as ________. Here, the operating system will attempt to keep a process on a single processor, but it is possible for a process to migrate between processors.

    Correct Answer
    SOFT AFFINITY
    Explanation
    Soft affinity refers to the operating system's policy of attempting to keep a process running on the same processor, but without guaranteeing that it will do so. In this case, the operating system will make an effort to keep a process on a single processor, but it is still possible for the process to migrate between processors. This can be advantageous in certain scenarios where a process may benefit from running on a specific processor, but does not necessarily require strict affinity to that processor.

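    For contrast, hard affinity (where a process can require that it not migrate at all) is exposed to user programs on Linux. A small, Linux-only Python example; pinning to CPU 0 is an arbitrary choice for illustration.

        import os

        # Linux-only: inspect and restrict the set of CPUs this process may run on.
        print("currently allowed CPUs:", os.sched_getaffinity(0))

        # Hard affinity: pin the calling process (pid 0 means "self") to CPU 0 only.
        os.sched_setaffinity(0, {0})
        print("after pinning:", os.sched_getaffinity(0))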

  • 12. 

    One approach to load balancing is ________ where a specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving processes from overloaded to idle or less-busy processors.

    Correct Answer
    PUSH MIGRATION
    Explanation
    In push migration, a specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by pushing processes from overloaded processors to idle or less-busy ones. The complementary approach, pull migration, has an idle processor pull a waiting task from a busy processor; the sketch below illustrates the push side.

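    A toy Python sketch of push-migration balancing passes; the per-CPU run queues, task names, and the imbalance threshold are all invented for illustration.

        # Per-CPU run queues, represented as lists of task names.
        run_queues = {0: ["t1", "t2", "t3", "t4", "t5"], 1: ["t6"], 2: []}

        def push_balance(queues):
            # One pass: move a task from the busiest CPU to the least busy one.
            busiest = max(queues, key=lambda cpu: len(queues[cpu]))
            idlest = min(queues, key=lambda cpu: len(queues[cpu]))
            if len(queues[busiest]) - len(queues[idlest]) > 1:
                task = queues[busiest].pop()
                queues[idlest].append(task)
                print(f"pushed {task} from CPU {busiest} to CPU {idlest}")

        for _ in range(3):      # a real kernel would run this periodically
            push_balance(run_queues)
        print(run_queues)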

  • 13. 

    Multicore processors may complicate scheduling issues. Let's consider how this can happen. Researchers have discovered that when a processor accesses memory, it spends a significant amount of time waiting for the data to become available. This situation may occur for various reasons, such as a cache miss (accessing data that are not in cache memory). This situation is known as ________.

    Correct Answer
    MEMORY STALL
    Explanation
    Multicore processors may complicate scheduling issues because when a processor accesses memory, it often has to wait for the data to become available. This waiting time can be significant and is known as a memory stall. This can happen due to reasons like cache misses, where the data being accessed is not present in the cache memory. In such cases, the processor has to wait for the data to be fetched from the main memory, causing a delay in the execution of instructions.


  • 14. 

    ________ refers to the period of time from the arrival of an interrupt at the CPU to the start of the routine that services the interrupt. When an interrupt occurs, the operating system must first complete the instruction it is executing and determine the type of interrupt that occurred. It must then save the state of the current process before servicing the interrupt using the specific interrupt service routine (ISR).

    Correct Answer
    INTERRUPT LATENCY
    Explanation
    Interrupt latency refers to the period of time from the arrival of an interrupt at the CPU to the start of the routine that services the interrupt. When an interrupt occurs, the operating system needs to finish executing the current instruction and identify the type of interrupt. It then saves the state of the current process before handling the interrupt using the specific interrupt service routine (ISR). This latency is the time it takes for the CPU to respond to an interrupt and start processing it.


  • 15. 

    ________ schedulers operate by allocating t shares among all applications. An application can receive n shares of time, thus ensuring that the application will have n/t of the total processor time.

    Correct Answer
    PROPORTIONAL SHARE
    Explanation
    Proportional share schedulers allocate a certain number of shares among all applications. Each application is then assigned a specific number of shares, denoted by "n", which ensures that it will receive a fraction of the total processor time equal to n/t. This allows for a fair distribution of resources among applications based on their assigned shares.

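    A worked example with invented numbers: if T = 100 shares are available in total and an application holds N = 30 of them, a proportional share scheduler guarantees it N/T = 30/100 = 30% of the total processor time.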
