Instructions: Attempt all questions. Do not use any kind of helping material (search engines, books, etc.)
Questions and Answers
1.
Round robin scheduling is essentially the preemptive version of
A.
FIFO
B.
Shortest job first
C.
Shortest remaining
D.
Longest time first
Correct Answer
A. FIFO
Explanation Round robin scheduling is a preemptive scheduling algorithm where each process is assigned a fixed time slot, called a time quantum, in which it can execute. Once a process's time quantum expires, it is preempted and the next process in the queue is given a chance to execute. This scheduling algorithm follows the First-In-First-Out (FIFO) principle, meaning that the processes are executed in the order they arrived in the ready queue. Therefore, the correct answer is FIFO.
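The relationship to FIFO can be seen in a short sketch (the process names and burst times below are invented for illustration): processes are served in arrival order, but a process that exhausts its quantum rejoins the tail of the same FIFO queue.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round robin for processes that all arrive at time 0.

    bursts: dict mapping process name -> CPU burst; insertion order
    is the FIFO arrival order. Returns the completion order.
    """
    queue = deque(bursts.items())    # ready queue in arrival (FIFO) order
    order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:     # finishes within its time slice
            order.append(name)
        else:                        # preempted: rejoin the tail of the queue
            queue.append((name, remaining - quantum))
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
```

Note that as the quantum grows large, no process is ever preempted and the schedule degenerates into plain FIFO.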
2.
Let S and Q be two semaphores initialized to 1. Processes P0 and P1 execute the following sequences: P0: wait(S); wait(Q); ---; signal(S); signal(Q); and P1: wait(Q); wait(S); ---; signal(Q); signal(S); respectively. The above situation depicts a
A.
Semaphore
B.
Deadlock
C.
Signal
D.
Interrupt
Correct Answer
B. Deadlock
Explanation The given scenario depicts a deadlock. A deadlock occurs when two or more processes are unable to proceed because each is waiting for the other to release a resource. In this case, P0 is waiting for semaphore Q to be released before proceeding, while P1 is waiting for semaphore S to be released. Since both processes are waiting indefinitely for a resource that will never be released, a deadlock occurs.
3.
The state of a process after it encounters an I/O instruction is
A.
Ready
B.
Blocked/Waiting
C.
Idle
D.
Running
Correct Answer
B. Blocked/Waiting
Explanation After a process encounters an I/O instruction, it enters a state of being blocked or waiting. This means that the process is unable to proceed further until the input/output operation is completed. During this time, the process is temporarily suspended and cannot execute any other instructions. It remains in this state until the I/O operation is finished, at which point it may transition back to the ready or running state, depending on the scheduling algorithm being used.
4.
PCB stands for
A.
Program Control Block
B.
Process Control Block
C.
Process Communication Block
D.
None of the options
Correct Answer
B. Process Control Block
Explanation The correct answer is "Process Control Block." PCB stands for Process Control Block, which is a data structure used by operating systems to store information about a process. It contains important details such as the process ID, program counter, register values, and memory allocation. The PCB allows the operating system to manage and control processes effectively by keeping track of their status and resources.
5.
The Banker's algorithm is used
A.
To prevent deadlock in operating systems
B.
To detect deadlock in operating systems
C.
To rectify a deadlocked state
D.
None of the options
Correct Answer
A. To prevent deadlock in operating systems
Explanation The Banker's algorithm is a resource allocation and deadlock avoidance algorithm used in operating systems. It is designed to prevent deadlock by determining if a requested resource allocation will result in a safe state or lead to deadlock. The algorithm works by simulating the allocation of resources to processes and checking if a safe sequence can be found. If a safe sequence is not possible, the request is denied to prevent deadlock. Therefore, the correct answer is "to prevent deadlock in operating systems."
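The safety check at the heart of the algorithm can be sketched in Python (the allocation and maximum matrices below are a classic textbook instance, not data from this quiz):

```python
def is_safe(available, allocation, maximum):
    """Banker's algorithm safety check: return a safe sequence or None."""
    n = len(allocation)
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]  # i releases all
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None        # no process can run to completion: unsafe state
    return sequence

# Five processes, three resource types (illustrative numbers):
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], allocation, maximum))   # a safe sequence exists
```

A resource request is granted only if the state that would result still passes this check; otherwise the requesting process is made to wait, which is how deadlock is avoided.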
6.
___________ is a high-level abstraction over a semaphore.
A.
Shared memory
B.
Message passing
C.
Monitor
D.
Mutual exclusion
Correct Answer
C. Monitor
Explanation A monitor is a high-level abstraction over Semaphore. It provides a way to synchronize the execution of threads by allowing only one thread to access a shared resource at a time. Monitors use condition variables to control the execution of threads, ensuring that they wait for a certain condition to be true before proceeding. This helps in achieving mutual exclusion and avoiding race conditions in concurrent programs. Monitors are commonly used in programming languages like Java and Python to implement thread-safe operations and ensure thread synchronization.
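As an illustration, Python's threading.Condition bundles a lock with a condition variable, which is enough to sketch a minimal monitor. The BoundedCounter class, its limit, and the timing below are all invented for this example:

```python
import threading
import time

class BoundedCounter:
    """A minimal monitor: the lock serializes entry to the methods; the
    condition variable lets a thread wait inside until its condition holds."""
    def __init__(self, limit):
        self._cond = threading.Condition()   # lock + condition variable in one
        self._value, self._limit = 0, limit

    def increment(self):
        with self._cond:                     # at most one thread inside the monitor
            while self._value >= self._limit:
                self._cond.wait()            # sleep until a decrement signals
            self._value += 1

    def decrement(self):
        with self._cond:
            self._value -= 1
            self._cond.notify()              # wake one waiting incrementer

c = BoundedCounter(limit=1)
t = threading.Thread(target=lambda: (c.increment(), c.increment()))
t.start()
time.sleep(0.2)   # crude way to let the second increment block on the condition
c.decrement()     # value drops to 0 and the blocked increment is woken
t.join()
print(c._value)   # ends at 1
```

The `while` (rather than `if`) around `wait()` is the standard monitor idiom: the condition is re-checked after every wakeup.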
7.
Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This is called a
A.
Process Blocking
B.
Time Sharing
C.
Context Switch
D.
Multiprogramming
Correct Answer
C. Context Switch
Explanation When the CPU switches from executing one process to another, it needs to save the state of the current process and load the state of the new process. This is known as a context switch. During a context switch, the CPU saves the current process's register values, program counter, and other necessary information, and then loads the saved state of the new process. This allows the CPU to seamlessly switch between multiple processes and give the illusion of multitasking to the user.
8.
A 32-bit OS can access at most 4 GB of RAM (there is no other technique to access more memory on a 32-bit OS).
A.
True
B.
False
Correct Answer
B. False
Explanation A 32-bit operating system can only access a maximum of 4 GB of RAM. However, there are techniques such as Physical Address Extension (PAE) that can be used to access more memory on a 32-bit OS. Therefore, the statement that there is no other technique to access more memory on a 32-bit OS is false.
9.
The Master Boot Record is the first sector of an unformatted disk.
A.
True
B.
False
Correct Answer
B. False
Explanation The Master Boot Record (MBR) is not specific to an unformatted disk. It is the first sector of a partitioned disk, written when the disk is partitioned. The MBR contains the partition table and the boot loader code responsible for loading the operating system. Therefore, the correct answer is False.
10.
In a multithreaded environment main thread terminates after the termination of child threads.
A.
True
B.
False
Correct Answer
A. True
Explanation In a multithreaded environment, the main thread is responsible for creating and managing child threads. Once all the child threads have completed their execution, the main thread can safely terminate. This is because the main thread typically waits for (joins) the child threads to finish before it exits. Therefore, the statement that the main thread terminates after the termination of child threads is true.
11.
Which of the following disk scheduling techniques has a drawback of starvation?
A.
SCAN
B.
SSTF
C.
FCFS
D.
LIFO
Correct Answer
B. SSTF
Explanation SSTF (Shortest Seek Time First) disk scheduling technique has a drawback of starvation. In SSTF, the disk arm moves to the request with the shortest seek time from its current position. This means that requests with larger seek times may be continuously ignored or delayed, leading to starvation. As a result, some requests may never be serviced, causing a potential loss of data or system inefficiency.
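The bias can be seen in a short simulation (the cylinder numbers are arbitrary, chosen for illustration): a cluster of requests near the head keeps winning, and the distant request at cylinder 190 is served last. With a continuous stream of nearby arrivals it might never be served at all.

```python
def sstf(requests, head):
    """Serve pending requests shortest-seek-first; returns the service order."""
    pending, order = list(requests), []
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))  # closest cylinder wins
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order

# The far request (190) loses to every nearby one and is served last:
print(sstf([52, 50, 48, 190, 47, 53], head=50))
```

SCAN-style algorithms avoid this by sweeping the disk in order, so every request is eventually passed over.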
12.
When two or more processes attempt to access the same resource a _________ occurs.
A.
Critical section
B.
Fight
C.
Communication problem
D.
Race condition
Correct Answer
D. Race condition
Explanation A race condition occurs when two or more processes try to access the same resource simultaneously, leading to unpredictable and inconsistent results. This can happen when the processes are not properly synchronized or when they do not follow a specific order of execution. In a race condition, the outcome of the processes' execution depends on the timing and order of their access to the shared resource, which can lead to errors or unexpected behavior.
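The classic lost-update interleaving behind a race condition can be replayed deterministically, without real threads, by writing out the two threads' steps by hand (the "threads" here are simulated, purely for illustration):

```python
counter = 0

# Two threads each try to do `counter += 1`, which is really three steps:
# read the value, add one, write it back. One bad interleaving:
t1_read = counter        # thread 1 reads counter (sees 0)
t2_read = counter        # thread 2 reads counter (also sees 0) before 1 writes
counter = t1_read + 1    # thread 1 writes back 1
counter = t2_read + 1    # thread 2 overwrites with 1 -- one increment is lost
print(counter)           # 1, not the expected 2
```

A lock around the whole read-modify-write sequence makes it atomic and rules out this interleaving.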
13.
________ scheduler selects the jobs from the pool of jobs and loads into the ready queue.
A.
Long term
B.
Short term
C.
Medium term
D.
None of the options
Correct Answer
A. Long term
Explanation The long term scheduler selects the jobs from the pool of jobs and loads them into the ready queue. This is because the long term scheduler is responsible for deciding which processes should be brought into the main memory from the secondary storage. Once the jobs are in the ready queue, they can be executed by the CPU.
14.
A thread is a heavyweight process.
A.
True
B.
False
Correct Answer
B. False
Explanation A thread is not a Heavy Weight process. Threads are lightweight processes that exist within a process and share the same memory space. They are used to perform multiple tasks concurrently within a single process, allowing for efficient multitasking. Unlike heavy-weight processes, threads do not require separate memory space or system resources. Thus, the given statement is false.
15.
Trap is
A.
A software interrupt
B.
A hardware interrupt
C.
Unmaskable
D.
An exception or a fault
E.
Asynchronous
Correct Answer(s)
A. A software interrupt, C. Unmaskable, D. An exception or a fault
Explanation A trap is a software interrupt: it is raised synchronously by the executing instruction itself (for example by a system call, an exception, or a fault) rather than by external hardware. Because it is triggered by the instruction stream, a trap cannot be masked the way external hardware interrupts can, and it is not asynchronous. Exceptions are events during program execution that disrupt the normal flow of instructions; faults are exceptions that can often be corrected, allowing the program to continue. Therefore a trap is correctly described as a software interrupt, unmaskable, and an exception or a fault.
16.
Using the Priority Scheduling algorithm, find the average waiting time for the following set of processes (all arrive at time 0; a lower priority number means higher priority):
-------------------------------------
Process   Burst-Time   Priority
-------------------------------------
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2
-------------------------------------
A.
8 milliseconds
B.
8.2 milliseconds
C.
7.75 milliseconds
D.
3 milliseconds
Correct Answer
B. 8.2 milliseconds
Explanation The Priority Scheduling algorithm assigns the processes to the CPU based on their priority. In this case, the processes are given priorities in the order: P2 (priority 1), P5 (priority 2), P1 (priority 3), P3 (priority 4), and P4 (priority 5). The burst time is the time taken by each process to complete its execution.
To calculate the average waiting time, we need each process's waiting time. Since all processes arrive at time 0, a process's waiting time equals the total burst time of the processes scheduled before it.
Using the Priority Scheduling algorithm, the waiting times are as follows:
P2: 0 milliseconds (no process has a higher priority)
P5: 1 millisecond (P2 takes 1 millisecond to complete)
P1: 6 milliseconds (P2 and P5 take a total of 6 milliseconds)
P3: 16 milliseconds (P2, P5, and P1 take a total of 16 milliseconds)
P4: 18 milliseconds (P2, P5, P1, and P3 take a total of 18 milliseconds)
The average waiting time is the sum of the waiting times divided by the number of processes: (0 + 1 + 6 + 16 + 18) / 5 = 41 / 5 = 8.2 milliseconds.
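The waiting times can be checked with a few lines of Python (assuming, as above, that all processes arrive at time 0 and that a lower priority number means higher priority):

```python
# (process, burst, priority); lower priority number = higher priority
procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]

clock, waits = 0, {}
for name, burst, _ in sorted(procs, key=lambda p: p[2]):  # run in priority order
    waits[name] = clock         # all arrive at t=0, so waiting time = start time
    clock += burst

print(waits)
print(sum(waits.values()) / len(waits))   # 8.2
```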
17.
Aging is a technique to avoid starvation in a scheduling system.
A.
True
B.
False
Correct Answer
A. True
Explanation Aging is a technique used in scheduling systems to prevent starvation, which occurs when a process is constantly delayed or unable to execute due to other processes receiving priority. Aging involves gradually increasing the priority of a process the longer it waits, ensuring that eventually it will receive priority and be executed. This technique helps to ensure fairness and prevent processes from being indefinitely delayed or starved of resources.
18.
Marshalling is the process of packaging and sending interface method parameters across thread or process boundaries.
A.
True
B.
False
Correct Answer
A. True
Explanation Marshalling is indeed the process of packaging and sending interface method parameters across thread or process boundaries. This is commonly done in distributed systems or when communicating between different components or systems. Marshalling ensures that the data being sent is in a format that can be understood by the receiving end, regardless of the programming languages or platforms involved. It helps to maintain interoperability and allows for seamless communication between different parts of a system. Therefore, the given answer, "True," is correct.
19.
State whether true or false.
i) Multithreading is useful for applications that perform a number of essentially independent tasks that do not need to be serialized.
ii) An example of multithreading is a database server that listens for and processes numerous client requests.
A.
i-True, ii-False
B.
i-True, ii-True
C.
i-False, ii-True
D.
i-False, ii-False
Correct Answer
B. i-True, ii-True
Explanation Multithreading is indeed useful for applications that perform a number of essentially independent tasks that do not need to be serialized. This means that the tasks can be executed concurrently and in parallel, improving the overall efficiency and performance of the application. An example of multithreading is a database server that listens for and processes numerous client requests simultaneously, allowing for faster response times and improved scalability. Hence, the statement i) is true. The statement ii) is also true as it provides an accurate example of multithreading.
20.
Consider the following set of processes, with the arrival times and the CPU burst times given in milliseconds.
-------------------------------------
Process Arrival-Time Burst-Time
-------------------------------------
P1 0 5
P2 1 3
P3 2 3
P4 4 1
-------------------------------------
What is the average turnaround time for these processes with the preemptive shortest remaining processing time first (SRPT) algorithm?
A.
5.50
B.
5.75
C.
6.00
D.
6.25
Correct Answer
A. 5.50
Explanation With preemptive shortest-remaining-time scheduling, the CPU always runs the ready process with the least remaining burst. The schedule is: P1 runs from 0 to 1; P2 arrives at time 1 with a shorter burst (3 vs. P1's remaining 4) and runs from 1 to 4; P4 arrives at time 4 with the shortest remaining time and runs from 4 to 5; P3 runs from 5 to 8; finally P1 resumes and runs from 8 to 12. Turnaround time is completion time minus arrival time: P1 = 12 - 0 = 12, P2 = 4 - 1 = 3, P3 = 8 - 2 = 6, P4 = 5 - 4 = 1. The average turnaround time is therefore (12 + 3 + 6 + 1) / 4 = 22 / 4 = 5.50 milliseconds.
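A small one-time-unit-step simulation reproduces the 5.50 figure (ties on remaining time are broken arbitrarily here, which does not affect this instance):

```python
def srtf_avg_turnaround(procs):
    """procs: list of (name, arrival, burst). Preemptive SRTF in 1-unit steps."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: at for name, at, _ in procs}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:
            t += 1
            continue
        n = min(ready, key=lambda p: remaining[p])   # shortest remaining time
        remaining[n] -= 1
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    tats = [finish[n] - arrival[n] for n, _, _ in procs]
    return sum(tats) / len(tats)

print(srtf_avg_turnaround([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 3), ("P4", 4, 1)]))
```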
21.
Consider three processes (process ids 0, 1, 2 respectively) with compute time bursts of 2, 4 and 8 time units. All processes arrive at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In LRTF, ties are broken by giving priority to the process with the lowest process id. The average turnaround time is:
A.
14 units
B.
13 units
C.
15 units
D.
16 units
Correct Answer
B. 13 units
Explanation Under LRTF the CPU always runs the process with the longest remaining time, breaking ties in favour of the lowest process id. Starting from bursts (2, 4, 8), process 2 runs alone until its remaining time ties with process 1's, after which the processes keep leapfrogging one another one time unit at a time. Process 0 finishes at time 12, process 1 at time 13, and process 2 at time 14. Since all processes arrive at time 0, turnaround time equals finish time, so the average turnaround time is (12 + 13 + 14) / 3 = 39 / 3 = 13 units.
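The finish times can be verified with a one-unit-step simulation (Python's max() returns the first maximal element, which implements the lowest-id tie break):

```python
def lrtf_avg_turnaround(bursts):
    """bursts: list indexed by process id; all processes arrive at t=0.
    Each time unit, run the process with the longest remaining time,
    breaking ties in favour of the lowest process id."""
    remaining = list(bursts)
    finish, t = [0] * len(bursts), 0
    while any(remaining):
        # max() returns the first maximal element, i.e. the lowest id on a tie
        pid = max(range(len(remaining)), key=lambda i: remaining[i])
        remaining[pid] -= 1
        t += 1
        if remaining[pid] == 0:
            finish[pid] = t
    return sum(finish) / len(finish)   # arrival is 0, so turnaround = finish time

print(lrtf_avg_turnaround([2, 4, 8]))   # 13.0
```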
22.
Consider the 3 processes, P1, P2 and P3 shown in the table
----------------------------------------------
Process Arrival time Time unit required
----------------------------------------------
P1 0 5
P2 1 7
P3 3 4
----------------------------------------------
The completion order of the 3 processes under the policies FCFS and RRS (round robin scheduling with CPU quantum of 2 time units) are
Explanation The completion order of the processes under the FCFS (First-Come, First-Served) policy is determined by the arrival time of the processes. In this case, P1 has an arrival time of 0, P2 has an arrival time of 1, and P3 has an arrival time of 3. Therefore, P1 will be the first to complete, followed by P2, and then P3.
Under the RRS (Round Robin Scheduling) policy with a CPU quantum of 2 time units, each process runs for at most 2 units before being preempted and sent to the back of the ready queue. The schedule is: P1 (0-2), P2 (2-4), P1 (4-6), P3 (6-8), P2 (8-10), P1 (10-11, completes), P3 (11-13, completes), P2 (13-16, completes). Therefore, the completion order under RRS with a quantum of 2 units is P1, P3, P2.
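The round-robin order can be checked with a short simulation (this assumes the common convention that a process arriving during a time slice joins the ready queue ahead of the process just preempted):

```python
from collections import deque

def rr_completion_order(procs, quantum):
    """procs: list of (name, arrival, burst), sorted by arrival time.
    Returns the order in which processes complete under round robin."""
    queue, order, t, i = deque(), [], 0, 0
    remaining = {n: b for n, _, b in procs}
    while len(order) < len(procs):
        while i < len(procs) and procs[i][1] <= t:   # admit arrivals
            queue.append(procs[i][0]); i += 1
        if not queue:
            t = procs[i][1]                          # jump to next arrival
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= t:   # arrivals during the slice
            queue.append(procs[i][0]); i += 1
        if remaining[name] == 0:
            order.append(name)
        else:
            queue.append(name)                       # preempted: back of the queue
    return order

print(rr_completion_order([("P1", 0, 5), ("P2", 1, 7), ("P3", 3, 4)], quantum=2))
```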
23.
The average memory access time for a machine with a cache hit rate of 90% where the cache access time is 10ns and the memory access time is 100ns is
A.
55ns
B.
45ns
C.
90ns
D.
19ns
Correct Answer
D. 19ns
Explanation The average memory access time can be calculated using the formula:
Average memory access time = (Cache hit rate * Cache access time) + (Cache miss rate * Memory access time)
Given that the cache hit rate is 90% and the cache access time is 10ns, we can calculate the cache miss rate by subtracting the cache hit rate from 100%:
Cache miss rate = 100% - 90% = 10%
Substituting the values into the formula:
Average memory access time = (0.9 * 10ns) + (0.1 * 100ns) = 9ns + 10ns = 19ns
Therefore, the correct answer is 19ns.
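The arithmetic, using the quiz's model in which a miss costs only the memory access time:

```python
hit_rate = 0.90
t_cache, t_mem = 10, 100                        # access times in ns
amat = hit_rate * t_cache + (1 - hit_rate) * t_mem
print(round(amat, 2))                           # 19.0 ns
```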
24.
At a particular time, the value of a counting semaphore is 10. It will become 7 after
A.
3 V operations
B.
5 P operations
C.
5 V and 2 P operations
D.
13 P and 10 V operations
Correct Answer
D. 13 P and 10 V operations
Explanation Each P (wait) operation decrements the semaphore by 1 and each V (signal) operation increments it by 1. Starting from 10, the option of 13 P operations and 10 V operations gives 10 - 13 + 10 = 7. None of the other options yields 7: 3 V operations give 13, 5 P operations give 5, and 5 V with 2 P operations give 13. Hence, the correct answer is 13 P and 10 V operations.
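The options reduce to pure arithmetic (this ignores blocking: it assumes an interleaving in which every wait eventually succeeds):

```python
def after_ops(value, p_ops, v_ops):
    """Net counting-semaphore value after p_ops wait()s and v_ops signal()s."""
    return value - p_ops + v_ops

for p, v in [(0, 3), (5, 0), (2, 5), (13, 10)]:
    print(f"{p} P, {v} V -> {after_ops(10, p, v)}")   # only the last gives 7
```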
25.
In real time operating systems, which of the following is the most suitable scheduling scheme?
A.
Round-robin
B.
FCFS
C.
Pre-emptive scheduling
D.
Random scheduling
Correct Answer
C. Pre-emptive scheduling
Explanation Pre-emptive scheduling is the most suitable scheduling scheme in real-time operating systems because it allows the system to interrupt a running process and allocate the CPU to a higher priority task. This ensures that time-critical tasks are executed in a timely manner, as the operating system can prioritize and allocate resources based on the urgency of the tasks. In contrast, round-robin, FCFS, and random scheduling schemes do not provide the same level of responsiveness and prioritization for real-time tasks.
26.
20-bit address bus allows access to a memory of capacity
A.
1 Mb
B.
2 Mb
C.
32Mb
D.
64 Mb
Correct Answer
A. 1 Mb
Explanation A 20-bit address bus provides 2^20 unique address combinations, since each of the 20 lines carries one binary digit. With byte-addressable memory, each address refers to one byte, so a 20-bit bus can address 2^20 bytes = 1,048,576 bytes, which is 1 MB.
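The arithmetic in one line (assuming byte-addressable memory):

```python
address_lines = 20
locations = 2 ** address_lines
print(locations)   # 1048576 byte-addressable locations = 1 MB
```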
27.
Threads do not share memory with other threads.
A.
True
B.
False
Correct Answer
B. False
Explanation Threads in a program share memory with other threads. This means that they can access and modify the same variables and data structures. This sharing of memory allows threads to communicate and synchronize their actions. However, it also introduces the possibility of conflicts and race conditions, where multiple threads try to access or modify the same memory simultaneously, leading to unpredictable results. To ensure proper synchronization and avoid such issues, synchronization mechanisms like locks and semaphores are used. Therefore, the given answer "False" is correct as threads do share memory with other threads.
28.
Threads share
A.
Address space
B.
Register
C.
Stack
D.
Signal
E.
Code
Correct Answer(s)
A. Address space D. Signal E. Code
Explanation Threads share the address space, signal, and code. Address space refers to the memory space that is allocated to a process, and threads within the same process share this memory space. This allows them to access and modify the same variables and data structures. Signals are used for inter-process communication and can be sent between threads within the same process. Code refers to the instructions that are executed by the processor, and threads within the same process share the same code.