This is an attempt to put together common CPU scheduling questions from various sources for the convenience of computer science students.
1. Consider N processes sharing the CPU in a round-robin fashion (N>=2). Assume that each context switch takes S msec and that each time quantum is Q msec. For simplicity, assume that processes never block on any event and simply switch between the CPU and the ready queue.
In the following your answers should be functions of N, S and T.
a) Find the maximum value of Q such that no process will ever go more than T msec without being scheduled, i.e., between the starts of two of its consecutive quanta?
Each process's turn on the CPU takes Q + S msec (its quantum plus one context switch).
Max wait time, T = N(Q+S)
T = NQ+NS
Q = (T-NS)/N
b) Find the maximum value of Q such that no process will ever go more than T msecs between executing instructions on the CPU?
The worst case is the gap between the last instruction a process executes at the end of one quantum and the first instruction it executes in its next quantum, so its own quantum is excluded from the wait:
Max wait time, T = N(Q+S) - Q
T = NQ+NS-Q
T = Q(N-1)+NS
Q = (T-NS)/(N-1)
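A quick way to sanity-check both formulas is to simulate the round-robin timeline and measure the largest gaps directly. The Python sketch below is illustrative only; the values of N, Q and S are arbitrary examples, not part of the question.

# Sketch: simulate round-robin among N identical processes that never block.
# Q is the quantum and S the context-switch cost, both in msec; the values
# below are arbitrary example inputs.
N, Q, S = 4, 30.0, 2.0
rounds = 5

starts = {p: [] for p in range(N)}   # when each quantum of process p starts
ends = {p: [] for p in range(N)}     # when each quantum of process p ends

t = 0.0
for _ in range(rounds):
    for p in range(N):
        starts[p].append(t)          # process p begins executing
        t += Q
        ends[p].append(t)            # last instruction of this quantum
        t += S                       # context switch to the next process

# Part (a): gap between the starts of consecutive quanta of the same process.
gap_a = max(s2 - s1 for p in range(N)
            for s1, s2 in zip(starts[p], starts[p][1:]))
# Part (b): gap between the end of one quantum and the start of the next one.
gap_b = max(s2 - e1 for p in range(N)
            for e1, s2 in zip(ends[p], starts[p][1:]))

print(gap_a, N * (Q + S))        # matches N(Q+S)
print(gap_b, N * (Q + S) - Q)    # matches N(Q+S) - Q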
2. Suppose that there are two processes, PH and PL, running in a system. Each process is single-threaded. The operating system’s scheduler is preemptive and uses round-robin scheduling with a quantum of q time units.
The scheduler supports two priority levels, HIGH and LOW. Processes at LOW priority will run only if there are no runnable HIGH priority processes. Process PH is a HIGH priority process.
It behaves as described in the following pseudo-code:
while (TRUE) do
compute for tc time units
block for tb time units to wait for a resource
end while
That is, if this process were the only one running in the system, it would alternate between running for tc units of time and blocking for tb units of time. Assume that tc is less than q.
Process PL is a low priority process. This process runs forever, doing nothing but computation. That is, it never blocks waiting for a resource.
a. For what percentage of the time will the low priority process PL be running in this system?
Express your answer in terms of tb and tc.
tb/(tb + tc)
b. Repeat part (a), but this time under the assumption that there are two HIGH priority processes (PH1 and PH2) and one LOW priority process (PL). Assume that each HIGH priority process waits for a different resource. Again, express your answer in terms of tb and tc. Your answer should be correct for all tb > 0 and all 0 < tc < q.
(tb - tc)/(tb + tc) if tc < tb
0 if tc >= tb
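Both fractions can be checked with a small time-stepped simulation: at every tick a runnable HIGH priority process gets the CPU, otherwise PL does. The sketch below is a rough model under the question's assumption that tc < q (each compute burst fits in one quantum, so the quantum itself never matters); the numeric values of tc, tb and the number of HIGH priority processes are made-up examples.

# Sketch: PL's CPU share with n_high HIGH-priority compute/block loops.
# Times are integer ticks; tc, tb and n_high below are example inputs.
def pl_share(tc, tb, n_high, total=100000):
    remaining = [tc] * n_high    # compute time left in the current burst
    wake = [0] * n_high          # tick at which a blocked process wakes up
    pl_ticks = 0
    for t in range(total):
        # A blocked HIGH-priority process starts a new burst once it wakes.
        for i in range(n_high):
            if remaining[i] == 0 and t >= wake[i]:
                remaining[i] = tc
        # Run a HIGH-priority process if any is runnable; otherwise PL runs.
        for i in range(n_high):
            if remaining[i] > 0 and t >= wake[i]:
                remaining[i] -= 1
                if remaining[i] == 0:
                    wake[i] = t + 1 + tb   # burst finished: block for tb ticks
                break
        else:
            pl_ticks += 1
    return pl_ticks / total

print(pl_share(20, 60, 1))   # ~ tb/(tb+tc)      = 0.75
print(pl_share(20, 60, 2))   # ~ (tb-tc)/(tb+tc) = 0.50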
------------------------------------------------------------------------------------------------------------
Suppose a processor uses a prioritized round robin scheduling policy. New processes are assigned an initial quantum of length q. Whenever a process uses its entire quantum without blocking, its new quantum is set to twice its current quantum. If a process blocks before its quantum expires, its new quantum is reset to q. For the purposes of this question, assume that
every process requires a finite total amount of CPU time.
(a) Suppose the scheduler gives higher priority to processes that have larger quanta. Is starvation possible in this system? Why or why not?
No, starvation is not possible. Because we assume that a process will terminate, the worst
that can happen is that a CPU bound process will continue to execute until it completes.
When it finishes, one of the lower priority processes will execute. Because the I/O bound
processes will sit on the low priority queue, they will eventually make it to the head of the
queue and will not starve.
(b) Suppose instead that the scheduler gives higher priority to processes that have smaller quanta. Is starvation possible in this system? Why or why not?
Yes, starvation is possible. Suppose a CPU bound process runs on the processor, uses its
entire quantum, and has its quantum doubled. Suppose a steady stream of I/O bound processes
enter the system. Since they will always have a lower quantum and will be selected for
execution before the process with the doubled quantum, they will starve the original process.
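The scenario in (b) can be made concrete with a toy priority queue keyed on quantum size. This is only a sketch of the rule described above; the quantum value, the arrival pattern, and the process names are invented for illustration.

import heapq

# Sketch of the "smaller quantum runs first" variant from part (b).
q = 10
ready = []   # min-heap of (current_quantum, tie_breaker, name)
# The CPU-bound process has already used one full quantum, so its quantum was doubled.
heapq.heappush(ready, (2 * q, 0, "CPU-bound"))

order = 1
schedule = []
for t in range(8):                                # a steady stream of I/O-bound arrivals
    heapq.heappush(ready, (q, order, "IO-bound-%d" % t))
    order += 1
    quantum, _, name = heapq.heappop(ready)       # the smallest quantum wins the CPU
    schedule.append(name)
    if name.startswith("IO"):
        pass                                      # blocks before its quantum expires and leaves
    else:
        heapq.heappush(ready, (quantum * 2, order, name))   # used its whole quantum: doubled
        order += 1

print(schedule)   # the CPU-bound process never appears: it starves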
_______________________________________________________________
I have just invented a new scheduling algorithm that I claim gives the highest
priority to processes that have just entered the system, but is fair to all processes. The algorithm works like this:
There are two queues, one for new processes and one for old processes. When a process enters the system, it is put at the end of the new queue. After 2 milliseconds on the new queue, whether a process has been scheduled or not, it is moved to the end of the old queue.
When it is time to schedule a process, the system schedules the process at the head of one of the queues, alternating between the two queues. Each process runs to completion before the next process is scheduled. Assume that processes enter the system at random times and that most processes take much longer than 2 milliseconds to execute.
(a) Does this algorithm give the highest priority to new processes? Explain your answer.
This algorithm does not give the highest priority to new processes. There are several reasons.
First, even if processes took much less than 2 milliseconds to execute, the scheduler
would alternate between “new” processes and “old” processes, giving equal priority to both.
Second, given that most processes take much longer than 2 milliseconds to execute, most
“new” processes will not get scheduled during their 2 milliseconds on the “new” queue and
will then drop to the bottom of the “old” queue without ever having been given any priority.
Now they have to wait both for processes older than them and for some processes newer than
them to execute.
(b) Is this algorithm starvation free? Explain your answer.
Yes. Because the scheduler alternates between the two queues, every job that is not executed
from the “new” queue eventually ends up in the “old” queue and from there, eventually
reaches the head of the queue and is executed.
(c) Discuss whether this algorithm is fair to all processes. By “fair” we mean every process has a wait time approximately equal to the average wait time, assuming all processes have close to the same execution time.
No, it is not fair to all processes. Some lucky “new” processes will get to execute when they
reach the top of the “new” queue, while some will drop to the bottom of the “old” queue and
have to wait much longer to execute. These unlucky processes will have to wait for all of
the processes older than them to complete, and will have to wait for many processes younger
than them to complete as well.
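For concreteness, here is a minimal sketch of the two-queue scheduler described above, written in Python; the class and method names and the millisecond clock are assumptions made for illustration.

from collections import deque

class TwoQueueScheduler:
    def __init__(self):
        self.new_q = deque()           # entries are (arrival_time_ms, process)
        self.old_q = deque()
        self.serve_new_next = True     # alternate between the two queues

    def admit(self, process, now_ms):
        # A process entering the system goes to the end of the new queue.
        self.new_q.append((now_ms, process))

    def _promote(self, now_ms):
        # After 2 ms on the new queue, scheduled or not, move to the end of the old queue.
        while self.new_q and now_ms - self.new_q[0][0] >= 2:
            self.old_q.append(self.new_q.popleft())

    def pick_next(self, now_ms):
        self._promote(now_ms)
        first, second = ((self.new_q, self.old_q) if self.serve_new_next
                         else (self.old_q, self.new_q))
        self.serve_new_next = not self.serve_new_next
        for q in (first, second):      # fall back to the other queue if one is empty
            if q:
                return q.popleft()[1]  # the chosen process runs to completion
        return None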
____________________________________________________________________________________________
Can any of the three scheduling schemes (FCFS, SRTF, or RR) result in
starvation? If so, how might you fix this?
Yes. The SRTF algorithm can result in starvation of long tasks if short tasks keep arriving.
The FCFS algorithm will result in starvation only if some thread runs forever. Finally, RR
does not result in starvation, since all threads get some of the CPU. To fix SRTF, we might
add some priority mechanism to make sure that threads that have waited a long time
without running get at least some of the CPU now and then (priority increments when
threads don’t get to run). A multi-level feedback scheduler is a good approximation to SRTF
that can be designed to avoid starvation, though the result wouldn't be quite SRTF any more.
To fix FCFS, we would need some way of preempting threads that have run for a very long time,
giving some CPU to other threads. Effectively, this would add elements of RR
or multi-level scheduling into FCFS (and it wouldn't be quite FCFS any more).
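One possible encoding of the "priority increments when threads don't get to run" fix is simple aging: rank threads by remaining time minus a bonus that grows with waiting time. The sketch below is an illustration under that assumption, not a standard algorithm; the aging_rate parameter and the thread record layout are invented.

# Sketch: SRTF selection softened by aging so long waiters are not starved.
def pick_next(ready, now, aging_rate=0.1):
    # ready: list of dicts like {"name": ..., "remaining": ..., "last_ran": ...}
    def key(thread):
        waited = now - thread["last_ran"]
        # Pure SRTF would use thread["remaining"] alone; subtracting an aging
        # bonus guarantees a thread that waits long enough eventually wins.
        return thread["remaining"] - aging_rate * waited
    return min(ready, key=key) if ready else None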
_____________________________________________________________
Describe the differences among short-term, medium-term, and long-term scheduling.
Ans:
Short-term (CPU scheduler) – selects from the jobs in memory those jobs
that are ready to execute and allocates the CPU to one of them.
Medium-term – used especially with time-sharing systems as an
intermediate scheduling level. A swapping scheme is implemented to
remove partially run programs from memory and reinstate them later to
continue where they left off.
Long-term (job scheduler) – determines which jobs are brought into
memory for processing.
The primary difference is the frequency of their execution. The
short-term scheduler must select a new process quite often. The long-term scheduler runs much
less often, since it handles placing jobs in the system and may wait a while
for a job to finish before it admits another one.
____________________________________________________
Some systems simply treat the readers-writers problem as a critical section problem and hence the implementation simply uses P and V. What requirement of the readers-writers problem does this implementation not satisfy?
Readers cannot read concurrently; the requirement that multiple readers be allowed to read the shared data at the same time is not satisfied.
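Wrapping the entire read in P(mutex) ... V(mutex) serializes the readers. The classic first readers-writers solution keeps a reader count so that readers overlap while writers still get exclusive access; the sketch below expresses it with Python semaphores standing in for P and V.

import threading

mutex = threading.Semaphore(1)   # protects read_count
wrt = threading.Semaphore(1)     # held by a writer, or by the group of readers
read_count = 0

def reader(read_data):
    global read_count
    mutex.acquire()              # P(mutex)
    read_count += 1
    if read_count == 1:
        wrt.acquire()            # the first reader locks out writers
    mutex.release()              # V(mutex)

    read_data()                  # many readers can be here concurrently

    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        wrt.release()            # the last reader lets writers back in
    mutex.release()

def writer(write_data):
    wrt.acquire()                # P(wrt): exclusive access
    write_data()
    wrt.release()                # V(wrt)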
______________________________________________________________
Assume that 3 processes, each requiring 1 second of CPU time and
no I/O, arrive at the same time.
a) What will be the average response time (i.e., average time to
completion) for the processes under FIFO scheduling?
Answer: 2 seconds
Explanation:
Time for completion for process A = 1
Time for completion for process B = 2
Time for completion for process C = 3
Average time for completion = (1+2+3)/3 = 2
b) Answer part “a” for Round Robin (RR) scheduling assuming a timeslice of 0.1 sec and no overhead for context switches (i.e., context switches are free).
Answer: 2.9 seconds
Explanation:
Each process needs ten 0.1-second quanta; process A's tenth quantum is the 28th quantum overall, so:
Time for completion for process A = 2.8
Time for completion for process B = 2.9
Time for completion for process C = 3.0
Average time for completion = (2.8 + 2.9 + 3.0)/3 = 2.9
c) Answer part “a” for Shortest Job First (SJF) scheduling.
Answer: 2 seconds
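The arithmetic behind parts (a)-(c) can be checked with a few lines of Python; the job lengths and the 0.1 s quantum are taken from the question, and context switches are treated as free.

jobs = {"A": 1.0, "B": 1.0, "C": 1.0}   # CPU time needed, in seconds
quantum = 0.1

# FIFO (and SJF, since all jobs are the same length): completions at 1, 2, 3.
t, fifo = 0.0, []
for name, length in jobs.items():
    t += length
    fifo.append(t)
print(sum(fifo) / len(fifo))            # 2.0

# Round robin with a 0.1 s timeslice and free context switches.
remaining, finish, t = dict(jobs), {}, 0.0
while remaining:
    for name in list(remaining):
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] <= 1e-9:
            finish[name] = t            # A: 2.8, B: 2.9, C: 3.0
            del remaining[name]
print(sum(finish.values()) / len(finish))   # 2.9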
d) Multilevel Feedback Queue Scheduling (MFQS) is a fairly good, general CPU
scheduling algorithm, but it can lead to starvation under certain circumstances. Briefly describe how starvation can occur using MFQS and how to modify MFQS so that starvation can be avoided.
Long jobs on low-priority queues can starve if a continuous
stream of short jobs keeps the high-priority queues full.
Solution: hold a lottery among the queues, weighted in favor
of the queues holding short jobs,
OR
implement aging, so that jobs that remain on
low-priority queues for a long time are promoted.
e) What advantage is there in having different time-quantum (i.e. timeslice) sizes on different levels of the MFQS approach?
1) different time quanta help differentiate between
long and short jobs
2) for long jobs, short quanta mean unnecessary context
switches (so different time quanta improve throughput)
3) Introduces more fairness
_______________________________________________________________________________________________
Consider a system with two processes, P1 and P2. Process P1 is running and process P2 is ready to
run. The operating system uses preemptive round robin scheduling. Consider the situation in which
P1’s scheduling quantum is about to expire, and P2 will be selected to run next.
List the sequence of events that will occur in this situation, in the order in which they will occur.
Create your list by choosing from the events shown below, using the event numbers to identify events.
For example, a valid answer might be: “7,5,4,8,5,2”. If an event occurs more than once, you may
include it more than once in your list.
1. P1’s user-level state is saved into a trap frame
2. P2’s user-level state is saved into a trap frame
3. a timer interrupt occurs
4. the operating system’s scheduler runs
5. there is a context switch from thread T1 (from process P1) to thread T2 (from process P2).
6. P1’s user-level state is restored from a trap frame
7. P2’s user-level state is restored from a trap frame
8. a “system call” instruction is executed
9. thread T2 (for process P2) is created
10. thread T1 (from process P1) is destroyed
Write your answer here:
3,1,4,5,7
_____________________________________________________________________________
Suppose that the operating system is running a round-robin scheduler with a 50 msec time quantum. There are three processes with the following characteristics:
- Process A runs for 60 msec, blocks for 100 msec, runs for 10 msec and terminates.
- Process B runs for 70 msec, blocks for 40 msec, runs for 20 msec, and terminates.
- Process C runs for 20 msec, blocks for 80 msec, runs for 60 msec, and terminates.
Assuming that process A arrives at time 0, B at time 10, and C at time 20, show which process is running at each point in time until all three processes have terminated.
Answer:
Time     Running process   Events
0-50     A                 B enters at time 10; C enters at time 20.
50-100   B
100-120  C                 C blocks until time 200.
120-130  A                 A blocks until time 230.
130-150  B                 B blocks until time 190.
150-190  (idle)            B unblocks at time 190.
190-210  B                 C unblocks at time 200; B terminates at time 210.
210-260  C                 A unblocks at time 230.
260-270  A                 A terminates at time 270.
270-280  C                 C terminates at time 280.
_______________________________________________________________________________________________
Consider the following preemptive priority-scheduling algorithm based on dynamically changing priorities. Larger priority numbers imply higher priority. When a process is waiting for the CPU (in the ready queue, but not running), its priority changes at a rate alpha; when it is running, the priority changes at a rate beta. All processes are given a priority of 0 when they enter the ready queue for the first time. The parameters alpha and beta can be set to give many different scheduling algorithms.
- (a) What is the algorithm that results from beta > alpha > 0?
- (b) What is the algorithm that results from alpha < beta < 0?
- (a) Since processes start at priority 0, any process already in the system (either running or waiting) has higher priority, so new processes go to the back of the queue. When a process runs, its priority keeps increasing at the rate beta, which is faster than the increase for the processes in the ready queue. Therefore, every time the running process's quantum expires (timer runout), it is still the highest-priority process, so it goes to the front of the ready queue and gets dispatched again. This is equivalent to FCFS.
- (b) This time, any new processes entering the system have higher priority than any old ones, since priority starts at zero and then becomes negative when the process waits or runs. New processes go in at the front of the queue. When a process runs or waits, its priority decreases, with the waiting processes decreasing faster than the running process. This is a LIFO (last-in-first-out) algorithm.
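A tick-by-tick simulation of the priority dynamics illustrates both classifications; the arrival times, service time, and the particular alpha and beta values below are example inputs, not part of the question.

# Sketch: preemptive scheduling with dynamically changing priorities.
def run(alpha, beta, arrivals, service=5, ticks=60):
    prio, left, finished = {}, {}, []
    for t in range(ticks):
        for name, at in arrivals.items():
            if at == t:
                prio[name], left[name] = 0.0, service   # enters ready queue at priority 0
        if not prio:
            continue
        # Preemptive: the largest priority number runs during this tick
        # (ties broken by name just to keep the run deterministic).
        running = max(prio, key=lambda n: (prio[n], n))
        for name in prio:
            prio[name] += beta if name == running else alpha
        left[running] -= 1
        if left[running] == 0:
            finished.append(running)
            del prio[running], left[running]
    return finished

arrivals = {"P1": 0, "P2": 3, "P3": 6}
print(run(alpha=1, beta=2, arrivals=arrivals))    # (a) finishes in arrival order: FCFS
print(run(alpha=-2, beta=-1, arrivals=arrivals))  # (b) finishes in reverse arrival order: LIFO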