Scheduling
Whether scheduling is based on processes or threads depends on whether the OS supports multi-threading. The core question: given a group of ready processes or threads, which process/thread should run next? Scheduling decisions are needed when:
- When a process is created
- When a process exits
- When a process blocks
- When an I/O interrupt occurs
Scheduling environments:
- Interactive: preemption is essential; it is the means for the OS to take the CPU away from the currently running process/thread
- Batch:
- No user impatiently waiting
- mostly non-preemptive, or preemptive with a long period for each process
- Real-time: deadlines
Scheduling metrics:
- Turnaround time (batch)
- Throughput (e.g. jobs per second)
- Response time (interactive)
- Average wait time (how long a process waits in the ready queue)
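As a concrete illustration of these metrics, the sketch below computes per-job turnaround and waiting time plus the averages and throughput. The job data (names, arrival, burst, and completion times) is invented for the example:

```c
#include <stdio.h>

/* Hypothetical completed jobs: all fields are made-up example data. */
struct job { const char *name; int arrival, burst, completion; };

int main(void) {
    struct job jobs[] = {
        {"A", 0, 5, 5},   /* arrives at t=0, runs 5, finishes at t=5  */
        {"B", 1, 3, 8},   /* arrives at t=1, runs 3, finishes at t=8  */
        {"C", 2, 2, 10},  /* arrives at t=2, runs 2, finishes at t=10 */
    };
    int n = sizeof jobs / sizeof jobs[0];
    double total_tat = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        int tat  = jobs[i].completion - jobs[i].arrival; /* turnaround time */
        int wait = tat - jobs[i].burst;                  /* time in the ready queue */
        total_tat  += tat;
        total_wait += wait;
        printf("%s: turnaround=%d wait=%d\n", jobs[i].name, tat, wait);
    }
    printf("avg turnaround=%.2f avg wait=%.2f throughput=%.2f jobs/unit\n",
           total_tat / n, total_wait / n, (double)n / jobs[n-1].completion);
    return 0;
}
```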
CPU burst: a sequence of instructions a process runs without requesting I/O. Mostly dependent on the program's behavior.
I/O "burst": the time required to satisfy an I/O request, during which the process cannot run any code. Mostly dependent on system behavior (how many other I/Os are pending, speed of the device, etc.)
Scheduling algorithm goals:
All systems
- Fairness: giving each process a fair share of the CPU
- Policy enforcement: seeing that stated policy is carried out
- Balance: keeping all parts of the system busy
Batch systems
- Throughput: maximize jobs per hour
- Turnaround time: minimize time between submission and termination
- CPU utilization: keep the CPU busy all the time
Interactive systems
- Response time: respond to requests quickly
- Proportionality: meet users' expectations
Real-time systems
- Meeting deadlines: avoid losing data
- Predictability: avoid quality degradation in multimedia systems
Almost ALL scheduling algorithms can be described by the following process state transition diagram or a derivative of it (we covered some more sophisticated ones in a prior lecture).

Process State Transition
First Come First Serve (FCFS)
Non-preemptive (a process runs until it blocks on I/O or exits). Processes are ordered as a queue: a new process is added to the end of the queue, and a blocked process that becomes ready is added to the end of the queue.
Main disadvantage: can hurt I/O-bound processes or processes with frequent I/O.
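A minimal FCFS sketch (process names, arrival times, and burst lengths are invented): jobs run to completion in arrival order, so the short jobs queued behind the 20-unit job show how FCFS can hurt short or I/O-bound processes.

```c
#include <stdio.h>

/* FCFS sketch: processes run to completion in arrival order.
 * Arrival times and burst lengths are invented example data. */
struct proc { const char *name; int arrival, burst; };

int main(void) {
    struct proc q[] = { {"long_cpu_job", 0, 20}, {"io_ish_job", 1, 2}, {"short_job", 2, 1} };
    int n = sizeof q / sizeof q[0];
    int t = 0;

    for (int i = 0; i < n; i++) {               /* head of the queue runs next */
        if (t < q[i].arrival) t = q[i].arrival;
        int start = t;
        t += q[i].burst;                        /* non-preemptive: run until done */
        printf("%s: waits %d, finishes at %d\n", q[i].name, start - q[i].arrival, t);
    }
    return 0;
}
```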
Shortest Job First (SJF)
Non-preemptive. Assumes the runtime is known in advance. Only optimal when all the jobs are available simultaneously.
Shortest Remaining Time First (SRTF)
The scheduler always chooses the process whose remaining runtime is shortest. The runtime has to be known in advance. Can be preemptive or non-preemptive (wait until the process blocks or finishes).
This typically reduces the average turnaround time.
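A non-preemptive SJF sketch, assuming runtimes are known in advance and all jobs arrive at t=0 (the case where SJF is optimal). Job names and burst lengths are invented:

```c
#include <stdio.h>

/* Non-preemptive SJF sketch: runtimes are assumed known in advance.
 * All jobs arrive at t=0, matching the "available simultaneously" case. */
struct job { const char *name; int arrival, burst, done; };

int main(void) {
    struct job jobs[] = { {"A",0,7,0}, {"B",0,4,0}, {"C",0,1,0}, {"D",0,4,0} };
    int n = sizeof jobs / sizeof jobs[0], t = 0;

    for (int finished = 0; finished < n; finished++) {
        int best = -1;
        for (int i = 0; i < n; i++)              /* pick shortest ready, unfinished job */
            if (!jobs[i].done && jobs[i].arrival <= t &&
                (best < 0 || jobs[i].burst < jobs[best].burst))
                best = i;
        t += jobs[best].burst;                   /* run it to completion */
        jobs[best].done = 1;
        printf("%s finishes at %d (turnaround %d)\n",
               jobs[best].name, t, t - jobs[best].arrival);
    }
    return 0;
}
```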
Round Robin (RR)
Each process is assigned a time interval referred to as its quantum. When the quantum expires, the CPU is given to another process (i.e. the CPU is taken away from the running process/thread, aka preemption).
RR = FIFO + preemption on quantum expiry
Length of the quantum:
- If too short, too many context switches lower CPU efficiency
- If too long, poor response to short interactive requests
- A quantum somewhat longer than the typical CPU burst works well, since most processes then block before the quantum expires
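A minimal round-robin sketch, assuming a fixed quantum and a few CPU-bound processes (all values invented; a real RR scheduler would use an actual FIFO ready queue rather than cycling over an array):

```c
#include <stdio.h>

/* Round-robin sketch: fixed quantum, cycle over the ready set. */
#define QUANTUM 4

int main(void) {
    const char *name[] = { "A", "B", "C" };
    int remaining[]    = { 10, 3, 6 };          /* CPU time each still needs */
    int n = 3, t = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {           /* cycle through the ready processes */
            if (remaining[i] <= 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            t += slice;                         /* run for one quantum (or less) */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                left--;
                printf("%s finishes at t=%d\n", name[i], t);
            } else {
                printf("%s preempted at t=%d, %d left\n", name[i], t, remaining[i]);
            }
        }
    }
    return 0;
}
```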
Priority Scheduling
Each process is assigned a priority, and the runnable process with the highest priority is allowed to run. Priorities are assigned statically or dynamically.
The scheduler must not allow a process to run forever:
- It can decrease the priority of the currently running process
- It can use a time quantum for each process
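A minimal sketch of priority scheduling under these constraints: each tick, the scheduler picks the highest-priority runnable process and decreases the running process's priority so it cannot run forever. Process names, priorities, and runtimes are invented for illustration:

```c
#include <stdio.h>

/* Priority-scheduling sketch: run the highest-priority runnable process;
 * decrement its priority each tick so it cannot monopolize the CPU.
 * Process names, priorities, and runtimes are invented example data.    */
struct proc { const char *name; int priority, remaining; };

int main(void) {
    struct proc p[] = { {"editor", 5, 3}, {"compiler", 3, 4}, {"backup", 1, 2} };
    int n = sizeof p / sizeof p[0], left = n;

    for (int t = 0; left > 0; t++) {
        int best = -1;
        for (int i = 0; i < n; i++)              /* highest-priority runnable process */
            if (p[i].remaining > 0 && (best < 0 || p[i].priority > p[best].priority))
                best = i;
        printf("t=%d: run %s (prio %d)\n", t, p[best].name, p[best].priority);
        p[best].remaining--;
        p[best].priority--;                      /* decrease priority of the running process */
        if (p[best].remaining == 0) left--;
    }
    return 0;
}
```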
Multilevel Queue (MLQ)
Multiple levels of priority, with each level run round-robin.
Issue: starvation, if higher-priority levels always have something to run.
Aka the priority decay scheduler:
- If a process is preempted, its dynamic priority is decremented (dynamic_priority--).
- When the dynamic priority reaches -1, it is reset to static_priority - 1. (This creates some issues when a high-priority process is reset before a low-priority process has executed.)
- When a process is made ready (from blocked), its dynamic priority is reset to static_priority - 1.
What kind of process should end up in the bottom queue?
- Higher priority for IO-Bound tasks
- Lower priority for CPU-Bound tasks
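A minimal sketch of these decay rules, assuming a simple task struct (the field names static_priority / dynamic_priority follow the notation above; the example values are invented):

```c
#include <stdio.h>

/* Priority-decay sketch of the rules above:
 *  - on preemption:           dynamic_priority--
 *  - when it reaches -1:      reset to static_priority - 1
 *  - on wakeup from blocked:  reset to static_priority - 1   */
struct task { const char *name; int static_priority, dynamic_priority; };

static void on_preempt(struct task *t) {
    t->dynamic_priority--;
    if (t->dynamic_priority == -1)                    /* fell to -1: reset */
        t->dynamic_priority = t->static_priority - 1;
}

static void on_wakeup(struct task *t) {
    t->dynamic_priority = t->static_priority - 1;     /* ready again after I/O */
}

int main(void) {
    struct task t = { "interactive_task", 3, 2 };     /* invented example values */
    for (int i = 0; i < 5; i++) {
        printf("%s: dynamic=%d\n", t.name, t.dynamic_priority);
        on_preempt(&t);                               /* quantum expired */
    }
    on_wakeup(&t);                                    /* returns from I/O */
    printf("after wakeup: dynamic=%d\n", t.dynamic_priority);
    return 0;
}
```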
Algorithm | Advantages | Disadvantages
---|---|---
First Come First Serve (FCFS) | Simple; every process eventually runs | Can hurt I/O-bound processes or processes with frequent I/O
Shortest Job First (SJF) | Typically reduces average turnaround time | Runtime must be known in advance; only optimal when all jobs arrive simultaneously
Priority based Scheduling | Important (e.g. kernel) processes run first | Starvation if higher priorities always have work; must prevent a process from running forever
When short processes need to execute in a specific order, FCFS works best compared to SJF and RR: because the processes are short, no process waits long, and since they are executed one by one, every process eventually runs.
When a task is only complete once all processes have executed successfully within a given time, Round Robin works well: it does not cause starvation and gives each process an equal time quantum.
Priority-based scheduling works well in this case because kernel processes generally have higher priority than user processes. For example, the scheduler itself is a kernel process; it should run first so that it can schedule other processes.
Load Balancing
Occasionally, or when it has no runnable process, scheduler[i] looks to steal work from elsewhere.
Each per-CPU scheduler maintains a load average and history to determine the stability of its load. Load balancing is typically done in a hierarchy.
Frequency of neighbor checks:
- Depends on the level in the hierarchy: the cost to migrate work
- Make "small" changes by pulling work from another CPU