The interval scheduling problem is described as follows:

*Given a set {1, 2, …, n} of n requests, where the i^{th} request starts at time s(i) and finishes at time f(i), find a maximum-size subset of mutually compatible requests. Two requests i and j are compatible if they do not overlap, i.e., either f(i) <= s(j) or f(j) <= s(i).*

**Example:** Given 3 requests with s(1) = 1, f(1) = 3, s(2) = 2, f(2) = 4, s(3) = 3, f(3) = 5, the maximum-size subset of compatible requests is {1, 3}.

This problem can be solved optimally with a simple greedy strategy: schedule requests by earliest finish time, i.e., from the set of requests still compatible with those already selected, always pick the one with the earliest finish time. The idea is to free the resource as soon as possible while still satisfying one request, so as to maximize the time left to satisfy other requests.

**Algorithm:**

1. Sort the requests in non-decreasing order of finish time.
2. Initialize *schedule* to the empty set.
3. Scan the requests in sorted order, adding a request to *schedule* if it starts no earlier than the finish time of the last request added.
4. Return *schedule*.

**Proof of optimality:**

Let O be an optimal schedule and *schedule* be the output of the algorithm. We need to show that |*schedule*| = |O|. Let i_{1}, i_{2}, …, i_{k} be the requests in *schedule* in the order they were added, and let j_{1}, j_{2}, …, j_{m} be the requests in O, ordered by start time (equivalently by finish time, since the requests in O are pairwise compatible).

By induction, we can prove that ∀r <= k, f(i_{r}) <= f(j_{r}). (Note that m >= k because O is optimal, so j_{r} exists for every r <= k.)

This is true for r = 1, i.e., f(i_{1}) <= f(j_{1}), because the algorithm starts by selecting the request with the smallest finish time overall.

For r > 1, assume the induction hypothesis holds for r – 1, i.e., f(i_{r-1}) <= f(j_{r-1}). Since O consists of compatible requests, f(j_{r-1}) <= s(j_{r}), and thus f(i_{r-1}) <= s(j_{r}). This means j_{r} is compatible with i_{r-1} (and with every earlier selection), so j_{r} is among the available intervals when the algorithm selects i_{r}. Since the algorithm selects the available interval with the smallest finish time, it follows that f(i_{r}) <= f(j_{r}).

We are now in good shape to prove by contradiction that |*schedule*| = |O|, i.e., that m = k. If m > k, there must be a request j_{k+1} in O such that f(j_{k}) <= s(j_{k+1}). But f(i_{k}) <= f(j_{k}), hence f(i_{k}) <= s(j_{k+1}), i.e., j_{k+1} is compatible with i_{1}, i_{2}, …, i_{k}, and the algorithm could not have stopped after i_{k}. This contradiction completes the proof.

**Implementation and Running time:**

The algorithm can be implemented in O(n log n) time by first sorting the n requests in non-decreasing order of finish time, followed by a single linear pass over the sorted requests.

Thus, the time to sort dominates the running time of the algorithm.

Take a look at the C++ implementation.

Try your hand at the problem BUSYMAN, which uses this idea.

**Reference:** Algorithm Design by Kleinberg and Tardos
