The interval partitioning problem is described as follows:

*Given a set {1, 2, …, n} of n requests, where the i-th request starts at time s(i) and finishes at time f(i), find the minimum number of resources needed to schedule all requests so that no two requests assigned to the same resource conflict. Two requests i and j conflict in one of two ways:*

- s(i) <= s(j) and f(i) > s(j)
- s(j) <= s(i) and f(j) > s(i)

**Example:** Given 3 requests with s(1) = 1, f(1) = 3, s(2) = 2, f(2) = 4, s(3) = 3, f(3) = 5, at least 2 resources are needed to satisfy all requests. A valid assignment of requests to resources is {1, 3} and {2}.

This problem can be solved optimally with a greedy strategy of scheduling requests by earliest start time, i.e., from the set of remaining requests, we always select the one with the earliest start time and assign it to an available resource if doing so causes no conflict, or allocate a new resource for it otherwise.

**Algorithm:**
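A minimal C++ sketch of the earliest-start-time greedy described above (the function name and the (start, finish) pair representation are my own; instead of an indexed priority queue with increase-key, this sketch pops and re-pushes a plain min-heap, which is equivalent here):

```cpp
#include <algorithm>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Returns the minimum number of resources needed to schedule all requests.
// Each request is a (start, finish) pair; requests with f(i) == s(j) do
// not conflict, matching the strict inequality in the problem statement.
int minResources(std::vector<std::pair<int, int>> requests) {
    // Sort requests by start time (earliest start first).
    std::sort(requests.begin(), requests.end());
    // Min-heap of the earliest free time of each allocated resource.
    std::priority_queue<int, std::vector<int>, std::greater<int>> freeTimes;
    for (const auto& [s, f] : requests) {
        if (!freeTimes.empty() && freeTimes.top() <= s) {
            // The resource that frees up earliest can be reused.
            freeTimes.pop();
        }
        // Either reuse the popped resource or allocate a new one;
        // in both cases the chosen resource is now busy until f.
        freeTimes.push(f);
    }
    return static_cast<int>(freeTimes.size());  // one entry per resource
}
```

On the example above, `minResources({{1, 3}, {2, 4}, {3, 5}})` returns 2, with requests 1 and 3 sharing a resource.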

**Proof of optimality:**

Let us define the depth of a set of requests R to be the maximum number of requests in R in conflict at any time. It is clear that for any set of requests R, the number of resources required is at least the depth of R because the requests in conflict must be assigned to different resources.

We now prove by contradiction that the above greedy algorithm uses exactly d resources to schedule a set of requests R, where d is the depth of the set R.

Suppose the greedy algorithm uses more than d resources. Let j be the first request assigned to resource d+1. Since we process requests in non-decreasing order of start time, there are d requests that start at or before j and are in conflict with j. Thus at the start time of request j, there are at least d+1 requests in conflict, contradicting our assumption that the depth is d. Thus, the greedy algorithm uses exactly d resources and hence is optimal.

**Implementation and Running time:**

The algorithm can be implemented in O(n log n + n log d) = O(n log n) time by pre-sorting the n requests in order of start time and using an indexed min-priority queue to keep track of the resource with the earliest free time (the time at which the resource becomes free).

The initial sorting of requests takes O(n log n) time. Then, for each request, we either increase the key of an existing resource or insert a new resource into the priority queue, each of which takes O(log d) time since the queue never holds more than d resources. Thus the overall time complexity is O(n log n + n log d) = O(n log n), i.e., the time to sort dominates the overall running time of the algorithm.

Take a look at the C++ implementation.

Try your hand at problem DONALDO, which uses this idea.

**Reference:** Algorithm Design by Kleinberg and Tardos

Thanks to Miguel Oliveira for suggesting this approach:

You don’t need a heap here. After a sorting step, the assignment can be done in O(n).

Create an “events” array. An event is either the start or the finish time of a request. Sort the events in non-decreasing order by time. In case of equal times, a finish event should come before a start event.

Keep a queue of available resources, initially empty.

Process the events.

a) If it is a finish event, add its resource to the available resources queue.

b) If it is a start event, assign one of the available resources from the queue. If there isn’t any, create a new one and assign it. Associate this request with the chosen resource.

There are 2*n events, so this runs in O(n log n + n) = O(n log n) time, with O(n) extra space.
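The event-processing steps above can be sketched in C++ as follows (the function name and resource ids as plain ints are illustrative, not from the original):

```cpp
#include <algorithm>
#include <queue>
#include <vector>
#include <utility>

// Returns the number of resources used by the events-based assignment.
// Each request is a (start, finish) pair.
int minResourcesEvents(const std::vector<std::pair<int, int>>& requests) {
    // type 0 = finish, type 1 = start, so sorting by (time, type)
    // places finish events before start events at equal times.
    struct Event { int time, type, idx; };
    std::vector<Event> events;
    for (int i = 0; i < static_cast<int>(requests.size()); ++i) {
        events.push_back({requests[i].first, 1, i});
        events.push_back({requests[i].second, 0, i});
    }
    std::sort(events.begin(), events.end(),
              [](const Event& a, const Event& b) {
                  return a.time != b.time ? a.time < b.time : a.type < b.type;
              });

    std::vector<int> assigned(requests.size(), -1);  // request -> resource id
    std::queue<int> available;                       // free resource ids
    int resources = 0;
    for (const auto& e : events) {
        if (e.type == 0) {
            // Finish event: the request's resource becomes available again.
            available.push(assigned[e.idx]);
        } else {
            // Start event: reuse a free resource, or create a new one.
            if (available.empty()) available.push(resources++);
            assigned[e.idx] = available.front();
            available.pop();
        }
    }
    return resources;
}
```

On the example from the post, `minResourcesEvents({{1, 3}, {2, 4}, {3, 5}})` returns 2, and the finish-before-start tie-break lets requests 1 and 3 share a resource.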

In the line *In case of equal times, a finish event should come before a start event*, what did you mean by equal times?

The event array contains both start and finish events. If start time of some interval equals finish time of some other interval, we want the finish event to appear first in the sorted order.

Can you please tell me why earliest finishing time will not work here?

I was unable to find a counter-example so I can’t yet say that earliest finish time won’t work here. Maybe a counter-example exists for a slightly larger instance of the problem. Also, the proof for earliest start time greedy algorithm cannot be directly adapted for earliest finish time approach.

Actually, greedy with respect to start time will not work; earliest finish time is the right strategy. As a counterexample, consider request A, which spans time from t = 0 to t = 10, and requests B and C, which span times t = 1 to t = 2 and t = 2 to t = 3, respectively. Your strategy will pick A, while the optimum is picking B and C. The intuition for why you pick earliest finishing time is basically that you want to free up as much time as possible for your remaining events.

Please read the problem statement carefully. What you are talking about is Interval Scheduling Problem: https://kartikkukreja.wordpress.com/2013/09/20/interval-scheduling-problem/, wherein we have to satisfy the maximum number of requests using a single resource.

This is Interval Partitioning Problem. Here we have to satisfy all requests using the minimum number of resources.

Another technique would be integer compression followed by directly finding the depth ‘d’. Complexity: O(n log n)
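If I understand the suggestion, the depth can be computed directly with a +1/-1 sweep over sorted event times (no explicit compression needed when events are sorted); a hedged sketch, assuming the same conflict definition as the post:

```cpp
#include <algorithm>
#include <vector>
#include <utility>

// Returns the depth of the request set, which by the proof above equals
// the minimum number of resources. Each request is a (start, finish) pair.
int depth(const std::vector<std::pair<int, int>>& requests) {
    // +1 at each start, -1 at each finish. Pair ordering sorts (t, -1)
    // before (t, +1), so finishes are processed before starts at ties,
    // matching the strict inequality f(i) > s(j) in the conflict rule.
    std::vector<std::pair<int, int>> events;
    for (const auto& [s, f] : requests) {
        events.push_back({s, +1});
        events.push_back({f, -1});
    }
    std::sort(events.begin(), events.end());
    int active = 0, best = 0;
    for (const auto& [t, delta] : events) {
        active += delta;
        best = std::max(best, active);
    }
    return best;
}
```

This reports the answer (2 for the example in the post) but, unlike the two implementations above, does not produce an actual assignment of requests to resources.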

So what happens if we do not sort by start time and allocate resources only by checking for conflicts? Will that be an optimal algorithm?

The problem doesn’t say that we have to sort by start time. It just happens to be an optimal solution. Of course there may be other solutions having the same complexity.