Weighted Interval Scheduling Problem

Unlike the Interval Scheduling Problem, where the goal is to maximize the number of mutually compatible requests, in the Weighted Interval Scheduling Problem each request i has an associated value or weight wi, and the goal is to find a maximum-weight subset of mutually compatible requests. The problem reduces to the Interval Scheduling Problem when wi = 1 for all i.

No greedy algorithm is known to solve this problem optimally. However, it can be solved with dynamic programming.

Suppose the requests are sorted in order of non-decreasing finish time: f1 ≤ f2 ≤ … ≤ fn. This defines a natural ordering of the requests. For an interval j, let p(j) be the largest i < j such that intervals i and j are disjoint, i.e., i is the interval with the greatest finish time fi satisfying fi ≤ sj; define p(j) = 0 if no such interval exists.

Consider an optimal solution O. Either an interval i belongs to O or it doesn't. If i ∈ O, then no interval strictly between p(i) and i can belong to O, since every such interval overlaps i. Moreover, if i ∈ O, then O must include an optimal solution to the problem consisting of requests {1, 2, …, p(i)}: if it did not, we could replace O's choice of requests from {1, 2, …, p(i)} with a better one without any danger of overlapping request i. If i ∉ O, then O is simply an optimal solution to the problem consisting of requests {1, 2, …, i − 1}.

For 1 ≤ j ≤ n, let Oj denote an optimal solution to the problem consisting of requests {1, 2, …, j} and OPT(j) denote the value of this solution. With the base case OPT(0) = 0, we have the following recurrence:

OPT(j) = max(wj + OPT(p(j)), OPT(j-1))

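The recurrence translates directly into a recursive function. Here is a minimal sketch in C++, assuming the requests are 1-indexed, already sorted by finish time, and that the weight array w and predecessor array p have been precomputed; the names w, p, and optRec are illustrative, not from the text:

```cpp
#include <vector>
#include <algorithm>

// Direct translation of OPT(j) = max(w[j] + OPT(p[j]), OPT(j-1)).
// Arrays are 1-indexed; p[j] = 0 means no earlier disjoint interval.
long long optRec(int j, const std::vector<long long>& w, const std::vector<int>& p) {
    if (j == 0) return 0;  // base case: no requests, value 0
    return std::max(w[j] + optRec(p[j], w, p),  // take request j
                    optRec(j - 1, w, p));       // skip request j
}
```

For the instance with intervals (1,3), (2,5), (4,6), (6,8) and weights 2, 4, 4, 7, the predecessor array is p = (0, 0, 1, 3) and the optimum picks requests 1, 3, and 4.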

However, implementing this recurrence directly as a recursive procedure would take exponential time, because the same subproblems are solved over and over. Since there are only n distinct subproblems OPT(j) for j ∈ {1, 2, …, n}, memoizing the results or computing them bottom-up reduces the running time to polynomial.
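A memoized variant of the same recurrence, sketched under the same assumptions (1-indexed w and p arrays, illustrative names), solves each subproblem at most once:

```cpp
#include <vector>
#include <algorithm>

// Memoized recursion: memo[j] < 0 marks a subproblem not yet computed.
// Each OPT(j) is evaluated once, so the recursion does O(n) work overall.
long long optMemo(int j, const std::vector<long long>& w, const std::vector<int>& p,
                  std::vector<long long>& memo) {
    if (j == 0) return 0;
    if (memo[j] >= 0) return memo[j];  // reuse a previously computed answer
    memo[j] = std::max(w[j] + optMemo(p[j], w, p, memo),
                       optMemo(j - 1, w, p, memo));
    return memo[j];
}
```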

Algorithm:

  1. Sort the requests in order of non-decreasing finish times. This step takes time O(n log n).
  2. For 1 ≤ j ≤ n, find the largest i < j such that fi ≤ sj and call it p(j), with p(j) = 0 if no such i exists. Since the requests are sorted by finish time, p(j) can be found with binary search in O(log j) time, so this step takes O(n log n) time in total.
  3. For j = 1, 2, …, n, compute OPT(j) = max(wj + OPT(p(j)), OPT(j−1)) bottom-up, starting from OPT(0) = 0. This step takes O(n) time.

Thus the overall time complexity of the algorithm is O(n log n).
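The three steps can be combined into a single function. The following sketch uses std::upper_bound over the sorted finish times for step 2; the struct and function names are illustrative, and intervals are assumed to have positive length (sj < fj):

```cpp
#include <vector>
#include <algorithm>

struct Request { long long start, finish, weight; };

// Returns the maximum total weight of a set of mutually compatible requests.
long long weightedIntervalScheduling(std::vector<Request> reqs) {
    int n = (int)reqs.size();
    // Step 1: sort by non-decreasing finish time, O(n log n).
    std::sort(reqs.begin(), reqs.end(),
              [](const Request& a, const Request& b) { return a.finish < b.finish; });

    std::vector<long long> finish(n);
    for (int i = 0; i < n; ++i) finish[i] = reqs[i].finish;

    // Step 2: p[j] is the index (1-based) of the last request finishing no
    // later than s_j; 0 means no compatible predecessor. O(n log n) total.
    std::vector<int> p(n + 1, 0);
    for (int j = 1; j <= n; ++j) {
        // upper_bound finds the first finish time strictly greater than s_j,
        // so the elements before it are exactly those with f_i <= s_j.
        auto it = std::upper_bound(finish.begin(), finish.end(), reqs[j - 1].start);
        p[j] = (int)(it - finish.begin());
    }

    // Step 3: bottom-up DP with OPT(0) = 0, O(n).
    std::vector<long long> opt(n + 1, 0);
    for (int j = 1; j <= n; ++j)
        opt[j] = std::max(reqs[j - 1].weight + opt[p[j]], opt[j - 1]);
    return opt[n];
}
```

Setting every weight to 1 makes the function count the largest set of compatible intervals, matching the reduction to the unweighted problem noted above.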

Take a look at C++ implementation.

Reference: Jon Kleinberg and Éva Tardos, Algorithm Design.
