I came across this incredibly interesting problem that took me quite a while to solve: *implementing a queue that supports the minimum operation*, i.e., aside from the normal queue operations (enqueue, dequeue), it should also support a minimum operation, whereby a client could ask for the minimum element in the queue at any point. Before someone jumps in and says well that’s just a min priority queue, no, there’s a difference. Priority queues don’t return elements in the order in which they were inserted but in the order of their priority. This queue has to return elements in the order in which they were inserted and also support querying for the current minimum element in the queue.

Now, I had to solve this problem for an online judge, so I just had to produce the right sequence of outputs for the given sequence of inputs; i.e., I could read and preprocess the entire input before returning any output. This led me off in the wrong direction with the following approach:

- Simulate all enqueue/dequeue operations on a vector by maintaining only the queue boundaries, leaving dequeued elements in place
- Mark the ranges for all the minimum queries and maintain them in a queue
- Preprocess the vector by building a segment tree for answering min queries in a range
- Process all min queries in the queue and write output

The expected time complexity was O(N) and this approach has O(N log N) time complexity, but an online judge would have a hard time distinguishing the two, so I expected this solution to pass. Still, it's not the right approach: if I'm implementing a data structure, my clients want results for their queries when they ask them, not after I've seen all the queries.
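For completeness, here's a sketch of the range-minimum piece of that offline approach, using a bottom-up segment tree over the simulated vector. The class and its interface are my own illustration, not the judge solution; each min query becomes a query over the `[head, tail)` boundaries of the simulated queue at that moment.

```cpp
#include <algorithm>
#include <limits>
#include <vector>
using namespace std;

// Bottom-up segment tree for range-minimum queries, built once over the
// vector of all enqueued elements. Build is O(N), each query is O(log N).
class MinSegmentTree {
    size_t n;
    vector<int> tree;  // leaves live at indices [n, 2n)
public:
    explicit MinSegmentTree(const vector<int>& a)
        : n(a.size()), tree(2 * a.size()) {
        for (size_t i = 0; i < n; ++i) tree[n + i] = a[i];
        // Build internal nodes bottom-up (loop form is safe for n == 0 too).
        for (size_t i = n; i-- > 1; )
            tree[i] = min(tree[2 * i], tree[2 * i + 1]);
    }
    // Minimum over the half-open range [l, r).
    int query(size_t l, size_t r) const {
        int res = numeric_limits<int>::max();
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = min(res, tree[l++]);
            if (r & 1) res = min(res, tree[--r]);
        }
        return res;
    }
};
```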

Another approach is to use a normal queue and a min indexed priority queue. The queue stores the elements along with their handles in the indexed priority queue, so that elements can be deleted from the priority queue in FIFO order. The min operation takes O(1), but enqueue and dequeue operations now cost O(log N). It’s a reasonable solution and might be perfectly acceptable – it answers the min queries when a client asks them, not after it has seen the entire input. But there exists a better solution.
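That design can be sketched with a `std::multiset` standing in for the indexed priority queue; the bounds are the same (O(log N) enqueue/dequeue, O(1) min). This is my own simplification, not the structure described above, but it illustrates the same trade-off:

```cpp
#include <queue>
#include <set>
#include <stdexcept>
using namespace std;

// A plain queue paired with a multiset mirror of its contents. The multiset
// plays the role of the min priority queue: O(log N) insert/erase, and the
// smallest element is always at begin().
template <typename T>
class QueueWithMinLogN {
    queue<T> Q;
    multiset<T> M;  // multiset, not set: duplicates must be kept
public:
    void enqueue(const T& x) {
        Q.push(x);
        M.insert(x);
    }
    T dequeue() {
        if (Q.empty())
            throw runtime_error("queue empty");
        T front = Q.front();
        Q.pop();
        M.erase(M.find(front));  // erase one copy only, not all duplicates
        return front;
    }
    T getMin() const {
        if (M.empty())
            throw runtime_error("queue empty");
        return *M.begin();
    }
    bool empty() const { return Q.empty(); }
    int size() const { return (int)Q.size(); }
};
```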

It was really difficult to come up with the right solution, and I don’t think it’s possible to discover it in an interview setting without getting any hints, so I can’t really describe the thought process. Instead, let’s go through the hints.

**Hint 1: How would we implement a stack that supports the min operation?**

Well, that’s easy. All elements in a stack are pushed and popped off the same end. We can just store pairs of elements (the pushed element and the current minimum) on the stack. When we want to push a new element on the stack, we need only look at the top pair to get the minimum of all the elements in the stack and calculate the new minimum. A query for minimum also need only look at the top pair. Here’s the simple implementation:

```cpp
#include <algorithm>
#include <stack>
#include <stdexcept>
#include <utility>
using namespace std;

template <typename T>
class StackWithMin {
private:
    stack<pair<T, T>> S;  // (element, min of the stack up to this element)

public:
    void push(const T& x) {
        S.push(make_pair(x, S.empty() ? x : min(x, S.top().second)));
    }

    T pop() {
        if (S.empty())
            throw runtime_error("stack empty");
        pair<T, T> top = S.top();
        S.pop();
        return top.first;
    }

    T getMin() const {
        if (S.empty())
            throw runtime_error("stack empty");
        return S.top().second;
    }

    int size() const {
        return (int)S.size();
    }

    bool empty() const {
        return S.empty();
    }
};
```

But it’s unclear how that helps us. Storing the current minimum worked with stacks because elements enter and leave through the same end. The same approach does not seem to extend to queues, well, until you see hint 2.

**Hint 2: How would we implement a queue with two stacks?**

Wow! I never thought there could be any legitimate purpose in implementing a queue with two stacks except for it being a decade-old interview question. So how do we implement a queue with two stacks? We use one stack, say S1, for answering dequeue requests and the other, say S2, for enqueue requests. When S1 becomes empty, we pop all the elements off S2 and push them into S1. Using two stacks in this way gives us FIFO behavior. Here’s a simple implementation:

```cpp
#include <stack>
#include <stdexcept>
using namespace std;

template <typename T>
class QueueWithTwoStacks {
private:
    stack<T> S1, S2;  // S1 serves dequeues, S2 receives enqueues

public:
    void enqueue(const T& x) {
        S2.push(x);
    }

    T dequeue() {
        if (S1.empty()) {
            // Move everything from S2 to S1, reversing the order so that
            // the oldest element ends up on top of S1.
            while (!S2.empty()) {
                S1.push(S2.top());
                S2.pop();
            }
        }
        if (S1.empty())
            throw runtime_error("queue empty");
        T front = S1.top();
        S1.pop();
        return front;
    }

    int size() const {
        return (int)(S1.size() + S2.size());
    }

    bool empty() const {
        return S1.empty() && S2.empty();
    }
};
```

Note that this doesn’t have worst-case O(1) complexity for each individual dequeue operation; it offers amortized O(1) time per dequeue. Each element is pushed and popped at most twice (once on each stack), so a sequence of N enqueue/dequeue operations costs O(N) in total, which is still pretty good.
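A tiny instrumented sketch (the function and names are mine) makes that bound concrete by counting every individual stack push and pop the two-stack queue performs:

```cpp
#include <stack>
using namespace std;

// Count every individual stack push/pop performed by the two-stack queue
// during n enqueues followed by n dequeues. Each element is pushed onto S2
// once, moved to S1 once (one pop plus one push), and finally popped from
// S1 once: exactly 4 stack operations per element, so the whole sequence
// costs O(n) even though a single dequeue can take O(n).
long countStackOps(int n) {
    stack<int> S1, S2;
    long ops = 0;
    for (int i = 0; i < n; ++i) { S2.push(i); ++ops; }  // n enqueues
    for (int i = 0; i < n; ++i) {                        // n dequeues
        if (S1.empty()) {
            while (!S2.empty()) { S1.push(S2.top()); S2.pop(); ops += 2; }
        }
        if (S1.top() != i) return -1;  // FIFO order violated (never happens)
        S1.pop();
        ++ops;
    }
    return ops;
}
```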

It should be pretty clear now how we can combine these two hints to create a queue that supports the min operation. We implement the queue using two stacks, each of which supports the min operation; the overall minimum for the queue is just the smaller of the two stacks’ minimums. Here’s a simple implementation:

```cpp
#include <algorithm>
#include <stack>
#include <stdexcept>
#include <utility>
using namespace std;

template <typename T>
class QueueWithMin {
private:
    // Each stack stores (element, min of that stack up to this element).
    stack<pair<T, T>> S1, S2;

public:
    void enqueue(const T& x) {
        S2.push(make_pair(x, S2.empty() ? x : min(x, S2.top().second)));
    }

    T dequeue() {
        if (S1.empty()) {
            // Move everything from S2 to S1, recomputing the running
            // minimum relative to S1 as we go.
            while (!S2.empty()) {
                pair<T, T> top = S2.top();
                S2.pop();
                top.second = S1.empty() ? top.first : min(top.first, S1.top().second);
                S1.push(top);
            }
        }
        if (S1.empty())
            throw runtime_error("queue empty");
        pair<T, T> top = S1.top();
        S1.pop();
        return top.first;
    }

    T getMin() const {
        if (empty())
            throw runtime_error("queue empty");
        if (S1.empty()) return S2.top().second;
        if (S2.empty()) return S1.top().second;
        return min(S1.top().second, S2.top().second);
    }

    bool empty() const {
        return S1.empty() && S2.empty();
    }

    int size() const {
        return (int)(S1.size() + S2.size());
    }
};
```

Pretty cool, huh! Enqueue and min operations have worst-case O(1) time complexity, while dequeue has amortized O(1) complexity.

A friend of mine described another solution for this problem, one that’s much easier to discover on your own. Remember how we calculated the largest rectangle in a histogram using a stack? Before pushing a bar onto the stack, we got rid of all the larger bars. That’s exactly what we need here: the auxiliary data structure maintains a non-decreasing subsequence of the queue’s contents, with the smallest element always at the front. When the dequeued element is the same as the current minimum, we have to remove that front element (this is why the sequence is non-decreasing rather than strictly increasing; we need to keep duplicates). We can’t use a stack as the auxiliary data structure because we need to delete from both ends, but that’s exactly what a deque is for. Here’s a simple implementation:

```cpp
#include <deque>
#include <queue>
#include <stdexcept>
using namespace std;

template <typename T>
class QueueWithMin {
private:
    queue<T> Q;  // the actual elements, in FIFO order
    deque<T> D;  // non-decreasing candidates for the minimum

public:
    void enqueue(const T& x) {
        Q.push(x);
        // Drop candidates larger than x: they can never be the minimum
        // again, because x outlives them. Keep equal values (duplicates).
        while (!D.empty() && D.back() > x)
            D.pop_back();
        D.push_back(x);
    }

    T dequeue() {
        if (Q.empty())
            throw runtime_error("queue empty");
        if (D.front() == Q.front())
            D.pop_front();
        T front = Q.front();
        Q.pop();
        return front;
    }

    T getMin() const {
        if (D.empty())
            throw runtime_error("queue empty");
        return D.front();
    }

    int size() const {
        return (int)Q.size();
    }

    bool empty() const {
        return Q.empty();
    }
};
```

In this case, dequeue and min operations have worst-case O(1) time complexity, while enqueue has amortized O(1) complexity (each element enters and leaves the deque at most once). The two solutions are pretty similar.
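As a quick sanity check, the deque-based queue can be exercised end to end. The class is restated compactly here (under a hypothetical name, `MinQueue`) so the snippet compiles on its own; the behavior matches the implementation above:

```cpp
#include <deque>
#include <queue>
#include <stdexcept>
using namespace std;

// Compact restatement of the deque-based queue-with-min for a standalone demo.
template <typename T>
class MinQueue {
    queue<T> Q;
    deque<T> D;  // non-decreasing minimum candidates
public:
    void enqueue(const T& x) {
        Q.push(x);
        while (!D.empty() && D.back() > x) D.pop_back();
        D.push_back(x);
    }
    T dequeue() {
        if (Q.empty()) throw runtime_error("queue empty");
        if (D.front() == Q.front()) D.pop_front();
        T front = Q.front();
        Q.pop();
        return front;
    }
    T getMin() const {
        if (D.empty()) throw runtime_error("queue empty");
        return D.front();
    }
};
```

Enqueuing 3, 1, 2 makes the minimum 1; it stays 1 after the 3 is dequeued and only becomes 2 once the 1 itself leaves the queue.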