:mod:`heapq` --- Heap queue algorithm
=====================================

.. module:: heapq
   :synopsis: Heap queue algorithm (a.k.a. priority queue).
.. moduleauthor:: Kevin O'Connor
.. sectionauthor:: Guido van Rossum <guido@python.org>
.. sectionauthor:: François Pinard
.. sectionauthor:: Raymond Hettinger

.. versionadded:: 2.3

**Source code:** :source:`Lib/heapq.py`

--------------

This module provides an implementation of the heap queue algorithm, also known
as the priority queue algorithm.

Heaps are binary trees for which every parent node has a value less than or
equal to any of its children.  This implementation uses arrays for which
``heap[k] <= heap[2*k+1]`` and ``heap[k] <= heap[2*k+2]`` for all *k*, counting
elements from zero.  For the sake of comparison, non-existing elements are
considered to be infinite.  The interesting property of a heap is that its
smallest element is always the root, ``heap[0]``.

The API below differs from textbook heap algorithms in two aspects: (a) We use
zero-based indexing.  This makes the relationship between the index for a node
and the indexes for its children slightly less obvious, but is more suitable
since Python uses zero-based indexing.  (b) Our pop method returns the
smallest item, not the largest (called a "min heap" in textbooks; a "max heap"
is more common in texts because of its suitability for in-place sorting).

These two aspects make it possible to view the heap as a regular Python list
without surprises: ``heap[0]`` is the smallest item, and ``heap.sort()``
maintains the heap invariant!

To create a heap, use a list initialized to ``[]``, or transform a populated
list into a heap via the function :func:`heapify`.

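For instance, either route yields a valid heap (a minimal sketch; the data
values are arbitrary)::

   >>> from heapq import heapify, heappush
   >>> heap = []                      # start from an empty list ...
   >>> for value in [5, 1, 3]:
   ...     heappush(heap, value)      # ... pushing items one at a time
   ...
   >>> data = [5, 1, 3]
   >>> heapify(data)                  # or heapify a populated list in-place
   >>> heap[0] == data[0] == 1        # either way, the smallest item is at index 0
   True
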
The following functions are provided:


.. function:: heappush(heap, item)

   Push the value *item* onto the *heap*, maintaining the heap invariant.


.. function:: heappop(heap)

   Pop and return the smallest item from the *heap*, maintaining the heap
   invariant.  If the heap is empty, :exc:`IndexError` is raised.

.. function:: heappushpop(heap, item)

   Push *item* on the heap, then pop and return the smallest item from the
   *heap*.  The combined action runs more efficiently than :func:`heappush`
   followed by a separate call to :func:`heappop`.

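   For example (a minimal sketch; the numbers are arbitrary)::

      >>> h = [1, 3, 5]
      >>> heappushpop(h, 4)   # 4 is pushed, then the smallest item (1) is popped
      1
      >>> heappushpop(h, 0)   # 0 is <= every item, so it comes straight back
      0
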
   .. versionadded:: 2.6

.. function:: heapify(x)

   Transform list *x* into a heap, in-place, in linear time.

.. function:: heapreplace(heap, item)

   Pop and return the smallest item from the *heap*, and also push the new
   *item*.  The heap size doesn't change.  If the heap is empty,
   :exc:`IndexError` is raised.

   This one-step operation is more efficient than a :func:`heappop` followed
   by :func:`heappush` and can be more appropriate when using a fixed-size
   heap.  The pop/push combination always returns an element from the heap
   and replaces it with *item*.

   The value returned may be larger than the *item* added.  If that isn't
   desired, consider using :func:`heappushpop` instead.  Its push/pop
   combination returns the smaller of the two values, leaving the larger value
   on the heap.
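
   A common use is a fixed-size heap tracking the *n* largest values seen so
   far (a minimal sketch; the stream of values is arbitrary)::

      >>> heap = [0, 0, 0]                    # three placeholder values
      >>> for value in [7, 2, 9, 4, 8]:
      ...     if value > heap[0]:
      ...         _ = heapreplace(heap, value)    # evict the current minimum
      ...
      >>> sorted(heap)
      [7, 8, 9]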

The module also offers three general purpose functions based on heaps.

.. function:: merge(*iterables)

   Merge multiple sorted inputs into a single sorted output (for example,
   merge timestamped entries from multiple log files).  Returns an
   :term:`iterator` over the sorted values.

   Similar to ``sorted(itertools.chain(*iterables))`` but returns an iterable,
   does not pull the data into memory all at once, and assumes that each of
   the input streams is already sorted (smallest to largest).
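
   A minimal sketch (the input lists are arbitrary)::

      >>> list(merge([1, 3, 5], [2, 4, 6], [0, 7]))
      [0, 1, 2, 3, 4, 5, 6, 7]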

   .. versionadded:: 2.6

.. function:: nlargest(n, iterable[, key])

   Return a list with the *n* largest elements from the dataset defined by
   *iterable*.  *key*, if provided, specifies a function of one argument that
   is used to extract a comparison key from each element in the iterable (for
   example, ``key=str.lower``).  Equivalent to:  ``sorted(iterable, key=key,
   reverse=True)[:n]``.

   .. versionadded:: 2.4

   .. versionchanged:: 2.5
      Added the optional *key* argument.

.. function:: nsmallest(n, iterable[, key])

   Return a list with the *n* smallest elements from the dataset defined by
   *iterable*.  *key*, if provided, specifies a function of one argument that
   is used to extract a comparison key from each element in the iterable (for
   example, ``key=str.lower``).  Equivalent to:  ``sorted(iterable,
   key=key)[:n]``.

   .. versionadded:: 2.4

   .. versionchanged:: 2.5
      Added the optional *key* argument.

The latter two functions perform best for smaller values of *n*.  For larger
values, it is more efficient to use the :func:`sorted` function.  Also, when
``n==1``, it is more efficient to use the built-in :func:`min` and :func:`max`
functions.
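
For instance, both functions accept a *key* to rank structured records (a
minimal sketch; the task records are invented for illustration)::

   >>> tasks = [('sleep', 3), ('eat', 1), ('code', 5), ('test', 4)]
   >>> nlargest(2, tasks, key=lambda pair: pair[1])
   [('code', 5), ('test', 4)]
   >>> nsmallest(1, tasks, key=lambda pair: pair[1])
   [('eat', 1)]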
Basic Examples
--------------

A `heapsort <http://en.wikipedia.org/wiki/Heapsort>`_ can be implemented by
pushing all values onto a heap and then popping off the smallest values one at
a time::

   >>> def heapsort(iterable):
   ...     'Equivalent to sorted(iterable)'
   ...     h = []
   ...     for value in iterable:
   ...         heappush(h, value)
   ...     return [heappop(h) for i in range(len(h))]
   ...
   >>> heapsort([1, 3, 5, 7, 9, 2, 4, 6, 8, 0])
   [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

Heap elements can be tuples.  This is useful for assigning comparison values
(such as task priorities) alongside the main record being tracked::

   >>> h = []
   >>> heappush(h, (5, 'write code'))
   >>> heappush(h, (7, 'release product'))
   >>> heappush(h, (1, 'write spec'))
   >>> heappush(h, (3, 'create tests'))
   >>> heappop(h)
   (1, 'write spec')


Priority Queue Implementation Notes
-----------------------------------

A `priority queue <http://en.wikipedia.org/wiki/Priority_queue>`_ is a common
use for a heap, and it presents several implementation challenges:

* Sort stability:  how do you get two tasks with equal priorities to be
  returned in the order they were originally added?

* In the future with Python 3, tuple comparison breaks for (priority, task)
  pairs if the priorities are equal and the tasks do not have a default
  comparison order.

* If the priority of a task changes, how do you move it to a new position in
  the heap?

* Or if a pending task needs to be deleted, how do you find it and remove it
  from the queue?

A solution to the first two challenges is to store entries as a 3-element list
including the priority, an entry count, and the task.  The entry count serves
as a tie-breaker so that two tasks with the same priority are returned in the
order they were added.  And since no two entry counts are the same, the tuple
comparison will never attempt to directly compare two tasks.

The remaining challenges revolve around finding a pending task and making
changes to its priority or removing it entirely.  Finding a task can be done
with a dictionary pointing to an entry in the queue.

Removing the entry or changing its priority is more difficult because it would
break the heap structure invariants.  So, a possible solution is to mark the
existing entry as removed and add a new entry with the revised priority::

   pq = []                         # list of entries arranged in a heap
   entry_finder = {}               # mapping of tasks to entries
   REMOVED = '<removed-task>'      # placeholder for a removed task
   counter = itertools.count()     # unique sequence count

   def add_task(task, priority=0):
       'Add a new task or update the priority of an existing task'
       if task in entry_finder:
           remove_task(task)
       count = next(counter)
       entry = [priority, count, task]
       entry_finder[task] = entry
       heappush(pq, entry)

   def remove_task(task):
       'Mark an existing task as REMOVED.  Raise KeyError if not found.'
       entry = entry_finder.pop(task)
       entry[-1] = REMOVED

   def pop_task():
       'Remove and return the lowest priority task. Raise KeyError if empty.'
       while pq:
           priority, count, task = heappop(pq)
           if task is not REMOVED:
               del entry_finder[task]
               return task
       raise KeyError('pop from an empty priority queue')
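
A brief usage sketch of the helpers above (the task names are invented)::

   add_task('write code', 5)
   add_task('write spec', 1)
   add_task('write spec', 7)    # marks the old entry REMOVED, re-adds at 7
   pop_task()                   # returns 'write code' (priority 5 < 7)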


Theory
------

Heaps are arrays for which ``a[k] <= a[2*k+1]`` and ``a[k] <= a[2*k+2]`` for
all *k*, counting elements from 0.  For the sake of comparison, non-existing
elements are considered to be infinite.  The interesting property of a heap is
that ``a[0]`` is always its smallest element.

The strange invariant above is meant to be an efficient memory representation
for a tournament.  The numbers below are *k*, not ``a[k]``::

                                   0

                  1                                 2

          3               4                5               6

      7       8       9       10      11      12      13      14

    15 16   17 18   19 20   21 22   23 24   25 26   27 28   29 30

In the tree above, each cell *k* is topping ``2*k+1`` and ``2*k+2``.  In a
usual binary tournament we see in sports, each cell is the winner over the two
cells it tops, and we can trace the winner down the tree to see all opponents
s/he had.  However, in many computer applications of such tournaments, we do
not need to trace the history of a winner.  To be more memory efficient, when
a winner is promoted, we try to replace it by something else at a lower level,
and the rule becomes that a cell and the two cells it tops contain three
different items, but the top cell "wins" over the two topped cells.

If this heap invariant is protected at all times, index 0 is clearly the
overall winner.  The simplest algorithmic way to remove it and find the "next"
winner is to move some loser (let's say cell 30 in the diagram above) into the
0 position, and then percolate this new 0 down the tree, exchanging values,
until the invariant is re-established.  This is clearly logarithmic in the
total number of items in the tree.  By iterating over all items, you get an
O(n log n) sort.
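
A minimal sketch of that percolation step (an illustration of the idea, not
the module's actual implementation)::

   def percolate_down(heap, pos):
       'Restore the heap invariant by sinking the item at *pos*.'
       end = len(heap)
       while True:
           child = 2 * pos + 1                  # left child
           if child >= end:
               break                            # *pos* is a leaf
           # prefer the smaller of the two children, if a right child exists
           if child + 1 < end and heap[child + 1] < heap[child]:
               child += 1
           if heap[pos] <= heap[child]:
               break                            # invariant re-established
           heap[pos], heap[child] = heap[child], heap[pos]
           pos = child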
A nice feature of this sort is that you can efficiently insert new items while
the sort is going on, provided that the inserted items are not "better" than
the last 0'th element you extracted.  This is especially useful in simulation
contexts, where the tree holds all incoming events, and the "win" condition
means the smallest scheduled time.  When an event schedules other events for
execution, they are scheduled into the future, so they can easily go into the
heap.  So, a heap is a good structure for implementing schedulers (this is
what I used for my MIDI sequencer :-).

Various structures for implementing schedulers have been extensively studied,
and heaps are good for this, as they are reasonably speedy, the speed is
almost constant, and the worst case is not much different from the average
case.  However, there are other representations which are more efficient
overall, yet the worst cases might be terrible.

Heaps are also very useful in big disk sorts.  You most probably all know that
a big sort implies producing "runs" (which are pre-sorted sequences, whose
size is usually related to the amount of CPU memory), followed by merging
passes for these runs, and the merging is often very cleverly organised [#]_.
It is very important that the initial sort produces the longest runs possible.
Tournaments are a good way to achieve that.  If, using all the memory
available to hold a tournament, you replace and percolate items that happen to
fit the current run, you'll produce runs which are twice the size of the
memory for random input, and much better for input fuzzily ordered.

Moreover, if you output the 0'th item on disk and get an input which may not
fit in the current tournament (because the value "wins" over the last output
value), it cannot fit in the heap, so the size of the heap decreases.  The
freed memory could be cleverly reused immediately for progressively building a
second heap, which grows at exactly the same rate the first heap is melting.
When the first heap completely vanishes, you switch heaps and start a new run.
Clever and quite effective!
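
A rough sketch of that run-producing idea, often called replacement selection
(simplified here to produce a single run; the function name and sample usage
are invented for illustration)::

   from heapq import heapify, heappop, heapreplace

   def one_run(items, memory=4):
       'Yield one sorted run using a heap holding at most *memory* items.'
       heap = items[:memory]
       heapify(heap)
       deferred = []                    # items held back for the next run
       for item in items[memory:]:
           if item >= heap[0]:          # fits the current run:
               yield heapreplace(heap, item)    # emit the minimum, keep item
           else:                        # "wins" over the pending output:
               deferred.append(item)            # defer it to the next run
       while heap:                      # drain the remaining tournament
           yield heappop(heap)
       # a full implementation would now seed the next run with *deferred*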

In a word, heaps are useful memory structures to know.  I use them in a few
applications, and I think it is good to keep a 'heap' module around. :-)

.. rubric:: Footnotes

.. [#] The disk balancing algorithms which are current, nowadays, are more
   annoying than clever, and this is a consequence of the seeking capabilities
   of the disks.  On devices which cannot seek, like big tape drives, the
   story was quite different, and one had to be very clever to ensure (far in
   advance) that each tape movement would be the most effective possible (that
   is, would best participate at "progressing" the merge).  Some tapes were
   even able to read backwards, and this was also used to avoid the rewinding
   time.  Believe me, really good tape sorts were quite spectacular to watch!
   From all times, sorting has always been a Great Art! :-)