:mod:`multiprocessing` --- Process-based "threading" interface
==============================================================

.. module:: multiprocessing
   :synopsis: Process-based "threading" interface.

.. versionadded:: 2.6


Introduction
------------

:mod:`multiprocessing` is a package that supports spawning processes using an
API similar to the :mod:`threading` module. The :mod:`multiprocessing` package
offers both local and remote concurrency, effectively side-stepping the
:term:`Global Interpreter Lock` by using subprocesses instead of threads. Due
to this, the :mod:`multiprocessing` module allows the programmer to fully
leverage multiple processors on a given machine. It runs on both Unix and
Windows.

.. warning::

   Some of this package's functionality requires a functioning shared semaphore
   implementation on the host operating system. Without one, the
   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
   import it will result in an :exc:`ImportError`. See
   :issue:`3770` for additional information.

.. note::

   Functionality within this package requires that the ``__main__`` module be
   importable by the children. This is covered in :ref:`multiprocessing-programming`,
   but it is worth pointing out here. It means that some examples, such
   as the :class:`multiprocessing.Pool` examples, will not work in the
   interactive interpreter. For example::

       >>> from multiprocessing import Pool
       >>> p = Pool(5)
       >>> def f(x):
       ...     return x*x
       ...
       >>> p.map(f, [1,2,3])
       Process PoolWorker-1:
       Process PoolWorker-2:
       Process PoolWorker-3:
       Traceback (most recent call last):
       Traceback (most recent call last):
       Traceback (most recent call last):
       AttributeError: 'module' object has no attribute 'f'
       AttributeError: 'module' object has no attribute 'f'
       AttributeError: 'module' object has no attribute 'f'

   (If you try this it will actually output three full tracebacks
   interleaved in a semi-random fashion, and then you may have to
   stop the master process somehow.)


The :class:`Process` class
~~~~~~~~~~~~~~~~~~~~~~~~~~

In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
object and then calling its :meth:`~Process.start` method. :class:`Process`
follows the API of :class:`threading.Thread`. A trivial example of a
multiprocess program is ::

    from multiprocessing import Process

    def f(name):
        print 'hello', name

    if __name__ == '__main__':
        p = Process(target=f, args=('bob',))
        p.start()
        p.join()

To show the individual process IDs involved, here is an expanded example::

    from multiprocessing import Process
    import os

    def info(title):
        print title
        print 'module name:', __name__
        if hasattr(os, 'getppid'):  # only available on Unix
            print 'parent process:', os.getppid()
        print 'process id:', os.getpid()

    def f(name):
        info('function f')
        print 'hello', name

    if __name__ == '__main__':
        info('main line')
        p = Process(target=f, args=('bob',))
        p.start()
        p.join()

For an explanation of why (on Windows) the ``if __name__ == '__main__'`` part is
necessary, see :ref:`multiprocessing-programming`.


Exchanging objects between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:mod:`multiprocessing` supports two types of communication channel between
processes:

**Queues**

   The :class:`~multiprocessing.Queue` class is a near clone of :class:`Queue.Queue`. For
   example::

      from multiprocessing import Process, Queue

      def f(q):
          q.put([42, None, 'hello'])

      if __name__ == '__main__':
          q = Queue()
          p = Process(target=f, args=(q,))
          p.start()
          print q.get()    # prints "[42, None, 'hello']"
          p.join()

   Queues are thread and process safe.

**Pipes**

   The :func:`Pipe` function returns a pair of connection objects connected by a
   pipe which by default is duplex (two-way). For example::

      from multiprocessing import Process, Pipe

      def f(conn):
          conn.send([42, None, 'hello'])
          conn.close()

      if __name__ == '__main__':
          parent_conn, child_conn = Pipe()
          p = Process(target=f, args=(child_conn,))
          p.start()
          print parent_conn.recv()   # prints "[42, None, 'hello']"
          p.join()

   The two connection objects returned by :func:`Pipe` represent the two ends of
   the pipe. Each connection object has :meth:`~Connection.send` and
   :meth:`~Connection.recv` methods (among others). Note that data in a pipe
   may become corrupted if two processes (or threads) try to read from or write
   to the *same* end of the pipe at the same time. Of course there is no risk
   of corruption from processes using different ends of the pipe at the same
   time.


Synchronization between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:mod:`multiprocessing` contains equivalents of all the synchronization
primitives from :mod:`threading`. For instance one can use a lock to ensure
that only one process prints to standard output at a time::

    from multiprocessing import Process, Lock

    def f(l, i):
        l.acquire()
        print 'hello world', i
        l.release()

    if __name__ == '__main__':
        lock = Lock()

        for num in range(10):
            Process(target=f, args=(lock, num)).start()

Without using the lock, output from the different processes is liable to get all
mixed up.


Sharing state between processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned above, when doing concurrent programming it is usually best to
avoid using shared state as far as possible. This is particularly true when
using multiple processes.

However, if you really do need to use some shared data then
:mod:`multiprocessing` provides a couple of ways of doing so.

**Shared memory**

   Data can be stored in a shared memory map using :class:`Value` or
   :class:`Array`. For example, the following code ::

      from multiprocessing import Process, Value, Array

      def f(n, a):
          n.value = 3.1415927
          for i in range(len(a)):
              a[i] = -a[i]

      if __name__ == '__main__':
          num = Value('d', 0.0)
          arr = Array('i', range(10))

          p = Process(target=f, args=(num, arr))
          p.start()
          p.join()

          print num.value
          print arr[:]

   will print ::

      3.1415927
      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]

   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
   double precision float and ``'i'`` indicates a signed integer. These shared
   objects will be process and thread-safe.

   For more flexibility in using shared memory one can use the
   :mod:`multiprocessing.sharedctypes` module which supports the creation of
   arbitrary ctypes objects allocated from shared memory.

**Server process**

   A manager object returned by :func:`Manager` controls a server process which
   holds Python objects and allows other processes to manipulate them using
   proxies.

   A manager returned by :func:`Manager` will support types :class:`list`,
   :class:`dict`, :class:`Namespace`, :class:`Lock`, :class:`RLock`,
   :class:`Semaphore`, :class:`BoundedSemaphore`, :class:`Condition`,
   :class:`Event`, :class:`~multiprocessing.Queue`, :class:`Value` and :class:`Array`. For
   example, ::

      from multiprocessing import Process, Manager

      def f(d, l):
          d[1] = '1'
          d['2'] = 2
          d[0.25] = None
          l.reverse()

      if __name__ == '__main__':
          manager = Manager()

          d = manager.dict()
          l = manager.list(range(10))

          p = Process(target=f, args=(d, l))
          p.start()
          p.join()

          print d
          print l

   will print ::

      {0.25: None, 1: '1', '2': 2}
      [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]

   Server process managers are more flexible than using shared memory objects
   because they can be made to support arbitrary object types. Also, a single
   manager can be shared by processes on different computers over a network.
   They are, however, slower than using shared memory.


Using a pool of workers
~~~~~~~~~~~~~~~~~~~~~~~

The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes. It has methods which allow tasks to be offloaded to the worker
processes in a few different ways.

For example::

    from multiprocessing import Pool

    def f(x):
        return x*x

    if __name__ == '__main__':
        pool = Pool(processes=4)              # start 4 worker processes
        result = pool.apply_async(f, [10])    # evaluate "f(10)" asynchronously
        print result.get(timeout=1)           # prints "100" unless your computer is *very* slow
        print pool.map(f, range(10))          # prints "[0, 1, 4,..., 81]"

Note that the methods of a pool should only ever be used by the
process which created it.


Reference
---------

The :mod:`multiprocessing` package mostly replicates the API of the
:mod:`threading` module.
299
300
301:class:`Process` and exceptions
302~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
303
[391]304.. class:: Process(group=None, target=None, name=None, args=(), kwargs={})
[2]305
306 Process objects represent activity that is run in a separate process. The
307 :class:`Process` class has equivalents of all the methods of
308 :class:`threading.Thread`.
309
310 The constructor should always be called with keyword arguments. *group*
311 should always be ``None``; it exists solely for compatibility with
312 :class:`threading.Thread`. *target* is the callable object to be invoked by
313 the :meth:`run()` method. It defaults to ``None``, meaning nothing is
314 called. *name* is the process name. By default, a unique name is constructed
315 of the form 'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' where N\
316 :sub:`1`,N\ :sub:`2`,...,N\ :sub:`k` is a sequence of integers whose length
317 is determined by the *generation* of the process. *args* is the argument
318 tuple for the target invocation. *kwargs* is a dictionary of keyword
319 arguments for the target invocation. By default, no arguments are passed to
320 *target*.
321
322 If a subclass overrides the constructor, it must make sure it invokes the
323 base class constructor (:meth:`Process.__init__`) before doing anything else
324 to the process.
325
326 .. method:: run()
327
328 Method representing the process's activity.
329
330 You may override this method in a subclass. The standard :meth:`run`
331 method invokes the callable object passed to the object's constructor as
332 the target argument, if any, with sequential and keyword arguments taken
333 from the *args* and *kwargs* arguments, respectively.
334
335 .. method:: start()
336
337 Start the process's activity.
338
339 This must be called at most once per process object. It arranges for the
340 object's :meth:`run` method to be invoked in a separate process.
341
342 .. method:: join([timeout])
343
344 Block the calling thread until the process whose :meth:`join` method is
345 called terminates or until the optional timeout occurs.
346
347 If *timeout* is ``None`` then there is no timeout.
348
349 A process can be joined many times.
350
351 A process cannot join itself because this would cause a deadlock. It is
352 an error to attempt to join a process before it has been started.
353
354 .. attribute:: name
355
356 The process's name.
357
358 The name is a string used for identification purposes only. It has no
359 semantics. Multiple processes may be given the same name. The initial
360 name is set by the constructor.
361
   .. method:: is_alive()
363
364 Return whether the process is alive.
365
366 Roughly, a process object is alive from the moment the :meth:`start`
367 method returns until the child process terminates.
368
369 .. attribute:: daemon
370
371 The process's daemon flag, a Boolean value. This must be set before
372 :meth:`start` is called.
373
374 The initial value is inherited from the creating process.
375
376 When a process exits, it attempts to terminate all of its daemonic child
377 processes.
378
379 Note that a daemonic process is not allowed to create child processes.
380 Otherwise a daemonic process would leave its children orphaned if it gets
381 terminated when its parent process exits. Additionally, these are **not**
382 Unix daemons or services, they are normal processes that will be
[391]383 terminated (and not joined) if non-daemonic processes have exited.
[2]384
[391]385 In addition to the :class:`threading.Thread` API, :class:`Process` objects
[2]386 also support the following attributes and methods:
387
388 .. attribute:: pid
389
390 Return the process ID. Before the process is spawned, this will be
391 ``None``.
392
393 .. attribute:: exitcode
394
395 The child's exit code. This will be ``None`` if the process has not yet
396 terminated. A negative value *-N* indicates that the child was terminated
397 by signal *N*.
398
399 .. attribute:: authkey
400
401 The process's authentication key (a byte string).
402
403 When :mod:`multiprocessing` is initialized the main process is assigned a
[391]404 random string using :func:`os.urandom`.
[2]405
406 When a :class:`Process` object is created, it will inherit the
407 authentication key of its parent process, although this may be changed by
408 setting :attr:`authkey` to another byte string.
409
410 See :ref:`multiprocessing-auth-keys`.
411
412 .. method:: terminate()
413
414 Terminate the process. On Unix this is done using the ``SIGTERM`` signal;
[391]415 on Windows :c:func:`TerminateProcess` is used. Note that exit handlers and
[2]416 finally clauses, etc., will not be executed.
417
418 Note that descendant processes of the process will *not* be terminated --
419 they will simply become orphaned.
420
421 .. warning::
422
423 If this method is used when the associated process is using a pipe or
424 queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes. Similarly, if the process has
426 acquired a lock or semaphore etc. then terminating it is liable to
427 cause other processes to deadlock.
428
[391]429 Note that the :meth:`start`, :meth:`join`, :meth:`is_alive`,
430 :meth:`terminate` and :attr:`exitcode` methods should only be called by
431 the process that created the process object.
[2]432
   Example usage of some of the methods of :class:`Process`:

   .. doctest::

       >>> import multiprocessing, time, signal
       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
       >>> print p, p.is_alive()
       <Process(Process-1, initial)> False
       >>> p.start()
       >>> print p, p.is_alive()
       <Process(Process-1, started)> True
       >>> p.terminate()
       >>> time.sleep(0.1)
       >>> print p, p.is_alive()
       <Process(Process-1, stopped[SIGTERM])> False
       >>> p.exitcode == -signal.SIGTERM
       True
450
451
452.. exception:: BufferTooShort
453
454 Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
455 buffer object is too small for the message read.
456
457 If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
458 the message as a byte string.
459
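A minimal sketch (added here for illustration; it is not part of the original
reference text and the variable names are arbitrary) of catching this exception
from :meth:`Connection.recv_bytes_into`::

    import array
    from multiprocessing import Pipe, BufferTooShort

    a, b = Pipe()
    a.send_bytes('x' * 100)               # a 100 byte message
    buf = array.array('c', '\0' * 10)     # deliberately too small

    try:
        b.recv_bytes_into(buf)
    except BufferTooShort as e:
        # the complete message is still available as e.args[0]
        print 'buffer too short; message length was', len(e.args[0])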
460
461Pipes and Queues
462~~~~~~~~~~~~~~~~
463
464When using multiple processes, one generally uses message passing for
465communication between processes and avoids having to use any synchronization
466primitives like locks.
467
468For passing messages one can use :func:`Pipe` (for a connection between two
469processes) or a queue (which allows multiple producers and consumers).
470
[391]471The :class:`~multiprocessing.Queue`, :class:`multiprocessing.queues.SimpleQueue` and :class:`JoinableQueue` types are multi-producer,
[2]472multi-consumer FIFO queues modelled on the :class:`Queue.Queue` class in the
[391]473standard library. They differ in that :class:`~multiprocessing.Queue` lacks the
[2]474:meth:`~Queue.Queue.task_done` and :meth:`~Queue.Queue.join` methods introduced
475into Python 2.5's :class:`Queue.Queue` class.
476
477If you use :class:`JoinableQueue` then you **must** call
478:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
[391]479semaphore used to count the number of unfinished tasks may eventually overflow,
[2]480raising an exception.
481
482Note that one can also create a shared queue by using a manager object -- see
483:ref:`multiprocessing-managers`.
484
485.. note::
486
487 :mod:`multiprocessing` uses the usual :exc:`Queue.Empty` and
488 :exc:`Queue.Full` exceptions to signal a timeout. They are not available in
489 the :mod:`multiprocessing` namespace so you need to import them from
490 :mod:`Queue`.
491
[391]492.. note::
[2]493
[391]494 When an object is put on a queue, the object is pickled and a
495 background thread later flushes the pickled data to an underlying
496 pipe. This has some consequences which are a little surprising,
497 but should not cause any practical difficulties -- if they really
498 bother you then you can instead use a queue created with a
499 :ref:`manager <multiprocessing-managers>`.
500
501 (1) After putting an object on an empty queue there may be an
502 infinitesimal delay before the queue's :meth:`~Queue.empty`
503 method returns :const:`False` and :meth:`~Queue.get_nowait` can
504 return without raising :exc:`Queue.Empty`.
505
506 (2) If multiple processes are enqueuing objects, it is possible for
507 the objects to be received at the other end out-of-order.
508 However, objects enqueued by the same process will always be in
509 the expected order with respect to each other.
510
[2]511.. warning::
512
513 If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
[391]514 while it is trying to use a :class:`~multiprocessing.Queue`, then the data in the queue is
515 likely to become corrupted. This may cause any other process to get an
[2]516 exception when it tries to use the queue later on.
517
518.. warning::
519
520 As mentioned above, if a child process has put items on a queue (and it has
[391]521 not used :meth:`JoinableQueue.cancel_join_thread
522 <multiprocessing.Queue.cancel_join_thread>`), then that process will
[2]523 not terminate until all buffered items have been flushed to the pipe.
524
525 This means that if you try joining that process you may get a deadlock unless
526 you are sure that all items which have been put on the queue have been
527 consumed. Similarly, if the child process is non-daemonic then the parent
528 process may hang on exit when it tries to join all its non-daemonic children.
529
530 Note that a queue created using a manager does not have this issue. See
531 :ref:`multiprocessing-programming`.
532
533For an example of the usage of queues for interprocess communication see
534:ref:`multiprocessing-examples`.
535
536
537.. function:: Pipe([duplex])
538
539 Returns a pair ``(conn1, conn2)`` of :class:`Connection` objects representing
540 the ends of a pipe.
541
542 If *duplex* is ``True`` (the default) then the pipe is bidirectional. If
543 *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
544 used for receiving messages and ``conn2`` can only be used for sending
545 messages.
546
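For illustration (this example is not part of the original text), a
unidirectional pipe might be used as follows::

    from multiprocessing import Pipe

    # the first connection may only receive, the second may only send
    recv_conn, send_conn = Pipe(duplex=False)
    send_conn.send('ping')
    print recv_conn.recv()     # prints "ping"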
547
548.. class:: Queue([maxsize])
549
550 Returns a process shared queue implemented using a pipe and a few
551 locks/semaphores. When a process first puts an item on the queue a feeder
552 thread is started which transfers objects from a buffer into the pipe.
553
554 The usual :exc:`Queue.Empty` and :exc:`Queue.Full` exceptions from the
555 standard library's :mod:`Queue` module are raised to signal timeouts.
556
[391]557 :class:`~multiprocessing.Queue` implements all the methods of :class:`Queue.Queue` except for
[2]558 :meth:`~Queue.Queue.task_done` and :meth:`~Queue.Queue.join`.
559
560 .. method:: qsize()
561
562 Return the approximate size of the queue. Because of
563 multithreading/multiprocessing semantics, this number is not reliable.
564
565 Note that this may raise :exc:`NotImplementedError` on Unix platforms like
566 Mac OS X where ``sem_getvalue()`` is not implemented.
567
568 .. method:: empty()
569
570 Return ``True`` if the queue is empty, ``False`` otherwise. Because of
571 multithreading/multiprocessing semantics, this is not reliable.
572
573 .. method:: full()
574
575 Return ``True`` if the queue is full, ``False`` otherwise. Because of
576 multithreading/multiprocessing semantics, this is not reliable.
577
[391]578 .. method:: put(obj[, block[, timeout]])
[2]579
[391]580 Put obj into the queue. If the optional argument *block* is ``True``
[2]581 (the default) and *timeout* is ``None`` (the default), block if necessary until
582 a free slot is available. If *timeout* is a positive number, it blocks at
583 most *timeout* seconds and raises the :exc:`Queue.Full` exception if no
584 free slot was available within that time. Otherwise (*block* is
585 ``False``), put an item on the queue if a free slot is immediately
586 available, else raise the :exc:`Queue.Full` exception (*timeout* is
587 ignored in that case).
588
[391]589 .. method:: put_nowait(obj)
[2]590
[391]591 Equivalent to ``put(obj, False)``.
[2]592
593 .. method:: get([block[, timeout]])
594
595 Remove and return an item from the queue. If optional args *block* is
596 ``True`` (the default) and *timeout* is ``None`` (the default), block if
597 necessary until an item is available. If *timeout* is a positive number,
598 it blocks at most *timeout* seconds and raises the :exc:`Queue.Empty`
599 exception if no item was available within that time. Otherwise (block is
600 ``False``), return an item if one is immediately available, else raise the
601 :exc:`Queue.Empty` exception (*timeout* is ignored in that case).
602
603 .. method:: get_nowait()
604
605 Equivalent to ``get(False)``.
606
[391]607 :class:`~multiprocessing.Queue` has a few additional methods not found in
[2]608 :class:`Queue.Queue`. These methods are usually unnecessary for most
609 code:
610
611 .. method:: close()
612
613 Indicate that no more data will be put on this queue by the current
614 process. The background thread will quit once it has flushed all buffered
615 data to the pipe. This is called automatically when the queue is garbage
616 collected.
617
618 .. method:: join_thread()
619
620 Join the background thread. This can only be used after :meth:`close` has
621 been called. It blocks until the background thread exits, ensuring that
622 all data in the buffer has been flushed to the pipe.
623
624 By default if a process is not the creator of the queue then on exit it
625 will attempt to join the queue's background thread. The process can call
626 :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
627
628 .. method:: cancel_join_thread()
629
630 Prevent :meth:`join_thread` from blocking. In particular, this prevents
631 the background thread from being joined automatically when the process
632 exits -- see :meth:`join_thread`.
633
[391]634 A better name for this method might be
      ``allow_exit_without_flush()``. It is likely to cause enqueued
      data to be lost, and you almost certainly will not need to use it.
637 It is really only there if you need the current process to exit
638 immediately without waiting to flush enqueued data to the
639 underlying pipe, and you don't care about lost data.
[2]640
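Putting the basic methods together, here is a small sketch (added for
illustration; the ``worker`` helper is hypothetical) of a bounded wait for a
result::

    from Queue import Empty                  # multiprocessing reuses this exception
    from multiprocessing import Process, Queue

    def worker(q):
        q.put('result')

    if __name__ == '__main__':
        q = Queue()
        p = Process(target=worker, args=(q,))
        p.start()
        try:
            print q.get(True, 5)             # block for at most 5 seconds
        except Empty:
            print 'no result within 5 seconds'
        p.join()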
[391]641
642.. class:: multiprocessing.queues.SimpleQueue()
643
644 It is a simplified :class:`~multiprocessing.Queue` type, very close to a locked :class:`Pipe`.
645
646 .. method:: empty()
647
648 Return ``True`` if the queue is empty, ``False`` otherwise.
649
650 .. method:: get()
651
652 Remove and return an item from the queue.
653
654 .. method:: put(item)
655
656 Put *item* into the queue.
657
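A brief sketch (added for illustration, not part of the original text; note
that the class lives in the internal :mod:`multiprocessing.queues` module)::

    from multiprocessing import Process
    from multiprocessing.queues import SimpleQueue

    def worker(q):
        q.put('hello from the child')

    if __name__ == '__main__':
        q = SimpleQueue()
        p = Process(target=worker, args=(q,))
        p.start()
        print q.get()          # blocks until the child has put something
        p.join()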
658
[2]659.. class:: JoinableQueue([maxsize])
660
[391]661 :class:`JoinableQueue`, a :class:`~multiprocessing.Queue` subclass, is a queue which
[2]662 additionally has :meth:`task_done` and :meth:`join` methods.
663
664 .. method:: task_done()
665
666 Indicate that a formerly enqueued task is complete. Used by queue consumer
667 threads. For each :meth:`~Queue.get` used to fetch a task, a subsequent
668 call to :meth:`task_done` tells the queue that the processing on the task
669 is complete.
670
[391]671 If a :meth:`~Queue.Queue.join` is currently blocking, it will resume when all
[2]672 items have been processed (meaning that a :meth:`task_done` call was
673 received for every item that had been :meth:`~Queue.put` into the queue).
674
675 Raises a :exc:`ValueError` if called more times than there were items
676 placed in the queue.
677
678
679 .. method:: join()
680
681 Block until all items in the queue have been gotten and processed.
682
683 The count of unfinished tasks goes up whenever an item is added to the
684 queue. The count goes down whenever a consumer thread calls
685 :meth:`task_done` to indicate that the item was retrieved and all work on
686 it is complete. When the count of unfinished tasks drops to zero,
[391]687 :meth:`~Queue.Queue.join` unblocks.
[2]688
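A minimal sketch (added for illustration; the ``consumer`` helper is
hypothetical) of the usual :meth:`task_done`/:meth:`join` pattern::

    from multiprocessing import Process, JoinableQueue

    def consumer(q):
        while True:
            item = q.get()
            print 'processing', item
            q.task_done()                # one task_done() call per get()

    if __name__ == '__main__':
        q = JoinableQueue()
        p = Process(target=consumer, args=(q,))
        p.daemon = True                  # let the consumer die with the main process
        p.start()
        for i in range(5):
            q.put(i)
        q.join()                         # blocks until every item has been marked done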
689
690Miscellaneous
691~~~~~~~~~~~~~
692
693.. function:: active_children()
694
695 Return list of all live children of the current process.
696
   Calling this has the side effect of "joining" any processes which have
   already finished.
699
700.. function:: cpu_count()
701
702 Return the number of CPUs in the system. May raise
703 :exc:`NotImplementedError`.
704
705.. function:: current_process()
706
707 Return the :class:`Process` object corresponding to the current process.
708
709 An analogue of :func:`threading.current_thread`.
710
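For illustration (this example is not part of the original text), these helpers
can be combined as follows::

    import multiprocessing

    def f():
        print 'child is', multiprocessing.current_process().name

    if __name__ == '__main__':
        # cpu_count() may raise NotImplementedError on some platforms
        print 'this machine has', multiprocessing.cpu_count(), 'CPUs'
        p = multiprocessing.Process(target=f)
        p.start()
        print 'live children:', multiprocessing.active_children()
        p.join()
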
711.. function:: freeze_support()
712
713 Add support for when a program which uses :mod:`multiprocessing` has been
714 frozen to produce a Windows executable. (Has been tested with **py2exe**,
715 **PyInstaller** and **cx_Freeze**.)
716
717 One needs to call this function straight after the ``if __name__ ==
718 '__main__'`` line of the main module. For example::
719
      from multiprocessing import Process, freeze_support

      def f():
          print 'hello world!'

      if __name__ == '__main__':
          freeze_support()
          Process(target=f).start()
728
729 If the ``freeze_support()`` line is omitted then trying to run the frozen
730 executable will raise :exc:`RuntimeError`.
731
732 If the module is being run normally by the Python interpreter then
733 :func:`freeze_support` has no effect.
734
735.. function:: set_executable()
736
737 Sets the path of the Python interpreter to use when starting a child process.
738 (By default :data:`sys.executable` is used). Embedders will probably need to
   do something like ::
740
      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
[2]742
743 before they can create child processes. (Windows only)
744
745
746.. note::
747
748 :mod:`multiprocessing` contains no analogues of
749 :func:`threading.active_count`, :func:`threading.enumerate`,
750 :func:`threading.settrace`, :func:`threading.setprofile`,
751 :class:`threading.Timer`, or :class:`threading.local`.
752
753
754Connection Objects
755~~~~~~~~~~~~~~~~~~
756
757Connection objects allow the sending and receiving of picklable objects or
758strings. They can be thought of as message oriented connected sockets.
759
[391]760Connection objects are usually created using :func:`Pipe` -- see also
[2]761:ref:`multiprocessing-listeners-clients`.
762
763.. class:: Connection
764
765 .. method:: send(obj)
766
767 Send an object to the other end of the connection which should be read
768 using :meth:`recv`.
769
[391]770 The object must be picklable. Very large pickles (approximately 32 MB+,
771 though it depends on the OS) may raise a :exc:`ValueError` exception.
[2]772
773 .. method:: recv()
774
775 Return an object sent from the other end of the connection using
      :meth:`send`. Blocks until there is something to receive. Raises
777 :exc:`EOFError` if there is nothing left to receive
[2]778 and the other end was closed.
779
780 .. method:: fileno()
781
[391]782 Return the file descriptor or handle used by the connection.
[2]783
784 .. method:: close()
785
786 Close the connection.
787
788 This is called automatically when the connection is garbage collected.
789
790 .. method:: poll([timeout])
791
792 Return whether there is any data available to be read.
793
794 If *timeout* is not specified then it will return immediately. If
795 *timeout* is a number then this specifies the maximum time in seconds to
796 block. If *timeout* is ``None`` then an infinite timeout is used.
797
798 .. method:: send_bytes(buffer[, offset[, size]])
799
800 Send byte data from an object supporting the buffer interface as a
801 complete message.
802
803 If *offset* is given then data is read from that position in *buffer*. If
[391]804 *size* is given then that many bytes will be read from buffer. Very large
805 buffers (approximately 32 MB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.
[2]807
808 .. method:: recv_bytes([maxlength])
809
810 Return a complete message of byte data sent from the other end of the
[391]811 connection as a string. Blocks until there is something to receive.
812 Raises :exc:`EOFError` if there is nothing left
[2]813 to receive and the other end has closed.
814
815 If *maxlength* is specified and the message is longer than *maxlength*
816 then :exc:`IOError` is raised and the connection will no longer be
817 readable.
818
819 .. method:: recv_bytes_into(buffer[, offset])
820
821 Read into *buffer* a complete message of byte data sent from the other end
[391]822 of the connection and return the number of bytes in the message. Blocks
823 until there is something to receive. Raises
[2]824 :exc:`EOFError` if there is nothing left to receive and the other end was
825 closed.
826
827 *buffer* must be an object satisfying the writable buffer interface. If
828 *offset* is given then the message will be written into the buffer from
829 that position. Offset must be a non-negative integer less than the
830 length of *buffer* (in bytes).
831
832 If the buffer is too short then a :exc:`BufferTooShort` exception is
833 raised and the complete message is available as ``e.args[0]`` where ``e``
834 is the exception instance.
835
836
For example:

.. doctest::

    >>> from multiprocessing import Pipe
    >>> a, b = Pipe()
    >>> a.send([1, 'hello', None])
    >>> b.recv()
    [1, 'hello', None]
    >>> b.send_bytes('thank you')
    >>> a.recv_bytes()
    'thank you'
    >>> import array
    >>> arr1 = array.array('i', range(5))
    >>> arr2 = array.array('i', [0] * 10)
    >>> a.send_bytes(arr1)
    >>> count = b.recv_bytes_into(arr2)
    >>> assert count == len(arr1) * arr1.itemsize
    >>> arr2
    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
857
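As a further illustrative sketch (not part of the original text),
:meth:`~Connection.poll` can be used to wait for data with a timeout instead of
blocking in :meth:`~Connection.recv`::

    import time
    from multiprocessing import Process, Pipe

    def slow_sender(conn):
        time.sleep(2)
        conn.send('finally')
        conn.close()

    if __name__ == '__main__':
        parent_conn, child_conn = Pipe()
        p = Process(target=slow_sender, args=(child_conn,))
        p.start()
        while not parent_conn.poll(0.5):     # wait up to half a second at a time
            print 'nothing yet...'
        print parent_conn.recv()             # prints "finally"
        p.join()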
858
859.. warning::
860
861 The :meth:`Connection.recv` method automatically unpickles the data it
862 receives, which can be a security risk unless you can trust the process
863 which sent the message.
864
865 Therefore, unless the connection object was produced using :func:`Pipe` you
866 should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
867 methods after performing some sort of authentication. See
868 :ref:`multiprocessing-auth-keys`.
869
870.. warning::
871
872 If a process is killed while it is trying to read or write to a pipe then
873 the data in the pipe is likely to become corrupted, because it may become
874 impossible to be sure where the message boundaries lie.
875
876
877Synchronization primitives
878~~~~~~~~~~~~~~~~~~~~~~~~~~
879
880Generally synchronization primitives are not as necessary in a multiprocess
program as they are in a multithreaded program. See the documentation for
the :mod:`threading` module.
883
884Note that one can also create synchronization primitives by using a manager
885object -- see :ref:`multiprocessing-managers`.
886
887.. class:: BoundedSemaphore([value])
888
889 A bounded semaphore object: a clone of :class:`threading.BoundedSemaphore`.
890
[391]891 (On Mac OS X, this is indistinguishable from :class:`Semaphore` because
[2]892 ``sem_getvalue()`` is not implemented on that platform).
893
894.. class:: Condition([lock])
895
896 A condition variable: a clone of :class:`threading.Condition`.
897
898 If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
899 object from :mod:`multiprocessing`.
900
.. class:: Event()

   A clone of :class:`threading.Event`.
   The :meth:`~threading.Event.wait` method returns the state of the internal
   semaphore on exit, so it will always return ``True`` except if a timeout is
   given and the operation times out.

   .. versionchanged:: 2.7
      Previously, :meth:`~threading.Event.wait` always returned ``None``.
910
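A short sketch (added for illustration, not part of the original text) of the
value returned by :meth:`wait`::

    from multiprocessing import Process, Event

    def setter(e):
        e.set()

    if __name__ == '__main__':
        e = Event()
        print e.wait(0.1)        # not set yet, times out: prints False
        p = Process(target=setter, args=(e,))
        p.start()
        print e.wait()           # prints True once the child has set the event
        p.join()
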
[2]911.. class:: Lock()
912
913 A non-recursive lock object: a clone of :class:`threading.Lock`.
914
915.. class:: RLock()
916
917 A recursive lock object: a clone of :class:`threading.RLock`.
918
919.. class:: Semaphore([value])
920
[391]921 A semaphore object: a clone of :class:`threading.Semaphore`.
[2]922
923.. note::
924
925 The :meth:`acquire` method of :class:`BoundedSemaphore`, :class:`Lock`,
926 :class:`RLock` and :class:`Semaphore` has a timeout parameter not supported
927 by the equivalents in :mod:`threading`. The signature is
928 ``acquire(block=True, timeout=None)`` with keyword parameters being
929 acceptable. If *block* is ``True`` and *timeout* is not ``None`` then it
930 specifies a timeout in seconds. If *block* is ``False`` then *timeout* is
931 ignored.
932
[391]933 On Mac OS X, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
934 a timeout will emulate that function's behavior using a sleeping loop.
[2]935
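For illustration (this example is not part of the original text), the extended
:meth:`acquire` signature can be used like this::

    import time
    from multiprocessing import Process, Lock

    def hold(lock):
        with lock:                       # hold the lock for two seconds
            time.sleep(2)

    if __name__ == '__main__':
        lock = Lock()
        p = Process(target=hold, args=(lock,))
        p.start()
        time.sleep(0.5)                  # give the child time to grab the lock
        if lock.acquire(block=True, timeout=0.5):
            print 'got the lock'
            lock.release()
        else:
            print 'timed out waiting for the lock'
        p.join()
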
936.. note::
937
938 If the SIGINT signal generated by Ctrl-C arrives while the main thread is
939 blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
940 :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
941 or :meth:`Condition.wait` then the call will be immediately interrupted and
942 :exc:`KeyboardInterrupt` will be raised.
943
944 This differs from the behaviour of :mod:`threading` where SIGINT will be
945 ignored while the equivalent blocking calls are in progress.
946
947
948Shared :mod:`ctypes` Objects
949~~~~~~~~~~~~~~~~~~~~~~~~~~~~
950
951It is possible to create shared objects using shared memory which can be
952inherited by child processes.
953
954.. function:: Value(typecode_or_type, *args[, lock])
955
956 Return a :mod:`ctypes` object allocated from shared memory. By default the
957 return value is actually a synchronized wrapper for the object.
958
959 *typecode_or_type* determines the type of the returned object: it is either a
960 ctypes type or a one character typecode of the kind used by the :mod:`array`
961 module. *\*args* is passed on to the constructor for the type.
962
963 If *lock* is ``True`` (the default) then a new lock object is created to
964 synchronize access to the value. If *lock* is a :class:`Lock` or
965 :class:`RLock` object then that will be used to synchronize access to the
966 value. If *lock* is ``False`` then access to the returned object will not be
967 automatically protected by a lock, so it will not necessarily be
968 "process-safe".
969
970 Note that *lock* is a keyword-only argument.
971
972.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
973
974 Return a ctypes array allocated from shared memory. By default the return
975 value is actually a synchronized wrapper for the array.
976
977 *typecode_or_type* determines the type of the elements of the returned array:
978 it is either a ctypes type or a one character typecode of the kind used by
979 the :mod:`array` module. If *size_or_initializer* is an integer, then it
980 determines the length of the array, and the array will be initially zeroed.
981 Otherwise, *size_or_initializer* is a sequence which is used to initialize
982 the array and whose length determines the length of the array.
983
984 If *lock* is ``True`` (the default) then a new lock object is created to
985 synchronize access to the value. If *lock* is a :class:`Lock` or
986 :class:`RLock` object then that will be used to synchronize access to the
987 value. If *lock* is ``False`` then access to the returned object will not be
988 automatically protected by a lock, so it will not necessarily be
989 "process-safe".
990
991 Note that *lock* is a keyword only argument.
992
993 Note that an array of :data:`ctypes.c_char` has *value* and *raw*
994 attributes which allow one to use it to store and retrieve strings.
995
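As an illustrative sketch (not part of the original text), the lock returned by
the wrapper's ``get_lock()`` method is needed for compound updates such as
``+=``, which are not atomic even though individual reads and writes are
protected::

    from multiprocessing import Process, Value

    def add_many(counter):
        for _ in range(1000):
            with counter.get_lock():     # += is a read-modify-write sequence
                counter.value += 1

    if __name__ == '__main__':
        counter = Value('i', 0)
        workers = [Process(target=add_many, args=(counter,)) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print counter.value              # prints 4000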
996
997The :mod:`multiprocessing.sharedctypes` module
998>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
999
1000.. module:: multiprocessing.sharedctypes
1001 :synopsis: Allocate ctypes objects from shared memory.
1002
1003The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
1004:mod:`ctypes` objects from shared memory which can be inherited by child
1005processes.
1006
1007.. note::
1008
1009 Although it is possible to store a pointer in shared memory remember that
1010 this will refer to a location in the address space of a specific process.
1011 However, the pointer is quite likely to be invalid in the context of a second
1012 process and trying to dereference the pointer from the second process may
1013 cause a crash.
1014
1015.. function:: RawArray(typecode_or_type, size_or_initializer)
1016
1017 Return a ctypes array allocated from shared memory.
1018
1019 *typecode_or_type* determines the type of the elements of the returned array:
1020 it is either a ctypes type or a one character typecode of the kind used by
1021 the :mod:`array` module. If *size_or_initializer* is an integer then it
1022 determines the length of the array, and the array will be initially zeroed.
1023 Otherwise *size_or_initializer* is a sequence which is used to initialize the
1024 array and whose length determines the length of the array.
1025
1026 Note that setting and getting an element is potentially non-atomic -- use
1027 :func:`Array` instead to make sure that access is automatically synchronized
1028 using a lock.
1029
1030.. function:: RawValue(typecode_or_type, *args)
1031
1032 Return a ctypes object allocated from shared memory.
1033
1034 *typecode_or_type* determines the type of the returned object: it is either a
1035 ctypes type or a one character typecode of the kind used by the :mod:`array`
1036 module. *\*args* is passed on to the constructor for the type.
1037
1038 Note that setting and getting the value is potentially non-atomic -- use
1039 :func:`Value` instead to make sure that access is automatically synchronized
1040 using a lock.
1041
1042 Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1043 attributes which allow one to use it to store and retrieve strings -- see
1044 documentation for :mod:`ctypes`.
1045
1046.. function:: Array(typecode_or_type, size_or_initializer, *args[, lock])
1047
1048 The same as :func:`RawArray` except that depending on the value of *lock* a
1049 process-safe synchronization wrapper may be returned instead of a raw ctypes
1050 array.
1051
1052 If *lock* is ``True`` (the default) then a new lock object is created to
[391]1053 synchronize access to the value. If *lock* is a
1054 :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
1055 then that will be used to synchronize access to the
[2]1056 value. If *lock* is ``False`` then access to the returned object will not be
1057 automatically protected by a lock, so it will not necessarily be
1058 "process-safe".
1059
1060 Note that *lock* is a keyword-only argument.
1061
1062.. function:: Value(typecode_or_type, *args[, lock])
1063
1064 The same as :func:`RawValue` except that depending on the value of *lock* a
1065 process-safe synchronization wrapper may be returned instead of a raw ctypes
1066 object.
1067
1068 If *lock* is ``True`` (the default) then a new lock object is created to
[391]1069 synchronize access to the value. If *lock* is a :class:`~multiprocessing.Lock` or
1070 :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
[2]1071 value. If *lock* is ``False`` then access to the returned object will not be
1072 automatically protected by a lock, so it will not necessarily be
1073 "process-safe".
1074
1075 Note that *lock* is a keyword-only argument.
1076
1077.. function:: copy(obj)
1078
1079 Return a ctypes object allocated from shared memory which is a copy of the
1080 ctypes object *obj*.
1081
1082.. function:: synchronized(obj[, lock])
1083
1084 Return a process-safe wrapper object for a ctypes object which uses *lock* to
1085 synchronize access. If *lock* is ``None`` (the default) then a
1086 :class:`multiprocessing.RLock` object is created automatically.
1087
1088 A synchronized wrapper will have two methods in addition to those of the
1089 object it wraps: :meth:`get_obj` returns the wrapped object and
1090 :meth:`get_lock` returns the lock object used for synchronization.
1091
1092 Note that accessing the ctypes object through the wrapper can be a lot slower
1093 than accessing the raw ctypes object.
1094
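A brief sketch (added for illustration, not part of the original text) of
:func:`copy` and :func:`synchronized`::

    from multiprocessing.sharedctypes import RawValue, copy, synchronized

    raw = RawValue('i', 7)           # no lock; not process-safe on its own
    dup = copy(raw)                  # an independent shared copy
    wrapped = synchronized(raw)      # process-safe wrapper around the same memory

    with wrapped.get_lock():
        wrapped.get_obj().value += 1

    print raw.value, dup.value       # prints "8 7"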
1095
1096The table below compares the syntax for creating shared ctypes objects from
1097shared memory with the normal ctypes syntax. (In the table ``MyStruct`` is some
1098subclass of :class:`ctypes.Structure`.)
1099
==================== ========================== ===========================
ctypes               sharedctypes using type    sharedctypes using typecode
==================== ========================== ===========================
c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
==================== ========================== ===========================
1108
1109
Below is an example where a number of ctypes objects are modified by a child
process::

    from multiprocessing import Process, Lock
    from multiprocessing.sharedctypes import Value, Array
    from ctypes import Structure, c_double

    class Point(Structure):
        _fields_ = [('x', c_double), ('y', c_double)]

    def modify(n, x, s, A):
        n.value **= 2
        x.value **= 2
        s.value = s.value.upper()
        for a in A:
            a.x **= 2
            a.y **= 2

    if __name__ == '__main__':
        lock = Lock()

        n = Value('i', 7)
        x = Value(c_double, 1.0/3.0, lock=False)
        s = Array('c', 'hello world', lock=lock)
        A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)

        p = Process(target=modify, args=(n, x, s, A))
        p.start()
        p.join()

        print n.value
        print x.value
        print s.value
        print [(a.x, a.y) for a in A]


.. highlightlang:: none

The results printed are ::

    49
    0.1111111111111111
    HELLO WORLD
    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]

.. highlightlang:: python
1156
1157
1158.. _multiprocessing-managers:
1159
1160Managers
1161~~~~~~~~
1162
1163Managers provide a way to create data which can be shared between different
1164processes. A manager object controls a server process which manages *shared
1165objects*. Other processes can access the shared objects by using proxies.
1166
1167.. function:: multiprocessing.Manager()
1168
1169 Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1170 can be used for sharing objects between processes. The returned manager
1171 object corresponds to a spawned child process and has methods which will
1172 create shared objects and return corresponding proxies.
1173
.. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.
1176
Manager processes will be shut down as soon as they are garbage collected or
1178their parent process exits. The manager classes are defined in the
1179:mod:`multiprocessing.managers` module:
1180
1181.. class:: BaseManager([address[, authkey]])
1182
1183 Create a BaseManager object.
1184
[391]1185 Once created one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
[2]1186 that the manager object refers to a started manager process.
1187
1188 *address* is the address on which the manager process listens for new
1189 connections. If *address* is ``None`` then an arbitrary one is chosen.
1190
1191 *authkey* is the authentication key which will be used to check the validity
   of incoming connections to the server process. If *authkey* is ``None`` then
   ``current_process().authkey`` is used. Otherwise *authkey* is used and it
   must be a string.
1195
[391]1196 .. method:: start([initializer[, initargs]])
[2]1197
[391]1198 Start a subprocess to start the manager. If *initializer* is not ``None``
1199 then the subprocess will call ``initializer(*initargs)`` when it starts.
[2]1200
1201 .. method:: get_server()
1202
1203 Returns a :class:`Server` object which represents the actual server under
1204 the control of the Manager. The :class:`Server` object supports the
1205 :meth:`serve_forever` method::
1206
1207 >>> from multiprocessing.managers import BaseManager
1208 >>> manager = BaseManager(address=('', 50000), authkey='abc')
1209 >>> server = manager.get_server()
1210 >>> server.serve_forever()
1211
1212 :class:`Server` additionally has an :attr:`address` attribute.
1213
1214 .. method:: connect()
1215
1216 Connect a local manager object to a remote manager process::
1217
1218 >>> from multiprocessing.managers import BaseManager
1219 >>> m = BaseManager(address=('127.0.0.1', 5000), authkey='abc')
1220 >>> m.connect()
1221
1222 .. method:: shutdown()
1223
1224 Stop the process used by the manager. This is only available if
1225 :meth:`start` has been used to start the server process.
1226
1227 This can be called multiple times.
1228
1229 .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1230
1231 A classmethod which can be used for registering a type or callable with
1232 the manager class.
1233
1234 *typeid* is a "type identifier" which is used to identify a particular
1235 type of shared object. This must be a string.
1236
1237 *callable* is a callable used for creating objects for this type
1238 identifier. If a manager instance will be created using the
1239 :meth:`from_address` classmethod or if the *create_method* argument is
1240 ``False`` then this can be left as ``None``.
1241
1242 *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1243 proxies for shared objects with this *typeid*. If ``None`` then a proxy
1244 class is created automatically.
1245
1246 *exposed* is used to specify a sequence of method names which proxies for
1247 this typeid should be allowed to access using
      :meth:`BaseProxy._callmethod`. (If *exposed* is ``None`` then
1249 :attr:`proxytype._exposed_` is used instead if it exists.) In the case
1250 where no exposed list is specified, all "public methods" of the shared
1251 object will be accessible. (Here a "public method" means any attribute
[391]1252 which has a :meth:`~object.__call__` method and whose name does not begin
1253 with ``'_'``.)
[2]1254
1255 *method_to_typeid* is a mapping used to specify the return type of those
1256 exposed methods which should return a proxy. It maps method names to
1257 typeid strings. (If *method_to_typeid* is ``None`` then
1258 :attr:`proxytype._method_to_typeid_` is used instead if it exists.) If a
1259 method's name is not a key of this mapping or if the mapping is ``None``
1260 then the object returned by the method will be copied by value.
1261
1262 *create_method* determines whether a method should be created with name
1263 *typeid* which can be used to tell the server process to create a new
1264 shared object and return a proxy for it. By default it is ``True``.
1265
1266 :class:`BaseManager` instances also have one read-only property:
1267
1268 .. attribute:: address
1269
1270 The address used by the manager.
1271
1272
1273.. class:: SyncManager
1274
1275 A subclass of :class:`BaseManager` which can be used for the synchronization
1276 of processes. Objects of this type are returned by
1277 :func:`multiprocessing.Manager`.
1278
1279 It also supports creation of shared lists and dictionaries.
1280
1281 .. method:: BoundedSemaphore([value])
1282
1283 Create a shared :class:`threading.BoundedSemaphore` object and return a
1284 proxy for it.
1285
1286 .. method:: Condition([lock])
1287
1288 Create a shared :class:`threading.Condition` object and return a proxy for
1289 it.
1290
1291 If *lock* is supplied then it should be a proxy for a
1292 :class:`threading.Lock` or :class:`threading.RLock` object.
1293
1294 .. method:: Event()
1295
1296 Create a shared :class:`threading.Event` object and return a proxy for it.
1297
1298 .. method:: Lock()
1299
1300 Create a shared :class:`threading.Lock` object and return a proxy for it.
1301
1302 .. method:: Namespace()
1303
1304 Create a shared :class:`Namespace` object and return a proxy for it.
1305
1306 .. method:: Queue([maxsize])
1307
1308 Create a shared :class:`Queue.Queue` object and return a proxy for it.
1309
1310 .. method:: RLock()
1311
1312 Create a shared :class:`threading.RLock` object and return a proxy for it.
1313
1314 .. method:: Semaphore([value])
1315
1316 Create a shared :class:`threading.Semaphore` object and return a proxy for
1317 it.
1318
1319 .. method:: Array(typecode, sequence)
1320
1321 Create an array and return a proxy for it.
1322
1323 .. method:: Value(typecode, value)
1324
1325 Create an object with a writable ``value`` attribute and return a proxy
1326 for it.
1327
1328 .. method:: dict()
1329 dict(mapping)
1330 dict(sequence)
1331
1332 Create a shared ``dict`` object and return a proxy for it.
1333
1334 .. method:: list()
1335 list(sequence)
1336
1337 Create a shared ``list`` object and return a proxy for it.
1338
   .. note::

      Modifications to mutable values or items in dict and list proxies will not
      be propagated through the manager, because the proxy has no way of knowing
      when its values or items are modified. To modify such an item, you can
      re-assign the modified object to the container proxy::

         # create a list proxy and append a mutable object (a dictionary)
         lproxy = manager.list()
         lproxy.append({})
         # now mutate the dictionary
         d = lproxy[0]
         d['a'] = 1
         d['b'] = 2
         # at this point, the changes to d are not yet synced, but by
         # reassigning the dictionary, the proxy is notified of the change
         lproxy[0] = d
1356
1357
[2]1358Namespace objects
1359>>>>>>>>>>>>>>>>>
1360
1361A namespace object has no public methods, but does have writable attributes.
1362Its representation shows the values of its attributes.
1363
1364However, when using a proxy for a namespace object, an attribute beginning with
1365``'_'`` will be an attribute of the proxy and not an attribute of the referent:
1366
.. doctest::

   >>> manager = multiprocessing.Manager()
   >>> Global = manager.Namespace()
   >>> Global.x = 10
   >>> Global.y = 'hello'
   >>> Global._z = 12.3    # this is an attribute of the proxy
   >>> print Global
   Namespace(x=10, y='hello')
1376
1377
1378Customized managers
1379>>>>>>>>>>>>>>>>>>>
1380
1381To create one's own manager, one creates a subclass of :class:`BaseManager` and
[391]1382uses the :meth:`~BaseManager.register` classmethod to register new types or
[2]1383callables with the manager class. For example::
1384
    from multiprocessing.managers import BaseManager

    class MathsClass(object):
        def add(self, x, y):
            return x + y
        def mul(self, x, y):
            return x * y

    class MyManager(BaseManager):
        pass

    MyManager.register('Maths', MathsClass)

    if __name__ == '__main__':
        manager = MyManager()
        manager.start()
        maths = manager.Maths()
        print maths.add(4, 3)         # prints 7
        print maths.mul(7, 8)         # prints 56
1404
1405
Using a remote manager
>>>>>>>>>>>>>>>>>>>>>>

It is possible to run a manager server on one machine and have clients use it
from other machines (assuming that the firewalls involved allow it).

Running the following commands creates a server for a single shared queue which
remote clients can access::

   >>> from multiprocessing.managers import BaseManager
   >>> import Queue
   >>> queue = Queue.Queue()
   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue', callable=lambda:queue)
   >>> m = QueueManager(address=('', 50000), authkey='abracadabra')
   >>> s = m.get_server()
   >>> s.serve_forever()

One client can access the server as follows::

   >>> from multiprocessing.managers import BaseManager
   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue')
   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
   >>> m.connect()
   >>> queue = m.get_queue()
   >>> queue.put('hello')

Another client can also use it::

   >>> from multiprocessing.managers import BaseManager
   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue')
   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey='abracadabra')
   >>> m.connect()
   >>> queue = m.get_queue()
   >>> queue.get()
   'hello'

Local processes can also access that queue, using the code from above on the
client to access it remotely::

   >>> from multiprocessing import Process, Queue
   >>> from multiprocessing.managers import BaseManager
   >>> class Worker(Process):
   ...     def __init__(self, q):
   ...         self.q = q
   ...         super(Worker, self).__init__()
   ...     def run(self):
   ...         self.q.put('local hello')
   ...
   >>> queue = Queue()
   >>> w = Worker(queue)
   >>> w.start()
   >>> class QueueManager(BaseManager): pass
   ...
   >>> QueueManager.register('get_queue', callable=lambda: queue)
   >>> m = QueueManager(address=('', 50000), authkey='abracadabra')
   >>> s = m.get_server()
   >>> s.serve_forever()
1466
1467Proxy Objects
1468~~~~~~~~~~~~~
1469
1470A proxy is an object which *refers* to a shared object which lives (presumably)
1471in a different process. The shared object is said to be the *referent* of the
1472proxy. Multiple proxy objects may have the same referent.
1473
1474A proxy object has methods which invoke corresponding methods of its referent
1475(although not every method of the referent will necessarily be available through
1476the proxy). A proxy can usually be used in most of the same ways that its
1477referent can:
1478
.. doctest::

   >>> from multiprocessing import Manager
   >>> manager = Manager()
   >>> l = manager.list([i*i for i in range(10)])
   >>> print l
   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
   >>> print repr(l)
   <ListProxy object, typeid 'list' at 0x...>
   >>> l[4]
   16
   >>> l[2:5]
   [4, 9, 16]
1492
1493Notice that applying :func:`str` to a proxy will return the representation of
1494the referent, whereas applying :func:`repr` will return the representation of
1495the proxy.
1496
1497An important feature of proxy objects is that they are picklable so they can be
1498passed between processes. Note, however, that if a proxy is sent to the
1499corresponding manager's process then unpickling it will produce the referent
1500itself. This means, for example, that one shared object can contain a second:
1501
.. doctest::

   >>> a = manager.list()
   >>> b = manager.list()
   >>> a.append(b)         # referent of a now contains referent of b
   >>> print a, b
   [[]] []
   >>> b.append('hello')
   >>> print a, b
   [['hello']] ['hello']
1512
1513.. note::
1514
1515 The proxy types in :mod:`multiprocessing` do nothing to support comparisons
1516 by value. So, for instance, we have:
1517
1518 .. doctest::
1519
1520 >>> manager.list([1,2,3]) == [1,2,3]
1521 False
1522
1523 One should just use a copy of the referent instead when making comparisons.
1524
1525.. class:: BaseProxy
1526
1527 Proxy objects are instances of subclasses of :class:`BaseProxy`.
1528
1529 .. method:: _callmethod(methodname[, args[, kwds]])
1530
1531 Call and return the result of a method of the proxy's referent.
1532
1533 If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
1534
1535 proxy._callmethod(methodname, args, kwds)
1536
1537 will evaluate the expression ::
1538
1539 getattr(obj, methodname)(*args, **kwds)
1540
1541 in the manager's process.
1542
1543 The returned value will be a copy of the result of the call or a proxy to
1544 a new shared object -- see documentation for the *method_to_typeid*
1545 argument of :meth:`BaseManager.register`.
1546
      If an exception is raised by the call, then it is re-raised by
      :meth:`_callmethod`.  If some other exception is raised in the manager's
      process then this is converted into a :exc:`RemoteError` exception and is
      raised by :meth:`_callmethod`.
1551
      Note in particular that an exception will be raised if *methodname* has
      not been *exposed*.
1554
1555 An example of the usage of :meth:`_callmethod`:
1556
1557 .. doctest::
1558
1559 >>> l = manager.list(range(10))
1560 >>> l._callmethod('__len__')
1561 10
1562 >>> l._callmethod('__getslice__', (2, 7)) # equiv to `l[2:7]`
1563 [2, 3, 4, 5, 6]
1564 >>> l._callmethod('__getitem__', (20,)) # equiv to `l[20]`
1565 Traceback (most recent call last):
1566 ...
1567 IndexError: list index out of range
1568
1569 .. method:: _getvalue()
1570
1571 Return a copy of the referent.
1572
1573 If the referent is unpicklable then this will raise an exception.
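
      For example, assuming ``manager`` from the examples above (the list
      contents here are arbitrary):

      .. doctest::

         >>> l = manager.list(range(4))
         >>> l._getvalue()
         [0, 1, 2, 3]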
1574
1575 .. method:: __repr__
1576
1577 Return a representation of the proxy object.
1578
1579 .. method:: __str__
1580
1581 Return the representation of the referent.
1582
1583
1584Cleanup
1585>>>>>>>
1586
1587A proxy object uses a weakref callback so that when it gets garbage collected it
1588deregisters itself from the manager which owns its referent.
1589
1590A shared object gets deleted from the manager process when there are no longer
1591any proxies referring to it.
1592
1593
1594Process Pools
1595~~~~~~~~~~~~~
1596
1597.. module:: multiprocessing.pool
1598 :synopsis: Create pools of processes.
1599
1600One can create a pool of processes which will carry out tasks submitted to it
1601with the :class:`Pool` class.
1602
[391]1603.. class:: multiprocessing.Pool([processes[, initializer[, initargs[, maxtasksperchild]]]])
[2]1604
1605 A process pool object which controls a pool of worker processes to which jobs
1606 can be submitted. It supports asynchronous results with timeouts and
1607 callbacks and has a parallel map implementation.
1608
1609 *processes* is the number of worker processes to use. If *processes* is
1610 ``None`` then the number returned by :func:`cpu_count` is used. If
1611 *initializer* is not ``None`` then each worker process will call
1612 ``initializer(*initargs)`` when it starts.
1613
[391]1614 Note that the methods of the pool object should only be called by
1615 the process which created the pool.
1616
1617 .. versionadded:: 2.7
1618 *maxtasksperchild* is the number of tasks a worker process can complete
1619 before it will exit and be replaced with a fresh worker process, to enable
      unused resources to be freed.  The default *maxtasksperchild* is
      ``None``, which means worker processes will live as long as the pool.
1622
1623 .. note::
1624
1625 Worker processes within a :class:`Pool` typically live for the complete
1626 duration of the Pool's work queue. A frequent pattern found in other
      systems (such as Apache, mod_wsgi, etc.) to free resources held by
      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and a new process
      being spawned to replace the old one.  The *maxtasksperchild*
1631 argument to the :class:`Pool` exposes this ability to the end user.
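
   For example, a pool whose worker processes are each replaced after
   completing ten tasks might be created as follows (a sketch; the worker
   count and *maxtasksperchild* value are only illustrative)::

      from multiprocessing import Pool

      def f(x):
          return x*x

      if __name__ == '__main__':
          pool = Pool(processes=2, maxtasksperchild=10)
          print pool.map(f, range(100))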
1632
[2]1633 .. method:: apply(func[, args[, kwds]])
1634
[391]1635 Equivalent of the :func:`apply` built-in function. It blocks until the
1636 result is ready, so :meth:`apply_async` is better suited for performing
1637 work in parallel. Additionally, *func* is only executed in one of the
1638 workers of the pool.
[2]1639
1640 .. method:: apply_async(func[, args[, kwds[, callback]]])
1641
1642 A variant of the :meth:`apply` method which returns a result object.
1643
1644 If *callback* is specified then it should be a callable which accepts a
1645 single argument. When the result becomes ready *callback* is applied to
1646 it (unless the call failed). *callback* should complete immediately since
1647 otherwise the thread which handles the results will get blocked.
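
      For instance, a callback can be used to collect results as they become
      ready (a minimal sketch; the ``collect`` function and the ``pool`` and
      ``f`` objects are illustrative)::

         results = []

         def collect(result):
             results.append(result)

         pool.apply_async(f, (10,), callback=collect)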
1648
1649 .. method:: map(func, iterable[, chunksize])
1650
1651 A parallel equivalent of the :func:`map` built-in function (it supports only
[391]1652 one *iterable* argument though). It blocks until the result is ready.
[2]1653
1654 This method chops the iterable into a number of chunks which it submits to
1655 the process pool as separate tasks. The (approximate) size of these
1656 chunks can be specified by setting *chunksize* to a positive integer.
1657
1658 .. method:: map_async(func, iterable[, chunksize[, callback]])
1659
1660 A variant of the :meth:`.map` method which returns a result object.
1661
1662 If *callback* is specified then it should be a callable which accepts a
1663 single argument. When the result becomes ready *callback* is applied to
1664 it (unless the call failed). *callback* should complete immediately since
1665 otherwise the thread which handles the results will get blocked.
1666
1667 .. method:: imap(func, iterable[, chunksize])
1668
1669 An equivalent of :func:`itertools.imap`.
1670
1671 The *chunksize* argument is the same as the one used by the :meth:`.map`
1672 method. For very long iterables using a large value for *chunksize* can
[391]1673 make the job complete **much** faster than using the default value of
[2]1674 ``1``.
1675
1676 Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
1677 returned by the :meth:`imap` method has an optional *timeout* parameter:
1678 ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
1679 result cannot be returned within *timeout* seconds.
1680
1681 .. method:: imap_unordered(func, iterable[, chunksize])
1682
1683 The same as :meth:`imap` except that the ordering of the results from the
1684 returned iterator should be considered arbitrary. (Only when there is
1685 only one worker process is the order guaranteed to be "correct".)
1686
1687 .. method:: close()
1688
1689 Prevents any more tasks from being submitted to the pool. Once all the
1690 tasks have been completed the worker processes will exit.
1691
1692 .. method:: terminate()
1693
1694 Stops the worker processes immediately without completing outstanding
1695 work. When the pool object is garbage collected :meth:`terminate` will be
1696 called immediately.
1697
1698 .. method:: join()
1699
1700 Wait for the worker processes to exit. One must call :meth:`close` or
1701 :meth:`terminate` before using :meth:`join`.
1702
1703
1704.. class:: AsyncResult
1705
1706 The class of the result returned by :meth:`Pool.apply_async` and
1707 :meth:`Pool.map_async`.
1708
1709 .. method:: get([timeout])
1710
1711 Return the result when it arrives. If *timeout* is not ``None`` and the
1712 result does not arrive within *timeout* seconds then
1713 :exc:`multiprocessing.TimeoutError` is raised. If the remote call raised
1714 an exception then that exception will be reraised by :meth:`get`.
1715
1716 .. method:: wait([timeout])
1717
1718 Wait until the result is available or until *timeout* seconds pass.
1719
1720 .. method:: ready()
1721
1722 Return whether the call has completed.
1723
1724 .. method:: successful()
1725
1726 Return whether the call completed without raising an exception. Will
1727 raise :exc:`AssertionError` if the result is not ready.
1728
1729The following example demonstrates the use of a pool::
1730
1731 from multiprocessing import Pool
1732
1733 def f(x):
1734 return x*x
1735
1736 if __name__ == '__main__':
1737 pool = Pool(processes=4) # start 4 worker processes
1738
1739 result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously
1740 print result.get(timeout=1) # prints "100" unless your computer is *very* slow
1741
1742 print pool.map(f, range(10)) # prints "[0, 1, 4,..., 81]"
1743
1744 it = pool.imap(f, range(10))
1745 print it.next() # prints "0"
1746 print it.next() # prints "1"
1747 print it.next(timeout=1) # prints "4" unless your computer is *very* slow
1748
1749 import time
1750 result = pool.apply_async(time.sleep, (10,))
1751 print result.get(timeout=1) # raises TimeoutError
1752
1753
1754.. _multiprocessing-listeners-clients:
1755
1756Listeners and Clients
1757~~~~~~~~~~~~~~~~~~~~~
1758
1759.. module:: multiprocessing.connection
1760 :synopsis: API for dealing with sockets.
1761
1762Usually message passing between processes is done using queues or by using
[391]1763:class:`~multiprocessing.Connection` objects returned by
1764:func:`~multiprocessing.Pipe`.
[2]1765
1766However, the :mod:`multiprocessing.connection` module allows some extra
1767flexibility. It basically gives a high level message oriented API for dealing
1768with sockets or Windows named pipes, and also has support for *digest
1769authentication* using the :mod:`hmac` module.
1770
1771
1772.. function:: deliver_challenge(connection, authkey)
1773
1774 Send a randomly generated message to the other end of the connection and wait
1775 for a reply.
1776
1777 If the reply matches the digest of the message using *authkey* as the key
1778 then a welcome message is sent to the other end of the connection. Otherwise
1779 :exc:`AuthenticationError` is raised.
1780
[391]1781.. function:: answer_challenge(connection, authkey)
[2]1782
1783 Receive a message, calculate the digest of the message using *authkey* as the
1784 key, and then send the digest back.
1785
1786 If a welcome message is not received, then :exc:`AuthenticationError` is
1787 raised.
1788
1789.. function:: Client(address[, family[, authenticate[, authkey]]])
1790
1791 Attempt to set up a connection to the listener which is using address
1792 *address*, returning a :class:`~multiprocessing.Connection`.
1793
1794 The type of the connection is determined by *family* argument, but this can
1795 generally be omitted since it can usually be inferred from the format of
1796 *address*. (See :ref:`multiprocessing-address-formats`)
1797
1798 If *authenticate* is ``True`` or *authkey* is a string then digest
1799 authentication is used. The key used for authentication will be either
   *authkey* or ``current_process().authkey`` if *authkey* is ``None``.
1801 If authentication fails then :exc:`AuthenticationError` is raised. See
1802 :ref:`multiprocessing-auth-keys`.
1803
1804.. class:: Listener([address[, family[, backlog[, authenticate[, authkey]]]]])
1805
1806 A wrapper for a bound socket or Windows named pipe which is 'listening' for
1807 connections.
1808
1809 *address* is the address to be used by the bound socket or named pipe of the
1810 listener object.
1811
1812 .. note::
1813
      If an address of '0.0.0.0' is used, the address will not be a connectable
      end point on Windows.  If you require a connectable end point, you should
      use '127.0.0.1'.
1817
1818 *family* is the type of socket (or named pipe) to use. This can be one of
1819 the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
1820 domain socket) or ``'AF_PIPE'`` (for a Windows named pipe). Of these only
1821 the first is guaranteed to be available. If *family* is ``None`` then the
1822 family is inferred from the format of *address*. If *address* is also
1823 ``None`` then a default is chosen. This default is the family which is
1824 assumed to be the fastest available. See
1825 :ref:`multiprocessing-address-formats`. Note that if *family* is
1826 ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
1827 private temporary directory created using :func:`tempfile.mkstemp`.
1828
1829 If the listener object uses a socket then *backlog* (1 by default) is passed
[391]1830 to the :meth:`~socket.socket.listen` method of the socket once it has been
1831 bound.
[2]1832
1833 If *authenticate* is ``True`` (``False`` by default) or *authkey* is not
1834 ``None`` then digest authentication is used.
1835
1836 If *authkey* is a string then it will be used as the authentication key;
   otherwise it must be ``None``.
1838
1839 If *authkey* is ``None`` and *authenticate* is ``True`` then
1840 ``current_process().authkey`` is used as the authentication key. If
1841 *authkey* is ``None`` and *authenticate* is ``False`` then no
1842 authentication is done. If authentication fails then
1843 :exc:`AuthenticationError` is raised. See :ref:`multiprocessing-auth-keys`.
1844
1845 .. method:: accept()
1846
1847 Accept a connection on the bound socket or named pipe of the listener
[391]1848 object and return a :class:`~multiprocessing.Connection` object. If
1849 authentication is attempted and fails, then
1850 :exc:`~multiprocessing.AuthenticationError` is raised.
[2]1851
1852 .. method:: close()
1853
1854 Close the bound socket or named pipe of the listener object. This is
1855 called automatically when the listener is garbage collected. However it
1856 is advisable to call it explicitly.
1857
1858 Listener objects have the following read-only properties:
1859
1860 .. attribute:: address
1861
1862 The address which is being used by the Listener object.
1863
1864 .. attribute:: last_accepted
1865
1866 The address from which the last accepted connection came. If this is
1867 unavailable then it is ``None``.
1868
1869
1870The module defines two exceptions:
1871
1872.. exception:: AuthenticationError
1873
1874 Exception raised when there is an authentication error.
1875
1876
1877**Examples**
1878
1879The following server code creates a listener which uses ``'secret password'`` as
1880an authentication key. It then waits for a connection and sends some data to
1881the client::
1882
1883 from multiprocessing.connection import Listener
1884 from array import array
1885
1886 address = ('localhost', 6000) # family is deduced to be 'AF_INET'
1887 listener = Listener(address, authkey='secret password')
1888
1889 conn = listener.accept()
1890 print 'connection accepted from', listener.last_accepted
1891
1892 conn.send([2.25, None, 'junk', float])
1893
1894 conn.send_bytes('hello')
1895
1896 conn.send_bytes(array('i', [42, 1729]))
1897
1898 conn.close()
1899 listener.close()
1900
1901The following code connects to the server and receives some data from the
1902server::
1903
1904 from multiprocessing.connection import Client
1905 from array import array
1906
1907 address = ('localhost', 6000)
1908 conn = Client(address, authkey='secret password')
1909
1910 print conn.recv() # => [2.25, None, 'junk', float]
1911
1912 print conn.recv_bytes() # => 'hello'
1913
1914 arr = array('i', [0, 0, 0, 0, 0])
1915 print conn.recv_bytes_into(arr) # => 8
1916 print arr # => array('i', [42, 1729, 0, 0, 0])
1917
1918 conn.close()
1919
1920
1921.. _multiprocessing-address-formats:
1922
1923Address Formats
1924>>>>>>>>>>>>>>>
1925
1926* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
1927 *hostname* is a string and *port* is an integer.
1928
1929* An ``'AF_UNIX'`` address is a string representing a filename on the
1930 filesystem.
1931
1932* An ``'AF_PIPE'`` address is a string of the form
1933 :samp:`r'\\\\.\\pipe\\{PipeName}'`. To use :func:`Client` to connect to a named
1934 pipe on a remote computer called *ServerName* one should use an address of the
1935 form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
1936
1937Note that any string beginning with two backslashes is assumed by default to be
1938an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
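
For example, each of the following would be an acceptable *address* argument
for :class:`Listener` or :func:`Client` (the host, file and pipe names are
only placeholders)::

   address = ('foo.bar.org', 6000)       # an 'AF_INET' address
   address = '/tmp/mysock'               # an 'AF_UNIX' address
   address = r'\\.\pipe\mypipe'          # an 'AF_PIPE' address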
1939
1940
1941.. _multiprocessing-auth-keys:
1942
1943Authentication keys
1944~~~~~~~~~~~~~~~~~~~
1945
[391]1946When one uses :meth:`Connection.recv <multiprocessing.Connection.recv>`, the
1947data received is automatically
[2]1948unpickled. Unfortunately unpickling data from an untrusted source is a security
1949risk. Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
1950to provide digest authentication.
1951
1952An authentication key is a string which can be thought of as a password: once a
1953connection is established both ends will demand proof that the other knows the
1954authentication key. (Demonstrating that both ends are using the same key does
1955**not** involve sending the key over the connection.)
1956
If authentication is requested but no authentication key is specified then the
return value of ``current_process().authkey`` is used (see
:class:`~multiprocessing.Process`).  This value will be automatically inherited
by any :class:`~multiprocessing.Process` object that the current process creates.
1961This means that (by default) all processes of a multi-process program will share
1962a single authentication key which can be used when setting up connections
1963between themselves.
1964
1965Suitable authentication keys can also be generated by using :func:`os.urandom`.
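
For example, a listener and its clients might share a key generated with
:func:`os.urandom` (a sketch; the address is illustrative and the key would
have to be distributed to the clients by some other means)::

   import os
   from multiprocessing.connection import Listener

   authkey = os.urandom(20)
   listener = Listener(('localhost', 6000), authkey=authkey)
   # a client holding the same key connects with
   #     Client(('localhost', 6000), authkey=authkey)
   conn = listener.accept()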
1966
1967
1968Logging
1969~~~~~~~
1970
1971Some support for logging is available. Note, however, that the :mod:`logging`
1972package does not use process shared locks so it is possible (depending on the
1973handler type) for messages from different processes to get mixed up.
1974
1975.. currentmodule:: multiprocessing
1976.. function:: get_logger()
1977
1978 Returns the logger used by :mod:`multiprocessing`. If necessary, a new one
1979 will be created.
1980
1981 When first created the logger has level :data:`logging.NOTSET` and no
1982 default handler. Messages sent to this logger will not by default propagate
1983 to the root logger.
1984
1985 Note that on Windows child processes will only inherit the level of the
1986 parent process's logger -- any other customization of the logger will not be
1987 inherited.
1988
1989.. currentmodule:: multiprocessing
1990.. function:: log_to_stderr()
1991
   This function performs a call to :func:`get_logger` but in addition to
   returning the logger created by :func:`get_logger`, it adds a handler which
   sends output to :data:`sys.stderr` using the format
   ``'[%(levelname)s/%(processName)s] %(message)s'``.
1996
1997Below is an example session with logging turned on::
1998
1999 >>> import multiprocessing, logging
2000 >>> logger = multiprocessing.log_to_stderr()
2001 >>> logger.setLevel(logging.INFO)
2002 >>> logger.warning('doomed')
2003 [WARNING/MainProcess] doomed
2004 >>> m = multiprocessing.Manager()
2005 [INFO/SyncManager-...] child process calling self.run()
2006 [INFO/SyncManager-...] created temp directory /.../pymp-...
2007 [INFO/SyncManager-...] manager serving at '/.../listener-...'
2008 >>> del m
2009 [INFO/MainProcess] sending shutdown message to manager
2010 [INFO/SyncManager-...] manager exiting with exitcode 0
2011
In addition to having these two logging functions, the multiprocessing module
also exposes two additional logging level attributes.  These are
:const:`SUBWARNING` and :const:`SUBDEBUG`.  The table below illustrates where
these fit in the normal level hierarchy.
2016
2017+----------------+----------------+
2018| Level | Numeric value |
2019+================+================+
2020| ``SUBWARNING`` | 25 |
2021+----------------+----------------+
2022| ``SUBDEBUG`` | 5 |
2023+----------------+----------------+
2024
2025For a full table of logging levels, see the :mod:`logging` module.
2026
2027These additional logging levels are used primarily for certain debug messages
2028within the multiprocessing module. Below is the same example as above, except
2029with :const:`SUBDEBUG` enabled::
2030
2031 >>> import multiprocessing, logging
2032 >>> logger = multiprocessing.log_to_stderr()
2033 >>> logger.setLevel(multiprocessing.SUBDEBUG)
2034 >>> logger.warning('doomed')
2035 [WARNING/MainProcess] doomed
2036 >>> m = multiprocessing.Manager()
2037 [INFO/SyncManager-...] child process calling self.run()
2038 [INFO/SyncManager-...] created temp directory /.../pymp-...
2039 [INFO/SyncManager-...] manager serving at '/.../pymp-djGBXN/listener-...'
2040 >>> del m
2041 [SUBDEBUG/MainProcess] finalizer calling ...
2042 [INFO/MainProcess] sending shutdown message to manager
2043 [DEBUG/SyncManager-...] manager received shutdown message
2044 [SUBDEBUG/SyncManager-...] calling <Finalize object, callback=unlink, ...
2045 [SUBDEBUG/SyncManager-...] finalizer calling <built-in function unlink> ...
2046 [SUBDEBUG/SyncManager-...] calling <Finalize object, dead>
2047 [SUBDEBUG/SyncManager-...] finalizer calling <function rmtree at 0x5aa730> ...
2048 [INFO/SyncManager-...] manager exiting with exitcode 0
2049
2050The :mod:`multiprocessing.dummy` module
2051~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2052
2053.. module:: multiprocessing.dummy
2054 :synopsis: Dumb wrapper around threading.
2055
2056:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2057no more than a wrapper around the :mod:`threading` module.
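
For example, the ``Pool`` provided by :mod:`multiprocessing.dummy` creates a
pool of worker *threads* supporting the same interface as
:class:`multiprocessing.Pool`::

   from multiprocessing.dummy import Pool

   def f(x):
       return x*x

   pool = Pool(4)                    # a pool of 4 worker threads
   print pool.map(f, range(10))      # prints "[0, 1, 4,..., 81]"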
2058
2059
2060.. _multiprocessing-programming:
2061
2062Programming guidelines
2063----------------------
2064
2065There are certain guidelines and idioms which should be adhered to when using
2066:mod:`multiprocessing`.
2067
2068
2069All platforms
2070~~~~~~~~~~~~~
2071
2072Avoid shared state
2073
2074 As far as possible one should try to avoid shifting large amounts of data
2075 between processes.
2076
2077 It is probably best to stick to using queues or pipes for communication
2078 between processes rather than using the lower level synchronization
2079 primitives from the :mod:`threading` module.
2080
2081Picklability
2082
2083 Ensure that the arguments to the methods of proxies are picklable.
2084
2085Thread safety of proxies
2086
2087 Do not use a proxy object from more than one thread unless you protect it
2088 with a lock.
2089
2090 (There is never a problem with different processes using the *same* proxy.)
2091
2092Joining zombie processes
2093
2094 On Unix when a process finishes but has not been joined it becomes a zombie.
2095 There should never be very many because each time a new process starts (or
[391]2096 :func:`~multiprocessing.active_children` is called) all completed processes
2097 which have not yet been joined will be joined. Also calling a finished
2098 process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
2099 join the process. Even so it is probably good
[2]2100 practice to explicitly join all the processes that you start.
2101
2102Better to inherit than pickle/unpickle
2103
2104 On Windows many types from :mod:`multiprocessing` need to be picklable so
2105 that child processes can use them. However, one should generally avoid
2106 sending shared objects to other processes using pipes or queues. Instead
[391]2107 you should arrange the program so that a process which needs access to a
[2]2108 shared resource created elsewhere can inherit it from an ancestor process.
2109
2110Avoid terminating processes
2111
[391]2112 Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
2113 method to stop a process is liable to
[2]2114 cause any shared resources (such as locks, semaphores, pipes and queues)
2115 currently being used by the process to become broken or unavailable to other
2116 processes.
2117
2118 Therefore it is probably best to only consider using
[391]2119 :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
2120 which never use any shared resources.
[2]2121
2122Joining processes that use queues
2123
2124 Bear in mind that a process that has put items in a queue will wait before
2125 terminating until all the buffered items are fed by the "feeder" thread to
2126 the underlying pipe. (The child process can call the
[391]2127 :meth:`~multiprocessing.Queue.cancel_join_thread` method of the queue to avoid this behaviour.)
[2]2128
2129 This means that whenever you use a queue you need to make sure that all
2130 items which have been put on the queue will eventually be removed before the
2131 process is joined. Otherwise you cannot be sure that processes which have
    put items on the queue will terminate.  Remember also that non-daemonic
    processes will be joined automatically.
2134
2135 An example which will deadlock is the following::
2136
2137 from multiprocessing import Process, Queue
2138
2139 def f(q):
2140 q.put('X' * 1000000)
2141
2142 if __name__ == '__main__':
2143 queue = Queue()
2144 p = Process(target=f, args=(queue,))
2145 p.start()
2146 p.join() # this deadlocks
2147 obj = queue.get()
2148
2149 A fix here would be to swap the last two lines round (or simply remove the
2150 ``p.join()`` line).
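
    For instance, the reordered version below does not deadlock, because the
    queue is drained before the child process is joined::

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            obj = queue.get()
            p.join()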
2151
2152Explicitly pass resources to child processes
2153
2154 On Unix a child process can make use of a shared resource created in a
2155 parent process using a global resource. However, it is better to pass the
2156 object as an argument to the constructor for the child process.
2157
2158 Apart from making the code (potentially) compatible with Windows this also
2159 ensures that as long as the child process is still alive the object will not
2160 be garbage collected in the parent process. This might be important if some
2161 resource is freed when the object is garbage collected in the parent
2162 process.
2163
2164 So for instance ::
2165
2166 from multiprocessing import Process, Lock
2167
2168 def f():
2169 ... do something using "lock" ...
2170
2171 if __name__ == '__main__':
2172 lock = Lock()
2173 for i in range(10):
2174 Process(target=f).start()
2175
2176 should be rewritten as ::
2177
2178 from multiprocessing import Process, Lock
2179
2180 def f(l):
2181 ... do something using "l" ...
2182
2183 if __name__ == '__main__':
2184 lock = Lock()
2185 for i in range(10):
2186 Process(target=f, args=(lock,)).start()
2187
[391]2188Beware of replacing :data:`sys.stdin` with a "file like object"
[2]2189
2190 :mod:`multiprocessing` originally unconditionally called::
2191
2192 os.close(sys.stdin.fileno())
2193
2194 in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
2195 in issues with processes-in-processes. This has been changed to::
2196
2197 sys.stdin.close()
2198 sys.stdin = open(os.devnull)
2199
    This solves the fundamental issue of processes colliding with each other
    resulting in a bad file descriptor error, but introduces a potential danger
    to applications which replace :data:`sys.stdin` with a "file-like object"
    with output buffering.  This danger is that if multiple processes call
    :meth:`~io.IOBase.close()` on this file-like object, it could result in the
    same data being flushed to the object multiple times, resulting in corruption.
2206
2207 If you write a file-like object and implement your own caching, you can
2208 make it fork-safe by storing the pid whenever you append to the cache,
2209 and discarding the cache when the pid changes. For example::
2210
2211 @property
2212 def cache(self):
2213 pid = os.getpid()
2214 if pid != self._pid:
2215 self._pid = pid
2216 self._cache = []
2217 return self._cache
2218
    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.
2220
2221Windows
2222~~~~~~~
2223
2224Since Windows lacks :func:`os.fork` it has a few extra restrictions:
2225
2226More picklability
2227
2228 Ensure that all arguments to :meth:`Process.__init__` are picklable. This
2229 means, in particular, that bound or unbound methods cannot be used directly
2230 as the ``target`` argument on Windows --- just define a function and use
2231 that instead.
2232
[391]2233 Also, if you subclass :class:`~multiprocessing.Process` then make sure that
2234 instances will be picklable when the :meth:`Process.start
2235 <multiprocessing.Process.start>` method is called.
[2]2236
2237Global variables
2238
2239 Bear in mind that if code run in a child process tries to access a global
2240 variable, then the value it sees (if any) may not be the same as the value
[391]2241 in the parent process at the time that :meth:`Process.start
2242 <multiprocessing.Process.start>` was called.
[2]2243
2244 However, global variables which are just module level constants cause no
2245 problems.
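
    For example, in the following sketch the child started on Windows prints
    the module level value ``1``, because the assignment inside the
    ``if __name__ == '__main__'`` block is not re-executed when the child
    imports the main module (the names used are only illustrative)::

        from multiprocessing import Process

        value = 1

        def show():
            print value        # prints "1" on Windows, "2" on Unix

        if __name__ == '__main__':
            value = 2
            p = Process(target=show)
            p.start()
            p.join()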
2246
2247Safe importing of main module
2248
2249 Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
    process).
2252
2253 For example, under Windows running the following module would fail with a
2254 :exc:`RuntimeError`::
2255
2256 from multiprocessing import Process
2257
2258 def foo():
2259 print 'hello'
2260
2261 p = Process(target=foo)
2262 p.start()
2263
2264 Instead one should protect the "entry point" of the program by using ``if
2265 __name__ == '__main__':`` as follows::
2266
2267 from multiprocessing import Process, freeze_support
2268
2269 def foo():
2270 print 'hello'
2271
2272 if __name__ == '__main__':
2273 freeze_support()
2274 p = Process(target=foo)
2275 p.start()
2276
2277 (The ``freeze_support()`` line can be omitted if the program will be run
2278 normally instead of frozen.)
2279
2280 This allows the newly spawned Python interpreter to safely import the module
2281 and then run the module's ``foo()`` function.
2282
2283 Similar restrictions apply if a pool or manager is created in the main
2284 module.
2285
2286
2287.. _multiprocessing-examples:
2288
2289Examples
2290--------
2291
2292Demonstration of how to create and use customized managers and proxies:
2293
2294.. literalinclude:: ../includes/mp_newtype.py
2295
2296
[391]2297Using :class:`~multiprocessing.pool.Pool`:
[2]2298
2299.. literalinclude:: ../includes/mp_pool.py
2300
2301
2302Synchronization types like locks, conditions and queues:
2303
2304.. literalinclude:: ../includes/mp_synchronize.py
2305
2306
[391]2307An example showing how to use queues to feed tasks to a collection of worker
2308processes and collect the results:
[2]2309
2310.. literalinclude:: ../includes/mp_workers.py
2311
2312
2313An example of how a pool of worker processes can each run a
2314:class:`SimpleHTTPServer.HttpServer` instance while sharing a single listening
2315socket.
2316
2317.. literalinclude:: ../includes/mp_webserver.py
2318
2319
2320Some simple benchmarks comparing :mod:`multiprocessing` with :mod:`threading`:
2321
2322.. literalinclude:: ../includes/mp_benchmarks.py
2323