.. _profile:

********************
The Python Profilers
********************

**Source code:** :source:`Lib/profile.py` and :source:`Lib/pstats.py`

--------------

.. _profiler-introduction:

Introduction to the profilers
=============================

.. index::
   single: deterministic profiling
   single: profiling, deterministic

:mod:`cProfile` and :mod:`profile` provide :dfn:`deterministic profiling` of
Python programs. A :dfn:`profile` is a set of statistics that describes how
often and for how long various parts of the program executed. These statistics
can be formatted into reports via the :mod:`pstats` module.

The Python standard library provides three different implementations of the same
profiling interface:

1. :mod:`cProfile` is recommended for most users; it's a C extension with
   reasonable overhead that makes it suitable for profiling long-running
   programs.  Based on :mod:`lsprof`, contributed by Brett Rosen and Ted
   Czotter.

   .. versionadded:: 2.5

2. :mod:`profile`, a pure Python module whose interface is imitated by
   :mod:`cProfile`, but which adds significant overhead to profiled programs.
   If you're trying to extend the profiler in some way, the task might be easier
   with this module.

   .. versionchanged:: 2.4
      Now also reports the time spent in calls to built-in functions
      and methods.

3. :mod:`hotshot` was an experimental C module that focused on minimizing
   the overhead of profiling, at the expense of longer data
   post-processing times.  It is no longer maintained and may be
   dropped in a future version of Python.

   .. versionchanged:: 2.5
      The results should be more meaningful than in the past: the timing core
      contained a critical bug.

The :mod:`profile` and :mod:`cProfile` modules export the same interface, so
they are mostly interchangeable; :mod:`cProfile` has a much lower overhead but
is newer and might not be available on all systems.
:mod:`cProfile` is really a compatibility layer on top of the internal
:mod:`_lsprof` module.  The :mod:`hotshot` module is reserved for specialized
usage.

.. note::

   The profiler modules are designed to provide an execution profile for a given
   program, not for benchmarking purposes (for that, there is :mod:`timeit` for
   reasonably accurate results).  This particularly applies to benchmarking
   Python code against C code: the profilers introduce overhead for Python code,
   but not for C-level functions, and so the C code would seem faster than any
   Python one.


.. _profile-instant:

Instant User's Manual
=====================

This section is provided for users that "don't want to read the manual."  It
provides a very brief overview, and allows a user to rapidly perform profiling
on an existing application.

To profile a function that takes a single argument, you can do::

   import cProfile
   import re
   cProfile.run('re.compile("foo|bar")')

(Use :mod:`profile` instead of :mod:`cProfile` if the latter is not available on
your system.)

The above action would run :func:`re.compile` and print profile results like
the following::

         197 function calls (192 primitive calls) in 0.002 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.001    0.001 <string>:1(<module>)
        1    0.000    0.000    0.001    0.001 re.py:212(compile)
        1    0.000    0.000    0.001    0.001 re.py:268(_compile)
        1    0.000    0.000    0.000    0.000 sre_compile.py:172(_compile_charset)
        1    0.000    0.000    0.000    0.000 sre_compile.py:201(_optimize_charset)
        4    0.000    0.000    0.000    0.000 sre_compile.py:25(_identityfunction)
      3/1    0.000    0.000    0.000    0.000 sre_compile.py:33(_compile)

The first line indicates that 197 calls were monitored.  Of those calls, 192
were :dfn:`primitive`, meaning that the call was not induced via recursion.  The
next line: ``Ordered by: standard name``, indicates that the text string in the
far right column was used to sort the output.  The column headings include:

ncalls
   for the number of calls,

tottime
   for the total time spent in the given function (excluding time spent in
   calls to sub-functions)

percall
   is the quotient of ``tottime`` divided by ``ncalls``

cumtime
   is the cumulative time spent in this and all subfunctions (from invocation
   till exit).  This figure is accurate *even* for recursive functions.

percall
   is the quotient of ``cumtime`` divided by primitive calls

filename:lineno(function)
   provides the respective data of each function

When there are two numbers in the first column (for example ``3/1``), it means
that the function recursed.  The second value is the number of primitive calls
and the first is the total number of calls.  Note that when the function does
not recurse, these two values are the same, and only the single figure is
printed.

Instead of printing the output at the end of the profile run, you can save the
results to a file by specifying a filename to the :func:`run` function::

   import cProfile
   import re
   cProfile.run('re.compile("foo|bar")', 'restats')

The :class:`pstats.Stats` class reads profile results from a file and formats
them in various ways.

The :mod:`cProfile` module can also be invoked as a script to profile another
script.  For example::

   python -m cProfile [-o output_file] [-s sort_order] myscript.py

``-o`` writes the profile results to a file instead of to stdout.

``-s`` specifies one of the :func:`~pstats.Stats.sort_stats` sort values to sort
the output by.  This only applies when ``-o`` is not supplied.

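As a brief sketch (:file:`myscript.py` is just the illustrative script name used
above), the first invocation below saves the raw profile data to a file, while
the second prints a report sorted by cumulative time directly to stdout::

   python -m cProfile -o myscript.prof myscript.py
   python -m cProfile -s cumulative myscript.py

The file written with ``-o`` can then be inspected with the :mod:`pstats`
module, as described next.
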
The :mod:`pstats` module's :class:`~pstats.Stats` class has a variety of methods
for manipulating and printing the data saved into a profile results file::

   import pstats
   p = pstats.Stats('restats')
   p.strip_dirs().sort_stats(-1).print_stats()

The :meth:`~pstats.Stats.strip_dirs` method removed the extraneous path from all
the module names.  The :meth:`~pstats.Stats.sort_stats` method sorted all the
entries according to the standard module/line/name string that is printed.  The
:meth:`~pstats.Stats.print_stats` method printed out all the statistics.  You
might try the following sort calls::

   p.sort_stats('name')
   p.print_stats()

The first call will actually sort the list by function name, and the second call
will print out the statistics.  The following are some interesting calls to
experiment with::

   p.sort_stats('cumulative').print_stats(10)

This sorts the profile by cumulative time in a function, and then only prints
the ten most significant lines.  If you want to understand what algorithms are
taking time, the above line is what you would use.

If you were looking to see what functions were looping a lot, and taking a lot
of time, you would do::

   p.sort_stats('time').print_stats(10)

to sort according to time spent within each function, and then print the
statistics for the top ten functions.

You might also try::

   p.sort_stats('file').print_stats('__init__')

This will sort all the statistics by file name, and then print out statistics
for only the class init methods (since they are spelled with ``__init__`` in
them).  As one final example, you could try::

   p.sort_stats('time', 'cum').print_stats(.5, 'init')

This line sorts statistics with a primary key of time, and a secondary key of
cumulative time, and then prints out some of the statistics.  To be specific, the
list is first culled down to 50% (re: ``.5``) of its original size, then only
lines containing ``init`` are maintained, and that sub-sub-list is printed.

If you wondered what functions called the above functions, you could now (``p``
is still sorted according to the last criteria) do::

   p.print_callers(.5, 'init')

and you would get a list of callers for each of the listed functions.

If you want more functionality, you're going to have to read the manual, or
guess what the following functions do::

   p.print_callees()
   p.add('restats')

Invoked as a script, the :mod:`pstats` module is a statistics browser for
reading and examining profile dumps.  It has a simple line-oriented interface
(implemented using :mod:`cmd`) and interactive help.

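For example, the browser can be started on the ``restats`` file created above;
once running, type ``help`` at its prompt to see the available commands (this
invocation is a sketch of typical usage, not an exhaustive reference)::

   python -m pstats restats
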
:mod:`profile` and :mod:`cProfile` Module Reference
===================================================

.. module:: cProfile
.. module:: profile
   :synopsis: Python source profiler.

Both the :mod:`profile` and :mod:`cProfile` modules provide the following
functions:

.. function:: run(command, filename=None, sort=-1)

   This function takes a single argument that can be passed to the :func:`exec`
   function, and an optional file name.  In all cases this routine executes::

      exec(command, __main__.__dict__, __main__.__dict__)

   and gathers profiling statistics from the execution.  If no file name is
   present, then this function automatically creates a :class:`~pstats.Stats`
   instance and prints a simple profiling report.  If the sort value is specified
   it is passed to this :class:`~pstats.Stats` instance to control how the
   results are sorted.

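   A short sketch (``main()`` is a hypothetical function standing in for the
   code you want to profile): the first call below saves the raw data to a
   file, while the second prints a report sorted by cumulative time::

      import cProfile
      cProfile.run('main()', filename='main.prof')
      cProfile.run('main()', sort='cumulative')
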
.. function:: runctx(command, globals, locals, filename=None)

   This function is similar to :func:`run`, with added arguments to supply the
   globals and locals dictionaries for the *command* string.  This routine
   executes::

      exec(command, globals, locals)

   and gathers profiling statistics as in the :func:`run` function above.

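   For example, a minimal sketch (again using a hypothetical ``main()``) that
   evaluates the command in an explicit namespace instead of ``__main__`` and
   saves the results to a file::

      import cProfile
      cProfile.runctx('main()', globals(), locals(), 'main.prof')
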
.. class:: Profile(timer=None, timeunit=0.0, subcalls=True, builtins=True)

   This class is normally only used if more precise control over profiling is
   needed than what the :func:`cProfile.run` function provides.

   A custom timer can be supplied for measuring how long code takes to run via
   the *timer* argument.  This must be a function that returns a single number
   representing the current time.  If the number is an integer, the *timeunit*
   specifies a multiplier that specifies the duration of each unit of time.  For
   example, if the timer returns times measured in thousands of seconds, the
   time unit would be ``.001``.

   Directly using the :class:`Profile` class allows formatting profile results
   without writing the profile data to a file::

      import cProfile, pstats, StringIO
      pr = cProfile.Profile()
      pr.enable()
      # ... do something ...
      pr.disable()
      s = StringIO.StringIO()
      sortby = 'cumulative'
      ps = pstats.Stats(pr, stream=s).sort_stats(sortby)
      ps.print_stats()
      print s.getvalue()

   .. method:: enable()

      Start collecting profiling data.

   .. method:: disable()

      Stop collecting profiling data.

   .. method:: create_stats()

      Stop collecting profiling data and record the results internally
      as the current profile.

   .. method:: print_stats(sort=-1)

      Create a :class:`~pstats.Stats` object based on the current
      profile and print the results to stdout.

   .. method:: dump_stats(filename)

      Write the results of the current profile to *filename*.

   .. method:: run(cmd)

      Profile the cmd via :func:`exec`.

   .. method:: runctx(cmd, globals, locals)

      Profile the cmd via :func:`exec` with the specified global and
      local environment.

   .. method:: runcall(func, *args, **kwargs)

      Profile ``func(*args, **kwargs)``.

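      For instance, a small sketch (``slow_function`` and its arguments are
      purely hypothetical) that profiles one specific call, keeps its return
      value, and saves the collected data for later analysis::

         import cProfile
         pr = cProfile.Profile()
         # runcall returns whatever the profiled function returns.
         result = pr.runcall(slow_function, 42, verbose=True)
         pr.dump_stats('slow_function.prof')
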
.. _profile-stats:

The :class:`Stats` Class
========================

Analysis of the profiler data is done using the :class:`~pstats.Stats` class.

.. module:: pstats
   :synopsis: Statistics object for use with the profiler.

.. class:: Stats(*filenames or profile, stream=sys.stdout)

   This class constructor creates an instance of a "statistics object" from a
   *filename* (or list of filenames) or from a :class:`Profile` instance.  Output
   will be printed to the stream specified by *stream*.

   The file selected by the above constructor must have been created by the
   corresponding version of :mod:`profile` or :mod:`cProfile`.  To be specific,
   there is *no* file compatibility guaranteed with future versions of this
   profiler, and there is no compatibility with files produced by other
   profilers.  If several files are provided, all the statistics for identical
   functions will be coalesced, so that an overall view of several processes can
   be considered in a single report.  If additional files need to be combined
   with data in an existing :class:`~pstats.Stats` object, the
   :meth:`~pstats.Stats.add` method can be used.

   Instead of reading the profile data from a file, a :class:`cProfile.Profile`
   or :class:`profile.Profile` object can be used as the profile data source.

   :class:`Stats` objects have the following methods:

   .. method:: strip_dirs()

      This method for the :class:`Stats` class removes all leading path
      information from file names.  It is very useful in reducing the size of
      the printout to fit within (close to) 80 columns.  This method modifies
      the object, and the stripped information is lost.  After performing a
      strip operation, the object is considered to have its entries in a
      "random" order, as it was just after object initialization and loading.
      If :meth:`~pstats.Stats.strip_dirs` causes two function names to be
      indistinguishable (they are on the same line of the same filename, and
      have the same function name), then the statistics for these two entries
      are accumulated into a single entry.


   .. method:: add(*filenames)

      This method of the :class:`Stats` class accumulates additional profiling
      information into the current profiling object.  Its arguments should refer
      to filenames created by the corresponding version of :func:`profile.run`
      or :func:`cProfile.run`.  Statistics for identically named (re: file, line,
      name) functions are automatically accumulated into single function
      statistics.

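      As an illustrative sketch (the file names are hypothetical), statistics
      from several profiling runs of the same program can be merged into one
      report::

         import pstats
         stats = pstats.Stats('run1.prof')
         stats.add('run2.prof', 'run3.prof')
         stats.sort_stats('cumulative').print_stats(10)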

   .. method:: dump_stats(filename)

      Save the data loaded into the :class:`Stats` object to a file named
      *filename*.  The file is created if it does not exist, and is overwritten
      if it already exists.  This is equivalent to the method of the same name
      on the :class:`profile.Profile` and :class:`cProfile.Profile` classes.

      .. versionadded:: 2.3


   .. method:: sort_stats(*keys)

      This method modifies the :class:`Stats` object by sorting it according to
      the supplied criteria.  The argument is typically a string identifying the
      basis of a sort (example: ``'time'`` or ``'name'``).

      When more than one key is provided, then additional keys are used as
      secondary criteria when there is equality in all keys selected before
      them.  For example, ``sort_stats('name', 'file')`` will sort all the
      entries according to their function name, and resolve all ties (identical
      function names) by sorting by file name.

      Abbreviations can be used for any key names, as long as the abbreviation
      is unambiguous.  The following are the keys currently defined:

      +------------------+----------------------+
      | Valid Arg        | Meaning              |
      +==================+======================+
      | ``'calls'``      | call count           |
      +------------------+----------------------+
      | ``'cumulative'`` | cumulative time      |
      +------------------+----------------------+
      | ``'cumtime'``    | cumulative time      |
      +------------------+----------------------+
      | ``'file'``       | file name            |
      +------------------+----------------------+
      | ``'filename'``   | file name            |
      +------------------+----------------------+
      | ``'module'``     | file name            |
      +------------------+----------------------+
      | ``'ncalls'``     | call count           |
      +------------------+----------------------+
      | ``'pcalls'``     | primitive call count |
      +------------------+----------------------+
      | ``'line'``       | line number          |
      +------------------+----------------------+
      | ``'name'``       | function name        |
      +------------------+----------------------+
      | ``'nfl'``        | name/file/line       |
      +------------------+----------------------+
      | ``'stdname'``    | standard name        |
      +------------------+----------------------+
      | ``'time'``       | internal time        |
      +------------------+----------------------+
      | ``'tottime'``    | internal time        |
      +------------------+----------------------+

      Note that all sorts on statistics are in descending order (placing most
      time consuming items first), whereas name, file, and line number searches
      are in ascending order (alphabetical).  The subtle distinction between
      ``'nfl'`` and ``'stdname'`` is that the standard name is a sort of the
      name as printed, which means that the embedded line numbers get compared
      in an odd way.  For example, lines 3, 20, and 40 would (if the file names
      were the same) appear in the string order 20, 3 and 40.  In contrast,
      ``'nfl'`` does a numeric compare of the line numbers.  In fact,
      ``sort_stats('nfl')`` is the same as ``sort_stats('name', 'file',
      'line')``.

      For backward-compatibility reasons, the numeric arguments ``-1``, ``0``,
      ``1``, and ``2`` are permitted.  They are interpreted as ``'stdname'``,
      ``'calls'``, ``'time'``, and ``'cumulative'`` respectively.  If this old
      style format (numeric) is used, only one sort key (the numeric key) will
      be used, and additional arguments will be silently ignored.

      .. For compatibility with the old profiler.


   .. method:: reverse_order()

      This method for the :class:`Stats` class reverses the ordering of the
      basic list within the object.  Note that by default ascending vs
      descending order is properly selected based on the sort key of choice.

      .. This method is provided primarily for compatibility with the old
         profiler.

   .. method:: print_stats(*restrictions)

      This method for the :class:`Stats` class prints out a report as described
      in the :func:`profile.run` definition.

      The order of the printing is based on the last
      :meth:`~pstats.Stats.sort_stats` operation done on the object (subject to
      caveats in :meth:`~pstats.Stats.add` and
      :meth:`~pstats.Stats.strip_dirs`).

      The arguments provided (if any) can be used to limit the list down to the
      significant entries.  Initially, the list is taken to be the complete set
      of profiled functions.  Each restriction is either an integer (to select a
      count of lines), or a decimal fraction between 0.0 and 1.0 inclusive (to
      select a percentage of lines), or a regular expression (to pattern match
      the standard name that is printed).  If several restrictions are provided,
      then they are applied sequentially.  For example::

         print_stats(.1, 'foo:')

      would first limit the printing to the first 10% of the list, and then only
      print functions that were part of filename :file:`.\*foo:`.  In contrast,
      the command::

         print_stats('foo:', .1)

      would limit the list to all functions having file names :file:`.\*foo:`,
      and then proceed to only print the first 10% of them.


   .. method:: print_callers(*restrictions)

      This method for the :class:`Stats` class prints a list of all functions
      that called each function in the profiled database.  The ordering is
      identical to that provided by :meth:`~pstats.Stats.print_stats`, and the
      definition of the restricting argument is also identical.  Each caller is
      reported on its own line.  The format differs slightly depending on the
      profiler that produced the stats:

      * With :mod:`profile`, a number is shown in parentheses after each caller
        to show how many times this specific call was made.  For convenience, a
        second non-parenthesized number repeats the cumulative time spent in the
        function at the right.

      * With :mod:`cProfile`, each caller is preceded by three numbers: the
        number of times this specific call was made, and the total and
        cumulative times spent in the current function while it was invoked by
        this specific caller.


   .. method:: print_callees(*restrictions)

      This method for the :class:`Stats` class prints a list of all functions
      that were called by the indicated function.  Aside from this reversal of
      direction of calls (re: called vs was called by), the arguments and
      ordering are identical to the :meth:`~pstats.Stats.print_callers` method.

.. _deterministic-profiling:

What Is Deterministic Profiling?
================================

:dfn:`Deterministic profiling` is meant to reflect the fact that all *function
call*, *function return*, and *exception* events are monitored, and precise
timings are made for the intervals between these events (during which time the
user's code is executing).  In contrast, :dfn:`statistical profiling` (which is
not done by this module) randomly samples the effective instruction pointer, and
deduces where time is being spent.  The latter technique traditionally involves
less overhead (as the code does not need to be instrumented), but provides only
relative indications of where time is being spent.

In Python, since there is an interpreter active during execution, the presence
of instrumented code is not required to do deterministic profiling.  Python
automatically provides a :dfn:`hook` (optional callback) for each event.  In
addition, the interpreted nature of Python tends to add so much overhead to
execution, that deterministic profiling tends to only add small processing
overhead in typical applications.  The result is that deterministic profiling is
not that expensive, yet provides extensive run time statistics about the
execution of a Python program.

Call count statistics can be used to identify bugs in code (surprising counts),
and to identify possible inline-expansion points (high call counts).  Internal
time statistics can be used to identify "hot loops" that should be carefully
optimized.  Cumulative time statistics should be used to identify high level
errors in the selection of algorithms.  Note that the unusual handling of
cumulative times in this profiler allows statistics for recursive
implementations of algorithms to be directly compared to iterative
implementations.


.. _profile-limitations:

Limitations
===========

One limitation has to do with accuracy of timing information.  There is a
fundamental problem with deterministic profilers involving accuracy.  The most
obvious restriction is that the underlying "clock" is only ticking at a rate
(typically) of about .001 seconds.  Hence no measurements will be more accurate
than the underlying clock.  If enough measurements are taken, then the "error"
will tend to average out.  Unfortunately, removing this first error induces a
second source of error.

The second problem is that it "takes a while" from when an event is dispatched
until the profiler's call to get the time actually *gets* the state of the
clock.  Similarly, there is a certain lag when exiting the profiler event
handler from the time that the clock's value was obtained (and then squirreled
away), until the user's code is once again executing.  As a result, functions
that are called many times, or call many functions, will typically accumulate
this error.  The error that accumulates in this fashion is typically less than
the accuracy of the clock (less than one clock tick), but it *can* accumulate
and become very significant.

The problem is more important with :mod:`profile` than with the lower-overhead
:mod:`cProfile`.  For this reason, :mod:`profile` provides a means of
calibrating itself for a given platform so that this error can be
probabilistically (on the average) removed.  After the profiler is calibrated, it
will be more accurate (in a least square sense), but it will sometimes produce
negative numbers (when call counts are exceptionally low, and the gods of
probability work against you :-) ).  Do *not* be alarmed by negative numbers in
the profile.  They should *only* appear if you have calibrated your profiler,
and the results are actually better than without calibration.


.. _profile-calibration:

Calibration
===========

The profiler of the :mod:`profile` module subtracts a constant from each event
handling time to compensate for the overhead of calling the time function, and
socking away the results.  By default, the constant is 0.  The following
procedure can be used to obtain a better constant for a given platform (see
:ref:`profile-limitations`). ::

   import profile
   pr = profile.Profile()
   for i in range(5):
       print pr.calibrate(10000)

The method executes the number of Python calls given by the argument, directly
and again under the profiler, measuring the time for both.  It then computes the
hidden overhead per profiler event, and returns that as a float.  For example,
on a 1.8GHz Intel Core i5 running Mac OS X, and using Python's time.clock() as
the timer, the magical number is about 4.04e-6.

The object of this exercise is to get a fairly consistent result.  If your
computer is *very* fast, or your timer function has poor resolution, you might
have to pass 100000, or even 1000000, to get consistent results.

When you have a consistent answer, there are three ways you can use it: [#]_ ::

   import profile

   # 1. Apply computed bias to all Profile instances created hereafter.
   profile.Profile.bias = your_computed_bias

   # 2. Apply computed bias to a specific Profile instance.
   pr = profile.Profile()
   pr.bias = your_computed_bias

   # 3. Specify computed bias in instance constructor.
   pr = profile.Profile(bias=your_computed_bias)

If you have a choice, you are better off choosing a smaller constant, and then
your results will "less often" show up as negative in profile statistics.

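Once the bias is set, the profiler subtracts it from each event handling time as
described above.  A minimal sketch (``your_computed_bias`` and ``main()`` are
placeholders for your own calibration constant and code) that profiles with the
calibrated constant and saves the results::

   import profile
   pr = profile.Profile(bias=your_computed_bias)
   pr.run('main()')
   pr.dump_stats('calibrated.prof')
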
.. _profile-timers:

Using a custom timer
====================

If you want to change how current time is determined (for example, to force use
of wall-clock time or elapsed process time), pass the timing function you want
to the :class:`Profile` class constructor::

   pr = profile.Profile(your_time_func)

The resulting profiler will then call ``your_time_func``.  Depending on whether
you are using :class:`profile.Profile` or :class:`cProfile.Profile`,
``your_time_func``'s return value will be interpreted differently:

:class:`profile.Profile`
   ``your_time_func`` should return a single number, or a list of numbers whose
   sum is the current time (like what :func:`os.times` returns).  If the
   function returns a single time number, or the list of returned numbers has
   length 2, then you will get an especially fast version of the dispatch
   routine.

   Be warned that you should calibrate the profiler class for the timer function
   that you choose (see :ref:`profile-calibration`).  For most machines, a timer
   that returns a lone integer value will provide the best results in terms of
   low overhead during profiling.  (:func:`os.times` is *pretty* bad, as it
   returns a tuple of floating point values).  If you want to substitute a
   better timer in the cleanest fashion, derive a class and hardwire a
   replacement dispatch method that best handles your timer call, along with the
   appropriate calibration constant.

:class:`cProfile.Profile`
   ``your_time_func`` should return a single number.  If it returns integers,
   you can also invoke the class constructor with a second argument specifying
   the real duration of one unit of time.  For example, if
   ``your_integer_time_func`` returns times measured in thousands of seconds,
   you would construct the :class:`Profile` instance as follows::

      pr = cProfile.Profile(your_integer_time_func, 0.001)

   As the :mod:`cProfile.Profile` class cannot be calibrated, custom timer
   functions should be used with care and should be as fast as possible.  For
   the best results with a custom timer, it might be necessary to hard-code it
   in the C source of the internal :mod:`_lsprof` module.

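As a short sketch, :func:`time.time` is used below purely as an illustration of
a wall-clock timer (any function returning a single number works the same way)::

   import time
   import cProfile

   pr = cProfile.Profile(time.time)
   pr.run('import re; re.compile("foo|bar")')
   pr.print_stats()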

.. rubric:: Footnotes

.. [#] Prior to Python 2.2, it was necessary to edit the profiler source code to
   embed the bias as a literal number.  You still can, but that method is no
   longer described, because it is no longer needed.