:mod:`tokenize` --- Tokenizer for Python source
===============================================

.. module:: tokenize
   :synopsis: Lexical scanner for Python source code.
.. moduleauthor:: Ka Ping Yee
.. sectionauthor:: Fred L. Drake, Jr. <fdrake@acm.org>

**Source code:** :source:`Lib/tokenize.py`

--------------

The :mod:`tokenize` module provides a lexical scanner for Python source code,
implemented in Python.  The scanner in this module returns comments as tokens
as well, making it useful for implementing "pretty-printers," including
colorizers for on-screen displays.

To simplify token stream handling, all :ref:`operators` and :ref:`delimiters`
tokens are returned using the generic :data:`token.OP` token type.  The exact
type can be determined by checking the token string field (the second item) of
the tuples yielded by :func:`generate_tokens` for the character sequence that
identifies a specific operator token.

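For instance, here is a minimal sketch (an illustration, not part of the
module itself) that picks out one operator by its token string::

   from StringIO import StringIO
   from tokenize import generate_tokens, OP

   # All operators share the OP token type; the string tells them apart.
   for toknum, tokval, start, end, line in generate_tokens(
           StringIO('x = 1 + 2\n').readline):
       if toknum == OP and tokval == '+':
           print 'plus operator at row %d, column %d' % start
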
The primary entry point is a :term:`generator`:

.. function:: generate_tokens(readline)

   The :func:`generate_tokens` generator requires one argument, *readline*,
   which must be a callable object that provides the same interface as the
   :meth:`readline` method of built-in file objects (see section
   :ref:`bltin-file-objects`).  Each call to the function should return one
   line of input as a string.  Alternately, *readline* may be a callable
   object that signals completion by raising :exc:`StopIteration`.

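   For example, the bound ``next`` method of a list iterator can serve as
   *readline* (an illustrative sketch, not from the original text), since it
   raises :exc:`StopIteration` once the input is exhausted::

      from tokenize import generate_tokens

      lines = ['x = 1\n', 'y = 2\n']
      # iter(lines).next returns one line per call and raises StopIteration
      # at the end, which signals completion to the tokenizer.
      for tok in generate_tokens(iter(lines).next):
          print tok
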
   The generator produces 5-tuples with these members: the token type; the
   token string; a 2-tuple ``(srow, scol)`` of ints specifying the row and
   column where the token begins in the source; a 2-tuple ``(erow, ecol)`` of
   ints specifying the row and column where the token ends in the source; and
   the line on which the token was found.  The line passed (the last tuple
   item) is the *logical* line; continuation lines are included.

   .. versionadded:: 2.2

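   A minimal usage sketch (an addition for illustration), feeding the
   generator from a string through ``StringIO``::

      from StringIO import StringIO
      from tokenize import generate_tokens, tok_name

      # tok_name maps numeric token types back to readable names.
      for toknum, tokval, start, end, line in generate_tokens(
              StringIO('x = 1\n').readline):
          print tok_name[toknum], repr(tokval), start, end
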
An older entry point is retained for backward compatibility:


.. function:: tokenize(readline[, tokeneater])

   The :func:`tokenize` function accepts two parameters: one representing the
   input stream, and one providing an output mechanism for :func:`tokenize`.

   The first parameter, *readline*, must be a callable object which provides
   the same interface as the :meth:`readline` method of built-in file objects
   (see section :ref:`bltin-file-objects`).  Each call to the function should
   return one line of input as a string.  Alternately, *readline* may be a
   callable object that signals completion by raising :exc:`StopIteration`.

   .. versionchanged:: 2.5
      Added :exc:`StopIteration` support.

   The second parameter, *tokeneater*, must also be a callable object.  It is
   called once for each token, with five arguments, corresponding to the
   tuples generated by :func:`generate_tokens`.

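   A minimal sketch of a *tokeneater* callback (illustrative only; the
   function name is arbitrary)::

      from StringIO import StringIO
      import tokenize

      def print_token(toknum, tokval, start, end, line):
          # Called by tokenize() once for each token.
          print tokenize.tok_name[toknum], repr(tokval)

      tokenize.tokenize(StringIO('x = 1\n').readline, print_token)
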
All constants from the :mod:`token` module are also exported from
:mod:`tokenize`, as are two additional token type values that might be passed
to the *tokeneater* function by :func:`tokenize`:


.. data:: COMMENT

   Token value used to indicate a comment.


.. data:: NL

   Token value used to indicate a non-terminating newline.  The NEWLINE token
   indicates the end of a logical line of Python code; NL tokens are generated
   when a logical line of code is continued over multiple physical lines.

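   A small sketch (not from the original text) showing where NL and NEWLINE
   appear when one logical line spans two physical lines::

      from StringIO import StringIO
      from tokenize import generate_tokens, NL, NEWLINE

      source = 'x = (1 +\n     2)\n'
      for toknum, tokval, start, end, line in generate_tokens(
              StringIO(source).readline):
          if toknum == NL:
              print 'NL at', start        # line break inside the parentheses
          elif toknum == NEWLINE:
              print 'NEWLINE at', start   # end of the logical line
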
Another function is provided to reverse the tokenization process.  This is
useful for creating tools that tokenize a script, modify the token stream, and
write back the modified script.


.. function:: untokenize(iterable)

   Converts tokens back into Python source code.  The *iterable* must return
   sequences with at least two elements, the token type and the token string.
   Any additional sequence elements are ignored.

   The reconstructed script is returned as a single string.  The result is
   guaranteed to tokenize back to match the input so that the conversion is
   lossless and round-trips are assured.  The guarantee applies only to the
   token type and token string, as the spacing between tokens (column
   positions) may change.

   .. versionadded:: 2.5

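   A minimal round-trip sketch (an illustrative addition): the regenerated
   source re-tokenizes to the same (type, string) pairs::

      from StringIO import StringIO
      from tokenize import generate_tokens, untokenize

      source = 'x = 1\n'
      tokens = list(generate_tokens(StringIO(source).readline))
      rebuilt = untokenize(tokens)

      # Token types and strings survive the round trip, even though
      # column positions are not guaranteed to.
      assert ([tok[:2] for tok in generate_tokens(StringIO(rebuilt).readline)]
              == [tok[:2] for tok in tokens])
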
Example of a script re-writer that transforms float literals into Decimal
objects::

   from StringIO import StringIO
   from tokenize import generate_tokens, untokenize, NUMBER, STRING, NAME, OP

   def decistmt(s):
       """Substitute Decimals for floats in a string of statements.

       >>> from decimal import Decimal
       >>> s = 'print +21.3e-5*-.1234/81.7'
       >>> decistmt(s)
       "print +Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7')"

       >>> exec(s)
       -3.21716034272e-007
       >>> exec(decistmt(s))
       -3.217160342717258261933904529E-7

       """
       result = []
       g = generate_tokens(StringIO(s).readline)   # tokenize the string
       for toknum, tokval, _, _, _ in g:
           if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
               result.extend([
                   (NAME, 'Decimal'),
                   (OP, '('),
                   (STRING, repr(tokval)),
                   (OP, ')')
               ])
           else:
               result.append((toknum, tokval))
       return untokenize(result)