\section{\module{tokenize} ---
         Tokenizer for Python source}

\declaremodule{standard}{tokenize}
\modulesynopsis{Lexical scanner for Python source code.}
\moduleauthor{Ka Ping Yee}{}
\sectionauthor{Fred L. Drake, Jr.}{fdrake@acm.org}


The \module{tokenize} module provides a lexical scanner for Python
source code, implemented in Python.  The scanner in this module
returns comments as tokens as well, making it useful for implementing
``pretty-printers,'' including colorizers for on-screen displays.

The primary entry point is a generator:

\begin{funcdesc}{generate_tokens}{readline}
  The \function{generate_tokens()} generator requires one argument,
  \var{readline}, which must be a callable object which
  provides the same interface as the \method{readline()} method of
  built-in file objects (see section~\ref{bltin-file-objects}).  Each
  call to the function should return one line of input as a string.

  The generator produces 5-tuples with these members:
  the token type;
  the token string;
  a 2-tuple \code{(\var{srow}, \var{scol})} of ints specifying the
  row and column where the token begins in the source;
  a 2-tuple \code{(\var{erow}, \var{ecol})} of ints specifying the
  row and column where the token ends in the source;
  and the line on which the token was found.
  The line passed is the \emph{logical} line;
  continuation lines are included.
  \versionadded{2.2}
\end{funcdesc}
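
For example, the following sketch (the input string is invented purely
for illustration) prints each token's type name, string, and starting
and ending positions:

\begin{verbatim}
from StringIO import StringIO
from tokenize import generate_tokens, tok_name

source = StringIO("total = 3.14 * radius  # area?\n")
for toknum, tokval, start, end, line in generate_tokens(source.readline):
    print "%-8s %-18r %s -> %s" % (tok_name[toknum], tokval, start, end)
\end{verbatim}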

An older entry point is retained for backward compatibility:

\begin{funcdesc}{tokenize}{readline\optional{, tokeneater}}
  The \function{tokenize()} function accepts two parameters: one
  representing the input stream, and one providing an output mechanism
  for \function{tokenize()}.

  The first parameter, \var{readline}, must be a callable object which
  provides the same interface as the \method{readline()} method of
  built-in file objects (see section~\ref{bltin-file-objects}).  Each
  call to the function should return one line of input as a string.
  Alternatively, \var{readline} may be a callable object that signals
  completion by raising \exception{StopIteration}.
  \versionchanged[Added \exception{StopIteration} support]{2.5}

  The second parameter, \var{tokeneater}, must also be a callable
  object.  It is called once for each token, with five arguments,
  corresponding to the tuples generated by \function{generate_tokens()}.
\end{funcdesc}
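
A minimal \var{tokeneater} might simply echo each token it receives;
the callback below is a sketch (its name is invented for illustration):

\begin{verbatim}
from StringIO import StringIO
from tokenize import tokenize, tok_name

def print_token(toknum, tokval, start, end, line):
    # Receives the same five values that make up a
    # generate_tokens() tuple.
    print tok_name[toknum], repr(tokval)

tokenize(StringIO("x = 1\n").readline, print_token)
\end{verbatim}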


All constants from the \refmodule{token} module are also exported from
\module{tokenize}, as are two additional token type values that might be
passed to the \var{tokeneater} function by \function{tokenize()}:

\begin{datadesc}{COMMENT}
  Token value used to indicate a comment.
\end{datadesc}
\begin{datadesc}{NL}
  Token value used to indicate a non-terminating newline.  The NEWLINE
  token indicates the end of a logical line of Python code; NL tokens
  are generated when a logical line of code is continued over multiple
  physical lines.
\end{datadesc}
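
The difference can be seen by tokenizing a bracketed expression that
spans two physical lines (the snippet is invented for illustration):
the newline inside the brackets produces an NL token, while the
newline that ends the statement produces NEWLINE:

\begin{verbatim}
from StringIO import StringIO
from tokenize import generate_tokens, tok_name

src = "totals = [1,  # partial\n          2]\n"
for toknum, tokval, start, end, line in generate_tokens(StringIO(src).readline):
    print tok_name[toknum], repr(tokval)
\end{verbatim}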

Another function is provided to reverse the tokenization process.
This is useful for creating tools that tokenize a script, modify
the token stream, and write back the modified script.

\begin{funcdesc}{untokenize}{iterable}
  Converts tokens back into Python source code.  The \var{iterable}
  must return sequences with at least two elements, the token type and
  the token string.  Any additional sequence elements are ignored.

  The reconstructed script is returned as a single string.  The
  result is guaranteed to tokenize back to match the input, so the
  conversion is lossless and round-trips are assured.  The guarantee
  applies only to the token type and token string, as the spacing
  between tokens (column positions) may change.
  \versionadded{2.5}
\end{funcdesc}
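
As a quick illustration of this guarantee (using an invented input
string), a token stream can be fed back through
\function{untokenize()} and re-tokenized; the token types and strings
compare equal even though the whitespace may differ:

\begin{verbatim}
from StringIO import StringIO
from tokenize import generate_tokens, untokenize

src = "x=3.14  # pi, roughly\n"
tokens = [(toknum, tokval) for toknum, tokval, _, _, _ in
          generate_tokens(StringIO(src).readline)]
round_trip = untokenize(tokens)  # spacing may differ from src
assert [tok[:2] for tok in
        generate_tokens(StringIO(round_trip).readline)] == tokens
\end{verbatim}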

Example of a script rewriter that transforms float literals into
\class{Decimal} objects:
\begin{verbatim}
from StringIO import StringIO
from tokenize import generate_tokens, untokenize, NUMBER, NAME, OP, STRING

def decistmt(s):
    """Substitute Decimals for floats in a string of statements.

    >>> from decimal import Decimal
    >>> s = 'print +21.3e-5*-.1234/81.7'
    >>> decistmt(s)
    "print +Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7')"

    >>> exec(s)
    -3.21716034272e-007
    >>> exec(decistmt(s))
    -3.217160342717258261933904529E-7

    """
    result = []
    g = generate_tokens(StringIO(s).readline)   # tokenize the string
    for toknum, tokval, _, _, _ in g:
        if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
            result.extend([
                (NAME, 'Decimal'),
                (OP, '('),
                (STRING, repr(tokval)),
                (OP, ')')
            ])
        else:
            result.append((toknum, tokval))
    return untokenize(result)
\end{verbatim}