author    CoprDistGit <infra@openeuler.org> 2023-04-10 12:14:50 +0000
committer CoprDistGit <infra@openeuler.org> 2023-04-10 12:14:50 +0000
commit    e102de552baa22c39be5551081b8bf2fce4b590c (patch)
tree      e01be294b1a02adf9cd3c805a81e6d6ed5c0bd28
parent    e921dec7661e536935e248e1976ceab75f8c2fa0 (diff)
automatic import of python-tokenize-rt
-rw-r--r--  .gitignore              |   1
-rw-r--r--  python-tokenize-rt.spec | 291
-rw-r--r--  sources                 |   1
3 files changed, 293 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..3ef6e4d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/tokenize_rt-5.0.0.tar.gz
diff --git a/python-tokenize-rt.spec b/python-tokenize-rt.spec
new file mode 100644
index 0000000..0832273
--- /dev/null
+++ b/python-tokenize-rt.spec
@@ -0,0 +1,291 @@
+%global _empty_manifest_terminate_build 0
+Name: python-tokenize-rt
+Version: 5.0.0
+Release: 1
+Summary: A wrapper around the stdlib `tokenize` which roundtrips.
+License: MIT
+URL: https://github.com/asottile/tokenize-rt
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/40/01/fb40ea8c465f680bf7aa3f5bee39c62ba8b7f52c38048c27aa95aff4f779/tokenize_rt-5.0.0.tar.gz
+BuildArch: noarch
+
+
+%description
+The stdlib `tokenize` module does not properly roundtrip. This wrapper
+around the stdlib provides two additional tokens `ESCAPED_NL` and
+`UNIMPORTANT_WS`, and a `Token` data type. Use `src_to_tokens` and
+`tokens_to_src` to roundtrip.
+This library is useful if you're writing a refactoring tool based on
+python tokenization.
+## Installation
+```bash
+pip install tokenize-rt
+```
+## Usage
+### datastructures
+#### `tokenize_rt.Offset(line=None, utf8_byte_offset=None)`
+A token offset, useful as a key when cross referencing the `ast` and the
+tokenized source.
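+A minimal sketch of that cross-referencing (the `ast` attribute names are from
+the stdlib; the token stream details are illustrative):
+```pycon
+>>> import ast
+>>> from tokenize_rt import Offset, src_to_tokens
+>>> src = 'x = 1\n'
+>>> target = ast.parse(src).body[0].targets[0]
+>>> offsets = {tok.offset for tok in src_to_tokens(src)}
+>>> Offset(target.lineno, target.col_offset) in offsets
+True
+```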
+#### `tokenize_rt.Token(name, src, line=None, utf8_byte_offset=None)`
+Construct a token
+- `name`: one of the token names listed in `token.tok_name` or
+ `ESCAPED_NL` or `UNIMPORTANT_WS`
+- `src`: token's source as text
+- `line`: the line number that this token appears on.
+- `utf8_byte_offset`: the utf8 byte offset at which this token appears in
+  the line.
+#### `tokenize_rt.Token.offset`
+Retrieves an `Offset` for this token.
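+A minimal sketch, assuming the constructor signature documented above (the
+repr shown assumes `Offset` is a named tuple with these field names):
+```pycon
+>>> from tokenize_rt import Token
+>>> tok = Token('NAME', 'foo', line=1, utf8_byte_offset=4)
+>>> tok.offset
+Offset(line=1, utf8_byte_offset=4)
+```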
+### converting to and from `Token` representations
+#### `tokenize_rt.src_to_tokens(text: str) -> List[Token]`
+#### `tokenize_rt.tokens_to_src(Iterable[Token]) -> str`
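+These two functions are inverses; a short roundtrip sketch:
+```pycon
+>>> from tokenize_rt import src_to_tokens, tokens_to_src
+>>> src = 'x = 1\n'
+>>> tokens_to_src(src_to_tokens(src)) == src
+True
+```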
+### additional tokens added by `tokenize-rt`
+#### `tokenize_rt.ESCAPED_NL`
+#### `tokenize_rt.UNIMPORTANT_WS`
+### helpers
+#### `tokenize_rt.NON_CODING_TOKENS`
+A `frozenset` containing tokens which may appear between others while not
+affecting control flow or code:
+- `COMMENT`
+- `ESCAPED_NL`
+- `NL`
+- `UNIMPORTANT_WS`
+#### `tokenize_rt.parse_string_literal(text: str) -> Tuple[str, str]`
+parse a string literal into its prefix and string content
+```pycon
+>>> parse_string_literal('f"foo"')
+('f', '"foo"')
+```
+#### `tokenize_rt.reversed_enumerate(Sequence[Token]) -> Iterator[Tuple[int, Token]]`
+yields `(index, token)` pairs. Useful for rewriting source.
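+A small sketch of the usual rewrite pattern: iterating from the end means
+deleting a token does not shift the indices still to be visited (the
+whitespace that preceded the removed comment is preserved):
+```pycon
+>>> from tokenize_rt import reversed_enumerate, src_to_tokens, tokens_to_src
+>>> tokens = src_to_tokens('x = 1  # comment\n')
+>>> for i, token in reversed_enumerate(tokens):
+...     if token.name == 'COMMENT':
+...         del tokens[i]
+...
+>>> tokens_to_src(tokens)
+'x = 1  \n'
+```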
+#### `tokenize_rt.rfind_string_parts(Sequence[Token], i) -> Tuple[int, ...]`
+find the indices of the string parts of a (joined) string literal
+- `i` should start at the end of the string literal
+- returns `()` (an empty tuple) for things which are not string literals
+```pycon
+>>> tokens = src_to_tokens('"foo" "bar".capitalize()')
+>>> rfind_string_parts(tokens, 2)
+(0, 2)
+>>> tokens = src_to_tokens('("foo" "bar").capitalize()')
+>>> rfind_string_parts(tokens, 4)
+(1, 3)
+```
+## Differences from `tokenize`
+- `tokenize-rt` adds `ESCAPED_NL` for a backslash-escaped newline "token"
+- `tokenize-rt` adds `UNIMPORTANT_WS` for whitespace (discarded in `tokenize`)
+- `tokenize-rt` normalizes string prefixes, even if they are not parsed -- for
+ instance, this means you'll see `Token('STRING', "f'foo'", ...)` even in
+ python 2.
+- `tokenize-rt` normalizes python 2 long literals (`4l` / `4L`) and octal
+ literals (`0755`) in python 3 (for easier rewriting of python 2 code while
+ running python 3).
+## Sample usage
+- https://github.com/asottile/add-trailing-comma
+- https://github.com/asottile/future-annotations
+- https://github.com/asottile/future-fstrings
+- https://github.com/asottile/pyupgrade
+- https://github.com/asottile/yesqa
+
+%package -n python3-tokenize-rt
+Summary: A wrapper around the stdlib `tokenize` which roundtrips.
+Provides: python-tokenize-rt
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-tokenize-rt
+The stdlib `tokenize` module does not properly roundtrip. This wrapper
+around the stdlib provides two additional tokens `ESCAPED_NL` and
+`UNIMPORTANT_WS`, and a `Token` data type. Use `src_to_tokens` and
+`tokens_to_src` to roundtrip.
+This library is useful if you're writing a refactoring tool based on
+python tokenization.
+## Installation
+```bash
+pip install tokenize-rt
+```
+## Usage
+### datastructures
+#### `tokenize_rt.Offset(line=None, utf8_byte_offset=None)`
+A token offset, useful as a key when cross referencing the `ast` and the
+tokenized source.
+#### `tokenize_rt.Token(name, src, line=None, utf8_byte_offset=None)`
+Construct a token
+- `name`: one of the token names listed in `token.tok_name` or
+ `ESCAPED_NL` or `UNIMPORTANT_WS`
+- `src`: token's source as text
+- `line`: the line number that this token appears on.
+- `utf8_byte_offset`: the utf8 byte offset at which this token appears in
+  the line.
+#### `tokenize_rt.Token.offset`
+Retrieves an `Offset` for this token.
+### converting to and from `Token` representations
+#### `tokenize_rt.src_to_tokens(text: str) -> List[Token]`
+#### `tokenize_rt.tokens_to_src(Iterable[Token]) -> str`
+### additional tokens added by `tokenize-rt`
+#### `tokenize_rt.ESCAPED_NL`
+#### `tokenize_rt.UNIMPORTANT_WS`
+### helpers
+#### `tokenize_rt.NON_CODING_TOKENS`
+A `frozenset` containing tokens which may appear between others while not
+affecting control flow or code:
+- `COMMENT`
+- `ESCAPED_NL`
+- `NL`
+- `UNIMPORTANT_WS`
+#### `tokenize_rt.parse_string_literal(text: str) -> Tuple[str, str]`
+parse a string literal into its prefix and string content
+```pycon
+>>> parse_string_literal('f"foo"')
+('f', '"foo"')
+```
+#### `tokenize_rt.reversed_enumerate(Sequence[Token]) -> Iterator[Tuple[int, Token]]`
+yields `(index, token)` pairs. Useful for rewriting source.
+#### `tokenize_rt.rfind_string_parts(Sequence[Token], i) -> Tuple[int, ...]`
+find the indices of the string parts of a (joined) string literal
+- `i` should start at the end of the string literal
+- returns `()` (an empty tuple) for things which are not string literals
+```pycon
+>>> tokens = src_to_tokens('"foo" "bar".capitalize()')
+>>> rfind_string_parts(tokens, 2)
+(0, 2)
+>>> tokens = src_to_tokens('("foo" "bar").capitalize()')
+>>> rfind_string_parts(tokens, 4)
+(1, 3)
+```
+## Differences from `tokenize`
+- `tokenize-rt` adds `ESCAPED_NL` for a backslash-escaped newline "token"
+- `tokenize-rt` adds `UNIMPORTANT_WS` for whitespace (discarded in `tokenize`)
+- `tokenize-rt` normalizes string prefixes, even if they are not parsed -- for
+ instance, this means you'll see `Token('STRING', "f'foo'", ...)` even in
+ python 2.
+- `tokenize-rt` normalizes python 2 long literals (`4l` / `4L`) and octal
+ literals (`0755`) in python 3 (for easier rewriting of python 2 code while
+ running python 3).
+## Sample usage
+- https://github.com/asottile/add-trailing-comma
+- https://github.com/asottile/future-annotations
+- https://github.com/asottile/future-fstrings
+- https://github.com/asottile/pyupgrade
+- https://github.com/asottile/yesqa
+
+%package help
+Summary: Development documents and examples for tokenize-rt
+Provides: python3-tokenize-rt-doc
+%description help
+The stdlib `tokenize` module does not properly roundtrip. This wrapper
+around the stdlib provides two additional tokens `ESCAPED_NL` and
+`UNIMPORTANT_WS`, and a `Token` data type. Use `src_to_tokens` and
+`tokens_to_src` to roundtrip.
+This library is useful if you're writing a refactoring tool based on
+python tokenization.
+## Installation
+```bash
+pip install tokenize-rt
+```
+## Usage
+### datastructures
+#### `tokenize_rt.Offset(line=None, utf8_byte_offset=None)`
+A token offset, useful as a key when cross referencing the `ast` and the
+tokenized source.
+#### `tokenize_rt.Token(name, src, line=None, utf8_byte_offset=None)`
+Construct a token
+- `name`: one of the token names listed in `token.tok_name` or
+ `ESCAPED_NL` or `UNIMPORTANT_WS`
+- `src`: token's source as text
+- `line`: the line number that this token appears on.
+- `utf8_byte_offset`: the utf8 byte offset at which this token appears in
+  the line.
+#### `tokenize_rt.Token.offset`
+Retrieves an `Offset` for this token.
+### converting to and from `Token` representations
+#### `tokenize_rt.src_to_tokens(text: str) -> List[Token]`
+#### `tokenize_rt.tokens_to_src(Iterable[Token]) -> str`
+### additional tokens added by `tokenize-rt`
+#### `tokenize_rt.ESCAPED_NL`
+#### `tokenize_rt.UNIMPORTANT_WS`
+### helpers
+#### `tokenize_rt.NON_CODING_TOKENS`
+A `frozenset` containing tokens which may appear between others while not
+affecting control flow or code:
+- `COMMENT`
+- `ESCAPED_NL`
+- `NL`
+- `UNIMPORTANT_WS`
+#### `tokenize_rt.parse_string_literal(text: str) -> Tuple[str, str]`
+parse a string literal into its prefix and string content
+```pycon
+>>> parse_string_literal('f"foo"')
+('f', '"foo"')
+```
+#### `tokenize_rt.reversed_enumerate(Sequence[Token]) -> Iterator[Tuple[int, Token]]`
+yields `(index, token)` pairs. Useful for rewriting source.
+#### `tokenize_rt.rfind_string_parts(Sequence[Token], i) -> Tuple[int, ...]`
+find the indices of the string parts of a (joined) string literal
+- `i` should start at the end of the string literal
+- returns `()` (an empty tuple) for things which are not string literals
+```pycon
+>>> tokens = src_to_tokens('"foo" "bar".capitalize()')
+>>> rfind_string_parts(tokens, 2)
+(0, 2)
+>>> tokens = src_to_tokens('("foo" "bar").capitalize()')
+>>> rfind_string_parts(tokens, 4)
+(1, 3)
+```
+## Differences from `tokenize`
+- `tokenize-rt` adds `ESCAPED_NL` for a backslash-escaped newline "token"
+- `tokenize-rt` adds `UNIMPORTANT_WS` for whitespace (discarded in `tokenize`)
+- `tokenize-rt` normalizes string prefixes, even if they are not parsed -- for
+ instance, this means you'll see `Token('STRING', "f'foo'", ...)` even in
+ python 2.
+- `tokenize-rt` normalizes python 2 long literals (`4l` / `4L`) and octal
+ literals (`0755`) in python 3 (for easier rewriting of python 2 code while
+ running python 3).
+## Sample usage
+- https://github.com/asottile/add-trailing-comma
+- https://github.com/asottile/future-annotations
+- https://github.com/asottile/future-fstrings
+- https://github.com/asottile/pyupgrade
+- https://github.com/asottile/yesqa
+
+%prep
+%autosetup -n tokenize-rt-5.0.0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-tokenize-rt -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 5.0.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..d80b6df
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+09ad635922066b79bef43d7fe75c2257 tokenize_rt-5.0.0.tar.gz