author:    CoprDistGit <infra@openeuler.org>  2023-05-05 04:30:58 +0000
committer: CoprDistGit <infra@openeuler.org>  2023-05-05 04:30:58 +0000
commit:    8c9abe45675bf272d557239fd0534c3732abb0d1 (patch)
tree:      6315f61f319354bb7ea903f04bf34ddef144d3a6
parent:    1da41af1320881e2658c6b35fe871319ea2891a0 (diff)
automatic import of python-english-words (branch: openeuler20.03)
-rw-r--r--  .gitignore                   1
-rw-r--r--  python-english-words.spec  273
-rw-r--r--  sources                      1
3 files changed, 275 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..c652914 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/english-words-2.0.0.tar.gz
diff --git a/python-english-words.spec b/python-english-words.spec
new file mode 100644
index 0000000..9ab6b12
--- /dev/null
+++ b/python-english-words.spec
@@ -0,0 +1,273 @@
+%global _empty_manifest_terminate_build 0
+Name: python-english-words
+Version: 2.0.0
+Release: 1
+Summary: Generate sets of English words by combining different word lists
+License: MIT
+URL: https://github.com/mwiens91/english-words-py
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/34/55/4a8c7eb50e2c9445e8dd8e1960893050267179359fbd348a0601b800d7c8/english-words-2.0.0.tar.gz
+BuildArch: noarch
+
+
+%description
+[![PyPI](https://img.shields.io/pypi/v/english-words.svg)](https://pypi.org/project/english-words/)
+
+# english-words-py
+
+Returns sets of English words created by combining different word
+lists. For example, to get a set of English words from the "web2"
+word list, with everything converted to lower case, you would write
+the following:
+
+```python3
+>>> from english_words import get_english_words_set
+>>> web2lowerset = get_english_words_set(['web2'], lower=True)
+```
+
+## Usage
+
+From the main package, import `get_english_words_set` as demonstrated
+above. This function takes a number of arguments; the first is a list of
+word list identifiers for the word lists to combine and the rest are
+flags. These arguments are described here (in the following order):
+
+- `sources` is an iterable containing strings
+corresponding to word list identifiers (see "Word lists" subsection
+below)
+- `alpha` (default `False`) is a flag specifying that all
+ non-alphanumeric characters (e.g.: `-`, `'`) should be stripped
+- `lower` (default `False`) is a flag specifying that all upper-case
+ letters should be converted to lower-case
+
+Each word list is pre-processed to handle the above flags, so using any
+combination of options will not cause the function to run slower.
+
+Note that some care is needed when combining word lists. For
+example, only proper nouns in the `web2` word list are capitalized, but
+every word in the `gcide` word list is capitalized.
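The flag semantics above can be illustrated in plain Python. This is a sketch of the documented behaviour only; `normalize` is a hypothetical helper, not part of the package:

```python
def normalize(words, alpha=False, lower=False):
    """Apply the documented `alpha`/`lower` flag semantics to an iterable of words."""
    out = set()
    for word in words:
        if alpha:
            # Strip non-alphanumeric characters such as "-" and "'".
            word = "".join(c for c in word if c.isalnum())
        if lower:
            word = word.lower()
        if word:
            out.add(word)
    return out

# "Aaron's" loses its apostrophe under alpha=True and is lower-cased:
print(normalize({"Aaron's", "Zebra"}, alpha=True, lower=True))
```

Because the real word lists are pre-processed for each flag combination, the package avoids paying this per-word cost at call time.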
+
+### Word lists
+
+| Name/URL | Identifier | Notes |
+| :--- | :--- | :--- |
+| [GCIDE 0.53 index](https://ftp.gnu.org/gnu/gcide/) | `gcide` | Words found in the GNU Collaborative International Dictionary of English 0.53. All words are capitalized, as in a dictionary.<br/><br/>Unicode characters are currently unprocessed; for example, `<ae/` appears in the dictionary instead of `æ`. Ideally, these should all be converted. |
+| [web2 revision 326913](https://svnweb.freebsd.org/base/head/share/dict/web2?view=markup&pathrev=326913) | `web2` | |
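The entity conversion mentioned in the `gcide` notes could be handled with a small substitution table. Only `<ae/` is confirmed by the text above; the other entity names here are assumptions for illustration:

```python
import re

# Hypothetical mapping from GCIDE-style entities to Unicode characters;
# only "<ae/" is confirmed by the notes above.
GCIDE_ENTITIES = {
    "<ae/": "æ",
    "<AE/": "Æ",
    "<oe/": "œ",
}

_ENTITY_RE = re.compile("|".join(re.escape(k) for k in GCIDE_ENTITIES))

def convert_entities(word):
    """Replace known GCIDE entities; leave unknown text untouched."""
    return _ENTITY_RE.sub(lambda m: GCIDE_ENTITIES[m.group(0)], word)

print(convert_entities("<ae/on"))  # æon
```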
+
+## Adding additional word lists
+
+To add a word list, say with identifier `x`, put the word list (one word
+per line) into a plain text file `x.txt` in the [`raw_data`](raw_data)
+directory at the root of the repository. Then, to process the word list
+(and all others in the directory) run the script
+[`process_raw_data.py`](scripts/process_raw_data.py).
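The one-word-per-line format is simple to consume; a minimal reader (an illustration of the format only, not the actual `process_raw_data.py` script) could look like:

```python
def read_word_list(path):
    """Read a one-word-per-line text file into a set, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}
```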
+
+## Installation
+
+Install this package with pip:
+
+```
+pip install english-words
+```
+
+This package is unfortunately rather large (~20MB), and will run into
+scaling issues if more word lists or (especially) options are added.
+When that bridge is crossed, word lists should possibly be chosen by the
+user instead of simply including all of them; word lists could also be
+preprocessed on the client side instead of being included in the
+package.
+
+
+
+
+%package -n python3-english-words
+Summary: Generate sets of English words by combining different word lists
+Provides: python-english-words
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-english-words
+[![PyPI](https://img.shields.io/pypi/v/english-words.svg)](https://pypi.org/project/english-words/)
+
+# english-words-py
+
+Returns sets of English words created by combining different word
+lists. For example, to get a set of English words from the "web2"
+word list, with everything converted to lower case, you would write
+the following:
+
+```python3
+>>> from english_words import get_english_words_set
+>>> web2lowerset = get_english_words_set(['web2'], lower=True)
+```
+
+## Usage
+
+From the main package, import `get_english_words_set` as demonstrated
+above. This function takes a number of arguments; the first is a list of
+word list identifiers for the word lists to combine and the rest are
+flags. These arguments are described here (in the following order):
+
+- `sources` is an iterable containing strings
+corresponding to word list identifiers (see "Word lists" subsection
+below)
+- `alpha` (default `False`) is a flag specifying that all
+ non-alphanumeric characters (e.g.: `-`, `'`) should be stripped
+- `lower` (default `False`) is a flag specifying that all upper-case
+ letters should be converted to lower-case
+
+Each word list is pre-processed to handle the above flags, so using any
+combination of options will not cause the function to run slower.
+
+Note that some care is needed when combining word lists. For
+example, only proper nouns in the `web2` word list are capitalized, but
+every word in the `gcide` word list is capitalized.
+
+### Word lists
+
+| Name/URL | Identifier | Notes |
+| :--- | :--- | :--- |
+| [GCIDE 0.53 index](https://ftp.gnu.org/gnu/gcide/) | `gcide` | Words found in the GNU Collaborative International Dictionary of English 0.53. All words are capitalized, as in a dictionary.<br/><br/>Unicode characters are currently unprocessed; for example, `<ae/` appears in the dictionary instead of `æ`. Ideally, these should all be converted. |
+| [web2 revision 326913](https://svnweb.freebsd.org/base/head/share/dict/web2?view=markup&pathrev=326913) | `web2` | |
+
+## Adding additional word lists
+
+To add a word list, say with identifier `x`, put the word list (one word
+per line) into a plain text file `x.txt` in the [`raw_data`](raw_data)
+directory at the root of the repository. Then, to process the word list
+(and all others in the directory) run the script
+[`process_raw_data.py`](scripts/process_raw_data.py).
+
+## Installation
+
+Install this package with pip:
+
+```
+pip install english-words
+```
+
+This package is unfortunately rather large (~20MB), and will run into
+scaling issues if more word lists or (especially) options are added.
+When that bridge is crossed, word lists should possibly be chosen by the
+user instead of simply including all of them; word lists could also be
+preprocessed on the client side instead of being included in the
+package.
+
+
+
+
+%package help
+Summary: Development documents and examples for english-words
+Provides: python3-english-words-doc
+%description help
+[![PyPI](https://img.shields.io/pypi/v/english-words.svg)](https://pypi.org/project/english-words/)
+
+# english-words-py
+
+Returns sets of English words created by combining different word
+lists. For example, to get a set of English words from the "web2"
+word list, with everything converted to lower case, you would write
+the following:
+
+```python3
+>>> from english_words import get_english_words_set
+>>> web2lowerset = get_english_words_set(['web2'], lower=True)
+```
+
+## Usage
+
+From the main package, import `get_english_words_set` as demonstrated
+above. This function takes a number of arguments; the first is a list of
+word list identifiers for the word lists to combine and the rest are
+flags. These arguments are described here (in the following order):
+
+- `sources` is an iterable containing strings
+corresponding to word list identifiers (see "Word lists" subsection
+below)
+- `alpha` (default `False`) is a flag specifying that all
+ non-alphanumeric characters (e.g.: `-`, `'`) should be stripped
+- `lower` (default `False`) is a flag specifying that all upper-case
+ letters should be converted to lower-case
+
+Each word list is pre-processed to handle the above flags, so using any
+combination of options will not cause the function to run slower.
+
+Note that some care is needed when combining word lists. For
+example, only proper nouns in the `web2` word list are capitalized, but
+every word in the `gcide` word list is capitalized.
+
+### Word lists
+
+| Name/URL | Identifier | Notes |
+| :--- | :--- | :--- |
+| [GCIDE 0.53 index](https://ftp.gnu.org/gnu/gcide/) | `gcide` | Words found in the GNU Collaborative International Dictionary of English 0.53. All words are capitalized, as in a dictionary.<br/><br/>Unicode characters are currently unprocessed; for example, `<ae/` appears in the dictionary instead of `æ`. Ideally, these should all be converted. |
+| [web2 revision 326913](https://svnweb.freebsd.org/base/head/share/dict/web2?view=markup&pathrev=326913) | `web2` | |
+
+## Adding additional word lists
+
+To add a word list, say with identifier `x`, put the word list (one word
+per line) into a plain text file `x.txt` in the [`raw_data`](raw_data)
+directory at the root of the repository. Then, to process the word list
+(and all others in the directory) run the script
+[`process_raw_data.py`](scripts/process_raw_data.py).
+
+## Installation
+
+Install this package with pip:
+
+```
+pip install english-words
+```
+
+This package is unfortunately rather large (~20MB), and will run into
+scaling issues if more word lists or (especially) options are added.
+When that bridge is crossed, word lists should possibly be chosen by the
+user instead of simply including all of them; word lists could also be
+preprocessed on the client side instead of being included in the
+package.
+
+
+
+
+%prep
+%autosetup -n english-words-2.0.0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-english-words -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 2.0.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..1403820
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+5c174af14180a977d8916bcda4c9b500 english-words-2.0.0.tar.gz