author     CoprDistGit <infra@openeuler.org>    2023-05-05 14:14:08 +0000
committer  CoprDistGit <infra@openeuler.org>    2023-05-05 14:14:08 +0000
commit     fc898cc7b264e9a9fc9b9204ce5f3b258d1e5ef0 (patch)
tree       4a6116a77f8bce8699dcfd4dff78c0ca064f6402
parent     e3ab3aa2d953d8a69837603e2a396db448064ec1 (diff)
automatic import of python-scandeval (openeuler20.03)
-rw-r--r--  .gitignore               1
-rw-r--r--  python-scandeval.spec  667
-rw-r--r--  sources                  1
3 files changed, 669 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..b4c8fd1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/ScandEval-6.3.0.tar.gz
diff --git a/python-scandeval.spec b/python-scandeval.spec
new file mode 100644
index 0000000..552bc86
--- /dev/null
+++ b/python-scandeval.spec
@@ -0,0 +1,667 @@
+%global _empty_manifest_terminate_build 0
+Name: python-scandeval
+Version: 6.3.0
+Release: 1
+Summary: Evaluation of pretrained language models on mono- or multilingual Scandinavian language tasks.
+License: MIT
+URL: https://scandeval.github.io
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/b4/cd/d7af3e26d0b1867a3bd7f280bbc003aea8c5e60e97f9a6a9425208b6c77f/ScandEval-6.3.0.tar.gz
+BuildArch: noarch
+
+Requires: python3-tqdm
+Requires: python3-huggingface-hub
+Requires: python3-transformers
+Requires: python3-torch
+Requires: python3-datasets
+Requires: python3-click
+Requires: python3-termcolor
+Requires: python3-numpy
+Requires: python3-sentencepiece
+Requires: python3-protobuf
+Requires: python3-seqeval
+Requires: python3-pandas
+Requires: python3-dotenv
+Requires: python3-evaluate
+Requires: python3-sacremoses
+Requires: python3-jax
+Requires: python3-flax
+Requires: python3-jaxlib
+Requires: python3-pyinfer
+
+%description
+<div align='center'>
+<img src="https://raw.githubusercontent.com/saattrupdan/ScandEval/main/gfx/scandeval.png" width="517" height="217">
+</div>
+
+### Evaluation of pretrained language models on mono- or multilingual Scandinavian language tasks.
+
+______________________________________________________________________
+[![PyPI Status](https://badge.fury.io/py/scandeval.svg)](https://pypi.org/project/scandeval/)
+[![Documentation](https://img.shields.io/badge/docs-passing-green)](https://saattrupdan.github.io/ScandEval/scandeval.html)
+[![License](https://img.shields.io/github/license/saattrupdan/ScandEval)](https://github.com/saattrupdan/ScandEval/blob/main/LICENSE)
+[![LastCommit](https://img.shields.io/github/last-commit/saattrupdan/ScandEval)](https://github.com/saattrupdan/ScandEval/commits/main)
+[![Code Coverage](https://img.shields.io/badge/Coverage-73%25-yellow.svg)](https://github.com/saattrupdan/ScandEval/tree/main/tests)
+[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg)](https://github.com/saattrupdan/ScandEval/blob/main/CODE_OF_CONDUCT.md)
+
+
+## Installation
+To install the package, simply run the following command in your favorite terminal:
+```
+$ pip install scandeval
+```
+
+## Quickstart
+### Benchmarking from the Command Line
+The easiest way to benchmark pretrained models is via the command-line interface.
+After installing the package, you can benchmark your favorite model like so:
+```
+$ scandeval --model-id <model-id>
+```
+
+Here `<model-id>` is the Hugging Face model ID, which can be found on the [Hugging
+Face Hub](https://huggingface.co/models). By default this benchmarks the model on all
+eligible datasets. To benchmark on a specific dataset instead, use the `--dataset`
+flag. For instance, the following evaluates the model on the `AngryTweets` dataset:
+```
+$ scandeval --model-id <model-id> --dataset angry-tweets
+```
+
+We can also filter by language. To benchmark all Danish models on all Danish
+datasets, say, use the `--language` flag, like so:
+```
+$ scandeval --language da
+```
+
+Multiple models, datasets and/or languages can be specified by repeating the
+corresponding flag. Here is an example with two models:
+```
+$ scandeval --model-id <model-id1> --model-id <model-id2> --dataset angry-tweets
+```
+
+A specific model version can also be selected by appending '@' and the version to
+the model ID:
+```
+$ scandeval --model-id <model-id>@<commit>
+```
+
+The version can be a branch name, a tag name, or a commit ID, and defaults to
+'main', i.e. the latest version.
+
+See all the arguments and options available for the `scandeval` command by typing
+```
+$ scandeval --help
+```
+
+### Benchmarking from a Script
+In a script, the syntax is similar to the command-line interface. You simply
+initialize a `Benchmarker` object and call it with your favorite models and/or
+datasets:
+```
+>>> from scandeval import Benchmarker
+>>> benchmark = Benchmarker()
+>>> benchmark('<model-id>')
+```
+
+To benchmark on a specific dataset, simply pass it as the second argument, shown
+here with the `AngryTweets` dataset again:
+```
+>>> benchmark('<model-id>', 'angry-tweets')
+```
+
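+The CLI example above with two models can be reproduced in a script by simply calling
+the benchmark object once per model. The following is a minimal sketch that reuses
+only the calls already shown above, with the same placeholder model IDs:
+```
+>>> from scandeval import Benchmarker
+>>> benchmark = Benchmarker()
+>>> # Benchmark each model on the AngryTweets dataset, one call per model
+>>> for model_id in ['<model-id1>', '<model-id2>']:
+...     benchmark(model_id, 'angry-tweets')
+```
+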
+If you want to benchmark a subset of all the models on the Hugging Face Hub, you can
+specify several parameters in the `Benchmarker` initializer to narrow down the list of
+models to the ones you care about. As a simple example, the following would benchmark
+all the Nynorsk models on Nynorsk datasets:
+```
+>>> benchmark = Benchmarker(language='nn')
+>>> benchmark()
+```
+
+
+## Documentation
+
+See the full documentation [here](https://saattrupdan.github.io/ScandEval/scandeval.html).
+
+
+## Citing ScandEval
+If you want to cite the framework then feel free to use this:
+```
+@article{nielsen2022scandeval,
+ title={ScandEval: Evaluation of language models on mono- or multilingual Scandinavian language tasks.},
+ author={Nielsen, Dan Saattrup},
+ journal={GitHub. Note: https://github.com/saattrupdan/ScandEval},
+ year={2022}
+}
+```
+
+## Remarks
+The image used in the logo has been created by the amazing [Scandinavia and the
+World](https://satwcomic.com/) team. Go check them out!
+
+
+## Project structure
+```
+.
+├── .flake8
+├── .github
+│   └── workflows
+│       ├── ci.yaml
+│       └── docs.yaml
+├── .gitignore
+├── .pre-commit-config.yaml
+├── CHANGELOG.md
+├── LICENSE
+├── README.md
+├── gfx
+│   └── scandeval.png
+├── makefile
+├── notebooks
+├── poetry.toml
+├── pyproject.toml
+├── src
+│   ├── scandeval
+│   │   ├── __init__.py
+│   │   ├── benchmark_config_factory.py
+│   │   ├── benchmark_dataset.py
+│   │   ├── benchmarker.py
+│   │   ├── callbacks.py
+│   │   ├── cli.py
+│   │   ├── config.py
+│   │   ├── dataset_configs.py
+│   │   ├── dataset_factory.py
+│   │   ├── dataset_tasks.py
+│   │   ├── exceptions.py
+│   │   ├── hf_hub.py
+│   │   ├── languages.py
+│   │   ├── model_loading.py
+│   │   ├── named_entity_recognition.py
+│   │   ├── question_answering.py
+│   │   ├── question_answering_trainer.py
+│   │   ├── scores.py
+│   │   ├── sequence_classification.py
+│   │   ├── speed_benchmark.py
+│   │   ├── types.py
+│   │   └── utils.py
+│   └── scripts
+│       ├── create_angry_tweets.py
+│       ├── create_dane.py
+│       ├── create_mim_gold_ner.py
+│       ├── create_norec.py
+│       ├── create_norne.py
+│       ├── create_scala.py
+│       ├── create_scandiqa.py
+│       ├── create_suc3.py
+│       ├── create_swerec.py
+│       ├── create_wikiann_fo.py
+│       ├── fill_in_missing_model_metadata.py
+│       ├── fix_dot_env_file.py
+│       ├── load_ud_pos.py
+│       └── versioning.py
+└── tests
+    ├── __init__.py
+    ├── conftest.py
+    ├── test_benchmark_config_factory.py
+    ├── test_benchmark_dataset.py
+    ├── test_benchmarker.py
+    ├── test_callbacks.py
+    ├── test_cli.py
+    ├── test_config.py
+    ├── test_dataset_configs.py
+    ├── test_dataset_factory.py
+    ├── test_dataset_tasks.py
+    ├── test_exceptions.py
+    ├── test_hf_hub.py
+    ├── test_languages.py
+    ├── test_model_loading.py
+    ├── test_named_entity_recognition.py
+    ├── test_question_answering.py
+    ├── test_question_answering_trainer.py
+    ├── test_scores.py
+    ├── test_sequence_classification.py
+    ├── test_speed_benchmark.py
+    ├── test_types.py
+    └── test_utils.py
+```
+
+
+%package -n python3-scandeval
+Summary: Evaluation of pretrained language models on mono- or multilingual Scandinavian language tasks.
+Provides: python-scandeval
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-scandeval
+<div align='center'>
+<img src="https://raw.githubusercontent.com/saattrupdan/ScandEval/main/gfx/scandeval.png" width="517" height="217">
+</div>
+
+### Evaluation of pretrained language models on mono- or multilingual Scandinavian language tasks.
+
+______________________________________________________________________
+[![PyPI Status](https://badge.fury.io/py/scandeval.svg)](https://pypi.org/project/scandeval/)
+[![Documentation](https://img.shields.io/badge/docs-passing-green)](https://saattrupdan.github.io/ScandEval/scandeval.html)
+[![License](https://img.shields.io/github/license/saattrupdan/ScandEval)](https://github.com/saattrupdan/ScandEval/blob/main/LICENSE)
+[![LastCommit](https://img.shields.io/github/last-commit/saattrupdan/ScandEval)](https://github.com/saattrupdan/ScandEval/commits/main)
+[![Code Coverage](https://img.shields.io/badge/Coverage-73%25-yellow.svg)](https://github.com/saattrupdan/ScandEval/tree/main/tests)
+[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg)](https://github.com/saattrupdan/ScandEval/blob/main/CODE_OF_CONDUCT.md)
+
+
+## Installation
+To install the package, simply run the following command in your favorite terminal:
+```
+$ pip install scandeval
+```
+
+## Quickstart
+### Benchmarking from the Command Line
+The easiest way to benchmark pretrained models is via the command-line interface.
+After installing the package, you can benchmark your favorite model like so:
+```
+$ scandeval --model-id <model-id>
+```
+
+Here `<model-id>` is the Hugging Face model ID, which can be found on the [Hugging
+Face Hub](https://huggingface.co/models). By default this benchmarks the model on all
+eligible datasets. To benchmark on a specific dataset instead, use the `--dataset`
+flag. For instance, the following evaluates the model on the `AngryTweets` dataset:
+```
+$ scandeval --model-id <model-id> --dataset angry-tweets
+```
+
+We can also filter by language. To benchmark all Danish models on all Danish
+datasets, say, use the `--language` flag, like so:
+```
+$ scandeval --language da
+```
+
+Multiple models, datasets and/or languages can be specified by repeating the
+corresponding flag. Here is an example with two models:
+```
+$ scandeval --model-id <model-id1> --model-id <model-id2> --dataset angry-tweets
+```
+
+A specific model version can also be selected by appending '@' and the version to
+the model ID:
+```
+$ scandeval --model-id <model-id>@<commit>
+```
+
+The version can be a branch name, a tag name, or a commit ID, and defaults to
+'main', i.e. the latest version.
+
+See all the arguments and options available for the `scandeval` command by typing
+```
+$ scandeval --help
+```
+
+### Benchmarking from a Script
+In a script, the syntax is similar to the command-line interface. You simply
+initialize a `Benchmarker` object and call it with your favorite models and/or
+datasets:
+```
+>>> from scandeval import Benchmarker
+>>> benchmark = Benchmarker()
+>>> benchmark('<model-id>')
+```
+
+To benchmark on a specific dataset, simply pass it as the second argument, shown
+here with the `AngryTweets` dataset again:
+```
+>>> benchmark('<model-id>', 'angry-tweets')
+```
+
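+The CLI example above with two models can be reproduced in a script by simply calling
+the benchmark object once per model. The following is a minimal sketch that reuses
+only the calls already shown above, with the same placeholder model IDs:
+```
+>>> from scandeval import Benchmarker
+>>> benchmark = Benchmarker()
+>>> # Benchmark each model on the AngryTweets dataset, one call per model
+>>> for model_id in ['<model-id1>', '<model-id2>']:
+...     benchmark(model_id, 'angry-tweets')
+```
+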
+If you want to benchmark a subset of all the models on the Hugging Face Hub, you can
+specify several parameters in the `Benchmarker` initializer to narrow down the list of
+models to the ones you care about. As a simple example, the following would benchmark
+all the Nynorsk models on Nynorsk datasets:
+```
+>>> benchmark = Benchmarker(language='nn')
+>>> benchmark()
+```
+
+
+## Documentation
+
+See the full documentation [here](https://saattrupdan.github.io/ScandEval/scandeval.html).
+
+
+## Citing ScandEval
+If you want to cite the framework then feel free to use this:
+```
+@article{nielsen2022scandeval,
+ title={ScandEval: Evaluation of language models on mono- or multilingual Scandinavian language tasks.},
+ author={Nielsen, Dan Saattrup},
+ journal={GitHub. Note: https://github.com/saattrupdan/ScandEval},
+ year={2022}
+}
+```
+
+## Remarks
+The image used in the logo has been created by the amazing [Scandinavia and the
+World](https://satwcomic.com/) team. Go check them out!
+
+
+## Project structure
+```
+.
+├── .flake8
+├── .github
+│   └── workflows
+│       ├── ci.yaml
+│       └── docs.yaml
+├── .gitignore
+├── .pre-commit-config.yaml
+├── CHANGELOG.md
+├── LICENSE
+├── README.md
+├── gfx
+│   └── scandeval.png
+├── makefile
+├── notebooks
+├── poetry.toml
+├── pyproject.toml
+├── src
+│   ├── scandeval
+│   │   ├── __init__.py
+│   │   ├── benchmark_config_factory.py
+│   │   ├── benchmark_dataset.py
+│   │   ├── benchmarker.py
+│   │   ├── callbacks.py
+│   │   ├── cli.py
+│   │   ├── config.py
+│   │   ├── dataset_configs.py
+│   │   ├── dataset_factory.py
+│   │   ├── dataset_tasks.py
+│   │   ├── exceptions.py
+│   │   ├── hf_hub.py
+│   │   ├── languages.py
+│   │   ├── model_loading.py
+│   │   ├── named_entity_recognition.py
+│   │   ├── question_answering.py
+│   │   ├── question_answering_trainer.py
+│   │   ├── scores.py
+│   │   ├── sequence_classification.py
+│   │   ├── speed_benchmark.py
+│   │   ├── types.py
+│   │   └── utils.py
+│   └── scripts
+│       ├── create_angry_tweets.py
+│       ├── create_dane.py
+│       ├── create_mim_gold_ner.py
+│       ├── create_norec.py
+│       ├── create_norne.py
+│       ├── create_scala.py
+│       ├── create_scandiqa.py
+│       ├── create_suc3.py
+│       ├── create_swerec.py
+│       ├── create_wikiann_fo.py
+│       ├── fill_in_missing_model_metadata.py
+│       ├── fix_dot_env_file.py
+│       ├── load_ud_pos.py
+│       └── versioning.py
+└── tests
+    ├── __init__.py
+    ├── conftest.py
+    ├── test_benchmark_config_factory.py
+    ├── test_benchmark_dataset.py
+    ├── test_benchmarker.py
+    ├── test_callbacks.py
+    ├── test_cli.py
+    ├── test_config.py
+    ├── test_dataset_configs.py
+    ├── test_dataset_factory.py
+    ├── test_dataset_tasks.py
+    ├── test_exceptions.py
+    ├── test_hf_hub.py
+    ├── test_languages.py
+    ├── test_model_loading.py
+    ├── test_named_entity_recognition.py
+    ├── test_question_answering.py
+    ├── test_question_answering_trainer.py
+    ├── test_scores.py
+    ├── test_sequence_classification.py
+    ├── test_speed_benchmark.py
+    ├── test_types.py
+    └── test_utils.py
+```
+
+
+%package help
+Summary: Development documents and examples for scandeval
+Provides: python3-scandeval-doc
+%description help
+<div align='center'>
+<img src="https://raw.githubusercontent.com/saattrupdan/ScandEval/main/gfx/scandeval.png" width="517" height="217">
+</div>
+
+### Evaluation of pretrained language models on mono- or multilingual Scandinavian language tasks.
+
+______________________________________________________________________
+[![PyPI Status](https://badge.fury.io/py/scandeval.svg)](https://pypi.org/project/scandeval/)
+[![Documentation](https://img.shields.io/badge/docs-passing-green)](https://saattrupdan.github.io/ScandEval/scandeval.html)
+[![License](https://img.shields.io/github/license/saattrupdan/ScandEval)](https://github.com/saattrupdan/ScandEval/blob/main/LICENSE)
+[![LastCommit](https://img.shields.io/github/last-commit/saattrupdan/ScandEval)](https://github.com/saattrupdan/ScandEval/commits/main)
+[![Code Coverage](https://img.shields.io/badge/Coverage-73%25-yellow.svg)](https://github.com/saattrupdan/ScandEval/tree/main/tests)
+[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-2.0-4baaaa.svg)](https://github.com/saattrupdan/ScandEval/blob/main/CODE_OF_CONDUCT.md)
+
+
+## Installation
+To install the package, simply run the following command in your favorite terminal:
+```
+$ pip install scandeval
+```
+
+## Quickstart
+### Benchmarking from the Command Line
+The easiest way to benchmark pretrained models is via the command-line interface.
+After installing the package, you can benchmark your favorite model like so:
+```
+$ scandeval --model-id <model-id>
+```
+
+Here `<model-id>` is the Hugging Face model ID, which can be found on the [Hugging
+Face Hub](https://huggingface.co/models). By default this benchmarks the model on all
+eligible datasets. To benchmark on a specific dataset instead, use the `--dataset`
+flag. For instance, the following evaluates the model on the `AngryTweets` dataset:
+```
+$ scandeval --model-id <model-id> --dataset angry-tweets
+```
+
+We can also filter by language. To benchmark all Danish models on all Danish
+datasets, say, use the `--language` flag, like so:
+```
+$ scandeval --language da
+```
+
+Multiple models, datasets and/or languages can be specified by repeating the
+corresponding flag. Here is an example with two models:
+```
+$ scandeval --model-id <model-id1> --model-id <model-id2> --dataset angry-tweets
+```
+
+A specific model version can also be selected by appending '@' and the version to
+the model ID:
+```
+$ scandeval --model-id <model-id>@<commit>
+```
+
+The version can be a branch name, a tag name, or a commit ID, and defaults to
+'main', i.e. the latest version.
+
+See all the arguments and options available for the `scandeval` command by typing
+```
+$ scandeval --help
+```
+
+### Benchmarking from a Script
+In a script, the syntax is similar to the command-line interface. You simply
+initialize a `Benchmarker` object and call it with your favorite models and/or
+datasets:
+```
+>>> from scandeval import Benchmarker
+>>> benchmark = Benchmarker()
+>>> benchmark('<model-id>')
+```
+
+To benchmark on a specific dataset, simply pass it as the second argument, shown
+here with the `AngryTweets` dataset again:
+```
+>>> benchmark('<model-id>', 'angry-tweets')
+```
+
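+The CLI example above with two models can be reproduced in a script by simply calling
+the benchmark object once per model. The following is a minimal sketch that reuses
+only the calls already shown above, with the same placeholder model IDs:
+```
+>>> from scandeval import Benchmarker
+>>> benchmark = Benchmarker()
+>>> # Benchmark each model on the AngryTweets dataset, one call per model
+>>> for model_id in ['<model-id1>', '<model-id2>']:
+...     benchmark(model_id, 'angry-tweets')
+```
+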
+If you want to benchmark a subset of all the models on the Hugging Face Hub, you can
+specify several parameters in the `Benchmarker` initializer to narrow down the list of
+models to the ones you care about. As a simple example, the following would benchmark
+all the Nynorsk models on Nynorsk datasets:
+```
+>>> benchmark = Benchmarker(language='nn')
+>>> benchmark()
+```
+
+
+## Documentation
+
+See the full documentation [here](https://saattrupdan.github.io/ScandEval/scandeval.html).
+
+
+## Citing ScandEval
+If you want to cite the framework then feel free to use this:
+```
+@article{nielsen2022scandeval,
+ title={ScandEval: Evaluation of language models on mono- or multilingual Scandinavian language tasks.},
+ author={Nielsen, Dan Saattrup},
+ journal={GitHub. Note: https://github.com/saattrupdan/ScandEval},
+ year={2022}
+}
+```
+
+## Remarks
+The image used in the logo has been created by the amazing [Scandinavia and the
+World](https://satwcomic.com/) team. Go check them out!
+
+
+## Project structure
+```
+.
+├── .flake8
+├── .github
+│   └── workflows
+│       ├── ci.yaml
+│       └── docs.yaml
+├── .gitignore
+├── .pre-commit-config.yaml
+├── CHANGELOG.md
+├── LICENSE
+├── README.md
+├── gfx
+│   └── scandeval.png
+├── makefile
+├── notebooks
+├── poetry.toml
+├── pyproject.toml
+├── src
+│   ├── scandeval
+│   │   ├── __init__.py
+│   │   ├── benchmark_config_factory.py
+│   │   ├── benchmark_dataset.py
+│   │   ├── benchmarker.py
+│   │   ├── callbacks.py
+│   │   ├── cli.py
+│   │   ├── config.py
+│   │   ├── dataset_configs.py
+│   │   ├── dataset_factory.py
+│   │   ├── dataset_tasks.py
+│   │   ├── exceptions.py
+│   │   ├── hf_hub.py
+│   │   ├── languages.py
+│   │   ├── model_loading.py
+│   │   ├── named_entity_recognition.py
+│   │   ├── question_answering.py
+│   │   ├── question_answering_trainer.py
+│   │   ├── scores.py
+│   │   ├── sequence_classification.py
+│   │   ├── speed_benchmark.py
+│   │   ├── types.py
+│   │   └── utils.py
+│   └── scripts
+│       ├── create_angry_tweets.py
+│       ├── create_dane.py
+│       ├── create_mim_gold_ner.py
+│       ├── create_norec.py
+│       ├── create_norne.py
+│       ├── create_scala.py
+│       ├── create_scandiqa.py
+│       ├── create_suc3.py
+│       ├── create_swerec.py
+│       ├── create_wikiann_fo.py
+│       ├── fill_in_missing_model_metadata.py
+│       ├── fix_dot_env_file.py
+│       ├── load_ud_pos.py
+│       └── versioning.py
+└── tests
+    ├── __init__.py
+    ├── conftest.py
+    ├── test_benchmark_config_factory.py
+    ├── test_benchmark_dataset.py
+    ├── test_benchmarker.py
+    ├── test_callbacks.py
+    ├── test_cli.py
+    ├── test_config.py
+    ├── test_dataset_configs.py
+    ├── test_dataset_factory.py
+    ├── test_dataset_tasks.py
+    ├── test_exceptions.py
+    ├── test_hf_hub.py
+    ├── test_languages.py
+    ├── test_model_loading.py
+    ├── test_named_entity_recognition.py
+    ├── test_question_answering.py
+    ├── test_question_answering_trainer.py
+    ├── test_scores.py
+    ├── test_sequence_classification.py
+    ├── test_speed_benchmark.py
+    ├── test_types.py
+    └── test_utils.py
+```
+
+
+%prep
+%autosetup -n scandeval-6.3.0
+
+%build
+%py3_build
+
+%install
+%py3_install
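+# Copy any bundled documentation and example directories into the package doc directory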
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
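+# Collect the installed files and man pages into filelist.lst / doclist.lst for packaging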
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-scandeval -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 6.3.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..fb3f365
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+3ecd2991a91c30d1386375a8f7ceebc8 ScandEval-6.3.0.tar.gz