author    CoprDistGit <infra@openeuler.org>  2023-06-20 06:21:37 +0000
committer CoprDistGit <infra@openeuler.org>  2023-06-20 06:21:37 +0000
commit    ad0493ffb294e2c7f9bf8bf5627100f4b5ceecd3 (patch)
tree      5abf1684534179cba733601d565c95eb6ab35920
parent    df4cbdf9b185a896405ebe46d4fd5624758943c9 (diff)
automatic import of python-cute-ranking (branch: openeuler20.03)
-rw-r--r--  .gitignore                   1
-rw-r--r--  python-cute-ranking.spec   217
-rw-r--r--  sources                      1
3 files changed, 219 insertions(+), 0 deletions(-)
diff --git a/.gitignore b/.gitignore
index e69de29..aab522b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/cute_ranking-0.0.3.tar.gz
diff --git a/python-cute-ranking.spec b/python-cute-ranking.spec
new file mode 100644
index 0000000..e1aafb4
--- /dev/null
+++ b/python-cute-ranking.spec
@@ -0,0 +1,217 @@
+%global _empty_manifest_terminate_build 0
+Name: python-cute-ranking
+Version: 0.0.3
+Release: 1
+Summary: A cute little Python module for calculating different ranking metrics
+License: Apache Software License 2.0
+URL: https://github.com/ncoop57/cute_ranking/tree/main/
+Source0: https://mirrors.aliyun.com/pypi/web/packages/af/7e/ef728679c6f11668b99c8f4d5e3bbda5f1abd05c983850ca04cf666ff9c3/cute_ranking-0.0.3.tar.gz
+BuildArch: noarch
+
+Requires: python3-numpy
+
+%description
+# Cute Ranking
+> A cute little python module for calculating different ranking metrics. Based entirely on the gist from https://gist.github.com/bwhite/3726239.
+
+
+[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/cute-ranking)](https://pypi.org/project/cute-ranking/)
+[![PyPI Status](https://badge.fury.io/py/cute-ranking.svg)](https://badge.fury.io/py/cute-ranking)
+[![PyPI Status](https://pepy.tech/badge/cute-ranking)](https://pepy.tech/project/cute-ranking)
+[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/ncoop57/cute-ranking/blob/main/LICENSE)
+
+## Install
+
+Requires Python 3.6 or later.
+
+`pip install cute_ranking`
+
+## How to use
+
+```python
+from cute_ranking.core import mean_reciprocal_rank
+
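+# Each inner list is one query's binary relevance judgments, ranked best-first.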
+relevancies = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
+mean_reciprocal_rank(relevancies)
+```
+
+This returns `0.611111111111111`: the first relevant result appears at ranks 3, 2, and 1
+for the three queries, so the reciprocal ranks are 1/3, 1/2, and 1, and their mean is 11/18.
+
+The library currently supports the following information retrieval ranking metrics
+(a short usage sketch follows the list):
+1. Mean Reciprocal Rank - `mean_reciprocal_rank`
+2. Relevancy Precision - `r_precision`
+3. Precision at K - `precision_at_k`
+4. Recall at K - `recall_at_k`
+5. F1 score at K - `f1_score_at_k`
+6. Average Precision - `average_precision`
+7. Mean Average Precision - `mean_average_precision`
+8. Discounted Cumulative Gain at K - `dcg_at_k`
+9. Normalized Discounted Cumulative Gain at K - `ndcg_at_k`
+10. Mean Rank - `mean_rank`
+11. Hit@k - `hit_rate_at_k`
+
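+A minimal sketch calling a few of these, assuming the other functions are exported from
+`cute_ranking.core` alongside `mean_reciprocal_rank` and keep the signatures of bwhite's
+gist (per-query metrics take one relevance vector; `mean_*` metrics take a list of them):
+
+```python
+from cute_ranking.core import (
+    mean_average_precision,
+    mean_reciprocal_rank,
+    precision_at_k,
+)
+
+# Same relevance data as above: one binary vector per query.
+relevancies = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
+
+# Corpus-level metrics take the full list of vectors.
+print(mean_reciprocal_rank(relevancies))    # 0.611...
+print(mean_average_precision(relevancies))
+
+# Per-query metrics take a single vector (plus a cutoff where applicable).
+print(precision_at_k(relevancies[0], 2))    # 0.0 -- no relevant hit in the top 2
+```
+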
+# Contributing
+PRs and issues welcome! Please make sure to read through the `CONTRIBUTING.md` doc for how to contribute :).
+
+%package -n python3-cute-ranking
+Summary: A cute little Python module for calculating different ranking metrics
+Provides: python-cute-ranking
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-cute-ranking
+# Cute Ranking
+> A cute little python module for calculating different ranking metrics. Based entirely on the gist from https://gist.github.com/bwhite/3726239.
+
+
+[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/cute-ranking)](https://pypi.org/project/cute-ranking/)
+[![PyPI Status](https://badge.fury.io/py/cute-ranking.svg)](https://badge.fury.io/py/cute-ranking)
+[![PyPI Status](https://pepy.tech/badge/cute-ranking)](https://pepy.tech/project/cute-ranking)
+[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/ncoop57/cute-ranking/blob/main/LICENSE)
+
+## Install
+
+Requires Python 3.6 or later.
+
+`pip install cute_ranking`
+
+## How to use
+
+```python
+from cute_ranking.core import mean_reciprocal_rank
+
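+# Each inner list is one query's binary relevance judgments, ranked best-first.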
+relevancies = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
+mean_reciprocal_rank(relevancies)
+```
+
+This returns `0.611111111111111`: the first relevant result appears at ranks 3, 2, and 1
+for the three queries, so the reciprocal ranks are 1/3, 1/2, and 1, and their mean is 11/18.
+
+The library currently supports the following information retrieval ranking metrics:
+1. Mean Reciprocal Rank - `mean_reciprocal_rank`
+2. Relevancy Precision - `r_precision`
+3. Precision at K - `precision_at_k`
+4. Recall at K - `recall_at_k`
+5. F1 score at K - `f1_score_at_k`
+6. Average Precision - `average_precision`
+7. Mean Average Precision - `mean_average_precision`
+8. Discounted Cumulative Gain at K - `dcg_at_k`
+9. Normalized Discounted Cumulative Gain at K - `ndcg_at_k`
+10. Mean Rank - `mean_rank`
+11. Hit@k - `hit_rate_at_k`
+
+# Contributing
+PRs and issues welcome! Please make sure to read through the `CONTRIBUTING.md` doc for how to contribute :).
+
+%package help
+Summary: Development documents and examples for cute-ranking
+Provides: python3-cute-ranking-doc
+%description help
+# Cute Ranking
+> A cute little python module for calculating different ranking metrics. Based entirely on the gist from https://gist.github.com/bwhite/3726239.
+
+
+[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/cute-ranking)](https://pypi.org/project/cute-ranking/)
+[![PyPI Status](https://badge.fury.io/py/cute-ranking.svg)](https://badge.fury.io/py/cute-ranking)
+[![PyPI Status](https://pepy.tech/badge/cute-ranking)](https://pepy.tech/project/cute-ranking)
+[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/ncoop57/cute-ranking/blob/main/LICENSE)
+
+## Install
+
+Requires Python 3.6 or later.
+
+`pip install cute_ranking`
+
+## How to use
+
+```python
+from cute_ranking.core import mean_reciprocal_rank
+
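+# Each inner list is one query's binary relevance judgments, ranked best-first.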
+relevancies = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
+mean_reciprocal_rank(relevancies)
+```
+
+This returns `0.611111111111111`: the first relevant result appears at ranks 3, 2, and 1
+for the three queries, so the reciprocal ranks are 1/3, 1/2, and 1, and their mean is 11/18.
+
+The library currently supports the following information retrieval ranking metrics:
+1. Mean Reciprocal Rank - `mean_reciprocal_rank`
+2. Relevancy Precision - `r_precision`
+3. Precision at K - `precision_at_k`
+4. Recall at K - `recall_at_k`
+5. F1 score at K - `f1_score_at_k`
+6. Average Precision - `average_precision`
+7. Mean Average Precision - `mean_average_precision`
+8. Discounted Cumulative Gain at K - `dcg_at_k`
+9. Normalized Discounted Cumulative Gain at K - `ndcg_at_k`
+10. Mean Rank - `mean_rank`
+11. Hit@k - `hit_rate_at_k`
+
+# Contributing
+PRs and issues welcome! Please make sure to read through the `CONTRIBUTING.md` doc for how to contribute :).
+
+%prep
+%autosetup -n cute_ranking-0.0.3
+
+%build
+%py3_build
+
+%install
+%py3_install
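+# Ship upstream doc/example directories, when present, as package documentation.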
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
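+# Walk the buildroot and record every installed file into filelist.lst.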
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
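+# Man pages are gzip-compressed by rpm's brp scripts, hence the .gz suffix recorded below.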
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-cute-ranking -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Jun 20 2023 Python_Bot <Python_Bot@openeuler.org> - 0.0.3-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..af72346
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+3ef055916fc640bcd49499fd60037031 cute_ranking-0.0.3.tar.gz