%global _empty_manifest_terminate_build 0
Name:           python-bayesmark
Version:        0.0.8
Release:        1
Summary:        Bayesian optimization benchmark system
License:        Apache v2
URL:            https://github.com/uber/bayesmark/
Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/fe/35/5b3ad7f835676f53ed219cfe70e2f190cadbec21a9570e794489f721c00f/bayesmark-0.0.8.tar.gz
BuildArch:      noarch

%description
This project provides a benchmark framework to easily compare Bayesian
optimization methods on real machine learning tasks. The project is
experimental and its APIs are not considered stable.

This Bayesian optimization (BO) benchmark framework requires only a few easy
setup steps. It can either be run on a local machine (in serial) or used to
prepare a *commands file* for running parallel experiments on a cluster
(dry-run mode). Only ``Python>=3.6`` is officially supported, but older
versions of Python likely work as well.

The core package itself can be installed with::

    pip install bayesmark

However, to also install all the "built in" optimizers for evaluation, run::

    pip install bayesmark[optimizers]

It is also possible to use the same pinned dependencies we used in testing by
`installing from the repo <#install-in-editable-mode>`_. Building an
environment to run the included notebooks can be done with::

    pip install bayesmark[notebooks]

Or, ``bayesmark[optimizers,notebooks]`` can be used. A quick example of
running the benchmark is `here <#example>`_. These instructions can be used
to generate results like those shown in the project documentation.

%package -n python3-bayesmark
Summary:        Bayesian optimization benchmark system
Provides:       python-bayesmark
BuildRequires:  python3-devel
BuildRequires:  python3-setuptools
BuildRequires:  python3-pip

%description -n python3-bayesmark
This project provides a benchmark framework to easily compare Bayesian
optimization methods on real machine learning tasks. The project is
experimental and its APIs are not considered stable.
This Bayesian optimization (BO) benchmark framework requires only a few easy
setup steps. It can either be run on a local machine (in serial) or used to
prepare a *commands file* for running parallel experiments on a cluster
(dry-run mode). Only ``Python>=3.6`` is officially supported, but older
versions of Python likely work as well.

The core package itself can be installed with::

    pip install bayesmark

However, to also install all the "built in" optimizers for evaluation, run::

    pip install bayesmark[optimizers]

It is also possible to use the same pinned dependencies we used in testing by
`installing from the repo <#install-in-editable-mode>`_. Building an
environment to run the included notebooks can be done with::

    pip install bayesmark[notebooks]

Or, ``bayesmark[optimizers,notebooks]`` can be used. A quick example of
running the benchmark is `here <#example>`_. These instructions can be used
to generate results like those shown in the project documentation.

%package help
Summary:        Development documents and examples for bayesmark
Provides:       python3-bayesmark-doc

%description help
This project provides a benchmark framework to easily compare Bayesian
optimization methods on real machine learning tasks. The project is
experimental and its APIs are not considered stable.

This Bayesian optimization (BO) benchmark framework requires only a few easy
setup steps. It can either be run on a local machine (in serial) or used to
prepare a *commands file* for running parallel experiments on a cluster
(dry-run mode). Only ``Python>=3.6`` is officially supported, but older
versions of Python likely work as well.

The core package itself can be installed with::

    pip install bayesmark

However, to also install all the "built in" optimizers for evaluation, run::

    pip install bayesmark[optimizers]

It is also possible to use the same pinned dependencies we used in testing by
`installing from the repo <#install-in-editable-mode>`_.
Building an environment to run the included notebooks can be done with::

    pip install bayesmark[notebooks]

Or, ``bayesmark[optimizers,notebooks]`` can be used. A quick example of
running the benchmark is `here <#example>`_. These instructions can be used
to generate results like those shown in the project documentation.

%prep
%autosetup -n bayesmark-0.0.8

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-bayesmark -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Wed May 10 2023 Python_Bot - 0.0.8-1
- Package Spec generated
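As a usage sketch outside the spec itself: the "quick example" the description refers to drives the benchmark through the console scripts the package installs (`bayesmark-init`, `bayesmark-launch`, `bayesmark-agg`, `bayesmark-anal`). The directory name, experiment name, and option values below are illustrative assumptions, and the commands are echoed rather than executed so the sketch runs without bayesmark installed; consult the upstream README for the authoritative flags.

```shell
# Illustrative names (assumptions, not mandated by bayesmark):
DB_ROOT=./bayesmark_output   # directory that will hold experiment results
DBID=demo_run                # name ("database id") for this batch of experiments

# Typical workflow, echoed for illustration:
echo "bayesmark-init -dir $DB_ROOT -b $DBID"       # create the output folder structure
echo "bayesmark-launch -dir $DB_ROOT -b $DBID"     # run (or dry-run) the experiments
echo "bayesmark-agg -dir $DB_ROOT -b $DBID"        # aggregate per-experiment results
echo "bayesmark-anal -dir $DB_ROOT -b $DBID"       # analyze and summarize the results
```

Running the analysis step is what produces the summary results the description alludes to.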