| -rw-r--r-- | .gitignore | 1 |
| -rw-r--r-- | python-flaml.spec | 577 |
| -rw-r--r-- | sources | 1 |
3 files changed, 579 insertions, 0 deletions
@@ -0,0 +1 @@ +/FLAML-1.2.0.tar.gz diff --git a/python-flaml.spec b/python-flaml.spec new file mode 100644 index 0000000..e50d50e --- /dev/null +++ b/python-flaml.spec @@ -0,0 +1,577 @@ +%global _empty_manifest_terminate_build 0 +Name: python-FLAML +Version: 1.2.0 +Release: 1 +Summary: A fast library for automated machine learning and tuning +License: MIT License +URL: https://github.com/microsoft/FLAML +Source0: https://mirrors.nju.edu.cn/pypi/web/packages/fb/f7/38298ae67a633f668e68bf08cc13d7c401852b036ddfb95098a86315f028/FLAML-1.2.0.tar.gz +BuildArch: noarch + +Requires: python3-NumPy +Requires: python3-lightgbm +Requires: python3-xgboost +Requires: python3-scipy +Requires: python3-pandas +Requires: python3-scikit-learn +Requires: python3-azureml-mlflow +Requires: python3-catboost +Requires: python3-psutil +Requires: python3-xgboost +Requires: python3-optuna +Requires: python3-catboost +Requires: python3-holidays +Requires: python3-prophet +Requires: python3-statsmodels +Requires: python3-hcrystalball +Requires: python3-pytorch-forecasting +Requires: python3-transformers[torch] +Requires: python3-datasets +Requires: python3-nltk +Requires: python3-rouge-score +Requires: python3-seqeval +Requires: python3-transformers[torch] +Requires: python3-datasets +Requires: python3-nltk +Requires: python3-rouge-score +Requires: python3-seqeval +Requires: python3-nni +Requires: python3-jupyter +Requires: python3-matplotlib +Requires: python3-openml +Requires: python3-openai +Requires: python3-diskcache +Requires: python3-optuna +Requires: python3-ray[tune] +Requires: python3-pyspark +Requires: python3-joblibspark +Requires: python3-joblibspark +Requires: python3-optuna +Requires: python3-pyspark +Requires: python3-flake8 +Requires: python3-thop +Requires: python3-pytest +Requires: python3-coverage +Requires: python3-pre-commit +Requires: python3-torch +Requires: python3-torchvision +Requires: python3-catboost +Requires: python3-rgf-python +Requires: python3-optuna +Requires: python3-openml +Requires: python3-statsmodels +Requires: python3-psutil +Requires: python3-dataclasses +Requires: python3-transformers[torch] +Requires: python3-datasets +Requires: python3-nltk +Requires: python3-rouge-score +Requires: python3-hcrystalball +Requires: python3-seqeval +Requires: python3-pytorch-forecasting +Requires: python3-mlflow +Requires: python3-pyspark +Requires: python3-joblibspark +Requires: python3-nbconvert +Requires: python3-nbformat +Requires: python3-ipykernel +Requires: python3-pytorch-lightning +Requires: python3-holidays +Requires: python3-prophet +Requires: python3-statsmodels +Requires: python3-hcrystalball +Requires: python3-vowpalwabbit + +%description +[](https://badge.fury.io/py/FLAML) + +[](https://github.com/microsoft/FLAML/actions/workflows/python-package.yml) + +[](https://pepy.tech/project/flaml) +[](https://gitter.im/FLAMLer/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +[](https://discord.gg/Cppx2vSPVP) + + +# A Fast Library for Automated Machine Learning & Tuning + +<p align="center"> + <img src="https://github.com/microsoft/FLAML/blob/main/website/static/img/flaml.svg" width=200> + <br> +</p> + +:fire: OpenAI GPT-3 models support in v1.1.3. ChatGPT and GPT-4 support will be added in v1.2.0. + +:fire: A [lab forum](https://github.com/microsoft/FLAML/tree/tutorial-aaai23/tutorial) on FLAML at AAAI 2023. 
+ +:fire: A [hands-on tutorial](https://github.com/microsoft/FLAML/tree/tutorial/tutorial) on FLAML presented at KDD 2022 + +## What is FLAML +FLAML is a lightweight Python library that finds accurate machine +learning models automatically, efficiently and economically. It frees users from selecting +models and hyperparameters for each model. It can also be used to tune generic hyperparameters for foundation models, MLOps/LMOps workflows, pipelines, mathematical/statistical models, algorithms, computing experiments, software configurations and so on. + +1. For common machine learning or AI tasks like classification, regression, and generation, it quickly finds quality models for user-provided data with low computational resources. It supports both classical machine learning models and deep neural networks, including foundation models such as the GPT series. +1. It is easy to customize or extend. Users can find their desired customizability from a smooth range: minimal customization (computational resource budget), medium customization (e.g., scikit-style learner, search space and metric), or full customization (arbitrary training and evaluation code). +1. It supports fast automatic tuning, capable of handling complex constraints/guidance/early stopping. FLAML is powered by a new, [cost-effective +hyperparameter optimization](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function/#hyperparameter-optimization-algorithm) +and model selection method invented by Microsoft Research, and many followup [research studies](https://microsoft.github.io/FLAML/docs/Research). + +FLAML has a .NET implementation in [ML.NET](http://dot.net/ml), an open-source, cross-platform machine learning framework for .NET. In ML.NET, you can use FLAML via low-code solutions like [Model Builder](https://dotnet.microsoft.com/apps/machinelearning-ai/ml-dotnet/model-builder) Visual Studio extension and the cross-platform [ML.NET CLI](https://docs.microsoft.com/dotnet/machine-learning/automate-training-with-cli). Alternatively, you can use the [ML.NET AutoML API](https://www.nuget.org/packages/Microsoft.ML.AutoML/#versions-body-tab) for a code-first experience. + + +## Installation + +### Python + +FLAML requires **Python version >= 3.7**. It can be installed from pip: + +```bash +pip install flaml +``` + +To run the [`notebook examples`](https://github.com/microsoft/FLAML/tree/main/notebook), +install flaml with the [notebook] option: + +```bash +pip install flaml[notebook] +``` + +### .NET + +Use the following guides to get started with FLAML in .NET: + +- [Install Model Builder](https://docs.microsoft.com/dotnet/machine-learning/how-to-guides/install-model-builder?tabs=visual-studio-2022) +- [Install ML.NET CLI](https://docs.microsoft.com/dotnet/machine-learning/how-to-guides/install-ml-net-cli?tabs=windows) +- [Microsoft.AutoML](https://www.nuget.org/packages/Microsoft.ML.AutoML/0.20.0) + +## Quickstart + +* With three lines of code, you can start using this economical and fast +AutoML engine as a [scikit-learn style estimator](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML). + +```python +from flaml import AutoML +automl = AutoML() +automl.fit(X_train, y_train, task="classification") +``` + +* You can restrict the learners and use FLAML as a fast hyperparameter tuning +tool for XGBoost, LightGBM, Random Forest etc. or a [customized learner](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#estimator-and-search-space). 
+ +```python +automl.fit(X_train, y_train, task="classification", estimator_list=["lgbm"]) +``` + +* You can also run generic hyperparameter tuning for a [custom function](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function). + +```python +from flaml import tune +tune.run(evaluation_function, config={…}, low_cost_partial_config={…}, time_budget_s=3600) +``` + +* [Zero-shot AutoML](https://microsoft.github.io/FLAML/docs/Use-Cases/Zero-Shot-AutoML) allows using the existing training API from lightgbm, xgboost etc. while getting the benefit of AutoML in choosing high-performance hyperparameter configurations per task. + +```python +from flaml.default import LGBMRegressor + +# Use LGBMRegressor in the same way as you use lightgbm.LGBMRegressor. +estimator = LGBMRegressor() +# The hyperparameters are automatically set according to the training data. +estimator.fit(X_train, y_train) +``` + +* (New) You can optimize [generations](https://microsoft.github.io/FLAML/docs/Use-Cases/Auto-Generation) by ChatGPT or GPT-4 etc. with your own tuning data, success metrics and budgets. + +```python +from flaml import oai + +config, analysis = oai.Completion.tune( + data=tune_data, + metric="success", + mode="max", + eval_func=eval_func, + inference_budget=0.05, + optimization_budget=3, + num_samples=-1, +) +``` + +## Documentation + +You can find a detailed documentation about FLAML [here](https://microsoft.github.io/FLAML/) where you can find the API documentation, use cases and examples. + +In addition, you can find: + +- [Talks](https://www.youtube.com/channel/UCfU0zfFXHXdAd5x-WvFBk5A) and [tutorials](https://github.com/microsoft/FLAML/tree/tutorial/tutorial) about FLAML. + +- Research around FLAML [here](https://microsoft.github.io/FLAML/docs/Research). + +- FAQ [here](https://microsoft.github.io/FLAML/docs/FAQ). + +- Contributing guide [here](https://microsoft.github.io/FLAML/docs/Contribute). + +- ML.NET documentation and tutorials for [Model Builder](https://learn.microsoft.com/dotnet/machine-learning/tutorials/predict-prices-with-model-builder), [ML.NET CLI](https://learn.microsoft.com/dotnet/machine-learning/tutorials/sentiment-analysis-cli), and [AutoML API](https://learn.microsoft.com/dotnet/machine-learning/how-to-guides/how-to-use-the-automl-api). + +## Contributing + +This project welcomes contributions and suggestions. Most contributions require you to agree to a +Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us +the rights to use your contribution. For details, visit <https://cla.opensource.microsoft.com>. + +If you are new to GitHub [here](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/) is a detailed help source on getting involved with development on GitHub. + +When you submit a pull request, a CLA bot will automatically determine whether you need to provide +a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions +provided by the bot. You will only need to do this once across all repos using our CLA. + +This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). +For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or +contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. 
+ + +%package -n python3-FLAML +Summary: A fast library for automated machine learning and tuning +Provides: python-FLAML +BuildRequires: python3-devel +BuildRequires: python3-setuptools +BuildRequires: python3-pip +%description -n python3-FLAML +[](https://badge.fury.io/py/FLAML) + +[](https://github.com/microsoft/FLAML/actions/workflows/python-package.yml) + +[](https://pepy.tech/project/flaml) +[](https://gitter.im/FLAMLer/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +[](https://discord.gg/Cppx2vSPVP) + + +# A Fast Library for Automated Machine Learning & Tuning + +<p align="center"> + <img src="https://github.com/microsoft/FLAML/blob/main/website/static/img/flaml.svg" width=200> + <br> +</p> + +:fire: OpenAI GPT-3 models support in v1.1.3. ChatGPT and GPT-4 support will be added in v1.2.0. + +:fire: A [lab forum](https://github.com/microsoft/FLAML/tree/tutorial-aaai23/tutorial) on FLAML at AAAI 2023. + +:fire: A [hands-on tutorial](https://github.com/microsoft/FLAML/tree/tutorial/tutorial) on FLAML presented at KDD 2022 + +## What is FLAML +FLAML is a lightweight Python library that finds accurate machine +learning models automatically, efficiently and economically. It frees users from selecting +models and hyperparameters for each model. It can also be used to tune generic hyperparameters for foundation models, MLOps/LMOps workflows, pipelines, mathematical/statistical models, algorithms, computing experiments, software configurations and so on. + +1. For common machine learning or AI tasks like classification, regression, and generation, it quickly finds quality models for user-provided data with low computational resources. It supports both classical machine learning models and deep neural networks, including foundation models such as the GPT series. +1. It is easy to customize or extend. Users can find their desired customizability from a smooth range: minimal customization (computational resource budget), medium customization (e.g., scikit-style learner, search space and metric), or full customization (arbitrary training and evaluation code). +1. It supports fast automatic tuning, capable of handling complex constraints/guidance/early stopping. FLAML is powered by a new, [cost-effective +hyperparameter optimization](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function/#hyperparameter-optimization-algorithm) +and model selection method invented by Microsoft Research, and many followup [research studies](https://microsoft.github.io/FLAML/docs/Research). + +FLAML has a .NET implementation in [ML.NET](http://dot.net/ml), an open-source, cross-platform machine learning framework for .NET. In ML.NET, you can use FLAML via low-code solutions like [Model Builder](https://dotnet.microsoft.com/apps/machinelearning-ai/ml-dotnet/model-builder) Visual Studio extension and the cross-platform [ML.NET CLI](https://docs.microsoft.com/dotnet/machine-learning/automate-training-with-cli). Alternatively, you can use the [ML.NET AutoML API](https://www.nuget.org/packages/Microsoft.ML.AutoML/#versions-body-tab) for a code-first experience. + + +## Installation + +### Python + +FLAML requires **Python version >= 3.7**. 
It can be installed from pip: + +```bash +pip install flaml +``` + +To run the [`notebook examples`](https://github.com/microsoft/FLAML/tree/main/notebook), +install flaml with the [notebook] option: + +```bash +pip install flaml[notebook] +``` + +### .NET + +Use the following guides to get started with FLAML in .NET: + +- [Install Model Builder](https://docs.microsoft.com/dotnet/machine-learning/how-to-guides/install-model-builder?tabs=visual-studio-2022) +- [Install ML.NET CLI](https://docs.microsoft.com/dotnet/machine-learning/how-to-guides/install-ml-net-cli?tabs=windows) +- [Microsoft.AutoML](https://www.nuget.org/packages/Microsoft.ML.AutoML/0.20.0) + +## Quickstart + +* With three lines of code, you can start using this economical and fast +AutoML engine as a [scikit-learn style estimator](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML). + +```python +from flaml import AutoML +automl = AutoML() +automl.fit(X_train, y_train, task="classification") +``` + +* You can restrict the learners and use FLAML as a fast hyperparameter tuning +tool for XGBoost, LightGBM, Random Forest etc. or a [customized learner](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#estimator-and-search-space). + +```python +automl.fit(X_train, y_train, task="classification", estimator_list=["lgbm"]) +``` + +* You can also run generic hyperparameter tuning for a [custom function](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function). + +```python +from flaml import tune +tune.run(evaluation_function, config={…}, low_cost_partial_config={…}, time_budget_s=3600) +``` + +* [Zero-shot AutoML](https://microsoft.github.io/FLAML/docs/Use-Cases/Zero-Shot-AutoML) allows using the existing training API from lightgbm, xgboost etc. while getting the benefit of AutoML in choosing high-performance hyperparameter configurations per task. + +```python +from flaml.default import LGBMRegressor + +# Use LGBMRegressor in the same way as you use lightgbm.LGBMRegressor. +estimator = LGBMRegressor() +# The hyperparameters are automatically set according to the training data. +estimator.fit(X_train, y_train) +``` + +* (New) You can optimize [generations](https://microsoft.github.io/FLAML/docs/Use-Cases/Auto-Generation) by ChatGPT or GPT-4 etc. with your own tuning data, success metrics and budgets. + +```python +from flaml import oai + +config, analysis = oai.Completion.tune( + data=tune_data, + metric="success", + mode="max", + eval_func=eval_func, + inference_budget=0.05, + optimization_budget=3, + num_samples=-1, +) +``` + +## Documentation + +You can find a detailed documentation about FLAML [here](https://microsoft.github.io/FLAML/) where you can find the API documentation, use cases and examples. + +In addition, you can find: + +- [Talks](https://www.youtube.com/channel/UCfU0zfFXHXdAd5x-WvFBk5A) and [tutorials](https://github.com/microsoft/FLAML/tree/tutorial/tutorial) about FLAML. + +- Research around FLAML [here](https://microsoft.github.io/FLAML/docs/Research). + +- FAQ [here](https://microsoft.github.io/FLAML/docs/FAQ). + +- Contributing guide [here](https://microsoft.github.io/FLAML/docs/Contribute). 
+ +- ML.NET documentation and tutorials for [Model Builder](https://learn.microsoft.com/dotnet/machine-learning/tutorials/predict-prices-with-model-builder), [ML.NET CLI](https://learn.microsoft.com/dotnet/machine-learning/tutorials/sentiment-analysis-cli), and [AutoML API](https://learn.microsoft.com/dotnet/machine-learning/how-to-guides/how-to-use-the-automl-api). + +## Contributing + +This project welcomes contributions and suggestions. Most contributions require you to agree to a +Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us +the rights to use your contribution. For details, visit <https://cla.opensource.microsoft.com>. + +If you are new to GitHub [here](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/) is a detailed help source on getting involved with development on GitHub. + +When you submit a pull request, a CLA bot will automatically determine whether you need to provide +a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions +provided by the bot. You will only need to do this once across all repos using our CLA. + +This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). +For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or +contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. + + +%package help +Summary: Development documents and examples for FLAML +Provides: python3-FLAML-doc +%description help +[](https://badge.fury.io/py/FLAML) + +[](https://github.com/microsoft/FLAML/actions/workflows/python-package.yml) + +[](https://pepy.tech/project/flaml) +[](https://gitter.im/FLAMLer/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) +[](https://discord.gg/Cppx2vSPVP) + + +# A Fast Library for Automated Machine Learning & Tuning + +<p align="center"> + <img src="https://github.com/microsoft/FLAML/blob/main/website/static/img/flaml.svg" width=200> + <br> +</p> + +:fire: OpenAI GPT-3 models support in v1.1.3. ChatGPT and GPT-4 support will be added in v1.2.0. + +:fire: A [lab forum](https://github.com/microsoft/FLAML/tree/tutorial-aaai23/tutorial) on FLAML at AAAI 2023. + +:fire: A [hands-on tutorial](https://github.com/microsoft/FLAML/tree/tutorial/tutorial) on FLAML presented at KDD 2022 + +## What is FLAML +FLAML is a lightweight Python library that finds accurate machine +learning models automatically, efficiently and economically. It frees users from selecting +models and hyperparameters for each model. It can also be used to tune generic hyperparameters for foundation models, MLOps/LMOps workflows, pipelines, mathematical/statistical models, algorithms, computing experiments, software configurations and so on. + +1. For common machine learning or AI tasks like classification, regression, and generation, it quickly finds quality models for user-provided data with low computational resources. It supports both classical machine learning models and deep neural networks, including foundation models such as the GPT series. +1. It is easy to customize or extend. Users can find their desired customizability from a smooth range: minimal customization (computational resource budget), medium customization (e.g., scikit-style learner, search space and metric), or full customization (arbitrary training and evaluation code). +1. 
It supports fast automatic tuning, capable of handling complex constraints/guidance/early stopping. FLAML is powered by a new, [cost-effective +hyperparameter optimization](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function/#hyperparameter-optimization-algorithm) +and model selection method invented by Microsoft Research, and many followup [research studies](https://microsoft.github.io/FLAML/docs/Research). + +FLAML has a .NET implementation in [ML.NET](http://dot.net/ml), an open-source, cross-platform machine learning framework for .NET. In ML.NET, you can use FLAML via low-code solutions like [Model Builder](https://dotnet.microsoft.com/apps/machinelearning-ai/ml-dotnet/model-builder) Visual Studio extension and the cross-platform [ML.NET CLI](https://docs.microsoft.com/dotnet/machine-learning/automate-training-with-cli). Alternatively, you can use the [ML.NET AutoML API](https://www.nuget.org/packages/Microsoft.ML.AutoML/#versions-body-tab) for a code-first experience. + + +## Installation + +### Python + +FLAML requires **Python version >= 3.7**. It can be installed from pip: + +```bash +pip install flaml +``` + +To run the [`notebook examples`](https://github.com/microsoft/FLAML/tree/main/notebook), +install flaml with the [notebook] option: + +```bash +pip install flaml[notebook] +``` + +### .NET + +Use the following guides to get started with FLAML in .NET: + +- [Install Model Builder](https://docs.microsoft.com/dotnet/machine-learning/how-to-guides/install-model-builder?tabs=visual-studio-2022) +- [Install ML.NET CLI](https://docs.microsoft.com/dotnet/machine-learning/how-to-guides/install-ml-net-cli?tabs=windows) +- [Microsoft.AutoML](https://www.nuget.org/packages/Microsoft.ML.AutoML/0.20.0) + +## Quickstart + +* With three lines of code, you can start using this economical and fast +AutoML engine as a [scikit-learn style estimator](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML). + +```python +from flaml import AutoML +automl = AutoML() +automl.fit(X_train, y_train, task="classification") +``` + +* You can restrict the learners and use FLAML as a fast hyperparameter tuning +tool for XGBoost, LightGBM, Random Forest etc. or a [customized learner](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#estimator-and-search-space). + +```python +automl.fit(X_train, y_train, task="classification", estimator_list=["lgbm"]) +``` + +* You can also run generic hyperparameter tuning for a [custom function](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function). + +```python +from flaml import tune +tune.run(evaluation_function, config={…}, low_cost_partial_config={…}, time_budget_s=3600) +``` + +* [Zero-shot AutoML](https://microsoft.github.io/FLAML/docs/Use-Cases/Zero-Shot-AutoML) allows using the existing training API from lightgbm, xgboost etc. while getting the benefit of AutoML in choosing high-performance hyperparameter configurations per task. + +```python +from flaml.default import LGBMRegressor + +# Use LGBMRegressor in the same way as you use lightgbm.LGBMRegressor. +estimator = LGBMRegressor() +# The hyperparameters are automatically set according to the training data. +estimator.fit(X_train, y_train) +``` + +* (New) You can optimize [generations](https://microsoft.github.io/FLAML/docs/Use-Cases/Auto-Generation) by ChatGPT or GPT-4 etc. with your own tuning data, success metrics and budgets. 
+ +```python +from flaml import oai + +config, analysis = oai.Completion.tune( + data=tune_data, + metric="success", + mode="max", + eval_func=eval_func, + inference_budget=0.05, + optimization_budget=3, + num_samples=-1, +) +``` + +## Documentation + +You can find a detailed documentation about FLAML [here](https://microsoft.github.io/FLAML/) where you can find the API documentation, use cases and examples. + +In addition, you can find: + +- [Talks](https://www.youtube.com/channel/UCfU0zfFXHXdAd5x-WvFBk5A) and [tutorials](https://github.com/microsoft/FLAML/tree/tutorial/tutorial) about FLAML. + +- Research around FLAML [here](https://microsoft.github.io/FLAML/docs/Research). + +- FAQ [here](https://microsoft.github.io/FLAML/docs/FAQ). + +- Contributing guide [here](https://microsoft.github.io/FLAML/docs/Contribute). + +- ML.NET documentation and tutorials for [Model Builder](https://learn.microsoft.com/dotnet/machine-learning/tutorials/predict-prices-with-model-builder), [ML.NET CLI](https://learn.microsoft.com/dotnet/machine-learning/tutorials/sentiment-analysis-cli), and [AutoML API](https://learn.microsoft.com/dotnet/machine-learning/how-to-guides/how-to-use-the-automl-api). + +## Contributing + +This project welcomes contributions and suggestions. Most contributions require you to agree to a +Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us +the rights to use your contribution. For details, visit <https://cla.opensource.microsoft.com>. + +If you are new to GitHub [here](https://help.github.com/categories/collaborating-with-issues-and-pull-requests/) is a detailed help source on getting involved with development on GitHub. + +When you submit a pull request, a CLA bot will automatically determine whether you need to provide +a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions +provided by the bot. You will only need to do this once across all repos using our CLA. + +This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). +For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or +contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. + + +%prep +%autosetup -n FLAML-1.2.0 + +%build +%py3_build + +%install +%py3_install +install -d -m755 %{buildroot}/%{_pkgdocdir} +if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi +if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi +if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi +if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi +pushd %{buildroot} +if [ -d usr/lib ]; then + find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst +fi +if [ -d usr/lib64 ]; then + find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst +fi +if [ -d usr/bin ]; then + find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst +fi +if [ -d usr/sbin ]; then + find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst +fi +touch doclist.lst +if [ -d usr/share/man ]; then + find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst +fi +popd +mv %{buildroot}/filelist.lst . +mv %{buildroot}/doclist.lst . 
+
+%files -n python3-FLAML -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 1.2.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+603b39a2165f7fa4c9de70f38f3726cc FLAML-1.2.0.tar.gz
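For reference, a minimal rebuild sketch for this package is below, assuming an rpm-build toolchain and the default `~/rpmbuild` layout (the tarball URL comes from `Source0` and the checksum from the `sources` file above; adjust paths to your environment):

```bash
# Stage the spec and the Source0 tarball, then verify the tarball against
# the MD5 recorded in the "sources" file.
mkdir -p ~/rpmbuild/SOURCES ~/rpmbuild/SPECS
cp python-flaml.spec ~/rpmbuild/SPECS/
cd ~/rpmbuild/SOURCES
curl -LO "https://mirrors.nju.edu.cn/pypi/web/packages/fb/f7/38298ae67a633f668e68bf08cc13d7c401852b036ddfb95098a86315f028/FLAML-1.2.0.tar.gz"
echo "603b39a2165f7fa4c9de70f38f3726cc  FLAML-1.2.0.tar.gz" | md5sum -c -

# Install the BuildRequires declared in the spec, then build the source
# and binary RPMs.
sudo dnf builddep ~/rpmbuild/SPECS/python-flaml.spec
rpmbuild -ba ~/rpmbuild/SPECS/python-flaml.spec
```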