author    CoprDistGit <infra@openeuler.org>  2023-04-21 16:24:50 +0000
committer CoprDistGit <infra@openeuler.org>  2023-04-21 16:24:50 +0000
commit    33347632858c36c16ad6e7a5d7dcb7cf2a4c1116 (patch)
tree      6000bedc6ea1c2bea100df52c852b5bedb251ae7
parent    21c3e75c48ba882b9dadea48dccd472011d49c3e (diff)

automatic import of python-gpytorch (openeuler20.03)

-rw-r--r--  .gitignore             1
-rw-r--r--  python-gpytorch.spec 223
-rw-r--r--  sources                2
3 files changed, 151 insertions(+), 75 deletions(-)
diff --git a/.gitignore b/.gitignore
index b04128f..5f1fb45 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1 +1,2 @@
/gpytorch-1.9.1.tar.gz
+/gpytorch-1.10.tar.gz
diff --git a/python-gpytorch.spec b/python-gpytorch.spec
index 679c463..e6f32db 100644
--- a/python-gpytorch.spec
+++ b/python-gpytorch.spec
@@ -1,16 +1,16 @@
%global _empty_manifest_terminate_build 0
Name: python-gpytorch
-Version: 1.9.1
+Version: 1.10
Release: 1
Summary: An implementation of Gaussian Processes in Pytorch
License: MIT
URL: https://gpytorch.ai
-Source0: https://mirrors.nju.edu.cn/pypi/web/packages/af/23/9683f34e84d79d5ec564548bb6c4f88e107f1a6687ea8b1615d98cfbdfcb/gpytorch-1.9.1.tar.gz
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/b4/be/bb6898d9a31f5daa3c0a18f613e87d1970f0cf546cbf5925b3eb908be036/gpytorch-1.10.tar.gz
BuildArch: noarch
Requires: python3-scikit-learn
Requires: python3-linear-operator
-Requires: python3-black
+Requires: python3-ufmt
Requires: python3-twine
Requires: python3-pre-commit
Requires: python3-ipython
@@ -29,11 +29,21 @@ Requires: python3-nbval
%description
[![Test Suite](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml/badge.svg)](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml)
[![Documentation Status](https://readthedocs.org/projects/gpytorch/badge/?version=latest)](https://gpytorch.readthedocs.io/en/latest/?badge=latest)
+[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
+[![Python Version](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
+[![Conda](https://img.shields.io/conda/v/gpytorch/gpytorch.svg)](https://anaconda.org/gpytorch/gpytorch)
+[![PyPI](https://img.shields.io/pypi/v/gpytorch.svg)](https://pypi.org/project/gpytorch)
GPyTorch is a Gaussian process library implemented using PyTorch. GPyTorch is designed for creating scalable, flexible, and modular Gaussian process models with ease.
-Internally, GPyTorch differs from many existing approaches to GP inference by performing all inference operations using modern numerical linear algebra techniques like preconditioned conjugate gradients. Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our `LinearOperator` interface, or by composing many of our already existing `LinearOperators`. This allows not only for easy implementation of popular scalable GP techniques, but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
-GPyTorch provides (1) significant GPU acceleration (through MVM based inference); (2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...); (3) easy integration with deep learning frameworks.
+Internally, GPyTorch differs from many existing approaches to GP inference by performing most inference operations using numerical linear algebra techniques like preconditioned conjugate gradients.
+Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our [LinearOperator](https://github.com/cornellius-gp/linear_operator) interface,
+or by composing many of our already existing `LinearOperators`.
+This allows not only for easy implementation of popular scalable GP techniques,
+but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
+GPyTorch provides (1) significant GPU acceleration (through MVM-based inference);
+(2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...);
+(3) easy integration with deep learning frameworks.
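The MVM-based inference idea described above can be sketched with a tiny conjugate-gradients solver that only ever calls a user-supplied matrix-multiply routine. This is an illustrative stand-in for the `LinearOperator` interface, not GPyTorch's actual API; all names here are hypothetical.

```python
import numpy as np

def cg_solve(matvec, b, tol=1e-10, max_iter=100):
    """Solve A x = b given only a black-box matvec for the SPD matrix A.

    The solver never materializes A, mirroring how a scalable GP method
    only needs a multiplication routine for the kernel matrix.
    """
    x = np.zeros_like(b)
    r = b - matvec(x)          # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy SPD matrix standing in for a kernel matrix K + sigma^2 I.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg_solve(lambda v: A @ v, b)  # x ≈ [1/11, 7/11]
```

In GPyTorch itself this role is played by the `LinearOperator` abstraction linked above, which composes such matvec routines and dispatches them to the GPU.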
## Examples, Tutorials, and Documentation
-See our numerous [**examples and tutorials**](https://gpytorch.readthedocs.io/en/latest/) on how to construct all sorts of models in GPyTorch.
+See our [**documentation, examples, and tutorials**](https://gpytorch.readthedocs.io/en/latest/) on how to construct all sorts of models in GPyTorch.
## Installation
**Requirements**:
- Python >= 3.8
@@ -44,14 +54,26 @@ pip install gpytorch
conda install gpytorch -c gpytorch
```
(To use packages globally but install GPyTorch as a user-only package, use `pip install --user` above.)
-#### Latest (unstable) version
+#### Latest (Unstable) Version
To upgrade to the latest (unstable) version, run
```bash
pip install --upgrade git+https://github.com/cornellius-gp/linear_operator.git
pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git
```
+#### Development Version
+If you are contributing a pull request, it is best to perform a manual installation:
+```sh
+git clone https://github.com/cornellius-gp/gpytorch.git
+cd gpytorch
+pip install -e .[dev,examples,test,pyro,keops]
+```
+To generate the documentation locally, you will also need to run the following command
+from the gpytorch folder:
+```sh
+pip install -r docs/requirements.txt
+```
#### ArchLinux Package
-Note: Experimental AUR package. For most users, we recommend installation by conda or pip.
+**Note**: Experimental AUR package. For most users, we recommend installation by conda or pip.
GPyTorch is also available on the [ArchLinux User Repository](https://wiki.archlinux.org/index.php/Arch_User_Repository) (AUR).
You can install it with an [AUR helper](https://wiki.archlinux.org/index.php/AUR_helpers), like [`yay`](https://aur.archlinux.org/packages/yay/), as follows:
```bash
@@ -70,23 +92,9 @@ If you use GPyTorch, please cite the following papers:
year={2018}
}
```
-## Development
-To run the unit tests:
-```bash
-python -m unittest
-```
-By default, the random seeds are locked down for some of the tests.
-If you want to run the tests without locking down the seed, run
-```bash
-UNLOCK_SEED=true python -m unittest
-```
-If you plan on submitting a pull request, please make use of our pre-commit hooks to ensure that your commits adhere
-to the general style guidelines enforced by the repo. To do this, navigate to your local repository and run:
-```bash
-pip install pre-commit
-pre-commit install
-```
-From then on, this will automatically run flake8, isort, black and other tools over the files you commit each time you commit to gpytorch or a fork of it.
+## Contributing
+See the contributing guidelines in [CONTRIBUTING.md](https://github.com/cornellius-gp/gpytorch/blob/master/CONTRIBUTING.md)
+for information on submitting issues and pull requests.
## The Team
GPyTorch is primarily maintained by:
- [Jake Gardner](https://www.cis.upenn.edu/~jacobrg/index.html) (University of Pennsylvania)
@@ -94,7 +102,22 @@ GPyTorch is primarily maintained by:
- [Kilian Weinberger](http://kilian.cs.cornell.edu/) (Cornell University)
- [Andrew Gordon Wilson](https://cims.nyu.edu/~andrewgw/) (New York University)
- [Max Balandat](https://research.fb.com/people/balandat-max/) (Meta)
-We would like to thank our other contributors including (but not limited to) David Arbour, Eytan Bakshy, David Eriksson, Jared Frank, Sam Stanton, Bram Wallace, Ke Alexander Wang, Ruihan Wu.
+We would like to thank our other contributors including (but not limited to)
+Eytan Bakshy,
+Wesley Maddox,
+Ke Alexander Wang,
+Ruihan Wu,
+Sait Cakmak,
+David Eriksson,
+Sam Daulton,
+Martin Jankowiak,
+Sam Stanton,
+Zitong Zhou,
+David Arbour,
+Karthik Rajkumar,
+Bram Wallace,
+Jared Frank,
+and many more!
## Acknowledgements
Development of GPyTorch is supported by funding from
the [Bill and Melinda Gates Foundation](https://www.gatesfoundation.org/),
@@ -102,6 +125,8 @@ the [National Science Foundation](https://www.nsf.gov/),
[SAP](https://www.sap.com/index.html),
the [Simons Foundation](https://www.simonsfoundation.org),
and the [Gatsby Charitable Trust](https://www.gatsby.org.uk).
+## License
+GPyTorch is [MIT licensed](https://github.com/cornellius-gp/gpytorch/blob/main/LICENSE).
%package -n python3-gpytorch
Summary: An implementation of Gaussian Processes in Pytorch
@@ -112,11 +137,21 @@ BuildRequires: python3-pip
%description -n python3-gpytorch
[![Test Suite](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml/badge.svg)](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml)
[![Documentation Status](https://readthedocs.org/projects/gpytorch/badge/?version=latest)](https://gpytorch.readthedocs.io/en/latest/?badge=latest)
+[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
+[![Python Version](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
+[![Conda](https://img.shields.io/conda/v/gpytorch/gpytorch.svg)](https://anaconda.org/gpytorch/gpytorch)
+[![PyPI](https://img.shields.io/pypi/v/gpytorch.svg)](https://pypi.org/project/gpytorch)
GPyTorch is a Gaussian process library implemented using PyTorch. GPyTorch is designed for creating scalable, flexible, and modular Gaussian process models with ease.
-Internally, GPyTorch differs from many existing approaches to GP inference by performing all inference operations using modern numerical linear algebra techniques like preconditioned conjugate gradients. Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our `LinearOperator` interface, or by composing many of our already existing `LinearOperators`. This allows not only for easy implementation of popular scalable GP techniques, but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
-GPyTorch provides (1) significant GPU acceleration (through MVM based inference); (2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...); (3) easy integration with deep learning frameworks.
+Internally, GPyTorch differs from many existing approaches to GP inference by performing most inference operations using numerical linear algebra techniques like preconditioned conjugate gradients.
+Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our [LinearOperator](https://github.com/cornellius-gp/linear_operator) interface,
+or by composing many of our already existing `LinearOperators`.
+This allows not only for easy implementation of popular scalable GP techniques,
+but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
+GPyTorch provides (1) significant GPU acceleration (through MVM-based inference);
+(2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...);
+(3) easy integration with deep learning frameworks.
## Examples, Tutorials, and Documentation
-See our numerous [**examples and tutorials**](https://gpytorch.readthedocs.io/en/latest/) on how to construct all sorts of models in GPyTorch.
+See our [**documentation, examples, and tutorials**](https://gpytorch.readthedocs.io/en/latest/) on how to construct all sorts of models in GPyTorch.
## Installation
**Requirements**:
- Python >= 3.8
@@ -127,14 +162,26 @@ pip install gpytorch
conda install gpytorch -c gpytorch
```
(To use packages globally but install GPyTorch as a user-only package, use `pip install --user` above.)
-#### Latest (unstable) version
+#### Latest (Unstable) Version
To upgrade to the latest (unstable) version, run
```bash
pip install --upgrade git+https://github.com/cornellius-gp/linear_operator.git
pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git
```
+#### Development Version
+If you are contributing a pull request, it is best to perform a manual installation:
+```sh
+git clone https://github.com/cornellius-gp/gpytorch.git
+cd gpytorch
+pip install -e .[dev,examples,test,pyro,keops]
+```
+To generate the documentation locally, you will also need to run the following command
+from the gpytorch folder:
+```sh
+pip install -r docs/requirements.txt
+```
#### ArchLinux Package
-Note: Experimental AUR package. For most users, we recommend installation by conda or pip.
+**Note**: Experimental AUR package. For most users, we recommend installation by conda or pip.
GPyTorch is also available on the [ArchLinux User Repository](https://wiki.archlinux.org/index.php/Arch_User_Repository) (AUR).
You can install it with an [AUR helper](https://wiki.archlinux.org/index.php/AUR_helpers), like [`yay`](https://aur.archlinux.org/packages/yay/), as follows:
```bash
@@ -153,23 +200,9 @@ If you use GPyTorch, please cite the following papers:
year={2018}
}
```
-## Development
-To run the unit tests:
-```bash
-python -m unittest
-```
-By default, the random seeds are locked down for some of the tests.
-If you want to run the tests without locking down the seed, run
-```bash
-UNLOCK_SEED=true python -m unittest
-```
-If you plan on submitting a pull request, please make use of our pre-commit hooks to ensure that your commits adhere
-to the general style guidelines enforced by the repo. To do this, navigate to your local repository and run:
-```bash
-pip install pre-commit
-pre-commit install
-```
-From then on, this will automatically run flake8, isort, black and other tools over the files you commit each time you commit to gpytorch or a fork of it.
+## Contributing
+See the contributing guidelines in [CONTRIBUTING.md](https://github.com/cornellius-gp/gpytorch/blob/master/CONTRIBUTING.md)
+for information on submitting issues and pull requests.
## The Team
GPyTorch is primarily maintained by:
- [Jake Gardner](https://www.cis.upenn.edu/~jacobrg/index.html) (University of Pennsylvania)
@@ -177,7 +210,22 @@ GPyTorch is primarily maintained by:
- [Kilian Weinberger](http://kilian.cs.cornell.edu/) (Cornell University)
- [Andrew Gordon Wilson](https://cims.nyu.edu/~andrewgw/) (New York University)
- [Max Balandat](https://research.fb.com/people/balandat-max/) (Meta)
-We would like to thank our other contributors including (but not limited to) David Arbour, Eytan Bakshy, David Eriksson, Jared Frank, Sam Stanton, Bram Wallace, Ke Alexander Wang, Ruihan Wu.
+We would like to thank our other contributors including (but not limited to)
+Eytan Bakshy,
+Wesley Maddox,
+Ke Alexander Wang,
+Ruihan Wu,
+Sait Cakmak,
+David Eriksson,
+Sam Daulton,
+Martin Jankowiak,
+Sam Stanton,
+Zitong Zhou,
+David Arbour,
+Karthik Rajkumar,
+Bram Wallace,
+Jared Frank,
+and many more!
## Acknowledgements
Development of GPyTorch is supported by funding from
the [Bill and Melinda Gates Foundation](https://www.gatesfoundation.org/),
@@ -185,6 +233,8 @@ the [National Science Foundation](https://www.nsf.gov/),
[SAP](https://www.sap.com/index.html),
the [Simons Foundation](https://www.simonsfoundation.org),
and the [Gatsby Charitable Trust](https://www.gatsby.org.uk).
+## License
+GPyTorch is [MIT licensed](https://github.com/cornellius-gp/gpytorch/blob/main/LICENSE).
%package help
Summary: Development documents and examples for gpytorch
@@ -192,11 +242,21 @@ Provides: python3-gpytorch-doc
%description help
[![Test Suite](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml/badge.svg)](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml)
[![Documentation Status](https://readthedocs.org/projects/gpytorch/badge/?version=latest)](https://gpytorch.readthedocs.io/en/latest/?badge=latest)
+[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
+[![Python Version](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/)
+[![Conda](https://img.shields.io/conda/v/gpytorch/gpytorch.svg)](https://anaconda.org/gpytorch/gpytorch)
+[![PyPI](https://img.shields.io/pypi/v/gpytorch.svg)](https://pypi.org/project/gpytorch)
GPyTorch is a Gaussian process library implemented using PyTorch. GPyTorch is designed for creating scalable, flexible, and modular Gaussian process models with ease.
-Internally, GPyTorch differs from many existing approaches to GP inference by performing all inference operations using modern numerical linear algebra techniques like preconditioned conjugate gradients. Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our `LinearOperator` interface, or by composing many of our already existing `LinearOperators`. This allows not only for easy implementation of popular scalable GP techniques, but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
-GPyTorch provides (1) significant GPU acceleration (through MVM based inference); (2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...); (3) easy integration with deep learning frameworks.
+Internally, GPyTorch differs from many existing approaches to GP inference by performing most inference operations using numerical linear algebra techniques like preconditioned conjugate gradients.
+Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our [LinearOperator](https://github.com/cornellius-gp/linear_operator) interface,
+or by composing many of our already existing `LinearOperators`.
+This allows not only for easy implementation of popular scalable GP techniques,
+but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
+GPyTorch provides (1) significant GPU acceleration (through MVM-based inference);
+(2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...);
+(3) easy integration with deep learning frameworks.
## Examples, Tutorials, and Documentation
-See our numerous [**examples and tutorials**](https://gpytorch.readthedocs.io/en/latest/) on how to construct all sorts of models in GPyTorch.
+See our [**documentation, examples, and tutorials**](https://gpytorch.readthedocs.io/en/latest/) on how to construct all sorts of models in GPyTorch.
## Installation
**Requirements**:
- Python >= 3.8
@@ -207,14 +267,26 @@ pip install gpytorch
conda install gpytorch -c gpytorch
```
(To use packages globally but install GPyTorch as a user-only package, use `pip install --user` above.)
-#### Latest (unstable) version
+#### Latest (Unstable) Version
To upgrade to the latest (unstable) version, run
```bash
pip install --upgrade git+https://github.com/cornellius-gp/linear_operator.git
pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git
```
+#### Development Version
+If you are contributing a pull request, it is best to perform a manual installation:
+```sh
+git clone https://github.com/cornellius-gp/gpytorch.git
+cd gpytorch
+pip install -e .[dev,examples,test,pyro,keops]
+```
+To generate the documentation locally, you will also need to run the following command
+from the gpytorch folder:
+```sh
+pip install -r docs/requirements.txt
+```
#### ArchLinux Package
-Note: Experimental AUR package. For most users, we recommend installation by conda or pip.
+**Note**: Experimental AUR package. For most users, we recommend installation by conda or pip.
GPyTorch is also available on the [ArchLinux User Repository](https://wiki.archlinux.org/index.php/Arch_User_Repository) (AUR).
You can install it with an [AUR helper](https://wiki.archlinux.org/index.php/AUR_helpers), like [`yay`](https://aur.archlinux.org/packages/yay/), as follows:
```bash
@@ -233,23 +305,9 @@ If you use GPyTorch, please cite the following papers:
year={2018}
}
```
-## Development
-To run the unit tests:
-```bash
-python -m unittest
-```
-By default, the random seeds are locked down for some of the tests.
-If you want to run the tests without locking down the seed, run
-```bash
-UNLOCK_SEED=true python -m unittest
-```
-If you plan on submitting a pull request, please make use of our pre-commit hooks to ensure that your commits adhere
-to the general style guidelines enforced by the repo. To do this, navigate to your local repository and run:
-```bash
-pip install pre-commit
-pre-commit install
-```
-From then on, this will automatically run flake8, isort, black and other tools over the files you commit each time you commit to gpytorch or a fork of it.
+## Contributing
+See the contributing guidelines in [CONTRIBUTING.md](https://github.com/cornellius-gp/gpytorch/blob/master/CONTRIBUTING.md)
+for information on submitting issues and pull requests.
## The Team
GPyTorch is primarily maintained by:
- [Jake Gardner](https://www.cis.upenn.edu/~jacobrg/index.html) (University of Pennsylvania)
@@ -257,7 +315,22 @@ GPyTorch is primarily maintained by:
- [Kilian Weinberger](http://kilian.cs.cornell.edu/) (Cornell University)
- [Andrew Gordon Wilson](https://cims.nyu.edu/~andrewgw/) (New York University)
- [Max Balandat](https://research.fb.com/people/balandat-max/) (Meta)
-We would like to thank our other contributors including (but not limited to) David Arbour, Eytan Bakshy, David Eriksson, Jared Frank, Sam Stanton, Bram Wallace, Ke Alexander Wang, Ruihan Wu.
+We would like to thank our other contributors including (but not limited to)
+Eytan Bakshy,
+Wesley Maddox,
+Ke Alexander Wang,
+Ruihan Wu,
+Sait Cakmak,
+David Eriksson,
+Sam Daulton,
+Martin Jankowiak,
+Sam Stanton,
+Zitong Zhou,
+David Arbour,
+Karthik Rajkumar,
+Bram Wallace,
+Jared Frank,
+and many more!
## Acknowledgements
Development of GPyTorch is supported by funding from
the [Bill and Melinda Gates Foundation](https://www.gatesfoundation.org/),
@@ -265,9 +338,11 @@ the [National Science Foundation](https://www.nsf.gov/),
[SAP](https://www.sap.com/index.html),
the [Simons Foundation](https://www.simonsfoundation.org),
and the [Gatsby Charitable Trust](https://www.gatsby.org.uk).
+## License
+GPyTorch is [MIT licensed](https://github.com/cornellius-gp/gpytorch/blob/main/LICENSE).
%prep
-%autosetup -n gpytorch-1.9.1
+%autosetup -n gpytorch-1.10
%build
%py3_build
@@ -307,5 +382,5 @@ mv %{buildroot}/doclist.lst .
%{_docdir}/*
%changelog
-* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 1.9.1-1
+* Fri Apr 21 2023 Python_Bot <Python_Bot@openeuler.org> - 1.10-1
- Package Spec generated
diff --git a/sources b/sources
index a32ddd9..e50383f 100644
--- a/sources
+++ b/sources
@@ -1 +1 @@
-95765d3f604be70b096b0ec7b5ceb961 gpytorch-1.9.1.tar.gz
+ff64c884751c6d6364889f6111fc584a gpytorch-1.10.tar.gz
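The `sources` file above pairs each tarball with its MD5 checksum. A downloaded archive can be checked against such an entry along these lines; the file and its contents here are stand-ins, not the actual tarball:

```shell
# Create a stand-in file and recompute its MD5, as one would for the
# gpytorch tarball before comparing against the sources entry.
printf 'hello\n' > /tmp/demo.tar.gz
sum=$(md5sum /tmp/demo.tar.gz | awk '{print $1}')
echo "$sum"
```

If the computed digest differs from the recorded one, the download is corrupt or the `sources` entry is stale and the build should not proceed.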