-rw-r--r--  .gitignore            |   1
-rw-r--r--  python-gpytorch.spec  | 311
-rw-r--r--  sources               |   1
3 files changed, 313 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..b04128f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/gpytorch-1.9.1.tar.gz
diff --git a/python-gpytorch.spec b/python-gpytorch.spec
new file mode 100644
index 0000000..679c463
--- /dev/null
+++ b/python-gpytorch.spec
@@ -0,0 +1,311 @@
+%global _empty_manifest_terminate_build 0
+Name: python-gpytorch
+Version: 1.9.1
+Release: 1
+Summary: An implementation of Gaussian Processes in PyTorch
+License: MIT
+URL: https://gpytorch.ai
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/af/23/9683f34e84d79d5ec564548bb6c4f88e107f1a6687ea8b1615d98cfbdfcb/gpytorch-1.9.1.tar.gz
+BuildArch: noarch
+
+Requires: python3-scikit-learn
+Requires: python3-linear-operator
+Requires: python3-black
+Requires: python3-twine
+Requires: python3-pre-commit
+Requires: python3-ipython
+Requires: python3-jupyter
+Requires: python3-matplotlib
+Requires: python3-scipy
+Requires: python3-torchvision
+Requires: python3-tqdm
+Requires: python3-pykeops
+Requires: python3-pyro-ppl
+Requires: python3-flake8
+Requires: python3-flake8-print
+Requires: python3-pytest
+Requires: python3-nbval
+
+%description
+[![Test Suite](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml/badge.svg)](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml)
+[![Documentation Status](https://readthedocs.org/projects/gpytorch/badge/?version=latest)](https://gpytorch.readthedocs.io/en/latest/?badge=latest)
+GPyTorch is a Gaussian process library implemented using PyTorch. GPyTorch is designed for creating scalable, flexible, and modular Gaussian process models with ease.
+Internally, GPyTorch differs from many existing approaches to GP inference by performing all inference operations using modern numerical linear algebra techniques like preconditioned conjugate gradients. Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our `LinearOperator` interface, or by composing many of our already existing `LinearOperators`. This allows not only for easy implementation of popular scalable GP techniques, but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
+GPyTorch provides (1) significant GPU acceleration (through MVM based inference); (2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...); (3) easy integration with deep learning frameworks.
+## Examples, Tutorials, and Documentation
+See our numerous [**examples and tutorials**](https://gpytorch.readthedocs.io/en/latest/) on how to construct all sorts of models in GPyTorch.
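+As a quick, self-contained illustration (the toy data and variable names below are only placeholders, not part of the tutorials), a minimal exact-GP regression model typically looks something like this:
+```python
+import torch
+import gpytorch
+
+# Toy 1-D regression data (placeholder values).
+train_x = torch.linspace(0, 1, 100)
+train_y = torch.sin(train_x * (2 * 3.14159)) + 0.1 * torch.randn(train_x.size(0))
+
+class ExactGPModel(gpytorch.models.ExactGP):
+    def __init__(self, train_x, train_y, likelihood):
+        super().__init__(train_x, train_y, likelihood)
+        self.mean_module = gpytorch.means.ConstantMean()
+        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
+
+    def forward(self, x):
+        # Return the GP prior at the inputs x as a multivariate normal.
+        mean_x = self.mean_module(x)
+        covar_x = self.covar_module(x)
+        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
+
+likelihood = gpytorch.likelihoods.GaussianLikelihood()
+model = ExactGPModel(train_x, train_y, likelihood)
+
+# After training the hyperparameters with a standard PyTorch loop,
+# switch to eval mode and query the posterior predictive at new points.
+model.eval()
+likelihood.eval()
+with torch.no_grad(), gpytorch.settings.fast_pred_var():
+    preds = likelihood(model(torch.linspace(0, 1, 51)))
+```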
+## Installation
+**Requirements**:
+- Python >= 3.8
+- PyTorch >= 1.11
+Install GPyTorch using pip or conda:
+```bash
+pip install gpytorch
+conda install gpytorch -c gpytorch
+```
+(To use packages globally but install GPyTorch as a user-only package, use `pip install --user` above.)
+#### Latest (unstable) version
+To upgrade to the latest (unstable) version, run
+```bash
+pip install --upgrade git+https://github.com/cornellius-gp/linear_operator.git
+pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git
+```
+#### ArchLinux Package
+Note: Experimental AUR package. For most users, we recommend installation by conda or pip.
+GPyTorch is also available on the [ArchLinux User Repository](https://wiki.archlinux.org/index.php/Arch_User_Repository) (AUR).
+You can install it with an [AUR helper](https://wiki.archlinux.org/index.php/AUR_helpers), like [`yay`](https://aur.archlinux.org/packages/yay/), as follows:
+```bash
+yay -S python-gpytorch
+```
+To discuss any issues related to this AUR package refer to the comments section of
+[`python-gpytorch`](https://aur.archlinux.org/packages/python-gpytorch/).
+## Citing Us
+If you use GPyTorch, please cite the following papers:
+> [Gardner, Jacob R., Geoff Pleiss, David Bindel, Kilian Q. Weinberger, and Andrew Gordon Wilson. "GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration." In Advances in Neural Information Processing Systems (2018).](https://arxiv.org/abs/1809.11165)
+```
+@inproceedings{gardner2018gpytorch,
+ title={GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration},
+ author={Gardner, Jacob R and Pleiss, Geoff and Bindel, David and Weinberger, Kilian Q and Wilson, Andrew Gordon},
+ booktitle={Advances in Neural Information Processing Systems},
+ year={2018}
+}
+```
+## Development
+To run the unit tests:
+```bash
+python -m unittest
+```
+By default, the random seeds are locked down for some of the tests.
+If you want to run the tests without locking down the seed, run
+```bash
+UNLOCK_SEED=true python -m unittest
+```
+If you plan on submitting a pull request, please make use of our pre-commit hooks to ensure that your commits adhere
+to the general style guidelines enforced by the repo. To do this, navigate to your local repository and run:
+```bash
+pip install pre-commit
+pre-commit install
+```
+From then on, this will automatically run flake8, isort, black and other tools over the files you commit each time you commit to gpytorch or a fork of it.
+## The Team
+GPyTorch is primarily maintained by:
+- [Jake Gardner](https://www.cis.upenn.edu/~jacobrg/index.html) (University of Pennsylvania)
+- [Geoff Pleiss](http://github.com/gpleiss) (Columbia University)
+- [Kilian Weinberger](http://kilian.cs.cornell.edu/) (Cornell University)
+- [Andrew Gordon Wilson](https://cims.nyu.edu/~andrewgw/) (New York University)
+- [Max Balandat](https://research.fb.com/people/balandat-max/) (Meta)
+We would like to thank our other contributors including (but not limited to) David Arbour, Eytan Bakshy, David Eriksson, Jared Frank, Sam Stanton, Bram Wallace, Ke Alexander Wang, Ruihan Wu.
+## Acknowledgements
+Development of GPyTorch is supported by funding from
+the [Bill and Melinda Gates Foundation](https://www.gatesfoundation.org/),
+the [National Science Foundation](https://www.nsf.gov/),
+[SAP](https://www.sap.com/index.html),
+the [Simons Foundation](https://www.simonsfoundation.org),
+and the [Gatsby Charitable Trust](https://www.gatsby.org.uk).
+
+%package -n python3-gpytorch
+Summary: An implementation of Gaussian Processes in PyTorch
+Provides: python-gpytorch
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-gpytorch
+[![Test Suite](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml/badge.svg)](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml)
+[![Documentation Status](https://readthedocs.org/projects/gpytorch/badge/?version=latest)](https://gpytorch.readthedocs.io/en/latest/?badge=latest)
+GPyTorch is a Gaussian process library implemented using PyTorch. GPyTorch is designed for creating scalable, flexible, and modular Gaussian process models with ease.
+Internally, GPyTorch differs from many existing approaches to GP inference by performing all inference operations using modern numerical linear algebra techniques like preconditioned conjugate gradients. Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our `LinearOperator` interface, or by composing many of our already existing `LinearOperators`. This allows not only for easy implementation of popular scalable GP techniques, but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
+GPyTorch provides (1) significant GPU acceleration (through MVM based inference); (2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...); (3) easy integration with deep learning frameworks.
+## Examples, Tutorials, and Documentation
+See our numerous [**examples and tutorials**](https://gpytorch.readthedocs.io/en/latest/) on how to construct all sorts of models in GPyTorch.
+## Installation
+**Requirements**:
+- Python >= 3.8
+- PyTorch >= 1.11
+Install GPyTorch using pip or conda:
+```bash
+pip install gpytorch
+conda install gpytorch -c gpytorch
+```
+(To use packages globally but install GPyTorch as a user-only package, use `pip install --user` above.)
+#### Latest (unstable) version
+To upgrade to the latest (unstable) version, run
+```bash
+pip install --upgrade git+https://github.com/cornellius-gp/linear_operator.git
+pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git
+```
+#### ArchLinux Package
+Note: Experimental AUR package. For most users, we recommend installation by conda or pip.
+GPyTorch is also available on the [ArchLinux User Repository](https://wiki.archlinux.org/index.php/Arch_User_Repository) (AUR).
+You can install it with an [AUR helper](https://wiki.archlinux.org/index.php/AUR_helpers), like [`yay`](https://aur.archlinux.org/packages/yay/), as follows:
+```bash
+yay -S python-gpytorch
+```
+To discuss any issues related to this AUR package refer to the comments section of
+[`python-gpytorch`](https://aur.archlinux.org/packages/python-gpytorch/).
+## Citing Us
+If you use GPyTorch, please cite the following papers:
+> [Gardner, Jacob R., Geoff Pleiss, David Bindel, Kilian Q. Weinberger, and Andrew Gordon Wilson. "GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration." In Advances in Neural Information Processing Systems (2018).](https://arxiv.org/abs/1809.11165)
+```
+@inproceedings{gardner2018gpytorch,
+ title={GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration},
+ author={Gardner, Jacob R and Pleiss, Geoff and Bindel, David and Weinberger, Kilian Q and Wilson, Andrew Gordon},
+ booktitle={Advances in Neural Information Processing Systems},
+ year={2018}
+}
+```
+## Development
+To run the unit tests:
+```bash
+python -m unittest
+```
+By default, the random seeds are locked down for some of the tests.
+If you want to run the tests without locking down the seed, run
+```bash
+UNLOCK_SEED=true python -m unittest
+```
+If you plan on submitting a pull request, please make use of our pre-commit hooks to ensure that your commits adhere
+to the general style guidelines enforced by the repo. To do this, navigate to your local repository and run:
+```bash
+pip install pre-commit
+pre-commit install
+```
+From then on, this will automatically run flake8, isort, black and other tools over the files you commit each time you commit to gpytorch or a fork of it.
+## The Team
+GPyTorch is primarily maintained by:
+- [Jake Gardner](https://www.cis.upenn.edu/~jacobrg/index.html) (University of Pennsylvania)
+- [Geoff Pleiss](http://github.com/gpleiss) (Columbia University)
+- [Kilian Weinberger](http://kilian.cs.cornell.edu/) (Cornell University)
+- [Andrew Gordon Wilson](https://cims.nyu.edu/~andrewgw/) (New York University)
+- [Max Balandat](https://research.fb.com/people/balandat-max/) (Meta)
+We would like to thank our other contributors including (but not limited to) David Arbour, Eytan Bakshy, David Eriksson, Jared Frank, Sam Stanton, Bram Wallace, Ke Alexander Wang, Ruihan Wu.
+## Acknowledgements
+Development of GPyTorch is supported by funding from
+the [Bill and Melinda Gates Foundation](https://www.gatesfoundation.org/),
+the [National Science Foundation](https://www.nsf.gov/),
+[SAP](https://www.sap.com/index.html),
+the [Simons Foundation](https://www.simonsfoundation.org),
+and the [Gatsby Charitable Trust](https://www.gatsby.org.uk).
+
+%package help
+Summary: Development documents and examples for gpytorch
+Provides: python3-gpytorch-doc
+%description help
+[![Test Suite](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml/badge.svg)](https://github.com/cornellius-gp/gpytorch/actions/workflows/run_test_suite.yml)
+[![Documentation Status](https://readthedocs.org/projects/gpytorch/badge/?version=latest)](https://gpytorch.readthedocs.io/en/latest/?badge=latest)
+GPyTorch is a Gaussian process library implemented using PyTorch. GPyTorch is designed for creating scalable, flexible, and modular Gaussian process models with ease.
+Internally, GPyTorch differs from many existing approaches to GP inference by performing all inference operations using modern numerical linear algebra techniques like preconditioned conjugate gradients. Implementing a scalable GP method is as simple as providing a matrix multiplication routine with the kernel matrix and its derivative via our `LinearOperator` interface, or by composing many of our already existing `LinearOperators`. This allows not only for easy implementation of popular scalable GP techniques, but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
+GPyTorch provides (1) significant GPU acceleration (through MVM based inference); (2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...); (3) easy integration with deep learning frameworks.
+## Examples, Tutorials, and Documentation
+See our numerous [**examples and tutorials**](https://gpytorch.readthedocs.io/en/latest/) on how to construct all sorts of models in GPyTorch.
+## Installation
+**Requirements**:
+- Python >= 3.8
+- PyTorch >= 1.11
+Install GPyTorch using pip or conda:
+```bash
+pip install gpytorch
+conda install gpytorch -c gpytorch
+```
+(To use packages globally but install GPyTorch as a user-only package, use `pip install --user` above.)
+#### Latest (unstable) version
+To upgrade to the latest (unstable) version, run
+```bash
+pip install --upgrade git+https://github.com/cornellius-gp/linear_operator.git
+pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git
+```
+#### ArchLinux Package
+Note: Experimental AUR package. For most users, we recommend installation by conda or pip.
+GPyTorch is also available on the [ArchLinux User Repository](https://wiki.archlinux.org/index.php/Arch_User_Repository) (AUR).
+You can install it with an [AUR helper](https://wiki.archlinux.org/index.php/AUR_helpers), like [`yay`](https://aur.archlinux.org/packages/yay/), as follows:
+```bash
+yay -S python-gpytorch
+```
+To discuss any issues related to this AUR package refer to the comments section of
+[`python-gpytorch`](https://aur.archlinux.org/packages/python-gpytorch/).
+## Citing Us
+If you use GPyTorch, please cite the following papers:
+> [Gardner, Jacob R., Geoff Pleiss, David Bindel, Kilian Q. Weinberger, and Andrew Gordon Wilson. "GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration." In Advances in Neural Information Processing Systems (2018).](https://arxiv.org/abs/1809.11165)
+```
+@inproceedings{gardner2018gpytorch,
+ title={GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration},
+ author={Gardner, Jacob R and Pleiss, Geoff and Bindel, David and Weinberger, Kilian Q and Wilson, Andrew Gordon},
+ booktitle={Advances in Neural Information Processing Systems},
+ year={2018}
+}
+```
+## Development
+To run the unit tests:
+```bash
+python -m unittest
+```
+By default, the random seeds are locked down for some of the tests.
+If you want to run the tests without locking down the seed, run
+```bash
+UNLOCK_SEED=true python -m unittest
+```
+If you plan on submitting a pull request, please make use of our pre-commit hooks to ensure that your commits adhere
+to the general style guidelines enforced by the repo. To do this, navigate to your local repository and run:
+```bash
+pip install pre-commit
+pre-commit install
+```
+From then on, this will automatically run flake8, isort, black and other tools over the files you commit each time you commit to gpytorch or a fork of it.
+## The Team
+GPyTorch is primarily maintained by:
+- [Jake Gardner](https://www.cis.upenn.edu/~jacobrg/index.html) (University of Pennsylvania)
+- [Geoff Pleiss](http://github.com/gpleiss) (Columbia University)
+- [Kilian Weinberger](http://kilian.cs.cornell.edu/) (Cornell University)
+- [Andrew Gordon Wilson](https://cims.nyu.edu/~andrewgw/) (New York University)
+- [Max Balandat](https://research.fb.com/people/balandat-max/) (Meta)
+We would like to thank our other contributors including (but not limited to) David Arbour, Eytan Bakshy, David Eriksson, Jared Frank, Sam Stanton, Bram Wallace, Ke Alexander Wang, Ruihan Wu.
+## Acknowledgements
+Development of GPyTorch is supported by funding from
+the [Bill and Melinda Gates Foundation](https://www.gatesfoundation.org/),
+the [National Science Foundation](https://www.nsf.gov/),
+[SAP](https://www.sap.com/index.html),
+the [Simons Foundation](https://www.simonsfoundation.org),
+and the [Gatsby Charitable Trust](https://www.gatsby.org.uk).
+
+%prep
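+# Unpack the upstream sdist; the -n argument must match the tarball's top-level directory name.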
+%autosetup -n gpytorch-1.9.1
+
+%build
+%py3_build
+
+%install
+%py3_install
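+# Copy any doc/example directories shipped in the sdist into the package documentation directory.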
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
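+# Walk the buildroot and record every installed file; the generated lists feed the file sections below.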
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
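+# Man pages are gzip-compressed by the RPM build scripts, hence the .gz suffix recorded here.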
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-gpytorch -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 1.9.1-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..a32ddd9
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+95765d3f604be70b096b0ec7b5ceb961 gpytorch-1.9.1.tar.gz