author    CoprDistGit <infra@openeuler.org>    2023-04-11 06:31:51 +0000
committer CoprDistGit <infra@openeuler.org>    2023-04-11 06:31:51 +0000
commit    6e3d84857809d75d203a1ecce9f70353abd0786b (patch)
tree      d976040957148fcb63ccc8019bb6f46e147d615e
parent    734d75c2e9fdb85487b621469acdaca3ff2b3144 (diff)
automatic import of python-torch-scatter
-rw-r--r--    .gitignore                     1
-rw-r--r--    python-torch-scatter.spec    351
-rw-r--r--    sources                        1
3 files changed, 353 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..f9f2054 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/torch_scatter-2.1.1.tar.gz
diff --git a/python-torch-scatter.spec b/python-torch-scatter.spec
new file mode 100644
index 0000000..980e1eb
--- /dev/null
+++ b/python-torch-scatter.spec
@@ -0,0 +1,351 @@
+%global _empty_manifest_terminate_build 0
+Name: python-torch-scatter
+Version: 2.1.1
+Release: 1
+Summary: PyTorch Extension Library of Optimized Scatter Operations
+License: MIT License
+URL: https://github.com/rusty1s/pytorch_scatter
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/db/ae/3dee934b7118aec8528a6832dbb3cf079e13dd442c4600cae8d29a4f9fea/torch_scatter-2.1.1.tar.gz
+BuildArch: noarch
+
+
+%description
+**[Documentation](https://pytorch-scatter.readthedocs.io)**
+This package consists of a small extension library of highly optimized sparse update (scatter and segment) operations for use in [PyTorch](http://pytorch.org/), which are missing in the main package.
+Scatter and segment operations can be roughly described as reduce operations based on a given "group-index" tensor.
+Segment operations require the "group-index" tensor to be sorted, whereas scatter operations have no such requirement.
+The package consists of the following operations with reduction types `"sum"|"mean"|"min"|"max"`:
+* [**scatter**](https://pytorch-scatter.readthedocs.io/en/latest/functions/scatter.html) based on arbitrary indices
+* [**segment_coo**](https://pytorch-scatter.readthedocs.io/en/latest/functions/segment_coo.html) based on sorted indices
+* [**segment_csr**](https://pytorch-scatter.readthedocs.io/en/latest/functions/segment_csr.html) based on compressed indices via pointers
+In addition, we provide the following **composite functions** which make use of `scatter_*` operations under the hood: `scatter_std`, `scatter_logsumexp`, `scatter_softmax` and `scatter_log_softmax`.
+All included operations are broadcastable, work on varying data types, are implemented both for CPU and GPU with corresponding backward implementations, and are fully traceable.
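+As a quick illustration of the "group-index" idea (a minimal sketch, not part of the upstream README), each element of `src` is reduced into the output slot named by the corresponding entry of `index`:
+```py
+import torch
+from torch_scatter import scatter
+
+src = torch.tensor([1., 2., 3., 4., 5., 6.])
+index = torch.tensor([0, 1, 0, 1, 2, 1])  # "group-index": element i is reduced into group index[i]
+
+print(scatter(src, index, dim=0, reduce="sum"))   # tensor([ 4., 12.,  5.])
+print(scatter(src, index, dim=0, reduce="mean"))  # tensor([2., 4., 5.])
+print(scatter(src, index, dim=0, reduce="max"))   # tensor([3., 6., 5.])
+```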
+## Installation
+### Anaconda
+**Update:** You can now install `pytorch-scatter` via [Anaconda](https://anaconda.org/pyg/pytorch-scatter) for all major OS/PyTorch/CUDA combinations 🤗
+Given that you have [`pytorch >= 1.8.0` installed](https://pytorch.org/get-started/locally/), simply run
+```
+conda install pytorch-scatter -c pyg
+```
+### Binaries
+We alternatively provide pip wheels for all major OS/PyTorch/CUDA combinations; see [here](https://data.pyg.org/whl).
+#### PyTorch 2.0
+To install the binaries for PyTorch 2.0.0, simply run
+```
+pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.0+${CUDA}.html
+```
+where `${CUDA}` should be replaced by either `cpu`, `cu117`, or `cu118` depending on your PyTorch installation.
+| | `cpu` | `cu117` | `cu118` |
+|-------------|-------|---------|---------|
+| **Linux** | ✅ | ✅ | ✅ |
+| **Windows** | ✅ | ✅ | ✅ |
+| **macOS** | ✅ | | |
+#### PyTorch 1.13
+To install the binaries for PyTorch 1.13.0, simply run
+```
+pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.0+${CUDA}.html
+```
+where `${CUDA}` should be replaced by either `cpu`, `cu116`, or `cu117` depending on your PyTorch installation.
+| | `cpu` | `cu116` | `cu117` |
+|-------------|-------|---------|---------|
+| **Linux** | ✅ | ✅ | ✅ |
+| **Windows** | ✅ | ✅ | ✅ |
+| **macOS** | ✅ | | |
+**Note:** Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0 and PyTorch 1.12.0/1.12.1 (following the same procedure).
+For older versions, you need to explicitly specify the latest supported version number or install via `pip install --no-index` in order to prevent pip from accidentally falling back to an installation from source.
+You can look up the latest supported version number [here](https://data.pyg.org/whl).
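+As an illustrative (hypothetical) command for such an older combination, assuming a PyTorch 1.12.0/CUDA 11.6 setup, pinning the wheel index and adding `--no-index` keeps pip from falling back to a source build:
+```
+pip install --no-index torch-scatter -f https://data.pyg.org/whl/torch-1.12.0+cu116.html
+```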
+### From source
+Ensure that at least PyTorch 1.4.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, *e.g.*:
+```
+$ python -c "import torch; print(torch.__version__)"
+>>> 1.4.0
+$ echo $PATH
+>>> /usr/local/cuda/bin:...
+$ echo $CPATH
+>>> /usr/local/cuda/include:...
+```
+Then run:
+```
+pip install torch-scatter
+```
+When running in a docker container without an NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
+In this case, ensure that the compute capabilities are set via `TORCH_CUDA_ARCH_LIST`, *e.g.*:
+```
+export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
+```
+## Example
+```py
+import torch
+from torch_scatter import scatter_max
+src = torch.tensor([[2, 0, 1, 4, 3], [0, 2, 1, 3, 4]])
+index = torch.tensor([[4, 5, 4, 2, 3], [0, 0, 2, 2, 1]])
+out, argmax = scatter_max(src, index, dim=-1)
+```
+```
+print(out)
+tensor([[0, 0, 4, 3, 2, 0],
+        [2, 4, 3, 0, 0, 0]])
+print(argmax)
+tensor([[5, 5, 3, 4, 0, 1],
+        [1, 4, 3, 5, 5, 5]])
+```
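+For comparison, the same kind of grouped reduction can be expressed with the segment operations once the group indices are sorted; a minimal sketch (not from the upstream README):
+```py
+import torch
+from torch_scatter import segment_coo, segment_csr
+
+src = torch.tensor([10., 20., 30., 40., 50.])
+index = torch.tensor([0, 0, 1, 1, 2])  # sorted group indices (COO style)
+indptr = torch.tensor([0, 2, 4, 5])    # pointers delimiting the same groups (CSR style)
+
+print(segment_coo(src, index, reduce="max"))   # tensor([20., 40., 50.])
+print(segment_csr(src, indptr, reduce="max"))  # tensor([20., 40., 50.])
+```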
+## Running tests
+```
+pytest
+```
+## C++ API
+`torch-scatter` also offers a C++ API that contains C++ equivalents of the Python operations.
+For this, we need to add `TorchLib` to the `-DCMAKE_PREFIX_PATH` (*e.g.*, it may exist in `{CONDA}/lib/python{X.X}/site-packages/torch` if installed via `conda`):
+```
+mkdir build
+cd build
+# Add -DWITH_CUDA=on for CUDA support
+cmake -DCMAKE_PREFIX_PATH="..." ..
+make
+make install
+```
+
+%package -n python3-torch-scatter
+Summary: PyTorch Extension Library of Optimized Scatter Operations
+Provides: python-torch-scatter
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-torch-scatter
+**[Documentation](https://pytorch-scatter.readthedocs.io)**
+This package consists of a small extension library of highly optimized sparse update (scatter and segment) operations for use in [PyTorch](http://pytorch.org/), which are missing in the main package.
+Scatter and segment operations can be roughly described as reduce operations based on a given "group-index" tensor.
+Segment operations require the "group-index" tensor to be sorted, whereas scatter operations have no such requirement.
+The package consists of the following operations with reduction types `"sum"|"mean"|"min"|"max"`:
+* [**scatter**](https://pytorch-scatter.readthedocs.io/en/latest/functions/scatter.html) based on arbitrary indices
+* [**segment_coo**](https://pytorch-scatter.readthedocs.io/en/latest/functions/segment_coo.html) based on sorted indices
+* [**segment_csr**](https://pytorch-scatter.readthedocs.io/en/latest/functions/segment_csr.html) based on compressed indices via pointers
+In addition, we provide the following **composite functions** which make use of `scatter_*` operations under the hood: `scatter_std`, `scatter_logsumexp`, `scatter_softmax` and `scatter_log_softmax`.
+All included operations are broadcastable, work on varying data types, are implemented both for CPU and GPU with corresponding backward implementations, and are fully traceable.
+## Installation
+### Anaconda
+**Update:** You can now install `pytorch-scatter` via [Anaconda](https://anaconda.org/pyg/pytorch-scatter) for all major OS/PyTorch/CUDA combinations 🤗
+Given that you have [`pytorch >= 1.8.0` installed](https://pytorch.org/get-started/locally/), simply run
+```
+conda install pytorch-scatter -c pyg
+```
+### Binaries
+We alternatively provide pip wheels for all major OS/PyTorch/CUDA combinations; see [here](https://data.pyg.org/whl).
+#### PyTorch 2.0
+To install the binaries for PyTorch 2.0.0, simply run
+```
+pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.0+${CUDA}.html
+```
+where `${CUDA}` should be replaced by either `cpu`, `cu117`, or `cu118` depending on your PyTorch installation.
+| | `cpu` | `cu117` | `cu118` |
+|-------------|-------|---------|---------|
+| **Linux** | ✅ | ✅ | ✅ |
+| **Windows** | ✅ | ✅ | ✅ |
+| **macOS** | ✅ | | |
+#### PyTorch 1.13
+To install the binaries for PyTorch 1.13.0, simply run
+```
+pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.0+${CUDA}.html
+```
+where `${CUDA}` should be replaced by either `cpu`, `cu116`, or `cu117` depending on your PyTorch installation.
+| | `cpu` | `cu116` | `cu117` |
+|-------------|-------|---------|---------|
+| **Linux** | ✅ | ✅ | ✅ |
+| **Windows** | ✅ | ✅ | ✅ |
+| **macOS** | ✅ | | |
+**Note:** Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0 and PyTorch 1.12.0/1.12.1 (following the same procedure).
+For older versions, you need to explicitly specify the latest supported version number or install via `pip install --no-index` in order to prevent pip from accidentally falling back to an installation from source.
+You can look up the latest supported version number [here](https://data.pyg.org/whl).
+### From source
+Ensure that at least PyTorch 1.4.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, *e.g.*:
+```
+$ python -c "import torch; print(torch.__version__)"
+>>> 1.4.0
+$ echo $PATH
+>>> /usr/local/cuda/bin:...
+$ echo $CPATH
+>>> /usr/local/cuda/include:...
+```
+Then run:
+```
+pip install torch-scatter
+```
+When running in a docker container without an NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
+In this case, ensure that the compute capabilities are set via `TORCH_CUDA_ARCH_LIST`, *e.g.*:
+```
+export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
+```
+## Example
+```py
+import torch
+from torch_scatter import scatter_max
+src = torch.tensor([[2, 0, 1, 4, 3], [0, 2, 1, 3, 4]])
+index = torch.tensor([[4, 5, 4, 2, 3], [0, 0, 2, 2, 1]])
+out, argmax = scatter_max(src, index, dim=-1)
+```
+```
+print(out)
+tensor([[0, 0, 4, 3, 2, 0],
+        [2, 4, 3, 0, 0, 0]])
+print(argmax)
+tensor([[5, 5, 3, 4, 0, 1],
+        [1, 4, 3, 5, 5, 5]])
+```
+## Running tests
+```
+pytest
+```
+## C++ API
+`torch-scatter` also offers a C++ API that contains C++ equivalents of the Python operations.
+For this, we need to add `TorchLib` to the `-DCMAKE_PREFIX_PATH` (*e.g.*, it may exist in `{CONDA}/lib/python{X.X}/site-packages/torch` if installed via `conda`):
+```
+mkdir build
+cd build
+# Add -DWITH_CUDA=on for CUDA support
+cmake -DCMAKE_PREFIX_PATH="..." ..
+make
+make install
+```
+
+%package help
+Summary: Development documents and examples for torch-scatter
+Provides: python3-torch-scatter-doc
+%description help
+**[Documentation](https://pytorch-scatter.readthedocs.io)**
+This package consists of a small extension library of highly optimized sparse update (scatter and segment) operations for use in [PyTorch](http://pytorch.org/), which are missing in the main package.
+Scatter and segment operations can be roughly described as reduce operations based on a given "group-index" tensor.
+Segment operations require the "group-index" tensor to be sorted, whereas scatter operations have no such requirement.
+The package consists of the following operations with reduction types `"sum"|"mean"|"min"|"max"`:
+* [**scatter**](https://pytorch-scatter.readthedocs.io/en/latest/functions/scatter.html) based on arbitrary indices
+* [**segment_coo**](https://pytorch-scatter.readthedocs.io/en/latest/functions/segment_coo.html) based on sorted indices
+* [**segment_csr**](https://pytorch-scatter.readthedocs.io/en/latest/functions/segment_csr.html) based on compressed indices via pointers
+In addition, we provide the following **composite functions** which make use of `scatter_*` operations under the hood: `scatter_std`, `scatter_logsumexp`, `scatter_softmax` and `scatter_log_softmax`.
+All included operations are broadcastable, work on varying data types, are implemented both for CPU and GPU with corresponding backward implementations, and are fully traceable.
+## Installation
+### Anaconda
+**Update:** You can now install `pytorch-scatter` via [Anaconda](https://anaconda.org/pyg/pytorch-scatter) for all major OS/PyTorch/CUDA combinations 🤗
+Given that you have [`pytorch >= 1.8.0` installed](https://pytorch.org/get-started/locally/), simply run
+```
+conda install pytorch-scatter -c pyg
+```
+### Binaries
+We alternatively provide pip wheels for all major OS/PyTorch/CUDA combinations; see [here](https://data.pyg.org/whl).
+#### PyTorch 2.0
+To install the binaries for PyTorch 2.0.0, simply run
+```
+pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.0+${CUDA}.html
+```
+where `${CUDA}` should be replaced by either `cpu`, `cu117`, or `cu118` depending on your PyTorch installation.
+| | `cpu` | `cu117` | `cu118` |
+|-------------|-------|---------|---------|
+| **Linux** | ✅ | ✅ | ✅ |
+| **Windows** | ✅ | ✅ | ✅ |
+| **macOS** | ✅ | | |
+#### PyTorch 1.13
+To install the binaries for PyTorch 1.13.0, simply run
+```
+pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.0+${CUDA}.html
+```
+where `${CUDA}` should be replaced by either `cpu`, `cu116`, or `cu117` depending on your PyTorch installation.
+| | `cpu` | `cu116` | `cu117` |
+|-------------|-------|---------|---------|
+| **Linux** | ✅ | ✅ | ✅ |
+| **Windows** | ✅ | ✅ | ✅ |
+| **macOS** | ✅ | | |
+**Note:** Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0 and PyTorch 1.12.0/1.12.1 (following the same procedure).
+For older versions, you need to explicitly specify the latest supported version number or install via `pip install --no-index` in order to prevent pip from accidentally falling back to an installation from source.
+You can look up the latest supported version number [here](https://data.pyg.org/whl).
+### From source
+Ensure that at least PyTorch 1.4.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, *e.g.*:
+```
+$ python -c "import torch; print(torch.__version__)"
+>>> 1.4.0
+$ echo $PATH
+>>> /usr/local/cuda/bin:...
+$ echo $CPATH
+>>> /usr/local/cuda/include:...
+```
+Then run:
+```
+pip install torch-scatter
+```
+When running in a docker container without an NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
+In this case, ensure that the compute capabilities are set via `TORCH_CUDA_ARCH_LIST`, *e.g.*:
+```
+export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
+```
+## Example
+```py
+import torch
+from torch_scatter import scatter_max
+src = torch.tensor([[2, 0, 1, 4, 3], [0, 2, 1, 3, 4]])
+index = torch.tensor([[4, 5, 4, 2, 3], [0, 0, 2, 2, 1]])
+out, argmax = scatter_max(src, index, dim=-1)
+```
+```
+print(out)
+tensor([[0, 0, 4, 3, 2, 0],
+        [2, 4, 3, 0, 0, 0]])
+print(argmax)
+tensor([[5, 5, 3, 4, 0, 1],
+        [1, 4, 3, 5, 5, 5]])
+```
+## Running tests
+```
+pytest
+```
+## C++ API
+`torch-scatter` also offers a C++ API that contains C++ equivalents of the Python operations.
+For this, we need to add `TorchLib` to the `-DCMAKE_PREFIX_PATH` (*e.g.*, it may exist in `{CONDA}/lib/python{X.X}/site-packages/torch` if installed via `conda`):
+```
+mkdir build
+cd build
+# Add -DWITH_CUDA=on for CUDA support
+cmake -DCMAKE_PREFIX_PATH="..." ..
+make
+make install
+```
+
+%prep
+%autosetup -n torch-scatter-2.1.1
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-torch-scatter -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 2.1.1-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..af396b2
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+2d547d80e9b8fca2b8da0362f8c80fc8 torch_scatter-2.1.1.tar.gz