Diffstat (limited to 'python-opt-einsum-fx.spec')
-rw-r--r--  python-opt-einsum-fx.spec  396
1 file changed, 396 insertions, 0 deletions
diff --git a/python-opt-einsum-fx.spec b/python-opt-einsum-fx.spec
new file mode 100644
index 0000000..9824053
--- /dev/null
+++ b/python-opt-einsum-fx.spec
@@ -0,0 +1,396 @@
+%global _empty_manifest_terminate_build 0
+Name: python-opt-einsum-fx
+Version: 0.1.4
+Release: 1
+Summary: Einsum optimization using opt_einsum and PyTorch FX
+License: MIT
+URL: https://github.com/Linux-cpp-lisp/opt_einsum_fx
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/93/de/856dab99be0360c7275fee075eb0450a2ec82a54c4c33689606f62e9615b/opt_einsum_fx-0.1.4.tar.gz
+BuildArch: noarch
+
+Requires: python3-torch
+Requires: python3-opt-einsum
+Requires: python3-packaging
+
+%description
+# opt_einsum_fx
+
+[![Documentation Status](https://readthedocs.org/projects/opt-einsum-fx/badge/?version=latest)](https://opt-einsum-fx.readthedocs.io/en/latest/?badge=latest)
+
+Optimizing einsums and functions involving them using [`opt_einsum`](https://optimized-einsum.readthedocs.io/en/stable/) and PyTorch [FX](https://pytorch.org/docs/stable/fx.html) compute graphs.
+
+Issues, questions, PRs, and any thoughts about further optimizing these kinds of operations are welcome!
+
+For more information please see [the docs](https://opt-einsum-fx.readthedocs.io/en/stable/).
+
+## Installation
+
+### PyPI
+
+The latest release can be installed from PyPI:
+```bash
+$ pip install opt_einsum_fx
+```
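+
+A quick way to confirm the install (a minimal sanity check; `optimize_einsums_full` is the entry point used in the example below):
+```python
+import opt_einsum_fx
+print(opt_einsum_fx.optimize_einsums_full)  # prints a function object if the install worked
+```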
+
+### Source
+
+To get the latest code, run:
+
+```bash
+$ git clone https://github.com/Linux-cpp-lisp/opt_einsum_fx.git
+```
+and install it by running:
+```bash
+$ cd opt_einsum_fx/
+$ pip install .
+```
+
+You can run the tests with:
+```bash
+$ pytest tests/
+```
+
+## Minimal example
+
+```python
+import torch
+import torch.fx
+import opt_einsum_fx
+
+def einmatvecmul(a, b, vec):
+ """Batched matrix-matrix-vector product using einsum"""
+ return torch.einsum("zij,zjk,zk->zi", a, b, vec)
+
+graph_mod = torch.fx.symbolic_trace(einmatvecmul)
+print("Original code:\n", graph_mod.code)
+graph_opt = opt_einsum_fx.optimize_einsums_full(
+ model=graph_mod,
+ example_inputs=(
+ torch.randn(7, 4, 5),
+ torch.randn(7, 5, 3),
+ torch.randn(7, 3)
+ )
+)
+print("Optimized code:\n", graph_opt.code)
+```
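+
+For reference, this einsum is just two batched matrix products; a hand-written equivalent in plain PyTorch (not part of opt_einsum_fx, shown only to make the contraction concrete):
+```python
+import torch
+
+a, b, vec = torch.randn(7, 4, 5), torch.randn(7, 5, 3), torch.randn(7, 3)
+mv = torch.bmm(b, vec.unsqueeze(-1))  # contract vec into b: (7, 5, 1)
+out = torch.bmm(a, mv).squeeze(-1)    # then contract with a: (7, 4)
+assert torch.allclose(out, torch.einsum("zij,zjk,zk->zi", a, b, vec), atol=1e-5)
+```
+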
+Running the example above outputs:
+```
+Original code:
+import torch
+def forward(self, a, b, vec):
+ einsum_1 = torch.functional.einsum('zij,zjk,zk->zi', a, b, vec); a = b = vec = None
+ return einsum_1
+
+Optimized code:
+import torch
+def forward(self, a, b, vec):
+ einsum_1 = torch.functional.einsum('cb,cab->ca', vec, b); vec = b = None
+ einsum_2 = torch.functional.einsum('cb,cab->ca', einsum_1, a); einsum_1 = a = None
+ return einsum_2
+```
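+
+The optimized module computes the same result: the contraction is reordered so that `vec` is absorbed into `b` first, which avoids materializing the full matrix-matrix product of `a` and `b`. A quick numerical sanity check, continuing the example above:
+```python
+inputs = (torch.randn(7, 4, 5), torch.randn(7, 5, 3), torch.randn(7, 3))
+assert torch.allclose(graph_mod(*inputs), graph_opt(*inputs), atol=1e-5)
+```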
+
+We can measure the performance improvement (this is on a CPU):
+```python
+from torch.utils.benchmark import Timer
+
+batch = 1000
+a, b, vec = torch.randn(batch, 4, 5), torch.randn(batch, 5, 8), torch.randn(batch, 8)
+
+g = {"f": graph_mod, "a": a, "b": b, "vec": vec}
+t_orig = Timer("f(a, b, vec)", globals=g)
+print(t_orig.timeit(10_000))
+
+g["f"] = graph_opt
+t_opt = Timer("f(a, b, vec)", globals=g)
+print(t_opt.timeit(10_000))
+```
+gives a ~2x improvement:
+```
+f(a, b, vec)
+ 276.58 us
+ 1 measurement, 10000 runs , 1 thread
+
+f(a, b, vec)
+ 118.84 us
+ 1 measurement, 10000 runs , 1 thread
+```
+Depending on your function and dimensions, you may see even larger improvements.
+
+## License
+
+`opt_einsum_fx` is distributed under an [MIT license](LICENSE).
+
+
+
+%package -n python3-opt-einsum-fx
+Summary: Einsum optimization using opt_einsum and PyTorch FX
+Provides: python-opt-einsum-fx
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-opt-einsum-fx
+opt-einsum-fx optimizes einsums, and functions involving them, using
+opt_einsum and PyTorch FX compute graphs. See the base package
+description above for installation notes and a minimal example.
+
+
+%package help
+Summary: Development documents and examples for opt-einsum-fx
+Provides: python3-opt-einsum-fx-doc
+%description help
+Development documents and examples for opt-einsum-fx, which optimizes
+einsums, and functions involving them, using opt_einsum and PyTorch FX
+compute graphs. See the base package description above for details.
+
+
+
+%prep
+%autosetup -n opt_einsum_fx-0.1.4
+
+%build
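+# Pure-Python (noarch) build of the upstream setup.py distribution.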
+%py3_build
+
+%install
+%py3_install
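+# Ship any upstream doc/, docs/, example/, or examples/ trees as package docs.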
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
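+# Collect everything installed under the buildroot into filelist.lst,
+# which feeds the -f file lists below.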
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
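+# Man pages get compressed during rpm's build post-processing (brp-compress),
+# hence the .gz suffix recorded here.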
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-opt-einsum-fx -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Wed May 31 2023 Python_Bot <Python_Bot@openeuler.org> - 0.1.4-1
+- Package Spec generated