-rw-r--r--  .gitignore             1
-rw-r--r--  python-attention.spec  434
-rw-r--r--  sources                1
3 files changed, 436 insertions(+), 0 deletions(-)
diff --git a/.gitignore b/.gitignore
index e69de29..179980e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/attention-5.0.0.tar.gz
diff --git a/python-attention.spec b/python-attention.spec
new file mode 100644
index 0000000..bfdd637
--- /dev/null
+++ b/python-attention.spec
@@ -0,0 +1,434 @@
+%global _empty_manifest_terminate_build 0
+Name: python-attention
+Version: 5.0.0
+Release: 1
+Summary: Keras Attention Layer
+License:	Apache-2.0
+URL: https://pypi.org/project/attention/
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/c3/3f/4f821fbcf4c401ec43b549b67d12bf5dd00eb4545378c336b09a17bdd9f3/attention-5.0.0.tar.gz
+BuildArch: noarch
+
+Requires: python3-numpy
+Requires: python3-tensorflow
+
+%description
+# Keras Attention Mechanism
+
+[![Downloads](https://pepy.tech/badge/attention)](https://pepy.tech/project/attention)
+[![Downloads](https://pepy.tech/badge/attention/month)](https://pepy.tech/project/attention)
+[![license](https://img.shields.io/badge/License-Apache_2.0-brightgreen.svg)](https://github.com/philipperemy/keras-attention-mechanism/blob/master/LICENSE) [![dep1](https://img.shields.io/badge/Tensorflow-2.0+-brightgreen.svg)](https://www.tensorflow.org/)
+![Simple Keras Attention CI](https://github.com/philipperemy/keras-attention-mechanism/workflows/Simple%20Keras%20Attention%20CI/badge.svg)
+
+Many-to-one attention mechanism for Keras.
+
+<p align="center">
+ <img src="examples/equations.png" width="600">
+</p>
+
+
+## Installation
+
+*PyPI*
+
+```bash
+pip install attention
+```
+
+## Example
+
+```python
+import numpy as np
+from tensorflow.keras import Input
+from tensorflow.keras.layers import Dense, LSTM
+from tensorflow.keras.models import load_model, Model
+
+from attention import Attention
+
+
+def main():
+ # Dummy data. There is nothing to learn in this example.
+ num_samples, time_steps, input_dim, output_dim = 100, 10, 1, 1
+ data_x = np.random.uniform(size=(num_samples, time_steps, input_dim))
+ data_y = np.random.uniform(size=(num_samples, output_dim))
+
+ # Define/compile the model.
+ model_input = Input(shape=(time_steps, input_dim))
+ x = LSTM(64, return_sequences=True)(model_input)
+ x = Attention(units=32)(x)
+ x = Dense(1)(x)
+ model = Model(model_input, x)
+ model.compile(loss='mae', optimizer='adam')
+ model.summary()
+
+ # train.
+ model.fit(data_x, data_y, epochs=10)
+
+ # test save/reload model.
+ pred1 = model.predict(data_x)
+ model.save('test_model.h5')
+ model_h5 = load_model('test_model.h5', custom_objects={'Attention': Attention})
+ pred2 = model_h5.predict(data_x)
+ np.testing.assert_almost_equal(pred1, pred2)
+ print('Success.')
+
+
+if __name__ == '__main__':
+ main()
+```
+
+## Other Examples
+
+Browse [examples](examples).
+
+Install the requirements before running the examples: `pip install -r examples/examples-requirements.txt`.
+
+
+### IMDB Dataset
+
+In this experiment, we demonstrate that using attention yields higher accuracy on the IMDB dataset. We consider two
+LSTM networks: one with this attention layer and the other with a fully connected layer. Both have the same number
+of parameters (250K) for a fair comparison.
+
+Here are the results over 10 runs. For each run, we record the maximum test-set accuracy reached within 10 epochs.
+
+
+| Measure | No Attention (250K params) | Attention (250K params) |
+| ------------- | ------------- | ------------- |
+| Max accuracy (%) | 88.22 | 88.76 |
+| Mean accuracy (%) | 87.02 | 87.62 |
+| Std. dev. of accuracy (%) | 0.18 | 0.14 |
+
+As expected, the model with attention achieves higher accuracy. It also reduces the variability between runs, which is a welcome property.
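+
+The rough shape of the attention variant is sketched below. This is a hypothetical sketch, not the exact script behind the numbers above; the embedding size, sequence length and layer sizes are assumptions.
+
+```python
+# Hypothetical sketch of the attention-based IMDB classifier; hyper-parameters
+# here are illustrative assumptions, not the configuration behind the table above.
+from tensorflow.keras import Input
+from tensorflow.keras.datasets import imdb
+from tensorflow.keras.layers import Dense, Embedding, LSTM
+from tensorflow.keras.models import Model
+from tensorflow.keras.preprocessing.sequence import pad_sequences
+
+from attention import Attention
+
+max_features, max_len = 20000, 200
+(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
+x_train = pad_sequences(x_train, maxlen=max_len)
+x_test = pad_sequences(x_test, maxlen=max_len)
+
+i = Input(shape=(max_len,))
+x = Embedding(max_features, 32)(i)
+x = LSTM(64, return_sequences=True)(x)  # keep the full sequence for attention
+x = Attention(units=32)(x)              # many-to-one: sequence -> single vector
+x = Dense(1, activation='sigmoid')(x)   # binary sentiment output
+model = Model(i, x)
+model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
+model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10, batch_size=128)
+```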
+
+
+### Adding two numbers
+
+Let's consider the task of adding two numbers that come right after some delimiters (0 in this case):
+
+For `x = [1, 2, 3, 0, 4, 5, 6, 0, 7, 8]`, the expected result is `y = 4 + 7 = 11`.
+
+The attention is expected to be highest right after the delimiters. An overview of the training is shown below, where
+the top row shows the attention map and the bottom row the ground truth. As training progresses, the model learns the
+task and the attention map converges to the ground truth.
+
+<p align="center">
+ <img src="examples/attention.gif" width="320">
+</p>
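+
+A minimal data-generation sketch for this task is shown below; the sequence length, value range and delimiter placement are illustrative assumptions (see [examples](examples) for the full experiments).
+
+```python
+# Minimal sketch of the synthetic data described above. Sequence length, value
+# range and delimiter placement are assumptions made for illustration only.
+import numpy as np
+
+
+def generate_sample(seq_len=10):
+    x = np.random.randint(1, 10, size=seq_len).astype('float32')
+    i = np.random.randint(0, seq_len // 2 - 1)         # first delimiter in the first half
+    j = np.random.randint(seq_len // 2, seq_len - 1)   # second delimiter in the second half
+    x[i] = x[j] = 0.0                                   # 0 marks the delimiters
+    y = x[i + 1] + x[j + 1]                             # add the two numbers right after them
+    return x, y
+
+
+x, y = generate_sample()
+print(x, y)  # e.g. [1. 2. 3. 0. 4. 5. 6. 0. 7. 8.] 11.0
+```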
+
+### Finding max of a sequence
+
+We consider many 1D sequences of the same length. The task is to find the maximum of each sequence.
+
+We feed the full sequence, as processed by the RNN layer, to the attention layer and expect it to focus on the maximum of each sequence.
+
+After a few epochs, the attention layer converges to exactly what we expected.
+
+<p align="center">
+ <img src="examples/readme/example.png" width="320">
+</p>
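+
+A minimal sketch of this setup is shown below; the dataset size, sequence length and layer sizes are assumptions.
+
+```python
+# Hypothetical sketch of the "find the max" task: random 1D sequences whose
+# target is the maximum value of each sequence. Sizes are assumptions.
+import numpy as np
+from tensorflow.keras import Input
+from tensorflow.keras.layers import Dense, LSTM
+from tensorflow.keras.models import Model
+
+from attention import Attention
+
+num_samples, seq_len = 10000, 20
+data_x = np.random.uniform(size=(num_samples, seq_len, 1))
+data_y = data_x.max(axis=1)  # shape (num_samples, 1): max of each sequence
+
+model_input = Input(shape=(seq_len, 1))
+x = LSTM(64, return_sequences=True)(model_input)  # the full sequence is passed to attention
+x = Attention(units=32)(x)
+x = Dense(1)(x)
+model = Model(model_input, x)
+model.compile(loss='mae', optimizer='adam')
+model.fit(data_x, data_y, epochs=10, validation_split=0.2)
+```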
+
+## References
+
+- https://www.cs.cmu.edu/~./hovy/papers/16HLT-hierarchical-attention-networks.pdf
+- https://arxiv.org/abs/1508.04025
+- https://arxiv.org/abs/1409.0473
+
+
+%package -n python3-attention
+Summary: Keras Attention Layer
+Provides: python-attention
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-attention
+# Keras Attention Mechanism
+
+[![Downloads](https://pepy.tech/badge/attention)](https://pepy.tech/project/attention)
+[![Downloads](https://pepy.tech/badge/attention/month)](https://pepy.tech/project/attention)
+[![license](https://img.shields.io/badge/License-Apache_2.0-brightgreen.svg)](https://github.com/philipperemy/keras-attention-mechanism/blob/master/LICENSE) [![dep1](https://img.shields.io/badge/Tensorflow-2.0+-brightgreen.svg)](https://www.tensorflow.org/)
+![Simple Keras Attention CI](https://github.com/philipperemy/keras-attention-mechanism/workflows/Simple%20Keras%20Attention%20CI/badge.svg)
+
+Many-to-one attention mechanism for Keras.
+
+<p align="center">
+ <img src="examples/equations.png" width="600">
+</p>
+
+
+## Installation
+
+*PyPI*
+
+```bash
+pip install attention
+```
+
+## Example
+
+```python
+import numpy as np
+from tensorflow.keras import Input
+from tensorflow.keras.layers import Dense, LSTM
+from tensorflow.keras.models import load_model, Model
+
+from attention import Attention
+
+
+def main():
+ # Dummy data. There is nothing to learn in this example.
+ num_samples, time_steps, input_dim, output_dim = 100, 10, 1, 1
+ data_x = np.random.uniform(size=(num_samples, time_steps, input_dim))
+ data_y = np.random.uniform(size=(num_samples, output_dim))
+
+ # Define/compile the model.
+ model_input = Input(shape=(time_steps, input_dim))
+ x = LSTM(64, return_sequences=True)(model_input)
+ x = Attention(units=32)(x)
+ x = Dense(1)(x)
+ model = Model(model_input, x)
+ model.compile(loss='mae', optimizer='adam')
+ model.summary()
+
+ # train.
+ model.fit(data_x, data_y, epochs=10)
+
+ # test save/reload model.
+ pred1 = model.predict(data_x)
+ model.save('test_model.h5')
+ model_h5 = load_model('test_model.h5', custom_objects={'Attention': Attention})
+ pred2 = model_h5.predict(data_x)
+ np.testing.assert_almost_equal(pred1, pred2)
+ print('Success.')
+
+
+if __name__ == '__main__':
+ main()
+```
+
+## Other Examples
+
+Browse [examples](examples).
+
+Install the requirements before running the examples: `pip install -r examples/examples-requirements.txt`.
+
+
+### IMDB Dataset
+
+In this experiment, we demonstrate that using attention yields higher accuracy on the IMDB dataset. We consider two
+LSTM networks: one with this attention layer and the other with a fully connected layer. Both have the same number
+of parameters (250K) for a fair comparison.
+
+Here are the results over 10 runs. For each run, we record the maximum test-set accuracy reached within 10 epochs.
+
+
+| Measure | No Attention (250K params) | Attention (250K params) |
+| ------------- | ------------- | ------------- |
+| Max accuracy (%) | 88.22 | 88.76 |
+| Mean accuracy (%) | 87.02 | 87.62 |
+| Std. dev. of accuracy (%) | 0.18 | 0.14 |
+
+As expected, the model with attention achieves higher accuracy. It also reduces the variability between runs, which is a welcome property.
+
+
+### Adding two numbers
+
+Let's consider the task of adding two numbers that come right after some delimiters (0 in this case):
+
+For `x = [1, 2, 3, 0, 4, 5, 6, 0, 7, 8]`, the expected result is `y = 4 + 7 = 11`.
+
+The attention is expected to be highest right after the delimiters. An overview of the training is shown below, where
+the top row shows the attention map and the bottom row the ground truth. As training progresses, the model learns the
+task and the attention map converges to the ground truth.
+
+<p align="center">
+ <img src="examples/attention.gif" width="320">
+</p>
+
+### Finding max of a sequence
+
+We consider many 1D sequences of the same length. The task is to find the maximum of each sequence.
+
+We feed the full sequence, as processed by the RNN layer, to the attention layer and expect it to focus on the maximum of each sequence.
+
+After a few epochs, the attention layer converges to exactly what we expected.
+
+<p align="center">
+ <img src="examples/readme/example.png" width="320">
+</p>
+
+## References
+
+- https://www.cs.cmu.edu/~./hovy/papers/16HLT-hierarchical-attention-networks.pdf
+- https://arxiv.org/abs/1508.04025
+- https://arxiv.org/abs/1409.0473
+
+
+%package help
+Summary: Development documents and examples for attention
+Provides: python3-attention-doc
+%description help
+# Keras Attention Mechanism
+
+[![Downloads](https://pepy.tech/badge/attention)](https://pepy.tech/project/attention)
+[![Downloads](https://pepy.tech/badge/attention/month)](https://pepy.tech/project/attention)
+[![license](https://img.shields.io/badge/License-Apache_2.0-brightgreen.svg)](https://github.com/philipperemy/keras-attention-mechanism/blob/master/LICENSE) [![dep1](https://img.shields.io/badge/Tensorflow-2.0+-brightgreen.svg)](https://www.tensorflow.org/)
+![Simple Keras Attention CI](https://github.com/philipperemy/keras-attention-mechanism/workflows/Simple%20Keras%20Attention%20CI/badge.svg)
+
+Many-to-one attention mechanism for Keras.
+
+<p align="center">
+ <img src="examples/equations.png" width="600">
+</p>
+
+
+## Installation
+
+*PyPI*
+
+```bash
+pip install attention
+```
+
+## Example
+
+```python
+import numpy as np
+from tensorflow.keras import Input
+from tensorflow.keras.layers import Dense, LSTM
+from tensorflow.keras.models import load_model, Model
+
+from attention import Attention
+
+
+def main():
+ # Dummy data. There is nothing to learn in this example.
+ num_samples, time_steps, input_dim, output_dim = 100, 10, 1, 1
+ data_x = np.random.uniform(size=(num_samples, time_steps, input_dim))
+ data_y = np.random.uniform(size=(num_samples, output_dim))
+
+ # Define/compile the model.
+ model_input = Input(shape=(time_steps, input_dim))
+ x = LSTM(64, return_sequences=True)(model_input)
+ x = Attention(units=32)(x)
+ x = Dense(1)(x)
+ model = Model(model_input, x)
+ model.compile(loss='mae', optimizer='adam')
+ model.summary()
+
+ # train.
+ model.fit(data_x, data_y, epochs=10)
+
+ # test save/reload model.
+ pred1 = model.predict(data_x)
+ model.save('test_model.h5')
+ model_h5 = load_model('test_model.h5', custom_objects={'Attention': Attention})
+ pred2 = model_h5.predict(data_x)
+ np.testing.assert_almost_equal(pred1, pred2)
+ print('Success.')
+
+
+if __name__ == '__main__':
+ main()
+```
+
+## Other Examples
+
+Browse [examples](examples).
+
+Install the requirements before running the examples: `pip install -r examples/examples-requirements.txt`.
+
+
+### IMDB Dataset
+
+In this experiment, we demonstrate that using attention yields higher accuracy on the IMDB dataset. We consider two
+LSTM networks: one with this attention layer and the other with a fully connected layer. Both have the same number
+of parameters (250K) for a fair comparison.
+
+Here are the results over 10 runs. For each run, we record the maximum test-set accuracy reached within 10 epochs.
+
+
+| Measure | No Attention (250K params) | Attention (250K params) |
+| ------------- | ------------- | ------------- |
+| Max accuracy (%) | 88.22 | 88.76 |
+| Mean accuracy (%) | 87.02 | 87.62 |
+| Std. dev. of accuracy (%) | 0.18 | 0.14 |
+
+As expected, the model with attention achieves higher accuracy. It also reduces the variability between runs, which is a welcome property.
+
+
+### Adding two numbers
+
+Let's consider the task of adding two numbers that come right after some delimiters (0 in this case):
+
+For `x = [1, 2, 3, 0, 4, 5, 6, 0, 7, 8]`, the expected result is `y = 4 + 7 = 11`.
+
+The attention is expected to be highest right after the delimiters. An overview of the training is shown below, where
+the top row shows the attention map and the bottom row the ground truth. As training progresses, the model learns the
+task and the attention map converges to the ground truth.
+
+<p align="center">
+ <img src="examples/attention.gif" width="320">
+</p>
+
+### Finding max of a sequence
+
+We consider many 1D sequences of the same length. The task is to find the maximum of each sequence.
+
+We feed the full sequence, as processed by the RNN layer, to the attention layer and expect it to focus on the maximum of each sequence.
+
+After a few epochs, the attention layer converges to exactly what we expected.
+
+<p align="center">
+ <img src="examples/readme/example.png" width="320">
+</p>
+
+## References
+
+- https://www.cs.cmu.edu/~./hovy/papers/16HLT-hierarchical-attention-networks.pdf
+- https://arxiv.org/abs/1508.04025
+- https://arxiv.org/abs/1409.0473
+
+
+%prep
+%autosetup -n attention-5.0.0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+# Copy any documentation/example directories shipped in the source tree into the doc dir.
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+# Record every installed file so the package file list can be generated automatically.
+if [ -d usr/lib ]; then
+    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+# Man pages are compressed during the build, hence the .gz suffix in the doc list.
+if [ -d usr/share/man ]; then
+    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-attention -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 5.0.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..40273fd
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+31df6e2f394bbb8499b1b6d37718e8a6 attention-5.0.0.tar.gz