author	CoprDistGit <infra@openeuler.org>	2023-05-31 03:26:00 +0000
committer	CoprDistGit <infra@openeuler.org>	2023-05-31 03:26:00 +0000
commit	592288e5b69de790834ec0776600a6a088b74b48 (patch)
tree	f03b4516920822754445ff0dc56b41abb2352f7b
parent	aabeb5addd53e0b6d3696670e9c460d3291866bb (diff)
automatic import of python-keras-models
-rw-r--r--	.gitignore	1
-rw-r--r--	python-keras-models.spec	468
-rw-r--r--	sources	1
3 files changed, 470 insertions(+), 0 deletions(-)
diff --git a/.gitignore b/.gitignore
index e69de29..7bb7e6c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/keras-models-0.0.7.tar.gz
diff --git a/python-keras-models.spec b/python-keras-models.spec
new file mode 100644
index 0000000..6ac2de1
--- /dev/null
+++ b/python-keras-models.spec
@@ -0,0 +1,468 @@
+%global _empty_manifest_terminate_build 0
+Name: python-keras-models
+Version: 0.0.7
+Release: 1
+Summary: Keras Models Hub
+License: Apache License 2.0
+URL: https://github.com/Marcnuth/Keras-Models
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/d4/a5/4d1dd4a1d31c56a28e32441404c01694faa13d384f7d679987eb16a0456e/keras-models-0.0.7.tar.gz
+BuildArch: noarch
+
+Requires: python3-keras
+Requires: python3-numpy
+Requires: python3-spacy
+Requires: python3-Pillow
+Requires: python3-opencv-python
+Requires: python3-pathlib
+
+%description
+# Keras Models Hub
+
+![PyPI - Downloads](https://img.shields.io/pypi/dm/keras-models?label=PyPI)
+
+This repo aims to provide both **reusable** Keras models and **pre-trained** models, which can be easily integrated into your projects.
+
+## Install
+
+```shell
+pip install keras-models
+```
+
+If you will be using the NLP models, you need to run one more command:
+```shell
+python -m spacy download xx_ent_wiki_sm
+```
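+
+This downloads spaCy's multilingual `xx_ent_wiki_sm` model. As a quick sanity check (illustrative only, not part of this library's API), you can confirm that the model loads:
+
+```python
+# Illustrative check that the downloaded spaCy model is available (assumes spaCy is installed).
+import spacy
+
+nlp = spacy.load("xx_ent_wiki_sm")
+doc = nlp("Keras Models Hub runs on top of Keras.")
+print([(ent.text, ent.label_) for ent in doc.ents])
+```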
+
+## Usage Guide
+
+### Import
+
+```python
+import keras_models
+```
+
+
+### Examples
+
+#### Reusable Models
+
+__LinearModel__
+
+__DNN__
+
+__CNN__
+
+```python
+import numpy as np
+from keras.losses import mean_squared_error
+from keras_models.models import CNN
+
+# fake data: 500 random 100x100 RGB "images" with a synthetic regression target
+X = np.random.normal(0, 1.0, size=500 * 100 * 100 * 3).reshape(500, 100, 100, 3)
+w1 = np.random.normal(0, 1.0, size=100)
+w2 = np.random.normal(0, 1.0, size=3)
+Y = np.dot(np.dot(np.dot(X, w2), w1), w1) + np.random.randint(1)
+
+# initialize & train model
+model = CNN(input_shape=X.shape[1:], filters=[32, 64], kernel_size=(2, 2), pool_size=(3, 3), padding='same', r_dropout=0.25, num_classes=1)
+model.compile(optimizer='adam', loss=mean_squared_error, metrics=['mae', 'mse'])
+model.summary()
+
+model.fit(X, Y, batch_size=16, epochs=100, validation_split=0.1)
+```
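+
+For orientation, the sketch below shows roughly what such a convolution/pooling/dropout stack looks like when written directly against the Keras API. It is an illustrative assumption for readers new to Keras, not the actual implementation of the `CNN` wrapper.
+
+```python
+# Illustrative sketch only: an assumed plain-Keras stack comparable to the CNN example above.
+from keras.models import Sequential
+from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
+
+sketch = Sequential([
+    Conv2D(32, kernel_size=(2, 2), padding='same', activation='relu', input_shape=(100, 100, 3)),
+    MaxPooling2D(pool_size=(3, 3)),
+    Conv2D(64, kernel_size=(2, 2), padding='same', activation='relu'),
+    MaxPooling2D(pool_size=(3, 3)),
+    Dropout(0.25),   # r_dropout=0.25
+    Flatten(),
+    Dense(1),        # num_classes=1, used here as a single regression output
+])
+sketch.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae', 'mse'])
+```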
+
+__SkipGram__
+
+__WideDeep__
+
+#### Pre-trained Models
+
+__VGG16_Places365__
+> This model is forked from [GKalliatakis/Keras-VGG16-places365](https://github.com/GKalliatakis/Keras-VGG16-places365) and [CSAILVision/places365](https://github.com/CSAILVision/places365)
+
+```python
+from keras_models.models.pretrained import vgg16_places365
+labels = vgg16_places365.predict(['your_image_file_pathname.jpg', 'another.jpg'], n_top=3)
+
+# Example Result: labels = [['cafeteria', 'food_court', 'restaurant_patio'], ['beach', 'sand']]
+```
+
+
+## Models
+
+- LinearModel
+- DNN
+- WideDeep
+- TextCNN
+- TextDNN
+- SkipGram
+- ResNet
+- VGG16_Places365 [pre-trained]
+- working on more models
+
+## Citation
+
+__WideDeep__
+
+```
+Cheng H T, Koc L, Harmsen J, et al.
+Wide & deep learning for recommender systems[C]
+Proceedings of the 1st workshop on deep learning for recommender systems. ACM, 2016: 7-10.
+```
+
+__TextCNN__
+
+```
+Kim Y.
+Convolutional neural networks for sentence classification[J].
+arXiv preprint arXiv:1408.5882, 2014.
+```
+
+__SkipGram__
+
+```
+Mikolov T, Chen K, Corrado G, et al.
+Efficient estimation of word representations in vector space[J].
+arXiv preprint arXiv:1301.3781, 2013.
+```
+
+
+__VGG16_Places365__
+```
+Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., & Torralba, A.
+Places: A 10 million Image Database for Scene Recognition
+IEEE Transactions on Pattern Analysis and Machine Intelligence
+```
+
+__ResNet__
+```
+He K, Zhang X, Ren S, et al.
+Deep residual learning for image recognition[C]
+Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.
+
+```
+
+## Contribution
+
+Please submit a PR if you want to contribute, or open an issue for new model requirements.
+
+
+
+
+
+%package -n python3-keras-models
+Summary: Keras Models Hub
+Provides: python-keras-models
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-keras-models
+# Keras Models Hub
+
+![PyPI - Downloads](https://img.shields.io/pypi/dm/keras-models?label=PyPI)
+
+This repo aims to provide both **reusable** Keras models and **pre-trained** models, which can be easily integrated into your projects.
+
+## Install
+
+```shell
+pip install keras-models
+```
+
+If you will be using the NLP models, you need to run one more command:
+```shell
+python -m spacy download xx_ent_wiki_sm
+```
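+
+This downloads spaCy's multilingual `xx_ent_wiki_sm` model. As a quick sanity check (illustrative only, not part of this library's API), you can confirm that the model loads:
+
+```python
+# Illustrative check that the downloaded spaCy model is available (assumes spaCy is installed).
+import spacy
+
+nlp = spacy.load("xx_ent_wiki_sm")
+doc = nlp("Keras Models Hub runs on top of Keras.")
+print([(ent.text, ent.label_) for ent in doc.ents])
+```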
+
+## Usage Guide
+
+### Import
+
+```python
+import keras_models
+```
+
+
+### Examples
+
+#### Reusable Models
+
+__LinearModel__
+
+__DNN__
+
+__CNN__
+
+```python
+import numpy as np
+from keras.losses import mean_squared_error
+from keras_models.models import CNN
+
+# fake data: 500 random 100x100 RGB "images" with a synthetic regression target
+X = np.random.normal(0, 1.0, size=500 * 100 * 100 * 3).reshape(500, 100, 100, 3)
+w1 = np.random.normal(0, 1.0, size=100)
+w2 = np.random.normal(0, 1.0, size=3)
+Y = np.dot(np.dot(np.dot(X, w2), w1), w1) + np.random.randint(1)
+
+# initialize & train model
+model = CNN(input_shape=X.shape[1:], filters=[32, 64], kernel_size=(2, 2), pool_size=(3, 3), padding='same', r_dropout=0.25, num_classes=1)
+model.compile(optimizer='adam', loss=mean_squared_error, metrics=['mae', 'mse'])
+model.summary()
+
+model.fit(X, Y, batch_size=16, epochs=100, validation_split=0.1)
+```
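+
+For orientation, the sketch below shows roughly what such a convolution/pooling/dropout stack looks like when written directly against the Keras API. It is an illustrative assumption for readers new to Keras, not the actual implementation of the `CNN` wrapper.
+
+```python
+# Illustrative sketch only: an assumed plain-Keras stack comparable to the CNN example above.
+from keras.models import Sequential
+from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
+
+sketch = Sequential([
+    Conv2D(32, kernel_size=(2, 2), padding='same', activation='relu', input_shape=(100, 100, 3)),
+    MaxPooling2D(pool_size=(3, 3)),
+    Conv2D(64, kernel_size=(2, 2), padding='same', activation='relu'),
+    MaxPooling2D(pool_size=(3, 3)),
+    Dropout(0.25),   # r_dropout=0.25
+    Flatten(),
+    Dense(1),        # num_classes=1, used here as a single regression output
+])
+sketch.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae', 'mse'])
+```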
+
+__SkipGram__
+
+__WideDeep__
+
+#### Pre-trained Models
+
+__VGG16_Places365__
+> This model is forked from [GKalliatakis/Keras-VGG16-places365](https://github.com/GKalliatakis/Keras-VGG16-places365) and [CSAILVision/places365](https://github.com/CSAILVision/places365)
+
+```python
+from keras_models.models.pretrained import vgg16_places365
+labels = vgg16_places365.predict(['your_image_file_pathname.jpg', 'another.jpg'], n_top=3)
+
+# Example Result: labels = [['cafeteria', 'food_court', 'restaurant_patio'], ['beach', 'sand']]
+```
+
+
+## Models
+
+- LinearModel
+- DNN
+- WideDeep
+- TextCNN
+- TextDNN
+- SkipGram
+- ResNet
+- VGG16_Places365 [pre-trained]
+- working on more models
+
+## Citation
+
+__WideDeep__
+
+```
+Cheng H T, Koc L, Harmsen J, et al.
+Wide & deep learning for recommender systems[C]
+Proceedings of the 1st workshop on deep learning for recommender systems. ACM, 2016: 7-10.
+```
+
+__TextCNN__
+
+```
+Kim Y.
+Convolutional neural networks for sentence classification[J].
+arXiv preprint arXiv:1408.5882, 2014.
+```
+
+__SkipGram__
+
+```
+Mikolov T, Chen K, Corrado G, et al.
+Efficient estimation of word representations in vector space[J].
+arXiv preprint arXiv:1301.3781, 2013.
+```
+
+
+__VGG16_Places365__
+```
+Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., & Torralba, A.
+Places: A 10 million Image Database for Scene Recognition
+IEEE Transactions on Pattern Analysis and Machine Intelligence
+```
+
+__ResNet__
+```
+He K, Zhang X, Ren S, et al.
+Deep residual learning for image recognition[C]
+Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.
+
+```
+
+## Contribution
+
+Please submit a PR if you want to contribute, or open an issue for new model requirements.
+
+
+
+
+
+%package help
+Summary: Development documents and examples for keras-models
+Provides: python3-keras-models-doc
+%description help
+# Keras Models Hub
+
+![PyPI - Downloads](https://img.shields.io/pypi/dm/keras-models?label=PyPI)
+
+This repo aims to provide both **reusable** Keras models and **pre-trained** models, which can be easily integrated into your projects.
+
+## Install
+
+```shell
+pip install keras-models
+```
+
+If you will be using the NLP models, you need to run one more command:
+```shell
+python -m spacy download xx_ent_wiki_sm
+```
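+
+This downloads spaCy's multilingual `xx_ent_wiki_sm` model. As a quick sanity check (illustrative only, not part of this library's API), you can confirm that the model loads:
+
+```python
+# Illustrative check that the downloaded spaCy model is available (assumes spaCy is installed).
+import spacy
+
+nlp = spacy.load("xx_ent_wiki_sm")
+doc = nlp("Keras Models Hub runs on top of Keras.")
+print([(ent.text, ent.label_) for ent in doc.ents])
+```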
+
+## Usage Guide
+
+### Import
+
+```python
+import keras_models
+```
+
+
+### Examples
+
+#### Reusable Models
+
+__LinearModel__
+
+__DNN__
+
+__CNN__
+
+```python
+import numpy as np
+from keras.losses import mean_squared_error
+from keras_models.models import CNN
+
+# fake data: 500 random 100x100 RGB "images" with a synthetic regression target
+X = np.random.normal(0, 1.0, size=500 * 100 * 100 * 3).reshape(500, 100, 100, 3)
+w1 = np.random.normal(0, 1.0, size=100)
+w2 = np.random.normal(0, 1.0, size=3)
+Y = np.dot(np.dot(np.dot(X, w2), w1), w1) + np.random.randint(1)
+
+# initialize & train model
+model = CNN(input_shape=X.shape[1:], filters=[32, 64], kernel_size=(2, 2), pool_size=(3, 3), padding='same', r_dropout=0.25, num_classes=1)
+model.compile(optimizer='adam', loss=mean_squared_error, metrics=['mae', 'mse'])
+model.summary()
+
+model.fit(X, Y, batch_size=16, epochs=100, validation_split=0.1)
+```
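+
+For orientation, the sketch below shows roughly what such a convolution/pooling/dropout stack looks like when written directly against the Keras API. It is an illustrative assumption for readers new to Keras, not the actual implementation of the `CNN` wrapper.
+
+```python
+# Illustrative sketch only: an assumed plain-Keras stack comparable to the CNN example above.
+from keras.models import Sequential
+from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
+
+sketch = Sequential([
+    Conv2D(32, kernel_size=(2, 2), padding='same', activation='relu', input_shape=(100, 100, 3)),
+    MaxPooling2D(pool_size=(3, 3)),
+    Conv2D(64, kernel_size=(2, 2), padding='same', activation='relu'),
+    MaxPooling2D(pool_size=(3, 3)),
+    Dropout(0.25),   # r_dropout=0.25
+    Flatten(),
+    Dense(1),        # num_classes=1, used here as a single regression output
+])
+sketch.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae', 'mse'])
+```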
+
+__SkipGram__
+
+__WideDeep__
+
+#### Pre-trained Models
+
+__VGG16_Places365__
+> This model is forked from [GKalliatakis/Keras-VGG16-places365](https://github.com/GKalliatakis/Keras-VGG16-places365) and [CSAILVision/places365](https://github.com/CSAILVision/places365)
+
+```python
+from keras_models.models.pretrained import vgg16_places365
+labels = vgg16_places365.predict(['your_image_file_pathname.jpg', 'another.jpg'], n_top=3)
+
+# Example Result: labels = [['cafeteria', 'food_court', 'restaurant_patio'], ['beach', 'sand']]
+```
+
+
+## Models
+
+- LinearModel
+- DNN
+- WideDeep
+- TextCNN
+- TextDNN
+- SkipGram
+- ResNet
+- VGG16_Places365 [pre-trained]
+- working on more models
+
+## Citation
+
+__WideDeep__
+
+```
+Cheng H T, Koc L, Harmsen J, et al.
+Wide & deep learning for recommender systems[C]
+Proceedings of the 1st workshop on deep learning for recommender systems. ACM, 2016: 7-10.
+```
+
+__TextCNN__
+
+```
+Kim Y.
+Convolutional neural networks for sentence classification[J].
+arXiv preprint arXiv:1408.5882, 2014.
+```
+
+__SkipGram__
+
+```
+Mikolov T, Chen K, Corrado G, et al.
+Efficient estimation of word representations in vector space[J].
+arXiv preprint arXiv:1301.3781, 2013.
+```
+
+
+__VGG16_Places365__
+```
+Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., & Torralba, A.
+Places: A 10 million Image Database for Scene Recognition
+IEEE Transactions on Pattern Analysis and Machine Intelligence
+```
+
+__ResNet__
+```
+He K, Zhang X, Ren S, et al.
+Deep residual learning for image recognition[C]
+Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778.
+
+```
+
+## Contribution
+
+Please submit a PR if you want to contribute, or open an issue for new model requirements.
+
+
+
+
+
+%prep
+%autosetup -n keras-models-0.0.7
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-keras-models -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Wed May 31 2023 Python_Bot <Python_Bot@openeuler.org> - 0.0.7-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..c54d7ff
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+86d9269547c8a13a8c668e33c5ec3ca9 keras-models-0.0.7.tar.gz