%global _empty_manifest_terminate_build 0

Name:           python-larq
Version:        0.13.0
Release:        1
Summary:        An Open Source Machine Learning Library for Training Binarized Neural Networks
License:        Apache-2.0
URL:            https://larq.dev/
Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/c8/42/3f3ffc26b5fc608df47c1bf07209984ebd78b6c3a842e5f2bb8685d7a8ee/larq-0.13.0.tar.gz
BuildArch:      noarch

Requires:       python3-numpy
Requires:       python3-terminaltables
Requires:       python3-packaging
Requires:       python3-importlib-metadata
Requires:       python3-black
Requires:       python3-flake8
Requires:       python3-isort
Requires:       python3-pytype
Requires:       python3-tensorflow
Requires:       python3-tensorflow-gpu
Requires:       python3-pytest
Requires:       python3-pytest-cov
Requires:       python3-pytest-xdist
Requires:       python3-pytest-mock
Requires:       python3-snapshottest

%description
[![](https://github.com/larq/larq/workflows/Unittest/badge.svg)](https://github.com/larq/larq/actions?workflow=Unittest)
[![Codecov](https://img.shields.io/codecov/c/github/larq/larq)](https://codecov.io/github/larq/larq?branch=main)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/larq.svg)](https://pypi.org/project/larq/)
[![PyPI](https://img.shields.io/pypi/v/larq.svg)](https://pypi.org/project/larq/)
[![PyPI - License](https://img.shields.io/pypi/l/larq.svg)](https://github.com/larq/larq/blob/main/LICENSE)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01746/status.svg)](https://doi.org/10.21105/joss.01746)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)

Larq is an open-source deep learning library for training neural networks with extremely low-precision weights and activations, such as Binarized Neural Networks (BNNs).

Existing deep neural networks use 32, 16, or 8 bits to encode each weight and activation, making them large, slow, and power-hungry. This rules out many applications in resource-constrained environments. Larq is a first step towards solving this: it provides an easy-to-use, composable way to train BNNs (1 bit) and other types of Quantized Neural Networks (QNNs), and is based on the `tf.keras` interface. Note that efficient inference with a trained BNN requires an optimized inference engine; we provide these for several platforms in [Larq Compute Engine](https://github.com/larq/compute-engine).

_Larq is part of a family of libraries for BNN development; you can also check out [Larq Zoo](https://github.com/larq/zoo) for pretrained models and [Larq Compute Engine](https://github.com/larq/compute-engine) for deployment on mobile and edge devices._

## Getting Started

To build a QNN, Larq introduces the concepts of [quantized layers](https://docs.larq.dev/larq/api/layers/) and [quantizers](https://docs.larq.dev/larq/api/quantizers/). A quantizer defines both the transformation from a full-precision input to a quantized output and the pseudo-gradient method used for the backward pass. Each quantized layer takes an `input_quantizer` and a `kernel_quantizer`, which describe how the layer's incoming activations and weights are quantized, respectively. If both `input_quantizer` and `kernel_quantizer` are `None`, the layer is equivalent to a full-precision layer.

You can define a simple binarized fully-connected Keras model using the [Straight-Through Estimator](https://docs.larq.dev/larq/api/quantizers/#ste_sign) as follows:

```python
import tensorflow as tf
import larq

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(),
        larq.layers.QuantDense(
            512, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
        larq.layers.QuantDense(
            10,
            input_quantizer="ste_sign",
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
            activation="softmax",
        ),
    ]
)
```

These layers can be used inside a [Keras model](https://www.tensorflow.org/guide/keras/overview#sequential_model) or with a [custom training loop](https://www.tensorflow.org/guide/keras/train_and_evaluate#part_ii_writing_your_own_training_evaluation_loops_from_scratch).
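As an illustration, such a model trains with the standard Keras workflow. The following is a minimal end-to-end sketch, not taken from the upstream docs: it assumes MNIST as the dataset, reuses the architecture above, and wires in the usual compile/fit/evaluate calls:

```python
import tensorflow as tf
import larq

# Minimal training sketch (an illustrative assumption, not from the upstream
# docs): the binarized model above, trained on MNIST with standard Keras.
(train_images, train_labels), (test_images, test_labels) = (
    tf.keras.datasets.mnist.load_data()
)
# Scale pixels to [-1, 1], a common preprocessing choice for BNNs.
train_images = train_images / 127.5 - 1
test_images = test_images / 127.5 - 1

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        larq.layers.QuantDense(
            512, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
        larq.layers.QuantDense(
            10,
            input_quantizer="ste_sign",
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
            activation="softmax",
        ),
    ]
)

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_images, train_labels, batch_size=64, epochs=3)
model.evaluate(test_images, test_labels)
```

Scaling inputs to [-1, 1] matches the range of the binarization quantizers, which map values to {-1, +1}.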
## Examples

Check out our examples on how to train a Binarized Neural Network in just a few lines of code:

- [Introduction to BNNs with Larq](https://docs.larq.dev/larq/tutorials/mnist/)
- [BinaryNet on CIFAR10](https://docs.larq.dev/larq/tutorials/binarynet_cifar10/)

## Installation

Before installing Larq, please install:

- [Python](https://www.python.org/) version `3.7`, `3.8`, `3.9`, or `3.10`
- [TensorFlow](https://www.tensorflow.org/install) version `1.14`, `1.15`, `2.0`, `2.1`, `2.2`, `2.3`, `2.4`, `2.5`, `2.6`, `2.7`, `2.8`, `2.9`, or `2.10`:

```shell
pip install tensorflow  # or tensorflow-gpu
```

You can install Larq with Python's [pip](https://pip.pypa.io/en/stable/) package manager:

```shell
pip install larq
```

## About

Larq is being developed by a team of deep learning researchers and engineers at Plumerai to help accelerate both our own research and the general adoption of Binarized Neural Networks.

%package -n python3-larq
Summary:        An Open Source Machine Learning Library for Training Binarized Neural Networks
Provides:       python-larq
BuildRequires:  python3-devel
BuildRequires:  python3-setuptools
BuildRequires:  python3-pip

%description -n python3-larq
[![](https://github.com/larq/larq/workflows/Unittest/badge.svg)](https://github.com/larq/larq/actions?workflow=Unittest)
[![Codecov](https://img.shields.io/codecov/c/github/larq/larq)](https://codecov.io/github/larq/larq?branch=main)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/larq.svg)](https://pypi.org/project/larq/)
[![PyPI](https://img.shields.io/pypi/v/larq.svg)](https://pypi.org/project/larq/)
[![PyPI - License](https://img.shields.io/pypi/l/larq.svg)](https://github.com/larq/larq/blob/main/LICENSE)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01746/status.svg)](https://doi.org/10.21105/joss.01746)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)

Larq is an open-source deep learning library for training neural networks with extremely low-precision weights and activations, such as Binarized Neural Networks (BNNs).

Existing deep neural networks use 32, 16, or 8 bits to encode each weight and activation, making them large, slow, and power-hungry. This rules out many applications in resource-constrained environments. Larq is a first step towards solving this: it provides an easy-to-use, composable way to train BNNs (1 bit) and other types of Quantized Neural Networks (QNNs), and is based on the `tf.keras` interface. Note that efficient inference with a trained BNN requires an optimized inference engine; we provide these for several platforms in [Larq Compute Engine](https://github.com/larq/compute-engine).

_Larq is part of a family of libraries for BNN development; you can also check out [Larq Zoo](https://github.com/larq/zoo) for pretrained models and [Larq Compute Engine](https://github.com/larq/compute-engine) for deployment on mobile and edge devices._

## Getting Started

To build a QNN, Larq introduces the concepts of [quantized layers](https://docs.larq.dev/larq/api/layers/) and [quantizers](https://docs.larq.dev/larq/api/quantizers/). A quantizer defines both the transformation from a full-precision input to a quantized output and the pseudo-gradient method used for the backward pass. Each quantized layer takes an `input_quantizer` and a `kernel_quantizer`, which describe how the layer's incoming activations and weights are quantized, respectively. If both `input_quantizer` and `kernel_quantizer` are `None`, the layer is equivalent to a full-precision layer.

You can define a simple binarized fully-connected Keras model using the [Straight-Through Estimator](https://docs.larq.dev/larq/api/quantizers/#ste_sign) as follows:

```python
import tensorflow as tf
import larq

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(),
        larq.layers.QuantDense(
            512, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
        larq.layers.QuantDense(
            10,
            input_quantizer="ste_sign",
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
            activation="softmax",
        ),
    ]
)
```

These layers can be used inside a [Keras model](https://www.tensorflow.org/guide/keras/overview#sequential_model) or with a [custom training loop](https://www.tensorflow.org/guide/keras/train_and_evaluate#part_ii_writing_your_own_training_evaluation_loops_from_scratch).
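Because quantized layers behave like ordinary Keras layers, they also drop into a handwritten loop. Below is a minimal custom-training-loop sketch under assumed choices (Adam, cross-entropy from logits, random stand-in data); the pseudo-gradients defined by the quantizers are applied automatically when differentiating with `tf.GradientTape`:

```python
import tensorflow as tf
import larq

# Minimal sketch (assumed architecture and hyper-parameters): a binarized
# classifier trained with a handwritten loop instead of model.fit().
model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        larq.layers.QuantDense(
            10, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
    ]
)
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        logits = model(images, training=True)
        loss = loss_fn(labels, logits)
    # The straight-through estimator's pseudo-gradient is used automatically
    # when backpropagating through the "ste_sign" quantizer.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Smoke-test the step with random stand-in data.
images = tf.random.uniform((32, 28, 28), minval=-1.0, maxval=1.0)
labels = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
print(float(train_step(images, labels)))
```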
## Examples

Check out our examples on how to train a Binarized Neural Network in just a few lines of code:

- [Introduction to BNNs with Larq](https://docs.larq.dev/larq/tutorials/mnist/)
- [BinaryNet on CIFAR10](https://docs.larq.dev/larq/tutorials/binarynet_cifar10/)

## Installation

Before installing Larq, please install:

- [Python](https://www.python.org/) version `3.7`, `3.8`, `3.9`, or `3.10`
- [TensorFlow](https://www.tensorflow.org/install) version `1.14`, `1.15`, `2.0`, `2.1`, `2.2`, `2.3`, `2.4`, `2.5`, `2.6`, `2.7`, `2.8`, `2.9`, or `2.10`:

```shell
pip install tensorflow  # or tensorflow-gpu
```

You can install Larq with Python's [pip](https://pip.pypa.io/en/stable/) package manager:

```shell
pip install larq
```

## About

Larq is being developed by a team of deep learning researchers and engineers at Plumerai to help accelerate both our own research and the general adoption of Binarized Neural Networks.

%package help
Summary:        Development documents and examples for larq
Provides:       python3-larq-doc

%description help
[![](https://github.com/larq/larq/workflows/Unittest/badge.svg)](https://github.com/larq/larq/actions?workflow=Unittest)
[![Codecov](https://img.shields.io/codecov/c/github/larq/larq)](https://codecov.io/github/larq/larq?branch=main)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/larq.svg)](https://pypi.org/project/larq/)
[![PyPI](https://img.shields.io/pypi/v/larq.svg)](https://pypi.org/project/larq/)
[![PyPI - License](https://img.shields.io/pypi/l/larq.svg)](https://github.com/larq/larq/blob/main/LICENSE)
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01746/status.svg)](https://doi.org/10.21105/joss.01746)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/ambv/black)

Larq is an open-source deep learning library for training neural networks with extremely low-precision weights and activations, such as Binarized Neural Networks (BNNs).

Existing deep neural networks use 32, 16, or 8 bits to encode each weight and activation, making them large, slow, and power-hungry. This rules out many applications in resource-constrained environments. Larq is a first step towards solving this: it provides an easy-to-use, composable way to train BNNs (1 bit) and other types of Quantized Neural Networks (QNNs), and is based on the `tf.keras` interface. Note that efficient inference with a trained BNN requires an optimized inference engine; we provide these for several platforms in [Larq Compute Engine](https://github.com/larq/compute-engine).

_Larq is part of a family of libraries for BNN development; you can also check out [Larq Zoo](https://github.com/larq/zoo) for pretrained models and [Larq Compute Engine](https://github.com/larq/compute-engine) for deployment on mobile and edge devices._

## Getting Started

To build a QNN, Larq introduces the concepts of [quantized layers](https://docs.larq.dev/larq/api/layers/) and [quantizers](https://docs.larq.dev/larq/api/quantizers/). A quantizer defines both the transformation from a full-precision input to a quantized output and the pseudo-gradient method used for the backward pass. Each quantized layer takes an `input_quantizer` and a `kernel_quantizer`, which describe how the layer's incoming activations and weights are quantized, respectively. If both `input_quantizer` and `kernel_quantizer` are `None`, the layer is equivalent to a full-precision layer.

You can define a simple binarized fully-connected Keras model using the [Straight-Through Estimator](https://docs.larq.dev/larq/api/quantizers/#ste_sign) as follows:

```python
import tensorflow as tf
import larq

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(),
        larq.layers.QuantDense(
            512, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
        larq.layers.QuantDense(
            10,
            input_quantizer="ste_sign",
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
            activation="softmax",
        ),
    ]
)
```

These layers can be used inside a [Keras model](https://www.tensorflow.org/guide/keras/overview#sequential_model) or with a [custom training loop](https://www.tensorflow.org/guide/keras/train_and_evaluate#part_ii_writing_your_own_training_evaluation_loops_from_scratch).
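Beyond layers and quantizers, Larq provides `larq.models.summary`, which reports per-layer quantization details and the memory footprint of binarized versus full-precision parameters. A brief sketch, with an illustrative model:

```python
import tensorflow as tf
import larq

# Illustrative model; larq.models.summary() prints per-layer quantization
# details and the memory taken by 1-bit versus 32-bit parameters.
model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        larq.layers.QuantDense(
            512, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
        larq.layers.QuantDense(
            10, input_quantizer="ste_sign", kernel_quantizer="ste_sign"
        ),
    ]
)
larq.models.summary(model)
```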
## Examples

Check out our examples on how to train a Binarized Neural Network in just a few lines of code:

- [Introduction to BNNs with Larq](https://docs.larq.dev/larq/tutorials/mnist/)
- [BinaryNet on CIFAR10](https://docs.larq.dev/larq/tutorials/binarynet_cifar10/)

## Installation

Before installing Larq, please install:

- [Python](https://www.python.org/) version `3.7`, `3.8`, `3.9`, or `3.10`
- [TensorFlow](https://www.tensorflow.org/install) version `1.14`, `1.15`, `2.0`, `2.1`, `2.2`, `2.3`, `2.4`, `2.5`, `2.6`, `2.7`, `2.8`, `2.9`, or `2.10`:

```shell
pip install tensorflow  # or tensorflow-gpu
```

You can install Larq with Python's [pip](https://pip.pypa.io/en/stable/) package manager:

```shell
pip install larq
```

## About

Larq is being developed by a team of deep learning researchers and engineers at Plumerai to help accelerate both our own research and the general adoption of Binarized Neural Networks.

%prep
%autosetup -n larq-0.13.0

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-larq -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Fri May 05 2023 Python_Bot - 0.13.0-1
- Package Spec generated