author     CoprDistGit <infra@openeuler.org>   2023-04-10 08:25:34 +0000
committer  CoprDistGit <infra@openeuler.org>   2023-04-10 08:25:34 +0000
commit     26ae3c8c6b30038607143eb08976a78577bf12d5 (patch)
tree       5ff3afa56a9f7ee40e0b79f4e3fa7d60430c5587
parent     cd169c92034aef5d35d6dbf445e0dd654c9ccc71 (diff)

automatic import of python-flax

-rw-r--r--  .gitignore        |   1
-rw-r--r--  python-flax.spec  | 732
-rw-r--r--  sources           |   1

3 files changed, 734 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
new file mode 100644
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+/flax-0.6.8.tar.gz
diff --git a/python-flax.spec b/python-flax.spec
new file mode 100644
index 0000000..840f3fc
--- /dev/null
+++ b/python-flax.spec
@@ -0,0 +1,732 @@
+%global _empty_manifest_terminate_build 0
+Name:           python-flax
+Version:        0.6.8
+Release:        1
+Summary:        Flax: A neural network library for JAX designed for flexibility
+License:        Apache-2.0
+URL:            https://github.com/google/flax
+Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/dc/94/efee7afbcfdec16910f3b6bcc76ed5ed850c44e1b69630e2620a4faaf6c9/flax-0.6.8.tar.gz
+BuildArch:      noarch
+
+Requires:       python3-numpy
+Requires:       python3-jax
+Requires:       python3-msgpack
+Requires:       python3-optax
+Requires:       python3-orbax
+Requires:       python3-tensorstore
+Requires:       python3-rich
+Requires:       python3-typing-extensions
+Requires:       python3-PyYAML
+Requires:       python3-matplotlib
+Requires:       python3-atari-py
+Requires:       python3-clu
+Requires:       python3-einops
+Requires:       python3-gym
+Requires:       python3-jaxlib
+Requires:       python3-jraph
+Requires:       python3-ml-collections
+Requires:       python3-mypy
+Requires:       python3-opencv-python
+Requires:       python3-pytest
+Requires:       python3-pytest-cov
+Requires:       python3-pytest-custom-exit-code
+Requires:       python3-pytest-xdist
+Requires:       python3-pytype
+Requires:       python3-sentencepiece
+Requires:       python3-tensorflow-text
+Requires:       python3-tensorflow-datasets
+Requires:       python3-tensorflow
+Requires:       python3-torch
+Requires:       python3-nbstripout
+
+%description
+<div align="center">
+<img src="https://raw.githubusercontent.com/google/flax/main/images/flax_logo_250px.png" alt="logo">
+</div>
+
+# Flax: A neural network library and ecosystem for JAX designed for flexibility
+
+[coverage](https://codecov.io/github/google/flax)
+
+[**Overview**](#overview)
+| [**Quick install**](#quick-install)
+| [**What does Flax look like?**](#what-does-flax-look-like)
+| [**Documentation**](https://flax.readthedocs.io/)
+
+This README is a very short intro. **To learn everything you need to know about Flax, refer to our [full documentation](https://flax.readthedocs.io/).**
+
+Flax was originally started by engineers and researchers within the Brain Team in Google Research (in close collaboration with the JAX team), and is now developed jointly with the open source community.
+
+Flax is being used by a growing
+community of hundreds of folks in various Alphabet research departments
+for their daily work, as well as a [growing community
+of open source
+projects](https://github.com/google/flax/network/dependents?dependent_type=REPOSITORY).
+
+The Flax team's mission is to serve the growing JAX neural network
+research ecosystem -- both within Alphabet and with the broader community,
+and to explore the use cases where JAX shines. We use GitHub for almost
+all of our coordination and planning, as well as where we discuss
+upcoming design changes. We welcome feedback on any of our discussion,
+issue and pull request threads. We are in the process of moving some
+remaining internal design docs and conversation threads to GitHub
+discussions, issues and pull requests, and we hope to engage
+increasingly with the needs of the broader ecosystem. Please let
+us know how we can help!
+
+Please report any feature requests,
+issues, questions or concerns in our [discussion
+forum](https://github.com/google/flax/discussions), or just let us
+know what you're working on!
+
+We expect to improve Flax, but we don't anticipate significant
+breaking changes to the core API. We use [Changelog](https://github.com/google/flax/tree/main/CHANGELOG.md)
+entries and deprecation warnings when possible.
+
+In case you want to reach us directly, we're at flax-dev@google.com.
+
+## Overview
+
+Flax is a high-performance neural network library and ecosystem for
+JAX that is **designed for flexibility**:
+try new forms of training by forking an example and modifying the training
+loop, not by adding features to a framework.
+
+Flax is being developed in close collaboration with the JAX team and
+comes with everything you need to start your research, including:
+
+* **Neural network API** (`flax.linen`): Dense, Conv, {Batch|Layer|Group} Norm, Attention, Pooling, {LSTM|GRU} Cell, Dropout
+
+* **Utilities and patterns**: replicated training, serialization and checkpointing, metrics, prefetching on device
+
+* **Educational examples** that work out of the box: MNIST, LSTM seq2seq, Graph Neural Networks, Sequence Tagging
+
+* **Fast, tuned large-scale end-to-end examples**: CIFAR10, ResNet on ImageNet, Transformer LM1b
+
+## Quick install
+
+You will need Python 3.6 or later, and a working [JAX](https://github.com/google/jax/blob/main/README.md)
+installation (with or without GPU support; refer to [the instructions](https://github.com/google/jax/blob/main/README.md)).
+For a CPU-only version of JAX:
+
+```
+pip install --upgrade pip  # To support manylinux2010 wheels.
+pip install --upgrade jax jaxlib  # CPU-only
+```
+
+Then, install Flax from PyPI:
+
+```
+pip install flax
+```
+
+To upgrade to the latest version of Flax, you can use:
+
+```
+pip install --upgrade git+https://github.com/google/flax.git
+```
+
+To install additional dependencies (like `matplotlib`) that some examples
+require but that are not installed by default, you can use:
+
+```bash
+pip install flax[all]
+```
+
+## What does Flax look like?
+
+We provide three examples using the Flax API: a simple multi-layer perceptron, a CNN and an auto-encoder.
+
+To learn more about the `Module` abstraction, check out our [docs](https://flax.readthedocs.io/) and our [broad intro to the Module abstraction](https://github.com/google/flax/blob/main/docs/notebooks/linen_intro.ipynb). For additional concrete demonstrations of best practices, refer to our
+[guides](https://flax.readthedocs.io/en/latest/guides/index.html) and
+[developer notes](https://flax.readthedocs.io/en/latest/developer_notes/index.html).
+
+```py
+from typing import Sequence
+
+import numpy as np
+import jax
+import jax.numpy as jnp
+import flax.linen as nn
+
+class MLP(nn.Module):
+  features: Sequence[int]
+
+  @nn.compact
+  def __call__(self, x):
+    for feat in self.features[:-1]:
+      x = nn.relu(nn.Dense(feat)(x))
+    x = nn.Dense(self.features[-1])(x)
+    return x
+
+model = MLP([12, 8, 4])
+batch = jnp.ones((32, 10))
+variables = model.init(jax.random.PRNGKey(0), batch)
+output = model.apply(variables, batch)
+```
+
+```py
+class CNN(nn.Module):
+  @nn.compact
+  def __call__(self, x):
+    x = nn.Conv(features=32, kernel_size=(3, 3))(x)
+    x = nn.relu(x)
+    x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
+    x = nn.Conv(features=64, kernel_size=(3, 3))(x)
+    x = nn.relu(x)
+    x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
+    x = x.reshape((x.shape[0], -1))  # flatten
+    x = nn.Dense(features=256)(x)
+    x = nn.relu(x)
+    x = nn.Dense(features=10)(x)
+    x = nn.log_softmax(x)
+    return x
+
+model = CNN()
+batch = jnp.ones((32, 64, 64, 10))  # (N, H, W, C) format
+variables = model.init(jax.random.PRNGKey(0), batch)
+output = model.apply(variables, batch)
+```
+
+```py
+class AutoEncoder(nn.Module):
+  encoder_widths: Sequence[int]
+  decoder_widths: Sequence[int]
+  input_shape: Sequence[int]
+
+  def setup(self):
+    input_dim = np.prod(self.input_shape)
+    self.encoder = MLP(self.encoder_widths)
+    self.decoder = MLP(self.decoder_widths + (input_dim,))
+
+  def __call__(self, x):
+    return self.decode(self.encode(x))
+
+  def encode(self, x):
+    assert x.shape[1:] == self.input_shape
+    return self.encoder(jnp.reshape(x, (x.shape[0], -1)))
+
+  def decode(self, z):
+    z = self.decoder(z)
+    x = nn.sigmoid(z)
+    x = jnp.reshape(x, (x.shape[0],) + self.input_shape)
+    return x
+
+model = AutoEncoder(encoder_widths=[20, 10, 5],
+                    decoder_widths=[5, 10, 20],
+                    input_shape=(12,))
+batch = jnp.ones((16, 12))
+variables = model.init(jax.random.PRNGKey(0), batch)
+encoded = model.apply(variables, batch, method=model.encode)
+decoded = model.apply(variables, encoded, method=model.decode)
+```
+
+## 🤗 Hugging Face
+
+Detailed examples for training and evaluating a variety of Flax models for
+Natural Language Processing, Computer Vision, and Speech Recognition are
+actively maintained in the [🤗 Transformers repository](https://github.com/huggingface/transformers/tree/master/examples/flax).
+
+As of October 2021, the [19 most-used Transformer architectures](https://huggingface.co/transformers/#supported-frameworks) are supported in Flax
+and over 5000 pretrained checkpoints in Flax have been uploaded to the [🤗 Hub](https://huggingface.co/models?library=jax&sort=downloads).
+
+## Citing Flax
+
+To cite this repository:
+
+```
+@software{flax2020github,
+  author = {Jonathan Heek and Anselm Levskaya and Avital Oliver and Marvin Ritter and Bertrand Rondepierre and Andreas Steiner and Marc van {Z}ee},
+  title = {{F}lax: A neural network library and ecosystem for {JAX}},
+  url = {http://github.com/google/flax},
+  version = {0.6.8},
+  year = {2023},
+}
+```
+
+In the above BibTeX entry, names are in alphabetical order, the version number
+is intended to be that from [flax/version.py](https://github.com/google/flax/blob/main/flax/version.py), and the year corresponds to the project's open-source release.
+
+## Note
+
+Flax is an open source project maintained by a dedicated team in Google Research, but is not an official Google product.
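+
+The snippets above only define and initialize models. As a minimal sketch of
+the training-loop pattern the overview describes -- one plain SGD step with
+`optax` on the `MLP` defined above, where the toy data and squared-error loss
+are illustrative assumptions rather than upstream code:
+
+```py
+import jax
+import jax.numpy as jnp
+import optax
+
+mlp = MLP([12, 8, 4])             # the MLP class from the first example above
+x = jnp.ones((32, 10))            # toy inputs (assumed shapes)
+y = jnp.zeros((32, 4))            # toy targets, matching the last layer width
+params = mlp.init(jax.random.PRNGKey(0), x)
+
+tx = optax.sgd(learning_rate=1e-3)
+opt_state = tx.init(params)
+
+def loss_fn(params):
+  pred = mlp.apply(params, x)
+  return jnp.mean((pred - y) ** 2)  # mean squared error
+
+@jax.jit
+def train_step(params, opt_state):
+  # Differentiate the loss, then let optax turn gradients into updates.
+  loss, grads = jax.value_and_grad(loss_fn)(params)
+  updates, opt_state = tx.update(grads, opt_state)
+  return optax.apply_updates(params, updates), opt_state, loss
+
+params, opt_state, loss = train_step(params, opt_state)
+```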
+
+
+%package -n python3-flax
+Summary:        Flax: A neural network library for JAX designed for flexibility
+Provides:       python-flax
+BuildRequires:  python3-devel
+BuildRequires:  python3-setuptools
+BuildRequires:  python3-pip
+%description -n python3-flax
+<div align="center">
+<img src="https://raw.githubusercontent.com/google/flax/main/images/flax_logo_250px.png" alt="logo">
+</div>
+
+# Flax: A neural network library and ecosystem for JAX designed for flexibility
+
+[coverage](https://codecov.io/github/google/flax)
+
+[**Overview**](#overview)
+| [**Quick install**](#quick-install)
+| [**What does Flax look like?**](#what-does-flax-look-like)
+| [**Documentation**](https://flax.readthedocs.io/)
+
+This README is a very short intro. **To learn everything you need to know about Flax, refer to our [full documentation](https://flax.readthedocs.io/).**
+
+Flax was originally started by engineers and researchers within the Brain Team in Google Research (in close collaboration with the JAX team), and is now developed jointly with the open source community.
+
+Flax is being used by a growing
+community of hundreds of folks in various Alphabet research departments
+for their daily work, as well as a [growing community
+of open source
+projects](https://github.com/google/flax/network/dependents?dependent_type=REPOSITORY).
+
+The Flax team's mission is to serve the growing JAX neural network
+research ecosystem -- both within Alphabet and with the broader community,
+and to explore the use cases where JAX shines. We use GitHub for almost
+all of our coordination and planning, as well as where we discuss
+upcoming design changes. We welcome feedback on any of our discussion,
+issue and pull request threads. We are in the process of moving some
+remaining internal design docs and conversation threads to GitHub
+discussions, issues and pull requests, and we hope to engage
+increasingly with the needs of the broader ecosystem. Please let
+us know how we can help!
+
+Please report any feature requests,
+issues, questions or concerns in our [discussion
+forum](https://github.com/google/flax/discussions), or just let us
+know what you're working on!
+
+We expect to improve Flax, but we don't anticipate significant
+breaking changes to the core API. We use [Changelog](https://github.com/google/flax/tree/main/CHANGELOG.md)
+entries and deprecation warnings when possible.
+
+In case you want to reach us directly, we're at flax-dev@google.com.
+
+## Overview
+
+Flax is a high-performance neural network library and ecosystem for
+JAX that is **designed for flexibility**:
+try new forms of training by forking an example and modifying the training
+loop, not by adding features to a framework.
+
+Flax is being developed in close collaboration with the JAX team and
+comes with everything you need to start your research, including:
+
+* **Neural network API** (`flax.linen`): Dense, Conv, {Batch|Layer|Group} Norm, Attention, Pooling, {LSTM|GRU} Cell, Dropout
+
+* **Utilities and patterns**: replicated training, serialization and checkpointing, metrics, prefetching on device
+
+* **Educational examples** that work out of the box: MNIST, LSTM seq2seq, Graph Neural Networks, Sequence Tagging
+
+* **Fast, tuned large-scale end-to-end examples**: CIFAR10, ResNet on ImageNet, Transformer LM1b
+
+## Quick install
+
+You will need Python 3.6 or later, and a working [JAX](https://github.com/google/jax/blob/main/README.md)
+installation (with or without GPU support; refer to [the instructions](https://github.com/google/jax/blob/main/README.md)).
+For a CPU-only version of JAX:
+
+```
+pip install --upgrade pip  # To support manylinux2010 wheels.
+pip install --upgrade jax jaxlib  # CPU-only
+```
+
+Then, install Flax from PyPI:
+
+```
+pip install flax
+```
+
+To upgrade to the latest version of Flax, you can use:
+
+```
+pip install --upgrade git+https://github.com/google/flax.git
+```
+
+To install additional dependencies (like `matplotlib`) that some examples
+require but that are not installed by default, you can use:
+
+```bash
+pip install flax[all]
+```
+
+## What does Flax look like?
+
+We provide three examples using the Flax API: a simple multi-layer perceptron, a CNN and an auto-encoder.
+
+To learn more about the `Module` abstraction, check out our [docs](https://flax.readthedocs.io/) and our [broad intro to the Module abstraction](https://github.com/google/flax/blob/main/docs/notebooks/linen_intro.ipynb). For additional concrete demonstrations of best practices, refer to our
+[guides](https://flax.readthedocs.io/en/latest/guides/index.html) and
+[developer notes](https://flax.readthedocs.io/en/latest/developer_notes/index.html).
+
+```py
+from typing import Sequence
+
+import numpy as np
+import jax
+import jax.numpy as jnp
+import flax.linen as nn
+
+class MLP(nn.Module):
+  features: Sequence[int]
+
+  @nn.compact
+  def __call__(self, x):
+    for feat in self.features[:-1]:
+      x = nn.relu(nn.Dense(feat)(x))
+    x = nn.Dense(self.features[-1])(x)
+    return x
+
+model = MLP([12, 8, 4])
+batch = jnp.ones((32, 10))
+variables = model.init(jax.random.PRNGKey(0), batch)
+output = model.apply(variables, batch)
+```
+
+```py
+class CNN(nn.Module):
+  @nn.compact
+  def __call__(self, x):
+    x = nn.Conv(features=32, kernel_size=(3, 3))(x)
+    x = nn.relu(x)
+    x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
+    x = nn.Conv(features=64, kernel_size=(3, 3))(x)
+    x = nn.relu(x)
+    x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
+    x = x.reshape((x.shape[0], -1))  # flatten
+    x = nn.Dense(features=256)(x)
+    x = nn.relu(x)
+    x = nn.Dense(features=10)(x)
+    x = nn.log_softmax(x)
+    return x
+
+model = CNN()
+batch = jnp.ones((32, 64, 64, 10))  # (N, H, W, C) format
+variables = model.init(jax.random.PRNGKey(0), batch)
+output = model.apply(variables, batch)
+```
+
+```py
+class AutoEncoder(nn.Module):
+  encoder_widths: Sequence[int]
+  decoder_widths: Sequence[int]
+  input_shape: Sequence[int]
+
+  def setup(self):
+    input_dim = np.prod(self.input_shape)
+    self.encoder = MLP(self.encoder_widths)
+    self.decoder = MLP(self.decoder_widths + (input_dim,))
+
+  def __call__(self, x):
+    return self.decode(self.encode(x))
+
+  def encode(self, x):
+    assert x.shape[1:] == self.input_shape
+    return self.encoder(jnp.reshape(x, (x.shape[0], -1)))
+
+  def decode(self, z):
+    z = self.decoder(z)
+    x = nn.sigmoid(z)
+    x = jnp.reshape(x, (x.shape[0],) + self.input_shape)
+    return x
+
+model = AutoEncoder(encoder_widths=[20, 10, 5],
+                    decoder_widths=[5, 10, 20],
+                    input_shape=(12,))
+batch = jnp.ones((16, 12))
+variables = model.init(jax.random.PRNGKey(0), batch)
+encoded = model.apply(variables, batch, method=model.encode)
+decoded = model.apply(variables, encoded, method=model.decode)
+```
+
+## 🤗 Hugging Face
+
+Detailed examples for training and evaluating a variety of Flax models for
+Natural Language Processing, Computer Vision, and Speech Recognition are
+actively maintained in the [🤗 Transformers repository](https://github.com/huggingface/transformers/tree/master/examples/flax).
+
+As of October 2021, the [19 most-used Transformer architectures](https://huggingface.co/transformers/#supported-frameworks) are supported in Flax
+and over 5000 pretrained checkpoints in Flax have been uploaded to the [🤗 Hub](https://huggingface.co/models?library=jax&sort=downloads).
+
+## Citing Flax
+
+To cite this repository:
+
+```
+@software{flax2020github,
+  author = {Jonathan Heek and Anselm Levskaya and Avital Oliver and Marvin Ritter and Bertrand Rondepierre and Andreas Steiner and Marc van {Z}ee},
+  title = {{F}lax: A neural network library and ecosystem for {JAX}},
+  url = {http://github.com/google/flax},
+  version = {0.6.8},
+  year = {2023},
+}
+```
+
+In the above BibTeX entry, names are in alphabetical order, the version number
+is intended to be that from [flax/version.py](https://github.com/google/flax/blob/main/flax/version.py), and the year corresponds to the project's open-source release.
+
+## Note

+Flax is an open source project maintained by a dedicated team in Google Research, but is not an official Google product.
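+
+The **serialization and checkpointing** utilities mentioned under "Utilities
+and patterns" above can be exercised directly. A minimal sketch that
+round-trips a variable pytree through msgpack bytes with `flax.serialization`
+(the `nn.Dense` module and shapes here are arbitrary illustrations):
+
+```py
+import jax
+import jax.numpy as jnp
+import flax.linen as nn
+from flax import serialization
+
+model = nn.Dense(features=4)
+variables = model.init(jax.random.PRNGKey(0), jnp.ones((1, 8)))
+
+# to_bytes encodes the pytree as msgpack; from_bytes needs a template
+# pytree of the same structure to restore into.
+raw = serialization.to_bytes(variables)
+restored = serialization.from_bytes(variables, raw)
+```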
+
+
+%package help
+Summary:        Development documents and examples for flax
+Provides:       python3-flax-doc
+%description help
+<div align="center">
+<img src="https://raw.githubusercontent.com/google/flax/main/images/flax_logo_250px.png" alt="logo">
+</div>
+
+# Flax: A neural network library and ecosystem for JAX designed for flexibility
+
+[coverage](https://codecov.io/github/google/flax)
+
+[**Overview**](#overview)
+| [**Quick install**](#quick-install)
+| [**What does Flax look like?**](#what-does-flax-look-like)
+| [**Documentation**](https://flax.readthedocs.io/)
+
+This README is a very short intro. **To learn everything you need to know about Flax, refer to our [full documentation](https://flax.readthedocs.io/).**
+
+Flax was originally started by engineers and researchers within the Brain Team in Google Research (in close collaboration with the JAX team), and is now developed jointly with the open source community.
+
+Flax is being used by a growing
+community of hundreds of folks in various Alphabet research departments
+for their daily work, as well as a [growing community
+of open source
+projects](https://github.com/google/flax/network/dependents?dependent_type=REPOSITORY).
+
+The Flax team's mission is to serve the growing JAX neural network
+research ecosystem -- both within Alphabet and with the broader community,
+and to explore the use cases where JAX shines. We use GitHub for almost
+all of our coordination and planning, as well as where we discuss
+upcoming design changes. We welcome feedback on any of our discussion,
+issue and pull request threads. We are in the process of moving some
+remaining internal design docs and conversation threads to GitHub
+discussions, issues and pull requests, and we hope to engage
+increasingly with the needs of the broader ecosystem. Please let
+us know how we can help!
+
+Please report any feature requests,
+issues, questions or concerns in our [discussion
+forum](https://github.com/google/flax/discussions), or just let us
+know what you're working on!
+
+We expect to improve Flax, but we don't anticipate significant
+breaking changes to the core API. We use [Changelog](https://github.com/google/flax/tree/main/CHANGELOG.md)
+entries and deprecation warnings when possible.
+
+In case you want to reach us directly, we're at flax-dev@google.com.
+
+## Overview
+
+Flax is a high-performance neural network library and ecosystem for
+JAX that is **designed for flexibility**:
+try new forms of training by forking an example and modifying the training
+loop, not by adding features to a framework.
+
+Flax is being developed in close collaboration with the JAX team and
+comes with everything you need to start your research, including:
+
+* **Neural network API** (`flax.linen`): Dense, Conv, {Batch|Layer|Group} Norm, Attention, Pooling, {LSTM|GRU} Cell, Dropout
+
+* **Utilities and patterns**: replicated training, serialization and checkpointing, metrics, prefetching on device
+
+* **Educational examples** that work out of the box: MNIST, LSTM seq2seq, Graph Neural Networks, Sequence Tagging
+
+* **Fast, tuned large-scale end-to-end examples**: CIFAR10, ResNet on ImageNet, Transformer LM1b
+
+## Quick install
+
+You will need Python 3.6 or later, and a working [JAX](https://github.com/google/jax/blob/main/README.md)
+installation (with or without GPU support; refer to [the instructions](https://github.com/google/jax/blob/main/README.md)).
+For a CPU-only version of JAX:
+
+```
+pip install --upgrade pip  # To support manylinux2010 wheels.
+pip install --upgrade jax jaxlib  # CPU-only
+```
+
+Then, install Flax from PyPI:
+
+```
+pip install flax
+```
+
+To upgrade to the latest version of Flax, you can use:
+
+```
+pip install --upgrade git+https://github.com/google/flax.git
+```
+
+To install additional dependencies (like `matplotlib`) that some examples
+require but that are not installed by default, you can use:
+
+```bash
+pip install flax[all]
+```
+
+## What does Flax look like?
+
+We provide three examples using the Flax API: a simple multi-layer perceptron, a CNN and an auto-encoder.
+
+To learn more about the `Module` abstraction, check out our [docs](https://flax.readthedocs.io/) and our [broad intro to the Module abstraction](https://github.com/google/flax/blob/main/docs/notebooks/linen_intro.ipynb). For additional concrete demonstrations of best practices, refer to our
+[guides](https://flax.readthedocs.io/en/latest/guides/index.html) and
+[developer notes](https://flax.readthedocs.io/en/latest/developer_notes/index.html).
+
+```py
+from typing import Sequence
+
+import numpy as np
+import jax
+import jax.numpy as jnp
+import flax.linen as nn
+
+class MLP(nn.Module):
+  features: Sequence[int]
+
+  @nn.compact
+  def __call__(self, x):
+    for feat in self.features[:-1]:
+      x = nn.relu(nn.Dense(feat)(x))
+    x = nn.Dense(self.features[-1])(x)
+    return x
+
+model = MLP([12, 8, 4])
+batch = jnp.ones((32, 10))
+variables = model.init(jax.random.PRNGKey(0), batch)
+output = model.apply(variables, batch)
+```
+
+```py
+class CNN(nn.Module):
+  @nn.compact
+  def __call__(self, x):
+    x = nn.Conv(features=32, kernel_size=(3, 3))(x)
+    x = nn.relu(x)
+    x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
+    x = nn.Conv(features=64, kernel_size=(3, 3))(x)
+    x = nn.relu(x)
+    x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2))
+    x = x.reshape((x.shape[0], -1))  # flatten
+    x = nn.Dense(features=256)(x)
+    x = nn.relu(x)
+    x = nn.Dense(features=10)(x)
+    x = nn.log_softmax(x)
+    return x
+
+model = CNN()
+batch = jnp.ones((32, 64, 64, 10))  # (N, H, W, C) format
+variables = model.init(jax.random.PRNGKey(0), batch)
+output = model.apply(variables, batch)
+```
+
+```py
+class AutoEncoder(nn.Module):
+  encoder_widths: Sequence[int]
+  decoder_widths: Sequence[int]
+  input_shape: Sequence[int]
+
+  def setup(self):
+    input_dim = np.prod(self.input_shape)
+    self.encoder = MLP(self.encoder_widths)
+    self.decoder = MLP(self.decoder_widths + (input_dim,))
+
+  def __call__(self, x):
+    return self.decode(self.encode(x))
+
+  def encode(self, x):
+    assert x.shape[1:] == self.input_shape
+    return self.encoder(jnp.reshape(x, (x.shape[0], -1)))
+
+  def decode(self, z):
+    z = self.decoder(z)
+    x = nn.sigmoid(z)
+    x = jnp.reshape(x, (x.shape[0],) + self.input_shape)
+    return x
+
+model = AutoEncoder(encoder_widths=[20, 10, 5],
+                    decoder_widths=[5, 10, 20],
+                    input_shape=(12,))
+batch = jnp.ones((16, 12))
+variables = model.init(jax.random.PRNGKey(0), batch)
+encoded = model.apply(variables, batch, method=model.encode)
+decoded = model.apply(variables, encoded, method=model.decode)
+```
+
+## 🤗 Hugging Face
+
+Detailed examples for training and evaluating a variety of Flax models for
+Natural Language Processing, Computer Vision, and Speech Recognition are
+actively maintained in the [🤗 Transformers repository](https://github.com/huggingface/transformers/tree/master/examples/flax).
+
+As of October 2021, the [19 most-used Transformer architectures](https://huggingface.co/transformers/#supported-frameworks) are supported in Flax
+and over 5000 pretrained checkpoints in Flax have been uploaded to the [🤗 Hub](https://huggingface.co/models?library=jax&sort=downloads).
+
+## Citing Flax
+
+To cite this repository:
+
+```
+@software{flax2020github,
+  author = {Jonathan Heek and Anselm Levskaya and Avital Oliver and Marvin Ritter and Bertrand Rondepierre and Andreas Steiner and Marc van {Z}ee},
+  title = {{F}lax: A neural network library and ecosystem for {JAX}},
+  url = {http://github.com/google/flax},
+  version = {0.6.8},
+  year = {2023},
+}
+```
+
+In the above BibTeX entry, names are in alphabetical order, the version number
+is intended to be that from [flax/version.py](https://github.com/google/flax/blob/main/flax/version.py), and the year corresponds to the project's open-source release.
+
+## Note
+
+Flax is an open source project maintained by a dedicated team in Google Research, but is not an official Google product.
+
+
+%prep
+%autosetup -n flax-0.6.8
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-flax -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 0.6.8-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+b0f0264323f2e7006c1457df90c66833 flax-0.6.8.tar.gz