%global _empty_manifest_terminate_build 0
Name:		python-jina
Version:	3.15.0
Release:	1
Summary:	Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · MLOps
License:	Apache-2.0
URL:		https://github.com/jina-ai/jina/
Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/68/fd/559d5809832dfc49615aaf5626f1ff28b1de54453a72e2056ad9f702203a/jina-3.15.0.tar.gz
BuildArch:	noarch

%description

Jina: Streamline AI & ML Product Delivery

### Build AI Services

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb)

Let's build a fast, reliable and scalable gRPC-based AI service. In Jina we call this an **[Executor](https://docs.jina.ai/concepts/executor/)**. Our simple Executor will use Facebook's mBART-50 model to translate French to English. We'll then use a **Deployment** to serve it.

> **Note**
> A Deployment serves just one Executor. To combine multiple Executors into a pipeline and serve that, use a [Flow](#build-a-pipeline).

> **Note**
> Run the [code in Colab](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb#scrollTo=0l-lkmz4H-jW) to install all dependencies.
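If you'd rather run outside Colab, the snippet below needs a handful of packages. A hedged install sketch; the exact set is inferred from the imports, not taken from jina's own requirements:

```shell
pip install jina docarray transformers
# mBART-50 additionally needs a PyTorch backend and the SentencePiece
# tokenizer; both are assumed here rather than pulled in by jina itself
pip install torch sentencepiece
```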
Let's implement the service's logic in `translate_executor.py`:
```python
from docarray import DocumentArray
from jina import Executor, requests
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


class Translator(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.tokenizer = AutoTokenizer.from_pretrained(
            "facebook/mbart-large-50-many-to-many-mmt", src_lang="fr_XX"
        )
        self.model = AutoModelForSeq2SeqLM.from_pretrained(
            "facebook/mbart-large-50-many-to-many-mmt"
        )

    @requests
    def translate(self, docs: DocumentArray, **kwargs):
        for doc in docs:
            doc.text = self._translate(doc.text)

    def _translate(self, text):
        encoded_en = self.tokenizer(text, return_tensors="pt")
        generated_tokens = self.model.generate(
            **encoded_en, forced_bos_token_id=self.tokenizer.lang_code_to_id["en_XX"]
        )
        return self.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[
            0
        ]
```
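Because an Executor is just a Python class, its logic can be sanity-checked without starting a server. A minimal sketch; the sample text is illustrative, and the model weights download on first use:

```python
from docarray import Document, DocumentArray
from translate_executor import Translator

# instantiate the Executor directly and call its request handler, no server involved
translator = Translator()
docs = DocumentArray([Document(text='Bonjour le monde')])
translator.translate(docs)
print(docs[0].text)  # the English translation written back onto the Document
```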
Then we deploy it with either the Python API or YAML:
Python API (`deployment.py`):

```python
from jina import Deployment
from translate_executor import Translator

with Deployment(uses=Translator, timeout_ready=-1) as dep:
    dep.block()
```

YAML (`deployment.yml`):

```yaml
jtype: Deployment
with:
  uses: Translator
  py_modules:
    - translate_executor.py  # name of the module containing Translator
  timeout_ready: -1
```

And run the YAML Deployment with the CLI: `jina deployment --uses deployment.yml`
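In a notebook or other interactive session, `dep.block()` would block the kernel. A minimal sketch of an alternative, assuming the fixed port below: keep the Deployment alive with the context manager and query it from the same process:

```python
from docarray import Document
from jina import Client, Deployment
from translate_executor import Translator

# serve and query in one process: the Deployment stays up for the
# duration of the with-block, so no call to dep.block() is needed
with Deployment(uses=Translator, timeout_ready=-1, port=12345) as dep:
    client = Client(port=12345)
    response = client.post(on='/', inputs=[Document(text='Bonjour le monde')])
    print(response[0].text)
```

Either way, once the service starts, the console prints an endpoint banner: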
```text
──────────────────────── 🎉 Deployment is ready to serve! ─────────────────────────
╭────────────── 🔗 Endpoint ───────────────╮
│  ⛓      Protocol                    GRPC │
│  🏠        Local           0.0.0.0:12345 │
│  🔒      Private       172.28.0.12:12345 │
│  🌍       Public     35.230.97.208:12345 │
╰──────────────────────────────────────────╯
```

Use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the service:

```python
from docarray import Document
from jina import Client

french_text = Document(
    text='un astronaut est en train de faire une promenade dans un parc'
)

client = Client(port=12345)  # use port from output above
response = client.post(on='/', inputs=[french_text])

print(response[0].text)
```

```text
an astronaut is walking in a park
```

> **Note**
> In a notebook, you can't use `deployment.block()` and then make requests to the client. Please refer to the Colab link above for reproducible Jupyter Notebook code snippets.

### Build a pipeline

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb#scrollTo=YfNm1nScH30U)

Sometimes you want to chain microservices together into a pipeline. That's where a [Flow](https://docs.jina.ai/concepts/flow/) comes in. A Flow is a [DAG](https://de.wikipedia.org/wiki/DAG) pipeline composed of a set of steps. It orchestrates a set of [Executors](https://docs.jina.ai/concepts/executor/) and a [Gateway](https://docs.jina.ai/concepts/gateway/) to offer an end-to-end service.

> **Note**
> If you just want to serve a single Executor, you can use a [Deployment](#build-ai--ml-services).

For instance, let's combine [our French translation service](#build-ai--ml-services) with a Stable Diffusion image generation service from Jina AI's [Executor Hub](https://cloud.jina.ai/executors). Chaining these services together into a [Flow](https://docs.jina.ai/concepts/flow/) will give us a multilingual image generation service.

Build the Flow with either Python or YAML:
Python API (`flow.py`):

```python
from jina import Flow

flow = (
    Flow()
    .add(uses=Translator, timeout_ready=-1)
    .add(
        uses='jinaai://jina-ai/TextToImage',
        timeout_ready=-1,
        install_requirements=True,
    )  # use the Executor from Executor Hub
)

with flow:
    flow.block()
```

YAML (`flow.yml`):

```yaml
jtype: Flow
executors:
  - uses: Translator
    timeout_ready: -1
    py_modules:
      - translate_executor.py
  - uses: jinaai://jina-ai/TextToImage
    timeout_ready: -1
    install_requirements: true
```

Then run the YAML Flow with the CLI: `jina flow --uses flow.yml`
```text
───────────────────────────── 🎉 Flow is ready to serve! ─────────────────────────────
╭────────────── 🔗 Endpoint ───────────────╮
│  ⛓      Protocol                    GRPC │
│  🏠        Local           0.0.0.0:12345 │
│  🔒      Private       172.28.0.12:12345 │
│  🌍       Public     35.240.201.66:12345 │
╰──────────────────────────────────────────╯
```

Then, use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the Flow:

```python
from jina import Client, Document

client = Client(port=12345)  # use port from output above
french_text = Document(
    text='un astronaut est en train de faire une promenade dans un parc'
)
response = client.post(on='/', inputs=[french_text])

response[0].display()
```

![stable-diffusion-output.png](https://raw.githubusercontent.com/jina-ai/jina/master/.github/stable-diffusion-output.png)

You can also deploy a Flow to JCloud. First, turn the `flow.yml` file into a [JCloud-compatible YAML](https://docs.jina.ai/concepts/jcloud/yaml-spec/) by specifying resource requirements and using containerized Hub Executors. Then, use the `jina cloud deploy` command to deploy to the cloud:

```shell
wget https://raw.githubusercontent.com/jina-ai/jina/master/.github/getting-started/jcloud-flow.yml
jina cloud deploy jcloud-flow.yml
```

⚠️ **Caution: Make sure to delete/clean up the Flow once you are done with this tutorial to save resources and credits.**

Read more about [deploying Flows to JCloud](https://docs.jina.ai/concepts/jcloud/#deploy).

%package -n python3-jina
Summary:	Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · MLOps
Provides:	python-jina
BuildRequires:	python3-devel
BuildRequires:	python3-setuptools
BuildRequires:	python3-pip

%description -n python3-jina

Jina: Streamline AI & ML Product Delivery

Build multimodal AI services via cloud native technologies (Neural Search,
Generative AI, MLOps). This package provides the Python 3 module for jina;
see the main package description for the full getting-started walkthrough.
%package help
Summary:	Development documents and examples for jina
Provides:	python3-jina-doc

%description help

Jina: Streamline AI & ML Product Delivery

Development documents and examples for jina, covering how to build and serve
multimodal AI services with Executors, Deployments and Flows.
%prep
%autosetup -n jina-3.15.0

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-jina -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Sun Apr 23 2023 Python_Bot - 3.15.0-1
- Package Spec generated