%global _empty_manifest_terminate_build 0
Name:		python-jina
Version:	3.15.0
Release:	1
Summary:	Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · MLOps
License:	Apache 2.0
URL:		https://github.com/jina-ai/jina/
Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/68/fd/559d5809832dfc49615aaf5626f1ff28b1de54453a72e2056ad9f702203a/jina-3.15.0.tar.gz
BuildArch:	noarch

%description
### Build AI Services [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb)

Let's build a fast, reliable and scalable gRPC-based AI service. In Jina we call this an **[Executor](https://docs.jina.ai/concepts/executor/)**. Our simple Executor will use Facebook's mBART-50 model to translate French to English. We'll then use a **Deployment** to serve it.

> **Note**
> A Deployment serves just one Executor. To combine multiple Executors into a pipeline and serve that, use a [Flow](#build-a-pipeline).

> **Note**
> Run the [code in Colab](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb#scrollTo=0l-lkmz4H-jW) to install all dependencies.

Let's implement the service's logic:

translate_executor.py

```python
from docarray import DocumentArray
from jina import Executor, requests
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


class Translator(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.tokenizer = AutoTokenizer.from_pretrained(
            "facebook/mbart-large-50-many-to-many-mmt", src_lang="fr_XX"
        )
        self.model = AutoModelForSeq2SeqLM.from_pretrained(
            "facebook/mbart-large-50-many-to-many-mmt"
        )

    @requests
    def translate(self, docs: DocumentArray, **kwargs):
        for doc in docs:
            doc.text = self._translate(doc.text)

    def _translate(self, text):
        encoded_en = self.tokenizer(text, return_tensors="pt")
        generated_tokens = self.model.generate(
            **encoded_en, forced_bos_token_id=self.tokenizer.lang_code_to_id["en_XX"]
        )
        return self.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[
            0
        ]
```
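Before wiring the Executor into a Deployment, you can sanity-check it as a plain Python class. The snippet below is a small optional smoke test, not part of the example above; the French sentence is just an illustrative input.

```python
from docarray import Document, DocumentArray
from translate_executor import Translator

# Instantiate the Executor directly and call its request method locally.
translator = Translator()
docs = DocumentArray([Document(text='un chat noir')])
translator.translate(docs)
print(docs[0].text)  # expected: an English translation such as "a black cat"
```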
Then serve it with a Deployment, either from the Python API or from YAML:

Python API: deployment.py

```python
from jina import Deployment
from translate_executor import Translator

with Deployment(uses=Translator, timeout_ready=-1) as dep:
    dep.block()
```

YAML: deployment.yml

```yaml
jtype: Deployment
with:
  uses: Translator
  py_modules:
    - translate_executor.py  # name of the module containing Translator
  timeout_ready: -1
```

And run the YAML Deployment with the CLI: `jina deployment --uses deployment.yml`
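Once the Deployment is running, requests can be sent with the Jina Client over gRPC. This is a minimal sketch, assuming the Deployment was started with an explicit port (for example `Deployment(uses=Translator, port=12345, timeout_ready=-1)`); the French input is just an illustrative placeholder.

```python
from docarray import Document
from jina import Client

# Assumes the Deployment above was started with port=12345.
client = Client(port=12345)
response = client.post(on='/', inputs=Document(text='un chat noir'))
print(response[0].text)  # translated text returned by the Translator Executor
```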
To chain the Translator with other Executors into a single pipeline, combine them in a Flow:

Python API: flow.py

```python
from jina import Flow
from translate_executor import Translator

flow = (
    Flow()
    .add(uses=Translator, timeout_ready=-1)
    .add(
        uses='jinaai://jina-ai/TextToImage',
        timeout_ready=-1,
        install_requirements=True,
    )
)  # use the Executor from Executor hub

with flow:
    flow.block()
```

YAML: flow.yml

```yaml
jtype: Flow
executors:
  - uses: Translator
    timeout_ready: -1
    py_modules:
      - translate_executor.py
  - uses: jinaai://jina-ai/TextToImage
    timeout_ready: -1
    install_requirements: true
```

Then run the YAML Flow with the CLI: `jina flow --uses flow.yml`
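As with the Deployment, the running Flow can be queried with the Jina Client. The sketch below assumes the Flow was started with an explicit port (for example `Flow(port=12345)`); since the second Executor turns the translated text into an image, the returned Document carries the generated image rather than plain text.

```python
from docarray import Document
from jina import Client

# Assumes the Flow above was started with Flow(port=12345).
client = Client(port=12345)
response = client.post(on='/', inputs=Document(text='un chat noir'))
print(response[0])  # inspect the returned Document, which holds the generated image
```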