diff --git a/python-jina.spec b/python-jina.spec
new file mode 100644
index 0000000..8a007a6
--- /dev/null
+++ b/python-jina.spec
@@ -0,0 +1,603 @@
+%global _empty_manifest_terminate_build 0
+Name: python-jina
+Version: 3.14.1
+Release: 1
+Summary: Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · MLOps
+License: Apache-2.0
+URL: https://github.com/jina-ai/jina/
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/d6/93/909b20eeddce3941d76a06c357e1d9d7386159e9420f04750d023116ff48/jina-3.14.1.tar.gz
+BuildArch: noarch
+
+
+%description
+<p align="center">
+<a href="https://docs.jina.ai"><img src="https://github.com/jina-ai/jina/blob/master/.github/readme/streamline-banner.png?raw=true" alt="Jina: Streamline AI & ML Product Delivery" width="100%"></a>
+</p>
+### Build AI & ML Services
+<!-- start build-ai-services -->
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb)
+Let's build a fast, reliable, and scalable gRPC-based AI service. In Jina we call this an **[Executor](https://docs.jina.ai/concepts/executor/)**. Our simple Executor will use Facebook's mBART-50 model to translate French to English. We'll then use a **Deployment** to serve it.
+> **Note**
+> A Deployment serves just one Executor. To combine multiple Executors into a pipeline and serve that, use a [Flow](#build-a-pipeline).
+> **Note**
+> Run the [code in Colab](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb) to install all dependencies.
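+If you run the code locally instead, you need the usual dependencies for this stack; a minimal sketch (package names are the standard ones for serving mBART via Transformers, not pinned by this README):
+```shell
+# jina: the serving framework; transformers/torch: the mBART-50 model;
+# sentencepiece: required by the mBART tokenizer
+pip install jina transformers torch sentencepiece
+```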
+Let's implement the service's logic:
+<table>
+<tr>
+<th><code>translate_executor.py</code> </th>
+</tr>
+<tr>
+<td>
+```python
+from docarray import DocumentArray
+from jina import Executor, requests
+from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+
+class Translator(Executor):
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+        # mBART-50 needs to know the source language of the input text
+        self.tokenizer = AutoTokenizer.from_pretrained(
+            "facebook/mbart-large-50-many-to-many-mmt", src_lang="fr_XX"
+        )
+        self.model = AutoModelForSeq2SeqLM.from_pretrained(
+            "facebook/mbart-large-50-many-to-many-mmt"
+        )
+
+    @requests
+    def translate(self, docs: DocumentArray, **kwargs):
+        for doc in docs:
+            doc.text = self._translate(doc.text)
+
+    def _translate(self, text):
+        encoded_fr = self.tokenizer(text, return_tensors="pt")
+        # Force English as the target language for generation
+        generated_tokens = self.model.generate(
+            **encoded_fr, forced_bos_token_id=self.tokenizer.lang_code_to_id["en_XX"]
+        )
+        return self.tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[
+            0
+        ]
+```
+</td>
+</tr>
+</table>
+Then we deploy it with either the Python API or YAML:
+<div class="table-wrapper">
+<table>
+<tr>
+<th> Python API: <code>deployment.py</code> </th>
+<th> YAML: <code>deployment.yml</code> </th>
+</tr>
+<tr>
+<td>
+```python
+from jina import Deployment
+
+from translate_executor import Translator
+
+with Deployment(uses=Translator, timeout_ready=-1) as dep:
+    dep.block()
+```
+</td>
+<td>
+```yaml
+jtype: Deployment
+with:
+ uses: Translator
+ py_modules:
+ - translate_executor.py # name of the module containing Translator
+ timeout_ready: -1
+```
+Then run the YAML Deployment with the CLI: `jina deployment --uses deployment.yml`
+</td>
+</tr>
+</table>
+</div>
+```text
+──────────────────────────────────────── 🎉 Deployment is ready to serve! ─────────────────────────────────────────
+╭────────────── 🔗 Endpoint ───────────────╮
+│ ⛓ Protocol GRPC │
+│ 🏠 Local 0.0.0.0:12345 │
+│ 🔒 Private 172.28.0.12:12345 │
+│ 🌍 Public 35.230.97.208:12345 │
+╰──────────────────────────────────────────╯
+```
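+The banner shows the port Jina picked. To make the Client snippet below reproducible, you can pin the port yourself; a minimal sketch, assuming the standard `port` argument of `Deployment` (not used in the original example):
+```python
+from jina import Deployment
+
+from translate_executor import Translator
+
+# Pin the port so clients can always reach the service on 12345
+with Deployment(uses=Translator, port=12345, timeout_ready=-1) as dep:
+    dep.block()
+```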
+Use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the service:
+```python
+from docarray import Document
+from jina import Client
+french_text = Document(
+    text='un astronaute est en train de faire une promenade dans un parc'
+)
+client = Client(port=12345) # use port from output above
+response = client.post(on='/', inputs=[french_text])
+print(response[0].text)
+```
+```text
+an astronaut is walking in a park
+```
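+A request can also carry several Documents at once; a sketch against the same running service (the French inputs here are just illustrative):
+```python
+from docarray import Document, DocumentArray
+from jina import Client
+
+client = Client(port=12345)
+docs = DocumentArray(
+    [
+        Document(text='bonjour le monde'),
+        Document(text='la vie est belle'),
+    ]
+)
+
+# One round trip; the Executor's translate() loops over every Document
+for doc in client.post(on='/', inputs=docs):
+    print(doc.text)
+```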
+<!-- end build-ai-services -->
+### Build a pipeline
+<!-- start build-pipelines -->
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/jina-ai/jina/blob/master/.github/getting-started/notebook.ipynb)
+Sometimes you want to chain microservices together into a pipeline. That's where a [Flow](https://docs.jina.ai/concepts/flow/) comes in.
+A Flow is a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph) pipeline composed of a set of steps. It orchestrates a set of [Executors](https://docs.jina.ai/concepts/executor/) and a [Gateway](https://docs.jina.ai/concepts/gateway/) to offer an end-to-end service.
+> **Note**
+> If you just want to serve a single Executor, you can use a [Deployment](#build-ai--ml-services).
+For instance, let's combine [our French translation service](#build-ai--ml-services) with a Stable Diffusion image generation service from Jina AI's [Executor Hub](https://cloud.jina.ai/executors). Chaining these services together into a [Flow](https://docs.jina.ai/concepts/flow/) will give us a multilingual image generation service.
+Build the Flow with either Python or YAML:
+<div class="table-wrapper">
+<table>
+<tr>
+<th> Python API: <code>flow.py</code> </th>
+<th> YAML: <code>flow.yml</code> </th>
+</tr>
+<tr>
+<td>
+```python
+from jina import Flow
+
+from translate_executor import Translator
+
+flow = (
+    Flow()
+    .add(uses=Translator, timeout_ready=-1)
+    .add(
+        uses='jinaai://jina-ai/TextToImage',  # Executor from Executor Hub
+        timeout_ready=-1,
+        install_requirements=True,
+    )
+)
+
+with flow:
+    flow.block()
+```
+</td>
+<td>
+```yaml
+jtype: Flow
+executors:
+ - uses: Translator
+ timeout_ready: -1
+ py_modules:
+ - translate_executor.py
+ - uses: jinaai://jina-ai/TextToImage
+ timeout_ready: -1
+ install_requirements: true
+```
+Then run the YAML Flow with the CLI: `jina flow --uses flow.yml`
+</td>
+</tr>
+</table>
+</div>
+```text
+─────────────────────────────────────────── 🎉 Flow is ready to serve! ────────────────────────────────────────────
+╭────────────── 🔗 Endpoint ───────────────╮
+│ ⛓ Protocol GRPC │
+│ 🏠 Local 0.0.0.0:12345 │
+│ 🔒 Private 172.28.0.12:12345 │
+│ 🌍 Public 35.240.201.66:12345 │
+╰──────────────────────────────────────────╯
+```
+Then, use [Jina Client](https://docs.jina.ai/concepts/client/) to make requests to the Flow:
+```python
+from jina import Client, Document
+client = Client(port=12345) # use port from output above
+french_text = Document(
+    text='un astronaute est en train de faire une promenade dans un parc'
+)
+response = client.post(on='/', inputs=[french_text])
+response[0].display()
+```
+![stable-diffusion-output.png](https://raw.githubusercontent.com/jina-ai/jina/master/.github/stable-diffusion-output.png)
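+Outside a notebook, `display()` isn't very useful; a sketch for saving the result to disk instead, assuming the Hub Executor writes the generated image into the Document's image tensor (check the TextToImage Executor's docs for its exact output field):
+```python
+from jina import Client, Document
+
+client = Client(port=12345)
+response = client.post(
+    on='/',
+    inputs=[Document(text='un astronaute est en train de faire une promenade dans un parc')],
+)
+# DocArray v1 Documents can dump an image tensor straight to a file
+response[0].save_image_tensor_to_file('astronaut.png')
+```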
+You can also deploy a Flow to JCloud.
+First, turn the `flow.yml` file into a [JCloud-compatible YAML](https://docs.jina.ai/concepts/jcloud/yaml-spec/) by specifying resource requirements and using containerized Hub Executors.
+Then, use the `jina cloud deploy` command to deploy to the cloud:
+```shell
+wget https://raw.githubusercontent.com/jina-ai/jina/master/.github/getting-started/jcloud-flow.yml
+jina cloud deploy jcloud-flow.yml
+```
+⚠️ **Caution: Make sure to delete/clean up the Flow once you are done with this tutorial to save resources and credits.**
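+A sketch of that cleanup, assuming the `list`/`remove` subcommands of the JCloud CLI (verify the exact names against `jina cloud --help` for your version):
+```shell
+# Find the ID of the tutorial Flow
+jina cloud list
+# Remove it to stop consuming resources and credits
+jina cloud remove <flow-id>
+```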
+Read more about [deploying Flows to JCloud](https://docs.jina.ai/concepts/jcloud/#deploy).
+<!-- end build-pipelines -->
+
+%package -n python3-jina
+Summary: Build multimodal AI services via cloud native technologies · Neural Search · Generative AI · MLOps
+Provides: python-jina
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-jina
+Jina lets you build multimodal AI services that communicate via gRPC, HTTP and WebSockets, and scale and deploy them via cloud-native technologies. See the python-jina description above for the full overview and examples.
+
+%package help
+Summary: Development documents and examples for jina
+Provides: python3-jina-doc
+%description help
+Development documents and examples for jina. See the python-jina description above for the full project overview.
+
+%prep
+%autosetup -n jina-3.14.1
+
+%build
+%py3_build
+
+%install
+%py3_install
+# Ship any upstream doc/example directories with the help subpackage
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+# Record every installed file in filelist.lst for the files section below
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+# Man pages are compressed at build time, so record them with a .gz suffix
+touch doclist.lst
+if [ -d usr/share/man ]; then
+    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-jina -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 3.14.1-1
+- Package Spec generated