Diffstat (limited to 'python-picovoice.spec')
-rw-r--r--  python-picovoice.spec  365
1 file changed, 365 insertions(+), 0 deletions(-)
diff --git a/python-picovoice.spec b/python-picovoice.spec
new file mode 100644
index 0000000..16d80c4
--- /dev/null
+++ b/python-picovoice.spec
@@ -0,0 +1,365 @@
+%global _empty_manifest_terminate_build 0
+Name: python-picovoice
+Version: 2.2.1
+Release: 1
+Summary: Picovoice is an end-to-end platform for building voice products on your terms.
+License: Apache Software License
+URL: https://github.com/Picovoice/picovoice
+Source0: https://mirrors.aliyun.com/pypi/web/packages/6c/2f/503c24259ea9506cd3e9afe946f74343782dd6361741d3185bfa164136f4/picovoice-2.2.1.tar.gz
+BuildArch: noarch
+
+Requires: python3-pvporcupine
+Requires: python3-pvrhino
+
+%description
+# Picovoice
+
+Made in Vancouver, Canada by [Picovoice](https://picovoice.ai)
+
+Picovoice is an end-to-end platform for building voice products on your terms. It enables creating voice experiences
+similar to Alexa and Google, but runs entirely on-device. Picovoice is
+
+- **Private:** Everything is processed offline. Intrinsically HIPAA and GDPR-compliant.
+- **Reliable:** Runs without needing constant connectivity.
+- **Zero Latency:** Edge-first architecture eliminates unpredictable network delay.
+- **Accurate:** Resilient to noise and reverberation. It outperforms cloud-based alternatives by wide margins
+[*](https://github.com/Picovoice/speech-to-intent-benchmark#results).
+- **Cross-Platform:** Design once, deploy anywhere. Build using familiar languages and frameworks.
+
+## Compatibility
+
+* Python 3.5+
+* Runs on Linux (x86_64), macOS (x86_64, arm64), Windows (x86_64), Raspberry Pi (all variants), NVIDIA Jetson (Nano), and BeagleBone.
+
+## Installation
+
+```console
+pip3 install picovoice
+```
+
+## AccessKey
+
+Picovoice requires a valid `AccessKey` at initialization. The `AccessKey` acts as your credentials when using Picovoice SDKs.
+You can get your `AccessKey` for free. Make sure to keep your `AccessKey` secret.
+Sign up or log in to [Picovoice Console](https://console.picovoice.ai/) to get your `AccessKey`.
+
+## Usage
+
+Create a new instance of the Picovoice runtime engine:
+
+```python
+from picovoice import Picovoice
+
+access_key = "${ACCESS_KEY}" # AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)
+
+keyword_path = ...  # absolute path to a Porcupine keyword file (`.ppn`)
+
+def wake_word_callback():
+ pass
+
+context_path = ...  # absolute path to a Rhino context file (`.rhn`)
+
+def inference_callback(inference):
+ # `inference` exposes three immutable fields:
+ # (1) `is_understood`
+ # (2) `intent`
+ # (3) `slots`
+ pass
+
+handle = Picovoice(
+ access_key=access_key,
+ keyword_path=keyword_path,
+ wake_word_callback=wake_word_callback,
+ context_path=context_path,
+ inference_callback=inference_callback)
+```
+
+`handle` is an instance of the Picovoice runtime engine. It detects utterances of the wake phrase defined in the file located at
+`keyword_path`. Upon detection of the wake word, it starts inferring the user's intent from the follow-on voice command within
+the context defined by the file located at `context_path`. `keyword_path` is the absolute path to a
+[Porcupine wake word engine](https://github.com/Picovoice/porcupine) keyword file (with a `.ppn` suffix), and
+`context_path` is the absolute path to a [Rhino Speech-to-Intent engine](https://github.com/Picovoice/rhino) context file
+(with a `.rhn` suffix). `wake_word_callback` is invoked upon detection of the wake phrase, and `inference_callback` is
+invoked upon completion of the follow-on voice command inference.
+
+Once instantiated, the valid sample rate can be obtained via `handle.sample_rate`, and the expected number of audio samples
+per frame via `handle.frame_length`. The engine accepts 16-bit linearly-encoded PCM and operates on single-channel audio.
+
+```python
+def get_next_audio_frame():
+    # return the next frame of audio: `handle.frame_length` 16-bit,
+    # single-channel PCM samples
+    pass
+
+while True:
+ handle.process(get_next_audio_frame())
+```
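The loop above consumes frames of exactly `handle.frame_length` samples. As a minimal sketch of how raw audio could be framed (the `FRAME_LENGTH` constant and the `pcm_to_frames` helper are illustrative assumptions, not part of the picovoice API; a real application would use `handle.frame_length` and a live audio source):

```python
import struct

FRAME_LENGTH = 512  # placeholder; a real application would use handle.frame_length

def pcm_to_frames(pcm_bytes, frame_length=FRAME_LENGTH):
    """Split raw 16-bit little-endian single-channel PCM into tuples of
    `frame_length` samples, dropping any trailing partial frame."""
    bytes_per_frame = frame_length * 2  # two bytes per 16-bit sample
    frames = []
    for offset in range(0, len(pcm_bytes) - bytes_per_frame + 1, bytes_per_frame):
        chunk = pcm_bytes[offset:offset + bytes_per_frame]
        frames.append(struct.unpack("<%dh" % frame_length, chunk))
    return frames
```

Each resulting frame could then be fed to `handle.process(frame)` in turn.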
+
+When done, resources must be released explicitly:
+
+```python
+handle.delete()
+```
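To guarantee that `delete()` runs even if the processing loop raises, the call can be placed in a `finally` block or wrapped in a small context manager. A minimal sketch (the `closing_handle` helper is hypothetical, not part of the picovoice API; it only assumes the handle exposes a `delete()` method):

```python
from contextlib import contextmanager

@contextmanager
def closing_handle(handle):
    """Yield `handle`, then call its `delete()` method on exit,
    whether the body completed normally or raised."""
    try:
        yield handle
    finally:
        handle.delete()
```

With this wrapper, `with closing_handle(Picovoice(...)) as handle:` releases the engine's resources automatically at the end of the block.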
+
+## Non-English Models
+
+To detect wake words and run inference in other languages, you need to use the corresponding model files. The model files for all supported languages are available [here](https://github.com/Picovoice/porcupine/tree/master/lib/common) and [here](https://github.com/Picovoice/rhino/tree/master/lib/common).
+
+## Demos
+
+[picovoicedemo](https://pypi.org/project/picovoicedemo/) provides command-line utilities for processing real-time
+audio (i.e., from a microphone) and audio files with the Picovoice platform.
+
+
+
+
+%package -n python3-picovoice
+Summary: Picovoice is an end-to-end platform for building voice products on your terms.
+Provides: python-picovoice
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-picovoice
+# Picovoice
+
+Made in Vancouver, Canada by [Picovoice](https://picovoice.ai)
+
+Picovoice is an end-to-end platform for building voice products on your terms. It enables creating voice experiences
+similar to Alexa and Google, but runs entirely on-device. Picovoice is
+
+- **Private:** Everything is processed offline. Intrinsically HIPAA and GDPR-compliant.
+- **Reliable:** Runs without needing constant connectivity.
+- **Zero Latency:** Edge-first architecture eliminates unpredictable network delay.
+- **Accurate:** Resilient to noise and reverberation. It outperforms cloud-based alternatives by wide margins
+[*](https://github.com/Picovoice/speech-to-intent-benchmark#results).
+- **Cross-Platform:** Design once, deploy anywhere. Build using familiar languages and frameworks.
+
+## Compatibility
+
+* Python 3.5+
+* Runs on Linux (x86_64), macOS (x86_64, arm64), Windows (x86_64), Raspberry Pi (all variants), NVIDIA Jetson (Nano), and BeagleBone.
+
+## Installation
+
+```console
+pip3 install picovoice
+```
+
+## AccessKey
+
+Picovoice requires a valid `AccessKey` at initialization. The `AccessKey` acts as your credentials when using Picovoice SDKs.
+You can get your `AccessKey` for free. Make sure to keep your `AccessKey` secret.
+Sign up or log in to [Picovoice Console](https://console.picovoice.ai/) to get your `AccessKey`.
+
+## Usage
+
+Create a new instance of the Picovoice runtime engine:
+
+```python
+from picovoice import Picovoice
+
+access_key = "${ACCESS_KEY}" # AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)
+
+keyword_path = ...  # absolute path to a Porcupine keyword file (`.ppn`)
+
+def wake_word_callback():
+ pass
+
+context_path = ...  # absolute path to a Rhino context file (`.rhn`)
+
+def inference_callback(inference):
+ # `inference` exposes three immutable fields:
+ # (1) `is_understood`
+ # (2) `intent`
+ # (3) `slots`
+ pass
+
+handle = Picovoice(
+ access_key=access_key,
+ keyword_path=keyword_path,
+ wake_word_callback=wake_word_callback,
+ context_path=context_path,
+ inference_callback=inference_callback)
+```
+
+`handle` is an instance of the Picovoice runtime engine. It detects utterances of the wake phrase defined in the file located at
+`keyword_path`. Upon detection of the wake word, it starts inferring the user's intent from the follow-on voice command within
+the context defined by the file located at `context_path`. `keyword_path` is the absolute path to a
+[Porcupine wake word engine](https://github.com/Picovoice/porcupine) keyword file (with a `.ppn` suffix), and
+`context_path` is the absolute path to a [Rhino Speech-to-Intent engine](https://github.com/Picovoice/rhino) context file
+(with a `.rhn` suffix). `wake_word_callback` is invoked upon detection of the wake phrase, and `inference_callback` is
+invoked upon completion of the follow-on voice command inference.
+
+Once instantiated, the valid sample rate can be obtained via `handle.sample_rate`, and the expected number of audio samples
+per frame via `handle.frame_length`. The engine accepts 16-bit linearly-encoded PCM and operates on single-channel audio.
+
+```python
+def get_next_audio_frame():
+    # return the next frame of audio: `handle.frame_length` 16-bit,
+    # single-channel PCM samples
+    pass
+
+while True:
+ handle.process(get_next_audio_frame())
+```
+
+When done, resources must be released explicitly:
+
+```python
+handle.delete()
+```
+
+## Non-English Models
+
+To detect wake words and run inference in other languages, you need to use the corresponding model files. The model files for all supported languages are available [here](https://github.com/Picovoice/porcupine/tree/master/lib/common) and [here](https://github.com/Picovoice/rhino/tree/master/lib/common).
+
+## Demos
+
+[picovoicedemo](https://pypi.org/project/picovoicedemo/) provides command-line utilities for processing real-time
+audio (i.e., from a microphone) and audio files with the Picovoice platform.
+
+
+
+
+%package help
+Summary: Development documents and examples for picovoice
+Provides: python3-picovoice-doc
+%description help
+# Picovoice
+
+Made in Vancouver, Canada by [Picovoice](https://picovoice.ai)
+
+Picovoice is an end-to-end platform for building voice products on your terms. It enables creating voice experiences
+similar to Alexa and Google, but runs entirely on-device. Picovoice is
+
+- **Private:** Everything is processed offline. Intrinsically HIPAA and GDPR-compliant.
+- **Reliable:** Runs without needing constant connectivity.
+- **Zero Latency:** Edge-first architecture eliminates unpredictable network delay.
+- **Accurate:** Resilient to noise and reverberation. It outperforms cloud-based alternatives by wide margins
+[*](https://github.com/Picovoice/speech-to-intent-benchmark#results).
+- **Cross-Platform:** Design once, deploy anywhere. Build using familiar languages and frameworks.
+
+## Compatibility
+
+* Python 3.5+
+* Runs on Linux (x86_64), macOS (x86_64, arm64), Windows (x86_64), Raspberry Pi (all variants), NVIDIA Jetson (Nano), and BeagleBone.
+
+## Installation
+
+```console
+pip3 install picovoice
+```
+
+## AccessKey
+
+Picovoice requires a valid `AccessKey` at initialization. The `AccessKey` acts as your credentials when using Picovoice SDKs.
+You can get your `AccessKey` for free. Make sure to keep your `AccessKey` secret.
+Sign up or log in to [Picovoice Console](https://console.picovoice.ai/) to get your `AccessKey`.
+
+## Usage
+
+Create a new instance of the Picovoice runtime engine:
+
+```python
+from picovoice import Picovoice
+
+access_key = "${ACCESS_KEY}" # AccessKey obtained from Picovoice Console (https://console.picovoice.ai/)
+
+keyword_path = ...  # absolute path to a Porcupine keyword file (`.ppn`)
+
+def wake_word_callback():
+ pass
+
+context_path = ...  # absolute path to a Rhino context file (`.rhn`)
+
+def inference_callback(inference):
+ # `inference` exposes three immutable fields:
+ # (1) `is_understood`
+ # (2) `intent`
+ # (3) `slots`
+ pass
+
+handle = Picovoice(
+ access_key=access_key,
+ keyword_path=keyword_path,
+ wake_word_callback=wake_word_callback,
+ context_path=context_path,
+ inference_callback=inference_callback)
+```
+
+`handle` is an instance of the Picovoice runtime engine. It detects utterances of the wake phrase defined in the file located at
+`keyword_path`. Upon detection of the wake word, it starts inferring the user's intent from the follow-on voice command within
+the context defined by the file located at `context_path`. `keyword_path` is the absolute path to a
+[Porcupine wake word engine](https://github.com/Picovoice/porcupine) keyword file (with a `.ppn` suffix), and
+`context_path` is the absolute path to a [Rhino Speech-to-Intent engine](https://github.com/Picovoice/rhino) context file
+(with a `.rhn` suffix). `wake_word_callback` is invoked upon detection of the wake phrase, and `inference_callback` is
+invoked upon completion of the follow-on voice command inference.
+
+Once instantiated, the valid sample rate can be obtained via `handle.sample_rate`, and the expected number of audio samples
+per frame via `handle.frame_length`. The engine accepts 16-bit linearly-encoded PCM and operates on single-channel audio.
+
+```python
+def get_next_audio_frame():
+    # return the next frame of audio: `handle.frame_length` 16-bit,
+    # single-channel PCM samples
+    pass
+
+while True:
+ handle.process(get_next_audio_frame())
+```
+
+When done, resources must be released explicitly:
+
+```python
+handle.delete()
+```
+
+## Non-English Models
+
+To detect wake words and run inference in other languages, you need to use the corresponding model files. The model files for all supported languages are available [here](https://github.com/Picovoice/porcupine/tree/master/lib/common) and [here](https://github.com/Picovoice/rhino/tree/master/lib/common).
+
+## Demos
+
+[picovoicedemo](https://pypi.org/project/picovoicedemo/) provides command-line utilities for processing real-time
+audio (i.e., from a microphone) and audio files with the Picovoice platform.
+
+
+
+
+%prep
+%autosetup -n picovoice-2.2.1
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-picovoice -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Jun 20 2023 Python_Bot <Python_Bot@openeuler.org> - 2.2.1-1
+- Package Spec generated