author    CoprDistGit <infra@openeuler.org>  2023-05-10 04:34:29 +0000
committer CoprDistGit <infra@openeuler.org>  2023-05-10 04:34:29 +0000
commit    0b1fb4b0e7a91bb1c0b909df342bb126d2f8bdc6 (patch)
tree      2154c9325d281e7d92a1a5f5d99bc4fa3c476b91
parent    6b131c0f0a4366f72e74e48585634b56fd2d7e7d (diff)
automatic import of python-audiomentations (openeuler20.03)
-rw-r--r--  .gitignore                     1
-rw-r--r--  python-audiomentations.spec  371
-rw-r--r--  sources                        1
3 files changed, 373 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..d3613c8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/audiomentations-0.30.0.tar.gz
diff --git a/python-audiomentations.spec b/python-audiomentations.spec
new file mode 100644
index 0000000..db331ef
--- /dev/null
+++ b/python-audiomentations.spec
@@ -0,0 +1,371 @@
+%global _empty_manifest_terminate_build 0
+Name: python-audiomentations
+Version: 0.30.0
+Release: 1
+Summary: A Python library for audio data augmentation, inspired by albumentations; useful for machine learning
+License: MIT
+URL: https://github.com/iver56/audiomentations
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/d0/ae/3e48afd7d761836f24b9c4740666a0e371f9693cb0e2a833052345db1abb/audiomentations-0.30.0.tar.gz
+BuildArch: noarch
+
+Requires: python3-numpy
+Requires: python3-librosa
+Requires: python3-scipy
+Requires: python3-cylimiter
+Requires: python3-lameenc
+Requires: python3-pydub
+Requires: python3-pyloudnorm
+Requires: python3-pyroomacoustics
+
+%description
+# Audiomentations
+
+[![Build status](https://img.shields.io/circleci/project/github/iver56/audiomentations/main.svg)](https://circleci.com/gh/iver56/audiomentations)
+[![Code coverage](https://img.shields.io/codecov/c/github/iver56/audiomentations/main.svg)](https://codecov.io/gh/iver56/audiomentations)
+[![Code Style: Black](https://img.shields.io/badge/code%20style-black-black.svg)](https://github.com/ambv/black)
+[![Licence: MIT](https://img.shields.io/pypi/l/audiomentations)](https://github.com/iver56/audiomentations/blob/main/LICENSE)
+[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7885479.svg)](https://doi.org/10.5281/zenodo.7885479)
+
+A Python library for audio data augmentation. Inspired by
+[albumentations](https://github.com/albu/albumentations). Useful for deep learning. Runs on
+CPU. Supports mono audio and multichannel audio. Can be
+integrated into training pipelines in, e.g., TensorFlow/Keras or PyTorch. Has helped people get
+world-class results in Kaggle competitions. Is used by companies making next-generation audio
+products.
+
+Need a PyTorch-specific alternative with GPU support? Check out [torch-audiomentations](https://github.com/asteroid-team/torch-audiomentations)!
+
+# Setup
+
+![Python version support](https://img.shields.io/pypi/pyversions/audiomentations)
+[![PyPI version](https://img.shields.io/pypi/v/audiomentations.svg?style=flat)](https://pypi.org/project/audiomentations/)
+[![Number of downloads from PyPI per month](https://img.shields.io/pypi/dm/audiomentations.svg?style=flat)](https://pypi.org/project/audiomentations/)
+
+`pip install audiomentations`
+
+# Usage example
+
+```python
+from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift, Shift
+import numpy as np
+
+augment = Compose([
+ AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
+ TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
+ PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
+ Shift(min_fraction=-0.5, max_fraction=0.5, p=0.5),
+])
+
+# Generate 2 seconds of dummy audio for the sake of example
+samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)
+
+# Augment/transform/perturb the audio data
+augmented_samples = augment(samples=samples, sample_rate=16000)
+```
+
+# Documentation
+
+See [https://iver56.github.io/audiomentations/](https://iver56.github.io/audiomentations/)
+
+# Transforms
+
+* [AddBackgroundNoise](https://iver56.github.io/audiomentations/waveform_transforms/add_background_noise/)
+* [AddGaussianNoise](https://iver56.github.io/audiomentations/waveform_transforms/add_gaussian_noise/)
+* [AddGaussianSNR](https://iver56.github.io/audiomentations/waveform_transforms/add_gaussian_snr/)
+* [AddShortNoises](https://iver56.github.io/audiomentations/waveform_transforms/add_short_noises/)
+* [AirAbsorption](https://iver56.github.io/audiomentations/waveform_transforms/air_absorption/)
+* [ApplyImpulseResponse](https://iver56.github.io/audiomentations/waveform_transforms/apply_impulse_response/)
+* [BandPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/band_pass_filter/)
+* [BandStopFilter](https://iver56.github.io/audiomentations/waveform_transforms/band_stop_filter/)
+* [Clip](https://iver56.github.io/audiomentations/waveform_transforms/clip/)
+* [ClippingDistortion](https://iver56.github.io/audiomentations/waveform_transforms/clipping_distortion/)
+* [Gain](https://iver56.github.io/audiomentations/waveform_transforms/gain/)
+* [GainTransition](https://iver56.github.io/audiomentations/waveform_transforms/gain_transition/)
+* [HighPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/high_pass_filter/)
+* [HighShelfFilter](https://iver56.github.io/audiomentations/waveform_transforms/high_shelf_filter/)
+* [Lambda](https://iver56.github.io/audiomentations/waveform_transforms/lambda/)
+* [Limiter](https://iver56.github.io/audiomentations/waveform_transforms/limiter/)
+* [LoudnessNormalization](https://iver56.github.io/audiomentations/waveform_transforms/loudness_normalization/)
+* [LowPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/low_pass_filter/)
+* [LowShelfFilter](https://iver56.github.io/audiomentations/waveform_transforms/low_shelf_filter/)
+* [Mp3Compression](https://iver56.github.io/audiomentations/waveform_transforms/mp3_compression/)
+* [Normalize](https://iver56.github.io/audiomentations/waveform_transforms/normalize/)
+* [Padding](https://iver56.github.io/audiomentations/waveform_transforms/padding/)
+* [PeakingFilter](https://iver56.github.io/audiomentations/waveform_transforms/peaking_filter/)
+* [PitchShift](https://iver56.github.io/audiomentations/waveform_transforms/pitch_shift/)
+* [PolarityInversion](https://iver56.github.io/audiomentations/waveform_transforms/polarity_inversion/)
+* [Resample](https://iver56.github.io/audiomentations/waveform_transforms/resample/)
+* [Reverse](https://iver56.github.io/audiomentations/waveform_transforms/reverse/)
+* [RoomSimulator](https://iver56.github.io/audiomentations/waveform_transforms/room_simulator/)
+* [SevenBandParametricEQ](https://iver56.github.io/audiomentations/waveform_transforms/seven_band_parametric_eq/)
+* [Shift](https://iver56.github.io/audiomentations/waveform_transforms/shift/)
+* [SpecChannelShuffle](https://iver56.github.io/audiomentations/spectrogram_transforms/)
+* [SpecFrequencyMask](https://iver56.github.io/audiomentations/spectrogram_transforms/)
+* [TanhDistortion](https://iver56.github.io/audiomentations/waveform_transforms/tanh_distortion/)
+* [TimeMask](https://iver56.github.io/audiomentations/waveform_transforms/time_mask/)
+* [TimeStretch](https://iver56.github.io/audiomentations/waveform_transforms/time_stretch/)
+* [Trim](https://iver56.github.io/audiomentations/waveform_transforms/trim/)
+
+# Changelog
+
+See [https://iver56.github.io/audiomentations/changelog/](https://iver56.github.io/audiomentations/changelog/)
+
+# Acknowledgements
+
+Thanks to [Nomono](https://nomono.co/) for backing audiomentations.
+
+Thanks to [all contributors](https://github.com/iver56/audiomentations/graphs/contributors) who help improve audiomentations.
+
+
+%package -n python3-audiomentations
+Summary: A Python library for audio data augmentation, inspired by albumentations; useful for machine learning
+Provides: python-audiomentations
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-audiomentations
+# Audiomentations
+
+[![Build status](https://img.shields.io/circleci/project/github/iver56/audiomentations/main.svg)](https://circleci.com/gh/iver56/audiomentations)
+[![Code coverage](https://img.shields.io/codecov/c/github/iver56/audiomentations/main.svg)](https://codecov.io/gh/iver56/audiomentations)
+[![Code Style: Black](https://img.shields.io/badge/code%20style-black-black.svg)](https://github.com/ambv/black)
+[![Licence: MIT](https://img.shields.io/pypi/l/audiomentations)](https://github.com/iver56/audiomentations/blob/main/LICENSE)
+[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7885479.svg)](https://doi.org/10.5281/zenodo.7885479)
+
+A Python library for audio data augmentation. Inspired by
+[albumentations](https://github.com/albu/albumentations). Useful for deep learning. Runs on
+CPU. Supports mono audio and multichannel audio. Can be
+integrated into training pipelines in, e.g., TensorFlow/Keras or PyTorch. Has helped people get
+world-class results in Kaggle competitions. Is used by companies making next-generation audio
+products.
+
+Need a PyTorch-specific alternative with GPU support? Check out [torch-audiomentations](https://github.com/asteroid-team/torch-audiomentations)!
+
+# Setup
+
+![Python version support](https://img.shields.io/pypi/pyversions/audiomentations)
+[![PyPI version](https://img.shields.io/pypi/v/audiomentations.svg?style=flat)](https://pypi.org/project/audiomentations/)
+[![Number of downloads from PyPI per month](https://img.shields.io/pypi/dm/audiomentations.svg?style=flat)](https://pypi.org/project/audiomentations/)
+
+`pip install audiomentations`
+
+# Usage example
+
+```python
+from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift, Shift
+import numpy as np
+
+augment = Compose([
+ AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
+ TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
+ PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
+ Shift(min_fraction=-0.5, max_fraction=0.5, p=0.5),
+])
+
+# Generate 2 seconds of dummy audio for the sake of example
+samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)
+
+# Augment/transform/perturb the audio data
+augmented_samples = augment(samples=samples, sample_rate=16000)
+```
+
+# Documentation
+
+See [https://iver56.github.io/audiomentations/](https://iver56.github.io/audiomentations/)
+
+# Transforms
+
+* [AddBackgroundNoise](https://iver56.github.io/audiomentations/waveform_transforms/add_background_noise/)
+* [AddGaussianNoise](https://iver56.github.io/audiomentations/waveform_transforms/add_gaussian_noise/)
+* [AddGaussianSNR](https://iver56.github.io/audiomentations/waveform_transforms/add_gaussian_snr/)
+* [AddShortNoises](https://iver56.github.io/audiomentations/waveform_transforms/add_short_noises/)
+* [AirAbsorption](https://iver56.github.io/audiomentations/waveform_transforms/air_absorption/)
+* [ApplyImpulseResponse](https://iver56.github.io/audiomentations/waveform_transforms/apply_impulse_response/)
+* [BandPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/band_pass_filter/)
+* [BandStopFilter](https://iver56.github.io/audiomentations/waveform_transforms/band_stop_filter/)
+* [Clip](https://iver56.github.io/audiomentations/waveform_transforms/clip/)
+* [ClippingDistortion](https://iver56.github.io/audiomentations/waveform_transforms/clipping_distortion/)
+* [Gain](https://iver56.github.io/audiomentations/waveform_transforms/gain/)
+* [GainTransition](https://iver56.github.io/audiomentations/waveform_transforms/gain_transition/)
+* [HighPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/high_pass_filter/)
+* [HighShelfFilter](https://iver56.github.io/audiomentations/waveform_transforms/high_shelf_filter/)
+* [Lambda](https://iver56.github.io/audiomentations/waveform_transforms/lambda/)
+* [Limiter](https://iver56.github.io/audiomentations/waveform_transforms/limiter/)
+* [LoudnessNormalization](https://iver56.github.io/audiomentations/waveform_transforms/loudness_normalization/)
+* [LowPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/low_pass_filter/)
+* [LowShelfFilter](https://iver56.github.io/audiomentations/waveform_transforms/low_shelf_filter/)
+* [Mp3Compression](https://iver56.github.io/audiomentations/waveform_transforms/mp3_compression/)
+* [Normalize](https://iver56.github.io/audiomentations/waveform_transforms/normalize/)
+* [Padding](https://iver56.github.io/audiomentations/waveform_transforms/padding/)
+* [PeakingFilter](https://iver56.github.io/audiomentations/waveform_transforms/peaking_filter/)
+* [PitchShift](https://iver56.github.io/audiomentations/waveform_transforms/pitch_shift/)
+* [PolarityInversion](https://iver56.github.io/audiomentations/waveform_transforms/polarity_inversion/)
+* [Resample](https://iver56.github.io/audiomentations/waveform_transforms/resample/)
+* [Reverse](https://iver56.github.io/audiomentations/waveform_transforms/reverse/)
+* [RoomSimulator](https://iver56.github.io/audiomentations/waveform_transforms/room_simulator/)
+* [SevenBandParametricEQ](https://iver56.github.io/audiomentations/waveform_transforms/seven_band_parametric_eq/)
+* [Shift](https://iver56.github.io/audiomentations/waveform_transforms/shift/)
+* [SpecChannelShuffle](https://iver56.github.io/audiomentations/spectrogram_transforms/)
+* [SpecFrequencyMask](https://iver56.github.io/audiomentations/spectrogram_transforms/)
+* [TanhDistortion](https://iver56.github.io/audiomentations/waveform_transforms/tanh_distortion/)
+* [TimeMask](https://iver56.github.io/audiomentations/waveform_transforms/time_mask/)
+* [TimeStretch](https://iver56.github.io/audiomentations/waveform_transforms/time_stretch/)
+* [Trim](https://iver56.github.io/audiomentations/waveform_transforms/trim/)
+
+# Changelog
+
+See [https://iver56.github.io/audiomentations/changelog/](https://iver56.github.io/audiomentations/changelog/)
+
+# Acknowledgements
+
+Thanks to [Nomono](https://nomono.co/) for backing audiomentations.
+
+Thanks to [all contributors](https://github.com/iver56/audiomentations/graphs/contributors) who help improve audiomentations.
+
+
+%package help
+Summary: Development documents and examples for audiomentations
+Provides: python3-audiomentations-doc
+%description help
+# Audiomentations
+
+[![Build status](https://img.shields.io/circleci/project/github/iver56/audiomentations/main.svg)](https://circleci.com/gh/iver56/audiomentations)
+[![Code coverage](https://img.shields.io/codecov/c/github/iver56/audiomentations/main.svg)](https://codecov.io/gh/iver56/audiomentations)
+[![Code Style: Black](https://img.shields.io/badge/code%20style-black-black.svg)](https://github.com/ambv/black)
+[![Licence: MIT](https://img.shields.io/pypi/l/audiomentations)](https://github.com/iver56/audiomentations/blob/main/LICENSE)
+[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7885479.svg)](https://doi.org/10.5281/zenodo.7885479)
+
+A Python library for audio data augmentation. Inspired by
+[albumentations](https://github.com/albu/albumentations). Useful for deep learning. Runs on
+CPU. Supports mono audio and multichannel audio. Can be
+integrated into training pipelines in, e.g., TensorFlow/Keras or PyTorch. Has helped people get
+world-class results in Kaggle competitions. Is used by companies making next-generation audio
+products.
+
+Need a PyTorch-specific alternative with GPU support? Check out [torch-audiomentations](https://github.com/asteroid-team/torch-audiomentations)!
+
+# Setup
+
+![Python version support](https://img.shields.io/pypi/pyversions/audiomentations)
+[![PyPI version](https://img.shields.io/pypi/v/audiomentations.svg?style=flat)](https://pypi.org/project/audiomentations/)
+[![Number of downloads from PyPI per month](https://img.shields.io/pypi/dm/audiomentations.svg?style=flat)](https://pypi.org/project/audiomentations/)
+
+`pip install audiomentations`
+
+# Usage example
+
+```python
+from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift, Shift
+import numpy as np
+
+augment = Compose([
+ AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
+ TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
+ PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
+ Shift(min_fraction=-0.5, max_fraction=0.5, p=0.5),
+])
+
+# Generate 2 seconds of dummy audio for the sake of example
+samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)
+
+# Augment/transform/perturb the audio data
+augmented_samples = augment(samples=samples, sample_rate=16000)
+```
+
+# Documentation
+
+See [https://iver56.github.io/audiomentations/](https://iver56.github.io/audiomentations/)
+
+# Transforms
+
+* [AddBackgroundNoise](https://iver56.github.io/audiomentations/waveform_transforms/add_background_noise/)
+* [AddGaussianNoise](https://iver56.github.io/audiomentations/waveform_transforms/add_gaussian_noise/)
+* [AddGaussianSNR](https://iver56.github.io/audiomentations/waveform_transforms/add_gaussian_snr/)
+* [AddShortNoises](https://iver56.github.io/audiomentations/waveform_transforms/add_short_noises/)
+* [AirAbsorption](https://iver56.github.io/audiomentations/waveform_transforms/air_absorption/)
+* [ApplyImpulseResponse](https://iver56.github.io/audiomentations/waveform_transforms/apply_impulse_response/)
+* [BandPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/band_pass_filter/)
+* [BandStopFilter](https://iver56.github.io/audiomentations/waveform_transforms/band_stop_filter/)
+* [Clip](https://iver56.github.io/audiomentations/waveform_transforms/clip/)
+* [ClippingDistortion](https://iver56.github.io/audiomentations/waveform_transforms/clipping_distortion/)
+* [Gain](https://iver56.github.io/audiomentations/waveform_transforms/gain/)
+* [GainTransition](https://iver56.github.io/audiomentations/waveform_transforms/gain_transition/)
+* [HighPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/high_pass_filter/)
+* [HighShelfFilter](https://iver56.github.io/audiomentations/waveform_transforms/high_shelf_filter/)
+* [Lambda](https://iver56.github.io/audiomentations/waveform_transforms/lambda/)
+* [Limiter](https://iver56.github.io/audiomentations/waveform_transforms/limiter/)
+* [LoudnessNormalization](https://iver56.github.io/audiomentations/waveform_transforms/loudness_normalization/)
+* [LowPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/low_pass_filter/)
+* [LowShelfFilter](https://iver56.github.io/audiomentations/waveform_transforms/low_shelf_filter/)
+* [Mp3Compression](https://iver56.github.io/audiomentations/waveform_transforms/mp3_compression/)
+* [Normalize](https://iver56.github.io/audiomentations/waveform_transforms/normalize/)
+* [Padding](https://iver56.github.io/audiomentations/waveform_transforms/padding/)
+* [PeakingFilter](https://iver56.github.io/audiomentations/waveform_transforms/peaking_filter/)
+* [PitchShift](https://iver56.github.io/audiomentations/waveform_transforms/pitch_shift/)
+* [PolarityInversion](https://iver56.github.io/audiomentations/waveform_transforms/polarity_inversion/)
+* [Resample](https://iver56.github.io/audiomentations/waveform_transforms/resample/)
+* [Reverse](https://iver56.github.io/audiomentations/waveform_transforms/reverse/)
+* [RoomSimulator](https://iver56.github.io/audiomentations/waveform_transforms/room_simulator/)
+* [SevenBandParametricEQ](https://iver56.github.io/audiomentations/waveform_transforms/seven_band_parametric_eq/)
+* [Shift](https://iver56.github.io/audiomentations/waveform_transforms/shift/)
+* [SpecChannelShuffle](https://iver56.github.io/audiomentations/spectrogram_transforms/)
+* [SpecFrequencyMask](https://iver56.github.io/audiomentations/spectrogram_transforms/)
+* [TanhDistortion](https://iver56.github.io/audiomentations/waveform_transforms/tanh_distortion/)
+* [TimeMask](https://iver56.github.io/audiomentations/waveform_transforms/time_mask/)
+* [TimeStretch](https://iver56.github.io/audiomentations/waveform_transforms/time_stretch/)
+* [Trim](https://iver56.github.io/audiomentations/waveform_transforms/trim/)
+
+# Changelog
+
+See [https://iver56.github.io/audiomentations/changelog/](https://iver56.github.io/audiomentations/changelog/)
+
+# Acknowledgements
+
+Thanks to [Nomono](https://nomono.co/) for backing audiomentations.
+
+Thanks to [all contributors](https://github.com/iver56/audiomentations/graphs/contributors) who help improve audiomentations.
+
+
+%prep
+%autosetup -n audiomentations-0.30.0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-audiomentations -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Wed May 10 2023 Python_Bot <Python_Bot@openeuler.org> - 0.30.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..af869c5
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+fbc7d67f41e04719adb4b2beb8627b29 audiomentations-0.30.0.tar.gz
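The `sources` file pairs an MD5 checksum with the source tarball, and dist-git tooling typically uses such an entry to fetch and verify `Source0` before the build. A minimal sketch of that verification step, using a stand-in tarball so the commands are self-contained (the file contents here are illustrative, not the real release):

```shell
# Create a stand-in tarball so the example can run without downloading anything
printf 'dummy payload' > audiomentations-0.30.0.tar.gz

# Record its checksum in the same "MD5  FILENAME" layout the sources file uses
md5sum audiomentations-0.30.0.tar.gz > sources

# Verify: md5sum -c exits 0 only when every listed checksum matches its file
if md5sum -c sources; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
fi
```

If the downloaded tarball were corrupted or tampered with, `md5sum -c` would report `FAILED` and exit non-zero, which lets the import pipeline abort before `%prep` unpacks a bad archive.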