path: root/python-pytest-depends.spec
author    CoprDistGit <infra@openeuler.org>    2023-04-11 11:10:02 +0000
committer CoprDistGit <infra@openeuler.org>    2023-04-11 11:10:02 +0000
commit    c717d7383488a63a14d441fc30e87928eaae4565 (patch)
tree      8ae967cb9cdf96de7d635f2c4000db7bfe8c6a8b /python-pytest-depends.spec
parent    77513e7eca75a2dee70e54c4bb55f5503497fd12 (diff)
automatic import of python-pytest-depends
Diffstat (limited to 'python-pytest-depends.spec')
-rw-r--r--  python-pytest-depends.spec  430
1 file changed, 430 insertions, 0 deletions
diff --git a/python-pytest-depends.spec b/python-pytest-depends.spec
new file mode 100644
index 0000000..5302357
--- /dev/null
+++ b/python-pytest-depends.spec
@@ -0,0 +1,430 @@
+%global _empty_manifest_terminate_build 0
+Name: python-pytest-depends
+Version: 1.0.1
+Release: 1
+Summary: Tests that depend on other tests
+License: MIT
+URL: https://gitlab.com/maienm/pytest-depends
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/95/5b/929e7381c342ca5040136577916d0bb20f97bbadded59fdb9aad084461a2/pytest-depends-1.0.1.tar.gz
+BuildArch: noarch
+
+Requires: python3-colorama
+Requires: python3-future-fstrings
+Requires: python3-networkx
+Requires: python3-pytest
+
+%description
+# pytest-depends
+
+This pytest plugin allows you to declare dependencies between pytest tests, where dependent tests will not run if the
+tests they depend on did not succeed.
+
+Of course, tests should be self-contained whenever possible, but that doesn't mean this plugin has no good uses.
+
+This can be useful when the failure of one test means that another test cannot possibly succeed either, especially
+with slower tests. This isn't a dependency in the sense that test A sets up something for test B, but rather in the
+sense that if test A failed, there is no reason to bother running test B either.
+
+## Installation
+
+Simply install using `pip` (or `easy_install`):
+
+```
+pip install pytest-depends
+```
+
+## Usage
+
+``` python
+import os
+import subprocess
+
+import pytest
+
+BUILD_PATH = 'build'
+
+def test_build_exists():
+    assert os.path.exists(BUILD_PATH)
+
+@pytest.mark.depends(on=['test_build_exists'])
+def test_build_version():
+    # universal_newlines=True makes stdout a str, so the substring check below works.
+    result = subprocess.run([BUILD_PATH, '--version'], stdout=subprocess.PIPE, universal_newlines=True)
+    assert result.returncode == 0
+    assert '1.2.3' in result.stdout
+```
+
+This is a simple example of the situation mentioned earlier. In this case, the first test checks whether the build
+file even exists. If this fails, the other test will not be run, as there is no point in doing so.
+
+## Order
+
+This plugin will automatically re-order the tests so that tests run after the tests they depend on. If another
+plugin also reorders tests (such as `pytest-randomly`), this may cause problems, as dependencies that haven't run yet
+are considered failures.
+
+This plugin attempts to make sure it runs last to prevent this issue, but there are no guarantees this is successful. If
+you run into issues with this in combination with another plugin, feel free to open an issue.
+
+## Naming
+
+There are multiple ways to refer to each test. Let's start with an example, which we'll call `test_file.py`:
+
+``` python
+import pytest
+
+class TestClass(object):
+    @pytest.mark.depends(name='foo')
+    def test_in_class(self):
+        pass
+
+@pytest.mark.depends(name='foo')
+def test_outside_class():
+    pass
+
+def test_without_name():
+    pass
+```
+
+The `test_in_class` test will be available under the following names:
+
+- `test_file.py::TestClass::test_in_class`
+- `test_file.py::TestClass`
+- `test_file.py`
+- `foo`
+
+The `test_outside_class` test will be available under the following names:
+
+- `test_file.py::test_outside_class`
+- `test_file.py`
+- `foo`
+
+The `test_without_name` test will be available under the following names:
+
+- `test_file.py::test_without_name`
+- `test_file.py`
+
+Note how some names apply to multiple tests. Depending on `foo` in this case would mean depending on both
+`test_in_class` and `test_outside_class`, and depending on `test_file.py` would mean depending on all 3 tests in this
+file.
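+
+As a sketch of what this looks like in practice (the module name `test_deps.py` below is hypothetical, not part of the
+example above), a test can depend on any of these shared names:
+
+``` python
+import pytest
+
+@pytest.mark.depends(on=['foo'])
+def test_needs_foo():
+    # Skipped by pytest-depends unless both test_in_class and test_outside_class passed.
+    pass
+
+@pytest.mark.depends(on=['test_file.py'])
+def test_needs_whole_file():
+    # Skipped unless all three tests in test_file.py passed.
+    pass
+```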
+
+Another example, with parametrization. We'll call this one `test_params.py`:
+
+``` python
+@pytest.mark.depends(name='bar')
+@pytest.mark.parametrize('num', [
+ pytest.param(1, marks=pytest.mark.depends(name='baz')),
+ 2,
+])
+def test_with_params(num):
+ pass
+```
+
+The first run of the test, with `num = 1`, will be available under the following names:
+
+- `test_params.py::test_with_params[num0]`
+- `test_params.py::test_with_params`
+- `test_params.py`
+- `bar`
+- `baz`
+
+The second run of the test, with `num = 2`, will be available under the following names:
+
+- `test_params.py::test_with_params[num1]`
+- `test_params.py::test_with_params`
+- `test_params.py`
+- `bar`
+
+Note that the first run has a partially autogenerated name. If you want to depend on a single instance of a
+parametrized test, it's recommended to use `pytest.mark.depends(name=...)` to give it a name rather than depending on
+the autogenerated one.
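+
+For example, a minimal sketch (reusing the `test_params.py` example above) that depends only on the `num = 1` case via
+its explicit name:
+
+``` python
+import pytest
+
+@pytest.mark.depends(on=['baz'])
+def test_needs_first_param():
+    # Depends only on test_with_params[num0], through the explicit 'baz' name.
+    pass
+```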
+
+
+
+
+%package -n python3-pytest-depends
+Summary: Tests that depend on other tests
+Provides: python-pytest-depends
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-pytest-depends
+# pytest-depends
+
+This pytest plugin allows you to declare dependencies between pytest tests, where dependent tests will not run if the
+tests they depend on did not succeed.
+
+Of course, tests should be self-contained whenever possible, but that doesn't mean this plugin has no good uses.
+
+This can be useful when the failure of one test means that another test cannot possibly succeed either, especially
+with slower tests. This isn't a dependency in the sense that test A sets up something for test B, but rather in the
+sense that if test A failed, there is no reason to bother running test B either.
+
+## Installation
+
+Simply install using `pip` (or `easy_install`):
+
+```
+pip install pytest-depends
+```
+
+## Usage
+
+``` python
+import os
+import subprocess
+
+import pytest
+
+BUILD_PATH = 'build'
+
+def test_build_exists():
+    assert os.path.exists(BUILD_PATH)
+
+@pytest.mark.depends(on=['test_build_exists'])
+def test_build_version():
+    # universal_newlines=True makes stdout a str, so the substring check below works.
+    result = subprocess.run([BUILD_PATH, '--version'], stdout=subprocess.PIPE, universal_newlines=True)
+    assert result.returncode == 0
+    assert '1.2.3' in result.stdout
+```
+
+This is a simple example of the situation mentioned earlier. In this case, the first test checks whether the build
+file even exists. If this fails, the other test will not be run, as there is no point in doing so.
+
+## Order
+
+This plugin will automatically re-order the tests so that tests run after the tests they depend on. If another
+plugin also reorders tests (such as `pytest-randomly`), this may cause problems, as dependencies that haven't run yet
+are considered failures.
+
+This plugin attempts to make sure it runs last to prevent this issue, but there are no guarantees this is successful. If
+you run into issues with this in combination with another plugin, feel free to open an issue.
+
+## Naming
+
+There are multiple ways to refer to each test. Let's start with an example, which we'll call `test_file.py`:
+
+``` python
+import pytest
+
+class TestClass(object):
+    @pytest.mark.depends(name='foo')
+    def test_in_class(self):
+        pass
+
+@pytest.mark.depends(name='foo')
+def test_outside_class():
+    pass
+
+def test_without_name():
+    pass
+```
+
+The `test_in_class` test will be available under the following names:
+
+- `test_file.py::TestClass::test_in_class`
+- `test_file.py::TestClass`
+- `test_file.py`
+- `foo`
+
+The `test_outside_class` test will be available under the following names:
+
+- `test_file.py::test_outside_class`
+- `test_file.py`
+- `foo`
+
+The `test_without_name` test will be available under the following names:
+
+- `test_file.py::test_without_name`
+- `test_file.py`
+
+Note how some names apply to multiple tests. Depending on `foo` in this case would mean depending on both
+`test_in_class` and `test_outside_class`, and depending on `test_file.py` would mean depending on all 3 tests in this
+file.
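+
+As a sketch of what this looks like in practice (the module name `test_deps.py` below is hypothetical, not part of the
+example above), a test can depend on any of these shared names:
+
+``` python
+import pytest
+
+@pytest.mark.depends(on=['foo'])
+def test_needs_foo():
+    # Skipped by pytest-depends unless both test_in_class and test_outside_class passed.
+    pass
+
+@pytest.mark.depends(on=['test_file.py'])
+def test_needs_whole_file():
+    # Skipped unless all three tests in test_file.py passed.
+    pass
+```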
+
+Another example, with parametrization. We'll call this one `test_params.py`:
+
+``` python
+@pytest.mark.depends(name='bar')
+@pytest.mark.parametrize('num', [
+ pytest.param(1, marks=pytest.mark.depends(name='baz')),
+ 2,
+])
+def test_with_params(num):
+ pass
+```
+
+The first run of the test, with `num = 1`, will be available under the following names:
+
+- `test_params.py::test_with_params[num0]`
+- `test_params.py::test_with_params`
+- `test_params.py`
+- `bar`
+- `baz`
+
+The second run of the test, with `num = 2`, will be available under the following names:
+
+- `test_params.py::test_with_params[num1]`
+- `test_params.py::test_with_params`
+- `test_params.py`
+- `bar`
+
+Note that the first run has a partially autogenerated name. If you want to depend on a single instance of a
+parametrized test, it's recommended to use `pytest.mark.depends(name=...)` to give it a name rather than depending on
+the autogenerated one.
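+
+For example, a minimal sketch (reusing the `test_params.py` example above) that depends only on the `num = 1` case via
+its explicit name:
+
+``` python
+import pytest
+
+@pytest.mark.depends(on=['baz'])
+def test_needs_first_param():
+    # Depends only on test_with_params[num0], through the explicit 'baz' name.
+    pass
+```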
+
+
+
+
+%package help
+Summary: Development documents and examples for pytest-depends
+Provides: python3-pytest-depends-doc
+%description help
+# pytest-depends
+
+This pytest plugin allows you to declare dependencies between pytest tests, where dependent tests will not run if the
+tests they depend on did not succeed.
+
+Of course, tests should be self-contained whenever possible, but that doesn't mean this plugin has no good uses.
+
+This can be useful when the failure of one test means that another test cannot possibly succeed either, especially
+with slower tests. This isn't a dependency in the sense that test A sets up something for test B, but rather in the
+sense that if test A failed, there is no reason to bother running test B either.
+
+## Installation
+
+Simply install using `pip` (or `easy_install`):
+
+```
+pip install pytest-depends
+```
+
+## Usage
+
+``` python
+import os
+import subprocess
+
+import pytest
+
+BUILD_PATH = 'build'
+
+def test_build_exists():
+    assert os.path.exists(BUILD_PATH)
+
+@pytest.mark.depends(on=['test_build_exists'])
+def test_build_version():
+    # universal_newlines=True makes stdout a str, so the substring check below works.
+    result = subprocess.run([BUILD_PATH, '--version'], stdout=subprocess.PIPE, universal_newlines=True)
+    assert result.returncode == 0
+    assert '1.2.3' in result.stdout
+```
+
+This is a simple example of the situation mentioned earlier. In this case, the first test checks whether the build
+file even exists. If this fails, the other test will not be run, as there is no point in doing so.
+
+## Order
+
+This plugin will automatically re-order the tests so that tests run after the tests they depend on. If another
+plugin also reorders tests (such as `pytest-randomly`), this may cause problems, as dependencies that haven't run yet
+are considered failures.
+
+This plugin attempts to make sure it runs last to prevent this issue, but there are no guarantees this is successful. If
+you run into issues with this in combination with another plugin, feel free to open an issue.
+
+## Naming
+
+There are multiple ways to refer to each test. Let's start with an example, which we'll call `test_file.py`:
+
+``` python
+import pytest
+
+class TestClass(object):
+    @pytest.mark.depends(name='foo')
+    def test_in_class(self):
+        pass
+
+@pytest.mark.depends(name='foo')
+def test_outside_class():
+    pass
+
+def test_without_name():
+    pass
+```
+
+The `test_in_class` test will be available under the following names:
+
+- `test_file.py::TestClass::test_in_class`
+- `test_file.py::TestClass`
+- `test_file.py`
+- `foo`
+
+The `test_outside_class` test will be available under the following names:
+
+- `test_file.py::test_outside_class`
+- `test_file.py`
+- `foo`
+
+The `test_without_name` test will be available under the following names:
+
+- `test_file.py::test_without_name`
+- `test_file.py`
+
+Note how some names apply to multiple tests. Depending on `foo` in this case would mean depending on both
+`test_in_class` and `test_outside_class`, and depending on `test_file.py` would mean depending on all 3 tests in this
+file.
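+
+As a sketch of what this looks like in practice (the module name `test_deps.py` below is hypothetical, not part of the
+example above), a test can depend on any of these shared names:
+
+``` python
+import pytest
+
+@pytest.mark.depends(on=['foo'])
+def test_needs_foo():
+    # Skipped by pytest-depends unless both test_in_class and test_outside_class passed.
+    pass
+
+@pytest.mark.depends(on=['test_file.py'])
+def test_needs_whole_file():
+    # Skipped unless all three tests in test_file.py passed.
+    pass
+```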
+
+Another example, with parametrization. We'll call this one `test_params.py`:
+
+``` python
+@pytest.mark.depends(name='bar')
+@pytest.mark.parametrize('num', [
+ pytest.param(1, marks=pytest.mark.depends(name='baz')),
+ 2,
+])
+def test_with_params(num):
+ pass
+```
+
+The first run of the test, with `num = 1`, will be available under the following names:
+
+- `test_params.py::test_with_params[num0]`
+- `test_params.py::test_with_params`
+- `test_params.py`
+- `bar`
+- `baz`
+
+The second run of the test, with `num = 2`, will be available under the following names:
+
+- `test_params.py::test_with_params[num1]`
+- `test_params.py::test_with_params`
+- `test_params.py`
+- `bar`
+
+Note that the first run has a partially autogenerated name. If you want to depend on a single instance of a
+parametrized test, it's recommended to use `pytest.mark.depends(name=...)` to give it a name rather than depending on
+the autogenerated one.
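+
+For example, a minimal sketch (reusing the `test_params.py` example above) that depends only on the `num = 1` case via
+its explicit name:
+
+``` python
+import pytest
+
+@pytest.mark.depends(on=['baz'])
+def test_needs_first_param():
+    # Depends only on test_with_params[num0], through the explicit 'baz' name.
+    pass
+```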
+
+
+
+
+%prep
+%autosetup -n pytest-depends-1.0.1
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
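+# Record everything installed under the buildroot in filelist.lst so the %files sections below can consume it.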
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
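+# Man pages get compressed during packaging, so record them in doclist.lst with a .gz suffix.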
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-pytest-depends -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 1.0.1-1
+- Package Spec generated