author    CoprDistGit <infra@openeuler.org>    2023-04-10 14:44:33 +0000
committer CoprDistGit <infra@openeuler.org>    2023-04-10 14:44:33 +0000
commit    d6f91675839d96465fcd87e976a549b9defd9103 (patch)
tree      75e660f83b65f79370684e5e83980cbf7d30c752 /python-pytest-json-report.spec
parent    54687da5ddb689ef2d19ac25d141ab8e0057e443 (diff)

automatic import of python-pytest-json-report
Diffstat (limited to 'python-pytest-json-report.spec')
 -rw-r--r--  python-pytest-json-report.spec | 1475
 1 file changed, 1475 insertions, 0 deletions
diff --git a/python-pytest-json-report.spec b/python-pytest-json-report.spec
new file mode 100644
index 0000000..687c233
--- /dev/null
+++ b/python-pytest-json-report.spec
@@ -0,0 +1,1475 @@
+%global _empty_manifest_terminate_build 0
+Name: python-pytest-json-report
+Version: 1.5.0
+Release: 1
+Summary: A pytest plugin to report test results as JSON files
+License: MIT
+URL: https://github.com/numirias/pytest-json-report
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/4f/d3/765dae9712fcd68d820338908c1337e077d5fdadccd5cacf95b9b0bea278/pytest-json-report-1.5.0.tar.gz
+BuildArch: noarch
+
+Requires: python3-pytest
+Requires: python3-pytest-metadata
+
+%description
+# Pytest JSON Report
+
+[![CI](https://github.com/numirias/pytest-json-report/actions/workflows/main.yml/badge.svg)](https://github.com/numirias/pytest-json-report/actions/workflows/main.yml)
+[![PyPI Version](https://img.shields.io/pypi/v/pytest-json-report.svg)](https://pypi.python.org/pypi/pytest-json-report)
+[![Python Versions](https://img.shields.io/pypi/pyversions/pytest-json-report.svg)](https://pypi.python.org/pypi/pytest-json-report)
+
+This pytest plugin creates test reports as JSON. This makes it easy to process test results in other applications.
+
+It can report a summary, test details, captured output, logs, exception tracebacks and more. Additionally, you can use the available fixtures and hooks to [add metadata](#metadata) and [customize](#modifying-the-report) the report as you like.
+
+## Table of contents
+
+* [Installation](#installation)
+* [Options](#options)
+* [Usage](#usage)
+ * [Metadata](#metadata)
+ * [Modifying the report](#modifying-the-report)
+ * [Direct invocation](#direct-invocation)
+* [Format](#format)
+ * [Summary](#summary)
+ * [Environment](#environment)
+ * [Collectors](#collectors)
+ * [Tests](#tests)
+ * [Test stage](#test-stage)
+ * [Log](#log)
+ * [Warnings](#warnings)
+* [Related tools](#related-tools)
+
+## Installation
+
+```
+pip install pytest-json-report --upgrade
+```
+
+## Options
+
+| Option | Description |
+| --- | --- |
+| `--json-report` | Create JSON report |
+| `--json-report-file=PATH` | Target path to save JSON report (use "none" to not save the report) |
+| `--json-report-summary` | Just create a summary without per-test details |
+| `--json-report-omit=FIELD_LIST` | List of fields to omit in the report (choose from: `collectors`, `log`, `traceback`, `streams`, `warnings`, `keywords`) |
+| `--json-report-indent=LEVEL` | Pretty-print JSON with specified indentation level |
+| `--json-report-verbosity=LEVEL` | Set verbosity (default is value of `--verbosity`) |
+
+## Usage
+
+Just run pytest with `--json-report`. The report is saved in `.report.json` by default.
+
+```bash
+$ pytest --json-report -v tests/
+$ cat .report.json
+{"created": 1518371686.7981803, ... "tests":[{"nodeid": "test_foo.py", "outcome": "passed", ...}, ...]}
+```
+
+If you just need to know how many tests passed or failed and don't care about details, you can produce a summary only:
+
+```bash
+$ pytest --json-report --json-report-summary
+```
+
+Many fields can be omitted to keep the report size small. E.g., this will leave out keywords and stdout/stderr output:
+
+```bash
+$ pytest --json-report --json-report-omit keywords streams
+```
+
+If you don't want the report saved at all, you can specify `none` as the target file name:
+
+```bash
+$ pytest --json-report --json-report-file none
+```
+
+## Advanced usage
+
+### Metadata
+
+The easiest way to add your own metadata to a test item is by using the `json_metadata` [test fixture](https://docs.pytest.org/en/stable/fixture.html):
+
+```python
+def test_something(json_metadata):
+ json_metadata['foo'] = {"some": "thing"}
+ json_metadata['bar'] = 123
+```
+
+Or use the `pytest_json_runtest_metadata` [hook](https://docs.pytest.org/en/stable/reference.html#hooks) (in your `conftest.py`) to add metadata based on the current test run. The dict returned will automatically be merged with any existing metadata. E.g., this adds the start and stop time of each test's `call` stage:
+
+```python
+def pytest_json_runtest_metadata(item, call):
+ if call.when != 'call':
+ return {}
+ return {'start': call.start, 'stop': call.stop}
+```
+
+Also, you can add metadata via [pytest-metadata's `--metadata` switch](https://github.com/pytest-dev/pytest-metadata#additional-metadata), which adds it to the report's `environment` section rather than to a specific test item. Make sure all your metadata is JSON-serializable.
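+
+A short sketch of that approach from code, assuming pytest-metadata's `--metadata KEY VALUE` form and that both plugins are installed (the `build`/`123` pair is only an illustration):
+
+```python
+import pytest
+
+# Adds {"build": "123"} to the report's "environment" section,
+# not to any individual test item.
+pytest.main(["--json-report", "--metadata", "build", "123", "tests/"])
+```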
+
+### A note on hooks
+
+If you implement a `pytest_json_*` hook while the plugin is not installed or not active (i.e. pytest is run without `--json-report`), pytest doesn't recognize the hook and may fail with an internal error like this:
+```
+INTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>
+```
+You can avoid this by declaring the hook implementation optional:
+
+```python
+import pytest
+@pytest.hookimpl(optionalhook=True)
+def pytest_json_runtest_metadata(item, call):
+ ...
+```
+
+### Modifying the report
+
+You can modify the entire report before it's saved by using the `pytest_json_modifyreport` hook.
+
+Just implement the hook in your `conftest.py`, e.g.:
+
+```python
+def pytest_json_modifyreport(json_report):
+ # Add a key to the report
+ json_report['foo'] = 'bar'
+ # Delete the summary from the report
+ del json_report['summary']
+```
+
+After `pytest_sessionfinish`, the report object is also available directly via `config._json_report.report`, so you can access it from any built-in hook:
+
+```python
+def pytest_sessionfinish(session):
+ report = session.config._json_report.report
+ print('exited with', report['exitcode'])
+```
+
+If you *really* want to change how the result of a test stage run is turned into JSON, you can use the `pytest_json_runtest_stage` hook. It takes a [`TestReport`](https://docs.pytest.org/en/latest/reference.html#_pytest.runner.TestReport) and returns a JSON-serializable dict:
+
+```python
+def pytest_json_runtest_stage(report):
+ return {'outcome': report.outcome}
+```
+
+### Direct invocation
+
+You can use the plugin when invoking `pytest.main()` directly from code:
+
+```python
+import pytest
+from pytest_jsonreport.plugin import JSONReport
+
+plugin = JSONReport()
+pytest.main(['--json-report-file=none', 'test_foo.py'], plugins=[plugin])
+```
+
+You can then access the `report` object:
+
+```python
+print(plugin.report)
+```
+
+And save the report manually:
+
+```python
+plugin.save_report('/tmp/my_report.json')
+```
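+
+Putting these pieces together, a minimal end-to-end sketch (the test path and output path are placeholders):
+
+```python
+import pytest
+from pytest_jsonreport.plugin import JSONReport
+
+plugin = JSONReport()
+# Keep the report in memory only; no file is written.
+exit_code = pytest.main(["--json-report-file=none", "tests/"], plugins=[plugin])
+
+report = plugin.report
+print("pytest exit code:", exit_code)
+print("report exit code:", report["exitcode"])
+plugin.save_report("/tmp/my_report.json")  # persist it later if needed
+```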
+
+
+## Format
+
+The JSON report contains metadata of the session, a summary, collectors, tests and warnings. You can find a sample report in [`sample_report.json`](sample_report.json).
+
+| Key | Description |
+| --- | --- |
+| `created` | Report creation date. (Unix time) |
+| `duration` | Session duration in seconds. |
+| `exitcode` | Process exit code as listed [in the pytest docs](https://docs.pytest.org/en/latest/usage.html#possible-exit-codes). The exit code is a quick way to tell if any tests failed, an internal error occurred, etc. |
+| `root` | Absolute root path from which the session was started. |
+| `environment` | [Environment](#environment) entry. |
+| `summary` | [Summary](#summary) entry. |
+| `collectors` | [Collectors](#collectors) entry. (absent if `--json-report-summary` or if no collectors) |
+| `tests` | [Tests](#tests) entry. (absent if `--json-report-summary`) |
+| `warnings` | [Warnings](#warnings) entry. (absent if `--json-report-summary` or if no warnings) |
+
+#### Example
+
+```python
+{
+ "created": 1518371686.7981803,
+ "duration": 0.1235666275024414,
+ "exitcode": 1,
+ "root": "/path/to/tests",
+ "environment": ENVIRONMENT,
+ "summary": SUMMARY,
+ "collectors": COLLECTORS,
+ "tests": TESTS,
+ "warnings": WARNINGS,
+}
+```
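+
+Since the report is plain JSON, consuming it from another script is straightforward. A minimal sketch, assuming a report was saved at the default path:
+
+```python
+import json
+
+with open(".report.json") as f:
+    report = json.load(f)
+
+# Top-level keys as documented above; some are absent with --json-report-summary.
+print("exit code:", report["exitcode"])
+print("duration: %.2f s" % report["duration"])
+print("collected:", report["summary"]["collected"])
+```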
+
+### Summary
+
+Number of outcomes per category and the total number of test items.
+
+| Key | Description |
+| --- | --- |
+| `collected` | Total number of tests collected. |
+| `total` | Total number of tests run. |
+| `deselected` | Total number of tests deselected. (absent if number is 0) |
+| `<outcome>` | Number of tests with that outcome. (absent if number is 0) |
+
+#### Example
+
+```python
+{
+ "collected": 10,
+ "passed": 2,
+ "failed": 3,
+ "xfailed": 1,
+ "xpassed": 1,
+ "error": 2,
+ "skipped": 1,
+ "total": 10
+}
+```
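+
+Because outcome counts are absent when they are zero, consumers should read them with a default, e.g.:
+
+```python
+# The summary from the example above.
+summary = {"collected": 10, "passed": 2, "failed": 3, "xfailed": 1,
+           "xpassed": 1, "error": 2, "skipped": 1, "total": 10}
+
+# Absent outcome keys mean zero, so read them with .get() and a default.
+broken = summary.get("failed", 0) + summary.get("error", 0)
+print(f"pass rate: {summary.get('passed', 0) / summary['total']:.0%}")
+print(f"failed or errored: {broken}")
+```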
+
+### Environment
+
+The environment section is provided by [pytest-metadata](https://github.com/pytest-dev/pytest-metadata). All metadata given by that plugin will be added here, so you need to make sure it is JSON-serializable.
+
+#### Example
+
+```python
+{
+ "Python": "3.6.4",
+ "Platform": "Linux-4.56.78-9-ARCH-x86_64-with-arch",
+ "Packages": {
+ "pytest": "3.4.0",
+ "py": "1.5.2",
+ "pluggy": "0.6.0"
+ },
+ "Plugins": {
+ "json-report": "0.4.1",
+ "xdist": "1.22.0",
+ "metadata": "1.5.1",
+ "forked": "0.2",
+ "cov": "2.5.1"
+ },
+ "foo": "bar", # Custom metadata entry passed via pytest-metadata
+}
+```
+
+### Collectors
+
+A list of collector nodes. These are useful to check what tests are available without running them, or to debug an error during test discovery.
+
+| Key | Description |
+| --- | --- |
+| `nodeid` | ID of the collector node. ([See docs](https://docs.pytest.org/en/latest/example/markers.html#node-id)) The root node has an empty node ID. |
+| `outcome` | Outcome of the collection. (Not the test outcome!) |
+| `result` | Nodes collected by the collector. |
+| `longrepr` | Representation of the collection error. (absent if no error occurred) |
+
+The `result` is a list of the collected nodes:
+
+| Key | Description |
+| --- | --- |
+| `nodeid` | ID of the node. |
+| `type` | Type of the collected node. |
+| `lineno` | Line number. (absent if not applicable) |
+| `deselected` | `true` if the test is deselected. (absent if not deselected) |
+
+#### Example
+
+```python
+[
+ {
+ "nodeid": "",
+ "outcome": "passed",
+ "result": [
+ {
+ "nodeid": "test_foo.py",
+ "type": "Module"
+ }
+ ]
+ },
+ {
+ "nodeid": "test_foo.py",
+ "outcome": "passed",
+ "result": [
+ {
+ "nodeid": "test_foo.py::test_pass",
+ "type": "Function",
+ "lineno": 24,
+ "deselected": true
+ },
+ ...
+ ]
+ },
+ {
+ "nodeid": "test_bar.py",
+ "outcome": "failed",
+ "result": [],
+ "longrepr": "/usr/lib/python3.6 ... invalid syntax"
+ },
+ ...
+]
+```
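+
+For example, this sketch lists every collected test function from a saved report (default path assumed) without touching the test results:
+
+```python
+import json
+
+with open(".report.json") as f:
+    report = json.load(f)
+
+# Walk the collector entries and list every collected test function.
+for collector in report.get("collectors", []):
+    for node in collector["result"]:
+        if node.get("type") == "Function":
+            suffix = " (deselected)" if node.get("deselected") else ""
+            print(f"{node['nodeid']}{suffix}")
+```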
+
+### Tests
+
+A list of test nodes. Each completed test stage produces a stage object (`setup`, `call`, `teardown`) with its own `outcome`.
+
+| Key | Description |
+| --- | --- |
+| `nodeid` | ID of the test node. |
+| `lineno` | Line number where the test starts. |
+| `keywords` | List of keywords and markers associated with the test. |
+| `outcome` | Outcome of the test run. |
+| `{setup, call, teardown}` | [Test stage](#test-stage) entry. To find the error in a failed test you need to check all stages. (absent if stage didn't run) |
+| `metadata` | [Metadata](#metadata) item. (absent if no metadata) |
+
+#### Example
+
+```python
+[
+ {
+ "nodeid": "test_foo.py::test_fail",
+ "lineno": 50,
+ "keywords": [
+ "test_fail",
+ "test_foo.py",
+ "test_foo0"
+ ],
+ "outcome": "failed",
+ "setup": TEST_STAGE,
+ "call": TEST_STAGE,
+ "teardown": TEST_STAGE,
+ "metadata": {
+ "foo": "bar",
+ }
+ },
+ ...
+]
+```
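+
+A sketch that prints only the failing tests from a saved report (default path assumed):
+
+```python
+import json
+
+with open(".report.json") as f:
+    report = json.load(f)
+
+# Print the node ID of every test that failed or errored.
+for test in report.get("tests", []):
+    if test["outcome"] in ("failed", "error"):
+        print(test["nodeid"], "->", test["outcome"])
+```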
+
+
+### Test stage
+
+A test stage item.
+
+| Key | Description |
+| --- | --- |
+| `duration` | Duration of the test stage in seconds. |
+| `outcome` | Outcome of the test stage. (can be different from the overall test outcome) |
+| `crash` | Crash entry. (absent if no error occurred) |
+| `traceback` | List of traceback entries. (absent if no error occurred; affected by `--tb` option) |
+| `stdout` | Standard output. (absent if none available) |
+| `stderr` | Standard error. (absent if none available) |
+| `log` | [Log](#log) entry. (absent if none available) |
+| `longrepr` | Representation of the error. (absent if no error occurred; format affected by `--tb` option) |
+
+#### Example
+
+```python
+{
+ "duration": 0.00018835067749023438,
+ "outcome": "failed",
+ "crash": {
+ "path": "/path/to/tests/test_foo.py",
+ "lineno": 54,
+ "message": "TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'"
+ },
+ "traceback": [
+ {
+ "path": "test_foo.py",
+ "lineno": 65,
+ "message": ""
+ },
+ {
+ "path": "test_foo.py",
+ "lineno": 63,
+ "message": "in foo"
+ },
+ {
+ "path": "test_foo.py",
+ "lineno": 63,
+ "message": "in <listcomp>"
+ },
+ {
+ "path": "test_foo.py",
+ "lineno": 54,
+ "message": "TypeError"
+ }
+ ],
+ "stdout": "foo\nbar\n",
+ "stderr": "baz\n",
+ "log": LOG,
+ "longrepr": "def test_fail_nested():\n ..."
+}
+```
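+
+As the table notes, the error of a failed test can live in any stage, so a consumer has to check `setup`, `call` and `teardown`. A minimal sketch:
+
+```python
+def find_crash(test):
+    """Return (stage name, crash entry) for the first crashed stage, or None."""
+    for stage_name in ("setup", "call", "teardown"):
+        stage = test.get(stage_name)
+        if stage and "crash" in stage:
+            return stage_name, stage["crash"]
+    return None
+
+# A trimmed-down test item with a failing call stage, as in the example above.
+test = {
+    "nodeid": "test_foo.py::test_fail",
+    "call": {"outcome": "failed",
+             "crash": {"path": "test_foo.py", "lineno": 54,
+                       "message": "TypeError: unsupported operand type(s) ..."}},
+}
+print(find_crash(test))
+```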
+
+### Log
+
+A list of log records. The fields of a log record are the [`logging.LogRecord` attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes), with the exception that the fields `exc_info` and `args` are always empty and `msg` contains the formatted log message.
+
+You can apply [`logging.makeLogRecord()`](https://docs.python.org/3/library/logging.html#logging.makeLogRecord) on a log record to convert it back to a `logging.LogRecord` object.
+
+#### Example
+
+```python
+[
+ {
+ "name": "root",
+ "msg": "This is a warning.",
+ "args": null,
+ "levelname": "WARNING",
+ "levelno": 30,
+ "pathname": "/path/to/tests/test_foo.py",
+ "filename": "test_foo.py",
+ "module": "test_foo",
+ "exc_info": null,
+ "exc_text": null,
+ "stack_info": null,
+ "lineno": 8,
+ "funcName": "foo",
+ "created": 1519772464.291738,
+ "msecs": 291.73803329467773,
+ "relativeCreated": 332.90839195251465,
+ "thread": 140671803118912,
+ "threadName": "MainThread",
+ "processName": "MainProcess",
+ "process": 31481
+ },
+ ...
+]
+```
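+
+For example, rebuilding a `logging.LogRecord` from a (trimmed) record of the report:
+
+```python
+import logging
+
+# A trimmed-down record based on the example above.
+record_dict = {
+    "name": "root", "msg": "This is a warning.", "args": None,
+    "levelname": "WARNING", "levelno": 30, "lineno": 8, "funcName": "foo",
+    "pathname": "/path/to/tests/test_foo.py", "created": 1519772464.291738,
+}
+record = logging.makeLogRecord(record_dict)
+print(record.getMessage(), record.levelname)
+```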
+
+
+### Warnings
+
+A list of warnings that occurred during the session. (See the [pytest docs on warnings](https://docs.pytest.org/en/latest/warnings.html).)
+
+| Key | Description |
+| --- | --- |
+| `filename` | File name. |
+| `lineno` | Line number. |
+| `message` | Warning message. |
+| `when` | When the warning was captured. (`"config"`, `"collect"` or `"runtest"` as listed [here](https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_warning_captured)) |
+
+#### Example
+
+```python
+[
+ {
+ "code": "C1",
+ "path": "/path/to/tests/test_foo.py",
+ "nodeid": "test_foo.py::TestFoo",
+ "message": "cannot collect test class 'TestFoo' because it has a __init__ constructor"
+ }
+]
+```
+
+## Related tools
+
+- [pytest-json](https://github.com/mattcl/pytest-json) has some great features but appears to be unmaintained. I borrowed some ideas and test cases from there.
+
+- [tox has a switch](http://tox.readthedocs.io/en/latest/example/result.html) to create a JSON report including a test result summary. However, it just provides the overall outcome without any per-test details.
+
+
+
+
+%package -n python3-pytest-json-report
+Summary: A pytest plugin to report test results as JSON files
+Provides: python-pytest-json-report
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-pytest-json-report
+# Pytest JSON Report
+
+[![CI](https://github.com/numirias/pytest-json-report/actions/workflows/main.yml/badge.svg)](https://github.com/numirias/pytest-json-report/actions/workflows/main.yml)
+[![PyPI Version](https://img.shields.io/pypi/v/pytest-json-report.svg)](https://pypi.python.org/pypi/pytest-json-report)
+[![Python Versions](https://img.shields.io/pypi/pyversions/pytest-json-report.svg)](https://pypi.python.org/pypi/pytest-json-report)
+
+This pytest plugin creates test reports as JSON. This makes it easy to process test results in other applications.
+
+It can report a summary, test details, captured output, logs, exception tracebacks and more. Additionally, you can use the available fixtures and hooks to [add metadata](#metadata) and [customize](#modifying-the-report) the report as you like.
+
+## Table of contents
+
+* [Installation](#installation)
+* [Options](#options)
+* [Usage](#usage)
+ * [Metadata](#metadata)
+ * [Modifying the report](#modifying-the-report)
+ * [Direct invocation](#direct-invocation)
+* [Format](#format)
+ * [Summary](#summary)
+ * [Environment](#environment)
+ * [Collectors](#collectors)
+ * [Tests](#tests)
+ * [Test stage](#test-stage)
+ * [Log](#log)
+ * [Warnings](#warnings)
+* [Related tools](#related-tools)
+
+## Installation
+
+```
+pip install pytest-json-report --upgrade
+```
+
+## Options
+
+| Option | Description |
+| --- | --- |
+| `--json-report` | Create JSON report |
+| `--json-report-file=PATH` | Target path to save JSON report (use "none" to not save the report) |
+| `--json-report-summary` | Just create a summary without per-test details |
+| `--json-report-omit=FIELD_LIST` | List of fields to omit in the report (choose from: `collectors`, `log`, `traceback`, `streams`, `warnings`, `keywords`) |
+| `--json-report-indent=LEVEL` | Pretty-print JSON with specified indentation level |
+| `--json-report-verbosity=LEVEL` | Set verbosity (default is value of `--verbosity`) |
+
+## Usage
+
+Just run pytest with `--json-report`. The report is saved in `.report.json` by default.
+
+```bash
+$ pytest --json-report -v tests/
+$ cat .report.json
+{"created": 1518371686.7981803, ... "tests":[{"nodeid": "test_foo.py", "outcome": "passed", ...}, ...]}
+```
+
+If you just need to know how many tests passed or failed and don't care about details, you can produce a summary only:
+
+```bash
+$ pytest --json-report --json-report-summary
+```
+
+Many fields can be omitted to keep the report size small. E.g., this will leave out keywords and stdout/stderr output:
+
+```bash
+$ pytest --json-report --json-report-omit keywords streams
+```
+
+If you don't want the report saved at all, you can specify `none` as the target file name:
+
+```bash
+$ pytest --json-report --json-report-file none
+```
+
+## Advanced usage
+
+### Metadata
+
+The easiest way to add your own metadata to a test item is by using the `json_metadata` [test fixture](https://docs.pytest.org/en/stable/fixture.html):
+
+```python
+def test_something(json_metadata):
+ json_metadata['foo'] = {"some": "thing"}
+ json_metadata['bar'] = 123
+```
+
+Or use the `pytest_json_runtest_metadata` [hook](https://docs.pytest.org/en/stable/reference.html#hooks) (in your `conftest.py`) to add metadata based on the current test run. The dict returned will automatically be merged with any existing metadata. E.g., this adds the start and stop time of each test's `call` stage:
+
+```python
+def pytest_json_runtest_metadata(item, call):
+ if call.when != 'call':
+ return {}
+ return {'start': call.start, 'stop': call.stop}
+```
+
+Also, you can add metadata via [pytest-metadata's `--metadata` switch](https://github.com/pytest-dev/pytest-metadata#additional-metadata), which adds it to the report's `environment` section rather than to a specific test item. Make sure all your metadata is JSON-serializable.
+
+### A note on hooks
+
+If you implement a `pytest_json_*` hook while the plugin is not installed or not active (i.e. pytest is run without `--json-report`), pytest doesn't recognize the hook and may fail with an internal error like this:
+```
+INTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>
+```
+You can avoid this by declaring the hook implementation optional:
+
+```python
+import pytest
+@pytest.hookimpl(optionalhook=True)
+def pytest_json_runtest_metadata(item, call):
+ ...
+```
+
+### Modifying the report
+
+You can modify the entire report before it's saved by using the `pytest_json_modifyreport` hook.
+
+Just implement the hook in your `conftest.py`, e.g.:
+
+```python
+def pytest_json_modifyreport(json_report):
+ # Add a key to the report
+ json_report['foo'] = 'bar'
+ # Delete the summary from the report
+ del json_report['summary']
+```
+
+After `pytest_sessionfinish`, the report object is also available directly via `config._json_report.report`, so you can access it from any built-in hook:
+
+```python
+def pytest_sessionfinish(session):
+ report = session.config._json_report.report
+ print('exited with', report['exitcode'])
+```
+
+If you *really* want to change how the result of a test stage run is turned into JSON, you can use the `pytest_json_runtest_stage` hook. It takes a [`TestReport`](https://docs.pytest.org/en/latest/reference.html#_pytest.runner.TestReport) and returns a JSON-serializable dict:
+
+```python
+def pytest_json_runtest_stage(report):
+ return {'outcome': report.outcome}
+```
+
+### Direct invocation
+
+You can use the plugin when invoking `pytest.main()` directly from code:
+
+```python
+import pytest
+from pytest_jsonreport.plugin import JSONReport
+
+plugin = JSONReport()
+pytest.main(['--json-report-file=none', 'test_foo.py'], plugins=[plugin])
+```
+
+You can then access the `report` object:
+
+```python
+print(plugin.report)
+```
+
+And save the report manually:
+
+```python
+plugin.save_report('/tmp/my_report.json')
+```
+
+
+## Format
+
+The JSON report contains metadata of the session, a summary, collectors, tests and warnings. You can find a sample report in [`sample_report.json`](sample_report.json).
+
+| Key | Description |
+| --- | --- |
+| `created` | Report creation date. (Unix time) |
+| `duration` | Session duration in seconds. |
+| `exitcode` | Process exit code as listed [in the pytest docs](https://docs.pytest.org/en/latest/usage.html#possible-exit-codes). The exit code is a quick way to tell if any tests failed, an internal error occurred, etc. |
+| `root` | Absolute root path from which the session was started. |
+| `environment` | [Environment](#environment) entry. |
+| `summary` | [Summary](#summary) entry. |
+| `collectors` | [Collectors](#collectors) entry. (absent if `--json-report-summary` or if no collectors) |
+| `tests` | [Tests](#tests) entry. (absent if `--json-report-summary`) |
+| `warnings` | [Warnings](#warnings) entry. (absent if `--json-report-summary` or if no warnings) |
+
+#### Example
+
+```python
+{
+ "created": 1518371686.7981803,
+ "duration": 0.1235666275024414,
+ "exitcode": 1,
+ "root": "/path/to/tests",
+ "environment": ENVIRONMENT,
+ "summary": SUMMARY,
+ "collectors": COLLECTORS,
+ "tests": TESTS,
+ "warnings": WARNINGS,
+}
+```
+
+### Summary
+
+Number of outcomes per category and the total number of test items.
+
+| Key | Description |
+| --- | --- |
+| `collected` | Total number of tests collected. |
+| `total` | Total number of tests run. |
+| `deselected` | Total number of tests deselected. (absent if number is 0) |
+| `<outcome>` | Number of tests with that outcome. (absent if number is 0) |
+
+#### Example
+
+```python
+{
+ "collected": 10,
+ "passed": 2,
+ "failed": 3,
+ "xfailed": 1,
+ "xpassed": 1,
+ "error": 2,
+ "skipped": 1,
+ "total": 10
+}
+```
+
+### Environment
+
+The environment section is provided by [pytest-metadata](https://github.com/pytest-dev/pytest-metadata). All metadata given by that plugin will be added here, so you need to make sure it is JSON-serializable.
+
+#### Example
+
+```python
+{
+ "Python": "3.6.4",
+ "Platform": "Linux-4.56.78-9-ARCH-x86_64-with-arch",
+ "Packages": {
+ "pytest": "3.4.0",
+ "py": "1.5.2",
+ "pluggy": "0.6.0"
+ },
+ "Plugins": {
+ "json-report": "0.4.1",
+ "xdist": "1.22.0",
+ "metadata": "1.5.1",
+ "forked": "0.2",
+ "cov": "2.5.1"
+ },
+ "foo": "bar", # Custom metadata entry passed via pytest-metadata
+}
+```
+
+### Collectors
+
+A list of collector nodes. These are useful to check what tests are available without running them, or to debug an error during test discovery.
+
+| Key | Description |
+| --- | --- |
+| `nodeid` | ID of the collector node. ([See docs](https://docs.pytest.org/en/latest/example/markers.html#node-id)) The root node has an empty node ID. |
+| `outcome` | Outcome of the collection. (Not the test outcome!) |
+| `result` | Nodes collected by the collector. |
+| `longrepr` | Representation of the collection error. (absent if no error occurred) |
+
+The `result` is a list of the collected nodes:
+
+| Key | Description |
+| --- | --- |
+| `nodeid` | ID of the node. |
+| `type` | Type of the collected node. |
+| `lineno` | Line number. (absent if not applicable) |
+| `deselected` | `true` if the test is deselected. (absent if not deselected) |
+
+#### Example
+
+```python
+[
+ {
+ "nodeid": "",
+ "outcome": "passed",
+ "result": [
+ {
+ "nodeid": "test_foo.py",
+ "type": "Module"
+ }
+ ]
+ },
+ {
+ "nodeid": "test_foo.py",
+ "outcome": "passed",
+ "result": [
+ {
+ "nodeid": "test_foo.py::test_pass",
+ "type": "Function",
+ "lineno": 24,
+ "deselected": true
+ },
+ ...
+ ]
+ },
+ {
+ "nodeid": "test_bar.py",
+ "outcome": "failed",
+ "result": [],
+ "longrepr": "/usr/lib/python3.6 ... invalid syntax"
+ },
+ ...
+]
+```
+
+### Tests
+
+A list of test nodes. Each completed test stage produces a stage object (`setup`, `call`, `teardown`) with its own `outcome`.
+
+| Key | Description |
+| --- | --- |
+| `nodeid` | ID of the test node. |
+| `lineno` | Line number where the test starts. |
+| `keywords` | List of keywords and markers associated with the test. |
+| `outcome` | Outcome of the test run. |
+| `{setup, call, teardown}` | [Test stage](#test-stage) entry. To find the error in a failed test you need to check all stages. (absent if stage didn't run) |
+| `metadata` | [Metadata](#metadata) item. (absent if no metadata) |
+
+#### Example
+
+```python
+[
+ {
+ "nodeid": "test_foo.py::test_fail",
+ "lineno": 50,
+ "keywords": [
+ "test_fail",
+ "test_foo.py",
+ "test_foo0"
+ ],
+ "outcome": "failed",
+ "setup": TEST_STAGE,
+ "call": TEST_STAGE,
+ "teardown": TEST_STAGE,
+ "metadata": {
+ "foo": "bar",
+ }
+ },
+ ...
+]
+```
+
+
+### Test stage
+
+A test stage item.
+
+| Key | Description |
+| --- | --- |
+| `duration` | Duration of the test stage in seconds. |
+| `outcome` | Outcome of the test stage. (can be different from the overall test outcome) |
+| `crash` | Crash entry. (absent if no error occurred) |
+| `traceback` | List of traceback entries. (absent if no error occurred; affected by `--tb` option) |
+| `stdout` | Standard output. (absent if none available) |
+| `stderr` | Standard error. (absent if none available) |
+| `log` | [Log](#log) entry. (absent if none available) |
+| `longrepr` | Representation of the error. (absent if no error occurred; format affected by `--tb` option) |
+
+#### Example
+
+```python
+{
+ "duration": 0.00018835067749023438,
+ "outcome": "failed",
+ "crash": {
+ "path": "/path/to/tests/test_foo.py",
+ "lineno": 54,
+ "message": "TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'"
+ },
+ "traceback": [
+ {
+ "path": "test_foo.py",
+ "lineno": 65,
+ "message": ""
+ },
+ {
+ "path": "test_foo.py",
+ "lineno": 63,
+ "message": "in foo"
+ },
+ {
+ "path": "test_foo.py",
+ "lineno": 63,
+ "message": "in <listcomp>"
+ },
+ {
+ "path": "test_foo.py",
+ "lineno": 54,
+ "message": "TypeError"
+ }
+ ],
+ "stdout": "foo\nbar\n",
+ "stderr": "baz\n",
+ "log": LOG,
+ "longrepr": "def test_fail_nested():\n ..."
+}
+```
+
+### Log
+
+A list of log records. The fields of a log record are the [`logging.LogRecord` attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes), with the exception that the fields `exc_info` and `args` are always empty and `msg` contains the formatted log message.
+
+You can apply [`logging.makeLogRecord()`](https://docs.python.org/3/library/logging.html#logging.makeLogRecord) on a log record to convert it back to a `logging.LogRecord` object.
+
+#### Example
+
+```python
+[
+ {
+ "name": "root",
+ "msg": "This is a warning.",
+ "args": null,
+ "levelname": "WARNING",
+ "levelno": 30,
+ "pathname": "/path/to/tests/test_foo.py",
+ "filename": "test_foo.py",
+ "module": "test_foo",
+ "exc_info": null,
+ "exc_text": null,
+ "stack_info": null,
+ "lineno": 8,
+ "funcName": "foo",
+ "created": 1519772464.291738,
+ "msecs": 291.73803329467773,
+ "relativeCreated": 332.90839195251465,
+ "thread": 140671803118912,
+ "threadName": "MainThread",
+ "processName": "MainProcess",
+ "process": 31481
+ },
+ ...
+]
+```
+
+
+### Warnings
+
+A list of warnings that occurred during the session. (See the [pytest docs on warnings](https://docs.pytest.org/en/latest/warnings.html).)
+
+| Key | Description |
+| --- | --- |
+| `filename` | File name. |
+| `lineno` | Line number. |
+| `message` | Warning message. |
+| `when` | When the warning was captured. (`"config"`, `"collect"` or `"runtest"` as listed [here](https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_warning_captured)) |
+
+#### Example
+
+```python
+[
+ {
+ "code": "C1",
+ "path": "/path/to/tests/test_foo.py",
+ "nodeid": "test_foo.py::TestFoo",
+ "message": "cannot collect test class 'TestFoo' because it has a __init__ constructor"
+ }
+]
+```
+
+## Related tools
+
+- [pytest-json](https://github.com/mattcl/pytest-json) has some great features but appears to be unmaintained. I borrowed some ideas and test cases from there.
+
+- [tox has a switch](http://tox.readthedocs.io/en/latest/example/result.html) to create a JSON report including a test result summary. However, it just provides the overall outcome without any per-test details.
+
+
+
+
+%package help
+Summary: Development documents and examples for pytest-json-report
+Provides: python3-pytest-json-report-doc
+%description help
+# Pytest JSON Report
+
+[![CI](https://github.com/numirias/pytest-json-report/actions/workflows/main.yml/badge.svg)](https://github.com/numirias/pytest-json-report/actions/workflows/main.yml)
+[![PyPI Version](https://img.shields.io/pypi/v/pytest-json-report.svg)](https://pypi.python.org/pypi/pytest-json-report)
+[![Python Versions](https://img.shields.io/pypi/pyversions/pytest-json-report.svg)](https://pypi.python.org/pypi/pytest-json-report)
+
+This pytest plugin creates test reports as JSON. This makes it easy to process test results in other applications.
+
+It can report a summary, test details, captured output, logs, exception tracebacks and more. Additionally, you can use the available fixtures and hooks to [add metadata](#metadata) and [customize](#modifying-the-report) the report as you like.
+
+## Table of contents
+
+* [Installation](#installation)
+* [Options](#options)
+* [Usage](#usage)
+ * [Metadata](#metadata)
+ * [Modifying the report](#modifying-the-report)
+ * [Direct invocation](#direct-invocation)
+* [Format](#format)
+ * [Summary](#summary)
+ * [Environment](#environment)
+ * [Collectors](#collectors)
+ * [Tests](#tests)
+ * [Test stage](#test-stage)
+ * [Log](#log)
+ * [Warnings](#warnings)
+* [Related tools](#related-tools)
+
+## Installation
+
+```
+pip install pytest-json-report --upgrade
+```
+
+## Options
+
+| Option | Description |
+| --- | --- |
+| `--json-report` | Create JSON report |
+| `--json-report-file=PATH` | Target path to save JSON report (use "none" to not save the report) |
+| `--json-report-summary` | Just create a summary without per-test details |
+| `--json-report-omit=FIELD_LIST` | List of fields to omit in the report (choose from: `collectors`, `log`, `traceback`, `streams`, `warnings`, `keywords`) |
+| `--json-report-indent=LEVEL` | Pretty-print JSON with specified indentation level |
+| `--json-report-verbosity=LEVEL` | Set verbosity (default is value of `--verbosity`) |
+
+## Usage
+
+Just run pytest with `--json-report`. The report is saved in `.report.json` by default.
+
+```bash
+$ pytest --json-report -v tests/
+$ cat .report.json
+{"created": 1518371686.7981803, ... "tests":[{"nodeid": "test_foo.py", "outcome": "passed", ...}, ...]}
+```
+
+If you just need to know how many tests passed or failed and don't care about details, you can produce a summary only:
+
+```bash
+$ pytest --json-report --json-report-summary
+```
+
+Many fields can be omitted to keep the report size small. E.g., this will leave out keywords and stdout/stderr output:
+
+```bash
+$ pytest --json-report --json-report-omit keywords streams
+```
+
+If you don't want the report saved at all, you can specify `none` as the target file name:
+
+```bash
+$ pytest --json-report --json-report-file none
+```
+
+## Advanced usage
+
+### Metadata
+
+The easiest way to add your own metadata to a test item is by using the `json_metadata` [test fixture](https://docs.pytest.org/en/stable/fixture.html):
+
+```python
+def test_something(json_metadata):
+ json_metadata['foo'] = {"some": "thing"}
+ json_metadata['bar'] = 123
+```
+
+Or use the `pytest_json_runtest_metadata` [hook](https://docs.pytest.org/en/stable/reference.html#hooks) (in your `conftest.py`) to add metadata based on the current test run. The dict returned will automatically be merged with any existing metadata. E.g., this adds the start and stop time of each test's `call` stage:
+
+```python
+def pytest_json_runtest_metadata(item, call):
+ if call.when != 'call':
+ return {}
+ return {'start': call.start, 'stop': call.stop}
+```
+
+Also, you can add metadata via [pytest-metadata's `--metadata` switch](https://github.com/pytest-dev/pytest-metadata#additional-metadata), which adds it to the report's `environment` section rather than to a specific test item. Make sure all your metadata is JSON-serializable.
+
+### A note on hooks
+
+If you implement a `pytest_json_*` hook while the plugin is not installed or not active (i.e. pytest is run without `--json-report`), pytest doesn't recognize the hook and may fail with an internal error like this:
+```
+INTERNALERROR> pluggy.manager.PluginValidationError: unknown hook 'pytest_json_runtest_metadata' in plugin <module 'conftest' from 'conftest.py'>
+```
+You can avoid this by declaring the hook implementation optional:
+
+```python
+import pytest
+@pytest.hookimpl(optionalhook=True)
+def pytest_json_runtest_metadata(item, call):
+ ...
+```
+
+### Modifying the report
+
+You can modify the entire report before it's saved by using the `pytest_json_modifyreport` hook.
+
+Just implement the hook in your `conftest.py`, e.g.:
+
+```python
+def pytest_json_modifyreport(json_report):
+ # Add a key to the report
+ json_report['foo'] = 'bar'
+ # Delete the summary from the report
+ del json_report['summary']
+```
+
+After `pytest_sessionfinish`, the report object is also available directly via `config._json_report.report`, so you can access it from any built-in hook:
+
+```python
+def pytest_sessionfinish(session):
+ report = session.config._json_report.report
+ print('exited with', report['exitcode'])
+```
+
+If you *really* want to change how the result of a test stage run is turned into JSON, you can use the `pytest_json_runtest_stage` hook. It takes a [`TestReport`](https://docs.pytest.org/en/latest/reference.html#_pytest.runner.TestReport) and returns a JSON-serializable dict:
+
+```python
+def pytest_json_runtest_stage(report):
+ return {'outcome': report.outcome}
+```
+
+### Direct invocation
+
+You can use the plugin when invoking `pytest.main()` directly from code:
+
+```python
+import pytest
+from pytest_jsonreport.plugin import JSONReport
+
+plugin = JSONReport()
+pytest.main(['--json-report-file=none', 'test_foo.py'], plugins=[plugin])
+```
+
+You can then access the `report` object:
+
+```python
+print(plugin.report)
+```
+
+And save the report manually:
+
+```python
+plugin.save_report('/tmp/my_report.json')
+```
+
+
+## Format
+
+The JSON report contains metadata of the session, a summary, collectors, tests and warnings. You can find a sample report in [`sample_report.json`](sample_report.json).
+
+| Key | Description |
+| --- | --- |
+| `created` | Report creation date. (Unix time) |
+| `duration` | Session duration in seconds. |
+| `exitcode` | Process exit code as listed [in the pytest docs](https://docs.pytest.org/en/latest/usage.html#possible-exit-codes). The exit code is a quick way to tell if any tests failed, an internal error occurred, etc. |
+| `root` | Absolute root path from which the session was started. |
+| `environment` | [Environment](#environment) entry. |
+| `summary` | [Summary](#summary) entry. |
+| `collectors` | [Collectors](#collectors) entry. (absent if `--json-report-summary` or if no collectors) |
+| `tests` | [Tests](#tests) entry. (absent if `--json-report-summary`) |
+| `warnings` | [Warnings](#warnings) entry. (absent if `--json-report-summary` or if no warnings) |
+
+#### Example
+
+```python
+{
+ "created": 1518371686.7981803,
+ "duration": 0.1235666275024414,
+ "exitcode": 1,
+ "root": "/path/to/tests",
+ "environment": ENVIRONMENT,
+ "summary": SUMMARY,
+ "collectors": COLLECTORS,
+ "tests": TESTS,
+ "warnings": WARNINGS,
+}
+```
+
+### Summary
+
+Number of outcomes per category and the total number of test items.
+
+| Key | Description |
+| --- | --- |
+| `collected` | Total number of tests collected. |
+| `total` | Total number of tests run. |
+| `deselected` | Total number of tests deselected. (absent if number is 0) |
+| `<outcome>` | Number of tests with that outcome. (absent if number is 0) |
+
+#### Example
+
+```python
+{
+ "collected": 10,
+ "passed": 2,
+ "failed": 3,
+ "xfailed": 1,
+ "xpassed": 1,
+ "error": 2,
+ "skipped": 1,
+ "total": 10
+}
+```
+
+### Environment
+
+The environment section is provided by [pytest-metadata](https://github.com/pytest-dev/pytest-metadata). All metadata given by that plugin will be added here, so you need to make sure it is JSON-serializable.
+
+#### Example
+
+```python
+{
+ "Python": "3.6.4",
+ "Platform": "Linux-4.56.78-9-ARCH-x86_64-with-arch",
+ "Packages": {
+ "pytest": "3.4.0",
+ "py": "1.5.2",
+ "pluggy": "0.6.0"
+ },
+ "Plugins": {
+ "json-report": "0.4.1",
+ "xdist": "1.22.0",
+ "metadata": "1.5.1",
+ "forked": "0.2",
+ "cov": "2.5.1"
+ },
+ "foo": "bar", # Custom metadata entry passed via pytest-metadata
+}
+```
+
+### Collectors
+
+A list of collector nodes. These are useful to check what tests are available without running them, or to debug an error during test discovery.
+
+| Key | Description |
+| --- | --- |
+| `nodeid` | ID of the collector node. ([See docs](https://docs.pytest.org/en/latest/example/markers.html#node-id)) The root node has an empty node ID. |
+| `outcome` | Outcome of the collection. (Not the test outcome!) |
+| `result` | Nodes collected by the collector. |
+| `longrepr` | Representation of the collection error. (absent if no error occurred) |
+
+The `result` is a list of the collected nodes:
+
+| Key | Description |
+| --- | --- |
+| `nodeid` | ID of the node. |
+| `type` | Type of the collected node. |
+| `lineno` | Line number. (absent if not applicable) |
+| `deselected` | `true` if the test is deselected. (absent if not deselected) |
+
+#### Example
+
+```python
+[
+ {
+ "nodeid": "",
+ "outcome": "passed",
+ "result": [
+ {
+ "nodeid": "test_foo.py",
+ "type": "Module"
+ }
+ ]
+ },
+ {
+ "nodeid": "test_foo.py",
+ "outcome": "passed",
+ "result": [
+ {
+ "nodeid": "test_foo.py::test_pass",
+ "type": "Function",
+ "lineno": 24,
+ "deselected": true
+ },
+ ...
+ ]
+ },
+ {
+ "nodeid": "test_bar.py",
+ "outcome": "failed",
+ "result": [],
+ "longrepr": "/usr/lib/python3.6 ... invalid syntax"
+ },
+ ...
+]
+```
+
+### Tests
+
+A list of test nodes. Each completed test stage produces a stage object (`setup`, `call`, `teardown`) with its own `outcome`.
+
+| Key | Description |
+| --- | --- |
+| `nodeid` | ID of the test node. |
+| `lineno` | Line number where the test starts. |
+| `keywords` | List of keywords and markers associated with the test. |
+| `outcome` | Outcome of the test run. |
+| `{setup, call, teardown}` | [Test stage](#test-stage) entry. To find the error in a failed test you need to check all stages. (absent if stage didn't run) |
+| `metadata` | [Metadata](#metadata) item. (absent if no metadata) |
+
+#### Example
+
+```python
+[
+ {
+ "nodeid": "test_foo.py::test_fail",
+ "lineno": 50,
+ "keywords": [
+ "test_fail",
+ "test_foo.py",
+ "test_foo0"
+ ],
+ "outcome": "failed",
+ "setup": TEST_STAGE,
+ "call": TEST_STAGE,
+ "teardown": TEST_STAGE,
+ "metadata": {
+ "foo": "bar",
+ }
+ },
+ ...
+]
+```
+
+
+### Test stage
+
+A test stage item.
+
+| Key | Description |
+| --- | --- |
+| `duration` | Duration of the test stage in seconds. |
+| `outcome` | Outcome of the test stage. (can be different from the overall test outcome) |
+| `crash` | Crash entry. (absent if no error occurred) |
+| `traceback` | List of traceback entries. (absent if no error occurred; affected by `--tb` option) |
+| `stdout` | Standard output. (absent if none available) |
+| `stderr` | Standard error. (absent if none available) |
+| `log` | [Log](#log) entry. (absent if none available) |
+| `longrepr` | Representation of the error. (absent if no error occurred; format affected by `--tb` option) |
+
+#### Example
+
+```python
+{
+ "duration": 0.00018835067749023438,
+ "outcome": "failed",
+ "crash": {
+ "path": "/path/to/tests/test_foo.py",
+ "lineno": 54,
+ "message": "TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'"
+ },
+ "traceback": [
+ {
+ "path": "test_foo.py",
+ "lineno": 65,
+ "message": ""
+ },
+ {
+ "path": "test_foo.py",
+ "lineno": 63,
+ "message": "in foo"
+ },
+ {
+ "path": "test_foo.py",
+ "lineno": 63,
+ "message": "in <listcomp>"
+ },
+ {
+ "path": "test_foo.py",
+ "lineno": 54,
+ "message": "TypeError"
+ }
+ ],
+ "stdout": "foo\nbar\n",
+ "stderr": "baz\n",
+ "log": LOG,
+ "longrepr": "def test_fail_nested():\n ..."
+}
+```
+
+### Log
+
+A list of log records. The fields of a log record are the [`logging.LogRecord` attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes), with the exception that the fields `exc_info` and `args` are always empty and `msg` contains the formatted log message.
+
+You can apply [`logging.makeLogRecord()`](https://docs.python.org/3/library/logging.html#logging.makeLogRecord) on a log record to convert it back to a `logging.LogRecord` object.
+
+#### Example
+
+```python
+[
+ {
+ "name": "root",
+ "msg": "This is a warning.",
+ "args": null,
+ "levelname": "WARNING",
+ "levelno": 30,
+ "pathname": "/path/to/tests/test_foo.py",
+ "filename": "test_foo.py",
+ "module": "test_foo",
+ "exc_info": null,
+ "exc_text": null,
+ "stack_info": null,
+ "lineno": 8,
+ "funcName": "foo",
+ "created": 1519772464.291738,
+ "msecs": 291.73803329467773,
+ "relativeCreated": 332.90839195251465,
+ "thread": 140671803118912,
+ "threadName": "MainThread",
+ "processName": "MainProcess",
+ "process": 31481
+ },
+ ...
+]
+```
+
+
+### Warnings
+
+A list of warnings that occurred during the session. (See the [pytest docs on warnings](https://docs.pytest.org/en/latest/warnings.html).)
+
+| Key | Description |
+| --- | --- |
+| `filename` | File name. |
+| `lineno` | Line number. |
+| `message` | Warning message. |
+| `when` | When the warning was captured. (`"config"`, `"collect"` or `"runtest"` as listed [here](https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_warning_captured)) |
+
+#### Example
+
+```python
+[
+ {
+ "code": "C1",
+ "path": "/path/to/tests/test_foo.py",
+ "nodeid": "test_foo.py::TestFoo",
+ "message": "cannot collect test class 'TestFoo' because it has a __init__ constructor"
+ }
+]
+```
+
+## Related tools
+
+- [pytest-json](https://github.com/mattcl/pytest-json) has some great features but appears to be unmaintained. I borrowed some ideas and test cases from there.
+
+- [tox has a switch](http://tox.readthedocs.io/en/latest/example/result.html) to create a JSON report including a test result summary. However, it just provides the overall outcome without any per-test details.
+
+
+
+
+%prep
+%autosetup -n pytest-json-report-1.5.0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-pytest-json-report -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 1.5.0-1
+- Package Spec generated