%global _empty_manifest_terminate_build 0
Name: python-graphique
Version: 1.2
Release: 1
Summary: GraphQL service for arrow tables and parquet data sets.
License: Apache-2.0
URL: https://github.com/coady/graphique
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/38/bf/35c6c54a4a25176f2613accaf7aed5bd336e607268556aa06a186aa5b9c2/graphique-1.2.tar.gz
BuildArch: noarch

Requires: python3-pyarrow
Requires: python3-strawberry-graphql[asgi,cli]
Requires: python3-uvicorn[standard]

%description
[![image](https://img.shields.io/pypi/v/graphique.svg)](https://pypi.org/project/graphique/)
![image](https://img.shields.io/pypi/pyversions/graphique.svg)
[![image](https://pepy.tech/badge/graphique)](https://pepy.tech/project/graphique)
![image](https://img.shields.io/pypi/status/graphique.svg)
[![image](https://github.com/coady/graphique/workflows/build/badge.svg)](https://github.com/coady/graphique/actions)
[![image](https://codecov.io/gh/coady/graphique/branch/main/graph/badge.svg)](https://codecov.io/gh/coady/graphique/)
[![image](https://github.com/coady/graphique/workflows/codeql/badge.svg)](https://github.com/coady/graphique/security/code-scanning)
[![image](https://img.shields.io/badge/code%20style-black-000000.svg)](https://pypi.org/project/black/)
[![image](http://mypy-lang.org/static/mypy_badge.svg)](http://mypy-lang.org/)

[GraphQL](https://graphql.org) service for [arrow](https://arrow.apache.org) tables and [parquet](https://parquet.apache.org) data sets. The schema for a query API is derived automatically.

## Usage
```console
% env PARQUET_PATH=... uvicorn graphique.service:app
```

Open http://localhost:8000/ to try out the API in [GraphiQL](https://github.com/graphql/graphiql/tree/main/packages/graphiql#readme). There is a test fixture at `./tests/fixtures/zipcodes.parquet`.

```console
% env PARQUET_PATH=... strawberry export-schema graphique.service:app.schema
```
outputs the graphql schema for a parquet data set.
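For example, the service can be pointed directly at the bundled test fixture; this is just the command above with the fixture path filled in:

```console
% env PARQUET_PATH=./tests/fixtures/zipcodes.parquet uvicorn graphique.service:app
```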
### Configuration
Graphique uses [Starlette's config](https://www.starlette.io/config/): in environment variables or a `.env` file. Config variables are used as input to a [parquet dataset](https://arrow.apache.org/docs/python/dataset.html).

* PARQUET_PATH: path to the parquet directory or file
* FEDERATED = '': field name to extend type `Query` with a federated `Table`
* DEBUG = False: run service in debug mode, which includes timing
* COLUMNS = None: list of names, or mapping of aliases, of columns to select
* FILTERS = None: json `filter` query for which rows to read at startup
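As an illustration, a minimal `.env` file for the bundled fixture might look like the sketch below; the column names are hypothetical and depend on the dataset's actual schema.

```ini
# Hypothetical .env sketch; column names depend on the real schema.
PARQUET_PATH=./tests/fixtures/zipcodes.parquet
DEBUG=true
COLUMNS=["zipcode", "state", "city"]
```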
For more options create a custom [ASGI](https://asgi.readthedocs.io/en/latest/index.html) app. Call graphique's `GraphQL` on an arrow [Dataset](https://arrow.apache.org/docs/python/api/dataset.html), [Scanner](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.Scanner.html), or [Table](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html). The GraphQL `Table` type will be the root Query type. Supply a mapping of names to datasets for multiple roots, and to enable federation.

```python
import pyarrow.dataset as ds
from graphique import GraphQL

app = GraphQL(ds.dataset(...))  # Table is root query type
app = GraphQL.federated({<name>: ds.dataset(...), ...}, keys={...})  # Tables on federated fields
```

Start like any ASGI app.

```console
uvicorn <module>:app
```

Configuration options exist to provide a convenient no-code solution, but are subject to change in the future. Using a custom app is recommended for production usage.
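As a concrete sketch of the custom-app approach, the module below (hypothetically saved as `service.py`, with a placeholder dataset path) wraps a single parquet dataset:

```python
# service.py - a minimal sketch of a custom ASGI app
# (module name and dataset path are hypothetical)
import pyarrow.dataset as ds

from graphique import GraphQL

# A single dataset: its Table becomes the root Query type.
dataset = ds.dataset("data/zipcodes.parquet", format="parquet")
app = GraphQL(dataset)
```

It then starts like any other ASGI app, e.g. `uvicorn service:app`.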
### API
#### types

* `Dataset`: interface for an arrow dataset, scanner, or table.
* `Table`: implements the `Dataset` interface. Adds typed `row`, `columns`, and `filter` fields from introspecting the schema.
* `Column`: interface for an arrow column (a.k.a. ChunkedArray). Each arrow data type has a corresponding column implementation: Boolean, Int, Long, Float, Decimal, Date, Datetime, Time, Duration, Base64, String, List, Struct. All columns have a `values` field for their list of scalars. Additional fields vary by type.
* `Row`: scalar fields. Arrow tables are column-oriented, and graphique encourages that usage for performance. A single `row` field is provided for convenience, but a field for a list of rows is not. Requesting parallel columns is far more efficient.

#### selection

* `slice`: contiguous selection of rows
* `filter`: select rows with simple predicates
* `scan`: select rows and project columns with expressions

#### projection

* `columns`: provides a field for every `Column` in the schema
* `column`: access a column of any type by name
* `row`: provides a field for each scalar of a single row
* `apply`: transform columns by applying a function
* `join`: join tables by key columns

#### aggregation

* `group`: group by given columns, transforming the others into list columns
* `partition`: partition on adjacent values in given columns, transforming the others into list columns
* `aggregate`: apply reduce functions to list columns
* `tables`: return a list of tables by splitting on the scalars in list columns

#### ordering

* `sort`: sort table by given columns
* `min`: select rows with smallest values
* `max`: select rows with largest values
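Tying these fields together, a request against the zipcodes fixture might look like the sketch below. The column names (`state`, `city`) and the exact predicate shape are assumptions; the real fields come from the introspected schema.

```graphql
{
  # typed filter field generated from the dataset's schema
  filter(state: {eq: "CA"}) {
    columns {
      # request columns in parallel rather than rows
      city {
        values
      }
    }
  }
}
```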
### Performance
Graphique relies on native [PyArrow](https://arrow.apache.org/docs/python/index.html) routines wherever possible. Otherwise it falls back to using [NumPy](https://numpy.org/doc/stable/) or custom optimizations.

By default, datasets are read on-demand, with only the necessary rows and columns scanned. Although graphique is a running service, [parquet is performant](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.Dataset.html) at reading a subset of data. Optionally specify `FILTERS` in the json `filter` format to read a subset of rows at startup, trading off memory for latency. An empty filter (`{}`) will read the whole table.

Specifying `COLUMNS` will limit memory usage when reading at startup (with `FILTERS`); there is little speed difference otherwise, as unused columns are inherently ignored. Optional aliasing can also be used for camel casing.

If index columns are detected in the schema metadata, then an initial `filter` will also attempt a binary search on tables.
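For example, a startup invocation that reads only a filtered subset into memory might look like this sketch; the column name and predicate are hypothetical, and the filter shape follows the json `filter` format described above:

```console
% env PARQUET_PATH=... FILTERS='{"state": {"eq": "CA"}}' uvicorn graphique.service:app
```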
## Installation
```console
% pip install graphique[server]
```

## Dependencies

* pyarrow >=12
* strawberry-graphql[asgi,cli]
* uvicorn (or other [ASGI server](https://asgi.readthedocs.io/en/latest/implementations.html))

## Tests
100% branch coverage.

```console
% pytest [--cov]
```

## Changes
1.2

* Pyarrow >=12 required
* Grouping fragments optimized
* Group by empty columns
* Batch sorting and grouping into lists

1.1

* Pyarrow >=11 required
* Python >=3.8 required
* Scannable functions added
* List aggregations deprecated
* Group by fragments
* Month day nano interval array
* `min` and `max` fields memory optimized

1.0

* Pyarrow >=10 required
* Dataset schema introspection
* Dataset scanning with selection and projection
* Binary search on sorted columns
* List aggregation, filtering, and sorting optimizations
* Compute functions generalized
* Multiple datasets and federation
* Provisional dataset `join` and `take`

0.9

* Pyarrow >=9 required
* Multi-directional sorting
* Removed unnecessary interfaces
* Filtering has stricter typing

0.8

* Pyarrow >=8 required
* Grouping and aggregation integrated
* `AbstractTable` interface renamed to `Dataset`
* `Binary` scalar renamed to `Base64`

0.7

* Pyarrow >=7 required
* `FILTERS` use query syntax and trigger reading the dataset
* `FEDERATED` field configuration
* List columns support sorting and filtering
* Group by and aggregate optimizations
* Dataset scanning

0.6

* Pyarrow >=6 required
* Group by optimized and replaced `unique` field
* Dictionary related optimizations
* Null consistency with arrow `count` functions

0.5

* Pyarrow >=5 required
* Stricter validation of inputs
* Columns can be cast to another arrow data type
* Grouping uses large list arrays with 64-bit counts
* Datasets are read on-demand or optionally at startup

0.4

* Pyarrow >=4 required
* `sort` updated to use new native routines
* `partition` tables by adjacent values and differences
* `filter` supports unknown column types using tagged union pattern
* `Groups` replaced with `Table.tables` and `Table.aggregate` fields
* Tagged unions used for `filter`, `apply`, and `partition` functions

0.3

* Pyarrow >=3 required
* `any` and `all` fields
* String column `split` field

0.2

* Pyarrow >=2 required
* `ListColumn` and `StructColumn` types
* `Groups` type with `aggregate` field
* `group` and `unique` optimized
* Statistical fields: `mode`, `stddev`, `variance`
* `is_in`, `min`, and `max` optimized
%package -n python3-graphique
Summary: GraphQL service for arrow tables and parquet data sets.
Provides: python-graphique
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip

%description -n python3-graphique
[GraphQL](https://graphql.org) service for [arrow](https://arrow.apache.org) tables and [parquet](https://parquet.apache.org) data sets. The schema for a query API is derived automatically.
%package help
Summary: Development documents and examples for graphique
Provides: python3-graphique-doc

%description help
Development documents and examples for graphique, a [GraphQL](https://graphql.org) service for [arrow](https://arrow.apache.org) tables and [parquet](https://parquet.apache.org) data sets.
%prep
%autosetup -n graphique-1.2

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-graphique -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Tue May 30 2023 Python_Bot - 1.2-1
- Package Spec generated