author     CoprDistGit <infra@openeuler.org>  2023-05-05 10:55:23 +0000
committer  CoprDistGit <infra@openeuler.org>  2023-05-05 10:55:23 +0000
commit     92cd7c549faf1d8e1458bffde0c1739fb6c26f48 (patch)
tree       570f2180467eaf149b0d986e42d0fd974d53f53c
parent     fc67cadff20cbd111a6ace8369c7ebaef5b5184a (diff)

automatic import of python-apies (openeuler20.03)

-rw-r--r--  .gitignore          1
-rw-r--r--  python-apies.spec   831
-rw-r--r--  sources             1
3 files changed, 833 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..959256f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/apies-1.9.1.tar.gz
diff --git a/python-apies.spec b/python-apies.spec
new file mode 100644
index 0000000..a8c208b
--- /dev/null
+++ b/python-apies.spec
@@ -0,0 +1,831 @@
+%global _empty_manifest_terminate_build 0
+Name: python-apies
+Version: 1.9.1
+Release: 1
+Summary: A flask blueprint providing an API for accessing and searching an ElasticSearch index created from source datapackages
+License: MIT
+URL: https://github.com/OpenBudget/apies
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/ad/4e/7be8932f65b9e1fe46560d059c394406f4538eae685e5934017532863b52/apies-1.9.1.tar.gz
+BuildArch: noarch
+
+Requires: python3-Flask
+Requires: python3-Flask-Cors
+Requires: python3-requests
+Requires: python3-elasticsearch
+Requires: python3-datapackage
+Requires: python3-flask-jsonpify
+Requires: python3-demjson3
+Requires: python3-xlwt
+Requires: python3-xlsxwriter
+Requires: python3-pylama
+Requires: python3-tox
+Requires: python3-dataflows-elasticsearch
+
+%description
+# apies
+
+[![Travis](https://img.shields.io/travis/OpenBudget/apies/master.svg)](https://travis-ci.org/datahq/apies)
+[![Coveralls](http://img.shields.io/coveralls/OpenBudget/apies.svg?branch=master)](https://coveralls.io/r/OpenBudget/apies?branch=master)
+![PyPI - Python Version](https://img.shields.io/pypi/pyversions/apies.svg)
+
+apies is a flask blueprint providing an API for accessing and searching an ElasticSearch index created from source datapackages.
+
+## endpoints
+
+### `/get/<doc-id>`
+
+Fetches a document from the index.
+
+Query parameters that can be used:
+- **type**: The type of the document to fetch (if not `docs`)
+
+### `/search/count`
+
+### `/search/<doc-types>`
+
+Performs a search on the index.
+
+`doc-types` is a comma separated list of document types to search.
+
+Query parameters that can be used:
+- **q**: The full-text search query
+
+- **filter**: A JSON-encoded set of filters to apply to the search. These are applied to the query but don't affect the scoring of the results.
+ Filters should be an array of objects, each object depicting a single filter. All filters are combined with an `OR` operator. For example:
+ ```
+ [
+ {
+ "first-name": "John",
+ "last-name": "Watson"
+ },
+ {
+ "first-name": "Sherlock",
+ "last-name": "Holmes"
+ }
+ ]
+ ```
+ Each object contains a set of rules that all must match. Each rule is a key-value pair, where the key is the field name and the value is the value to match. The value can be a string or an array of strings. If the value is an array, the rule will match if any of the values in the array match. For example:
+ ```
+ {
+ "first-name": ["Emily", "Charlotte"],
+ "last-name": "Bronte"
+ }
+ ```
+  Field names can be appended with two underscores and an operator to convey relations other than equality. For example:
+ ```
+ {
+ "first-name": "Emily",
+ "last-name": "Bronte",
+    "age__gt": 30
+ }
+ ```
+  Allowed operators are:
+ - `gt`: greater than
+ - `gte`: greater than or equal to
+ - `lt`: less than
+ - `lte`: less than or equal to
+ - `eq`: equal to
+ - `not`: not equal to
+ - `like`: like (textual match)
+ - `bounded`: bounded (geospatial match to a bounding box)
+ - `all`: all (for arrays - all values in the array must exist in the target)
+
+  If multiple operators are needed for the same field, the field can also be suffixed with a hash sign (`#`) and a number. For example:
+ ```
+ {
+ "city": "San Francisco",
+ "price__lt": 300000,
+ "bedrooms__gt": 4,
+ "amenities": "garage",
+    "amenities#1": ["pool", "back yard"]
+ }
+ ```
+  The above filter will match all documents where `city` is "San Francisco", `price` is less than 300000, there are more than 4 `bedrooms`, and the `amenities` field contains "garage" and at least one of "pool" and "back yard" (see the request sketch after this parameter list).
+
+- **lookup**: A JSON object with lookup filters to apply to the search. These filter the results, but also affect the scoring of the results.
+- **context**: A textual context to search in (i.e. run the search in a subset of results matching the full-text-search query provided in this field)
+
+- **extra**: Extra information that's passed to library extensions
+
+- **size**: Number of results to fetch (default: 10)
+- **offset**: Offset of first result to fetch (default: 0)
+- **order**: Order results by (default: _score)
+
+- **highlight**: Comma-separated list of fields to highlight
+- **snippets**: Comma-separated list of fields to fetch snippets from
+
+- **match_type**: ElasticSearch match type (default: most_fields)
+- **match_operator**: ElasticSearch match operator (default: and)
+- **minscore**: Minimum score for a result to be returned (default: 0.0)
+
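+As a rough illustration, here is a minimal Python sketch of a search request combining `q` with a `filter` value. It assumes the sample server from the "local development" section below is running (it serves the blueprint under `/api`); the filter values are taken from the sample jobs data shown further down and are purely illustrative:
+```python
+import json
+
+import requests
+
+# Filter: documents from the DEPARTMENT OF TRANSPORTATION, OR documents
+# whose Business Title is "Civil Engineer 2" (filter objects are OR-ed).
+filters = [
+    {"Agency": "DEPARTMENT OF TRANSPORTATION"},
+    {"Business Title": "Civil Engineer 2"},
+]
+
+resp = requests.get(
+    "http://localhost:5000/api/search/jobs",
+    params={
+        "q": "engineering",
+        "filter": json.dumps(filters),  # the filter parameter is JSON-encoded
+        "size": 5,
+    },
+)
+print(resp.json()["search_results"])
+```
+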
+### `/download/<doctypes>`
+
+Downloads search results in either csv, xls or xlsx format.
+
+Query parameters that can be used:
+- **types_formatted**: The type of the documents to search
+- **search_term**: The ElasticSearch query
+- **size**: Number of hits to return
+- **offset**: Offset of the first result to fetch
+- **filters**: Filters to apply to the search (same format as the `filter` parameter above)
+- **dont_highlight**: Comma-separated list of fields that should not be highlighted
+- **from_date**: Start of an optional date range to apply to the search
+- **to_date**: End of an optional date range to apply to the search
+- **order**: Field to order the results by
+- **file_format**: The format of the file to be returned: 'csv', 'xls' or 'xlsx'.
+If not passed, the file format defaults to xlsx
+- **file_name**: The name of the file to be returned; by default the name will be 'search_results'
+- **column_mapping**: If the columns should get different names than in the
+original data, a column mapping can be sent, for example:
+```
+{
+ "עיר": "address.city",
+ "תקציב": "details.budget"
+}
+```
+
+For example, get a csv file with column mapping:
+```
+http://localhost:5000/api/download/jobs?q=engineering&size=2&file_format=csv&file_name=my_results&column_mapping={%22mispar%22:%22Job%20ID%22}
+```
+
+Or get an xlsx file without column mapping:
+```
+http://localhost:5000/api/download/jobs?q=engineering&size=2&file_format=xlsx&file_name=my_results
+```
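+
+A minimal `requests` sketch of the same CSV download (the `/api` prefix, the query values and the `mispar` column name are taken from the example URL above; saving the response to a file is just an illustration):
+```python
+import json
+
+import requests
+
+# Download the first two "engineering" hits as CSV, renaming the
+# "mispar" column to "Job ID" via column_mapping.
+resp = requests.get(
+    "http://localhost:5000/api/download/jobs",
+    params={
+        "q": "engineering",
+        "size": 2,
+        "file_format": "csv",
+        "file_name": "my_results",
+        "column_mapping": json.dumps({"mispar": "Job ID"}),
+    },
+)
+with open("my_results.csv", "wb") as f:
+    f.write(resp.content)
+```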
+
+## configuration
+
+Register the blueprint on your Flask application:
+
+
+```python
+from flask import Flask
+from datapackage import Package  # assumption: Package() refers to the `datapackage` library
+import elasticsearch
+
+from apies import apies_blueprint
+
+app = Flask(__name__)
+
+app.register_blueprint(
+    apies_blueprint(['path/to/datapackage.json', Package(), ...],
+                    elasticsearch.Elasticsearch(...),
+                    {'doc-type-1': 'index-for-doc-type-1', ...},
+                    'index-for-documents',
+                    dont_highlight=['fields', 'not.to', 'highlight'],
+                    text_field_rules=lambda schema_field: [],  # list of tuples: ('exact'/'inexact'/'natural', <field-name>)
+                    multi_match_type='most_fields',
+                    multi_match_operator='and'),
+    url_prefix='/search/'
+)
+```
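+
+For illustration, a `text_field_rules` callback could look like the sketch below. This is only a sketch: it assumes the callback receives each schema field as a dict exposing `name` and `type`, and returns the list of `('exact'/'inexact'/'natural', <field-name>)` tuples mentioned in the comment above.
+```python
+# Hypothetical callback: analyzed ("inexact") search on string fields,
+# exact matching on everything else.
+def text_field_rules(schema_field):
+    name = schema_field['name']
+    if schema_field['type'] == 'string':
+        return [('inexact', name)]
+    return [('exact', name)]
+```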
+
+## local development
+
+You can start a local development server by following these steps:
+
+1. Install Dependencies:
+
+ a. Install Docker locally
+
+ b. Install Python dependencies:
+
+ ```bash
+ $ pip install dataflows dataflows-elasticsearch
+ $ pip install -e .
+ ```
+2. Go to the `sample/` directory
+3. Start ElasticSearch locally:
+ ```bash
+ $ ./start_elasticsearch.sh
+ ```
+
+ This script will wait and poll the server until it's up and running.
+ You can test it yourself by running:
+ ```bash
+ $ curl -s http://localhost:9200
+ {
+ "name" : "99cd2db44924",
+ "cluster_name" : "docker-cluster",
+ "cluster_uuid" : "nF9fuwRyRYSzyQrcH9RCnA",
+ "version" : {
+ "number" : "7.4.2",
+ "build_flavor" : "default",
+ "build_type" : "docker",
+ "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
+ "build_date" : "2019-10-28T20:40:44.881551Z",
+ "build_snapshot" : false,
+ "lucene_version" : "8.2.0",
+ "minimum_wire_compatibility_version" : "6.8.0",
+ "minimum_index_compatibility_version" : "6.0.0-beta1"
+ },
+ "tagline" : "You Know, for Search"
+ }
+ ```
+4. Load data into ElasticSearch
+ ```bash
+ $ DATAFLOWS_ELASTICSEARCH=localhost:9200 python load_fixtures.py
+ ```
+ You can test that data was loaded:
+ ```bash
+ $ curl -s http://localhost:9200/jobs-job/_count?pretty
+ {
+ "count" : 1757,
+ "_shards" : {
+ "total" : 1,
+ "successful" : 1,
+ "skipped" : 0,
+ "failed" : 0
+ }
+ }
+ ```
+5. Start the sample server
+ ```bash
+ $ python server.py
+ * Serving Flask app "server" (lazy loading)
+ * Environment: production
+ WARNING: Do not use the development server in a production environment.
+ Use a production WSGI server instead.
+ * Debug mode: off
+ * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
+ ```
+6. Now you can hit the server's endpoints, for example:
+ ```bash
+ $ curl -s 'localhost:5000/api/search/jobs?q=engineering&size=2' | jq
+ 127.0.0.1 - - [26/Jun/2019 10:45:31] "GET /api/search/jobs?q=engineering&size=2 HTTP/1.1" 200 -
+ {
+ "search_counts": {
+ "_current": {
+ "total_overall": 617
+ }
+ },
+ "search_results": [
+ {
+ "score": 18.812,
+ "source": {
+ "# Of Positions": "5",
+ "Additional Information": "TO BE APPOINTED TO ANY CIVIL <em>ENGINEERING</em> POSITION IN BRIDGES, CANDIDATES MUST POSSESS ONE YEAR OF CIVIL <em>ENGINEERING</em> EXPERIENCE IN BRIDGE DESIGN, BRIDGE CONSTRUCTION, BRIDGE MAINTENANCE OR BRIDGE INSPECTION.",
+ "Agency": "DEPARTMENT OF TRANSPORTATION",
+ "Business Title": "Civil Engineer 2",
+ "Civil Service Title": "CIVIL ENGINEER",
+ "Division/Work Unit": "<em>Engineering</em> Review & Support",
+ ...
+ }
+ ```
+
+
+
+%package -n python3-apies
+Summary: A flask blueprint providing an API for accessing and searching an ElasticSearch index created from source datapackages
+Provides: python-apies
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-apies
+# apies
+
+[![Travis](https://img.shields.io/travis/OpenBudget/apies/master.svg)](https://travis-ci.org/datahq/apies)
+[![Coveralls](http://img.shields.io/coveralls/OpenBudget/apies.svg?branch=master)](https://coveralls.io/r/OpenBudget/apies?branch=master)
+![PyPI - Python Version](https://img.shields.io/pypi/pyversions/apies.svg)
+
+apies is a flask blueprint providing an API for accessing and searching an ElasticSearch index created from source datapackages.
+
+## endpoints
+
+### `/get/<doc-id>`
+
+Fetches a document from the index.
+
+Query parameters that can be used:
+- **type**: The type of the document to fetch (if not `docs`)
+
+### `/search/count`
+
+### `/search/<doc-types>`
+
+Performs a search on the index.
+
+`doc-types` is a comma separated list of document types to search.
+
+Query parameters that can be used:
+- **q**: The full-text search query
+
+- **filter**: A JSON-encoded set of filters to apply to the search. These are applied to the query but don't affect the scoring of the results.
+ Filters should be an array of objects, each object depicting a single filter. All filters are combined with an `OR` operator. For example:
+ ```
+ [
+ {
+ "first-name": "John",
+ "last-name": "Watson"
+ },
+ {
+ "first-name": "Sherlock",
+ "last-name": "Holmes"
+ }
+ ]
+ ```
+ Each object contains a set of rules that all must match. Each rule is a key-value pair, where the key is the field name and the value is the value to match. The value can be a string or an array of strings. If the value is an array, the rule will match if any of the values in the array match. For example:
+ ```
+ {
+ "first-name": ["Emily", "Charlotte"],
+ "last-name": "Bronte"
+ }
+ ```
+  Field names can be appended with two underscores and an operator to convey relations other than equality. For example:
+ ```
+ {
+ "first-name": "Emily",
+ "last-name": "Bronte",
+    "age__gt": 30
+ }
+ ```
+  Allowed operators are:
+ - `gt`: greater than
+ - `gte`: greater than or equal to
+ - `lt`: less than
+ - `lte`: less than or equal to
+ - `eq`: equal to
+ - `not`: not equal to
+ - `like`: like (textual match)
+ - `bounded`: bounded (geospatial match to a bounding box)
+ - `all`: all (for arrays - all values in the array must exist in the target)
+
+  If multiple operators are needed for the same field, the field can also be suffixed with a hash sign (`#`) and a number. For example:
+ ```
+ {
+ "city": "San Francisco",
+ "price__lt": 300000,
+ "bedrooms__gt": 4,
+ "amenities": "garage",
+    "amenities#1": ["pool", "back yard"]
+ }
+ ```
+  The above filter will match all documents where `city` is "San Francisco", `price` is less than 300000, there are more than 4 `bedrooms`, and the `amenities` field contains "garage" and at least one of "pool" and "back yard" (see the request sketch after this parameter list).
+
+- **lookup**: A JSON object with lookup filters to apply to the search. These filter the results, but also affect the scoring of the results.
+- **context**: A textual context to search in (i.e. run the search in a subset of results matching the full-text-search query provided in this field)
+
+- **extra**: Extra information that's passed to library extensions
+
+- **size**: Number of results to fetch (default: 10)
+- **offset**: Offset of first result to fetch (default: 0)
+- **order**: Order results by (default: _score)
+
+- **highlight**: Comma-separated list of fields to highlight
+- **snippets**: Comma-separated list of fields to fetch snippets from
+
+- **match_type**: ElasticSearch match type (default: most_fields)
+- **match_operator**: ElasticSearch match operator (default: and)
+- **minscore**: Minimum score for a result to be returned (default: 0.0)
+
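+As a rough illustration, here is a minimal Python sketch of a search request combining `q` with a `filter` value. It assumes the sample server from the "local development" section below is running (it serves the blueprint under `/api`); the filter values are taken from the sample jobs data shown further down and are purely illustrative:
+```python
+import json
+
+import requests
+
+# Filter: documents from the DEPARTMENT OF TRANSPORTATION, OR documents
+# whose Business Title is "Civil Engineer 2" (filter objects are OR-ed).
+filters = [
+    {"Agency": "DEPARTMENT OF TRANSPORTATION"},
+    {"Business Title": "Civil Engineer 2"},
+]
+
+resp = requests.get(
+    "http://localhost:5000/api/search/jobs",
+    params={
+        "q": "engineering",
+        "filter": json.dumps(filters),  # the filter parameter is JSON-encoded
+        "size": 5,
+    },
+)
+print(resp.json()["search_results"])
+```
+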
+### `/download/<doctypes>`
+
+Downloads search results in either csv, xls or xlsx format.
+
+Query parameters that can be used:
+- **types_formatted**: The type of the documents to search
+- **search_term**: The ElasticSearch query
+- **size**: Number of hits to return
+- **offset**: Offset of the first result to fetch
+- **filters**: Filters to apply to the search (same format as the `filter` parameter above)
+- **dont_highlight**: Comma-separated list of fields that should not be highlighted
+- **from_date**: Start of an optional date range to apply to the search
+- **to_date**: End of an optional date range to apply to the search
+- **order**: Field to order the results by
+- **file_format**: The format of the file to be returned: 'csv', 'xls' or 'xlsx'.
+If not passed, the file format defaults to xlsx
+- **file_name**: The name of the file to be returned; by default the name will be 'search_results'
+- **column_mapping**: If the columns should get different names than in the
+original data, a column mapping can be sent, for example:
+```
+{
+ "עיר": "address.city",
+ "תקציב": "details.budget"
+}
+```
+
+For example, get a csv file with column mapping:
+```
+http://localhost:5000/api/download/jobs?q=engineering&size=2&file_format=csv&file_name=my_results&column_mapping={%22mispar%22:%22Job%20ID%22}
+```
+
+Or get an xlsx file without column mapping:
+```
+http://localhost:5000/api/download/jobs?q=engineering&size=2&file_format=xlsx&file_name=my_results
+```
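+
+A minimal `requests` sketch of the same CSV download (the `/api` prefix, the query values and the `mispar` column name are taken from the example URL above; saving the response to a file is just an illustration):
+```python
+import json
+
+import requests
+
+# Download the first two "engineering" hits as CSV, renaming the
+# "mispar" column to "Job ID" via column_mapping.
+resp = requests.get(
+    "http://localhost:5000/api/download/jobs",
+    params={
+        "q": "engineering",
+        "size": 2,
+        "file_format": "csv",
+        "file_name": "my_results",
+        "column_mapping": json.dumps({"mispar": "Job ID"}),
+    },
+)
+with open("my_results.csv", "wb") as f:
+    f.write(resp.content)
+```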
+
+## configuration
+
+Register the blueprint on your Flask application:
+
+
+```python
+from flask import Flask
+from datapackage import Package  # assumption: Package() refers to the `datapackage` library
+import elasticsearch
+
+from apies import apies_blueprint
+
+app = Flask(__name__)
+
+app.register_blueprint(
+    apies_blueprint(['path/to/datapackage.json', Package(), ...],
+                    elasticsearch.Elasticsearch(...),
+                    {'doc-type-1': 'index-for-doc-type-1', ...},
+                    'index-for-documents',
+                    dont_highlight=['fields', 'not.to', 'highlight'],
+                    text_field_rules=lambda schema_field: [],  # list of tuples: ('exact'/'inexact'/'natural', <field-name>)
+                    multi_match_type='most_fields',
+                    multi_match_operator='and'),
+    url_prefix='/search/'
+)
+```
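+
+For illustration, a `text_field_rules` callback could look like the sketch below. This is only a sketch: it assumes the callback receives each schema field as a dict exposing `name` and `type`, and returns the list of `('exact'/'inexact'/'natural', <field-name>)` tuples mentioned in the comment above.
+```python
+# Hypothetical callback: analyzed ("inexact") search on string fields,
+# exact matching on everything else.
+def text_field_rules(schema_field):
+    name = schema_field['name']
+    if schema_field['type'] == 'string':
+        return [('inexact', name)]
+    return [('exact', name)]
+```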
+
+## local development
+
+You can start a local development server by following these steps:
+
+1. Install Dependencies:
+
+ a. Install Docker locally
+
+ b. Install Python dependencies:
+
+ ```bash
+ $ pip install dataflows dataflows-elasticsearch
+ $ pip install -e .
+ ```
+2. Go to the `sample/` directory
+3. Start ElasticSearch locally:
+ ```bash
+ $ ./start_elasticsearch.sh
+ ```
+
+ This script will wait and poll the server until it's up and running.
+ You can test it yourself by running:
+ ```bash
+ $ curl -s http://localhost:9200
+ {
+ "name" : "99cd2db44924",
+ "cluster_name" : "docker-cluster",
+ "cluster_uuid" : "nF9fuwRyRYSzyQrcH9RCnA",
+ "version" : {
+ "number" : "7.4.2",
+ "build_flavor" : "default",
+ "build_type" : "docker",
+ "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
+ "build_date" : "2019-10-28T20:40:44.881551Z",
+ "build_snapshot" : false,
+ "lucene_version" : "8.2.0",
+ "minimum_wire_compatibility_version" : "6.8.0",
+ "minimum_index_compatibility_version" : "6.0.0-beta1"
+ },
+ "tagline" : "You Know, for Search"
+ }
+ ```
+4. Load data into ElasticSearch
+ ```bash
+ $ DATAFLOWS_ELASTICSEARCH=localhost:9200 python load_fixtures.py
+ ```
+ You can test that data was loaded:
+ ```bash
+ $ curl -s http://localhost:9200/jobs-job/_count?pretty
+ {
+ "count" : 1757,
+ "_shards" : {
+ "total" : 1,
+ "successful" : 1,
+ "skipped" : 0,
+ "failed" : 0
+ }
+ }
+ ```
+5. Start the sample server
+ ```bash
+ $ python server.py
+ * Serving Flask app "server" (lazy loading)
+ * Environment: production
+ WARNING: Do not use the development server in a production environment.
+ Use a production WSGI server instead.
+ * Debug mode: off
+ * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
+ ```
+6. Now you can hit the server's endpoints, for example:
+ ```bash
+ $ curl -s 'localhost:5000/api/search/jobs?q=engineering&size=2' | jq
+ 127.0.0.1 - - [26/Jun/2019 10:45:31] "GET /api/search/jobs?q=engineering&size=2 HTTP/1.1" 200 -
+ {
+ "search_counts": {
+ "_current": {
+ "total_overall": 617
+ }
+ },
+ "search_results": [
+ {
+ "score": 18.812,
+ "source": {
+ "# Of Positions": "5",
+ "Additional Information": "TO BE APPOINTED TO ANY CIVIL <em>ENGINEERING</em> POSITION IN BRIDGES, CANDIDATES MUST POSSESS ONE YEAR OF CIVIL <em>ENGINEERING</em> EXPERIENCE IN BRIDGE DESIGN, BRIDGE CONSTRUCTION, BRIDGE MAINTENANCE OR BRIDGE INSPECTION.",
+ "Agency": "DEPARTMENT OF TRANSPORTATION",
+ "Business Title": "Civil Engineer 2",
+ "Civil Service Title": "CIVIL ENGINEER",
+ "Division/Work Unit": "<em>Engineering</em> Review & Support",
+ ...
+ }
+ ```
+
+
+
+%package help
+Summary: Development documents and examples for apies
+Provides: python3-apies-doc
+%description help
+# apies
+
+[![Travis](https://img.shields.io/travis/OpenBudget/apies/master.svg)](https://travis-ci.org/datahq/apies)
+[![Coveralls](http://img.shields.io/coveralls/OpenBudget/apies.svg?branch=master)](https://coveralls.io/r/OpenBudget/apies?branch=master)
+![PyPI - Python Version](https://img.shields.io/pypi/pyversions/apies.svg)
+
+apies is a flask blueprint providing an API for accessing and searching an ElasticSearch index created from source datapackages.
+
+## endpoints
+
+### `/get/<doc-id>`
+
+Fetches a document from the index.
+
+Query parameters that can be used:
+- **type**: The type of the document to fetch (if not `docs`)
+
+### `/search/count`
+
+### `/search/<doc-types>`
+
+Performs a search on the index.
+
+`doc-types` is a comma separated list of document types to search.
+
+Query parameters that can be used:
+- **q**: The full-text search query
+
+- **filter**: A JSON-encoded set of filters to apply to the search. These are applied to the query but don't affect the scoring of the results.
+ Filters should be an array of objects, each object depicting a single filter. All filters are combined with an `OR` operator. For example:
+ ```
+ [
+ {
+ "first-name": "John",
+ "last-name": "Watson"
+ },
+ {
+ "first-name": "Sherlock",
+ "last-name": "Holmes"
+ }
+ ]
+ ```
+ Each object contains a set of rules that all must match. Each rule is a key-value pair, where the key is the field name and the value is the value to match. The value can be a string or an array of strings. If the value is an array, the rule will match if any of the values in the array match. For example:
+ ```
+ {
+ "first-name": ["Emily", "Charlotte"],
+ "last-name": "Bronte"
+ }
+ ```
+  Field names can be appended with two underscores and an operator to convey relations other than equality. For example:
+ ```
+ {
+ "first-name": "Emily",
+ "last-name": "Bronte",
+    "age__gt": 30
+ }
+ ```
+  Allowed operators are:
+ - `gt`: greater than
+ - `gte`: greater than or equal to
+ - `lt`: less than
+ - `lte`: less than or equal to
+ - `eq`: equal to
+ - `not`: not equal to
+ - `like`: like (textual match)
+ - `bounded`: bounded (geospatial match to a bounding box)
+ - `all`: all (for arrays - all values in the array must exist in the target)
+
+  If multiple operators are needed for the same field, the field can also be suffixed with a hash sign (`#`) and a number. For example:
+ ```
+ {
+ "city": "San Francisco",
+ "price__lt": 300000,
+ "bedrooms__gt": 4,
+ "amenities": "garage",
+    "amenities#1": ["pool", "back yard"]
+ }
+ ```
+  The above filter will match all documents where `city` is "San Francisco", `price` is less than 300000, there are more than 4 `bedrooms`, and the `amenities` field contains "garage" and at least one of "pool" and "back yard" (see the request sketch after this parameter list).
+
+- **lookup**: A JSON object with lookup filters to apply to the search. These filter the results, but also affect the scoring of the results.
+- **context**: A textual context to search in (i.e. run the search in a subset of results matching the full-text-search query provided in this field)
+
+- **extra**: Extra information that's passed to library extensions
+
+- **size**: Number of results to fetch (default: 10)
+- **offset**: Offset of first result to fetch (default: 0)
+- **order**: Order results by (default: _score)
+
+- **highlight**: Comma-separated list of fields to highlight
+- **snippets**: Comma-separated list of fields to fetch snippets from
+
+- **match_type**: ElasticSearch match type (default: most_fields)
+- **match_operator**: ElasticSearch match operator (default: and)
+- **minscore**: Minimum score for a result to be returned (default: 0.0)
+
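+As a rough illustration, here is a minimal Python sketch of a search request combining `q` with a `filter` value. It assumes the sample server from the "local development" section below is running (it serves the blueprint under `/api`); the filter values are taken from the sample jobs data shown further down and are purely illustrative:
+```python
+import json
+
+import requests
+
+# Filter: documents from the DEPARTMENT OF TRANSPORTATION, OR documents
+# whose Business Title is "Civil Engineer 2" (filter objects are OR-ed).
+filters = [
+    {"Agency": "DEPARTMENT OF TRANSPORTATION"},
+    {"Business Title": "Civil Engineer 2"},
+]
+
+resp = requests.get(
+    "http://localhost:5000/api/search/jobs",
+    params={
+        "q": "engineering",
+        "filter": json.dumps(filters),  # the filter parameter is JSON-encoded
+        "size": 5,
+    },
+)
+print(resp.json()["search_results"])
+```
+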
+### `/download/<doctypes>`
+
+Downloads search results in either csv, xls or xlsx format.
+
+Query parameters that can be used:
+- **types_formatted**: The type of the documents to search
+- **search_term**: The ElasticSearch query
+- **size**: Number of hits to return
+- **offset**: Offset of the first result to fetch
+- **filters**: Filters to apply to the search (same format as the `filter` parameter above)
+- **dont_highlight**: Comma-separated list of fields that should not be highlighted
+- **from_date**: Start of an optional date range to apply to the search
+- **to_date**: End of an optional date range to apply to the search
+- **order**: Field to order the results by
+- **file_format**: The format of the file to be returned: 'csv', 'xls' or 'xlsx'.
+If not passed, the file format defaults to xlsx
+- **file_name**: The name of the file to be returned; by default the name will be 'search_results'
+- **column_mapping**: If the columns should get different names than in the
+original data, a column mapping can be sent, for example:
+```
+{
+ "עיר": "address.city",
+ "תקציב": "details.budget"
+}
+```
+
+For example, get a csv file with column mapping:
+```
+http://localhost:5000/api/download/jobs?q=engineering&size=2&file_format=csv&file_name=my_results&column_mapping={%22mispar%22:%22Job%20ID%22}
+```
+
+Or get an xlsx file without column mapping:
+```
+http://localhost:5000/api/download/jobs?q=engineering&size=2&file_format=xlsx&file_name=my_results
+```
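+
+A minimal `requests` sketch of the same CSV download (the `/api` prefix, the query values and the `mispar` column name are taken from the example URL above; saving the response to a file is just an illustration):
+```python
+import json
+
+import requests
+
+# Download the first two "engineering" hits as CSV, renaming the
+# "mispar" column to "Job ID" via column_mapping.
+resp = requests.get(
+    "http://localhost:5000/api/download/jobs",
+    params={
+        "q": "engineering",
+        "size": 2,
+        "file_format": "csv",
+        "file_name": "my_results",
+        "column_mapping": json.dumps({"mispar": "Job ID"}),
+    },
+)
+with open("my_results.csv", "wb") as f:
+    f.write(resp.content)
+```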
+
+## configuration
+
+Register the blueprint on your Flask application:
+
+
+```python
+from flask import Flask
+from datapackage import Package  # assumption: Package() refers to the `datapackage` library
+import elasticsearch
+
+from apies import apies_blueprint
+
+app = Flask(__name__)
+
+app.register_blueprint(
+    apies_blueprint(['path/to/datapackage.json', Package(), ...],
+                    elasticsearch.Elasticsearch(...),
+                    {'doc-type-1': 'index-for-doc-type-1', ...},
+                    'index-for-documents',
+                    dont_highlight=['fields', 'not.to', 'highlight'],
+                    text_field_rules=lambda schema_field: [],  # list of tuples: ('exact'/'inexact'/'natural', <field-name>)
+                    multi_match_type='most_fields',
+                    multi_match_operator='and'),
+    url_prefix='/search/'
+)
+```
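+
+For illustration, a `text_field_rules` callback could look like the sketch below. This is only a sketch: it assumes the callback receives each schema field as a dict exposing `name` and `type`, and returns the list of `('exact'/'inexact'/'natural', <field-name>)` tuples mentioned in the comment above.
+```python
+# Hypothetical callback: analyzed ("inexact") search on string fields,
+# exact matching on everything else.
+def text_field_rules(schema_field):
+    name = schema_field['name']
+    if schema_field['type'] == 'string':
+        return [('inexact', name)]
+    return [('exact', name)]
+```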
+
+## local development
+
+You can start a local development server by following these steps:
+
+1. Install Dependencies:
+
+ a. Install Docker locally
+
+ b. Install Python dependencies:
+
+ ```bash
+ $ pip install dataflows dataflows-elasticsearch
+ $ pip install -e .
+ ```
+2. Go to the `sample/` directory
+3. Start ElasticSearch locally:
+ ```bash
+ $ ./start_elasticsearch.sh
+ ```
+
+ This script will wait and poll the server until it's up and running.
+ You can test it yourself by running:
+ ```bash
+ $ curl -s http://localhost:9200
+ {
+ "name" : "99cd2db44924",
+ "cluster_name" : "docker-cluster",
+ "cluster_uuid" : "nF9fuwRyRYSzyQrcH9RCnA",
+ "version" : {
+ "number" : "7.4.2",
+ "build_flavor" : "default",
+ "build_type" : "docker",
+ "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
+ "build_date" : "2019-10-28T20:40:44.881551Z",
+ "build_snapshot" : false,
+ "lucene_version" : "8.2.0",
+ "minimum_wire_compatibility_version" : "6.8.0",
+ "minimum_index_compatibility_version" : "6.0.0-beta1"
+ },
+ "tagline" : "You Know, for Search"
+ }
+ ```
+4. Load data into ElasticSearch
+ ```bash
+ $ DATAFLOWS_ELASTICSEARCH=localhost:9200 python load_fixtures.py
+ ```
+ You can test that data was loaded:
+ ```bash
+ $ curl -s http://localhost:9200/jobs-job/_count?pretty
+ {
+ "count" : 1757,
+ "_shards" : {
+ "total" : 1,
+ "successful" : 1,
+ "skipped" : 0,
+ "failed" : 0
+ }
+ }
+ ```
+5. Start the sample server
+ ```bash
+ $ python server.py
+ * Serving Flask app "server" (lazy loading)
+ * Environment: production
+ WARNING: Do not use the development server in a production environment.
+ Use a production WSGI server instead.
+ * Debug mode: off
+ * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
+ ```
+6. Now you can hit the server's endpoints, for example:
+ ```bash
+ $ curl -s 'localhost:5000/api/search/jobs?q=engineering&size=2' | jq
+ 127.0.0.1 - - [26/Jun/2019 10:45:31] "GET /api/search/jobs?q=engineering&size=2 HTTP/1.1" 200 -
+ {
+ "search_counts": {
+ "_current": {
+ "total_overall": 617
+ }
+ },
+ "search_results": [
+ {
+ "score": 18.812,
+ "source": {
+ "# Of Positions": "5",
+ "Additional Information": "TO BE APPOINTED TO ANY CIVIL <em>ENGINEERING</em> POSITION IN BRIDGES, CANDIDATES MUST POSSESS ONE YEAR OF CIVIL <em>ENGINEERING</em> EXPERIENCE IN BRIDGE DESIGN, BRIDGE CONSTRUCTION, BRIDGE MAINTENANCE OR BRIDGE INSPECTION.",
+ "Agency": "DEPARTMENT OF TRANSPORTATION",
+ "Business Title": "Civil Engineer 2",
+ "Civil Service Title": "CIVIL ENGINEER",
+ "Division/Work Unit": "<em>Engineering</em> Review & Support",
+ ...
+ }
+ ```
+
+
+
+%prep
+%autosetup -n apies-1.9.1
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-apies -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 1.9.1-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..74e9a4a
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+62c9f337cd44a3481089328f6f292773 apies-1.9.1.tar.gz