%global _empty_manifest_terminate_build 0
Name:           python-stream-sqlite
Version:        0.0.41
Release:        1
Summary:        Python function to extract all the rows from a SQLite database file concurrently with iterating over its bytes, without needing random access to the file
License:        MIT
URL:            https://github.com/uktrade/stream-sqlite
Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/61/7c/f41dbc6f6221a6beac3e173ce3895d224807f7dcfe5531c3976b4d98e5de/stream-sqlite-0.0.41.tar.gz
BuildArch:      noarch

%description
# stream-sqlite

[![CircleCI](https://circleci.com/gh/uktrade/stream-sqlite.svg?style=shield)](https://circleci.com/gh/uktrade/stream-sqlite) [![Test Coverage](https://api.codeclimate.com/v1/badges/b665c7634e8194fe6878/test_coverage)](https://codeclimate.com/github/uktrade/stream-sqlite/test_coverage)

Python function to extract all the rows from a SQLite database file concurrently with iterating over its bytes, without needing random access to the file.

Note that the [SQLite file format](https://www.sqlite.org/fileformat.html) is not designed to be streamed: the data is arranged in _pages_ of a fixed number of bytes, and the information needed to identify a page often comes _after_ the page in the stream (sometimes a great deal after). Pages are therefore buffered in memory until they can be identified.

## Installation

```bash
pip install stream-sqlite
```

## Usage

```python
from stream_sqlite import stream_sqlite
import httpx

# Iterable that yields the bytes of a sqlite file
def sqlite_bytes():
    with httpx.stream('GET', 'http://www.parlgov.org/static/stable/2020/parlgov-stable.db') as r:
        yield from r.iter_bytes(chunk_size=65_536)

# If there is a single table in the file, there will be exactly one iteration of the outer loop.
# If there are multiple tables, each can appear multiple times.
for table_name, pragma_table_info, rows in stream_sqlite(sqlite_bytes(), max_buffer_size=1_048_576):
    for row in rows:
        print(row)
```
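Because `stream_sqlite` takes any iterable that yields the bytes of the file, the same pattern works for a file already on disk, still without loading it all into memory at once. A minimal sketch, where `example.sqlite` is a hypothetical path:

```python
from stream_sqlite import stream_sqlite

# Yield the file in fixed-size chunks; 'example.sqlite' is a hypothetical path
def local_sqlite_bytes(path='example.sqlite'):
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(65_536)
            if not chunk:
                break
            yield chunk

for table_name, pragma_table_info, rows in stream_sqlite(local_sqlite_bytes(), max_buffer_size=1_048_576):
    for row in rows:
        print(table_name, row)
```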
## Recommendations

If you have control over the SQLite file, run `VACUUM;` on it before streaming. In addition to minimising the size of the file, `VACUUM;` arranges the pages in a way that often reduces the buffering required when streaming. This is especially true if the file was the target of intermingled `INSERT`s and/or `DELETE`s over multiple tables.

Also, indexes are not used for extracting the rows while streaming. If streaming is the only use case of the SQLite file, and you have control over it, remove the indexes and then run `VACUUM;`; see the sketches below.

Some tests suggest that if the file is written in auto-vacuum mode, i.e. `PRAGMA auto_vacuum = FULL;`, the pages are arranged in a way that reduces the buffering required when streaming. Your mileage may vary.
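As an illustration of the recommendations above (not part of stream-sqlite itself), a sketch of preparing a file for streaming using only the standard library, where `example.sqlite` is a hypothetical path:

```python
import sqlite3

# Only modify a file you control, ideally a copy kept just for streaming;
# 'example.sqlite' is a hypothetical path
con = sqlite3.connect('example.sqlite', isolation_level=None)  # autocommit mode

# List user-created indexes; internal auto-indexes have NULL sql and cannot be dropped
index_names = [name for (name,) in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' AND sql IS NOT NULL"
)]
for name in index_names:
    con.execute(f'DROP INDEX "{name}"')

con.execute('VACUUM')  # rewrites the file: smaller, with pages in a friendlier order
con.close()
```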
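And a sketch of writing a file in auto-vacuum mode, where `streamed.sqlite` and `my_table` are hypothetical names; note that on a database that already contains tables, the pragma only takes effect after a `VACUUM;`:

```python
import sqlite3

con = sqlite3.connect('streamed.sqlite', isolation_level=None)  # autocommit mode
con.execute('PRAGMA auto_vacuum = FULL')
con.execute('VACUUM')  # makes the setting take effect if the file was not empty
con.execute('CREATE TABLE my_table (id INTEGER PRIMARY KEY, value TEXT)')  # hypothetical table
con.execute("INSERT INTO my_table (value) VALUES ('some value')")
con.close()
```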
%package -n python3-stream-sqlite
Summary:        Python function to extract all the rows from a SQLite database file concurrently with iterating over its bytes, without needing random access to the file
Provides:       python-stream-sqlite
BuildRequires:  python3-devel
BuildRequires:  python3-setuptools
BuildRequires:  python3-pip

%description -n python3-stream-sqlite
Python function to extract all the rows from a SQLite database file
concurrently with iterating over its bytes, without needing random access to
the file. See the base package description for usage and recommendations.

%package help
Summary:        Development documents and examples for stream-sqlite
Provides:       python3-stream-sqlite-doc

%description help
Development documents and examples for stream-sqlite.

%prep
%autosetup -n stream-sqlite-0.0.41

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-stream-sqlite -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Tue May 30 2023 Python_Bot - 0.0.41-1
- Package Spec generated