author    CoprDistGit <infra@openeuler.org>  2023-04-10 21:46:09 +0000
committer CoprDistGit <infra@openeuler.org>  2023-04-10 21:46:09 +0000
commit    2e2a507f4a25120a469ace4e25b806dc7b2b3173 (patch)
tree      0663da156b5169565a010321bd5dd80a02bae071
parent    1b9afb3d1a4fecf9c1f8ceec9138758a5c6b6289 (diff)
automatic import of python-databricks-sql-connector
-rw-r--r--  .gitignore                              1
-rw-r--r--  python-databricks-sql-connector.spec  303
-rw-r--r--  sources                                 1
3 files changed, 305 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..ac83f05 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/databricks_sql_connector-2.4.1.tar.gz
diff --git a/python-databricks-sql-connector.spec b/python-databricks-sql-connector.spec
new file mode 100644
index 0000000..87c39c0
--- /dev/null
+++ b/python-databricks-sql-connector.spec
@@ -0,0 +1,303 @@
+%global _empty_manifest_terminate_build 0
+Name: python-databricks-sql-connector
+Version: 2.4.1
+Release: 1
+Summary: Databricks SQL Connector for Python
+License: Apache-2.0
+URL: https://github.com/databricks/databricks-sql-python
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/34/e1/129e6b2a316c6d2cf3d8702de405be86b6741f2bc43dafd90dba2b05894d/databricks_sql_connector-2.4.1.tar.gz
+BuildArch: noarch
+
+Requires: python3-thrift
+Requires: python3-pandas
+Requires: python3-pyarrow
+Requires: python3-lz4
+Requires: python3-requests
+Requires: python3-oauthlib
+Requires: python3-numpy
+Requires: python3-sqlalchemy
+Requires: python3-openpyxl
+Requires: python3-alembic
+
+%description
+# Databricks SQL Connector for Python
+
+[![PyPI](https://img.shields.io/pypi/v/databricks-sql-connector?style=flat-square)](https://pypi.org/project/databricks-sql-connector/)
+[![Downloads](https://pepy.tech/badge/databricks-sql-connector)](https://pepy.tech/project/databricks-sql-connector)
+
+The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic`, which use SQLAlchemy to execute DDL.
+
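Because the connector conforms to DB API 2.0, its connect/cursor/execute/fetch surface is the standard PEP 249 one. A minimal, locally runnable sketch of that shared interface, using the standard library's `sqlite3` driver purely as a stand-in (it is not the Databricks connector):

```python
import sqlite3

# Any PEP 249 driver exposes connect() -> Connection, Connection.cursor()
# -> Cursor, and Cursor.execute() / fetchall().
connection = sqlite3.connect(":memory:")
cursor = connection.cursor()

cursor.execute("SELECT 1 AS a, 2 AS b")
rows = cursor.fetchall()          # list of row tuples: [(1, 2)]

# Cursor.description carries column metadata, as PEP 249 requires.
cols = [col[0] for col in cursor.description]  # ['a', 'b']

cursor.close()
connection.close()
```

The quickstart below uses the same `execute()` / `fetchall()` calls against a real Databricks connection.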
+This connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time.
+
+You are welcome to file an issue here for general use cases. You can also contact Databricks Support [here](https://help.databricks.com).
+
+## Requirements
+
+Python 3.7 or above is required.
+
+## Documentation
+
+For the latest documentation, see
+
+- [Databricks](https://docs.databricks.com/dev-tools/python-sql-connector.html)
+- [Azure Databricks](https://docs.microsoft.com/en-us/azure/databricks/dev-tools/python-sql-connector)
+
+## Quickstart
+
+Install the library with `pip install databricks-sql-connector`
+
+Note: Don't hard-code authentication secrets into your Python code. Use environment variables instead:
+
+```bash
+export DATABRICKS_HOST=********.databricks.com
+export DATABRICKS_HTTP_PATH=/sql/1.0/endpoints/****************
+export DATABRICKS_TOKEN=dapi********************************
+```
+
+Example usage:
+```python
+import os
+from databricks import sql
+
+host = os.getenv("DATABRICKS_HOST")
+http_path = os.getenv("DATABRICKS_HTTP_PATH")
+access_token = os.getenv("DATABRICKS_TOKEN")
+
+connection = sql.connect(
+ server_hostname=host,
+ http_path=http_path,
+ access_token=access_token)
+
+cursor = connection.cursor()
+
+cursor.execute('SELECT * FROM RANGE(10)')
+result = cursor.fetchall()
+for row in result:
+ print(row)
+
+cursor.close()
+connection.close()
+```
+
+In the above example:
+- `server_hostname` is the Databricks instance hostname.
+- `http_path` is the HTTP path of either a Databricks SQL endpoint (e.g. /sql/1.0/endpoints/1234567890abcdef)
+or a Databricks Runtime interactive cluster (e.g. /sql/protocolv1/o/1234567890123456/1234-123456-slid123).
+- `access_token` is the Databricks personal access token for the account that will execute commands and queries.
+
+
+## Contributing
+
+See [CONTRIBUTING.md](CONTRIBUTING.md)
+
+## License
+
+[Apache License 2.0](LICENSE)
+
+
+%package -n python3-databricks-sql-connector
+Summary: Databricks SQL Connector for Python
+Provides: python-databricks-sql-connector
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-databricks-sql-connector
+# Databricks SQL Connector for Python
+
+[![PyPI](https://img.shields.io/pypi/v/databricks-sql-connector?style=flat-square)](https://pypi.org/project/databricks-sql-connector/)
+[![Downloads](https://pepy.tech/badge/databricks-sql-connector)](https://pepy.tech/project/databricks-sql-connector)
+
+The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic`, which use SQLAlchemy to execute DDL.
+
+This connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time.
+
+You are welcome to file an issue here for general use cases. You can also contact Databricks Support [here](https://help.databricks.com).
+
+## Requirements
+
+Python 3.7 or above is required.
+
+## Documentation
+
+For the latest documentation, see
+
+- [Databricks](https://docs.databricks.com/dev-tools/python-sql-connector.html)
+- [Azure Databricks](https://docs.microsoft.com/en-us/azure/databricks/dev-tools/python-sql-connector)
+
+## Quickstart
+
+Install the library with `pip install databricks-sql-connector`
+
+Note: Don't hard-code authentication secrets into your Python code. Use environment variables instead:
+
+```bash
+export DATABRICKS_HOST=********.databricks.com
+export DATABRICKS_HTTP_PATH=/sql/1.0/endpoints/****************
+export DATABRICKS_TOKEN=dapi********************************
+```
+
+Example usage:
+```python
+import os
+from databricks import sql
+
+host = os.getenv("DATABRICKS_HOST")
+http_path = os.getenv("DATABRICKS_HTTP_PATH")
+access_token = os.getenv("DATABRICKS_TOKEN")
+
+connection = sql.connect(
+ server_hostname=host,
+ http_path=http_path,
+ access_token=access_token)
+
+cursor = connection.cursor()
+
+cursor.execute('SELECT * FROM RANGE(10)')
+result = cursor.fetchall()
+for row in result:
+ print(row)
+
+cursor.close()
+connection.close()
+```
+
+In the above example:
+- `server_hostname` is the Databricks instance hostname.
+- `http_path` is the HTTP path of either a Databricks SQL endpoint (e.g. /sql/1.0/endpoints/1234567890abcdef)
+or a Databricks Runtime interactive cluster (e.g. /sql/protocolv1/o/1234567890123456/1234-123456-slid123).
+- `access_token` is the Databricks personal access token for the account that will execute commands and queries.
+
+
+## Contributing
+
+See [CONTRIBUTING.md](CONTRIBUTING.md)
+
+## License
+
+[Apache License 2.0](LICENSE)
+
+
+%package help
+Summary: Development documents and examples for databricks-sql-connector
+Provides: python3-databricks-sql-connector-doc
+%description help
+# Databricks SQL Connector for Python
+
+[![PyPI](https://img.shields.io/pypi/v/databricks-sql-connector?style=flat-square)](https://pypi.org/project/databricks-sql-connector/)
+[![Downloads](https://pepy.tech/badge/databricks-sql-connector)](https://pepy.tech/project/databricks-sql-connector)
+
+The Databricks SQL Connector for Python allows you to develop Python applications that connect to Databricks clusters and SQL warehouses. It is a Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the [Python DB API 2.0 specification](https://www.python.org/dev/peps/pep-0249/) and exposes a [SQLAlchemy](https://www.sqlalchemy.org/) dialect for use with tools like `pandas` and `alembic`, which use SQLAlchemy to execute DDL.
+
+This connector uses Arrow as the data-exchange format, and supports APIs to directly fetch Arrow tables. Arrow tables are wrapped in the `ArrowQueue` class to provide a natural API to get several rows at a time.
+
+You are welcome to file an issue here for general use cases. You can also contact Databricks Support [here](https://help.databricks.com).
+
+## Requirements
+
+Python 3.7 or above is required.
+
+## Documentation
+
+For the latest documentation, see
+
+- [Databricks](https://docs.databricks.com/dev-tools/python-sql-connector.html)
+- [Azure Databricks](https://docs.microsoft.com/en-us/azure/databricks/dev-tools/python-sql-connector)
+
+## Quickstart
+
+Install the library with `pip install databricks-sql-connector`
+
+Note: Don't hard-code authentication secrets into your Python code. Use environment variables instead:
+
+```bash
+export DATABRICKS_HOST=********.databricks.com
+export DATABRICKS_HTTP_PATH=/sql/1.0/endpoints/****************
+export DATABRICKS_TOKEN=dapi********************************
+```
+
+Example usage:
+```python
+import os
+from databricks import sql
+
+host = os.getenv("DATABRICKS_HOST")
+http_path = os.getenv("DATABRICKS_HTTP_PATH")
+access_token = os.getenv("DATABRICKS_TOKEN")
+
+connection = sql.connect(
+ server_hostname=host,
+ http_path=http_path,
+ access_token=access_token)
+
+cursor = connection.cursor()
+
+cursor.execute('SELECT * FROM RANGE(10)')
+result = cursor.fetchall()
+for row in result:
+ print(row)
+
+cursor.close()
+connection.close()
+```
+
+In the above example:
+- `server_hostname` is the Databricks instance hostname.
+- `http_path` is the HTTP path of either a Databricks SQL endpoint (e.g. /sql/1.0/endpoints/1234567890abcdef)
+or a Databricks Runtime interactive cluster (e.g. /sql/protocolv1/o/1234567890123456/1234-123456-slid123).
+- `access_token` is the Databricks personal access token for the account that will execute commands and queries.
+
+
+## Contributing
+
+See [CONTRIBUTING.md](CONTRIBUTING.md)
+
+## License
+
+[Apache License 2.0](LICENSE)
+
+
+%prep
+%autosetup -n databricks-sql-connector-2.4.1
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-databricks-sql-connector -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 2.4.1-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..cdb0703
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+70701e3d0f99d7fbaaaf0921426b4838 databricks_sql_connector-2.4.1.tar.gz