author    CoprDistGit <infra@openeuler.org>  2023-04-10 23:34:10 +0000
committer CoprDistGit <infra@openeuler.org>  2023-04-10 23:34:10 +0000
commit    84553316917d0a045d41d893927049b29c826691 (patch)
tree      1b0fc1cb20bf12a2e84c6b8058a0899640148163
parent    b255605f1f9a8d03f4b3335977c8a63ee33dbf85 (diff)
automatic import of python-dbt-spark
-rw-r--r--  .gitignore              1
-rw-r--r--  python-dbt-spark.spec 349
-rw-r--r--  sources                 1
3 files changed, 351 insertions(+), 0 deletions(-)
diff --git a/.gitignore b/.gitignore
index e69de29..076e01f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/dbt-spark-1.4.1.tar.gz
diff --git a/python-dbt-spark.spec b/python-dbt-spark.spec
new file mode 100644
index 0000000..204860c
--- /dev/null
+++ b/python-dbt-spark.spec
@@ -0,0 +1,349 @@
+%global _empty_manifest_terminate_build 0
+Name: python-dbt-spark
+Version: 1.4.1
+Release: 1
+Summary: The Apache Spark adapter plugin for dbt
+License: Apache-2.0
+URL: https://github.com/dbt-labs/dbt-spark
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/d1/02/2276924d6fc6d559aed653a566c86347765f01d31a2b1f45c13820a3e6f4/dbt-spark-1.4.1.tar.gz
+BuildArch: noarch
+
+Requires: python3-dbt-core
+Requires: python3-sqlparams
+Requires: python3-pyodbc
+Requires: python3-PyHive[hive]
+Requires: python3-thrift
+Requires: python3-pyspark
+
+%description
+<p align="center">
+ <img src="https://raw.githubusercontent.com/dbt-labs/dbt/ec7dee39f793aa4f7dd3dae37282cc87664813e4/etc/dbt-logo-full.svg" alt="dbt logo" width="500"/>
+</p>
+<p align="center">
+ <a href="https://github.com/dbt-labs/dbt-spark/actions/workflows/main.yml">
+ <img src="https://github.com/dbt-labs/dbt-spark/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
+ </a>
+ <a href="https://github.com/dbt-labs/dbt-spark/actions/workflows/integration.yml">
+ <img src="https://github.com/dbt-labs/dbt-spark/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
+ </a>
+</p>
+
+**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
+
+dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.
+
+## dbt-spark
+
+The `dbt-spark` package contains all of the code enabling dbt to work with Apache Spark and Databricks. For
+more information, consult [the docs](https://docs.getdbt.com/docs/profile-spark).
+
+## Getting started
+
+- [Install dbt](https://docs.getdbt.com/docs/installation)
+- Read the [introduction](https://docs.getdbt.com/docs/introduction/) and [viewpoint](https://docs.getdbt.com/docs/about/viewpoint/)
+
+## Running locally
+A `docker-compose` environment starts a Spark Thrift server and a Postgres database as a Hive Metastore backend.
+Note: dbt-spark now supports Spark 3.1.1 (formerly on Spark 2.x).
+
+The following command starts the two Docker containers:
+```
+docker-compose up -d
+```
+It can take a bit of time for the instances to start; check the logs of the two containers to follow their progress.
+If an instance doesn't start correctly, run the complete reset commands listed below and then try starting again.
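+
+For example, you can check the container status and follow the logs while waiting (a minimal sketch using standard `docker-compose` subcommands):
+
+```
+docker-compose ps       # confirm both containers are running
+docker-compose logs -f  # follow the startup logs of both containers
+```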
+
+Create a profile like this one:
+
+```
+spark_testing:
+ target: local
+ outputs:
+ local:
+ type: spark
+ method: thrift
+ host: 127.0.0.1
+ port: 10000
+ user: dbt
+ schema: analytics
+ connect_retries: 5
+ connect_timeout: 60
+ retry_all: true
+```
+
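+Once the profile is in place, you can verify the connection with `dbt debug` (a sketch; it assumes the profile above is saved as `~/.dbt/profiles.yml` and that your project's `profile:` is set to `spark_testing`):
+
+```
+dbt debug --target local
+```
+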
+Connecting to the local Spark instance:
+
+* The Spark UI should be available at [http://localhost:4040/sqlserver/](http://localhost:4040/sqlserver/)
+* The endpoint for SQL-based testing is at `localhost:10000` and can be reached with the Hive or Spark JDBC drivers using the connection string `jdbc:hive2://localhost:10000` and the default credentials `dbt`:`dbt` (see the `beeline` example below)
+
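+For a quick connectivity check you can use any JDBC-capable SQL client; for example, a minimal sketch with Hive's `beeline` CLI (assuming it is installed locally):
+
+```
+beeline -u jdbc:hive2://localhost:10000 -n dbt -p dbt -e 'SHOW DATABASES;'
+```
+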
+Note that the Hive metastore data is persisted under `./.hive-metastore/`, and the Spark-produced data under `./.spark-warehouse/`. To completely reset your environment, run the following:
+
+```
+docker-compose down
+rm -rf ./.hive-metastore/
+rm -rf ./.spark-warehouse/
+```
+
+## Join the dbt Community
+
+- Be part of the conversation in the [dbt Community Slack](http://community.getdbt.com/)
+- Read more on the [dbt Community Discourse](https://discourse.getdbt.com)
+
+## Reporting bugs and contributing code
+
+- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/dbt-labs/dbt-spark/issues/new)
+- Want to help us build dbt? Check out the [Contributing Guide](https://github.com/dbt-labs/dbt/blob/HEAD/CONTRIBUTING.md)
+
+## Code of Conduct
+
+Everyone interacting in the dbt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the [dbt Code of Conduct](https://community.getdbt.com/code-of-conduct).
+
+
+%package -n python3-dbt-spark
+Summary: The Apache Spark adapter plugin for dbt
+Provides: python-dbt-spark
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-dbt-spark
+<p align="center">
+ <img src="https://raw.githubusercontent.com/dbt-labs/dbt/ec7dee39f793aa4f7dd3dae37282cc87664813e4/etc/dbt-logo-full.svg" alt="dbt logo" width="500"/>
+</p>
+<p align="center">
+ <a href="https://github.com/dbt-labs/dbt-spark/actions/workflows/main.yml">
+ <img src="https://github.com/dbt-labs/dbt-spark/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
+ </a>
+ <a href="https://github.com/dbt-labs/dbt-spark/actions/workflows/integration.yml">
+ <img src="https://github.com/dbt-labs/dbt-spark/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
+ </a>
+</p>
+
+**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
+
+dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.
+
+## dbt-spark
+
+The `dbt-spark` package contains all of the code enabling dbt to work with Apache Spark and Databricks. For
+more information, consult [the docs](https://docs.getdbt.com/docs/profile-spark).
+
+## Getting started
+
+- [Install dbt](https://docs.getdbt.com/docs/installation)
+- Read the [introduction](https://docs.getdbt.com/docs/introduction/) and [viewpoint](https://docs.getdbt.com/docs/about/viewpoint/)
+
+## Running locally
+A `docker-compose` environment starts a Spark Thrift server and a Postgres database as a Hive Metastore backend.
+Note: dbt-spark now supports Spark 3.1.1 (formerly on Spark 2.x).
+
+The following command starts the two Docker containers:
+```
+docker-compose up -d
+```
+It can take a bit of time for the instances to start; check the logs of the two containers to follow their progress.
+If an instance doesn't start correctly, run the complete reset commands listed below and then try starting again.
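+
+For example, you can check the container status and follow the logs while waiting (a minimal sketch using standard `docker-compose` subcommands):
+
+```
+docker-compose ps       # confirm both containers are running
+docker-compose logs -f  # follow the startup logs of both containers
+```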
+
+Create a profile like this one:
+
+```
+spark_testing:
+ target: local
+ outputs:
+ local:
+ type: spark
+ method: thrift
+ host: 127.0.0.1
+ port: 10000
+ user: dbt
+ schema: analytics
+ connect_retries: 5
+ connect_timeout: 60
+ retry_all: true
+```
+
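+Once the profile is in place, you can verify the connection with `dbt debug` (a sketch; it assumes the profile above is saved as `~/.dbt/profiles.yml` and that your project's `profile:` is set to `spark_testing`):
+
+```
+dbt debug --target local
+```
+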
+Connecting to the local Spark instance:
+
+* The Spark UI should be available at [http://localhost:4040/sqlserver/](http://localhost:4040/sqlserver/)
+* The endpoint for SQL-based testing is at `localhost:10000` and can be reached with the Hive or Spark JDBC drivers using the connection string `jdbc:hive2://localhost:10000` and the default credentials `dbt`:`dbt` (see the `beeline` example below)
+
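+For a quick connectivity check you can use any JDBC-capable SQL client; for example, a minimal sketch with Hive's `beeline` CLI (assuming it is installed locally):
+
+```
+beeline -u jdbc:hive2://localhost:10000 -n dbt -p dbt -e 'SHOW DATABASES;'
+```
+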
+Note that the Hive metastore data is persisted under `./.hive-metastore/`, and the Spark-produced data under `./.spark-warehouse/`. To completely reset your environment, run the following:
+
+```
+docker-compose down
+rm -rf ./.hive-metastore/
+rm -rf ./.spark-warehouse/
+```
+
+## Join the dbt Community
+
+- Be part of the conversation in the [dbt Community Slack](http://community.getdbt.com/)
+- Read more on the [dbt Community Discourse](https://discourse.getdbt.com)
+
+## Reporting bugs and contributing code
+
+- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/dbt-labs/dbt-spark/issues/new)
+- Want to help us build dbt? Check out the [Contributing Guide](https://github.com/dbt-labs/dbt/blob/HEAD/CONTRIBUTING.md)
+
+## Code of Conduct
+
+Everyone interacting in the dbt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the [dbt Code of Conduct](https://community.getdbt.com/code-of-conduct).
+
+
+%package help
+Summary: Development documents and examples for dbt-spark
+Provides: python3-dbt-spark-doc
+%description help
+<p align="center">
+ <img src="https://raw.githubusercontent.com/dbt-labs/dbt/ec7dee39f793aa4f7dd3dae37282cc87664813e4/etc/dbt-logo-full.svg" alt="dbt logo" width="500"/>
+</p>
+<p align="center">
+ <a href="https://github.com/dbt-labs/dbt-spark/actions/workflows/main.yml">
+ <img src="https://github.com/dbt-labs/dbt-spark/actions/workflows/main.yml/badge.svg?event=push" alt="Unit Tests Badge"/>
+ </a>
+ <a href="https://github.com/dbt-labs/dbt-spark/actions/workflows/integration.yml">
+ <img src="https://github.com/dbt-labs/dbt-spark/actions/workflows/integration.yml/badge.svg?event=push" alt="Integration Tests Badge"/>
+ </a>
+</p>
+
+**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
+
+dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.
+
+## dbt-spark
+
+The `dbt-spark` package contains all of the code enabling dbt to work with Apache Spark and Databricks. For
+more information, consult [the docs](https://docs.getdbt.com/docs/profile-spark).
+
+## Getting started
+
+- [Install dbt](https://docs.getdbt.com/docs/installation)
+- Read the [introduction](https://docs.getdbt.com/docs/introduction/) and [viewpoint](https://docs.getdbt.com/docs/about/viewpoint/)
+
+## Running locally
+A `docker-compose` environment starts a Spark Thrift server and a Postgres database as a Hive Metastore backend.
+Note: dbt-spark now supports Spark 3.1.1 (formerly on Spark 2.x).
+
+The following command starts the two Docker containers:
+```
+docker-compose up -d
+```
+It can take a bit of time for the instances to start; check the logs of the two containers to follow their progress.
+If an instance doesn't start correctly, run the complete reset commands listed below and then try starting again.
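+
+For example, you can check the container status and follow the logs while waiting (a minimal sketch using standard `docker-compose` subcommands):
+
+```
+docker-compose ps       # confirm both containers are running
+docker-compose logs -f  # follow the startup logs of both containers
+```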
+
+Create a profile like this one:
+
+```
+spark_testing:
+ target: local
+ outputs:
+ local:
+ type: spark
+ method: thrift
+ host: 127.0.0.1
+ port: 10000
+ user: dbt
+ schema: analytics
+ connect_retries: 5
+ connect_timeout: 60
+ retry_all: true
+```
+
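+Once the profile is in place, you can verify the connection with `dbt debug` (a sketch; it assumes the profile above is saved as `~/.dbt/profiles.yml` and that your project's `profile:` is set to `spark_testing`):
+
+```
+dbt debug --target local
+```
+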
+Connecting to the local Spark instance:
+
+* The Spark UI should be available at [http://localhost:4040/sqlserver/](http://localhost:4040/sqlserver/)
+* The endpoint for SQL-based testing is at `localhost:10000` and can be reached with the Hive or Spark JDBC drivers using the connection string `jdbc:hive2://localhost:10000` and the default credentials `dbt`:`dbt` (see the `beeline` example below)
+
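+For a quick connectivity check you can use any JDBC-capable SQL client; for example, a minimal sketch with Hive's `beeline` CLI (assuming it is installed locally):
+
+```
+beeline -u jdbc:hive2://localhost:10000 -n dbt -p dbt -e 'SHOW DATABASES;'
+```
+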
+Note that the Hive metastore data is persisted under `./.hive-metastore/`, and the Spark-produced data under `./.spark-warehouse/`. To completely reset your environment, run the following:
+
+```
+docker-compose down
+rm -rf ./.hive-metastore/
+rm -rf ./.spark-warehouse/
+```
+
+## Join the dbt Community
+
+- Be part of the conversation in the [dbt Community Slack](http://community.getdbt.com/)
+- Read more on the [dbt Community Discourse](https://discourse.getdbt.com)
+
+## Reporting bugs and contributing code
+
+- Want to report a bug or request a feature? Let us know on [Slack](http://community.getdbt.com/), or open [an issue](https://github.com/dbt-labs/dbt-spark/issues/new)
+- Want to help us build dbt? Check out the [Contributing Guide](https://github.com/dbt-labs/dbt/blob/HEAD/CONTRIBUTING.md)
+
+## Code of Conduct
+
+Everyone interacting in the dbt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the [dbt Code of Conduct](https://community.getdbt.com/code-of-conduct).
+
+
+%prep
+%autosetup -n dbt-spark-1.4.1
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
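+# Copy any upstream doc/example directories into the package docdir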
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
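+# Record every file installed under usr/lib, usr/lib64, usr/bin, and usr/sbin
+# into filelist.lst; the %files section below packages from this list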
+if [ -d usr/lib ]; then
+	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
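+# Man pages are gzip-compressed by rpm's brp-compress after %install,
+# hence the appended .gz suffix in the recorded paths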
+if [ -d usr/share/man ]; then
+	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-dbt-spark -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 1.4.1-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..58a8c4e
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+d57622abd2774773af75b04c3792b510 dbt-spark-1.4.1.tar.gz