%global _empty_manifest_terminate_build 0
Name: python-gcp-airflow-foundations-dev
Version: 11.1
Release: 1
Summary: Opinionated framework based on Airflow 2.0 for building pipelines to ingest data into a BigQuery data warehouse
License: Apache 2.0
URL: https://github.com/badal-io/gcp-airflow-foundations
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/fe/f8/34a85459324b090d0b8d3b22ddb3b3dfc42e7e6ed7d6e4ca34f2a222c8d4/gcp-airflow-foundations-dev-11.1.tar.gz
BuildArch: noarch
%description
# gcp-airflow-foundations
[PyPI](https://badge.fury.io/py/gcp-airflow-foundations)
[Cloud Build status](https://storage.googleapis.com/public-cloudbuild/build/cloudbuild_status.svg)
[Documentation](https://gcp-airflow-foundations.readthedocs.io/en/latest/?badge=latest)

Airflow is an awesome open source orchestration framework that is the go-to for building data ingestion pipelines on GCP (using Composer, a hosted Airflow service). However, most companies using it face the same set of problems:
- **Learning curve**: Airflow requires Python knowledge and has some gotchas that take time to learn. Moreover, writing a Python DAG for every single table that needs to be ingested becomes cumbersome. Most companies end up building utilities that generate DAGs from configuration files, both to simplify DAG creation and to let non-developers configure ingestion.
- **Data lake and data pipeline design best practices**: Airflow only provides the building blocks; users are still required to understand and implement the nuances of building proper ingestion pipelines for the data lake/data warehouse platform they are using.
- **Core reusability and best-practice enforcement across the enterprise**: Usually each team maintains its own Airflow source code and deployment.

We have written an opinionated yet flexible ingestion framework for building pipelines that ingest data into a BigQuery data warehouse. It supports the following features:
- **Zero-code**, config file based ingestion - anybody can start ingesting from the growing number of sources by just providing a simple configuration file. Zero python or Airflow knowledge is required.
- **Modular and extendable** - The core of the framework is a lightweight library. Ingestion sources are added as plugins. Adding a new source can be done by extending the provided base classes.
- **Opinionated automatic creation of ODS (Operational Data Store) and HDS (Historical Data Store)** in BigQuery while enforcing best practices such as schema migration, data quality validation, idempotency, partitioning, etc.
- **Dataflow job support** for ingesting large datasets from SQL sources and deploying jobs into a specific network or shared VPC.
- Support of **advanced Airflow features** for job prioritization such as slots and priorities.
- Integration with **GCP data services** such as DLP and Data Catalog [work in progress].
- **Well tested** - We maintain a rich suite of both unit and integration tests.
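To make the zero-code claim above concrete, here is a rough sketch of what a config-driven source definition could look like. This is a hypothetical illustration only: the file name and every field in it are assumptions made for the sketch, not the framework's actual schema, which is covered in the full documentation.

```shell
# Hypothetical illustration only: the field names below are assumed for the
# sake of the sketch and are NOT the framework's real config schema.
mkdir -p config
cat > config/sample_source.yaml <<'EOF'
source:
  name: sample_gcs_source      # assumed field: logical name of the source
  ingest_schedule: "@daily"    # assumed field: Airflow schedule expression
  location: US                 # assumed field: BigQuery location
  dataset_data_name: ods_data  # assumed field: target ODS dataset
tables:
  - table_name: customers      # assumed field: table to ingest
    ingestion_type: FULL       # assumed field: full vs. incremental load
EOF
echo "wrote config/sample_source.yaml"
```

The point of the design is that a file like this is the entire "program" a data analyst has to write: the framework reads the config and generates the corresponding DAG.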
## Installing from PyPI
```bash
pip install 'gcp-airflow-foundations'
```
## Full Documentation
See the [gcp-airflow-foundations documentation](https://gcp-airflow-foundations.readthedocs.io/en/latest/) for more details.
## Running locally
### Sample DAGs
Sample DAGs that ingest publicly available GCS files can be found in the dags folder; they start running as soon as Airflow is run locally. To have them run successfully, ensure the following:
- Enable the BigQuery, Cloud Storage, Cloud DLP, and Data Catalog APIs
- Create a BigQuery dataset for the HDS and ODS
- Create a DLP inspect template in DLP
- Create a policy tag in Data Catalog
- Update the gcp_project, location, and dataset values, as well as the DLP and policy-tag configs, with your newly created values
### Using Service Account
- Create a service account in GCP and save its key as ```helpers/key/keys.json``` (don't worry, it is in .gitignore and will not be pushed to the git repo)
- Run Airflow locally (the Airflow UI will be accessible at http://localhost:8080): ```docker-compose up```
- Default authentication values for the Airflow UI are provided in lines 96 and 97 of ```docker-composer.yaml```
### Using user IAM
- Uncomment line 11 in ```docker-composer.yaml```
- Set the env var PROJECT_ID to your test project
- Authorize gcloud to access Google Cloud with your Google user credentials: ```helpers/scripts/gcp-auth.sh```
- Run Airflow locally (the Airflow UI will be accessible at http://localhost:8080): ```docker-compose up```
- Default authentication values for the Airflow UI are provided in lines 96 and 97 of ```docker-composer.yaml```
### Running tests
- Run unit tests: ```./tests/airflow "pytest tests/unit"```
- Run unit tests with a coverage report: ```./tests/airflow "pytest --cov=gcp_airflow_foundations tests/unit"```
- Run integration tests: ```./tests/airflow "pytest tests/integration"```
- Rebuild the Docker image if requirements have changed: ```docker-compose build```
# Contributing
## Install pre-commit hook
Install pre-commit hooks for linting, format checking, etc.:
- Install the pre-commit Python library locally: ```pip install pre-commit```
- Install the pre-commit hooks for the repo: ```pre-commit install```
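For context, pre-commit is driven by a ```.pre-commit-config.yaml``` at the repo root, and ```pre-commit install``` wires the hooks defined there into git. This repo ships its own config; the sketch below only shows the general shape of such a file using the standard pre-commit-hooks repository (the ```rev``` pin is an arbitrary example, not the repo's actual pin).

```shell
# General-shape example of a pre-commit config; this repo's real
# .pre-commit-config.yaml is what `pre-commit install` actually uses.
cat > /tmp/example-pre-commit-config.yaml <<'EOF'
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0                   # example pin; pick a current tag in practice
    hooks:
      - id: trailing-whitespace   # strip trailing spaces
      - id: end-of-file-fixer     # ensure files end with a newline
      - id: check-yaml            # validate YAML syntax
EOF
echo "wrote /tmp/example-pre-commit-config.yaml"
```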
%package -n python3-gcp-airflow-foundations-dev
Summary: Opinionated framework based on Airflow 2.0 for building pipelines to ingest data into a BigQuery data warehouse
Provides: python-gcp-airflow-foundations-dev
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-gcp-airflow-foundations-dev
# gcp-airflow-foundations
[PyPI](https://badge.fury.io/py/gcp-airflow-foundations)
[Cloud Build status](https://storage.googleapis.com/public-cloudbuild/build/cloudbuild_status.svg)
[Documentation](https://gcp-airflow-foundations.readthedocs.io/en/latest/?badge=latest)

Airflow is an awesome open source orchestration framework that is the go-to for building data ingestion pipelines on GCP (using Composer, a hosted Airflow service). However, most companies using it face the same set of problems:
- **Learning curve**: Airflow requires Python knowledge and has some gotchas that take time to learn. Moreover, writing a Python DAG for every single table that needs to be ingested becomes cumbersome. Most companies end up building utilities that generate DAGs from configuration files, both to simplify DAG creation and to let non-developers configure ingestion.
- **Data lake and data pipeline design best practices**: Airflow only provides the building blocks; users are still required to understand and implement the nuances of building proper ingestion pipelines for the data lake/data warehouse platform they are using.
- **Core reusability and best-practice enforcement across the enterprise**: Usually each team maintains its own Airflow source code and deployment.

We have written an opinionated yet flexible ingestion framework for building pipelines that ingest data into a BigQuery data warehouse. It supports the following features:
- **Zero-code**, config file based ingestion - anybody can start ingesting from the growing number of sources by just providing a simple configuration file. Zero python or Airflow knowledge is required.
- **Modular and extendable** - The core of the framework is a lightweight library. Ingestion sources are added as plugins. Adding a new source can be done by extending the provided base classes.
- **Opinionated automatic creation of ODS (Operational Data Store) and HDS (Historical Data Store)** in BigQuery while enforcing best practices such as schema migration, data quality validation, idempotency, partitioning, etc.
- **Dataflow job support** for ingesting large datasets from SQL sources and deploying jobs into a specific network or shared VPC.
- Support of **advanced Airflow features** for job prioritization such as slots and priorities.
- Integration with **GCP data services** such as DLP and Data Catalog [work in progress].
- **Well tested** - We maintain a rich suite of both unit and integration tests.
## Installing from PyPI
```bash
pip install 'gcp-airflow-foundations'
```
## Full Documentation
See the [gcp-airflow-foundations documentation](https://gcp-airflow-foundations.readthedocs.io/en/latest/) for more details.
## Running locally
### Sample DAGs
Sample DAGs that ingest publicly available GCS files can be found in the dags folder; they start running as soon as Airflow is run locally. To have them run successfully, ensure the following:
- Enable the BigQuery, Cloud Storage, Cloud DLP, and Data Catalog APIs
- Create a BigQuery dataset for the HDS and ODS
- Create a DLP inspect template in DLP
- Create a policy tag in Data Catalog
- Update the gcp_project, location, and dataset values, as well as the DLP and policy-tag configs, with your newly created values
### Using Service Account
- Create a service account in GCP and save its key as ```helpers/key/keys.json``` (don't worry, it is in .gitignore and will not be pushed to the git repo)
- Run Airflow locally (the Airflow UI will be accessible at http://localhost:8080): ```docker-compose up```
- Default authentication values for the Airflow UI are provided in lines 96 and 97 of ```docker-composer.yaml```
### Using user IAM
- Uncomment line 11 in ```docker-composer.yaml```
- Set the env var PROJECT_ID to your test project
- Authorize gcloud to access Google Cloud with your Google user credentials: ```helpers/scripts/gcp-auth.sh```
- Run Airflow locally (the Airflow UI will be accessible at http://localhost:8080): ```docker-compose up```
- Default authentication values for the Airflow UI are provided in lines 96 and 97 of ```docker-composer.yaml```
### Running tests
- Run unit tests: ```./tests/airflow "pytest tests/unit"```
- Run unit tests with a coverage report: ```./tests/airflow "pytest --cov=gcp_airflow_foundations tests/unit"```
- Run integration tests: ```./tests/airflow "pytest tests/integration"```
- Rebuild the Docker image if requirements have changed: ```docker-compose build```
# Contributing
## Install pre-commit hook
Install pre-commit hooks for linting, format checking, etc.:
- Install the pre-commit Python library locally: ```pip install pre-commit```
- Install the pre-commit hooks for the repo: ```pre-commit install```
%package help
Summary: Development documents and examples for gcp-airflow-foundations-dev
Provides: python3-gcp-airflow-foundations-dev-doc
%description help
# gcp-airflow-foundations
[PyPI](https://badge.fury.io/py/gcp-airflow-foundations)
[Cloud Build status](https://storage.googleapis.com/public-cloudbuild/build/cloudbuild_status.svg)
[Documentation](https://gcp-airflow-foundations.readthedocs.io/en/latest/?badge=latest)

Airflow is an awesome open source orchestration framework that is the go-to for building data ingestion pipelines on GCP (using Composer, a hosted Airflow service). However, most companies using it face the same set of problems:
- **Learning curve**: Airflow requires Python knowledge and has some gotchas that take time to learn. Moreover, writing a Python DAG for every single table that needs to be ingested becomes cumbersome. Most companies end up building utilities that generate DAGs from configuration files, both to simplify DAG creation and to let non-developers configure ingestion.
- **Data lake and data pipeline design best practices**: Airflow only provides the building blocks; users are still required to understand and implement the nuances of building proper ingestion pipelines for the data lake/data warehouse platform they are using.
- **Core reusability and best-practice enforcement across the enterprise**: Usually each team maintains its own Airflow source code and deployment.

We have written an opinionated yet flexible ingestion framework for building pipelines that ingest data into a BigQuery data warehouse. It supports the following features:
- **Zero-code**, config file based ingestion - anybody can start ingesting from the growing number of sources by just providing a simple configuration file. Zero python or Airflow knowledge is required.
- **Modular and extendable** - The core of the framework is a lightweight library. Ingestion sources are added as plugins. Adding a new source can be done by extending the provided base classes.
- **Opinionated automatic creation of ODS (Operational Data Store) and HDS (Historical Data Store)** in BigQuery while enforcing best practices such as schema migration, data quality validation, idempotency, partitioning, etc.
- **Dataflow job support** for ingesting large datasets from SQL sources and deploying jobs into a specific network or shared VPC.
- Support of **advanced Airflow features** for job prioritization such as slots and priorities.
- Integration with **GCP data services** such as DLP and Data Catalog [work in progress].
- **Well tested** - We maintain a rich suite of both unit and integration tests.
## Installing from PyPI
```bash
pip install 'gcp-airflow-foundations'
```
## Full Documentation
See the [gcp-airflow-foundations documentation](https://gcp-airflow-foundations.readthedocs.io/en/latest/) for more details.
## Running locally
### Sample DAGs
Sample DAGs that ingest publicly available GCS files can be found in the dags folder; they start running as soon as Airflow is run locally. To have them run successfully, ensure the following:
- Enable the BigQuery, Cloud Storage, Cloud DLP, and Data Catalog APIs
- Create a BigQuery dataset for the HDS and ODS
- Create a DLP inspect template in DLP
- Create a policy tag in Data Catalog
- Update the gcp_project, location, and dataset values, as well as the DLP and policy-tag configs, with your newly created values
### Using Service Account
- Create a service account in GCP and save its key as ```helpers/key/keys.json``` (don't worry, it is in .gitignore and will not be pushed to the git repo)
- Run Airflow locally (the Airflow UI will be accessible at http://localhost:8080): ```docker-compose up```
- Default authentication values for the Airflow UI are provided in lines 96 and 97 of ```docker-composer.yaml```
### Using user IAM
- Uncomment line 11 in ```docker-composer.yaml```
- Set the env var PROJECT_ID to your test project
- Authorize gcloud to access Google Cloud with your Google user credentials: ```helpers/scripts/gcp-auth.sh```
- Run Airflow locally (the Airflow UI will be accessible at http://localhost:8080): ```docker-compose up```
- Default authentication values for the Airflow UI are provided in lines 96 and 97 of ```docker-composer.yaml```
### Running tests
- Run unit tests: ```./tests/airflow "pytest tests/unit"```
- Run unit tests with a coverage report: ```./tests/airflow "pytest --cov=gcp_airflow_foundations tests/unit"```
- Run integration tests: ```./tests/airflow "pytest tests/integration"```
- Rebuild the Docker image if requirements have changed: ```docker-compose build```
# Contributing
## Install pre-commit hook
Install pre-commit hooks for linting, format checking, etc.:
- Install the pre-commit Python library locally: ```pip install pre-commit```
- Install the pre-commit hooks for the repo: ```pre-commit install```
%prep
%autosetup -n gcp-airflow-foundations-dev-11.1
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-gcp-airflow-foundations-dev -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Mon May 29 2023 Python_Bot <Python_Bot@openeuler.org> - 11.1-1
- Package Spec generated