%global _empty_manifest_terminate_build 0
Name:		python-gcp-storage-emulator
Version:	2022.6.11
Release:	1
Summary:	A stub emulator for the Google Cloud Storage API
License:	BSD License
URL:		https://github.com/oittaa/gcp-storage-emulator
Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/d6/b0/100bdd201799c3872e2015504e5b67126449938d86d42cbab1ed41a4754a/gcp-storage-emulator-2022.6.11.tar.gz
BuildArch:	noarch

Requires:	python3-fs
Requires:	python3-google-crc32c

%description
## Installation

`pip install gcp-storage-emulator`

## CLI Usage

### Starting the emulator

Start the emulator with:

```bash
gcp-storage-emulator start
```

By default, the server listens on `http://localhost:9023` and data is stored under `./.cloudstorage`. You can configure the folder with the environment variables `STORAGE_BASE` (default `./`) and `STORAGE_DIR` (default `.cloudstorage`).

If you run the emulator in a testing environment, or if you don't want to persist any data, use the `--in-memory` parameter. For tests, you might also want to consider starting the server from your code (see the [Python APIs](#python-apis)).

If you're using the Google client library (e.g. `google-cloud-storage` for Python), you can set the `STORAGE_EMULATOR_HOST` environment variable to tell the library to connect to your emulator endpoint rather than the standard `https://storage.googleapis.com`, e.g.:

```bash
export STORAGE_EMULATOR_HOST=http://localhost:9023
```

### Wiping data

You can wipe the data by running

```bash
gcp-storage-emulator wipe
```

You can pass `--keep-buckets` to wipe the data while keeping the buckets.

#### Example

Use in-memory storage and automatically create the default storage bucket `my-bucket`.
```bash
gcp-storage-emulator start --host=localhost --port=9023 --in-memory --default-bucket=my-bucket
```

## Python APIs

To start a server from your code you can do:

```python
from gcp_storage_emulator.server import create_server

server = create_server("localhost", 9023, in_memory=False)
server.start()
# ........
server.stop()
```

You can wipe the data by calling `server.wipe()`. This can also be achieved (e.g. during tests) by hitting the `/wipe` HTTP endpoint.

#### Example

```python
import os

from google.cloud import storage

from gcp_storage_emulator.server import create_server

HOST = "localhost"
PORT = 9023
BUCKET = "test-bucket"

# The default_bucket parameter creates the bucket automatically.
server = create_server(HOST, PORT, in_memory=True, default_bucket=BUCKET)
server.start()

os.environ["STORAGE_EMULATOR_HOST"] = f"http://{HOST}:{PORT}"
client = storage.Client()

bucket = client.bucket(BUCKET)
blob = bucket.blob("blob1")
blob.upload_from_string("test1")
blob = bucket.blob("blob2")
blob.upload_from_string("test2")
for blob in bucket.list_blobs():
    content = blob.download_as_bytes()
    print(f"Blob [{blob.name}]: {content}")

server.stop()
```

## Docker

Pull the Docker image.

```bash
docker pull oittaa/gcp-storage-emulator
```

Inside the container instance, the value of the `PORT` environment variable always reflects the port to which requests are sent. It defaults to `8080`. The directory used for the emulated storage is located under `/storage` in the container. In the following example the host's directory `$(pwd)/cloudstorage` will be bound to the emulated storage.
```bash
docker run -d \
  -e PORT=9023 \
  -p 9023:9023 \
  --name gcp-storage-emulator \
  -v "$(pwd)/cloudstorage":/storage \
  oittaa/gcp-storage-emulator
```

```python
import os

from google.cloud import exceptions, storage

HOST = "localhost"
PORT = 9023
BUCKET = "test-bucket"

os.environ["STORAGE_EMULATOR_HOST"] = f"http://{HOST}:{PORT}"
client = storage.Client()

try:
    bucket = client.create_bucket(BUCKET)
except exceptions.Conflict:
    bucket = client.bucket(BUCKET)

blob = bucket.blob("blob1")
blob.upload_from_string("test1")
print(blob.download_as_bytes())
```

%package -n python3-gcp-storage-emulator
Summary:	A stub emulator for the Google Cloud Storage API
Provides:	python-gcp-storage-emulator
BuildRequires:	python3-devel
BuildRequires:	python3-setuptools
BuildRequires:	python3-pip

%description -n python3-gcp-storage-emulator
## Installation

`pip install gcp-storage-emulator`

## CLI Usage

### Starting the emulator

Start the emulator with:

```bash
gcp-storage-emulator start
```

By default, the server listens on `http://localhost:9023` and data is stored under `./.cloudstorage`. You can configure the folder with the environment variables `STORAGE_BASE` (default `./`) and `STORAGE_DIR` (default `.cloudstorage`).

If you run the emulator in a testing environment, or if you don't want to persist any data, use the `--in-memory` parameter. For tests, you might also want to consider starting the server from your code (see the [Python APIs](#python-apis)).

If you're using the Google client library (e.g. `google-cloud-storage` for Python), you can set the `STORAGE_EMULATOR_HOST` environment variable to tell the library to connect to your emulator endpoint rather than the standard `https://storage.googleapis.com`, e.g.:

```bash
export STORAGE_EMULATOR_HOST=http://localhost:9023
```

### Wiping data

You can wipe the data by running

```bash
gcp-storage-emulator wipe
```

You can pass `--keep-buckets` to wipe the data while keeping the buckets.
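A wipe can also be triggered over HTTP via the emulator's `/wipe` endpoint (see the Python APIs section), which is convenient for test-suite teardown. A minimal standard-library sketch; it assumes the emulator is running on `localhost:9023` and that a plain GET request to `/wipe` suffices:

```python
import urllib.error
import urllib.request


def wipe_emulator(base_url="http://localhost:9023"):
    """Hit the emulator's /wipe endpoint; return True if the wipe succeeded."""
    try:
        with urllib.request.urlopen(f"{base_url}/wipe") as resp:
            # Any 2xx response is treated as a successful wipe.
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        # Emulator not reachable; nothing to wipe.
        return False
```

Returning `False` instead of raising keeps teardown quiet when the emulator was never started in a given test run.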
#### Example

Use in-memory storage and automatically create the default storage bucket `my-bucket`.

```bash
gcp-storage-emulator start --host=localhost --port=9023 --in-memory --default-bucket=my-bucket
```

## Python APIs

To start a server from your code you can do:

```python
from gcp_storage_emulator.server import create_server

server = create_server("localhost", 9023, in_memory=False)
server.start()
# ........
server.stop()
```

You can wipe the data by calling `server.wipe()`. This can also be achieved (e.g. during tests) by hitting the `/wipe` HTTP endpoint.

#### Example

```python
import os

from google.cloud import storage

from gcp_storage_emulator.server import create_server

HOST = "localhost"
PORT = 9023
BUCKET = "test-bucket"

# The default_bucket parameter creates the bucket automatically.
server = create_server(HOST, PORT, in_memory=True, default_bucket=BUCKET)
server.start()

os.environ["STORAGE_EMULATOR_HOST"] = f"http://{HOST}:{PORT}"
client = storage.Client()

bucket = client.bucket(BUCKET)
blob = bucket.blob("blob1")
blob.upload_from_string("test1")
blob = bucket.blob("blob2")
blob.upload_from_string("test2")
for blob in bucket.list_blobs():
    content = blob.download_as_bytes()
    print(f"Blob [{blob.name}]: {content}")

server.stop()
```

## Docker

Pull the Docker image.

```bash
docker pull oittaa/gcp-storage-emulator
```

Inside the container instance, the value of the `PORT` environment variable always reflects the port to which requests are sent. It defaults to `8080`. The directory used for the emulated storage is located under `/storage` in the container. In the following example the host's directory `$(pwd)/cloudstorage` will be bound to the emulated storage.
```bash
docker run -d \
  -e PORT=9023 \
  -p 9023:9023 \
  --name gcp-storage-emulator \
  -v "$(pwd)/cloudstorage":/storage \
  oittaa/gcp-storage-emulator
```

```python
import os

from google.cloud import exceptions, storage

HOST = "localhost"
PORT = 9023
BUCKET = "test-bucket"

os.environ["STORAGE_EMULATOR_HOST"] = f"http://{HOST}:{PORT}"
client = storage.Client()

try:
    bucket = client.create_bucket(BUCKET)
except exceptions.Conflict:
    bucket = client.bucket(BUCKET)

blob = bucket.blob("blob1")
blob.upload_from_string("test1")
print(blob.download_as_bytes())
```

%package help
Summary:	Development documents and examples for gcp-storage-emulator
Provides:	python3-gcp-storage-emulator-doc

%description help
## Installation

`pip install gcp-storage-emulator`

## CLI Usage

### Starting the emulator

Start the emulator with:

```bash
gcp-storage-emulator start
```

By default, the server listens on `http://localhost:9023` and data is stored under `./.cloudstorage`. You can configure the folder with the environment variables `STORAGE_BASE` (default `./`) and `STORAGE_DIR` (default `.cloudstorage`).

If you run the emulator in a testing environment, or if you don't want to persist any data, use the `--in-memory` parameter. For tests, you might also want to consider starting the server from your code (see the [Python APIs](#python-apis)).

If you're using the Google client library (e.g. `google-cloud-storage` for Python), you can set the `STORAGE_EMULATOR_HOST` environment variable to tell the library to connect to your emulator endpoint rather than the standard `https://storage.googleapis.com`, e.g.:

```bash
export STORAGE_EMULATOR_HOST=http://localhost:9023
```

### Wiping data

You can wipe the data by running

```bash
gcp-storage-emulator wipe
```

You can pass `--keep-buckets` to wipe the data while keeping the buckets.

#### Example

Use in-memory storage and automatically create the default storage bucket `my-bucket`.
```bash
gcp-storage-emulator start --host=localhost --port=9023 --in-memory --default-bucket=my-bucket
```

## Python APIs

To start a server from your code you can do:

```python
from gcp_storage_emulator.server import create_server

server = create_server("localhost", 9023, in_memory=False)
server.start()
# ........
server.stop()
```

You can wipe the data by calling `server.wipe()`. This can also be achieved (e.g. during tests) by hitting the `/wipe` HTTP endpoint.

#### Example

```python
import os

from google.cloud import storage

from gcp_storage_emulator.server import create_server

HOST = "localhost"
PORT = 9023
BUCKET = "test-bucket"

# The default_bucket parameter creates the bucket automatically.
server = create_server(HOST, PORT, in_memory=True, default_bucket=BUCKET)
server.start()

os.environ["STORAGE_EMULATOR_HOST"] = f"http://{HOST}:{PORT}"
client = storage.Client()

bucket = client.bucket(BUCKET)
blob = bucket.blob("blob1")
blob.upload_from_string("test1")
blob = bucket.blob("blob2")
blob.upload_from_string("test2")
for blob in bucket.list_blobs():
    content = blob.download_as_bytes()
    print(f"Blob [{blob.name}]: {content}")

server.stop()
```

## Docker

Pull the Docker image.

```bash
docker pull oittaa/gcp-storage-emulator
```

Inside the container instance, the value of the `PORT` environment variable always reflects the port to which requests are sent. It defaults to `8080`. The directory used for the emulated storage is located under `/storage` in the container. In the following example the host's directory `$(pwd)/cloudstorage` will be bound to the emulated storage.
```bash
docker run -d \
  -e PORT=9023 \
  -p 9023:9023 \
  --name gcp-storage-emulator \
  -v "$(pwd)/cloudstorage":/storage \
  oittaa/gcp-storage-emulator
```

```python
import os

from google.cloud import exceptions, storage

HOST = "localhost"
PORT = 9023
BUCKET = "test-bucket"

os.environ["STORAGE_EMULATOR_HOST"] = f"http://{HOST}:{PORT}"
client = storage.Client()

try:
    bucket = client.create_bucket(BUCKET)
except exceptions.Conflict:
    bucket = client.bucket(BUCKET)

blob = bucket.blob("blob1")
blob.upload_from_string("test1")
print(blob.download_as_bytes())
```

%prep
%autosetup -n gcp-storage-emulator-2022.6.11

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-gcp-storage-emulator -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Fri May 05 2023 Python_Bot - 2022.6.11-1
- Package Spec generated