diff options

author    | CoprDistGit <infra@openeuler.org> | 2023-05-10 05:41:32 +0000
committer | CoprDistGit <infra@openeuler.org> | 2023-05-10 05:41:32 +0000
commit    | 37eac987aa653a8a5aa1ba4c781aa85a9f8e81d0 (patch)
tree      | b137477b05f43abd28a5111e8a38c6f37ebc9a2c
parent    | 1f54d1be0934f2c66d07f22a93fb778a3e87f734 (diff)

automatic import of python-sla-runner (openeuler20.03)

-rw-r--r-- | .gitignore             |   1
-rw-r--r-- | python-sla-runner.spec | 609
-rw-r--r-- | sources                |   1

3 files changed, 611 insertions, 0 deletions
@@ -0,0 +1 @@
+/sla-runner-0.0.21.tar.gz
diff --git a/python-sla-runner.spec b/python-sla-runner.spec
new file mode 100644
index 0000000..d51db63
--- /dev/null
+++ b/python-sla-runner.spec
@@ -0,0 +1,609 @@
+%global _empty_manifest_terminate_build 0
+Name:		python-sla-runner
+Version:	0.0.21
+Release:	1
+Summary:	A continuous test runner for gathering SLA data
+License:	MIT
+URL:		https://github.com/billtrust/sla-monitor-runner
+Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/e6/9f/34b1e159011925a92995f21475ccfcbd05d787a139a003c764c2f60064f1/sla-runner-0.0.21.tar.gz
+BuildArch:	noarch
+
+
+%description
+# SLA Monitor Worker
+
+This is the test runner portion of the SLA monitor/reporter. It runs tests (or any command you want) repeatedly and publishes each success or failure to an SNS topic for external processing (for example, a Lambda function writing to a custom CloudWatch metric), optionally uploading logs to an S3 bucket as well.
+
+TODO: Unit tests not working
+
+## Installing
+
+Install via pip:
+
+```bash
+pip install --user sla-runner
+```
+
+iam-docker-run is highly recommended:
+
+```bash
+pip install --user iam-docker-run
+```
+
+This project assumes you are using role-based authentication, as you would in a production environment in AWS. iam-docker-run mimics that environment by running with an actual role.
+
+## Terraform
+
+Execute the following in the root folder to run Terraform (Terraform must be installed). Set the bucket and table variables to existing backend resources for remote state.
+ +```shell +# pip install iam-starter +cd terraform +export AWS_ENV="dev" +export TF_DATA_DIR="./.$AWS_ENV-terraform/" +export AWS_DEFAULT_REGION="us-east-1" +export TF_STATE_REGION="us-east-1" +export TF_STATE_BUCKET="mycompany-tfstate-$AWS_ENV" +export TF_STATE_TABLE="tfstate_$AWS_ENV" + +iam-starter \ + --profile $AWS_ENV \ + --command \ + "terraform init \ + -backend-config=\"region=$TF_STATE_REGION\" \ + -backend-config=\"bucket=$TF_STATE_BUCKET\" \ + -backend-config=\"dynamodb_table=$TF_STATE_TABLE\" && \ + terraform apply \ + -var \"aws_env=$AWS_ENV\" \ + -var \"aws_region=$AWS_DEFAULT_REGION\"" +``` + +## Using + +Use iam-docker-run outside of AWS to run tests. In real life scenarios on ECS, instead install sla-runner via pypi in your service container, and set `--image` to the image of the service container which contains your test. + +```bash +docker build -t sla-monitor/sla-runner:latest . + +export AWS_ENV="dev" +iam-docker-run \ + -e SLARUNNER_COMMAND="/bin/bash /src/test-scripts/run-tests.sh" \ + -e SLARUNNER_SERVICE=example-service \ + -e SLARUNNER_GROUPS="dev-team,critical" \ + -e SLARUNNER_DELAY=30 \ + -e SLARUNNER_SNSTOPICNAME="sla-monitor-result-published-$AWS_ENV" \ + -e SLARUNNER_S3BUCKETNAME="sla-monitoring-logs-$AWS_ENV" \ + --full-entrypoint "sla-runner" \ + --region us-east-1 \ + --profile $AWS_ENV \ + --role sla-monitor-runner-role-$AWS_ENV \ + --image sla-monitor/sla-runner:latest +``` + +In ECS, add these as environment variables in the task definition or load from ssm via ssm-starter: + +``` +--full-entrypoint "ssm-starter --ssm-name slarunner --command 'sla-runner'" +``` + +## Variables + +The runner takes the following values which are provided by environment variable. + +### Global variables + +When loading variables via SSM and ssm-starter, you can define default variables by adding a globals path before the service path. 
+ +For example, in your task definition in terraform: + +```json + "entryPoint": ["ssm-starter"], + "command": [ + "--ssm-name", "sla-monitor-globals", + "--ssm-name", "${var.application}", + "--command", "sla-runner" // or script that runs sla-runner + ] +``` + +#### command + +$SLARUNNER_COMMAND + +Command to be run repeatedly. Pretty straightforward. If there is an interrupt, the runner will attempt to finish the command gracefully before exit. + +#### service + +$SLARUNNER_SERVICE + +Name of the component service you're testing. This will be used as the prefix for s3 uploads, and will be passed in the JSON event as "service" to SNS. + +#### groups + +$SLARUNNER_GROUPS + +Name of the grouping of components you're testing, in csv format. This will be passed in the JSON event as "groups" to SNS as a list, and is meant to provide secondary statistics if multiple services are part of the same component. + +#### delay + +$SLARUNNER_DELAY + +How long to wait between commands being run in seconds. + +#### disabled + +$SLARUNNER_DISABLED + +To disable sla-runner at startup. + +#### sns-topic-arn + +$SLARUNNER_SNSTOPICARN + +SNS topic arn to publish results to. It will be published as a JSON object. For example, the command above would produce the following: + +```json +{ + "service": "example-service", + "group": ["dev-team", "critical"], + "succeeded": true, + "timestamp": "1574515200", + "testExecutionSecs": "914" +} +``` + +#### s3-bucket-name + +$SLARUNNER_S3BUCKETNAME + +Bucket to write logs to. This is an optional parameter. The object will be named as the timestamp followed by the result for easily searching by result, and will be prefixed by the service name. 
For example "example-service/1574514000_SUCCESS" + +#### dry-run + +$SLARUNNER_DRYRUN + +If there is any value at all in this variable, the test will run once, output the sns topic it would publish to, the result message, the log output of the command, and the name of the object that would be written to the bucket. It will NOT publish to sns or write the object to s3. Only for testing purposes. + +## Development and Testing + +```bash +docker build -t sla-runner:latest . +``` + +```bash +iam-docker-run \ + --image sla-runner:latest \ + --role sla-monitor-runner-role \ + --profile dev \ + --region us-east-1 \ + --host-source-path . \ + --container-source-path /src \ + --shell +``` + +## Publishing Updates to PyPi + +For the maintainer - to publish an updated version of ssm-search, increment the version number in version.py and run the following: + +docker build -t sla-runner . && \ +docker run --rm -it --entrypoint make sla-runner publish + +At the prompts, enter the username and password to the pypi.org repo. + + + +%package -n python3-sla-runner +Summary: A continuous test runner for gathering SLA data +Provides: python-sla-runner +BuildRequires: python3-devel +BuildRequires: python3-setuptools +BuildRequires: python3-pip +%description -n python3-sla-runner +# SLA Monitor Worker + +This is the test runner portion of the SLA monitor/reporter. It performs tests (or any command you want) repeatedly, and publishes success/failure to an SNS topic for external processing (for example, using lambda to write to a custom cloudwatch metric), as well as optionally uploading logs to an S3 bucket. + +TODO: Unit tests not working + +## Installing + +To install simply install via pip + +```bash +pip install --user sla-runner +``` + +Highly recommended is iam-docker run: + +```bash +pip install --user iam-docker-run +``` + +This project assumes you are using role based authentication, as would be used in a production environment in AWS. 
This mimics that environment by running with an actual role. + +## Terraform + +Excute the following in the root folder to run terraform. Obviously, have Terraform installed. Set the bucket and table variables to existing backend resources for remote state. + +```shell +# pip install iam-starter +cd terraform +export AWS_ENV="dev" +export TF_DATA_DIR="./.$AWS_ENV-terraform/" +export AWS_DEFAULT_REGION="us-east-1" +export TF_STATE_REGION="us-east-1" +export TF_STATE_BUCKET="mycompany-tfstate-$AWS_ENV" +export TF_STATE_TABLE="tfstate_$AWS_ENV" + +iam-starter \ + --profile $AWS_ENV \ + --command \ + "terraform init \ + -backend-config=\"region=$TF_STATE_REGION\" \ + -backend-config=\"bucket=$TF_STATE_BUCKET\" \ + -backend-config=\"dynamodb_table=$TF_STATE_TABLE\" && \ + terraform apply \ + -var \"aws_env=$AWS_ENV\" \ + -var \"aws_region=$AWS_DEFAULT_REGION\"" +``` + +## Using + +Use iam-docker-run outside of AWS to run tests. In real life scenarios on ECS, instead install sla-runner via pypi in your service container, and set `--image` to the image of the service container which contains your test. + +```bash +docker build -t sla-monitor/sla-runner:latest . + +export AWS_ENV="dev" +iam-docker-run \ + -e SLARUNNER_COMMAND="/bin/bash /src/test-scripts/run-tests.sh" \ + -e SLARUNNER_SERVICE=example-service \ + -e SLARUNNER_GROUPS="dev-team,critical" \ + -e SLARUNNER_DELAY=30 \ + -e SLARUNNER_SNSTOPICNAME="sla-monitor-result-published-$AWS_ENV" \ + -e SLARUNNER_S3BUCKETNAME="sla-monitoring-logs-$AWS_ENV" \ + --full-entrypoint "sla-runner" \ + --region us-east-1 \ + --profile $AWS_ENV \ + --role sla-monitor-runner-role-$AWS_ENV \ + --image sla-monitor/sla-runner:latest +``` + +In ECS, add these as environment variables in the task definition or load from ssm via ssm-starter: + +``` +--full-entrypoint "ssm-starter --ssm-name slarunner --command 'sla-runner'" +``` + +## Variables + +The runner takes the following values which are provided by environment variable. 
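As a rough illustration of how these environment-driven settings can be consumed, here is a minimal Python sketch. The helper name and the default delay are illustrative assumptions, not sla-runner's actual code; only the `SLARUNNER_*` variable names come from this README.

```python
import os

def load_runner_config(environ=os.environ):
    """Collect SLARUNNER_* settings into a dict (illustrative helper only)."""
    return {
        "command": environ.get("SLARUNNER_COMMAND"),
        "service": environ.get("SLARUNNER_SERVICE"),
        # 'groups' is documented as csv, so split it into a list
        "groups": [g for g in environ.get("SLARUNNER_GROUPS", "").split(",") if g],
        # default of 30 is an assumption for this sketch, not a documented default
        "delay": int(environ.get("SLARUNNER_DELAY", "30")),
        # 'disabled' and 'dry-run' are presence-based flags
        "disabled": bool(environ.get("SLARUNNER_DISABLED")),
        "dry_run": bool(environ.get("SLARUNNER_DRYRUN")),
    }

cfg = load_runner_config({
    "SLARUNNER_COMMAND": "/bin/bash /src/test-scripts/run-tests.sh",
    "SLARUNNER_SERVICE": "example-service",
    "SLARUNNER_GROUPS": "dev-team,critical",
    "SLARUNNER_DELAY": "30",
})
print(cfg["groups"])  # ['dev-team', 'critical']
```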
+ +### Global variables + +When loading variables via SSM and ssm-starter, you can define default variables by adding a globals path before the service path. + +For example, in your task definition in terraform: + +```json + "entryPoint": ["ssm-starter"], + "command": [ + "--ssm-name", "sla-monitor-globals", + "--ssm-name", "${var.application}", + "--command", "sla-runner" // or script that runs sla-runner + ] +``` + +#### command + +$SLARUNNER_COMMAND + +Command to be run repeatedly. Pretty straightforward. If there is an interrupt, the runner will attempt to finish the command gracefully before exit. + +#### service + +$SLARUNNER_SERVICE + +Name of the component service you're testing. This will be used as the prefix for s3 uploads, and will be passed in the JSON event as "service" to SNS. + +#### groups + +$SLARUNNER_GROUPS + +Name of the grouping of components you're testing, in csv format. This will be passed in the JSON event as "groups" to SNS as a list, and is meant to provide secondary statistics if multiple services are part of the same component. + +#### delay + +$SLARUNNER_DELAY + +How long to wait between commands being run in seconds. + +#### disabled + +$SLARUNNER_DISABLED + +To disable sla-runner at startup. + +#### sns-topic-arn + +$SLARUNNER_SNSTOPICARN + +SNS topic arn to publish results to. It will be published as a JSON object. For example, the command above would produce the following: + +```json +{ + "service": "example-service", + "group": ["dev-team", "critical"], + "succeeded": true, + "timestamp": "1574515200", + "testExecutionSecs": "914" +} +``` + +#### s3-bucket-name + +$SLARUNNER_S3BUCKETNAME + +Bucket to write logs to. This is an optional parameter. The object will be named as the timestamp followed by the result for easily searching by result, and will be prefixed by the service name. 
For example "example-service/1574514000_SUCCESS" + +#### dry-run + +$SLARUNNER_DRYRUN + +If there is any value at all in this variable, the test will run once, output the sns topic it would publish to, the result message, the log output of the command, and the name of the object that would be written to the bucket. It will NOT publish to sns or write the object to s3. Only for testing purposes. + +## Development and Testing + +```bash +docker build -t sla-runner:latest . +``` + +```bash +iam-docker-run \ + --image sla-runner:latest \ + --role sla-monitor-runner-role \ + --profile dev \ + --region us-east-1 \ + --host-source-path . \ + --container-source-path /src \ + --shell +``` + +## Publishing Updates to PyPi + +For the maintainer - to publish an updated version of ssm-search, increment the version number in version.py and run the following: + +docker build -t sla-runner . && \ +docker run --rm -it --entrypoint make sla-runner publish + +At the prompts, enter the username and password to the pypi.org repo. + + + +%package help +Summary: Development documents and examples for sla-runner +Provides: python3-sla-runner-doc +%description help +# SLA Monitor Worker + +This is the test runner portion of the SLA monitor/reporter. It performs tests (or any command you want) repeatedly, and publishes success/failure to an SNS topic for external processing (for example, using lambda to write to a custom cloudwatch metric), as well as optionally uploading logs to an S3 bucket. + +TODO: Unit tests not working + +## Installing + +To install simply install via pip + +```bash +pip install --user sla-runner +``` + +Highly recommended is iam-docker run: + +```bash +pip install --user iam-docker-run +``` + +This project assumes you are using role based authentication, as would be used in a production environment in AWS. This mimics that environment by running with an actual role. + +## Terraform + +Excute the following in the root folder to run terraform. 
Obviously, have Terraform installed. Set the bucket and table variables to existing backend resources for remote state. + +```shell +# pip install iam-starter +cd terraform +export AWS_ENV="dev" +export TF_DATA_DIR="./.$AWS_ENV-terraform/" +export AWS_DEFAULT_REGION="us-east-1" +export TF_STATE_REGION="us-east-1" +export TF_STATE_BUCKET="mycompany-tfstate-$AWS_ENV" +export TF_STATE_TABLE="tfstate_$AWS_ENV" + +iam-starter \ + --profile $AWS_ENV \ + --command \ + "terraform init \ + -backend-config=\"region=$TF_STATE_REGION\" \ + -backend-config=\"bucket=$TF_STATE_BUCKET\" \ + -backend-config=\"dynamodb_table=$TF_STATE_TABLE\" && \ + terraform apply \ + -var \"aws_env=$AWS_ENV\" \ + -var \"aws_region=$AWS_DEFAULT_REGION\"" +``` + +## Using + +Use iam-docker-run outside of AWS to run tests. In real life scenarios on ECS, instead install sla-runner via pypi in your service container, and set `--image` to the image of the service container which contains your test. + +```bash +docker build -t sla-monitor/sla-runner:latest . + +export AWS_ENV="dev" +iam-docker-run \ + -e SLARUNNER_COMMAND="/bin/bash /src/test-scripts/run-tests.sh" \ + -e SLARUNNER_SERVICE=example-service \ + -e SLARUNNER_GROUPS="dev-team,critical" \ + -e SLARUNNER_DELAY=30 \ + -e SLARUNNER_SNSTOPICNAME="sla-monitor-result-published-$AWS_ENV" \ + -e SLARUNNER_S3BUCKETNAME="sla-monitoring-logs-$AWS_ENV" \ + --full-entrypoint "sla-runner" \ + --region us-east-1 \ + --profile $AWS_ENV \ + --role sla-monitor-runner-role-$AWS_ENV \ + --image sla-monitor/sla-runner:latest +``` + +In ECS, add these as environment variables in the task definition or load from ssm via ssm-starter: + +``` +--full-entrypoint "ssm-starter --ssm-name slarunner --command 'sla-runner'" +``` + +## Variables + +The runner takes the following values which are provided by environment variable. 
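The JSON event documented below is simple to assemble. The sketch that follows is illustrative only (not sla-runner's actual implementation); it mirrors the example payload's shape, including the example's use of the key `"group"` for the list parsed from the csv `groups` variable.

```python
import json

def build_result_event(service, groups_csv, succeeded, timestamp, exec_secs):
    """Build the SNS message body matching the README's example event shape."""
    return json.dumps({
        "service": service,
        "group": groups_csv.split(","),        # csv -> list, per the example
        "succeeded": succeeded,
        "timestamp": str(timestamp),           # example shows a string timestamp
        "testExecutionSecs": str(exec_secs),
    })

event = build_result_event("example-service", "dev-team,critical", True, 1574515200, 914)
print(event)
```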
+ +### Global variables + +When loading variables via SSM and ssm-starter, you can define default variables by adding a globals path before the service path. + +For example, in your task definition in terraform: + +```json + "entryPoint": ["ssm-starter"], + "command": [ + "--ssm-name", "sla-monitor-globals", + "--ssm-name", "${var.application}", + "--command", "sla-runner" // or script that runs sla-runner + ] +``` + +#### command + +$SLARUNNER_COMMAND + +Command to be run repeatedly. Pretty straightforward. If there is an interrupt, the runner will attempt to finish the command gracefully before exit. + +#### service + +$SLARUNNER_SERVICE + +Name of the component service you're testing. This will be used as the prefix for s3 uploads, and will be passed in the JSON event as "service" to SNS. + +#### groups + +$SLARUNNER_GROUPS + +Name of the grouping of components you're testing, in csv format. This will be passed in the JSON event as "groups" to SNS as a list, and is meant to provide secondary statistics if multiple services are part of the same component. + +#### delay + +$SLARUNNER_DELAY + +How long to wait between commands being run in seconds. + +#### disabled + +$SLARUNNER_DISABLED + +To disable sla-runner at startup. + +#### sns-topic-arn + +$SLARUNNER_SNSTOPICARN + +SNS topic arn to publish results to. It will be published as a JSON object. For example, the command above would produce the following: + +```json +{ + "service": "example-service", + "group": ["dev-team", "critical"], + "succeeded": true, + "timestamp": "1574515200", + "testExecutionSecs": "914" +} +``` + +#### s3-bucket-name + +$SLARUNNER_S3BUCKETNAME + +Bucket to write logs to. This is an optional parameter. The object will be named as the timestamp followed by the result for easily searching by result, and will be prefixed by the service name. 
For example "example-service/1574514000_SUCCESS" + +#### dry-run + +$SLARUNNER_DRYRUN + +If there is any value at all in this variable, the test will run once, output the sns topic it would publish to, the result message, the log output of the command, and the name of the object that would be written to the bucket. It will NOT publish to sns or write the object to s3. Only for testing purposes. + +## Development and Testing + +```bash +docker build -t sla-runner:latest . +``` + +```bash +iam-docker-run \ + --image sla-runner:latest \ + --role sla-monitor-runner-role \ + --profile dev \ + --region us-east-1 \ + --host-source-path . \ + --container-source-path /src \ + --shell +``` + +## Publishing Updates to PyPi + +For the maintainer - to publish an updated version of ssm-search, increment the version number in version.py and run the following: + +docker build -t sla-runner . && \ +docker run --rm -it --entrypoint make sla-runner publish + +At the prompts, enter the username and password to the pypi.org repo. + + + +%prep +%autosetup -n sla-runner-0.0.21 + +%build +%py3_build + +%install +%py3_install +install -d -m755 %{buildroot}/%{_pkgdocdir} +if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi +if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi +if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi +if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi +pushd %{buildroot} +if [ -d usr/lib ]; then + find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst +fi +if [ -d usr/lib64 ]; then + find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst +fi +if [ -d usr/bin ]; then + find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst +fi +if [ -d usr/sbin ]; then + find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst +fi +touch doclist.lst +if [ -d usr/share/man ]; then + find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst +fi +popd +mv %{buildroot}/filelist.lst . 
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-sla-runner -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Wed May 10 2023 Python_Bot <Python_Bot@openeuler.org> - 0.0.21-1
+- Package Spec generated
@@ -0,0 +1 @@
+6cceaa1fa208c1f1f0fdf7c3b3e7b5bc sla-runner-0.0.21.tar.gz