Diffstat (limited to 'python-benchmark-runner.spec')
-rw-r--r--  python-benchmark-runner.spec  20
1 file changed, 13 insertions, 7 deletions
diff --git a/python-benchmark-runner.spec b/python-benchmark-runner.spec
index f84f2eb..fb6612a 100644
--- a/python-benchmark-runner.spec
+++ b/python-benchmark-runner.spec
@@ -1,11 +1,11 @@
%global _empty_manifest_terminate_build 0
Name: python-benchmark-runner
-Version: 1.0.462
+Version: 1.0.465
Release: 1
Summary: Benchmark Runner Tool
License: Apache License 2.0
URL: https://github.com/redhat-performance/benchmark-runner
-Source0: https://mirrors.nju.edu.cn/pypi/web/packages/de/74/ef6a342394c8e47657181a87db3c7652e62b60fc215c150d2694ec27d526/benchmark-runner-1.0.462.tar.gz
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/1c/91/6d6f758ed16308882d80f9521995c1c14acdc89abf2c3dfabecbc69082c5/benchmark-runner-1.0.465.tar.gz
BuildArch: noarch
Requires: python3-attrs
@@ -47,6 +47,7 @@ This framework support the following embedded workloads:
* [uperf](http://uperf.org/): running uperf workload in Pod, Kata or VM with [Configuration](benchmark_runner/common/template_operations/templates/uperf)
* [vdbench](https://wiki.lustre.org/VDBench/): running vdbench workload in Pod, Kata or VM with [Configuration](benchmark_runner/common/template_operations/templates/vdbench)
* [bootstorm](https://en.wiktionary.org/wiki/boot_storm): calculate VMs boot load time [Configuration](benchmark_runner/common/template_operations/templates/bootstorm)
+** For the hammerdb mssql workload, this [permission](https://github.com/redhat-performance/benchmark-runner/blob/main/benchmark_runner/common/ocp_resources/custom/template/02_mssql_patch_template.sh) script must be run once
Benchmark-runner grafana dashboard example:
![](media/grafana.png)
Reference:
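The line added in the hunk above points to a one-time permission script for the hammerdb mssql workload. A minimal sketch of running it, assuming a local clone of the benchmark-runner repository and a KUBECONFIG already pointing at the target cluster (not an upstream-documented invocation):
```sh
# One-time setup before the hammerdb mssql workload (run once per cluster).
# Assumes git access and cluster credentials; as a *_template.sh file the script
# may expect placeholder values to be substituted before it is run.
git clone https://github.com/redhat-performance/benchmark-runner.git
cd benchmark-runner
sh benchmark_runner/common/ocp_resources/custom/template/02_mssql_patch_template.sh
```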
@@ -87,7 +88,8 @@ Choose one from the following list:
**optional:scale** SCALE=$SCALE [For Vdbench/Bootstorm: Scale in each node]
**optional:scale** SCALE_NODES=$SCALE_NODES [For Vdbench/Bootstorm: Scale's node]
**optional:scale** REDIS=$REDIS [For Vdbench only: redis for scale synchronization]
-**optional:** LSO_PATH=$LSO_PATH [LSO_PATH='/dev/sdb/' For hammerdb only: for using Local Storage Operator]
+**optional:** LSO_DISK_ID=$LSO_DISK_ID [LSO_DISK_ID='scsi-<replace_this_with_your_actual_disk_id>' For hammerdb only: the disk id to use with the Local Storage Operator]
+**optional:** WORKER_DISK_IDS=$WORKER_DISK_IDS [For ODF/LSO workloads (hammerdb/vdbench): worker disk ids]
For example:
```sh
podman run --rm --workload=$WORKLOAD --kubeadmin-password=$KUBEADMIN_PASSWORD --pin-node-benchmark-operator=$PIN_NODE_BENCHMARK_OPERATOR --pin-node1=$PIN_NODE1 --pin-node2=$PIN_NODE2 --elasticsearch=$ELASTICSEARCH --elasticsearch-port=$ELASTICSEARCH_PORT -v $KUBECONFIG:/root/.kube/config --privileged quay.io/ebattat/benchmark-runner:latest
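# Hypothetical sketch (not part of the upstream example above): passing the new
# LSO_DISK_ID / WORKER_DISK_IDS options for hammerdb/vdbench ODF/LSO runs, assuming
# the container accepts them as environment variables via podman's -e flag like the
# other parameters; the remaining required options shown above are omitted for brevity.
podman run --rm -e WORKLOAD=$WORKLOAD -e LSO_DISK_ID=$LSO_DISK_ID -e WORKER_DISK_IDS=$WORKER_DISK_IDS -v $KUBECONFIG:/root/.kube/config --privileged quay.io/ebattat/benchmark-runner:latest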
@@ -167,6 +169,7 @@ This framework support the following embedded workloads:
* [uperf](http://uperf.org/): running uperf workload in Pod, Kata or VM with [Configuration](benchmark_runner/common/template_operations/templates/uperf)
* [vdbench](https://wiki.lustre.org/VDBench/): running vdbench workload in Pod, Kata or VM with [Configuration](benchmark_runner/common/template_operations/templates/vdbench)
* [bootstorm](https://en.wiktionary.org/wiki/boot_storm): calculate VMs boot load time [Configuration](benchmark_runner/common/template_operations/templates/bootstorm)
+** For the hammerdb mssql workload, this [permission](https://github.com/redhat-performance/benchmark-runner/blob/main/benchmark_runner/common/ocp_resources/custom/template/02_mssql_patch_template.sh) script must be run once
Benchmark-runner grafana dashboard example:
![](media/grafana.png)
Reference:
@@ -207,7 +210,8 @@ Choose one from the following list:
**optional:scale** SCALE=$SCALE [For Vdbench/Bootstorm: Scale in each node]
**optional:scale** SCALE_NODES=$SCALE_NODES [For Vdbench/Bootstorm: Scale's node]
**optional:scale** REDIS=$REDIS [For Vdbench only: redis for scale synchronization]
-**optional:** LSO_PATH=$LSO_PATH [LSO_PATH='/dev/sdb/' For hammerdb only: for using Local Storage Operator]
+**optional:** LSO_DISK_ID=$LSO_DISK_ID [LSO_DISK_ID='scsi-<replace_this_with_your_actual_disk_id>' For hammerdb only: the disk id to use with the Local Storage Operator]
+**optional:** WORKER_DISK_IDS=$WORKER_DISK_IDS [For ODF/LSO workloads (hammerdb/vdbench): worker disk ids]
For example:
```sh
podman run --rm --workload=$WORKLOAD --kubeadmin-password=$KUBEADMIN_PASSWORD --pin-node-benchmark-operator=$PIN_NODE_BENCHMARK_OPERATOR --pin-node1=$PIN_NODE1 --pin-node2=$PIN_NODE2 --elasticsearch=$ELASTICSEARCH --elasticsearch-port=$ELASTICSEARCH_PORT -v $KUBECONFIG:/root/.kube/config --privileged quay.io/ebattat/benchmark-runner:latest
@@ -284,6 +288,7 @@ This framework support the following embedded workloads:
* [uperf](http://uperf.org/): running uperf workload in Pod, Kata or VM with [Configuration](benchmark_runner/common/template_operations/templates/uperf)
* [vdbench](https://wiki.lustre.org/VDBench/): running vdbench workload in Pod, Kata or VM with [Configuration](benchmark_runner/common/template_operations/templates/vdbench)
* [bootstorm](https://en.wiktionary.org/wiki/boot_storm): calculate VMs boot load time [Configuration](benchmark_runner/common/template_operations/templates/bootstorm)
+** For the hammerdb mssql workload, this [permission](https://github.com/redhat-performance/benchmark-runner/blob/main/benchmark_runner/common/ocp_resources/custom/template/02_mssql_patch_template.sh) script must be run once
Benchmark-runner grafana dashboard example:
![](media/grafana.png)
Reference:
@@ -324,7 +329,8 @@ Choose one from the following list:
**optional:scale** SCALE=$SCALE [For Vdbench/Bootstorm: Scale in each node]
**optional:scale** SCALE_NODES=$SCALE_NODES [For Vdbench/Bootstorm: Scale's node]
**optional:scale** REDIS=$REDIS [For Vdbench only: redis for scale synchronization]
-**optional:** LSO_PATH=$LSO_PATH [LSO_PATH='/dev/sdb/' For hammerdb only: for using Local Storage Operator]
+**optional:** LSO_DISK_ID=$LSO_DISK_ID [LSO_DISK_ID='scsi-<replace_this_with_your_actual_disk_id>' For hammerdb only: the disk id to use with the Local Storage Operator]
+**optional:** WORKER_DISK_IDS=$WORKER_DISK_IDS [For ODF/LSO workloads (hammerdb/vdbench): worker disk ids]
For example:
```sh
podman run --rm --workload=$WORKLOAD --kubeadmin-password=$KUBEADMIN_PASSWORD --pin-node-benchmark-operator=$PIN_NODE_BENCHMARK_OPERATOR --pin-node1=$PIN_NODE1 --pin-node2=$PIN_NODE2 --elasticsearch=$ELASTICSEARCH --elasticsearch-port=$ELASTICSEARCH_PORT -v $KUBECONFIG:/root/.kube/config --privileged quay.io/ebattat/benchmark-runner:latest
@@ -381,7 +387,7 @@ name of the promdb snapshot.
see [HOW_TO.md](HOW_TO.md)
%prep
-%autosetup -n benchmark-runner-1.0.462
+%autosetup -n benchmark-runner-1.0.465
%build
%py3_build
@@ -421,5 +427,5 @@ mv %{buildroot}/doclist.lst .
%{_docdir}/*
%changelog
-* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 1.0.462-1
+* Sun Apr 23 2023 Python_Bot <Python_Bot@openeuler.org> - 1.0.465-1
- Package Spec generated