| author | CoprDistGit <infra@openeuler.org> | 2023-05-05 09:46:40 +0000 |
|---|---|---|
| committer | CoprDistGit <infra@openeuler.org> | 2023-05-05 09:46:40 +0000 |
| commit | 76a769816d45f4074d0728bfb4d7fc4db74025a5 (patch) | |
| tree | e9ee83d1f34eed7a6672240e32433fcba2d4bf79 | |
| parent | 9bef63fa2878ff86f31d6fa094bc634ca919047a (diff) | |
automatic import of python-discovery-transition-ds (branch: openeuler20.03)
| -rw-r--r-- | .gitignore | 1 |
| -rw-r--r-- | python-discovery-transition-ds.spec | 199 |
| -rw-r--r-- | sources | 1 |

3 files changed, 201 insertions(+), 0 deletions(-)
diff --git a/.gitignore b/.gitignore
@@ -0,0 +1 @@
+/discovery-transition-ds-4.14.3.tar.gz
diff --git a/python-discovery-transition-ds.spec b/python-discovery-transition-ds.spec
new file mode 100644
index 0000000..512e9a5
--- /dev/null
+++ b/python-discovery-transition-ds.spec
@@ -0,0 +1,199 @@
+%global _empty_manifest_terminate_build 0
+Name:           python-discovery-transition-ds
+Version:        4.14.3
+Release:        1
+Summary:        Data Science to production accelerator
+License:        BSD
+URL:            https://github.com/gigas64/discovery-transition-ds
+Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/3e/a8/b84da9b732aa3c79009c32f9b1bde7a875fc9971f66e4d7be248a0ce3e3e/discovery-transition-ds-4.14.3.tar.gz
+BuildArch:      noarch
+
+Requires:       python3-aistac-foundation
+Requires:       python3-discovery-connectors
+Requires:       python3-pandas
+Requires:       python3-numpy
+Requires:       python3-matplotlib
+Requires:       python3-seaborn
+Requires:       python3-scikit-learn
+Requires:       python3-scipy
+Requires:       python3-boto3
+Requires:       python3-botocore
+Requires:       python3-fsspec
+Requires:       python3-s3fs
+Requires:       python3-pyyaml
+
+%description
+Project Hadron has been built to bridge the gap between data scientists and data engineers, more specifically
+between machine learning business outcomes and the final product. It translates the work of data scientists into
+meaningful, production-ready solutions that can be easily managed by product engineers.
+Project Hadron is a core set of abstractions that form the foundation of the three key elements of data science:
+(1) feature engineering, (2) the construction of synthetic data with simulators and generators, and (3) statistics
+and machine learning algorithms for discovery and model building. Project Hadron uniquely sees data as ‘all the
+same’ (lazyprogrammer (2020), https://lazyprogrammer.me/all-data-is-the-same/), by which we mean its origin, shape
+and size stay independent throughout the disciplines, so its content, form and structure can be removed as a factor
+in the design and implementation of the components built.
+Project Hadron has been designed to place data scientists in the familiar environment of machine learning and
+statistical tools, extracting their ideas and translating them automagically into production-ready solutions
+familiar to data engineers and Subject Matter Experts (SMEs).
+Project Hadron provides a clear separation of concerns, whilst maintaining the original intentions of the data
+scientist, that can be passed to a production team. It builds trust between the data science and product teams and
+brings transparency and traceability, dealing with bias, fairness, and knowledge. The resulting outcome provides
+product engineers with adaptability, robustness, and reuse, fitting seamlessly into a microservices solution that
+can be language agnostic.
+Project Hadron is designed using microservices. Microservices, also known as the microservice architecture, is an
+architectural pattern that structures an application as a collection of component services that are:
+* Highly maintainable and testable
+* Loosely coupled
+* Independently deployable
+* Highly reusable
+* Resilient
+* Technically independent
+Component services are built around business capabilities and each service performs a single function. Because they
+are independently run, each service can be updated, deployed, and scaled to meet demand for specific functions of an
+application. Project Hadron microservices enable the rapid, frequent and reliable delivery of large, complex
+applications. They also enable an organization to evolve its data science stack and experiment with innovative ideas.
+At the heart of Project Hadron is a multi-tenant, NoSQL, singleton, in-memory data store that has minimal code and
+functionality and has been custom built specifically with Hadron tasks in mind. Abstracted from this is the component
+store, which allows us to build a reusable set of methods that define each tenanted component and that sits
+separately from the store itself. In addition, a dynamic key-value class provides labeling so that each tenant is not
+tied to a fixed set of reference values unless by specificity. Each of the classes that make up the component (the
+data store, the component property manager, and the key-value pairs) is independent, giving complete flexibility and
+a minimal code footprint to the build process of new components.
+This is what gives us the Domain Contract for each tenant, which sits at the heart of what makes the contracts
+reusable, translatable and transferable, and brings the data scientist closer to the production engineer while
+building a production-ready component solution.
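The layering sketched in the description above (a singleton, multi-tenant, in-memory store, a per-tenant component property manager, and a dynamic key-value label set) can be hard to picture from prose alone. The following minimal Python sketch illustrates that separation of concerns; every class and method name in it is hypothetical and for illustration only, not the package's actual API.

```python
"""Illustrative model of the component layering described above.

All names here are hypothetical: they sketch the idea of a singleton,
multi-tenant, in-memory store, a per-tenant property manager, and a
dynamic key-value label set. They are not the real package classes.
"""
from typing import Any, Dict


class InMemoryStore:
    """Singleton, multi-tenant, in-memory data store with minimal logic."""

    _instance = None

    def __new__(cls) -> "InMemoryStore":
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._tenants = {}
        return cls._instance

    def tenant(self, name: str) -> Dict[str, Any]:
        # Each tenant gets its own isolated key space inside the one store.
        return self._tenants.setdefault(name, {})


class PropertyManager:
    """Per-tenant component properties, kept separate from the store itself."""

    def __init__(self, tenant: str, store: InMemoryStore) -> None:
        self._space = store.tenant(tenant)

    def set(self, key: str, value: Any) -> None:
        self._space[key] = value

    def get(self, key: str, default: Any = None) -> Any:
        return self._space.get(key, default)

    def to_dict(self) -> Dict[str, Any]:
        return dict(self._space)


class Component:
    """A tenanted component: reusable methods built over its property manager."""

    def __init__(self, tenant: str) -> None:
        self.pm = PropertyManager(tenant, InMemoryStore())


if __name__ == "__main__":
    # Two tenants share the single store instance but never see each other's keys.
    transition = Component("feature_engineering")
    synthetic = Component("synthetic_builder")
    transition.pm.set("source_uri", "s3://bucket/raw.csv")
    synthetic.pm.set("sample_size", 1000)
    print(transition.pm.to_dict())  # {'source_uri': 's3://bucket/raw.csv'}
    print(synthetic.pm.to_dict())   # {'sample_size': 1000}
```

Because the store is independent of the property manager and of the key-value labels, swapping any one layer (for example a different tenant label scheme) does not disturb the others, which is the reuse argument the description is making.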
+
+%package -n python3-discovery-transition-ds
+Summary:        Data Science to production accelerator
+Provides:       python-discovery-transition-ds
+BuildRequires:  python3-devel
+BuildRequires:  python3-setuptools
+BuildRequires:  python3-pip
+%description -n python3-discovery-transition-ds
+Project Hadron has been built to bridge the gap between data scientists and data engineers, more specifically
+between machine learning business outcomes and the final product. It translates the work of data scientists into
+meaningful, production-ready solutions that can be easily managed by product engineers.
+Project Hadron is a core set of abstractions that form the foundation of the three key elements of data science:
+(1) feature engineering, (2) the construction of synthetic data with simulators and generators, and (3) statistics
+and machine learning algorithms for discovery and model building. Project Hadron uniquely sees data as ‘all the
+same’ (lazyprogrammer (2020), https://lazyprogrammer.me/all-data-is-the-same/), by which we mean its origin, shape
+and size stay independent throughout the disciplines, so its content, form and structure can be removed as a factor
+in the design and implementation of the components built.
+Project Hadron has been designed to place data scientists in the familiar environment of machine learning and
+statistical tools, extracting their ideas and translating them automagically into production-ready solutions
+familiar to data engineers and Subject Matter Experts (SMEs).
+Project Hadron provides a clear separation of concerns, whilst maintaining the original intentions of the data
+scientist, that can be passed to a production team. It builds trust between the data science and product teams and
+brings transparency and traceability, dealing with bias, fairness, and knowledge. The resulting outcome provides
+product engineers with adaptability, robustness, and reuse, fitting seamlessly into a microservices solution that
+can be language agnostic.
+Project Hadron is designed using microservices. Microservices, also known as the microservice architecture, is an
+architectural pattern that structures an application as a collection of component services that are:
+* Highly maintainable and testable
+* Loosely coupled
+* Independently deployable
+* Highly reusable
+* Resilient
+* Technically independent
+Component services are built around business capabilities and each service performs a single function. Because they
+are independently run, each service can be updated, deployed, and scaled to meet demand for specific functions of an
+application. Project Hadron microservices enable the rapid, frequent and reliable delivery of large, complex
+applications. They also enable an organization to evolve its data science stack and experiment with innovative ideas.
+At the heart of Project Hadron is a multi-tenant, NoSQL, singleton, in-memory data store that has minimal code and
+functionality and has been custom built specifically with Hadron tasks in mind. Abstracted from this is the component
+store, which allows us to build a reusable set of methods that define each tenanted component and that sits
+separately from the store itself. In addition, a dynamic key-value class provides labeling so that each tenant is not
+tied to a fixed set of reference values unless by specificity. Each of the classes that make up the component (the
+data store, the component property manager, and the key-value pairs) is independent, giving complete flexibility and
+a minimal code footprint to the build process of new components.
+This is what gives us the Domain Contract for each tenant, which sits at the heart of what makes the contracts
+reusable, translatable and transferable, and brings the data scientist closer to the production engineer while
+building a production-ready component solution.
+
+%package help
+Summary:        Development documents and examples for discovery-transition-ds
+Provides:       python3-discovery-transition-ds-doc
+%description help
+Project Hadron has been built to bridge the gap between data scientists and data engineers, more specifically
+between machine learning business outcomes and the final product. It translates the work of data scientists into
+meaningful, production-ready solutions that can be easily managed by product engineers.
+Project Hadron is a core set of abstractions that form the foundation of the three key elements of data science:
+(1) feature engineering, (2) the construction of synthetic data with simulators and generators, and (3) statistics
+and machine learning algorithms for discovery and model building. Project Hadron uniquely sees data as ‘all the
+same’ (lazyprogrammer (2020), https://lazyprogrammer.me/all-data-is-the-same/), by which we mean its origin, shape
+and size stay independent throughout the disciplines, so its content, form and structure can be removed as a factor
+in the design and implementation of the components built.
+Project Hadron has been designed to place data scientists in the familiar environment of machine learning and
+statistical tools, extracting their ideas and translating them automagically into production-ready solutions
+familiar to data engineers and Subject Matter Experts (SMEs).
+Project Hadron provides a clear separation of concerns, whilst maintaining the original intentions of the data
+scientist, that can be passed to a production team. It builds trust between the data science and product teams and
+brings transparency and traceability, dealing with bias, fairness, and knowledge. The resulting outcome provides
+product engineers with adaptability, robustness, and reuse, fitting seamlessly into a microservices solution that
+can be language agnostic.
+Project Hadron is designed using microservices. Microservices, also known as the microservice architecture, is an
+architectural pattern that structures an application as a collection of component services that are:
+* Highly maintainable and testable
+* Loosely coupled
+* Independently deployable
+* Highly reusable
+* Resilient
+* Technically independent
+Component services are built around business capabilities and each service performs a single function. Because they
+are independently run, each service can be updated, deployed, and scaled to meet demand for specific functions of an
+application. Project Hadron microservices enable the rapid, frequent and reliable delivery of large, complex
+applications. They also enable an organization to evolve its data science stack and experiment with innovative ideas.
+At the heart of Project Hadron is a multi-tenant, NoSQL, singleton, in-memory data store that has minimal code and
+functionality and has been custom built specifically with Hadron tasks in mind. Abstracted from this is the component
+store, which allows us to build a reusable set of methods that define each tenanted component and that sits
+separately from the store itself. In addition, a dynamic key-value class provides labeling so that each tenant is not
+tied to a fixed set of reference values unless by specificity. Each of the classes that make up the component (the
+data store, the component property manager, and the key-value pairs) is independent, giving complete flexibility and
+a minimal code footprint to the build process of new components.
+This is what gives us the Domain Contract for each tenant, which sits at the heart of what makes the contracts
+reusable, translatable and transferable, and brings the data scientist closer to the production engineer while
+building a production-ready component solution.
+
+%prep
+%autosetup -n discovery-transition-ds-4.14.3
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-discovery-transition-ds -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 4.14.3-1
+- Package Spec generated
diff --git a/sources b/sources
@@ -0,0 +1 @@
+1b7cba3d9c04dbf248d31c635a72f5c1  discovery-transition-ds-4.14.3.tar.gz
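Once the resulting python3-discovery-transition-ds RPM is installed, a short smoke test such as the sketch below can confirm that the packaged module and the runtime dependencies declared in the Requires: list above import cleanly. The top-level import name ds_discovery is an assumption about what the upstream sdist installs, and only dependencies with a well-known import name (for example python3-scikit-learn as sklearn, python3-pyyaml as yaml) are checked.

```python
#!/usr/bin/env python3
"""Post-install smoke test for python3-discovery-transition-ds.

Assumption: the sdist installs a top-level package named "ds_discovery";
adjust PACKAGE if the upstream layout differs.
"""
import importlib
import sys

PACKAGE = "ds_discovery"  # assumed import name of discovery-transition-ds

# Import names for a subset of the Requires: entries in the spec above.
DEPENDENCIES = [
    "pandas", "numpy", "matplotlib", "seaborn", "sklearn",
    "scipy", "boto3", "botocore", "fsspec", "s3fs", "yaml",
]


def check(name: str) -> bool:
    """Return True if the module imports, printing a one-line status."""
    try:
        importlib.import_module(name)
        print(f"ok       {name}")
        return True
    except ImportError as exc:
        print(f"MISSING  {name}: {exc}")
        return False


if __name__ == "__main__":
    results = [check(PACKAGE)] + [check(dep) for dep in DEPENDENCIES]
    sys.exit(0 if all(results) else 1)
```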