%global _empty_manifest_terminate_build 0
Name: python-stream2py
Version: 1.0.42
Release: 1
Summary: Bring data streams to python with ease
License: MIT
URL: https://github.com/i2mint/stream2py
Source0: https://mirrors.aliyun.com/pypi/web/packages/52/01/ec59e0575d7fddc94595ac50537a9c49c279d7760fc2e3929c78fa5fbb66/stream2py-1.0.42.tar.gz
BuildArch: noarch

%description
[Documentation hosted here](https://i2mint.github.io/stream2py/index.html).

To install: `pip install stream2py`

# stream2py

Bring data streams to python, with ease.

One of the goals of the suite of i2i tools is to get from idea 2 implementation without all the fuss. We've got py2store to do that for the storage (reading or writing) concern, and others (e.g. py2cli, py2ws, py2dash) to take care of exposing python functions to the world (of command line interfaces, webservices, browser dashboards, etc.). Here, we address the stream acquisition concern. As always, we aim at offering as-simple-as-drawing-a-simple-drawing means to get things done.

## Plugins

`stream2py` has the core functionality, and is completely dependency-free (only builtin python). That said, to work with specific stream sources, you can install plugins that will allow you to work with them. At the time of writing this we have:

- Audio: [audiostream2py](https://github.com/i2mint/audiostream2py)
- Keyboard inputs: [keyboardstream2py](https://github.com/i2mint/keyboardstream2py)
- PLC (Programmable Logic Controller): [plcstream2py](https://github.com/i2mint/plcstream2py)
- Computer health stats: [pchealthstream2py](https://github.com/i2mint/pchealthstream2py)
- Video: [videostream2py](https://github.com/i2mint/videostream2py)
- Flask: [flaskstream2py](https://github.com/i2mint/flaskstream2py)

# What `stream2py` is for

## Reduce vocabulary entropy

One way we do this is by reducing the vocabulary entropy: we don't want to have to think about how every specific source calls a read, or a size, or a time to pause before reads, or what format THAT particular sensor encodes its data in, shuffling through documentation pages before we can figure out how to start doing the fun stuff -- which happens to be the stuff that actually produces value. And, oh, once you figure it out, if you don't use it for a few months or years, next time you need to do something similar, you'll have to figure it all out again. No. That's just a waste of time.

Instead, we say you do that at most once. You don't have to do it at all if the community (us) has already provided you with the their-language-to-our-consistent-language adapter for the stream you want to hook into. And if it's something new, well, you'll have to figure it out, but you then write the adapter once, and now you (1) can use the rest of stream2py's tools and (2) don't have to do it again.

## Go back in time

We also address the problem of impermanence. Think of the streams that different sensors such as audio, vibration, or video offer, or even "industrial" signals such as wifi, CAN bus data, PLC, etc. They happen, and they're gone. Sure, they usually have buffers, but these are typically just big enough to get the data from high frequency reads -- not enough to have the time for some more involved analysis that smart systems require. We address this problem by keeping the incoming data in larger, configurable buffers, so readers can go back in time and (re)read recent data at their own pace.

## Multi readers

It often happens that you want to do more than one thing with a stream. Say store it, visualize it in real time, and direct it to an analysis pipeline. In order for this to be possible with no hiccups, some things need to be taken care of. We did, so you don't have to.
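For a concrete picture of the multi-reader setup, here is a minimal sketch. It assumes the `SourceReader`/`StreamBuffer` interface described in the documentation (`open`/`read`/`close`/`info`/`key` on the source, `start`/`stop`/`mk_reader` on the buffer); the `CounterSource` class is a made-up toy source, and names and signatures should be checked against the installed version.

```python
# Minimal sketch (toy source, assumed API -- verify against your stream2py version).
import time
from itertools import count

from stream2py import SourceReader, StreamBuffer


class CounterSource(SourceReader):
    """Toy source yielding (timestamp, counter_value) items."""

    def open(self):
        self._open_time = time.time()
        self._counter = count()

    def read(self):
        time.sleep(0.01)  # pretend we're waiting on a sensor
        return (time.time(), next(self._counter))

    def close(self):
        pass

    @property
    def info(self):
        return {'open_time': self._open_time}

    def key(self, data):
        return data[1]  # monotonically increasing sort key


stream_buffer = StreamBuffer(source_reader=CounterSource(), maxlen=1000)
stream_buffer.start()
try:
    # Each consumer gets its own cursor over the same buffer, so storing,
    # plotting and analyzing never steal items from one another.
    storage_reader = stream_buffer.mk_reader()
    analysis_reader = stream_buffer.mk_reader()
    time.sleep(0.2)
    print(storage_reader.next())
    print(analysis_reader.next())  # advances independently of storage_reader
finally:
    stream_buffer.stop()
```

The point is that each reader only keeps its own position in the shared buffer, so consumers can run at different speeds without interfering with acquisition or with each other.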
## Timestamp correctly

In our extensive experience with people who write code to store stream data, we've noticed that many engineers, when faced with the task of timestamping the segments of stream they're saving, follow a design pattern that goes like this: (a) get the stream data, (b) ask the system what date/time it is, (c) use that (and perhaps, just to make it even more likely for the timestamp to be interpreted incorrectly, call it the "offset_date").

The problem with this design pattern is that it's all pattern and no design. It is **not** the timestamp of the beginning of the segment: that time happened **after** the event at the **end** of the segment occurred, and even more so, **after** the moment the system that will timestamp and store the data got hold of it. Further, there is a lot of wiggle room in the delay accumulated between the actual event and the moment we ask the system what time it is. Sometimes it doesn't matter, but sometimes it does: for example, if we want to align with some other timestamped data, or use these timestamps to determine whether there are gaps or overlaps between the segments we've acquired.

Point is, stream2py will give you the tools to tackle that problem properly. It does so by having the stream2py buffers mentioned above keep data flow statistics that readers can then use to more precisely timestamp what they read.
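As a back-of-the-envelope illustration of the principle (this is not stream2py's API; the sample-rate bookkeeping below is a made-up stand-in for the buffer's data flow statistics), compare deriving a segment's start time from what has already been read with asking the clock at save time:

```python
import time

SAMPLE_RATE = 44_100          # samples per second, assumed fixed for the source
session_start = time.time()   # wall-clock time recorded once, when the stream opens
samples_seen = 0              # running count of samples handed out so far


def segment_start_time(n_samples_in_segment):
    """Start time of the next segment, derived from flow statistics."""
    global samples_seen
    start = session_start + samples_seen / SAMPLE_RATE
    samples_seen += n_samples_in_segment
    return start


# The naive pattern criticized above would instead do
#     offset_date = time.time()   # taken *after* the segment has ended
# which drifts with buffering and processing delays, and can create apparent
# gaps or overlaps when segments are later aligned with other data.
print(segment_start_time(4096))  # first segment starts at session_start
print(segment_start_time(4096))  # second starts 4096 / 44100 s later
```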
[Documentation here](https://i2mint.github.io/stream2py/index.html).

%package -n python3-stream2py
Summary: Bring data streams to python with ease
Provides: python-stream2py
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip

%description -n python3-stream2py
[Documentation hosted here](https://i2mint.github.io/stream2py/index.html).

To install: `pip install stream2py`

# stream2py

Bring data streams to python, with ease.

One of the goals of the suite of i2i tools is to get from idea 2 implementation without all the fuss. We've got py2store to do that for the storage (reading or writing) concern, and others (e.g. py2cli, py2ws, py2dash) to take care of exposing python functions to the world (of command line interfaces, webservices, browser dashboards, etc.). Here, we address the stream acquisition concern. As always, we aim at offering as-simple-as-drawing-a-simple-drawing means to get things done.

## Plugins

`stream2py` has the core functionality, and is completely dependency-free (only builtin python). That said, to work with specific stream sources, you can install plugins that will allow you to work with them. At the time of writing this we have:

- Audio: [audiostream2py](https://github.com/i2mint/audiostream2py)
- Keyboard inputs: [keyboardstream2py](https://github.com/i2mint/keyboardstream2py)
- PLC (Programmable Logic Controller): [plcstream2py](https://github.com/i2mint/plcstream2py)
- Computer health stats: [pchealthstream2py](https://github.com/i2mint/pchealthstream2py)
- Video: [videostream2py](https://github.com/i2mint/videostream2py)
- Flask: [flaskstream2py](https://github.com/i2mint/flaskstream2py)

# What `stream2py` is for

## Reduce vocabulary entropy

One way we do this is by reducing the vocabulary entropy: we don't want to have to think about how every specific source calls a read, or a size, or a time to pause before reads, or what format THAT particular sensor encodes its data in, shuffling through documentation pages before we can figure out how to start doing the fun stuff -- which happens to be the stuff that actually produces value. And, oh, once you figure it out, if you don't use it for a few months or years, next time you need to do something similar, you'll have to figure it all out again. No. That's just a waste of time.

Instead, we say you do that at most once. You don't have to do it at all if the community (us) has already provided you with the their-language-to-our-consistent-language adapter for the stream you want to hook into. And if it's something new, well, you'll have to figure it out, but you then write the adapter once, and now you (1) can use the rest of stream2py's tools and (2) don't have to do it again.

## Go back in time

We also address the problem of impermanence. Think of the streams that different sensors such as audio, vibration, or video offer, or even "industrial" signals such as wifi, CAN bus data, PLC, etc. They happen, and they're gone. Sure, they usually have buffers, but these are typically just big enough to get the data from high frequency reads -- not enough to have the time for some more involved analysis that smart systems require. We address this problem by keeping the incoming data in larger, configurable buffers, so readers can go back in time and (re)read recent data at their own pace.

## Multi readers

It often happens that you want to do more than one thing with a stream. Say store it, visualize it in real time, and direct it to an analysis pipeline. In order for this to be possible with no hiccups, some things need to be taken care of. We did, so you don't have to.

## Timestamp correctly

In our extensive experience with people who write code to store stream data, we've noticed that many engineers, when faced with the task of timestamping the segments of stream they're saving, follow a design pattern that goes like this: (a) get the stream data, (b) ask the system what date/time it is, (c) use that (and perhaps, just to make it even more likely for the timestamp to be interpreted incorrectly, call it the "offset_date").

The problem with this design pattern is that it's all pattern and no design. It is **not** the timestamp of the beginning of the segment: that time happened **after** the event at the **end** of the segment occurred, and even more so, **after** the moment the system that will timestamp and store the data got hold of it. Further, there is a lot of wiggle room in the delay accumulated between the actual event and the moment we ask the system what time it is. Sometimes it doesn't matter, but sometimes it does: for example, if we want to align with some other timestamped data, or use these timestamps to determine whether there are gaps or overlaps between the segments we've acquired.

Point is, stream2py will give you the tools to tackle that problem properly.
It does so by having the stream2py buffers mentioned above keep data flow statistics that readers can then use to more precisely timestamp what they read.

[Documentation here](https://i2mint.github.io/stream2py/index.html).

%package help
Summary: Development documents and examples for stream2py
Provides: python3-stream2py-doc

%description help
[Documentation hosted here](https://i2mint.github.io/stream2py/index.html).

To install: `pip install stream2py`

# stream2py

Bring data streams to python, with ease.

One of the goals of the suite of i2i tools is to get from idea 2 implementation without all the fuss. We've got py2store to do that for the storage (reading or writing) concern, and others (e.g. py2cli, py2ws, py2dash) to take care of exposing python functions to the world (of command line interfaces, webservices, browser dashboards, etc.). Here, we address the stream acquisition concern. As always, we aim at offering as-simple-as-drawing-a-simple-drawing means to get things done.

## Plugins

`stream2py` has the core functionality, and is completely dependency-free (only builtin python). That said, to work with specific stream sources, you can install plugins that will allow you to work with them. At the time of writing this we have:

- Audio: [audiostream2py](https://github.com/i2mint/audiostream2py)
- Keyboard inputs: [keyboardstream2py](https://github.com/i2mint/keyboardstream2py)
- PLC (Programmable Logic Controller): [plcstream2py](https://github.com/i2mint/plcstream2py)
- Computer health stats: [pchealthstream2py](https://github.com/i2mint/pchealthstream2py)
- Video: [videostream2py](https://github.com/i2mint/videostream2py)
- Flask: [flaskstream2py](https://github.com/i2mint/flaskstream2py)

# What `stream2py` is for

## Reduce vocabulary entropy

One way we do this is by reducing the vocabulary entropy: we don't want to have to think about how every specific source calls a read, or a size, or a time to pause before reads, or what format THAT particular sensor encodes its data in, shuffling through documentation pages before we can figure out how to start doing the fun stuff -- which happens to be the stuff that actually produces value. And, oh, once you figure it out, if you don't use it for a few months or years, next time you need to do something similar, you'll have to figure it all out again. No. That's just a waste of time.

Instead, we say you do that at most once. You don't have to do it at all if the community (us) has already provided you with the their-language-to-our-consistent-language adapter for the stream you want to hook into. And if it's something new, well, you'll have to figure it out, but you then write the adapter once, and now you (1) can use the rest of stream2py's tools and (2) don't have to do it again.

## Go back in time

We also address the problem of impermanence. Think of the streams that different sensors such as audio, vibration, or video offer, or even "industrial" signals such as wifi, CAN bus data, PLC, etc. They happen, and they're gone. Sure, they usually have buffers, but these are typically just big enough to get the data from high frequency reads -- not enough to have the time for some more involved analysis that smart systems require. We address this problem by keeping the incoming data in larger, configurable buffers, so readers can go back in time and (re)read recent data at their own pace.

## Multi readers

It often happens that you want to do more than one thing with a stream. Say store it, visualize it in real time, and direct it to an analysis pipeline. In order for this to be possible with no hiccups, some things need to be taken care of.
We did, so you don't have to.

## Timestamp correctly

In our extensive experience with people who write code to store stream data, we've noticed that many engineers, when faced with the task of timestamping the segments of stream they're saving, follow a design pattern that goes like this: (a) get the stream data, (b) ask the system what date/time it is, (c) use that (and perhaps, just to make it even more likely for the timestamp to be interpreted incorrectly, call it the "offset_date").

The problem with this design pattern is that it's all pattern and no design. It is **not** the timestamp of the beginning of the segment: that time happened **after** the event at the **end** of the segment occurred, and even more so, **after** the moment the system that will timestamp and store the data got hold of it. Further, there is a lot of wiggle room in the delay accumulated between the actual event and the moment we ask the system what time it is. Sometimes it doesn't matter, but sometimes it does: for example, if we want to align with some other timestamped data, or use these timestamps to determine whether there are gaps or overlaps between the segments we've acquired.

Point is, stream2py will give you the tools to tackle that problem properly. It does so by having the stream2py buffers mentioned above keep data flow statistics that readers can then use to more precisely timestamp what they read.

[Documentation here](https://i2mint.github.io/stream2py/index.html).

%prep
%autosetup -n stream2py-1.0.42

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-stream2py -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Thu Jun 08 2023 Python_Bot - 1.0.42-1
- Package Spec generated