author CoprDistGit <infra@openeuler.org> 2023-04-11 11:53:18 +0000
committer CoprDistGit <infra@openeuler.org> 2023-04-11 11:53:18 +0000
commit 1d004c65c55cde704ec81d5a32fecea8eb584706 (patch)
tree 2264e09f50010a87b3b2f605711cbe23e852d017
parent 7b01475a8537c71b0d0c5a1f88ed09c0103a0c2f (diff)
automatic import of python-kclpy
-rw-r--r--  .gitignore           1
-rw-r--r--  python-kclpy.spec  450
-rw-r--r--  sources              1
3 files changed, 452 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..542265a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/kclpy-0.2.0.tar.gz
diff --git a/python-kclpy.spec b/python-kclpy.spec
new file mode 100644
index 0000000..8e30d37
--- /dev/null
+++ b/python-kclpy.spec
@@ -0,0 +1,450 @@
+%global _empty_manifest_terminate_build 0
+Name: python-kclpy
+Version: 0.2.0
+Release: 1
+Summary: A python interface for the Amazon Kinesis Client Library MultiLangDaemon
+License: Amazon Software License
+URL: https://github.com/empiricalresults/kclpy
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/3f/19/687917d4e8c097163767318849f17e215a45277f18c847f799b7cfb59858/kclpy-0.2.0.tar.gz
+BuildArch: noarch
+
+
+%description
+# kclpy
+
+This is a fork of the [Amazon Kinesis Client Library for Python](https://github.com/awslabs/amazon-kinesis-client-python),
+aiming to simplify consuming a Kinesis stream using [Amazon's Kinesis Client Library (KCL)](http://docs.aws.amazon.com/kinesis/latest/dev/developing-consumers-with-kcl.html) multi-lang daemon interface.
+
+
+## Why
+
+It should be easy to consume a Kinesis stream in Python. This library provides a Python API to the KCL.
+
+## Usage
+
+Install it:
+
+```sh
+> pip install kclpy
+```
+
+Implement a RecordProcessor. See http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-py.html for details on the RecordProcessor interface.
+
+```python
+import kclpy
+import json
+import logging
+
+log = logging.getLogger(__name__)
+
+
+class MyStreamProcessor(kclpy.RecordProcessor):
+
+    def process_record(self, data, partition_key, sequence_number):
+        try:
+            # assumes the incoming kinesis record is json
+            data = json.loads(data)
+            user = data.get("user")
+
+            # explicitly return True to force a checkpoint (otherwise the
+            # default checkpointing strategy is used)
+            return True
+
+        except ValueError:
+            # not valid json
+            log.error("Invalid json placed on queue, nothing we can do")
+            return
+
+
+def main():
+    kclpy.start(MyStreamProcessor())
+
+
+if __name__ == '__main__':
+    main()
+```
+
+## Running
+
+Running this directly won't do anything other than wait for records via STDIN. The accompanying [Sylvite](https://github.com/empiricalresults/sylvite) library is an executable jar that will launch our record processor and feed it records.
+
+See the [Sylvite](https://github.com/empiricalresults/sylvite) library for details and a pre-built jar.
+
+```sh
+> java -jar sylvite.jar --config=myapp.properties
+```
+
+## Logging
+
+This library uses the standard Python logging module (all logs under the namespace 'kclpy'). The KCL multi-lang daemon expects well-formed data on STDOUT, so be sure to configure your logging to use STDERR or a file. Do not use print statements in your processor!
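+
+As an illustrative sketch (this uses only the standard library logging API, nothing kclpy-specific), routing all 'kclpy' logs to STDERR could look like:
+
+```python
+import logging
+
+# StreamHandler defaults to sys.stderr, which keeps STDOUT free for
+# the multi-lang daemon protocol
+handler = logging.StreamHandler()
+handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
+
+kclpy_logger = logging.getLogger("kclpy")
+kclpy_logger.addHandler(handler)
+kclpy_logger.setLevel(logging.INFO)
+```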
+
+
+## Background
+
+The key concept to understand when using the KCL's multi-lang daemon is that there is a Java process doing all communication with the Kinesis API, and a language-agnostic child process that reads and writes via STDIN/STDOUT. This is very similar to how Hadoop streaming works. In order to consume the stream, we need to start up a Java process, which will in turn start up a child process that actually handles consuming the stream data.
+
+While this sounds complicated, building on the KCL gives us the advantage of all the checkpointing, resharding and monitoring work that is baked into the KCL. The KCL is also maintained by the awslabs team, so any future enhancements will be handled for free.
+
+
+## RecordProcessor
+
+kclpy is based on awslabs' sample code, with only a few minor tweaks in logging and checkpointing.
+
+### API
+
+Refer to [Amazon's documentation](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-py.html); this fork maintains compatibility with the original implementation.
+
+
+### Checkpointing
+
+The KCL uses a DynamoDB table to maintain its current position in the stream (the checkpoint). kclpy allows you to customize the checkpointing behaviour. The following kwargs can be passed to kclpy.RecordProcessor:
+
+* `checkpoint_freq_seconds` - Checkpoint at a fixed interval (in seconds).
+* `records_per_checkpoint` - Checkpoint after a fixed number of records processed.
+
+```python
+def main():
+    # automatically checkpoint every 60 seconds
+    every_minute_processor = MyStreamProcessor(checkpoint_freq_seconds=60)
+
+    # or checkpoint every 100 records
+    every_hundred_records_processor = MyStreamProcessor(records_per_checkpoint=100)
+
+    # todo: start the processor
+```
+
+Alternatively, you can force an explicit checkpoint by returning True from your *process_record* call. But be warned: doing this for every record will result in a lot of writes to your DynamoDB table.
+
+```python
+import kclpy
+
+
+def process_data(data):
+    # if this is a special record, tell the library to checkpoint so we don't
+    # process it again.
+    return True
+
+
+class MyStreamProcessor(kclpy.RecordProcessor):
+
+    def process_record(self, data, partition_key, sequence_number):
+        should_checkpoint = process_data(data)
+        return should_checkpoint
+
+
+def main():
+    # checkpoints will only be made if process_record() returns True
+    # probably not a great idea in the general case
+    manual_checkpointer = MyStreamProcessor(
+        checkpoint_freq_seconds=0,
+        records_per_checkpoint=0
+    )
+
+    # todo: start the processor
+```
+
+
+
+%package -n python3-kclpy
+Summary: A python interface for the Amazon Kinesis Client Library MultiLangDaemon
+Provides: python-kclpy
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-kclpy
+# kclpy
+
+This is a fork of the [Amazon Kinesis Client Library for Python](https://github.com/awslabs/amazon-kinesis-client-python),
+aiming to simplify consuming a Kinesis stream using [Amazon's Kinesis Client Library (KCL)](http://docs.aws.amazon.com/kinesis/latest/dev/developing-consumers-with-kcl.html) multi-lang daemon interface.
+
+
+## Why
+
+It should be easy to consume a Kinesis stream in Python. This library provides a Python API to the KCL.
+
+## Usage
+
+Install it:
+
+```sh
+> pip install kclpy
+```
+
+Implement a RecordProcessor. See http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-py.html for details on the RecordProcessor interface.
+
+```python
+import kclpy
+import json
+import logging
+
+log = logging.getLogger(__name__)
+
+
+class MyStreamProcessor(kclpy.RecordProcessor):
+
+    def process_record(self, data, partition_key, sequence_number):
+        try:
+            # assumes the incoming kinesis record is json
+            data = json.loads(data)
+            user = data.get("user")
+
+            # explicitly return True to force a checkpoint (otherwise the
+            # default checkpointing strategy is used)
+            return True
+
+        except ValueError:
+            # not valid json
+            log.error("Invalid json placed on queue, nothing we can do")
+            return
+
+
+def main():
+    kclpy.start(MyStreamProcessor())
+
+
+if __name__ == '__main__':
+    main()
+```
+
+## Running
+
+Running this directly won't do anything other than wait for records via STDIN. The accompanying [Sylvite](https://github.com/empiricalresults/sylvite) library is an executable jar that will launch our record processor and feed it records.
+
+See the [Sylvite](https://github.com/empiricalresults/sylvite) library for details and a pre-built jar.
+
+```sh
+> java -jar sylvite.jar --config=myapp.properties
+```
+
+## Logging
+
+This library uses the standard Python logging module (all logs under the namespace 'kclpy'). The KCL multi-lang daemon expects well-formed data on STDOUT, so be sure to configure your logging to use STDERR or a file. Do not use print statements in your processor!
+
+
+## Background
+
+The key concept to understand when using the KCL's multi-lang daemon is that there is a Java process doing all communication with the Kinesis API, and a language-agnostic child process that reads and writes via STDIN/STDOUT. This is very similar to how Hadoop streaming works. In order to consume the stream, we need to start up a Java process, which will in turn start up a child process that actually handles consuming the stream data.
+
+While this sounds complicated, building on the KCL gives us the advantage of all the checkpointing, resharding and monitoring work that is baked into the KCL. The KCL is also maintained by the awslabs team, so any future enhancements will be handled for free.
+
+
+## RecordProcessor
+
+kclpy is based on awslabs' sample code, with only a few minor tweaks in logging and checkpointing.
+
+### API
+
+Refer to [Amazon's documentation](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-py.html); this fork maintains compatibility with the original implementation.
+
+
+### Checkpointing
+
+The KCL uses a DynamoDB table to maintain its current position in the stream (the checkpoint). kclpy allows you to customize the checkpointing behaviour. The following kwargs can be passed to kclpy.RecordProcessor:
+
+* `checkpoint_freq_seconds` - Checkpoint at a fixed interval (in seconds).
+* `records_per_checkpoint` - Checkpoint after a fixed number of records processed.
+
+```python
+def main():
+    # automatically checkpoint every 60 seconds
+    every_minute_processor = MyStreamProcessor(checkpoint_freq_seconds=60)
+
+    # or checkpoint every 100 records
+    every_hundred_records_processor = MyStreamProcessor(records_per_checkpoint=100)
+
+    # todo: start the processor
+```
+
+Alternatively, you can force an explicit checkpoint by returning True from your *process_record* call. But be warned: doing this for every record will result in a lot of writes to your DynamoDB table.
+
+```python
+import kclpy
+
+
+def process_data(data):
+    # if this is a special record, tell the library to checkpoint so we don't
+    # process it again.
+    return True
+
+
+class MyStreamProcessor(kclpy.RecordProcessor):
+
+    def process_record(self, data, partition_key, sequence_number):
+        should_checkpoint = process_data(data)
+        return should_checkpoint
+
+
+def main():
+    # checkpoints will only be made if process_record() returns True
+    # probably not a great idea in the general case
+    manual_checkpointer = MyStreamProcessor(
+        checkpoint_freq_seconds=0,
+        records_per_checkpoint=0
+    )
+
+    # todo: start the processor
+```
+
+
+
+%package help
+Summary: Development documents and examples for kclpy
+Provides: python3-kclpy-doc
+%description help
+# kclpy
+
+This is a fork of the [Amazon Kinesis Client Library for Python](https://github.com/awslabs/amazon-kinesis-client-python),
+aiming to simplify consuming a Kinesis stream using [Amazon's Kinesis Client Library (KCL)](http://docs.aws.amazon.com/kinesis/latest/dev/developing-consumers-with-kcl.html) multi-lang daemon interface.
+
+
+## Why
+
+It should be easy to consume a Kinesis stream in Python. This library provides a Python API to the KCL.
+
+## Usage
+
+Install it:
+
+```sh
+> pip install kclpy
+```
+
+Implement a RecordProcessor. See http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-py.html for details on the RecordProcessor interface.
+
+```python
+import kclpy
+import json
+import logging
+
+log = logging.getLogger(__name__)
+
+
+class MyStreamProcessor(kclpy.RecordProcessor):
+
+    def process_record(self, data, partition_key, sequence_number):
+        try:
+            # assumes the incoming kinesis record is json
+            data = json.loads(data)
+            user = data.get("user")
+
+            # explicitly return True to force a checkpoint (otherwise the
+            # default checkpointing strategy is used)
+            return True
+
+        except ValueError:
+            # not valid json
+            log.error("Invalid json placed on queue, nothing we can do")
+            return
+
+
+def main():
+    kclpy.start(MyStreamProcessor())
+
+
+if __name__ == '__main__':
+    main()
+```
+
+## Running
+
+Running this directly won't do anything other than wait for records via STDIN. The accompanying [Sylvite](https://github.com/empiricalresults/sylvite) library is an executable jar that will launch our record processor and feed it records.
+
+See the [Sylvite](https://github.com/empiricalresults/sylvite) library for details and a pre-built jar.
+
+```sh
+> java -jar sylvite.jar --config=myapp.properties
+```
+
+## Logging
+
+This library uses the standard Python logging module (all logs under the namespace 'kclpy'). The KCL multi-lang daemon expects well-formed data on STDOUT, so be sure to configure your logging to use STDERR or a file. Do not use print statements in your processor!
+
+
+## Background
+
+The key concept to understand when using the KCL's multi-lang daemon is that there is a Java process doing all communication with the Kinesis API, and a language-agnostic child process that reads and writes via STDIN/STDOUT. This is very similar to how Hadoop streaming works. In order to consume the stream, we need to start up a Java process, which will in turn start up a child process that actually handles consuming the stream data.
+
+While this sounds complicated, building on the KCL gives us the advantage of all the checkpointing, resharding and monitoring work that is baked into the KCL. The KCL is also maintained by the awslabs team, so any future enhancements will be handled for free.
+
+
+## RecordProcessor
+
+kclpy is based on awslabs' sample code, with only a few minor tweaks in logging and checkpointing.
+
+### API
+
+Refer to [Amazon's documentation](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-py.html); this fork maintains compatibility with the original implementation.
+
+
+### Checkpointing
+
+The KCL uses a DynamoDB table to maintain its current position in the stream (the checkpoint). kclpy allows you to customize the checkpointing behaviour. The following kwargs can be passed to kclpy.RecordProcessor:
+
+* `checkpoint_freq_seconds` - Checkpoint at a fixed interval (in seconds).
+* `records_per_checkpoint` - Checkpoint after a fixed number of records processed.
+
+```python
+def main():
+    # automatically checkpoint every 60 seconds
+    every_minute_processor = MyStreamProcessor(checkpoint_freq_seconds=60)
+
+    # or checkpoint every 100 records
+    every_hundred_records_processor = MyStreamProcessor(records_per_checkpoint=100)
+
+    # todo: start the processor
+```
+
+Alternatively, you can force an explicit checkpoint by returning True from your *process_record* call. But be warned: doing this for every record will result in a lot of writes to your DynamoDB table.
+
+```python
+import kclpy
+
+
+def process_data(data):
+    # if this is a special record, tell the library to checkpoint so we don't
+    # process it again.
+    return True
+
+
+class MyStreamProcessor(kclpy.RecordProcessor):
+
+    def process_record(self, data, partition_key, sequence_number):
+        should_checkpoint = process_data(data)
+        return should_checkpoint
+
+
+def main():
+    # checkpoints will only be made if process_record() returns True
+    # probably not a great idea in the general case
+    manual_checkpointer = MyStreamProcessor(
+        checkpoint_freq_seconds=0,
+        records_per_checkpoint=0
+    )
+
+    # todo: start the processor
+```
+
+
+
+%prep
+%autosetup -n kclpy-0.2.0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-kclpy -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 0.2.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..d721566
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+9fc212230140f4d0193dc707c9caf7e9 kclpy-0.2.0.tar.gz