 .gitignore                        |   1 +
 python-aws-cdk-aws-msk-alpha.spec | 515 ++++++++++++++++++++++++++++++++++++++
 sources                           |   1 +
 3 files changed, 517 insertions(+), 0 deletions(-)
diff --git a/.gitignore b/.gitignore
index e69de29..f212c37 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/aws-cdk.aws-msk-alpha-2.81.0a0.tar.gz
diff --git a/python-aws-cdk-aws-msk-alpha.spec b/python-aws-cdk-aws-msk-alpha.spec
new file mode 100644
index 0000000..adeb2f2
--- /dev/null
+++ b/python-aws-cdk-aws-msk-alpha.spec
@@ -0,0 +1,515 @@
+%global _empty_manifest_terminate_build 0
+Name: python-aws-cdk.aws-msk-alpha
+Version: 2.81.0a0
+Release: 1
+Summary: The CDK Construct Library for AWS::MSK
+License: Apache-2.0
+URL: https://github.com/aws/aws-cdk
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/2b/e2/3d2401c00498e76c94278c05a9e18d879708386a7c1df87a8d27beac0c58/aws-cdk.aws-msk-alpha-2.81.0a0.tar.gz
+BuildArch: noarch
+
+Requires: python3-aws-cdk-lib
+Requires: python3-constructs
+Requires: python3-jsii
+Requires: python3-publication
+Requires: python3-typeguard
+
+%description
+[Amazon MSK](https://aws.amazon.com/msk/) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data.
+The following example creates an MSK Cluster.
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc
+)
+```
+## Allowing Connections
+To control who can access the Cluster, use the `.connections` attribute. For a list of ports used by MSK, refer to the [MSK documentation](https://docs.aws.amazon.com/msk/latest/developerguide/client-access.html#port-info).
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc
+)
+cluster.connections.allow_from(
+ ec2.Peer.ipv4("1.2.3.4/8"),
+ ec2.Port.tcp(2181))
+cluster.connections.allow_from(
+ ec2.Peer.ipv4("1.2.3.4/8"),
+ ec2.Port.tcp(9094))
+```
+## Cluster Endpoints
+You can use the following attributes to get a list of the Kafka broker or ZooKeeper node endpoints:
+```python
+# cluster: msk.Cluster
+CfnOutput(self, "BootstrapBrokers", value=cluster.bootstrap_brokers)
+CfnOutput(self, "BootstrapBrokersTls", value=cluster.bootstrap_brokers_tls)
+CfnOutput(self, "BootstrapBrokersSaslScram", value=cluster.bootstrap_brokers_sasl_scram)
+CfnOutput(self, "BootstrapBrokerStringSaslIam", value=cluster.bootstrap_brokers_sasl_iam)
+CfnOutput(self, "ZookeeperConnection", value=cluster.zookeeper_connection_string)
+CfnOutput(self, "ZookeeperConnectionTls", value=cluster.zookeeper_connection_string_tls)
+```
+## Importing an existing Cluster
+To import an existing MSK cluster into your CDK app, use the static `Cluster.from_cluster_arn()` method.
+```python
+cluster = msk.Cluster.from_cluster_arn(self, "Cluster", "arn:aws:kafka:us-west-2:1234567890:cluster/a-cluster/11111111-1111-1111-1111-111111111111-1")
+```
+## Client Authentication
+[MSK supports](https://docs.aws.amazon.com/msk/latest/developerguide/kafka_apis_iam.html) the following authentication mechanisms.
+### TLS
+To enable client authentication with TLS, set the `certificateAuthorities` property to reference your ACM Private CA. [More info on Private CAs.](https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html)
+```python
+import aws_cdk.aws_acmpca as acmpca
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.tls(
+ certificate_authorities=[
+ acmpca.CertificateAuthority.from_certificate_authority_arn(self, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111")
+ ]
+ )
+)
+```
+### SASL/SCRAM
+Enable client authentication with [SASL/SCRAM](https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html):
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.sasl(
+ scram=True
+ )
+)
+```
+### SASL/IAM
+Enable client authentication with [IAM](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html):
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.sasl(
+ iam=True
+ )
+)
+```
+### SASL/IAM + TLS
+Enable client authentication with [IAM](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html)
+as well as with TLS by setting the `certificateAuthorities` property to reference your ACM Private CA. [More info on Private CAs.](https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html)
+```python
+import aws_cdk.aws_acmpca as acmpca
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.sasl_tls(
+ iam=True,
+ certificate_authorities=[
+ acmpca.CertificateAuthority.from_certificate_authority_arn(self, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111")
+ ]
+ )
+)
+```
+## Logging
+You can deliver Apache Kafka broker logs to one or more of the following destination types:
+Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose.
+To configure logs to be sent to an S3 bucket, provide a bucket in the `logging` config.
+```python
+# vpc: ec2.Vpc
+# bucket: s3.IBucket
+cluster = msk.Cluster(self, "cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ logging=msk.BrokerLogging(
+ s3=msk.S3LoggingConfiguration(
+ bucket=bucket
+ )
+ )
+)
+```
+When the S3 destination is configured, AWS automatically creates an S3 bucket policy
+that allows the service to write logs to the bucket, and that policy cannot be
+updated afterwards. To have the CDK create the bucket policy instead, so that it
+can still be updated later, enable the `@aws-cdk/aws-s3:createDefaultLoggingPolicy`
+[feature flag](https://docs.aws.amazon.com/cdk/v2/guide/featureflags.html) in the `cdk.json` file.
+```json
+{
+ "context": {
+ "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true
+ }
+}
+```
+
+%package -n python3-aws-cdk.aws-msk-alpha
+Summary: The CDK Construct Library for AWS::MSK
+Provides: python-aws-cdk.aws-msk-alpha
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-aws-cdk.aws-msk-alpha
+[Amazon MSK](https://aws.amazon.com/msk/) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data.
+The following example creates an MSK Cluster.
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc
+)
+```
+## Allowing Connections
+To control who can access the Cluster, use the `.connections` attribute. For a list of ports used by MSK, refer to the [MSK documentation](https://docs.aws.amazon.com/msk/latest/developerguide/client-access.html#port-info).
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc
+)
+cluster.connections.allow_from(
+ ec2.Peer.ipv4("1.2.3.4/8"),
+ ec2.Port.tcp(2181))
+cluster.connections.allow_from(
+ ec2.Peer.ipv4("1.2.3.4/8"),
+ ec2.Port.tcp(9094))
+```
+## Cluster Endpoints
+You can use the following attributes to get a list of the Kafka broker or ZooKeeper node endpoints:
+```python
+# cluster: msk.Cluster
+CfnOutput(self, "BootstrapBrokers", value=cluster.bootstrap_brokers)
+CfnOutput(self, "BootstrapBrokersTls", value=cluster.bootstrap_brokers_tls)
+CfnOutput(self, "BootstrapBrokersSaslScram", value=cluster.bootstrap_brokers_sasl_scram)
+CfnOutput(self, "BootstrapBrokerStringSaslIam", value=cluster.bootstrap_brokers_sasl_iam)
+CfnOutput(self, "ZookeeperConnection", value=cluster.zookeeper_connection_string)
+CfnOutput(self, "ZookeeperConnectionTls", value=cluster.zookeeper_connection_string_tls)
+```
+## Importing an existing Cluster
+To import an existing MSK cluster into your CDK app, use the static `Cluster.from_cluster_arn()` method.
+```python
+cluster = msk.Cluster.from_cluster_arn(self, "Cluster", "arn:aws:kafka:us-west-2:1234567890:cluster/a-cluster/11111111-1111-1111-1111-111111111111-1")
+```
+## Client Authentication
+[MSK supports](https://docs.aws.amazon.com/msk/latest/developerguide/kafka_apis_iam.html) the following authentication mechanisms.
+### TLS
+To enable client authentication with TLS, set the `certificateAuthorities` property to reference your ACM Private CA. [More info on Private CAs.](https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html)
+```python
+import aws_cdk.aws_acmpca as acmpca
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.tls(
+ certificate_authorities=[
+ acmpca.CertificateAuthority.from_certificate_authority_arn(self, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111")
+ ]
+ )
+)
+```
+### SASL/SCRAM
+Enable client authentication with [SASL/SCRAM](https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html):
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.sasl(
+ scram=True
+ )
+)
+```
+### SASL/IAM
+Enable client authentication with [IAM](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html):
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.sasl(
+ iam=True
+ )
+)
+```
+### SASL/IAM + TLS
+Enable client authentication with [IAM](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html)
+as well as with TLS by setting the `certificateAuthorities` property to reference your ACM Private CA. [More info on Private CAs.](https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html)
+```python
+import aws_cdk.aws_acmpca as acmpca
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.sasl_tls(
+ iam=True,
+ certificate_authorities=[
+ acmpca.CertificateAuthority.from_certificate_authority_arn(self, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111")
+ ]
+ )
+)
+```
+## Logging
+You can deliver Apache Kafka broker logs to one or more of the following destination types:
+Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose.
+To configure logs to be sent to an S3 bucket, provide a bucket in the `logging` config.
+```python
+# vpc: ec2.Vpc
+# bucket: s3.IBucket
+cluster = msk.Cluster(self, "cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ logging=msk.BrokerLogging(
+ s3=msk.S3LoggingConfiguration(
+ bucket=bucket
+ )
+ )
+)
+```
+When the S3 destination is configured, AWS automatically creates an S3 bucket policy
+that allows the service to write logs to the bucket, and that policy cannot be
+updated afterwards. To have the CDK create the bucket policy instead, so that it
+can still be updated later, enable the `@aws-cdk/aws-s3:createDefaultLoggingPolicy`
+[feature flag](https://docs.aws.amazon.com/cdk/v2/guide/featureflags.html) in the `cdk.json` file.
+```json
+{
+ "context": {
+ "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true
+ }
+}
+```
+
+%package help
+Summary: Development documents and examples for aws-cdk.aws-msk-alpha
+Provides: python3-aws-cdk.aws-msk-alpha-doc
+%description help
+[Amazon MSK](https://aws.amazon.com/msk/) is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data.
+The following example creates an MSK Cluster.
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc
+)
+```
+## Allowing Connections
+To control who can access the Cluster, use the `.connections` attribute. For a list of ports used by MSK, refer to the [MSK documentation](https://docs.aws.amazon.com/msk/latest/developerguide/client-access.html#port-info).
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc
+)
+cluster.connections.allow_from(
+ ec2.Peer.ipv4("1.2.3.4/8"),
+ ec2.Port.tcp(2181))
+cluster.connections.allow_from(
+ ec2.Peer.ipv4("1.2.3.4/8"),
+ ec2.Port.tcp(9094))
+```
+## Cluster Endpoints
+You can use the following attributes to get a list of the Kafka broker or ZooKeeper node endpoints:
+```python
+# cluster: msk.Cluster
+CfnOutput(self, "BootstrapBrokers", value=cluster.bootstrap_brokers)
+CfnOutput(self, "BootstrapBrokersTls", value=cluster.bootstrap_brokers_tls)
+CfnOutput(self, "BootstrapBrokersSaslScram", value=cluster.bootstrap_brokers_sasl_scram)
+CfnOutput(self, "BootstrapBrokerStringSaslIam", value=cluster.bootstrap_brokers_sasl_iam)
+CfnOutput(self, "ZookeeperConnection", value=cluster.zookeeper_connection_string)
+CfnOutput(self, "ZookeeperConnectionTls", value=cluster.zookeeper_connection_string_tls)
+```
+## Importing an existing Cluster
+To import an existing MSK cluster into your CDK app, use the static `Cluster.from_cluster_arn()` method.
+```python
+cluster = msk.Cluster.from_cluster_arn(self, "Cluster", "arn:aws:kafka:us-west-2:1234567890:cluster/a-cluster/11111111-1111-1111-1111-111111111111-1")
+```
+## Client Authentication
+[MSK supports](https://docs.aws.amazon.com/msk/latest/developerguide/kafka_apis_iam.html) the following authentication mechanisms.
+### TLS
+To enable client authentication with TLS, set the `certificateAuthorities` property to reference your ACM Private CA. [More info on Private CAs.](https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html)
+```python
+import aws_cdk.aws_acmpca as acmpca
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.tls(
+ certificate_authorities=[
+ acmpca.CertificateAuthority.from_certificate_authority_arn(self, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111")
+ ]
+ )
+)
+```
+### SASL/SCRAM
+Enable client authentication with [SASL/SCRAM](https://docs.aws.amazon.com/msk/latest/developerguide/msk-password.html):
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.sasl(
+ scram=True
+ )
+)
+```
+### SASL/IAM
+Enable client authentication with [IAM](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html):
+```python
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.sasl(
+ iam=True
+ )
+)
+```
+### SASL/IAM + TLS
+Enable client authentication with [IAM](https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html)
+as well as with TLS by setting the `certificateAuthorities` property to reference your ACM Private CA. [More info on Private CAs.](https://docs.aws.amazon.com/msk/latest/developerguide/msk-authentication.html)
+```python
+import aws_cdk.aws_acmpca as acmpca
+# vpc: ec2.Vpc
+cluster = msk.Cluster(self, "Cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ encryption_in_transit=msk.EncryptionInTransitConfig(
+ client_broker=msk.ClientBrokerEncryption.TLS
+ ),
+ client_authentication=msk.ClientAuthentication.sasl_tls(
+ iam=True,
+ certificate_authorities=[
+ acmpca.CertificateAuthority.from_certificate_authority_arn(self, "CertificateAuthority", "arn:aws:acm-pca:us-west-2:1234567890:certificate-authority/11111111-1111-1111-1111-111111111111")
+ ]
+ )
+)
+```
+## Logging
+You can deliver Apache Kafka broker logs to one or more of the following destination types:
+Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose.
+To configure logs to be sent to an S3 bucket, provide a bucket in the `logging` config.
+```python
+# vpc: ec2.Vpc
+# bucket: s3.IBucket
+cluster = msk.Cluster(self, "cluster",
+ cluster_name="myCluster",
+ kafka_version=msk.KafkaVersion.V2_8_1,
+ vpc=vpc,
+ logging=msk.BrokerLogging(
+ s3=msk.S3LoggingConfiguration(
+ bucket=bucket
+ )
+ )
+)
+```
+When the S3 destination is configured, AWS automatically creates an S3 bucket policy
+that allows the service to write logs to the bucket, and that policy cannot be
+updated afterwards. To have the CDK create the bucket policy instead, so that it
+can still be updated later, enable the `@aws-cdk/aws-s3:createDefaultLoggingPolicy`
+[feature flag](https://docs.aws.amazon.com/cdk/v2/guide/featureflags.html) in the `cdk.json` file.
+```json
+{
+ "context": {
+ "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true
+ }
+}
+```
+
+%prep
+%autosetup -n aws-cdk.aws-msk-alpha-2.81.0a0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-aws-cdk.aws-msk-alpha -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Wed May 31 2023 Python_Bot <Python_Bot@openeuler.org> - 2.81.0a0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..7548dab
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+66da9ccda7a85e8e850a7420c42651db aws-cdk.aws-msk-alpha-2.81.0a0.tar.gz
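The `sources` file above pairs an MD5 digest with the source tarball name; build tooling recomputes the digest of the downloaded archive and compares it against this entry. As a sketch of that check (the helper name and the small stand-in file are illustrative, not part of the packaging tooling), the digest can be recomputed in Python:

```python
import hashlib
import tempfile

def md5_of_file(path, chunk_size=1 << 20):
    """Compute the hex MD5 digest of a file, reading it in chunks
    so that large tarballs do not need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for the real tarball: hash a small temporary file.
with tempfile.NamedTemporaryFile(delete=False) as fh:
    fh.write(b"hello world")
    sample = fh.name

print(md5_of_file(sample))  # -> 5eb63bbbe01eeed093cb22bb8f5acdc3
```

The resulting hex string would be compared against the first field of the `sources` entry; a mismatch indicates a corrupted or tampered download.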