Diffstat (limited to 'python-cdk-efs-assets.spec')
 python-cdk-efs-assets.spec (new, -rw-r--r--) | 744 ++++
 1 file changed, 744 insertions(+), 0 deletions(-)
diff --git a/python-cdk-efs-assets.spec b/python-cdk-efs-assets.spec
new file mode 100644
index 0000000..3d13ebe
--- /dev/null
+++ b/python-cdk-efs-assets.spec
@@ -0,0 +1,744 @@
+%global _empty_manifest_terminate_build 0
+Name: python-cdk-efs-assets
+Version: 0.3.95
+Release: 1
+Summary: Amazon EFS assets from GitHub repositories or S3 buckets
+License: Apache-2.0
+URL: https://github.com/pahud/cdk-efs-assets.git
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/79/77/4c5369eeb5f781e8e5134c171a3e76ecbb10bf661fc46259a4de5ffee202/cdk-efs-assets-0.3.95.tar.gz
+BuildArch: noarch
+
+Requires: python3-aws-cdk-lib
+Requires: python3-cdk-fargate-run-task
+Requires: python3-constructs
+Requires: python3-jsii
+Requires: python3-publication
+Requires: python3-typeguard
+
+%description
+[![NPM version](https://badge.fury.io/js/cdk-efs-assets.svg)](https://badge.fury.io/js/cdk-efs-assets)
+[![PyPI version](https://badge.fury.io/py/cdk-efs-assets.svg)](https://badge.fury.io/py/cdk-efs-assets)
+![Release](https://github.com/pahud/cdk-efs-assets/workflows/Release/badge.svg)
+
+# cdk-efs-assets
+
+CDK construct library to populate Amazon EFS assets from GitHub or S3. If the source is S3, the construct can also optionally update the contents in EFS when a new zip file is uploaded to S3.
+
+## Install
+
+TypeScript/JavaScript:
+
+```bash
+npm i cdk-efs-assets
+```
+
+## SyncedAccessPoint
+
+The main construct that is used to provide this EFS sync functionality is `SyncedAccessPoint`. This extends the standard EFS `AccessPoint` construct, and takes an additional `SyncSource` constructor property which defines the source to sync assets from. The `SyncedAccessPoint` instance can be used anywhere an `AccessPoint` can be used. For example, to specify a volume in a Task Definition:
+
+```ts
+const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDefinition', {
+ ...
+ volumes: [
+ {
+ name: 'efs-storage',
+ efsVolumeConfiguration: {
+ fileSystemId: sharedFileSystem.fileSystemId,
+ transitEncryption: 'ENABLED',
+ authorizationConfig: {
+ accessPointId: syncedAccessPoint.accessPointId
+ }
+ }
+ },
+ ]
+});
+```
+
+## Sync Engine
+
+This library supports both `AWS Fargate` and `AWS Lambda` as the sync engine. Because AWS Lambda currently has a known issue with Amazon EFS ([#100](https://github.com/pahud/cdk-efs-assets/issues/100)), the default sync engine is `AWS Fargate`. You can opt in to AWS Lambda with the `engine` construct property of `SyncedAccessPoint`.
+
+## Sync Source
+
+Use the `GithubSyncSource` and `S3ArchiveSyncSource` construct classes to define a `syncSource` backed by a GitHub repository or an Amazon S3 bucket.
+
+To define a public GitHub repository as the `syncSource`:
+
+```ts
+new SyncedAccessPoint(stack, 'EfsAccessPoint', {
+ ...
+ syncSource: new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/pahud/cdk-efs-assets.git',
+ }),
+});
+```
+
+To define a private GitHub repository as the `syncSource`:
+
+```ts
+new SyncedAccessPoint(stack, 'EfsAccessPoint', {
+ ...
+ syncSource: new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/pahud/private-repo.git',
+ secret: {
+ id: 'github',
+ key: 'oauth_token',
+ },
+ }),
+});
+```
+
+### `syncDirectoryPath`
+
+By default, the synced EFS assets are placed into a directory whose name is derived from the sync source. The GitHub source places the copied files into a directory named after the repository (for a repository specified as 'https://github.com/pahud/cdk-efs-assets.git', the directory name would be 'cdk-efs-assets'), while the S3 archive source places them into a directory named after the zip file (for a zip file named 'assets.zip', the directory name would be 'assets').
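
The default directory naming described above can be sketched as a small pure function (an illustrative sketch only, not the library's actual implementation; `defaultSyncDirectory` is a hypothetical name):

```ts
// Illustrative sketch of the default sync-directory naming described above.
// Not the library's actual code; `defaultSyncDirectory` is a hypothetical name.
function defaultSyncDirectory(source: string): string {
  // Take the last path segment of the repository URL or zip file path...
  const base = source.split('/').pop() ?? source;
  // ...and strip a trailing `.git` or `.zip` suffix.
  return base.replace(/\.(git|zip)$/, '');
}

console.log(defaultSyncDirectory('https://github.com/pahud/cdk-efs-assets.git')); // 'cdk-efs-assets'
console.log(defaultSyncDirectory('folder/assets.zip')); // 'assets'
```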
+
+If you wish to override this default behavior, specify a value for the `syncDirectoryPath` property that is passed into the `SyncSource` call.
+
+If you are using the `AccessPoint` in an ECS/Fargate Task Definition, you will probably want to override `syncDirectoryPath` to '/'. This places the file contents in the root directory of the Access Point, because when you create a volume that references an EFS Access Point, the task definition configuration does not allow any path other than the root directory.
+
+## How to use SyncedAccessPoint initialized with files provisioned from a GitHub repository
+
+This will sync assets from a GitHub repository to a directory (by default, the output directory is named after the repository name) in the EFS AccessPoint:
+
+```ts
+import { App, RemovalPolicy, Stack, aws_ec2 as ec2, aws_efs as efs } from 'aws-cdk-lib';
+import { GithubSyncSource, SyncedAccessPoint } from 'cdk-efs-assets';
+
+const app = new App();
+
+const env = {
+ region: process.env.CDK_DEFAULT_REGION ?? AWS_DEFAULT_REGION,
+ account: process.env.CDK_DEFAULT_ACCOUNT,
+};
+
+const stack = new Stack(app, 'testing-stack', { env });
+
+const vpc = ec2.Vpc.fromLookup(stack, 'Vpc', { isDefault: true })
+
+const fileSystem = new efs.FileSystem(stack, 'Filesystem', {
+ vpc,
+ removalPolicy: RemovalPolicy.DESTROY,
+})
+
+const efsAccessPoint = new SyncedAccessPoint(stack, 'GithubAccessPoint', {
+ vpc,
+ fileSystem,
+ path: '/demo-github',
+ createAcl: {
+ ownerGid: '1001',
+ ownerUid: '1001',
+ permissions: '0755',
+ },
+ posixUser: {
+ uid: '1001',
+ gid: '1001',
+ },
+ syncSource: new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/pahud/cdk-efs-assets.git',
+ })
+});
+```
+
+### GitHub private repository support
+
+To clone a private GitHub repository, generate a GitHub **personal access token (PAT)** and store the token in an **AWS Secrets Manager** secret.
+
+For example, if your PAT is stored in AWS Secrets Manager with the secret ID `github` and the key `oauth_token`, you can specify the `secret` property as in the sample below. Under the covers, the Lambda function formats the full GitHub repository URI with your **PAT** and can then `git clone` the private repository to the EFS file system.
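
As a rough sketch of that URI-formatting step (a hypothetical illustration only; the actual handler may format the URI differently, and `cloneUriWithPat` is not a real library function):

```ts
// Hypothetical illustration of embedding a PAT into an HTTPS clone URI;
// the library's actual sync handler may do this differently.
function cloneUriWithPat(repository: string, token: string): string {
  return repository.replace('https://', `https://${token}@`);
}

console.log(cloneUriWithPat('https://github.com/pahud/private-repo.git', 'MYOAUTHTOKEN'));
// 'https://MYOAUTHTOKEN@github.com/pahud/private-repo.git'
```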
+
+Store your PAT into the AWS Secrets Manager with AWS CLI:
+
+```sh
+aws secretsmanager create-secret \
+--name github \
+--secret-string '{"oauth_token":"MYOAUTHTOKEN"}'
+```
+
+Configure the `secret` property to allow the Lambda function to retrieve the **PAT** from the secret:
+
+```ts
+new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/username/repo.git',
+ secret: {
+ id: 'github',
+ key: 'oauth_token',
+ },
+})
+```
+
+## How to use SyncedAccessPoint initialized with files provisioned from a zip file stored in S3
+
+This will sync assets from a zip file stored in an S3 bucket to a directory (by default, named after the zip file) in the EFS AccessPoint:
+
+```ts
+import { App, RemovalPolicy, Stack, aws_ec2 as ec2, aws_efs as efs } from 'aws-cdk-lib';
+import { Bucket } from 'aws-cdk-lib/aws-s3';
+import { S3ArchiveSyncSource, SyncedAccessPoint } from 'cdk-efs-assets';
+
+const app = new App();
+
+const env = {
+ region: process.env.CDK_DEFAULT_REGION ?? AWS_DEFAULT_REGION,
+ account: process.env.CDK_DEFAULT_ACCOUNT,
+};
+
+const stack = new Stack(app, 'testing-stack', { env });
+
+const vpc = ec2.Vpc.fromLookup(stack, 'Vpc', { isDefault: true })
+
+const fileSystem = new efs.FileSystem(stack, 'Filesystem', {
+ vpc,
+ removalPolicy: RemovalPolicy.DESTROY,
+})
+
+const bucket = Bucket.fromBucketName(stack, 'Bucket', 'demo-bucket');
+
+const efsAccessPoint = new SyncedAccessPoint(stack, 'EfsAccessPoint', {
+ vpc,
+ fileSystem,
+ path: '/demo-s3',
+ createAcl: {
+ ownerGid: '1001',
+ ownerUid: '1001',
+ permissions: '0755',
+ },
+ posixUser: {
+ uid: '1001',
+ gid: '1001',
+ },
+ syncSource: new S3ArchiveSyncSource({
+ vpc,
+ bucket,
+ zipFilePath: 'folder/foo.zip',
+ }),
+});
+```
+
+### syncOnUpdate
+
+When the `syncOnUpdate` property is set to `true` (the default), the specified zip file path is monitored, and the data is re-synced to EFS whenever a new object is uploaded to that path. Note that to use this functionality, you must have a CloudTrail trail in your account that captures the desired S3 write data event.
+
+This feature is only available with the `LAMBDA` sync engine.
+
+*WARNING*: The contents of the extraction directory in the access point will be destroyed before extracting the zip file.
+
+# `StatefulFargateNginx`
+
+This library comes with the `StatefulFargateNginx` construct, which allows you to build an Amazon EFS-backed stateful
+AWS Fargate service with its document root seeded from any GitHub repository.
+
+See this [tweet](https://twitter.com/pahudnet/status/1367792169063354371) for the demo.
+
+Sample:
+
+```ts
+new StatefulFargateNginx(this, 'NyanCat', {
+ vpc,
+ github: 'https://github.com/cristurm/nyan-cat.git',
+});
+```
+
+
+%package -n python3-cdk-efs-assets
+Summary: Amazon EFS assets from GitHub repositories or S3 buckets
+Provides: python-cdk-efs-assets
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-cdk-efs-assets
+[![NPM version](https://badge.fury.io/js/cdk-efs-assets.svg)](https://badge.fury.io/js/cdk-efs-assets)
+[![PyPI version](https://badge.fury.io/py/cdk-efs-assets.svg)](https://badge.fury.io/py/cdk-efs-assets)
+![Release](https://github.com/pahud/cdk-efs-assets/workflows/Release/badge.svg)
+
+# cdk-efs-assets
+
+CDK construct library to populate Amazon EFS assets from GitHub or S3. If the source is S3, the construct can also optionally update the contents in EFS when a new zip file is uploaded to S3.
+
+## Install
+
+TypeScript/JavaScript:
+
+```bash
+npm i cdk-efs-assets
+```
+
+## SyncedAccessPoint
+
+The main construct that is used to provide this EFS sync functionality is `SyncedAccessPoint`. This extends the standard EFS `AccessPoint` construct, and takes an additional `SyncSource` constructor property which defines the source to sync assets from. The `SyncedAccessPoint` instance can be used anywhere an `AccessPoint` can be used. For example, to specify a volume in a Task Definition:
+
+```ts
+const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDefinition', {
+ ...
+ volumes: [
+ {
+ name: 'efs-storage',
+ efsVolumeConfiguration: {
+ fileSystemId: sharedFileSystem.fileSystemId,
+ transitEncryption: 'ENABLED',
+ authorizationConfig: {
+ accessPointId: syncedAccessPoint.accessPointId
+ }
+ }
+ },
+ ]
+});
+```
+
+## Sync Engine
+
+This library supports both `AWS Fargate` and `AWS Lambda` as the sync engine. Because AWS Lambda currently has a known issue with Amazon EFS ([#100](https://github.com/pahud/cdk-efs-assets/issues/100)), the default sync engine is `AWS Fargate`. You can opt in to AWS Lambda with the `engine` construct property of `SyncedAccessPoint`.
+
+## Sync Source
+
+Use the `GithubSyncSource` and `S3ArchiveSyncSource` construct classes to define a `syncSource` backed by a GitHub repository or an Amazon S3 bucket.
+
+To define a public GitHub repository as the `syncSource`:
+
+```ts
+new SyncedAccessPoint(stack, 'EfsAccessPoint', {
+ ...
+ syncSource: new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/pahud/cdk-efs-assets.git',
+ }),
+});
+```
+
+To define a private GitHub repository as the `syncSource`:
+
+```ts
+new SyncedAccessPoint(stack, 'EfsAccessPoint', {
+ ...
+ syncSource: new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/pahud/private-repo.git',
+ secret: {
+ id: 'github',
+ key: 'oauth_token',
+ },
+ }),
+});
+```
+
+### `syncDirectoryPath`
+
+By default, the synced EFS assets are placed into a directory whose name is derived from the sync source. The GitHub source places the copied files into a directory named after the repository (for a repository specified as 'https://github.com/pahud/cdk-efs-assets.git', the directory name would be 'cdk-efs-assets'), while the S3 archive source places them into a directory named after the zip file (for a zip file named 'assets.zip', the directory name would be 'assets').
+
+If you wish to override this default behavior, specify a value for the `syncDirectoryPath` property that is passed into the `SyncSource` call.
+
+If you are using the `AccessPoint` in an ECS/Fargate Task Definition, you will probably want to override `syncDirectoryPath` to '/'. This places the file contents in the root directory of the Access Point, because when you create a volume that references an EFS Access Point, the task definition configuration does not allow any path other than the root directory.
+
+## How to use SyncedAccessPoint initialized with files provisioned from a GitHub repository
+
+This will sync assets from a GitHub repository to a directory (by default, the output directory is named after the repository name) in the EFS AccessPoint:
+
+```ts
+import { App, RemovalPolicy, Stack, aws_ec2 as ec2, aws_efs as efs } from 'aws-cdk-lib';
+import { GithubSyncSource, SyncedAccessPoint } from 'cdk-efs-assets';
+
+const app = new App();
+
+const env = {
+ region: process.env.CDK_DEFAULT_REGION ?? AWS_DEFAULT_REGION,
+ account: process.env.CDK_DEFAULT_ACCOUNT,
+};
+
+const stack = new Stack(app, 'testing-stack', { env });
+
+const vpc = ec2.Vpc.fromLookup(stack, 'Vpc', { isDefault: true })
+
+const fileSystem = new efs.FileSystem(stack, 'Filesystem', {
+ vpc,
+ removalPolicy: RemovalPolicy.DESTROY,
+})
+
+const efsAccessPoint = new SyncedAccessPoint(stack, 'GithubAccessPoint', {
+ vpc,
+ fileSystem,
+ path: '/demo-github',
+ createAcl: {
+ ownerGid: '1001',
+ ownerUid: '1001',
+ permissions: '0755',
+ },
+ posixUser: {
+ uid: '1001',
+ gid: '1001',
+ },
+ syncSource: new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/pahud/cdk-efs-assets.git',
+ })
+});
+```
+
+### GitHub private repository support
+
+To clone a private GitHub repository, generate a GitHub **personal access token (PAT)** and store the token in an **AWS Secrets Manager** secret.
+
+For example, if your PAT is stored in AWS Secrets Manager with the secret ID `github` and the key `oauth_token`, you can specify the `secret` property as in the sample below. Under the covers, the Lambda function formats the full GitHub repository URI with your **PAT** and can then `git clone` the private repository to the EFS file system.
+
+Store your PAT into the AWS Secrets Manager with AWS CLI:
+
+```sh
+aws secretsmanager create-secret \
+--name github \
+--secret-string '{"oauth_token":"MYOAUTHTOKEN"}'
+```
+
+Configure the `secret` property to allow the Lambda function to retrieve the **PAT** from the secret:
+
+```ts
+new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/username/repo.git',
+ secret: {
+ id: 'github',
+ key: 'oauth_token',
+ },
+})
+```
+
+## How to use SyncedAccessPoint initialized with files provisioned from a zip file stored in S3
+
+This will sync assets from a zip file stored in an S3 bucket to a directory (by default, named after the zip file) in the EFS AccessPoint:
+
+```ts
+import { App, RemovalPolicy, Stack, aws_ec2 as ec2, aws_efs as efs } from 'aws-cdk-lib';
+import { Bucket } from 'aws-cdk-lib/aws-s3';
+import { S3ArchiveSyncSource, SyncedAccessPoint } from 'cdk-efs-assets';
+
+const app = new App();
+
+const env = {
+ region: process.env.CDK_DEFAULT_REGION ?? AWS_DEFAULT_REGION,
+ account: process.env.CDK_DEFAULT_ACCOUNT,
+};
+
+const stack = new Stack(app, 'testing-stack', { env });
+
+const vpc = ec2.Vpc.fromLookup(stack, 'Vpc', { isDefault: true })
+
+const fileSystem = new efs.FileSystem(stack, 'Filesystem', {
+ vpc,
+ removalPolicy: RemovalPolicy.DESTROY,
+})
+
+const bucket = Bucket.fromBucketName(stack, 'Bucket', 'demo-bucket');
+
+const efsAccessPoint = new SyncedAccessPoint(stack, 'EfsAccessPoint', {
+ vpc,
+ fileSystem,
+ path: '/demo-s3',
+ createAcl: {
+ ownerGid: '1001',
+ ownerUid: '1001',
+ permissions: '0755',
+ },
+ posixUser: {
+ uid: '1001',
+ gid: '1001',
+ },
+ syncSource: new S3ArchiveSyncSource({
+ vpc,
+ bucket,
+ zipFilePath: 'folder/foo.zip',
+ }),
+});
+```
+
+### syncOnUpdate
+
+When the `syncOnUpdate` property is set to `true` (the default), the specified zip file path is monitored, and the data is re-synced to EFS whenever a new object is uploaded to that path. Note that to use this functionality, you must have a CloudTrail trail in your account that captures the desired S3 write data event.
+
+This feature is only available with the `LAMBDA` sync engine.
+
+*WARNING*: The contents of the extraction directory in the access point will be destroyed before extracting the zip file.
+
+# `StatefulFargateNginx`
+
+This library comes with the `StatefulFargateNginx` construct, which allows you to build an Amazon EFS-backed stateful
+AWS Fargate service with its document root seeded from any GitHub repository.
+
+See this [tweet](https://twitter.com/pahudnet/status/1367792169063354371) for the demo.
+
+Sample:
+
+```ts
+new StatefulFargateNginx(this, 'NyanCat', {
+ vpc,
+ github: 'https://github.com/cristurm/nyan-cat.git',
+});
+```
+
+
+%package help
+Summary: Development documents and examples for cdk-efs-assets
+Provides: python3-cdk-efs-assets-doc
+%description help
+[![NPM version](https://badge.fury.io/js/cdk-efs-assets.svg)](https://badge.fury.io/js/cdk-efs-assets)
+[![PyPI version](https://badge.fury.io/py/cdk-efs-assets.svg)](https://badge.fury.io/py/cdk-efs-assets)
+![Release](https://github.com/pahud/cdk-efs-assets/workflows/Release/badge.svg)
+
+# cdk-efs-assets
+
+CDK construct library to populate Amazon EFS assets from GitHub or S3. If the source is S3, the construct can also optionally update the contents in EFS when a new zip file is uploaded to S3.
+
+## Install
+
+TypeScript/JavaScript:
+
+```bash
+npm i cdk-efs-assets
+```
+
+## SyncedAccessPoint
+
+The main construct that is used to provide this EFS sync functionality is `SyncedAccessPoint`. This extends the standard EFS `AccessPoint` construct, and takes an additional `SyncSource` constructor property which defines the source to sync assets from. The `SyncedAccessPoint` instance can be used anywhere an `AccessPoint` can be used. For example, to specify a volume in a Task Definition:
+
+```ts
+const taskDefinition = new ecs.FargateTaskDefinition(this, 'TaskDefinition', {
+ ...
+ volumes: [
+ {
+ name: 'efs-storage',
+ efsVolumeConfiguration: {
+ fileSystemId: sharedFileSystem.fileSystemId,
+ transitEncryption: 'ENABLED',
+ authorizationConfig: {
+ accessPointId: syncedAccessPoint.accessPointId
+ }
+ }
+ },
+ ]
+});
+```
+
+## Sync Engine
+
+This library supports both `AWS Fargate` and `AWS Lambda` as the sync engine. Because AWS Lambda currently has a known issue with Amazon EFS ([#100](https://github.com/pahud/cdk-efs-assets/issues/100)), the default sync engine is `AWS Fargate`. You can opt in to AWS Lambda with the `engine` construct property of `SyncedAccessPoint`.
+
+## Sync Source
+
+Use the `GithubSyncSource` and `S3ArchiveSyncSource` construct classes to define a `syncSource` backed by a GitHub repository or an Amazon S3 bucket.
+
+To define a public GitHub repository as the `syncSource`:
+
+```ts
+new SyncedAccessPoint(stack, 'EfsAccessPoint', {
+ ...
+ syncSource: new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/pahud/cdk-efs-assets.git',
+ }),
+});
+```
+
+To define a private GitHub repository as the `syncSource`:
+
+```ts
+new SyncedAccessPoint(stack, 'EfsAccessPoint', {
+ ...
+ syncSource: new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/pahud/private-repo.git',
+ secret: {
+ id: 'github',
+ key: 'oauth_token',
+ },
+ }),
+});
+```
+
+### `syncDirectoryPath`
+
+By default, the synced EFS assets are placed into a directory whose name is derived from the sync source. The GitHub source places the copied files into a directory named after the repository (for a repository specified as 'https://github.com/pahud/cdk-efs-assets.git', the directory name would be 'cdk-efs-assets'), while the S3 archive source places them into a directory named after the zip file (for a zip file named 'assets.zip', the directory name would be 'assets').
+
+If you wish to override this default behavior, specify a value for the `syncDirectoryPath` property that is passed into the `SyncSource` call.
+
+If you are using the `AccessPoint` in an ECS/Fargate Task Definition, you will probably want to override `syncDirectoryPath` to '/'. This places the file contents in the root directory of the Access Point, because when you create a volume that references an EFS Access Point, the task definition configuration does not allow any path other than the root directory.
+
+## How to use SyncedAccessPoint initialized with files provisioned from a GitHub repository
+
+This will sync assets from a GitHub repository to a directory (by default, the output directory is named after the repository name) in the EFS AccessPoint:
+
+```ts
+import { App, RemovalPolicy, Stack, aws_ec2 as ec2, aws_efs as efs } from 'aws-cdk-lib';
+import { GithubSyncSource, SyncedAccessPoint } from 'cdk-efs-assets';
+
+const app = new App();
+
+const env = {
+ region: process.env.CDK_DEFAULT_REGION ?? AWS_DEFAULT_REGION,
+ account: process.env.CDK_DEFAULT_ACCOUNT,
+};
+
+const stack = new Stack(app, 'testing-stack', { env });
+
+const vpc = ec2.Vpc.fromLookup(stack, 'Vpc', { isDefault: true })
+
+const fileSystem = new efs.FileSystem(stack, 'Filesystem', {
+ vpc,
+ removalPolicy: RemovalPolicy.DESTROY,
+})
+
+const efsAccessPoint = new SyncedAccessPoint(stack, 'GithubAccessPoint', {
+ vpc,
+ fileSystem,
+ path: '/demo-github',
+ createAcl: {
+ ownerGid: '1001',
+ ownerUid: '1001',
+ permissions: '0755',
+ },
+ posixUser: {
+ uid: '1001',
+ gid: '1001',
+ },
+ syncSource: new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/pahud/cdk-efs-assets.git',
+ })
+});
+```
+
+### GitHub private repository support
+
+To clone a private GitHub repository, generate a GitHub **personal access token (PAT)** and store the token in an **AWS Secrets Manager** secret.
+
+For example, if your PAT is stored in AWS Secrets Manager with the secret ID `github` and the key `oauth_token`, you can specify the `secret` property as in the sample below. Under the covers, the Lambda function formats the full GitHub repository URI with your **PAT** and can then `git clone` the private repository to the EFS file system.
+
+Store your PAT into the AWS Secrets Manager with AWS CLI:
+
+```sh
+aws secretsmanager create-secret \
+--name github \
+--secret-string '{"oauth_token":"MYOAUTHTOKEN"}'
+```
+
+Configure the `secret` property to allow the Lambda function to retrieve the **PAT** from the secret:
+
+```ts
+new GithubSyncSource({
+ vpc,
+ repository: 'https://github.com/username/repo.git',
+ secret: {
+ id: 'github',
+ key: 'oauth_token',
+ },
+})
+```
+
+## How to use SyncedAccessPoint initialized with files provisioned from a zip file stored in S3
+
+This will sync assets from a zip file stored in an S3 bucket to a directory (by default, named after the zip file) in the EFS AccessPoint:
+
+```ts
+import { App, RemovalPolicy, Stack, aws_ec2 as ec2, aws_efs as efs } from 'aws-cdk-lib';
+import { Bucket } from 'aws-cdk-lib/aws-s3';
+import { S3ArchiveSyncSource, SyncedAccessPoint } from 'cdk-efs-assets';
+
+const app = new App();
+
+const env = {
+ region: process.env.CDK_DEFAULT_REGION ?? AWS_DEFAULT_REGION,
+ account: process.env.CDK_DEFAULT_ACCOUNT,
+};
+
+const stack = new Stack(app, 'testing-stack', { env });
+
+const vpc = ec2.Vpc.fromLookup(stack, 'Vpc', { isDefault: true })
+
+const fileSystem = new efs.FileSystem(stack, 'Filesystem', {
+ vpc,
+ removalPolicy: RemovalPolicy.DESTROY,
+})
+
+const bucket = Bucket.fromBucketName(stack, 'Bucket', 'demo-bucket');
+
+const efsAccessPoint = new SyncedAccessPoint(stack, 'EfsAccessPoint', {
+ vpc,
+ fileSystem,
+ path: '/demo-s3',
+ createAcl: {
+ ownerGid: '1001',
+ ownerUid: '1001',
+ permissions: '0755',
+ },
+ posixUser: {
+ uid: '1001',
+ gid: '1001',
+ },
+ syncSource: new S3ArchiveSyncSource({
+ vpc,
+ bucket,
+ zipFilePath: 'folder/foo.zip',
+ }),
+});
+```
+
+### syncOnUpdate
+
+When the `syncOnUpdate` property is set to `true` (the default), the specified zip file path is monitored, and the data is re-synced to EFS whenever a new object is uploaded to that path. Note that to use this functionality, you must have a CloudTrail trail in your account that captures the desired S3 write data event.
+
+This feature is only available with the `LAMBDA` sync engine.
+
+*WARNING*: The contents of the extraction directory in the access point will be destroyed before extracting the zip file.
+
+# `StatefulFargateNginx`
+
+This library comes with the `StatefulFargateNginx` construct, which allows you to build an Amazon EFS-backed stateful
+AWS Fargate service with its document root seeded from any GitHub repository.
+
+See this [tweet](https://twitter.com/pahudnet/status/1367792169063354371) for the demo.
+
+Sample:
+
+```ts
+new StatefulFargateNginx(this, 'NyanCat', {
+ vpc,
+ github: 'https://github.com/cristurm/nyan-cat.git',
+});
+```
+
+
+%prep
+%autosetup -n cdk-efs-assets-0.3.95
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-cdk-efs-assets -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 0.3.95-1
+- Package Spec generated