authorCoprDistGit <infra@openeuler.org>2023-05-05 15:03:29 +0000
committerCoprDistGit <infra@openeuler.org>2023-05-05 15:03:29 +0000
commita34efbe1bab57a98df08845a03c2d5bd12fe22bd (patch)
tree56377bb13583c40db8b0d6ecef57bdb13ba875cf
parentae5facb1d3ab4b407e26cace3ca0a6f421951ebc (diff)
automatic import of python-kubric-nightlyopeneuler20.03
-rw-r--r--.gitignore1
-rw-r--r--python-kubric-nightly.spec349
-rw-r--r--sources1
3 files changed, 351 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..c358dee 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/kubric-nightly-2023.5.5.tar.gz
diff --git a/python-kubric-nightly.spec b/python-kubric-nightly.spec
new file mode 100644
index 0000000..8639430
--- /dev/null
+++ b/python-kubric-nightly.spec
@@ -0,0 +1,349 @@
+%global _empty_manifest_terminate_build 0
+Name: python-kubric-nightly
+Version: 2023.5.5
+Release: 1
+Summary: A data generation pipeline for creating semi-realistic synthetic multi-object videos with rich annotations such as instance segmentation, depth maps, and optical flow
+License: Apache-2.0
+URL: https://github.com/google-research/kubric
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/ab/21/dd6ef91b9ac33da89c980bab1286ebd9debd8a308d7464b103dbee47cb40/kubric-nightly-2023.5.5.tar.gz
+BuildArch: noarch
+
+Requires: python3-apache-beam[gcp]
+Requires: python3-bidict
+Requires: python3-dataclasses
+Requires: python3-etils[epath_no_tf]
+Requires: python3-cloudml-hypertune
+Requires: python3-google-cloud-storage
+Requires: python3-imageio
+Requires: python3-munch
+Requires: python3-numpy
+Requires: python3-pandas
+Requires: python3-pypng
+Requires: python3-pyquaternion
+Requires: python3-Levenshtein
+Requires: python3-scikit-learn
+Requires: python3-singledispatchmethod
+Requires: python3-tensorflow
+Requires: python3-tensorflow-datasets
+Requires: python3-traitlets
+Requires: python3-trimesh
+
+%description
+# Kubric
+
+[![Blender](https://github.com/google-research/kubric/actions/workflows/blender.yml/badge.svg?branch=main)](https://github.com/google-research/kubric/actions/workflows/blender.yml)
+[![Kubruntu](https://github.com/google-research/kubric/actions/workflows/kubruntu.yml/badge.svg?branch=main)](https://github.com/google-research/kubric/actions/workflows/kubruntu.yml)
+[![Test](https://github.com/google-research/kubric/actions/workflows/test.yml/badge.svg?branch=main)](https://github.com/google-research/kubric/actions/workflows/test.yml)
+[![Coverage](https://badgen.net/codecov/c/github/google-research/kubric)](https://codecov.io/github/google-research/kubric)
+[![Docs](https://readthedocs.org/projects/kubric/badge/?version=latest)](https://kubric.readthedocs.io/en/latest/)
+
+A data generation pipeline for creating semi-realistic synthetic multi-object
+videos with rich annotations such as instance segmentation masks, depth maps,
+and optical flow.
+
+![](docs/images/teaser.gif)
+
+
+## Motivation and design
+We need better data for training and evaluating machine learning systems, especially in the context of unsupervised multi-object video understanding.
+Current systems succeed on [toy datasets](https://github.com/deepmind/multi_object_datasets), but fail on real-world data.
+Progress could be greatly accelerated if we had the ability to create suitable datasets of varying complexity on demand.
+Kubric is mainly built on top of pybullet (for physics simulation) and Blender (for rendering); however, the code is kept modular to potentially support different rendering backends.
+
+## Getting started
+For instructions, please refer to [https://kubric.readthedocs.io](https://kubric.readthedocs.io)
+
+Assuming you have Docker installed, generate the data above by executing:
+```
+git clone https://github.com/google-research/kubric.git
+cd kubric
+docker pull kubricdockerhub/kubruntu
+docker run --rm --interactive \
+ --user $(id -u):$(id -g) \
+ --volume "$(pwd):/kubric" \
+ kubricdockerhub/kubruntu \
+ /usr/bin/python3 examples/helloworld.py
+ls output
+```
+
+Kubric employs **Blender 2.93** (see [here](https://github.com/google-research/kubric/blob/01a08d274234f32f2adc4f7d5666b39490f953ad/docker/Blender.Dockerfile#L48)), so if you want to open the generated `*.blend` scene file for interactive inspection (i.e. without re-rendering the scene), make sure you have the matching Blender version installed.
+
+## Requirements
+- A pipeline for conveniently generating video data.
+- Physics simulation for automatically generating physical interactions between multiple objects.
+- Good control over the complexity of the generated data, so that we can evaluate individual aspects such as variability of objects and textures.
+- Realism: Ideally, the ability to span the entire complexity range from CLEVR all the way to real-world video such as YouTube-8M. This is clearly not feasible, but we would like to get as close as possible.
+- Access to rich ground truth information about the objects in a scene for the purpose of evaluation (e.g. object segmentations and properties)
+- Control the train/test split to evaluate compositionality and systematic generalization (for example on held-out combinations of features or objects)
+
+
+## Challenges and datasets
+Generally, we store datasets for the challenges in this [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/kubric-public).
+More specifically, these challenges are *dataset contributions* of the Kubric CVPR'22 paper:
+* [MOVi: Multi-Object Video](challenges/movi)
+* [Texture-Structure in NeRF](challenges/texture_structure_nerf)
+* [Optical Flow](challenges/optical_flow)
+* [Pre-training Visual Representations](challenges/pretraining_visual)
+* [Robust NeRF](challenges/robust_nerf)
+* [Multi-View Object Matting](challenges/multiview_matting)
+* [Complex BRDFs](challenges/complex_brdf)
+* [Single View Reconstruction](challenges/single_view_reconstruction)
+* [Video Based Reconstruction](challenges/video_based_reconstruction)
+* [Point Tracking](challenges/point_tracking)
+
+Pointers to additional datasets/workers:
+* [ToyBox (from Neural Semantic Fields)](https://nesf3d.github.io)
+* [MultiShapeNet (from Scene Representation Transformer)](https://srt-paper.github.io)
+* [SyntheticTrio (from Controllable Neural Radiance Fields)](https://github.com/kacperkan/conerf-kubric-dataset#readme)
+
+## BibTeX
+```
+@inproceedings{greff2021kubric,
+ title = {Kubric: a scalable dataset generator},
+ author = {Klaus Greff and Francois Belletti and Lucas Beyer and Carl Doersch and
+ Yilun Du and Daniel Duckworth and David J Fleet and Dan Gnanapragasam and
+ Florian Golemo and Charles Herrmann and Thomas Kipf and Abhijit Kundu and
+ Dmitry Lagun and Issam Laradji and Hsueh-Ti (Derek) Liu and Henning Meyer and
+ Yishu Miao and Derek Nowrouzezahrai and Cengiz Oztireli and Etienne Pot and
+ Noha Radwan and Daniel Rebain and Sara Sabour and Mehdi S. M. Sajjadi and Matan Sela and
+ Vincent Sitzmann and Austin Stone and Deqing Sun and Suhani Vora and Ziyu Wang and
+ Tianhao Wu and Kwang Moo Yi and Fangcheng Zhong and Andrea Tagliasacchi},
+ booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+ year = {2022},
+}
+```
+
+## Disclaimer
+This is not an official Google product.
+
+
+%package -n python3-kubric-nightly
+Summary: A data generation pipeline for creating semi-realistic synthetic multi-object videos with rich annotations such as instance segmentation, depth maps, and optical flow
+Provides: python-kubric-nightly
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-kubric-nightly
+# Kubric
+
+[![Blender](https://github.com/google-research/kubric/actions/workflows/blender.yml/badge.svg?branch=main)](https://github.com/google-research/kubric/actions/workflows/blender.yml)
+[![Kubruntu](https://github.com/google-research/kubric/actions/workflows/kubruntu.yml/badge.svg?branch=main)](https://github.com/google-research/kubric/actions/workflows/kubruntu.yml)
+[![Test](https://github.com/google-research/kubric/actions/workflows/test.yml/badge.svg?branch=main)](https://github.com/google-research/kubric/actions/workflows/test.yml)
+[![Coverage](https://badgen.net/codecov/c/github/google-research/kubric)](https://codecov.io/github/google-research/kubric)
+[![Docs](https://readthedocs.org/projects/kubric/badge/?version=latest)](https://kubric.readthedocs.io/en/latest/)
+
+A data generation pipeline for creating semi-realistic synthetic multi-object
+videos with rich annotations such as instance segmentation masks, depth maps,
+and optical flow.
+
+![](docs/images/teaser.gif)
+
+
+## Motivation and design
+We need better data for training and evaluating machine learning systems, especially in the context of unsupervised multi-object video understanding.
+Current systems succeed on [toy datasets](https://github.com/deepmind/multi_object_datasets), but fail on real-world data.
+Progress could be greatly accelerated if we had the ability to create suitable datasets of varying complexity on demand.
+Kubric is mainly built on top of pybullet (for physics simulation) and Blender (for rendering); however, the code is kept modular to potentially support different rendering backends.
+
+## Getting started
+For instructions, please refer to [https://kubric.readthedocs.io](https://kubric.readthedocs.io)
+
+Assuming you have Docker installed, generate the data above by executing:
+```
+git clone https://github.com/google-research/kubric.git
+cd kubric
+docker pull kubricdockerhub/kubruntu
+docker run --rm --interactive \
+ --user $(id -u):$(id -g) \
+ --volume "$(pwd):/kubric" \
+ kubricdockerhub/kubruntu \
+ /usr/bin/python3 examples/helloworld.py
+ls output
+```
+
+Kubric employs **Blender 2.93** (see [here](https://github.com/google-research/kubric/blob/01a08d274234f32f2adc4f7d5666b39490f953ad/docker/Blender.Dockerfile#L48)), so if you want to open the generated `*.blend` scene file for interactive inspection (i.e. without re-rendering the scene), make sure you have the matching Blender version installed.
+
+## Requirements
+- A pipeline for conveniently generating video data.
+- Physics simulation for automatically generating physical interactions between multiple objects.
+- Good control over the complexity of the generated data, so that we can evaluate individual aspects such as variability of objects and textures.
+- Realism: Ideally, the ability to span the entire complexity range from CLEVR all the way to real-world video such as YouTube-8M. This is clearly not feasible, but we would like to get as close as possible.
+- Access to rich ground truth information about the objects in a scene for the purpose of evaluation (e.g. object segmentations and properties)
+- Control the train/test split to evaluate compositionality and systematic generalization (for example on held-out combinations of features or objects)
+
+
+## Challenges and datasets
+Generally, we store datasets for the challenges in this [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/kubric-public).
+More specifically, these challenges are *dataset contributions* of the Kubric CVPR'22 paper:
+* [MOVi: Multi-Object Video](challenges/movi)
+* [Texture-Structure in NeRF](challenges/texture_structure_nerf)
+* [Optical Flow](challenges/optical_flow)
+* [Pre-training Visual Representations](challenges/pretraining_visual)
+* [Robust NeRF](challenges/robust_nerf)
+* [Multi-View Object Matting](challenges/multiview_matting)
+* [Complex BRDFs](challenges/complex_brdf)
+* [Single View Reconstruction](challenges/single_view_reconstruction)
+* [Video Based Reconstruction](challenges/video_based_reconstruction)
+* [Point Tracking](challenges/point_tracking)
+
+Pointers to additional datasets/workers:
+* [ToyBox (from Neural Semantic Fields)](https://nesf3d.github.io)
+* [MultiShapeNet (from Scene Representation Transformer)](https://srt-paper.github.io)
+* [SyntheticTrio (from Controllable Neural Radiance Fields)](https://github.com/kacperkan/conerf-kubric-dataset#readme)
+
+## BibTeX
+```
+@inproceedings{greff2021kubric,
+ title = {Kubric: a scalable dataset generator},
+ author = {Klaus Greff and Francois Belletti and Lucas Beyer and Carl Doersch and
+ Yilun Du and Daniel Duckworth and David J Fleet and Dan Gnanapragasam and
+ Florian Golemo and Charles Herrmann and Thomas Kipf and Abhijit Kundu and
+ Dmitry Lagun and Issam Laradji and Hsueh-Ti (Derek) Liu and Henning Meyer and
+ Yishu Miao and Derek Nowrouzezahrai and Cengiz Oztireli and Etienne Pot and
+ Noha Radwan and Daniel Rebain and Sara Sabour and Mehdi S. M. Sajjadi and Matan Sela and
+ Vincent Sitzmann and Austin Stone and Deqing Sun and Suhani Vora and Ziyu Wang and
+ Tianhao Wu and Kwang Moo Yi and Fangcheng Zhong and Andrea Tagliasacchi},
+ booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+ year = {2022},
+}
+```
+
+## Disclaimer
+This is not an official Google product.
+
+
+%package help
+Summary: Development documents and examples for kubric-nightly
+Provides: python3-kubric-nightly-doc
+%description help
+# Kubric
+
+[![Blender](https://github.com/google-research/kubric/actions/workflows/blender.yml/badge.svg?branch=main)](https://github.com/google-research/kubric/actions/workflows/blender.yml)
+[![Kubruntu](https://github.com/google-research/kubric/actions/workflows/kubruntu.yml/badge.svg?branch=main)](https://github.com/google-research/kubric/actions/workflows/kubruntu.yml)
+[![Test](https://github.com/google-research/kubric/actions/workflows/test.yml/badge.svg?branch=main)](https://github.com/google-research/kubric/actions/workflows/test.yml)
+[![Coverage](https://badgen.net/codecov/c/github/google-research/kubric)](https://codecov.io/github/google-research/kubric)
+[![Docs](https://readthedocs.org/projects/kubric/badge/?version=latest)](https://kubric.readthedocs.io/en/latest/)
+
+A data generation pipeline for creating semi-realistic synthetic multi-object
+videos with rich annotations such as instance segmentation masks, depth maps,
+and optical flow.
+
+![](docs/images/teaser.gif)
+
+
+## Motivation and design
+We need better data for training and evaluating machine learning systems, especially in the context of unsupervised multi-object video understanding.
+Current systems succeed on [toy datasets](https://github.com/deepmind/multi_object_datasets), but fail on real-world data.
+Progress could be greatly accelerated if we had the ability to create suitable datasets of varying complexity on demand.
+Kubric is mainly built on top of pybullet (for physics simulation) and Blender (for rendering); however, the code is kept modular to potentially support different rendering backends.
+
+## Getting started
+For instructions, please refer to [https://kubric.readthedocs.io](https://kubric.readthedocs.io)
+
+Assuming you have Docker installed, generate the data above by executing:
+```
+git clone https://github.com/google-research/kubric.git
+cd kubric
+docker pull kubricdockerhub/kubruntu
+docker run --rm --interactive \
+ --user $(id -u):$(id -g) \
+ --volume "$(pwd):/kubric" \
+ kubricdockerhub/kubruntu \
+ /usr/bin/python3 examples/helloworld.py
+ls output
+```
+
+Kubric employs **Blender 2.93** (see [here](https://github.com/google-research/kubric/blob/01a08d274234f32f2adc4f7d5666b39490f953ad/docker/Blender.Dockerfile#L48)), so if you want to open the generated `*.blend` scene file for interactive inspection (i.e. without re-rendering the scene), make sure you have the matching Blender version installed.
+
+## Requirements
+- A pipeline for conveniently generating video data.
+- Physics simulation for automatically generating physical interactions between multiple objects.
+- Good control over the complexity of the generated data, so that we can evaluate individual aspects such as variability of objects and textures.
+- Realism: Ideally, the ability to span the entire complexity range from CLEVR all the way to real-world video such as YouTube-8M. This is clearly not feasible, but we would like to get as close as possible.
+- Access to rich ground truth information about the objects in a scene for the purpose of evaluation (e.g. object segmentations and properties)
+- Control the train/test split to evaluate compositionality and systematic generalization (for example on held-out combinations of features or objects)
+
+
+## Challenges and datasets
+Generally, we store datasets for the challenges in this [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/kubric-public).
+More specifically, these challenges are *dataset contributions* of the Kubric CVPR'22 paper:
+* [MOVi: Multi-Object Video](challenges/movi)
+* [Texture-Structure in NeRF](challenges/texture_structure_nerf)
+* [Optical Flow](challenges/optical_flow)
+* [Pre-training Visual Representations](challenges/pretraining_visual)
+* [Robust NeRF](challenges/robust_nerf)
+* [Multi-View Object Matting](challenges/multiview_matting)
+* [Complex BRDFs](challenges/complex_brdf)
+* [Single View Reconstruction](challenges/single_view_reconstruction)
+* [Video Based Reconstruction](challenges/video_based_reconstruction)
+* [Point Tracking](challenges/point_tracking)
+
+Pointers to additional datasets/workers:
+* [ToyBox (from Neural Semantic Fields)](https://nesf3d.github.io)
+* [MultiShapeNet (from Scene Representation Transformer)](https://srt-paper.github.io)
+* [SyntheticTrio (from Controllable Neural Radiance Fields)](https://github.com/kacperkan/conerf-kubric-dataset#readme)
+
+## BibTeX
+```
+@inproceedings{greff2021kubric,
+ title = {Kubric: a scalable dataset generator},
+ author = {Klaus Greff and Francois Belletti and Lucas Beyer and Carl Doersch and
+ Yilun Du and Daniel Duckworth and David J Fleet and Dan Gnanapragasam and
+ Florian Golemo and Charles Herrmann and Thomas Kipf and Abhijit Kundu and
+ Dmitry Lagun and Issam Laradji and Hsueh-Ti (Derek) Liu and Henning Meyer and
+ Yishu Miao and Derek Nowrouzezahrai and Cengiz Oztireli and Etienne Pot and
+ Noha Radwan and Daniel Rebain and Sara Sabour and Mehdi S. M. Sajjadi and Matan Sela and
+ Vincent Sitzmann and Austin Stone and Deqing Sun and Suhani Vora and Ziyu Wang and
+ Tianhao Wu and Kwang Moo Yi and Fangcheng Zhong and Andrea Tagliasacchi},
+ booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+ year = {2022},
+}
+```
+
+## Disclaimer
+This is not an official Google product.
+
+
+%prep
+%autosetup -n kubric-nightly-2023.5.5
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-kubric-nightly -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Fri May 05 2023 Python_Bot <Python_Bot@openeuler.org> - 2023.5.5-1
+- Package Spec generated
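The `%install` scriptlet above builds `filelist.lst` by walking the buildroot with GNU `find -printf "/%h/%f\n"`. A minimal stand-alone sketch of that step, using a hypothetical throwaway buildroot (the paths below are illustrative, not the package's real payload):

```shell
# Fabricate a tiny stand-in buildroot (hypothetical paths, not the real package contents).
buildroot=$(mktemp -d)
mkdir -p "$buildroot/usr/lib/python3/site-packages/kubric"
touch "$buildroot/usr/lib/python3/site-packages/kubric/__init__.py"
mkdir -p "$buildroot/usr/bin"
touch "$buildroot/usr/bin/kubric-demo"

# Same pattern as the spec: %h is the file's directory, %f its basename;
# the leading "/" turns the buildroot-relative paths into the absolute
# install paths that the %files section expects.
cd "$buildroot"
for dir in usr/lib usr/lib64 usr/bin usr/sbin; do
  if [ -d "$dir" ]; then
    find "$dir" -type f -printf "/%h/%f\n" >> filelist.lst
  fi
done
cat filelist.lst
```

Running this prints the two fabricated entries, which is the kind of list that `%files -n python3-kubric-nightly -f filelist.lst` then consumes.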
diff --git a/sources b/sources
new file mode 100644
index 0000000..552be36
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+01c16d78066ceed3222a0eaafa4d4d94 kubric-nightly-2023.5.5.tar.gz
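The `sources` file added above pairs an MD5 digest with the tarball name. A hedged sketch of how such a line can be checked before building, using a fabricated stand-in file rather than the real `kubric-nightly-2023.5.5.tar.gz` download:

```shell
# Work in a scratch directory with a fabricated tarball stand-in
# (its contents are arbitrary; only the digest bookkeeping matters here).
workdir=$(mktemp -d)
cd "$workdir"
printf 'stand-in payload\n' > kubric-nightly-2023.5.5.tar.gz

# Record "<md5> <filename>" in the same shape as the dist-git sources file.
md5sum kubric-nightly-2023.5.5.tar.gz | awk '{print $1, $2}' > sources

# Verification: recompute the digest and compare it to the stored one.
expected=$(awk '{print $1}' sources)
file=$(awk '{print $2}' sources)
actual=$(md5sum "$file" | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
  echo "checksum OK: $file"
else
  echo "checksum MISMATCH: $file" >&2
  exit 1
fi
```

The same comparison against the digest recorded in `sources` catches a corrupted or substituted download before `%prep` unpacks it.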