path: root/python-labelme.spec
authorCoprDistGit <infra@openeuler.org>2023-04-11 10:50:58 +0000
committerCoprDistGit <infra@openeuler.org>2023-04-11 10:50:58 +0000
commitc39966f82fd80d2960a1cc1bcde9a9a8aa168549 (patch)
tree7ea225bc6105b297cf657fc84ff9dc495c01189e /python-labelme.spec
parent4011e4822c4b290e497adaf21ff29a409d1c95fa (diff)
automatic import of python-labelme
Diffstat (limited to 'python-labelme.spec')
-rw-r--r--  python-labelme.spec  726
1 file changed, 726 insertions, 0 deletions
diff --git a/python-labelme.spec b/python-labelme.spec
new file mode 100644
index 0000000..6cea43d
--- /dev/null
+++ b/python-labelme.spec
@@ -0,0 +1,726 @@
+%global _empty_manifest_terminate_build 0
+Name: python-labelme
+Version: 5.2.0
+Release: 1
+Summary: Image Polygonal Annotation with Python
+License: GPLv3
+URL: https://github.com/wkentaro/labelme
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/cf/2b/4a06be33ed86cc5227945b8dabb7d8f9d9c6e854f0de966a601738ceda69/labelme-5.2.0.tar.gz
+BuildArch: noarch
+
+
+%description
+<h1 align="center">
+ <img src="https://github.com/wkentaro/labelme/blob/main/labelme/icons/icon.png?raw=true"><br/>labelme
+</h1>
+
+<h4 align="center">
+ Image Polygonal Annotation with Python
+</h4>
+
+<div align="center">
+ <a href="https://pypi.python.org/pypi/labelme"><img src="https://img.shields.io/pypi/v/labelme.svg"></a>
+ <a href="https://pypi.org/project/labelme"><img src="https://img.shields.io/pypi/pyversions/labelme.svg"></a>
+ <a href="https://github.com/wkentaro/labelme/actions"><img src="https://github.com/wkentaro/labelme/workflows/ci/badge.svg?branch=main&event=push"></a>
+</div>
+
+<div align="center">
+ <a href="https://github.com/wkentaro/labelme/blob/main/#installation?raw=true"><b>Installation</b></a> |
+ <a href="https://github.com/wkentaro/labelme/blob/main/#usage"><b>Usage</b></a> |
+ <a href="https://github.com/wkentaro/labelme/tree/main/examples/tutorial#tutorial-single-image-example"><b>Tutorial</b></a> |
+ <a href="https://github.com/wkentaro/labelme/tree/main/examples"><b>Examples</b></a> |
+ <a href="https://github.com/wkentaro/labelme/discussions"><b>Discussions</b></a> |
+ <a href="https://www.youtube.com/playlist?list=PLI6LvFw0iflh3o33YYnVIfOpaO0hc5Dzw"><b>Youtube FAQ</b></a>
+</div>
+
+<br/>
+
+<div align="center">
+ <img src="https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/.readme/annotation.jpg?raw=true" width="70%">
+</div>
+
+## Description
+
+Labelme is a graphical image annotation tool inspired by <http://labelme.csail.mit.edu>.
+It is written in Python and uses Qt for its graphical interface.
+
+<img src="https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/JPEGImages/2011_000006.jpg?raw=true" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClassPNG/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClassVisualization/2011_000006.jpg" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObjectPNG/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObjectVisualization/2011_000006.jpg" width="19%" />
+<i>VOC dataset example of instance segmentation.</i>
+
+<img src="https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation/.readme/annotation.jpg?raw=true" width="30%" /> <img src="examples/bbox_detection/.readme/annotation.jpg" width="30%" /> <img src="examples/classification/.readme/annotation_cat.jpg" width="35%" />
+<i>Other examples (semantic segmentation, bbox detection, and classification).</i>
+
+<img src="https://user-images.githubusercontent.com/4310419/47907116-85667800-de82-11e8-83d0-b9f4eb33268f.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/4310419/47922172-57972880-deae-11e8-84f8-e4324a7c856a.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/14256482/46932075-92145f00-d080-11e8-8d09-2162070ae57c.png" width="32%" />
+<i>Various primitives (polygon, rectangle, circle, line, and point).</i>
+
+
+## Features
+
+- [x] Image annotation for polygon, rectangle, circle, line and point. ([tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial))
+- [x] Image flag annotation for classification and cleaning. ([#166](https://github.com/wkentaro/labelme/pull/166))
+- [x] Video annotation. ([video annotation](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation?raw=true))
+- [x] GUI customization (predefined labels / flags, auto-saving, label validation, etc). ([#144](https://github.com/wkentaro/labelme/pull/144))
+- [x] Exporting VOC-format dataset for semantic/instance segmentation. ([semantic segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true), [instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true))
+- [x] Exporting COCO-format dataset for instance segmentation. ([instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true))
+
+
+
+## Requirements
+
+- Ubuntu / macOS / Windows
+- Python3
+- [PyQt5 / PySide2](http://www.riverbankcomputing.co.uk/software/pyqt/intro)
+
+
+## Installation
+
+There are several installation options:
+
+- Platform-agnostic installation: [Anaconda](https://github.com/wkentaro/labelme/blob/main/#anaconda)
+- Platform-specific installation: [Ubuntu](https://github.com/wkentaro/labelme/blob/main/#ubuntu), [macOS](https://github.com/wkentaro/labelme/blob/main/#macos), [Windows](https://github.com/wkentaro/labelme/blob/main/#windows)
+- Pre-built binaries from [the release section](https://github.com/wkentaro/labelme/releases)
+
+### Anaconda
+
+You need to install [Anaconda](https://www.continuum.io/downloads) first, then run the commands below:
+
+```bash
+# python3
+conda create --name=labelme python=3
+source activate labelme
+# conda install -c conda-forge pyside2
+# conda install pyqt
+# pip install pyqt5 # pyqt5 can be installed via pip on python3
+pip install labelme
+# or you can install everything by conda command
+# conda install labelme -c conda-forge
+```
+
+### Ubuntu
+
+```bash
+sudo apt-get install labelme
+
+# or
+sudo pip3 install labelme
+
+# or install standalone executable from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+### macOS
+
+```bash
+brew install pyqt # maybe pyqt5
+pip install labelme
+
+# or
+brew install wkentaro/labelme/labelme # command line interface
+# brew install --cask wkentaro/labelme/labelme # app
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+### Windows
+
+Install [Anaconda](https://www.continuum.io/downloads), then in an Anaconda Prompt run:
+
+```bash
+conda create --name=labelme python=3
+conda activate labelme
+pip install labelme
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+
+## Usage
+
+Run `labelme --help` for details.
+The annotations are saved as a [JSON](http://www.json.org/) file.
+
+```bash
+labelme # just open gui
+
+# tutorial (single image example)
+cd examples/tutorial
+labelme apc2016_obj3.jpg # specify image file
+labelme apc2016_obj3.jpg -O apc2016_obj3.json # close the window after saving
+labelme apc2016_obj3.jpg --nodata # do not embed image data; store the relative image path in the JSON file
+labelme apc2016_obj3.jpg \
+ --labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball # specify label list
+
+# semantic segmentation example
+cd examples/semantic_segmentation
+labelme data_annotated/ # Open directory to annotate all images in it
+labelme data_annotated/ --labels labels.txt # specify label list with a file
+```
+
+For more advanced usage, please refer to the examples:
+
+* [Tutorial (Single Image Example)](https://github.com/wkentaro/labelme/blob/main/examples/tutorial)
+* [Semantic Segmentation Example](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true)
+* [Instance Segmentation Example](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true)
+* [Video Annotation Example](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation?raw=true)
+
+### Command Line Arguments
+- `--output` specifies where annotations are written. If the given path ends with `.json`, a single annotation is written to that file, and only one image can be annotated. Otherwise the path is treated as a directory, and each annotation is stored there under a name derived from the image it was annotated on. (See the sketch after this list.)
+- The first time you run labelme, it creates a config file at `~/.labelmerc`. You can edit this file, and the changes are applied the next time you launch labelme. To use a config file from another location, pass it with the `--config` flag.
+- Without the `--nosortlabels` flag, labels are listed in alphabetical order; with it, labels are displayed in the order they are provided.
+- Flags are assigned to an entire image. [Example](https://github.com/wkentaro/labelme/blob/main/examples/classification?raw=true)
+- Labels are assigned to a single polygon. [Example](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection?raw=true)
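+
+A hedged sketch of these flags in combination (file and directory names are illustrative, reusing the tutorial data from above):
+
+```bash
+# write a single annotation to one JSON file (only one image can be annotated)
+labelme apc2016_obj3.jpg --output apc2016_obj3.json
+
+# write one JSON file per image into a directory
+labelme data_annotated/ --output annotations/
+
+# load the configuration from a non-default path
+labelme --config ~/labelme_config.yaml
+
+# keep the given label order instead of sorting alphabetically
+labelme apc2016_obj3.jpg --labels person,dog,cat --nosortlabels
+```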
+
+## FAQ
+
+- **How to convert JSON file to numpy array?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#convert-to-dataset).
+- **How to load label PNG file?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#how-to-load-label-png-file).
+- **How to get annotations for semantic segmentation?** See [examples/semantic_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true).
+- **How to get annotations for instance segmentation?** See [examples/instance_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true).
+
+
+## Developing
+
+```bash
+git clone https://github.com/wkentaro/labelme.git
+cd labelme
+
+# Install anaconda3 and labelme
+curl -L https://github.com/wkentaro/dotfiles/raw/main/local/bin/install_anaconda3.sh | bash -s .
+source .anaconda3/bin/activate
+pip install -e .
+```
+
+
+## How to build standalone executable
+
+The following shows how to build the standalone executable on macOS, Linux, and Windows.
+
+```bash
+# Setup conda
+conda create --name labelme python=3.9
+conda activate labelme
+
+# Build the standalone executable
+pip install .
+pip install 'matplotlib<3.3'
+pip install pyinstaller
+pyinstaller labelme.spec
+dist/labelme --version
+```
+
+
+## How to contribute
+
+Make sure the tests below pass in your environment.
+See `.github/workflows/ci.yml` for more details.
+
+```bash
+pip install -r requirements-dev.txt
+
+flake8 .
+black --line-length 79 --check labelme/
+MPLBACKEND='agg' pytest -vsx tests/
+```
+
+
+## Acknowledgement
+
+This repo is a fork of [mpitid/pylabelme](https://github.com/mpitid/pylabelme).
+
+
+%package -n python3-labelme
+Summary: Image Polygonal Annotation with Python
+Provides: python-labelme
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-labelme
+<h1 align="center">
+ <img src="https://github.com/wkentaro/labelme/blob/main/labelme/icons/icon.png?raw=true"><br/>labelme
+</h1>
+
+<h4 align="center">
+ Image Polygonal Annotation with Python
+</h4>
+
+<div align="center">
+ <a href="https://pypi.python.org/pypi/labelme"><img src="https://img.shields.io/pypi/v/labelme.svg"></a>
+ <a href="https://pypi.org/project/labelme"><img src="https://img.shields.io/pypi/pyversions/labelme.svg"></a>
+ <a href="https://github.com/wkentaro/labelme/actions"><img src="https://github.com/wkentaro/labelme/workflows/ci/badge.svg?branch=main&event=push"></a>
+</div>
+
+<div align="center">
+ <a href="https://github.com/wkentaro/labelme/blob/main/#installation?raw=true"><b>Installation</b></a> |
+ <a href="https://github.com/wkentaro/labelme/blob/main/#usage"><b>Usage</b></a> |
+ <a href="https://github.com/wkentaro/labelme/tree/main/examples/tutorial#tutorial-single-image-example"><b>Tutorial</b></a> |
+ <a href="https://github.com/wkentaro/labelme/tree/main/examples"><b>Examples</b></a> |
+ <a href="https://github.com/wkentaro/labelme/discussions"><b>Discussions</b></a> |
+ <a href="https://www.youtube.com/playlist?list=PLI6LvFw0iflh3o33YYnVIfOpaO0hc5Dzw"><b>Youtube FAQ</b></a>
+</div>
+
+<br/>
+
+<div align="center">
+ <img src="https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/.readme/annotation.jpg?raw=true" width="70%">
+</div>
+
+## Description
+
+Labelme is a graphical image annotation tool inspired by <http://labelme.csail.mit.edu>.
+It is written in Python and uses Qt for its graphical interface.
+
+<img src="https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/JPEGImages/2011_000006.jpg?raw=true" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClassPNG/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClassVisualization/2011_000006.jpg" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObjectPNG/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObjectVisualization/2011_000006.jpg" width="19%" />
+<i>VOC dataset example of instance segmentation.</i>
+
+<img src="https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation/.readme/annotation.jpg?raw=true" width="30%" /> <img src="examples/bbox_detection/.readme/annotation.jpg" width="30%" /> <img src="examples/classification/.readme/annotation_cat.jpg" width="35%" />
+<i>Other examples (semantic segmentation, bbox detection, and classification).</i>
+
+<img src="https://user-images.githubusercontent.com/4310419/47907116-85667800-de82-11e8-83d0-b9f4eb33268f.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/4310419/47922172-57972880-deae-11e8-84f8-e4324a7c856a.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/14256482/46932075-92145f00-d080-11e8-8d09-2162070ae57c.png" width="32%" />
+<i>Various primitives (polygon, rectangle, circle, line, and point).</i>
+
+
+## Features
+
+- [x] Image annotation for polygon, rectangle, circle, line and point. ([tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial))
+- [x] Image flag annotation for classification and cleaning. ([#166](https://github.com/wkentaro/labelme/pull/166))
+- [x] Video annotation. ([video annotation](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation?raw=true))
+- [x] GUI customization (predefined labels / flags, auto-saving, label validation, etc). ([#144](https://github.com/wkentaro/labelme/pull/144))
+- [x] Exporting VOC-format dataset for semantic/instance segmentation. ([semantic segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true), [instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true))
+- [x] Exporting COCO-format dataset for instance segmentation. ([instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true))
+
+
+
+## Requirements
+
+- Ubuntu / macOS / Windows
+- Python3
+- [PyQt5 / PySide2](http://www.riverbankcomputing.co.uk/software/pyqt/intro)
+
+
+## Installation
+
+There are several installation options:
+
+- Platform-agnostic installation: [Anaconda](https://github.com/wkentaro/labelme/blob/main/#anaconda)
+- Platform-specific installation: [Ubuntu](https://github.com/wkentaro/labelme/blob/main/#ubuntu), [macOS](https://github.com/wkentaro/labelme/blob/main/#macos), [Windows](https://github.com/wkentaro/labelme/blob/main/#windows)
+- Pre-built binaries from [the release section](https://github.com/wkentaro/labelme/releases)
+
+### Anaconda
+
+You need to install [Anaconda](https://www.continuum.io/downloads) first, then run the commands below:
+
+```bash
+# python3
+conda create --name=labelme python=3
+source activate labelme
+# conda install -c conda-forge pyside2
+# conda install pyqt
+# pip install pyqt5 # pyqt5 can be installed via pip on python3
+pip install labelme
+# or you can install everything by conda command
+# conda install labelme -c conda-forge
+```
+
+### Ubuntu
+
+```bash
+sudo apt-get install labelme
+
+# or
+sudo pip3 install labelme
+
+# or install standalone executable from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+### macOS
+
+```bash
+brew install pyqt # maybe pyqt5
+pip install labelme
+
+# or
+brew install wkentaro/labelme/labelme # command line interface
+# brew install --cask wkentaro/labelme/labelme # app
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+### Windows
+
+Install [Anaconda](https://www.continuum.io/downloads), then in an Anaconda Prompt run:
+
+```bash
+conda create --name=labelme python=3
+conda activate labelme
+pip install labelme
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+
+## Usage
+
+Run `labelme --help` for details.
+The annotations are saved as a [JSON](http://www.json.org/) file.
+
+```bash
+labelme # just open gui
+
+# tutorial (single image example)
+cd examples/tutorial
+labelme apc2016_obj3.jpg # specify image file
+labelme apc2016_obj3.jpg -O apc2016_obj3.json # close the window after saving
+labelme apc2016_obj3.jpg --nodata # do not embed image data; store the relative image path in the JSON file
+labelme apc2016_obj3.jpg \
+ --labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball # specify label list
+
+# semantic segmentation example
+cd examples/semantic_segmentation
+labelme data_annotated/ # Open directory to annotate all images in it
+labelme data_annotated/ --labels labels.txt # specify label list with a file
+```
+
+For more advanced usage, please refer to the examples:
+
+* [Tutorial (Single Image Example)](https://github.com/wkentaro/labelme/blob/main/examples/tutorial)
+* [Semantic Segmentation Example](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true)
+* [Instance Segmentation Example](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true)
+* [Video Annotation Example](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation?raw=true)
+
+### Command Line Arguments
+- `--output` specifies where annotations are written. If the given path ends with `.json`, a single annotation is written to that file, and only one image can be annotated. Otherwise the path is treated as a directory, and each annotation is stored there under a name derived from the image it was annotated on. (See the sketch after this list.)
+- The first time you run labelme, it creates a config file at `~/.labelmerc`. You can edit this file, and the changes are applied the next time you launch labelme. To use a config file from another location, pass it with the `--config` flag.
+- Without the `--nosortlabels` flag, labels are listed in alphabetical order; with it, labels are displayed in the order they are provided.
+- Flags are assigned to an entire image. [Example](https://github.com/wkentaro/labelme/blob/main/examples/classification?raw=true)
+- Labels are assigned to a single polygon. [Example](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection?raw=true)
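+
+A hedged sketch of these flags in combination (file and directory names are illustrative, reusing the tutorial data from above):
+
+```bash
+# write a single annotation to one JSON file (only one image can be annotated)
+labelme apc2016_obj3.jpg --output apc2016_obj3.json
+
+# write one JSON file per image into a directory
+labelme data_annotated/ --output annotations/
+
+# load the configuration from a non-default path
+labelme --config ~/labelme_config.yaml
+
+# keep the given label order instead of sorting alphabetically
+labelme apc2016_obj3.jpg --labels person,dog,cat --nosortlabels
+```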
+
+## FAQ
+
+- **How to convert JSON file to numpy array?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#convert-to-dataset).
+- **How to load label PNG file?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#how-to-load-label-png-file).
+- **How to get annotations for semantic segmentation?** See [examples/semantic_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true).
+- **How to get annotations for instance segmentation?** See [examples/instance_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true).
+
+
+## Developing
+
+```bash
+git clone https://github.com/wkentaro/labelme.git
+cd labelme
+
+# Install anaconda3 and labelme
+curl -L https://github.com/wkentaro/dotfiles/raw/main/local/bin/install_anaconda3.sh | bash -s .
+source .anaconda3/bin/activate
+pip install -e .
+```
+
+
+## How to build standalone executable
+
+The following shows how to build the standalone executable on macOS, Linux, and Windows.
+
+```bash
+# Setup conda
+conda create --name labelme python=3.9
+conda activate labelme
+
+# Build the standalone executable
+pip install .
+pip install 'matplotlib<3.3'
+pip install pyinstaller
+pyinstaller labelme.spec
+dist/labelme --version
+```
+
+
+## How to contribute
+
+Make sure the tests below pass in your environment.
+See `.github/workflows/ci.yml` for more details.
+
+```bash
+pip install -r requirements-dev.txt
+
+flake8 .
+black --line-length 79 --check labelme/
+MPLBACKEND='agg' pytest -vsx tests/
+```
+
+
+## Acknowledgement
+
+This repo is a fork of [mpitid/pylabelme](https://github.com/mpitid/pylabelme).
+
+
+%package help
+Summary: Development documents and examples for labelme
+Provides: python3-labelme-doc
+%description help
+<h1 align="center">
+ <img src="https://github.com/wkentaro/labelme/blob/main/labelme/icons/icon.png?raw=true"><br/>labelme
+</h1>
+
+<h4 align="center">
+ Image Polygonal Annotation with Python
+</h4>
+
+<div align="center">
+ <a href="https://pypi.python.org/pypi/labelme"><img src="https://img.shields.io/pypi/v/labelme.svg"></a>
+ <a href="https://pypi.org/project/labelme"><img src="https://img.shields.io/pypi/pyversions/labelme.svg"></a>
+ <a href="https://github.com/wkentaro/labelme/actions"><img src="https://github.com/wkentaro/labelme/workflows/ci/badge.svg?branch=main&event=push"></a>
+</div>
+
+<div align="center">
+ <a href="https://github.com/wkentaro/labelme/blob/main/#installation?raw=true"><b>Installation</b></a> |
+ <a href="https://github.com/wkentaro/labelme/blob/main/#usage"><b>Usage</b></a> |
+ <a href="https://github.com/wkentaro/labelme/tree/main/examples/tutorial#tutorial-single-image-example"><b>Tutorial</b></a> |
+ <a href="https://github.com/wkentaro/labelme/tree/main/examples"><b>Examples</b></a> |
+ <a href="https://github.com/wkentaro/labelme/discussions"><b>Discussions</b></a> |
+ <a href="https://www.youtube.com/playlist?list=PLI6LvFw0iflh3o33YYnVIfOpaO0hc5Dzw"><b>Youtube FAQ</b></a>
+</div>
+
+<br/>
+
+<div align="center">
+ <img src="https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/.readme/annotation.jpg?raw=true" width="70%">
+</div>
+
+## Description
+
+Labelme is a graphical image annotation tool inspired by <http://labelme.csail.mit.edu>.
+It is written in Python and uses Qt for its graphical interface.
+
+<img src="https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation/data_dataset_voc/JPEGImages/2011_000006.jpg?raw=true" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClassPNG/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationClassVisualization/2011_000006.jpg" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObjectPNG/2011_000006.png" width="19%" /> <img src="examples/instance_segmentation/data_dataset_voc/SegmentationObjectVisualization/2011_000006.jpg" width="19%" />
+<i>VOC dataset example of instance segmentation.</i>
+
+<img src="https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation/.readme/annotation.jpg?raw=true" width="30%" /> <img src="examples/bbox_detection/.readme/annotation.jpg" width="30%" /> <img src="examples/classification/.readme/annotation_cat.jpg" width="35%" />
+<i>Other examples (semantic segmentation, bbox detection, and classification).</i>
+
+<img src="https://user-images.githubusercontent.com/4310419/47907116-85667800-de82-11e8-83d0-b9f4eb33268f.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/4310419/47922172-57972880-deae-11e8-84f8-e4324a7c856a.gif" width="30%" /> <img src="https://user-images.githubusercontent.com/14256482/46932075-92145f00-d080-11e8-8d09-2162070ae57c.png" width="32%" />
+<i>Various primitives (polygon, rectangle, circle, line, and point).</i>
+
+
+## Features
+
+- [x] Image annotation for polygon, rectangle, circle, line and point. ([tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial))
+- [x] Image flag annotation for classification and cleaning. ([#166](https://github.com/wkentaro/labelme/pull/166))
+- [x] Video annotation. ([video annotation](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation?raw=true))
+- [x] GUI customization (predefined labels / flags, auto-saving, label validation, etc). ([#144](https://github.com/wkentaro/labelme/pull/144))
+- [x] Exporting VOC-format dataset for semantic/instance segmentation. ([semantic segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true), [instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true))
+- [x] Exporting COCO-format dataset for instance segmentation. ([instance segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true))
+
+
+
+## Requirements
+
+- Ubuntu / macOS / Windows
+- Python3
+- [PyQt5 / PySide2](http://www.riverbankcomputing.co.uk/software/pyqt/intro)
+
+
+## Installation
+
+There are several installation options:
+
+- Platform-agnostic installation: [Anaconda](https://github.com/wkentaro/labelme/blob/main/#anaconda)
+- Platform-specific installation: [Ubuntu](https://github.com/wkentaro/labelme/blob/main/#ubuntu), [macOS](https://github.com/wkentaro/labelme/blob/main/#macos), [Windows](https://github.com/wkentaro/labelme/blob/main/#windows)
+- Pre-built binaries from [the release section](https://github.com/wkentaro/labelme/releases)
+
+### Anaconda
+
+You need to install [Anaconda](https://www.continuum.io/downloads) first, then run the commands below:
+
+```bash
+# python3
+conda create --name=labelme python=3
+source activate labelme
+# conda install -c conda-forge pyside2
+# conda install pyqt
+# pip install pyqt5 # pyqt5 can be installed via pip on python3
+pip install labelme
+# or you can install everything by conda command
+# conda install labelme -c conda-forge
+```
+
+### Ubuntu
+
+```bash
+sudo apt-get install labelme
+
+# or
+sudo pip3 install labelme
+
+# or install standalone executable from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+### macOS
+
+```bash
+brew install pyqt # maybe pyqt5
+pip install labelme
+
+# or
+brew install wkentaro/labelme/labelme # command line interface
+# brew install --cask wkentaro/labelme/labelme # app
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+### Windows
+
+Install [Anaconda](https://www.continuum.io/downloads), then in an Anaconda Prompt run:
+
+```bash
+conda create --name=labelme python=3
+conda activate labelme
+pip install labelme
+
+# or install standalone executable/app from:
+# https://github.com/wkentaro/labelme/releases
+```
+
+
+## Usage
+
+Run `labelme --help` for details.
+The annotations are saved as a [JSON](http://www.json.org/) file.
+
+```bash
+labelme # just open gui
+
+# tutorial (single image example)
+cd examples/tutorial
+labelme apc2016_obj3.jpg # specify image file
+labelme apc2016_obj3.jpg -O apc2016_obj3.json # close the window after saving
+labelme apc2016_obj3.jpg --nodata # do not embed image data; store the relative image path in the JSON file
+labelme apc2016_obj3.jpg \
+ --labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball # specify label list
+
+# semantic segmentation example
+cd examples/semantic_segmentation
+labelme data_annotated/ # Open directory to annotate all images in it
+labelme data_annotated/ --labels labels.txt # specify label list with a file
+```
+
+For more advanced usage, please refer to the examples:
+
+* [Tutorial (Single Image Example)](https://github.com/wkentaro/labelme/blob/main/examples/tutorial)
+* [Semantic Segmentation Example](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true)
+* [Instance Segmentation Example](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true)
+* [Video Annotation Example](https://github.com/wkentaro/labelme/blob/main/examples/video_annotation?raw=true)
+
+### Command Line Arguments
+- `--output` specifies where annotations are written. If the given path ends with `.json`, a single annotation is written to that file, and only one image can be annotated. Otherwise the path is treated as a directory, and each annotation is stored there under a name derived from the image it was annotated on. (See the sketch after this list.)
+- The first time you run labelme, it creates a config file at `~/.labelmerc`. You can edit this file, and the changes are applied the next time you launch labelme. To use a config file from another location, pass it with the `--config` flag.
+- Without the `--nosortlabels` flag, labels are listed in alphabetical order; with it, labels are displayed in the order they are provided.
+- Flags are assigned to an entire image. [Example](https://github.com/wkentaro/labelme/blob/main/examples/classification?raw=true)
+- Labels are assigned to a single polygon. [Example](https://github.com/wkentaro/labelme/blob/main/examples/bbox_detection?raw=true)
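+
+A hedged sketch of these flags in combination (file and directory names are illustrative, reusing the tutorial data from above):
+
+```bash
+# write a single annotation to one JSON file (only one image can be annotated)
+labelme apc2016_obj3.jpg --output apc2016_obj3.json
+
+# write one JSON file per image into a directory
+labelme data_annotated/ --output annotations/
+
+# load the configuration from a non-default path
+labelme --config ~/labelme_config.yaml
+
+# keep the given label order instead of sorting alphabetically
+labelme apc2016_obj3.jpg --labels person,dog,cat --nosortlabels
+```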
+
+## FAQ
+
+- **How to convert JSON file to numpy array?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#convert-to-dataset).
+- **How to load label PNG file?** See [examples/tutorial](https://github.com/wkentaro/labelme/blob/main/examples/tutorial#how-to-load-label-png-file).
+- **How to get annotations for semantic segmentation?** See [examples/semantic_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/semantic_segmentation?raw=true).
+- **How to get annotations for instance segmentation?** See [examples/instance_segmentation](https://github.com/wkentaro/labelme/blob/main/examples/instance_segmentation?raw=true).
+
+
+## Developing
+
+```bash
+git clone https://github.com/wkentaro/labelme.git
+cd labelme
+
+# Install anaconda3 and labelme
+curl -L https://github.com/wkentaro/dotfiles/raw/main/local/bin/install_anaconda3.sh | bash -s .
+source .anaconda3/bin/activate
+pip install -e .
+```
+
+
+## How to build standalone executable
+
+The following shows how to build the standalone executable on macOS, Linux, and Windows.
+
+```bash
+# Setup conda
+conda create --name labelme python=3.9
+conda activate labelme
+
+# Build the standalone executable
+pip install .
+pip install 'matplotlib<3.3'
+pip install pyinstaller
+pyinstaller labelme.spec
+dist/labelme --version
+```
+
+
+## How to contribute
+
+Make sure the tests below pass in your environment.
+See `.github/workflows/ci.yml` for more details.
+
+```bash
+pip install -r requirements-dev.txt
+
+flake8 .
+black --line-length 79 --check labelme/
+MPLBACKEND='agg' pytest -vsx tests/
+```
+
+
+## Acknowledgement
+
+This repo is a fork of [mpitid/pylabelme](https://github.com/mpitid/pylabelme).
+
+
+%prep
+%autosetup -n labelme-5.2.0
+
+%build
+%py3_build
+
+%install
+%py3_install
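+# Copy any doc/example directories shipped in the source into the package docdir,
+# then generate the file lists consumed by the %files sections below.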
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-labelme -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 5.2.0-1
+- Package Spec generated