%global _empty_manifest_terminate_build 0
Name:		python-sahi
Version:	0.11.13
Release:	1
Summary:	A vision library for performing sliced inference on large images/small objects
License:	MIT
URL:		https://github.com/obss/sahi
Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/68/51/05c8329db4779f556027f3248e1f0e331a41f33fb0bf3be8a9d40c057dde/sahi-0.11.13.tar.gz
BuildArch:	noarch

Requires:	python3-opencv-python
Requires:	python3-shapely
Requires:	python3-tqdm
Requires:	python3-pillow
Requires:	python3-pybboxes
Requires:	python3-pyyaml
Requires:	python3-fire
Requires:	python3-terminaltables
Requires:	python3-requests
Requires:	python3-click
Requires:	python3-black
Requires:	python3-flake8
Requires:	python3-isort
Requires:	python3-jupyterlab
Requires:	python3-importlib-metadata
Requires:	python3-mmdet
Requires:	python3-pycocotools

%description

SAHI: Slicing Aided Hyper Inference

A lightweight vision library for performing large scale object detection & instance segmentation

## Overview

Object detection and instance segmentation are among the most widely used applications of computer vision. However, detecting small objects and running inference on large images remain major issues in practical usage. SAHI helps developers overcome these real-world problems with a collection of vision utilities.

| Command | Description |
|---|---|
| [predict](https://github.com/obss/sahi/blob/main/docs/cli.md#predict-command-usage) | perform sliced/standard video/image prediction using any [yolov5](https://github.com/ultralytics/yolov5)/[mmdet](https://github.com/open-mmlab/mmdetection)/[detectron2](https://github.com/facebookresearch/detectron2)/[huggingface](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads) model |
| [predict-fiftyone](https://github.com/obss/sahi/blob/main/docs/cli.md#predict-fiftyone-command-usage) | perform sliced/standard prediction using any [yolov5](https://github.com/ultralytics/yolov5)/[mmdet](https://github.com/open-mmlab/mmdetection)/[detectron2](https://github.com/facebookresearch/detectron2)/[huggingface](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads) model and explore results in the [fiftyone app](https://github.com/voxel51/fiftyone) |
| [coco slice](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-slice-command-usage) | automatically slice COCO annotation and image files |
| [coco fiftyone](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-fiftyone-command-usage) | explore multiple prediction results on your COCO dataset with the [fiftyone ui](https://github.com/voxel51/fiftyone), ordered by number of misdetections |
| [coco evaluate](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-evaluate-command-usage) | evaluate classwise COCO AP and AR for given predictions and ground truth |
| [coco analyse](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-analyse-command-usage) | calculate and export many error analysis plots |
| [coco yolov5](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-yolov5-command-usage) | automatically convert any COCO dataset to [yolov5](https://github.com/ultralytics/yolov5) format |

## Quick Start Examples
[📜 List of publications that cite SAHI (currently 40+)](https://scholar.google.com/scholar?hl=en&as_sdt=2005&sciodt=0,5&cites=14065474760484865747&scipsc=&q=&scisbd=1)

[🏆 List of competition winners that used SAHI](https://github.com/obss/sahi/discussions/688)

### Tutorials

- [Introduction to SAHI](https://medium.com/codable/sahi-a-vision-library-for-performing-sliced-inference-on-large-images-small-objects-c8b086af3b80)
- [Official paper](https://ieeexplore.ieee.org/document/9897990) (ICIP 2022 oral) (NEW)
- [Pretrained weights and ICIP 2022 paper files](https://github.com/fcakyon/small-object-detection-benchmark)
- [Video inference support is live](https://github.com/obss/sahi/discussions/626)
- [Kaggle notebook](https://www.kaggle.com/remekkinas/sahi-slicing-aided-hyper-inference-yv5-and-yx)
- [Satellite object detection](https://blog.ml6.eu/how-to-detect-small-objects-in-very-large-images-70234bab0f98)
- [Error analysis plots & evaluation](https://github.com/obss/sahi/discussions/622) (NEW)
- [Interactive result visualization and inspection](https://github.com/obss/sahi/discussions/624) (NEW)
- [COCO dataset conversion](https://medium.com/codable/convert-any-dataset-to-coco-object-detection-format-with-sahi-95349e1fe2b7)
- [Slicing operation notebook](demo/slicing.ipynb)
- `YOLOX` + `SAHI` demo: sahi-yolox (RECOMMENDED)
- `YOLOv5` + `SAHI` walkthrough: sahi-yolov5
- `MMDetection` + `SAHI` walkthrough: sahi-mmdetection
- `Detectron2` + `SAHI` walkthrough: sahi-detectron2
- `HuggingFace` + `SAHI` walkthrough: sahi-huggingface (NEW)
- `TorchVision` + `SAHI` walkthrough: sahi-torchvision (NEW)

### Installation
Installation details:

- Install `sahi` using pip:

```console
pip install sahi
```

- On Windows, `Shapely` needs to be installed via Conda:

```console
conda install -c conda-forge shapely
```

- Install your desired version of pytorch and torchvision (cuda 11.3 for detectron2, cuda 11.7 for the rest):

```console
conda install pytorch=1.10.2 torchvision=0.11.3 cudatoolkit=11.3 -c pytorch
```

```console
conda install pytorch=1.13.1 torchvision=0.14.1 pytorch-cuda=11.7 -c pytorch -c nvidia
```

- Install your desired detection framework (yolov5):

```console
pip install yolov5==7.0.4
```

- Install your desired detection framework (mmdet):

```console
pip install mmcv-full==1.7.0 -f https://download.openmmlab.com/mmcv/dist/cu117/torch1.13.0/index.html
```

```console
pip install mmdet==2.26.0
```

- Install your desired detection framework (detectron2):

```console
pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
```

- Install your desired detection framework (huggingface):

```console
pip install transformers timm
```
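A quick way to confirm the environment after the steps above is to import the packages and check GPU visibility. This is only a sanity-check sketch and assumes pytorch was installed via one of the conda commands listed earlier:

```python
# Post-install sanity check (assumes sahi and torch are installed as above).
import sahi
import torch

print("sahi version:", sahi.__version__)
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```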
### Framework Agnostic Sliced/Standard Prediction

Find detailed info on the `sahi predict` command at [cli.md](docs/cli.md#predict-command-usage); a minimal Python sketch of the sliced prediction workflow follows this section.

Find detailed info on video inference at the [video inference tutorial](https://github.com/obss/sahi/discussions/626).

Find detailed info on image/dataset slicing utilities at [slicing.md](docs/slicing.md).

### Error Analysis Plots & Evaluation

Find detailed info at [Error Analysis Plots & Evaluation](https://github.com/obss/sahi/discussions/622).

### Interactive Visualization & Inspection

Find detailed info at [Interactive Result Visualization and Inspection](https://github.com/obss/sahi/discussions/624).

### Other utilities

Find detailed info on COCO utilities (yolov5 conversion, slicing, subsampling, filtering, merging, splitting) at [coco.md](docs/coco.md).

Find detailed info on MOT utilities (ground truth dataset creation, exporting tracker metrics in mot challenge format) at [mot.md](docs/mot.md).
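As a Python counterpart to the `sahi predict` command above, here is a minimal sketch of sliced prediction built on sahi's `AutoDetectionModel` and `get_sliced_prediction` helpers; the model path, image path, and parameter values are placeholders, and exact argument names should be checked against the docs for your sahi version:

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Wrap any supported detection framework behind a common interface
# (placeholder model path and threshold).
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="yolov5s.pt",
    confidence_threshold=0.4,
    device="cpu",  # or "cuda:0"
)

# Sliced inference: the image is tiled with overlap, each tile is predicted
# separately, and the per-tile detections are merged back into full-image
# coordinates.
result = get_sliced_prediction(
    "large_image.jpg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Export annotated visuals for inspection.
result.export_visuals(export_dir="runs/")
```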
## Citation

If you use this package in your work, please cite it as:

```
@article{akyon2022sahi,
  title={Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection},
  author={Akyon, Fatih Cagatay and Altinuc, Sinan Onur and Temizel, Alptekin},
  journal={2022 IEEE International Conference on Image Processing (ICIP)},
  doi={10.1109/ICIP46576.2022.9897990},
  pages={966-970},
  year={2022}
}
```

```
@software{obss2021sahi,
  author    = {Akyon, Fatih Cagatay and Cengiz, Cemil and Altinuc, Sinan Onur and Cavusoglu, Devrim and Sahin, Kadir and Eryuksel, Ogulcan},
  title     = {{SAHI: A lightweight vision library for performing large scale object detection and instance segmentation}},
  month     = nov,
  year      = 2021,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.5718950},
  url       = {https://doi.org/10.5281/zenodo.5718950}
}
```
## Contributing

The `sahi` library currently supports all [YOLOv5 models](https://github.com/ultralytics/yolov5/releases), [MMDetection models](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/model_zoo.md), [Detectron2 models](https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md), and [HuggingFace object detection models](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads). Moreover, it is easy to add new frameworks: create a new `.py` file under the [sahi/models/](https://github.com/obss/sahi/tree/main/sahi/models) folder and define a class in it that implements the [DetectionModel class](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/base.py#L12). You can take the [MMDetection wrapper](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/mmdet.py#L18) or the [YOLOv5 wrapper](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/yolov5.py#L17) as a reference; a rough skeleton is also sketched at the end of this section.

Before opening a PR:

- Install required development packages:

```bash
pip install -e ."[dev]"
```

- Reformat with black and isort:

```bash
python -m scripts.run_code_style format
```
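To illustrate the wrapper pattern described above, here is a rough skeleton of a new model class. The exact abstract methods and attributes are defined in `sahi/models/base.py`, so the method names used here are assumptions that should be checked against that file and the existing wrappers:

```python
# Hypothetical skeleton of a new framework wrapper for sahi; method names
# mirror the existing wrappers and must be verified against
# sahi/models/base.py before use.
from sahi.models.base import DetectionModel


class MyFrameworkDetectionModel(DetectionModel):
    def load_model(self):
        # Load the underlying framework model from self.model_path and keep
        # a reference on self.model (placeholder logic).
        self.model = ...

    def perform_inference(self, image):
        # Run the framework's forward pass on a numpy image and store the
        # raw output for later conversion.
        self._original_predictions = self.model(image)

    def _create_object_prediction_list_from_original_predictions(self, **kwargs):
        # Convert the raw framework output into sahi ObjectPrediction objects;
        # see the MMDetection or YOLOv5 wrapper for a concrete implementation.
        ...
```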
## Contributors

Fatih Cagatay Akyon, Sinan Onur Altinuc, Devrim Cavusoglu, Cemil Cengiz, Ogulcan Eryuksel, Kadir Nar, Burak Maden, Pushpak Bhoge, M. Can V., Christoffer Edlund, Ishwor, Mehmet Ecevit, Kadir Sahin, Wey Youngjae, Alzbeta Tureckova, Wei Ji, Aynur Susuz
%package -n python3-sahi
Summary:	A vision library for performing sliced inference on large images/small objects
Provides:	python-sahi
BuildRequires:	python3-devel
BuildRequires:	python3-setuptools
BuildRequires:	python3-pip
%description -n python3-sahi

SAHI: Slicing Aided Hyper Inference

A lightweight vision library for performing large scale object detection & instance segmentation

## Overview

Object detection and instance segmentation are among the most widely used applications of computer vision. However, detecting small objects and running inference on large images remain major issues in practical usage. SAHI helps developers overcome these real-world problems with a collection of vision utilities.

| Command | Description |
|---|---|
| [predict](https://github.com/obss/sahi/blob/main/docs/cli.md#predict-command-usage) | perform sliced/standard video/image prediction using any [yolov5](https://github.com/ultralytics/yolov5)/[mmdet](https://github.com/open-mmlab/mmdetection)/[detectron2](https://github.com/facebookresearch/detectron2)/[huggingface](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads) model |
| [predict-fiftyone](https://github.com/obss/sahi/blob/main/docs/cli.md#predict-fiftyone-command-usage) | perform sliced/standard prediction using any [yolov5](https://github.com/ultralytics/yolov5)/[mmdet](https://github.com/open-mmlab/mmdetection)/[detectron2](https://github.com/facebookresearch/detectron2)/[huggingface](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads) model and explore results in the [fiftyone app](https://github.com/voxel51/fiftyone) |
| [coco slice](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-slice-command-usage) | automatically slice COCO annotation and image files |
| [coco fiftyone](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-fiftyone-command-usage) | explore multiple prediction results on your COCO dataset with the [fiftyone ui](https://github.com/voxel51/fiftyone), ordered by number of misdetections |
| [coco evaluate](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-evaluate-command-usage) | evaluate classwise COCO AP and AR for given predictions and ground truth |
| [coco analyse](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-analyse-command-usage) | calculate and export many error analysis plots |
| [coco yolov5](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-yolov5-command-usage) | automatically convert any COCO dataset to [yolov5](https://github.com/ultralytics/yolov5) format |

## Quick Start Examples
[📜 List of publications that cite SAHI (currently 40+)](https://scholar.google.com/scholar?hl=en&as_sdt=2005&sciodt=0,5&cites=14065474760484865747&scipsc=&q=&scisbd=1)

[🏆 List of competition winners that used SAHI](https://github.com/obss/sahi/discussions/688)

### Tutorials

- [Introduction to SAHI](https://medium.com/codable/sahi-a-vision-library-for-performing-sliced-inference-on-large-images-small-objects-c8b086af3b80)
- [Official paper](https://ieeexplore.ieee.org/document/9897990) (ICIP 2022 oral) (NEW)
- [Pretrained weights and ICIP 2022 paper files](https://github.com/fcakyon/small-object-detection-benchmark)
- [Video inference support is live](https://github.com/obss/sahi/discussions/626)
- [Kaggle notebook](https://www.kaggle.com/remekkinas/sahi-slicing-aided-hyper-inference-yv5-and-yx)
- [Satellite object detection](https://blog.ml6.eu/how-to-detect-small-objects-in-very-large-images-70234bab0f98)
- [Error analysis plots & evaluation](https://github.com/obss/sahi/discussions/622) (NEW)
- [Interactive result visualization and inspection](https://github.com/obss/sahi/discussions/624) (NEW)
- [COCO dataset conversion](https://medium.com/codable/convert-any-dataset-to-coco-object-detection-format-with-sahi-95349e1fe2b7)
- [Slicing operation notebook](demo/slicing.ipynb)
- `YOLOX` + `SAHI` demo: sahi-yolox (RECOMMENDED)
- `YOLOv5` + `SAHI` walkthrough: sahi-yolov5
- `MMDetection` + `SAHI` walkthrough: sahi-mmdetection
- `Detectron2` + `SAHI` walkthrough: sahi-detectron2
- `HuggingFace` + `SAHI` walkthrough: sahi-huggingface (NEW)
- `TorchVision` + `SAHI` walkthrough: sahi-torchvision (NEW)

### Installation
Installation details:

- Install `sahi` using pip:

```console
pip install sahi
```

- On Windows, `Shapely` needs to be installed via Conda:

```console
conda install -c conda-forge shapely
```

- Install your desired version of pytorch and torchvision (cuda 11.3 for detectron2, cuda 11.7 for the rest):

```console
conda install pytorch=1.10.2 torchvision=0.11.3 cudatoolkit=11.3 -c pytorch
```

```console
conda install pytorch=1.13.1 torchvision=0.14.1 pytorch-cuda=11.7 -c pytorch -c nvidia
```

- Install your desired detection framework (yolov5):

```console
pip install yolov5==7.0.4
```

- Install your desired detection framework (mmdet):

```console
pip install mmcv-full==1.7.0 -f https://download.openmmlab.com/mmcv/dist/cu117/torch1.13.0/index.html
```

```console
pip install mmdet==2.26.0
```

- Install your desired detection framework (detectron2):

```console
pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
```

- Install your desired detection framework (huggingface):

```console
pip install transformers timm
```
### Framework Agnostic Sliced/Standard Prediction

Find detailed info on the `sahi predict` command at [cli.md](docs/cli.md#predict-command-usage).

Find detailed info on video inference at the [video inference tutorial](https://github.com/obss/sahi/discussions/626).

Find detailed info on image/dataset slicing utilities at [slicing.md](docs/slicing.md); a minimal Python sketch of the slicing utility follows this section.

### Error Analysis Plots & Evaluation

Find detailed info at [Error Analysis Plots & Evaluation](https://github.com/obss/sahi/discussions/622).

### Interactive Visualization & Inspection

Find detailed info at [Interactive Result Visualization and Inspection](https://github.com/obss/sahi/discussions/624).

### Other utilities

Find detailed info on COCO utilities (yolov5 conversion, slicing, subsampling, filtering, merging, splitting) at [coco.md](docs/coco.md).

Find detailed info on MOT utilities (ground truth dataset creation, exporting tracker metrics in mot challenge format) at [mot.md](docs/mot.md).
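As a small illustration of the slicing utilities referenced above, the sketch below uses `slice_image` from `sahi.slicing`. The paths, tile sizes, and the `images` attribute on the result are assumptions based on the slicing docs and may differ between versions:

```python
# Minimal slicing sketch: tile a large image into overlapping crops and
# write them to disk (placeholder paths and sizes).
from sahi.slicing import slice_image

slice_result = slice_image(
    image="large_image.jpg",
    output_file_name="large_image_sliced",
    output_dir="slices/",
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

print("number of slices:", len(slice_result.images))
```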
## Citation

If you use this package in your work, please cite it as:

```
@article{akyon2022sahi,
  title={Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection},
  author={Akyon, Fatih Cagatay and Altinuc, Sinan Onur and Temizel, Alptekin},
  journal={2022 IEEE International Conference on Image Processing (ICIP)},
  doi={10.1109/ICIP46576.2022.9897990},
  pages={966-970},
  year={2022}
}
```

```
@software{obss2021sahi,
  author    = {Akyon, Fatih Cagatay and Cengiz, Cemil and Altinuc, Sinan Onur and Cavusoglu, Devrim and Sahin, Kadir and Eryuksel, Ogulcan},
  title     = {{SAHI: A lightweight vision library for performing large scale object detection and instance segmentation}},
  month     = nov,
  year      = 2021,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.5718950},
  url       = {https://doi.org/10.5281/zenodo.5718950}
}
```
## Contributing

The `sahi` library currently supports all [YOLOv5 models](https://github.com/ultralytics/yolov5/releases), [MMDetection models](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/model_zoo.md), [Detectron2 models](https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md), and [HuggingFace object detection models](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads). Moreover, it is easy to add new frameworks: create a new `.py` file under the [sahi/models/](https://github.com/obss/sahi/tree/main/sahi/models) folder and define a class in it that implements the [DetectionModel class](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/base.py#L12). You can take the [MMDetection wrapper](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/mmdet.py#L18) or the [YOLOv5 wrapper](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/yolov5.py#L17) as a reference.

Before opening a PR:

- Install required development packages:

```bash
pip install -e ."[dev]"
```

- Reformat with black and isort:

```bash
python -m scripts.run_code_style format
```
## Contributors

Fatih Cagatay Akyon, Sinan Onur Altinuc, Devrim Cavusoglu, Cemil Cengiz, Ogulcan Eryuksel, Kadir Nar, Burak Maden, Pushpak Bhoge, M. Can V., Christoffer Edlund, Ishwor, Mehmet Ecevit, Kadir Sahin, Wey Youngjae, Alzbeta Tureckova, Wei Ji, Aynur Susuz
%package help
Summary:	Development documents and examples for sahi
Provides:	python3-sahi-doc
%description help

SAHI: Slicing Aided Hyper Inference

A lightweight vision library for performing large scale object detection & instance segmentation

## Overview

Object detection and instance segmentation are among the most widely used applications of computer vision. However, detecting small objects and running inference on large images remain major issues in practical usage. SAHI helps developers overcome these real-world problems with a collection of vision utilities.

| Command | Description |
|---|---|
| [predict](https://github.com/obss/sahi/blob/main/docs/cli.md#predict-command-usage) | perform sliced/standard video/image prediction using any [yolov5](https://github.com/ultralytics/yolov5)/[mmdet](https://github.com/open-mmlab/mmdetection)/[detectron2](https://github.com/facebookresearch/detectron2)/[huggingface](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads) model |
| [predict-fiftyone](https://github.com/obss/sahi/blob/main/docs/cli.md#predict-fiftyone-command-usage) | perform sliced/standard prediction using any [yolov5](https://github.com/ultralytics/yolov5)/[mmdet](https://github.com/open-mmlab/mmdetection)/[detectron2](https://github.com/facebookresearch/detectron2)/[huggingface](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads) model and explore results in the [fiftyone app](https://github.com/voxel51/fiftyone) |
| [coco slice](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-slice-command-usage) | automatically slice COCO annotation and image files |
| [coco fiftyone](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-fiftyone-command-usage) | explore multiple prediction results on your COCO dataset with the [fiftyone ui](https://github.com/voxel51/fiftyone), ordered by number of misdetections |
| [coco evaluate](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-evaluate-command-usage) | evaluate classwise COCO AP and AR for given predictions and ground truth |
| [coco analyse](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-analyse-command-usage) | calculate and export many error analysis plots |
| [coco yolov5](https://github.com/obss/sahi/blob/main/docs/cli.md#coco-yolov5-command-usage) | automatically convert any COCO dataset to [yolov5](https://github.com/ultralytics/yolov5) format |

## Quick Start Examples
[📜 List of publications that cite SAHI (currently 40+)](https://scholar.google.com/scholar?hl=en&as_sdt=2005&sciodt=0,5&cites=14065474760484865747&scipsc=&q=&scisbd=1)

[🏆 List of competition winners that used SAHI](https://github.com/obss/sahi/discussions/688)

### Tutorials

- [Introduction to SAHI](https://medium.com/codable/sahi-a-vision-library-for-performing-sliced-inference-on-large-images-small-objects-c8b086af3b80)
- [Official paper](https://ieeexplore.ieee.org/document/9897990) (ICIP 2022 oral) (NEW)
- [Pretrained weights and ICIP 2022 paper files](https://github.com/fcakyon/small-object-detection-benchmark)
- [Video inference support is live](https://github.com/obss/sahi/discussions/626)
- [Kaggle notebook](https://www.kaggle.com/remekkinas/sahi-slicing-aided-hyper-inference-yv5-and-yx)
- [Satellite object detection](https://blog.ml6.eu/how-to-detect-small-objects-in-very-large-images-70234bab0f98)
- [Error analysis plots & evaluation](https://github.com/obss/sahi/discussions/622) (NEW)
- [Interactive result visualization and inspection](https://github.com/obss/sahi/discussions/624) (NEW)
- [COCO dataset conversion](https://medium.com/codable/convert-any-dataset-to-coco-object-detection-format-with-sahi-95349e1fe2b7)
- [Slicing operation notebook](demo/slicing.ipynb)
- `YOLOX` + `SAHI` demo: sahi-yolox (RECOMMENDED)
- `YOLOv5` + `SAHI` walkthrough: sahi-yolov5
- `MMDetection` + `SAHI` walkthrough: sahi-mmdetection
- `Detectron2` + `SAHI` walkthrough: sahi-detectron2
- `HuggingFace` + `SAHI` walkthrough: sahi-huggingface (NEW)
- `TorchVision` + `SAHI` walkthrough: sahi-torchvision (NEW)

### Installation
Installation details:

- Install `sahi` using pip:

```console
pip install sahi
```

- On Windows, `Shapely` needs to be installed via Conda:

```console
conda install -c conda-forge shapely
```

- Install your desired version of pytorch and torchvision (cuda 11.3 for detectron2, cuda 11.7 for the rest):

```console
conda install pytorch=1.10.2 torchvision=0.11.3 cudatoolkit=11.3 -c pytorch
```

```console
conda install pytorch=1.13.1 torchvision=0.14.1 pytorch-cuda=11.7 -c pytorch -c nvidia
```

- Install your desired detection framework (yolov5):

```console
pip install yolov5==7.0.4
```

- Install your desired detection framework (mmdet):

```console
pip install mmcv-full==1.7.0 -f https://download.openmmlab.com/mmcv/dist/cu117/torch1.13.0/index.html
```

```console
pip install mmdet==2.26.0
```

- Install your desired detection framework (detectron2):

```console
pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu113/torch1.10/index.html
```

- Install your desired detection framework (huggingface):

```console
pip install transformers timm
```
### Framework Agnostic Sliced/Standard Prediction

Find detailed info on the `sahi predict` command at [cli.md](docs/cli.md#predict-command-usage); a minimal Python sketch of standard (non-sliced) prediction follows this section.

Find detailed info on video inference at the [video inference tutorial](https://github.com/obss/sahi/discussions/626).

Find detailed info on image/dataset slicing utilities at [slicing.md](docs/slicing.md).

### Error Analysis Plots & Evaluation

Find detailed info at [Error Analysis Plots & Evaluation](https://github.com/obss/sahi/discussions/622).

### Interactive Visualization & Inspection

Find detailed info at [Interactive Result Visualization and Inspection](https://github.com/obss/sahi/discussions/624).

### Other utilities

Find detailed info on COCO utilities (yolov5 conversion, slicing, subsampling, filtering, merging, splitting) at [coco.md](docs/coco.md).

Find detailed info on MOT utilities (ground truth dataset creation, exporting tracker metrics in mot challenge format) at [mot.md](docs/mot.md).
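To complement the sliced workflow, here is a minimal sketch of standard (whole-image) prediction through `get_prediction`; model and image paths are placeholders, and attribute names should be checked against the docs for your sahi version:

```python
# Standard (non-sliced) prediction sketch with placeholder paths.
from sahi import AutoDetectionModel
from sahi.predict import get_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="yolov5s.pt",
    confidence_threshold=0.4,
    device="cpu",
)

result = get_prediction("small_image.jpg", detection_model)

# Detections are exposed as a list of ObjectPrediction instances.
for object_prediction in result.object_prediction_list:
    print(object_prediction.category.name, object_prediction.score.value)
```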
## Citation

If you use this package in your work, please cite it as:

```
@article{akyon2022sahi,
  title={Slicing Aided Hyper Inference and Fine-tuning for Small Object Detection},
  author={Akyon, Fatih Cagatay and Altinuc, Sinan Onur and Temizel, Alptekin},
  journal={2022 IEEE International Conference on Image Processing (ICIP)},
  doi={10.1109/ICIP46576.2022.9897990},
  pages={966-970},
  year={2022}
}
```

```
@software{obss2021sahi,
  author    = {Akyon, Fatih Cagatay and Cengiz, Cemil and Altinuc, Sinan Onur and Cavusoglu, Devrim and Sahin, Kadir and Eryuksel, Ogulcan},
  title     = {{SAHI: A lightweight vision library for performing large scale object detection and instance segmentation}},
  month     = nov,
  year      = 2021,
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.5718950},
  url       = {https://doi.org/10.5281/zenodo.5718950}
}
```
## Contributing

The `sahi` library currently supports all [YOLOv5 models](https://github.com/ultralytics/yolov5/releases), [MMDetection models](https://github.com/open-mmlab/mmdetection/blob/master/docs/en/model_zoo.md), [Detectron2 models](https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md), and [HuggingFace object detection models](https://huggingface.co/models?pipeline_tag=object-detection&sort=downloads). Moreover, it is easy to add new frameworks: create a new `.py` file under the [sahi/models/](https://github.com/obss/sahi/tree/main/sahi/models) folder and define a class in it that implements the [DetectionModel class](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/base.py#L12). You can take the [MMDetection wrapper](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/mmdet.py#L18) or the [YOLOv5 wrapper](https://github.com/obss/sahi/blob/7e48bdb6afda26f977b763abdd7d8c9c170636bd/sahi/models/yolov5.py#L17) as a reference.

Before opening a PR:

- Install required development packages:

```bash
pip install -e ."[dev]"
```

- Reformat with black and isort:

```bash
python -m scripts.run_code_style format
```
## Contributors

Fatih Cagatay Akyon, Sinan Onur Altinuc, Devrim Cavusoglu, Cemil Cengiz, Ogulcan Eryuksel, Kadir Nar, Burak Maden, Pushpak Bhoge, M. Can V., Christoffer Edlund, Ishwor, Mehmet Ecevit, Kadir Sahin, Wey Youngjae, Alzbeta Tureckova, Wei Ji, Aynur Susuz
%prep
%autosetup -n sahi-0.11.13

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-sahi -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Fri May 05 2023 Python_Bot - 0.11.13-1
- Package Spec generated