%global _empty_manifest_terminate_build 0
Name: python-rembg
Version: 2.0.35
Release: 1
Summary: Remove image background
License: MIT License
URL: https://github.com/danielgatis/rembg
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/a7/8c/0ac2711dfd4440225963395d00d4768528cc9c4572539dd4a51df067d159/rembg-2.0.35.tar.gz
BuildArch: noarch
Requires: python3-aiohttp
Requires: python3-asyncer
Requires: python3-click
Requires: python3-fastapi
Requires: python3-filetype
Requires: python3-imagehash
Requires: python3-numpy
Requires: python3-onnxruntime
Requires: python3-opencv-python-headless
Requires: python3-pillow
Requires: python3-pooch
Requires: python3-pymatting
Requires: python3-multipart
Requires: python3-scikit-image
Requires: python3-scipy
Requires: python3-tqdm
Requires: python3-uvicorn
Requires: python3-watchdog
Requires: python3-onnxruntime-gpu
%description
# Rembg
Rembg is a tool to remove image backgrounds.
**If this project has helped you, please consider making a [donation](https://www.buymeacoffee.com/danielgatis).**
## Requirements
```
python: >3.7, <3.11
```
## Installation
CPU support:
```bash
pip install rembg
```
GPU support:
First of all, check whether your system supports `onnxruntime-gpu`.
Go to https://onnxruntime.ai and check the installation matrix.
If it does, just run:
```bash
pip install rembg[gpu]
```
## Usage as a CLI
After installation, you can use rembg by simply typing `rembg` in your terminal.
The `rembg` command has 3 subcommands, one for each input type:
- `i` for files
- `p` for folders
- `s` for an HTTP server
You can get help for the main command using:
```
rembg --help
```
As well as for each subcommand, for example:
```
rembg i --help
```
### rembg `i`
Used when input and output are files.
Remove the background from a remote image
```
curl -s http://input.png | rembg i > output.png
```
Remove the background from a local file
```
rembg i path/to/input.png path/to/output.png
```
Remove the background specifying a model
```
rembg i -m u2netp path/to/input.png path/to/output.png
```
Remove the background returning only the mask
```
rembg i -om path/to/input.png path/to/output.png
```
Remove the background applying alpha matting
```
rembg i -a path/to/input.png path/to/output.png
```
Passing extra parameters
```
rembg i -m sam -x '{"input_labels": [1], "input_points": [[100,100]]}' path/to/input.png path/to/output.png
```
### rembg `p`
Used when input and output are folders.
Remove the background from all images in a folder
```
rembg p path/to/input path/to/output
```
Same as before, but watching for new/changed files to process
```
rembg p -w path/to/input path/to/output
```
### rembg `s`
Used to start an HTTP server.
To see the complete endpoint documentation, go to `http://localhost:5000/docs`.
Remove the background from an image URL
```
curl -s "http://localhost:5000/?url=http://input.png" -o output.png
```
Remove the background from an uploaded image
```
curl -s -F file=@/path/to/input.jpg "http://localhost:5000" -o output.png
```
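The same upload can also be done from Python; a minimal sketch using the `requests` library, assuming the server has already been started with `rembg s` and is listening on the default port 5000 used in the examples above:
```python
import requests

# Hypothetical input/output paths; the server must already be running
# (started with `rembg s`) on localhost:5000, as in the curl examples above.
with open("input.jpg", "rb") as f:
    resp = requests.post("http://localhost:5000", files={"file": f})
resp.raise_for_status()

with open("output.png", "wb") as out:
    out.write(resp.content)
```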
## Usage as a library
Input and output as bytes
```python
from rembg import remove
input_path = 'input.png'
output_path = 'output.png'
with open(input_path, 'rb') as i:
    with open(output_path, 'wb') as o:
        input = i.read()
        output = remove(input)
        o.write(output)
```
Input and output as a PIL image
```python
from rembg import remove
from PIL import Image
input_path = 'input.png'
output_path = 'output.png'
input = Image.open(input_path)
output = remove(input)
output.save(output_path)
```
Input and output as a numpy array
```python
from rembg import remove
import cv2
input_path = 'input.png'
output_path = 'output.png'
input = cv2.imread(input_path)
output = remove(input)
cv2.imwrite(output_path, output)
```
How to iterate over files in a performant way
```python
from pathlib import Path
from rembg import remove, new_session
session = new_session()
for file in Path('path/to/folder').glob('*.png'):
    input_path = str(file)
    output_path = str(file.parent / (file.stem + ".out.png"))

    with open(input_path, 'rb') as i:
        with open(output_path, 'wb') as o:
            input = i.read()
            output = remove(input, session=session)
            o.write(output)
```
To see a full list of examples on how to use rembg, go to the [examples](USAGE.md) page.
## Usage as a docker
Just replace the `rembg` command with `docker run danielgatis/rembg`.
Try this:
```
docker run danielgatis/rembg i path/to/input.png path/to/output.png
```
## Models
All models are downloaded and saved to the `.u2net` directory in the user's home folder.
The available models are:
- u2net ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net.onnx), [source](https://github.com/xuebinqin/U-2-Net)): A pre-trained model for general use cases.
- u2netp ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2netp.onnx), [source](https://github.com/xuebinqin/U-2-Net)): A lightweight version of the u2net model.
- u2net_human_seg ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net_human_seg.onnx), [source](https://github.com/xuebinqin/U-2-Net)): A pre-trained model for human segmentation.
- u2net_cloth_seg ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net_cloth_seg.onnx), [source](https://github.com/levindabhi/cloth-segmentation)): A pre-trained model for cloth parsing from human portraits. Clothes are parsed into 3 categories: upper body, lower body and full body.
- silueta ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/silueta.onnx), [source](https://github.com/xuebinqin/U-2-Net/issues/295)): Same as u2net, but the size is reduced to 43 MB.
- isnet-general-use ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/isnet-general-use.onnx), [source](https://github.com/xuebinqin/DIS)): A new pre-trained model for general use cases.
- sam ([download encoder](https://github.com/danielgatis/rembg/releases/download/v0.0.0/vit_b-encoder-quant.onnx), [download decoder](https://github.com/danielgatis/rembg/releases/download/v0.0.0/vit_b-decoder-quant.onnx), [source](https://github.com/facebookresearch/segment-anything)): A pre-trained model for any use case.
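From the library, one of the models listed above can be selected when creating a session; a minimal sketch, assuming `new_session()` accepts the model name in the same way the CLI `-m` flag does:
```python
from rembg import remove, new_session

# Assumption: new_session() takes a model name from the list above,
# mirroring the CLI's -m flag; "u2netp" is used here as an example.
session = new_session("u2netp")

with open("input.png", "rb") as i, open("output.png", "wb") as o:
    o.write(remove(i.read(), session=session))
```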
### How to train your own model
If you need more finely tuned models, try this:
https://github.com/danielgatis/rembg/issues/193#issuecomment-1055534289
## Some video tutorials
- https://www.youtube.com/watch?v=3xqwpXjxyMQ
- https://www.youtube.com/watch?v=dFKRGXdkGJU
- https://www.youtube.com/watch?v=Ai-BS_T7yjE
- https://www.youtube.com/watch?v=D7W-C0urVcQ
## References
- https://arxiv.org/pdf/2005.09007.pdf
- https://github.com/NathanUA/U-2-Net
- https://github.com/pymatting/pymatting
## Buy me a coffee
Liked some of my work? Buy me a coffee (or more likely a beer)
## License
Copyright (c) 2020-present [Daniel Gatis](https://github.com/danielgatis)
Licensed under [MIT License](./LICENSE.txt)
%package -n python3-rembg
Summary: Remove image background
Provides: python-rembg
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-rembg
# Rembg
Rembg is a tool to remove image backgrounds.
**If this project has helped you, please consider making a [donation](https://www.buymeacoffee.com/danielgatis).**
## Requirements
```
python: >3.7, <3.11
```
## Installation
CPU support:
```bash
pip install rembg
```
GPU support:
First of all, check whether your system supports `onnxruntime-gpu`.
Go to https://onnxruntime.ai and check the installation matrix.
If it does, just run:
```bash
pip install rembg[gpu]
```
## Usage as a CLI
After installation, you can use rembg by simply typing `rembg` in your terminal.
The `rembg` command has 3 subcommands, one for each input type:
- `i` for files
- `p` for folders
- `s` for an HTTP server
You can get help for the main command using:
```
rembg --help
```
As well as for each subcommand, for example:
```
rembg i --help
```
### rembg `i`
Used when input and output are files.
Remove the background from a remote image
```
curl -s http://input.png | rembg i > output.png
```
Remove the background from a local file
```
rembg i path/to/input.png path/to/output.png
```
Remove the background specifying a model
```
rembg i -m u2netp path/to/input.png path/to/output.png
```
Remove the background returning only the mask
```
rembg i -om path/to/input.png path/to/output.png
```
Remove the background applying alpha matting
```
rembg i -a path/to/input.png path/to/output.png
```
Passing extra parameters
```
rembg i -m sam -x '{"input_labels": [1], "input_points": [[100,100]]}' path/to/input.png path/to/output.png
```
### rembg `p`
Used when input and output are folders.
Remove the background from all images in a folder
```
rembg p path/to/input path/to/output
```
Same as before, but watching for new/changed files to process
```
rembg p -w path/to/input path/to/output
```
### rembg `s`
Used to start an HTTP server.
To see the complete endpoint documentation, go to `http://localhost:5000/docs`.
Remove the background from an image URL
```
curl -s "http://localhost:5000/?url=http://input.png" -o output.png
```
Remove the background from an uploaded image
```
curl -s -F file=@/path/to/input.jpg "http://localhost:5000" -o output.png
```
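The same upload can also be done from Python; a minimal sketch using the `requests` library, assuming the server has already been started with `rembg s` and is listening on the default port 5000 used in the examples above:
```python
import requests

# Hypothetical input/output paths; the server must already be running
# (started with `rembg s`) on localhost:5000, as in the curl examples above.
with open("input.jpg", "rb") as f:
    resp = requests.post("http://localhost:5000", files={"file": f})
resp.raise_for_status()

with open("output.png", "wb") as out:
    out.write(resp.content)
```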
## Usage as a library
Input and output as bytes
```python
from rembg import remove
input_path = 'input.png'
output_path = 'output.png'
with open(input_path, 'rb') as i:
    with open(output_path, 'wb') as o:
        input = i.read()
        output = remove(input)
        o.write(output)
```
Input and output as a PIL image
```python
from rembg import remove
from PIL import Image
input_path = 'input.png'
output_path = 'output.png'
input = Image.open(input_path)
output = remove(input)
output.save(output_path)
```
Input and output as a numpy array
```python
from rembg import remove
import cv2
input_path = 'input.png'
output_path = 'output.png'
input = cv2.imread(input_path)
output = remove(input)
cv2.imwrite(output_path, output)
```
How to iterate over files in a performant way
```python
from pathlib import Path
from rembg import remove, new_session
session = new_session()
for file in Path('path/to/folder').glob('*.png'):
    input_path = str(file)
    output_path = str(file.parent / (file.stem + ".out.png"))

    with open(input_path, 'rb') as i:
        with open(output_path, 'wb') as o:
            input = i.read()
            output = remove(input, session=session)
            o.write(output)
```
To see a full list of examples on how to use rembg, go to the [examples](USAGE.md) page.
## Usage as a docker
Just replace the `rembg` command with `docker run danielgatis/rembg`.
Try this:
```
docker run danielgatis/rembg i path/to/input.png path/to/output.png
```
## Models
All models are downloaded and saved to the `.u2net` directory in the user's home folder.
The available models are:
- u2net ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net.onnx), [source](https://github.com/xuebinqin/U-2-Net)): A pre-trained model for general use cases.
- u2netp ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2netp.onnx), [source](https://github.com/xuebinqin/U-2-Net)): A lightweight version of the u2net model.
- u2net_human_seg ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net_human_seg.onnx), [source](https://github.com/xuebinqin/U-2-Net)): A pre-trained model for human segmentation.
- u2net_cloth_seg ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net_cloth_seg.onnx), [source](https://github.com/levindabhi/cloth-segmentation)): A pre-trained model for cloth parsing from human portraits. Clothes are parsed into 3 categories: upper body, lower body and full body.
- silueta ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/silueta.onnx), [source](https://github.com/xuebinqin/U-2-Net/issues/295)): Same as u2net, but the size is reduced to 43 MB.
- isnet-general-use ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/isnet-general-use.onnx), [source](https://github.com/xuebinqin/DIS)): A new pre-trained model for general use cases.
- sam ([download encoder](https://github.com/danielgatis/rembg/releases/download/v0.0.0/vit_b-encoder-quant.onnx), [download decoder](https://github.com/danielgatis/rembg/releases/download/v0.0.0/vit_b-decoder-quant.onnx), [source](https://github.com/facebookresearch/segment-anything)): A pre-trained model for any use case.
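From the library, one of the models listed above can be selected when creating a session; a minimal sketch, assuming `new_session()` accepts the model name in the same way the CLI `-m` flag does:
```python
from rembg import remove, new_session

# Assumption: new_session() takes a model name from the list above,
# mirroring the CLI's -m flag; "u2netp" is used here as an example.
session = new_session("u2netp")

with open("input.png", "rb") as i, open("output.png", "wb") as o:
    o.write(remove(i.read(), session=session))
```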
### How to train your own model
If you need more finely tuned models, try this:
https://github.com/danielgatis/rembg/issues/193#issuecomment-1055534289
## Some video tutorials
- https://www.youtube.com/watch?v=3xqwpXjxyMQ
- https://www.youtube.com/watch?v=dFKRGXdkGJU
- https://www.youtube.com/watch?v=Ai-BS_T7yjE
- https://www.youtube.com/watch?v=D7W-C0urVcQ
## References
- https://arxiv.org/pdf/2005.09007.pdf
- https://github.com/NathanUA/U-2-Net
- https://github.com/pymatting/pymatting
## Buy me a coffee
Liked some of my work? Buy me a coffee (or more likely a beer)
## License
Copyright (c) 2020-present [Daniel Gatis](https://github.com/danielgatis)
Licensed under [MIT License](./LICENSE.txt)
%package help
Summary: Development documents and examples for rembg
Provides: python3-rembg-doc
%description help
# Rembg
Rembg is a tool to remove image backgrounds.
**If this project has helped you, please consider making a [donation](https://www.buymeacoffee.com/danielgatis).**
## Requirements
```
python: >3.7, <3.11
```
## Installation
CPU support:
```bash
pip install rembg
```
GPU support:
First of all, check whether your system supports `onnxruntime-gpu`.
Go to https://onnxruntime.ai and check the installation matrix.
If it does, just run:
```bash
pip install rembg[gpu]
```
## Usage as a CLI
After installation, you can use rembg by simply typing `rembg` in your terminal.
The `rembg` command has 3 subcommands, one for each input type:
- `i` for files
- `p` for folders
- `s` for an HTTP server
You can get help for the main command using:
```
rembg --help
```
As well as for each subcommand, for example:
```
rembg i --help
```
### rembg `i`
Used when input and output are files.
Remove the background from a remote image
```
curl -s http://input.png | rembg i > output.png
```
Remove the background from a local file
```
rembg i path/to/input.png path/to/output.png
```
Remove the background specifying a model
```
rembg i -m u2netp path/to/input.png path/to/output.png
```
Remove the background returning only the mask
```
rembg i -om path/to/input.png path/to/output.png
```
Remove the background applying alpha matting
```
rembg i -a path/to/input.png path/to/output.png
```
Passing extra parameters
```
rembg i -m sam -x '{"input_labels": [1], "input_points": [[100,100]]}' path/to/input.png path/to/output.png
```
### rembg `p`
Used when input and output are folders.
Remove the background from all images in a folder
```
rembg p path/to/input path/to/output
```
Same as before, but watching for new/changed files to process
```
rembg p -w path/to/input path/to/output
```
### rembg `s`
Used to start an HTTP server.
To see the complete endpoint documentation, go to `http://localhost:5000/docs`.
Remove the background from an image URL
```
curl -s "http://localhost:5000/?url=http://input.png" -o output.png
```
Remove the background from an uploaded image
```
curl -s -F file=@/path/to/input.jpg "http://localhost:5000" -o output.png
```
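The same upload can also be done from Python; a minimal sketch using the `requests` library, assuming the server has already been started with `rembg s` and is listening on the default port 5000 used in the examples above:
```python
import requests

# Hypothetical input/output paths; the server must already be running
# (started with `rembg s`) on localhost:5000, as in the curl examples above.
with open("input.jpg", "rb") as f:
    resp = requests.post("http://localhost:5000", files={"file": f})
resp.raise_for_status()

with open("output.png", "wb") as out:
    out.write(resp.content)
```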
## Usage as a library
Input and output as bytes
```python
from rembg import remove
input_path = 'input.png'
output_path = 'output.png'
with open(input_path, 'rb') as i:
    with open(output_path, 'wb') as o:
        input = i.read()
        output = remove(input)
        o.write(output)
```
Input and output as a PIL image
```python
from rembg import remove
from PIL import Image
input_path = 'input.png'
output_path = 'output.png'
input = Image.open(input_path)
output = remove(input)
output.save(output_path)
```
Input and output as a numpy array
```python
from rembg import remove
import cv2
input_path = 'input.png'
output_path = 'output.png'
input = cv2.imread(input_path)
output = remove(input)
cv2.imwrite(output_path, output)
```
How to iterate over files in a performant way
```python
from pathlib import Path
from rembg import remove, new_session
session = new_session()
for file in Path('path/to/folder').glob('*.png'):
    input_path = str(file)
    output_path = str(file.parent / (file.stem + ".out.png"))

    with open(input_path, 'rb') as i:
        with open(output_path, 'wb') as o:
            input = i.read()
            output = remove(input, session=session)
            o.write(output)
```
To see a full list of examples on how to use rembg, go to the [examples](USAGE.md) page.
## Usage as a docker
Just replace the `rembg` command with `docker run danielgatis/rembg`.
Try this:
```
docker run danielgatis/rembg i path/to/input.png path/to/output.png
```
## Models
All models are downloaded and saved to the `.u2net` directory in the user's home folder.
The available models are:
- u2net ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net.onnx), [source](https://github.com/xuebinqin/U-2-Net)): A pre-trained model for general use cases.
- u2netp ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2netp.onnx), [source](https://github.com/xuebinqin/U-2-Net)): A lightweight version of the u2net model.
- u2net_human_seg ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net_human_seg.onnx), [source](https://github.com/xuebinqin/U-2-Net)): A pre-trained model for human segmentation.
- u2net_cloth_seg ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net_cloth_seg.onnx), [source](https://github.com/levindabhi/cloth-segmentation)): A pre-trained model for cloth parsing from human portraits. Clothes are parsed into 3 categories: upper body, lower body and full body.
- silueta ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/silueta.onnx), [source](https://github.com/xuebinqin/U-2-Net/issues/295)): Same as u2net, but the size is reduced to 43 MB.
- isnet-general-use ([download](https://github.com/danielgatis/rembg/releases/download/v0.0.0/isnet-general-use.onnx), [source](https://github.com/xuebinqin/DIS)): A new pre-trained model for general use cases.
- sam ([download encoder](https://github.com/danielgatis/rembg/releases/download/v0.0.0/vit_b-encoder-quant.onnx), [download decoder](https://github.com/danielgatis/rembg/releases/download/v0.0.0/vit_b-decoder-quant.onnx), [source](https://github.com/facebookresearch/segment-anything)): A pre-trained model for any use case.
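From the library, one of the models listed above can be selected when creating a session; a minimal sketch, assuming `new_session()` accepts the model name in the same way the CLI `-m` flag does:
```python
from rembg import remove, new_session

# Assumption: new_session() takes a model name from the list above,
# mirroring the CLI's -m flag; "u2netp" is used here as an example.
session = new_session("u2netp")

with open("input.png", "rb") as i, open("output.png", "wb") as o:
    o.write(remove(i.read(), session=session))
```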
### How to train your own model
If you need more finely tuned models, try this:
https://github.com/danielgatis/rembg/issues/193#issuecomment-1055534289
## Some video tutorials
- https://www.youtube.com/watch?v=3xqwpXjxyMQ
- https://www.youtube.com/watch?v=dFKRGXdkGJU
- https://www.youtube.com/watch?v=Ai-BS_T7yjE
- https://www.youtube.com/watch?v=D7W-C0urVcQ
## References
- https://arxiv.org/pdf/2005.09007.pdf
- https://github.com/NathanUA/U-2-Net
- https://github.com/pymatting/pymatting
## Buy me a coffee
Liked some of my work? Buy me a coffee (or more likely a beer)
## License
Copyright (c) 2020-present [Daniel Gatis](https://github.com/danielgatis)
Licensed under [MIT License](./LICENSE.txt)
%prep
%autosetup -n rembg-2.0.35
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-rembg -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Fri May 05 2023 Python_Bot - 2.0.35-1
- Package Spec generated