| Mode | File | Lines added |
| --- | --- | --- |
| -rw-r--r-- | .gitignore | 1 |
| -rw-r--r-- | python-vollseg.spec | 373 |
| -rw-r--r-- | sources | 1 |
3 files changed, 375 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
new file mode 100644
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+/vollseg-10.8.9.tar.gz
diff --git a/python-vollseg.spec b/python-vollseg.spec
new file mode 100644
index 0000000..0623557
--- /dev/null
+++ b/python-vollseg.spec
@@ -0,0 +1,373 @@
+%global _empty_manifest_terminate_build 0
+Name:		python-vollseg
+Version:	10.8.9
+Release:	1
+Summary:	Segmentation tool for biological cells of irregular size and shape in 3D and 2D.
+License:	BSD-3-Clause
+URL:		https://github.com/kapoorlab/vollseg/
+Source0:	https://mirrors.nju.edu.cn/pypi/web/packages/dd/b4/0309484cd5351270c72b6bf12ffc4c5c1ea90de95347f2539dc9ec515111/vollseg-10.8.9.tar.gz
+BuildArch:	noarch
+
+Requires:	python3-pandas
+Requires:	python3-stardist
+Requires:	python3-scipy
+Requires:	python3-tifffile
+Requires:	python3-matplotlib
+Requires:	python3-napari
+Requires:	python3-cellpose-vollseg
+Requires:	python3-torch
+Requires:	python3-test-tube
+Requires:	python3-lightning
+Requires:	python3-tox
+Requires:	python3-pytest
+Requires:	python3-pytest-cov
+
+%description
+# VollSeg
+
+[](https://travis-ci.com/github/kapoorlab/vollseg)
+[](https://pypi.org/project/vollseg/)
+[](https://github.com/kapoorlab/napari-vollseg/raw/main/LICENSE)
+[](https://twitter.com/entracod)
+
+3D segmentation tool for irregularly shaped cells
+
+
+## Installation
+This package can be installed with:
+
+`pip install --user vollseg`
+
+For GPU support, install PyTorch with mamba:
+
+`mamba install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia`
+
+If you are building from source, clone the repository and install it via:
+
+```bash
+git clone https://github.com/kapoorlab/vollseg/
+
+cd vollseg
+
+pip install --user -e .
+
+# optional: GPU-enabled PyTorch via mamba
+mamba install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
+```
+
+
+### Pipenv install
+
+Pipenv allows you to install the dependencies in a virtual environment.
+
+```bash
+# install pipenv if you don't already have it installed
+pip install --user pipenv
+
+# clone the repository and sync the dependencies
+git clone https://github.com/kapoorlab/vollseg/
+cd vollseg
+pipenv sync
+
+# make the current package available
+pipenv run python setup.py develop
+
+# optional: GPU-enabled PyTorch via mamba
+mamba install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
+
+# you can run the example notebooks by starting a jupyter notebook inside the virtual env
+pipenv run jupyter notebook
+```
+
+Access the `example` folder and run the cells.
+
+## Algorithm
+
+Schematic representation of the segmentation approach used in VollSeg. First, we input the raw 3D fluorescence image (A) and preprocess it to remove noise. Next, we obtain the star-convex approximation to the cells using StarDist (B) and the U-Net prediction labelled via connected components (C). We then obtain seeds from the centroids of the labelled image in (B) and, for each labelled region of (C), create bounding boxes and centroids. If there is no seed from (B) inside a U-Net bounding box, we add that region's centroid (in yellow) to the seed pool (D). Finally, we perform a marker-controlled watershed in 3D, using the skimage implementation, on the probability map shown in (E) to obtain the final cell segmentation result shown in (F). All images are displayed in the napari viewer in 3D display mode.
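+
+The seed pooling described above can be pictured in a few lines of Python. This is a simplified illustration, not the code used inside VollSeg: the function name `pool_seeds_and_watershed` and the assumption of 3D (z, y, x) arrays of matching shape are made up for the example; only the scikit-image calls are real.
+
+```python
+import numpy as np
+from skimage.measure import label, regionprops
+from skimage.segmentation import watershed
+
+
+def pool_seeds_and_watershed(stardist_labels, unet_mask, probability_map):
+    """Hypothetical sketch of panels (B)-(F): merge seeds, then watershed."""
+    # Seeds from the centroids of the StarDist labels (B).
+    seeds = [tuple(map(int, r.centroid)) for r in regionprops(stardist_labels)]
+
+    # Label the U-Net mask via connected components (C) and, for every region
+    # whose bounding box contains no StarDist seed, add its centroid (D).
+    for region in regionprops(label(unet_mask)):
+        zmin, ymin, xmin, zmax, ymax, xmax = region.bbox
+        covered = any(
+            zmin <= z < zmax and ymin <= y < ymax and xmin <= x < xmax
+            for z, y, x in seeds
+        )
+        if not covered:
+            seeds.append(tuple(map(int, region.centroid)))
+
+    # Turn the pooled seeds into watershed markers.
+    markers = np.zeros(probability_map.shape, dtype=np.int32)
+    for idx, (z, y, x) in enumerate(seeds, start=1):
+        markers[z, y, x] = idx
+
+    # Marker-controlled watershed on the inverted probability map (E),
+    # restricted to the U-Net foreground, yields the final labels (F).
+    return watershed(-probability_map, markers, mask=unet_mask > 0)
+```
+
+The actual pipeline wraps this step with denoising and model inference; see the training and prediction notebooks linked below.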
+
+## Example
+
+To try the provided notebooks we provide an example Arabidopsis dataset, [Binary Images](https://doi.org/10.5281/zenodo.5217367), [Raw Images](https://doi.org/10.5281/zenodo.5217394) and [Labelled Images](https://doi.org/10.5281/zenodo.5217341), together with trained models: [StarDist](https://doi.org/10.5281/zenodo.5227304), [Denoising](https://doi.org/10.5281/zenodo.5227316), [U-Net](https://doi.org/10.5281/zenodo.5227301). To train the networks, use this notebook in [Colab](https://github.com/kapoorlab/VollSeg/blob/main/examples/Train/ColabTrainModel.ipynb). To train a denoising model using Noise2Void, use this [notebook](https://github.com/kapoorlab/VollSeg/blob/main/examples/Train/ColabN2VTrain.ipynb).
+
+
+## Docker
+
+A Docker image can be used to run the code in a container. Once inside the project's directory, build the image with:
+
+~~~bash
+docker build -t voll .
+~~~
+
+Now, to run the `track` command:
+
+~~~bash
+# show help
+docker run --rm -it voll
+~~~
+
+
+## Requirements
+
+- Python 3.7 and above.
+
+
+## License
+
+Released under the MIT license. See [LICENSE](LICENSE).
+
+## Authors
+
+- Varun Kapoor <randomaccessiblekapoor@gmail.com>
+- Claudia Carabaña
+- Mari Tolonen
+
+
+%package -n python3-vollseg
+Summary:	Segmentation tool for biological cells of irregular size and shape in 3D and 2D.
+Provides:	python-vollseg
+BuildRequires:	python3-devel
+BuildRequires:	python3-setuptools
+BuildRequires:	python3-pip
+%description -n python3-vollseg
+# VollSeg
+
+[](https://travis-ci.com/github/kapoorlab/vollseg)
+[](https://pypi.org/project/vollseg/)
+[](https://github.com/kapoorlab/napari-vollseg/raw/main/LICENSE)
+[](https://twitter.com/entracod)
+
+3D segmentation tool for irregularly shaped cells
+
+
+## Installation
+This package can be installed with:
+
+`pip install --user vollseg`
+
+For GPU support, install PyTorch with mamba:
+
+`mamba install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia`
+
+If you are building from source, clone the repository and install it via:
+
+```bash
+git clone https://github.com/kapoorlab/vollseg/
+
+cd vollseg
+
+pip install --user -e .
+
+# optional: GPU-enabled PyTorch via mamba
+mamba install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
+```
+
+
+### Pipenv install
+
+Pipenv allows you to install the dependencies in a virtual environment.
+
+```bash
+# install pipenv if you don't already have it installed
+pip install --user pipenv
+
+# clone the repository and sync the dependencies
+git clone https://github.com/kapoorlab/vollseg/
+cd vollseg
+pipenv sync
+
+# make the current package available
+pipenv run python setup.py develop
+
+# optional: GPU-enabled PyTorch via mamba
+mamba install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
+
+# you can run the example notebooks by starting a jupyter notebook inside the virtual env
+pipenv run jupyter notebook
+```
+
+Access the `example` folder and run the cells.
+
+## Algorithm
+
+Schematic representation of the segmentation approach used in VollSeg. First, we input the raw 3D fluorescence image (A) and preprocess it to remove noise. Next, we obtain the star-convex approximation to the cells using StarDist (B) and the U-Net prediction labelled via connected components (C). We then obtain seeds from the centroids of the labelled image in (B) and, for each labelled region of (C), create bounding boxes and centroids. If there is no seed from (B) inside a U-Net bounding box, we add that region's centroid (in yellow) to the seed pool (D). Finally, we perform a marker-controlled watershed in 3D, using the skimage implementation, on the probability map shown in (E) to obtain the final cell segmentation result shown in (F). All images are displayed in the napari viewer in 3D display mode.
+
+## Example
+
+To try the provided notebooks we provide an example Arabidopsis dataset, [Binary Images](https://doi.org/10.5281/zenodo.5217367), [Raw Images](https://doi.org/10.5281/zenodo.5217394) and [Labelled Images](https://doi.org/10.5281/zenodo.5217341), together with trained models: [StarDist](https://doi.org/10.5281/zenodo.5227304), [Denoising](https://doi.org/10.5281/zenodo.5227316), [U-Net](https://doi.org/10.5281/zenodo.5227301). To train the networks, use this notebook in [Colab](https://github.com/kapoorlab/VollSeg/blob/main/examples/Train/ColabTrainModel.ipynb). To train a denoising model using Noise2Void, use this [notebook](https://github.com/kapoorlab/VollSeg/blob/main/examples/Train/ColabN2VTrain.ipynb).
+
+
+## Docker
+
+A Docker image can be used to run the code in a container. Once inside the project's directory, build the image with:
+
+~~~bash
+docker build -t voll .
+~~~
+
+Now, to run the `track` command:
+
+~~~bash
+# show help
+docker run --rm -it voll
+~~~
+
+
+## Requirements
+
+- Python 3.7 and above.
+
+
+## License
+
+Released under the MIT license. See [LICENSE](LICENSE).
+
+## Authors
+
+- Varun Kapoor <randomaccessiblekapoor@gmail.com>
+- Claudia Carabaña
+- Mari Tolonen
+
+
+%package help
+Summary:	Development documents and examples for vollseg
+Provides:	python3-vollseg-doc
+%description help
+# VollSeg
+
+[](https://travis-ci.com/github/kapoorlab/vollseg)
+[](https://pypi.org/project/vollseg/)
+[](https://github.com/kapoorlab/napari-vollseg/raw/main/LICENSE)
+[](https://twitter.com/entracod)
+
+3D segmentation tool for irregularly shaped cells
+
+
+## Installation
+This package can be installed with:
+
+`pip install --user vollseg`
+
+For GPU support, install PyTorch with mamba:
+
+`mamba install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia`
+
+If you are building from source, clone the repository and install it via:
+
+```bash
+git clone https://github.com/kapoorlab/vollseg/
+
+cd vollseg
+
+pip install --user -e .
+
+# optional: GPU-enabled PyTorch via mamba
+mamba install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
+```
+
+
+### Pipenv install
+
+Pipenv allows you to install the dependencies in a virtual environment.
+
+```bash
+# install pipenv if you don't already have it installed
+pip install --user pipenv
+
+# clone the repository and sync the dependencies
+git clone https://github.com/kapoorlab/vollseg/
+cd vollseg
+pipenv sync
+
+# make the current package available
+pipenv run python setup.py develop
+
+# optional: GPU-enabled PyTorch via mamba
+mamba install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia
+
+# you can run the example notebooks by starting a jupyter notebook inside the virtual env
+pipenv run jupyter notebook
+```
+
+Access the `example` folder and run the cells.
+
+## Algorithm
+
+Schematic representation of the segmentation approach used in VollSeg. First, we input the raw 3D fluorescence image (A) and preprocess it to remove noise. Next, we obtain the star-convex approximation to the cells using StarDist (B) and the U-Net prediction labelled via connected components (C). We then obtain seeds from the centroids of the labelled image in (B) and, for each labelled region of (C), create bounding boxes and centroids. If there is no seed from (B) inside a U-Net bounding box, we add that region's centroid (in yellow) to the seed pool (D). Finally, we perform a marker-controlled watershed in 3D, using the skimage implementation, on the probability map shown in (E) to obtain the final cell segmentation result shown in (F). All images are displayed in the napari viewer in 3D display mode.
+
+## Example
+
+To try the provided notebooks we provide an example Arabidopsis dataset, [Binary Images](https://doi.org/10.5281/zenodo.5217367), [Raw Images](https://doi.org/10.5281/zenodo.5217394) and [Labelled Images](https://doi.org/10.5281/zenodo.5217341), together with trained models: [StarDist](https://doi.org/10.5281/zenodo.5227304), [Denoising](https://doi.org/10.5281/zenodo.5227316), [U-Net](https://doi.org/10.5281/zenodo.5227301). To train the networks, use this notebook in [Colab](https://github.com/kapoorlab/VollSeg/blob/main/examples/Train/ColabTrainModel.ipynb). To train a denoising model using Noise2Void, use this [notebook](https://github.com/kapoorlab/VollSeg/blob/main/examples/Train/ColabN2VTrain.ipynb).
+
+
+## Docker
+
+A Docker image can be used to run the code in a container. Once inside the project's directory, build the image with:
+
+~~~bash
+docker build -t voll .
+~~~
+
+Now, to run the `track` command:
+
+~~~bash
+# show help
+docker run --rm -it voll
+~~~
+
+
+## Requirements
+
+- Python 3.7 and above.
+
+
+## License
+
+Released under the MIT license. See [LICENSE](LICENSE).
+
+## Authors
+
+- Varun Kapoor <randomaccessiblekapoor@gmail.com>
+- Claudia Carabaña
+- Mari Tolonen
+
+
+%prep
+%autosetup -n vollseg-10.8.9
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+	find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+	find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+	find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+	find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+	find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-vollseg -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Tue Apr 11 2023 Python_Bot <Python_Bot@openeuler.org> - 10.8.9-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+b59026a7e951e408047957c7eff5dcf5 vollseg-10.8.9.tar.gz
