authorCoprDistGit <infra@openeuler.org>2023-05-15 07:59:38 +0000
committerCoprDistGit <infra@openeuler.org>2023-05-15 07:59:38 +0000
commitc2ff162dcb4478e759ed7d15e7ccd7833bfdab26 (patch)
tree790ee67ea303cf9d6e9d72908cafc884f5575463
parentb48b50bf19369ad8552a46cf611b8b45298977bf (diff)
automatic import of python-seg-metrics
-rw-r--r--.gitignore1
-rw-r--r--python-seg-metrics.spec777
-rw-r--r--sources1
3 files changed, 779 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..5af4438 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/seg_metrics-1.1.3.tar.gz
diff --git a/python-seg-metrics.spec b/python-seg-metrics.spec
new file mode 100644
index 0000000..777613b
--- /dev/null
+++ b/python-seg-metrics.spec
@@ -0,0 +1,777 @@
+%global _empty_manifest_terminate_build 0
+Name: python-seg-metrics
+Version: 1.1.3
+Release: 1
+Summary: A package to compute different segmentation metrics for 2D/3D medical images.
+License: MIT License
+URL: https://github.com/Ordgod/segmentation_metrics
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/e2/2a/609b24ca37b73776253bb8e5a120d0f5efa41f6a030e23d889fe9554729a/seg_metrics-1.1.3.tar.gz
+BuildArch: noarch
+
+Requires: python3-pandas
+Requires: python3-numpy
+Requires: python3-coverage
+Requires: python3-matplotlib
+Requires: python3-parameterized
+Requires: python3-tqdm
+Requires: python3-medutils
+Requires: python3-PySimpleGUI
+Requires: python3-SimpleITK
+
+%description
+# Segmentation Metrics Package [![DOI](https://zenodo.org/badge/273067948.svg)](https://zenodo.org/badge/latestdoi/273067948)
+![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/Ordgod/segmentation_metrics)
+![publish workflow status](https://github.com/Jingnan-Jia/segmentation_metrics/actions/workflows/python-publish.yml/badge.svg)
+[![codecov](https://codecov.io/gh/Jingnan-Jia/segmentation_metrics/branch/master/graph/badge.svg?token=UO1QSYBEU6)](https://codecov.io/gh/Jingnan-Jia/segmentation_metrics)
+![test workflow status](https://github.com/Jingnan-Jia/segmentation_metrics/actions/workflows/test_and_coverage.yml/badge.svg?branch=master)
+[![OSCS Status](https://www.oscs1024.com/platform/badge/Jingnan-Jia/segmentation_metrics.svg?size=small)](https://www.oscs1024.com/project/Jingnan-Jia/segmentation_metrics?ref=badge_small)
+
+This is a simple package to compute different metrics for **medical** image segmentation (images with suffix `.mhd`, `.mha`, `.nii`, `.nii.gz` or `.nrrd`) and write them to a CSV file.
+
+## Summary
+There are several ways to assess segmentation performance. The two main families are volume-based metrics and distance-based metrics.
+
+## Metrics included
+This library computes the following performance metrics for segmentation:
+
+### Voxel based metrics
+- Dice (F-1)
+- Jaccard
+- Precision
+- Recall
+- False positive rate
+- False negative rate
+- Volume similarity
+
+
+The equations for these metrics can be found on [Wikipedia](https://en.wikipedia.org/wiki/Precision_and_recall).
+
+### Surface Distance based metrics (with spacing as default)
+- [Hausdorff distance](https://en.wikipedia.org/wiki/Hausdorff_distance)
+- Hausdorff distance, 95th percentile
+- Mean (Average) surface distance
+- Median surface distance
+- Std surface distance
+
+**Note**: These metrics are **symmetric**, which means the distance from A to B is the same as the distance from B to A.
+
+For each contour voxel of the segmented volume (A), the Euclidean distance from the closest contour voxel of the reference volume (B) is computed and stored as `list1`. This computation is also performed for the contour voxels of the reference volume (B), stored as `list2`. `list1` and `list2` are merged to get `list3`.
+- `Hausdorff distance` is the maximum value of `list3`.
+- `Hausdorff distance 95% percentile` is the 95th percentile of `list3`.
+- `Mean (Average) surface distance` is the mean value of `list3`.
+- `Median surface distance` is the median value of `list3`.
+- `Std surface distance` is the standard deviation of `list3`.
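The `list1`/`list2`/`list3` construction above can be sketched in a few lines of NumPy. This is a hedged, brute-force illustration only, not the package's actual implementation: for brevity every foreground voxel is treated as a contour voxel, and all pairwise distances are computed explicitly.

```python
# Illustrative brute-force sketch of the symmetric surface-distance metrics
# described above. Assumption: every foreground voxel counts as a contour voxel.
import numpy as np

def surface_distance_metrics(mask_a, mask_b, spacing=(1.0, 1.0)):
    pts_a = np.argwhere(mask_a) * np.asarray(spacing)  # voxels of A in physical units
    pts_b = np.argwhere(mask_b) * np.asarray(spacing)
    # list1: for each voxel of A, distance to the closest voxel of B
    list1 = np.sqrt(((pts_a[:, None] - pts_b[None, :]) ** 2).sum(-1)).min(axis=1)
    # list2: the same computation from B to A
    list2 = np.sqrt(((pts_b[:, None] - pts_a[None, :]) ** 2).sum(-1)).min(axis=1)
    list3 = np.concatenate([list1, list2])  # merged, hence symmetric
    return {"hd": list3.max(),
            "hd95": np.percentile(list3, 95),
            "msd": list3.mean(),
            "mdsd": np.median(list3),
            "stdsd": list3.std()}

a = np.array([[1, 1, 0]])
b = np.array([[0, 1, 1]])
print(surface_distance_metrics(a, b))  # hd = 1.0, msd = 0.5
```

Because `list1` and `list2` are merged before taking the maximum, swapping the two masks gives the same result, which is the symmetry property noted above.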
+
+**References:**
+1. Heimann T, Ginneken B, Styner MA, et al. Comparison and Evaluation of Methods for Liver Segmentation From CT Datasets. IEEE Transactions on Medical Imaging. 2009;28(8):1251–1265.
+2. Yeghiazaryan, Varduhi, and Irina D. Voiculescu. "Family of boundary overlap metrics for the evaluation of medical image segmentation." Journal of Medical Imaging 5.1 (2018): 015006.
+3. Ruskó, László, György Bekes, and Márta Fidrich. "Automatic segmentation of the liver from multi-and single-phase contrast-enhanced CT images." Medical Image Analysis 13.6 (2009): 871-882.
+
+## Installation
+
+```shell
+$ pip install seg-metrics
+```
+
+## Usage
+First, import the package:
+```python
+import seg_metrics.seg_metrics as sg
+```
+
+
+### Evaluate two batches of images with the same filenames from two different folders
+```python
+labels = [0, 4, 5 ,6 ,7 , 8]
+gdth_path = 'data/gdth' # this folder saves a batch of ground truth images
+pred_path = 'data/pred' # this folder saves the same number of prediction images
+csv_file = 'metrics.csv' # results will be saved to this file and printed to the terminal as well. If not set,
+# results will only be shown on the terminal.
+
+metrics = sg.write_metrics(labels=labels[1:], # exclude background
+ gdth_path=gdth_path,
+ pred_path=pred_path,
+ csv_file=csv_file)
+print(metrics) # a list of dictionaries which includes the metrics for each pair of images.
+```
+After running the above code, you get a **list of dictionaries** `metrics` which contains all the metrics. **You can also find a `.csv` file containing all metrics in the same directory.** If `csv_file` is not given, the metric results will not be saved to disk.
+
+### Evaluate two images
+```python
+labels = [0, 4, 5 ,6 ,7 , 8]
+gdth_file = 'data/gdth.mhd' # ground truth image full path
+pred_file = 'data/pred.mhd' # prediction image full path
+csv_file = 'metrics.csv'
+
+metrics = sg.write_metrics(labels=labels[1:], # exclude background
+ gdth_path=gdth_file,
+ pred_path=pred_file,
+ csv_file=csv_file)
+```
+After running the above code, you get a **dictionary** `metrics` which contains all the metrics. **You can also find a `.csv` file containing all metrics in the same directory.**
+
+**Note:**
+1. When evaluating one image, the returned `metrics` is a dictionary.
+2. When evaluating a batch of images, the returned `metrics` is a list of dictionaries.
+
+### Evaluate two images with specific metrics
+```python
+labels = [0, 4, 5 ,6 ,7 , 8]
+gdth_file = 'data/gdth.mhd'
+pred_file = 'data/pred.mhd'
+csv_file = 'metrics.csv'
+
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_path=gdth_file,
+ pred_path=pred_file,
+ csv_file=csv_file,
+ metrics=['dice', 'hd'])
+# for only one metric
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_path=gdth_file,
+ pred_path=pred_file,
+ csv_file=csv_file,
+ metrics='msd')
+```
+
+Pass one of the following values as the `metrics` argument to select specific metrics.
+
+```python
+- dice: Dice (F-1)
+- jaccard: Jaccard
+- precision: Precision
+- recall: Recall
+- fpr: False positive rate
+- fnr: False negative rate
+- vs: Volume similarity
+
+- hd: Hausdorff distance
+- hd95: Hausdorff distance, 95th percentile
+- msd: Mean (Average) surface distance
+- mdsd: Median surface distance
+- stdsd: Std surface distance
+```
+
+For example:
+```python
+labels = [1]
+gdth_file = 'data/gdth.mhd'
+pred_file = 'data/pred.mhd'
+csv_file = 'metrics.csv'
+
+metrics = sg.write_metrics(labels, gdth_file, pred_file, csv_file, metrics=['dice', 'hd95'])
+dice = metrics['dice']
+hd95 = metrics['hd95']
+```
+
+
+### Evaluate two images in memory instead of disk
+**Note:**
+1. The two images must both be `numpy.ndarray` or `SimpleITK.Image`.
+2. The input arguments differ: use `gdth_img` and `pred_img` instead of `gdth_path` and `pred_path`.
+3. When evaluating a `numpy.ndarray`, the default `spacing` is `1.0` for all dimensions for distance-based metrics.
+4. To evaluate a `numpy.ndarray` with a specific spacing, pass a sequence whose length equals the image dimension as `spacing`.
+
+```python
+labels = [0, 1, 2]
+gdth_img = np.array([[0,0,1],
+ [0,1,2]])
+pred_img = np.array([[0,0,1],
+ [0,2,2]])
+csv_file = 'metrics.csv'
+spacing = [1, 2]
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ csv_file=csv_file,
+ spacing=spacing,
+ metrics=['dice', 'hd'])
+# for only one metric
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ csv_file=csv_file,
+ spacing=spacing,
+ metrics='msd')
+```
+
+#### About the calculation of surface distance
+The default surface distance is calculated on the **fully connected** border. To change the connectivity type,
+set the argument `fully_connected` to `False` as follows.
+```python
+metrics = sg.write_metrics(labels=[1,2,3],
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ csv_file=csv_file,
+ fully_connected=False)
+```
+In a 2D image, fully connected means 8 neighboring points, while face connected means 4 neighboring points.
+In a 3D image, fully connected means 26 neighboring points, while face connected means 6 neighboring points.
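These neighbor counts can be verified with a small stdlib-only sketch (an illustration of the connectivity definitions, not how the package itself computes borders): enumerate the offsets around a center voxel, where face connectivity keeps only offsets that differ along a single axis.

```python
# Count neighborhood offsets around a center voxel: fully connected keeps
# every nonzero offset in {-1, 0, 1}^ndim; face connected keeps only the
# offsets that differ along a single axis.
from itertools import product

def neighbor_count(ndim, fully_connected=True):
    offsets = [o for o in product((-1, 0, 1), repeat=ndim) if any(o)]
    if not fully_connected:
        offsets = [o for o in offsets if sum(map(abs, o)) == 1]
    return len(offsets)

print(neighbor_count(2, True), neighbor_count(2, False))  # 8 4
print(neighbor_count(3, True), neighbor_count(3, False))  # 26 6
```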
+
+
+# How to obtain more metrics, like "False omission rate" or "Accuracy"?
+A great number of different metrics, like "False omission rate" or "Accuracy", can be derived from the [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix). To calculate more metrics or design custom metrics, pass `TPTNFPFN=True` to return the number of voxels/pixels of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) predictions. For example,
+```python
+metrics = sg.write_metrics(
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ TPTNFPFN=True)
+tp, tn, fp, fn = metrics['TP'], metrics['TN'], metrics['FP'], metrics['FN']
+false_omission_rate = fn/(fn+tn)
+accuracy = (tp + tn)/(tp + tn + fp + fn)
+```
+
+# Comparison with medpy
+`medpy` also provides functions to calculate metrics for medical images, but `seg-metrics`
+has several advantages.
+1. **Faster**. `seg-metrics` is **10 times faster** at calculating distance-based metrics. This [jupyter
+notebook](https://colab.research.google.com/drive/1gLQghS1d_fWsaJs3G4Ip0GlZHEJFcxDr#scrollTo=mDWvyxW7VExd) reproduces the results.
+2. **More convenient**. `seg-metrics` can calculate all the different metrics in a single function call, while
+`medpy` needs to call different functions multiple times, which costs more time and code.
+3. **More powerful**. `seg-metrics` can calculate **multi-label** segmentation metrics and save the results to
+a `.csv` file in a well-organized manner, while `medpy` only provides binary segmentation metrics. A comparison can be found in this [jupyter
+notebook](https://colab.research.google.com/drive/1gLQghS1d_fWsaJs3G4Ip0GlZHEJFcxDr#scrollTo=mDWvyxW7VExd).
+
+
+
+If this repository helps you in any way, show your love ❤️ by putting a ⭐ on this project.
+I would also appreciate it if you cite the package in your publications. (**Note:** This package is **NOT** approved for clinical use and is intended for research use only.)
+
+# BibTeX
+
+ @misc{Jingnan,
+ title = {A package to compute segmentation metrics: seg-metrics},
+ author = {Jingnan Jia},
+ url = {https://github.com/Ordgod/segmentation_metrics},
+ year = {2020},
+ doi = {10.5281/zenodo.3995075}
+ }
+
+
+
+
+
+
+
+%package -n python3-seg-metrics
+Summary: A package to compute different segmentation metrics for 2D/3D medical images.
+Provides: python-seg-metrics
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-seg-metrics
+# Segmentation Metrics Package [![DOI](https://zenodo.org/badge/273067948.svg)](https://zenodo.org/badge/latestdoi/273067948)
+![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/Ordgod/segmentation_metrics)
+![publish workflow status](https://github.com/Jingnan-Jia/segmentation_metrics/actions/workflows/python-publish.yml/badge.svg)
+[![codecov](https://codecov.io/gh/Jingnan-Jia/segmentation_metrics/branch/master/graph/badge.svg?token=UO1QSYBEU6)](https://codecov.io/gh/Jingnan-Jia/segmentation_metrics)
+![test workflow status](https://github.com/Jingnan-Jia/segmentation_metrics/actions/workflows/test_and_coverage.yml/badge.svg?branch=master)
+[![OSCS Status](https://www.oscs1024.com/platform/badge/Jingnan-Jia/segmentation_metrics.svg?size=small)](https://www.oscs1024.com/project/Jingnan-Jia/segmentation_metrics?ref=badge_small)
+
+This is a simple package to compute different metrics for **medical** image segmentation (images with suffix `.mhd`, `.mha`, `.nii`, `.nii.gz` or `.nrrd`) and write them to a CSV file.
+
+## Summary
+There are several ways to assess segmentation performance. The two main families are volume-based metrics and distance-based metrics.
+
+## Metrics included
+This library computes the following performance metrics for segmentation:
+
+### Voxel based metrics
+- Dice (F-1)
+- Jaccard
+- Precision
+- Recall
+- False positive rate
+- False negative rate
+- Volume similarity
+
+
+The equations for these metrics can be found on [Wikipedia](https://en.wikipedia.org/wiki/Precision_and_recall).
+
+### Surface Distance based metrics (with spacing as default)
+- [Hausdorff distance](https://en.wikipedia.org/wiki/Hausdorff_distance)
+- Hausdorff distance, 95th percentile
+- Mean (Average) surface distance
+- Median surface distance
+- Std surface distance
+
+**Note**: These metrics are **symmetric**, which means the distance from A to B is the same as the distance from B to A.
+
+For each contour voxel of the segmented volume (A), the Euclidean distance from the closest contour voxel of the reference volume (B) is computed and stored as `list1`. This computation is also performed for the contour voxels of the reference volume (B), stored as `list2`. `list1` and `list2` are merged to get `list3`.
+- `Hausdorff distance` is the maximum value of `list3`.
+- `Hausdorff distance 95% percentile` is the 95th percentile of `list3`.
+- `Mean (Average) surface distance` is the mean value of `list3`.
+- `Median surface distance` is the median value of `list3`.
+- `Std surface distance` is the standard deviation of `list3`.
+
+**References:**
+1. Heimann T, Ginneken B, Styner MA, et al. Comparison and Evaluation of Methods for Liver Segmentation From CT Datasets. IEEE Transactions on Medical Imaging. 2009;28(8):1251–1265.
+2. Yeghiazaryan, Varduhi, and Irina D. Voiculescu. "Family of boundary overlap metrics for the evaluation of medical image segmentation." Journal of Medical Imaging 5.1 (2018): 015006.
+3. Ruskó, László, György Bekes, and Márta Fidrich. "Automatic segmentation of the liver from multi-and single-phase contrast-enhanced CT images." Medical Image Analysis 13.6 (2009): 871-882.
+
+## Installation
+
+```shell
+$ pip install seg-metrics
+```
+
+## Usage
+First, import the package:
+```python
+import seg_metrics.seg_metrics as sg
+```
+
+
+### Evaluate two batches of images with the same filenames from two different folders
+```python
+labels = [0, 4, 5 ,6 ,7 , 8]
+gdth_path = 'data/gdth' # this folder saves a batch of ground truth images
+pred_path = 'data/pred' # this folder saves the same number of prediction images
+csv_file = 'metrics.csv' # results will be saved to this file and printed to the terminal as well. If not set,
+# results will only be shown on the terminal.
+
+metrics = sg.write_metrics(labels=labels[1:], # exclude background
+ gdth_path=gdth_path,
+ pred_path=pred_path,
+ csv_file=csv_file)
+print(metrics) # a list of dictionaries which includes the metrics for each pair of images.
+```
+After running the above code, you get a **list of dictionaries** `metrics` which contains all the metrics. **You can also find a `.csv` file containing all metrics in the same directory.** If `csv_file` is not given, the metric results will not be saved to disk.
+
+### Evaluate two images
+```python
+labels = [0, 4, 5 ,6 ,7 , 8]
+gdth_file = 'data/gdth.mhd' # ground truth image full path
+pred_file = 'data/pred.mhd' # prediction image full path
+csv_file = 'metrics.csv'
+
+metrics = sg.write_metrics(labels=labels[1:], # exclude background
+ gdth_path=gdth_file,
+ pred_path=pred_file,
+ csv_file=csv_file)
+```
+After running the above code, you get a **dictionary** `metrics` which contains all the metrics. **You can also find a `.csv` file containing all metrics in the same directory.**
+
+**Note:**
+1. When evaluating one image, the returned `metrics` is a dictionary.
+2. When evaluating a batch of images, the returned `metrics` is a list of dictionaries.
+
+### Evaluate two images with specific metrics
+```python
+labels = [0, 4, 5 ,6 ,7 , 8]
+gdth_file = 'data/gdth.mhd'
+pred_file = 'data/pred.mhd'
+csv_file = 'metrics.csv'
+
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_path=gdth_file,
+ pred_path=pred_file,
+ csv_file=csv_file,
+ metrics=['dice', 'hd'])
+# for only one metric
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_path=gdth_file,
+ pred_path=pred_file,
+ csv_file=csv_file,
+ metrics='msd')
+```
+
+Pass one of the following values as the `metrics` argument to select specific metrics.
+
+```python
+- dice: Dice (F-1)
+- jaccard: Jaccard
+- precision: Precision
+- recall: Recall
+- fpr: False positive rate
+- fnr: False negative rate
+- vs: Volume similarity
+
+- hd: Hausdorff distance
+- hd95: Hausdorff distance, 95th percentile
+- msd: Mean (Average) surface distance
+- mdsd: Median surface distance
+- stdsd: Std surface distance
+```
+
+For example:
+```python
+labels = [1]
+gdth_file = 'data/gdth.mhd'
+pred_file = 'data/pred.mhd'
+csv_file = 'metrics.csv'
+
+metrics = sg.write_metrics(labels, gdth_file, pred_file, csv_file, metrics=['dice', 'hd95'])
+dice = metrics['dice']
+hd95 = metrics['hd95']
+```
+
+
+### Evaluate two images in memory instead of disk
+**Note:**
+1. The two images must both be `numpy.ndarray` or `SimpleITK.Image`.
+2. The input arguments differ: use `gdth_img` and `pred_img` instead of `gdth_path` and `pred_path`.
+3. When evaluating a `numpy.ndarray`, the default `spacing` is `1.0` for all dimensions for distance-based metrics.
+4. To evaluate a `numpy.ndarray` with a specific spacing, pass a sequence whose length equals the image dimension as `spacing`.
+
+```python
+labels = [0, 1, 2]
+gdth_img = np.array([[0,0,1],
+ [0,1,2]])
+pred_img = np.array([[0,0,1],
+ [0,2,2]])
+csv_file = 'metrics.csv'
+spacing = [1, 2]
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ csv_file=csv_file,
+ spacing=spacing,
+ metrics=['dice', 'hd'])
+# for only one metric
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ csv_file=csv_file,
+ spacing=spacing,
+ metrics='msd')
+```
+
+#### About the calculation of surface distance
+The default surface distance is calculated on the **fully connected** border. To change the connectivity type,
+set the argument `fully_connected` to `False` as follows.
+```python
+metrics = sg.write_metrics(labels=[1,2,3],
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ csv_file=csv_file,
+ fully_connected=False)
+```
+In a 2D image, fully connected means 8 neighboring points, while face connected means 4 neighboring points.
+In a 3D image, fully connected means 26 neighboring points, while face connected means 6 neighboring points.
+
+
+# How to obtain more metrics, like "False omission rate" or "Accuracy"?
+A great number of different metrics, like "False omission rate" or "Accuracy", can be derived from the [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix). To calculate more metrics or design custom metrics, pass `TPTNFPFN=True` to return the number of voxels/pixels of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) predictions. For example,
+```python
+metrics = sg.write_metrics(
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ TPTNFPFN=True)
+tp, tn, fp, fn = metrics['TP'], metrics['TN'], metrics['FP'], metrics['FN']
+false_omission_rate = fn/(fn+tn)
+accuracy = (tp + tn)/(tp + tn + fp + fn)
+```
+
+# Comparison with medpy
+`medpy` also provides functions to calculate metrics for medical images, but `seg-metrics`
+has several advantages.
+1. **Faster**. `seg-metrics` is **10 times faster** at calculating distance-based metrics. This [jupyter
+notebook](https://colab.research.google.com/drive/1gLQghS1d_fWsaJs3G4Ip0GlZHEJFcxDr#scrollTo=mDWvyxW7VExd) reproduces the results.
+2. **More convenient**. `seg-metrics` can calculate all the different metrics in a single function call, while
+`medpy` needs to call different functions multiple times, which costs more time and code.
+3. **More powerful**. `seg-metrics` can calculate **multi-label** segmentation metrics and save the results to
+a `.csv` file in a well-organized manner, while `medpy` only provides binary segmentation metrics. A comparison can be found in this [jupyter
+notebook](https://colab.research.google.com/drive/1gLQghS1d_fWsaJs3G4Ip0GlZHEJFcxDr#scrollTo=mDWvyxW7VExd).
+
+
+
+If this repository helps you in any way, show your love ❤️ by putting a ⭐ on this project.
+I would also appreciate it if you cite the package in your publications. (**Note:** This package is **NOT** approved for clinical use and is intended for research use only.)
+
+# BibTeX
+
+ @misc{Jingnan,
+ title = {A package to compute segmentation metrics: seg-metrics},
+ author = {Jingnan Jia},
+ url = {https://github.com/Ordgod/segmentation_metrics},
+ year = {2020},
+ doi = {10.5281/zenodo.3995075}
+ }
+
+
+
+
+
+
+
+%package help
+Summary: Development documents and examples for seg-metrics
+Provides: python3-seg-metrics-doc
+%description help
+# Segmentation Metrics Package [![DOI](https://zenodo.org/badge/273067948.svg)](https://zenodo.org/badge/latestdoi/273067948)
+![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/Ordgod/segmentation_metrics)
+![publish workflow status](https://github.com/Jingnan-Jia/segmentation_metrics/actions/workflows/python-publish.yml/badge.svg)
+[![codecov](https://codecov.io/gh/Jingnan-Jia/segmentation_metrics/branch/master/graph/badge.svg?token=UO1QSYBEU6)](https://codecov.io/gh/Jingnan-Jia/segmentation_metrics)
+![test workflow status](https://github.com/Jingnan-Jia/segmentation_metrics/actions/workflows/test_and_coverage.yml/badge.svg?branch=master)
+[![OSCS Status](https://www.oscs1024.com/platform/badge/Jingnan-Jia/segmentation_metrics.svg?size=small)](https://www.oscs1024.com/project/Jingnan-Jia/segmentation_metrics?ref=badge_small)
+
+This is a simple package to compute different metrics for **medical** image segmentation (images with suffix `.mhd`, `.mha`, `.nii`, `.nii.gz` or `.nrrd`) and write them to a CSV file.
+
+## Summary
+There are several ways to assess segmentation performance. The two main families are volume-based metrics and distance-based metrics.
+
+## Metrics included
+This library computes the following performance metrics for segmentation:
+
+### Voxel based metrics
+- Dice (F-1)
+- Jaccard
+- Precision
+- Recall
+- False positive rate
+- False negative rate
+- Volume similarity
+
+
+The equations for these metrics can be found on [Wikipedia](https://en.wikipedia.org/wiki/Precision_and_recall).
+
+### Surface Distance based metrics (with spacing as default)
+- [Hausdorff distance](https://en.wikipedia.org/wiki/Hausdorff_distance)
+- Hausdorff distance, 95th percentile
+- Mean (Average) surface distance
+- Median surface distance
+- Std surface distance
+
+**Note**: These metrics are **symmetric**, which means the distance from A to B is the same as the distance from B to A.
+
+For each contour voxel of the segmented volume (A), the Euclidean distance from the closest contour voxel of the reference volume (B) is computed and stored as `list1`. This computation is also performed for the contour voxels of the reference volume (B), stored as `list2`. `list1` and `list2` are merged to get `list3`.
+- `Hausdorff distance` is the maximum value of `list3`.
+- `Hausdorff distance 95% percentile` is the 95th percentile of `list3`.
+- `Mean (Average) surface distance` is the mean value of `list3`.
+- `Median surface distance` is the median value of `list3`.
+- `Std surface distance` is the standard deviation of `list3`.
+
+**References:**
+1. Heimann T, Ginneken B, Styner MA, et al. Comparison and Evaluation of Methods for Liver Segmentation From CT Datasets. IEEE Transactions on Medical Imaging. 2009;28(8):1251–1265.
+2. Yeghiazaryan, Varduhi, and Irina D. Voiculescu. "Family of boundary overlap metrics for the evaluation of medical image segmentation." Journal of Medical Imaging 5.1 (2018): 015006.
+3. Ruskó, László, György Bekes, and Márta Fidrich. "Automatic segmentation of the liver from multi-and single-phase contrast-enhanced CT images." Medical Image Analysis 13.6 (2009): 871-882.
+
+## Installation
+
+```shell
+$ pip install seg-metrics
+```
+
+## Usage
+First, import the package:
+```python
+import seg_metrics.seg_metrics as sg
+```
+
+
+### Evaluate two batches of images with the same filenames from two different folders
+```python
+labels = [0, 4, 5 ,6 ,7 , 8]
+gdth_path = 'data/gdth' # this folder saves a batch of ground truth images
+pred_path = 'data/pred' # this folder saves the same number of prediction images
+csv_file = 'metrics.csv' # results will be saved to this file and printed to the terminal as well. If not set,
+# results will only be shown on the terminal.
+
+metrics = sg.write_metrics(labels=labels[1:], # exclude background
+ gdth_path=gdth_path,
+ pred_path=pred_path,
+ csv_file=csv_file)
+print(metrics) # a list of dictionaries which includes the metrics for each pair of images.
+```
+After running the above code, you get a **list of dictionaries** `metrics` which contains all the metrics. **You can also find a `.csv` file containing all metrics in the same directory.** If `csv_file` is not given, the metric results will not be saved to disk.
+
+### Evaluate two images
+```python
+labels = [0, 4, 5 ,6 ,7 , 8]
+gdth_file = 'data/gdth.mhd' # ground truth image full path
+pred_file = 'data/pred.mhd' # prediction image full path
+csv_file = 'metrics.csv'
+
+metrics = sg.write_metrics(labels=labels[1:], # exclude background
+ gdth_path=gdth_file,
+ pred_path=pred_file,
+ csv_file=csv_file)
+```
+After running the above code, you get a **dictionary** `metrics` which contains all the metrics. **You can also find a `.csv` file containing all metrics in the same directory.**
+
+**Note:**
+1. When evaluating one image, the returned `metrics` is a dictionary.
+2. When evaluating a batch of images, the returned `metrics` is a list of dictionaries.
+
+### Evaluate two images with specific metrics
+```python
+labels = [0, 4, 5 ,6 ,7 , 8]
+gdth_file = 'data/gdth.mhd'
+pred_file = 'data/pred.mhd'
+csv_file = 'metrics.csv'
+
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_path=gdth_file,
+ pred_path=pred_file,
+ csv_file=csv_file,
+ metrics=['dice', 'hd'])
+# for only one metric
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_path=gdth_file,
+ pred_path=pred_file,
+ csv_file=csv_file,
+ metrics='msd')
+```
+
+Pass one of the following values as the `metrics` argument to select specific metrics.
+
+```python
+- dice: Dice (F-1)
+- jaccard: Jaccard
+- precision: Precision
+- recall: Recall
+- fpr: False positive rate
+- fnr: False negative rate
+- vs: Volume similarity
+
+- hd: Hausdorff distance
+- hd95: Hausdorff distance, 95th percentile
+- msd: Mean (Average) surface distance
+- mdsd: Median surface distance
+- stdsd: Std surface distance
+```
+
+For example:
+```python
+labels = [1]
+gdth_file = 'data/gdth.mhd'
+pred_file = 'data/pred.mhd'
+csv_file = 'metrics.csv'
+
+metrics = sg.write_metrics(labels, gdth_file, pred_file, csv_file, metrics=['dice', 'hd95'])
+dice = metrics['dice']
+hd95 = metrics['hd95']
+```
+
+
+### Evaluate two images in memory instead of disk
+**Note:**
+1. The two images must both be `numpy.ndarray` or `SimpleITK.Image`.
+2. The input arguments differ: use `gdth_img` and `pred_img` instead of `gdth_path` and `pred_path`.
+3. When evaluating a `numpy.ndarray`, the default `spacing` is `1.0` for all dimensions for distance-based metrics.
+4. To evaluate a `numpy.ndarray` with a specific spacing, pass a sequence whose length equals the image dimension as `spacing`.
+
+```python
+labels = [0, 1, 2]
+gdth_img = np.array([[0,0,1],
+ [0,1,2]])
+pred_img = np.array([[0,0,1],
+ [0,2,2]])
+csv_file = 'metrics.csv'
+spacing = [1, 2]
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ csv_file=csv_file,
+ spacing=spacing,
+ metrics=['dice', 'hd'])
+# for only one metric
+metrics = sg.write_metrics(labels=labels[1:], # exclude background if needed
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ csv_file=csv_file,
+ spacing=spacing,
+ metrics='msd')
+```
+
+#### About the calculation of surface distance
+The default surface distance is calculated on the **fully connected** border. To change the connectivity type,
+set the argument `fully_connected` to `False` as follows.
+```python
+metrics = sg.write_metrics(labels=[1,2,3],
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ csv_file=csv_file,
+ fully_connected=False)
+```
+In a 2D image, fully connected means 8 neighboring points, while face connected means 4 neighboring points.
+In a 3D image, fully connected means 26 neighboring points, while face connected means 6 neighboring points.
+
+
+# How to obtain more metrics, like "False omission rate" or "Accuracy"?
+A great number of different metrics, like "False omission rate" or "Accuracy", can be derived from the [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix). To calculate more metrics or design custom metrics, pass `TPTNFPFN=True` to return the number of voxels/pixels of true positive (TP), true negative (TN), false positive (FP), and false negative (FN) predictions. For example,
+```python
+metrics = sg.write_metrics(
+ gdth_img=gdth_img,
+ pred_img=pred_img,
+ TPTNFPFN=True)
+tp, tn, fp, fn = metrics['TP'], metrics['TN'], metrics['FP'], metrics['FN']
+false_omission_rate = fn/(fn+tn)
+accuracy = (tp + tn)/(tp + tn + fp + fn)
+```
+
+# Comparison with medpy
+`medpy` also provides functions to calculate metrics for medical images, but `seg-metrics`
+has several advantages.
+1. **Faster**. `seg-metrics` is **10 times faster** at calculating distance-based metrics. This [jupyter
+notebook](https://colab.research.google.com/drive/1gLQghS1d_fWsaJs3G4Ip0GlZHEJFcxDr#scrollTo=mDWvyxW7VExd) reproduces the results.
+2. **More convenient**. `seg-metrics` can calculate all the different metrics in a single function call, while
+`medpy` needs to call different functions multiple times, which costs more time and code.
+3. **More powerful**. `seg-metrics` can calculate **multi-label** segmentation metrics and save the results to
+a `.csv` file in a well-organized manner, while `medpy` only provides binary segmentation metrics. A comparison can be found in this [jupyter
+notebook](https://colab.research.google.com/drive/1gLQghS1d_fWsaJs3G4Ip0GlZHEJFcxDr#scrollTo=mDWvyxW7VExd).
+
+
+
+If this repository helps you in any way, show your love ❤️ by putting a ⭐ on this project.
+I would also appreciate it if you cite the package in your publications. (**Note:** This package is **NOT** approved for clinical use and is intended for research use only.)
+
+# BibTeX
+
+ @misc{Jingnan,
+ title = {A package to compute segmentation metrics: seg-metrics},
+ author = {Jingnan Jia},
+ url = {https://github.com/Ordgod/segmentation_metrics},
+ year = {2020},
+ doi = {10.5281/zenodo.3995075}
+ }
+
+
+
+
+
+
+
+%prep
+%autosetup -n seg-metrics-1.1.3
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-seg-metrics -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon May 15 2023 Python_Bot <Python_Bot@openeuler.org> - 1.1.3-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..3781183
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+51b445cdaacc221949624a877041f8b9 seg_metrics-1.1.3.tar.gz