%global _empty_manifest_terminate_build 0
Name:           python-spectramap
Version:        0.5.3
Release:        1
Summary:        Hyperspectral package for spectroscopists
License:        MIT License
URL:            https://github.com/spectramap/spectramap
Source0:        https://mirrors.nju.edu.cn/pypi/web/packages/c0/3d/113329285ccca34c91e8e1198fc0390a5559f5149a996b0f005159e7e010/spectramap-0.5.3.tar.gz
BuildArch:      noarch

Requires:       python3-scikit-learn
Requires:       python3-pyspectra
Requires:       python3-scipy

%description

## *SpectraMap (SpMap): Hyperspectral package for spectroscopists in Python*

Hyperspectral imaging has important applications in medicine, agriculture, pharmaceuticals, space, food science and many emerging fields. The analysis of hyperspectral images requires advanced software, and upcoming developments in fast hyperspectral imaging, automation and deep learning demand innovative software for analyzing hyperspectral data. Figure 1 shows hyperspectral imaging with a standard spectrometer instrument. More information regarding novel medical imaging can be found in advances in imaging.

Figure 1 Raman Imaging Instrument

## Features

The package includes standard tools such as reading, preprocessing, processing and visualization. The design focuses on hyperspectral images from Raman datasets, but the package extends to other spectroscopies as long as the data follows the expected data structure. Some features are shown in the next figures.

- Preprocessing: tools such as smoothing, spike removal, normalization and advanced baseline corrections are included. Figure 2 illustrates the mean and standard deviation of a tissue signature.

Figure 2 Visualization of tissue Raman signature

- Processing: tools such as unmixing, PCA, PLS, VCA, and hierarchical and k-means clustering are included. Figure 3 displays an application of clustering for locating microplastics in complex matrices.

Figure 3 Segmentation by clustering: (a) clustered image, (b) unmixing image, (c) image and (d) mean clusters

- Visualization: the next example shows the PCA scores of several biomolecules.

Figure 4 PCA scores

## Further upcoming developments

- [ ] Graphical User Interface
- [ ] Supervised tools
- [ ] Deep learning (CNN)
- [x] Optimizing speed and organizing main code
- [x] More examples

## Installation

The default working interface is Python 3. The library comes with 8 different hyperspectral examples and analyses. A manual presents the relevant functions and examples.

Install the library and required packages (admin rights may be required):

```bash
pip install spectramap
```

## Examples

#### Reading and processing an spc file

Among the examples, there is a ps.spc file for this example. The next lines show some basic tools. The function read_single_spc reads the file from its path.

```python
from spectramap import spmap as sp   # reading spmap
pigm = sp.hyper_object('pigment')    # creating the hyper object
pigm.read_single_spc('pigment')      # reading the spc file
pigm.keep(400, 1800)                 # keeping the fingerprint region
pigm_original = pigm.copy()          # copying the hyper object
pigm_original.set_label('original')  # renaming the hyper object to original
pigm.set_label('processed')          # renaming the hyper object to processed
pigm.rubber()                        # basic rubber-band baseline correction
pigm.gol(15, 3, 0)                   # Savitzky-Golay filter
both = sp.hyper_object('result')     # creating an auxiliary hyper object
both.concat([pigm_original, pigm])   # concatenating the original and processed data
both.show(False)                     # showing both spectra
```

```python
both.show_stack(0.2, 0, 'auto')  # advanced stack visualization
```

Figure 6 Second visualization

#### Reading and processing a comma-separated values file with depth profiling

In this example, there is a layers.csv.xz file. The next lines show some basic tools. The function read_csv_xz requires the path of the file, and the csv file must keep the structure described in the manual (hyperspectral object). The example shows how to analyse spectroscopic profile data.

```python
from spectramap import spmap as sp    # reading the spectramap library
stack = sp.hyper_object('plastics')   # creating the hyper_object
stack.read_csv_xz('layers')           # reading the compressed csv of the plastics profile
stack.keep(500, 1800)                 # keeping the fingerprint region
stack.rubber()                        # rubber-band baseline correction
stack.vector()                        # vector normalization
endmember = stack.vca(6)              # number of endmembers
endmember.show_stack(0.2, 0, 'auto')  # advanced stack plot of the endmembers
```

```python
abundance = stack.abundance(endmember, 'NNLS')  # estimation of concentrations by NNLS
abundance.set_resolution(0.01)                  # setting the step-size resolution
abundance.show_profile('auto')                  # plotting the spectral profile
```
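For intuition, the NNLS abundance step amounts to solving a non-negative least-squares problem per spectrum against the endmember matrix. Below is a minimal standalone sketch of that idea using scipy.optimize.nnls on synthetic data; it is illustrative only and does not use the spectramap API.

```python
import numpy as np
from scipy.optimize import nnls

# toy endmember matrix: 3 endmembers x 100 wavenumber channels (rows are spectra)
rng = np.random.default_rng(0)
endmembers = np.abs(rng.normal(size=(3, 100)))

# synthetic mixed spectrum built from known fractions of the endmembers
true_abundance = np.array([0.5, 0.3, 0.2])
mixed = true_abundance @ endmembers

# NNLS solves min ||E.T @ a - s|| subject to a >= 0, one spectrum at a time
abundance, residual = nnls(endmembers.T, mixed)
print(abundance)  # approximately [0.5, 0.3, 0.2]
```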

#### Processing hyperspectral images by VCA and Clustering

Coming soon. For now, check the manual.

#### Processing hyperspectral data of plastics by PCA and PLS-LDA

In this example, there is a layers.csv.xz file. The next processing steps compute unsupervised principal component analysis and supervised partial least squares + linear discriminant analysis. The scatter plots show the separation of the plastics: red, light_blue and blue are the most distinct ones.

```python
from spectramap import spmap as sp                # reading the spectramap library
sample = sp.hyper_object("sample")                # creating the hyper_object
sample.read_csv_xz("layers")                      # reading the compressed csv of the plastics profile
sample.remove(1800, 2700)                         # removing the silent region
sample.keep(400, 3300)                            # keeping the fingerprint and high-wavenumber regions
sample.gaussian(2)                                # applying a Gaussian filter
sample.rubber()                                   # rubber-band baseline correction
sample.kmeans(2)                                  # k-means with 2 clusters
sample.rename_label([1, 2], ["first", "second"])  # renaming the labels
sub_label = sample.get_label()                    # saving the sub_label
sub_label.name = "sub_label"                      # renaming the title of sub_label
sample.show_stack(0, 0, "auto")                   # showing a stack
```

```python
sample.kmeans(6)                 # k-means clustering example for main_label
main_label = sample.get_label()  # saving the main_label
main_label.name = "main_label"   # renaming the title of the label
sample.show_stack(0, 0, "auto")  # showing the 6 components
```

```python
scores_pca, loadings_pca = sample.pca(3, False)             # PCA with 3 components
scores_pca.show_scatter("auto", main_label, sub_label, 15)  # showing the scatter plot with sub_label
```

```python
scores_pls, loadings_pls = sample.pls_lda(3, False, 0.7)    # PLS-LDA with 3 components and 70% training data
scores_pls.show_scatter("auto", main_label, sub_label, 15)  # showing the scatter plot with sub_label
```

The next figures show the precision, recall (sensitivity), f1-score (the harmonic mean of precision and recall) and support for the 6 components, together with the accuracy and average accuracy.
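These metrics correspond to a standard classification report; a small self-contained sketch of how such a report is typically produced with scikit-learn is shown below (the labels are made up and this is not spectramap's internal call).

```python
from sklearn.metrics import classification_report, accuracy_score

# hypothetical true and predicted class labels for a handful of spectra
y_true = ["PE", "PP", "PS", "PE", "PP", "PS", "PE", "PP"]
y_pred = ["PE", "PP", "PS", "PE", "PS", "PS", "PE", "PP"]

# per-class precision, recall, f1-score and support, as in the figures above
print(classification_report(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))
```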

#### Raman wavenumber calibration with paracetamol

Reproducibility and replicability are fundamental for Raman spectroscopy. One common approach to wavenumber-axis calibration is discussed in this section. The requirements are a paracetamol sample (powder), a calibration file with well-known peaks, and a polynomial regression.

```python
from spectramap import spmap as sp  # reading the spectramap library
import pandas as pd
import numpy as np

### Paracetamol
path = 'para.csv'                                    # path of the paracetamol data
table = pd.read_table(path, sep=',', header=None)    # reading the data
table['label'] = "Para"                              # creating the label
table[['x', 'y']] = np.zeros((20, 2))                # creating fake positions

### Processing
mp = sp.hyper_object("Para")                         # creating the hyper object
mp.set_data(table.iloc[:, :len(table.columns) - 3])  # reading the intensity
mp.set_position(table[['x', 'y']])                   # reading the positions
mp.set_label(pd.Series(table['label']))              # reading the labels
copy = mp.copy()                                     # copying the data
peaks = copy.calibration_peaks(mp, 0.05)             # finding the paracetamol peaks (next plot)
```

```python
copy.calibration_regression(peaks)  # determining the regression for the calibration
```

```python
mp.set_wavenumber(copy.get_wavenumber())  # setting the new wavenumber axis on the original mp
mp.show(True)                             # showing the calibrated data
mp.add_peaks(0.1, 'r')                    # adding peaks (not in inline mode)
mp.save_data("", "calibration")           # saving the calibrated data
```
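Conceptually, the regression step fits a low-order polynomial that maps the measured paracetamol peak positions onto their reference wavenumbers and then applies that mapping to the whole axis. A minimal numpy sketch of that idea follows; the peak positions and reference values are illustrative placeholders, not the calibration table shipped with the package.

```python
import numpy as np

# measured peak positions (uncalibrated axis) vs. reference wavenumbers (cm-1), placeholders
measured = np.array([210.0, 325.0, 480.0, 640.0, 850.0])
reference = np.array([651.6, 857.9, 1168.5, 1323.9, 1648.4])

# low-order polynomial regression for the wavenumber-axis correction
coefficients = np.polyfit(measured, reference, deg=2)
calibrate = np.poly1d(coefficients)

# apply the fitted mapping to the whole uncalibrated axis
uncalibrated_axis = np.linspace(150, 900, 1024)
calibrated_axis = calibrate(uncalibrated_axis)
```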

#### Processing hyperspectral images from biological tissue

Coming soon. For now, check the manual.

#### Raman Intensity Calibration

The next lines show how to calibrate the intensity axis in Raman spectroscopy. It requires a standard spectrum of a halogen lamp and the experimental measurement of the same halogen lamp with the Raman instrument.

```python
from spectramap import spmap as sp                   # reading the spectramap package
reference_trial = sp.hyper_object("reference")       # creating the reference hyper object
reference_trial.read_single_spc(path + "reference")  # reading the reference spectrum
reference_trial.show(True)                           # showing the spectrum in the next plot
```

Now the experimental spectrum:

```python
measured_trial = sp.hyper_object("measured")   # creating the hyper object
measured_trial.read_single_spc(path + "lamp")  # reading the data
measured_trial.keep(400, 1900)                 # keeping the fingerprint region
measured_trial.show(True)                      # showing the plot, as the next figure shows
```

Reading the Raman sample:

```python
sample = sp.hyper_object("sample")       # declaring the hyper object
sample.read_single_spc(path + "sample")  # reading the tissue data
sample.keep(400, 1900)                   # keeping the fingerprint region
sample.show(True)                        # showing the plot in the next figure
```

Calibration of the Raman sample:

```python
sample.intensity_calibration(reference_trial, measured_trial)  # intensity calibration function
sample.show(True)                                              # showing the calibrated data in the next figure
```
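A common way such an intensity calibration works is to divide the certified lamp emission by the measured lamp spectrum, giving a per-wavenumber correction factor that is then applied to the sample. The numpy sketch below illustrates only that general idea on synthetic spectra; it is an assumption about the approach, not the spectramap implementation.

```python
import numpy as np

# synthetic spectra on a common wavenumber axis (all arrays share the same length)
axis = np.linspace(400, 1900, 500)
reference_lamp = np.exp(-((axis - 1100) / 600.0) ** 2)               # certified lamp emission (stand-in)
instrument_response = 0.5 + 0.4 * np.sin(axis / 300.0) ** 2          # made-up spectral response
measured_lamp = reference_lamp * instrument_response                 # lamp as seen by the instrument
sample = np.exp(-((axis - 1000) / 30.0) ** 2) * instrument_response  # distorted sample spectrum

# per-wavenumber correction factor: certified emission / measured emission
correction = reference_lamp / measured_lamp

# corrected sample spectrum (instrument response removed)
sample_corrected = sample * correction
```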

## Working Team

## License

MIT

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

## References

[1] F. Pedregosa, G. Varoquaux, and A. Gramfort, “Scikit-learn: Machine Learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.

[2] J. M. P. Nascimento and J. M. B. Dias, “Vertex component analysis: A fast algorithm to unmix hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 4, pp. 898–910, 2005, doi: 10.1109/TGRS.2005.844293.

[3] Z. M. Zhang, S. Chen, and Y. Z. Liang, “Baseline correction using adaptive iteratively reweighted penalized least squares,” Analyst, vol. 135, no. 5, pp. 1138–1146, 2010, doi: 10.1039/b922045c.

[4] L. McInnes, J. Healy, and S. Astels, “hdbscan: Hierarchical density based clustering,” Journal of Open Source Software, vol. 2, no. 11, 2017.

%package -n python3-spectramap
Summary:        Hyperspectral package for spectroscopists
Provides:       python-spectramap
BuildRequires:  python3-devel
BuildRequires:  python3-setuptools
BuildRequires:  python3-pip

%description -n python3-spectramap

## *SpectraMap (SpMap): Hyperspectral package for spectroscopists in Python*

Hyperspectral imaging has important applications in medicine, agriculture, pharmaceuticals, space, food science and many emerging fields. The analysis of hyperspectral images requires advanced software, and upcoming developments in fast hyperspectral imaging, automation and deep learning demand innovative software for analyzing hyperspectral data. Figure 1 shows hyperspectral imaging with a standard spectrometer instrument. More information regarding novel medical imaging can be found in advances in imaging.

Figure 1 Raman Imaging Instrument

## Features

The package includes standard tools such as reading, preprocessing, processing and visualization. The design focuses on hyperspectral images from Raman datasets, but the package extends to other spectroscopies as long as the data follows the expected data structure. Some features are shown in the next figures.

- Preprocessing: tools such as smoothing, spike removal, normalization and advanced baseline corrections are included. Figure 2 illustrates the mean and standard deviation of a tissue signature.

Figure 2 Visualization of tissue Raman signature

- Processing: tools such as unmixing, PCA, PLS, VCA, and hierarchical and k-means clustering are included. Figure 3 displays an application of clustering for locating microplastics in complex matrices.

Figure 3 Segmentation by clustering: (a) clustered image, (b) unmixing image, (c) image and (d) mean clusters

- Visualization: the next example shows the PCA scores of several biomolecules.

Figure 4 PCA scores

## Further upcoming developments

- [ ] Graphical User Interface
- [ ] Supervised tools
- [ ] Deep learning (CNN)
- [x] Optimizing speed and organizing main code
- [x] More examples

## Installation

The default working interface is Python 3. The library comes with 8 different hyperspectral examples and analyses. A manual presents the relevant functions and examples.

Install the library and required packages (admin rights may be required):

```bash
pip install spectramap
```

## Examples

#### Reading and processing an spc file

Among the examples, there is a ps.spc file for this example. The next lines show some basic tools. The function read_single_spc reads the file from its path.

```python
from spectramap import spmap as sp   # reading spmap
pigm = sp.hyper_object('pigment')    # creating the hyper object
pigm.read_single_spc('pigment')      # reading the spc file
pigm.keep(400, 1800)                 # keeping the fingerprint region
pigm_original = pigm.copy()          # copying the hyper object
pigm_original.set_label('original')  # renaming the hyper object to original
pigm.set_label('processed')          # renaming the hyper object to processed
pigm.rubber()                        # basic rubber-band baseline correction
pigm.gol(15, 3, 0)                   # Savitzky-Golay filter
both = sp.hyper_object('result')     # creating an auxiliary hyper object
both.concat([pigm_original, pigm])   # concatenating the original and processed data
both.show(False)                     # showing both spectra
```

```python
both.show_stack(0.2, 0, 'auto')  # advanced stack visualization
```

Figure 6 Second visualization

#### Reading and processing a comma-separated values file with depth profiling

In this example, there is a layers.csv.xz file. The next lines show some basic tools. The function read_csv_xz requires the path of the file, and the csv file must keep the structure described in the manual (hyperspectral object). The example shows how to analyse spectroscopic profile data.

```python
from spectramap import spmap as sp    # reading the spectramap library
stack = sp.hyper_object('plastics')   # creating the hyper_object
stack.read_csv_xz('layers')           # reading the compressed csv of the plastics profile
stack.keep(500, 1800)                 # keeping the fingerprint region
stack.rubber()                        # rubber-band baseline correction
stack.vector()                        # vector normalization
endmember = stack.vca(6)              # number of endmembers
endmember.show_stack(0.2, 0, 'auto')  # advanced stack plot of the endmembers
```

```python
abundance = stack.abundance(endmember, 'NNLS')  # estimation of concentrations by NNLS
abundance.set_resolution(0.01)                  # setting the step-size resolution
abundance.show_profile('auto')                  # plotting the spectral profile
```
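For intuition, the NNLS abundance step amounts to solving a non-negative least-squares problem per spectrum against the endmember matrix. Below is a minimal standalone sketch of that idea using scipy.optimize.nnls on synthetic data; it is illustrative only and does not use the spectramap API.

```python
import numpy as np
from scipy.optimize import nnls

# toy endmember matrix: 3 endmembers x 100 wavenumber channels (rows are spectra)
rng = np.random.default_rng(0)
endmembers = np.abs(rng.normal(size=(3, 100)))

# synthetic mixed spectrum built from known fractions of the endmembers
true_abundance = np.array([0.5, 0.3, 0.2])
mixed = true_abundance @ endmembers

# NNLS solves min ||E.T @ a - s|| subject to a >= 0, one spectrum at a time
abundance, residual = nnls(endmembers.T, mixed)
print(abundance)  # approximately [0.5, 0.3, 0.2]
```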

#### Processing hyperspectral images by VCA and Clustering

Coming soon. For now, check the manual.

#### Processing hyperspectral data of plastics by PCA and PLS-LDA

In this example, there is a layers.csv.xz file. The next processing steps compute unsupervised principal component analysis and supervised partial least squares + linear discriminant analysis. The scatter plots show the separation of the plastics: red, light_blue and blue are the most distinct ones.

```python
from spectramap import spmap as sp                # reading the spectramap library
sample = sp.hyper_object("sample")                # creating the hyper_object
sample.read_csv_xz("layers")                      # reading the compressed csv of the plastics profile
sample.remove(1800, 2700)                         # removing the silent region
sample.keep(400, 3300)                            # keeping the fingerprint and high-wavenumber regions
sample.gaussian(2)                                # applying a Gaussian filter
sample.rubber()                                   # rubber-band baseline correction
sample.kmeans(2)                                  # k-means with 2 clusters
sample.rename_label([1, 2], ["first", "second"])  # renaming the labels
sub_label = sample.get_label()                    # saving the sub_label
sub_label.name = "sub_label"                      # renaming the title of sub_label
sample.show_stack(0, 0, "auto")                   # showing a stack
```

```python
sample.kmeans(6)                 # k-means clustering example for main_label
main_label = sample.get_label()  # saving the main_label
main_label.name = "main_label"   # renaming the title of the label
sample.show_stack(0, 0, "auto")  # showing the 6 components
```

```python
scores_pca, loadings_pca = sample.pca(3, False)             # PCA with 3 components
scores_pca.show_scatter("auto", main_label, sub_label, 15)  # showing the scatter plot with sub_label
```

```python
scores_pls, loadings_pls = sample.pls_lda(3, False, 0.7)    # PLS-LDA with 3 components and 70% training data
scores_pls.show_scatter("auto", main_label, sub_label, 15)  # showing the scatter plot with sub_label
```

The next figures show the precision, recall (sensitivity), f1-score (the harmonic mean of precision and recall) and support for the 6 components, together with the accuracy and average accuracy.
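These metrics correspond to a standard classification report; a small self-contained sketch of how such a report is typically produced with scikit-learn is shown below (the labels are made up and this is not spectramap's internal call).

```python
from sklearn.metrics import classification_report, accuracy_score

# hypothetical true and predicted class labels for a handful of spectra
y_true = ["PE", "PP", "PS", "PE", "PP", "PS", "PE", "PP"]
y_pred = ["PE", "PP", "PS", "PE", "PS", "PS", "PE", "PP"]

# per-class precision, recall, f1-score and support, as in the figures above
print(classification_report(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))
```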

#### Raman wavenumber calibration with paracetamol

Reproducibility and replicability are fundamental for Raman spectroscopy. One common approach to wavenumber-axis calibration is discussed in this section. The requirements are a paracetamol sample (powder), a calibration file with well-known peaks, and a polynomial regression.

```python
from spectramap import spmap as sp  # reading the spectramap library
import pandas as pd
import numpy as np

### Paracetamol
path = 'para.csv'                                    # path of the paracetamol data
table = pd.read_table(path, sep=',', header=None)    # reading the data
table['label'] = "Para"                              # creating the label
table[['x', 'y']] = np.zeros((20, 2))                # creating fake positions

### Processing
mp = sp.hyper_object("Para")                         # creating the hyper object
mp.set_data(table.iloc[:, :len(table.columns) - 3])  # reading the intensity
mp.set_position(table[['x', 'y']])                   # reading the positions
mp.set_label(pd.Series(table['label']))              # reading the labels
copy = mp.copy()                                     # copying the data
peaks = copy.calibration_peaks(mp, 0.05)             # finding the paracetamol peaks (next plot)
```

```python
copy.calibration_regression(peaks)  # determining the regression for the calibration
```

```python
mp.set_wavenumber(copy.get_wavenumber())  # setting the new wavenumber axis on the original mp
mp.show(True)                             # showing the calibrated data
mp.add_peaks(0.1, 'r')                    # adding peaks (not in inline mode)
mp.save_data("", "calibration")           # saving the calibrated data
```
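Conceptually, the regression step fits a low-order polynomial that maps the measured paracetamol peak positions onto their reference wavenumbers and then applies that mapping to the whole axis. A minimal numpy sketch of that idea follows; the peak positions and reference values are illustrative placeholders, not the calibration table shipped with the package.

```python
import numpy as np

# measured peak positions (uncalibrated axis) vs. reference wavenumbers (cm-1), placeholders
measured = np.array([210.0, 325.0, 480.0, 640.0, 850.0])
reference = np.array([651.6, 857.9, 1168.5, 1323.9, 1648.4])

# low-order polynomial regression for the wavenumber-axis correction
coefficients = np.polyfit(measured, reference, deg=2)
calibrate = np.poly1d(coefficients)

# apply the fitted mapping to the whole uncalibrated axis
uncalibrated_axis = np.linspace(150, 900, 1024)
calibrated_axis = calibrate(uncalibrated_axis)
```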

#### Processing hyperspectral images from biological tissue

Coming soon. For now, check the manual.

#### Raman Intensity Calibration

The next lines show how to calibrate the intensity axis in Raman spectroscopy. It requires a standard spectrum of a halogen lamp and the experimental measurement of the same halogen lamp with the Raman instrument.

```python
from spectramap import spmap as sp                   # reading the spectramap package
reference_trial = sp.hyper_object("reference")       # creating the reference hyper object
reference_trial.read_single_spc(path + "reference")  # reading the reference spectrum
reference_trial.show(True)                           # showing the spectrum in the next plot
```

Now the experimental spectrum:

```python
measured_trial = sp.hyper_object("measured")   # creating the hyper object
measured_trial.read_single_spc(path + "lamp")  # reading the data
measured_trial.keep(400, 1900)                 # keeping the fingerprint region
measured_trial.show(True)                      # showing the plot, as the next figure shows
```

Reading the Raman sample:

```python
sample = sp.hyper_object("sample")       # declaring the hyper object
sample.read_single_spc(path + "sample")  # reading the tissue data
sample.keep(400, 1900)                   # keeping the fingerprint region
sample.show(True)                        # showing the plot in the next figure
```

Calibration of the Raman sample:

```python
sample.intensity_calibration(reference_trial, measured_trial)  # intensity calibration function
sample.show(True)                                              # showing the calibrated data in the next figure
```
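A common way such an intensity calibration works is to divide the certified lamp emission by the measured lamp spectrum, giving a per-wavenumber correction factor that is then applied to the sample. The numpy sketch below illustrates only that general idea on synthetic spectra; it is an assumption about the approach, not the spectramap implementation.

```python
import numpy as np

# synthetic spectra on a common wavenumber axis (all arrays share the same length)
axis = np.linspace(400, 1900, 500)
reference_lamp = np.exp(-((axis - 1100) / 600.0) ** 2)               # certified lamp emission (stand-in)
instrument_response = 0.5 + 0.4 * np.sin(axis / 300.0) ** 2          # made-up spectral response
measured_lamp = reference_lamp * instrument_response                 # lamp as seen by the instrument
sample = np.exp(-((axis - 1000) / 30.0) ** 2) * instrument_response  # distorted sample spectrum

# per-wavenumber correction factor: certified emission / measured emission
correction = reference_lamp / measured_lamp

# corrected sample spectrum (instrument response removed)
sample_corrected = sample * correction
```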

## Working Team

## License

MIT

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

## References

[1] F. Pedregosa, G. Varoquaux, and A. Gramfort, “Scikit-learn: Machine Learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.

[2] J. M. P. Nascimento and J. M. B. Dias, “Vertex component analysis: A fast algorithm to unmix hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 4, pp. 898–910, 2005, doi: 10.1109/TGRS.2005.844293.

[3] Z. M. Zhang, S. Chen, and Y. Z. Liang, “Baseline correction using adaptive iteratively reweighted penalized least squares,” Analyst, vol. 135, no. 5, pp. 1138–1146, 2010, doi: 10.1039/b922045c.

[4] L. McInnes, J. Healy, and S. Astels, “hdbscan: Hierarchical density based clustering,” Journal of Open Source Software, vol. 2, no. 11, 2017.

%package help
Summary:        Development documents and examples for spectramap
Provides:       python3-spectramap-doc

%description help

## *SpectraMap (SpMap): Hyperspectral package for spectroscopists in Python*

Hyperspectral imaging has important applications in medicine, agriculture, pharmaceuticals, space, food science and many emerging fields. The analysis of hyperspectral images requires advanced software, and upcoming developments in fast hyperspectral imaging, automation and deep learning demand innovative software for analyzing hyperspectral data. Figure 1 shows hyperspectral imaging with a standard spectrometer instrument. More information regarding novel medical imaging can be found in advances in imaging.

Figure 1 Raman Imaging Instrument

## Features

The package includes standard tools such as reading, preprocessing, processing and visualization. The design focuses on hyperspectral images from Raman datasets, but the package extends to other spectroscopies as long as the data follows the expected data structure. Some features are shown in the next figures.

- Preprocessing: tools such as smoothing, spike removal, normalization and advanced baseline corrections are included. Figure 2 illustrates the mean and standard deviation of a tissue signature.

Figure 2 Visualization of tissue Raman signature

- Processing: tools such as unmixing, PCA, PLS, VCA, and hierarchical and k-means clustering are included. Figure 3 displays an application of clustering for locating microplastics in complex matrices.

Figure 3 Segmentation by clustering: (a) clustered image, (b) unmixing image, (c) image and (d) mean clusters

- Visualization: the next example shows the PCA scores of several biomolecules.

Figure 4 PCA scores

## Further upcoming developments

- [ ] Graphical User Interface
- [ ] Supervised tools
- [ ] Deep learning (CNN)
- [x] Optimizing speed and organizing main code
- [x] More examples

## Installation

The default working interface is Python 3. The library comes with 8 different hyperspectral examples and analyses. A manual presents the relevant functions and examples.

Install the library and required packages (admin rights may be required):

```bash
pip install spectramap
```

## Examples

#### Reading and processing an spc file

Among the examples, there is a ps.spc file for this example. The next lines show some basic tools. The function read_single_spc reads the file from its path.

```python
from spectramap import spmap as sp   # reading spmap
pigm = sp.hyper_object('pigment')    # creating the hyper object
pigm.read_single_spc('pigment')      # reading the spc file
pigm.keep(400, 1800)                 # keeping the fingerprint region
pigm_original = pigm.copy()          # copying the hyper object
pigm_original.set_label('original')  # renaming the hyper object to original
pigm.set_label('processed')          # renaming the hyper object to processed
pigm.rubber()                        # basic rubber-band baseline correction
pigm.gol(15, 3, 0)                   # Savitzky-Golay filter
both = sp.hyper_object('result')     # creating an auxiliary hyper object
both.concat([pigm_original, pigm])   # concatenating the original and processed data
both.show(False)                     # showing both spectra
```

```python
both.show_stack(0.2, 0, 'auto')  # advanced stack visualization
```

Figure 6 Second visualization

#### Reading and processing a comma-separated values file with depth profiling

In this example, there is a layers.csv.xz file. The next lines show some basic tools. The function read_csv_xz requires the path of the file, and the csv file must keep the structure described in the manual (hyperspectral object). The example shows how to analyse spectroscopic profile data.

```python
from spectramap import spmap as sp    # reading the spectramap library
stack = sp.hyper_object('plastics')   # creating the hyper_object
stack.read_csv_xz('layers')           # reading the compressed csv of the plastics profile
stack.keep(500, 1800)                 # keeping the fingerprint region
stack.rubber()                        # rubber-band baseline correction
stack.vector()                        # vector normalization
endmember = stack.vca(6)              # number of endmembers
endmember.show_stack(0.2, 0, 'auto')  # advanced stack plot of the endmembers
```

```python
abundance = stack.abundance(endmember, 'NNLS')  # estimation of concentrations by NNLS
abundance.set_resolution(0.01)                  # setting the step-size resolution
abundance.show_profile('auto')                  # plotting the spectral profile
```
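For intuition, the NNLS abundance step amounts to solving a non-negative least-squares problem per spectrum against the endmember matrix. Below is a minimal standalone sketch of that idea using scipy.optimize.nnls on synthetic data; it is illustrative only and does not use the spectramap API.

```python
import numpy as np
from scipy.optimize import nnls

# toy endmember matrix: 3 endmembers x 100 wavenumber channels (rows are spectra)
rng = np.random.default_rng(0)
endmembers = np.abs(rng.normal(size=(3, 100)))

# synthetic mixed spectrum built from known fractions of the endmembers
true_abundance = np.array([0.5, 0.3, 0.2])
mixed = true_abundance @ endmembers

# NNLS solves min ||E.T @ a - s|| subject to a >= 0, one spectrum at a time
abundance, residual = nnls(endmembers.T, mixed)
print(abundance)  # approximately [0.5, 0.3, 0.2]
```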

#### Processing hyperspectral images by VCA and Clustering

Coming soon. For now, check the manual.

#### Processing hyperspectral data of plastics by PCA and PLS-LDA

In this example, there is a layers.csv.xz file. The next processing steps compute unsupervised principal component analysis and supervised partial least squares + linear discriminant analysis. The scatter plots show the separation of the plastics: red, light_blue and blue are the most distinct ones.

```python
from spectramap import spmap as sp                # reading the spectramap library
sample = sp.hyper_object("sample")                # creating the hyper_object
sample.read_csv_xz("layers")                      # reading the compressed csv of the plastics profile
sample.remove(1800, 2700)                         # removing the silent region
sample.keep(400, 3300)                            # keeping the fingerprint and high-wavenumber regions
sample.gaussian(2)                                # applying a Gaussian filter
sample.rubber()                                   # rubber-band baseline correction
sample.kmeans(2)                                  # k-means with 2 clusters
sample.rename_label([1, 2], ["first", "second"])  # renaming the labels
sub_label = sample.get_label()                    # saving the sub_label
sub_label.name = "sub_label"                      # renaming the title of sub_label
sample.show_stack(0, 0, "auto")                   # showing a stack
```

```python
sample.kmeans(6)                 # k-means clustering example for main_label
main_label = sample.get_label()  # saving the main_label
main_label.name = "main_label"   # renaming the title of the label
sample.show_stack(0, 0, "auto")  # showing the 6 components
```

```python
scores_pca, loadings_pca = sample.pca(3, False)             # PCA with 3 components
scores_pca.show_scatter("auto", main_label, sub_label, 15)  # showing the scatter plot with sub_label
```

```python
scores_pls, loadings_pls = sample.pls_lda(3, False, 0.7)    # PLS-LDA with 3 components and 70% training data
scores_pls.show_scatter("auto", main_label, sub_label, 15)  # showing the scatter plot with sub_label
```

The next figures show the precision, recall (sensitivity), f1-score (the harmonic mean of precision and recall) and support for the 6 components, together with the accuracy and average accuracy.
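These metrics correspond to a standard classification report; a small self-contained sketch of how such a report is typically produced with scikit-learn is shown below (the labels are made up and this is not spectramap's internal call).

```python
from sklearn.metrics import classification_report, accuracy_score

# hypothetical true and predicted class labels for a handful of spectra
y_true = ["PE", "PP", "PS", "PE", "PP", "PS", "PE", "PP"]
y_pred = ["PE", "PP", "PS", "PE", "PS", "PS", "PE", "PP"]

# per-class precision, recall, f1-score and support, as in the figures above
print(classification_report(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))
```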

#### Raman wavenumber calibration with paracetamol

Reproducibility and replicability are fundamental for Raman spectroscopy. One common approach to wavenumber-axis calibration is discussed in this section. The requirements are a paracetamol sample (powder), a calibration file with well-known peaks, and a polynomial regression.

```python
from spectramap import spmap as sp  # reading the spectramap library
import pandas as pd
import numpy as np

### Paracetamol
path = 'para.csv'                                    # path of the paracetamol data
table = pd.read_table(path, sep=',', header=None)    # reading the data
table['label'] = "Para"                              # creating the label
table[['x', 'y']] = np.zeros((20, 2))                # creating fake positions

### Processing
mp = sp.hyper_object("Para")                         # creating the hyper object
mp.set_data(table.iloc[:, :len(table.columns) - 3])  # reading the intensity
mp.set_position(table[['x', 'y']])                   # reading the positions
mp.set_label(pd.Series(table['label']))              # reading the labels
copy = mp.copy()                                     # copying the data
peaks = copy.calibration_peaks(mp, 0.05)             # finding the paracetamol peaks (next plot)
```

```python
copy.calibration_regression(peaks)  # determining the regression for the calibration
```

```python
mp.set_wavenumber(copy.get_wavenumber())  # setting the new wavenumber axis on the original mp
mp.show(True)                             # showing the calibrated data
mp.add_peaks(0.1, 'r')                    # adding peaks (not in inline mode)
mp.save_data("", "calibration")           # saving the calibrated data
```
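Conceptually, the regression step fits a low-order polynomial that maps the measured paracetamol peak positions onto their reference wavenumbers and then applies that mapping to the whole axis. A minimal numpy sketch of that idea follows; the peak positions and reference values are illustrative placeholders, not the calibration table shipped with the package.

```python
import numpy as np

# measured peak positions (uncalibrated axis) vs. reference wavenumbers (cm-1), placeholders
measured = np.array([210.0, 325.0, 480.0, 640.0, 850.0])
reference = np.array([651.6, 857.9, 1168.5, 1323.9, 1648.4])

# low-order polynomial regression for the wavenumber-axis correction
coefficients = np.polyfit(measured, reference, deg=2)
calibrate = np.poly1d(coefficients)

# apply the fitted mapping to the whole uncalibrated axis
uncalibrated_axis = np.linspace(150, 900, 1024)
calibrated_axis = calibrate(uncalibrated_axis)
```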

#### Processing hyperspectral images from biological tissue

Coming soon. For now, check the manual.

#### Raman Intensity Calibration

The next lines show how to calibrate the intensity axis in Raman spectroscopy. It requires a standard spectrum of a halogen lamp and the experimental measurement of the same halogen lamp with the Raman instrument.

```python
from spectramap import spmap as sp                   # reading the spectramap package
reference_trial = sp.hyper_object("reference")       # creating the reference hyper object
reference_trial.read_single_spc(path + "reference")  # reading the reference spectrum
reference_trial.show(True)                           # showing the spectrum in the next plot
```

Now the experimental spectrum:

```python
measured_trial = sp.hyper_object("measured")   # creating the hyper object
measured_trial.read_single_spc(path + "lamp")  # reading the data
measured_trial.keep(400, 1900)                 # keeping the fingerprint region
measured_trial.show(True)                      # showing the plot, as the next figure shows
```

Reading the Raman sample:

```python
sample = sp.hyper_object("sample")       # declaring the hyper object
sample.read_single_spc(path + "sample")  # reading the tissue data
sample.keep(400, 1900)                   # keeping the fingerprint region
sample.show(True)                        # showing the plot in the next figure
```

Calibration of the Raman sample:

```python
sample.intensity_calibration(reference_trial, measured_trial)  # intensity calibration function
sample.show(True)                                              # showing the calibrated data in the next figure
```
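A common way such an intensity calibration works is to divide the certified lamp emission by the measured lamp spectrum, giving a per-wavenumber correction factor that is then applied to the sample. The numpy sketch below illustrates only that general idea on synthetic spectra; it is an assumption about the approach, not the spectramap implementation.

```python
import numpy as np

# synthetic spectra on a common wavenumber axis (all arrays share the same length)
axis = np.linspace(400, 1900, 500)
reference_lamp = np.exp(-((axis - 1100) / 600.0) ** 2)               # certified lamp emission (stand-in)
instrument_response = 0.5 + 0.4 * np.sin(axis / 300.0) ** 2          # made-up spectral response
measured_lamp = reference_lamp * instrument_response                 # lamp as seen by the instrument
sample = np.exp(-((axis - 1000) / 30.0) ** 2) * instrument_response  # distorted sample spectrum

# per-wavenumber correction factor: certified emission / measured emission
correction = reference_lamp / measured_lamp

# corrected sample spectrum (instrument response removed)
sample_corrected = sample * correction
```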

## Working Team

## License

MIT

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

## References

[1] F. Pedregosa, G. Varoquaux, and A. Gramfort, “Scikit-learn: Machine Learning in Python,” Journal of Machine Learning Research, vol. 12, pp. 2825–2830, 2011.

[2] J. M. P. Nascimento and J. M. B. Dias, “Vertex component analysis: A fast algorithm to unmix hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 4, pp. 898–910, 2005, doi: 10.1109/TGRS.2005.844293.

[3] Z. M. Zhang, S. Chen, and Y. Z. Liang, “Baseline correction using adaptive iteratively reweighted penalized least squares,” Analyst, vol. 135, no. 5, pp. 1138–1146, 2010, doi: 10.1039/b922045c.

[4] L. McInnes, J. Healy, and S. Astels, “hdbscan: Hierarchical density based clustering,” Journal of Open Source Software, vol. 2, no. 11, 2017.

%prep
%autosetup -n spectramap-0.5.3

%build
%py3_build

%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .

%files -n python3-spectramap -f filelist.lst
%dir %{python3_sitelib}/*

%files help -f doclist.lst
%{_docdir}/*

%changelog
* Tue May 30 2023 Python_Bot - 0.5.3-1
- Package Spec generated