author    CoprDistGit <infra@openeuler.org>  2023-04-12 06:40:15 +0000
committer CoprDistGit <infra@openeuler.org>  2023-04-12 06:40:15 +0000
commit    c9cbc0e15a9f28d5328cd461df18f8b3c3f6da44 (patch)
tree      b34a6341597445fe2d8f364db9396828b7cda364
parent    6d0cbda5b83417ab722c5f0e09b55ba367f764a1 (diff)
automatic import of python-blechpy (openeuler20.03)
-rw-r--r--  .gitignore           |   1
-rw-r--r--  python-blechpy.spec  | 679
-rw-r--r--  sources              |   1
3 files changed, 681 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..68e2f39 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/blechpy-2.1.39.tar.gz
diff --git a/python-blechpy.spec b/python-blechpy.spec
new file mode 100644
index 0000000..69af7ed
--- /dev/null
+++ b/python-blechpy.spec
@@ -0,0 +1,679 @@
+%global _empty_manifest_terminate_build 0
+Name: python-blechpy
+Version: 2.1.39
+Release: 1
+Summary:	Package for extracting, processing and analyzing Intan and OpenEphys data
+License: MIT License
+URL: https://github.com/nubs01/blechpy
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/bc/ab/be67984b9fbdeff6f4a17873251fe7df380c189b97aa9b4b386bc14819c9/blechpy-2.1.39.tar.gz
+BuildArch: noarch
+
+Requires: python3-easygui
+Requires: python3-tables
+Requires: python3-numpy
+Requires: python3-datashader
+Requires: python3-scipy
+Requires: python3-scikit-learn
+Requires: python3-tqdm
+Requires: python3-numba
+Requires: python3-matplotlib
+Requires: python3-pygments
+Requires: python3-mistune
+Requires: python3-ipython
+Requires: python3-jupyter-core
+Requires: python3-entrypoints
+Requires: python3-umap-learn
+Requires: python3-holoviews
+Requires: python3-h5py
+Requires: python3-statsmodels
+Requires: python3-seaborn
+Requires: python3-appdirs
+Requires: python3-joblib
+Requires: python3-prompt-toolkit
+Requires: python3-pywavelets
+Requires: python3-imageio
+Requires: python3-PyYAML
+
+%description
+See the <a href='https://nubs01.github.io/blechpy'>full documentation</a> here.
+
+- [blechpy](#blechpy)
+- [Installation](#installation)
+- [Usage](#usage)
+- [Datasets](#datasets)
+  * [Starting with a raw dataset](#starting-with-a-raw-dataset)
+ + [Create dataset](#create-dataset)
+ + [Initialize Parameters](#initialize-parameters)
+ + [Basic Processing](#basic-processing)
+ + [Viewing a Dataset](#viewing-a-dataset)
+ * [Loading an existing dataset](#loading-an-existing-dataset)
+ * [Import processed dataset into dataset framework](#import-processed-dataset-into-dataset-framework)
+- [Experiments](#experiments)
+ * [Creating an experiment](#creating-an-experiment)
+ * [Editing recordings](#editing-recordings)
+ * [Held unit detection](#held-unit-detection)
+
+<small><i><a href='http://ecotrust-canada.github.io/markdown-toc/'>Table of contents generated with markdown-toc</a></i></small>
+
+# blechpy
+This is a package to extract, process and analyze electrophysiology data recorded with Intan or OpenEphys recording systems. This package is customized to store experiment and analysis metadata for the BLECh Lab (Katz lab) @ Brandeis University, but can readily be used and customized for other labs.
+
+# Installation
+I recommend installing miniconda to handle your virtual environments.
+Create a miniconda environment with:
+```bash
+conda create -n blechpy python==3.7.13
+conda activate blechpy
+```
+Now you can install this package simply with pip:
+```bash
+pip install blechpy
+```
+
+If you want to update blechpy to the latest version:
+```bash
+pip install blechpy -U
+```
+
+Now you can deal with all of your data from within an ipython terminal:
+`ipython`
+
+```python
+import blechpy
+```
+
+### Ubuntu 20.04 LTS+
+With Ubuntu 20 or higher, you will get a segmentation fault when importing blechpy because the numba version 0.48 installed through pip is corrupted. You will need to reinstall it via conda:
+
+```bash
+conda install numba=0.48.0
+```
+
+# Usage
+blechpy handles experimental metadata using data_objects which are tied to a directory encompassing some level of data. Existing types of data_objects include:
+* dataset
+ * object for a single recording session
+* experiment
+    * object encompassing an ordered set of recordings from a single animal
+ * individual recordings must first be processed as datasets
+* project
+    * object that can encompass multiple experiments & data groups and allow analysis of group differences
+
+# Datasets
+Right now this pipeline is only compatible with recordings made with Intan's 'one file per channel' or 'one file per signal type' recording settings.
+
+## Starting with a raw dataset
+### Create dataset
+With a brand new *shiny* recording you can initialize a dataset with:
+```python
+dat = blechpy.dataset('path/to/recording/directory')
+# or
+dat = blechpy.dataset() # for user interface to select directory
+```
+This will create a new dataset object and set up basic file paths.
+If you're working via SSH, or just want a command-line interface instead of a GUI, you can use the keyword argument `shell=True`.
+Only do this when starting data processing for the first time: if you use it on an already-processed dataset, the dataset will be overwritten.
+Use `blechpy.load_dataset()` instead to load an existing dataset (see below).
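+
+For example, using the `shell=True` keyword mentioned above, a session over SSH could start with:
+```python
+dat = blechpy.dataset('path/to/recording/directory', shell=True)  # prompts appear in the terminal instead of a GUI
+```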
+
+### Initialize Parameters
+```python
+dat.initParams()
+```
+Initializes all analysis parameters with a series of prompts.
+See the prompts for optional keyword params.
+It primarily sets up parameters for:
+* Flattening Port & Channel in Electrode designations
+* Common average referencing
+* Labelling areas of electrodes
+* Labelling digital inputs & outputs
+* Labelling dead electrodes
+* Clustering parameters
+* Spike array creation
+* PSTH creation
+* Palatability/Identity Responsiveness calculations
+
+Initial parameters are pulled from default JSON files in the `dio` subpackage.
+Parameters for a dataset are written to JSON files in a *parameters* folder in the recording directory.
+
+Useful `dat.initParams()` arguments:
+* `data_quality='hp'`: increases the strictness of clustering, increases the total number of clusters, and widens the spike-sorting window to -0.75 to 1 s
+* `car_keyword='bilateral64'`: auto-assigns the channel mapping to match the Omnetics-connector Open Ephys 64-channel EIB with 2-site implantation
+* `car_keyword='2site_OE64'`: auto-assigns the channel mapping to match the Hirose-connector Open Ephys 64-channel EIB with 2-site implantation
+* `shell=True`: bypasses the GUI in favor of a shell interface; useful when working over SSH or when the GUI is broken
+
+### Basic Processing
+
+The most basic data extraction workflow would be:
+```python
+dat = blechpy.dataset('/path/to/data/dir/')
+dat.initParams() # See function docstring; many optional parameters can eliminate the need for user interaction
+dat.extract_data() # Extracts raw data into HDF5 store
+dat.create_trial_list() # Creates table of digital input triggers
+dat.mark_dead_channels() # View traces and label electrodes as dead
+dat.mark_dead_channels([1, 2, 3]) # alternatively, if you already know which channels are dead, pass their indices directly
+dat.common_average_reference() # Apply common average referencing. Replaces raw with referenced data in the HDF5 store
+dat.detect_spikes()
+dat.blech_clust_run() # Cluster data using GMM
+dat.blech_clust_run(data_quality='noisy') # alternative: re-run clustering with less strict parameters
+dat.sort_spikes(electrode_number) # Split, merge and label clusters as units
+```
+Check `blechpy/datastructures/dataset.py` to see what functions are available.
+
+### Preferred Workflow
+
+This workflow sets some convenient parameter defaults.
+```python
+dat = blechpy.dataset('/path/to/data/dir/')
+dat.initParams(data_quality='hp', car_keyword='2site_OE64') # 'hp' sets stricter clustering criteria; '2site_OE64' automatically maps channels to the Hirose-connector 64-channel Open Ephys EIB in a 2-site implantation
+dat.extract_data()
+dat.create_trial_list()
+dat.mark_dead_channels([1, 2, 3]) # pass a list of dead channel indices to bypass GUI marking; requires that you noted them during drive building and/or recording
+dat.common_average_reference()
+dat.detect_spikes()
+dat.blech_clust_run(umap=True) # Cluster with UMAP instead of GMM, supposedly better clustering
+dat.sort_spikes(electrode_number) # Split, merge and label clusters as units
+```
+
+### Checking processing progress
+
+```python
+dat.processing_status
+```
+This provides an overview of the basic data extraction and processing steps that still need to be taken.
+
+### Viewing a Dataset
+A dataset can be easily viewed with `print(dat)`.
+A summary can also be exported to a text file with `dat.export_to_text()`.
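+
+For example:
+```python
+print(dat)            # print a dataset summary to the console
+dat.export_to_text()  # export the same summary to a text file
+```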
+
+## Loading an existing dataset
+```python
+dat = blechpy.load_dataset() # load an existing dataset from .p file
+# or
+dat = blechpy.load_dataset('path/to/recording/directory')
+# or
+dat = blechpy.load_dataset('path/to/dataset/save/file.p')
+```
+
+## Import processed dataset into dataset framework
+```python
+dat = blechpy.port_in_dataset()
+# or
+dat = blechpy.port_in_dataset('/path/to/recording/directory')
+```
+
+# Experiments
+## Creating an experiment
+```python
+exp = blechpy.experiment('/path/to/dir/encasing/recordings')
+# or
+exp = blechpy.experiment()
+```
+This will initialize an experiment with all recording folders within the chosen directory.
+
+## Editing recordings
+```python
+exp.add_recording('/path/to/new/recording/dir/') # Add recording
+exp.remove_recording('rec_label') # remove a recording dir
+```
+Recordings are assigned labels when added to the experiment; these labels can be used to easily reference each recording.
+
+## Held unit detection
+```python
+exp.detect_held_units()
+```
+Uses raw waveforms from sorted units to determine whether units can be confidently classified as "held". Results are stored in `exp.held_units` as a pandas DataFrame.
+This also creates plots and exports data to a created directory:
+`/path/to/experiment/experiment-name_analysis`
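+
+For example, you can inspect the results afterwards (a minimal sketch; the exact DataFrame columns depend on your data):
+```python
+exp.detect_held_units()
+held = exp.held_units  # pandas DataFrame of units held across recordings
+print(held.head())     # view the first few rows
+```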
+
+# Analysis
+The `blechpy.analysis` module has a lot of useful tools for analyzing your data.
+Most notable is the `blechpy.analysis.poissonHMM` module, which allows fitting of Poisson hidden Markov models (HMMs) to your data. See the tutorials.
+
+
+
+
+%package -n python3-blechpy
+Summary:	Package for extracting, processing and analyzing Intan and OpenEphys data
+Provides: python-blechpy
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-blechpy
+See the <a href='https://nubs01.github.io/blechpy'>full documentation</a> here.
+
+- [blechpy](#blechpy)
+- [Installation](#installation)
+- [Usage](#usage)
+- [Datasets](#datasets)
+  * [Starting with a raw dataset](#starting-with-a-raw-dataset)
+ + [Create dataset](#create-dataset)
+ + [Initialize Parameters](#initialize-parameters)
+ + [Basic Processing](#basic-processing)
+ + [Viewing a Dataset](#viewing-a-dataset)
+ * [Loading an existing dataset](#loading-an-existing-dataset)
+ * [Import processed dataset into dataset framework](#import-processed-dataset-into-dataset-framework)
+- [Experiments](#experiments)
+ * [Creating an experiment](#creating-an-experiment)
+ * [Editing recordings](#editing-recordings)
+ * [Held unit detection](#held-unit-detection)
+
+<small><i><a href='http://ecotrust-canada.github.io/markdown-toc/'>Table of contents generated with markdown-toc</a></i></small>
+
+# blechpy
+This is a package to extract, process and analyze electrophysiology data recorded with Intan or OpenEphys recording systems. This package is customized to store experiment and analysis metadata for the BLECh Lab (Katz lab) @ Brandeis University, but can readily be used and customized for other labs.
+
+# Installation
+I recommend installing miniconda to handle your virtual environments.
+Create a miniconda environment with:
+```bash
+conda create -n blechpy python==3.7.13
+conda activate blechpy
+```
+Now you can install this package simply with pip:
+```bash
+pip install blechpy
+```
+
+If you want to update blechpy to the latest version:
+```bash
+pip install blechpy -U
+```
+
+Now you can deal with all of your data from within an ipython terminal:
+`ipython`
+
+```python
+import blechpy
+```
+
+### Ubuntu 20.04 LTS+
+With Ubuntu 20 or higher, you will get a segmentation fault when importing blechpy because the numba version 0.48 installed through pip is corrupted. You will need to reinstall it via conda:
+
+```bash
+conda install numba=0.48.0
+```
+
+# Usage
+blechpy handles experimental metadata using data_objects which are tied to a directory encompassing some level of data. Existing types of data_objects include:
+* dataset
+ * object for a single recording session
+* experiment
+    * object encompassing an ordered set of recordings from a single animal
+ * individual recordings must first be processed as datasets
+* project
+    * object that can encompass multiple experiments & data groups and allow analysis of group differences
+
+# Datasets
+Right now this pipeline is only compatible with recordings made with Intan's 'one file per channel' or 'one file per signal type' recording settings.
+
+## Starting with a raw dataset
+### Create dataset
+With a brand new *shiny* recording you can initialize a dataset with:
+```python
+dat = blechpy.dataset('path/to/recording/directory')
+# or
+dat = blechpy.dataset() # for user interface to select directory
+```
+This will create a new dataset object and set up basic file paths.
+If you're working via SSH, or just want a command-line interface instead of a GUI, you can use the keyword argument `shell=True`.
+Only do this when starting data processing for the first time: if you use it on an already-processed dataset, the dataset will be overwritten.
+Use `blechpy.load_dataset()` instead to load an existing dataset (see below).
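+
+For example, using the `shell=True` keyword mentioned above, a session over SSH could start with:
+```python
+dat = blechpy.dataset('path/to/recording/directory', shell=True)  # prompts appear in the terminal instead of a GUI
+```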
+
+### Initialize Parameters
+```python
+dat.initParams()
+```
+Initializes all analysis parameters with a series of prompts.
+See the prompts for optional keyword params.
+It primarily sets up parameters for:
+* Flattening Port & Channel in Electrode designations
+* Common average referencing
+* Labelling areas of electrodes
+* Labelling digital inputs & outputs
+* Labelling dead electrodes
+* Clustering parameters
+* Spike array creation
+* PSTH creation
+* Palatability/Identity Responsiveness calculations
+
+Initial parameters are pulled from default JSON files in the `dio` subpackage.
+Parameters for a dataset are written to JSON files in a *parameters* folder in the recording directory.
+
+Useful `dat.initParams()` arguments:
+* `data_quality='hp'`: increases the strictness of clustering, increases the total number of clusters, and widens the spike-sorting window to -0.75 to 1 s
+* `car_keyword='bilateral64'`: auto-assigns the channel mapping to match the Omnetics-connector Open Ephys 64-channel EIB with 2-site implantation
+* `car_keyword='2site_OE64'`: auto-assigns the channel mapping to match the Hirose-connector Open Ephys 64-channel EIB with 2-site implantation
+* `shell=True`: bypasses the GUI in favor of a shell interface; useful when working over SSH or when the GUI is broken
+
+### Basic Processing
+
+The most basic data extraction workflow would be:
+```python
+dat = blechpy.dataset('/path/to/data/dir/')
+dat.initParams() # See function docstring; many optional parameters can eliminate the need for user interaction
+dat.extract_data() # Extracts raw data into HDF5 store
+dat.create_trial_list() # Creates table of digital input triggers
+dat.mark_dead_channels() # View traces and label electrodes as dead
+dat.mark_dead_channels([1, 2, 3]) # alternatively, if you already know which channels are dead, pass their indices directly
+dat.common_average_reference() # Apply common average referencing. Replaces raw with referenced data in the HDF5 store
+dat.detect_spikes()
+dat.blech_clust_run() # Cluster data using GMM
+dat.blech_clust_run(data_quality='noisy') # alternative: re-run clustering with less strict parameters
+dat.sort_spikes(electrode_number) # Split, merge and label clusters as units
+```
+Check `blechpy/datastructures/dataset.py` to see what functions are available.
+
+### Preferred Workflow
+
+This workflow sets some convenient parameter defaults.
+```python
+dat = blechpy.dataset('/path/to/data/dir/')
+dat.initParams(data_quality='hp', car_keyword='2site_OE64') # 'hp' sets stricter clustering criteria; '2site_OE64' automatically maps channels to the Hirose-connector 64-channel Open Ephys EIB in a 2-site implantation
+dat.extract_data()
+dat.create_trial_list()
+dat.mark_dead_channels([1, 2, 3]) # pass a list of dead channel indices to bypass GUI marking; requires that you noted them during drive building and/or recording
+dat.common_average_reference()
+dat.detect_spikes()
+dat.blech_clust_run(umap=True) # Cluster with UMAP instead of GMM, supposedly better clustering
+dat.sort_spikes(electrode_number) # Split, merge and label clusters as units
+```
+
+### Checking processing progress
+
+```python
+dat.processing_status
+```
+This provides an overview of the basic data extraction and processing steps that still need to be taken.
+
+### Viewing a Dataset
+A dataset can be easily viewed with `print(dat)`.
+A summary can also be exported to a text file with `dat.export_to_text()`.
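+
+For example:
+```python
+print(dat)            # print a dataset summary to the console
+dat.export_to_text()  # export the same summary to a text file
+```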
+
+## Loading an existing dataset
+```python
+dat = blechpy.load_dataset() # load an existing dataset from .p file
+# or
+dat = blechpy.load_dataset('path/to/recording/directory')
+# or
+dat = blechpy.load_dataset('path/to/dataset/save/file.p')
+```
+
+## Import processed dataset into dataset framework
+```python
+dat = blechpy.port_in_dataset()
+# or
+dat = blechpy.port_in_dataset('/path/to/recording/directory')
+```
+
+# Experiments
+## Creating an experiment
+```python
+exp = blechpy.experiment('/path/to/dir/encasing/recordings')
+# or
+exp = blechpy.experiment()
+```
+This will initialize an experiment with all recording folders within the chosen directory.
+
+## Editing recordings
+```python
+exp.add_recording('/path/to/new/recording/dir/') # Add recording
+exp.remove_recording('rec_label') # remove a recording dir
+```
+Recordings are assigned labels when added to the experiment; these labels can be used to easily reference each recording.
+
+## Held unit detection
+```python
+exp.detect_held_units()
+```
+Uses raw waveforms from sorted units to determine whether units can be confidently classified as "held". Results are stored in `exp.held_units` as a pandas DataFrame.
+This also creates plots and exports data to a created directory:
+`/path/to/experiment/experiment-name_analysis`
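+
+For example, you can inspect the results afterwards (a minimal sketch; the exact DataFrame columns depend on your data):
+```python
+exp.detect_held_units()
+held = exp.held_units  # pandas DataFrame of units held across recordings
+print(held.head())     # view the first few rows
+```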
+
+# Analysis
+The `blechpy.analysis` module has a lot of useful tools for analyzing your data.
+Most notable is the `blechpy.analysis.poissonHMM` module, which allows fitting of Poisson hidden Markov models (HMMs) to your data. See the tutorials.
+
+
+
+
+%package help
+Summary: Development documents and examples for blechpy
+Provides: python3-blechpy-doc
+%description help
+See the <a href='https://nubs01.github.io/blechpy'>full documentation</a> here.
+
+- [blechpy](#blechpy)
+- [Installation](#installation)
+- [Usage](#usage)
+- [Datasets](#datasets)
+  * [Starting with a raw dataset](#starting-with-a-raw-dataset)
+ + [Create dataset](#create-dataset)
+ + [Initialize Parameters](#initialize-parameters)
+ + [Basic Processing](#basic-processing)
+ + [Viewing a Dataset](#viewing-a-dataset)
+ * [Loading an existing dataset](#loading-an-existing-dataset)
+ * [Import processed dataset into dataset framework](#import-processed-dataset-into-dataset-framework)
+- [Experiments](#experiments)
+ * [Creating an experiment](#creating-an-experiment)
+ * [Editing recordings](#editing-recordings)
+ * [Held unit detection](#held-unit-detection)
+
+<small><i><a href='http://ecotrust-canada.github.io/markdown-toc/'>Table of contents generated with markdown-toc</a></i></small>
+
+# blechpy
+This is a package to extract, process and analyze electrophysiology data recorded with Intan or OpenEphys recording systems. This package is customized to store experiment and analysis metadata for the BLECh Lab (Katz lab) @ Brandeis University, but can readily be used and customized for other labs.
+
+# Installation
+I recommend installing miniconda to handle your virtual environments.
+Create a miniconda environment with:
+```bash
+conda create -n blechpy python==3.7.13
+conda activate blechpy
+```
+Now you can install this package simply with pip:
+```bash
+pip install blechpy
+```
+
+If you want to update blechpy to the latest version:
+```bash
+pip install blechpy -U
+```
+
+Now you can deal with all of your data from within an ipython terminal:
+`ipython`
+
+```python
+import blechpy
+```
+
+### Ubuntu 20.04 LTS+
+With Ubuntu 20 or higher, you will get a segmentation fault when importing blechpy because the numba version 0.48 installed through pip is corrupted. You will need to reinstall it via conda:
+
+```bash
+conda install numba=0.48.0
+```
+
+# Usage
+blechpy handles experimental metadata using data_objects which are tied to a directory encompassing some level of data. Existing types of data_objects include:
+* dataset
+ * object for a single recording session
+* experiment
+    * object encompassing an ordered set of recordings from a single animal
+ * individual recordings must first be processed as datasets
+* project
+    * object that can encompass multiple experiments & data groups and allow analysis of group differences
+
+# Datasets
+Right now this pipeline is only compatible with recordings made with Intan's 'one file per channel' or 'one file per signal type' recording settings.
+
+## Starting with a raw dataset
+### Create dataset
+With a brand new *shiny* recording you can initialize a dataset with:
+```python
+dat = blechpy.dataset('path/to/recording/directory')
+# or
+dat = blechpy.dataset() # for user interface to select directory
+```
+This will create a new dataset object and set up basic file paths.
+If you're working via SSH, or just want a command-line interface instead of a GUI, you can use the keyword argument `shell=True`.
+Only do this when starting data processing for the first time: if you use it on an already-processed dataset, the dataset will be overwritten.
+Use `blechpy.load_dataset()` instead to load an existing dataset (see below).
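+
+For example, using the `shell=True` keyword mentioned above, a session over SSH could start with:
+```python
+dat = blechpy.dataset('path/to/recording/directory', shell=True)  # prompts appear in the terminal instead of a GUI
+```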
+
+### Initialize Parameters
+```python
+dat.initParams()
+```
+Initializes all analysis parameters with a series of prompts.
+See the prompts for optional keyword params.
+It primarily sets up parameters for:
+* Flattening Port & Channel in Electrode designations
+* Common average referencing
+* Labelling areas of electrodes
+* Labelling digital inputs & outputs
+* Labelling dead electrodes
+* Clustering parameters
+* Spike array creation
+* PSTH creation
+* Palatability/Identity Responsiveness calculations
+
+Initial parameters are pulled from default JSON files in the `dio` subpackage.
+Parameters for a dataset are written to JSON files in a *parameters* folder in the recording directory.
+
+Useful `dat.initParams()` arguments:
+* `data_quality='hp'`: increases the strictness of clustering, increases the total number of clusters, and widens the spike-sorting window to -0.75 to 1 s
+* `car_keyword='bilateral64'`: auto-assigns the channel mapping to match the Omnetics-connector Open Ephys 64-channel EIB with 2-site implantation
+* `car_keyword='2site_OE64'`: auto-assigns the channel mapping to match the Hirose-connector Open Ephys 64-channel EIB with 2-site implantation
+* `shell=True`: bypasses the GUI in favor of a shell interface; useful when working over SSH or when the GUI is broken
+
+### Basic Processing
+
+The most basic data extraction workflow would be:
+```python
+dat = blechpy.dataset('/path/to/data/dir/')
+dat.initParams() # See function docstring; many optional parameters can eliminate the need for user interaction
+dat.extract_data() # Extracts raw data into HDF5 store
+dat.create_trial_list() # Creates table of digital input triggers
+dat.mark_dead_channels() # View traces and label electrodes as dead
+dat.mark_dead_channels([1, 2, 3]) # alternatively, if you already know which channels are dead, pass their indices directly
+dat.common_average_reference() # Apply common average referencing. Replaces raw with referenced data in the HDF5 store
+dat.detect_spikes()
+dat.blech_clust_run() # Cluster data using GMM
+dat.blech_clust_run(data_quality='noisy') # alternative: re-run clustering with less strict parameters
+dat.sort_spikes(electrode_number) # Split, merge and label clusters as units
+```
+Check `blechpy/datastructures/dataset.py` to see what functions are available.
+
+### Preferred Workflow
+
+This workflow sets some convenient parameter defaults.
+```python
+dat = blechpy.dataset('/path/to/data/dir/')
+dat.initParams(data_quality='hp', car_keyword='2site_OE64') # 'hp' sets stricter clustering criteria; '2site_OE64' automatically maps channels to the Hirose-connector 64-channel Open Ephys EIB in a 2-site implantation
+dat.extract_data()
+dat.create_trial_list()
+dat.mark_dead_channels([1, 2, 3]) # pass a list of dead channel indices to bypass GUI marking; requires that you noted them during drive building and/or recording
+dat.common_average_reference()
+dat.detect_spikes()
+dat.blech_clust_run(umap=True) # Cluster with UMAP instead of GMM, supposedly better clustering
+dat.sort_spikes(electrode_number) # Split, merge and label clusters as units
+```
+
+### Checking processing progress
+
+```python
+dat.processing_status
+```
+This provides an overview of the basic data extraction and processing steps that still need to be taken.
+
+### Viewing a Dataset
+A dataset can be easily viewed with `print(dat)`.
+A summary can also be exported to a text file with `dat.export_to_text()`.
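+
+For example:
+```python
+print(dat)            # print a dataset summary to the console
+dat.export_to_text()  # export the same summary to a text file
+```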
+
+## Loading an existing dataset
+```python
+dat = blechpy.load_dataset() # load an existing dataset from .p file
+# or
+dat = blechpy.load_dataset('path/to/recording/directory')
+# or
+dat = blechpy.load_dataset('path/to/dataset/save/file.p')
+```
+
+## Import processed dataset into dataset framework
+```python
+dat = blechpy.port_in_dataset()
+# or
+dat = blechpy.port_in_dataset('/path/to/recording/directory')
+```
+
+# Experiments
+## Creating an experiment
+```python
+exp = blechpy.experiment('/path/to/dir/encasing/recordings')
+# or
+exp = blechpy.experiment()
+```
+This will initialize an experiment with all recording folders within the chosen directory.
+
+## Editing recordings
+```python
+exp.add_recording('/path/to/new/recording/dir/') # Add recording
+exp.remove_recording('rec_label') # remove a recording dir
+```
+Recordings are assigned labels when added to the experiment; these labels can be used to easily reference each recording.
+
+## Held unit detection
+```python
+exp.detect_held_units()
+```
+Uses raw waveforms from sorted units to determine whether units can be confidently classified as "held". Results are stored in `exp.held_units` as a pandas DataFrame.
+This also creates plots and exports data to a created directory:
+`/path/to/experiment/experiment-name_analysis`
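+
+For example, you can inspect the results afterwards (a minimal sketch; the exact DataFrame columns depend on your data):
+```python
+exp.detect_held_units()
+held = exp.held_units  # pandas DataFrame of units held across recordings
+print(held.head())     # view the first few rows
+```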
+
+# Analysis
+The `blechpy.analysis` module has a lot of useful tools for analyzing your data.
+Most notable is the `blechpy.analysis.poissonHMM` module, which allows fitting of Poisson hidden Markov models (HMMs) to your data. See the tutorials.
+
+
+
+
+%prep
+%autosetup -n blechpy-2.1.39
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-blechpy -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Wed Apr 12 2023 Python_Bot <Python_Bot@openeuler.org> - 2.1.39-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..a8dcb7b
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+321e2afdb51c2581a34ecaf902b64699 blechpy-2.1.39.tar.gz