author     CoprDistGit <infra@openeuler.org>    2023-05-31 04:53:10 +0000
committer  CoprDistGit <infra@openeuler.org>    2023-05-31 04:53:10 +0000
commit     2135f2b9bad935a07909ef828159db2b78854234 (patch)
tree       728edfe35a8067b8aa006b5ed764574c4123a020
parent     6568c2b4305e343101ceb32a4f80e3859fb03c9e (diff)
automatic import of python-ptpt
-rw-r--r--  .gitignore            1
-rw-r--r--  python-ptpt.spec    613
-rw-r--r--  sources               1
3 files changed, 615 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..bd49620 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/ptpt-0.0.28.tar.gz
diff --git a/python-ptpt.spec b/python-ptpt.spec
new file mode 100644
index 0000000..4120f07
--- /dev/null
+++ b/python-ptpt.spec
@@ -0,0 +1,613 @@
+%global _empty_manifest_terminate_build 0
+Name: python-ptpt
+Version: 0.0.28
+Release: 1
+Summary: PyTorch Personal Trainer: My personal framework for deep learning experiments
+License: MIT License
+URL: https://github.com/vvvm23/ptpt
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/8b/a1/6d4ddf75f32336d49d8507c4d6aca821516486168c4602cc0b7f4ec8e9f6/ptpt-0.0.28.tar.gz
+BuildArch: noarch
+
+Requires: python3-torch
+Requires: python3-rich
+Requires: python3-wandb
+Requires: python3-accelerate
+
+%description
+# Alex's PyTorch Personal Trainer (ptpt)
+> (name subject to change)
+
+This repository contains my personal lightweight framework for deep learning
+projects in PyTorch.
+
+> **Disclaimer: this project is very much work-in-progress. Although technically
+> usable, it is missing many features. Nonetheless, you may find some of the
+> design patterns and code snippets to be useful in the meantime.**
+
+## Installation
+
+Install from PyPI by running `pip install ptpt`.
+
+You can also build from source. Simply run `python -m build` in the root of the
+repo, then run `pip install` on the resulting `.whl` file.
+
+## Usage
+Import the library as you would any other Python library:
+```python
+from ptpt.trainer import Trainer, TrainerConfig
+from ptpt.log import debug, info, warning, error, critical
+```
+
+The core of the library is the `trainer.Trainer` class. In the simplest case,
+it takes the following as input:
+
+```python
+net: a `nn.Module` that is the model we wish to train.
+loss_fn: a function that takes a `nn.Module` and a batch as input.
+ it returns the loss and optionally other metrics.
+train_dataset: the training dataset.
+test_dataset: the test dataset.
+cfg: a `TrainerConfig` instance that holds all
+ hyperparameters.
+```
+
+Once this is instantiated, starting the training loop is as simple as calling
+`trainer.train()` where `trainer` is an instance of `Trainer`.
+
+`cfg` stores most of the configuration options for `Trainer`. See the class
+definition of `TrainerConfig` for details on all options.
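+
+As a purely illustrative sketch (the real class definition is the source of
+truth), the options used in the example below suggest fields roughly along
+these lines; any field or default not shown in the example is an assumption:
+
+```python
+from dataclasses import dataclass, field
+
+# hypothetical sketch of a TrainerConfig-like dataclass, limited to the
+# options that appear in the example below; defaults are illustrative only
+@dataclass
+class SketchTrainerConfig:
+    exp_name: str = 'exp'          # experiment name used for outputs/logs
+    batch_size: int = 64           # samples per training batch
+    learning_rate: float = 4e-4    # optimiser learning rate
+    nb_workers: int = 4            # dataloader worker processes
+    save_outputs: bool = False     # whether to save outputs to disk
+    metric_names: list = field(default_factory=list)  # extra metrics returned by loss_fn
+```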
+
+## Examples
+
+An example workflow would go like this:
+
+> Define your training and test datasets:
+
+```python
+from torchvision import datasets, transforms
+
+transform = transforms.Compose([
+ transforms.ToTensor(),
+ transforms.Normalize((0.1307,), (0.3081,))
+])
+train_dataset = datasets.MNIST('../data', train=True, download=True, transform=transform)
+test_dataset = datasets.MNIST('../data', train=False, download=True, transform=transform)
+```
+
+> Define your model:
+
+```python
+# `Net` could be any `nn.Module`
+net = Net()
+```
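+
+Any `nn.Module` works here; purely as an illustration (this network is not
+part of ptpt), a minimal MNIST classifier compatible with the `F.nll_loss`
+call in the loss function below could look like this:
+
+```python
+import torch.nn as nn
+import torch.nn.functional as F
+
+# illustrative stand-in for `Net`: a tiny MLP returning log-probabilities,
+# as expected by the `F.nll_loss` call in the loss function below
+class Net(nn.Module):
+    def __init__(self):
+        super().__init__()
+        self.fc1 = nn.Linear(28 * 28, 128)
+        self.fc2 = nn.Linear(128, 10)
+
+    def forward(self, x):
+        x = x.view(x.shape[0], -1)          # flatten 1x28x28 MNIST images
+        x = F.relu(self.fc1(x))
+        return F.log_softmax(self.fc2(x), dim=-1)
+```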
+
+> Define your loss function that calls `net`, taking the full batch as input:
+
+```python
+import torch.nn.functional as F
+
+# minimising classification error
+def loss_fn(net, batch):
+ X, y = batch
+ logits = net(X)
+ loss = F.nll_loss(logits, y)
+
+ pred = logits.argmax(dim=-1, keepdim=True)
+ accuracy = 100. * pred.eq(y.view_as(pred)).sum().item() / y.shape[0]
+ return loss, accuracy
+```
+
+> Optionally create a configuration object:
+
+```python
+# see class definition for full list of parameters
+cfg = TrainerConfig(
+ exp_name = 'mnist-conv',
+ batch_size = 64,
+ learning_rate = 4e-4,
+ nb_workers = 4,
+ save_outputs = False,
+ metric_names = ['accuracy']
+)
+```
+
+> Initialise the Trainer class:
+
+```python
+trainer = Trainer(
+ net=net,
+ loss_fn=loss_fn,
+ train_dataset=train_dataset,
+ test_dataset=test_dataset,
+ cfg=cfg
+)
+```
+
+> Optionally, register some callback functions:
+
+```python
+def callback_fn(_):
+ info("Congratulations, you have completed an epoch!")
+trainer.register_callback(CallbackType.TrainEpoch, callback_fn)
+```
+
+> Call `trainer.train()` to begin the training loop
+
+```python
+trainer.train() # Go!
+```
+
+See more examples [here](examples/).
+
+#### Weights and Biases Integration
+
+Weights and Biases logging is supported via the `ptpt.wandb.WandbConfig`
+dataclass.
+
+It currently supports only a small set of options:
+```python
+class WandbConfig:
+ project: str = None # project name
+ entity: str = None # wandb entity name
+ name: str = None # run name (leave blank for random two words)
+ config: dict = None # hyperparameters to save on wandb
+ log_net: bool = False # whether to use wandb to watch network gradients
+ log_metrics: bool = True # whether to use wandb to report epoch metrics
+```
+
+If you want to log something else in addition to the epoch metrics, you can
+use `ptpt.callbacks` and access wandb through `trainer.wandb`. When calling
+`log` there, ensure `commit` is set to `False` so the global step is not
+advanced.
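+
+As a hypothetical sketch only (the exact callback signature and the type of
+`trainer.wandb` are assumptions here), this could look like:
+
+```python
+# log an extra value to wandb from an epoch callback without advancing the
+# global step; `trainer` is captured from the surrounding scope and
+# `trainer.wandb` is assumed to expose the usual `log` method
+def extra_wandb_log(_):
+    trainer.wandb.log({'example/custom_value': 1.0}, commit=False)
+
+trainer.register_callback(CallbackType.TrainEpoch, extra_wandb_log)
+```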
+
+## Motivation
+I found myself repeating a lot of the same structure across many of my deep
+learning projects. This project is the culmination of my efforts to refine the
+typical structure of my projects into (what I hope to be) a wholly reusable
+and general-purpose library.
+
+Additionally, there are many nice theoretical and engineering tricks
+available to deep learning researchers. Unfortunately, many of them are
+forgotten because they fall outside the typical workflow, despite being very
+beneficial. Another goal of this project is to include these tricks
+transparently, so they can be added and removed with minimal code change.
+Where it is sane to do so, some of them could be enabled by default.
+
+Finally, I am guilty of forgetting to implement decent logging: both of
+standard output and of metrics. Logging of standard output is not hard, and
+is implemented using other libraries such as [rich](https://github.com/willmcgugan/rich).
+However, metric logging is less obvious. I'd like to avoid larger dependencies
+such as tensorboard being an integral part of the project, so metrics will be
+logged to simple numpy arrays. The library will then provide functions to
+produce plots from these, or they can be used in another library.
+
+### TODO:
+
+- [X] Add arbitrary callback support at various points of execution
+- [X] Add metric tracking
+- [ ] Add more learning rate schedulers
+- [ ] Add more optimizer options
+- [ ] Add logging-to-file
+- [ ] Add silent and simpler logging
+- [ ] Support for distributed / multi-GPU operations
+- [ ] Set of functions for producing visualisations from disk dumps
+- [ ] General suite of useful functions
+
+### References
+- [rich](https://github.com/willmcgugan/rich) by [@willmcgugan](https://github.com/willmcgugan)
+
+### Citations
+
+
+
+%package -n python3-ptpt
+Summary: PyTorch Personal Trainer: My personal framework for deep learning experiments
+Provides: python-ptpt
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-ptpt
+# Alex's PyTorch Personal Trainer (ptpt)
+> (name subject to change)
+
+This repository contains my personal lightweight framework for deep learning
+projects in PyTorch.
+
+> **Disclaimer: this project is very much work-in-progress. Although technically
+> usable, it is missing many features. Nonetheless, you may find some of the
+> design patterns and code snippets to be useful in the meantime.**
+
+## Installation
+
+Install from PyPI by running `pip install ptpt`.
+
+You can also build from source. Simply run `python -m build` in the root of the
+repo, then run `pip install` on the resulting `.whl` file.
+
+## Usage
+Import the library as you would any other Python library:
+```python
+from ptpt.trainer import Trainer, TrainerConfig
+from ptpt.log import debug, info, warning, error, critical
+```
+
+The core of the library is the `trainer.Trainer` class. In the simplest case,
+it takes the following as input:
+
+```python
+net: a `nn.Module` that is the model we wish to train.
+loss_fn: a function that takes a `nn.Module` and a batch as input.
+ it returns the loss and optionally other metrics.
+train_dataset: the training dataset.
+test_dataset: the test dataset.
+cfg: a `TrainerConfig` instance that holds all
+ hyperparameters.
+```
+
+Once this is instantiated, starting the training loop is as simple as calling
+`trainer.train()` where `trainer` is an instance of `Trainer`.
+
+`cfg` stores most of the configuration options for `Trainer`. See the class
+definition of `TrainerConfig` for details on all options.
+
+## Examples
+
+An example workflow would go like this:
+
+> Define your training and test datasets:
+
+```python
+from torchvision import datasets, transforms
+
+transform = transforms.Compose([
+ transforms.ToTensor(),
+ transforms.Normalize((0.1307,), (0.3081,))
+])
+train_dataset = datasets.MNIST('../data', train=True, download=True, transform=transform)
+test_dataset = datasets.MNIST('../data', train=False, download=True, transform=transform)
+```
+
+> Define your model:
+
+```python
+# `Net` could be any `nn.Module`
+net = Net()
+```
+
+> Define your loss function that calls `net`, taking the full batch as input:
+
+```python
+import torch.nn.functional as F
+
+# minimising classification error
+def loss_fn(net, batch):
+ X, y = batch
+ logits = net(X)
+ loss = F.nll_loss(logits, y)
+
+ pred = logits.argmax(dim=-1, keepdim=True)
+ accuracy = 100. * pred.eq(y.view_as(pred)).sum().item() / y.shape[0]
+ return loss, accuracy
+```
+
+> Optionally create a configuration object:
+
+```python
+# see class definition for full list of parameters
+cfg = TrainerConfig(
+ exp_name = 'mnist-conv',
+ batch_size = 64,
+ learning_rate = 4e-4,
+ nb_workers = 4,
+ save_outputs = False,
+ metric_names = ['accuracy']
+)
+```
+
+> Initialise the Trainer class:
+
+```python
+trainer = Trainer(
+ net=net,
+ loss_fn=loss_fn,
+ train_dataset=train_dataset,
+ test_dataset=test_dataset,
+ cfg=cfg
+)
+```
+
+> Optionally, register some callback functions:
+
+```python
+def callback_fn(_):
+ info("Congratulations, you have completed an epoch!")
+trainer.register_callback(CallbackType.TrainEpoch, callback_fn)
+```
+
+> Call `trainer.train()` to begin the training loop
+
+```python
+trainer.train() # Go!
+```
+
+See more examples [here](examples/).
+
+#### Weights and Biases Integration
+
+Weights and Biases logging is supported via the `ptpt.wandb.WandbConfig`
+dataclass.
+
+It currently supports only a small set of options:
+```python
+class WandbConfig:
+ project: str = None # project name
+ entity: str = None # wandb entity name
+ name: str = None # run name (leave blank for random two words)
+ config: dict = None # hyperparameters to save on wandb
+ log_net: bool = False # whether to use wandb to watch network gradients
+ log_metrics: bool = True # whether to use wandb to report epoch metrics
+```
+
+If you want to log something else in addition to the epoch metrics, you can
+use `ptpt.callbacks` and access wandb through `trainer.wandb`. When calling
+`log` there, ensure `commit` is set to `False` so the global step is not
+advanced.
+
+## Motivation
+I found myself repeating a lot of the same structure across many of my deep
+learning projects. This project is the culmination of my efforts to refine the
+typical structure of my projects into (what I hope to be) a wholly reusable
+and general-purpose library.
+
+Additionally, there are many nice theoretical and engineering tricks
+available to deep learning researchers. Unfortunately, many of them are
+forgotten because they fall outside the typical workflow, despite being very
+beneficial. Another goal of this project is to include these tricks
+transparently, so they can be added and removed with minimal code change.
+Where it is sane to do so, some of them could be enabled by default.
+
+Finally, I am guilty of forgetting to implement decent logging: both of
+standard output and of metrics. Logging of standard output is not hard, and
+is implemented using other libraries such as [rich](https://github.com/willmcgugan/rich).
+However, metric logging is less obvious. I'd like to avoid larger dependencies
+such as tensorboard being an integral part of the project, so metrics will be
+logged to simple numpy arrays. The library will then provide functions to
+produce plots from these, or they can be used in another library.
+
+### TODO:
+
+- [X] Add arbitrary callback support at various points of execution
+- [X] Add metric tracking
+- [ ] Add more learning rate schedulers
+- [ ] Add more optimizer options
+- [ ] Add logging-to-file
+- [ ] Add silent and simpler logging
+- [ ] Support for distributed / multi-GPU operations
+- [ ] Set of functions for producing visualisations from disk dumps
+- [ ] General suite of useful functions
+
+### References
+- [rich](https://github.com/willmcgugan/rich) by [@willmcgugan](https://github.com/willmcgugan)
+
+### Citations
+
+
+
+%package help
+Summary: Development documents and examples for ptpt
+Provides: python3-ptpt-doc
+%description help
+# Alex's PyTorch Personal Trainer (ptpt)
+> (name subject to change)
+
+This repository contains my personal lightweight framework for deep learning
+projects in PyTorch.
+
+> **Disclaimer: this project is very much work-in-progress. Although technically
+> usable, it is missing many features. Nonetheless, you may find some of the
+> design patterns and code snippets to be useful in the meantime.**
+
+## Installation
+
+Install from PyPI by running `pip install ptpt`.
+
+You can also build from source. Simply run `python -m build` in the root of the
+repo, then run `pip install` on the resulting `.whl` file.
+
+## Usage
+Import the library as you would any other Python library:
+```python
+from ptpt.trainer import Trainer, TrainerConfig
+from ptpt.log import debug, info, warning, error, critical
+```
+
+The core of the library is the `trainer.Trainer` class. In the simplest case,
+it takes the following as input:
+
+```python
+net: a `nn.Module` that is the model we wish to train.
+loss_fn: a function that takes a `nn.Module` and a batch as input.
+ it returns the loss and optionally other metrics.
+train_dataset: the training dataset.
+test_dataset: the test dataset.
+cfg: a `TrainerConfig` instance that holds all
+ hyperparameters.
+```
+
+Once this is instantiated, starting the training loop is as simple as calling
+`trainer.train()` where `trainer` is an instance of `Trainer`.
+
+`cfg` stores most of the configuration options for `Trainer`. See the class
+definition of `TrainerConfig` for details on all options.
+
+## Examples
+
+An example workflow would go like this:
+
+> Define your training and test datasets:
+
+```python
+from torchvision import datasets, transforms
+
+transform = transforms.Compose([
+ transforms.ToTensor(),
+ transforms.Normalize((0.1307,), (0.3081,))
+])
+train_dataset = datasets.MNIST('../data', train=True, download=True, transform=transform)
+test_dataset = datasets.MNIST('../data', train=False, download=True, transform=transform)
+```
+
+> Define your model:
+
+```python
+# `Net` could be any `nn.Module`
+net = Net()
+```
+
+> Define your loss function that calls `net`, taking the full batch as input:
+
+```python
+import torch.nn.functional as F
+
+# minimising classification error
+def loss_fn(net, batch):
+ X, y = batch
+ logits = net(X)
+ loss = F.nll_loss(logits, y)
+
+ pred = logits.argmax(dim=-1, keepdim=True)
+ accuracy = 100. * pred.eq(y.view_as(pred)).sum().item() / y.shape[0]
+ return loss, accuracy
+```
+
+> Optionally create a configuration object:
+
+```python
+# see class definition for full list of parameters
+cfg = TrainerConfig(
+ exp_name = 'mnist-conv',
+ batch_size = 64,
+ learning_rate = 4e-4,
+ nb_workers = 4,
+ save_outputs = False,
+ metric_names = ['accuracy']
+)
+```
+
+> Initialise the Trainer class:
+
+```python
+trainer = Trainer(
+ net=net,
+ loss_fn=loss_fn,
+ train_dataset=train_dataset,
+ test_dataset=test_dataset,
+ cfg=cfg
+)
+```
+
+> Optionally, register some callback functions:
+
+```python
+def callback_fn(_):
+ info("Congratulations, you have completed an epoch!")
+trainer.register_callback(CallbackType.TrainEpoch, callback_fn)
+```
+
+> Call `trainer.train()` to begin the training loop
+
+```python
+trainer.train() # Go!
+```
+
+See more examples [here](examples/).
+
+#### Weights and Biases Integration
+
+Weights and Biases logging is supported via the `ptpt.wandb.WandbConfig`
+dataclass.
+
+It currently supports only a small set of options:
+```python
+class WandbConfig:
+ project: str = None # project name
+ entity: str = None # wandb entity name
+ name: str = None # run name (leave blank for random two words)
+ config: dict = None # hyperparameters to save on wandb
+ log_net: bool = False # whether to use wandb to watch network gradients
+ log_metrics: bool = True # whether to use wandb to report epoch metrics
+```
+
+If you want to log something else in addition to the epoch metrics, you can
+use `ptpt.callbacks` and access wandb through `trainer.wandb`. When calling
+`log` there, ensure `commit` is set to `False` so the global step is not
+advanced.
+
+## Motivation
+I found myself repeating a lot of the same structure across many of my deep
+learning projects. This project is the culmination of my efforts to refine the
+typical structure of my projects into (what I hope to be) a wholly reusable
+and general-purpose library.
+
+Additionally, there are many nice theoretical and engineering tricks
+available to deep learning researchers. Unfortunately, many of them are
+forgotten because they fall outside the typical workflow, despite being very
+beneficial. Another goal of this project is to include these tricks
+transparently, so they can be added and removed with minimal code change.
+Where it is sane to do so, some of them could be enabled by default.
+
+Finally, I am guilty of forgetting to implement decent logging: both of
+standard output and of metrics. Logging of standard output is not hard, and
+is implemented using other libraries such as [rich](https://github.com/willmcgugan/rich).
+However, metric logging is less obvious. I'd like to avoid larger dependencies
+such as tensorboard being an integral part of the project, so metrics will be
+logged to simple numpy arrays. The library will then provide functions to
+produce plots from these, or they can be used in another library.
+
+### TODO:
+
+- [X] Add arbitrary callback support at various points of execution
+- [X] Add metric tracking
+- [ ] Add more learning rate schedulers
+- [ ] Add more optimizer options
+- [ ] Add logging-to-file
+- [ ] Add silent and simpler logging
+- [ ] Support for distributed / multi-GPU operations
+- [ ] Set of functions for producing visualisations from disk dumps
+- [ ] General suite of useful functions
+
+### References
+- [rich](https://github.com/willmcgugan/rich) by [@willmcgugan](https://github.com/willmcgugan)
+
+### Citations
+
+
+
+%prep
+%autosetup -n ptpt-0.0.28
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-ptpt -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Wed May 31 2023 Python_Bot <Python_Bot@openeuler.org> - 0.0.28-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..d4fab36
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+c5b2e3b498b3323e74abb278db1b08fe ptpt-0.0.28.tar.gz