author    CoprDistGit <infra@openeuler.org>  2023-04-10 18:28:41 +0000
committer CoprDistGit <infra@openeuler.org>  2023-04-10 18:28:41 +0000
commit    7bb8ff1bd5019e173982ed1ff25aaeb93633c654 (patch)
tree      6f0e6b3104f07b556eb60df54f7b76aa8763f594
parent    581ea6c5568641e1b4338119d62f774570602f16 (diff)
automatic import of python-pytorch-metric-learning
-rw-r--r--  .gitignore                             1
-rw-r--r--  python-pytorch-metric-learning.spec  966
-rw-r--r--  sources                                1
3 files changed, 968 insertions, 0 deletions
diff --git a/.gitignore b/.gitignore
index e69de29..5e6c106 100644
--- a/.gitignore
+++ b/.gitignore
@@ -0,0 +1 @@
+/pytorch-metric-learning-2.1.0.tar.gz
diff --git a/python-pytorch-metric-learning.spec b/python-pytorch-metric-learning.spec
new file mode 100644
index 0000000..431e3d2
--- /dev/null
+++ b/python-pytorch-metric-learning.spec
@@ -0,0 +1,966 @@
+%global _empty_manifest_terminate_build 0
+Name: python-pytorch-metric-learning
+Version: 2.1.0
+Release: 1
+Summary: The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
+License: MIT License
+URL: https://github.com/KevinMusgrave/pytorch-metric-learning
+Source0: https://mirrors.nju.edu.cn/pypi/web/packages/fa/4f/216b76a20902ac2f00bf7009b5bca11a3b2db9baded9e66287b8b5d2afb2/pytorch-metric-learning-2.1.0.tar.gz
+BuildArch: noarch
+
+Requires: python3-numpy
+Requires: python3-scikit-learn
+Requires: python3-tqdm
+Requires: python3-torch
+Requires: python3-black
+Requires: python3-isort
+Requires: python3-nbqa
+Requires: python3-flake8
+Requires: python3-mkdocs-material
+Requires: python3-record-keeper
+Requires: python3-faiss-gpu
+Requires: python3-tensorboard
+Requires: python3-faiss-cpu
+
+%description
+<h1>
+<a href="https://github.com/KevinMusgrave/pytorch-metric-learning">
+<img alt="PyTorch Metric Learning" src="https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/docs/imgs/Logo2.png">
+</a>
+</h1>
+
+<p align="center">
+ <a href="https://pypi.org/project/pytorch-metric-learning">
+ <img alt="PyPi version" src="https://img.shields.io/pypi/v/pytorch-metric-learning?color=bright-green">
+ </a>
+
+
+
+ <a href="https://anaconda.org/conda-forge/pytorch-metric-learning">
+ <img alt="Anaconda version" src="https://img.shields.io/conda/v/conda-forge/pytorch-metric-learning?color=bright-green">
+ </a>
+</p>
+
+## News
+
+**January 16**: v1.7.0
+- Fixes an edge case in ArcFaceLoss. See the [release notes](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v1.7.0).
+- Thanks to contributor [ElisonSherton](https://github.com/ElisonSherton).
+
+**September 3**: v1.6.0
+- `DistributedLossWrapper` and `DistributedMinerWrapper` now support `ref_emb` and `ref_labels`.
+- Thanks to contributor [NoTody](https://github.com/NoTody).
+
+## Documentation
+- [**View the documentation here**](https://kevinmusgrave.github.io/pytorch-metric-learning/)
+- [**View the installation instructions here**](https://github.com/KevinMusgrave/pytorch-metric-learning#installation)
+- [**View the available losses, miners etc. here**](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/CONTENTS.md)
+
+
+## Google Colab Examples
+See the [examples folder](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/README.md) for notebooks you can download or run on Google Colab.
+
+
+## PyTorch Metric Learning Overview
+This library contains 9 modules, each of which can be used independently within your existing codebase, or combined together for a complete train/test workflow.
+
+![high_level_module_overview](docs/imgs/high_level_module_overview.png)
+
+
+
+## How loss functions work
+
+### Using losses and miners in your training loop
+Let’s initialize a plain [TripletMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#tripletmarginloss):
+```python
+from pytorch_metric_learning import losses
+loss_func = losses.TripletMarginLoss()
+```
+
+To compute the loss in your training loop, pass in the embeddings computed by your model, and the corresponding labels. The embeddings should have size (N, embedding_size), and the labels should have size (N), where N is the batch size.
+
+```python
+# your training loop
+for i, (data, labels) in enumerate(dataloader):
+ optimizer.zero_grad()
+ embeddings = model(data)
+ loss = loss_func(embeddings, labels)
+ loss.backward()
+ optimizer.step()
+```
+
+The TripletMarginLoss computes all possible triplets within the batch, based on the labels you pass into it. Anchor-positive pairs are formed by embeddings that share the same label, and anchor-negative pairs are formed by embeddings that have different labels.
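+
+For a concrete picture, here is a plain-Python illustration of that triplet enumeration (a sketch of the idea, not the library's internal implementation):
+```python
+import torch
+
+labels = torch.tensor([0, 0, 1, 1])
+n = len(labels)
+# anchor and positive share a label; the negative has a different label
+triplets = [(a, p, neg)
+            for a in range(n) for p in range(n) for neg in range(n)
+            if a != p and labels[a] == labels[p] and labels[a] != labels[neg]]
+# 8 triplets for this batch, e.g. (0, 1, 2), (0, 1, 3), (2, 3, 0), ...
+```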
+
+Sometimes it can help to add a mining function:
+```python
+from pytorch_metric_learning import miners, losses
+miner = miners.MultiSimilarityMiner()
+loss_func = losses.TripletMarginLoss()
+
+# your training loop
+for i, (data, labels) in enumerate(dataloader):
+ optimizer.zero_grad()
+ embeddings = model(data)
+ hard_pairs = miner(embeddings, labels)
+ loss = loss_func(embeddings, labels, hard_pairs)
+ loss.backward()
+ optimizer.step()
+```
+In the above code, the miner finds positive and negative pairs that it thinks are particularly difficult. Note that even though the TripletMarginLoss operates on triplets, it’s still possible to pass in pairs. This is because the library automatically converts pairs to triplets and triplets to pairs, when necessary.
+
+### Customizing loss functions
+Loss functions can be customized using [distances](https://kevinmusgrave.github.io/pytorch-metric-learning/distances/), [reducers](https://kevinmusgrave.github.io/pytorch-metric-learning/reducers/), and [regularizers](https://kevinmusgrave.github.io/pytorch-metric-learning/regularizers/). In the diagram below, a miner finds the indices of hard pairs within a batch. These are used to index into the distance matrix, computed by the distance object. For this diagram, the loss function is pair-based, so it computes a loss per pair. In addition, a regularizer has been supplied, so a regularization loss is computed for each embedding in the batch. The per-pair and per-element losses are passed to the reducer, which (in this diagram) only keeps losses with a high value. The averages are computed for the high-valued pair and element losses, and are then added together to obtain the final loss.
+
+![high_level_loss_function_overview](docs/imgs/high_level_loss_function_overview.png)
+
+Now here's an example of a customized TripletMarginLoss:
+```python
+from pytorch_metric_learning.distances import CosineSimilarity
+from pytorch_metric_learning.reducers import ThresholdReducer
+from pytorch_metric_learning.regularizers import LpRegularizer
+from pytorch_metric_learning import losses
+loss_func = losses.TripletMarginLoss(distance=CosineSimilarity(),
+                                     reducer=ThresholdReducer(high=0.3),
+                                     embedding_regularizer=LpRegularizer())
+```
+This customized triplet loss has the following properties:
+
+ - The loss will be computed using cosine similarity instead of Euclidean distance.
+ - All triplet losses that are higher than 0.3 will be discarded.
+ - The embeddings will be L2 regularized.
+
+### Using loss functions for unsupervised / self-supervised learning
+
+The TripletMarginLoss is an embedding-based or tuple-based loss. This means that internally, there is no real notion of "classes". Tuples (pairs or triplets) are formed at each iteration, based on the labels it receives. The labels don't have to represent classes. They simply need to indicate the positive and negative relationships between the embeddings. Thus, it is easy to use these loss functions for unsupervised or self-supervised learning.
+
+For example, the code below is a simplified version of the augmentation strategy commonly used in self-supervision. The dataset does not come with any labels. Instead, the labels are created in the training loop, solely to indicate which embeddings are positive pairs.
+
+```python
+import torch
+
+# your training for-loop
+for i, data in enumerate(dataloader):
+ optimizer.zero_grad()
+ embeddings = your_model(data)
+ augmented = your_model(your_augmentation(data))
+ labels = torch.arange(embeddings.size(0))
+
+ embeddings = torch.cat([embeddings, augmented], dim=0)
+ labels = torch.cat([labels, labels], dim=0)
+
+ loss = loss_func(embeddings, labels)
+ loss.backward()
+ optimizer.step()
+```
+
+If you're interested in [MoCo](https://arxiv.org/pdf/1911.05722.pdf)-style self-supervision, take a look at the [MoCo on CIFAR10](https://github.com/KevinMusgrave/pytorch-metric-learning/tree/master/examples#simple-examples) notebook. It uses CrossBatchMemory to implement the momentum encoder queue, which means you can use any tuple loss, and any tuple miner to extract hard samples from the queue.
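+
+As a rough sketch of the CrossBatchMemory idea described above (a hedged example; the loss, miner, and sizes are illustrative, and the exact constructor arguments may vary between versions, so check the docs), a tuple loss is wrapped so that each batch is also compared against a queue of past embeddings:
+```python
+from pytorch_metric_learning import losses, miners
+
+# wrap any tuple loss; embedding_size must match your model's output
+loss_fn = losses.CrossBatchMemory(
+    loss=losses.ContrastiveLoss(),
+    embedding_size=128,
+    memory_size=1024,  # length of the embedding queue
+    miner=miners.MultiSimilarityMiner(),
+)
+# inside the training loop: loss = loss_fn(embeddings, labels)
+```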
+
+
+## Highlights of the rest of the library
+
+- For a convenient way to train your model, take a look at the [trainers](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/).
+- Want to test your model's accuracy on a dataset? Try the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/).
+- To compute the accuracy of an embedding space directly, use [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/).
+
+If you're short of time and want a complete train/test workflow, check out the [example Google Colab notebooks](https://github.com/KevinMusgrave/pytorch-metric-learning/tree/master/examples).
+
+To learn more about all of the above, [see the documentation](https://kevinmusgrave.github.io/pytorch-metric-learning).
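+
+For instance, here is a minimal AccuracyCalculator sketch (assuming the v2.x signature, where the reference set defaults to the query set; random tensors stand in for real embeddings):
+```python
+import torch
+from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
+
+calculator = AccuracyCalculator(include=("precision_at_1",), k=1)
+embeddings = torch.randn(100, 128)     # embeddings of an evaluation set
+labels = torch.randint(0, 10, (100,))  # their ground-truth labels
+accuracies = calculator.get_accuracy(embeddings, labels)
+print(accuracies)  # e.g. {'precision_at_1': ...}
+```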
+
+
+## Installation
+
+### Required PyTorch version
+ - ```pytorch-metric-learning >= v0.9.90``` requires ```torch >= 1.6```
+ - ```pytorch-metric-learning < v0.9.90``` doesn't have a version requirement, but was tested with ```torch >= 1.2```
+
+Other dependencies: ```numpy, scikit-learn, tqdm, torchvision```
+
+### Pip
+```
+pip install pytorch-metric-learning
+```
+
+**To get the latest dev version**:
+```
+pip install pytorch-metric-learning --pre
+```
+
+**To install on Windows**:
+```
+pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
+pip install pytorch-metric-learning
+```
+
+**To install with evaluation and logging capabilities**
+
+(This will install the unofficial PyPI version of faiss-gpu, plus record-keeper and tensorboard):
+```
+pip install pytorch-metric-learning[with-hooks]
+```
+
+**To install with evaluation and logging capabilities (CPU)**
+
+(This will install the unofficial PyPI version of faiss-cpu, plus record-keeper and tensorboard):
+```
+pip install pytorch-metric-learning[with-hooks-cpu]
+```
+
+### Conda
+```
+conda install -c conda-forge pytorch-metric-learning
+```
+
+**To use the testing module, you'll need faiss, which can be installed via conda as well. See the [installation instructions for faiss](https://github.com/facebookresearch/faiss/blob/master/INSTALL.md).**
+
+
+
+
+## Benchmark results
+See [powerful-benchmarker](https://github.com/KevinMusgrave/powerful-benchmarker/) to view benchmark results and to use the benchmarking tool.
+
+
+## Development
+Development is done on the ```dev``` branch:
+```
+git checkout dev
+```
+
+Unit tests can be run with the default ```unittest``` library:
+```bash
+python -m unittest discover
+```
+
+You can specify the test datatypes and test device as environment variables. For example, to test using float32 and float64 on the CPU:
+```bash
+TEST_DTYPES=float32,float64 TEST_DEVICE=cpu python -m unittest discover
+```
+
+To run a single test file instead of the entire test suite, specify the file name:
+```bash
+python -m unittest tests/losses/test_angular_loss.py
+```
+
+Code is formatted using ```black``` and ```isort```:
+```bash
+pip install black isort
+./format_code.sh
+```
+
+
+## Acknowledgements
+
+### Contributors
+Thanks to the contributors who made pull requests!
+
+| Contributor | Highlights |
+| -- | -- |
+|[mlopezantequera](https://github.com/mlopezantequera) | - Made the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers) work on any combination of query and reference sets <br/> - Made [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) work with arbitrary label comparisons |
+|[cwkeam](https://github.com/cwkeam) | - [VICRegLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#vicregloss) <br/> - Added mean reciprocal rank accuracy to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) |
+|[marijnl](https://github.com/marijnl)| - [BatchEasyHardMiner](https://kevinmusgrave.github.io/pytorch-metric-learning/miners/#batcheasyhardminer) <br/> - [TwoStreamMetricLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/#twostreammetricloss) <br/> - [GlobalTwoStreamEmbeddingSpaceTester](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/#globaltwostreamembeddingspacetester) <br/> - [Example using trainers.TwoStreamMetricLoss](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/TwoStreamMetricLoss.ipynb) |
+| [chingisooinar](https://github.com/chingisooinar) | [SubCenterArcFaceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#subcenterarcfaceloss) |
+| [elias-ramzi](https://github.com/elias-ramzi) | [HierarchicalSampler](https://kevinmusgrave.github.io/pytorch-metric-learning/samplers/#hierarchicalsampler) |
+| [fjsj](https://github.com/fjsj) | [SupConLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#supconloss) |
+| [AlenUbuntu](https://github.com/AlenUbuntu) | [CircleLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#circleloss) |
+| [interestingzhuo](https://github.com/interestingzhuo) | [**PNPLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#pnploss) |
+| [wconnell](https://github.com/wconnell) | [Learning a scRNAseq Metric Embedding](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/scRNAseq_MetricEmbedding.ipynb) |
+| [AlexSchuy](https://github.com/AlexSchuy) | optimized ```utils.loss_and_miner_utils.get_random_triplet_indices``` |
+| [JohnGiorgi](https://github.com/JohnGiorgi) | ```all_gather``` in [utils.distributed](https://kevinmusgrave.github.io/pytorch-metric-learning/distributed) |
+| [Hummer12007](https://github.com/Hummer12007) | ```utils.key_checker``` |
+| [vltanh](https://github.com/vltanh) | Made ```InferenceModel.train_indexer``` accept datasets |
+| [btseytlin](https://github.com/btseytlin) | ```get_nearest_neighbors``` in [InferenceModel](https://kevinmusgrave.github.io/pytorch-metric-learning/inference_models) |
+| [mlw214](https://github.com/mlw214) | Added ```return_per_class``` to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) |
+| [layumi](https://github.com/layumi) | [InstanceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#instanceloss) |
+| [NoTody](https://github.com/NoTody) | Helped add `ref_emb` and `ref_labels` to the distributed wrappers. |
+| [ElisonSherton](https://github.com/ElisonSherton) | Fixed an edge case in ArcFaceLoss. |
+| [z1w](https://github.com/z1w) | |
+| [thinline72](https://github.com/thinline72) | |
+| [tpanum](https://github.com/tpanum) | |
+| [fralik](https://github.com/fralik) | |
+| [joaqo](https://github.com/joaqo) | |
+| [JoOkuma](https://github.com/JoOkuma) | |
+| [gkouros](https://github.com/gkouros) | |
+| [yutanakamura-tky](https://github.com/yutanakamura-tky) | |
+| [KinglittleQ](https://github.com/KinglittleQ) | |
+| [martin0258](https://github.com/martin0258) | |
+| [michaeldeyzel](https://github.com/michaeldeyzel) | |
+
+
+
+### Facebook AI
+Thank you to [Ser-Nam Lim](https://research.fb.com/people/lim-ser-nam/) at [Facebook AI](https://ai.facebook.com/), and my research advisor, [Professor Serge Belongie](https://vision.cornell.edu/se3/people/serge-belongie/). This project began during my internship at Facebook AI where I received valuable feedback from Ser-Nam, and his team of computer vision and machine learning engineers and research scientists. In particular, thanks to [Ashish Shah](https://www.linkedin.com/in/ashish217/) and [Austin Reiter](https://www.linkedin.com/in/austin-reiter-3962aa7/) for reviewing my code during its early stages of development.
+
+### Open-source repos
+This library contains code that has been adapted and modified from the following great open-source repos:
+- https://github.com/bnu-wangxun/Deep_Metric
+- https://github.com/chaoyuaw/incubator-mxnet/blob/master/example/gluon/embedding_learning
+- https://github.com/facebookresearch/deepcluster
+- https://github.com/geonm/proxy-anchor-loss
+- https://github.com/idstcv/SoftTriple
+- https://github.com/kunhe/FastAP-metric-learning
+- https://github.com/ronekko/deep_metric_learning
+- https://github.com/tjddus9597/Proxy-Anchor-CVPR2020
+- http://kaizhao.net/regularface
+
+### Logo
+Thanks to [Jeff Musgrave](https://www.designgenius.ca/) for designing the logo.
+
+## Citing this library
+If you'd like to cite pytorch-metric-learning in your paper, you can use this bibtex:
+```latex
+@article{Musgrave2020PyTorchML,
+ title={PyTorch Metric Learning},
+ author={Kevin Musgrave and Serge J. Belongie and Ser-Nam Lim},
+ journal={ArXiv},
+ year={2020},
+ volume={abs/2008.09164}
+}
+```
+
+
+
+
+%package -n python3-pytorch-metric-learning
+Summary: The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
+Provides: python-pytorch-metric-learning
+BuildRequires: python3-devel
+BuildRequires: python3-setuptools
+BuildRequires: python3-pip
+%description -n python3-pytorch-metric-learning
+<h1>
+<a href="https://github.com/KevinMusgrave/pytorch-metric-learning">
+<img alt="PyTorch Metric Learning" src="https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/docs/imgs/Logo2.png">
+</a>
+</h1>
+
+<p align="center">
+ <a href="https://pypi.org/project/pytorch-metric-learning">
+ <img alt="PyPi version" src="https://img.shields.io/pypi/v/pytorch-metric-learning?color=bright-green">
+ </a>
+
+
+
+ <a href="https://anaconda.org/conda-forge/pytorch-metric-learning">
+ <img alt="Anaconda version" src="https://img.shields.io/conda/v/conda-forge/pytorch-metric-learning?color=bright-green">
+ </a>
+</p>
+
+## News
+
+**January 16**: v1.7.0
+- Fixes an edge case in ArcFaceLoss. See the [release notes](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v1.7.0).
+- Thanks to contributor [ElisonSherton](https://github.com/ElisonSherton).
+
+**September 3**: v1.6.0
+- `DistributedLossWrapper` and `DistributedMinerWrapper` now support `ref_emb` and `ref_labels`.
+- Thanks to contributor [NoTody](https://github.com/NoTody).
+
+## Documentation
+- [**View the documentation here**](https://kevinmusgrave.github.io/pytorch-metric-learning/)
+- [**View the installation instructions here**](https://github.com/KevinMusgrave/pytorch-metric-learning#installation)
+- [**View the available losses, miners etc. here**](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/CONTENTS.md)
+
+
+## Google Colab Examples
+See the [examples folder](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/README.md) for notebooks you can download or run on Google Colab.
+
+
+## PyTorch Metric Learning Overview
+This library contains 9 modules, each of which can be used independently within your existing codebase, or combined together for a complete train/test workflow.
+
+![high_level_module_overview](docs/imgs/high_level_module_overview.png)
+
+
+
+## How loss functions work
+
+### Using losses and miners in your training loop
+Let’s initialize a plain [TripletMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#tripletmarginloss):
+```python
+from pytorch_metric_learning import losses
+loss_func = losses.TripletMarginLoss()
+```
+
+To compute the loss in your training loop, pass in the embeddings computed by your model, and the corresponding labels. The embeddings should have size (N, embedding_size), and the labels should have size (N), where N is the batch size.
+
+```python
+# your training loop
+for i, (data, labels) in enumerate(dataloader):
+ optimizer.zero_grad()
+ embeddings = model(data)
+ loss = loss_func(embeddings, labels)
+ loss.backward()
+ optimizer.step()
+```
+
+The TripletMarginLoss computes all possible triplets within the batch, based on the labels you pass into it. Anchor-positive pairs are formed by embeddings that share the same label, and anchor-negative pairs are formed by embeddings that have different labels.
+
+Sometimes it can help to add a mining function:
+```python
+from pytorch_metric_learning import miners, losses
+miner = miners.MultiSimilarityMiner()
+loss_func = losses.TripletMarginLoss()
+
+# your training loop
+for i, (data, labels) in enumerate(dataloader):
+ optimizer.zero_grad()
+ embeddings = model(data)
+ hard_pairs = miner(embeddings, labels)
+ loss = loss_func(embeddings, labels, hard_pairs)
+ loss.backward()
+ optimizer.step()
+```
+In the above code, the miner finds positive and negative pairs that it thinks are particularly difficult. Note that even though the TripletMarginLoss operates on triplets, it’s still possible to pass in pairs. This is because the library automatically converts pairs to triplets and triplets to pairs, when necessary.
+
+### Customizing loss functions
+Loss functions can be customized using [distances](https://kevinmusgrave.github.io/pytorch-metric-learning/distances/), [reducers](https://kevinmusgrave.github.io/pytorch-metric-learning/reducers/), and [regularizers](https://kevinmusgrave.github.io/pytorch-metric-learning/regularizers/). In the diagram below, a miner finds the indices of hard pairs within a batch. These are used to index into the distance matrix, computed by the distance object. For this diagram, the loss function is pair-based, so it computes a loss per pair. In addition, a regularizer has been supplied, so a regularization loss is computed for each embedding in the batch. The per-pair and per-element losses are passed to the reducer, which (in this diagram) only keeps losses with a high value. The averages are computed for the high-valued pair and element losses, and are then added together to obtain the final loss.
+
+![high_level_loss_function_overview](docs/imgs/high_level_loss_function_overview.png)
+
+Now here's an example of a customized TripletMarginLoss:
+```python
+from pytorch_metric_learning.distances import CosineSimilarity
+from pytorch_metric_learning.reducers import ThresholdReducer
+from pytorch_metric_learning.regularizers import LpRegularizer
+from pytorch_metric_learning import losses
+loss_func = losses.TripletMarginLoss(distance=CosineSimilarity(),
+                                     reducer=ThresholdReducer(high=0.3),
+                                     embedding_regularizer=LpRegularizer())
+```
+This customized triplet loss has the following properties:
+
+ - The loss will be computed using cosine similarity instead of Euclidean distance.
+ - All triplet losses that are higher than 0.3 will be discarded.
+ - The embeddings will be L2 regularized.
+
+### Using loss functions for unsupervised / self-supervised learning
+
+The TripletMarginLoss is an embedding-based or tuple-based loss. This means that internally, there is no real notion of "classes". Tuples (pairs or triplets) are formed at each iteration, based on the labels it receives. The labels don't have to represent classes. They simply need to indicate the positive and negative relationships between the embeddings. Thus, it is easy to use these loss functions for unsupervised or self-supervised learning.
+
+For example, the code below is a simplified version of the augmentation strategy commonly used in self-supervision. The dataset does not come with any labels. Instead, the labels are created in the training loop, solely to indicate which embeddings are positive pairs.
+
+```python
+import torch
+
+# your training for-loop
+for i, data in enumerate(dataloader):
+ optimizer.zero_grad()
+ embeddings = your_model(data)
+ augmented = your_model(your_augmentation(data))
+ labels = torch.arange(embeddings.size(0))
+
+ embeddings = torch.cat([embeddings, augmented], dim=0)
+ labels = torch.cat([labels, labels], dim=0)
+
+ loss = loss_func(embeddings, labels)
+ loss.backward()
+ optimizer.step()
+```
+
+If you're interested in [MoCo](https://arxiv.org/pdf/1911.05722.pdf)-style self-supervision, take a look at the [MoCo on CIFAR10](https://github.com/KevinMusgrave/pytorch-metric-learning/tree/master/examples#simple-examples) notebook. It uses CrossBatchMemory to implement the momentum encoder queue, which means you can use any tuple loss, and any tuple miner to extract hard samples from the queue.
+
+
+## Highlights of the rest of the library
+
+- For a convenient way to train your model, take a look at the [trainers](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/).
+- Want to test your model's accuracy on a dataset? Try the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/).
+- To compute the accuracy of an embedding space directly, use [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/).
+
+If you're short of time and want a complete train/test workflow, check out the [example Google Colab notebooks](https://github.com/KevinMusgrave/pytorch-metric-learning/tree/master/examples).
+
+To learn more about all of the above, [see the documentation](https://kevinmusgrave.github.io/pytorch-metric-learning).
+
+
+## Installation
+
+### Required PyTorch version
+ - ```pytorch-metric-learning >= v0.9.90``` requires ```torch >= 1.6```
+ - ```pytorch-metric-learning < v0.9.90``` doesn't have a version requirement, but was tested with ```torch >= 1.2```
+
+Other dependencies: ```numpy, scikit-learn, tqdm, torchvision```
+
+### Pip
+```
+pip install pytorch-metric-learning
+```
+
+**To get the latest dev version**:
+```
+pip install pytorch-metric-learning --pre
+```
+
+**To install on Windows**:
+```
+pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
+pip install pytorch-metric-learning
+```
+
+**To install with evaluation and logging capabilities**
+
+(This will install the unofficial PyPI version of faiss-gpu, plus record-keeper and tensorboard):
+```
+pip install pytorch-metric-learning[with-hooks]
+```
+
+**To install with evaluation and logging capabilities (CPU)**
+
+(This will install the unofficial PyPI version of faiss-cpu, plus record-keeper and tensorboard):
+```
+pip install pytorch-metric-learning[with-hooks-cpu]
+```
+
+### Conda
+```
+conda install -c conda-forge pytorch-metric-learning
+```
+
+**To use the testing module, you'll need faiss, which can be installed via conda as well. See the [installation instructions for faiss](https://github.com/facebookresearch/faiss/blob/master/INSTALL.md).**
+
+
+
+
+## Benchmark results
+See [powerful-benchmarker](https://github.com/KevinMusgrave/powerful-benchmarker/) to view benchmark results and to use the benchmarking tool.
+
+
+## Development
+Development is done on the ```dev``` branch:
+```
+git checkout dev
+```
+
+Unit tests can be run with the default ```unittest``` library:
+```bash
+python -m unittest discover
+```
+
+You can specify the test datatypes and test device as environment variables. For example, to test using float32 and float64 on the CPU:
+```bash
+TEST_DTYPES=float32,float64 TEST_DEVICE=cpu python -m unittest discover
+```
+
+To run a single test file instead of the entire test suite, specify the file name:
+```bash
+python -m unittest tests/losses/test_angular_loss.py
+```
+
+Code is formatted using ```black``` and ```isort```:
+```bash
+pip install black isort
+./format_code.sh
+```
+
+
+## Acknowledgements
+
+### Contributors
+Thanks to the contributors who made pull requests!
+
+| Contributor | Highlights |
+| -- | -- |
+|[mlopezantequera](https://github.com/mlopezantequera) | - Made the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers) work on any combination of query and reference sets <br/> - Made [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) work with arbitrary label comparisons |
+|[cwkeam](https://github.com/cwkeam) | - [VICRegLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#vicregloss) <br/> - Added mean reciprocal rank accuracy to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) |
+|[marijnl](https://github.com/marijnl)| - [BatchEasyHardMiner](https://kevinmusgrave.github.io/pytorch-metric-learning/miners/#batcheasyhardminer) <br/> - [TwoStreamMetricLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/#twostreammetricloss) <br/> - [GlobalTwoStreamEmbeddingSpaceTester](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/#globaltwostreamembeddingspacetester) <br/> - [Example using trainers.TwoStreamMetricLoss](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/TwoStreamMetricLoss.ipynb) |
+| [chingisooinar](https://github.com/chingisooinar) | [SubCenterArcFaceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#subcenterarcfaceloss) |
+| [elias-ramzi](https://github.com/elias-ramzi) | [HierarchicalSampler](https://kevinmusgrave.github.io/pytorch-metric-learning/samplers/#hierarchicalsampler) |
+| [fjsj](https://github.com/fjsj) | [SupConLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#supconloss) |
+| [AlenUbuntu](https://github.com/AlenUbuntu) | [CircleLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#circleloss) |
+| [interestingzhuo](https://github.com/interestingzhuo) | [**PNPLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#pnploss) |
+| [wconnell](https://github.com/wconnell) | [Learning a scRNAseq Metric Embedding](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/scRNAseq_MetricEmbedding.ipynb) |
+| [AlexSchuy](https://github.com/AlexSchuy) | optimized ```utils.loss_and_miner_utils.get_random_triplet_indices``` |
+| [JohnGiorgi](https://github.com/JohnGiorgi) | ```all_gather``` in [utils.distributed](https://kevinmusgrave.github.io/pytorch-metric-learning/distributed) |
+| [Hummer12007](https://github.com/Hummer12007) | ```utils.key_checker``` |
+| [vltanh](https://github.com/vltanh) | Made ```InferenceModel.train_indexer``` accept datasets |
+| [btseytlin](https://github.com/btseytlin) | ```get_nearest_neighbors``` in [InferenceModel](https://kevinmusgrave.github.io/pytorch-metric-learning/inference_models) |
+| [mlw214](https://github.com/mlw214) | Added ```return_per_class``` to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) |
+| [layumi](https://github.com/layumi) | [InstanceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#instanceloss) |
+| [NoTody](https://github.com/NoTody) | Helped add `ref_emb` and `ref_labels` to the distributed wrappers. |
+| [ElisonSherton](https://github.com/ElisonSherton) | Fixed an edge case in ArcFaceLoss. |
+| [z1w](https://github.com/z1w) | |
+| [thinline72](https://github.com/thinline72) | |
+| [tpanum](https://github.com/tpanum) | |
+| [fralik](https://github.com/fralik) | |
+| [joaqo](https://github.com/joaqo) | |
+| [JoOkuma](https://github.com/JoOkuma) | |
+| [gkouros](https://github.com/gkouros) | |
+| [yutanakamura-tky](https://github.com/yutanakamura-tky) | |
+| [KinglittleQ](https://github.com/KinglittleQ) | |
+| [martin0258](https://github.com/martin0258) | |
+| [michaeldeyzel](https://github.com/michaeldeyzel) | |
+
+
+
+### Facebook AI
+Thank you to [Ser-Nam Lim](https://research.fb.com/people/lim-ser-nam/) at [Facebook AI](https://ai.facebook.com/), and my research advisor, [Professor Serge Belongie](https://vision.cornell.edu/se3/people/serge-belongie/). This project began during my internship at Facebook AI where I received valuable feedback from Ser-Nam, and his team of computer vision and machine learning engineers and research scientists. In particular, thanks to [Ashish Shah](https://www.linkedin.com/in/ashish217/) and [Austin Reiter](https://www.linkedin.com/in/austin-reiter-3962aa7/) for reviewing my code during its early stages of development.
+
+### Open-source repos
+This library contains code that has been adapted and modified from the following great open-source repos:
+- https://github.com/bnu-wangxun/Deep_Metric
+- https://github.com/chaoyuaw/incubator-mxnet/blob/master/example/gluon/embedding_learning
+- https://github.com/facebookresearch/deepcluster
+- https://github.com/geonm/proxy-anchor-loss
+- https://github.com/idstcv/SoftTriple
+- https://github.com/kunhe/FastAP-metric-learning
+- https://github.com/ronekko/deep_metric_learning
+- https://github.com/tjddus9597/Proxy-Anchor-CVPR2020
+- http://kaizhao.net/regularface
+
+### Logo
+Thanks to [Jeff Musgrave](https://www.designgenius.ca/) for designing the logo.
+
+## Citing this library
+If you'd like to cite pytorch-metric-learning in your paper, you can use this bibtex:
+```latex
+@article{Musgrave2020PyTorchML,
+ title={PyTorch Metric Learning},
+ author={Kevin Musgrave and Serge J. Belongie and Ser-Nam Lim},
+ journal={ArXiv},
+ year={2020},
+ volume={abs/2008.09164}
+}
+```
+
+
+
+
+%package help
+Summary: Development documents and examples for pytorch-metric-learning
+Provides: python3-pytorch-metric-learning-doc
+%description help
+<h1>
+<a href="https://github.com/KevinMusgrave/pytorch-metric-learning">
+<img alt="PyTorch Metric Learning" src="https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/docs/imgs/Logo2.png">
+</a>
+</h1>
+
+<p align="center">
+ <a href="https://pypi.org/project/pytorch-metric-learning">
+ <img alt="PyPi version" src="https://img.shields.io/pypi/v/pytorch-metric-learning?color=bright-green">
+ </a>
+
+
+
+ <a href="https://anaconda.org/conda-forge/pytorch-metric-learning">
+ <img alt="Anaconda version" src="https://img.shields.io/conda/v/conda-forge/pytorch-metric-learning?color=bright-green">
+ </a>
+</p>
+
+## News
+
+**January 16**: v1.7.0
+- Fixes an edge case in ArcFaceLoss. See the [release notes](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v1.7.0).
+- Thanks to contributor [ElisonSherton](https://github.com/ElisonSherton).
+
+**September 3**: v1.6.0
+- `DistributedLossWrapper` and `DistributedMinerWrapper` now support `ref_emb` and `ref_labels`.
+- Thanks to contributor [NoTody](https://github.com/NoTody).
+
+## Documentation
+- [**View the documentation here**](https://kevinmusgrave.github.io/pytorch-metric-learning/)
+- [**View the installation instructions here**](https://github.com/KevinMusgrave/pytorch-metric-learning#installation)
+- [**View the available losses, miners etc. here**](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/CONTENTS.md)
+
+
+## Google Colab Examples
+See the [examples folder](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/README.md) for notebooks you can download or run on Google Colab.
+
+
+## PyTorch Metric Learning Overview
+This library contains 9 modules, each of which can be used independently within your existing codebase, or combined together for a complete train/test workflow.
+
+![high_level_module_overview](docs/imgs/high_level_module_overview.png)
+
+
+
+## How loss functions work
+
+### Using losses and miners in your training loop
+Let’s initialize a plain [TripletMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#tripletmarginloss):
+```python
+from pytorch_metric_learning import losses
+loss_func = losses.TripletMarginLoss()
+```
+
+To compute the loss in your training loop, pass in the embeddings computed by your model, and the corresponding labels. The embeddings should have size (N, embedding_size), and the labels should have size (N), where N is the batch size.
+
+```python
+# your training loop
+for i, (data, labels) in enumerate(dataloader):
+ optimizer.zero_grad()
+ embeddings = model(data)
+ loss = loss_func(embeddings, labels)
+ loss.backward()
+ optimizer.step()
+```
+
+The TripletMarginLoss computes all possible triplets within the batch, based on the labels you pass into it. Anchor-positive pairs are formed by embeddings that share the same label, and anchor-negative pairs are formed by embeddings that have different labels.
+
+Sometimes it can help to add a mining function:
+```python
+from pytorch_metric_learning import miners, losses
+miner = miners.MultiSimilarityMiner()
+loss_func = losses.TripletMarginLoss()
+
+# your training loop
+for i, (data, labels) in enumerate(dataloader):
+ optimizer.zero_grad()
+ embeddings = model(data)
+ hard_pairs = miner(embeddings, labels)
+ loss = loss_func(embeddings, labels, hard_pairs)
+ loss.backward()
+ optimizer.step()
+```
+In the above code, the miner finds positive and negative pairs that it thinks are particularly difficult. Note that even though the TripletMarginLoss operates on triplets, it’s still possible to pass in pairs. This is because the library automatically converts pairs to triplets and triplets to pairs, when necessary.
+
+### Customizing loss functions
+Loss functions can be customized using [distances](https://kevinmusgrave.github.io/pytorch-metric-learning/distances/), [reducers](https://kevinmusgrave.github.io/pytorch-metric-learning/reducers/), and [regularizers](https://kevinmusgrave.github.io/pytorch-metric-learning/regularizers/). In the diagram below, a miner finds the indices of hard pairs within a batch. These are used to index into the distance matrix, computed by the distance object. For this diagram, the loss function is pair-based, so it computes a loss per pair. In addition, a regularizer has been supplied, so a regularization loss is computed for each embedding in the batch. The per-pair and per-element losses are passed to the reducer, which (in this diagram) only keeps losses with a high value. The averages are computed for the high-valued pair and element losses, and are then added together to obtain the final loss.
+
+![high_level_loss_function_overview](docs/imgs/high_level_loss_function_overview.png)
+
+Now here's an example of a customized TripletMarginLoss:
+```python
+from pytorch_metric_learning.distances import CosineSimilarity
+from pytorch_metric_learning.reducers import ThresholdReducer
+from pytorch_metric_learning.regularizers import LpRegularizer
+from pytorch_metric_learning import losses
+loss_func = losses.TripletMarginLoss(distance = CosineSimilarity(),
+ reducer = ThresholdReducer(high=0.3),
+ embedding_regularizer = LpRegularizer())
+```
+This customized triplet loss has the following properties:
+
+ - The loss will be computed using cosine similarity instead of Euclidean distance.
+ - All triplet losses that are higher than 0.3 will be discarded.
+ - The embeddings will be L2 regularized.
+
+### Using loss functions for unsupervised / self-supervised learning
+
+The TripletMarginLoss is an embedding-based or tuple-based loss. This means that internally, there is no real notion of "classes". Tuples (pairs or triplets) are formed at each iteration, based on the labels it receives. The labels don't have to represent classes. They simply need to indicate the positive and negative relationships between the embeddings. Thus, it is easy to use these loss functions for unsupervised or self-supervised learning.
+
+For example, the code below is a simplified version of the augmentation strategy commonly used in self-supervision. The dataset does not come with any labels. Instead, the labels are created in the training loop, solely to indicate which embeddings are positive pairs.
+
+```python
+import torch
+
+# your training for-loop
+for i, data in enumerate(dataloader):
+ optimizer.zero_grad()
+ embeddings = your_model(data)
+ augmented = your_model(your_augmentation(data))
+ labels = torch.arange(embeddings.size(0))
+
+ embeddings = torch.cat([embeddings, augmented], dim=0)
+ labels = torch.cat([labels, labels], dim=0)
+
+ loss = loss_func(embeddings, labels)
+ loss.backward()
+ optimizer.step()
+```
+
+If you're interested in [MoCo](https://arxiv.org/pdf/1911.05722.pdf)-style self-supervision, take a look at the [MoCo on CIFAR10](https://github.com/KevinMusgrave/pytorch-metric-learning/tree/master/examples#simple-examples) notebook. It uses CrossBatchMemory to implement the momentum encoder queue, which means you can use any tuple loss, and any tuple miner to extract hard samples from the queue.
+
+
+## Highlights of the rest of the library
+
+- For a convenient way to train your model, take a look at the [trainers](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/).
+- Want to test your model's accuracy on a dataset? Try the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/).
+- To compute the accuracy of an embedding space directly, use [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/).
+
+If you're short of time and want a complete train/test workflow, check out the [example Google Colab notebooks](https://github.com/KevinMusgrave/pytorch-metric-learning/tree/master/examples).
+
+To learn more about all of the above, [see the documentation](https://kevinmusgrave.github.io/pytorch-metric-learning).
+
+
+## Installation
+
+### Required PyTorch version
+ - ```pytorch-metric-learning >= v0.9.90``` requires ```torch >= 1.6```
+ - ```pytorch-metric-learning < v0.9.90``` doesn't have a version requirement, but was tested with ```torch >= 1.2```
+
+Other dependencies: ```numpy, scikit-learn, tqdm, torchvision```
+
+### Pip
+```
+pip install pytorch-metric-learning
+```
+
+**To get the latest dev version**:
+```
+pip install pytorch-metric-learning --pre
+```
+
+**To install on Windows**:
+```
+pip install torch===1.6.0 torchvision===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
+pip install pytorch-metric-learning
+```
+
+**To install with evaluation and logging capabilities**
+
+(This will install the unofficial PyPI version of faiss-gpu, plus record-keeper and tensorboard):
+```
+pip install pytorch-metric-learning[with-hooks]
+```
+
+**To install with evaluation and logging capabilities (CPU)**
+
+(This will install the unofficial PyPI version of faiss-cpu, plus record-keeper and tensorboard):
+```
+pip install pytorch-metric-learning[with-hooks-cpu]
+```
+
+### Conda
+```
+conda install -c conda-forge pytorch-metric-learning
+```
+
+**To use the testing module, you'll need faiss, which can be installed via conda as well. See the [installation instructions for faiss](https://github.com/facebookresearch/faiss/blob/master/INSTALL.md).**
+
+
+
+
+## Benchmark results
+See [powerful-benchmarker](https://github.com/KevinMusgrave/powerful-benchmarker/) to view benchmark results and to use the benchmarking tool.
+
+
+## Development
+Development is done on the ```dev``` branch:
+```
+git checkout dev
+```
+
+Unit tests can be run with the default ```unittest``` library:
+```bash
+python -m unittest discover
+```
+
+You can specify the test datatypes and test device as environment variables. For example, to test using float32 and float64 on the CPU:
+```bash
+TEST_DTYPES=float32,float64 TEST_DEVICE=cpu python -m unittest discover
+```
+
+To run a single test file instead of the entire test suite, specify the file name:
+```bash
+python -m unittest tests/losses/test_angular_loss.py
+```
+
+Code is formatted using ```black``` and ```isort```:
+```bash
+pip install black isort
+./format_code.sh
+```
+
+
+## Acknowledgements
+
+### Contributors
+Thanks to the contributors who made pull requests!
+
+| Contributor | Highlights |
+| -- | -- |
+|[mlopezantequera](https://github.com/mlopezantequera) | - Made the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers) work on any combination of query and reference sets <br/> - Made [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) work with arbitrary label comparisons |
+|[cwkeam](https://github.com/cwkeam) | - [VICRegLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#vicregloss) <br/> - Added mean reciprocal rank accuracy to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) |
+|[marijnl](https://github.com/marijnl)| - [BatchEasyHardMiner](https://kevinmusgrave.github.io/pytorch-metric-learning/miners/#batcheasyhardminer) <br/> - [TwoStreamMetricLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/#twostreammetricloss) <br/> - [GlobalTwoStreamEmbeddingSpaceTester](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/#globaltwostreamembeddingspacetester) <br/> - [Example using trainers.TwoStreamMetricLoss](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/TwoStreamMetricLoss.ipynb) |
+| [chingisooinar](https://github.com/chingisooinar) | [SubCenterArcFaceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#subcenterarcfaceloss) |
+| [elias-ramzi](https://github.com/elias-ramzi) | [HierarchicalSampler](https://kevinmusgrave.github.io/pytorch-metric-learning/samplers/#hierarchicalsampler) |
+| [fjsj](https://github.com/fjsj) | [SupConLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#supconloss) |
+| [AlenUbuntu](https://github.com/AlenUbuntu) | [CircleLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#circleloss) |
+| [interestingzhuo](https://github.com/interestingzhuo) | [**PNPLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#pnploss) |
+| [wconnell](https://github.com/wconnell) | [Learning a scRNAseq Metric Embedding](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/scRNAseq_MetricEmbedding.ipynb) |
+| [AlexSchuy](https://github.com/AlexSchuy) | optimized ```utils.loss_and_miner_utils.get_random_triplet_indices``` |
+| [JohnGiorgi](https://github.com/JohnGiorgi) | ```all_gather``` in [utils.distributed](https://kevinmusgrave.github.io/pytorch-metric-learning/distributed) |
+| [Hummer12007](https://github.com/Hummer12007) | ```utils.key_checker``` |
+| [vltanh](https://github.com/vltanh) | Made ```InferenceModel.train_indexer``` accept datasets |
+| [btseytlin](https://github.com/btseytlin) | ```get_nearest_neighbors``` in [InferenceModel](https://kevinmusgrave.github.io/pytorch-metric-learning/inference_models) |
+| [mlw214](https://github.com/mlw214) | Added ```return_per_class``` to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) |
+| [layumi](https://github.com/layumi) | [InstanceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#instanceloss) |
+| [NoTody](https://github.com/NoTody) | Helped add `ref_emb` and `ref_labels` to the distributed wrappers. |
+| [ElisonSherton](https://github.com/ElisonSherton) | Fixed an edge case in ArcFaceLoss. |
+| [z1w](https://github.com/z1w) | |
+| [thinline72](https://github.com/thinline72) | |
+| [tpanum](https://github.com/tpanum) | |
+| [fralik](https://github.com/fralik) | |
+| [joaqo](https://github.com/joaqo) | |
+| [JoOkuma](https://github.com/JoOkuma) | |
+| [gkouros](https://github.com/gkouros) | |
+| [yutanakamura-tky](https://github.com/yutanakamura-tky) | |
+| [KinglittleQ](https://github.com/KinglittleQ) | |
+| [martin0258](https://github.com/martin0258) | |
+| [michaeldeyzel](https://github.com/michaeldeyzel) | |
+
+
+
+### Facebook AI
+Thank you to [Ser-Nam Lim](https://research.fb.com/people/lim-ser-nam/) at [Facebook AI](https://ai.facebook.com/), and my research advisor, [Professor Serge Belongie](https://vision.cornell.edu/se3/people/serge-belongie/). This project began during my internship at Facebook AI where I received valuable feedback from Ser-Nam, and his team of computer vision and machine learning engineers and research scientists. In particular, thanks to [Ashish Shah](https://www.linkedin.com/in/ashish217/) and [Austin Reiter](https://www.linkedin.com/in/austin-reiter-3962aa7/) for reviewing my code during its early stages of development.
+
+### Open-source repos
+This library contains code that has been adapted and modified from the following great open-source repos:
+- https://github.com/bnu-wangxun/Deep_Metric
+- https://github.com/chaoyuaw/incubator-mxnet/blob/master/example/gluon/embedding_learning
+- https://github.com/facebookresearch/deepcluster
+- https://github.com/geonm/proxy-anchor-loss
+- https://github.com/idstcv/SoftTriple
+- https://github.com/kunhe/FastAP-metric-learning
+- https://github.com/ronekko/deep_metric_learning
+- https://github.com/tjddus9597/Proxy-Anchor-CVPR2020
+- http://kaizhao.net/regularface
+
+### Logo
+Thanks to [Jeff Musgrave](https://www.designgenius.ca/) for designing the logo.
+
+## Citing this library
+If you'd like to cite pytorch-metric-learning in your paper, you can use this bibtex:
+```latex
+@article{Musgrave2020PyTorchML,
+ title={PyTorch Metric Learning},
+ author={Kevin Musgrave and Serge J. Belongie and Ser-Nam Lim},
+ journal={ArXiv},
+ year={2020},
+ volume={abs/2008.09164}
+}
+```
+
+
+
+
+%prep
+%autosetup -n pytorch-metric-learning-2.1.0
+
+%build
+%py3_build
+
+%install
+%py3_install
+install -d -m755 %{buildroot}/%{_pkgdocdir}
+if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
+if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
+if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
+if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
+# collect every file installed under the buildroot into the %files manifest
+pushd %{buildroot}
+if [ -d usr/lib ]; then
+ find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/lib64 ]; then
+ find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/bin ]; then
+ find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+if [ -d usr/sbin ]; then
+ find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+fi
+touch doclist.lst
+if [ -d usr/share/man ]; then
+ find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+fi
+popd
+mv %{buildroot}/filelist.lst .
+mv %{buildroot}/doclist.lst .
+
+%files -n python3-pytorch-metric-learning -f filelist.lst
+%dir %{python3_sitelib}/*
+
+%files help -f doclist.lst
+%{_docdir}/*
+
+%changelog
+* Mon Apr 10 2023 Python_Bot <Python_Bot@openeuler.org> - 2.1.0-1
+- Package Spec generated
diff --git a/sources b/sources
new file mode 100644
index 0000000..ad29370
--- /dev/null
+++ b/sources
@@ -0,0 +1 @@
+ebfb3c6cbb59d523784c12405661733a pytorch-metric-learning-2.1.0.tar.gz