%global _empty_manifest_terminate_build 0
Name: python-lightseq
Version: 3.0.1
Release: 1
Summary: LightSeq is a high performance library for sequence processing and generation
License: Apache Software License
URL: https://github.com/bytedance/lightseq
Source0: https://mirrors.nju.edu.cn/pypi/web/packages/92/c3/ca4ed0027fb97a4fb6f0cf30010f7e0111bf688975c97daee297d1de0e51/lightseq-3.0.1.tar.gz
BuildArch: noarch
Requires: python3-ninja
Requires: python3-numpy
Requires: python3-scipy
%description
LightSeq is a high performance training and inference library for sequence processing and generation, implemented in CUDA.
It enables highly efficient computation of modern NLP models such as **BERT**, **GPT**, **Transformer**, etc.
It is therefore best suited for *Machine Translation*, *Text Generation*, *Dialog*, *Language
Modelling*, *Sentiment Analysis*, and other related tasks with sequence data.
The library is built on top of the official CUDA libraries ([cuBLAS](https://docs.nvidia.com/cuda/cublas/index.html),
[Thrust](https://docs.nvidia.com/cuda/thrust/index.html), [CUB](http://nvlabs.github.io/cub/)) and
custom kernel functions which are fused and optimized specifically for the Transformer model family. In
addition to model components, the inference library also provides an easy-to-deploy model management and serving backend based on
[TensorRT Inference
Server](https://docs.nvidia.com/deeplearning/sdk/inference-server-archived/tensorrt_inference_server_120/tensorrt-inference-server-guide/docs/quickstart.html).
With LightSeq, one can easily develop modified Transformer architectures with little additional code.
## Features
### [>>> Training](./lightseq/training)
The following is a support matrix of LightSeq **training** library compared with
[DeepSpeed](https://github.com/microsoft/DeepSpeed).
![features](./docs/training/images/features.png)
### [>>> Inference](./lightseq/inference)
The following is a support matrix of LightSeq **inference** library compared with
[TurboTransformers](https://github.com/Tencent/TurboTransformers) and
[FasterTransformer](https://github.com/NVIDIA/DeepLearningExamples/tree/master/FasterTransformer).
![support](./docs/inference/images/support.png)
## Performance
### [>>> Training](./lightseq/training)
Here we present the experimental results on the WMT14 English-to-German translation task based on Transformer-big models. We train Transformer models of different sizes on eight NVIDIA Tesla V100 or A100 GPUs with data parallelism and fp16 mixed precision.
[Fairseq](https://github.com/pytorch/fairseq) with [Apex](https://github.com/NVIDIA/apex) is chosen as our baseline.
We compute the speedup at different batch sizes using WPS (real words per second) as the metric.
More results are available [here](./docs/training/performance.md).
### [>>> Inference](./lightseq/inference)
Here we present the experimental results on neural machine translation based on Transformer-base models using beam search.
We choose TensorFlow and
[FasterTransformer](https://github.com/NVIDIA/DeepLearningExamples/tree/master/FasterTransformer) as comparisons.
The implementation from
[tensor2tensor](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py)
was used as the TensorFlow benchmark.
More results are available [here](./docs/inference/performance.md).
## Quick Start
The complete user guide is available [here](docs/guide.md).
### Installation
You can install LightSeq from PyPI:
```shell
$ pip install lightseq
```
The PyPI package currently supports only Python 3.6 to 3.8 on Linux. Consider compiling from source for other environments:
```shell
$ PATH=/usr/local/hdf5/:$PATH ENABLE_FP32=0 ENABLE_DEBUG=0 pip install -e $PROJECT_DIR
```
Detailed building introduction is available [here](docs/inference/build.md).
### Fast training from Fairseq
You can experience lightning-fast training by running the following commands. First, install the requirements:
```shell
$ pip install lightseq fairseq sacremoses
```
Then you can train a translation model on the WMT14 English-to-German dataset by running the following script:
```shell
$ sh examples/training/fairseq/ls_fairseq_wmt14en2de.sh
```
To compare LightSeq with Fairseq, delete the arguments with the `ls_` prefix to use the original Fairseq implementation.
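Beyond the Fairseq integration, the training library exposes fused Transformer layers that can be dropped into your own PyTorch models. The snippet below is only a sketch based on the interface used in the LightSeq training examples; the exact `get_config` field names are an assumption and may differ across versions.
```python
import torch
from lightseq.training import LSTransformerEncoderLayer

# Build a layer config; the field names below follow the LightSeq training
# examples and are an assumption here -- check get_config() in your version.
config = LSTransformerEncoderLayer.get_config(
    max_batch_tokens=4096,
    max_seq_len=256,
    hidden_size=512,
    intermediate_size=2048,
    nhead=8,
    attn_prob_dropout_ratio=0.1,
    activation_dropout_ratio=0.1,
    hidden_dropout_ratio=0.1,
    pre_layer_norm=True,
    fp16=True,
    local_rank=0,
)

layer = LSTransformerEncoderLayer(config).cuda()
hidden = torch.randn(8, 256, 512, dtype=torch.half, device="cuda")
padding_mask = torch.zeros(8, 256, dtype=torch.half, device="cuda")  # 1 marks padded positions
output = layer(hidden, padding_mask)
```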
More usage is available [here](./lightseq/training/README.md).
### Fast inference from HuggingFace bart
We provide an end-to-end bart-base example to show how fast LightSeq is compared to HuggingFace. First, install the requirements:
```shell
$ pip install torch tensorflow transformers lightseq
$ cd examples/inference/python
```
Then you can check the performance by simply running the following commands. `hf_bart_export.py` converts PyTorch weights to the LightSeq protobuf format.
```shell
$ python export/huggingface/hf_bart_export.py
$ python test/ls_bart.py
```
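After the export step, the generated weights can be loaded directly from Python. The snippet below is a minimal sketch of that flow; the output file name `lightseq_bart_base.pb` and the exact return structure of `infer` are assumptions based on the bundled `ls_bart.py` example and may differ in your version.
```python
import lightseq.inference as lsi
from transformers import BartTokenizer

# Path assumed to match the file written by hf_bart_export.py; adjust if needed.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = lsi.Transformer("lightseq_bart_base.pb", 128)  # weights file, max batch size

inputs = tokenizer(["LightSeq makes sequence generation fast."], return_tensors="np")
# infer() takes a batch of token ids and returns generated ids
# (plus scores in some versions); inspect the result for your release.
outputs = model.infer(inputs["input_ids"])
print(outputs)
```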
More usage is available [here](./lightseq/inference/README.md).
### Fast deploy inference server
We provide a Docker image which contains Triton Inference Server and LightSeq's dynamic link library; you can deploy an inference server simply by replacing the model file with your own.
```shell
$ sudo docker pull hexisyztem/tritonserver_lightseq:22.01-1
```
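Once the container is running with your model repository mounted, you can query it with the standard Triton Python client. This is only a sketch: the model name and the input/output tensor names (`your_model`, `source_ids`, `target_ids`) are placeholders that must match your model's Triton configuration.
```python
import numpy as np
import tritonclient.http as httpclient

# Connect to the Triton HTTP endpoint (default port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder tensor/model names -- replace with those in your config.pbtxt.
source = httpclient.InferInput("source_ids", [1, 8], "INT32")
source.set_data_from_numpy(np.zeros((1, 8), dtype=np.int32))

result = client.infer(model_name="your_model", inputs=[source])
print(result.as_numpy("target_ids"))
```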
More usage is available [here](https://github.com/bytedance/lightseq/tree/master/examples/triton_backend).
## Cite Us
If you use LightSeq in your research, please cite the following papers.
```
@InProceedings{wang2021lightseq,
title = "{L}ight{S}eq: A High Performance Inference Library for Transformers",
author = "Wang, Xiaohui and Xiong, Ying and Wei, Yang and Wang, Mingxuan and Li, Lei",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers (NAACL-HLT)",
month = jun,
year = "2021",
publisher = "Association for Computational Linguistics",
pages = "113--120",
}
@article{wang2021lightseq2,
title={LightSeq2: Accelerated Training for Transformer-based Models on GPUs},
author={Wang, Xiaohui and Xiong, Ying and Qian, Xian and Wei, Yang and Li, Lei and Wang, Mingxuan},
journal={arXiv preprint arXiv:2110.05722},
year={2021}
}
```
## Contact
For any questions or suggestions, please feel free to contact us at
wangxiaohui.neo@bytedance.com, xiongying.taka@bytedance.com, qian.xian@bytedance.com, weiyang.god@bytedance.com, wangmingxuan.89@bytedance.com, lilei@cs.ucsb.edu
## Hiring
The LightSeq team is hiring interns and full-time employees with backgrounds in deep learning systems, natural language processing, computer vision, or speech.
We are based in Beijing and Shanghai. If you are interested, please send your resume to wangxiaohui.neo@bytedance.com.
%package -n python3-lightseq
Summary: LightSeq is a high performance library for sequence processing and generation
Provides: python-lightseq
BuildRequires: python3-devel
BuildRequires: python3-setuptools
BuildRequires: python3-pip
%description -n python3-lightseq
LightSeq is a high performance training and inference library for sequence processing and generation, implemented in CUDA.
It enables highly efficient computation of modern NLP models such as **BERT**, **GPT**, **Transformer**, etc.
It is therefore best suited for *Machine Translation*, *Text Generation*, *Dialog*, *Language
Modelling*, *Sentiment Analysis*, and other related tasks with sequence data.
The library is built on top of the official CUDA libraries ([cuBLAS](https://docs.nvidia.com/cuda/cublas/index.html),
[Thrust](https://docs.nvidia.com/cuda/thrust/index.html), [CUB](http://nvlabs.github.io/cub/)) and
custom kernel functions which are fused and optimized specifically for the Transformer model family. In
addition to model components, the inference library also provides an easy-to-deploy model management and serving backend based on
[TensorRT Inference
Server](https://docs.nvidia.com/deeplearning/sdk/inference-server-archived/tensorrt_inference_server_120/tensorrt-inference-server-guide/docs/quickstart.html).
With LightSeq, one can easily develop modified Transformer architectures with little additional code.
## Features
### [>>> Training](./lightseq/training)
The following is a support matrix of LightSeq **training** library compared with
[DeepSpeed](https://github.com/microsoft/DeepSpeed).
![features](./docs/training/images/features.png)
### [>>> Inference](./lightseq/inference)
The following is a support matrix of LightSeq **inference** library compared with
[TurboTransformers](https://github.com/Tencent/TurboTransformers) and
[FasterTransformer](https://github.com/NVIDIA/DeepLearningExamples/tree/master/FasterTransformer).
![support](./docs/inference/images/support.png)
## Performance
### [>>> Training](./lightseq/training)
Here we present the experimental results on the WMT14 English-to-German translation task based on Transformer-big models. We train Transformer models of different sizes on eight NVIDIA Tesla V100 or A100 GPUs with data parallelism and fp16 mixed precision.
[Fairseq](https://github.com/pytorch/fairseq) with [Apex](https://github.com/NVIDIA/apex) is chosen as our baseline.
We compute the speedup at different batch sizes using WPS (real words per second) as the metric.
More results are available [here](./docs/training/performance.md).
### [>>> Inference](./lightseq/inference)
Here we present the experimental results on neural machine translation based on Transformer-base models using beam search.
We choose TensorFlow and
[FasterTransformer](https://github.com/NVIDIA/DeepLearningExamples/tree/master/FasterTransformer) as comparisons.
The implementation from
[tensor2tensor](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py)
was used as the TensorFlow benchmark.
More results are available [here](./docs/inference/performance.md).
## Quick Start
The complete user guide is available [here](docs/guide.md).
### Installation
You can install LightSeq from PyPI:
```shell
$ pip install lightseq
```
The PyPI package currently supports only Python 3.6 to 3.8 on Linux. Consider compiling from source for other environments:
```shell
$ PATH=/usr/local/hdf5/:$PATH ENABLE_FP32=0 ENABLE_DEBUG=0 pip install -e $PROJECT_DIR
```
Detailed building introduction is available [here](docs/inference/build.md).
### Fast training from Fairseq
You can experience lightning-fast training by running the following commands. First, install the requirements:
```shell
$ pip install lightseq fairseq sacremoses
```
Then you can train a translation model on the WMT14 English-to-German dataset by running the following script:
```shell
$ sh examples/training/fairseq/ls_fairseq_wmt14en2de.sh
```
To compare LightSeq with Fairseq, delete the arguments with the `ls_` prefix to use the original Fairseq implementation.
More usage is available [here](./lightseq/training/README.md).
### Fast inference from HuggingFace bart
We provide an end-to-end bart-base example to show how fast LightSeq is compared to HuggingFace. First, install the requirements:
```shell
$ pip install torch tensorflow transformers lightseq
$ cd examples/inference/python
```
Then you can check the performance by simply running the following commands. `hf_bart_export.py` converts PyTorch weights to the LightSeq protobuf format.
```shell
$ python export/huggingface/hf_bart_export.py
$ python test/ls_bart.py
```
More usage is available [here](./lightseq/inference/README.md).
### Fast deploy inference server
We provide a Docker image which contains Triton Inference Server and LightSeq's dynamic link library; you can deploy an inference server simply by replacing the model file with your own.
```shell
$ sudo docker pull hexisyztem/tritonserver_lightseq:22.01-1
```
More usage is available [here](https://github.com/bytedance/lightseq/tree/master/examples/triton_backend).
## Cite Us
If you use LightSeq in your research, please cite the following papers.
```
@InProceedings{wang2021lightseq,
title = "{L}ight{S}eq: A High Performance Inference Library for Transformers",
author = "Wang, Xiaohui and Xiong, Ying and Wei, Yang and Wang, Mingxuan and Li, Lei",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers (NAACL-HLT)",
month = jun,
year = "2021",
publisher = "Association for Computational Linguistics",
pages = "113--120",
}
@article{wang2021lightseq2,
title={LightSeq2: Accelerated Training for Transformer-based Models on GPUs},
author={Wang, Xiaohui and Xiong, Ying and Qian, Xian and Wei, Yang and Li, Lei and Wang, Mingxuan},
journal={arXiv preprint arXiv:2110.05722},
year={2021}
}
```
## Contact
For any questions or suggestions, please feel free to contact us at
wangxiaohui.neo@bytedance.com, xiongying.taka@bytedance.com, qian.xian@bytedance.com, weiyang.god@bytedance.com, wangmingxuan.89@bytedance.com, lilei@cs.ucsb.edu
## Hiring
The LightSeq team is hiring interns and full-time employees with backgrounds in deep learning systems, natural language processing, computer vision, or speech.
We are based in Beijing and Shanghai. If you are interested, please send your resume to wangxiaohui.neo@bytedance.com.
%package help
Summary: Development documents and examples for lightseq
Provides: python3-lightseq-doc
%description help
LightSeq is a high performance training and inference library for sequence processing and generation, implemented in CUDA.
It enables highly efficient computation of modern NLP models such as **BERT**, **GPT**, **Transformer**, etc.
It is therefore best suited for *Machine Translation*, *Text Generation*, *Dialog*, *Language
Modelling*, *Sentiment Analysis*, and other related tasks with sequence data.
The library is built on top of the official CUDA libraries ([cuBLAS](https://docs.nvidia.com/cuda/cublas/index.html),
[Thrust](https://docs.nvidia.com/cuda/thrust/index.html), [CUB](http://nvlabs.github.io/cub/)) and
custom kernel functions which are fused and optimized specifically for the Transformer model family. In
addition to model components, the inference library also provides an easy-to-deploy model management and serving backend based on
[TensorRT Inference
Server](https://docs.nvidia.com/deeplearning/sdk/inference-server-archived/tensorrt_inference_server_120/tensorrt-inference-server-guide/docs/quickstart.html).
With LightSeq, one can easily develop modified Transformer architectures with little additional code.
## Features
### [>>> Training](./lightseq/training)
The following is a support matrix of LightSeq **training** library compared with
[DeepSpeed](https://github.com/microsoft/DeepSpeed).
![features](./docs/training/images/features.png)
### [>>> Inference](./lightseq/inference)
The following is a support matrix of LightSeq **inference** library compared with
[TurboTransformers](https://github.com/Tencent/TurboTransformers) and
[FasterTransformer](https://github.com/NVIDIA/DeepLearningExamples/tree/master/FasterTransformer).
![support](./docs/inference/images/support.png)
## Performance
### [>>> Training](./lightseq/training)
Here we present the experimental results on the WMT14 English-to-German translation task based on Transformer-big models. We train Transformer models of different sizes on eight NVIDIA Tesla V100 or A100 GPUs with data parallelism and fp16 mixed precision.
[Fairseq](https://github.com/pytorch/fairseq) with [Apex](https://github.com/NVIDIA/apex) is chosen as our baseline.
We compute the speedup at different batch sizes using WPS (real words per second) as the metric.
More results are available [here](./docs/training/performance.md).
### [>>> Inference](./lightseq/inference)
Here we present the experimental results on neural machine translation based on Transformer-base models using beam search.
We choose TensorFlow and
[FasterTransformer](https://github.com/NVIDIA/DeepLearningExamples/tree/master/FasterTransformer) as comparisons.
The implementation from
[tensor2tensor](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py)
was used as the TensorFlow benchmark.
More results are available [here](./docs/inference/performance.md).
## Quick Start
The complete user guide is available [here](docs/guide.md).
### Installation
You can install LightSeq from PyPI:
```shell
$ pip install lightseq
```
The PyPI package currently supports only Python 3.6 to 3.8 on Linux. Consider compiling from source for other environments:
```shell
$ PATH=/usr/local/hdf5/:$PATH ENABLE_FP32=0 ENABLE_DEBUG=0 pip install -e $PROJECT_DIR
```
Detailed building introduction is available [here](docs/inference/build.md).
### Fast training from Fairseq
You can experience lightning-fast training by running the following commands. First, install the requirements:
```shell
$ pip install lightseq fairseq sacremoses
```
Then you can train a translation model on the WMT14 English-to-German dataset by running the following script:
```shell
$ sh examples/training/fairseq/ls_fairseq_wmt14en2de.sh
```
To compare LightSeq with Fairseq, delete the arguments with the `ls_` prefix to use the original Fairseq implementation.
More usage is available [here](./lightseq/training/README.md).
### Fast inference from HuggingFace bart
We provide an end-to-end bart-base example to show how fast LightSeq is compared to HuggingFace. First, install the requirements:
```shell
$ pip install torch tensorflow transformers lightseq
$ cd examples/inference/python
```
Then you can check the performance by simply running the following commands. `hf_bart_export.py` converts PyTorch weights to the LightSeq protobuf format.
```shell
$ python export/huggingface/hf_bart_export.py
$ python test/ls_bart.py
```
More usage is available [here](./lightseq/inference/README.md).
### Fast deploy inference server
We provide a Docker image which contains Triton Inference Server and LightSeq's dynamic link library; you can deploy an inference server simply by replacing the model file with your own.
```shell
$ sudo docker pull hexisyztem/tritonserver_lightseq:22.01-1
```
More usage is available [here](https://github.com/bytedance/lightseq/tree/master/examples/triton_backend).
## Cite Us
If you use LightSeq in your research, please cite the following papers.
```
@InProceedings{wang2021lightseq,
title = "{L}ight{S}eq: A High Performance Inference Library for Transformers",
author = "Wang, Xiaohui and Xiong, Ying and Wei, Yang and Wang, Mingxuan and Li, Lei",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers (NAACL-HLT)",
month = jun,
year = "2021",
publisher = "Association for Computational Linguistics",
pages = "113--120",
}
@article{wang2021lightseq2,
title={LightSeq2: Accelerated Training for Transformer-based Models on GPUs},
author={Wang, Xiaohui and Xiong, Ying and Qian, Xian and Wei, Yang and Li, Lei and Wang, Mingxuan},
journal={arXiv preprint arXiv:2110.05722},
year={2021}
}
```
## Contact
For any questions or suggestions, please feel free to contact us at
wangxiaohui.neo@bytedance.com, xiongying.taka@bytedance.com, qian.xian@bytedance.com, weiyang.god@bytedance.com, wangmingxuan.89@bytedance.com, lilei@cs.ucsb.edu
## Hiring
The LightSeq team is hiring interns and full-time employees with backgrounds in deep learning systems, natural language processing, computer vision, or speech.
We are based in Beijing and Shanghai. If you are interested, please send your resume to wangxiaohui.neo@bytedance.com.
%prep
%autosetup -n lightseq-3.0.1
%build
%py3_build
%install
%py3_install
install -d -m755 %{buildroot}/%{_pkgdocdir}
if [ -d doc ]; then cp -arf doc %{buildroot}/%{_pkgdocdir}; fi
if [ -d docs ]; then cp -arf docs %{buildroot}/%{_pkgdocdir}; fi
if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
pushd %{buildroot}
if [ -d usr/lib ]; then
find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/lib64 ]; then
find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/bin ]; then
find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
fi
if [ -d usr/sbin ]; then
find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
fi
touch doclist.lst
if [ -d usr/share/man ]; then
find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
fi
popd
mv %{buildroot}/filelist.lst .
mv %{buildroot}/doclist.lst .
%files -n python3-lightseq -f filelist.lst
%dir %{python3_sitelib}/*
%files help -f doclist.lst
%{_docdir}/*
%changelog
* Fri May 05 2023 Python_Bot - 3.0.1-1
- Package Spec generated