author    | CoprDistGit <infra@openeuler.org> | 2023-06-08 15:08:41 +0000
---|---|---
committer | CoprDistGit <infra@openeuler.org> | 2023-06-08 15:08:41 +0000
commit    | dc25a781a2525027402e733bb6e9c0e5d6e35ea4 (patch)
tree      | cd2e2f7e39ed1e1468e77fe0934f94a9cf5a8ed4
parent    | 32f57e3ab35d0cc4d506d1e0f58a3c05bc89c8c6 (diff)
automatic import of python-mct-nightly (branch: openeuler20.03)
-rw-r--r-- | .gitignore              |  1
-rw-r--r-- | python-mct-nightly.spec | 34
-rw-r--r-- | sources                 |  2

3 files changed, 21 insertions, 16 deletions
diff --git a/.gitignore b/.gitignore
@@ -1 +1,2 @@
 /mct-nightly-1.8.0.31032023.post405.tar.gz
+/mct-nightly-1.8.0.31052023.post402.tar.gz
diff --git a/python-mct-nightly.spec b/python-mct-nightly.spec
index ea11992..a5071a0 100644
--- a/python-mct-nightly.spec
+++ b/python-mct-nightly.spec
@@ -1,11 +1,11 @@
 %global _empty_manifest_terminate_build 0

 Name: python-mct-nightly
-Version: 1.8.0.31032023.post405
+Version: 1.8.0.31052023.post402
 Release: 1
 Summary: A Model Compression Toolkit for neural networks
 License: Apache Software License
 URL: https://pypi.org/project/mct-nightly/
-Source0: https://mirrors.nju.edu.cn/pypi/web/packages/28/c5/980ba849442d9bce7243befb7192a49f8a918d716b0a79cecc9436a212e0/mct-nightly-1.8.0.31032023.post405.tar.gz
+Source0: https://mirrors.aliyun.com/pypi/web/packages/55/3f/eb638da87e581c6d8595c336cbf83d9c4b6b3b30b324b32c8286ed582dd5/mct-nightly-1.8.0.31052023.post402.tar.gz
 BuildArch: noarch

 Requires: python3-networkx
@@ -20,6 +20,7 @@ Requires: python3-PuLP
 Requires: python3-matplotlib
 Requires: python3-scipy
 Requires: python3-protobuf
+Requires: python3-mct-quantizers-nightly

 %description
 # Model Compression Toolkit (MCT)
@@ -64,15 +65,16 @@ In addition, MCT supports different quantization schemes for quantizing weights
 * Symmetric
 * Uniform

-Core features:
+Main features:
 * <ins>Graph optimizations:</ins> Transforming the model to an equivalent (yet, more efficient) model (for example, batch-normalization layer folding to its preceding linear layer).
-* <ins>Quantization parameter search:</ins> Different methods can be used to minimize the expected added quantization-noise during thresholds search (by default, we use Mean-Square-Errorm but other metrics can be used such as No-Clipping, Mean-Average-Error, and more).
+* <ins>Quantization parameter search:</ins> Different methods can be used to minimize the expected added quantization-noise during thresholds search (by default, we use Mean-Square-Error, but other metrics can be used such as No-Clipping, Mean-Average-Error, and more).
 * <ins>Advanced quantization algorithms:</ins> To prevent a performance degradation some algorithms are applied such as:
   * <ins>Shift negative correction:</ins> Symmetric activation quantization can hurt the model's performance when some layers output both negative and positive activations, but their range is asymmetric. For more details please visit [1].
   * <ins>Outliers filtering:</ins> Computing z-score for activation statistics to detect and remove outliers.
   * <ins>Clustering:</ins> Using non-uniform quantization grid to quantize the weights and activations to match their distributions.[*](#experimental-features)
 * <ins>Mixed-precision search:</ins> Assigning quantization bit-width per layer (for weights/activations), based on the layer's sensitivity to different bit-widths.
 * <ins>Visualization:</ins> You can use TensorBoard to observe useful information for troubleshooting the quantized model's performance (for example, the model in different phases of the quantization, collected statistics, similarity between layers of the float and quantized model and bit-width configuration for mixed-precision quantization). For more details, please read the [visualization documentation](https://sony.github.io/model_optimization/docs/guidelines/visualization.html).
+* <ins>Target Platform Capabilities:</ins> The Target Platform Capabilities (TPC) describes the target platform (an edge device with dedicated hardware). For more details, please read the [TPC README](model_compression_toolkit/target_platform_capabilities/README.md).
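The "Outliers filtering" bullet in the feature list above describes z-score screening of collected activation statistics. As a rough illustration of that idea (not MCT's actual API; the function name and the 3.0 threshold are assumptions), a NumPy sketch:

```python
import numpy as np

def filter_outliers(activations: np.ndarray, z_threshold: float = 3.0) -> np.ndarray:
    """Drop samples whose z-score exceeds the threshold, so extreme
    values do not inflate the range used during threshold search."""
    std = activations.std()
    if std == 0:
        return activations
    z = np.abs((activations - activations.mean()) / std)
    return activations[z <= z_threshold]

# 20 typical activation samples plus one extreme value.
acts = np.concatenate([np.linspace(0.1, 0.5, 20), [50.0]])
kept = filter_outliers(acts)  # the 50.0 sample is screened out
```

With the outlier removed, the remaining samples span roughly [0.1, 0.5], so a quantization threshold fitted to them wastes far fewer grid points.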
 #### Experimental features
@@ -244,15 +246,16 @@ In addition, MCT supports different quantization schemes for quantizing weights
 * Symmetric
 * Uniform

-Core features:
+Main features:
 * <ins>Graph optimizations:</ins> Transforming the model to an equivalent (yet, more efficient) model (for example, batch-normalization layer folding to its preceding linear layer).
-* <ins>Quantization parameter search:</ins> Different methods can be used to minimize the expected added quantization-noise during thresholds search (by default, we use Mean-Square-Errorm but other metrics can be used such as No-Clipping, Mean-Average-Error, and more).
+* <ins>Quantization parameter search:</ins> Different methods can be used to minimize the expected added quantization-noise during thresholds search (by default, we use Mean-Square-Error, but other metrics can be used such as No-Clipping, Mean-Average-Error, and more).
 * <ins>Advanced quantization algorithms:</ins> To prevent a performance degradation some algorithms are applied such as:
   * <ins>Shift negative correction:</ins> Symmetric activation quantization can hurt the model's performance when some layers output both negative and positive activations, but their range is asymmetric. For more details please visit [1].
   * <ins>Outliers filtering:</ins> Computing z-score for activation statistics to detect and remove outliers.
   * <ins>Clustering:</ins> Using non-uniform quantization grid to quantize the weights and activations to match their distributions.[*](#experimental-features)
 * <ins>Mixed-precision search:</ins> Assigning quantization bit-width per layer (for weights/activations), based on the layer's sensitivity to different bit-widths.
 * <ins>Visualization:</ins> You can use TensorBoard to observe useful information for troubleshooting the quantized model's performance (for example, the model in different phases of the quantization, collected statistics, similarity between layers of the float and quantized model and bit-width configuration for mixed-precision quantization). For more details, please read the [visualization documentation](https://sony.github.io/model_optimization/docs/guidelines/visualization.html).
+* <ins>Target Platform Capabilities:</ins> The Target Platform Capabilities (TPC) describes the target platform (an edge device with dedicated hardware). For more details, please read the [TPC README](model_compression_toolkit/target_platform_capabilities/README.md).
 #### Experimental features
@@ -421,15 +424,16 @@ In addition, MCT supports different quantization schemes for quantizing weights
 * Symmetric
 * Uniform

-Core features:
+Main features:
 * <ins>Graph optimizations:</ins> Transforming the model to an equivalent (yet, more efficient) model (for example, batch-normalization layer folding to its preceding linear layer).
-* <ins>Quantization parameter search:</ins> Different methods can be used to minimize the expected added quantization-noise during thresholds search (by default, we use Mean-Square-Errorm but other metrics can be used such as No-Clipping, Mean-Average-Error, and more).
+* <ins>Quantization parameter search:</ins> Different methods can be used to minimize the expected added quantization-noise during thresholds search (by default, we use Mean-Square-Error, but other metrics can be used such as No-Clipping, Mean-Average-Error, and more).
 * <ins>Advanced quantization algorithms:</ins> To prevent a performance degradation some algorithms are applied such as:
   * <ins>Shift negative correction:</ins> Symmetric activation quantization can hurt the model's performance when some layers output both negative and positive activations, but their range is asymmetric. For more details please visit [1].
   * <ins>Outliers filtering:</ins> Computing z-score for activation statistics to detect and remove outliers.
   * <ins>Clustering:</ins> Using non-uniform quantization grid to quantize the weights and activations to match their distributions.[*](#experimental-features)
 * <ins>Mixed-precision search:</ins> Assigning quantization bit-width per layer (for weights/activations), based on the layer's sensitivity to different bit-widths.
 * <ins>Visualization:</ins> You can use TensorBoard to observe useful information for troubleshooting the quantized model's performance (for example, the model in different phases of the quantization, collected statistics, similarity between layers of the float and quantized model and bit-width configuration for mixed-precision quantization). For more details, please read the [visualization documentation](https://sony.github.io/model_optimization/docs/guidelines/visualization.html).
+* <ins>Target Platform Capabilities:</ins> The Target Platform Capabilities (TPC) describes the target platform (an edge device with dedicated hardware). For more details, please read the [TPC README](model_compression_toolkit/target_platform_capabilities/README.md).
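The README above lists several quantization schemes (Symmetric and Uniform among them) whose thresholds are then tuned by the parameter search. As a minimal sketch of one common combination, symmetric quantization with a power-of-two threshold (illustrative only; names and rounding details are assumptions, not MCT's implementation):

```python
import numpy as np

def pot_symmetric_quantize(x: np.ndarray, n_bits: int = 8):
    """Symmetric quantization onto a signed uniform grid whose range
    is the smallest power of two covering the tensor's absolute range."""
    threshold = 2.0 ** np.ceil(np.log2(np.abs(x).max()))
    levels = 2 ** (n_bits - 1)            # signed grid: [-levels, levels - 1]
    scale = threshold / levels
    q = np.clip(np.round(x / scale), -levels, levels - 1)
    return q * scale, threshold

x = np.array([-1.3, -0.4, 0.0, 0.7, 1.1])
xq, t = pot_symmetric_quantize(x)  # t == 2.0, the smallest power of two >= 1.3
```

The per-element error is bounded by half a grid step (scale / 2); the threshold-search metrics named above (Mean-Square-Error, No-Clipping, ...) differ in how they trade this rounding noise against clipping.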
 #### Experimental features
@@ -553,7 +557,7 @@ MCT aims at keeping a more up-to-date fork and welcomes contributions from anyone.

 %prep
-%autosetup -n mct-nightly-1.8.0.31032023.post405
+%autosetup -n mct-nightly-1.8.0.31052023.post402

 %build
 %py3_build
@@ -567,20 +571,20 @@ if [ -d example ]; then cp -arf example %{buildroot}/%{_pkgdocdir}; fi
 if [ -d examples ]; then cp -arf examples %{buildroot}/%{_pkgdocdir}; fi
 pushd %{buildroot}
 if [ -d usr/lib ]; then
-    find usr/lib -type f -printf "/%h/%f\n" >> filelist.lst
+    find usr/lib -type f -printf "\"/%h/%f\"\n" >> filelist.lst
 fi
 if [ -d usr/lib64 ]; then
-    find usr/lib64 -type f -printf "/%h/%f\n" >> filelist.lst
+    find usr/lib64 -type f -printf "\"/%h/%f\"\n" >> filelist.lst
 fi
 if [ -d usr/bin ]; then
-    find usr/bin -type f -printf "/%h/%f\n" >> filelist.lst
+    find usr/bin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
 fi
 if [ -d usr/sbin ]; then
-    find usr/sbin -type f -printf "/%h/%f\n" >> filelist.lst
+    find usr/sbin -type f -printf "\"/%h/%f\"\n" >> filelist.lst
 fi
 touch doclist.lst
 if [ -d usr/share/man ]; then
-    find usr/share/man -type f -printf "/%h/%f.gz\n" >> doclist.lst
+    find usr/share/man -type f -printf "\"/%h/%f.gz\"\n" >> doclist.lst
 fi
 popd
 mv %{buildroot}/filelist.lst .
@@ -593,5 +597,5 @@ mv %{buildroot}/doclist.lst .
 %{_docdir}/*

 %changelog
-* Tue May 30 2023 Python_Bot <Python_Bot@openeuler.org> - 1.8.0.31032023.post405-1
+* Thu Jun 08 2023 Python_Bot <Python_Bot@openeuler.org> - 1.8.0.31052023.post402-1
 - Package Spec generated
diff --git a/sources b/sources
@@ -1 +1 @@
-e66c172fbd203392b88090655205fb09 mct-nightly-1.8.0.31032023.post405.tar.gz
+b0eba10a3892618193870ffa25d6c492 mct-nightly-1.8.0.31052023.post402.tar.gz
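The %install hunk above changes each `find ... -printf` so that emitted paths are wrapped in double quotes before being appended to filelist.lst. A small illustrative sketch (Python, not part of the spec) of why quoting matters: a path containing a space splits into two tokens under whitespace tokenization, roughly how an unquoted entry in a `%files -f` list would be misread, while the quoted form parses back as one path:

```python
import shlex

# A hypothetical installed path containing a space.
path = "/usr/lib/python3.9/site-packages/pkg/with space.txt"
quoted = f'"{path}"'  # what the updated -printf format emits

print(path.split())         # unquoted: two broken tokens
print(shlex.split(quoted))  # quoted: parsed back as one path
```

The old format produced one bare path per line; the new one produces `"/path"` per line, so file list entries with spaces survive packaging.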