Update ReleaseNotes.md with changes made by Nikita S.
- For weight compression, align quantization scales to always be saved in FP16 precision regardless of the input model weights precision. (openvinotoolkit#2596, openvinotoolkit#2508)
- (UX) Expose the `OverflowFix`, `AdvancedSmoothQuantParameters` and `AdvancedBiasCorrectionParameters` classes so that they can be imported as `nncf.OverflowFix`, `nncf.AdvancedSmoothQuantParameters` and `nncf.AdvancedBiasCorrectionParameters`; see the sketch after this list. (openvinotoolkit#2608, openvinotoolkit#2624)
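A minimal sketch of how the newly exposed classes might be used, assuming the current `nncf.quantize` keyword `advanced_parameters` and the field names `overflow_fix`, `smooth_quant_alphas` and `bias_correction_params`; the `model` and `calibration_dataset` objects are hypothetical placeholders:

```python
# Sketch only: the advanced-parameter classes are now reachable from the
# top-level nncf namespace instead of nncf.quantization.advanced_parameters.
# Field and keyword names below are assumptions based on the current API.
import nncf
from nncf.quantization.advanced_parameters import AdvancedQuantizationParameters

advanced_params = AdvancedQuantizationParameters(
    overflow_fix=nncf.OverflowFix.FIRST_LAYER,
    smooth_quant_alphas=nncf.AdvancedSmoothQuantParameters(matmul=0.95),
    bias_correction_params=nncf.AdvancedBiasCorrectionParameters(apply_for_all_nodes=False),
)

quantized_model = nncf.quantize(
    model,                # hypothetical: an ov.Model or torch.nn.Module
    calibration_dataset,  # hypothetical: an nncf.Dataset built from a dataloader
    advanced_parameters=advanced_params,
)
```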
nikita-savelyevv authored Apr 19, 2024
1 parent d6927c9 commit 7f78036
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions ReleaseNotes.md
@@ -7,13 +7,13 @@ Post-training Quantization:
 - Breaking changes:
   - ...
 - General:
-  - ...
+  - For weight compression, align quantization scales to always be saved in FP16 precision regardless of the input model weights precision.
 - Features:
   - ...
 - Fixes:
   - (Torch) An epsilon value is added to quantization parameters to prevent division by zero.
 - Improvements:
-  - ...
+  - (UX) Expose `OverflowFix`, `AdvancedSmoothQuantParameters` and `AdvancedBiasCorrectionParameters` classes to be available for import as `nncf.OverflowFix`, `nncf.AdvancedSmoothQuantParameters`, `nncf.AdvancedBiasCorrectionParameters`.
 - Deprecations/Removals:
   - ...
 - Tutorials:
@@ -26,7 +26,7 @@ Compression-aware training:
 - Breaking changes:
   - ...
 - General:
-  - (Torch) `nncf.quantize` funciton could be used as quantization initialization for Quantization-aware training.
+  - (Torch) `nncf.quantize` function could be used as quantization initialization for Quantization-aware training.
 - Features:
   - ...
 - Fixes:
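The second hunk above only fixes a typo in the note that `nncf.quantize` can serve as quantization initialization for quantization-aware training. A minimal sketch of that flow for the Torch backend, where `model`, `calib_loader` and `train_loader` are hypothetical placeholders:

```python
import nncf
import torch
import torch.nn.functional as F

# Hypothetical placeholders: a torch.nn.Module and two torch DataLoaders.
calibration_dataset = nncf.Dataset(calib_loader, transform_func=lambda batch: batch[0])

# 1) Post-training quantization inserts quantizers calibrated on the dataset.
quantized_model = nncf.quantize(model, calibration_dataset)

# 2) The returned model can then be fine-tuned with an ordinary training loop (QAT).
optimizer = torch.optim.SGD(quantized_model.parameters(), lr=1e-4)
quantized_model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = F.cross_entropy(quantized_model(images), labels)
    loss.backward()
    optimizer.step()
```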
