
v0.3.5 - Automatic Mixed Precision & Bugfixes

@nreimers released this 01 Sep 13:09
  • The old FP16 training code in model.fit() was replaced with PyTorch 1.6.0 automatic mixed precision (AMP). Set model.fit(use_amp=True) to enable it (see the first sketch after this list). On suitable GPUs, this leads to a significant speed-up while requiring less memory.
  • Performance improvements in paraphrase mining & semantic search: np.argpartition was replaced with torch.topk (see the second sketch below).
  • If a model name is not found in the sentence-transformers repository, loading now falls back to the huggingface transformers repository and the model is created with mean pooling (see the third sketch below).
  • huggingface transformers is pinned to version 3.0.2. The next release will be compatible with huggingface transformers 3.1.0.
  • Several bugfixes: downloading of files, multi-GPU encoding.
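
A minimal sketch of AMP training via the new use_amp flag. The model name, toy training pairs, and loss function below are placeholders, not part of this release:

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Load a pre-trained model (placeholder name; any sentence-transformer works)
model = SentenceTransformer("distilbert-base-nli-mean-tokens")

# Toy training pairs with similarity labels (placeholders)
train_examples = [
    InputExample(texts=["A dog plays in the park", "A dog is playing outside"], label=0.9),
    InputExample(texts=["A dog plays in the park", "The stock market fell"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.CosineSimilarityLoss(model)

# use_amp=True switches training to PyTorch automatic mixed precision
# (requires PyTorch >= 1.6.0 and a GPU with FP16 support for a speed-up)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    use_amp=True,
)
```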
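A sketch of the technique behind the second item: selecting the top-k entries of a similarity matrix with torch.topk instead of np.argpartition, which keeps the computation on the GPU. The shapes and k are illustrative, not the library's internals:

```python
import torch

# Placeholder embeddings; in practice these come from model.encode(...)
corpus_embeddings = torch.randn(10000, 768)
query_embeddings = torch.randn(5, 768)

# Cosine similarity as a normalized dot product
corpus_norm = torch.nn.functional.normalize(corpus_embeddings, p=2, dim=1)
query_norm = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
scores = query_norm @ corpus_norm.t()  # (num_queries, corpus_size)

# One torch.topk call yields the k highest scores and their indices,
# and runs on the GPU when the tensors live there
top_scores, top_indices = torch.topk(scores, k=10, dim=1, largest=True, sorted=True)
```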
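A sketch of the new fallback behavior, alongside a roughly equivalent explicit construction using the library's documented modules API; bert-base-uncased is just an example checkpoint:

```python
from sentence_transformers import SentenceTransformer, models

# A name missing from the sentence-transformers repository now falls back to
# the huggingface transformers repository and gets mean pooling on top:
model = SentenceTransformer("bert-base-uncased")

# Roughly equivalent explicit construction with the modules API:
word_embedding_model = models.Transformer("bert-base-uncased")
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```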