The old FP16 training code in model.fit() was replaced by PyTorch 1.6.0 automatic mixed precision (AMP). Pass model.fit(use_amp=True) to enable it. On suitable GPUs, this gives a significant speed-up while requiring less memory.
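A minimal sketch of enabling AMP during fine-tuning; the model name and training pairs below are illustrative and not part of this release:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, SentencesDataset, InputExample, losses

model = SentenceTransformer("bert-base-nli-mean-tokens")

# Hypothetical training pairs with similarity labels
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=0.9),
    InputExample(texts=["A man is eating food.", "A girl plays the violin."], label=0.1),
]
train_dataset = SentencesDataset(train_examples, model=model)
train_dataloader = DataLoader(train_dataset, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# use_amp=True switches training to PyTorch automatic mixed precision
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    use_amp=True,
)
```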
Performance improvements in paraphrase mining & semantic search by replacing np.argpartition with torch.topk
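For intuition, a small sketch of the two ways to pick the top-k highest scores; the score vector here is synthetic:

```python
import numpy as np
import torch

top_k = 10
scores_np = np.random.rand(100_000)  # synthetic similarity scores

# Old approach: np.argpartition returns the top-k indices unsorted, CPU only
old_idx = np.argpartition(-scores_np, top_k)[:top_k]

# New approach: torch.topk returns sorted top-k values and indices in one call,
# and can run directly on a GPU tensor holding the scores
values, new_idx = torch.topk(torch.from_numpy(scores_np), k=top_k, largest=True)
```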
If a model name is not found in the sentence-transformers repository, loading now falls back to the HuggingFace Transformers repository: the transformer model is downloaded from there and a mean-pooling layer is added on top.
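A sketch of what that fallback looks like in practice; 'bert-base-uncased' is a plain Transformers model, not a sentence-transformers one:

```python
from sentence_transformers import SentenceTransformer

# 'bert-base-uncased' does not exist in the sentence-transformers repository,
# so it is fetched from HuggingFace Transformers and wrapped with mean pooling
model = SentenceTransformer("bert-base-uncased")
embeddings = model.encode(["This sentence is embedded with mean pooling."])
print(embeddings.shape)  # (1, 768) for a BERT-base model
```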
HuggingFace Transformers is pinned to version 3.0.2. The next release will be compatible with HuggingFace Transformers 3.1.0.
Several bugfixes: downloading of files, multi-GPU encoding.
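A minimal sketch, assuming multi-GPU encoding goes through the multi-process encoding API (start_multi_process_pool / encode_multi_process); the corpus and model name are illustrative:

```python
from sentence_transformers import SentenceTransformer

if __name__ == "__main__":
    model = SentenceTransformer("distilbert-base-nli-stsb-mean-tokens")
    sentences = ["This is sentence {}".format(i) for i in range(10000)]  # synthetic corpus

    # Spawns one worker process per available GPU, encodes the corpus in
    # chunks across the workers, then shuts the pool down
    pool = model.start_multi_process_pool()
    embeddings = model.encode_multi_process(sentences, pool)
    model.stop_multi_process_pool(pool)
```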