update links for removed notebooks in main (openvinotoolkit#1988)
eaidova authored May 1, 2024
1 parent ef554db commit 7e329c4
Showing 23 changed files with 289 additions and 75 deletions.
@@ -8,7 +8,6 @@
"source": [
"# Quantize Data2Vec Speech Recognition Model using NNCF PTQ API\n",
"\n",
"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/speech-recognition-quantization/speech-recognition-quantization-data2vec.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
"This tutorial demonstrates how to use the NNCF (Neural Network Compression Framework) 8-bit quantization in post-training mode (without the fine-tuning pipeline) to optimize the speech recognition model, known as [Data2Vec](https://arxiv.org/abs/2202.03555) for the high-speed inference via OpenVINO™ Toolkit. This notebook uses a fine-tuned [data2vec-audio-base-960h](https://huggingface.co/facebook/data2vec-audio-base-960h) [PyTorch](https://pytorch.org/) model trained on the [LibriSpeech ASR corpus](https://www.openslr.org/12). The tutorial is designed to be extendable to custom models and datasets. It consists of the following steps:\n",
"\n",
"- Download and prepare model.\n",
@@ -263,6 +262,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "0bb514d4-2d00-4a8c-a858-76730c59e3f4",
"metadata": {},
@@ -1124,4 +1124,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
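The notebook touched above applies NNCF's 8-bit post-training quantization. The underlying arithmetic — mapping float weights onto an int8 grid via a scale and zero point — can be sketched in plain Python (this illustrates the idea only; it is not the NNCF implementation, and the sample weights are made up):

```python
def quantize_int8(values):
    """Affine 8-bit quantization: map floats onto the int8 range [-128, 127]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0  # guard against a constant tensor
    zero_point = round(-128 - lo / scale)
    # Round each value to the nearest grid point, clamped to int8 bounds
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Map int8 values back to floats; the round-trip error is at most ~scale/2."""
    return [(x - zero_point) * scale for x in q]

weights = [0.0, 0.5, -1.2, 3.3, 2.7]
q, scale, zp = quantize_int8(weights)
restored = dequantize_int8(q, scale, zp)
```

NNCF's PTQ API performs this kind of transformation per-layer using calibration data, without any fine-tuning pipeline.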
22 changes: 19 additions & 3 deletions notebooks/109-performance-tricks/109-latency-tricks.ipynb
@@ -1,12 +1,12 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Performance tricks in OpenVINO for latency mode\n",
"\n",
"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/performance-tricks/latency-tricks.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
"\n",
"The goal of this notebook is to provide a step-by-step tutorial for improving performance for inferencing in a latency mode. Low latency is especially desired in real-time applications when the results are needed as soon as possible after the data appears. This notebook assumes computer vision workflow and uses [YOLOv5n](https://github.com/ultralytics/yolov5) model. We will simulate a camera application that provides frames one by one.\n",
"\n",
@@ -26,7 +26,8 @@
"\n",
"\n",
"\n",
"#### Table of contents:\n\n",
"#### Table of contents:\n",
"\n",
"- [Prerequisites](#Prerequisites)\n",
"- [Data](#Data)\n",
"- [Model](#Model)\n",
@@ -47,6 +48,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -108,6 +110,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -169,6 +172,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -219,6 +223,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -262,6 +267,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -314,6 +320,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -424,6 +431,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -477,6 +485,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -547,6 +556,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -600,6 +610,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -651,6 +662,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -702,6 +714,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -749,6 +762,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -800,6 +814,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -873,6 +888,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -924,4 +940,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}
23 changes: 20 additions & 3 deletions notebooks/109-performance-tricks/109-throughput-tricks.ipynb
@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -11,7 +12,6 @@
"source": [
"# Performance tricks in OpenVINO for throughput mode\n",
"\n",
"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/performance-tricks/throughput-tricks.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
"\n",
"The goal of this notebook is to provide a step-by-step tutorial for improving performance for inferencing in a throughput mode. High throughput is especially desired in applications when the results are not expected to appear as soon as possible but to lower the whole processing time. This notebook assumes computer vision workflow and uses [YOLOv5n](https://github.com/ultralytics/yolov5) model. We will simulate a video processing application that has access to all frames at once (e.g. video editing).\n",
"\n",
@@ -28,7 +28,8 @@
"A similar notebook focused on the latency mode is available [here](109-latency-tricks.ipynb).\n",
"\n",
"\n",
"#### Table of contents:\n\n",
"#### Table of contents:\n",
"\n",
"- [Prerequisites](#Prerequisites)\n",
"- [Data](#Data)\n",
"- [Model](#Model)\n",
@@ -50,6 +51,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -104,6 +106,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -179,6 +182,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -238,6 +242,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -290,6 +295,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -356,6 +362,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -475,6 +482,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -537,6 +545,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -621,6 +630,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -708,6 +718,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -760,6 +771,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -816,6 +828,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -875,6 +888,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -931,6 +945,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -985,6 +1000,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -1070,6 +1086,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"collapsed": false,
@@ -1126,4 +1143,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}
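The two performance-tricks notebooks above center on OpenVINO's high-level performance hints, which let the runtime choose stream and thread counts for the chosen goal. A minimal configuration sketch (assuming the `openvino` package is installed; `model.xml` is a placeholder path, not a file from this repository):

```python
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder model path

# Latency mode: minimize the time of a single inference request,
# as in the camera-style frame-by-frame scenario.
latency_compiled = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "LATENCY"})

# Throughput mode: maximize total frames processed per second,
# as in the video-editing scenario with all frames available at once.
throughput_compiled = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})

# Throughput mode pairs naturally with an async queue so several
# inference requests run in parallel.
infer_queue = ov.AsyncInferQueue(throughput_compiled)
```

This is a configuration sketch, not a benchmark; the notebooks themselves measure each trick step by step.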
@@ -7,8 +7,6 @@
"source": [
"# Live Inference and Benchmark CT-scan Data with OpenVINO™\n",
"\n",
"> **Note:** This notebook has been moved to a new branch named \"latest\". [Click here](https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/ct-segmentation-quantize/ct-scan-live-inference.ipynb) to get the most updated version of the notebook. This branch is deprecated.\n",
"\n",
"## Kidney Segmentation with PyTorch Lightning and OpenVINO™ - Part 4 \n",
"\n",
"This tutorial is a part of a series on how to train, optimize, quantize and show live inference on a medical segmentation model. The goal is to accelerate inference on a kidney segmentation model. The [UNet](https://arxiv.org/abs/1505.04597) model is trained from scratch, and the data is from [Kits19](https://github.com/neheller/kits19).\n",
@@ -666,4 +664,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}